Dec  6 00:42:21 np0005548731 kernel: Linux version 5.14.0-645.el9.x86_64 (mockbuild@x86-05.stream.rdu2.redhat.com) (gcc (GCC) 11.5.0 20240719 (Red Hat 11.5.0-14), GNU ld version 2.35.2-68.el9) #1 SMP PREEMPT_DYNAMIC Fri Nov 28 14:01:17 UTC 2025
Dec  6 00:42:21 np0005548731 kernel: The list of certified hardware and cloud instances for Red Hat Enterprise Linux 9 can be viewed at the Red Hat Ecosystem Catalog, https://catalog.redhat.com.
Dec  6 00:42:21 np0005548731 kernel: Command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-645.el9.x86_64 root=UUID=fcf6b761-831a-48a7-9f5f-068b5063763f ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Dec  6 00:42:21 np0005548731 kernel: BIOS-provided physical RAM map:
Dec  6 00:42:21 np0005548731 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Dec  6 00:42:21 np0005548731 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Dec  6 00:42:21 np0005548731 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Dec  6 00:42:21 np0005548731 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bffdafff] usable
Dec  6 00:42:21 np0005548731 kernel: BIOS-e820: [mem 0x00000000bffdb000-0x00000000bfffffff] reserved
Dec  6 00:42:21 np0005548731 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Dec  6 00:42:21 np0005548731 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Dec  6 00:42:21 np0005548731 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000023fffffff] usable
Dec  6 00:42:21 np0005548731 kernel: NX (Execute Disable) protection: active
Dec  6 00:42:21 np0005548731 kernel: APIC: Static calls initialized
Dec  6 00:42:21 np0005548731 kernel: SMBIOS 2.8 present.
Dec  6 00:42:21 np0005548731 kernel: DMI: OpenStack Foundation OpenStack Nova, BIOS 1.15.0-1 04/01/2014
Dec  6 00:42:21 np0005548731 kernel: Hypervisor detected: KVM
Dec  6 00:42:21 np0005548731 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Dec  6 00:42:21 np0005548731 kernel: kvm-clock: using sched offset of 4708081883 cycles
Dec  6 00:42:21 np0005548731 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Dec  6 00:42:21 np0005548731 kernel: tsc: Detected 2799.998 MHz processor
Dec  6 00:42:21 np0005548731 kernel: last_pfn = 0x240000 max_arch_pfn = 0x400000000
Dec  6 00:42:21 np0005548731 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Dec  6 00:42:21 np0005548731 kernel: x86/PAT: Configuration [0-7]: WB  WC  UC- UC  WB  WP  UC- WT  
Dec  6 00:42:21 np0005548731 kernel: last_pfn = 0xbffdb max_arch_pfn = 0x400000000
Dec  6 00:42:21 np0005548731 kernel: found SMP MP-table at [mem 0x000f5ae0-0x000f5aef]
Dec  6 00:42:21 np0005548731 kernel: Using GB pages for direct mapping
Dec  6 00:42:21 np0005548731 kernel: RAMDISK: [mem 0x2d472000-0x32a30fff]
Dec  6 00:42:21 np0005548731 kernel: ACPI: Early table checksum verification disabled
Dec  6 00:42:21 np0005548731 kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS )
Dec  6 00:42:21 np0005548731 kernel: ACPI: RSDT 0x00000000BFFE16BD 000030 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Dec  6 00:42:21 np0005548731 kernel: ACPI: FACP 0x00000000BFFE1571 000074 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Dec  6 00:42:21 np0005548731 kernel: ACPI: DSDT 0x00000000BFFDFC80 0018F1 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Dec  6 00:42:21 np0005548731 kernel: ACPI: FACS 0x00000000BFFDFC40 000040
Dec  6 00:42:21 np0005548731 kernel: ACPI: APIC 0x00000000BFFE15E5 0000B0 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Dec  6 00:42:21 np0005548731 kernel: ACPI: WAET 0x00000000BFFE1695 000028 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Dec  6 00:42:21 np0005548731 kernel: ACPI: Reserving FACP table memory at [mem 0xbffe1571-0xbffe15e4]
Dec  6 00:42:21 np0005548731 kernel: ACPI: Reserving DSDT table memory at [mem 0xbffdfc80-0xbffe1570]
Dec  6 00:42:21 np0005548731 kernel: ACPI: Reserving FACS table memory at [mem 0xbffdfc40-0xbffdfc7f]
Dec  6 00:42:21 np0005548731 kernel: ACPI: Reserving APIC table memory at [mem 0xbffe15e5-0xbffe1694]
Dec  6 00:42:21 np0005548731 kernel: ACPI: Reserving WAET table memory at [mem 0xbffe1695-0xbffe16bc]
Dec  6 00:42:21 np0005548731 kernel: No NUMA configuration found
Dec  6 00:42:21 np0005548731 kernel: Faking a node at [mem 0x0000000000000000-0x000000023fffffff]
Dec  6 00:42:21 np0005548731 kernel: NODE_DATA(0) allocated [mem 0x23ffd5000-0x23fffffff]
Dec  6 00:42:21 np0005548731 kernel: crashkernel reserved: 0x00000000af000000 - 0x00000000bf000000 (256 MB)
Dec  6 00:42:21 np0005548731 kernel: Zone ranges:
Dec  6 00:42:21 np0005548731 kernel:  DMA      [mem 0x0000000000001000-0x0000000000ffffff]
Dec  6 00:42:21 np0005548731 kernel:  DMA32    [mem 0x0000000001000000-0x00000000ffffffff]
Dec  6 00:42:21 np0005548731 kernel:  Normal   [mem 0x0000000100000000-0x000000023fffffff]
Dec  6 00:42:21 np0005548731 kernel:  Device   empty
Dec  6 00:42:21 np0005548731 kernel: Movable zone start for each node
Dec  6 00:42:21 np0005548731 kernel: Early memory node ranges
Dec  6 00:42:21 np0005548731 kernel:  node   0: [mem 0x0000000000001000-0x000000000009efff]
Dec  6 00:42:21 np0005548731 kernel:  node   0: [mem 0x0000000000100000-0x00000000bffdafff]
Dec  6 00:42:21 np0005548731 kernel:  node   0: [mem 0x0000000100000000-0x000000023fffffff]
Dec  6 00:42:21 np0005548731 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000023fffffff]
Dec  6 00:42:21 np0005548731 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Dec  6 00:42:21 np0005548731 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Dec  6 00:42:21 np0005548731 kernel: On node 0, zone Normal: 37 pages in unavailable ranges
Dec  6 00:42:21 np0005548731 kernel: ACPI: PM-Timer IO Port: 0x608
Dec  6 00:42:21 np0005548731 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Dec  6 00:42:21 np0005548731 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Dec  6 00:42:21 np0005548731 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Dec  6 00:42:21 np0005548731 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Dec  6 00:42:21 np0005548731 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Dec  6 00:42:21 np0005548731 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Dec  6 00:42:21 np0005548731 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Dec  6 00:42:21 np0005548731 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Dec  6 00:42:21 np0005548731 kernel: TSC deadline timer available
Dec  6 00:42:21 np0005548731 kernel: CPU topo: Max. logical packages:   8
Dec  6 00:42:21 np0005548731 kernel: CPU topo: Max. logical dies:       8
Dec  6 00:42:21 np0005548731 kernel: CPU topo: Max. dies per package:   1
Dec  6 00:42:21 np0005548731 kernel: CPU topo: Max. threads per core:   1
Dec  6 00:42:21 np0005548731 kernel: CPU topo: Num. cores per package:     1
Dec  6 00:42:21 np0005548731 kernel: CPU topo: Num. threads per package:   1
Dec  6 00:42:21 np0005548731 kernel: CPU topo: Allowing 8 present CPUs plus 0 hotplug CPUs
Dec  6 00:42:21 np0005548731 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Dec  6 00:42:21 np0005548731 kernel: PM: hibernation: Registered nosave memory: [mem 0x00000000-0x00000fff]
Dec  6 00:42:21 np0005548731 kernel: PM: hibernation: Registered nosave memory: [mem 0x0009f000-0x0009ffff]
Dec  6 00:42:21 np0005548731 kernel: PM: hibernation: Registered nosave memory: [mem 0x000a0000-0x000effff]
Dec  6 00:42:21 np0005548731 kernel: PM: hibernation: Registered nosave memory: [mem 0x000f0000-0x000fffff]
Dec  6 00:42:21 np0005548731 kernel: PM: hibernation: Registered nosave memory: [mem 0xbffdb000-0xbfffffff]
Dec  6 00:42:21 np0005548731 kernel: PM: hibernation: Registered nosave memory: [mem 0xc0000000-0xfeffbfff]
Dec  6 00:42:21 np0005548731 kernel: PM: hibernation: Registered nosave memory: [mem 0xfeffc000-0xfeffffff]
Dec  6 00:42:21 np0005548731 kernel: PM: hibernation: Registered nosave memory: [mem 0xff000000-0xfffbffff]
Dec  6 00:42:21 np0005548731 kernel: PM: hibernation: Registered nosave memory: [mem 0xfffc0000-0xffffffff]
Dec  6 00:42:21 np0005548731 kernel: [mem 0xc0000000-0xfeffbfff] available for PCI devices
Dec  6 00:42:21 np0005548731 kernel: Booting paravirtualized kernel on KVM
Dec  6 00:42:21 np0005548731 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Dec  6 00:42:21 np0005548731 kernel: setup_percpu: NR_CPUS:8192 nr_cpumask_bits:8 nr_cpu_ids:8 nr_node_ids:1
Dec  6 00:42:21 np0005548731 kernel: percpu: Embedded 64 pages/cpu s225280 r8192 d28672 u262144
Dec  6 00:42:21 np0005548731 kernel: kvm-guest: PV spinlocks disabled, no host support
Dec  6 00:42:21 np0005548731 kernel: Kernel command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-645.el9.x86_64 root=UUID=fcf6b761-831a-48a7-9f5f-068b5063763f ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Dec  6 00:42:21 np0005548731 kernel: Unknown kernel command line parameters "BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-645.el9.x86_64", will be passed to user space.
Dec  6 00:42:21 np0005548731 kernel: random: crng init done
Dec  6 00:42:21 np0005548731 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Dec  6 00:42:21 np0005548731 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Dec  6 00:42:21 np0005548731 kernel: Fallback order for Node 0: 0 
Dec  6 00:42:21 np0005548731 kernel: Built 1 zonelists, mobility grouping on.  Total pages: 2064091
Dec  6 00:42:21 np0005548731 kernel: Policy zone: Normal
Dec  6 00:42:21 np0005548731 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Dec  6 00:42:21 np0005548731 kernel: software IO TLB: area num 8.
Dec  6 00:42:21 np0005548731 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=8, Nodes=1
Dec  6 00:42:21 np0005548731 kernel: ftrace: allocating 49335 entries in 193 pages
Dec  6 00:42:21 np0005548731 kernel: ftrace: allocated 193 pages with 3 groups
Dec  6 00:42:21 np0005548731 kernel: Dynamic Preempt: voluntary
Dec  6 00:42:21 np0005548731 kernel: rcu: Preemptible hierarchical RCU implementation.
Dec  6 00:42:21 np0005548731 kernel: rcu: 	RCU event tracing is enabled.
Dec  6 00:42:21 np0005548731 kernel: rcu: 	RCU restricting CPUs from NR_CPUS=8192 to nr_cpu_ids=8.
Dec  6 00:42:21 np0005548731 kernel: 	Trampoline variant of Tasks RCU enabled.
Dec  6 00:42:21 np0005548731 kernel: 	Rude variant of Tasks RCU enabled.
Dec  6 00:42:21 np0005548731 kernel: 	Tracing variant of Tasks RCU enabled.
Dec  6 00:42:21 np0005548731 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Dec  6 00:42:21 np0005548731 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=8
Dec  6 00:42:21 np0005548731 kernel: RCU Tasks: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Dec  6 00:42:21 np0005548731 kernel: RCU Tasks Rude: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Dec  6 00:42:21 np0005548731 kernel: RCU Tasks Trace: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Dec  6 00:42:21 np0005548731 kernel: NR_IRQS: 524544, nr_irqs: 488, preallocated irqs: 16
Dec  6 00:42:21 np0005548731 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Dec  6 00:42:21 np0005548731 kernel: kfence: initialized - using 2097152 bytes for 255 objects at 0x(____ptrval____)-0x(____ptrval____)
Dec  6 00:42:21 np0005548731 kernel: Console: colour VGA+ 80x25
Dec  6 00:42:21 np0005548731 kernel: printk: console [ttyS0] enabled
Dec  6 00:42:21 np0005548731 kernel: ACPI: Core revision 20230331
Dec  6 00:42:21 np0005548731 kernel: APIC: Switch to symmetric I/O mode setup
Dec  6 00:42:21 np0005548731 kernel: x2apic enabled
Dec  6 00:42:21 np0005548731 kernel: APIC: Switched APIC routing to: physical x2apic
Dec  6 00:42:21 np0005548731 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Dec  6 00:42:21 np0005548731 kernel: Calibrating delay loop (skipped) preset value.. 5599.99 BogoMIPS (lpj=2799998)
Dec  6 00:42:21 np0005548731 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Dec  6 00:42:21 np0005548731 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Dec  6 00:42:21 np0005548731 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Dec  6 00:42:21 np0005548731 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Dec  6 00:42:21 np0005548731 kernel: Spectre V2 : Mitigation: Retpolines
Dec  6 00:42:21 np0005548731 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Dec  6 00:42:21 np0005548731 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Dec  6 00:42:21 np0005548731 kernel: RETBleed: Mitigation: untrained return thunk
Dec  6 00:42:21 np0005548731 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Dec  6 00:42:21 np0005548731 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Dec  6 00:42:21 np0005548731 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Dec  6 00:42:21 np0005548731 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Dec  6 00:42:21 np0005548731 kernel: x86/bugs: return thunk changed
Dec  6 00:42:21 np0005548731 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Dec  6 00:42:21 np0005548731 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Dec  6 00:42:21 np0005548731 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Dec  6 00:42:21 np0005548731 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Dec  6 00:42:21 np0005548731 kernel: x86/fpu: xstate_offset[2]:  576, xstate_sizes[2]:  256
Dec  6 00:42:21 np0005548731 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Dec  6 00:42:21 np0005548731 kernel: Freeing SMP alternatives memory: 40K
Dec  6 00:42:21 np0005548731 kernel: pid_max: default: 32768 minimum: 301
Dec  6 00:42:21 np0005548731 kernel: LSM: initializing lsm=lockdown,capability,landlock,yama,integrity,selinux,bpf
Dec  6 00:42:21 np0005548731 kernel: landlock: Up and running.
Dec  6 00:42:21 np0005548731 kernel: Yama: becoming mindful.
Dec  6 00:42:21 np0005548731 kernel: SELinux:  Initializing.
Dec  6 00:42:21 np0005548731 kernel: LSM support for eBPF active
Dec  6 00:42:21 np0005548731 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Dec  6 00:42:21 np0005548731 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Dec  6 00:42:21 np0005548731 kernel: smpboot: CPU0: AMD EPYC-Rome Processor (family: 0x17, model: 0x31, stepping: 0x0)
Dec  6 00:42:21 np0005548731 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Dec  6 00:42:21 np0005548731 kernel: ... version:                0
Dec  6 00:42:21 np0005548731 kernel: ... bit width:              48
Dec  6 00:42:21 np0005548731 kernel: ... generic registers:      6
Dec  6 00:42:21 np0005548731 kernel: ... value mask:             0000ffffffffffff
Dec  6 00:42:21 np0005548731 kernel: ... max period:             00007fffffffffff
Dec  6 00:42:21 np0005548731 kernel: ... fixed-purpose events:   0
Dec  6 00:42:21 np0005548731 kernel: ... event mask:             000000000000003f
Dec  6 00:42:21 np0005548731 kernel: signal: max sigframe size: 1776
Dec  6 00:42:21 np0005548731 kernel: rcu: Hierarchical SRCU implementation.
Dec  6 00:42:21 np0005548731 kernel: rcu: 	Max phase no-delay instances is 400.
Dec  6 00:42:21 np0005548731 kernel: smp: Bringing up secondary CPUs ...
Dec  6 00:42:21 np0005548731 kernel: smpboot: x86: Booting SMP configuration:
Dec  6 00:42:21 np0005548731 kernel: .... node  #0, CPUs:      #1 #2 #3 #4 #5 #6 #7
Dec  6 00:42:21 np0005548731 kernel: smp: Brought up 1 node, 8 CPUs
Dec  6 00:42:21 np0005548731 kernel: smpboot: Total of 8 processors activated (44799.96 BogoMIPS)
Dec  6 00:42:21 np0005548731 kernel: node 0 deferred pages initialised in 250ms
Dec  6 00:42:21 np0005548731 kernel: Memory: 7763956K/8388068K available (16384K kernel code, 5795K rwdata, 13908K rodata, 4196K init, 7156K bss, 618204K reserved, 0K cma-reserved)
Dec  6 00:42:21 np0005548731 kernel: devtmpfs: initialized
Dec  6 00:42:21 np0005548731 kernel: x86/mm: Memory block size: 128MB
Dec  6 00:42:21 np0005548731 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Dec  6 00:42:21 np0005548731 kernel: futex hash table entries: 2048 (131072 bytes on 1 NUMA nodes, total 128 KiB, linear).
Dec  6 00:42:21 np0005548731 kernel: pinctrl core: initialized pinctrl subsystem
Dec  6 00:42:21 np0005548731 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Dec  6 00:42:21 np0005548731 kernel: DMA: preallocated 1024 KiB GFP_KERNEL pool for atomic allocations
Dec  6 00:42:21 np0005548731 kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Dec  6 00:42:21 np0005548731 kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Dec  6 00:42:21 np0005548731 kernel: audit: initializing netlink subsys (disabled)
Dec  6 00:42:21 np0005548731 kernel: audit: type=2000 audit(1764999738.507:1): state=initialized audit_enabled=0 res=1
Dec  6 00:42:21 np0005548731 kernel: thermal_sys: Registered thermal governor 'fair_share'
Dec  6 00:42:21 np0005548731 kernel: thermal_sys: Registered thermal governor 'step_wise'
Dec  6 00:42:21 np0005548731 kernel: thermal_sys: Registered thermal governor 'user_space'
Dec  6 00:42:21 np0005548731 kernel: cpuidle: using governor menu
Dec  6 00:42:21 np0005548731 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Dec  6 00:42:21 np0005548731 kernel: PCI: Using configuration type 1 for base access
Dec  6 00:42:21 np0005548731 kernel: PCI: Using configuration type 1 for extended access
Dec  6 00:42:21 np0005548731 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Dec  6 00:42:21 np0005548731 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Dec  6 00:42:21 np0005548731 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Dec  6 00:42:21 np0005548731 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Dec  6 00:42:21 np0005548731 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Dec  6 00:42:21 np0005548731 kernel: Demotion targets for Node 0: null
Dec  6 00:42:21 np0005548731 kernel: cryptd: max_cpu_qlen set to 1000
Dec  6 00:42:21 np0005548731 kernel: ACPI: Added _OSI(Module Device)
Dec  6 00:42:21 np0005548731 kernel: ACPI: Added _OSI(Processor Device)
Dec  6 00:42:21 np0005548731 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Dec  6 00:42:21 np0005548731 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Dec  6 00:42:21 np0005548731 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Dec  6 00:42:21 np0005548731 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Dec  6 00:42:21 np0005548731 kernel: ACPI: Interpreter enabled
Dec  6 00:42:21 np0005548731 kernel: ACPI: PM: (supports S0 S3 S4 S5)
Dec  6 00:42:21 np0005548731 kernel: ACPI: Using IOAPIC for interrupt routing
Dec  6 00:42:21 np0005548731 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Dec  6 00:42:21 np0005548731 kernel: PCI: Using E820 reservations for host bridge windows
Dec  6 00:42:21 np0005548731 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Dec  6 00:42:21 np0005548731 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Dec  6 00:42:21 np0005548731 kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI EDR HPX-Type3]
Dec  6 00:42:21 np0005548731 kernel: acpiphp: Slot [3] registered
Dec  6 00:42:21 np0005548731 kernel: acpiphp: Slot [4] registered
Dec  6 00:42:21 np0005548731 kernel: acpiphp: Slot [5] registered
Dec  6 00:42:21 np0005548731 kernel: acpiphp: Slot [6] registered
Dec  6 00:42:21 np0005548731 kernel: acpiphp: Slot [7] registered
Dec  6 00:42:21 np0005548731 kernel: acpiphp: Slot [8] registered
Dec  6 00:42:21 np0005548731 kernel: acpiphp: Slot [9] registered
Dec  6 00:42:21 np0005548731 kernel: acpiphp: Slot [10] registered
Dec  6 00:42:21 np0005548731 kernel: acpiphp: Slot [11] registered
Dec  6 00:42:21 np0005548731 kernel: acpiphp: Slot [12] registered
Dec  6 00:42:21 np0005548731 kernel: acpiphp: Slot [13] registered
Dec  6 00:42:21 np0005548731 kernel: acpiphp: Slot [14] registered
Dec  6 00:42:21 np0005548731 kernel: acpiphp: Slot [15] registered
Dec  6 00:42:21 np0005548731 kernel: acpiphp: Slot [16] registered
Dec  6 00:42:21 np0005548731 kernel: acpiphp: Slot [17] registered
Dec  6 00:42:21 np0005548731 kernel: acpiphp: Slot [18] registered
Dec  6 00:42:21 np0005548731 kernel: acpiphp: Slot [19] registered
Dec  6 00:42:21 np0005548731 kernel: acpiphp: Slot [20] registered
Dec  6 00:42:21 np0005548731 kernel: acpiphp: Slot [21] registered
Dec  6 00:42:21 np0005548731 kernel: acpiphp: Slot [22] registered
Dec  6 00:42:21 np0005548731 kernel: acpiphp: Slot [23] registered
Dec  6 00:42:21 np0005548731 kernel: acpiphp: Slot [24] registered
Dec  6 00:42:21 np0005548731 kernel: acpiphp: Slot [25] registered
Dec  6 00:42:21 np0005548731 kernel: acpiphp: Slot [26] registered
Dec  6 00:42:21 np0005548731 kernel: acpiphp: Slot [27] registered
Dec  6 00:42:21 np0005548731 kernel: acpiphp: Slot [28] registered
Dec  6 00:42:21 np0005548731 kernel: acpiphp: Slot [29] registered
Dec  6 00:42:21 np0005548731 kernel: acpiphp: Slot [30] registered
Dec  6 00:42:21 np0005548731 kernel: acpiphp: Slot [31] registered
Dec  6 00:42:21 np0005548731 kernel: PCI host bridge to bus 0000:00
Dec  6 00:42:21 np0005548731 kernel: pci_bus 0000:00: root bus resource [io  0x0000-0x0cf7 window]
Dec  6 00:42:21 np0005548731 kernel: pci_bus 0000:00: root bus resource [io  0x0d00-0xffff window]
Dec  6 00:42:21 np0005548731 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Dec  6 00:42:21 np0005548731 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Dec  6 00:42:21 np0005548731 kernel: pci_bus 0000:00: root bus resource [mem 0x240000000-0x2bfffffff window]
Dec  6 00:42:21 np0005548731 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Dec  6 00:42:21 np0005548731 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint
Dec  6 00:42:21 np0005548731 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 conventional PCI endpoint
Dec  6 00:42:21 np0005548731 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 conventional PCI endpoint
Dec  6 00:42:21 np0005548731 kernel: pci 0000:00:01.1: BAR 4 [io  0xc140-0xc14f]
Dec  6 00:42:21 np0005548731 kernel: pci 0000:00:01.1: BAR 0 [io  0x01f0-0x01f7]: legacy IDE quirk
Dec  6 00:42:21 np0005548731 kernel: pci 0000:00:01.1: BAR 1 [io  0x03f6]: legacy IDE quirk
Dec  6 00:42:21 np0005548731 kernel: pci 0000:00:01.1: BAR 2 [io  0x0170-0x0177]: legacy IDE quirk
Dec  6 00:42:21 np0005548731 kernel: pci 0000:00:01.1: BAR 3 [io  0x0376]: legacy IDE quirk
Dec  6 00:42:21 np0005548731 kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300 conventional PCI endpoint
Dec  6 00:42:21 np0005548731 kernel: pci 0000:00:01.2: BAR 4 [io  0xc100-0xc11f]
Dec  6 00:42:21 np0005548731 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 conventional PCI endpoint
Dec  6 00:42:21 np0005548731 kernel: pci 0000:00:01.3: quirk: [io  0x0600-0x063f] claimed by PIIX4 ACPI
Dec  6 00:42:21 np0005548731 kernel: pci 0000:00:01.3: quirk: [io  0x0700-0x070f] claimed by PIIX4 SMB
Dec  6 00:42:21 np0005548731 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 conventional PCI endpoint
Dec  6 00:42:21 np0005548731 kernel: pci 0000:00:02.0: BAR 0 [mem 0xfe000000-0xfe7fffff pref]
Dec  6 00:42:21 np0005548731 kernel: pci 0000:00:02.0: BAR 2 [mem 0xfe800000-0xfe803fff 64bit pref]
Dec  6 00:42:21 np0005548731 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfeb90000-0xfeb90fff]
Dec  6 00:42:21 np0005548731 kernel: pci 0000:00:02.0: ROM [mem 0xfeb80000-0xfeb8ffff pref]
Dec  6 00:42:21 np0005548731 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Dec  6 00:42:21 np0005548731 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Dec  6 00:42:21 np0005548731 kernel: pci 0000:00:03.0: BAR 0 [io  0xc080-0xc0bf]
Dec  6 00:42:21 np0005548731 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfeb91000-0xfeb91fff]
Dec  6 00:42:21 np0005548731 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe804000-0xfe807fff 64bit pref]
Dec  6 00:42:21 np0005548731 kernel: pci 0000:00:03.0: ROM [mem 0xfeb00000-0xfeb7ffff pref]
Dec  6 00:42:21 np0005548731 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Dec  6 00:42:21 np0005548731 kernel: pci 0000:00:04.0: BAR 0 [io  0xc000-0xc07f]
Dec  6 00:42:21 np0005548731 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfeb92000-0xfeb92fff]
Dec  6 00:42:21 np0005548731 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe808000-0xfe80bfff 64bit pref]
Dec  6 00:42:21 np0005548731 kernel: pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00 conventional PCI endpoint
Dec  6 00:42:21 np0005548731 kernel: pci 0000:00:05.0: BAR 0 [io  0xc0c0-0xc0ff]
Dec  6 00:42:21 np0005548731 kernel: pci 0000:00:05.0: BAR 4 [mem 0xfe80c000-0xfe80ffff 64bit pref]
Dec  6 00:42:21 np0005548731 kernel: pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Dec  6 00:42:21 np0005548731 kernel: pci 0000:00:06.0: BAR 0 [io  0xc120-0xc13f]
Dec  6 00:42:21 np0005548731 kernel: pci 0000:00:06.0: BAR 4 [mem 0xfe810000-0xfe813fff 64bit pref]
Dec  6 00:42:21 np0005548731 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Dec  6 00:42:21 np0005548731 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Dec  6 00:42:21 np0005548731 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Dec  6 00:42:21 np0005548731 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Dec  6 00:42:21 np0005548731 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Dec  6 00:42:21 np0005548731 kernel: iommu: Default domain type: Translated
Dec  6 00:42:21 np0005548731 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Dec  6 00:42:21 np0005548731 kernel: SCSI subsystem initialized
Dec  6 00:42:21 np0005548731 kernel: ACPI: bus type USB registered
Dec  6 00:42:21 np0005548731 kernel: usbcore: registered new interface driver usbfs
Dec  6 00:42:21 np0005548731 kernel: usbcore: registered new interface driver hub
Dec  6 00:42:21 np0005548731 kernel: usbcore: registered new device driver usb
Dec  6 00:42:21 np0005548731 kernel: pps_core: LinuxPPS API ver. 1 registered
Dec  6 00:42:21 np0005548731 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it>
Dec  6 00:42:21 np0005548731 kernel: PTP clock support registered
Dec  6 00:42:21 np0005548731 kernel: EDAC MC: Ver: 3.0.0
Dec  6 00:42:21 np0005548731 kernel: NetLabel: Initializing
Dec  6 00:42:21 np0005548731 kernel: NetLabel:  domain hash size = 128
Dec  6 00:42:21 np0005548731 kernel: NetLabel:  protocols = UNLABELED CIPSOv4 CALIPSO
Dec  6 00:42:21 np0005548731 kernel: NetLabel:  unlabeled traffic allowed by default
Dec  6 00:42:21 np0005548731 kernel: PCI: Using ACPI for IRQ routing
Dec  6 00:42:21 np0005548731 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Dec  6 00:42:21 np0005548731 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Dec  6 00:42:21 np0005548731 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Dec  6 00:42:21 np0005548731 kernel: vgaarb: loaded
Dec  6 00:42:21 np0005548731 kernel: clocksource: Switched to clocksource kvm-clock
Dec  6 00:42:21 np0005548731 kernel: VFS: Disk quotas dquot_6.6.0
Dec  6 00:42:21 np0005548731 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Dec  6 00:42:21 np0005548731 kernel: pnp: PnP ACPI init
Dec  6 00:42:21 np0005548731 kernel: pnp: PnP ACPI: found 5 devices
Dec  6 00:42:21 np0005548731 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Dec  6 00:42:21 np0005548731 kernel: NET: Registered PF_INET protocol family
Dec  6 00:42:21 np0005548731 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Dec  6 00:42:21 np0005548731 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Dec  6 00:42:21 np0005548731 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Dec  6 00:42:21 np0005548731 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Dec  6 00:42:21 np0005548731 kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear)
Dec  6 00:42:21 np0005548731 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Dec  6 00:42:21 np0005548731 kernel: MPTCP token hash table entries: 8192 (order: 5, 196608 bytes, linear)
Dec  6 00:42:21 np0005548731 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Dec  6 00:42:21 np0005548731 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Dec  6 00:42:21 np0005548731 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Dec  6 00:42:21 np0005548731 kernel: NET: Registered PF_XDP protocol family
Dec  6 00:42:21 np0005548731 kernel: pci_bus 0000:00: resource 4 [io  0x0000-0x0cf7 window]
Dec  6 00:42:21 np0005548731 kernel: pci_bus 0000:00: resource 5 [io  0x0d00-0xffff window]
Dec  6 00:42:21 np0005548731 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Dec  6 00:42:21 np0005548731 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfffff window]
Dec  6 00:42:21 np0005548731 kernel: pci_bus 0000:00: resource 8 [mem 0x240000000-0x2bfffffff window]
Dec  6 00:42:21 np0005548731 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Dec  6 00:42:21 np0005548731 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Dec  6 00:42:21 np0005548731 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Dec  6 00:42:21 np0005548731 kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x160 took 76600 usecs
Dec  6 00:42:21 np0005548731 kernel: PCI: CLS 0 bytes, default 64
Dec  6 00:42:21 np0005548731 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Dec  6 00:42:21 np0005548731 kernel: software IO TLB: mapped [mem 0x00000000ab000000-0x00000000af000000] (64MB)
Dec  6 00:42:21 np0005548731 kernel: ACPI: bus type thunderbolt registered
Dec  6 00:42:21 np0005548731 kernel: Trying to unpack rootfs image as initramfs...
Dec  6 00:42:21 np0005548731 kernel: Initialise system trusted keyrings
Dec  6 00:42:21 np0005548731 kernel: Key type blacklist registered
Dec  6 00:42:21 np0005548731 kernel: workingset: timestamp_bits=36 max_order=21 bucket_order=0
Dec  6 00:42:21 np0005548731 kernel: zbud: loaded
Dec  6 00:42:21 np0005548731 kernel: integrity: Platform Keyring initialized
Dec  6 00:42:21 np0005548731 kernel: integrity: Machine keyring initialized
Dec  6 00:42:21 np0005548731 kernel: Freeing initrd memory: 87804K
Dec  6 00:42:21 np0005548731 kernel: NET: Registered PF_ALG protocol family
Dec  6 00:42:21 np0005548731 kernel: xor: automatically using best checksumming function   avx
Dec  6 00:42:21 np0005548731 kernel: Key type asymmetric registered
Dec  6 00:42:21 np0005548731 kernel: Asymmetric key parser 'x509' registered
Dec  6 00:42:21 np0005548731 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 246)
Dec  6 00:42:21 np0005548731 kernel: io scheduler mq-deadline registered
Dec  6 00:42:21 np0005548731 kernel: io scheduler kyber registered
Dec  6 00:42:21 np0005548731 kernel: io scheduler bfq registered
Dec  6 00:42:21 np0005548731 kernel: atomic64_test: passed for x86-64 platform with CX8 and with SSE
Dec  6 00:42:21 np0005548731 kernel: shpchp: Standard Hot Plug PCI Controller Driver version: 0.4
Dec  6 00:42:21 np0005548731 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input0
Dec  6 00:42:21 np0005548731 kernel: ACPI: button: Power Button [PWRF]
Dec  6 00:42:21 np0005548731 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Dec  6 00:42:21 np0005548731 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Dec  6 00:42:21 np0005548731 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Dec  6 00:42:21 np0005548731 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Dec  6 00:42:21 np0005548731 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Dec  6 00:42:21 np0005548731 kernel: Non-volatile memory driver v1.3
Dec  6 00:42:21 np0005548731 kernel: rdac: device handler registered
Dec  6 00:42:21 np0005548731 kernel: hp_sw: device handler registered
Dec  6 00:42:21 np0005548731 kernel: emc: device handler registered
Dec  6 00:42:21 np0005548731 kernel: alua: device handler registered
Dec  6 00:42:21 np0005548731 kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller
Dec  6 00:42:21 np0005548731 kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
Dec  6 00:42:21 np0005548731 kernel: uhci_hcd 0000:00:01.2: detected 2 ports
Dec  6 00:42:21 np0005548731 kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c100
Dec  6 00:42:21 np0005548731 kernel: usb usb1: New USB device found, idVendor=1d6b, idProduct=0001, bcdDevice= 5.14
Dec  6 00:42:21 np0005548731 kernel: usb usb1: New USB device strings: Mfr=3, Product=2, SerialNumber=1
Dec  6 00:42:21 np0005548731 kernel: usb usb1: Product: UHCI Host Controller
Dec  6 00:42:21 np0005548731 kernel: usb usb1: Manufacturer: Linux 5.14.0-645.el9.x86_64 uhci_hcd
Dec  6 00:42:21 np0005548731 kernel: usb usb1: SerialNumber: 0000:00:01.2
Dec  6 00:42:21 np0005548731 kernel: hub 1-0:1.0: USB hub found
Dec  6 00:42:21 np0005548731 kernel: hub 1-0:1.0: 2 ports detected
Dec  6 00:42:21 np0005548731 kernel: usbcore: registered new interface driver usbserial_generic
Dec  6 00:42:21 np0005548731 kernel: usbserial: USB Serial support registered for generic
Dec  6 00:42:21 np0005548731 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Dec  6 00:42:21 np0005548731 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Dec  6 00:42:21 np0005548731 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Dec  6 00:42:21 np0005548731 kernel: mousedev: PS/2 mouse device common for all mice
Dec  6 00:42:21 np0005548731 kernel: rtc_cmos 00:04: RTC can wake from S4
Dec  6 00:42:21 np0005548731 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
Dec  6 00:42:21 np0005548731 kernel: rtc_cmos 00:04: registered as rtc0
Dec  6 00:42:21 np0005548731 kernel: rtc_cmos 00:04: setting system clock to 2025-12-06T05:42:20 UTC (1764999740)
Dec  6 00:42:21 np0005548731 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Dec  6 00:42:21 np0005548731 kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input4
Dec  6 00:42:21 np0005548731 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Dec  6 00:42:21 np0005548731 kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input3
Dec  6 00:42:21 np0005548731 kernel: hid: raw HID events driver (C) Jiri Kosina
Dec  6 00:42:21 np0005548731 kernel: usbcore: registered new interface driver usbhid
Dec  6 00:42:21 np0005548731 kernel: usbhid: USB HID core driver
Dec  6 00:42:21 np0005548731 kernel: drop_monitor: Initializing network drop monitor service
Dec  6 00:42:21 np0005548731 kernel: Initializing XFRM netlink socket
Dec  6 00:42:21 np0005548731 kernel: NET: Registered PF_INET6 protocol family
Dec  6 00:42:21 np0005548731 kernel: Segment Routing with IPv6
Dec  6 00:42:21 np0005548731 kernel: NET: Registered PF_PACKET protocol family
Dec  6 00:42:21 np0005548731 kernel: mpls_gso: MPLS GSO support
Dec  6 00:42:21 np0005548731 kernel: IPI shorthand broadcast: enabled
Dec  6 00:42:21 np0005548731 kernel: AVX2 version of gcm_enc/dec engaged.
Dec  6 00:42:21 np0005548731 kernel: AES CTR mode by8 optimization enabled
Dec  6 00:42:21 np0005548731 kernel: sched_clock: Marking stable (3197003112, 191403936)->(3955770445, -567363397)
Dec  6 00:42:21 np0005548731 kernel: registered taskstats version 1
Dec  6 00:42:21 np0005548731 kernel: Loading compiled-in X.509 certificates
Dec  6 00:42:21 np0005548731 kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: 4c28336b4850d771d036b52fb2778fdb4f02f708'
Dec  6 00:42:21 np0005548731 kernel: Loaded X.509 cert 'Red Hat Enterprise Linux Driver Update Program (key 3): bf57f3e87362bc7229d9f465321773dfd1f77a80'
Dec  6 00:42:21 np0005548731 kernel: Loaded X.509 cert 'Red Hat Enterprise Linux kpatch signing key: 4d38fd864ebe18c5f0b72e3852e2014c3a676fc8'
Dec  6 00:42:21 np0005548731 kernel: Loaded X.509 cert 'RH-IMA-CA: Red Hat IMA CA: fb31825dd0e073685b264e3038963673f753959a'
Dec  6 00:42:21 np0005548731 kernel: Loaded X.509 cert 'Nvidia GPU OOT signing 001: 55e1cef88193e60419f0b0ec379c49f77545acf0'
Dec  6 00:42:21 np0005548731 kernel: Demotion targets for Node 0: null
Dec  6 00:42:21 np0005548731 kernel: page_owner is disabled
Dec  6 00:42:21 np0005548731 kernel: Key type .fscrypt registered
Dec  6 00:42:21 np0005548731 kernel: Key type fscrypt-provisioning registered
Dec  6 00:42:21 np0005548731 kernel: Key type big_key registered
Dec  6 00:42:21 np0005548731 kernel: Key type encrypted registered
Dec  6 00:42:21 np0005548731 kernel: ima: No TPM chip found, activating TPM-bypass!
Dec  6 00:42:21 np0005548731 kernel: Loading compiled-in module X.509 certificates
Dec  6 00:42:21 np0005548731 kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: 4c28336b4850d771d036b52fb2778fdb4f02f708'
Dec  6 00:42:21 np0005548731 kernel: ima: Allocated hash algorithm: sha256
Dec  6 00:42:21 np0005548731 kernel: ima: No architecture policies found
Dec  6 00:42:21 np0005548731 kernel: evm: Initialising EVM extended attributes:
Dec  6 00:42:21 np0005548731 kernel: evm: security.selinux
Dec  6 00:42:21 np0005548731 kernel: evm: security.SMACK64 (disabled)
Dec  6 00:42:21 np0005548731 kernel: evm: security.SMACK64EXEC (disabled)
Dec  6 00:42:21 np0005548731 kernel: evm: security.SMACK64TRANSMUTE (disabled)
Dec  6 00:42:21 np0005548731 kernel: evm: security.SMACK64MMAP (disabled)
Dec  6 00:42:21 np0005548731 kernel: evm: security.apparmor (disabled)
Dec  6 00:42:21 np0005548731 kernel: evm: security.ima
Dec  6 00:42:21 np0005548731 kernel: evm: security.capability
Dec  6 00:42:21 np0005548731 kernel: evm: HMAC attrs: 0x1
Dec  6 00:42:21 np0005548731 kernel: usb 1-1: new full-speed USB device number 2 using uhci_hcd
Dec  6 00:42:21 np0005548731 kernel: Running certificate verification RSA selftest
Dec  6 00:42:21 np0005548731 kernel: Loaded X.509 cert 'Certificate verification self-testing key: f58703bb33ce1b73ee02eccdee5b8817518fe3db'
Dec  6 00:42:21 np0005548731 kernel: Running certificate verification ECDSA selftest
Dec  6 00:42:21 np0005548731 kernel: Loaded X.509 cert 'Certificate verification ECDSA self-testing key: 2900bcea1deb7bc8479a84a23d758efdfdd2b2d3'
Dec  6 00:42:21 np0005548731 kernel: clk: Disabling unused clocks
Dec  6 00:42:21 np0005548731 kernel: Freeing unused decrypted memory: 2028K
Dec  6 00:42:21 np0005548731 kernel: Freeing unused kernel image (initmem) memory: 4196K
Dec  6 00:42:21 np0005548731 kernel: Write protecting the kernel read-only data: 30720k
Dec  6 00:42:21 np0005548731 kernel: Freeing unused kernel image (rodata/data gap) memory: 428K
Dec  6 00:42:21 np0005548731 kernel: x86/mm: Checked W+X mappings: passed, no W+X pages found.
Dec  6 00:42:21 np0005548731 kernel: Run /init as init process
Dec  6 00:42:21 np0005548731 systemd: systemd 252-59.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Dec  6 00:42:21 np0005548731 systemd: Detected virtualization kvm.
Dec  6 00:42:21 np0005548731 systemd: Detected architecture x86-64.
Dec  6 00:42:21 np0005548731 systemd: Running in initrd.
Dec  6 00:42:21 np0005548731 systemd: No hostname configured, using default hostname.
Dec  6 00:42:21 np0005548731 systemd: Hostname set to <localhost>.
Dec  6 00:42:21 np0005548731 systemd: Initializing machine ID from VM UUID.
Dec  6 00:42:21 np0005548731 kernel: usb 1-1: New USB device found, idVendor=0627, idProduct=0001, bcdDevice= 0.00
Dec  6 00:42:21 np0005548731 kernel: usb 1-1: New USB device strings: Mfr=1, Product=3, SerialNumber=10
Dec  6 00:42:21 np0005548731 kernel: usb 1-1: Product: QEMU USB Tablet
Dec  6 00:42:21 np0005548731 kernel: usb 1-1: Manufacturer: QEMU
Dec  6 00:42:21 np0005548731 kernel: usb 1-1: SerialNumber: 28754-0000:00:01.2-1
Dec  6 00:42:21 np0005548731 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:01.2/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input5
Dec  6 00:42:21 np0005548731 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:00:01.2-1/input0
Dec  6 00:42:21 np0005548731 systemd: Queued start job for default target Initrd Default Target.
Dec  6 00:42:21 np0005548731 systemd: Started Dispatch Password Requests to Console Directory Watch.
Dec  6 00:42:21 np0005548731 systemd: Reached target Local Encrypted Volumes.
Dec  6 00:42:21 np0005548731 systemd: Reached target Initrd /usr File System.
Dec  6 00:42:21 np0005548731 systemd: Reached target Local File Systems.
Dec  6 00:42:21 np0005548731 systemd: Reached target Path Units.
Dec  6 00:42:21 np0005548731 systemd: Reached target Slice Units.
Dec  6 00:42:21 np0005548731 systemd: Reached target Swaps.
Dec  6 00:42:21 np0005548731 systemd: Reached target Timer Units.
Dec  6 00:42:21 np0005548731 systemd: Listening on D-Bus System Message Bus Socket.
Dec  6 00:42:21 np0005548731 systemd: Listening on Journal Socket (/dev/log).
Dec  6 00:42:21 np0005548731 systemd: Listening on Journal Socket.
Dec  6 00:42:21 np0005548731 systemd: Listening on udev Control Socket.
Dec  6 00:42:21 np0005548731 systemd: Listening on udev Kernel Socket.
Dec  6 00:42:21 np0005548731 systemd: Reached target Socket Units.
Dec  6 00:42:21 np0005548731 systemd: Starting Create List of Static Device Nodes...
Dec  6 00:42:21 np0005548731 systemd: Starting Journal Service...
Dec  6 00:42:21 np0005548731 systemd: Load Kernel Modules was skipped because no trigger condition checks were met.
Dec  6 00:42:21 np0005548731 systemd: Starting Apply Kernel Variables...
Dec  6 00:42:21 np0005548731 systemd: Starting Create System Users...
Dec  6 00:42:21 np0005548731 systemd: Starting Setup Virtual Console...
Dec  6 00:42:21 np0005548731 systemd: Finished Create List of Static Device Nodes.
Dec  6 00:42:21 np0005548731 systemd: Finished Apply Kernel Variables.
Dec  6 00:42:21 np0005548731 systemd-journald[305]: Journal started
Dec  6 00:42:21 np0005548731 systemd-journald[305]: Runtime Journal (/run/log/journal/ec2492e2e0d143d3b74f744a8272a0e6) is 8.0M, max 153.6M, 145.6M free.
Dec  6 00:42:21 np0005548731 systemd: Started Journal Service.
Dec  6 00:42:21 np0005548731 systemd-sysusers[310]: Creating group 'users' with GID 100.
Dec  6 00:42:21 np0005548731 systemd-sysusers[310]: Creating group 'dbus' with GID 81.
Dec  6 00:42:21 np0005548731 systemd-sysusers[310]: Creating user 'dbus' (System Message Bus) with UID 81 and GID 81.
Dec  6 00:42:21 np0005548731 systemd[1]: Finished Create System Users.
Dec  6 00:42:21 np0005548731 systemd[1]: Starting Create Static Device Nodes in /dev...
Dec  6 00:42:21 np0005548731 systemd[1]: Starting Create Volatile Files and Directories...
Dec  6 00:42:21 np0005548731 systemd[1]: Finished Create Static Device Nodes in /dev.
Dec  6 00:42:21 np0005548731 systemd[1]: Finished Setup Virtual Console.
Dec  6 00:42:21 np0005548731 systemd[1]: dracut ask for additional cmdline parameters was skipped because no trigger condition checks were met.
Dec  6 00:42:21 np0005548731 systemd[1]: Starting dracut cmdline hook...
Dec  6 00:42:21 np0005548731 dracut-cmdline[327]: dracut-9 dracut-057-102.git20250818.el9
Dec  6 00:42:21 np0005548731 systemd[1]: Finished Create Volatile Files and Directories.
Dec  6 00:42:21 np0005548731 dracut-cmdline[327]: Using kernel command line parameters:    BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-645.el9.x86_64 root=UUID=fcf6b761-831a-48a7-9f5f-068b5063763f ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Dec  6 00:42:21 np0005548731 systemd[1]: Finished dracut cmdline hook.
Dec  6 00:42:21 np0005548731 systemd[1]: Starting dracut pre-udev hook...
Dec  6 00:42:21 np0005548731 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Dec  6 00:42:21 np0005548731 kernel: device-mapper: uevent: version 1.0.3
Dec  6 00:42:21 np0005548731 kernel: device-mapper: ioctl: 4.50.0-ioctl (2025-04-28) initialised: dm-devel@lists.linux.dev
Dec  6 00:42:21 np0005548731 kernel: RPC: Registered named UNIX socket transport module.
Dec  6 00:42:21 np0005548731 kernel: RPC: Registered udp transport module.
Dec  6 00:42:21 np0005548731 kernel: RPC: Registered tcp transport module.
Dec  6 00:42:21 np0005548731 kernel: RPC: Registered tcp-with-tls transport module.
Dec  6 00:42:21 np0005548731 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
Dec  6 00:42:21 np0005548731 rpc.statd[444]: Version 2.5.4 starting
Dec  6 00:42:21 np0005548731 rpc.statd[444]: Initializing NSM state
Dec  6 00:42:21 np0005548731 rpc.idmapd[449]: Setting log level to 0
Dec  6 00:42:21 np0005548731 systemd[1]: Finished dracut pre-udev hook.
Dec  6 00:42:21 np0005548731 systemd[1]: Starting Rule-based Manager for Device Events and Files...
Dec  6 00:42:21 np0005548731 systemd-udevd[462]: Using default interface naming scheme 'rhel-9.0'.
Dec  6 00:42:21 np0005548731 systemd[1]: Started Rule-based Manager for Device Events and Files.
Dec  6 00:42:21 np0005548731 systemd[1]: Starting dracut pre-trigger hook...
Dec  6 00:42:21 np0005548731 systemd[1]: Finished dracut pre-trigger hook.
Dec  6 00:42:21 np0005548731 systemd[1]: Starting Coldplug All udev Devices...
Dec  6 00:42:21 np0005548731 systemd[1]: Created slice Slice /system/modprobe.
Dec  6 00:42:22 np0005548731 systemd[1]: Starting Load Kernel Module configfs...
Dec  6 00:42:22 np0005548731 systemd[1]: Finished Coldplug All udev Devices.
Dec  6 00:42:22 np0005548731 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Dec  6 00:42:22 np0005548731 systemd[1]: Finished Load Kernel Module configfs.
Dec  6 00:42:22 np0005548731 systemd[1]: nm-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Dec  6 00:42:22 np0005548731 systemd[1]: Reached target Network.
Dec  6 00:42:22 np0005548731 systemd[1]: nm-wait-online-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Dec  6 00:42:22 np0005548731 systemd[1]: Starting dracut initqueue hook...
Dec  6 00:42:22 np0005548731 kernel: virtio_blk virtio2: 8/0/0 default/read/poll queues
Dec  6 00:42:22 np0005548731 kernel: virtio_blk virtio2: [vda] 167772160 512-byte logical blocks (85.9 GB/80.0 GiB)
Dec  6 00:42:22 np0005548731 kernel: vda: vda1
Dec  6 00:42:22 np0005548731 kernel: scsi host0: ata_piix
Dec  6 00:42:22 np0005548731 kernel: scsi host1: ata_piix
Dec  6 00:42:22 np0005548731 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc140 irq 14 lpm-pol 0
Dec  6 00:42:22 np0005548731 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc148 irq 15 lpm-pol 0
Dec  6 00:42:22 np0005548731 systemd[1]: Found device /dev/disk/by-uuid/fcf6b761-831a-48a7-9f5f-068b5063763f.
Dec  6 00:42:22 np0005548731 systemd[1]: Reached target Initrd Root Device.
Dec  6 00:42:22 np0005548731 systemd[1]: Mounting Kernel Configuration File System...
Dec  6 00:42:22 np0005548731 systemd[1]: Mounted Kernel Configuration File System.
Dec  6 00:42:22 np0005548731 systemd[1]: Reached target System Initialization.
Dec  6 00:42:22 np0005548731 systemd[1]: Reached target Basic System.
Dec  6 00:42:22 np0005548731 kernel: ata1: found unknown device (class 0)
Dec  6 00:42:22 np0005548731 kernel: ata1.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Dec  6 00:42:22 np0005548731 kernel: scsi 0:0:0:0: CD-ROM            QEMU     QEMU DVD-ROM     2.5+ PQ: 0 ANSI: 5
Dec  6 00:42:22 np0005548731 systemd-udevd[495]: Network interface NamePolicy= disabled on kernel command line.
Dec  6 00:42:22 np0005548731 kernel: scsi 0:0:0:0: Attached scsi generic sg0 type 5
Dec  6 00:42:22 np0005548731 kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Dec  6 00:42:22 np0005548731 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Dec  6 00:42:22 np0005548731 systemd[1]: Finished dracut initqueue hook.
Dec  6 00:42:22 np0005548731 systemd[1]: Reached target Preparation for Remote File Systems.
Dec  6 00:42:22 np0005548731 systemd[1]: Reached target Remote Encrypted Volumes.
Dec  6 00:42:22 np0005548731 systemd[1]: Reached target Remote File Systems.
Dec  6 00:42:22 np0005548731 systemd[1]: Starting dracut pre-mount hook...
Dec  6 00:42:22 np0005548731 systemd[1]: Finished dracut pre-mount hook.
Dec  6 00:42:22 np0005548731 systemd[1]: Starting File System Check on /dev/disk/by-uuid/fcf6b761-831a-48a7-9f5f-068b5063763f...
Dec  6 00:42:22 np0005548731 systemd-fsck[556]: /usr/sbin/fsck.xfs: XFS file system.
Dec  6 00:42:22 np0005548731 systemd[1]: Finished File System Check on /dev/disk/by-uuid/fcf6b761-831a-48a7-9f5f-068b5063763f.
Dec  6 00:42:22 np0005548731 systemd[1]: Mounting /sysroot...
Dec  6 00:42:23 np0005548731 kernel: SGI XFS with ACLs, security attributes, scrub, quota, no debug enabled
Dec  6 00:42:23 np0005548731 kernel: XFS (vda1): Mounting V5 Filesystem fcf6b761-831a-48a7-9f5f-068b5063763f
Dec  6 00:42:23 np0005548731 kernel: XFS (vda1): Ending clean mount
Dec  6 00:42:23 np0005548731 systemd[1]: Mounted /sysroot.
Dec  6 00:42:23 np0005548731 systemd[1]: Reached target Initrd Root File System.
Dec  6 00:42:23 np0005548731 systemd[1]: Starting Mountpoints Configured in the Real Root...
Dec  6 00:42:23 np0005548731 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Dec  6 00:42:23 np0005548731 systemd[1]: Finished Mountpoints Configured in the Real Root.
Dec  6 00:42:23 np0005548731 systemd[1]: Reached target Initrd File Systems.
Dec  6 00:42:23 np0005548731 systemd[1]: Reached target Initrd Default Target.
Dec  6 00:42:23 np0005548731 systemd[1]: Starting dracut mount hook...
Dec  6 00:42:23 np0005548731 systemd[1]: Finished dracut mount hook.
Dec  6 00:42:23 np0005548731 systemd[1]: Starting dracut pre-pivot and cleanup hook...
Dec  6 00:42:24 np0005548731 rpc.idmapd[449]: exiting on signal 15
Dec  6 00:42:24 np0005548731 systemd[1]: var-lib-nfs-rpc_pipefs.mount: Deactivated successfully.
Dec  6 00:42:24 np0005548731 systemd[1]: Finished dracut pre-pivot and cleanup hook.
Dec  6 00:42:24 np0005548731 systemd[1]: Starting Cleaning Up and Shutting Down Daemons...
Dec  6 00:42:24 np0005548731 systemd[1]: Stopped target Network.
Dec  6 00:42:24 np0005548731 systemd[1]: Stopped target Remote Encrypted Volumes.
Dec  6 00:42:24 np0005548731 systemd[1]: Stopped target Timer Units.
Dec  6 00:42:24 np0005548731 systemd[1]: dbus.socket: Deactivated successfully.
Dec  6 00:42:24 np0005548731 systemd[1]: Closed D-Bus System Message Bus Socket.
Dec  6 00:42:24 np0005548731 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Dec  6 00:42:24 np0005548731 systemd[1]: Stopped dracut pre-pivot and cleanup hook.
Dec  6 00:42:24 np0005548731 systemd[1]: Stopped target Initrd Default Target.
Dec  6 00:42:24 np0005548731 systemd[1]: Stopped target Basic System.
Dec  6 00:42:24 np0005548731 systemd[1]: Stopped target Initrd Root Device.
Dec  6 00:42:24 np0005548731 systemd[1]: Stopped target Initrd /usr File System.
Dec  6 00:42:24 np0005548731 systemd[1]: Stopped target Path Units.
Dec  6 00:42:24 np0005548731 systemd[1]: Stopped target Remote File Systems.
Dec  6 00:42:24 np0005548731 systemd[1]: Stopped target Preparation for Remote File Systems.
Dec  6 00:42:24 np0005548731 systemd[1]: Stopped target Slice Units.
Dec  6 00:42:24 np0005548731 systemd[1]: Stopped target Socket Units.
Dec  6 00:42:24 np0005548731 systemd[1]: Stopped target System Initialization.
Dec  6 00:42:24 np0005548731 systemd[1]: Stopped target Local File Systems.
Dec  6 00:42:24 np0005548731 systemd[1]: Stopped target Swaps.
Dec  6 00:42:24 np0005548731 systemd[1]: dracut-mount.service: Deactivated successfully.
Dec  6 00:42:24 np0005548731 systemd[1]: Stopped dracut mount hook.
Dec  6 00:42:24 np0005548731 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Dec  6 00:42:24 np0005548731 systemd[1]: Stopped dracut pre-mount hook.
Dec  6 00:42:24 np0005548731 systemd[1]: Stopped target Local Encrypted Volumes.
Dec  6 00:42:24 np0005548731 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Dec  6 00:42:24 np0005548731 systemd[1]: Stopped Dispatch Password Requests to Console Directory Watch.
Dec  6 00:42:24 np0005548731 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Dec  6 00:42:24 np0005548731 systemd[1]: Stopped dracut initqueue hook.
Dec  6 00:42:24 np0005548731 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Dec  6 00:42:24 np0005548731 systemd[1]: Stopped Apply Kernel Variables.
Dec  6 00:42:24 np0005548731 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Dec  6 00:42:24 np0005548731 systemd[1]: Stopped Create Volatile Files and Directories.
Dec  6 00:42:24 np0005548731 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Dec  6 00:42:24 np0005548731 systemd[1]: Stopped Coldplug All udev Devices.
Dec  6 00:42:24 np0005548731 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Dec  6 00:42:24 np0005548731 systemd[1]: Stopped dracut pre-trigger hook.
Dec  6 00:42:24 np0005548731 systemd[1]: Stopping Rule-based Manager for Device Events and Files...
Dec  6 00:42:24 np0005548731 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec  6 00:42:24 np0005548731 systemd[1]: Stopped Setup Virtual Console.
Dec  6 00:42:24 np0005548731 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Dec  6 00:42:24 np0005548731 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Dec  6 00:42:24 np0005548731 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Dec  6 00:42:24 np0005548731 systemd[1]: Finished Cleaning Up and Shutting Down Daemons.
Dec  6 00:42:24 np0005548731 systemd[1]: systemd-udevd.service: Deactivated successfully.
Dec  6 00:42:24 np0005548731 systemd[1]: Stopped Rule-based Manager for Device Events and Files.
Dec  6 00:42:24 np0005548731 systemd[1]: systemd-udevd.service: Consumed 1.096s CPU time.
Dec  6 00:42:24 np0005548731 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Dec  6 00:42:24 np0005548731 systemd[1]: Closed udev Control Socket.
Dec  6 00:42:24 np0005548731 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Dec  6 00:42:24 np0005548731 systemd[1]: Closed udev Kernel Socket.
Dec  6 00:42:24 np0005548731 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Dec  6 00:42:24 np0005548731 systemd[1]: Stopped dracut pre-udev hook.
Dec  6 00:42:24 np0005548731 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Dec  6 00:42:24 np0005548731 systemd[1]: Stopped dracut cmdline hook.
Dec  6 00:42:24 np0005548731 systemd[1]: Starting Cleanup udev Database...
Dec  6 00:42:24 np0005548731 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Dec  6 00:42:24 np0005548731 systemd[1]: Stopped Create Static Device Nodes in /dev.
Dec  6 00:42:24 np0005548731 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Dec  6 00:42:24 np0005548731 systemd[1]: Stopped Create List of Static Device Nodes.
Dec  6 00:42:24 np0005548731 systemd[1]: systemd-sysusers.service: Deactivated successfully.
Dec  6 00:42:24 np0005548731 systemd[1]: Stopped Create System Users.
Dec  6 00:42:24 np0005548731 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Dec  6 00:42:24 np0005548731 systemd[1]: run-credentials-systemd\x2dsysusers.service.mount: Deactivated successfully.
Dec  6 00:42:24 np0005548731 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Dec  6 00:42:24 np0005548731 systemd[1]: Finished Cleanup udev Database.
Dec  6 00:42:24 np0005548731 systemd[1]: Reached target Switch Root.
Dec  6 00:42:24 np0005548731 systemd[1]: Starting Switch Root...
Dec  6 00:42:24 np0005548731 systemd[1]: Switching root.
Dec  6 00:42:24 np0005548731 systemd-journald[305]: Journal stopped
Dec  6 00:42:25 np0005548731 systemd-journald: Received SIGTERM from PID 1 (systemd).
Dec  6 00:42:25 np0005548731 kernel: audit: type=1404 audit(1764999744.259:2): enforcing=1 old_enforcing=0 auid=4294967295 ses=4294967295 enabled=1 old-enabled=1 lsm=selinux res=1
Dec  6 00:42:25 np0005548731 kernel: SELinux:  policy capability network_peer_controls=1
Dec  6 00:42:25 np0005548731 kernel: SELinux:  policy capability open_perms=1
Dec  6 00:42:25 np0005548731 kernel: SELinux:  policy capability extended_socket_class=1
Dec  6 00:42:25 np0005548731 kernel: SELinux:  policy capability always_check_network=0
Dec  6 00:42:25 np0005548731 kernel: SELinux:  policy capability cgroup_seclabel=1
Dec  6 00:42:25 np0005548731 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec  6 00:42:25 np0005548731 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec  6 00:42:25 np0005548731 kernel: audit: type=1403 audit(1764999744.388:3): auid=4294967295 ses=4294967295 lsm=selinux res=1
Dec  6 00:42:25 np0005548731 systemd: Successfully loaded SELinux policy in 185.354ms.
Dec  6 00:42:25 np0005548731 systemd: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 29.848ms.
Dec  6 00:42:25 np0005548731 systemd: systemd 252-59.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Dec  6 00:42:25 np0005548731 systemd: Detected virtualization kvm.
Dec  6 00:42:25 np0005548731 systemd: Detected architecture x86-64.
Dec  6 00:42:25 np0005548731 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  6 00:42:25 np0005548731 systemd: initrd-switch-root.service: Deactivated successfully.
Dec  6 00:42:25 np0005548731 systemd: Stopped Switch Root.
Dec  6 00:42:25 np0005548731 systemd: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Dec  6 00:42:25 np0005548731 systemd: Created slice Slice /system/getty.
Dec  6 00:42:25 np0005548731 systemd: Created slice Slice /system/serial-getty.
Dec  6 00:42:25 np0005548731 systemd: Created slice Slice /system/sshd-keygen.
Dec  6 00:42:25 np0005548731 systemd: Created slice User and Session Slice.
Dec  6 00:42:25 np0005548731 systemd: Started Dispatch Password Requests to Console Directory Watch.
Dec  6 00:42:25 np0005548731 systemd: Started Forward Password Requests to Wall Directory Watch.
Dec  6 00:42:25 np0005548731 systemd: Set up automount Arbitrary Executable File Formats File System Automount Point.
Dec  6 00:42:25 np0005548731 systemd: Reached target Local Encrypted Volumes.
Dec  6 00:42:25 np0005548731 systemd: Stopped target Switch Root.
Dec  6 00:42:25 np0005548731 systemd: Stopped target Initrd File Systems.
Dec  6 00:42:25 np0005548731 systemd: Stopped target Initrd Root File System.
Dec  6 00:42:25 np0005548731 systemd: Reached target Local Integrity Protected Volumes.
Dec  6 00:42:25 np0005548731 systemd: Reached target Path Units.
Dec  6 00:42:25 np0005548731 systemd: Reached target rpc_pipefs.target.
Dec  6 00:42:25 np0005548731 systemd: Reached target Slice Units.
Dec  6 00:42:25 np0005548731 systemd: Reached target Swaps.
Dec  6 00:42:25 np0005548731 systemd: Reached target Local Verity Protected Volumes.
Dec  6 00:42:25 np0005548731 systemd: Listening on RPCbind Server Activation Socket.
Dec  6 00:42:25 np0005548731 systemd: Reached target RPC Port Mapper.
Dec  6 00:42:25 np0005548731 systemd: Listening on Process Core Dump Socket.
Dec  6 00:42:25 np0005548731 systemd: Listening on initctl Compatibility Named Pipe.
Dec  6 00:42:25 np0005548731 systemd: Listening on udev Control Socket.
Dec  6 00:42:25 np0005548731 systemd: Listening on udev Kernel Socket.
Dec  6 00:42:25 np0005548731 systemd: Mounting Huge Pages File System...
Dec  6 00:42:25 np0005548731 systemd: Mounting POSIX Message Queue File System...
Dec  6 00:42:25 np0005548731 systemd: Mounting Kernel Debug File System...
Dec  6 00:42:25 np0005548731 systemd: Mounting Kernel Trace File System...
Dec  6 00:42:25 np0005548731 systemd: Kernel Module supporting RPCSEC_GSS was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Dec  6 00:42:25 np0005548731 systemd: Starting Create List of Static Device Nodes...
Dec  6 00:42:25 np0005548731 systemd: Starting Load Kernel Module configfs...
Dec  6 00:42:25 np0005548731 systemd: Starting Load Kernel Module drm...
Dec  6 00:42:25 np0005548731 systemd: Starting Load Kernel Module efi_pstore...
Dec  6 00:42:25 np0005548731 systemd: Starting Load Kernel Module fuse...
Dec  6 00:42:25 np0005548731 systemd: Starting Read and set NIS domainname from /etc/sysconfig/network...
Dec  6 00:42:25 np0005548731 systemd: systemd-fsck-root.service: Deactivated successfully.
Dec  6 00:42:25 np0005548731 systemd: Stopped File System Check on Root Device.
Dec  6 00:42:25 np0005548731 systemd: Stopped Journal Service.
Dec  6 00:42:25 np0005548731 systemd: Starting Journal Service...
Dec  6 00:42:25 np0005548731 systemd: Load Kernel Modules was skipped because no trigger condition checks were met.
Dec  6 00:42:25 np0005548731 systemd: Starting Generate network units from Kernel command line...
Dec  6 00:42:25 np0005548731 systemd: TPM2 PCR Machine ID Measurement was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec  6 00:42:25 np0005548731 systemd: Starting Remount Root and Kernel File Systems...
Dec  6 00:42:25 np0005548731 systemd: Repartition Root Disk was skipped because no trigger condition checks were met.
Dec  6 00:42:25 np0005548731 systemd: Starting Apply Kernel Variables...
Dec  6 00:42:25 np0005548731 kernel: fuse: init (API version 7.37)
Dec  6 00:42:25 np0005548731 systemd: Starting Coldplug All udev Devices...
Dec  6 00:42:25 np0005548731 kernel: xfs filesystem being remounted at / supports timestamps until 2038 (0x7fffffff)
Dec  6 00:42:25 np0005548731 systemd-journald[678]: Journal started
Dec  6 00:42:25 np0005548731 systemd-journald[678]: Runtime Journal (/run/log/journal/4d4ef2323cc3337bbfd9081b2a323b4e) is 8.0M, max 153.6M, 145.6M free.
Dec  6 00:42:25 np0005548731 systemd[1]: Queued start job for default target Multi-User System.
Dec  6 00:42:25 np0005548731 systemd[1]: systemd-journald.service: Deactivated successfully.
Dec  6 00:42:25 np0005548731 systemd: Started Journal Service.
Dec  6 00:42:25 np0005548731 systemd[1]: Mounted Huge Pages File System.
Dec  6 00:42:25 np0005548731 systemd[1]: Mounted POSIX Message Queue File System.
Dec  6 00:42:25 np0005548731 systemd[1]: Mounted Kernel Debug File System.
Dec  6 00:42:25 np0005548731 systemd[1]: Mounted Kernel Trace File System.
Dec  6 00:42:25 np0005548731 systemd[1]: Finished Create List of Static Device Nodes.
Dec  6 00:42:25 np0005548731 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Dec  6 00:42:25 np0005548731 systemd[1]: Finished Load Kernel Module configfs.
Dec  6 00:42:25 np0005548731 kernel: ACPI: bus type drm_connector registered
Dec  6 00:42:25 np0005548731 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec  6 00:42:25 np0005548731 systemd[1]: Finished Load Kernel Module efi_pstore.
Dec  6 00:42:25 np0005548731 systemd[1]: modprobe@drm.service: Deactivated successfully.
Dec  6 00:42:25 np0005548731 systemd[1]: Finished Load Kernel Module drm.
Dec  6 00:42:25 np0005548731 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Dec  6 00:42:25 np0005548731 systemd[1]: Finished Load Kernel Module fuse.
Dec  6 00:42:25 np0005548731 systemd[1]: Finished Read and set NIS domainname from /etc/sysconfig/network.
Dec  6 00:42:25 np0005548731 systemd[1]: Finished Generate network units from Kernel command line.
Dec  6 00:42:25 np0005548731 systemd[1]: Finished Remount Root and Kernel File Systems.
Dec  6 00:42:25 np0005548731 systemd[1]: Finished Apply Kernel Variables.
Dec  6 00:42:25 np0005548731 systemd[1]: Mounting FUSE Control File System...
Dec  6 00:42:25 np0005548731 systemd[1]: First Boot Wizard was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Dec  6 00:42:25 np0005548731 systemd[1]: Starting Rebuild Hardware Database...
Dec  6 00:42:25 np0005548731 systemd[1]: Starting Flush Journal to Persistent Storage...
Dec  6 00:42:25 np0005548731 systemd[1]: Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec  6 00:42:25 np0005548731 systemd[1]: Starting Load/Save OS Random Seed...
Dec  6 00:42:25 np0005548731 systemd[1]: Starting Create System Users...
Dec  6 00:42:25 np0005548731 systemd[1]: Mounted FUSE Control File System.
Dec  6 00:42:25 np0005548731 systemd-journald[678]: Runtime Journal (/run/log/journal/4d4ef2323cc3337bbfd9081b2a323b4e) is 8.0M, max 153.6M, 145.6M free.
Dec  6 00:42:25 np0005548731 systemd-journald[678]: Received client request to flush runtime journal.
Dec  6 00:42:25 np0005548731 systemd[1]: Finished Flush Journal to Persistent Storage.
Dec  6 00:42:25 np0005548731 systemd[1]: Finished Coldplug All udev Devices.
Dec  6 00:42:25 np0005548731 systemd[1]: Finished Create System Users.
Dec  6 00:42:25 np0005548731 systemd[1]: Starting Create Static Device Nodes in /dev...
Dec  6 00:42:25 np0005548731 systemd[1]: Finished Load/Save OS Random Seed.
Dec  6 00:42:25 np0005548731 systemd[1]: First Boot Complete was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Dec  6 00:42:25 np0005548731 systemd[1]: Finished Create Static Device Nodes in /dev.
Dec  6 00:42:25 np0005548731 systemd[1]: Reached target Preparation for Local File Systems.
Dec  6 00:42:25 np0005548731 systemd[1]: Reached target Local File Systems.
Dec  6 00:42:25 np0005548731 systemd[1]: Starting Rebuild Dynamic Linker Cache...
Dec  6 00:42:25 np0005548731 systemd[1]: Mark the need to relabel after reboot was skipped because of an unmet condition check (ConditionSecurity=!selinux).
Dec  6 00:42:25 np0005548731 systemd[1]: Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec  6 00:42:25 np0005548731 systemd[1]: Update Boot Loader Random Seed was skipped because no trigger condition checks were met.
Dec  6 00:42:25 np0005548731 systemd[1]: Starting Automatic Boot Loader Update...
Dec  6 00:42:25 np0005548731 systemd[1]: Commit a transient machine-id on disk was skipped because of an unmet condition check (ConditionPathIsMountPoint=/etc/machine-id).
Dec  6 00:42:25 np0005548731 systemd[1]: Starting Create Volatile Files and Directories...
Dec  6 00:42:25 np0005548731 bootctl[696]: Couldn't find EFI system partition, skipping.
Dec  6 00:42:25 np0005548731 systemd[1]: Finished Automatic Boot Loader Update.
Dec  6 00:42:25 np0005548731 systemd[1]: Finished Create Volatile Files and Directories.
Dec  6 00:42:25 np0005548731 systemd[1]: Starting Security Auditing Service...
Dec  6 00:42:25 np0005548731 systemd[1]: Starting RPC Bind...
Dec  6 00:42:25 np0005548731 systemd[1]: Starting Rebuild Journal Catalog...
Dec  6 00:42:25 np0005548731 auditd[702]: audit dispatcher initialized with q_depth=2000 and 1 active plugins
Dec  6 00:42:25 np0005548731 auditd[702]: Init complete, auditd 3.1.5 listening for events (startup state enable)
Dec  6 00:42:25 np0005548731 systemd[1]: Started RPC Bind.
Dec  6 00:42:25 np0005548731 systemd[1]: Finished Rebuild Journal Catalog.
Dec  6 00:42:25 np0005548731 augenrules[707]: /sbin/augenrules: No change
Dec  6 00:42:25 np0005548731 augenrules[722]: No rules
Dec  6 00:42:25 np0005548731 augenrules[722]: enabled 1
Dec  6 00:42:25 np0005548731 augenrules[722]: failure 1
Dec  6 00:42:25 np0005548731 augenrules[722]: pid 702
Dec  6 00:42:25 np0005548731 augenrules[722]: rate_limit 0
Dec  6 00:42:25 np0005548731 augenrules[722]: backlog_limit 8192
Dec  6 00:42:25 np0005548731 augenrules[722]: lost 0
Dec  6 00:42:25 np0005548731 augenrules[722]: backlog 0
Dec  6 00:42:25 np0005548731 augenrules[722]: backlog_wait_time 60000
Dec  6 00:42:25 np0005548731 augenrules[722]: backlog_wait_time_actual 0
Dec  6 00:42:25 np0005548731 augenrules[722]: enabled 1
Dec  6 00:42:25 np0005548731 augenrules[722]: failure 1
Dec  6 00:42:25 np0005548731 augenrules[722]: pid 702
Dec  6 00:42:25 np0005548731 augenrules[722]: rate_limit 0
Dec  6 00:42:25 np0005548731 augenrules[722]: backlog_limit 8192
Dec  6 00:42:25 np0005548731 augenrules[722]: lost 0
Dec  6 00:42:25 np0005548731 augenrules[722]: backlog 0
Dec  6 00:42:25 np0005548731 augenrules[722]: backlog_wait_time 60000
Dec  6 00:42:25 np0005548731 augenrules[722]: backlog_wait_time_actual 0
Dec  6 00:42:25 np0005548731 augenrules[722]: enabled 1
Dec  6 00:42:25 np0005548731 augenrules[722]: failure 1
Dec  6 00:42:25 np0005548731 augenrules[722]: pid 702
Dec  6 00:42:25 np0005548731 augenrules[722]: rate_limit 0
Dec  6 00:42:25 np0005548731 augenrules[722]: backlog_limit 8192
Dec  6 00:42:25 np0005548731 augenrules[722]: lost 0
Dec  6 00:42:25 np0005548731 augenrules[722]: backlog 1
Dec  6 00:42:25 np0005548731 augenrules[722]: backlog_wait_time 60000
Dec  6 00:42:25 np0005548731 augenrules[722]: backlog_wait_time_actual 0
Dec  6 00:42:25 np0005548731 systemd[1]: Started Security Auditing Service.
Dec  6 00:42:25 np0005548731 systemd[1]: Starting Record System Boot/Shutdown in UTMP...
Dec  6 00:42:25 np0005548731 systemd[1]: Finished Record System Boot/Shutdown in UTMP.
Dec  6 00:42:25 np0005548731 systemd[1]: Finished Rebuild Dynamic Linker Cache.
Dec  6 00:42:26 np0005548731 systemd[1]: Finished Rebuild Hardware Database.
Dec  6 00:42:26 np0005548731 systemd[1]: Starting Rule-based Manager for Device Events and Files...
Dec  6 00:42:26 np0005548731 systemd[1]: Starting Update is Completed...
Dec  6 00:42:26 np0005548731 systemd[1]: Finished Update is Completed.
Dec  6 00:42:26 np0005548731 systemd-udevd[730]: Using default interface naming scheme 'rhel-9.0'.
Dec  6 00:42:26 np0005548731 systemd[1]: Started Rule-based Manager for Device Events and Files.
Dec  6 00:42:26 np0005548731 systemd[1]: Reached target System Initialization.
Dec  6 00:42:26 np0005548731 systemd[1]: Started dnf makecache --timer.
Dec  6 00:42:26 np0005548731 systemd[1]: Started Daily rotation of log files.
Dec  6 00:42:26 np0005548731 systemd[1]: Started Daily Cleanup of Temporary Directories.
Dec  6 00:42:26 np0005548731 systemd[1]: Reached target Timer Units.
Dec  6 00:42:26 np0005548731 systemd[1]: Listening on D-Bus System Message Bus Socket.
Dec  6 00:42:26 np0005548731 systemd[1]: Listening on SSSD Kerberos Cache Manager responder socket.
Dec  6 00:42:26 np0005548731 systemd[1]: Reached target Socket Units.
Dec  6 00:42:26 np0005548731 systemd[1]: Starting D-Bus System Message Bus...
Dec  6 00:42:26 np0005548731 systemd[1]: TPM2 PCR Barrier (Initialization) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec  6 00:42:26 np0005548731 systemd[1]: Condition check resulted in /dev/ttyS0 being skipped.
Dec  6 00:42:26 np0005548731 systemd[1]: Starting Load Kernel Module configfs...
Dec  6 00:42:26 np0005548731 systemd-udevd[740]: Network interface NamePolicy= disabled on kernel command line.
Dec  6 00:42:26 np0005548731 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Dec  6 00:42:26 np0005548731 systemd[1]: Finished Load Kernel Module configfs.
Dec  6 00:42:26 np0005548731 systemd[1]: Started D-Bus System Message Bus.
Dec  6 00:42:26 np0005548731 systemd[1]: Reached target Basic System.
Dec  6 00:42:26 np0005548731 dbus-broker-lau[745]: Ready
Dec  6 00:42:26 np0005548731 kernel: input: PC Speaker as /devices/platform/pcspkr/input/input6
Dec  6 00:42:26 np0005548731 systemd[1]: Starting NTP client/server...
Dec  6 00:42:26 np0005548731 systemd[1]: Starting Cloud-init: Local Stage (pre-network)...
Dec  6 00:42:26 np0005548731 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
Dec  6 00:42:26 np0005548731 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Dec  6 00:42:26 np0005548731 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Dec  6 00:42:26 np0005548731 systemd[1]: Starting Restore /run/initramfs on shutdown...
Dec  6 00:42:26 np0005548731 systemd[1]: Starting IPv4 firewall with iptables...
Dec  6 00:42:26 np0005548731 systemd[1]: Started irqbalance daemon.
Dec  6 00:42:26 np0005548731 systemd[1]: Load CPU microcode update was skipped because of an unmet condition check (ConditionPathExists=/sys/devices/system/cpu/microcode/reload).
Dec  6 00:42:26 np0005548731 systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Dec  6 00:42:26 np0005548731 systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Dec  6 00:42:26 np0005548731 systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Dec  6 00:42:26 np0005548731 systemd[1]: Reached target sshd-keygen.target.
Dec  6 00:42:26 np0005548731 systemd[1]: System Security Services Daemon was skipped because no trigger condition checks were met.
Dec  6 00:42:26 np0005548731 systemd[1]: Reached target User and Group Name Lookups.
Dec  6 00:42:26 np0005548731 chronyd[791]: chronyd version 4.8 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +NTS +SECHASH +IPV6 +DEBUG)
Dec  6 00:42:26 np0005548731 chronyd[791]: Loaded 0 symmetric keys
Dec  6 00:42:26 np0005548731 chronyd[791]: Using right/UTC timezone to obtain leap second data
Dec  6 00:42:26 np0005548731 chronyd[791]: Loaded seccomp filter (level 2)
Dec  6 00:42:27 np0005548731 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0
Dec  6 00:42:27 np0005548731 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console
Dec  6 00:42:27 np0005548731 kernel: Console: switching to colour dummy device 80x25
Dec  6 00:42:27 np0005548731 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Dec  6 00:42:27 np0005548731 kernel: [drm] features: -context_init
Dec  6 00:42:27 np0005548731 systemd[1]: Starting User Login Management...
Dec  6 00:42:27 np0005548731 kernel: [drm] number of scanouts: 1
Dec  6 00:42:27 np0005548731 kernel: [drm] number of cap sets: 0
Dec  6 00:42:27 np0005548731 systemd[1]: Started NTP client/server.
Dec  6 00:42:27 np0005548731 kernel: [drm] Initialized virtio_gpu 0.1.0 for 0000:00:02.0 on minor 0
Dec  6 00:42:27 np0005548731 systemd[1]: Finished Restore /run/initramfs on shutdown.
Dec  6 00:42:27 np0005548731 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
Dec  6 00:42:27 np0005548731 kernel: Console: switching to colour frame buffer device 128x48
Dec  6 00:42:27 np0005548731 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Dec  6 00:42:27 np0005548731 kernel: Warning: Deprecated Driver is detected: nft_compat will not be maintained in a future major release and may be disabled
Dec  6 00:42:27 np0005548731 kernel: Warning: Deprecated Driver is detected: nft_compat_module_init will not be maintained in a future major release and may be disabled
Dec  6 00:42:27 np0005548731 systemd-logind[794]: New seat seat0.
Dec  6 00:42:27 np0005548731 systemd-logind[794]: Watching system buttons on /dev/input/event0 (Power Button)
Dec  6 00:42:27 np0005548731 systemd-logind[794]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Dec  6 00:42:27 np0005548731 systemd[1]: Started User Login Management.
Dec  6 00:42:27 np0005548731 kernel: kvm_amd: TSC scaling supported
Dec  6 00:42:27 np0005548731 kernel: kvm_amd: Nested Virtualization enabled
Dec  6 00:42:27 np0005548731 kernel: kvm_amd: Nested Paging enabled
Dec  6 00:42:27 np0005548731 kernel: kvm_amd: LBR virtualization supported
Dec  6 00:42:27 np0005548731 iptables.init[781]: iptables: Applying firewall rules: [  OK  ]
Dec  6 00:42:27 np0005548731 systemd[1]: Finished IPv4 firewall with iptables.
Dec  6 00:42:27 np0005548731 cloud-init[839]: Cloud-init v. 24.4-7.el9 running 'init-local' at Sat, 06 Dec 2025 05:42:27 +0000. Up 10.15 seconds.
Dec  6 00:42:27 np0005548731 systemd[1]: run-cloud\x2dinit-tmp-tmp0agphpmw.mount: Deactivated successfully.
Dec  6 00:42:27 np0005548731 systemd[1]: Starting Hostname Service...
Dec  6 00:42:27 np0005548731 systemd[1]: Started Hostname Service.
Dec  6 00:42:27 np0005548731 systemd-hostnamed[853]: Hostname set to <np0005548731.novalocal> (static)
Dec  6 00:42:28 np0005548731 systemd[1]: Finished Cloud-init: Local Stage (pre-network).
Dec  6 00:42:28 np0005548731 systemd[1]: Reached target Preparation for Network.
Dec  6 00:42:28 np0005548731 systemd[1]: Starting Network Manager...
Dec  6 00:42:28 np0005548731 NetworkManager[857]: <info>  [1764999748.0664] NetworkManager (version 1.54.1-1.el9) is starting... (boot:e68e3ea0-2c97-4c65-be4f-7c894f030f31)
Dec  6 00:42:28 np0005548731 NetworkManager[857]: <info>  [1764999748.0670] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Dec  6 00:42:28 np0005548731 NetworkManager[857]: <info>  [1764999748.0732] manager[0x564046299080]: monitoring kernel firmware directory '/lib/firmware'.
Dec  6 00:42:28 np0005548731 NetworkManager[857]: <info>  [1764999748.0766] hostname: hostname: using hostnamed
Dec  6 00:42:28 np0005548731 NetworkManager[857]: <info>  [1764999748.0766] hostname: static hostname changed from (none) to "np0005548731.novalocal"
Dec  6 00:42:28 np0005548731 NetworkManager[857]: <info>  [1764999748.0770] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Dec  6 00:42:28 np0005548731 NetworkManager[857]: <info>  [1764999748.0882] manager[0x564046299080]: rfkill: Wi-Fi hardware radio set enabled
Dec  6 00:42:28 np0005548731 NetworkManager[857]: <info>  [1764999748.0883] manager[0x564046299080]: rfkill: WWAN hardware radio set enabled
Dec  6 00:42:28 np0005548731 systemd[1]: Listening on Load/Save RF Kill Switch Status /dev/rfkill Watch.
Dec  6 00:42:28 np0005548731 NetworkManager[857]: <info>  [1764999748.0925] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-team.so)
Dec  6 00:42:28 np0005548731 NetworkManager[857]: <info>  [1764999748.0926] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Dec  6 00:42:28 np0005548731 NetworkManager[857]: <info>  [1764999748.0926] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Dec  6 00:42:28 np0005548731 NetworkManager[857]: <info>  [1764999748.0926] manager: Networking is enabled by state file
Dec  6 00:42:28 np0005548731 NetworkManager[857]: <info>  [1764999748.0928] settings: Loaded settings plugin: keyfile (internal)
Dec  6 00:42:28 np0005548731 NetworkManager[857]: <info>  [1764999748.0940] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Dec  6 00:42:28 np0005548731 NetworkManager[857]: <info>  [1764999748.0957] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Dec  6 00:42:28 np0005548731 NetworkManager[857]: <info>  [1764999748.0970] dhcp: init: Using DHCP client 'internal'
Dec  6 00:42:28 np0005548731 NetworkManager[857]: <info>  [1764999748.0973] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Dec  6 00:42:28 np0005548731 NetworkManager[857]: <info>  [1764999748.0986] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec  6 00:42:28 np0005548731 NetworkManager[857]: <info>  [1764999748.0993] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Dec  6 00:42:28 np0005548731 NetworkManager[857]: <info>  [1764999748.1000] device (lo): Activation: starting connection 'lo' (f9121f72-d016-4e94-a48a-a5240750641b)
Dec  6 00:42:28 np0005548731 NetworkManager[857]: <info>  [1764999748.1010] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Dec  6 00:42:28 np0005548731 NetworkManager[857]: <info>  [1764999748.1013] device (eth0): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec  6 00:42:28 np0005548731 NetworkManager[857]: <info>  [1764999748.1038] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Dec  6 00:42:28 np0005548731 NetworkManager[857]: <info>  [1764999748.1042] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Dec  6 00:42:28 np0005548731 NetworkManager[857]: <info>  [1764999748.1044] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Dec  6 00:42:28 np0005548731 NetworkManager[857]: <info>  [1764999748.1045] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Dec  6 00:42:28 np0005548731 NetworkManager[857]: <info>  [1764999748.1049] device (eth0): carrier: link connected
Dec  6 00:42:28 np0005548731 NetworkManager[857]: <info>  [1764999748.1052] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Dec  6 00:42:28 np0005548731 NetworkManager[857]: <info>  [1764999748.1058] device (eth0): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Dec  6 00:42:28 np0005548731 NetworkManager[857]: <info>  [1764999748.1063] policy: auto-activating connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Dec  6 00:42:28 np0005548731 NetworkManager[857]: <info>  [1764999748.1066] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Dec  6 00:42:28 np0005548731 NetworkManager[857]: <info>  [1764999748.1067] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec  6 00:42:28 np0005548731 NetworkManager[857]: <info>  [1764999748.1068] manager: NetworkManager state is now CONNECTING
Dec  6 00:42:28 np0005548731 NetworkManager[857]: <info>  [1764999748.1069] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'full')
Dec  6 00:42:28 np0005548731 NetworkManager[857]: <info>  [1764999748.1074] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec  6 00:42:28 np0005548731 NetworkManager[857]: <info>  [1764999748.1076] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Dec  6 00:42:28 np0005548731 systemd[1]: Starting Network Manager Script Dispatcher Service...
Dec  6 00:42:28 np0005548731 systemd[1]: Started Network Manager.
Dec  6 00:42:28 np0005548731 systemd[1]: Reached target Network.
Dec  6 00:42:28 np0005548731 systemd[1]: Starting Network Manager Wait Online...
Dec  6 00:42:28 np0005548731 systemd[1]: Starting GSSAPI Proxy Daemon...
Dec  6 00:42:28 np0005548731 systemd[1]: Started Network Manager Script Dispatcher Service.
Dec  6 00:42:28 np0005548731 NetworkManager[857]: <info>  [1764999748.1390] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Dec  6 00:42:28 np0005548731 NetworkManager[857]: <info>  [1764999748.1392] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Dec  6 00:42:28 np0005548731 NetworkManager[857]: <info>  [1764999748.1400] device (lo): Activation: successful, device activated.
Dec  6 00:42:28 np0005548731 systemd[1]: Started GSSAPI Proxy Daemon.
Dec  6 00:42:28 np0005548731 systemd[1]: RPC security service for NFS client and server was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Dec  6 00:42:28 np0005548731 systemd[1]: Reached target NFS client services.
Dec  6 00:42:28 np0005548731 systemd[1]: Reached target Preparation for Remote File Systems.
Dec  6 00:42:28 np0005548731 systemd[1]: Reached target Remote File Systems.
Dec  6 00:42:28 np0005548731 systemd[1]: TPM2 PCR Barrier (User) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec  6 00:42:28 np0005548731 NetworkManager[857]: <info>  [1764999748.9504] dhcp4 (eth0): state changed new lease, address=38.102.83.195
Dec  6 00:42:28 np0005548731 NetworkManager[857]: <info>  [1764999748.9519] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Dec  6 00:42:28 np0005548731 NetworkManager[857]: <info>  [1764999748.9552] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec  6 00:42:28 np0005548731 NetworkManager[857]: <info>  [1764999748.9587] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec  6 00:42:28 np0005548731 NetworkManager[857]: <info>  [1764999748.9588] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec  6 00:42:28 np0005548731 NetworkManager[857]: <info>  [1764999748.9590] manager: NetworkManager state is now CONNECTED_SITE
Dec  6 00:42:28 np0005548731 NetworkManager[857]: <info>  [1764999748.9593] device (eth0): Activation: successful, device activated.
Dec  6 00:42:28 np0005548731 NetworkManager[857]: <info>  [1764999748.9599] manager: NetworkManager state is now CONNECTED_GLOBAL
Dec  6 00:42:28 np0005548731 NetworkManager[857]: <info>  [1764999748.9602] manager: startup complete
Dec  6 00:42:28 np0005548731 systemd[1]: Finished Network Manager Wait Online.
Dec  6 00:42:28 np0005548731 systemd[1]: Starting Cloud-init: Network Stage...
Dec  6 00:42:29 np0005548731 cloud-init[921]: Cloud-init v. 24.4-7.el9 running 'init' at Sat, 06 Dec 2025 05:42:29 +0000. Up 12.00 seconds.
Dec  6 00:42:29 np0005548731 cloud-init[921]: ci-info: ++++++++++++++++++++++++++++++++++++++Net device info+++++++++++++++++++++++++++++++++++++++
Dec  6 00:42:29 np0005548731 cloud-init[921]: ci-info: +--------+------+-----------------------------+---------------+--------+-------------------+
Dec  6 00:42:29 np0005548731 cloud-init[921]: ci-info: | Device |  Up  |           Address           |      Mask     | Scope  |     Hw-Address    |
Dec  6 00:42:29 np0005548731 cloud-init[921]: ci-info: +--------+------+-----------------------------+---------------+--------+-------------------+
Dec  6 00:42:29 np0005548731 cloud-init[921]: ci-info: |  eth0  | True |        38.102.83.195        | 255.255.255.0 | global | fa:16:3e:c1:04:1b |
Dec  6 00:42:29 np0005548731 cloud-init[921]: ci-info: |  eth0  | True | fe80::f816:3eff:fec1:41b/64 |       .       |  link  | fa:16:3e:c1:04:1b |
Dec  6 00:42:29 np0005548731 cloud-init[921]: ci-info: |   lo   | True |          127.0.0.1          |   255.0.0.0   |  host  |         .         |
Dec  6 00:42:29 np0005548731 cloud-init[921]: ci-info: |   lo   | True |           ::1/128           |       .       |  host  |         .         |
Dec  6 00:42:29 np0005548731 cloud-init[921]: ci-info: +--------+------+-----------------------------+---------------+--------+-------------------+
Dec  6 00:42:29 np0005548731 cloud-init[921]: ci-info: +++++++++++++++++++++++++++++++++Route IPv4 info+++++++++++++++++++++++++++++++++
Dec  6 00:42:29 np0005548731 cloud-init[921]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Dec  6 00:42:29 np0005548731 cloud-init[921]: ci-info: | Route |   Destination   |    Gateway    |     Genmask     | Interface | Flags |
Dec  6 00:42:29 np0005548731 cloud-init[921]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Dec  6 00:42:29 np0005548731 cloud-init[921]: ci-info: |   0   |     0.0.0.0     |  38.102.83.1  |     0.0.0.0     |    eth0   |   UG  |
Dec  6 00:42:29 np0005548731 cloud-init[921]: ci-info: |   1   |   38.102.83.0   |    0.0.0.0    |  255.255.255.0  |    eth0   |   U   |
Dec  6 00:42:29 np0005548731 cloud-init[921]: ci-info: |   2   | 169.254.169.254 | 38.102.83.126 | 255.255.255.255 |    eth0   |  UGH  |
Dec  6 00:42:29 np0005548731 cloud-init[921]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Dec  6 00:42:29 np0005548731 cloud-init[921]: ci-info: +++++++++++++++++++Route IPv6 info+++++++++++++++++++
Dec  6 00:42:29 np0005548731 cloud-init[921]: ci-info: +-------+-------------+---------+-----------+-------+
Dec  6 00:42:29 np0005548731 cloud-init[921]: ci-info: | Route | Destination | Gateway | Interface | Flags |
Dec  6 00:42:29 np0005548731 cloud-init[921]: ci-info: +-------+-------------+---------+-----------+-------+
Dec  6 00:42:29 np0005548731 cloud-init[921]: ci-info: |   1   |  fe80::/64  |    ::   |    eth0   |   U   |
Dec  6 00:42:29 np0005548731 cloud-init[921]: ci-info: |   3   |  multicast  |    ::   |    eth0   |   U   |
Dec  6 00:42:29 np0005548731 cloud-init[921]: ci-info: +-------+-------------+---------+-----------+-------+
Dec  6 00:42:33 np0005548731 chronyd[791]: Selected source 149.56.19.163 (2.centos.pool.ntp.org)
Dec  6 00:42:33 np0005548731 chronyd[791]: System clock TAI offset set to 37 seconds
Dec  6 00:42:37 np0005548731 irqbalance[782]: Cannot change IRQ 25 affinity: Operation not permitted
Dec  6 00:42:37 np0005548731 irqbalance[782]: IRQ 25 affinity is now unmanaged
Dec  6 00:42:37 np0005548731 irqbalance[782]: Cannot change IRQ 31 affinity: Operation not permitted
Dec  6 00:42:37 np0005548731 irqbalance[782]: IRQ 31 affinity is now unmanaged
Dec  6 00:42:37 np0005548731 irqbalance[782]: Cannot change IRQ 28 affinity: Operation not permitted
Dec  6 00:42:37 np0005548731 irqbalance[782]: IRQ 28 affinity is now unmanaged
Dec  6 00:42:37 np0005548731 irqbalance[782]: Cannot change IRQ 32 affinity: Operation not permitted
Dec  6 00:42:37 np0005548731 irqbalance[782]: IRQ 32 affinity is now unmanaged
Dec  6 00:42:37 np0005548731 irqbalance[782]: Cannot change IRQ 30 affinity: Operation not permitted
Dec  6 00:42:37 np0005548731 irqbalance[782]: IRQ 30 affinity is now unmanaged
Dec  6 00:42:37 np0005548731 irqbalance[782]: Cannot change IRQ 29 affinity: Operation not permitted
Dec  6 00:42:37 np0005548731 irqbalance[782]: IRQ 29 affinity is now unmanaged
Dec  6 00:42:39 np0005548731 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Dec  6 00:42:43 np0005548731 cloud-init[921]: Generating public/private rsa key pair.
Dec  6 00:42:43 np0005548731 cloud-init[921]: Your identification has been saved in /etc/ssh/ssh_host_rsa_key
Dec  6 00:42:43 np0005548731 cloud-init[921]: Your public key has been saved in /etc/ssh/ssh_host_rsa_key.pub
Dec  6 00:42:43 np0005548731 cloud-init[921]: The key fingerprint is:
Dec  6 00:42:43 np0005548731 cloud-init[921]: SHA256:aOWIp9WTWDX8hr7bextMhcpMddlyfn+WXdULoDDu5tM root@np0005548731.novalocal
Dec  6 00:42:43 np0005548731 cloud-init[921]: The key's randomart image is:
Dec  6 00:42:43 np0005548731 cloud-init[921]: +---[RSA 3072]----+
Dec  6 00:42:43 np0005548731 cloud-init[921]: |       o .o.. . =|
Dec  6 00:42:43 np0005548731 cloud-init[921]: |      . o.o. o.+=|
Dec  6 00:42:43 np0005548731 cloud-init[921]: |       .o. o. o++|
Dec  6 00:42:43 np0005548731 cloud-init[921]: |     ..O ..+o. o+|
Dec  6 00:42:43 np0005548731 cloud-init[921]: |    . BoS. .+ . *|
Dec  6 00:42:43 np0005548731 cloud-init[921]: |     =o ...  o .=|
Dec  6 00:42:43 np0005548731 cloud-init[921]: |    .  o E .  o..|
Dec  6 00:42:43 np0005548731 cloud-init[921]: |        . ..  .. |
Dec  6 00:42:43 np0005548731 cloud-init[921]: |          ..oo.. |
Dec  6 00:42:43 np0005548731 cloud-init[921]: +----[SHA256]-----+
Dec  6 00:42:43 np0005548731 cloud-init[921]: Generating public/private ecdsa key pair.
Dec  6 00:42:43 np0005548731 cloud-init[921]: Your identification has been saved in /etc/ssh/ssh_host_ecdsa_key
Dec  6 00:42:43 np0005548731 cloud-init[921]: Your public key has been saved in /etc/ssh/ssh_host_ecdsa_key.pub
Dec  6 00:42:43 np0005548731 cloud-init[921]: The key fingerprint is:
Dec  6 00:42:43 np0005548731 cloud-init[921]: SHA256:kMrKX5WmOa26KLtnznuQ591zSP0NeHD+1TCR+mAILqc root@np0005548731.novalocal
Dec  6 00:42:43 np0005548731 cloud-init[921]: The key's randomart image is:
Dec  6 00:42:43 np0005548731 cloud-init[921]: +---[ECDSA 256]---+
Dec  6 00:42:43 np0005548731 cloud-init[921]: |               . |
Dec  6 00:42:43 np0005548731 cloud-init[921]: |       ..     o  |
Dec  6 00:42:43 np0005548731 cloud-init[921]: |      o. . . . . |
Dec  6 00:42:43 np0005548731 cloud-init[921]: |   . ...o.o = o  |
Dec  6 00:42:43 np0005548731 cloud-init[921]: |   .o  +S. * o o.|
Dec  6 00:42:43 np0005548731 cloud-init[921]: | .o.. E*. o + . o|
Dec  6 00:42:43 np0005548731 cloud-init[921]: |  o+ .=o.. o + . |
Dec  6 00:42:43 np0005548731 cloud-init[921]: |..ooo..o+ . . o  |
Dec  6 00:42:43 np0005548731 cloud-init[921]: |oB=o+o.  o       |
Dec  6 00:42:43 np0005548731 cloud-init[921]: +----[SHA256]-----+
Dec  6 00:42:43 np0005548731 cloud-init[921]: Generating public/private ed25519 key pair.
Dec  6 00:42:43 np0005548731 cloud-init[921]: Your identification has been saved in /etc/ssh/ssh_host_ed25519_key
Dec  6 00:42:43 np0005548731 cloud-init[921]: Your public key has been saved in /etc/ssh/ssh_host_ed25519_key.pub
Dec  6 00:42:43 np0005548731 cloud-init[921]: The key fingerprint is:
Dec  6 00:42:43 np0005548731 cloud-init[921]: SHA256:yLnzSZeqbBZYwtOYTok70B6PUFGjLIaaLmvbFjuaJkc root@np0005548731.novalocal
Dec  6 00:42:43 np0005548731 cloud-init[921]: The key's randomart image is:
Dec  6 00:42:43 np0005548731 cloud-init[921]: +--[ED25519 256]--+
Dec  6 00:42:43 np0005548731 cloud-init[921]: |  .oo            |
Dec  6 00:42:43 np0005548731 cloud-init[921]: |.... .           |
Dec  6 00:42:43 np0005548731 cloud-init[921]: |o+oo =           |
Dec  6 00:42:43 np0005548731 cloud-init[921]: |=o+ O.oo         |
Dec  6 00:42:43 np0005548731 cloud-init[921]: |o+ B =+ S        |
Dec  6 00:42:43 np0005548731 cloud-init[921]: |. E.+ ..   .     |
Dec  6 00:42:43 np0005548731 cloud-init[921]: |.o .o o.. o      |
Dec  6 00:42:43 np0005548731 cloud-init[921]: |o+++ .o+ +       |
Dec  6 00:42:43 np0005548731 cloud-init[921]: |==+..oo.+        |
Dec  6 00:42:43 np0005548731 cloud-init[921]: +----[SHA256]-----+
Dec  6 00:42:43 np0005548731 systemd[1]: Finished Cloud-init: Network Stage.
Dec  6 00:42:43 np0005548731 systemd[1]: Reached target Cloud-config availability.
Dec  6 00:42:43 np0005548731 systemd[1]: Reached target Network is Online.
Dec  6 00:42:43 np0005548731 systemd[1]: Starting Cloud-init: Config Stage...
Dec  6 00:42:43 np0005548731 systemd[1]: Starting Crash recovery kernel arming...
Dec  6 00:42:43 np0005548731 systemd[1]: Starting Notify NFS peers of a restart...
Dec  6 00:42:43 np0005548731 systemd[1]: Starting System Logging Service...
Dec  6 00:42:43 np0005548731 sm-notify[1005]: Version 2.5.4 starting
Dec  6 00:42:43 np0005548731 systemd[1]: Starting OpenSSH server daemon...
Dec  6 00:42:43 np0005548731 systemd[1]: Starting Permit User Sessions...
Dec  6 00:42:43 np0005548731 systemd[1]: Started Notify NFS peers of a restart.
Dec  6 00:42:43 np0005548731 systemd[1]: Started OpenSSH server daemon.
Dec  6 00:42:43 np0005548731 systemd[1]: Finished Permit User Sessions.
Dec  6 00:42:43 np0005548731 rsyslogd[1006]: [origin software="rsyslogd" swVersion="8.2510.0-2.el9" x-pid="1006" x-info="https://www.rsyslog.com"] start
Dec  6 00:42:43 np0005548731 rsyslogd[1006]: imjournal: No statefile exists, /var/lib/rsyslog/imjournal.state will be created (ignore if this is first run): No such file or directory [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2040 ]
Dec  6 00:42:43 np0005548731 systemd[1]: Started Command Scheduler.
Dec  6 00:42:43 np0005548731 systemd[1]: Started Getty on tty1.
Dec  6 00:42:43 np0005548731 systemd[1]: Started Serial Getty on ttyS0.
Dec  6 00:42:43 np0005548731 systemd[1]: Reached target Login Prompts.
Dec  6 00:42:43 np0005548731 systemd[1]: Started System Logging Service.
Dec  6 00:42:43 np0005548731 systemd[1]: Reached target Multi-User System.
Dec  6 00:42:43 np0005548731 systemd[1]: Starting Record Runlevel Change in UTMP...
Dec  6 00:42:43 np0005548731 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Dec  6 00:42:43 np0005548731 systemd[1]: Finished Record Runlevel Change in UTMP.
Dec  6 00:42:43 np0005548731 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec  6 00:42:43 np0005548731 kdumpctl[1017]: kdump: No kdump initial ramdisk found.
Dec  6 00:42:43 np0005548731 kdumpctl[1017]: kdump: Rebuilding /boot/initramfs-5.14.0-645.el9.x86_64kdump.img
Dec  6 00:42:44 np0005548731 cloud-init[1151]: Cloud-init v. 24.4-7.el9 running 'modules:config' at Sat, 06 Dec 2025 05:42:43 +0000. Up 26.64 seconds.
Dec  6 00:42:44 np0005548731 systemd[1]: Finished Cloud-init: Config Stage.
Dec  6 00:42:44 np0005548731 systemd[1]: Starting Cloud-init: Final Stage...
Dec  6 00:42:44 np0005548731 dracut[1284]: dracut-057-102.git20250818.el9
Dec  6 00:42:44 np0005548731 cloud-init[1302]: Cloud-init v. 24.4-7.el9 running 'modules:final' at Sat, 06 Dec 2025 05:42:44 +0000. Up 27.04 seconds.
Dec  6 00:42:44 np0005548731 dracut[1286]: Executing: /usr/bin/dracut --quiet --hostonly --hostonly-cmdline --hostonly-i18n --hostonly-mode strict --hostonly-nics  --mount "/dev/disk/by-uuid/fcf6b761-831a-48a7-9f5f-068b5063763f /sysroot xfs rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,noquota" --squash-compressor zstd --no-hostonly-default-device --add-confdir /lib/kdump/dracut.conf.d -f /boot/initramfs-5.14.0-645.el9.x86_64kdump.img 5.14.0-645.el9.x86_64
Dec  6 00:42:44 np0005548731 cloud-init[1326]: #############################################################
Dec  6 00:42:44 np0005548731 cloud-init[1328]: -----BEGIN SSH HOST KEY FINGERPRINTS-----
Dec  6 00:42:44 np0005548731 cloud-init[1334]: 256 SHA256:kMrKX5WmOa26KLtnznuQ591zSP0NeHD+1TCR+mAILqc root@np0005548731.novalocal (ECDSA)
Dec  6 00:42:44 np0005548731 cloud-init[1342]: 256 SHA256:yLnzSZeqbBZYwtOYTok70B6PUFGjLIaaLmvbFjuaJkc root@np0005548731.novalocal (ED25519)
Dec  6 00:42:44 np0005548731 cloud-init[1352]: 3072 SHA256:aOWIp9WTWDX8hr7bextMhcpMddlyfn+WXdULoDDu5tM root@np0005548731.novalocal (RSA)
Dec  6 00:42:44 np0005548731 cloud-init[1355]: -----END SSH HOST KEY FINGERPRINTS-----
Dec  6 00:42:44 np0005548731 cloud-init[1358]: #############################################################
Dec  6 00:42:44 np0005548731 cloud-init[1302]: Cloud-init v. 24.4-7.el9 finished at Sat, 06 Dec 2025 05:42:44 +0000. Datasource DataSourceConfigDrive [net,ver=2][source=/dev/sr0].  Up 27.21 seconds
Dec  6 00:42:44 np0005548731 systemd[1]: Finished Cloud-init: Final Stage.
Dec  6 00:42:44 np0005548731 systemd[1]: Reached target Cloud-init target.
Dec  6 00:42:45 np0005548731 dracut[1286]: dracut module 'systemd-networkd' will not be installed, because command 'networkctl' could not be found!
Dec  6 00:42:45 np0005548731 dracut[1286]: dracut module 'systemd-networkd' will not be installed, because command '/usr/lib/systemd/systemd-networkd' could not be found!
Dec  6 00:42:45 np0005548731 dracut[1286]: dracut module 'systemd-networkd' will not be installed, because command '/usr/lib/systemd/systemd-networkd-wait-online' could not be found!
Dec  6 00:42:45 np0005548731 dracut[1286]: dracut module 'systemd-resolved' will not be installed, because command 'resolvectl' could not be found!
Dec  6 00:42:45 np0005548731 dracut[1286]: dracut module 'systemd-resolved' will not be installed, because command '/usr/lib/systemd/systemd-resolved' could not be found!
Dec  6 00:42:45 np0005548731 dracut[1286]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-timesyncd' could not be found!
Dec  6 00:42:45 np0005548731 dracut[1286]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-time-wait-sync' could not be found!
Dec  6 00:42:45 np0005548731 dracut[1286]: dracut module 'busybox' will not be installed, because command 'busybox' could not be found!
Dec  6 00:42:45 np0005548731 dracut[1286]: dracut module 'dbus-daemon' will not be installed, because command 'dbus-daemon' could not be found!
Dec  6 00:42:45 np0005548731 dracut[1286]: dracut module 'rngd' will not be installed, because command 'rngd' could not be found!
Dec  6 00:42:45 np0005548731 dracut[1286]: dracut module 'connman' will not be installed, because command 'connmand' could not be found!
Dec  6 00:42:45 np0005548731 dracut[1286]: dracut module 'connman' will not be installed, because command 'connmanctl' could not be found!
Dec  6 00:42:45 np0005548731 dracut[1286]: dracut module 'connman' will not be installed, because command 'connmand-wait-online' could not be found!
Dec  6 00:42:45 np0005548731 dracut[1286]: dracut module 'network-wicked' will not be installed, because command 'wicked' could not be found!
Dec  6 00:42:45 np0005548731 dracut[1286]: 62bluetooth: Could not find any command of '/usr/lib/bluetooth/bluetoothd /usr/libexec/bluetooth/bluetoothd'!
Dec  6 00:42:45 np0005548731 dracut[1286]: dracut module 'lvmmerge' will not be installed, because command 'lvm' could not be found!
Dec  6 00:42:45 np0005548731 dracut[1286]: dracut module 'lvmthinpool-monitor' will not be installed, because command 'lvm' could not be found!
Dec  6 00:42:45 np0005548731 dracut[1286]: dracut module 'btrfs' will not be installed, because command 'btrfs' could not be found!
Dec  6 00:42:45 np0005548731 dracut[1286]: dracut module 'dmraid' will not be installed, because command 'dmraid' could not be found!
Dec  6 00:42:45 np0005548731 dracut[1286]: dracut module 'lvm' will not be installed, because command 'lvm' could not be found!
Dec  6 00:42:45 np0005548731 dracut[1286]: dracut module 'mdraid' will not be installed, because command 'mdadm' could not be found!
Dec  6 00:42:45 np0005548731 dracut[1286]: dracut module 'pcsc' will not be installed, because command 'pcscd' could not be found!
Dec  6 00:42:45 np0005548731 dracut[1286]: dracut module 'tpm2-tss' will not be installed, because command 'tpm2' could not be found!
Dec  6 00:42:45 np0005548731 dracut[1286]: dracut module 'cifs' will not be installed, because command 'mount.cifs' could not be found!
Dec  6 00:42:45 np0005548731 dracut[1286]: dracut module 'iscsi' will not be installed, because command 'iscsi-iname' could not be found!
Dec  6 00:42:45 np0005548731 dracut[1286]: dracut module 'iscsi' will not be installed, because command 'iscsiadm' could not be found!
Dec  6 00:42:45 np0005548731 dracut[1286]: dracut module 'iscsi' will not be installed, because command 'iscsid' could not be found!
Dec  6 00:42:45 np0005548731 dracut[1286]: dracut module 'nvmf' will not be installed, because command 'nvme' could not be found!
Dec  6 00:42:45 np0005548731 dracut[1286]: dracut module 'biosdevname' will not be installed, because command 'biosdevname' could not be found!
Dec  6 00:42:45 np0005548731 dracut[1286]: dracut module 'memstrack' will not be installed, because command 'memstrack' could not be found!
Dec  6 00:42:45 np0005548731 dracut[1286]: memstrack is not available
Dec  6 00:42:45 np0005548731 dracut[1286]: If you need to use rd.memdebug>=4, please install memstrack and procps-ng
Dec  6 00:42:45 np0005548731 dracut[1286]: dracut module 'systemd-resolved' will not be installed, because command 'resolvectl' could not be found!
Dec  6 00:42:45 np0005548731 dracut[1286]: dracut module 'systemd-resolved' will not be installed, because command '/usr/lib/systemd/systemd-resolved' could not be found!
Dec  6 00:42:45 np0005548731 dracut[1286]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-timesyncd' could not be found!
Dec  6 00:42:45 np0005548731 dracut[1286]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-time-wait-sync' could not be found!
Dec  6 00:42:45 np0005548731 dracut[1286]: dracut module 'busybox' will not be installed, because command 'busybox' could not be found!
Dec  6 00:42:45 np0005548731 dracut[1286]: dracut module 'dbus-daemon' will not be installed, because command 'dbus-daemon' could not be found!
Dec  6 00:42:45 np0005548731 dracut[1286]: dracut module 'rngd' will not be installed, because command 'rngd' could not be found!
Dec  6 00:42:45 np0005548731 dracut[1286]: dracut module 'connman' will not be installed, because command 'connmand' could not be found!
Dec  6 00:42:45 np0005548731 dracut[1286]: dracut module 'connman' will not be installed, because command 'connmanctl' could not be found!
Dec  6 00:42:45 np0005548731 dracut[1286]: dracut module 'connman' will not be installed, because command 'connmand-wait-online' could not be found!
Dec  6 00:42:45 np0005548731 dracut[1286]: dracut module 'network-wicked' will not be installed, because command 'wicked' could not be found!
Dec  6 00:42:45 np0005548731 dracut[1286]: 62bluetooth: Could not find any command of '/usr/lib/bluetooth/bluetoothd /usr/libexec/bluetooth/bluetoothd'!
Dec  6 00:42:45 np0005548731 dracut[1286]: dracut module 'lvmmerge' will not be installed, because command 'lvm' could not be found!
Dec  6 00:42:45 np0005548731 dracut[1286]: dracut module 'lvmthinpool-monitor' will not be installed, because command 'lvm' could not be found!
Dec  6 00:42:45 np0005548731 dracut[1286]: dracut module 'btrfs' will not be installed, because command 'btrfs' could not be found!
Dec  6 00:42:45 np0005548731 dracut[1286]: dracut module 'dmraid' will not be installed, because command 'dmraid' could not be found!
Dec  6 00:42:45 np0005548731 dracut[1286]: dracut module 'lvm' will not be installed, because command 'lvm' could not be found!
Dec  6 00:42:45 np0005548731 dracut[1286]: dracut module 'mdraid' will not be installed, because command 'mdadm' could not be found!
Dec  6 00:42:45 np0005548731 dracut[1286]: dracut module 'pcsc' will not be installed, because command 'pcscd' could not be found!
Dec  6 00:42:45 np0005548731 dracut[1286]: dracut module 'tpm2-tss' will not be installed, because command 'tpm2' could not be found!
Dec  6 00:42:45 np0005548731 dracut[1286]: dracut module 'cifs' will not be installed, because command 'mount.cifs' could not be found!
Dec  6 00:42:45 np0005548731 dracut[1286]: dracut module 'iscsi' will not be installed, because command 'iscsi-iname' could not be found!
Dec  6 00:42:45 np0005548731 dracut[1286]: dracut module 'iscsi' will not be installed, because command 'iscsiadm' could not be found!
Dec  6 00:42:45 np0005548731 dracut[1286]: dracut module 'iscsi' will not be installed, because command 'iscsid' could not be found!
Dec  6 00:42:45 np0005548731 dracut[1286]: dracut module 'nvmf' will not be installed, because command 'nvme' could not be found!
Dec  6 00:42:45 np0005548731 dracut[1286]: dracut module 'memstrack' will not be installed, because command 'memstrack' could not be found!
Dec  6 00:42:45 np0005548731 dracut[1286]: memstrack is not available
Dec  6 00:42:45 np0005548731 dracut[1286]: If you need to use rd.memdebug>=4, please install memstrack and procps-ng
Dec  6 00:42:46 np0005548731 dracut[1286]: *** Including module: systemd ***
Dec  6 00:42:46 np0005548731 dracut[1286]: *** Including module: fips ***
Dec  6 00:42:46 np0005548731 dracut[1286]: *** Including module: systemd-initrd ***
Dec  6 00:42:46 np0005548731 dracut[1286]: *** Including module: i18n ***
Dec  6 00:42:46 np0005548731 dracut[1286]: *** Including module: drm ***
Dec  6 00:42:46 np0005548731 dracut[1286]: *** Including module: prefixdevname ***
Dec  6 00:42:46 np0005548731 dracut[1286]: *** Including module: kernel-modules ***
Dec  6 00:42:47 np0005548731 kernel: block vda: the capability attribute has been deprecated.
Dec  6 00:42:47 np0005548731 dracut[1286]: *** Including module: kernel-modules-extra ***
Dec  6 00:42:47 np0005548731 dracut[1286]: *** Including module: qemu ***
Dec  6 00:42:47 np0005548731 dracut[1286]: *** Including module: fstab-sys ***
Dec  6 00:42:47 np0005548731 dracut[1286]: *** Including module: rootfs-block ***
Dec  6 00:42:47 np0005548731 dracut[1286]: *** Including module: terminfo ***
Dec  6 00:42:47 np0005548731 dracut[1286]: *** Including module: udev-rules ***
Dec  6 00:42:48 np0005548731 dracut[1286]: Skipping udev rule: 91-permissions.rules
Dec  6 00:42:48 np0005548731 dracut[1286]: Skipping udev rule: 80-drivers-modprobe.rules
Dec  6 00:42:48 np0005548731 dracut[1286]: *** Including module: virtiofs ***
Dec  6 00:42:48 np0005548731 dracut[1286]: *** Including module: dracut-systemd ***
Dec  6 00:42:48 np0005548731 dracut[1286]: *** Including module: usrmount ***
Dec  6 00:42:48 np0005548731 dracut[1286]: *** Including module: base ***
Dec  6 00:42:48 np0005548731 dracut[1286]: *** Including module: fs-lib ***
Dec  6 00:42:48 np0005548731 dracut[1286]: *** Including module: kdumpbase ***
Dec  6 00:42:48 np0005548731 dracut[1286]: *** Including module: microcode_ctl-fw_dir_override ***
Dec  6 00:42:48 np0005548731 dracut[1286]:  microcode_ctl module: mangling fw_dir
Dec  6 00:42:48 np0005548731 dracut[1286]:    microcode_ctl: reset fw_dir to "/lib/firmware/updates /lib/firmware"
Dec  6 00:42:48 np0005548731 dracut[1286]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel"...
Dec  6 00:42:48 np0005548731 dracut[1286]:    microcode_ctl: configuration "intel" is ignored
Dec  6 00:42:48 np0005548731 dracut[1286]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-2d-07"...
Dec  6 00:42:48 np0005548731 dracut[1286]:    microcode_ctl: configuration "intel-06-2d-07" is ignored
Dec  6 00:42:48 np0005548731 dracut[1286]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-4e-03"...
Dec  6 00:42:48 np0005548731 dracut[1286]:    microcode_ctl: configuration "intel-06-4e-03" is ignored
Dec  6 00:42:48 np0005548731 dracut[1286]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-4f-01"...
Dec  6 00:42:48 np0005548731 dracut[1286]:    microcode_ctl: configuration "intel-06-4f-01" is ignored
Dec  6 00:42:48 np0005548731 dracut[1286]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-55-04"...
Dec  6 00:42:48 np0005548731 dracut[1286]:    microcode_ctl: configuration "intel-06-55-04" is ignored
Dec  6 00:42:48 np0005548731 dracut[1286]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-5e-03"...
Dec  6 00:42:49 np0005548731 dracut[1286]:    microcode_ctl: configuration "intel-06-5e-03" is ignored
Dec  6 00:42:49 np0005548731 dracut[1286]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8c-01"...
Dec  6 00:42:49 np0005548731 dracut[1286]:    microcode_ctl: configuration "intel-06-8c-01" is ignored
Dec  6 00:42:49 np0005548731 dracut[1286]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8e-9e-0x-0xca"...
Dec  6 00:42:49 np0005548731 dracut[1286]:    microcode_ctl: configuration "intel-06-8e-9e-0x-0xca" is ignored
Dec  6 00:42:49 np0005548731 dracut[1286]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8e-9e-0x-dell"...
Dec  6 00:42:49 np0005548731 dracut[1286]:    microcode_ctl: configuration "intel-06-8e-9e-0x-dell" is ignored
Dec  6 00:42:49 np0005548731 dracut[1286]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8f-08"...
Dec  6 00:42:49 np0005548731 dracut[1286]:    microcode_ctl: configuration "intel-06-8f-08" is ignored
Dec  6 00:42:49 np0005548731 dracut[1286]:    microcode_ctl: final fw_dir: "/lib/firmware/updates /lib/firmware"
Dec  6 00:42:49 np0005548731 dracut[1286]: *** Including module: openssl ***
Dec  6 00:42:49 np0005548731 dracut[1286]: *** Including module: shutdown ***
Dec  6 00:42:49 np0005548731 dracut[1286]: *** Including module: squash ***
Dec  6 00:42:49 np0005548731 dracut[1286]: *** Including modules done ***
Dec  6 00:42:49 np0005548731 dracut[1286]: *** Installing kernel module dependencies ***
Dec  6 00:42:50 np0005548731 dracut[1286]: *** Installing kernel module dependencies done ***
Dec  6 00:42:50 np0005548731 dracut[1286]: *** Resolving executable dependencies ***
Dec  6 00:42:51 np0005548731 dracut[1286]: *** Resolving executable dependencies done ***
Dec  6 00:42:51 np0005548731 dracut[1286]: *** Generating early-microcode cpio image ***
Dec  6 00:42:51 np0005548731 dracut[1286]: *** Store current command line parameters ***
Dec  6 00:42:51 np0005548731 dracut[1286]: Stored kernel commandline:
Dec  6 00:42:51 np0005548731 dracut[1286]: No dracut internal kernel commandline stored in the initramfs
Dec  6 00:42:51 np0005548731 dracut[1286]: *** Install squash loader ***
Dec  6 00:42:52 np0005548731 dracut[1286]: *** Squashing the files inside the initramfs ***
Dec  6 00:42:53 np0005548731 dracut[1286]: *** Squashing the files inside the initramfs done ***
Dec  6 00:42:53 np0005548731 dracut[1286]: *** Creating image file '/boot/initramfs-5.14.0-645.el9.x86_64kdump.img' ***
Dec  6 00:42:53 np0005548731 dracut[1286]: *** Hardlinking files ***
Dec  6 00:42:53 np0005548731 dracut[1286]: *** Hardlinking files done ***
Dec  6 00:42:54 np0005548731 dracut[1286]: *** Creating initramfs image file '/boot/initramfs-5.14.0-645.el9.x86_64kdump.img' done ***
Dec  6 00:42:55 np0005548731 kdumpctl[1017]: kdump: kexec: loaded kdump kernel
Dec  6 00:42:55 np0005548731 kdumpctl[1017]: kdump: Starting kdump: [OK]
Dec  6 00:42:55 np0005548731 systemd[1]: Finished Crash recovery kernel arming.
Dec  6 00:42:55 np0005548731 systemd[1]: Startup finished in 3.507s (kernel) + 3.413s (initrd) + 31.680s (userspace) = 38.601s.
Dec  6 00:42:57 np0005548731 irqbalance[782]: Cannot change IRQ 27 affinity: Operation not permitted
Dec  6 00:42:57 np0005548731 irqbalance[782]: IRQ 27 affinity is now unmanaged
Dec  6 00:42:58 np0005548731 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Dec  6 00:42:59 np0005548731 systemd[1]: Created slice User Slice of UID 1000.
Dec  6 00:42:59 np0005548731 systemd[1]: Starting User Runtime Directory /run/user/1000...
Dec  6 00:42:59 np0005548731 systemd-logind[794]: New session 1 of user zuul.
Dec  6 00:42:59 np0005548731 systemd[1]: Finished User Runtime Directory /run/user/1000.
Dec  6 00:42:59 np0005548731 systemd[1]: Starting User Manager for UID 1000...
Dec  6 00:42:59 np0005548731 systemd[4302]: Queued start job for default target Main User Target.
Dec  6 00:42:59 np0005548731 systemd[4302]: Created slice User Application Slice.
Dec  6 00:42:59 np0005548731 systemd[4302]: Started Mark boot as successful after the user session has run 2 minutes.
Dec  6 00:42:59 np0005548731 systemd[4302]: Started Daily Cleanup of User's Temporary Directories.
Dec  6 00:42:59 np0005548731 systemd[4302]: Reached target Paths.
Dec  6 00:42:59 np0005548731 systemd[4302]: Reached target Timers.
Dec  6 00:42:59 np0005548731 systemd[4302]: Starting D-Bus User Message Bus Socket...
Dec  6 00:42:59 np0005548731 systemd[4302]: Starting Create User's Volatile Files and Directories...
Dec  6 00:42:59 np0005548731 systemd[4302]: Finished Create User's Volatile Files and Directories.
Dec  6 00:42:59 np0005548731 systemd[4302]: Listening on D-Bus User Message Bus Socket.
Dec  6 00:42:59 np0005548731 systemd[4302]: Reached target Sockets.
Dec  6 00:42:59 np0005548731 systemd[4302]: Reached target Basic System.
Dec  6 00:42:59 np0005548731 systemd[4302]: Reached target Main User Target.
Dec  6 00:42:59 np0005548731 systemd[4302]: Startup finished in 134ms.
Dec  6 00:42:59 np0005548731 systemd[1]: Started User Manager for UID 1000.
Dec  6 00:42:59 np0005548731 systemd[1]: Started Session 1 of User zuul.
Dec  6 00:43:00 np0005548731 python3[4384]: ansible-setup Invoked with gather_subset=['!all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  6 00:43:15 np0005548731 python3[4412]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  6 00:43:21 np0005548731 python3[4470]: ansible-setup Invoked with gather_subset=['network'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  6 00:43:22 np0005548731 python3[4510]: ansible-zuul_console Invoked with path=/tmp/console-{log_uuid}.log port=19885 state=present
Dec  6 00:43:25 np0005548731 python3[4536]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDqdoyMw63SttI9iIXXKReZrn7TtV6srqHUtRSMkVNLk+dttRn/ilDYbFQHqzJ/xo15je18vaR7Bv7vbkUWqoBvXv2qGlsSjPC8W+dR3ltNvw5kJVACfyidVN58/yDoI48cYWeA9FNh/mxTYsU+lJxB8yKvctjItgaTD57DHx0/1ZJwntbGd1eTTg2+WCcTHiojfKmh1KhZm+lPGhLU3lPYya1xkVQLbgRIYnzXs5f2uKHRgoP7Rje0ttuq/2/FepxsVGWoB5DY6o7A0KrVtSPjJSA16aQO+X8om8LuAjPea4s5dF3x0mFmyEg6pCqgXrtGHUTxGDTcrOAaFB8COX0bxRJ84pfPcT9UeJN29I9iIrqrNo+IPgx+m02+DjoqYXWQ5Ri7hJNdlCVXBBJOHCLOeDCnjQole1OSsjiV6vnzBMpreymud5D4dPMOHC3WHrXXsect8s9TUQFtFTQcn3YBnqyNniUqfq5GyL9PMxw+QmtE5vsrAraDXvXeGqjun0E= zuul-build-sshkey manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  6 00:43:27 np0005548731 python3[4560]: ansible-file Invoked with state=directory path=/home/zuul/.ssh mode=448 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 00:43:31 np0005548731 python3[4659]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  6 00:43:31 np0005548731 python3[4730]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764999811.2445266-254-49534863107258/source dest=/home/zuul/.ssh/id_rsa mode=384 force=False _original_basename=18bb717f21a643ea9e23f475ff82c0e9_id_rsa follow=False checksum=c804b4c6be78ad90794c8d3b586f074243741373 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 00:43:32 np0005548731 python3[4853]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa.pub follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  6 00:43:32 np0005548731 python3[4924]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764999812.186426-308-65294377456699/source dest=/home/zuul/.ssh/id_rsa.pub mode=420 force=False _original_basename=18bb717f21a643ea9e23f475ff82c0e9_id_rsa.pub follow=False checksum=167f5c866ddcddcef28b6690fafd0cb8b4c5cda9 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 00:43:35 np0005548731 python3[4972]: ansible-ping Invoked with data=pong
Dec  6 00:43:36 np0005548731 python3[4996]: ansible-setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  6 00:43:38 np0005548731 python3[5054]: ansible-zuul_debug_info Invoked with ipv4_route_required=False ipv6_route_required=False image_manifest_files=['/etc/dib-builddate.txt', '/etc/image-hostname.txt'] image_manifest=None traceroute_host=None
Dec  6 00:43:39 np0005548731 python3[5086]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 00:43:40 np0005548731 chronyd[791]: Selected source 174.138.193.90 (2.centos.pool.ntp.org)
Dec  6 00:43:40 np0005548731 python3[5110]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 00:43:40 np0005548731 python3[5134]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 00:43:41 np0005548731 python3[5158]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 00:43:41 np0005548731 python3[5182]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 00:43:41 np0005548731 python3[5206]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 00:43:43 np0005548731 python3[5232]: ansible-file Invoked with path=/etc/ci state=directory owner=root group=root mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 00:43:44 np0005548731 python3[5310]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/mirror_info.sh follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  6 00:43:45 np0005548731 python3[5383]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/mirror_info.sh owner=root group=root mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1764999823.9038293-34-176926264365304/source follow=False _original_basename=mirror_info.sh.j2 checksum=92d92a03afdddee82732741071f662c729080c35 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 00:43:46 np0005548731 python3[5431]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA4Z/c9osaGGtU6X8fgELwfj/yayRurfcKA0HMFfdpPxev2dbwljysMuzoVp4OZmW1gvGtyYPSNRvnzgsaabPNKNo2ym5NToCP6UM+KSe93aln4BcM/24mXChYAbXJQ5Bqq/pIzsGs/pKetQN+vwvMxLOwTvpcsCJBXaa981RKML6xj9l/UZ7IIq1HSEKMvPLxZMWdu0Ut8DkCd5F4nOw9Wgml2uYpDCj5LLCrQQ9ChdOMz8hz6SighhNlRpPkvPaet3OXxr/ytFMu7j7vv06CaEnuMMiY2aTWN1Imin9eHAylIqFHta/3gFfQSWt9jXM7owkBLKL7ATzhaAn+fjNupw== arxcruz@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  6 00:43:47 np0005548731 python3[5455]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDS4Fn6k4deCnIlOtLWqZJyksbepjQt04j8Ed8CGx9EKkj0fKiAxiI4TadXQYPuNHMixZy4Nevjb6aDhL5Z906TfvNHKUrjrG7G26a0k8vdc61NEQ7FmcGMWRLwwc6ReDO7lFpzYKBMk4YqfWgBuGU/K6WLKiVW2cVvwIuGIaYrE1OiiX0iVUUk7KApXlDJMXn7qjSYynfO4mF629NIp8FJal38+Kv+HA+0QkE5Y2xXnzD4Lar5+keymiCHRntPppXHeLIRzbt0gxC7v3L72hpQ3BTBEzwHpeS8KY+SX1y5lRMN45thCHfJqGmARJREDjBvWG8JXOPmVIKQtZmVcD5b mandreou@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  6 00:43:47 np0005548731 python3[5480]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC9MiLfy30deHA7xPOAlew5qUq3UP2gmRMYJi8PtkjFB20/DKeWwWNnkZPqP9AayruRoo51SIiVg870gbZE2jYl+Ncx/FYDe56JeC3ySZsXoAVkC9bP7gkOGqOmJjirvAgPMI7bogVz8i+66Q4Ar7OKTp3762G4IuWPPEg4ce4Y7lx9qWocZapHYq4cYKMxrOZ7SEbFSATBbe2bPZAPKTw8do/Eny+Hq/LkHFhIeyra6cqTFQYShr+zPln0Cr+ro/pDX3bB+1ubFgTpjpkkkQsLhDfR6cCdCWM2lgnS3BTtYj5Ct9/JRPR5YOphqZz+uB+OEu2IL68hmU9vNTth1KeX rlandy@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  6 00:43:47 np0005548731 python3[5504]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFCbgz8gdERiJlk2IKOtkjQxEXejrio6ZYMJAVJYpOIp raukadah@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  6 00:43:47 np0005548731 python3[5528]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBqb3Q/9uDf4LmihQ7xeJ9gA/STIQUFPSfyyV0m8AoQi bshewale@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  6 00:43:48 np0005548731 python3[5552]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC0I8QqQx0Az2ysJt2JuffucLijhBqnsXKEIx5GyHwxVULROa8VtNFXUDH6ZKZavhiMcmfHB2+TBTda+lDP4FldYj06dGmzCY+IYGa+uDRdxHNGYjvCfLFcmLlzRK6fNbTcui+KlUFUdKe0fb9CRoGKyhlJD5GRkM1Dv+Yb6Bj+RNnmm1fVGYxzmrD2utvffYEb0SZGWxq2R9gefx1q/3wCGjeqvufEV+AskPhVGc5T7t9eyZ4qmslkLh1/nMuaIBFcr9AUACRajsvk6mXrAN1g3HlBf2gQlhi1UEyfbqIQvzzFtsbLDlSum/KmKjy818GzvWjERfQ0VkGzCd9bSLVL dviroel@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  6 00:43:48 np0005548731 python3[5576]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDLOQd4ZLtkZXQGY6UwAr/06ppWQK4fDO3HaqxPk98csyOCBXsliSKK39Bso828+5srIXiW7aI6aC9P5mwi4mUZlGPfJlQbfrcGvY+b/SocuvaGK+1RrHLoJCT52LBhwgrzlXio2jeksZeein8iaTrhsPrOAs7KggIL/rB9hEiB3NaOPWhhoCP4vlW6MEMExGcqB/1FVxXFBPnLkEyW0Lk7ycVflZl2ocRxbfjZi0+tI1Wlinp8PvSQSc/WVrAcDgKjc/mB4ODPOyYy3G8FHgfMsrXSDEyjBKgLKMsdCrAUcqJQWjkqXleXSYOV4q3pzL+9umK+q/e3P/bIoSFQzmJKTU1eDfuvPXmow9F5H54fii/Da7ezlMJ+wPGHJrRAkmzvMbALy7xwswLhZMkOGNtRcPqaKYRmIBKpw3o6bCTtcNUHOtOQnzwY8JzrM2eBWJBXAANYw+9/ho80JIiwhg29CFNpVBuHbql2YxJQNrnl90guN65rYNpDxdIluweyUf8= anbanerj@kaermorhen manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  6 00:43:48 np0005548731 python3[5600]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC3VwV8Im9kRm49lt3tM36hj4Zv27FxGo4C1Q/0jqhzFmHY7RHbmeRr8ObhwWoHjXSozKWg8FL5ER0z3hTwL0W6lez3sL7hUaCmSuZmG5Hnl3x4vTSxDI9JZ/Y65rtYiiWQo2fC5xJhU/4+0e5e/pseCm8cKRSu+SaxhO+sd6FDojA2x1BzOzKiQRDy/1zWGp/cZkxcEuB1wHI5LMzN03c67vmbu+fhZRAUO4dQkvcnj2LrhQtpa+ytvnSjr8icMDosf1OsbSffwZFyHB/hfWGAfe0eIeSA2XPraxiPknXxiPKx2MJsaUTYbsZcm3EjFdHBBMumw5rBI74zLrMRvCO9GwBEmGT4rFng1nP+yw5DB8sn2zqpOsPg1LYRwCPOUveC13P6pgsZZPh812e8v5EKnETct+5XI3dVpdw6CnNiLwAyVAF15DJvBGT/u1k0Myg/bQn+Gv9k2MSj6LvQmf6WbZu2Wgjm30z3FyCneBqTL7mLF19YXzeC0ufHz5pnO1E= dasm@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  6 00:43:48 np0005548731 python3[5624]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHUnwjB20UKmsSed9X73eGNV5AOEFccQ3NYrRW776pEk cjeanner manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  6 00:43:49 np0005548731 python3[5648]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDercCMGn8rW1C4P67tHgtflPdTeXlpyUJYH+6XDd2lR jgilaber@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  6 00:43:49 np0005548731 python3[5672]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAMI6kkg9Wg0sG7jIJmyZemEBwUn1yzNpQQd3gnulOmZ adrianfuscoarnejo@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  6 00:43:49 np0005548731 python3[5697]: ansible-authorized_key Invoked with user=zuul state=present key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPijwpQu/3jhhhBZInXNOLEH57DrknPc3PLbsRvYyJIFzwYjX+WD4a7+nGnMYS42MuZk6TJcVqgnqofVx4isoD4= ramishra@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  6 00:43:49 np0005548731 python3[5721]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGpU/BepK3qX0NRf5Np+dOBDqzQEefhNrw2DCZaH3uWW rebtoor@monolith manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  6 00:43:50 np0005548731 python3[5745]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDK0iKdi8jQTpQrDdLVH/AAgLVYyTXF7AQ1gjc/5uT3t ykarel@yatinkarel manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  6 00:43:50 np0005548731 python3[5769]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIF/V/cLotA6LZeO32VL45Hd78skuA2lJA425Sm2LlQeZ fmount@horcrux manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  6 00:43:50 np0005548731 python3[5793]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDa7QCjuDMVmRPo1rREbGwzYeBCYVN+Ou/3WKXZEC6Sr manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  6 00:43:51 np0005548731 python3[5817]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQCfNtF7NvKl915TGsGGoseUb06Hj8L/S4toWf0hExeY+F00woL6NvBlJD0nDct+P5a22I4EhvoQCRQ8reaPCm1lybR3uiRIJsj+8zkVvLwby9LXzfZorlNG9ofjd00FEmB09uW/YvTl6Q9XwwwX6tInzIOv3TMqTHHGOL74ibbj8J/FJR0cFEyj0z4WQRvtkh32xAHl83gbuINryMt0sqRI+clj2381NKL55DRLQrVw0gsfqqxiHAnXg21qWmc4J+b9e9kiuAFQjcjwTVkwJCcg3xbPwC/qokYRby/Y5S40UUd7/jEARGXT7RZgpzTuDd1oZiCVrnrqJNPaMNdVv5MLeFdf1B7iIe5aa/fGouX7AO4SdKhZUdnJmCFAGvjC6S3JMZ2wAcUl+OHnssfmdj7XL50cLo27vjuzMtLAgSqi6N99m92WCF2s8J9aVzszX7Xz9OKZCeGsiVJp3/NdABKzSEAyM9xBD/5Vho894Sav+otpySHe3p6RUTgbB5Zu8VyZRZ/UtB3ueXxyo764yrc6qWIDqrehm84Xm9g+/jpIBzGPl07NUNJpdt/6Sgf9RIKXw/7XypO5yZfUcuFNGTxLfqjTNrtgLZNcjfav6sSdVXVcMPL//XNuRdKmVFaO76eV/oGMQGr1fGcCD+N+CpI7+Q+fCNB6VFWG4nZFuI/Iuw== averdagu@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  6 00:43:51 np0005548731 python3[5841]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDq8l27xI+QlQVdS4djp9ogSoyrNE2+Ox6vKPdhSNL1J3PE5w+WCSvMz9A5gnNuH810zwbekEApbxTze/gLQJwBHA52CChfURpXrFaxY7ePXRElwKAL3mJfzBWY/c5jnNL9TCVmFJTGZkFZP3Nh+BMgZvL6xBkt3WKm6Uq18qzd9XeKcZusrA+O+uLv1fVeQnadY9RIqOCyeFYCzLWrUfTyE8x/XG0hAWIM7qpnF2cALQS2h9n4hW5ybiUN790H08wf9hFwEf5nxY9Z9dVkPFQiTSGKNBzmnCXU9skxS/xhpFjJ5duGSZdtAHe9O+nGZm9c67hxgtf8e5PDuqAdXEv2cf6e3VBAt+Bz8EKI3yosTj0oZHfwr42Yzb1l/SKy14Rggsrc9KAQlrGXan6+u2jcQqqx7l+SWmnpFiWTV9u5cWj2IgOhApOitmRBPYqk9rE2usfO0hLn/Pj/R/Nau4803e1/EikdLE7Ps95s9mX5jRDjAoUa2JwFF5RsVFyL910= ashigupt@ashigupt.remote.csb manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  6 00:43:51 np0005548731 python3[5865]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOKLl0NYKwoZ/JY5KeZU8VwRAggeOxqQJeoqp3dsAaY9 manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  6 00:43:51 np0005548731 python3[5889]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIASASQOH2BcOyLKuuDOdWZlPi2orcjcA8q4400T73DLH evallesp@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  6 00:43:52 np0005548731 python3[5913]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILeBWlamUph+jRKV2qrx1PGU7vWuGIt5+z9k96I8WehW amsinha@amsinha-mac manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  6 00:43:52 np0005548731 python3[5937]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIANvVgvJBlK3gb1yz5uef/JqIGq4HLEmY2dYA8e37swb morenod@redhat-laptop manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  6 00:43:52 np0005548731 python3[5961]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQDZdI7t1cxYx65heVI24HTV4F7oQLW1zyfxHreL2TIJKxjyrUUKIFEUmTutcBlJRLNT2Eoix6x1sOw9YrchloCLcn//SGfTElr9mSc5jbjb7QXEU+zJMhtxyEJ1Po3CUGnj7ckiIXw7wcawZtrEOAQ9pH3ExYCJcEMiyNjRQZCxT3tPK+S4B95EWh5Fsrz9CkwpjNRPPH7LigCeQTM3Wc7r97utAslBUUvYceDSLA7rMgkitJE38b7rZBeYzsGQ8YYUBjTCtehqQXxCRjizbHWaaZkBU+N3zkKB6n/iCNGIO690NK7A/qb6msTijiz1PeuM8ThOsi9qXnbX5v0PoTpcFSojV7NHAQ71f0XXuS43FhZctT+Dcx44dT8Fb5vJu2cJGrk+qF8ZgJYNpRS7gPg0EG2EqjK7JMf9ULdjSu0r+KlqIAyLvtzT4eOnQipoKlb/WG5D/0ohKv7OMQ352ggfkBFIQsRXyyTCT98Ft9juqPuahi3CAQmP4H9dyE+7+Kz437PEtsxLmfm6naNmWi7Ee1DqWPwS8rEajsm4sNM4wW9gdBboJQtc0uZw0DfLj1I9r3Mc8Ol0jYtz0yNQDSzVLrGCaJlC311trU70tZ+ZkAVV6Mn8lOhSbj1cK0lvSr6ZK4dgqGl3I1eTZJJhbLNdg7UOVaiRx9543+C/p/As7w== brjackma@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  6 00:43:52 np0005548731 python3[5985]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKwedoZ0TWPJX/z/4TAbO/kKcDZOQVgRH0hAqrL5UCI1 vcastell@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  6 00:43:53 np0005548731 python3[6009]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEmv8sE8GCk6ZTPIqF0FQrttBdL3mq7rCm/IJy0xDFh7 michburk@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  6 00:43:53 np0005548731 python3[6033]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICy6GpGEtwevXEEn4mmLR5lmSLe23dGgAvzkB9DMNbkf rsafrono@rsafrono manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  6 00:43:58 np0005548731 python3[6059]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Dec  6 00:43:58 np0005548731 systemd[1]: Starting Time & Date Service...
Dec  6 00:43:58 np0005548731 systemd[1]: Started Time & Date Service.
Dec  6 00:43:58 np0005548731 systemd-timedated[6061]: Changed time zone to 'UTC' (UTC).
Dec  6 00:43:59 np0005548731 python3[6091]: ansible-file Invoked with path=/etc/nodepool state=directory mode=511 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 00:44:00 np0005548731 python3[6167]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  6 00:44:00 np0005548731 python3[6238]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes src=/home/zuul/.ansible/tmp/ansible-tmp-1764999839.7272792-254-233199420450373/source _original_basename=tmpm83pms5u follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 00:44:00 np0005548731 python3[6338]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  6 00:44:01 np0005548731 python3[6410]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes_private src=/home/zuul/.ansible/tmp/ansible-tmp-1764999840.7292612-304-261361709339132/source _original_basename=tmp48gajtny follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 00:44:03 np0005548731 python3[6512]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/node_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  6 00:44:03 np0005548731 python3[6585]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/node_private src=/home/zuul/.ansible/tmp/ansible-tmp-1764999842.8296728-384-250990401488480/source _original_basename=tmpt3daga5d follow=False checksum=cc5e842c2ec455e054f0b6d2069940e82eaaec51 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 00:44:04 np0005548731 python3[6633]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa /etc/nodepool/id_rsa zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  6 00:44:04 np0005548731 python3[6659]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa.pub /etc/nodepool/id_rsa.pub zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  6 00:44:04 np0005548731 python3[6739]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/zuul-sudo-grep follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  6 00:44:05 np0005548731 python3[6812]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/zuul-sudo-grep mode=288 src=/home/zuul/.ansible/tmp/ansible-tmp-1764999844.7266543-455-219877898356793/source _original_basename=tmphqg52kds follow=False checksum=bdca1a77493d00fb51567671791f4aa30f66c2f0 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 00:44:05 np0005548731 python3[6863]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/visudo -c zuul_log_id=fa163ec2-ffbe-fae9-4e7d-00000000001f-1-compute2 zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  6 00:44:07 np0005548731 python3[6891]: ansible-ansible.legacy.command Invoked with executable=/bin/bash _raw_params=env#012 _uses_shell=True zuul_log_id=fa163ec2-ffbe-fae9-4e7d-000000000020-1-compute2 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None creates=None removes=None stdin=None
Dec  6 00:44:08 np0005548731 python3[6919]: ansible-file Invoked with path=/home/zuul/workspace state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 00:44:28 np0005548731 systemd[1]: systemd-timedated.service: Deactivated successfully.
Dec  6 00:44:33 np0005548731 python3[6951]: ansible-ansible.builtin.file Invoked with path=/etc/ci/env state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 00:44:45 np0005548731 chronyd[791]: Selected source 149.56.19.163 (2.centos.pool.ntp.org)
Dec  6 00:45:17 np0005548731 systemd[4302]: Starting Mark boot as successful...
Dec  6 00:45:17 np0005548731 systemd[4302]: Finished Mark boot as successful.
Dec  6 00:45:33 np0005548731 systemd-logind[794]: Session 1 logged out. Waiting for processes to exit.
Dec  6 00:46:03 np0005548731 kernel: pci 0000:00:07.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Dec  6 00:46:03 np0005548731 kernel: pci 0000:00:07.0: BAR 0 [io  0x0000-0x003f]
Dec  6 00:46:03 np0005548731 kernel: pci 0000:00:07.0: BAR 1 [mem 0x00000000-0x00000fff]
Dec  6 00:46:03 np0005548731 kernel: pci 0000:00:07.0: BAR 4 [mem 0x00000000-0x00003fff 64bit pref]
Dec  6 00:46:03 np0005548731 kernel: pci 0000:00:07.0: ROM [mem 0x00000000-0x0007ffff pref]
Dec  6 00:46:03 np0005548731 kernel: pci 0000:00:07.0: ROM [mem 0xc0000000-0xc007ffff pref]: assigned
Dec  6 00:46:03 np0005548731 kernel: pci 0000:00:07.0: BAR 4 [mem 0x240000000-0x240003fff 64bit pref]: assigned
Dec  6 00:46:03 np0005548731 kernel: pci 0000:00:07.0: BAR 1 [mem 0xc0080000-0xc0080fff]: assigned
Dec  6 00:46:03 np0005548731 kernel: pci 0000:00:07.0: BAR 0 [io  0x1000-0x103f]: assigned
Dec  6 00:46:03 np0005548731 kernel: virtio-pci 0000:00:07.0: enabling device (0000 -> 0003)
Dec  6 00:46:03 np0005548731 NetworkManager[857]: <info>  [1764999963.3230] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Dec  6 00:46:03 np0005548731 systemd-udevd[6969]: Network interface NamePolicy= disabled on kernel command line.
Dec  6 00:46:03 np0005548731 NetworkManager[857]: <info>  [1764999963.3389] device (eth1): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec  6 00:46:03 np0005548731 NetworkManager[857]: <info>  [1764999963.3418] settings: (eth1): created default wired connection 'Wired connection 1'
Dec  6 00:46:03 np0005548731 NetworkManager[857]: <info>  [1764999963.3421] device (eth1): carrier: link connected
Dec  6 00:46:03 np0005548731 NetworkManager[857]: <info>  [1764999963.3422] device (eth1): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Dec  6 00:46:03 np0005548731 NetworkManager[857]: <info>  [1764999963.3428] policy: auto-activating connection 'Wired connection 1' (e9ebe57f-2b3a-3c6f-a1b5-85b8e76a73fa)
Dec  6 00:46:03 np0005548731 NetworkManager[857]: <info>  [1764999963.3432] device (eth1): Activation: starting connection 'Wired connection 1' (e9ebe57f-2b3a-3c6f-a1b5-85b8e76a73fa)
Dec  6 00:46:03 np0005548731 NetworkManager[857]: <info>  [1764999963.3432] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec  6 00:46:03 np0005548731 NetworkManager[857]: <info>  [1764999963.3434] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Dec  6 00:46:03 np0005548731 NetworkManager[857]: <info>  [1764999963.3438] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec  6 00:46:03 np0005548731 NetworkManager[857]: <info>  [1764999963.3442] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Dec  6 00:46:04 np0005548731 systemd-logind[794]: New session 3 of user zuul.
Dec  6 00:46:04 np0005548731 systemd[1]: Started Session 3 of User zuul.
Dec  6 00:46:04 np0005548731 python3[7000]: ansible-ansible.legacy.command Invoked with _raw_params=ip -j link zuul_log_id=fa163ec2-ffbe-0d69-987a-0000000001f6-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  6 00:46:14 np0005548731 python3[7082]: ansible-ansible.legacy.stat Invoked with path=/etc/NetworkManager/system-connections/ci-private-network.nmconnection follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  6 00:46:14 np0005548731 python3[7155]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764999974.1774297-206-201940019675961/source dest=/etc/NetworkManager/system-connections/ci-private-network.nmconnection mode=0600 owner=root group=root follow=False _original_basename=bootstrap-ci-network-nm-connection.nmconnection.j2 checksum=72e462f6bf121ec2b701a93bf7698aa5a7f27e3c backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 00:46:15 np0005548731 python3[7205]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  6 00:46:15 np0005548731 systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Dec  6 00:46:15 np0005548731 systemd[1]: Stopped Network Manager Wait Online.
Dec  6 00:46:15 np0005548731 systemd[1]: Stopping Network Manager Wait Online...
Dec  6 00:46:15 np0005548731 systemd[1]: Stopping Network Manager...
Dec  6 00:46:15 np0005548731 NetworkManager[857]: <info>  [1764999975.3320] caught SIGTERM, shutting down normally.
Dec  6 00:46:15 np0005548731 NetworkManager[857]: <info>  [1764999975.3334] dhcp4 (eth0): canceled DHCP transaction
Dec  6 00:46:15 np0005548731 NetworkManager[857]: <info>  [1764999975.3334] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Dec  6 00:46:15 np0005548731 NetworkManager[857]: <info>  [1764999975.3334] dhcp4 (eth0): state changed no lease
Dec  6 00:46:15 np0005548731 NetworkManager[857]: <info>  [1764999975.3337] manager: NetworkManager state is now CONNECTING
Dec  6 00:46:15 np0005548731 NetworkManager[857]: <info>  [1764999975.3496] dhcp4 (eth1): canceled DHCP transaction
Dec  6 00:46:15 np0005548731 NetworkManager[857]: <info>  [1764999975.3496] dhcp4 (eth1): state changed no lease
Dec  6 00:46:15 np0005548731 systemd[1]: Starting Network Manager Script Dispatcher Service...
Dec  6 00:46:15 np0005548731 systemd[1]: Started Network Manager Script Dispatcher Service.
Dec  6 00:46:15 np0005548731 NetworkManager[857]: <info>  [1764999975.6168] exiting (success)
Dec  6 00:46:15 np0005548731 systemd[1]: NetworkManager.service: Deactivated successfully.
Dec  6 00:46:15 np0005548731 systemd[1]: Stopped Network Manager.
Dec  6 00:46:15 np0005548731 systemd[1]: NetworkManager.service: Consumed 1.681s CPU time, 10.1M memory peak.
Dec  6 00:46:15 np0005548731 systemd[1]: Starting Network Manager...
Dec  6 00:46:15 np0005548731 NetworkManager[7222]: <info>  [1764999975.6891] NetworkManager (version 1.54.1-1.el9) is starting... (after a restart, boot:e68e3ea0-2c97-4c65-be4f-7c894f030f31)
Dec  6 00:46:15 np0005548731 NetworkManager[7222]: <info>  [1764999975.6892] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Dec  6 00:46:15 np0005548731 NetworkManager[7222]: <info>  [1764999975.6949] manager[0x55d89f3ec070]: monitoring kernel firmware directory '/lib/firmware'.
Dec  6 00:46:15 np0005548731 systemd[1]: Starting Hostname Service...
Dec  6 00:46:15 np0005548731 systemd[1]: Started Hostname Service.
Dec  6 00:46:15 np0005548731 NetworkManager[7222]: <info>  [1764999975.7725] hostname: hostname: using hostnamed
Dec  6 00:46:15 np0005548731 NetworkManager[7222]: <info>  [1764999975.7726] hostname: static hostname changed from (none) to "np0005548731.novalocal"
Dec  6 00:46:15 np0005548731 NetworkManager[7222]: <info>  [1764999975.7731] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Dec  6 00:46:15 np0005548731 NetworkManager[7222]: <info>  [1764999975.7737] manager[0x55d89f3ec070]: rfkill: Wi-Fi hardware radio set enabled
Dec  6 00:46:15 np0005548731 NetworkManager[7222]: <info>  [1764999975.7737] manager[0x55d89f3ec070]: rfkill: WWAN hardware radio set enabled
Dec  6 00:46:15 np0005548731 NetworkManager[7222]: <info>  [1764999975.7768] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-team.so)
Dec  6 00:46:15 np0005548731 NetworkManager[7222]: <info>  [1764999975.7769] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Dec  6 00:46:15 np0005548731 NetworkManager[7222]: <info>  [1764999975.7770] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Dec  6 00:46:15 np0005548731 NetworkManager[7222]: <info>  [1764999975.7770] manager: Networking is enabled by state file
Dec  6 00:46:15 np0005548731 NetworkManager[7222]: <info>  [1764999975.7772] settings: Loaded settings plugin: keyfile (internal)
Dec  6 00:46:15 np0005548731 NetworkManager[7222]: <info>  [1764999975.7777] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Dec  6 00:46:15 np0005548731 NetworkManager[7222]: <info>  [1764999975.7799] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Dec  6 00:46:15 np0005548731 NetworkManager[7222]: <info>  [1764999975.7807] dhcp: init: Using DHCP client 'internal'
Dec  6 00:46:15 np0005548731 NetworkManager[7222]: <info>  [1764999975.7809] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Dec  6 00:46:15 np0005548731 NetworkManager[7222]: <info>  [1764999975.7814] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec  6 00:46:15 np0005548731 NetworkManager[7222]: <info>  [1764999975.7818] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Dec  6 00:46:15 np0005548731 NetworkManager[7222]: <info>  [1764999975.7824] device (lo): Activation: starting connection 'lo' (f9121f72-d016-4e94-a48a-a5240750641b)
Dec  6 00:46:15 np0005548731 NetworkManager[7222]: <info>  [1764999975.7828] device (eth0): carrier: link connected
Dec  6 00:46:15 np0005548731 NetworkManager[7222]: <info>  [1764999975.7832] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Dec  6 00:46:15 np0005548731 NetworkManager[7222]: <info>  [1764999975.7836] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Dec  6 00:46:15 np0005548731 NetworkManager[7222]: <info>  [1764999975.7837] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Dec  6 00:46:15 np0005548731 NetworkManager[7222]: <info>  [1764999975.7841] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Dec  6 00:46:15 np0005548731 NetworkManager[7222]: <info>  [1764999975.7846] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Dec  6 00:46:15 np0005548731 NetworkManager[7222]: <info>  [1764999975.7850] device (eth1): carrier: link connected
Dec  6 00:46:15 np0005548731 NetworkManager[7222]: <info>  [1764999975.7853] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Dec  6 00:46:15 np0005548731 NetworkManager[7222]: <info>  [1764999975.7856] manager: (eth1): assume: will attempt to assume matching connection 'Wired connection 1' (e9ebe57f-2b3a-3c6f-a1b5-85b8e76a73fa) (indicated)
Dec  6 00:46:15 np0005548731 NetworkManager[7222]: <info>  [1764999975.7857] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Dec  6 00:46:15 np0005548731 NetworkManager[7222]: <info>  [1764999975.7860] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Dec  6 00:46:15 np0005548731 NetworkManager[7222]: <info>  [1764999975.7866] device (eth1): Activation: starting connection 'Wired connection 1' (e9ebe57f-2b3a-3c6f-a1b5-85b8e76a73fa)
Dec  6 00:46:15 np0005548731 systemd[1]: Started Network Manager.
Dec  6 00:46:15 np0005548731 NetworkManager[7222]: <info>  [1764999975.7871] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Dec  6 00:46:15 np0005548731 NetworkManager[7222]: <info>  [1764999975.7874] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Dec  6 00:46:15 np0005548731 NetworkManager[7222]: <info>  [1764999975.7877] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Dec  6 00:46:15 np0005548731 NetworkManager[7222]: <info>  [1764999975.7878] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Dec  6 00:46:15 np0005548731 NetworkManager[7222]: <info>  [1764999975.7880] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Dec  6 00:46:15 np0005548731 NetworkManager[7222]: <info>  [1764999975.7882] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Dec  6 00:46:15 np0005548731 NetworkManager[7222]: <info>  [1764999975.7884] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Dec  6 00:46:15 np0005548731 NetworkManager[7222]: <info>  [1764999975.7886] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Dec  6 00:46:15 np0005548731 NetworkManager[7222]: <info>  [1764999975.7889] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Dec  6 00:46:15 np0005548731 NetworkManager[7222]: <info>  [1764999975.7893] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Dec  6 00:46:15 np0005548731 NetworkManager[7222]: <info>  [1764999975.7895] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Dec  6 00:46:15 np0005548731 NetworkManager[7222]: <info>  [1764999975.7903] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Dec  6 00:46:15 np0005548731 NetworkManager[7222]: <info>  [1764999975.7904] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Dec  6 00:46:15 np0005548731 NetworkManager[7222]: <info>  [1764999975.7916] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Dec  6 00:46:15 np0005548731 NetworkManager[7222]: <info>  [1764999975.7921] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Dec  6 00:46:15 np0005548731 NetworkManager[7222]: <info>  [1764999975.7924] device (lo): Activation: successful, device activated.
Dec  6 00:46:15 np0005548731 NetworkManager[7222]: <info>  [1764999975.7940] dhcp4 (eth0): state changed new lease, address=38.102.83.195
Dec  6 00:46:15 np0005548731 NetworkManager[7222]: <info>  [1764999975.7944] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Dec  6 00:46:15 np0005548731 systemd[1]: Starting Network Manager Wait Online...
Dec  6 00:46:15 np0005548731 NetworkManager[7222]: <info>  [1764999975.8298] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Dec  6 00:46:15 np0005548731 NetworkManager[7222]: <info>  [1764999975.8315] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Dec  6 00:46:15 np0005548731 NetworkManager[7222]: <info>  [1764999975.8341] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Dec  6 00:46:15 np0005548731 NetworkManager[7222]: <info>  [1764999975.8347] manager: NetworkManager state is now CONNECTED_SITE
Dec  6 00:46:15 np0005548731 NetworkManager[7222]: <info>  [1764999975.8354] device (eth0): Activation: successful, device activated.
Dec  6 00:46:15 np0005548731 NetworkManager[7222]: <info>  [1764999975.8359] manager: NetworkManager state is now CONNECTED_GLOBAL
Dec  6 00:46:16 np0005548731 python3[7289]: ansible-ansible.legacy.command Invoked with _raw_params=ip route zuul_log_id=fa163ec2-ffbe-0d69-987a-0000000000d3-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  6 00:46:25 np0005548731 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Dec  6 00:46:45 np0005548731 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Dec  6 00:47:01 np0005548731 NetworkManager[7222]: <info>  [1765000021.3369] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Dec  6 00:47:01 np0005548731 systemd[1]: Starting Network Manager Script Dispatcher Service...
Dec  6 00:47:01 np0005548731 systemd[1]: Started Network Manager Script Dispatcher Service.
Dec  6 00:47:01 np0005548731 NetworkManager[7222]: <info>  [1765000021.3664] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Dec  6 00:47:01 np0005548731 NetworkManager[7222]: <info>  [1765000021.3666] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Dec  6 00:47:01 np0005548731 NetworkManager[7222]: <info>  [1765000021.3674] device (eth1): Activation: successful, device activated.
Dec  6 00:47:01 np0005548731 NetworkManager[7222]: <info>  [1765000021.3679] manager: startup complete
Dec  6 00:47:01 np0005548731 NetworkManager[7222]: <info>  [1765000021.3681] device (eth1): state change: activated -> failed (reason 'ip-config-unavailable', managed-type: 'full')
Dec  6 00:47:01 np0005548731 NetworkManager[7222]: <warn>  [1765000021.3685] device (eth1): Activation: failed for connection 'Wired connection 1'
Dec  6 00:47:01 np0005548731 NetworkManager[7222]: <info>  [1765000021.3690] device (eth1): state change: failed -> disconnected (reason 'none', managed-type: 'full')
Dec  6 00:47:01 np0005548731 systemd[1]: Finished Network Manager Wait Online.
Dec  6 00:47:01 np0005548731 NetworkManager[7222]: <info>  [1765000021.3796] dhcp4 (eth1): canceled DHCP transaction
Dec  6 00:47:01 np0005548731 NetworkManager[7222]: <info>  [1765000021.3797] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Dec  6 00:47:01 np0005548731 NetworkManager[7222]: <info>  [1765000021.3797] dhcp4 (eth1): state changed no lease
Dec  6 00:47:01 np0005548731 NetworkManager[7222]: <info>  [1765000021.3810] policy: auto-activating connection 'ci-private-network' (18745adb-ffcc-5b8a-b84a-1cf22aac92d8)
Dec  6 00:47:01 np0005548731 NetworkManager[7222]: <info>  [1765000021.3814] device (eth1): Activation: starting connection 'ci-private-network' (18745adb-ffcc-5b8a-b84a-1cf22aac92d8)
Dec  6 00:47:01 np0005548731 NetworkManager[7222]: <info>  [1765000021.3814] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec  6 00:47:01 np0005548731 NetworkManager[7222]: <info>  [1765000021.3817] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Dec  6 00:47:01 np0005548731 NetworkManager[7222]: <info>  [1765000021.3823] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec  6 00:47:01 np0005548731 NetworkManager[7222]: <info>  [1765000021.3830] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec  6 00:47:01 np0005548731 NetworkManager[7222]: <info>  [1765000021.3868] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec  6 00:47:01 np0005548731 NetworkManager[7222]: <info>  [1765000021.3869] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec  6 00:47:01 np0005548731 NetworkManager[7222]: <info>  [1765000021.3874] device (eth1): Activation: successful, device activated.
Dec  6 00:47:11 np0005548731 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Dec  6 00:47:16 np0005548731 systemd-logind[794]: Session 3 logged out. Waiting for processes to exit.
Dec  6 00:47:16 np0005548731 systemd[1]: session-3.scope: Deactivated successfully.
Dec  6 00:47:16 np0005548731 systemd[1]: session-3.scope: Consumed 1.476s CPU time.
Dec  6 00:47:16 np0005548731 systemd-logind[794]: Removed session 3.
Dec  6 00:47:32 np0005548731 systemd-logind[794]: New session 4 of user zuul.
Dec  6 00:47:32 np0005548731 systemd[1]: Started Session 4 of User zuul.
Dec  6 00:47:32 np0005548731 python3[7410]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/env/networking-info.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  6 00:47:33 np0005548731 python3[7483]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/env/networking-info.yml owner=root group=root mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765000052.243086-373-214291901212458/source _original_basename=tmpmdkniil7 follow=False checksum=9aebb8635d8af9bb3da7e243651834426f96779b backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 00:47:35 np0005548731 systemd[1]: session-4.scope: Deactivated successfully.
Dec  6 00:47:35 np0005548731 systemd-logind[794]: Session 4 logged out. Waiting for processes to exit.
Dec  6 00:47:35 np0005548731 systemd-logind[794]: Removed session 4.
Dec  6 00:48:17 np0005548731 systemd[4302]: Created slice User Background Tasks Slice.
Dec  6 00:48:17 np0005548731 systemd[4302]: Starting Cleanup of User's Temporary Files and Directories...
Dec  6 00:48:17 np0005548731 systemd[4302]: Finished Cleanup of User's Temporary Files and Directories.
Dec  6 00:52:56 np0005548731 systemd-logind[794]: New session 5 of user zuul.
Dec  6 00:52:56 np0005548731 systemd[1]: Started Session 5 of User zuul.
Dec  6 00:52:56 np0005548731 python3[7603]: ansible-ansible.legacy.command Invoked with _raw_params=lsblk -nd -o MAJ:MIN /dev/vda#012 _uses_shell=True zuul_log_id=fa163ec2-ffbe-7dd8-fec6-000000000ca2-1-compute2 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  6 00:52:56 np0005548731 python3[7631]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/init.scope state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 00:52:57 np0005548731 python3[7658]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/machine.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 00:52:57 np0005548731 python3[7684]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/system.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 00:52:57 np0005548731 python3[7710]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/user.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 00:52:58 np0005548731 python3[7736]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system.conf.d state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 00:52:58 np0005548731 python3[7814]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system.conf.d/override.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  6 00:52:59 np0005548731 python3[7887]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system.conf.d/override.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765000378.4714057-366-142553518753187/source _original_basename=tmp1hp0qzld follow=False checksum=a05098bd3d2321238ea1169d0e6f135b35b392d4 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 00:53:01 np0005548731 python3[7937]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec  6 00:53:01 np0005548731 systemd[1]: Reloading.
Dec  6 00:53:01 np0005548731 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  6 00:53:03 np0005548731 python3[7993]: ansible-ansible.builtin.wait_for Invoked with path=/sys/fs/cgroup/system.slice/io.max state=present timeout=30 host=127.0.0.1 connect_timeout=5 delay=0 active_connection_states=['ESTABLISHED', 'FIN_WAIT1', 'FIN_WAIT2', 'SYN_RECV', 'SYN_SENT', 'TIME_WAIT'] sleep=1 port=None search_regex=None exclude_hosts=None msg=None
Dec  6 00:53:05 np0005548731 python3[8019]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/init.scope/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  6 00:53:05 np0005548731 python3[8047]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/machine.slice/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  6 00:53:05 np0005548731 python3[8075]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/system.slice/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  6 00:53:05 np0005548731 python3[8103]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/user.slice/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  6 00:53:07 np0005548731 python3[8131]: ansible-ansible.legacy.command Invoked with _raw_params=echo "init";    cat /sys/fs/cgroup/init.scope/io.max; echo "machine"; cat /sys/fs/cgroup/machine.slice/io.max; echo "system";  cat /sys/fs/cgroup/system.slice/io.max; echo "user";    cat /sys/fs/cgroup/user.slice/io.max;#012 _uses_shell=True zuul_log_id=fa163ec2-ffbe-7dd8-fec6-000000000ca9-1-compute2 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  6 00:53:08 np0005548731 python3[8162]: ansible-ansible.builtin.stat Invoked with path=/sys/fs/cgroup/kubepods.slice/io.max follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec  6 00:53:11 np0005548731 systemd[1]: session-5.scope: Deactivated successfully.
Dec  6 00:53:11 np0005548731 systemd[1]: session-5.scope: Consumed 4.298s CPU time.
Dec  6 00:53:11 np0005548731 systemd-logind[794]: Session 5 logged out. Waiting for processes to exit.
Dec  6 00:53:11 np0005548731 systemd-logind[794]: Removed session 5.
Dec  6 00:53:13 np0005548731 systemd-logind[794]: New session 6 of user zuul.
Dec  6 00:53:13 np0005548731 systemd[1]: Started Session 6 of User zuul.
Dec  6 00:53:14 np0005548731 python3[8197]: ansible-ansible.legacy.dnf Invoked with name=['podman', 'buildah'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Dec  6 00:53:36 np0005548731 kernel: SELinux:  Converting 385 SID table entries...
Dec  6 00:53:36 np0005548731 kernel: SELinux:  policy capability network_peer_controls=1
Dec  6 00:53:36 np0005548731 kernel: SELinux:  policy capability open_perms=1
Dec  6 00:53:36 np0005548731 kernel: SELinux:  policy capability extended_socket_class=1
Dec  6 00:53:36 np0005548731 kernel: SELinux:  policy capability always_check_network=0
Dec  6 00:53:36 np0005548731 kernel: SELinux:  policy capability cgroup_seclabel=1
Dec  6 00:53:36 np0005548731 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec  6 00:53:36 np0005548731 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec  6 00:53:47 np0005548731 kernel: SELinux:  Converting 385 SID table entries...
Dec  6 00:53:47 np0005548731 kernel: SELinux:  policy capability network_peer_controls=1
Dec  6 00:53:47 np0005548731 kernel: SELinux:  policy capability open_perms=1
Dec  6 00:53:47 np0005548731 kernel: SELinux:  policy capability extended_socket_class=1
Dec  6 00:53:47 np0005548731 kernel: SELinux:  policy capability always_check_network=0
Dec  6 00:53:47 np0005548731 kernel: SELinux:  policy capability cgroup_seclabel=1
Dec  6 00:53:47 np0005548731 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec  6 00:53:47 np0005548731 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec  6 00:54:00 np0005548731 kernel: SELinux:  Converting 385 SID table entries...
Dec  6 00:54:00 np0005548731 kernel: SELinux:  policy capability network_peer_controls=1
Dec  6 00:54:00 np0005548731 kernel: SELinux:  policy capability open_perms=1
Dec  6 00:54:00 np0005548731 kernel: SELinux:  policy capability extended_socket_class=1
Dec  6 00:54:00 np0005548731 kernel: SELinux:  policy capability always_check_network=0
Dec  6 00:54:00 np0005548731 kernel: SELinux:  policy capability cgroup_seclabel=1
Dec  6 00:54:00 np0005548731 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec  6 00:54:00 np0005548731 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec  6 00:54:05 np0005548731 setsebool[8273]: The virt_use_nfs policy boolean was changed to 1 by root
Dec  6 00:54:05 np0005548731 setsebool[8273]: The virt_sandbox_use_all_caps policy boolean was changed to 1 by root
Dec  6 00:54:18 np0005548731 kernel: SELinux:  Converting 388 SID table entries...
Dec  6 00:54:18 np0005548731 kernel: SELinux:  policy capability network_peer_controls=1
Dec  6 00:54:18 np0005548731 kernel: SELinux:  policy capability open_perms=1
Dec  6 00:54:18 np0005548731 kernel: SELinux:  policy capability extended_socket_class=1
Dec  6 00:54:18 np0005548731 kernel: SELinux:  policy capability always_check_network=0
Dec  6 00:54:18 np0005548731 kernel: SELinux:  policy capability cgroup_seclabel=1
Dec  6 00:54:18 np0005548731 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec  6 00:54:18 np0005548731 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec  6 00:54:40 np0005548731 dbus-broker-launch[775]: avc:  op=load_policy lsm=selinux seqno=6 res=1
Dec  6 00:54:40 np0005548731 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec  6 00:54:40 np0005548731 systemd[1]: Starting man-db-cache-update.service...
Dec  6 00:54:40 np0005548731 systemd[1]: Reloading.
Dec  6 00:54:40 np0005548731 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  6 00:54:40 np0005548731 systemd[1]: Queuing reload/restart jobs for marked units…
Dec  6 00:55:07 np0005548731 python3[21816]: ansible-ansible.legacy.command Invoked with _raw_params=echo "openstack-k8s-operators+cirobot"#012 _uses_shell=True zuul_log_id=fa163ec2-ffbe-70f1-899a-00000000000c-1-compute2 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  6 00:55:08 np0005548731 kernel: evm: overlay not supported
Dec  6 00:55:08 np0005548731 systemd[4302]: Starting D-Bus User Message Bus...
Dec  6 00:55:08 np0005548731 dbus-broker-launch[22363]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +31: Eavesdropping is deprecated and ignored
Dec  6 00:55:08 np0005548731 dbus-broker-launch[22363]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +33: Eavesdropping is deprecated and ignored
Dec  6 00:55:08 np0005548731 systemd[4302]: Started D-Bus User Message Bus.
Dec  6 00:55:08 np0005548731 dbus-broker-lau[22363]: Ready
Dec  6 00:55:08 np0005548731 systemd[4302]: selinux: avc:  op=load_policy lsm=selinux seqno=6 res=1
Dec  6 00:55:08 np0005548731 systemd[4302]: Created slice Slice /user.
Dec  6 00:55:08 np0005548731 systemd[4302]: podman-22278.scope: unit configures an IP firewall, but not running as root.
Dec  6 00:55:08 np0005548731 systemd[4302]: (This warning is only shown for the first unit using IP firewalling.)
Dec  6 00:55:08 np0005548731 systemd[4302]: Started podman-22278.scope.
Dec  6 00:55:08 np0005548731 systemd[4302]: Started podman-pause-c85ef912.scope.
Dec  6 00:55:18 np0005548731 python3[25751]: ansible-ansible.builtin.blockinfile Invoked with state=present insertafter=EOF dest=/etc/containers/registries.conf content=[[registry]]#012location = "38.102.83.182:5001"#012insecure = true path=/etc/containers/registries.conf block=[[registry]]#012location = "38.102.83.182:5001"#012insecure = true marker=# {mark} ANSIBLE MANAGED BLOCK create=False backup=False marker_begin=BEGIN marker_end=END unsafe_writes=False insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 00:55:18 np0005548731 python3[25751]: ansible-ansible.builtin.blockinfile [WARNING] Module remote_tmp /root/.ansible/tmp did not exist and was created with a mode of 0700, this may cause issues when running as another user. To avoid this, create the remote_tmp dir with the correct permissions manually
Dec  6 00:55:18 np0005548731 systemd[1]: session-6.scope: Deactivated successfully.
Dec  6 00:55:18 np0005548731 systemd[1]: session-6.scope: Consumed 1min 6.784s CPU time.
Dec  6 00:55:18 np0005548731 systemd-logind[794]: Session 6 logged out. Waiting for processes to exit.
Dec  6 00:55:18 np0005548731 systemd-logind[794]: Removed session 6.
Dec  6 00:55:30 np0005548731 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec  6 00:55:30 np0005548731 systemd[1]: Finished man-db-cache-update.service.
Dec  6 00:55:30 np0005548731 systemd[1]: man-db-cache-update.service: Consumed 58.311s CPU time.
Dec  6 00:55:30 np0005548731 systemd[1]: run-r01b4391ef0d84ce886fee1a28463c336.service: Deactivated successfully.
Dec  6 00:55:42 np0005548731 systemd-logind[794]: New session 7 of user zuul.
Dec  6 00:55:42 np0005548731 systemd[1]: Started Session 7 of User zuul.
Dec  6 00:55:43 np0005548731 python3[29737]: ansible-ansible.posix.authorized_key Invoked with user=zuul key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBO334parmk7cxOcI6a0PFnZ27TSOTPbpmTBoe8YEeQQ0LR7/5AS2zaJjFwcqLsniM8KyvvJoyYEK8iW2BwzxAhE= zuul@np0005548728.novalocal#012 manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  6 00:55:43 np0005548731 python3[29763]: ansible-ansible.posix.authorized_key Invoked with user=root key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBO334parmk7cxOcI6a0PFnZ27TSOTPbpmTBoe8YEeQQ0LR7/5AS2zaJjFwcqLsniM8KyvvJoyYEK8iW2BwzxAhE= zuul@np0005548728.novalocal#012 manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  6 00:55:44 np0005548731 python3[29789]: ansible-ansible.builtin.user Invoked with name=cloud-admin shell=/bin/bash state=present non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on np0005548731.novalocal update_password=always uid=None group=None groups=None comment=None home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None
Dec  6 00:55:45 np0005548731 python3[29823]: ansible-ansible.posix.authorized_key Invoked with user=cloud-admin key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBO334parmk7cxOcI6a0PFnZ27TSOTPbpmTBoe8YEeQQ0LR7/5AS2zaJjFwcqLsniM8KyvvJoyYEK8iW2BwzxAhE= zuul@np0005548728.novalocal#012 manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  6 00:55:46 np0005548731 python3[29901]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/cloud-admin follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  6 00:55:46 np0005548731 python3[29974]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/cloud-admin mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1765000546.0294812-169-138489548353496/source _original_basename=tmpdtb4zosq follow=False checksum=e7614e5ad3ab06eaae55b8efaa2ed81b63ea5634 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 00:55:47 np0005548731 python3[30024]: ansible-ansible.builtin.hostname Invoked with name=compute-2 use=systemd
Dec  6 00:55:47 np0005548731 systemd[1]: Starting Hostname Service...
Dec  6 00:55:47 np0005548731 systemd[1]: Started Hostname Service.
Dec  6 00:55:47 np0005548731 systemd-hostnamed[30028]: Changed pretty hostname to 'compute-2'
Dec  6 00:55:47 np0005548731 systemd-hostnamed[30028]: Hostname set to <compute-2> (static)
Dec  6 00:55:47 np0005548731 NetworkManager[7222]: <info>  [1765000547.8207] hostname: static hostname changed from "np0005548731.novalocal" to "compute-2"
Dec  6 00:55:47 np0005548731 systemd[1]: Starting Network Manager Script Dispatcher Service...
Dec  6 00:55:47 np0005548731 systemd[1]: Started Network Manager Script Dispatcher Service.
Dec  6 00:55:48 np0005548731 systemd[1]: session-7.scope: Deactivated successfully.
Dec  6 00:55:48 np0005548731 systemd[1]: session-7.scope: Consumed 2.348s CPU time.
Dec  6 00:55:48 np0005548731 systemd-logind[794]: Session 7 logged out. Waiting for processes to exit.
Dec  6 00:55:48 np0005548731 systemd-logind[794]: Removed session 7.
Dec  6 00:55:57 np0005548731 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Dec  6 00:56:17 np0005548731 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Dec  6 00:57:17 np0005548731 systemd[1]: Starting Cleanup of Temporary Directories...
Dec  6 00:57:17 np0005548731 systemd[1]: systemd-tmpfiles-clean.service: Deactivated successfully.
Dec  6 00:57:17 np0005548731 systemd[1]: Finished Cleanup of Temporary Directories.
Dec  6 00:57:17 np0005548731 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dclean.service.mount: Deactivated successfully.
Dec  6 01:00:07 np0005548731 systemd-logind[794]: New session 8 of user zuul.
Dec  6 01:00:07 np0005548731 systemd[1]: Started Session 8 of User zuul.
Dec  6 01:00:08 np0005548731 python3[30153]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  6 01:00:10 np0005548731 python3[30269]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  6 01:00:11 np0005548731 python3[30342]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1765000810.3522828-34061-67993233626020/source mode=0755 _original_basename=delorean.repo follow=False checksum=39c885eb875fd03e010d1b0454241c26b121dfb2 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 01:00:11 np0005548731 python3[30368]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean-antelope-testing.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  6 01:00:11 np0005548731 python3[30441]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1765000810.3522828-34061-67993233626020/source mode=0755 _original_basename=delorean-antelope-testing.repo follow=False checksum=4ebc56dead962b5d40b8d420dad43b948b84d3fc backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 01:00:11 np0005548731 python3[30467]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-highavailability.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  6 01:00:12 np0005548731 python3[30540]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1765000810.3522828-34061-67993233626020/source mode=0755 _original_basename=repo-setup-centos-highavailability.repo follow=False checksum=55d0f695fd0d8f47cbc3044ce0dcf5f88862490f backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 01:00:12 np0005548731 python3[30566]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-powertools.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  6 01:00:12 np0005548731 python3[30639]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1765000810.3522828-34061-67993233626020/source mode=0755 _original_basename=repo-setup-centos-powertools.repo follow=False checksum=4b0cf99aa89c5c5be0151545863a7a7568f67568 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 01:00:13 np0005548731 python3[30665]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-appstream.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  6 01:00:13 np0005548731 python3[30738]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1765000810.3522828-34061-67993233626020/source mode=0755 _original_basename=repo-setup-centos-appstream.repo follow=False checksum=e89244d2503b2996429dda1857290c1e91e393a1 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 01:00:13 np0005548731 python3[30764]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-baseos.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  6 01:00:14 np0005548731 python3[30837]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1765000810.3522828-34061-67993233626020/source mode=0755 _original_basename=repo-setup-centos-baseos.repo follow=False checksum=36d926db23a40dbfa5c84b5e4d43eac6fa2301d6 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 01:00:14 np0005548731 python3[30863]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo.md5 follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  6 01:00:14 np0005548731 python3[30936]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1765000810.3522828-34061-67993233626020/source mode=0755 _original_basename=delorean.repo.md5 follow=False checksum=6e18e2038d54303b4926db53c0b6cced515a9151 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 01:00:27 np0005548731 python3[30984]: ansible-ansible.legacy.command Invoked with _raw_params=hostname _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  6 01:05:27 np0005548731 systemd-logind[794]: Session 8 logged out. Waiting for processes to exit.
Dec  6 01:05:27 np0005548731 systemd[1]: session-8.scope: Deactivated successfully.
Dec  6 01:05:27 np0005548731 systemd[1]: session-8.scope: Consumed 4.928s CPU time.
Dec  6 01:05:27 np0005548731 systemd-logind[794]: Removed session 8.
Dec  6 01:09:17 np0005548731 systemd[1]: Starting dnf makecache...
Dec  6 01:09:18 np0005548731 dnf[31009]: Failed determining last makecache time.
Dec  6 01:09:18 np0005548731 dnf[31009]: delorean-openstack-barbican-42b4c41831408a8e323 350 kB/s |  13 kB     00:00
Dec  6 01:09:18 np0005548731 dnf[31009]: delorean-python-glean-10df0bd91b9bc5c9fd9cc02d7 3.0 MB/s |  65 kB     00:00
Dec  6 01:09:18 np0005548731 dnf[31009]: delorean-openstack-cinder-1c00d6490d88e436f26ef 1.5 MB/s |  32 kB     00:00
Dec  6 01:09:18 np0005548731 dnf[31009]: delorean-python-stevedore-c4acc5639fd2329372142 5.1 MB/s | 131 kB     00:00
Dec  6 01:09:18 np0005548731 dnf[31009]: delorean-python-cloudkitty-tests-tempest-2c80f8 1.4 MB/s |  32 kB     00:00
Dec  6 01:09:18 np0005548731 dnf[31009]: delorean-os-net-config-d0cedbdb788d43e5c7551df5  11 MB/s | 349 kB     00:00
Dec  6 01:09:18 np0005548731 dnf[31009]: delorean-openstack-nova-6f8decf0b4f1aa2e96292b6 2.1 MB/s |  42 kB     00:00
Dec  6 01:09:18 np0005548731 dnf[31009]: delorean-python-designate-tests-tempest-347fdbc 934 kB/s |  18 kB     00:00
Dec  6 01:09:18 np0005548731 dnf[31009]: delorean-openstack-glance-1fd12c29b339f30fe823e 774 kB/s |  18 kB     00:00
Dec  6 01:09:18 np0005548731 dnf[31009]: delorean-openstack-keystone-e4b40af0ae3698fbbbb 1.5 MB/s |  29 kB     00:00
Dec  6 01:09:18 np0005548731 dnf[31009]: delorean-openstack-manila-3c01b7181572c95dac462 1.3 MB/s |  25 kB     00:00
Dec  6 01:09:18 np0005548731 dnf[31009]: delorean-python-whitebox-neutron-tests-tempest- 6.4 MB/s | 154 kB     00:00
Dec  6 01:09:18 np0005548731 dnf[31009]: delorean-openstack-octavia-ba397f07a7331190208c 1.1 MB/s |  26 kB     00:00
Dec  6 01:09:18 np0005548731 dnf[31009]: delorean-openstack-watcher-c014f81a8647287f6dcc 845 kB/s |  16 kB     00:00
Dec  6 01:09:18 np0005548731 dnf[31009]: delorean-ansible-config_template-5ccaa22121a7ff 273 kB/s | 7.4 kB     00:00
Dec  6 01:09:18 np0005548731 dnf[31009]: delorean-puppet-ceph-7352068d7b8c84ded636ab3158 5.7 MB/s | 144 kB     00:00
Dec  6 01:09:18 np0005548731 dnf[31009]: delorean-openstack-swift-dc98a8463506ac520c469a 720 kB/s |  14 kB     00:00
Dec  6 01:09:18 np0005548731 dnf[31009]: delorean-python-tempestconf-8515371b7cceebd4282 2.3 MB/s |  53 kB     00:00
Dec  6 01:09:18 np0005548731 dnf[31009]: delorean-openstack-heat-ui-013accbfd179753bc3f0 3.9 MB/s |  96 kB     00:00
Dec  6 01:09:19 np0005548731 dnf[31009]: CentOS Stream 9 - BaseOS                         77 kB/s | 7.3 kB     00:00
Dec  6 01:09:19 np0005548731 dnf[31009]: CentOS Stream 9 - AppStream                      65 kB/s | 7.4 kB     00:00
Dec  6 01:09:19 np0005548731 dnf[31009]: CentOS Stream 9 - CRB                            66 kB/s | 7.2 kB     00:00
Dec  6 01:09:19 np0005548731 dnf[31009]: CentOS Stream 9 - Extras packages                67 kB/s | 8.3 kB     00:00
Dec  6 01:09:19 np0005548731 dnf[31009]: dlrn-antelope-testing                            28 MB/s | 1.1 MB     00:00
Dec  6 01:09:20 np0005548731 dnf[31009]: dlrn-antelope-build-deps                         15 MB/s | 461 kB     00:00
Dec  6 01:09:20 np0005548731 dnf[31009]: centos9-rabbitmq                                439 kB/s | 123 kB     00:00
Dec  6 01:09:20 np0005548731 dnf[31009]: centos9-storage                                 1.5 MB/s | 415 kB     00:00
Dec  6 01:09:21 np0005548731 dnf[31009]: centos9-opstools                                189 kB/s |  51 kB     00:00
Dec  6 01:09:21 np0005548731 dnf[31009]: NFV SIG OpenvSwitch                             1.6 MB/s | 456 kB     00:00
Dec  6 01:09:22 np0005548731 dnf[31009]: repo-setup-centos-appstream                      92 MB/s |  25 MB     00:00
Dec  6 01:09:28 np0005548731 dnf[31009]: repo-setup-centos-baseos                         74 MB/s | 8.8 MB     00:00
Dec  6 01:09:30 np0005548731 dnf[31009]: repo-setup-centos-highavailability              2.5 MB/s | 744 kB     00:00
Dec  6 01:09:30 np0005548731 dnf[31009]: repo-setup-centos-powertools                     75 MB/s | 7.3 MB     00:00
Dec  6 01:09:32 np0005548731 dnf[31009]: Extra Packages for Enterprise Linux 9 - x86_64   44 MB/s |  20 MB     00:00
Dec  6 01:09:45 np0005548731 dnf[31009]: Metadata cache created.
Dec  6 01:09:45 np0005548731 systemd[1]: dnf-makecache.service: Deactivated successfully.
Dec  6 01:09:45 np0005548731 systemd[1]: Finished dnf makecache.
Dec  6 01:09:45 np0005548731 systemd[1]: dnf-makecache.service: Consumed 24.926s CPU time.
Dec  6 01:14:41 np0005548731 systemd-logind[794]: New session 9 of user zuul.
Dec  6 01:14:41 np0005548731 systemd[1]: Started Session 9 of User zuul.
Dec  6 01:14:42 np0005548731 python3.9[31283]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  6 01:14:43 np0005548731 python3.9[31464]: ansible-ansible.legacy.command Invoked with _raw_params=set -euxo pipefail#012pushd /var/tmp#012curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz#012pushd repo-setup-main#012python3 -m venv ./venv#012PBR_VERSION=0.0.0 ./venv/bin/pip install ./#012./venv/bin/repo-setup current-podified -b antelope#012popd#012rm -rf repo-setup-main#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  6 01:15:17 np0005548731 systemd[1]: session-9.scope: Deactivated successfully.
Dec  6 01:15:17 np0005548731 systemd[1]: session-9.scope: Consumed 8.518s CPU time.
Dec  6 01:15:17 np0005548731 systemd-logind[794]: Session 9 logged out. Waiting for processes to exit.
Dec  6 01:15:17 np0005548731 systemd-logind[794]: Removed session 9.
Dec  6 01:15:33 np0005548731 systemd-logind[794]: New session 10 of user zuul.
Dec  6 01:15:33 np0005548731 systemd[1]: Started Session 10 of User zuul.
Dec  6 01:15:33 np0005548731 python3.9[31674]: ansible-ansible.legacy.ping Invoked with data=pong
Dec  6 01:15:35 np0005548731 python3.9[31848]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  6 01:15:36 np0005548731 python3.9[32000]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  6 01:15:37 np0005548731 python3.9[32153]: ansible-ansible.builtin.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  6 01:15:38 np0005548731 python3.9[32305]: ansible-ansible.builtin.file Invoked with mode=755 path=/etc/ansible/facts.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 01:15:38 np0005548731 python3.9[32457]: ansible-ansible.legacy.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  6 01:15:39 np0005548731 python3.9[32580]: ansible-ansible.legacy.copy Invoked with dest=/etc/ansible/facts.d/bootc.fact mode=755 src=/home/zuul/.ansible/tmp/ansible-tmp-1765001738.312505-184-259870902180072/.source.fact _original_basename=bootc.fact follow=False checksum=eb4122ce7fc50a38407beb511c4ff8c178005b12 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 01:15:40 np0005548731 python3.9[32732]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  6 01:15:41 np0005548731 python3.9[32888]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/log/journal setype=var_log_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  6 01:15:41 np0005548731 python3.9[33040]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/config-data/ansible-generated recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  6 01:15:43 np0005548731 python3.9[33190]: ansible-ansible.builtin.service_facts Invoked
Dec  6 01:15:48 np0005548731 python3.9[33443]: ansible-ansible.builtin.lineinfile Invoked with line=cloud-init=disabled path=/proc/cmdline state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 01:15:49 np0005548731 python3.9[33593]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  6 01:15:50 np0005548731 python3.9[33747]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  6 01:15:51 np0005548731 python3.9[33905]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec  6 01:15:52 np0005548731 python3.9[33989]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec  6 01:16:42 np0005548731 systemd[1]: Reloading.
Dec  6 01:16:42 np0005548731 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  6 01:16:42 np0005548731 systemd[1]: Listening on Device-mapper event daemon FIFOs.
Dec  6 01:16:45 np0005548731 systemd[1]: Reloading.
Dec  6 01:16:45 np0005548731 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  6 01:16:45 np0005548731 systemd[1]: Starting Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling...
Dec  6 01:16:45 np0005548731 systemd[1]: Finished Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling.
Dec  6 01:16:45 np0005548731 systemd[1]: Reloading.
Dec  6 01:16:45 np0005548731 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  6 01:16:46 np0005548731 systemd[1]: Listening on LVM2 poll daemon socket.
Dec  6 01:16:46 np0005548731 dbus-broker-launch[745]: Noticed file-system modification, trigger reload.
Dec  6 01:16:46 np0005548731 dbus-broker-launch[745]: Noticed file-system modification, trigger reload.
Dec  6 01:18:01 np0005548731 kernel: SELinux:  Converting 2719 SID table entries...
Dec  6 01:18:01 np0005548731 kernel: SELinux:  policy capability network_peer_controls=1
Dec  6 01:18:01 np0005548731 kernel: SELinux:  policy capability open_perms=1
Dec  6 01:18:01 np0005548731 kernel: SELinux:  policy capability extended_socket_class=1
Dec  6 01:18:01 np0005548731 kernel: SELinux:  policy capability always_check_network=0
Dec  6 01:18:01 np0005548731 kernel: SELinux:  policy capability cgroup_seclabel=1
Dec  6 01:18:01 np0005548731 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec  6 01:18:01 np0005548731 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec  6 01:18:01 np0005548731 dbus-broker-launch[775]: avc:  op=load_policy lsm=selinux seqno=8 res=1
Dec  6 01:18:02 np0005548731 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec  6 01:18:02 np0005548731 systemd[1]: Starting man-db-cache-update.service...
Dec  6 01:18:02 np0005548731 systemd[1]: Reloading.
Dec  6 01:18:02 np0005548731 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  6 01:18:02 np0005548731 systemd[1]: Queuing reload/restart jobs for marked units…
Dec  6 01:18:03 np0005548731 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec  6 01:18:03 np0005548731 systemd[1]: Finished man-db-cache-update.service.
Dec  6 01:18:03 np0005548731 systemd[1]: man-db-cache-update.service: Consumed 1.203s CPU time.
Dec  6 01:18:03 np0005548731 systemd[1]: run-r4b970394510b4a5f933b149960489a72.service: Deactivated successfully.
Dec  6 01:18:26 np0005548731 python3.9[35511]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  6 01:18:27 np0005548731 irqbalance[782]: Cannot change IRQ 26 affinity: Operation not permitted
Dec  6 01:18:27 np0005548731 irqbalance[782]: IRQ 26 affinity is now unmanaged
Dec  6 01:18:28 np0005548731 python3.9[35792]: ansible-ansible.posix.selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config update_kernel_param=False
Dec  6 01:18:29 np0005548731 python3.9[35944]: ansible-ansible.legacy.command Invoked with cmd=dd if=/dev/zero of=/swap count=1024 bs=1M creates=/swap _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None removes=None stdin=None
Dec  6 01:18:33 np0005548731 python3.9[36097]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/swap recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 01:18:34 np0005548731 python3.9[36249]: ansible-ansible.posix.mount Invoked with dump=0 fstype=swap name=none opts=sw passno=0 src=/swap state=present path=none boot=True opts_no_log=False backup=False fstab=None
Dec  6 01:18:41 np0005548731 python3.9[36401]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/ca-trust/source/anchors setype=cert_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  6 01:18:41 np0005548731 python3.9[36554]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  6 01:18:42 np0005548731 python3.9[36677]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765001921.3570778-674-214463419010757/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=4fb1377fac822006b36da2922ca9605bec411794 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 01:18:47 np0005548731 python3.9[36829]: ansible-ansible.builtin.stat Invoked with path=/etc/lvm/devices/system.devices follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  6 01:18:47 np0005548731 python3.9[36981]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/vgimportdevices --all _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  6 01:18:48 np0005548731 python3.9[37134]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/lvm/devices/system.devices state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 01:18:50 np0005548731 python3.9[37286]: ansible-ansible.builtin.getent Invoked with database=passwd key=qemu fail_key=True service=None split=None
Dec  6 01:18:50 np0005548731 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec  6 01:18:50 np0005548731 python3.9[37440]: ansible-ansible.builtin.group Invoked with gid=107 name=qemu state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Dec  6 01:18:51 np0005548731 python3.9[37599]: ansible-ansible.builtin.user Invoked with comment=qemu user group=qemu groups=[''] name=qemu shell=/sbin/nologin state=present uid=107 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-2 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Dec  6 01:18:53 np0005548731 python3.9[37759]: ansible-ansible.builtin.getent Invoked with database=passwd key=hugetlbfs fail_key=True service=None split=None
Dec  6 01:18:54 np0005548731 python3.9[37912]: ansible-ansible.builtin.group Invoked with gid=42477 name=hugetlbfs state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Dec  6 01:18:55 np0005548731 python3.9[38070]: ansible-ansible.builtin.file Invoked with group=qemu mode=0755 owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None serole=None selevel=None attributes=None
Dec  6 01:18:56 np0005548731 python3.9[38222]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec  6 01:19:02 np0005548731 python3.9[38375]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/modules-load.d setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  6 01:19:03 np0005548731 python3.9[38527]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  6 01:19:03 np0005548731 python3.9[38650]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765001942.602692-1031-173121398896122/.source.conf follow=False _original_basename=edpm-modprobe.conf.j2 checksum=8021efe01721d8fa8cab46b95c00ec1be6dbb9d0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec  6 01:19:04 np0005548731 python3.9[38802]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  6 01:19:04 np0005548731 systemd[1]: Starting Load Kernel Modules...
Dec  6 01:19:05 np0005548731 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Dec  6 01:19:05 np0005548731 kernel: Bridge firewalling registered
Dec  6 01:19:05 np0005548731 systemd-modules-load[38806]: Inserted module 'br_netfilter'
Dec  6 01:19:05 np0005548731 systemd[1]: Finished Load Kernel Modules.
Dec  6 01:19:05 np0005548731 python3.9[38962]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  6 01:19:06 np0005548731 python3.9[39085]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysctl.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765001945.3047755-1100-30014258316125/.source.conf follow=False _original_basename=edpm-sysctl.conf.j2 checksum=2a366439721b855adcfe4d7f152babb68596a007 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec  6 01:19:07 np0005548731 python3.9[39237]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec  6 01:19:13 np0005548731 dbus-broker-launch[745]: Noticed file-system modification, trigger reload.
Dec  6 01:19:13 np0005548731 dbus-broker-launch[745]: Noticed file-system modification, trigger reload.
Dec  6 01:19:14 np0005548731 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec  6 01:19:14 np0005548731 systemd[1]: Starting man-db-cache-update.service...
Dec  6 01:19:14 np0005548731 systemd[1]: Reloading.
Dec  6 01:19:14 np0005548731 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  6 01:19:14 np0005548731 systemd[1]: Queuing reload/restart jobs for marked units…
Dec  6 01:19:16 np0005548731 python3.9[41319]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  6 01:19:16 np0005548731 python3.9[42569]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile
Dec  6 01:19:17 np0005548731 python3.9[43200]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  6 01:19:18 np0005548731 python3.9[43440]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/tuned-adm profile throughput-performance _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  6 01:19:18 np0005548731 systemd[1]: Starting Dynamic System Tuning Daemon...
Dec  6 01:19:19 np0005548731 systemd[1]: Starting Authorization Manager...
Dec  6 01:19:19 np0005548731 systemd[1]: Started Dynamic System Tuning Daemon.
Dec  6 01:19:19 np0005548731 polkitd[43657]: Started polkitd version 0.117
Dec  6 01:19:19 np0005548731 systemd[1]: Started Authorization Manager.
Dec  6 01:19:20 np0005548731 python3.9[43827]: ansible-ansible.builtin.systemd Invoked with enabled=True name=tuned state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  6 01:19:20 np0005548731 systemd[1]: Stopping Dynamic System Tuning Daemon...
Dec  6 01:19:20 np0005548731 systemd[1]: tuned.service: Deactivated successfully.
Dec  6 01:19:20 np0005548731 systemd[1]: Stopped Dynamic System Tuning Daemon.
Dec  6 01:19:20 np0005548731 systemd[1]: Starting Dynamic System Tuning Daemon...
Dec  6 01:19:20 np0005548731 systemd[1]: Started Dynamic System Tuning Daemon.
Dec  6 01:19:20 np0005548731 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec  6 01:19:20 np0005548731 systemd[1]: Finished man-db-cache-update.service.
Dec  6 01:19:20 np0005548731 systemd[1]: man-db-cache-update.service: Consumed 4.413s CPU time.
Dec  6 01:19:20 np0005548731 systemd[1]: run-r43108c2d77af4c13aafafc52da388db0.service: Deactivated successfully.
Dec  6 01:19:21 np0005548731 python3.9[43989]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline
Dec  6 01:19:24 np0005548731 python3.9[44141]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksm.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  6 01:19:24 np0005548731 systemd[1]: Reloading.
Dec  6 01:19:24 np0005548731 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  6 01:19:25 np0005548731 python3.9[44330]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksmtuned.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  6 01:19:25 np0005548731 systemd[1]: Reloading.
Dec  6 01:19:25 np0005548731 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  6 01:19:27 np0005548731 python3.9[44519]: ansible-ansible.legacy.command Invoked with _raw_params=mkswap "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  6 01:19:27 np0005548731 python3.9[44672]: ansible-ansible.legacy.command Invoked with _raw_params=swapon "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  6 01:19:27 np0005548731 kernel: Adding 1048572k swap on /swap.  Priority:-2 extents:1 across:1048572k 
Dec  6 01:19:28 np0005548731 python3.9[44825]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/bin/update-ca-trust _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  6 01:19:30 np0005548731 python3.9[44987]: ansible-ansible.legacy.command Invoked with _raw_params=echo 2 >/sys/kernel/mm/ksm/run _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  6 01:19:31 np0005548731 python3.9[45140]: ansible-ansible.builtin.systemd Invoked with name=systemd-sysctl.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  6 01:19:31 np0005548731 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Dec  6 01:19:31 np0005548731 systemd[1]: Stopped Apply Kernel Variables.
Dec  6 01:19:31 np0005548731 systemd[1]: Stopping Apply Kernel Variables...
Dec  6 01:19:31 np0005548731 systemd[1]: Starting Apply Kernel Variables...
Dec  6 01:19:31 np0005548731 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Dec  6 01:19:31 np0005548731 systemd[1]: Finished Apply Kernel Variables.
Dec  6 01:19:32 np0005548731 systemd[1]: session-10.scope: Deactivated successfully.
Dec  6 01:19:32 np0005548731 systemd[1]: session-10.scope: Consumed 2min 16.111s CPU time.
Dec  6 01:19:32 np0005548731 systemd-logind[794]: Session 10 logged out. Waiting for processes to exit.
Dec  6 01:19:32 np0005548731 systemd-logind[794]: Removed session 10.
Dec  6 01:19:37 np0005548731 systemd-logind[794]: New session 11 of user zuul.
Dec  6 01:19:37 np0005548731 systemd[1]: Started Session 11 of User zuul.
Dec  6 01:19:38 np0005548731 python3.9[45325]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  6 01:19:39 np0005548731 python3.9[45481]: ansible-ansible.builtin.getent Invoked with database=passwd key=openvswitch fail_key=True service=None split=None
Dec  6 01:19:40 np0005548731 python3.9[45634]: ansible-ansible.builtin.group Invoked with gid=42476 name=openvswitch state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Dec  6 01:19:41 np0005548731 python3.9[45792]: ansible-ansible.builtin.user Invoked with comment=openvswitch user group=openvswitch groups=['hugetlbfs'] name=openvswitch shell=/sbin/nologin state=present uid=42476 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-2 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Dec  6 01:19:42 np0005548731 python3.9[45952]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec  6 01:19:43 np0005548731 python3.9[46036]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openvswitch'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Dec  6 01:19:47 np0005548731 python3.9[46201]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec  6 01:20:03 np0005548731 kernel: SELinux:  Converting 2731 SID table entries...
Dec  6 01:20:03 np0005548731 kernel: SELinux:  policy capability network_peer_controls=1
Dec  6 01:20:03 np0005548731 kernel: SELinux:  policy capability open_perms=1
Dec  6 01:20:03 np0005548731 kernel: SELinux:  policy capability extended_socket_class=1
Dec  6 01:20:03 np0005548731 kernel: SELinux:  policy capability always_check_network=0
Dec  6 01:20:03 np0005548731 kernel: SELinux:  policy capability cgroup_seclabel=1
Dec  6 01:20:03 np0005548731 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec  6 01:20:03 np0005548731 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec  6 01:20:03 np0005548731 dbus-broker-launch[775]: avc:  op=load_policy lsm=selinux seqno=9 res=1
Dec  6 01:20:03 np0005548731 systemd[1]: Started daily update of the root trust anchor for DNSSEC.
Dec  6 01:20:05 np0005548731 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec  6 01:20:05 np0005548731 systemd[1]: Starting man-db-cache-update.service...
Dec  6 01:20:05 np0005548731 systemd[1]: Reloading.
Dec  6 01:20:05 np0005548731 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  6 01:20:05 np0005548731 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  6 01:20:05 np0005548731 systemd[1]: Queuing reload/restart jobs for marked units…
Dec  6 01:20:07 np0005548731 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec  6 01:20:07 np0005548731 systemd[1]: Finished man-db-cache-update.service.
Dec  6 01:20:07 np0005548731 systemd[1]: run-rb20b133bd83148a3816eab7d29cdf2cd.service: Deactivated successfully.
Dec  6 01:20:08 np0005548731 python3.9[47299]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec  6 01:20:08 np0005548731 systemd[1]: Reloading.
Dec  6 01:20:08 np0005548731 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  6 01:20:08 np0005548731 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  6 01:20:08 np0005548731 systemd[1]: Starting Open vSwitch Database Unit...
Dec  6 01:20:08 np0005548731 chown[47343]: /usr/bin/chown: cannot access '/run/openvswitch': No such file or directory
Dec  6 01:20:08 np0005548731 ovs-ctl[47348]: /etc/openvswitch/conf.db does not exist ... (warning).
Dec  6 01:20:08 np0005548731 ovs-ctl[47348]: Creating empty database /etc/openvswitch/conf.db [  OK  ]
Dec  6 01:20:08 np0005548731 ovs-ctl[47348]: Starting ovsdb-server [  OK  ]
Dec  6 01:20:08 np0005548731 ovs-vsctl[47397]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait -- init -- set Open_vSwitch . db-version=8.5.1
Dec  6 01:20:08 np0005548731 ovs-vsctl[47414]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait set Open_vSwitch . ovs-version=3.3.5-115.el9s "external-ids:system-id=\"9f96b960-b4f2-40bd-ae99-08121f5e8b78\"" "external-ids:rundir=\"/var/run/openvswitch\"" "system-type=\"centos\"" "system-version=\"9\""
Dec  6 01:20:08 np0005548731 ovs-ctl[47348]: Configuring Open vSwitch system IDs [  OK  ]
Dec  6 01:20:08 np0005548731 ovs-ctl[47348]: Enabling remote OVSDB managers [  OK  ]
Dec  6 01:20:08 np0005548731 systemd[1]: Started Open vSwitch Database Unit.
Dec  6 01:20:08 np0005548731 ovs-vsctl[47423]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-2
Dec  6 01:20:08 np0005548731 systemd[1]: Starting Open vSwitch Delete Transient Ports...
Dec  6 01:20:08 np0005548731 systemd[1]: Finished Open vSwitch Delete Transient Ports.
Dec  6 01:20:08 np0005548731 systemd[1]: Starting Open vSwitch Forwarding Unit...
Dec  6 01:20:08 np0005548731 kernel: openvswitch: Open vSwitch switching datapath
Dec  6 01:20:08 np0005548731 ovs-ctl[47468]: Inserting openvswitch module [  OK  ]
Dec  6 01:20:08 np0005548731 ovs-ctl[47437]: Starting ovs-vswitchd [  OK  ]
Dec  6 01:20:08 np0005548731 ovs-vsctl[47485]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-2
Dec  6 01:20:08 np0005548731 ovs-ctl[47437]: Enabling remote OVSDB managers [  OK  ]
Dec  6 01:20:08 np0005548731 systemd[1]: Started Open vSwitch Forwarding Unit.
Dec  6 01:20:08 np0005548731 systemd[1]: Starting Open vSwitch...
Dec  6 01:20:08 np0005548731 systemd[1]: Finished Open vSwitch.
Dec  6 01:20:09 np0005548731 python3.9[47637]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  6 01:20:10 np0005548731 python3.9[47789]: ansible-community.general.sefcontext Invoked with selevel=s0 setype=container_file_t state=present target=/var/lib/edpm-config(/.*)? ignore_selinux_state=False ftype=a reload=True substitute=None seuser=None
Dec  6 01:20:11 np0005548731 kernel: SELinux:  Converting 2745 SID table entries...
Dec  6 01:20:11 np0005548731 kernel: SELinux:  policy capability network_peer_controls=1
Dec  6 01:20:11 np0005548731 kernel: SELinux:  policy capability open_perms=1
Dec  6 01:20:11 np0005548731 kernel: SELinux:  policy capability extended_socket_class=1
Dec  6 01:20:11 np0005548731 kernel: SELinux:  policy capability always_check_network=0
Dec  6 01:20:11 np0005548731 kernel: SELinux:  policy capability cgroup_seclabel=1
Dec  6 01:20:11 np0005548731 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec  6 01:20:11 np0005548731 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec  6 01:20:13 np0005548731 python3.9[47944]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  6 01:20:14 np0005548731 dbus-broker-launch[775]: avc:  op=load_policy lsm=selinux seqno=10 res=1
Dec  6 01:20:14 np0005548731 python3.9[48102]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec  6 01:20:16 np0005548731 python3.9[48255]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  6 01:20:18 np0005548731 python3.9[48542]: ansible-ansible.builtin.file Invoked with mode=0750 path=/var/lib/edpm-config selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Dec  6 01:20:19 np0005548731 python3.9[48692]: ansible-ansible.builtin.stat Invoked with path=/etc/cloud/cloud.cfg.d follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  6 01:20:19 np0005548731 python3.9[48846]: ansible-ansible.legacy.dnf Invoked with name=['NetworkManager-ovs'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec  6 01:20:38 np0005548731 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec  6 01:20:38 np0005548731 systemd[1]: Starting man-db-cache-update.service...
Dec  6 01:20:38 np0005548731 systemd[1]: Reloading.
Dec  6 01:20:38 np0005548731 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  6 01:20:38 np0005548731 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  6 01:20:38 np0005548731 systemd[1]: Queuing reload/restart jobs for marked units…
Dec  6 01:20:39 np0005548731 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec  6 01:20:39 np0005548731 systemd[1]: Finished man-db-cache-update.service.
Dec  6 01:20:39 np0005548731 systemd[1]: run-r081bd1c0c0f348b691a3876e7156dee2.service: Deactivated successfully.
Dec  6 01:20:40 np0005548731 python3.9[49164]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  6 01:20:40 np0005548731 systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Dec  6 01:20:40 np0005548731 systemd[1]: Stopped Network Manager Wait Online.
Dec  6 01:20:40 np0005548731 systemd[1]: Stopping Network Manager Wait Online...
Dec  6 01:20:40 np0005548731 systemd[1]: Stopping Network Manager...
Dec  6 01:20:40 np0005548731 NetworkManager[7222]: <info>  [1765002040.0543] caught SIGTERM, shutting down normally.
Dec  6 01:20:40 np0005548731 NetworkManager[7222]: <info>  [1765002040.0557] dhcp4 (eth0): canceled DHCP transaction
Dec  6 01:20:40 np0005548731 NetworkManager[7222]: <info>  [1765002040.0558] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Dec  6 01:20:40 np0005548731 NetworkManager[7222]: <info>  [1765002040.0558] dhcp4 (eth0): state changed no lease
Dec  6 01:20:40 np0005548731 NetworkManager[7222]: <info>  [1765002040.0560] manager: NetworkManager state is now CONNECTED_SITE
Dec  6 01:20:40 np0005548731 systemd[1]: Starting Network Manager Script Dispatcher Service...
Dec  6 01:20:40 np0005548731 systemd[1]: Started Network Manager Script Dispatcher Service.
Dec  6 01:20:40 np0005548731 NetworkManager[7222]: <info>  [1765002040.2612] exiting (success)
Dec  6 01:20:40 np0005548731 systemd[1]: NetworkManager.service: Deactivated successfully.
Dec  6 01:20:40 np0005548731 systemd[1]: Stopped Network Manager.
Dec  6 01:20:40 np0005548731 systemd[1]: NetworkManager.service: Consumed 15.413s CPU time, 4.1M memory peak, read 0B from disk, written 34.0K to disk.
Dec  6 01:20:40 np0005548731 systemd[1]: Starting Network Manager...
Dec  6 01:20:40 np0005548731 NetworkManager[49182]: <info>  [1765002040.3367] NetworkManager (version 1.54.1-1.el9) is starting... (after a restart, boot:e68e3ea0-2c97-4c65-be4f-7c894f030f31)
Dec  6 01:20:40 np0005548731 NetworkManager[49182]: <info>  [1765002040.3368] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Dec  6 01:20:40 np0005548731 NetworkManager[49182]: <info>  [1765002040.3428] manager[0x55ee9d40d090]: monitoring kernel firmware directory '/lib/firmware'.
Dec  6 01:20:40 np0005548731 systemd[1]: Starting Hostname Service...
Dec  6 01:20:40 np0005548731 systemd[1]: Started Hostname Service.
Dec  6 01:20:40 np0005548731 NetworkManager[49182]: <info>  [1765002040.4257] hostname: hostname: using hostnamed
Dec  6 01:20:40 np0005548731 NetworkManager[49182]: <info>  [1765002040.4257] hostname: static hostname changed from (none) to "compute-2"
Dec  6 01:20:40 np0005548731 NetworkManager[49182]: <info>  [1765002040.4262] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Dec  6 01:20:40 np0005548731 NetworkManager[49182]: <info>  [1765002040.4267] manager[0x55ee9d40d090]: rfkill: Wi-Fi hardware radio set enabled
Dec  6 01:20:40 np0005548731 NetworkManager[49182]: <info>  [1765002040.4268] manager[0x55ee9d40d090]: rfkill: WWAN hardware radio set enabled
Dec  6 01:20:40 np0005548731 NetworkManager[49182]: <info>  [1765002040.4297] Loaded device plugin: NMOvsFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-ovs.so)
Dec  6 01:20:40 np0005548731 NetworkManager[49182]: <info>  [1765002040.4306] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-team.so)
Dec  6 01:20:40 np0005548731 NetworkManager[49182]: <info>  [1765002040.4307] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Dec  6 01:20:40 np0005548731 NetworkManager[49182]: <info>  [1765002040.4308] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Dec  6 01:20:40 np0005548731 NetworkManager[49182]: <info>  [1765002040.4308] manager: Networking is enabled by state file
Dec  6 01:20:40 np0005548731 NetworkManager[49182]: <info>  [1765002040.4310] settings: Loaded settings plugin: keyfile (internal)
Dec  6 01:20:40 np0005548731 NetworkManager[49182]: <info>  [1765002040.4314] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Dec  6 01:20:40 np0005548731 NetworkManager[49182]: <info>  [1765002040.4344] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Dec  6 01:20:40 np0005548731 NetworkManager[49182]: <info>  [1765002040.4355] dhcp: init: Using DHCP client 'internal'
Dec  6 01:20:40 np0005548731 NetworkManager[49182]: <info>  [1765002040.4358] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Dec  6 01:20:40 np0005548731 NetworkManager[49182]: <info>  [1765002040.4363] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec  6 01:20:40 np0005548731 NetworkManager[49182]: <info>  [1765002040.4368] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Dec  6 01:20:40 np0005548731 NetworkManager[49182]: <info>  [1765002040.4377] device (lo): Activation: starting connection 'lo' (f9121f72-d016-4e94-a48a-a5240750641b)
Dec  6 01:20:40 np0005548731 NetworkManager[49182]: <info>  [1765002040.4383] device (eth0): carrier: link connected
Dec  6 01:20:40 np0005548731 NetworkManager[49182]: <info>  [1765002040.4388] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Dec  6 01:20:40 np0005548731 NetworkManager[49182]: <info>  [1765002040.4392] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Dec  6 01:20:40 np0005548731 NetworkManager[49182]: <info>  [1765002040.4392] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Dec  6 01:20:40 np0005548731 NetworkManager[49182]: <info>  [1765002040.4397] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Dec  6 01:20:40 np0005548731 NetworkManager[49182]: <info>  [1765002040.4402] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Dec  6 01:20:40 np0005548731 NetworkManager[49182]: <info>  [1765002040.4407] device (eth1): carrier: link connected
Dec  6 01:20:40 np0005548731 NetworkManager[49182]: <info>  [1765002040.4410] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Dec  6 01:20:40 np0005548731 NetworkManager[49182]: <info>  [1765002040.4414] manager: (eth1): assume: will attempt to assume matching connection 'ci-private-network' (18745adb-ffcc-5b8a-b84a-1cf22aac92d8) (indicated)
Dec  6 01:20:40 np0005548731 NetworkManager[49182]: <info>  [1765002040.4414] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Dec  6 01:20:40 np0005548731 NetworkManager[49182]: <info>  [1765002040.4419] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Dec  6 01:20:40 np0005548731 NetworkManager[49182]: <info>  [1765002040.4425] device (eth1): Activation: starting connection 'ci-private-network' (18745adb-ffcc-5b8a-b84a-1cf22aac92d8)
Dec  6 01:20:40 np0005548731 systemd[1]: Started Network Manager.
Dec  6 01:20:40 np0005548731 NetworkManager[49182]: <info>  [1765002040.4430] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Dec  6 01:20:40 np0005548731 NetworkManager[49182]: <info>  [1765002040.4438] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Dec  6 01:20:40 np0005548731 NetworkManager[49182]: <info>  [1765002040.4440] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Dec  6 01:20:40 np0005548731 NetworkManager[49182]: <info>  [1765002040.4443] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Dec  6 01:20:40 np0005548731 NetworkManager[49182]: <info>  [1765002040.4446] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Dec  6 01:20:40 np0005548731 NetworkManager[49182]: <info>  [1765002040.4449] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Dec  6 01:20:40 np0005548731 NetworkManager[49182]: <info>  [1765002040.4452] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Dec  6 01:20:40 np0005548731 NetworkManager[49182]: <info>  [1765002040.4455] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Dec  6 01:20:40 np0005548731 NetworkManager[49182]: <info>  [1765002040.4460] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Dec  6 01:20:40 np0005548731 NetworkManager[49182]: <info>  [1765002040.4467] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Dec  6 01:20:40 np0005548731 NetworkManager[49182]: <info>  [1765002040.4470] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Dec  6 01:20:40 np0005548731 NetworkManager[49182]: <info>  [1765002040.4494] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Dec  6 01:20:40 np0005548731 NetworkManager[49182]: <info>  [1765002040.4509] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Dec  6 01:20:40 np0005548731 NetworkManager[49182]: <info>  [1765002040.4693] dhcp4 (eth0): state changed new lease, address=38.102.83.195
Dec  6 01:20:40 np0005548731 NetworkManager[49182]: <info>  [1765002040.4699] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Dec  6 01:20:40 np0005548731 systemd[1]: Starting Network Manager Wait Online...
Dec  6 01:20:40 np0005548731 NetworkManager[49182]: <info>  [1765002040.4833] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Dec  6 01:20:40 np0005548731 NetworkManager[49182]: <info>  [1765002040.4839] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Dec  6 01:20:40 np0005548731 NetworkManager[49182]: <info>  [1765002040.4840] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Dec  6 01:20:40 np0005548731 NetworkManager[49182]: <info>  [1765002040.4841] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Dec  6 01:20:40 np0005548731 NetworkManager[49182]: <info>  [1765002040.4845] device (lo): Activation: successful, device activated.
Dec  6 01:20:40 np0005548731 NetworkManager[49182]: <info>  [1765002040.4851] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Dec  6 01:20:40 np0005548731 NetworkManager[49182]: <info>  [1765002040.4854] manager: NetworkManager state is now CONNECTED_LOCAL
Dec  6 01:20:40 np0005548731 NetworkManager[49182]: <info>  [1765002040.4856] device (eth1): Activation: successful, device activated.
Dec  6 01:20:40 np0005548731 NetworkManager[49182]: <info>  [1765002040.4872] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Dec  6 01:20:40 np0005548731 NetworkManager[49182]: <info>  [1765002040.4874] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Dec  6 01:20:40 np0005548731 NetworkManager[49182]: <info>  [1765002040.4877] manager: NetworkManager state is now CONNECTED_SITE
Dec  6 01:20:40 np0005548731 NetworkManager[49182]: <info>  [1765002040.4880] device (eth0): Activation: successful, device activated.
Dec  6 01:20:40 np0005548731 NetworkManager[49182]: <info>  [1765002040.4884] manager: NetworkManager state is now CONNECTED_GLOBAL
Dec  6 01:20:40 np0005548731 NetworkManager[49182]: <info>  [1765002040.4887] manager: startup complete
Dec  6 01:20:40 np0005548731 systemd[1]: Finished Network Manager Wait Online.
Dec  6 01:20:41 np0005548731 python3.9[49390]: ansible-ansible.legacy.dnf Invoked with name=['os-net-config'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec  6 01:20:48 np0005548731 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec  6 01:20:48 np0005548731 systemd[1]: Starting man-db-cache-update.service...
Dec  6 01:20:48 np0005548731 systemd[1]: Reloading.
Dec  6 01:20:48 np0005548731 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  6 01:20:48 np0005548731 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  6 01:20:48 np0005548731 systemd[1]: Queuing reload/restart jobs for marked units…
Dec  6 01:20:50 np0005548731 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec  6 01:20:50 np0005548731 systemd[1]: Finished man-db-cache-update.service.
Dec  6 01:20:50 np0005548731 systemd[1]: run-rf6be3d1c0896403c9fad5e408b6eb6ce.service: Deactivated successfully.
Dec  6 01:20:50 np0005548731 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Dec  6 01:20:51 np0005548731 python3.9[49848]: ansible-ansible.builtin.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  6 01:20:52 np0005548731 python3.9[50000]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=no-auto-default path=/etc/NetworkManager/NetworkManager.conf section=main state=present value=* exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 01:20:52 np0005548731 python3.9[50154]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 01:20:53 np0005548731 python3.9[50306]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 01:20:54 np0005548731 python3.9[50458]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 01:20:54 np0005548731 python3.9[50610]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 01:20:55 np0005548731 python3.9[50762]: ansible-ansible.legacy.stat Invoked with path=/etc/dhcp/dhclient-enter-hooks follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  6 01:20:56 np0005548731 python3.9[50885]: ansible-ansible.legacy.copy Invoked with dest=/etc/dhcp/dhclient-enter-hooks mode=0755 src=/home/zuul/.ansible/tmp/ansible-tmp-1765002055.0286362-654-131766203222454/.source _original_basename=.dyjld9ab follow=False checksum=f6278a40de79a9841f6ed1fc584538225566990c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 01:20:56 np0005548731 python3.9[51037]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/os-net-config state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 01:20:57 np0005548731 python3.9[51189]: ansible-edpm_os_net_config_mappings Invoked with net_config_data_lookup={}
Dec  6 01:20:58 np0005548731 python3.9[51341]: ansible-ansible.builtin.file Invoked with path=/var/lib/edpm-config/scripts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 01:21:00 np0005548731 python3.9[51768]: ansible-ansible.builtin.slurp Invoked with path=/etc/os-net-config/config.yaml src=/etc/os-net-config/config.yaml
Dec  6 01:21:02 np0005548731 ansible-async_wrapper.py[51943]: Invoked with j760570744619 300 /home/zuul/.ansible/tmp/ansible-tmp-1765002061.174649-852-234643905495710/AnsiballZ_edpm_os_net_config.py _
Dec  6 01:21:02 np0005548731 ansible-async_wrapper.py[51946]: Starting module and watcher
Dec  6 01:21:02 np0005548731 ansible-async_wrapper.py[51946]: Start watching 51947 (300)
Dec  6 01:21:02 np0005548731 ansible-async_wrapper.py[51947]: Start module (51947)
Dec  6 01:21:02 np0005548731 ansible-async_wrapper.py[51943]: Return async_wrapper task started.
Dec  6 01:21:02 np0005548731 python3.9[51948]: ansible-edpm_os_net_config Invoked with cleanup=True config_file=/etc/os-net-config/config.yaml debug=True detailed_exit_codes=True safe_defaults=False use_nmstate=True
Dec  6 01:21:02 np0005548731 kernel: cfg80211: Loading compiled-in X.509 certificates for regulatory database
Dec  6 01:21:02 np0005548731 kernel: Loaded X.509 cert 'sforshee: 00b28ddf47aef9cea7'
Dec  6 01:21:02 np0005548731 kernel: Loaded X.509 cert 'wens: 61c038651aabdcf94bd0ac7ff06c7248db18c600'
Dec  6 01:21:02 np0005548731 kernel: platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
Dec  6 01:21:02 np0005548731 kernel: cfg80211: failed to load regulatory.db
Dec  6 01:21:03 np0005548731 NetworkManager[49182]: <info>  [1765002063.8531] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51949 uid=0 result="success"
Dec  6 01:21:03 np0005548731 NetworkManager[49182]: <info>  [1765002063.8552] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51949 uid=0 result="success"
Dec  6 01:21:03 np0005548731 NetworkManager[49182]: <info>  [1765002063.9032] manager: (br-ex): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/4)
Dec  6 01:21:03 np0005548731 NetworkManager[49182]: <info>  [1765002063.9034] audit: op="connection-add" uuid="93186a95-ed55-4d94-b0e1-cbed6a371b32" name="br-ex-br" pid=51949 uid=0 result="success"
Dec  6 01:21:03 np0005548731 NetworkManager[49182]: <info>  [1765002063.9046] manager: (br-ex): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/5)
Dec  6 01:21:03 np0005548731 NetworkManager[49182]: <info>  [1765002063.9048] audit: op="connection-add" uuid="e2bd42e4-e6d9-42d7-9be4-8f0e7fb9c324" name="br-ex-port" pid=51949 uid=0 result="success"
Dec  6 01:21:03 np0005548731 NetworkManager[49182]: <info>  [1765002063.9059] manager: (eth1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/6)
Dec  6 01:21:03 np0005548731 NetworkManager[49182]: <info>  [1765002063.9061] audit: op="connection-add" uuid="2df76619-29bd-4c84-b333-17f3279cf997" name="eth1-port" pid=51949 uid=0 result="success"
Dec  6 01:21:03 np0005548731 NetworkManager[49182]: <info>  [1765002063.9072] manager: (vlan20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/7)
Dec  6 01:21:03 np0005548731 NetworkManager[49182]: <info>  [1765002063.9074] audit: op="connection-add" uuid="cf5abd3e-6fe6-44c2-ba28-741465363fc5" name="vlan20-port" pid=51949 uid=0 result="success"
Dec  6 01:21:03 np0005548731 NetworkManager[49182]: <info>  [1765002063.9084] manager: (vlan21): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/8)
Dec  6 01:21:03 np0005548731 NetworkManager[49182]: <info>  [1765002063.9086] audit: op="connection-add" uuid="f2e7386b-b0ff-4f5b-880d-8b39f29a1eff" name="vlan21-port" pid=51949 uid=0 result="success"
Dec  6 01:21:03 np0005548731 NetworkManager[49182]: <info>  [1765002063.9097] manager: (vlan22): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/9)
Dec  6 01:21:03 np0005548731 NetworkManager[49182]: <info>  [1765002063.9098] audit: op="connection-add" uuid="d4bc8963-4200-45fa-a938-e6e87b290d0e" name="vlan22-port" pid=51949 uid=0 result="success"
Dec  6 01:21:03 np0005548731 NetworkManager[49182]: <info>  [1765002063.9108] manager: (vlan23): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/10)
Dec  6 01:21:03 np0005548731 NetworkManager[49182]: <info>  [1765002063.9110] audit: op="connection-add" uuid="53962e7d-7007-411f-9982-6892568aac05" name="vlan23-port" pid=51949 uid=0 result="success"
Dec  6 01:21:03 np0005548731 NetworkManager[49182]: <info>  [1765002063.9128] audit: op="connection-update" uuid="5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03" name="System eth0" args="ipv4.dhcp-client-id,ipv4.dhcp-timeout,connection.autoconnect-priority,connection.timestamp,ipv6.addr-gen-mode,ipv6.method,ipv6.dhcp-timeout,802-3-ethernet.mtu" pid=51949 uid=0 result="success"
Dec  6 01:21:03 np0005548731 NetworkManager[49182]: <info>  [1765002063.9142] manager: (br-ex): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/11)
Dec  6 01:21:03 np0005548731 NetworkManager[49182]: <info>  [1765002063.9144] audit: op="connection-add" uuid="cf7f589f-6109-4bde-bb14-c93a1b1e8ec7" name="br-ex-if" pid=51949 uid=0 result="success"
Dec  6 01:21:03 np0005548731 NetworkManager[49182]: <info>  [1765002063.9214] audit: op="connection-update" uuid="18745adb-ffcc-5b8a-b84a-1cf22aac92d8" name="ci-private-network" args="ipv4.routes,ipv4.routing-rules,ipv4.dns,ipv4.never-default,ipv4.method,ipv4.addresses,connection.master,connection.port-type,connection.slave-type,connection.controller,connection.timestamp,ipv6.routes,ipv6.routing-rules,ipv6.addr-gen-mode,ipv6.dns,ipv6.method,ipv6.addresses,ovs-external-ids.data,ovs-interface.type" pid=51949 uid=0 result="success"
Dec  6 01:21:03 np0005548731 NetworkManager[49182]: <info>  [1765002063.9228] manager: (vlan20): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/12)
Dec  6 01:21:03 np0005548731 NetworkManager[49182]: <info>  [1765002063.9230] audit: op="connection-add" uuid="ff607777-f274-4e68-90c5-c6a2622e1680" name="vlan20-if" pid=51949 uid=0 result="success"
Dec  6 01:21:03 np0005548731 NetworkManager[49182]: <info>  [1765002063.9244] manager: (vlan21): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/13)
Dec  6 01:21:03 np0005548731 NetworkManager[49182]: <info>  [1765002063.9246] audit: op="connection-add" uuid="e62d096d-327d-455c-85d9-ae5b2136435d" name="vlan21-if" pid=51949 uid=0 result="success"
Dec  6 01:21:03 np0005548731 NetworkManager[49182]: <info>  [1765002063.9259] manager: (vlan22): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/14)
Dec  6 01:21:03 np0005548731 NetworkManager[49182]: <info>  [1765002063.9261] audit: op="connection-add" uuid="69679fea-623d-4800-8b24-f2a19c286165" name="vlan22-if" pid=51949 uid=0 result="success"
Dec  6 01:21:03 np0005548731 NetworkManager[49182]: <info>  [1765002063.9275] manager: (vlan23): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/15)
Dec  6 01:21:03 np0005548731 NetworkManager[49182]: <info>  [1765002063.9277] audit: op="connection-add" uuid="cf0cf4ef-ad8a-4cb2-8072-ed182ff29581" name="vlan23-if" pid=51949 uid=0 result="success"
Dec  6 01:21:03 np0005548731 NetworkManager[49182]: <info>  [1765002063.9287] audit: op="connection-delete" uuid="e9ebe57f-2b3a-3c6f-a1b5-85b8e76a73fa" name="Wired connection 1" pid=51949 uid=0 result="success"
Dec  6 01:21:03 np0005548731 NetworkManager[49182]: <info>  [1765002063.9297] device (br-ex)[Open vSwitch Bridge]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec  6 01:21:03 np0005548731 NetworkManager[49182]: <info>  [1765002063.9307] device (br-ex)[Open vSwitch Bridge]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec  6 01:21:03 np0005548731 NetworkManager[49182]: <info>  [1765002063.9310] device (br-ex)[Open vSwitch Bridge]: Activation: starting connection 'br-ex-br' (93186a95-ed55-4d94-b0e1-cbed6a371b32)
Dec  6 01:21:03 np0005548731 NetworkManager[49182]: <info>  [1765002063.9312] audit: op="connection-activate" uuid="93186a95-ed55-4d94-b0e1-cbed6a371b32" name="br-ex-br" pid=51949 uid=0 result="success"
Dec  6 01:21:03 np0005548731 NetworkManager[49182]: <info>  [1765002063.9314] device (br-ex)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec  6 01:21:03 np0005548731 NetworkManager[49182]: <info>  [1765002063.9321] device (br-ex)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec  6 01:21:03 np0005548731 NetworkManager[49182]: <info>  [1765002063.9325] device (br-ex)[Open vSwitch Port]: Activation: starting connection 'br-ex-port' (e2bd42e4-e6d9-42d7-9be4-8f0e7fb9c324)
Dec  6 01:21:03 np0005548731 NetworkManager[49182]: <info>  [1765002063.9327] device (eth1)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec  6 01:21:03 np0005548731 NetworkManager[49182]: <info>  [1765002063.9333] device (eth1)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec  6 01:21:03 np0005548731 NetworkManager[49182]: <info>  [1765002063.9338] device (eth1)[Open vSwitch Port]: Activation: starting connection 'eth1-port' (2df76619-29bd-4c84-b333-17f3279cf997)
Dec  6 01:21:03 np0005548731 NetworkManager[49182]: <info>  [1765002063.9340] device (vlan20)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec  6 01:21:03 np0005548731 NetworkManager[49182]: <info>  [1765002063.9346] device (vlan20)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec  6 01:21:03 np0005548731 NetworkManager[49182]: <info>  [1765002063.9351] device (vlan20)[Open vSwitch Port]: Activation: starting connection 'vlan20-port' (cf5abd3e-6fe6-44c2-ba28-741465363fc5)
Dec  6 01:21:03 np0005548731 NetworkManager[49182]: <info>  [1765002063.9353] device (vlan21)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec  6 01:21:03 np0005548731 NetworkManager[49182]: <info>  [1765002063.9360] device (vlan21)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec  6 01:21:03 np0005548731 NetworkManager[49182]: <info>  [1765002063.9364] device (vlan21)[Open vSwitch Port]: Activation: starting connection 'vlan21-port' (f2e7386b-b0ff-4f5b-880d-8b39f29a1eff)
Dec  6 01:21:03 np0005548731 NetworkManager[49182]: <info>  [1765002063.9366] device (vlan22)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec  6 01:21:03 np0005548731 NetworkManager[49182]: <info>  [1765002063.9373] device (vlan22)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec  6 01:21:03 np0005548731 NetworkManager[49182]: <info>  [1765002063.9377] device (vlan22)[Open vSwitch Port]: Activation: starting connection 'vlan22-port' (d4bc8963-4200-45fa-a938-e6e87b290d0e)
Dec  6 01:21:03 np0005548731 NetworkManager[49182]: <info>  [1765002063.9379] device (vlan23)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec  6 01:21:03 np0005548731 NetworkManager[49182]: <info>  [1765002063.9386] device (vlan23)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec  6 01:21:03 np0005548731 NetworkManager[49182]: <info>  [1765002063.9391] device (vlan23)[Open vSwitch Port]: Activation: starting connection 'vlan23-port' (53962e7d-7007-411f-9982-6892568aac05)
Dec  6 01:21:03 np0005548731 NetworkManager[49182]: <info>  [1765002063.9392] device (br-ex)[Open vSwitch Bridge]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec  6 01:21:03 np0005548731 NetworkManager[49182]: <info>  [1765002063.9395] device (br-ex)[Open vSwitch Bridge]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec  6 01:21:03 np0005548731 NetworkManager[49182]: <info>  [1765002063.9398] device (br-ex)[Open vSwitch Bridge]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec  6 01:21:03 np0005548731 NetworkManager[49182]: <info>  [1765002063.9403] device (br-ex)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec  6 01:21:03 np0005548731 NetworkManager[49182]: <info>  [1765002063.9408] device (br-ex)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec  6 01:21:03 np0005548731 NetworkManager[49182]: <info>  [1765002063.9413] device (br-ex)[Open vSwitch Interface]: Activation: starting connection 'br-ex-if' (cf7f589f-6109-4bde-bb14-c93a1b1e8ec7)
Dec  6 01:21:03 np0005548731 NetworkManager[49182]: <info>  [1765002063.9414] device (br-ex)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec  6 01:21:03 np0005548731 NetworkManager[49182]: <info>  [1765002063.9418] device (br-ex)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec  6 01:21:03 np0005548731 NetworkManager[49182]: <info>  [1765002063.9420] device (br-ex)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec  6 01:21:03 np0005548731 NetworkManager[49182]: <info>  [1765002063.9422] device (br-ex)[Open vSwitch Port]: Activation: connection 'br-ex-port' attached as port, continuing activation
Dec  6 01:21:03 np0005548731 NetworkManager[49182]: <info>  [1765002063.9423] device (eth1): state change: activated -> deactivating (reason 'new-activation', managed-type: 'full')
Dec  6 01:21:03 np0005548731 NetworkManager[49182]: <info>  [1765002063.9434] device (eth1): disconnecting for new activation request.
Dec  6 01:21:03 np0005548731 NetworkManager[49182]: <info>  [1765002063.9435] device (eth1)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec  6 01:21:03 np0005548731 NetworkManager[49182]: <info>  [1765002063.9479] device (eth1)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec  6 01:21:03 np0005548731 NetworkManager[49182]: <info>  [1765002063.9481] device (eth1)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec  6 01:21:03 np0005548731 NetworkManager[49182]: <info>  [1765002063.9482] device (eth1)[Open vSwitch Port]: Activation: connection 'eth1-port' attached as port, continuing activation
Dec  6 01:21:03 np0005548731 NetworkManager[49182]: <info>  [1765002063.9484] device (vlan20)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec  6 01:21:03 np0005548731 NetworkManager[49182]: <info>  [1765002063.9491] device (vlan20)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec  6 01:21:03 np0005548731 NetworkManager[49182]: <info>  [1765002063.9494] device (vlan20)[Open vSwitch Interface]: Activation: starting connection 'vlan20-if' (ff607777-f274-4e68-90c5-c6a2622e1680)
Dec  6 01:21:03 np0005548731 NetworkManager[49182]: <info>  [1765002063.9495] device (vlan20)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec  6 01:21:03 np0005548731 NetworkManager[49182]: <info>  [1765002063.9498] device (vlan20)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec  6 01:21:03 np0005548731 NetworkManager[49182]: <info>  [1765002063.9499] device (vlan20)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec  6 01:21:03 np0005548731 NetworkManager[49182]: <info>  [1765002063.9500] device (vlan20)[Open vSwitch Port]: Activation: connection 'vlan20-port' attached as port, continuing activation
Dec  6 01:21:03 np0005548731 NetworkManager[49182]: <info>  [1765002063.9503] device (vlan21)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec  6 01:21:03 np0005548731 NetworkManager[49182]: <info>  [1765002063.9507] device (vlan21)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec  6 01:21:03 np0005548731 NetworkManager[49182]: <info>  [1765002063.9511] device (vlan21)[Open vSwitch Interface]: Activation: starting connection 'vlan21-if' (e62d096d-327d-455c-85d9-ae5b2136435d)
Dec  6 01:21:03 np0005548731 NetworkManager[49182]: <info>  [1765002063.9511] device (vlan21)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec  6 01:21:03 np0005548731 NetworkManager[49182]: <info>  [1765002063.9514] device (vlan21)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec  6 01:21:03 np0005548731 NetworkManager[49182]: <info>  [1765002063.9516] device (vlan21)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec  6 01:21:03 np0005548731 NetworkManager[49182]: <info>  [1765002063.9517] device (vlan21)[Open vSwitch Port]: Activation: connection 'vlan21-port' attached as port, continuing activation
Dec  6 01:21:03 np0005548731 NetworkManager[49182]: <info>  [1765002063.9519] device (vlan22)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec  6 01:21:03 np0005548731 NetworkManager[49182]: <info>  [1765002063.9523] device (vlan22)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec  6 01:21:03 np0005548731 NetworkManager[49182]: <info>  [1765002063.9528] device (vlan22)[Open vSwitch Interface]: Activation: starting connection 'vlan22-if' (69679fea-623d-4800-8b24-f2a19c286165)
Dec  6 01:21:03 np0005548731 NetworkManager[49182]: <info>  [1765002063.9528] device (vlan22)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec  6 01:21:03 np0005548731 NetworkManager[49182]: <info>  [1765002063.9531] device (vlan22)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec  6 01:21:03 np0005548731 NetworkManager[49182]: <info>  [1765002063.9532] device (vlan22)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec  6 01:21:03 np0005548731 NetworkManager[49182]: <info>  [1765002063.9534] device (vlan22)[Open vSwitch Port]: Activation: connection 'vlan22-port' attached as port, continuing activation
Dec  6 01:21:03 np0005548731 NetworkManager[49182]: <info>  [1765002063.9536] device (vlan23)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec  6 01:21:03 np0005548731 NetworkManager[49182]: <info>  [1765002063.9540] device (vlan23)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec  6 01:21:03 np0005548731 NetworkManager[49182]: <info>  [1765002063.9544] device (vlan23)[Open vSwitch Interface]: Activation: starting connection 'vlan23-if' (cf0cf4ef-ad8a-4cb2-8072-ed182ff29581)
Dec  6 01:21:03 np0005548731 NetworkManager[49182]: <info>  [1765002063.9545] device (vlan23)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec  6 01:21:03 np0005548731 NetworkManager[49182]: <info>  [1765002063.9548] device (vlan23)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec  6 01:21:03 np0005548731 NetworkManager[49182]: <info>  [1765002063.9549] device (vlan23)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec  6 01:21:03 np0005548731 NetworkManager[49182]: <info>  [1765002063.9550] device (vlan23)[Open vSwitch Port]: Activation: connection 'vlan23-port' attached as port, continuing activation
Dec  6 01:21:03 np0005548731 NetworkManager[49182]: <info>  [1765002063.9552] device (br-ex)[Open vSwitch Bridge]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec  6 01:21:03 np0005548731 NetworkManager[49182]: <info>  [1765002063.9562] audit: op="device-reapply" interface="eth0" ifindex=2 args="ipv4.dhcp-client-id,ipv4.dhcp-timeout,connection.autoconnect-priority,ipv6.addr-gen-mode,ipv6.method,802-3-ethernet.mtu" pid=51949 uid=0 result="success"
Dec  6 01:21:03 np0005548731 NetworkManager[49182]: <info>  [1765002063.9563] device (br-ex)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec  6 01:21:03 np0005548731 NetworkManager[49182]: <info>  [1765002063.9567] device (br-ex)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec  6 01:21:03 np0005548731 NetworkManager[49182]: <info>  [1765002063.9569] device (br-ex)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec  6 01:21:03 np0005548731 NetworkManager[49182]: <info>  [1765002063.9574] device (br-ex)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec  6 01:21:03 np0005548731 NetworkManager[49182]: <info>  [1765002063.9578] device (eth1)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec  6 01:21:03 np0005548731 NetworkManager[49182]: <info>  [1765002063.9581] device (vlan20)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec  6 01:21:03 np0005548731 NetworkManager[49182]: <info>  [1765002063.9584] device (vlan20)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec  6 01:21:03 np0005548731 NetworkManager[49182]: <info>  [1765002063.9586] device (vlan20)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec  6 01:21:03 np0005548731 NetworkManager[49182]: <info>  [1765002063.9590] device (vlan20)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec  6 01:21:03 np0005548731 NetworkManager[49182]: <info>  [1765002063.9594] device (vlan21)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec  6 01:21:03 np0005548731 kernel: ovs-system: entered promiscuous mode
Dec  6 01:21:03 np0005548731 NetworkManager[49182]: <info>  [1765002063.9596] device (vlan21)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec  6 01:21:03 np0005548731 NetworkManager[49182]: <info>  [1765002063.9598] device (vlan21)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec  6 01:21:03 np0005548731 NetworkManager[49182]: <info>  [1765002063.9603] device (vlan21)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec  6 01:21:03 np0005548731 NetworkManager[49182]: <info>  [1765002063.9607] device (vlan22)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec  6 01:21:03 np0005548731 NetworkManager[49182]: <info>  [1765002063.9611] device (vlan22)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec  6 01:21:03 np0005548731 NetworkManager[49182]: <info>  [1765002063.9612] device (vlan22)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec  6 01:21:03 np0005548731 kernel: Timeout policy base is empty
Dec  6 01:21:03 np0005548731 systemd-udevd[51955]: Network interface NamePolicy= disabled on kernel command line.
Dec  6 01:21:03 np0005548731 NetworkManager[49182]: <info>  [1765002063.9617] device (vlan22)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec  6 01:21:03 np0005548731 NetworkManager[49182]: <info>  [1765002063.9621] device (vlan23)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec  6 01:21:03 np0005548731 NetworkManager[49182]: <info>  [1765002063.9623] device (vlan23)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec  6 01:21:03 np0005548731 NetworkManager[49182]: <info>  [1765002063.9625] device (vlan23)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec  6 01:21:03 np0005548731 NetworkManager[49182]: <info>  [1765002063.9629] device (vlan23)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec  6 01:21:03 np0005548731 NetworkManager[49182]: <info>  [1765002063.9632] dhcp4 (eth0): canceled DHCP transaction
Dec  6 01:21:03 np0005548731 NetworkManager[49182]: <info>  [1765002063.9632] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Dec  6 01:21:03 np0005548731 NetworkManager[49182]: <info>  [1765002063.9632] dhcp4 (eth0): state changed no lease
Dec  6 01:21:03 np0005548731 NetworkManager[49182]: <info>  [1765002063.9634] dhcp4 (eth0): activation: beginning transaction (no timeout)
Dec  6 01:21:03 np0005548731 NetworkManager[49182]: <info>  [1765002063.9643] device (br-ex)[Open vSwitch Interface]: Activation: connection 'br-ex-if' attached as port, continuing activation
Dec  6 01:21:03 np0005548731 NetworkManager[49182]: <info>  [1765002063.9647] audit: op="device-reapply" interface="eth1" ifindex=3 pid=51949 uid=0 result="fail" reason="Device is not activated"
Dec  6 01:21:03 np0005548731 NetworkManager[49182]: <info>  [1765002063.9685] device (vlan20)[Open vSwitch Interface]: Activation: connection 'vlan20-if' attached as port, continuing activation
Dec  6 01:21:03 np0005548731 NetworkManager[49182]: <info>  [1765002063.9693] device (eth1): disconnecting for new activation request.
Dec  6 01:21:03 np0005548731 NetworkManager[49182]: <info>  [1765002063.9694] audit: op="connection-activate" uuid="18745adb-ffcc-5b8a-b84a-1cf22aac92d8" name="ci-private-network" pid=51949 uid=0 result="success"
Dec  6 01:21:03 np0005548731 systemd[1]: Starting Network Manager Script Dispatcher Service...
Dec  6 01:21:03 np0005548731 NetworkManager[49182]: <info>  [1765002063.9703] device (vlan21)[Open vSwitch Interface]: Activation: connection 'vlan21-if' attached as port, continuing activation
Dec  6 01:21:03 np0005548731 NetworkManager[49182]: <info>  [1765002063.9714] device (vlan22)[Open vSwitch Interface]: Activation: connection 'vlan22-if' attached as port, continuing activation
Dec  6 01:21:03 np0005548731 NetworkManager[49182]: <info>  [1765002063.9723] device (vlan23)[Open vSwitch Interface]: Activation: connection 'vlan23-if' attached as port, continuing activation
Dec  6 01:21:03 np0005548731 NetworkManager[49182]: <info>  [1765002063.9732] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51949 uid=0 result="success"
Dec  6 01:21:03 np0005548731 systemd[1]: Started Network Manager Script Dispatcher Service.
Dec  6 01:21:03 np0005548731 NetworkManager[49182]: <info>  [1765002063.9797] device (eth1): state change: deactivating -> disconnected (reason 'new-activation', managed-type: 'full')
Dec  6 01:21:03 np0005548731 NetworkManager[49182]: <info>  [1765002063.9894] device (eth1): Activation: starting connection 'ci-private-network' (18745adb-ffcc-5b8a-b84a-1cf22aac92d8)
Dec  6 01:21:03 np0005548731 NetworkManager[49182]: <info>  [1765002063.9898] device (br-ex)[Open vSwitch Bridge]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec  6 01:21:03 np0005548731 NetworkManager[49182]: <info>  [1765002063.9907] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec  6 01:21:03 np0005548731 NetworkManager[49182]: <info>  [1765002063.9912] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Dec  6 01:21:03 np0005548731 kernel: br-ex: entered promiscuous mode
Dec  6 01:21:03 np0005548731 NetworkManager[49182]: <info>  [1765002063.9920] device (br-ex)[Open vSwitch Bridge]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec  6 01:21:03 np0005548731 NetworkManager[49182]: <info>  [1765002063.9925] device (br-ex)[Open vSwitch Bridge]: Activation: successful, device activated.
Dec  6 01:21:03 np0005548731 NetworkManager[49182]: <info>  [1765002063.9930] device (br-ex)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec  6 01:21:03 np0005548731 NetworkManager[49182]: <info>  [1765002063.9932] device (eth1)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec  6 01:21:03 np0005548731 NetworkManager[49182]: <info>  [1765002063.9933] device (vlan20)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec  6 01:21:03 np0005548731 NetworkManager[49182]: <info>  [1765002063.9935] device (vlan21)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec  6 01:21:03 np0005548731 NetworkManager[49182]: <info>  [1765002063.9937] device (vlan22)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec  6 01:21:03 np0005548731 NetworkManager[49182]: <info>  [1765002063.9938] device (vlan23)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec  6 01:21:03 np0005548731 NetworkManager[49182]: <info>  [1765002063.9969] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec  6 01:21:03 np0005548731 NetworkManager[49182]: <info>  [1765002063.9975] device (br-ex)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec  6 01:21:03 np0005548731 kernel: vlan22: entered promiscuous mode
Dec  6 01:21:03 np0005548731 NetworkManager[49182]: <info>  [1765002063.9979] device (br-ex)[Open vSwitch Port]: Activation: successful, device activated.
Dec  6 01:21:03 np0005548731 NetworkManager[49182]: <info>  [1765002063.9983] device (eth1)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec  6 01:21:03 np0005548731 NetworkManager[49182]: <info>  [1765002063.9988] device (eth1)[Open vSwitch Port]: Activation: successful, device activated.
Dec  6 01:21:03 np0005548731 NetworkManager[49182]: <info>  [1765002063.9992] device (vlan20)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec  6 01:21:03 np0005548731 NetworkManager[49182]: <info>  [1765002063.9995] device (vlan20)[Open vSwitch Port]: Activation: successful, device activated.
Dec  6 01:21:03 np0005548731 NetworkManager[49182]: <info>  [1765002063.9999] device (vlan21)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec  6 01:21:04 np0005548731 NetworkManager[49182]: <info>  [1765002064.0003] device (vlan21)[Open vSwitch Port]: Activation: successful, device activated.
Dec  6 01:21:04 np0005548731 systemd-udevd[51954]: Network interface NamePolicy= disabled on kernel command line.
Dec  6 01:21:04 np0005548731 NetworkManager[49182]: <info>  [1765002064.0008] device (vlan22)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec  6 01:21:04 np0005548731 NetworkManager[49182]: <info>  [1765002064.0012] device (vlan22)[Open vSwitch Port]: Activation: successful, device activated.
Dec  6 01:21:04 np0005548731 NetworkManager[49182]: <info>  [1765002064.0016] device (vlan23)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec  6 01:21:04 np0005548731 NetworkManager[49182]: <info>  [1765002064.0020] device (vlan23)[Open vSwitch Port]: Activation: successful, device activated.
Dec  6 01:21:04 np0005548731 NetworkManager[49182]: <info>  [1765002064.0028] device (eth1): Activation: connection 'ci-private-network' attached as port, continuing activation
Dec  6 01:21:04 np0005548731 NetworkManager[49182]: <info>  [1765002064.0034] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec  6 01:21:04 np0005548731 NetworkManager[49182]: <info>  [1765002064.0042] device (br-ex)[Open vSwitch Interface]: carrier: link connected
Dec  6 01:21:04 np0005548731 NetworkManager[49182]: <info>  [1765002064.0052] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec  6 01:21:04 np0005548731 NetworkManager[49182]: <info>  [1765002064.0059] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec  6 01:21:04 np0005548731 NetworkManager[49182]: <info>  [1765002064.0067] device (eth1): Activation: successful, device activated.
Dec  6 01:21:04 np0005548731 kernel: vlan20: entered promiscuous mode
Dec  6 01:21:04 np0005548731 NetworkManager[49182]: <info>  [1765002064.0089] device (br-ex)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec  6 01:21:04 np0005548731 NetworkManager[49182]: <info>  [1765002064.0100] device (vlan22)[Open vSwitch Interface]: carrier: link connected
Dec  6 01:21:04 np0005548731 NetworkManager[49182]: <info>  [1765002064.0110] device (vlan22)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec  6 01:21:04 np0005548731 NetworkManager[49182]: <info>  [1765002064.0118] device (br-ex)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec  6 01:21:04 np0005548731 NetworkManager[49182]: <info>  [1765002064.0119] device (br-ex)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec  6 01:21:04 np0005548731 NetworkManager[49182]: <info>  [1765002064.0124] device (br-ex)[Open vSwitch Interface]: Activation: successful, device activated.
Dec  6 01:21:04 np0005548731 kernel: vlan21: entered promiscuous mode
Dec  6 01:21:04 np0005548731 NetworkManager[49182]: <info>  [1765002064.0171] device (vlan22)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec  6 01:21:04 np0005548731 NetworkManager[49182]: <info>  [1765002064.0177] device (vlan22)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec  6 01:21:04 np0005548731 NetworkManager[49182]: <info>  [1765002064.0184] device (vlan22)[Open vSwitch Interface]: Activation: successful, device activated.
Dec  6 01:21:04 np0005548731 NetworkManager[49182]: <info>  [1765002064.0198] device (vlan20)[Open vSwitch Interface]: carrier: link connected
Dec  6 01:21:04 np0005548731 kernel: vlan23: entered promiscuous mode
Dec  6 01:21:04 np0005548731 NetworkManager[49182]: <info>  [1765002064.0215] device (vlan20)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec  6 01:21:04 np0005548731 kernel: virtio_net virtio5 eth1: entered promiscuous mode
Dec  6 01:21:04 np0005548731 NetworkManager[49182]: <info>  [1765002064.0262] device (vlan20)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec  6 01:21:04 np0005548731 NetworkManager[49182]: <info>  [1765002064.0266] device (vlan20)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec  6 01:21:04 np0005548731 NetworkManager[49182]: <info>  [1765002064.0272] device (vlan20)[Open vSwitch Interface]: Activation: successful, device activated.
Dec  6 01:21:04 np0005548731 NetworkManager[49182]: <info>  [1765002064.0283] device (vlan21)[Open vSwitch Interface]: carrier: link connected
Dec  6 01:21:04 np0005548731 NetworkManager[49182]: <info>  [1765002064.0297] device (vlan21)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec  6 01:21:04 np0005548731 NetworkManager[49182]: <info>  [1765002064.0314] device (vlan23)[Open vSwitch Interface]: carrier: link connected
Dec  6 01:21:04 np0005548731 NetworkManager[49182]: <info>  [1765002064.0329] device (vlan23)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec  6 01:21:04 np0005548731 NetworkManager[49182]: <info>  [1765002064.0337] device (vlan21)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec  6 01:21:04 np0005548731 NetworkManager[49182]: <info>  [1765002064.0340] device (vlan21)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec  6 01:21:04 np0005548731 NetworkManager[49182]: <info>  [1765002064.0346] device (vlan21)[Open vSwitch Interface]: Activation: successful, device activated.
Dec  6 01:21:04 np0005548731 NetworkManager[49182]: <info>  [1765002064.0355] device (vlan23)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec  6 01:21:04 np0005548731 NetworkManager[49182]: <info>  [1765002064.0356] device (vlan23)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec  6 01:21:04 np0005548731 NetworkManager[49182]: <info>  [1765002064.0361] device (vlan23)[Open vSwitch Interface]: Activation: successful, device activated.
Dec  6 01:21:04 np0005548731 NetworkManager[49182]: <info>  [1765002064.1088] dhcp4 (eth0): state changed new lease, address=38.102.83.195
Dec  6 01:21:05 np0005548731 NetworkManager[49182]: <info>  [1765002065.1610] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51949 uid=0 result="success"
Dec  6 01:21:05 np0005548731 NetworkManager[49182]: <info>  [1765002065.3112] checkpoint[0x55ee9d3e3950]: destroy /org/freedesktop/NetworkManager/Checkpoint/1
Dec  6 01:21:05 np0005548731 NetworkManager[49182]: <info>  [1765002065.3115] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51949 uid=0 result="success"
Dec  6 01:21:05 np0005548731 NetworkManager[49182]: <info>  [1765002065.5991] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51949 uid=0 result="success"
Dec  6 01:21:05 np0005548731 NetworkManager[49182]: <info>  [1765002065.6011] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51949 uid=0 result="success"
Dec  6 01:21:05 np0005548731 python3.9[52308]: ansible-ansible.legacy.async_status Invoked with jid=j760570744619.51943 mode=status _async_dir=/root/.ansible_async
Dec  6 01:21:05 np0005548731 NetworkManager[49182]: <info>  [1765002065.8216] audit: op="networking-control" arg="global-dns-configuration" pid=51949 uid=0 result="success"
Dec  6 01:21:05 np0005548731 NetworkManager[49182]: <info>  [1765002065.8257] config: signal: SET_VALUES,values,values-intern,global-dns-config (/etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf)
Dec  6 01:21:05 np0005548731 NetworkManager[49182]: <info>  [1765002065.8290] audit: op="networking-control" arg="global-dns-configuration" pid=51949 uid=0 result="success"
Dec  6 01:21:05 np0005548731 NetworkManager[49182]: <info>  [1765002065.8314] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51949 uid=0 result="success"
Dec  6 01:21:05 np0005548731 NetworkManager[49182]: <info>  [1765002065.9632] checkpoint[0x55ee9d3e3a20]: destroy /org/freedesktop/NetworkManager/Checkpoint/2
Dec  6 01:21:05 np0005548731 NetworkManager[49182]: <info>  [1765002065.9636] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51949 uid=0 result="success"
Dec  6 01:21:06 np0005548731 ansible-async_wrapper.py[51947]: Module complete (51947)
Dec  6 01:21:07 np0005548731 ansible-async_wrapper.py[51946]: Done in kid B.
Dec  6 01:21:09 np0005548731 python3.9[52412]: ansible-ansible.legacy.async_status Invoked with jid=j760570744619.51943 mode=status _async_dir=/root/.ansible_async
Dec  6 01:21:09 np0005548731 python3.9[52512]: ansible-ansible.legacy.async_status Invoked with jid=j760570744619.51943 mode=cleanup _async_dir=/root/.ansible_async
Dec  6 01:21:10 np0005548731 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Dec  6 01:21:10 np0005548731 python3.9[52664]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  6 01:21:11 np0005548731 python3.9[52789]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/os-net-config.returncode mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765002070.092234-928-113027378906894/.source.returncode _original_basename=.0lkmv7hc follow=False checksum=b6589fc6ab0dc82cf12099d1c2d40ab994e8410c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 01:21:11 np0005548731 python3.9[52941]: ansible-ansible.legacy.stat Invoked with path=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  6 01:21:12 np0005548731 python3.9[53064]: ansible-ansible.legacy.copy Invoked with dest=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765002071.5210764-976-81927494095266/.source.cfg _original_basename=.vdio6eg5 follow=False checksum=f3c5952a9cd4c6c31b314b25eb897168971cc86e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 01:21:13 np0005548731 python3.9[53217]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  6 01:21:13 np0005548731 systemd[1]: Reloading Network Manager...
Dec  6 01:21:13 np0005548731 NetworkManager[49182]: <info>  [1765002073.7699] audit: op="reload" arg="0" pid=53221 uid=0 result="success"
Dec  6 01:21:13 np0005548731 NetworkManager[49182]: <info>  [1765002073.7707] config: signal: SIGHUP,config-files,values,values-user,no-auto-default (/etc/NetworkManager/NetworkManager.conf, /usr/lib/NetworkManager/conf.d/00-server.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf, /var/lib/NetworkManager/NetworkManager-intern.conf)
Dec  6 01:21:13 np0005548731 systemd[1]: Reloaded Network Manager.
Dec  6 01:21:14 np0005548731 systemd[1]: session-11.scope: Deactivated successfully.
Dec  6 01:21:14 np0005548731 systemd[1]: session-11.scope: Consumed 51.373s CPU time.
Dec  6 01:21:14 np0005548731 systemd-logind[794]: Session 11 logged out. Waiting for processes to exit.
Dec  6 01:21:14 np0005548731 systemd-logind[794]: Removed session 11.
Dec  6 01:21:19 np0005548731 systemd-logind[794]: New session 12 of user zuul.
Dec  6 01:21:19 np0005548731 systemd[1]: Started Session 12 of User zuul.
Dec  6 01:21:20 np0005548731 python3.9[53405]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  6 01:21:21 np0005548731 python3.9[53559]: ansible-ansible.builtin.setup Invoked with filter=['ansible_default_ipv4'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec  6 01:21:23 np0005548731 python3.9[53753]: ansible-ansible.legacy.command Invoked with _raw_params=hostname -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  6 01:21:23 np0005548731 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Dec  6 01:21:24 np0005548731 systemd-logind[794]: Session 12 logged out. Waiting for processes to exit.
Dec  6 01:21:24 np0005548731 systemd[1]: session-12.scope: Deactivated successfully.
Dec  6 01:21:24 np0005548731 systemd[1]: session-12.scope: Consumed 2.209s CPU time.
Dec  6 01:21:24 np0005548731 systemd-logind[794]: Removed session 12.
Dec  6 01:21:29 np0005548731 systemd-logind[794]: New session 13 of user zuul.
Dec  6 01:21:29 np0005548731 systemd[1]: Started Session 13 of User zuul.
Dec  6 01:21:30 np0005548731 python3.9[53935]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  6 01:21:31 np0005548731 python3.9[54089]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  6 01:21:32 np0005548731 python3.9[54246]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec  6 01:21:33 np0005548731 python3.9[54330]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec  6 01:21:35 np0005548731 python3.9[54484]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec  6 01:21:36 np0005548731 python3.9[54679]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 01:21:37 np0005548731 python3.9[54831]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  6 01:21:37 np0005548731 podman[54832]: 2025-12-06 06:21:37.760402815 +0000 UTC m=+0.083622376 system refresh
Dec  6 01:21:38 np0005548731 python3.9[54995]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  6 01:21:38 np0005548731 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec  6 01:21:39 np0005548731 python3.9[55118]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/networks/podman.json group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765002098.1866968-204-154088491770232/.source.json follow=False _original_basename=podman_network_config.j2 checksum=9b4aa504e05befc6181e0b91a0dee7dac46fc303 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 01:21:40 np0005548731 python3.9[55270]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  6 01:21:40 np0005548731 python3.9[55393]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765002099.6767645-249-44142047538970/.source.conf follow=False _original_basename=registries.conf.j2 checksum=6a7a92c6689685dca24f397866405caa5861defb backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec  6 01:21:41 np0005548731 python3.9[55545]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Dec  6 01:21:42 np0005548731 python3.9[55697]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Dec  6 01:21:42 np0005548731 python3.9[55849]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Dec  6 01:21:43 np0005548731 python3.9[56002]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Dec  6 01:21:44 np0005548731 python3.9[56154]: ansible-ansible.legacy.dnf Invoked with name=['openssh-server'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec  6 01:21:47 np0005548731 python3.9[56307]: ansible-setup Invoked with gather_subset=['!all', '!min', 'distribution', 'distribution_major_version', 'distribution_version', 'os_family'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  6 01:21:47 np0005548731 python3.9[56461]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  6 01:21:48 np0005548731 python3.9[56613]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  6 01:21:50 np0005548731 python3.9[56765]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-system-running _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  6 01:21:51 np0005548731 python3.9[56918]: ansible-service_facts Invoked
Dec  6 01:21:51 np0005548731 network[56935]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec  6 01:21:51 np0005548731 network[56936]: 'network-scripts' will be removed from distribution in near future.
Dec  6 01:21:51 np0005548731 network[56937]: It is advised to switch to 'NetworkManager' instead for network management.
Dec  6 01:21:56 np0005548731 python3.9[57389]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec  6 01:21:59 np0005548731 python3.9[57542]: ansible-package_facts Invoked with manager=['auto'] strategy=first
Dec  6 01:22:00 np0005548731 python3.9[57694]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  6 01:22:01 np0005548731 python3.9[57819]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/chrony.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765002120.459194-682-200878065084142/.source.conf follow=False _original_basename=chrony.conf.j2 checksum=cfb003e56d02d0d2c65555452eb1a05073fecdad force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 01:22:02 np0005548731 python3.9[57973]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/chronyd follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  6 01:22:02 np0005548731 python3.9[58098]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/sysconfig/chronyd mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765002121.9062579-728-8209412205179/.source follow=False _original_basename=chronyd.sysconfig.j2 checksum=dd196b1ff1f915b23eebc37ec77405b5dd3df76c force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 01:22:04 np0005548731 python3.9[58252]: ansible-lineinfile Invoked with backup=True create=True dest=/etc/sysconfig/network line=PEERNTP=no mode=0644 regexp=^PEERNTP= state=present path=/etc/sysconfig/network encoding=utf-8 backrefs=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 01:22:06 np0005548731 python3.9[58406]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec  6 01:22:07 np0005548731 python3.9[58490]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  6 01:22:08 np0005548731 python3.9[58644]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec  6 01:22:09 np0005548731 python3.9[58728]: ansible-ansible.legacy.systemd Invoked with name=chronyd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  6 01:22:09 np0005548731 chronyd[791]: chronyd exiting
Dec  6 01:22:09 np0005548731 systemd[1]: Stopping NTP client/server...
Dec  6 01:22:09 np0005548731 systemd[1]: chronyd.service: Deactivated successfully.
Dec  6 01:22:09 np0005548731 systemd[1]: Stopped NTP client/server.
Dec  6 01:22:09 np0005548731 systemd[1]: Starting NTP client/server...
Dec  6 01:22:09 np0005548731 chronyd[58737]: chronyd version 4.8 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +NTS +SECHASH +IPV6 +DEBUG)
Dec  6 01:22:09 np0005548731 chronyd[58737]: Frequency -23.854 +/- 0.244 ppm read from /var/lib/chrony/drift
Dec  6 01:22:09 np0005548731 chronyd[58737]: Loaded seccomp filter (level 2)
Dec  6 01:22:09 np0005548731 systemd[1]: Started NTP client/server.
Dec  6 01:22:10 np0005548731 systemd[1]: session-13.scope: Deactivated successfully.
Dec  6 01:22:10 np0005548731 systemd[1]: session-13.scope: Consumed 25.004s CPU time.
Dec  6 01:22:10 np0005548731 systemd-logind[794]: Session 13 logged out. Waiting for processes to exit.
Dec  6 01:22:10 np0005548731 systemd-logind[794]: Removed session 13.
Dec  6 01:22:16 np0005548731 systemd-logind[794]: New session 14 of user zuul.
Dec  6 01:22:16 np0005548731 systemd[1]: Started Session 14 of User zuul.
Dec  6 01:22:16 np0005548731 python3.9[58918]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 01:22:17 np0005548731 python3.9[59070]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/ceph-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  6 01:22:18 np0005548731 python3.9[59193]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/ceph-networks.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765002137.168954-69-99381743512757/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=729ea8396013e3343245d6e934e0dcef55029ad2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 01:22:18 np0005548731 systemd[1]: session-14.scope: Deactivated successfully.
Dec  6 01:22:18 np0005548731 systemd[1]: session-14.scope: Consumed 1.580s CPU time.
Dec  6 01:22:18 np0005548731 systemd-logind[794]: Session 14 logged out. Waiting for processes to exit.
Dec  6 01:22:18 np0005548731 systemd-logind[794]: Removed session 14.
Dec  6 01:22:25 np0005548731 systemd-logind[794]: New session 15 of user zuul.
Dec  6 01:22:25 np0005548731 systemd[1]: Started Session 15 of User zuul.
Dec  6 01:22:26 np0005548731 python3.9[59371]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  6 01:22:27 np0005548731 python3.9[59527]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 01:22:28 np0005548731 python3.9[59702]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  6 01:22:28 np0005548731 python3.9[59825]: ansible-ansible.legacy.copy Invoked with dest=/root/.config/containers/auth.json group=zuul mode=0660 owner=zuul src=/home/zuul/.ansible/tmp/ansible-tmp-1765002147.4148066-90-98532379472856/.source.json _original_basename=.nupkdf1g follow=False checksum=bf21a9e8fbc5a3846fb05b4fa0859e0917b2202f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 01:22:30 np0005548731 python3.9[59977]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  6 01:22:30 np0005548731 python3.9[60100]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysconfig/podman_drop_in mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765002149.3379588-160-65062860916190/.source _original_basename=.2_nuz9jx follow=False checksum=125299ce8dea7711a76292961206447f0043248b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 01:22:31 np0005548731 python3.9[60252]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  6 01:22:32 np0005548731 python3.9[60404]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  6 01:22:32 np0005548731 python3.9[60527]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-container-shutdown group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765002151.6551538-232-14107042752847/.source _original_basename=edpm-container-shutdown follow=False checksum=632c3792eb3dce4288b33ae7b265b71950d69f13 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec  6 01:22:33 np0005548731 python3.9[60679]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  6 01:22:33 np0005548731 python3.9[60802]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-start-podman-container group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765002152.7933855-232-115160581626346/.source _original_basename=edpm-start-podman-container follow=False checksum=b963c569d75a655c0ccae95d9bb4a2a9a4df27d1 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec  6 01:22:34 np0005548731 python3.9[60954]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 01:22:35 np0005548731 python3.9[61106]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  6 01:22:35 np0005548731 python3.9[61229]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm-container-shutdown.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765002154.7288098-343-105388048725995/.source.service _original_basename=edpm-container-shutdown-service follow=False checksum=6336835cb0f888670cc99de31e19c8c071444d33 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 01:22:36 np0005548731 python3.9[61381]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  6 01:22:37 np0005548731 python3.9[61504]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765002156.0378025-388-175568581135979/.source.preset _original_basename=91-edpm-container-shutdown-preset follow=False checksum=b275e4375287528cb63464dd32f622c4f142a915 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 01:22:38 np0005548731 python3.9[61656]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  6 01:22:38 np0005548731 systemd[1]: Reloading.
Dec  6 01:22:38 np0005548731 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  6 01:22:38 np0005548731 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  6 01:22:38 np0005548731 systemd[1]: Reloading.
Dec  6 01:22:38 np0005548731 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  6 01:22:38 np0005548731 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  6 01:22:39 np0005548731 systemd[1]: Starting EDPM Container Shutdown...
Dec  6 01:22:39 np0005548731 systemd[1]: Finished EDPM Container Shutdown.
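The pair of copies above installs a unit file and a matching preset before the service is enabled and started. A systemd preset file is just a policy line telling `systemctl preset` whether the unit should be enabled by default. The actual file content is not logged (`content=NOT_LOGGING_PARAMETER`), so this is only an assumed sketch of its likely shape:

```
# /etc/systemd/system-preset/91-edpm-container-shutdown.preset (assumed content)
enable edpm-container-shutdown.service
```

Preset files under /etc/systemd/system-preset/ are read in lexical order, with the first matching `enable`/`disable` line winning.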
Dec  6 01:22:39 np0005548731 python3.9[61885]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  6 01:22:40 np0005548731 python3.9[62008]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/netns-placeholder.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765002159.2476966-456-242982247694046/.source.service _original_basename=netns-placeholder-service follow=False checksum=b61b1b5918c20c877b8b226fbf34ff89a082d972 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 01:22:41 np0005548731 python3.9[62160]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  6 01:22:41 np0005548731 python3.9[62283]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-netns-placeholder.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765002160.7525156-502-52834087925441/.source.preset _original_basename=91-netns-placeholder-preset follow=False checksum=28b7b9aa893525d134a1eeda8a0a48fb25b736b9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 01:22:42 np0005548731 python3.9[62435]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  6 01:22:42 np0005548731 systemd[1]: Reloading.
Dec  6 01:22:42 np0005548731 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  6 01:22:42 np0005548731 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  6 01:22:43 np0005548731 systemd[1]: Reloading.
Dec  6 01:22:43 np0005548731 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  6 01:22:43 np0005548731 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  6 01:22:43 np0005548731 systemd[1]: Starting Create netns directory...
Dec  6 01:22:43 np0005548731 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Dec  6 01:22:43 np0005548731 systemd[1]: netns-placeholder.service: Deactivated successfully.
Dec  6 01:22:43 np0005548731 systemd[1]: Finished Create netns directory.
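The netns-placeholder.service started here ("Create netns directory", with a transient run-netns-placeholder.mount) exists so that /run/netns is present as a mount point before any container tries to bind network namespaces into it. Its unit file is deployed above but not logged; a hedged sketch of what such a oneshot unit typically looks like (ExecStart commands and paths are assumptions, not the deployed content):

```
# Assumed sketch of /etc/systemd/system/netns-placeholder.service
[Unit]
Description=Create netns directory

[Service]
Type=oneshot
# Creating and deleting a throwaway namespace forces /run/netns to exist
# as a mounted directory (illustrative; the real unit may differ).
ExecStart=/sbin/ip netns add placeholder
ExecStart=/sbin/ip netns delete placeholder

[Install]
WantedBy=multi-user.target
```

The "run-netns-placeholder.mount: Deactivated successfully" line is consistent with such a create/delete cycle.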
Dec  6 01:22:44 np0005548731 python3.9[62662]: ansible-ansible.builtin.service_facts Invoked
Dec  6 01:22:45 np0005548731 network[62679]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec  6 01:22:45 np0005548731 network[62680]: 'network-scripts' will be removed from distribution in near future.
Dec  6 01:22:45 np0005548731 network[62681]: It is advised to switch to 'NetworkManager' instead for network management.
Dec  6 01:22:50 np0005548731 python3.9[62943]: ansible-ansible.builtin.systemd Invoked with enabled=False name=iptables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  6 01:22:50 np0005548731 systemd[1]: Reloading.
Dec  6 01:22:50 np0005548731 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  6 01:22:50 np0005548731 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  6 01:22:51 np0005548731 systemd[1]: Stopping IPv4 firewall with iptables...
Dec  6 01:22:51 np0005548731 iptables.init[62984]: iptables: Setting chains to policy ACCEPT: raw mangle filter nat [  OK  ]
Dec  6 01:22:51 np0005548731 iptables.init[62984]: iptables: Flushing firewall rules: [  OK  ]
Dec  6 01:22:51 np0005548731 systemd[1]: iptables.service: Deactivated successfully.
Dec  6 01:22:51 np0005548731 systemd[1]: Stopped IPv4 firewall with iptables.
Dec  6 01:22:52 np0005548731 python3.9[63180]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ip6tables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  6 01:22:53 np0005548731 python3.9[63334]: ansible-ansible.builtin.systemd Invoked with enabled=True name=nftables state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  6 01:22:53 np0005548731 systemd[1]: Reloading.
Dec  6 01:22:53 np0005548731 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  6 01:22:53 np0005548731 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  6 01:22:53 np0005548731 systemd[1]: Starting Netfilter Tables...
Dec  6 01:22:53 np0005548731 systemd[1]: Finished Netfilter Tables.
Dec  6 01:22:54 np0005548731 python3.9[63525]: ansible-ansible.legacy.command Invoked with _raw_params=nft flush ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  6 01:22:55 np0005548731 python3.9[63678]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  6 01:22:56 np0005548731 python3.9[63803]: ansible-ansible.legacy.copy Invoked with dest=/etc/ssh/sshd_config mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1765002175.1355212-709-136471164269452/.source validate=/usr/sbin/sshd -T -f %s follow=False _original_basename=sshd_config_block.j2 checksum=6c79f4cb960ad444688fde322eeacb8402e22d79 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 01:22:57 np0005548731 python3.9[63956]: ansible-ansible.builtin.systemd Invoked with name=sshd state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  6 01:22:57 np0005548731 systemd[1]: Reloading OpenSSH server daemon...
Dec  6 01:22:57 np0005548731 systemd[1]: Reloaded OpenSSH server daemon.
Dec  6 01:22:57 np0005548731 python3.9[64112]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 01:22:58 np0005548731 python3.9[64264]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/sshd-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  6 01:22:59 np0005548731 python3.9[64387]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/sshd-networks.yaml group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765002178.0840333-802-15398573106428/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=0bfc8440fd8f39002ab90252479fb794f51b5ae8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 01:23:00 np0005548731 python3.9[64539]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Dec  6 01:23:00 np0005548731 systemd[1]: Starting Time & Date Service...
Dec  6 01:23:00 np0005548731 systemd[1]: Started Time & Date Service.
Dec  6 01:23:02 np0005548731 python3.9[64695]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 01:23:02 np0005548731 python3.9[64847]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  6 01:23:03 np0005548731 python3.9[64970]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765002182.2023082-907-142445019738864/.source.yaml follow=False _original_basename=base-rules.yaml.j2 checksum=450456afcafded6d4bdecceec7a02e806eebd8b3 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 01:23:03 np0005548731 python3.9[65122]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  6 01:23:04 np0005548731 python3.9[65245]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765002183.354027-952-46984406256500/.source.yaml _original_basename=.9o4uac6t follow=False checksum=97d170e1550eee4afc0af065b78cda302a97674c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 01:23:04 np0005548731 python3.9[65397]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  6 01:23:05 np0005548731 python3.9[65520]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/iptables.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765002184.525547-997-66691180874579/.source.nft _original_basename=iptables.nft follow=False checksum=3e02df08f1f3ab4a513e94056dbd390e3d38fe30 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 01:23:06 np0005548731 python3.9[65672]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/iptables.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  6 01:23:06 np0005548731 python3.9[65825]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  6 01:23:07 np0005548731 python3[65978]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Dec  6 01:23:08 np0005548731 python3.9[66130]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  6 01:23:09 np0005548731 python3.9[66253]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765002188.28425-1114-250165497661125/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 01:23:10 np0005548731 python3.9[66405]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  6 01:23:10 np0005548731 python3.9[66528]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765002189.798204-1159-29844579880123/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 01:23:11 np0005548731 python3.9[66680]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  6 01:23:12 np0005548731 python3.9[66803]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765002191.0613782-1204-11151526020072/.source.nft follow=False _original_basename=flush-chain.j2 checksum=d16337256a56373421842284fe09e4e6c7df417e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 01:23:12 np0005548731 python3.9[66955]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  6 01:23:13 np0005548731 python3.9[67078]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765002192.3479302-1249-36363320826651/.source.nft follow=False _original_basename=chains.j2 checksum=2079f3b60590a165d1d502e763170876fc8e2984 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 01:23:14 np0005548731 python3.9[67230]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  6 01:23:14 np0005548731 python3.9[67353]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765002193.5242953-1294-143973083359897/.source.nft follow=False _original_basename=ruleset.j2 checksum=693377dc03e5b6b24713cb537b18b88774724e35 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 01:23:15 np0005548731 python3.9[67505]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 01:23:15 np0005548731 python3.9[67657]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
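The validation step above runs `set -o pipefail; cat … | nft -c -f -`, so a failure in either the `cat` or the `nft` syntax check fails the whole task. Without pipefail, a pipeline's exit status is only that of its last command. A tiny demonstration with `false` standing in for a failing producer (bash; the real nft commands need root and are not run here):

```shell
# Default: a pipeline's exit status is the status of its LAST command,
# so `false | cat` "succeeds". With pipefail, any failing stage fails it.
( false | cat ); status_default=$?
( set -o pipefail; false | cat ); status_pipefail=$?
echo "default=$status_default pipefail=$status_pipefail"
```

This is why the play prefixes every concatenate-and-pipe nft invocation with `set -o pipefail`: a truncated or unreadable ruleset file cannot be silently masked by a succeeding `nft`.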
Dec  6 01:23:16 np0005548731 python3.9[67816]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
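In journald output, `#012` is the octal escape for a newline, so the `block=` argument of this blockinfile task decodes to a four-line include list, wrapped in the configured BEGIN/END markers. The resulting managed section of /etc/sysconfig/nftables.conf:

```
# BEGIN ANSIBLE MANAGED BLOCK
include "/etc/nftables/iptables.nft"
include "/etc/nftables/edpm-chains.nft"
include "/etc/nftables/edpm-rules.nft"
include "/etc/nftables/edpm-jumps.nft"
# END ANSIBLE MANAGED BLOCK
```

The `validate=nft -c -f %s` parameter means the whole file is syntax-checked before the edit is committed.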
Dec  6 01:23:17 np0005548731 python3.9[67969]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages1G state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 01:23:18 np0005548731 python3.9[68121]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages2M state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 01:23:19 np0005548731 python3.9[68273]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=1G path=/dev/hugepages1G src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Dec  6 01:23:19 np0005548731 python3.9[68426]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=2M path=/dev/hugepages2M src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
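With `state=mounted` and `boot=True`, ansible.posix.mount both mounts the filesystem immediately and persists it in /etc/fstab. Given the parameters logged above (`src=none`, `fstype=hugetlbfs`, `dump=0`, `passno=0`), the resulting fstab entries should look like the following sketch:

```
none /dev/hugepages1G hugetlbfs pagesize=1G 0 0
none /dev/hugepages2M hugetlbfs pagesize=2M 0 0
```

Two separate hugetlbfs mounts are used because each mount exposes exactly one page size (1 GiB and 2 MiB respectively) to applications such as DPDK-backed workloads.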
Dec  6 01:23:20 np0005548731 systemd[1]: session-15.scope: Deactivated successfully.
Dec  6 01:23:20 np0005548731 systemd[1]: session-15.scope: Consumed 34.757s CPU time.
Dec  6 01:23:20 np0005548731 systemd-logind[794]: Session 15 logged out. Waiting for processes to exit.
Dec  6 01:23:20 np0005548731 systemd-logind[794]: Removed session 15.
Dec  6 01:23:25 np0005548731 systemd-logind[794]: New session 16 of user zuul.
Dec  6 01:23:26 np0005548731 systemd[1]: Started Session 16 of User zuul.
Dec  6 01:23:26 np0005548731 python3.9[68607]: ansible-ansible.builtin.tempfile Invoked with state=file prefix=ansible. suffix= path=None
Dec  6 01:23:27 np0005548731 python3.9[68759]: ansible-ansible.builtin.stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  6 01:23:28 np0005548731 python3.9[68911]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'ssh_host_key_rsa_public', 'ssh_host_key_ed25519_public', 'ssh_host_key_ecdsa_public'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  6 01:23:29 np0005548731 python3.9[69063]: ansible-ansible.builtin.blockinfile Invoked with block=compute-2.ctlplane.example.com,192.168.122.102,compute-2* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCTKFyfmCvsG9hwBkDHhSMH95Mc80Ub24C1l2ydnJGHzY4+vYnN+ZopDHd6HVKXgsP9msqkZAEdNXjbQn0sbWYw0v02CY6OcbMq306Rwo9N1fcSO6QVC6w79bGbRJasTA8jgAoGm1VSg3XpzU9C2Delv2ginn7LqCUou48j9w9jyaklDA2EV0anjvZ6hGLjcFaMQSlFPO8rr2pGS5nfNk2Re6GtYYWF4SPkd5xfecWi9szdT+tnG8VrwRX440/Pe3eV5UyVyHQzIEvxJK6DbTgtieOn0PVz3yHI3Uo8VatpsXahO8FsABY1GaI5QAj3qUudWz4YWsiV/qy0G5Wm27CB69LVPGWRr7y4+pVz0HxWiYGyYbRZdxHVZ3jqfGNBdMXJb2shp9BlIo+lpjEydkorHn66AKIpFCZFaGpHzkFPocaTP9yMPAxo/0YPllct7AqO/4CCBNhz4E6/0aMx2lsilFN6Oo/Mj0azpEQOHvuTEwqKJ5BK3MJCWgR11ccN7PM=#012compute-2.ctlplane.example.com,192.168.122.102,compute-2* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEDAV7OqwkHgR6GxlfPQDRQoPSdwQxyp1ILKzyPaTiD9#012compute-2.ctlplane.example.com,192.168.122.102,compute-2* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBAeZVQBWQAXkYJ2DA26L5Jq1a5s2ScFbJ/Q/8jiRPf43wzW24IvQwAq99mI5t4QhVhmTRCbptw5L79elvFEyDY8=#012compute-1.ctlplane.example.com,192.168.122.101,compute-1* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCXnm8ofk0+O7jBFjA8fasReq5BkfwaahdsMJZLiqB71W18faenPK2mMCw6/Lyxaif1wFQKBGWK21bUZXsQguqG7iZV8rSLfQmLlKel/CnEgi2ekXmEhhYL/5GAB8Hq0UxChaI6YQUu0gWku4cPruBw6/+Lz36/PvLLwKqQupEi8npPR3O7a4jF6Px433cpBkZ/hgwG2m5+61NMAcNSCjjNj1cdXLugpDN9+05k6A3QV1sDXS2Zx6zdxPhgmLDKZLBGesQaz+glwYPo/2KfwAwlU4tAuY5eSV2BPX04PqKqexy3iziex/q3pFmtD6f1cRmqFZiyNs+kOfsxwABOVKQ6GG1iKKgzHMsK/paqNWMoHBj0lrRIJoX88Fd2A5DdPs2UPHwy3iUxLYekNcgiigT3O/4x92cFRritKJ8i8j83J6wJOQ0DnpyWxu4WFCjI4mBSKeA0NQzqMPICgkmtmtYKfSlzSdaL9W56FqnfE5JHkSrspcV9xnX3D/ijnD/8PxU=#012compute-1.ctlplane.example.com,192.168.122.101,compute-1* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIE4NkvA88mf0HvkHx7766e1aduefm45OK4uK2xW0LF1S#012compute-1.ctlplane.example.com,192.168.122.101,compute-1* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBIy6uH9XhIotH/UH4KICHfHUvzEiJMGjuOaC3xgcK45R/4kFK8w4At6C/G8bcf1l2+wNZCsHSuKrF09EzQCKCOU=#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCXhoI1gGk2X98AQB4B5ZyJDup3CVjjMbiB6L30cKGocfIwEEwBz5d1xDpwA7euANP32L9+ddZ3Cn0VVuebREE5y184Yi1sdS+2O6H8M7BUT+RANGW4sY7jPXbTTJt6Bp2WWZu+AKxIRGMoo0UfvvFdscomysN+yxWB/KZ/niGARJyw61l1eO1/8shGJiP1LBuA4mdwHMTBYwXiYjk6LgI/i5m6zQk5ggmw2nKJqCwwPGyf2Xf7/LbRDgnryAatph9gA4JZ+QXULUJ8U+ILis30MPOGNA7vJ07ovYFAVwsoKYRCsxrEpg8AxMeRikU+CERKL2QQPABlbuJKnDZFrW2kY/L+B2g+i8FDWpaug4GQ6ZO7REu47ARhAUnuaIuJrhJgLrDq43vTqCgagXFz7UHhLI6KXLayNe3B/4It4UaZIVv3X8K+bZiI0zMWNhyjIBAU5VFZd0QjZDjt+Wv5WMYEFiWDyil+NVEHCxdSl46yd68mUvgMxWiv2Z57ICY9i+k=#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIF08740jXaMkiBlZr9+3kjjW/VDtcxAKNNm3eT7v4C7q#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBJUJnc2vyZ7uGIoKmOtL2ok+zmjLSq/3vZNdtT52cNcj41FV66OIff0lT2r5neBPmMGSlOfqKMRY8iTu1fJs+/c=#012 create=True mode=0644 path=/tmp/ansible.81bcnvt5 state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 01:23:30 np0005548731 systemd[1]: systemd-timedated.service: Deactivated successfully.
Dec  6 01:23:30 np0005548731 python3.9[69217]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible.81bcnvt5' > /etc/ssh/ssh_known_hosts _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  6 01:23:31 np0005548731 python3.9[69371]: ansible-ansible.builtin.file Invoked with path=/tmp/ansible.81bcnvt5 state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
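The three tasks above follow a stage-then-install pattern: build the known_hosts content in a tempfile (blockinfile into /tmp/ansible.81bcnvt5), install it over /etc/ssh/ssh_known_hosts with `cat`, then delete the tempfile. A self-contained sketch of the same pattern (all paths are temp files here, since the real target needs root; the host key line is hypothetical):

```shell
# Stage content in a temp file, install it with cat, clean up.
tmp=$(mktemp /tmp/ansible.XXXXXX)   # mirrors the prefix=ansible. tempfile above
dest=$(mktemp)                      # stands in for /etc/ssh/ssh_known_hosts
{
  echo '# BEGIN ANSIBLE MANAGED BLOCK'
  echo 'host.example.com ssh-ed25519 AAAA...'   # hypothetical host key entry
  echo '# END ANSIBLE MANAGED BLOCK'
} > "$tmp"
cat "$tmp" > "$dest"
rm -f "$tmp"
echo "installed $(wc -l < "$dest") lines"
```

Using `cat src > dest` rather than `mv` preserves the destination's existing ownership, mode, and SELinux context, which matters for a root-owned file like ssh_known_hosts.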
Dec  6 01:23:31 np0005548731 systemd[1]: session-16.scope: Deactivated successfully.
Dec  6 01:23:31 np0005548731 systemd[1]: session-16.scope: Consumed 3.216s CPU time.
Dec  6 01:23:31 np0005548731 systemd-logind[794]: Session 16 logged out. Waiting for processes to exit.
Dec  6 01:23:31 np0005548731 systemd-logind[794]: Removed session 16.
Dec  6 01:23:37 np0005548731 systemd-logind[794]: New session 17 of user zuul.
Dec  6 01:23:37 np0005548731 systemd[1]: Started Session 17 of User zuul.
Dec  6 01:23:38 np0005548731 python3.9[69550]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  6 01:23:40 np0005548731 python3.9[69706]: ansible-ansible.builtin.systemd Invoked with enabled=True name=sshd daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Dec  6 01:23:41 np0005548731 python3.9[69860]: ansible-ansible.builtin.systemd Invoked with name=sshd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  6 01:23:42 np0005548731 python3.9[70013]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  6 01:23:43 np0005548731 python3.9[70166]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  6 01:23:43 np0005548731 python3.9[70320]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  6 01:23:45 np0005548731 python3.9[70475]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 01:23:45 np0005548731 systemd[1]: session-17.scope: Deactivated successfully.
Dec  6 01:23:45 np0005548731 systemd[1]: session-17.scope: Consumed 4.540s CPU time.
Dec  6 01:23:45 np0005548731 systemd-logind[794]: Session 17 logged out. Waiting for processes to exit.
Dec  6 01:23:45 np0005548731 systemd-logind[794]: Removed session 17.
Dec  6 01:23:50 np0005548731 systemd-logind[794]: New session 18 of user zuul.
Dec  6 01:23:50 np0005548731 systemd[1]: Started Session 18 of User zuul.
Dec  6 01:23:52 np0005548731 python3.9[70653]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  6 01:23:53 np0005548731 python3.9[70809]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec  6 01:23:53 np0005548731 python3.9[70893]: ansible-ansible.legacy.dnf Invoked with name=['yum-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Dec  6 01:23:56 np0005548731 python3.9[71045]: ansible-ansible.legacy.command Invoked with _raw_params=needs-restarting -r _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
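`needs-restarting -r` (from yum-utils, installed just above) reports through its exit status: 0 means no reboot is required, 1 means core packages (kernel, glibc, systemd, and the like) were updated since boot and a reboot is advised. A playbook typically branches on the return code; a sketch of that pattern with a stub in place of the real command, so it runs anywhere:

```shell
# Stub standing in for `needs-restarting -r` (exit 1 => reboot required).
needs_restarting_stub() { return 1; }   # pretend core packages were updated
if needs_restarting_stub; then
  verdict="no reboot needed"
else
  verdict="reboot required"
fi
echo "$verdict"
```

Note the inverted-looking logic: in shell, exit status 0 is "success", so the `if` branch is the no-reboot case.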
Dec  6 01:23:58 np0005548731 python3.9[71196]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/reboot_required/'] patterns=[] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Dec  6 01:23:58 np0005548731 python3.9[71346]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  6 01:23:58 np0005548731 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec  6 01:23:58 np0005548731 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec  6 01:23:59 np0005548731 python3.9[71497]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/config follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  6 01:24:00 np0005548731 systemd-logind[794]: Session 18 logged out. Waiting for processes to exit.
Dec  6 01:24:00 np0005548731 systemd[1]: session-18.scope: Deactivated successfully.
Dec  6 01:24:00 np0005548731 systemd[1]: session-18.scope: Consumed 5.599s CPU time.
Dec  6 01:24:00 np0005548731 systemd-logind[794]: Removed session 18.
Dec  6 01:24:09 np0005548731 systemd-logind[794]: New session 19 of user zuul.
Dec  6 01:24:09 np0005548731 systemd[1]: Started Session 19 of User zuul.
Dec  6 01:24:18 np0005548731 python3[72263]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  6 01:24:19 np0005548731 chronyd[58737]: Selected source 216.232.132.102 (pool.ntp.org)
Dec  6 01:24:19 np0005548731 python3[72358]: ansible-ansible.legacy.dnf Invoked with name=['util-linux', 'lvm2', 'jq', 'podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Dec  6 01:24:21 np0005548731 python3[72385]: ansible-ansible.builtin.stat Invoked with path=/dev/loop3 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec  6 01:24:22 np0005548731 python3[72411]: ansible-ansible.legacy.command Invoked with _raw_params=dd if=/dev/zero of=/var/lib/ceph-osd-0.img bs=1 count=0 seek=7G#012losetup /dev/loop3 /var/lib/ceph-osd-0.img#012lsblk _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  6 01:24:22 np0005548731 kernel: loop: module loaded
Dec  6 01:24:22 np0005548731 kernel: loop3: detected capacity change from 0 to 14680064
Dec  6 01:24:22 np0005548731 python3[72446]: ansible-ansible.legacy.command Invoked with _raw_params=pvcreate /dev/loop3#012vgcreate ceph_vg0 /dev/loop3#012lvcreate -n ceph_lv0 -l +100%FREE ceph_vg0#012lvs _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  6 01:24:23 np0005548731 lvm[72449]: PV /dev/loop3 not used.
Dec  6 01:24:24 np0005548731 lvm[72451]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec  6 01:24:24 np0005548731 systemd[1]: Started /usr/sbin/lvm vgchange -aay --autoactivation event ceph_vg0.
Dec  6 01:24:25 np0005548731 lvm[72453]:  0 logical volume(s) in volume group "ceph_vg0" now active
Dec  6 01:24:25 np0005548731 systemd[1]: lvm-activate-ceph_vg0.service: Deactivated successfully.
Dec  6 01:24:25 np0005548731 lvm[72459]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec  6 01:24:25 np0005548731 lvm[72459]: VG ceph_vg0 finished
Dec  6 01:24:25 np0005548731 lvm[72465]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec  6 01:24:25 np0005548731 lvm[72465]: VG ceph_vg0 finished
Dec  6 01:24:26 np0005548731 python3[72543]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/ceph-osd-losetup-0.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  6 01:24:26 np0005548731 python3[72616]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1765002266.2440796-36995-183591787294380/source dest=/etc/systemd/system/ceph-osd-losetup-0.service mode=0644 force=True follow=False _original_basename=ceph-osd-losetup.service.j2 checksum=427b1db064a970126b729b07acf99fa7d0eecb9c backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 01:24:27 np0005548731 python3[72666]: ansible-ansible.builtin.systemd Invoked with state=started enabled=True name=ceph-osd-losetup-0.service daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  6 01:24:27 np0005548731 systemd[1]: Reloading.
Dec  6 01:24:27 np0005548731 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  6 01:24:27 np0005548731 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  6 01:24:27 np0005548731 systemd[1]: Starting Ceph OSD losetup...
Dec  6 01:24:28 np0005548731 bash[72706]: /dev/loop3: [64513]:4327950 (/var/lib/ceph-osd-0.img)
Dec  6 01:24:28 np0005548731 systemd[1]: Finished Ceph OSD losetup.
Dec  6 01:24:28 np0005548731 lvm[72707]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec  6 01:24:28 np0005548731 lvm[72707]: VG ceph_vg0 finished
Dec  6 01:24:30 np0005548731 python3[72732]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'network'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  6 01:26:49 np0005548731 systemd-logind[794]: New session 20 of user ceph-admin.
Dec  6 01:26:49 np0005548731 systemd[1]: Created slice User Slice of UID 42477.
Dec  6 01:26:49 np0005548731 systemd[1]: Starting User Runtime Directory /run/user/42477...
Dec  6 01:26:49 np0005548731 systemd[1]: Finished User Runtime Directory /run/user/42477.
Dec  6 01:26:49 np0005548731 systemd[1]: Starting User Manager for UID 42477...
Dec  6 01:26:49 np0005548731 systemd[72782]: Queued start job for default target Main User Target.
Dec  6 01:26:49 np0005548731 systemd[72782]: Created slice User Application Slice.
Dec  6 01:26:49 np0005548731 systemd[72782]: Started Mark boot as successful after the user session has run 2 minutes.
Dec  6 01:26:49 np0005548731 systemd[72782]: Started Daily Cleanup of User's Temporary Directories.
Dec  6 01:26:49 np0005548731 systemd[72782]: Reached target Paths.
Dec  6 01:26:49 np0005548731 systemd[72782]: Reached target Timers.
Dec  6 01:26:49 np0005548731 systemd[72782]: Starting D-Bus User Message Bus Socket...
Dec  6 01:26:49 np0005548731 systemd[72782]: Starting Create User's Volatile Files and Directories...
Dec  6 01:26:49 np0005548731 systemd[72782]: Finished Create User's Volatile Files and Directories.
Dec  6 01:26:49 np0005548731 systemd[72782]: Listening on D-Bus User Message Bus Socket.
Dec  6 01:26:49 np0005548731 systemd[72782]: Reached target Sockets.
Dec  6 01:26:49 np0005548731 systemd[72782]: Reached target Basic System.
Dec  6 01:26:49 np0005548731 systemd[72782]: Reached target Main User Target.
Dec  6 01:26:49 np0005548731 systemd[72782]: Startup finished in 117ms.
Dec  6 01:26:49 np0005548731 systemd[1]: Started User Manager for UID 42477.
Dec  6 01:26:49 np0005548731 systemd[1]: Started Session 20 of User ceph-admin.
Dec  6 01:26:49 np0005548731 systemd-logind[794]: New session 22 of user ceph-admin.
Dec  6 01:26:49 np0005548731 systemd[1]: Started Session 22 of User ceph-admin.
Dec  6 01:26:49 np0005548731 systemd-logind[794]: New session 23 of user ceph-admin.
Dec  6 01:26:49 np0005548731 systemd[1]: Started Session 23 of User ceph-admin.
Dec  6 01:26:50 np0005548731 systemd-logind[794]: New session 24 of user ceph-admin.
Dec  6 01:26:50 np0005548731 systemd[1]: Started Session 24 of User ceph-admin.
Dec  6 01:26:50 np0005548731 systemd-logind[794]: New session 25 of user ceph-admin.
Dec  6 01:26:50 np0005548731 systemd[1]: Started Session 25 of User ceph-admin.
Dec  6 01:26:51 np0005548731 systemd-logind[794]: New session 26 of user ceph-admin.
Dec  6 01:26:51 np0005548731 systemd[1]: Started Session 26 of User ceph-admin.
Dec  6 01:26:51 np0005548731 systemd-logind[794]: New session 27 of user ceph-admin.
Dec  6 01:26:51 np0005548731 systemd[1]: Started Session 27 of User ceph-admin.
Dec  6 01:26:51 np0005548731 systemd-logind[794]: New session 28 of user ceph-admin.
Dec  6 01:26:51 np0005548731 systemd[1]: Started Session 28 of User ceph-admin.
Dec  6 01:26:52 np0005548731 systemd-logind[794]: New session 29 of user ceph-admin.
Dec  6 01:26:52 np0005548731 systemd[1]: Started Session 29 of User ceph-admin.
Dec  6 01:26:52 np0005548731 systemd-logind[794]: New session 30 of user ceph-admin.
Dec  6 01:26:52 np0005548731 systemd[1]: Started Session 30 of User ceph-admin.
Dec  6 01:26:53 np0005548731 systemd-logind[794]: New session 31 of user ceph-admin.
Dec  6 01:26:53 np0005548731 systemd[1]: Started Session 31 of User ceph-admin.
Dec  6 01:26:53 np0005548731 systemd-logind[794]: New session 32 of user ceph-admin.
Dec  6 01:26:53 np0005548731 systemd[1]: Started Session 32 of User ceph-admin.
Dec  6 01:26:53 np0005548731 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec  6 01:28:32 np0005548731 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec  6 01:28:32 np0005548731 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec  6 01:28:33 np0005548731 systemd[1]: proc-sys-fs-binfmt_misc.automount: Got automount request for /proc/sys/fs/binfmt_misc, triggered by 73801 (sysctl)
Dec  6 01:28:33 np0005548731 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec  6 01:28:33 np0005548731 systemd[1]: Mounting Arbitrary Executable File Formats File System...
Dec  6 01:28:33 np0005548731 systemd[1]: Mounted Arbitrary Executable File Formats File System.
Dec  6 01:28:33 np0005548731 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec  6 01:28:34 np0005548731 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec  6 01:28:34 np0005548731 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec  6 01:28:36 np0005548731 systemd[1]: var-lib-containers-storage-overlay-compat1113489417-merged.mount: Deactivated successfully.
Dec  6 01:28:37 np0005548731 systemd[1]: var-lib-containers-storage-overlay-compat1113489417-lower\x2dmapped.mount: Deactivated successfully.
Dec  6 01:29:09 np0005548731 podman[74078]: 2025-12-06 06:29:09.555948874 +0000 UTC m=+35.081433702 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  6 01:29:10 np0005548731 podman[74078]: 2025-12-06 06:29:10.088158163 +0000 UTC m=+35.613642961 container create e9b8c2556ce14e43a70d78657e653f8abf24740c536f42d18b5d406fe12be1eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_ride, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Dec  6 01:29:10 np0005548731 systemd[72782]: Starting Mark boot as successful...
Dec  6 01:29:10 np0005548731 systemd[72782]: Finished Mark boot as successful.
Dec  6 01:29:10 np0005548731 systemd[1]: Created slice Virtual Machine and Container Slice.
Dec  6 01:29:10 np0005548731 systemd[1]: Started libpod-conmon-e9b8c2556ce14e43a70d78657e653f8abf24740c536f42d18b5d406fe12be1eb.scope.
Dec  6 01:29:10 np0005548731 systemd[1]: Started libcrun container.
Dec  6 01:29:10 np0005548731 podman[74078]: 2025-12-06 06:29:10.753810632 +0000 UTC m=+36.279295480 container init e9b8c2556ce14e43a70d78657e653f8abf24740c536f42d18b5d406fe12be1eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_ride, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec  6 01:29:10 np0005548731 podman[74078]: 2025-12-06 06:29:10.762476369 +0000 UTC m=+36.287961167 container start e9b8c2556ce14e43a70d78657e653f8abf24740c536f42d18b5d406fe12be1eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_ride, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Dec  6 01:29:10 np0005548731 sweet_ride[74157]: 167 167
Dec  6 01:29:10 np0005548731 systemd[1]: libpod-e9b8c2556ce14e43a70d78657e653f8abf24740c536f42d18b5d406fe12be1eb.scope: Deactivated successfully.
Dec  6 01:29:11 np0005548731 podman[74078]: 2025-12-06 06:29:11.12609699 +0000 UTC m=+36.651581888 container attach e9b8c2556ce14e43a70d78657e653f8abf24740c536f42d18b5d406fe12be1eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_ride, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  6 01:29:11 np0005548731 podman[74078]: 2025-12-06 06:29:11.126591622 +0000 UTC m=+36.652076420 container died e9b8c2556ce14e43a70d78657e653f8abf24740c536f42d18b5d406fe12be1eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_ride, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  6 01:29:11 np0005548731 systemd[1]: var-lib-containers-storage-overlay-14dc64759eea559fdcecd9f6bf36f0d0127362d0f6e5db33494720219ae97cbe-merged.mount: Deactivated successfully.
Dec  6 01:29:12 np0005548731 podman[74078]: 2025-12-06 06:29:12.692372581 +0000 UTC m=+38.217857379 container remove e9b8c2556ce14e43a70d78657e653f8abf24740c536f42d18b5d406fe12be1eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_ride, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Dec  6 01:29:12 np0005548731 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec  6 01:29:12 np0005548731 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec  6 01:29:12 np0005548731 systemd[1]: libpod-conmon-e9b8c2556ce14e43a70d78657e653f8abf24740c536f42d18b5d406fe12be1eb.scope: Deactivated successfully.
Dec  6 01:29:12 np0005548731 podman[74182]: 2025-12-06 06:29:12.897789553 +0000 UTC m=+0.072478548 container create 5730b0a8379cdcf32d72c45eeaaf5d3a8102d3a12b94286bb01bb02b0df6321e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_heisenberg, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Dec  6 01:29:12 np0005548731 podman[74182]: 2025-12-06 06:29:12.846932965 +0000 UTC m=+0.021621990 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  6 01:29:13 np0005548731 systemd[1]: Started libpod-conmon-5730b0a8379cdcf32d72c45eeaaf5d3a8102d3a12b94286bb01bb02b0df6321e.scope.
Dec  6 01:29:13 np0005548731 systemd[1]: Started libcrun container.
Dec  6 01:29:13 np0005548731 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e5d08c8ff7804a9d752ee2f0851de8cf92b109f98711b1fedba925014352ccc6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  6 01:29:13 np0005548731 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e5d08c8ff7804a9d752ee2f0851de8cf92b109f98711b1fedba925014352ccc6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  6 01:29:13 np0005548731 podman[74182]: 2025-12-06 06:29:13.355481438 +0000 UTC m=+0.530170463 container init 5730b0a8379cdcf32d72c45eeaaf5d3a8102d3a12b94286bb01bb02b0df6321e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_heisenberg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec  6 01:29:13 np0005548731 podman[74182]: 2025-12-06 06:29:13.363007848 +0000 UTC m=+0.537696843 container start 5730b0a8379cdcf32d72c45eeaaf5d3a8102d3a12b94286bb01bb02b0df6321e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_heisenberg, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Dec  6 01:29:13 np0005548731 podman[74182]: 2025-12-06 06:29:13.416394167 +0000 UTC m=+0.591083162 container attach 5730b0a8379cdcf32d72c45eeaaf5d3a8102d3a12b94286bb01bb02b0df6321e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_heisenberg, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Dec  6 01:29:14 np0005548731 dreamy_heisenberg[74198]: [
Dec  6 01:29:14 np0005548731 dreamy_heisenberg[74198]:    {
Dec  6 01:29:14 np0005548731 dreamy_heisenberg[74198]:        "available": false,
Dec  6 01:29:14 np0005548731 dreamy_heisenberg[74198]:        "ceph_device": false,
Dec  6 01:29:14 np0005548731 dreamy_heisenberg[74198]:        "device_id": "QEMU_DVD-ROM_QM00001",
Dec  6 01:29:14 np0005548731 dreamy_heisenberg[74198]:        "lsm_data": {},
Dec  6 01:29:14 np0005548731 dreamy_heisenberg[74198]:        "lvs": [],
Dec  6 01:29:14 np0005548731 dreamy_heisenberg[74198]:        "path": "/dev/sr0",
Dec  6 01:29:14 np0005548731 dreamy_heisenberg[74198]:        "rejected_reasons": [
Dec  6 01:29:14 np0005548731 dreamy_heisenberg[74198]:            "Insufficient space (<5GB)",
Dec  6 01:29:14 np0005548731 dreamy_heisenberg[74198]:            "Has a FileSystem"
Dec  6 01:29:14 np0005548731 dreamy_heisenberg[74198]:        ],
Dec  6 01:29:14 np0005548731 dreamy_heisenberg[74198]:        "sys_api": {
Dec  6 01:29:14 np0005548731 dreamy_heisenberg[74198]:            "actuators": null,
Dec  6 01:29:14 np0005548731 dreamy_heisenberg[74198]:            "device_nodes": "sr0",
Dec  6 01:29:14 np0005548731 dreamy_heisenberg[74198]:            "devname": "sr0",
Dec  6 01:29:14 np0005548731 dreamy_heisenberg[74198]:            "human_readable_size": "482.00 KB",
Dec  6 01:29:14 np0005548731 dreamy_heisenberg[74198]:            "id_bus": "ata",
Dec  6 01:29:14 np0005548731 dreamy_heisenberg[74198]:            "model": "QEMU DVD-ROM",
Dec  6 01:29:14 np0005548731 dreamy_heisenberg[74198]:            "nr_requests": "2",
Dec  6 01:29:14 np0005548731 dreamy_heisenberg[74198]:            "parent": "/dev/sr0",
Dec  6 01:29:14 np0005548731 dreamy_heisenberg[74198]:            "partitions": {},
Dec  6 01:29:14 np0005548731 dreamy_heisenberg[74198]:            "path": "/dev/sr0",
Dec  6 01:29:14 np0005548731 dreamy_heisenberg[74198]:            "removable": "1",
Dec  6 01:29:14 np0005548731 dreamy_heisenberg[74198]:            "rev": "2.5+",
Dec  6 01:29:14 np0005548731 dreamy_heisenberg[74198]:            "ro": "0",
Dec  6 01:29:14 np0005548731 dreamy_heisenberg[74198]:            "rotational": "1",
Dec  6 01:29:14 np0005548731 dreamy_heisenberg[74198]:            "sas_address": "",
Dec  6 01:29:14 np0005548731 dreamy_heisenberg[74198]:            "sas_device_handle": "",
Dec  6 01:29:14 np0005548731 dreamy_heisenberg[74198]:            "scheduler_mode": "mq-deadline",
Dec  6 01:29:14 np0005548731 dreamy_heisenberg[74198]:            "sectors": 0,
Dec  6 01:29:14 np0005548731 dreamy_heisenberg[74198]:            "sectorsize": "2048",
Dec  6 01:29:14 np0005548731 dreamy_heisenberg[74198]:            "size": 493568.0,
Dec  6 01:29:14 np0005548731 dreamy_heisenberg[74198]:            "support_discard": "2048",
Dec  6 01:29:14 np0005548731 dreamy_heisenberg[74198]:            "type": "disk",
Dec  6 01:29:14 np0005548731 dreamy_heisenberg[74198]:            "vendor": "QEMU"
Dec  6 01:29:14 np0005548731 dreamy_heisenberg[74198]:        }
Dec  6 01:29:14 np0005548731 dreamy_heisenberg[74198]:    }
Dec  6 01:29:14 np0005548731 dreamy_heisenberg[74198]: ]
Dec  6 01:29:14 np0005548731 systemd[1]: libpod-5730b0a8379cdcf32d72c45eeaaf5d3a8102d3a12b94286bb01bb02b0df6321e.scope: Deactivated successfully.
Dec  6 01:29:14 np0005548731 systemd[1]: libpod-5730b0a8379cdcf32d72c45eeaaf5d3a8102d3a12b94286bb01bb02b0df6321e.scope: Consumed 1.282s CPU time.
Dec  6 01:29:14 np0005548731 podman[75264]: 2025-12-06 06:29:14.661356313 +0000 UTC m=+0.023308830 container died 5730b0a8379cdcf32d72c45eeaaf5d3a8102d3a12b94286bb01bb02b0df6321e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_heisenberg, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef)
Dec  6 01:29:15 np0005548731 systemd[1]: var-lib-containers-storage-overlay-e5d08c8ff7804a9d752ee2f0851de8cf92b109f98711b1fedba925014352ccc6-merged.mount: Deactivated successfully.
Dec  6 01:29:15 np0005548731 podman[75264]: 2025-12-06 06:29:15.722464662 +0000 UTC m=+1.084417169 container remove 5730b0a8379cdcf32d72c45eeaaf5d3a8102d3a12b94286bb01bb02b0df6321e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_heisenberg, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  6 01:29:15 np0005548731 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec  6 01:29:15 np0005548731 systemd[1]: libpod-conmon-5730b0a8379cdcf32d72c45eeaaf5d3a8102d3a12b94286bb01bb02b0df6321e.scope: Deactivated successfully.
Dec  6 01:29:20 np0005548731 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec  6 01:29:20 np0005548731 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec  6 01:29:20 np0005548731 podman[77111]: 2025-12-06 06:29:20.635654226 +0000 UTC m=+0.021206930 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  6 01:29:20 np0005548731 podman[77111]: 2025-12-06 06:29:20.900023668 +0000 UTC m=+0.285576352 container create 3c007fc92827acc4b739bc89cdb7313f7b5437e0c64f5e1cc9993b21feed297b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_wescoff, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Dec  6 01:29:20 np0005548731 systemd[1]: Started libpod-conmon-3c007fc92827acc4b739bc89cdb7313f7b5437e0c64f5e1cc9993b21feed297b.scope.
Dec  6 01:29:20 np0005548731 systemd[1]: Started libcrun container.
Dec  6 01:29:21 np0005548731 podman[77111]: 2025-12-06 06:29:21.025959175 +0000 UTC m=+0.411511889 container init 3c007fc92827acc4b739bc89cdb7313f7b5437e0c64f5e1cc9993b21feed297b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_wescoff, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Dec  6 01:29:21 np0005548731 podman[77111]: 2025-12-06 06:29:21.032905821 +0000 UTC m=+0.418458505 container start 3c007fc92827acc4b739bc89cdb7313f7b5437e0c64f5e1cc9993b21feed297b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_wescoff, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Dec  6 01:29:21 np0005548731 cranky_wescoff[77127]: 167 167
Dec  6 01:29:21 np0005548731 systemd[1]: libpod-3c007fc92827acc4b739bc89cdb7313f7b5437e0c64f5e1cc9993b21feed297b.scope: Deactivated successfully.
Dec  6 01:29:21 np0005548731 podman[77111]: 2025-12-06 06:29:21.075186434 +0000 UTC m=+0.460739118 container attach 3c007fc92827acc4b739bc89cdb7313f7b5437e0c64f5e1cc9993b21feed297b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_wescoff, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  6 01:29:21 np0005548731 podman[77111]: 2025-12-06 06:29:21.075968023 +0000 UTC m=+0.461520707 container died 3c007fc92827acc4b739bc89cdb7313f7b5437e0c64f5e1cc9993b21feed297b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_wescoff, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  6 01:29:21 np0005548731 podman[77111]: 2025-12-06 06:29:21.498781013 +0000 UTC m=+0.884333697 container remove 3c007fc92827acc4b739bc89cdb7313f7b5437e0c64f5e1cc9993b21feed297b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_wescoff, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  6 01:29:21 np0005548731 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec  6 01:29:21 np0005548731 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec  6 01:29:21 np0005548731 systemd[1]: libpod-conmon-3c007fc92827acc4b739bc89cdb7313f7b5437e0c64f5e1cc9993b21feed297b.scope: Deactivated successfully.
Dec  6 01:29:21 np0005548731 podman[77146]: 2025-12-06 06:29:21.561597928 +0000 UTC m=+0.024318905 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  6 01:29:22 np0005548731 podman[77146]: 2025-12-06 06:29:22.044116097 +0000 UTC m=+0.506837034 container create 687b59f955eb8e163477acdd8bf57519bc1c41f9a78116a8612ebb81e36acdd1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_hawking, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Dec  6 01:29:22 np0005548731 systemd[1]: Started libpod-conmon-687b59f955eb8e163477acdd8bf57519bc1c41f9a78116a8612ebb81e36acdd1.scope.
Dec  6 01:29:22 np0005548731 systemd[1]: Started libcrun container.
Dec  6 01:29:22 np0005548731 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e402fea4a60d042f21dd1e4b947b8ba51cbf21bfd7466d2e08d8266a2d3aafb8/merged/tmp/keyring supports timestamps until 2038 (0x7fffffff)
Dec  6 01:29:22 np0005548731 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e402fea4a60d042f21dd1e4b947b8ba51cbf21bfd7466d2e08d8266a2d3aafb8/merged/tmp/config supports timestamps until 2038 (0x7fffffff)
Dec  6 01:29:22 np0005548731 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e402fea4a60d042f21dd1e4b947b8ba51cbf21bfd7466d2e08d8266a2d3aafb8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  6 01:29:22 np0005548731 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e402fea4a60d042f21dd1e4b947b8ba51cbf21bfd7466d2e08d8266a2d3aafb8/merged/var/lib/ceph/mon/ceph-compute-2 supports timestamps until 2038 (0x7fffffff)
Dec  6 01:29:22 np0005548731 podman[77146]: 2025-12-06 06:29:22.224228952 +0000 UTC m=+0.686949919 container init 687b59f955eb8e163477acdd8bf57519bc1c41f9a78116a8612ebb81e36acdd1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_hawking, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  6 01:29:22 np0005548731 podman[77146]: 2025-12-06 06:29:22.23002786 +0000 UTC m=+0.692748817 container start 687b59f955eb8e163477acdd8bf57519bc1c41f9a78116a8612ebb81e36acdd1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_hawking, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef)
Dec  6 01:29:22 np0005548731 podman[77146]: 2025-12-06 06:29:22.243840431 +0000 UTC m=+0.706561398 container attach 687b59f955eb8e163477acdd8bf57519bc1c41f9a78116a8612ebb81e36acdd1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_hawking, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec  6 01:29:23 np0005548731 systemd[1]: libpod-687b59f955eb8e163477acdd8bf57519bc1c41f9a78116a8612ebb81e36acdd1.scope: Deactivated successfully.
Dec  6 01:29:23 np0005548731 podman[77146]: 2025-12-06 06:29:23.068733373 +0000 UTC m=+1.531454350 container died 687b59f955eb8e163477acdd8bf57519bc1c41f9a78116a8612ebb81e36acdd1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_hawking, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Dec  6 01:29:23 np0005548731 systemd[1]: var-lib-containers-storage-overlay-e402fea4a60d042f21dd1e4b947b8ba51cbf21bfd7466d2e08d8266a2d3aafb8-merged.mount: Deactivated successfully.
Dec  6 01:29:23 np0005548731 podman[77146]: 2025-12-06 06:29:23.12116408 +0000 UTC m=+1.583885027 container remove 687b59f955eb8e163477acdd8bf57519bc1c41f9a78116a8612ebb81e36acdd1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_hawking, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  6 01:29:23 np0005548731 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec  6 01:29:23 np0005548731 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec  6 01:29:23 np0005548731 systemd[1]: libpod-conmon-687b59f955eb8e163477acdd8bf57519bc1c41f9a78116a8612ebb81e36acdd1.scope: Deactivated successfully.
Dec  6 01:29:23 np0005548731 systemd[1]: Reloading.
Dec  6 01:29:23 np0005548731 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  6 01:29:23 np0005548731 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  6 01:29:23 np0005548731 systemd[1]: Reloading.
Dec  6 01:29:23 np0005548731 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  6 01:29:23 np0005548731 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  6 01:29:23 np0005548731 systemd[1]: Reached target All Ceph clusters and services.
Dec  6 01:29:23 np0005548731 systemd[1]: Reloading.
Dec  6 01:29:23 np0005548731 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  6 01:29:23 np0005548731 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  6 01:29:23 np0005548731 systemd[1]: Reached target Ceph cluster 40a1bae4-cf76-5610-8dab-c75116dfe0bb.
Dec  6 01:29:23 np0005548731 systemd[1]: Reloading.
Dec  6 01:29:23 np0005548731 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  6 01:29:23 np0005548731 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  6 01:29:24 np0005548731 systemd[1]: Reloading.
Dec  6 01:29:24 np0005548731 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  6 01:29:24 np0005548731 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  6 01:29:24 np0005548731 systemd[1]: Created slice Slice /system/ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb.
Dec  6 01:29:24 np0005548731 systemd[1]: Reached target System Time Set.
Dec  6 01:29:24 np0005548731 systemd[1]: Reached target System Time Synchronized.
Dec  6 01:29:24 np0005548731 systemd[1]: Starting Ceph mon.compute-2 for 40a1bae4-cf76-5610-8dab-c75116dfe0bb...
Dec  6 01:29:24 np0005548731 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec  6 01:29:24 np0005548731 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec  6 01:29:24 np0005548731 podman[77438]: 2025-12-06 06:29:24.655721943 +0000 UTC m=+0.023396722 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  6 01:29:24 np0005548731 podman[77438]: 2025-12-06 06:29:24.751000224 +0000 UTC m=+0.118674963 container create 541e7276f6038dd900483907d1613bdfcbf99ea3714cf53a920a7189986fab54 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-mon-compute-2, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  6 01:29:24 np0005548731 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/92fcdf3fbaa5ce5bfe0fcb2d46d1522ccba7686deb22775b9d9b7c0b7306b8ac/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  6 01:29:24 np0005548731 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/92fcdf3fbaa5ce5bfe0fcb2d46d1522ccba7686deb22775b9d9b7c0b7306b8ac/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  6 01:29:24 np0005548731 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/92fcdf3fbaa5ce5bfe0fcb2d46d1522ccba7686deb22775b9d9b7c0b7306b8ac/merged/var/lib/ceph/mon/ceph-compute-2 supports timestamps until 2038 (0x7fffffff)
Dec  6 01:29:24 np0005548731 podman[77438]: 2025-12-06 06:29:24.865046966 +0000 UTC m=+0.232721735 container init 541e7276f6038dd900483907d1613bdfcbf99ea3714cf53a920a7189986fab54 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-mon-compute-2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  6 01:29:24 np0005548731 podman[77438]: 2025-12-06 06:29:24.871161844 +0000 UTC m=+0.238836593 container start 541e7276f6038dd900483907d1613bdfcbf99ea3714cf53a920a7189986fab54 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-mon-compute-2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  6 01:29:24 np0005548731 bash[77438]: 541e7276f6038dd900483907d1613bdfcbf99ea3714cf53a920a7189986fab54
Dec  6 01:29:24 np0005548731 systemd[1]: Started Ceph mon.compute-2 for 40a1bae4-cf76-5610-8dab-c75116dfe0bb.
Dec  6 01:29:24 np0005548731 ceph-mon[77458]: set uid:gid to 167:167 (ceph:ceph)
Dec  6 01:29:24 np0005548731 ceph-mon[77458]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mon, pid 2
Dec  6 01:29:24 np0005548731 ceph-mon[77458]: pidfile_write: ignore empty --pid-file
Dec  6 01:29:24 np0005548731 ceph-mon[77458]: load: jerasure load: lrc 
Dec  6 01:29:24 np0005548731 ceph-mon[77458]: rocksdb: RocksDB version: 7.9.2
Dec  6 01:29:24 np0005548731 ceph-mon[77458]: rocksdb: Git sha 0
Dec  6 01:29:24 np0005548731 ceph-mon[77458]: rocksdb: Compile date 2025-05-06 23:30:25
Dec  6 01:29:24 np0005548731 ceph-mon[77458]: rocksdb: DB SUMMARY
Dec  6 01:29:24 np0005548731 ceph-mon[77458]: rocksdb: DB Session ID:  CWBL8WMDM4D79BU86E9T
Dec  6 01:29:24 np0005548731 ceph-mon[77458]: rocksdb: CURRENT file:  CURRENT
Dec  6 01:29:24 np0005548731 ceph-mon[77458]: rocksdb: IDENTITY file:  IDENTITY
Dec  6 01:29:24 np0005548731 ceph-mon[77458]: rocksdb: MANIFEST file:  MANIFEST-000005 size: 59 Bytes
Dec  6 01:29:24 np0005548731 ceph-mon[77458]: rocksdb: SST files in /var/lib/ceph/mon/ceph-compute-2/store.db dir, Total Num: 0, files: 
Dec  6 01:29:24 np0005548731 ceph-mon[77458]: rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-compute-2/store.db: 000004.log size: 511 ; 
Dec  6 01:29:24 np0005548731 ceph-mon[77458]: rocksdb:                         Options.error_if_exists: 0
Dec  6 01:29:24 np0005548731 ceph-mon[77458]: rocksdb:                       Options.create_if_missing: 0
Dec  6 01:29:24 np0005548731 ceph-mon[77458]: rocksdb:                         Options.paranoid_checks: 1
Dec  6 01:29:24 np0005548731 ceph-mon[77458]: rocksdb:             Options.flush_verify_memtable_count: 1
Dec  6 01:29:24 np0005548731 ceph-mon[77458]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Dec  6 01:29:24 np0005548731 ceph-mon[77458]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Dec  6 01:29:24 np0005548731 ceph-mon[77458]: rocksdb:                                     Options.env: 0x56191558fc40
Dec  6 01:29:24 np0005548731 ceph-mon[77458]: rocksdb:                                      Options.fs: PosixFileSystem
Dec  6 01:29:24 np0005548731 ceph-mon[77458]: rocksdb:                                Options.info_log: 0x56191711cfc0
Dec  6 01:29:24 np0005548731 ceph-mon[77458]: rocksdb:                Options.max_file_opening_threads: 16
Dec  6 01:29:24 np0005548731 ceph-mon[77458]: rocksdb:                              Options.statistics: (nil)
Dec  6 01:29:24 np0005548731 ceph-mon[77458]: rocksdb:                               Options.use_fsync: 0
Dec  6 01:29:24 np0005548731 ceph-mon[77458]: rocksdb:                       Options.max_log_file_size: 0
Dec  6 01:29:24 np0005548731 ceph-mon[77458]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Dec  6 01:29:24 np0005548731 ceph-mon[77458]: rocksdb:                   Options.log_file_time_to_roll: 0
Dec  6 01:29:24 np0005548731 ceph-mon[77458]: rocksdb:                       Options.keep_log_file_num: 1000
Dec  6 01:29:24 np0005548731 ceph-mon[77458]: rocksdb:                    Options.recycle_log_file_num: 0
Dec  6 01:29:24 np0005548731 ceph-mon[77458]: rocksdb:                         Options.allow_fallocate: 1
Dec  6 01:29:24 np0005548731 ceph-mon[77458]: rocksdb:                        Options.allow_mmap_reads: 0
Dec  6 01:29:24 np0005548731 ceph-mon[77458]: rocksdb:                       Options.allow_mmap_writes: 0
Dec  6 01:29:24 np0005548731 ceph-mon[77458]: rocksdb:                        Options.use_direct_reads: 0
Dec  6 01:29:24 np0005548731 ceph-mon[77458]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Dec  6 01:29:24 np0005548731 ceph-mon[77458]: rocksdb:          Options.create_missing_column_families: 0
Dec  6 01:29:24 np0005548731 ceph-mon[77458]: rocksdb:                              Options.db_log_dir: 
Dec  6 01:29:24 np0005548731 ceph-mon[77458]: rocksdb:                                 Options.wal_dir: 
Dec  6 01:29:24 np0005548731 ceph-mon[77458]: rocksdb:                Options.table_cache_numshardbits: 6
Dec  6 01:29:24 np0005548731 ceph-mon[77458]: rocksdb:                         Options.WAL_ttl_seconds: 0
Dec  6 01:29:24 np0005548731 ceph-mon[77458]: rocksdb:                       Options.WAL_size_limit_MB: 0
Dec  6 01:29:24 np0005548731 ceph-mon[77458]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Dec  6 01:29:24 np0005548731 ceph-mon[77458]: rocksdb:             Options.manifest_preallocation_size: 4194304
Dec  6 01:29:24 np0005548731 ceph-mon[77458]: rocksdb:                     Options.is_fd_close_on_exec: 1
Dec  6 01:29:24 np0005548731 ceph-mon[77458]: rocksdb:                   Options.advise_random_on_open: 1
Dec  6 01:29:24 np0005548731 ceph-mon[77458]: rocksdb:                    Options.db_write_buffer_size: 0
Dec  6 01:29:24 np0005548731 ceph-mon[77458]: rocksdb:                    Options.write_buffer_manager: 0x56191712cb40
Dec  6 01:29:24 np0005548731 ceph-mon[77458]: rocksdb:         Options.access_hint_on_compaction_start: 1
Dec  6 01:29:24 np0005548731 ceph-mon[77458]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Dec  6 01:29:24 np0005548731 ceph-mon[77458]: rocksdb:                      Options.use_adaptive_mutex: 0
Dec  6 01:29:24 np0005548731 ceph-mon[77458]: rocksdb:                            Options.rate_limiter: (nil)
Dec  6 01:29:24 np0005548731 ceph-mon[77458]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Dec  6 01:29:24 np0005548731 ceph-mon[77458]: rocksdb:                       Options.wal_recovery_mode: 2
Dec  6 01:29:24 np0005548731 ceph-mon[77458]: rocksdb:                  Options.enable_thread_tracking: 0
Dec  6 01:29:24 np0005548731 ceph-mon[77458]: rocksdb:                  Options.enable_pipelined_write: 0
Dec  6 01:29:24 np0005548731 ceph-mon[77458]: rocksdb:                  Options.unordered_write: 0
Dec  6 01:29:24 np0005548731 ceph-mon[77458]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Dec  6 01:29:24 np0005548731 ceph-mon[77458]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Dec  6 01:29:24 np0005548731 ceph-mon[77458]: rocksdb:             Options.write_thread_max_yield_usec: 100
Dec  6 01:29:24 np0005548731 ceph-mon[77458]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Dec  6 01:29:24 np0005548731 ceph-mon[77458]: rocksdb:                               Options.row_cache: None
Dec  6 01:29:24 np0005548731 ceph-mon[77458]: rocksdb:                              Options.wal_filter: None
Dec  6 01:29:24 np0005548731 ceph-mon[77458]: rocksdb:             Options.avoid_flush_during_recovery: 0
Dec  6 01:29:24 np0005548731 ceph-mon[77458]: rocksdb:             Options.allow_ingest_behind: 0
Dec  6 01:29:24 np0005548731 ceph-mon[77458]: rocksdb:             Options.two_write_queues: 0
Dec  6 01:29:24 np0005548731 ceph-mon[77458]: rocksdb:             Options.manual_wal_flush: 0
Dec  6 01:29:24 np0005548731 ceph-mon[77458]: rocksdb:             Options.wal_compression: 0
Dec  6 01:29:24 np0005548731 ceph-mon[77458]: rocksdb:             Options.atomic_flush: 0
Dec  6 01:29:24 np0005548731 ceph-mon[77458]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Dec  6 01:29:24 np0005548731 ceph-mon[77458]: rocksdb:                 Options.persist_stats_to_disk: 0
Dec  6 01:29:24 np0005548731 ceph-mon[77458]: rocksdb:                 Options.write_dbid_to_manifest: 0
Dec  6 01:29:24 np0005548731 ceph-mon[77458]: rocksdb:                 Options.log_readahead_size: 0
Dec  6 01:29:24 np0005548731 ceph-mon[77458]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Dec  6 01:29:24 np0005548731 ceph-mon[77458]: rocksdb:                 Options.best_efforts_recovery: 0
Dec  6 01:29:24 np0005548731 ceph-mon[77458]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Dec  6 01:29:24 np0005548731 ceph-mon[77458]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Dec  6 01:29:24 np0005548731 ceph-mon[77458]: rocksdb:             Options.allow_data_in_errors: 0
Dec  6 01:29:24 np0005548731 ceph-mon[77458]: rocksdb:             Options.db_host_id: __hostname__
Dec  6 01:29:24 np0005548731 ceph-mon[77458]: rocksdb:             Options.enforce_single_del_contracts: true
Dec  6 01:29:24 np0005548731 ceph-mon[77458]: rocksdb:             Options.max_background_jobs: 2
Dec  6 01:29:24 np0005548731 ceph-mon[77458]: rocksdb:             Options.max_background_compactions: -1
Dec  6 01:29:24 np0005548731 ceph-mon[77458]: rocksdb:             Options.max_subcompactions: 1
Dec  6 01:29:24 np0005548731 ceph-mon[77458]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Dec  6 01:29:24 np0005548731 ceph-mon[77458]: rocksdb:           Options.writable_file_max_buffer_size: 1048576
Dec  6 01:29:24 np0005548731 ceph-mon[77458]: rocksdb:             Options.delayed_write_rate : 16777216
Dec  6 01:29:24 np0005548731 ceph-mon[77458]: rocksdb:             Options.max_total_wal_size: 0
Dec  6 01:29:24 np0005548731 ceph-mon[77458]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Dec  6 01:29:24 np0005548731 ceph-mon[77458]: rocksdb:                   Options.stats_dump_period_sec: 600
Dec  6 01:29:24 np0005548731 ceph-mon[77458]: rocksdb:                 Options.stats_persist_period_sec: 600
Dec  6 01:29:24 np0005548731 ceph-mon[77458]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Dec  6 01:29:24 np0005548731 ceph-mon[77458]: rocksdb:                          Options.max_open_files: -1
Dec  6 01:29:24 np0005548731 ceph-mon[77458]: rocksdb:                          Options.bytes_per_sync: 0
Dec  6 01:29:24 np0005548731 ceph-mon[77458]: rocksdb:                      Options.wal_bytes_per_sync: 0
Dec  6 01:29:24 np0005548731 ceph-mon[77458]: rocksdb:                   Options.strict_bytes_per_sync: 0
Dec  6 01:29:24 np0005548731 ceph-mon[77458]: rocksdb:       Options.compaction_readahead_size: 0
Dec  6 01:29:24 np0005548731 ceph-mon[77458]: rocksdb:                  Options.max_background_flushes: -1
Dec  6 01:29:24 np0005548731 ceph-mon[77458]: rocksdb: Compression algorithms supported:
Dec  6 01:29:24 np0005548731 ceph-mon[77458]: rocksdb: 	kZSTD supported: 0
Dec  6 01:29:24 np0005548731 ceph-mon[77458]: rocksdb: 	kXpressCompression supported: 0
Dec  6 01:29:24 np0005548731 ceph-mon[77458]: rocksdb: 	kBZip2Compression supported: 0
Dec  6 01:29:24 np0005548731 ceph-mon[77458]: rocksdb: 	kZSTDNotFinalCompression supported: 0
Dec  6 01:29:24 np0005548731 ceph-mon[77458]: rocksdb: 	kLZ4Compression supported: 1
Dec  6 01:29:24 np0005548731 ceph-mon[77458]: rocksdb: 	kZlibCompression supported: 1
Dec  6 01:29:24 np0005548731 ceph-mon[77458]: rocksdb: 	kLZ4HCCompression supported: 1
Dec  6 01:29:24 np0005548731 ceph-mon[77458]: rocksdb: 	kSnappyCompression supported: 1
Dec  6 01:29:24 np0005548731 ceph-mon[77458]: rocksdb: Fast CRC32 supported: Supported on x86
Dec  6 01:29:24 np0005548731 ceph-mon[77458]: rocksdb: DMutex implementation: pthread_mutex_t
Dec  6 01:29:24 np0005548731 ceph-mon[77458]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-compute-2/store.db/MANIFEST-000005
Dec  6 01:29:24 np0005548731 ceph-mon[77458]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Dec  6 01:29:24 np0005548731 ceph-mon[77458]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  6 01:29:24 np0005548731 ceph-mon[77458]: rocksdb:           Options.merge_operator: 
Dec  6 01:29:24 np0005548731 ceph-mon[77458]: rocksdb:        Options.compaction_filter: None
Dec  6 01:29:24 np0005548731 ceph-mon[77458]: rocksdb:        Options.compaction_filter_factory: None
Dec  6 01:29:24 np0005548731 ceph-mon[77458]: rocksdb:  Options.sst_partitioner_factory: None
Dec  6 01:29:24 np0005548731 ceph-mon[77458]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  6 01:29:24 np0005548731 ceph-mon[77458]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  6 01:29:24 np0005548731 ceph-mon[77458]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56191711cc00)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x5619171151f0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Dec  6 01:29:24 np0005548731 ceph-mon[77458]: rocksdb:        Options.write_buffer_size: 33554432
Dec  6 01:29:24 np0005548731 ceph-mon[77458]: rocksdb:  Options.max_write_buffer_number: 2
Dec  6 01:29:24 np0005548731 ceph-mon[77458]: rocksdb:          Options.compression: NoCompression
Dec  6 01:29:24 np0005548731 ceph-mon[77458]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  6 01:29:24 np0005548731 ceph-mon[77458]: rocksdb:       Options.prefix_extractor: nullptr
Dec  6 01:29:24 np0005548731 ceph-mon[77458]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  6 01:29:24 np0005548731 ceph-mon[77458]: rocksdb:             Options.num_levels: 7
Dec  6 01:29:24 np0005548731 ceph-mon[77458]: rocksdb:        Options.min_write_buffer_number_to_merge: 1
Dec  6 01:29:24 np0005548731 ceph-mon[77458]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  6 01:29:24 np0005548731 ceph-mon[77458]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  6 01:29:24 np0005548731 ceph-mon[77458]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  6 01:29:24 np0005548731 ceph-mon[77458]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  6 01:29:24 np0005548731 ceph-mon[77458]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  6 01:29:24 np0005548731 ceph-mon[77458]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  6 01:29:24 np0005548731 ceph-mon[77458]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  6 01:29:24 np0005548731 ceph-mon[77458]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  6 01:29:24 np0005548731 ceph-mon[77458]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  6 01:29:24 np0005548731 ceph-mon[77458]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  6 01:29:24 np0005548731 ceph-mon[77458]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  6 01:29:24 np0005548731 ceph-mon[77458]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  6 01:29:24 np0005548731 ceph-mon[77458]: rocksdb:                  Options.compression_opts.level: 32767
Dec  6 01:29:24 np0005548731 ceph-mon[77458]: rocksdb:               Options.compression_opts.strategy: 0
Dec  6 01:29:24 np0005548731 ceph-mon[77458]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  6 01:29:24 np0005548731 ceph-mon[77458]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  6 01:29:24 np0005548731 ceph-mon[77458]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  6 01:29:24 np0005548731 ceph-mon[77458]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  6 01:29:24 np0005548731 ceph-mon[77458]: rocksdb:                  Options.compression_opts.enabled: false
Dec  6 01:29:24 np0005548731 ceph-mon[77458]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  6 01:29:24 np0005548731 ceph-mon[77458]: rocksdb:      Options.level0_file_num_compaction_trigger: 4
Dec  6 01:29:24 np0005548731 ceph-mon[77458]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  6 01:29:24 np0005548731 ceph-mon[77458]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  6 01:29:24 np0005548731 ceph-mon[77458]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  6 01:29:24 np0005548731 ceph-mon[77458]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  6 01:29:24 np0005548731 ceph-mon[77458]: rocksdb:                Options.max_bytes_for_level_base: 268435456
Dec  6 01:29:24 np0005548731 ceph-mon[77458]: rocksdb: Options.level_compaction_dynamic_level_bytes: 1
Dec  6 01:29:24 np0005548731 ceph-mon[77458]: rocksdb:          Options.max_bytes_for_level_multiplier: 10.000000
Dec  6 01:29:24 np0005548731 ceph-mon[77458]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  6 01:29:24 np0005548731 ceph-mon[77458]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  6 01:29:24 np0005548731 ceph-mon[77458]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  6 01:29:24 np0005548731 ceph-mon[77458]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  6 01:29:24 np0005548731 ceph-mon[77458]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  6 01:29:24 np0005548731 ceph-mon[77458]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  6 01:29:24 np0005548731 ceph-mon[77458]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  6 01:29:24 np0005548731 ceph-mon[77458]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  6 01:29:24 np0005548731 ceph-mon[77458]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  6 01:29:24 np0005548731 ceph-mon[77458]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  6 01:29:24 np0005548731 ceph-mon[77458]: rocksdb:                        Options.arena_block_size: 1048576
Dec  6 01:29:24 np0005548731 ceph-mon[77458]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  6 01:29:24 np0005548731 ceph-mon[77458]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  6 01:29:24 np0005548731 ceph-mon[77458]: rocksdb:                Options.disable_auto_compactions: 0
Dec  6 01:29:24 np0005548731 ceph-mon[77458]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  6 01:29:24 np0005548731 ceph-mon[77458]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  6 01:29:24 np0005548731 ceph-mon[77458]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  6 01:29:24 np0005548731 ceph-mon[77458]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  6 01:29:24 np0005548731 ceph-mon[77458]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  6 01:29:24 np0005548731 ceph-mon[77458]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  6 01:29:24 np0005548731 ceph-mon[77458]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  6 01:29:24 np0005548731 ceph-mon[77458]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  6 01:29:24 np0005548731 ceph-mon[77458]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  6 01:29:24 np0005548731 ceph-mon[77458]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  6 01:29:24 np0005548731 ceph-mon[77458]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  6 01:29:24 np0005548731 ceph-mon[77458]: rocksdb:                   Options.inplace_update_support: 0
Dec  6 01:29:24 np0005548731 ceph-mon[77458]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  6 01:29:24 np0005548731 ceph-mon[77458]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  6 01:29:24 np0005548731 ceph-mon[77458]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  6 01:29:24 np0005548731 ceph-mon[77458]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  6 01:29:24 np0005548731 ceph-mon[77458]: rocksdb:                           Options.bloom_locality: 0
Dec  6 01:29:24 np0005548731 ceph-mon[77458]: rocksdb:                    Options.max_successive_merges: 0
Dec  6 01:29:24 np0005548731 ceph-mon[77458]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  6 01:29:24 np0005548731 ceph-mon[77458]: rocksdb:                Options.paranoid_file_checks: 0
Dec  6 01:29:24 np0005548731 ceph-mon[77458]: rocksdb:                Options.force_consistency_checks: 1
Dec  6 01:29:24 np0005548731 ceph-mon[77458]: rocksdb:                Options.report_bg_io_stats: 0
Dec  6 01:29:24 np0005548731 ceph-mon[77458]: rocksdb:                               Options.ttl: 2592000
Dec  6 01:29:24 np0005548731 ceph-mon[77458]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  6 01:29:24 np0005548731 ceph-mon[77458]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  6 01:29:24 np0005548731 ceph-mon[77458]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  6 01:29:24 np0005548731 ceph-mon[77458]: rocksdb:                       Options.enable_blob_files: false
Dec  6 01:29:24 np0005548731 ceph-mon[77458]: rocksdb:                           Options.min_blob_size: 0
Dec  6 01:29:24 np0005548731 ceph-mon[77458]: rocksdb:                          Options.blob_file_size: 268435456
Dec  6 01:29:24 np0005548731 ceph-mon[77458]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  6 01:29:24 np0005548731 ceph-mon[77458]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  6 01:29:24 np0005548731 ceph-mon[77458]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  6 01:29:24 np0005548731 ceph-mon[77458]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  6 01:29:24 np0005548731 ceph-mon[77458]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  6 01:29:24 np0005548731 ceph-mon[77458]: rocksdb:                Options.blob_file_starting_level: 0
Dec  6 01:29:24 np0005548731 ceph-mon[77458]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  6 01:29:24 np0005548731 ceph-mon[77458]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-compute-2/store.db/MANIFEST-000005 succeeded,manifest_file_number is 5, next_file_number is 7, last_sequence is 0, log_number is 0,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 0
Dec  6 01:29:24 np0005548731 ceph-mon[77458]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 0
Dec  6 01:29:24 np0005548731 ceph-mon[77458]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: bc01f9a1-fda8-4008-aee1-1fa5ff4371a1
Dec  6 01:29:24 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765002564924169, "job": 1, "event": "recovery_started", "wal_files": [4]}
Dec  6 01:29:24 np0005548731 ceph-mon[77458]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #4 mode 2
Dec  6 01:29:24 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765002564926873, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 8, "file_size": 1648, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 1, "largest_seqno": 5, "table_properties": {"data_size": 523, "index_size": 31, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 115, "raw_average_key_size": 23, "raw_value_size": 401, "raw_average_value_size": 80, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765002564, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bc01f9a1-fda8-4008-aee1-1fa5ff4371a1", "db_session_id": "CWBL8WMDM4D79BU86E9T", "orig_file_number": 8, "seqno_to_time_mapping": "N/A"}}
Dec  6 01:29:24 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765002564927039, "job": 1, "event": "recovery_finished"}
Dec  6 01:29:24 np0005548731 ceph-mon[77458]: rocksdb: [db/version_set.cc:5047] Creating manifest 10
Dec  6 01:29:24 np0005548731 ceph-mon[77458]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000004.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  6 01:29:24 np0005548731 ceph-mon[77458]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x56191713ee00
Dec  6 01:29:24 np0005548731 ceph-mon[77458]: rocksdb: DB pointer 0x5619171c8000
Dec  6 01:29:24 np0005548731 ceph-mon[77458]: mon.compute-2 does not exist in monmap, will attempt to join an existing cluster
Dec  6 01:29:24 np0005548731 ceph-mon[77458]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec  6 01:29:24 np0005548731 ceph-mon[77458]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 0.0 total, 0.0 interval#012Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s#012Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      1/0    1.61 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.6      0.00              0.00         1    0.003       0      0       0.0       0.0#012 Sum      1/0    1.61 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.6      0.00              0.00         1    0.003       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.6      0.00              0.00         1    0.003       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.6      0.00              0.00         1    0.003       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.0 total, 0.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.09 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.09 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x5619171151f0#2 capacity: 512.00 MB usage: 0.86 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 6.1e-05 secs_since: 0#012Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.11 KB,2.08616e-05%) Misc(2,0.64 KB,0.00012219%)#012#012** File Read Latency Histogram By Level [default] **
Dec  6 01:29:24 np0005548731 ceph-mon[77458]: using public_addr v2:192.168.122.102:0/0 -> [v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0]
Dec  6 01:29:24 np0005548731 ceph-mon[77458]: starting mon.compute-2 rank -1 at public addrs [v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0] at bind addrs [v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0] mon_data /var/lib/ceph/mon/ceph-compute-2 fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb
Dec  6 01:29:24 np0005548731 ceph-mon[77458]: mon.compute-2@-1(???) e0 preinit fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb
Dec  6 01:29:25 np0005548731 ceph-mon[77458]: mon.compute-2@-1(synchronizing).mds e2 new map
Dec  6 01:29:25 np0005548731 ceph-mon[77458]: mon.compute-2@-1(synchronizing).mds e2 print_map#012e2#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0112#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112025-12-06T06:29:04.228355+0000#012modified#0112025-12-06T06:29:04.228395+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}#012max_mds#0111#012in#011#012up#011{}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0110#012 #012 
Dec  6 01:29:25 np0005548731 ceph-mon[77458]: mon.compute-2@-1(synchronizing).osd e0 _set_cache_ratios kv ratio 0.25 inc ratio 0.375 full ratio 0.375
Dec  6 01:29:25 np0005548731 ceph-mon[77458]: mon.compute-2@-1(synchronizing).osd e0 register_cache_with_pcm pcm target: 2147483648 pcm max: 1020054732 pcm min: 134217728 inc_osd_cache size: 1
Dec  6 01:29:25 np0005548731 ceph-mon[77458]: mon.compute-2@-1(synchronizing).osd e1 e1: 0 total, 0 up, 0 in
Dec  6 01:29:25 np0005548731 ceph-mon[77458]: mon.compute-2@-1(synchronizing).osd e2 e2: 0 total, 0 up, 0 in
Dec  6 01:29:25 np0005548731 ceph-mon[77458]: mon.compute-2@-1(synchronizing).osd e3 e3: 0 total, 0 up, 0 in
Dec  6 01:29:25 np0005548731 ceph-mon[77458]: mon.compute-2@-1(synchronizing).osd e4 e4: 1 total, 0 up, 1 in
Dec  6 01:29:25 np0005548731 ceph-mon[77458]: mon.compute-2@-1(synchronizing).osd e5 e5: 2 total, 0 up, 2 in
Dec  6 01:29:25 np0005548731 ceph-mon[77458]: mon.compute-2@-1(synchronizing).osd e6 e6: 2 total, 0 up, 2 in
Dec  6 01:29:25 np0005548731 ceph-mon[77458]: mon.compute-2@-1(synchronizing).osd e7 e7: 2 total, 0 up, 2 in
Dec  6 01:29:25 np0005548731 ceph-mon[77458]: mon.compute-2@-1(synchronizing).osd e8 e8: 2 total, 1 up, 2 in
Dec  6 01:29:25 np0005548731 ceph-mon[77458]: mon.compute-2@-1(synchronizing).osd e9 e9: 2 total, 1 up, 2 in
Dec  6 01:29:25 np0005548731 ceph-mon[77458]: mon.compute-2@-1(synchronizing).osd e10 e10: 2 total, 1 up, 2 in
Dec  6 01:29:25 np0005548731 ceph-mon[77458]: mon.compute-2@-1(synchronizing).osd e11 e11: 2 total, 1 up, 2 in
Dec  6 01:29:25 np0005548731 ceph-mon[77458]: mon.compute-2@-1(synchronizing).osd e12 e12: 2 total, 1 up, 2 in
Dec  6 01:29:25 np0005548731 ceph-mon[77458]: mon.compute-2@-1(synchronizing).osd e13 e13: 2 total, 1 up, 2 in
Dec  6 01:29:25 np0005548731 ceph-mon[77458]: mon.compute-2@-1(synchronizing).osd e14 e14: 2 total, 1 up, 2 in
Dec  6 01:29:25 np0005548731 ceph-mon[77458]: mon.compute-2@-1(synchronizing).osd e15 e15: 2 total, 1 up, 2 in
Dec  6 01:29:25 np0005548731 ceph-mon[77458]: mon.compute-2@-1(synchronizing).osd e16 e16: 2 total, 1 up, 2 in
Dec  6 01:29:25 np0005548731 ceph-mon[77458]: mon.compute-2@-1(synchronizing).osd e17 e17: 2 total, 1 up, 2 in
Dec  6 01:29:25 np0005548731 ceph-mon[77458]: mon.compute-2@-1(synchronizing).osd e18 e18: 2 total, 1 up, 2 in
Dec  6 01:29:25 np0005548731 ceph-mon[77458]: mon.compute-2@-1(synchronizing).osd e19 e19: 2 total, 1 up, 2 in
Dec  6 01:29:25 np0005548731 ceph-mon[77458]: mon.compute-2@-1(synchronizing).osd e20 e20: 2 total, 1 up, 2 in
Dec  6 01:29:25 np0005548731 ceph-mon[77458]: mon.compute-2@-1(synchronizing).osd e21 e21: 2 total, 1 up, 2 in
Dec  6 01:29:25 np0005548731 ceph-mon[77458]: mon.compute-2@-1(synchronizing).osd e22 e22: 2 total, 1 up, 2 in
Dec  6 01:29:25 np0005548731 ceph-mon[77458]: mon.compute-2@-1(synchronizing).osd e23 e23: 2 total, 1 up, 2 in
Dec  6 01:29:25 np0005548731 ceph-mon[77458]: mon.compute-2@-1(synchronizing).osd e24 e24: 2 total, 1 up, 2 in
Dec  6 01:29:25 np0005548731 ceph-mon[77458]: mon.compute-2@-1(synchronizing).osd e25 e25: 2 total, 1 up, 2 in
Dec  6 01:29:25 np0005548731 ceph-mon[77458]: mon.compute-2@-1(synchronizing).osd e26 e26: 2 total, 1 up, 2 in
Dec  6 01:29:25 np0005548731 ceph-mon[77458]: mon.compute-2@-1(synchronizing).osd e27 e27: 2 total, 1 up, 2 in
Dec  6 01:29:25 np0005548731 ceph-mon[77458]: mon.compute-2@-1(synchronizing).osd e28 e28: 2 total, 1 up, 2 in
Dec  6 01:29:25 np0005548731 ceph-mon[77458]: mon.compute-2@-1(synchronizing).osd e29 e29: 2 total, 1 up, 2 in
Dec  6 01:29:25 np0005548731 ceph-mon[77458]: mon.compute-2@-1(synchronizing).osd e30 e30: 2 total, 1 up, 2 in
Dec  6 01:29:25 np0005548731 ceph-mon[77458]: mon.compute-2@-1(synchronizing).osd e31 e31: 2 total, 1 up, 2 in
Dec  6 01:29:25 np0005548731 ceph-mon[77458]: mon.compute-2@-1(synchronizing).osd e32 e32: 2 total, 1 up, 2 in
Dec  6 01:29:25 np0005548731 ceph-mon[77458]: mon.compute-2@-1(synchronizing).osd e33 e33: 2 total, 1 up, 2 in
Dec  6 01:29:25 np0005548731 ceph-mon[77458]: mon.compute-2@-1(synchronizing).osd e34 e34: 2 total, 1 up, 2 in
Dec  6 01:29:25 np0005548731 ceph-mon[77458]: mon.compute-2@-1(synchronizing).osd e35 e35: 2 total, 1 up, 2 in
Dec  6 01:29:25 np0005548731 ceph-mon[77458]: mon.compute-2@-1(synchronizing).osd e36 e36: 2 total, 2 up, 2 in
Dec  6 01:29:25 np0005548731 ceph-mon[77458]: mon.compute-2@-1(synchronizing).osd e37 e37: 2 total, 2 up, 2 in
Dec  6 01:29:25 np0005548731 ceph-mon[77458]: mon.compute-2@-1(synchronizing).osd e38 e38: 2 total, 2 up, 2 in
Dec  6 01:29:25 np0005548731 ceph-mon[77458]: mon.compute-2@-1(synchronizing).osd e39 e39: 2 total, 2 up, 2 in
Dec  6 01:29:25 np0005548731 ceph-mon[77458]: mon.compute-2@-1(synchronizing).osd e39 crush map has features 3314933000852226048, adjusting msgr requires
Dec  6 01:29:25 np0005548731 ceph-mon[77458]: mon.compute-2@-1(synchronizing).osd e39 crush map has features 288514051259236352, adjusting msgr requires
Dec  6 01:29:25 np0005548731 ceph-mon[77458]: mon.compute-2@-1(synchronizing).osd e39 crush map has features 288514051259236352, adjusting msgr requires
Dec  6 01:29:25 np0005548731 ceph-mon[77458]: mon.compute-2@-1(synchronizing).osd e39 crush map has features 288514051259236352, adjusting msgr requires
Dec  6 01:29:25 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]: dispatch
Dec  6 01:29:25 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]': finished
Dec  6 01:29:25 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]: dispatch
Dec  6 01:29:25 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]': finished
Dec  6 01:29:25 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]: dispatch
Dec  6 01:29:25 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec  6 01:29:25 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]': finished
Dec  6 01:29:25 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]': finished
Dec  6 01:29:25 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]: dispatch
Dec  6 01:29:25 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 01:29:25 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 01:29:25 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 01:29:25 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 01:29:25 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Dec  6 01:29:25 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  6 01:29:25 np0005548731 ceph-mon[77458]: Updating compute-2:/etc/ceph/ceph.conf
Dec  6 01:29:25 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]': finished
Dec  6 01:29:25 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"}]: dispatch
Dec  6 01:29:25 np0005548731 ceph-mon[77458]: Updating compute-2:/var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/config/ceph.conf
Dec  6 01:29:25 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec  6 01:29:25 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec  6 01:29:25 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"}]': finished
Dec  6 01:29:25 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]': finished
Dec  6 01:29:25 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]': finished
Dec  6 01:29:25 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]: dispatch
Dec  6 01:29:25 np0005548731 ceph-mon[77458]: Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Dec  6 01:29:25 np0005548731 ceph-mon[77458]: OSD bench result of 1657.429499 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.1. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Dec  6 01:29:25 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]': finished
Dec  6 01:29:25 np0005548731 ceph-mon[77458]: Updating compute-2:/var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/config/ceph.client.admin.keyring
Dec  6 01:29:25 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"}]: dispatch
Dec  6 01:29:25 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"}]': finished
Dec  6 01:29:25 np0005548731 ceph-mon[77458]: osd.1 [v2:192.168.122.101:6800/2683792420,v1:192.168.122.101:6801/2683792420] boot
Dec  6 01:29:25 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 01:29:25 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 01:29:25 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 01:29:25 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Dec  6 01:29:25 np0005548731 ceph-mon[77458]: Deploying daemon mon.compute-2 on compute-2
Dec  6 01:29:25 np0005548731 ceph-mon[77458]: Health check cleared: CEPHADM_APPLY_SPEC_FAIL (was: Failed to apply 2 service(s): mon,mgr)
Dec  6 01:29:25 np0005548731 ceph-mon[77458]: Health check failed: Reduced data availability: 1 pg inactive (PG_AVAILABILITY)
Dec  6 01:29:25 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 01:29:25 np0005548731 ceph-mon[77458]: mon.compute-2@-1(synchronizing).paxosservice(auth 1..8) refresh upgraded, format 0 -> 3
Dec  6 01:29:27 np0005548731 ceph-mon[77458]: mon.compute-2@-1(probing) e2  my rank is now 1 (was -1)
Dec  6 01:29:27 np0005548731 ceph-mon[77458]: log_channel(cluster) log [INF] : mon.compute-2 calling monitor election
Dec  6 01:29:27 np0005548731 ceph-mon[77458]: paxos.1).electionLogic(0) init, first boot, initializing epoch at 1 
Dec  6 01:29:28 np0005548731 ceph-mon[77458]: mon.compute-2@1(electing) e2 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec  6 01:29:30 np0005548731 systemd-logind[794]: Session 19 logged out. Waiting for processes to exit.
Dec  6 01:29:30 np0005548731 systemd[1]: session-19.scope: Deactivated successfully.
Dec  6 01:29:30 np0005548731 systemd[1]: session-19.scope: Consumed 8.330s CPU time.
Dec  6 01:29:30 np0005548731 systemd-logind[794]: Removed session 19.
Dec  6 01:29:30 np0005548731 ceph-mon[77458]: mon.compute-2@1(electing) e2  adding peer [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] to list of hints
Dec  6 01:29:31 np0005548731 ceph-mon[77458]: mon.compute-2@1(electing) e2 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec  6 01:29:31 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e2 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={4=support erasure code pools,5=new-style osdmap encoding,6=support isa/lrc erasure code,7=support shec erasure code}
Dec  6 01:29:31 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e2 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={8=support monmap features,9=luminous ondisk layout,10=mimic ondisk layout,11=nautilus ondisk layout,12=octopus ondisk layout,13=pacific ondisk layout,14=quincy ondisk layout,15=reef ondisk layout}
Dec  6 01:29:31 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e2 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec  6 01:29:31 np0005548731 ceph-mon[77458]: mgrc update_daemon_metadata mon.compute-2 metadata {addrs=[v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0],arch=x86_64,ceph_release=reef,ceph_version=ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable),ceph_version_short=18.2.7,ceph_version_when_created=ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable),compression_algorithms=none, snappy, zlib, zstd, lz4,container_hostname=compute-2,container_image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0,cpu=AMD EPYC-Rome Processor,created_at=2025-12-06T06:29:22.676970Z,device_ids=,device_paths=vda=/dev/disk/by-path/pci-0000:00:04.0,devices=vda,distro=centos,distro_description=CentOS Stream 9,distro_version=9,hostname=compute-2,kernel_description=#1 SMP PREEMPT_DYNAMIC Fri Nov 28 14:01:17 UTC 2025,kernel_version=5.14.0-645.el9.x86_64,mem_swap_kb=1048572,mem_total_kb=7864320,os=Linux}
Dec  6 01:29:31 np0005548731 ceph-mon[77458]: mon.compute-0 calling monitor election
Dec  6 01:29:31 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec  6 01:29:31 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec  6 01:29:31 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec  6 01:29:31 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"}]: dispatch
Dec  6 01:29:31 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec  6 01:29:31 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec  6 01:29:31 np0005548731 ceph-mon[77458]: mon.compute-2 calling monitor election
Dec  6 01:29:31 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec  6 01:29:31 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec  6 01:29:31 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec  6 01:29:31 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"}]: dispatch
Dec  6 01:29:31 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec  6 01:29:31 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec  6 01:29:31 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec  6 01:29:31 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec  6 01:29:31 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec  6 01:29:31 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"}]: dispatch
Dec  6 01:29:31 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec  6 01:29:31 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec  6 01:29:31 np0005548731 ceph-mon[77458]: mon.compute-0 is new leader, mons compute-0,compute-2 in quorum (ranks 0,1)
Dec  6 01:29:31 np0005548731 ceph-mon[77458]: Health detail: HEALTH_ERR 1 filesystem is offline; 1 filesystem is online with fewer MDS than max_mds; Reduced data availability: 1 pg inactive
Dec  6 01:29:31 np0005548731 ceph-mon[77458]: [ERR] MDS_ALL_DOWN: 1 filesystem is offline
Dec  6 01:29:31 np0005548731 ceph-mon[77458]:    fs cephfs is offline because no MDS is active for it.
Dec  6 01:29:31 np0005548731 ceph-mon[77458]: [WRN] MDS_UP_LESS_THAN_MAX: 1 filesystem is online with fewer MDS than max_mds
Dec  6 01:29:31 np0005548731 ceph-mon[77458]:    fs cephfs has 0 MDS online, but wants 1
Dec  6 01:29:31 np0005548731 ceph-mon[77458]: [WRN] PG_AVAILABILITY: Reduced data availability: 1 pg inactive
Dec  6 01:29:31 np0005548731 ceph-mon[77458]:    pg 1.0 is stuck inactive for 62s, current state unknown, last acting []
Dec  6 01:29:31 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 01:29:31 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 01:29:31 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 01:29:31 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e40 e40: 2 total, 2 up, 2 in
Dec  6 01:29:31 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e2 handle_auth_request failed to assign global_id
Dec  6 01:29:32 np0005548731 podman[77637]: 2025-12-06 06:29:32.280096774 +0000 UTC m=+0.082119227 container create 6cfeb48c8a43457947804f6b16c40d547e5d42ffa90399a6d2fbc4f3b5cc30a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_sinoussi, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  6 01:29:32 np0005548731 podman[77637]: 2025-12-06 06:29:32.220656692 +0000 UTC m=+0.022679165 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  6 01:29:32 np0005548731 systemd[1]: Started libpod-conmon-6cfeb48c8a43457947804f6b16c40d547e5d42ffa90399a6d2fbc4f3b5cc30a9.scope.
Dec  6 01:29:32 np0005548731 systemd[1]: Started libcrun container.
Dec  6 01:29:32 np0005548731 podman[77637]: 2025-12-06 06:29:32.3608932 +0000 UTC m=+0.162915683 container init 6cfeb48c8a43457947804f6b16c40d547e5d42ffa90399a6d2fbc4f3b5cc30a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_sinoussi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507)
Dec  6 01:29:32 np0005548731 podman[77637]: 2025-12-06 06:29:32.368507792 +0000 UTC m=+0.170530245 container start 6cfeb48c8a43457947804f6b16c40d547e5d42ffa90399a6d2fbc4f3b5cc30a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_sinoussi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Dec  6 01:29:32 np0005548731 reverent_sinoussi[77653]: 167 167
Dec  6 01:29:32 np0005548731 systemd[1]: libpod-6cfeb48c8a43457947804f6b16c40d547e5d42ffa90399a6d2fbc4f3b5cc30a9.scope: Deactivated successfully.
Dec  6 01:29:32 np0005548731 podman[77637]: 2025-12-06 06:29:32.377423627 +0000 UTC m=+0.179446100 container attach 6cfeb48c8a43457947804f6b16c40d547e5d42ffa90399a6d2fbc4f3b5cc30a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_sinoussi, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2)
Dec  6 01:29:32 np0005548731 podman[77637]: 2025-12-06 06:29:32.379025005 +0000 UTC m=+0.181047458 container died 6cfeb48c8a43457947804f6b16c40d547e5d42ffa90399a6d2fbc4f3b5cc30a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_sinoussi, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  6 01:29:32 np0005548731 systemd[1]: var-lib-containers-storage-overlay-9b0592b2e9ba7ea1ff411a9ca2983825c2a5ad0712115c1e794e6bd35f73b91c-merged.mount: Deactivated successfully.
Dec  6 01:29:32 np0005548731 podman[77637]: 2025-12-06 06:29:32.494695326 +0000 UTC m=+0.296717779 container remove 6cfeb48c8a43457947804f6b16c40d547e5d42ffa90399a6d2fbc4f3b5cc30a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_sinoussi, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Dec  6 01:29:32 np0005548731 systemd[1]: libpod-conmon-6cfeb48c8a43457947804f6b16c40d547e5d42ffa90399a6d2fbc4f3b5cc30a9.scope: Deactivated successfully.
Dec  6 01:29:32 np0005548731 systemd[1]: Reloading.
Dec  6 01:29:32 np0005548731 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  6 01:29:32 np0005548731 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  6 01:29:32 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 01:29:32 np0005548731 ceph-mon[77458]: Health check cleared: PG_AVAILABILITY (was: Reduced data availability: 1 pg inactive)
Dec  6 01:29:32 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-2.ytlehq", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Dec  6 01:29:32 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]': finished
Dec  6 01:29:32 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]': finished
Dec  6 01:29:32 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]': finished
Dec  6 01:29:32 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"}]': finished
Dec  6 01:29:32 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]': finished
Dec  6 01:29:32 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]': finished
Dec  6 01:29:32 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]': finished
Dec  6 01:29:32 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]': finished
Dec  6 01:29:32 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]': finished
Dec  6 01:29:32 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"}]': finished
Dec  6 01:29:32 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]': finished
Dec  6 01:29:32 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]': finished
Dec  6 01:29:32 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]': finished
Dec  6 01:29:32 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]': finished
Dec  6 01:29:32 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]': finished
Dec  6 01:29:32 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"}]': finished
Dec  6 01:29:32 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]': finished
Dec  6 01:29:32 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]': finished
Dec  6 01:29:32 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-2.ytlehq", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Dec  6 01:29:32 np0005548731 ceph-mon[77458]: Deploying daemon mgr.compute-2.ytlehq on compute-2
Dec  6 01:29:32 np0005548731 systemd[1]: Reloading.
Dec  6 01:29:32 np0005548731 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  6 01:29:32 np0005548731 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  6 01:29:33 np0005548731 systemd[1]: Starting Ceph mgr.compute-2.ytlehq for 40a1bae4-cf76-5610-8dab-c75116dfe0bb...
Dec  6 01:29:33 np0005548731 podman[77799]: 2025-12-06 06:29:33.305745926 +0000 UTC m=+0.041161147 container create 928436f78d64081ae3691d28c122b689a1ed037e2f338277a93bb784afe9c01a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-mgr-compute-2-ytlehq, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  6 01:29:33 np0005548731 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/83a441162afe5ca7325ca83d438c9608a0f59fb80bf1b9c7b58d54e1ebee2515/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  6 01:29:33 np0005548731 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/83a441162afe5ca7325ca83d438c9608a0f59fb80bf1b9c7b58d54e1ebee2515/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  6 01:29:33 np0005548731 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/83a441162afe5ca7325ca83d438c9608a0f59fb80bf1b9c7b58d54e1ebee2515/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  6 01:29:33 np0005548731 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/83a441162afe5ca7325ca83d438c9608a0f59fb80bf1b9c7b58d54e1ebee2515/merged/var/lib/ceph/mgr/ceph-compute-2.ytlehq supports timestamps until 2038 (0x7fffffff)
Dec  6 01:29:33 np0005548731 podman[77799]: 2025-12-06 06:29:33.367467055 +0000 UTC m=+0.102882306 container init 928436f78d64081ae3691d28c122b689a1ed037e2f338277a93bb784afe9c01a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-mgr-compute-2-ytlehq, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Dec  6 01:29:33 np0005548731 podman[77799]: 2025-12-06 06:29:33.373132401 +0000 UTC m=+0.108547642 container start 928436f78d64081ae3691d28c122b689a1ed037e2f338277a93bb784afe9c01a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-mgr-compute-2-ytlehq, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507)
Dec  6 01:29:33 np0005548731 bash[77799]: 928436f78d64081ae3691d28c122b689a1ed037e2f338277a93bb784afe9c01a
Dec  6 01:29:33 np0005548731 podman[77799]: 2025-12-06 06:29:33.288804601 +0000 UTC m=+0.024219852 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  6 01:29:33 np0005548731 systemd[1]: Started Ceph mgr.compute-2.ytlehq for 40a1bae4-cf76-5610-8dab-c75116dfe0bb.
Dec  6 01:29:33 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e2 handle_auth_request failed to assign global_id
Dec  6 01:29:33 np0005548731 ceph-mgr[77818]: set uid:gid to 167:167 (ceph:ceph)
Dec  6 01:29:33 np0005548731 ceph-mgr[77818]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mgr, pid 2
Dec  6 01:29:33 np0005548731 ceph-mgr[77818]: pidfile_write: ignore empty --pid-file
Dec  6 01:29:33 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e2 handle_auth_request failed to assign global_id
Dec  6 01:29:33 np0005548731 ceph-mgr[77818]: mgr[py] Loading python module 'alerts'
Dec  6 01:29:33 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e41 e41: 2 total, 2 up, 2 in
Dec  6 01:29:33 np0005548731 ceph-mgr[77818]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Dec  6 01:29:33 np0005548731 ceph-mgr[77818]: mgr[py] Loading python module 'balancer'
Dec  6 01:29:33 np0005548731 ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-mgr-compute-2-ytlehq[77814]: 2025-12-06T06:29:33.889+0000 7f8f93ff1140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Dec  6 01:29:33 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 01:29:33 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 01:29:33 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 01:29:33 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 01:29:33 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-1.nmklwp", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Dec  6 01:29:33 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-1.nmklwp", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Dec  6 01:29:34 np0005548731 ceph-mgr[77818]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Dec  6 01:29:34 np0005548731 ceph-mgr[77818]: mgr[py] Loading python module 'cephadm'
Dec  6 01:29:34 np0005548731 ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-mgr-compute-2-ytlehq[77814]: 2025-12-06T06:29:34.172+0000 7f8f93ff1140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Dec  6 01:29:34 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e2  adding peer [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] to list of hints
Dec  6 01:29:34 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e41 _set_new_cache_sizes cache_size:1019916654 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  6 01:29:34 np0005548731 ceph-mon[77458]: Deploying daemon mgr.compute-1.nmklwp on compute-1
Dec  6 01:29:36 np0005548731 ceph-mgr[77818]: mgr[py] Loading python module 'crash'
Dec  6 01:29:36 np0005548731 ceph-mgr[77818]: mgr[py] Module crash has missing NOTIFY_TYPES member
Dec  6 01:29:36 np0005548731 ceph-mgr[77818]: mgr[py] Loading python module 'dashboard'
Dec  6 01:29:36 np0005548731 ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-mgr-compute-2-ytlehq[77814]: 2025-12-06T06:29:36.696+0000 7f8f93ff1140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Dec  6 01:29:38 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e2  adding peer [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] to list of hints
Dec  6 01:29:38 np0005548731 ceph-mon[77458]: log_channel(cluster) log [INF] : mon.compute-2 calling monitor election
Dec  6 01:29:38 np0005548731 ceph-mon[77458]: paxos.1).electionLogic(10) init, last seen epoch 10
Dec  6 01:29:38 np0005548731 ceph-mon[77458]: mon.compute-2@1(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec  6 01:29:38 np0005548731 ceph-mgr[77818]: mgr[py] Loading python module 'devicehealth'
Dec  6 01:29:38 np0005548731 ceph-mgr[77818]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Dec  6 01:29:38 np0005548731 ceph-mgr[77818]: mgr[py] Loading python module 'diskprediction_local'
Dec  6 01:29:38 np0005548731 ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-mgr-compute-2-ytlehq[77814]: 2025-12-06T06:29:38.768+0000 7f8f93ff1140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Dec  6 01:29:39 np0005548731 ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-mgr-compute-2-ytlehq[77814]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Dec  6 01:29:39 np0005548731 ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-mgr-compute-2-ytlehq[77814]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Dec  6 01:29:39 np0005548731 ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-mgr-compute-2-ytlehq[77814]:  from numpy import show_config as show_numpy_config
Dec  6 01:29:39 np0005548731 ceph-mgr[77818]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Dec  6 01:29:39 np0005548731 ceph-mgr[77818]: mgr[py] Loading python module 'influx'
Dec  6 01:29:39 np0005548731 ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-mgr-compute-2-ytlehq[77814]: 2025-12-06T06:29:39.390+0000 7f8f93ff1140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Dec  6 01:29:39 np0005548731 ceph-mgr[77818]: mgr[py] Module influx has missing NOTIFY_TYPES member
Dec  6 01:29:39 np0005548731 ceph-mgr[77818]: mgr[py] Loading python module 'insights'
Dec  6 01:29:39 np0005548731 ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-mgr-compute-2-ytlehq[77814]: 2025-12-06T06:29:39.671+0000 7f8f93ff1140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Dec  6 01:29:39 np0005548731 ceph-mgr[77818]: mgr[py] Loading python module 'iostat'
Dec  6 01:29:40 np0005548731 ceph-mgr[77818]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Dec  6 01:29:40 np0005548731 ceph-mgr[77818]: mgr[py] Loading python module 'k8sevents'
Dec  6 01:29:40 np0005548731 ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-mgr-compute-2-ytlehq[77814]: 2025-12-06T06:29:40.214+0000 7f8f93ff1140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Dec  6 01:29:40 np0005548731 ceph-mon[77458]: mon.compute-2@1(electing) e3 handle_auth_request failed to assign global_id
Dec  6 01:29:41 np0005548731 ceph-mon[77458]: mon.compute-2@1(electing) e3 handle_auth_request failed to assign global_id
Dec  6 01:29:41 np0005548731 ceph-mon[77458]: mon.compute-2@1(electing) e3 handle_auth_request failed to assign global_id
Dec  6 01:29:42 np0005548731 ceph-mgr[77818]: mgr[py] Loading python module 'localpool'
Dec  6 01:29:42 np0005548731 ceph-mon[77458]: mon.compute-2@1(electing) e3 handle_auth_request failed to assign global_id
Dec  6 01:29:42 np0005548731 ceph-mgr[77818]: mgr[py] Loading python module 'mds_autoscaler'
Dec  6 01:29:43 np0005548731 ceph-mgr[77818]: mgr[py] Loading python module 'mirroring'
Dec  6 01:29:43 np0005548731 ceph-mon[77458]: mon.compute-2@1(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec  6 01:29:43 np0005548731 ceph-mgr[77818]: mgr[py] Loading python module 'nfs'
Dec  6 01:29:43 np0005548731 ceph-mon[77458]: mon.compute-2@1(electing) e3 handle_auth_request failed to assign global_id
Dec  6 01:29:44 np0005548731 ceph-mgr[77818]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Dec  6 01:29:44 np0005548731 ceph-mgr[77818]: mgr[py] Loading python module 'orchestrator'
Dec  6 01:29:44 np0005548731 ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-mgr-compute-2-ytlehq[77814]: 2025-12-06T06:29:44.327+0000 7f8f93ff1140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Dec  6 01:29:45 np0005548731 ceph-mgr[77818]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Dec  6 01:29:45 np0005548731 ceph-mgr[77818]: mgr[py] Loading python module 'osd_perf_query'
Dec  6 01:29:45 np0005548731 ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-mgr-compute-2-ytlehq[77814]: 2025-12-06T06:29:45.095+0000 7f8f93ff1140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Dec  6 01:29:45 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec  6 01:29:45 np0005548731 ceph-mgr[77818]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Dec  6 01:29:45 np0005548731 ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-mgr-compute-2-ytlehq[77814]: 2025-12-06T06:29:45.420+0000 7f8f93ff1140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Dec  6 01:29:45 np0005548731 ceph-mgr[77818]: mgr[py] Loading python module 'osd_support'
Dec  6 01:29:45 np0005548731 ceph-mgr[77818]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Dec  6 01:29:45 np0005548731 ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-mgr-compute-2-ytlehq[77814]: 2025-12-06T06:29:45.704+0000 7f8f93ff1140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Dec  6 01:29:45 np0005548731 ceph-mgr[77818]: mgr[py] Loading python module 'pg_autoscaler'
Dec  6 01:29:46 np0005548731 ceph-mgr[77818]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Dec  6 01:29:46 np0005548731 ceph-mgr[77818]: mgr[py] Loading python module 'progress'
Dec  6 01:29:46 np0005548731 ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-mgr-compute-2-ytlehq[77814]: 2025-12-06T06:29:46.026+0000 7f8f93ff1140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Dec  6 01:29:46 np0005548731 ceph-mgr[77818]: mgr[py] Module progress has missing NOTIFY_TYPES member
Dec  6 01:29:46 np0005548731 ceph-mgr[77818]: mgr[py] Loading python module 'prometheus'
Dec  6 01:29:46 np0005548731 ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-mgr-compute-2-ytlehq[77814]: 2025-12-06T06:29:46.271+0000 7f8f93ff1140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Dec  6 01:29:47 np0005548731 ceph-mgr[77818]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Dec  6 01:29:47 np0005548731 ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-mgr-compute-2-ytlehq[77814]: 2025-12-06T06:29:47.363+0000 7f8f93ff1140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Dec  6 01:29:47 np0005548731 ceph-mgr[77818]: mgr[py] Loading python module 'rbd_support'
Dec  6 01:29:47 np0005548731 ceph-mgr[77818]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Dec  6 01:29:47 np0005548731 ceph-mgr[77818]: mgr[py] Loading python module 'restful'
Dec  6 01:29:47 np0005548731 ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-mgr-compute-2-ytlehq[77814]: 2025-12-06T06:29:47.719+0000 7f8f93ff1140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Dec  6 01:29:47 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e42 e42: 2 total, 2 up, 2 in
Dec  6 01:29:47 np0005548731 ceph-mon[77458]: mon.compute-0 calling monitor election
Dec  6 01:29:47 np0005548731 ceph-mon[77458]: mon.compute-2 calling monitor election
Dec  6 01:29:47 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec  6 01:29:47 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec  6 01:29:47 np0005548731 ceph-mon[77458]: mon.compute-0 is new leader, mons compute-0,compute-2,compute-1 in quorum (ranks 0,1,2)
Dec  6 01:29:47 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec  6 01:29:47 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec  6 01:29:47 np0005548731 ceph-mon[77458]: Health detail: HEALTH_ERR 1 filesystem is offline; 1 filesystem is online with fewer MDS than max_mds
Dec  6 01:29:47 np0005548731 ceph-mon[77458]: [ERR] MDS_ALL_DOWN: 1 filesystem is offline
Dec  6 01:29:47 np0005548731 ceph-mon[77458]:    fs cephfs is offline because no MDS is active for it.
Dec  6 01:29:47 np0005548731 ceph-mon[77458]: [WRN] MDS_UP_LESS_THAN_MAX: 1 filesystem is online with fewer MDS than max_mds
Dec  6 01:29:47 np0005548731 ceph-mon[77458]:    fs cephfs has 0 MDS online, but wants 1
Dec  6 01:29:48 np0005548731 ceph-mgr[77818]: mgr[py] Loading python module 'rgw'
Dec  6 01:29:49 np0005548731 ceph-mgr[77818]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Dec  6 01:29:49 np0005548731 ceph-mgr[77818]: mgr[py] Loading python module 'rook'
Dec  6 01:29:49 np0005548731 ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-mgr-compute-2-ytlehq[77814]: 2025-12-06T06:29:49.492+0000 7f8f93ff1140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Dec  6 01:29:49 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e42 _set_new_cache_sizes cache_size:1020052917 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  6 01:29:50 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e43 e43: 2 total, 2 up, 2 in
Dec  6 01:29:50 np0005548731 ceph-mon[77458]: mon.compute-1 calling monitor election
Dec  6 01:29:50 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 01:29:50 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec  6 01:29:50 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec  6 01:29:50 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]': finished
Dec  6 01:29:50 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]': finished
Dec  6 01:29:50 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]': finished
Dec  6 01:29:50 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]': finished
Dec  6 01:29:51 np0005548731 ceph-mgr[77818]: mgr[py] Module rook has missing NOTIFY_TYPES member
Dec  6 01:29:51 np0005548731 ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-mgr-compute-2-ytlehq[77814]: 2025-12-06T06:29:51.913+0000 7f8f93ff1140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Dec  6 01:29:51 np0005548731 ceph-mgr[77818]: mgr[py] Loading python module 'selftest'
Dec  6 01:29:52 np0005548731 ceph-mgr[77818]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Dec  6 01:29:52 np0005548731 ceph-mgr[77818]: mgr[py] Loading python module 'snap_schedule'
Dec  6 01:29:52 np0005548731 ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-mgr-compute-2-ytlehq[77814]: 2025-12-06T06:29:52.213+0000 7f8f93ff1140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Dec  6 01:29:52 np0005548731 ceph-mgr[77818]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Dec  6 01:29:52 np0005548731 ceph-mgr[77818]: mgr[py] Loading python module 'stats'
Dec  6 01:29:52 np0005548731 ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-mgr-compute-2-ytlehq[77814]: 2025-12-06T06:29:52.496+0000 7f8f93ff1140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Dec  6 01:29:52 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 01:29:52 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]': finished
Dec  6 01:29:52 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]': finished
Dec  6 01:29:52 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 01:29:52 np0005548731 ceph-mgr[77818]: mgr[py] Loading python module 'status'
Dec  6 01:29:53 np0005548731 ceph-mgr[77818]: mgr[py] Module status has missing NOTIFY_TYPES member
Dec  6 01:29:53 np0005548731 ceph-mgr[77818]: mgr[py] Loading python module 'telegraf'
Dec  6 01:29:53 np0005548731 ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-mgr-compute-2-ytlehq[77814]: 2025-12-06T06:29:53.071+0000 7f8f93ff1140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Dec  6 01:29:53 np0005548731 ceph-mgr[77818]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Dec  6 01:29:53 np0005548731 ceph-mgr[77818]: mgr[py] Loading python module 'telemetry'
Dec  6 01:29:53 np0005548731 ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-mgr-compute-2-ytlehq[77814]: 2025-12-06T06:29:53.346+0000 7f8f93ff1140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Dec  6 01:29:54 np0005548731 ceph-mgr[77818]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Dec  6 01:29:54 np0005548731 ceph-mgr[77818]: mgr[py] Loading python module 'test_orchestrator'
Dec  6 01:29:54 np0005548731 ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-mgr-compute-2-ytlehq[77814]: 2025-12-06T06:29:54.050+0000 7f8f93ff1140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Dec  6 01:29:54 np0005548731 ceph-mgr[77818]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Dec  6 01:29:54 np0005548731 ceph-mgr[77818]: mgr[py] Loading python module 'volumes'
Dec  6 01:29:54 np0005548731 ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-mgr-compute-2-ytlehq[77814]: 2025-12-06T06:29:54.834+0000 7f8f93ff1140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Dec  6 01:29:55 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e43 _set_new_cache_sizes cache_size:1020054708 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  6 01:29:55 np0005548731 ceph-mgr[77818]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Dec  6 01:29:55 np0005548731 ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-mgr-compute-2-ytlehq[77814]: 2025-12-06T06:29:55.597+0000 7f8f93ff1140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Dec  6 01:29:55 np0005548731 ceph-mgr[77818]: mgr[py] Loading python module 'zabbix'
Dec  6 01:29:55 np0005548731 ceph-mgr[77818]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Dec  6 01:29:55 np0005548731 ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-mgr-compute-2-ytlehq[77814]: 2025-12-06T06:29:55.868+0000 7f8f93ff1140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Dec  6 01:29:55 np0005548731 ceph-mgr[77818]: ms_deliver_dispatch: unhandled message 0x55bd6ded71e0 mon_map magic: 0 v1 from mon.0 v2:192.168.122.100:3300/0
Dec  6 01:29:55 np0005548731 ceph-mgr[77818]: ms_deliver_dispatch: unhandled message 0x55bd6ded7080 mon_map magic: 0 v1 from mon.0 v2:192.168.122.100:3300/0
Dec  6 01:29:55 np0005548731 ceph-mgr[77818]: client.0 ms_handle_reset on v2:192.168.122.100:6800/798720280
Dec  6 01:29:55 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 01:29:56 np0005548731 ceph-mgr[77818]: client.0 ms_handle_reset on v2:192.168.122.100:6800/798720280
Dec  6 01:29:57 np0005548731 ceph-mgr[77818]: client.0 ms_handle_reset on v2:192.168.122.100:6800/798720280
Dec  6 01:29:58 np0005548731 ceph-mgr[77818]: client.0 ms_handle_reset on v2:192.168.122.100:6800/798720280
Dec  6 01:29:59 np0005548731 ceph-mgr[77818]: client.0 ms_handle_reset on v2:192.168.122.100:6800/798720280
Dec  6 01:30:00 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e43 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  6 01:30:00 np0005548731 ceph-mgr[77818]: client.0 ms_handle_reset on v2:192.168.122.100:6800/798720280
Dec  6 01:30:01 np0005548731 podman[77995]: 2025-12-06 06:30:01.429465141 +0000 UTC m=+0.027877357 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  6 01:30:01 np0005548731 ceph-mgr[77818]: client.0 ms_handle_reset on v2:192.168.122.100:6800/798720280
Dec  6 01:30:02 np0005548731 podman[77995]: 2025-12-06 06:30:02.068790502 +0000 UTC m=+0.667202698 container create 5adfb49d2825af4bd91787c7d467e4854364fffbd0399bd57fb34328e5a76340 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_pare, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef)
Dec  6 01:30:02 np0005548731 systemd[1]: Started libpod-conmon-5adfb49d2825af4bd91787c7d467e4854364fffbd0399bd57fb34328e5a76340.scope.
Dec  6 01:30:02 np0005548731 systemd[1]: Started libcrun container.
Dec  6 01:30:02 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 01:30:02 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-2", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Dec  6 01:30:02 np0005548731 ceph-mon[77458]: from='client.? 192.168.122.100:0/740520273' entity='client.admin' cmd=[{"prefix": "auth get", "entity": "client.openstack"}]: dispatch
Dec  6 01:30:02 np0005548731 podman[77995]: 2025-12-06 06:30:02.676510585 +0000 UTC m=+1.274922801 container init 5adfb49d2825af4bd91787c7d467e4854364fffbd0399bd57fb34328e5a76340 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_pare, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Dec  6 01:30:02 np0005548731 podman[77995]: 2025-12-06 06:30:02.683696052 +0000 UTC m=+1.282108248 container start 5adfb49d2825af4bd91787c7d467e4854364fffbd0399bd57fb34328e5a76340 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_pare, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  6 01:30:02 np0005548731 keen_pare[78011]: 167 167
Dec  6 01:30:02 np0005548731 systemd[1]: libpod-5adfb49d2825af4bd91787c7d467e4854364fffbd0399bd57fb34328e5a76340.scope: Deactivated successfully.
Dec  6 01:30:03 np0005548731 podman[77995]: 2025-12-06 06:30:03.05297814 +0000 UTC m=+1.651390336 container attach 5adfb49d2825af4bd91787c7d467e4854364fffbd0399bd57fb34328e5a76340 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_pare, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  6 01:30:03 np0005548731 podman[77995]: 2025-12-06 06:30:03.054645141 +0000 UTC m=+1.653057337 container died 5adfb49d2825af4bd91787c7d467e4854364fffbd0399bd57fb34328e5a76340 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_pare, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  6 01:30:04 np0005548731 systemd[1]: var-lib-containers-storage-overlay-a2d3cd305acf32ae0402b475a1d897f9cd3e9d1559204b9a3eb1c9b7b1ddc18f-merged.mount: Deactivated successfully.
Dec  6 01:30:04 np0005548731 ceph-mon[77458]: overall HEALTH_ERR 1 filesystem is offline; 1 filesystem is online with fewer MDS than max_mds
Dec  6 01:30:04 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-2", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Dec  6 01:30:04 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 01:30:04 np0005548731 ceph-mon[77458]: Deploying daemon crash.compute-2 on compute-2
Dec  6 01:30:05 np0005548731 podman[77995]: 2025-12-06 06:30:05.170374987 +0000 UTC m=+3.768787183 container remove 5adfb49d2825af4bd91787c7d467e4854364fffbd0399bd57fb34328e5a76340 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_pare, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Dec  6 01:30:05 np0005548731 systemd[1]: libpod-conmon-5adfb49d2825af4bd91787c7d467e4854364fffbd0399bd57fb34328e5a76340.scope: Deactivated successfully.
Dec  6 01:30:05 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e43 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  6 01:30:05 np0005548731 systemd[1]: Reloading.
Dec  6 01:30:06 np0005548731 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  6 01:30:06 np0005548731 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  6 01:30:06 np0005548731 systemd[1]: Reloading.
Dec  6 01:30:06 np0005548731 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  6 01:30:06 np0005548731 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  6 01:30:06 np0005548731 systemd[1]: Starting Ceph crash.compute-2 for 40a1bae4-cf76-5610-8dab-c75116dfe0bb...
Dec  6 01:30:06 np0005548731 podman[78159]: 2025-12-06 06:30:06.703377125 +0000 UTC m=+0.024194298 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  6 01:30:06 np0005548731 podman[78159]: 2025-12-06 06:30:06.842655555 +0000 UTC m=+0.163472708 container create 2f1de5c61033d56ade74bf0e85c705c832656e4e36f8b8df3d568507c6ec4d28 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-crash-compute-2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  6 01:30:06 np0005548731 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/894dc676865fa58a9c70fcaa64e35dd2f692c8117198aadf6688af92617a27b0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  6 01:30:06 np0005548731 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/894dc676865fa58a9c70fcaa64e35dd2f692c8117198aadf6688af92617a27b0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  6 01:30:06 np0005548731 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/894dc676865fa58a9c70fcaa64e35dd2f692c8117198aadf6688af92617a27b0/merged/etc/ceph/ceph.client.crash.compute-2.keyring supports timestamps until 2038 (0x7fffffff)
Dec  6 01:30:06 np0005548731 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/894dc676865fa58a9c70fcaa64e35dd2f692c8117198aadf6688af92617a27b0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  6 01:30:07 np0005548731 podman[78159]: 2025-12-06 06:30:07.198157014 +0000 UTC m=+0.518974197 container init 2f1de5c61033d56ade74bf0e85c705c832656e4e36f8b8df3d568507c6ec4d28 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-crash-compute-2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  6 01:30:07 np0005548731 podman[78159]: 2025-12-06 06:30:07.204004408 +0000 UTC m=+0.524821561 container start 2f1de5c61033d56ade74bf0e85c705c832656e4e36f8b8df3d568507c6ec4d28 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-crash-compute-2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507)
Dec  6 01:30:07 np0005548731 bash[78159]: 2f1de5c61033d56ade74bf0e85c705c832656e4e36f8b8df3d568507c6ec4d28
Dec  6 01:30:07 np0005548731 systemd[1]: Started Ceph crash.compute-2 for 40a1bae4-cf76-5610-8dab-c75116dfe0bb.
Dec  6 01:30:07 np0005548731 ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-crash-compute-2[78174]: INFO:ceph-crash:pinging cluster to exercise our key
Dec  6 01:30:07 np0005548731 ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-crash-compute-2[78174]: 2025-12-06T06:30:07.637+0000 7f5be693f640 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory
Dec  6 01:30:07 np0005548731 ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-crash-compute-2[78174]: 2025-12-06T06:30:07.637+0000 7f5be693f640 -1 AuthRegistry(0x7f5be0067150) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin, disabling cephx
Dec  6 01:30:07 np0005548731 ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-crash-compute-2[78174]: 2025-12-06T06:30:07.638+0000 7f5be693f640 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory
Dec  6 01:30:07 np0005548731 ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-crash-compute-2[78174]: 2025-12-06T06:30:07.638+0000 7f5be693f640 -1 AuthRegistry(0x7f5be693e000) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin, disabling cephx
Dec  6 01:30:07 np0005548731 ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-crash-compute-2[78174]: 2025-12-06T06:30:07.639+0000 7f5bdffff640 -1 monclient(hunting): handle_auth_bad_method server allowed_methods [2] but i only support [1]
Dec  6 01:30:07 np0005548731 ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-crash-compute-2[78174]: 2025-12-06T06:30:07.640+0000 7f5bdf7fe640 -1 monclient(hunting): handle_auth_bad_method server allowed_methods [2] but i only support [1]
Dec  6 01:30:07 np0005548731 ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-crash-compute-2[78174]: 2025-12-06T06:30:07.643+0000 7f5be4eb5640 -1 monclient(hunting): handle_auth_bad_method server allowed_methods [2] but i only support [1]
Dec  6 01:30:07 np0005548731 ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-crash-compute-2[78174]: 2025-12-06T06:30:07.643+0000 7f5be693f640 -1 monclient: authenticate NOTE: no keyring found; disabled cephx authentication
Dec  6 01:30:07 np0005548731 ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-crash-compute-2[78174]: [errno 13] RADOS permission denied (error connecting to the cluster)
Dec  6 01:30:07 np0005548731 ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-crash-compute-2[78174]: INFO:ceph-crash:monitoring path /var/lib/ceph/crash, delay 600s
Dec  6 01:30:10 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 01:30:10 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e43 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  6 01:30:11 np0005548731 podman[78331]: 2025-12-06 06:30:11.131462199 +0000 UTC m=+0.042580350 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  6 01:30:11 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 01:30:11 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 01:30:11 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 01:30:11 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  6 01:30:11 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  6 01:30:11 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 01:30:11 np0005548731 podman[78331]: 2025-12-06 06:30:11.272454633 +0000 UTC m=+0.183572754 container create 3df16ade6df0bd80889f54858b994a9169b97fcd5715274028aeae100d7d2933 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_gagarin, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Dec  6 01:30:11 np0005548731 systemd[1]: Started libpod-conmon-3df16ade6df0bd80889f54858b994a9169b97fcd5715274028aeae100d7d2933.scope.
Dec  6 01:30:11 np0005548731 systemd[1]: Started libcrun container.
Dec  6 01:30:11 np0005548731 podman[78331]: 2025-12-06 06:30:11.386520283 +0000 UTC m=+0.297638404 container init 3df16ade6df0bd80889f54858b994a9169b97fcd5715274028aeae100d7d2933 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_gagarin, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  6 01:30:11 np0005548731 podman[78331]: 2025-12-06 06:30:11.39370294 +0000 UTC m=+0.304821061 container start 3df16ade6df0bd80889f54858b994a9169b97fcd5715274028aeae100d7d2933 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_gagarin, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  6 01:30:11 np0005548731 stoic_gagarin[78347]: 167 167
Dec  6 01:30:11 np0005548731 systemd[1]: libpod-3df16ade6df0bd80889f54858b994a9169b97fcd5715274028aeae100d7d2933.scope: Deactivated successfully.
Dec  6 01:30:11 np0005548731 podman[78331]: 2025-12-06 06:30:11.423563086 +0000 UTC m=+0.334681207 container attach 3df16ade6df0bd80889f54858b994a9169b97fcd5715274028aeae100d7d2933 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_gagarin, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  6 01:30:11 np0005548731 podman[78331]: 2025-12-06 06:30:11.425880893 +0000 UTC m=+0.336999014 container died 3df16ade6df0bd80889f54858b994a9169b97fcd5715274028aeae100d7d2933 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_gagarin, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec  6 01:30:11 np0005548731 systemd[1]: var-lib-containers-storage-overlay-51b5bf535c50bd53b515bb01c11ce3c4a770bc66ee08f7739cf1f771bf007dea-merged.mount: Deactivated successfully.
Dec  6 01:30:11 np0005548731 podman[78331]: 2025-12-06 06:30:11.516250329 +0000 UTC m=+0.427368450 container remove 3df16ade6df0bd80889f54858b994a9169b97fcd5715274028aeae100d7d2933 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_gagarin, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  6 01:30:11 np0005548731 systemd[1]: libpod-conmon-3df16ade6df0bd80889f54858b994a9169b97fcd5715274028aeae100d7d2933.scope: Deactivated successfully.
Dec  6 01:30:11 np0005548731 podman[78372]: 2025-12-06 06:30:11.650292222 +0000 UTC m=+0.023464289 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  6 01:30:11 np0005548731 podman[78372]: 2025-12-06 06:30:11.796654948 +0000 UTC m=+0.169827005 container create 3316ce0d15fc75e3e1c15df09ca6e3990885a17f93172c008480d7f1880ffcae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_hawking, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  6 01:30:11 np0005548731 systemd[1]: Started libpod-conmon-3316ce0d15fc75e3e1c15df09ca6e3990885a17f93172c008480d7f1880ffcae.scope.
Dec  6 01:30:11 np0005548731 systemd[1]: Started libcrun container.
Dec  6 01:30:11 np0005548731 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ef510a67d5bb4649ce7e17919a24329e44e644ba87e6e8fba7319ebaccd5385/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  6 01:30:11 np0005548731 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ef510a67d5bb4649ce7e17919a24329e44e644ba87e6e8fba7319ebaccd5385/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  6 01:30:11 np0005548731 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ef510a67d5bb4649ce7e17919a24329e44e644ba87e6e8fba7319ebaccd5385/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  6 01:30:11 np0005548731 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ef510a67d5bb4649ce7e17919a24329e44e644ba87e6e8fba7319ebaccd5385/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  6 01:30:11 np0005548731 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ef510a67d5bb4649ce7e17919a24329e44e644ba87e6e8fba7319ebaccd5385/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  6 01:30:12 np0005548731 podman[78372]: 2025-12-06 06:30:12.083148176 +0000 UTC m=+0.456320243 container init 3316ce0d15fc75e3e1c15df09ca6e3990885a17f93172c008480d7f1880ffcae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_hawking, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  6 01:30:12 np0005548731 podman[78372]: 2025-12-06 06:30:12.091027151 +0000 UTC m=+0.464199198 container start 3316ce0d15fc75e3e1c15df09ca6e3990885a17f93172c008480d7f1880ffcae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_hawking, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Dec  6 01:30:12 np0005548731 podman[78372]: 2025-12-06 06:30:12.094579848 +0000 UTC m=+0.467751895 container attach 3316ce0d15fc75e3e1c15df09ca6e3990885a17f93172c008480d7f1880ffcae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_hawking, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  6 01:30:12 np0005548731 heuristic_hawking[78388]: --> passed data devices: 0 physical, 1 LVM
Dec  6 01:30:12 np0005548731 heuristic_hawking[78388]: --> relative data size: 1.0
Dec  6 01:30:13 np0005548731 heuristic_hawking[78388]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec  6 01:30:13 np0005548731 heuristic_hawking[78388]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new e2493db6-bc13-4dc0-b3a7-5b4b07811cd5
Dec  6 01:30:13 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "osd new", "uuid": "e2493db6-bc13-4dc0-b3a7-5b4b07811cd5"} v 0) v1
Dec  6 01:30:13 np0005548731 ceph-mon[77458]: log_channel(audit) log [INF] : from='client.? 192.168.122.102:0/1558451655' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "e2493db6-bc13-4dc0-b3a7-5b4b07811cd5"}]: dispatch
Dec  6 01:30:14 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e44 e44: 3 total, 2 up, 3 in
Dec  6 01:30:14 np0005548731 heuristic_hawking[78388]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec  6 01:30:14 np0005548731 lvm[78436]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec  6 01:30:14 np0005548731 lvm[78436]: VG ceph_vg0 finished
Dec  6 01:30:14 np0005548731 heuristic_hawking[78388]: Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-2
Dec  6 01:30:14 np0005548731 heuristic_hawking[78388]: Running command: /usr/bin/chown -h ceph:ceph /dev/ceph_vg0/ceph_lv0
Dec  6 01:30:14 np0005548731 heuristic_hawking[78388]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Dec  6 01:30:14 np0005548731 heuristic_hawking[78388]: Running command: /usr/bin/ln -s /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-2/block
Dec  6 01:30:14 np0005548731 heuristic_hawking[78388]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-2/activate.monmap
Dec  6 01:30:14 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon getmap"} v 0) v1
Dec  6 01:30:14 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1378465590' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Dec  6 01:30:14 np0005548731 ceph-mon[77458]: from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "e2493db6-bc13-4dc0-b3a7-5b4b07811cd5"}]: dispatch
Dec  6 01:30:14 np0005548731 ceph-mon[77458]: from='client.? 192.168.122.102:0/1558451655' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "e2493db6-bc13-4dc0-b3a7-5b4b07811cd5"}]: dispatch
Dec  6 01:30:14 np0005548731 heuristic_hawking[78388]: stderr: got monmap epoch 3
Dec  6 01:30:15 np0005548731 heuristic_hawking[78388]: --> Creating keyring file for osd.2
Dec  6 01:30:15 np0005548731 heuristic_hawking[78388]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2/keyring
Dec  6 01:30:15 np0005548731 heuristic_hawking[78388]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2/
Dec  6 01:30:15 np0005548731 heuristic_hawking[78388]: Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 2 --monmap /var/lib/ceph/osd/ceph-2/activate.monmap --keyfile - --osdspec-affinity default_drive_group --osd-data /var/lib/ceph/osd/ceph-2/ --osd-uuid e2493db6-bc13-4dc0-b3a7-5b4b07811cd5 --setuser ceph --setgroup ceph
Dec  6 01:30:15 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e44 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  6 01:30:15 np0005548731 ceph-mon[77458]: from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "e2493db6-bc13-4dc0-b3a7-5b4b07811cd5"}]': finished
Dec  6 01:30:18 np0005548731 heuristic_hawking[78388]: stderr: 2025-12-06T06:30:15.112+0000 7f5358915740 -1 bluestore(/var/lib/ceph/osd/ceph-2//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Dec  6 01:30:18 np0005548731 heuristic_hawking[78388]: stderr: 2025-12-06T06:30:15.112+0000 7f5358915740 -1 bluestore(/var/lib/ceph/osd/ceph-2//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Dec  6 01:30:18 np0005548731 heuristic_hawking[78388]: stderr: 2025-12-06T06:30:15.112+0000 7f5358915740 -1 bluestore(/var/lib/ceph/osd/ceph-2//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Dec  6 01:30:18 np0005548731 heuristic_hawking[78388]: stderr: 2025-12-06T06:30:15.112+0000 7f5358915740 -1 bluestore(/var/lib/ceph/osd/ceph-2/) _read_fsid unparsable uuid
Dec  6 01:30:18 np0005548731 heuristic_hawking[78388]: --> ceph-volume lvm prepare successful for: ceph_vg0/ceph_lv0
Dec  6 01:30:18 np0005548731 heuristic_hawking[78388]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Dec  6 01:30:18 np0005548731 heuristic_hawking[78388]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg0/ceph_lv0 --path /var/lib/ceph/osd/ceph-2 --no-mon-config
Dec  6 01:30:18 np0005548731 heuristic_hawking[78388]: Running command: /usr/bin/ln -snf /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-2/block
Dec  6 01:30:18 np0005548731 heuristic_hawking[78388]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-2/block
Dec  6 01:30:18 np0005548731 heuristic_hawking[78388]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Dec  6 01:30:18 np0005548731 heuristic_hawking[78388]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Dec  6 01:30:18 np0005548731 heuristic_hawking[78388]: --> ceph-volume lvm activate successful for osd ID: 2
Dec  6 01:30:18 np0005548731 heuristic_hawking[78388]: --> ceph-volume lvm create successful for: ceph_vg0/ceph_lv0
Dec  6 01:30:18 np0005548731 systemd[1]: libpod-3316ce0d15fc75e3e1c15df09ca6e3990885a17f93172c008480d7f1880ffcae.scope: Deactivated successfully.
Dec  6 01:30:18 np0005548731 systemd[1]: libpod-3316ce0d15fc75e3e1c15df09ca6e3990885a17f93172c008480d7f1880ffcae.scope: Consumed 3.642s CPU time.
Dec  6 01:30:18 np0005548731 podman[79341]: 2025-12-06 06:30:18.478922547 +0000 UTC m=+0.032516992 container died 3316ce0d15fc75e3e1c15df09ca6e3990885a17f93172c008480d7f1880ffcae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_hawking, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  6 01:30:18 np0005548731 systemd[1]: var-lib-containers-storage-overlay-3ef510a67d5bb4649ce7e17919a24329e44e644ba87e6e8fba7319ebaccd5385-merged.mount: Deactivated successfully.
Dec  6 01:30:18 np0005548731 podman[79341]: 2025-12-06 06:30:18.553632747 +0000 UTC m=+0.107227172 container remove 3316ce0d15fc75e3e1c15df09ca6e3990885a17f93172c008480d7f1880ffcae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_hawking, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  6 01:30:18 np0005548731 systemd[1]: libpod-conmon-3316ce0d15fc75e3e1c15df09ca6e3990885a17f93172c008480d7f1880ffcae.scope: Deactivated successfully.
Dec  6 01:30:18 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 01:30:18 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 01:30:19 np0005548731 podman[79497]: 2025-12-06 06:30:19.099986898 +0000 UTC m=+0.024981686 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  6 01:30:19 np0005548731 podman[79497]: 2025-12-06 06:30:19.205794645 +0000 UTC m=+0.130789403 container create 6893e9045ca9466c5716067cc033d31f898c1a527813726a9af5f830fe8ea93a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_wiles, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Dec  6 01:30:19 np0005548731 systemd[1]: Started libpod-conmon-6893e9045ca9466c5716067cc033d31f898c1a527813726a9af5f830fe8ea93a.scope.
Dec  6 01:30:19 np0005548731 systemd[1]: Started libcrun container.
Dec  6 01:30:19 np0005548731 podman[79497]: 2025-12-06 06:30:19.516745785 +0000 UTC m=+0.441740573 container init 6893e9045ca9466c5716067cc033d31f898c1a527813726a9af5f830fe8ea93a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_wiles, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  6 01:30:19 np0005548731 podman[79497]: 2025-12-06 06:30:19.525252205 +0000 UTC m=+0.450246963 container start 6893e9045ca9466c5716067cc033d31f898c1a527813726a9af5f830fe8ea93a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_wiles, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True)
Dec  6 01:30:19 np0005548731 flamboyant_wiles[79513]: 167 167
Dec  6 01:30:19 np0005548731 systemd[1]: libpod-6893e9045ca9466c5716067cc033d31f898c1a527813726a9af5f830fe8ea93a.scope: Deactivated successfully.
Dec  6 01:30:19 np0005548731 podman[79497]: 2025-12-06 06:30:19.59527329 +0000 UTC m=+0.520268078 container attach 6893e9045ca9466c5716067cc033d31f898c1a527813726a9af5f830fe8ea93a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_wiles, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  6 01:30:19 np0005548731 podman[79497]: 2025-12-06 06:30:19.595888276 +0000 UTC m=+0.520883034 container died 6893e9045ca9466c5716067cc033d31f898c1a527813726a9af5f830fe8ea93a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_wiles, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  6 01:30:19 np0005548731 systemd[1]: var-lib-containers-storage-overlay-f7ae846740afd69ada73750552c21976037a880f9c4394d17b2f084328cd39c4-merged.mount: Deactivated successfully.
Dec  6 01:30:19 np0005548731 podman[79497]: 2025-12-06 06:30:19.65327022 +0000 UTC m=+0.578264978 container remove 6893e9045ca9466c5716067cc033d31f898c1a527813726a9af5f830fe8ea93a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_wiles, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  6 01:30:19 np0005548731 systemd[1]: libpod-conmon-6893e9045ca9466c5716067cc033d31f898c1a527813726a9af5f830fe8ea93a.scope: Deactivated successfully.
Dec  6 01:30:19 np0005548731 podman[79538]: 2025-12-06 06:30:19.813475856 +0000 UTC m=+0.042952319 container create 659b874112845d31d8e33c5317082e8c5f61009cd454997e8777ddb0d9d60809 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_morse, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  6 01:30:19 np0005548731 systemd[1]: Started libpod-conmon-659b874112845d31d8e33c5317082e8c5f61009cd454997e8777ddb0d9d60809.scope.
Dec  6 01:30:19 np0005548731 systemd[1]: Started libcrun container.
Dec  6 01:30:19 np0005548731 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c55418e4a8df676d7791c403eb3571743dc3dacd302c7b37ca029de3fa398db8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  6 01:30:19 np0005548731 podman[79538]: 2025-12-06 06:30:19.794427887 +0000 UTC m=+0.023904290 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  6 01:30:19 np0005548731 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c55418e4a8df676d7791c403eb3571743dc3dacd302c7b37ca029de3fa398db8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  6 01:30:19 np0005548731 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c55418e4a8df676d7791c403eb3571743dc3dacd302c7b37ca029de3fa398db8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  6 01:30:19 np0005548731 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c55418e4a8df676d7791c403eb3571743dc3dacd302c7b37ca029de3fa398db8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  6 01:30:19 np0005548731 podman[79538]: 2025-12-06 06:30:19.902358936 +0000 UTC m=+0.131835329 container init 659b874112845d31d8e33c5317082e8c5f61009cd454997e8777ddb0d9d60809 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_morse, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  6 01:30:19 np0005548731 podman[79538]: 2025-12-06 06:30:19.910364263 +0000 UTC m=+0.139840646 container start 659b874112845d31d8e33c5317082e8c5f61009cd454997e8777ddb0d9d60809 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_morse, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  6 01:30:19 np0005548731 podman[79538]: 2025-12-06 06:30:19.913734426 +0000 UTC m=+0.143210819 container attach 659b874112845d31d8e33c5317082e8c5f61009cd454997e8777ddb0d9d60809 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_morse, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  6 01:30:20 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e44 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  6 01:30:20 np0005548731 xenodochial_morse[79555]: {
Dec  6 01:30:20 np0005548731 xenodochial_morse[79555]:    "2": [
Dec  6 01:30:20 np0005548731 xenodochial_morse[79555]:        {
Dec  6 01:30:20 np0005548731 xenodochial_morse[79555]:            "devices": [
Dec  6 01:30:20 np0005548731 xenodochial_morse[79555]:                "/dev/loop3"
Dec  6 01:30:20 np0005548731 xenodochial_morse[79555]:            ],
Dec  6 01:30:20 np0005548731 xenodochial_morse[79555]:            "lv_name": "ceph_lv0",
Dec  6 01:30:20 np0005548731 xenodochial_morse[79555]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  6 01:30:20 np0005548731 xenodochial_morse[79555]:            "lv_size": "7511998464",
Dec  6 01:30:20 np0005548731 xenodochial_morse[79555]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=EtmlqF-Qwg7-4JQ4-A0qW-lgJm-16AO-BJE3Ko,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=40a1bae4-cf76-5610-8dab-c75116dfe0bb,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=e2493db6-bc13-4dc0-b3a7-5b4b07811cd5,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  6 01:30:20 np0005548731 xenodochial_morse[79555]:            "lv_uuid": "EtmlqF-Qwg7-4JQ4-A0qW-lgJm-16AO-BJE3Ko",
Dec  6 01:30:20 np0005548731 xenodochial_morse[79555]:            "name": "ceph_lv0",
Dec  6 01:30:20 np0005548731 xenodochial_morse[79555]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  6 01:30:20 np0005548731 xenodochial_morse[79555]:            "tags": {
Dec  6 01:30:20 np0005548731 xenodochial_morse[79555]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  6 01:30:20 np0005548731 xenodochial_morse[79555]:                "ceph.block_uuid": "EtmlqF-Qwg7-4JQ4-A0qW-lgJm-16AO-BJE3Ko",
Dec  6 01:30:20 np0005548731 xenodochial_morse[79555]:                "ceph.cephx_lockbox_secret": "",
Dec  6 01:30:20 np0005548731 xenodochial_morse[79555]:                "ceph.cluster_fsid": "40a1bae4-cf76-5610-8dab-c75116dfe0bb",
Dec  6 01:30:20 np0005548731 xenodochial_morse[79555]:                "ceph.cluster_name": "ceph",
Dec  6 01:30:20 np0005548731 xenodochial_morse[79555]:                "ceph.crush_device_class": "",
Dec  6 01:30:20 np0005548731 xenodochial_morse[79555]:                "ceph.encrypted": "0",
Dec  6 01:30:20 np0005548731 xenodochial_morse[79555]:                "ceph.osd_fsid": "e2493db6-bc13-4dc0-b3a7-5b4b07811cd5",
Dec  6 01:30:20 np0005548731 xenodochial_morse[79555]:                "ceph.osd_id": "2",
Dec  6 01:30:20 np0005548731 xenodochial_morse[79555]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  6 01:30:20 np0005548731 xenodochial_morse[79555]:                "ceph.type": "block",
Dec  6 01:30:20 np0005548731 xenodochial_morse[79555]:                "ceph.vdo": "0"
Dec  6 01:30:20 np0005548731 xenodochial_morse[79555]:            },
Dec  6 01:30:20 np0005548731 xenodochial_morse[79555]:            "type": "block",
Dec  6 01:30:20 np0005548731 xenodochial_morse[79555]:            "vg_name": "ceph_vg0"
Dec  6 01:30:20 np0005548731 xenodochial_morse[79555]:        }
Dec  6 01:30:20 np0005548731 xenodochial_morse[79555]:    ]
Dec  6 01:30:20 np0005548731 xenodochial_morse[79555]: }
Dec  6 01:30:20 np0005548731 systemd[1]: libpod-659b874112845d31d8e33c5317082e8c5f61009cd454997e8777ddb0d9d60809.scope: Deactivated successfully.
Dec  6 01:30:20 np0005548731 podman[79538]: 2025-12-06 06:30:20.735668506 +0000 UTC m=+0.965144889 container died 659b874112845d31d8e33c5317082e8c5f61009cd454997e8777ddb0d9d60809 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_morse, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Dec  6 01:30:20 np0005548731 systemd[1]: var-lib-containers-storage-overlay-c55418e4a8df676d7791c403eb3571743dc3dacd302c7b37ca029de3fa398db8-merged.mount: Deactivated successfully.
Dec  6 01:30:20 np0005548731 podman[79538]: 2025-12-06 06:30:20.794352482 +0000 UTC m=+1.023828865 container remove 659b874112845d31d8e33c5317082e8c5f61009cd454997e8777ddb0d9d60809 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_morse, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Dec  6 01:30:20 np0005548731 systemd[1]: libpod-conmon-659b874112845d31d8e33c5317082e8c5f61009cd454997e8777ddb0d9d60809.scope: Deactivated successfully.
Dec  6 01:30:21 np0005548731 podman[79716]: 2025-12-06 06:30:21.41245759 +0000 UTC m=+0.054176326 container create c8e829adae8fe2d008d160a8bcac47bc8d0b529f4a7d6ff3716f58f3a3f02596 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_merkle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  6 01:30:21 np0005548731 systemd[1]: Started libpod-conmon-c8e829adae8fe2d008d160a8bcac47bc8d0b529f4a7d6ff3716f58f3a3f02596.scope.
Dec  6 01:30:21 np0005548731 systemd[1]: Started libcrun container.
Dec  6 01:30:21 np0005548731 podman[79716]: 2025-12-06 06:30:21.380834401 +0000 UTC m=+0.022553157 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  6 01:30:21 np0005548731 podman[79716]: 2025-12-06 06:30:21.572038051 +0000 UTC m=+0.213756817 container init c8e829adae8fe2d008d160a8bcac47bc8d0b529f4a7d6ff3716f58f3a3f02596 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_merkle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Dec  6 01:30:21 np0005548731 podman[79716]: 2025-12-06 06:30:21.585223877 +0000 UTC m=+0.226942613 container start c8e829adae8fe2d008d160a8bcac47bc8d0b529f4a7d6ff3716f58f3a3f02596 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_merkle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Dec  6 01:30:21 np0005548731 reverent_merkle[79732]: 167 167
Dec  6 01:30:21 np0005548731 systemd[1]: libpod-c8e829adae8fe2d008d160a8bcac47bc8d0b529f4a7d6ff3716f58f3a3f02596.scope: Deactivated successfully.
Dec  6 01:30:21 np0005548731 podman[79716]: 2025-12-06 06:30:21.690608173 +0000 UTC m=+0.332326929 container attach c8e829adae8fe2d008d160a8bcac47bc8d0b529f4a7d6ff3716f58f3a3f02596 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_merkle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  6 01:30:21 np0005548731 podman[79716]: 2025-12-06 06:30:21.69168942 +0000 UTC m=+0.333408156 container died c8e829adae8fe2d008d160a8bcac47bc8d0b529f4a7d6ff3716f58f3a3f02596 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_merkle, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  6 01:30:21 np0005548731 systemd[1]: var-lib-containers-storage-overlay-f40be0d40e4caf2d82633935f55cabd56378ab9ae9982f96747b6c9c76d62e95-merged.mount: Deactivated successfully.
Dec  6 01:30:21 np0005548731 podman[79716]: 2025-12-06 06:30:21.85775676 +0000 UTC m=+0.499475496 container remove c8e829adae8fe2d008d160a8bcac47bc8d0b529f4a7d6ff3716f58f3a3f02596 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_merkle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Dec  6 01:30:21 np0005548731 systemd[1]: libpod-conmon-c8e829adae8fe2d008d160a8bcac47bc8d0b529f4a7d6ff3716f58f3a3f02596.scope: Deactivated successfully.
Dec  6 01:30:22 np0005548731 podman[79766]: 2025-12-06 06:30:22.114958688 +0000 UTC m=+0.051006088 container create 79ae9c597fff50fa0b29cee89d4f5c3346209843318319192bd4babac2b001b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-osd-2-activate-test, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Dec  6 01:30:22 np0005548731 systemd[1]: Started libpod-conmon-79ae9c597fff50fa0b29cee89d4f5c3346209843318319192bd4babac2b001b4.scope.
Dec  6 01:30:22 np0005548731 podman[79766]: 2025-12-06 06:30:22.086903317 +0000 UTC m=+0.022950747 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  6 01:30:22 np0005548731 systemd[1]: Started libcrun container.
Dec  6 01:30:22 np0005548731 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/84859a5f8790544100549bd5cc0a5e3c15547143b6e175772296a48c3ec455b7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  6 01:30:22 np0005548731 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/84859a5f8790544100549bd5cc0a5e3c15547143b6e175772296a48c3ec455b7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  6 01:30:22 np0005548731 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/84859a5f8790544100549bd5cc0a5e3c15547143b6e175772296a48c3ec455b7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  6 01:30:22 np0005548731 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/84859a5f8790544100549bd5cc0a5e3c15547143b6e175772296a48c3ec455b7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  6 01:30:22 np0005548731 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/84859a5f8790544100549bd5cc0a5e3c15547143b6e175772296a48c3ec455b7/merged/var/lib/ceph/osd/ceph-2 supports timestamps until 2038 (0x7fffffff)
Dec  6 01:30:22 np0005548731 podman[79766]: 2025-12-06 06:30:22.235840846 +0000 UTC m=+0.171888266 container init 79ae9c597fff50fa0b29cee89d4f5c3346209843318319192bd4babac2b001b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-osd-2-activate-test, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Dec  6 01:30:22 np0005548731 podman[79766]: 2025-12-06 06:30:22.242592842 +0000 UTC m=+0.178640242 container start 79ae9c597fff50fa0b29cee89d4f5c3346209843318319192bd4babac2b001b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-osd-2-activate-test, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  6 01:30:22 np0005548731 podman[79766]: 2025-12-06 06:30:22.25955775 +0000 UTC m=+0.195605150 container attach 79ae9c597fff50fa0b29cee89d4f5c3346209843318319192bd4babac2b001b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-osd-2-activate-test, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  6 01:30:22 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch
Dec  6 01:30:22 np0005548731 ceph-mon[77458]: Deploying daemon osd.2 on compute-2
Dec  6 01:30:22 np0005548731 ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-osd-2-activate-test[79783]: usage: ceph-volume activate [-h] [--osd-id OSD_ID] [--osd-uuid OSD_UUID]
Dec  6 01:30:22 np0005548731 ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-osd-2-activate-test[79783]:                            [--no-systemd] [--no-tmpfs]
Dec  6 01:30:22 np0005548731 ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-osd-2-activate-test[79783]: ceph-volume activate: error: unrecognized arguments: --bad-option
Dec  6 01:30:22 np0005548731 systemd[1]: libpod-79ae9c597fff50fa0b29cee89d4f5c3346209843318319192bd4babac2b001b4.scope: Deactivated successfully.
Dec  6 01:30:22 np0005548731 podman[79766]: 2025-12-06 06:30:22.978683147 +0000 UTC m=+0.914730557 container died 79ae9c597fff50fa0b29cee89d4f5c3346209843318319192bd4babac2b001b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-osd-2-activate-test, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  6 01:30:23 np0005548731 systemd[1]: var-lib-containers-storage-overlay-84859a5f8790544100549bd5cc0a5e3c15547143b6e175772296a48c3ec455b7-merged.mount: Deactivated successfully.
Dec  6 01:30:23 np0005548731 podman[79766]: 2025-12-06 06:30:23.117791274 +0000 UTC m=+1.053838674 container remove 79ae9c597fff50fa0b29cee89d4f5c3346209843318319192bd4babac2b001b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-osd-2-activate-test, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  6 01:30:23 np0005548731 systemd[1]: libpod-conmon-79ae9c597fff50fa0b29cee89d4f5c3346209843318319192bd4babac2b001b4.scope: Deactivated successfully.
Dec  6 01:30:23 np0005548731 systemd[1]: Reloading.
Dec  6 01:30:23 np0005548731 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  6 01:30:23 np0005548731 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  6 01:30:23 np0005548731 systemd[1]: Reloading.
Dec  6 01:30:23 np0005548731 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  6 01:30:23 np0005548731 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  6 01:30:23 np0005548731 systemd[1]: Starting Ceph osd.2 for 40a1bae4-cf76-5610-8dab-c75116dfe0bb...
Dec  6 01:30:24 np0005548731 podman[79944]: 2025-12-06 06:30:24.255081693 +0000 UTC m=+0.107078148 container create 54c89f2783a93390e37fc7a04ee3b39b5a0bafc05b9571f15076c261555370c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-osd-2-activate, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef)
Dec  6 01:30:24 np0005548731 podman[79944]: 2025-12-06 06:30:24.17092047 +0000 UTC m=+0.022916945 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  6 01:30:24 np0005548731 systemd[1]: Started libcrun container.
Dec  6 01:30:24 np0005548731 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/25ae9c7e92ebd6afde242420a03fa9ea1b69f11581e845bd017468ef68d2b614/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  6 01:30:24 np0005548731 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/25ae9c7e92ebd6afde242420a03fa9ea1b69f11581e845bd017468ef68d2b614/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  6 01:30:24 np0005548731 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/25ae9c7e92ebd6afde242420a03fa9ea1b69f11581e845bd017468ef68d2b614/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  6 01:30:24 np0005548731 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/25ae9c7e92ebd6afde242420a03fa9ea1b69f11581e845bd017468ef68d2b614/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  6 01:30:24 np0005548731 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/25ae9c7e92ebd6afde242420a03fa9ea1b69f11581e845bd017468ef68d2b614/merged/var/lib/ceph/osd/ceph-2 supports timestamps until 2038 (0x7fffffff)
Dec  6 01:30:24 np0005548731 podman[79944]: 2025-12-06 06:30:24.718952403 +0000 UTC m=+0.570948878 container init 54c89f2783a93390e37fc7a04ee3b39b5a0bafc05b9571f15076c261555370c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-osd-2-activate, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Dec  6 01:30:24 np0005548731 podman[79944]: 2025-12-06 06:30:24.725653117 +0000 UTC m=+0.577649562 container start 54c89f2783a93390e37fc7a04ee3b39b5a0bafc05b9571f15076c261555370c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-osd-2-activate, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  6 01:30:24 np0005548731 podman[79944]: 2025-12-06 06:30:24.933205211 +0000 UTC m=+0.785201656 container attach 54c89f2783a93390e37fc7a04ee3b39b5a0bafc05b9571f15076c261555370c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-osd-2-activate, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Dec  6 01:30:25 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e44 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  6 01:30:25 np0005548731 ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-osd-2-activate[79960]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Dec  6 01:30:25 np0005548731 bash[79944]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Dec  6 01:30:25 np0005548731 ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-osd-2-activate[79960]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-2 --no-mon-config --dev /dev/mapper/ceph_vg0-ceph_lv0
Dec  6 01:30:25 np0005548731 bash[79944]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-2 --no-mon-config --dev /dev/mapper/ceph_vg0-ceph_lv0
Dec  6 01:30:25 np0005548731 ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-osd-2-activate[79960]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg0-ceph_lv0
Dec  6 01:30:25 np0005548731 bash[79944]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg0-ceph_lv0
Dec  6 01:30:25 np0005548731 ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-osd-2-activate[79960]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Dec  6 01:30:25 np0005548731 bash[79944]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Dec  6 01:30:25 np0005548731 ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-osd-2-activate[79960]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg0-ceph_lv0 /var/lib/ceph/osd/ceph-2/block
Dec  6 01:30:25 np0005548731 bash[79944]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg0-ceph_lv0 /var/lib/ceph/osd/ceph-2/block
Dec  6 01:30:25 np0005548731 ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-osd-2-activate[79960]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Dec  6 01:30:25 np0005548731 bash[79944]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Dec  6 01:30:25 np0005548731 ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-osd-2-activate[79960]: --> ceph-volume raw activate successful for osd ID: 2
Dec  6 01:30:25 np0005548731 bash[79944]: --> ceph-volume raw activate successful for osd ID: 2
Dec  6 01:30:25 np0005548731 systemd[1]: libpod-54c89f2783a93390e37fc7a04ee3b39b5a0bafc05b9571f15076c261555370c8.scope: Deactivated successfully.
Dec  6 01:30:25 np0005548731 podman[79944]: 2025-12-06 06:30:25.695763618 +0000 UTC m=+1.547760063 container died 54c89f2783a93390e37fc7a04ee3b39b5a0bafc05b9571f15076c261555370c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-osd-2-activate, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  6 01:30:26 np0005548731 systemd[1]: var-lib-containers-storage-overlay-25ae9c7e92ebd6afde242420a03fa9ea1b69f11581e845bd017468ef68d2b614-merged.mount: Deactivated successfully.
Dec  6 01:30:26 np0005548731 podman[79944]: 2025-12-06 06:30:26.661528781 +0000 UTC m=+2.513525256 container remove 54c89f2783a93390e37fc7a04ee3b39b5a0bafc05b9571f15076c261555370c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-osd-2-activate, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  6 01:30:26 np0005548731 podman[80128]: 2025-12-06 06:30:26.829371537 +0000 UTC m=+0.021051470 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  6 01:30:26 np0005548731 podman[80128]: 2025-12-06 06:30:26.940610037 +0000 UTC m=+0.132289960 container create b8d93c707388f1a4f5ad95fb10b05f6d529d9d995615005a73a1b319caf28614 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-osd-2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  6 01:30:27 np0005548731 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ebcf97621bb29900f7a673fad307f292914286d1f52f5c6219c27470fe62851b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  6 01:30:27 np0005548731 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ebcf97621bb29900f7a673fad307f292914286d1f52f5c6219c27470fe62851b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  6 01:30:27 np0005548731 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ebcf97621bb29900f7a673fad307f292914286d1f52f5c6219c27470fe62851b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  6 01:30:27 np0005548731 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ebcf97621bb29900f7a673fad307f292914286d1f52f5c6219c27470fe62851b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  6 01:30:27 np0005548731 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ebcf97621bb29900f7a673fad307f292914286d1f52f5c6219c27470fe62851b/merged/var/lib/ceph/osd/ceph-2 supports timestamps until 2038 (0x7fffffff)
Dec  6 01:30:27 np0005548731 podman[80128]: 2025-12-06 06:30:27.205163035 +0000 UTC m=+0.396842978 container init b8d93c707388f1a4f5ad95fb10b05f6d529d9d995615005a73a1b319caf28614 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-osd-2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Dec  6 01:30:27 np0005548731 podman[80128]: 2025-12-06 06:30:27.211630514 +0000 UTC m=+0.403310427 container start b8d93c707388f1a4f5ad95fb10b05f6d529d9d995615005a73a1b319caf28614 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-osd-2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Dec  6 01:30:27 np0005548731 bash[80128]: b8d93c707388f1a4f5ad95fb10b05f6d529d9d995615005a73a1b319caf28614
Dec  6 01:30:27 np0005548731 systemd[1]: Started Ceph osd.2 for 40a1bae4-cf76-5610-8dab-c75116dfe0bb.
Dec  6 01:30:27 np0005548731 ceph-osd[80147]: set uid:gid to 167:167 (ceph:ceph)
Dec  6 01:30:27 np0005548731 ceph-osd[80147]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-osd, pid 2
Dec  6 01:30:27 np0005548731 ceph-osd[80147]: pidfile_write: ignore empty --pid-file
Dec  6 01:30:27 np0005548731 ceph-osd[80147]: bdev(0x5612cf101c00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Dec  6 01:30:27 np0005548731 ceph-osd[80147]: bdev(0x5612cf101c00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Dec  6 01:30:27 np0005548731 ceph-osd[80147]: bdev(0x5612cf101c00 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec  6 01:30:27 np0005548731 ceph-osd[80147]: bdev(0x5612cf101c00 /var/lib/ceph/osd/ceph-2/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec  6 01:30:27 np0005548731 ceph-osd[80147]: bluestore(/var/lib/ceph/osd/ceph-2) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Dec  6 01:30:27 np0005548731 ceph-osd[80147]: bdev(0x5612cff0d000 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Dec  6 01:30:27 np0005548731 ceph-osd[80147]: bdev(0x5612cff0d000 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Dec  6 01:30:27 np0005548731 ceph-osd[80147]: bdev(0x5612cff0d000 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec  6 01:30:27 np0005548731 ceph-osd[80147]: bdev(0x5612cff0d000 /var/lib/ceph/osd/ceph-2/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec  6 01:30:27 np0005548731 ceph-osd[80147]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-2/block size 7.0 GiB
Dec  6 01:30:27 np0005548731 ceph-osd[80147]: bdev(0x5612cff0d000 /var/lib/ceph/osd/ceph-2/block) close
Dec  6 01:30:27 np0005548731 ceph-osd[80147]: bdev(0x5612cf101c00 /var/lib/ceph/osd/ceph-2/block) close
Dec  6 01:30:27 np0005548731 ceph-osd[80147]: starting osd.2 osd_data /var/lib/ceph/osd/ceph-2 /var/lib/ceph/osd/ceph-2/journal
Dec  6 01:30:27 np0005548731 ceph-osd[80147]: load: jerasure load: lrc 
Dec  6 01:30:27 np0005548731 ceph-osd[80147]: bdev(0x5612cff92c00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Dec  6 01:30:27 np0005548731 ceph-osd[80147]: bdev(0x5612cff92c00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Dec  6 01:30:27 np0005548731 ceph-osd[80147]: bdev(0x5612cff92c00 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec  6 01:30:27 np0005548731 ceph-osd[80147]: bdev(0x5612cff92c00 /var/lib/ceph/osd/ceph-2/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec  6 01:30:27 np0005548731 ceph-osd[80147]: bluestore(/var/lib/ceph/osd/ceph-2) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Dec  6 01:30:27 np0005548731 ceph-osd[80147]: bdev(0x5612cff92c00 /var/lib/ceph/osd/ceph-2/block) close
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: bdev(0x5612cff92c00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: bdev(0x5612cff92c00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: bdev(0x5612cff92c00 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: bdev(0x5612cff92c00 /var/lib/ceph/osd/ceph-2/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: bluestore(/var/lib/ceph/osd/ceph-2) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: bdev(0x5612cff92c00 /var/lib/ceph/osd/ceph-2/block) close
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: mClockScheduler: set_osd_capacity_params_from_config: osd_bandwidth_cost_per_io: 499321.90 bytes/io, osd_bandwidth_capacity_per_shard 157286400.00 bytes/second
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: osd.2:0.OSDShard using op scheduler mclock_scheduler, cutoff=196
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: bdev(0x5612cff92c00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: bdev(0x5612cff92c00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: bdev(0x5612cff92c00 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: bdev(0x5612cff92c00 /var/lib/ceph/osd/ceph-2/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: bluestore(/var/lib/ceph/osd/ceph-2) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: bdev(0x5612cff93400 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: bdev(0x5612cff93400 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: bdev(0x5612cff93400 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: bdev(0x5612cff93400 /var/lib/ceph/osd/ceph-2/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-2/block size 7.0 GiB
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: bluefs mount
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: bluefs _init_alloc shared, id 1, capacity 0x1bfc00000, block size 0x10000
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: bluefs mount shared_bdev_used = 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: bluestore(/var/lib/ceph/osd/ceph-2) _prepare_db_environment set db_paths to db,7136398540 db.slow,7136398540
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: RocksDB version: 7.9.2
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Git sha 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Compile date 2025-05-06 23:30:25
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: DB SUMMARY
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: DB Session ID:  JUYA7DAMPBZ4PBZIFSRQ
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: CURRENT file:  CURRENT
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: IDENTITY file:  IDENTITY
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; 
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                         Options.error_if_exists: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                       Options.create_if_missing: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                         Options.paranoid_checks: 1
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:             Options.flush_verify_memtable_count: 1
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                                     Options.env: 0x5612cff95dc0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                                      Options.fs: LegacyFileSystem
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                                Options.info_log: 0x5612cf17ed00
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                Options.max_file_opening_threads: 16
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                              Options.statistics: (nil)
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                               Options.use_fsync: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                       Options.max_log_file_size: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                   Options.log_file_time_to_roll: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                       Options.keep_log_file_num: 1000
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                    Options.recycle_log_file_num: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                         Options.allow_fallocate: 1
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                        Options.allow_mmap_reads: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                       Options.allow_mmap_writes: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                        Options.use_direct_reads: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:          Options.create_missing_column_families: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                              Options.db_log_dir: 
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                                 Options.wal_dir: db.wal
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                Options.table_cache_numshardbits: 6
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                         Options.WAL_ttl_seconds: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                       Options.WAL_size_limit_MB: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:             Options.manifest_preallocation_size: 4194304
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                     Options.is_fd_close_on_exec: 1
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                   Options.advise_random_on_open: 1
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                    Options.db_write_buffer_size: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                    Options.write_buffer_manager: 0x5612d0098460
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:         Options.access_hint_on_compaction_start: 1
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                      Options.use_adaptive_mutex: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                            Options.rate_limiter: (nil)
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                       Options.wal_recovery_mode: 2
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                  Options.enable_thread_tracking: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                  Options.enable_pipelined_write: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                  Options.unordered_write: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:             Options.write_thread_max_yield_usec: 100
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                               Options.row_cache: None
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                              Options.wal_filter: None
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:             Options.avoid_flush_during_recovery: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:             Options.allow_ingest_behind: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:             Options.two_write_queues: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:             Options.manual_wal_flush: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:             Options.wal_compression: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:             Options.atomic_flush: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                 Options.persist_stats_to_disk: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                 Options.write_dbid_to_manifest: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                 Options.log_readahead_size: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                 Options.best_efforts_recovery: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:             Options.allow_data_in_errors: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:             Options.db_host_id: __hostname__
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:             Options.enforce_single_del_contracts: true
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:             Options.max_background_jobs: 4
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:             Options.max_background_compactions: -1
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:             Options.max_subcompactions: 1
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:           Options.writable_file_max_buffer_size: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:             Options.delayed_write_rate : 16777216
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:             Options.max_total_wal_size: 1073741824
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                   Options.stats_dump_period_sec: 600
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                 Options.stats_persist_period_sec: 600
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                          Options.max_open_files: -1
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                          Options.bytes_per_sync: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                      Options.wal_bytes_per_sync: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                   Options.strict_bytes_per_sync: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:       Options.compaction_readahead_size: 2097152
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                  Options.max_background_flushes: -1
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Compression algorithms supported:
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: #011kZSTD supported: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: #011kXpressCompression supported: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: #011kBZip2Compression supported: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: #011kZSTDNotFinalCompression supported: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: #011kLZ4Compression supported: 1
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: #011kZlibCompression supported: 1
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: #011kLZ4HCCompression supported: 1
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: #011kSnappyCompression supported: 1
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Fast CRC32 supported: Supported on x86
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: DMutex implementation: pthread_mutex_t
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: [db/db_impl/db_impl_readonly.cc:25] Opening the db in read only mode
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:        Options.compaction_filter: None
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:        Options.compaction_filter_factory: None
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:  Options.sst_partitioner_factory: None
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5612cf17e740)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x5612cf174dd0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:        Options.write_buffer_size: 16777216
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:  Options.max_write_buffer_number: 64
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:          Options.compression: LZ4
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:       Options.prefix_extractor: nullptr
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:             Options.num_levels: 7
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                  Options.compression_opts.level: 32767
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:               Options.compression_opts.strategy: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                  Options.compression_opts.enabled: false
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                        Options.arena_block_size: 1048576
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                Options.disable_auto_compactions: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                   Options.inplace_update_support: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                           Options.bloom_locality: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                    Options.max_successive_merges: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                Options.paranoid_file_checks: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                Options.force_consistency_checks: 1
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                Options.report_bg_io_stats: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                               Options.ttl: 2592000
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                       Options.enable_blob_files: false
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                           Options.min_blob_size: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                          Options.blob_file_size: 268435456
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                Options.blob_file_starting_level: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:           Options.merge_operator: None
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:        Options.compaction_filter: None
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:        Options.compaction_filter_factory: None
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:  Options.sst_partitioner_factory: None
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5612cf17e740)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x5612cf174dd0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:        Options.write_buffer_size: 16777216
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:  Options.max_write_buffer_number: 64
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:          Options.compression: LZ4
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:       Options.prefix_extractor: nullptr
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:             Options.num_levels: 7
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                  Options.compression_opts.level: 32767
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:               Options.compression_opts.strategy: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                  Options.compression_opts.enabled: false
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                        Options.arena_block_size: 1048576
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                Options.disable_auto_compactions: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                   Options.inplace_update_support: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                           Options.bloom_locality: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                    Options.max_successive_merges: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                Options.paranoid_file_checks: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                Options.force_consistency_checks: 1
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                Options.report_bg_io_stats: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                               Options.ttl: 2592000
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                       Options.enable_blob_files: false
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                           Options.min_blob_size: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                          Options.blob_file_size: 268435456
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                Options.blob_file_starting_level: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:           Options.merge_operator: None
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:        Options.compaction_filter: None
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:        Options.compaction_filter_factory: None
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:  Options.sst_partitioner_factory: None
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5612cf17e740)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x5612cf174dd0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:        Options.write_buffer_size: 16777216
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:  Options.max_write_buffer_number: 64
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:          Options.compression: LZ4
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:       Options.prefix_extractor: nullptr
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:             Options.num_levels: 7
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                  Options.compression_opts.level: 32767
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:               Options.compression_opts.strategy: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                  Options.compression_opts.enabled: false
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                        Options.arena_block_size: 1048576
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                Options.disable_auto_compactions: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                   Options.inplace_update_support: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                           Options.bloom_locality: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                    Options.max_successive_merges: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                Options.paranoid_file_checks: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                Options.force_consistency_checks: 1
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                Options.report_bg_io_stats: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                               Options.ttl: 2592000
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                       Options.enable_blob_files: false
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                           Options.min_blob_size: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                          Options.blob_file_size: 268435456
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                Options.blob_file_starting_level: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:           Options.merge_operator: None
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:        Options.compaction_filter: None
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:        Options.compaction_filter_factory: None
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:  Options.sst_partitioner_factory: None
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5612cf17e740)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x5612cf174dd0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:        Options.write_buffer_size: 16777216
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:  Options.max_write_buffer_number: 64
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:          Options.compression: LZ4
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:       Options.prefix_extractor: nullptr
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:             Options.num_levels: 7
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                  Options.compression_opts.level: 32767
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:               Options.compression_opts.strategy: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                  Options.compression_opts.enabled: false
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                        Options.arena_block_size: 1048576
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                Options.disable_auto_compactions: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                   Options.inplace_update_support: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                           Options.bloom_locality: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                    Options.max_successive_merges: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                Options.paranoid_file_checks: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                Options.force_consistency_checks: 1
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                Options.report_bg_io_stats: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                               Options.ttl: 2592000
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                       Options.enable_blob_files: false
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                           Options.min_blob_size: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                          Options.blob_file_size: 268435456
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                Options.blob_file_starting_level: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:           Options.merge_operator: None
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:        Options.compaction_filter: None
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:        Options.compaction_filter_factory: None
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:  Options.sst_partitioner_factory: None
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5612cf17e740)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x5612cf174dd0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:        Options.write_buffer_size: 16777216
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:  Options.max_write_buffer_number: 64
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:          Options.compression: LZ4
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:       Options.prefix_extractor: nullptr
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:             Options.num_levels: 7
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                  Options.compression_opts.level: 32767
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:               Options.compression_opts.strategy: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                  Options.compression_opts.enabled: false
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                        Options.arena_block_size: 1048576
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                Options.disable_auto_compactions: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                   Options.inplace_update_support: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                           Options.bloom_locality: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                    Options.max_successive_merges: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                Options.paranoid_file_checks: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                Options.force_consistency_checks: 1
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                Options.report_bg_io_stats: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                               Options.ttl: 2592000
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                       Options.enable_blob_files: false
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                           Options.min_blob_size: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                          Options.blob_file_size: 268435456
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                Options.blob_file_starting_level: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:           Options.merge_operator: None
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:        Options.compaction_filter: None
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:        Options.compaction_filter_factory: None
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:  Options.sst_partitioner_factory: None
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5612cf17e740)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x5612cf174dd0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:        Options.write_buffer_size: 16777216
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:  Options.max_write_buffer_number: 64
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:          Options.compression: LZ4
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:       Options.prefix_extractor: nullptr
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:             Options.num_levels: 7
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                  Options.compression_opts.level: 32767
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:               Options.compression_opts.strategy: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                  Options.compression_opts.enabled: false
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                        Options.arena_block_size: 1048576
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                Options.disable_auto_compactions: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                   Options.inplace_update_support: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                           Options.bloom_locality: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                    Options.max_successive_merges: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                Options.paranoid_file_checks: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                Options.force_consistency_checks: 1
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                Options.report_bg_io_stats: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                               Options.ttl: 2592000
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                       Options.enable_blob_files: false
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                           Options.min_blob_size: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                          Options.blob_file_size: 268435456
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                Options.blob_file_starting_level: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:           Options.merge_operator: None
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:        Options.compaction_filter: None
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:        Options.compaction_filter_factory: None
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:  Options.sst_partitioner_factory: None
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5612cf17e740)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x5612cf174dd0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:        Options.write_buffer_size: 16777216
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:  Options.max_write_buffer_number: 64
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:          Options.compression: LZ4
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:       Options.prefix_extractor: nullptr
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:             Options.num_levels: 7
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                  Options.compression_opts.level: 32767
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:               Options.compression_opts.strategy: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                  Options.compression_opts.enabled: false
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                        Options.arena_block_size: 1048576
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                Options.disable_auto_compactions: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                   Options.inplace_update_support: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                           Options.bloom_locality: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                    Options.max_successive_merges: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                Options.paranoid_file_checks: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                Options.force_consistency_checks: 1
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                Options.report_bg_io_stats: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                               Options.ttl: 2592000
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                       Options.enable_blob_files: false
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                           Options.min_blob_size: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                          Options.blob_file_size: 268435456
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                Options.blob_file_starting_level: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:           Options.merge_operator: None
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:        Options.compaction_filter: None
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:        Options.compaction_filter_factory: None
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:  Options.sst_partitioner_factory: None
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5612cf17e720)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x5612cf174430#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:        Options.write_buffer_size: 16777216
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:  Options.max_write_buffer_number: 64
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:          Options.compression: LZ4
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:       Options.prefix_extractor: nullptr
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:             Options.num_levels: 7
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                  Options.compression_opts.level: 32767
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:               Options.compression_opts.strategy: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                  Options.compression_opts.enabled: false
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                        Options.arena_block_size: 1048576
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                Options.disable_auto_compactions: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                   Options.inplace_update_support: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                           Options.bloom_locality: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                    Options.max_successive_merges: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                Options.paranoid_file_checks: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                Options.force_consistency_checks: 1
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                Options.report_bg_io_stats: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                               Options.ttl: 2592000
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                       Options.enable_blob_files: false
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                           Options.min_blob_size: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                          Options.blob_file_size: 268435456
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                Options.blob_file_starting_level: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:           Options.merge_operator: None
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:        Options.compaction_filter: None
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:        Options.compaction_filter_factory: None
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:  Options.sst_partitioner_factory: None
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5612cf17e720)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x5612cf174430#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:        Options.write_buffer_size: 16777216
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:  Options.max_write_buffer_number: 64
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:          Options.compression: LZ4
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:       Options.prefix_extractor: nullptr
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:             Options.num_levels: 7
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                  Options.compression_opts.level: 32767
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:               Options.compression_opts.strategy: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                  Options.compression_opts.enabled: false
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                        Options.arena_block_size: 1048576
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                Options.disable_auto_compactions: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                   Options.inplace_update_support: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                           Options.bloom_locality: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                    Options.max_successive_merges: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                Options.paranoid_file_checks: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                Options.force_consistency_checks: 1
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                Options.report_bg_io_stats: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                               Options.ttl: 2592000
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                       Options.enable_blob_files: false
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                           Options.min_blob_size: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                          Options.blob_file_size: 268435456
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                Options.blob_file_starting_level: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:           Options.merge_operator: None
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:        Options.compaction_filter: None
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:        Options.compaction_filter_factory: None
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:  Options.sst_partitioner_factory: None
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5612cf17e720)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x5612cf174430#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:        Options.write_buffer_size: 16777216
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:  Options.max_write_buffer_number: 64
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:          Options.compression: LZ4
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:       Options.prefix_extractor: nullptr
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:             Options.num_levels: 7
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                  Options.compression_opts.level: 32767
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:               Options.compression_opts.strategy: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                  Options.compression_opts.enabled: false
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                        Options.arena_block_size: 1048576
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                Options.disable_auto_compactions: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                   Options.inplace_update_support: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                           Options.bloom_locality: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                    Options.max_successive_merges: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                Options.paranoid_file_checks: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                Options.force_consistency_checks: 1
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                Options.report_bg_io_stats: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                               Options.ttl: 2592000
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                       Options.enable_blob_files: false
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                           Options.min_blob_size: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                          Options.blob_file_size: 268435456
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                Options.blob_file_starting_level: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 18d4f904-cfb1-426a-ae34-3fdfed92262e
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765002628396817, "job": 1, "event": "recovery_started", "wal_files": [31]}
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765002628397035, "job": 1, "event": "recovery_finished"}
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: bluestore(/var/lib/ceph/osd/ceph-2) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: bluestore(/var/lib/ceph/osd/ceph-2) _open_super_meta old nid_max 1025
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: bluestore(/var/lib/ceph/osd/ceph-2) _open_super_meta old blobid_max 10240
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: bluestore(/var/lib/ceph/osd/ceph-2) _open_super_meta ondisk_format 4 compat_ondisk_format 3
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: bluestore(/var/lib/ceph/osd/ceph-2) _open_super_meta min_alloc_size 0x1000
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: freelist init
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: freelist _read_cfg
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: bluestore(/var/lib/ceph/osd/ceph-2) _init_alloc loaded 7.0 GiB in 2 extents, allocator type hybrid, capacity 0x1bfc00000, block size 0x1000, free 0x1bfbfd000, fragmentation 5.5e-07
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: bluefs umount
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: bdev(0x5612cff93400 /var/lib/ceph/osd/ceph-2/block) close
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: bdev(0x5612cff93400 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: bdev(0x5612cff93400 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: bdev(0x5612cff93400 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: bdev(0x5612cff93400 /var/lib/ceph/osd/ceph-2/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-2/block size 7.0 GiB
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: bluefs mount
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: bluefs _init_alloc shared, id 1, capacity 0x1bfc00000, block size 0x10000
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: bluefs mount shared_bdev_used = 4718592
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: bluestore(/var/lib/ceph/osd/ceph-2) _prepare_db_environment set db_paths to db,7136398540 db.slow,7136398540
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: RocksDB version: 7.9.2
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Git sha 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Compile date 2025-05-06 23:30:25
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: DB SUMMARY
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: DB Session ID:  JUYA7DAMPBZ4PBZIFSRR
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: CURRENT file:  CURRENT
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: IDENTITY file:  IDENTITY
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; 
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                         Options.error_if_exists: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                       Options.create_if_missing: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                         Options.paranoid_checks: 1
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:             Options.flush_verify_memtable_count: 1
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                                     Options.env: 0x5612cf2cc460
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                                      Options.fs: LegacyFileSystem
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                                Options.info_log: 0x5612cf17fa20
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                Options.max_file_opening_threads: 16
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                              Options.statistics: (nil)
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                               Options.use_fsync: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                       Options.max_log_file_size: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                   Options.log_file_time_to_roll: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                       Options.keep_log_file_num: 1000
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                    Options.recycle_log_file_num: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                         Options.allow_fallocate: 1
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                        Options.allow_mmap_reads: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                       Options.allow_mmap_writes: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                        Options.use_direct_reads: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:          Options.create_missing_column_families: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                              Options.db_log_dir: 
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                                 Options.wal_dir: db.wal
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                Options.table_cache_numshardbits: 6
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                         Options.WAL_ttl_seconds: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                       Options.WAL_size_limit_MB: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:             Options.manifest_preallocation_size: 4194304
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                     Options.is_fd_close_on_exec: 1
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                   Options.advise_random_on_open: 1
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                    Options.db_write_buffer_size: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                    Options.write_buffer_manager: 0x5612d0098460
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:         Options.access_hint_on_compaction_start: 1
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                      Options.use_adaptive_mutex: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                            Options.rate_limiter: (nil)
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                       Options.wal_recovery_mode: 2
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                  Options.enable_thread_tracking: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                  Options.enable_pipelined_write: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                  Options.unordered_write: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:             Options.write_thread_max_yield_usec: 100
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                               Options.row_cache: None
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                              Options.wal_filter: None
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:             Options.avoid_flush_during_recovery: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:             Options.allow_ingest_behind: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:             Options.two_write_queues: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:             Options.manual_wal_flush: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:             Options.wal_compression: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:             Options.atomic_flush: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                 Options.persist_stats_to_disk: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                 Options.write_dbid_to_manifest: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                 Options.log_readahead_size: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                 Options.best_efforts_recovery: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:             Options.allow_data_in_errors: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:             Options.db_host_id: __hostname__
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:             Options.enforce_single_del_contracts: true
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:             Options.max_background_jobs: 4
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:             Options.max_background_compactions: -1
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:             Options.max_subcompactions: 1
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:           Options.writable_file_max_buffer_size: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:             Options.delayed_write_rate : 16777216
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:             Options.max_total_wal_size: 1073741824
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                   Options.stats_dump_period_sec: 600
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                 Options.stats_persist_period_sec: 600
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                          Options.max_open_files: -1
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                          Options.bytes_per_sync: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                      Options.wal_bytes_per_sync: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                   Options.strict_bytes_per_sync: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:       Options.compaction_readahead_size: 2097152
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                  Options.max_background_flushes: -1
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Compression algorithms supported:
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: #011kZSTD supported: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: #011kXpressCompression supported: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: #011kBZip2Compression supported: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: #011kZSTDNotFinalCompression supported: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: #011kLZ4Compression supported: 1
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: #011kZlibCompression supported: 1
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: #011kLZ4HCCompression supported: 1
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: #011kSnappyCompression supported: 1
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Fast CRC32 supported: Supported on x86
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: DMutex implementation: pthread_mutex_t
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:        Options.compaction_filter: None
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:        Options.compaction_filter_factory: None
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:  Options.sst_partitioner_factory: None
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5612cf188200)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x5612cf175350#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:        Options.write_buffer_size: 16777216
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:  Options.max_write_buffer_number: 64
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:          Options.compression: LZ4
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:       Options.prefix_extractor: nullptr
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:             Options.num_levels: 7
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                  Options.compression_opts.level: 32767
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:               Options.compression_opts.strategy: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                  Options.compression_opts.enabled: false
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                        Options.arena_block_size: 1048576
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                Options.disable_auto_compactions: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                   Options.inplace_update_support: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                           Options.bloom_locality: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                    Options.max_successive_merges: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                Options.paranoid_file_checks: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                Options.force_consistency_checks: 1
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                Options.report_bg_io_stats: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                               Options.ttl: 2592000
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                       Options.enable_blob_files: false
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                           Options.min_blob_size: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                          Options.blob_file_size: 268435456
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                Options.blob_file_starting_level: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:           Options.merge_operator: None
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:        Options.compaction_filter: None
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:        Options.compaction_filter_factory: None
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:  Options.sst_partitioner_factory: None
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5612cf188200)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x5612cf175350
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:        Options.write_buffer_size: 16777216
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:  Options.max_write_buffer_number: 64
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:          Options.compression: LZ4
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:       Options.prefix_extractor: nullptr
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:             Options.num_levels: 7
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                  Options.compression_opts.level: 32767
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:               Options.compression_opts.strategy: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                  Options.compression_opts.enabled: false
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                        Options.arena_block_size: 1048576
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                Options.disable_auto_compactions: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                   Options.inplace_update_support: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                           Options.bloom_locality: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                    Options.max_successive_merges: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                Options.paranoid_file_checks: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                Options.force_consistency_checks: 1
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                Options.report_bg_io_stats: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                               Options.ttl: 2592000
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                       Options.enable_blob_files: false
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                           Options.min_blob_size: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                          Options.blob_file_size: 268435456
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                Options.blob_file_starting_level: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:           Options.merge_operator: None
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:        Options.compaction_filter: None
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:        Options.compaction_filter_factory: None
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:  Options.sst_partitioner_factory: None
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5612cf188200)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x5612cf175350
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:        Options.write_buffer_size: 16777216
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:  Options.max_write_buffer_number: 64
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:          Options.compression: LZ4
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:       Options.prefix_extractor: nullptr
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:             Options.num_levels: 7
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                  Options.compression_opts.level: 32767
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:               Options.compression_opts.strategy: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                  Options.compression_opts.enabled: false
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                        Options.arena_block_size: 1048576
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                Options.disable_auto_compactions: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                   Options.inplace_update_support: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                           Options.bloom_locality: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                    Options.max_successive_merges: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                Options.paranoid_file_checks: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                Options.force_consistency_checks: 1
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                Options.report_bg_io_stats: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                               Options.ttl: 2592000
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                       Options.enable_blob_files: false
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                           Options.min_blob_size: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                          Options.blob_file_size: 268435456
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                Options.blob_file_starting_level: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:           Options.merge_operator: None
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:        Options.compaction_filter: None
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:        Options.compaction_filter_factory: None
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:  Options.sst_partitioner_factory: None
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5612cf188200)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x5612cf175350
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:        Options.write_buffer_size: 16777216
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:  Options.max_write_buffer_number: 64
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:          Options.compression: LZ4
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:       Options.prefix_extractor: nullptr
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:             Options.num_levels: 7
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                  Options.compression_opts.level: 32767
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:               Options.compression_opts.strategy: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                  Options.compression_opts.enabled: false
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                        Options.arena_block_size: 1048576
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                Options.disable_auto_compactions: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                   Options.inplace_update_support: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                           Options.bloom_locality: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                    Options.max_successive_merges: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                Options.paranoid_file_checks: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                Options.force_consistency_checks: 1
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                Options.report_bg_io_stats: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                               Options.ttl: 2592000
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                       Options.enable_blob_files: false
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                           Options.min_blob_size: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                          Options.blob_file_size: 268435456
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                Options.blob_file_starting_level: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:           Options.merge_operator: None
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:        Options.compaction_filter: None
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:        Options.compaction_filter_factory: None
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:  Options.sst_partitioner_factory: None
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5612cf188200)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x5612cf175350#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:        Options.write_buffer_size: 16777216
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:  Options.max_write_buffer_number: 64
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:          Options.compression: LZ4
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:       Options.prefix_extractor: nullptr
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:             Options.num_levels: 7
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                  Options.compression_opts.level: 32767
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:               Options.compression_opts.strategy: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                  Options.compression_opts.enabled: false
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                        Options.arena_block_size: 1048576
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                Options.disable_auto_compactions: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                   Options.inplace_update_support: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                           Options.bloom_locality: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                    Options.max_successive_merges: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                Options.paranoid_file_checks: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                Options.force_consistency_checks: 1
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                Options.report_bg_io_stats: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                               Options.ttl: 2592000
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                       Options.enable_blob_files: false
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                           Options.min_blob_size: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                          Options.blob_file_size: 268435456
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                Options.blob_file_starting_level: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:           Options.merge_operator: None
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:        Options.compaction_filter: None
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:        Options.compaction_filter_factory: None
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:  Options.sst_partitioner_factory: None
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5612cf188200)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x5612cf175350#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:        Options.write_buffer_size: 16777216
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:  Options.max_write_buffer_number: 64
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:          Options.compression: LZ4
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:       Options.prefix_extractor: nullptr
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:             Options.num_levels: 7
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                  Options.compression_opts.level: 32767
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:               Options.compression_opts.strategy: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                  Options.compression_opts.enabled: false
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                        Options.arena_block_size: 1048576
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                Options.disable_auto_compactions: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                   Options.inplace_update_support: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                           Options.bloom_locality: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                    Options.max_successive_merges: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                Options.paranoid_file_checks: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                Options.force_consistency_checks: 1
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                Options.report_bg_io_stats: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                               Options.ttl: 2592000
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                       Options.enable_blob_files: false
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                           Options.min_blob_size: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                          Options.blob_file_size: 268435456
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                Options.blob_file_starting_level: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:           Options.merge_operator: None
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:        Options.compaction_filter: None
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:        Options.compaction_filter_factory: None
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:  Options.sst_partitioner_factory: None
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5612cf188200)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x5612cf175350
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:        Options.write_buffer_size: 16777216
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:  Options.max_write_buffer_number: 64
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:          Options.compression: LZ4
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:       Options.prefix_extractor: nullptr
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:             Options.num_levels: 7
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                  Options.compression_opts.level: 32767
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:               Options.compression_opts.strategy: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                  Options.compression_opts.enabled: false
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                        Options.arena_block_size: 1048576
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                Options.disable_auto_compactions: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                   Options.inplace_update_support: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                           Options.bloom_locality: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                    Options.max_successive_merges: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                Options.paranoid_file_checks: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                Options.force_consistency_checks: 1
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                Options.report_bg_io_stats: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                               Options.ttl: 2592000
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                       Options.enable_blob_files: false
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                           Options.min_blob_size: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                          Options.blob_file_size: 268435456
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                Options.blob_file_starting_level: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:           Options.merge_operator: None
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:        Options.compaction_filter: None
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:        Options.compaction_filter_factory: None
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:  Options.sst_partitioner_factory: None
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5612cf188120)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x5612cf1754b0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:        Options.write_buffer_size: 16777216
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:  Options.max_write_buffer_number: 64
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:          Options.compression: LZ4
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:       Options.prefix_extractor: nullptr
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:             Options.num_levels: 7
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                  Options.compression_opts.level: 32767
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:               Options.compression_opts.strategy: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                  Options.compression_opts.enabled: false
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                        Options.arena_block_size: 1048576
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                Options.disable_auto_compactions: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                   Options.inplace_update_support: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                           Options.bloom_locality: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                    Options.max_successive_merges: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                Options.paranoid_file_checks: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                Options.force_consistency_checks: 1
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                Options.report_bg_io_stats: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                               Options.ttl: 2592000
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                       Options.enable_blob_files: false
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                           Options.min_blob_size: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                          Options.blob_file_size: 268435456
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                Options.blob_file_starting_level: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:           Options.merge_operator: None
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:        Options.compaction_filter: None
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:        Options.compaction_filter_factory: None
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:  Options.sst_partitioner_factory: None
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5612cf188120)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x5612cf1754b0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:        Options.write_buffer_size: 16777216
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:  Options.max_write_buffer_number: 64
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:          Options.compression: LZ4
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:       Options.prefix_extractor: nullptr
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:             Options.num_levels: 7
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                  Options.compression_opts.level: 32767
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:               Options.compression_opts.strategy: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                  Options.compression_opts.enabled: false
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                        Options.arena_block_size: 1048576
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                Options.disable_auto_compactions: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                   Options.inplace_update_support: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                           Options.bloom_locality: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                    Options.max_successive_merges: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                Options.paranoid_file_checks: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                Options.force_consistency_checks: 1
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                Options.report_bg_io_stats: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                               Options.ttl: 2592000
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                       Options.enable_blob_files: false
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                           Options.min_blob_size: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                          Options.blob_file_size: 268435456
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                Options.blob_file_starting_level: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:           Options.merge_operator: None
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:        Options.compaction_filter: None
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:        Options.compaction_filter_factory: None
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:  Options.sst_partitioner_factory: None
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5612cf188120)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x5612cf1754b0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:        Options.write_buffer_size: 16777216
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:  Options.max_write_buffer_number: 64
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:          Options.compression: LZ4
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:       Options.prefix_extractor: nullptr
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:             Options.num_levels: 7
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                  Options.compression_opts.level: 32767
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:               Options.compression_opts.strategy: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                  Options.compression_opts.enabled: false
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                        Options.arena_block_size: 1048576
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                Options.disable_auto_compactions: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                   Options.inplace_update_support: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                           Options.bloom_locality: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                    Options.max_successive_merges: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                Options.paranoid_file_checks: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                Options.force_consistency_checks: 1
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                Options.report_bg_io_stats: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                               Options.ttl: 2592000
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                       Options.enable_blob_files: false
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                           Options.min_blob_size: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                          Options.blob_file_size: 268435456
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb:                Options.blob_file_starting_level: 0
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 18d4f904-cfb1-426a-ae34-3fdfed92262e
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765002628661006, "job": 1, "event": "recovery_started", "wal_files": [31]}
Dec  6 01:30:28 np0005548731 ceph-osd[80147]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Dec  6 01:30:29 np0005548731 ceph-osd[80147]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765002629079705, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 35, "file_size": 1272, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13, "largest_seqno": 21, "table_properties": {"data_size": 128, "index_size": 27, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 87, "raw_average_key_size": 17, "raw_value_size": 82, "raw_average_value_size": 16, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 2, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": ".T:int64_array.b:bitwise_xor", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765002628, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "18d4f904-cfb1-426a-ae34-3fdfed92262e", "db_session_id": "JUYA7DAMPBZ4PBZIFSRR", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Dec  6 01:30:29 np0005548731 ceph-osd[80147]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765002629361256, "cf_name": "p-0", "job": 1, "event": "table_file_creation", "file_number": 36, "file_size": 1594, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14, "largest_seqno": 15, "table_properties": {"data_size": 468, "index_size": 39, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 72, "raw_average_key_size": 36, "raw_value_size": 567, "raw_average_value_size": 283, "num_data_blocks": 1, "num_entries": 2, "num_filter_entries": 2, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "p-0", "column_family_id": 4, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765002629, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "18d4f904-cfb1-426a-ae34-3fdfed92262e", "db_session_id": "JUYA7DAMPBZ4PBZIFSRR", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}}
Dec  6 01:30:29 np0005548731 ceph-osd[80147]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765002629403774, "cf_name": "O-2", "job": 1, "event": "table_file_creation", "file_number": 37, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16, "largest_seqno": 16, "table_properties": {"data_size": 121, "index_size": 64, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 55, "raw_average_key_size": 55, "raw_value_size": 50, "raw_average_value_size": 50, "num_data_blocks": 1, "num_entries": 1, "num_filter_entries": 1, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "O-2", "column_family_id": 9, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765002629, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "18d4f904-cfb1-426a-ae34-3fdfed92262e", "db_session_id": "JUYA7DAMPBZ4PBZIFSRR", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Dec  6 01:30:29 np0005548731 ceph-osd[80147]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765002629409796, "job": 1, "event": "recovery_finished"}
Dec  6 01:30:29 np0005548731 ceph-osd[80147]: rocksdb: [db/version_set.cc:5047] Creating manifest 40
Dec  6 01:30:30 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e44 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  6 01:30:31 np0005548731 ceph-osd[80147]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x5612cf247c00
Dec  6 01:30:31 np0005548731 ceph-osd[80147]: rocksdb: DB pointer 0x5612d0081a00
Dec  6 01:30:31 np0005548731 ceph-osd[80147]: bluestore(/var/lib/ceph/osd/ceph-2) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Dec  6 01:30:31 np0005548731 ceph-osd[80147]: bluestore(/var/lib/ceph/osd/ceph-2) _upgrade_super from 4, latest 4
Dec  6 01:30:31 np0005548731 ceph-osd[80147]: bluestore(/var/lib/ceph/osd/ceph-2) _upgrade_super done
Dec  6 01:30:31 np0005548731 ceph-osd[80147]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec  6 01:30:31 np0005548731 ceph-osd[80147]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 2.4 total, 2.4 interval#012Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s#012Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.42              0.00         1    0.419       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.42              0.00         1    0.419       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.42              0.00         1    0.419       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.42              0.00         1    0.419       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 2.4 total, 2.4 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.4 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.4 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x5612cf175350#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.2e-05 secs_since: 0#012Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 2.4 total, 2.4 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x5612cf175350#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.2e-05 secs_since: 0#012Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 2.4 total, 2.4 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x5612cf175350#2 capacity: 460.80 MB usag
Dec  6 01:30:31 np0005548731 ceph-osd[80147]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/cephfs/cls_cephfs.cc:201: loading cephfs
Dec  6 01:30:31 np0005548731 ceph-osd[80147]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/hello/cls_hello.cc:316: loading cls_hello
Dec  6 01:30:31 np0005548731 ceph-osd[80147]: _get_class not permitted to load lua
Dec  6 01:30:31 np0005548731 ceph-osd[80147]: _get_class not permitted to load sdk
Dec  6 01:30:31 np0005548731 ceph-osd[80147]: _get_class not permitted to load test_remote_reads
Dec  6 01:30:31 np0005548731 ceph-osd[80147]: osd.2 0 crush map has features 288232575208783872, adjusting msgr requires for clients
Dec  6 01:30:31 np0005548731 ceph-osd[80147]: osd.2 0 crush map has features 288232575208783872 was 8705, adjusting msgr requires for mons
Dec  6 01:30:31 np0005548731 ceph-osd[80147]: osd.2 0 crush map has features 288232575208783872, adjusting msgr requires for osds
Dec  6 01:30:31 np0005548731 ceph-osd[80147]: osd.2 0 check_osdmap_features enabling on-disk ERASURE CODES compat feature
Dec  6 01:30:31 np0005548731 ceph-osd[80147]: osd.2 0 load_pgs
Dec  6 01:30:31 np0005548731 ceph-osd[80147]: osd.2 0 load_pgs opened 0 pgs
Dec  6 01:30:31 np0005548731 ceph-osd[80147]: osd.2 0 log_to_monitors true
Dec  6 01:30:31 np0005548731 ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-osd-2[80143]: 2025-12-06T06:30:31.037+0000 7fb2b43d4740 -1 osd.2 0 log_to_monitors true
Dec  6 01:30:31 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]} v 0) v1
Dec  6 01:30:31 np0005548731 ceph-mon[77458]: log_channel(audit) log [INF] : from='osd.2 [v2:192.168.122.102:6800/3451812493,v1:192.168.122.102:6801/3451812493]' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch
Dec  6 01:30:31 np0005548731 podman[80718]: 2025-12-06 06:30:31.783224876 +0000 UTC m=+0.025748395 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  6 01:30:31 np0005548731 podman[80718]: 2025-12-06 06:30:31.901637353 +0000 UTC m=+0.144160852 container create 29eb7aec304b5f07af568f0ef2298ba5fbcae9f362bf186008a20a444c88f35d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_mclean, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec  6 01:30:32 np0005548731 systemd[1]: Started libpod-conmon-29eb7aec304b5f07af568f0ef2298ba5fbcae9f362bf186008a20a444c88f35d.scope.
Dec  6 01:30:32 np0005548731 systemd[1]: Started libcrun container.
Dec  6 01:30:32 np0005548731 ceph-osd[80147]: log_channel(cluster) log [DBG] : purged_snaps scrub starts
Dec  6 01:30:32 np0005548731 ceph-osd[80147]: log_channel(cluster) log [DBG] : purged_snaps scrub ok
Dec  6 01:30:32 np0005548731 podman[80718]: 2025-12-06 06:30:32.15275948 +0000 UTC m=+0.395282989 container init 29eb7aec304b5f07af568f0ef2298ba5fbcae9f362bf186008a20a444c88f35d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_mclean, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  6 01:30:32 np0005548731 podman[80718]: 2025-12-06 06:30:32.160986203 +0000 UTC m=+0.403509692 container start 29eb7aec304b5f07af568f0ef2298ba5fbcae9f362bf186008a20a444c88f35d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_mclean, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  6 01:30:32 np0005548731 sleepy_mclean[80734]: 167 167
Dec  6 01:30:32 np0005548731 systemd[1]: libpod-29eb7aec304b5f07af568f0ef2298ba5fbcae9f362bf186008a20a444c88f35d.scope: Deactivated successfully.
Dec  6 01:30:32 np0005548731 conmon[80734]: conmon 29eb7aec304b5f07af56 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-29eb7aec304b5f07af568f0ef2298ba5fbcae9f362bf186008a20a444c88f35d.scope/container/memory.events
Dec  6 01:30:32 np0005548731 podman[80718]: 2025-12-06 06:30:32.170943798 +0000 UTC m=+0.413467287 container attach 29eb7aec304b5f07af568f0ef2298ba5fbcae9f362bf186008a20a444c88f35d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_mclean, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  6 01:30:32 np0005548731 podman[80718]: 2025-12-06 06:30:32.172161838 +0000 UTC m=+0.414685327 container died 29eb7aec304b5f07af568f0ef2298ba5fbcae9f362bf186008a20a444c88f35d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_mclean, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec  6 01:30:32 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 01:30:32 np0005548731 ceph-mon[77458]: from='osd.2 [v2:192.168.122.102:6800/3451812493,v1:192.168.122.102:6801/3451812493]' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch
Dec  6 01:30:32 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 01:30:32 np0005548731 ceph-mon[77458]: from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch
Dec  6 01:30:32 np0005548731 systemd[1]: var-lib-containers-storage-overlay-06c2e3178c01afc9cffee06d938453fb8bb44dfc654edf2dc7496b23c5c04270-merged.mount: Deactivated successfully.
Dec  6 01:30:32 np0005548731 podman[80718]: 2025-12-06 06:30:32.575976707 +0000 UTC m=+0.818500196 container remove 29eb7aec304b5f07af568f0ef2298ba5fbcae9f362bf186008a20a444c88f35d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_mclean, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  6 01:30:32 np0005548731 systemd[1]: libpod-conmon-29eb7aec304b5f07af568f0ef2298ba5fbcae9f362bf186008a20a444c88f35d.scope: Deactivated successfully.
Dec  6 01:30:32 np0005548731 podman[80758]: 2025-12-06 06:30:32.762704868 +0000 UTC m=+0.055361076 container create 93b116891e5e030ca48c00b063b9a667c330d357f13c9d8db9de9b5f738a895b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_driscoll, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  6 01:30:32 np0005548731 systemd[1]: Started libpod-conmon-93b116891e5e030ca48c00b063b9a667c330d357f13c9d8db9de9b5f738a895b.scope.
Dec  6 01:30:32 np0005548731 podman[80758]: 2025-12-06 06:30:32.738744858 +0000 UTC m=+0.031401086 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  6 01:30:32 np0005548731 systemd[1]: Started libcrun container.
Dec  6 01:30:32 np0005548731 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a6dc09dd9f1cf8ea9b41a5210a4a54b39d2db317646ebd56060e8c239c57faf6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  6 01:30:32 np0005548731 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a6dc09dd9f1cf8ea9b41a5210a4a54b39d2db317646ebd56060e8c239c57faf6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  6 01:30:32 np0005548731 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a6dc09dd9f1cf8ea9b41a5210a4a54b39d2db317646ebd56060e8c239c57faf6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  6 01:30:32 np0005548731 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a6dc09dd9f1cf8ea9b41a5210a4a54b39d2db317646ebd56060e8c239c57faf6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  6 01:30:32 np0005548731 podman[80758]: 2025-12-06 06:30:32.861736417 +0000 UTC m=+0.154392645 container init 93b116891e5e030ca48c00b063b9a667c330d357f13c9d8db9de9b5f738a895b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_driscoll, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  6 01:30:32 np0005548731 podman[80758]: 2025-12-06 06:30:32.867992431 +0000 UTC m=+0.160648639 container start 93b116891e5e030ca48c00b063b9a667c330d357f13c9d8db9de9b5f738a895b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_driscoll, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Dec  6 01:30:32 np0005548731 podman[80758]: 2025-12-06 06:30:32.871682892 +0000 UTC m=+0.164339120 container attach 93b116891e5e030ca48c00b063b9a667c330d357f13c9d8db9de9b5f738a895b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_driscoll, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  6 01:30:33 np0005548731 peaceful_driscoll[80774]: {
Dec  6 01:30:33 np0005548731 peaceful_driscoll[80774]:    "e2493db6-bc13-4dc0-b3a7-5b4b07811cd5": {
Dec  6 01:30:33 np0005548731 peaceful_driscoll[80774]:        "ceph_fsid": "40a1bae4-cf76-5610-8dab-c75116dfe0bb",
Dec  6 01:30:33 np0005548731 peaceful_driscoll[80774]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec  6 01:30:33 np0005548731 peaceful_driscoll[80774]:        "osd_id": 2,
Dec  6 01:30:33 np0005548731 peaceful_driscoll[80774]:        "osd_uuid": "e2493db6-bc13-4dc0-b3a7-5b4b07811cd5",
Dec  6 01:30:33 np0005548731 peaceful_driscoll[80774]:        "type": "bluestore"
Dec  6 01:30:33 np0005548731 peaceful_driscoll[80774]:    }
Dec  6 01:30:33 np0005548731 peaceful_driscoll[80774]: }
Dec  6 01:30:33 np0005548731 systemd[1]: libpod-93b116891e5e030ca48c00b063b9a667c330d357f13c9d8db9de9b5f738a895b.scope: Deactivated successfully.
Dec  6 01:30:33 np0005548731 podman[80758]: 2025-12-06 06:30:33.815747521 +0000 UTC m=+1.108403719 container died 93b116891e5e030ca48c00b063b9a667c330d357f13c9d8db9de9b5f738a895b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_driscoll, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Dec  6 01:30:33 np0005548731 systemd[1]: var-lib-containers-storage-overlay-a6dc09dd9f1cf8ea9b41a5210a4a54b39d2db317646ebd56060e8c239c57faf6-merged.mount: Deactivated successfully.
Dec  6 01:30:33 np0005548731 podman[80758]: 2025-12-06 06:30:33.872744385 +0000 UTC m=+1.165400593 container remove 93b116891e5e030ca48c00b063b9a667c330d357f13c9d8db9de9b5f738a895b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_driscoll, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Dec  6 01:30:33 np0005548731 systemd[1]: libpod-conmon-93b116891e5e030ca48c00b063b9a667c330d357f13c9d8db9de9b5f738a895b.scope: Deactivated successfully.
Dec  6 01:30:34 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e45 e45: 3 total, 2 up, 3 in
Dec  6 01:30:34 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 2, "weight":0.0068, "args": ["host=compute-2", "root=default"]} v 0) v1
Dec  6 01:30:34 np0005548731 ceph-mon[77458]: log_channel(audit) log [INF] : from='osd.2 [v2:192.168.122.102:6800/3451812493,v1:192.168.122.102:6801/3451812493]' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0068, "args": ["host=compute-2", "root=default"]}]: dispatch
Dec  6 01:30:35 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e45 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  6 01:30:36 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e46 e46: 3 total, 2 up, 3 in
Dec  6 01:30:36 np0005548731 ceph-mon[77458]: from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished
Dec  6 01:30:36 np0005548731 ceph-mon[77458]: from='osd.2 [v2:192.168.122.102:6800/3451812493,v1:192.168.122.102:6801/3451812493]' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0068, "args": ["host=compute-2", "root=default"]}]: dispatch
Dec  6 01:30:36 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 01:30:36 np0005548731 ceph-mon[77458]: from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0068, "args": ["host=compute-2", "root=default"]}]: dispatch
Dec  6 01:30:36 np0005548731 ceph-osd[80147]: osd.2 0 done with init, starting boot process
Dec  6 01:30:36 np0005548731 ceph-osd[80147]: osd.2 0 start_boot
Dec  6 01:30:36 np0005548731 ceph-osd[80147]: osd.2 0 maybe_override_options_for_qos osd_max_backfills set to 1
Dec  6 01:30:36 np0005548731 ceph-osd[80147]: osd.2 0 maybe_override_options_for_qos osd_recovery_max_active set to 0
Dec  6 01:30:36 np0005548731 ceph-osd[80147]: osd.2 0 maybe_override_options_for_qos osd_recovery_max_active_hdd set to 3
Dec  6 01:30:36 np0005548731 ceph-osd[80147]: osd.2 0 maybe_override_options_for_qos osd_recovery_max_active_ssd set to 10
Dec  6 01:30:36 np0005548731 ceph-osd[80147]: osd.2 0  bench count 12288000 bsize 4 KiB
Dec  6 01:30:40 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e46 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  6 01:30:40 np0005548731 ceph-mon[77458]: from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0068, "args": ["host=compute-2", "root=default"]}]': finished
Dec  6 01:30:40 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 01:30:40 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-2.oieczf", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Dec  6 01:30:43 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-2.oieczf", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Dec  6 01:30:43 np0005548731 ceph-osd[80147]: osd.2 0 maybe_override_max_osd_capacity_for_qos osd bench result - bandwidth (MiB/sec): 20.930 iops: 5358.078 elapsed_sec: 0.560
Dec  6 01:30:43 np0005548731 ceph-osd[80147]: log_channel(cluster) log [WRN] : OSD bench result of 5358.078177 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.2. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Dec  6 01:30:43 np0005548731 ceph-osd[80147]: osd.2 0 waiting for initial osdmap
Dec  6 01:30:43 np0005548731 ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-osd-2[80143]: 2025-12-06T06:30:43.015+0000 7fb2b0354640 -1 osd.2 0 waiting for initial osdmap
Dec  6 01:30:43 np0005548731 ceph-osd[80147]: osd.2 40 crush map has features 288514051259236352, adjusting msgr requires for clients
Dec  6 01:30:43 np0005548731 ceph-osd[80147]: osd.2 40 crush map has features 288514051259236352 was 288232575208792577, adjusting msgr requires for mons
Dec  6 01:30:43 np0005548731 ceph-osd[80147]: osd.2 40 crush map has features 3314933000852226048, adjusting msgr requires for osds
Dec  6 01:30:43 np0005548731 ceph-osd[80147]: osd.2 40 check_osdmap_features require_osd_release unknown -> reef
Dec  6 01:30:43 np0005548731 ceph-osd[80147]: osd.2 46 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Dec  6 01:30:43 np0005548731 ceph-osd[80147]: osd.2 46 set_numa_affinity not setting numa affinity
Dec  6 01:30:43 np0005548731 ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-osd-2[80143]: 2025-12-06T06:30:43.087+0000 7fb2ab97c640 -1 osd.2 46 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Dec  6 01:30:43 np0005548731 ceph-osd[80147]: osd.2 46 _collect_metadata loop3:  no unique device id for loop3: fallback method has no model nor serial
Dec  6 01:30:43 np0005548731 podman[80954]: 2025-12-06 06:30:43.153215129 +0000 UTC m=+0.062900701 container create 953efd67eaea18cfc21ecb9c0d60b85cdc48d302299eee730b6f0faa90cda13d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_elbakyan, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  6 01:30:43 np0005548731 systemd[1]: Started libpod-conmon-953efd67eaea18cfc21ecb9c0d60b85cdc48d302299eee730b6f0faa90cda13d.scope.
Dec  6 01:30:43 np0005548731 systemd[1]: Started libcrun container.
Dec  6 01:30:43 np0005548731 podman[80954]: 2025-12-06 06:30:43.133087752 +0000 UTC m=+0.042773334 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  6 01:30:43 np0005548731 podman[80954]: 2025-12-06 06:30:43.243423621 +0000 UTC m=+0.153109213 container init 953efd67eaea18cfc21ecb9c0d60b85cdc48d302299eee730b6f0faa90cda13d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_elbakyan, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  6 01:30:43 np0005548731 podman[80954]: 2025-12-06 06:30:43.249785148 +0000 UTC m=+0.159470720 container start 953efd67eaea18cfc21ecb9c0d60b85cdc48d302299eee730b6f0faa90cda13d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_elbakyan, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  6 01:30:43 np0005548731 podman[80954]: 2025-12-06 06:30:43.253743806 +0000 UTC m=+0.163429538 container attach 953efd67eaea18cfc21ecb9c0d60b85cdc48d302299eee730b6f0faa90cda13d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_elbakyan, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True)
Dec  6 01:30:43 np0005548731 unruffled_elbakyan[80972]: 167 167
Dec  6 01:30:43 np0005548731 systemd[1]: libpod-953efd67eaea18cfc21ecb9c0d60b85cdc48d302299eee730b6f0faa90cda13d.scope: Deactivated successfully.
Dec  6 01:30:43 np0005548731 podman[80954]: 2025-12-06 06:30:43.256196785 +0000 UTC m=+0.165882357 container died 953efd67eaea18cfc21ecb9c0d60b85cdc48d302299eee730b6f0faa90cda13d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_elbakyan, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  6 01:30:43 np0005548731 systemd[1]: var-lib-containers-storage-overlay-2d4621168bf0403db12912d94833aa4a55f33632db2d32a6da50f75e31445bb5-merged.mount: Deactivated successfully.
Dec  6 01:30:43 np0005548731 podman[80954]: 2025-12-06 06:30:43.299475112 +0000 UTC m=+0.209160684 container remove 953efd67eaea18cfc21ecb9c0d60b85cdc48d302299eee730b6f0faa90cda13d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_elbakyan, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Dec  6 01:30:43 np0005548731 systemd[1]: libpod-conmon-953efd67eaea18cfc21ecb9c0d60b85cdc48d302299eee730b6f0faa90cda13d.scope: Deactivated successfully.
Dec  6 01:30:43 np0005548731 systemd[1]: Reloading.
Dec  6 01:30:43 np0005548731 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  6 01:30:43 np0005548731 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  6 01:30:43 np0005548731 systemd[1]: Reloading.
Dec  6 01:30:43 np0005548731 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  6 01:30:43 np0005548731 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  6 01:30:43 np0005548731 systemd[1]: Starting Ceph rgw.rgw.compute-2.oieczf for 40a1bae4-cf76-5610-8dab-c75116dfe0bb...
Dec  6 01:30:44 np0005548731 ceph-osd[80147]: osd.2 46 tick checking mon for new map
Dec  6 01:30:44 np0005548731 podman[81116]: 2025-12-06 06:30:44.171754522 +0000 UTC m=+0.053551750 container create ee67a562ba12bf5ec21434cc3c42d8b922d749c85274ac848aa0217c26882157 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-rgw-rgw-compute-2-oieczf, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  6 01:30:44 np0005548731 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/351a9a91551f967c3d2ebc25be0c4282a2db9ba75a153c95fd64b4c15c740355/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  6 01:30:44 np0005548731 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/351a9a91551f967c3d2ebc25be0c4282a2db9ba75a153c95fd64b4c15c740355/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  6 01:30:44 np0005548731 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/351a9a91551f967c3d2ebc25be0c4282a2db9ba75a153c95fd64b4c15c740355/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  6 01:30:44 np0005548731 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/351a9a91551f967c3d2ebc25be0c4282a2db9ba75a153c95fd64b4c15c740355/merged/var/lib/ceph/radosgw/ceph-rgw.rgw.compute-2.oieczf supports timestamps until 2038 (0x7fffffff)
Dec  6 01:30:44 np0005548731 podman[81116]: 2025-12-06 06:30:44.150272473 +0000 UTC m=+0.032069731 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  6 01:30:44 np0005548731 podman[81116]: 2025-12-06 06:30:44.249134698 +0000 UTC m=+0.130931976 container init ee67a562ba12bf5ec21434cc3c42d8b922d749c85274ac848aa0217c26882157 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-rgw-rgw-compute-2-oieczf, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec  6 01:30:44 np0005548731 podman[81116]: 2025-12-06 06:30:44.255413263 +0000 UTC m=+0.137210501 container start ee67a562ba12bf5ec21434cc3c42d8b922d749c85274ac848aa0217c26882157 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-rgw-rgw-compute-2-oieczf, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  6 01:30:44 np0005548731 bash[81116]: ee67a562ba12bf5ec21434cc3c42d8b922d749c85274ac848aa0217c26882157
Dec  6 01:30:44 np0005548731 systemd[1]: Started Ceph rgw.rgw.compute-2.oieczf for 40a1bae4-cf76-5610-8dab-c75116dfe0bb.
Dec  6 01:30:44 np0005548731 radosgw[81136]: deferred set uid:gid to 167:167 (ceph:ceph)
Dec  6 01:30:44 np0005548731 radosgw[81136]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process radosgw, pid 2
Dec  6 01:30:44 np0005548731 radosgw[81136]: framework: beast
Dec  6 01:30:44 np0005548731 radosgw[81136]: framework conf key: endpoint, val: 192.168.122.102:8082
Dec  6 01:30:44 np0005548731 radosgw[81136]: init_numa not setting numa affinity
Dec  6 01:30:44 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e47 e47: 3 total, 3 up, 3 in
Dec  6 01:30:44 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 01:30:44 np0005548731 ceph-mon[77458]: Deploying daemon rgw.rgw.compute-2.oieczf on compute-2
Dec  6 01:30:44 np0005548731 ceph-mon[77458]: OSD bench result of 5358.078177 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.2. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Dec  6 01:30:44 np0005548731 ceph-osd[80147]: osd.2 47 state: booting -> active
Dec  6 01:30:44 np0005548731 ceph-osd[80147]: osd.2 pg_epoch: 47 pg[4.1c( empty local-lis/les=0/0 n=0 ec=34/16 lis/c=34/34 les/c/f=35/35/0 sis=47) [2] r=0 lpr=47 pi=[34,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 01:30:44 np0005548731 ceph-osd[80147]: osd.2 pg_epoch: 47 pg[3.1b( empty local-lis/les=0/0 n=0 ec=32/14 lis/c=32/32 les/c/f=33/33/0 sis=47) [2] r=0 lpr=47 pi=[32,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 01:30:44 np0005548731 ceph-osd[80147]: osd.2 pg_epoch: 47 pg[4.1d( empty local-lis/les=0/0 n=0 ec=34/16 lis/c=34/34 les/c/f=35/35/0 sis=47) [2] r=0 lpr=47 pi=[34,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 01:30:44 np0005548731 ceph-osd[80147]: osd.2 pg_epoch: 47 pg[2.1b( empty local-lis/les=0/0 n=0 ec=40/13 lis/c=42/42 les/c/f=43/43/0 sis=47) [2] r=0 lpr=47 pi=[42,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 01:30:44 np0005548731 ceph-osd[80147]: osd.2 pg_epoch: 47 pg[4.19( empty local-lis/les=0/0 n=0 ec=34/16 lis/c=34/34 les/c/f=35/35/0 sis=47) [2] r=0 lpr=47 pi=[34,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 01:30:44 np0005548731 ceph-osd[80147]: osd.2 pg_epoch: 47 pg[3.8( empty local-lis/les=0/0 n=0 ec=32/14 lis/c=32/32 les/c/f=33/33/0 sis=47) [2] r=0 lpr=47 pi=[32,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 01:30:44 np0005548731 ceph-osd[80147]: osd.2 pg_epoch: 47 pg[6.1( empty local-lis/les=0/0 n=0 ec=36/20 lis/c=36/36 les/c/f=39/39/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 01:30:44 np0005548731 ceph-osd[80147]: osd.2 pg_epoch: 47 pg[4.3( empty local-lis/les=0/0 n=0 ec=34/16 lis/c=34/34 les/c/f=35/35/0 sis=47) [2] r=0 lpr=47 pi=[34,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 01:30:44 np0005548731 ceph-osd[80147]: osd.2 pg_epoch: 47 pg[4.6( empty local-lis/les=0/0 n=0 ec=34/16 lis/c=34/34 les/c/f=35/35/0 sis=47) [2] r=0 lpr=47 pi=[34,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 01:30:44 np0005548731 ceph-osd[80147]: osd.2 pg_epoch: 47 pg[5.0( empty local-lis/les=0/0 n=0 ec=18/18 lis/c=34/34 les/c/f=35/35/0 sis=47) [2] r=0 lpr=47 pi=[34,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 01:30:44 np0005548731 ceph-osd[80147]: osd.2 pg_epoch: 47 pg[3.0( empty local-lis/les=0/0 n=0 ec=14/14 lis/c=32/32 les/c/f=33/33/0 sis=47) [2] r=0 lpr=47 pi=[32,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 01:30:44 np0005548731 ceph-osd[80147]: osd.2 pg_epoch: 47 pg[4.2( empty local-lis/les=0/0 n=0 ec=34/16 lis/c=34/34 les/c/f=35/35/0 sis=47) [2] r=0 lpr=47 pi=[34,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 01:30:44 np0005548731 ceph-osd[80147]: osd.2 pg_epoch: 47 pg[2.a( empty local-lis/les=0/0 n=0 ec=40/13 lis/c=42/42 les/c/f=43/43/0 sis=47) [2] r=0 lpr=47 pi=[42,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 01:30:44 np0005548731 ceph-osd[80147]: osd.2 pg_epoch: 47 pg[5.d( empty local-lis/les=0/0 n=0 ec=34/18 lis/c=34/34 les/c/f=35/35/0 sis=47) [2] r=0 lpr=47 pi=[34,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 01:30:44 np0005548731 ceph-osd[80147]: osd.2 pg_epoch: 47 pg[2.d( empty local-lis/les=0/0 n=0 ec=40/13 lis/c=42/42 les/c/f=43/43/0 sis=47) [2] r=0 lpr=47 pi=[42,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 01:30:44 np0005548731 ceph-osd[80147]: osd.2 pg_epoch: 47 pg[5.b( empty local-lis/les=0/0 n=0 ec=34/18 lis/c=34/34 les/c/f=35/35/0 sis=47) [2] r=0 lpr=47 pi=[34,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 01:30:44 np0005548731 ceph-osd[80147]: osd.2 pg_epoch: 47 pg[5.8( empty local-lis/les=0/0 n=0 ec=34/18 lis/c=34/34 les/c/f=35/35/0 sis=47) [2] r=0 lpr=47 pi=[34,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 01:30:44 np0005548731 ceph-osd[80147]: osd.2 pg_epoch: 47 pg[2.c( empty local-lis/les=0/0 n=0 ec=40/13 lis/c=42/42 les/c/f=43/43/0 sis=47) [2] r=0 lpr=47 pi=[42,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 01:30:44 np0005548731 ceph-osd[80147]: osd.2 pg_epoch: 47 pg[7.a( empty local-lis/les=0/0 n=0 ec=40/22 lis/c=42/42 les/c/f=43/43/0 sis=47) [2] r=0 lpr=47 pi=[42,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 01:30:44 np0005548731 ceph-osd[80147]: osd.2 pg_epoch: 47 pg[7.1d( empty local-lis/les=0/0 n=0 ec=40/22 lis/c=42/42 les/c/f=43/43/0 sis=47) [2] r=0 lpr=47 pi=[42,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 01:30:44 np0005548731 ceph-osd[80147]: osd.2 pg_epoch: 47 pg[7.14( empty local-lis/les=0/0 n=0 ec=40/22 lis/c=42/42 les/c/f=43/43/0 sis=47) [2] r=0 lpr=47 pi=[42,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 01:30:44 np0005548731 ceph-osd[80147]: osd.2 pg_epoch: 47 pg[2.10( empty local-lis/les=0/0 n=0 ec=40/13 lis/c=42/42 les/c/f=43/43/0 sis=47) [2] r=0 lpr=47 pi=[42,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 01:30:44 np0005548731 ceph-osd[80147]: osd.2 pg_epoch: 47 pg[5.12( empty local-lis/les=0/0 n=0 ec=34/18 lis/c=34/34 les/c/f=35/35/0 sis=47) [2] r=0 lpr=47 pi=[34,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 01:30:44 np0005548731 ceph-osd[80147]: osd.2 pg_epoch: 47 pg[5.13( empty local-lis/les=0/0 n=0 ec=34/18 lis/c=34/34 les/c/f=35/35/0 sis=47) [2] r=0 lpr=47 pi=[34,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 01:30:44 np0005548731 ceph-osd[80147]: osd.2 pg_epoch: 47 pg[2.13( empty local-lis/les=0/0 n=0 ec=40/13 lis/c=42/42 les/c/f=43/43/0 sis=47) [2] r=0 lpr=47 pi=[42,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 01:30:44 np0005548731 ceph-osd[80147]: osd.2 pg_epoch: 47 pg[4.14( empty local-lis/les=0/0 n=0 ec=34/16 lis/c=34/34 les/c/f=35/35/0 sis=47) [2] r=0 lpr=47 pi=[34,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 01:30:44 np0005548731 ceph-osd[80147]: osd.2 pg_epoch: 47 pg[2.15( empty local-lis/les=0/0 n=0 ec=40/13 lis/c=42/42 les/c/f=43/43/0 sis=47) [2] r=0 lpr=47 pi=[42,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 01:30:45 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e47 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  6 01:30:47 np0005548731 ceph-osd[80147]: osd.2 pg_epoch: 47 pg[2.b( empty local-lis/les=0/0 n=0 ec=40/13 lis/c=40/40 les/c/f=41/41/0 sis=47) [2] r=0 lpr=47 pi=[40,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 01:30:47 np0005548731 ceph-osd[80147]: osd.2 pg_epoch: 47 pg[2.f( empty local-lis/les=0/0 n=0 ec=40/13 lis/c=40/40 les/c/f=41/41/0 sis=47) [2] r=0 lpr=47 pi=[40,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 01:30:47 np0005548731 ceph-osd[80147]: osd.2 pg_epoch: 47 pg[7.1f( empty local-lis/les=0/0 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=47) [2] r=0 lpr=47 pi=[40,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 01:30:47 np0005548731 ceph-osd[80147]: osd.2 pg_epoch: 47 pg[4.9( empty local-lis/les=0/0 n=0 ec=34/16 lis/c=40/40 les/c/f=41/41/0 sis=47) [2] r=0 lpr=47 pi=[40,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 01:30:47 np0005548731 ceph-osd[80147]: osd.2 pg_epoch: 47 pg[4.8( empty local-lis/les=0/0 n=0 ec=34/16 lis/c=40/40 les/c/f=41/41/0 sis=47) [2] r=0 lpr=47 pi=[40,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 01:30:47 np0005548731 ceph-osd[80147]: osd.2 pg_epoch: 47 pg[7.5( empty local-lis/les=0/0 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=47) [2] r=0 lpr=47 pi=[40,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 01:30:47 np0005548731 ceph-osd[80147]: osd.2 pg_epoch: 47 pg[2.5( empty local-lis/les=0/0 n=0 ec=40/13 lis/c=40/40 les/c/f=41/41/0 sis=47) [2] r=0 lpr=47 pi=[40,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 01:30:47 np0005548731 ceph-osd[80147]: osd.2 pg_epoch: 47 pg[5.4( empty local-lis/les=0/0 n=0 ec=34/18 lis/c=40/40 les/c/f=41/41/0 sis=47) [2] r=0 lpr=47 pi=[40,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 01:30:47 np0005548731 ceph-osd[80147]: osd.2 pg_epoch: 47 pg[4.1f( empty local-lis/les=0/0 n=0 ec=34/16 lis/c=40/40 les/c/f=41/41/0 sis=47) [2] r=0 lpr=47 pi=[40,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 01:30:47 np0005548731 ceph-osd[80147]: osd.2 pg_epoch: 47 pg[3.15( empty local-lis/les=0/0 n=0 ec=32/14 lis/c=40/40 les/c/f=41/41/0 sis=47) [2] r=0 lpr=47 pi=[40,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 01:30:47 np0005548731 ceph-osd[80147]: osd.2 pg_epoch: 47 pg[7.11( empty local-lis/les=0/0 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=47) [2] r=0 lpr=47 pi=[40,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 01:30:47 np0005548731 ceph-osd[80147]: osd.2 pg_epoch: 47 pg[3.e( empty local-lis/les=0/0 n=0 ec=32/14 lis/c=40/40 les/c/f=41/41/0 sis=47) [2] r=0 lpr=47 pi=[40,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 01:30:47 np0005548731 ceph-osd[80147]: osd.2 pg_epoch: 47 pg[2.12( empty local-lis/les=0/0 n=0 ec=40/13 lis/c=40/40 les/c/f=41/41/0 sis=47) [2] r=0 lpr=47 pi=[40,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 01:30:47 np0005548731 ceph-osd[80147]: osd.2 pg_epoch: 47 pg[7.16( empty local-lis/les=0/0 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=47) [2] r=0 lpr=47 pi=[40,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 01:30:47 np0005548731 ceph-osd[80147]: osd.2 pg_epoch: 47 pg[4.15( empty local-lis/les=0/0 n=0 ec=34/16 lis/c=40/40 les/c/f=41/41/0 sis=47) [2] r=0 lpr=47 pi=[40,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 01:30:47 np0005548731 ceph-osd[80147]: osd.2 pg_epoch: 47 pg[3.11( empty local-lis/les=0/0 n=0 ec=32/14 lis/c=40/40 les/c/f=41/41/0 sis=47) [2] r=0 lpr=47 pi=[40,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 01:30:47 np0005548731 ceph-osd[80147]: osd.2 pg_epoch: 47 pg[4.1( empty local-lis/les=0/0 n=0 ec=34/16 lis/c=40/40 les/c/f=41/41/0 sis=47) [2] r=0 lpr=47 pi=[40,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 01:30:47 np0005548731 ceph-osd[80147]: osd.2 pg_epoch: 47 pg[3.9( empty local-lis/les=0/0 n=0 ec=32/14 lis/c=40/40 les/c/f=41/41/0 sis=47) [2] r=0 lpr=47 pi=[40,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 01:30:47 np0005548731 ceph-osd[80147]: osd.2 pg_epoch: 47 pg[5.e( empty local-lis/les=0/0 n=0 ec=34/18 lis/c=40/40 les/c/f=41/41/0 sis=47) [2] r=0 lpr=47 pi=[40,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 01:30:47 np0005548731 ceph-osd[80147]: osd.2 pg_epoch: 47 pg[2.1d( empty local-lis/les=0/0 n=0 ec=40/13 lis/c=40/40 les/c/f=41/41/0 sis=47) [2] r=0 lpr=47 pi=[40,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 01:30:47 np0005548731 ceph-osd[80147]: osd.2 pg_epoch: 47 pg[2.18( empty local-lis/les=0/0 n=0 ec=40/13 lis/c=40/40 les/c/f=41/41/0 sis=47) [2] r=0 lpr=47 pi=[40,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 01:30:47 np0005548731 ceph-osd[80147]: osd.2 pg_epoch: 47 pg[5.1a( empty local-lis/les=0/0 n=0 ec=34/18 lis/c=40/40 les/c/f=41/41/0 sis=47) [2] r=0 lpr=47 pi=[40,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 01:30:47 np0005548731 ceph-osd[80147]: osd.2 pg_epoch: 47 pg[2.1c( empty local-lis/les=0/0 n=0 ec=40/13 lis/c=40/40 les/c/f=41/41/0 sis=47) [2] r=0 lpr=47 pi=[40,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 01:30:47 np0005548731 ceph-osd[80147]: osd.2 pg_epoch: 47 pg[3.1a( empty local-lis/les=0/0 n=0 ec=32/14 lis/c=40/40 les/c/f=41/41/0 sis=47) [2] r=0 lpr=47 pi=[40,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 01:30:47 np0005548731 ceph-osd[80147]: osd.2 pg_epoch: 47 pg[3.1d( empty local-lis/les=0/0 n=0 ec=32/14 lis/c=40/40 les/c/f=41/41/0 sis=47) [2] r=0 lpr=47 pi=[40,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 01:30:47 np0005548731 ceph-mon[77458]: osd.2 [v2:192.168.122.102:6800/3451812493,v1:192.168.122.102:6801/3451812493] boot
Dec  6 01:30:50 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e47 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  6 01:30:50 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e48 e48: 3 total, 3 up, 3 in
Dec  6 01:30:50 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 01:30:51 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"} v 0) v1
Dec  6 01:30:51 np0005548731 ceph-mon[77458]: log_channel(audit) log [INF] : from='client.? 192.168.122.102:0/1619933818' entity='client.rgw.rgw.compute-2.oieczf' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch
Dec  6 01:30:51 np0005548731 ceph-osd[80147]: osd.2 pg_epoch: 48 pg[2.1c( empty local-lis/les=47/48 n=0 ec=40/13 lis/c=40/40 les/c/f=41/41/0 sis=47) [2] r=0 lpr=47 pi=[40,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 01:30:51 np0005548731 ceph-osd[80147]: osd.2 pg_epoch: 48 pg[3.1d( empty local-lis/les=47/48 n=0 ec=32/14 lis/c=40/40 les/c/f=41/41/0 sis=47) [2] r=0 lpr=47 pi=[40,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 01:30:51 np0005548731 ceph-osd[80147]: osd.2 pg_epoch: 48 pg[5.1a( empty local-lis/les=47/48 n=0 ec=34/18 lis/c=40/40 les/c/f=41/41/0 sis=47) [2] r=0 lpr=47 pi=[40,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 01:30:51 np0005548731 ceph-osd[80147]: osd.2 pg_epoch: 48 pg[2.1d( empty local-lis/les=47/48 n=0 ec=40/13 lis/c=40/40 les/c/f=41/41/0 sis=47) [2] r=0 lpr=47 pi=[40,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 01:30:51 np0005548731 ceph-osd[80147]: osd.2 pg_epoch: 48 pg[3.9( empty local-lis/les=47/48 n=0 ec=32/14 lis/c=40/40 les/c/f=41/41/0 sis=47) [2] r=0 lpr=47 pi=[40,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 01:30:51 np0005548731 ceph-osd[80147]: log_channel(cluster) log [DBG] : 3.9 scrub starts
Dec  6 01:30:52 np0005548731 ceph-osd[80147]: osd.2 pg_epoch: 48 pg[4.1f( empty local-lis/les=47/48 n=0 ec=34/16 lis/c=40/40 les/c/f=41/41/0 sis=47) [2] r=0 lpr=47 pi=[40,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 01:30:52 np0005548731 ceph-osd[80147]: osd.2 pg_epoch: 48 pg[5.4( empty local-lis/les=47/48 n=0 ec=34/18 lis/c=40/40 les/c/f=41/41/0 sis=47) [2] r=0 lpr=47 pi=[40,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 01:30:52 np0005548731 ceph-osd[80147]: osd.2 pg_epoch: 48 pg[4.8( empty local-lis/les=47/48 n=0 ec=34/16 lis/c=40/40 les/c/f=41/41/0 sis=47) [2] r=0 lpr=47 pi=[40,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 01:30:52 np0005548731 ceph-osd[80147]: osd.2 pg_epoch: 48 pg[2.a( empty local-lis/les=47/48 n=0 ec=40/13 lis/c=42/42 les/c/f=43/43/0 sis=47) [2] r=0 lpr=47 pi=[42,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 01:30:52 np0005548731 ceph-osd[80147]: osd.2 pg_epoch: 48 pg[4.2( empty local-lis/les=47/48 n=0 ec=34/16 lis/c=34/34 les/c/f=35/35/0 sis=47) [2] r=0 lpr=47 pi=[34,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 01:30:52 np0005548731 ceph-osd[80147]: osd.2 pg_epoch: 48 pg[5.d( empty local-lis/les=47/48 n=0 ec=34/18 lis/c=34/34 les/c/f=35/35/0 sis=47) [2] r=0 lpr=47 pi=[34,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 01:30:52 np0005548731 ceph-osd[80147]: osd.2 pg_epoch: 48 pg[4.1( empty local-lis/les=47/48 n=0 ec=34/16 lis/c=40/40 les/c/f=41/41/0 sis=47) [2] r=0 lpr=47 pi=[40,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 01:30:52 np0005548731 ceph-osd[80147]: osd.2 pg_epoch: 48 pg[5.0( empty local-lis/les=47/48 n=0 ec=18/18 lis/c=34/34 les/c/f=35/35/0 sis=47) [2] r=0 lpr=47 pi=[34,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 01:30:52 np0005548731 ceph-osd[80147]: osd.2 pg_epoch: 48 pg[7.5( empty local-lis/les=47/48 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=47) [2] r=0 lpr=47 pi=[40,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 01:30:52 np0005548731 ceph-osd[80147]: osd.2 pg_epoch: 48 pg[4.6( empty local-lis/les=47/48 n=0 ec=34/16 lis/c=34/34 les/c/f=35/35/0 sis=47) [2] r=0 lpr=47 pi=[34,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 01:30:52 np0005548731 ceph-osd[80147]: osd.2 pg_epoch: 48 pg[3.0( empty local-lis/les=47/48 n=0 ec=14/14 lis/c=32/32 les/c/f=33/33/0 sis=47) [2] r=0 lpr=47 pi=[32,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 01:30:52 np0005548731 ceph-osd[80147]: osd.2 pg_epoch: 48 pg[6.1( empty local-lis/les=47/48 n=0 ec=36/20 lis/c=36/36 les/c/f=39/39/0 sis=47) [2] r=0 lpr=47 pi=[36,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 01:30:52 np0005548731 ceph-osd[80147]: osd.2 pg_epoch: 48 pg[4.3( empty local-lis/les=47/48 n=0 ec=34/16 lis/c=34/34 les/c/f=35/35/0 sis=47) [2] r=0 lpr=47 pi=[34,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 01:30:52 np0005548731 ceph-osd[80147]: osd.2 pg_epoch: 48 pg[4.1d( empty local-lis/les=47/48 n=0 ec=34/16 lis/c=34/34 les/c/f=35/35/0 sis=47) [2] r=0 lpr=47 pi=[34,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 01:30:52 np0005548731 ceph-osd[80147]: osd.2 pg_epoch: 48 pg[3.1a( empty local-lis/les=47/48 n=0 ec=32/14 lis/c=40/40 les/c/f=41/41/0 sis=47) [2] r=0 lpr=47 pi=[40,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 01:30:52 np0005548731 ceph-osd[80147]: osd.2 pg_epoch: 48 pg[4.19( empty local-lis/les=47/48 n=0 ec=34/16 lis/c=34/34 les/c/f=35/35/0 sis=47) [2] r=0 lpr=47 pi=[34,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 01:30:52 np0005548731 ceph-osd[80147]: osd.2 pg_epoch: 48 pg[3.8( empty local-lis/les=47/48 n=0 ec=32/14 lis/c=32/32 les/c/f=33/33/0 sis=47) [2] r=0 lpr=47 pi=[32,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 01:30:52 np0005548731 ceph-osd[80147]: osd.2 pg_epoch: 48 pg[3.1b( empty local-lis/les=47/48 n=0 ec=32/14 lis/c=32/32 les/c/f=33/33/0 sis=47) [2] r=0 lpr=47 pi=[32,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 01:30:52 np0005548731 ceph-osd[80147]: osd.2 pg_epoch: 48 pg[4.1c( empty local-lis/les=47/48 n=0 ec=34/16 lis/c=34/34 les/c/f=35/35/0 sis=47) [2] r=0 lpr=47 pi=[34,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 01:30:52 np0005548731 ceph-osd[80147]: osd.2 pg_epoch: 48 pg[2.1b( empty local-lis/les=47/48 n=0 ec=40/13 lis/c=42/42 les/c/f=43/43/0 sis=47) [2] r=0 lpr=47 pi=[42,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 01:30:52 np0005548731 ceph-osd[80147]: osd.2 pg_epoch: 48 pg[2.d( empty local-lis/les=47/48 n=0 ec=40/13 lis/c=42/42 les/c/f=43/43/0 sis=47) [2] r=0 lpr=47 pi=[42,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 01:30:52 np0005548731 ceph-osd[80147]: osd.2 pg_epoch: 48 pg[5.b( empty local-lis/les=47/48 n=0 ec=34/18 lis/c=34/34 les/c/f=35/35/0 sis=47) [2] r=0 lpr=47 pi=[34,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 01:30:52 np0005548731 ceph-osd[80147]: osd.2 pg_epoch: 48 pg[2.c( empty local-lis/les=47/48 n=0 ec=40/13 lis/c=42/42 les/c/f=43/43/0 sis=47) [2] r=0 lpr=47 pi=[42,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 01:30:52 np0005548731 ceph-osd[80147]: osd.2 pg_epoch: 48 pg[7.1f( empty local-lis/les=47/48 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=47) [2] r=0 lpr=47 pi=[40,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 01:30:52 np0005548731 ceph-osd[80147]: osd.2 pg_epoch: 48 pg[7.a( empty local-lis/les=47/48 n=0 ec=40/22 lis/c=42/42 les/c/f=43/43/0 sis=47) [2] r=0 lpr=47 pi=[42,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 01:30:52 np0005548731 ceph-osd[80147]: osd.2 pg_epoch: 48 pg[5.8( empty local-lis/les=47/48 n=0 ec=34/18 lis/c=34/34 les/c/f=35/35/0 sis=47) [2] r=0 lpr=47 pi=[34,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 01:30:52 np0005548731 ceph-osd[80147]: osd.2 pg_epoch: 48 pg[4.9( empty local-lis/les=47/48 n=0 ec=34/16 lis/c=40/40 les/c/f=41/41/0 sis=47) [2] r=0 lpr=47 pi=[40,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 01:30:52 np0005548731 ceph-osd[80147]: osd.2 pg_epoch: 48 pg[3.e( empty local-lis/les=47/48 n=0 ec=32/14 lis/c=40/40 les/c/f=41/41/0 sis=47) [2] r=0 lpr=47 pi=[40,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 01:30:52 np0005548731 ceph-osd[80147]: osd.2 pg_epoch: 48 pg[2.f( empty local-lis/les=47/48 n=0 ec=40/13 lis/c=40/40 les/c/f=41/41/0 sis=47) [2] r=0 lpr=47 pi=[40,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 01:30:52 np0005548731 ceph-osd[80147]: osd.2 pg_epoch: 48 pg[7.1d( empty local-lis/les=47/48 n=0 ec=40/22 lis/c=42/42 les/c/f=43/43/0 sis=47) [2] r=0 lpr=47 pi=[42,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 01:30:52 np0005548731 ceph-osd[80147]: osd.2 pg_epoch: 48 pg[2.10( empty local-lis/les=47/48 n=0 ec=40/13 lis/c=42/42 les/c/f=43/43/0 sis=47) [2] r=0 lpr=47 pi=[42,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 01:30:52 np0005548731 ceph-osd[80147]: osd.2 pg_epoch: 48 pg[2.18( empty local-lis/les=47/48 n=0 ec=40/13 lis/c=40/40 les/c/f=41/41/0 sis=47) [2] r=0 lpr=47 pi=[40,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 01:30:52 np0005548731 ceph-osd[80147]: osd.2 pg_epoch: 48 pg[5.12( empty local-lis/les=47/48 n=0 ec=34/18 lis/c=34/34 les/c/f=35/35/0 sis=47) [2] r=0 lpr=47 pi=[34,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 01:30:52 np0005548731 ceph-osd[80147]: osd.2 pg_epoch: 48 pg[2.15( empty local-lis/les=47/48 n=0 ec=40/13 lis/c=42/42 les/c/f=43/43/0 sis=47) [2] r=0 lpr=47 pi=[42,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 01:30:52 np0005548731 ceph-osd[80147]: osd.2 pg_epoch: 48 pg[7.14( empty local-lis/les=47/48 n=0 ec=40/22 lis/c=42/42 les/c/f=43/43/0 sis=47) [2] r=0 lpr=47 pi=[42,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 01:30:52 np0005548731 ceph-osd[80147]: osd.2 pg_epoch: 48 pg[5.13( empty local-lis/les=47/48 n=0 ec=34/18 lis/c=34/34 les/c/f=35/35/0 sis=47) [2] r=0 lpr=47 pi=[34,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 01:30:52 np0005548731 ceph-osd[80147]: osd.2 pg_epoch: 48 pg[2.5( empty local-lis/les=47/48 n=0 ec=40/13 lis/c=40/40 les/c/f=41/41/0 sis=47) [2] r=0 lpr=47 pi=[40,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 01:30:52 np0005548731 ceph-osd[80147]: osd.2 pg_epoch: 48 pg[7.11( empty local-lis/les=47/48 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=47) [2] r=0 lpr=47 pi=[40,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 01:30:52 np0005548731 ceph-osd[80147]: osd.2 pg_epoch: 48 pg[3.15( empty local-lis/les=47/48 n=0 ec=32/14 lis/c=40/40 les/c/f=41/41/0 sis=47) [2] r=0 lpr=47 pi=[40,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 01:30:52 np0005548731 ceph-osd[80147]: osd.2 pg_epoch: 48 pg[4.15( empty local-lis/les=47/48 n=0 ec=34/16 lis/c=40/40 les/c/f=41/41/0 sis=47) [2] r=0 lpr=47 pi=[40,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 01:30:52 np0005548731 ceph-osd[80147]: osd.2 pg_epoch: 48 pg[2.13( empty local-lis/les=47/48 n=0 ec=40/13 lis/c=42/42 les/c/f=43/43/0 sis=47) [2] r=0 lpr=47 pi=[42,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 01:30:52 np0005548731 ceph-osd[80147]: osd.2 pg_epoch: 48 pg[2.12( empty local-lis/les=47/48 n=0 ec=40/13 lis/c=40/40 les/c/f=41/41/0 sis=47) [2] r=0 lpr=47 pi=[40,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 01:30:52 np0005548731 ceph-osd[80147]: osd.2 pg_epoch: 48 pg[4.14( empty local-lis/les=47/48 n=0 ec=34/16 lis/c=34/34 les/c/f=35/35/0 sis=47) [2] r=0 lpr=47 pi=[34,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 01:30:52 np0005548731 ceph-osd[80147]: osd.2 pg_epoch: 48 pg[7.16( empty local-lis/les=47/48 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=47) [2] r=0 lpr=47 pi=[40,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 01:30:52 np0005548731 ceph-osd[80147]: osd.2 pg_epoch: 48 pg[2.b( empty local-lis/les=47/48 n=0 ec=40/13 lis/c=40/40 les/c/f=41/41/0 sis=47) [2] r=0 lpr=47 pi=[40,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 01:30:52 np0005548731 ceph-osd[80147]: osd.2 pg_epoch: 48 pg[3.11( empty local-lis/les=47/48 n=0 ec=32/14 lis/c=40/40 les/c/f=41/41/0 sis=47) [2] r=0 lpr=47 pi=[40,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 01:30:52 np0005548731 ceph-osd[80147]: osd.2 pg_epoch: 48 pg[5.e( empty local-lis/les=47/48 n=0 ec=34/18 lis/c=40/40 les/c/f=41/41/0 sis=47) [2] r=0 lpr=47 pi=[40,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 01:30:52 np0005548731 ceph-osd[80147]: log_channel(cluster) log [DBG] : 3.9 scrub ok
Dec  6 01:30:53 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e49 e49: 3 total, 3 up, 3 in
Dec  6 01:30:53 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 01:30:54 np0005548731 ceph-osd[80147]: log_channel(cluster) log [DBG] : 4.1c scrub starts
Dec  6 01:30:54 np0005548731 ceph-osd[80147]: log_channel(cluster) log [DBG] : 4.1c scrub ok
Dec  6 01:30:55 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e49 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  6 01:30:55 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e50 e50: 3 total, 3 up, 3 in
Dec  6 01:30:55 np0005548731 ceph-mon[77458]: from='client.? 192.168.122.102:0/1619933818' entity='client.rgw.rgw.compute-2.oieczf' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch
Dec  6 01:30:55 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 01:30:55 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-1.dmyhav", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Dec  6 01:30:55 np0005548731 ceph-mon[77458]: from='client.? ' entity='client.rgw.rgw.compute-2.oieczf' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch
Dec  6 01:30:55 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-1.dmyhav", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Dec  6 01:30:55 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 01:30:57 np0005548731 ceph-mon[77458]: Deploying daemon rgw.rgw.compute-1.dmyhav on compute-1
Dec  6 01:30:57 np0005548731 ceph-mon[77458]: Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Dec  6 01:30:57 np0005548731 ceph-mon[77458]: from='client.? ' entity='client.rgw.rgw.compute-2.oieczf' cmd='[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]': finished
Dec  6 01:30:57 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e51 e51: 3 total, 3 up, 3 in
Dec  6 01:30:57 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"} v 0) v1
Dec  6 01:30:57 np0005548731 ceph-mon[77458]: log_channel(audit) log [INF] : from='client.? 192.168.122.102:0/1619933818' entity='client.rgw.rgw.compute-2.oieczf' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Dec  6 01:30:58 np0005548731 ceph-mon[77458]: Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Dec  6 01:30:58 np0005548731 ceph-mon[77458]: from='client.? ' entity='client.rgw.rgw.compute-2.oieczf' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Dec  6 01:30:58 np0005548731 ceph-mon[77458]: from='client.? 192.168.122.102:0/1619933818' entity='client.rgw.rgw.compute-2.oieczf' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Dec  6 01:30:58 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 01:30:58 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e52 e52: 3 total, 3 up, 3 in
Dec  6 01:30:58 np0005548731 ceph-osd[80147]: log_channel(cluster) log [DBG] : 4.19 scrub starts
Dec  6 01:30:58 np0005548731 ceph-osd[80147]: log_channel(cluster) log [DBG] : 4.19 scrub ok
Dec  6 01:30:59 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 01:30:59 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 01:30:59 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.wqlami", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Dec  6 01:30:59 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.wqlami", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Dec  6 01:30:59 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 01:30:59 np0005548731 ceph-mon[77458]: from='client.? ' entity='client.rgw.rgw.compute-2.oieczf' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Dec  6 01:30:59 np0005548731 ceph-mon[77458]: Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Dec  6 01:30:59 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e53 e53: 3 total, 3 up, 3 in
Dec  6 01:30:59 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"} v 0) v1
Dec  6 01:30:59 np0005548731 ceph-mon[77458]: log_channel(audit) log [INF] : from='client.? 192.168.122.102:0/1619933818' entity='client.rgw.rgw.compute-2.oieczf' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Dec  6 01:31:00 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e53 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  6 01:31:01 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e54 e54: 3 total, 3 up, 3 in
Dec  6 01:31:01 np0005548731 ceph-mon[77458]: Deploying daemon rgw.rgw.compute-0.wqlami on compute-0
Dec  6 01:31:01 np0005548731 ceph-mon[77458]: from='client.? 192.168.122.100:0/1768129791' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Dec  6 01:31:01 np0005548731 ceph-mon[77458]: from='client.? ' entity='client.rgw.rgw.compute-1.dmyhav' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Dec  6 01:31:01 np0005548731 ceph-mon[77458]: from='client.? 192.168.122.101:0/558176623' entity='client.rgw.rgw.compute-1.dmyhav' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Dec  6 01:31:01 np0005548731 ceph-mon[77458]: from='client.? ' entity='client.rgw.rgw.compute-2.oieczf' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Dec  6 01:31:01 np0005548731 ceph-mon[77458]: from='client.? 192.168.122.102:0/1619933818' entity='client.rgw.rgw.compute-2.oieczf' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Dec  6 01:31:01 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 01:31:02 np0005548731 ceph-osd[80147]: log_channel(cluster) log [DBG] : 4.1d scrub starts
Dec  6 01:31:02 np0005548731 ceph-osd[80147]: log_channel(cluster) log [DBG] : 4.1d scrub ok
Dec  6 01:31:04 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 01:31:04 np0005548731 ceph-mon[77458]: from='client.? 192.168.122.100:0/1768129791' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Dec  6 01:31:04 np0005548731 ceph-mon[77458]: from='client.? ' entity='client.rgw.rgw.compute-1.dmyhav' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Dec  6 01:31:04 np0005548731 ceph-mon[77458]: from='client.? ' entity='client.rgw.rgw.compute-2.oieczf' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Dec  6 01:31:04 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e55 e55: 3 total, 3 up, 3 in
Dec  6 01:31:05 np0005548731 radosgw[81136]: LDAP not started since no server URIs were provided in the configuration.
Dec  6 01:31:05 np0005548731 ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-rgw-rgw-compute-2-oieczf[81132]: 2025-12-06T06:31:05.345+0000 7fc71b7ac940 -1 LDAP not started since no server URIs were provided in the configuration.
Dec  6 01:31:05 np0005548731 radosgw[81136]: framework: beast
Dec  6 01:31:05 np0005548731 radosgw[81136]: framework conf key: ssl_certificate, val: config://rgw/cert/$realm/$zone.crt
Dec  6 01:31:05 np0005548731 radosgw[81136]: framework conf key: ssl_private_key, val: config://rgw/cert/$realm/$zone.key
Dec  6 01:31:05 np0005548731 radosgw[81136]: starting handler: beast
Dec  6 01:31:05 np0005548731 radosgw[81136]: set uid:gid to 167:167 (ceph:ceph)
Dec  6 01:31:05 np0005548731 radosgw[81136]: mgrc service_daemon_register rgw.24148 metadata {arch=x86_64,ceph_release=reef,ceph_version=ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable),ceph_version_short=18.2.7,container_hostname=compute-2,container_image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0,cpu=AMD EPYC-Rome Processor,distro=centos,distro_description=CentOS Stream 9,distro_version=9,frontend_config#0=beast endpoint=192.168.122.102:8082,frontend_type#0=beast,hostname=compute-2,id=rgw.compute-2.oieczf,kernel_description=#1 SMP PREEMPT_DYNAMIC Fri Nov 28 14:01:17 UTC 2025,kernel_version=5.14.0-645.el9.x86_64,mem_swap_kb=1048572,mem_total_kb=7864320,num_handles=1,os=Linux,pid=2,realm_id=,realm_name=,zone_id=22f3b99e-039b-412d-b524-6d79a2ee4dad,zone_name=default,zonegroup_id=3605fdfe-bab9-40f4-83c5-5b52927c1749,zonegroup_name=default}
Dec  6 01:31:05 np0005548731 ceph-osd[80147]: log_channel(cluster) log [DBG] : 2.1b scrub starts
Dec  6 01:31:05 np0005548731 ceph-osd[80147]: log_channel(cluster) log [DBG] : 2.1b scrub ok
Dec  6 01:31:06 np0005548731 ceph-mon[77458]: Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Dec  6 01:31:06 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 01:31:06 np0005548731 ceph-mon[77458]: Saving service rgw.rgw spec with placement compute-0;compute-1;compute-2
Dec  6 01:31:06 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 01:31:06 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 01:31:06 np0005548731 ceph-mon[77458]: from='client.? 192.168.122.100:0/2779676723' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Dec  6 01:31:06 np0005548731 ceph-mon[77458]: from='client.? 192.168.122.101:0/758001210' entity='client.rgw.rgw.compute-1.dmyhav' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Dec  6 01:31:06 np0005548731 ceph-mon[77458]: from='client.? ' entity='client.rgw.rgw.compute-1.dmyhav' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Dec  6 01:31:06 np0005548731 ceph-mon[77458]: from='client.? 192.168.122.100:0/2195976802' entity='client.rgw.rgw.compute-0.wqlami' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Dec  6 01:31:06 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e55 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  6 01:31:06 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e56 e56: 3 total, 3 up, 3 in
Dec  6 01:31:07 np0005548731 podman[81892]: 2025-12-06 06:31:07.583599631 +0000 UTC m=+0.068387960 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  6 01:31:07 np0005548731 ceph-osd[80147]: log_channel(cluster) log [DBG] : 6.1 scrub starts
Dec  6 01:31:09 np0005548731 ceph-osd[80147]: log_channel(cluster) log [DBG] : 6.1 scrub ok
Dec  6 01:31:09 np0005548731 podman[81892]: 2025-12-06 06:31:09.017629305 +0000 UTC m=+1.502417604 container create e7e06d3ae8189a2762ac8d66ac9f43318ac68e0c79d5a358b5e6457edeffa26d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_jones, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Dec  6 01:31:09 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 01:31:09 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-2.tjfgow", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Dec  6 01:31:09 np0005548731 systemd[1]: Started libpod-conmon-e7e06d3ae8189a2762ac8d66ac9f43318ac68e0c79d5a358b5e6457edeffa26d.scope.
Dec  6 01:31:09 np0005548731 systemd[1]: Started libcrun container.
Dec  6 01:31:10 np0005548731 podman[81892]: 2025-12-06 06:31:10.335378585 +0000 UTC m=+2.820166904 container init e7e06d3ae8189a2762ac8d66ac9f43318ac68e0c79d5a358b5e6457edeffa26d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_jones, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True)
Dec  6 01:31:10 np0005548731 podman[81892]: 2025-12-06 06:31:10.345638296 +0000 UTC m=+2.830426615 container start e7e06d3ae8189a2762ac8d66ac9f43318ac68e0c79d5a358b5e6457edeffa26d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_jones, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  6 01:31:10 np0005548731 unruffled_jones[81907]: 167 167
Dec  6 01:31:10 np0005548731 systemd[1]: libpod-e7e06d3ae8189a2762ac8d66ac9f43318ac68e0c79d5a358b5e6457edeffa26d.scope: Deactivated successfully.
Dec  6 01:31:10 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e57 e57: 3 total, 3 up, 3 in
Dec  6 01:31:10 np0005548731 podman[81892]: 2025-12-06 06:31:10.50841837 +0000 UTC m=+2.993206689 container attach e7e06d3ae8189a2762ac8d66ac9f43318ac68e0c79d5a358b5e6457edeffa26d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_jones, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Dec  6 01:31:10 np0005548731 podman[81892]: 2025-12-06 06:31:10.509798866 +0000 UTC m=+2.994587165 container died e7e06d3ae8189a2762ac8d66ac9f43318ac68e0c79d5a358b5e6457edeffa26d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_jones, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  6 01:31:11 np0005548731 ceph-mon[77458]: Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Dec  6 01:31:11 np0005548731 ceph-mon[77458]: from='client.? 192.168.122.100:0/2779676723' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Dec  6 01:31:11 np0005548731 ceph-mon[77458]: from='client.? ' entity='client.rgw.rgw.compute-1.dmyhav' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Dec  6 01:31:11 np0005548731 ceph-mon[77458]: from='client.? 192.168.122.100:0/2195976802' entity='client.rgw.rgw.compute-0.wqlami' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Dec  6 01:31:11 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-2.tjfgow", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Dec  6 01:31:11 np0005548731 ceph-mon[77458]: from='client.? 192.168.122.100:0/2779676723' entity='client.admin' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Dec  6 01:31:11 np0005548731 ceph-mon[77458]: from='client.? 192.168.122.100:0/2195976802' entity='client.rgw.rgw.compute-0.wqlami' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Dec  6 01:31:11 np0005548731 ceph-mon[77458]: Deploying daemon mds.cephfs.compute-2.tjfgow on compute-2
Dec  6 01:31:11 np0005548731 systemd[1]: var-lib-containers-storage-overlay-4483306150bde5cc7a94611d6de21f1b16599694297279f0dc764b36172a52ba-merged.mount: Deactivated successfully.
Dec  6 01:31:11 np0005548731 ceph-osd[80147]: log_channel(cluster) log [DBG] : 5.0 deep-scrub starts
Dec  6 01:31:11 np0005548731 ceph-osd[80147]: log_channel(cluster) log [DBG] : 5.0 deep-scrub ok
Dec  6 01:31:12 np0005548731 ceph-osd[80147]: log_channel(cluster) log [DBG] : 3.0 scrub starts
Dec  6 01:31:13 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e57 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  6 01:31:13 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e58 e58: 3 total, 3 up, 3 in
Dec  6 01:31:13 np0005548731 ceph-osd[80147]: log_channel(cluster) log [DBG] : 3.0 scrub ok
Dec  6 01:31:13 np0005548731 ceph-osd[80147]: log_channel(cluster) log [DBG] : 5.d scrub starts
Dec  6 01:31:14 np0005548731 podman[81892]: 2025-12-06 06:31:14.047745456 +0000 UTC m=+6.532533755 container remove e7e06d3ae8189a2762ac8d66ac9f43318ac68e0c79d5a358b5e6457edeffa26d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_jones, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  6 01:31:14 np0005548731 ceph-osd[80147]: log_channel(cluster) log [DBG] : 5.d scrub ok
Dec  6 01:31:14 np0005548731 ceph-mon[77458]: from='client.? 192.168.122.101:0/758001210' entity='client.rgw.rgw.compute-1.dmyhav' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Dec  6 01:31:14 np0005548731 ceph-mon[77458]: Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Dec  6 01:31:14 np0005548731 ceph-mon[77458]: from='client.? 192.168.122.100:0/2779676723' entity='client.admin' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Dec  6 01:31:14 np0005548731 ceph-mon[77458]: from='client.? 192.168.122.100:0/2195976802' entity='client.rgw.rgw.compute-0.wqlami' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Dec  6 01:31:14 np0005548731 ceph-mon[77458]: from='client.? ' entity='client.rgw.rgw.compute-1.dmyhav' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Dec  6 01:31:14 np0005548731 systemd[1]: libpod-conmon-e7e06d3ae8189a2762ac8d66ac9f43318ac68e0c79d5a358b5e6457edeffa26d.scope: Deactivated successfully.
Dec  6 01:31:14 np0005548731 systemd[1]: Reloading.
Dec  6 01:31:14 np0005548731 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  6 01:31:14 np0005548731 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  6 01:31:14 np0005548731 systemd[1]: Reloading.
Dec  6 01:31:14 np0005548731 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  6 01:31:14 np0005548731 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  6 01:31:14 np0005548731 ceph-osd[80147]: log_channel(cluster) log [DBG] : 5.b scrub starts
Dec  6 01:31:14 np0005548731 ceph-osd[80147]: log_channel(cluster) log [DBG] : 5.b scrub ok
Dec  6 01:31:14 np0005548731 systemd[1]: Starting Ceph mds.cephfs.compute-2.tjfgow for 40a1bae4-cf76-5610-8dab-c75116dfe0bb...
Dec  6 01:31:15 np0005548731 podman[82055]: 2025-12-06 06:31:15.062200298 +0000 UTC m=+0.023444881 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  6 01:31:16 np0005548731 ceph-mon[77458]: from='client.? ' entity='client.rgw.rgw.compute-1.dmyhav' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Dec  6 01:31:16 np0005548731 podman[82055]: 2025-12-06 06:31:16.02830123 +0000 UTC m=+0.989545793 container create 2b70c30c81d512a5b560f652d457a40ac4aa89619e4a41c2135f5e71cb48af7e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-mds-cephfs-compute-2-tjfgow, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Dec  6 01:31:16 np0005548731 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/79bac75828200f41cc10a84a4500e248bcb7a14e2097080a1782d41bae4e1baa/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  6 01:31:16 np0005548731 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/79bac75828200f41cc10a84a4500e248bcb7a14e2097080a1782d41bae4e1baa/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  6 01:31:16 np0005548731 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/79bac75828200f41cc10a84a4500e248bcb7a14e2097080a1782d41bae4e1baa/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  6 01:31:16 np0005548731 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/79bac75828200f41cc10a84a4500e248bcb7a14e2097080a1782d41bae4e1baa/merged/var/lib/ceph/mds/ceph-cephfs.compute-2.tjfgow supports timestamps until 2038 (0x7fffffff)
Dec  6 01:31:16 np0005548731 podman[82055]: 2025-12-06 06:31:16.389470318 +0000 UTC m=+1.350714901 container init 2b70c30c81d512a5b560f652d457a40ac4aa89619e4a41c2135f5e71cb48af7e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-mds-cephfs-compute-2-tjfgow, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  6 01:31:16 np0005548731 podman[82055]: 2025-12-06 06:31:16.395633371 +0000 UTC m=+1.356877934 container start 2b70c30c81d512a5b560f652d457a40ac4aa89619e4a41c2135f5e71cb48af7e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-mds-cephfs-compute-2-tjfgow, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Dec  6 01:31:16 np0005548731 bash[82055]: 2b70c30c81d512a5b560f652d457a40ac4aa89619e4a41c2135f5e71cb48af7e
Dec  6 01:31:16 np0005548731 ceph-mds[82074]: set uid:gid to 167:167 (ceph:ceph)
Dec  6 01:31:16 np0005548731 ceph-mds[82074]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mds, pid 2
Dec  6 01:31:16 np0005548731 ceph-mds[82074]: main not setting numa affinity
Dec  6 01:31:16 np0005548731 ceph-mds[82074]: pidfile_write: ignore empty --pid-file
Dec  6 01:31:16 np0005548731 ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-mds-cephfs-compute-2-tjfgow[82070]: starting mds.cephfs.compute-2.tjfgow at 
Dec  6 01:31:16 np0005548731 systemd[1]: Started Ceph mds.cephfs.compute-2.tjfgow for 40a1bae4-cf76-5610-8dab-c75116dfe0bb.
Dec  6 01:31:16 np0005548731 ceph-mds[82074]: mds.cephfs.compute-2.tjfgow Updating MDS map to version 2 from mon.1
Dec  6 01:31:16 np0005548731 ceph-osd[80147]: log_channel(cluster) log [DBG] : 2.a scrub starts
Dec  6 01:31:16 np0005548731 ceph-osd[80147]: log_channel(cluster) log [DBG] : 2.a scrub ok
Dec  6 01:31:17 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).mds e3 new map
Dec  6 01:31:17 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).mds e3 print_map#012e3#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0112#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112025-12-06T06:29:04.228355+0000#012modified#0112025-12-06T06:29:04.228395+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}#012max_mds#0111#012in#011#012up#011{}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0110#012 #012 #012Standby daemons:#012 #012[mds.cephfs.compute-2.tjfgow{-1:24157} state up:standby seq 1 addr [v2:192.168.122.102:6804/1638633036,v1:192.168.122.102:6805/1638633036] compat {c=[1],r=[1],i=[7ff]}]
Dec  6 01:31:17 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 01:31:17 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 01:31:17 np0005548731 ceph-mds[82074]: mds.cephfs.compute-2.tjfgow Updating MDS map to version 3 from mon.1
Dec  6 01:31:17 np0005548731 ceph-mds[82074]: mds.cephfs.compute-2.tjfgow Monitors have assigned me to become a standby.
Dec  6 01:31:17 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).mds e4 new map
Dec  6 01:31:17 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).mds e4 print_map#012e4#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0114#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112025-12-06T06:29:04.228355+0000#012modified#0112025-12-06T06:31:17.652602+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}#012max_mds#0111#012in#0110#012up#011{0=24157}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0110#012[mds.cephfs.compute-2.tjfgow{0:24157} state up:creating seq 1 addr [v2:192.168.122.102:6804/1638633036,v1:192.168.122.102:6805/1638633036] compat {c=[1],r=[1],i=[7ff]}]#012 #012 
Dec  6 01:31:17 np0005548731 ceph-mds[82074]: mds.cephfs.compute-2.tjfgow Updating MDS map to version 4 from mon.1
Dec  6 01:31:17 np0005548731 ceph-mds[82074]: mds.0.4 handle_mds_map i am now mds.0.4
Dec  6 01:31:17 np0005548731 ceph-mds[82074]: mds.0.4 handle_mds_map state change up:standby --> up:creating
Dec  6 01:31:17 np0005548731 ceph-mds[82074]: mds.0.cache creating system inode with ino:0x1
Dec  6 01:31:17 np0005548731 ceph-mds[82074]: mds.0.cache creating system inode with ino:0x100
Dec  6 01:31:17 np0005548731 ceph-mds[82074]: mds.0.cache creating system inode with ino:0x600
Dec  6 01:31:17 np0005548731 ceph-mds[82074]: mds.0.cache creating system inode with ino:0x601
Dec  6 01:31:17 np0005548731 ceph-mds[82074]: mds.0.cache creating system inode with ino:0x602
Dec  6 01:31:17 np0005548731 ceph-mds[82074]: mds.0.cache creating system inode with ino:0x603
Dec  6 01:31:17 np0005548731 ceph-mds[82074]: mds.0.cache creating system inode with ino:0x604
Dec  6 01:31:17 np0005548731 ceph-mds[82074]: mds.0.cache creating system inode with ino:0x605
Dec  6 01:31:17 np0005548731 ceph-mds[82074]: mds.0.cache creating system inode with ino:0x606
Dec  6 01:31:17 np0005548731 ceph-mds[82074]: mds.0.cache creating system inode with ino:0x607
Dec  6 01:31:17 np0005548731 ceph-mds[82074]: mds.0.cache creating system inode with ino:0x608
Dec  6 01:31:17 np0005548731 ceph-mds[82074]: mds.0.cache creating system inode with ino:0x609
Dec  6 01:31:17 np0005548731 ceph-mds[82074]: mds.0.4 creating_done
Dec  6 01:31:18 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e58 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  6 01:31:18 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 01:31:18 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.qqwnku", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Dec  6 01:31:18 np0005548731 ceph-mon[77458]: daemon mds.cephfs.compute-2.tjfgow assigned to filesystem cephfs as rank 0 (now has 1 ranks)
Dec  6 01:31:18 np0005548731 ceph-mon[77458]: Health check cleared: MDS_ALL_DOWN (was: 1 filesystem is offline)
Dec  6 01:31:18 np0005548731 ceph-mon[77458]: Health check cleared: MDS_UP_LESS_THAN_MAX (was: 1 filesystem is online with fewer MDS than max_mds)
Dec  6 01:31:18 np0005548731 ceph-mon[77458]: Cluster is now healthy
Dec  6 01:31:18 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.qqwnku", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Dec  6 01:31:18 np0005548731 ceph-mon[77458]: Deploying daemon mds.cephfs.compute-0.qqwnku on compute-0
Dec  6 01:31:18 np0005548731 ceph-mon[77458]: daemon mds.cephfs.compute-2.tjfgow is now active in filesystem cephfs as rank 0
Dec  6 01:31:18 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]: dispatch
Dec  6 01:31:18 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).mds e5 new map
Dec  6 01:31:18 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).mds e5 print_map#012e5#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0115#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112025-12-06T06:29:04.228355+0000#012modified#0112025-12-06T06:31:18.697695+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}#012max_mds#0111#012in#0110#012up#011{0=24157}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0110#012[mds.cephfs.compute-2.tjfgow{0:24157} state up:active seq 2 addr [v2:192.168.122.102:6804/1638633036,v1:192.168.122.102:6805/1638633036] compat {c=[1],r=[1],i=[7ff]}]#012 #012 
Dec  6 01:31:18 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e59 e59: 3 total, 3 up, 3 in
Dec  6 01:31:18 np0005548731 ceph-mds[82074]: mds.cephfs.compute-2.tjfgow Updating MDS map to version 5 from mon.1
Dec  6 01:31:18 np0005548731 ceph-mds[82074]: mds.0.4 handle_mds_map i am now mds.0.4
Dec  6 01:31:18 np0005548731 ceph-mds[82074]: mds.0.4 handle_mds_map state change up:creating --> up:active
Dec  6 01:31:18 np0005548731 ceph-mds[82074]: mds.0.4 recovery_done -- successful recovery!
Dec  6 01:31:18 np0005548731 ceph-mds[82074]: mds.0.4 active_start
Dec  6 01:31:20 np0005548731 ceph-osd[80147]: log_channel(cluster) log [DBG] : 2.d scrub starts
Dec  6 01:31:20 np0005548731 ceph-osd[80147]: log_channel(cluster) log [DBG] : 2.d scrub ok
Dec  6 01:31:21 np0005548731 ceph-osd[80147]: log_channel(cluster) log [DBG] : 2.c scrub starts
Dec  6 01:31:21 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e60 e60: 3 total, 3 up, 3 in
Dec  6 01:31:21 np0005548731 ceph-osd[80147]: log_channel(cluster) log [DBG] : 2.c scrub ok
Dec  6 01:31:21 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]': finished
Dec  6 01:31:21 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]: dispatch
Dec  6 01:31:21 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 01:31:21 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).mds e6 new map
Dec  6 01:31:21 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).mds e6 print_map#012e6#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0115#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112025-12-06T06:29:04.228355+0000#012modified#0112025-12-06T06:31:18.697695+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}#012max_mds#0111#012in#0110#012up#011{0=24157}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0110#012[mds.cephfs.compute-2.tjfgow{0:24157} state up:active seq 2 addr [v2:192.168.122.102:6804/1638633036,v1:192.168.122.102:6805/1638633036] compat {c=[1],r=[1],i=[7ff]}]#012 #012 #012Standby daemons:#012 #012[mds.cephfs.compute-0.qqwnku{-1:14385} state up:standby seq 1 addr [v2:192.168.122.100:6806/3519893155,v1:192.168.122.100:6807/3519893155] compat {c=[1],r=[1],i=[7ff]}]
Dec  6 01:31:22 np0005548731 ceph-osd[80147]: log_channel(cluster) log [DBG] : 7.1d deep-scrub starts
Dec  6 01:31:22 np0005548731 ceph-osd[80147]: log_channel(cluster) log [DBG] : 7.1d deep-scrub ok
Dec  6 01:31:23 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e60 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  6 01:31:23 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).mds e7 new map
Dec  6 01:31:23 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).mds e7 print_map#012e7#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0115#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112025-12-06T06:29:04.228355+0000#012modified#0112025-12-06T06:31:18.697695+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}#012max_mds#0111#012in#0110#012up#011{0=24157}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0111#012[mds.cephfs.compute-2.tjfgow{0:24157} state up:active seq 2 addr [v2:192.168.122.102:6804/1638633036,v1:192.168.122.102:6805/1638633036] compat {c=[1],r=[1],i=[7ff]}]#012 #012 #012Standby daemons:#012 #012[mds.cephfs.compute-0.qqwnku{-1:14385} state up:standby seq 1 addr [v2:192.168.122.100:6806/3519893155,v1:192.168.122.100:6807/3519893155] compat {c=[1],r=[1],i=[7ff]}]
Dec  6 01:31:23 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e61 e61: 3 total, 3 up, 3 in
Dec  6 01:31:24 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 01:31:24 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec  6 01:31:24 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]': finished
Dec  6 01:31:24 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]: dispatch
Dec  6 01:31:24 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 01:31:24 np0005548731 ceph-osd[80147]: log_channel(cluster) log [DBG] : 7.a deep-scrub starts
Dec  6 01:31:24 np0005548731 ceph-osd[80147]: log_channel(cluster) log [DBG] : 7.a deep-scrub ok
Dec  6 01:31:25 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e62 e62: 3 total, 3 up, 3 in
Dec  6 01:31:26 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec  6 01:31:26 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec  6 01:31:26 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]': finished
Dec  6 01:31:26 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]': finished
Dec  6 01:31:26 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 01:31:26 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-1.vsxbzt", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Dec  6 01:31:26 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]: dispatch
Dec  6 01:31:26 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-1.vsxbzt", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Dec  6 01:31:26 np0005548731 ceph-mon[77458]: Deploying daemon mds.cephfs.compute-1.vsxbzt on compute-1
Dec  6 01:31:26 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec  6 01:31:26 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec  6 01:31:26 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e63 e63: 3 total, 3 up, 3 in
Dec  6 01:31:27 np0005548731 ceph-osd[80147]: log_channel(cluster) log [DBG] : 5.12 scrub starts
Dec  6 01:31:27 np0005548731 ceph-osd[80147]: log_channel(cluster) log [DBG] : 5.12 scrub ok
Dec  6 01:31:28 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  6 01:31:29 np0005548731 ceph-osd[80147]: log_channel(cluster) log [DBG] : 2.10 scrub starts
Dec  6 01:31:29 np0005548731 ceph-osd[80147]: log_channel(cluster) log [DBG] : 2.10 scrub ok
Dec  6 01:31:31 np0005548731 ceph-osd[80147]: log_channel(cluster) log [DBG] : 7.14 scrub starts
Dec  6 01:31:33 np0005548731 ceph-osd[80147]: log_channel(cluster) log [DBG] : 7.14 scrub ok
Dec  6 01:31:33 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e64 e64: 3 total, 3 up, 3 in
Dec  6 01:31:33 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]': finished
Dec  6 01:31:33 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]': finished
Dec  6 01:31:33 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]': finished
Dec  6 01:31:33 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]': finished
Dec  6 01:31:33 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]': finished
Dec  6 01:31:33 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec  6 01:31:33 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]': finished
Dec  6 01:31:33 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e64 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  6 01:31:33 np0005548731 ceph-osd[80147]: log_channel(cluster) log [DBG] : 5.13 scrub starts
Dec  6 01:31:33 np0005548731 ceph-osd[80147]: log_channel(cluster) log [DBG] : 5.13 scrub ok
Dec  6 01:31:35 np0005548731 systemd-logind[794]: New session 33 of user zuul.
Dec  6 01:31:35 np0005548731 systemd[1]: Started Session 33 of User zuul.
Dec  6 01:31:36 np0005548731 ceph-osd[80147]: log_channel(cluster) log [DBG] : 2.13 scrub starts
Dec  6 01:31:36 np0005548731 ceph-osd[80147]: log_channel(cluster) log [DBG] : 2.13 scrub ok
Dec  6 01:31:36 np0005548731 python3.9[82257]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  6 01:31:37 np0005548731 ceph-mds[82074]: mds.beacon.cephfs.compute-2.tjfgow missed beacon ack from the monitors
Dec  6 01:31:38 np0005548731 python3.9[82471]: ansible-ansible.legacy.command Invoked with _raw_params=set -euxo pipefail#012pushd /var/tmp#012curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz#012pushd repo-setup-main#012python3 -m venv ./venv#012PBR_VERSION=0.0.0 ./venv/bin/pip install ./#012./venv/bin/repo-setup current-podified -b antelope#012popd#012rm -rf repo-setup-main#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  6 01:31:38 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e64 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  6 01:31:39 np0005548731 ceph-osd[80147]: log_channel(cluster) log [DBG] : 4.14 deep-scrub starts
Dec  6 01:31:39 np0005548731 ceph-osd[80147]: log_channel(cluster) log [DBG] : 4.14 deep-scrub ok
Dec  6 01:31:40 np0005548731 ceph-osd[80147]: log_channel(cluster) log [DBG] : 2.15 scrub starts
Dec  6 01:31:40 np0005548731 ceph-osd[80147]: log_channel(cluster) log [DBG] : 2.15 scrub ok
Dec  6 01:31:42 np0005548731 ceph-osd[80147]: log_channel(cluster) log [DBG] : 2.b scrub starts
Dec  6 01:31:42 np0005548731 ceph-osd[80147]: log_channel(cluster) log [DBG] : 2.b scrub ok
Dec  6 01:31:43 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e64 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  6 01:31:44 np0005548731 ceph-osd[80147]: log_channel(cluster) log [DBG] : 4.9 scrub starts
Dec  6 01:31:44 np0005548731 ceph-osd[80147]: log_channel(cluster) log [DBG] : 4.9 scrub ok
Dec  6 01:31:45 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 01:31:45 np0005548731 ceph-osd[80147]: log_channel(cluster) log [DBG] : 7.1f deep-scrub starts
Dec  6 01:31:45 np0005548731 ceph-osd[80147]: log_channel(cluster) log [DBG] : 7.1f deep-scrub ok
Dec  6 01:31:47 np0005548731 ceph-osd[80147]: log_channel(cluster) log [DBG] : 2.f scrub starts
Dec  6 01:31:47 np0005548731 ceph-osd[80147]: log_channel(cluster) log [DBG] : 2.f scrub ok
Dec  6 01:31:48 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e64 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  6 01:31:49 np0005548731 ceph-mds[82074]: mds.beacon.cephfs.compute-2.tjfgow missed beacon ack from the monitors
Dec  6 01:31:50 np0005548731 ceph-osd[80147]: log_channel(cluster) log [DBG] : 7.5 scrub starts
Dec  6 01:31:50 np0005548731 ceph-osd[80147]: log_channel(cluster) log [DBG] : 7.5 scrub ok
Dec  6 01:31:53 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e64 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  6 01:31:54 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e65 e65: 3 total, 3 up, 3 in
Dec  6 01:31:54 np0005548731 ceph-osd[80147]: osd.2 pg_epoch: 65 pg[11.17( empty local-lis/les=0/0 n=0 ec=63/55 lis/c=63/63 les/c/f=64/64/0 sis=65) [2] r=0 lpr=65 pi=[63,65)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 01:31:54 np0005548731 ceph-osd[80147]: osd.2 pg_epoch: 65 pg[8.16( empty local-lis/les=0/0 n=0 ec=61/48 lis/c=61/61 les/c/f=62/62/0 sis=65) [2] r=0 lpr=65 pi=[61,65)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 01:31:54 np0005548731 ceph-osd[80147]: osd.2 pg_epoch: 65 pg[8.2( empty local-lis/les=0/0 n=0 ec=61/48 lis/c=61/61 les/c/f=62/62/0 sis=65) [2] r=0 lpr=65 pi=[61,65)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 01:31:54 np0005548731 ceph-osd[80147]: osd.2 pg_epoch: 65 pg[11.a( empty local-lis/les=0/0 n=0 ec=63/55 lis/c=63/63 les/c/f=64/64/0 sis=65) [2] r=0 lpr=65 pi=[63,65)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 01:31:54 np0005548731 ceph-osd[80147]: osd.2 pg_epoch: 65 pg[8.9( empty local-lis/les=0/0 n=0 ec=61/48 lis/c=61/61 les/c/f=62/62/0 sis=65) [2] r=0 lpr=65 pi=[61,65)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 01:31:54 np0005548731 ceph-osd[80147]: osd.2 pg_epoch: 65 pg[11.3( empty local-lis/les=0/0 n=0 ec=63/55 lis/c=63/63 les/c/f=64/64/0 sis=65) [2] r=0 lpr=65 pi=[63,65)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 01:31:54 np0005548731 ceph-osd[80147]: osd.2 pg_epoch: 65 pg[11.e( empty local-lis/les=0/0 n=0 ec=63/55 lis/c=63/63 les/c/f=64/64/0 sis=65) [2] r=0 lpr=65 pi=[63,65)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 01:31:54 np0005548731 ceph-osd[80147]: osd.2 pg_epoch: 65 pg[8.d( empty local-lis/les=0/0 n=0 ec=61/48 lis/c=61/61 les/c/f=62/62/0 sis=65) [2] r=0 lpr=65 pi=[61,65)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 01:31:54 np0005548731 ceph-osd[80147]: osd.2 pg_epoch: 65 pg[8.a( empty local-lis/les=0/0 n=0 ec=61/48 lis/c=61/61 les/c/f=62/62/0 sis=65) [2] r=0 lpr=65 pi=[61,65)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 01:31:54 np0005548731 ceph-osd[80147]: osd.2 pg_epoch: 65 pg[8.f( empty local-lis/les=0/0 n=0 ec=61/48 lis/c=61/61 les/c/f=62/62/0 sis=65) [2] r=0 lpr=65 pi=[61,65)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 01:31:54 np0005548731 ceph-osd[80147]: osd.2 pg_epoch: 65 pg[11.8( empty local-lis/les=0/0 n=0 ec=63/55 lis/c=63/63 les/c/f=64/64/0 sis=65) [2] r=0 lpr=65 pi=[63,65)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 01:31:54 np0005548731 ceph-osd[80147]: osd.2 pg_epoch: 65 pg[8.b( empty local-lis/les=0/0 n=0 ec=61/48 lis/c=61/61 les/c/f=62/62/0 sis=65) [2] r=0 lpr=65 pi=[61,65)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 01:31:54 np0005548731 ceph-osd[80147]: osd.2 pg_epoch: 65 pg[8.15( empty local-lis/les=0/0 n=0 ec=61/48 lis/c=61/61 les/c/f=62/62/0 sis=65) [2] r=0 lpr=65 pi=[61,65)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 01:31:54 np0005548731 ceph-osd[80147]: osd.2 pg_epoch: 65 pg[11.16( empty local-lis/les=0/0 n=0 ec=63/55 lis/c=63/63 les/c/f=64/64/0 sis=65) [2] r=0 lpr=65 pi=[63,65)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 01:31:54 np0005548731 ceph-osd[80147]: osd.2 pg_epoch: 65 pg[8.11( empty local-lis/les=0/0 n=0 ec=61/48 lis/c=61/61 les/c/f=62/62/0 sis=65) [2] r=0 lpr=65 pi=[61,65)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 01:31:54 np0005548731 ceph-osd[80147]: osd.2 pg_epoch: 65 pg[8.3( empty local-lis/les=0/0 n=0 ec=61/48 lis/c=61/61 les/c/f=62/62/0 sis=65) [2] r=0 lpr=65 pi=[61,65)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 01:31:54 np0005548731 ceph-osd[80147]: osd.2 pg_epoch: 65 pg[11.13( empty local-lis/les=0/0 n=0 ec=63/55 lis/c=63/63 les/c/f=64/64/0 sis=65) [2] r=0 lpr=65 pi=[63,65)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 01:31:54 np0005548731 ceph-osd[80147]: osd.2 pg_epoch: 65 pg[8.c( empty local-lis/les=0/0 n=0 ec=61/48 lis/c=61/61 les/c/f=62/62/0 sis=65) [2] r=0 lpr=65 pi=[61,65)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 01:31:54 np0005548731 ceph-osd[80147]: osd.2 pg_epoch: 65 pg[8.6( empty local-lis/les=0/0 n=0 ec=61/48 lis/c=61/61 les/c/f=62/62/0 sis=65) [2] r=0 lpr=65 pi=[61,65)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 01:31:54 np0005548731 ceph-osd[80147]: osd.2 pg_epoch: 65 pg[8.5( empty local-lis/les=0/0 n=0 ec=61/48 lis/c=61/61 les/c/f=62/62/0 sis=65) [2] r=0 lpr=65 pi=[61,65)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 01:31:54 np0005548731 ceph-osd[80147]: osd.2 pg_epoch: 65 pg[11.19( empty local-lis/les=0/0 n=0 ec=63/55 lis/c=63/63 les/c/f=64/64/0 sis=65) [2] r=0 lpr=65 pi=[63,65)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 01:31:54 np0005548731 ceph-osd[80147]: osd.2 pg_epoch: 65 pg[8.1f( empty local-lis/les=0/0 n=0 ec=61/48 lis/c=61/61 les/c/f=62/62/0 sis=65) [2] r=0 lpr=65 pi=[61,65)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 01:31:54 np0005548731 ceph-osd[80147]: osd.2 pg_epoch: 65 pg[8.1c( empty local-lis/les=0/0 n=0 ec=61/48 lis/c=61/61 les/c/f=62/62/0 sis=65) [2] r=0 lpr=65 pi=[61,65)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 01:31:54 np0005548731 ceph-osd[80147]: log_channel(cluster) log [DBG] : 5.4 scrub starts
Dec  6 01:31:54 np0005548731 ceph-osd[80147]: log_channel(cluster) log [DBG] : 5.4 scrub ok
Dec  6 01:31:55 np0005548731 ceph-osd[80147]: osd.2 pg_epoch: 65 pg[10.12( empty local-lis/les=0/0 n=0 ec=62/53 lis/c=62/62 les/c/f=64/64/0 sis=65) [2] r=0 lpr=65 pi=[62,65)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 01:31:55 np0005548731 ceph-osd[80147]: osd.2 pg_epoch: 65 pg[10.3( empty local-lis/les=0/0 n=0 ec=62/53 lis/c=62/62 les/c/f=64/64/0 sis=65) [2] r=0 lpr=65 pi=[62,65)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 01:31:55 np0005548731 ceph-osd[80147]: osd.2 pg_epoch: 65 pg[10.4( empty local-lis/les=0/0 n=0 ec=62/53 lis/c=62/62 les/c/f=64/64/0 sis=65) [2] r=0 lpr=65 pi=[62,65)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 01:31:55 np0005548731 ceph-osd[80147]: osd.2 pg_epoch: 65 pg[10.1e( empty local-lis/les=0/0 n=0 ec=62/53 lis/c=62/62 les/c/f=64/64/0 sis=65) [2] r=0 lpr=65 pi=[62,65)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 01:31:55 np0005548731 ceph-osd[80147]: osd.2 pg_epoch: 65 pg[10.11( empty local-lis/les=0/0 n=0 ec=62/53 lis/c=62/62 les/c/f=64/64/0 sis=65) [2] r=0 lpr=65 pi=[62,65)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 01:31:55 np0005548731 ceph-osd[80147]: osd.2 pg_epoch: 65 pg[10.f( empty local-lis/les=0/0 n=0 ec=62/53 lis/c=62/62 les/c/f=64/64/0 sis=65) [2] r=0 lpr=65 pi=[62,65)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 01:31:55 np0005548731 ceph-osd[80147]: osd.2 pg_epoch: 65 pg[10.1( empty local-lis/les=0/0 n=0 ec=62/53 lis/c=62/62 les/c/f=64/64/0 sis=65) [2] r=0 lpr=65 pi=[62,65)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 01:31:55 np0005548731 ceph-osd[80147]: osd.2 pg_epoch: 65 pg[10.10( empty local-lis/les=0/0 n=0 ec=62/53 lis/c=62/62 les/c/f=64/64/0 sis=65) [2] r=0 lpr=65 pi=[62,65)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 01:31:55 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec  6 01:31:55 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec  6 01:31:55 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]: dispatch
Dec  6 01:31:55 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec  6 01:31:55 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec  6 01:31:55 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec  6 01:31:55 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]: dispatch
Dec  6 01:31:55 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec  6 01:31:55 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]': finished
Dec  6 01:31:55 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]': finished
Dec  6 01:31:55 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]': finished
Dec  6 01:31:55 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Dec  6 01:31:55 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e66 e66: 3 total, 3 up, 3 in
Dec  6 01:31:55 np0005548731 ceph-osd[80147]: osd.2 pg_epoch: 66 pg[8.2( v 50'4 (0'0,50'4] local-lis/les=65/66 n=1 ec=61/48 lis/c=61/61 les/c/f=62/62/0 sis=65) [2] r=0 lpr=65 pi=[61,65)/1 crt=50'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 01:31:55 np0005548731 ceph-osd[80147]: osd.2 pg_epoch: 66 pg[8.16( v 50'4 (0'0,50'4] local-lis/les=65/66 n=0 ec=61/48 lis/c=61/61 les/c/f=62/62/0 sis=65) [2] r=0 lpr=65 pi=[61,65)/1 crt=50'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 01:31:55 np0005548731 ceph-osd[80147]: osd.2 pg_epoch: 66 pg[11.e( v 58'3 (0'0,58'3] local-lis/les=65/66 n=0 ec=63/55 lis/c=63/63 les/c/f=64/64/0 sis=65) [2] r=0 lpr=65 pi=[63,65)/1 crt=58'3 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 01:31:55 np0005548731 ceph-osd[80147]: osd.2 pg_epoch: 66 pg[8.a( v 50'4 (0'0,50'4] local-lis/les=65/66 n=0 ec=61/48 lis/c=61/61 les/c/f=62/62/0 sis=65) [2] r=0 lpr=65 pi=[61,65)/1 crt=50'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 01:31:55 np0005548731 ceph-osd[80147]: osd.2 pg_epoch: 66 pg[8.f( v 50'4 lc 0'0 (0'0,50'4] local-lis/les=65/66 n=0 ec=61/48 lis/c=61/61 les/c/f=62/62/0 sis=65) [2] r=0 lpr=65 pi=[61,65)/1 crt=50'4 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 01:31:55 np0005548731 ceph-osd[80147]: osd.2 pg_epoch: 66 pg[8.15( v 50'4 (0'0,50'4] local-lis/les=65/66 n=0 ec=61/48 lis/c=61/61 les/c/f=62/62/0 sis=65) [2] r=0 lpr=65 pi=[61,65)/1 crt=50'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 01:31:55 np0005548731 ceph-osd[80147]: osd.2 pg_epoch: 66 pg[11.17( v 58'3 (0'0,58'3] local-lis/les=65/66 n=0 ec=63/55 lis/c=63/63 les/c/f=64/64/0 sis=65) [2] r=0 lpr=65 pi=[63,65)/1 crt=58'3 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 01:31:55 np0005548731 ceph-osd[80147]: osd.2 pg_epoch: 66 pg[11.16( v 58'3 (0'0,58'3] local-lis/les=65/66 n=0 ec=63/55 lis/c=63/63 les/c/f=64/64/0 sis=65) [2] r=0 lpr=65 pi=[63,65)/1 crt=58'3 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 01:31:55 np0005548731 ceph-osd[80147]: osd.2 pg_epoch: 66 pg[8.11( v 50'4 (0'0,50'4] local-lis/les=65/66 n=0 ec=61/48 lis/c=61/61 les/c/f=62/62/0 sis=65) [2] r=0 lpr=65 pi=[61,65)/1 crt=50'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 01:31:55 np0005548731 ceph-osd[80147]: osd.2 pg_epoch: 66 pg[11.13( v 58'3 (0'0,58'3] local-lis/les=65/66 n=0 ec=63/55 lis/c=63/63 les/c/f=64/64/0 sis=65) [2] r=0 lpr=65 pi=[63,65)/1 crt=58'3 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 01:31:55 np0005548731 ceph-osd[80147]: osd.2 pg_epoch: 66 pg[8.3( v 50'4 (0'0,50'4] local-lis/les=65/66 n=1 ec=61/48 lis/c=61/61 les/c/f=62/62/0 sis=65) [2] r=0 lpr=65 pi=[61,65)/1 crt=50'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 01:31:55 np0005548731 ceph-osd[80147]: osd.2 pg_epoch: 66 pg[8.d( v 50'4 (0'0,50'4] local-lis/les=65/66 n=0 ec=61/48 lis/c=61/61 les/c/f=62/62/0 sis=65) [2] r=0 lpr=65 pi=[61,65)/1 crt=50'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 01:31:55 np0005548731 ceph-osd[80147]: osd.2 pg_epoch: 66 pg[11.3( v 58'3 (0'0,58'3] local-lis/les=65/66 n=1 ec=63/55 lis/c=63/63 les/c/f=64/64/0 sis=65) [2] r=0 lpr=65 pi=[63,65)/1 crt=58'3 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 01:31:55 np0005548731 ceph-osd[80147]: osd.2 pg_epoch: 66 pg[11.8( v 58'3 (0'0,58'3] local-lis/les=65/66 n=0 ec=63/55 lis/c=63/63 les/c/f=64/64/0 sis=65) [2] r=0 lpr=65 pi=[63,65)/1 crt=58'3 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 01:31:55 np0005548731 ceph-osd[80147]: osd.2 pg_epoch: 66 pg[8.6( v 50'4 (0'0,50'4] local-lis/les=65/66 n=0 ec=61/48 lis/c=61/61 les/c/f=62/62/0 sis=65) [2] r=0 lpr=65 pi=[61,65)/1 crt=50'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 01:31:55 np0005548731 ceph-osd[80147]: osd.2 pg_epoch: 66 pg[8.5( v 50'4 (0'0,50'4] local-lis/les=65/66 n=0 ec=61/48 lis/c=61/61 les/c/f=62/62/0 sis=65) [2] r=0 lpr=65 pi=[61,65)/1 crt=50'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 01:31:55 np0005548731 ceph-osd[80147]: osd.2 pg_epoch: 66 pg[8.9( v 50'4 (0'0,50'4] local-lis/les=65/66 n=0 ec=61/48 lis/c=61/61 les/c/f=62/62/0 sis=65) [2] r=0 lpr=65 pi=[61,65)/1 crt=50'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 01:31:55 np0005548731 ceph-osd[80147]: osd.2 pg_epoch: 66 pg[11.19( v 58'3 (0'0,58'3] local-lis/les=65/66 n=0 ec=63/55 lis/c=63/63 les/c/f=64/64/0 sis=65) [2] r=0 lpr=65 pi=[63,65)/1 crt=58'3 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 01:31:55 np0005548731 ceph-osd[80147]: osd.2 pg_epoch: 66 pg[8.b( v 50'4 (0'0,50'4] local-lis/les=65/66 n=0 ec=61/48 lis/c=61/61 les/c/f=62/62/0 sis=65) [2] r=0 lpr=65 pi=[61,65)/1 crt=50'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 01:31:55 np0005548731 ceph-osd[80147]: osd.2 pg_epoch: 66 pg[11.a( v 58'3 (0'0,58'3] local-lis/les=65/66 n=0 ec=63/55 lis/c=63/63 les/c/f=64/64/0 sis=65) [2] r=0 lpr=65 pi=[63,65)/1 crt=58'3 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 01:31:55 np0005548731 ceph-osd[80147]: osd.2 pg_epoch: 66 pg[8.c( v 50'4 (0'0,50'4] local-lis/les=65/66 n=0 ec=61/48 lis/c=61/61 les/c/f=62/62/0 sis=65) [2] r=0 lpr=65 pi=[61,65)/1 crt=50'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 01:31:55 np0005548731 ceph-osd[80147]: osd.2 pg_epoch: 66 pg[8.1f( v 50'4 (0'0,50'4] local-lis/les=65/66 n=0 ec=61/48 lis/c=61/61 les/c/f=62/62/0 sis=65) [2] r=0 lpr=65 pi=[61,65)/1 crt=50'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 01:31:55 np0005548731 ceph-osd[80147]: osd.2 pg_epoch: 66 pg[10.1e( v 57'96 (0'0,57'96] local-lis/les=65/66 n=0 ec=62/53 lis/c=62/62 les/c/f=64/64/0 sis=65) [2] r=0 lpr=65 pi=[62,65)/1 crt=57'96 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 01:31:55 np0005548731 ceph-osd[80147]: osd.2 pg_epoch: 66 pg[8.1c( v 50'4 (0'0,50'4] local-lis/les=65/66 n=0 ec=61/48 lis/c=61/61 les/c/f=62/62/0 sis=65) [2] r=0 lpr=65 pi=[61,65)/1 crt=50'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 01:31:55 np0005548731 ceph-osd[80147]: osd.2 pg_epoch: 66 pg[10.10( v 57'96 (0'0,57'96] local-lis/les=65/66 n=0 ec=62/53 lis/c=62/62 les/c/f=64/64/0 sis=65) [2] r=0 lpr=65 pi=[62,65)/1 crt=57'96 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 01:31:55 np0005548731 ceph-osd[80147]: osd.2 pg_epoch: 66 pg[10.4( v 57'96 (0'0,57'96] local-lis/les=65/66 n=1 ec=62/53 lis/c=62/62 les/c/f=64/64/0 sis=65) [2] r=0 lpr=65 pi=[62,65)/1 crt=57'96 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 01:31:55 np0005548731 ceph-osd[80147]: osd.2 pg_epoch: 66 pg[10.12( v 57'96 (0'0,57'96] local-lis/les=65/66 n=0 ec=62/53 lis/c=62/62 les/c/f=64/64/0 sis=65) [2] r=0 lpr=65 pi=[62,65)/1 crt=57'96 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 01:31:55 np0005548731 ceph-osd[80147]: osd.2 pg_epoch: 66 pg[10.1( v 57'96 (0'0,57'96] local-lis/les=65/66 n=1 ec=62/53 lis/c=62/62 les/c/f=64/64/0 sis=65) [2] r=0 lpr=65 pi=[62,65)/1 crt=57'96 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 01:31:55 np0005548731 ceph-osd[80147]: osd.2 pg_epoch: 66 pg[10.f( v 57'96 (0'0,57'96] local-lis/les=65/66 n=0 ec=62/53 lis/c=62/62 les/c/f=64/64/0 sis=65) [2] r=0 lpr=65 pi=[62,65)/1 crt=57'96 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 01:31:55 np0005548731 ceph-osd[80147]: osd.2 pg_epoch: 66 pg[10.11( v 57'96 (0'0,57'96] local-lis/les=65/66 n=0 ec=62/53 lis/c=62/62 les/c/f=64/64/0 sis=65) [2] r=0 lpr=65 pi=[62,65)/1 crt=57'96 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 01:31:55 np0005548731 ceph-osd[80147]: osd.2 pg_epoch: 66 pg[10.3( v 64'99 lc 57'84 (0'0,64'99] local-lis/les=65/66 n=1 ec=62/53 lis/c=62/62 les/c/f=64/64/0 sis=65) [2] r=0 lpr=65 pi=[62,65)/1 crt=64'99 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 01:31:57 np0005548731 ceph-osd[80147]: log_channel(cluster) log [DBG] : 3.15 scrub starts
Dec  6 01:31:57 np0005548731 ceph-osd[80147]: log_channel(cluster) log [DBG] : 3.15 scrub ok
Dec  6 01:31:58 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e66 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  6 01:32:01 np0005548731 ceph-osd[80147]: log_channel(cluster) log [DBG] : 4.1f scrub starts
Dec  6 01:32:01 np0005548731 ceph-osd[80147]: log_channel(cluster) log [DBG] : 4.1f scrub ok
Dec  6 01:32:01 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]': finished
Dec  6 01:32:01 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]': finished
Dec  6 01:32:01 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]': finished
Dec  6 01:32:01 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Dec  6 01:32:03 np0005548731 ceph-osd[80147]: log_channel(cluster) log [DBG] : 7.11 deep-scrub starts
Dec  6 01:32:03 np0005548731 ceph-osd[80147]: log_channel(cluster) log [DBG] : 7.11 deep-scrub ok
Dec  6 01:32:03 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).mds e8 new map
Dec  6 01:32:03 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).mds e8 print_map#012e8#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0115#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112025-12-06T06:29:04.228355+0000#012modified#0112025-12-06T06:31:18.697695+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}#012max_mds#0111#012in#0110#012up#011{0=24157}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0111#012[mds.cephfs.compute-2.tjfgow{0:24157} state up:active seq 2 addr [v2:192.168.122.102:6804/1638633036,v1:192.168.122.102:6805/1638633036] compat {c=[1],r=[1],i=[7ff]}]#012 #012 #012Standby daemons:#012 #012[mds.cephfs.compute-0.qqwnku{-1:14385} state up:standby seq 1 addr [v2:192.168.122.100:6806/3519893155,v1:192.168.122.100:6807/3519893155] compat {c=[1],r=[1],i=[7ff]}]#012[mds.cephfs.compute-1.vsxbzt{-1:24143} state up:standby seq 1 addr [v2:192.168.122.101:6804/1511366696,v1:192.168.122.101:6805/1511366696] compat {c=[1],r=[1],i=[7ff]}]
Dec  6 01:32:03 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 01:32:03 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 01:32:03 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e66 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  6 01:32:04 np0005548731 ceph-osd[80147]: log_channel(cluster) log [DBG] : 7.16 scrub starts
Dec  6 01:32:04 np0005548731 ceph-osd[80147]: log_channel(cluster) log [DBG] : 7.16 scrub ok
Dec  6 01:32:06 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 01:32:06 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 01:32:07 np0005548731 ceph-osd[80147]: log_channel(cluster) log [DBG] : 2.12 scrub starts
Dec  6 01:32:07 np0005548731 ceph-osd[80147]: log_channel(cluster) log [DBG] : 2.12 scrub ok
Dec  6 01:32:07 np0005548731 ceph-mon[77458]: Health check failed: Degraded data redundancy: 1/216 objects degraded (0.463%), 2 pgs degraded (PG_DEGRADED)
Dec  6 01:32:07 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 01:32:07 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 01:32:07 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 01:32:07 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 01:32:08 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).mds e9 new map
Dec  6 01:32:08 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).mds e9 print_map#012e9#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0119#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112025-12-06T06:29:04.228355+0000#012modified#0112025-12-06T06:32:07.852539+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#01167#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}#012max_mds#0111#012in#0110#012up#011{0=14385}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0111#012[mds.cephfs.compute-0.qqwnku{0:14385} state up:replay seq 13 join_fscid=1 addr [v2:192.168.122.100:6806/3519893155,v1:192.168.122.100:6807/3519893155] compat {c=[1],r=[1],i=[7ff]}]#012 #012 #012Standby daemons:#012 #012[mds.cephfs.compute-1.vsxbzt{-1:24143} state up:standby seq 1 addr [v2:192.168.122.101:6804/1511366696,v1:192.168.122.101:6805/1511366696] compat {c=[1],r=[1],i=[7ff]}]
Dec  6 01:32:08 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e67 e67: 3 total, 3 up, 3 in
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: mds.cephfs.compute-2.tjfgow Updating MDS map to version 9 from mon.1
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: mds.cephfs.compute-2.tjfgow Map removed me [mds.cephfs.compute-2.tjfgow{0:24157} state up:active seq 2 addr [v2:192.168.122.102:6804/1638633036,v1:192.168.122.102:6805/1638633036] compat {c=[1],r=[1],i=[7ff]}] from cluster; respawning! See cluster/monitor logs for details.
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: mds.cephfs.compute-2.tjfgow respawn!
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: asok(0x5558f98ae000) register_command assert hook 0x5558f982acc0
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: asok(0x5558f98ae000) register_command abort hook 0x5558f982acc0
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: asok(0x5558f98ae000) register_command leak_some_memory hook 0x5558f982acc0
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: asok(0x5558f98ae000) register_command perfcounters_dump hook 0x5558f982acc0
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: asok(0x5558f98ae000) register_command 1 hook 0x5558f982acc0
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: asok(0x5558f98ae000) register_command perf dump hook 0x5558f982acc0
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: asok(0x5558f98ae000) register_command perfcounters_schema hook 0x5558f982acc0
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: asok(0x5558f98ae000) register_command perf histogram dump hook 0x5558f982acc0
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: asok(0x5558f98ae000) register_command 2 hook 0x5558f982acc0
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: asok(0x5558f98ae000) register_command perf schema hook 0x5558f982acc0
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: asok(0x5558f98ae000) register_command counter dump hook 0x5558f982acc0
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: asok(0x5558f98ae000) register_command counter schema hook 0x5558f982acc0
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: asok(0x5558f98ae000) register_command perf histogram schema hook 0x5558f982acc0
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: asok(0x5558f98ae000) register_command perf reset hook 0x5558f982acc0
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: asok(0x5558f98ae000) register_command config show hook 0x5558f982acc0
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: asok(0x5558f98ae000) register_command config help hook 0x5558f982acc0
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: asok(0x5558f98ae000) register_command config set hook 0x5558f982acc0
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: asok(0x5558f98ae000) register_command config unset hook 0x5558f982acc0
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: asok(0x5558f98ae000) register_command config get hook 0x5558f982acc0
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: asok(0x5558f98ae000) register_command config diff hook 0x5558f982acc0
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: asok(0x5558f98ae000) register_command config diff get hook 0x5558f982acc0
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: asok(0x5558f98ae000) register_command injectargs hook 0x5558f982acc0
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: asok(0x5558f98ae000) register_command log flush hook 0x5558f982acc0
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: asok(0x5558f98ae000) register_command log dump hook 0x5558f982acc0
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: asok(0x5558f98ae000) register_command log reopen hook 0x5558f982acc0
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: asok(0x5558f98ae000) register_command dump_mempools hook 0x5558f9860328
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: build_initial for_mkfs: 0
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: AuthRegistry(0x5558fa5d40d0) adding auth protocol: cephx
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: AuthRegistry(0x5558fa5d40d0) adding auth protocol: cephx
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: AuthRegistry(0x5558fa5d40d0) adding auth protocol: cephx
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: AuthRegistry(0x5558fa5d40d0) adding auth protocol: none
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: AuthRegistry(0x5558fa5d40d0) adding con mode: secure
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: AuthRegistry(0x5558fa5d40d0) adding con mode: crc
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: AuthRegistry(0x5558fa5d40d0) adding con mode: secure
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: AuthRegistry(0x5558fa5d40d0) adding con mode: crc
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: AuthRegistry(0x5558fa5d40d0) adding con mode: secure
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: AuthRegistry(0x5558fa5d40d0) adding con mode: crc
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: AuthRegistry(0x5558fa5d40d0) adding con mode: crc
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: AuthRegistry(0x5558fa5d40d0) adding con mode: secure
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: AuthRegistry(0x5558fa5d40d0) adding con mode: crc
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: AuthRegistry(0x5558fa5d40d0) adding con mode: secure
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: AuthRegistry(0x5558fa5d40d0) adding con mode: crc
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: AuthRegistry(0x5558fa5d40d0) adding con mode: secure
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: auth: KeyRing::load: loaded key file /var/lib/ceph/mds/ceph-cephfs.compute-2.tjfgow/keyring
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: AuthRegistry(0x7fff9d17f480) adding auth protocol: cephx
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: AuthRegistry(0x7fff9d17f480) adding auth protocol: cephx
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: AuthRegistry(0x7fff9d17f480) adding auth protocol: cephx
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: AuthRegistry(0x7fff9d17f480) adding auth protocol: none
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: AuthRegistry(0x7fff9d17f480) adding con mode: secure
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: AuthRegistry(0x7fff9d17f480) adding con mode: crc
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: AuthRegistry(0x7fff9d17f480) adding con mode: secure
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: AuthRegistry(0x7fff9d17f480) adding con mode: crc
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: AuthRegistry(0x7fff9d17f480) adding con mode: secure
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: AuthRegistry(0x7fff9d17f480) adding con mode: crc
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: AuthRegistry(0x7fff9d17f480) adding con mode: crc
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: AuthRegistry(0x7fff9d17f480) adding con mode: secure
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: AuthRegistry(0x7fff9d17f480) adding con mode: crc
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: AuthRegistry(0x7fff9d17f480) adding con mode: secure
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: AuthRegistry(0x7fff9d17f480) adding con mode: crc
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: AuthRegistry(0x7fff9d17f480) adding con mode: secure
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: auth: KeyRing::load: loaded key file /var/lib/ceph/mds/ceph-cephfs.compute-2.tjfgow/keyring
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: auth: KeyRing::load: loaded key file /var/lib/ceph/mds/ceph-cephfs.compute-2.tjfgow/keyring
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: asok(0x5558f98ae000) register_command rotate-key hook 0x7fff9d17f5c8
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: monclient: found mon.noname-c
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: monclient: authenticate success, global_id 24154
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: set_mon_vals no callback set
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: asok(0x5558f98ae000) unregister_commands rotate-key
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: set uid:gid to 167:167 (ceph:ceph)
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mds, pid 2
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: main not setting numa affinity
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: pidfile_write: ignore empty --pid-file
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: asok(0x5558f98ae000) init /var/run/ceph/ceph-mds.cephfs.compute-2.tjfgow.asok
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: asok(0x5558f98ae000) bind_and_listen /var/run/ceph/ceph-mds.cephfs.compute-2.tjfgow.asok
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: asok(0x5558f98ae000) register_command 0 hook 0x5558f98813d8
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: asok(0x5558f98ae000) register_command version hook 0x5558f98813d8
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: asok(0x5558f98ae000) register_command git_version hook 0x5558f98813d8
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: asok(0x5558f98ae000) register_command help hook 0x5558f982ac40
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: asok(0x5558f98ae000) register_command get_command_descriptions hook 0x5558f982ac50
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: asok(0x5558f98ae000) register_command raise hook 0x5558f9864870
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: AuthRegistry(0x5558fa5d40d0) adding auth protocol: cephx
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: AuthRegistry(0x5558fa5d40d0) adding auth protocol: cephx
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: AuthRegistry(0x5558fa5d40d0) adding auth protocol: cephx
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: AuthRegistry(0x5558fa5d40d0) adding auth protocol: none
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: AuthRegistry(0x5558fa5d40d0) adding con mode: secure
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: AuthRegistry(0x5558fa5d40d0) adding con mode: crc
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: AuthRegistry(0x5558fa5d40d0) adding con mode: secure
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: AuthRegistry(0x5558fa5d40d0) adding con mode: crc
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: AuthRegistry(0x5558fa5d40d0) adding con mode: secure
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: AuthRegistry(0x5558fa5d40d0) adding con mode: crc
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: AuthRegistry(0x5558fa5d40d0) adding con mode: crc
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: AuthRegistry(0x5558fa5d40d0) adding con mode: secure
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: AuthRegistry(0x5558fa5d40d0) adding con mode: crc
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: AuthRegistry(0x5558fa5d40d0) adding con mode: secure
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: AuthRegistry(0x5558fa5d40d0) adding con mode: crc
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: AuthRegistry(0x5558fa5d40d0) adding con mode: secure
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: auth: KeyRing::load: loaded key file /var/lib/ceph/mds/ceph-cephfs.compute-2.tjfgow/keyring
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: asok(0x5558f98ae000) entry start
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: build_initial for_mkfs: 0
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: AuthRegistry(0x7fff9d180800) adding auth protocol: cephx
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: AuthRegistry(0x7fff9d180800) adding auth protocol: cephx
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: AuthRegistry(0x7fff9d180800) adding auth protocol: cephx
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: AuthRegistry(0x7fff9d180800) adding auth protocol: none
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: AuthRegistry(0x7fff9d180800) adding con mode: secure
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: AuthRegistry(0x7fff9d180800) adding con mode: crc
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: AuthRegistry(0x7fff9d180800) adding con mode: secure
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: AuthRegistry(0x7fff9d180800) adding con mode: crc
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: AuthRegistry(0x7fff9d180800) adding con mode: secure
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: AuthRegistry(0x7fff9d180800) adding con mode: crc
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: AuthRegistry(0x7fff9d180800) adding con mode: crc
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: AuthRegistry(0x7fff9d180800) adding con mode: secure
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: AuthRegistry(0x7fff9d180800) adding con mode: crc
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: AuthRegistry(0x7fff9d180800) adding con mode: secure
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: AuthRegistry(0x7fff9d180800) adding con mode: crc
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: AuthRegistry(0x7fff9d180800) adding con mode: secure
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: auth: KeyRing::load: loaded key file /var/lib/ceph/mds/ceph-cephfs.compute-2.tjfgow/keyring
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: auth: KeyRing::load: loaded key file /var/lib/ceph/mds/ceph-cephfs.compute-2.tjfgow/keyring
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: asok(0x5558f98ae000) register_command rotate-key hook 0x7fff9d180948
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: monclient: found mon.noname-c
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: set_mon_vals no callback set
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: monclient: authenticate success, global_id 24157
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: asok(0x5558f98ae000) register_command status hook 0x5558f982bd50
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: asok(0x5558f98ae000) register_command dump_ops_in_flight hook 0x5558f982bd50
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: asok(0x5558f98ae000) register_command ops hook 0x5558f982bd50
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: asok(0x5558f98ae000) register_command dump_blocked_ops hook 0x5558f982bd50
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: asok(0x5558f98ae000) register_command dump_blocked_ops_count hook 0x5558f982bd50
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: asok(0x5558f98ae000) register_command dump_historic_ops hook 0x5558f982bd50
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: asok(0x5558f98ae000) register_command dump_historic_ops_by_duration hook 0x5558f982bd50
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: asok(0x5558f98ae000) register_command scrub_path hook 0x5558f982bd50
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: asok(0x5558f98ae000) register_command scrub start hook 0x5558f982bd50
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: asok(0x5558f98ae000) register_command scrub abort hook 0x5558f982bd50
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: asok(0x5558f98ae000) register_command scrub pause hook 0x5558f982bd50
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: asok(0x5558f98ae000) register_command scrub resume hook 0x5558f982bd50
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: asok(0x5558f98ae000) register_command scrub status hook 0x5558f982bd50
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: asok(0x5558f98ae000) register_command tag path hook 0x5558f982bd50
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: asok(0x5558f98ae000) register_command flush_path hook 0x5558f982bd50
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: asok(0x5558f98ae000) register_command export dir hook 0x5558f982bd50
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: asok(0x5558f98ae000) register_command dump cache hook 0x5558f982bd50
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: asok(0x5558f98ae000) register_command cache drop hook 0x5558f982bd50
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: asok(0x5558f98ae000) register_command lock path hook 0x5558f982bd50
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: asok(0x5558f98ae000) register_command cache status hook 0x5558f982bd50
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: asok(0x5558f98ae000) register_command dump tree hook 0x5558f982bd50
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: asok(0x5558f98ae000) register_command dump loads hook 0x5558f982bd50
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: asok(0x5558f98ae000) register_command dump snaps hook 0x5558f982bd50
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: asok(0x5558f98ae000) register_command session ls hook 0x5558f982bd50
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: asok(0x5558f98ae000) register_command client ls hook 0x5558f982bd50
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: asok(0x5558f98ae000) register_command session evict hook 0x5558f982bd50
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: asok(0x5558f98ae000) register_command client evict hook 0x5558f982bd50
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: asok(0x5558f98ae000) register_command session kill hook 0x5558f982bd50
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: asok(0x5558f98ae000) register_command session config hook 0x5558f982bd50
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: asok(0x5558f98ae000) register_command client config hook 0x5558f982bd50
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: asok(0x5558f98ae000) register_command damage ls hook 0x5558f982bd50
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: asok(0x5558f98ae000) register_command damage rm hook 0x5558f982bd50
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: asok(0x5558f98ae000) register_command osdmap barrier hook 0x5558f982bd50
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: asok(0x5558f98ae000) register_command flush journal hook 0x5558f982bd50
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: asok(0x5558f98ae000) register_command force_readonly hook 0x5558f982bd50
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: asok(0x5558f98ae000) register_command get subtrees hook 0x5558f982bd50
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: asok(0x5558f98ae000) register_command dirfrag split hook 0x5558f982bd50
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: asok(0x5558f98ae000) register_command dirfrag merge hook 0x5558f982bd50
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: asok(0x5558f98ae000) register_command dirfrag ls hook 0x5558f982bd50
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: asok(0x5558f98ae000) register_command openfiles ls hook 0x5558f982bd50
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: asok(0x5558f98ae000) register_command dump inode hook 0x5558f982bd50
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: asok(0x5558f98ae000) register_command dump dir hook 0x5558f982bd50
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: asok(0x5558f98ae000) register_command exit hook 0x5558f982bd50
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: asok(0x5558f98ae000) register_command respawn hook 0x5558f982bd50
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: asok(0x5558f98ae000) register_command heap hook 0x5558f982bd50
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: asok(0x5558f98ae000) register_command cpu_profiler hook 0x5558f982bd50
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: mds.cephfs.compute-2.tjfgow Updating MDS map to version 2 from mon.1
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: mds.beacon.cephfs.compute-2.tjfgow Sending beacon up:boot seq 1
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: mds.cephfs.compute-2.tjfgow Updating MDS map to version 3 from mon.1
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: mds.cephfs.compute-2.tjfgow Monitors have assigned me to become a standby.
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: mds.beacon.cephfs.compute-2.tjfgow set_want_state: up:boot -> up:standby
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: mds.beacon.cephfs.compute-2.tjfgow received beacon reply up:boot seq 1 rtt 1.23503
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: mgrc handle_mgr_map Got map version 11
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/798720280,v1:192.168.122.100:6801/798720280]
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/798720280,v1:192.168.122.100:6801/798720280]
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: mds.cephfs.compute-2.tjfgow Updating MDS map to version 4 from mon.1
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: mds.0.purge_queue operator():  data pool 7 not found in OSDMap
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: asok(0x5558f98ae000) register_command objecter_requests hook 0x5558f982bf10
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: mds.0.purge_queue operator():  data pool 7 not found in OSDMap
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: mds.0.0 apply_blocklist: killed 0, blocklisted sessions (0 blocklist entries, 0)
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: mds.0.4 handle_mds_map i am now mds.0.4
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: mds.0.4 handle_mds_map state change up:standby --> up:creating
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: mds.beacon.cephfs.compute-2.tjfgow set_want_state: up:standby -> up:creating
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: mds.0.4 boot_create
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: mds.0.log create empty log
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: mds.0.journaler.mdlog(ro) set_writeable
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: mds.0.journaler.mdlog(rw) created blank journal at inode 0x0x200, format=1
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: mds.0.4 boot_create creating fresh hierarchy
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: mds.0.cache creating system inode with ino:0x1
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: mds.0.4 boot_create creating mydir hierarchy
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: mds.0.cache creating system inode with ino:0x100
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: mds.0.cache creating system inode with ino:0x600
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: mds.0.cache creating system inode with ino:0x601
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: mds.0.cache creating system inode with ino:0x602
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: mds.0.cache creating system inode with ino:0x603
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: mds.0.cache creating system inode with ino:0x604
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: mds.0.cache creating system inode with ino:0x605
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: mds.0.cache creating system inode with ino:0x606
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: mds.0.cache creating system inode with ino:0x607
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: mds.0.cache creating system inode with ino:0x608
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: mds.0.cache creating system inode with ino:0x609
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: mds.0.4 boot_create creating global snaprealm
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: mds.0.purge_queue create: creating
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: mds.0.journaler.pq(ro) set_writeable
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: mds.0.journaler.pq(rw) created blank journal at inode 0x0x500, format=1
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: mgrc handle_mgr_configure stats_period=5
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: mgrc handle_mgr_configure updated stats threshold: 5
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: mds.0.log _submit_thread 4194304~872 : ESubtreeMap 2 subtrees , 0 ambiguous [metablob 0x1, 2 dirs]
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: mds.0.cache Memory usage:  total 251348, rss 36964, heap 190740, baseline 190740, 0 / 13 inodes have caps, 0 caps, 0 caps per inode
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: mds.0.4 apply_blocklist: killed 0, blocklisted sessions (0 blocklist entries, 0)
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: mds.0.4 creating_done
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: mds.0.4 request_state up:active
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: mds.beacon.cephfs.compute-2.tjfgow set_want_state: up:creating -> up:active
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: mds.beacon.cephfs.compute-2.tjfgow Sending beacon up:active seq 2
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: mds.0.cache Memory usage:  total 252372, rss 37796, heap 190740, baseline 190740, 0 / 13 inodes have caps, 0 caps, 0 caps per inode
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: mds.cephfs.compute-2.tjfgow Updating MDS map to version 5 from mon.1
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: mds.0.4 handle_mds_map i am now mds.0.4
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: mds.0.4 handle_mds_map state change up:creating --> up:active
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: mds.0.4 recovery_done -- successful recovery!
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: mds.0.4 active_start
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: mds.0.4 set_osd_epoch_barrier: epoch=58
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: mds.0.4 apply_blocklist: killed 0, blocklisted sessions (0 blocklist entries, 0)
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: mds.beacon.cephfs.compute-2.tjfgow received beacon reply up:active seq 2 rtt 1.04903
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: mds.0.cache Memory usage:  total 268764, rss 37992, heap 207124, baseline 190740, 0 / 13 inodes have caps, 0 caps, 0 caps per inode
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: mds.0.cache trim bytes_used=50kB limit=4GB reservation=0.05% count=0
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: mds.0.cache Memory usage:  total 268764, rss 38064, heap 207124, baseline 190740, 0 / 13 inodes have caps, 0 caps, 0 caps per inode
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: mds.0.cache trim bytes_used=50kB limit=4GB reservation=0.05% count=0
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: mds.0.cache Memory usage:  total 268764, rss 38264, heap 207124, baseline 190740, 0 / 13 inodes have caps, 0 caps, 0 caps per inode
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: mds.0.cache trim bytes_used=50kB limit=4GB reservation=0.05% count=0
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: mds.beacon.cephfs.compute-2.tjfgow Sending beacon up:active seq 3
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: mds.0.4 apply_blocklist: killed 0, blocklisted sessions (0 blocklist entries, 0)
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: mds.beacon.cephfs.compute-2.tjfgow received beacon reply up:active seq 3 rtt 0.0610016
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: mds.0.cache Memory usage:  total 268764, rss 38280, heap 207124, baseline 190740, 0 / 13 inodes have caps, 0 caps, 0 caps per inode
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: mds.0.cache trim bytes_used=50kB limit=4GB reservation=0.05% count=0
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: mds.0.cache Memory usage:  total 268764, rss 38284, heap 207124, baseline 190740, 0 / 13 inodes have caps, 0 caps, 0 caps per inode
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: mds.0.cache trim bytes_used=50kB limit=4GB reservation=0.05% count=0
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: mds.0.4 apply_blocklist: killed 0, blocklisted sessions (0 blocklist entries, 0)
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: mds.0.cache Memory usage:  total 268764, rss 38296, heap 207124, baseline 190740, 0 / 13 inodes have caps, 0 caps, 0 caps per inode
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: mds.0.cache trim bytes_used=50kB limit=4GB reservation=0.05% count=0
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: mds.0.cache Memory usage:  total 268764, rss 38300, heap 207124, baseline 190740, 0 / 13 inodes have caps, 0 caps, 0 caps per inode
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: mds.0.cache trim bytes_used=50kB limit=4GB reservation=0.05% count=0
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: mds.beacon.cephfs.compute-2.tjfgow Sending beacon up:active seq 4
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: mds.0.4 apply_blocklist: killed 0, blocklisted sessions (0 blocklist entries, 0)
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: mds.beacon.cephfs.compute-2.tjfgow received beacon reply up:active seq 4 rtt 0.0440011
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: mds.0.cache Memory usage:  total 268764, rss 38312, heap 207124, baseline 190740, 0 / 13 inodes have caps, 0 caps, 0 caps per inode
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: mds.0.cache trim bytes_used=50kB limit=4GB reservation=0.05% count=0
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: mds.0.4 apply_blocklist: killed 0, blocklisted sessions (0 blocklist entries, 0)
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: mds.0.cache Memory usage:  total 268764, rss 38016, heap 207124, baseline 190740, 0 / 13 inodes have caps, 0 caps, 0 caps per inode
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: mds.0.cache trim bytes_used=50kB limit=4GB reservation=0.05% count=0
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: mds.0.cache Memory usage:  total 268764, rss 38020, heap 207124, baseline 190740, 0 / 13 inodes have caps, 0 caps, 0 caps per inode
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: mds.0.cache trim bytes_used=50kB limit=4GB reservation=0.05% count=0
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: mds.0.cache Memory usage:  total 268764, rss 38028, heap 207124, baseline 190740, 0 / 13 inodes have caps, 0 caps, 0 caps per inode
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: mds.0.cache trim bytes_used=50kB limit=4GB reservation=0.05% count=0
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: mds.beacon.cephfs.compute-2.tjfgow Sending beacon up:active seq 5
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: mds.0.cache Memory usage:  total 268764, rss 38036, heap 207124, baseline 190740, 0 / 13 inodes have caps, 0 caps, 0 caps per inode
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: mds.0.cache trim bytes_used=50kB limit=4GB reservation=0.05% count=0
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: mds.0.cache Memory usage:  total 268764, rss 38040, heap 207124, baseline 190740, 0 / 13 inodes have caps, 0 caps, 0 caps per inode
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: mds.0.cache trim bytes_used=50kB limit=4GB reservation=0.05% count=0
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: mds.0.cache Memory usage:  total 268764, rss 38044, heap 207124, baseline 190740, 0 / 13 inodes have caps, 0 caps, 0 caps per inode
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: mds.0.cache trim bytes_used=50kB limit=4GB reservation=0.05% count=0
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: mds.beacon.cephfs.compute-2.tjfgow received beacon reply up:active seq 5 rtt 3.35309
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: mds.0.4 apply_blocklist: killed 0, blocklisted sessions (0 blocklist entries, 0)
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: mds.0.cache Memory usage:  total 268764, rss 38056, heap 207124, baseline 190740, 0 / 13 inodes have caps, 0 caps, 0 caps per inode
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: mds.0.cache trim bytes_used=50kB limit=4GB reservation=0.05% count=0
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: mds.beacon.cephfs.compute-2.tjfgow Sending beacon up:active seq 6
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: mds.0.cache Memory usage:  total 268764, rss 38064, heap 207124, baseline 190740, 0 / 13 inodes have caps, 0 caps, 0 caps per inode
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: mds.0.cache trim bytes_used=50kB limit=4GB reservation=0.05% count=0
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: mds.0.cache Memory usage:  total 268764, rss 38068, heap 207124, baseline 190740, 0 / 13 inodes have caps, 0 caps, 0 caps per inode
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: mds.0.cache trim bytes_used=50kB limit=4GB reservation=0.05% count=0
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: mds.0.cache Memory usage:  total 268764, rss 38072, heap 207124, baseline 190740, 0 / 13 inodes have caps, 0 caps, 0 caps per inode
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: mds.0.cache trim bytes_used=50kB limit=4GB reservation=0.05% count=0
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: mds.0.cache Memory usage:  total 268764, rss 38088, heap 207124, baseline 190740, 0 / 13 inodes have caps, 0 caps, 0 caps per inode
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: mds.0.cache trim bytes_used=50kB limit=4GB reservation=0.05% count=0
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: mds.beacon.cephfs.compute-2.tjfgow missed beacon ack from the monitors
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: mds.beacon.cephfs.compute-2.tjfgow Sending beacon up:active seq 7
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: mds.beacon.cephfs.compute-2.tjfgow received beacon reply up:active seq 6 rtt 4.88513
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: mds.beacon.cephfs.compute-2.tjfgow received beacon reply up:active seq 7 rtt 0.884023
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: mds.0.cache Memory usage:  total 268764, rss 38096, heap 207124, baseline 190740, 0 / 13 inodes have caps, 0 caps, 0 caps per inode
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: mds.0.cache trim bytes_used=50kB limit=4GB reservation=0.05% count=0
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: mds.0.cache Memory usage:  total 268764, rss 38104, heap 207124, baseline 190740, 0 / 13 inodes have caps, 0 caps, 0 caps per inode
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: mds.0.cache trim bytes_used=50kB limit=4GB reservation=0.05% count=0
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: mds.0.cache Memory usage:  total 268764, rss 38108, heap 207124, baseline 190740, 0 / 13 inodes have caps, 0 caps, 0 caps per inode
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: mds.0.cache trim bytes_used=50kB limit=4GB reservation=0.05% count=0
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: mds.0.cache Memory usage:  total 268764, rss 38112, heap 207124, baseline 190740, 0 / 13 inodes have caps, 0 caps, 0 caps per inode
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: mds.0.cache trim bytes_used=50kB limit=4GB reservation=0.05% count=0
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: mds.beacon.cephfs.compute-2.tjfgow Sending beacon up:active seq 8
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: mds.0.cache Memory usage:  total 268764, rss 38120, heap 207124, baseline 190740, 0 / 13 inodes have caps, 0 caps, 0 caps per inode
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: mds.0.cache trim bytes_used=50kB limit=4GB reservation=0.05% count=0
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: mds.0.cache Memory usage:  total 268764, rss 38128, heap 207124, baseline 190740, 0 / 13 inodes have caps, 0 caps, 0 caps per inode
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: mds.0.cache trim bytes_used=50kB limit=4GB reservation=0.05% count=0
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: mds.0.cache Memory usage:  total 268764, rss 38132, heap 207124, baseline 190740, 0 / 13 inodes have caps, 0 caps, 0 caps per inode
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: mds.0.cache trim bytes_used=50kB limit=4GB reservation=0.05% count=0
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: mds.beacon.cephfs.compute-2.tjfgow received beacon reply up:active seq 8 rtt 3.47709
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: mds.beacon.cephfs.compute-2.tjfgow Sending beacon up:active seq 9
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: mds.0.cache Memory usage:  total 268764, rss 38140, heap 207124, baseline 190740, 0 / 13 inodes have caps, 0 caps, 0 caps per inode
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: mds.0.cache trim bytes_used=50kB limit=4GB reservation=0.05% count=0
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: mds.0.cache Memory usage:  total 268764, rss 38148, heap 207124, baseline 190740, 0 / 13 inodes have caps, 0 caps, 0 caps per inode
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: mds.0.cache trim bytes_used=50kB limit=4GB reservation=0.05% count=0
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: mds.0.cache Memory usage:  total 268764, rss 38152, heap 207124, baseline 190740, 0 / 13 inodes have caps, 0 caps, 0 caps per inode
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: mds.0.cache trim bytes_used=50kB limit=4GB reservation=0.05% count=0
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: mds.0.cache Memory usage:  total 268764, rss 38156, heap 207124, baseline 190740, 0 / 13 inodes have caps, 0 caps, 0 caps per inode
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: mds.0.cache trim bytes_used=50kB limit=4GB reservation=0.05% count=0
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: mds.beacon.cephfs.compute-2.tjfgow missed beacon ack from the monitors
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: mds.beacon.cephfs.compute-2.tjfgow Sending beacon up:active seq 10
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: mds.0.cache Memory usage:  total 268764, rss 38164, heap 207124, baseline 190740, 0 / 13 inodes have caps, 0 caps, 0 caps per inode
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: mds.0.cache trim bytes_used=50kB limit=4GB reservation=0.05% count=0
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: mds.0.cache Memory usage:  total 268764, rss 38172, heap 207124, baseline 190740, 0 / 13 inodes have caps, 0 caps, 0 caps per inode
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: mds.0.cache trim bytes_used=50kB limit=4GB reservation=0.05% count=0
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: mds.beacon.cephfs.compute-2.tjfgow received beacon reply up:active seq 9 rtt 5.56214
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: mds.beacon.cephfs.compute-2.tjfgow received beacon reply up:active seq 10 rtt 1.56104
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: mds.0.cache Memory usage:  total 268764, rss 38180, heap 207124, baseline 190740, 0 / 13 inodes have caps, 0 caps, 0 caps per inode
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: mds.0.cache trim bytes_used=50kB limit=4GB reservation=0.05% count=0
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: mds.0.cache Memory usage:  total 268764, rss 38184, heap 207124, baseline 190740, 0 / 13 inodes have caps, 0 caps, 0 caps per inode
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: mds.0.cache trim bytes_used=50kB limit=4GB reservation=0.05% count=0
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: mds.beacon.cephfs.compute-2.tjfgow Sending beacon up:active seq 11
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: mds.0.cache Memory usage:  total 268764, rss 38192, heap 207124, baseline 190740, 0 / 13 inodes have caps, 0 caps, 0 caps per inode
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: mds.0.cache trim bytes_used=50kB limit=4GB reservation=0.05% count=0
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: mds.beacon.cephfs.compute-2.tjfgow received beacon reply up:active seq 11 rtt 0.135004
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: mds.0.4 apply_blocklist: killed 0, blocklisted sessions (0 blocklist entries, 0)
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: mds.0.cache Memory usage:  total 268764, rss 38200, heap 207124, baseline 190740, 0 / 13 inodes have caps, 0 caps, 0 caps per inode
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: mds.0.cache trim bytes_used=50kB limit=4GB reservation=0.05% count=0
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: mds.0.4 apply_blocklist: killed 0, blocklisted sessions (0 blocklist entries, 0)
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: mds.0.cache Memory usage:  total 268764, rss 38208, heap 207124, baseline 190740, 0 / 13 inodes have caps, 0 caps, 0 caps per inode
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: mds.0.cache trim bytes_used=50kB limit=4GB reservation=0.05% count=0
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: mds.0.cache Memory usage:  total 268764, rss 38216, heap 207124, baseline 190740, 0 / 13 inodes have caps, 0 caps, 0 caps per inode
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: mds.0.cache trim bytes_used=50kB limit=4GB reservation=0.05% count=0
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: mds.beacon.cephfs.compute-2.tjfgow Sending beacon up:active seq 12
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: mds.0.cache Memory usage:  total 268764, rss 38224, heap 207124, baseline 190740, 0 / 13 inodes have caps, 0 caps, 0 caps per inode
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: mds.0.cache trim bytes_used=50kB limit=4GB reservation=0.05% count=0
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: mds.0.cache Memory usage:  total 268764, rss 38228, heap 207124, baseline 190740, 0 / 13 inodes have caps, 0 caps, 0 caps per inode
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: mds.0.cache trim bytes_used=50kB limit=4GB reservation=0.05% count=0
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: mds.0.cache Memory usage:  total 268764, rss 38232, heap 207124, baseline 190740, 0 / 13 inodes have caps, 0 caps, 0 caps per inode
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: mds.0.cache trim bytes_used=50kB limit=4GB reservation=0.05% count=0
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: mds.beacon.cephfs.compute-2.tjfgow received beacon reply up:active seq 12 rtt 2.89708
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: mds.0.cache Memory usage:  total 268764, rss 38240, heap 207124, baseline 190740, 0 / 13 inodes have caps, 0 caps, 0 caps per inode
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: mds.0.cache trim bytes_used=50kB limit=4GB reservation=0.05% count=0
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: mds.beacon.cephfs.compute-2.tjfgow Sending beacon up:active seq 13
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: mds.0.cache Memory usage:  total 268764, rss 38248, heap 207124, baseline 190740, 0 / 13 inodes have caps, 0 caps, 0 caps per inode
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: mds.0.cache trim bytes_used=50kB limit=4GB reservation=0.05% count=0
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: mds.0.cache Memory usage:  total 268764, rss 38252, heap 207124, baseline 190740, 0 / 13 inodes have caps, 0 caps, 0 caps per inode
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: mds.0.cache trim bytes_used=50kB limit=4GB reservation=0.05% count=0
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: mds.beacon.cephfs.compute-2.tjfgow received beacon reply up:active seq 13 rtt 1.44204
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: mds.0.cache Memory usage:  total 268764, rss 38252, heap 207124, baseline 190740, 0 / 13 inodes have caps, 0 caps, 0 caps per inode
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: mds.0.cache trim bytes_used=50kB limit=4GB reservation=0.05% count=0
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: mds.0.cache Memory usage:  total 268764, rss 38256, heap 207124, baseline 190740, 0 / 13 inodes have caps, 0 caps, 0 caps per inode
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: mds.0.cache trim bytes_used=50kB limit=4GB reservation=0.05% count=0
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: mds.beacon.cephfs.compute-2.tjfgow Sending beacon up:active seq 14
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: mds.0.cache Memory usage:  total 268764, rss 38264, heap 207124, baseline 190740, 0 / 13 inodes have caps, 0 caps, 0 caps per inode
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: mds.0.cache trim bytes_used=50kB limit=4GB reservation=0.05% count=0
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: set_mon_vals no callback set
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: mds.beacon.cephfs.compute-2.tjfgow received beacon reply up:active seq 14 rtt 0.183005
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: mds.0.cache Memory usage:  total 268764, rss 38320, heap 207124, baseline 190740, 0 / 13 inodes have caps, 0 caps, 0 caps per inode
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: mds.0.cache trim bytes_used=50kB limit=4GB reservation=0.05% count=0
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: mds.0.cache Memory usage:  total 268764, rss 38340, heap 207124, baseline 190740, 0 / 13 inodes have caps, 0 caps, 0 caps per inode
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: mds.0.cache trim bytes_used=50kB limit=4GB reservation=0.05% count=0
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: mds.cephfs.compute-2.tjfgow Updating MDS map to version 9 from mon.1
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: mds.cephfs.compute-2.tjfgow Map removed me [mds.cephfs.compute-2.tjfgow{0:24157} state up:active seq 2 addr [v2:192.168.122.102:6804/1638633036,v1:192.168.122.102:6805/1638633036] compat {c=[1],r=[1],i=[7ff]}] from cluster; respawning! See cluster/monitor logs for details.
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: mds.cephfs.compute-2.tjfgow respawn!
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: mds.cephfs.compute-2.tjfgow  e: '/usr/bin/ceph-mds'
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: mds.cephfs.compute-2.tjfgow  0: '/usr/bin/ceph-mds'
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: mds.cephfs.compute-2.tjfgow  1: '-n'
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: mds.cephfs.compute-2.tjfgow  2: 'mds.cephfs.compute-2.tjfgow'
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: mds.cephfs.compute-2.tjfgow  3: '-f'
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: mds.cephfs.compute-2.tjfgow  4: '--setuser'
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: mds.cephfs.compute-2.tjfgow  5: 'ceph'
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: mds.cephfs.compute-2.tjfgow  6: '--setgroup'
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: mds.cephfs.compute-2.tjfgow  7: 'ceph'
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: mds.cephfs.compute-2.tjfgow  8: '--default-log-to-file=false'
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: mds.cephfs.compute-2.tjfgow  9: '--default-log-to-journald=true'
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: mds.cephfs.compute-2.tjfgow  10: '--default-log-to-stderr=false'
Dec  6 01:32:08 np0005548731 ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-mds-cephfs-compute-2-tjfgow[82070]: ignoring --setuser ceph since I am not root
Dec  6 01:32:08 np0005548731 ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-mds-cephfs-compute-2-tjfgow[82070]: ignoring --setgroup ceph since I am not root
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mds, pid 2
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: main not setting numa affinity
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: pidfile_write: ignore empty --pid-file
Dec  6 01:32:08 np0005548731 ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-mds-cephfs-compute-2-tjfgow[82070]: starting mds.cephfs.compute-2.tjfgow at 
Dec  6 01:32:08 np0005548731 ceph-mds[82074]: mds.cephfs.compute-2.tjfgow Updating MDS map to version 9 from mon.1
Dec  6 01:32:08 np0005548731 ceph-mon[77458]: Deploying daemon haproxy.rgw.default.compute-0.ybrwqj on compute-0
Dec  6 01:32:08 np0005548731 ceph-mon[77458]: Dropping low affinity active daemon mds.cephfs.compute-2.tjfgow in favor of higher affinity standby.
Dec  6 01:32:08 np0005548731 ceph-mon[77458]: Replacing daemon mds.cephfs.compute-2.tjfgow as rank 0 with standby daemon mds.cephfs.compute-0.qqwnku
Dec  6 01:32:08 np0005548731 ceph-mon[77458]: Health check failed: 1 filesystem is degraded (FS_DEGRADED)
Dec  6 01:32:08 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]: dispatch
Dec  6 01:32:08 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e67 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  6 01:32:08 np0005548731 systemd[1]: session-33.scope: Deactivated successfully.
Dec  6 01:32:08 np0005548731 systemd[1]: session-33.scope: Consumed 8.870s CPU time.
Dec  6 01:32:08 np0005548731 systemd-logind[794]: Session 33 logged out. Waiting for processes to exit.
Dec  6 01:32:08 np0005548731 systemd-logind[794]: Removed session 33.
Dec  6 01:32:09 np0005548731 ceph-osd[80147]: log_channel(cluster) log [DBG] : 3.11 deep-scrub starts
Dec  6 01:32:09 np0005548731 ceph-osd[80147]: log_channel(cluster) log [DBG] : 3.11 deep-scrub ok
Dec  6 01:32:09 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e68 e68: 3 total, 3 up, 3 in
Dec  6 01:32:09 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).mds e10 new map
Dec  6 01:32:09 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).mds e10 print_map#012e10#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#01110#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112025-12-06T06:29:04.228355+0000#012modified#0112025-12-06T06:32:09.094595+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#01167#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}#012max_mds#0111#012in#0110#012up#011{0=14385}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0111#012[mds.cephfs.compute-0.qqwnku{0:14385} state up:reconnect seq 14 join_fscid=1 addr [v2:192.168.122.100:6806/3519893155,v1:192.168.122.100:6807/3519893155] compat {c=[1],r=[1],i=[7ff]}]#012 #012 #012Standby daemons:#012 #012[mds.cephfs.compute-1.vsxbzt{-1:24143} state up:standby seq 1 addr [v2:192.168.122.101:6804/1511366696,v1:192.168.122.101:6805/1511366696] compat {c=[1],r=[1],i=[7ff]}]#012[mds.cephfs.compute-2.tjfgow{-1:24166} state up:standby seq 1 join_fscid=1 addr [v2:192.168.122.102:6804/1077972025,v1:192.168.122.102:6805/1077972025] compat {c=[1],r=[1],i=[7ff]}]
Dec  6 01:32:09 np0005548731 ceph-mds[82074]: mds.cephfs.compute-2.tjfgow Updating MDS map to version 10 from mon.1
Dec  6 01:32:09 np0005548731 ceph-mds[82074]: mds.cephfs.compute-2.tjfgow Monitors have assigned me to become a standby.
Dec  6 01:32:10 np0005548731 ceph-mon[77458]: Health check cleared: PG_DEGRADED (was: Degraded data redundancy: 1/216 objects degraded (0.463%), 2 pgs degraded)
Dec  6 01:32:10 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]': finished
Dec  6 01:32:12 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).mds e11 new map
Dec  6 01:32:12 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).mds e11 print_map#012e11#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#01111#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112025-12-06T06:29:04.228355+0000#012modified#0112025-12-06T06:32:10.364755+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#01167#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}#012max_mds#0111#012in#0110#012up#011{0=14385}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0111#012[mds.cephfs.compute-0.qqwnku{0:14385} state up:rejoin seq 15 join_fscid=1 addr [v2:192.168.122.100:6806/3519893155,v1:192.168.122.100:6807/3519893155] compat {c=[1],r=[1],i=[7ff]}]#012 #012 #012Standby daemons:#012 #012[mds.cephfs.compute-1.vsxbzt{-1:24143} state up:standby seq 3 join_fscid=1 addr [v2:192.168.122.101:6804/1511366696,v1:192.168.122.101:6805/1511366696] compat {c=[1],r=[1],i=[7ff]}]#012[mds.cephfs.compute-2.tjfgow{-1:24166} state up:standby seq 1 join_fscid=1 addr [v2:192.168.122.102:6804/1077972025,v1:192.168.122.102:6805/1077972025] compat {c=[1],r=[1],i=[7ff]}]
Dec  6 01:32:12 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e69 e69: 3 total, 3 up, 3 in
Dec  6 01:32:12 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]: dispatch
Dec  6 01:32:12 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:32:12 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.002000058s ======
Dec  6 01:32:12 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:32:12.454 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000058s
Dec  6 01:32:12 np0005548731 ceph-osd[80147]: osd.2 pg_epoch: 69 pg[9.17( empty local-lis/les=0/0 n=0 ec=62/51 lis/c=62/62 les/c/f=63/63/0 sis=69) [2] r=0 lpr=69 pi=[62,69)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 01:32:12 np0005548731 ceph-osd[80147]: osd.2 pg_epoch: 69 pg[9.b( empty local-lis/les=0/0 n=0 ec=62/51 lis/c=62/62 les/c/f=63/63/0 sis=69) [2] r=0 lpr=69 pi=[62,69)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 01:32:12 np0005548731 ceph-osd[80147]: osd.2 pg_epoch: 69 pg[9.3( empty local-lis/les=0/0 n=0 ec=62/51 lis/c=62/62 les/c/f=63/63/0 sis=69) [2] r=0 lpr=69 pi=[62,69)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 01:32:12 np0005548731 ceph-osd[80147]: osd.2 pg_epoch: 69 pg[9.f( empty local-lis/les=0/0 n=0 ec=62/51 lis/c=62/62 les/c/f=63/63/0 sis=69) [2] r=0 lpr=69 pi=[62,69)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 01:32:12 np0005548731 ceph-osd[80147]: osd.2 pg_epoch: 69 pg[9.13( empty local-lis/les=0/0 n=0 ec=62/51 lis/c=62/62 les/c/f=63/63/0 sis=69) [2] r=0 lpr=69 pi=[62,69)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 01:32:12 np0005548731 ceph-osd[80147]: osd.2 pg_epoch: 69 pg[9.7( empty local-lis/les=0/0 n=0 ec=62/51 lis/c=62/62 les/c/f=63/63/0 sis=69) [2] r=0 lpr=69 pi=[62,69)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 01:32:12 np0005548731 ceph-osd[80147]: osd.2 pg_epoch: 69 pg[9.1f( empty local-lis/les=0/0 n=0 ec=62/51 lis/c=62/62 les/c/f=63/63/0 sis=69) [2] r=0 lpr=69 pi=[62,69)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 01:32:12 np0005548731 ceph-osd[80147]: osd.2 pg_epoch: 69 pg[9.1b( empty local-lis/les=0/0 n=0 ec=62/51 lis/c=62/62 les/c/f=63/63/0 sis=69) [2] r=0 lpr=69 pi=[62,69)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 01:32:13 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e69 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  6 01:32:13 np0005548731 ceph-osd[80147]: log_channel(cluster) log [DBG] : 4.15 deep-scrub starts
Dec  6 01:32:13 np0005548731 ceph-osd[80147]: log_channel(cluster) log [DBG] : 4.15 deep-scrub ok
Dec  6 01:32:14 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e70 e70: 3 total, 3 up, 3 in
Dec  6 01:32:14 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e70 crush map has features 3314933000854323200, adjusting msgr requires
Dec  6 01:32:14 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e70 crush map has features 432629239337189376, adjusting msgr requires
Dec  6 01:32:14 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e70 crush map has features 432629239337189376, adjusting msgr requires
Dec  6 01:32:14 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e70 crush map has features 432629239337189376, adjusting msgr requires
Dec  6 01:32:14 np0005548731 ceph-osd[80147]: osd.2 70 crush map has features 432629239337189376, adjusting msgr requires for clients
Dec  6 01:32:14 np0005548731 ceph-osd[80147]: osd.2 70 crush map has features 432629239337189376 was 288514051259245057, adjusting msgr requires for mons
Dec  6 01:32:14 np0005548731 ceph-osd[80147]: osd.2 70 crush map has features 3314933000854323200, adjusting msgr requires for osds
Dec  6 01:32:14 np0005548731 ceph-osd[80147]: osd.2 pg_epoch: 70 pg[9.f( empty local-lis/les=0/0 n=0 ec=62/51 lis/c=62/62 les/c/f=63/63/0 sis=70) [2]/[0] r=-1 lpr=70 pi=[62,70)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  6 01:32:14 np0005548731 ceph-osd[80147]: osd.2 pg_epoch: 70 pg[9.7( empty local-lis/les=0/0 n=0 ec=62/51 lis/c=62/62 les/c/f=63/63/0 sis=70) [2]/[0] r=-1 lpr=70 pi=[62,70)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  6 01:32:14 np0005548731 ceph-osd[80147]: osd.2 pg_epoch: 70 pg[9.b( empty local-lis/les=0/0 n=0 ec=62/51 lis/c=62/62 les/c/f=63/63/0 sis=70) [2]/[0] r=-1 lpr=70 pi=[62,70)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  6 01:32:14 np0005548731 ceph-osd[80147]: osd.2 pg_epoch: 70 pg[9.3( empty local-lis/les=0/0 n=0 ec=62/51 lis/c=62/62 les/c/f=63/63/0 sis=70) [2]/[0] r=-1 lpr=70 pi=[62,70)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  6 01:32:14 np0005548731 ceph-osd[80147]: osd.2 pg_epoch: 70 pg[9.17( empty local-lis/les=0/0 n=0 ec=62/51 lis/c=62/62 les/c/f=63/63/0 sis=70) [2]/[0] r=-1 lpr=70 pi=[62,70)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  6 01:32:14 np0005548731 ceph-osd[80147]: osd.2 pg_epoch: 70 pg[9.f( empty local-lis/les=0/0 n=0 ec=62/51 lis/c=62/62 les/c/f=63/63/0 sis=70) [2]/[0] r=-1 lpr=70 pi=[62,70)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec  6 01:32:14 np0005548731 ceph-osd[80147]: osd.2 pg_epoch: 70 pg[9.3( empty local-lis/les=0/0 n=0 ec=62/51 lis/c=62/62 les/c/f=63/63/0 sis=70) [2]/[0] r=-1 lpr=70 pi=[62,70)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec  6 01:32:14 np0005548731 ceph-osd[80147]: osd.2 pg_epoch: 70 pg[9.17( empty local-lis/les=0/0 n=0 ec=62/51 lis/c=62/62 les/c/f=63/63/0 sis=70) [2]/[0] r=-1 lpr=70 pi=[62,70)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec  6 01:32:14 np0005548731 ceph-osd[80147]: osd.2 pg_epoch: 70 pg[9.b( empty local-lis/les=0/0 n=0 ec=62/51 lis/c=62/62 les/c/f=63/63/0 sis=70) [2]/[0] r=-1 lpr=70 pi=[62,70)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec  6 01:32:14 np0005548731 ceph-osd[80147]: osd.2 pg_epoch: 70 pg[9.7( empty local-lis/les=0/0 n=0 ec=62/51 lis/c=62/62 les/c/f=63/63/0 sis=70) [2]/[0] r=-1 lpr=70 pi=[62,70)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec  6 01:32:14 np0005548731 ceph-osd[80147]: osd.2 pg_epoch: 70 pg[9.13( empty local-lis/les=0/0 n=0 ec=62/51 lis/c=62/62 les/c/f=63/63/0 sis=70) [2]/[0] r=-1 lpr=70 pi=[62,70)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  6 01:32:14 np0005548731 ceph-osd[80147]: osd.2 pg_epoch: 70 pg[9.13( empty local-lis/les=0/0 n=0 ec=62/51 lis/c=62/62 les/c/f=63/63/0 sis=70) [2]/[0] r=-1 lpr=70 pi=[62,70)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec  6 01:32:14 np0005548731 ceph-osd[80147]: osd.2 pg_epoch: 70 pg[9.1b( empty local-lis/les=0/0 n=0 ec=62/51 lis/c=62/62 les/c/f=63/63/0 sis=70) [2]/[0] r=-1 lpr=70 pi=[62,70)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  6 01:32:14 np0005548731 ceph-osd[80147]: osd.2 pg_epoch: 70 pg[9.1b( empty local-lis/les=0/0 n=0 ec=62/51 lis/c=62/62 les/c/f=63/63/0 sis=70) [2]/[0] r=-1 lpr=70 pi=[62,70)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec  6 01:32:14 np0005548731 ceph-osd[80147]: osd.2 pg_epoch: 70 pg[9.1f( empty local-lis/les=0/0 n=0 ec=62/51 lis/c=62/62 les/c/f=63/63/0 sis=70) [2]/[0] r=-1 lpr=70 pi=[62,70)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  6 01:32:14 np0005548731 ceph-osd[80147]: osd.2 pg_epoch: 70 pg[9.1f( empty local-lis/les=0/0 n=0 ec=62/51 lis/c=62/62 les/c/f=63/63/0 sis=70) [2]/[0] r=-1 lpr=70 pi=[62,70)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec  6 01:32:14 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:32:14 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:32:14 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:32:14.459 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:32:14 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).mds e12 new map
Dec  6 01:32:14 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).mds e12 print_map#012e12#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#01112#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112025-12-06T06:29:04.228355+0000#012modified#0112025-12-06T06:32:13.099122+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#01167#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}#012max_mds#0111#012in#0110#012up#011{0=14385}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0111#012[mds.cephfs.compute-0.qqwnku{0:14385} state up:active seq 16 join_fscid=1 addr [v2:192.168.122.100:6806/3519893155,v1:192.168.122.100:6807/3519893155] compat {c=[1],r=[1],i=[7ff]}]#012 #012 #012Standby daemons:#012 #012[mds.cephfs.compute-1.vsxbzt{-1:24143} state up:standby seq 3 join_fscid=1 addr [v2:192.168.122.101:6804/1511366696,v1:192.168.122.101:6805/1511366696] compat {c=[1],r=[1],i=[7ff]}]#012[mds.cephfs.compute-2.tjfgow{-1:24166} state up:standby seq 1 join_fscid=1 addr [v2:192.168.122.102:6804/1077972025,v1:192.168.122.102:6805/1077972025] compat {c=[1],r=[1],i=[7ff]}]
Dec  6 01:32:14 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]': finished
Dec  6 01:32:14 np0005548731 ceph-mon[77458]: daemon mds.cephfs.compute-0.qqwnku is now active in filesystem cephfs as rank 0
Dec  6 01:32:14 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]: dispatch
Dec  6 01:32:14 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 01:32:14 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 01:32:14 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 01:32:14 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 01:32:14 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "9.2", "id": [0, 1]}]: dispatch
Dec  6 01:32:14 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "9.8", "id": [0, 1]}]: dispatch
Dec  6 01:32:14 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "9.e", "id": [0, 1]}]: dispatch
Dec  6 01:32:14 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "9.14", "id": [0, 1]}]: dispatch
Dec  6 01:32:14 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "9.19", "id": [0, 1]}]: dispatch
Dec  6 01:32:14 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "9.1d", "id": [0, 1]}]: dispatch
Dec  6 01:32:14 np0005548731 ceph-mon[77458]: Health check cleared: FS_DEGRADED (was: 1 filesystem is degraded)
Dec  6 01:32:14 np0005548731 ceph-mon[77458]: Cluster is now healthy
Dec  6 01:32:15 np0005548731 podman[82688]: 2025-12-06 06:32:15.154670522 +0000 UTC m=+2.117287661 container create 4b946982a4a85b713b19c642752f83cba970c2da1b179d6f315166d7d328ad63 (image=quay.io/ceph/haproxy:2.3, name=objective_shannon)
Dec  6 01:32:15 np0005548731 podman[82688]: 2025-12-06 06:32:15.137764123 +0000 UTC m=+2.100381292 image pull e85424b0d443f37ddd2dd8a3bb2ef6f18dd352b987723a921b64289023af2914 quay.io/ceph/haproxy:2.3
Dec  6 01:32:15 np0005548731 systemd[72782]: Created slice User Background Tasks Slice.
Dec  6 01:32:15 np0005548731 systemd[72782]: Starting Cleanup of User's Temporary Files and Directories...
Dec  6 01:32:15 np0005548731 systemd[1]: Started libpod-conmon-4b946982a4a85b713b19c642752f83cba970c2da1b179d6f315166d7d328ad63.scope.
Dec  6 01:32:15 np0005548731 systemd[72782]: Finished Cleanup of User's Temporary Files and Directories.
Dec  6 01:32:15 np0005548731 systemd[1]: Started libcrun container.
Dec  6 01:32:15 np0005548731 podman[82688]: 2025-12-06 06:32:15.207456193 +0000 UTC m=+2.170073352 container init 4b946982a4a85b713b19c642752f83cba970c2da1b179d6f315166d7d328ad63 (image=quay.io/ceph/haproxy:2.3, name=objective_shannon)
Dec  6 01:32:15 np0005548731 podman[82688]: 2025-12-06 06:32:15.215396079 +0000 UTC m=+2.178013218 container start 4b946982a4a85b713b19c642752f83cba970c2da1b179d6f315166d7d328ad63 (image=quay.io/ceph/haproxy:2.3, name=objective_shannon)
Dec  6 01:32:15 np0005548731 podman[82688]: 2025-12-06 06:32:15.219884451 +0000 UTC m=+2.182501590 container attach 4b946982a4a85b713b19c642752f83cba970c2da1b179d6f315166d7d328ad63 (image=quay.io/ceph/haproxy:2.3, name=objective_shannon)
Dec  6 01:32:15 np0005548731 objective_shannon[82803]: 0 0
Dec  6 01:32:15 np0005548731 systemd[1]: libpod-4b946982a4a85b713b19c642752f83cba970c2da1b179d6f315166d7d328ad63.scope: Deactivated successfully.
Dec  6 01:32:15 np0005548731 podman[82688]: 2025-12-06 06:32:15.220831468 +0000 UTC m=+2.183448607 container died 4b946982a4a85b713b19c642752f83cba970c2da1b179d6f315166d7d328ad63 (image=quay.io/ceph/haproxy:2.3, name=objective_shannon)
Dec  6 01:32:15 np0005548731 systemd[1]: var-lib-containers-storage-overlay-21b229d0c6c5f12fe008c9987e2a64ebc3a2bfc0595c2fc41b27b7b441058ac7-merged.mount: Deactivated successfully.
Dec  6 01:32:15 np0005548731 podman[82688]: 2025-12-06 06:32:15.261289455 +0000 UTC m=+2.223906594 container remove 4b946982a4a85b713b19c642752f83cba970c2da1b179d6f315166d7d328ad63 (image=quay.io/ceph/haproxy:2.3, name=objective_shannon)
Dec  6 01:32:15 np0005548731 systemd[1]: libpod-conmon-4b946982a4a85b713b19c642752f83cba970c2da1b179d6f315166d7d328ad63.scope: Deactivated successfully.
Dec  6 01:32:15 np0005548731 systemd[1]: Reloading.
Dec  6 01:32:15 np0005548731 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  6 01:32:15 np0005548731 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  6 01:32:15 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e71 e71: 3 total, 3 up, 3 in
Dec  6 01:32:15 np0005548731 systemd[1]: Reloading.
Dec  6 01:32:15 np0005548731 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  6 01:32:15 np0005548731 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  6 01:32:15 np0005548731 systemd[1]: Starting Ceph haproxy.rgw.default.compute-2.nyemkw for 40a1bae4-cf76-5610-8dab-c75116dfe0bb...
Dec  6 01:32:15 np0005548731 ceph-mon[77458]: Deploying daemon haproxy.rgw.default.compute-2.nyemkw on compute-2
Dec  6 01:32:15 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]: dispatch
Dec  6 01:32:15 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]': finished
Dec  6 01:32:15 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "9.2", "id": [0, 1]}]': finished
Dec  6 01:32:15 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "9.8", "id": [0, 1]}]': finished
Dec  6 01:32:15 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "9.e", "id": [0, 1]}]': finished
Dec  6 01:32:15 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "9.14", "id": [0, 1]}]': finished
Dec  6 01:32:15 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "9.19", "id": [0, 1]}]': finished
Dec  6 01:32:15 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "9.1d", "id": [0, 1]}]': finished
Dec  6 01:32:15 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]': finished
Dec  6 01:32:16 np0005548731 podman[82949]: 2025-12-06 06:32:16.120385135 +0000 UTC m=+0.047920177 container create 42755d5ddab81b86d84b40c9464a8af637e2d196c1a098a61593efb6a9875167 (image=quay.io/ceph/haproxy:2.3, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-haproxy-rgw-default-compute-2-nyemkw)
Dec  6 01:32:16 np0005548731 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/00eb09a69c53f6ff4fd8c7faf0b349cc3c738d14c5d96ea118a3c090beb8cb39/merged/var/lib/haproxy supports timestamps until 2038 (0x7fffffff)
Dec  6 01:32:16 np0005548731 podman[82949]: 2025-12-06 06:32:16.176781492 +0000 UTC m=+0.104316564 container init 42755d5ddab81b86d84b40c9464a8af637e2d196c1a098a61593efb6a9875167 (image=quay.io/ceph/haproxy:2.3, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-haproxy-rgw-default-compute-2-nyemkw)
Dec  6 01:32:16 np0005548731 podman[82949]: 2025-12-06 06:32:16.181779011 +0000 UTC m=+0.109314053 container start 42755d5ddab81b86d84b40c9464a8af637e2d196c1a098a61593efb6a9875167 (image=quay.io/ceph/haproxy:2.3, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-haproxy-rgw-default-compute-2-nyemkw)
Dec  6 01:32:16 np0005548731 bash[82949]: 42755d5ddab81b86d84b40c9464a8af637e2d196c1a098a61593efb6a9875167
Dec  6 01:32:16 np0005548731 podman[82949]: 2025-12-06 06:32:16.094271183 +0000 UTC m=+0.021806275 image pull e85424b0d443f37ddd2dd8a3bb2ef6f18dd352b987723a921b64289023af2914 quay.io/ceph/haproxy:2.3
Dec  6 01:32:16 np0005548731 ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-haproxy-rgw-default-compute-2-nyemkw[82964]: [NOTICE] 339/063216 (2) : New worker #1 (4) forked
Dec  6 01:32:16 np0005548731 systemd[1]: Started Ceph haproxy.rgw.default.compute-2.nyemkw for 40a1bae4-cf76-5610-8dab-c75116dfe0bb.
Dec  6 01:32:16 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:32:16 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:32:16 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:32:16.462 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:32:16 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e72 e72: 3 total, 3 up, 3 in
Dec  6 01:32:16 np0005548731 ceph-osd[80147]: osd.2 pg_epoch: 72 pg[9.17( v 58'1159 (0'0,58'1159] local-lis/les=0/0 n=5 ec=62/51 lis/c=70/62 les/c/f=71/63/0 sis=72) [2] r=0 lpr=72 pi=[62,72)/1 luod=0'0 crt=58'1159 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec  6 01:32:16 np0005548731 ceph-osd[80147]: osd.2 pg_epoch: 72 pg[9.3( v 58'1159 (0'0,58'1159] local-lis/les=0/0 n=6 ec=62/51 lis/c=70/62 les/c/f=71/63/0 sis=72) [2] r=0 lpr=72 pi=[62,72)/1 luod=0'0 crt=58'1159 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec  6 01:32:16 np0005548731 ceph-osd[80147]: osd.2 pg_epoch: 72 pg[9.17( v 58'1159 (0'0,58'1159] local-lis/les=0/0 n=5 ec=62/51 lis/c=70/62 les/c/f=71/63/0 sis=72) [2] r=0 lpr=72 pi=[62,72)/1 crt=58'1159 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 01:32:16 np0005548731 ceph-osd[80147]: osd.2 pg_epoch: 72 pg[9.3( v 58'1159 (0'0,58'1159] local-lis/les=0/0 n=6 ec=62/51 lis/c=70/62 les/c/f=71/63/0 sis=72) [2] r=0 lpr=72 pi=[62,72)/1 crt=58'1159 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 01:32:16 np0005548731 ceph-osd[80147]: osd.2 pg_epoch: 72 pg[9.7( v 58'1159 (0'0,58'1159] local-lis/les=0/0 n=6 ec=62/51 lis/c=70/62 les/c/f=71/63/0 sis=72) [2] r=0 lpr=72 pi=[62,72)/1 luod=0'0 crt=58'1159 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec  6 01:32:16 np0005548731 ceph-osd[80147]: osd.2 pg_epoch: 72 pg[9.7( v 58'1159 (0'0,58'1159] local-lis/les=0/0 n=6 ec=62/51 lis/c=70/62 les/c/f=71/63/0 sis=72) [2] r=0 lpr=72 pi=[62,72)/1 crt=58'1159 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 01:32:16 np0005548731 ceph-osd[80147]: osd.2 pg_epoch: 72 pg[9.1b( v 58'1159 (0'0,58'1159] local-lis/les=0/0 n=5 ec=62/51 lis/c=70/62 les/c/f=71/63/0 sis=72) [2] r=0 lpr=72 pi=[62,72)/1 luod=0'0 crt=58'1159 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec  6 01:32:16 np0005548731 ceph-osd[80147]: osd.2 pg_epoch: 72 pg[9.1b( v 58'1159 (0'0,58'1159] local-lis/les=0/0 n=5 ec=62/51 lis/c=70/62 les/c/f=71/63/0 sis=72) [2] r=0 lpr=72 pi=[62,72)/1 crt=58'1159 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 01:32:16 np0005548731 ceph-osd[80147]: osd.2 pg_epoch: 72 pg[9.1f( v 58'1159 (0'0,58'1159] local-lis/les=0/0 n=5 ec=62/51 lis/c=70/62 les/c/f=71/63/0 sis=72) [2] r=0 lpr=72 pi=[62,72)/1 luod=0'0 crt=58'1159 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec  6 01:32:16 np0005548731 ceph-osd[80147]: osd.2 pg_epoch: 72 pg[9.1f( v 58'1159 (0'0,58'1159] local-lis/les=0/0 n=5 ec=62/51 lis/c=70/62 les/c/f=71/63/0 sis=72) [2] r=0 lpr=72 pi=[62,72)/1 crt=58'1159 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 01:32:16 np0005548731 ceph-osd[80147]: osd.2 pg_epoch: 72 pg[9.f( v 58'1159 (0'0,58'1159] local-lis/les=0/0 n=6 ec=62/51 lis/c=70/62 les/c/f=71/63/0 sis=72) [2] r=0 lpr=72 pi=[62,72)/1 luod=0'0 crt=58'1159 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec  6 01:32:16 np0005548731 ceph-osd[80147]: osd.2 pg_epoch: 72 pg[9.13( v 58'1159 (0'0,58'1159] local-lis/les=0/0 n=5 ec=62/51 lis/c=70/62 les/c/f=71/63/0 sis=72) [2] r=0 lpr=72 pi=[62,72)/1 luod=0'0 crt=58'1159 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec  6 01:32:16 np0005548731 ceph-osd[80147]: osd.2 pg_epoch: 72 pg[9.b( v 58'1159 (0'0,58'1159] local-lis/les=0/0 n=6 ec=62/51 lis/c=70/62 les/c/f=71/63/0 sis=72) [2] r=0 lpr=72 pi=[62,72)/1 luod=0'0 crt=58'1159 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec  6 01:32:16 np0005548731 ceph-osd[80147]: osd.2 pg_epoch: 72 pg[9.f( v 58'1159 (0'0,58'1159] local-lis/les=0/0 n=6 ec=62/51 lis/c=70/62 les/c/f=71/63/0 sis=72) [2] r=0 lpr=72 pi=[62,72)/1 crt=58'1159 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 01:32:16 np0005548731 ceph-osd[80147]: osd.2 pg_epoch: 72 pg[9.b( v 58'1159 (0'0,58'1159] local-lis/les=0/0 n=6 ec=62/51 lis/c=70/62 les/c/f=71/63/0 sis=72) [2] r=0 lpr=72 pi=[62,72)/1 crt=58'1159 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 01:32:16 np0005548731 ceph-osd[80147]: osd.2 pg_epoch: 72 pg[9.13( v 58'1159 (0'0,58'1159] local-lis/les=0/0 n=5 ec=62/51 lis/c=70/62 les/c/f=71/63/0 sis=72) [2] r=0 lpr=72 pi=[62,72)/1 crt=58'1159 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 01:32:17 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:32:17 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000028s ======
Dec  6 01:32:17 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:32:17.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec  6 01:32:17 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 01:32:17 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 01:32:17 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 01:32:17 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 01:32:17 np0005548731 ceph-mon[77458]: 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Dec  6 01:32:17 np0005548731 ceph-mon[77458]: 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Dec  6 01:32:17 np0005548731 ceph-mon[77458]: Deploying daemon keepalived.rgw.default.compute-2.cossgt on compute-2
Dec  6 01:32:17 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e73 e73: 3 total, 3 up, 3 in
Dec  6 01:32:17 np0005548731 ceph-osd[80147]: osd.2 pg_epoch: 73 pg[9.3( v 58'1159 (0'0,58'1159] local-lis/les=72/73 n=6 ec=62/51 lis/c=70/62 les/c/f=71/63/0 sis=72) [2] r=0 lpr=72 pi=[62,72)/1 crt=58'1159 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 01:32:17 np0005548731 ceph-osd[80147]: osd.2 pg_epoch: 73 pg[9.b( v 58'1159 (0'0,58'1159] local-lis/les=72/73 n=6 ec=62/51 lis/c=70/62 les/c/f=71/63/0 sis=72) [2] r=0 lpr=72 pi=[62,72)/1 crt=58'1159 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 01:32:17 np0005548731 ceph-osd[80147]: osd.2 pg_epoch: 73 pg[9.f( v 58'1159 (0'0,58'1159] local-lis/les=72/73 n=6 ec=62/51 lis/c=70/62 les/c/f=71/63/0 sis=72) [2] r=0 lpr=72 pi=[62,72)/1 crt=58'1159 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 01:32:17 np0005548731 ceph-osd[80147]: osd.2 pg_epoch: 73 pg[9.7( v 58'1159 (0'0,58'1159] local-lis/les=72/73 n=6 ec=62/51 lis/c=70/62 les/c/f=71/63/0 sis=72) [2] r=0 lpr=72 pi=[62,72)/1 crt=58'1159 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 01:32:17 np0005548731 ceph-osd[80147]: osd.2 pg_epoch: 73 pg[9.13( v 58'1159 (0'0,58'1159] local-lis/les=72/73 n=5 ec=62/51 lis/c=70/62 les/c/f=71/63/0 sis=72) [2] r=0 lpr=72 pi=[62,72)/1 crt=58'1159 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 01:32:17 np0005548731 ceph-osd[80147]: osd.2 pg_epoch: 73 pg[9.1f( v 58'1159 (0'0,58'1159] local-lis/les=72/73 n=5 ec=62/51 lis/c=70/62 les/c/f=71/63/0 sis=72) [2] r=0 lpr=72 pi=[62,72)/1 crt=58'1159 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 01:32:17 np0005548731 ceph-osd[80147]: osd.2 pg_epoch: 73 pg[9.17( v 58'1159 (0'0,58'1159] local-lis/les=72/73 n=5 ec=62/51 lis/c=70/62 les/c/f=71/63/0 sis=72) [2] r=0 lpr=72 pi=[62,72)/1 crt=58'1159 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 01:32:17 np0005548731 ceph-osd[80147]: osd.2 pg_epoch: 73 pg[9.1b( v 58'1159 (0'0,58'1159] local-lis/les=72/73 n=5 ec=62/51 lis/c=70/62 les/c/f=71/63/0 sis=72) [2] r=0 lpr=72 pi=[62,72)/1 crt=58'1159 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 01:32:18 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e74 e74: 3 total, 3 up, 3 in
Dec  6 01:32:18 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:32:18 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:32:18 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:32:18.465 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:32:18 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e74 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  6 01:32:19 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e75 e75: 3 total, 3 up, 3 in
Dec  6 01:32:19 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:32:19 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000029s ======
Dec  6 01:32:19 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:32:19.527 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Dec  6 01:32:20 np0005548731 ceph-osd[80147]: log_channel(cluster) log [DBG] : 2.18 scrub starts
Dec  6 01:32:20 np0005548731 ceph-osd[80147]: log_channel(cluster) log [DBG] : 2.18 scrub ok
Dec  6 01:32:20 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:32:20 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:32:20 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:32:20.588 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:32:20 np0005548731 podman[83118]: 2025-12-06 06:32:20.918352784 +0000 UTC m=+4.078861828 container create 34dae9eb423db1d79e6172d35937c0cf17c474eb088a80690369e68951cb3027 (image=quay.io/ceph/keepalived:2.2.4, name=distracted_gould, io.buildah.version=1.28.2, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, vcs-type=git, description=keepalived for Ceph, summary=Provides keepalived on RHEL 9 for Ceph., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, name=keepalived, com.redhat.component=keepalived-container, io.openshift.expose-services=, build-date=2023-02-22T09:23:20, release=1793, vendor=Red Hat, Inc., io.k8s.display-name=Keepalived on RHEL 9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=Ceph keepalived, version=2.2.4, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793)
Dec  6 01:32:20 np0005548731 systemd[1]: Started libpod-conmon-34dae9eb423db1d79e6172d35937c0cf17c474eb088a80690369e68951cb3027.scope.
Dec  6 01:32:20 np0005548731 systemd[1]: Started libcrun container.
Dec  6 01:32:20 np0005548731 podman[83118]: 2025-12-06 06:32:20.898874958 +0000 UTC m=+4.059384022 image pull 4a3a1ff181d97c6dcfa9138ad76eb99fa2c1e840298461d5a7a56133bc05b9a2 quay.io/ceph/keepalived:2.2.4
Dec  6 01:32:20 np0005548731 podman[83118]: 2025-12-06 06:32:20.990302201 +0000 UTC m=+4.150811265 container init 34dae9eb423db1d79e6172d35937c0cf17c474eb088a80690369e68951cb3027 (image=quay.io/ceph/keepalived:2.2.4, name=distracted_gould, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=Ceph keepalived, summary=Provides keepalived on RHEL 9 for Ceph., io.openshift.expose-services=, distribution-scope=public, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.display-name=Keepalived on RHEL 9, io.buildah.version=1.28.2, version=2.2.4, vcs-type=git, description=keepalived for Ceph, build-date=2023-02-22T09:23:20, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, name=keepalived, com.redhat.component=keepalived-container, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, release=1793, architecture=x86_64)
Dec  6 01:32:20 np0005548731 podman[83118]: 2025-12-06 06:32:20.997699539 +0000 UTC m=+4.158208583 container start 34dae9eb423db1d79e6172d35937c0cf17c474eb088a80690369e68951cb3027 (image=quay.io/ceph/keepalived:2.2.4, name=distracted_gould, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, name=keepalived, io.openshift.expose-services=, release=1793, distribution-scope=public, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.tags=Ceph keepalived, description=keepalived for Ceph, build-date=2023-02-22T09:23:20, vendor=Red Hat, Inc., io.k8s.display-name=Keepalived on RHEL 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.28.2, version=2.2.4, vcs-type=git, summary=Provides keepalived on RHEL 9 for Ceph., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, com.redhat.component=keepalived-container, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9)
Dec  6 01:32:21 np0005548731 podman[83118]: 2025-12-06 06:32:21.001704108 +0000 UTC m=+4.162213162 container attach 34dae9eb423db1d79e6172d35937c0cf17c474eb088a80690369e68951cb3027 (image=quay.io/ceph/keepalived:2.2.4, name=distracted_gould, architecture=x86_64, vcs-type=git, description=keepalived for Ceph, name=keepalived, com.redhat.component=keepalived-container, distribution-scope=public, version=2.2.4, build-date=2023-02-22T09:23:20, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.display-name=Keepalived on RHEL 9, io.openshift.tags=Ceph keepalived, io.buildah.version=1.28.2, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides keepalived on RHEL 9 for Ceph., io.openshift.expose-services=, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, release=1793)
Dec  6 01:32:21 np0005548731 distracted_gould[83218]: 0 0
Dec  6 01:32:21 np0005548731 systemd[1]: libpod-34dae9eb423db1d79e6172d35937c0cf17c474eb088a80690369e68951cb3027.scope: Deactivated successfully.
Dec  6 01:32:21 np0005548731 podman[83118]: 2025-12-06 06:32:21.005963514 +0000 UTC m=+4.166472558 container died 34dae9eb423db1d79e6172d35937c0cf17c474eb088a80690369e68951cb3027 (image=quay.io/ceph/keepalived:2.2.4, name=distracted_gould, io.openshift.tags=Ceph keepalived, io.openshift.expose-services=, build-date=2023-02-22T09:23:20, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, description=keepalived for Ceph, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vcs-type=git, name=keepalived, com.redhat.component=keepalived-container, summary=Provides keepalived on RHEL 9 for Ceph., vendor=Red Hat, Inc., distribution-scope=public, io.k8s.display-name=Keepalived on RHEL 9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.28.2, version=2.2.4, architecture=x86_64, release=1793)
Dec  6 01:32:21 np0005548731 systemd[1]: var-lib-containers-storage-overlay-06dbf0088586f270a648ab120362e3f2713f81130b3da30270f504a072f98919-merged.mount: Deactivated successfully.
Dec  6 01:32:21 np0005548731 podman[83118]: 2025-12-06 06:32:21.048899753 +0000 UTC m=+4.209408797 container remove 34dae9eb423db1d79e6172d35937c0cf17c474eb088a80690369e68951cb3027 (image=quay.io/ceph/keepalived:2.2.4, name=distracted_gould, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, release=1793, io.buildah.version=1.28.2, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, version=2.2.4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vendor=Red Hat, Inc., build-date=2023-02-22T09:23:20, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=Ceph keepalived, vcs-type=git, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=keepalived, description=keepalived for Ceph, io.k8s.display-name=Keepalived on RHEL 9, architecture=x86_64, summary=Provides keepalived on RHEL 9 for Ceph., com.redhat.component=keepalived-container)
Dec  6 01:32:21 np0005548731 systemd[1]: libpod-conmon-34dae9eb423db1d79e6172d35937c0cf17c474eb088a80690369e68951cb3027.scope: Deactivated successfully.
Dec  6 01:32:21 np0005548731 systemd[1]: Reloading.
Dec  6 01:32:21 np0005548731 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  6 01:32:21 np0005548731 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  6 01:32:21 np0005548731 systemd[1]: Reloading.
Dec  6 01:32:21 np0005548731 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  6 01:32:21 np0005548731 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  6 01:32:21 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:32:21 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:32:21 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:32:21.530 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:32:21 np0005548731 systemd[1]: Starting Ceph keepalived.rgw.default.compute-2.cossgt for 40a1bae4-cf76-5610-8dab-c75116dfe0bb...
Dec  6 01:32:21 np0005548731 podman[83366]: 2025-12-06 06:32:21.834894662 +0000 UTC m=+0.044113565 container create 60f905acca99db805341691264f7f91f9f4f43c91aa728d21396595dfca7cf39 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-keepalived-rgw-default-compute-2-cossgt, com.redhat.component=keepalived-container, name=keepalived, vcs-type=git, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1793, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Keepalived on RHEL 9, version=2.2.4, description=keepalived for Ceph, build-date=2023-02-22T09:23:20, distribution-scope=public, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.openshift.expose-services=, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.openshift.tags=Ceph keepalived, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, summary=Provides keepalived on RHEL 9 for Ceph., io.buildah.version=1.28.2)
Dec  6 01:32:21 np0005548731 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/69aa7d9c5d38707a2517f53167a3dbd5539990fc11241965088953c9dffbfdaf/merged/etc/keepalived/keepalived.conf supports timestamps until 2038 (0x7fffffff)
Dec  6 01:32:21 np0005548731 podman[83366]: 2025-12-06 06:32:21.890206968 +0000 UTC m=+0.099425891 container init 60f905acca99db805341691264f7f91f9f4f43c91aa728d21396595dfca7cf39 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-keepalived-rgw-default-compute-2-cossgt, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.openshift.tags=Ceph keepalived, vcs-type=git, description=keepalived for Ceph, release=1793, name=keepalived, build-date=2023-02-22T09:23:20, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=keepalived-container, version=2.2.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., io.openshift.expose-services=, distribution-scope=public, architecture=x86_64, io.k8s.display-name=Keepalived on RHEL 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, summary=Provides keepalived on RHEL 9 for Ceph., io.buildah.version=1.28.2)
Dec  6 01:32:21 np0005548731 podman[83366]: 2025-12-06 06:32:21.89673496 +0000 UTC m=+0.105953863 container start 60f905acca99db805341691264f7f91f9f4f43c91aa728d21396595dfca7cf39 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-keepalived-rgw-default-compute-2-cossgt, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vendor=Red Hat, Inc., io.k8s.display-name=Keepalived on RHEL 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.buildah.version=1.28.2, vcs-type=git, description=keepalived for Ceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, com.redhat.component=keepalived-container, name=keepalived, build-date=2023-02-22T09:23:20, io.openshift.tags=Ceph keepalived, architecture=x86_64, version=2.2.4, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides keepalived on RHEL 9 for Ceph., vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, release=1793, distribution-scope=public)
Dec  6 01:32:21 np0005548731 bash[83366]: 60f905acca99db805341691264f7f91f9f4f43c91aa728d21396595dfca7cf39
Dec  6 01:32:21 np0005548731 podman[83366]: 2025-12-06 06:32:21.81755924 +0000 UTC m=+0.026778173 image pull 4a3a1ff181d97c6dcfa9138ad76eb99fa2c1e840298461d5a7a56133bc05b9a2 quay.io/ceph/keepalived:2.2.4
Dec  6 01:32:21 np0005548731 systemd[1]: Started Ceph keepalived.rgw.default.compute-2.cossgt for 40a1bae4-cf76-5610-8dab-c75116dfe0bb.
Dec  6 01:32:21 np0005548731 ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-keepalived-rgw-default-compute-2-cossgt[83381]: Sat Dec  6 06:32:21 2025: Starting Keepalived v2.2.4 (08/21,2021)
Dec  6 01:32:21 np0005548731 ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-keepalived-rgw-default-compute-2-cossgt[83381]: Sat Dec  6 06:32:21 2025: Running on Linux 5.14.0-645.el9.x86_64 #1 SMP PREEMPT_DYNAMIC Fri Nov 28 14:01:17 UTC 2025 (built for Linux 5.14.0)
Dec  6 01:32:21 np0005548731 ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-keepalived-rgw-default-compute-2-cossgt[83381]: Sat Dec  6 06:32:21 2025: Command line: '/usr/sbin/keepalived' '-n' '-l' '-f' '/etc/keepalived/keepalived.conf'
Dec  6 01:32:21 np0005548731 ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-keepalived-rgw-default-compute-2-cossgt[83381]: Sat Dec  6 06:32:21 2025: Configuration file /etc/keepalived/keepalived.conf
Dec  6 01:32:21 np0005548731 ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-keepalived-rgw-default-compute-2-cossgt[83381]: Sat Dec  6 06:32:21 2025: NOTICE: setting config option max_auto_priority should result in better keepalived performance
Dec  6 01:32:21 np0005548731 ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-keepalived-rgw-default-compute-2-cossgt[83381]: Sat Dec  6 06:32:21 2025: Starting VRRP child process, pid=4
Dec  6 01:32:21 np0005548731 ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-keepalived-rgw-default-compute-2-cossgt[83381]: Sat Dec  6 06:32:21 2025: Startup complete
Dec  6 01:32:21 np0005548731 ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-keepalived-rgw-default-compute-2-cossgt[83381]: Sat Dec  6 06:32:21 2025: (VI_0) Entering BACKUP STATE (init)
Dec  6 01:32:21 np0005548731 ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-keepalived-rgw-default-compute-2-cossgt[83381]: Sat Dec  6 06:32:21 2025: VRRP_Script(check_backend) succeeded
Dec  6 01:32:22 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:32:22 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000029s ======
Dec  6 01:32:22 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:32:22.590 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Dec  6 01:32:23 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 01:32:23 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]: dispatch
Dec  6 01:32:23 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 01:32:23 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 01:32:23 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e76 e76: 3 total, 3 up, 3 in
Dec  6 01:32:23 np0005548731 ceph-osd[80147]: osd.2 pg_epoch: 76 pg[9.1d( empty local-lis/les=0/0 n=0 ec=62/51 lis/c=74/74 les/c/f=75/75/0 sis=76) [2] r=0 lpr=76 pi=[74,76)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 01:32:23 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:32:23 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000029s ======
Dec  6 01:32:23 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:32:23.532 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Dec  6 01:32:23 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e76 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  6 01:32:23 np0005548731 systemd-logind[794]: New session 34 of user zuul.
Dec  6 01:32:23 np0005548731 systemd[1]: Started Session 34 of User zuul.
Dec  6 01:32:24 np0005548731 ceph-osd[80147]: log_channel(cluster) log [DBG] : 2.1d deep-scrub starts
Dec  6 01:32:24 np0005548731 ceph-osd[80147]: log_channel(cluster) log [DBG] : 2.1d deep-scrub ok
Dec  6 01:32:24 np0005548731 ceph-mon[77458]: 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Dec  6 01:32:24 np0005548731 ceph-mon[77458]: 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Dec  6 01:32:24 np0005548731 ceph-mon[77458]: Deploying daemon keepalived.rgw.default.compute-0.fknpoc on compute-0
Dec  6 01:32:24 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]': finished
Dec  6 01:32:24 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:32:24 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:32:24 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:32:24.601 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:32:24 np0005548731 python3.9[83543]: ansible-ansible.legacy.ping Invoked with data=pong
Dec  6 01:32:24 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e77 e77: 3 total, 3 up, 3 in
Dec  6 01:32:24 np0005548731 ceph-osd[80147]: osd.2 pg_epoch: 77 pg[9.1d( empty local-lis/les=0/0 n=0 ec=62/51 lis/c=74/74 les/c/f=75/75/0 sis=77) [2]/[1] r=-1 lpr=77 pi=[74,77)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  6 01:32:24 np0005548731 ceph-osd[80147]: osd.2 pg_epoch: 77 pg[9.1d( empty local-lis/les=0/0 n=0 ec=62/51 lis/c=74/74 les/c/f=75/75/0 sis=77) [2]/[1] r=-1 lpr=77 pi=[74,77)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec  6 01:32:24 np0005548731 ceph-osd[80147]: log_channel(cluster) log [DBG] : 3.e scrub starts
Dec  6 01:32:24 np0005548731 ceph-osd[80147]: log_channel(cluster) log [DBG] : 3.e scrub ok
Dec  6 01:32:25 np0005548731 ceph-osd[80147]: osd.2 pg_epoch: 76 pg[9.15( empty local-lis/les=0/0 n=0 ec=62/51 lis/c=62/62 les/c/f=63/63/0 sis=76) [2] r=0 lpr=76 pi=[62,76)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 01:32:25 np0005548731 ceph-osd[80147]: osd.2 pg_epoch: 76 pg[9.d( empty local-lis/les=0/0 n=0 ec=62/51 lis/c=62/62 les/c/f=63/63/0 sis=76) [2] r=0 lpr=76 pi=[62,76)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 01:32:25 np0005548731 ceph-osd[80147]: osd.2 pg_epoch: 76 pg[9.5( empty local-lis/les=0/0 n=0 ec=62/51 lis/c=62/62 les/c/f=63/63/0 sis=76) [2] r=0 lpr=76 pi=[62,76)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 01:32:25 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]: dispatch
Dec  6 01:32:25 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:32:25 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000029s ======
Dec  6 01:32:25 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:32:25.534 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Dec  6 01:32:25 np0005548731 ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-keepalived-rgw-default-compute-2-cossgt[83381]: Sat Dec  6 06:32:25 2025: (VI_0) Entering MASTER STATE
Dec  6 01:32:25 np0005548731 python3.9[83717]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  6 01:32:25 np0005548731 ceph-osd[80147]: log_channel(cluster) log [DBG] : 2.1c scrub starts
Dec  6 01:32:26 np0005548731 ceph-osd[80147]: log_channel(cluster) log [DBG] : 2.1c scrub ok
Dec  6 01:32:26 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:32:26 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:32:26 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:32:26.605 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:32:27 np0005548731 python3.9[83874]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  6 01:32:27 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:32:27 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:32:27 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:32:27.537 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:32:27 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e78 e78: 3 total, 3 up, 3 in
Dec  6 01:32:27 np0005548731 ceph-osd[80147]: osd.2 pg_epoch: 78 pg[9.d( empty local-lis/les=0/0 n=0 ec=62/51 lis/c=62/62 les/c/f=63/63/0 sis=78) [2]/[0] r=-1 lpr=78 pi=[62,78)/2 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  6 01:32:27 np0005548731 ceph-osd[80147]: osd.2 pg_epoch: 78 pg[9.d( empty local-lis/les=0/0 n=0 ec=62/51 lis/c=62/62 les/c/f=63/63/0 sis=78) [2]/[0] r=-1 lpr=78 pi=[62,78)/2 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec  6 01:32:27 np0005548731 ceph-osd[80147]: osd.2 pg_epoch: 78 pg[9.5( empty local-lis/les=0/0 n=0 ec=62/51 lis/c=62/62 les/c/f=63/63/0 sis=78) [2]/[0] r=-1 lpr=78 pi=[62,78)/2 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  6 01:32:27 np0005548731 ceph-osd[80147]: osd.2 pg_epoch: 78 pg[9.15( empty local-lis/les=0/0 n=0 ec=62/51 lis/c=62/62 les/c/f=63/63/0 sis=78) [2]/[0] r=-1 lpr=78 pi=[62,78)/2 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  6 01:32:27 np0005548731 ceph-osd[80147]: osd.2 pg_epoch: 78 pg[9.5( empty local-lis/les=0/0 n=0 ec=62/51 lis/c=62/62 les/c/f=63/63/0 sis=78) [2]/[0] r=-1 lpr=78 pi=[62,78)/2 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec  6 01:32:27 np0005548731 ceph-osd[80147]: osd.2 pg_epoch: 78 pg[9.15( empty local-lis/les=0/0 n=0 ec=62/51 lis/c=62/62 les/c/f=63/63/0 sis=78) [2]/[0] r=-1 lpr=78 pi=[62,78)/2 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec  6 01:32:28 np0005548731 python3.9[84028]: ansible-ansible.builtin.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  6 01:32:28 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:32:28 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:32:28 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:32:28.608 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:32:28 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e78 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  6 01:32:29 np0005548731 python3.9[84232]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/log/journal setype=var_log_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  6 01:32:29 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e79 e79: 3 total, 3 up, 3 in
Dec  6 01:32:29 np0005548731 ceph-osd[80147]: osd.2 pg_epoch: 79 pg[9.1d( v 58'1159 (0'0,58'1159] local-lis/les=0/0 n=5 ec=62/51 lis/c=77/74 les/c/f=78/75/0 sis=79) [2] r=0 lpr=79 pi=[74,79)/1 luod=0'0 crt=58'1159 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec  6 01:32:29 np0005548731 ceph-osd[80147]: osd.2 pg_epoch: 79 pg[9.1d( v 58'1159 (0'0,58'1159] local-lis/les=0/0 n=5 ec=62/51 lis/c=77/74 les/c/f=78/75/0 sis=79) [2] r=0 lpr=79 pi=[74,79)/1 crt=58'1159 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 01:32:29 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:32:29 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000029s ======
Dec  6 01:32:29 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:32:29.539 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Dec  6 01:32:29 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]': finished
Dec  6 01:32:29 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 01:32:29 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 01:32:29 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 01:32:29 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 01:32:29 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 01:32:29 np0005548731 podman[84605]: 2025-12-06 06:32:29.8673075 +0000 UTC m=+0.063441526 container exec 541e7276f6038dd900483907d1613bdfcbf99ea3714cf53a920a7189986fab54 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-mon-compute-2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Dec  6 01:32:29 np0005548731 python3.9[84587]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/config-data/ansible-generated recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  6 01:32:30 np0005548731 podman[84605]: 2025-12-06 06:32:30.201980155 +0000 UTC m=+0.398114191 container exec_died 541e7276f6038dd900483907d1613bdfcbf99ea3714cf53a920a7189986fab54 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-mon-compute-2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  6 01:32:30 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:32:30 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000029s ======
Dec  6 01:32:30 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:32:30.611 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Dec  6 01:32:30 np0005548731 python3.9[84842]: ansible-ansible.builtin.service_facts Invoked
Dec  6 01:32:30 np0005548731 network[84924]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec  6 01:32:30 np0005548731 network[84927]: 'network-scripts' will be removed from distribution in near future.
Dec  6 01:32:30 np0005548731 network[84928]: It is advised to switch to 'NetworkManager' instead for network management.
Dec  6 01:32:31 np0005548731 podman[84935]: 2025-12-06 06:32:31.017513968 +0000 UTC m=+0.166652488 container exec 42755d5ddab81b86d84b40c9464a8af637e2d196c1a098a61593efb6a9875167 (image=quay.io/ceph/haproxy:2.3, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-haproxy-rgw-default-compute-2-nyemkw)
Dec  6 01:32:31 np0005548731 podman[84957]: 2025-12-06 06:32:31.16980109 +0000 UTC m=+0.135439055 container exec_died 42755d5ddab81b86d84b40c9464a8af637e2d196c1a098a61593efb6a9875167 (image=quay.io/ceph/haproxy:2.3, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-haproxy-rgw-default-compute-2-nyemkw)
Dec  6 01:32:31 np0005548731 podman[84935]: 2025-12-06 06:32:31.17756584 +0000 UTC m=+0.326704340 container exec_died 42755d5ddab81b86d84b40c9464a8af637e2d196c1a098a61593efb6a9875167 (image=quay.io/ceph/haproxy:2.3, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-haproxy-rgw-default-compute-2-nyemkw)
Dec  6 01:32:31 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:32:31 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000029s ======
Dec  6 01:32:31 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:32:31.541 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Dec  6 01:32:31 np0005548731 podman[85009]: 2025-12-06 06:32:31.69303126 +0000 UTC m=+0.059088278 container exec 60f905acca99db805341691264f7f91f9f4f43c91aa728d21396595dfca7cf39 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-keepalived-rgw-default-compute-2-cossgt, release=1793, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, architecture=x86_64, build-date=2023-02-22T09:23:20, description=keepalived for Ceph, io.buildah.version=1.28.2, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, version=2.2.4, distribution-scope=public, vcs-type=git, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.component=keepalived-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=keepalived, io.k8s.display-name=Keepalived on RHEL 9, summary=Provides keepalived on RHEL 9 for Ceph., io.openshift.tags=Ceph keepalived)
Dec  6 01:32:31 np0005548731 podman[85009]: 2025-12-06 06:32:31.714951288 +0000 UTC m=+0.081008336 container exec_died 60f905acca99db805341691264f7f91f9f4f43c91aa728d21396595dfca7cf39 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-keepalived-rgw-default-compute-2-cossgt, io.buildah.version=1.28.2, io.k8s.display-name=Keepalived on RHEL 9, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, build-date=2023-02-22T09:23:20, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=2.2.4, description=keepalived for Ceph, distribution-scope=public, architecture=x86_64, summary=Provides keepalived on RHEL 9 for Ceph., release=1793, com.redhat.component=keepalived-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., name=keepalived, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.openshift.tags=Ceph keepalived, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, vcs-type=git, io.openshift.expose-services=)
Dec  6 01:32:31 np0005548731 ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-keepalived-rgw-default-compute-2-cossgt[83381]: Sat Dec  6 06:32:31 2025: (VI_0) Master received advert from 192.168.122.100 with higher priority 100, ours 90
Dec  6 01:32:31 np0005548731 ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-keepalived-rgw-default-compute-2-cossgt[83381]: Sat Dec  6 06:32:31 2025: (VI_0) Entering BACKUP STATE
Dec  6 01:32:32 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e80 e80: 3 total, 3 up, 3 in
Dec  6 01:32:32 np0005548731 ceph-osd[80147]: osd.2 pg_epoch: 80 pg[9.15( v 58'1159 (0'0,58'1159] local-lis/les=0/0 n=5 ec=62/51 lis/c=78/62 les/c/f=79/63/0 sis=80) [2] r=0 lpr=80 pi=[62,80)/2 luod=0'0 crt=58'1159 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec  6 01:32:32 np0005548731 ceph-osd[80147]: osd.2 pg_epoch: 80 pg[9.15( v 58'1159 (0'0,58'1159] local-lis/les=0/0 n=5 ec=62/51 lis/c=78/62 les/c/f=79/63/0 sis=80) [2] r=0 lpr=80 pi=[62,80)/2 crt=58'1159 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 01:32:32 np0005548731 ceph-osd[80147]: osd.2 pg_epoch: 80 pg[9.d( v 58'1159 (0'0,58'1159] local-lis/les=0/0 n=6 ec=62/51 lis/c=78/62 les/c/f=79/63/0 sis=80) [2] r=0 lpr=80 pi=[62,80)/2 luod=0'0 crt=58'1159 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec  6 01:32:32 np0005548731 ceph-osd[80147]: osd.2 pg_epoch: 80 pg[9.5( v 58'1159 (0'0,58'1159] local-lis/les=0/0 n=6 ec=62/51 lis/c=78/62 les/c/f=79/63/0 sis=80) [2] r=0 lpr=80 pi=[62,80)/2 luod=0'0 crt=58'1159 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec  6 01:32:32 np0005548731 ceph-osd[80147]: osd.2 pg_epoch: 80 pg[9.d( v 58'1159 (0'0,58'1159] local-lis/les=0/0 n=6 ec=62/51 lis/c=78/62 les/c/f=79/63/0 sis=80) [2] r=0 lpr=80 pi=[62,80)/2 crt=58'1159 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 01:32:32 np0005548731 ceph-osd[80147]: osd.2 pg_epoch: 80 pg[9.5( v 58'1159 (0'0,58'1159] local-lis/les=0/0 n=6 ec=62/51 lis/c=78/62 les/c/f=79/63/0 sis=80) [2] r=0 lpr=80 pi=[62,80)/2 crt=58'1159 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 01:32:32 np0005548731 ceph-osd[80147]: osd.2 pg_epoch: 80 pg[9.1d( v 58'1159 (0'0,58'1159] local-lis/les=79/80 n=5 ec=62/51 lis/c=77/74 les/c/f=78/75/0 sis=79) [2] r=0 lpr=79 pi=[74,79)/1 crt=58'1159 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 01:32:32 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:32:32 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:32:32 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:32:32.615 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:32:33 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:32:33 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000029s ======
Dec  6 01:32:33 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:32:33.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Dec  6 01:32:33 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e80 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  6 01:32:34 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e81 e81: 3 total, 3 up, 3 in
Dec  6 01:32:34 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 01:32:34 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 01:32:34 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 01:32:34 np0005548731 ceph-osd[80147]: osd.2 pg_epoch: 81 pg[9.d( v 58'1159 (0'0,58'1159] local-lis/les=80/81 n=6 ec=62/51 lis/c=78/62 les/c/f=79/63/0 sis=80) [2] r=0 lpr=80 pi=[62,80)/2 crt=58'1159 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 01:32:34 np0005548731 ceph-osd[80147]: osd.2 pg_epoch: 81 pg[9.15( v 58'1159 (0'0,58'1159] local-lis/les=80/81 n=5 ec=62/51 lis/c=78/62 les/c/f=79/63/0 sis=80) [2] r=0 lpr=80 pi=[62,80)/2 crt=58'1159 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 01:32:34 np0005548731 ceph-osd[80147]: osd.2 pg_epoch: 81 pg[9.5( v 58'1159 (0'0,58'1159] local-lis/les=80/81 n=6 ec=62/51 lis/c=78/62 les/c/f=79/63/0 sis=80) [2] r=0 lpr=80 pi=[62,80)/2 crt=58'1159 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 01:32:34 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:32:34 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000028s ======
Dec  6 01:32:34 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:32:34.618 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec  6 01:32:35 np0005548731 python3.9[85407]: ansible-ansible.builtin.lineinfile Invoked with line=cloud-init=disabled path=/proc/cmdline state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 01:32:35 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 01:32:35 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 01:32:35 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 01:32:35 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 01:32:35 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:32:35 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:32:35 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:32:35.545 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:32:35 np0005548731 podman[85702]: 2025-12-06 06:32:35.812047974 +0000 UTC m=+0.039136218 container create 843bd46681d212e02119028b262821b2d125add09de2b5d8c7052e403718cac4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_moser, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  6 01:32:35 np0005548731 systemd[1]: Started libpod-conmon-843bd46681d212e02119028b262821b2d125add09de2b5d8c7052e403718cac4.scope.
Dec  6 01:32:35 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e82 e82: 3 total, 3 up, 3 in
Dec  6 01:32:35 np0005548731 systemd[1]: Started libcrun container.
Dec  6 01:32:35 np0005548731 podman[85702]: 2025-12-06 06:32:35.795404042 +0000 UTC m=+0.022492316 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  6 01:32:35 np0005548731 podman[85702]: 2025-12-06 06:32:35.894197633 +0000 UTC m=+0.121285887 container init 843bd46681d212e02119028b262821b2d125add09de2b5d8c7052e403718cac4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_moser, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  6 01:32:35 np0005548731 podman[85702]: 2025-12-06 06:32:35.900395616 +0000 UTC m=+0.127483850 container start 843bd46681d212e02119028b262821b2d125add09de2b5d8c7052e403718cac4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_moser, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True)
Dec  6 01:32:35 np0005548731 podman[85702]: 2025-12-06 06:32:35.904563949 +0000 UTC m=+0.131652213 container attach 843bd46681d212e02119028b262821b2d125add09de2b5d8c7052e403718cac4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_moser, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True)
Dec  6 01:32:35 np0005548731 practical_moser[85728]: 167 167
Dec  6 01:32:35 np0005548731 systemd[1]: libpod-843bd46681d212e02119028b262821b2d125add09de2b5d8c7052e403718cac4.scope: Deactivated successfully.
Dec  6 01:32:35 np0005548731 podman[85702]: 2025-12-06 06:32:35.920578322 +0000 UTC m=+0.147666576 container died 843bd46681d212e02119028b262821b2d125add09de2b5d8c7052e403718cac4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_moser, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  6 01:32:35 np0005548731 systemd[1]: var-lib-containers-storage-overlay-694fa768b4a60ffd94d3f3d60953af58f630fac9fedbd9bd6ad96c27d8733366-merged.mount: Deactivated successfully.
Dec  6 01:32:35 np0005548731 podman[85702]: 2025-12-06 06:32:35.969232501 +0000 UTC m=+0.196320745 container remove 843bd46681d212e02119028b262821b2d125add09de2b5d8c7052e403718cac4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_moser, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  6 01:32:35 np0005548731 systemd[1]: libpod-conmon-843bd46681d212e02119028b262821b2d125add09de2b5d8c7052e403718cac4.scope: Deactivated successfully.
Dec  6 01:32:36 np0005548731 python3.9[85717]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  6 01:32:36 np0005548731 podman[85755]: 2025-12-06 06:32:36.114499256 +0000 UTC m=+0.044397504 container create 0e800b537205e3a3bf4a279dd799dcb5f78cd6cf7242bddbb2458bec02daf7a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_leavitt, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  6 01:32:36 np0005548731 systemd[1]: Started libpod-conmon-0e800b537205e3a3bf4a279dd799dcb5f78cd6cf7242bddbb2458bec02daf7a0.scope.
Dec  6 01:32:36 np0005548731 systemd[1]: Started libcrun container.
Dec  6 01:32:36 np0005548731 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c580ed756670be6fb1f0eef820f94717a9787b97c350523cf43374366b404629/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  6 01:32:36 np0005548731 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c580ed756670be6fb1f0eef820f94717a9787b97c350523cf43374366b404629/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  6 01:32:36 np0005548731 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c580ed756670be6fb1f0eef820f94717a9787b97c350523cf43374366b404629/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  6 01:32:36 np0005548731 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c580ed756670be6fb1f0eef820f94717a9787b97c350523cf43374366b404629/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  6 01:32:36 np0005548731 podman[85755]: 2025-12-06 06:32:36.181730834 +0000 UTC m=+0.111629102 container init 0e800b537205e3a3bf4a279dd799dcb5f78cd6cf7242bddbb2458bec02daf7a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_leavitt, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  6 01:32:36 np0005548731 podman[85755]: 2025-12-06 06:32:36.189837664 +0000 UTC m=+0.119735912 container start 0e800b537205e3a3bf4a279dd799dcb5f78cd6cf7242bddbb2458bec02daf7a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_leavitt, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Dec  6 01:32:36 np0005548731 podman[85755]: 2025-12-06 06:32:36.094617608 +0000 UTC m=+0.024515866 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  6 01:32:36 np0005548731 podman[85755]: 2025-12-06 06:32:36.19345936 +0000 UTC m=+0.123357608 container attach 0e800b537205e3a3bf4a279dd799dcb5f78cd6cf7242bddbb2458bec02daf7a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_leavitt, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Dec  6 01:32:36 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:32:36 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:32:36 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:32:36.622 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:32:36 np0005548731 ceph-osd[80147]: log_channel(cluster) log [DBG] : 4.8 scrub starts
Dec  6 01:32:36 np0005548731 ceph-osd[80147]: log_channel(cluster) log [DBG] : 4.8 scrub ok
Dec  6 01:32:37 np0005548731 eloquent_leavitt[85776]: [
Dec  6 01:32:37 np0005548731 eloquent_leavitt[85776]:    {
Dec  6 01:32:37 np0005548731 eloquent_leavitt[85776]:        "available": false,
Dec  6 01:32:37 np0005548731 eloquent_leavitt[85776]:        "ceph_device": false,
Dec  6 01:32:37 np0005548731 eloquent_leavitt[85776]:        "device_id": "QEMU_DVD-ROM_QM00001",
Dec  6 01:32:37 np0005548731 eloquent_leavitt[85776]:        "lsm_data": {},
Dec  6 01:32:37 np0005548731 eloquent_leavitt[85776]:        "lvs": [],
Dec  6 01:32:37 np0005548731 eloquent_leavitt[85776]:        "path": "/dev/sr0",
Dec  6 01:32:37 np0005548731 eloquent_leavitt[85776]:        "rejected_reasons": [
Dec  6 01:32:37 np0005548731 eloquent_leavitt[85776]:            "Insufficient space (<5GB)",
Dec  6 01:32:37 np0005548731 eloquent_leavitt[85776]:            "Has a FileSystem"
Dec  6 01:32:37 np0005548731 eloquent_leavitt[85776]:        ],
Dec  6 01:32:37 np0005548731 eloquent_leavitt[85776]:        "sys_api": {
Dec  6 01:32:37 np0005548731 eloquent_leavitt[85776]:            "actuators": null,
Dec  6 01:32:37 np0005548731 eloquent_leavitt[85776]:            "device_nodes": "sr0",
Dec  6 01:32:37 np0005548731 eloquent_leavitt[85776]:            "devname": "sr0",
Dec  6 01:32:37 np0005548731 eloquent_leavitt[85776]:            "human_readable_size": "482.00 KB",
Dec  6 01:32:37 np0005548731 eloquent_leavitt[85776]:            "id_bus": "ata",
Dec  6 01:32:37 np0005548731 eloquent_leavitt[85776]:            "model": "QEMU DVD-ROM",
Dec  6 01:32:37 np0005548731 eloquent_leavitt[85776]:            "nr_requests": "2",
Dec  6 01:32:37 np0005548731 eloquent_leavitt[85776]:            "parent": "/dev/sr0",
Dec  6 01:32:37 np0005548731 eloquent_leavitt[85776]:            "partitions": {},
Dec  6 01:32:37 np0005548731 eloquent_leavitt[85776]:            "path": "/dev/sr0",
Dec  6 01:32:37 np0005548731 eloquent_leavitt[85776]:            "removable": "1",
Dec  6 01:32:37 np0005548731 eloquent_leavitt[85776]:            "rev": "2.5+",
Dec  6 01:32:37 np0005548731 eloquent_leavitt[85776]:            "ro": "0",
Dec  6 01:32:37 np0005548731 eloquent_leavitt[85776]:            "rotational": "1",
Dec  6 01:32:37 np0005548731 eloquent_leavitt[85776]:            "sas_address": "",
Dec  6 01:32:37 np0005548731 eloquent_leavitt[85776]:            "sas_device_handle": "",
Dec  6 01:32:37 np0005548731 eloquent_leavitt[85776]:            "scheduler_mode": "mq-deadline",
Dec  6 01:32:37 np0005548731 eloquent_leavitt[85776]:            "sectors": 0,
Dec  6 01:32:37 np0005548731 eloquent_leavitt[85776]:            "sectorsize": "2048",
Dec  6 01:32:37 np0005548731 eloquent_leavitt[85776]:            "size": 493568.0,
Dec  6 01:32:37 np0005548731 eloquent_leavitt[85776]:            "support_discard": "2048",
Dec  6 01:32:37 np0005548731 eloquent_leavitt[85776]:            "type": "disk",
Dec  6 01:32:37 np0005548731 eloquent_leavitt[85776]:            "vendor": "QEMU"
Dec  6 01:32:37 np0005548731 eloquent_leavitt[85776]:        }
Dec  6 01:32:37 np0005548731 eloquent_leavitt[85776]:    }
Dec  6 01:32:37 np0005548731 eloquent_leavitt[85776]: ]
Dec  6 01:32:37 np0005548731 systemd[1]: libpod-0e800b537205e3a3bf4a279dd799dcb5f78cd6cf7242bddbb2458bec02daf7a0.scope: Deactivated successfully.
Dec  6 01:32:37 np0005548731 systemd[1]: libpod-0e800b537205e3a3bf4a279dd799dcb5f78cd6cf7242bddbb2458bec02daf7a0.scope: Consumed 1.230s CPU time.
Dec  6 01:32:37 np0005548731 conmon[85776]: conmon 0e800b537205e3a3bf4a <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-0e800b537205e3a3bf4a279dd799dcb5f78cd6cf7242bddbb2458bec02daf7a0.scope/container/memory.events
Dec  6 01:32:37 np0005548731 podman[85755]: 2025-12-06 06:32:37.416733668 +0000 UTC m=+1.346631936 container died 0e800b537205e3a3bf4a279dd799dcb5f78cd6cf7242bddbb2458bec02daf7a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_leavitt, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  6 01:32:37 np0005548731 systemd[1]: var-lib-containers-storage-overlay-c580ed756670be6fb1f0eef820f94717a9787b97c350523cf43374366b404629-merged.mount: Deactivated successfully.
Dec  6 01:32:37 np0005548731 python3.9[85956]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  6 01:32:37 np0005548731 podman[85755]: 2025-12-06 06:32:37.480299677 +0000 UTC m=+1.410197915 container remove 0e800b537205e3a3bf4a279dd799dcb5f78cd6cf7242bddbb2458bec02daf7a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_leavitt, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Dec  6 01:32:37 np0005548731 systemd[1]: libpod-conmon-0e800b537205e3a3bf4a279dd799dcb5f78cd6cf7242bddbb2458bec02daf7a0.scope: Deactivated successfully.
Dec  6 01:32:37 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:32:37 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000028s ======
Dec  6 01:32:37 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:32:37.547 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec  6 01:32:37 np0005548731 ceph-osd[80147]: log_channel(cluster) log [DBG] : 11.17 scrub starts
Dec  6 01:32:37 np0005548731 ceph-osd[80147]: log_channel(cluster) log [DBG] : 11.17 scrub ok
Dec  6 01:32:37 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e83 e83: 3 total, 3 up, 3 in
Dec  6 01:32:38 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e83 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  6 01:32:38 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:32:38 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:32:38 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:32:38.625 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:32:38 np0005548731 python3.9[87408]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec  6 01:32:38 np0005548731 ceph-osd[80147]: log_channel(cluster) log [DBG] : 8.2 scrub starts
Dec  6 01:32:38 np0005548731 ceph-osd[80147]: log_channel(cluster) log [DBG] : 8.2 scrub ok
Dec  6 01:32:39 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 01:32:39 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 01:32:39 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Dec  6 01:32:39 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  6 01:32:39 np0005548731 ceph-mon[77458]: Updating compute-0:/etc/ceph/ceph.conf
Dec  6 01:32:39 np0005548731 ceph-mon[77458]: Updating compute-1:/etc/ceph/ceph.conf
Dec  6 01:32:39 np0005548731 ceph-mon[77458]: Updating compute-2:/etc/ceph/ceph.conf
Dec  6 01:32:39 np0005548731 python3.9[87889]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec  6 01:32:39 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:32:39 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000029s ======
Dec  6 01:32:39 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:32:39.549 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Dec  6 01:32:40 np0005548731 ceph-mon[77458]: Updating compute-1:/var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/config/ceph.conf
Dec  6 01:32:40 np0005548731 ceph-mon[77458]: Updating compute-2:/var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/config/ceph.conf
Dec  6 01:32:40 np0005548731 ceph-mon[77458]: Updating compute-0:/var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/config/ceph.conf
Dec  6 01:32:40 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:32:40 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:32:40 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:32:40.628 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:32:41 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 01:32:41 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 01:32:41 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 01:32:41 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 01:32:41 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 01:32:41 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 01:32:41 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 01:32:41 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  6 01:32:41 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:32:41 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000028s ======
Dec  6 01:32:41 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:32:41.551 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec  6 01:32:41 np0005548731 ceph-osd[80147]: log_channel(cluster) log [DBG] : 8.16 scrub starts
Dec  6 01:32:41 np0005548731 ceph-osd[80147]: log_channel(cluster) log [DBG] : 8.16 scrub ok
Dec  6 01:32:42 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e84 e84: 3 total, 3 up, 3 in
Dec  6 01:32:42 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]: dispatch
Dec  6 01:32:42 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:32:42 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:32:42 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:32:42.631 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:32:43 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:32:43 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:32:43 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:32:43.554 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:32:43 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e84 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  6 01:32:43 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]': finished
Dec  6 01:32:44 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:32:44 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000028s ======
Dec  6 01:32:44 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:32:44.633 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec  6 01:32:45 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e85 e85: 3 total, 3 up, 3 in
Dec  6 01:32:45 np0005548731 ceph-osd[80147]: osd.2 pg_epoch: 85 pg[9.8( empty local-lis/les=0/0 n=0 ec=62/51 lis/c=74/74 les/c/f=75/75/0 sis=85) [2] r=0 lpr=85 pi=[74,85)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 01:32:45 np0005548731 ceph-osd[80147]: osd.2 pg_epoch: 85 pg[9.18( empty local-lis/les=0/0 n=0 ec=62/51 lis/c=62/62 les/c/f=63/63/0 sis=85) [2] r=0 lpr=85 pi=[62,85)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 01:32:45 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]: dispatch
Dec  6 01:32:45 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:32:45 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000029s ======
Dec  6 01:32:45 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:32:45.554 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Dec  6 01:32:45 np0005548731 ceph-osd[80147]: log_channel(cluster) log [DBG] : 8.9 scrub starts
Dec  6 01:32:45 np0005548731 ceph-osd[80147]: log_channel(cluster) log [DBG] : 8.9 scrub ok
Dec  6 01:32:46 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:32:46 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000028s ======
Dec  6 01:32:46 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:32:46.637 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec  6 01:32:47 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:32:47 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:32:47 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:32:47.556 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:32:48 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:32:48 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:32:48 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:32:48.641 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:32:49 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e85 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  6 01:32:49 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]': finished
Dec  6 01:32:49 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]: dispatch
Dec  6 01:32:49 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e86 e86: 3 total, 3 up, 3 in
Dec  6 01:32:49 np0005548731 ceph-osd[80147]: osd.2 pg_epoch: 86 pg[9.18( empty local-lis/les=0/0 n=0 ec=62/51 lis/c=62/62 les/c/f=63/63/0 sis=86) [2]/[0] r=-1 lpr=86 pi=[62,86)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  6 01:32:49 np0005548731 ceph-osd[80147]: osd.2 pg_epoch: 86 pg[9.18( empty local-lis/les=0/0 n=0 ec=62/51 lis/c=62/62 les/c/f=63/63/0 sis=86) [2]/[0] r=-1 lpr=86 pi=[62,86)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec  6 01:32:49 np0005548731 ceph-osd[80147]: osd.2 pg_epoch: 86 pg[9.8( empty local-lis/les=0/0 n=0 ec=62/51 lis/c=74/74 les/c/f=75/75/0 sis=86) [2]/[1] r=-1 lpr=86 pi=[74,86)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  6 01:32:49 np0005548731 ceph-osd[80147]: osd.2 pg_epoch: 86 pg[9.8( empty local-lis/les=0/0 n=0 ec=62/51 lis/c=74/74 les/c/f=75/75/0 sis=86) [2]/[1] r=-1 lpr=86 pi=[74,86)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec  6 01:32:49 np0005548731 ceph-osd[80147]: osd.2 pg_epoch: 86 pg[9.19( empty local-lis/les=0/0 n=0 ec=62/51 lis/c=74/74 les/c/f=75/75/0 sis=86) [2] r=0 lpr=86 pi=[74,86)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 01:32:49 np0005548731 ceph-osd[80147]: osd.2 pg_epoch: 86 pg[9.9( empty local-lis/les=0/0 n=0 ec=62/51 lis/c=62/62 les/c/f=63/63/0 sis=86) [2] r=0 lpr=86 pi=[62,86)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 01:32:49 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:32:49 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:32:49 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:32:49.558 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:32:49 np0005548731 ceph-osd[80147]: log_channel(cluster) log [DBG] : 11.a scrub starts
Dec  6 01:32:49 np0005548731 ceph-osd[80147]: log_channel(cluster) log [DBG] : 11.a scrub ok
Dec  6 01:32:50 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]: dispatch
Dec  6 01:32:50 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]': finished
Dec  6 01:32:50 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 01:32:50 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 01:32:50 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Dec  6 01:32:50 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 01:32:50 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e87 e87: 3 total, 3 up, 3 in
Dec  6 01:32:50 np0005548731 ceph-osd[80147]: osd.2 pg_epoch: 87 pg[9.9( empty local-lis/les=0/0 n=0 ec=62/51 lis/c=62/62 les/c/f=63/63/0 sis=87) [2]/[0] r=-1 lpr=87 pi=[62,87)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  6 01:32:50 np0005548731 ceph-osd[80147]: osd.2 pg_epoch: 87 pg[9.9( empty local-lis/les=0/0 n=0 ec=62/51 lis/c=62/62 les/c/f=63/63/0 sis=87) [2]/[0] r=-1 lpr=87 pi=[62,87)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec  6 01:32:50 np0005548731 ceph-osd[80147]: osd.2 pg_epoch: 87 pg[9.19( empty local-lis/les=0/0 n=0 ec=62/51 lis/c=74/74 les/c/f=75/75/0 sis=87) [2]/[1] r=-1 lpr=87 pi=[74,87)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  6 01:32:50 np0005548731 ceph-osd[80147]: osd.2 pg_epoch: 87 pg[9.19( empty local-lis/les=0/0 n=0 ec=62/51 lis/c=74/74 les/c/f=75/75/0 sis=87) [2]/[1] r=-1 lpr=87 pi=[74,87)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec  6 01:32:50 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:32:50 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:32:50 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:32:50.644 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:32:50 np0005548731 ceph-osd[80147]: log_channel(cluster) log [DBG] : 8.d scrub starts
Dec  6 01:32:50 np0005548731 ceph-osd[80147]: log_channel(cluster) log [DBG] : 8.d scrub ok
Dec  6 01:32:51 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:32:51 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:32:51 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:32:51.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:32:51 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e88 e88: 3 total, 3 up, 3 in
Dec  6 01:32:51 np0005548731 ceph-mon[77458]: Reconfiguring mon.compute-0 (monmap changed)...
Dec  6 01:32:51 np0005548731 ceph-mon[77458]: Reconfiguring daemon mon.compute-0 on compute-0
Dec  6 01:32:51 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]': finished
Dec  6 01:32:51 np0005548731 ceph-osd[80147]: osd.2 pg_epoch: 88 pg[9.18( v 58'1159 (0'0,58'1159] local-lis/les=0/0 n=5 ec=62/51 lis/c=86/62 les/c/f=87/63/0 sis=88) [2] r=0 lpr=88 pi=[62,88)/1 luod=0'0 crt=58'1159 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec  6 01:32:51 np0005548731 ceph-osd[80147]: osd.2 pg_epoch: 88 pg[9.18( v 58'1159 (0'0,58'1159] local-lis/les=0/0 n=5 ec=62/51 lis/c=86/62 les/c/f=87/63/0 sis=88) [2] r=0 lpr=88 pi=[62,88)/1 crt=58'1159 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 01:32:51 np0005548731 ceph-osd[80147]: osd.2 pg_epoch: 88 pg[9.8( v 58'1159 (0'0,58'1159] local-lis/les=0/0 n=6 ec=62/51 lis/c=86/74 les/c/f=87/75/0 sis=88) [2] r=0 lpr=88 pi=[74,88)/1 luod=0'0 crt=58'1159 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec  6 01:32:51 np0005548731 ceph-osd[80147]: osd.2 pg_epoch: 88 pg[9.8( v 58'1159 (0'0,58'1159] local-lis/les=0/0 n=6 ec=62/51 lis/c=86/74 les/c/f=87/75/0 sis=88) [2] r=0 lpr=88 pi=[74,88)/1 crt=58'1159 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 01:32:52 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:32:52 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:32:52 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:32:52.647 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:32:52 np0005548731 ceph-osd[80147]: log_channel(cluster) log [DBG] : 11.e scrub starts
Dec  6 01:32:52 np0005548731 ceph-osd[80147]: log_channel(cluster) log [DBG] : 11.e scrub ok
Dec  6 01:32:53 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e89 e89: 3 total, 3 up, 3 in
Dec  6 01:32:53 np0005548731 ceph-osd[80147]: osd.2 pg_epoch: 89 pg[9.9( v 58'1159 (0'0,58'1159] local-lis/les=0/0 n=6 ec=62/51 lis/c=87/62 les/c/f=88/63/0 sis=89) [2] r=0 lpr=89 pi=[62,89)/1 luod=0'0 crt=58'1159 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec  6 01:32:53 np0005548731 ceph-osd[80147]: osd.2 pg_epoch: 89 pg[9.9( v 58'1159 (0'0,58'1159] local-lis/les=0/0 n=6 ec=62/51 lis/c=87/62 les/c/f=88/63/0 sis=89) [2] r=0 lpr=89 pi=[62,89)/1 crt=58'1159 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 01:32:53 np0005548731 ceph-osd[80147]: osd.2 pg_epoch: 89 pg[9.19( v 58'1159 (0'0,58'1159] local-lis/les=0/0 n=5 ec=62/51 lis/c=87/74 les/c/f=88/75/0 sis=89) [2] r=0 lpr=89 pi=[74,89)/1 luod=0'0 crt=58'1159 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec  6 01:32:53 np0005548731 ceph-osd[80147]: osd.2 pg_epoch: 89 pg[9.19( v 58'1159 (0'0,58'1159] local-lis/les=0/0 n=5 ec=62/51 lis/c=87/74 les/c/f=88/75/0 sis=89) [2] r=0 lpr=89 pi=[74,89)/1 crt=58'1159 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 01:32:53 np0005548731 ceph-osd[80147]: osd.2 pg_epoch: 89 pg[9.18( v 58'1159 (0'0,58'1159] local-lis/les=88/89 n=5 ec=62/51 lis/c=86/62 les/c/f=87/63/0 sis=88) [2] r=0 lpr=88 pi=[62,88)/1 crt=58'1159 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 01:32:53 np0005548731 ceph-osd[80147]: osd.2 pg_epoch: 89 pg[9.8( v 58'1159 (0'0,58'1159] local-lis/les=88/89 n=6 ec=62/51 lis/c=86/74 les/c/f=87/75/0 sis=88) [2] r=0 lpr=88 pi=[74,88)/1 crt=58'1159 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 01:32:53 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:32:53 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000029s ======
Dec  6 01:32:53 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:32:53.563 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Dec  6 01:32:53 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 01:32:53 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 01:32:53 np0005548731 ceph-mon[77458]: Reconfiguring mgr.compute-0.sfzyix (monmap changed)...
Dec  6 01:32:53 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.sfzyix", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Dec  6 01:32:53 np0005548731 ceph-mon[77458]: Reconfiguring daemon mgr.compute-0.sfzyix on compute-0
Dec  6 01:32:53 np0005548731 ceph-osd[80147]: log_channel(cluster) log [DBG] : 8.f scrub starts
Dec  6 01:32:53 np0005548731 ceph-osd[80147]: log_channel(cluster) log [DBG] : 8.f scrub ok
Dec  6 01:32:54 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e89 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  6 01:32:54 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e90 e90: 3 total, 3 up, 3 in
Dec  6 01:32:54 np0005548731 ceph-osd[80147]: osd.2 pg_epoch: 90 pg[9.9( v 58'1159 (0'0,58'1159] local-lis/les=89/90 n=6 ec=62/51 lis/c=87/62 les/c/f=88/63/0 sis=89) [2] r=0 lpr=89 pi=[62,89)/1 crt=58'1159 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 01:32:54 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:32:54 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:32:54 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:32:54.650 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:32:54 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 01:32:54 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 01:32:54 np0005548731 ceph-mon[77458]: Reconfiguring crash.compute-0 (monmap changed)...
Dec  6 01:32:54 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Dec  6 01:32:54 np0005548731 ceph-mon[77458]: Reconfiguring daemon crash.compute-0 on compute-0
Dec  6 01:32:54 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 01:32:54 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 01:32:54 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
Dec  6 01:32:54 np0005548731 ceph-osd[80147]: osd.2 pg_epoch: 90 pg[9.19( v 58'1159 (0'0,58'1159] local-lis/les=89/90 n=5 ec=62/51 lis/c=87/74 les/c/f=88/75/0 sis=89) [2] r=0 lpr=89 pi=[74,89)/1 crt=58'1159 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 01:32:55 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:32:55 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:32:55 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:32:55.567 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:32:56 np0005548731 ceph-mon[77458]: Reconfiguring osd.0 (monmap changed)...
Dec  6 01:32:56 np0005548731 ceph-mon[77458]: Reconfiguring daemon osd.0 on compute-0
Dec  6 01:32:56 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:32:56 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000028s ======
Dec  6 01:32:56 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:32:56.653 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec  6 01:32:57 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:32:57 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 01:32:57 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:32:57.569 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 01:32:57 np0005548731 ceph-osd[80147]: log_channel(cluster) log [DBG] : 8.a scrub starts
Dec  6 01:32:57 np0005548731 ceph-osd[80147]: log_channel(cluster) log [DBG] : 8.a scrub ok
Dec  6 01:32:58 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e91 e91: 3 total, 3 up, 3 in
Dec  6 01:32:58 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 01:32:58 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]: dispatch
Dec  6 01:32:58 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 01:32:58 np0005548731 ceph-mon[77458]: Reconfiguring crash.compute-1 (monmap changed)...
Dec  6 01:32:58 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Dec  6 01:32:58 np0005548731 ceph-mon[77458]: Reconfiguring daemon crash.compute-1 on compute-1
Dec  6 01:32:58 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e92 e92: 3 total, 3 up, 3 in
Dec  6 01:32:58 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:32:58 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:32:58 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:32:58.657 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:32:59 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e92 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  6 01:32:59 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]: dispatch
Dec  6 01:32:59 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]': finished
Dec  6 01:32:59 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]': finished
Dec  6 01:32:59 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 01:32:59 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 01:32:59 np0005548731 ceph-mon[77458]: Reconfiguring osd.1 (monmap changed)...
Dec  6 01:32:59 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
Dec  6 01:32:59 np0005548731 ceph-mon[77458]: Reconfiguring daemon osd.1 on compute-1
Dec  6 01:32:59 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 01:32:59 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 01:32:59 np0005548731 ceph-mon[77458]: Reconfiguring mon.compute-1 (monmap changed)...
Dec  6 01:32:59 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Dec  6 01:32:59 np0005548731 ceph-mon[77458]: Reconfiguring daemon mon.compute-1 on compute-1
Dec  6 01:32:59 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e93 e93: 3 total, 3 up, 3 in
Dec  6 01:32:59 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:32:59 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000025s ======
Dec  6 01:32:59 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:32:59.571 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Dec  6 01:33:00 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 01:33:00 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 01:33:00 np0005548731 ceph-mon[77458]: Reconfiguring mgr.compute-1.nmklwp (monmap changed)...
Dec  6 01:33:00 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-1.nmklwp", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Dec  6 01:33:00 np0005548731 ceph-mon[77458]: Reconfiguring daemon mgr.compute-1.nmklwp on compute-1
Dec  6 01:33:00 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]: dispatch
Dec  6 01:33:00 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:33:00 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000025s ======
Dec  6 01:33:00 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:33:00.660 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Dec  6 01:33:00 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e94 e94: 3 total, 3 up, 3 in
Dec  6 01:33:01 np0005548731 podman[88581]: 2025-12-06 06:33:01.216471147 +0000 UTC m=+0.053656251 container create 130afb941be9ce6ab22db0f10494f94e431bf20a4af70459392856379968194a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_allen, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  6 01:33:01 np0005548731 systemd[1]: Started libpod-conmon-130afb941be9ce6ab22db0f10494f94e431bf20a4af70459392856379968194a.scope.
Dec  6 01:33:01 np0005548731 systemd[1]: Started libcrun container.
Dec  6 01:33:01 np0005548731 podman[88581]: 2025-12-06 06:33:01.198019876 +0000 UTC m=+0.035205010 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  6 01:33:01 np0005548731 podman[88581]: 2025-12-06 06:33:01.301254731 +0000 UTC m=+0.138439855 container init 130afb941be9ce6ab22db0f10494f94e431bf20a4af70459392856379968194a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_allen, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS)
Dec  6 01:33:01 np0005548731 podman[88581]: 2025-12-06 06:33:01.309107796 +0000 UTC m=+0.146292900 container start 130afb941be9ce6ab22db0f10494f94e431bf20a4af70459392856379968194a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_allen, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Dec  6 01:33:01 np0005548731 podman[88581]: 2025-12-06 06:33:01.31308845 +0000 UTC m=+0.150273554 container attach 130afb941be9ce6ab22db0f10494f94e431bf20a4af70459392856379968194a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_allen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  6 01:33:01 np0005548731 kind_allen[88598]: 167 167
Dec  6 01:33:01 np0005548731 systemd[1]: libpod-130afb941be9ce6ab22db0f10494f94e431bf20a4af70459392856379968194a.scope: Deactivated successfully.
Dec  6 01:33:01 np0005548731 podman[88581]: 2025-12-06 06:33:01.314737393 +0000 UTC m=+0.151922497 container died 130afb941be9ce6ab22db0f10494f94e431bf20a4af70459392856379968194a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_allen, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Dec  6 01:33:01 np0005548731 systemd[1]: var-lib-containers-storage-overlay-f5a0a002f8406c4802f72057d2af5ab0b1efc75bc708d38de8e65a7dc25d7d37-merged.mount: Deactivated successfully.
Dec  6 01:33:01 np0005548731 podman[88581]: 2025-12-06 06:33:01.375195061 +0000 UTC m=+0.212380155 container remove 130afb941be9ce6ab22db0f10494f94e431bf20a4af70459392856379968194a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_allen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec  6 01:33:01 np0005548731 systemd[1]: libpod-conmon-130afb941be9ce6ab22db0f10494f94e431bf20a4af70459392856379968194a.scope: Deactivated successfully.
Dec  6 01:33:01 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:33:01 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:33:01 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:33:01.575 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:33:02 np0005548731 podman[88737]: 2025-12-06 06:33:01.931248341 +0000 UTC m=+0.023693640 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  6 01:33:02 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 01:33:02 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 01:33:02 np0005548731 ceph-mon[77458]: Reconfiguring mon.compute-2 (monmap changed)...
Dec  6 01:33:02 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Dec  6 01:33:02 np0005548731 ceph-mon[77458]: Reconfiguring daemon mon.compute-2 on compute-2
Dec  6 01:33:02 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]': finished
Dec  6 01:33:02 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 01:33:02 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 01:33:02 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-2.ytlehq", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Dec  6 01:33:02 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:33:02 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:33:02 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:33:02.665 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:33:02 np0005548731 podman[88737]: 2025-12-06 06:33:02.79065095 +0000 UTC m=+0.883096239 container create 5c8021da7527d145e8de7858fddffc787e1c79264894b4f475ce5c154ada38f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_gates, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  6 01:33:02 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e95 e95: 3 total, 3 up, 3 in
Dec  6 01:33:02 np0005548731 systemd[1]: Started libpod-conmon-5c8021da7527d145e8de7858fddffc787e1c79264894b4f475ce5c154ada38f8.scope.
Dec  6 01:33:02 np0005548731 systemd[1]: Started libcrun container.
Dec  6 01:33:03 np0005548731 podman[88737]: 2025-12-06 06:33:03.030758249 +0000 UTC m=+1.123203558 container init 5c8021da7527d145e8de7858fddffc787e1c79264894b4f475ce5c154ada38f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_gates, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Dec  6 01:33:03 np0005548731 podman[88737]: 2025-12-06 06:33:03.037762642 +0000 UTC m=+1.130207931 container start 5c8021da7527d145e8de7858fddffc787e1c79264894b4f475ce5c154ada38f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_gates, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  6 01:33:03 np0005548731 ceph-mon[77458]: Reconfiguring mgr.compute-2.ytlehq (monmap changed)...
Dec  6 01:33:03 np0005548731 ceph-mon[77458]: Reconfiguring daemon mgr.compute-2.ytlehq on compute-2
Dec  6 01:33:03 np0005548731 clever_gates[88761]: 167 167
Dec  6 01:33:03 np0005548731 systemd[1]: libpod-5c8021da7527d145e8de7858fddffc787e1c79264894b4f475ce5c154ada38f8.scope: Deactivated successfully.
Dec  6 01:33:03 np0005548731 podman[88737]: 2025-12-06 06:33:03.04421659 +0000 UTC m=+1.136661879 container attach 5c8021da7527d145e8de7858fddffc787e1c79264894b4f475ce5c154ada38f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_gates, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Dec  6 01:33:03 np0005548731 podman[88737]: 2025-12-06 06:33:03.044636691 +0000 UTC m=+1.137081990 container died 5c8021da7527d145e8de7858fddffc787e1c79264894b4f475ce5c154ada38f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_gates, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Dec  6 01:33:03 np0005548731 systemd[1]: var-lib-containers-storage-overlay-ab6916c98361fad7a335d0f83bb7b67073ab174e9313d2073e46bb0e2f504e49-merged.mount: Deactivated successfully.
Dec  6 01:33:03 np0005548731 podman[88737]: 2025-12-06 06:33:03.094601555 +0000 UTC m=+1.187046854 container remove 5c8021da7527d145e8de7858fddffc787e1c79264894b4f475ce5c154ada38f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_gates, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  6 01:33:03 np0005548731 systemd[1]: libpod-conmon-5c8021da7527d145e8de7858fddffc787e1c79264894b4f475ce5c154ada38f8.scope: Deactivated successfully.
Dec  6 01:33:03 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:33:03 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:33:03 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:33:03.576 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:33:03 np0005548731 podman[88948]: 2025-12-06 06:33:03.872765283 +0000 UTC m=+0.061608089 container exec 541e7276f6038dd900483907d1613bdfcbf99ea3714cf53a920a7189986fab54 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-mon-compute-2, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Dec  6 01:33:03 np0005548731 podman[88948]: 2025-12-06 06:33:03.974860459 +0000 UTC m=+0.163703255 container exec_died 541e7276f6038dd900483907d1613bdfcbf99ea3714cf53a920a7189986fab54 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-mon-compute-2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  6 01:33:04 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e95 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  6 01:33:04 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 01:33:04 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 01:33:04 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:33:04 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 01:33:04 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:33:04.668 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 01:33:04 np0005548731 ceph-osd[80147]: log_channel(cluster) log [DBG] : 11.8 deep-scrub starts
Dec  6 01:33:04 np0005548731 ceph-osd[80147]: log_channel(cluster) log [DBG] : 11.8 deep-scrub ok
Dec  6 01:33:05 np0005548731 podman[89104]: 2025-12-06 06:33:05.180438497 +0000 UTC m=+0.641042929 container exec 42755d5ddab81b86d84b40c9464a8af637e2d196c1a098a61593efb6a9875167 (image=quay.io/ceph/haproxy:2.3, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-haproxy-rgw-default-compute-2-nyemkw)
Dec  6 01:33:05 np0005548731 podman[89104]: 2025-12-06 06:33:05.189619008 +0000 UTC m=+0.650223430 container exec_died 42755d5ddab81b86d84b40c9464a8af637e2d196c1a098a61593efb6a9875167 (image=quay.io/ceph/haproxy:2.3, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-haproxy-rgw-default-compute-2-nyemkw)
Dec  6 01:33:05 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 01:33:05 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 01:33:05 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 01:33:05 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 01:33:05 np0005548731 podman[89176]: 2025-12-06 06:33:05.436018541 +0000 UTC m=+0.055529521 container exec 60f905acca99db805341691264f7f91f9f4f43c91aa728d21396595dfca7cf39 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-keepalived-rgw-default-compute-2-cossgt, summary=Provides keepalived on RHEL 9 for Ceph., vcs-type=git, io.k8s.display-name=Keepalived on RHEL 9, description=keepalived for Ceph, io.openshift.tags=Ceph keepalived, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=2.2.4, release=1793, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, com.redhat.component=keepalived-container, distribution-scope=public, vendor=Red Hat, Inc., io.buildah.version=1.28.2, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, architecture=x86_64, name=keepalived, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2023-02-22T09:23:20, io.openshift.expose-services=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>)
Dec  6 01:33:05 np0005548731 podman[89176]: 2025-12-06 06:33:05.451321581 +0000 UTC m=+0.070832571 container exec_died 60f905acca99db805341691264f7f91f9f4f43c91aa728d21396595dfca7cf39 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-keepalived-rgw-default-compute-2-cossgt, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Keepalived on RHEL 9, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, com.redhat.component=keepalived-container, description=keepalived for Ceph, name=keepalived, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=2.2.4, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, architecture=x86_64, io.openshift.tags=Ceph keepalived, summary=Provides keepalived on RHEL 9 for Ceph., io.buildah.version=1.28.2, vcs-type=git, build-date=2023-02-22T09:23:20, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, release=1793)
Dec  6 01:33:05 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:33:05 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000025s ======
Dec  6 01:33:05 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:33:05.578 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Dec  6 01:33:06 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 01:33:06 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 01:33:06 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  6 01:33:06 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 01:33:06 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  6 01:33:06 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:33:06 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:33:06 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:33:06.671 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:33:07 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:33:07 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:33:07 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:33:07.580 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:33:08 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]: dispatch
Dec  6 01:33:08 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e96 e96: 3 total, 3 up, 3 in
Dec  6 01:33:08 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:33:08 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:33:08 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:33:08.674 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:33:09 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e96 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  6 01:33:09 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:33:09 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000025s ======
Dec  6 01:33:09 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:33:09.582 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Dec  6 01:33:09 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]': finished
Dec  6 01:33:10 np0005548731 ceph-osd[80147]: log_channel(cluster) log [DBG] : 8.15 scrub starts
Dec  6 01:33:10 np0005548731 ceph-osd[80147]: log_channel(cluster) log [DBG] : 8.15 scrub ok
Dec  6 01:33:10 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e97 e97: 3 total, 3 up, 3 in
Dec  6 01:33:10 np0005548731 ceph-osd[80147]: osd.2 pg_epoch: 97 pg[9.d( v 58'1159 (0'0,58'1159] local-lis/les=80/81 n=6 ec=62/51 lis/c=80/80 les/c/f=81/81/0 sis=97 pruub=11.790550232s) [1] r=-1 lpr=97 pi=[80,97)/1 crt=58'1159 mlcod 0'0 active pruub 171.405059814s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  6 01:33:10 np0005548731 ceph-osd[80147]: osd.2 pg_epoch: 97 pg[9.1d( v 58'1159 (0'0,58'1159] local-lis/les=79/80 n=5 ec=62/51 lis/c=79/79 les/c/f=80/80/0 sis=97 pruub=9.433007240s) [1] r=-1 lpr=97 pi=[79,97)/1 crt=58'1159 mlcod 0'0 active pruub 169.047515869s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  6 01:33:10 np0005548731 ceph-osd[80147]: osd.2 pg_epoch: 97 pg[9.d( v 58'1159 (0'0,58'1159] local-lis/les=80/81 n=6 ec=62/51 lis/c=80/80 les/c/f=81/81/0 sis=97 pruub=11.790477753s) [1] r=-1 lpr=97 pi=[80,97)/1 crt=58'1159 mlcod 0'0 unknown NOTIFY pruub 171.405059814s@ mbc={}] state<Start>: transitioning to Stray
Dec  6 01:33:10 np0005548731 ceph-osd[80147]: osd.2 pg_epoch: 97 pg[9.1d( v 58'1159 (0'0,58'1159] local-lis/les=79/80 n=5 ec=62/51 lis/c=79/79 les/c/f=80/80/0 sis=97 pruub=9.432940483s) [1] r=-1 lpr=97 pi=[79,97)/1 crt=58'1159 mlcod 0'0 unknown NOTIFY pruub 169.047515869s@ mbc={}] state<Start>: transitioning to Stray
Dec  6 01:33:10 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]: dispatch
Dec  6 01:33:10 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:33:10 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000025s ======
Dec  6 01:33:10 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:33:10.677 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Dec  6 01:33:11 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:33:11 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:33:11 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:33:11.585 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:33:11 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e98 e98: 3 total, 3 up, 3 in
Dec  6 01:33:11 np0005548731 ceph-osd[80147]: osd.2 pg_epoch: 98 pg[9.1d( v 58'1159 (0'0,58'1159] local-lis/les=79/80 n=5 ec=62/51 lis/c=79/79 les/c/f=80/80/0 sis=98) [1]/[2] r=0 lpr=98 pi=[79,98)/1 crt=58'1159 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec  6 01:33:11 np0005548731 ceph-osd[80147]: osd.2 pg_epoch: 98 pg[9.d( v 58'1159 (0'0,58'1159] local-lis/les=80/81 n=6 ec=62/51 lis/c=80/80 les/c/f=81/81/0 sis=98) [1]/[2] r=0 lpr=98 pi=[80,98)/1 crt=58'1159 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec  6 01:33:11 np0005548731 ceph-osd[80147]: osd.2 pg_epoch: 98 pg[9.1d( v 58'1159 (0'0,58'1159] local-lis/les=79/80 n=5 ec=62/51 lis/c=79/79 les/c/f=80/80/0 sis=98) [1]/[2] r=0 lpr=98 pi=[79,98)/1 crt=58'1159 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec  6 01:33:11 np0005548731 ceph-osd[80147]: osd.2 pg_epoch: 98 pg[9.d( v 58'1159 (0'0,58'1159] local-lis/les=80/81 n=6 ec=62/51 lis/c=80/80 les/c/f=81/81/0 sis=98) [1]/[2] r=0 lpr=98 pi=[80,98)/1 crt=58'1159 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec  6 01:33:11 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]': finished
Dec  6 01:33:11 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 01:33:11 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 01:33:12 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:33:12 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:33:12 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:33:12.681 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:33:12 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e99 e99: 3 total, 3 up, 3 in
Dec  6 01:33:13 np0005548731 ceph-osd[80147]: osd.2 pg_epoch: 99 pg[9.1d( v 58'1159 (0'0,58'1159] local-lis/les=98/99 n=5 ec=62/51 lis/c=79/79 les/c/f=80/80/0 sis=98) [1]/[2] async=[1] r=0 lpr=98 pi=[79,98)/1 crt=58'1159 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 01:33:13 np0005548731 ceph-osd[80147]: osd.2 pg_epoch: 99 pg[9.d( v 58'1159 (0'0,58'1159] local-lis/les=98/99 n=6 ec=62/51 lis/c=80/80 les/c/f=81/81/0 sis=98) [1]/[2] async=[1] r=0 lpr=98 pi=[80,98)/1 crt=58'1159 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 01:33:13 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e100 e100: 3 total, 3 up, 3 in
Dec  6 01:33:13 np0005548731 ceph-osd[80147]: osd.2 pg_epoch: 100 pg[9.1d( v 58'1159 (0'0,58'1159] local-lis/les=98/99 n=5 ec=62/51 lis/c=98/79 les/c/f=99/80/0 sis=100 pruub=15.543317795s) [1] async=[1] r=-1 lpr=100 pi=[79,100)/1 crt=58'1159 mlcod 58'1159 active pruub 178.057098389s@ mbc={255={}}] start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  6 01:33:13 np0005548731 ceph-osd[80147]: osd.2 pg_epoch: 100 pg[9.1d( v 58'1159 (0'0,58'1159] local-lis/les=98/99 n=5 ec=62/51 lis/c=98/79 les/c/f=99/80/0 sis=100 pruub=15.543213844s) [1] r=-1 lpr=100 pi=[79,100)/1 crt=58'1159 mlcod 0'0 unknown NOTIFY pruub 178.057098389s@ mbc={}] state<Start>: transitioning to Stray
Dec  6 01:33:13 np0005548731 ceph-osd[80147]: osd.2 pg_epoch: 100 pg[9.d( v 58'1159 (0'0,58'1159] local-lis/les=98/99 n=6 ec=62/51 lis/c=98/80 les/c/f=99/81/0 sis=100 pruub=15.546151161s) [1] async=[1] r=-1 lpr=100 pi=[80,100)/1 crt=58'1159 mlcod 58'1159 active pruub 178.060470581s@ mbc={255={}}] start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  6 01:33:13 np0005548731 ceph-osd[80147]: osd.2 pg_epoch: 100 pg[9.d( v 58'1159 (0'0,58'1159] local-lis/les=98/99 n=6 ec=62/51 lis/c=98/80 les/c/f=99/81/0 sis=100 pruub=15.546031952s) [1] r=-1 lpr=100 pi=[80,100)/1 crt=58'1159 mlcod 0'0 unknown NOTIFY pruub 178.060470581s@ mbc={}] state<Start>: transitioning to Stray
Dec  6 01:33:13 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:33:13 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:33:13 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:33:13.588 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:33:14 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e100 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  6 01:33:14 np0005548731 ceph-osd[80147]: log_channel(cluster) log [DBG] : 11.16 scrub starts
Dec  6 01:33:14 np0005548731 ceph-osd[80147]: log_channel(cluster) log [DBG] : 11.16 scrub ok
Dec  6 01:33:14 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e101 e101: 3 total, 3 up, 3 in
Dec  6 01:33:14 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:33:14 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 01:33:14 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:33:14.684 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 01:33:15 np0005548731 ceph-osd[80147]: log_channel(cluster) log [DBG] : 8.11 scrub starts
Dec  6 01:33:15 np0005548731 ceph-osd[80147]: log_channel(cluster) log [DBG] : 8.11 scrub ok
Dec  6 01:33:15 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:33:15 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 01:33:15 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:33:15.590 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 01:33:16 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:33:16 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:33:16 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:33:16.700 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:33:17 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:33:17 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 01:33:17 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:33:17.592 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 01:33:18 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:33:18 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000025s ======
Dec  6 01:33:18 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:33:18.703 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Dec  6 01:33:19 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e101 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  6 01:33:19 np0005548731 ceph-osd[80147]: log_channel(cluster) log [DBG] : 8.3 deep-scrub starts
Dec  6 01:33:19 np0005548731 ceph-osd[80147]: log_channel(cluster) log [DBG] : 8.3 deep-scrub ok
Dec  6 01:33:19 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]: dispatch
Dec  6 01:33:19 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:33:19 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:33:19 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:33:19.596 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:33:19 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e102 e102: 3 total, 3 up, 3 in
Dec  6 01:33:20 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:33:20 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000025s ======
Dec  6 01:33:20 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:33:20.707 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Dec  6 01:33:21 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e103 e103: 3 total, 3 up, 3 in
Dec  6 01:33:21 np0005548731 ceph-osd[80147]: osd.2 pg_epoch: 103 pg[9.1f( v 58'1159 (0'0,58'1159] local-lis/les=72/73 n=5 ec=62/51 lis/c=72/72 les/c/f=73/73/0 sis=103 pruub=8.591589928s) [1] r=-1 lpr=103 pi=[72,103)/1 crt=58'1159 mlcod 0'0 active pruub 178.599151611s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  6 01:33:21 np0005548731 ceph-osd[80147]: osd.2 pg_epoch: 103 pg[9.f( v 58'1159 (0'0,58'1159] local-lis/les=72/73 n=6 ec=62/51 lis/c=72/72 les/c/f=73/73/0 sis=103 pruub=8.590903282s) [1] r=-1 lpr=103 pi=[72,103)/1 crt=58'1159 mlcod 0'0 active pruub 178.599258423s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  6 01:33:21 np0005548731 ceph-osd[80147]: osd.2 pg_epoch: 103 pg[9.1f( v 58'1159 (0'0,58'1159] local-lis/les=72/73 n=5 ec=62/51 lis/c=72/72 les/c/f=73/73/0 sis=103 pruub=8.591114044s) [1] r=-1 lpr=103 pi=[72,103)/1 crt=58'1159 mlcod 0'0 unknown NOTIFY pruub 178.599151611s@ mbc={}] state<Start>: transitioning to Stray
Dec  6 01:33:21 np0005548731 ceph-osd[80147]: osd.2 pg_epoch: 103 pg[9.f( v 58'1159 (0'0,58'1159] local-lis/les=72/73 n=6 ec=62/51 lis/c=72/72 les/c/f=73/73/0 sis=103 pruub=8.590808868s) [1] r=-1 lpr=103 pi=[72,103)/1 crt=58'1159 mlcod 0'0 unknown NOTIFY pruub 178.599258423s@ mbc={}] state<Start>: transitioning to Stray
Dec  6 01:33:21 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]': finished
Dec  6 01:33:21 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]: dispatch
Dec  6 01:33:21 np0005548731 ceph-osd[80147]: log_channel(cluster) log [DBG] : 8.b scrub starts
Dec  6 01:33:21 np0005548731 ceph-osd[80147]: log_channel(cluster) log [DBG] : 8.b scrub ok
Dec  6 01:33:21 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:33:21 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000025s ======
Dec  6 01:33:21 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:33:21.598 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Dec  6 01:33:22 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]': finished
Dec  6 01:33:22 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e104 e104: 3 total, 3 up, 3 in
Dec  6 01:33:22 np0005548731 ceph-osd[80147]: osd.2 pg_epoch: 104 pg[9.f( v 58'1159 (0'0,58'1159] local-lis/les=72/73 n=6 ec=62/51 lis/c=72/72 les/c/f=73/73/0 sis=104) [1]/[2] r=0 lpr=104 pi=[72,104)/1 crt=58'1159 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec  6 01:33:22 np0005548731 ceph-osd[80147]: osd.2 pg_epoch: 104 pg[9.f( v 58'1159 (0'0,58'1159] local-lis/les=72/73 n=6 ec=62/51 lis/c=72/72 les/c/f=73/73/0 sis=104) [1]/[2] r=0 lpr=104 pi=[72,104)/1 crt=58'1159 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec  6 01:33:22 np0005548731 ceph-osd[80147]: osd.2 pg_epoch: 104 pg[9.1f( v 58'1159 (0'0,58'1159] local-lis/les=72/73 n=5 ec=62/51 lis/c=72/72 les/c/f=73/73/0 sis=104) [1]/[2] r=0 lpr=104 pi=[72,104)/1 crt=58'1159 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec  6 01:33:22 np0005548731 ceph-osd[80147]: osd.2 pg_epoch: 104 pg[9.1f( v 58'1159 (0'0,58'1159] local-lis/les=72/73 n=5 ec=62/51 lis/c=72/72 les/c/f=73/73/0 sis=104) [1]/[2] r=0 lpr=104 pi=[72,104)/1 crt=58'1159 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec  6 01:33:22 np0005548731 ceph-osd[80147]: log_channel(cluster) log [DBG] : 8.c scrub starts
Dec  6 01:33:22 np0005548731 ceph-osd[80147]: log_channel(cluster) log [DBG] : 8.c scrub ok
Dec  6 01:33:22 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:33:22 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000025s ======
Dec  6 01:33:22 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:33:22.710 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Dec  6 01:33:23 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:33:23 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 01:33:23 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:33:23.599 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 01:33:24 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e104 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  6 01:33:24 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e105 e105: 3 total, 3 up, 3 in
Dec  6 01:33:24 np0005548731 ceph-osd[80147]: osd.2 pg_epoch: 105 pg[9.f( v 58'1159 (0'0,58'1159] local-lis/les=104/105 n=6 ec=62/51 lis/c=72/72 les/c/f=73/73/0 sis=104) [1]/[2] async=[1] r=0 lpr=104 pi=[72,104)/1 crt=58'1159 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 01:33:24 np0005548731 ceph-osd[80147]: osd.2 pg_epoch: 105 pg[9.1f( v 58'1159 (0'0,58'1159] local-lis/les=104/105 n=5 ec=62/51 lis/c=72/72 les/c/f=73/73/0 sis=104) [1]/[2] async=[1] r=0 lpr=104 pi=[72,104)/1 crt=58'1159 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 01:33:24 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:33:24 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:33:24 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:33:24.714 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:33:25 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:33:25 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:33:25 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:33:25.602 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:33:26 np0005548731 ceph-osd[80147]: log_channel(cluster) log [DBG] : 11.13 scrub starts
Dec  6 01:33:26 np0005548731 ceph-osd[80147]: log_channel(cluster) log [DBG] : 11.13 scrub ok
Dec  6 01:33:26 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:33:26 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000025s ======
Dec  6 01:33:26 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:33:26.717 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Dec  6 01:33:26 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e106 e106: 3 total, 3 up, 3 in
Dec  6 01:33:26 np0005548731 ceph-osd[80147]: osd.2 pg_epoch: 106 pg[9.f( v 58'1159 (0'0,58'1159] local-lis/les=104/105 n=6 ec=62/51 lis/c=104/72 les/c/f=105/73/0 sis=106 pruub=13.430812836s) [1] async=[1] r=-1 lpr=106 pi=[72,106)/1 crt=58'1159 mlcod 58'1159 active pruub 189.169982910s@ mbc={255={}}] start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  6 01:33:26 np0005548731 ceph-osd[80147]: osd.2 pg_epoch: 106 pg[9.f( v 58'1159 (0'0,58'1159] local-lis/les=104/105 n=6 ec=62/51 lis/c=104/72 les/c/f=105/73/0 sis=106 pruub=13.430675507s) [1] r=-1 lpr=106 pi=[72,106)/1 crt=58'1159 mlcod 0'0 unknown NOTIFY pruub 189.169982910s@ mbc={}] state<Start>: transitioning to Stray
Dec  6 01:33:27 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:33:27 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:33:27 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:33:27.604 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:33:27 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]: dispatch
Dec  6 01:33:27 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e107 e107: 3 total, 3 up, 3 in
Dec  6 01:33:27 np0005548731 ceph-osd[80147]: osd.2 pg_epoch: 107 pg[9.1f( v 58'1159 (0'0,58'1159] local-lis/les=104/105 n=5 ec=62/51 lis/c=104/72 les/c/f=105/73/0 sis=107 pruub=12.307349205s) [1] async=[1] r=-1 lpr=107 pi=[72,107)/1 crt=58'1159 mlcod 58'1159 active pruub 189.170013428s@ mbc={255={}}] start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  6 01:33:27 np0005548731 ceph-osd[80147]: osd.2 pg_epoch: 107 pg[9.1f( v 58'1159 (0'0,58'1159] local-lis/les=104/105 n=5 ec=62/51 lis/c=104/72 les/c/f=105/73/0 sis=107 pruub=12.307205200s) [1] r=-1 lpr=107 pi=[72,107)/1 crt=58'1159 mlcod 0'0 unknown NOTIFY pruub 189.170013428s@ mbc={}] state<Start>: transitioning to Stray
Dec  6 01:33:28 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:33:28 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:33:28 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:33:28.721 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:33:28 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]': finished
Dec  6 01:33:28 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]: dispatch
Dec  6 01:33:29 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e107 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  6 01:33:29 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e108 e108: 3 total, 3 up, 3 in
Dec  6 01:33:29 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:33:29 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000025s ======
Dec  6 01:33:29 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:33:29.605 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Dec  6 01:33:30 np0005548731 ceph-osd[80147]: log_channel(cluster) log [DBG] : 11.3 scrub starts
Dec  6 01:33:30 np0005548731 ceph-osd[80147]: log_channel(cluster) log [DBG] : 11.3 scrub ok
Dec  6 01:33:30 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]': finished
Dec  6 01:33:30 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e109 e109: 3 total, 3 up, 3 in
Dec  6 01:33:30 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:33:30 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:33:30 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:33:30.723 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:33:30 np0005548731 python3.9[89537]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  6 01:33:31 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]: dispatch
Dec  6 01:33:31 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]': finished
Dec  6 01:33:31 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e110 e110: 3 total, 3 up, 3 in
Dec  6 01:33:31 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:33:31 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:33:31 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:33:31.607 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:33:32 np0005548731 ceph-osd[80147]: log_channel(cluster) log [DBG] : 8.6 scrub starts
Dec  6 01:33:32 np0005548731 ceph-osd[80147]: log_channel(cluster) log [DBG] : 8.6 scrub ok
Dec  6 01:33:32 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:33:32 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:33:32 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:33:32.726 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:33:32 np0005548731 python3.9[89825]: ansible-ansible.posix.selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config update_kernel_param=False
Dec  6 01:33:33 np0005548731 ceph-osd[80147]: log_channel(cluster) log [DBG] : 8.5 scrub starts
Dec  6 01:33:33 np0005548731 ceph-osd[80147]: log_channel(cluster) log [DBG] : 8.5 scrub ok
Dec  6 01:33:33 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:33:33 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000025s ======
Dec  6 01:33:33 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:33:33.609 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Dec  6 01:33:33 np0005548731 python3.9[89977]: ansible-ansible.legacy.command Invoked with cmd=dd if=/dev/zero of=/swap count=1024 bs=1M creates=/swap _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None removes=None stdin=None
Dec  6 01:33:34 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e110 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  6 01:33:34 np0005548731 ceph-osd[80147]: log_channel(cluster) log [DBG] : 11.19 scrub starts
Dec  6 01:33:34 np0005548731 ceph-osd[80147]: log_channel(cluster) log [DBG] : 11.19 scrub ok
Dec  6 01:33:34 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:33:34 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:33:34 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:33:34.730 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:33:35 np0005548731 ceph-osd[80147]: log_channel(cluster) log [DBG] : 8.1f scrub starts
Dec  6 01:33:35 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:33:35 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:33:35 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:33:35.612 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:33:36 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:33:36 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 01:33:36 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:33:36.733 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 01:33:37 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:33:37 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:33:37 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:33:37.614 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:33:38 np0005548731 ceph-osd[80147]: log_channel(cluster) log [DBG] : 8.1c deep-scrub starts
Dec  6 01:33:38 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:33:38 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:33:38 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:33:38.738 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:33:38 np0005548731 ceph-osd[80147]: log_channel(cluster) log [DBG] : 10.12 scrub starts
Dec  6 01:33:39 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:33:39 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:33:39 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:33:39.617 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:33:40 np0005548731 ceph-mds[82074]: mds.beacon.cephfs.compute-2.tjfgow missed beacon ack from the monitors
Dec  6 01:33:40 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:33:40 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:33:40 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:33:40.743 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:33:41 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:33:41 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:33:41 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:33:41.621 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:33:42 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:33:42 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:33:42 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:33:42.747 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:33:43 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:33:43 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:33:43 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:33:43.624 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:33:44 np0005548731 ceph-mds[82074]: mds.beacon.cephfs.compute-2.tjfgow missed beacon ack from the monitors
Dec  6 01:33:44 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:33:44 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000025s ======
Dec  6 01:33:44 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:33:44.750 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Dec  6 01:33:45 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:33:45 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 01:33:45 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:33:45.626 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 01:33:46 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:33:46 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000025s ======
Dec  6 01:33:46 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:33:46.753 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Dec  6 01:33:47 np0005548731 python3.9[90130]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/swap recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 01:33:47 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:33:47 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:33:47 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:33:47.629 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:33:47 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e111 e111: 3 total, 3 up, 3 in
Dec  6 01:33:47 np0005548731 ceph-osd[80147]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency slow operation observed for kv_commit, latency = 13.856166840s
Dec  6 01:33:47 np0005548731 ceph-osd[80147]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency slow operation observed for kv_sync, latency = 13.856166840s
Dec  6 01:33:47 np0005548731 ceph-osd[80147]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 13.856403351s, txc = 0x5612d018f500
Dec  6 01:33:47 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).paxos(paxos updating c 1..751) lease_expire from mon.0 v2:192.168.122.100:3300/0 is 7.694776535s seconds in the past; mons are probably laggy (or possibly clocks are too skewed)
Dec  6 01:33:47 np0005548731 ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-mon-compute-2[77454]: 2025-12-06T06:33:47.934+0000 7ff5877f0640 -1 mon.compute-2@1(peon).paxos(paxos updating c 1..751) lease_expire from mon.0 v2:192.168.122.100:3300/0 is 7.694776535s seconds in the past; mons are probably laggy (or possibly clocks are too skewed)
Dec  6 01:33:47 np0005548731 ceph-osd[80147]: log_channel(cluster) log [DBG] : 8.1f scrub ok
Dec  6 01:33:47 np0005548731 ceph-osd[80147]: log_channel(cluster) log [DBG] : 10.12 scrub ok
Dec  6 01:33:47 np0005548731 ceph-osd[80147]: log_channel(cluster) log [DBG] : 8.1c deep-scrub ok
Dec  6 01:33:48 np0005548731 ceph-osd[80147]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 13.925290108s, txc = 0x5612d09f0000
Dec  6 01:33:48 np0005548731 ceph-osd[80147]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 12.906360626s, txc = 0x5612d09f0300
Dec  6 01:33:48 np0005548731 ceph-osd[80147]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 11.712567329s, txc = 0x5612d1358300
Dec  6 01:33:48 np0005548731 ceph-osd[80147]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 9.962915421s, txc = 0x5612d09f8000
Dec  6 01:33:48 np0005548731 ceph-osd[80147]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 9.007397652s, txc = 0x5612d27b7800
Dec  6 01:33:48 np0005548731 ceph-mon[77458]: mon.compute-2@1(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec  6 01:33:48 np0005548731 ceph-mds[82074]: mds.beacon.cephfs.compute-2.tjfgow missed beacon ack from the monitors
Dec  6 01:33:48 np0005548731 python3.9[90288]: ansible-ansible.posix.mount Invoked with dump=0 fstype=swap name=none opts=sw passno=0 src=/swap state=present path=none boot=True opts_no_log=False backup=False fstab=None
Dec  6 01:33:48 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec  6 01:33:48 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:33:48 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 01:33:48 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:33:48.757 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 01:33:49 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:33:49 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:33:49 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:33:49.631 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:33:49 np0005548731 python3.9[90441]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/ca-trust/source/anchors setype=cert_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  6 01:33:50 np0005548731 ceph-mon[77458]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #13. Immutable memtables: 0.
Dec  6 01:33:50 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-06:33:50.170729) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec  6 01:33:50 np0005548731 ceph-mon[77458]: rocksdb: [db/flush_job.cc:856] [default] [JOB 3] Flushing memtable with next log file: 13
Dec  6 01:33:50 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765002830170854, "job": 3, "event": "flush_started", "num_memtables": 1, "num_entries": 7043, "num_deletes": 257, "total_data_size": 13618400, "memory_usage": 13825632, "flush_reason": "Manual Compaction"}
Dec  6 01:33:50 np0005548731 ceph-mon[77458]: rocksdb: [db/flush_job.cc:885] [default] [JOB 3] Level-0 flush table #14: started
Dec  6 01:33:50 np0005548731 python3.9[90645]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  6 01:33:50 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765002830744254, "cf_name": "default", "job": 3, "event": "table_file_creation", "file_number": 14, "file_size": 8546463, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 257, "largest_seqno": 7048, "table_properties": {"data_size": 8516382, "index_size": 19662, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 9349, "raw_key_size": 88377, "raw_average_key_size": 23, "raw_value_size": 8444413, "raw_average_value_size": 2274, "num_data_blocks": 867, "num_entries": 3713, "num_filter_entries": 3713, "num_deletions": 257, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765002565, "oldest_key_time": 1765002565, "file_creation_time": 1765002830, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bc01f9a1-fda8-4008-aee1-1fa5ff4371a1", "db_session_id": "CWBL8WMDM4D79BU86E9T", "orig_file_number": 14, "seqno_to_time_mapping": "N/A"}}
Dec  6 01:33:50 np0005548731 ceph-mon[77458]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 3] Flush lasted 573635 microseconds, and 21819 cpu microseconds.
Dec  6 01:33:50 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:33:50 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:33:50 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:33:50.761 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:33:50 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-06:33:50.744375) [db/flush_job.cc:967] [default] [JOB 3] Level-0 flush table #14: 8546463 bytes OK
Dec  6 01:33:50 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-06:33:50.744408) [db/memtable_list.cc:519] [default] Level-0 commit table #14 started
Dec  6 01:33:50 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-06:33:50.930768) [db/memtable_list.cc:722] [default] Level-0 commit table #14: memtable #1 done
Dec  6 01:33:50 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-06:33:50.930829) EVENT_LOG_v1 {"time_micros": 1765002830930819, "job": 3, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [2, 0, 0, 0, 0, 0, 0], "immutable_memtables": 0}
Dec  6 01:33:50 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-06:33:50.930856) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: files[2 0 0 0 0 0 0] max score 0.50
Dec  6 01:33:50 np0005548731 ceph-mon[77458]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 3] Try to delete WAL files size 13578454, prev total WAL file size 13622849, number of live WAL files 2.
Dec  6 01:33:50 np0005548731 ceph-mon[77458]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000009.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  6 01:33:50 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-06:33:50.939878) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730030' seq:72057594037927935, type:22 .. '7061786F7300323532' seq:0, type:0; will stop at (end)
Dec  6 01:33:50 np0005548731 ceph-mon[77458]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 4] Compacting 2@0 files to L6, score -1.00
Dec  6 01:33:50 np0005548731 ceph-mon[77458]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 3 Base level 0, inputs: [14(8346KB) 8(1648B)]
Dec  6 01:33:50 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765002830940025, "job": 4, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [14, 8], "score": -1, "input_data_size": 8548111, "oldest_snapshot_seqno": -1}
Dec  6 01:33:51 np0005548731 ceph-mon[77458]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 4] Generated table #15: 3459 keys, 8542626 bytes, temperature: kUnknown
Dec  6 01:33:51 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765002831298363, "cf_name": "default", "job": 4, "event": "table_file_creation", "file_number": 15, "file_size": 8542626, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8513253, "index_size": 19599, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 8709, "raw_key_size": 84137, "raw_average_key_size": 24, "raw_value_size": 8444471, "raw_average_value_size": 2441, "num_data_blocks": 866, "num_entries": 3459, "num_filter_entries": 3459, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765002564, "oldest_key_time": 0, "file_creation_time": 1765002830, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bc01f9a1-fda8-4008-aee1-1fa5ff4371a1", "db_session_id": "CWBL8WMDM4D79BU86E9T", "orig_file_number": 15, "seqno_to_time_mapping": "N/A"}}
Dec  6 01:33:51 np0005548731 ceph-mon[77458]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  6 01:33:51 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e112 e112: 3 total, 3 up, 3 in
Dec  6 01:33:51 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-06:33:51.298782) [db/compaction/compaction_job.cc:1663] [default] [JOB 4] Compacted 2@0 files to L6 => 8542626 bytes
Dec  6 01:33:51 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-06:33:51.335245) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 23.8 rd, 23.8 wr, level 6, files in(2, 0) out(1 +0 blob) MB in(8.2, 0.0 +0.0 blob) out(8.1 +0.0 blob), read-write-amplify(2.0) write-amplify(1.0) OK, records in: 3718, records dropped: 259 output_compression: NoCompression
Dec  6 01:33:51 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-06:33:51.335302) EVENT_LOG_v1 {"time_micros": 1765002831335286, "job": 4, "event": "compaction_finished", "compaction_time_micros": 358444, "compaction_time_cpu_micros": 22158, "output_level": 6, "num_output_files": 1, "total_output_size": 8542626, "num_input_records": 3718, "num_output_records": 3459, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec  6 01:33:51 np0005548731 ceph-mon[77458]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000014.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  6 01:33:51 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765002831337736, "job": 4, "event": "table_file_deletion", "file_number": 14}
Dec  6 01:33:51 np0005548731 ceph-mon[77458]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000008.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  6 01:33:51 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765002831337847, "job": 4, "event": "table_file_deletion", "file_number": 8}
Dec  6 01:33:51 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-06:33:50.939694) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 01:33:51 np0005548731 ceph-mon[77458]: mon.compute-0 calling monitor election
Dec  6 01:33:51 np0005548731 ceph-mon[77458]: mon.compute-0 is new leader, mons compute-0,compute-2,compute-1 in quorum (ranks 0,1,2)
Dec  6 01:33:51 np0005548731 ceph-mon[77458]: overall HEALTH_OK
Dec  6 01:33:51 np0005548731 python3.9[90723]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem _original_basename=tls-ca-bundle.pem recurse=False state=file path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 01:33:51 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:33:51 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000025s ======
Dec  6 01:33:51 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:33:51.633 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Dec  6 01:33:51 np0005548731 ceph-osd[80147]: log_channel(cluster) log [DBG] : 10.4 scrub starts
Dec  6 01:33:51 np0005548731 ceph-osd[80147]: log_channel(cluster) log [DBG] : 10.4 scrub ok
Dec  6 01:33:52 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:33:52 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:33:52 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:33:52.765 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:33:52 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e113 e113: 3 total, 3 up, 3 in
Dec  6 01:33:52 np0005548731 python3.9[90878]: ansible-ansible.builtin.stat Invoked with path=/etc/lvm/devices/system.devices follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  6 01:33:53 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e113 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  6 01:33:53 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:33:53 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:33:53 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:33:53.635 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:33:54 np0005548731 python3.9[91033]: ansible-ansible.builtin.getent Invoked with database=passwd key=qemu fail_key=True service=None split=None
Dec  6 01:33:54 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:33:54 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:33:54 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:33:54.768 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:33:55 np0005548731 python3.9[91188]: ansible-ansible.builtin.getent Invoked with database=passwd key=hugetlbfs fail_key=True service=None split=None
Dec  6 01:33:55 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:33:55 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:33:55 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:33:55.638 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:33:55 np0005548731 ceph-osd[80147]: log_channel(cluster) log [DBG] : 10.1e scrub starts
Dec  6 01:33:55 np0005548731 ceph-osd[80147]: log_channel(cluster) log [DBG] : 10.1e scrub ok
Dec  6 01:33:56 np0005548731 python3.9[91342]: ansible-ansible.builtin.group Invoked with gid=42477 name=hugetlbfs state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Dec  6 01:33:56 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:33:56 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:33:56 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:33:56.771 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:33:57 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:33:57 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 01:33:57 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:33:57.641 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 01:33:57 np0005548731 python3.9[91495]: ansible-ansible.builtin.file Invoked with group=qemu mode=0755 owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None serole=None selevel=None attributes=None
Dec  6 01:33:58 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e113 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  6 01:33:58 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:33:58 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:33:58 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:33:58.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:33:58 np0005548731 python3.9[91649]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec  6 01:33:58 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]: dispatch
Dec  6 01:33:59 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e114 e114: 3 total, 3 up, 3 in
Dec  6 01:33:59 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:33:59 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:33:59 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:33:59.643 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:34:00 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]': finished
Dec  6 01:34:00 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:34:00 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:34:00 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:34:00.778 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:34:00 np0005548731 python3.9[91805]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/modules-load.d setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  6 01:34:00 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e115 e115: 3 total, 3 up, 3 in
Dec  6 01:34:01 np0005548731 python3.9[91957]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  6 01:34:01 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]: dispatch
Dec  6 01:34:01 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]': finished
Dec  6 01:34:01 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:34:01 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:34:01 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:34:01.645 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:34:01 np0005548731 python3.9[92036]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/modules-load.d/99-edpm.conf _original_basename=edpm-modprobe.conf.j2 recurse=False state=file path=/etc/modules-load.d/99-edpm.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  6 01:34:02 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:34:02 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:34:02 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:34:02.779 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:34:02 np0005548731 python3.9[92190]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  6 01:34:03 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  6 01:34:03 np0005548731 python3.9[92268]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/sysctl.d/99-edpm.conf _original_basename=edpm-sysctl.conf.j2 recurse=False state=file path=/etc/sysctl.d/99-edpm.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  6 01:34:03 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:34:03 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:34:03 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:34:03.648 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:34:04 np0005548731 python3.9[92421]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec  6 01:34:04 np0005548731 ceph-osd[80147]: log_channel(cluster) log [DBG] : 10.f scrub starts
Dec  6 01:34:04 np0005548731 ceph-osd[80147]: log_channel(cluster) log [DBG] : 10.f scrub ok
Dec  6 01:34:04 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:34:04 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:34:04 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:34:04.782 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:34:05 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:34:05 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:34:05 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:34:05.650 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:34:05 np0005548731 ceph-osd[80147]: log_channel(cluster) log [DBG] : 10.3 scrub starts
Dec  6 01:34:05 np0005548731 ceph-osd[80147]: log_channel(cluster) log [DBG] : 10.3 scrub ok
Dec  6 01:34:06 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:34:06 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:34:06 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:34:06.785 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:34:07 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:34:07 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:34:07 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:34:07.653 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:34:07 np0005548731 ceph-osd[80147]: log_channel(cluster) log [DBG] : 10.11 scrub starts
Dec  6 01:34:07 np0005548731 ceph-osd[80147]: log_channel(cluster) log [DBG] : 10.11 scrub ok
Dec  6 01:34:08 np0005548731 ceph-mds[82074]: mds.beacon.cephfs.compute-2.tjfgow missed beacon ack from the monitors
Dec  6 01:34:08 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  6 01:34:08 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:34:08 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:34:08 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:34:08.788 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:34:09 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:34:09 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:34:09 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:34:09.655 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:34:10 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:34:10 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:34:10 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:34:10.791 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:34:11 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:34:11 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:34:11 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:34:11.658 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:34:11 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).paxos(paxos updating c 252..769) lease_timeout -- calling new election
Dec  6 01:34:11 np0005548731 ceph-mon[77458]: log_channel(cluster) log [INF] : mon.compute-2 calling monitor election
Dec  6 01:34:11 np0005548731 ceph-mon[77458]: paxos.1).electionLogic(16) init, last seen epoch 16
Dec  6 01:34:11 np0005548731 ceph-mon[77458]: mon.compute-2@1(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec  6 01:34:11 np0005548731 ceph-mon[77458]: mon.compute-2@1(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec  6 01:34:12 np0005548731 ceph-mds[82074]: mds.beacon.cephfs.compute-2.tjfgow missed beacon ack from the monitors
Dec  6 01:34:12 np0005548731 podman[92674]: 2025-12-06 06:34:12.760261915 +0000 UTC m=+0.067035655 container exec 541e7276f6038dd900483907d1613bdfcbf99ea3714cf53a920a7189986fab54 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-mon-compute-2, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Dec  6 01:34:12 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:34:12 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000022s ======
Dec  6 01:34:12 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:34:12.794 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000022s
Dec  6 01:34:12 np0005548731 podman[92674]: 2025-12-06 06:34:12.887973487 +0000 UTC m=+0.194747207 container exec_died 541e7276f6038dd900483907d1613bdfcbf99ea3714cf53a920a7189986fab54 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-mon-compute-2, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Dec  6 01:34:13 np0005548731 podman[92825]: 2025-12-06 06:34:13.513703407 +0000 UTC m=+0.060902946 container exec 42755d5ddab81b86d84b40c9464a8af637e2d196c1a098a61593efb6a9875167 (image=quay.io/ceph/haproxy:2.3, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-haproxy-rgw-default-compute-2-nyemkw)
Dec  6 01:34:13 np0005548731 podman[92847]: 2025-12-06 06:34:13.587383141 +0000 UTC m=+0.052593446 container exec_died 42755d5ddab81b86d84b40c9464a8af637e2d196c1a098a61593efb6a9875167 (image=quay.io/ceph/haproxy:2.3, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-haproxy-rgw-default-compute-2-nyemkw)
Dec  6 01:34:13 np0005548731 podman[92825]: 2025-12-06 06:34:13.59349368 +0000 UTC m=+0.140693019 container exec_died 42755d5ddab81b86d84b40c9464a8af637e2d196c1a098a61593efb6a9875167 (image=quay.io/ceph/haproxy:2.3, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-haproxy-rgw-default-compute-2-nyemkw)
Dec  6 01:34:13 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:34:13 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000022s ======
Dec  6 01:34:13 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:34:13.660 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000022s
Dec  6 01:34:13 np0005548731 podman[92892]: 2025-12-06 06:34:13.808007705 +0000 UTC m=+0.057643061 container exec 60f905acca99db805341691264f7f91f9f4f43c91aa728d21396595dfca7cf39 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-keepalived-rgw-default-compute-2-cossgt, vcs-type=git, vendor=Red Hat, Inc., description=keepalived for Ceph, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.buildah.version=1.28.2, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.openshift.expose-services=, release=1793, architecture=x86_64, io.k8s.display-name=Keepalived on RHEL 9, version=2.2.4, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, name=keepalived, io.openshift.tags=Ceph keepalived, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides keepalived on RHEL 9 for Ceph., build-date=2023-02-22T09:23:20, com.redhat.component=keepalived-container)
Dec  6 01:34:13 np0005548731 podman[92892]: 2025-12-06 06:34:13.82100039 +0000 UTC m=+0.070635736 container exec_died 60f905acca99db805341691264f7f91f9f4f43c91aa728d21396595dfca7cf39 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-keepalived-rgw-default-compute-2-cossgt, io.openshift.expose-services=, release=1793, io.buildah.version=1.28.2, name=keepalived, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, description=keepalived for Ceph, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2023-02-22T09:23:20, version=2.2.4, com.redhat.component=keepalived-container, io.k8s.display-name=Keepalived on RHEL 9, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides keepalived on RHEL 9 for Ceph., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vendor=Red Hat, Inc., io.openshift.tags=Ceph keepalived, vcs-type=git)
Dec  6 01:34:14 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:34:14 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000022s ======
Dec  6 01:34:14 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:34:14.798 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000022s
Dec  6 01:34:15 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:34:15 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000022s ======
Dec  6 01:34:15 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:34:15.662 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000022s
Dec  6 01:34:16 np0005548731 ceph-mds[82074]: mds.beacon.cephfs.compute-2.tjfgow missed beacon ack from the monitors
Dec  6 01:34:16 np0005548731 ceph-osd[80147]: log_channel(cluster) log [DBG] : 10.1 scrub starts
Dec  6 01:34:16 np0005548731 ceph-osd[80147]: log_channel(cluster) log [DBG] : 10.1 scrub ok
Dec  6 01:34:16 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e116 e116: 3 total, 3 up, 3 in
Dec  6 01:34:16 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec  6 01:34:16 np0005548731 ceph-osd[80147]: osd.2 pg_epoch: 116 pg[9.15( v 58'1159 (0'0,58'1159] local-lis/les=80/81 n=5 ec=62/51 lis/c=80/80 les/c/f=81/81/0 sis=116 pruub=11.170239449s) [1] r=-1 lpr=116 pi=[80,116)/1 crt=58'1159 mlcod 0'0 active pruub 236.898025513s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  6 01:34:16 np0005548731 ceph-osd[80147]: osd.2 pg_epoch: 116 pg[9.15( v 58'1159 (0'0,58'1159] local-lis/les=80/81 n=5 ec=62/51 lis/c=80/80 les/c/f=81/81/0 sis=116 pruub=11.170166016s) [1] r=-1 lpr=116 pi=[80,116)/1 crt=58'1159 mlcod 0'0 unknown NOTIFY pruub 236.898025513s@ mbc={}] state<Start>: transitioning to Stray
Dec  6 01:34:16 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:34:16 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:34:16 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:34:16.802 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:34:17 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:34:17 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000022s ======
Dec  6 01:34:17 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:34:17.665 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000022s
Dec  6 01:34:17 np0005548731 ceph-osd[80147]: log_channel(cluster) log [DBG] : 10.10 scrub starts
Dec  6 01:34:17 np0005548731 ceph-osd[80147]: log_channel(cluster) log [DBG] : 10.10 scrub ok
Dec  6 01:34:18 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  6 01:34:18 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e117 e117: 3 total, 3 up, 3 in
Dec  6 01:34:18 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]: dispatch
Dec  6 01:34:18 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]: dispatch
Dec  6 01:34:18 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]: dispatch
Dec  6 01:34:18 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]: dispatch
Dec  6 01:34:18 np0005548731 ceph-mon[77458]: mon.compute-2 calling monitor election
Dec  6 01:34:18 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]: dispatch
Dec  6 01:34:18 np0005548731 ceph-mon[77458]: mon.compute-0 calling monitor election
Dec  6 01:34:18 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]: dispatch
Dec  6 01:34:18 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd rm-pg-upmap-items", "format": "json", "pgid": "9.14"}]: dispatch
Dec  6 01:34:18 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]: dispatch
Dec  6 01:34:18 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]: dispatch
Dec  6 01:34:18 np0005548731 ceph-mon[77458]: mon.compute-0 is new leader, mons compute-0,compute-2 in quorum (ranks 0,1)
Dec  6 01:34:18 np0005548731 ceph-mon[77458]: Health check failed: 1/3 mons down, quorum compute-0,compute-2 (MON_DOWN)
Dec  6 01:34:18 np0005548731 ceph-mon[77458]: Health detail: HEALTH_WARN 1/3 mons down, quorum compute-0,compute-2
Dec  6 01:34:18 np0005548731 ceph-mon[77458]: [WRN] MON_DOWN: 1/3 mons down, quorum compute-0,compute-2
Dec  6 01:34:18 np0005548731 ceph-mon[77458]:    mon.compute-1 (rank 2) addr [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] is down (out of quorum)
Dec  6 01:34:18 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 01:34:18 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 01:34:18 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 01:34:18 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 01:34:18 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:34:18 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:34:18 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:34:18.805 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:34:19 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]': finished
Dec  6 01:34:19 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]': finished
Dec  6 01:34:19 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]': finished
Dec  6 01:34:19 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]': finished
Dec  6 01:34:19 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]': finished
Dec  6 01:34:19 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]': finished
Dec  6 01:34:19 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd rm-pg-upmap-items", "format": "json", "pgid": "9.14"}]': finished
Dec  6 01:34:19 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]': finished
Dec  6 01:34:19 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]': finished
Dec  6 01:34:19 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]: dispatch
Dec  6 01:34:19 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e118 e118: 3 total, 3 up, 3 in
Dec  6 01:34:19 np0005548731 ceph-osd[80147]: osd.2 pg_epoch: 118 pg[9.15( v 58'1159 (0'0,58'1159] local-lis/les=80/81 n=5 ec=62/51 lis/c=80/80 les/c/f=81/81/0 sis=118) [1]/[2] r=0 lpr=118 pi=[80,118)/1 crt=58'1159 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec  6 01:34:19 np0005548731 ceph-osd[80147]: osd.2 pg_epoch: 118 pg[9.15( v 58'1159 (0'0,58'1159] local-lis/les=80/81 n=5 ec=62/51 lis/c=80/80 les/c/f=81/81/0 sis=118) [1]/[2] r=0 lpr=118 pi=[80,118)/1 crt=58'1159 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec  6 01:34:19 np0005548731 python3.9[93181]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  6 01:34:19 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:34:19 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000022s ======
Dec  6 01:34:19 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:34:19.668 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000022s
Dec  6 01:34:19 np0005548731 ceph-osd[80147]: osd.2 pg_epoch: 118 pg[9.16( empty local-lis/les=0/0 n=0 ec=62/51 lis/c=82/82 les/c/f=83/83/0 sis=118) [2] r=0 lpr=118 pi=[82,118)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 01:34:20 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]': finished
Dec  6 01:34:20 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]: dispatch
Dec  6 01:34:20 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e119 e119: 3 total, 3 up, 3 in
Dec  6 01:34:20 np0005548731 ceph-osd[80147]: osd.2 pg_epoch: 119 pg[9.16( empty local-lis/les=0/0 n=0 ec=62/51 lis/c=82/82 les/c/f=83/83/0 sis=119) [2]/[1] r=-1 lpr=119 pi=[82,119)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  6 01:34:20 np0005548731 ceph-osd[80147]: osd.2 pg_epoch: 119 pg[9.16( empty local-lis/les=0/0 n=0 ec=62/51 lis/c=82/82 les/c/f=83/83/0 sis=119) [2]/[1] r=-1 lpr=119 pi=[82,119)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec  6 01:34:20 np0005548731 python3.9[93334]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile
Dec  6 01:34:20 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:34:20 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000022s ======
Dec  6 01:34:20 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:34:20.808 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000022s
Dec  6 01:34:21 np0005548731 python3.9[93484]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  6 01:34:21 np0005548731 ceph-osd[80147]: osd.2 pg_epoch: 119 pg[9.15( v 58'1159 (0'0,58'1159] local-lis/les=118/119 n=5 ec=62/51 lis/c=80/80 les/c/f=81/81/0 sis=118) [1]/[2] async=[1] r=0 lpr=118 pi=[80,118)/1 crt=58'1159 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 01:34:21 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]': finished
Dec  6 01:34:21 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 01:34:21 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 01:34:21 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e120 e120: 3 total, 3 up, 3 in
Dec  6 01:34:21 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:34:21 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:34:21 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:34:21.671 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:34:22 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  6 01:34:22 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 01:34:22 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  6 01:34:22 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e121 e121: 3 total, 3 up, 3 in
Dec  6 01:34:22 np0005548731 python3.9[93637]: ansible-ansible.builtin.systemd Invoked with enabled=True name=tuned state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  6 01:34:22 np0005548731 systemd[1]: Stopping Dynamic System Tuning Daemon...
Dec  6 01:34:22 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:34:22 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:34:22 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:34:22.814 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:34:22 np0005548731 systemd[1]: tuned.service: Deactivated successfully.
Dec  6 01:34:22 np0005548731 systemd[1]: Stopped Dynamic System Tuning Daemon.
Dec  6 01:34:22 np0005548731 systemd[1]: Starting Dynamic System Tuning Daemon...
Dec  6 01:34:23 np0005548731 systemd[1]: Started Dynamic System Tuning Daemon.
Dec  6 01:34:23 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  6 01:34:23 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e122 e122: 3 total, 3 up, 3 in
Dec  6 01:34:23 np0005548731 ceph-osd[80147]: osd.2 pg_epoch: 122 pg[9.16( v 58'1159 (0'0,58'1159] local-lis/les=0/0 n=5 ec=62/51 lis/c=119/82 les/c/f=120/83/0 sis=122) [2] r=0 lpr=122 pi=[82,122)/1 luod=0'0 crt=58'1159 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec  6 01:34:23 np0005548731 ceph-osd[80147]: osd.2 pg_epoch: 122 pg[9.16( v 58'1159 (0'0,58'1159] local-lis/les=0/0 n=5 ec=62/51 lis/c=119/82 les/c/f=120/83/0 sis=122) [2] r=0 lpr=122 pi=[82,122)/1 crt=58'1159 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 01:34:23 np0005548731 ceph-osd[80147]: osd.2 pg_epoch: 122 pg[9.15( v 58'1159 (0'0,58'1159] local-lis/les=118/119 n=5 ec=62/51 lis/c=118/80 les/c/f=119/81/0 sis=122 pruub=13.558900833s) [1] async=[1] r=-1 lpr=122 pi=[80,122)/1 crt=58'1159 mlcod 58'1159 active pruub 246.189483643s@ mbc={255={}}] start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  6 01:34:23 np0005548731 ceph-osd[80147]: osd.2 pg_epoch: 122 pg[9.15( v 58'1159 (0'0,58'1159] local-lis/les=118/119 n=5 ec=62/51 lis/c=118/80 les/c/f=119/81/0 sis=122 pruub=13.558813095s) [1] r=-1 lpr=122 pi=[80,122)/1 crt=58'1159 mlcod 0'0 unknown NOTIFY pruub 246.189483643s@ mbc={}] state<Start>: transitioning to Stray
Dec  6 01:34:23 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:34:23 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000022s ======
Dec  6 01:34:23 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:34:23.674 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000022s
Dec  6 01:34:23 np0005548731 ceph-osd[80147]: log_channel(cluster) log [DBG] : 9.17 scrub starts
Dec  6 01:34:23 np0005548731 ceph-osd[80147]: log_channel(cluster) log [DBG] : 9.17 scrub ok
Dec  6 01:34:24 np0005548731 python3.9[93799]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline
Dec  6 01:34:24 np0005548731 ceph-mon[77458]: log_channel(cluster) log [INF] : mon.compute-2 calling monitor election
Dec  6 01:34:24 np0005548731 ceph-mon[77458]: paxos.1).electionLogic(21) init, last seen epoch 21, mid-election, bumping
Dec  6 01:34:24 np0005548731 ceph-mon[77458]: mon.compute-2@1(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec  6 01:34:24 np0005548731 ceph-mon[77458]: mon.compute-2@1(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec  6 01:34:24 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:34:24 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:34:24 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:34:24.817 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:34:24 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec  6 01:34:25 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e123 e123: 3 total, 3 up, 3 in
Dec  6 01:34:25 np0005548731 ceph-mon[77458]: mon.compute-0 calling monitor election
Dec  6 01:34:25 np0005548731 ceph-mon[77458]: mon.compute-2 calling monitor election
Dec  6 01:34:25 np0005548731 ceph-mon[77458]: mon.compute-0 is new leader, mons compute-0,compute-2,compute-1 in quorum (ranks 0,1,2)
Dec  6 01:34:25 np0005548731 ceph-mon[77458]: Health check cleared: MON_DOWN (was: 1/3 mons down, quorum compute-0,compute-2)
Dec  6 01:34:25 np0005548731 ceph-mon[77458]: Cluster is now healthy
Dec  6 01:34:25 np0005548731 ceph-osd[80147]: osd.2 pg_epoch: 123 pg[9.16( v 58'1159 (0'0,58'1159] local-lis/les=122/123 n=5 ec=62/51 lis/c=119/82 les/c/f=120/83/0 sis=122) [2] r=0 lpr=122 pi=[82,122)/1 crt=58'1159 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 01:34:25 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:34:25 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000022s ======
Dec  6 01:34:25 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:34:25.676 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000022s
Dec  6 01:34:25 np0005548731 ceph-osd[80147]: log_channel(cluster) log [DBG] : 9.1b scrub starts
Dec  6 01:34:25 np0005548731 ceph-osd[80147]: log_channel(cluster) log [DBG] : 9.1b scrub ok
Dec  6 01:34:26 np0005548731 ceph-mon[77458]: overall HEALTH_OK
Dec  6 01:34:26 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:34:26 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:34:26 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:34:26.820 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:34:27 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:34:27 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000022s ======
Dec  6 01:34:27 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:34:27.678 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000022s
Dec  6 01:34:28 np0005548731 python3.9[93952]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksm.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  6 01:34:28 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  6 01:34:28 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 01:34:28 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 01:34:28 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]: dispatch
Dec  6 01:34:28 np0005548731 python3.9[94157]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksmtuned.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  6 01:34:28 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:34:28 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:34:28 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:34:28.822 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:34:28 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e124 e124: 3 total, 3 up, 3 in
Dec  6 01:34:29 np0005548731 systemd[1]: session-34.scope: Deactivated successfully.
Dec  6 01:34:29 np0005548731 systemd[1]: session-34.scope: Consumed 1min 8.690s CPU time.
Dec  6 01:34:29 np0005548731 systemd-logind[794]: Session 34 logged out. Waiting for processes to exit.
Dec  6 01:34:29 np0005548731 systemd-logind[794]: Removed session 34.
Dec  6 01:34:29 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:34:29 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:34:29 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:34:29.680 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:34:29 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]': finished
Dec  6 01:34:30 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:34:30 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:34:30 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:34:30.825 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:34:30 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e125 e125: 3 total, 3 up, 3 in
Dec  6 01:34:30 np0005548731 ceph-osd[80147]: osd.2 pg_epoch: 125 pg[9.19( v 58'1159 (0'0,58'1159] local-lis/les=89/90 n=5 ec=62/51 lis/c=89/89 les/c/f=90/90/0 sis=125 pruub=13.078331947s) [0] r=-1 lpr=125 pi=[89,125)/1 crt=58'1159 mlcod 0'0 active pruub 252.898559570s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  6 01:34:30 np0005548731 ceph-osd[80147]: osd.2 pg_epoch: 125 pg[9.19( v 58'1159 (0'0,58'1159] local-lis/les=89/90 n=5 ec=62/51 lis/c=89/89 les/c/f=90/90/0 sis=125 pruub=13.078227997s) [0] r=-1 lpr=125 pi=[89,125)/1 crt=58'1159 mlcod 0'0 unknown NOTIFY pruub 252.898559570s@ mbc={}] state<Start>: transitioning to Stray
Dec  6 01:34:31 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]: dispatch
Dec  6 01:34:31 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:34:31 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:34:31 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:34:31.683 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:34:32 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e126 e126: 3 total, 3 up, 3 in
Dec  6 01:34:32 np0005548731 ceph-osd[80147]: osd.2 pg_epoch: 126 pg[9.19( v 58'1159 (0'0,58'1159] local-lis/les=89/90 n=5 ec=62/51 lis/c=89/89 les/c/f=90/90/0 sis=126) [0]/[2] r=0 lpr=126 pi=[89,126)/1 crt=58'1159 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec  6 01:34:32 np0005548731 ceph-osd[80147]: osd.2 pg_epoch: 126 pg[9.19( v 58'1159 (0'0,58'1159] local-lis/les=89/90 n=5 ec=62/51 lis/c=89/89 les/c/f=90/90/0 sis=126) [0]/[2] r=0 lpr=126 pi=[89,126)/1 crt=58'1159 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec  6 01:34:32 np0005548731 ceph-osd[80147]: log_channel(cluster) log [DBG] : 9.3 scrub starts
Dec  6 01:34:32 np0005548731 ceph-osd[80147]: log_channel(cluster) log [DBG] : 9.3 scrub ok
Dec  6 01:34:32 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:34:32 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000021s ======
Dec  6 01:34:32 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:34:32.827 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000021s
Dec  6 01:34:32 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]': finished
Dec  6 01:34:33 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  6 01:34:33 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:34:33 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:34:33 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:34:33.686 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:34:34 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e127 e127: 3 total, 3 up, 3 in
Dec  6 01:34:34 np0005548731 systemd-logind[794]: New session 35 of user zuul.
Dec  6 01:34:34 np0005548731 systemd[1]: Started Session 35 of User zuul.
Dec  6 01:34:34 np0005548731 ceph-osd[80147]: osd.2 pg_epoch: 127 pg[9.19( v 58'1159 (0'0,58'1159] local-lis/les=126/127 n=5 ec=62/51 lis/c=89/89 les/c/f=90/90/0 sis=126) [0]/[2] async=[0] r=0 lpr=126 pi=[89,126)/1 crt=58'1159 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 01:34:34 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:34:34 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:34:34 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:34:34.832 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:34:35 np0005548731 python3.9[94390]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  6 01:34:35 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e128 e128: 3 total, 3 up, 3 in
Dec  6 01:34:35 np0005548731 ceph-osd[80147]: osd.2 pg_epoch: 128 pg[9.19( v 58'1159 (0'0,58'1159] local-lis/les=126/127 n=5 ec=62/51 lis/c=126/89 les/c/f=127/90/0 sis=128 pruub=14.951223373s) [0] async=[0] r=-1 lpr=128 pi=[89,128)/1 crt=58'1159 mlcod 58'1159 active pruub 259.438964844s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  6 01:34:35 np0005548731 ceph-osd[80147]: osd.2 pg_epoch: 128 pg[9.19( v 58'1159 (0'0,58'1159] local-lis/les=126/127 n=5 ec=62/51 lis/c=126/89 les/c/f=127/90/0 sis=128 pruub=14.951118469s) [0] r=-1 lpr=128 pi=[89,128)/1 crt=58'1159 mlcod 0'0 unknown NOTIFY pruub 259.438964844s@ mbc={}] state<Start>: transitioning to Stray
Dec  6 01:34:35 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:34:35 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:34:35 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:34:35.689 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:34:35 np0005548731 ceph-osd[80147]: log_channel(cluster) log [DBG] : 9.13 scrub starts
Dec  6 01:34:35 np0005548731 ceph-osd[80147]: log_channel(cluster) log [DBG] : 9.13 scrub ok
Dec  6 01:34:36 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e129 e129: 3 total, 3 up, 3 in
Dec  6 01:34:36 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:34:36 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000022s ======
Dec  6 01:34:36 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:34:36.835 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000022s
Dec  6 01:34:36 np0005548731 python3.9[94547]: ansible-ansible.builtin.getent Invoked with database=passwd key=openvswitch fail_key=True service=None split=None
Dec  6 01:34:37 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:34:37 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:34:37 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:34:37.692 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:34:37 np0005548731 ceph-osd[80147]: log_channel(cluster) log [DBG] : 9.b deep-scrub starts
Dec  6 01:34:37 np0005548731 ceph-osd[80147]: log_channel(cluster) log [DBG] : 9.b deep-scrub ok
Dec  6 01:34:37 np0005548731 python3.9[94700]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec  6 01:34:38 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  6 01:34:38 np0005548731 ceph-osd[80147]: log_channel(cluster) log [DBG] : 9.7 scrub starts
Dec  6 01:34:38 np0005548731 ceph-osd[80147]: log_channel(cluster) log [DBG] : 9.7 scrub ok
Dec  6 01:34:38 np0005548731 python3.9[94785]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openvswitch'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Dec  6 01:34:38 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:34:38 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:34:38 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:34:38.839 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:34:39 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:34:39 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:34:39 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:34:39.694 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:34:40 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:34:40 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:34:40 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:34:40.842 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:34:41 np0005548731 python3.9[94939]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec  6 01:34:41 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:34:41 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000022s ======
Dec  6 01:34:41 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:34:41.697 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000022s
Dec  6 01:34:42 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:34:42 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:34:42 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:34:42.845 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:34:42 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]: dispatch
Dec  6 01:34:42 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e130 e130: 3 total, 3 up, 3 in
Dec  6 01:34:43 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  6 01:34:43 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e131 e131: 3 total, 3 up, 3 in
Dec  6 01:34:43 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:34:43 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000022s ======
Dec  6 01:34:43 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:34:43.699 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000022s
Dec  6 01:34:43 np0005548731 ceph-osd[80147]: log_channel(cluster) log [DBG] : 9.5 scrub starts
Dec  6 01:34:43 np0005548731 ceph-osd[80147]: log_channel(cluster) log [DBG] : 9.5 scrub ok
Dec  6 01:34:43 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]': finished
Dec  6 01:34:44 np0005548731 python3.9[95094]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec  6 01:34:44 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e132 e132: 3 total, 3 up, 3 in
Dec  6 01:34:44 np0005548731 ceph-osd[80147]: osd.2 pg_epoch: 132 pg[9.1b( v 58'1159 (0'0,58'1159] local-lis/les=72/73 n=2 ec=62/51 lis/c=72/72 les/c/f=73/73/0 sis=132 pruub=15.240422249s) [0] r=-1 lpr=132 pi=[72,132)/1 crt=58'1159 mlcod 0'0 active pruub 268.898956299s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  6 01:34:44 np0005548731 ceph-osd[80147]: osd.2 pg_epoch: 132 pg[9.1b( v 58'1159 (0'0,58'1159] local-lis/les=72/73 n=2 ec=62/51 lis/c=72/72 les/c/f=73/73/0 sis=132 pruub=15.240328789s) [0] r=-1 lpr=132 pi=[72,132)/1 crt=58'1159 mlcod 0'0 unknown NOTIFY pruub 268.898956299s@ mbc={}] state<Start>: transitioning to Stray
Dec  6 01:34:44 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:34:44 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:34:44 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:34:44.848 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:34:45 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]: dispatch
Dec  6 01:34:45 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]': finished
Dec  6 01:34:45 np0005548731 python3.9[95248]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  6 01:34:45 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e133 e133: 3 total, 3 up, 3 in
Dec  6 01:34:45 np0005548731 ceph-osd[80147]: osd.2 pg_epoch: 133 pg[9.1b( v 58'1159 (0'0,58'1159] local-lis/les=72/73 n=2 ec=62/51 lis/c=72/72 les/c/f=73/73/0 sis=133) [0]/[2] r=0 lpr=133 pi=[72,133)/1 crt=58'1159 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec  6 01:34:45 np0005548731 ceph-osd[80147]: osd.2 pg_epoch: 133 pg[9.1b( v 58'1159 (0'0,58'1159] local-lis/les=72/73 n=2 ec=62/51 lis/c=72/72 les/c/f=73/73/0 sis=133) [0]/[2] r=0 lpr=133 pi=[72,133)/1 crt=58'1159 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec  6 01:34:45 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:34:45 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000022s ======
Dec  6 01:34:45 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:34:45.701 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000022s
Dec  6 01:34:46 np0005548731 python3.9[95400]: ansible-community.general.sefcontext Invoked with selevel=s0 setype=container_file_t state=present target=/var/lib/edpm-config(/.*)? ignore_selinux_state=False ftype=a reload=True substitute=None seuser=None
Dec  6 01:34:46 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:34:46 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000022s ======
Dec  6 01:34:46 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:34:46.851 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000022s
Dec  6 01:34:47 np0005548731 python3.9[95551]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  6 01:34:47 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:34:47 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:34:47 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:34:47.703 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:34:47 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e134 e134: 3 total, 3 up, 3 in
Dec  6 01:34:48 np0005548731 ceph-osd[80147]: osd.2 pg_epoch: 134 pg[9.1b( v 58'1159 (0'0,58'1159] local-lis/les=133/134 n=2 ec=62/51 lis/c=72/72 les/c/f=73/73/0 sis=133) [0]/[2] async=[0] r=0 lpr=133 pi=[72,133)/1 crt=58'1159 mlcod 0'0 active+remapped mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 01:34:48 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 01:34:48 np0005548731 ceph-osd[80147]: log_channel(cluster) log [DBG] : 9.8 scrub starts
Dec  6 01:34:48 np0005548731 ceph-osd[80147]: log_channel(cluster) log [DBG] : 9.8 scrub ok
Dec  6 01:34:48 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e135 e135: 3 total, 3 up, 3 in
Dec  6 01:34:48 np0005548731 ceph-osd[80147]: osd.2 pg_epoch: 135 pg[9.1b( v 58'1159 (0'0,58'1159] local-lis/les=133/134 n=2 ec=62/51 lis/c=133/72 les/c/f=134/73/0 sis=135 pruub=15.234102249s) [0] async=[0] r=-1 lpr=135 pi=[72,135)/1 crt=58'1159 mlcod 58'1159 active pruub 272.989715576s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  6 01:34:48 np0005548731 ceph-osd[80147]: osd.2 pg_epoch: 135 pg[9.1b( v 58'1159 (0'0,58'1159] local-lis/les=133/134 n=2 ec=62/51 lis/c=133/72 les/c/f=134/73/0 sis=135 pruub=15.233796120s) [0] r=-1 lpr=135 pi=[72,135)/1 crt=58'1159 mlcod 0'0 unknown NOTIFY pruub 272.989715576s@ mbc={}] state<Start>: transitioning to Stray
Dec  6 01:34:48 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:34:48 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000022s ======
Dec  6 01:34:48 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:34:48.853 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000022s
Dec  6 01:34:49 np0005548731 python3.9[95710]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec  6 01:34:49 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:34:49 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:34:49 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:34:49.705 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:34:49 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e136 e136: 3 total, 3 up, 3 in
Dec  6 01:34:50 np0005548731 ceph-osd[80147]: log_channel(cluster) log [DBG] : 9.18 scrub starts
Dec  6 01:34:50 np0005548731 ceph-osd[80147]: log_channel(cluster) log [DBG] : 9.18 scrub ok
Dec  6 01:34:50 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:34:50 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:34:50 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:34:50.857 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:34:51 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:34:51 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000022s ======
Dec  6 01:34:51 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:34:51.707 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000022s
Dec  6 01:34:52 np0005548731 python3.9[95915]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  6 01:34:52 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:34:52 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:34:52 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:34:52.860 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:34:53 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 01:34:53 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e137 e137: 3 total, 3 up, 3 in
Dec  6 01:34:53 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:34:53 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000022s ======
Dec  6 01:34:53 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:34:53.709 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000022s
Dec  6 01:34:54 np0005548731 python3.9[96202]: ansible-ansible.builtin.file Invoked with mode=0750 path=/var/lib/edpm-config selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Dec  6 01:34:54 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]: dispatch
Dec  6 01:34:54 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]': finished
Dec  6 01:34:54 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:34:54 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000022s ======
Dec  6 01:34:54 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:34:54.864 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000022s
Dec  6 01:34:54 np0005548731 python3.9[96353]: ansible-ansible.builtin.stat Invoked with path=/etc/cloud/cloud.cfg.d follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  6 01:34:55 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e138 e138: 3 total, 3 up, 3 in
Dec  6 01:34:55 np0005548731 ceph-osd[80147]: log_channel(cluster) log [DBG] : 9.9 scrub starts
Dec  6 01:34:55 np0005548731 ceph-osd[80147]: log_channel(cluster) log [DBG] : 9.9 scrub ok
Dec  6 01:34:55 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:34:55 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000022s ======
Dec  6 01:34:55 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:34:55.711 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000022s
Dec  6 01:34:55 np0005548731 ceph-osd[80147]: osd.2 pg_epoch: 138 pg[9.1d( empty local-lis/les=0/0 n=0 ec=62/51 lis/c=100/100 les/c/f=101/101/0 sis=138) [2] r=0 lpr=138 pi=[100,138)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 01:34:56 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]: dispatch
Dec  6 01:34:56 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]': finished
Dec  6 01:34:56 np0005548731 python3.9[96508]: ansible-ansible.legacy.dnf Invoked with name=['NetworkManager-ovs'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec  6 01:34:56 np0005548731 ceph-osd[80147]: log_channel(cluster) log [DBG] : 9.16 scrub starts
Dec  6 01:34:56 np0005548731 ceph-osd[80147]: log_channel(cluster) log [DBG] : 9.16 scrub ok
Dec  6 01:34:56 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e139 e139: 3 total, 3 up, 3 in
Dec  6 01:34:56 np0005548731 ceph-osd[80147]: osd.2 pg_epoch: 139 pg[9.1d( empty local-lis/les=0/0 n=0 ec=62/51 lis/c=100/100 les/c/f=101/101/0 sis=139) [2]/[1] r=-1 lpr=139 pi=[100,139)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  6 01:34:56 np0005548731 ceph-osd[80147]: osd.2 pg_epoch: 139 pg[9.1d( empty local-lis/les=0/0 n=0 ec=62/51 lis/c=100/100 les/c/f=101/101/0 sis=139) [2]/[1] r=-1 lpr=139 pi=[100,139)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec  6 01:34:56 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:34:56 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000022s ======
Dec  6 01:34:56 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:34:56.868 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000022s
Dec  6 01:34:57 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]: dispatch
Dec  6 01:34:57 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e140 e140: 3 total, 3 up, 3 in
Dec  6 01:34:57 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:34:57 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:34:57 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:34:57.714 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:34:58 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 01:34:58 np0005548731 python3.9[96662]: ansible-ansible.legacy.dnf Invoked with name=['os-net-config'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec  6 01:34:58 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e141 e141: 3 total, 3 up, 3 in
Dec  6 01:34:58 np0005548731 ceph-osd[80147]: osd.2 pg_epoch: 141 pg[9.1d( v 58'1159 (0'0,58'1159] local-lis/les=0/0 n=5 ec=62/51 lis/c=139/100 les/c/f=140/101/0 sis=141) [2] r=0 lpr=141 pi=[100,141)/1 luod=0'0 crt=58'1159 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec  6 01:34:58 np0005548731 ceph-osd[80147]: osd.2 pg_epoch: 141 pg[9.1d( v 58'1159 (0'0,58'1159] local-lis/les=0/0 n=5 ec=62/51 lis/c=139/100 les/c/f=140/101/0 sis=141) [2] r=0 lpr=141 pi=[100,141)/1 crt=58'1159 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  6 01:34:58 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]': finished
Dec  6 01:34:58 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec  6 01:34:58 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:34:58 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:34:58 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:34:58.872 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:34:59 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:34:59 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:34:59 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:34:59.717 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:34:59 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e142 e142: 3 total, 3 up, 3 in
Dec  6 01:34:59 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]': finished
Dec  6 01:34:59 np0005548731 ceph-osd[80147]: osd.2 pg_epoch: 142 pg[9.1d( v 58'1159 (0'0,58'1159] local-lis/les=141/142 n=5 ec=62/51 lis/c=139/100 les/c/f=140/101/0 sis=141) [2] r=0 lpr=141 pi=[100,141)/1 crt=58'1159 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  6 01:35:00 np0005548731 ceph-osd[80147]: log_channel(cluster) log [DBG] : 9.1d scrub starts
Dec  6 01:35:00 np0005548731 ceph-osd[80147]: log_channel(cluster) log [DBG] : 9.1d scrub ok
Dec  6 01:35:00 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:35:00 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000022s ======
Dec  6 01:35:00 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:35:00.874 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000022s
Dec  6 01:35:01 np0005548731 python3.9[96816]: ansible-ansible.builtin.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  6 01:35:01 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e143 e143: 3 total, 3 up, 3 in
Dec  6 01:35:01 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:35:01 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:35:01 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:35:01.719 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:35:02 np0005548731 python3.9[96971]: ansible-ansible.builtin.slurp Invoked with path=/var/lib/edpm-config/os-net-config.returncode src=/var/lib/edpm-config/os-net-config.returncode
Dec  6 01:35:02 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e144 e144: 3 total, 3 up, 3 in
Dec  6 01:35:02 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:35:02 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:35:02 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:35:02.879 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:35:03 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 01:35:03 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:35:03 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:35:03 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:35:03.722 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:35:04 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e145 e145: 3 total, 3 up, 3 in
Dec  6 01:35:04 np0005548731 systemd[1]: session-35.scope: Deactivated successfully.
Dec  6 01:35:04 np0005548731 systemd[1]: session-35.scope: Consumed 18.963s CPU time.
Dec  6 01:35:04 np0005548731 systemd-logind[794]: Session 35 logged out. Waiting for processes to exit.
Dec  6 01:35:04 np0005548731 systemd-logind[794]: Removed session 35.
Dec  6 01:35:04 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:35:04 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:35:04 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:35:04.882 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:35:05 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:35:05 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:35:05 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:35:05.725 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:35:06 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:35:06 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:35:06 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:35:06.885 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:35:07 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:35:07 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:35:07 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:35:07.728 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:35:08 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 01:35:08 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:35:08 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:35:08 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:35:08.890 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:35:09 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:35:09 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:35:09 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:35:09.732 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:35:10 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:35:10 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:35:10 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:35:10.893 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:35:11 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:35:11 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:35:11 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:35:11.736 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:35:12 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:35:12 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:35:12 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:35:12.896 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:35:13 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 01:35:13 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:35:13 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:35:13 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:35:13.738 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:35:14 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:35:14 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:35:14 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:35:14.900 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:35:15 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:35:15 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000018s ======
Dec  6 01:35:15 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:35:15.741 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000018s
Dec  6 01:35:16 np0005548731 systemd-logind[794]: New session 36 of user zuul.
Dec  6 01:35:16 np0005548731 systemd[1]: Started Session 36 of User zuul.
Dec  6 01:35:16 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:35:16 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000018s ======
Dec  6 01:35:16 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:35:16.903 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000018s
Dec  6 01:35:17 np0005548731 python3.9[97206]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  6 01:35:17 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:35:17 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000018s ======
Dec  6 01:35:17 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:35:17.743 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000018s
Dec  6 01:35:18 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 01:35:18 np0005548731 python3.9[97361]: ansible-ansible.builtin.setup Invoked with filter=['ansible_default_ipv4'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec  6 01:35:18 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:35:18 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000018s ======
Dec  6 01:35:18 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:35:18.907 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000018s
Dec  6 01:35:19 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:35:19 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:35:19 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:35:19.746 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:35:20 np0005548731 python3.9[97555]: ansible-ansible.legacy.command Invoked with _raw_params=hostname -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  6 01:35:20 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:35:20 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:35:20 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:35:20.912 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:35:20 np0005548731 systemd[1]: session-36.scope: Deactivated successfully.
Dec  6 01:35:20 np0005548731 systemd[1]: session-36.scope: Consumed 2.515s CPU time.
Dec  6 01:35:20 np0005548731 systemd-logind[794]: Session 36 logged out. Waiting for processes to exit.
Dec  6 01:35:20 np0005548731 systemd-logind[794]: Removed session 36.
Dec  6 01:35:21 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:35:21 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000018s ======
Dec  6 01:35:21 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:35:21.748 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000018s
Dec  6 01:35:22 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:35:22 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:35:22 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:35:22.915 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:35:23 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 01:35:23 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:35:23 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:35:23 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:35:23.751 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:35:24 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:35:24 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:35:24 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:35:24.919 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:35:25 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:35:25 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:35:25 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:35:25.754 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:35:26 np0005548731 systemd-logind[794]: New session 37 of user zuul.
Dec  6 01:35:26 np0005548731 systemd[1]: Started Session 37 of User zuul.
Dec  6 01:35:26 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:35:26 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000018s ======
Dec  6 01:35:26 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:35:26.923 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000018s
Dec  6 01:35:27 np0005548731 python3.9[97738]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  6 01:35:27 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:35:27 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000018s ======
Dec  6 01:35:27 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:35:27.756 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000018s
Dec  6 01:35:28 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 01:35:28 np0005548731 podman[98062]: 2025-12-06 06:35:28.701055132 +0000 UTC m=+0.080581817 container exec 541e7276f6038dd900483907d1613bdfcbf99ea3714cf53a920a7189986fab54 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-mon-compute-2, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  6 01:35:28 np0005548731 python3.9[98026]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  6 01:35:28 np0005548731 podman[98062]: 2025-12-06 06:35:28.818026577 +0000 UTC m=+0.197553232 container exec_died 541e7276f6038dd900483907d1613bdfcbf99ea3714cf53a920a7189986fab54 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-mon-compute-2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec  6 01:35:28 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:35:28 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:35:28 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:35:28.926 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:35:29 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:35:29 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:35:29 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:35:29.759 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:35:29 np0005548731 python3.9[98384]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec  6 01:35:30 np0005548731 podman[98339]: 2025-12-06 06:35:30.140430721 +0000 UTC m=+0.690857838 container exec 42755d5ddab81b86d84b40c9464a8af637e2d196c1a098a61593efb6a9875167 (image=quay.io/ceph/haproxy:2.3, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-haproxy-rgw-default-compute-2-nyemkw)
Dec  6 01:35:30 np0005548731 podman[98339]: 2025-12-06 06:35:30.154029174 +0000 UTC m=+0.704456291 container exec_died 42755d5ddab81b86d84b40c9464a8af637e2d196c1a098a61593efb6a9875167 (image=quay.io/ceph/haproxy:2.3, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-haproxy-rgw-default-compute-2-nyemkw)
Dec  6 01:35:30 np0005548731 podman[98447]: 2025-12-06 06:35:30.397517903 +0000 UTC m=+0.076100292 container exec 60f905acca99db805341691264f7f91f9f4f43c91aa728d21396595dfca7cf39 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-keepalived-rgw-default-compute-2-cossgt, name=keepalived, build-date=2023-02-22T09:23:20, description=keepalived for Ceph, vendor=Red Hat, Inc., io.k8s.display-name=Keepalived on RHEL 9, io.openshift.tags=Ceph keepalived, summary=Provides keepalived on RHEL 9 for Ceph., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1793, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.buildah.version=1.28.2, vcs-type=git, io.openshift.expose-services=, com.redhat.component=keepalived-container, version=2.2.4, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9)
Dec  6 01:35:30 np0005548731 podman[98447]: 2025-12-06 06:35:30.413045033 +0000 UTC m=+0.091627422 container exec_died 60f905acca99db805341691264f7f91f9f4f43c91aa728d21396595dfca7cf39 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-keepalived-rgw-default-compute-2-cossgt, release=1793, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, distribution-scope=public, io.k8s.display-name=Keepalived on RHEL 9, name=keepalived, architecture=x86_64, io.openshift.tags=Ceph keepalived, vcs-type=git, version=2.2.4, summary=Provides keepalived on RHEL 9 for Ceph., com.redhat.component=keepalived-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.28.2, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, build-date=2023-02-22T09:23:20, description=keepalived for Ceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vendor=Red Hat, Inc.)
Dec  6 01:35:30 np0005548731 python3.9[98560]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec  6 01:35:30 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:35:30 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000018s ======
Dec  6 01:35:30 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:35:30.930 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000018s
Dec  6 01:35:31 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 01:35:31 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 01:35:31 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 01:35:31 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 01:35:31 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 01:35:31 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 01:35:31 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:35:31 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000018s ======
Dec  6 01:35:31 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:35:31.761 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000018s
Dec  6 01:35:32 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  6 01:35:32 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 01:35:32 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  6 01:35:32 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:35:32 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:35:32 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:35:32.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:35:33 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 01:35:33 np0005548731 python3.9[98891]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec  6 01:35:33 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:35:33 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:35:33 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:35:33.764 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:35:34 np0005548731 python3.9[99087]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 01:35:34 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:35:34 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:35:34 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:35:34.938 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:35:35 np0005548731 python3.9[99239]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  6 01:35:35 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:35:35 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:35:35 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:35:35.767 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:35:36 np0005548731 python3.9[99405]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  6 01:35:36 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:35:36 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000018s ======
Dec  6 01:35:36 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:35:36.942 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000018s
Dec  6 01:35:37 np0005548731 python3.9[99483]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/containers/networks/podman.json _original_basename=podman_network_config.j2 recurse=False state=file path=/etc/containers/networks/podman.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 01:35:37 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:35:37 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:35:37 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:35:37.770 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:35:37 np0005548731 python3.9[99635]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  6 01:35:38 np0005548731 python3.9[99714]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf _original_basename=registries.conf.j2 recurse=False state=file path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  6 01:35:38 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:35:38 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:35:38 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:35:38.944 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:35:39 np0005548731 python3.9[99867]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Dec  6 01:35:39 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 01:35:39 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:35:39 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:35:39 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:35:39.773 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:35:40 np0005548731 python3.9[100019]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Dec  6 01:35:40 np0005548731 python3.9[100203]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Dec  6 01:35:40 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:35:40 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:35:40 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:35:40.948 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:35:40 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 01:35:40 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 01:35:41 np0005548731 python3.9[100374]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Dec  6 01:35:41 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:35:41 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000018s ======
Dec  6 01:35:41 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:35:41.774 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000018s
Dec  6 01:35:42 np0005548731 python3.9[100527]: ansible-ansible.legacy.dnf Invoked with name=['openssh-server'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec  6 01:35:42 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:35:42 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:35:42 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:35:42.951 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:35:43 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:35:43 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:35:43 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:35:43.777 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:35:44 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 01:35:44 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:35:44 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:35:44 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:35:44.955 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:35:44 np0005548731 python3.9[100681]: ansible-setup Invoked with gather_subset=['!all', '!min', 'distribution', 'distribution_major_version', 'distribution_version', 'os_family'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  6 01:35:45 np0005548731 python3.9[100835]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  6 01:35:45 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:35:45 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:35:45 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:35:45.780 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:35:46 np0005548731 python3.9[100988]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  6 01:35:46 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:35:46 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:35:46 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:35:46.958 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:35:47 np0005548731 python3.9[101140]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-system-running _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  6 01:35:47 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:35:47 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000018s ======
Dec  6 01:35:47 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:35:47.783 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000018s
Dec  6 01:35:48 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:35:48 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:35:48 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:35:48.961 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:35:48 np0005548731 python3.9[101294]: ansible-service_facts Invoked
Dec  6 01:35:49 np0005548731 network[101311]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec  6 01:35:49 np0005548731 network[101312]: 'network-scripts' will be removed from distribution in near future.
Dec  6 01:35:49 np0005548731 network[101313]: It is advised to switch to 'NetworkManager' instead for network management.
Dec  6 01:35:49 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 01:35:49 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:35:49 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.002000036s ======
Dec  6 01:35:49 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:35:49.785 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000036s
Dec  6 01:35:50 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:35:50 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:35:50 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:35:50.965 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:35:51 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:35:51 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:35:51 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:35:51.789 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:35:52 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:35:52 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:35:52 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:35:52.969 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:35:53 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:35:53 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:35:53 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:35:53.792 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:35:54 np0005548731 python3.9[101818]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec  6 01:35:54 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 01:35:54 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:35:54 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:35:54 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:35:54.974 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:35:55 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:35:55 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:35:55 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:35:55.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:35:56 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:35:56 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:35:56 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:35:56.978 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:35:57 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:35:57 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000018s ======
Dec  6 01:35:57 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:35:57.797 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000018s
Dec  6 01:35:57 np0005548731 python3.9[101972]: ansible-package_facts Invoked with manager=['auto'] strategy=first
Dec  6 01:35:58 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:35:58 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:35:58 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:35:58.981 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:35:59 np0005548731 python3.9[102125]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  6 01:35:59 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 01:35:59 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:35:59 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:35:59 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:35:59.800 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:36:00 np0005548731 python3.9[102203]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/chrony.conf _original_basename=chrony.conf.j2 recurse=False state=file path=/etc/chrony.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 01:36:00 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:36:00 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:36:00 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:36:00.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:36:01 np0005548731 python3.9[102356]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/chronyd follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  6 01:36:01 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:36:01 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:36:01 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:36:01.803 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:36:01 np0005548731 python3.9[102434]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/sysconfig/chronyd _original_basename=chronyd.sysconfig.j2 recurse=False state=file path=/etc/sysconfig/chronyd force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 01:36:02 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:36:02 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:36:02 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:36:02.988 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:36:03 np0005548731 python3.9[102587]: ansible-lineinfile Invoked with backup=True create=True dest=/etc/sysconfig/network line=PEERNTP=no mode=0644 regexp=^PEERNTP= state=present path=/etc/sysconfig/network encoding=utf-8 backrefs=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 01:36:03 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:36:03 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:36:03 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:36:03.807 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:36:04 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 01:36:04 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:36:04 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:36:04 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:36:04.993 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:36:05 np0005548731 python3.9[102740]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec  6 01:36:05 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:36:05 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000018s ======
Dec  6 01:36:05 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:36:05.810 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000018s
Dec  6 01:36:06 np0005548731 python3.9[102825]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  6 01:36:06 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:36:06 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:36:06 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:36:06.997 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:36:07 np0005548731 systemd[1]: session-37.scope: Deactivated successfully.
Dec  6 01:36:07 np0005548731 systemd[1]: session-37.scope: Consumed 25.284s CPU time.
Dec  6 01:36:07 np0005548731 systemd-logind[794]: Session 37 logged out. Waiting for processes to exit.
Dec  6 01:36:07 np0005548731 systemd-logind[794]: Removed session 37.
Dec  6 01:36:07 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:36:07 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000018s ======
Dec  6 01:36:07 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:36:07.814 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000018s
Dec  6 01:36:09 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:36:09 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:36:09 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:36:09.001 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:36:09 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 01:36:09 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:36:09 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:36:09 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:36:09.817 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:36:11 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:36:11 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:36:11 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:36:11.005 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:36:11 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:36:11 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000018s ======
Dec  6 01:36:11 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:36:11.820 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000018s
Dec  6 01:36:13 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:36:13 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:36:13 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:36:13.009 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:36:13 np0005548731 systemd-logind[794]: New session 38 of user zuul.
Dec  6 01:36:13 np0005548731 systemd[1]: Started Session 38 of User zuul.
Dec  6 01:36:13 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:36:13 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000018s ======
Dec  6 01:36:13 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:36:13.822 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000018s
Dec  6 01:36:14 np0005548731 python3.9[103061]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 01:36:14 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 01:36:15 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:36:15 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:36:15 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:36:15.013 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:36:15 np0005548731 python3.9[103213]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/ceph-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  6 01:36:15 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:36:15 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000018s ======
Dec  6 01:36:15 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:36:15.824 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000018s
Dec  6 01:36:15 np0005548731 python3.9[103291]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/ceph-networks.yaml _original_basename=firewall.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/ceph-networks.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 01:36:16 np0005548731 systemd[1]: session-38.scope: Deactivated successfully.
Dec  6 01:36:16 np0005548731 systemd[1]: session-38.scope: Consumed 1.501s CPU time.
Dec  6 01:36:16 np0005548731 systemd-logind[794]: Session 38 logged out. Waiting for processes to exit.
Dec  6 01:36:16 np0005548731 systemd-logind[794]: Removed session 38.
Dec  6 01:36:17 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:36:17 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:36:17 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:36:17.017 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:36:17 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:36:17 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:36:17 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:36:17.827 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:36:19 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:36:19 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:36:19 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:36:19.021 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:36:19 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 01:36:19 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:36:19 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000018s ======
Dec  6 01:36:19 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:36:19.830 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000018s
Dec  6 01:36:21 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:36:21 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:36:21 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:36:21.024 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:36:21 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:36:21 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000018s ======
Dec  6 01:36:21 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:36:21.833 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000018s
Dec  6 01:36:22 np0005548731 systemd-logind[794]: New session 39 of user zuul.
Dec  6 01:36:22 np0005548731 systemd[1]: Started Session 39 of User zuul.
Dec  6 01:36:23 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:36:23 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:36:23 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:36:23.027 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:36:23 np0005548731 python3.9[103473]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  6 01:36:23 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:36:23 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:36:23 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:36:23.836 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:36:24 np0005548731 python3.9[103630]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 01:36:24 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 01:36:25 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:36:25 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:36:25 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:36:25.029 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:36:25 np0005548731 python3.9[103805]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  6 01:36:25 np0005548731 python3.9[103883]: ansible-ansible.legacy.file Invoked with group=zuul mode=0660 owner=zuul dest=/root/.config/containers/auth.json _original_basename=.v20sntcd recurse=False state=file path=/root/.config/containers/auth.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 01:36:25 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:36:25 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:36:25 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:36:25.838 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:36:26 np0005548731 python3.9[104036]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  6 01:36:27 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:36:27 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:36:27 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:36:27.032 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:36:27 np0005548731 python3.9[104114]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/sysconfig/podman_drop_in _original_basename=.jt308lwk recurse=False state=file path=/etc/sysconfig/podman_drop_in force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 01:36:27 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:36:27 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000019s ======
Dec  6 01:36:27 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:36:27.840 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000019s
Dec  6 01:36:28 np0005548731 python3.9[104267]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  6 01:36:29 np0005548731 python3.9[104419]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  6 01:36:29 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:36:29 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:36:29 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:36:29.036 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:36:29 np0005548731 python3.9[104497]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  6 01:36:29 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:36:29 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000019s ======
Dec  6 01:36:29 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:36:29.842 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000019s
Dec  6 01:36:30 np0005548731 python3.9[104649]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  6 01:36:30 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 01:36:30 np0005548731 python3.9[104728]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  6 01:36:31 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:36:31 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:36:31 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:36:31.038 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:36:31 np0005548731 python3.9[104930]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 01:36:31 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:36:31 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000018s ======
Dec  6 01:36:31 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:36:31.844 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000018s
Dec  6 01:36:32 np0005548731 python3.9[105083]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  6 01:36:32 np0005548731 python3.9[105161]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 01:36:33 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:36:33 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:36:33 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:36:33.042 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:36:33 np0005548731 python3.9[105313]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  6 01:36:33 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:36:33 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.002000037s ======
Dec  6 01:36:33 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:36:33.845 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000037s
Dec  6 01:36:34 np0005548731 python3.9[105392]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 01:36:35 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:36:35 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:36:35 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:36:35.044 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:36:35 np0005548731 python3.9[105544]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  6 01:36:35 np0005548731 systemd[1]: Reloading.
Dec  6 01:36:35 np0005548731 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  6 01:36:35 np0005548731 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  6 01:36:35 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 01:36:35 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:36:35 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000018s ======
Dec  6 01:36:35 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:36:35.847 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000018s
Dec  6 01:36:36 np0005548731 python3.9[105735]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  6 01:36:37 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:36:37 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:36:37 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:36:37.047 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:36:37 np0005548731 python3.9[105813]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 01:36:37 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:36:37 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000019s ======
Dec  6 01:36:37 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:36:37.849 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000019s
Dec  6 01:36:38 np0005548731 python3.9[105965]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  6 01:36:38 np0005548731 python3.9[106044]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 01:36:39 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:36:39 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:36:39 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:36:39.051 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:36:39 np0005548731 python3.9[106196]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  6 01:36:39 np0005548731 systemd[1]: Reloading.
Dec  6 01:36:39 np0005548731 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  6 01:36:39 np0005548731 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  6 01:36:39 np0005548731 systemd[1]: Starting Create netns directory...
Dec  6 01:36:39 np0005548731 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Dec  6 01:36:39 np0005548731 systemd[1]: netns-placeholder.service: Deactivated successfully.
Dec  6 01:36:39 np0005548731 systemd[1]: Finished Create netns directory.
Dec  6 01:36:39 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:36:39 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000019s ======
Dec  6 01:36:39 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:36:39.851 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000019s
Dec  6 01:36:40 np0005548731 python3.9[106390]: ansible-ansible.builtin.service_facts Invoked
Dec  6 01:36:40 np0005548731 network[106430]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec  6 01:36:40 np0005548731 network[106432]: 'network-scripts' will be removed from distribution in near future.
Dec  6 01:36:40 np0005548731 network[106433]: It is advised to switch to 'NetworkManager' instead for network management.
Dec  6 01:36:40 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 01:36:41 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:36:41 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000018s ======
Dec  6 01:36:41 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:36:41.055 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000018s
Dec  6 01:36:41 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:36:41 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000018s ======
Dec  6 01:36:41 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:36:41.853 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000018s
Dec  6 01:36:42 np0005548731 podman[106613]: 2025-12-06 06:36:42.009907198 +0000 UTC m=+0.078461026 container exec 541e7276f6038dd900483907d1613bdfcbf99ea3714cf53a920a7189986fab54 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-mon-compute-2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec  6 01:36:42 np0005548731 podman[106613]: 2025-12-06 06:36:42.137009159 +0000 UTC m=+0.205562957 container exec_died 541e7276f6038dd900483907d1613bdfcbf99ea3714cf53a920a7189986fab54 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-mon-compute-2, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  6 01:36:42 np0005548731 podman[106803]: 2025-12-06 06:36:42.816154535 +0000 UTC m=+0.058515806 container exec 42755d5ddab81b86d84b40c9464a8af637e2d196c1a098a61593efb6a9875167 (image=quay.io/ceph/haproxy:2.3, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-haproxy-rgw-default-compute-2-nyemkw)
Dec  6 01:36:42 np0005548731 podman[106803]: 2025-12-06 06:36:42.858032323 +0000 UTC m=+0.100393594 container exec_died 42755d5ddab81b86d84b40c9464a8af637e2d196c1a098a61593efb6a9875167 (image=quay.io/ceph/haproxy:2.3, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-haproxy-rgw-default-compute-2-nyemkw)
Dec  6 01:36:43 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:36:43 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:36:43 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:36:43.060 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:36:43 np0005548731 podman[106884]: 2025-12-06 06:36:43.107139618 +0000 UTC m=+0.068767841 container exec 60f905acca99db805341691264f7f91f9f4f43c91aa728d21396595dfca7cf39 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-keepalived-rgw-default-compute-2-cossgt, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Keepalived on RHEL 9, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, summary=Provides keepalived on RHEL 9 for Ceph., version=2.2.4, distribution-scope=public, io.openshift.expose-services=, name=keepalived, vcs-type=git, build-date=2023-02-22T09:23:20, io.buildah.version=1.28.2, vendor=Red Hat, Inc., release=1793, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, architecture=x86_64, io.openshift.tags=Ceph keepalived, description=keepalived for Ceph, com.redhat.component=keepalived-container)
Dec  6 01:36:43 np0005548731 podman[106884]: 2025-12-06 06:36:43.123094162 +0000 UTC m=+0.084722365 container exec_died 60f905acca99db805341691264f7f91f9f4f43c91aa728d21396595dfca7cf39 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-keepalived-rgw-default-compute-2-cossgt, vcs-type=git, build-date=2023-02-22T09:23:20, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, vendor=Red Hat, Inc., version=2.2.4, distribution-scope=public, com.redhat.component=keepalived-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides keepalived on RHEL 9 for Ceph., io.buildah.version=1.28.2, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Keepalived on RHEL 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, release=1793, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, architecture=x86_64, io.openshift.tags=Ceph keepalived, description=keepalived for Ceph, io.openshift.expose-services=, name=keepalived)
Dec  6 01:36:43 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:36:43 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000018s ======
Dec  6 01:36:43 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:36:43.855 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000018s
Dec  6 01:36:44 np0005548731 python3.9[107084]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  6 01:36:45 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:36:45 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:36:45 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:36:45.063 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:36:45 np0005548731 python3.9[107162]: ansible-ansible.legacy.file Invoked with mode=0600 dest=/etc/ssh/sshd_config _original_basename=sshd_config_block.j2 recurse=False state=file path=/etc/ssh/sshd_config force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 01:36:45 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:36:45 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:36:45 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:36:45.857 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:36:45 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 01:36:46 np0005548731 python3.9[107314]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 01:36:46 np0005548731 python3.9[107467]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/sshd-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  6 01:36:47 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:36:47 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:36:47 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:36:47.067 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:36:47 np0005548731 python3.9[107545]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/var/lib/edpm-config/firewall/sshd-networks.yaml _original_basename=firewall.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/sshd-networks.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 01:36:47 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:36:47 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:36:47 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:36:47.859 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:36:48 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 01:36:48 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 01:36:48 np0005548731 python3.9[107698]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Dec  6 01:36:48 np0005548731 systemd[1]: Starting Time & Date Service...
Dec  6 01:36:48 np0005548731 systemd[1]: Started Time & Date Service.
Dec  6 01:36:49 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:36:49 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000019s ======
Dec  6 01:36:49 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:36:49.071 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000019s
Dec  6 01:36:49 np0005548731 python3.9[107984]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 01:36:49 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:36:49 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000019s ======
Dec  6 01:36:49 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:36:49.861 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000019s
Dec  6 01:36:50 np0005548731 python3.9[108136]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  6 01:36:50 np0005548731 python3.9[108215]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 01:36:50 np0005548731 ceph-mon[77458]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #16. Immutable memtables: 0.
Dec  6 01:36:50 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-06:36:50.660403) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec  6 01:36:50 np0005548731 ceph-mon[77458]: rocksdb: [db/flush_job.cc:856] [default] [JOB 5] Flushing memtable with next log file: 16
Dec  6 01:36:50 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765003010660496, "job": 5, "event": "flush_started", "num_memtables": 1, "num_entries": 2511, "num_deletes": 258, "total_data_size": 5732390, "memory_usage": 5831400, "flush_reason": "Manual Compaction"}
Dec  6 01:36:50 np0005548731 ceph-mon[77458]: rocksdb: [db/flush_job.cc:885] [default] [JOB 5] Level-0 flush table #17: started
Dec  6 01:36:51 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:36:51 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:36:51 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:36:51.074 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:36:51 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 01:36:51 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765003011650271, "cf_name": "default", "job": 5, "event": "table_file_creation", "file_number": 17, "file_size": 2570754, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 7053, "largest_seqno": 9559, "table_properties": {"data_size": 2561929, "index_size": 4999, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2757, "raw_key_size": 24143, "raw_average_key_size": 22, "raw_value_size": 2541753, "raw_average_value_size": 2321, "num_data_blocks": 222, "num_entries": 1095, "num_filter_entries": 1095, "num_deletions": 258, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765002830, "oldest_key_time": 1765002830, "file_creation_time": 1765003010, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bc01f9a1-fda8-4008-aee1-1fa5ff4371a1", "db_session_id": "CWBL8WMDM4D79BU86E9T", "orig_file_number": 17, "seqno_to_time_mapping": "N/A"}}
Dec  6 01:36:51 np0005548731 ceph-mon[77458]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 5] Flush lasted 1006720 microseconds, and 7784 cpu microseconds.
Dec  6 01:36:51 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 01:36:51 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 01:36:51 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 01:36:51 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 01:36:51 np0005548731 ceph-mon[77458]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  6 01:36:51 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-06:36:51.651017) [db/flush_job.cc:967] [default] [JOB 5] Level-0 flush table #17: 2570754 bytes OK
Dec  6 01:36:51 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-06:36:51.667711) [db/memtable_list.cc:519] [default] Level-0 commit table #17 started
Dec  6 01:36:51 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-06:36:51.696509) [db/memtable_list.cc:722] [default] Level-0 commit table #17: memtable #1 done
Dec  6 01:36:51 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-06:36:51.696594) EVENT_LOG_v1 {"time_micros": 1765003011696577, "job": 5, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec  6 01:36:51 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-06:36:51.696628) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec  6 01:36:51 np0005548731 ceph-mon[77458]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 5] Try to delete WAL files size 5720663, prev total WAL file size 5774398, number of live WAL files 2.
Dec  6 01:36:51 np0005548731 ceph-mon[77458]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000013.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  6 01:36:51 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-06:36:51.698251) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740030' seq:72057594037927935, type:22 .. '6D67727374617400323539' seq:0, type:0; will stop at (end)
Dec  6 01:36:51 np0005548731 ceph-mon[77458]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 6] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec  6 01:36:51 np0005548731 ceph-mon[77458]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 5 Base level 0, inputs: [17(2510KB)], [15(8342KB)]
Dec  6 01:36:51 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765003011698345, "job": 6, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [17], "files_L6": [15], "score": -1, "input_data_size": 11113380, "oldest_snapshot_seqno": -1}
Dec  6 01:36:51 np0005548731 python3.9[108417]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  6 01:36:51 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:36:51 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000019s ======
Dec  6 01:36:51 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:36:51.863 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000019s
Dec  6 01:36:52 np0005548731 ceph-mon[77458]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 6] Generated table #18: 4082 keys, 9538875 bytes, temperature: kUnknown
Dec  6 01:36:52 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765003012039199, "cf_name": "default", "job": 6, "event": "table_file_creation", "file_number": 18, "file_size": 9538875, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9506218, "index_size": 21347, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10245, "raw_key_size": 99125, "raw_average_key_size": 24, "raw_value_size": 9427131, "raw_average_value_size": 2309, "num_data_blocks": 941, "num_entries": 4082, "num_filter_entries": 4082, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765002564, "oldest_key_time": 0, "file_creation_time": 1765003011, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bc01f9a1-fda8-4008-aee1-1fa5ff4371a1", "db_session_id": "CWBL8WMDM4D79BU86E9T", "orig_file_number": 18, "seqno_to_time_mapping": "N/A"}}
Dec  6 01:36:52 np0005548731 ceph-mon[77458]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  6 01:36:52 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-06:36:52.039499) [db/compaction/compaction_job.cc:1663] [default] [JOB 6] Compacted 1@0 + 1@6 files to L6 => 9538875 bytes
Dec  6 01:36:52 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-06:36:52.053397) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 32.6 rd, 28.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.5, 8.1 +0.0 blob) out(9.1 +0.0 blob), read-write-amplify(8.0) write-amplify(3.7) OK, records in: 4554, records dropped: 472 output_compression: NoCompression
Dec  6 01:36:52 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-06:36:52.053452) EVENT_LOG_v1 {"time_micros": 1765003012053436, "job": 6, "event": "compaction_finished", "compaction_time_micros": 340953, "compaction_time_cpu_micros": 25055, "output_level": 6, "num_output_files": 1, "total_output_size": 9538875, "num_input_records": 4554, "num_output_records": 4082, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec  6 01:36:52 np0005548731 ceph-mon[77458]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000017.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  6 01:36:52 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765003012054337, "job": 6, "event": "table_file_deletion", "file_number": 17}
Dec  6 01:36:52 np0005548731 ceph-mon[77458]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000015.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  6 01:36:52 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765003012055819, "job": 6, "event": "table_file_deletion", "file_number": 15}
Dec  6 01:36:52 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-06:36:51.698064) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 01:36:52 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-06:36:52.055938) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 01:36:52 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-06:36:52.055944) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 01:36:52 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-06:36:52.055946) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 01:36:52 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-06:36:52.055948) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 01:36:52 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-06:36:52.055949) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 01:36:52 np0005548731 python3.9[108496]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=._wc070pk recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 01:36:52 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  6 01:36:52 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 01:36:52 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  6 01:36:53 np0005548731 python3.9[108648]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  6 01:36:53 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:36:53 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:36:53 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:36:53.078 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:36:53 np0005548731 python3.9[108726]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 01:36:53 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:36:53 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000018s ======
Dec  6 01:36:53 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:36:53.865 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000018s
Dec  6 01:36:54 np0005548731 python3.9[108879]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  6 01:36:55 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:36:55 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:36:55 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:36:55.082 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:36:55 np0005548731 python3[109032]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Dec  6 01:36:55 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:36:55 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:36:55 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:36:55.867 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:36:56 np0005548731 python3.9[109184]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  6 01:36:56 np0005548731 python3.9[109263]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 01:36:56 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 01:36:57 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:36:57 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:36:57 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:36:57.086 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:36:57 np0005548731 python3.9[109415]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  6 01:36:57 np0005548731 python3.9[109493]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-update-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-update-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 01:36:57 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:36:57 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:36:57 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:36:57.870 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:36:58 np0005548731 python3.9[109646]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  6 01:36:59 np0005548731 python3.9[109774]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 01:36:59 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:36:59 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:36:59 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:36:59.091 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:36:59 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 01:36:59 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 01:36:59 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:36:59 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000018s ======
Dec  6 01:36:59 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:36:59.873 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000018s
Dec  6 01:36:59 np0005548731 python3.9[109926]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  6 01:37:00 np0005548731 python3.9[110005]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 01:37:01 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:37:01 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:37:01 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:37:01.094 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:37:01 np0005548731 python3.9[110157]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  6 01:37:01 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 01:37:01 np0005548731 python3.9[110235]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-rules.nft _original_basename=ruleset.j2 recurse=False state=file path=/etc/nftables/edpm-rules.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 01:37:01 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:37:01 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:37:01 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:37:01.875 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:37:03 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:37:03 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000019s ======
Dec  6 01:37:03 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:37:03.098 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000019s
Dec  6 01:37:03 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:37:03 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000018s ======
Dec  6 01:37:03 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:37:03.879 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000018s
Dec  6 01:37:05 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:37:05 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000019s ======
Dec  6 01:37:05 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:37:05.102 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000019s
Dec  6 01:37:05 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:37:05 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000018s ======
Dec  6 01:37:05 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:37:05.881 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000018s
Dec  6 01:37:06 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 01:37:07 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:37:07 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:37:07 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:37:07.106 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:37:07 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:37:07 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:37:07 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:37:07.884 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:37:08 np0005548731 ceph-mds[82074]: mds.beacon.cephfs.compute-2.tjfgow missed beacon ack from the monitors
Dec  6 01:37:09 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:37:09 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000018s ======
Dec  6 01:37:09 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:37:09.110 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000018s
Dec  6 01:37:09 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).paxos(paxos updating c 252..980) lease_timeout -- calling new election
Dec  6 01:37:09 np0005548731 ceph-mon[77458]: log_channel(cluster) log [INF] : mon.compute-2 calling monitor election
Dec  6 01:37:09 np0005548731 ceph-mon[77458]: paxos.1).electionLogic(24) init, last seen epoch 24
Dec  6 01:37:09 np0005548731 ceph-mon[77458]: mon.compute-2@1(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec  6 01:37:09 np0005548731 ceph-mon[77458]: mon.compute-2@1(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec  6 01:37:09 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:37:09 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000019s ======
Dec  6 01:37:09 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:37:09.886 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000019s
Dec  6 01:37:11 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:37:11 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:37:11 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:37:11.114 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:37:11 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:37:11 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:37:11 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:37:11.888 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:37:12 np0005548731 ceph-mds[82074]: mds.beacon.cephfs.compute-2.tjfgow missed beacon ack from the monitors
Dec  6 01:37:13 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:37:13 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000018s ======
Dec  6 01:37:13 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:37:13.118 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000018s
Dec  6 01:37:13 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:37:13 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:37:13 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:37:13.891 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:37:14 np0005548731 python3.9[110444]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  6 01:37:15 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:37:15 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:37:15 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:37:15.122 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:37:15 np0005548731 python3.9[110599]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 01:37:15 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:37:15 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000022s ======
Dec  6 01:37:15 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:37:15.893 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000022s
Dec  6 01:37:16 np0005548731 ceph-mds[82074]: mds.beacon.cephfs.compute-2.tjfgow missed beacon ack from the monitors
Dec  6 01:37:16 np0005548731 python3.9[110752]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages1G state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 01:37:17 np0005548731 python3.9[110904]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages2M state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 01:37:17 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:37:17 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:37:17 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:37:17.125 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:37:17 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec  6 01:37:17 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:37:17 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:37:17 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:37:17.896 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:37:17 np0005548731 python3.9[111056]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=1G path=/dev/hugepages1G src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Dec  6 01:37:18 np0005548731 ceph-mon[77458]: mon.compute-2 calling monitor election
Dec  6 01:37:18 np0005548731 ceph-mon[77458]: mon.compute-0 calling monitor election
Dec  6 01:37:18 np0005548731 ceph-mon[77458]: mon.compute-0 is new leader, mons compute-0,compute-2 in quorum (ranks 0,1)
Dec  6 01:37:18 np0005548731 ceph-mon[77458]: Health check failed: 1/3 mons down, quorum compute-0,compute-2 (MON_DOWN)
Dec  6 01:37:18 np0005548731 ceph-mon[77458]: Health detail: HEALTH_WARN 1/3 mons down, quorum compute-0,compute-2
Dec  6 01:37:18 np0005548731 ceph-mon[77458]: [WRN] MON_DOWN: 1/3 mons down, quorum compute-0,compute-2
Dec  6 01:37:18 np0005548731 ceph-mon[77458]:    mon.compute-1 (rank 2) addr [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] is down (out of quorum)
Dec  6 01:37:18 np0005548731 systemd[1]: systemd-timedated.service: Deactivated successfully.
Dec  6 01:37:18 np0005548731 python3.9[111211]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=2M path=/dev/hugepages2M src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Dec  6 01:37:19 np0005548731 ceph-mon[77458]: log_channel(cluster) log [INF] : mon.compute-2 calling monitor election
Dec  6 01:37:19 np0005548731 ceph-mon[77458]: paxos.1).electionLogic(28) init, last seen epoch 28
Dec  6 01:37:19 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:37:19 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000022s ======
Dec  6 01:37:19 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:37:19.129 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000022s
Dec  6 01:37:19 np0005548731 systemd[1]: session-39.scope: Deactivated successfully.
Dec  6 01:37:19 np0005548731 systemd[1]: session-39.scope: Consumed 30.301s CPU time.
Dec  6 01:37:19 np0005548731 systemd-logind[794]: Session 39 logged out. Waiting for processes to exit.
Dec  6 01:37:19 np0005548731 systemd-logind[794]: Removed session 39.
Dec  6 01:37:19 np0005548731 ceph-mon[77458]: mon.compute-2@1(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec  6 01:37:19 np0005548731 ceph-mon[77458]: mon.compute-2@1(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec  6 01:37:19 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:37:19 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000022s ======
Dec  6 01:37:19 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:37:19.899 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000022s
Dec  6 01:37:20 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec  6 01:37:20 np0005548731 ceph-mon[77458]: mon.compute-1 calling monitor election
Dec  6 01:37:20 np0005548731 ceph-mon[77458]: mon.compute-0 calling monitor election
Dec  6 01:37:20 np0005548731 ceph-mon[77458]: mon.compute-2 calling monitor election
Dec  6 01:37:20 np0005548731 ceph-mon[77458]: mon.compute-0 is new leader, mons compute-0,compute-2,compute-1 in quorum (ranks 0,1,2)
Dec  6 01:37:20 np0005548731 ceph-mon[77458]: Health check cleared: MON_DOWN (was: 1/3 mons down, quorum compute-0,compute-2)
Dec  6 01:37:20 np0005548731 ceph-mon[77458]: Cluster is now healthy
Dec  6 01:37:21 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:37:21 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:37:21 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:37:21.136 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:37:21 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:37:21 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:37:21 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:37:21.902 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:37:22 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 01:37:22 np0005548731 ceph-mon[77458]: overall HEALTH_OK
Dec  6 01:37:23 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:37:23 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000022s ======
Dec  6 01:37:23 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:37:23.141 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000022s
Dec  6 01:37:23 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:37:23 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:37:23 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:37:23.905 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:37:24 np0005548731 systemd-logind[794]: New session 40 of user zuul.
Dec  6 01:37:24 np0005548731 systemd[1]: Started Session 40 of User zuul.
Dec  6 01:37:25 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:37:25 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:37:25 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:37:25.144 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:37:25 np0005548731 python3.9[111396]: ansible-ansible.builtin.tempfile Invoked with state=file prefix=ansible. suffix= path=None
Dec  6 01:37:25 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:37:25 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000021s ======
Dec  6 01:37:25 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:37:25.907 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000021s
Dec  6 01:37:26 np0005548731 python3.9[111549]: ansible-ansible.builtin.stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  6 01:37:27 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:37:27 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000021s ======
Dec  6 01:37:27 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:37:27.147 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000021s
Dec  6 01:37:27 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 01:37:27 np0005548731 python3.9[111703]: ansible-ansible.builtin.slurp Invoked with src=/etc/ssh/ssh_known_hosts
Dec  6 01:37:27 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:37:27 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:37:27 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:37:27.909 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:37:27 np0005548731 python3.9[111855]: ansible-ansible.legacy.stat Invoked with path=/tmp/ansible.ksh6iusd follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  6 01:37:28 np0005548731 python3.9[111981]: ansible-ansible.legacy.copy Invoked with dest=/tmp/ansible.ksh6iusd mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765003047.4726942-109-132499692305338/.source.ksh6iusd _original_basename=.nxk_wu7p follow=False checksum=660676d376f77098a981422bf7716e6cca0e00ba backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 01:37:29 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:37:29 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:37:29 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:37:29.151 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:37:29 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:37:29 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000022s ======
Dec  6 01:37:29 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:37:29.911 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000022s
Dec  6 01:37:29 np0005548731 python3.9[112133]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'ssh_host_key_rsa_public', 'ssh_host_key_ed25519_public', 'ssh_host_key_ecdsa_public'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  6 01:37:31 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:37:31 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:37:31 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:37:31.155 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:37:31 np0005548731 python3.9[112286]: ansible-ansible.builtin.blockinfile Invoked with block=compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCXhoI1gGk2X98AQB4B5ZyJDup3CVjjMbiB6L30cKGocfIwEEwBz5d1xDpwA7euANP32L9+ddZ3Cn0VVuebREE5y184Yi1sdS+2O6H8M7BUT+RANGW4sY7jPXbTTJt6Bp2WWZu+AKxIRGMoo0UfvvFdscomysN+yxWB/KZ/niGARJyw61l1eO1/8shGJiP1LBuA4mdwHMTBYwXiYjk6LgI/i5m6zQk5ggmw2nKJqCwwPGyf2Xf7/LbRDgnryAatph9gA4JZ+QXULUJ8U+ILis30MPOGNA7vJ07ovYFAVwsoKYRCsxrEpg8AxMeRikU+CERKL2QQPABlbuJKnDZFrW2kY/L+B2g+i8FDWpaug4GQ6ZO7REu47ARhAUnuaIuJrhJgLrDq43vTqCgagXFz7UHhLI6KXLayNe3B/4It4UaZIVv3X8K+bZiI0zMWNhyjIBAU5VFZd0QjZDjt+Wv5WMYEFiWDyil+NVEHCxdSl46yd68mUvgMxWiv2Z57ICY9i+k=#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIF08740jXaMkiBlZr9+3kjjW/VDtcxAKNNm3eT7v4C7q#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBJUJnc2vyZ7uGIoKmOtL2ok+zmjLSq/3vZNdtT52cNcj41FV66OIff0lT2r5neBPmMGSlOfqKMRY8iTu1fJs+/c=#012compute-2.ctlplane.example.com,192.168.122.102,compute-2* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCTKFyfmCvsG9hwBkDHhSMH95Mc80Ub24C1l2ydnJGHzY4+vYnN+ZopDHd6HVKXgsP9msqkZAEdNXjbQn0sbWYw0v02CY6OcbMq306Rwo9N1fcSO6QVC6w79bGbRJasTA8jgAoGm1VSg3XpzU9C2Delv2ginn7LqCUou48j9w9jyaklDA2EV0anjvZ6hGLjcFaMQSlFPO8rr2pGS5nfNk2Re6GtYYWF4SPkd5xfecWi9szdT+tnG8VrwRX440/Pe3eV5UyVyHQzIEvxJK6DbTgtieOn0PVz3yHI3Uo8VatpsXahO8FsABY1GaI5QAj3qUudWz4YWsiV/qy0G5Wm27CB69LVPGWRr7y4+pVz0HxWiYGyYbRZdxHVZ3jqfGNBdMXJb2shp9BlIo+lpjEydkorHn66AKIpFCZFaGpHzkFPocaTP9yMPAxo/0YPllct7AqO/4CCBNhz4E6/0aMx2lsilFN6Oo/Mj0azpEQOHvuTEwqKJ5BK3MJCWgR11ccN7PM=#012compute-2.ctlplane.example.com,192.168.122.102,compute-2* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEDAV7OqwkHgR6GxlfPQDRQoPSdwQxyp1ILKzyPaTiD9#012compute-2.ctlplane.example.com,192.168.122.102,compute-2* ecdsa-sha2-nistp256 
AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBAeZVQBWQAXkYJ2DA26L5Jq1a5s2ScFbJ/Q/8jiRPf43wzW24IvQwAq99mI5t4QhVhmTRCbptw5L79elvFEyDY8=#012compute-1.ctlplane.example.com,192.168.122.101,compute-1* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCXnm8ofk0+O7jBFjA8fasReq5BkfwaahdsMJZLiqB71W18faenPK2mMCw6/Lyxaif1wFQKBGWK21bUZXsQguqG7iZV8rSLfQmLlKel/CnEgi2ekXmEhhYL/5GAB8Hq0UxChaI6YQUu0gWku4cPruBw6/+Lz36/PvLLwKqQupEi8npPR3O7a4jF6Px433cpBkZ/hgwG2m5+61NMAcNSCjjNj1cdXLugpDN9+05k6A3QV1sDXS2Zx6zdxPhgmLDKZLBGesQaz+glwYPo/2KfwAwlU4tAuY5eSV2BPX04PqKqexy3iziex/q3pFmtD6f1cRmqFZiyNs+kOfsxwABOVKQ6GG1iKKgzHMsK/paqNWMoHBj0lrRIJoX88Fd2A5DdPs2UPHwy3iUxLYekNcgiigT3O/4x92cFRritKJ8i8j83J6wJOQ0DnpyWxu4WFCjI4mBSKeA0NQzqMPICgkmtmtYKfSlzSdaL9W56FqnfE5JHkSrspcV9xnX3D/ijnD/8PxU=#012compute-1.ctlplane.example.com,192.168.122.101,compute-1* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIE4NkvA88mf0HvkHx7766e1aduefm45OK4uK2xW0LF1S#012compute-1.ctlplane.example.com,192.168.122.101,compute-1* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBIy6uH9XhIotH/UH4KICHfHUvzEiJMGjuOaC3xgcK45R/4kFK8w4At6C/G8bcf1l2+wNZCsHSuKrF09EzQCKCOU=#012 create=True mode=0644 path=/tmp/ansible.ksh6iusd state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 01:37:31 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:37:31 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:37:31 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:37:31.914 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:37:32 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 01:37:32 np0005548731 python3.9[112489]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible.ksh6iusd' > /etc/ssh/ssh_known_hosts _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  6 01:37:33 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:37:33 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000022s ======
Dec  6 01:37:33 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:37:33.159 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000022s
Dec  6 01:37:33 np0005548731 python3.9[112643]: ansible-ansible.builtin.file Invoked with path=/tmp/ansible.ksh6iusd state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 01:37:33 np0005548731 systemd[1]: session-40.scope: Deactivated successfully.
Dec  6 01:37:33 np0005548731 systemd[1]: session-40.scope: Consumed 5.398s CPU time.
Dec  6 01:37:33 np0005548731 systemd-logind[794]: Session 40 logged out. Waiting for processes to exit.
Dec  6 01:37:33 np0005548731 systemd-logind[794]: Removed session 40.
Dec  6 01:37:33 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:37:33 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:37:33 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:37:33.917 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:37:35 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:37:35 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000021s ======
Dec  6 01:37:35 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:37:35.164 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000021s
Dec  6 01:37:35 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:37:35 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000022s ======
Dec  6 01:37:35 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:37:35.919 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000022s
Dec  6 01:37:36 np0005548731 ceph-mds[82074]: mds.beacon.cephfs.compute-2.tjfgow missed beacon ack from the monitors
Dec  6 01:37:37 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).paxos(paxos updating c 252..994) lease_timeout -- calling new election
Dec  6 01:37:37 np0005548731 ceph-mon[77458]: log_channel(cluster) log [INF] : mon.compute-2 calling monitor election
Dec  6 01:37:37 np0005548731 ceph-mon[77458]: paxos.1).electionLogic(30) init, last seen epoch 30
Dec  6 01:37:37 np0005548731 ceph-mon[77458]: mon.compute-2@1(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec  6 01:37:37 np0005548731 ceph-mon[77458]: mon.compute-2@1(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec  6 01:37:37 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:37:37 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:37:37 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:37:37.168 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:37:37 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:37:37 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:37:37 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:37:37.922 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:37:38 np0005548731 systemd-logind[794]: New session 41 of user zuul.
Dec  6 01:37:38 np0005548731 systemd[1]: Started Session 41 of User zuul.
Dec  6 01:37:39 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:37:39 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:37:39 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:37:39.172 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:37:39 np0005548731 python3.9[112825]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  6 01:37:39 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:37:39 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000021s ======
Dec  6 01:37:39 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:37:39.923 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000021s
Dec  6 01:37:40 np0005548731 ceph-mds[82074]: mds.beacon.cephfs.compute-2.tjfgow missed beacon ack from the monitors
Dec  6 01:37:41 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:37:41 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:37:41 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:37:41.176 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:37:41 np0005548731 python3.9[112982]: ansible-ansible.builtin.systemd Invoked with enabled=True name=sshd daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Dec  6 01:37:41 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:37:41 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:37:41 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:37:41.926 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:37:42 np0005548731 python3.9[113136]: ansible-ansible.builtin.systemd Invoked with name=sshd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  6 01:37:42 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec  6 01:37:43 np0005548731 python3.9[113290]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  6 01:37:43 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:37:43 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:37:43 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:37:43.180 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:37:43 np0005548731 ceph-mon[77458]: mon.compute-2 calling monitor election
Dec  6 01:37:43 np0005548731 ceph-mon[77458]: mon.compute-0 calling monitor election
Dec  6 01:37:43 np0005548731 ceph-mon[77458]: mon.compute-0 is new leader, mons compute-0,compute-2 in quorum (ranks 0,1)
Dec  6 01:37:43 np0005548731 ceph-mon[77458]: Health check failed: 1/3 mons down, quorum compute-0,compute-2 (MON_DOWN)
Dec  6 01:37:43 np0005548731 ceph-mon[77458]: Health detail: HEALTH_WARN 1/3 mons down, quorum compute-0,compute-2
Dec  6 01:37:43 np0005548731 ceph-mon[77458]: [WRN] MON_DOWN: 1/3 mons down, quorum compute-0,compute-2
Dec  6 01:37:43 np0005548731 ceph-mon[77458]:    mon.compute-1 (rank 2) addr [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] is down (out of quorum)
Dec  6 01:37:43 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:37:43 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000022s ======
Dec  6 01:37:43 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:37:43.928 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000022s
Dec  6 01:37:43 np0005548731 python3.9[113443]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  6 01:37:44 np0005548731 ceph-mon[77458]: log_channel(cluster) log [INF] : mon.compute-2 calling monitor election
Dec  6 01:37:44 np0005548731 ceph-mon[77458]: paxos.1).electionLogic(34) init, last seen epoch 34
Dec  6 01:37:44 np0005548731 ceph-mon[77458]: mon.compute-2@1(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec  6 01:37:44 np0005548731 ceph-mon[77458]: mon.compute-2@1(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec  6 01:37:44 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec  6 01:37:44 np0005548731 python3.9[113596]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 01:37:45 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:37:45 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:37:45 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:37:45.184 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:37:45 np0005548731 systemd[1]: session-41.scope: Deactivated successfully.
Dec  6 01:37:45 np0005548731 systemd[1]: session-41.scope: Consumed 4.155s CPU time.
Dec  6 01:37:45 np0005548731 systemd-logind[794]: Session 41 logged out. Waiting for processes to exit.
Dec  6 01:37:45 np0005548731 systemd-logind[794]: Removed session 41.
Dec  6 01:37:45 np0005548731 ceph-mon[77458]: mon.compute-1 calling monitor election
Dec  6 01:37:45 np0005548731 ceph-mon[77458]: mon.compute-0 calling monitor election
Dec  6 01:37:45 np0005548731 ceph-mon[77458]: mon.compute-2 calling monitor election
Dec  6 01:37:45 np0005548731 ceph-mon[77458]: mon.compute-0 is new leader, mons compute-0,compute-2,compute-1 in quorum (ranks 0,1,2)
Dec  6 01:37:45 np0005548731 ceph-mon[77458]: Health check cleared: MON_DOWN (was: 1/3 mons down, quorum compute-0,compute-2)
Dec  6 01:37:45 np0005548731 ceph-mon[77458]: Cluster is now healthy
Dec  6 01:37:45 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:37:45 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:37:45 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:37:45.931 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:37:47 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:37:47 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000021s ======
Dec  6 01:37:47 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:37:47.188 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000021s
Dec  6 01:37:47 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 01:37:47 np0005548731 ceph-mon[77458]: overall HEALTH_OK
Dec  6 01:37:47 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:37:47 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000022s ======
Dec  6 01:37:47 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:37:47.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000022s
Dec  6 01:37:49 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:37:49 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:37:49 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:37:49.193 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:37:49 np0005548731 ceph-mon[77458]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #19. Immutable memtables: 0.
Dec  6 01:37:49 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-06:37:49.537011) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec  6 01:37:49 np0005548731 ceph-mon[77458]: rocksdb: [db/flush_job.cc:856] [default] [JOB 7] Flushing memtable with next log file: 19
Dec  6 01:37:49 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765003069537335, "job": 7, "event": "flush_started", "num_memtables": 1, "num_entries": 661, "num_deletes": 251, "total_data_size": 968930, "memory_usage": 982656, "flush_reason": "Manual Compaction"}
Dec  6 01:37:49 np0005548731 ceph-mon[77458]: rocksdb: [db/flush_job.cc:885] [default] [JOB 7] Level-0 flush table #20: started
Dec  6 01:37:49 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765003069543383, "cf_name": "default", "job": 7, "event": "table_file_creation", "file_number": 20, "file_size": 615781, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 9564, "largest_seqno": 10220, "table_properties": {"data_size": 612508, "index_size": 1117, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1093, "raw_key_size": 8495, "raw_average_key_size": 19, "raw_value_size": 605511, "raw_average_value_size": 1424, "num_data_blocks": 50, "num_entries": 425, "num_filter_entries": 425, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765003011, "oldest_key_time": 1765003011, "file_creation_time": 1765003069, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bc01f9a1-fda8-4008-aee1-1fa5ff4371a1", "db_session_id": "CWBL8WMDM4D79BU86E9T", "orig_file_number": 20, "seqno_to_time_mapping": "N/A"}}
Dec  6 01:37:49 np0005548731 ceph-mon[77458]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 7] Flush lasted 6441 microseconds, and 3076 cpu microseconds.
Dec  6 01:37:49 np0005548731 ceph-mon[77458]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  6 01:37:49 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-06:37:49.543450) [db/flush_job.cc:967] [default] [JOB 7] Level-0 flush table #20: 615781 bytes OK
Dec  6 01:37:49 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-06:37:49.543479) [db/memtable_list.cc:519] [default] Level-0 commit table #20 started
Dec  6 01:37:49 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-06:37:49.544754) [db/memtable_list.cc:722] [default] Level-0 commit table #20: memtable #1 done
Dec  6 01:37:49 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-06:37:49.544771) EVENT_LOG_v1 {"time_micros": 1765003069544767, "job": 7, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec  6 01:37:49 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-06:37:49.544793) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec  6 01:37:49 np0005548731 ceph-mon[77458]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 7] Try to delete WAL files size 965192, prev total WAL file size 965192, number of live WAL files 2.
Dec  6 01:37:49 np0005548731 ceph-mon[77458]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000016.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  6 01:37:49 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-06:37:49.545519) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F7300323531' seq:72057594037927935, type:22 .. '7061786F7300353033' seq:0, type:0; will stop at (end)
Dec  6 01:37:49 np0005548731 ceph-mon[77458]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 8] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec  6 01:37:49 np0005548731 ceph-mon[77458]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 7 Base level 0, inputs: [20(601KB)], [18(9315KB)]
Dec  6 01:37:49 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765003069545617, "job": 8, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [20], "files_L6": [18], "score": -1, "input_data_size": 10154656, "oldest_snapshot_seqno": -1}
Dec  6 01:37:49 np0005548731 ceph-mon[77458]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 8] Generated table #21: 3985 keys, 8538558 bytes, temperature: kUnknown
Dec  6 01:37:49 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765003069644678, "cf_name": "default", "job": 8, "event": "table_file_creation", "file_number": 21, "file_size": 8538558, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8508213, "index_size": 19291, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 9989, "raw_key_size": 98482, "raw_average_key_size": 24, "raw_value_size": 8432359, "raw_average_value_size": 2116, "num_data_blocks": 839, "num_entries": 3985, "num_filter_entries": 3985, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765002564, "oldest_key_time": 0, "file_creation_time": 1765003069, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bc01f9a1-fda8-4008-aee1-1fa5ff4371a1", "db_session_id": "CWBL8WMDM4D79BU86E9T", "orig_file_number": 21, "seqno_to_time_mapping": "N/A"}}
Dec  6 01:37:49 np0005548731 ceph-mon[77458]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  6 01:37:49 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-06:37:49.644967) [db/compaction/compaction_job.cc:1663] [default] [JOB 8] Compacted 1@0 + 1@6 files to L6 => 8538558 bytes
Dec  6 01:37:49 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-06:37:49.647736) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 102.4 rd, 86.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.6, 9.1 +0.0 blob) out(8.1 +0.0 blob), read-write-amplify(30.4) write-amplify(13.9) OK, records in: 4507, records dropped: 522 output_compression: NoCompression
Dec  6 01:37:49 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-06:37:49.647782) EVENT_LOG_v1 {"time_micros": 1765003069647768, "job": 8, "event": "compaction_finished", "compaction_time_micros": 99159, "compaction_time_cpu_micros": 25624, "output_level": 6, "num_output_files": 1, "total_output_size": 8538558, "num_input_records": 4507, "num_output_records": 3985, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec  6 01:37:49 np0005548731 ceph-mon[77458]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000020.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  6 01:37:49 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765003069648033, "job": 8, "event": "table_file_deletion", "file_number": 20}
Dec  6 01:37:49 np0005548731 ceph-mon[77458]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000018.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  6 01:37:49 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765003069650135, "job": 8, "event": "table_file_deletion", "file_number": 18}
Dec  6 01:37:49 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-06:37:49.545384) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 01:37:49 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-06:37:49.650309) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 01:37:49 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-06:37:49.650317) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 01:37:49 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-06:37:49.650319) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 01:37:49 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-06:37:49.650321) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 01:37:49 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-06:37:49.650323) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 01:37:49 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:37:49 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000022s ======
Dec  6 01:37:49 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:37:49.936 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000022s
Dec  6 01:37:51 np0005548731 systemd-logind[794]: New session 42 of user zuul.
Dec  6 01:37:51 np0005548731 systemd[1]: Started Session 42 of User zuul.
Dec  6 01:37:51 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:37:51 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:37:51 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:37:51.196 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:37:51 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:37:51 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:37:51 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:37:51.939 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:37:52 np0005548731 python3.9[113827]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  6 01:37:52 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 01:37:53 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:37:53 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:37:53 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:37:53.198 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:37:53 np0005548731 python3.9[113984]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec  6 01:37:53 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:37:53 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000022s ======
Dec  6 01:37:53 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:37:53.941 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000022s
Dec  6 01:37:54 np0005548731 python3.9[114069]: ansible-ansible.legacy.dnf Invoked with name=['yum-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Dec  6 01:37:55 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:37:55 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:37:55 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:37:55.202 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:37:55 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:37:55 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:37:55 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:37:55.943 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:37:56 np0005548731 python3.9[114221]: ansible-ansible.legacy.command Invoked with _raw_params=needs-restarting -r _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  6 01:37:57 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:37:57 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:37:57 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:37:57.204 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:37:57 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 01:37:57 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:37:57 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000021s ======
Dec  6 01:37:57 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:37:57.945 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000021s
Dec  6 01:37:58 np0005548731 python3.9[114372]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/reboot_required/'] patterns=[] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Dec  6 01:37:58 np0005548731 python3.9[114523]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  6 01:37:59 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:37:59 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:37:59 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:37:59.208 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:37:59 np0005548731 podman[114846]: 2025-12-06 06:37:59.59572952 +0000 UTC m=+0.114229332 container exec 541e7276f6038dd900483907d1613bdfcbf99ea3714cf53a920a7189986fab54 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-mon-compute-2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  6 01:37:59 np0005548731 python3.9[114838]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/config follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  6 01:37:59 np0005548731 podman[114846]: 2025-12-06 06:37:59.729987978 +0000 UTC m=+0.248487780 container exec_died 541e7276f6038dd900483907d1613bdfcbf99ea3714cf53a920a7189986fab54 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-mon-compute-2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  6 01:37:59 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:37:59 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:37:59 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:37:59.947 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:38:00 np0005548731 systemd[1]: session-42.scope: Deactivated successfully.
Dec  6 01:38:00 np0005548731 systemd[1]: session-42.scope: Consumed 6.322s CPU time.
Dec  6 01:38:00 np0005548731 systemd-logind[794]: Session 42 logged out. Waiting for processes to exit.
Dec  6 01:38:00 np0005548731 systemd-logind[794]: Removed session 42.
Dec  6 01:38:00 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 01:38:00 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 01:38:00 np0005548731 podman[115024]: 2025-12-06 06:38:00.501810053 +0000 UTC m=+0.079222833 container exec 42755d5ddab81b86d84b40c9464a8af637e2d196c1a098a61593efb6a9875167 (image=quay.io/ceph/haproxy:2.3, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-haproxy-rgw-default-compute-2-nyemkw)
Dec  6 01:38:00 np0005548731 podman[115044]: 2025-12-06 06:38:00.61895257 +0000 UTC m=+0.095226310 container exec_died 42755d5ddab81b86d84b40c9464a8af637e2d196c1a098a61593efb6a9875167 (image=quay.io/ceph/haproxy:2.3, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-haproxy-rgw-default-compute-2-nyemkw)
Dec  6 01:38:00 np0005548731 podman[115024]: 2025-12-06 06:38:00.637936883 +0000 UTC m=+0.215349653 container exec_died 42755d5ddab81b86d84b40c9464a8af637e2d196c1a098a61593efb6a9875167 (image=quay.io/ceph/haproxy:2.3, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-haproxy-rgw-default-compute-2-nyemkw)
Dec  6 01:38:00 np0005548731 podman[115090]: 2025-12-06 06:38:00.926032673 +0000 UTC m=+0.116987824 container exec 60f905acca99db805341691264f7f91f9f4f43c91aa728d21396595dfca7cf39 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-keepalived-rgw-default-compute-2-cossgt, architecture=x86_64, name=keepalived, summary=Provides keepalived on RHEL 9 for Ceph., vcs-type=git, build-date=2023-02-22T09:23:20, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.expose-services=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., com.redhat.component=keepalived-container, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, version=2.2.4, description=keepalived for Ceph, io.k8s.display-name=Keepalived on RHEL 9, io.buildah.version=1.28.2, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.openshift.tags=Ceph keepalived, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1793)
Dec  6 01:38:01 np0005548731 podman[115112]: 2025-12-06 06:38:01.024786051 +0000 UTC m=+0.075474011 container exec_died 60f905acca99db805341691264f7f91f9f4f43c91aa728d21396595dfca7cf39 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-keepalived-rgw-default-compute-2-cossgt, vcs-type=git, vendor=Red Hat, Inc., io.openshift.tags=Ceph keepalived, release=1793, com.redhat.component=keepalived-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, version=2.2.4, name=keepalived, architecture=x86_64, distribution-scope=public, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.k8s.display-name=Keepalived on RHEL 9, io.buildah.version=1.28.2, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides keepalived on RHEL 9 for Ceph., description=keepalived for Ceph, io.openshift.expose-services=, build-date=2023-02-22T09:23:20, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec  6 01:38:01 np0005548731 podman[115090]: 2025-12-06 06:38:01.046001713 +0000 UTC m=+0.236956834 container exec_died 60f905acca99db805341691264f7f91f9f4f43c91aa728d21396595dfca7cf39 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-keepalived-rgw-default-compute-2-cossgt, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.openshift.expose-services=, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.k8s.display-name=Keepalived on RHEL 9, vcs-type=git, io.buildah.version=1.28.2, release=1793, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vendor=Red Hat, Inc., com.redhat.component=keepalived-container, summary=Provides keepalived on RHEL 9 for Ceph., version=2.2.4, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=keepalived for Ceph, io.openshift.tags=Ceph keepalived, name=keepalived, build-date=2023-02-22T09:23:20)
Dec  6 01:38:01 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:38:01 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:38:01 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:38:01.212 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:38:01 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:38:01 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:38:01 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:38:01.950 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:38:02 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 01:38:03 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:38:03 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:38:03 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:38:03.216 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:38:03 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 01:38:03 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 01:38:03 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  6 01:38:03 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:38:03 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:38:03 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:38:03.952 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:38:04 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 01:38:04 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  6 01:38:05 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:38:05 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:38:05 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:38:05.220 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:38:05 np0005548731 systemd-logind[794]: New session 43 of user zuul.
Dec  6 01:38:05 np0005548731 systemd[1]: Started Session 43 of User zuul.
Dec  6 01:38:05 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:38:05 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:38:05 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:38:05.955 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:38:06 np0005548731 python3.9[115411]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  6 01:38:07 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:38:07 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:38:07 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:38:07.224 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:38:07 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 01:38:07 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:38:07 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:38:07 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:38:07.956 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:38:08 np0005548731 python3.9[115568]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  6 01:38:08 np0005548731 python3.9[115721]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  6 01:38:09 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:38:09 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000022s ======
Dec  6 01:38:09 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:38:09.228 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000022s
Dec  6 01:38:09 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 01:38:09 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 01:38:09 np0005548731 python3.9[115873]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  6 01:38:09 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:38:09 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000021s ======
Dec  6 01:38:09 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:38:09.958 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000021s
Dec  6 01:38:10 np0005548731 python3.9[116046]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765003088.956329-166-193451977876244/.source.crt _original_basename=compute-2.ctlplane.example.com-tls.crt follow=False checksum=3aa3770915f13d5b57d3a9eb88c5b476d9356f7b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 01:38:10 np0005548731 python3.9[116199]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  6 01:38:11 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:38:11 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000022s ======
Dec  6 01:38:11 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:38:11.231 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000022s
Dec  6 01:38:11 np0005548731 python3.9[116322]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765003090.26402-166-267214699477805/.source.crt _original_basename=compute-2.ctlplane.example.com-ca.crt follow=False checksum=ffa5eebd660d3b98ce1601104e9075977b0c55a4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 01:38:11 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:38:11 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000022s ======
Dec  6 01:38:11 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:38:11.960 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000022s
Dec  6 01:38:12 np0005548731 python3.9[116523]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  6 01:38:12 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 01:38:12 np0005548731 python3.9[116648]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765003091.5523586-166-15682366102238/.source.key _original_basename=compute-2.ctlplane.example.com-tls.key follow=False checksum=d19e8d161b8ac1bd29a184174f3b952bd5d89cd4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 01:38:13 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:38:13 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000022s ======
Dec  6 01:38:13 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:38:13.233 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000022s
Dec  6 01:38:13 np0005548731 python3.9[116800]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/neutron-metadata/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  6 01:38:13 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:38:13 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000021s ======
Dec  6 01:38:13 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:38:13.961 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000021s
Dec  6 01:38:14 np0005548731 python3.9[116952]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/neutron-metadata/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  6 01:38:14 np0005548731 python3.9[117105]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  6 01:38:15 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:38:15 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000022s ======
Dec  6 01:38:15 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:38:15.236 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000022s
Dec  6 01:38:15 np0005548731 python3.9[117228]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765003094.3686013-341-22062921081098/.source.crt _original_basename=compute-2.ctlplane.example.com-tls.crt follow=False checksum=6c691b916bcaa19a66aedeb4fb094a3ba839ff0e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 01:38:15 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:38:15 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000022s ======
Dec  6 01:38:15 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:38:15.963 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000022s
Dec  6 01:38:16 np0005548731 python3.9[117381]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  6 01:38:16 np0005548731 python3.9[117504]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765003095.6527996-341-255102201990071/.source.crt _original_basename=compute-2.ctlplane.example.com-ca.crt follow=False checksum=7302dfe8a711171d00de900e7fa863eaee0e2101 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 01:38:17 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:38:17 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:38:17 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:38:17.240 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:38:17 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 01:38:17 np0005548731 python3.9[117656]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  6 01:38:17 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:38:17 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:38:17 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:38:17.965 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:38:18 np0005548731 python3.9[117779]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765003096.9380388-341-219484599368261/.source.key _original_basename=compute-2.ctlplane.example.com-tls.key follow=False checksum=743e96d9330e37dc474a31470beb927f2b883a88 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 01:38:18 np0005548731 python3.9[117932]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  6 01:38:19 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:38:19 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000022s ======
Dec  6 01:38:19 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:38:19.243 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000022s
Dec  6 01:38:19 np0005548731 python3.9[118084]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  6 01:38:19 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:38:19 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:38:19 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:38:19.968 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:38:20 np0005548731 python3.9[118237]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  6 01:38:20 np0005548731 python3.9[118360]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765003099.8372984-536-87942049582557/.source.crt _original_basename=compute-2.ctlplane.example.com-tls.crt follow=False checksum=4f9ec66d5e5af47f41bd00728b722b24e2801808 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 01:38:21 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:38:21 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:38:21 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:38:21.247 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:38:21 np0005548731 python3.9[118512]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  6 01:38:21 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:38:21 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:38:21 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:38:21.970 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:38:22 np0005548731 python3.9[118635]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765003101.0507634-536-148039107136026/.source.crt _original_basename=compute-2.ctlplane.example.com-ca.crt follow=False checksum=7302dfe8a711171d00de900e7fa863eaee0e2101 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 01:38:22 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 01:38:22 np0005548731 python3.9[118788]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  6 01:38:23 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:38:23 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:38:23 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:38:23.250 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:38:23 np0005548731 python3.9[118911]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765003102.2792675-536-140554963821723/.source.key _original_basename=compute-2.ctlplane.example.com-tls.key follow=False checksum=ac66ede306aaf288bddcdd2f412fd886658fba6a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 01:38:23 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:38:23 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000021s ======
Dec  6 01:38:23 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:38:23.972 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000021s
Dec  6 01:38:24 np0005548731 python3.9[119064]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  6 01:38:25 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:38:25 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:38:25 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:38:25.253 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:38:25 np0005548731 python3.9[119216]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  6 01:38:25 np0005548731 python3.9[119339]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765003104.8252678-746-80725981903462/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=4fb1377fac822006b36da2922ca9605bec411794 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 01:38:25 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:38:25 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:38:25 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:38:25.974 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:38:26 np0005548731 python3.9[119492]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  6 01:38:27 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 01:38:27 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:38:27 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:38:27 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:38:27.258 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:38:27 np0005548731 python3.9[119644]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  6 01:38:27 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:38:27 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000021s ======
Dec  6 01:38:27 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:38:27.976 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000021s
Dec  6 01:38:28 np0005548731 python3.9[119767]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765003106.8703046-823-42901569314622/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=4fb1377fac822006b36da2922ca9605bec411794 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 01:38:28 np0005548731 python3.9[119920]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/neutron-metadata setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  6 01:38:29 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:38:29 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:38:29 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:38:29.262 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:38:29 np0005548731 python3.9[120072]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  6 01:38:29 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:38:29 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:38:29 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:38:29.978 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:38:30 np0005548731 python3.9[120195]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765003108.9303887-899-196441129109014/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=4fb1377fac822006b36da2922ca9605bec411794 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 01:38:30 np0005548731 python3.9[120348]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/bootstrap setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  6 01:38:31 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:38:31 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:38:31 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:38:31.266 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:38:31 np0005548731 python3.9[120500]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  6 01:38:31 np0005548731 python3.9[120623]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765003110.8553522-971-26234383872977/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=4fb1377fac822006b36da2922ca9605bec411794 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 01:38:31 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:38:31 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:38:31 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:38:31.980 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:38:32 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 01:38:32 np0005548731 python3.9[120826]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/repo-setup setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  6 01:38:33 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:38:33 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:38:33 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:38:33.270 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:38:33 np0005548731 python3.9[120978]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  6 01:38:33 np0005548731 python3.9[121101]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765003112.8203433-1043-64537702851412/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=4fb1377fac822006b36da2922ca9605bec411794 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 01:38:33 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:38:33 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:38:33 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:38:33.982 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:38:34 np0005548731 python3.9[121254]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  6 01:38:35 np0005548731 python3.9[121406]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  6 01:38:35 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:38:35 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:38:35 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:38:35.275 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:38:35 np0005548731 python3.9[121529]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765003114.8121994-1107-154169232230555/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=4fb1377fac822006b36da2922ca9605bec411794 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 01:38:35 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:38:35 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:38:35 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:38:35.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:38:36 np0005548731 systemd[1]: session-43.scope: Deactivated successfully.
Dec  6 01:38:36 np0005548731 systemd[1]: session-43.scope: Consumed 24.486s CPU time.
Dec  6 01:38:36 np0005548731 systemd-logind[794]: Session 43 logged out. Waiting for processes to exit.
Dec  6 01:38:36 np0005548731 systemd-logind[794]: Removed session 43.
Dec  6 01:38:37 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 01:38:37 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:38:37 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:38:37 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:38:37.279 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:38:37 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:38:37 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:38:37 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:38:37.987 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:38:39 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:38:39 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000021s ======
Dec  6 01:38:39 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:38:39.283 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000021s
Dec  6 01:38:39 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:38:39 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000021s ======
Dec  6 01:38:39 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:38:39.989 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000021s
Dec  6 01:38:41 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:38:41 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000020s ======
Dec  6 01:38:41 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:38:41.287 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000020s
Dec  6 01:38:41 np0005548731 systemd-logind[794]: New session 44 of user zuul.
Dec  6 01:38:41 np0005548731 systemd[1]: Started Session 44 of User zuul.
Dec  6 01:38:41 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:38:41 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000021s ======
Dec  6 01:38:41 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:38:41.991 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000021s
Dec  6 01:38:42 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 01:38:42 np0005548731 python3.9[121713]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 01:38:43 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:38:43 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000020s ======
Dec  6 01:38:43 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:38:43.290 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000020s
Dec  6 01:38:43 np0005548731 python3.9[121865]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/ceph/ceph.client.openstack.keyring follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  6 01:38:43 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:38:43 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000021s ======
Dec  6 01:38:43 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:38:43.993 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000021s
Dec  6 01:38:44 np0005548731 python3.9[121989]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/ceph/ceph.client.openstack.keyring mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1765003122.696248-69-266607527085667/.source.keyring _original_basename=ceph.client.openstack.keyring follow=False checksum=eb306234ce11ca94053ba9deb99a6e4ceca2e349 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 01:38:44 np0005548731 python3.9[122141]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/ceph/ceph.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  6 01:38:45 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:38:45 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:38:45 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:38:45.294 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:38:45 np0005548731 python3.9[122264]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/ceph/ceph.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765003124.3127744-69-102754708467690/.source.conf _original_basename=ceph.conf follow=False checksum=72f9497223d5391694ed548fdd27afc9585eca3f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 01:38:45 np0005548731 systemd[1]: session-44.scope: Deactivated successfully.
Dec  6 01:38:45 np0005548731 systemd[1]: session-44.scope: Consumed 2.659s CPU time.
Dec  6 01:38:45 np0005548731 systemd-logind[794]: Session 44 logged out. Waiting for processes to exit.
Dec  6 01:38:45 np0005548731 systemd-logind[794]: Removed session 44.
Dec  6 01:38:45 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:38:45 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:38:45 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:38:45.995 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:38:47 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 01:38:47 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:38:47 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:38:47 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:38:47.298 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:38:47 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:38:47 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000021s ======
Dec  6 01:38:47 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:38:47.997 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000021s
Dec  6 01:38:49 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:38:49 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000021s ======
Dec  6 01:38:49 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:38:49.301 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000021s
Dec  6 01:38:50 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:38:50 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000021s ======
Dec  6 01:38:50 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:38:49.999 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000021s
Dec  6 01:38:51 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:38:51 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:38:51 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:38:51.304 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:38:51 np0005548731 systemd-logind[794]: New session 45 of user zuul.
Dec  6 01:38:51 np0005548731 systemd[1]: Started Session 45 of User zuul.
Dec  6 01:38:52 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:38:52 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000020s ======
Dec  6 01:38:52 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:38:52.001 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000020s
Dec  6 01:38:52 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 01:38:52 np0005548731 python3.9[122496]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  6 01:38:53 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:38:53 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:38:53 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:38:53.308 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:38:53 np0005548731 python3.9[122652]: ansible-ansible.builtin.file Invoked with group=zuul mode=0750 owner=zuul path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  6 01:38:54 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:38:54 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000021s ======
Dec  6 01:38:54 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:38:54.003 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000021s
Dec  6 01:38:54 np0005548731 python3.9[122805]: ansible-ansible.builtin.file Invoked with group=openvswitch owner=openvswitch path=/var/lib/openvswitch/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Dec  6 01:38:55 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:38:55 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000021s ======
Dec  6 01:38:55 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:38:55.311 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000021s
Dec  6 01:38:55 np0005548731 python3.9[122955]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  6 01:38:56 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:38:56 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000020s ======
Dec  6 01:38:56 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:38:56.005 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000020s
Dec  6 01:38:56 np0005548731 python3.9[123108]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Dec  6 01:38:57 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 01:38:57 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:38:57 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:38:57 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:38:57.315 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:38:58 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:38:58 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:38:58 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:38:58.007 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:38:59 np0005548731 dbus-broker-launch[775]: avc:  op=load_policy lsm=selinux seqno=11 res=1
Dec  6 01:38:59 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:38:59 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:38:59 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:38:59.319 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:38:59 np0005548731 python3.9[123265]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec  6 01:39:00 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:39:00 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000021s ======
Dec  6 01:39:00 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:39:00.009 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000021s
Dec  6 01:39:00 np0005548731 python3.9[123350]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec  6 01:39:01 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:39:01 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:39:01 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:39:01.322 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:39:02 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:39:02 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:39:02 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:39:02.010 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:39:02 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 01:39:03 np0005548731 python3.9[123504]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec  6 01:39:03 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:39:03 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000021s ======
Dec  6 01:39:03 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:39:03.325 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000021s
Dec  6 01:39:04 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:39:04 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:39:04 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:39:04.012 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:39:04 np0005548731 python3[123660]: ansible-osp.edpm.edpm_nftables_snippet Invoked with content=- rule_name: 118 neutron vxlan networks#012  rule:#012    proto: udp#012    dport: 4789#012- rule_name: 119 neutron geneve networks#012  rule:#012    proto: udp#012    dport: 6081#012    state: ["UNTRACKED"]#012- rule_name: 120 neutron geneve networks no conntrack#012  rule:#012    proto: udp#012    dport: 6081#012    table: raw#012    chain: OUTPUT#012    jump: NOTRACK#012    action: append#012    state: []#012- rule_name: 121 neutron geneve networks no conntrack#012  rule:#012    proto: udp#012    dport: 6081#012    table: raw#012    chain: PREROUTING#012    jump: NOTRACK#012    action: append#012    state: []#012 dest=/var/lib/edpm-config/firewall/ovn.yaml state=present
Dec  6 01:39:05 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:39:05 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000020s ======
Dec  6 01:39:05 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:39:05.327 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000020s
Dec  6 01:39:05 np0005548731 python3.9[123812]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 01:39:06 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:39:06 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:39:06 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:39:06.014 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:39:06 np0005548731 python3.9[123965]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  6 01:39:06 np0005548731 python3.9[124043]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 01:39:07 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 01:39:07 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:39:07 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:39:07 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:39:07.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:39:07 np0005548731 python3.9[124195]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  6 01:39:07 np0005548731 python3.9[124273]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.81mk9t5o recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 01:39:08 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:39:08 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000021s ======
Dec  6 01:39:08 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:39:08.016 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000021s
Dec  6 01:39:09 np0005548731 python3.9[124426]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  6 01:39:09 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:39:09 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:39:09 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:39:09.334 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:39:09 np0005548731 python3.9[124504]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 01:39:10 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:39:10 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000021s ======
Dec  6 01:39:10 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:39:10.018 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000021s
Dec  6 01:39:10 np0005548731 python3.9[124806]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  6 01:39:10 np0005548731 podman[124975]: 2025-12-06 06:39:10.923221247 +0000 UTC m=+0.065890215 container exec 541e7276f6038dd900483907d1613bdfcbf99ea3714cf53a920a7189986fab54 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-mon-compute-2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  6 01:39:11 np0005548731 podman[124975]: 2025-12-06 06:39:11.034013592 +0000 UTC m=+0.176682570 container exec_died 541e7276f6038dd900483907d1613bdfcbf99ea3714cf53a920a7189986fab54 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-mon-compute-2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Dec  6 01:39:11 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 01:39:11 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 01:39:11 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Dec  6 01:39:11 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Dec  6 01:39:11 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]': finished
Dec  6 01:39:11 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:39:11 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000021s ======
Dec  6 01:39:11 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:39:11.337 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000021s
Dec  6 01:39:11 np0005548731 python3[125213]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Dec  6 01:39:11 np0005548731 podman[125259]: 2025-12-06 06:39:11.827955241 +0000 UTC m=+0.214738153 container exec 42755d5ddab81b86d84b40c9464a8af637e2d196c1a098a61593efb6a9875167 (image=quay.io/ceph/haproxy:2.3, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-haproxy-rgw-default-compute-2-nyemkw)
Dec  6 01:39:11 np0005548731 podman[125328]: 2025-12-06 06:39:11.928863712 +0000 UTC m=+0.080856069 container exec_died 42755d5ddab81b86d84b40c9464a8af637e2d196c1a098a61593efb6a9875167 (image=quay.io/ceph/haproxy:2.3, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-haproxy-rgw-default-compute-2-nyemkw)
Dec  6 01:39:11 np0005548731 podman[125259]: 2025-12-06 06:39:11.988864029 +0000 UTC m=+0.375646931 container exec_died 42755d5ddab81b86d84b40c9464a8af637e2d196c1a098a61593efb6a9875167 (image=quay.io/ceph/haproxy:2.3, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-haproxy-rgw-default-compute-2-nyemkw)
Dec  6 01:39:12 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:39:12 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000021s ======
Dec  6 01:39:12 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:39:12.020 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000021s
Dec  6 01:39:12 np0005548731 podman[125475]: 2025-12-06 06:39:12.231126885 +0000 UTC m=+0.063453984 container exec 60f905acca99db805341691264f7f91f9f4f43c91aa728d21396595dfca7cf39 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-keepalived-rgw-default-compute-2-cossgt, distribution-scope=public, io.openshift.tags=Ceph keepalived, build-date=2023-02-22T09:23:20, name=keepalived, io.k8s.display-name=Keepalived on RHEL 9, io.buildah.version=1.28.2, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, release=1793, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=keepalived for Ceph, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, vendor=Red Hat, Inc., version=2.2.4, vcs-type=git, com.redhat.component=keepalived-container, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, summary=Provides keepalived on RHEL 9 for Ceph., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64)
Dec  6 01:39:12 np0005548731 podman[125475]: 2025-12-06 06:39:12.246994897 +0000 UTC m=+0.079321986 container exec_died 60f905acca99db805341691264f7f91f9f4f43c91aa728d21396595dfca7cf39 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-keepalived-rgw-default-compute-2-cossgt, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-type=git, io.openshift.expose-services=, build-date=2023-02-22T09:23:20, description=keepalived for Ceph, name=keepalived, io.k8s.display-name=Keepalived on RHEL 9, vendor=Red Hat, Inc., architecture=x86_64, com.redhat.component=keepalived-container, release=1793, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.buildah.version=1.28.2, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides keepalived on RHEL 9 for Ceph., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, version=2.2.4, distribution-scope=public, io.openshift.tags=Ceph keepalived, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec  6 01:39:12 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 01:39:12 np0005548731 python3.9[125478]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  6 01:39:13 np0005548731 python3.9[125799]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765003151.8406663-440-225557572844738/.source.nft follow=False _original_basename=jump-chain.j2 checksum=81c2fc96c23335ffe374f9b064e885d5d971ddf9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 01:39:13 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:39:13 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000021s ======
Dec  6 01:39:13 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:39:13.340 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000021s
Dec  6 01:39:13 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 01:39:13 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 01:39:13 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  6 01:39:13 np0005548731 python3.9[125967]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  6 01:39:14 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:39:14 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000021s ======
Dec  6 01:39:14 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:39:14.022 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000021s
Dec  6 01:39:14 np0005548731 python3.9[126093]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765003153.269814-484-157936995735238/.source.nft follow=False _original_basename=jump-chain.j2 checksum=81c2fc96c23335ffe374f9b064e885d5d971ddf9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 01:39:14 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 01:39:14 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  6 01:39:15 np0005548731 python3.9[126245]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  6 01:39:15 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:39:15 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:39:15 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:39:15.344 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:39:15 np0005548731 python3.9[126370]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765003154.6465151-529-274637206253178/.source.nft follow=False _original_basename=flush-chain.j2 checksum=4d3ffec49c8eb1a9b80d2f1e8cd64070063a87b4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 01:39:16 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:39:16 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:39:16 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:39:16.025 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:39:16 np0005548731 python3.9[126523]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  6 01:39:17 np0005548731 python3.9[126648]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765003155.959132-575-124329684969254/.source.nft follow=False _original_basename=chains.j2 checksum=298ada419730ec15df17ded0cc50c97a4014a591 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 01:39:17 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 01:39:17 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:39:17 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:39:17 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:39:17.347 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:39:18 np0005548731 python3.9[126800]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  6 01:39:18 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:39:18 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:39:18 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:39:18.027 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:39:18 np0005548731 python3.9[126926]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765003157.375027-619-226338476056989/.source.nft follow=False _original_basename=ruleset.j2 checksum=bdba38546f86123f1927359d89789bd211aba99d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 01:39:19 np0005548731 python3.9[127078]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 01:39:19 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:39:19 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000021s ======
Dec  6 01:39:19 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:39:19.350 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000021s
Dec  6 01:39:20 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:39:20 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:39:20 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:39:20.029 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:39:20 np0005548731 python3.9[127230]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  6 01:39:20 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 01:39:20 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 01:39:20 np0005548731 python3.9[127436]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
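[annotation] The blockinfile task above maintains a marker-delimited include block in /etc/sysconfig/nftables.conf (the `#012` sequences are syslog-escaped newlines), validated with `nft -c -f %s` before being committed. A minimal sketch of the resulting block, written to a temp file so it does not touch the real config:

```shell
# Sketch of the ANSIBLE MANAGED BLOCK the logged blockinfile task keeps in
# /etc/sysconfig/nftables.conf; include paths and markers are copied from
# the logged invocation. Written to a temp file here, not the real config.
conf=$(mktemp)
cat > "$conf" <<'EOF'
# BEGIN ANSIBLE MANAGED BLOCK
include "/etc/nftables/iptables.nft"
include "/etc/nftables/edpm-chains.nft"
include "/etc/nftables/edpm-rules.nft"
include "/etc/nftables/edpm-jumps.nft"
# END ANSIBLE MANAGED BLOCK
EOF
# The real task then validates the candidate file via: nft -c -f %s
grep -c '^include' "$conf"
```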
Dec  6 01:39:21 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:39:21 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000021s ======
Dec  6 01:39:21 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:39:21.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000021s
Dec  6 01:39:21 np0005548731 python3.9[127588]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  6 01:39:22 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:39:22 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:39:22 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:39:22.032 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:39:22 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 01:39:22 np0005548731 python3.9[127742]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  6 01:39:23 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:39:23 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000021s ======
Dec  6 01:39:23 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:39:23.358 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000021s
Dec  6 01:39:23 np0005548731 python3.9[127896]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  6 01:39:24 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:39:24 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:39:24 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:39:24.035 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:39:24 np0005548731 python3.9[128052]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
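[annotation] The tasks between 01:39:14 and 01:39:24 implement a marker-gated nftables reload: the generated files are syntax-checked together with `nft -c -f -`, chains are applied unconditionally, and the flushes/rules/jump-updates are applied only while `edpm-rules.nft.changed` exists, after which the marker is deleted. A hedged sketch of that flow, using a temp directory in place of /etc/nftables and running the real `nft` calls only where the binary is present:

```shell
# Sketch of the marker-gated EDPM nftables update seen in the log.
# File names match the log; placeholder contents stand in for the real
# generated rulesets, and nft is only invoked if it is installed.
set -e
NFT_DIR=$(mktemp -d)
for f in edpm-chains edpm-flushes edpm-rules edpm-update-jumps; do
  printf '# %s placeholder\n' "$f" > "$NFT_DIR/$f.nft"
done
touch "$NFT_DIR/edpm-rules.nft.changed"   # Ansible touches this after rewriting rules

# Step 1: syntax-check the concatenated ruleset (mirrors the logged pipeline
#   cat chains flushes rules update-jumps jumps | nft -c -f -).
cat "$NFT_DIR"/edpm-chains.nft "$NFT_DIR"/edpm-flushes.nft \
    "$NFT_DIR"/edpm-rules.nft "$NFT_DIR"/edpm-update-jumps.nft > "$NFT_DIR/combined.nft"
if command -v nft >/dev/null 2>&1; then nft -c -f "$NFT_DIR/combined.nft"; fi

# Step 2: apply flushes+rules+jump updates only while the marker exists,
# then remove the marker so an unchanged run skips the reload.
APPLIED=no
if [ -e "$NFT_DIR/edpm-rules.nft.changed" ]; then
  APPLIED=yes
  rm "$NFT_DIR/edpm-rules.nft.changed"
fi
echo "applied=$APPLIED"
```

On the real host, step 2 corresponds to the logged `cat edpm-flushes.nft edpm-rules.nft edpm-update-jumps.nft | nft -f -` that follows the `stat` of the marker file.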
Dec  6 01:39:24 np0005548731 ceph-mon[77458]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec  6 01:39:24 np0005548731 ceph-mon[77458]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 600.0 total, 600.0 interval#012Cumulative writes: 1642 writes, 11K keys, 1642 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.04 MB/s#012Cumulative WAL: 1641 writes, 1641 syncs, 1.00 writes per sync, written: 0.02 GB, 0.04 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1642 writes, 11K keys, 1642 commit groups, 1.0 writes per commit group, ingest: 21.93 MB, 0.04 MB/s#012Interval WAL: 1641 writes, 1641 syncs, 1.00 writes per sync, written: 0.02 GB, 0.04 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      7.0      1.59              0.03         4    0.397       0      0       0.0       0.0#012  L6      1/0    8.14 MB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   2.3     35.6     31.8      0.80              0.07         3    0.266     12K   1253       0.0       0.0#012 Sum      1/0    8.14 MB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   3.3     11.9     15.3      2.39              0.11         7    0.341     12K   1253       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   3.3     11.9     15.3      2.39              0.11         6    0.398     12K   1253       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0     35.6     31.8      0.80              0.07         3    0.266     12K   1253       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      7.1      1.59              0.03         3    0.529       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.6      0.00              0.00         1    0.003       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.0 total, 600.0 interval#012Flush(GB): cumulative 0.011, interval 0.011#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.04 GB write, 0.06 MB/s write, 0.03 GB read, 0.05 MB/s read, 2.4 seconds#012Interval compaction: 0.04 GB write, 0.06 MB/s write, 0.03 GB read, 0.05 MB/s read, 2.4 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x5619171151f0#2 capacity: 304.00 MB usage: 1.14 MB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 0 last_secs: 6.3e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(54,1.01 MB,0.331236%) FilterBlock(7,45.17 KB,0.0145109%) IndexBlock(7,90.70 KB,0.0291373%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Dec  6 01:39:25 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:39:25 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:39:25 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:39:25.362 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:39:25 np0005548731 python3.9[128202]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'machine'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  6 01:39:26 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:39:26 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:39:26 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:39:26.037 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:39:26 np0005548731 python3.9[128356]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl set open . external_ids:hostname=compute-2.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings="datacentre:2e:0a:f2:93:49:d5" external_ids:ovn-encap-ip=172.19.0.100 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch #012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  6 01:39:26 np0005548731 ovs-vsctl[128357]: ovs|00001|vsctl|INFO|Called as ovs-vsctl set open . external_ids:hostname=compute-2.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings=datacentre:2e:0a:f2:93:49:d5 external_ids:ovn-encap-ip=172.19.0.100 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch
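[annotation] The task above registers this node as an OVN chassis by setting `external_ids` keys on the local Open_vSwitch row in a single `ovs-vsctl set open .` call. A dry-run sketch that only prints the command (the key/value pairs are a subset copied from the logged invocation; the `build_cmd` helper is illustrative, not part of any tool):

```shell
# Dry-run sketch of the logged OVN chassis registration: build the single
# "ovs-vsctl set open ." command from external_ids key=value pairs and print
# it instead of executing, so the sketch is safe off-host.
build_cmd() {
  printf 'ovs-vsctl set open .'
  for kv in "$@"; do
    printf ' external_ids:%s' "$kv"
  done
  printf '\n'
}
CMD=$(build_cmd \
  hostname=compute-2.ctlplane.example.com \
  ovn-bridge=br-int \
  ovn-bridge-mappings=datacentre:br-ex \
  ovn-encap-ip=172.19.0.100 \
  ovn-encap-type=geneve \
  ovn-monitor-all=True \
  ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642)
echo "$CMD"
```

ovn-controller on the node reads these keys to decide which bridge to manage (`ovn-bridge`), how to tunnel (`ovn-encap-type`/`ovn-encap-ip`), and which southbound database to connect to (`ovn-remote`).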
Dec  6 01:39:27 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 01:39:27 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:39:27 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 01:39:27 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:39:27.365 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 01:39:27 np0005548731 python3.9[128509]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail#012ovs-vsctl show | grep -q "Manager"#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  6 01:39:28 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:39:28 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:39:28 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:39:28.039 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:39:28 np0005548731 python3.9[128665]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl --timeout=5 --id=@manager -- create Manager target=\"ptcp:********@manager#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  6 01:39:28 np0005548731 ovs-vsctl[128666]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --timeout=5 --id=@manager -- create Manager "target=\"ptcp:6640:127.0.0.1\"" -- add Open_vSwitch . manager_options @manager
Dec  6 01:39:29 np0005548731 python3.9[128816]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  6 01:39:29 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:39:29 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 01:39:29 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:39:29.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 01:39:30 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:39:30 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 01:39:30 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:39:30.041 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 01:39:30 np0005548731 python3.9[128970]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  6 01:39:30 np0005548731 python3.9[129123]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  6 01:39:31 np0005548731 python3.9[129201]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  6 01:39:31 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:39:31 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 01:39:31 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:39:31.371 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 01:39:31 np0005548731 python3.9[129353]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  6 01:39:32 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:39:32 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:39:32 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:39:32.043 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:39:32 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 01:39:32 np0005548731 python3.9[129432]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  6 01:39:33 np0005548731 python3.9[129634]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 01:39:33 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:39:33 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:39:33 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:39:33.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:39:33 np0005548731 python3.9[129786]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  6 01:39:34 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:39:34 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:39:34 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:39:34.045 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:39:34 np0005548731 python3.9[129865]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 01:39:35 np0005548731 python3.9[130017]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  6 01:39:35 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:39:35 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 01:39:35 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:39:35.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 01:39:35 np0005548731 python3.9[130095]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 01:39:36 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:39:36 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 01:39:36 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:39:36.047 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 01:39:36 np0005548731 python3.9[130248]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  6 01:39:36 np0005548731 systemd[1]: Reloading.
Dec  6 01:39:36 np0005548731 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  6 01:39:36 np0005548731 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
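[annotation] The sequence above deploys a unit file plus a `91-*.preset` file under /etc/systemd/system-preset, then runs an `ansible.builtin.systemd` task with `daemon_reload=True enabled=True state=started`. The preset file's content is not shown in the log; the single `enable` line below follows the standard systemd.preset(5) format and is an assumption. Sketched against a temp directory so it does not touch the real preset directory:

```shell
# Sketch of the preset-then-enable pattern from the log. The preset line is
# assumed (standard systemd.preset format); the temp dir stands in for
# /etc/systemd/system-preset.
PRESET_DIR=$(mktemp -d)
printf 'enable edpm-container-shutdown.service\n' \
  > "$PRESET_DIR/91-edpm-container-shutdown.preset"
cat "$PRESET_DIR/91-edpm-container-shutdown.preset"
# On the real host the ansible.builtin.systemd task then performs the
# equivalent of:
#   systemctl daemon-reload
#   systemctl enable --now edpm-container-shutdown.service
# which is what triggers the "Reloading." records above.
```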
Dec  6 01:39:37 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 01:39:37 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:39:37 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:39:37 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:39:37.384 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:39:37 np0005548731 python3.9[130436]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  6 01:39:38 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:39:38 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:39:38 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:39:38.049 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:39:38 np0005548731 python3.9[130515]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 01:39:39 np0005548731 python3.9[130667]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  6 01:39:39 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:39:39 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 01:39:39 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:39:39.388 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 01:39:39 np0005548731 python3.9[130745]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 01:39:40 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:39:40 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:39:40 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:39:40.051 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:39:40 np0005548731 python3.9[130898]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  6 01:39:40 np0005548731 systemd[1]: Reloading.
Dec  6 01:39:40 np0005548731 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  6 01:39:40 np0005548731 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  6 01:39:40 np0005548731 systemd[1]: Starting Create netns directory...
Dec  6 01:39:40 np0005548731 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Dec  6 01:39:40 np0005548731 systemd[1]: netns-placeholder.service: Deactivated successfully.
Dec  6 01:39:40 np0005548731 systemd[1]: Finished Create netns directory.
Dec  6 01:39:41 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:39:41 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 01:39:41 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:39:41.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 01:39:41 np0005548731 python3.9[131093]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  6 01:39:42 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:39:42 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:39:42 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:39:42.053 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:39:42 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 01:39:42 np0005548731 python3.9[131246]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_controller/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  6 01:39:43 np0005548731 python3.9[131369]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_controller/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765003181.9332023-1373-155389622334824/.source _original_basename=healthcheck follow=False checksum=4098dd010265fabdf5c26b97d169fc4e575ff457 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec  6 01:39:43 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:39:43 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:39:43 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:39:43.394 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:39:44 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:39:44 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 01:39:44 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:39:44.055 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 01:39:44 np0005548731 python3.9[131521]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  6 01:39:44 np0005548731 python3.9[131674]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_controller.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  6 01:39:45 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:39:45 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:39:45 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:39:45.398 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:39:46 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:39:46 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 01:39:46 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:39:46.056 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 01:39:46 np0005548731 python3.9[131798]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_controller.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1765003184.3735638-1447-246520885392539/.source.json _original_basename=.b7sjf599 follow=False checksum=2328fc98619beeb08ee32b01f15bb43094c10b61 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 01:39:46 np0005548731 python3.9[131950]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_controller state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 01:39:47 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 01:39:47 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:39:47 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:39:47 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:39:47.400 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:39:48 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:39:48 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 01:39:48 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:39:48.058 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 01:39:49 np0005548731 python3.9[132378]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_controller config_pattern=*.json debug=False
Dec  6 01:39:49 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:39:49 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:39:49 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:39:49.403 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:39:50 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:39:50 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:39:50 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:39:50.060 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:39:50 np0005548731 python3.9[132531]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Dec  6 01:39:51 np0005548731 python3.9[132683]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Dec  6 01:39:51 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:39:51 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:39:51 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:39:51.407 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:39:52 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:39:52 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:39:52 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:39:52.062 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:39:52 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 01:39:52 np0005548731 python3[132912]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_controller config_id=ovn_controller config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Dec  6 01:39:53 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:39:53 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 01:39:53 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:39:53.410 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 01:39:54 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:39:54 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 01:39:54 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:39:54.064 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 01:39:55 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:39:55 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:39:55 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:39:55.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:39:56 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:39:56 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 01:39:56 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:39:56.066 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 01:39:57 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 01:39:57 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:39:57 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 01:39:57 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:39:57.416 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 01:39:58 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:39:58 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 01:39:58 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:39:58.068 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 01:39:58 np0005548731 podman[132926]: 2025-12-06 06:39:58.915196237 +0000 UTC m=+5.885430993 image pull 3a37a52861b2e44ebd2a63ca2589a7c9d8e4119e5feace9d19c6312ed9b8421c quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Dec  6 01:39:59 np0005548731 podman[133049]: 2025-12-06 06:39:59.049967155 +0000 UTC m=+0.027073243 image pull 3a37a52861b2e44ebd2a63ca2589a7c9d8e4119e5feace9d19c6312ed9b8421c quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Dec  6 01:39:59 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:39:59 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 01:39:59 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:39:59.420 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 01:39:59 np0005548731 podman[133049]: 2025-12-06 06:39:59.756934234 +0000 UTC m=+0.734040302 container create a118c3becf56cfbcae208065771ebf10a65162bb7b96389971293e534823fb78 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_id=ovn_controller, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0)
Dec  6 01:39:59 np0005548731 python3[132912]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_controller --conmon-pidfile /run/ovn_controller.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --healthcheck-command /openstack/healthcheck --label config_id=ovn_controller --label container_name=ovn_controller --label managed_by=edpm_ansible --label config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --user root --volume /lib/modules:/lib/modules:ro --volume /run:/run --volume /var/lib/openvswitch/ovn:/run/ovn:shared,z --volume /var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z --volume /var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z --volume /var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z --volume /var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Dec  6 01:40:00 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:40:00 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:40:00 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:40:00.070 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:40:01 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:40:01 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:40:01 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:40:01.425 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:40:02 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:40:02 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:40:02 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:40:02.072 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:40:02 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 01:40:02 np0005548731 ceph-mon[77458]: overall HEALTH_OK
Dec  6 01:40:03 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:40:03 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:40:03 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:40:03.429 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:40:04 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:40:04 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:40:04 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:40:04.074 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:40:05 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:40:05 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 01:40:05 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:40:05.432 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 01:40:06 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:40:06 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 01:40:06 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:40:06.077 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 01:40:07 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 01:40:07 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:40:07 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 01:40:07 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:40:07.437 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 01:40:08 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:40:08 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 01:40:08 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:40:08.079 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 01:40:08 np0005548731 ceph-mds[82074]: mds.beacon.cephfs.compute-2.tjfgow missed beacon ack from the monitors
Dec  6 01:40:09 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:40:09 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:40:09 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:40:09.442 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:40:10 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:40:10 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:40:10 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:40:10.081 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:40:11 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:40:11 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:40:11 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:40:11.446 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:40:12 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:40:12 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 01:40:12 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:40:12.083 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 01:40:12 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 01:40:13 np0005548731 python3.9[133298]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  6 01:40:13 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:40:13 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 01:40:13 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:40:13.449 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 01:40:14 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:40:14 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:40:14 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:40:14.085 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:40:14 np0005548731 python3.9[133453]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_controller.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 01:40:14 np0005548731 python3.9[133529]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_controller_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  6 01:40:15 np0005548731 python3.9[133680]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1765003214.7704039-1711-5439662231270/source dest=/etc/systemd/system/edpm_ovn_controller.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 01:40:15 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:40:15 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:40:15 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:40:15.453 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:40:15 np0005548731 python3.9[133756]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec  6 01:40:15 np0005548731 systemd[1]: Reloading.
Dec  6 01:40:16 np0005548731 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  6 01:40:16 np0005548731 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  6 01:40:16 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:40:16 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 01:40:16 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:40:16.087 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 01:40:17 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:40:17 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:40:17 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:40:17.456 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:40:17 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 01:40:17 np0005548731 python3.9[133869]: ansible-systemd Invoked with state=restarted name=edpm_ovn_controller.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  6 01:40:17 np0005548731 systemd[1]: Reloading.
Dec  6 01:40:17 np0005548731 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  6 01:40:17 np0005548731 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  6 01:40:18 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:40:18 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:40:18 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:40:18.089 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:40:18 np0005548731 systemd[1]: Starting ovn_controller container...
Dec  6 01:40:18 np0005548731 systemd[1]: Started libcrun container.
Dec  6 01:40:18 np0005548731 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/308735ced6531073da6c584391c7c696ca6058b40154a6609b58c81b6b5dc53a/merged/run/ovn supports timestamps until 2038 (0x7fffffff)
Dec  6 01:40:18 np0005548731 systemd[1]: Started /usr/bin/podman healthcheck run a118c3becf56cfbcae208065771ebf10a65162bb7b96389971293e534823fb78.
Dec  6 01:40:18 np0005548731 podman[133912]: 2025-12-06 06:40:18.37258143 +0000 UTC m=+0.151458375 container init a118c3becf56cfbcae208065771ebf10a65162bb7b96389971293e534823fb78 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125)
Dec  6 01:40:18 np0005548731 ovn_controller[133927]: + sudo -E kolla_set_configs
Dec  6 01:40:18 np0005548731 podman[133912]: 2025-12-06 06:40:18.402223694 +0000 UTC m=+0.181100609 container start a118c3becf56cfbcae208065771ebf10a65162bb7b96389971293e534823fb78 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=ovn_controller, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec  6 01:40:18 np0005548731 edpm-start-podman-container[133912]: ovn_controller
Dec  6 01:40:18 np0005548731 systemd[1]: Created slice User Slice of UID 0.
Dec  6 01:40:18 np0005548731 systemd[1]: Starting User Runtime Directory /run/user/0...
Dec  6 01:40:18 np0005548731 systemd[1]: Finished User Runtime Directory /run/user/0.
Dec  6 01:40:18 np0005548731 systemd[1]: Starting User Manager for UID 0...
Dec  6 01:40:18 np0005548731 edpm-start-podman-container[133911]: Creating additional drop-in dependency for "ovn_controller" (a118c3becf56cfbcae208065771ebf10a65162bb7b96389971293e534823fb78)
Dec  6 01:40:18 np0005548731 podman[133934]: 2025-12-06 06:40:18.497965975 +0000 UTC m=+0.080320236 container health_status a118c3becf56cfbcae208065771ebf10a65162bb7b96389971293e534823fb78 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=starting, health_failing_streak=1, health_log=, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Dec  6 01:40:18 np0005548731 systemd[1]: a118c3becf56cfbcae208065771ebf10a65162bb7b96389971293e534823fb78-6580ec1f71f0460e.service: Main process exited, code=exited, status=1/FAILURE
Dec  6 01:40:18 np0005548731 systemd[1]: a118c3becf56cfbcae208065771ebf10a65162bb7b96389971293e534823fb78-6580ec1f71f0460e.service: Failed with result 'exit-code'.
Dec  6 01:40:18 np0005548731 systemd[1]: Reloading.
Dec  6 01:40:18 np0005548731 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  6 01:40:18 np0005548731 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  6 01:40:18 np0005548731 systemd[133968]: Queued start job for default target Main User Target.
Dec  6 01:40:18 np0005548731 systemd[133968]: Created slice User Application Slice.
Dec  6 01:40:18 np0005548731 systemd[133968]: Mark boot as successful after the user session has run 2 minutes was skipped because of an unmet condition check (ConditionUser=!@system).
Dec  6 01:40:18 np0005548731 systemd[133968]: Started Daily Cleanup of User's Temporary Directories.
Dec  6 01:40:18 np0005548731 systemd[133968]: Reached target Paths.
Dec  6 01:40:18 np0005548731 systemd[133968]: Reached target Timers.
Dec  6 01:40:18 np0005548731 systemd[133968]: Starting D-Bus User Message Bus Socket...
Dec  6 01:40:18 np0005548731 systemd[133968]: Starting Create User's Volatile Files and Directories...
Dec  6 01:40:18 np0005548731 systemd[133968]: Finished Create User's Volatile Files and Directories.
Dec  6 01:40:18 np0005548731 systemd[133968]: Listening on D-Bus User Message Bus Socket.
Dec  6 01:40:18 np0005548731 systemd[133968]: Reached target Sockets.
Dec  6 01:40:18 np0005548731 systemd[133968]: Reached target Basic System.
Dec  6 01:40:18 np0005548731 systemd[133968]: Reached target Main User Target.
Dec  6 01:40:18 np0005548731 systemd[133968]: Startup finished in 182ms.
Dec  6 01:40:18 np0005548731 systemd[1]: Started User Manager for UID 0.
Dec  6 01:40:18 np0005548731 systemd[1]: Started ovn_controller container.
Dec  6 01:40:18 np0005548731 systemd[1]: Started Session c1 of User root.
Dec  6 01:40:19 np0005548731 ovn_controller[133927]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Dec  6 01:40:19 np0005548731 ovn_controller[133927]: INFO:__main__:Validating config file
Dec  6 01:40:19 np0005548731 ovn_controller[133927]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Dec  6 01:40:19 np0005548731 ovn_controller[133927]: INFO:__main__:Writing out command to execute
Dec  6 01:40:19 np0005548731 systemd[1]: session-c1.scope: Deactivated successfully.
Dec  6 01:40:19 np0005548731 ovn_controller[133927]: ++ cat /run_command
Dec  6 01:40:19 np0005548731 ovn_controller[133927]: + CMD='/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '
Dec  6 01:40:19 np0005548731 ovn_controller[133927]: + ARGS=
Dec  6 01:40:19 np0005548731 ovn_controller[133927]: + sudo kolla_copy_cacerts
Dec  6 01:40:19 np0005548731 systemd[1]: Started Session c2 of User root.
Dec  6 01:40:19 np0005548731 systemd[1]: session-c2.scope: Deactivated successfully.
Dec  6 01:40:19 np0005548731 ovn_controller[133927]: Running command: '/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '
Dec  6 01:40:19 np0005548731 ovn_controller[133927]: + [[ ! -n '' ]]
Dec  6 01:40:19 np0005548731 ovn_controller[133927]: + . kolla_extend_start
Dec  6 01:40:19 np0005548731 ovn_controller[133927]: + echo 'Running command: '\''/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '\'''
Dec  6 01:40:19 np0005548731 ovn_controller[133927]: + umask 0022
Dec  6 01:40:19 np0005548731 ovn_controller[133927]: + exec /usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt
Dec  6 01:40:19 np0005548731 ovn_controller[133927]: 2025-12-06T06:40:19Z|00001|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Dec  6 01:40:19 np0005548731 ovn_controller[133927]: 2025-12-06T06:40:19Z|00002|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Dec  6 01:40:19 np0005548731 ovn_controller[133927]: 2025-12-06T06:40:19Z|00003|main|INFO|OVN internal version is : [24.03.8-20.33.0-76.8]
Dec  6 01:40:19 np0005548731 ovn_controller[133927]: 2025-12-06T06:40:19Z|00004|main|INFO|OVS IDL reconnected, force recompute.
Dec  6 01:40:19 np0005548731 ovn_controller[133927]: 2025-12-06T06:40:19Z|00005|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Dec  6 01:40:19 np0005548731 ovn_controller[133927]: 2025-12-06T06:40:19Z|00006|main|INFO|OVNSB IDL reconnected, force recompute.
Dec  6 01:40:19 np0005548731 NetworkManager[49182]: <info>  [1765003219.1259] manager: (br-int): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/16)
Dec  6 01:40:19 np0005548731 NetworkManager[49182]: <info>  [1765003219.1267] device (br-int)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec  6 01:40:19 np0005548731 NetworkManager[49182]: <info>  [1765003219.1278] manager: (br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/17)
Dec  6 01:40:19 np0005548731 NetworkManager[49182]: <info>  [1765003219.1283] manager: (br-int): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/18)
Dec  6 01:40:19 np0005548731 NetworkManager[49182]: <info>  [1765003219.1287] device (br-int)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Dec  6 01:40:19 np0005548731 ovn_controller[133927]: 2025-12-06T06:40:19Z|00007|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connected
Dec  6 01:40:19 np0005548731 kernel: br-int: entered promiscuous mode
Dec  6 01:40:19 np0005548731 ovn_controller[133927]: 2025-12-06T06:40:19Z|00008|features|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Dec  6 01:40:19 np0005548731 ovn_controller[133927]: 2025-12-06T06:40:19Z|00009|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Dec  6 01:40:19 np0005548731 ovn_controller[133927]: 2025-12-06T06:40:19Z|00010|features|INFO|OVS Feature: ct_zero_snat, state: supported
Dec  6 01:40:19 np0005548731 ovn_controller[133927]: 2025-12-06T06:40:19Z|00011|features|INFO|OVS Feature: ct_flush, state: supported
Dec  6 01:40:19 np0005548731 ovn_controller[133927]: 2025-12-06T06:40:19Z|00012|features|INFO|OVS Feature: dp_hash_l4_sym_support, state: supported
Dec  6 01:40:19 np0005548731 ovn_controller[133927]: 2025-12-06T06:40:19Z|00013|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Dec  6 01:40:19 np0005548731 ovn_controller[133927]: 2025-12-06T06:40:19Z|00014|main|INFO|OVS feature set changed, force recompute.
Dec  6 01:40:19 np0005548731 ovn_controller[133927]: 2025-12-06T06:40:19Z|00015|ofctrl|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Dec  6 01:40:19 np0005548731 ovn_controller[133927]: 2025-12-06T06:40:19Z|00016|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Dec  6 01:40:19 np0005548731 ovn_controller[133927]: 2025-12-06T06:40:19Z|00017|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Dec  6 01:40:19 np0005548731 ovn_controller[133927]: 2025-12-06T06:40:19Z|00018|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Dec  6 01:40:19 np0005548731 ovn_controller[133927]: 2025-12-06T06:40:19Z|00019|main|INFO|OVS feature set changed, force recompute.
Dec  6 01:40:19 np0005548731 ovn_controller[133927]: 2025-12-06T06:40:19Z|00020|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Dec  6 01:40:19 np0005548731 ovn_controller[133927]: 2025-12-06T06:40:19Z|00021|ofctrl|INFO|ofctrl-wait-before-clear is now 8000 ms (was 0 ms)
Dec  6 01:40:19 np0005548731 ovn_controller[133927]: 2025-12-06T06:40:19Z|00022|main|INFO|OVS OpenFlow connection reconnected,force recompute.
Dec  6 01:40:19 np0005548731 ovn_controller[133927]: 2025-12-06T06:40:19Z|00023|features|INFO|OVS DB schema supports 4 flow table prefixes, our IDL supports: 4
Dec  6 01:40:19 np0005548731 ovn_controller[133927]: 2025-12-06T06:40:19Z|00024|main|INFO|Setting flow table prefixes: ip_src, ip_dst, ipv6_src, ipv6_dst.
Dec  6 01:40:19 np0005548731 ovn_controller[133927]: 2025-12-06T06:40:19Z|00001|pinctrl(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Dec  6 01:40:19 np0005548731 ovn_controller[133927]: 2025-12-06T06:40:19Z|00001|statctrl(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Dec  6 01:40:19 np0005548731 ovn_controller[133927]: 2025-12-06T06:40:19Z|00002|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Dec  6 01:40:19 np0005548731 ovn_controller[133927]: 2025-12-06T06:40:19Z|00002|rconn(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Dec  6 01:40:19 np0005548731 ovn_controller[133927]: 2025-12-06T06:40:19Z|00003|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Dec  6 01:40:19 np0005548731 ovn_controller[133927]: 2025-12-06T06:40:19Z|00003|rconn(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Dec  6 01:40:19 np0005548731 NetworkManager[49182]: <info>  [1765003219.1476] manager: (ovn-03fe05-0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/19)
Dec  6 01:40:19 np0005548731 NetworkManager[49182]: <info>  [1765003219.1486] manager: (ovn-feab6d-0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/20)
Dec  6 01:40:19 np0005548731 NetworkManager[49182]: <info>  [1765003219.1492] manager: (ovn-150d59-0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/21)
Dec  6 01:40:19 np0005548731 kernel: genev_sys_6081: entered promiscuous mode
Dec  6 01:40:19 np0005548731 systemd-udevd[134063]: Network interface NamePolicy= disabled on kernel command line.
Dec  6 01:40:19 np0005548731 NetworkManager[49182]: <info>  [1765003219.1656] device (genev_sys_6081): carrier: link connected
Dec  6 01:40:19 np0005548731 NetworkManager[49182]: <info>  [1765003219.1658] manager: (genev_sys_6081): new Generic device (/org/freedesktop/NetworkManager/Devices/22)
Dec  6 01:40:19 np0005548731 systemd-udevd[134065]: Network interface NamePolicy= disabled on kernel command line.
Dec  6 01:40:19 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:40:19 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:40:19 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:40:19.459 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:40:20 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:40:20 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:40:20 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:40:20.091 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:40:20 np0005548731 python3.9[134196]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove open . other_config hw-offload#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  6 01:40:20 np0005548731 ovs-vsctl[134246]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove open . other_config hw-offload
Dec  6 01:40:20 np0005548731 podman[134448]: 2025-12-06 06:40:20.816914423 +0000 UTC m=+0.073483215 container exec 541e7276f6038dd900483907d1613bdfcbf99ea3714cf53a920a7189986fab54 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-mon-compute-2, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  6 01:40:20 np0005548731 podman[134448]: 2025-12-06 06:40:20.930211971 +0000 UTC m=+0.186780763 container exec_died 541e7276f6038dd900483907d1613bdfcbf99ea3714cf53a920a7189986fab54 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-mon-compute-2, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Dec  6 01:40:21 np0005548731 python3.9[134543]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl get Open_vSwitch . external_ids:ovn-cms-options | sed 's/\"//g'#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  6 01:40:21 np0005548731 ovs-vsctl[134591]: ovs|00001|db_ctl_base|ERR|no key "ovn-cms-options" in Open_vSwitch record "." column external_ids
Dec  6 01:40:21 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:40:21 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:40:21 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:40:21.462 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:40:21 np0005548731 podman[134698]: 2025-12-06 06:40:21.811457407 +0000 UTC m=+0.339732345 container exec 42755d5ddab81b86d84b40c9464a8af637e2d196c1a098a61593efb6a9875167 (image=quay.io/ceph/haproxy:2.3, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-haproxy-rgw-default-compute-2-nyemkw)
Dec  6 01:40:21 np0005548731 podman[134698]: 2025-12-06 06:40:21.82507904 +0000 UTC m=+0.353353978 container exec_died 42755d5ddab81b86d84b40c9464a8af637e2d196c1a098a61593efb6a9875167 (image=quay.io/ceph/haproxy:2.3, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-haproxy-rgw-default-compute-2-nyemkw)
Dec  6 01:40:22 np0005548731 podman[134876]: 2025-12-06 06:40:22.072190665 +0000 UTC m=+0.063798825 container exec 60f905acca99db805341691264f7f91f9f4f43c91aa728d21396595dfca7cf39 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-keepalived-rgw-default-compute-2-cossgt, io.k8s.display-name=Keepalived on RHEL 9, architecture=x86_64, summary=Provides keepalived on RHEL 9 for Ceph., vcs-type=git, description=keepalived for Ceph, io.buildah.version=1.28.2, com.redhat.component=keepalived-container, name=keepalived, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=Ceph keepalived, version=2.2.4, build-date=2023-02-22T09:23:20, release=1793)
Dec  6 01:40:22 np0005548731 podman[134876]: 2025-12-06 06:40:22.086917695 +0000 UTC m=+0.078525825 container exec_died 60f905acca99db805341691264f7f91f9f4f43c91aa728d21396595dfca7cf39 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-keepalived-rgw-default-compute-2-cossgt, com.redhat.component=keepalived-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1793, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=Ceph keepalived, version=2.2.4, architecture=x86_64, build-date=2023-02-22T09:23:20, description=keepalived for Ceph, io.openshift.expose-services=, name=keepalived, summary=Provides keepalived on RHEL 9 for Ceph., vcs-type=git, vendor=Red Hat, Inc., io.buildah.version=1.28.2, io.k8s.display-name=Keepalived on RHEL 9, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, distribution-scope=public)
Dec  6 01:40:22 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:40:22 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:40:22 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:40:22.093 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:40:22 np0005548731 python3.9[134903]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  6 01:40:22 np0005548731 ovs-vsctl[134922]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options
Dec  6 01:40:22 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 01:40:22 np0005548731 systemd-logind[794]: Session 45 logged out. Waiting for processes to exit.
Dec  6 01:40:22 np0005548731 systemd[1]: session-45.scope: Deactivated successfully.
Dec  6 01:40:22 np0005548731 systemd[1]: session-45.scope: Consumed 59.317s CPU time.
Dec  6 01:40:22 np0005548731 systemd-logind[794]: Removed session 45.
Dec  6 01:40:23 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:40:23 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:40:23 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:40:23.465 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:40:24 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:40:24 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:40:24 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:40:24.095 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:40:25 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:40:25 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 01:40:25 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:40:25.468 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 01:40:26 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:40:26 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 01:40:26 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:40:26.097 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 01:40:26 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 01:40:26 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 01:40:27 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  6 01:40:27 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 01:40:27 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  6 01:40:27 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:40:27 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:40:27 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:40:27.472 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:40:27 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 01:40:28 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:40:28 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:40:28 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:40:28.099 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:40:28 np0005548731 systemd-logind[794]: New session 47 of user zuul.
Dec  6 01:40:28 np0005548731 systemd[1]: Started Session 47 of User zuul.
Dec  6 01:40:29 np0005548731 systemd[1]: Stopping User Manager for UID 0...
Dec  6 01:40:29 np0005548731 systemd[133968]: Activating special unit Exit the Session...
Dec  6 01:40:29 np0005548731 systemd[133968]: Stopped target Main User Target.
Dec  6 01:40:29 np0005548731 systemd[133968]: Stopped target Basic System.
Dec  6 01:40:29 np0005548731 systemd[133968]: Stopped target Paths.
Dec  6 01:40:29 np0005548731 systemd[133968]: Stopped target Sockets.
Dec  6 01:40:29 np0005548731 systemd[133968]: Stopped target Timers.
Dec  6 01:40:29 np0005548731 systemd[133968]: Stopped Daily Cleanup of User's Temporary Directories.
Dec  6 01:40:29 np0005548731 systemd[133968]: Closed D-Bus User Message Bus Socket.
Dec  6 01:40:29 np0005548731 systemd[133968]: Stopped Create User's Volatile Files and Directories.
Dec  6 01:40:29 np0005548731 systemd[133968]: Removed slice User Application Slice.
Dec  6 01:40:29 np0005548731 systemd[133968]: Reached target Shutdown.
Dec  6 01:40:29 np0005548731 systemd[133968]: Finished Exit the Session.
Dec  6 01:40:29 np0005548731 systemd[133968]: Reached target Exit the Session.
Dec  6 01:40:29 np0005548731 systemd[1]: user@0.service: Deactivated successfully.
Dec  6 01:40:29 np0005548731 systemd[1]: Stopped User Manager for UID 0.
Dec  6 01:40:29 np0005548731 systemd[1]: Stopping User Runtime Directory /run/user/0...
Dec  6 01:40:29 np0005548731 systemd[1]: run-user-0.mount: Deactivated successfully.
Dec  6 01:40:29 np0005548731 systemd[1]: user-runtime-dir@0.service: Deactivated successfully.
Dec  6 01:40:29 np0005548731 systemd[1]: Stopped User Runtime Directory /run/user/0.
Dec  6 01:40:29 np0005548731 systemd[1]: Removed slice User Slice of UID 0.
Dec  6 01:40:29 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:40:29 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 01:40:29 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:40:29.473 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 01:40:29 np0005548731 python3.9[135236]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  6 01:40:30 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:40:30 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 01:40:30 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:40:30.100 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 01:40:30 np0005548731 python3.9[135393]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Dec  6 01:40:31 np0005548731 ceph-osd[80147]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec  6 01:40:31 np0005548731 ceph-osd[80147]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 602.4 total, 600.0 interval#012Cumulative writes: 4941 writes, 22K keys, 4941 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s#012Cumulative WAL: 4941 writes, 696 syncs, 7.10 writes per sync, written: 0.02 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 4941 writes, 22K keys, 4941 commit groups, 1.0 writes per commit group, ingest: 18.36 MB, 0.03 MB/s#012Interval WAL: 4941 writes, 696 syncs, 7.10 writes per sync, written: 0.02 GB, 0.03 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.42              0.00         1    0.419       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.42              0.00         1    0.419       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) 
Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.42              0.00         1    0.419       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 602.4 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.4 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x5612cf175350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 0.000236 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **

** Compaction Stats [m-0] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-0] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 602.4 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x5612cf175350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 0.000236 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [m-0] **

** Compaction Stats [m-1] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-1] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 602.4 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable
Dec  6 01:40:31 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:40:31 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 01:40:31 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:40:31.477 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 01:40:31 np0005548731 python3.9[135545]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  6 01:40:32 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:40:32 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 01:40:32 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:40:32.102 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 01:40:32 np0005548731 python3.9[135698]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/kill_scripts setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  6 01:40:32 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 01:40:33 np0005548731 python3.9[135900]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/ovn-metadata-proxy setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  6 01:40:33 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:40:33 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 01:40:33 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:40:33.481 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 01:40:33 np0005548731 python3.9[136102]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/external/pids setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  6 01:40:34 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:40:34 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 01:40:34 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:40:34.104 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 01:40:34 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 01:40:34 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 01:40:34 np0005548731 python3.9[136253]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  6 01:40:35 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:40:35 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:40:35 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:40:35.485 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:40:35 np0005548731 python3.9[136405]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Dec  6 01:40:36 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:40:36 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 01:40:36 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:40:36.106 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 01:40:37 np0005548731 python3.9[136556]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/ovn_metadata_haproxy_wrapper follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  6 01:40:37 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:40:37 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:40:37 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:40:37.488 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:40:37 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 01:40:37 np0005548731 python3.9[136678]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/ovn_metadata_haproxy_wrapper mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765003236.534462-225-130552100331506/.source follow=False _original_basename=haproxy.j2 checksum=95c62e64c8f82dd9393a560d1b052dc98d38f810 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  6 01:40:38 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:40:38 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 01:40:38 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:40:38.108 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 01:40:38 np0005548731 python3.9[136829]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/kill_scripts/haproxy-kill follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  6 01:40:39 np0005548731 python3.9[136950]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/kill_scripts/haproxy-kill mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765003238.148239-271-248536617982962/.source follow=False _original_basename=kill-script.j2 checksum=2dfb5489f491f61b95691c3bf95fa1fe48ff3700 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  6 01:40:39 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:40:39 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:40:39 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:40:39.492 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:40:40 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:40:40 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:40:40 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:40:40.110 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:40:40 np0005548731 python3.9[137103]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec  6 01:40:41 np0005548731 python3.9[137187]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec  6 01:40:41 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:40:41 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 01:40:41 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:40:41.495 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 01:40:42 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:40:42 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 01:40:42 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:40:42.112 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 01:40:42 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 01:40:43 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:40:43 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:40:43 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:40:43.499 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:40:43 np0005548731 python3.9[137341]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec  6 01:40:44 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:40:44 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:40:44 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:40:44.114 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:40:45 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:40:45 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:40:45 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:40:45.501 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:40:45 np0005548731 python3.9[137495]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-rootwrap.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  6 01:40:46 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:40:46 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:40:46 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:40:46.116 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:40:46 np0005548731 python3.9[137616]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-rootwrap.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765003245.1202872-381-244836731794381/.source.conf follow=False _original_basename=rootwrap.conf.j2 checksum=11f2cfb4b7d97b2cef3c2c2d88089e6999cffe22 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  6 01:40:46 np0005548731 python3.9[137767]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  6 01:40:47 np0005548731 python3.9[137888]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765003246.286144-381-157921339240062/.source.conf follow=False _original_basename=neutron-ovn-metadata-agent.conf.j2 checksum=8bc979abbe81c2cf3993a225517a7e2483e20443 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  6 01:40:47 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:40:47 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:40:47 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:40:47.505 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:40:47 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 01:40:48 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:40:48 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:40:48 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:40:48.120 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:40:48 np0005548731 python3.9[138039]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/10-neutron-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  6 01:40:48 np0005548731 ovn_controller[133927]: 2025-12-06T06:40:48Z|00025|memory|INFO|16256 kB peak resident set size after 29.8 seconds
Dec  6 01:40:48 np0005548731 ovn_controller[133927]: 2025-12-06T06:40:48Z|00026|memory|INFO|idl-cells-OVN_Southbound:273 idl-cells-Open_vSwitch:642 ofctrl_desired_flow_usage-KB:7 ofctrl_installed_flow_usage-KB:5 ofctrl_sb_flow_ref_usage-KB:3
Dec  6 01:40:48 np0005548731 podman[138119]: 2025-12-06 06:40:48.968472275 +0000 UTC m=+0.128564372 container health_status a118c3becf56cfbcae208065771ebf10a65162bb7b96389971293e534823fb78 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_controller)
Dec  6 01:40:49 np0005548731 python3.9[138178]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/10-neutron-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765003248.1390836-513-116375836914026/.source.conf _original_basename=10-neutron-metadata.conf follow=False checksum=ca7d4d155f5b812fab1a3b70e34adb495d291b8d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  6 01:40:49 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:40:49 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:40:49 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:40:49.509 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:40:49 np0005548731 python3.9[138337]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/05-nova-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  6 01:40:50 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:40:50 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:40:50 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:40:50.122 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:40:50 np0005548731 python3.9[138459]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/05-nova-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765003249.435152-513-40596083497889/.source.conf _original_basename=05-nova-metadata.conf follow=False checksum=a14d6b38898a379cd37fc0bf365d17f10859446f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  6 01:40:51 np0005548731 python3.9[138609]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  6 01:40:51 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:40:51 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:40:51 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:40:51.512 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:40:51 np0005548731 python3.9[138763]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  6 01:40:52 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:40:52 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 01:40:52 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:40:52.124 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 01:40:52 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 01:40:52 np0005548731 python3.9[138916]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  6 01:40:53 np0005548731 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec  6 01:40:53 np0005548731 python3.9[139045]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  6 01:40:53 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:40:53 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 01:40:53 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:40:53.515 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 01:40:54 np0005548731 python3.9[139197]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  6 01:40:54 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:40:54 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 01:40:54 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:40:54.126 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 01:40:54 np0005548731 python3.9[139276]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  6 01:40:55 np0005548731 python3.9[139428]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 01:40:55 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:40:55 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 01:40:55 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:40:55.519 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 01:40:56 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:40:56 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 01:40:56 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:40:56.128 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 01:40:56 np0005548731 python3.9[139581]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  6 01:40:56 np0005548731 python3.9[139659]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 01:40:57 np0005548731 python3.9[139811]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  6 01:40:57 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:40:57 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 01:40:57 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:40:57.522 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 01:40:57 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 01:40:57 np0005548731 python3.9[139889]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 01:40:58 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:40:58 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:40:58 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:40:58.131 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:40:58 np0005548731 python3.9[140042]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  6 01:40:58 np0005548731 systemd[1]: Reloading.
Dec  6 01:40:58 np0005548731 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  6 01:40:58 np0005548731 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  6 01:40:59 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:40:59 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 01:40:59 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:40:59.525 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 01:40:59 np0005548731 python3.9[140230]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  6 01:41:00 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:41:00 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:41:00 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:41:00.133 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:41:00 np0005548731 python3.9[140309]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 01:41:01 np0005548731 python3.9[140461]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  6 01:41:01 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:41:01 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:41:01 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:41:01.528 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:41:01 np0005548731 python3.9[140539]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 01:41:02 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:41:02 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 01:41:02 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:41:02.135 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 01:41:02 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 01:41:03 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:41:03 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 01:41:03 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:41:03.532 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 01:41:04 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:41:04 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 01:41:04 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:41:04.137 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 01:41:05 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:41:05 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 01:41:05 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:41:05.536 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 01:41:05 np0005548731 python3.9[140693]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  6 01:41:05 np0005548731 systemd[1]: Reloading.
Dec  6 01:41:05 np0005548731 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  6 01:41:05 np0005548731 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  6 01:41:06 np0005548731 systemd[1]: Starting Create netns directory...
Dec  6 01:41:06 np0005548731 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Dec  6 01:41:06 np0005548731 systemd[1]: netns-placeholder.service: Deactivated successfully.
Dec  6 01:41:06 np0005548731 systemd[1]: Finished Create netns directory.
Dec  6 01:41:06 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:41:06 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:41:06 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:41:06.140 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:41:07 np0005548731 python3.9[140887]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  6 01:41:07 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:41:07 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:41:07 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:41:07.539 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:41:07 np0005548731 python3.9[141039]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_metadata_agent/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  6 01:41:07 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 01:41:08 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:41:08 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:41:08 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:41:08.140 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:41:08 np0005548731 python3.9[141163]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_metadata_agent/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765003267.245359-966-43593089332470/.source _original_basename=healthcheck follow=False checksum=898a5a1fcd473cf731177fc866e3bd7ebf20a131 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec  6 01:41:09 np0005548731 python3.9[141315]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  6 01:41:09 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:41:09 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:41:09 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:41:09.544 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:41:10 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:41:10 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:41:10 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:41:10.142 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:41:10 np0005548731 python3.9[141467]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_metadata_agent.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  6 01:41:10 np0005548731 python3.9[141591]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_metadata_agent.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1765003269.6495588-1041-58487624324519/.source.json _original_basename=.x0t655vr follow=False checksum=a908ef151ded3a33ae6c9ac8be72a35e5e33b9dc backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 01:41:11 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:41:11 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:41:11 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:41:11.547 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:41:11 np0005548731 python3.9[141743]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 01:41:12 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:41:12 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 01:41:12 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:41:12.145 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 01:41:12 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 01:41:13 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:41:13 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 01:41:13 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:41:13.549 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 01:41:14 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:41:14 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 01:41:14 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:41:14.147 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 01:41:14 np0005548731 python3.9[142221]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_pattern=*.json debug=False
Dec  6 01:41:15 np0005548731 python3.9[142374]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Dec  6 01:41:15 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:41:15 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:41:15 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:41:15.555 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:41:16 np0005548731 python3.9[142526]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Dec  6 01:41:16 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:41:16 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 01:41:16 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:41:16.149 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 01:41:17 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:41:17 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 01:41:17 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:41:17.558 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 01:41:17 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 01:41:17 np0005548731 python3[142705]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_id=ovn_metadata_agent config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Dec  6 01:41:18 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:41:18 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:41:18 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:41:18.151 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:41:19 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:41:19 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:41:19 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:41:19.562 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:41:20 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:41:20 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:41:20 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:41:20.154 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:41:21 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:41:21 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 01:41:21 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:41:21.565 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 01:41:21 np0005548731 podman[142768]: 2025-12-06 06:41:21.571117951 +0000 UTC m=+2.180926387 container health_status a118c3becf56cfbcae208065771ebf10a65162bb7b96389971293e534823fb78 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Dec  6 01:41:22 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:41:22 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:41:22 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:41:22.156 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:41:22 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 01:41:23 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:41:23 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:41:23 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:41:23.568 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:41:24 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:41:24 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 01:41:24 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:41:24.158 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 01:41:25 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:41:25 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:41:25 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:41:25.572 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:41:26 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:41:26 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:41:26 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:41:26.161 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:41:27 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:41:27 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 01:41:27 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:41:27.576 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 01:41:27 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 01:41:28 np0005548731 ceph-mds[82074]: mds.beacon.cephfs.compute-2.tjfgow missed beacon ack from the monitors
Dec  6 01:41:28 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:41:28 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 01:41:28 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:41:28.163 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 01:41:29 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:41:29 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:41:29 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:41:29.580 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:41:30 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:41:30 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 01:41:30 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:41:30.165 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 01:41:31 np0005548731 podman[142719]: 2025-12-06 06:41:31.301386187 +0000 UTC m=+13.337763531 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec  6 01:41:31 np0005548731 podman[142871]: 2025-12-06 06:41:31.479820276 +0000 UTC m=+0.055182364 container create 26c2ce9bdb83c6db5ccad7f5b2a7cb4e9354ab0235ad2f942ea42c8feda2142b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, tcib_managed=true, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Dec  6 01:41:31 np0005548731 podman[142871]: 2025-12-06 06:41:31.450526608 +0000 UTC m=+0.025888716 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec  6 01:41:31 np0005548731 python3[142705]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_metadata_agent --cgroupns=host --conmon-pidfile /run/ovn_metadata_agent.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env EDPM_CONFIG_HASH=0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d --healthcheck-command /openstack/healthcheck --label config_id=ovn_metadata_agent --label container_name=ovn_metadata_agent --label managed_by=edpm_ansible --label config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']} --log-driver journald --log-level info --network host --pid host --privileged=True --user root --volume /run/openvswitch:/run/openvswitch:z --volume /var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z --volume /run/netns:/run/netns:shared --volume /var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/neutron:/var/lib/neutron:shared,z --volume /var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro --volume /var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro --volume /var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z --volume /var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec  6 01:41:31 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:41:31 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:41:31 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:41:31.583 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:41:32 np0005548731 ceph-mds[82074]: mds.beacon.cephfs.compute-2.tjfgow missed beacon ack from the monitors
Dec  6 01:41:32 np0005548731 ceph-mon[77458]: mon.compute-2@1(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec  6 01:41:32 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:41:32 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:41:32 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:41:32.167 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:41:33 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:41:33 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:41:33 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:41:33.588 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:41:34 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:41:34 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 01:41:34 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:41:34.169 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 01:41:35 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:41:35 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:41:35 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:41:35.591 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:41:36 np0005548731 ceph-mds[82074]: mds.beacon.cephfs.compute-2.tjfgow missed beacon ack from the monitors
Dec  6 01:41:36 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:41:36 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000025s ======
Dec  6 01:41:36 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:41:36.171 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Dec  6 01:41:37 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec  6 01:41:37 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:41:37 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:41:37 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:41:37.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:41:37 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 01:41:38 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:41:38 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:41:38 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:41:38.173 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:41:38 np0005548731 ceph-mon[77458]: mon.compute-0 calling monitor election
Dec  6 01:41:38 np0005548731 ceph-mon[77458]: mon.compute-0 is new leader, mons compute-0,compute-2 in quorum (ranks 0,1)
Dec  6 01:41:38 np0005548731 ceph-mon[77458]: Health check failed: 1/3 mons down, quorum compute-0,compute-2 (MON_DOWN)
Dec  6 01:41:38 np0005548731 ceph-mon[77458]: Health detail: HEALTH_WARN 1/3 mons down, quorum compute-0,compute-2
Dec  6 01:41:38 np0005548731 ceph-mon[77458]: [WRN] MON_DOWN: 1/3 mons down, quorum compute-0,compute-2
Dec  6 01:41:38 np0005548731 ceph-mon[77458]:    mon.compute-1 (rank 2) addr [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] is down (out of quorum)
Dec  6 01:41:39 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:41:39 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000025s ======
Dec  6 01:41:39 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:41:39.597 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Dec  6 01:41:40 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:41:40 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:41:40 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:41:40.176 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:41:41 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:41:41 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 01:41:41 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:41:41.601 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 01:41:42 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:41:42 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 01:41:42 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:41:42.178 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 01:41:42 np0005548731 ceph-mon[77458]: log_channel(cluster) log [INF] : mon.compute-2 calling monitor election
Dec  6 01:41:42 np0005548731 ceph-mon[77458]: paxos.1).electionLogic(38) init, last seen epoch 38
Dec  6 01:41:42 np0005548731 ceph-mon[77458]: mon.compute-2@1(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec  6 01:41:42 np0005548731 ceph-mon[77458]: mon.compute-2@1(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec  6 01:41:43 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:41:43 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:41:43 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:41:43.606 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:41:44 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:41:44 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000025s ======
Dec  6 01:41:44 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:41:44.179 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Dec  6 01:41:45 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:41:45 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:41:45 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:41:45.608 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:41:46 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:41:46 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:41:46 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:41:46.180 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:41:46 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec  6 01:41:47 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:41:47 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:41:47 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:41:47.612 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:41:47 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 01:41:48 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:41:48 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:41:48 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:41:48.183 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:41:49 np0005548731 ceph-mon[77458]: mon.compute-2 calling monitor election
Dec  6 01:41:49 np0005548731 ceph-mon[77458]: mon.compute-0 calling monitor election
Dec  6 01:41:49 np0005548731 ceph-mon[77458]: mon.compute-0 is new leader, mons compute-0,compute-2,compute-1 in quorum (ranks 0,1,2)
Dec  6 01:41:49 np0005548731 ceph-mon[77458]: Health check cleared: MON_DOWN (was: 1/3 mons down, quorum compute-0,compute-2)
Dec  6 01:41:49 np0005548731 ceph-mon[77458]: Cluster is now healthy
Dec  6 01:41:49 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:41:49 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:41:49 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:41:49.614 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:41:50 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:41:50 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000025s ======
Dec  6 01:41:50 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:41:50.185 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Dec  6 01:41:51 np0005548731 ceph-mon[77458]: mon.compute-1 calling monitor election
Dec  6 01:41:51 np0005548731 ceph-mon[77458]: overall HEALTH_OK
Dec  6 01:41:51 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  6 01:41:51 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 01:41:51 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  6 01:41:51 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:41:51 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000025s ======
Dec  6 01:41:51 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:41:51.616 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Dec  6 01:41:52 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:41:52 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:41:52 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:41:52.188 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:41:52 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 01:41:53 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:41:53 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:41:53 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:41:53.621 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:41:54 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:41:54 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000025s ======
Dec  6 01:41:54 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:41:54.191 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Dec  6 01:41:54 np0005548731 python3.9[143252]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  6 01:41:55 np0005548731 python3.9[143458]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 01:41:55 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:41:55 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:41:55 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:41:55.625 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:41:55 np0005548731 python3.9[143534]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  6 01:41:56 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:41:56 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:41:56 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:41:56.193 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:41:56 np0005548731 python3.9[143689]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1765003315.861292-1305-162423207739851/source dest=/etc/systemd/system/edpm_ovn_metadata_agent.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 01:41:57 np0005548731 python3.9[143765]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec  6 01:41:57 np0005548731 systemd[1]: Reloading.
Dec  6 01:41:57 np0005548731 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  6 01:41:57 np0005548731 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  6 01:41:57 np0005548731 podman[143767]: 2025-12-06 06:41:57.301764078 +0000 UTC m=+0.136405482 container health_status a118c3becf56cfbcae208065771ebf10a65162bb7b96389971293e534823fb78 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  6 01:41:57 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:41:57 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:41:57 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:41:57.627 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:41:57 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 01:41:58 np0005548731 python3.9[143902]: ansible-systemd Invoked with state=restarted name=edpm_ovn_metadata_agent.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  6 01:41:58 np0005548731 systemd[1]: Reloading.
Dec  6 01:41:58 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:41:58 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000025s ======
Dec  6 01:41:58 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:41:58.195 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Dec  6 01:41:58 np0005548731 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  6 01:41:58 np0005548731 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  6 01:41:58 np0005548731 systemd[1]: Starting ovn_metadata_agent container...
Dec  6 01:41:58 np0005548731 systemd[1]: Started libcrun container.
Dec  6 01:41:58 np0005548731 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0cbd065051e4096b88616a29ba7cd4468059e104806fd1bcc7b9cade90d6d5a5/merged/etc/neutron.conf.d supports timestamps until 2038 (0x7fffffff)
Dec  6 01:41:58 np0005548731 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0cbd065051e4096b88616a29ba7cd4468059e104806fd1bcc7b9cade90d6d5a5/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec  6 01:41:58 np0005548731 systemd[1]: Started /usr/bin/podman healthcheck run 26c2ce9bdb83c6db5ccad7f5b2a7cb4e9354ab0235ad2f942ea42c8feda2142b.
Dec  6 01:41:58 np0005548731 podman[143944]: 2025-12-06 06:41:58.614704314 +0000 UTC m=+0.140574817 container init 26c2ce9bdb83c6db5ccad7f5b2a7cb4e9354ab0235ad2f942ea42c8feda2142b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Dec  6 01:41:58 np0005548731 ovn_metadata_agent[143960]: + sudo -E kolla_set_configs
Dec  6 01:41:58 np0005548731 podman[143944]: 2025-12-06 06:41:58.639792983 +0000 UTC m=+0.165663466 container start 26c2ce9bdb83c6db5ccad7f5b2a7cb4e9354ab0235ad2f942ea42c8feda2142b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Dec  6 01:41:58 np0005548731 edpm-start-podman-container[143944]: ovn_metadata_agent
Dec  6 01:41:58 np0005548731 ovn_metadata_agent[143960]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Dec  6 01:41:58 np0005548731 ovn_metadata_agent[143960]: INFO:__main__:Validating config file
Dec  6 01:41:58 np0005548731 ovn_metadata_agent[143960]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Dec  6 01:41:58 np0005548731 ovn_metadata_agent[143960]: INFO:__main__:Copying service configuration files
Dec  6 01:41:58 np0005548731 ovn_metadata_agent[143960]: INFO:__main__:Deleting /etc/neutron/rootwrap.conf
Dec  6 01:41:58 np0005548731 ovn_metadata_agent[143960]: INFO:__main__:Copying /etc/neutron.conf.d/01-rootwrap.conf to /etc/neutron/rootwrap.conf
Dec  6 01:41:58 np0005548731 ovn_metadata_agent[143960]: INFO:__main__:Setting permission for /etc/neutron/rootwrap.conf
Dec  6 01:41:58 np0005548731 ovn_metadata_agent[143960]: INFO:__main__:Writing out command to execute
Dec  6 01:41:58 np0005548731 ovn_metadata_agent[143960]: INFO:__main__:Setting permission for /var/lib/neutron
Dec  6 01:41:58 np0005548731 ovn_metadata_agent[143960]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts
Dec  6 01:41:58 np0005548731 ovn_metadata_agent[143960]: INFO:__main__:Setting permission for /var/lib/neutron/ovn-metadata-proxy
Dec  6 01:41:58 np0005548731 ovn_metadata_agent[143960]: INFO:__main__:Setting permission for /var/lib/neutron/external
Dec  6 01:41:58 np0005548731 ovn_metadata_agent[143960]: INFO:__main__:Setting permission for /var/lib/neutron/ovn_metadata_haproxy_wrapper
Dec  6 01:41:58 np0005548731 ovn_metadata_agent[143960]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts/haproxy-kill
Dec  6 01:41:58 np0005548731 ovn_metadata_agent[143960]: INFO:__main__:Setting permission for /var/lib/neutron/external/pids
Dec  6 01:41:58 np0005548731 podman[143967]: 2025-12-06 06:41:58.709375033 +0000 UTC m=+0.055857642 container health_status 26c2ce9bdb83c6db5ccad7f5b2a7cb4e9354ab0235ad2f942ea42c8feda2142b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125)
Dec  6 01:41:58 np0005548731 edpm-start-podman-container[143943]: Creating additional drop-in dependency for "ovn_metadata_agent" (26c2ce9bdb83c6db5ccad7f5b2a7cb4e9354ab0235ad2f942ea42c8feda2142b)
Dec  6 01:41:58 np0005548731 ovn_metadata_agent[143960]: ++ cat /run_command
Dec  6 01:41:58 np0005548731 ovn_metadata_agent[143960]: + CMD=neutron-ovn-metadata-agent
Dec  6 01:41:58 np0005548731 ovn_metadata_agent[143960]: + ARGS=
Dec  6 01:41:58 np0005548731 ovn_metadata_agent[143960]: + sudo kolla_copy_cacerts
Dec  6 01:41:58 np0005548731 systemd[1]: Reloading.
Dec  6 01:41:58 np0005548731 ovn_metadata_agent[143960]: + [[ ! -n '' ]]
Dec  6 01:41:58 np0005548731 ovn_metadata_agent[143960]: + . kolla_extend_start
Dec  6 01:41:58 np0005548731 ovn_metadata_agent[143960]: + echo 'Running command: '\''neutron-ovn-metadata-agent'\'''
Dec  6 01:41:58 np0005548731 ovn_metadata_agent[143960]: Running command: 'neutron-ovn-metadata-agent'
Dec  6 01:41:58 np0005548731 ovn_metadata_agent[143960]: + umask 0022
Dec  6 01:41:58 np0005548731 ovn_metadata_agent[143960]: + exec neutron-ovn-metadata-agent
Dec  6 01:41:58 np0005548731 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  6 01:41:58 np0005548731 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  6 01:41:59 np0005548731 systemd[1]: Started ovn_metadata_agent container.
Dec  6 01:41:59 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:41:59 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:41:59 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:41:59.631 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:42:00 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:42:00 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:42:00 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:42:00.198 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:42:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:00.791 143965 INFO neutron.common.config [-] Logging enabled!#033[00m
Dec  6 01:42:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:00.792 143965 INFO neutron.common.config [-] /usr/bin/neutron-ovn-metadata-agent version 22.2.2.dev43#033[00m
Dec  6 01:42:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:00.792 143965 DEBUG neutron.common.config [-] command line: /usr/bin/neutron-ovn-metadata-agent setup_logging /usr/lib/python3.9/site-packages/neutron/common/config.py:123#033[00m
Dec  6 01:42:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:00.792 143965 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589#033[00m
Dec  6 01:42:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:00.793 143965 DEBUG neutron.agent.ovn.metadata_agent [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590#033[00m
Dec  6 01:42:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:00.793 143965 DEBUG neutron.agent.ovn.metadata_agent [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591#033[00m
Dec  6 01:42:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:00.793 143965 DEBUG neutron.agent.ovn.metadata_agent [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592#033[00m
Dec  6 01:42:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:00.793 143965 DEBUG neutron.agent.ovn.metadata_agent [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594#033[00m
Dec  6 01:42:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:00.793 143965 DEBUG neutron.agent.ovn.metadata_agent [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:42:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:00.793 143965 DEBUG neutron.agent.ovn.metadata_agent [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:42:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:00.793 143965 DEBUG neutron.agent.ovn.metadata_agent [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:42:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:00.793 143965 DEBUG neutron.agent.ovn.metadata_agent [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:42:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:00.793 143965 DEBUG neutron.agent.ovn.metadata_agent [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:42:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:00.794 143965 DEBUG neutron.agent.ovn.metadata_agent [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:42:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:00.794 143965 DEBUG neutron.agent.ovn.metadata_agent [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:42:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:00.794 143965 DEBUG neutron.agent.ovn.metadata_agent [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:42:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:00.794 143965 DEBUG neutron.agent.ovn.metadata_agent [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:42:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:00.794 143965 DEBUG neutron.agent.ovn.metadata_agent [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:42:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:00.794 143965 DEBUG neutron.agent.ovn.metadata_agent [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:42:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:00.794 143965 DEBUG neutron.agent.ovn.metadata_agent [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:42:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:00.795 143965 DEBUG neutron.agent.ovn.metadata_agent [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:42:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:00.795 143965 DEBUG neutron.agent.ovn.metadata_agent [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:42:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:00.795 143965 DEBUG neutron.agent.ovn.metadata_agent [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:42:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:00.795 143965 DEBUG neutron.agent.ovn.metadata_agent [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:42:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:00.795 143965 DEBUG neutron.agent.ovn.metadata_agent [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:42:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:00.795 143965 DEBUG neutron.agent.ovn.metadata_agent [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:42:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:00.795 143965 DEBUG neutron.agent.ovn.metadata_agent [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:42:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:00.795 143965 DEBUG neutron.agent.ovn.metadata_agent [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:42:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:00.796 143965 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:42:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:00.796 143965 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:42:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:00.796 143965 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:42:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:00.796 143965 DEBUG neutron.agent.ovn.metadata_agent [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:42:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:00.796 143965 DEBUG neutron.agent.ovn.metadata_agent [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:42:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:00.796 143965 DEBUG neutron.agent.ovn.metadata_agent [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:42:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:00.796 143965 DEBUG neutron.agent.ovn.metadata_agent [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:42:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:00.796 143965 DEBUG neutron.agent.ovn.metadata_agent [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:42:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:00.797 143965 DEBUG neutron.agent.ovn.metadata_agent [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:42:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:00.797 143965 DEBUG neutron.agent.ovn.metadata_agent [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:42:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:00.797 143965 DEBUG neutron.agent.ovn.metadata_agent [-] host                           = compute-2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:42:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:00.797 143965 DEBUG neutron.agent.ovn.metadata_agent [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:42:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:00.797 143965 DEBUG neutron.agent.ovn.metadata_agent [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:42:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:00.797 143965 DEBUG neutron.agent.ovn.metadata_agent [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:42:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:00.797 143965 DEBUG neutron.agent.ovn.metadata_agent [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:42:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:00.797 143965 DEBUG neutron.agent.ovn.metadata_agent [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:42:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:00.797 143965 DEBUG neutron.agent.ovn.metadata_agent [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:42:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:00.798 143965 DEBUG neutron.agent.ovn.metadata_agent [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:42:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:00.798 143965 DEBUG neutron.agent.ovn.metadata_agent [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:42:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:00.798 143965 DEBUG neutron.agent.ovn.metadata_agent [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:42:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:00.798 143965 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:42:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:00.798 143965 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:42:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:00.798 143965 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:42:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:00.798 143965 DEBUG neutron.agent.ovn.metadata_agent [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:42:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:00.798 143965 DEBUG neutron.agent.ovn.metadata_agent [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:42:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:00.798 143965 DEBUG neutron.agent.ovn.metadata_agent [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:42:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:00.798 143965 DEBUG neutron.agent.ovn.metadata_agent [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:42:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:00.799 143965 DEBUG neutron.agent.ovn.metadata_agent [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:42:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:00.799 143965 DEBUG neutron.agent.ovn.metadata_agent [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:42:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:00.799 143965 DEBUG neutron.agent.ovn.metadata_agent [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:42:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:00.799 143965 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:42:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:00.799 143965 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:42:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:00.799 143965 DEBUG neutron.agent.ovn.metadata_agent [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:42:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:00.799 143965 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:42:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:00.799 143965 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:42:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:00.799 143965 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:42:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:00.799 143965 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:42:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:00.800 143965 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:42:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:00.800 143965 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:42:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:00.800 143965 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:42:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:00.800 143965 DEBUG neutron.agent.ovn.metadata_agent [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:42:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:00.800 143965 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:42:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:00.800 143965 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:42:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:00.800 143965 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:42:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:00.800 143965 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:42:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:00.801 143965 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:42:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:00.801 143965 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:42:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:00.801 143965 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:42:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:00.801 143965 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_protocol         = https log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:42:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:00.801 143965 DEBUG neutron.agent.ovn.metadata_agent [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:42:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:00.801 143965 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:42:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:00.801 143965 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:42:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:00.801 143965 DEBUG neutron.agent.ovn.metadata_agent [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:42:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:00.801 143965 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:42:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:00.802 143965 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:42:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:00.802 143965 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:42:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:00.802 143965 DEBUG neutron.agent.ovn.metadata_agent [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:42:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:00.802 143965 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:42:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:00.802 143965 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:42:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:00.802 143965 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:42:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:00.802 143965 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:42:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:00.802 143965 DEBUG neutron.agent.ovn.metadata_agent [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:42:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:00.802 143965 DEBUG neutron.agent.ovn.metadata_agent [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:42:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:00.803 143965 DEBUG neutron.agent.ovn.metadata_agent [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:42:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:00.803 143965 DEBUG neutron.agent.ovn.metadata_agent [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:42:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:00.803 143965 DEBUG neutron.agent.ovn.metadata_agent [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:42:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:00.803 143965 DEBUG neutron.agent.ovn.metadata_agent [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:42:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:00.803 143965 DEBUG neutron.agent.ovn.metadata_agent [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:42:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:00.803 143965 DEBUG neutron.agent.ovn.metadata_agent [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:42:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:00.803 143965 DEBUG neutron.agent.ovn.metadata_agent [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:42:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:00.803 143965 DEBUG neutron.agent.ovn.metadata_agent [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:42:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:00.803 143965 DEBUG neutron.agent.ovn.metadata_agent [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:42:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:00.803 143965 DEBUG neutron.agent.ovn.metadata_agent [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:42:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:00.803 143965 DEBUG neutron.agent.ovn.metadata_agent [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:42:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:00.804 143965 DEBUG neutron.agent.ovn.metadata_agent [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:42:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:00.804 143965 DEBUG neutron.agent.ovn.metadata_agent [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:42:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:00.804 143965 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:42:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:00.804 143965 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:42:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:00.804 143965 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:42:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:00.804 143965 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:42:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:00.804 143965 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:42:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:00.804 143965 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:42:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:00.804 143965 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:42:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:00.805 143965 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:42:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:00.805 143965 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:42:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:00.805 143965 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:42:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:00.805 143965 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:42:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:00.805 143965 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:42:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:00.805 143965 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:42:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:00.805 143965 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:42:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:00.805 143965 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:42:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:00.805 143965 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:42:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:00.806 143965 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:42:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:00.806 143965 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:42:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:00.806 143965 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:42:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:00.806 143965 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:42:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:00.806 143965 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:42:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:00.806 143965 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:42:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:00.806 143965 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:42:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:00.806 143965 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:42:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:00.806 143965 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:42:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:00.807 143965 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:42:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:00.807 143965 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:42:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:00.807 143965 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:42:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:00.807 143965 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:42:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:00.807 143965 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:42:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:00.807 143965 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:42:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:00.807 143965 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:42:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:00.807 143965 DEBUG neutron.agent.ovn.metadata_agent [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:42:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:00.807 143965 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:42:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:00.808 143965 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:42:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:00.808 143965 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:42:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:00.808 143965 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:42:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:00.808 143965 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.thread_pool_size       = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:42:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:00.808 143965 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:42:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:00.808 143965 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:42:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:00.808 143965 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:42:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:00.809 143965 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:42:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:00.809 143965 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:42:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:00.809 143965 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:42:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:00.809 143965 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:42:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:00.809 143965 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:42:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:00.809 143965 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:42:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:00.809 143965 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:42:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:00.809 143965 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:42:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:00.809 143965 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:42:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:00.810 143965 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:42:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:00.810 143965 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:42:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:00.810 143965 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:42:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:00.810 143965 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:42:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:00.810 143965 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:42:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:00.810 143965 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:42:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:00.810 143965 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:42:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:00.810 143965 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:42:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:00.810 143965 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:42:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:00.810 143965 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:42:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:00.811 143965 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:42:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:00.811 143965 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:42:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:00.811 143965 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:42:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:00.811 143965 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:42:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:00.811 143965 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:42:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:00.811 143965 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:42:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:00.811 143965 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:42:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:00.811 143965 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.thread_pool_size  = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:42:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:00.811 143965 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:42:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:00.812 143965 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:42:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:00.812 143965 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:42:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:00.812 143965 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:42:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:00.812 143965 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:42:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:00.812 143965 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:42:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:00.812 143965 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:42:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:00.812 143965 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:42:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:00.812 143965 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:42:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:00.812 143965 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:42:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:00.812 143965 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:42:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:00.813 143965 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:42:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:00.813 143965 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:42:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:00.813 143965 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:42:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:00.813 143965 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:42:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:00.813 143965 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:42:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:00.813 143965 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:42:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:00.813 143965 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:42:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:00.813 143965 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:42:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:00.813 143965 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:42:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:00.814 143965 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:42:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:00.814 143965 DEBUG neutron.agent.ovn.metadata_agent [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:42:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:00.814 143965 DEBUG neutron.agent.ovn.metadata_agent [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:42:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:00.814 143965 DEBUG neutron.agent.ovn.metadata_agent [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:42:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:00.814 143965 DEBUG neutron.agent.ovn.metadata_agent [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:42:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:00.814 143965 DEBUG neutron.agent.ovn.metadata_agent [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:42:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:00.814 143965 DEBUG neutron.agent.ovn.metadata_agent [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:42:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:00.814 143965 DEBUG neutron.agent.ovn.metadata_agent [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:42:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:00.814 143965 DEBUG neutron.agent.ovn.metadata_agent [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:42:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:00.815 143965 DEBUG neutron.agent.ovn.metadata_agent [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:42:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:00.815 143965 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:42:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:00.815 143965 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:42:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:00.815 143965 DEBUG neutron.agent.ovn.metadata_agent [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:42:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:00.815 143965 DEBUG neutron.agent.ovn.metadata_agent [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:42:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:00.815 143965 DEBUG neutron.agent.ovn.metadata_agent [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:42:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:00.815 143965 DEBUG neutron.agent.ovn.metadata_agent [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:42:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:00.815 143965 DEBUG neutron.agent.ovn.metadata_agent [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:42:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:00.815 143965 DEBUG neutron.agent.ovn.metadata_agent [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:42:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:00.815 143965 DEBUG neutron.agent.ovn.metadata_agent [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:42:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:00.816 143965 DEBUG neutron.agent.ovn.metadata_agent [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:42:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:00.816 143965 DEBUG neutron.agent.ovn.metadata_agent [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:42:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:00.816 143965 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:42:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:00.816 143965 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:42:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:00.816 143965 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:42:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:00.816 143965 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:42:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:00.816 143965 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:42:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:00.816 143965 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:42:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:00.816 143965 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:42:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:00.816 143965 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:42:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:00.817 143965 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:42:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:00.817 143965 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:42:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:00.817 143965 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:42:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:00.817 143965 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:42:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:00.817 143965 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:42:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:00.817 143965 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:42:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:00.817 143965 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:42:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:00.817 143965 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:42:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:00.817 143965 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:42:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:00.818 143965 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:42:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:00.818 143965 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:42:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:00.818 143965 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:42:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:00.818 143965 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:42:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:00.818 143965 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:42:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:00.818 143965 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:42:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:00.818 143965 DEBUG neutron.agent.ovn.metadata_agent [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:42:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:00.818 143965 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:42:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:00.818 143965 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:42:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:00.819 143965 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:42:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:00.819 143965 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:42:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:00.819 143965 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:42:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:00.819 143965 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:42:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:00.819 143965 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:42:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:00.819 143965 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:42:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:00.819 143965 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:42:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:00.819 143965 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:42:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:00.819 143965 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:42:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:00.820 143965 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:42:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:00.820 143965 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:42:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:00.820 143965 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:42:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:00.820 143965 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:42:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:00.820 143965 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:42:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:00.820 143965 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_ca_cert             = /etc/pki/tls/certs/ovndbca.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:42:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:00.820 143965 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_certificate         = /etc/pki/tls/certs/ovndb.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:42:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:00.820 143965 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_connection          = ssl:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:42:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:00.820 143965 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_private_key         = /etc/pki/tls/private/ovndb.key log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:42:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:00.821 143965 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:42:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:00.821 143965 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:42:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:00.821 143965 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:42:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:00.821 143965 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:42:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:00.821 143965 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:42:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:00.821 143965 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:42:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:00.821 143965 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:42:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:00.821 143965 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:42:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:00.821 143965 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:42:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:00.822 143965 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:42:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:00.822 143965 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:42:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:00.822 143965 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:42:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:00.822 143965 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:42:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:00.822 143965 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:42:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:00.822 143965 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:42:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:00.822 143965 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:42:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:00.822 143965 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:42:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:00.822 143965 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:42:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:00.823 143965 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:42:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:00.823 143965 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:42:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:00.823 143965 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:42:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:00.823 143965 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:42:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:00.823 143965 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:42:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:00.823 143965 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:42:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:00.823 143965 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:42:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:00.824 143965 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:42:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:00.824 143965 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:42:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:00.824 143965 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:42:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:00.824 143965 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:42:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:00.824 143965 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:42:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:00.824 143965 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:42:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:00.824 143965 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:42:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:00.824 143965 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:42:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:00.824 143965 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:42:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:00.825 143965 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:42:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:00.825 143965 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:42:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:00.825 143965 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:42:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:00.825 143965 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:42:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:00.825 143965 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:42:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:00.825 143965 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:42:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:00.825 143965 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:42:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:00.825 143965 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:42:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:00.825 143965 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:42:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:00.826 143965 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:42:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:00.826 143965 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:42:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:00.826 143965 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:42:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:00.826 143965 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613#033[00m
Dec  6 01:42:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:00.835 143965 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Bridge.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Dec  6 01:42:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:00.835 143965 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Port.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Dec  6 01:42:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:00.836 143965 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Interface.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Dec  6 01:42:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:00.836 143965 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connecting...#033[00m
Dec  6 01:42:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:00.836 143965 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connected#033[00m
Dec  6 01:42:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:00.852 143965 DEBUG neutron.agent.ovn.metadata.agent [-] Loaded chassis name 9f96b960-b4f2-40bd-ae99-08121f5e8b78 (UUID: 9f96b960-b4f2-40bd-ae99-08121f5e8b78) and ovn bridge br-int. _load_config /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:309#033[00m
Dec  6 01:42:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:00.880 143965 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry#033[00m
Dec  6 01:42:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:00.881 143965 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87#033[00m
Dec  6 01:42:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:00.881 143965 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Dec  6 01:42:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:00.881 143965 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Chassis_Private.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Dec  6 01:42:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:00.885 143965 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connecting...#033[00m
Dec  6 01:42:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:00.891 143965 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connected#033[00m
Dec  6 01:42:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:00.896 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched CREATE: ChassisPrivateCreateEvent(events=('create',), table='Chassis_Private', conditions=(('name', '=', '9f96b960-b4f2-40bd-ae99-08121f5e8b78'),), old_conditions=None), priority=20 to row=Chassis_Private(chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], external_ids={}, name=9f96b960-b4f2-40bd-ae99-08121f5e8b78, nb_cfg_timestamp=1765003227142, nb_cfg=1) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 01:42:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:00.897 143965 DEBUG neutron_lib.callbacks.manager [-] Subscribe: <bound method MetadataProxyHandler.post_fork_initialize of <neutron.agent.ovn.metadata.server.MetadataProxyHandler object at 0x7f6d8df75f70>> process after_init 55550000, False subscribe /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:52#033[00m
Dec  6 01:42:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:00.898 143965 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 01:42:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:00.898 143965 DEBUG oslo_concurrency.lockutils [-] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 01:42:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:00.899 143965 DEBUG oslo_concurrency.lockutils [-] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 01:42:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:00.899 143965 INFO oslo_service.service [-] Starting 1 workers#033[00m
Dec  6 01:42:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:00.904 143965 DEBUG oslo_service.service [-] Started child 144074 _start_child /usr/lib/python3.9/site-packages/oslo_service/service.py:575#033[00m
Dec  6 01:42:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:00.907 143965 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.namespace_cmd', '--privsep_sock_path', '/tmp/tmpuqibh07n/privsep.sock']#033[00m
Dec  6 01:42:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:00.911 144074 DEBUG neutron_lib.callbacks.manager [-] Publish callbacks ['neutron.agent.ovn.metadata.server.MetadataProxyHandler.post_fork_initialize-4098015'] for process (None), after_init _notify_loop /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:184#033[00m
Dec  6 01:42:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:00.944 144074 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry#033[00m
Dec  6 01:42:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:00.945 144074 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87#033[00m
Dec  6 01:42:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:00.945 144074 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Dec  6 01:42:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:00.948 144074 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connecting...#033[00m
Dec  6 01:42:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:00.955 144074 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connected#033[00m
Dec  6 01:42:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:00.962 144074 INFO eventlet.wsgi.server [-] (144074) wsgi starting up on http:/var/lib/neutron/metadata_proxy#033[00m
Dec  6 01:42:01 np0005548731 kernel: capability: warning: `privsep-helper' uses deprecated v2 capabilities in a way that may be insecure
Dec  6 01:42:01 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:42:01 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:42:01 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:42:01.635 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:42:01 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:01.666 143965 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap#033[00m
Dec  6 01:42:01 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:01.667 143965 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmpuqibh07n/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362#033[00m
Dec  6 01:42:01 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:01.519 144079 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m
Dec  6 01:42:01 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:01.527 144079 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m
Dec  6 01:42:01 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:01.531 144079 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_SYS_ADMIN/CAP_SYS_ADMIN/none#033[00m
Dec  6 01:42:01 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:01.531 144079 INFO oslo.privsep.daemon [-] privsep daemon running as pid 144079#033[00m
Dec  6 01:42:01 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:01.669 144079 DEBUG oslo.privsep.daemon [-] privsep: reply[0ac2828f-7d76-4ca1-a341-3a10d386f706]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 01:42:02 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:42:02 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000025s ======
Dec  6 01:42:02 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:42:02.200 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Dec  6 01:42:02 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:02.245 144079 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 01:42:02 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:02.246 144079 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 01:42:02 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:02.246 144079 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 01:42:02 np0005548731 systemd[1]: session-47.scope: Deactivated successfully.
Dec  6 01:42:02 np0005548731 systemd[1]: session-47.scope: Consumed 1min 2.756s CPU time.
Dec  6 01:42:02 np0005548731 systemd-logind[794]: Session 47 logged out. Waiting for processes to exit.
Dec  6 01:42:02 np0005548731 systemd-logind[794]: Removed session 47.
Dec  6 01:42:02 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 01:42:02 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:02.863 144079 DEBUG oslo.privsep.daemon [-] privsep: reply[7e515b59-c7e5-45dd-89c8-59a3ec9b5bfd]: (4, []) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 01:42:02 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:02.866 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbAddCommand(_result=None, table=Chassis_Private, record=9f96b960-b4f2-40bd-ae99-08121f5e8b78, column=external_ids, values=({'neutron:ovn-metadata-id': '4e41dd4c-6029-54bc-9dbc-84b2a9bbfccc'},)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 01:42:02 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:02.876 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=9f96b960-b4f2-40bd-ae99-08121f5e8b78, col_values=(('external_ids', {'neutron:ovn-bridge': 'br-int'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 01:42:02 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:02.884 143965 DEBUG oslo_service.service [-] Full set of CONF: wait /usr/lib/python3.9/site-packages/oslo_service/service.py:649#033[00m
Dec  6 01:42:02 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:02.885 143965 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589#033[00m
Dec  6 01:42:02 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:02.885 143965 DEBUG oslo_service.service [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590#033[00m
Dec  6 01:42:02 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:02.885 143965 DEBUG oslo_service.service [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591#033[00m
Dec  6 01:42:02 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:02.885 143965 DEBUG oslo_service.service [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592#033[00m
Dec  6 01:42:02 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:02.885 143965 DEBUG oslo_service.service [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594#033[00m
Dec  6 01:42:02 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:02.885 143965 DEBUG oslo_service.service [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:42:02 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:02.888 143965 DEBUG oslo_service.service [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:42:02 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:02.889 143965 DEBUG oslo_service.service [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:42:02 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:02.889 143965 DEBUG oslo_service.service [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:42:02 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:02.889 143965 DEBUG oslo_service.service [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:42:02 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:02.889 143965 DEBUG oslo_service.service [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:42:02 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:02.889 143965 DEBUG oslo_service.service [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:42:02 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:02.890 143965 DEBUG oslo_service.service [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:42:02 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:02.890 143965 DEBUG oslo_service.service [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:42:02 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:02.890 143965 DEBUG oslo_service.service [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:42:02 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:02.890 143965 DEBUG oslo_service.service [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:42:02 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:02.890 143965 DEBUG oslo_service.service [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:42:02 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:02.890 143965 DEBUG oslo_service.service [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:42:02 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:02.891 143965 DEBUG oslo_service.service [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:42:02 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:02.891 143965 DEBUG oslo_service.service [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:42:02 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:02.891 143965 DEBUG oslo_service.service [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:42:02 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:02.891 143965 DEBUG oslo_service.service [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:42:02 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:02.891 143965 DEBUG oslo_service.service [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:42:02 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:02.891 143965 DEBUG oslo_service.service [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:42:02 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:02.892 143965 DEBUG oslo_service.service [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:42:02 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:02.892 143965 DEBUG oslo_service.service [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:42:02 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:02.892 143965 DEBUG oslo_service.service [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:42:02 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:02.892 143965 DEBUG oslo_service.service [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:42:02 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:02.892 143965 DEBUG oslo_service.service [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:42:02 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:02.892 143965 DEBUG oslo_service.service [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:42:02 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:02.892 143965 DEBUG oslo_service.service [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:42:02 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:02.893 143965 DEBUG oslo_service.service [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:42:02 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:02.893 143965 DEBUG oslo_service.service [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  6 01:42:02 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:02.893 143965 DEBUG oslo_service.service [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  6 01:42:02 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:02.893 143965 DEBUG oslo_service.service [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  6 01:42:02 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:02.893 143965 DEBUG oslo_service.service [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  6 01:42:02 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:02.894 143965 DEBUG oslo_service.service [-] host                           = compute-2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  6 01:42:02 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:02.894 143965 DEBUG oslo_service.service [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  6 01:42:02 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:02.894 143965 DEBUG oslo_service.service [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  6 01:42:02 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:02.894 143965 DEBUG oslo_service.service [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  6 01:42:02 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:02.894 143965 DEBUG oslo_service.service [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  6 01:42:02 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:02.894 143965 DEBUG oslo_service.service [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  6 01:42:02 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:02.894 143965 DEBUG oslo_service.service [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  6 01:42:02 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:02.894 143965 DEBUG oslo_service.service [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  6 01:42:02 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:02.895 143965 DEBUG oslo_service.service [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  6 01:42:02 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:02.895 143965 DEBUG oslo_service.service [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  6 01:42:02 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:02.895 143965 DEBUG oslo_service.service [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  6 01:42:02 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:02.895 143965 DEBUG oslo_service.service [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  6 01:42:02 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:02.895 143965 DEBUG oslo_service.service [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  6 01:42:02 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:02.895 143965 DEBUG oslo_service.service [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  6 01:42:02 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:02.896 143965 DEBUG oslo_service.service [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  6 01:42:02 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:02.896 143965 DEBUG oslo_service.service [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  6 01:42:02 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:02.896 143965 DEBUG oslo_service.service [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  6 01:42:02 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:02.896 143965 DEBUG oslo_service.service [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  6 01:42:02 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:02.896 143965 DEBUG oslo_service.service [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  6 01:42:02 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:02.896 143965 DEBUG oslo_service.service [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  6 01:42:02 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:02.896 143965 DEBUG oslo_service.service [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  6 01:42:02 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:02.896 143965 DEBUG oslo_service.service [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  6 01:42:02 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:02.897 143965 DEBUG oslo_service.service [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  6 01:42:02 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:02.897 143965 DEBUG oslo_service.service [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  6 01:42:02 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:02.897 143965 DEBUG oslo_service.service [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  6 01:42:02 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:02.897 143965 DEBUG oslo_service.service [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  6 01:42:02 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:02.897 143965 DEBUG oslo_service.service [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  6 01:42:02 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:02.897 143965 DEBUG oslo_service.service [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  6 01:42:02 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:02.897 143965 DEBUG oslo_service.service [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  6 01:42:02 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:02.898 143965 DEBUG oslo_service.service [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  6 01:42:02 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:02.898 143965 DEBUG oslo_service.service [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  6 01:42:02 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:02.898 143965 DEBUG oslo_service.service [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  6 01:42:02 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:02.898 143965 DEBUG oslo_service.service [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  6 01:42:02 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:02.898 143965 DEBUG oslo_service.service [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  6 01:42:02 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:02.898 143965 DEBUG oslo_service.service [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  6 01:42:02 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:02.898 143965 DEBUG oslo_service.service [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  6 01:42:02 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:02.898 143965 DEBUG oslo_service.service [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  6 01:42:02 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:02.899 143965 DEBUG oslo_service.service [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  6 01:42:02 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:02.899 143965 DEBUG oslo_service.service [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  6 01:42:02 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:02.899 143965 DEBUG oslo_service.service [-] nova_metadata_protocol         = https log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  6 01:42:02 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:02.899 143965 DEBUG oslo_service.service [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  6 01:42:02 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:02.899 143965 DEBUG oslo_service.service [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  6 01:42:02 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:02.899 143965 DEBUG oslo_service.service [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  6 01:42:02 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:02.899 143965 DEBUG oslo_service.service [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  6 01:42:02 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:02.900 143965 DEBUG oslo_service.service [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  6 01:42:02 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:02.900 143965 DEBUG oslo_service.service [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  6 01:42:02 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:02.900 143965 DEBUG oslo_service.service [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  6 01:42:02 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:02.900 143965 DEBUG oslo_service.service [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  6 01:42:02 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:02.900 143965 DEBUG oslo_service.service [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  6 01:42:02 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:02.900 143965 DEBUG oslo_service.service [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  6 01:42:02 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:02.900 143965 DEBUG oslo_service.service [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  6 01:42:02 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:02.900 143965 DEBUG oslo_service.service [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  6 01:42:02 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:02.901 143965 DEBUG oslo_service.service [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  6 01:42:02 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:02.901 143965 DEBUG oslo_service.service [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  6 01:42:02 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:02.901 143965 DEBUG oslo_service.service [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  6 01:42:02 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:02.901 143965 DEBUG oslo_service.service [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  6 01:42:02 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:02.901 143965 DEBUG oslo_service.service [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  6 01:42:02 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:02.901 143965 DEBUG oslo_service.service [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  6 01:42:02 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:02.901 143965 DEBUG oslo_service.service [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  6 01:42:02 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:02.901 143965 DEBUG oslo_service.service [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  6 01:42:02 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:02.902 143965 DEBUG oslo_service.service [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  6 01:42:02 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:02.902 143965 DEBUG oslo_service.service [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  6 01:42:02 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:02.902 143965 DEBUG oslo_service.service [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  6 01:42:02 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:02.902 143965 DEBUG oslo_service.service [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  6 01:42:02 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:02.902 143965 DEBUG oslo_service.service [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  6 01:42:02 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:02.902 143965 DEBUG oslo_service.service [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  6 01:42:02 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:02.902 143965 DEBUG oslo_service.service [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  6 01:42:02 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:02.902 143965 DEBUG oslo_service.service [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  6 01:42:02 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:02.903 143965 DEBUG oslo_service.service [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  6 01:42:02 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:02.903 143965 DEBUG oslo_service.service [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  6 01:42:02 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:02.903 143965 DEBUG oslo_service.service [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  6 01:42:02 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:02.903 143965 DEBUG oslo_service.service [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:42:02 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:02.903 143965 DEBUG oslo_service.service [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:42:02 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:02.903 143965 DEBUG oslo_service.service [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:42:02 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:02.904 143965 DEBUG oslo_service.service [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:42:02 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:02.904 143965 DEBUG oslo_service.service [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:42:02 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:02.904 143965 DEBUG oslo_service.service [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:42:02 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:02.904 143965 DEBUG oslo_service.service [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:42:02 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:02.904 143965 DEBUG oslo_service.service [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:42:02 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:02.904 143965 DEBUG oslo_service.service [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:42:02 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:02.904 143965 DEBUG oslo_service.service [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:42:02 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:02.905 143965 DEBUG oslo_service.service [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:42:02 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:02.905 143965 DEBUG oslo_service.service [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:42:02 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:02.905 143965 DEBUG oslo_service.service [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:42:02 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:02.905 143965 DEBUG oslo_service.service [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:42:02 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:02.905 143965 DEBUG oslo_service.service [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:42:02 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:02.905 143965 DEBUG oslo_service.service [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:42:02 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:02.906 143965 DEBUG oslo_service.service [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:42:02 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:02.906 143965 DEBUG oslo_service.service [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:42:02 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:02.906 143965 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:42:02 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:02.906 143965 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:42:02 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:02.906 143965 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:42:02 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:02.906 143965 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:42:02 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:02.906 143965 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:42:02 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:02.907 143965 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:42:02 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:02.907 143965 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:42:02 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:02.907 143965 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:42:02 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:02.907 143965 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:42:02 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:02.907 143965 DEBUG oslo_service.service [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:42:02 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:02.907 143965 DEBUG oslo_service.service [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:42:02 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:02.907 143965 DEBUG oslo_service.service [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:42:02 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:02.908 143965 DEBUG oslo_service.service [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:42:02 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:02.908 143965 DEBUG oslo_service.service [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:42:02 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:02.908 143965 DEBUG oslo_service.service [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:42:02 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:02.908 143965 DEBUG oslo_service.service [-] privsep.thread_pool_size       = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:42:02 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:02.908 143965 DEBUG oslo_service.service [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:42:02 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:02.908 143965 DEBUG oslo_service.service [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:42:02 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:02.908 143965 DEBUG oslo_service.service [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:42:02 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:02.909 143965 DEBUG oslo_service.service [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:42:02 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:02.909 143965 DEBUG oslo_service.service [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:42:02 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:02.909 143965 DEBUG oslo_service.service [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:42:02 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:02.909 143965 DEBUG oslo_service.service [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:42:02 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:02.909 143965 DEBUG oslo_service.service [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:42:02 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:02.909 143965 DEBUG oslo_service.service [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:42:02 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:02.909 143965 DEBUG oslo_service.service [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:42:02 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:02.909 143965 DEBUG oslo_service.service [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:42:02 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:02.910 143965 DEBUG oslo_service.service [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:42:02 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:02.910 143965 DEBUG oslo_service.service [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:42:02 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:02.910 143965 DEBUG oslo_service.service [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:42:02 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:02.910 143965 DEBUG oslo_service.service [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:42:02 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:02.910 143965 DEBUG oslo_service.service [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:42:02 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:02.910 143965 DEBUG oslo_service.service [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:42:02 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:02.910 143965 DEBUG oslo_service.service [-] privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:42:02 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:02.910 143965 DEBUG oslo_service.service [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:42:02 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:02.911 143965 DEBUG oslo_service.service [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:42:02 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:02.911 143965 DEBUG oslo_service.service [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:42:02 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:02.911 143965 DEBUG oslo_service.service [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:42:02 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:02.911 143965 DEBUG oslo_service.service [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:42:02 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:02.911 143965 DEBUG oslo_service.service [-] privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:42:02 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:02.911 143965 DEBUG oslo_service.service [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:42:02 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:02.911 143965 DEBUG oslo_service.service [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:42:02 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:02.911 143965 DEBUG oslo_service.service [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:42:02 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:02.911 143965 DEBUG oslo_service.service [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:42:02 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:02.911 143965 DEBUG oslo_service.service [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:42:02 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:02.912 143965 DEBUG oslo_service.service [-] privsep_link.thread_pool_size  = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:42:02 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:02.912 143965 DEBUG oslo_service.service [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:42:02 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:02.912 143965 DEBUG oslo_service.service [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:42:02 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:02.912 143965 DEBUG oslo_service.service [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:42:02 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:02.912 143965 DEBUG oslo_service.service [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:42:02 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:02.912 143965 DEBUG oslo_service.service [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:42:02 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:02.912 143965 DEBUG oslo_service.service [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:42:02 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:02.912 143965 DEBUG oslo_service.service [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:42:02 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:02.913 143965 DEBUG oslo_service.service [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:42:02 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:02.913 143965 DEBUG oslo_service.service [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:42:02 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:02.913 143965 DEBUG oslo_service.service [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:42:02 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:02.913 143965 DEBUG oslo_service.service [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:42:02 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:02.913 143965 DEBUG oslo_service.service [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:42:02 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:02.913 143965 DEBUG oslo_service.service [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:42:02 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:02.913 143965 DEBUG oslo_service.service [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:42:02 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:02.913 143965 DEBUG oslo_service.service [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:42:02 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:02.913 143965 DEBUG oslo_service.service [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:42:02 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:02.914 143965 DEBUG oslo_service.service [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:42:02 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:02.914 143965 DEBUG oslo_service.service [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:42:02 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:02.914 143965 DEBUG oslo_service.service [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:42:02 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:02.914 143965 DEBUG oslo_service.service [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:42:02 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:02.914 143965 DEBUG oslo_service.service [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:42:02 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:02.914 143965 DEBUG oslo_service.service [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:42:02 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:02.914 143965 DEBUG oslo_service.service [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:42:02 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:02.914 143965 DEBUG oslo_service.service [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:42:02 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:02.915 143965 DEBUG oslo_service.service [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:42:02 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:02.915 143965 DEBUG oslo_service.service [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:42:02 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:02.915 143965 DEBUG oslo_service.service [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:42:02 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:02.915 143965 DEBUG oslo_service.service [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:42:02 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:02.915 143965 DEBUG oslo_service.service [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:42:02 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:02.915 143965 DEBUG oslo_service.service [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:42:02 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:02.915 143965 DEBUG oslo_service.service [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:42:02 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:02.916 143965 DEBUG oslo_service.service [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:42:02 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:02.916 143965 DEBUG oslo_service.service [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:42:02 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:02.916 143965 DEBUG oslo_service.service [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:42:02 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:02.916 143965 DEBUG oslo_service.service [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:42:02 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:02.916 143965 DEBUG oslo_service.service [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:42:02 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:02.916 143965 DEBUG oslo_service.service [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:42:02 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:02.916 143965 DEBUG oslo_service.service [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:42:02 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:02.917 143965 DEBUG oslo_service.service [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:42:02 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:02.917 143965 DEBUG oslo_service.service [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:42:02 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:02.917 143965 DEBUG oslo_service.service [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:42:02 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:02.917 143965 DEBUG oslo_service.service [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:42:02 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:02.917 143965 DEBUG oslo_service.service [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:42:02 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:02.917 143965 DEBUG oslo_service.service [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:42:02 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:02.917 143965 DEBUG oslo_service.service [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:42:02 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:02.918 143965 DEBUG oslo_service.service [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:42:02 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:02.918 143965 DEBUG oslo_service.service [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:42:02 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:02.918 143965 DEBUG oslo_service.service [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:42:02 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:02.918 143965 DEBUG oslo_service.service [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:42:02 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:02.918 143965 DEBUG oslo_service.service [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:42:02 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:02.918 143965 DEBUG oslo_service.service [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:42:02 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:02.918 143965 DEBUG oslo_service.service [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:42:02 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:02.919 143965 DEBUG oslo_service.service [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:42:02 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:02.919 143965 DEBUG oslo_service.service [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:42:02 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:02.919 143965 DEBUG oslo_service.service [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:42:02 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:02.919 143965 DEBUG oslo_service.service [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:42:02 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:02.919 143965 DEBUG oslo_service.service [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:42:02 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:02.919 143965 DEBUG oslo_service.service [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:42:02 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:02.919 143965 DEBUG oslo_service.service [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:42:02 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:02.920 143965 DEBUG oslo_service.service [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:42:02 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:02.920 143965 DEBUG oslo_service.service [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:42:02 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:02.920 143965 DEBUG oslo_service.service [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:42:02 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:02.920 143965 DEBUG oslo_service.service [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:42:02 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:02.920 143965 DEBUG oslo_service.service [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:42:02 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:02.920 143965 DEBUG oslo_service.service [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:42:02 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:02.921 143965 DEBUG oslo_service.service [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:42:02 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:02.921 143965 DEBUG oslo_service.service [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:42:02 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:02.921 143965 DEBUG oslo_service.service [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:42:02 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:02.921 143965 DEBUG oslo_service.service [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:42:02 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:02.921 143965 DEBUG oslo_service.service [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:42:02 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:02.921 143965 DEBUG oslo_service.service [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:42:02 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:02.922 143965 DEBUG oslo_service.service [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:42:02 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:02.922 143965 DEBUG oslo_service.service [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:42:02 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:02.922 143965 DEBUG oslo_service.service [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:42:02 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:02.922 143965 DEBUG oslo_service.service [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:42:02 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:02.922 143965 DEBUG oslo_service.service [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:42:02 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:02.922 143965 DEBUG oslo_service.service [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:42:02 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:02.922 143965 DEBUG oslo_service.service [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:42:02 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:02.923 143965 DEBUG oslo_service.service [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:42:02 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:02.923 143965 DEBUG oslo_service.service [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:42:02 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:02.923 143965 DEBUG oslo_service.service [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:42:02 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:02.923 143965 DEBUG oslo_service.service [-] ovn.ovn_sb_ca_cert             = /etc/pki/tls/certs/ovndbca.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:42:02 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:02.923 143965 DEBUG oslo_service.service [-] ovn.ovn_sb_certificate         = /etc/pki/tls/certs/ovndb.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:42:02 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:02.923 143965 DEBUG oslo_service.service [-] ovn.ovn_sb_connection          = ssl:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:42:02 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:02.924 143965 DEBUG oslo_service.service [-] ovn.ovn_sb_private_key         = /etc/pki/tls/private/ovndb.key log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:42:02 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:02.924 143965 DEBUG oslo_service.service [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:42:02 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:02.924 143965 DEBUG oslo_service.service [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:42:02 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:02.924 143965 DEBUG oslo_service.service [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:42:02 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:02.924 143965 DEBUG oslo_service.service [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:42:02 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:02.924 143965 DEBUG oslo_service.service [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:42:02 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:02.924 143965 DEBUG oslo_service.service [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:42:02 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:02.925 143965 DEBUG oslo_service.service [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:42:02 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:02.925 143965 DEBUG oslo_service.service [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:42:02 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:02.925 143965 DEBUG oslo_service.service [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:42:02 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:02.925 143965 DEBUG oslo_service.service [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:42:02 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:02.925 143965 DEBUG oslo_service.service [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:42:02 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:02.925 143965 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:42:02 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:02.926 143965 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:42:02 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:02.926 143965 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:42:02 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:02.926 143965 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:42:02 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:02.926 143965 DEBUG oslo_service.service [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:42:02 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:02.926 143965 DEBUG oslo_service.service [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:42:02 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:02.926 143965 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:42:02 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:02.926 143965 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:42:02 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:02.927 143965 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:42:02 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:02.927 143965 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:42:02 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:02.927 143965 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:42:02 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:02.927 143965 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:42:02 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:02.927 143965 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:42:02 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:02.928 143965 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:42:02 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:02.928 143965 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:42:02 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:02.928 143965 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:42:02 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:02.928 143965 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:42:02 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:02.929 143965 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:42:02 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:02.929 143965 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:42:02 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:02.929 143965 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:42:02 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:02.929 143965 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:42:02 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:02.929 143965 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:42:02 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:02.929 143965 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:42:02 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:02.929 143965 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:42:02 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:02.930 143965 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:42:02 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:02.930 143965 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:42:02 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:02.930 143965 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:42:02 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:02.930 143965 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:42:02 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:02.930 143965 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:42:02 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:02.930 143965 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:42:02 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:02.930 143965 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:42:02 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:02.931 143965 DEBUG oslo_service.service [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:42:02 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:02.931 143965 DEBUG oslo_service.service [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:42:02 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:02.931 143965 DEBUG oslo_service.service [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:42:02 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:02.931 143965 DEBUG oslo_service.service [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:42:02 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:42:02.931 143965 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613#033[00m
Dec  6 01:42:03 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:42:03 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:42:03 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:42:03.639 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:42:04 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:42:04 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:42:04 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:42:04.204 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:42:05 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:42:05 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:42:05 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:42:05.644 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:42:06 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:42:06 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:42:06 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:42:06.206 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:42:07 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:42:07 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:42:07 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:42:07.648 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:42:07 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 01:42:08 np0005548731 systemd-logind[794]: New session 48 of user zuul.
Dec  6 01:42:08 np0005548731 systemd[1]: Started Session 48 of User zuul.
Dec  6 01:42:08 np0005548731 ceph-mds[82074]: mds.beacon.cephfs.compute-2.tjfgow missed beacon ack from the monitors
Dec  6 01:42:08 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:42:08 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000025s ======
Dec  6 01:42:08 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:42:08.208 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Dec  6 01:42:09 np0005548731 python3.9[144241]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  6 01:42:09 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:42:09 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:42:09 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:42:09.651 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:42:10 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:42:10 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 01:42:10 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:42:10.210 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 01:42:10 np0005548731 python3.9[144398]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --filter name=^nova_virtlogd$ --format \{\{.Names\}\} _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  6 01:42:11 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:42:11 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000025s ======
Dec  6 01:42:11 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:42:11.654 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Dec  6 01:42:12 np0005548731 ceph-mds[82074]: mds.beacon.cephfs.compute-2.tjfgow missed beacon ack from the monitors
Dec  6 01:42:12 np0005548731 python3.9[144563]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec  6 01:42:12 np0005548731 systemd[1]: Reloading.
Dec  6 01:42:12 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:42:12 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000025s ======
Dec  6 01:42:12 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:42:12.212 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Dec  6 01:42:12 np0005548731 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  6 01:42:12 np0005548731 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  6 01:42:12 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).paxos(paxos updating c 503..1243) lease_timeout -- calling new election
Dec  6 01:42:12 np0005548731 ceph-mon[77458]: log_channel(cluster) log [INF] : mon.compute-2 calling monitor election
Dec  6 01:42:12 np0005548731 ceph-mon[77458]: paxos.1).electionLogic(40) init, last seen epoch 40
Dec  6 01:42:12 np0005548731 ceph-mon[77458]: mon.compute-2@1(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec  6 01:42:13 np0005548731 python3.9[144748]: ansible-ansible.builtin.service_facts Invoked
Dec  6 01:42:13 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:42:13 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:42:13 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:42:13.658 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:42:13 np0005548731 network[144765]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec  6 01:42:13 np0005548731 network[144766]: 'network-scripts' will be removed from distribution in near future.
Dec  6 01:42:13 np0005548731 network[144767]: It is advised to switch to 'NetworkManager' instead for network management.
Dec  6 01:42:14 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:42:14 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 01:42:14 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:42:14.214 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 01:42:15 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:42:15 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 01:42:15 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:42:15.661 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 01:42:16 np0005548731 ceph-mds[82074]: mds.beacon.cephfs.compute-2.tjfgow missed beacon ack from the monitors
Dec  6 01:42:16 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:42:16 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:42:16 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:42:16.217 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:42:17 np0005548731 ceph-mon[77458]: log_channel(cluster) log [INF] : mon.compute-2 is new leader, mons compute-2,compute-1 in quorum (ranks 1,2)
Dec  6 01:42:17 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:42:17 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:42:17 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:42:17.665 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:42:17 np0005548731 ceph-mon[77458]: log_channel(cluster) log [INF] : overall HEALTH_OK
Dec  6 01:42:17 np0005548731 ceph-mon[77458]: log_channel(cluster) log [INF] : mon.compute-2 calling monitor election
Dec  6 01:42:17 np0005548731 ceph-mon[77458]: paxos.1).electionLogic(43) init, last seen epoch 43, mid-election, bumping
Dec  6 01:42:17 np0005548731 ceph-mon[77458]: mon.compute-2@1(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec  6 01:42:17 np0005548731 ceph-mon[77458]: mon.compute-2@1(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec  6 01:42:18 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:42:18 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 01:42:18 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:42:18.218 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 01:42:18 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec  6 01:42:19 np0005548731 python3.9[145132]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_libvirt.target state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  6 01:42:19 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:42:19 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:42:19 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:42:19.669 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:42:20 np0005548731 ceph-mon[77458]: mon.compute-2 calling monitor election
Dec  6 01:42:20 np0005548731 ceph-mon[77458]: mon.compute-2 is new leader, mons compute-2,compute-1 in quorum (ranks 1,2)
Dec  6 01:42:20 np0005548731 ceph-mon[77458]: mon.compute-0 calling monitor election
Dec  6 01:42:20 np0005548731 ceph-mon[77458]: overall HEALTH_OK
Dec  6 01:42:20 np0005548731 ceph-mon[77458]: mon.compute-2 calling monitor election
Dec  6 01:42:20 np0005548731 ceph-mon[77458]: mon.compute-0 is new leader, mons compute-0,compute-2,compute-1 in quorum (ranks 0,1,2)
Dec  6 01:42:20 np0005548731 ceph-mon[77458]: Health check cleared: MON_DOWN (was: 1/3 mons down, quorum compute-0,compute-1)
Dec  6 01:42:20 np0005548731 ceph-mon[77458]: Cluster is now healthy
Dec  6 01:42:20 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:42:20 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000025s ======
Dec  6 01:42:20 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:42:20.220 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Dec  6 01:42:20 np0005548731 python3.9[145286]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtlogd_wrapper.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  6 01:42:21 np0005548731 python3.9[145439]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtnodedevd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  6 01:42:21 np0005548731 ceph-mon[77458]: overall HEALTH_OK
Dec  6 01:42:21 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:42:21 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000025s ======
Dec  6 01:42:21 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:42:21.672 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Dec  6 01:42:21 np0005548731 python3.9[145592]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtproxyd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  6 01:42:22 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:42:22 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000025s ======
Dec  6 01:42:22 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:42:22.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Dec  6 01:42:22 np0005548731 python3.9[145746]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtqemud.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  6 01:42:23 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 01:42:23 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:42:23 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:42:23 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:42:23.675 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:42:23 np0005548731 python3.9[145899]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtsecretd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  6 01:42:24 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:42:24 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000025s ======
Dec  6 01:42:24 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:42:24.224 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Dec  6 01:42:24 np0005548731 ceph-mon[77458]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #22. Immutable memtables: 0.
Dec  6 01:42:24 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-06:42:24.946085) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec  6 01:42:24 np0005548731 ceph-mon[77458]: rocksdb: [db/flush_job.cc:856] [default] [JOB 9] Flushing memtable with next log file: 22
Dec  6 01:42:24 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765003344946146, "job": 9, "event": "flush_started", "num_memtables": 1, "num_entries": 2454, "num_deletes": 252, "total_data_size": 6231764, "memory_usage": 6291440, "flush_reason": "Manual Compaction"}
Dec  6 01:42:24 np0005548731 ceph-mon[77458]: rocksdb: [db/flush_job.cc:885] [default] [JOB 9] Level-0 flush table #23: started
Dec  6 01:42:24 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765003344987493, "cf_name": "default", "job": 9, "event": "table_file_creation", "file_number": 23, "file_size": 4042520, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 10225, "largest_seqno": 12674, "table_properties": {"data_size": 4032649, "index_size": 6301, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2501, "raw_key_size": 20185, "raw_average_key_size": 20, "raw_value_size": 4012609, "raw_average_value_size": 4032, "num_data_blocks": 281, "num_entries": 995, "num_filter_entries": 995, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765003070, "oldest_key_time": 1765003070, "file_creation_time": 1765003344, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bc01f9a1-fda8-4008-aee1-1fa5ff4371a1", "db_session_id": "CWBL8WMDM4D79BU86E9T", "orig_file_number": 23, "seqno_to_time_mapping": "N/A"}}
Dec  6 01:42:24 np0005548731 ceph-mon[77458]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 9] Flush lasted 41502 microseconds, and 10568 cpu microseconds.
Dec  6 01:42:24 np0005548731 ceph-mon[77458]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  6 01:42:24 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-06:42:24.987593) [db/flush_job.cc:967] [default] [JOB 9] Level-0 flush table #23: 4042520 bytes OK
Dec  6 01:42:24 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-06:42:24.987619) [db/memtable_list.cc:519] [default] Level-0 commit table #23 started
Dec  6 01:42:24 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-06:42:24.989620) [db/memtable_list.cc:722] [default] Level-0 commit table #23: memtable #1 done
Dec  6 01:42:24 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-06:42:24.989654) EVENT_LOG_v1 {"time_micros": 1765003344989646, "job": 9, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec  6 01:42:24 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-06:42:24.989677) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec  6 01:42:24 np0005548731 ceph-mon[77458]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 9] Try to delete WAL files size 6221066, prev total WAL file size 6221066, number of live WAL files 2.
Dec  6 01:42:24 np0005548731 ceph-mon[77458]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000019.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  6 01:42:24 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-06:42:24.991409) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F7300353032' seq:72057594037927935, type:22 .. '7061786F7300373534' seq:0, type:0; will stop at (end)
Dec  6 01:42:24 np0005548731 ceph-mon[77458]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 10] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec  6 01:42:24 np0005548731 ceph-mon[77458]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 9 Base level 0, inputs: [23(3947KB)], [21(8338KB)]
Dec  6 01:42:24 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765003344991454, "job": 10, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [23], "files_L6": [21], "score": -1, "input_data_size": 12581078, "oldest_snapshot_seqno": -1}
Dec  6 01:42:25 np0005548731 ceph-mon[77458]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 10] Generated table #24: 4450 keys, 9968182 bytes, temperature: kUnknown
Dec  6 01:42:25 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765003345072115, "cf_name": "default", "job": 10, "event": "table_file_creation", "file_number": 24, "file_size": 9968182, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9934212, "index_size": 21764, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 11141, "raw_key_size": 109128, "raw_average_key_size": 24, "raw_value_size": 9849602, "raw_average_value_size": 2213, "num_data_blocks": 938, "num_entries": 4450, "num_filter_entries": 4450, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765002564, "oldest_key_time": 0, "file_creation_time": 1765003344, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bc01f9a1-fda8-4008-aee1-1fa5ff4371a1", "db_session_id": "CWBL8WMDM4D79BU86E9T", "orig_file_number": 24, "seqno_to_time_mapping": "N/A"}}
Dec  6 01:42:25 np0005548731 ceph-mon[77458]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  6 01:42:25 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-06:42:25.072429) [db/compaction/compaction_job.cc:1663] [default] [JOB 10] Compacted 1@0 + 1@6 files to L6 => 9968182 bytes
Dec  6 01:42:25 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-06:42:25.074376) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 155.8 rd, 123.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.9, 8.1 +0.0 blob) out(9.5 +0.0 blob), read-write-amplify(5.6) write-amplify(2.5) OK, records in: 4980, records dropped: 530 output_compression: NoCompression
Dec  6 01:42:25 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-06:42:25.074395) EVENT_LOG_v1 {"time_micros": 1765003345074383, "job": 10, "event": "compaction_finished", "compaction_time_micros": 80758, "compaction_time_cpu_micros": 28099, "output_level": 6, "num_output_files": 1, "total_output_size": 9968182, "num_input_records": 4980, "num_output_records": 4450, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec  6 01:42:25 np0005548731 ceph-mon[77458]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000023.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  6 01:42:25 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765003345075196, "job": 10, "event": "table_file_deletion", "file_number": 23}
Dec  6 01:42:25 np0005548731 ceph-mon[77458]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000021.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  6 01:42:25 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765003345076771, "job": 10, "event": "table_file_deletion", "file_number": 21}
Dec  6 01:42:25 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-06:42:24.991303) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 01:42:25 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-06:42:25.076930) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 01:42:25 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-06:42:25.076938) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 01:42:25 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-06:42:25.076940) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 01:42:25 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-06:42:25.076942) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 01:42:25 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-06:42:25.076943) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 01:42:25 np0005548731 python3.9[146053]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtstoraged.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  6 01:42:25 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:42:25 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:42:25 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:42:25.680 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:42:26 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:42:26 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:42:26 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:42:26.227 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:42:26 np0005548731 python3.9[146207]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 01:42:27 np0005548731 python3.9[146359]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 01:42:27 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:42:27 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000025s ======
Dec  6 01:42:27 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:42:27.682 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Dec  6 01:42:27 np0005548731 podman[146483]: 2025-12-06 06:42:27.795845387 +0000 UTC m=+0.105023553 container health_status a118c3becf56cfbcae208065771ebf10a65162bb7b96389971293e534823fb78 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  6 01:42:27 np0005548731 python3.9[146528]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 01:42:28 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 01:42:28 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:42:28 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:42:28 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:42:28.228 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:42:28 np0005548731 podman[146691]: 2025-12-06 06:42:28.924164057 +0000 UTC m=+0.086734988 container health_status 26c2ce9bdb83c6db5ccad7f5b2a7cb4e9354ab0235ad2f942ea42c8feda2142b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Dec  6 01:42:29 np0005548731 python3.9[146690]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 01:42:29 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:42:29 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.002000049s ======
Dec  6 01:42:29 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:42:29.687 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000049s
Dec  6 01:42:29 np0005548731 python3.9[146863]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 01:42:30 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:42:30 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000025s ======
Dec  6 01:42:30 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:42:30.230 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Dec  6 01:42:30 np0005548731 python3.9[147016]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 01:42:31 np0005548731 python3.9[147168]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 01:42:31 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:42:31 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:42:31 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:42:31.691 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:42:32 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:42:32 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:42:32 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:42:32.232 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:42:32 np0005548731 python3.9[147320]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 01:42:33 np0005548731 python3.9[147473]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 01:42:33 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 01:42:33 np0005548731 python3.9[147625]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 01:42:33 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:42:33 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:42:33 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:42:33.695 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:42:34 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:42:34 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:42:34 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:42:34.234 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:42:34 np0005548731 python3.9[147778]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 01:42:35 np0005548731 python3.9[147930]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 01:42:35 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:42:35 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000025s ======
Dec  6 01:42:35 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:42:35.698 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Dec  6 01:42:36 np0005548731 python3.9[148082]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 01:42:36 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:42:36 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:42:36 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:42:36.237 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:42:36 np0005548731 python3.9[148235]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 01:42:37 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:42:37 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000025s ======
Dec  6 01:42:37 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:42:37.701 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Dec  6 01:42:38 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:42:38 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000025s ======
Dec  6 01:42:38 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:42:38.239 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Dec  6 01:42:39 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 01:42:39 np0005548731 python3.9[148387]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then#012  systemctl disable --now certmonger.service#012  test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service#012fi#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  6 01:42:39 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:42:39 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 01:42:39 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:42:39.704 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 01:42:40 np0005548731 python3.9[148590]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Dec  6 01:42:40 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:42:40 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 01:42:40 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:42:40.241 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 01:42:41 np0005548731 python3.9[148743]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec  6 01:42:41 np0005548731 systemd[1]: Reloading.
Dec  6 01:42:41 np0005548731 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  6 01:42:41 np0005548731 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  6 01:42:41 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:42:41 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:42:41 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:42:41.708 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:42:42 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:42:42 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:42:42 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:42:42.244 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:42:42 np0005548731 python3.9[148930]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_libvirt.target _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  6 01:42:43 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:42:43 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:42:43 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:42:43.710 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:42:43 np0005548731 python3.9[149083]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtlogd_wrapper.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  6 01:42:44 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 01:42:44 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:42:44 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 01:42:44 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:42:44.247 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 01:42:44 np0005548731 python3.9[149237]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtnodedevd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  6 01:42:45 np0005548731 python3.9[149390]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtproxyd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  6 01:42:45 np0005548731 python3.9[149543]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtqemud.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  6 01:42:45 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:42:45 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:42:45 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:42:45.714 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:42:46 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:42:46 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:42:46 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:42:46.251 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:42:46 np0005548731 python3.9[149697]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtsecretd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  6 01:42:46 np0005548731 python3.9[149850]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtstoraged.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  6 01:42:47 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:42:47 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:42:47 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:42:47.718 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:42:48 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:42:48 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 01:42:48 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:42:48.253 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 01:42:49 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 01:42:49 np0005548731 python3.9[150004]: ansible-ansible.builtin.getent Invoked with database=passwd key=libvirt fail_key=True service=None split=None
Dec  6 01:42:49 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:42:49 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:42:49 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:42:49.722 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:42:50 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:42:50 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:42:50 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:42:50.256 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:42:50 np0005548731 python3.9[150158]: ansible-ansible.builtin.group Invoked with gid=42473 name=libvirt state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Dec  6 01:42:51 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:42:51 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 01:42:51 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:42:51.725 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 01:42:52 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:42:52 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:42:52 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:42:52.259 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:42:53 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:42:53 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:42:53 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:42:53.729 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:42:54 np0005548731 python3.9[150317]: ansible-ansible.builtin.user Invoked with comment=libvirt user group=libvirt groups=[''] name=libvirt shell=/sbin/nologin state=present uid=42473 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-2 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Dec  6 01:42:54 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 01:42:54 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:42:54 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 01:42:54 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:42:54.261 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 01:42:55 np0005548731 python3.9[150478]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec  6 01:42:55 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:42:55 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 01:42:55 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:42:55.733 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 01:42:56 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:42:56 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 01:42:56 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:42:56.263 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 01:42:56 np0005548731 python3.9[150563]: ansible-ansible.legacy.dnf Invoked with name=['libvirt ', 'libvirt-admin ', 'libvirt-client ', 'libvirt-daemon ', 'qemu-kvm', 'qemu-img', 'libguestfs', 'libseccomp', 'swtpm', 'swtpm-tools', 'edk2-ovmf', 'ceph-common', 'cyrus-sasl-scram'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec  6 01:42:57 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:42:57 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 01:42:57 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:42:57.740 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 01:42:58 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:42:58 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:42:58 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:42:58.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:42:58 np0005548731 podman[150565]: 2025-12-06 06:42:58.359282021 +0000 UTC m=+0.418916589 container health_status a118c3becf56cfbcae208065771ebf10a65162bb7b96389971293e534823fb78 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, managed_by=edpm_ansible)
Dec  6 01:42:59 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:42:59 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:42:59 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:42:59.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:43:00 np0005548731 podman[150649]: 2025-12-06 06:43:00.071667361 +0000 UTC m=+0.060397136 container health_status 26c2ce9bdb83c6db5ccad7f5b2a7cb4e9354ab0235ad2f942ea42c8feda2142b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Dec  6 01:43:00 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:43:00 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 01:43:00 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:43:00.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 01:43:00 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 01:43:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:43:00.829 143965 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 01:43:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:43:00.829 143965 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 01:43:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:43:00.830 143965 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 01:43:01 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:43:01 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:43:01 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:43:01.749 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:43:02 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:43:02 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:43:02 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:43:02.271 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:43:03 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:43:03 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 01:43:03 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:43:03.752 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 01:43:04 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:43:04 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 01:43:04 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:43:04.273 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 01:43:05 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 01:43:05 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:43:05 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 01:43:05 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:43:05.757 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 01:43:06 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:43:06 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:43:06 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:43:06.276 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:43:07 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:43:07 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:43:07 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:43:07.761 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:43:08 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:43:08 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:43:08 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:43:08.277 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:43:09 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:43:09 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:43:09 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:43:09.765 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:43:10 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:43:10 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 01:43:10 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:43:10.279 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 01:43:10 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 01:43:11 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:43:11 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:43:11 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:43:11.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:43:12 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:43:12 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 01:43:12 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:43:12.281 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 01:43:13 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:43:13 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:43:13 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:43:13.779 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:43:14 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:43:14 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:43:14 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:43:14.283 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:43:15 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 01:43:15 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:43:15 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:43:15 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:43:15.782 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:43:16 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:43:16 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 01:43:16 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:43:16.285 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 01:43:17 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:43:17 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:43:17 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:43:17.786 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:43:18 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:43:18 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 01:43:18 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:43:18.287 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 01:43:19 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:43:19 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:43:19 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:43:19.790 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:43:20 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Dec  6 01:43:20 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  6 01:43:20 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 01:43:20 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  6 01:43:20 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:43:20 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:43:20 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:43:20.289 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:43:20 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 01:43:21 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:43:21 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:43:21 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:43:21.793 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:43:22 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:43:22 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:43:22 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:43:22.324 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:43:23 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:43:23 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:43:23 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:43:23.797 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:43:24 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:43:24 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:43:24 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:43:24.326 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:43:25 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 01:43:25 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:43:25 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:43:25 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:43:25.800 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:43:26 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:43:26 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 01:43:26 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:43:26.328 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 01:43:27 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:43:27 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 01:43:27 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:43:27.805 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 01:43:28 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:43:28 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 01:43:28 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:43:28.331 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 01:43:28 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 01:43:28 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 01:43:29 np0005548731 podman[151096]: 2025-12-06 06:43:29.138659823 +0000 UTC m=+0.128929156 container health_status a118c3becf56cfbcae208065771ebf10a65162bb7b96389971293e534823fb78 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125)
Dec  6 01:43:29 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:43:29 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:43:29 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:43:29.809 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:43:30 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:43:30 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:43:30 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:43:30.333 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:43:30 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 01:43:30 np0005548731 podman[151122]: 2025-12-06 06:43:30.920494906 +0000 UTC m=+0.069440184 container health_status 26c2ce9bdb83c6db5ccad7f5b2a7cb4e9354ab0235ad2f942ea42c8feda2142b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS)
Dec  6 01:43:31 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:43:31 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:43:31 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:43:31.813 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:43:32 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:43:32 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:43:32 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:43:32.335 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:43:33 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:43:33 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:43:33 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:43:33.816 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:43:34 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:43:34 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 01:43:34 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:43:34.337 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 01:43:35 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 01:43:35 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:43:35 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:43:35 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:43:35.820 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:43:36 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:43:36 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 01:43:36 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:43:36.339 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 01:43:37 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:43:37 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:43:37 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:43:37.823 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:43:38 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:43:38 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:43:38 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:43:38.341 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:43:39 np0005548731 kernel: SELinux:  Converting 2771 SID table entries...
Dec  6 01:43:39 np0005548731 kernel: SELinux:  policy capability network_peer_controls=1
Dec  6 01:43:39 np0005548731 kernel: SELinux:  policy capability open_perms=1
Dec  6 01:43:39 np0005548731 kernel: SELinux:  policy capability extended_socket_class=1
Dec  6 01:43:39 np0005548731 kernel: SELinux:  policy capability always_check_network=0
Dec  6 01:43:39 np0005548731 kernel: SELinux:  policy capability cgroup_seclabel=1
Dec  6 01:43:39 np0005548731 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec  6 01:43:39 np0005548731 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec  6 01:43:39 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:43:39 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:43:39 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:43:39.827 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:43:40 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:43:40 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 01:43:40 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:43:40.343 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 01:43:40 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 01:43:41 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:43:41 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:43:41 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:43:41.831 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:43:42 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:43:42 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:43:42 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:43:42.345 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:43:43 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:43:43 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 01:43:43 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:43:43.834 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 01:43:44 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:43:44 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000025s ======
Dec  6 01:43:44 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:43:44.348 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Dec  6 01:43:45 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 01:43:45 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:43:45 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 01:43:45 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:43:45.838 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 01:43:46 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:43:46 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000025s ======
Dec  6 01:43:46 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:43:46.350 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Dec  6 01:43:47 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:43:47 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000025s ======
Dec  6 01:43:47 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:43:47.842 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Dec  6 01:43:48 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:43:48 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000025s ======
Dec  6 01:43:48 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:43:48.352 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Dec  6 01:43:49 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:43:49 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:43:49 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:43:49.846 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:43:50 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:43:50 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:43:50 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:43:50.353 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:43:50 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 01:43:51 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:43:51 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000025s ======
Dec  6 01:43:51 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:43:51.849 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Dec  6 01:43:52 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:43:52 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:43:52 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:43:52.355 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:43:52 np0005548731 kernel: SELinux:  Converting 2771 SID table entries...
Dec  6 01:43:53 np0005548731 kernel: SELinux:  policy capability network_peer_controls=1
Dec  6 01:43:53 np0005548731 kernel: SELinux:  policy capability open_perms=1
Dec  6 01:43:53 np0005548731 kernel: SELinux:  policy capability extended_socket_class=1
Dec  6 01:43:53 np0005548731 kernel: SELinux:  policy capability always_check_network=0
Dec  6 01:43:53 np0005548731 kernel: SELinux:  policy capability cgroup_seclabel=1
Dec  6 01:43:53 np0005548731 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec  6 01:43:53 np0005548731 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec  6 01:43:53 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:43:53 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:43:53 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:43:53.853 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:43:54 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:43:54 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000025s ======
Dec  6 01:43:54 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:43:54.357 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Dec  6 01:43:55 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 01:43:55 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:43:55 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:43:55 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:43:55.858 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:43:56 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:43:56 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 01:43:56 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:43:56.360 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 01:43:57 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:43:57 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:43:57 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:43:57.861 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:43:58 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:43:58 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000025s ======
Dec  6 01:43:58 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:43:58.362 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Dec  6 01:43:58 np0005548731 dbus-broker-launch[775]: avc:  op=load_policy lsm=selinux seqno=13 res=1
Dec  6 01:43:59 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:43:59 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000025s ======
Dec  6 01:43:59 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:43:59.864 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Dec  6 01:43:59 np0005548731 podman[151270]: 2025-12-06 06:43:59.950205348 +0000 UTC m=+0.093827882 container health_status a118c3becf56cfbcae208065771ebf10a65162bb7b96389971293e534823fb78 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  6 01:44:00 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:44:00 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:44:00 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:44:00.364 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:44:00 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 01:44:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:44:00.830 143965 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 01:44:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:44:00.831 143965 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 01:44:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:44:00.831 143965 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 01:44:01 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:44:01 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:44:01 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:44:01.867 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:44:01 np0005548731 podman[151298]: 2025-12-06 06:44:01.90162848 +0000 UTC m=+0.056683431 container health_status 26c2ce9bdb83c6db5ccad7f5b2a7cb4e9354ab0235ad2f942ea42c8feda2142b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Dec  6 01:44:02 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:44:02 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:44:02 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:44:02.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:44:03 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:44:03 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:44:03 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:44:03.870 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:44:04 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:44:04 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:44:04 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:44:04.370 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:44:05 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 01:44:05 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:44:05 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 01:44:05 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:44:05.873 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 01:44:06 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:44:06 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:44:06 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:44:06.373 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:44:07 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:44:07 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:44:07 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:44:07.876 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:44:08 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:44:08 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000025s ======
Dec  6 01:44:08 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:44:08.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Dec  6 01:44:09 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:44:09 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:44:09 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:44:09.880 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:44:10 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:44:10 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000025s ======
Dec  6 01:44:10 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:44:10.378 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Dec  6 01:44:10 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 01:44:11 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:44:11 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000025s ======
Dec  6 01:44:11 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:44:11.882 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Dec  6 01:44:12 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:44:12 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000025s ======
Dec  6 01:44:12 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:44:12.380 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Dec  6 01:44:13 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:44:13 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:44:13 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:44:13.886 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:44:14 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:44:14 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:44:14 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:44:14.382 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:44:15 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 01:44:15 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:44:15 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 01:44:15 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:44:15.888 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 01:44:16 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:44:16 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000025s ======
Dec  6 01:44:16 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:44:16.384 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Dec  6 01:44:17 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:44:17 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:44:17 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:44:17.891 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:44:18 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:44:18 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:44:18 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:44:18.387 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:44:19 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:44:19 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000025s ======
Dec  6 01:44:19 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:44:19.892 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Dec  6 01:44:20 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:44:20 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000025s ======
Dec  6 01:44:20 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:44:20.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Dec  6 01:44:20 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 01:44:21 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:44:21 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000025s ======
Dec  6 01:44:21 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:44:21.895 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Dec  6 01:44:22 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:44:22 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:44:22 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:44:22.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:44:23 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:44:23 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:44:23 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:44:23.897 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:44:24 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:44:24 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 01:44:24 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:44:24.394 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 01:44:25 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 01:44:25 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:44:25 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:44:25 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:44:25.900 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:44:26 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:44:26 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000025s ======
Dec  6 01:44:26 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:44:26.396 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Dec  6 01:44:27 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:44:27 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000025s ======
Dec  6 01:44:27 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:44:27.903 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Dec  6 01:44:28 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:44:28 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:44:28 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:44:28.399 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:44:29 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  6 01:44:29 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:44:29 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000025s ======
Dec  6 01:44:29 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:44:29.906 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Dec  6 01:44:30 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 01:44:30 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  6 01:44:30 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:44:30 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.002000050s ======
Dec  6 01:44:30 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:44:30.400 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000050s
Dec  6 01:44:30 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 01:44:30 np0005548731 podman[164477]: 2025-12-06 06:44:30.958236651 +0000 UTC m=+0.122814114 container health_status a118c3becf56cfbcae208065771ebf10a65162bb7b96389971293e534823fb78 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  6 01:44:31 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:44:31 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000025s ======
Dec  6 01:44:31 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:44:31.907 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Dec  6 01:44:32 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:44:32 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 01:44:32 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:44:32.403 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 01:44:32 np0005548731 podman[165688]: 2025-12-06 06:44:32.90264093 +0000 UTC m=+0.060275143 container health_status 26c2ce9bdb83c6db5ccad7f5b2a7cb4e9354ab0235ad2f942ea42c8feda2142b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Dec  6 01:44:33 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:44:33 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:44:33 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:44:33.909 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:44:34 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:44:34 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:44:34 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:44:34.405 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:44:35 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:44:35 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000025s ======
Dec  6 01:44:35 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:44:35.911 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Dec  6 01:44:36 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:44:36 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000025s ======
Dec  6 01:44:36 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:44:36.407 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Dec  6 01:44:37 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 01:44:37 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:44:37 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000025s ======
Dec  6 01:44:37 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:44:37.914 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Dec  6 01:44:38 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:44:38 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000025s ======
Dec  6 01:44:38 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:44:38.409 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Dec  6 01:44:38 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 01:44:39 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:44:39 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:44:39 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:44:39.917 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:44:40 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 01:44:40 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:44:40 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000025s ======
Dec  6 01:44:40 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:44:40.411 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Dec  6 01:44:41 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:44:41 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000025s ======
Dec  6 01:44:41 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:44:41.919 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Dec  6 01:44:42 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 01:44:42 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:44:42 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 01:44:42 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:44:42.413 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 01:44:43 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:44:43 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:44:43 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:44:43.922 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:44:44 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:44:44 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:44:44 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:44:44.416 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:44:45 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:44:45 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:44:45 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:44:45.925 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:44:46 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:44:46 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000025s ======
Dec  6 01:44:46 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:44:46.418 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Dec  6 01:44:47 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 01:44:47 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:44:47 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:44:47 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:44:47.928 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:44:48 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:44:48 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:44:48 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:44:48.421 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:44:49 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:44:49 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:44:49 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:44:49.933 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:44:50 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:44:50 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:44:50 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:44:50.425 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:44:51 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:44:51 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:44:51 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:44:51.935 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:44:52 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 01:44:52 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:44:52 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:44:52 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:44:52.429 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:44:54 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:44:54 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:44:54 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:44:54.019 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:44:54 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:44:54 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:44:54 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:44:54.431 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:44:54 np0005548731 kernel: SELinux:  Converting 2772 SID table entries...
Dec  6 01:44:54 np0005548731 kernel: SELinux:  policy capability network_peer_controls=1
Dec  6 01:44:54 np0005548731 kernel: SELinux:  policy capability open_perms=1
Dec  6 01:44:54 np0005548731 kernel: SELinux:  policy capability extended_socket_class=1
Dec  6 01:44:54 np0005548731 kernel: SELinux:  policy capability always_check_network=0
Dec  6 01:44:54 np0005548731 kernel: SELinux:  policy capability cgroup_seclabel=1
Dec  6 01:44:54 np0005548731 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec  6 01:44:54 np0005548731 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec  6 01:44:56 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:44:56 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 01:44:56 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:44:56.021 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 01:44:56 np0005548731 ceph-mds[82074]: mds.beacon.cephfs.compute-2.tjfgow missed beacon ack from the monitors
Dec  6 01:44:56 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:44:56 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 01:44:56 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:44:56.434 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 01:44:56 np0005548731 dbus-broker-launch[745]: Noticed file-system modification, trigger reload.
Dec  6 01:44:56 np0005548731 dbus-broker-launch[775]: avc:  op=load_policy lsm=selinux seqno=14 res=1
Dec  6 01:44:56 np0005548731 dbus-broker-launch[745]: Noticed file-system modification, trigger reload.
Dec  6 01:44:57 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 01:44:58 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:44:58 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:44:58 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:44:58.024 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:44:58 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:44:58 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000025s ======
Dec  6 01:44:58 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:44:58.436 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Dec  6 01:45:00 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:45:00 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:45:00 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:45:00.027 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:45:00 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:45:00 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 01:45:00 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:45:00.438 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 01:45:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:45:00.832 143965 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 01:45:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:45:00.833 143965 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 01:45:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:45:00.833 143965 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 01:45:01 np0005548731 podman[168764]: 2025-12-06 06:45:01.697212586 +0000 UTC m=+0.147017850 container health_status a118c3becf56cfbcae208065771ebf10a65162bb7b96389971293e534823fb78 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3)
Dec  6 01:45:02 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 01:45:02 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:45:02 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:45:02 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:45:02.030 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:45:02 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:45:02 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 01:45:02 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:45:02.440 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 01:45:02 np0005548731 ceph-mgr[77818]: client.0 ms_handle_reset on v2:192.168.122.100:6800/798720280
Dec  6 01:45:03 np0005548731 podman[168807]: 2025-12-06 06:45:03.823946947 +0000 UTC m=+0.086198971 container health_status 26c2ce9bdb83c6db5ccad7f5b2a7cb4e9354ab0235ad2f942ea42c8feda2142b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Dec  6 01:45:04 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:45:04 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:45:04 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:45:04.033 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:45:04 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:45:04 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:45:04 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:45:04.443 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:45:04 np0005548731 systemd[1]: Stopping OpenSSH server daemon...
Dec  6 01:45:04 np0005548731 systemd[1]: sshd.service: Deactivated successfully.
Dec  6 01:45:04 np0005548731 systemd[1]: Stopped OpenSSH server daemon.
Dec  6 01:45:04 np0005548731 systemd[1]: sshd.service: Consumed 4.897s CPU time, read 32.0K from disk, written 52.0K to disk.
Dec  6 01:45:04 np0005548731 systemd[1]: Stopped target sshd-keygen.target.
Dec  6 01:45:04 np0005548731 systemd[1]: Stopping sshd-keygen.target...
Dec  6 01:45:04 np0005548731 systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Dec  6 01:45:04 np0005548731 systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Dec  6 01:45:04 np0005548731 systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Dec  6 01:45:04 np0005548731 systemd[1]: Reached target sshd-keygen.target.
Dec  6 01:45:04 np0005548731 systemd[1]: Starting OpenSSH server daemon...
Dec  6 01:45:04 np0005548731 systemd[1]: Started OpenSSH server daemon.
Dec  6 01:45:06 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:45:06 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 01:45:06 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:45:06.035 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 01:45:06 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:45:06 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 01:45:06 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:45:06.445 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 01:45:06 np0005548731 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec  6 01:45:06 np0005548731 systemd[1]: Starting man-db-cache-update.service...
Dec  6 01:45:06 np0005548731 systemd[1]: Reloading.
Dec  6 01:45:06 np0005548731 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  6 01:45:06 np0005548731 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  6 01:45:07 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 01:45:07 np0005548731 systemd[1]: Queuing reload/restart jobs for marked units…
Dec  6 01:45:08 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:45:08 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:45:08 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:45:08.038 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:45:08 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:45:08 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000027s ======
Dec  6 01:45:08 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:45:08.447 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec  6 01:45:10 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:45:10 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:45:10 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:45:10.040 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:45:10 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:45:10 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:45:10 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:45:10.448 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:45:12 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:45:12 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 01:45:12 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:45:12.042 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 01:45:12 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:45:12 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000025s ======
Dec  6 01:45:12 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:45:12.450 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Dec  6 01:45:13 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 01:45:14 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:45:14 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:45:14 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:45:14.046 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:45:14 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:45:14 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 01:45:14 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:45:14.452 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 01:45:16 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:45:16 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:45:16 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:45:16.050 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:45:16 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:45:16 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:45:16 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:45:16.455 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:45:18 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:45:18 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:45:18 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:45:18.054 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:45:18 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 01:45:18 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:45:18 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:45:18 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:45:18.458 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:45:19 np0005548731 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec  6 01:45:19 np0005548731 systemd[1]: Finished man-db-cache-update.service.
Dec  6 01:45:19 np0005548731 systemd[1]: man-db-cache-update.service: Consumed 12.194s CPU time.
Dec  6 01:45:19 np0005548731 systemd[1]: run-r22a230f7fe374bb1b197c3081cf2355c.service: Deactivated successfully.
Dec  6 01:45:20 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:45:20 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:45:20 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:45:20.057 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:45:20 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:45:20 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 01:45:20 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:45:20.460 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 01:45:22 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:45:22 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:45:22 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:45:22.060 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:45:22 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:45:22 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:45:22 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:45:22.464 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:45:23 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 01:45:24 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:45:24 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:45:24 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:45:24.062 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:45:24 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:45:24 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:45:24 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:45:24.467 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:45:26 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:45:26 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 01:45:26 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:45:26.065 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 01:45:26 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:45:26 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 01:45:26 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:45:26.469 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 01:45:28 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:45:28 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:45:28 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:45:28.069 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:45:28 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:45:28 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 01:45:28 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:45:28.472 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 01:45:29 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 01:45:30 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:45:30 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:45:30 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:45:30.073 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:45:30 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:45:30 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:45:30 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:45:30.475 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:45:31 np0005548731 podman[178170]: 2025-12-06 06:45:31.962390953 +0000 UTC m=+0.111049357 container health_status a118c3becf56cfbcae208065771ebf10a65162bb7b96389971293e534823fb78 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller)
Dec  6 01:45:32 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:45:32 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:45:32 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:45:32.077 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:45:32 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:45:32 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 01:45:32 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:45:32.477 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 01:45:34 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:45:34 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:45:34 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:45:34.080 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:45:34 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 01:45:34 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:45:34 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:45:34 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:45:34.479 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:45:34 np0005548731 podman[178200]: 2025-12-06 06:45:34.89588996 +0000 UTC m=+0.057761779 container health_status 26c2ce9bdb83c6db5ccad7f5b2a7cb4e9354ab0235ad2f942ea42c8feda2142b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec  6 01:45:36 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:45:36 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:45:36 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:45:36.083 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:45:36 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:45:36 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 01:45:36 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:45:36.482 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 01:45:38 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:45:38 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:45:38 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:45:38.086 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:45:38 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:45:38 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 01:45:38 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:45:38.485 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 01:45:39 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 01:45:40 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:45:40 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 01:45:40 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:45:40.088 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 01:45:40 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:45:40 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:45:40 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:45:40.488 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:45:42 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:45:42 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 01:45:42 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:45:42.091 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 01:45:42 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:45:42 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:45:42 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:45:42.491 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:45:44 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 01:45:44 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:45:44 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:45:44 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:45:44.094 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:45:44 np0005548731 ceph-mds[82074]: mds.beacon.cephfs.compute-2.tjfgow missed beacon ack from the monitors
Dec  6 01:45:44 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:45:44 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:45:44 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:45:44.493 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:45:46 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:45:46 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:45:46 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:45:46.098 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:45:46 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:45:46 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:45:46 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:45:46.497 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:45:48 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:45:48 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:45:48 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:45:48.100 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:45:48 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:45:48 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:45:48 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:45:48.499 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:45:48 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 01:45:49 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 01:45:49 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 01:45:49 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  6 01:45:49 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 01:45:49 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  6 01:45:50 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:45:50 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:45:50 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:45:50.103 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:45:50 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:45:50 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:45:50 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:45:50.502 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:45:51 np0005548731 ceph-mon[77458]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #25. Immutable memtables: 0.
Dec  6 01:45:51 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-06:45:51.842602) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec  6 01:45:51 np0005548731 ceph-mon[77458]: rocksdb: [db/flush_job.cc:856] [default] [JOB 11] Flushing memtable with next log file: 25
Dec  6 01:45:51 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765003551842698, "job": 11, "event": "flush_started", "num_memtables": 1, "num_entries": 1819, "num_deletes": 250, "total_data_size": 4516433, "memory_usage": 4581808, "flush_reason": "Manual Compaction"}
Dec  6 01:45:51 np0005548731 ceph-mon[77458]: rocksdb: [db/flush_job.cc:885] [default] [JOB 11] Level-0 flush table #26: started
Dec  6 01:45:51 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765003551861685, "cf_name": "default", "job": 11, "event": "table_file_creation", "file_number": 26, "file_size": 2959024, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 12679, "largest_seqno": 14493, "table_properties": {"data_size": 2951579, "index_size": 4452, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1925, "raw_key_size": 14096, "raw_average_key_size": 18, "raw_value_size": 2936758, "raw_average_value_size": 3833, "num_data_blocks": 201, "num_entries": 766, "num_filter_entries": 766, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765003346, "oldest_key_time": 1765003346, "file_creation_time": 1765003551, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bc01f9a1-fda8-4008-aee1-1fa5ff4371a1", "db_session_id": "CWBL8WMDM4D79BU86E9T", "orig_file_number": 26, "seqno_to_time_mapping": "N/A"}}
Dec  6 01:45:51 np0005548731 ceph-mon[77458]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 11] Flush lasted 19219 microseconds, and 7702 cpu microseconds.
Dec  6 01:45:51 np0005548731 ceph-mon[77458]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  6 01:45:51 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-06:45:51.861836) [db/flush_job.cc:967] [default] [JOB 11] Level-0 flush table #26: 2959024 bytes OK
Dec  6 01:45:51 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-06:45:51.861909) [db/memtable_list.cc:519] [default] Level-0 commit table #26 started
Dec  6 01:45:51 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-06:45:51.864367) [db/memtable_list.cc:722] [default] Level-0 commit table #26: memtable #1 done
Dec  6 01:45:51 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-06:45:51.864420) EVENT_LOG_v1 {"time_micros": 1765003551864412, "job": 11, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec  6 01:45:51 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-06:45:51.864444) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec  6 01:45:51 np0005548731 ceph-mon[77458]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 11] Try to delete WAL files size 4508358, prev total WAL file size 4508358, number of live WAL files 2.
Dec  6 01:45:51 np0005548731 ceph-mon[77458]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000022.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  6 01:45:51 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-06:45:51.866506) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6B760030' seq:72057594037927935, type:22 .. '6B7600323531' seq:0, type:0; will stop at (end)
Dec  6 01:45:51 np0005548731 ceph-mon[77458]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 12] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec  6 01:45:51 np0005548731 ceph-mon[77458]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 11 Base level 0, inputs: [26(2889KB)], [24(9734KB)]
Dec  6 01:45:51 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765003551866612, "job": 12, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [26], "files_L6": [24], "score": -1, "input_data_size": 12927206, "oldest_snapshot_seqno": -1}
Dec  6 01:45:51 np0005548731 ceph-mon[77458]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 12] Generated table #27: 4703 keys, 12379675 bytes, temperature: kUnknown
Dec  6 01:45:51 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765003551971463, "cf_name": "default", "job": 12, "event": "table_file_creation", "file_number": 27, "file_size": 12379675, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12341807, "index_size": 25022, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 11781, "raw_key_size": 116017, "raw_average_key_size": 24, "raw_value_size": 12250508, "raw_average_value_size": 2604, "num_data_blocks": 1062, "num_entries": 4703, "num_filter_entries": 4703, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765002564, "oldest_key_time": 0, "file_creation_time": 1765003551, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bc01f9a1-fda8-4008-aee1-1fa5ff4371a1", "db_session_id": "CWBL8WMDM4D79BU86E9T", "orig_file_number": 27, "seqno_to_time_mapping": "N/A"}}
Dec  6 01:45:51 np0005548731 ceph-mon[77458]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  6 01:45:51 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-06:45:51.971759) [db/compaction/compaction_job.cc:1663] [default] [JOB 12] Compacted 1@0 + 1@6 files to L6 => 12379675 bytes
Dec  6 01:45:51 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-06:45:51.973742) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 123.1 rd, 117.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.8, 9.5 +0.0 blob) out(11.8 +0.0 blob), read-write-amplify(8.6) write-amplify(4.2) OK, records in: 5216, records dropped: 513 output_compression: NoCompression
Dec  6 01:45:51 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-06:45:51.973799) EVENT_LOG_v1 {"time_micros": 1765003551973780, "job": 12, "event": "compaction_finished", "compaction_time_micros": 104982, "compaction_time_cpu_micros": 30726, "output_level": 6, "num_output_files": 1, "total_output_size": 12379675, "num_input_records": 5216, "num_output_records": 4703, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec  6 01:45:51 np0005548731 ceph-mon[77458]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000026.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  6 01:45:51 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765003551974516, "job": 12, "event": "table_file_deletion", "file_number": 26}
Dec  6 01:45:51 np0005548731 ceph-mon[77458]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000024.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  6 01:45:51 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765003551977081, "job": 12, "event": "table_file_deletion", "file_number": 24}
Dec  6 01:45:51 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-06:45:51.866344) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 01:45:51 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-06:45:51.977124) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 01:45:51 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-06:45:51.977129) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 01:45:51 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-06:45:51.977130) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 01:45:51 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-06:45:51.977131) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 01:45:51 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-06:45:51.977133) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 01:45:52 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:45:52 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 01:45:52 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:45:52.106 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 01:45:52 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:45:52 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 01:45:52 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:45:52.504 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 01:45:54 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 01:45:54 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:45:54 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:45:54 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:45:54.109 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:45:54 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:45:54 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:45:54 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:45:54.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:45:56 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:45:56 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:45:56 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:45:56.111 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:45:56 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:45:56 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:45:56 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:45:56.508 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:45:58 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:45:58 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:45:58 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:45:58.113 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:45:58 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:45:58 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 01:45:58 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:45:58.510 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 01:45:59 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 01:46:00 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:46:00 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 01:46:00 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:46:00.115 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 01:46:00 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:46:00 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:46:00 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:46:00.512 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:46:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:46:00.832 143965 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 01:46:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:46:00.834 143965 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 01:46:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:46:00.834 143965 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 01:46:02 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:46:02 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:46:02 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:46:02.118 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:46:02 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:46:02 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:46:02 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:46:02.516 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:46:02 np0005548731 podman[178417]: 2025-12-06 06:46:02.926670196 +0000 UTC m=+0.086679921 container health_status a118c3becf56cfbcae208065771ebf10a65162bb7b96389971293e534823fb78 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller)
Dec  6 01:46:04 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 01:46:04 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:46:04 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000025s ======
Dec  6 01:46:04 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:46:04.122 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Dec  6 01:46:04 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:46:04 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 01:46:04 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:46:04.519 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 01:46:05 np0005548731 podman[178445]: 2025-12-06 06:46:05.898638027 +0000 UTC m=+0.057687199 container health_status 26c2ce9bdb83c6db5ccad7f5b2a7cb4e9354ab0235ad2f942ea42c8feda2142b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  6 01:46:06 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:46:06 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:46:06 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:46:06.126 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:46:06 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:46:06 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.004000098s ======
Dec  6 01:46:06 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:46:06.522 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.004000098s
Dec  6 01:46:08 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:46:08 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 01:46:08 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:46:08.130 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 01:46:08 np0005548731 ceph-mds[82074]: mds.beacon.cephfs.compute-2.tjfgow missed beacon ack from the monitors
Dec  6 01:46:08 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:46:08 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:46:08 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:46:08.527 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:46:08 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 01:46:09 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 01:46:10 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:46:10 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 01:46:10 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:46:10.133 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 01:46:10 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 01:46:10 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:46:10 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 01:46:10 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:46:10.530 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 01:46:12 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:46:12 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:46:12 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:46:12.136 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:46:12 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:46:12 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000025s ======
Dec  6 01:46:12 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:46:12.532 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Dec  6 01:46:14 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 01:46:14 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:46:14 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:46:14 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:46:14.139 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:46:14 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:46:14 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:46:14 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:46:14.534 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:46:16 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:46:16 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:46:16 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:46:16.142 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:46:16 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:46:16 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000025s ======
Dec  6 01:46:16 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:46:16.537 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Dec  6 01:46:18 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:46:18 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:46:18 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:46:18.146 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:46:18 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:46:18 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000025s ======
Dec  6 01:46:18 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:46:18.541 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Dec  6 01:46:19 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 01:46:20 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:46:20 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 01:46:20 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:46:20.149 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 01:46:20 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:46:20 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:46:20 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:46:20.544 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:46:22 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:46:22 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:46:22 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:46:22.152 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:46:22 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:46:22 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:46:22 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:46:22.547 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:46:24 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 01:46:24 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:46:24 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:46:24 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:46:24.154 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:46:24 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:46:24 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000025s ======
Dec  6 01:46:24 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:46:24.549 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Dec  6 01:46:26 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:46:26 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:46:26 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:46:26.157 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:46:26 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:46:26 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:46:26 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:46:26.553 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:46:28 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:46:28 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:46:28 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:46:28.159 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:46:28 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:46:28 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:46:28 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:46:28.555 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:46:29 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 01:46:30 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:46:30 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:46:30 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:46:30.162 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:46:30 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:46:30 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000025s ======
Dec  6 01:46:30 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:46:30.557 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Dec  6 01:46:31 np0005548731 systemd[1]: session-48.scope: Deactivated successfully.
Dec  6 01:46:31 np0005548731 systemd[1]: session-48.scope: Consumed 2min 19.160s CPU time.
Dec  6 01:46:31 np0005548731 systemd-logind[794]: Session 48 logged out. Waiting for processes to exit.
Dec  6 01:46:31 np0005548731 systemd-logind[794]: Removed session 48.
Dec  6 01:46:32 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:46:32 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000025s ======
Dec  6 01:46:32 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:46:32.165 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Dec  6 01:46:32 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:46:32 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:46:32 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:46:32.560 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:46:33 np0005548731 podman[178629]: 2025-12-06 06:46:33.996287655 +0000 UTC m=+0.138635043 container health_status a118c3becf56cfbcae208065771ebf10a65162bb7b96389971293e534823fb78 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Dec  6 01:46:34 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 01:46:34 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:46:34 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000025s ======
Dec  6 01:46:34 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:46:34.168 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Dec  6 01:46:34 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:46:34 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:46:34 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:46:34.562 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:46:36 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:46:36 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000025s ======
Dec  6 01:46:36 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:46:36.172 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Dec  6 01:46:36 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:46:36 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:46:36 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:46:36.564 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:46:36 np0005548731 podman[178658]: 2025-12-06 06:46:36.909163173 +0000 UTC m=+0.070162974 container health_status 26c2ce9bdb83c6db5ccad7f5b2a7cb4e9354ab0235ad2f942ea42c8feda2142b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent)
Dec  6 01:46:38 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:46:38 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:46:38 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:46:38.175 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:46:38 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:46:38 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 01:46:38 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:46:38.567 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 01:46:39 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 01:46:40 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:46:40 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:46:40 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:46:40.178 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:46:40 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:46:40 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 01:46:40 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:46:40.570 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 01:46:42 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:46:42 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:46:42 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:46:42.181 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:46:42 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:46:42 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 01:46:42 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:46:42.573 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 01:46:44 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 01:46:44 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:46:44 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:46:44 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:46:44.184 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:46:44 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:46:44 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000025s ======
Dec  6 01:46:44 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:46:44.575 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Dec  6 01:46:46 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:46:46 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:46:46 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:46:46.187 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:46:46 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:46:46 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000025s ======
Dec  6 01:46:46 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:46:46.577 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Dec  6 01:46:48 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:46:48 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:46:48 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:46:48.191 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:46:48 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:46:48 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:46:48 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:46:48.580 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:46:49 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 01:46:50 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:46:50 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:46:50 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:46:50.194 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:46:50 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:46:50 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000025s ======
Dec  6 01:46:50 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:46:50.582 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Dec  6 01:46:52 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:46:52 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 01:46:52 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:46:52.197 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 01:46:52 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:46:52 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:46:52 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:46:52.585 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:46:54 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 01:46:54 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:46:54 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:46:54 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:46:54.201 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:46:54 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:46:54 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000025s ======
Dec  6 01:46:54 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:46:54.587 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Dec  6 01:46:56 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:46:56 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:46:56 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:46:56.204 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:46:56 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:46:56 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:46:56 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:46:56.590 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:46:58 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:46:58 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:46:58 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:46:58.207 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:46:58 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:46:58 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000025s ======
Dec  6 01:46:58 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:46:58.592 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Dec  6 01:46:59 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 01:47:00 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:47:00 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:47:00 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:47:00.209 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:47:00 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:47:00 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:47:00 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:47:00.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:47:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:47:00.834 143965 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 01:47:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:47:00.836 143965 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 01:47:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:47:00.836 143965 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 01:47:01 np0005548731 systemd-logind[794]: New session 49 of user zuul.
Dec  6 01:47:01 np0005548731 systemd[1]: Started Session 49 of User zuul.
Dec  6 01:47:02 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:47:02 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:47:02 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:47:02.210 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:47:02 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:47:02 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 01:47:02 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:47:02.597 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 01:47:02 np0005548731 python3.9[178871]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec  6 01:47:02 np0005548731 systemd[1]: Reloading.
Dec  6 01:47:02 np0005548731 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  6 01:47:02 np0005548731 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  6 01:47:03 np0005548731 python3.9[179062]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec  6 01:47:03 np0005548731 systemd[1]: Reloading.
Dec  6 01:47:03 np0005548731 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  6 01:47:03 np0005548731 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  6 01:47:04 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 01:47:04 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:47:04 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000025s ======
Dec  6 01:47:04 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:47:04.212 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Dec  6 01:47:04 np0005548731 podman[179101]: 2025-12-06 06:47:04.333803434 +0000 UTC m=+0.103459956 container health_status a118c3becf56cfbcae208065771ebf10a65162bb7b96389971293e534823fb78 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec  6 01:47:04 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:47:04 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000025s ======
Dec  6 01:47:04 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:47:04.600 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Dec  6 01:47:05 np0005548731 python3.9[179278]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tls.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec  6 01:47:05 np0005548731 systemd[1]: Reloading.
Dec  6 01:47:05 np0005548731 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  6 01:47:05 np0005548731 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  6 01:47:06 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:47:06 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000025s ======
Dec  6 01:47:06 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:47:06.214 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Dec  6 01:47:06 np0005548731 python3.9[179469]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=virtproxyd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec  6 01:47:06 np0005548731 systemd[1]: Reloading.
Dec  6 01:47:06 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:47:06 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:47:06 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:47:06.602 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:47:06 np0005548731 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  6 01:47:06 np0005548731 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  6 01:47:07 np0005548731 podman[179631]: 2025-12-06 06:47:07.563313045 +0000 UTC m=+0.059053035 container health_status 26c2ce9bdb83c6db5ccad7f5b2a7cb4e9354ab0235ad2f942ea42c8feda2142b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent)
Dec  6 01:47:07 np0005548731 python3.9[179675]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec  6 01:47:07 np0005548731 systemd[1]: Reloading.
Dec  6 01:47:08 np0005548731 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  6 01:47:08 np0005548731 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  6 01:47:08 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:47:08 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:47:08 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:47:08.217 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:47:08 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:47:08 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:47:08 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:47:08.605 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:47:09 np0005548731 python3.9[179913]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec  6 01:47:09 np0005548731 systemd[1]: Reloading.
Dec  6 01:47:09 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 01:47:09 np0005548731 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  6 01:47:09 np0005548731 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  6 01:47:10 np0005548731 python3.9[180241]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec  6 01:47:10 np0005548731 systemd[1]: Reloading.
Dec  6 01:47:10 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:47:10 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:47:10 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:47:10.219 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:47:10 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 01:47:10 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 01:47:10 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 01:47:10 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 01:47:10 np0005548731 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  6 01:47:10 np0005548731 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  6 01:47:10 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:47:10 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:47:10 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:47:10.607 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:47:11 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  6 01:47:11 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 01:47:11 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  6 01:47:11 np0005548731 python3.9[180432]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec  6 01:47:12 np0005548731 python3.9[180587]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec  6 01:47:12 np0005548731 systemd[1]: Reloading.
Dec  6 01:47:12 np0005548731 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  6 01:47:12 np0005548731 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  6 01:47:12 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:47:12 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 01:47:12 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:47:12.221 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 01:47:12 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:47:12 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.002000049s ======
Dec  6 01:47:12 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:47:12.610 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000049s
Dec  6 01:47:13 np0005548731 python3.9[180779]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-tls.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec  6 01:47:13 np0005548731 systemd[1]: Reloading.
Dec  6 01:47:13 np0005548731 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  6 01:47:13 np0005548731 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  6 01:47:14 np0005548731 systemd[1]: Listening on libvirt proxy daemon socket.
Dec  6 01:47:14 np0005548731 systemd[1]: Listening on libvirt proxy daemon TLS IP socket.
Dec  6 01:47:14 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 01:47:14 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:47:14 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:47:14 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:47:14.225 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:47:14 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:47:14 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:47:14 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:47:14.614 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:47:15 np0005548731 python3.9[180974]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec  6 01:47:15 np0005548731 python3.9[181129]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec  6 01:47:16 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:47:16 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:47:16 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:47:16.227 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:47:16 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:47:16 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000025s ======
Dec  6 01:47:16 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:47:16.616 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Dec  6 01:47:16 np0005548731 python3.9[181285]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec  6 01:47:17 np0005548731 python3.9[181440]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec  6 01:47:18 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:47:18 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000025s ======
Dec  6 01:47:18 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:47:18.229 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Dec  6 01:47:18 np0005548731 python3.9[181596]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec  6 01:47:18 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:47:18 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000025s ======
Dec  6 01:47:18 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:47:18.618 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Dec  6 01:47:19 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 01:47:19 np0005548731 python3.9[181751]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec  6 01:47:20 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:47:20 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000025s ======
Dec  6 01:47:20 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:47:20.232 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Dec  6 01:47:20 np0005548731 python3.9[181906]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec  6 01:47:20 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:47:20 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000025s ======
Dec  6 01:47:20 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:47:20.620 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Dec  6 01:47:21 np0005548731 python3.9[182062]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec  6 01:47:21 np0005548731 python3.9[182217]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec  6 01:47:22 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:47:22 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:47:22 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:47:22.235 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:47:22 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:47:22 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 01:47:22 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:47:22.622 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 01:47:22 np0005548731 python3.9[182423]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec  6 01:47:23 np0005548731 python3.9[182578]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec  6 01:47:24 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 01:47:24 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:47:24 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:47:24 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:47:24.238 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:47:24 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 01:47:24 np0005548731 python3.9[182734]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec  6 01:47:24 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:47:24 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:47:24 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:47:24.624 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:47:25 np0005548731 python3.9[182889]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec  6 01:47:26 np0005548731 python3.9[183044]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec  6 01:47:26 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:47:26 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:47:26 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:47:26.240 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:47:26 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:47:26 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 01:47:26 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:47:26.626 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 01:47:26 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 01:47:27 np0005548731 python3.9[183200]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/etc/tmpfiles.d/ setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Dec  6 01:47:28 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:47:28 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:47:28 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:47:28.243 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:47:28 np0005548731 python3.9[183353]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Dec  6 01:47:28 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:47:28 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:47:28 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:47:28.629 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:47:29 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 01:47:29 np0005548731 python3.9[183555]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  6 01:47:29 np0005548731 python3.9[183707]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt/private setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  6 01:47:30 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:47:30 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:47:30 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:47:30.245 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:47:30 np0005548731 python3.9[183860]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/CA setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  6 01:47:30 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:47:30 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:47:30 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:47:30.631 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:47:31 np0005548731 python3.9[184012]: ansible-ansible.builtin.file Invoked with group=qemu owner=root path=/etc/pki/qemu setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Dec  6 01:47:31 np0005548731 ceph-mon[77458]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #28. Immutable memtables: 0.
Dec  6 01:47:31 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-06:47:31.234889) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec  6 01:47:31 np0005548731 ceph-mon[77458]: rocksdb: [db/flush_job.cc:856] [default] [JOB 13] Flushing memtable with next log file: 28
Dec  6 01:47:31 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765003651234943, "job": 13, "event": "flush_started", "num_memtables": 1, "num_entries": 1163, "num_deletes": 501, "total_data_size": 1977792, "memory_usage": 2002008, "flush_reason": "Manual Compaction"}
Dec  6 01:47:31 np0005548731 ceph-mon[77458]: rocksdb: [db/flush_job.cc:885] [default] [JOB 13] Level-0 flush table #29: started
Dec  6 01:47:31 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765003651241616, "cf_name": "default", "job": 13, "event": "table_file_creation", "file_number": 29, "file_size": 853154, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14499, "largest_seqno": 15656, "table_properties": {"data_size": 848909, "index_size": 1385, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1797, "raw_key_size": 13557, "raw_average_key_size": 19, "raw_value_size": 838028, "raw_average_value_size": 1187, "num_data_blocks": 61, "num_entries": 706, "num_filter_entries": 706, "num_deletions": 501, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765003552, "oldest_key_time": 1765003552, "file_creation_time": 1765003651, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bc01f9a1-fda8-4008-aee1-1fa5ff4371a1", "db_session_id": "CWBL8WMDM4D79BU86E9T", "orig_file_number": 29, "seqno_to_time_mapping": "N/A"}}
Dec  6 01:47:31 np0005548731 ceph-mon[77458]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 13] Flush lasted 6784 microseconds, and 3152 cpu microseconds.
Dec  6 01:47:31 np0005548731 ceph-mon[77458]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  6 01:47:31 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-06:47:31.241678) [db/flush_job.cc:967] [default] [JOB 13] Level-0 flush table #29: 853154 bytes OK
Dec  6 01:47:31 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-06:47:31.241698) [db/memtable_list.cc:519] [default] Level-0 commit table #29 started
Dec  6 01:47:31 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-06:47:31.243305) [db/memtable_list.cc:722] [default] Level-0 commit table #29: memtable #1 done
Dec  6 01:47:31 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-06:47:31.243320) EVENT_LOG_v1 {"time_micros": 1765003651243315, "job": 13, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec  6 01:47:31 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-06:47:31.243338) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec  6 01:47:31 np0005548731 ceph-mon[77458]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 13] Try to delete WAL files size 1971202, prev total WAL file size 1971202, number of live WAL files 2.
Dec  6 01:47:31 np0005548731 ceph-mon[77458]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000025.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  6 01:47:31 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-06:47:31.244030) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D67727374617400323538' seq:72057594037927935, type:22 .. '6D67727374617400353039' seq:0, type:0; will stop at (end)
Dec  6 01:47:31 np0005548731 ceph-mon[77458]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 14] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec  6 01:47:31 np0005548731 ceph-mon[77458]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 13 Base level 0, inputs: [29(833KB)], [27(11MB)]
Dec  6 01:47:31 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765003651244094, "job": 14, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [29], "files_L6": [27], "score": -1, "input_data_size": 13232829, "oldest_snapshot_seqno": -1}
Dec  6 01:47:31 np0005548731 ceph-mon[77458]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 14] Generated table #30: 4419 keys, 7585036 bytes, temperature: kUnknown
Dec  6 01:47:31 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765003651323826, "cf_name": "default", "job": 14, "event": "table_file_creation", "file_number": 30, "file_size": 7585036, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7554814, "index_size": 18080, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 11077, "raw_key_size": 111497, "raw_average_key_size": 25, "raw_value_size": 7474087, "raw_average_value_size": 1691, "num_data_blocks": 749, "num_entries": 4419, "num_filter_entries": 4419, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765002564, "oldest_key_time": 0, "file_creation_time": 1765003651, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bc01f9a1-fda8-4008-aee1-1fa5ff4371a1", "db_session_id": "CWBL8WMDM4D79BU86E9T", "orig_file_number": 30, "seqno_to_time_mapping": "N/A"}}
Dec  6 01:47:31 np0005548731 ceph-mon[77458]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  6 01:47:31 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-06:47:31.324075) [db/compaction/compaction_job.cc:1663] [default] [JOB 14] Compacted 1@0 + 1@6 files to L6 => 7585036 bytes
Dec  6 01:47:31 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-06:47:31.325239) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 165.8 rd, 95.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.8, 11.8 +0.0 blob) out(7.2 +0.0 blob), read-write-amplify(24.4) write-amplify(8.9) OK, records in: 5409, records dropped: 990 output_compression: NoCompression
Dec  6 01:47:31 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-06:47:31.325261) EVENT_LOG_v1 {"time_micros": 1765003651325251, "job": 14, "event": "compaction_finished", "compaction_time_micros": 79798, "compaction_time_cpu_micros": 28282, "output_level": 6, "num_output_files": 1, "total_output_size": 7585036, "num_input_records": 5409, "num_output_records": 4419, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec  6 01:47:31 np0005548731 ceph-mon[77458]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000029.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  6 01:47:31 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765003651325582, "job": 14, "event": "table_file_deletion", "file_number": 29}
Dec  6 01:47:31 np0005548731 ceph-mon[77458]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000027.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  6 01:47:31 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765003651327828, "job": 14, "event": "table_file_deletion", "file_number": 27}
Dec  6 01:47:31 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-06:47:31.243969) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 01:47:31 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-06:47:31.327887) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 01:47:31 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-06:47:31.327893) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 01:47:31 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-06:47:31.327895) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 01:47:31 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-06:47:31.327896) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 01:47:31 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-06:47:31.327898) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 01:47:31 np0005548731 python3.9[184164]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtlogd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  6 01:47:32 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:47:32 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000025s ======
Dec  6 01:47:32 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:47:32.247 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Dec  6 01:47:32 np0005548731 python3.9[184290]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtlogd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1765003651.3039138-1636-159552929677893/.source.conf follow=False _original_basename=virtlogd.conf checksum=d7a72ae92c2c205983b029473e05a6aa4c58ec24 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 01:47:32 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:47:32 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000025s ======
Dec  6 01:47:32 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:47:32.633 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Dec  6 01:47:33 np0005548731 python3.9[184442]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtnodedevd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  6 01:47:34 np0005548731 python3.9[184567]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtnodedevd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1765003652.7506535-1636-161467844153360/.source.conf follow=False _original_basename=virtnodedevd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 01:47:34 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 01:47:34 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:47:34 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 01:47:34 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:47:34.250 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 01:47:34 np0005548731 podman[184692]: 2025-12-06 06:47:34.56852315 +0000 UTC m=+0.116459408 container health_status a118c3becf56cfbcae208065771ebf10a65162bb7b96389971293e534823fb78 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Dec  6 01:47:34 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:47:34 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000025s ======
Dec  6 01:47:34 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:47:34.636 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Dec  6 01:47:34 np0005548731 python3.9[184733]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtproxyd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  6 01:47:35 np0005548731 python3.9[184869]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtproxyd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1765003654.1714575-1636-127744254651586/.source.conf follow=False _original_basename=virtproxyd.conf checksum=28bc484b7c9988e03de49d4fcc0a088ea975f716 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 01:47:35 np0005548731 python3.9[185021]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtqemud.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  6 01:47:36 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:47:36 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000025s ======
Dec  6 01:47:36 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:47:36.253 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Dec  6 01:47:36 np0005548731 python3.9[185147]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtqemud.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1765003655.384605-1636-237724253499193/.source.conf follow=False _original_basename=virtqemud.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 01:47:36 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:47:36 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:47:36 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:47:36.638 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:47:37 np0005548731 python3.9[185299]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/qemu.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  6 01:47:37 np0005548731 python3.9[185424]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/qemu.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1765003656.5643137-1636-223229723567869/.source.conf follow=False _original_basename=qemu.conf.j2 checksum=c44de21af13c90603565570f09ff60c6a41ed8df backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 01:47:37 np0005548731 podman[185425]: 2025-12-06 06:47:37.657298847 +0000 UTC m=+0.050855727 container health_status 26c2ce9bdb83c6db5ccad7f5b2a7cb4e9354ab0235ad2f942ea42c8feda2142b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Dec  6 01:47:38 np0005548731 python3.9[185595]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtsecretd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  6 01:47:38 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:47:38 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:47:38 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:47:38.256 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:47:38 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:47:38 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:47:38 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:47:38.641 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:47:38 np0005548731 python3.9[185721]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtsecretd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1765003657.737689-1636-219963400967968/.source.conf follow=False _original_basename=virtsecretd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 01:47:39 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 01:47:39 np0005548731 python3.9[185873]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/auth.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  6 01:47:39 np0005548731 python3.9[185996]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/auth.conf group=libvirt mode=0600 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1765003658.8827808-1636-206811342588781/.source.conf follow=False _original_basename=auth.conf checksum=a94cd818c374cec2c8425b70d2e0e2f41b743ae4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 01:47:40 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:47:40 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:47:40 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:47:40.258 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:47:40 np0005548731 python3.9[186149]: ansible-ansible.legacy.stat Invoked with path=/etc/sasl2/libvirt.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  6 01:47:40 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:47:40 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:47:40 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:47:40.643 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:47:41 np0005548731 python3.9[186274]: ansible-ansible.legacy.copy Invoked with dest=/etc/sasl2/libvirt.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1765003660.0314574-1636-91238675558066/.source.conf follow=False _original_basename=sasl_libvirt.conf checksum=652e4d404bf79253d06956b8e9847c9364979d4a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 01:47:41 np0005548731 python3.9[186426]: ansible-ansible.legacy.command Invoked with cmd=saslpasswd2 -f /etc/libvirt/passwd.db -p -a libvirt -u openstack migration stdin=12345678 _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None
Dec  6 01:47:42 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:47:42 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:47:42 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:47:42.260 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:47:42 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:47:42 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:47:42 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:47:42.646 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:47:43 np0005548731 python3.9[186580]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 01:47:43 np0005548731 python3.9[186732]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 01:47:44 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 01:47:44 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:47:44 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:47:44 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:47:44.262 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:47:44 np0005548731 python3.9[186885]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 01:47:44 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:47:44 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:47:44 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:47:44.648 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:47:45 np0005548731 python3.9[187037]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 01:47:46 np0005548731 python3.9[187189]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 01:47:46 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:47:46 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:47:46 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:47:46.266 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:47:46 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:47:46 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:47:46 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:47:46.651 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:47:46 np0005548731 python3.9[187342]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 01:47:47 np0005548731 python3.9[187494]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 01:47:47 np0005548731 python3.9[187646]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 01:47:48 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:47:48 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:47:48 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:47:48.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:47:48 np0005548731 python3.9[187799]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 01:47:48 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:47:48 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000025s ======
Dec  6 01:47:48 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:47:48.653 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Dec  6 01:47:49 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 01:47:49 np0005548731 python3.9[187952]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 01:47:49 np0005548731 python3.9[188153]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 01:47:50 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:47:50 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:47:50 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:47:50.269 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:47:50 np0005548731 python3.9[188306]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 01:47:50 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:47:50 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000025s ======
Dec  6 01:47:50 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:47:50.654 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Dec  6 01:47:51 np0005548731 python3.9[188458]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 01:47:51 np0005548731 python3.9[188610]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 01:47:52 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:47:52 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:47:52 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:47:52.273 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:47:52 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:47:52 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000025s ======
Dec  6 01:47:52 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:47:52.656 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Dec  6 01:47:54 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 01:47:54 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:47:54 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 01:47:54 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:47:54.275 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 01:47:54 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:47:54 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:47:54 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:47:54.658 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:47:54 np0005548731 python3.9[188764]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  6 01:47:55 np0005548731 python3.9[188887]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765003674.4136724-2299-49485146482648/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 01:47:56 np0005548731 python3.9[189039]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  6 01:47:56 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:47:56 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:47:56 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:47:56.278 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:47:56 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:47:56 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:47:56 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:47:56.661 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:47:56 np0005548731 python3.9[189163]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765003675.621696-2299-19379307601174/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 01:47:57 np0005548731 python3.9[189315]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  6 01:47:57 np0005548731 python3.9[189438]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765003676.837205-2299-57943163654587/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 01:47:58 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:47:58 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 01:47:58 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:47:58.280 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 01:47:58 np0005548731 python3.9[189591]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  6 01:47:58 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:47:58 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:47:58 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:47:58.663 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:47:59 np0005548731 python3.9[189714]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765003678.0088236-2299-66729274235924/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 01:47:59 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 01:47:59 np0005548731 python3.9[189866]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  6 01:48:00 np0005548731 python3.9[189989]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765003679.178165-2299-227399584377073/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 01:48:00 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:48:00 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:48:00 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:48:00.283 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:48:00 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:48:00 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:48:00 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:48:00.666 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:48:00 np0005548731 python3.9[190142]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  6 01:48:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:48:00.834 143965 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 01:48:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:48:00.835 143965 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 01:48:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:48:00.835 143965 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 01:48:01 np0005548731 python3.9[190265]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765003680.3220465-2299-24805460753490/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 01:48:01 np0005548731 python3.9[190417]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  6 01:48:01 np0005548731 auditd[702]: Audit daemon rotating log files
Dec  6 01:48:02 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:48:02 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:48:02 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:48:02.285 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:48:02 np0005548731 python3.9[190541]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765003681.4502876-2299-177459601392638/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 01:48:02 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:48:02 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:48:02 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:48:02.668 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:48:03 np0005548731 python3.9[190693]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  6 01:48:03 np0005548731 python3.9[190816]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765003682.7038028-2299-70040369600378/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 01:48:04 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 01:48:04 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:48:04 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:48:04 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:48:04.287 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:48:04 np0005548731 python3.9[190969]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  6 01:48:04 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:48:04 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 01:48:04 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:48:04.670 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 01:48:04 np0005548731 podman[191038]: 2025-12-06 06:48:04.932310997 +0000 UTC m=+0.097938880 container health_status a118c3becf56cfbcae208065771ebf10a65162bb7b96389971293e534823fb78 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, managed_by=edpm_ansible)
Dec  6 01:48:05 np0005548731 python3.9[191118]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765003683.9551432-2299-55228780336168/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 01:48:05 np0005548731 python3.9[191270]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  6 01:48:06 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:48:06 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:48:06 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:48:06.290 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:48:06 np0005548731 python3.9[191394]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765003685.321157-2299-98613981009193/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 01:48:06 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:48:06 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:48:06 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:48:06.673 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:48:07 np0005548731 python3.9[191546]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  6 01:48:07 np0005548731 python3.9[191669]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765003686.5587294-2299-114233439737737/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 01:48:07 np0005548731 podman[191734]: 2025-12-06 06:48:07.928800848 +0000 UTC m=+0.087330645 container health_status 26c2ce9bdb83c6db5ccad7f5b2a7cb4e9354ab0235ad2f942ea42c8feda2142b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Dec  6 01:48:08 np0005548731 python3.9[191843]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  6 01:48:08 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:48:08 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:48:08 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:48:08.294 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:48:08 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:48:08 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000025s ======
Dec  6 01:48:08 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:48:08.676 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Dec  6 01:48:08 np0005548731 python3.9[191966]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765003687.781426-2299-163820943380831/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 01:48:09 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 01:48:09 np0005548731 python3.9[192168]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  6 01:48:09 np0005548731 python3.9[192291]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765003689.0126355-2299-5846219090007/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 01:48:10 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:48:10 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:48:10 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:48:10.296 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:48:10 np0005548731 python3.9[192444]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  6 01:48:10 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:48:10 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:48:10 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:48:10.679 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:48:11 np0005548731 python3.9[192567]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765003690.1496465-2299-31501733625164/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 01:48:11 np0005548731 python3.9[192717]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail#012ls -lRZ /run/libvirt | grep -E ':container_\S+_t'#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  6 01:48:12 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:48:12 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000025s ======
Dec  6 01:48:12 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:48:12.298 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Dec  6 01:48:12 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:48:12 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 01:48:12 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:48:12.681 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 01:48:13 np0005548731 python3.9[192873]: ansible-ansible.posix.seboolean Invoked with name=os_enable_vtpm persistent=True state=True ignore_selinux_state=False
Dec  6 01:48:14 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 01:48:14 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:48:14 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:48:14 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:48:14.302 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:48:14 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:48:14 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:48:14 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:48:14.684 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:48:15 np0005548731 dbus-broker-launch[775]: avc:  op=load_policy lsm=selinux seqno=15 res=1
Dec  6 01:48:15 np0005548731 python3.9[193030]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/servercert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 01:48:16 np0005548731 python3.9[193182]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/serverkey.pem group=root mode=0600 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 01:48:16 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:48:16 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 01:48:16 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:48:16.305 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 01:48:16 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:48:16 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:48:16 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:48:16.688 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:48:16 np0005548731 python3.9[193335]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/clientcert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 01:48:17 np0005548731 python3.9[193487]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/clientkey.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 01:48:18 np0005548731 python3.9[193639]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/CA/cacert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 01:48:18 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:48:18 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000025s ======
Dec  6 01:48:18 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:48:18.309 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Dec  6 01:48:18 np0005548731 python3.9[193792]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 01:48:18 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:48:18 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000025s ======
Dec  6 01:48:18 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:48:18.690 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Dec  6 01:48:19 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 01:48:19 np0005548731 python3.9[193944]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 01:48:19 np0005548731 python3.9[194096]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 01:48:20 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:48:20 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:48:20 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:48:20.312 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:48:20 np0005548731 python3.9[194249]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 01:48:20 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:48:20 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 01:48:20 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:48:20.692 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 01:48:21 np0005548731 python3.9[194401]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/ca-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 01:48:22 np0005548731 python3.9[194553]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtlogd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  6 01:48:22 np0005548731 systemd[1]: Reloading.
Dec  6 01:48:22 np0005548731 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  6 01:48:22 np0005548731 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  6 01:48:22 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:48:22 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:48:22 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:48:22.315 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:48:22 np0005548731 systemd[1]: Starting libvirt logging daemon socket...
Dec  6 01:48:22 np0005548731 systemd[1]: Listening on libvirt logging daemon socket.
Dec  6 01:48:22 np0005548731 systemd[1]: Starting libvirt logging daemon admin socket...
Dec  6 01:48:22 np0005548731 systemd[1]: Listening on libvirt logging daemon admin socket.
Dec  6 01:48:22 np0005548731 systemd[1]: Starting libvirt logging daemon...
Dec  6 01:48:22 np0005548731 systemd[1]: Started libvirt logging daemon.
Dec  6 01:48:22 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:48:22 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:48:22 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:48:22.694 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:48:23 np0005548731 python3.9[194869]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtnodedevd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  6 01:48:23 np0005548731 systemd[1]: Reloading.
Dec  6 01:48:23 np0005548731 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  6 01:48:23 np0005548731 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  6 01:48:23 np0005548731 systemd[1]: Starting libvirt nodedev daemon socket...
Dec  6 01:48:23 np0005548731 systemd[1]: Listening on libvirt nodedev daemon socket.
Dec  6 01:48:23 np0005548731 systemd[1]: Starting libvirt nodedev daemon admin socket...
Dec  6 01:48:23 np0005548731 systemd[1]: Starting libvirt nodedev daemon read-only socket...
Dec  6 01:48:23 np0005548731 systemd[1]: Listening on libvirt nodedev daemon admin socket.
Dec  6 01:48:23 np0005548731 systemd[1]: Listening on libvirt nodedev daemon read-only socket.
Dec  6 01:48:23 np0005548731 systemd[1]: Starting libvirt nodedev daemon...
Dec  6 01:48:23 np0005548731 systemd[1]: Started libvirt nodedev daemon.
Dec  6 01:48:23 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 01:48:23 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 01:48:24 np0005548731 systemd[1]: Starting SETroubleshoot daemon for processing new SELinux denial logs...
Dec  6 01:48:24 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 01:48:24 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:48:24 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000025s ======
Dec  6 01:48:24 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:48:24.317 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Dec  6 01:48:24 np0005548731 systemd[1]: Started SETroubleshoot daemon for processing new SELinux denial logs.
Dec  6 01:48:24 np0005548731 python3.9[195094]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtproxyd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  6 01:48:24 np0005548731 systemd[1]: Reloading.
Dec  6 01:48:24 np0005548731 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  6 01:48:24 np0005548731 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  6 01:48:24 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:48:24 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:48:24 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:48:24.697 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:48:24 np0005548731 systemd[1]: Starting libvirt proxy daemon admin socket...
Dec  6 01:48:24 np0005548731 systemd[1]: Starting libvirt proxy daemon read-only socket...
Dec  6 01:48:24 np0005548731 systemd[1]: Listening on libvirt proxy daemon admin socket.
Dec  6 01:48:24 np0005548731 systemd[1]: Listening on libvirt proxy daemon read-only socket.
Dec  6 01:48:24 np0005548731 systemd[1]: Starting libvirt proxy daemon...
Dec  6 01:48:24 np0005548731 systemd[1]: Started libvirt proxy daemon.
Dec  6 01:48:24 np0005548731 systemd[1]: Created slice Slice /system/dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged.
Dec  6 01:48:24 np0005548731 systemd[1]: Started dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service.
Dec  6 01:48:25 np0005548731 python3.9[195314]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtqemud.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  6 01:48:25 np0005548731 systemd[1]: Reloading.
Dec  6 01:48:25 np0005548731 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  6 01:48:25 np0005548731 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  6 01:48:25 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  6 01:48:25 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 01:48:25 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  6 01:48:25 np0005548731 setroubleshoot[195064]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability. For complete SELinux messages run: sealert -l 729519a0-b54b-43ff-aa1b-39dd718d3aad
Dec  6 01:48:25 np0005548731 setroubleshoot[195064]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability.#012#012*****  Plugin dac_override (91.4 confidence) suggests   **********************#012#012If you want to help identify if domain needs this access or you have a file with the wrong permissions on your system#012Then turn on full auditing to get path information about the offending file and generate the error again.#012Do#012#012Turn on full auditing#012# auditctl -w /etc/shadow -p w#012Try to recreate AVC. Then execute#012# ausearch -m avc -ts recent#012If you see PATH record check ownership/permissions on file, and fix it,#012otherwise report as a bugzilla.#012#012*****  Plugin catchall (9.59 confidence) suggests   **************************#012#012If you believe that virtlogd should have the dac_read_search capability by default.#012Then you should report this as a bug.#012You can generate a local policy module to allow this access.#012Do#012allow this access for now by executing:#012# ausearch -c 'virtlogd' --raw | audit2allow -M my-virtlogd#012# semodule -X 300 -i my-virtlogd.pp#012
Dec  6 01:48:25 np0005548731 setroubleshoot[195064]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability. For complete SELinux messages run: sealert -l 729519a0-b54b-43ff-aa1b-39dd718d3aad
Dec  6 01:48:25 np0005548731 setroubleshoot[195064]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability.#012#012*****  Plugin dac_override (91.4 confidence) suggests   **********************#012#012If you want to help identify if domain needs this access or you have a file with the wrong permissions on your system#012Then turn on full auditing to get path information about the offending file and generate the error again.#012Do#012#012Turn on full auditing#012# auditctl -w /etc/shadow -p w#012Try to recreate AVC. Then execute#012# ausearch -m avc -ts recent#012If you see PATH record check ownership/permissions on file, and fix it,#012otherwise report as a bugzilla.#012#012*****  Plugin catchall (9.59 confidence) suggests   **************************#012#012If you believe that virtlogd should have the dac_read_search capability by default.#012Then you should report this as a bug.#012You can generate a local policy module to allow this access.#012Do#012allow this access for now by executing:#012# ausearch -c 'virtlogd' --raw | audit2allow -M my-virtlogd#012# semodule -X 300 -i my-virtlogd.pp#012
Dec  6 01:48:25 np0005548731 systemd[1]: Listening on libvirt locking daemon socket.
Dec  6 01:48:25 np0005548731 systemd[1]: Starting libvirt QEMU daemon socket...
Dec  6 01:48:25 np0005548731 systemd[1]: Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Dec  6 01:48:25 np0005548731 systemd[1]: Starting Virtual Machine and Container Registration Service...
Dec  6 01:48:25 np0005548731 systemd[1]: Listening on libvirt QEMU daemon socket.
Dec  6 01:48:25 np0005548731 systemd[1]: Starting libvirt QEMU daemon admin socket...
Dec  6 01:48:25 np0005548731 systemd[1]: Starting libvirt QEMU daemon read-only socket...
Dec  6 01:48:25 np0005548731 systemd[1]: Listening on libvirt QEMU daemon admin socket.
Dec  6 01:48:25 np0005548731 systemd[1]: Listening on libvirt QEMU daemon read-only socket.
Dec  6 01:48:25 np0005548731 systemd[1]: Started Virtual Machine and Container Registration Service.
Dec  6 01:48:25 np0005548731 systemd[1]: Starting libvirt QEMU daemon...
Dec  6 01:48:25 np0005548731 systemd[1]: Started libvirt QEMU daemon.
Dec  6 01:48:26 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:48:26 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:48:26 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:48:26.320 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:48:26 np0005548731 python3.9[195531]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtsecretd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  6 01:48:26 np0005548731 systemd[1]: Reloading.
Dec  6 01:48:26 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:48:26 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:48:26 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:48:26.700 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:48:26 np0005548731 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  6 01:48:26 np0005548731 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  6 01:48:27 np0005548731 systemd[1]: Starting libvirt secret daemon socket...
Dec  6 01:48:27 np0005548731 systemd[1]: Listening on libvirt secret daemon socket.
Dec  6 01:48:27 np0005548731 systemd[1]: Starting libvirt secret daemon admin socket...
Dec  6 01:48:27 np0005548731 systemd[1]: Starting libvirt secret daemon read-only socket...
Dec  6 01:48:27 np0005548731 systemd[1]: Listening on libvirt secret daemon read-only socket.
Dec  6 01:48:27 np0005548731 systemd[1]: Listening on libvirt secret daemon admin socket.
Dec  6 01:48:27 np0005548731 systemd[1]: Starting libvirt secret daemon...
Dec  6 01:48:27 np0005548731 systemd[1]: Started libvirt secret daemon.
Dec  6 01:48:28 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:48:28 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:48:28 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:48:28.323 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:48:28 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:48:28 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:48:28 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:48:28.704 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:48:29 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 01:48:30 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:48:30 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:48:30 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:48:30.326 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:48:30 np0005548731 python3.9[195795]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 01:48:30 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:48:30 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 01:48:30 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:48:30.705 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 01:48:31 np0005548731 python3.9[195947]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.conf'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Dec  6 01:48:31 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 01:48:31 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 01:48:31 np0005548731 python3.9[196144]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail;#012echo ceph#012awk -F '=' '/fsid/ {print $2}' /var/lib/openstack/config/ceph/ceph.conf | xargs#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  6 01:48:32 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:48:32 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:48:32 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:48:32.328 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:48:32 np0005548731 python3.9[196304]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.keyring'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Dec  6 01:48:32 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:48:32 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000025s ======
Dec  6 01:48:32 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:48:32.708 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Dec  6 01:48:33 np0005548731 python3.9[196454]: ansible-ansible.legacy.stat Invoked with path=/tmp/secret.xml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  6 01:48:34 np0005548731 python3.9[196575]: ansible-ansible.legacy.copy Invoked with dest=/tmp/secret.xml mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1765003713.108741-3373-190244289777490/.source.xml follow=False _original_basename=secret.xml.j2 checksum=cbc4650861b6b585bb80bed115fd7c888a642f49 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 01:48:34 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 01:48:34 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:48:34 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:48:34 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:48:34.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:48:34 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:48:34 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 01:48:34 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:48:34.710 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 01:48:34 np0005548731 python3.9[196728]: ansible-ansible.legacy.command Invoked with _raw_params=virsh secret-undefine 40a1bae4-cf76-5610-8dab-c75116dfe0bb#012virsh secret-define --file /tmp/secret.xml#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  6 01:48:35 np0005548731 systemd[1]: dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service: Deactivated successfully.
Dec  6 01:48:35 np0005548731 systemd[1]: dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service: Consumed 1.015s CPU time.
Dec  6 01:48:35 np0005548731 systemd[1]: setroubleshootd.service: Deactivated successfully.
Dec  6 01:48:35 np0005548731 podman[196817]: 2025-12-06 06:48:35.944181334 +0000 UTC m=+0.097746086 container health_status a118c3becf56cfbcae208065771ebf10a65162bb7b96389971293e534823fb78 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  6 01:48:36 np0005548731 python3.9[196916]: ansible-ansible.builtin.file Invoked with path=/tmp/secret.xml state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 01:48:36 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:48:36 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:48:36 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:48:36.333 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:48:36 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:48:36 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:48:36 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:48:36.713 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:48:38 np0005548731 podman[197353]: 2025-12-06 06:48:38.220258715 +0000 UTC m=+0.068329489 container health_status 26c2ce9bdb83c6db5ccad7f5b2a7cb4e9354ab0235ad2f942ea42c8feda2142b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, managed_by=edpm_ansible)
Dec  6 01:48:38 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:48:38 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:48:38 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:48:38.335 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:48:38 np0005548731 python3.9[197400]: ansible-ansible.legacy.copy Invoked with dest=/etc/ceph/ceph.conf group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/config/ceph/ceph.conf backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 01:48:38 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:48:38 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 01:48:38 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:48:38.715 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 01:48:39 np0005548731 python3.9[197552]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/libvirt.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  6 01:48:39 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 01:48:39 np0005548731 python3.9[197675]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/libvirt.yaml mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1765003718.7512314-3538-30590240685396/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=5ca83b1310a74c5e48c4c3d4640e1cb8fdac1061 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 01:48:40 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:48:40 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:48:40 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:48:40.338 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:48:40 np0005548731 python3.9[197828]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 01:48:40 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:48:40 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:48:40 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:48:40.717 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:48:41 np0005548731 python3.9[197980]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  6 01:48:41 np0005548731 python3.9[198058]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 01:48:42 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:48:42 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 01:48:42 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:48:42.340 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 01:48:42 np0005548731 python3.9[198211]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  6 01:48:42 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:48:42 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:48:42 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:48:42.720 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:48:42 np0005548731 python3.9[198289]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.za3j_z7x recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 01:48:43 np0005548731 python3.9[198441]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  6 01:48:44 np0005548731 python3.9[198519]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 01:48:44 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 01:48:44 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:48:44 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000025s ======
Dec  6 01:48:44 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:48:44.343 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Dec  6 01:48:44 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:48:44 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:48:44 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:48:44.723 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:48:46 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:48:46 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 01:48:46 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:48:46.346 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 01:48:46 np0005548731 python3.9[198673]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  6 01:48:46 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:48:46 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:48:46 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:48:46.725 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:48:47 np0005548731 python3[198826]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Dec  6 01:48:47 np0005548731 python3.9[198978]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  6 01:48:48 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:48:48 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:48:48 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:48:48.349 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:48:48 np0005548731 python3.9[199057]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 01:48:48 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:48:48 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:48:48 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:48:48.727 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:48:49 np0005548731 python3.9[199209]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  6 01:48:49 np0005548731 python3.9[199337]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-update-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-update-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 01:48:50 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:48:50 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 01:48:50 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:48:50.352 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 01:48:50 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:48:50 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 01:48:50 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:48:50.730 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 01:48:52 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 01:48:52 np0005548731 python3.9[199490]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  6 01:48:52 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:48:52 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000025s ======
Dec  6 01:48:52 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:48:52.355 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Dec  6 01:48:52 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:48:52 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 01:48:52 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:48:52.732 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 01:48:52 np0005548731 python3.9[199569]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 01:48:53 np0005548731 python3.9[199721]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  6 01:48:53 np0005548731 python3.9[199799]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 01:48:54 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:48:54 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:48:54 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:48:54.357 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:48:54 np0005548731 python3.9[199952]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  6 01:48:54 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:48:54 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:48:54 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:48:54.734 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:48:55 np0005548731 python3.9[200077]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765003734.1057863-3913-65598150725341/.source.nft follow=False _original_basename=ruleset.j2 checksum=ac3ce8ce2d33fa5fe0a79b0c811c97734ce43fa5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 01:48:56 np0005548731 python3.9[200229]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 01:48:56 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:48:56 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 01:48:56 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:48:56.360 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 01:48:56 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:48:56 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:48:56 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:48:56.737 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:48:57 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 01:48:57 np0005548731 python3.9[200382]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  6 01:48:58 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:48:58 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:48:58 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:48:58.363 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:48:58 np0005548731 python3.9[200538]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 01:48:58 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:48:58 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:48:58 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:48:58.739 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:48:59 np0005548731 python3.9[200690]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  6 01:49:00 np0005548731 python3.9[200843]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  6 01:49:00 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:49:00 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:49:00 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:49:00.365 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:49:00 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:49:00 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:49:00 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:49:00.741 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:49:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:49:00.835 143965 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 01:49:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:49:00.836 143965 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 01:49:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:49:00.836 143965 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 01:49:02 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 01:49:02 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:49:02 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:49:02 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:49:02.367 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:49:02 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:49:02 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:49:02 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:49:02.743 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:49:02 np0005548731 python3.9[200999]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  6 01:49:03 np0005548731 python3.9[201154]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 01:49:04 np0005548731 python3.9[201307]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  6 01:49:04 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:49:04 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:49:04 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:49:04.369 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:49:04 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:49:04 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:49:04 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:49:04.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:49:04 np0005548731 python3.9[201430]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765003743.814821-4129-212066216840013/.source.target follow=False _original_basename=edpm_libvirt.target checksum=13035a1aa0f414c677b14be9a5a363b6623d393c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 01:49:05 np0005548731 python3.9[201582]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt_guests.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  6 01:49:06 np0005548731 python3.9[201705]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt_guests.service mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765003745.145837-4174-208148580409493/.source.service follow=False _original_basename=edpm_libvirt_guests.service checksum=db83430a42fc2ccfd6ed8b56ebf04f3dff9cd0cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 01:49:06 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:49:06 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:49:06 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:49:06.371 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:49:06 np0005548731 podman[201830]: 2025-12-06 06:49:06.7349652 +0000 UTC m=+0.085559391 container health_status a118c3becf56cfbcae208065771ebf10a65162bb7b96389971293e534823fb78 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3)
Dec  6 01:49:06 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:49:06 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:49:06 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:49:06.747 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:49:06 np0005548731 python3.9[201877]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virt-guest-shutdown.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  6 01:49:07 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 01:49:07 np0005548731 python3.9[202008]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virt-guest-shutdown.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765003746.4198236-4219-98645344214012/.source.target follow=False _original_basename=virt-guest-shutdown.target checksum=49ca149619c596cbba877418629d2cf8f7b0f5cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 01:49:08 np0005548731 python3.9[202160]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt.target state=restarted daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  6 01:49:08 np0005548731 systemd[1]: Reloading.
Dec  6 01:49:08 np0005548731 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  6 01:49:08 np0005548731 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  6 01:49:08 np0005548731 podman[202163]: 2025-12-06 06:49:08.346379813 +0000 UTC m=+0.084423589 container health_status 26c2ce9bdb83c6db5ccad7f5b2a7cb4e9354ab0235ad2f942ea42c8feda2142b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent)
Dec  6 01:49:08 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:49:08 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:49:08 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:49:08.374 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:49:08 np0005548731 systemd[1]: Reached target edpm_libvirt.target.
Dec  6 01:49:08 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:49:08 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 01:49:08 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:49:08.749 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 01:49:10 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:49:10 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:49:10 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:49:10.376 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:49:10 np0005548731 python3.9[202421]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt_guests daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Dec  6 01:49:10 np0005548731 systemd[1]: Reloading.
Dec  6 01:49:10 np0005548731 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  6 01:49:10 np0005548731 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  6 01:49:10 np0005548731 systemd[1]: Reloading.
Dec  6 01:49:10 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:49:10 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000025s ======
Dec  6 01:49:10 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:49:10.752 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Dec  6 01:49:10 np0005548731 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  6 01:49:10 np0005548731 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  6 01:49:11 np0005548731 systemd[1]: session-49.scope: Deactivated successfully.
Dec  6 01:49:11 np0005548731 systemd[1]: session-49.scope: Consumed 1min 25.827s CPU time.
Dec  6 01:49:11 np0005548731 systemd-logind[794]: Session 49 logged out. Waiting for processes to exit.
Dec  6 01:49:11 np0005548731 systemd-logind[794]: Removed session 49.
Dec  6 01:49:12 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 01:49:12 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:49:12 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000025s ======
Dec  6 01:49:12 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:49:12.378 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Dec  6 01:49:12 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:49:12 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:49:12 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:49:12.755 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:49:14 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:49:14 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 01:49:14 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:49:14.382 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 01:49:14 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:49:14 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:49:14 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:49:14.758 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:49:16 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:49:16 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:49:16 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:49:16.385 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:49:16 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:49:16 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:49:16 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:49:16.760 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:49:17 np0005548731 systemd-logind[794]: New session 50 of user zuul.
Dec  6 01:49:17 np0005548731 systemd[1]: Started Session 50 of User zuul.
Dec  6 01:49:17 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 01:49:18 np0005548731 python3.9[202674]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  6 01:49:18 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:49:18 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000025s ======
Dec  6 01:49:18 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:49:18.387 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Dec  6 01:49:18 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:49:18 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:49:18 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:49:18.763 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:49:19 np0005548731 python3.9[202829]: ansible-ansible.builtin.service_facts Invoked
Dec  6 01:49:19 np0005548731 network[202846]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec  6 01:49:19 np0005548731 network[202847]: 'network-scripts' will be removed from distribution in near future.
Dec  6 01:49:19 np0005548731 network[202848]: It is advised to switch to 'NetworkManager' instead for network management.
Dec  6 01:49:20 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:49:20 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.003000075s ======
Dec  6 01:49:20 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:49:20.390 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.003000075s
Dec  6 01:49:20 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:49:20 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:49:20 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:49:20.765 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:49:22 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 01:49:22 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:49:22 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:49:22 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:49:22.395 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:49:22 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:49:22 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:49:22 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:49:22.768 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:49:24 np0005548731 python3.9[203123]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec  6 01:49:24 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:49:24 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000025s ======
Dec  6 01:49:24 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:49:24.398 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Dec  6 01:49:24 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:49:24 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 01:49:24 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:49:24.770 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 01:49:24 np0005548731 ceph-mon[77458]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec  6 01:49:24 np0005548731 ceph-mon[77458]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1200.0 total, 600.0 interval#012Cumulative writes: 2785 writes, 16K keys, 2785 commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.03 MB/s#012Cumulative WAL: 2785 writes, 2785 syncs, 1.00 writes per sync, written: 0.03 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1143 writes, 5410 keys, 1143 commit groups, 1.0 writes per commit group, ingest: 11.95 MB, 0.02 MB/s#012Interval WAL: 1144 writes, 1144 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     11.3      1.66              0.05         7    0.237       0      0       0.0       0.0#012  L6      1/0    7.23 MB   0.0      0.1     0.0      0.0       0.1      0.0       0.0   2.9     61.4     50.7      1.06              0.16         6    0.177     28K   3286       0.0       0.0#012 Sum      1/0    7.23 MB   0.0      0.1     0.0      0.0       0.1      0.0       0.0   3.9     24.0     26.7      2.72              0.21        13    0.209     28K   3286       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   4.8    110.9    108.2      0.33              0.11         6    0.056     15K   2033       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.1     0.0      0.0       0.1      0.0       0.0   0.0     61.4     50.7      1.06              0.16         6    0.177     28K   3286       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     11.3      1.65              0.05         6    0.276       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.6      0.00              0.00         1    0.003       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.0 total, 600.0 interval#012Flush(GB): cumulative 0.018, interval 0.007#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.07 GB write, 0.06 MB/s write, 0.06 GB read, 0.05 MB/s read, 2.7 seconds#012Interval compaction: 0.04 GB write, 0.06 MB/s write, 0.04 GB read, 0.06 MB/s read, 0.3 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x5619171151f0#2 capacity: 304.00 MB usage: 2.28 MB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 0 last_secs: 5.1e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(106,2.03 MB,0.666689%) FilterBlock(13,87.86 KB,0.0282237%) IndexBlock(13,173.55 KB,0.0557498%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Dec  6 01:49:25 np0005548731 python3.9[203207]: ansible-ansible.legacy.dnf Invoked with name=['iscsi-initiator-utils'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec  6 01:49:26 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:49:26 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:49:26 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:49:26.401 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:49:26 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:49:26 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000025s ======
Dec  6 01:49:26 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:49:26.772 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Dec  6 01:49:27 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 01:49:28 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:49:28 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:49:28 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:49:28.403 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:49:28 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:49:28 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000025s ======
Dec  6 01:49:28 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:49:28.774 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Dec  6 01:49:30 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:49:30 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:49:30 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:49:30.406 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:49:30 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:49:30 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 01:49:30 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:49:30.777 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 01:49:31 np0005548731 python3.9[203413]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated/iscsid/etc/iscsi follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  6 01:49:32 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 01:49:32 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:49:32 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:49:32 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:49:32.409 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:49:32 np0005548731 python3.9[203686]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/restorecon -nvr /etc/iscsi /var/lib/iscsi _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  6 01:49:32 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:49:32 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000025s ======
Dec  6 01:49:32 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:49:32.779 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Dec  6 01:49:34 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:49:34 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:49:34 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:49:34.412 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:49:34 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:49:34 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000025s ======
Dec  6 01:49:34 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:49:34.781 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Dec  6 01:49:35 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Dec  6 01:49:35 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Dec  6 01:49:35 np0005548731 python3.9[203960]: ansible-ansible.builtin.stat Invoked with path=/etc/iscsi/.initiator_reset follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  6 01:49:36 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 01:49:36 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 01:49:36 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  6 01:49:36 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 01:49:36 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  6 01:49:36 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:49:36 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:49:36 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:49:36.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:49:36 np0005548731 python3.9[204125]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/iscsi-iname _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  6 01:49:36 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:49:36 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:49:36 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:49:36.783 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:49:36 np0005548731 podman[204164]: 2025-12-06 06:49:36.962534423 +0000 UTC m=+0.113840778 container health_status a118c3becf56cfbcae208065771ebf10a65162bb7b96389971293e534823fb78 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Dec  6 01:49:37 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 01:49:37 np0005548731 python3.9[204304]: ansible-ansible.legacy.stat Invoked with path=/etc/iscsi/initiatorname.iscsi follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  6 01:49:37 np0005548731 python3.9[204427]: ansible-ansible.legacy.copy Invoked with dest=/etc/iscsi/initiatorname.iscsi mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765003776.834452-254-239183167591302/.source.iscsi _original_basename=.tr1hg2wp follow=False checksum=d866848d5b4813c0001b028d03539d19f3321d67 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 01:49:38 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:49:38 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:49:38 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:49:38.417 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:49:38 np0005548731 podman[204552]: 2025-12-06 06:49:38.64184146 +0000 UTC m=+0.046205956 container health_status 26c2ce9bdb83c6db5ccad7f5b2a7cb4e9354ab0235ad2f942ea42c8feda2142b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  6 01:49:38 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:49:38 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:49:38 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:49:38.785 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:49:38 np0005548731 python3.9[204599]: ansible-ansible.builtin.file Invoked with mode=0600 path=/etc/iscsi/.initiator_reset state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 01:49:39 np0005548731 python3.9[204751]: ansible-ansible.builtin.lineinfile Invoked with insertafter=^#node.session.auth.chap.algs line=node.session.auth.chap_algs = SHA3-256,SHA256,SHA1,MD5 path=/etc/iscsi/iscsid.conf regexp=^node.session.auth.chap_algs state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 01:49:40 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:49:40 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:49:40 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:49:40.420 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:49:40 np0005548731 python3.9[204904]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=iscsid.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  6 01:49:40 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:49:40 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:49:40 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:49:40.787 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:49:40 np0005548731 systemd[1]: Listening on Open-iSCSI iscsid Socket.
Dec  6 01:49:41 np0005548731 python3.9[205060]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=iscsid state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  6 01:49:41 np0005548731 systemd[1]: Reloading.
Dec  6 01:49:41 np0005548731 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  6 01:49:41 np0005548731 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  6 01:49:42 np0005548731 systemd[1]: One time configuration for iscsi.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/iscsi/initiatorname.iscsi).
Dec  6 01:49:42 np0005548731 systemd[1]: Starting Open-iSCSI...
Dec  6 01:49:42 np0005548731 kernel: Loading iSCSI transport class v2.0-870.
Dec  6 01:49:42 np0005548731 systemd[1]: Started Open-iSCSI.
Dec  6 01:49:42 np0005548731 systemd[1]: Starting Logout off all iSCSI sessions on shutdown...
Dec  6 01:49:42 np0005548731 systemd[1]: Finished Logout off all iSCSI sessions on shutdown.
Dec  6 01:49:42 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 01:49:42 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:49:42 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:49:42 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:49:42.423 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:49:42 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:49:42 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:49:42 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:49:42.789 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:49:43 np0005548731 python3.9[205261]: ansible-ansible.builtin.service_facts Invoked
Dec  6 01:49:43 np0005548731 network[205278]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec  6 01:49:43 np0005548731 network[205279]: 'network-scripts' will be removed from distribution in near future.
Dec  6 01:49:43 np0005548731 network[205280]: It is advised to switch to 'NetworkManager' instead for network management.
Dec  6 01:49:44 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:49:44 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:49:44 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:49:44.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:49:44 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:49:44 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:49:44 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:49:44.791 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:49:45 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 01:49:45 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 01:49:46 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:49:46 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:49:46 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:49:46.428 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:49:46 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:49:46 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:49:46 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:49:46.793 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:49:47 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 01:49:48 np0005548731 python3.9[205604]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Dec  6 01:49:48 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:49:48 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 01:49:48 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:49:48.431 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 01:49:48 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:49:48 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:49:48 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:49:48.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:49:49 np0005548731 python3.9[205757]: ansible-community.general.modprobe Invoked with name=dm-multipath state=present params= persistent=disabled
Dec  6 01:49:49 np0005548731 python3.9[205913]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/dm-multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  6 01:49:50 np0005548731 python3.9[206087]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/dm-multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765003789.2831655-485-83904744378172/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=065061c60917e4f67cecc70d12ce55e42f9d0b3f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 01:49:50 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:49:50 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000025s ======
Dec  6 01:49:50 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:49:50.434 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Dec  6 01:49:50 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:49:50 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000025s ======
Dec  6 01:49:50 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:49:50.797 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Dec  6 01:49:51 np0005548731 python3.9[206239]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=dm-multipath  mode=0644 state=present path=/etc/modules encoding=utf-8 backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 01:49:52 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 01:49:52 np0005548731 python3.9[206392]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  6 01:49:52 np0005548731 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Dec  6 01:49:52 np0005548731 systemd[1]: Stopped Load Kernel Modules.
Dec  6 01:49:52 np0005548731 systemd[1]: Stopping Load Kernel Modules...
Dec  6 01:49:52 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:49:52 np0005548731 systemd[1]: Starting Load Kernel Modules...
Dec  6 01:49:52 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000025s ======
Dec  6 01:49:52 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:49:52.436 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Dec  6 01:49:52 np0005548731 systemd[1]: Finished Load Kernel Modules.
Dec  6 01:49:52 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:49:52 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 01:49:52 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:49:52.799 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 01:49:53 np0005548731 python3.9[206548]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  6 01:49:54 np0005548731 python3.9[206700]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  6 01:49:54 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:49:54 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000025s ======
Dec  6 01:49:54 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:49:54.439 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Dec  6 01:49:54 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:49:54 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:49:54 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:49:54.802 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:49:54 np0005548731 python3.9[206853]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  6 01:49:55 np0005548731 python3.9[207005]: ansible-ansible.legacy.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  6 01:49:56 np0005548731 python3.9[207129]: ansible-ansible.legacy.copy Invoked with dest=/etc/multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765003795.2072983-659-169371035051110/.source.conf _original_basename=multipath.conf follow=False checksum=bf02ab264d3d648048a81f3bacec8bc58db93162 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 01:49:56 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:49:56 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:49:56 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:49:56.442 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:49:56 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:49:56 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:49:56 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:49:56.805 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:49:57 np0005548731 python3.9[207282]: ansible-ansible.legacy.command Invoked with _raw_params=grep -q '^blacklist\s*{' /etc/multipath.conf _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  6 01:49:57 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 01:49:57 np0005548731 python3.9[207435]: ansible-ansible.builtin.lineinfile Invoked with line=blacklist { path=/etc/multipath.conf state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 01:49:58 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:49:58 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:49:58 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:49:58.445 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:49:58 np0005548731 python3.9[207588]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^(blacklist {) replace=\1\n} backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 01:49:58 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:49:58 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000025s ======
Dec  6 01:49:58 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:49:58.807 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Dec  6 01:49:59 np0005548731 python3.9[207740]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^blacklist\s*{\n[\s]+devnode \"\.\*\" replace=blacklist { backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 01:50:00 np0005548731 ceph-mon[77458]: overall HEALTH_OK
Dec  6 01:50:00 np0005548731 python3.9[207892]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        find_multipaths yes path=/etc/multipath.conf regexp=^\s+find_multipaths state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 01:50:00 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:50:00 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:50:00 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:50:00.448 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:50:00 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:50:00 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:50:00 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:50:00.811 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:50:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:50:00.836 143965 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 01:50:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:50:00.837 143965 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 01:50:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:50:00.837 143965 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 01:50:00 np0005548731 python3.9[208045]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        recheck_wwid yes path=/etc/multipath.conf regexp=^\s+recheck_wwid state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 01:50:01 np0005548731 python3.9[208197]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        skip_kpartx yes path=/etc/multipath.conf regexp=^\s+skip_kpartx state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 01:50:02 np0005548731 python3.9[208349]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        user_friendly_names no path=/etc/multipath.conf regexp=^\s+user_friendly_names state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 01:50:02 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 01:50:02 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:50:02 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:50:02 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:50:02.451 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:50:02 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:50:02 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000025s ======
Dec  6 01:50:02 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:50:02.813 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Dec  6 01:50:02 np0005548731 python3.9[208502]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  6 01:50:03 np0005548731 python3.9[208656]: ansible-ansible.builtin.file Invoked with mode=0644 path=/etc/multipath/.multipath_restart_required state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 01:50:04 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:50:04 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:50:04 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:50:04.454 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:50:04 np0005548731 python3.9[208809]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  6 01:50:04 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:50:04 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:50:04 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:50:04.815 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:50:05 np0005548731 python3.9[208961]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  6 01:50:05 np0005548731 python3.9[209039]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  6 01:50:06 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:50:06 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:50:06 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:50:06.457 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:50:06 np0005548731 python3.9[209192]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  6 01:50:06 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:50:06 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 01:50:06 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:50:06.817 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 01:50:06 np0005548731 python3.9[209270]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  6 01:50:07 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 01:50:07 np0005548731 podman[209394]: 2025-12-06 06:50:07.717285388 +0000 UTC m=+0.086017230 container health_status a118c3becf56cfbcae208065771ebf10a65162bb7b96389971293e534823fb78 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec  6 01:50:07 np0005548731 python3.9[209440]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 01:50:08 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:50:08 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:50:08 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:50:08.460 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:50:08 np0005548731 python3.9[209601]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  6 01:50:08 np0005548731 podman[209651]: 2025-12-06 06:50:08.794991069 +0000 UTC m=+0.054088627 container health_status 26c2ce9bdb83c6db5ccad7f5b2a7cb4e9354ab0235ad2f942ea42c8feda2142b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Dec  6 01:50:08 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:50:08 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:50:08 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:50:08.819 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:50:09 np0005548731 python3.9[209698]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 01:50:09 np0005548731 python3.9[209850]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  6 01:50:10 np0005548731 python3.9[209977]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 01:50:10 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:50:10 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:50:10 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:50:10.463 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:50:10 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:50:10 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:50:10 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:50:10.822 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:50:11 np0005548731 python3.9[210131]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  6 01:50:11 np0005548731 systemd[1]: Reloading.
Dec  6 01:50:11 np0005548731 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  6 01:50:11 np0005548731 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  6 01:50:12 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 01:50:12 np0005548731 python3.9[210321]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  6 01:50:12 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:50:12 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:50:12 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:50:12.465 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:50:12 np0005548731 python3.9[210399]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 01:50:12 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:50:12 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:50:12 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:50:12.824 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:50:13 np0005548731 python3.9[210551]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  6 01:50:14 np0005548731 python3.9[210630]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 01:50:14 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:50:14 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:50:14 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:50:14.468 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:50:14 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:50:14 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:50:14 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:50:14.826 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:50:15 np0005548731 python3.9[210782]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  6 01:50:15 np0005548731 systemd[1]: Reloading.
Dec  6 01:50:15 np0005548731 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  6 01:50:15 np0005548731 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  6 01:50:15 np0005548731 systemd[1]: Starting Create netns directory...
Dec  6 01:50:15 np0005548731 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Dec  6 01:50:15 np0005548731 systemd[1]: netns-placeholder.service: Deactivated successfully.
Dec  6 01:50:15 np0005548731 systemd[1]: Finished Create netns directory.
Dec  6 01:50:16 np0005548731 python3.9[210976]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  6 01:50:16 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:50:16 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:50:16 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:50:16.470 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:50:16 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:50:16 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:50:16 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:50:16.828 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:50:16 np0005548731 python3.9[211128]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/multipathd/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  6 01:50:17 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 01:50:17 np0005548731 python3.9[211251]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/multipathd/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765003816.5858388-1280-86134582645542/.source _original_basename=healthcheck follow=False checksum=af9d0c1c8f3cb0e30ce9609be9d5b01924d0d23f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec  6 01:50:18 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:50:18 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000025s ======
Dec  6 01:50:18 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:50:18.471 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Dec  6 01:50:18 np0005548731 python3.9[211404]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  6 01:50:18 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:50:18 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:50:18 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:50:18.830 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:50:19 np0005548731 python3.9[211556]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/multipathd.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  6 01:50:20 np0005548731 python3.9[211679]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/multipathd.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1765003819.0458329-1354-243951544433514/.source.json _original_basename=.k5_5fbsw follow=False checksum=3f7959ee8ac9757398adcc451c3b416c957d7c14 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 01:50:20 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:50:20 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:50:20 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:50:20.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:50:20 np0005548731 python3.9[211832]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/multipathd state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 01:50:20 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:50:20 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:50:20 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:50:20.832 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:50:22 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 01:50:22 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:50:22 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:50:22 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:50:22.475 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:50:22 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:50:22 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:50:22 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:50:22.834 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:50:23 np0005548731 python3.9[212260]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/multipathd config_pattern=*.json debug=False
Dec  6 01:50:23 np0005548731 systemd[1]: virtnodedevd.service: Deactivated successfully.
Dec  6 01:50:24 np0005548731 python3.9[212413]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Dec  6 01:50:24 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:50:24 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:50:24 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:50:24.478 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:50:24 np0005548731 systemd[1]: virtproxyd.service: Deactivated successfully.
Dec  6 01:50:24 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:50:24 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:50:24 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:50:24.836 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:50:24 np0005548731 python3.9[212566]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Dec  6 01:50:26 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:50:26 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 01:50:26 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:50:26.481 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 01:50:26 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:50:26 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:50:26 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:50:26.838 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:50:27 np0005548731 python3[212746]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/multipathd config_id=multipathd config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Dec  6 01:50:27 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 01:50:28 np0005548731 podman[212759]: 2025-12-06 06:50:28.355924381 +0000 UTC m=+1.259202600 image pull 9af6aa52ee187025bc25565b66d3eefb486acac26f9281e33f4cce76a40d21f7 quay.io/podified-antelope-centos9/openstack-multipathd:current-podified
Dec  6 01:50:28 np0005548731 podman[212816]: 2025-12-06 06:50:28.483278152 +0000 UTC m=+0.044449433 container create ae4fd08cdec31d6ae2a6b91ffff7deb6ca5a8375cb52ee4b685cc6f25796824e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, org.label-schema.vendor=CentOS, config_id=multipathd, io.buildah.version=1.41.3)
Dec  6 01:50:28 np0005548731 podman[212816]: 2025-12-06 06:50:28.459577308 +0000 UTC m=+0.020748609 image pull 9af6aa52ee187025bc25565b66d3eefb486acac26f9281e33f4cce76a40d21f7 quay.io/podified-antelope-centos9/openstack-multipathd:current-podified
Dec  6 01:50:28 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:50:28 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:50:28 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:50:28.484 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:50:28 np0005548731 python3[212746]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name multipathd --conmon-pidfile /run/multipathd.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --healthcheck-command /openstack/healthcheck --label config_id=multipathd --label container_name=multipathd --label managed_by=edpm_ansible --label config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro --volume /dev:/dev --volume /run/udev:/run/udev --volume /sys:/sys --volume /lib/modules:/lib/modules:ro --volume /etc/iscsi:/etc/iscsi:ro --volume /var/lib/iscsi:/var/lib/iscsi --volume /etc/multipath:/etc/multipath:z --volume /etc/multipath.conf:/etc/multipath.conf:ro --volume /var/lib/openstack/healthchecks/multipathd:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-multipathd:current-podified
Dec  6 01:50:28 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:50:28 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:50:28 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:50:28.840 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:50:29 np0005548731 python3.9[213005]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  6 01:50:30 np0005548731 python3.9[213159]: ansible-file Invoked with path=/etc/systemd/system/edpm_multipathd.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 01:50:30 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:50:30 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000025s ======
Dec  6 01:50:30 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:50:30.486 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Dec  6 01:50:30 np0005548731 python3.9[213286]: ansible-stat Invoked with path=/etc/systemd/system/edpm_multipathd_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  6 01:50:30 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:50:30 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:50:30 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:50:30.842 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:50:31 np0005548731 ceph-osd[80147]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec  6 01:50:31 np0005548731 ceph-osd[80147]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1202.4 total, 600.0 interval#012Cumulative writes: 5404 writes, 22K keys, 5404 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s#012Cumulative WAL: 5404 writes, 921 syncs, 5.87 writes per sync, written: 0.02 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 463 writes, 719 keys, 463 commit groups, 1.0 writes per commit group, ingest: 0.23 MB, 0.00 MB/s#012Interval WAL: 463 writes, 225 syncs, 2.06 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.42              0.00         1    0.419       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.42              0.00         1    0.419       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.42              0.00         1    0.419       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1202.4 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.4 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x5612cf175350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 8e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1202.4 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x5612cf175350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 8e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1202.4 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowd
Dec  6 01:50:31 np0005548731 python3.9[213437]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1765003830.602841-1618-212281271283572/source dest=/etc/systemd/system/edpm_multipathd.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 01:50:31 np0005548731 python3.9[213513]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec  6 01:50:31 np0005548731 systemd[1]: Reloading.
Dec  6 01:50:31 np0005548731 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  6 01:50:31 np0005548731 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  6 01:50:32 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 01:50:32 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:50:32 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:50:32 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:50:32.489 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:50:32 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:50:32 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000025s ======
Dec  6 01:50:32 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:50:32.843 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Dec  6 01:50:32 np0005548731 python3.9[213624]: ansible-systemd Invoked with state=restarted name=edpm_multipathd.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  6 01:50:32 np0005548731 systemd[1]: Reloading.
Dec  6 01:50:32 np0005548731 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  6 01:50:32 np0005548731 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  6 01:50:33 np0005548731 systemd[1]: Starting multipathd container...
Dec  6 01:50:33 np0005548731 systemd[1]: Started libcrun container.
Dec  6 01:50:33 np0005548731 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d67fcecb2b8f576fbf59b55c0e5d1b53aafcca09d39006306e94ef3031752ce1/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Dec  6 01:50:33 np0005548731 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d67fcecb2b8f576fbf59b55c0e5d1b53aafcca09d39006306e94ef3031752ce1/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Dec  6 01:50:33 np0005548731 systemd[1]: Started /usr/bin/podman healthcheck run ae4fd08cdec31d6ae2a6b91ffff7deb6ca5a8375cb52ee4b685cc6f25796824e.
Dec  6 01:50:33 np0005548731 podman[213664]: 2025-12-06 06:50:33.379937961 +0000 UTC m=+0.132970854 container init ae4fd08cdec31d6ae2a6b91ffff7deb6ca5a8375cb52ee4b685cc6f25796824e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.vendor=CentOS, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=multipathd)
Dec  6 01:50:33 np0005548731 multipathd[213680]: + sudo -E kolla_set_configs
Dec  6 01:50:33 np0005548731 podman[213664]: 2025-12-06 06:50:33.408817086 +0000 UTC m=+0.161849959 container start ae4fd08cdec31d6ae2a6b91ffff7deb6ca5a8375cb52ee4b685cc6f25796824e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec  6 01:50:33 np0005548731 podman[213664]: multipathd
Dec  6 01:50:33 np0005548731 systemd[1]: Started multipathd container.
Dec  6 01:50:33 np0005548731 multipathd[213680]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Dec  6 01:50:33 np0005548731 multipathd[213680]: INFO:__main__:Validating config file
Dec  6 01:50:33 np0005548731 multipathd[213680]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Dec  6 01:50:33 np0005548731 multipathd[213680]: INFO:__main__:Writing out command to execute
Dec  6 01:50:33 np0005548731 multipathd[213680]: ++ cat /run_command
Dec  6 01:50:33 np0005548731 multipathd[213680]: + CMD='/usr/sbin/multipathd -d'
Dec  6 01:50:33 np0005548731 multipathd[213680]: + ARGS=
Dec  6 01:50:33 np0005548731 multipathd[213680]: + sudo kolla_copy_cacerts
Dec  6 01:50:33 np0005548731 multipathd[213680]: + [[ ! -n '' ]]
Dec  6 01:50:33 np0005548731 multipathd[213680]: + . kolla_extend_start
Dec  6 01:50:33 np0005548731 multipathd[213680]: + echo 'Running command: '\''/usr/sbin/multipathd -d'\'''
Dec  6 01:50:33 np0005548731 multipathd[213680]: Running command: '/usr/sbin/multipathd -d'
Dec  6 01:50:33 np0005548731 multipathd[213680]: + umask 0022
Dec  6 01:50:33 np0005548731 multipathd[213680]: + exec /usr/sbin/multipathd -d
Dec  6 01:50:33 np0005548731 podman[213687]: 2025-12-06 06:50:33.500917199 +0000 UTC m=+0.080568500 container health_status ae4fd08cdec31d6ae2a6b91ffff7deb6ca5a8375cb52ee4b685cc6f25796824e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=starting, health_failing_streak=1, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=multipathd, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec  6 01:50:33 np0005548731 multipathd[213680]: 4096.165281 | --------start up--------
Dec  6 01:50:33 np0005548731 multipathd[213680]: 4096.165298 | read /etc/multipath.conf
Dec  6 01:50:33 np0005548731 multipathd[213680]: 4096.169198 | path checkers start up
Dec  6 01:50:33 np0005548731 systemd[1]: ae4fd08cdec31d6ae2a6b91ffff7deb6ca5a8375cb52ee4b685cc6f25796824e-300dee8b3e658413.service: Main process exited, code=exited, status=1/FAILURE
Dec  6 01:50:33 np0005548731 systemd[1]: ae4fd08cdec31d6ae2a6b91ffff7deb6ca5a8375cb52ee4b685cc6f25796824e-300dee8b3e658413.service: Failed with result 'exit-code'.
Dec  6 01:50:34 np0005548731 python3.9[213871]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath/.multipath_restart_required follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  6 01:50:34 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:50:34 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:50:34 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:50:34.491 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:50:34 np0005548731 python3.9[214026]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps --filter volume=/etc/multipath.conf --format {{.Names}} _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  6 01:50:34 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:50:34 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:50:34 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:50:34.845 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:50:35 np0005548731 python3.9[214192]: ansible-ansible.builtin.systemd Invoked with name=edpm_multipathd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  6 01:50:35 np0005548731 systemd[1]: Stopping multipathd container...
Dec  6 01:50:35 np0005548731 multipathd[213680]: 4098.511190 | exit (signal)
Dec  6 01:50:35 np0005548731 multipathd[213680]: 4098.511266 | --------shut down-------
Dec  6 01:50:35 np0005548731 systemd[1]: libpod-ae4fd08cdec31d6ae2a6b91ffff7deb6ca5a8375cb52ee4b685cc6f25796824e.scope: Deactivated successfully.
Dec  6 01:50:35 np0005548731 podman[214196]: 2025-12-06 06:50:35.878659379 +0000 UTC m=+0.072146077 container died ae4fd08cdec31d6ae2a6b91ffff7deb6ca5a8375cb52ee4b685cc6f25796824e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec  6 01:50:35 np0005548731 systemd[1]: ae4fd08cdec31d6ae2a6b91ffff7deb6ca5a8375cb52ee4b685cc6f25796824e-300dee8b3e658413.timer: Deactivated successfully.
Dec  6 01:50:35 np0005548731 systemd[1]: Stopped /usr/bin/podman healthcheck run ae4fd08cdec31d6ae2a6b91ffff7deb6ca5a8375cb52ee4b685cc6f25796824e.
Dec  6 01:50:35 np0005548731 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-ae4fd08cdec31d6ae2a6b91ffff7deb6ca5a8375cb52ee4b685cc6f25796824e-userdata-shm.mount: Deactivated successfully.
Dec  6 01:50:35 np0005548731 systemd[1]: var-lib-containers-storage-overlay-d67fcecb2b8f576fbf59b55c0e5d1b53aafcca09d39006306e94ef3031752ce1-merged.mount: Deactivated successfully.
Dec  6 01:50:36 np0005548731 podman[214196]: 2025-12-06 06:50:36.140947132 +0000 UTC m=+0.334433830 container cleanup ae4fd08cdec31d6ae2a6b91ffff7deb6ca5a8375cb52ee4b685cc6f25796824e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Dec  6 01:50:36 np0005548731 podman[214196]: multipathd
Dec  6 01:50:36 np0005548731 podman[214228]: multipathd
Dec  6 01:50:36 np0005548731 systemd[1]: edpm_multipathd.service: Deactivated successfully.
Dec  6 01:50:36 np0005548731 systemd[1]: Stopped multipathd container.
Dec  6 01:50:36 np0005548731 systemd[1]: Starting multipathd container...
Dec  6 01:50:36 np0005548731 systemd[1]: Started libcrun container.
Dec  6 01:50:36 np0005548731 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d67fcecb2b8f576fbf59b55c0e5d1b53aafcca09d39006306e94ef3031752ce1/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Dec  6 01:50:36 np0005548731 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d67fcecb2b8f576fbf59b55c0e5d1b53aafcca09d39006306e94ef3031752ce1/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Dec  6 01:50:36 np0005548731 systemd[1]: Started /usr/bin/podman healthcheck run ae4fd08cdec31d6ae2a6b91ffff7deb6ca5a8375cb52ee4b685cc6f25796824e.
Dec  6 01:50:36 np0005548731 podman[214241]: 2025-12-06 06:50:36.351376836 +0000 UTC m=+0.109452026 container init ae4fd08cdec31d6ae2a6b91ffff7deb6ca5a8375cb52ee4b685cc6f25796824e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, config_id=multipathd, org.label-schema.build-date=20251125, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=multipathd, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Dec  6 01:50:36 np0005548731 multipathd[214257]: + sudo -E kolla_set_configs
Dec  6 01:50:36 np0005548731 podman[214241]: 2025-12-06 06:50:36.378707652 +0000 UTC m=+0.136782822 container start ae4fd08cdec31d6ae2a6b91ffff7deb6ca5a8375cb52ee4b685cc6f25796824e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec  6 01:50:36 np0005548731 podman[214241]: multipathd
Dec  6 01:50:36 np0005548731 systemd[1]: Started multipathd container.
Dec  6 01:50:36 np0005548731 multipathd[214257]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Dec  6 01:50:36 np0005548731 multipathd[214257]: INFO:__main__:Validating config file
Dec  6 01:50:36 np0005548731 multipathd[214257]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Dec  6 01:50:36 np0005548731 multipathd[214257]: INFO:__main__:Writing out command to execute
Dec  6 01:50:36 np0005548731 multipathd[214257]: ++ cat /run_command
Dec  6 01:50:36 np0005548731 multipathd[214257]: + CMD='/usr/sbin/multipathd -d'
Dec  6 01:50:36 np0005548731 multipathd[214257]: + ARGS=
Dec  6 01:50:36 np0005548731 multipathd[214257]: + sudo kolla_copy_cacerts
Dec  6 01:50:36 np0005548731 podman[214264]: 2025-12-06 06:50:36.44780544 +0000 UTC m=+0.058040458 container health_status ae4fd08cdec31d6ae2a6b91ffff7deb6ca5a8375cb52ee4b685cc6f25796824e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=starting, health_failing_streak=1, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Dec  6 01:50:36 np0005548731 systemd[1]: ae4fd08cdec31d6ae2a6b91ffff7deb6ca5a8375cb52ee4b685cc6f25796824e-772e7d53403d1d03.service: Main process exited, code=exited, status=1/FAILURE
Dec  6 01:50:36 np0005548731 systemd[1]: ae4fd08cdec31d6ae2a6b91ffff7deb6ca5a8375cb52ee4b685cc6f25796824e-772e7d53403d1d03.service: Failed with result 'exit-code'.
Dec  6 01:50:36 np0005548731 multipathd[214257]: + [[ ! -n '' ]]
Dec  6 01:50:36 np0005548731 multipathd[214257]: + . kolla_extend_start
Dec  6 01:50:36 np0005548731 multipathd[214257]: Running command: '/usr/sbin/multipathd -d'
Dec  6 01:50:36 np0005548731 multipathd[214257]: + echo 'Running command: '\''/usr/sbin/multipathd -d'\'''
Dec  6 01:50:36 np0005548731 multipathd[214257]: + umask 0022
Dec  6 01:50:36 np0005548731 multipathd[214257]: + exec /usr/sbin/multipathd -d
Dec  6 01:50:36 np0005548731 multipathd[214257]: 4099.136890 | --------start up--------
Dec  6 01:50:36 np0005548731 multipathd[214257]: 4099.136905 | read /etc/multipath.conf
Dec  6 01:50:36 np0005548731 multipathd[214257]: 4099.142071 | path checkers start up
Dec  6 01:50:36 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:50:36 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:50:36 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:50:36.493 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:50:36 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:50:36 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:50:36 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:50:36.847 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:50:37 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 01:50:37 np0005548731 python3.9[214449]: ansible-ansible.builtin.file Invoked with path=/etc/multipath/.multipath_restart_required state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 01:50:37 np0005548731 systemd[1]: virtsecretd.service: Deactivated successfully.
Dec  6 01:50:37 np0005548731 systemd[1]: virtqemud.service: Deactivated successfully.
Dec  6 01:50:37 np0005548731 podman[214477]: 2025-12-06 06:50:37.918830988 +0000 UTC m=+0.083527306 container health_status a118c3becf56cfbcae208065771ebf10a65162bb7b96389971293e534823fb78 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2)
Dec  6 01:50:38 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:50:38 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:50:38 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:50:38.496 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:50:38 np0005548731 python3.9[214632]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Dec  6 01:50:38 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:50:38 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:50:38 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:50:38.850 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:50:38 np0005548731 podman[214680]: 2025-12-06 06:50:38.891664251 +0000 UTC m=+0.053268376 container health_status 26c2ce9bdb83c6db5ccad7f5b2a7cb4e9354ab0235ad2f942ea42c8feda2142b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Dec  6 01:50:39 np0005548731 python3.9[214802]: ansible-community.general.modprobe Invoked with name=nvme-fabrics state=present params= persistent=disabled
Dec  6 01:50:39 np0005548731 kernel: Key type psk registered
Dec  6 01:50:40 np0005548731 python3.9[214965]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/nvme-fabrics.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  6 01:50:40 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:50:40 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:50:40 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:50:40.499 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:50:40 np0005548731 python3.9[215089]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/nvme-fabrics.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765003839.775029-1859-79223837187256/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=783c778f0c68cc414f35486f234cbb1cf3f9bbff backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 01:50:40 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:50:40 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000025s ======
Dec  6 01:50:40 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:50:40.852 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Dec  6 01:50:41 np0005548731 python3.9[215241]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=nvme-fabrics  mode=0644 state=present path=/etc/modules encoding=utf-8 backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 01:50:42 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 01:50:42 np0005548731 python3.9[215394]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  6 01:50:42 np0005548731 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Dec  6 01:50:42 np0005548731 systemd[1]: Stopped Load Kernel Modules.
Dec  6 01:50:42 np0005548731 systemd[1]: Stopping Load Kernel Modules...
Dec  6 01:50:42 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:50:42 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:50:42 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:50:42.501 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:50:42 np0005548731 systemd[1]: Starting Load Kernel Modules...
Dec  6 01:50:42 np0005548731 systemd[1]: Finished Load Kernel Modules.
Dec  6 01:50:42 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:50:42 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000025s ======
Dec  6 01:50:42 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:50:42.855 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Dec  6 01:50:43 np0005548731 python3.9[215550]: ansible-ansible.legacy.dnf Invoked with name=['nvme-cli'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec  6 01:50:44 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:50:44 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:50:44 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:50:44.504 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:50:44 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:50:44 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:50:44 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:50:44.858 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:50:45 np0005548731 systemd[1]: Reloading.
Dec  6 01:50:45 np0005548731 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  6 01:50:45 np0005548731 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  6 01:50:45 np0005548731 systemd[1]: Reloading.
Dec  6 01:50:45 np0005548731 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  6 01:50:45 np0005548731 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  6 01:50:46 np0005548731 systemd-logind[794]: Watching system buttons on /dev/input/event0 (Power Button)
Dec  6 01:50:46 np0005548731 systemd-logind[794]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Dec  6 01:50:46 np0005548731 lvm[215781]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec  6 01:50:46 np0005548731 lvm[215781]: VG ceph_vg0 finished
Dec  6 01:50:46 np0005548731 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec  6 01:50:46 np0005548731 systemd[1]: Starting man-db-cache-update.service...
Dec  6 01:50:46 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:50:46 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:50:46 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:50:46.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:50:46 np0005548731 systemd[1]: Reloading.
Dec  6 01:50:46 np0005548731 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  6 01:50:46 np0005548731 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  6 01:50:46 np0005548731 podman[215863]: 2025-12-06 06:50:46.714766628 +0000 UTC m=+0.091759436 container exec 541e7276f6038dd900483907d1613bdfcbf99ea3714cf53a920a7189986fab54 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-mon-compute-2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  6 01:50:46 np0005548731 podman[215863]: 2025-12-06 06:50:46.819506023 +0000 UTC m=+0.196498801 container exec_died 541e7276f6038dd900483907d1613bdfcbf99ea3714cf53a920a7189986fab54 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-mon-compute-2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Dec  6 01:50:46 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:50:46 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:50:46 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:50:46.860 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:50:46 np0005548731 systemd[1]: Queuing reload/restart jobs for marked units…
Dec  6 01:50:47 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 01:50:47 np0005548731 podman[216752]: 2025-12-06 06:50:47.428353745 +0000 UTC m=+0.061008734 container exec 42755d5ddab81b86d84b40c9464a8af637e2d196c1a098a61593efb6a9875167 (image=quay.io/ceph/haproxy:2.3, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-haproxy-rgw-default-compute-2-nyemkw)
Dec  6 01:50:47 np0005548731 podman[216752]: 2025-12-06 06:50:47.437379954 +0000 UTC m=+0.070034923 container exec_died 42755d5ddab81b86d84b40c9464a8af637e2d196c1a098a61593efb6a9875167 (image=quay.io/ceph/haproxy:2.3, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-haproxy-rgw-default-compute-2-nyemkw)
Dec  6 01:50:47 np0005548731 podman[217117]: 2025-12-06 06:50:47.628288661 +0000 UTC m=+0.046998776 container exec 60f905acca99db805341691264f7f91f9f4f43c91aa728d21396595dfca7cf39 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-keepalived-rgw-default-compute-2-cossgt, io.k8s.display-name=Keepalived on RHEL 9, io.openshift.expose-services=, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, architecture=x86_64, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, distribution-scope=public, io.openshift.tags=Ceph keepalived, io.buildah.version=1.28.2, vcs-type=git, version=2.2.4, com.redhat.component=keepalived-container, description=keepalived for Ceph, summary=Provides keepalived on RHEL 9 for Ceph., build-date=2023-02-22T09:23:20, name=keepalived, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1793, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec  6 01:50:47 np0005548731 podman[217117]: 2025-12-06 06:50:47.667669354 +0000 UTC m=+0.086379459 container exec_died 60f905acca99db805341691264f7f91f9f4f43c91aa728d21396595dfca7cf39 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-keepalived-rgw-default-compute-2-cossgt, io.buildah.version=1.28.2, io.k8s.display-name=Keepalived on RHEL 9, build-date=2023-02-22T09:23:20, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, distribution-scope=public, com.redhat.component=keepalived-container, name=keepalived, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vcs-type=git, description=keepalived for Ceph, io.openshift.expose-services=, architecture=x86_64, io.openshift.tags=Ceph keepalived, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=2.2.4, release=1793, summary=Provides keepalived on RHEL 9 for Ceph., vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9)
Dec  6 01:50:47 np0005548731 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec  6 01:50:47 np0005548731 systemd[1]: Finished man-db-cache-update.service.
Dec  6 01:50:47 np0005548731 systemd[1]: man-db-cache-update.service: Consumed 1.431s CPU time.
Dec  6 01:50:47 np0005548731 systemd[1]: run-r828468d9f15a4080a4a9933f58ce3ad6.service: Deactivated successfully.
Dec  6 01:50:48 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:50:48 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000025s ======
Dec  6 01:50:48 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:50:48.509 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Dec  6 01:50:48 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:50:48 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000025s ======
Dec  6 01:50:48 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:50:48.861 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Dec  6 01:50:48 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 01:50:48 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 01:50:48 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  6 01:50:48 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 01:50:48 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  6 01:50:49 np0005548731 python3.9[217552]: ansible-ansible.builtin.systemd_service Invoked with name=iscsid state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  6 01:50:49 np0005548731 systemd[1]: Stopping Open-iSCSI...
Dec  6 01:50:49 np0005548731 iscsid[205099]: iscsid shutting down.
Dec  6 01:50:49 np0005548731 systemd[1]: iscsid.service: Deactivated successfully.
Dec  6 01:50:49 np0005548731 systemd[1]: Stopped Open-iSCSI.
Dec  6 01:50:49 np0005548731 systemd[1]: One time configuration for iscsi.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/iscsi/initiatorname.iscsi).
Dec  6 01:50:49 np0005548731 systemd[1]: Starting Open-iSCSI...
Dec  6 01:50:49 np0005548731 systemd[1]: Started Open-iSCSI.
Dec  6 01:50:50 np0005548731 python3.9[217706]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  6 01:50:50 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:50:50 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:50:50 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:50:50.512 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:50:50 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:50:50 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:50:50 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:50:50.865 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:50:51 np0005548731 python3.9[217913]: ansible-ansible.builtin.file Invoked with mode=0644 path=/etc/ssh/ssh_known_hosts state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 01:50:52 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 01:50:52 np0005548731 python3.9[218065]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec  6 01:50:52 np0005548731 systemd[1]: Reloading.
Dec  6 01:50:52 np0005548731 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  6 01:50:52 np0005548731 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  6 01:50:52 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:50:52 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:50:52 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:50:52.515 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:50:52 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:50:52 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:50:52 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:50:52.867 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:50:53 np0005548731 python3.9[218250]: ansible-ansible.builtin.service_facts Invoked
Dec  6 01:50:53 np0005548731 network[218267]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec  6 01:50:53 np0005548731 network[218268]: 'network-scripts' will be removed from distribution in near future.
Dec  6 01:50:53 np0005548731 network[218269]: It is advised to switch to 'NetworkManager' instead for network management.
Dec  6 01:50:54 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:50:54 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:50:54 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:50:54.518 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:50:54 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:50:54 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:50:54 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:50:54.869 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:50:56 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:50:56 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:50:56 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:50:56.521 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:50:56 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:50:56 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:50:56 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:50:56.871 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:50:57 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 01:50:58 np0005548731 python3.9[218546]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_compute.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  6 01:50:58 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:50:58 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:50:58 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:50:58.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:50:58 np0005548731 python3.9[218750]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_migration_target.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  6 01:50:58 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:50:58 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:50:58 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:50:58.873 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:50:59 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 01:50:59 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 01:50:59 np0005548731 python3.9[218903]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api_cron.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  6 01:51:00 np0005548731 python3.9[219056]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  6 01:51:00 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:51:00 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000025s ======
Dec  6 01:51:00 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:51:00.524 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Dec  6 01:51:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:51:00.837 143965 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 01:51:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:51:00.838 143965 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 01:51:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:51:00.838 143965 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 01:51:00 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:51:00 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:51:00 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:51:00.875 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:51:01 np0005548731 python3.9[219210]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_conductor.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  6 01:51:01 np0005548731 python3.9[219363]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_metadata.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  6 01:51:02 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:51:02 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:51:02 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:51:02.527 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:51:02 np0005548731 python3.9[219517]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_scheduler.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  6 01:51:02 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:51:02 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:51:02 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:51:02.876 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:51:02 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 01:51:03 np0005548731 python3.9[219670]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_vnc_proxy.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  6 01:51:04 np0005548731 python3.9[219824]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 01:51:04 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:51:04 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000025s ======
Dec  6 01:51:04 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:51:04.530 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Dec  6 01:51:04 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:51:04 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:51:04 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:51:04.879 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:51:05 np0005548731 python3.9[219976]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 01:51:05 np0005548731 python3.9[220128]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 01:51:06 np0005548731 python3.9[220280]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 01:51:06 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:51:06 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000025s ======
Dec  6 01:51:06 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:51:06.532 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Dec  6 01:51:06 np0005548731 podman[220405]: 2025-12-06 06:51:06.577277153 +0000 UTC m=+0.062553403 container health_status ae4fd08cdec31d6ae2a6b91ffff7deb6ca5a8375cb52ee4b685cc6f25796824e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  6 01:51:06 np0005548731 python3.9[220452]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 01:51:06 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:51:06 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000025s ======
Dec  6 01:51:06 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:51:06.880 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Dec  6 01:51:07 np0005548731 python3.9[220605]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 01:51:07 np0005548731 python3.9[220757]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 01:51:07 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 01:51:08 np0005548731 podman[220882]: 2025-12-06 06:51:08.329582229 +0000 UTC m=+0.071488611 container health_status a118c3becf56cfbcae208065771ebf10a65162bb7b96389971293e534823fb78 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, container_name=ovn_controller, config_id=ovn_controller)
Dec  6 01:51:08 np0005548731 python3.9[220929]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 01:51:08 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:51:08 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:51:08 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:51:08.535 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:51:08 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:51:08 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:51:08 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:51:08.883 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:51:09 np0005548731 podman[221059]: 2025-12-06 06:51:09.484614897 +0000 UTC m=+0.050747282 container health_status 26c2ce9bdb83c6db5ccad7f5b2a7cb4e9354ab0235ad2f942ea42c8feda2142b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Dec  6 01:51:09 np0005548731 python3.9[221106]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 01:51:10 np0005548731 python3.9[221258]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 01:51:10 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:51:10 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:51:10 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:51:10.537 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:51:10 np0005548731 python3.9[221461]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 01:51:10 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:51:10 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:51:10 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:51:10.884 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:51:11 np0005548731 python3.9[221613]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 01:51:11 np0005548731 python3.9[221765]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 01:51:12 np0005548731 python3.9[221918]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 01:51:12 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:51:12 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000025s ======
Dec  6 01:51:12 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:51:12.539 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Dec  6 01:51:12 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:51:12 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:51:12 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:51:12.885 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:51:12 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 01:51:13 np0005548731 python3.9[222070]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 01:51:13 np0005548731 python3.9[222222]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 01:51:14 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:51:14 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:51:14 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:51:14.541 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:51:14 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:51:14 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:51:14 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:51:14.887 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:51:15 np0005548731 python3.9[222375]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then#012  systemctl disable --now certmonger.service#012  test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service#012fi#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  6 01:51:16 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:51:16 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:51:16 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:51:16.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:51:16 np0005548731 ceph-mon[77458]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #31. Immutable memtables: 0.
Dec  6 01:51:16 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-06:51:16.549087) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec  6 01:51:16 np0005548731 ceph-mon[77458]: rocksdb: [db/flush_job.cc:856] [default] [JOB 15] Flushing memtable with next log file: 31
Dec  6 01:51:16 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765003876549192, "job": 15, "event": "flush_started", "num_memtables": 1, "num_entries": 2172, "num_deletes": 257, "total_data_size": 5534645, "memory_usage": 5609360, "flush_reason": "Manual Compaction"}
Dec  6 01:51:16 np0005548731 ceph-mon[77458]: rocksdb: [db/flush_job.cc:885] [default] [JOB 15] Level-0 flush table #32: started
Dec  6 01:51:16 np0005548731 python3.9[222528]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Dec  6 01:51:16 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765003876654785, "cf_name": "default", "job": 15, "event": "table_file_creation", "file_number": 32, "file_size": 3620560, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 15661, "largest_seqno": 17828, "table_properties": {"data_size": 3611698, "index_size": 5548, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2245, "raw_key_size": 16929, "raw_average_key_size": 19, "raw_value_size": 3594216, "raw_average_value_size": 4088, "num_data_blocks": 248, "num_entries": 879, "num_filter_entries": 879, "num_deletions": 257, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765003651, "oldest_key_time": 1765003651, "file_creation_time": 1765003876, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bc01f9a1-fda8-4008-aee1-1fa5ff4371a1", "db_session_id": "CWBL8WMDM4D79BU86E9T", "orig_file_number": 32, "seqno_to_time_mapping": "N/A"}}
Dec  6 01:51:16 np0005548731 ceph-mon[77458]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 15] Flush lasted 105769 microseconds, and 7921 cpu microseconds.
Dec  6 01:51:16 np0005548731 ceph-mon[77458]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  6 01:51:16 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-06:51:16.654869) [db/flush_job.cc:967] [default] [JOB 15] Level-0 flush table #32: 3620560 bytes OK
Dec  6 01:51:16 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-06:51:16.654889) [db/memtable_list.cc:519] [default] Level-0 commit table #32 started
Dec  6 01:51:16 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-06:51:16.657654) [db/memtable_list.cc:722] [default] Level-0 commit table #32: memtable #1 done
Dec  6 01:51:16 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-06:51:16.657697) EVENT_LOG_v1 {"time_micros": 1765003876657686, "job": 15, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec  6 01:51:16 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-06:51:16.657720) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec  6 01:51:16 np0005548731 ceph-mon[77458]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 15] Try to delete WAL files size 5525208, prev total WAL file size 5525208, number of live WAL files 2.
Dec  6 01:51:16 np0005548731 ceph-mon[77458]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000028.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  6 01:51:16 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-06:51:16.659077) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0030' seq:72057594037927935, type:22 .. '6C6F676D00323534' seq:0, type:0; will stop at (end)
Dec  6 01:51:16 np0005548731 ceph-mon[77458]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 16] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec  6 01:51:16 np0005548731 ceph-mon[77458]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 15 Base level 0, inputs: [32(3535KB)], [30(7407KB)]
Dec  6 01:51:16 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765003876659118, "job": 16, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [32], "files_L6": [30], "score": -1, "input_data_size": 11205596, "oldest_snapshot_seqno": -1}
Dec  6 01:51:16 np0005548731 ceph-mon[77458]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 16] Generated table #33: 4767 keys, 10810026 bytes, temperature: kUnknown
Dec  6 01:51:16 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765003876867062, "cf_name": "default", "job": 16, "event": "table_file_creation", "file_number": 33, "file_size": 10810026, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10774786, "index_size": 22202, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 11973, "raw_key_size": 119739, "raw_average_key_size": 25, "raw_value_size": 10685196, "raw_average_value_size": 2241, "num_data_blocks": 921, "num_entries": 4767, "num_filter_entries": 4767, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765002564, "oldest_key_time": 0, "file_creation_time": 1765003876, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bc01f9a1-fda8-4008-aee1-1fa5ff4371a1", "db_session_id": "CWBL8WMDM4D79BU86E9T", "orig_file_number": 33, "seqno_to_time_mapping": "N/A"}}
Dec  6 01:51:16 np0005548731 ceph-mon[77458]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  6 01:51:16 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-06:51:16.867319) [db/compaction/compaction_job.cc:1663] [default] [JOB 16] Compacted 1@0 + 1@6 files to L6 => 10810026 bytes
Dec  6 01:51:16 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-06:51:16.869771) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 53.9 rd, 52.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.5, 7.2 +0.0 blob) out(10.3 +0.0 blob), read-write-amplify(6.1) write-amplify(3.0) OK, records in: 5298, records dropped: 531 output_compression: NoCompression
Dec  6 01:51:16 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-06:51:16.869834) EVENT_LOG_v1 {"time_micros": 1765003876869797, "job": 16, "event": "compaction_finished", "compaction_time_micros": 208022, "compaction_time_cpu_micros": 25963, "output_level": 6, "num_output_files": 1, "total_output_size": 10810026, "num_input_records": 5298, "num_output_records": 4767, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec  6 01:51:16 np0005548731 ceph-mon[77458]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000032.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  6 01:51:16 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765003876870813, "job": 16, "event": "table_file_deletion", "file_number": 32}
Dec  6 01:51:16 np0005548731 ceph-mon[77458]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000030.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  6 01:51:16 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765003876872431, "job": 16, "event": "table_file_deletion", "file_number": 30}
Dec  6 01:51:16 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-06:51:16.659020) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 01:51:16 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-06:51:16.872487) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 01:51:16 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-06:51:16.872492) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 01:51:16 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-06:51:16.872494) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 01:51:16 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-06:51:16.872496) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 01:51:16 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-06:51:16.872497) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 01:51:16 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:51:16 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:51:16 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:51:16.889 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:51:17 np0005548731 python3.9[222680]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec  6 01:51:17 np0005548731 systemd[1]: Reloading.
Dec  6 01:51:17 np0005548731 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  6 01:51:17 np0005548731 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  6 01:51:18 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 01:51:18 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:51:18 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:51:18 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:51:18.545 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:51:18 np0005548731 python3.9[222869]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_compute.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  6 01:51:18 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:51:18 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:51:18 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:51:18.891 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:51:19 np0005548731 python3.9[223022]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_migration_target.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  6 01:51:19 np0005548731 python3.9[223175]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api_cron.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  6 01:51:20 np0005548731 python3.9[223329]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  6 01:51:20 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:51:20 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:51:20 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:51:20.548 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:51:20 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:51:20 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:51:20 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:51:20.893 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:51:21 np0005548731 python3.9[223482]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_conductor.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  6 01:51:21 np0005548731 python3.9[223635]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_metadata.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  6 01:51:22 np0005548731 python3.9[223788]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_scheduler.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  6 01:51:22 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:51:22 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:51:22 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:51:22.551 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:51:22 np0005548731 python3.9[223942]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_vnc_proxy.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  6 01:51:22 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:51:22 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:51:22 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:51:22.895 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:51:23 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 01:51:24 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:51:24 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:51:24 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:51:24.553 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:51:24 np0005548731 python3.9[224096]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  6 01:51:24 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:51:24 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:51:24 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:51:24.896 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:51:25 np0005548731 python3.9[224248]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/containers setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  6 01:51:26 np0005548731 python3.9[224400]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova_nvme_cleaner setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  6 01:51:26 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:51:26 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:51:26 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:51:26.555 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:51:26 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:51:26 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:51:26 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:51:26.899 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:51:26 np0005548731 python3.9[224553]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  6 01:51:27 np0005548731 python3.9[224705]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/_nova_secontext setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  6 01:51:28 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 01:51:28 np0005548731 python3.9[224858]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova/instances setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  6 01:51:28 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:51:28 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:51:28 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:51:28.557 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:51:28 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:51:28 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:51:28 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:51:28.901 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:51:29 np0005548731 python3.9[225010]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/etc/ceph setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  6 01:51:29 np0005548731 ceph-mon[77458]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #34. Immutable memtables: 0.
Dec  6 01:51:29 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-06:51:29.602875) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec  6 01:51:29 np0005548731 ceph-mon[77458]: rocksdb: [db/flush_job.cc:856] [default] [JOB 17] Flushing memtable with next log file: 34
Dec  6 01:51:29 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765003889602936, "job": 17, "event": "flush_started", "num_memtables": 1, "num_entries": 371, "num_deletes": 251, "total_data_size": 359601, "memory_usage": 367048, "flush_reason": "Manual Compaction"}
Dec  6 01:51:29 np0005548731 ceph-mon[77458]: rocksdb: [db/flush_job.cc:885] [default] [JOB 17] Level-0 flush table #35: started
Dec  6 01:51:29 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765003889606680, "cf_name": "default", "job": 17, "event": "table_file_creation", "file_number": 35, "file_size": 237057, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 17833, "largest_seqno": 18199, "table_properties": {"data_size": 234849, "index_size": 372, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 773, "raw_key_size": 5454, "raw_average_key_size": 18, "raw_value_size": 230534, "raw_average_value_size": 781, "num_data_blocks": 17, "num_entries": 295, "num_filter_entries": 295, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765003877, "oldest_key_time": 1765003877, "file_creation_time": 1765003889, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bc01f9a1-fda8-4008-aee1-1fa5ff4371a1", "db_session_id": "CWBL8WMDM4D79BU86E9T", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Dec  6 01:51:29 np0005548731 ceph-mon[77458]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 17] Flush lasted 3837 microseconds, and 1495 cpu microseconds.
Dec  6 01:51:29 np0005548731 ceph-mon[77458]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  6 01:51:29 np0005548731 python3.9[225162]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Dec  6 01:51:29 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-06:51:29.606723) [db/flush_job.cc:967] [default] [JOB 17] Level-0 flush table #35: 237057 bytes OK
Dec  6 01:51:29 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-06:51:29.606738) [db/memtable_list.cc:519] [default] Level-0 commit table #35 started
Dec  6 01:51:29 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-06:51:29.608065) [db/memtable_list.cc:722] [default] Level-0 commit table #35: memtable #1 done
Dec  6 01:51:29 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-06:51:29.608079) EVENT_LOG_v1 {"time_micros": 1765003889608075, "job": 17, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec  6 01:51:29 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-06:51:29.608092) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec  6 01:51:29 np0005548731 ceph-mon[77458]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 17] Try to delete WAL files size 357137, prev total WAL file size 357137, number of live WAL files 2.
Dec  6 01:51:29 np0005548731 ceph-mon[77458]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000031.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  6 01:51:29 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-06:51:29.608426) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031303034' seq:72057594037927935, type:22 .. '7061786F730031323536' seq:0, type:0; will stop at (end)
Dec  6 01:51:29 np0005548731 ceph-mon[77458]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 18] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec  6 01:51:29 np0005548731 ceph-mon[77458]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 17 Base level 0, inputs: [35(231KB)], [33(10MB)]
Dec  6 01:51:29 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765003889608598, "job": 18, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [35], "files_L6": [33], "score": -1, "input_data_size": 11047083, "oldest_snapshot_seqno": -1}
Dec  6 01:51:29 np0005548731 ceph-mon[77458]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 18] Generated table #36: 4552 keys, 8962575 bytes, temperature: kUnknown
Dec  6 01:51:29 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765003889667740, "cf_name": "default", "job": 18, "event": "table_file_creation", "file_number": 36, "file_size": 8962575, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8930479, "index_size": 19634, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 11397, "raw_key_size": 115939, "raw_average_key_size": 25, "raw_value_size": 8846198, "raw_average_value_size": 1943, "num_data_blocks": 806, "num_entries": 4552, "num_filter_entries": 4552, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765002564, "oldest_key_time": 0, "file_creation_time": 1765003889, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bc01f9a1-fda8-4008-aee1-1fa5ff4371a1", "db_session_id": "CWBL8WMDM4D79BU86E9T", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}}
Dec  6 01:51:29 np0005548731 ceph-mon[77458]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  6 01:51:29 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-06:51:29.667950) [db/compaction/compaction_job.cc:1663] [default] [JOB 18] Compacted 1@0 + 1@6 files to L6 => 8962575 bytes
Dec  6 01:51:29 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-06:51:29.669444) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 186.6 rd, 151.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.2, 10.3 +0.0 blob) out(8.5 +0.0 blob), read-write-amplify(84.4) write-amplify(37.8) OK, records in: 5062, records dropped: 510 output_compression: NoCompression
Dec  6 01:51:29 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-06:51:29.669464) EVENT_LOG_v1 {"time_micros": 1765003889669455, "job": 18, "event": "compaction_finished", "compaction_time_micros": 59203, "compaction_time_cpu_micros": 28466, "output_level": 6, "num_output_files": 1, "total_output_size": 8962575, "num_input_records": 5062, "num_output_records": 4552, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec  6 01:51:29 np0005548731 ceph-mon[77458]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000035.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  6 01:51:29 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765003889669621, "job": 18, "event": "table_file_deletion", "file_number": 35}
Dec  6 01:51:29 np0005548731 ceph-mon[77458]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000033.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  6 01:51:29 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765003889671394, "job": 18, "event": "table_file_deletion", "file_number": 33}
Dec  6 01:51:29 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-06:51:29.608368) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 01:51:29 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-06:51:29.671457) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 01:51:29 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-06:51:29.671461) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 01:51:29 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-06:51:29.671463) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 01:51:29 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-06:51:29.671464) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 01:51:29 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-06:51:29.671466) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 01:51:30 np0005548731 python3.9[225314]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/nvme setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Dec  6 01:51:30 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:51:30 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:51:30 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:51:30.560 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:51:30 np0005548731 python3.9[225500]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/run/openvswitch setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Dec  6 01:51:30 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:51:30 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:51:30 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:51:30.903 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:51:32 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:51:32 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:51:32 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:51:32.564 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:51:32 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:51:32 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:51:32 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:51:32.906 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:51:33 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 01:51:34 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:51:34 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:51:34 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:51:34.566 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:51:34 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:51:34 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 01:51:34 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:51:34.908 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 01:51:36 np0005548731 python3.9[225672]: ansible-ansible.builtin.getent Invoked with database=passwd key=nova fail_key=True service=None split=None
Dec  6 01:51:36 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:51:36 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:51:36 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:51:36.569 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:51:36 np0005548731 podman[225721]: 2025-12-06 06:51:36.900511082 +0000 UTC m=+0.059790692 container health_status ae4fd08cdec31d6ae2a6b91ffff7deb6ca5a8375cb52ee4b685cc6f25796824e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team)
Dec  6 01:51:36 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:51:36 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:51:36 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:51:36.910 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:51:37 np0005548731 python3.9[225843]: ansible-ansible.builtin.group Invoked with gid=42436 name=nova state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Dec  6 01:51:38 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 01:51:38 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:51:38 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:51:38 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:51:38.571 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:51:38 np0005548731 podman[225846]: 2025-12-06 06:51:38.629122776 +0000 UTC m=+0.135530831 container health_status a118c3becf56cfbcae208065771ebf10a65162bb7b96389971293e534823fb78 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2)
Dec  6 01:51:38 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:51:38 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:51:38 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:51:38.912 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:51:39 np0005548731 python3.9[226029]: ansible-ansible.builtin.user Invoked with comment=nova user group=nova groups=['libvirt'] name=nova shell=/bin/sh state=present uid=42436 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-2 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Dec  6 01:51:39 np0005548731 podman[226062]: 2025-12-06 06:51:39.908493605 +0000 UTC m=+0.072070478 container health_status 26c2ce9bdb83c6db5ccad7f5b2a7cb4e9354ab0235ad2f942ea42c8feda2142b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent)
Dec  6 01:51:40 np0005548731 systemd-logind[794]: New session 51 of user zuul.
Dec  6 01:51:40 np0005548731 systemd[1]: Started Session 51 of User zuul.
Dec  6 01:51:40 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:51:40 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:51:40 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:51:40.574 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:51:40 np0005548731 systemd[1]: session-51.scope: Deactivated successfully.
Dec  6 01:51:40 np0005548731 systemd-logind[794]: Session 51 logged out. Waiting for processes to exit.
Dec  6 01:51:40 np0005548731 systemd-logind[794]: Removed session 51.
Dec  6 01:51:40 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:51:40 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 01:51:40 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:51:40.914 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 01:51:41 np0005548731 python3.9[226235]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/config.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  6 01:51:41 np0005548731 python3.9[226356]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/config.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765003900.8196895-3442-123369330282368/.source.json follow=False _original_basename=config.json.j2 checksum=b51012bfb0ca26296dcf3793a2f284446fb1395e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  6 01:51:42 np0005548731 python3.9[226507]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova-blank.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  6 01:51:42 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:51:42 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:51:42 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:51:42.576 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:51:42 np0005548731 python3.9[226583]: ansible-ansible.legacy.file Invoked with mode=0644 setype=container_file_t dest=/var/lib/openstack/config/nova/nova-blank.conf _original_basename=nova-blank.conf recurse=False state=file path=/var/lib/openstack/config/nova/nova-blank.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  6 01:51:42 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:51:42 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:51:42 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:51:42.917 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:51:43 np0005548731 python3.9[226733]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/ssh-config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  6 01:51:43 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 01:51:44 np0005548731 python3.9[226854]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/ssh-config mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765003903.0471478-3442-21104018682695/.source follow=False _original_basename=ssh-config checksum=4297f735c41bdc1ff52d72e6f623a02242f37958 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  6 01:51:44 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:51:44 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 01:51:44 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:51:44.579 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 01:51:44 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:51:44 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:51:44 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:51:44.919 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:51:45 np0005548731 python3.9[227005]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/02-nova-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  6 01:51:45 np0005548731 python3.9[227126]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/02-nova-host-specific.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765003904.2131908-3442-145037472017361/.source.conf follow=False _original_basename=02-nova-host-specific.conf.j2 checksum=d01cc1b48d783e4ed08d12bb4d0a107aba230a69 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  6 01:51:46 np0005548731 python3.9[227277]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova_statedir_ownership.py follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  6 01:51:46 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:51:46 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:51:46 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:51:46.582 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:51:46 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:51:46 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:51:46 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:51:46.921 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:51:47 np0005548731 python3.9[227398]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/nova_statedir_ownership.py mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765003905.8933094-3442-95359679061234/.source.py follow=False _original_basename=nova_statedir_ownership.py checksum=c6c8a3cfefa5efd60ceb1408c4e977becedb71e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  6 01:51:47 np0005548731 python3.9[227548]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/run-on-host follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  6 01:51:48 np0005548731 python3.9[227669]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/run-on-host mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765003907.175054-3442-593255740962/.source follow=False _original_basename=run-on-host checksum=93aba8edc83d5878604a66d37fea2f12b60bdea2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  6 01:51:48 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:51:48 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:51:48 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:51:48.584 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:51:48 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 01:51:48 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:51:48 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:51:48 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:51:48.924 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:51:49 np0005548731 python3.9[227822]: ansible-ansible.builtin.file Invoked with group=nova mode=0700 owner=nova path=/home/nova/.ssh state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 01:51:49 np0005548731 python3.9[227974]: ansible-ansible.legacy.copy Invoked with dest=/home/nova/.ssh/authorized_keys group=nova mode=0600 owner=nova remote_src=True src=/var/lib/openstack/config/nova/ssh-publickey backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 01:51:50 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:51:50 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:51:50 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:51:50.586 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:51:50 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:51:50 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:51:50 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:51:50.926 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:51:51 np0005548731 python3.9[228127]: ansible-ansible.builtin.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  6 01:51:51 np0005548731 python3.9[228329]: ansible-ansible.legacy.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  6 01:51:52 np0005548731 python3.9[228453]: ansible-ansible.legacy.copy Invoked with attributes=+i dest=/var/lib/nova/compute_id group=nova mode=0400 owner=nova src=/home/zuul/.ansible/tmp/ansible-tmp-1765003911.3605888-3765-240577940851025/.source _original_basename=.ysk24z4b follow=False checksum=8284a657b5010cdd2dd8efdd6d8fddb43b102d41 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None
Dec  6 01:51:52 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:51:52 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:51:52 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:51:52.589 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:51:52 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:51:52 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:51:52 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:51:52.929 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:51:53 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 01:51:54 np0005548731 python3.9[228605]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  6 01:51:54 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:51:54 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 01:51:54 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:51:54.592 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 01:51:54 np0005548731 python3.9[228758]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  6 01:51:54 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:51:54 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:51:54 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:51:54.931 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:51:55 np0005548731 python3.9[228879]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765003914.3888755-3841-279131422688506/.source.json follow=False _original_basename=nova_compute.json.j2 checksum=211ffd0bca4b407eb4de45a749ef70116a7806fd backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  6 01:51:56 np0005548731 python3.9[229030]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute_init.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  6 01:51:56 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:51:56 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:51:56 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:51:56.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:51:56 np0005548731 python3.9[229151]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute_init.json mode=0700 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765003915.8014188-3886-21010320283842/.source.json follow=False _original_basename=nova_compute_init.json.j2 checksum=60b024e6db49dc6e700fc0d50263944d98d4c034 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  6 01:51:56 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:51:56 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:51:56 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:51:56.933 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:51:57 np0005548731 python3.9[229303]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute_init.json debug=False
Dec  6 01:51:58 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:51:58 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:51:58 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:51:58.597 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:51:58 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 01:51:58 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:51:58 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:51:58 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:51:58.935 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:51:59 np0005548731 python3.9[229456]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Dec  6 01:52:00 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  6 01:52:00 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 01:52:00 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  6 01:52:00 np0005548731 python3[229741]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute_init.json log_base_path=/var/log/containers/stdouts debug=False
Dec  6 01:52:00 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:52:00 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:52:00 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:52:00.599 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:52:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:52:00.838 143965 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 01:52:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:52:00.839 143965 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 01:52:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:52:00.839 143965 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 01:52:00 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:52:00 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:52:00 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:52:00.936 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:52:02 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:52:02 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:52:02 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:52:02.601 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:52:02 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:52:02 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:52:02 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:52:02.937 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:52:03 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 01:52:04 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:52:04 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:52:04 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:52:04.604 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:52:04 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:52:04 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 01:52:04 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:52:04.939 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 01:52:06 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:52:06 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 01:52:06 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:52:06.607 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 01:52:06 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:52:06 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:52:06 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:52:06.942 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:52:08 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:52:08 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 01:52:08 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:52:08.609 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 01:52:08 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:52:08 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:52:08 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:52:08.943 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:52:10 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:52:10 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:52:10 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:52:10.612 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:52:10 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:52:10 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:52:10 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:52:10.945 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:52:11 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 01:52:12 np0005548731 ceph-mds[82074]: mds.beacon.cephfs.compute-2.tjfgow missed beacon ack from the monitors
Dec  6 01:52:12 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:52:12 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:52:12 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:52:12.614 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:52:12 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:52:12 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:52:12 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:52:12.948 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:52:13 np0005548731 podman[229845]: 2025-12-06 06:52:13.219426002 +0000 UTC m=+2.380929242 container health_status 26c2ce9bdb83c6db5ccad7f5b2a7cb4e9354ab0235ad2f942ea42c8feda2142b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec  6 01:52:13 np0005548731 podman[229818]: 2025-12-06 06:52:13.256425639 +0000 UTC m=+6.138977104 container health_status ae4fd08cdec31d6ae2a6b91ffff7deb6ca5a8375cb52ee4b685cc6f25796824e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_id=multipathd, container_name=multipathd)
Dec  6 01:52:13 np0005548731 podman[229831]: 2025-12-06 06:52:13.258244014 +0000 UTC m=+4.413368996 container health_status a118c3becf56cfbcae208065771ebf10a65162bb7b96389971293e534823fb78 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Dec  6 01:52:13 np0005548731 podman[229756]: 2025-12-06 06:52:13.260782047 +0000 UTC m=+12.745106506 image pull 5571c1b2140c835f70406e4553b3b44135b9c9b4eb673345cbd571460c5d59a3 quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Dec  6 01:52:13 np0005548731 podman[229957]: 2025-12-06 06:52:13.412919328 +0000 UTC m=+0.058443620 container create 5270438ab0f9929d844708e197058d70817dbfb9a2276adbb71c5111687ae25b (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, managed_by=edpm_ansible, container_name=nova_compute_init, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec  6 01:52:13 np0005548731 podman[229957]: 2025-12-06 06:52:13.377964171 +0000 UTC m=+0.023488483 image pull 5571c1b2140c835f70406e4553b3b44135b9c9b4eb673345cbd571460c5d59a3 quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Dec  6 01:52:13 np0005548731 python3[229741]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute_init --conmon-pidfile /run/nova_compute_init.pid --env NOVA_STATEDIR_OWNERSHIP_SKIP=/var/lib/nova/compute_id --env __OS_DEBUG=False --label config_id=edpm --label container_name=nova_compute_init --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']} --log-driver journald --log-level info --network none --privileged=False --security-opt label=disable --user root --volume /dev/log:/dev/log --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z --volume /var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init
Dec  6 01:52:14 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 01:52:14 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 01:52:14 np0005548731 python3.9[230198]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  6 01:52:14 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:52:14 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:52:14 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:52:14.618 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:52:14 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:52:14 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 01:52:14 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:52:14.952 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 01:52:15 np0005548731 python3.9[230352]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute.json debug=False
Dec  6 01:52:16 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 01:52:16 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:52:16 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:52:16 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:52:16.621 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:52:16 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:52:16 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:52:16 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:52:16.955 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:52:18 np0005548731 python3.9[230505]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Dec  6 01:52:18 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:52:18 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:52:18 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:52:18.624 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:52:18 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:52:18 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:52:18 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:52:18.957 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:52:19 np0005548731 python3[230658]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute.json log_base_path=/var/log/containers/stdouts debug=False
Dec  6 01:52:19 np0005548731 podman[230696]: 2025-12-06 06:52:19.792638188 +0000 UTC m=+0.047531369 container create 38f53a0a09be64af6f6f6bdbc1c5d21739c7f71bfede606a10b47d483b4cbd5c (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=nova_compute, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  6 01:52:19 np0005548731 podman[230696]: 2025-12-06 06:52:19.768191012 +0000 UTC m=+0.023084223 image pull 5571c1b2140c835f70406e4553b3b44135b9c9b4eb673345cbd571460c5d59a3 quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Dec  6 01:52:19 np0005548731 python3[230658]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute --conmon-pidfile /run/nova_compute.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --label config_id=edpm --label container_name=nova_compute --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']} --log-driver journald --log-level info --network host --pid host --privileged=True --user nova --volume /var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro --volume /var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /etc/localtime:/etc/localtime:ro --volume /lib/modules:/lib/modules:ro --volume /dev:/dev --volume /var/lib/libvirt:/var/lib/libvirt --volume /run/libvirt:/run/libvirt:shared --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/iscsi:/var/lib/iscsi --volume /etc/multipath:/etc/multipath:z --volume /etc/multipath.conf:/etc/multipath.conf:ro --volume /etc/iscsi:/etc/iscsi:ro --volume /etc/nvme:/etc/nvme --volume /var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro --volume /etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified kolla_start
Dec  6 01:52:20 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:52:20 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:52:20 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:52:20.627 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:52:20 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:52:20 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:52:20 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:52:20.959 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:52:21 np0005548731 python3.9[230887]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  6 01:52:21 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 01:52:22 np0005548731 python3.9[231041]: ansible-file Invoked with path=/etc/systemd/system/edpm_nova_compute.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 01:52:22 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:52:22 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:52:22 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:52:22.629 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:52:22 np0005548731 python3.9[231193]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1765003942.1706316-4162-110492291750331/source dest=/etc/systemd/system/edpm_nova_compute.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  6 01:52:22 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:52:22 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:52:22 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:52:22.960 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:52:23 np0005548731 python3.9[231269]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec  6 01:52:23 np0005548731 systemd[1]: Reloading.
Dec  6 01:52:23 np0005548731 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  6 01:52:23 np0005548731 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  6 01:52:24 np0005548731 python3.9[231381]: ansible-systemd Invoked with state=restarted name=edpm_nova_compute.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  6 01:52:24 np0005548731 systemd[1]: Reloading.
Dec  6 01:52:24 np0005548731 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  6 01:52:24 np0005548731 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  6 01:52:24 np0005548731 systemd[1]: Starting nova_compute container...
Dec  6 01:52:24 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:52:24 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:52:24 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:52:24.631 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:52:24 np0005548731 systemd[1]: Started libcrun container.
Dec  6 01:52:24 np0005548731 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/42fb7a27549122014af3dc214d958ed1562ba5e46b7418274b5dd16a480d31f1/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Dec  6 01:52:24 np0005548731 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/42fb7a27549122014af3dc214d958ed1562ba5e46b7418274b5dd16a480d31f1/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Dec  6 01:52:24 np0005548731 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/42fb7a27549122014af3dc214d958ed1562ba5e46b7418274b5dd16a480d31f1/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Dec  6 01:52:24 np0005548731 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/42fb7a27549122014af3dc214d958ed1562ba5e46b7418274b5dd16a480d31f1/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Dec  6 01:52:24 np0005548731 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/42fb7a27549122014af3dc214d958ed1562ba5e46b7418274b5dd16a480d31f1/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Dec  6 01:52:24 np0005548731 podman[231423]: 2025-12-06 06:52:24.75420437 +0000 UTC m=+0.123196555 container init 38f53a0a09be64af6f6f6bdbc1c5d21739c7f71bfede606a10b47d483b4cbd5c (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, container_name=nova_compute, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, config_id=edpm, org.label-schema.schema-version=1.0)
Dec  6 01:52:24 np0005548731 podman[231423]: 2025-12-06 06:52:24.765374747 +0000 UTC m=+0.134366922 container start 38f53a0a09be64af6f6f6bdbc1c5d21739c7f71bfede606a10b47d483b4cbd5c (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, container_name=nova_compute, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, config_id=edpm, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  6 01:52:24 np0005548731 podman[231423]: nova_compute
Dec  6 01:52:24 np0005548731 nova_compute[231439]: + sudo -E kolla_set_configs
Dec  6 01:52:24 np0005548731 systemd[1]: Started nova_compute container.
Dec  6 01:52:24 np0005548731 nova_compute[231439]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Dec  6 01:52:24 np0005548731 nova_compute[231439]: INFO:__main__:Validating config file
Dec  6 01:52:24 np0005548731 nova_compute[231439]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Dec  6 01:52:24 np0005548731 nova_compute[231439]: INFO:__main__:Copying service configuration files
Dec  6 01:52:24 np0005548731 nova_compute[231439]: INFO:__main__:Deleting /etc/nova/nova.conf
Dec  6 01:52:24 np0005548731 nova_compute[231439]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Dec  6 01:52:24 np0005548731 nova_compute[231439]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Dec  6 01:52:24 np0005548731 nova_compute[231439]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Dec  6 01:52:24 np0005548731 nova_compute[231439]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Dec  6 01:52:24 np0005548731 nova_compute[231439]: INFO:__main__:Copying /var/lib/kolla/config_files/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf
Dec  6 01:52:24 np0005548731 nova_compute[231439]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf
Dec  6 01:52:24 np0005548731 nova_compute[231439]: INFO:__main__:Copying /var/lib/kolla/config_files/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Dec  6 01:52:24 np0005548731 nova_compute[231439]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Dec  6 01:52:24 np0005548731 nova_compute[231439]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Dec  6 01:52:24 np0005548731 nova_compute[231439]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Dec  6 01:52:24 np0005548731 nova_compute[231439]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Dec  6 01:52:24 np0005548731 nova_compute[231439]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Dec  6 01:52:24 np0005548731 nova_compute[231439]: INFO:__main__:Deleting /etc/ceph
Dec  6 01:52:24 np0005548731 nova_compute[231439]: INFO:__main__:Creating directory /etc/ceph
Dec  6 01:52:24 np0005548731 nova_compute[231439]: INFO:__main__:Setting permission for /etc/ceph
Dec  6 01:52:24 np0005548731 nova_compute[231439]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring
Dec  6 01:52:24 np0005548731 nova_compute[231439]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Dec  6 01:52:24 np0005548731 nova_compute[231439]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.conf to /etc/ceph/ceph.conf
Dec  6 01:52:24 np0005548731 nova_compute[231439]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Dec  6 01:52:24 np0005548731 nova_compute[231439]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Dec  6 01:52:24 np0005548731 nova_compute[231439]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Dec  6 01:52:24 np0005548731 nova_compute[231439]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Dec  6 01:52:24 np0005548731 nova_compute[231439]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Dec  6 01:52:24 np0005548731 nova_compute[231439]: INFO:__main__:Deleting /usr/sbin/iscsiadm
Dec  6 01:52:24 np0005548731 nova_compute[231439]: INFO:__main__:Copying /var/lib/kolla/config_files/run-on-host to /usr/sbin/iscsiadm
Dec  6 01:52:24 np0005548731 nova_compute[231439]: INFO:__main__:Setting permission for /usr/sbin/iscsiadm
Dec  6 01:52:24 np0005548731 nova_compute[231439]: INFO:__main__:Writing out command to execute
Dec  6 01:52:24 np0005548731 nova_compute[231439]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Dec  6 01:52:24 np0005548731 nova_compute[231439]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Dec  6 01:52:24 np0005548731 nova_compute[231439]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Dec  6 01:52:24 np0005548731 nova_compute[231439]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Dec  6 01:52:24 np0005548731 nova_compute[231439]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Dec  6 01:52:24 np0005548731 nova_compute[231439]: ++ cat /run_command
Dec  6 01:52:24 np0005548731 nova_compute[231439]: + CMD=nova-compute
Dec  6 01:52:24 np0005548731 nova_compute[231439]: + ARGS=
Dec  6 01:52:24 np0005548731 nova_compute[231439]: + sudo kolla_copy_cacerts
Dec  6 01:52:24 np0005548731 nova_compute[231439]: + [[ ! -n '' ]]
Dec  6 01:52:24 np0005548731 nova_compute[231439]: + . kolla_extend_start
Dec  6 01:52:24 np0005548731 nova_compute[231439]: Running command: 'nova-compute'
Dec  6 01:52:24 np0005548731 nova_compute[231439]: + echo 'Running command: '\''nova-compute'\'''
Dec  6 01:52:24 np0005548731 nova_compute[231439]: + umask 0022
Dec  6 01:52:24 np0005548731 nova_compute[231439]: + exec nova-compute
Dec  6 01:52:24 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:52:24 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 01:52:24 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:52:24.961 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 01:52:26 np0005548731 python3.9[231601]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner_healthcheck.service follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  6 01:52:26 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 01:52:26 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:52:26 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 01:52:26 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:52:26.633 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 01:52:26 np0005548731 python3.9[231752]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  6 01:52:26 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:52:26 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:52:26 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:52:26.964 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:52:27 np0005548731 nova_compute[231439]: 2025-12-06 06:52:27.457 231443 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Dec  6 01:52:27 np0005548731 nova_compute[231439]: 2025-12-06 06:52:27.457 231443 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Dec  6 01:52:27 np0005548731 nova_compute[231439]: 2025-12-06 06:52:27.457 231443 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Dec  6 01:52:27 np0005548731 nova_compute[231439]: 2025-12-06 06:52:27.458 231443 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs#033[00m
Dec  6 01:52:27 np0005548731 nova_compute[231439]: 2025-12-06 06:52:27.631 231443 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 01:52:27 np0005548731 nova_compute[231439]: 2025-12-06 06:52:27.657 231443 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 1 in 0.026s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 01:52:27 np0005548731 nova_compute[231439]: 2025-12-06 06:52:27.658 231443 DEBUG oslo_concurrency.processutils [-] 'grep -F node.session.scan /sbin/iscsiadm' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473#033[00m
Dec  6 01:52:27 np0005548731 python3.9[231906]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service.requires follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.266 231443 INFO nova.virt.driver [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.404 231443 INFO nova.compute.provider_config [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.514 231443 DEBUG oslo_concurrency.lockutils [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.515 231443 DEBUG oslo_concurrency.lockutils [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.515 231443 DEBUG oslo_concurrency.lockutils [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.516 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.516 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.516 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.516 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.516 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.517 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.517 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.517 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.517 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.517 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.517 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.518 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.518 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.518 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.518 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.518 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.518 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.519 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.519 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.519 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] console_host                   = compute-2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.519 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.519 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.519 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.520 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.520 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.520 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.520 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.520 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.520 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.521 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.521 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.521 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.521 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.521 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.521 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.521 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.522 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.522 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.522 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] host                           = compute-2.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.522 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.522 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.522 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.523 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.523 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.523 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.523 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.523 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.523 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.524 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.524 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.524 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.524 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.524 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.524 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.524 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.525 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.525 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.525 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.525 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.525 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.525 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.526 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.526 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.526 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.526 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.526 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.526 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.526 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.527 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.527 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.527 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.527 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.527 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.528 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.528 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.528 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.528 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.528 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.528 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.529 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.529 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] my_block_storage_ip            = 192.168.122.102 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.529 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] my_ip                          = 192.168.122.102 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.529 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.529 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.530 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.530 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.530 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.530 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.530 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.531 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.531 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.531 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.531 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.531 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.532 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.532 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.532 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.532 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.532 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.532 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.533 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.533 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.533 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.533 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.533 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.533 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.534 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.534 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.534 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.534 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.534 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.535 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.535 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.535 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.535 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.535 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.535 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.536 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.536 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.536 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.536 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.536 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.536 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.536 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.537 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.537 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.537 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.537 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.537 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.537 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.537 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.538 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.538 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.538 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.538 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.538 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.538 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.539 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.539 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.539 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.539 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.540 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.540 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.540 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.540 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.540 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.541 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.541 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.541 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.541 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.541 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.541 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.542 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.542 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.542 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.542 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.542 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.542 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.543 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.543 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.543 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.543 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.543 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.543 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.544 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.544 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.544 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.544 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.544 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.544 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.545 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.545 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.545 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.545 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.545 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.546 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.546 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.546 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.546 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.546 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.546 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.546 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.547 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.547 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.547 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.547 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.547 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.547 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.548 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.548 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.548 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.548 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.548 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.548 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.548 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.549 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.549 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.549 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.549 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.549 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.549 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.550 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.550 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.550 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.550 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.550 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.550 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.550 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.551 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.551 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.551 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.551 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.551 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.551 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.552 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.552 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.552 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.552 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.552 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.552 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.552 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.553 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.553 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] cinder.os_region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.553 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.553 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.553 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.553 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.554 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.554 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.554 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.554 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.554 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.554 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.555 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.555 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.555 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.555 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.555 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.555 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.556 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.556 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.556 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.556 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.556 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.556 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.556 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.557 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.557 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.557 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.557 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.557 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.557 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.558 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.558 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.558 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.558 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.558 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.558 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.558 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.559 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.559 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.559 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.559 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.559 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.559 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.560 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.560 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.560 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.560 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.561 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.561 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.561 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.561 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.561 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.562 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.562 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.562 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.562 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.562 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.563 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.563 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.569 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.569 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.569 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.569 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.569 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.570 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.570 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.570 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.570 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.570 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.571 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.571 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.571 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.571 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.571 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.571 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.572 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.572 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.572 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.572 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.572 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.573 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.573 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.573 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.573 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.573 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.574 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.574 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.574 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.574 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.574 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.575 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.575 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.575 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.575 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.575 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.575 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.575 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.576 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.576 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.576 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.576 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.576 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.576 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.577 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.577 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.577 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.577 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.577 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.578 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.578 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.578 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.578 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.578 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.578 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.579 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.579 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.579 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.579 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.579 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.579 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.580 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.580 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.580 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.580 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.580 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.580 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.581 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.581 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.581 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.581 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.581 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.581 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.582 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.582 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.582 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.582 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.583 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.583 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.583 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.583 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.583 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.583 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.584 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.584 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.584 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.584 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.584 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.584 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.584 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.585 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.585 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.585 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.585 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.585 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.586 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.586 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.586 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.586 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.586 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.586 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.587 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.587 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.587 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.587 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.587 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.587 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.587 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.588 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.588 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.588 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.588 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.588 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.589 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] barbican.barbican_region_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.589 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.589 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.589 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.589 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.589 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.589 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.590 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.590 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.590 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.590 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.590 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.590 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.591 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.591 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.591 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.591 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.591 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.591 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.591 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.592 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.592 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.592 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.592 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.592 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.592 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.592 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.593 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.593 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.593 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.593 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.593 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.593 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.593 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.594 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.594 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.594 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.594 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.594 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.594 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.594 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.595 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.595 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.595 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.595 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.595 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.595 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.595 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.596 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.596 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.596 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.596 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.596 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.596 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.596 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.597 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.597 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.597 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.597 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] libvirt.cpu_mode               = custom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.597 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.598 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] libvirt.cpu_models             = ['Nehalem'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.598 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.598 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.598 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.598 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.599 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.599 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.599 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.599 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.599 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.599 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.599 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.600 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.600 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.600 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] libvirt.images_rbd_ceph_conf   = /etc/ceph/ceph.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.600 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.600 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.600 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] libvirt.images_rbd_glance_store_name = default_backend log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.601 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] libvirt.images_rbd_pool        = vms log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.601 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] libvirt.images_type            = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.601 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.601 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.601 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.601 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.601 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.601 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.602 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.602 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.602 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.602 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.602 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.602 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.602 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.603 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.603 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.603 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.603 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.603 231443 WARNING oslo_config.cfg [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Dec  6 01:52:28 np0005548731 nova_compute[231439]: live_migration_uri is deprecated for removal in favor of two other options that
Dec  6 01:52:28 np0005548731 nova_compute[231439]: allow to change live migration scheme and target URI: ``live_migration_scheme``
Dec  6 01:52:28 np0005548731 nova_compute[231439]: and ``live_migration_inbound_addr`` respectively.
Dec  6 01:52:28 np0005548731 nova_compute[231439]: ).  Its value may be silently ignored in the future.#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.604 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] libvirt.live_migration_uri     = qemu+tls://%s/system log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.604 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] libvirt.live_migration_with_native_tls = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.604 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.604 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.604 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.604 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.604 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.605 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.605 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.605 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.605 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.605 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.606 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.606 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.606 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.606 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.606 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.606 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.606 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] libvirt.rbd_secret_uuid        = 40a1bae4-cf76-5610-8dab-c75116dfe0bb log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.607 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] libvirt.rbd_user               = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.607 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.607 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.607 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.607 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.607 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.608 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.608 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.608 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.608 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.608 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.608 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.608 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.609 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.609 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.609 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.609 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.609 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.610 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.610 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.610 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.610 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.610 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.610 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.611 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.611 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.611 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.611 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.611 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.611 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.611 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.612 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.612 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.612 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.612 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.612 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.613 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.613 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.613 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.613 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.613 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.614 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.614 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.614 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.614 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.614 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.614 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.614 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.615 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.615 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.615 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.615 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.615 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.616 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.616 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.616 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.616 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.616 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.617 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.617 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.617 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.617 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.617 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.618 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.618 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.618 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.618 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.618 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.618 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.619 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.619 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.619 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] placement.auth_url             = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.619 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.620 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.620 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.620 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.620 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.620 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.620 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.621 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.621 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.621 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.621 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.621 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.621 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.622 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.622 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.622 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.622 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.622 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.622 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.622 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.623 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.623 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.623 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.623 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.623 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.623 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.624 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.624 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.624 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.624 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.624 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.624 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.624 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.624 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.625 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.625 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.625 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.625 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.625 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.625 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.626 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.626 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.626 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.626 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.626 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.626 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.627 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.627 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.627 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.627 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.628 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.628 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.628 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.628 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.628 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.628 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.629 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.629 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.629 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.629 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.629 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.629 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.630 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.630 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.630 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.630 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.630 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.630 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.631 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.631 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.631 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.631 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.631 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.631 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.631 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.632 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.632 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.632 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.632 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.632 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.632 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.633 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.633 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.633 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.633 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.633 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.633 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.634 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.634 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.634 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.634 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.634 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.634 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.635 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.635 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.635 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.635 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.635 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.635 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.636 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.636 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.636 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.636 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.636 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.636 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.636 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:52:28 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:52:28.636 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.637 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.637 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.637 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.637 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.637 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.637 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.638 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.638 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.638 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.638 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.638 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.638 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.639 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.639 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.639 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.639 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.639 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.639 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.639 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.640 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.640 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.640 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.640 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.640 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.640 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.640 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.641 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.641 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.641 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.641 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.641 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.641 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.641 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.642 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.642 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.642 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.642 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.642 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.642 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.643 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.643 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.643 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.643 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.643 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.644 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.644 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.644 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.644 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.644 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.645 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.645 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.645 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] vnc.novncproxy_base_url        = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.645 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.646 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.646 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.646 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] vnc.server_proxyclient_address = 192.168.122.102 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.646 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.646 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.647 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.647 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.647 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.647 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.647 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.647 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.647 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.648 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.648 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.648 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.648 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.648 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.648 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.648 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.649 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.649 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.649 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.649 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.649 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.649 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.650 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.650 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.650 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.650 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.650 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.650 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.650 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.651 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.651 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.651 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.651 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.651 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.651 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.651 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.652 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.652 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.652 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.652 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.652 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.652 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.653 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.653 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.653 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.653 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.653 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.653 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.653 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.653 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.654 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.654 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.654 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.654 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.654 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.654 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.655 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.655 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.655 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.655 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.655 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.655 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.655 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.655 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.656 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.656 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.656 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.656 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.656 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.656 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.656 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.657 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.657 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.657 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.657 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.657 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.657 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.657 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.658 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.658 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.658 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.658 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.658 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.658 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.658 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.659 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.659 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.659 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.659 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.659 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.659 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] oslo_limit.auth_url            = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.659 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.660 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.660 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.660 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.660 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.660 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.660 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.660 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.661 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.661 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.661 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.661 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.661 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.661 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.661 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.661 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.662 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.662 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.662 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.662 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.662 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.662 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.663 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.663 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.663 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.663 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.663 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.663 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.664 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.664 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.664 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.664 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.664 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.664 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.664 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.665 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.665 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.665 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.665 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.665 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.665 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.665 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.666 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.666 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.666 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.666 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.666 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.666 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.666 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.667 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.667 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.667 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.667 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.667 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.667 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.667 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.668 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.668 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.668 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.668 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.668 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.668 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.668 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.669 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.669 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.669 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.669 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.669 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.669 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.669 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.670 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.670 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.670 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.670 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.670 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.670 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.670 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.671 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.671 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.671 231443 DEBUG oslo_service.service [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.672 231443 INFO nova.service [-] Starting compute node (version 27.5.2-0.20250829104910.6f8decf.el9)#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.685 231443 DEBUG nova.virt.libvirt.host [None req-87b5e55b-69ed-4537-972f-992742969399 - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.686 231443 DEBUG nova.virt.libvirt.host [None req-87b5e55b-69ed-4537-972f-992742969399 - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.686 231443 DEBUG nova.virt.libvirt.host [None req-87b5e55b-69ed-4537-972f-992742969399 - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.686 231443 DEBUG nova.virt.libvirt.host [None req-87b5e55b-69ed-4537-972f-992742969399 - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503#033[00m
Dec  6 01:52:28 np0005548731 systemd[1]: Starting libvirt QEMU daemon...
Dec  6 01:52:28 np0005548731 systemd[1]: Started libvirt QEMU daemon.
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.754 231443 DEBUG nova.virt.libvirt.host [None req-87b5e55b-69ed-4537-972f-992742969399 - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7f2f1e4ce520> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.757 231443 DEBUG nova.virt.libvirt.host [None req-87b5e55b-69ed-4537-972f-992742969399 - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7f2f1e4ce520> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.758 231443 INFO nova.virt.libvirt.driver [None req-87b5e55b-69ed-4537-972f-992742969399 - - - - - -] Connection event '1' reason 'None'#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.773 231443 WARNING nova.virt.libvirt.driver [None req-87b5e55b-69ed-4537-972f-992742969399 - - - - - -] Cannot update service status on host "compute-2.ctlplane.example.com" since it is not registered.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-2.ctlplane.example.com could not be found.#033[00m
Dec  6 01:52:28 np0005548731 nova_compute[231439]: 2025-12-06 06:52:28.773 231443 DEBUG nova.virt.libvirt.volume.mount [None req-87b5e55b-69ed-4537-972f-992742969399 - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130#033[00m
Dec  6 01:52:28 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:52:28 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:52:28 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:52:28.966 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:52:28 np0005548731 python3.9[232081]: ansible-containers.podman.podman_container Invoked with name=nova_nvme_cleaner state=absent executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Dec  6 01:52:29 np0005548731 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec  6 01:52:29 np0005548731 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec  6 01:52:29 np0005548731 nova_compute[231439]: 2025-12-06 06:52:29.561 231443 INFO nova.virt.libvirt.host [None req-87b5e55b-69ed-4537-972f-992742969399 - - - - - -] Libvirt host capabilities <capabilities>
Dec  6 01:52:29 np0005548731 nova_compute[231439]: 
Dec  6 01:52:29 np0005548731 nova_compute[231439]:  <host>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:    <uuid>ec2492e2-e0d1-43d3-b74f-744a8272a0e6</uuid>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:    <cpu>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <arch>x86_64</arch>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model>EPYC-Rome-v4</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <vendor>AMD</vendor>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <microcode version='16777317'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <signature family='23' model='49' stepping='0'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <topology sockets='8' dies='1' clusters='1' cores='1' threads='1'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <maxphysaddr mode='emulate' bits='40'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <feature name='x2apic'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <feature name='tsc-deadline'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <feature name='osxsave'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <feature name='hypervisor'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <feature name='tsc_adjust'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <feature name='spec-ctrl'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <feature name='stibp'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <feature name='arch-capabilities'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <feature name='ssbd'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <feature name='cmp_legacy'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <feature name='topoext'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <feature name='virt-ssbd'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <feature name='lbrv'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <feature name='tsc-scale'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <feature name='vmcb-clean'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <feature name='pause-filter'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <feature name='pfthreshold'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <feature name='svme-addr-chk'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <feature name='rdctl-no'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <feature name='skip-l1dfl-vmentry'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <feature name='mds-no'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <feature name='pschange-mc-no'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <pages unit='KiB' size='4'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <pages unit='KiB' size='2048'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <pages unit='KiB' size='1048576'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:    </cpu>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:    <power_management>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <suspend_mem/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:    </power_management>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:    <iommu support='no'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:    <migration_features>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <live/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <uri_transports>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <uri_transport>tcp</uri_transport>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <uri_transport>rdma</uri_transport>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </uri_transports>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:    </migration_features>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:    <topology>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <cells num='1'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <cell id='0'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:          <memory unit='KiB'>7864320</memory>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:          <pages unit='KiB' size='4'>1966080</pages>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:          <pages unit='KiB' size='2048'>0</pages>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:          <pages unit='KiB' size='1048576'>0</pages>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:          <distances>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:            <sibling id='0' value='10'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:          </distances>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:          <cpus num='8'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:            <cpu id='0' socket_id='0' die_id='0' cluster_id='65535' core_id='0' siblings='0'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:            <cpu id='1' socket_id='1' die_id='1' cluster_id='65535' core_id='0' siblings='1'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:            <cpu id='2' socket_id='2' die_id='2' cluster_id='65535' core_id='0' siblings='2'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:            <cpu id='3' socket_id='3' die_id='3' cluster_id='65535' core_id='0' siblings='3'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:            <cpu id='4' socket_id='4' die_id='4' cluster_id='65535' core_id='0' siblings='4'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:            <cpu id='5' socket_id='5' die_id='5' cluster_id='65535' core_id='0' siblings='5'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:            <cpu id='6' socket_id='6' die_id='6' cluster_id='65535' core_id='0' siblings='6'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:            <cpu id='7' socket_id='7' die_id='7' cluster_id='65535' core_id='0' siblings='7'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:          </cpus>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        </cell>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </cells>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:    </topology>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:    <cache>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <bank id='0' level='2' type='both' size='512' unit='KiB' cpus='0'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <bank id='1' level='2' type='both' size='512' unit='KiB' cpus='1'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <bank id='2' level='2' type='both' size='512' unit='KiB' cpus='2'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <bank id='3' level='2' type='both' size='512' unit='KiB' cpus='3'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <bank id='4' level='2' type='both' size='512' unit='KiB' cpus='4'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <bank id='5' level='2' type='both' size='512' unit='KiB' cpus='5'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <bank id='6' level='2' type='both' size='512' unit='KiB' cpus='6'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <bank id='7' level='2' type='both' size='512' unit='KiB' cpus='7'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <bank id='0' level='3' type='both' size='16' unit='MiB' cpus='0'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <bank id='1' level='3' type='both' size='16' unit='MiB' cpus='1'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <bank id='2' level='3' type='both' size='16' unit='MiB' cpus='2'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <bank id='3' level='3' type='both' size='16' unit='MiB' cpus='3'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <bank id='4' level='3' type='both' size='16' unit='MiB' cpus='4'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <bank id='5' level='3' type='both' size='16' unit='MiB' cpus='5'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <bank id='6' level='3' type='both' size='16' unit='MiB' cpus='6'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <bank id='7' level='3' type='both' size='16' unit='MiB' cpus='7'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:    </cache>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:    <secmodel>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model>selinux</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <doi>0</doi>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <baselabel type='kvm'>system_u:system_r:svirt_t:s0</baselabel>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <baselabel type='qemu'>system_u:system_r:svirt_tcg_t:s0</baselabel>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:    </secmodel>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:    <secmodel>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model>dac</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <doi>0</doi>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <baselabel type='kvm'>+107:+107</baselabel>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <baselabel type='qemu'>+107:+107</baselabel>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:    </secmodel>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:  </host>
Dec  6 01:52:29 np0005548731 nova_compute[231439]: 
Dec  6 01:52:29 np0005548731 nova_compute[231439]:  <guest>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:    <os_type>hvm</os_type>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:    <arch name='i686'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <wordsize>32</wordsize>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <emulator>/usr/libexec/qemu-kvm</emulator>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <domain type='qemu'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <domain type='kvm'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:    </arch>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:    <features>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <pae/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <nonpae/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <acpi default='on' toggle='yes'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <apic default='on' toggle='no'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <cpuselection/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <deviceboot/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <disksnapshot default='on' toggle='no'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <externalSnapshot/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:    </features>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:  </guest>
Dec  6 01:52:29 np0005548731 nova_compute[231439]: 
Dec  6 01:52:29 np0005548731 nova_compute[231439]:  <guest>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:    <os_type>hvm</os_type>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:    <arch name='x86_64'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <wordsize>64</wordsize>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <emulator>/usr/libexec/qemu-kvm</emulator>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <domain type='qemu'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <domain type='kvm'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:    </arch>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:    <features>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <acpi default='on' toggle='yes'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <apic default='on' toggle='no'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <cpuselection/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <deviceboot/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <disksnapshot default='on' toggle='no'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <externalSnapshot/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:    </features>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:  </guest>
Dec  6 01:52:29 np0005548731 nova_compute[231439]: 
Dec  6 01:52:29 np0005548731 nova_compute[231439]: </capabilities>
Dec  6 01:52:29 np0005548731 nova_compute[231439]: #033[00m
Dec  6 01:52:29 np0005548731 nova_compute[231439]: 2025-12-06 06:52:29.567 231443 DEBUG nova.virt.libvirt.host [None req-87b5e55b-69ed-4537-972f-992742969399 - - - - - -] Getting domain capabilities for i686 via machine types: {'pc', 'q35'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952#033[00m
Dec  6 01:52:29 np0005548731 nova_compute[231439]: 2025-12-06 06:52:29.588 231443 DEBUG nova.virt.libvirt.host [None req-87b5e55b-69ed-4537-972f-992742969399 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=pc:
Dec  6 01:52:29 np0005548731 nova_compute[231439]: <domainCapabilities>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:  <path>/usr/libexec/qemu-kvm</path>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:  <domain>kvm</domain>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:  <machine>pc-i440fx-rhel7.6.0</machine>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:  <arch>i686</arch>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:  <vcpu max='240'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:  <iothreads supported='yes'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:  <os supported='yes'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:    <enum name='firmware'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:    <loader supported='yes'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <enum name='type'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>rom</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>pflash</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </enum>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <enum name='readonly'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>yes</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>no</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </enum>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <enum name='secure'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>no</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </enum>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:    </loader>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:  </os>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:  <cpu>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:    <mode name='host-passthrough' supported='yes'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <enum name='hostPassthroughMigratable'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>on</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>off</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </enum>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:    </mode>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:    <mode name='maximum' supported='yes'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <enum name='maximumMigratable'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>on</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>off</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </enum>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:    </mode>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:    <mode name='host-model' supported='yes'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model fallback='forbid'>EPYC-Rome</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <vendor>AMD</vendor>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <maxphysaddr mode='passthrough' limit='40'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <feature policy='require' name='x2apic'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <feature policy='require' name='tsc-deadline'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <feature policy='require' name='hypervisor'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <feature policy='require' name='tsc_adjust'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <feature policy='require' name='spec-ctrl'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <feature policy='require' name='stibp'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <feature policy='require' name='ssbd'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <feature policy='require' name='cmp_legacy'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <feature policy='require' name='overflow-recov'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <feature policy='require' name='succor'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <feature policy='require' name='ibrs'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <feature policy='require' name='amd-ssbd'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <feature policy='require' name='virt-ssbd'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <feature policy='require' name='lbrv'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <feature policy='require' name='tsc-scale'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <feature policy='require' name='vmcb-clean'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <feature policy='require' name='flushbyasid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <feature policy='require' name='pause-filter'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <feature policy='require' name='pfthreshold'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <feature policy='require' name='svme-addr-chk'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <feature policy='require' name='lfence-always-serializing'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <feature policy='disable' name='xsaves'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:    </mode>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:    <mode name='custom' supported='yes'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='Broadwell'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='erms'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='hle'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='invpcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='rtm'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='Broadwell-IBRS'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='erms'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='hle'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='invpcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='rtm'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='Broadwell-noTSX'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='erms'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='invpcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='Broadwell-noTSX-IBRS'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='erms'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='invpcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='Broadwell-v1'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='erms'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='hle'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='invpcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='rtm'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='Broadwell-v2'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='erms'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='invpcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='Broadwell-v3'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='erms'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='hle'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='invpcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='rtm'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='Broadwell-v4'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='erms'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='invpcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='Cascadelake-Server'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512bw'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512cd'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512dq'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512f'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vl'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vnni'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='erms'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='hle'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='invpcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pku'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='rtm'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='Cascadelake-Server-noTSX'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512bw'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512cd'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512dq'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512f'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vl'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vnni'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='erms'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='ibrs-all'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='invpcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pku'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='Cascadelake-Server-v1'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512bw'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512cd'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512dq'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512f'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vl'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vnni'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='erms'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='hle'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='invpcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pku'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='rtm'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='Cascadelake-Server-v2'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512bw'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512cd'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512dq'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512f'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vl'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vnni'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='erms'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='hle'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='ibrs-all'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='invpcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pku'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='rtm'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='Cascadelake-Server-v3'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512bw'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512cd'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512dq'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512f'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vl'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vnni'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='erms'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='ibrs-all'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='invpcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pku'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='Cascadelake-Server-v4'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512bw'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512cd'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512dq'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512f'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vl'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vnni'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='erms'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='ibrs-all'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='invpcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pku'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='Cascadelake-Server-v5'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512bw'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512cd'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512dq'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512f'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vl'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vnni'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='erms'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='ibrs-all'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='invpcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pku'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='xsaves'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='Cooperlake'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512-bf16'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512bw'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512cd'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512dq'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512f'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vl'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vnni'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='erms'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='hle'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='ibrs-all'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='invpcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pku'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='rtm'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='taa-no'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='Cooperlake-v1'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512-bf16'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512bw'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512cd'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512dq'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512f'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vl'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vnni'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='erms'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='hle'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='ibrs-all'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='invpcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pku'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='rtm'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='taa-no'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='Cooperlake-v2'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512-bf16'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512bw'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512cd'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512dq'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512f'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vl'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vnni'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='erms'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='hle'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='ibrs-all'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='invpcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pku'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='rtm'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='taa-no'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='xsaves'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='Denverton'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='erms'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='mpx'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='Denverton-v1'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='erms'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='mpx'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='Denverton-v2'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='erms'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='Denverton-v3'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='erms'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='xsaves'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='Dhyana-v2'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='xsaves'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='EPYC-Genoa'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='amd-psfd'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='auto-ibrs'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512-bf16'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512-vpopcntdq'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512bitalg'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512bw'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512cd'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512dq'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512f'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512ifma'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vbmi'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vbmi2'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vl'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vnni'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='erms'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='fsrm'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='gfni'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='invpcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='la57'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='no-nested-data-bp'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='null-sel-clr-base'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pku'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='stibp-always-on'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='vaes'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='vpclmulqdq'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='xsaves'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='EPYC-Genoa-v1'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='amd-psfd'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='auto-ibrs'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512-bf16'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512-vpopcntdq'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512bitalg'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512bw'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512cd'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512dq'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512f'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512ifma'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vbmi'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vbmi2'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vl'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vnni'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='erms'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='fsrm'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='gfni'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='invpcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='la57'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='no-nested-data-bp'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='null-sel-clr-base'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pku'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='stibp-always-on'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='vaes'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='vpclmulqdq'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='xsaves'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='EPYC-Milan'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='erms'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='fsrm'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='invpcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pku'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='xsaves'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='EPYC-Milan-v1'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='erms'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='fsrm'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='invpcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pku'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='xsaves'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='EPYC-Milan-v2'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='amd-psfd'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='erms'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='fsrm'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='invpcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='no-nested-data-bp'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='null-sel-clr-base'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pku'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='stibp-always-on'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='vaes'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='vpclmulqdq'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='xsaves'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='EPYC-Rome'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='xsaves'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='EPYC-Rome-v1'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='xsaves'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='EPYC-Rome-v2'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='xsaves'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='EPYC-Rome-v3'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='xsaves'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='EPYC-v3'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='xsaves'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='EPYC-v4'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='xsaves'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='GraniteRapids'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='amx-bf16'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='amx-fp16'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='amx-int8'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='amx-tile'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx-vnni'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512-bf16'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512-fp16'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512-vpopcntdq'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512bitalg'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512bw'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512cd'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512dq'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512f'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512ifma'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vbmi'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vbmi2'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vl'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vnni'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='bus-lock-detect'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='erms'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='fbsdp-no'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='fsrc'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='fsrm'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='fsrs'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='fzrm'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='gfni'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='hle'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='ibrs-all'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='invpcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='la57'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='mcdt-no'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pbrsb-no'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pku'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='prefetchiti'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='psdp-no'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='rtm'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='sbdr-ssdp-no'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='serialize'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='taa-no'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='tsx-ldtrk'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='vaes'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='vpclmulqdq'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='xfd'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='xsaves'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='GraniteRapids-v1'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='amx-bf16'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='amx-fp16'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='amx-int8'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='amx-tile'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx-vnni'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512-bf16'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512-fp16'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512-vpopcntdq'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512bitalg'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512bw'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512cd'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512dq'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512f'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512ifma'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vbmi'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vbmi2'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vl'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vnni'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='bus-lock-detect'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='erms'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='fbsdp-no'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='fsrc'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='fsrm'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='fsrs'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='fzrm'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='gfni'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='hle'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='ibrs-all'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='invpcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='la57'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='mcdt-no'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pbrsb-no'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pku'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='prefetchiti'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='psdp-no'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='rtm'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='sbdr-ssdp-no'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='serialize'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='taa-no'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='tsx-ldtrk'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='vaes'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='vpclmulqdq'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='xfd'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='xsaves'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='GraniteRapids-v2'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='amx-bf16'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='amx-fp16'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='amx-int8'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='amx-tile'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx-vnni'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx10'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx10-128'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx10-256'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx10-512'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512-bf16'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512-fp16'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512-vpopcntdq'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512bitalg'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512bw'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512cd'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512dq'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512f'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512ifma'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vbmi'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vbmi2'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vl'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vnni'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='bus-lock-detect'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='cldemote'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='erms'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='fbsdp-no'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='fsrc'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='fsrm'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='fsrs'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='fzrm'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='gfni'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='hle'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='ibrs-all'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='invpcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='la57'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='mcdt-no'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='movdir64b'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='movdiri'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pbrsb-no'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pku'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='prefetchiti'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='psdp-no'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='rtm'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='sbdr-ssdp-no'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='serialize'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='ss'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='taa-no'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='tsx-ldtrk'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='vaes'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='vpclmulqdq'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='xfd'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='xsaves'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='Haswell'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='erms'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='hle'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='invpcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='rtm'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='Haswell-IBRS'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='erms'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='hle'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='invpcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='rtm'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='Haswell-noTSX'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='erms'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='invpcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='Haswell-noTSX-IBRS'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='erms'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='invpcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='Haswell-v1'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='erms'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='hle'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='invpcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='rtm'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='Haswell-v2'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='erms'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='invpcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='Haswell-v3'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='erms'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='hle'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='invpcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='rtm'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='Haswell-v4'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='erms'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='invpcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='Icelake-Server'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512-vpopcntdq'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512bitalg'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512bw'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512cd'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512dq'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512f'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vbmi'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vbmi2'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vl'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vnni'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='erms'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='gfni'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='hle'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='invpcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='la57'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pku'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='rtm'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='vaes'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='vpclmulqdq'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='Icelake-Server-noTSX'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512-vpopcntdq'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512bitalg'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512bw'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512cd'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512dq'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512f'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vbmi'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vbmi2'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vl'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vnni'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='erms'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='gfni'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='invpcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='la57'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pku'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='vaes'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='vpclmulqdq'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='Icelake-Server-v1'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512-vpopcntdq'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512bitalg'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512bw'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512cd'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512dq'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512f'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vbmi'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vbmi2'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vl'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vnni'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='erms'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='gfni'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='hle'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='invpcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='la57'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pku'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='rtm'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='vaes'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='vpclmulqdq'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='Icelake-Server-v2'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512-vpopcntdq'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512bitalg'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512bw'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512cd'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512dq'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512f'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vbmi'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vbmi2'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vl'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vnni'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='erms'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='gfni'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='invpcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='la57'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pku'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='vaes'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='vpclmulqdq'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='Icelake-Server-v3'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512-vpopcntdq'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512bitalg'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512bw'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512cd'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512dq'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512f'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vbmi'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vbmi2'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vl'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vnni'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='erms'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='gfni'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='ibrs-all'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='invpcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='la57'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pku'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='taa-no'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='vaes'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='vpclmulqdq'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='Icelake-Server-v4'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512-vpopcntdq'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512bitalg'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512bw'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512cd'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512dq'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512f'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512ifma'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vbmi'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vbmi2'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vl'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vnni'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='erms'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='fsrm'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='gfni'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='ibrs-all'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='invpcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='la57'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pku'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='taa-no'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='vaes'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='vpclmulqdq'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='Icelake-Server-v5'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512-vpopcntdq'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512bitalg'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512bw'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512cd'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512dq'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512f'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512ifma'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vbmi'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vbmi2'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vl'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vnni'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='erms'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='fsrm'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='gfni'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='ibrs-all'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='invpcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='la57'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pku'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='taa-no'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='vaes'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='vpclmulqdq'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='xsaves'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='Icelake-Server-v6'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512-vpopcntdq'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512bitalg'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512bw'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512cd'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512dq'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512f'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512ifma'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vbmi'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vbmi2'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vl'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vnni'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='erms'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='fsrm'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='gfni'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='ibrs-all'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='invpcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='la57'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pku'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='taa-no'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='vaes'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='vpclmulqdq'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='xsaves'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='Icelake-Server-v7'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512-vpopcntdq'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512bitalg'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512bw'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512cd'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512dq'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512f'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512ifma'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vbmi'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vbmi2'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vl'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vnni'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='erms'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='fsrm'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='gfni'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='hle'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='ibrs-all'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='invpcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='la57'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pku'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='rtm'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='taa-no'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='vaes'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='vpclmulqdq'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='xsaves'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='IvyBridge'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='erms'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='IvyBridge-IBRS'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='erms'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='IvyBridge-v1'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='erms'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='IvyBridge-v2'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='erms'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='KnightsMill'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512-4fmaps'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512-4vnniw'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512-vpopcntdq'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512cd'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512er'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512f'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512pf'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='erms'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='ss'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='KnightsMill-v1'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512-4fmaps'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512-4vnniw'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512-vpopcntdq'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512cd'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512er'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512f'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512pf'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='erms'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='ss'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='Opteron_G4'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='fma4'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='xop'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='Opteron_G4-v1'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='fma4'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='xop'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='Opteron_G5'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='fma4'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='tbm'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='xop'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='Opteron_G5-v1'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='fma4'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='tbm'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='xop'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='SapphireRapids'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='amx-bf16'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='amx-int8'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='amx-tile'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx-vnni'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512-bf16'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512-fp16'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512-vpopcntdq'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512bitalg'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512bw'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512cd'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512dq'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512f'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512ifma'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vbmi'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vbmi2'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vl'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vnni'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='bus-lock-detect'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='erms'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='fsrc'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='fsrm'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='fsrs'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='fzrm'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='gfni'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='hle'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='ibrs-all'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='invpcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='la57'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pku'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='rtm'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='serialize'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='taa-no'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='tsx-ldtrk'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='vaes'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='vpclmulqdq'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='xfd'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='xsaves'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='SapphireRapids-v1'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='amx-bf16'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='amx-int8'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='amx-tile'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx-vnni'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512-bf16'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512-fp16'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512-vpopcntdq'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512bitalg'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512bw'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512cd'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512dq'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512f'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512ifma'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vbmi'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vbmi2'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vl'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vnni'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='bus-lock-detect'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='erms'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='fsrc'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='fsrm'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='fsrs'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='fzrm'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='gfni'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='hle'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='ibrs-all'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='invpcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='la57'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pku'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='rtm'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='serialize'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='taa-no'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='tsx-ldtrk'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='vaes'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='vpclmulqdq'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='xfd'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='xsaves'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='SapphireRapids-v2'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='amx-bf16'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='amx-int8'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='amx-tile'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx-vnni'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512-bf16'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512-fp16'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512-vpopcntdq'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512bitalg'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512bw'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512cd'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512dq'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512f'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512ifma'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vbmi'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vbmi2'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vl'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vnni'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='bus-lock-detect'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='erms'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='fbsdp-no'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='fsrc'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='fsrm'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='fsrs'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='fzrm'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='gfni'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='hle'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='ibrs-all'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='invpcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='la57'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pku'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='psdp-no'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='rtm'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='sbdr-ssdp-no'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='serialize'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='taa-no'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='tsx-ldtrk'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='vaes'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='vpclmulqdq'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='xfd'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='xsaves'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='SapphireRapids-v3'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='amx-bf16'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='amx-int8'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='amx-tile'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx-vnni'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512-bf16'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512-fp16'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512-vpopcntdq'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512bitalg'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512bw'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512cd'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512dq'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512f'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512ifma'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vbmi'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vbmi2'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vl'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vnni'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='bus-lock-detect'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='cldemote'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='erms'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='fbsdp-no'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='fsrc'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='fsrm'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='fsrs'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='fzrm'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='gfni'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='hle'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='ibrs-all'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='invpcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='la57'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='movdir64b'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='movdiri'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pku'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='psdp-no'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='rtm'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='sbdr-ssdp-no'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='serialize'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='ss'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='taa-no'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='tsx-ldtrk'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='vaes'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='vpclmulqdq'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='xfd'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='xsaves'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='SierraForest'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx-ifma'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx-ne-convert'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx-vnni'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx-vnni-int8'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='bus-lock-detect'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='cmpccxadd'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='erms'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='fbsdp-no'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='fsrm'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='fsrs'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='gfni'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='ibrs-all'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='invpcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='mcdt-no'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pbrsb-no'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pku'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='psdp-no'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='sbdr-ssdp-no'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='serialize'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='vaes'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='vpclmulqdq'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='xsaves'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='SierraForest-v1'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx-ifma'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx-ne-convert'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx-vnni'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx-vnni-int8'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='bus-lock-detect'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='cmpccxadd'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='erms'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='fbsdp-no'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='fsrm'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='fsrs'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='gfni'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='ibrs-all'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='invpcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='mcdt-no'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pbrsb-no'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pku'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='psdp-no'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='sbdr-ssdp-no'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='serialize'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='vaes'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='vpclmulqdq'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='xsaves'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='Skylake-Client'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='erms'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='hle'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='invpcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='rtm'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='Skylake-Client-IBRS'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='erms'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='hle'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='invpcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='rtm'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='erms'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='invpcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='Skylake-Client-v1'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='erms'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='hle'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='invpcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='rtm'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='Skylake-Client-v2'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='erms'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='hle'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='invpcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='rtm'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='Skylake-Client-v3'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='erms'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='invpcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='Skylake-Client-v4'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='erms'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='invpcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='xsaves'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='Skylake-Server'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512bw'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512cd'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512dq'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512f'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vl'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='erms'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='hle'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='invpcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pku'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='rtm'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='Skylake-Server-IBRS'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512bw'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512cd'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512dq'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512f'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vl'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='erms'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='hle'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='invpcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pku'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='rtm'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512bw'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512cd'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512dq'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512f'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vl'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='erms'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='invpcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pku'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='Skylake-Server-v1'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512bw'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512cd'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512dq'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512f'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vl'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='erms'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='hle'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='invpcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pku'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='rtm'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='Skylake-Server-v2'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512bw'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512cd'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512dq'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512f'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vl'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='erms'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='hle'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='invpcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pku'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='rtm'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='Skylake-Server-v3'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512bw'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512cd'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512dq'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512f'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vl'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='erms'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='invpcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pku'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='Skylake-Server-v4'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512bw'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512cd'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512dq'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512f'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vl'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='erms'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='invpcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pku'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='Skylake-Server-v5'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512bw'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512cd'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512dq'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512f'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vl'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='erms'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='invpcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pku'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='xsaves'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='Snowridge'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='cldemote'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='core-capability'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='erms'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='gfni'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='movdir64b'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='movdiri'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='mpx'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='split-lock-detect'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='Snowridge-v1'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='cldemote'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='core-capability'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='erms'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='gfni'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='movdir64b'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='movdiri'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='mpx'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='split-lock-detect'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='Snowridge-v2'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='cldemote'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='core-capability'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='erms'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='gfni'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='movdir64b'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='movdiri'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='split-lock-detect'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='Snowridge-v3'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='cldemote'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='core-capability'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='erms'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='gfni'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='movdir64b'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='movdiri'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='split-lock-detect'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='xsaves'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='Snowridge-v4'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='cldemote'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='erms'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='gfni'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='movdir64b'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='movdiri'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='xsaves'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='athlon'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='3dnow'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='3dnowext'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='athlon-v1'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='3dnow'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='3dnowext'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='core2duo'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='ss'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='core2duo-v1'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='ss'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='coreduo'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='ss'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='coreduo-v1'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='ss'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='n270'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='ss'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='n270-v1'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='ss'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='phenom'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='3dnow'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='3dnowext'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='phenom-v1'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='3dnow'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='3dnowext'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:    </mode>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:  </cpu>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:  <memoryBacking supported='yes'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:    <enum name='sourceType'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <value>file</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <value>anonymous</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <value>memfd</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:    </enum>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:  </memoryBacking>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:  <devices>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:    <disk supported='yes'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <enum name='diskDevice'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>disk</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>cdrom</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>floppy</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>lun</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </enum>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <enum name='bus'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>ide</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>fdc</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>scsi</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>virtio</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>usb</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>sata</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </enum>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <enum name='model'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>virtio</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>virtio-transitional</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>virtio-non-transitional</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </enum>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:    </disk>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:    <graphics supported='yes'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <enum name='type'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>vnc</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>egl-headless</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>dbus</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </enum>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:    </graphics>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:    <video supported='yes'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <enum name='modelType'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>vga</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>cirrus</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>virtio</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>none</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>bochs</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>ramfb</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </enum>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:    </video>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:    <hostdev supported='yes'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <enum name='mode'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>subsystem</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </enum>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <enum name='startupPolicy'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>default</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>mandatory</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>requisite</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>optional</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </enum>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <enum name='subsysType'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>usb</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>pci</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>scsi</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </enum>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <enum name='capsType'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <enum name='pciBackend'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:    </hostdev>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:    <rng supported='yes'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <enum name='model'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>virtio</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>virtio-transitional</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>virtio-non-transitional</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </enum>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <enum name='backendModel'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>random</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>egd</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>builtin</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </enum>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:    </rng>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:    <filesystem supported='yes'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <enum name='driverType'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>path</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>handle</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>virtiofs</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </enum>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:    </filesystem>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:    <tpm supported='yes'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <enum name='model'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>tpm-tis</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>tpm-crb</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </enum>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <enum name='backendModel'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>emulator</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>external</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </enum>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <enum name='backendVersion'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>2.0</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </enum>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:    </tpm>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:    <redirdev supported='yes'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <enum name='bus'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>usb</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </enum>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:    </redirdev>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:    <channel supported='yes'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <enum name='type'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>pty</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>unix</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </enum>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:    </channel>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:    <crypto supported='yes'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <enum name='model'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <enum name='type'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>qemu</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </enum>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <enum name='backendModel'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>builtin</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </enum>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:    </crypto>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:    <interface supported='yes'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <enum name='backendType'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>default</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>passt</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </enum>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:    </interface>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:    <panic supported='yes'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <enum name='model'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>isa</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>hyperv</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </enum>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:    </panic>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:    <console supported='yes'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <enum name='type'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>null</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>vc</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>pty</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>dev</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>file</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>pipe</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>stdio</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>udp</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>tcp</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>unix</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>qemu-vdagent</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>dbus</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </enum>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:    </console>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:  </devices>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:  <features>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:    <gic supported='no'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:    <vmcoreinfo supported='yes'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:    <genid supported='yes'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:    <backingStoreInput supported='yes'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:    <backup supported='yes'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:    <async-teardown supported='yes'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:    <ps2 supported='yes'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:    <sev supported='no'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:    <sgx supported='no'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:    <hyperv supported='yes'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <enum name='features'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>relaxed</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>vapic</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>spinlocks</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>vpindex</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>runtime</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>synic</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>stimer</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>reset</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>vendor_id</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>frequencies</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>reenlightenment</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>tlbflush</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>ipi</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>avic</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>emsr_bitmap</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>xmm_input</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </enum>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <defaults>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <spinlocks>4095</spinlocks>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <stimer_direct>on</stimer_direct>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <tlbflush_direct>on</tlbflush_direct>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <tlbflush_extended>on</tlbflush_extended>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <vendor_id>Linux KVM Hv</vendor_id>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </defaults>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:    </hyperv>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:    <launchSecurity supported='yes'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <enum name='sectype'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>tdx</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </enum>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:    </launchSecurity>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:  </features>
Dec  6 01:52:29 np0005548731 nova_compute[231439]: </domainCapabilities>
Dec  6 01:52:29 np0005548731 nova_compute[231439]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037#033[00m
Dec  6 01:52:29 np0005548731 nova_compute[231439]: 2025-12-06 06:52:29.595 231443 DEBUG nova.virt.libvirt.host [None req-87b5e55b-69ed-4537-972f-992742969399 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=q35:
Dec  6 01:52:29 np0005548731 nova_compute[231439]: <domainCapabilities>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:  <path>/usr/libexec/qemu-kvm</path>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:  <domain>kvm</domain>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:  <machine>pc-q35-rhel9.8.0</machine>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:  <arch>i686</arch>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:  <vcpu max='4096'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:  <iothreads supported='yes'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:  <os supported='yes'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:    <enum name='firmware'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:    <loader supported='yes'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <enum name='type'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>rom</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>pflash</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </enum>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <enum name='readonly'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>yes</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>no</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </enum>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <enum name='secure'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>no</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </enum>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:    </loader>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:  </os>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:  <cpu>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:    <mode name='host-passthrough' supported='yes'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <enum name='hostPassthroughMigratable'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>on</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>off</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </enum>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:    </mode>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:    <mode name='maximum' supported='yes'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <enum name='maximumMigratable'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>on</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>off</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </enum>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:    </mode>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:    <mode name='host-model' supported='yes'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model fallback='forbid'>EPYC-Rome</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <vendor>AMD</vendor>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <maxphysaddr mode='passthrough' limit='40'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <feature policy='require' name='x2apic'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <feature policy='require' name='tsc-deadline'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <feature policy='require' name='hypervisor'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <feature policy='require' name='tsc_adjust'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <feature policy='require' name='spec-ctrl'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <feature policy='require' name='stibp'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <feature policy='require' name='ssbd'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <feature policy='require' name='cmp_legacy'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <feature policy='require' name='overflow-recov'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <feature policy='require' name='succor'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <feature policy='require' name='ibrs'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <feature policy='require' name='amd-ssbd'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <feature policy='require' name='virt-ssbd'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <feature policy='require' name='lbrv'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <feature policy='require' name='tsc-scale'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <feature policy='require' name='vmcb-clean'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <feature policy='require' name='flushbyasid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <feature policy='require' name='pause-filter'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <feature policy='require' name='pfthreshold'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <feature policy='require' name='svme-addr-chk'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <feature policy='require' name='lfence-always-serializing'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <feature policy='disable' name='xsaves'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:    </mode>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:    <mode name='custom' supported='yes'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='Broadwell'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='erms'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='hle'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='invpcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='rtm'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='Broadwell-IBRS'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='erms'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='hle'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='invpcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='rtm'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='Broadwell-noTSX'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='erms'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='invpcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='Broadwell-noTSX-IBRS'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='erms'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='invpcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='Broadwell-v1'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='erms'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='hle'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='invpcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='rtm'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='Broadwell-v2'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='erms'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='invpcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='Broadwell-v3'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='erms'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='hle'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='invpcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='rtm'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='Broadwell-v4'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='erms'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='invpcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='Cascadelake-Server'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512bw'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512cd'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512dq'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512f'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vl'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vnni'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='erms'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='hle'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='invpcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pku'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='rtm'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='Cascadelake-Server-noTSX'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512bw'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512cd'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512dq'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512f'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vl'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vnni'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='erms'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='ibrs-all'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='invpcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pku'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='Cascadelake-Server-v1'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512bw'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512cd'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512dq'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512f'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vl'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vnni'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='erms'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='hle'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='invpcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pku'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='rtm'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='Cascadelake-Server-v2'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512bw'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512cd'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512dq'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512f'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vl'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vnni'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='erms'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='hle'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='ibrs-all'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='invpcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pku'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='rtm'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='Cascadelake-Server-v3'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512bw'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512cd'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512dq'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512f'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vl'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vnni'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='erms'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='ibrs-all'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='invpcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pku'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='Cascadelake-Server-v4'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512bw'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512cd'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512dq'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512f'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vl'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vnni'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='erms'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='ibrs-all'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='invpcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pku'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='Cascadelake-Server-v5'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512bw'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512cd'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512dq'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512f'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vl'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vnni'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='erms'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='ibrs-all'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='invpcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pku'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='xsaves'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='Cooperlake'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512-bf16'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512bw'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512cd'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512dq'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512f'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vl'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vnni'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='erms'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='hle'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='ibrs-all'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='invpcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pku'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='rtm'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='taa-no'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='Cooperlake-v1'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512-bf16'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512bw'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512cd'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512dq'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512f'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vl'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vnni'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='erms'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='hle'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='ibrs-all'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='invpcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pku'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='rtm'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='taa-no'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='Cooperlake-v2'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512-bf16'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512bw'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512cd'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512dq'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512f'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vl'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vnni'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='erms'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='hle'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='ibrs-all'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='invpcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pku'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='rtm'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='taa-no'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='xsaves'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='Denverton'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='erms'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='mpx'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='Denverton-v1'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='erms'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='mpx'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='Denverton-v2'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='erms'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='Denverton-v3'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='erms'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='xsaves'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='Dhyana-v2'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='xsaves'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='EPYC-Genoa'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='amd-psfd'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='auto-ibrs'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512-bf16'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512-vpopcntdq'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512bitalg'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512bw'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512cd'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512dq'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512f'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512ifma'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vbmi'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vbmi2'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vl'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vnni'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='erms'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='fsrm'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='gfni'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='invpcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='la57'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='no-nested-data-bp'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='null-sel-clr-base'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pku'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='stibp-always-on'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='vaes'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='vpclmulqdq'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='xsaves'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='EPYC-Genoa-v1'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='amd-psfd'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='auto-ibrs'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512-bf16'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512-vpopcntdq'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512bitalg'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512bw'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512cd'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512dq'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512f'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512ifma'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vbmi'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vbmi2'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vl'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vnni'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='erms'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='fsrm'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='gfni'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='invpcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='la57'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='no-nested-data-bp'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='null-sel-clr-base'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pku'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='stibp-always-on'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='vaes'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='vpclmulqdq'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='xsaves'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='EPYC-Milan'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='erms'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='fsrm'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='invpcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pku'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='xsaves'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='EPYC-Milan-v1'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='erms'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='fsrm'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='invpcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pku'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='xsaves'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='EPYC-Milan-v2'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='amd-psfd'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='erms'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='fsrm'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='invpcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='no-nested-data-bp'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='null-sel-clr-base'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pku'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='stibp-always-on'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='vaes'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='vpclmulqdq'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='xsaves'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='EPYC-Rome'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='xsaves'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='EPYC-Rome-v1'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='xsaves'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='EPYC-Rome-v2'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='xsaves'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='EPYC-Rome-v3'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='xsaves'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='EPYC-v3'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='xsaves'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='EPYC-v4'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='xsaves'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='GraniteRapids'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='amx-bf16'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='amx-fp16'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='amx-int8'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='amx-tile'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx-vnni'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512-bf16'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512-fp16'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512-vpopcntdq'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512bitalg'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512bw'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512cd'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512dq'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512f'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512ifma'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vbmi'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vbmi2'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vl'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vnni'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='bus-lock-detect'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='erms'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='fbsdp-no'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='fsrc'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='fsrm'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='fsrs'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='fzrm'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='gfni'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='hle'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='ibrs-all'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='invpcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='la57'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='mcdt-no'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pbrsb-no'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pku'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='prefetchiti'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='psdp-no'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='rtm'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='sbdr-ssdp-no'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='serialize'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='taa-no'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='tsx-ldtrk'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='vaes'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='vpclmulqdq'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='xfd'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='xsaves'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='GraniteRapids-v1'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='amx-bf16'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='amx-fp16'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='amx-int8'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='amx-tile'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx-vnni'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512-bf16'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512-fp16'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512-vpopcntdq'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512bitalg'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512bw'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512cd'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512dq'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512f'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512ifma'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vbmi'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vbmi2'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vl'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vnni'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='bus-lock-detect'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='erms'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='fbsdp-no'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='fsrc'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='fsrm'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='fsrs'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='fzrm'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='gfni'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='hle'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='ibrs-all'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='invpcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='la57'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='mcdt-no'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pbrsb-no'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pku'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='prefetchiti'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='psdp-no'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='rtm'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='sbdr-ssdp-no'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='serialize'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='taa-no'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='tsx-ldtrk'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='vaes'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='vpclmulqdq'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='xfd'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='xsaves'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='GraniteRapids-v2'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='amx-bf16'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='amx-fp16'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='amx-int8'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='amx-tile'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx-vnni'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx10'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx10-128'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx10-256'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx10-512'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512-bf16'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512-fp16'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512-vpopcntdq'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512bitalg'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512bw'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512cd'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512dq'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512f'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512ifma'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vbmi'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vbmi2'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vl'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vnni'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='bus-lock-detect'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='cldemote'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='erms'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='fbsdp-no'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='fsrc'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='fsrm'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='fsrs'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='fzrm'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='gfni'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='hle'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='ibrs-all'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='invpcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='la57'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='mcdt-no'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='movdir64b'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='movdiri'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pbrsb-no'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pku'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='prefetchiti'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='psdp-no'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='rtm'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='sbdr-ssdp-no'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='serialize'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='ss'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='taa-no'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='tsx-ldtrk'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='vaes'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='vpclmulqdq'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='xfd'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='xsaves'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='Haswell'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='erms'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='hle'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='invpcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='rtm'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='Haswell-IBRS'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='erms'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='hle'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='invpcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='rtm'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='Haswell-noTSX'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='erms'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='invpcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='Haswell-noTSX-IBRS'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='erms'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='invpcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='Haswell-v1'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='erms'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='hle'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='invpcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='rtm'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='Haswell-v2'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='erms'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='invpcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='Haswell-v3'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='erms'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='hle'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='invpcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='rtm'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='Haswell-v4'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='erms'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='invpcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='Icelake-Server'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512-vpopcntdq'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512bitalg'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512bw'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512cd'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512dq'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512f'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vbmi'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vbmi2'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vl'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vnni'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='erms'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='gfni'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='hle'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='invpcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='la57'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pku'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='rtm'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='vaes'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='vpclmulqdq'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='Icelake-Server-noTSX'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512-vpopcntdq'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512bitalg'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512bw'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512cd'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512dq'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512f'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vbmi'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vbmi2'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vl'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vnni'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='erms'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='gfni'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='invpcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='la57'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pku'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='vaes'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='vpclmulqdq'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='Icelake-Server-v1'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512-vpopcntdq'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512bitalg'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512bw'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512cd'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512dq'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512f'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vbmi'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vbmi2'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vl'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vnni'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='erms'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='gfni'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='hle'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='invpcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='la57'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pku'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='rtm'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='vaes'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='vpclmulqdq'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='Icelake-Server-v2'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512-vpopcntdq'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512bitalg'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512bw'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512cd'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512dq'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512f'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vbmi'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vbmi2'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vl'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vnni'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='erms'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='gfni'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='invpcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='la57'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pku'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='vaes'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='vpclmulqdq'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='Icelake-Server-v3'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512-vpopcntdq'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512bitalg'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512bw'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512cd'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512dq'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512f'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vbmi'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vbmi2'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vl'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vnni'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='erms'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='gfni'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='ibrs-all'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='invpcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='la57'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pku'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='taa-no'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='vaes'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='vpclmulqdq'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='Icelake-Server-v4'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512-vpopcntdq'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512bitalg'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512bw'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512cd'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512dq'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512f'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512ifma'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vbmi'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vbmi2'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vl'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vnni'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='erms'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='fsrm'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='gfni'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='ibrs-all'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='invpcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='la57'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pku'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='taa-no'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='vaes'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='vpclmulqdq'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='Icelake-Server-v5'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512-vpopcntdq'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512bitalg'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512bw'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512cd'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512dq'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512f'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512ifma'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vbmi'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vbmi2'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vl'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vnni'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='erms'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='fsrm'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='gfni'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='ibrs-all'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='invpcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='la57'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pku'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='taa-no'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='vaes'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='vpclmulqdq'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='xsaves'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='Icelake-Server-v6'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512-vpopcntdq'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512bitalg'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512bw'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512cd'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512dq'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512f'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512ifma'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vbmi'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vbmi2'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vl'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vnni'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='erms'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='fsrm'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='gfni'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='ibrs-all'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='invpcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='la57'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pku'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='taa-no'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='vaes'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='vpclmulqdq'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='xsaves'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='Icelake-Server-v7'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512-vpopcntdq'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512bitalg'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512bw'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512cd'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512dq'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512f'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512ifma'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vbmi'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vbmi2'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vl'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vnni'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='erms'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='fsrm'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='gfni'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='hle'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='ibrs-all'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='invpcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='la57'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pku'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='rtm'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='taa-no'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='vaes'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='vpclmulqdq'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='xsaves'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='IvyBridge'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='erms'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='IvyBridge-IBRS'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='erms'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='IvyBridge-v1'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='erms'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='IvyBridge-v2'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='erms'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='KnightsMill'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512-4fmaps'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512-4vnniw'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512-vpopcntdq'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512cd'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512er'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512f'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512pf'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='erms'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='ss'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='KnightsMill-v1'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512-4fmaps'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512-4vnniw'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512-vpopcntdq'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512cd'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512er'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512f'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512pf'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='erms'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='ss'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='Opteron_G4'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='fma4'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='xop'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='Opteron_G4-v1'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='fma4'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='xop'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='Opteron_G5'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='fma4'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='tbm'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='xop'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='Opteron_G5-v1'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='fma4'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='tbm'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='xop'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='SapphireRapids'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='amx-bf16'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='amx-int8'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='amx-tile'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx-vnni'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512-bf16'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512-fp16'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512-vpopcntdq'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512bitalg'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512bw'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512cd'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512dq'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512f'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512ifma'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vbmi'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vbmi2'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vl'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vnni'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='bus-lock-detect'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='erms'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='fsrc'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='fsrm'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='fsrs'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='fzrm'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='gfni'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='hle'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='ibrs-all'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='invpcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='la57'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pku'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='rtm'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='serialize'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='taa-no'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='tsx-ldtrk'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='vaes'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='vpclmulqdq'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='xfd'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='xsaves'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='SapphireRapids-v1'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='amx-bf16'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='amx-int8'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='amx-tile'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx-vnni'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512-bf16'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512-fp16'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512-vpopcntdq'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512bitalg'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512bw'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512cd'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512dq'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512f'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512ifma'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vbmi'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vbmi2'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vl'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vnni'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='bus-lock-detect'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='erms'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='fsrc'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='fsrm'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='fsrs'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='fzrm'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='gfni'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='hle'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='ibrs-all'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='invpcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='la57'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pku'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='rtm'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='serialize'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='taa-no'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='tsx-ldtrk'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='vaes'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='vpclmulqdq'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='xfd'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='xsaves'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='SapphireRapids-v2'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='amx-bf16'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='amx-int8'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='amx-tile'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx-vnni'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512-bf16'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512-fp16'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512-vpopcntdq'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512bitalg'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512bw'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512cd'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512dq'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512f'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512ifma'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vbmi'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vbmi2'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vl'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vnni'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='bus-lock-detect'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='erms'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='fbsdp-no'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='fsrc'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='fsrm'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='fsrs'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='fzrm'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='gfni'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='hle'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='ibrs-all'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='invpcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='la57'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pku'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='psdp-no'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='rtm'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='sbdr-ssdp-no'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='serialize'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='taa-no'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='tsx-ldtrk'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='vaes'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='vpclmulqdq'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='xfd'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='xsaves'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='SapphireRapids-v3'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='amx-bf16'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='amx-int8'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='amx-tile'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx-vnni'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512-bf16'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512-fp16'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512-vpopcntdq'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512bitalg'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512bw'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512cd'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512dq'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512f'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512ifma'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vbmi'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vbmi2'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vl'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vnni'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='bus-lock-detect'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='cldemote'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='erms'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='fbsdp-no'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='fsrc'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='fsrm'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='fsrs'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='fzrm'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='gfni'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='hle'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='ibrs-all'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='invpcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='la57'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='movdir64b'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='movdiri'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pku'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='psdp-no'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='rtm'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='sbdr-ssdp-no'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='serialize'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='ss'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='taa-no'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='tsx-ldtrk'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='vaes'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='vpclmulqdq'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='xfd'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='xsaves'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='SierraForest'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx-ifma'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx-ne-convert'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx-vnni'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx-vnni-int8'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='bus-lock-detect'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='cmpccxadd'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='erms'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='fbsdp-no'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='fsrm'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='fsrs'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='gfni'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='ibrs-all'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='invpcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='mcdt-no'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pbrsb-no'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pku'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='psdp-no'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='sbdr-ssdp-no'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='serialize'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='vaes'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='vpclmulqdq'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='xsaves'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='SierraForest-v1'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx-ifma'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx-ne-convert'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx-vnni'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx-vnni-int8'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='bus-lock-detect'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='cmpccxadd'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='erms'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='fbsdp-no'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='fsrm'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='fsrs'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='gfni'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='ibrs-all'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='invpcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='mcdt-no'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pbrsb-no'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pku'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='psdp-no'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='sbdr-ssdp-no'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='serialize'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='vaes'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='vpclmulqdq'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='xsaves'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='Skylake-Client'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='erms'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='hle'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='invpcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='rtm'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='Skylake-Client-IBRS'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='erms'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='hle'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='invpcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='rtm'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='erms'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='invpcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='Skylake-Client-v1'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='erms'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='hle'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='invpcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='rtm'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='Skylake-Client-v2'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='erms'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='hle'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='invpcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='rtm'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='Skylake-Client-v3'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='erms'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='invpcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='Skylake-Client-v4'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='erms'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='invpcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='xsaves'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='Skylake-Server'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512bw'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512cd'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512dq'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512f'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vl'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='erms'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='hle'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='invpcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pku'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='rtm'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='Skylake-Server-IBRS'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512bw'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512cd'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512dq'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512f'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vl'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='erms'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='hle'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='invpcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pku'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='rtm'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512bw'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512cd'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512dq'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512f'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vl'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='erms'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='invpcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pku'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='Skylake-Server-v1'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512bw'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512cd'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512dq'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512f'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vl'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='erms'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='hle'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='invpcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pku'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='rtm'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='Skylake-Server-v2'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512bw'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512cd'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512dq'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512f'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vl'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='erms'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='hle'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='invpcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pku'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='rtm'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='Skylake-Server-v3'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512bw'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512cd'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512dq'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512f'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vl'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='erms'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='invpcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pku'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='Skylake-Server-v4'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512bw'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512cd'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512dq'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512f'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vl'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='erms'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='invpcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pku'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='Skylake-Server-v5'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512bw'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512cd'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512dq'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512f'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vl'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='erms'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='invpcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pku'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='xsaves'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='Snowridge'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='cldemote'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='core-capability'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='erms'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='gfni'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='movdir64b'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='movdiri'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='mpx'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='split-lock-detect'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='Snowridge-v1'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='cldemote'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='core-capability'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='erms'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='gfni'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='movdir64b'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='movdiri'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='mpx'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='split-lock-detect'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='Snowridge-v2'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='cldemote'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='core-capability'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='erms'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='gfni'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='movdir64b'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='movdiri'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='split-lock-detect'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='Snowridge-v3'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='cldemote'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='core-capability'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='erms'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='gfni'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='movdir64b'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='movdiri'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='split-lock-detect'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='xsaves'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='Snowridge-v4'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='cldemote'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='erms'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='gfni'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='movdir64b'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='movdiri'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='xsaves'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='athlon'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='3dnow'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='3dnowext'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='athlon-v1'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='3dnow'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='3dnowext'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='core2duo'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='ss'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='core2duo-v1'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='ss'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='coreduo'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='ss'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='coreduo-v1'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='ss'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='n270'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='ss'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='n270-v1'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='ss'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='phenom'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='3dnow'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='3dnowext'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='phenom-v1'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='3dnow'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='3dnowext'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:    </mode>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:  </cpu>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:  <memoryBacking supported='yes'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:    <enum name='sourceType'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <value>file</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <value>anonymous</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <value>memfd</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:    </enum>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:  </memoryBacking>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:  <devices>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:    <disk supported='yes'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <enum name='diskDevice'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>disk</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>cdrom</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>floppy</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>lun</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </enum>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <enum name='bus'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>fdc</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>scsi</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>virtio</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>usb</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>sata</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </enum>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <enum name='model'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>virtio</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>virtio-transitional</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>virtio-non-transitional</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </enum>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:    </disk>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:    <graphics supported='yes'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <enum name='type'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>vnc</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>egl-headless</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>dbus</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </enum>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:    </graphics>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:    <video supported='yes'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <enum name='modelType'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>vga</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>cirrus</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>virtio</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>none</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>bochs</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>ramfb</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </enum>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:    </video>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:    <hostdev supported='yes'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <enum name='mode'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>subsystem</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </enum>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <enum name='startupPolicy'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>default</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>mandatory</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>requisite</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>optional</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </enum>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <enum name='subsysType'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>usb</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>pci</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>scsi</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </enum>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <enum name='capsType'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <enum name='pciBackend'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:    </hostdev>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:    <rng supported='yes'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <enum name='model'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>virtio</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>virtio-transitional</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>virtio-non-transitional</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </enum>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <enum name='backendModel'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>random</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>egd</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>builtin</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </enum>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:    </rng>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:    <filesystem supported='yes'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <enum name='driverType'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>path</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>handle</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>virtiofs</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </enum>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:    </filesystem>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:    <tpm supported='yes'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <enum name='model'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>tpm-tis</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>tpm-crb</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </enum>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <enum name='backendModel'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>emulator</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>external</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </enum>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <enum name='backendVersion'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>2.0</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </enum>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:    </tpm>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:    <redirdev supported='yes'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <enum name='bus'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>usb</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </enum>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:    </redirdev>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:    <channel supported='yes'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <enum name='type'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>pty</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>unix</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </enum>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:    </channel>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:    <crypto supported='yes'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <enum name='model'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <enum name='type'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>qemu</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </enum>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <enum name='backendModel'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>builtin</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </enum>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:    </crypto>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:    <interface supported='yes'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <enum name='backendType'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>default</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>passt</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </enum>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:    </interface>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:    <panic supported='yes'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <enum name='model'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>isa</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>hyperv</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </enum>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:    </panic>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:    <console supported='yes'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <enum name='type'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>null</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>vc</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>pty</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>dev</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>file</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>pipe</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>stdio</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>udp</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>tcp</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>unix</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>qemu-vdagent</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>dbus</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </enum>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:    </console>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:  </devices>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:  <features>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:    <gic supported='no'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:    <vmcoreinfo supported='yes'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:    <genid supported='yes'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:    <backingStoreInput supported='yes'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:    <backup supported='yes'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:    <async-teardown supported='yes'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:    <ps2 supported='yes'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:    <sev supported='no'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:    <sgx supported='no'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:    <hyperv supported='yes'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <enum name='features'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>relaxed</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>vapic</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>spinlocks</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>vpindex</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>runtime</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>synic</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>stimer</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>reset</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>vendor_id</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>frequencies</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>reenlightenment</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>tlbflush</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>ipi</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>avic</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>emsr_bitmap</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>xmm_input</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </enum>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <defaults>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <spinlocks>4095</spinlocks>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <stimer_direct>on</stimer_direct>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <tlbflush_direct>on</tlbflush_direct>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <tlbflush_extended>on</tlbflush_extended>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <vendor_id>Linux KVM Hv</vendor_id>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </defaults>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:    </hyperv>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:    <launchSecurity supported='yes'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <enum name='sectype'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>tdx</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </enum>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:    </launchSecurity>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:  </features>
Dec  6 01:52:29 np0005548731 nova_compute[231439]: </domainCapabilities>
Dec  6 01:52:29 np0005548731 nova_compute[231439]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037#033[00m
Dec  6 01:52:29 np0005548731 nova_compute[231439]: 2025-12-06 06:52:29.622 231443 DEBUG nova.virt.libvirt.host [None req-87b5e55b-69ed-4537-972f-992742969399 - - - - - -] Getting domain capabilities for x86_64 via machine types: {'pc', 'q35'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952#033[00m
Dec  6 01:52:29 np0005548731 nova_compute[231439]: 2025-12-06 06:52:29.626 231443 DEBUG nova.virt.libvirt.host [None req-87b5e55b-69ed-4537-972f-992742969399 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=pc:
Dec  6 01:52:29 np0005548731 nova_compute[231439]: <domainCapabilities>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:  <path>/usr/libexec/qemu-kvm</path>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:  <domain>kvm</domain>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:  <machine>pc-i440fx-rhel7.6.0</machine>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:  <arch>x86_64</arch>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:  <vcpu max='240'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:  <iothreads supported='yes'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:  <os supported='yes'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:    <enum name='firmware'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:    <loader supported='yes'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <enum name='type'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>rom</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>pflash</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </enum>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <enum name='readonly'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>yes</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>no</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </enum>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <enum name='secure'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>no</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </enum>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:    </loader>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:  </os>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:  <cpu>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:    <mode name='host-passthrough' supported='yes'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <enum name='hostPassthroughMigratable'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>on</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>off</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </enum>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:    </mode>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:    <mode name='maximum' supported='yes'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <enum name='maximumMigratable'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>on</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>off</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </enum>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:    </mode>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:    <mode name='host-model' supported='yes'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model fallback='forbid'>EPYC-Rome</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <vendor>AMD</vendor>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <maxphysaddr mode='passthrough' limit='40'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <feature policy='require' name='x2apic'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <feature policy='require' name='tsc-deadline'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <feature policy='require' name='hypervisor'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <feature policy='require' name='tsc_adjust'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <feature policy='require' name='spec-ctrl'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <feature policy='require' name='stibp'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <feature policy='require' name='ssbd'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <feature policy='require' name='cmp_legacy'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <feature policy='require' name='overflow-recov'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <feature policy='require' name='succor'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <feature policy='require' name='ibrs'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <feature policy='require' name='amd-ssbd'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <feature policy='require' name='virt-ssbd'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <feature policy='require' name='lbrv'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <feature policy='require' name='tsc-scale'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <feature policy='require' name='vmcb-clean'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <feature policy='require' name='flushbyasid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <feature policy='require' name='pause-filter'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <feature policy='require' name='pfthreshold'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <feature policy='require' name='svme-addr-chk'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <feature policy='require' name='lfence-always-serializing'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <feature policy='disable' name='xsaves'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:    </mode>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:    <mode name='custom' supported='yes'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='Broadwell'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='erms'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='hle'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='invpcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='rtm'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='Broadwell-IBRS'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='erms'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='hle'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='invpcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='rtm'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='Broadwell-noTSX'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='erms'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='invpcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='Broadwell-noTSX-IBRS'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='erms'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='invpcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='Broadwell-v1'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='erms'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='hle'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='invpcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='rtm'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='Broadwell-v2'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='erms'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='invpcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='Broadwell-v3'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='erms'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='hle'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='invpcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='rtm'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='Broadwell-v4'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='erms'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='invpcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='Cascadelake-Server'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512bw'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512cd'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512dq'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512f'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vl'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vnni'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='erms'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='hle'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='invpcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pku'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='rtm'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='Cascadelake-Server-noTSX'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512bw'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512cd'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512dq'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512f'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vl'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vnni'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='erms'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='ibrs-all'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='invpcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pku'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='Cascadelake-Server-v1'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512bw'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512cd'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512dq'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512f'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vl'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vnni'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='erms'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='hle'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='invpcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pku'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='rtm'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='Cascadelake-Server-v2'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512bw'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512cd'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512dq'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512f'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vl'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vnni'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='erms'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='hle'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='ibrs-all'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='invpcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pku'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='rtm'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='Cascadelake-Server-v3'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512bw'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512cd'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512dq'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512f'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vl'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vnni'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='erms'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='ibrs-all'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='invpcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pku'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='Cascadelake-Server-v4'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512bw'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512cd'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512dq'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512f'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vl'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vnni'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='erms'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='ibrs-all'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='invpcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pku'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='Cascadelake-Server-v5'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512bw'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512cd'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512dq'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512f'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vl'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vnni'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='erms'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='ibrs-all'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='invpcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pku'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='xsaves'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='Cooperlake'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512-bf16'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512bw'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512cd'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512dq'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512f'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vl'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vnni'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='erms'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='hle'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='ibrs-all'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='invpcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pku'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='rtm'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='taa-no'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='Cooperlake-v1'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512-bf16'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512bw'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512cd'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512dq'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512f'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vl'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vnni'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='erms'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='hle'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='ibrs-all'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='invpcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pku'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='rtm'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='taa-no'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='Cooperlake-v2'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512-bf16'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512bw'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512cd'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512dq'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512f'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vl'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vnni'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='erms'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='hle'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='ibrs-all'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='invpcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pku'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='rtm'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='taa-no'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='xsaves'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='Denverton'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='erms'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='mpx'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='Denverton-v1'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='erms'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='mpx'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='Denverton-v2'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='erms'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='Denverton-v3'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='erms'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='xsaves'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='Dhyana-v2'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='xsaves'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='EPYC-Genoa'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='amd-psfd'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='auto-ibrs'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512-bf16'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512-vpopcntdq'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512bitalg'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512bw'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512cd'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512dq'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512f'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512ifma'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vbmi'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vbmi2'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vl'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vnni'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='erms'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='fsrm'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='gfni'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='invpcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='la57'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='no-nested-data-bp'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='null-sel-clr-base'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pku'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='stibp-always-on'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='vaes'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='vpclmulqdq'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='xsaves'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='EPYC-Genoa-v1'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='amd-psfd'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='auto-ibrs'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512-bf16'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512-vpopcntdq'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512bitalg'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512bw'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512cd'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512dq'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512f'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512ifma'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vbmi'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vbmi2'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vl'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vnni'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='erms'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='fsrm'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='gfni'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='invpcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='la57'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='no-nested-data-bp'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='null-sel-clr-base'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pku'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='stibp-always-on'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='vaes'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='vpclmulqdq'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='xsaves'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='EPYC-Milan'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='erms'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='fsrm'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='invpcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pku'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='xsaves'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='EPYC-Milan-v1'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='erms'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='fsrm'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='invpcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pku'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='xsaves'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='EPYC-Milan-v2'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='amd-psfd'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='erms'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='fsrm'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='invpcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='no-nested-data-bp'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='null-sel-clr-base'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pku'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='stibp-always-on'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='vaes'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='vpclmulqdq'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='xsaves'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='EPYC-Rome'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='xsaves'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='EPYC-Rome-v1'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='xsaves'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='EPYC-Rome-v2'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='xsaves'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='EPYC-Rome-v3'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='xsaves'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='EPYC-v3'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='xsaves'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='EPYC-v4'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='xsaves'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='GraniteRapids'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='amx-bf16'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='amx-fp16'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='amx-int8'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='amx-tile'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx-vnni'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512-bf16'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512-fp16'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512-vpopcntdq'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512bitalg'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512bw'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512cd'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512dq'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512f'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512ifma'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vbmi'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vbmi2'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vl'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vnni'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='bus-lock-detect'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='erms'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='fbsdp-no'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='fsrc'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='fsrm'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='fsrs'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='fzrm'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='gfni'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='hle'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='ibrs-all'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='invpcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='la57'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='mcdt-no'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pbrsb-no'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pku'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='prefetchiti'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='psdp-no'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='rtm'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='sbdr-ssdp-no'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='serialize'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='taa-no'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='tsx-ldtrk'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='vaes'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='vpclmulqdq'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='xfd'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='xsaves'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='GraniteRapids-v1'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='amx-bf16'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='amx-fp16'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='amx-int8'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='amx-tile'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx-vnni'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512-bf16'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512-fp16'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512-vpopcntdq'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512bitalg'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512bw'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512cd'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512dq'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512f'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512ifma'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vbmi'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vbmi2'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vl'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vnni'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='bus-lock-detect'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='erms'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='fbsdp-no'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='fsrc'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='fsrm'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='fsrs'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='fzrm'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='gfni'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='hle'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='ibrs-all'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='invpcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='la57'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='mcdt-no'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pbrsb-no'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pku'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='prefetchiti'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='psdp-no'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='rtm'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='sbdr-ssdp-no'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='serialize'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='taa-no'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='tsx-ldtrk'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='vaes'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='vpclmulqdq'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='xfd'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='xsaves'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='GraniteRapids-v2'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='amx-bf16'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='amx-fp16'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='amx-int8'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='amx-tile'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx-vnni'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx10'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx10-128'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx10-256'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx10-512'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512-bf16'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512-fp16'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512-vpopcntdq'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512bitalg'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512bw'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512cd'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512dq'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512f'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512ifma'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vbmi'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vbmi2'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vl'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vnni'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='bus-lock-detect'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='cldemote'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='erms'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='fbsdp-no'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='fsrc'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='fsrm'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='fsrs'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='fzrm'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='gfni'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='hle'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='ibrs-all'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='invpcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='la57'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='mcdt-no'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='movdir64b'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='movdiri'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pbrsb-no'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pku'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='prefetchiti'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='psdp-no'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='rtm'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='sbdr-ssdp-no'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='serialize'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='ss'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='taa-no'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='tsx-ldtrk'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='vaes'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='vpclmulqdq'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='xfd'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='xsaves'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='Haswell'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='erms'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='hle'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='invpcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='rtm'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='Haswell-IBRS'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='erms'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='hle'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='invpcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='rtm'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='Haswell-noTSX'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='erms'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='invpcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='Haswell-noTSX-IBRS'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='erms'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='invpcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='Haswell-v1'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='erms'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='hle'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='invpcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='rtm'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='Haswell-v2'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='erms'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='invpcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='Haswell-v3'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='erms'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='hle'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='invpcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='rtm'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='Haswell-v4'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='erms'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='invpcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='Icelake-Server'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512-vpopcntdq'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512bitalg'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512bw'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512cd'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512dq'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512f'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vbmi'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vbmi2'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vl'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vnni'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='erms'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='gfni'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='hle'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='invpcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='la57'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pku'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='rtm'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='vaes'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='vpclmulqdq'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='Icelake-Server-noTSX'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512-vpopcntdq'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512bitalg'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512bw'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512cd'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512dq'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512f'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vbmi'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vbmi2'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vl'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vnni'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='erms'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='gfni'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='invpcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='la57'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pku'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='vaes'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='vpclmulqdq'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='Icelake-Server-v1'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512-vpopcntdq'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512bitalg'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512bw'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512cd'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512dq'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512f'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vbmi'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vbmi2'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vl'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vnni'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='erms'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='gfni'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='hle'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='invpcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='la57'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pku'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='rtm'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='vaes'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='vpclmulqdq'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='Icelake-Server-v2'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512-vpopcntdq'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512bitalg'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512bw'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512cd'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512dq'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512f'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vbmi'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vbmi2'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vl'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vnni'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='erms'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='gfni'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='invpcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='la57'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pku'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='vaes'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='vpclmulqdq'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='Icelake-Server-v3'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512-vpopcntdq'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512bitalg'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512bw'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512cd'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512dq'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512f'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vbmi'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vbmi2'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vl'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vnni'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='erms'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='gfni'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='ibrs-all'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='invpcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='la57'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pku'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='taa-no'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='vaes'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='vpclmulqdq'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='Icelake-Server-v4'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512-vpopcntdq'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512bitalg'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512bw'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512cd'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512dq'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512f'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512ifma'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vbmi'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vbmi2'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vl'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vnni'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='erms'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='fsrm'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='gfni'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='ibrs-all'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='invpcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='la57'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pku'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='taa-no'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='vaes'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='vpclmulqdq'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='Icelake-Server-v5'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512-vpopcntdq'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512bitalg'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512bw'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512cd'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512dq'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512f'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512ifma'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vbmi'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vbmi2'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vl'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vnni'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='erms'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='fsrm'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='gfni'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='ibrs-all'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='invpcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='la57'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pku'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='taa-no'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='vaes'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='vpclmulqdq'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='xsaves'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='Icelake-Server-v6'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512-vpopcntdq'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512bitalg'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512bw'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512cd'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512dq'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512f'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512ifma'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vbmi'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vbmi2'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vl'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vnni'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='erms'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='fsrm'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='gfni'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='ibrs-all'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='invpcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='la57'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pku'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='taa-no'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='vaes'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='vpclmulqdq'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='xsaves'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='Icelake-Server-v7'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512-vpopcntdq'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512bitalg'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512bw'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512cd'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512dq'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512f'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512ifma'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vbmi'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vbmi2'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vl'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vnni'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='erms'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='fsrm'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='gfni'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='hle'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='ibrs-all'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='invpcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='la57'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pku'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='rtm'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='taa-no'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='vaes'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='vpclmulqdq'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='xsaves'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='IvyBridge'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='erms'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='IvyBridge-IBRS'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='erms'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='IvyBridge-v1'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='erms'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='IvyBridge-v2'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='erms'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='KnightsMill'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512-4fmaps'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512-4vnniw'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512-vpopcntdq'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512cd'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512er'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512f'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512pf'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='erms'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='ss'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='KnightsMill-v1'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512-4fmaps'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512-4vnniw'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512-vpopcntdq'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512cd'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512er'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512f'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512pf'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='erms'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='ss'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='Opteron_G4'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='fma4'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='xop'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='Opteron_G4-v1'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='fma4'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='xop'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='Opteron_G5'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='fma4'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='tbm'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='xop'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='Opteron_G5-v1'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='fma4'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='tbm'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='xop'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='SapphireRapids'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='amx-bf16'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='amx-int8'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='amx-tile'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx-vnni'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512-bf16'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512-fp16'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512-vpopcntdq'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512bitalg'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512bw'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512cd'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512dq'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512f'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512ifma'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vbmi'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vbmi2'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vl'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vnni'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='bus-lock-detect'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='erms'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='fsrc'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='fsrm'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='fsrs'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='fzrm'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='gfni'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='hle'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='ibrs-all'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='invpcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='la57'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pku'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='rtm'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='serialize'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='taa-no'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='tsx-ldtrk'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='vaes'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='vpclmulqdq'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='xfd'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='xsaves'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='SapphireRapids-v1'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='amx-bf16'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='amx-int8'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='amx-tile'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx-vnni'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512-bf16'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512-fp16'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512-vpopcntdq'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512bitalg'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512bw'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512cd'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512dq'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512f'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512ifma'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vbmi'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vbmi2'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vl'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vnni'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='bus-lock-detect'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='erms'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='fsrc'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='fsrm'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='fsrs'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='fzrm'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='gfni'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='hle'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='ibrs-all'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='invpcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='la57'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pku'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='rtm'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='serialize'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='taa-no'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='tsx-ldtrk'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='vaes'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='vpclmulqdq'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='xfd'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='xsaves'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='SapphireRapids-v2'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='amx-bf16'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='amx-int8'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='amx-tile'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx-vnni'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512-bf16'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512-fp16'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512-vpopcntdq'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512bitalg'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512bw'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512cd'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512dq'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512f'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512ifma'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vbmi'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vbmi2'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vl'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vnni'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='bus-lock-detect'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='erms'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='fbsdp-no'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='fsrc'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='fsrm'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='fsrs'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='fzrm'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='gfni'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='hle'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='ibrs-all'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='invpcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='la57'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pku'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='psdp-no'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='rtm'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='sbdr-ssdp-no'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='serialize'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='taa-no'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='tsx-ldtrk'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='vaes'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='vpclmulqdq'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='xfd'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='xsaves'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='SapphireRapids-v3'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='amx-bf16'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='amx-int8'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='amx-tile'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx-vnni'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512-bf16'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512-fp16'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512-vpopcntdq'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512bitalg'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512bw'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512cd'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512dq'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512f'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512ifma'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vbmi'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vbmi2'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vl'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vnni'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='bus-lock-detect'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='cldemote'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='erms'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='fbsdp-no'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='fsrc'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='fsrm'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='fsrs'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='fzrm'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='gfni'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='hle'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='ibrs-all'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='invpcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='la57'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='movdir64b'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='movdiri'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pku'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='psdp-no'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='rtm'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='sbdr-ssdp-no'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='serialize'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='ss'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='taa-no'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='tsx-ldtrk'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='vaes'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='vpclmulqdq'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='xfd'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='xsaves'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='SierraForest'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx-ifma'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx-ne-convert'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx-vnni'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx-vnni-int8'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='bus-lock-detect'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='cmpccxadd'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='erms'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='fbsdp-no'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='fsrm'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='fsrs'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='gfni'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='ibrs-all'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='invpcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='mcdt-no'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pbrsb-no'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pku'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='psdp-no'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='sbdr-ssdp-no'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='serialize'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='vaes'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='vpclmulqdq'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='xsaves'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='SierraForest-v1'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx-ifma'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx-ne-convert'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx-vnni'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx-vnni-int8'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='bus-lock-detect'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='cmpccxadd'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='erms'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='fbsdp-no'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='fsrm'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='fsrs'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='gfni'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='ibrs-all'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='invpcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='mcdt-no'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pbrsb-no'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pku'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='psdp-no'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='sbdr-ssdp-no'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='serialize'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='vaes'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='vpclmulqdq'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='xsaves'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='Skylake-Client'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='erms'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='hle'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='invpcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='rtm'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='Skylake-Client-IBRS'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='erms'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='hle'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='invpcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='rtm'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='erms'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='invpcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='Skylake-Client-v1'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='erms'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='hle'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='invpcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='rtm'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='Skylake-Client-v2'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='erms'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='hle'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='invpcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='rtm'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='Skylake-Client-v3'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='erms'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='invpcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='Skylake-Client-v4'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='erms'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='invpcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='xsaves'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='Skylake-Server'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512bw'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512cd'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512dq'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512f'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vl'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='erms'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='hle'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='invpcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pku'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='rtm'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='Skylake-Server-IBRS'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512bw'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512cd'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512dq'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512f'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vl'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='erms'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='hle'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='invpcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pku'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='rtm'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512bw'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512cd'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512dq'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512f'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vl'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='erms'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='invpcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pku'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='Skylake-Server-v1'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512bw'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512cd'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512dq'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512f'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vl'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='erms'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='hle'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='invpcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pku'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='rtm'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='Skylake-Server-v2'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512bw'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512cd'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512dq'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512f'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vl'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='erms'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='hle'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='invpcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pku'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='rtm'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='Skylake-Server-v3'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512bw'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512cd'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512dq'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512f'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vl'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='erms'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='invpcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pku'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='Skylake-Server-v4'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512bw'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512cd'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512dq'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512f'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vl'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='erms'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='invpcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pku'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='Skylake-Server-v5'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512bw'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512cd'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512dq'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512f'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vl'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='erms'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='invpcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pku'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='xsaves'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='Snowridge'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='cldemote'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='core-capability'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='erms'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='gfni'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='movdir64b'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='movdiri'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='mpx'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='split-lock-detect'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='Snowridge-v1'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='cldemote'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='core-capability'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='erms'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='gfni'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='movdir64b'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='movdiri'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='mpx'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='split-lock-detect'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='Snowridge-v2'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='cldemote'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='core-capability'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='erms'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='gfni'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='movdir64b'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='movdiri'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='split-lock-detect'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='Snowridge-v3'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='cldemote'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='core-capability'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='erms'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='gfni'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='movdir64b'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='movdiri'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='split-lock-detect'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='xsaves'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='Snowridge-v4'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='cldemote'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='erms'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='gfni'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='movdir64b'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='movdiri'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='xsaves'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='athlon'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='3dnow'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='3dnowext'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='athlon-v1'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='3dnow'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='3dnowext'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='core2duo'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='ss'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='core2duo-v1'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='ss'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='coreduo'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='ss'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='coreduo-v1'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='ss'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='n270'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='ss'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='n270-v1'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='ss'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='phenom'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='3dnow'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='3dnowext'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='phenom-v1'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='3dnow'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='3dnowext'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:    </mode>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:  </cpu>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:  <memoryBacking supported='yes'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:    <enum name='sourceType'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <value>file</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <value>anonymous</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <value>memfd</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:    </enum>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:  </memoryBacking>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:  <devices>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:    <disk supported='yes'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <enum name='diskDevice'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>disk</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>cdrom</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>floppy</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>lun</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </enum>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <enum name='bus'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>ide</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>fdc</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>scsi</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>virtio</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>usb</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>sata</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </enum>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <enum name='model'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>virtio</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>virtio-transitional</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>virtio-non-transitional</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </enum>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:    </disk>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:    <graphics supported='yes'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <enum name='type'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>vnc</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>egl-headless</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>dbus</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </enum>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:    </graphics>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:    <video supported='yes'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <enum name='modelType'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>vga</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>cirrus</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>virtio</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>none</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>bochs</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>ramfb</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </enum>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:    </video>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:    <hostdev supported='yes'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <enum name='mode'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>subsystem</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </enum>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <enum name='startupPolicy'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>default</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>mandatory</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>requisite</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>optional</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </enum>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <enum name='subsysType'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>usb</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>pci</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>scsi</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </enum>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <enum name='capsType'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <enum name='pciBackend'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:    </hostdev>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:    <rng supported='yes'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <enum name='model'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>virtio</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>virtio-transitional</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>virtio-non-transitional</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </enum>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <enum name='backendModel'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>random</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>egd</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>builtin</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </enum>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:    </rng>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:    <filesystem supported='yes'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <enum name='driverType'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>path</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>handle</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>virtiofs</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </enum>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:    </filesystem>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:    <tpm supported='yes'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <enum name='model'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>tpm-tis</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>tpm-crb</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </enum>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <enum name='backendModel'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>emulator</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>external</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </enum>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <enum name='backendVersion'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>2.0</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </enum>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:    </tpm>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:    <redirdev supported='yes'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <enum name='bus'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>usb</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </enum>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:    </redirdev>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:    <channel supported='yes'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <enum name='type'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>pty</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>unix</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </enum>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:    </channel>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:    <crypto supported='yes'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <enum name='model'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <enum name='type'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>qemu</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </enum>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <enum name='backendModel'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>builtin</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </enum>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:    </crypto>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:    <interface supported='yes'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <enum name='backendType'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>default</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>passt</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </enum>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:    </interface>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:    <panic supported='yes'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <enum name='model'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>isa</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>hyperv</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </enum>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:    </panic>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:    <console supported='yes'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <enum name='type'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>null</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>vc</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>pty</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>dev</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>file</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>pipe</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>stdio</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>udp</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>tcp</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>unix</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>qemu-vdagent</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>dbus</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </enum>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:    </console>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:  </devices>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:  <features>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:    <gic supported='no'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:    <vmcoreinfo supported='yes'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:    <genid supported='yes'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:    <backingStoreInput supported='yes'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:    <backup supported='yes'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:    <async-teardown supported='yes'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:    <ps2 supported='yes'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:    <sev supported='no'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:    <sgx supported='no'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:    <hyperv supported='yes'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <enum name='features'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>relaxed</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>vapic</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>spinlocks</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>vpindex</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>runtime</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>synic</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>stimer</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>reset</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>vendor_id</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>frequencies</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>reenlightenment</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>tlbflush</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>ipi</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>avic</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>emsr_bitmap</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>xmm_input</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </enum>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <defaults>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <spinlocks>4095</spinlocks>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <stimer_direct>on</stimer_direct>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <tlbflush_direct>on</tlbflush_direct>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <tlbflush_extended>on</tlbflush_extended>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <vendor_id>Linux KVM Hv</vendor_id>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </defaults>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:    </hyperv>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:    <launchSecurity supported='yes'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <enum name='sectype'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>tdx</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </enum>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:    </launchSecurity>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:  </features>
Dec  6 01:52:29 np0005548731 nova_compute[231439]: </domainCapabilities>
Dec  6 01:52:29 np0005548731 nova_compute[231439]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Dec  6 01:52:29 np0005548731 nova_compute[231439]: 2025-12-06 06:52:29.684 231443 DEBUG nova.virt.libvirt.host [None req-87b5e55b-69ed-4537-972f-992742969399 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=q35:
Dec  6 01:52:29 np0005548731 nova_compute[231439]: <domainCapabilities>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:  <path>/usr/libexec/qemu-kvm</path>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:  <domain>kvm</domain>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:  <machine>pc-q35-rhel9.8.0</machine>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:  <arch>x86_64</arch>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:  <vcpu max='4096'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:  <iothreads supported='yes'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:  <os supported='yes'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:    <enum name='firmware'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <value>efi</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:    </enum>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:    <loader supported='yes'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <value>/usr/share/edk2/ovmf/OVMF_CODE.secboot.fd</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <value>/usr/share/edk2/ovmf/OVMF_CODE.fd</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <value>/usr/share/edk2/ovmf/OVMF.amdsev.fd</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <value>/usr/share/edk2/ovmf/OVMF.inteltdx.secboot.fd</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <enum name='type'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>rom</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>pflash</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </enum>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <enum name='readonly'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>yes</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>no</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </enum>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <enum name='secure'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>yes</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>no</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </enum>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:    </loader>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:  </os>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:  <cpu>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:    <mode name='host-passthrough' supported='yes'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <enum name='hostPassthroughMigratable'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>on</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>off</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </enum>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:    </mode>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:    <mode name='maximum' supported='yes'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <enum name='maximumMigratable'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>on</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>off</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </enum>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:    </mode>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:    <mode name='host-model' supported='yes'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model fallback='forbid'>EPYC-Rome</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <vendor>AMD</vendor>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <maxphysaddr mode='passthrough' limit='40'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <feature policy='require' name='x2apic'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <feature policy='require' name='tsc-deadline'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <feature policy='require' name='hypervisor'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <feature policy='require' name='tsc_adjust'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <feature policy='require' name='spec-ctrl'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <feature policy='require' name='stibp'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <feature policy='require' name='ssbd'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <feature policy='require' name='cmp_legacy'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <feature policy='require' name='overflow-recov'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <feature policy='require' name='succor'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <feature policy='require' name='ibrs'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <feature policy='require' name='amd-ssbd'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <feature policy='require' name='virt-ssbd'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <feature policy='require' name='lbrv'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <feature policy='require' name='tsc-scale'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <feature policy='require' name='vmcb-clean'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <feature policy='require' name='flushbyasid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <feature policy='require' name='pause-filter'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <feature policy='require' name='pfthreshold'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <feature policy='require' name='svme-addr-chk'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <feature policy='require' name='lfence-always-serializing'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <feature policy='disable' name='xsaves'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:    </mode>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:    <mode name='custom' supported='yes'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='Broadwell'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='erms'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='hle'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='invpcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='rtm'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='Broadwell-IBRS'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='erms'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='hle'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='invpcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='rtm'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='Broadwell-noTSX'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='erms'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='invpcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='Broadwell-noTSX-IBRS'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='erms'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='invpcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='Broadwell-v1'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='erms'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='hle'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='invpcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='rtm'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='Broadwell-v2'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='erms'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='invpcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='Broadwell-v3'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='erms'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='hle'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='invpcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='rtm'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='Broadwell-v4'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='erms'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='invpcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='Cascadelake-Server'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512bw'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512cd'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512dq'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512f'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vl'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vnni'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='erms'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='hle'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='invpcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pku'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='rtm'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='Cascadelake-Server-noTSX'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512bw'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512cd'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512dq'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512f'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vl'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vnni'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='erms'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='ibrs-all'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='invpcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pku'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='Cascadelake-Server-v1'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512bw'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512cd'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512dq'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512f'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vl'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vnni'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='erms'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='hle'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='invpcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pku'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='rtm'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='Cascadelake-Server-v2'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512bw'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512cd'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512dq'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512f'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vl'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vnni'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='erms'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='hle'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='ibrs-all'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='invpcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pku'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='rtm'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='Cascadelake-Server-v3'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512bw'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512cd'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512dq'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512f'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vl'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vnni'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='erms'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='ibrs-all'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='invpcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pku'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='Cascadelake-Server-v4'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512bw'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512cd'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512dq'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512f'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vl'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vnni'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='erms'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='ibrs-all'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='invpcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pku'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='Cascadelake-Server-v5'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512bw'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512cd'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512dq'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512f'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vl'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vnni'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='erms'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='ibrs-all'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='invpcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pku'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='xsaves'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='Cooperlake'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512-bf16'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512bw'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512cd'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512dq'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512f'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vl'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vnni'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='erms'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='hle'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='ibrs-all'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='invpcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pku'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='rtm'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='taa-no'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='Cooperlake-v1'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512-bf16'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512bw'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512cd'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512dq'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512f'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vl'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vnni'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='erms'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='hle'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='ibrs-all'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='invpcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pku'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='rtm'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='taa-no'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='Cooperlake-v2'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512-bf16'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512bw'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512cd'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512dq'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512f'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vl'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vnni'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='erms'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='hle'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='ibrs-all'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='invpcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pku'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='rtm'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='taa-no'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='xsaves'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='Denverton'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='erms'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='mpx'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='Denverton-v1'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='erms'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='mpx'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='Denverton-v2'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='erms'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='Denverton-v3'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='erms'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='xsaves'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='Dhyana-v2'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='xsaves'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='EPYC-Genoa'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='amd-psfd'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='auto-ibrs'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512-bf16'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512-vpopcntdq'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512bitalg'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512bw'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512cd'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512dq'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512f'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512ifma'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vbmi'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vbmi2'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vl'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vnni'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='erms'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='fsrm'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='gfni'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='invpcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='la57'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='no-nested-data-bp'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='null-sel-clr-base'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pku'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='stibp-always-on'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='vaes'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='vpclmulqdq'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='xsaves'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='EPYC-Genoa-v1'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='amd-psfd'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='auto-ibrs'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512-bf16'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512-vpopcntdq'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512bitalg'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512bw'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512cd'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512dq'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512f'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512ifma'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vbmi'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vbmi2'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vl'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vnni'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='erms'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='fsrm'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='gfni'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='invpcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='la57'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='no-nested-data-bp'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='null-sel-clr-base'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pku'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='stibp-always-on'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='vaes'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='vpclmulqdq'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='xsaves'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='EPYC-Milan'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='erms'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='fsrm'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='invpcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pku'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='xsaves'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='EPYC-Milan-v1'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='erms'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='fsrm'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='invpcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pku'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='xsaves'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='EPYC-Milan-v2'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='amd-psfd'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='erms'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='fsrm'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='invpcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='no-nested-data-bp'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='null-sel-clr-base'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pku'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='stibp-always-on'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='vaes'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='vpclmulqdq'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='xsaves'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='EPYC-Rome'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='xsaves'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='EPYC-Rome-v1'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='xsaves'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='EPYC-Rome-v2'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='xsaves'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='EPYC-Rome-v3'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='xsaves'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='EPYC-v3'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='xsaves'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='EPYC-v4'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='xsaves'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='GraniteRapids'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='amx-bf16'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='amx-fp16'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='amx-int8'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='amx-tile'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx-vnni'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512-bf16'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512-fp16'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512-vpopcntdq'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512bitalg'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512bw'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512cd'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512dq'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512f'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512ifma'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vbmi'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vbmi2'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vl'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vnni'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='bus-lock-detect'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='erms'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='fbsdp-no'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='fsrc'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='fsrm'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='fsrs'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='fzrm'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='gfni'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='hle'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='ibrs-all'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='invpcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='la57'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='mcdt-no'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pbrsb-no'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pku'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='prefetchiti'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='psdp-no'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='rtm'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='sbdr-ssdp-no'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='serialize'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='taa-no'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='tsx-ldtrk'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='vaes'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='vpclmulqdq'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='xfd'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='xsaves'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='GraniteRapids-v1'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='amx-bf16'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='amx-fp16'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='amx-int8'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='amx-tile'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx-vnni'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512-bf16'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512-fp16'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512-vpopcntdq'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512bitalg'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512bw'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512cd'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512dq'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512f'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512ifma'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vbmi'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vbmi2'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vl'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vnni'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='bus-lock-detect'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='erms'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='fbsdp-no'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='fsrc'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='fsrm'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='fsrs'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='fzrm'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='gfni'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='hle'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='ibrs-all'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='invpcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='la57'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='mcdt-no'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pbrsb-no'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pku'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='prefetchiti'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='psdp-no'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='rtm'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='sbdr-ssdp-no'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='serialize'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='taa-no'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='tsx-ldtrk'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='vaes'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='vpclmulqdq'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='xfd'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='xsaves'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='GraniteRapids-v2'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='amx-bf16'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='amx-fp16'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='amx-int8'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='amx-tile'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx-vnni'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx10'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx10-128'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx10-256'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx10-512'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512-bf16'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512-fp16'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512-vpopcntdq'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512bitalg'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512bw'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512cd'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512dq'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512f'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512ifma'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vbmi'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vbmi2'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vl'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vnni'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='bus-lock-detect'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='cldemote'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='erms'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='fbsdp-no'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='fsrc'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='fsrm'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='fsrs'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='fzrm'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='gfni'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='hle'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='ibrs-all'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='invpcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='la57'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='mcdt-no'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='movdir64b'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='movdiri'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pbrsb-no'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pku'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='prefetchiti'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='psdp-no'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='rtm'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='sbdr-ssdp-no'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='serialize'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='ss'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='taa-no'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='tsx-ldtrk'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='vaes'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='vpclmulqdq'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='xfd'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='xsaves'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='Haswell'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='erms'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='hle'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='invpcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='rtm'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='Haswell-IBRS'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='erms'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='hle'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='invpcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='rtm'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='Haswell-noTSX'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='erms'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='invpcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='Haswell-noTSX-IBRS'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='erms'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='invpcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='Haswell-v1'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='erms'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='hle'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='invpcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='rtm'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='Haswell-v2'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='erms'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='invpcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='Haswell-v3'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='erms'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='hle'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='invpcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='rtm'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='Haswell-v4'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='erms'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='invpcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='Icelake-Server'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512-vpopcntdq'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512bitalg'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512bw'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512cd'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512dq'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512f'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vbmi'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vbmi2'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vl'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vnni'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='erms'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='gfni'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='hle'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='invpcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='la57'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pku'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='rtm'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='vaes'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='vpclmulqdq'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='Icelake-Server-noTSX'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512-vpopcntdq'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512bitalg'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512bw'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512cd'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512dq'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512f'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vbmi'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vbmi2'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vl'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vnni'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='erms'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='gfni'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='invpcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='la57'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pku'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='vaes'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='vpclmulqdq'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='Icelake-Server-v1'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512-vpopcntdq'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512bitalg'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512bw'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512cd'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512dq'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512f'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vbmi'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vbmi2'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vl'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vnni'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='erms'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='gfni'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='hle'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='invpcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='la57'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pku'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='rtm'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='vaes'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='vpclmulqdq'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='Icelake-Server-v2'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512-vpopcntdq'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512bitalg'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512bw'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512cd'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512dq'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512f'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vbmi'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vbmi2'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vl'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vnni'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='erms'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='gfni'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='invpcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='la57'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pku'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='vaes'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='vpclmulqdq'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='Icelake-Server-v3'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512-vpopcntdq'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512bitalg'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512bw'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512cd'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512dq'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512f'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vbmi'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vbmi2'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vl'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vnni'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='erms'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='gfni'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='ibrs-all'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='invpcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='la57'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pku'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='taa-no'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='vaes'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='vpclmulqdq'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='Icelake-Server-v4'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512-vpopcntdq'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512bitalg'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512bw'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512cd'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512dq'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512f'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512ifma'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vbmi'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vbmi2'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vl'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vnni'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='erms'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='fsrm'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='gfni'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='ibrs-all'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='invpcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='la57'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pku'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='taa-no'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='vaes'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='vpclmulqdq'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='Icelake-Server-v5'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512-vpopcntdq'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512bitalg'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512bw'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512cd'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512dq'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512f'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512ifma'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vbmi'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vbmi2'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vl'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vnni'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='erms'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='fsrm'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='gfni'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='ibrs-all'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='invpcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='la57'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pku'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='taa-no'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='vaes'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='vpclmulqdq'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='xsaves'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='Icelake-Server-v6'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512-vpopcntdq'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512bitalg'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512bw'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512cd'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512dq'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512f'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512ifma'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vbmi'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vbmi2'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vl'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vnni'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='erms'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='fsrm'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='gfni'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='ibrs-all'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='invpcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='la57'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pku'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='taa-no'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='vaes'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='vpclmulqdq'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='xsaves'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='Icelake-Server-v7'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512-vpopcntdq'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512bitalg'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512bw'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512cd'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512dq'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512f'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512ifma'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vbmi'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vbmi2'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vl'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vnni'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='erms'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='fsrm'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='gfni'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='hle'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='ibrs-all'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='invpcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='la57'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pku'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='rtm'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='taa-no'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='vaes'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='vpclmulqdq'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='xsaves'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='IvyBridge'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='erms'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='IvyBridge-IBRS'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='erms'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='IvyBridge-v1'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='erms'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='IvyBridge-v2'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='erms'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='KnightsMill'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512-4fmaps'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512-4vnniw'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512-vpopcntdq'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512cd'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512er'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512f'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512pf'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='erms'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='ss'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='KnightsMill-v1'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512-4fmaps'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512-4vnniw'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512-vpopcntdq'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512cd'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512er'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512f'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512pf'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='erms'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='ss'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='Opteron_G4'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='fma4'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='xop'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='Opteron_G4-v1'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='fma4'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='xop'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='Opteron_G5'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='fma4'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='tbm'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='xop'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='Opteron_G5-v1'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='fma4'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='tbm'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='xop'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='SapphireRapids'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='amx-bf16'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='amx-int8'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='amx-tile'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx-vnni'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512-bf16'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512-fp16'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512-vpopcntdq'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512bitalg'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512bw'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512cd'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512dq'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512f'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512ifma'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vbmi'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vbmi2'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vl'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vnni'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='bus-lock-detect'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='erms'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='fsrc'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='fsrm'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='fsrs'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='fzrm'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='gfni'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='hle'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='ibrs-all'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='invpcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='la57'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pku'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='rtm'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='serialize'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='taa-no'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='tsx-ldtrk'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='vaes'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='vpclmulqdq'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='xfd'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='xsaves'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='SapphireRapids-v1'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='amx-bf16'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='amx-int8'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='amx-tile'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx-vnni'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512-bf16'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512-fp16'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512-vpopcntdq'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512bitalg'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512bw'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512cd'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512dq'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512f'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512ifma'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vbmi'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vbmi2'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vl'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vnni'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='bus-lock-detect'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='erms'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='fsrc'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='fsrm'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='fsrs'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='fzrm'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='gfni'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='hle'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='ibrs-all'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='invpcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='la57'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pku'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='rtm'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='serialize'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='taa-no'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='tsx-ldtrk'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='vaes'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='vpclmulqdq'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='xfd'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='xsaves'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='SapphireRapids-v2'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='amx-bf16'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='amx-int8'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='amx-tile'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx-vnni'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512-bf16'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512-fp16'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512-vpopcntdq'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512bitalg'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512bw'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512cd'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512dq'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512f'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512ifma'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vbmi'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vbmi2'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vl'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vnni'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='bus-lock-detect'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='erms'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='fbsdp-no'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='fsrc'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='fsrm'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='fsrs'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='fzrm'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='gfni'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='hle'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='ibrs-all'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='invpcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='la57'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pku'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='psdp-no'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='rtm'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='sbdr-ssdp-no'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='serialize'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='taa-no'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='tsx-ldtrk'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='vaes'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='vpclmulqdq'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='xfd'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='xsaves'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='SapphireRapids-v3'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='amx-bf16'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='amx-int8'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='amx-tile'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx-vnni'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512-bf16'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512-fp16'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512-vpopcntdq'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512bitalg'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512bw'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512cd'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512dq'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512f'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512ifma'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vbmi'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vbmi2'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vl'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vnni'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='bus-lock-detect'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='cldemote'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='erms'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='fbsdp-no'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='fsrc'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='fsrm'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='fsrs'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='fzrm'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='gfni'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='hle'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='ibrs-all'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='invpcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='la57'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='movdir64b'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='movdiri'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pku'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='psdp-no'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='rtm'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='sbdr-ssdp-no'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='serialize'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='ss'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='taa-no'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='tsx-ldtrk'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='vaes'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='vpclmulqdq'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='xfd'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='xsaves'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='SierraForest'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx-ifma'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx-ne-convert'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx-vnni'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx-vnni-int8'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='bus-lock-detect'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='cmpccxadd'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='erms'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='fbsdp-no'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='fsrm'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='fsrs'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='gfni'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='ibrs-all'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='invpcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='mcdt-no'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pbrsb-no'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pku'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='psdp-no'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='sbdr-ssdp-no'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='serialize'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='vaes'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='vpclmulqdq'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='xsaves'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='SierraForest-v1'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx-ifma'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx-ne-convert'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx-vnni'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx-vnni-int8'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='bus-lock-detect'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='cmpccxadd'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='erms'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='fbsdp-no'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='fsrm'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='fsrs'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='gfni'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='ibrs-all'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='invpcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='mcdt-no'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pbrsb-no'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pku'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='psdp-no'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='sbdr-ssdp-no'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='serialize'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='vaes'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='vpclmulqdq'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='xsaves'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='Skylake-Client'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='erms'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='hle'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='invpcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='rtm'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='Skylake-Client-IBRS'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='erms'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='hle'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='invpcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='rtm'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='erms'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='invpcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='Skylake-Client-v1'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='erms'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='hle'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='invpcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='rtm'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='Skylake-Client-v2'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='erms'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='hle'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='invpcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='rtm'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='Skylake-Client-v3'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='erms'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='invpcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='Skylake-Client-v4'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='erms'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='invpcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='xsaves'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='Skylake-Server'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512bw'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512cd'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512dq'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512f'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vl'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='erms'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='hle'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='invpcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pku'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='rtm'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='Skylake-Server-IBRS'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512bw'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512cd'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512dq'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512f'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vl'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='erms'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='hle'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='invpcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pku'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='rtm'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512bw'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512cd'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512dq'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512f'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vl'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='erms'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='invpcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pku'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='Skylake-Server-v1'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512bw'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512cd'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512dq'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512f'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vl'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='erms'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='hle'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='invpcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pku'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='rtm'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='Skylake-Server-v2'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512bw'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512cd'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512dq'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512f'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vl'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='erms'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='hle'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='invpcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pku'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='rtm'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='Skylake-Server-v3'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512bw'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512cd'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512dq'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512f'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vl'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='erms'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='invpcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pku'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='Skylake-Server-v4'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512bw'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512cd'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512dq'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512f'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vl'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='erms'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='invpcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pku'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='Skylake-Server-v5'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512bw'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512cd'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512dq'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512f'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='avx512vl'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='erms'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='invpcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pcid'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='pku'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='xsaves'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='Snowridge'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='cldemote'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='core-capability'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='erms'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='gfni'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='movdir64b'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='movdiri'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='mpx'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='split-lock-detect'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='Snowridge-v1'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='cldemote'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='core-capability'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='erms'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='gfni'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='movdir64b'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='movdiri'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='mpx'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='split-lock-detect'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='Snowridge-v2'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='cldemote'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='core-capability'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='erms'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='gfni'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='movdir64b'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='movdiri'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='split-lock-detect'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='Snowridge-v3'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='cldemote'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='core-capability'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='erms'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='gfni'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='movdir64b'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='movdiri'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='split-lock-detect'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='xsaves'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='Snowridge-v4'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='cldemote'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='erms'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='gfni'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='movdir64b'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='movdiri'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='xsaves'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='athlon'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='3dnow'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='3dnowext'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='athlon-v1'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='3dnow'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='3dnowext'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='core2duo'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='ss'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='core2duo-v1'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='ss'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='coreduo'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='ss'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='coreduo-v1'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='ss'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='n270'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='ss'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='n270-v1'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='ss'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='phenom'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='3dnow'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='3dnowext'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <blockers model='phenom-v1'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='3dnow'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <feature name='3dnowext'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </blockers>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:    </mode>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:  </cpu>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:  <memoryBacking supported='yes'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:    <enum name='sourceType'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <value>file</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <value>anonymous</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <value>memfd</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:    </enum>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:  </memoryBacking>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:  <devices>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:    <disk supported='yes'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <enum name='diskDevice'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>disk</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>cdrom</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>floppy</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>lun</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </enum>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <enum name='bus'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>fdc</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>scsi</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>virtio</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>usb</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>sata</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </enum>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <enum name='model'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>virtio</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>virtio-transitional</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>virtio-non-transitional</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </enum>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:    </disk>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:    <graphics supported='yes'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <enum name='type'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>vnc</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>egl-headless</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>dbus</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </enum>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:    </graphics>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:    <video supported='yes'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <enum name='modelType'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>vga</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>cirrus</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>virtio</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>none</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>bochs</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>ramfb</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </enum>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:    </video>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:    <hostdev supported='yes'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <enum name='mode'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>subsystem</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </enum>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <enum name='startupPolicy'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>default</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>mandatory</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>requisite</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>optional</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </enum>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <enum name='subsysType'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>usb</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>pci</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>scsi</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </enum>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <enum name='capsType'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <enum name='pciBackend'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:    </hostdev>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:    <rng supported='yes'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <enum name='model'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>virtio</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>virtio-transitional</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>virtio-non-transitional</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </enum>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <enum name='backendModel'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>random</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>egd</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>builtin</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </enum>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:    </rng>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:    <filesystem supported='yes'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <enum name='driverType'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>path</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>handle</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>virtiofs</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </enum>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:    </filesystem>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:    <tpm supported='yes'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <enum name='model'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>tpm-tis</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>tpm-crb</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </enum>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <enum name='backendModel'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>emulator</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>external</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </enum>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <enum name='backendVersion'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>2.0</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </enum>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:    </tpm>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:    <redirdev supported='yes'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <enum name='bus'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>usb</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </enum>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:    </redirdev>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:    <channel supported='yes'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <enum name='type'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>pty</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>unix</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </enum>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:    </channel>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:    <crypto supported='yes'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <enum name='model'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <enum name='type'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>qemu</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </enum>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <enum name='backendModel'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>builtin</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </enum>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:    </crypto>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:    <interface supported='yes'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <enum name='backendType'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>default</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>passt</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </enum>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:    </interface>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:    <panic supported='yes'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <enum name='model'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>isa</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>hyperv</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </enum>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:    </panic>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:    <console supported='yes'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <enum name='type'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>null</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>vc</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>pty</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>dev</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>file</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>pipe</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>stdio</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>udp</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>tcp</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>unix</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>qemu-vdagent</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>dbus</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </enum>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:    </console>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:  </devices>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:  <features>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:    <gic supported='no'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:    <vmcoreinfo supported='yes'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:    <genid supported='yes'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:    <backingStoreInput supported='yes'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:    <backup supported='yes'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:    <async-teardown supported='yes'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:    <ps2 supported='yes'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:    <sev supported='no'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:    <sgx supported='no'/>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:    <hyperv supported='yes'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <enum name='features'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>relaxed</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>vapic</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>spinlocks</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>vpindex</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>runtime</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>synic</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>stimer</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>reset</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>vendor_id</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>frequencies</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>reenlightenment</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>tlbflush</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>ipi</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>avic</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>emsr_bitmap</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>xmm_input</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </enum>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <defaults>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <spinlocks>4095</spinlocks>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <stimer_direct>on</stimer_direct>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <tlbflush_direct>on</tlbflush_direct>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <tlbflush_extended>on</tlbflush_extended>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <vendor_id>Linux KVM Hv</vendor_id>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </defaults>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:    </hyperv>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:    <launchSecurity supported='yes'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      <enum name='sectype'>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:        <value>tdx</value>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:      </enum>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:    </launchSecurity>
Dec  6 01:52:29 np0005548731 nova_compute[231439]:  </features>
Dec  6 01:52:29 np0005548731 nova_compute[231439]: </domainCapabilities>
Dec  6 01:52:29 np0005548731 nova_compute[231439]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037#033[00m
Dec  6 01:52:29 np0005548731 nova_compute[231439]: 2025-12-06 06:52:29.749 231443 DEBUG nova.virt.libvirt.host [None req-87b5e55b-69ed-4537-972f-992742969399 - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782#033[00m
Dec  6 01:52:29 np0005548731 nova_compute[231439]: 2025-12-06 06:52:29.749 231443 DEBUG nova.virt.libvirt.host [None req-87b5e55b-69ed-4537-972f-992742969399 - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782#033[00m
Dec  6 01:52:29 np0005548731 nova_compute[231439]: 2025-12-06 06:52:29.749 231443 DEBUG nova.virt.libvirt.host [None req-87b5e55b-69ed-4537-972f-992742969399 - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782#033[00m
Dec  6 01:52:29 np0005548731 nova_compute[231439]: 2025-12-06 06:52:29.750 231443 INFO nova.virt.libvirt.host [None req-87b5e55b-69ed-4537-972f-992742969399 - - - - - -] Secure Boot support detected#033[00m
Dec  6 01:52:29 np0005548731 nova_compute[231439]: 2025-12-06 06:52:29.751 231443 INFO nova.virt.libvirt.driver [None req-87b5e55b-69ed-4537-972f-992742969399 - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.#033[00m
Dec  6 01:52:29 np0005548731 nova_compute[231439]: 2025-12-06 06:52:29.751 231443 INFO nova.virt.libvirt.driver [None req-87b5e55b-69ed-4537-972f-992742969399 - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.#033[00m
Dec  6 01:52:29 np0005548731 nova_compute[231439]: 2025-12-06 06:52:29.760 231443 DEBUG nova.virt.libvirt.driver [None req-87b5e55b-69ed-4537-972f-992742969399 - - - - - -] cpu compare xml: <cpu match="exact">
Dec  6 01:52:29 np0005548731 nova_compute[231439]:  <model>Nehalem</model>
Dec  6 01:52:29 np0005548731 nova_compute[231439]: </cpu>
Dec  6 01:52:29 np0005548731 nova_compute[231439]: _compare_cpu /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10019#033[00m
Dec  6 01:52:29 np0005548731 nova_compute[231439]: 2025-12-06 06:52:29.763 231443 DEBUG nova.virt.libvirt.driver [None req-87b5e55b-69ed-4537-972f-992742969399 - - - - - -] Enabling emulated TPM support _check_vtpm_support /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:1097#033[00m
Dec  6 01:52:29 np0005548731 nova_compute[231439]: 2025-12-06 06:52:29.793 231443 INFO nova.virt.node [None req-87b5e55b-69ed-4537-972f-992742969399 - - - - - -] Determined node identity 6d00757a-082f-486d-ae84-869a2ba2e6e7 from /var/lib/nova/compute_id#033[00m
Dec  6 01:52:29 np0005548731 nova_compute[231439]: 2025-12-06 06:52:29.850 231443 WARNING nova.compute.manager [None req-87b5e55b-69ed-4537-972f-992742969399 - - - - - -] Compute nodes ['6d00757a-082f-486d-ae84-869a2ba2e6e7'] for host compute-2.ctlplane.example.com were not found in the database. If this is the first time this service is starting on this host, then you can ignore this warning.#033[00m
Dec  6 01:52:29 np0005548731 nova_compute[231439]: 2025-12-06 06:52:29.895 231443 INFO nova.compute.manager [None req-87b5e55b-69ed-4537-972f-992742969399 - - - - - -] Looking for unclaimed instances stuck in BUILDING status for nodes managed by this host#033[00m
Dec  6 01:52:29 np0005548731 nova_compute[231439]: 2025-12-06 06:52:29.933 231443 WARNING nova.compute.manager [None req-87b5e55b-69ed-4537-972f-992742969399 - - - - - -] No compute node record found for host compute-2.ctlplane.example.com. If this is the first time this service is starting on this host, then you can ignore this warning.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-2.ctlplane.example.com could not be found.#033[00m
Dec  6 01:52:29 np0005548731 nova_compute[231439]: 2025-12-06 06:52:29.934 231443 DEBUG oslo_concurrency.lockutils [None req-87b5e55b-69ed-4537-972f-992742969399 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 01:52:29 np0005548731 nova_compute[231439]: 2025-12-06 06:52:29.934 231443 DEBUG oslo_concurrency.lockutils [None req-87b5e55b-69ed-4537-972f-992742969399 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 01:52:29 np0005548731 nova_compute[231439]: 2025-12-06 06:52:29.934 231443 DEBUG oslo_concurrency.lockutils [None req-87b5e55b-69ed-4537-972f-992742969399 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 01:52:29 np0005548731 nova_compute[231439]: 2025-12-06 06:52:29.934 231443 DEBUG nova.compute.resource_tracker [None req-87b5e55b-69ed-4537-972f-992742969399 - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  6 01:52:29 np0005548731 nova_compute[231439]: 2025-12-06 06:52:29.935 231443 DEBUG oslo_concurrency.processutils [None req-87b5e55b-69ed-4537-972f-992742969399 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 01:52:29 np0005548731 python3.9[232297]: ansible-ansible.builtin.systemd Invoked with name=edpm_nova_compute.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  6 01:52:30 np0005548731 systemd[1]: Stopping nova_compute container...
Dec  6 01:52:30 np0005548731 nova_compute[231439]: 2025-12-06 06:52:30.099 231443 DEBUG oslo_concurrency.lockutils [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 01:52:30 np0005548731 nova_compute[231439]: 2025-12-06 06:52:30.100 231443 DEBUG oslo_concurrency.lockutils [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 01:52:30 np0005548731 nova_compute[231439]: 2025-12-06 06:52:30.100 231443 DEBUG oslo_concurrency.lockutils [None req-224bbb1c-2bac-464a-8a30-2dcdb1ccaa9e - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 01:52:30 np0005548731 systemd[1]: libpod-38f53a0a09be64af6f6f6bdbc1c5d21739c7f71bfede606a10b47d483b4cbd5c.scope: Deactivated successfully.
Dec  6 01:52:30 np0005548731 virtqemud[232080]: libvirt version: 11.9.0, package: 1.el9 (builder@centos.org, 2025-11-04-09:54:50, )
Dec  6 01:52:30 np0005548731 virtqemud[232080]: hostname: compute-2
Dec  6 01:52:30 np0005548731 virtqemud[232080]: End of file while reading data: Input/output error
Dec  6 01:52:30 np0005548731 podman[232302]: 2025-12-06 06:52:30.527452299 +0000 UTC m=+0.491945824 container died 38f53a0a09be64af6f6f6bdbc1c5d21739c7f71bfede606a10b47d483b4cbd5c (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, container_name=nova_compute, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Dec  6 01:52:30 np0005548731 systemd[1]: libpod-38f53a0a09be64af6f6f6bdbc1c5d21739c7f71bfede606a10b47d483b4cbd5c.scope: Consumed 3.590s CPU time.
Dec  6 01:52:30 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:52:30 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:52:30 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:52:30.639 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:52:30 np0005548731 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-38f53a0a09be64af6f6f6bdbc1c5d21739c7f71bfede606a10b47d483b4cbd5c-userdata-shm.mount: Deactivated successfully.
Dec  6 01:52:30 np0005548731 systemd[1]: var-lib-containers-storage-overlay-42fb7a27549122014af3dc214d958ed1562ba5e46b7418274b5dd16a480d31f1-merged.mount: Deactivated successfully.
Dec  6 01:52:30 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:52:30 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:52:30 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:52:30.968 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:52:32 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 01:52:32 np0005548731 podman[232302]: 2025-12-06 06:52:32.511912473 +0000 UTC m=+2.476406008 container cleanup 38f53a0a09be64af6f6f6bdbc1c5d21739c7f71bfede606a10b47d483b4cbd5c (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, config_id=edpm, container_name=nova_compute, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']})
Dec  6 01:52:32 np0005548731 podman[232302]: nova_compute
Dec  6 01:52:32 np0005548731 podman[232403]: nova_compute
Dec  6 01:52:32 np0005548731 systemd[1]: edpm_nova_compute.service: Deactivated successfully.
Dec  6 01:52:32 np0005548731 systemd[1]: Stopped nova_compute container.
Dec  6 01:52:32 np0005548731 systemd[1]: Starting nova_compute container...
Dec  6 01:52:32 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:52:32 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 01:52:32 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:52:32.643 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 01:52:32 np0005548731 systemd[1]: Started libcrun container.
Dec  6 01:52:32 np0005548731 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/42fb7a27549122014af3dc214d958ed1562ba5e46b7418274b5dd16a480d31f1/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Dec  6 01:52:32 np0005548731 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/42fb7a27549122014af3dc214d958ed1562ba5e46b7418274b5dd16a480d31f1/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Dec  6 01:52:32 np0005548731 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/42fb7a27549122014af3dc214d958ed1562ba5e46b7418274b5dd16a480d31f1/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Dec  6 01:52:32 np0005548731 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/42fb7a27549122014af3dc214d958ed1562ba5e46b7418274b5dd16a480d31f1/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Dec  6 01:52:32 np0005548731 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/42fb7a27549122014af3dc214d958ed1562ba5e46b7418274b5dd16a480d31f1/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Dec  6 01:52:32 np0005548731 podman[232416]: 2025-12-06 06:52:32.725938158 +0000 UTC m=+0.095486678 container init 38f53a0a09be64af6f6f6bdbc1c5d21739c7f71bfede606a10b47d483b4cbd5c (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=edpm, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, container_name=nova_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Dec  6 01:52:32 np0005548731 podman[232416]: 2025-12-06 06:52:32.73326367 +0000 UTC m=+0.102812170 container start 38f53a0a09be64af6f6f6bdbc1c5d21739c7f71bfede606a10b47d483b4cbd5c (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=edpm, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, container_name=nova_compute, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Dec  6 01:52:32 np0005548731 nova_compute[232433]: + sudo -E kolla_set_configs
Dec  6 01:52:32 np0005548731 podman[232416]: nova_compute
Dec  6 01:52:32 np0005548731 systemd[1]: Started nova_compute container.
Dec  6 01:52:32 np0005548731 nova_compute[232433]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Dec  6 01:52:32 np0005548731 nova_compute[232433]: INFO:__main__:Validating config file
Dec  6 01:52:32 np0005548731 nova_compute[232433]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Dec  6 01:52:32 np0005548731 nova_compute[232433]: INFO:__main__:Copying service configuration files
Dec  6 01:52:32 np0005548731 nova_compute[232433]: INFO:__main__:Deleting /etc/nova/nova.conf
Dec  6 01:52:32 np0005548731 nova_compute[232433]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Dec  6 01:52:32 np0005548731 nova_compute[232433]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Dec  6 01:52:32 np0005548731 nova_compute[232433]: INFO:__main__:Deleting /etc/nova/nova.conf.d/01-nova.conf
Dec  6 01:52:32 np0005548731 nova_compute[232433]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Dec  6 01:52:32 np0005548731 nova_compute[232433]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Dec  6 01:52:32 np0005548731 nova_compute[232433]: INFO:__main__:Deleting /etc/nova/nova.conf.d/03-ceph-nova.conf
Dec  6 01:52:32 np0005548731 nova_compute[232433]: INFO:__main__:Copying /var/lib/kolla/config_files/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf
Dec  6 01:52:32 np0005548731 nova_compute[232433]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf
Dec  6 01:52:32 np0005548731 nova_compute[232433]: INFO:__main__:Deleting /etc/nova/nova.conf.d/25-nova-extra.conf
Dec  6 01:52:32 np0005548731 nova_compute[232433]: INFO:__main__:Copying /var/lib/kolla/config_files/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Dec  6 01:52:32 np0005548731 nova_compute[232433]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Dec  6 01:52:32 np0005548731 nova_compute[232433]: INFO:__main__:Deleting /etc/nova/nova.conf.d/nova-blank.conf
Dec  6 01:52:32 np0005548731 nova_compute[232433]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Dec  6 01:52:32 np0005548731 nova_compute[232433]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Dec  6 01:52:32 np0005548731 nova_compute[232433]: INFO:__main__:Deleting /etc/nova/nova.conf.d/02-nova-host-specific.conf
Dec  6 01:52:32 np0005548731 nova_compute[232433]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Dec  6 01:52:32 np0005548731 nova_compute[232433]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Dec  6 01:52:32 np0005548731 nova_compute[232433]: INFO:__main__:Deleting /etc/ceph
Dec  6 01:52:32 np0005548731 nova_compute[232433]: INFO:__main__:Creating directory /etc/ceph
Dec  6 01:52:32 np0005548731 nova_compute[232433]: INFO:__main__:Setting permission for /etc/ceph
Dec  6 01:52:32 np0005548731 nova_compute[232433]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring
Dec  6 01:52:32 np0005548731 nova_compute[232433]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Dec  6 01:52:32 np0005548731 nova_compute[232433]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.conf to /etc/ceph/ceph.conf
Dec  6 01:52:32 np0005548731 nova_compute[232433]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Dec  6 01:52:32 np0005548731 nova_compute[232433]: INFO:__main__:Deleting /var/lib/nova/.ssh/ssh-privatekey
Dec  6 01:52:32 np0005548731 nova_compute[232433]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Dec  6 01:52:32 np0005548731 nova_compute[232433]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Dec  6 01:52:32 np0005548731 nova_compute[232433]: INFO:__main__:Deleting /var/lib/nova/.ssh/config
Dec  6 01:52:32 np0005548731 nova_compute[232433]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Dec  6 01:52:32 np0005548731 nova_compute[232433]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Dec  6 01:52:32 np0005548731 nova_compute[232433]: INFO:__main__:Deleting /usr/sbin/iscsiadm
Dec  6 01:52:32 np0005548731 nova_compute[232433]: INFO:__main__:Copying /var/lib/kolla/config_files/run-on-host to /usr/sbin/iscsiadm
Dec  6 01:52:32 np0005548731 nova_compute[232433]: INFO:__main__:Setting permission for /usr/sbin/iscsiadm
Dec  6 01:52:32 np0005548731 nova_compute[232433]: INFO:__main__:Writing out command to execute
Dec  6 01:52:32 np0005548731 nova_compute[232433]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Dec  6 01:52:32 np0005548731 nova_compute[232433]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Dec  6 01:52:32 np0005548731 nova_compute[232433]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Dec  6 01:52:32 np0005548731 nova_compute[232433]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Dec  6 01:52:32 np0005548731 nova_compute[232433]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Dec  6 01:52:32 np0005548731 nova_compute[232433]: ++ cat /run_command
Dec  6 01:52:32 np0005548731 nova_compute[232433]: + CMD=nova-compute
Dec  6 01:52:32 np0005548731 nova_compute[232433]: + ARGS=
Dec  6 01:52:32 np0005548731 nova_compute[232433]: + sudo kolla_copy_cacerts
Dec  6 01:52:32 np0005548731 nova_compute[232433]: + [[ ! -n '' ]]
Dec  6 01:52:32 np0005548731 nova_compute[232433]: + . kolla_extend_start
Dec  6 01:52:32 np0005548731 nova_compute[232433]: + echo 'Running command: '\''nova-compute'\'''
Dec  6 01:52:32 np0005548731 nova_compute[232433]: Running command: 'nova-compute'
Dec  6 01:52:32 np0005548731 nova_compute[232433]: + umask 0022
Dec  6 01:52:32 np0005548731 nova_compute[232433]: + exec nova-compute
Dec  6 01:52:32 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:52:32 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000025s ======
Dec  6 01:52:32 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:52:32.969 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Dec  6 01:52:33 np0005548731 python3.9[232596]: ansible-containers.podman.podman_container Invoked with name=nova_compute_init state=started executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Dec  6 01:52:33 np0005548731 systemd[1]: Started libpod-conmon-5270438ab0f9929d844708e197058d70817dbfb9a2276adbb71c5111687ae25b.scope.
Dec  6 01:52:33 np0005548731 systemd[1]: Started libcrun container.
Dec  6 01:52:33 np0005548731 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/129ced2e143d411ca359705979ee49eff1726ce93e5a617b6193776bb0d72c85/merged/usr/sbin/nova_statedir_ownership.py supports timestamps until 2038 (0x7fffffff)
Dec  6 01:52:33 np0005548731 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/129ced2e143d411ca359705979ee49eff1726ce93e5a617b6193776bb0d72c85/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Dec  6 01:52:33 np0005548731 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/129ced2e143d411ca359705979ee49eff1726ce93e5a617b6193776bb0d72c85/merged/var/lib/_nova_secontext supports timestamps until 2038 (0x7fffffff)
Dec  6 01:52:33 np0005548731 podman[232621]: 2025-12-06 06:52:33.933504257 +0000 UTC m=+0.206318784 container init 5270438ab0f9929d844708e197058d70817dbfb9a2276adbb71c5111687ae25b (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, config_id=edpm, container_name=nova_compute_init, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125)
Dec  6 01:52:33 np0005548731 podman[232621]: 2025-12-06 06:52:33.945111715 +0000 UTC m=+0.217926232 container start 5270438ab0f9929d844708e197058d70817dbfb9a2276adbb71c5111687ae25b (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, config_id=edpm, container_name=nova_compute_init, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec  6 01:52:33 np0005548731 python3.9[232596]: ansible-containers.podman.podman_container PODMAN-CONTAINER-DEBUG: podman start nova_compute_init
Dec  6 01:52:33 np0005548731 nova_compute_init[232643]: INFO:nova_statedir:Applying nova statedir ownership
Dec  6 01:52:33 np0005548731 nova_compute_init[232643]: INFO:nova_statedir:Target ownership for /var/lib/nova: 42436:42436
Dec  6 01:52:33 np0005548731 nova_compute_init[232643]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/
Dec  6 01:52:33 np0005548731 nova_compute_init[232643]: INFO:nova_statedir:Changing ownership of /var/lib/nova from 1000:1000 to 42436:42436
Dec  6 01:52:33 np0005548731 nova_compute_init[232643]: INFO:nova_statedir:Setting selinux context of /var/lib/nova to system_u:object_r:container_file_t:s0
Dec  6 01:52:33 np0005548731 nova_compute_init[232643]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/instances/
Dec  6 01:52:33 np0005548731 nova_compute_init[232643]: INFO:nova_statedir:Changing ownership of /var/lib/nova/instances from 1000:1000 to 42436:42436
Dec  6 01:52:33 np0005548731 nova_compute_init[232643]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/instances to system_u:object_r:container_file_t:s0
Dec  6 01:52:33 np0005548731 nova_compute_init[232643]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/
Dec  6 01:52:33 np0005548731 nova_compute_init[232643]: INFO:nova_statedir:Ownership of /var/lib/nova/.ssh already 42436:42436
Dec  6 01:52:33 np0005548731 nova_compute_init[232643]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/.ssh to system_u:object_r:container_file_t:s0
Dec  6 01:52:33 np0005548731 nova_compute_init[232643]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/ssh-privatekey
Dec  6 01:52:33 np0005548731 nova_compute_init[232643]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/config
Dec  6 01:52:33 np0005548731 nova_compute_init[232643]: INFO:nova_statedir:Nova statedir ownership complete
Dec  6 01:52:34 np0005548731 systemd[1]: libpod-5270438ab0f9929d844708e197058d70817dbfb9a2276adbb71c5111687ae25b.scope: Deactivated successfully.
Dec  6 01:52:34 np0005548731 podman[232657]: 2025-12-06 06:52:34.042703174 +0000 UTC m=+0.027583795 container died 5270438ab0f9929d844708e197058d70817dbfb9a2276adbb71c5111687ae25b (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=edpm, container_name=nova_compute_init, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Dec  6 01:52:34 np0005548731 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-5270438ab0f9929d844708e197058d70817dbfb9a2276adbb71c5111687ae25b-userdata-shm.mount: Deactivated successfully.
Dec  6 01:52:34 np0005548731 systemd[1]: var-lib-containers-storage-overlay-129ced2e143d411ca359705979ee49eff1726ce93e5a617b6193776bb0d72c85-merged.mount: Deactivated successfully.
Dec  6 01:52:34 np0005548731 podman[232657]: 2025-12-06 06:52:34.103438299 +0000 UTC m=+0.088318920 container cleanup 5270438ab0f9929d844708e197058d70817dbfb9a2276adbb71c5111687ae25b (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, tcib_managed=true, config_id=edpm, container_name=nova_compute_init, org.label-schema.build-date=20251125, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  6 01:52:34 np0005548731 systemd[1]: libpod-conmon-5270438ab0f9929d844708e197058d70817dbfb9a2276adbb71c5111687ae25b.scope: Deactivated successfully.
Dec  6 01:52:34 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:52:34 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:52:34 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:52:34.646 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:52:34 np0005548731 systemd[1]: session-50.scope: Deactivated successfully.
Dec  6 01:52:34 np0005548731 systemd[1]: session-50.scope: Consumed 2min 17.734s CPU time.
Dec  6 01:52:34 np0005548731 systemd-logind[794]: Session 50 logged out. Waiting for processes to exit.
Dec  6 01:52:34 np0005548731 systemd-logind[794]: Removed session 50.
Dec  6 01:52:34 np0005548731 nova_compute[232433]: 2025-12-06 06:52:34.913 232437 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Dec  6 01:52:34 np0005548731 nova_compute[232433]: 2025-12-06 06:52:34.914 232437 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Dec  6 01:52:34 np0005548731 nova_compute[232433]: 2025-12-06 06:52:34.914 232437 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Dec  6 01:52:34 np0005548731 nova_compute[232433]: 2025-12-06 06:52:34.914 232437 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs#033[00m
Dec  6 01:52:34 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:52:34 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:52:34 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:52:34.971 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.070 232437 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.094 232437 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 1 in 0.024s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.095 232437 DEBUG oslo_concurrency.processutils [-] 'grep -F node.session.scan /sbin/iscsiadm' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.566 232437 INFO nova.virt.driver [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.697 232437 INFO nova.compute.provider_config [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.708 232437 DEBUG oslo_concurrency.lockutils [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.708 232437 DEBUG oslo_concurrency.lockutils [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.709 232437 DEBUG oslo_concurrency.lockutils [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.709 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.709 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.709 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.709 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.709 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.710 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.710 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.710 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.710 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.710 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.710 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.710 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.710 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.711 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.711 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.711 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.711 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.711 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.712 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.712 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] console_host                   = compute-2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.712 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.712 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.712 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.712 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.712 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.713 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.713 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.713 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.713 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.713 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.713 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.713 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.714 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.714 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.714 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.714 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.714 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.715 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.715 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] host                           = compute-2.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.715 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.715 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.716 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.716 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.716 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.716 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.716 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.716 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.716 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.717 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.717 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.717 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.717 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.717 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.717 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.718 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.718 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.718 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.718 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.718 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.718 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.718 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.718 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.719 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.719 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.719 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.719 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.719 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.719 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.719 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.720 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.720 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.720 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.720 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.720 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.720 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.720 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.721 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.721 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.721 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.721 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.721 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] my_block_storage_ip            = 192.168.122.102 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.721 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] my_ip                          = 192.168.122.102 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.722 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.722 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.722 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.722 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.722 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.722 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.722 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.723 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.723 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.723 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.723 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.723 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.723 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.723 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.724 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.724 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.724 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.724 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.724 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.724 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.724 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.725 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.725 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.725 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.725 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.725 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.725 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.725 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.725 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.726 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.726 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.726 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.726 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.726 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.726 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.726 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.727 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.727 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.727 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.727 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.727 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.727 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.727 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.728 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.728 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.728 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.728 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.728 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.728 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.728 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.729 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.729 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.729 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.729 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.729 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.729 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.729 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.730 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.730 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.730 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.730 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.730 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.730 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.730 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.731 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.731 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.731 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.731 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.731 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.731 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.732 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.732 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.732 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.732 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.732 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.732 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.732 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.733 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.733 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.733 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.733 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.733 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.733 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.733 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.734 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.734 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.734 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.734 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.734 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.734 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.734 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.735 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.735 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.735 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.735 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.735 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.735 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.735 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.736 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.736 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.736 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.736 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.736 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.736 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.736 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.737 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.737 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.737 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.737 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.737 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.737 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.737 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.738 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.738 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.738 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.738 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.738 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.738 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.738 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.739 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.739 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.739 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.739 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.739 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.739 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.739 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.740 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.740 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.740 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.740 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.740 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.740 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.740 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.741 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.741 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.741 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.741 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.741 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.741 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.741 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.742 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] cinder.os_region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.742 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.742 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.742 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.742 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.742 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.742 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.743 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.743 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.743 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.743 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.743 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.743 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.743 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.744 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.744 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.744 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.744 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.744 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.744 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.745 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.745 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.745 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.745 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.745 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.745 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.745 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.745 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.746 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.746 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.746 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.746 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.746 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.746 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.747 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.747 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.747 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.747 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.747 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.747 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.747 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.748 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.748 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.748 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.748 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.748 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.748 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.749 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.749 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.749 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.749 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.749 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.749 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.749 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.750 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.750 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.750 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.750 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.750 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.750 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.751 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.751 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.751 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.751 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.751 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.751 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.752 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.752 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.752 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.752 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.752 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.752 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.753 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.753 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.753 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.753 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.753 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.753 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.753 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.754 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.754 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.754 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.754 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.754 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.754 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.755 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.755 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.755 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.755 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.755 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.755 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.755 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.756 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.756 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.756 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.756 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.756 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.756 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.757 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.757 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.757 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.757 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.757 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.757 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.757 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.758 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.758 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.758 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.758 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.758 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.758 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.759 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.759 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.759 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.759 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.759 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.759 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.759 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.760 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.760 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.760 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.760 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.760 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.760 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.760 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.760 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.761 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.761 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.761 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.761 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.761 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.761 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.761 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.762 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.762 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.762 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.762 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.762 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.763 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.763 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.763 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.763 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.763 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.763 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.764 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.764 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.764 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.764 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.764 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.764 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.764 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.764 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.765 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.765 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.765 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.765 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.765 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.765 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.765 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.766 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.766 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.766 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.766 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.766 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.766 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.766 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.767 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.767 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.767 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.767 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.767 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.767 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.767 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] barbican.barbican_region_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.768 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.768 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.768 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.768 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.768 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.768 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.768 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.769 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.769 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.769 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.769 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.769 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.769 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.769 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.770 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.770 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.770 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.770 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.770 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.770 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.770 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.770 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.771 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.771 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.771 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.771 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.771 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.771 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.771 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.772 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.772 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.772 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.772 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.772 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.772 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.772 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.773 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.773 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.773 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.773 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.773 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.773 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.773 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.773 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.774 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.774 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.774 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.774 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.774 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.774 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.774 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.775 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.775 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.775 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.775 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.775 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.775 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.775 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] libvirt.cpu_mode               = custom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.776 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.776 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] libvirt.cpu_models             = ['Nehalem'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.776 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.776 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.776 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.776 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.776 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.777 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.777 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.777 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.777 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.777 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.777 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.777 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.778 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.778 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] libvirt.images_rbd_ceph_conf   = /etc/ceph/ceph.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.778 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.778 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.778 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] libvirt.images_rbd_glance_store_name = default_backend log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.778 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] libvirt.images_rbd_pool        = vms log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.778 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] libvirt.images_type            = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.779 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.779 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.779 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.779 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.779 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.779 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.779 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.780 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.780 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.780 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.780 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.780 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.780 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.780 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.781 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.781 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.781 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.781 232437 WARNING oslo_config.cfg [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Dec  6 01:52:35 np0005548731 nova_compute[232433]: live_migration_uri is deprecated for removal in favor of two other options that
Dec  6 01:52:35 np0005548731 nova_compute[232433]: allow to change live migration scheme and target URI: ``live_migration_scheme``
Dec  6 01:52:35 np0005548731 nova_compute[232433]: and ``live_migration_inbound_addr`` respectively.
Dec  6 01:52:35 np0005548731 nova_compute[232433]: ).  Its value may be silently ignored in the future.#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.781 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] libvirt.live_migration_uri     = qemu+tls://%s/system log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.781 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] libvirt.live_migration_with_native_tls = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.782 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.782 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.782 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.782 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.782 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.782 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.783 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.783 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.783 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.783 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.783 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.783 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.784 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.784 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.784 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.784 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.784 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] libvirt.rbd_secret_uuid        = 40a1bae4-cf76-5610-8dab-c75116dfe0bb log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.784 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] libvirt.rbd_user               = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.784 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.785 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.785 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.785 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.785 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.785 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.785 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.785 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.786 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.786 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.786 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.786 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.786 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.786 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.787 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.787 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.787 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.787 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.787 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.787 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.788 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.788 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.788 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.788 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.788 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.789 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.789 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.789 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.789 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.790 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.790 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.790 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.790 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.790 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.791 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.791 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.791 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.791 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.791 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.792 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.792 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.792 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.792 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.793 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.793 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.793 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.793 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.793 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.794 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.794 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.794 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.794 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.794 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.795 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.795 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.795 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.795 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.795 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.796 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.796 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.796 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.796 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.796 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.797 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.797 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.797 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.797 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.797 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.798 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.798 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.798 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] placement.auth_url             = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.798 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.798 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.799 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.799 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.799 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.799 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.799 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.800 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.800 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.800 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.800 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.800 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.801 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.801 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.801 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.801 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.801 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.802 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.802 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.802 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.802 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.803 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.803 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.803 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.803 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.803 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.804 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.804 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.804 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.804 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.804 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.804 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.805 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.805 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.805 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.805 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.806 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.806 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.806 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.806 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.806 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.807 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.807 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.807 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.807 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.807 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.808 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.808 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.808 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.808 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.809 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.809 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.809 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.809 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.809 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.810 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.810 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.810 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.810 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.810 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.811 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.811 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.811 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.811 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.812 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.812 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.812 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.812 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.812 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.813 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.813 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.813 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.813 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.813 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.814 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.814 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.814 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.814 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.814 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.814 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.815 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.815 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.815 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.815 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.815 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.816 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.816 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.816 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.816 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.817 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.817 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.817 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.817 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.817 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.818 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.818 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.818 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.818 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.818 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.819 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.819 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.819 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.819 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.819 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.820 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.820 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.820 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.820 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.821 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.821 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.821 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.821 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.821 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.822 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.822 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.822 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.822 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.822 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.823 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.823 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.823 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.823 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.823 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.824 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.824 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.824 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.824 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.824 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.824 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.824 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.825 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.825 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.825 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.825 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.825 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.825 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.825 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.826 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.826 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.826 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.826 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.826 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.826 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.826 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.826 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.827 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.827 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.827 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.827 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.827 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.827 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.827 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.828 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.828 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.828 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.828 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.828 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.828 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] vnc.novncproxy_base_url        = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.829 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.829 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.829 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.829 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] vnc.server_proxyclient_address = 192.168.122.102 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.829 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.829 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.830 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.830 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.830 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.830 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.830 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.830 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.830 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.830 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.831 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.831 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.831 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.831 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.831 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.831 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.831 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.832 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.832 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.832 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.832 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.832 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.832 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.832 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.833 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.833 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.833 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.833 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.833 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.833 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.833 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.834 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.834 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.834 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.834 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.834 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.834 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.835 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.835 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.835 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.835 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.835 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.835 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.835 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.836 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.836 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.836 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.836 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.836 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.837 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.837 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.837 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.837 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.837 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.838 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.838 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.838 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.838 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.838 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.839 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.839 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.839 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.839 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.839 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.839 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.839 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.840 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.840 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.840 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.840 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.840 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.840 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.840 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.841 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.841 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.841 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.841 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.841 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.841 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.841 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.841 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.842 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.842 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.842 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.842 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.842 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.842 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.843 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.843 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.843 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.843 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] oslo_limit.auth_url            = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.843 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.843 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.843 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.844 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.844 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.844 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.844 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.844 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.844 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.844 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.844 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.845 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.845 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.845 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.845 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.845 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.845 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.846 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.846 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.846 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.846 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.846 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.846 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.846 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.847 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.847 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.847 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.847 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.847 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.847 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.847 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.848 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.848 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.848 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.848 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.848 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.848 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.848 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.849 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.849 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.849 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.849 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.849 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.849 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.849 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.850 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.850 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.850 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.850 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.850 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.850 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.850 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.851 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.851 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.851 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.851 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.851 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.851 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.851 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.852 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.852 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.852 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.852 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.852 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.852 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.853 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.853 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.853 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.853 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.853 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.853 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.853 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.854 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.854 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.854 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.854 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.854 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.854 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.854 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.854 232437 DEBUG oslo_service.service [None req-7240cb29-7a4a-4f35-bc80-ca461a1c1e90 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.856 232437 INFO nova.service [-] Starting compute node (version 27.5.2-0.20250829104910.6f8decf.el9)#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.882 232437 INFO nova.virt.node [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Determined node identity 6d00757a-082f-486d-ae84-869a2ba2e6e7 from /var/lib/nova/compute_id#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.882 232437 DEBUG nova.virt.libvirt.host [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.883 232437 DEBUG nova.virt.libvirt.host [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.883 232437 DEBUG nova.virt.libvirt.host [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.884 232437 DEBUG nova.virt.libvirt.host [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.895 232437 DEBUG nova.virt.libvirt.host [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7f8cf2213970> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.897 232437 DEBUG nova.virt.libvirt.host [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7f8cf2213970> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.898 232437 INFO nova.virt.libvirt.driver [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Connection event '1' reason 'None'#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.905 232437 INFO nova.virt.libvirt.host [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Libvirt host capabilities <capabilities>
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 
Dec  6 01:52:35 np0005548731 nova_compute[232433]:  <host>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:    <uuid>ec2492e2-e0d1-43d3-b74f-744a8272a0e6</uuid>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:    <cpu>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <arch>x86_64</arch>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <model>EPYC-Rome-v4</model>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <vendor>AMD</vendor>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <microcode version='16777317'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <signature family='23' model='49' stepping='0'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <topology sockets='8' dies='1' clusters='1' cores='1' threads='1'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <maxphysaddr mode='emulate' bits='40'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <feature name='x2apic'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <feature name='tsc-deadline'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <feature name='osxsave'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <feature name='hypervisor'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <feature name='tsc_adjust'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <feature name='spec-ctrl'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <feature name='stibp'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <feature name='arch-capabilities'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <feature name='ssbd'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <feature name='cmp_legacy'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <feature name='topoext'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <feature name='virt-ssbd'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <feature name='lbrv'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <feature name='tsc-scale'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <feature name='vmcb-clean'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <feature name='pause-filter'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <feature name='pfthreshold'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <feature name='svme-addr-chk'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <feature name='rdctl-no'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <feature name='skip-l1dfl-vmentry'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <feature name='mds-no'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <feature name='pschange-mc-no'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <pages unit='KiB' size='4'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <pages unit='KiB' size='2048'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <pages unit='KiB' size='1048576'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:    </cpu>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:    <power_management>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <suspend_mem/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:    </power_management>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:    <iommu support='no'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:    <migration_features>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <live/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <uri_transports>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <uri_transport>tcp</uri_transport>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <uri_transport>rdma</uri_transport>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      </uri_transports>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:    </migration_features>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:    <topology>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <cells num='1'>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <cell id='0'>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:          <memory unit='KiB'>7864320</memory>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:          <pages unit='KiB' size='4'>1966080</pages>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:          <pages unit='KiB' size='2048'>0</pages>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:          <pages unit='KiB' size='1048576'>0</pages>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:          <distances>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:            <sibling id='0' value='10'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:          </distances>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:          <cpus num='8'>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:            <cpu id='0' socket_id='0' die_id='0' cluster_id='65535' core_id='0' siblings='0'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:            <cpu id='1' socket_id='1' die_id='1' cluster_id='65535' core_id='0' siblings='1'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:            <cpu id='2' socket_id='2' die_id='2' cluster_id='65535' core_id='0' siblings='2'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:            <cpu id='3' socket_id='3' die_id='3' cluster_id='65535' core_id='0' siblings='3'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:            <cpu id='4' socket_id='4' die_id='4' cluster_id='65535' core_id='0' siblings='4'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:            <cpu id='5' socket_id='5' die_id='5' cluster_id='65535' core_id='0' siblings='5'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:            <cpu id='6' socket_id='6' die_id='6' cluster_id='65535' core_id='0' siblings='6'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:            <cpu id='7' socket_id='7' die_id='7' cluster_id='65535' core_id='0' siblings='7'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:          </cpus>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        </cell>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      </cells>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:    </topology>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:    <cache>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <bank id='0' level='2' type='both' size='512' unit='KiB' cpus='0'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <bank id='1' level='2' type='both' size='512' unit='KiB' cpus='1'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <bank id='2' level='2' type='both' size='512' unit='KiB' cpus='2'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <bank id='3' level='2' type='both' size='512' unit='KiB' cpus='3'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <bank id='4' level='2' type='both' size='512' unit='KiB' cpus='4'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <bank id='5' level='2' type='both' size='512' unit='KiB' cpus='5'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <bank id='6' level='2' type='both' size='512' unit='KiB' cpus='6'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <bank id='7' level='2' type='both' size='512' unit='KiB' cpus='7'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <bank id='0' level='3' type='both' size='16' unit='MiB' cpus='0'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <bank id='1' level='3' type='both' size='16' unit='MiB' cpus='1'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <bank id='2' level='3' type='both' size='16' unit='MiB' cpus='2'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <bank id='3' level='3' type='both' size='16' unit='MiB' cpus='3'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <bank id='4' level='3' type='both' size='16' unit='MiB' cpus='4'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <bank id='5' level='3' type='both' size='16' unit='MiB' cpus='5'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <bank id='6' level='3' type='both' size='16' unit='MiB' cpus='6'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <bank id='7' level='3' type='both' size='16' unit='MiB' cpus='7'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:    </cache>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:    <secmodel>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <model>selinux</model>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <doi>0</doi>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <baselabel type='kvm'>system_u:system_r:svirt_t:s0</baselabel>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <baselabel type='qemu'>system_u:system_r:svirt_tcg_t:s0</baselabel>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:    </secmodel>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:    <secmodel>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <model>dac</model>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <doi>0</doi>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <baselabel type='kvm'>+107:+107</baselabel>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <baselabel type='qemu'>+107:+107</baselabel>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:    </secmodel>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:  </host>
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 
Dec  6 01:52:35 np0005548731 nova_compute[232433]:  <guest>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:    <os_type>hvm</os_type>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:    <arch name='i686'>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <wordsize>32</wordsize>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <emulator>/usr/libexec/qemu-kvm</emulator>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <domain type='qemu'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <domain type='kvm'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:    </arch>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:    <features>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <pae/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <nonpae/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <acpi default='on' toggle='yes'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <apic default='on' toggle='no'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <cpuselection/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <deviceboot/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <disksnapshot default='on' toggle='no'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <externalSnapshot/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:    </features>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:  </guest>
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 
Dec  6 01:52:35 np0005548731 nova_compute[232433]:  <guest>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:    <os_type>hvm</os_type>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:    <arch name='x86_64'>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <wordsize>64</wordsize>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <emulator>/usr/libexec/qemu-kvm</emulator>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <domain type='qemu'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <domain type='kvm'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:    </arch>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:    <features>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <acpi default='on' toggle='yes'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <apic default='on' toggle='no'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <cpuselection/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <deviceboot/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <disksnapshot default='on' toggle='no'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <externalSnapshot/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:    </features>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:  </guest>
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 
Dec  6 01:52:35 np0005548731 nova_compute[232433]: </capabilities>
Dec  6 01:52:35 np0005548731 nova_compute[232433]: #033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.910 232437 DEBUG nova.virt.libvirt.volume.mount [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.912 232437 DEBUG nova.virt.libvirt.host [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Getting domain capabilities for i686 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952#033[00m
Dec  6 01:52:35 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.916 232437 DEBUG nova.virt.libvirt.host [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=q35:
Dec  6 01:52:35 np0005548731 nova_compute[232433]: <domainCapabilities>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:  <path>/usr/libexec/qemu-kvm</path>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:  <domain>kvm</domain>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:  <machine>pc-q35-rhel9.8.0</machine>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:  <arch>i686</arch>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:  <vcpu max='4096'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:  <iothreads supported='yes'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:  <os supported='yes'>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:    <enum name='firmware'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:    <loader supported='yes'>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <enum name='type'>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <value>rom</value>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <value>pflash</value>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      </enum>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <enum name='readonly'>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <value>yes</value>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <value>no</value>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      </enum>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <enum name='secure'>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <value>no</value>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      </enum>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:    </loader>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:  </os>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:  <cpu>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:    <mode name='host-passthrough' supported='yes'>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <enum name='hostPassthroughMigratable'>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <value>on</value>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <value>off</value>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      </enum>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:    </mode>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:    <mode name='maximum' supported='yes'>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <enum name='maximumMigratable'>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <value>on</value>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <value>off</value>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      </enum>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:    </mode>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:    <mode name='host-model' supported='yes'>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <model fallback='forbid'>EPYC-Rome</model>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <vendor>AMD</vendor>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <maxphysaddr mode='passthrough' limit='40'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <feature policy='require' name='x2apic'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <feature policy='require' name='tsc-deadline'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <feature policy='require' name='hypervisor'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <feature policy='require' name='tsc_adjust'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <feature policy='require' name='spec-ctrl'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <feature policy='require' name='stibp'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <feature policy='require' name='ssbd'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <feature policy='require' name='cmp_legacy'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <feature policy='require' name='overflow-recov'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <feature policy='require' name='succor'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <feature policy='require' name='ibrs'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <feature policy='require' name='amd-ssbd'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <feature policy='require' name='virt-ssbd'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <feature policy='require' name='lbrv'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <feature policy='require' name='tsc-scale'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <feature policy='require' name='vmcb-clean'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <feature policy='require' name='flushbyasid'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <feature policy='require' name='pause-filter'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <feature policy='require' name='pfthreshold'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <feature policy='require' name='svme-addr-chk'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <feature policy='require' name='lfence-always-serializing'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <feature policy='disable' name='xsaves'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:    </mode>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:    <mode name='custom' supported='yes'>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <blockers model='Broadwell'>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='erms'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='hle'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='invpcid'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='pcid'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='rtm'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <blockers model='Broadwell-IBRS'>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='erms'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='hle'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='invpcid'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='pcid'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='rtm'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <blockers model='Broadwell-noTSX'>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='erms'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='invpcid'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='pcid'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <blockers model='Broadwell-noTSX-IBRS'>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='erms'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='invpcid'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='pcid'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <blockers model='Broadwell-v1'>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='erms'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='hle'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='invpcid'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='pcid'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='rtm'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <blockers model='Broadwell-v2'>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='erms'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='invpcid'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='pcid'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <blockers model='Broadwell-v3'>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='erms'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='hle'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='invpcid'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='pcid'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='rtm'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <blockers model='Broadwell-v4'>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='erms'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='invpcid'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='pcid'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <blockers model='Cascadelake-Server'>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='avx512bw'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='avx512cd'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='avx512dq'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='avx512f'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='avx512vl'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='avx512vnni'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='erms'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='hle'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='invpcid'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='pcid'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='pku'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='rtm'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <blockers model='Cascadelake-Server-noTSX'>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='avx512bw'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='avx512cd'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='avx512dq'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='avx512f'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='avx512vl'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='avx512vnni'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='erms'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='ibrs-all'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='invpcid'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='pcid'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='pku'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <blockers model='Cascadelake-Server-v1'>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='avx512bw'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='avx512cd'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='avx512dq'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='avx512f'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='avx512vl'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='avx512vnni'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='erms'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='hle'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='invpcid'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='pcid'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='pku'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='rtm'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <blockers model='Cascadelake-Server-v2'>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='avx512bw'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='avx512cd'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='avx512dq'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='avx512f'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='avx512vl'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='avx512vnni'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='erms'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='hle'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='ibrs-all'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='invpcid'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='pcid'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='pku'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='rtm'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <blockers model='Cascadelake-Server-v3'>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='avx512bw'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='avx512cd'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='avx512dq'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='avx512f'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='avx512vl'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='avx512vnni'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='erms'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='ibrs-all'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='invpcid'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='pcid'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='pku'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <blockers model='Cascadelake-Server-v4'>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='avx512bw'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='avx512cd'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='avx512dq'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='avx512f'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='avx512vl'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='avx512vnni'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='erms'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='ibrs-all'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='invpcid'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='pcid'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='pku'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <blockers model='Cascadelake-Server-v5'>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='avx512bw'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='avx512cd'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='avx512dq'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='avx512f'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='avx512vl'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='avx512vnni'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='erms'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='ibrs-all'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='invpcid'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='pcid'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='pku'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='xsaves'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <blockers model='Cooperlake'>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='avx512-bf16'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='avx512bw'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='avx512cd'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='avx512dq'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='avx512f'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='avx512vl'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='avx512vnni'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='erms'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='hle'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='ibrs-all'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='invpcid'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='pcid'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='pku'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='rtm'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='taa-no'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <blockers model='Cooperlake-v1'>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='avx512-bf16'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='avx512bw'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='avx512cd'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='avx512dq'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='avx512f'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='avx512vl'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='avx512vnni'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='erms'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='hle'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='ibrs-all'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='invpcid'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='pcid'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='pku'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='rtm'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='taa-no'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <blockers model='Cooperlake-v2'>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='avx512-bf16'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='avx512bw'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='avx512cd'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='avx512dq'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='avx512f'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='avx512vl'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='avx512vnni'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='erms'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='hle'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='ibrs-all'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='invpcid'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='pcid'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='pku'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='rtm'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='taa-no'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='xsaves'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <blockers model='Denverton'>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='erms'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='mpx'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <blockers model='Denverton-v1'>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='erms'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='mpx'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <blockers model='Denverton-v2'>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='erms'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <blockers model='Denverton-v3'>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='erms'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='xsaves'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <blockers model='Dhyana-v2'>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='xsaves'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <blockers model='EPYC-Genoa'>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='amd-psfd'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='auto-ibrs'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='avx512-bf16'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='avx512-vpopcntdq'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='avx512bitalg'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='avx512bw'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='avx512cd'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='avx512dq'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='avx512f'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='avx512ifma'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='avx512vbmi'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='avx512vbmi2'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='avx512vl'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='avx512vnni'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='erms'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='fsrm'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='gfni'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='invpcid'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='la57'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='no-nested-data-bp'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='null-sel-clr-base'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='pcid'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='pku'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='stibp-always-on'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='vaes'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='vpclmulqdq'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='xsaves'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <blockers model='EPYC-Genoa-v1'>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='amd-psfd'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='auto-ibrs'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='avx512-bf16'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='avx512-vpopcntdq'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='avx512bitalg'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='avx512bw'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='avx512cd'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='avx512dq'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='avx512f'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='avx512ifma'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='avx512vbmi'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='avx512vbmi2'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='avx512vl'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='avx512vnni'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='erms'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='fsrm'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='gfni'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='invpcid'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='la57'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='no-nested-data-bp'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='null-sel-clr-base'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='pcid'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='pku'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='stibp-always-on'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='vaes'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='vpclmulqdq'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='xsaves'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <blockers model='EPYC-Milan'>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='erms'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='fsrm'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='invpcid'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='pcid'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='pku'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='xsaves'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <blockers model='EPYC-Milan-v1'>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='erms'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='fsrm'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='invpcid'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='pcid'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='pku'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='xsaves'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <blockers model='EPYC-Milan-v2'>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='amd-psfd'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='erms'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='fsrm'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='invpcid'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='no-nested-data-bp'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='null-sel-clr-base'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='pcid'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='pku'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='stibp-always-on'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='vaes'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='vpclmulqdq'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='xsaves'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <blockers model='EPYC-Rome'>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='xsaves'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <blockers model='EPYC-Rome-v1'>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='xsaves'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <blockers model='EPYC-Rome-v2'>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='xsaves'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <blockers model='EPYC-Rome-v3'>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='xsaves'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <blockers model='EPYC-v3'>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='xsaves'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <blockers model='EPYC-v4'>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='xsaves'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <blockers model='GraniteRapids'>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='amx-bf16'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='amx-fp16'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='amx-int8'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='amx-tile'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='avx-vnni'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='avx512-bf16'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='avx512-fp16'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='avx512-vpopcntdq'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='avx512bitalg'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='avx512bw'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='avx512cd'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='avx512dq'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='avx512f'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='avx512ifma'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='avx512vbmi'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='avx512vbmi2'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='avx512vl'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='avx512vnni'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='bus-lock-detect'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='erms'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='fbsdp-no'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='fsrc'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='fsrm'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='fsrs'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='fzrm'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='gfni'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='hle'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='ibrs-all'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='invpcid'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='la57'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='mcdt-no'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='pbrsb-no'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='pcid'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='pku'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='prefetchiti'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='psdp-no'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='rtm'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='sbdr-ssdp-no'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='serialize'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='taa-no'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='tsx-ldtrk'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='vaes'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='vpclmulqdq'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='xfd'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='xsaves'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <blockers model='GraniteRapids-v1'>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='amx-bf16'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='amx-fp16'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='amx-int8'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='amx-tile'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='avx-vnni'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='avx512-bf16'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='avx512-fp16'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='avx512-vpopcntdq'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='avx512bitalg'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='avx512bw'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='avx512cd'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='avx512dq'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='avx512f'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='avx512ifma'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='avx512vbmi'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='avx512vbmi2'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='avx512vl'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='avx512vnni'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='bus-lock-detect'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='erms'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='fbsdp-no'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='fsrc'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='fsrm'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='fsrs'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='fzrm'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='gfni'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='hle'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='ibrs-all'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='invpcid'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='la57'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='mcdt-no'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='pbrsb-no'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='pcid'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='pku'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='prefetchiti'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='psdp-no'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='rtm'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='sbdr-ssdp-no'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='serialize'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='taa-no'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='tsx-ldtrk'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='vaes'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='vpclmulqdq'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='xfd'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='xsaves'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <blockers model='GraniteRapids-v2'>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='amx-bf16'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='amx-fp16'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='amx-int8'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='amx-tile'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='avx-vnni'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='avx10'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='avx10-128'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='avx10-256'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='avx10-512'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='avx512-bf16'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='avx512-fp16'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='avx512-vpopcntdq'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='avx512bitalg'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='avx512bw'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='avx512cd'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='avx512dq'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='avx512f'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='avx512ifma'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='avx512vbmi'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='avx512vbmi2'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='avx512vl'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='avx512vnni'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='bus-lock-detect'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='cldemote'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='erms'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='fbsdp-no'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='fsrc'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='fsrm'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='fsrs'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='fzrm'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='gfni'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='hle'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='ibrs-all'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='invpcid'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='la57'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='mcdt-no'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='movdir64b'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='movdiri'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='pbrsb-no'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='pcid'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='pku'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='prefetchiti'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='psdp-no'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='rtm'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='sbdr-ssdp-no'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='serialize'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='ss'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='taa-no'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='tsx-ldtrk'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='vaes'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='vpclmulqdq'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='xfd'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='xsaves'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <blockers model='Haswell'>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='erms'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='hle'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='invpcid'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='pcid'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='rtm'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <blockers model='Haswell-IBRS'>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='erms'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='hle'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='invpcid'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='pcid'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='rtm'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <blockers model='Haswell-noTSX'>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='erms'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='invpcid'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='pcid'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <blockers model='Haswell-noTSX-IBRS'>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='erms'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='invpcid'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='pcid'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <blockers model='Haswell-v1'>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='erms'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='hle'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='invpcid'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='pcid'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='rtm'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <blockers model='Haswell-v2'>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='erms'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='invpcid'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='pcid'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <blockers model='Haswell-v3'>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='erms'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='hle'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='invpcid'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='pcid'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='rtm'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <blockers model='Haswell-v4'>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='erms'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='invpcid'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='pcid'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <blockers model='Icelake-Server'>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='avx512-vpopcntdq'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='avx512bitalg'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='avx512bw'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='avx512cd'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='avx512dq'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='avx512f'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='avx512vbmi'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='avx512vbmi2'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='avx512vl'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='avx512vnni'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='erms'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='gfni'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='hle'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='invpcid'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='la57'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='pcid'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='pku'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='rtm'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='vaes'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='vpclmulqdq'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <blockers model='Icelake-Server-noTSX'>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='avx512-vpopcntdq'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='avx512bitalg'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='avx512bw'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='avx512cd'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='avx512dq'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='avx512f'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='avx512vbmi'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='avx512vbmi2'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='avx512vl'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='avx512vnni'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='erms'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='gfni'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='invpcid'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='la57'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='pcid'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='pku'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='vaes'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='vpclmulqdq'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <blockers model='Icelake-Server-v1'>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='avx512-vpopcntdq'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='avx512bitalg'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='avx512bw'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='avx512cd'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='avx512dq'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='avx512f'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='avx512vbmi'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='avx512vbmi2'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='avx512vl'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='avx512vnni'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='erms'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='gfni'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='hle'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='invpcid'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='la57'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='pcid'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='pku'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='rtm'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='vaes'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='vpclmulqdq'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <blockers model='Icelake-Server-v2'>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='avx512-vpopcntdq'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='avx512bitalg'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='avx512bw'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='avx512cd'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='avx512dq'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='avx512f'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='avx512vbmi'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='avx512vbmi2'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='avx512vl'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='avx512vnni'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='erms'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='gfni'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='invpcid'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='la57'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='pcid'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='pku'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='vaes'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='vpclmulqdq'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <blockers model='Icelake-Server-v3'>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='avx512-vpopcntdq'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='avx512bitalg'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='avx512bw'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='avx512cd'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='avx512dq'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='avx512f'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='avx512vbmi'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='avx512vbmi2'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='avx512vl'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='avx512vnni'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='erms'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='gfni'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='ibrs-all'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='invpcid'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='la57'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='pcid'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='pku'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='taa-no'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='vaes'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='vpclmulqdq'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <blockers model='Icelake-Server-v4'>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='avx512-vpopcntdq'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='avx512bitalg'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='avx512bw'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='avx512cd'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='avx512dq'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='avx512f'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='avx512ifma'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='avx512vbmi'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='avx512vbmi2'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='avx512vl'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='avx512vnni'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='erms'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='fsrm'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='gfni'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='ibrs-all'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='invpcid'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='la57'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='pcid'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='pku'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='taa-no'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='vaes'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='vpclmulqdq'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <blockers model='Icelake-Server-v5'>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='avx512-vpopcntdq'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='avx512bitalg'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='avx512bw'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='avx512cd'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='avx512dq'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='avx512f'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='avx512ifma'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='avx512vbmi'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='avx512vbmi2'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='avx512vl'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='avx512vnni'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='erms'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='fsrm'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='gfni'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='ibrs-all'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='invpcid'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='la57'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='pcid'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='pku'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='taa-no'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='vaes'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='vpclmulqdq'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='xsaves'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <blockers model='Icelake-Server-v6'>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='avx512-vpopcntdq'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='avx512bitalg'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='avx512bw'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='avx512cd'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='avx512dq'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='avx512f'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='avx512ifma'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='avx512vbmi'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='avx512vbmi2'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='avx512vl'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='avx512vnni'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='erms'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='fsrm'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='gfni'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='ibrs-all'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='invpcid'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='la57'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='pcid'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='pku'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='taa-no'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='vaes'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='vpclmulqdq'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='xsaves'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <blockers model='Icelake-Server-v7'>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='avx512-vpopcntdq'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='avx512bitalg'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='avx512bw'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='avx512cd'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='avx512dq'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='avx512f'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='avx512ifma'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='avx512vbmi'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='avx512vbmi2'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='avx512vl'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='avx512vnni'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='erms'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='fsrm'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='gfni'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='hle'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='ibrs-all'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='invpcid'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='la57'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='pcid'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='pku'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='rtm'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='taa-no'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='vaes'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='vpclmulqdq'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='xsaves'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <blockers model='IvyBridge'>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='erms'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <blockers model='IvyBridge-IBRS'>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='erms'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <blockers model='IvyBridge-v1'>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='erms'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <blockers model='IvyBridge-v2'>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='erms'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <blockers model='KnightsMill'>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='avx512-4fmaps'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='avx512-4vnniw'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='avx512-vpopcntdq'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='avx512cd'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='avx512er'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='avx512f'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='avx512pf'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='erms'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='ss'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <blockers model='KnightsMill-v1'>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='avx512-4fmaps'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='avx512-4vnniw'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='avx512-vpopcntdq'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='avx512cd'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='avx512er'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='avx512f'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='avx512pf'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='erms'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='ss'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <blockers model='Opteron_G4'>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='fma4'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='xop'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <blockers model='Opteron_G4-v1'>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='fma4'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='xop'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <blockers model='Opteron_G5'>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='fma4'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='tbm'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='xop'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <blockers model='Opteron_G5-v1'>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='fma4'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='tbm'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='xop'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <blockers model='SapphireRapids'>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='amx-bf16'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='amx-int8'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='amx-tile'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='avx-vnni'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='avx512-bf16'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='avx512-fp16'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='avx512-vpopcntdq'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='avx512bitalg'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='avx512bw'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='avx512cd'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='avx512dq'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='avx512f'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='avx512ifma'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='avx512vbmi'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='avx512vbmi2'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='avx512vl'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='avx512vnni'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='bus-lock-detect'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='erms'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='fsrc'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='fsrm'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='fsrs'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='fzrm'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='gfni'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='hle'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='ibrs-all'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='invpcid'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='la57'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='pcid'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='pku'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='rtm'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='serialize'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='taa-no'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='tsx-ldtrk'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='vaes'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='vpclmulqdq'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='xfd'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='xsaves'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <blockers model='SapphireRapids-v1'>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='amx-bf16'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='amx-int8'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='amx-tile'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='avx-vnni'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='avx512-bf16'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='avx512-fp16'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='avx512-vpopcntdq'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='avx512bitalg'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='avx512bw'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='avx512cd'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='avx512dq'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='avx512f'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='avx512ifma'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='avx512vbmi'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='avx512vbmi2'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='avx512vl'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='avx512vnni'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='bus-lock-detect'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='erms'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='fsrc'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='fsrm'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='fsrs'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='fzrm'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='gfni'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='hle'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='ibrs-all'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='invpcid'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='la57'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='pcid'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='pku'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='rtm'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='serialize'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='taa-no'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='tsx-ldtrk'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='vaes'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='vpclmulqdq'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='xfd'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='xsaves'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <blockers model='SapphireRapids-v2'>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='amx-bf16'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='amx-int8'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='amx-tile'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='avx-vnni'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='avx512-bf16'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='avx512-fp16'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='avx512-vpopcntdq'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='avx512bitalg'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='avx512bw'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='avx512cd'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='avx512dq'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='avx512f'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='avx512ifma'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='avx512vbmi'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='avx512vbmi2'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='avx512vl'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='avx512vnni'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='bus-lock-detect'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='erms'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='fbsdp-no'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='fsrc'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='fsrm'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='fsrs'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='fzrm'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='gfni'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='hle'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='ibrs-all'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='invpcid'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='la57'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='pcid'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='pku'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='psdp-no'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='rtm'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='sbdr-ssdp-no'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='serialize'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='taa-no'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='tsx-ldtrk'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='vaes'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='vpclmulqdq'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='xfd'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='xsaves'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <blockers model='SapphireRapids-v3'>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='amx-bf16'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='amx-int8'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='amx-tile'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='avx-vnni'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='avx512-bf16'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='avx512-fp16'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='avx512-vpopcntdq'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='avx512bitalg'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='avx512bw'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='avx512cd'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='avx512dq'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='avx512f'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='avx512ifma'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='avx512vbmi'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='avx512vbmi2'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='avx512vl'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='avx512vnni'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='bus-lock-detect'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='cldemote'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='erms'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='fbsdp-no'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='fsrc'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='fsrm'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='fsrs'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='fzrm'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='gfni'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='hle'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='ibrs-all'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='invpcid'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='la57'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='movdir64b'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='movdiri'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='pcid'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='pku'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='psdp-no'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='rtm'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='sbdr-ssdp-no'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='serialize'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='ss'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='taa-no'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='tsx-ldtrk'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='vaes'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='vpclmulqdq'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='xfd'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='xsaves'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <blockers model='SierraForest'>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='avx-ifma'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='avx-ne-convert'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='avx-vnni'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='avx-vnni-int8'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='bus-lock-detect'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='cmpccxadd'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='erms'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='fbsdp-no'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='fsrm'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='fsrs'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='gfni'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='ibrs-all'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='invpcid'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='mcdt-no'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='pbrsb-no'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='pcid'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='pku'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='psdp-no'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='sbdr-ssdp-no'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='serialize'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='vaes'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='vpclmulqdq'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='xsaves'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <blockers model='SierraForest-v1'>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='avx-ifma'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='avx-ne-convert'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='avx-vnni'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='avx-vnni-int8'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='bus-lock-detect'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='cmpccxadd'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='erms'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='fbsdp-no'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='fsrm'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='fsrs'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='gfni'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='ibrs-all'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='invpcid'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='mcdt-no'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='pbrsb-no'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='pcid'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='pku'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='psdp-no'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='sbdr-ssdp-no'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='serialize'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='vaes'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='vpclmulqdq'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='xsaves'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <blockers model='Skylake-Client'>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='erms'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='hle'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='invpcid'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='pcid'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='rtm'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <blockers model='Skylake-Client-IBRS'>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='erms'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='hle'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='invpcid'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='pcid'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='rtm'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='erms'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='invpcid'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='pcid'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <blockers model='Skylake-Client-v1'>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='erms'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='hle'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='invpcid'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='pcid'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='rtm'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <blockers model='Skylake-Client-v2'>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='erms'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='hle'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='invpcid'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='pcid'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='rtm'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <blockers model='Skylake-Client-v3'>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='erms'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='invpcid'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='pcid'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <blockers model='Skylake-Client-v4'>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='erms'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='invpcid'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='pcid'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='xsaves'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <blockers model='Skylake-Server'>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='avx512bw'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='avx512cd'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='avx512dq'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='avx512f'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='avx512vl'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='erms'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='hle'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='invpcid'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='pcid'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='pku'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='rtm'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <blockers model='Skylake-Server-IBRS'>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='avx512bw'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='avx512cd'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='avx512dq'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='avx512f'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='avx512vl'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='erms'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='hle'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='invpcid'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='pcid'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='pku'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='rtm'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='avx512bw'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='avx512cd'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='avx512dq'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='avx512f'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='avx512vl'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='erms'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='invpcid'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='pcid'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='pku'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <blockers model='Skylake-Server-v1'>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='avx512bw'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='avx512cd'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='avx512dq'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='avx512f'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='avx512vl'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='erms'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='hle'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='invpcid'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='pcid'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='pku'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='rtm'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <blockers model='Skylake-Server-v2'>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='avx512bw'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='avx512cd'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='avx512dq'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='avx512f'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='avx512vl'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='erms'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='hle'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='invpcid'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='pcid'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='pku'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='rtm'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <blockers model='Skylake-Server-v3'>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='avx512bw'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='avx512cd'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='avx512dq'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='avx512f'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='avx512vl'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='erms'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='invpcid'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='pcid'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='pku'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <blockers model='Skylake-Server-v4'>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='avx512bw'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='avx512cd'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='avx512dq'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='avx512f'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='avx512vl'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='erms'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='invpcid'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='pcid'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='pku'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <blockers model='Skylake-Server-v5'>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='avx512bw'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='avx512cd'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='avx512dq'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='avx512f'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='avx512vl'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='erms'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='invpcid'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='pcid'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='pku'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='xsaves'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <blockers model='Snowridge'>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='cldemote'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='core-capability'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='erms'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='gfni'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='movdir64b'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='movdiri'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='mpx'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='split-lock-detect'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <blockers model='Snowridge-v1'>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='cldemote'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='core-capability'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='erms'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='gfni'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='movdir64b'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='movdiri'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='mpx'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='split-lock-detect'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <blockers model='Snowridge-v2'>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='cldemote'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='core-capability'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='erms'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='gfni'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='movdir64b'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='movdiri'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='split-lock-detect'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <blockers model='Snowridge-v3'>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='cldemote'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='core-capability'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='erms'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='gfni'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='movdir64b'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='movdiri'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='split-lock-detect'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='xsaves'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <blockers model='Snowridge-v4'>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='cldemote'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='erms'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='gfni'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='movdir64b'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='movdiri'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='xsaves'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <blockers model='athlon'>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='3dnow'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='3dnowext'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <blockers model='athlon-v1'>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='3dnow'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='3dnowext'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <blockers model='core2duo'>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='ss'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <blockers model='core2duo-v1'>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='ss'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <blockers model='coreduo'>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='ss'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <blockers model='coreduo-v1'>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='ss'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <blockers model='n270'>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='ss'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <blockers model='n270-v1'>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='ss'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <blockers model='phenom'>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='3dnow'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='3dnowext'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <blockers model='phenom-v1'>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='3dnow'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <feature name='3dnowext'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:    </mode>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:  </cpu>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:  <memoryBacking supported='yes'>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:    <enum name='sourceType'>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <value>file</value>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <value>anonymous</value>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <value>memfd</value>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:    </enum>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:  </memoryBacking>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:  <devices>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:    <disk supported='yes'>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <enum name='diskDevice'>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <value>disk</value>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <value>cdrom</value>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <value>floppy</value>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <value>lun</value>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      </enum>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <enum name='bus'>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <value>fdc</value>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <value>scsi</value>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <value>virtio</value>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <value>usb</value>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <value>sata</value>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      </enum>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <enum name='model'>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <value>virtio</value>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <value>virtio-transitional</value>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <value>virtio-non-transitional</value>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      </enum>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:    </disk>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:    <graphics supported='yes'>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <enum name='type'>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <value>vnc</value>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <value>egl-headless</value>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <value>dbus</value>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      </enum>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:    </graphics>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:    <video supported='yes'>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <enum name='modelType'>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <value>vga</value>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <value>cirrus</value>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <value>virtio</value>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <value>none</value>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <value>bochs</value>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <value>ramfb</value>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      </enum>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:    </video>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:    <hostdev supported='yes'>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <enum name='mode'>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <value>subsystem</value>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      </enum>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <enum name='startupPolicy'>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <value>default</value>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <value>mandatory</value>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <value>requisite</value>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <value>optional</value>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      </enum>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <enum name='subsysType'>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <value>usb</value>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <value>pci</value>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <value>scsi</value>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      </enum>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <enum name='capsType'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <enum name='pciBackend'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:    </hostdev>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:    <rng supported='yes'>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <enum name='model'>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <value>virtio</value>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <value>virtio-transitional</value>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <value>virtio-non-transitional</value>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      </enum>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <enum name='backendModel'>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <value>random</value>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <value>egd</value>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <value>builtin</value>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      </enum>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:    </rng>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:    <filesystem supported='yes'>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <enum name='driverType'>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <value>path</value>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <value>handle</value>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <value>virtiofs</value>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      </enum>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:    </filesystem>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:    <tpm supported='yes'>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <enum name='model'>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <value>tpm-tis</value>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <value>tpm-crb</value>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      </enum>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <enum name='backendModel'>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <value>emulator</value>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <value>external</value>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      </enum>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <enum name='backendVersion'>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <value>2.0</value>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      </enum>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:    </tpm>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:    <redirdev supported='yes'>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <enum name='bus'>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <value>usb</value>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      </enum>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:    </redirdev>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:    <channel supported='yes'>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <enum name='type'>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <value>pty</value>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <value>unix</value>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      </enum>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:    </channel>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:    <crypto supported='yes'>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <enum name='model'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <enum name='type'>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <value>qemu</value>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      </enum>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <enum name='backendModel'>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <value>builtin</value>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      </enum>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:    </crypto>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:    <interface supported='yes'>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <enum name='backendType'>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <value>default</value>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <value>passt</value>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      </enum>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:    </interface>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:    <panic supported='yes'>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <enum name='model'>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <value>isa</value>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <value>hyperv</value>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      </enum>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:    </panic>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:    <console supported='yes'>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <enum name='type'>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <value>null</value>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <value>vc</value>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <value>pty</value>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <value>dev</value>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <value>file</value>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <value>pipe</value>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <value>stdio</value>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <value>udp</value>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <value>tcp</value>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <value>unix</value>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <value>qemu-vdagent</value>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <value>dbus</value>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      </enum>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:    </console>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:  </devices>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:  <features>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:    <gic supported='no'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:    <vmcoreinfo supported='yes'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:    <genid supported='yes'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:    <backingStoreInput supported='yes'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:    <backup supported='yes'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:    <async-teardown supported='yes'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:    <ps2 supported='yes'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:    <sev supported='no'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:    <sgx supported='no'/>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:    <hyperv supported='yes'>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:      <enum name='features'>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <value>relaxed</value>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <value>vapic</value>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <value>spinlocks</value>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <value>vpindex</value>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <value>runtime</value>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <value>synic</value>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <value>stimer</value>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <value>reset</value>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <value>vendor_id</value>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <value>frequencies</value>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <value>reenlightenment</value>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <value>tlbflush</value>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <value>ipi</value>
Dec  6 01:52:35 np0005548731 nova_compute[232433]:        <value>avic</value>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <value>emsr_bitmap</value>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <value>xmm_input</value>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </enum>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <defaults>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <spinlocks>4095</spinlocks>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <stimer_direct>on</stimer_direct>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <tlbflush_direct>on</tlbflush_direct>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <tlbflush_extended>on</tlbflush_extended>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <vendor_id>Linux KVM Hv</vendor_id>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </defaults>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:    </hyperv>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:    <launchSecurity supported='yes'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <enum name='sectype'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <value>tdx</value>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </enum>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:    </launchSecurity>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:  </features>
Dec  6 01:52:36 np0005548731 nova_compute[232433]: </domainCapabilities>
Dec  6 01:52:36 np0005548731 nova_compute[232433]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037#033[00m
Dec  6 01:52:36 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.921 232437 DEBUG nova.virt.libvirt.host [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=pc:
Dec  6 01:52:36 np0005548731 nova_compute[232433]: <domainCapabilities>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:  <path>/usr/libexec/qemu-kvm</path>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:  <domain>kvm</domain>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:  <machine>pc-i440fx-rhel7.6.0</machine>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:  <arch>i686</arch>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:  <vcpu max='240'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:  <iothreads supported='yes'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:  <os supported='yes'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:    <enum name='firmware'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:    <loader supported='yes'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <enum name='type'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <value>rom</value>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <value>pflash</value>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </enum>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <enum name='readonly'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <value>yes</value>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <value>no</value>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </enum>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <enum name='secure'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <value>no</value>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </enum>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:    </loader>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:  </os>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:  <cpu>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:    <mode name='host-passthrough' supported='yes'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <enum name='hostPassthroughMigratable'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <value>on</value>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <value>off</value>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </enum>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:    </mode>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:    <mode name='maximum' supported='yes'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <enum name='maximumMigratable'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <value>on</value>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <value>off</value>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </enum>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:    </mode>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:    <mode name='host-model' supported='yes'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model fallback='forbid'>EPYC-Rome</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <vendor>AMD</vendor>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <maxphysaddr mode='passthrough' limit='40'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <feature policy='require' name='x2apic'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <feature policy='require' name='tsc-deadline'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <feature policy='require' name='hypervisor'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <feature policy='require' name='tsc_adjust'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <feature policy='require' name='spec-ctrl'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <feature policy='require' name='stibp'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <feature policy='require' name='ssbd'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <feature policy='require' name='cmp_legacy'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <feature policy='require' name='overflow-recov'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <feature policy='require' name='succor'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <feature policy='require' name='ibrs'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <feature policy='require' name='amd-ssbd'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <feature policy='require' name='virt-ssbd'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <feature policy='require' name='lbrv'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <feature policy='require' name='tsc-scale'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <feature policy='require' name='vmcb-clean'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <feature policy='require' name='flushbyasid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <feature policy='require' name='pause-filter'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <feature policy='require' name='pfthreshold'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <feature policy='require' name='svme-addr-chk'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <feature policy='require' name='lfence-always-serializing'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <feature policy='disable' name='xsaves'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:    </mode>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:    <mode name='custom' supported='yes'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <blockers model='Broadwell'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='erms'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='hle'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='invpcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='pcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='rtm'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <blockers model='Broadwell-IBRS'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='erms'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='hle'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='invpcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='pcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='rtm'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <blockers model='Broadwell-noTSX'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='erms'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='invpcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='pcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <blockers model='Broadwell-noTSX-IBRS'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='erms'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='invpcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='pcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <blockers model='Broadwell-v1'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='erms'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='hle'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='invpcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='pcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='rtm'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <blockers model='Broadwell-v2'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='erms'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='invpcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='pcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <blockers model='Broadwell-v3'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='erms'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='hle'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='invpcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='pcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='rtm'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <blockers model='Broadwell-v4'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='erms'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='invpcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='pcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <blockers model='Cascadelake-Server'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512bw'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512cd'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512dq'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512f'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512vl'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512vnni'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='erms'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='hle'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='invpcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='pcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='pku'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='rtm'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <blockers model='Cascadelake-Server-noTSX'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512bw'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512cd'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512dq'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512f'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512vl'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512vnni'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='erms'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='ibrs-all'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='invpcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='pcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='pku'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <blockers model='Cascadelake-Server-v1'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512bw'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512cd'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512dq'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512f'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512vl'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512vnni'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='erms'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='hle'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='invpcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='pcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='pku'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='rtm'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <blockers model='Cascadelake-Server-v2'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512bw'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512cd'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512dq'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512f'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512vl'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512vnni'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='erms'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='hle'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='ibrs-all'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='invpcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='pcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='pku'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='rtm'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <blockers model='Cascadelake-Server-v3'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512bw'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512cd'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512dq'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512f'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512vl'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512vnni'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='erms'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='ibrs-all'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='invpcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='pcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='pku'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <blockers model='Cascadelake-Server-v4'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512bw'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512cd'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512dq'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512f'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512vl'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512vnni'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='erms'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='ibrs-all'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='invpcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='pcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='pku'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <blockers model='Cascadelake-Server-v5'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512bw'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512cd'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512dq'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512f'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512vl'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512vnni'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='erms'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='ibrs-all'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='invpcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='pcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='pku'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='xsaves'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <blockers model='Cooperlake'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512-bf16'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512bw'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512cd'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512dq'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512f'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512vl'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512vnni'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='erms'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='hle'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='ibrs-all'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='invpcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='pcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='pku'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='rtm'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='taa-no'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <blockers model='Cooperlake-v1'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512-bf16'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512bw'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512cd'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512dq'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512f'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512vl'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512vnni'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='erms'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='hle'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='ibrs-all'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='invpcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='pcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='pku'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='rtm'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='taa-no'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <blockers model='Cooperlake-v2'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512-bf16'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512bw'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512cd'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512dq'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512f'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512vl'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512vnni'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='erms'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='hle'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='ibrs-all'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='invpcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='pcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='pku'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='rtm'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='taa-no'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='xsaves'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <blockers model='Denverton'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='erms'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='mpx'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <blockers model='Denverton-v1'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='erms'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='mpx'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <blockers model='Denverton-v2'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='erms'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <blockers model='Denverton-v3'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='erms'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='xsaves'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <blockers model='Dhyana-v2'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='xsaves'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <blockers model='EPYC-Genoa'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='amd-psfd'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='auto-ibrs'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512-bf16'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512-vpopcntdq'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512bitalg'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512bw'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512cd'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512dq'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512f'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512ifma'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512vbmi'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512vbmi2'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512vl'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512vnni'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='erms'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='fsrm'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='gfni'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='invpcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='la57'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='no-nested-data-bp'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='null-sel-clr-base'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='pcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='pku'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='stibp-always-on'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='vaes'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='vpclmulqdq'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='xsaves'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <blockers model='EPYC-Genoa-v1'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='amd-psfd'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='auto-ibrs'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512-bf16'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512-vpopcntdq'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512bitalg'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512bw'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512cd'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512dq'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512f'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512ifma'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512vbmi'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512vbmi2'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512vl'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512vnni'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='erms'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='fsrm'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='gfni'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='invpcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='la57'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='no-nested-data-bp'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='null-sel-clr-base'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='pcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='pku'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='stibp-always-on'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='vaes'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='vpclmulqdq'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='xsaves'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <blockers model='EPYC-Milan'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='erms'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='fsrm'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='invpcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='pcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='pku'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='xsaves'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <blockers model='EPYC-Milan-v1'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='erms'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='fsrm'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='invpcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='pcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='pku'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='xsaves'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <blockers model='EPYC-Milan-v2'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='amd-psfd'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='erms'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='fsrm'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='invpcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='no-nested-data-bp'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='null-sel-clr-base'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='pcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='pku'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='stibp-always-on'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='vaes'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='vpclmulqdq'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='xsaves'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <blockers model='EPYC-Rome'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='xsaves'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <blockers model='EPYC-Rome-v1'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='xsaves'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <blockers model='EPYC-Rome-v2'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='xsaves'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <blockers model='EPYC-Rome-v3'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='xsaves'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <blockers model='EPYC-v3'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='xsaves'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <blockers model='EPYC-v4'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='xsaves'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <blockers model='GraniteRapids'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='amx-bf16'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='amx-fp16'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='amx-int8'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='amx-tile'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx-vnni'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512-bf16'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512-fp16'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512-vpopcntdq'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512bitalg'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512bw'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512cd'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512dq'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512f'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512ifma'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512vbmi'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512vbmi2'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512vl'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512vnni'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='bus-lock-detect'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='erms'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='fbsdp-no'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='fsrc'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='fsrm'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='fsrs'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='fzrm'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='gfni'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='hle'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='ibrs-all'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='invpcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='la57'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='mcdt-no'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='pbrsb-no'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='pcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='pku'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='prefetchiti'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='psdp-no'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='rtm'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='sbdr-ssdp-no'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='serialize'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='taa-no'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='tsx-ldtrk'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='vaes'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='vpclmulqdq'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='xfd'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='xsaves'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <blockers model='GraniteRapids-v1'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='amx-bf16'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='amx-fp16'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='amx-int8'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='amx-tile'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx-vnni'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512-bf16'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512-fp16'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512-vpopcntdq'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512bitalg'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512bw'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512cd'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512dq'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512f'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512ifma'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512vbmi'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512vbmi2'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512vl'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512vnni'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='bus-lock-detect'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='erms'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='fbsdp-no'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='fsrc'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='fsrm'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='fsrs'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='fzrm'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='gfni'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='hle'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='ibrs-all'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='invpcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='la57'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='mcdt-no'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='pbrsb-no'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='pcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='pku'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='prefetchiti'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='psdp-no'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='rtm'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='sbdr-ssdp-no'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='serialize'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='taa-no'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='tsx-ldtrk'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='vaes'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='vpclmulqdq'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='xfd'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='xsaves'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <blockers model='GraniteRapids-v2'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='amx-bf16'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='amx-fp16'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='amx-int8'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='amx-tile'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx-vnni'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx10'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx10-128'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx10-256'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx10-512'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512-bf16'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512-fp16'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512-vpopcntdq'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512bitalg'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512bw'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512cd'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512dq'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512f'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512ifma'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512vbmi'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512vbmi2'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512vl'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512vnni'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='bus-lock-detect'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='cldemote'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='erms'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='fbsdp-no'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='fsrc'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='fsrm'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='fsrs'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='fzrm'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='gfni'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='hle'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='ibrs-all'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='invpcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='la57'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='mcdt-no'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='movdir64b'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='movdiri'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='pbrsb-no'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='pcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='pku'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='prefetchiti'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='psdp-no'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='rtm'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='sbdr-ssdp-no'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='serialize'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='ss'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='taa-no'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='tsx-ldtrk'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='vaes'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='vpclmulqdq'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='xfd'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='xsaves'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <blockers model='Haswell'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='erms'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='hle'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='invpcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='pcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='rtm'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <blockers model='Haswell-IBRS'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='erms'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='hle'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='invpcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='pcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='rtm'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <blockers model='Haswell-noTSX'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='erms'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='invpcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='pcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <blockers model='Haswell-noTSX-IBRS'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='erms'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='invpcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='pcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <blockers model='Haswell-v1'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='erms'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='hle'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='invpcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='pcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='rtm'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <blockers model='Haswell-v2'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='erms'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='invpcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='pcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <blockers model='Haswell-v3'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='erms'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='hle'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='invpcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='pcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='rtm'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <blockers model='Haswell-v4'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='erms'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='invpcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='pcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <blockers model='Icelake-Server'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512-vpopcntdq'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512bitalg'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512bw'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512cd'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512dq'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512f'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512vbmi'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512vbmi2'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512vl'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512vnni'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='erms'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='gfni'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='hle'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='invpcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='la57'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='pcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='pku'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='rtm'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='vaes'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='vpclmulqdq'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <blockers model='Icelake-Server-noTSX'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512-vpopcntdq'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512bitalg'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512bw'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512cd'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512dq'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512f'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512vbmi'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512vbmi2'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512vl'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512vnni'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='erms'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='gfni'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='invpcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='la57'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='pcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='pku'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='vaes'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='vpclmulqdq'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <blockers model='Icelake-Server-v1'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512-vpopcntdq'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512bitalg'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512bw'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512cd'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512dq'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512f'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512vbmi'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512vbmi2'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512vl'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512vnni'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='erms'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='gfni'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='hle'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='invpcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='la57'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='pcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='pku'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='rtm'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='vaes'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='vpclmulqdq'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <blockers model='Icelake-Server-v2'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512-vpopcntdq'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512bitalg'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512bw'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512cd'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512dq'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512f'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512vbmi'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512vbmi2'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512vl'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512vnni'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='erms'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='gfni'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='invpcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='la57'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='pcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='pku'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='vaes'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='vpclmulqdq'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <blockers model='Icelake-Server-v3'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512-vpopcntdq'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512bitalg'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512bw'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512cd'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512dq'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512f'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512vbmi'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512vbmi2'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512vl'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512vnni'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='erms'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='gfni'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='ibrs-all'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='invpcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='la57'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='pcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='pku'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='taa-no'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='vaes'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='vpclmulqdq'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <blockers model='Icelake-Server-v4'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512-vpopcntdq'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512bitalg'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512bw'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512cd'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512dq'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512f'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512ifma'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512vbmi'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512vbmi2'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512vl'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512vnni'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='erms'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='fsrm'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='gfni'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='ibrs-all'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='invpcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='la57'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='pcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='pku'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='taa-no'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='vaes'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='vpclmulqdq'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <blockers model='Icelake-Server-v5'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512-vpopcntdq'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512bitalg'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512bw'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512cd'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512dq'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512f'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512ifma'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512vbmi'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512vbmi2'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512vl'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512vnni'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='erms'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='fsrm'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='gfni'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='ibrs-all'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='invpcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='la57'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='pcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='pku'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='taa-no'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='vaes'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='vpclmulqdq'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='xsaves'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <blockers model='Icelake-Server-v6'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512-vpopcntdq'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512bitalg'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512bw'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512cd'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512dq'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512f'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512ifma'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512vbmi'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512vbmi2'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512vl'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512vnni'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='erms'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='fsrm'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='gfni'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='ibrs-all'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='invpcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='la57'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='pcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='pku'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='taa-no'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='vaes'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='vpclmulqdq'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='xsaves'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <blockers model='Icelake-Server-v7'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512-vpopcntdq'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512bitalg'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512bw'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512cd'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512dq'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512f'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512ifma'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512vbmi'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512vbmi2'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512vl'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512vnni'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='erms'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='fsrm'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='gfni'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='hle'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='ibrs-all'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='invpcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='la57'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='pcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='pku'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='rtm'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='taa-no'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='vaes'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='vpclmulqdq'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='xsaves'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <blockers model='IvyBridge'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='erms'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <blockers model='IvyBridge-IBRS'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='erms'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <blockers model='IvyBridge-v1'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='erms'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <blockers model='IvyBridge-v2'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='erms'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <blockers model='KnightsMill'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512-4fmaps'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512-4vnniw'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512-vpopcntdq'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512cd'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512er'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512f'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512pf'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='erms'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='ss'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <blockers model='KnightsMill-v1'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512-4fmaps'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512-4vnniw'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512-vpopcntdq'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512cd'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512er'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512f'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512pf'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='erms'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='ss'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <blockers model='Opteron_G4'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='fma4'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='xop'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <blockers model='Opteron_G4-v1'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='fma4'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='xop'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <blockers model='Opteron_G5'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='fma4'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='tbm'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='xop'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <blockers model='Opteron_G5-v1'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='fma4'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='tbm'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='xop'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <blockers model='SapphireRapids'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='amx-bf16'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='amx-int8'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='amx-tile'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx-vnni'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512-bf16'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512-fp16'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512-vpopcntdq'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512bitalg'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512bw'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512cd'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512dq'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512f'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512ifma'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512vbmi'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512vbmi2'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512vl'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512vnni'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='bus-lock-detect'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='erms'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='fsrc'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='fsrm'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='fsrs'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='fzrm'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='gfni'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='hle'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='ibrs-all'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='invpcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='la57'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='pcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='pku'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='rtm'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='serialize'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='taa-no'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='tsx-ldtrk'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='vaes'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='vpclmulqdq'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='xfd'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='xsaves'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <blockers model='SapphireRapids-v1'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='amx-bf16'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='amx-int8'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='amx-tile'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx-vnni'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512-bf16'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512-fp16'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512-vpopcntdq'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512bitalg'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512bw'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512cd'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512dq'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512f'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512ifma'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512vbmi'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512vbmi2'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512vl'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512vnni'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='bus-lock-detect'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='erms'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='fsrc'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='fsrm'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='fsrs'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='fzrm'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='gfni'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='hle'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='ibrs-all'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='invpcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='la57'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='pcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='pku'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='rtm'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='serialize'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='taa-no'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='tsx-ldtrk'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='vaes'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='vpclmulqdq'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='xfd'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='xsaves'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <blockers model='SapphireRapids-v2'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='amx-bf16'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='amx-int8'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='amx-tile'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx-vnni'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512-bf16'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512-fp16'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512-vpopcntdq'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512bitalg'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512bw'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512cd'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512dq'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512f'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512ifma'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512vbmi'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512vbmi2'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512vl'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512vnni'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='bus-lock-detect'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='erms'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='fbsdp-no'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='fsrc'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='fsrm'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='fsrs'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='fzrm'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='gfni'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='hle'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='ibrs-all'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='invpcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='la57'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='pcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='pku'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='psdp-no'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='rtm'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='sbdr-ssdp-no'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='serialize'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='taa-no'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='tsx-ldtrk'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='vaes'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='vpclmulqdq'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='xfd'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='xsaves'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <blockers model='SapphireRapids-v3'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='amx-bf16'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='amx-int8'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='amx-tile'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx-vnni'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512-bf16'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512-fp16'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512-vpopcntdq'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512bitalg'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512bw'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512cd'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512dq'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512f'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512ifma'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512vbmi'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512vbmi2'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512vl'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512vnni'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='bus-lock-detect'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='cldemote'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='erms'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='fbsdp-no'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='fsrc'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='fsrm'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='fsrs'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='fzrm'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='gfni'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='hle'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='ibrs-all'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='invpcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='la57'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='movdir64b'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='movdiri'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='pcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='pku'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='psdp-no'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='rtm'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='sbdr-ssdp-no'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='serialize'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='ss'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='taa-no'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='tsx-ldtrk'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='vaes'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='vpclmulqdq'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='xfd'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='xsaves'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <blockers model='SierraForest'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx-ifma'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx-ne-convert'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx-vnni'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx-vnni-int8'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='bus-lock-detect'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='cmpccxadd'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='erms'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='fbsdp-no'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='fsrm'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='fsrs'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='gfni'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='ibrs-all'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='invpcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='mcdt-no'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='pbrsb-no'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='pcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='pku'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='psdp-no'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='sbdr-ssdp-no'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='serialize'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='vaes'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='vpclmulqdq'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='xsaves'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <blockers model='SierraForest-v1'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx-ifma'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx-ne-convert'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx-vnni'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx-vnni-int8'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='bus-lock-detect'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='cmpccxadd'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='erms'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='fbsdp-no'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='fsrm'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='fsrs'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='gfni'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='ibrs-all'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='invpcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='mcdt-no'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='pbrsb-no'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='pcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='pku'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='psdp-no'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='sbdr-ssdp-no'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='serialize'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='vaes'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='vpclmulqdq'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='xsaves'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <blockers model='Skylake-Client'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='erms'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='hle'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='invpcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='pcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='rtm'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <blockers model='Skylake-Client-IBRS'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='erms'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='hle'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='invpcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='pcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='rtm'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='erms'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='invpcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='pcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <blockers model='Skylake-Client-v1'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='erms'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='hle'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='invpcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='pcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='rtm'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <blockers model='Skylake-Client-v2'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='erms'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='hle'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='invpcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='pcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='rtm'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <blockers model='Skylake-Client-v3'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='erms'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='invpcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='pcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <blockers model='Skylake-Client-v4'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='erms'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='invpcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='pcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='xsaves'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <blockers model='Skylake-Server'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512bw'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512cd'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512dq'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512f'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512vl'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='erms'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='hle'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='invpcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='pcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='pku'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='rtm'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <blockers model='Skylake-Server-IBRS'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512bw'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512cd'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512dq'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512f'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512vl'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='erms'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='hle'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='invpcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='pcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='pku'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='rtm'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512bw'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512cd'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512dq'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512f'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512vl'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='erms'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='invpcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='pcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='pku'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <blockers model='Skylake-Server-v1'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512bw'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512cd'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512dq'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512f'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512vl'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='erms'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='hle'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='invpcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='pcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='pku'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='rtm'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <blockers model='Skylake-Server-v2'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512bw'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512cd'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512dq'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512f'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512vl'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='erms'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='hle'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='invpcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='pcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='pku'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='rtm'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <blockers model='Skylake-Server-v3'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512bw'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512cd'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512dq'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512f'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512vl'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='erms'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='invpcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='pcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='pku'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <blockers model='Skylake-Server-v4'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512bw'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512cd'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512dq'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512f'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512vl'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='erms'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='invpcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='pcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='pku'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <blockers model='Skylake-Server-v5'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512bw'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512cd'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512dq'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512f'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512vl'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='erms'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='invpcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='pcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='pku'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='xsaves'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <blockers model='Snowridge'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='cldemote'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='core-capability'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='erms'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='gfni'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='movdir64b'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='movdiri'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='mpx'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='split-lock-detect'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <blockers model='Snowridge-v1'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='cldemote'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='core-capability'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='erms'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='gfni'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='movdir64b'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='movdiri'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='mpx'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='split-lock-detect'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <blockers model='Snowridge-v2'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='cldemote'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='core-capability'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='erms'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='gfni'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='movdir64b'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='movdiri'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='split-lock-detect'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <blockers model='Snowridge-v3'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='cldemote'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='core-capability'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='erms'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='gfni'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='movdir64b'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='movdiri'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='split-lock-detect'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='xsaves'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <blockers model='Snowridge-v4'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='cldemote'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='erms'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='gfni'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='movdir64b'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='movdiri'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='xsaves'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <blockers model='athlon'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='3dnow'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='3dnowext'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <blockers model='athlon-v1'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='3dnow'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='3dnowext'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <blockers model='core2duo'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='ss'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <blockers model='core2duo-v1'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='ss'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <blockers model='coreduo'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='ss'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <blockers model='coreduo-v1'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='ss'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <blockers model='n270'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='ss'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <blockers model='n270-v1'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='ss'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <blockers model='phenom'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='3dnow'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='3dnowext'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <blockers model='phenom-v1'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='3dnow'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='3dnowext'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:    </mode>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:  </cpu>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:  <memoryBacking supported='yes'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:    <enum name='sourceType'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <value>file</value>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <value>anonymous</value>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <value>memfd</value>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:    </enum>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:  </memoryBacking>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:  <devices>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:    <disk supported='yes'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <enum name='diskDevice'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <value>disk</value>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <value>cdrom</value>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <value>floppy</value>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <value>lun</value>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </enum>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <enum name='bus'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <value>ide</value>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <value>fdc</value>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <value>scsi</value>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <value>virtio</value>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <value>usb</value>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <value>sata</value>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </enum>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <enum name='model'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <value>virtio</value>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <value>virtio-transitional</value>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <value>virtio-non-transitional</value>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </enum>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:    </disk>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:    <graphics supported='yes'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <enum name='type'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <value>vnc</value>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <value>egl-headless</value>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <value>dbus</value>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </enum>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:    </graphics>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:    <video supported='yes'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <enum name='modelType'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <value>vga</value>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <value>cirrus</value>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <value>virtio</value>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <value>none</value>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <value>bochs</value>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <value>ramfb</value>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </enum>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:    </video>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:    <hostdev supported='yes'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <enum name='mode'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <value>subsystem</value>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </enum>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <enum name='startupPolicy'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <value>default</value>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <value>mandatory</value>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <value>requisite</value>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <value>optional</value>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </enum>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <enum name='subsysType'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <value>usb</value>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <value>pci</value>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <value>scsi</value>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </enum>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <enum name='capsType'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <enum name='pciBackend'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:    </hostdev>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:    <rng supported='yes'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <enum name='model'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <value>virtio</value>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <value>virtio-transitional</value>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <value>virtio-non-transitional</value>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </enum>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <enum name='backendModel'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <value>random</value>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <value>egd</value>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <value>builtin</value>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </enum>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:    </rng>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:    <filesystem supported='yes'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <enum name='driverType'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <value>path</value>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <value>handle</value>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <value>virtiofs</value>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </enum>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:    </filesystem>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:    <tpm supported='yes'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <enum name='model'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <value>tpm-tis</value>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <value>tpm-crb</value>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </enum>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <enum name='backendModel'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <value>emulator</value>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <value>external</value>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </enum>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <enum name='backendVersion'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <value>2.0</value>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </enum>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:    </tpm>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:    <redirdev supported='yes'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <enum name='bus'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <value>usb</value>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </enum>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:    </redirdev>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:    <channel supported='yes'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <enum name='type'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <value>pty</value>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <value>unix</value>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </enum>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:    </channel>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:    <crypto supported='yes'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <enum name='model'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <enum name='type'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <value>qemu</value>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </enum>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <enum name='backendModel'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <value>builtin</value>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </enum>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:    </crypto>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:    <interface supported='yes'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <enum name='backendType'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <value>default</value>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <value>passt</value>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </enum>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:    </interface>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:    <panic supported='yes'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <enum name='model'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <value>isa</value>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <value>hyperv</value>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </enum>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:    </panic>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:    <console supported='yes'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <enum name='type'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <value>null</value>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <value>vc</value>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <value>pty</value>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <value>dev</value>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <value>file</value>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <value>pipe</value>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <value>stdio</value>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <value>udp</value>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <value>tcp</value>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <value>unix</value>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <value>qemu-vdagent</value>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <value>dbus</value>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </enum>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:    </console>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:  </devices>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:  <features>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:    <gic supported='no'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:    <vmcoreinfo supported='yes'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:    <genid supported='yes'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:    <backingStoreInput supported='yes'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:    <backup supported='yes'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:    <async-teardown supported='yes'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:    <ps2 supported='yes'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:    <sev supported='no'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:    <sgx supported='no'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:    <hyperv supported='yes'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <enum name='features'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <value>relaxed</value>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <value>vapic</value>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <value>spinlocks</value>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <value>vpindex</value>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <value>runtime</value>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <value>synic</value>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <value>stimer</value>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <value>reset</value>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <value>vendor_id</value>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <value>frequencies</value>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <value>reenlightenment</value>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <value>tlbflush</value>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <value>ipi</value>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <value>avic</value>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <value>emsr_bitmap</value>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <value>xmm_input</value>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </enum>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <defaults>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <spinlocks>4095</spinlocks>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <stimer_direct>on</stimer_direct>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <tlbflush_direct>on</tlbflush_direct>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <tlbflush_extended>on</tlbflush_extended>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <vendor_id>Linux KVM Hv</vendor_id>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </defaults>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:    </hyperv>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:    <launchSecurity supported='yes'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <enum name='sectype'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <value>tdx</value>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </enum>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:    </launchSecurity>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:  </features>
Dec  6 01:52:36 np0005548731 nova_compute[232433]: </domainCapabilities>
Dec  6 01:52:36 np0005548731 nova_compute[232433]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037#033[00m
Dec  6 01:52:36 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.967 232437 DEBUG nova.virt.libvirt.host [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Getting domain capabilities for x86_64 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952#033[00m
Dec  6 01:52:36 np0005548731 nova_compute[232433]: 2025-12-06 06:52:35.972 232437 DEBUG nova.virt.libvirt.host [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=q35:
Dec  6 01:52:36 np0005548731 nova_compute[232433]: <domainCapabilities>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:  <path>/usr/libexec/qemu-kvm</path>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:  <domain>kvm</domain>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:  <machine>pc-q35-rhel9.8.0</machine>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:  <arch>x86_64</arch>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:  <vcpu max='4096'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:  <iothreads supported='yes'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:  <os supported='yes'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:    <enum name='firmware'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <value>efi</value>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:    </enum>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:    <loader supported='yes'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <value>/usr/share/edk2/ovmf/OVMF_CODE.secboot.fd</value>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <value>/usr/share/edk2/ovmf/OVMF_CODE.fd</value>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <value>/usr/share/edk2/ovmf/OVMF.amdsev.fd</value>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <value>/usr/share/edk2/ovmf/OVMF.inteltdx.secboot.fd</value>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <enum name='type'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <value>rom</value>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <value>pflash</value>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </enum>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <enum name='readonly'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <value>yes</value>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <value>no</value>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </enum>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <enum name='secure'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <value>yes</value>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <value>no</value>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </enum>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:    </loader>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:  </os>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:  <cpu>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:    <mode name='host-passthrough' supported='yes'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <enum name='hostPassthroughMigratable'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <value>on</value>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <value>off</value>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </enum>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:    </mode>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:    <mode name='maximum' supported='yes'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <enum name='maximumMigratable'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <value>on</value>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <value>off</value>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </enum>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:    </mode>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:    <mode name='host-model' supported='yes'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model fallback='forbid'>EPYC-Rome</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <vendor>AMD</vendor>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <maxphysaddr mode='passthrough' limit='40'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <feature policy='require' name='x2apic'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <feature policy='require' name='tsc-deadline'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <feature policy='require' name='hypervisor'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <feature policy='require' name='tsc_adjust'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <feature policy='require' name='spec-ctrl'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <feature policy='require' name='stibp'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <feature policy='require' name='ssbd'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <feature policy='require' name='cmp_legacy'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <feature policy='require' name='overflow-recov'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <feature policy='require' name='succor'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <feature policy='require' name='ibrs'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <feature policy='require' name='amd-ssbd'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <feature policy='require' name='virt-ssbd'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <feature policy='require' name='lbrv'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <feature policy='require' name='tsc-scale'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <feature policy='require' name='vmcb-clean'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <feature policy='require' name='flushbyasid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <feature policy='require' name='pause-filter'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <feature policy='require' name='pfthreshold'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <feature policy='require' name='svme-addr-chk'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <feature policy='require' name='lfence-always-serializing'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <feature policy='disable' name='xsaves'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:    </mode>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:    <mode name='custom' supported='yes'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <blockers model='Broadwell'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='erms'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='hle'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='invpcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='pcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='rtm'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <blockers model='Broadwell-IBRS'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='erms'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='hle'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='invpcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='pcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='rtm'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <blockers model='Broadwell-noTSX'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='erms'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='invpcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='pcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <blockers model='Broadwell-noTSX-IBRS'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='erms'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='invpcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='pcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <blockers model='Broadwell-v1'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='erms'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='hle'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='invpcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='pcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='rtm'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <blockers model='Broadwell-v2'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='erms'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='invpcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='pcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <blockers model='Broadwell-v3'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='erms'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='hle'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='invpcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='pcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='rtm'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <blockers model='Broadwell-v4'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='erms'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='invpcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='pcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <blockers model='Cascadelake-Server'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512bw'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512cd'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512dq'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512f'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512vl'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512vnni'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='erms'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='hle'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='invpcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='pcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='pku'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='rtm'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <blockers model='Cascadelake-Server-noTSX'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512bw'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512cd'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512dq'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512f'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512vl'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512vnni'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='erms'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='ibrs-all'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='invpcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='pcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='pku'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <blockers model='Cascadelake-Server-v1'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512bw'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512cd'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512dq'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512f'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512vl'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512vnni'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='erms'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='hle'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='invpcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='pcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='pku'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='rtm'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <blockers model='Cascadelake-Server-v2'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512bw'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512cd'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512dq'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512f'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512vl'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512vnni'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='erms'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='hle'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='ibrs-all'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='invpcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='pcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='pku'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='rtm'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <blockers model='Cascadelake-Server-v3'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512bw'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512cd'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512dq'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512f'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512vl'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512vnni'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='erms'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='ibrs-all'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='invpcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='pcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='pku'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <blockers model='Cascadelake-Server-v4'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512bw'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512cd'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512dq'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512f'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512vl'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512vnni'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='erms'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='ibrs-all'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='invpcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='pcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='pku'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <blockers model='Cascadelake-Server-v5'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512bw'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512cd'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512dq'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512f'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512vl'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512vnni'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='erms'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='ibrs-all'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='invpcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='pcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='pku'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='xsaves'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <blockers model='Cooperlake'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512-bf16'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512bw'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512cd'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512dq'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512f'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512vl'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512vnni'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='erms'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='hle'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='ibrs-all'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='invpcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='pcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='pku'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='rtm'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='taa-no'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <blockers model='Cooperlake-v1'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512-bf16'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512bw'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512cd'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512dq'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512f'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512vl'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512vnni'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='erms'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='hle'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='ibrs-all'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='invpcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='pcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='pku'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='rtm'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='taa-no'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <blockers model='Cooperlake-v2'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512-bf16'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512bw'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512cd'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512dq'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512f'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512vl'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512vnni'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='erms'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='hle'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='ibrs-all'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='invpcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='pcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='pku'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='rtm'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='taa-no'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='xsaves'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <blockers model='Denverton'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='erms'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='mpx'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <blockers model='Denverton-v1'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='erms'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='mpx'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <blockers model='Denverton-v2'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='erms'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <blockers model='Denverton-v3'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='erms'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='xsaves'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <blockers model='Dhyana-v2'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='xsaves'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <blockers model='EPYC-Genoa'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='amd-psfd'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='auto-ibrs'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512-bf16'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512-vpopcntdq'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512bitalg'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512bw'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512cd'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512dq'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512f'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512ifma'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512vbmi'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512vbmi2'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512vl'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512vnni'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='erms'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='fsrm'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='gfni'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='invpcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='la57'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='no-nested-data-bp'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='null-sel-clr-base'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='pcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='pku'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='stibp-always-on'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='vaes'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='vpclmulqdq'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='xsaves'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <blockers model='EPYC-Genoa-v1'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='amd-psfd'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='auto-ibrs'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512-bf16'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512-vpopcntdq'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512bitalg'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512bw'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512cd'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512dq'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512f'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512ifma'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512vbmi'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512vbmi2'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512vl'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512vnni'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='erms'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='fsrm'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='gfni'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='invpcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='la57'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='no-nested-data-bp'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='null-sel-clr-base'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='pcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='pku'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='stibp-always-on'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='vaes'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='vpclmulqdq'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='xsaves'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <blockers model='EPYC-Milan'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='erms'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='fsrm'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='invpcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='pcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='pku'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='xsaves'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <blockers model='EPYC-Milan-v1'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='erms'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='fsrm'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='invpcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='pcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='pku'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='xsaves'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <blockers model='EPYC-Milan-v2'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='amd-psfd'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='erms'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='fsrm'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='invpcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='no-nested-data-bp'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='null-sel-clr-base'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='pcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='pku'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='stibp-always-on'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='vaes'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='vpclmulqdq'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='xsaves'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <blockers model='EPYC-Rome'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='xsaves'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <blockers model='EPYC-Rome-v1'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='xsaves'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <blockers model='EPYC-Rome-v2'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='xsaves'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <blockers model='EPYC-Rome-v3'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='xsaves'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <blockers model='EPYC-v3'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='xsaves'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <blockers model='EPYC-v4'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='xsaves'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <blockers model='GraniteRapids'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='amx-bf16'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='amx-fp16'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='amx-int8'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='amx-tile'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx-vnni'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512-bf16'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512-fp16'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512-vpopcntdq'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512bitalg'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512bw'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512cd'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512dq'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512f'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512ifma'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512vbmi'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512vbmi2'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512vl'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512vnni'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='bus-lock-detect'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='erms'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='fbsdp-no'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='fsrc'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='fsrm'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='fsrs'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='fzrm'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='gfni'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='hle'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='ibrs-all'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='invpcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='la57'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='mcdt-no'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='pbrsb-no'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='pcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='pku'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='prefetchiti'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='psdp-no'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='rtm'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='sbdr-ssdp-no'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='serialize'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='taa-no'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='tsx-ldtrk'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='vaes'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='vpclmulqdq'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='xfd'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='xsaves'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <blockers model='GraniteRapids-v1'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='amx-bf16'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='amx-fp16'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='amx-int8'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='amx-tile'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx-vnni'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512-bf16'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512-fp16'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512-vpopcntdq'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512bitalg'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512bw'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512cd'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512dq'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512f'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512ifma'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512vbmi'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512vbmi2'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512vl'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512vnni'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='bus-lock-detect'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='erms'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='fbsdp-no'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='fsrc'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='fsrm'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='fsrs'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='fzrm'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='gfni'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='hle'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='ibrs-all'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='invpcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='la57'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='mcdt-no'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='pbrsb-no'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='pcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='pku'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='prefetchiti'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='psdp-no'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='rtm'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='sbdr-ssdp-no'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='serialize'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='taa-no'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='tsx-ldtrk'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='vaes'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='vpclmulqdq'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='xfd'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='xsaves'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <blockers model='GraniteRapids-v2'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='amx-bf16'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='amx-fp16'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='amx-int8'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='amx-tile'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx-vnni'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx10'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx10-128'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx10-256'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx10-512'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512-bf16'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512-fp16'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512-vpopcntdq'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512bitalg'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512bw'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512cd'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512dq'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512f'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512ifma'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512vbmi'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512vbmi2'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512vl'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512vnni'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='bus-lock-detect'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='cldemote'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='erms'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='fbsdp-no'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='fsrc'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='fsrm'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='fsrs'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='fzrm'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='gfni'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='hle'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='ibrs-all'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='invpcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='la57'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='mcdt-no'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='movdir64b'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='movdiri'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='pbrsb-no'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='pcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='pku'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='prefetchiti'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='psdp-no'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='rtm'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='sbdr-ssdp-no'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='serialize'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='ss'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='taa-no'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='tsx-ldtrk'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='vaes'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='vpclmulqdq'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='xfd'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='xsaves'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <blockers model='Haswell'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='erms'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='hle'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='invpcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='pcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='rtm'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <blockers model='Haswell-IBRS'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='erms'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='hle'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='invpcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='pcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='rtm'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <blockers model='Haswell-noTSX'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='erms'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='invpcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='pcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <blockers model='Haswell-noTSX-IBRS'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='erms'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='invpcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='pcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <blockers model='Haswell-v1'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='erms'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='hle'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='invpcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='pcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='rtm'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <blockers model='Haswell-v2'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='erms'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='invpcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='pcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <blockers model='Haswell-v3'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='erms'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='hle'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='invpcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='pcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='rtm'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <blockers model='Haswell-v4'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='erms'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='invpcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='pcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <blockers model='Icelake-Server'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512-vpopcntdq'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512bitalg'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512bw'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512cd'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512dq'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512f'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512vbmi'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512vbmi2'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512vl'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512vnni'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='erms'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='gfni'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='hle'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='invpcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='la57'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='pcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='pku'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='rtm'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='vaes'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='vpclmulqdq'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <blockers model='Icelake-Server-noTSX'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512-vpopcntdq'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512bitalg'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512bw'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512cd'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512dq'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512f'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512vbmi'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512vbmi2'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512vl'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512vnni'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='erms'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='gfni'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='invpcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='la57'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='pcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='pku'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='vaes'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='vpclmulqdq'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <blockers model='Icelake-Server-v1'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512-vpopcntdq'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512bitalg'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512bw'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512cd'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512dq'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512f'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512vbmi'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512vbmi2'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512vl'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512vnni'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='erms'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='gfni'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='hle'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='invpcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='la57'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='pcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='pku'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='rtm'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='vaes'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='vpclmulqdq'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <blockers model='Icelake-Server-v2'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512-vpopcntdq'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512bitalg'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512bw'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512cd'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512dq'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512f'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512vbmi'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512vbmi2'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512vl'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512vnni'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='erms'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='gfni'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='invpcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='la57'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='pcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='pku'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='vaes'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='vpclmulqdq'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <blockers model='Icelake-Server-v3'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512-vpopcntdq'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512bitalg'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512bw'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512cd'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512dq'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512f'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512vbmi'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512vbmi2'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512vl'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512vnni'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='erms'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='gfni'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='ibrs-all'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='invpcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='la57'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='pcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='pku'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='taa-no'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='vaes'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='vpclmulqdq'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <blockers model='Icelake-Server-v4'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512-vpopcntdq'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512bitalg'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512bw'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512cd'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512dq'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512f'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512ifma'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512vbmi'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512vbmi2'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512vl'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512vnni'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='erms'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='fsrm'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='gfni'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='ibrs-all'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='invpcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='la57'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='pcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='pku'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='taa-no'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='vaes'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='vpclmulqdq'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <blockers model='Icelake-Server-v5'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512-vpopcntdq'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512bitalg'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512bw'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512cd'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512dq'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512f'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512ifma'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512vbmi'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512vbmi2'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512vl'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512vnni'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='erms'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='fsrm'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='gfni'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='ibrs-all'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='invpcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='la57'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='pcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='pku'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='taa-no'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='vaes'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='vpclmulqdq'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='xsaves'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <blockers model='Icelake-Server-v6'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512-vpopcntdq'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512bitalg'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512bw'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512cd'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512dq'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512f'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512ifma'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512vbmi'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512vbmi2'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512vl'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512vnni'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='erms'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='fsrm'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='gfni'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='ibrs-all'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='invpcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='la57'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='pcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='pku'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='taa-no'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='vaes'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='vpclmulqdq'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='xsaves'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <blockers model='Icelake-Server-v7'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512-vpopcntdq'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512bitalg'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512bw'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512cd'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512dq'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512f'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512ifma'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512vbmi'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512vbmi2'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512vl'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512vnni'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='erms'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='fsrm'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='gfni'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='hle'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='ibrs-all'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='invpcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='la57'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='pcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='pku'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='rtm'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='taa-no'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='vaes'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='vpclmulqdq'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='xsaves'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <blockers model='IvyBridge'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='erms'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <blockers model='IvyBridge-IBRS'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='erms'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <blockers model='IvyBridge-v1'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='erms'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <blockers model='IvyBridge-v2'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='erms'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <blockers model='KnightsMill'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512-4fmaps'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512-4vnniw'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512-vpopcntdq'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512cd'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512er'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512f'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512pf'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='erms'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='ss'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <blockers model='KnightsMill-v1'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512-4fmaps'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512-4vnniw'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512-vpopcntdq'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512cd'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512er'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512f'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512pf'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='erms'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='ss'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <blockers model='Opteron_G4'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='fma4'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='xop'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <blockers model='Opteron_G4-v1'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='fma4'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='xop'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <blockers model='Opteron_G5'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='fma4'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='tbm'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='xop'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <blockers model='Opteron_G5-v1'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='fma4'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='tbm'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='xop'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <blockers model='SapphireRapids'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='amx-bf16'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='amx-int8'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='amx-tile'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx-vnni'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512-bf16'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512-fp16'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512-vpopcntdq'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512bitalg'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512bw'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512cd'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512dq'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512f'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512ifma'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512vbmi'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512vbmi2'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512vl'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512vnni'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='bus-lock-detect'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='erms'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='fsrc'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='fsrm'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='fsrs'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='fzrm'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='gfni'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='hle'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='ibrs-all'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='invpcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='la57'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='pcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='pku'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='rtm'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='serialize'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='taa-no'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='tsx-ldtrk'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='vaes'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='vpclmulqdq'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='xfd'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='xsaves'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <blockers model='SapphireRapids-v1'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='amx-bf16'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='amx-int8'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='amx-tile'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx-vnni'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512-bf16'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512-fp16'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512-vpopcntdq'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512bitalg'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512bw'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512cd'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512dq'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512f'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512ifma'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512vbmi'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512vbmi2'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512vl'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512vnni'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='bus-lock-detect'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='erms'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='fsrc'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='fsrm'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='fsrs'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='fzrm'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='gfni'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='hle'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='ibrs-all'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='invpcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='la57'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='pcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='pku'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='rtm'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='serialize'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='taa-no'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='tsx-ldtrk'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='vaes'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='vpclmulqdq'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='xfd'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='xsaves'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <blockers model='SapphireRapids-v2'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='amx-bf16'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='amx-int8'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='amx-tile'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx-vnni'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512-bf16'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512-fp16'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512-vpopcntdq'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512bitalg'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512bw'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512cd'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512dq'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512f'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512ifma'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512vbmi'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512vbmi2'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512vl'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512vnni'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='bus-lock-detect'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='erms'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='fbsdp-no'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='fsrc'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='fsrm'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='fsrs'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='fzrm'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='gfni'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='hle'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='ibrs-all'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='invpcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='la57'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='pcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='pku'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='psdp-no'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='rtm'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='sbdr-ssdp-no'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='serialize'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='taa-no'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='tsx-ldtrk'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='vaes'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='vpclmulqdq'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='xfd'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='xsaves'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <blockers model='SapphireRapids-v3'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='amx-bf16'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='amx-int8'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='amx-tile'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx-vnni'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512-bf16'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512-fp16'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512-vpopcntdq'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512bitalg'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512bw'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512cd'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512dq'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512f'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512ifma'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512vbmi'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512vbmi2'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512vl'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512vnni'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='bus-lock-detect'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='cldemote'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='erms'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='fbsdp-no'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='fsrc'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='fsrm'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='fsrs'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='fzrm'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='gfni'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='hle'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='ibrs-all'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='invpcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='la57'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='movdir64b'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='movdiri'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='pcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='pku'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='psdp-no'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='rtm'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='sbdr-ssdp-no'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='serialize'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='ss'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='taa-no'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='tsx-ldtrk'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='vaes'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='vpclmulqdq'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='xfd'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='xsaves'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <blockers model='SierraForest'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx-ifma'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx-ne-convert'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx-vnni'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx-vnni-int8'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='bus-lock-detect'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='cmpccxadd'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='erms'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='fbsdp-no'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='fsrm'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='fsrs'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='gfni'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='ibrs-all'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='invpcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='mcdt-no'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='pbrsb-no'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='pcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='pku'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='psdp-no'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='sbdr-ssdp-no'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='serialize'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='vaes'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='vpclmulqdq'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='xsaves'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <blockers model='SierraForest-v1'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx-ifma'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx-ne-convert'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx-vnni'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx-vnni-int8'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='bus-lock-detect'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='cmpccxadd'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='erms'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='fbsdp-no'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='fsrm'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='fsrs'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='gfni'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='ibrs-all'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='invpcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='mcdt-no'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='pbrsb-no'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='pcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='pku'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='psdp-no'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='sbdr-ssdp-no'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='serialize'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='vaes'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='vpclmulqdq'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='xsaves'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <blockers model='Skylake-Client'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='erms'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='hle'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='invpcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='pcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='rtm'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <blockers model='Skylake-Client-IBRS'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='erms'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='hle'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='invpcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='pcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='rtm'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='erms'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='invpcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='pcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <blockers model='Skylake-Client-v1'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='erms'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='hle'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='invpcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='pcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='rtm'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <blockers model='Skylake-Client-v2'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='erms'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='hle'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='invpcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='pcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='rtm'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <blockers model='Skylake-Client-v3'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='erms'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='invpcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='pcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <blockers model='Skylake-Client-v4'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='erms'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='invpcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='pcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='xsaves'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <blockers model='Skylake-Server'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512bw'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512cd'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512dq'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512f'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512vl'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='erms'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='hle'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='invpcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='pcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='pku'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='rtm'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <blockers model='Skylake-Server-IBRS'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512bw'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512cd'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512dq'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512f'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512vl'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='erms'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='hle'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='invpcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='pcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='pku'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='rtm'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512bw'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512cd'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512dq'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512f'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512vl'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='erms'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='invpcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='pcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='pku'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <blockers model='Skylake-Server-v1'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512bw'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512cd'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512dq'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512f'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512vl'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='erms'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='hle'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='invpcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='pcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='pku'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='rtm'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <blockers model='Skylake-Server-v2'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512bw'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512cd'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512dq'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512f'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512vl'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='erms'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='hle'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='invpcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='pcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='pku'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='rtm'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <blockers model='Skylake-Server-v3'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512bw'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512cd'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512dq'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512f'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512vl'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='erms'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='invpcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='pcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='pku'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <blockers model='Skylake-Server-v4'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512bw'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512cd'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512dq'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512f'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512vl'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='erms'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='invpcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='pcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='pku'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <blockers model='Skylake-Server-v5'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512bw'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512cd'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512dq'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512f'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512vl'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='erms'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='invpcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='pcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='pku'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='xsaves'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <blockers model='Snowridge'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='cldemote'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='core-capability'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='erms'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='gfni'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='movdir64b'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='movdiri'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='mpx'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='split-lock-detect'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <blockers model='Snowridge-v1'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='cldemote'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='core-capability'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='erms'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='gfni'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='movdir64b'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='movdiri'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='mpx'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='split-lock-detect'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <blockers model='Snowridge-v2'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='cldemote'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='core-capability'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='erms'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='gfni'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='movdir64b'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='movdiri'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='split-lock-detect'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <blockers model='Snowridge-v3'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='cldemote'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='core-capability'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='erms'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='gfni'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='movdir64b'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='movdiri'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='split-lock-detect'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='xsaves'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <blockers model='Snowridge-v4'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='cldemote'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='erms'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='gfni'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='movdir64b'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='movdiri'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='xsaves'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <blockers model='athlon'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='3dnow'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='3dnowext'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <blockers model='athlon-v1'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='3dnow'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='3dnowext'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <blockers model='core2duo'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='ss'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <blockers model='core2duo-v1'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='ss'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <blockers model='coreduo'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='ss'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <blockers model='coreduo-v1'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='ss'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <blockers model='n270'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='ss'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <blockers model='n270-v1'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='ss'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <blockers model='phenom'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='3dnow'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='3dnowext'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <blockers model='phenom-v1'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='3dnow'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='3dnowext'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:    </mode>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:  </cpu>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:  <memoryBacking supported='yes'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:    <enum name='sourceType'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <value>file</value>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <value>anonymous</value>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <value>memfd</value>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:    </enum>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:  </memoryBacking>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:  <devices>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:    <disk supported='yes'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <enum name='diskDevice'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <value>disk</value>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <value>cdrom</value>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <value>floppy</value>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <value>lun</value>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </enum>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <enum name='bus'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <value>fdc</value>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <value>scsi</value>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <value>virtio</value>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <value>usb</value>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <value>sata</value>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </enum>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <enum name='model'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <value>virtio</value>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <value>virtio-transitional</value>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <value>virtio-non-transitional</value>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </enum>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:    </disk>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:    <graphics supported='yes'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <enum name='type'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <value>vnc</value>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <value>egl-headless</value>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <value>dbus</value>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </enum>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:    </graphics>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:    <video supported='yes'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <enum name='modelType'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <value>vga</value>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <value>cirrus</value>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <value>virtio</value>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <value>none</value>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <value>bochs</value>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <value>ramfb</value>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </enum>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:    </video>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:    <hostdev supported='yes'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <enum name='mode'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <value>subsystem</value>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </enum>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <enum name='startupPolicy'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <value>default</value>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <value>mandatory</value>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <value>requisite</value>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <value>optional</value>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </enum>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <enum name='subsysType'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <value>usb</value>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <value>pci</value>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <value>scsi</value>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </enum>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <enum name='capsType'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <enum name='pciBackend'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:    </hostdev>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:    <rng supported='yes'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <enum name='model'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <value>virtio</value>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <value>virtio-transitional</value>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <value>virtio-non-transitional</value>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </enum>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <enum name='backendModel'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <value>random</value>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <value>egd</value>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <value>builtin</value>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </enum>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:    </rng>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:    <filesystem supported='yes'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <enum name='driverType'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <value>path</value>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <value>handle</value>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <value>virtiofs</value>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </enum>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:    </filesystem>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:    <tpm supported='yes'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <enum name='model'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <value>tpm-tis</value>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <value>tpm-crb</value>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </enum>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <enum name='backendModel'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <value>emulator</value>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <value>external</value>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </enum>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <enum name='backendVersion'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <value>2.0</value>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </enum>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:    </tpm>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:    <redirdev supported='yes'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <enum name='bus'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <value>usb</value>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </enum>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:    </redirdev>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:    <channel supported='yes'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <enum name='type'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <value>pty</value>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <value>unix</value>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </enum>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:    </channel>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:    <crypto supported='yes'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <enum name='model'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <enum name='type'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <value>qemu</value>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </enum>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <enum name='backendModel'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <value>builtin</value>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </enum>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:    </crypto>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:    <interface supported='yes'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <enum name='backendType'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <value>default</value>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <value>passt</value>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </enum>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:    </interface>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:    <panic supported='yes'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <enum name='model'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <value>isa</value>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <value>hyperv</value>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </enum>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:    </panic>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:    <console supported='yes'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <enum name='type'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <value>null</value>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <value>vc</value>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <value>pty</value>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <value>dev</value>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <value>file</value>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <value>pipe</value>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <value>stdio</value>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <value>udp</value>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <value>tcp</value>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <value>unix</value>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <value>qemu-vdagent</value>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <value>dbus</value>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </enum>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:    </console>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:  </devices>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:  <features>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:    <gic supported='no'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:    <vmcoreinfo supported='yes'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:    <genid supported='yes'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:    <backingStoreInput supported='yes'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:    <backup supported='yes'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:    <async-teardown supported='yes'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:    <ps2 supported='yes'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:    <sev supported='no'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:    <sgx supported='no'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:    <hyperv supported='yes'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <enum name='features'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <value>relaxed</value>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <value>vapic</value>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <value>spinlocks</value>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <value>vpindex</value>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <value>runtime</value>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <value>synic</value>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <value>stimer</value>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <value>reset</value>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <value>vendor_id</value>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <value>frequencies</value>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <value>reenlightenment</value>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <value>tlbflush</value>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <value>ipi</value>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <value>avic</value>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <value>emsr_bitmap</value>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <value>xmm_input</value>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </enum>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <defaults>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <spinlocks>4095</spinlocks>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <stimer_direct>on</stimer_direct>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <tlbflush_direct>on</tlbflush_direct>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <tlbflush_extended>on</tlbflush_extended>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <vendor_id>Linux KVM Hv</vendor_id>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </defaults>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:    </hyperv>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:    <launchSecurity supported='yes'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <enum name='sectype'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <value>tdx</value>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </enum>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:    </launchSecurity>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:  </features>
Dec  6 01:52:36 np0005548731 nova_compute[232433]: </domainCapabilities>
Dec  6 01:52:36 np0005548731 nova_compute[232433]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Dec  6 01:52:36 np0005548731 nova_compute[232433]: 2025-12-06 06:52:36.036 232437 DEBUG nova.virt.libvirt.host [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=pc:
Dec  6 01:52:36 np0005548731 nova_compute[232433]: <domainCapabilities>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:  <path>/usr/libexec/qemu-kvm</path>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:  <domain>kvm</domain>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:  <machine>pc-i440fx-rhel7.6.0</machine>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:  <arch>x86_64</arch>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:  <vcpu max='240'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:  <iothreads supported='yes'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:  <os supported='yes'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:    <enum name='firmware'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:    <loader supported='yes'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <enum name='type'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <value>rom</value>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <value>pflash</value>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </enum>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <enum name='readonly'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <value>yes</value>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <value>no</value>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </enum>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <enum name='secure'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <value>no</value>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </enum>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:    </loader>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:  </os>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:  <cpu>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:    <mode name='host-passthrough' supported='yes'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <enum name='hostPassthroughMigratable'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <value>on</value>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <value>off</value>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </enum>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:    </mode>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:    <mode name='maximum' supported='yes'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <enum name='maximumMigratable'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <value>on</value>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <value>off</value>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </enum>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:    </mode>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:    <mode name='host-model' supported='yes'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model fallback='forbid'>EPYC-Rome</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <vendor>AMD</vendor>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <maxphysaddr mode='passthrough' limit='40'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <feature policy='require' name='x2apic'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <feature policy='require' name='tsc-deadline'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <feature policy='require' name='hypervisor'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <feature policy='require' name='tsc_adjust'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <feature policy='require' name='spec-ctrl'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <feature policy='require' name='stibp'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <feature policy='require' name='ssbd'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <feature policy='require' name='cmp_legacy'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <feature policy='require' name='overflow-recov'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <feature policy='require' name='succor'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <feature policy='require' name='ibrs'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <feature policy='require' name='amd-ssbd'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <feature policy='require' name='virt-ssbd'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <feature policy='require' name='lbrv'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <feature policy='require' name='tsc-scale'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <feature policy='require' name='vmcb-clean'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <feature policy='require' name='flushbyasid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <feature policy='require' name='pause-filter'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <feature policy='require' name='pfthreshold'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <feature policy='require' name='svme-addr-chk'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <feature policy='require' name='lfence-always-serializing'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <feature policy='disable' name='xsaves'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:    </mode>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:    <mode name='custom' supported='yes'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <blockers model='Broadwell'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='erms'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='hle'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='invpcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='pcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='rtm'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <blockers model='Broadwell-IBRS'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='erms'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='hle'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='invpcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='pcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='rtm'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <blockers model='Broadwell-noTSX'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='erms'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='invpcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='pcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <blockers model='Broadwell-noTSX-IBRS'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='erms'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='invpcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='pcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <blockers model='Broadwell-v1'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='erms'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='hle'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='invpcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='pcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='rtm'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <blockers model='Broadwell-v2'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='erms'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='invpcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='pcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <blockers model='Broadwell-v3'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='erms'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='hle'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='invpcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='pcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='rtm'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <blockers model='Broadwell-v4'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='erms'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='invpcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='pcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <blockers model='Cascadelake-Server'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512bw'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512cd'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512dq'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512f'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512vl'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512vnni'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='erms'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='hle'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='invpcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='pcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='pku'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='rtm'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <blockers model='Cascadelake-Server-noTSX'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512bw'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512cd'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512dq'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512f'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512vl'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512vnni'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='erms'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='ibrs-all'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='invpcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='pcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='pku'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <blockers model='Cascadelake-Server-v1'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512bw'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512cd'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512dq'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512f'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512vl'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512vnni'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='erms'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='hle'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='invpcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='pcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='pku'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='rtm'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <blockers model='Cascadelake-Server-v2'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512bw'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512cd'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512dq'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512f'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512vl'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512vnni'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='erms'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='hle'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='ibrs-all'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='invpcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='pcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='pku'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='rtm'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <blockers model='Cascadelake-Server-v3'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512bw'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512cd'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512dq'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512f'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512vl'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512vnni'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='erms'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='ibrs-all'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='invpcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='pcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='pku'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <blockers model='Cascadelake-Server-v4'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512bw'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512cd'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512dq'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512f'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512vl'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512vnni'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='erms'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='ibrs-all'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='invpcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='pcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='pku'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <blockers model='Cascadelake-Server-v5'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512bw'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512cd'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512dq'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512f'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512vl'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512vnni'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='erms'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='ibrs-all'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='invpcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='pcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='pku'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='xsaves'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <blockers model='Cooperlake'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512-bf16'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512bw'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512cd'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512dq'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512f'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512vl'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512vnni'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='erms'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='hle'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='ibrs-all'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='invpcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='pcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='pku'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='rtm'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='taa-no'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <blockers model='Cooperlake-v1'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512-bf16'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512bw'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512cd'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512dq'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512f'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512vl'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512vnni'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='erms'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='hle'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='ibrs-all'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='invpcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='pcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='pku'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='rtm'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='taa-no'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <blockers model='Cooperlake-v2'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512-bf16'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512bw'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512cd'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512dq'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512f'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512vl'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512vnni'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='erms'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='hle'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='ibrs-all'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='invpcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='pcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='pku'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='rtm'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='taa-no'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='xsaves'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <blockers model='Denverton'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='erms'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='mpx'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <blockers model='Denverton-v1'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='erms'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='mpx'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <blockers model='Denverton-v2'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='erms'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <blockers model='Denverton-v3'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='erms'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='xsaves'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <blockers model='Dhyana-v2'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='xsaves'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <blockers model='EPYC-Genoa'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='amd-psfd'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='auto-ibrs'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512-bf16'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512-vpopcntdq'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512bitalg'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512bw'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512cd'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512dq'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512f'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512ifma'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512vbmi'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512vbmi2'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512vl'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512vnni'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='erms'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='fsrm'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='gfni'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='invpcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='la57'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='no-nested-data-bp'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='null-sel-clr-base'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='pcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='pku'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='stibp-always-on'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='vaes'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='vpclmulqdq'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='xsaves'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <blockers model='EPYC-Genoa-v1'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='amd-psfd'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='auto-ibrs'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512-bf16'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512-vpopcntdq'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512bitalg'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512bw'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512cd'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512dq'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512f'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512ifma'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512vbmi'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512vbmi2'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512vl'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512vnni'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='erms'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='fsrm'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='gfni'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='invpcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='la57'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='no-nested-data-bp'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='null-sel-clr-base'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='pcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='pku'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='stibp-always-on'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='vaes'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='vpclmulqdq'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='xsaves'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <blockers model='EPYC-Milan'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='erms'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='fsrm'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='invpcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='pcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='pku'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='xsaves'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <blockers model='EPYC-Milan-v1'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='erms'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='fsrm'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='invpcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='pcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='pku'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='xsaves'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <blockers model='EPYC-Milan-v2'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='amd-psfd'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='erms'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='fsrm'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='invpcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='no-nested-data-bp'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='null-sel-clr-base'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='pcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='pku'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='stibp-always-on'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='vaes'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='vpclmulqdq'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='xsaves'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <blockers model='EPYC-Rome'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='xsaves'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <blockers model='EPYC-Rome-v1'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='xsaves'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <blockers model='EPYC-Rome-v2'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='xsaves'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <blockers model='EPYC-Rome-v3'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='xsaves'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <blockers model='EPYC-v3'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='xsaves'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <blockers model='EPYC-v4'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='xsaves'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <blockers model='GraniteRapids'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='amx-bf16'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='amx-fp16'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='amx-int8'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='amx-tile'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx-vnni'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512-bf16'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512-fp16'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512-vpopcntdq'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512bitalg'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512bw'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512cd'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512dq'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512f'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512ifma'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512vbmi'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512vbmi2'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512vl'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512vnni'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='bus-lock-detect'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='erms'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='fbsdp-no'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='fsrc'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='fsrm'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='fsrs'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='fzrm'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='gfni'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='hle'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='ibrs-all'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='invpcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='la57'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='mcdt-no'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='pbrsb-no'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='pcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='pku'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='prefetchiti'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='psdp-no'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='rtm'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='sbdr-ssdp-no'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='serialize'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='taa-no'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='tsx-ldtrk'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='vaes'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='vpclmulqdq'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='xfd'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='xsaves'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <blockers model='GraniteRapids-v1'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='amx-bf16'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='amx-fp16'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='amx-int8'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='amx-tile'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx-vnni'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512-bf16'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512-fp16'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512-vpopcntdq'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512bitalg'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512bw'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512cd'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512dq'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512f'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512ifma'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512vbmi'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512vbmi2'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512vl'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512vnni'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='bus-lock-detect'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='erms'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='fbsdp-no'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='fsrc'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='fsrm'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='fsrs'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='fzrm'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='gfni'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='hle'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='ibrs-all'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='invpcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='la57'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='mcdt-no'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='pbrsb-no'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='pcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='pku'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='prefetchiti'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='psdp-no'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='rtm'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='sbdr-ssdp-no'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='serialize'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='taa-no'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='tsx-ldtrk'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='vaes'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='vpclmulqdq'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='xfd'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='xsaves'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <blockers model='GraniteRapids-v2'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='amx-bf16'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='amx-fp16'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='amx-int8'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='amx-tile'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx-vnni'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx10'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx10-128'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx10-256'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx10-512'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512-bf16'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512-fp16'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512-vpopcntdq'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512bitalg'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512bw'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512cd'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512dq'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512f'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512ifma'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512vbmi'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512vbmi2'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512vl'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512vnni'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='bus-lock-detect'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='cldemote'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='erms'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='fbsdp-no'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='fsrc'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='fsrm'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='fsrs'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='fzrm'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='gfni'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='hle'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='ibrs-all'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='invpcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='la57'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='mcdt-no'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='movdir64b'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='movdiri'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='pbrsb-no'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='pcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='pku'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='prefetchiti'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='psdp-no'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='rtm'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='sbdr-ssdp-no'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='serialize'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='ss'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='taa-no'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='tsx-ldtrk'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='vaes'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='vpclmulqdq'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='xfd'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='xsaves'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <blockers model='Haswell'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='erms'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='hle'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='invpcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='pcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='rtm'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <blockers model='Haswell-IBRS'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='erms'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='hle'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='invpcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='pcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='rtm'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <blockers model='Haswell-noTSX'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='erms'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='invpcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='pcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <blockers model='Haswell-noTSX-IBRS'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='erms'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='invpcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='pcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <blockers model='Haswell-v1'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='erms'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='hle'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='invpcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='pcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='rtm'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <blockers model='Haswell-v2'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='erms'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='invpcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='pcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <blockers model='Haswell-v3'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='erms'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='hle'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='invpcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='pcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='rtm'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <blockers model='Haswell-v4'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='erms'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='invpcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='pcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <blockers model='Icelake-Server'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512-vpopcntdq'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512bitalg'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512bw'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512cd'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512dq'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512f'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512vbmi'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512vbmi2'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512vl'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512vnni'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='erms'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='gfni'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='hle'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='invpcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='la57'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='pcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='pku'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='rtm'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='vaes'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='vpclmulqdq'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <blockers model='Icelake-Server-noTSX'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512-vpopcntdq'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512bitalg'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512bw'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512cd'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512dq'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512f'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512vbmi'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512vbmi2'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512vl'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512vnni'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='erms'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='gfni'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='invpcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='la57'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='pcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='pku'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='vaes'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='vpclmulqdq'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <blockers model='Icelake-Server-v1'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512-vpopcntdq'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512bitalg'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512bw'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512cd'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512dq'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512f'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512vbmi'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512vbmi2'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512vl'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512vnni'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='erms'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='gfni'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='hle'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='invpcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='la57'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='pcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='pku'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='rtm'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='vaes'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='vpclmulqdq'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <blockers model='Icelake-Server-v2'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512-vpopcntdq'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512bitalg'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512bw'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512cd'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512dq'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512f'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512vbmi'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512vbmi2'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512vl'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512vnni'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='erms'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='gfni'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='invpcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='la57'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='pcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='pku'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='vaes'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='vpclmulqdq'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <blockers model='Icelake-Server-v3'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512-vpopcntdq'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512bitalg'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512bw'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512cd'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512dq'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512f'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512vbmi'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512vbmi2'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512vl'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512vnni'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='erms'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='gfni'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='ibrs-all'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='invpcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='la57'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='pcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='pku'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='taa-no'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='vaes'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='vpclmulqdq'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <blockers model='Icelake-Server-v4'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512-vpopcntdq'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512bitalg'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512bw'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512cd'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512dq'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512f'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512ifma'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512vbmi'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512vbmi2'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512vl'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512vnni'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='erms'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='fsrm'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='gfni'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='ibrs-all'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='invpcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='la57'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='pcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='pku'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='taa-no'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='vaes'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='vpclmulqdq'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <blockers model='Icelake-Server-v5'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512-vpopcntdq'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512bitalg'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512bw'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512cd'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512dq'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512f'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512ifma'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512vbmi'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512vbmi2'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512vl'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512vnni'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='erms'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='fsrm'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='gfni'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='ibrs-all'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='invpcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='la57'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='pcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='pku'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='taa-no'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='vaes'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='vpclmulqdq'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='xsaves'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <blockers model='Icelake-Server-v6'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512-vpopcntdq'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512bitalg'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512bw'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512cd'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512dq'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512f'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512ifma'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512vbmi'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512vbmi2'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512vl'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512vnni'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='erms'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='fsrm'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='gfni'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='ibrs-all'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='invpcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='la57'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='pcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='pku'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='taa-no'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='vaes'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='vpclmulqdq'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='xsaves'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <blockers model='Icelake-Server-v7'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512-vpopcntdq'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512bitalg'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512bw'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512cd'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512dq'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512f'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512ifma'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512vbmi'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512vbmi2'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512vl'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512vnni'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='erms'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='fsrm'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='gfni'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='hle'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='ibrs-all'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='invpcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='la57'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='pcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='pku'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='rtm'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='taa-no'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='vaes'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='vpclmulqdq'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='xsaves'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <blockers model='IvyBridge'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='erms'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <blockers model='IvyBridge-IBRS'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='erms'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <blockers model='IvyBridge-v1'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='erms'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <blockers model='IvyBridge-v2'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='erms'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <blockers model='KnightsMill'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512-4fmaps'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512-4vnniw'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512-vpopcntdq'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512cd'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512er'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512f'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512pf'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='erms'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='ss'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <blockers model='KnightsMill-v1'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512-4fmaps'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512-4vnniw'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512-vpopcntdq'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512cd'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512er'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512f'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512pf'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='erms'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='ss'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <blockers model='Opteron_G4'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='fma4'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='xop'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <blockers model='Opteron_G4-v1'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='fma4'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='xop'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <blockers model='Opteron_G5'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='fma4'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='tbm'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='xop'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <blockers model='Opteron_G5-v1'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='fma4'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='tbm'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='xop'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <blockers model='SapphireRapids'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='amx-bf16'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='amx-int8'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='amx-tile'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx-vnni'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512-bf16'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512-fp16'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512-vpopcntdq'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512bitalg'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512bw'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512cd'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512dq'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512f'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512ifma'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512vbmi'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512vbmi2'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512vl'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512vnni'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='bus-lock-detect'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='erms'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='fsrc'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='fsrm'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='fsrs'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='fzrm'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='gfni'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='hle'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='ibrs-all'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='invpcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='la57'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='pcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='pku'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='rtm'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='serialize'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='taa-no'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='tsx-ldtrk'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='vaes'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='vpclmulqdq'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='xfd'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='xsaves'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <blockers model='SapphireRapids-v1'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='amx-bf16'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='amx-int8'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='amx-tile'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx-vnni'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512-bf16'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512-fp16'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512-vpopcntdq'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512bitalg'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512bw'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512cd'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512dq'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512f'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512ifma'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512vbmi'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512vbmi2'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512vl'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512vnni'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='bus-lock-detect'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='erms'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='fsrc'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='fsrm'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='fsrs'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='fzrm'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='gfni'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='hle'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='ibrs-all'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='invpcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='la57'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='pcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='pku'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='rtm'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='serialize'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='taa-no'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='tsx-ldtrk'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='vaes'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='vpclmulqdq'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='xfd'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='xsaves'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <blockers model='SapphireRapids-v2'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='amx-bf16'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='amx-int8'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='amx-tile'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx-vnni'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512-bf16'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512-fp16'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512-vpopcntdq'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512bitalg'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512bw'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512cd'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512dq'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512f'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512ifma'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512vbmi'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512vbmi2'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512vl'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512vnni'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='bus-lock-detect'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='erms'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='fbsdp-no'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='fsrc'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='fsrm'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='fsrs'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='fzrm'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='gfni'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='hle'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='ibrs-all'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='invpcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='la57'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='pcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='pku'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='psdp-no'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='rtm'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='sbdr-ssdp-no'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='serialize'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='taa-no'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='tsx-ldtrk'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='vaes'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='vpclmulqdq'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='xfd'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='xsaves'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <blockers model='SapphireRapids-v3'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='amx-bf16'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='amx-int8'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='amx-tile'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx-vnni'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512-bf16'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512-fp16'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512-vpopcntdq'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512bitalg'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512bw'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512cd'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512dq'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512f'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512ifma'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512vbmi'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512vbmi2'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512vl'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512vnni'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='bus-lock-detect'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='cldemote'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='erms'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='fbsdp-no'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='fsrc'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='fsrm'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='fsrs'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='fzrm'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='gfni'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='hle'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='ibrs-all'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='invpcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='la57'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='movdir64b'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='movdiri'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='pcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='pku'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='psdp-no'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='rtm'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='sbdr-ssdp-no'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='serialize'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='ss'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='taa-no'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='tsx-ldtrk'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='vaes'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='vpclmulqdq'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='xfd'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='xsaves'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <blockers model='SierraForest'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx-ifma'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx-ne-convert'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx-vnni'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx-vnni-int8'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='bus-lock-detect'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='cmpccxadd'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='erms'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='fbsdp-no'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='fsrm'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='fsrs'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='gfni'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='ibrs-all'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='invpcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='mcdt-no'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='pbrsb-no'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='pcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='pku'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='psdp-no'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='sbdr-ssdp-no'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='serialize'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='vaes'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='vpclmulqdq'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='xsaves'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <blockers model='SierraForest-v1'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx-ifma'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx-ne-convert'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx-vnni'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx-vnni-int8'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='bus-lock-detect'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='cmpccxadd'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='erms'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='fbsdp-no'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='fsrm'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='fsrs'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='gfni'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='ibrs-all'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='invpcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='mcdt-no'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='pbrsb-no'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='pcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='pku'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='psdp-no'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='sbdr-ssdp-no'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='serialize'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='vaes'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='vpclmulqdq'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='xsaves'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <blockers model='Skylake-Client'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='erms'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='hle'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='invpcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='pcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='rtm'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <blockers model='Skylake-Client-IBRS'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='erms'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='hle'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='invpcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='pcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='rtm'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='erms'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='invpcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='pcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <blockers model='Skylake-Client-v1'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='erms'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='hle'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='invpcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='pcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='rtm'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <blockers model='Skylake-Client-v2'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='erms'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='hle'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='invpcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='pcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='rtm'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <blockers model='Skylake-Client-v3'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='erms'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='invpcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='pcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <blockers model='Skylake-Client-v4'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='erms'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='invpcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='pcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='xsaves'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <blockers model='Skylake-Server'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512bw'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512cd'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512dq'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512f'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512vl'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='erms'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='hle'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='invpcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='pcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='pku'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='rtm'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <blockers model='Skylake-Server-IBRS'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512bw'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512cd'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512dq'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512f'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512vl'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='erms'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='hle'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='invpcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='pcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='pku'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='rtm'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512bw'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512cd'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512dq'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512f'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512vl'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='erms'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='invpcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='pcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='pku'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <blockers model='Skylake-Server-v1'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512bw'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512cd'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512dq'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512f'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512vl'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='erms'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='hle'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='invpcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='pcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='pku'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='rtm'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <blockers model='Skylake-Server-v2'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512bw'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512cd'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512dq'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512f'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512vl'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='erms'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='hle'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='invpcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='pcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='pku'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='rtm'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <blockers model='Skylake-Server-v3'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512bw'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512cd'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512dq'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512f'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512vl'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='erms'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='invpcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='pcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='pku'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <blockers model='Skylake-Server-v4'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512bw'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512cd'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512dq'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512f'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512vl'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='erms'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='invpcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='pcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='pku'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <blockers model='Skylake-Server-v5'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512bw'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512cd'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512dq'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512f'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='avx512vl'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='erms'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='invpcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='pcid'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='pku'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='xsaves'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <blockers model='Snowridge'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='cldemote'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='core-capability'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='erms'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='gfni'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='movdir64b'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='movdiri'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='mpx'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='split-lock-detect'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <blockers model='Snowridge-v1'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='cldemote'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='core-capability'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='erms'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='gfni'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='movdir64b'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='movdiri'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='mpx'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='split-lock-detect'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <blockers model='Snowridge-v2'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='cldemote'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='core-capability'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='erms'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='gfni'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='movdir64b'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='movdiri'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='split-lock-detect'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <blockers model='Snowridge-v3'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='cldemote'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='core-capability'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='erms'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='gfni'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='movdir64b'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='movdiri'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='split-lock-detect'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='xsaves'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <blockers model='Snowridge-v4'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='cldemote'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='erms'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='gfni'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='movdir64b'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='movdiri'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='xsaves'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <blockers model='athlon'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='3dnow'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='3dnowext'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <blockers model='athlon-v1'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='3dnow'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='3dnowext'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <blockers model='core2duo'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='ss'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <blockers model='core2duo-v1'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='ss'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <blockers model='coreduo'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='ss'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <blockers model='coreduo-v1'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='ss'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <blockers model='n270'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='ss'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <blockers model='n270-v1'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='ss'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <blockers model='phenom'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='3dnow'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='3dnowext'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <blockers model='phenom-v1'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='3dnow'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <feature name='3dnowext'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </blockers>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:    </mode>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:  </cpu>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:  <memoryBacking supported='yes'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:    <enum name='sourceType'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <value>file</value>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <value>anonymous</value>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <value>memfd</value>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:    </enum>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:  </memoryBacking>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:  <devices>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:    <disk supported='yes'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <enum name='diskDevice'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <value>disk</value>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <value>cdrom</value>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <value>floppy</value>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <value>lun</value>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </enum>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <enum name='bus'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <value>ide</value>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <value>fdc</value>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <value>scsi</value>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <value>virtio</value>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <value>usb</value>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <value>sata</value>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </enum>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <enum name='model'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <value>virtio</value>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <value>virtio-transitional</value>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <value>virtio-non-transitional</value>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </enum>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:    </disk>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:    <graphics supported='yes'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <enum name='type'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <value>vnc</value>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <value>egl-headless</value>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <value>dbus</value>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </enum>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:    </graphics>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:    <video supported='yes'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <enum name='modelType'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <value>vga</value>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <value>cirrus</value>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <value>virtio</value>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <value>none</value>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <value>bochs</value>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <value>ramfb</value>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </enum>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:    </video>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:    <hostdev supported='yes'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <enum name='mode'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <value>subsystem</value>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </enum>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <enum name='startupPolicy'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <value>default</value>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <value>mandatory</value>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <value>requisite</value>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <value>optional</value>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </enum>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <enum name='subsysType'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <value>usb</value>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <value>pci</value>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <value>scsi</value>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </enum>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <enum name='capsType'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <enum name='pciBackend'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:    </hostdev>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:    <rng supported='yes'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <enum name='model'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <value>virtio</value>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <value>virtio-transitional</value>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <value>virtio-non-transitional</value>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </enum>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <enum name='backendModel'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <value>random</value>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <value>egd</value>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <value>builtin</value>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </enum>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:    </rng>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:    <filesystem supported='yes'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <enum name='driverType'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <value>path</value>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <value>handle</value>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <value>virtiofs</value>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </enum>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:    </filesystem>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:    <tpm supported='yes'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <enum name='model'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <value>tpm-tis</value>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <value>tpm-crb</value>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </enum>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <enum name='backendModel'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <value>emulator</value>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <value>external</value>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </enum>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <enum name='backendVersion'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <value>2.0</value>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </enum>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:    </tpm>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:    <redirdev supported='yes'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <enum name='bus'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <value>usb</value>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </enum>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:    </redirdev>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:    <channel supported='yes'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <enum name='type'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <value>pty</value>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <value>unix</value>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </enum>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:    </channel>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:    <crypto supported='yes'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <enum name='model'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <enum name='type'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <value>qemu</value>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </enum>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <enum name='backendModel'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <value>builtin</value>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </enum>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:    </crypto>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:    <interface supported='yes'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <enum name='backendType'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <value>default</value>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <value>passt</value>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </enum>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:    </interface>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:    <panic supported='yes'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <enum name='model'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <value>isa</value>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <value>hyperv</value>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </enum>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:    </panic>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:    <console supported='yes'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <enum name='type'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <value>null</value>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <value>vc</value>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <value>pty</value>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <value>dev</value>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <value>file</value>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <value>pipe</value>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <value>stdio</value>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <value>udp</value>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <value>tcp</value>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <value>unix</value>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <value>qemu-vdagent</value>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <value>dbus</value>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </enum>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:    </console>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:  </devices>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:  <features>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:    <gic supported='no'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:    <vmcoreinfo supported='yes'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:    <genid supported='yes'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:    <backingStoreInput supported='yes'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:    <backup supported='yes'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:    <async-teardown supported='yes'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:    <ps2 supported='yes'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:    <sev supported='no'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:    <sgx supported='no'/>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:    <hyperv supported='yes'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <enum name='features'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <value>relaxed</value>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <value>vapic</value>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <value>spinlocks</value>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <value>vpindex</value>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <value>runtime</value>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <value>synic</value>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <value>stimer</value>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <value>reset</value>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <value>vendor_id</value>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <value>frequencies</value>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <value>reenlightenment</value>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <value>tlbflush</value>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <value>ipi</value>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <value>avic</value>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <value>emsr_bitmap</value>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <value>xmm_input</value>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </enum>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <defaults>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <spinlocks>4095</spinlocks>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <stimer_direct>on</stimer_direct>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <tlbflush_direct>on</tlbflush_direct>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <tlbflush_extended>on</tlbflush_extended>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <vendor_id>Linux KVM Hv</vendor_id>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </defaults>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:    </hyperv>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:    <launchSecurity supported='yes'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      <enum name='sectype'>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:        <value>tdx</value>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:      </enum>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:    </launchSecurity>
Dec  6 01:52:36 np0005548731 nova_compute[232433]:  </features>
Dec  6 01:52:36 np0005548731 nova_compute[232433]: </domainCapabilities>
Dec  6 01:52:36 np0005548731 nova_compute[232433]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037#033[00m
Dec  6 01:52:36 np0005548731 nova_compute[232433]: 2025-12-06 06:52:36.103 232437 DEBUG nova.virt.libvirt.host [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782#033[00m
Dec  6 01:52:36 np0005548731 nova_compute[232433]: 2025-12-06 06:52:36.103 232437 INFO nova.virt.libvirt.host [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Secure Boot support detected#033[00m
Dec  6 01:52:36 np0005548731 nova_compute[232433]: 2025-12-06 06:52:36.105 232437 INFO nova.virt.libvirt.driver [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.#033[00m
Dec  6 01:52:36 np0005548731 nova_compute[232433]: 2025-12-06 06:52:36.113 232437 DEBUG nova.virt.libvirt.driver [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] cpu compare xml: <cpu match="exact">
Dec  6 01:52:36 np0005548731 nova_compute[232433]:  <model>Nehalem</model>
Dec  6 01:52:36 np0005548731 nova_compute[232433]: </cpu>
Dec  6 01:52:36 np0005548731 nova_compute[232433]: _compare_cpu /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10019#033[00m
Dec  6 01:52:36 np0005548731 nova_compute[232433]: 2025-12-06 06:52:36.116 232437 DEBUG nova.virt.libvirt.driver [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Enabling emulated TPM support _check_vtpm_support /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:1097#033[00m
Dec  6 01:52:36 np0005548731 nova_compute[232433]: 2025-12-06 06:52:36.153 232437 INFO nova.virt.node [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Determined node identity 6d00757a-082f-486d-ae84-869a2ba2e6e7 from /var/lib/nova/compute_id#033[00m
Dec  6 01:52:36 np0005548731 nova_compute[232433]: 2025-12-06 06:52:36.181 232437 WARNING nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Compute nodes ['6d00757a-082f-486d-ae84-869a2ba2e6e7'] for host compute-2.ctlplane.example.com were not found in the database. If this is the first time this service is starting on this host, then you can ignore this warning.#033[00m
Dec  6 01:52:36 np0005548731 nova_compute[232433]: 2025-12-06 06:52:36.213 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Looking for unclaimed instances stuck in BUILDING status for nodes managed by this host#033[00m
Dec  6 01:52:36 np0005548731 nova_compute[232433]: 2025-12-06 06:52:36.250 232437 WARNING nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] No compute node record found for host compute-2.ctlplane.example.com. If this is the first time this service is starting on this host, then you can ignore this warning.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-2.ctlplane.example.com could not be found.#033[00m
Dec  6 01:52:36 np0005548731 nova_compute[232433]: 2025-12-06 06:52:36.250 232437 DEBUG oslo_concurrency.lockutils [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 01:52:36 np0005548731 nova_compute[232433]: 2025-12-06 06:52:36.251 232437 DEBUG oslo_concurrency.lockutils [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 01:52:36 np0005548731 nova_compute[232433]: 2025-12-06 06:52:36.251 232437 DEBUG oslo_concurrency.lockutils [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 01:52:36 np0005548731 nova_compute[232433]: 2025-12-06 06:52:36.251 232437 DEBUG nova.compute.resource_tracker [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  6 01:52:36 np0005548731 nova_compute[232433]: 2025-12-06 06:52:36.251 232437 DEBUG oslo_concurrency.processutils [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 01:52:36 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:52:36 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 01:52:36 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:52:36.648 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 01:52:36 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 01:52:36 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3277231340' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 01:52:36 np0005548731 nova_compute[232433]: 2025-12-06 06:52:36.700 232437 DEBUG oslo_concurrency.processutils [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.449s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 01:52:36 np0005548731 systemd[1]: Starting libvirt nodedev daemon...
Dec  6 01:52:36 np0005548731 systemd[1]: Started libvirt nodedev daemon.
Dec  6 01:52:36 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:52:36 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:52:36 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:52:36.972 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:52:37 np0005548731 nova_compute[232433]: 2025-12-06 06:52:37.019 232437 WARNING nova.virt.libvirt.driver [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  6 01:52:37 np0005548731 nova_compute[232433]: 2025-12-06 06:52:37.021 232437 DEBUG nova.compute.resource_tracker [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=5317MB free_disk=20.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  6 01:52:37 np0005548731 nova_compute[232433]: 2025-12-06 06:52:37.021 232437 DEBUG oslo_concurrency.lockutils [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 01:52:37 np0005548731 nova_compute[232433]: 2025-12-06 06:52:37.021 232437 DEBUG oslo_concurrency.lockutils [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 01:52:37 np0005548731 nova_compute[232433]: 2025-12-06 06:52:37.045 232437 WARNING nova.compute.resource_tracker [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] No compute node record for compute-2.ctlplane.example.com:6d00757a-082f-486d-ae84-869a2ba2e6e7: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host 6d00757a-082f-486d-ae84-869a2ba2e6e7 could not be found.#033[00m
Dec  6 01:52:37 np0005548731 nova_compute[232433]: 2025-12-06 06:52:37.066 232437 INFO nova.compute.resource_tracker [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Compute node record created for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com with uuid: 6d00757a-082f-486d-ae84-869a2ba2e6e7#033[00m
Dec  6 01:52:37 np0005548731 nova_compute[232433]: 2025-12-06 06:52:37.123 232437 DEBUG nova.compute.resource_tracker [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  6 01:52:37 np0005548731 nova_compute[232433]: 2025-12-06 06:52:37.123 232437 DEBUG nova.compute.resource_tracker [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  6 01:52:37 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 01:52:37 np0005548731 nova_compute[232433]: 2025-12-06 06:52:37.409 232437 INFO nova.scheduler.client.report [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [req-efe1a035-423f-4bf8-a271-ec881edf224e] Created resource provider record via placement API for resource provider with UUID 6d00757a-082f-486d-ae84-869a2ba2e6e7 and name compute-2.ctlplane.example.com.#033[00m
Dec  6 01:52:37 np0005548731 nova_compute[232433]: 2025-12-06 06:52:37.530 232437 DEBUG oslo_concurrency.processutils [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 01:52:37 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 01:52:37 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/822919840' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 01:52:37 np0005548731 nova_compute[232433]: 2025-12-06 06:52:37.948 232437 DEBUG oslo_concurrency.processutils [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.417s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 01:52:37 np0005548731 nova_compute[232433]: 2025-12-06 06:52:37.953 232437 DEBUG nova.virt.libvirt.host [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] /sys/module/kvm_amd/parameters/sev contains [N
Dec  6 01:52:37 np0005548731 nova_compute[232433]: ] _kernel_supports_amd_sev /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1803#033[00m
Dec  6 01:52:37 np0005548731 nova_compute[232433]: 2025-12-06 06:52:37.953 232437 INFO nova.virt.libvirt.host [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] kernel doesn't support AMD SEV#033[00m
Dec  6 01:52:37 np0005548731 nova_compute[232433]: 2025-12-06 06:52:37.954 232437 DEBUG nova.compute.provider_tree [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Updating inventory in ProviderTree for provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 with inventory: {'MEMORY_MB': {'total': 7680, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 20, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Dec  6 01:52:37 np0005548731 nova_compute[232433]: 2025-12-06 06:52:37.954 232437 DEBUG nova.virt.libvirt.driver [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec  6 01:52:37 np0005548731 nova_compute[232433]: 2025-12-06 06:52:37.956 232437 DEBUG nova.virt.libvirt.driver [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Libvirt baseline CPU <cpu>
Dec  6 01:52:37 np0005548731 nova_compute[232433]:  <arch>x86_64</arch>
Dec  6 01:52:37 np0005548731 nova_compute[232433]:  <model>Nehalem</model>
Dec  6 01:52:37 np0005548731 nova_compute[232433]:  <vendor>AMD</vendor>
Dec  6 01:52:37 np0005548731 nova_compute[232433]:  <topology sockets="8" cores="1" threads="1"/>
Dec  6 01:52:37 np0005548731 nova_compute[232433]: </cpu>
Dec  6 01:52:37 np0005548731 nova_compute[232433]: _get_guest_baseline_cpu_features /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12537#033[00m
Dec  6 01:52:38 np0005548731 nova_compute[232433]: 2025-12-06 06:52:38.025 232437 DEBUG nova.scheduler.client.report [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Updated inventory for provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 with generation 0 in Placement from set_inventory_for_provider using data: {'MEMORY_MB': {'total': 7680, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 20, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:957#033[00m
Dec  6 01:52:38 np0005548731 nova_compute[232433]: 2025-12-06 06:52:38.026 232437 DEBUG nova.compute.provider_tree [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Updating resource provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 generation from 0 to 1 during operation: update_inventory _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164#033[00m
Dec  6 01:52:38 np0005548731 nova_compute[232433]: 2025-12-06 06:52:38.026 232437 DEBUG nova.compute.provider_tree [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Updating inventory in ProviderTree for provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 with inventory: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Dec  6 01:52:38 np0005548731 nova_compute[232433]: 2025-12-06 06:52:38.142 232437 DEBUG nova.compute.provider_tree [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Updating resource provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 generation from 1 to 2 during operation: update_traits _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164#033[00m
Dec  6 01:52:38 np0005548731 nova_compute[232433]: 2025-12-06 06:52:38.169 232437 DEBUG nova.compute.resource_tracker [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  6 01:52:38 np0005548731 nova_compute[232433]: 2025-12-06 06:52:38.169 232437 DEBUG oslo_concurrency.lockutils [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.148s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 01:52:38 np0005548731 nova_compute[232433]: 2025-12-06 06:52:38.170 232437 DEBUG nova.service [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Creating RPC server for service compute start /usr/lib/python3.9/site-packages/nova/service.py:182#033[00m
Dec  6 01:52:38 np0005548731 nova_compute[232433]: 2025-12-06 06:52:38.244 232437 DEBUG nova.service [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Join ServiceGroup membership for this service compute start /usr/lib/python3.9/site-packages/nova/service.py:199#033[00m
Dec  6 01:52:38 np0005548731 nova_compute[232433]: 2025-12-06 06:52:38.245 232437 DEBUG nova.servicegroup.drivers.db [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] DB_Driver: join new ServiceGroup member compute-2.ctlplane.example.com to the compute group, service = <Service: host=compute-2.ctlplane.example.com, binary=nova-compute, manager_class_name=nova.compute.manager.ComputeManager> join /usr/lib/python3.9/site-packages/nova/servicegroup/drivers/db.py:44#033[00m
Dec  6 01:52:38 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:52:38 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:52:38 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:52:38.651 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:52:38 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:52:38 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:52:38 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:52:38.975 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:52:40 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:52:40 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 01:52:40 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:52:40.653 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 01:52:40 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:52:40 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:52:40 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:52:40.977 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:52:42 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 01:52:42 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:52:42 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:52:42 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:52:42.656 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:52:42 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:52:42 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:52:42 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:52:42.979 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:52:43 np0005548731 podman[232801]: 2025-12-06 06:52:43.898432565 +0000 UTC m=+0.055389663 container health_status 26c2ce9bdb83c6db5ccad7f5b2a7cb4e9354ab0235ad2f942ea42c8feda2142b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, 
managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Dec  6 01:52:43 np0005548731 podman[232803]: 2025-12-06 06:52:43.905957442 +0000 UTC m=+0.061385582 container health_status ae4fd08cdec31d6ae2a6b91ffff7deb6ca5a8375cb52ee4b685cc6f25796824e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, org.label-schema.schema-version=1.0, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Dec  6 01:52:43 np0005548731 podman[232802]: 2025-12-06 06:52:43.931360332 +0000 UTC m=+0.088414923 container health_status a118c3becf56cfbcae208065771ebf10a65162bb7b96389971293e534823fb78 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, container_name=ovn_controller, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  6 01:52:44 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:52:44 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:52:44 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:52:44.659 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:52:44 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:52:44 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:52:44 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:52:44.981 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:52:46 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:52:46 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:52:46 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:52:46.662 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:52:46 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:52:46 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:52:46 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:52:46.984 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:52:47 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 01:52:48 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:52:48 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:52:48 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:52:48.665 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:52:48 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:52:48 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:52:48 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:52:48.986 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:52:50 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:52:50 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:52:50 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:52:50.668 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:52:50 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:52:50 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:52:50 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:52:50.987 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:52:52 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 01:52:52 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:52:52 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:52:52 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:52:52.671 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:52:52 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:52:52 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 01:52:52 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:52:52.989 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 01:52:54 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:52:54 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:52:54 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:52:54.674 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:52:54 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:52:54 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:52:54 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:52:54.992 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:52:56 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:52:56 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:52:56 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:52:56.678 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:52:56 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:52:56 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 01:52:56 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:52:56.994 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 01:52:57 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 01:52:58 np0005548731 nova_compute[232433]: 2025-12-06 06:52:58.247 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 01:52:58 np0005548731 nova_compute[232433]: 2025-12-06 06:52:58.293 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 01:52:58 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:52:58 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:52:58 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:52:58.681 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:52:58 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:52:58 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 01:52:58 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:52:58.996 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 01:53:00 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:53:00 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:53:00 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:53:00.684 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:53:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:53:00.839 143965 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 01:53:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:53:00.839 143965 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 01:53:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:53:00.839 143965 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 01:53:00 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:53:00 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 01:53:00 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:53:00.998 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 01:53:02 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 01:53:02 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:53:02 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:53:02 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:53:02.687 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:53:03 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:53:03 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:53:03 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:53:03.001 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:53:04 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:53:04 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:53:04 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:53:04.689 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:53:05 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:53:05 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:53:05 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:53:05.003 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:53:06 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:53:06 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 01:53:06 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:53:06.692 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 01:53:07 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:53:07 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:53:07 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:53:07.005 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:53:07 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 01:53:08 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:53:08 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 01:53:08 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:53:08.695 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 01:53:09 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:53:09 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000025s ======
Dec  6 01:53:09 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:53:09.007 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Dec  6 01:53:10 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:53:10 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:53:10 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:53:10.698 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:53:11 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:53:11 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:53:11 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:53:11.009 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:53:12 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 01:53:12 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:53:12 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 01:53:12 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:53:12.701 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 01:53:13 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:53:13 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:53:13 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:53:13.012 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:53:14 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:53:14 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:53:14 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:53:14.705 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:53:14 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  6 01:53:14 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 01:53:14 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  6 01:53:14 np0005548731 podman[233114]: 2025-12-06 06:53:14.896533588 +0000 UTC m=+0.056765477 container health_status 26c2ce9bdb83c6db5ccad7f5b2a7cb4e9354ab0235ad2f942ea42c8feda2142b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Dec  6 01:53:14 np0005548731 podman[233116]: 2025-12-06 06:53:14.904507716 +0000 UTC m=+0.059608649 container health_status ae4fd08cdec31d6ae2a6b91ffff7deb6ca5a8375cb52ee4b685cc6f25796824e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team)
Dec  6 01:53:14 np0005548731 podman[233115]: 2025-12-06 06:53:14.930510821 +0000 UTC m=+0.090000962 container health_status a118c3becf56cfbcae208065771ebf10a65162bb7b96389971293e534823fb78 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, tcib_managed=true)
Dec  6 01:53:15 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:53:15 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:53:15 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:53:15.015 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:53:16 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:53:16 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:53:16 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:53:16.707 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:53:17 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:53:17 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 01:53:17 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:53:17.017 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 01:53:17 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 01:53:18 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:53:18 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:53:18 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:53:18.708 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:53:19 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:53:19 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:53:19 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:53:19.019 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:53:20 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:53:20 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:53:20 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:53:20.710 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:53:20 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 01:53:20 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 01:53:21 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:53:21 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:53:21 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:53:21.021 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:53:22 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 01:53:22 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:53:22 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:53:22 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:53:22.712 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:53:23 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:53:23 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:53:23 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:53:23.023 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:53:24 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:53:24 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 01:53:24 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:53:24.715 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 01:53:25 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:53:25 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:53:25 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:53:25.026 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:53:26 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:53:26 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:53:26 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:53:26.718 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:53:27 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:53:27 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 01:53:27 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:53:27.028 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 01:53:27 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 01:53:28 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:53:28 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:53:28 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:53:28.721 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:53:29 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:53:29 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:53:29 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:53:29.031 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:53:30 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:53:30 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 01:53:30 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:53:30.724 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 01:53:31 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:53:31 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:53:31 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:53:31.033 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:53:32 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 01:53:32 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:53:32 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 01:53:32 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:53:32.727 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 01:53:33 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:53:33 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 01:53:33 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:53:33.034 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 01:53:34 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:53:34 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:53:34 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:53:34.730 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:53:35 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:53:35 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:53:35 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:53:35.038 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:53:35 np0005548731 nova_compute[232433]: 2025-12-06 06:53:35.106 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 01:53:35 np0005548731 nova_compute[232433]: 2025-12-06 06:53:35.107 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 01:53:35 np0005548731 nova_compute[232433]: 2025-12-06 06:53:35.107 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  6 01:53:35 np0005548731 nova_compute[232433]: 2025-12-06 06:53:35.107 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  6 01:53:35 np0005548731 nova_compute[232433]: 2025-12-06 06:53:35.129 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Dec  6 01:53:35 np0005548731 nova_compute[232433]: 2025-12-06 06:53:35.130 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 01:53:35 np0005548731 nova_compute[232433]: 2025-12-06 06:53:35.130 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 01:53:35 np0005548731 nova_compute[232433]: 2025-12-06 06:53:35.130 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 01:53:35 np0005548731 nova_compute[232433]: 2025-12-06 06:53:35.131 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 01:53:35 np0005548731 nova_compute[232433]: 2025-12-06 06:53:35.131 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 01:53:35 np0005548731 nova_compute[232433]: 2025-12-06 06:53:35.131 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 01:53:35 np0005548731 nova_compute[232433]: 2025-12-06 06:53:35.131 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  6 01:53:35 np0005548731 nova_compute[232433]: 2025-12-06 06:53:35.132 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 01:53:35 np0005548731 nova_compute[232433]: 2025-12-06 06:53:35.158 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 01:53:35 np0005548731 nova_compute[232433]: 2025-12-06 06:53:35.159 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 01:53:35 np0005548731 nova_compute[232433]: 2025-12-06 06:53:35.159 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 01:53:35 np0005548731 nova_compute[232433]: 2025-12-06 06:53:35.159 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  6 01:53:35 np0005548731 nova_compute[232433]: 2025-12-06 06:53:35.160 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 01:53:35 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 01:53:35 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3584048377' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 01:53:35 np0005548731 nova_compute[232433]: 2025-12-06 06:53:35.588 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.428s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 01:53:35 np0005548731 nova_compute[232433]: 2025-12-06 06:53:35.791 232437 WARNING nova.virt.libvirt.driver [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  6 01:53:35 np0005548731 nova_compute[232433]: 2025-12-06 06:53:35.792 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=5340MB free_disk=20.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  6 01:53:35 np0005548731 nova_compute[232433]: 2025-12-06 06:53:35.793 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 01:53:35 np0005548731 nova_compute[232433]: 2025-12-06 06:53:35.793 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 01:53:35 np0005548731 nova_compute[232433]: 2025-12-06 06:53:35.933 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  6 01:53:35 np0005548731 nova_compute[232433]: 2025-12-06 06:53:35.933 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  6 01:53:35 np0005548731 nova_compute[232433]: 2025-12-06 06:53:35.972 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 01:53:36 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 01:53:36 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1477125112' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 01:53:36 np0005548731 nova_compute[232433]: 2025-12-06 06:53:36.431 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.459s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 01:53:36 np0005548731 nova_compute[232433]: 2025-12-06 06:53:36.437 232437 DEBUG nova.compute.provider_tree [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Inventory has not changed in ProviderTree for provider: 6d00757a-082f-486d-ae84-869a2ba2e6e7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  6 01:53:36 np0005548731 nova_compute[232433]: 2025-12-06 06:53:36.492 232437 DEBUG nova.scheduler.client.report [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Inventory has not changed for provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  6 01:53:36 np0005548731 nova_compute[232433]: 2025-12-06 06:53:36.493 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  6 01:53:36 np0005548731 nova_compute[232433]: 2025-12-06 06:53:36.494 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.701s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 01:53:36 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:53:36 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:53:36 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:53:36.734 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:53:37 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:53:37 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:53:37 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:53:37.041 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:53:37 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 01:53:38 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:53:38 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 01:53:38 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:53:38.736 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 01:53:39 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:53:39 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:53:39 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:53:39.043 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:53:40 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:53:40 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:53:40 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:53:40.739 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:53:41 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:53:41 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:53:41 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:53:41.046 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:53:42 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 01:53:42 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:53:42 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:53:42 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:53:42.742 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:53:43 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:53:43 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:53:43 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:53:43.048 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:53:44 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:53:44 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:53:44 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:53:44.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:53:45 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:53:45 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:53:45 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:53:45.050 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:53:45 np0005548731 podman[233338]: 2025-12-06 06:53:45.876835231 +0000 UTC m=+0.043822674 container health_status 26c2ce9bdb83c6db5ccad7f5b2a7cb4e9354ab0235ad2f942ea42c8feda2142b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  6 01:53:45 np0005548731 podman[233339]: 2025-12-06 06:53:45.911502728 +0000 UTC m=+0.074664847 container health_status a118c3becf56cfbcae208065771ebf10a65162bb7b96389971293e534823fb78 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  6 01:53:45 np0005548731 podman[233340]: 2025-12-06 06:53:45.917289681 +0000 UTC m=+0.078165563 container health_status ae4fd08cdec31d6ae2a6b91ffff7deb6ca5a8375cb52ee4b685cc6f25796824e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.schema-version=1.0, container_name=multipathd, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Dec  6 01:53:46 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:53:46 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:53:46 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:53:46.748 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:53:47 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:53:47 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:53:47 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:53:47.053 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:53:47 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 01:53:48 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:53:48 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:53:48 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:53:48.751 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:53:49 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:53:49 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 01:53:49 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:53:49.054 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 01:53:50 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:53:50 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 01:53:50 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:53:50.754 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 01:53:51 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:53:51 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:53:51 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:53:51.057 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:53:52 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 01:53:52 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:53:52 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 01:53:52 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:53:52.757 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 01:53:53 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:53:53 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 01:53:53 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:53:53.059 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 01:53:54 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:53:54 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:53:54 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:53:54.760 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:53:55 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:53:55 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:53:55 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:53:55.062 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:53:56 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:53:56 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 01:53:56 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:53:56.763 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 01:53:57 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:53:57 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 01:53:57 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:53:57.064 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 01:53:57 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 01:53:58 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:53:58 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:53:58 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:53:58.766 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:53:59 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:53:59 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:53:59 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:53:59.065 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:54:00 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:54:00 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:54:00 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:54:00.770 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:54:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:54:00.839 143965 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 01:54:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:54:00.840 143965 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 01:54:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:54:00.840 143965 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 01:54:01 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:54:01 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:54:01 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:54:01.068 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:54:02 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 01:54:02 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:54:02 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:54:02 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:54:02.773 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:54:03 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:54:03 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:54:03 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:54:03.070 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:54:04 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:54:04 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 01:54:04 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:54:04.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 01:54:05 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:54:05 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 01:54:05 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:54:05.072 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 01:54:06 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:54:06 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:54:06 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:54:06.778 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:54:07 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:54:07 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 01:54:07 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:54:07.075 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 01:54:07 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 01:54:08 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:54:08 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:54:08 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:54:08.781 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:54:09 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:54:09 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:54:09 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:54:09.078 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:54:10 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:54:10 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 01:54:10 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:54:10.782 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 01:54:11 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:54:11 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:54:11 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:54:11.080 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:54:12 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 01:54:12 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:54:12 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:54:12 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:54:12.785 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:54:13 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:54:13 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:54:13 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:54:13.083 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:54:14 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:54:14 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:54:14 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:54:14.787 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:54:15 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:54:15 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:54:15 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:54:15.085 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:54:16 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:54:16 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:54:16 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:54:16.790 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:54:16 np0005548731 podman[233516]: 2025-12-06 06:54:16.907356725 +0000 UTC m=+0.060516666 container health_status 26c2ce9bdb83c6db5ccad7f5b2a7cb4e9354ab0235ad2f942ea42c8feda2142b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  6 01:54:16 np0005548731 podman[233518]: 2025-12-06 06:54:16.912358129 +0000 UTC m=+0.062003793 container health_status ae4fd08cdec31d6ae2a6b91ffff7deb6ca5a8375cb52ee4b685cc6f25796824e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec  6 01:54:16 np0005548731 podman[233517]: 2025-12-06 06:54:16.930111478 +0000 UTC m=+0.083001863 container health_status a118c3becf56cfbcae208065771ebf10a65162bb7b96389971293e534823fb78 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller)
Dec  6 01:54:17 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:54:17 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:54:17 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:54:17.087 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:54:17 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 01:54:18 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:54:18 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 01:54:18 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:54:18.794 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 01:54:19 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:54:19 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:54:19 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:54:19.090 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:54:20 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:54:20 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 01:54:20 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:54:20.798 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 01:54:21 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:54:21 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:54:21 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:54:21.092 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:54:22 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Dec  6 01:54:22 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  6 01:54:22 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 01:54:22 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  6 01:54:22 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 01:54:22 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:54:22 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:54:22 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:54:22.800 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:54:23 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:54:23 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:54:23 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:54:23.094 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:54:24 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:54:24 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 01:54:24 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:54:24.802 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 01:54:25 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:54:25 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:54:25 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:54:25.096 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:54:26 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:54:26 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:54:26 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:54:26.804 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:54:27 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:54:27 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:54:27 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:54:27.097 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:54:27 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 01:54:27 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 01:54:27 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 01:54:28 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:54:28 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:54:28 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:54:28.807 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:54:29 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:54:29 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:54:29 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:54:29.100 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:54:30 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:54:30 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 01:54:30 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:54:30.810 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 01:54:31 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:54:31 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:54:31 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:54:31.103 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:54:32 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 01:54:32 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:54:32 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 01:54:32 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:54:32.812 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 01:54:33 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:54:33 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:54:33 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:54:33.104 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:54:34 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:54:34 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:54:34 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:54:34.815 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:54:35 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:54:35 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:54:35 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:54:35.107 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:54:36 np0005548731 nova_compute[232433]: 2025-12-06 06:54:36.487 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 01:54:36 np0005548731 nova_compute[232433]: 2025-12-06 06:54:36.487 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 01:54:36 np0005548731 nova_compute[232433]: 2025-12-06 06:54:36.590 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 01:54:36 np0005548731 nova_compute[232433]: 2025-12-06 06:54:36.590 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  6 01:54:36 np0005548731 nova_compute[232433]: 2025-12-06 06:54:36.590 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  6 01:54:36 np0005548731 nova_compute[232433]: 2025-12-06 06:54:36.750 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Dec  6 01:54:36 np0005548731 nova_compute[232433]: 2025-12-06 06:54:36.750 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 01:54:36 np0005548731 nova_compute[232433]: 2025-12-06 06:54:36.750 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 01:54:36 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:54:36 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 01:54:36 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:54:36.846 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 01:54:37 np0005548731 nova_compute[232433]: 2025-12-06 06:54:37.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 01:54:37 np0005548731 nova_compute[232433]: 2025-12-06 06:54:37.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 01:54:37 np0005548731 nova_compute[232433]: 2025-12-06 06:54:37.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 01:54:37 np0005548731 nova_compute[232433]: 2025-12-06 06:54:37.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 01:54:37 np0005548731 nova_compute[232433]: 2025-12-06 06:54:37.105 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  6 01:54:37 np0005548731 nova_compute[232433]: 2025-12-06 06:54:37.105 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 01:54:37 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:54:37 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:54:37 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:54:37.109 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:54:37 np0005548731 nova_compute[232433]: 2025-12-06 06:54:37.190 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 01:54:37 np0005548731 nova_compute[232433]: 2025-12-06 06:54:37.190 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 01:54:37 np0005548731 nova_compute[232433]: 2025-12-06 06:54:37.191 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 01:54:37 np0005548731 nova_compute[232433]: 2025-12-06 06:54:37.191 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  6 01:54:37 np0005548731 nova_compute[232433]: 2025-12-06 06:54:37.191 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 01:54:37 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 01:54:37 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 01:54:37 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2042275790' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 01:54:37 np0005548731 nova_compute[232433]: 2025-12-06 06:54:37.622 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.431s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 01:54:37 np0005548731 nova_compute[232433]: 2025-12-06 06:54:37.757 232437 WARNING nova.virt.libvirt.driver [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  6 01:54:37 np0005548731 nova_compute[232433]: 2025-12-06 06:54:37.759 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=5325MB free_disk=20.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec  6 01:54:37 np0005548731 nova_compute[232433]: 2025-12-06 06:54:37.759 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  6 01:54:37 np0005548731 nova_compute[232433]: 2025-12-06 06:54:37.759 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  6 01:54:37 np0005548731 nova_compute[232433]: 2025-12-06 06:54:37.822 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec  6 01:54:37 np0005548731 nova_compute[232433]: 2025-12-06 06:54:37.823 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec  6 01:54:37 np0005548731 nova_compute[232433]: 2025-12-06 06:54:37.840 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  6 01:54:38 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 01:54:38 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1746649706' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 01:54:38 np0005548731 nova_compute[232433]: 2025-12-06 06:54:38.339 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.500s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  6 01:54:38 np0005548731 nova_compute[232433]: 2025-12-06 06:54:38.345 232437 DEBUG nova.compute.provider_tree [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Inventory has not changed in ProviderTree for provider: 6d00757a-082f-486d-ae84-869a2ba2e6e7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec  6 01:54:38 np0005548731 nova_compute[232433]: 2025-12-06 06:54:38.360 232437 DEBUG nova.scheduler.client.report [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Inventory has not changed for provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec  6 01:54:38 np0005548731 nova_compute[232433]: 2025-12-06 06:54:38.362 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec  6 01:54:38 np0005548731 nova_compute[232433]: 2025-12-06 06:54:38.362 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.603s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  6 01:54:38 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:54:38 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:54:38 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:54:38.849 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:54:39 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:54:39 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:54:39 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:54:39.112 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:54:40 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:54:40 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:54:40 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:54:40.851 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:54:41 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:54:41 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:54:41 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:54:41.114 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:54:42 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:54:42.098 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=2, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:ec:b3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '32:72:e7:89:e0:7d'}, ipsec=False) old=SB_Global(nb_cfg=1) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec  6 01:54:42 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:54:42.099 143965 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec  6 01:54:42 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:54:42.100 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=9f96b960-b4f2-40bd-ae99-08121f5e8b78, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '2'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec  6 01:54:42 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 01:54:42 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:54:42 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:54:42 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:54:42.853 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:54:43 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:54:43 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:54:43 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:54:43.116 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:54:44 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:54:44 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:54:44 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:54:44.856 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:54:45 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:54:45 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:54:45 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:54:45.118 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:54:46 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:54:46 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:54:46 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:54:46.859 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:54:47 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:54:47 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:54:47 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:54:47.120 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:54:47 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 01:54:47 np0005548731 podman[233867]: 2025-12-06 06:54:47.888570778 +0000 UTC m=+0.050062658 container health_status 26c2ce9bdb83c6db5ccad7f5b2a7cb4e9354ab0235ad2f942ea42c8feda2142b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, tcib_managed=true)
Dec  6 01:54:47 np0005548731 podman[233869]: 2025-12-06 06:54:47.904471521 +0000 UTC m=+0.058638731 container health_status ae4fd08cdec31d6ae2a6b91ffff7deb6ca5a8375cb52ee4b685cc6f25796824e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=multipathd, container_name=multipathd)
Dec  6 01:54:47 np0005548731 podman[233868]: 2025-12-06 06:54:47.92950631 +0000 UTC m=+0.084881200 container health_status a118c3becf56cfbcae208065771ebf10a65162bb7b96389971293e534823fb78 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3)
Dec  6 01:54:48 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:54:48 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:54:48 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:54:48.862 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:54:49 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:54:49 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 01:54:49 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:54:49.122 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 01:54:50 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:54:50 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 01:54:50 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:54:50.864 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 01:54:51 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:54:51 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:54:51 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:54:51.125 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:54:52 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 01:54:52 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:54:52 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:54:52 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:54:52.868 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:54:53 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:54:53 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:54:53 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:54:53.127 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:54:54 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:54:54 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:54:54 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:54:54.871 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:54:55 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:54:55 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:54:55 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:54:55.129 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:54:56 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:54:56 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:54:56 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:54:56.874 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:54:57 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:54:57 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:54:57 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:54:57.131 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:54:57 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 01:54:58 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:54:58 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:54:58 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:54:58.878 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:54:59 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:54:59 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:54:59 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:54:59.133 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:55:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:55:00.841 143965 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  6 01:55:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:55:00.841 143965 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  6 01:55:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:55:00.841 143965 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  6 01:55:00 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:55:00 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:55:00 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:55:00.886 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:55:01 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:55:01 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:55:01 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:55:01.136 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:55:02 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 01:55:02 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:55:02 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:55:02 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:55:02.887 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:55:03 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:55:03 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:55:03 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:55:03.138 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:55:04 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:55:04 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:55:04 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:55:04.890 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:55:05 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:55:05 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:55:05 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:55:05.141 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:55:06 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:55:06 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 01:55:06 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:55:06.893 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 01:55:07 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:55:07 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:55:07 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:55:07.142 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:55:08 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 01:55:08 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:55:08 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:55:08 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:55:08.896 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:55:09 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:55:09 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:55:09 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:55:09.145 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:55:10 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:55:10 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:55:10 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:55:10.899 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:55:11 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:55:11 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:55:11 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:55:11.146 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:55:12 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:55:12 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 01:55:12 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:55:12.902 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 01:55:13 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:55:13 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:55:13 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:55:13.148 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:55:13 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 01:55:14 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:55:14 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 01:55:14 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:55:14.905 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 01:55:15 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:55:15 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 01:55:15 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:55:15.151 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 01:55:16 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:55:16 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:55:16 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:55:16.908 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:55:17 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:55:17 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:55:17 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:55:17.153 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:55:18 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 01:55:18 np0005548731 podman[234047]: 2025-12-06 06:55:18.902339588 +0000 UTC m=+0.058514257 container health_status ae4fd08cdec31d6ae2a6b91ffff7deb6ca5a8375cb52ee4b685cc6f25796824e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  6 01:55:18 np0005548731 podman[234045]: 2025-12-06 06:55:18.909469964 +0000 UTC m=+0.075152918 container health_status 26c2ce9bdb83c6db5ccad7f5b2a7cb4e9354ab0235ad2f942ea42c8feda2142b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Dec  6 01:55:18 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:55:18 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 01:55:18 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:55:18.910 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 01:55:18 np0005548731 podman[234046]: 2025-12-06 06:55:18.953466662 +0000 UTC m=+0.105687523 container health_status a118c3becf56cfbcae208065771ebf10a65162bb7b96389971293e534823fb78 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  6 01:55:19 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:55:19 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:55:19 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:55:19.156 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:55:20 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:55:20 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:55:20 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:55:20.914 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:55:21 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:55:21 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:55:21 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:55:21.158 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:55:22 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:55:22 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:55:22 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:55:22.917 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:55:23 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:55:23 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:55:23 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:55:23.159 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:55:23 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 01:55:24 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:55:24 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:55:24 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:55:24.920 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:55:25 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:55:25 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:55:25 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:55:25.161 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:55:25 np0005548731 ceph-mon[77458]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #37. Immutable memtables: 0.
Dec  6 01:55:25 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-06:55:25.998806) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec  6 01:55:25 np0005548731 ceph-mon[77458]: rocksdb: [db/flush_job.cc:856] [default] [JOB 19] Flushing memtable with next log file: 37
Dec  6 01:55:25 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765004125998853, "job": 19, "event": "flush_started", "num_memtables": 1, "num_entries": 2346, "num_deletes": 251, "total_data_size": 5896342, "memory_usage": 5950640, "flush_reason": "Manual Compaction"}
Dec  6 01:55:25 np0005548731 ceph-mon[77458]: rocksdb: [db/flush_job.cc:885] [default] [JOB 19] Level-0 flush table #38: started
Dec  6 01:55:26 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765004126027684, "cf_name": "default", "job": 19, "event": "table_file_creation", "file_number": 38, "file_size": 3870542, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 18204, "largest_seqno": 20545, "table_properties": {"data_size": 3861180, "index_size": 5920, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2373, "raw_key_size": 18670, "raw_average_key_size": 19, "raw_value_size": 3842507, "raw_average_value_size": 4096, "num_data_blocks": 266, "num_entries": 938, "num_filter_entries": 938, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765003890, "oldest_key_time": 1765003890, "file_creation_time": 1765004125, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bc01f9a1-fda8-4008-aee1-1fa5ff4371a1", "db_session_id": "CWBL8WMDM4D79BU86E9T", "orig_file_number": 38, "seqno_to_time_mapping": "N/A"}}
Dec  6 01:55:26 np0005548731 ceph-mon[77458]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 19] Flush lasted 28950 microseconds, and 9031 cpu microseconds.
Dec  6 01:55:26 np0005548731 ceph-mon[77458]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  6 01:55:26 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-06:55:26.027744) [db/flush_job.cc:967] [default] [JOB 19] Level-0 flush table #38: 3870542 bytes OK
Dec  6 01:55:26 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-06:55:26.027776) [db/memtable_list.cc:519] [default] Level-0 commit table #38 started
Dec  6 01:55:26 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-06:55:26.029646) [db/memtable_list.cc:722] [default] Level-0 commit table #38: memtable #1 done
Dec  6 01:55:26 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-06:55:26.029669) EVENT_LOG_v1 {"time_micros": 1765004126029663, "job": 19, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec  6 01:55:26 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-06:55:26.029688) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec  6 01:55:26 np0005548731 ceph-mon[77458]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 19] Try to delete WAL files size 5886137, prev total WAL file size 5886137, number of live WAL files 2.
Dec  6 01:55:26 np0005548731 ceph-mon[77458]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000034.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  6 01:55:26 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-06:55:26.030951) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031323535' seq:72057594037927935, type:22 .. '7061786F730031353037' seq:0, type:0; will stop at (end)
Dec  6 01:55:26 np0005548731 ceph-mon[77458]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 20] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec  6 01:55:26 np0005548731 ceph-mon[77458]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 19 Base level 0, inputs: [38(3779KB)], [36(8752KB)]
Dec  6 01:55:26 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765004126030994, "job": 20, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [38], "files_L6": [36], "score": -1, "input_data_size": 12833117, "oldest_snapshot_seqno": -1}
Dec  6 01:55:26 np0005548731 ceph-mon[77458]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 20] Generated table #39: 4975 keys, 10725608 bytes, temperature: kUnknown
Dec  6 01:55:26 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765004126110491, "cf_name": "default", "job": 20, "event": "table_file_creation", "file_number": 39, "file_size": 10725608, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10689305, "index_size": 22753, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 12485, "raw_key_size": 125147, "raw_average_key_size": 25, "raw_value_size": 10596132, "raw_average_value_size": 2129, "num_data_blocks": 941, "num_entries": 4975, "num_filter_entries": 4975, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765002564, "oldest_key_time": 0, "file_creation_time": 1765004126, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bc01f9a1-fda8-4008-aee1-1fa5ff4371a1", "db_session_id": "CWBL8WMDM4D79BU86E9T", "orig_file_number": 39, "seqno_to_time_mapping": "N/A"}}
Dec  6 01:55:26 np0005548731 ceph-mon[77458]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  6 01:55:26 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-06:55:26.110709) [db/compaction/compaction_job.cc:1663] [default] [JOB 20] Compacted 1@0 + 1@6 files to L6 => 10725608 bytes
Dec  6 01:55:26 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-06:55:26.112665) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 161.3 rd, 134.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.7, 8.5 +0.0 blob) out(10.2 +0.0 blob), read-write-amplify(6.1) write-amplify(2.8) OK, records in: 5490, records dropped: 515 output_compression: NoCompression
Dec  6 01:55:26 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-06:55:26.112686) EVENT_LOG_v1 {"time_micros": 1765004126112676, "job": 20, "event": "compaction_finished", "compaction_time_micros": 79574, "compaction_time_cpu_micros": 24261, "output_level": 6, "num_output_files": 1, "total_output_size": 10725608, "num_input_records": 5490, "num_output_records": 4975, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec  6 01:55:26 np0005548731 ceph-mon[77458]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000038.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  6 01:55:26 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765004126113588, "job": 20, "event": "table_file_deletion", "file_number": 38}
Dec  6 01:55:26 np0005548731 ceph-mon[77458]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000036.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  6 01:55:26 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765004126115460, "job": 20, "event": "table_file_deletion", "file_number": 36}
Dec  6 01:55:26 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-06:55:26.030875) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 01:55:26 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-06:55:26.115589) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 01:55:26 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-06:55:26.115597) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 01:55:26 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-06:55:26.115599) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 01:55:26 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-06:55:26.115601) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 01:55:26 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-06:55:26.115603) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 01:55:26 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:55:26 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:55:26 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:55:26.922 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:55:27 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:55:27 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 01:55:27 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:55:27.162 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 01:55:28 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  6 01:55:28 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 01:55:28 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  6 01:55:28 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 01:55:28 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:55:28 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:55:28 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:55:28.925 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:55:29 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:55:29 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:55:29 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:55:29.165 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:55:30 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:55:30 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:55:30 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:55:30.928 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:55:31 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:55:31 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:55:31 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:55:31.167 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:55:32 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:55:32 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 01:55:32 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:55:32.931 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 01:55:33 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:55:33 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:55:33 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:55:33.169 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:55:33 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 01:55:34 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 01:55:34 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 01:55:34 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:55:34 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:55:34 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:55:34.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:55:35 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:55:35 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:55:35 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:55:35.172 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:55:36 np0005548731 nova_compute[232433]: 2025-12-06 06:55:36.363 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 01:55:36 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:55:36 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:55:36 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:55:36.937 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:55:37 np0005548731 nova_compute[232433]: 2025-12-06 06:55:37.100 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 01:55:37 np0005548731 nova_compute[232433]: 2025-12-06 06:55:37.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 01:55:37 np0005548731 nova_compute[232433]: 2025-12-06 06:55:37.105 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  6 01:55:37 np0005548731 nova_compute[232433]: 2025-12-06 06:55:37.105 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  6 01:55:37 np0005548731 nova_compute[232433]: 2025-12-06 06:55:37.124 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Dec  6 01:55:37 np0005548731 nova_compute[232433]: 2025-12-06 06:55:37.124 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 01:55:37 np0005548731 nova_compute[232433]: 2025-12-06 06:55:37.125 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 01:55:37 np0005548731 nova_compute[232433]: 2025-12-06 06:55:37.125 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 01:55:37 np0005548731 nova_compute[232433]: 2025-12-06 06:55:37.125 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  6 01:55:37 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:55:37 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:55:37 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:55:37.175 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:55:38 np0005548731 nova_compute[232433]: 2025-12-06 06:55:38.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 01:55:38 np0005548731 nova_compute[232433]: 2025-12-06 06:55:38.105 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 01:55:38 np0005548731 nova_compute[232433]: 2025-12-06 06:55:38.155 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 01:55:38 np0005548731 nova_compute[232433]: 2025-12-06 06:55:38.155 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 01:55:38 np0005548731 nova_compute[232433]: 2025-12-06 06:55:38.156 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 01:55:38 np0005548731 nova_compute[232433]: 2025-12-06 06:55:38.156 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  6 01:55:38 np0005548731 nova_compute[232433]: 2025-12-06 06:55:38.156 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 01:55:38 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 01:55:38 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/4068762730' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 01:55:38 np0005548731 nova_compute[232433]: 2025-12-06 06:55:38.591 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.435s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 01:55:38 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 01:55:38 np0005548731 nova_compute[232433]: 2025-12-06 06:55:38.759 232437 WARNING nova.virt.libvirt.driver [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  6 01:55:38 np0005548731 nova_compute[232433]: 2025-12-06 06:55:38.761 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=5307MB free_disk=20.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  6 01:55:38 np0005548731 nova_compute[232433]: 2025-12-06 06:55:38.761 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 01:55:38 np0005548731 nova_compute[232433]: 2025-12-06 06:55:38.761 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 01:55:38 np0005548731 nova_compute[232433]: 2025-12-06 06:55:38.845 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  6 01:55:38 np0005548731 nova_compute[232433]: 2025-12-06 06:55:38.846 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  6 01:55:38 np0005548731 nova_compute[232433]: 2025-12-06 06:55:38.870 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 01:55:38 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:55:38 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:55:38 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:55:38.940 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:55:39 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:55:39 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:55:39 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:55:39.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:55:39 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 01:55:39 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/713881256' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 01:55:39 np0005548731 nova_compute[232433]: 2025-12-06 06:55:39.379 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.509s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 01:55:39 np0005548731 nova_compute[232433]: 2025-12-06 06:55:39.385 232437 DEBUG nova.compute.provider_tree [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Inventory has not changed in ProviderTree for provider: 6d00757a-082f-486d-ae84-869a2ba2e6e7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  6 01:55:39 np0005548731 nova_compute[232433]: 2025-12-06 06:55:39.420 232437 DEBUG nova.scheduler.client.report [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Inventory has not changed for provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  6 01:55:39 np0005548731 nova_compute[232433]: 2025-12-06 06:55:39.422 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  6 01:55:39 np0005548731 nova_compute[232433]: 2025-12-06 06:55:39.422 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.661s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 01:55:40 np0005548731 nova_compute[232433]: 2025-12-06 06:55:40.423 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 01:55:40 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:55:40 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:55:40 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:55:40.943 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:55:41 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:55:41 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:55:41 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:55:41.179 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:55:42 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:55:42 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:55:42 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:55:42.947 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:55:43 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:55:43 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:55:43 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:55:43.182 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:55:43 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 01:55:44 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:55:44 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 01:55:44 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:55:44.950 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 01:55:45 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:55:45 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:55:45 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:55:45.185 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:55:46 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:55:46 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:55:46 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:55:46.953 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:55:47 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:55:47 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:55:47 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:55:47.186 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:55:48 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 01:55:48 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:55:48 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 01:55:48 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:55:48.956 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 01:55:49 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:55:49 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:55:49 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:55:49.189 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:55:49 np0005548731 podman[234399]: 2025-12-06 06:55:49.915886679 +0000 UTC m=+0.077111493 container health_status 26c2ce9bdb83c6db5ccad7f5b2a7cb4e9354ab0235ad2f942ea42c8feda2142b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125)
Dec  6 01:55:49 np0005548731 podman[234400]: 2025-12-06 06:55:49.95846237 +0000 UTC m=+0.116407633 container health_status a118c3becf56cfbcae208065771ebf10a65162bb7b96389971293e534823fb78 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  6 01:55:49 np0005548731 podman[234401]: 2025-12-06 06:55:49.959023553 +0000 UTC m=+0.112858584 container health_status ae4fd08cdec31d6ae2a6b91ffff7deb6ca5a8375cb52ee4b685cc6f25796824e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec  6 01:55:50 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:55:50 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:55:50 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:55:50.959 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:55:51 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:55:51 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:55:51 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:55:51.191 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:55:52 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:55:52 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:55:52 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:55:52.962 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:55:53 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:55:53 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:55:53 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:55:53.193 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:55:53 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 01:55:54 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:55:54 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:55:54 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:55:54.965 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:55:55 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:55:55 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:55:55 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:55:55.195 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:55:56 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:55:56 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 01:55:56 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:55:56.968 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 01:55:57 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:55:57 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:55:57 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:55:57.197 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:55:58 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 01:55:58 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:55:58 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:55:58 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:55:58.973 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:55:59 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:55:59 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:55:59 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:55:59.199 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:56:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:56:00.841 143965 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 01:56:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:56:00.842 143965 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 01:56:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:56:00.842 143965 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 01:56:00 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:56:00 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:56:00 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:56:00.977 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:56:01 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:56:01 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:56:01 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:56:01.201 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:56:02 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:56:02 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:56:02 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:56:02.980 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:56:03 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:56:03 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:56:03 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:56:03.203 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:56:03 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 01:56:04 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:56:04 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:56:04 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:56:04.982 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:56:05 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:56:05 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 01:56:05 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:56:05.203 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 01:56:06 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:56:06 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:56:06 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:56:06.986 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:56:07 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:56:07 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:56:07 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:56:07.206 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:56:08 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 01:56:08 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:56:08 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:56:08 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:56:08.988 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:56:09 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  6 01:56:09 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/351251706' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  6 01:56:09 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  6 01:56:09 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/351251706' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  6 01:56:09 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:56:09 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:56:09 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:56:09.208 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:56:10 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:56:10 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:56:10 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:56:10.991 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:56:11 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:56:11 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:56:11 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:56:11.210 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:56:12 np0005548731 ceph-mon[77458]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #40. Immutable memtables: 0.
Dec  6 01:56:12 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-06:56:12.554555) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec  6 01:56:12 np0005548731 ceph-mon[77458]: rocksdb: [db/flush_job.cc:856] [default] [JOB 21] Flushing memtable with next log file: 40
Dec  6 01:56:12 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765004172554651, "job": 21, "event": "flush_started", "num_memtables": 1, "num_entries": 661, "num_deletes": 250, "total_data_size": 1170760, "memory_usage": 1192664, "flush_reason": "Manual Compaction"}
Dec  6 01:56:12 np0005548731 ceph-mon[77458]: rocksdb: [db/flush_job.cc:885] [default] [JOB 21] Level-0 flush table #41: started
Dec  6 01:56:12 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765004172665910, "cf_name": "default", "job": 21, "event": "table_file_creation", "file_number": 41, "file_size": 528173, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 20550, "largest_seqno": 21206, "table_properties": {"data_size": 525272, "index_size": 873, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 965, "raw_key_size": 7601, "raw_average_key_size": 20, "raw_value_size": 519199, "raw_average_value_size": 1366, "num_data_blocks": 39, "num_entries": 380, "num_filter_entries": 380, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765004127, "oldest_key_time": 1765004127, "file_creation_time": 1765004172, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bc01f9a1-fda8-4008-aee1-1fa5ff4371a1", "db_session_id": "CWBL8WMDM4D79BU86E9T", "orig_file_number": 41, "seqno_to_time_mapping": "N/A"}}
Dec  6 01:56:12 np0005548731 ceph-mon[77458]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 21] Flush lasted 111400 microseconds, and 3923 cpu microseconds.
Dec  6 01:56:12 np0005548731 ceph-mon[77458]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  6 01:56:12 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-06:56:12.665982) [db/flush_job.cc:967] [default] [JOB 21] Level-0 flush table #41: 528173 bytes OK
Dec  6 01:56:12 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-06:56:12.666003) [db/memtable_list.cc:519] [default] Level-0 commit table #41 started
Dec  6 01:56:12 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-06:56:12.696766) [db/memtable_list.cc:722] [default] Level-0 commit table #41: memtable #1 done
Dec  6 01:56:12 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-06:56:12.696818) EVENT_LOG_v1 {"time_micros": 1765004172696807, "job": 21, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec  6 01:56:12 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-06:56:12.696843) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec  6 01:56:12 np0005548731 ceph-mon[77458]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 21] Try to delete WAL files size 1167143, prev total WAL file size 1214830, number of live WAL files 2.
Dec  6 01:56:12 np0005548731 ceph-mon[77458]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000037.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  6 01:56:12 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-06:56:12.697500) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D67727374617400353038' seq:72057594037927935, type:22 .. '6D67727374617400373539' seq:0, type:0; will stop at (end)
Dec  6 01:56:12 np0005548731 ceph-mon[77458]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 22] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec  6 01:56:12 np0005548731 ceph-mon[77458]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 21 Base level 0, inputs: [41(515KB)], [39(10MB)]
Dec  6 01:56:12 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765004172697571, "job": 22, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [41], "files_L6": [39], "score": -1, "input_data_size": 11253781, "oldest_snapshot_seqno": -1}
Dec  6 01:56:12 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:56:12 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:56:12 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:56:12.993 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:56:13 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:56:13 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:56:13 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:56:13.212 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:56:13 np0005548731 ceph-mon[77458]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 22] Generated table #42: 4859 keys, 7660068 bytes, temperature: kUnknown
Dec  6 01:56:13 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765004173281644, "cf_name": "default", "job": 22, "event": "table_file_creation", "file_number": 42, "file_size": 7660068, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7628623, "index_size": 18215, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 12165, "raw_key_size": 123148, "raw_average_key_size": 25, "raw_value_size": 7541510, "raw_average_value_size": 1552, "num_data_blocks": 743, "num_entries": 4859, "num_filter_entries": 4859, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765002564, "oldest_key_time": 0, "file_creation_time": 1765004172, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bc01f9a1-fda8-4008-aee1-1fa5ff4371a1", "db_session_id": "CWBL8WMDM4D79BU86E9T", "orig_file_number": 42, "seqno_to_time_mapping": "N/A"}}
Dec  6 01:56:13 np0005548731 ceph-mon[77458]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  6 01:56:13 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-06:56:13.281939) [db/compaction/compaction_job.cc:1663] [default] [JOB 22] Compacted 1@0 + 1@6 files to L6 => 7660068 bytes
Dec  6 01:56:13 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-06:56:13.284873) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 19.3 rd, 13.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.5, 10.2 +0.0 blob) out(7.3 +0.0 blob), read-write-amplify(35.8) write-amplify(14.5) OK, records in: 5355, records dropped: 496 output_compression: NoCompression
Dec  6 01:56:13 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-06:56:13.284921) EVENT_LOG_v1 {"time_micros": 1765004173284903, "job": 22, "event": "compaction_finished", "compaction_time_micros": 584156, "compaction_time_cpu_micros": 24039, "output_level": 6, "num_output_files": 1, "total_output_size": 7660068, "num_input_records": 5355, "num_output_records": 4859, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec  6 01:56:13 np0005548731 ceph-mon[77458]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000041.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  6 01:56:13 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765004173285257, "job": 22, "event": "table_file_deletion", "file_number": 41}
Dec  6 01:56:13 np0005548731 ceph-mon[77458]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000039.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  6 01:56:13 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765004173287533, "job": 22, "event": "table_file_deletion", "file_number": 39}
Dec  6 01:56:13 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-06:56:12.697424) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 01:56:13 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-06:56:13.287638) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 01:56:13 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-06:56:13.287644) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 01:56:13 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-06:56:13.287646) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 01:56:13 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-06:56:13.287648) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 01:56:13 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-06:56:13.287649) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 01:56:13 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 01:56:14 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:56:14 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:56:14 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:56:14.996 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:56:15 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:56:15 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:56:15 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:56:15.214 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:56:17 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:56:17 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:56:17 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:56:16.999 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:56:17 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:56:17 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 01:56:17 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:56:17.216 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 01:56:18 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 01:56:19 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:56:19 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:56:19 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:56:19.002 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:56:19 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:56:19 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:56:19 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:56:19.219 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:56:20 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:56:20 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=404 latency=0.002000048s ======
Dec  6 01:56:20 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:56:20.177 +0000] "GET /healthcheck HTTP/1.1" 404 240 - "python-urllib3/1.26.5" - latency=0.002000048s
Dec  6 01:56:20 np0005548731 podman[234576]: 2025-12-06 06:56:20.910214342 +0000 UTC m=+0.061390126 container health_status ae4fd08cdec31d6ae2a6b91ffff7deb6ca5a8375cb52ee4b685cc6f25796824e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec  6 01:56:20 np0005548731 podman[234574]: 2025-12-06 06:56:20.936717135 +0000 UTC m=+0.088254278 container health_status 26c2ce9bdb83c6db5ccad7f5b2a7cb4e9354ab0235ad2f942ea42c8feda2142b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team)
Dec  6 01:56:20 np0005548731 podman[234575]: 2025-12-06 06:56:20.936733096 +0000 UTC m=+0.088492654 container health_status a118c3becf56cfbcae208065771ebf10a65162bb7b96389971293e534823fb78 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS)
Dec  6 01:56:21 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:56:21 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:56:21 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:56:21.005 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:56:21 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:56:21 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:56:21 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:56:21.220 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:56:23 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:56:23 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:56:23 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:56:23.008 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:56:23 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:56:23 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:56:23 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:56:23.224 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:56:23 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 01:56:25 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:56:25 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:56:25 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:56:25.012 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:56:25 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:56:25 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:56:25 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:56:25.226 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:56:27 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:56:27 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:56:27 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:56:27.015 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:56:27 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:56:27 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 01:56:27 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:56:27.227 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 01:56:28 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 01:56:29 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:56:29 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 01:56:29 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:56:29.018 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 01:56:29 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:56:29 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 01:56:29 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:56:29.229 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 01:56:31 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:56:31 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:56:31 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:56:31.021 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:56:31 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:56:31 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:56:51 np0005548731 podman[234925]: 2025-12-06 06:56:51.892670703 +0000 UTC m=+0.051271896 container health_status 26c2ce9bdb83c6db5ccad7f5b2a7cb4e9354ab0235ad2f942ea42c8feda2142b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  6 01:56:51 np0005548731 podman[234927]: 2025-12-06 06:56:51.902559377 +0000 UTC m=+0.056650218 container health_status ae4fd08cdec31d6ae2a6b91ffff7deb6ca5a8375cb52ee4b685cc6f25796824e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  6 01:56:51 np0005548731 podman[234926]: 2025-12-06 06:56:51.929378388 +0000 UTC m=+0.085818498 container health_status a118c3becf56cfbcae208065771ebf10a65162bb7b96389971293e534823fb78 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_controller, tcib_managed=true, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Dec  6 01:56:52 np0005548731 rsyslogd[1006]: imjournal: 164 messages lost due to rate-limiting (20000 allowed within 600 seconds)
Dec  6 01:56:53 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:56:53 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:56:53 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:56:53.057 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:56:53 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:56:53 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:56:53 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:56:53.253 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:56:54 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 01:56:55 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:56:55 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:56:55 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:56:55.060 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:56:55 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:56:55 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:56:55 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:56:55.255 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:56:57 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:56:57 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 01:56:57 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:56:57.063 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 01:56:57 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:56:57 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:56:57 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:56:57.257 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:56:59 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:56:59 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 01:56:59 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:56:59.067 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 01:56:59 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:56:59 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:56:59 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:56:59.258 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:56:59 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 01:57:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:57:00.842 143965 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 01:57:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:57:00.843 143965 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 01:57:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:57:00.843 143965 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 01:57:01 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:57:01 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:57:01 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:57:01.070 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:57:01 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:57:01 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 01:57:01 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:57:01.260 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 01:57:03 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:57:03 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:57:03 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:57:03.073 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:57:03 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:57:03 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:57:03 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:57:03.263 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:57:04 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 01:57:05 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:57:05 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:57:05 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:57:05.077 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:57:05 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:57:05 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:57:05 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:57:05.266 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:57:07 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:57:07 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 01:57:07 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:57:07.080 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 01:57:07 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:57:07 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:57:07 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:57:07.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:57:09 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:57:09 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:57:09 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:57:09.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:57:09 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:57:09.089 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=3, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:ec:b3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '32:72:e7:89:e0:7d'}, ipsec=False) old=SB_Global(nb_cfg=2) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 01:57:09 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:57:09.091 143965 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec  6 01:57:09 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:57:09 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:57:09 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:57:09.271 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:57:09 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 01:57:11 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:57:11 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 01:57:11 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:57:11.087 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 01:57:11 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:57:11.094 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=9f96b960-b4f2-40bd-ae99-08121f5e8b78, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '3'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 01:57:11 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:57:11 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 01:57:11 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:57:11.273 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 01:57:13 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:57:13 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:57:13 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:57:13.090 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:57:13 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:57:13 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 01:57:13 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:57:13.276 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 01:57:14 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 01:57:15 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:57:15 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:57:15 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:57:15.093 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:57:15 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:57:15 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 01:57:15 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:57:15.278 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 01:57:17 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:57:17 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:57:17 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:57:17.096 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:57:17 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:57:17 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:57:17 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:57:17.281 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:57:19 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:57:19 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:57:19 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:57:19.099 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:57:19 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:57:19 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:57:19 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:57:19.283 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:57:19 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 01:57:21 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:57:21 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:57:21 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:57:21.102 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:57:21 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:57:21 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:57:21 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:57:21.286 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:57:22 np0005548731 podman[235112]: 2025-12-06 06:57:22.906727673 +0000 UTC m=+0.064424650 container health_status ae4fd08cdec31d6ae2a6b91ffff7deb6ca5a8375cb52ee4b685cc6f25796824e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_id=multipathd, container_name=multipathd, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible)
Dec  6 01:57:22 np0005548731 podman[235110]: 2025-12-06 06:57:22.919500498 +0000 UTC m=+0.084374213 container health_status 26c2ce9bdb83c6db5ccad7f5b2a7cb4e9354ab0235ad2f942ea42c8feda2142b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS)
Dec  6 01:57:22 np0005548731 podman[235111]: 2025-12-06 06:57:22.943363757 +0000 UTC m=+0.103281079 container health_status a118c3becf56cfbcae208065771ebf10a65162bb7b96389971293e534823fb78 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec  6 01:57:23 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:57:23 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:57:23 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:57:23.105 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:57:23 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e151 e151: 3 total, 3 up, 3 in
Dec  6 01:57:23 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:57:23 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:57:23 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:57:23.287 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:57:23 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e152 e152: 3 total, 3 up, 3 in
Dec  6 01:57:24 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 01:57:25 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:57:25 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 01:57:25 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:57:25.108 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 01:57:25 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:57:25 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:57:25 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:57:25.290 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:57:27 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:57:27 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:57:27 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:57:27.111 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:57:27 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:57:27 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:57:27 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:57:27.293 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:57:29 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:57:29 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:57:29 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:57:29.114 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:57:29 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:57:29 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:57:29 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:57:29.295 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:57:29 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 01:57:31 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:57:31 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:57:31 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:57:31.117 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:57:31 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:57:31 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:57:31 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:57:31.297 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:57:31 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e153 e153: 3 total, 3 up, 3 in
Dec  6 01:57:33 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:57:33 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:57:33 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:57:33.119 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:57:33 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:57:33 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:57:33 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:57:33.298 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:57:34 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 01:57:35 np0005548731 nova_compute[232433]: 2025-12-06 06:57:35.105 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 01:57:35 np0005548731 nova_compute[232433]: 2025-12-06 06:57:35.106 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Dec  6 01:57:35 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:57:35 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:57:35 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:57:35.122 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:57:35 np0005548731 nova_compute[232433]: 2025-12-06 06:57:35.124 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Dec  6 01:57:35 np0005548731 nova_compute[232433]: 2025-12-06 06:57:35.127 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 01:57:35 np0005548731 nova_compute[232433]: 2025-12-06 06:57:35.128 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Dec  6 01:57:35 np0005548731 nova_compute[232433]: 2025-12-06 06:57:35.145 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 01:57:35 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:57:35 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:57:35 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:57:35.300 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:57:36 np0005548731 nova_compute[232433]: 2025-12-06 06:57:36.163 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 01:57:37 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:57:37 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:57:37 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:57:37.124 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:57:37 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:57:37 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:57:37 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:57:37.303 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:57:38 np0005548731 nova_compute[232433]: 2025-12-06 06:57:38.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 01:57:39 np0005548731 nova_compute[232433]: 2025-12-06 06:57:39.099 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 01:57:39 np0005548731 nova_compute[232433]: 2025-12-06 06:57:39.103 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 01:57:39 np0005548731 nova_compute[232433]: 2025-12-06 06:57:39.104 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  6 01:57:39 np0005548731 nova_compute[232433]: 2025-12-06 06:57:39.104 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  6 01:57:39 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:57:39 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:57:39 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:57:39.127 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:57:39 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:57:39 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 01:57:39 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:57:39.305 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 01:57:39 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 01:57:40 np0005548731 nova_compute[232433]: 2025-12-06 06:57:40.183 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Dec  6 01:57:40 np0005548731 nova_compute[232433]: 2025-12-06 06:57:40.183 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 01:57:40 np0005548731 nova_compute[232433]: 2025-12-06 06:57:40.183 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 01:57:40 np0005548731 nova_compute[232433]: 2025-12-06 06:57:40.184 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 01:57:40 np0005548731 nova_compute[232433]: 2025-12-06 06:57:40.669 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 01:57:40 np0005548731 nova_compute[232433]: 2025-12-06 06:57:40.671 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 01:57:40 np0005548731 nova_compute[232433]: 2025-12-06 06:57:40.671 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 01:57:40 np0005548731 nova_compute[232433]: 2025-12-06 06:57:40.672 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  6 01:57:40 np0005548731 nova_compute[232433]: 2025-12-06 06:57:40.672 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 01:57:41 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:57:41 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:57:41 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:57:41.130 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:57:41 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:57:41 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 01:57:41 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:57:41.307 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 01:57:41 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 01:57:41 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1122354135' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 01:57:41 np0005548731 nova_compute[232433]: 2025-12-06 06:57:41.355 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.683s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 01:57:41 np0005548731 nova_compute[232433]: 2025-12-06 06:57:41.494 232437 WARNING nova.virt.libvirt.driver [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  6 01:57:41 np0005548731 nova_compute[232433]: 2025-12-06 06:57:41.495 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=5300MB free_disk=20.96738052368164GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  6 01:57:41 np0005548731 nova_compute[232433]: 2025-12-06 06:57:41.496 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 01:57:41 np0005548731 nova_compute[232433]: 2025-12-06 06:57:41.496 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 01:57:43 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:57:43 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:57:43 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:57:43.133 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:57:43 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:57:43 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:57:43 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:57:43.310 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:57:44 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 01:57:45 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:57:45 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:57:45 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:57:45.136 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:57:45 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:57:45 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:57:45 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:57:45.312 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:57:45 np0005548731 nova_compute[232433]: 2025-12-06 06:57:45.462 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  6 01:57:45 np0005548731 nova_compute[232433]: 2025-12-06 06:57:45.462 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  6 01:57:45 np0005548731 nova_compute[232433]: 2025-12-06 06:57:45.486 232437 DEBUG nova.scheduler.client.report [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Refreshing inventories for resource provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Dec  6 01:57:45 np0005548731 nova_compute[232433]: 2025-12-06 06:57:45.518 232437 DEBUG nova.scheduler.client.report [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Updating ProviderTree inventory for provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Dec  6 01:57:45 np0005548731 nova_compute[232433]: 2025-12-06 06:57:45.518 232437 DEBUG nova.compute.provider_tree [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Updating inventory in ProviderTree for provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Dec  6 01:57:45 np0005548731 nova_compute[232433]: 2025-12-06 06:57:45.536 232437 DEBUG nova.scheduler.client.report [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Refreshing aggregate associations for resource provider 6d00757a-082f-486d-ae84-869a2ba2e6e7, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Dec  6 01:57:45 np0005548731 nova_compute[232433]: 2025-12-06 06:57:45.566 232437 DEBUG nova.scheduler.client.report [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Refreshing trait associations for resource provider 6d00757a-082f-486d-ae84-869a2ba2e6e7, traits: COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_STORAGE_BUS_USB,COMPUTE_ACCELERATORS,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_TRUSTED_CERTS,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_SSE,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_SSE2,HW_CPU_X86_SSE41,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_STORAGE_BUS_SATA,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_SECURITY_TPM_1_2,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_VOLUME_EXTEND,COMPUTE_NODE,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_SECURITY_TPM_2_0,COMPUTE_VIOMMU_MODEL_AUTO,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_SSE42,COMPUTE_STORAGE_BUS_FDC,COMPUTE_RESCUE_BFV,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_IMAGE_TYPE_ARI _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Dec  6 01:57:45 np0005548731 nova_compute[232433]: 2025-12-06 06:57:45.627 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 01:57:46 np0005548731 ceph-osd[80147]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Dec  6 01:57:46 np0005548731 nova_compute[232433]: 2025-12-06 06:57:46.652 232437 DEBUG oslo_concurrency.lockutils [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] Acquiring lock "012052dd-fee1-4ac6-bc17-336aff7b5db3" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 01:57:46 np0005548731 nova_compute[232433]: 2025-12-06 06:57:46.652 232437 DEBUG oslo_concurrency.lockutils [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] Lock "012052dd-fee1-4ac6-bc17-336aff7b5db3" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 01:57:46 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 01:57:46 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 01:57:46 np0005548731 nova_compute[232433]: 2025-12-06 06:57:46.669 232437 DEBUG nova.compute.manager [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] [instance: 012052dd-fee1-4ac6-bc17-336aff7b5db3] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Dec  6 01:57:46 np0005548731 nova_compute[232433]: 2025-12-06 06:57:46.791 232437 DEBUG oslo_concurrency.lockutils [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 01:57:47 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 01:57:47 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2115016062' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 01:57:47 np0005548731 nova_compute[232433]: 2025-12-06 06:57:47.056 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.429s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 01:57:47 np0005548731 nova_compute[232433]: 2025-12-06 06:57:47.062 232437 DEBUG nova.compute.provider_tree [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Inventory has not changed in ProviderTree for provider: 6d00757a-082f-486d-ae84-869a2ba2e6e7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  6 01:57:47 np0005548731 nova_compute[232433]: 2025-12-06 06:57:47.079 232437 DEBUG nova.scheduler.client.report [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Inventory has not changed for provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  6 01:57:47 np0005548731 nova_compute[232433]: 2025-12-06 06:57:47.081 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  6 01:57:47 np0005548731 nova_compute[232433]: 2025-12-06 06:57:47.081 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 5.586s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 01:57:47 np0005548731 nova_compute[232433]: 2025-12-06 06:57:47.082 232437 DEBUG oslo_concurrency.lockutils [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.291s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 01:57:47 np0005548731 nova_compute[232433]: 2025-12-06 06:57:47.087 232437 DEBUG nova.virt.hardware [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Dec  6 01:57:47 np0005548731 nova_compute[232433]: 2025-12-06 06:57:47.088 232437 INFO nova.compute.claims [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] [instance: 012052dd-fee1-4ac6-bc17-336aff7b5db3] Claim successful on node compute-2.ctlplane.example.com#033[00m
Dec  6 01:57:47 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:57:47 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:57:47 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:57:47.139 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:57:47 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:57:47 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:57:47 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:57:47.315 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:57:47 np0005548731 nova_compute[232433]: 2025-12-06 06:57:47.353 232437 DEBUG oslo_concurrency.processutils [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 01:57:47 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 01:57:47 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 01:57:47 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 01:57:47 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2428242993' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 01:57:47 np0005548731 nova_compute[232433]: 2025-12-06 06:57:47.830 232437 DEBUG oslo_concurrency.processutils [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.477s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 01:57:47 np0005548731 nova_compute[232433]: 2025-12-06 06:57:47.838 232437 DEBUG nova.compute.provider_tree [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] Inventory has not changed in ProviderTree for provider: 6d00757a-082f-486d-ae84-869a2ba2e6e7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  6 01:57:47 np0005548731 nova_compute[232433]: 2025-12-06 06:57:47.857 232437 DEBUG nova.scheduler.client.report [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] Inventory has not changed for provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  6 01:57:47 np0005548731 nova_compute[232433]: 2025-12-06 06:57:47.887 232437 DEBUG oslo_concurrency.lockutils [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.805s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 01:57:47 np0005548731 nova_compute[232433]: 2025-12-06 06:57:47.889 232437 DEBUG nova.compute.manager [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] [instance: 012052dd-fee1-4ac6-bc17-336aff7b5db3] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Dec  6 01:57:47 np0005548731 nova_compute[232433]: 2025-12-06 06:57:47.958 232437 DEBUG nova.compute.manager [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] [instance: 012052dd-fee1-4ac6-bc17-336aff7b5db3] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Dec  6 01:57:47 np0005548731 nova_compute[232433]: 2025-12-06 06:57:47.959 232437 DEBUG nova.network.neutron [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] [instance: 012052dd-fee1-4ac6-bc17-336aff7b5db3] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Dec  6 01:57:48 np0005548731 nova_compute[232433]: 2025-12-06 06:57:48.002 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 01:57:48 np0005548731 nova_compute[232433]: 2025-12-06 06:57:48.003 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 01:57:48 np0005548731 nova_compute[232433]: 2025-12-06 06:57:48.003 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  6 01:57:48 np0005548731 nova_compute[232433]: 2025-12-06 06:57:48.013 232437 INFO nova.virt.libvirt.driver [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] [instance: 012052dd-fee1-4ac6-bc17-336aff7b5db3] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Dec  6 01:57:48 np0005548731 nova_compute[232433]: 2025-12-06 06:57:48.069 232437 DEBUG nova.compute.manager [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] [instance: 012052dd-fee1-4ac6-bc17-336aff7b5db3] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Dec  6 01:57:48 np0005548731 nova_compute[232433]: 2025-12-06 06:57:48.307 232437 DEBUG nova.compute.manager [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] [instance: 012052dd-fee1-4ac6-bc17-336aff7b5db3] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Dec  6 01:57:48 np0005548731 nova_compute[232433]: 2025-12-06 06:57:48.309 232437 DEBUG nova.virt.libvirt.driver [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] [instance: 012052dd-fee1-4ac6-bc17-336aff7b5db3] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Dec  6 01:57:48 np0005548731 nova_compute[232433]: 2025-12-06 06:57:48.309 232437 INFO nova.virt.libvirt.driver [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] [instance: 012052dd-fee1-4ac6-bc17-336aff7b5db3] Creating image(s)#033[00m
Dec  6 01:57:48 np0005548731 nova_compute[232433]: 2025-12-06 06:57:48.338 232437 DEBUG nova.storage.rbd_utils [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] rbd image 012052dd-fee1-4ac6-bc17-336aff7b5db3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 01:57:48 np0005548731 nova_compute[232433]: 2025-12-06 06:57:48.365 232437 DEBUG nova.storage.rbd_utils [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] rbd image 012052dd-fee1-4ac6-bc17-336aff7b5db3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 01:57:48 np0005548731 nova_compute[232433]: 2025-12-06 06:57:48.397 232437 DEBUG nova.storage.rbd_utils [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] rbd image 012052dd-fee1-4ac6-bc17-336aff7b5db3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 01:57:48 np0005548731 nova_compute[232433]: 2025-12-06 06:57:48.401 232437 DEBUG oslo_concurrency.lockutils [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] Acquiring lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 01:57:48 np0005548731 nova_compute[232433]: 2025-12-06 06:57:48.403 232437 DEBUG oslo_concurrency.lockutils [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 01:57:48 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 01:57:48 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 01:57:48 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  6 01:57:48 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 01:57:48 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  6 01:57:49 np0005548731 nova_compute[232433]: 2025-12-06 06:57:49.052 232437 DEBUG nova.network.neutron [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] [instance: 012052dd-fee1-4ac6-bc17-336aff7b5db3] Automatically allocating a network for project 066c314d67e347f6a49e8e3e27998441. _auto_allocate_network /usr/lib/python3.9/site-packages/nova/network/neutron.py:2460#033[00m
Dec  6 01:57:49 np0005548731 nova_compute[232433]: 2025-12-06 06:57:49.082 232437 DEBUG nova.virt.libvirt.imagebackend [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] Image locations are: [{'url': 'rbd://40a1bae4-cf76-5610-8dab-c75116dfe0bb/images/6efab05d-c7cf-4770-a5c3-c806a2739063/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://40a1bae4-cf76-5610-8dab-c75116dfe0bb/images/6efab05d-c7cf-4770-a5c3-c806a2739063/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085#033[00m
Dec  6 01:57:49 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:57:49 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:57:49 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:57:49.143 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:57:49 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:57:49 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000012s ======
Dec  6 01:57:49 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:57:49.317 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000012s
Dec  6 01:57:49 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 01:57:51 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:57:51 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000012s ======
Dec  6 01:57:51 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:57:51.145 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000012s
Dec  6 01:57:51 np0005548731 nova_compute[232433]: 2025-12-06 06:57:51.188 232437 DEBUG oslo_concurrency.lockutils [None req-5e184569-1450-40ba-818b-3aff0acf5033 16a339a76ea148bc97bcb8e720ea8511 cc0441dee99e4b51ac652297c4221de3 - - default default] Acquiring lock "ccb415c8-1183-4921-bc8c-1c40722eeb98" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 01:57:51 np0005548731 nova_compute[232433]: 2025-12-06 06:57:51.189 232437 DEBUG oslo_concurrency.lockutils [None req-5e184569-1450-40ba-818b-3aff0acf5033 16a339a76ea148bc97bcb8e720ea8511 cc0441dee99e4b51ac652297c4221de3 - - default default] Lock "ccb415c8-1183-4921-bc8c-1c40722eeb98" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 01:57:51 np0005548731 nova_compute[232433]: 2025-12-06 06:57:51.205 232437 DEBUG oslo_concurrency.processutils [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef.part --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 01:57:51 np0005548731 nova_compute[232433]: 2025-12-06 06:57:51.225 232437 DEBUG nova.compute.manager [None req-5e184569-1450-40ba-818b-3aff0acf5033 16a339a76ea148bc97bcb8e720ea8511 cc0441dee99e4b51ac652297c4221de3 - - default default] [instance: ccb415c8-1183-4921-bc8c-1c40722eeb98] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Dec  6 01:57:51 np0005548731 nova_compute[232433]: 2025-12-06 06:57:51.284 232437 DEBUG oslo_concurrency.lockutils [None req-5e184569-1450-40ba-818b-3aff0acf5033 16a339a76ea148bc97bcb8e720ea8511 cc0441dee99e4b51ac652297c4221de3 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 01:57:51 np0005548731 nova_compute[232433]: 2025-12-06 06:57:51.284 232437 DEBUG oslo_concurrency.lockutils [None req-5e184569-1450-40ba-818b-3aff0acf5033 16a339a76ea148bc97bcb8e720ea8511 cc0441dee99e4b51ac652297c4221de3 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 01:57:51 np0005548731 nova_compute[232433]: 2025-12-06 06:57:51.286 232437 DEBUG oslo_concurrency.processutils [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef.part --force-share --output=json" returned: 0 in 0.081s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 01:57:51 np0005548731 nova_compute[232433]: 2025-12-06 06:57:51.287 232437 DEBUG nova.virt.images [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] 6efab05d-c7cf-4770-a5c3-c806a2739063 was qcow2, converting to raw fetch_to_raw /usr/lib/python3.9/site-packages/nova/virt/images.py:242#033[00m
Dec  6 01:57:51 np0005548731 nova_compute[232433]: 2025-12-06 06:57:51.288 232437 DEBUG nova.privsep.utils [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63#033[00m
Dec  6 01:57:51 np0005548731 nova_compute[232433]: 2025-12-06 06:57:51.288 232437 DEBUG oslo_concurrency.processutils [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] Running cmd (subprocess): qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef.part /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef.converted execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 01:57:51 np0005548731 nova_compute[232433]: 2025-12-06 06:57:51.312 232437 DEBUG nova.virt.hardware [None req-5e184569-1450-40ba-818b-3aff0acf5033 16a339a76ea148bc97bcb8e720ea8511 cc0441dee99e4b51ac652297c4221de3 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Dec  6 01:57:51 np0005548731 nova_compute[232433]: 2025-12-06 06:57:51.312 232437 INFO nova.compute.claims [None req-5e184569-1450-40ba-818b-3aff0acf5033 16a339a76ea148bc97bcb8e720ea8511 cc0441dee99e4b51ac652297c4221de3 - - default default] [instance: ccb415c8-1183-4921-bc8c-1c40722eeb98] Claim successful on node compute-2.ctlplane.example.com#033[00m
Dec  6 01:57:51 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:57:51 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:57:51 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:57:51.319 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:57:51 np0005548731 nova_compute[232433]: 2025-12-06 06:57:51.466 232437 DEBUG oslo_concurrency.processutils [None req-5e184569-1450-40ba-818b-3aff0acf5033 16a339a76ea148bc97bcb8e720ea8511 cc0441dee99e4b51ac652297c4221de3 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 01:57:52 np0005548731 nova_compute[232433]: 2025-12-06 06:57:52.213 232437 DEBUG oslo_concurrency.processutils [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] CMD "qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef.part /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef.converted" returned: 0 in 0.925s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 01:57:52 np0005548731 nova_compute[232433]: 2025-12-06 06:57:52.220 232437 DEBUG oslo_concurrency.processutils [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef.converted --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 01:57:52 np0005548731 nova_compute[232433]: 2025-12-06 06:57:52.272 232437 DEBUG oslo_concurrency.processutils [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef.converted --force-share --output=json" returned: 0 in 0.053s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 01:57:52 np0005548731 nova_compute[232433]: 2025-12-06 06:57:52.274 232437 DEBUG oslo_concurrency.lockutils [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 3.871s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 01:57:52 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 01:57:52 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/4214025489' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 01:57:52 np0005548731 nova_compute[232433]: 2025-12-06 06:57:52.573 232437 DEBUG nova.storage.rbd_utils [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] rbd image 012052dd-fee1-4ac6-bc17-336aff7b5db3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 01:57:52 np0005548731 nova_compute[232433]: 2025-12-06 06:57:52.576 232437 DEBUG oslo_concurrency.processutils [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef 012052dd-fee1-4ac6-bc17-336aff7b5db3_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 01:57:52 np0005548731 nova_compute[232433]: 2025-12-06 06:57:52.593 232437 DEBUG oslo_concurrency.processutils [None req-5e184569-1450-40ba-818b-3aff0acf5033 16a339a76ea148bc97bcb8e720ea8511 cc0441dee99e4b51ac652297c4221de3 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.127s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 01:57:52 np0005548731 nova_compute[232433]: 2025-12-06 06:57:52.598 232437 DEBUG nova.compute.provider_tree [None req-5e184569-1450-40ba-818b-3aff0acf5033 16a339a76ea148bc97bcb8e720ea8511 cc0441dee99e4b51ac652297c4221de3 - - default default] Updating inventory in ProviderTree for provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 with inventory: {'MEMORY_MB': {'total': 7680, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 20, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Dec  6 01:57:52 np0005548731 nova_compute[232433]: 2025-12-06 06:57:52.632 232437 ERROR nova.scheduler.client.report [None req-5e184569-1450-40ba-818b-3aff0acf5033 16a339a76ea148bc97bcb8e720ea8511 cc0441dee99e4b51ac652297c4221de3 - - default default] [req-bcfc3204-8bbf-439d-805b-5d6a9518d77c] Failed to update inventory to [{'MEMORY_MB': {'total': 7680, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 20, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}}] for resource provider with UUID 6d00757a-082f-486d-ae84-869a2ba2e6e7.  Got 409: {"errors": [{"status": 409, "title": "Conflict", "detail": "There was a conflict when trying to complete your request.\n\n resource provider generation conflict  ", "code": "placement.concurrent_update", "request_id": "req-bcfc3204-8bbf-439d-805b-5d6a9518d77c"}]}#033[00m
Dec  6 01:57:52 np0005548731 nova_compute[232433]: 2025-12-06 06:57:52.647 232437 DEBUG nova.scheduler.client.report [None req-5e184569-1450-40ba-818b-3aff0acf5033 16a339a76ea148bc97bcb8e720ea8511 cc0441dee99e4b51ac652297c4221de3 - - default default] Refreshing inventories for resource provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Dec  6 01:57:52 np0005548731 nova_compute[232433]: 2025-12-06 06:57:52.687 232437 DEBUG nova.scheduler.client.report [None req-5e184569-1450-40ba-818b-3aff0acf5033 16a339a76ea148bc97bcb8e720ea8511 cc0441dee99e4b51ac652297c4221de3 - - default default] Updating ProviderTree inventory for provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Dec  6 01:57:52 np0005548731 nova_compute[232433]: 2025-12-06 06:57:52.687 232437 DEBUG nova.compute.provider_tree [None req-5e184569-1450-40ba-818b-3aff0acf5033 16a339a76ea148bc97bcb8e720ea8511 cc0441dee99e4b51ac652297c4221de3 - - default default] Updating inventory in ProviderTree for provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Dec  6 01:57:52 np0005548731 nova_compute[232433]: 2025-12-06 06:57:52.701 232437 DEBUG nova.scheduler.client.report [None req-5e184569-1450-40ba-818b-3aff0acf5033 16a339a76ea148bc97bcb8e720ea8511 cc0441dee99e4b51ac652297c4221de3 - - default default] Refreshing aggregate associations for resource provider 6d00757a-082f-486d-ae84-869a2ba2e6e7, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Dec  6 01:57:52 np0005548731 nova_compute[232433]: 2025-12-06 06:57:52.727 232437 DEBUG nova.scheduler.client.report [None req-5e184569-1450-40ba-818b-3aff0acf5033 16a339a76ea148bc97bcb8e720ea8511 cc0441dee99e4b51ac652297c4221de3 - - default default] Refreshing trait associations for resource provider 6d00757a-082f-486d-ae84-869a2ba2e6e7, traits: COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_STORAGE_BUS_USB,COMPUTE_ACCELERATORS,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_TRUSTED_CERTS,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_SSE,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_SSE2,HW_CPU_X86_SSE41,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_STORAGE_BUS_SATA,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_SECURITY_TPM_1_2,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_VOLUME_EXTEND,COMPUTE_NODE,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_SECURITY_TPM_2_0,COMPUTE_VIOMMU_MODEL_AUTO,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_SSE42,COMPUTE_STORAGE_BUS_FDC,COMPUTE_RESCUE_BFV,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_IMAGE_TYPE_ARI _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Dec  6 01:57:52 np0005548731 nova_compute[232433]: 2025-12-06 06:57:52.794 232437 DEBUG oslo_concurrency.processutils [None req-5e184569-1450-40ba-818b-3aff0acf5033 16a339a76ea148bc97bcb8e720ea8511 cc0441dee99e4b51ac652297c4221de3 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 01:57:53 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:57:53 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:57:53 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:57:53.149 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:57:53 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 01:57:53 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/594518666' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 01:57:53 np0005548731 nova_compute[232433]: 2025-12-06 06:57:53.270 232437 DEBUG oslo_concurrency.processutils [None req-5e184569-1450-40ba-818b-3aff0acf5033 16a339a76ea148bc97bcb8e720ea8511 cc0441dee99e4b51ac652297c4221de3 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.477s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 01:57:53 np0005548731 nova_compute[232433]: 2025-12-06 06:57:53.276 232437 DEBUG nova.compute.provider_tree [None req-5e184569-1450-40ba-818b-3aff0acf5033 16a339a76ea148bc97bcb8e720ea8511 cc0441dee99e4b51ac652297c4221de3 - - default default] Updating inventory in ProviderTree for provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 with inventory: {'MEMORY_MB': {'total': 7680, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 20, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Dec  6 01:57:53 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:57:53 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000012s ======
Dec  6 01:57:53 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:57:53.321 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000012s
Dec  6 01:57:53 np0005548731 nova_compute[232433]: 2025-12-06 06:57:53.347 232437 DEBUG nova.scheduler.client.report [None req-5e184569-1450-40ba-818b-3aff0acf5033 16a339a76ea148bc97bcb8e720ea8511 cc0441dee99e4b51ac652297c4221de3 - - default default] Updated inventory for provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 with generation 4 in Placement from set_inventory_for_provider using data: {'MEMORY_MB': {'total': 7680, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 20, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:957#033[00m
Dec  6 01:57:53 np0005548731 nova_compute[232433]: 2025-12-06 06:57:53.347 232437 DEBUG nova.compute.provider_tree [None req-5e184569-1450-40ba-818b-3aff0acf5033 16a339a76ea148bc97bcb8e720ea8511 cc0441dee99e4b51ac652297c4221de3 - - default default] Updating resource provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 generation from 4 to 5 during operation: update_inventory _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164#033[00m
Dec  6 01:57:53 np0005548731 nova_compute[232433]: 2025-12-06 06:57:53.348 232437 DEBUG nova.compute.provider_tree [None req-5e184569-1450-40ba-818b-3aff0acf5033 16a339a76ea148bc97bcb8e720ea8511 cc0441dee99e4b51ac652297c4221de3 - - default default] Updating inventory in ProviderTree for provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 with inventory: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Dec  6 01:57:53 np0005548731 nova_compute[232433]: 2025-12-06 06:57:53.386 232437 DEBUG oslo_concurrency.lockutils [None req-5e184569-1450-40ba-818b-3aff0acf5033 16a339a76ea148bc97bcb8e720ea8511 cc0441dee99e4b51ac652297c4221de3 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 2.102s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 01:57:53 np0005548731 nova_compute[232433]: 2025-12-06 06:57:53.388 232437 DEBUG nova.compute.manager [None req-5e184569-1450-40ba-818b-3aff0acf5033 16a339a76ea148bc97bcb8e720ea8511 cc0441dee99e4b51ac652297c4221de3 - - default default] [instance: ccb415c8-1183-4921-bc8c-1c40722eeb98] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Dec  6 01:57:53 np0005548731 nova_compute[232433]: 2025-12-06 06:57:53.474 232437 DEBUG nova.compute.manager [None req-5e184569-1450-40ba-818b-3aff0acf5033 16a339a76ea148bc97bcb8e720ea8511 cc0441dee99e4b51ac652297c4221de3 - - default default] [instance: ccb415c8-1183-4921-bc8c-1c40722eeb98] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Dec  6 01:57:53 np0005548731 nova_compute[232433]: 2025-12-06 06:57:53.474 232437 DEBUG nova.network.neutron [None req-5e184569-1450-40ba-818b-3aff0acf5033 16a339a76ea148bc97bcb8e720ea8511 cc0441dee99e4b51ac652297c4221de3 - - default default] [instance: ccb415c8-1183-4921-bc8c-1c40722eeb98] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Dec  6 01:57:53 np0005548731 podman[235613]: 2025-12-06 06:57:53.49547492 +0000 UTC m=+0.064068859 container health_status ae4fd08cdec31d6ae2a6b91ffff7deb6ca5a8375cb52ee4b685cc6f25796824e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  6 01:57:53 np0005548731 nova_compute[232433]: 2025-12-06 06:57:53.499 232437 INFO nova.virt.libvirt.driver [None req-5e184569-1450-40ba-818b-3aff0acf5033 16a339a76ea148bc97bcb8e720ea8511 cc0441dee99e4b51ac652297c4221de3 - - default default] [instance: ccb415c8-1183-4921-bc8c-1c40722eeb98] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Dec  6 01:57:53 np0005548731 podman[235611]: 2025-12-06 06:57:53.50624258 +0000 UTC m=+0.080722360 container health_status 26c2ce9bdb83c6db5ccad7f5b2a7cb4e9354ab0235ad2f942ea42c8feda2142b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Dec  6 01:57:53 np0005548731 podman[235612]: 2025-12-06 06:57:53.5177361 +0000 UTC m=+0.090231816 container health_status a118c3becf56cfbcae208065771ebf10a65162bb7b96389971293e534823fb78 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3)
Dec  6 01:57:53 np0005548731 nova_compute[232433]: 2025-12-06 06:57:53.546 232437 DEBUG nova.compute.manager [None req-5e184569-1450-40ba-818b-3aff0acf5033 16a339a76ea148bc97bcb8e720ea8511 cc0441dee99e4b51ac652297c4221de3 - - default default] [instance: ccb415c8-1183-4921-bc8c-1c40722eeb98] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Dec  6 01:57:53 np0005548731 nova_compute[232433]: 2025-12-06 06:57:53.658 232437 DEBUG nova.compute.manager [None req-5e184569-1450-40ba-818b-3aff0acf5033 16a339a76ea148bc97bcb8e720ea8511 cc0441dee99e4b51ac652297c4221de3 - - default default] [instance: ccb415c8-1183-4921-bc8c-1c40722eeb98] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Dec  6 01:57:53 np0005548731 nova_compute[232433]: 2025-12-06 06:57:53.660 232437 DEBUG nova.virt.libvirt.driver [None req-5e184569-1450-40ba-818b-3aff0acf5033 16a339a76ea148bc97bcb8e720ea8511 cc0441dee99e4b51ac652297c4221de3 - - default default] [instance: ccb415c8-1183-4921-bc8c-1c40722eeb98] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Dec  6 01:57:53 np0005548731 nova_compute[232433]: 2025-12-06 06:57:53.660 232437 INFO nova.virt.libvirt.driver [None req-5e184569-1450-40ba-818b-3aff0acf5033 16a339a76ea148bc97bcb8e720ea8511 cc0441dee99e4b51ac652297c4221de3 - - default default] [instance: ccb415c8-1183-4921-bc8c-1c40722eeb98] Creating image(s)#033[00m
Dec  6 01:57:53 np0005548731 nova_compute[232433]: 2025-12-06 06:57:53.679 232437 DEBUG nova.storage.rbd_utils [None req-5e184569-1450-40ba-818b-3aff0acf5033 16a339a76ea148bc97bcb8e720ea8511 cc0441dee99e4b51ac652297c4221de3 - - default default] rbd image ccb415c8-1183-4921-bc8c-1c40722eeb98_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 01:57:53 np0005548731 nova_compute[232433]: 2025-12-06 06:57:53.702 232437 DEBUG nova.storage.rbd_utils [None req-5e184569-1450-40ba-818b-3aff0acf5033 16a339a76ea148bc97bcb8e720ea8511 cc0441dee99e4b51ac652297c4221de3 - - default default] rbd image ccb415c8-1183-4921-bc8c-1c40722eeb98_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 01:57:53 np0005548731 nova_compute[232433]: 2025-12-06 06:57:53.724 232437 DEBUG nova.storage.rbd_utils [None req-5e184569-1450-40ba-818b-3aff0acf5033 16a339a76ea148bc97bcb8e720ea8511 cc0441dee99e4b51ac652297c4221de3 - - default default] rbd image ccb415c8-1183-4921-bc8c-1c40722eeb98_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 01:57:53 np0005548731 nova_compute[232433]: 2025-12-06 06:57:53.727 232437 DEBUG oslo_concurrency.processutils [None req-5e184569-1450-40ba-818b-3aff0acf5033 16a339a76ea148bc97bcb8e720ea8511 cc0441dee99e4b51ac652297c4221de3 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 01:57:53 np0005548731 nova_compute[232433]: 2025-12-06 06:57:53.784 232437 DEBUG oslo_concurrency.processutils [None req-5e184569-1450-40ba-818b-3aff0acf5033 16a339a76ea148bc97bcb8e720ea8511 cc0441dee99e4b51ac652297c4221de3 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 01:57:53 np0005548731 nova_compute[232433]: 2025-12-06 06:57:53.784 232437 DEBUG oslo_concurrency.lockutils [None req-5e184569-1450-40ba-818b-3aff0acf5033 16a339a76ea148bc97bcb8e720ea8511 cc0441dee99e4b51ac652297c4221de3 - - default default] Acquiring lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 01:57:53 np0005548731 nova_compute[232433]: 2025-12-06 06:57:53.785 232437 DEBUG oslo_concurrency.lockutils [None req-5e184569-1450-40ba-818b-3aff0acf5033 16a339a76ea148bc97bcb8e720ea8511 cc0441dee99e4b51ac652297c4221de3 - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 01:57:53 np0005548731 nova_compute[232433]: 2025-12-06 06:57:53.786 232437 DEBUG oslo_concurrency.lockutils [None req-5e184569-1450-40ba-818b-3aff0acf5033 16a339a76ea148bc97bcb8e720ea8511 cc0441dee99e4b51ac652297c4221de3 - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 01:57:54 np0005548731 nova_compute[232433]: 2025-12-06 06:57:54.162 232437 DEBUG nova.storage.rbd_utils [None req-5e184569-1450-40ba-818b-3aff0acf5033 16a339a76ea148bc97bcb8e720ea8511 cc0441dee99e4b51ac652297c4221de3 - - default default] rbd image ccb415c8-1183-4921-bc8c-1c40722eeb98_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 01:57:54 np0005548731 nova_compute[232433]: 2025-12-06 06:57:54.166 232437 DEBUG oslo_concurrency.processutils [None req-5e184569-1450-40ba-818b-3aff0acf5033 16a339a76ea148bc97bcb8e720ea8511 cc0441dee99e4b51ac652297c4221de3 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef ccb415c8-1183-4921-bc8c-1c40722eeb98_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 01:57:54 np0005548731 nova_compute[232433]: 2025-12-06 06:57:54.187 232437 DEBUG nova.network.neutron [None req-5e184569-1450-40ba-818b-3aff0acf5033 16a339a76ea148bc97bcb8e720ea8511 cc0441dee99e4b51ac652297c4221de3 - - default default] [instance: ccb415c8-1183-4921-bc8c-1c40722eeb98] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188#033[00m
Dec  6 01:57:54 np0005548731 nova_compute[232433]: 2025-12-06 06:57:54.188 232437 DEBUG nova.compute.manager [None req-5e184569-1450-40ba-818b-3aff0acf5033 16a339a76ea148bc97bcb8e720ea8511 cc0441dee99e4b51ac652297c4221de3 - - default default] [instance: ccb415c8-1183-4921-bc8c-1c40722eeb98] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Dec  6 01:57:54 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 01:57:54 np0005548731 nova_compute[232433]: 2025-12-06 06:57:54.727 232437 DEBUG oslo_concurrency.processutils [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef 012052dd-fee1-4ac6-bc17-336aff7b5db3_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.151s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 01:57:54 np0005548731 nova_compute[232433]: 2025-12-06 06:57:54.829 232437 DEBUG nova.storage.rbd_utils [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] resizing rbd image 012052dd-fee1-4ac6-bc17-336aff7b5db3_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Dec  6 01:57:54 np0005548731 nova_compute[232433]: 2025-12-06 06:57:54.868 232437 DEBUG oslo_concurrency.processutils [None req-5e184569-1450-40ba-818b-3aff0acf5033 16a339a76ea148bc97bcb8e720ea8511 cc0441dee99e4b51ac652297c4221de3 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef ccb415c8-1183-4921-bc8c-1c40722eeb98_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.702s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 01:57:54 np0005548731 nova_compute[232433]: 2025-12-06 06:57:54.982 232437 DEBUG nova.storage.rbd_utils [None req-5e184569-1450-40ba-818b-3aff0acf5033 16a339a76ea148bc97bcb8e720ea8511 cc0441dee99e4b51ac652297c4221de3 - - default default] resizing rbd image ccb415c8-1183-4921-bc8c-1c40722eeb98_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Dec  6 01:57:55 np0005548731 nova_compute[232433]: 2025-12-06 06:57:55.019 232437 DEBUG nova.objects.instance [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] Lazy-loading 'migration_context' on Instance uuid 012052dd-fee1-4ac6-bc17-336aff7b5db3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 01:57:55 np0005548731 nova_compute[232433]: 2025-12-06 06:57:55.038 232437 DEBUG nova.virt.libvirt.driver [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] [instance: 012052dd-fee1-4ac6-bc17-336aff7b5db3] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Dec  6 01:57:55 np0005548731 nova_compute[232433]: 2025-12-06 06:57:55.038 232437 DEBUG nova.virt.libvirt.driver [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] [instance: 012052dd-fee1-4ac6-bc17-336aff7b5db3] Ensure instance console log exists: /var/lib/nova/instances/012052dd-fee1-4ac6-bc17-336aff7b5db3/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec  6 01:57:55 np0005548731 nova_compute[232433]: 2025-12-06 06:57:55.039 232437 DEBUG oslo_concurrency.lockutils [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 01:57:55 np0005548731 nova_compute[232433]: 2025-12-06 06:57:55.039 232437 DEBUG oslo_concurrency.lockutils [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 01:57:55 np0005548731 nova_compute[232433]: 2025-12-06 06:57:55.039 232437 DEBUG oslo_concurrency.lockutils [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 01:57:55 np0005548731 nova_compute[232433]: 2025-12-06 06:57:55.096 232437 DEBUG nova.objects.instance [None req-5e184569-1450-40ba-818b-3aff0acf5033 16a339a76ea148bc97bcb8e720ea8511 cc0441dee99e4b51ac652297c4221de3 - - default default] Lazy-loading 'migration_context' on Instance uuid ccb415c8-1183-4921-bc8c-1c40722eeb98 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 01:57:55 np0005548731 nova_compute[232433]: 2025-12-06 06:57:55.107 232437 DEBUG nova.virt.libvirt.driver [None req-5e184569-1450-40ba-818b-3aff0acf5033 16a339a76ea148bc97bcb8e720ea8511 cc0441dee99e4b51ac652297c4221de3 - - default default] [instance: ccb415c8-1183-4921-bc8c-1c40722eeb98] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Dec  6 01:57:55 np0005548731 nova_compute[232433]: 2025-12-06 06:57:55.107 232437 DEBUG nova.virt.libvirt.driver [None req-5e184569-1450-40ba-818b-3aff0acf5033 16a339a76ea148bc97bcb8e720ea8511 cc0441dee99e4b51ac652297c4221de3 - - default default] [instance: ccb415c8-1183-4921-bc8c-1c40722eeb98] Ensure instance console log exists: /var/lib/nova/instances/ccb415c8-1183-4921-bc8c-1c40722eeb98/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec  6 01:57:55 np0005548731 nova_compute[232433]: 2025-12-06 06:57:55.107 232437 DEBUG oslo_concurrency.lockutils [None req-5e184569-1450-40ba-818b-3aff0acf5033 16a339a76ea148bc97bcb8e720ea8511 cc0441dee99e4b51ac652297c4221de3 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 01:57:55 np0005548731 nova_compute[232433]: 2025-12-06 06:57:55.108 232437 DEBUG oslo_concurrency.lockutils [None req-5e184569-1450-40ba-818b-3aff0acf5033 16a339a76ea148bc97bcb8e720ea8511 cc0441dee99e4b51ac652297c4221de3 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 01:57:55 np0005548731 nova_compute[232433]: 2025-12-06 06:57:55.108 232437 DEBUG oslo_concurrency.lockutils [None req-5e184569-1450-40ba-818b-3aff0acf5033 16a339a76ea148bc97bcb8e720ea8511 cc0441dee99e4b51ac652297c4221de3 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 01:57:55 np0005548731 nova_compute[232433]: 2025-12-06 06:57:55.110 232437 DEBUG nova.virt.libvirt.driver [None req-5e184569-1450-40ba-818b-3aff0acf5033 16a339a76ea148bc97bcb8e720ea8511 cc0441dee99e4b51ac652297c4221de3 - - default default] [instance: ccb415c8-1183-4921-bc8c-1c40722eeb98] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'device_type': 'disk', 'encryption_secret_uuid': None, 'disk_bus': 'virtio', 'boot_index': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'encryption_options': None, 'size': 0, 'encrypted': False, 'image_id': '6efab05d-c7cf-4770-a5c3-c806a2739063'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec  6 01:57:55 np0005548731 nova_compute[232433]: 2025-12-06 06:57:55.114 232437 WARNING nova.virt.libvirt.driver [None req-5e184569-1450-40ba-818b-3aff0acf5033 16a339a76ea148bc97bcb8e720ea8511 cc0441dee99e4b51ac652297c4221de3 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  6 01:57:55 np0005548731 nova_compute[232433]: 2025-12-06 06:57:55.118 232437 DEBUG nova.virt.libvirt.host [None req-5e184569-1450-40ba-818b-3aff0acf5033 16a339a76ea148bc97bcb8e720ea8511 cc0441dee99e4b51ac652297c4221de3 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec  6 01:57:55 np0005548731 nova_compute[232433]: 2025-12-06 06:57:55.119 232437 DEBUG nova.virt.libvirt.host [None req-5e184569-1450-40ba-818b-3aff0acf5033 16a339a76ea148bc97bcb8e720ea8511 cc0441dee99e4b51ac652297c4221de3 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec  6 01:57:55 np0005548731 nova_compute[232433]: 2025-12-06 06:57:55.121 232437 DEBUG nova.virt.libvirt.host [None req-5e184569-1450-40ba-818b-3aff0acf5033 16a339a76ea148bc97bcb8e720ea8511 cc0441dee99e4b51ac652297c4221de3 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec  6 01:57:55 np0005548731 nova_compute[232433]: 2025-12-06 06:57:55.122 232437 DEBUG nova.virt.libvirt.host [None req-5e184569-1450-40ba-818b-3aff0acf5033 16a339a76ea148bc97bcb8e720ea8511 cc0441dee99e4b51ac652297c4221de3 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec  6 01:57:55 np0005548731 nova_compute[232433]: 2025-12-06 06:57:55.124 232437 DEBUG nova.virt.libvirt.driver [None req-5e184569-1450-40ba-818b-3aff0acf5033 16a339a76ea148bc97bcb8e720ea8511 cc0441dee99e4b51ac652297c4221de3 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec  6 01:57:55 np0005548731 nova_compute[232433]: 2025-12-06 06:57:55.124 232437 DEBUG nova.virt.hardware [None req-5e184569-1450-40ba-818b-3aff0acf5033 16a339a76ea148bc97bcb8e720ea8511 cc0441dee99e4b51ac652297c4221de3 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-06T06:56:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='25848a18-11d9-4f11-80b5-5d005675c76d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec  6 01:57:55 np0005548731 nova_compute[232433]: 2025-12-06 06:57:55.125 232437 DEBUG nova.virt.hardware [None req-5e184569-1450-40ba-818b-3aff0acf5033 16a339a76ea148bc97bcb8e720ea8511 cc0441dee99e4b51ac652297c4221de3 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec  6 01:57:55 np0005548731 nova_compute[232433]: 2025-12-06 06:57:55.125 232437 DEBUG nova.virt.hardware [None req-5e184569-1450-40ba-818b-3aff0acf5033 16a339a76ea148bc97bcb8e720ea8511 cc0441dee99e4b51ac652297c4221de3 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec  6 01:57:55 np0005548731 nova_compute[232433]: 2025-12-06 06:57:55.125 232437 DEBUG nova.virt.hardware [None req-5e184569-1450-40ba-818b-3aff0acf5033 16a339a76ea148bc97bcb8e720ea8511 cc0441dee99e4b51ac652297c4221de3 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec  6 01:57:55 np0005548731 nova_compute[232433]: 2025-12-06 06:57:55.125 232437 DEBUG nova.virt.hardware [None req-5e184569-1450-40ba-818b-3aff0acf5033 16a339a76ea148bc97bcb8e720ea8511 cc0441dee99e4b51ac652297c4221de3 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec  6 01:57:55 np0005548731 nova_compute[232433]: 2025-12-06 06:57:55.126 232437 DEBUG nova.virt.hardware [None req-5e184569-1450-40ba-818b-3aff0acf5033 16a339a76ea148bc97bcb8e720ea8511 cc0441dee99e4b51ac652297c4221de3 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec  6 01:57:55 np0005548731 nova_compute[232433]: 2025-12-06 06:57:55.126 232437 DEBUG nova.virt.hardware [None req-5e184569-1450-40ba-818b-3aff0acf5033 16a339a76ea148bc97bcb8e720ea8511 cc0441dee99e4b51ac652297c4221de3 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec  6 01:57:55 np0005548731 nova_compute[232433]: 2025-12-06 06:57:55.126 232437 DEBUG nova.virt.hardware [None req-5e184569-1450-40ba-818b-3aff0acf5033 16a339a76ea148bc97bcb8e720ea8511 cc0441dee99e4b51ac652297c4221de3 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec  6 01:57:55 np0005548731 nova_compute[232433]: 2025-12-06 06:57:55.126 232437 DEBUG nova.virt.hardware [None req-5e184569-1450-40ba-818b-3aff0acf5033 16a339a76ea148bc97bcb8e720ea8511 cc0441dee99e4b51ac652297c4221de3 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec  6 01:57:55 np0005548731 nova_compute[232433]: 2025-12-06 06:57:55.127 232437 DEBUG nova.virt.hardware [None req-5e184569-1450-40ba-818b-3aff0acf5033 16a339a76ea148bc97bcb8e720ea8511 cc0441dee99e4b51ac652297c4221de3 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec  6 01:57:55 np0005548731 nova_compute[232433]: 2025-12-06 06:57:55.127 232437 DEBUG nova.virt.hardware [None req-5e184569-1450-40ba-818b-3aff0acf5033 16a339a76ea148bc97bcb8e720ea8511 cc0441dee99e4b51ac652297c4221de3 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec  6 01:57:55 np0005548731 nova_compute[232433]: 2025-12-06 06:57:55.130 232437 DEBUG nova.privsep.utils [None req-5e184569-1450-40ba-818b-3aff0acf5033 16a339a76ea148bc97bcb8e720ea8511 cc0441dee99e4b51ac652297c4221de3 - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63#033[00m
Dec  6 01:57:55 np0005548731 nova_compute[232433]: 2025-12-06 06:57:55.131 232437 DEBUG oslo_concurrency.processutils [None req-5e184569-1450-40ba-818b-3aff0acf5033 16a339a76ea148bc97bcb8e720ea8511 cc0441dee99e4b51ac652297c4221de3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 01:57:55 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:57:55 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000011s ======
Dec  6 01:57:55 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:57:55.151 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Dec  6 01:57:55 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 01:57:55 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 01:57:55 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:57:55 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:57:55 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:57:55.323 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:57:55 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  6 01:57:55 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3377158659' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  6 01:57:55 np0005548731 nova_compute[232433]: 2025-12-06 06:57:55.570 232437 DEBUG oslo_concurrency.processutils [None req-5e184569-1450-40ba-818b-3aff0acf5033 16a339a76ea148bc97bcb8e720ea8511 cc0441dee99e4b51ac652297c4221de3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.439s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 01:57:55 np0005548731 nova_compute[232433]: 2025-12-06 06:57:55.611 232437 DEBUG nova.storage.rbd_utils [None req-5e184569-1450-40ba-818b-3aff0acf5033 16a339a76ea148bc97bcb8e720ea8511 cc0441dee99e4b51ac652297c4221de3 - - default default] rbd image ccb415c8-1183-4921-bc8c-1c40722eeb98_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 01:57:55 np0005548731 nova_compute[232433]: 2025-12-06 06:57:55.618 232437 DEBUG oslo_concurrency.processutils [None req-5e184569-1450-40ba-818b-3aff0acf5033 16a339a76ea148bc97bcb8e720ea8511 cc0441dee99e4b51ac652297c4221de3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 01:57:56 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  6 01:57:56 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2594476977' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  6 01:57:56 np0005548731 nova_compute[232433]: 2025-12-06 06:57:56.081 232437 DEBUG oslo_concurrency.processutils [None req-5e184569-1450-40ba-818b-3aff0acf5033 16a339a76ea148bc97bcb8e720ea8511 cc0441dee99e4b51ac652297c4221de3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 01:57:56 np0005548731 nova_compute[232433]: 2025-12-06 06:57:56.085 232437 DEBUG nova.objects.instance [None req-5e184569-1450-40ba-818b-3aff0acf5033 16a339a76ea148bc97bcb8e720ea8511 cc0441dee99e4b51ac652297c4221de3 - - default default] Lazy-loading 'pci_devices' on Instance uuid ccb415c8-1183-4921-bc8c-1c40722eeb98 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 01:57:56 np0005548731 nova_compute[232433]: 2025-12-06 06:57:56.101 232437 DEBUG nova.virt.libvirt.driver [None req-5e184569-1450-40ba-818b-3aff0acf5033 16a339a76ea148bc97bcb8e720ea8511 cc0441dee99e4b51ac652297c4221de3 - - default default] [instance: ccb415c8-1183-4921-bc8c-1c40722eeb98] End _get_guest_xml xml=<domain type="kvm">
Dec  6 01:57:56 np0005548731 nova_compute[232433]:  <uuid>ccb415c8-1183-4921-bc8c-1c40722eeb98</uuid>
Dec  6 01:57:56 np0005548731 nova_compute[232433]:  <name>instance-00000006</name>
Dec  6 01:57:56 np0005548731 nova_compute[232433]:  <memory>131072</memory>
Dec  6 01:57:56 np0005548731 nova_compute[232433]:  <vcpu>1</vcpu>
Dec  6 01:57:56 np0005548731 nova_compute[232433]:  <metadata>
Dec  6 01:57:56 np0005548731 nova_compute[232433]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec  6 01:57:56 np0005548731 nova_compute[232433]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec  6 01:57:56 np0005548731 nova_compute[232433]:      <nova:name>tempest-LiveMigrationNegativeTest-server-1409725138</nova:name>
Dec  6 01:57:56 np0005548731 nova_compute[232433]:      <nova:creationTime>2025-12-06 06:57:55</nova:creationTime>
Dec  6 01:57:56 np0005548731 nova_compute[232433]:      <nova:flavor name="m1.nano">
Dec  6 01:57:56 np0005548731 nova_compute[232433]:        <nova:memory>128</nova:memory>
Dec  6 01:57:56 np0005548731 nova_compute[232433]:        <nova:disk>1</nova:disk>
Dec  6 01:57:56 np0005548731 nova_compute[232433]:        <nova:swap>0</nova:swap>
Dec  6 01:57:56 np0005548731 nova_compute[232433]:        <nova:ephemeral>0</nova:ephemeral>
Dec  6 01:57:56 np0005548731 nova_compute[232433]:        <nova:vcpus>1</nova:vcpus>
Dec  6 01:57:56 np0005548731 nova_compute[232433]:      </nova:flavor>
Dec  6 01:57:56 np0005548731 nova_compute[232433]:      <nova:owner>
Dec  6 01:57:56 np0005548731 nova_compute[232433]:        <nova:user uuid="16a339a76ea148bc97bcb8e720ea8511">tempest-LiveMigrationNegativeTest-2087231600-project-member</nova:user>
Dec  6 01:57:56 np0005548731 nova_compute[232433]:        <nova:project uuid="cc0441dee99e4b51ac652297c4221de3">tempest-LiveMigrationNegativeTest-2087231600</nova:project>
Dec  6 01:57:56 np0005548731 nova_compute[232433]:      </nova:owner>
Dec  6 01:57:56 np0005548731 nova_compute[232433]:      <nova:root type="image" uuid="6efab05d-c7cf-4770-a5c3-c806a2739063"/>
Dec  6 01:57:56 np0005548731 nova_compute[232433]:      <nova:ports/>
Dec  6 01:57:56 np0005548731 nova_compute[232433]:    </nova:instance>
Dec  6 01:57:56 np0005548731 nova_compute[232433]:  </metadata>
Dec  6 01:57:56 np0005548731 nova_compute[232433]:  <sysinfo type="smbios">
Dec  6 01:57:56 np0005548731 nova_compute[232433]:    <system>
Dec  6 01:57:56 np0005548731 nova_compute[232433]:      <entry name="manufacturer">RDO</entry>
Dec  6 01:57:56 np0005548731 nova_compute[232433]:      <entry name="product">OpenStack Compute</entry>
Dec  6 01:57:56 np0005548731 nova_compute[232433]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec  6 01:57:56 np0005548731 nova_compute[232433]:      <entry name="serial">ccb415c8-1183-4921-bc8c-1c40722eeb98</entry>
Dec  6 01:57:56 np0005548731 nova_compute[232433]:      <entry name="uuid">ccb415c8-1183-4921-bc8c-1c40722eeb98</entry>
Dec  6 01:57:56 np0005548731 nova_compute[232433]:      <entry name="family">Virtual Machine</entry>
Dec  6 01:57:56 np0005548731 nova_compute[232433]:    </system>
Dec  6 01:57:56 np0005548731 nova_compute[232433]:  </sysinfo>
Dec  6 01:57:56 np0005548731 nova_compute[232433]:  <os>
Dec  6 01:57:56 np0005548731 nova_compute[232433]:    <type arch="x86_64" machine="q35">hvm</type>
Dec  6 01:57:56 np0005548731 nova_compute[232433]:    <boot dev="hd"/>
Dec  6 01:57:56 np0005548731 nova_compute[232433]:    <smbios mode="sysinfo"/>
Dec  6 01:57:56 np0005548731 nova_compute[232433]:  </os>
Dec  6 01:57:56 np0005548731 nova_compute[232433]:  <features>
Dec  6 01:57:56 np0005548731 nova_compute[232433]:    <acpi/>
Dec  6 01:57:56 np0005548731 nova_compute[232433]:    <apic/>
Dec  6 01:57:56 np0005548731 nova_compute[232433]:    <vmcoreinfo/>
Dec  6 01:57:56 np0005548731 nova_compute[232433]:  </features>
Dec  6 01:57:56 np0005548731 nova_compute[232433]:  <clock offset="utc">
Dec  6 01:57:56 np0005548731 nova_compute[232433]:    <timer name="pit" tickpolicy="delay"/>
Dec  6 01:57:56 np0005548731 nova_compute[232433]:    <timer name="rtc" tickpolicy="catchup"/>
Dec  6 01:57:56 np0005548731 nova_compute[232433]:    <timer name="hpet" present="no"/>
Dec  6 01:57:56 np0005548731 nova_compute[232433]:  </clock>
Dec  6 01:57:56 np0005548731 nova_compute[232433]:  <cpu mode="custom" match="exact">
Dec  6 01:57:56 np0005548731 nova_compute[232433]:    <model>Nehalem</model>
Dec  6 01:57:56 np0005548731 nova_compute[232433]:    <topology sockets="1" cores="1" threads="1"/>
Dec  6 01:57:56 np0005548731 nova_compute[232433]:  </cpu>
Dec  6 01:57:56 np0005548731 nova_compute[232433]:  <devices>
Dec  6 01:57:56 np0005548731 nova_compute[232433]:    <disk type="network" device="disk">
Dec  6 01:57:56 np0005548731 nova_compute[232433]:      <driver type="raw" cache="none"/>
Dec  6 01:57:56 np0005548731 nova_compute[232433]:      <source protocol="rbd" name="vms/ccb415c8-1183-4921-bc8c-1c40722eeb98_disk">
Dec  6 01:57:56 np0005548731 nova_compute[232433]:        <host name="192.168.122.100" port="6789"/>
Dec  6 01:57:56 np0005548731 nova_compute[232433]:        <host name="192.168.122.102" port="6789"/>
Dec  6 01:57:56 np0005548731 nova_compute[232433]:        <host name="192.168.122.101" port="6789"/>
Dec  6 01:57:56 np0005548731 nova_compute[232433]:      </source>
Dec  6 01:57:56 np0005548731 nova_compute[232433]:      <auth username="openstack">
Dec  6 01:57:56 np0005548731 nova_compute[232433]:        <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec  6 01:57:56 np0005548731 nova_compute[232433]:      </auth>
Dec  6 01:57:56 np0005548731 nova_compute[232433]:      <target dev="vda" bus="virtio"/>
Dec  6 01:57:56 np0005548731 nova_compute[232433]:    </disk>
Dec  6 01:57:56 np0005548731 nova_compute[232433]:    <disk type="network" device="cdrom">
Dec  6 01:57:56 np0005548731 nova_compute[232433]:      <driver type="raw" cache="none"/>
Dec  6 01:57:56 np0005548731 nova_compute[232433]:      <source protocol="rbd" name="vms/ccb415c8-1183-4921-bc8c-1c40722eeb98_disk.config">
Dec  6 01:57:56 np0005548731 nova_compute[232433]:        <host name="192.168.122.100" port="6789"/>
Dec  6 01:57:56 np0005548731 nova_compute[232433]:        <host name="192.168.122.102" port="6789"/>
Dec  6 01:57:56 np0005548731 nova_compute[232433]:        <host name="192.168.122.101" port="6789"/>
Dec  6 01:57:56 np0005548731 nova_compute[232433]:      </source>
Dec  6 01:57:56 np0005548731 nova_compute[232433]:      <auth username="openstack">
Dec  6 01:57:56 np0005548731 nova_compute[232433]:        <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec  6 01:57:56 np0005548731 nova_compute[232433]:      </auth>
Dec  6 01:57:56 np0005548731 nova_compute[232433]:      <target dev="sda" bus="sata"/>
Dec  6 01:57:56 np0005548731 nova_compute[232433]:    </disk>
Dec  6 01:57:56 np0005548731 nova_compute[232433]:    <serial type="pty">
Dec  6 01:57:56 np0005548731 nova_compute[232433]:      <log file="/var/lib/nova/instances/ccb415c8-1183-4921-bc8c-1c40722eeb98/console.log" append="off"/>
Dec  6 01:57:56 np0005548731 nova_compute[232433]:    </serial>
Dec  6 01:57:56 np0005548731 nova_compute[232433]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec  6 01:57:56 np0005548731 nova_compute[232433]:    <video>
Dec  6 01:57:56 np0005548731 nova_compute[232433]:      <model type="virtio"/>
Dec  6 01:57:56 np0005548731 nova_compute[232433]:    </video>
Dec  6 01:57:56 np0005548731 nova_compute[232433]:    <input type="tablet" bus="usb"/>
Dec  6 01:57:56 np0005548731 nova_compute[232433]:    <rng model="virtio">
Dec  6 01:57:56 np0005548731 nova_compute[232433]:      <backend model="random">/dev/urandom</backend>
Dec  6 01:57:56 np0005548731 nova_compute[232433]:    </rng>
Dec  6 01:57:56 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root"/>
Dec  6 01:57:56 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 01:57:56 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 01:57:56 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 01:57:56 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 01:57:56 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 01:57:56 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 01:57:56 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 01:57:56 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 01:57:56 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 01:57:56 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 01:57:56 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 01:57:56 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 01:57:56 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 01:57:56 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 01:57:56 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 01:57:56 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 01:57:56 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 01:57:56 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 01:57:56 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 01:57:56 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 01:57:56 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 01:57:56 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 01:57:56 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 01:57:56 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 01:57:56 np0005548731 nova_compute[232433]:    <controller type="usb" index="0"/>
Dec  6 01:57:56 np0005548731 nova_compute[232433]:    <memballoon model="virtio">
Dec  6 01:57:56 np0005548731 nova_compute[232433]:      <stats period="10"/>
Dec  6 01:57:56 np0005548731 nova_compute[232433]:    </memballoon>
Dec  6 01:57:56 np0005548731 nova_compute[232433]:  </devices>
Dec  6 01:57:56 np0005548731 nova_compute[232433]: </domain>
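Both disks in the domain XML above are network disks served over RBD from the Ceph `vms` pool, with three monitor endpoints each. As a minimal illustration (hypothetical `rbd_source` helper; the sample XML below is trimmed from the logged domain), the pool, image name, and monitors can be pulled out of such a `<disk>` element with the standard library:

```python
import xml.etree.ElementTree as ET

# Trimmed-down <disk> element mirroring the structure Nova logged above
# (sample data copied from the log, not read from a live domain).
DISK_XML = """
<disk type="network" device="disk">
  <driver type="raw" cache="none"/>
  <source protocol="rbd" name="vms/ccb415c8-1183-4921-bc8c-1c40722eeb98_disk">
    <host name="192.168.122.100" port="6789"/>
    <host name="192.168.122.102" port="6789"/>
    <host name="192.168.122.101" port="6789"/>
  </source>
  <target dev="vda" bus="virtio"/>
</disk>
"""

def rbd_source(disk_xml: str):
    """Return (pool, image, [monitor 'host:port', ...]) for an RBD-backed disk."""
    src = ET.fromstring(disk_xml).find("source")
    pool, image = src.get("name").split("/", 1)
    mons = ["%s:%s" % (h.get("name"), h.get("port")) for h in src.findall("host")]
    return pool, image, mons

pool, image, mons = rbd_source(DISK_XML)
```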
Dec  6 01:57:56 np0005548731 nova_compute[232433]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec  6 01:57:56 np0005548731 nova_compute[232433]: 2025-12-06 06:57:56.156 232437 DEBUG nova.virt.libvirt.driver [None req-5e184569-1450-40ba-818b-3aff0acf5033 16a339a76ea148bc97bcb8e720ea8511 cc0441dee99e4b51ac652297c4221de3 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec  6 01:57:56 np0005548731 nova_compute[232433]: 2025-12-06 06:57:56.157 232437 DEBUG nova.virt.libvirt.driver [None req-5e184569-1450-40ba-818b-3aff0acf5033 16a339a76ea148bc97bcb8e720ea8511 cc0441dee99e4b51ac652297c4221de3 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec  6 01:57:56 np0005548731 nova_compute[232433]: 2025-12-06 06:57:56.158 232437 INFO nova.virt.libvirt.driver [None req-5e184569-1450-40ba-818b-3aff0acf5033 16a339a76ea148bc97bcb8e720ea8511 cc0441dee99e4b51ac652297c4221de3 - - default default] [instance: ccb415c8-1183-4921-bc8c-1c40722eeb98] Using config drive
Dec  6 01:57:56 np0005548731 nova_compute[232433]: 2025-12-06 06:57:56.183 232437 DEBUG nova.storage.rbd_utils [None req-5e184569-1450-40ba-818b-3aff0acf5033 16a339a76ea148bc97bcb8e720ea8511 cc0441dee99e4b51ac652297c4221de3 - - default default] rbd image ccb415c8-1183-4921-bc8c-1c40722eeb98_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec  6 01:57:56 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e154 e154: 3 total, 3 up, 3 in
Dec  6 01:57:56 np0005548731 nova_compute[232433]: 2025-12-06 06:57:56.665 232437 INFO nova.virt.libvirt.driver [None req-5e184569-1450-40ba-818b-3aff0acf5033 16a339a76ea148bc97bcb8e720ea8511 cc0441dee99e4b51ac652297c4221de3 - - default default] [instance: ccb415c8-1183-4921-bc8c-1c40722eeb98] Creating config drive at /var/lib/nova/instances/ccb415c8-1183-4921-bc8c-1c40722eeb98/disk.config
Dec  6 01:57:56 np0005548731 nova_compute[232433]: 2025-12-06 06:57:56.673 232437 DEBUG oslo_concurrency.processutils [None req-5e184569-1450-40ba-818b-3aff0acf5033 16a339a76ea148bc97bcb8e720ea8511 cc0441dee99e4b51ac652297c4221de3 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/ccb415c8-1183-4921-bc8c-1c40722eeb98/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpaxccrkzf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  6 01:57:56 np0005548731 nova_compute[232433]: 2025-12-06 06:57:56.817 232437 DEBUG oslo_concurrency.processutils [None req-5e184569-1450-40ba-818b-3aff0acf5033 16a339a76ea148bc97bcb8e720ea8511 cc0441dee99e4b51ac652297c4221de3 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/ccb415c8-1183-4921-bc8c-1c40722eeb98/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpaxccrkzf" returned: 0 in 0.144s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  6 01:57:56 np0005548731 nova_compute[232433]: 2025-12-06 06:57:56.846 232437 DEBUG nova.storage.rbd_utils [None req-5e184569-1450-40ba-818b-3aff0acf5033 16a339a76ea148bc97bcb8e720ea8511 cc0441dee99e4b51ac652297c4221de3 - - default default] rbd image ccb415c8-1183-4921-bc8c-1c40722eeb98_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec  6 01:57:56 np0005548731 nova_compute[232433]: 2025-12-06 06:57:56.850 232437 DEBUG oslo_concurrency.processutils [None req-5e184569-1450-40ba-818b-3aff0acf5033 16a339a76ea148bc97bcb8e720ea8511 cc0441dee99e4b51ac652297c4221de3 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/ccb415c8-1183-4921-bc8c-1c40722eeb98/disk.config ccb415c8-1183-4921-bc8c-1c40722eeb98_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  6 01:57:57 np0005548731 nova_compute[232433]: 2025-12-06 06:57:57.013 232437 DEBUG oslo_concurrency.processutils [None req-5e184569-1450-40ba-818b-3aff0acf5033 16a339a76ea148bc97bcb8e720ea8511 cc0441dee99e4b51ac652297c4221de3 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/ccb415c8-1183-4921-bc8c-1c40722eeb98/disk.config ccb415c8-1183-4921-bc8c-1c40722eeb98_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.163s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  6 01:57:57 np0005548731 nova_compute[232433]: 2025-12-06 06:57:57.014 232437 INFO nova.virt.libvirt.driver [None req-5e184569-1450-40ba-818b-3aff0acf5033 16a339a76ea148bc97bcb8e720ea8511 cc0441dee99e4b51ac652297c4221de3 - - default default] [instance: ccb415c8-1183-4921-bc8c-1c40722eeb98] Deleting local config drive /var/lib/nova/instances/ccb415c8-1183-4921-bc8c-1c40722eeb98/disk.config because it was imported into RBD.
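The config-drive sequence just logged is: build an ISO 9660 image with `mkisofs`, `rbd import` it into the `vms` pool, then delete the local copy. A sketch that merely reassembles the two logged commands as argv lists, the same form `oslo_concurrency.processutils` executes them in (every value is taken verbatim from the log; illustrative only, nothing is run):

```python
# The two commands Nova logged above, rebuilt as argv lists.
instance = "ccb415c8-1183-4921-bc8c-1c40722eeb98"
iso = f"/var/lib/nova/instances/{instance}/disk.config"

mkisofs_cmd = [
    "/usr/bin/mkisofs", "-o", iso,
    "-ldots", "-allow-lowercase", "-allow-multidot", "-l",
    "-publisher", "OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9",
    "-quiet", "-J", "-r", "-V", "config-2",
    "/tmp/tmpaxccrkzf",  # temp dir holding the metadata tree (from the log)
]

rbd_import_cmd = [
    "rbd", "import", "--pool", "vms", iso, f"{instance}_disk.config",
    "--image-format=2", "--id", "openstack", "--conf", "/etc/ceph/ceph.conf",
]
```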
Dec  6 01:57:57 np0005548731 systemd[1]: Starting libvirt secret daemon...
Dec  6 01:57:57 np0005548731 systemd[1]: Started libvirt secret daemon.
Dec  6 01:57:57 np0005548731 systemd-machined[195355]: New machine qemu-1-instance-00000006.
Dec  6 01:57:57 np0005548731 systemd[1]: Started Virtual Machine qemu-1-instance-00000006.
Dec  6 01:57:57 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:57:57 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:57:57 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:57:57.155 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:57:57 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:57:57 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:57:57 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:57:57.326 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:57:57 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e155 e155: 3 total, 3 up, 3 in
Dec  6 01:57:57 np0005548731 nova_compute[232433]: 2025-12-06 06:57:57.862 232437 DEBUG nova.virt.driver [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Emitting event <LifecycleEvent: 1765004277.8619533, ccb415c8-1183-4921-bc8c-1c40722eeb98 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec  6 01:57:57 np0005548731 nova_compute[232433]: 2025-12-06 06:57:57.863 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: ccb415c8-1183-4921-bc8c-1c40722eeb98] VM Resumed (Lifecycle Event)
Dec  6 01:57:57 np0005548731 nova_compute[232433]: 2025-12-06 06:57:57.866 232437 DEBUG nova.compute.manager [None req-5e184569-1450-40ba-818b-3aff0acf5033 16a339a76ea148bc97bcb8e720ea8511 cc0441dee99e4b51ac652297c4221de3 - - default default] [instance: ccb415c8-1183-4921-bc8c-1c40722eeb98] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec  6 01:57:57 np0005548731 nova_compute[232433]: 2025-12-06 06:57:57.867 232437 DEBUG nova.virt.libvirt.driver [None req-5e184569-1450-40ba-818b-3aff0acf5033 16a339a76ea148bc97bcb8e720ea8511 cc0441dee99e4b51ac652297c4221de3 - - default default] [instance: ccb415c8-1183-4921-bc8c-1c40722eeb98] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec  6 01:57:57 np0005548731 nova_compute[232433]: 2025-12-06 06:57:57.873 232437 INFO nova.virt.libvirt.driver [-] [instance: ccb415c8-1183-4921-bc8c-1c40722eeb98] Instance spawned successfully.
Dec  6 01:57:57 np0005548731 nova_compute[232433]: 2025-12-06 06:57:57.874 232437 DEBUG nova.virt.libvirt.driver [None req-5e184569-1450-40ba-818b-3aff0acf5033 16a339a76ea148bc97bcb8e720ea8511 cc0441dee99e4b51ac652297c4221de3 - - default default] [instance: ccb415c8-1183-4921-bc8c-1c40722eeb98] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec  6 01:57:57 np0005548731 nova_compute[232433]: 2025-12-06 06:57:57.897 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: ccb415c8-1183-4921-bc8c-1c40722eeb98] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec  6 01:57:57 np0005548731 nova_compute[232433]: 2025-12-06 06:57:57.902 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: ccb415c8-1183-4921-bc8c-1c40722eeb98] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec  6 01:57:57 np0005548731 nova_compute[232433]: 2025-12-06 06:57:57.906 232437 DEBUG nova.virt.libvirt.driver [None req-5e184569-1450-40ba-818b-3aff0acf5033 16a339a76ea148bc97bcb8e720ea8511 cc0441dee99e4b51ac652297c4221de3 - - default default] [instance: ccb415c8-1183-4921-bc8c-1c40722eeb98] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec  6 01:57:57 np0005548731 nova_compute[232433]: 2025-12-06 06:57:57.906 232437 DEBUG nova.virt.libvirt.driver [None req-5e184569-1450-40ba-818b-3aff0acf5033 16a339a76ea148bc97bcb8e720ea8511 cc0441dee99e4b51ac652297c4221de3 - - default default] [instance: ccb415c8-1183-4921-bc8c-1c40722eeb98] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec  6 01:57:57 np0005548731 nova_compute[232433]: 2025-12-06 06:57:57.906 232437 DEBUG nova.virt.libvirt.driver [None req-5e184569-1450-40ba-818b-3aff0acf5033 16a339a76ea148bc97bcb8e720ea8511 cc0441dee99e4b51ac652297c4221de3 - - default default] [instance: ccb415c8-1183-4921-bc8c-1c40722eeb98] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec  6 01:57:57 np0005548731 nova_compute[232433]: 2025-12-06 06:57:57.907 232437 DEBUG nova.virt.libvirt.driver [None req-5e184569-1450-40ba-818b-3aff0acf5033 16a339a76ea148bc97bcb8e720ea8511 cc0441dee99e4b51ac652297c4221de3 - - default default] [instance: ccb415c8-1183-4921-bc8c-1c40722eeb98] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec  6 01:57:57 np0005548731 nova_compute[232433]: 2025-12-06 06:57:57.907 232437 DEBUG nova.virt.libvirt.driver [None req-5e184569-1450-40ba-818b-3aff0acf5033 16a339a76ea148bc97bcb8e720ea8511 cc0441dee99e4b51ac652297c4221de3 - - default default] [instance: ccb415c8-1183-4921-bc8c-1c40722eeb98] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec  6 01:57:57 np0005548731 nova_compute[232433]: 2025-12-06 06:57:57.908 232437 DEBUG nova.virt.libvirt.driver [None req-5e184569-1450-40ba-818b-3aff0acf5033 16a339a76ea148bc97bcb8e720ea8511 cc0441dee99e4b51ac652297c4221de3 - - default default] [instance: ccb415c8-1183-4921-bc8c-1c40722eeb98] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec  6 01:57:57 np0005548731 nova_compute[232433]: 2025-12-06 06:57:57.931 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: ccb415c8-1183-4921-bc8c-1c40722eeb98] During sync_power_state the instance has a pending task (spawning). Skip.
Dec  6 01:57:57 np0005548731 nova_compute[232433]: 2025-12-06 06:57:57.931 232437 DEBUG nova.virt.driver [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Emitting event <LifecycleEvent: 1765004277.8622692, ccb415c8-1183-4921-bc8c-1c40722eeb98 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec  6 01:57:57 np0005548731 nova_compute[232433]: 2025-12-06 06:57:57.931 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: ccb415c8-1183-4921-bc8c-1c40722eeb98] VM Started (Lifecycle Event)
Dec  6 01:57:57 np0005548731 nova_compute[232433]: 2025-12-06 06:57:57.961 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: ccb415c8-1183-4921-bc8c-1c40722eeb98] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec  6 01:57:57 np0005548731 nova_compute[232433]: 2025-12-06 06:57:57.964 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: ccb415c8-1183-4921-bc8c-1c40722eeb98] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec  6 01:57:57 np0005548731 nova_compute[232433]: 2025-12-06 06:57:57.971 232437 INFO nova.compute.manager [None req-5e184569-1450-40ba-818b-3aff0acf5033 16a339a76ea148bc97bcb8e720ea8511 cc0441dee99e4b51ac652297c4221de3 - - default default] [instance: ccb415c8-1183-4921-bc8c-1c40722eeb98] Took 4.31 seconds to spawn the instance on the hypervisor.
Dec  6 01:57:57 np0005548731 nova_compute[232433]: 2025-12-06 06:57:57.971 232437 DEBUG nova.compute.manager [None req-5e184569-1450-40ba-818b-3aff0acf5033 16a339a76ea148bc97bcb8e720ea8511 cc0441dee99e4b51ac652297c4221de3 - - default default] [instance: ccb415c8-1183-4921-bc8c-1c40722eeb98] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec  6 01:57:58 np0005548731 nova_compute[232433]: 2025-12-06 06:57:58.004 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: ccb415c8-1183-4921-bc8c-1c40722eeb98] During sync_power_state the instance has a pending task (spawning). Skip.
Dec  6 01:57:58 np0005548731 nova_compute[232433]: 2025-12-06 06:57:58.044 232437 INFO nova.compute.manager [None req-5e184569-1450-40ba-818b-3aff0acf5033 16a339a76ea148bc97bcb8e720ea8511 cc0441dee99e4b51ac652297c4221de3 - - default default] [instance: ccb415c8-1183-4921-bc8c-1c40722eeb98] Took 6.78 seconds to build instance.
Dec  6 01:57:58 np0005548731 nova_compute[232433]: 2025-12-06 06:57:58.078 232437 DEBUG oslo_concurrency.lockutils [None req-5e184569-1450-40ba-818b-3aff0acf5033 16a339a76ea148bc97bcb8e720ea8511 cc0441dee99e4b51ac652297c4221de3 - - default default] Lock "ccb415c8-1183-4921-bc8c-1c40722eeb98" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 6.889s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
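The sync messages above compare the database power state (0) with the hypervisor's (1). Assuming Nova's standard power-state codes (`nova.compute.power_state`, where 0 is NOSTATE and 1 is RUNNING), the "pending task (spawning). Skip." lines show that a mismatch is left alone while a task is in flight. A hedged sketch of that decision (hypothetical `should_sync` helper, not Nova's actual code):

```python
# Nova power-state codes (nova.compute.power_state); 0 and 1 are the two
# values seen in the sync messages above.
NOSTATE, RUNNING, PAUSED, SHUTDOWN, CRASHED, SUSPENDED = 0, 1, 3, 4, 6, 7

def should_sync(db_power_state: int, vm_power_state: int, task_state) -> bool:
    """Sketch of the skip logic logged above: while a task such as
    'spawning' is in flight, a DB/VM power-state mismatch is ignored."""
    if task_state is not None:
        return False  # "instance has a pending task (...). Skip."
    return db_power_state != vm_power_state

# The logged case: DB says 0 (NOSTATE), VM says 1 (RUNNING),
# task_state is 'spawning', so no sync is performed.
```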
Dec  6 01:57:59 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:57:59 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:57:59 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:57:59.158 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:57:59 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:57:59 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:57:59 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:57:59.327 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:57:59 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e155 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 01:58:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:58:00.843 143965 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  6 01:58:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:58:00.844 143965 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  6 01:58:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:58:00.844 143965 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  6 01:58:01 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:58:01 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:58:01 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:58:01.161 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:58:01 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:58:01 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000011s ======
Dec  6 01:58:01 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:58:01.329 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Dec  6 01:58:03 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:58:03 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:58:03 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:58:03.164 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:58:03 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:58:03 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000012s ======
Dec  6 01:58:03 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:58:03.331 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000012s
Dec  6 01:58:04 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e155 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 01:58:05 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:58:05 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000011s ======
Dec  6 01:58:05 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:58:05.165 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Dec  6 01:58:05 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:58:05 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000012s ======
Dec  6 01:58:05 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:58:05.333 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000012s
Dec  6 01:58:05 np0005548731 nova_compute[232433]: 2025-12-06 06:58:05.864 232437 DEBUG oslo_concurrency.lockutils [None req-0901ec50-8738-4a25-a816-f698070dc00b 4e358e8cf5374d0b818b9d1de2f01848 bccff877c0f34760846113a84f5b0709 - - default default] Acquiring lock "1dd278c9-e3ac-480b-9a02-2e580da2d211" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  6 01:58:05 np0005548731 nova_compute[232433]: 2025-12-06 06:58:05.864 232437 DEBUG oslo_concurrency.lockutils [None req-0901ec50-8738-4a25-a816-f698070dc00b 4e358e8cf5374d0b818b9d1de2f01848 bccff877c0f34760846113a84f5b0709 - - default default] Lock "1dd278c9-e3ac-480b-9a02-2e580da2d211" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  6 01:58:05 np0005548731 nova_compute[232433]: 2025-12-06 06:58:05.906 232437 DEBUG nova.compute.manager [None req-0901ec50-8738-4a25-a816-f698070dc00b 4e358e8cf5374d0b818b9d1de2f01848 bccff877c0f34760846113a84f5b0709 - - default default] [instance: 1dd278c9-e3ac-480b-9a02-2e580da2d211] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec  6 01:58:06 np0005548731 nova_compute[232433]: 2025-12-06 06:58:06.050 232437 DEBUG oslo_concurrency.lockutils [None req-0901ec50-8738-4a25-a816-f698070dc00b 4e358e8cf5374d0b818b9d1de2f01848 bccff877c0f34760846113a84f5b0709 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  6 01:58:06 np0005548731 nova_compute[232433]: 2025-12-06 06:58:06.051 232437 DEBUG oslo_concurrency.lockutils [None req-0901ec50-8738-4a25-a816-f698070dc00b 4e358e8cf5374d0b818b9d1de2f01848 bccff877c0f34760846113a84f5b0709 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  6 01:58:06 np0005548731 nova_compute[232433]: 2025-12-06 06:58:06.056 232437 DEBUG nova.virt.hardware [None req-0901ec50-8738-4a25-a816-f698070dc00b 4e358e8cf5374d0b818b9d1de2f01848 bccff877c0f34760846113a84f5b0709 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec  6 01:58:06 np0005548731 nova_compute[232433]: 2025-12-06 06:58:06.057 232437 INFO nova.compute.claims [None req-0901ec50-8738-4a25-a816-f698070dc00b 4e358e8cf5374d0b818b9d1de2f01848 bccff877c0f34760846113a84f5b0709 - - default default] [instance: 1dd278c9-e3ac-480b-9a02-2e580da2d211] Claim successful on node compute-2.ctlplane.example.com
Dec  6 01:58:06 np0005548731 nova_compute[232433]: 2025-12-06 06:58:06.190 232437 DEBUG oslo_concurrency.processutils [None req-0901ec50-8738-4a25-a816-f698070dc00b 4e358e8cf5374d0b818b9d1de2f01848 bccff877c0f34760846113a84f5b0709 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  6 01:58:06 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e156 e156: 3 total, 3 up, 3 in
Dec  6 01:58:06 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 01:58:06 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/364414299' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 01:58:06 np0005548731 nova_compute[232433]: 2025-12-06 06:58:06.687 232437 DEBUG oslo_concurrency.processutils [None req-0901ec50-8738-4a25-a816-f698070dc00b 4e358e8cf5374d0b818b9d1de2f01848 bccff877c0f34760846113a84f5b0709 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.497s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  6 01:58:06 np0005548731 nova_compute[232433]: 2025-12-06 06:58:06.694 232437 DEBUG nova.compute.provider_tree [None req-0901ec50-8738-4a25-a816-f698070dc00b 4e358e8cf5374d0b818b9d1de2f01848 bccff877c0f34760846113a84f5b0709 - - default default] Inventory has not changed in ProviderTree for provider: 6d00757a-082f-486d-ae84-869a2ba2e6e7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec  6 01:58:06 np0005548731 nova_compute[232433]: 2025-12-06 06:58:06.721 232437 DEBUG nova.scheduler.client.report [None req-0901ec50-8738-4a25-a816-f698070dc00b 4e358e8cf5374d0b818b9d1de2f01848 bccff877c0f34760846113a84f5b0709 - - default default] Inventory has not changed for provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
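The inventory in the line above is what determines schedulable capacity: Placement computes each resource class's capacity as `(total - reserved) * allocation_ratio`. Recomputing from the logged numbers (a sketch using only values that appear in the log):

```python
# Inventory exactly as logged for provider 6d00757a-082f-486d-ae84-869a2ba2e6e7
# (min_unit/max_unit/step_size omitted; they don't affect total capacity).
inventory = {
    "MEMORY_MB": {"total": 7680, "reserved": 512, "allocation_ratio": 1.0},
    "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
    "DISK_GB":   {"total": 20,   "reserved": 1,   "allocation_ratio": 0.9},
}

def capacity(inv: dict) -> dict:
    """Effective schedulable capacity per resource class, using Placement's
    formula: (total - reserved) * allocation_ratio."""
    return {rc: (v["total"] - v["reserved"]) * v["allocation_ratio"]
            for rc, v in inv.items()}

caps = capacity(inventory)
# e.g. VCPU: (8 - 0) * 4.0 = 32 schedulable vCPUs
```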
Dec  6 01:58:06 np0005548731 nova_compute[232433]: 2025-12-06 06:58:06.748 232437 DEBUG oslo_concurrency.lockutils [None req-0901ec50-8738-4a25-a816-f698070dc00b 4e358e8cf5374d0b818b9d1de2f01848 bccff877c0f34760846113a84f5b0709 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.697s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  6 01:58:06 np0005548731 nova_compute[232433]: 2025-12-06 06:58:06.749 232437 DEBUG nova.compute.manager [None req-0901ec50-8738-4a25-a816-f698070dc00b 4e358e8cf5374d0b818b9d1de2f01848 bccff877c0f34760846113a84f5b0709 - - default default] [instance: 1dd278c9-e3ac-480b-9a02-2e580da2d211] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec  6 01:58:06 np0005548731 nova_compute[232433]: 2025-12-06 06:58:06.846 232437 DEBUG nova.compute.manager [None req-0901ec50-8738-4a25-a816-f698070dc00b 4e358e8cf5374d0b818b9d1de2f01848 bccff877c0f34760846113a84f5b0709 - - default default] [instance: 1dd278c9-e3ac-480b-9a02-2e580da2d211] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec  6 01:58:06 np0005548731 nova_compute[232433]: 2025-12-06 06:58:06.847 232437 DEBUG nova.network.neutron [None req-0901ec50-8738-4a25-a816-f698070dc00b 4e358e8cf5374d0b818b9d1de2f01848 bccff877c0f34760846113a84f5b0709 - - default default] [instance: 1dd278c9-e3ac-480b-9a02-2e580da2d211] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec  6 01:58:06 np0005548731 nova_compute[232433]: 2025-12-06 06:58:06.870 232437 INFO nova.virt.libvirt.driver [None req-0901ec50-8738-4a25-a816-f698070dc00b 4e358e8cf5374d0b818b9d1de2f01848 bccff877c0f34760846113a84f5b0709 - - default default] [instance: 1dd278c9-e3ac-480b-9a02-2e580da2d211] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec  6 01:58:06 np0005548731 nova_compute[232433]: 2025-12-06 06:58:06.888 232437 DEBUG nova.compute.manager [None req-0901ec50-8738-4a25-a816-f698070dc00b 4e358e8cf5374d0b818b9d1de2f01848 bccff877c0f34760846113a84f5b0709 - - default default] [instance: 1dd278c9-e3ac-480b-9a02-2e580da2d211] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec  6 01:58:06 np0005548731 nova_compute[232433]: 2025-12-06 06:58:06.992 232437 DEBUG nova.compute.manager [None req-0901ec50-8738-4a25-a816-f698070dc00b 4e358e8cf5374d0b818b9d1de2f01848 bccff877c0f34760846113a84f5b0709 - - default default] [instance: 1dd278c9-e3ac-480b-9a02-2e580da2d211] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec  6 01:58:06 np0005548731 nova_compute[232433]: 2025-12-06 06:58:06.994 232437 DEBUG nova.virt.libvirt.driver [None req-0901ec50-8738-4a25-a816-f698070dc00b 4e358e8cf5374d0b818b9d1de2f01848 bccff877c0f34760846113a84f5b0709 - - default default] [instance: 1dd278c9-e3ac-480b-9a02-2e580da2d211] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec  6 01:58:06 np0005548731 nova_compute[232433]: 2025-12-06 06:58:06.994 232437 INFO nova.virt.libvirt.driver [None req-0901ec50-8738-4a25-a816-f698070dc00b 4e358e8cf5374d0b818b9d1de2f01848 bccff877c0f34760846113a84f5b0709 - - default default] [instance: 1dd278c9-e3ac-480b-9a02-2e580da2d211] Creating image(s)
Dec  6 01:58:07 np0005548731 nova_compute[232433]: 2025-12-06 06:58:07.027 232437 DEBUG nova.storage.rbd_utils [None req-0901ec50-8738-4a25-a816-f698070dc00b 4e358e8cf5374d0b818b9d1de2f01848 bccff877c0f34760846113a84f5b0709 - - default default] rbd image 1dd278c9-e3ac-480b-9a02-2e580da2d211_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec  6 01:58:07 np0005548731 nova_compute[232433]: 2025-12-06 06:58:07.065 232437 DEBUG nova.storage.rbd_utils [None req-0901ec50-8738-4a25-a816-f698070dc00b 4e358e8cf5374d0b818b9d1de2f01848 bccff877c0f34760846113a84f5b0709 - - default default] rbd image 1dd278c9-e3ac-480b-9a02-2e580da2d211_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec  6 01:58:07 np0005548731 nova_compute[232433]: 2025-12-06 06:58:07.099 232437 DEBUG nova.storage.rbd_utils [None req-0901ec50-8738-4a25-a816-f698070dc00b 4e358e8cf5374d0b818b9d1de2f01848 bccff877c0f34760846113a84f5b0709 - - default default] rbd image 1dd278c9-e3ac-480b-9a02-2e580da2d211_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 01:58:07 np0005548731 nova_compute[232433]: 2025-12-06 06:58:07.103 232437 DEBUG oslo_concurrency.processutils [None req-0901ec50-8738-4a25-a816-f698070dc00b 4e358e8cf5374d0b818b9d1de2f01848 bccff877c0f34760846113a84f5b0709 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 01:58:07 np0005548731 nova_compute[232433]: 2025-12-06 06:58:07.158 232437 WARNING oslo_policy.policy [None req-0901ec50-8738-4a25-a816-f698070dc00b 4e358e8cf5374d0b818b9d1de2f01848 bccff877c0f34760846113a84f5b0709 - - default default] JSON formatted policy_file support is deprecated since Victoria release. You need to use YAML format which will be default in future. You can use ``oslopolicy-convert-json-to-yaml`` tool to convert existing JSON-formatted policy file to YAML-formatted in backward compatible way: https://docs.openstack.org/oslo.policy/latest/cli/oslopolicy-convert-json-to-yaml.html.#033[00m
Dec  6 01:58:07 np0005548731 nova_compute[232433]: 2025-12-06 06:58:07.159 232437 WARNING oslo_policy.policy [None req-0901ec50-8738-4a25-a816-f698070dc00b 4e358e8cf5374d0b818b9d1de2f01848 bccff877c0f34760846113a84f5b0709 - - default default] JSON formatted policy_file support is deprecated since Victoria release. You need to use YAML format which will be default in future. You can use ``oslopolicy-convert-json-to-yaml`` tool to convert existing JSON-formatted policy file to YAML-formatted in backward compatible way: https://docs.openstack.org/oslo.policy/latest/cli/oslopolicy-convert-json-to-yaml.html.#033[00m
Dec  6 01:58:07 np0005548731 nova_compute[232433]: 2025-12-06 06:58:07.161 232437 DEBUG nova.policy [None req-0901ec50-8738-4a25-a816-f698070dc00b 4e358e8cf5374d0b818b9d1de2f01848 bccff877c0f34760846113a84f5b0709 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '4e358e8cf5374d0b818b9d1de2f01848', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'bccff877c0f34760846113a84f5b0709', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Dec  6 01:58:07 np0005548731 nova_compute[232433]: 2025-12-06 06:58:07.165 232437 DEBUG oslo_concurrency.processutils [None req-0901ec50-8738-4a25-a816-f698070dc00b 4e358e8cf5374d0b818b9d1de2f01848 bccff877c0f34760846113a84f5b0709 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 01:58:07 np0005548731 nova_compute[232433]: 2025-12-06 06:58:07.166 232437 DEBUG oslo_concurrency.lockutils [None req-0901ec50-8738-4a25-a816-f698070dc00b 4e358e8cf5374d0b818b9d1de2f01848 bccff877c0f34760846113a84f5b0709 - - default default] Acquiring lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 01:58:07 np0005548731 nova_compute[232433]: 2025-12-06 06:58:07.167 232437 DEBUG oslo_concurrency.lockutils [None req-0901ec50-8738-4a25-a816-f698070dc00b 4e358e8cf5374d0b818b9d1de2f01848 bccff877c0f34760846113a84f5b0709 - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 01:58:07 np0005548731 nova_compute[232433]: 2025-12-06 06:58:07.167 232437 DEBUG oslo_concurrency.lockutils [None req-0901ec50-8738-4a25-a816-f698070dc00b 4e358e8cf5374d0b818b9d1de2f01848 bccff877c0f34760846113a84f5b0709 - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 01:58:07 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:58:07 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:58:07 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:58:07.168 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:58:07 np0005548731 nova_compute[232433]: 2025-12-06 06:58:07.197 232437 DEBUG nova.storage.rbd_utils [None req-0901ec50-8738-4a25-a816-f698070dc00b 4e358e8cf5374d0b818b9d1de2f01848 bccff877c0f34760846113a84f5b0709 - - default default] rbd image 1dd278c9-e3ac-480b-9a02-2e580da2d211_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 01:58:07 np0005548731 nova_compute[232433]: 2025-12-06 06:58:07.203 232437 DEBUG oslo_concurrency.processutils [None req-0901ec50-8738-4a25-a816-f698070dc00b 4e358e8cf5374d0b818b9d1de2f01848 bccff877c0f34760846113a84f5b0709 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef 1dd278c9-e3ac-480b-9a02-2e580da2d211_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 01:58:07 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:58:07 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000011s ======
Dec  6 01:58:07 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:58:07.335 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Dec  6 01:58:07 np0005548731 nova_compute[232433]: 2025-12-06 06:58:07.530 232437 DEBUG oslo_concurrency.processutils [None req-0901ec50-8738-4a25-a816-f698070dc00b 4e358e8cf5374d0b818b9d1de2f01848 bccff877c0f34760846113a84f5b0709 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef 1dd278c9-e3ac-480b-9a02-2e580da2d211_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.327s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 01:58:07 np0005548731 nova_compute[232433]: 2025-12-06 06:58:07.607 232437 DEBUG nova.storage.rbd_utils [None req-0901ec50-8738-4a25-a816-f698070dc00b 4e358e8cf5374d0b818b9d1de2f01848 bccff877c0f34760846113a84f5b0709 - - default default] resizing rbd image 1dd278c9-e3ac-480b-9a02-2e580da2d211_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Dec  6 01:58:07 np0005548731 nova_compute[232433]: 2025-12-06 06:58:07.731 232437 DEBUG nova.objects.instance [None req-0901ec50-8738-4a25-a816-f698070dc00b 4e358e8cf5374d0b818b9d1de2f01848 bccff877c0f34760846113a84f5b0709 - - default default] Lazy-loading 'migration_context' on Instance uuid 1dd278c9-e3ac-480b-9a02-2e580da2d211 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 01:58:07 np0005548731 nova_compute[232433]: 2025-12-06 06:58:07.755 232437 DEBUG nova.virt.libvirt.driver [None req-0901ec50-8738-4a25-a816-f698070dc00b 4e358e8cf5374d0b818b9d1de2f01848 bccff877c0f34760846113a84f5b0709 - - default default] [instance: 1dd278c9-e3ac-480b-9a02-2e580da2d211] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Dec  6 01:58:07 np0005548731 nova_compute[232433]: 2025-12-06 06:58:07.756 232437 DEBUG nova.virt.libvirt.driver [None req-0901ec50-8738-4a25-a816-f698070dc00b 4e358e8cf5374d0b818b9d1de2f01848 bccff877c0f34760846113a84f5b0709 - - default default] [instance: 1dd278c9-e3ac-480b-9a02-2e580da2d211] Ensure instance console log exists: /var/lib/nova/instances/1dd278c9-e3ac-480b-9a02-2e580da2d211/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec  6 01:58:07 np0005548731 nova_compute[232433]: 2025-12-06 06:58:07.757 232437 DEBUG oslo_concurrency.lockutils [None req-0901ec50-8738-4a25-a816-f698070dc00b 4e358e8cf5374d0b818b9d1de2f01848 bccff877c0f34760846113a84f5b0709 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 01:58:07 np0005548731 nova_compute[232433]: 2025-12-06 06:58:07.757 232437 DEBUG oslo_concurrency.lockutils [None req-0901ec50-8738-4a25-a816-f698070dc00b 4e358e8cf5374d0b818b9d1de2f01848 bccff877c0f34760846113a84f5b0709 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 01:58:07 np0005548731 nova_compute[232433]: 2025-12-06 06:58:07.757 232437 DEBUG oslo_concurrency.lockutils [None req-0901ec50-8738-4a25-a816-f698070dc00b 4e358e8cf5374d0b818b9d1de2f01848 bccff877c0f34760846113a84f5b0709 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 01:58:08 np0005548731 nova_compute[232433]: 2025-12-06 06:58:08.678 232437 DEBUG nova.network.neutron [None req-0901ec50-8738-4a25-a816-f698070dc00b 4e358e8cf5374d0b818b9d1de2f01848 bccff877c0f34760846113a84f5b0709 - - default default] [instance: 1dd278c9-e3ac-480b-9a02-2e580da2d211] Successfully created port: 1c43f5f0-b72d-49f6-9dc5-10367318c26c _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Dec  6 01:58:09 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:58:09 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:58:09 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:58:09.171 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:58:09 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:58:09 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000011s ======
Dec  6 01:58:09 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:58:09.337 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Dec  6 01:58:09 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 01:58:11 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:58:11 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:58:11 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:58:11.173 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:58:11 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:58:11 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000012s ======
Dec  6 01:58:11 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:58:11.339 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000012s
Dec  6 01:58:12 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:58:12.215 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=4, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:ec:b3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '32:72:e7:89:e0:7d'}, ipsec=False) old=SB_Global(nb_cfg=3) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 01:58:12 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:58:12.216 143965 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec  6 01:58:12 np0005548731 nova_compute[232433]: 2025-12-06 06:58:12.283 232437 DEBUG nova.network.neutron [None req-0901ec50-8738-4a25-a816-f698070dc00b 4e358e8cf5374d0b818b9d1de2f01848 bccff877c0f34760846113a84f5b0709 - - default default] [instance: 1dd278c9-e3ac-480b-9a02-2e580da2d211] Successfully updated port: 1c43f5f0-b72d-49f6-9dc5-10367318c26c _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Dec  6 01:58:12 np0005548731 nova_compute[232433]: 2025-12-06 06:58:12.313 232437 DEBUG oslo_concurrency.lockutils [None req-0901ec50-8738-4a25-a816-f698070dc00b 4e358e8cf5374d0b818b9d1de2f01848 bccff877c0f34760846113a84f5b0709 - - default default] Acquiring lock "refresh_cache-1dd278c9-e3ac-480b-9a02-2e580da2d211" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 01:58:12 np0005548731 nova_compute[232433]: 2025-12-06 06:58:12.313 232437 DEBUG oslo_concurrency.lockutils [None req-0901ec50-8738-4a25-a816-f698070dc00b 4e358e8cf5374d0b818b9d1de2f01848 bccff877c0f34760846113a84f5b0709 - - default default] Acquired lock "refresh_cache-1dd278c9-e3ac-480b-9a02-2e580da2d211" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 01:58:12 np0005548731 nova_compute[232433]: 2025-12-06 06:58:12.314 232437 DEBUG nova.network.neutron [None req-0901ec50-8738-4a25-a816-f698070dc00b 4e358e8cf5374d0b818b9d1de2f01848 bccff877c0f34760846113a84f5b0709 - - default default] [instance: 1dd278c9-e3ac-480b-9a02-2e580da2d211] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec  6 01:58:12 np0005548731 nova_compute[232433]: 2025-12-06 06:58:12.546 232437 DEBUG nova.network.neutron [None req-0901ec50-8738-4a25-a816-f698070dc00b 4e358e8cf5374d0b818b9d1de2f01848 bccff877c0f34760846113a84f5b0709 - - default default] [instance: 1dd278c9-e3ac-480b-9a02-2e580da2d211] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec  6 01:58:12 np0005548731 nova_compute[232433]: 2025-12-06 06:58:12.892 232437 DEBUG nova.compute.manager [req-e02926c8-d287-46e7-920c-338aa6a4c41f req-6707bebf-64a3-4b49-9d82-cd344ce0e1c8 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 1dd278c9-e3ac-480b-9a02-2e580da2d211] Received event network-changed-1c43f5f0-b72d-49f6-9dc5-10367318c26c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 01:58:12 np0005548731 nova_compute[232433]: 2025-12-06 06:58:12.893 232437 DEBUG nova.compute.manager [req-e02926c8-d287-46e7-920c-338aa6a4c41f req-6707bebf-64a3-4b49-9d82-cd344ce0e1c8 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 1dd278c9-e3ac-480b-9a02-2e580da2d211] Refreshing instance network info cache due to event network-changed-1c43f5f0-b72d-49f6-9dc5-10367318c26c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  6 01:58:12 np0005548731 nova_compute[232433]: 2025-12-06 06:58:12.893 232437 DEBUG oslo_concurrency.lockutils [req-e02926c8-d287-46e7-920c-338aa6a4c41f req-6707bebf-64a3-4b49-9d82-cd344ce0e1c8 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-1dd278c9-e3ac-480b-9a02-2e580da2d211" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 01:58:13 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:58:13 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000011s ======
Dec  6 01:58:13 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:58:13.176 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Dec  6 01:58:13 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:58:13 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000011s ======
Dec  6 01:58:13 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:58:13.341 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Dec  6 01:58:13 np0005548731 nova_compute[232433]: 2025-12-06 06:58:13.834 232437 DEBUG nova.network.neutron [None req-0901ec50-8738-4a25-a816-f698070dc00b 4e358e8cf5374d0b818b9d1de2f01848 bccff877c0f34760846113a84f5b0709 - - default default] [instance: 1dd278c9-e3ac-480b-9a02-2e580da2d211] Updating instance_info_cache with network_info: [{"id": "1c43f5f0-b72d-49f6-9dc5-10367318c26c", "address": "fa:16:3e:20:e0:e2", "network": {"id": "26dc03be-378a-4751-a903-cbe72fe09f14", "bridge": "br-int", "label": "tempest-VolumesAssistedSnapshotsTest-1268666002-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bccff877c0f34760846113a84f5b0709", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1c43f5f0-b7", "ovs_interfaceid": "1c43f5f0-b72d-49f6-9dc5-10367318c26c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 01:58:13 np0005548731 nova_compute[232433]: 2025-12-06 06:58:13.872 232437 DEBUG oslo_concurrency.lockutils [None req-0901ec50-8738-4a25-a816-f698070dc00b 4e358e8cf5374d0b818b9d1de2f01848 bccff877c0f34760846113a84f5b0709 - - default default] Releasing lock "refresh_cache-1dd278c9-e3ac-480b-9a02-2e580da2d211" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 01:58:13 np0005548731 nova_compute[232433]: 2025-12-06 06:58:13.873 232437 DEBUG nova.compute.manager [None req-0901ec50-8738-4a25-a816-f698070dc00b 4e358e8cf5374d0b818b9d1de2f01848 bccff877c0f34760846113a84f5b0709 - - default default] [instance: 1dd278c9-e3ac-480b-9a02-2e580da2d211] Instance network_info: |[{"id": "1c43f5f0-b72d-49f6-9dc5-10367318c26c", "address": "fa:16:3e:20:e0:e2", "network": {"id": "26dc03be-378a-4751-a903-cbe72fe09f14", "bridge": "br-int", "label": "tempest-VolumesAssistedSnapshotsTest-1268666002-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bccff877c0f34760846113a84f5b0709", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1c43f5f0-b7", "ovs_interfaceid": "1c43f5f0-b72d-49f6-9dc5-10367318c26c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Dec  6 01:58:13 np0005548731 nova_compute[232433]: 2025-12-06 06:58:13.873 232437 DEBUG oslo_concurrency.lockutils [req-e02926c8-d287-46e7-920c-338aa6a4c41f req-6707bebf-64a3-4b49-9d82-cd344ce0e1c8 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-1dd278c9-e3ac-480b-9a02-2e580da2d211" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 01:58:13 np0005548731 nova_compute[232433]: 2025-12-06 06:58:13.874 232437 DEBUG nova.network.neutron [req-e02926c8-d287-46e7-920c-338aa6a4c41f req-6707bebf-64a3-4b49-9d82-cd344ce0e1c8 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 1dd278c9-e3ac-480b-9a02-2e580da2d211] Refreshing network info cache for port 1c43f5f0-b72d-49f6-9dc5-10367318c26c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  6 01:58:13 np0005548731 nova_compute[232433]: 2025-12-06 06:58:13.877 232437 DEBUG nova.virt.libvirt.driver [None req-0901ec50-8738-4a25-a816-f698070dc00b 4e358e8cf5374d0b818b9d1de2f01848 bccff877c0f34760846113a84f5b0709 - - default default] [instance: 1dd278c9-e3ac-480b-9a02-2e580da2d211] Start _get_guest_xml network_info=[{"id": "1c43f5f0-b72d-49f6-9dc5-10367318c26c", "address": "fa:16:3e:20:e0:e2", "network": {"id": "26dc03be-378a-4751-a903-cbe72fe09f14", "bridge": "br-int", "label": "tempest-VolumesAssistedSnapshotsTest-1268666002-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bccff877c0f34760846113a84f5b0709", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1c43f5f0-b7", "ovs_interfaceid": "1c43f5f0-b72d-49f6-9dc5-10367318c26c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'device_type': 'disk', 'encryption_secret_uuid': None, 'disk_bus': 'virtio', 'boot_index': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'encryption_options': None, 'size': 0, 'encrypted': False, 'image_id': '6efab05d-c7cf-4770-a5c3-c806a2739063'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec  6 01:58:13 np0005548731 nova_compute[232433]: 2025-12-06 06:58:13.883 232437 WARNING nova.virt.libvirt.driver [None req-0901ec50-8738-4a25-a816-f698070dc00b 4e358e8cf5374d0b818b9d1de2f01848 bccff877c0f34760846113a84f5b0709 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  6 01:58:13 np0005548731 nova_compute[232433]: 2025-12-06 06:58:13.891 232437 DEBUG nova.virt.libvirt.host [None req-0901ec50-8738-4a25-a816-f698070dc00b 4e358e8cf5374d0b818b9d1de2f01848 bccff877c0f34760846113a84f5b0709 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec  6 01:58:13 np0005548731 nova_compute[232433]: 2025-12-06 06:58:13.892 232437 DEBUG nova.virt.libvirt.host [None req-0901ec50-8738-4a25-a816-f698070dc00b 4e358e8cf5374d0b818b9d1de2f01848 bccff877c0f34760846113a84f5b0709 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec  6 01:58:13 np0005548731 nova_compute[232433]: 2025-12-06 06:58:13.904 232437 DEBUG nova.virt.libvirt.host [None req-0901ec50-8738-4a25-a816-f698070dc00b 4e358e8cf5374d0b818b9d1de2f01848 bccff877c0f34760846113a84f5b0709 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec  6 01:58:13 np0005548731 nova_compute[232433]: 2025-12-06 06:58:13.905 232437 DEBUG nova.virt.libvirt.host [None req-0901ec50-8738-4a25-a816-f698070dc00b 4e358e8cf5374d0b818b9d1de2f01848 bccff877c0f34760846113a84f5b0709 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec  6 01:58:13 np0005548731 nova_compute[232433]: 2025-12-06 06:58:13.907 232437 DEBUG nova.virt.libvirt.driver [None req-0901ec50-8738-4a25-a816-f698070dc00b 4e358e8cf5374d0b818b9d1de2f01848 bccff877c0f34760846113a84f5b0709 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec  6 01:58:13 np0005548731 nova_compute[232433]: 2025-12-06 06:58:13.907 232437 DEBUG nova.virt.hardware [None req-0901ec50-8738-4a25-a816-f698070dc00b 4e358e8cf5374d0b818b9d1de2f01848 bccff877c0f34760846113a84f5b0709 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-06T06:56:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='25848a18-11d9-4f11-80b5-5d005675c76d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec  6 01:58:13 np0005548731 nova_compute[232433]: 2025-12-06 06:58:13.908 232437 DEBUG nova.virt.hardware [None req-0901ec50-8738-4a25-a816-f698070dc00b 4e358e8cf5374d0b818b9d1de2f01848 bccff877c0f34760846113a84f5b0709 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec  6 01:58:13 np0005548731 nova_compute[232433]: 2025-12-06 06:58:13.908 232437 DEBUG nova.virt.hardware [None req-0901ec50-8738-4a25-a816-f698070dc00b 4e358e8cf5374d0b818b9d1de2f01848 bccff877c0f34760846113a84f5b0709 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec  6 01:58:13 np0005548731 nova_compute[232433]: 2025-12-06 06:58:13.908 232437 DEBUG nova.virt.hardware [None req-0901ec50-8738-4a25-a816-f698070dc00b 4e358e8cf5374d0b818b9d1de2f01848 bccff877c0f34760846113a84f5b0709 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec  6 01:58:13 np0005548731 nova_compute[232433]: 2025-12-06 06:58:13.908 232437 DEBUG nova.virt.hardware [None req-0901ec50-8738-4a25-a816-f698070dc00b 4e358e8cf5374d0b818b9d1de2f01848 bccff877c0f34760846113a84f5b0709 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec  6 01:58:13 np0005548731 nova_compute[232433]: 2025-12-06 06:58:13.909 232437 DEBUG nova.virt.hardware [None req-0901ec50-8738-4a25-a816-f698070dc00b 4e358e8cf5374d0b818b9d1de2f01848 bccff877c0f34760846113a84f5b0709 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec  6 01:58:13 np0005548731 nova_compute[232433]: 2025-12-06 06:58:13.909 232437 DEBUG nova.virt.hardware [None req-0901ec50-8738-4a25-a816-f698070dc00b 4e358e8cf5374d0b818b9d1de2f01848 bccff877c0f34760846113a84f5b0709 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec  6 01:58:13 np0005548731 nova_compute[232433]: 2025-12-06 06:58:13.909 232437 DEBUG nova.virt.hardware [None req-0901ec50-8738-4a25-a816-f698070dc00b 4e358e8cf5374d0b818b9d1de2f01848 bccff877c0f34760846113a84f5b0709 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec  6 01:58:13 np0005548731 nova_compute[232433]: 2025-12-06 06:58:13.909 232437 DEBUG nova.virt.hardware [None req-0901ec50-8738-4a25-a816-f698070dc00b 4e358e8cf5374d0b818b9d1de2f01848 bccff877c0f34760846113a84f5b0709 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec  6 01:58:13 np0005548731 nova_compute[232433]: 2025-12-06 06:58:13.910 232437 DEBUG nova.virt.hardware [None req-0901ec50-8738-4a25-a816-f698070dc00b 4e358e8cf5374d0b818b9d1de2f01848 bccff877c0f34760846113a84f5b0709 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec  6 01:58:13 np0005548731 nova_compute[232433]: 2025-12-06 06:58:13.910 232437 DEBUG nova.virt.hardware [None req-0901ec50-8738-4a25-a816-f698070dc00b 4e358e8cf5374d0b818b9d1de2f01848 bccff877c0f34760846113a84f5b0709 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec  6 01:58:13 np0005548731 nova_compute[232433]: 2025-12-06 06:58:13.914 232437 DEBUG oslo_concurrency.processutils [None req-0901ec50-8738-4a25-a816-f698070dc00b 4e358e8cf5374d0b818b9d1de2f01848 bccff877c0f34760846113a84f5b0709 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 01:58:14 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  6 01:58:14 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2324399139' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  6 01:58:14 np0005548731 nova_compute[232433]: 2025-12-06 06:58:14.385 232437 DEBUG oslo_concurrency.processutils [None req-0901ec50-8738-4a25-a816-f698070dc00b 4e358e8cf5374d0b818b9d1de2f01848 bccff877c0f34760846113a84f5b0709 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.471s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 01:58:14 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 01:58:14 np0005548731 nova_compute[232433]: 2025-12-06 06:58:14.416 232437 DEBUG nova.storage.rbd_utils [None req-0901ec50-8738-4a25-a816-f698070dc00b 4e358e8cf5374d0b818b9d1de2f01848 bccff877c0f34760846113a84f5b0709 - - default default] rbd image 1dd278c9-e3ac-480b-9a02-2e580da2d211_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 01:58:14 np0005548731 nova_compute[232433]: 2025-12-06 06:58:14.420 232437 DEBUG oslo_concurrency.processutils [None req-0901ec50-8738-4a25-a816-f698070dc00b 4e358e8cf5374d0b818b9d1de2f01848 bccff877c0f34760846113a84f5b0709 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 01:58:14 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  6 01:58:14 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/440116575' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  6 01:58:14 np0005548731 nova_compute[232433]: 2025-12-06 06:58:14.905 232437 DEBUG oslo_concurrency.processutils [None req-0901ec50-8738-4a25-a816-f698070dc00b 4e358e8cf5374d0b818b9d1de2f01848 bccff877c0f34760846113a84f5b0709 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.485s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 01:58:14 np0005548731 nova_compute[232433]: 2025-12-06 06:58:14.907 232437 DEBUG nova.virt.libvirt.vif [None req-0901ec50-8738-4a25-a816-f698070dc00b 4e358e8cf5374d0b818b9d1de2f01848 bccff877c0f34760846113a84f5b0709 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T06:58:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-VolumesAssistedSnapshotsTest-server-1565128301',display_name='tempest-VolumesAssistedSnapshotsTest-server-1565128301',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-volumesassistedsnapshotstest-server-1565128301',id=9,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKZen7TNm8dgJLJTr9dDp0A2wz2tSf4zkjaCr9LsbdXZ82SH9igdlnEuLY5ag1VrL+QQmGmWXHiV5ouzMHUdEKpcjVfelrNJZX+qQ6N1KkZQiNXyXDA1UmeqB9bVkSsJIA==',key_name='tempest-keypair-559281048',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='bccff877c0f34760846113a84f5b0709',ramdisk_id='',reservation_id='r-1pofku7h',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-VolumesAssistedSnapshotsTest-1680544419',owner_user_name='tempest-VolumesAssistedSnapshotsTest-1680544419-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T06:58:06Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='4e358e8cf5374d0b818b9d1de2f01848',uuid=1dd278c9-e3ac-480b-9a02-2e580da2d211,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1c43f5f0-b72d-49f6-9dc5-10367318c26c", "address": "fa:16:3e:20:e0:e2", "network": {"id": "26dc03be-378a-4751-a903-cbe72fe09f14", "bridge": "br-int", "label": "tempest-VolumesAssistedSnapshotsTest-1268666002-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bccff877c0f34760846113a84f5b0709", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1c43f5f0-b7", "ovs_interfaceid": "1c43f5f0-b72d-49f6-9dc5-10367318c26c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec  6 01:58:14 np0005548731 nova_compute[232433]: 2025-12-06 06:58:14.907 232437 DEBUG nova.network.os_vif_util [None req-0901ec50-8738-4a25-a816-f698070dc00b 4e358e8cf5374d0b818b9d1de2f01848 bccff877c0f34760846113a84f5b0709 - - default default] Converting VIF {"id": "1c43f5f0-b72d-49f6-9dc5-10367318c26c", "address": "fa:16:3e:20:e0:e2", "network": {"id": "26dc03be-378a-4751-a903-cbe72fe09f14", "bridge": "br-int", "label": "tempest-VolumesAssistedSnapshotsTest-1268666002-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bccff877c0f34760846113a84f5b0709", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1c43f5f0-b7", "ovs_interfaceid": "1c43f5f0-b72d-49f6-9dc5-10367318c26c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 01:58:14 np0005548731 nova_compute[232433]: 2025-12-06 06:58:14.909 232437 DEBUG nova.network.os_vif_util [None req-0901ec50-8738-4a25-a816-f698070dc00b 4e358e8cf5374d0b818b9d1de2f01848 bccff877c0f34760846113a84f5b0709 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:20:e0:e2,bridge_name='br-int',has_traffic_filtering=True,id=1c43f5f0-b72d-49f6-9dc5-10367318c26c,network=Network(26dc03be-378a-4751-a903-cbe72fe09f14),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1c43f5f0-b7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 01:58:14 np0005548731 nova_compute[232433]: 2025-12-06 06:58:14.911 232437 DEBUG nova.objects.instance [None req-0901ec50-8738-4a25-a816-f698070dc00b 4e358e8cf5374d0b818b9d1de2f01848 bccff877c0f34760846113a84f5b0709 - - default default] Lazy-loading 'pci_devices' on Instance uuid 1dd278c9-e3ac-480b-9a02-2e580da2d211 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 01:58:14 np0005548731 nova_compute[232433]: 2025-12-06 06:58:14.931 232437 DEBUG nova.virt.libvirt.driver [None req-0901ec50-8738-4a25-a816-f698070dc00b 4e358e8cf5374d0b818b9d1de2f01848 bccff877c0f34760846113a84f5b0709 - - default default] [instance: 1dd278c9-e3ac-480b-9a02-2e580da2d211] End _get_guest_xml xml=<domain type="kvm">
Dec  6 01:58:14 np0005548731 nova_compute[232433]:  <uuid>1dd278c9-e3ac-480b-9a02-2e580da2d211</uuid>
Dec  6 01:58:14 np0005548731 nova_compute[232433]:  <name>instance-00000009</name>
Dec  6 01:58:14 np0005548731 nova_compute[232433]:  <memory>131072</memory>
Dec  6 01:58:14 np0005548731 nova_compute[232433]:  <vcpu>1</vcpu>
Dec  6 01:58:14 np0005548731 nova_compute[232433]:  <metadata>
Dec  6 01:58:14 np0005548731 nova_compute[232433]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec  6 01:58:14 np0005548731 nova_compute[232433]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec  6 01:58:14 np0005548731 nova_compute[232433]:      <nova:name>tempest-VolumesAssistedSnapshotsTest-server-1565128301</nova:name>
Dec  6 01:58:14 np0005548731 nova_compute[232433]:      <nova:creationTime>2025-12-06 06:58:13</nova:creationTime>
Dec  6 01:58:14 np0005548731 nova_compute[232433]:      <nova:flavor name="m1.nano">
Dec  6 01:58:14 np0005548731 nova_compute[232433]:        <nova:memory>128</nova:memory>
Dec  6 01:58:14 np0005548731 nova_compute[232433]:        <nova:disk>1</nova:disk>
Dec  6 01:58:14 np0005548731 nova_compute[232433]:        <nova:swap>0</nova:swap>
Dec  6 01:58:14 np0005548731 nova_compute[232433]:        <nova:ephemeral>0</nova:ephemeral>
Dec  6 01:58:14 np0005548731 nova_compute[232433]:        <nova:vcpus>1</nova:vcpus>
Dec  6 01:58:14 np0005548731 nova_compute[232433]:      </nova:flavor>
Dec  6 01:58:14 np0005548731 nova_compute[232433]:      <nova:owner>
Dec  6 01:58:14 np0005548731 nova_compute[232433]:        <nova:user uuid="4e358e8cf5374d0b818b9d1de2f01848">tempest-VolumesAssistedSnapshotsTest-1680544419-project-member</nova:user>
Dec  6 01:58:14 np0005548731 nova_compute[232433]:        <nova:project uuid="bccff877c0f34760846113a84f5b0709">tempest-VolumesAssistedSnapshotsTest-1680544419</nova:project>
Dec  6 01:58:14 np0005548731 nova_compute[232433]:      </nova:owner>
Dec  6 01:58:14 np0005548731 nova_compute[232433]:      <nova:root type="image" uuid="6efab05d-c7cf-4770-a5c3-c806a2739063"/>
Dec  6 01:58:14 np0005548731 nova_compute[232433]:      <nova:ports>
Dec  6 01:58:14 np0005548731 nova_compute[232433]:        <nova:port uuid="1c43f5f0-b72d-49f6-9dc5-10367318c26c">
Dec  6 01:58:14 np0005548731 nova_compute[232433]:          <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Dec  6 01:58:14 np0005548731 nova_compute[232433]:        </nova:port>
Dec  6 01:58:14 np0005548731 nova_compute[232433]:      </nova:ports>
Dec  6 01:58:14 np0005548731 nova_compute[232433]:    </nova:instance>
Dec  6 01:58:14 np0005548731 nova_compute[232433]:  </metadata>
Dec  6 01:58:14 np0005548731 nova_compute[232433]:  <sysinfo type="smbios">
Dec  6 01:58:14 np0005548731 nova_compute[232433]:    <system>
Dec  6 01:58:14 np0005548731 nova_compute[232433]:      <entry name="manufacturer">RDO</entry>
Dec  6 01:58:14 np0005548731 nova_compute[232433]:      <entry name="product">OpenStack Compute</entry>
Dec  6 01:58:14 np0005548731 nova_compute[232433]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec  6 01:58:14 np0005548731 nova_compute[232433]:      <entry name="serial">1dd278c9-e3ac-480b-9a02-2e580da2d211</entry>
Dec  6 01:58:14 np0005548731 nova_compute[232433]:      <entry name="uuid">1dd278c9-e3ac-480b-9a02-2e580da2d211</entry>
Dec  6 01:58:14 np0005548731 nova_compute[232433]:      <entry name="family">Virtual Machine</entry>
Dec  6 01:58:14 np0005548731 nova_compute[232433]:    </system>
Dec  6 01:58:14 np0005548731 nova_compute[232433]:  </sysinfo>
Dec  6 01:58:14 np0005548731 nova_compute[232433]:  <os>
Dec  6 01:58:14 np0005548731 nova_compute[232433]:    <type arch="x86_64" machine="q35">hvm</type>
Dec  6 01:58:14 np0005548731 nova_compute[232433]:    <boot dev="hd"/>
Dec  6 01:58:14 np0005548731 nova_compute[232433]:    <smbios mode="sysinfo"/>
Dec  6 01:58:14 np0005548731 nova_compute[232433]:  </os>
Dec  6 01:58:14 np0005548731 nova_compute[232433]:  <features>
Dec  6 01:58:14 np0005548731 nova_compute[232433]:    <acpi/>
Dec  6 01:58:14 np0005548731 nova_compute[232433]:    <apic/>
Dec  6 01:58:14 np0005548731 nova_compute[232433]:    <vmcoreinfo/>
Dec  6 01:58:14 np0005548731 nova_compute[232433]:  </features>
Dec  6 01:58:14 np0005548731 nova_compute[232433]:  <clock offset="utc">
Dec  6 01:58:14 np0005548731 nova_compute[232433]:    <timer name="pit" tickpolicy="delay"/>
Dec  6 01:58:14 np0005548731 nova_compute[232433]:    <timer name="rtc" tickpolicy="catchup"/>
Dec  6 01:58:14 np0005548731 nova_compute[232433]:    <timer name="hpet" present="no"/>
Dec  6 01:58:14 np0005548731 nova_compute[232433]:  </clock>
Dec  6 01:58:14 np0005548731 nova_compute[232433]:  <cpu mode="custom" match="exact">
Dec  6 01:58:14 np0005548731 nova_compute[232433]:    <model>Nehalem</model>
Dec  6 01:58:14 np0005548731 nova_compute[232433]:    <topology sockets="1" cores="1" threads="1"/>
Dec  6 01:58:14 np0005548731 nova_compute[232433]:  </cpu>
Dec  6 01:58:14 np0005548731 nova_compute[232433]:  <devices>
Dec  6 01:58:14 np0005548731 nova_compute[232433]:    <disk type="network" device="disk">
Dec  6 01:58:14 np0005548731 nova_compute[232433]:      <driver type="raw" cache="none"/>
Dec  6 01:58:14 np0005548731 nova_compute[232433]:      <source protocol="rbd" name="vms/1dd278c9-e3ac-480b-9a02-2e580da2d211_disk">
Dec  6 01:58:14 np0005548731 nova_compute[232433]:        <host name="192.168.122.100" port="6789"/>
Dec  6 01:58:14 np0005548731 nova_compute[232433]:        <host name="192.168.122.102" port="6789"/>
Dec  6 01:58:14 np0005548731 nova_compute[232433]:        <host name="192.168.122.101" port="6789"/>
Dec  6 01:58:14 np0005548731 nova_compute[232433]:      </source>
Dec  6 01:58:14 np0005548731 nova_compute[232433]:      <auth username="openstack">
Dec  6 01:58:14 np0005548731 nova_compute[232433]:        <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec  6 01:58:14 np0005548731 nova_compute[232433]:      </auth>
Dec  6 01:58:14 np0005548731 nova_compute[232433]:      <target dev="vda" bus="virtio"/>
Dec  6 01:58:14 np0005548731 nova_compute[232433]:    </disk>
Dec  6 01:58:14 np0005548731 nova_compute[232433]:    <disk type="network" device="cdrom">
Dec  6 01:58:14 np0005548731 nova_compute[232433]:      <driver type="raw" cache="none"/>
Dec  6 01:58:14 np0005548731 nova_compute[232433]:      <source protocol="rbd" name="vms/1dd278c9-e3ac-480b-9a02-2e580da2d211_disk.config">
Dec  6 01:58:14 np0005548731 nova_compute[232433]:        <host name="192.168.122.100" port="6789"/>
Dec  6 01:58:14 np0005548731 nova_compute[232433]:        <host name="192.168.122.102" port="6789"/>
Dec  6 01:58:14 np0005548731 nova_compute[232433]:        <host name="192.168.122.101" port="6789"/>
Dec  6 01:58:14 np0005548731 nova_compute[232433]:      </source>
Dec  6 01:58:14 np0005548731 nova_compute[232433]:      <auth username="openstack">
Dec  6 01:58:14 np0005548731 nova_compute[232433]:        <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec  6 01:58:14 np0005548731 nova_compute[232433]:      </auth>
Dec  6 01:58:14 np0005548731 nova_compute[232433]:      <target dev="sda" bus="sata"/>
Dec  6 01:58:14 np0005548731 nova_compute[232433]:    </disk>
Dec  6 01:58:14 np0005548731 nova_compute[232433]:    <interface type="ethernet">
Dec  6 01:58:14 np0005548731 nova_compute[232433]:      <mac address="fa:16:3e:20:e0:e2"/>
Dec  6 01:58:14 np0005548731 nova_compute[232433]:      <model type="virtio"/>
Dec  6 01:58:14 np0005548731 nova_compute[232433]:      <driver name="vhost" rx_queue_size="512"/>
Dec  6 01:58:14 np0005548731 nova_compute[232433]:      <mtu size="1442"/>
Dec  6 01:58:14 np0005548731 nova_compute[232433]:      <target dev="tap1c43f5f0-b7"/>
Dec  6 01:58:14 np0005548731 nova_compute[232433]:    </interface>
Dec  6 01:58:14 np0005548731 nova_compute[232433]:    <serial type="pty">
Dec  6 01:58:14 np0005548731 nova_compute[232433]:      <log file="/var/lib/nova/instances/1dd278c9-e3ac-480b-9a02-2e580da2d211/console.log" append="off"/>
Dec  6 01:58:14 np0005548731 nova_compute[232433]:    </serial>
Dec  6 01:58:14 np0005548731 nova_compute[232433]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec  6 01:58:14 np0005548731 nova_compute[232433]:    <video>
Dec  6 01:58:14 np0005548731 nova_compute[232433]:      <model type="virtio"/>
Dec  6 01:58:14 np0005548731 nova_compute[232433]:    </video>
Dec  6 01:58:14 np0005548731 nova_compute[232433]:    <input type="tablet" bus="usb"/>
Dec  6 01:58:14 np0005548731 nova_compute[232433]:    <rng model="virtio">
Dec  6 01:58:14 np0005548731 nova_compute[232433]:      <backend model="random">/dev/urandom</backend>
Dec  6 01:58:14 np0005548731 nova_compute[232433]:    </rng>
Dec  6 01:58:14 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root"/>
Dec  6 01:58:14 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 01:58:14 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 01:58:14 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 01:58:14 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 01:58:14 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 01:58:14 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 01:58:14 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 01:58:14 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 01:58:14 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 01:58:14 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 01:58:14 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 01:58:14 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 01:58:14 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 01:58:14 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 01:58:14 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 01:58:14 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 01:58:14 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 01:58:14 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 01:58:14 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 01:58:14 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 01:58:14 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 01:58:14 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 01:58:14 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 01:58:14 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 01:58:14 np0005548731 nova_compute[232433]:    <controller type="usb" index="0"/>
Dec  6 01:58:14 np0005548731 nova_compute[232433]:    <memballoon model="virtio">
Dec  6 01:58:14 np0005548731 nova_compute[232433]:      <stats period="10"/>
Dec  6 01:58:14 np0005548731 nova_compute[232433]:    </memballoon>
Dec  6 01:58:14 np0005548731 nova_compute[232433]:  </devices>
Dec  6 01:58:14 np0005548731 nova_compute[232433]: </domain>
Dec  6 01:58:14 np0005548731 nova_compute[232433]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec  6 01:58:14 np0005548731 nova_compute[232433]: 2025-12-06 06:58:14.933 232437 DEBUG nova.compute.manager [None req-0901ec50-8738-4a25-a816-f698070dc00b 4e358e8cf5374d0b818b9d1de2f01848 bccff877c0f34760846113a84f5b0709 - - default default] [instance: 1dd278c9-e3ac-480b-9a02-2e580da2d211] Preparing to wait for external event network-vif-plugged-1c43f5f0-b72d-49f6-9dc5-10367318c26c prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec  6 01:58:14 np0005548731 nova_compute[232433]: 2025-12-06 06:58:14.933 232437 DEBUG oslo_concurrency.lockutils [None req-0901ec50-8738-4a25-a816-f698070dc00b 4e358e8cf5374d0b818b9d1de2f01848 bccff877c0f34760846113a84f5b0709 - - default default] Acquiring lock "1dd278c9-e3ac-480b-9a02-2e580da2d211-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 01:58:14 np0005548731 nova_compute[232433]: 2025-12-06 06:58:14.934 232437 DEBUG oslo_concurrency.lockutils [None req-0901ec50-8738-4a25-a816-f698070dc00b 4e358e8cf5374d0b818b9d1de2f01848 bccff877c0f34760846113a84f5b0709 - - default default] Lock "1dd278c9-e3ac-480b-9a02-2e580da2d211-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 01:58:14 np0005548731 nova_compute[232433]: 2025-12-06 06:58:14.934 232437 DEBUG oslo_concurrency.lockutils [None req-0901ec50-8738-4a25-a816-f698070dc00b 4e358e8cf5374d0b818b9d1de2f01848 bccff877c0f34760846113a84f5b0709 - - default default] Lock "1dd278c9-e3ac-480b-9a02-2e580da2d211-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 01:58:14 np0005548731 nova_compute[232433]: 2025-12-06 06:58:14.935 232437 DEBUG nova.virt.libvirt.vif [None req-0901ec50-8738-4a25-a816-f698070dc00b 4e358e8cf5374d0b818b9d1de2f01848 bccff877c0f34760846113a84f5b0709 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T06:58:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-VolumesAssistedSnapshotsTest-server-1565128301',display_name='tempest-VolumesAssistedSnapshotsTest-server-1565128301',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-volumesassistedsnapshotstest-server-1565128301',id=9,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKZen7TNm8dgJLJTr9dDp0A2wz2tSf4zkjaCr9LsbdXZ82SH9igdlnEuLY5ag1VrL+QQmGmWXHiV5ouzMHUdEKpcjVfelrNJZX+qQ6N1KkZQiNXyXDA1UmeqB9bVkSsJIA==',key_name='tempest-keypair-559281048',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='bccff877c0f34760846113a84f5b0709',ramdisk_id='',reservation_id='r-1pofku7h',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-VolumesAssistedSnapshotsTest-1680544419',owner_user_name='tempest-VolumesAssistedSnapshotsTest-1680544419-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T06:58:06Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='4e358e8cf5374d0b818b9d1de2f01848',uuid=1dd278c9-e3ac-480b-9a02-2e580da2d211,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1c43f5f0-b72d-49f6-9dc5-10367318c26c", "address": "fa:16:3e:20:e0:e2", "network": {"id": "26dc03be-378a-4751-a903-cbe72fe09f14", "bridge": "br-int", "label": "tempest-VolumesAssistedSnapshotsTest-1268666002-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": 
[{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bccff877c0f34760846113a84f5b0709", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1c43f5f0-b7", "ovs_interfaceid": "1c43f5f0-b72d-49f6-9dc5-10367318c26c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec  6 01:58:14 np0005548731 nova_compute[232433]: 2025-12-06 06:58:14.935 232437 DEBUG nova.network.os_vif_util [None req-0901ec50-8738-4a25-a816-f698070dc00b 4e358e8cf5374d0b818b9d1de2f01848 bccff877c0f34760846113a84f5b0709 - - default default] Converting VIF {"id": "1c43f5f0-b72d-49f6-9dc5-10367318c26c", "address": "fa:16:3e:20:e0:e2", "network": {"id": "26dc03be-378a-4751-a903-cbe72fe09f14", "bridge": "br-int", "label": "tempest-VolumesAssistedSnapshotsTest-1268666002-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bccff877c0f34760846113a84f5b0709", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1c43f5f0-b7", "ovs_interfaceid": "1c43f5f0-b72d-49f6-9dc5-10367318c26c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 01:58:14 np0005548731 nova_compute[232433]: 2025-12-06 06:58:14.935 232437 DEBUG nova.network.os_vif_util [None req-0901ec50-8738-4a25-a816-f698070dc00b 4e358e8cf5374d0b818b9d1de2f01848 bccff877c0f34760846113a84f5b0709 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:20:e0:e2,bridge_name='br-int',has_traffic_filtering=True,id=1c43f5f0-b72d-49f6-9dc5-10367318c26c,network=Network(26dc03be-378a-4751-a903-cbe72fe09f14),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1c43f5f0-b7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 01:58:14 np0005548731 nova_compute[232433]: 2025-12-06 06:58:14.936 232437 DEBUG os_vif [None req-0901ec50-8738-4a25-a816-f698070dc00b 4e358e8cf5374d0b818b9d1de2f01848 bccff877c0f34760846113a84f5b0709 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:20:e0:e2,bridge_name='br-int',has_traffic_filtering=True,id=1c43f5f0-b72d-49f6-9dc5-10367318c26c,network=Network(26dc03be-378a-4751-a903-cbe72fe09f14),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1c43f5f0-b7') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec  6 01:58:15 np0005548731 nova_compute[232433]: 2025-12-06 06:58:15.016 232437 DEBUG ovsdbapp.backend.ovs_idl [None req-0901ec50-8738-4a25-a816-f698070dc00b 4e358e8cf5374d0b818b9d1de2f01848 bccff877c0f34760846113a84f5b0709 - - default default] Created schema index Interface.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Dec  6 01:58:15 np0005548731 nova_compute[232433]: 2025-12-06 06:58:15.016 232437 DEBUG ovsdbapp.backend.ovs_idl [None req-0901ec50-8738-4a25-a816-f698070dc00b 4e358e8cf5374d0b818b9d1de2f01848 bccff877c0f34760846113a84f5b0709 - - default default] Created schema index Port.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Dec  6 01:58:15 np0005548731 nova_compute[232433]: 2025-12-06 06:58:15.017 232437 DEBUG ovsdbapp.backend.ovs_idl [None req-0901ec50-8738-4a25-a816-f698070dc00b 4e358e8cf5374d0b818b9d1de2f01848 bccff877c0f34760846113a84f5b0709 - - default default] Created schema index Bridge.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Dec  6 01:58:15 np0005548731 nova_compute[232433]: 2025-12-06 06:58:15.017 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-0901ec50-8738-4a25-a816-f698070dc00b 4e358e8cf5374d0b818b9d1de2f01848 bccff877c0f34760846113a84f5b0709 - - default default] tcp:127.0.0.1:6640: entering CONNECTING _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Dec  6 01:58:15 np0005548731 nova_compute[232433]: 2025-12-06 06:58:15.018 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-0901ec50-8738-4a25-a816-f698070dc00b 4e358e8cf5374d0b818b9d1de2f01848 bccff877c0f34760846113a84f5b0709 - - default default] [POLLOUT] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 01:58:15 np0005548731 nova_compute[232433]: 2025-12-06 06:58:15.019 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-0901ec50-8738-4a25-a816-f698070dc00b 4e358e8cf5374d0b818b9d1de2f01848 bccff877c0f34760846113a84f5b0709 - - default default] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Dec  6 01:58:15 np0005548731 nova_compute[232433]: 2025-12-06 06:58:15.019 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-0901ec50-8738-4a25-a816-f698070dc00b 4e358e8cf5374d0b818b9d1de2f01848 bccff877c0f34760846113a84f5b0709 - - default default] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 01:58:15 np0005548731 nova_compute[232433]: 2025-12-06 06:58:15.021 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-0901ec50-8738-4a25-a816-f698070dc00b 4e358e8cf5374d0b818b9d1de2f01848 bccff877c0f34760846113a84f5b0709 - - default default] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 01:58:15 np0005548731 nova_compute[232433]: 2025-12-06 06:58:15.023 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-0901ec50-8738-4a25-a816-f698070dc00b 4e358e8cf5374d0b818b9d1de2f01848 bccff877c0f34760846113a84f5b0709 - - default default] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 01:58:15 np0005548731 nova_compute[232433]: 2025-12-06 06:58:15.033 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 01:58:15 np0005548731 nova_compute[232433]: 2025-12-06 06:58:15.034 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 01:58:15 np0005548731 nova_compute[232433]: 2025-12-06 06:58:15.034 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  6 01:58:15 np0005548731 nova_compute[232433]: 2025-12-06 06:58:15.035 232437 INFO oslo.privsep.daemon [None req-0901ec50-8738-4a25-a816-f698070dc00b 4e358e8cf5374d0b818b9d1de2f01848 bccff877c0f34760846113a84f5b0709 - - default default] Running privsep helper: ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/nova/nova.conf', '--config-file', '/etc/nova/nova-compute.conf', '--config-dir', '/etc/nova/nova.conf.d', '--privsep_context', 'vif_plug_ovs.privsep.vif_plug', '--privsep_sock_path', '/tmp/tmp3v7o7a6q/privsep.sock']#033[00m
Dec  6 01:58:15 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:58:15 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000011s ======
Dec  6 01:58:15 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:58:15.179 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Dec  6 01:58:15 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:58:15 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:58:15 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:58:15.343 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:58:15 np0005548731 nova_compute[232433]: 2025-12-06 06:58:15.584 232437 DEBUG nova.network.neutron [req-e02926c8-d287-46e7-920c-338aa6a4c41f req-6707bebf-64a3-4b49-9d82-cd344ce0e1c8 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 1dd278c9-e3ac-480b-9a02-2e580da2d211] Updated VIF entry in instance network info cache for port 1c43f5f0-b72d-49f6-9dc5-10367318c26c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  6 01:58:15 np0005548731 nova_compute[232433]: 2025-12-06 06:58:15.585 232437 DEBUG nova.network.neutron [req-e02926c8-d287-46e7-920c-338aa6a4c41f req-6707bebf-64a3-4b49-9d82-cd344ce0e1c8 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 1dd278c9-e3ac-480b-9a02-2e580da2d211] Updating instance_info_cache with network_info: [{"id": "1c43f5f0-b72d-49f6-9dc5-10367318c26c", "address": "fa:16:3e:20:e0:e2", "network": {"id": "26dc03be-378a-4751-a903-cbe72fe09f14", "bridge": "br-int", "label": "tempest-VolumesAssistedSnapshotsTest-1268666002-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bccff877c0f34760846113a84f5b0709", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1c43f5f0-b7", "ovs_interfaceid": "1c43f5f0-b72d-49f6-9dc5-10367318c26c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 01:58:15 np0005548731 nova_compute[232433]: 2025-12-06 06:58:15.605 232437 DEBUG oslo_concurrency.lockutils [req-e02926c8-d287-46e7-920c-338aa6a4c41f req-6707bebf-64a3-4b49-9d82-cd344ce0e1c8 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-1dd278c9-e3ac-480b-9a02-2e580da2d211" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 01:58:15 np0005548731 nova_compute[232433]: 2025-12-06 06:58:15.731 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 01:58:15 np0005548731 nova_compute[232433]: 2025-12-06 06:58:15.899 232437 INFO oslo.privsep.daemon [None req-0901ec50-8738-4a25-a816-f698070dc00b 4e358e8cf5374d0b818b9d1de2f01848 bccff877c0f34760846113a84f5b0709 - - default default] Spawned new privsep daemon via rootwrap#033[00m
Dec  6 01:58:15 np0005548731 nova_compute[232433]: 2025-12-06 06:58:15.741 236507 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m
Dec  6 01:58:15 np0005548731 nova_compute[232433]: 2025-12-06 06:58:15.747 236507 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m
Dec  6 01:58:15 np0005548731 nova_compute[232433]: 2025-12-06 06:58:15.749 236507 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_DAC_OVERRIDE|CAP_NET_ADMIN/CAP_DAC_OVERRIDE|CAP_NET_ADMIN/none#033[00m
Dec  6 01:58:15 np0005548731 nova_compute[232433]: 2025-12-06 06:58:15.749 236507 INFO oslo.privsep.daemon [-] privsep daemon running as pid 236507#033[00m
Dec  6 01:58:16 np0005548731 nova_compute[232433]: 2025-12-06 06:58:16.149 232437 DEBUG oslo_concurrency.lockutils [None req-293fb63c-a7bb-4b08-a6a6-7f32752d9b90 16a339a76ea148bc97bcb8e720ea8511 cc0441dee99e4b51ac652297c4221de3 - - default default] Acquiring lock "ccb415c8-1183-4921-bc8c-1c40722eeb98" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 01:58:16 np0005548731 nova_compute[232433]: 2025-12-06 06:58:16.149 232437 DEBUG oslo_concurrency.lockutils [None req-293fb63c-a7bb-4b08-a6a6-7f32752d9b90 16a339a76ea148bc97bcb8e720ea8511 cc0441dee99e4b51ac652297c4221de3 - - default default] Lock "ccb415c8-1183-4921-bc8c-1c40722eeb98" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 01:58:16 np0005548731 nova_compute[232433]: 2025-12-06 06:58:16.150 232437 DEBUG oslo_concurrency.lockutils [None req-293fb63c-a7bb-4b08-a6a6-7f32752d9b90 16a339a76ea148bc97bcb8e720ea8511 cc0441dee99e4b51ac652297c4221de3 - - default default] Acquiring lock "ccb415c8-1183-4921-bc8c-1c40722eeb98-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 01:58:16 np0005548731 nova_compute[232433]: 2025-12-06 06:58:16.150 232437 DEBUG oslo_concurrency.lockutils [None req-293fb63c-a7bb-4b08-a6a6-7f32752d9b90 16a339a76ea148bc97bcb8e720ea8511 cc0441dee99e4b51ac652297c4221de3 - - default default] Lock "ccb415c8-1183-4921-bc8c-1c40722eeb98-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 01:58:16 np0005548731 nova_compute[232433]: 2025-12-06 06:58:16.150 232437 DEBUG oslo_concurrency.lockutils [None req-293fb63c-a7bb-4b08-a6a6-7f32752d9b90 16a339a76ea148bc97bcb8e720ea8511 cc0441dee99e4b51ac652297c4221de3 - - default default] Lock "ccb415c8-1183-4921-bc8c-1c40722eeb98-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 01:58:16 np0005548731 nova_compute[232433]: 2025-12-06 06:58:16.152 232437 INFO nova.compute.manager [None req-293fb63c-a7bb-4b08-a6a6-7f32752d9b90 16a339a76ea148bc97bcb8e720ea8511 cc0441dee99e4b51ac652297c4221de3 - - default default] [instance: ccb415c8-1183-4921-bc8c-1c40722eeb98] Terminating instance#033[00m
Dec  6 01:58:16 np0005548731 nova_compute[232433]: 2025-12-06 06:58:16.153 232437 DEBUG oslo_concurrency.lockutils [None req-293fb63c-a7bb-4b08-a6a6-7f32752d9b90 16a339a76ea148bc97bcb8e720ea8511 cc0441dee99e4b51ac652297c4221de3 - - default default] Acquiring lock "refresh_cache-ccb415c8-1183-4921-bc8c-1c40722eeb98" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 01:58:16 np0005548731 nova_compute[232433]: 2025-12-06 06:58:16.153 232437 DEBUG oslo_concurrency.lockutils [None req-293fb63c-a7bb-4b08-a6a6-7f32752d9b90 16a339a76ea148bc97bcb8e720ea8511 cc0441dee99e4b51ac652297c4221de3 - - default default] Acquired lock "refresh_cache-ccb415c8-1183-4921-bc8c-1c40722eeb98" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 01:58:16 np0005548731 nova_compute[232433]: 2025-12-06 06:58:16.153 232437 DEBUG nova.network.neutron [None req-293fb63c-a7bb-4b08-a6a6-7f32752d9b90 16a339a76ea148bc97bcb8e720ea8511 cc0441dee99e4b51ac652297c4221de3 - - default default] [instance: ccb415c8-1183-4921-bc8c-1c40722eeb98] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec  6 01:58:16 np0005548731 nova_compute[232433]: 2025-12-06 06:58:16.315 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 01:58:16 np0005548731 nova_compute[232433]: 2025-12-06 06:58:16.316 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1c43f5f0-b7, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 01:58:16 np0005548731 nova_compute[232433]: 2025-12-06 06:58:16.316 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap1c43f5f0-b7, col_values=(('external_ids', {'iface-id': '1c43f5f0-b72d-49f6-9dc5-10367318c26c', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:20:e0:e2', 'vm-uuid': '1dd278c9-e3ac-480b-9a02-2e580da2d211'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 01:58:16 np0005548731 nova_compute[232433]: 2025-12-06 06:58:16.319 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 01:58:16 np0005548731 NetworkManager[49182]: <info>  [1765004296.3198] manager: (tap1c43f5f0-b7): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/23)
Dec  6 01:58:16 np0005548731 nova_compute[232433]: 2025-12-06 06:58:16.322 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  6 01:58:16 np0005548731 nova_compute[232433]: 2025-12-06 06:58:16.327 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 01:58:16 np0005548731 nova_compute[232433]: 2025-12-06 06:58:16.330 232437 INFO os_vif [None req-0901ec50-8738-4a25-a816-f698070dc00b 4e358e8cf5374d0b818b9d1de2f01848 bccff877c0f34760846113a84f5b0709 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:20:e0:e2,bridge_name='br-int',has_traffic_filtering=True,id=1c43f5f0-b72d-49f6-9dc5-10367318c26c,network=Network(26dc03be-378a-4751-a903-cbe72fe09f14),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1c43f5f0-b7')#033[00m
Dec  6 01:58:16 np0005548731 nova_compute[232433]: 2025-12-06 06:58:16.407 232437 DEBUG nova.network.neutron [None req-293fb63c-a7bb-4b08-a6a6-7f32752d9b90 16a339a76ea148bc97bcb8e720ea8511 cc0441dee99e4b51ac652297c4221de3 - - default default] [instance: ccb415c8-1183-4921-bc8c-1c40722eeb98] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec  6 01:58:16 np0005548731 nova_compute[232433]: 2025-12-06 06:58:16.448 232437 DEBUG nova.virt.libvirt.driver [None req-0901ec50-8738-4a25-a816-f698070dc00b 4e358e8cf5374d0b818b9d1de2f01848 bccff877c0f34760846113a84f5b0709 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  6 01:58:16 np0005548731 nova_compute[232433]: 2025-12-06 06:58:16.449 232437 DEBUG nova.virt.libvirt.driver [None req-0901ec50-8738-4a25-a816-f698070dc00b 4e358e8cf5374d0b818b9d1de2f01848 bccff877c0f34760846113a84f5b0709 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  6 01:58:16 np0005548731 nova_compute[232433]: 2025-12-06 06:58:16.449 232437 DEBUG nova.virt.libvirt.driver [None req-0901ec50-8738-4a25-a816-f698070dc00b 4e358e8cf5374d0b818b9d1de2f01848 bccff877c0f34760846113a84f5b0709 - - default default] No VIF found with MAC fa:16:3e:20:e0:e2, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec  6 01:58:16 np0005548731 nova_compute[232433]: 2025-12-06 06:58:16.450 232437 INFO nova.virt.libvirt.driver [None req-0901ec50-8738-4a25-a816-f698070dc00b 4e358e8cf5374d0b818b9d1de2f01848 bccff877c0f34760846113a84f5b0709 - - default default] [instance: 1dd278c9-e3ac-480b-9a02-2e580da2d211] Using config drive#033[00m
Dec  6 01:58:16 np0005548731 nova_compute[232433]: 2025-12-06 06:58:16.481 232437 DEBUG nova.storage.rbd_utils [None req-0901ec50-8738-4a25-a816-f698070dc00b 4e358e8cf5374d0b818b9d1de2f01848 bccff877c0f34760846113a84f5b0709 - - default default] rbd image 1dd278c9-e3ac-480b-9a02-2e580da2d211_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 01:58:16 np0005548731 nova_compute[232433]: 2025-12-06 06:58:16.922 232437 DEBUG nova.network.neutron [None req-293fb63c-a7bb-4b08-a6a6-7f32752d9b90 16a339a76ea148bc97bcb8e720ea8511 cc0441dee99e4b51ac652297c4221de3 - - default default] [instance: ccb415c8-1183-4921-bc8c-1c40722eeb98] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 01:58:16 np0005548731 nova_compute[232433]: 2025-12-06 06:58:16.975 232437 DEBUG oslo_concurrency.lockutils [None req-293fb63c-a7bb-4b08-a6a6-7f32752d9b90 16a339a76ea148bc97bcb8e720ea8511 cc0441dee99e4b51ac652297c4221de3 - - default default] Releasing lock "refresh_cache-ccb415c8-1183-4921-bc8c-1c40722eeb98" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 01:58:16 np0005548731 nova_compute[232433]: 2025-12-06 06:58:16.976 232437 DEBUG nova.compute.manager [None req-293fb63c-a7bb-4b08-a6a6-7f32752d9b90 16a339a76ea148bc97bcb8e720ea8511 cc0441dee99e4b51ac652297c4221de3 - - default default] [instance: ccb415c8-1183-4921-bc8c-1c40722eeb98] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec  6 01:58:17 np0005548731 systemd[1]: machine-qemu\x2d1\x2dinstance\x2d00000006.scope: Deactivated successfully.
Dec  6 01:58:17 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:58:17 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:58:17 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:58:17.182 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:58:17 np0005548731 systemd[1]: machine-qemu\x2d1\x2dinstance\x2d00000006.scope: Consumed 14.698s CPU time.
Dec  6 01:58:17 np0005548731 systemd-machined[195355]: Machine qemu-1-instance-00000006 terminated.
Dec  6 01:58:17 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:58:17 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000011s ======
Dec  6 01:58:17 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:58:17.345 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Dec  6 01:58:17 np0005548731 nova_compute[232433]: 2025-12-06 06:58:17.396 232437 INFO nova.virt.libvirt.driver [-] [instance: ccb415c8-1183-4921-bc8c-1c40722eeb98] Instance destroyed successfully.#033[00m
Dec  6 01:58:17 np0005548731 nova_compute[232433]: 2025-12-06 06:58:17.397 232437 DEBUG nova.objects.instance [None req-293fb63c-a7bb-4b08-a6a6-7f32752d9b90 16a339a76ea148bc97bcb8e720ea8511 cc0441dee99e4b51ac652297c4221de3 - - default default] Lazy-loading 'resources' on Instance uuid ccb415c8-1183-4921-bc8c-1c40722eeb98 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 01:58:17 np0005548731 nova_compute[232433]: 2025-12-06 06:58:17.836 232437 INFO nova.virt.libvirt.driver [None req-0901ec50-8738-4a25-a816-f698070dc00b 4e358e8cf5374d0b818b9d1de2f01848 bccff877c0f34760846113a84f5b0709 - - default default] [instance: 1dd278c9-e3ac-480b-9a02-2e580da2d211] Creating config drive at /var/lib/nova/instances/1dd278c9-e3ac-480b-9a02-2e580da2d211/disk.config#033[00m
Dec  6 01:58:17 np0005548731 nova_compute[232433]: 2025-12-06 06:58:17.846 232437 DEBUG oslo_concurrency.processutils [None req-0901ec50-8738-4a25-a816-f698070dc00b 4e358e8cf5374d0b818b9d1de2f01848 bccff877c0f34760846113a84f5b0709 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/1dd278c9-e3ac-480b-9a02-2e580da2d211/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpus365nxp execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 01:58:17 np0005548731 nova_compute[232433]: 2025-12-06 06:58:17.977 232437 DEBUG oslo_concurrency.processutils [None req-0901ec50-8738-4a25-a816-f698070dc00b 4e358e8cf5374d0b818b9d1de2f01848 bccff877c0f34760846113a84f5b0709 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/1dd278c9-e3ac-480b-9a02-2e580da2d211/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpus365nxp" returned: 0 in 0.132s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 01:58:18 np0005548731 nova_compute[232433]: 2025-12-06 06:58:18.011 232437 DEBUG nova.storage.rbd_utils [None req-0901ec50-8738-4a25-a816-f698070dc00b 4e358e8cf5374d0b818b9d1de2f01848 bccff877c0f34760846113a84f5b0709 - - default default] rbd image 1dd278c9-e3ac-480b-9a02-2e580da2d211_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 01:58:18 np0005548731 nova_compute[232433]: 2025-12-06 06:58:18.017 232437 DEBUG oslo_concurrency.processutils [None req-0901ec50-8738-4a25-a816-f698070dc00b 4e358e8cf5374d0b818b9d1de2f01848 bccff877c0f34760846113a84f5b0709 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/1dd278c9-e3ac-480b-9a02-2e580da2d211/disk.config 1dd278c9-e3ac-480b-9a02-2e580da2d211_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 01:58:18 np0005548731 nova_compute[232433]: 2025-12-06 06:58:18.406 232437 DEBUG oslo_concurrency.processutils [None req-0901ec50-8738-4a25-a816-f698070dc00b 4e358e8cf5374d0b818b9d1de2f01848 bccff877c0f34760846113a84f5b0709 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/1dd278c9-e3ac-480b-9a02-2e580da2d211/disk.config 1dd278c9-e3ac-480b-9a02-2e580da2d211_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.389s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 01:58:18 np0005548731 nova_compute[232433]: 2025-12-06 06:58:18.407 232437 INFO nova.virt.libvirt.driver [None req-0901ec50-8738-4a25-a816-f698070dc00b 4e358e8cf5374d0b818b9d1de2f01848 bccff877c0f34760846113a84f5b0709 - - default default] [instance: 1dd278c9-e3ac-480b-9a02-2e580da2d211] Deleting local config drive /var/lib/nova/instances/1dd278c9-e3ac-480b-9a02-2e580da2d211/disk.config because it was imported into RBD.#033[00m
Dec  6 01:58:18 np0005548731 kernel: tun: Universal TUN/TAP device driver, 1.6
Dec  6 01:58:18 np0005548731 kernel: tap1c43f5f0-b7: entered promiscuous mode
Dec  6 01:58:18 np0005548731 NetworkManager[49182]: <info>  [1765004298.4744] manager: (tap1c43f5f0-b7): new Tun device (/org/freedesktop/NetworkManager/Devices/24)
Dec  6 01:58:18 np0005548731 nova_compute[232433]: 2025-12-06 06:58:18.477 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 01:58:18 np0005548731 ovn_controller[133927]: 2025-12-06T06:58:18Z|00027|binding|INFO|Claiming lport 1c43f5f0-b72d-49f6-9dc5-10367318c26c for this chassis.
Dec  6 01:58:18 np0005548731 ovn_controller[133927]: 2025-12-06T06:58:18Z|00028|binding|INFO|1c43f5f0-b72d-49f6-9dc5-10367318c26c: Claiming fa:16:3e:20:e0:e2 10.100.0.4
Dec  6 01:58:18 np0005548731 systemd-udevd[236607]: Network interface NamePolicy= disabled on kernel command line.
Dec  6 01:58:18 np0005548731 nova_compute[232433]: 2025-12-06 06:58:18.482 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 01:58:18 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:58:18.493 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:20:e0:e2 10.100.0.4'], port_security=['fa:16:3e:20:e0:e2 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '1dd278c9-e3ac-480b-9a02-2e580da2d211', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-26dc03be-378a-4751-a903-cbe72fe09f14', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'bccff877c0f34760846113a84f5b0709', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'f1b42260-c646-42f8-aae4-ef1d7199eb37', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d5f82587-5432-4011-b9b3-695194943504, chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], logical_port=1c43f5f0-b72d-49f6-9dc5-10367318c26c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 01:58:18 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:58:18.495 143965 INFO neutron.agent.ovn.metadata.agent [-] Port 1c43f5f0-b72d-49f6-9dc5-10367318c26c in datapath 26dc03be-378a-4751-a903-cbe72fe09f14 bound to our chassis#033[00m
Dec  6 01:58:18 np0005548731 NetworkManager[49182]: <info>  [1765004298.4981] device (tap1c43f5f0-b7): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec  6 01:58:18 np0005548731 NetworkManager[49182]: <info>  [1765004298.4993] device (tap1c43f5f0-b7): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec  6 01:58:18 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:58:18.499 143965 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 26dc03be-378a-4751-a903-cbe72fe09f14#033[00m
Dec  6 01:58:18 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:58:18.500 143965 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.default', '--privsep_sock_path', '/tmp/tmpr9om8e6d/privsep.sock']#033[00m
Dec  6 01:58:18 np0005548731 nova_compute[232433]: 2025-12-06 06:58:18.524 232437 INFO nova.virt.libvirt.driver [None req-293fb63c-a7bb-4b08-a6a6-7f32752d9b90 16a339a76ea148bc97bcb8e720ea8511 cc0441dee99e4b51ac652297c4221de3 - - default default] [instance: ccb415c8-1183-4921-bc8c-1c40722eeb98] Deleting instance files /var/lib/nova/instances/ccb415c8-1183-4921-bc8c-1c40722eeb98_del#033[00m
Dec  6 01:58:18 np0005548731 nova_compute[232433]: 2025-12-06 06:58:18.524 232437 INFO nova.virt.libvirt.driver [None req-293fb63c-a7bb-4b08-a6a6-7f32752d9b90 16a339a76ea148bc97bcb8e720ea8511 cc0441dee99e4b51ac652297c4221de3 - - default default] [instance: ccb415c8-1183-4921-bc8c-1c40722eeb98] Deletion of /var/lib/nova/instances/ccb415c8-1183-4921-bc8c-1c40722eeb98_del complete#033[00m
Dec  6 01:58:18 np0005548731 systemd-machined[195355]: New machine qemu-2-instance-00000009.
Dec  6 01:58:18 np0005548731 systemd[1]: Started Virtual Machine qemu-2-instance-00000009.
Dec  6 01:58:18 np0005548731 nova_compute[232433]: 2025-12-06 06:58:18.565 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 01:58:18 np0005548731 ovn_controller[133927]: 2025-12-06T06:58:18Z|00029|binding|INFO|Setting lport 1c43f5f0-b72d-49f6-9dc5-10367318c26c ovn-installed in OVS
Dec  6 01:58:18 np0005548731 ovn_controller[133927]: 2025-12-06T06:58:18Z|00030|binding|INFO|Setting lport 1c43f5f0-b72d-49f6-9dc5-10367318c26c up in Southbound
Dec  6 01:58:18 np0005548731 nova_compute[232433]: 2025-12-06 06:58:18.575 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 01:58:18 np0005548731 nova_compute[232433]: 2025-12-06 06:58:18.577 232437 DEBUG nova.virt.libvirt.host [None req-293fb63c-a7bb-4b08-a6a6-7f32752d9b90 16a339a76ea148bc97bcb8e720ea8511 cc0441dee99e4b51ac652297c4221de3 - - default default] Checking UEFI support for host arch (x86_64) supports_uefi /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1754#033[00m
Dec  6 01:58:18 np0005548731 nova_compute[232433]: 2025-12-06 06:58:18.578 232437 INFO nova.virt.libvirt.host [None req-293fb63c-a7bb-4b08-a6a6-7f32752d9b90 16a339a76ea148bc97bcb8e720ea8511 cc0441dee99e4b51ac652297c4221de3 - - default default] UEFI support detected#033[00m
Dec  6 01:58:18 np0005548731 nova_compute[232433]: 2025-12-06 06:58:18.580 232437 INFO nova.compute.manager [None req-293fb63c-a7bb-4b08-a6a6-7f32752d9b90 16a339a76ea148bc97bcb8e720ea8511 cc0441dee99e4b51ac652297c4221de3 - - default default] [instance: ccb415c8-1183-4921-bc8c-1c40722eeb98] Took 1.60 seconds to destroy the instance on the hypervisor.#033[00m
Dec  6 01:58:18 np0005548731 nova_compute[232433]: 2025-12-06 06:58:18.580 232437 DEBUG oslo.service.loopingcall [None req-293fb63c-a7bb-4b08-a6a6-7f32752d9b90 16a339a76ea148bc97bcb8e720ea8511 cc0441dee99e4b51ac652297c4221de3 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Dec  6 01:58:18 np0005548731 nova_compute[232433]: 2025-12-06 06:58:18.580 232437 DEBUG nova.compute.manager [-] [instance: ccb415c8-1183-4921-bc8c-1c40722eeb98] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Dec  6 01:58:18 np0005548731 nova_compute[232433]: 2025-12-06 06:58:18.581 232437 DEBUG nova.network.neutron [-] [instance: ccb415c8-1183-4921-bc8c-1c40722eeb98] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Dec  6 01:58:19 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:58:19 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:58:19 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:58:19.185 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:58:19 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:58:19.218 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=9f96b960-b4f2-40bd-ae99-08121f5e8b78, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '4'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 01:58:19 np0005548731 nova_compute[232433]: 2025-12-06 06:58:19.238 232437 DEBUG nova.network.neutron [-] [instance: ccb415c8-1183-4921-bc8c-1c40722eeb98] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec  6 01:58:19 np0005548731 nova_compute[232433]: 2025-12-06 06:58:19.255 232437 DEBUG nova.network.neutron [-] [instance: ccb415c8-1183-4921-bc8c-1c40722eeb98] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 01:58:19 np0005548731 nova_compute[232433]: 2025-12-06 06:58:19.281 232437 INFO nova.compute.manager [-] [instance: ccb415c8-1183-4921-bc8c-1c40722eeb98] Took 0.70 seconds to deallocate network for instance.#033[00m
Dec  6 01:58:19 np0005548731 nova_compute[232433]: 2025-12-06 06:58:19.330 232437 DEBUG oslo_concurrency.lockutils [None req-293fb63c-a7bb-4b08-a6a6-7f32752d9b90 16a339a76ea148bc97bcb8e720ea8511 cc0441dee99e4b51ac652297c4221de3 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 01:58:19 np0005548731 nova_compute[232433]: 2025-12-06 06:58:19.332 232437 DEBUG oslo_concurrency.lockutils [None req-293fb63c-a7bb-4b08-a6a6-7f32752d9b90 16a339a76ea148bc97bcb8e720ea8511 cc0441dee99e4b51ac652297c4221de3 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 01:58:19 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:58:19 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:58:19 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:58:19.348 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:58:19 np0005548731 nova_compute[232433]: 2025-12-06 06:58:19.398 232437 DEBUG nova.virt.driver [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Emitting event <LifecycleEvent: 1765004299.3981302, 1dd278c9-e3ac-480b-9a02-2e580da2d211 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 01:58:19 np0005548731 nova_compute[232433]: 2025-12-06 06:58:19.398 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 1dd278c9-e3ac-480b-9a02-2e580da2d211] VM Started (Lifecycle Event)#033[00m
Dec  6 01:58:19 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 01:58:19 np0005548731 nova_compute[232433]: 2025-12-06 06:58:19.428 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 1dd278c9-e3ac-480b-9a02-2e580da2d211] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 01:58:19 np0005548731 nova_compute[232433]: 2025-12-06 06:58:19.435 232437 DEBUG nova.virt.driver [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Emitting event <LifecycleEvent: 1765004299.4003794, 1dd278c9-e3ac-480b-9a02-2e580da2d211 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 01:58:19 np0005548731 nova_compute[232433]: 2025-12-06 06:58:19.435 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 1dd278c9-e3ac-480b-9a02-2e580da2d211] VM Paused (Lifecycle Event)#033[00m
Dec  6 01:58:19 np0005548731 nova_compute[232433]: 2025-12-06 06:58:19.457 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 1dd278c9-e3ac-480b-9a02-2e580da2d211] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 01:58:19 np0005548731 nova_compute[232433]: 2025-12-06 06:58:19.462 232437 DEBUG oslo_concurrency.processutils [None req-293fb63c-a7bb-4b08-a6a6-7f32752d9b90 16a339a76ea148bc97bcb8e720ea8511 cc0441dee99e4b51ac652297c4221de3 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 01:58:19 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:58:19.479 143965 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap#033[00m
Dec  6 01:58:19 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:58:19.479 143965 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmpr9om8e6d/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362#033[00m
Dec  6 01:58:19 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:58:19.318 236668 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m
Dec  6 01:58:19 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:58:19.324 236668 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m
Dec  6 01:58:19 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:58:19.327 236668 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN|CAP_SYS_PTRACE/CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN|CAP_SYS_PTRACE/none#033[00m
Dec  6 01:58:19 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:58:19.327 236668 INFO oslo.privsep.daemon [-] privsep daemon running as pid 236668#033[00m
Dec  6 01:58:19 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:58:19.483 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[59150d9d-290a-4f09-9d15-fdecbd68d4bd]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 01:58:19 np0005548731 nova_compute[232433]: 2025-12-06 06:58:19.487 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 1dd278c9-e3ac-480b-9a02-2e580da2d211] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  6 01:58:19 np0005548731 nova_compute[232433]: 2025-12-06 06:58:19.511 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 1dd278c9-e3ac-480b-9a02-2e580da2d211] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  6 01:58:19 np0005548731 nova_compute[232433]: 2025-12-06 06:58:19.530 232437 DEBUG nova.compute.manager [req-40ce8ea5-6f71-4c2d-b680-c5b2657579a2 req-57e9ef47-10b3-450d-9586-c5e4935cfddb 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 1dd278c9-e3ac-480b-9a02-2e580da2d211] Received event network-vif-plugged-1c43f5f0-b72d-49f6-9dc5-10367318c26c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 01:58:19 np0005548731 nova_compute[232433]: 2025-12-06 06:58:19.531 232437 DEBUG oslo_concurrency.lockutils [req-40ce8ea5-6f71-4c2d-b680-c5b2657579a2 req-57e9ef47-10b3-450d-9586-c5e4935cfddb 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "1dd278c9-e3ac-480b-9a02-2e580da2d211-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 01:58:19 np0005548731 nova_compute[232433]: 2025-12-06 06:58:19.531 232437 DEBUG oslo_concurrency.lockutils [req-40ce8ea5-6f71-4c2d-b680-c5b2657579a2 req-57e9ef47-10b3-450d-9586-c5e4935cfddb 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "1dd278c9-e3ac-480b-9a02-2e580da2d211-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 01:58:19 np0005548731 nova_compute[232433]: 2025-12-06 06:58:19.531 232437 DEBUG oslo_concurrency.lockutils [req-40ce8ea5-6f71-4c2d-b680-c5b2657579a2 req-57e9ef47-10b3-450d-9586-c5e4935cfddb 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "1dd278c9-e3ac-480b-9a02-2e580da2d211-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 01:58:19 np0005548731 nova_compute[232433]: 2025-12-06 06:58:19.531 232437 DEBUG nova.compute.manager [req-40ce8ea5-6f71-4c2d-b680-c5b2657579a2 req-57e9ef47-10b3-450d-9586-c5e4935cfddb 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 1dd278c9-e3ac-480b-9a02-2e580da2d211] Processing event network-vif-plugged-1c43f5f0-b72d-49f6-9dc5-10367318c26c _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Dec  6 01:58:19 np0005548731 nova_compute[232433]: 2025-12-06 06:58:19.532 232437 DEBUG nova.compute.manager [None req-0901ec50-8738-4a25-a816-f698070dc00b 4e358e8cf5374d0b818b9d1de2f01848 bccff877c0f34760846113a84f5b0709 - - default default] [instance: 1dd278c9-e3ac-480b-9a02-2e580da2d211] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec  6 01:58:19 np0005548731 nova_compute[232433]: 2025-12-06 06:58:19.535 232437 DEBUG nova.virt.driver [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Emitting event <LifecycleEvent: 1765004299.534933, 1dd278c9-e3ac-480b-9a02-2e580da2d211 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 01:58:19 np0005548731 nova_compute[232433]: 2025-12-06 06:58:19.535 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 1dd278c9-e3ac-480b-9a02-2e580da2d211] VM Resumed (Lifecycle Event)#033[00m
Dec  6 01:58:19 np0005548731 nova_compute[232433]: 2025-12-06 06:58:19.537 232437 DEBUG nova.virt.libvirt.driver [None req-0901ec50-8738-4a25-a816-f698070dc00b 4e358e8cf5374d0b818b9d1de2f01848 bccff877c0f34760846113a84f5b0709 - - default default] [instance: 1dd278c9-e3ac-480b-9a02-2e580da2d211] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Dec  6 01:58:19 np0005548731 nova_compute[232433]: 2025-12-06 06:58:19.539 232437 INFO nova.virt.libvirt.driver [-] [instance: 1dd278c9-e3ac-480b-9a02-2e580da2d211] Instance spawned successfully.#033[00m
Dec  6 01:58:19 np0005548731 nova_compute[232433]: 2025-12-06 06:58:19.540 232437 DEBUG nova.virt.libvirt.driver [None req-0901ec50-8738-4a25-a816-f698070dc00b 4e358e8cf5374d0b818b9d1de2f01848 bccff877c0f34760846113a84f5b0709 - - default default] [instance: 1dd278c9-e3ac-480b-9a02-2e580da2d211] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Dec  6 01:58:19 np0005548731 nova_compute[232433]: 2025-12-06 06:58:19.557 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 1dd278c9-e3ac-480b-9a02-2e580da2d211] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 01:58:19 np0005548731 nova_compute[232433]: 2025-12-06 06:58:19.569 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 1dd278c9-e3ac-480b-9a02-2e580da2d211] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  6 01:58:19 np0005548731 nova_compute[232433]: 2025-12-06 06:58:19.572 232437 DEBUG nova.virt.libvirt.driver [None req-0901ec50-8738-4a25-a816-f698070dc00b 4e358e8cf5374d0b818b9d1de2f01848 bccff877c0f34760846113a84f5b0709 - - default default] [instance: 1dd278c9-e3ac-480b-9a02-2e580da2d211] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 01:58:19 np0005548731 nova_compute[232433]: 2025-12-06 06:58:19.572 232437 DEBUG nova.virt.libvirt.driver [None req-0901ec50-8738-4a25-a816-f698070dc00b 4e358e8cf5374d0b818b9d1de2f01848 bccff877c0f34760846113a84f5b0709 - - default default] [instance: 1dd278c9-e3ac-480b-9a02-2e580da2d211] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 01:58:19 np0005548731 nova_compute[232433]: 2025-12-06 06:58:19.573 232437 DEBUG nova.virt.libvirt.driver [None req-0901ec50-8738-4a25-a816-f698070dc00b 4e358e8cf5374d0b818b9d1de2f01848 bccff877c0f34760846113a84f5b0709 - - default default] [instance: 1dd278c9-e3ac-480b-9a02-2e580da2d211] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 01:58:19 np0005548731 nova_compute[232433]: 2025-12-06 06:58:19.573 232437 DEBUG nova.virt.libvirt.driver [None req-0901ec50-8738-4a25-a816-f698070dc00b 4e358e8cf5374d0b818b9d1de2f01848 bccff877c0f34760846113a84f5b0709 - - default default] [instance: 1dd278c9-e3ac-480b-9a02-2e580da2d211] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 01:58:19 np0005548731 nova_compute[232433]: 2025-12-06 06:58:19.574 232437 DEBUG nova.virt.libvirt.driver [None req-0901ec50-8738-4a25-a816-f698070dc00b 4e358e8cf5374d0b818b9d1de2f01848 bccff877c0f34760846113a84f5b0709 - - default default] [instance: 1dd278c9-e3ac-480b-9a02-2e580da2d211] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 01:58:19 np0005548731 nova_compute[232433]: 2025-12-06 06:58:19.574 232437 DEBUG nova.virt.libvirt.driver [None req-0901ec50-8738-4a25-a816-f698070dc00b 4e358e8cf5374d0b818b9d1de2f01848 bccff877c0f34760846113a84f5b0709 - - default default] [instance: 1dd278c9-e3ac-480b-9a02-2e580da2d211] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 01:58:19 np0005548731 nova_compute[232433]: 2025-12-06 06:58:19.608 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 1dd278c9-e3ac-480b-9a02-2e580da2d211] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  6 01:58:19 np0005548731 nova_compute[232433]: 2025-12-06 06:58:19.634 232437 INFO nova.compute.manager [None req-0901ec50-8738-4a25-a816-f698070dc00b 4e358e8cf5374d0b818b9d1de2f01848 bccff877c0f34760846113a84f5b0709 - - default default] [instance: 1dd278c9-e3ac-480b-9a02-2e580da2d211] Took 12.64 seconds to spawn the instance on the hypervisor.#033[00m
Dec  6 01:58:19 np0005548731 nova_compute[232433]: 2025-12-06 06:58:19.634 232437 DEBUG nova.compute.manager [None req-0901ec50-8738-4a25-a816-f698070dc00b 4e358e8cf5374d0b818b9d1de2f01848 bccff877c0f34760846113a84f5b0709 - - default default] [instance: 1dd278c9-e3ac-480b-9a02-2e580da2d211] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 01:58:19 np0005548731 nova_compute[232433]: 2025-12-06 06:58:19.697 232437 INFO nova.compute.manager [None req-0901ec50-8738-4a25-a816-f698070dc00b 4e358e8cf5374d0b818b9d1de2f01848 bccff877c0f34760846113a84f5b0709 - - default default] [instance: 1dd278c9-e3ac-480b-9a02-2e580da2d211] Took 13.69 seconds to build instance.#033[00m
Dec  6 01:58:19 np0005548731 nova_compute[232433]: 2025-12-06 06:58:19.717 232437 DEBUG oslo_concurrency.lockutils [None req-0901ec50-8738-4a25-a816-f698070dc00b 4e358e8cf5374d0b818b9d1de2f01848 bccff877c0f34760846113a84f5b0709 - - default default] Lock "1dd278c9-e3ac-480b-9a02-2e580da2d211" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.852s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 01:58:19 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 01:58:19 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/723393079' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 01:58:19 np0005548731 nova_compute[232433]: 2025-12-06 06:58:19.964 232437 DEBUG oslo_concurrency.processutils [None req-293fb63c-a7bb-4b08-a6a6-7f32752d9b90 16a339a76ea148bc97bcb8e720ea8511 cc0441dee99e4b51ac652297c4221de3 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.502s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 01:58:19 np0005548731 nova_compute[232433]: 2025-12-06 06:58:19.972 232437 DEBUG nova.compute.provider_tree [None req-293fb63c-a7bb-4b08-a6a6-7f32752d9b90 16a339a76ea148bc97bcb8e720ea8511 cc0441dee99e4b51ac652297c4221de3 - - default default] Inventory has not changed in ProviderTree for provider: 6d00757a-082f-486d-ae84-869a2ba2e6e7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  6 01:58:20 np0005548731 nova_compute[232433]: 2025-12-06 06:58:20.005 232437 DEBUG nova.scheduler.client.report [None req-293fb63c-a7bb-4b08-a6a6-7f32752d9b90 16a339a76ea148bc97bcb8e720ea8511 cc0441dee99e4b51ac652297c4221de3 - - default default] Inventory has not changed for provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  6 01:58:20 np0005548731 nova_compute[232433]: 2025-12-06 06:58:20.044 232437 DEBUG oslo_concurrency.lockutils [None req-293fb63c-a7bb-4b08-a6a6-7f32752d9b90 16a339a76ea148bc97bcb8e720ea8511 cc0441dee99e4b51ac652297c4221de3 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.711s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 01:58:20 np0005548731 nova_compute[232433]: 2025-12-06 06:58:20.088 232437 INFO nova.scheduler.client.report [None req-293fb63c-a7bb-4b08-a6a6-7f32752d9b90 16a339a76ea148bc97bcb8e720ea8511 cc0441dee99e4b51ac652297c4221de3 - - default default] Deleted allocations for instance ccb415c8-1183-4921-bc8c-1c40722eeb98#033[00m
Dec  6 01:58:20 np0005548731 nova_compute[232433]: 2025-12-06 06:58:20.161 232437 DEBUG oslo_concurrency.lockutils [None req-293fb63c-a7bb-4b08-a6a6-7f32752d9b90 16a339a76ea148bc97bcb8e720ea8511 cc0441dee99e4b51ac652297c4221de3 - - default default] Lock "ccb415c8-1183-4921-bc8c-1c40722eeb98" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.012s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 01:58:20 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:58:20.597 236668 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 01:58:20 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:58:20.597 236668 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 01:58:20 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:58:20.597 236668 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 01:58:20 np0005548731 nova_compute[232433]: 2025-12-06 06:58:20.733 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 01:58:21 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:58:21 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:58:21 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:58:21.188 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:58:21 np0005548731 nova_compute[232433]: 2025-12-06 06:58:21.318 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 01:58:21 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:58:21 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 01:58:21 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:58:21.349 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 01:58:21 np0005548731 nova_compute[232433]: 2025-12-06 06:58:21.611 232437 DEBUG nova.compute.manager [req-b668475e-056f-4d42-9bb2-fde722aaf0a0 req-38c73deb-1d63-4c96-828e-b2aab996ed01 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 1dd278c9-e3ac-480b-9a02-2e580da2d211] Received event network-vif-plugged-1c43f5f0-b72d-49f6-9dc5-10367318c26c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 01:58:21 np0005548731 nova_compute[232433]: 2025-12-06 06:58:21.611 232437 DEBUG oslo_concurrency.lockutils [req-b668475e-056f-4d42-9bb2-fde722aaf0a0 req-38c73deb-1d63-4c96-828e-b2aab996ed01 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "1dd278c9-e3ac-480b-9a02-2e580da2d211-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 01:58:21 np0005548731 nova_compute[232433]: 2025-12-06 06:58:21.612 232437 DEBUG oslo_concurrency.lockutils [req-b668475e-056f-4d42-9bb2-fde722aaf0a0 req-38c73deb-1d63-4c96-828e-b2aab996ed01 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "1dd278c9-e3ac-480b-9a02-2e580da2d211-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 01:58:21 np0005548731 nova_compute[232433]: 2025-12-06 06:58:21.612 232437 DEBUG oslo_concurrency.lockutils [req-b668475e-056f-4d42-9bb2-fde722aaf0a0 req-38c73deb-1d63-4c96-828e-b2aab996ed01 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "1dd278c9-e3ac-480b-9a02-2e580da2d211-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 01:58:21 np0005548731 nova_compute[232433]: 2025-12-06 06:58:21.613 232437 DEBUG nova.compute.manager [req-b668475e-056f-4d42-9bb2-fde722aaf0a0 req-38c73deb-1d63-4c96-828e-b2aab996ed01 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 1dd278c9-e3ac-480b-9a02-2e580da2d211] No waiting events found dispatching network-vif-plugged-1c43f5f0-b72d-49f6-9dc5-10367318c26c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 01:58:21 np0005548731 nova_compute[232433]: 2025-12-06 06:58:21.613 232437 WARNING nova.compute.manager [req-b668475e-056f-4d42-9bb2-fde722aaf0a0 req-38c73deb-1d63-4c96-828e-b2aab996ed01 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 1dd278c9-e3ac-480b-9a02-2e580da2d211] Received unexpected event network-vif-plugged-1c43f5f0-b72d-49f6-9dc5-10367318c26c for instance with vm_state active and task_state None.#033[00m
Dec  6 01:58:21 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:58:21.741 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[f7939f7c-6d4e-4072-88a0-eee2123b53f9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 01:58:21 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:58:21.742 143965 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap26dc03be-31 in ovnmeta-26dc03be-378a-4751-a903-cbe72fe09f14 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Dec  6 01:58:21 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:58:21.745 236668 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap26dc03be-30 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Dec  6 01:58:21 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:58:21.745 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[67157232-5a63-4992-be24-1f76bc47498b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 01:58:21 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:58:21.748 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[9841a35f-a9d4-49e2-b74d-f5ff3b0e5706]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 01:58:21 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:58:21.784 144079 DEBUG oslo.privsep.daemon [-] privsep: reply[903cf710-8c5a-47ef-9152-5107c8b46500]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 01:58:21 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:58:21.815 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[0e90da0e-d8cb-48a3-994f-1e038b1fe2a7]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 01:58:21 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:58:21.817 143965 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.link_cmd', '--privsep_sock_path', '/tmp/tmpllrcq6cp/privsep.sock']#033[00m
Dec  6 01:58:22 np0005548731 nova_compute[232433]: 2025-12-06 06:58:22.181 232437 DEBUG nova.network.neutron [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] [instance: 012052dd-fee1-4ac6-bc17-336aff7b5db3] Automatically allocated network: {'id': 'fa805a2c-a79c-458b-b658-8e0534714a02', 'name': 'auto_allocated_network', 'tenant_id': '066c314d67e347f6a49e8e3e27998441', 'admin_state_up': True, 'mtu': 1442, 'status': 'ACTIVE', 'subnets': ['04b3c0e5-fc90-42d4-bc2f-73e704cb01a0', 'd4a716c0-b329-403f-9680-e6cc8e535b63'], 'shared': False, 'availability_zone_hints': [], 'availability_zones': [], 'ipv4_address_scope': None, 'ipv6_address_scope': None, 'router:external': False, 'description': '', 'qos_policy_id': None, 'port_security_enabled': True, 'dns_domain': '', 'l2_adjacency': True, 'tags': [], 'created_at': '2025-12-06T06:57:49Z', 'updated_at': '2025-12-06T06:58:07Z', 'revision_number': 4, 'project_id': '066c314d67e347f6a49e8e3e27998441'} _auto_allocate_network /usr/lib/python3.9/site-packages/nova/network/neutron.py:2478#033[00m
Dec  6 01:58:22 np0005548731 nova_compute[232433]: 2025-12-06 06:58:22.183 232437 DEBUG nova.policy [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'e5122185c6194067bdb22d6ba8205dca', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '066c314d67e347f6a49e8e3e27998441', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Dec  6 01:58:22 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:58:22.537 143965 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap#033[00m
Dec  6 01:58:22 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:58:22.538 143965 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmpllrcq6cp/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362#033[00m
Dec  6 01:58:22 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:58:22.412 236707 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m
Dec  6 01:58:22 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:58:22.417 236707 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m
Dec  6 01:58:22 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:58:22.419 236707 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_NET_ADMIN|CAP_SYS_ADMIN/none#033[00m
Dec  6 01:58:22 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:58:22.419 236707 INFO oslo.privsep.daemon [-] privsep daemon running as pid 236707#033[00m
Dec  6 01:58:22 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:58:22.540 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[74a3946e-7d7e-4ed4-8558-5b3d7b8563fb]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 01:58:23 np0005548731 nova_compute[232433]: 2025-12-06 06:58:23.004 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 01:58:23 np0005548731 NetworkManager[49182]: <info>  [1765004303.0075] manager: (patch-provnet-9e78c1a1-68f4-477a-abaa-13a98bde06e5-to-br-int): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/25)
Dec  6 01:58:23 np0005548731 NetworkManager[49182]: <info>  [1765004303.0088] device (patch-provnet-9e78c1a1-68f4-477a-abaa-13a98bde06e5-to-br-int)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec  6 01:58:23 np0005548731 NetworkManager[49182]: <info>  [1765004303.0110] manager: (patch-br-int-to-provnet-9e78c1a1-68f4-477a-abaa-13a98bde06e5): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/26)
Dec  6 01:58:23 np0005548731 NetworkManager[49182]: <info>  [1765004303.0116] device (patch-br-int-to-provnet-9e78c1a1-68f4-477a-abaa-13a98bde06e5)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec  6 01:58:23 np0005548731 NetworkManager[49182]: <info>  [1765004303.0128] manager: (patch-br-int-to-provnet-9e78c1a1-68f4-477a-abaa-13a98bde06e5): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/27)
Dec  6 01:58:23 np0005548731 NetworkManager[49182]: <info>  [1765004303.0136] manager: (patch-provnet-9e78c1a1-68f4-477a-abaa-13a98bde06e5-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/28)
Dec  6 01:58:23 np0005548731 NetworkManager[49182]: <info>  [1765004303.0143] device (patch-provnet-9e78c1a1-68f4-477a-abaa-13a98bde06e5-to-br-int)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Dec  6 01:58:23 np0005548731 NetworkManager[49182]: <info>  [1765004303.0148] device (patch-br-int-to-provnet-9e78c1a1-68f4-477a-abaa-13a98bde06e5)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Dec  6 01:58:23 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:58:23.132 236707 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 01:58:23 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:58:23.132 236707 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 01:58:23 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:58:23.132 236707 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 01:58:23 np0005548731 nova_compute[232433]: 2025-12-06 06:58:23.148 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 01:58:23 np0005548731 nova_compute[232433]: 2025-12-06 06:58:23.171 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 01:58:23 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:58:23 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:58:23 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:58:23.190 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:58:23 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:58:23 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:58:23 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:58:23.351 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:58:23 np0005548731 nova_compute[232433]: 2025-12-06 06:58:23.670 232437 DEBUG nova.compute.manager [req-8ecfbdd4-be45-4198-85c8-52b2ec71f17c req-71fad14f-a9c4-4267-8730-cf709fb0c2b7 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 1dd278c9-e3ac-480b-9a02-2e580da2d211] Received event network-changed-1c43f5f0-b72d-49f6-9dc5-10367318c26c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 01:58:23 np0005548731 nova_compute[232433]: 2025-12-06 06:58:23.671 232437 DEBUG nova.compute.manager [req-8ecfbdd4-be45-4198-85c8-52b2ec71f17c req-71fad14f-a9c4-4267-8730-cf709fb0c2b7 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 1dd278c9-e3ac-480b-9a02-2e580da2d211] Refreshing instance network info cache due to event network-changed-1c43f5f0-b72d-49f6-9dc5-10367318c26c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  6 01:58:23 np0005548731 nova_compute[232433]: 2025-12-06 06:58:23.671 232437 DEBUG oslo_concurrency.lockutils [req-8ecfbdd4-be45-4198-85c8-52b2ec71f17c req-71fad14f-a9c4-4267-8730-cf709fb0c2b7 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-1dd278c9-e3ac-480b-9a02-2e580da2d211" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 01:58:23 np0005548731 nova_compute[232433]: 2025-12-06 06:58:23.671 232437 DEBUG oslo_concurrency.lockutils [req-8ecfbdd4-be45-4198-85c8-52b2ec71f17c req-71fad14f-a9c4-4267-8730-cf709fb0c2b7 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-1dd278c9-e3ac-480b-9a02-2e580da2d211" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 01:58:23 np0005548731 nova_compute[232433]: 2025-12-06 06:58:23.672 232437 DEBUG nova.network.neutron [req-8ecfbdd4-be45-4198-85c8-52b2ec71f17c req-71fad14f-a9c4-4267-8730-cf709fb0c2b7 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 1dd278c9-e3ac-480b-9a02-2e580da2d211] Refreshing network info cache for port 1c43f5f0-b72d-49f6-9dc5-10367318c26c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  6 01:58:23 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:58:23.788 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[23a783e8-347f-4e6e-afec-1e213527857c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 01:58:23 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:58:23.813 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[cec0755c-40af-4a64-97db-880def084606]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 01:58:23 np0005548731 NetworkManager[49182]: <info>  [1765004303.8149] manager: (tap26dc03be-30): new Veth device (/org/freedesktop/NetworkManager/Devices/29)
Dec  6 01:58:23 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:58:23.849 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[c854ff12-d9e3-4a9b-83ae-3cd3781a4477]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 01:58:23 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:58:23.854 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[ae001828-5079-4173-852c-8bac8e3f238e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 01:58:23 np0005548731 systemd-udevd[236742]: Network interface NamePolicy= disabled on kernel command line.
Dec  6 01:58:23 np0005548731 NetworkManager[49182]: <info>  [1765004303.8844] device (tap26dc03be-30): carrier: link connected
Dec  6 01:58:23 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:58:23.896 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[95b53e2a-d1f3-437f-9b31-9db28cc1d071]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 01:58:23 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:58:23.922 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[1063403d-0f64-4fa0-81b3-6702e68560d9]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap26dc03be-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:8b:12:18'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 13], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 456589, 'reachable_time': 42908, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 236777, 'error': None, 'target': 'ovnmeta-26dc03be-378a-4751-a903-cbe72fe09f14', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 01:58:23 np0005548731 podman[236721]: 2025-12-06 06:58:23.938088536 +0000 UTC m=+0.091137602 container health_status ae4fd08cdec31d6ae2a6b91ffff7deb6ca5a8375cb52ee4b685cc6f25796824e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Dec  6 01:58:23 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:58:23.948 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[2100dbdb-0f24-41bc-8cd7-588dcf48c7a7]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe8b:1218'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 456589, 'tstamp': 456589}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 236793, 'error': None, 'target': 'ovnmeta-26dc03be-378a-4751-a903-cbe72fe09f14', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 01:58:23 np0005548731 podman[236717]: 2025-12-06 06:58:23.960496617 +0000 UTC m=+0.116811193 container health_status 26c2ce9bdb83c6db5ccad7f5b2a7cb4e9354ab0235ad2f942ea42c8feda2142b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, 
org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  6 01:58:23 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:58:23.970 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[390ea989-51d2-4a65-bead-84035238e025]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap26dc03be-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:8b:12:18'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 13], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 456589, 'reachable_time': 42908, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 236802, 'error': None, 'target': 'ovnmeta-26dc03be-378a-4751-a903-cbe72fe09f14', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 01:58:23 np0005548731 podman[236719]: 2025-12-06 06:58:23.993423207 +0000 UTC m=+0.148495773 container health_status a118c3becf56cfbcae208065771ebf10a65162bb7b96389971293e534823fb78 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, tcib_managed=true)
Dec  6 01:58:24 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:58:24.006 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[7d95eae1-e6b9-40f9-a53e-4b6c249ea858]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 01:58:24 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:58:24.066 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[e3b81a87-c19b-4c23-b38e-095f53b1bea3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 01:58:24 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:58:24.068 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap26dc03be-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 01:58:24 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:58:24.069 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  6 01:58:24 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:58:24.069 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap26dc03be-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 01:58:24 np0005548731 NetworkManager[49182]: <info>  [1765004304.0722] manager: (tap26dc03be-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/30)
Dec  6 01:58:24 np0005548731 kernel: tap26dc03be-30: entered promiscuous mode
Dec  6 01:58:24 np0005548731 nova_compute[232433]: 2025-12-06 06:58:24.072 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 01:58:24 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:58:24.074 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap26dc03be-30, col_values=(('external_ids', {'iface-id': '5902f277-f87f-40e0-a71c-841de11f5ec9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 01:58:24 np0005548731 ovn_controller[133927]: 2025-12-06T06:58:24Z|00031|binding|INFO|Releasing lport 5902f277-f87f-40e0-a71c-841de11f5ec9 from this chassis (sb_readonly=0)
Dec  6 01:58:24 np0005548731 nova_compute[232433]: 2025-12-06 06:58:24.077 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 01:58:24 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:58:24.077 143965 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/26dc03be-378a-4751-a903-cbe72fe09f14.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/26dc03be-378a-4751-a903-cbe72fe09f14.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Dec  6 01:58:24 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:58:24.078 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[350fd6d9-8e6d-4c0a-914b-8cc69b80b71c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 01:58:24 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:58:24.079 143965 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec  6 01:58:24 np0005548731 ovn_metadata_agent[143960]: global
Dec  6 01:58:24 np0005548731 ovn_metadata_agent[143960]:    log         /dev/log local0 debug
Dec  6 01:58:24 np0005548731 ovn_metadata_agent[143960]:    log-tag     haproxy-metadata-proxy-26dc03be-378a-4751-a903-cbe72fe09f14
Dec  6 01:58:24 np0005548731 ovn_metadata_agent[143960]:    user        root
Dec  6 01:58:24 np0005548731 ovn_metadata_agent[143960]:    group       root
Dec  6 01:58:24 np0005548731 ovn_metadata_agent[143960]:    maxconn     1024
Dec  6 01:58:24 np0005548731 ovn_metadata_agent[143960]:    pidfile     /var/lib/neutron/external/pids/26dc03be-378a-4751-a903-cbe72fe09f14.pid.haproxy
Dec  6 01:58:24 np0005548731 ovn_metadata_agent[143960]:    daemon
Dec  6 01:58:24 np0005548731 ovn_metadata_agent[143960]: 
Dec  6 01:58:24 np0005548731 ovn_metadata_agent[143960]: defaults
Dec  6 01:58:24 np0005548731 ovn_metadata_agent[143960]:    log global
Dec  6 01:58:24 np0005548731 ovn_metadata_agent[143960]:    mode http
Dec  6 01:58:24 np0005548731 ovn_metadata_agent[143960]:    option httplog
Dec  6 01:58:24 np0005548731 ovn_metadata_agent[143960]:    option dontlognull
Dec  6 01:58:24 np0005548731 ovn_metadata_agent[143960]:    option http-server-close
Dec  6 01:58:24 np0005548731 ovn_metadata_agent[143960]:    option forwardfor
Dec  6 01:58:24 np0005548731 ovn_metadata_agent[143960]:    retries                 3
Dec  6 01:58:24 np0005548731 ovn_metadata_agent[143960]:    timeout http-request    30s
Dec  6 01:58:24 np0005548731 ovn_metadata_agent[143960]:    timeout connect         30s
Dec  6 01:58:24 np0005548731 ovn_metadata_agent[143960]:    timeout client          32s
Dec  6 01:58:24 np0005548731 ovn_metadata_agent[143960]:    timeout server          32s
Dec  6 01:58:24 np0005548731 ovn_metadata_agent[143960]:    timeout http-keep-alive 30s
Dec  6 01:58:24 np0005548731 ovn_metadata_agent[143960]: 
Dec  6 01:58:24 np0005548731 ovn_metadata_agent[143960]: 
Dec  6 01:58:24 np0005548731 ovn_metadata_agent[143960]: listen listener
Dec  6 01:58:24 np0005548731 ovn_metadata_agent[143960]:    bind 169.254.169.254:80
Dec  6 01:58:24 np0005548731 ovn_metadata_agent[143960]:    server metadata /var/lib/neutron/metadata_proxy
Dec  6 01:58:24 np0005548731 ovn_metadata_agent[143960]:    http-request add-header X-OVN-Network-ID 26dc03be-378a-4751-a903-cbe72fe09f14
Dec  6 01:58:24 np0005548731 ovn_metadata_agent[143960]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Dec  6 01:58:24 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:58:24.080 143965 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-26dc03be-378a-4751-a903-cbe72fe09f14', 'env', 'PROCESS_TAG=haproxy-26dc03be-378a-4751-a903-cbe72fe09f14', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/26dc03be-378a-4751-a903-cbe72fe09f14.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Dec  6 01:58:24 np0005548731 nova_compute[232433]: 2025-12-06 06:58:24.094 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 01:58:24 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 01:58:24 np0005548731 podman[236836]: 2025-12-06 06:58:24.445480933 +0000 UTC m=+0.024817721 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec  6 01:58:24 np0005548731 podman[236836]: 2025-12-06 06:58:24.595632655 +0000 UTC m=+0.174969423 container create d8f9c245c0d315ab5aa6f3f6623b2ced3308843993fad027f58d720b16506ea9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-26dc03be-378a-4751-a903-cbe72fe09f14, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec  6 01:58:24 np0005548731 systemd[1]: Started libpod-conmon-d8f9c245c0d315ab5aa6f3f6623b2ced3308843993fad027f58d720b16506ea9.scope.
Dec  6 01:58:24 np0005548731 systemd[1]: Started libcrun container.
Dec  6 01:58:24 np0005548731 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a4ba86cf1926c20f6b97a2c3054bb59a6dabb81b0343ad261978c24701b3d3a9/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec  6 01:58:24 np0005548731 nova_compute[232433]: 2025-12-06 06:58:24.889 232437 DEBUG nova.network.neutron [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] [instance: 012052dd-fee1-4ac6-bc17-336aff7b5db3] Successfully created port: 4cce2b8b-a836-47f0-a6e4-adec68eed375 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Dec  6 01:58:24 np0005548731 podman[236836]: 2025-12-06 06:58:24.935080792 +0000 UTC m=+0.514417580 container init d8f9c245c0d315ab5aa6f3f6623b2ced3308843993fad027f58d720b16506ea9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-26dc03be-378a-4751-a903-cbe72fe09f14, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Dec  6 01:58:24 np0005548731 podman[236836]: 2025-12-06 06:58:24.942928875 +0000 UTC m=+0.522265643 container start d8f9c245c0d315ab5aa6f3f6623b2ced3308843993fad027f58d720b16506ea9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-26dc03be-378a-4751-a903-cbe72fe09f14, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS)
Dec  6 01:58:24 np0005548731 neutron-haproxy-ovnmeta-26dc03be-378a-4751-a903-cbe72fe09f14[236851]: [NOTICE]   (236855) : New worker (236857) forked
Dec  6 01:58:24 np0005548731 neutron-haproxy-ovnmeta-26dc03be-378a-4751-a903-cbe72fe09f14[236851]: [NOTICE]   (236855) : Loading success.
Dec  6 01:58:25 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:58:25 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:58:25 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:58:25.193 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:58:25 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:58:25 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:58:25 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:58:25.353 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:58:25 np0005548731 nova_compute[232433]: 2025-12-06 06:58:25.734 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 01:58:26 np0005548731 nova_compute[232433]: 2025-12-06 06:58:26.320 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 01:58:26 np0005548731 nova_compute[232433]: 2025-12-06 06:58:26.336 232437 DEBUG nova.network.neutron [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] [instance: 012052dd-fee1-4ac6-bc17-336aff7b5db3] Successfully updated port: 4cce2b8b-a836-47f0-a6e4-adec68eed375 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Dec  6 01:58:26 np0005548731 nova_compute[232433]: 2025-12-06 06:58:26.378 232437 DEBUG oslo_concurrency.lockutils [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] Acquiring lock "refresh_cache-012052dd-fee1-4ac6-bc17-336aff7b5db3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 01:58:26 np0005548731 nova_compute[232433]: 2025-12-06 06:58:26.379 232437 DEBUG oslo_concurrency.lockutils [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] Acquired lock "refresh_cache-012052dd-fee1-4ac6-bc17-336aff7b5db3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 01:58:26 np0005548731 nova_compute[232433]: 2025-12-06 06:58:26.379 232437 DEBUG nova.network.neutron [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] [instance: 012052dd-fee1-4ac6-bc17-336aff7b5db3] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec  6 01:58:26 np0005548731 nova_compute[232433]: 2025-12-06 06:58:26.635 232437 DEBUG nova.network.neutron [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] [instance: 012052dd-fee1-4ac6-bc17-336aff7b5db3] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec  6 01:58:26 np0005548731 nova_compute[232433]: 2025-12-06 06:58:26.642 232437 DEBUG nova.network.neutron [req-8ecfbdd4-be45-4198-85c8-52b2ec71f17c req-71fad14f-a9c4-4267-8730-cf709fb0c2b7 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 1dd278c9-e3ac-480b-9a02-2e580da2d211] Updated VIF entry in instance network info cache for port 1c43f5f0-b72d-49f6-9dc5-10367318c26c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  6 01:58:26 np0005548731 nova_compute[232433]: 2025-12-06 06:58:26.643 232437 DEBUG nova.network.neutron [req-8ecfbdd4-be45-4198-85c8-52b2ec71f17c req-71fad14f-a9c4-4267-8730-cf709fb0c2b7 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 1dd278c9-e3ac-480b-9a02-2e580da2d211] Updating instance_info_cache with network_info: [{"id": "1c43f5f0-b72d-49f6-9dc5-10367318c26c", "address": "fa:16:3e:20:e0:e2", "network": {"id": "26dc03be-378a-4751-a903-cbe72fe09f14", "bridge": "br-int", "label": "tempest-VolumesAssistedSnapshotsTest-1268666002-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.197", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bccff877c0f34760846113a84f5b0709", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1c43f5f0-b7", "ovs_interfaceid": "1c43f5f0-b72d-49f6-9dc5-10367318c26c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 01:58:26 np0005548731 nova_compute[232433]: 2025-12-06 06:58:26.661 232437 DEBUG oslo_concurrency.lockutils [req-8ecfbdd4-be45-4198-85c8-52b2ec71f17c req-71fad14f-a9c4-4267-8730-cf709fb0c2b7 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-1dd278c9-e3ac-480b-9a02-2e580da2d211" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 01:58:27 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:58:27 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 01:58:27 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:58:27.194 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 01:58:27 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:58:27 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:58:27 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:58:27.355 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:58:28 np0005548731 nova_compute[232433]: 2025-12-06 06:58:28.193 232437 DEBUG nova.compute.manager [req-1f375069-a0c0-4ada-910d-7ddb5314bb0d req-500da6b8-2113-4408-9da1-2718a3465ad5 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 012052dd-fee1-4ac6-bc17-336aff7b5db3] Received event network-changed-4cce2b8b-a836-47f0-a6e4-adec68eed375 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 01:58:28 np0005548731 nova_compute[232433]: 2025-12-06 06:58:28.194 232437 DEBUG nova.compute.manager [req-1f375069-a0c0-4ada-910d-7ddb5314bb0d req-500da6b8-2113-4408-9da1-2718a3465ad5 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 012052dd-fee1-4ac6-bc17-336aff7b5db3] Refreshing instance network info cache due to event network-changed-4cce2b8b-a836-47f0-a6e4-adec68eed375. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  6 01:58:28 np0005548731 nova_compute[232433]: 2025-12-06 06:58:28.194 232437 DEBUG oslo_concurrency.lockutils [req-1f375069-a0c0-4ada-910d-7ddb5314bb0d req-500da6b8-2113-4408-9da1-2718a3465ad5 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-012052dd-fee1-4ac6-bc17-336aff7b5db3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 01:58:28 np0005548731 nova_compute[232433]: 2025-12-06 06:58:28.477 232437 DEBUG nova.network.neutron [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] [instance: 012052dd-fee1-4ac6-bc17-336aff7b5db3] Updating instance_info_cache with network_info: [{"id": "4cce2b8b-a836-47f0-a6e4-adec68eed375", "address": "fa:16:3e:8c:5e:b5", "network": {"id": "fa805a2c-a79c-458b-b658-8e0534714a02", "bridge": "br-int", "label": "auto_allocated_network", "subnets": [{"cidr": "fdfe:381f:8400::/64", "dns": [], "gateway": {"address": "fdfe:381f:8400::1", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "fdfe:381f:8400::2dd", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true}}, {"cidr": "10.1.0.0/26", "dns": [], "gateway": {"address": "10.1.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.0.50", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "066c314d67e347f6a49e8e3e27998441", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4cce2b8b-a8", "ovs_interfaceid": "4cce2b8b-a836-47f0-a6e4-adec68eed375", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 01:58:28 np0005548731 nova_compute[232433]: 2025-12-06 06:58:28.496 232437 DEBUG oslo_concurrency.lockutils [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] Releasing lock "refresh_cache-012052dd-fee1-4ac6-bc17-336aff7b5db3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 01:58:28 np0005548731 nova_compute[232433]: 2025-12-06 06:58:28.497 232437 DEBUG nova.compute.manager [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] [instance: 012052dd-fee1-4ac6-bc17-336aff7b5db3] Instance network_info: |[{"id": "4cce2b8b-a836-47f0-a6e4-adec68eed375", "address": "fa:16:3e:8c:5e:b5", "network": {"id": "fa805a2c-a79c-458b-b658-8e0534714a02", "bridge": "br-int", "label": "auto_allocated_network", "subnets": [{"cidr": "fdfe:381f:8400::/64", "dns": [], "gateway": {"address": "fdfe:381f:8400::1", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "fdfe:381f:8400::2dd", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true}}, {"cidr": "10.1.0.0/26", "dns": [], "gateway": {"address": "10.1.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.0.50", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "066c314d67e347f6a49e8e3e27998441", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4cce2b8b-a8", "ovs_interfaceid": "4cce2b8b-a836-47f0-a6e4-adec68eed375", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Dec  6 01:58:28 np0005548731 nova_compute[232433]: 2025-12-06 06:58:28.499 232437 DEBUG oslo_concurrency.lockutils [req-1f375069-a0c0-4ada-910d-7ddb5314bb0d req-500da6b8-2113-4408-9da1-2718a3465ad5 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-012052dd-fee1-4ac6-bc17-336aff7b5db3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 01:58:28 np0005548731 nova_compute[232433]: 2025-12-06 06:58:28.500 232437 DEBUG nova.network.neutron [req-1f375069-a0c0-4ada-910d-7ddb5314bb0d req-500da6b8-2113-4408-9da1-2718a3465ad5 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 012052dd-fee1-4ac6-bc17-336aff7b5db3] Refreshing network info cache for port 4cce2b8b-a836-47f0-a6e4-adec68eed375 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  6 01:58:28 np0005548731 nova_compute[232433]: 2025-12-06 06:58:28.507 232437 DEBUG nova.virt.libvirt.driver [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] [instance: 012052dd-fee1-4ac6-bc17-336aff7b5db3] Start _get_guest_xml network_info=[{"id": "4cce2b8b-a836-47f0-a6e4-adec68eed375", "address": "fa:16:3e:8c:5e:b5", "network": {"id": "fa805a2c-a79c-458b-b658-8e0534714a02", "bridge": "br-int", "label": "auto_allocated_network", "subnets": [{"cidr": "fdfe:381f:8400::/64", "dns": [], "gateway": {"address": "fdfe:381f:8400::1", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "fdfe:381f:8400::2dd", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true}}, {"cidr": "10.1.0.0/26", "dns": [], "gateway": {"address": "10.1.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.0.50", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "066c314d67e347f6a49e8e3e27998441", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4cce2b8b-a8", "ovs_interfaceid": "4cce2b8b-a836-47f0-a6e4-adec68eed375", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'device_type': 'disk', 'encryption_secret_uuid': None, 'disk_bus': 'virtio', 'boot_index': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'encryption_options': None, 'size': 0, 'encrypted': False, 'image_id': '6efab05d-c7cf-4770-a5c3-c806a2739063'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec  6 01:58:28 np0005548731 nova_compute[232433]: 2025-12-06 06:58:28.516 232437 WARNING nova.virt.libvirt.driver [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  6 01:58:28 np0005548731 nova_compute[232433]: 2025-12-06 06:58:28.532 232437 DEBUG nova.virt.libvirt.host [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec  6 01:58:28 np0005548731 nova_compute[232433]: 2025-12-06 06:58:28.534 232437 DEBUG nova.virt.libvirt.host [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec  6 01:58:28 np0005548731 nova_compute[232433]: 2025-12-06 06:58:28.539 232437 DEBUG nova.virt.libvirt.host [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec  6 01:58:28 np0005548731 nova_compute[232433]: 2025-12-06 06:58:28.539 232437 DEBUG nova.virt.libvirt.host [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec  6 01:58:28 np0005548731 nova_compute[232433]: 2025-12-06 06:58:28.541 232437 DEBUG nova.virt.libvirt.driver [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec  6 01:58:28 np0005548731 nova_compute[232433]: 2025-12-06 06:58:28.542 232437 DEBUG nova.virt.hardware [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-06T06:56:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='25848a18-11d9-4f11-80b5-5d005675c76d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec  6 01:58:28 np0005548731 nova_compute[232433]: 2025-12-06 06:58:28.543 232437 DEBUG nova.virt.hardware [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec  6 01:58:28 np0005548731 nova_compute[232433]: 2025-12-06 06:58:28.543 232437 DEBUG nova.virt.hardware [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec  6 01:58:28 np0005548731 nova_compute[232433]: 2025-12-06 06:58:28.543 232437 DEBUG nova.virt.hardware [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec  6 01:58:28 np0005548731 nova_compute[232433]: 2025-12-06 06:58:28.543 232437 DEBUG nova.virt.hardware [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec  6 01:58:28 np0005548731 nova_compute[232433]: 2025-12-06 06:58:28.544 232437 DEBUG nova.virt.hardware [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec  6 01:58:28 np0005548731 nova_compute[232433]: 2025-12-06 06:58:28.545 232437 DEBUG nova.virt.hardware [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec  6 01:58:28 np0005548731 nova_compute[232433]: 2025-12-06 06:58:28.546 232437 DEBUG nova.virt.hardware [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec  6 01:58:28 np0005548731 nova_compute[232433]: 2025-12-06 06:58:28.546 232437 DEBUG nova.virt.hardware [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec  6 01:58:28 np0005548731 nova_compute[232433]: 2025-12-06 06:58:28.547 232437 DEBUG nova.virt.hardware [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec  6 01:58:28 np0005548731 nova_compute[232433]: 2025-12-06 06:58:28.548 232437 DEBUG nova.virt.hardware [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec  6 01:58:28 np0005548731 nova_compute[232433]: 2025-12-06 06:58:28.551 232437 DEBUG oslo_concurrency.processutils [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 01:58:28 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  6 01:58:28 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1018108383' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  6 01:58:28 np0005548731 nova_compute[232433]: 2025-12-06 06:58:28.993 232437 DEBUG oslo_concurrency.processutils [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.442s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 01:58:29 np0005548731 nova_compute[232433]: 2025-12-06 06:58:29.027 232437 DEBUG nova.storage.rbd_utils [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] rbd image 012052dd-fee1-4ac6-bc17-336aff7b5db3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 01:58:29 np0005548731 nova_compute[232433]: 2025-12-06 06:58:29.033 232437 DEBUG oslo_concurrency.processutils [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 01:58:29 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:58:29 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 01:58:29 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:58:29.196 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 01:58:29 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:58:29 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:58:29 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:58:29.356 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:58:29 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 01:58:29 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  6 01:58:29 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/788469425' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  6 01:58:29 np0005548731 nova_compute[232433]: 2025-12-06 06:58:29.486 232437 DEBUG oslo_concurrency.processutils [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.452s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 01:58:29 np0005548731 nova_compute[232433]: 2025-12-06 06:58:29.488 232437 DEBUG nova.virt.libvirt.vif [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T06:57:37Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-tempest.common.compute-instance-2024863607-1',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-2024863607-1',id=3,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='066c314d67e347f6a49e8e3e27998441',ramdisk_id='',reservation_id='r-bb9k3jjn',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AutoAllocateNetworkTest-1572395875',owner_user_name='tempest-AutoAllocateNetworkTest-1572395875-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T06:57:48Z,user_data=None,user_id='e5122185c6194067bdb22d6ba8205dca',uuid=012052dd-fee1-4ac6-bc17-336aff7b5db3,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "4cce2b8b-a836-47f0-a6e4-adec68eed375", "address": "fa:16:3e:8c:5e:b5", "network": {"id": "fa805a2c-a79c-458b-b658-8e0534714a02", "bridge": "br-int", "label": "auto_allocated_network", "subnets": [{"cidr": "fdfe:381f:8400::/64", "dns": [], "gateway": {"address": "fdfe:381f:8400::1", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "fdfe:381f:8400::2dd", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true}}, {"cidr": "10.1.0.0/26", "dns": [], "gateway": {"address": "10.1.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.0.50", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "066c314d67e347f6a49e8e3e27998441", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4cce2b8b-a8", "ovs_interfaceid": "4cce2b8b-a836-47f0-a6e4-adec68eed375", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec  6 01:58:29 np0005548731 nova_compute[232433]: 2025-12-06 06:58:29.488 232437 DEBUG nova.network.os_vif_util [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] Converting VIF {"id": "4cce2b8b-a836-47f0-a6e4-adec68eed375", "address": "fa:16:3e:8c:5e:b5", "network": {"id": "fa805a2c-a79c-458b-b658-8e0534714a02", "bridge": "br-int", "label": "auto_allocated_network", "subnets": [{"cidr": "fdfe:381f:8400::/64", "dns": [], "gateway": {"address": "fdfe:381f:8400::1", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "fdfe:381f:8400::2dd", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true}}, {"cidr": "10.1.0.0/26", "dns": [], "gateway": {"address": "10.1.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.0.50", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "066c314d67e347f6a49e8e3e27998441", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4cce2b8b-a8", "ovs_interfaceid": "4cce2b8b-a836-47f0-a6e4-adec68eed375", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 01:58:29 np0005548731 nova_compute[232433]: 2025-12-06 06:58:29.489 232437 DEBUG nova.network.os_vif_util [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8c:5e:b5,bridge_name='br-int',has_traffic_filtering=True,id=4cce2b8b-a836-47f0-a6e4-adec68eed375,network=Network(fa805a2c-a79c-458b-b658-8e0534714a02),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4cce2b8b-a8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 01:58:29 np0005548731 nova_compute[232433]: 2025-12-06 06:58:29.491 232437 DEBUG nova.objects.instance [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] Lazy-loading 'pci_devices' on Instance uuid 012052dd-fee1-4ac6-bc17-336aff7b5db3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 01:58:29 np0005548731 nova_compute[232433]: 2025-12-06 06:58:29.517 232437 DEBUG nova.virt.libvirt.driver [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] [instance: 012052dd-fee1-4ac6-bc17-336aff7b5db3] End _get_guest_xml xml=<domain type="kvm">
Dec  6 01:58:29 np0005548731 nova_compute[232433]:  <uuid>012052dd-fee1-4ac6-bc17-336aff7b5db3</uuid>
Dec  6 01:58:29 np0005548731 nova_compute[232433]:  <name>instance-00000003</name>
Dec  6 01:58:29 np0005548731 nova_compute[232433]:  <memory>131072</memory>
Dec  6 01:58:29 np0005548731 nova_compute[232433]:  <vcpu>1</vcpu>
Dec  6 01:58:29 np0005548731 nova_compute[232433]:  <metadata>
Dec  6 01:58:29 np0005548731 nova_compute[232433]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec  6 01:58:29 np0005548731 nova_compute[232433]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec  6 01:58:29 np0005548731 nova_compute[232433]:      <nova:name>tempest-tempest.common.compute-instance-2024863607-1</nova:name>
Dec  6 01:58:29 np0005548731 nova_compute[232433]:      <nova:creationTime>2025-12-06 06:58:28</nova:creationTime>
Dec  6 01:58:29 np0005548731 nova_compute[232433]:      <nova:flavor name="m1.nano">
Dec  6 01:58:29 np0005548731 nova_compute[232433]:        <nova:memory>128</nova:memory>
Dec  6 01:58:29 np0005548731 nova_compute[232433]:        <nova:disk>1</nova:disk>
Dec  6 01:58:29 np0005548731 nova_compute[232433]:        <nova:swap>0</nova:swap>
Dec  6 01:58:29 np0005548731 nova_compute[232433]:        <nova:ephemeral>0</nova:ephemeral>
Dec  6 01:58:29 np0005548731 nova_compute[232433]:        <nova:vcpus>1</nova:vcpus>
Dec  6 01:58:29 np0005548731 nova_compute[232433]:      </nova:flavor>
Dec  6 01:58:29 np0005548731 nova_compute[232433]:      <nova:owner>
Dec  6 01:58:29 np0005548731 nova_compute[232433]:        <nova:user uuid="e5122185c6194067bdb22d6ba8205dca">tempest-AutoAllocateNetworkTest-1572395875-project-member</nova:user>
Dec  6 01:58:29 np0005548731 nova_compute[232433]:        <nova:project uuid="066c314d67e347f6a49e8e3e27998441">tempest-AutoAllocateNetworkTest-1572395875</nova:project>
Dec  6 01:58:29 np0005548731 nova_compute[232433]:      </nova:owner>
Dec  6 01:58:29 np0005548731 nova_compute[232433]:      <nova:root type="image" uuid="6efab05d-c7cf-4770-a5c3-c806a2739063"/>
Dec  6 01:58:29 np0005548731 nova_compute[232433]:      <nova:ports>
Dec  6 01:58:29 np0005548731 nova_compute[232433]:        <nova:port uuid="4cce2b8b-a836-47f0-a6e4-adec68eed375">
Dec  6 01:58:29 np0005548731 nova_compute[232433]:          <nova:ip type="fixed" address="fdfe:381f:8400::2dd" ipVersion="6"/>
Dec  6 01:58:29 np0005548731 nova_compute[232433]:          <nova:ip type="fixed" address="10.1.0.50" ipVersion="4"/>
Dec  6 01:58:29 np0005548731 nova_compute[232433]:        </nova:port>
Dec  6 01:58:29 np0005548731 nova_compute[232433]:      </nova:ports>
Dec  6 01:58:29 np0005548731 nova_compute[232433]:    </nova:instance>
Dec  6 01:58:29 np0005548731 nova_compute[232433]:  </metadata>
Dec  6 01:58:29 np0005548731 nova_compute[232433]:  <sysinfo type="smbios">
Dec  6 01:58:29 np0005548731 nova_compute[232433]:    <system>
Dec  6 01:58:29 np0005548731 nova_compute[232433]:      <entry name="manufacturer">RDO</entry>
Dec  6 01:58:29 np0005548731 nova_compute[232433]:      <entry name="product">OpenStack Compute</entry>
Dec  6 01:58:29 np0005548731 nova_compute[232433]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec  6 01:58:29 np0005548731 nova_compute[232433]:      <entry name="serial">012052dd-fee1-4ac6-bc17-336aff7b5db3</entry>
Dec  6 01:58:29 np0005548731 nova_compute[232433]:      <entry name="uuid">012052dd-fee1-4ac6-bc17-336aff7b5db3</entry>
Dec  6 01:58:29 np0005548731 nova_compute[232433]:      <entry name="family">Virtual Machine</entry>
Dec  6 01:58:29 np0005548731 nova_compute[232433]:    </system>
Dec  6 01:58:29 np0005548731 nova_compute[232433]:  </sysinfo>
Dec  6 01:58:29 np0005548731 nova_compute[232433]:  <os>
Dec  6 01:58:29 np0005548731 nova_compute[232433]:    <type arch="x86_64" machine="q35">hvm</type>
Dec  6 01:58:29 np0005548731 nova_compute[232433]:    <boot dev="hd"/>
Dec  6 01:58:29 np0005548731 nova_compute[232433]:    <smbios mode="sysinfo"/>
Dec  6 01:58:29 np0005548731 nova_compute[232433]:  </os>
Dec  6 01:58:29 np0005548731 nova_compute[232433]:  <features>
Dec  6 01:58:29 np0005548731 nova_compute[232433]:    <acpi/>
Dec  6 01:58:29 np0005548731 nova_compute[232433]:    <apic/>
Dec  6 01:58:29 np0005548731 nova_compute[232433]:    <vmcoreinfo/>
Dec  6 01:58:29 np0005548731 nova_compute[232433]:  </features>
Dec  6 01:58:29 np0005548731 nova_compute[232433]:  <clock offset="utc">
Dec  6 01:58:29 np0005548731 nova_compute[232433]:    <timer name="pit" tickpolicy="delay"/>
Dec  6 01:58:29 np0005548731 nova_compute[232433]:    <timer name="rtc" tickpolicy="catchup"/>
Dec  6 01:58:29 np0005548731 nova_compute[232433]:    <timer name="hpet" present="no"/>
Dec  6 01:58:29 np0005548731 nova_compute[232433]:  </clock>
Dec  6 01:58:29 np0005548731 nova_compute[232433]:  <cpu mode="custom" match="exact">
Dec  6 01:58:29 np0005548731 nova_compute[232433]:    <model>Nehalem</model>
Dec  6 01:58:29 np0005548731 nova_compute[232433]:    <topology sockets="1" cores="1" threads="1"/>
Dec  6 01:58:29 np0005548731 nova_compute[232433]:  </cpu>
Dec  6 01:58:29 np0005548731 nova_compute[232433]:  <devices>
Dec  6 01:58:29 np0005548731 nova_compute[232433]:    <disk type="network" device="disk">
Dec  6 01:58:29 np0005548731 nova_compute[232433]:      <driver type="raw" cache="none"/>
Dec  6 01:58:29 np0005548731 nova_compute[232433]:      <source protocol="rbd" name="vms/012052dd-fee1-4ac6-bc17-336aff7b5db3_disk">
Dec  6 01:58:29 np0005548731 nova_compute[232433]:        <host name="192.168.122.100" port="6789"/>
Dec  6 01:58:29 np0005548731 nova_compute[232433]:        <host name="192.168.122.102" port="6789"/>
Dec  6 01:58:29 np0005548731 nova_compute[232433]:        <host name="192.168.122.101" port="6789"/>
Dec  6 01:58:29 np0005548731 nova_compute[232433]:      </source>
Dec  6 01:58:29 np0005548731 nova_compute[232433]:      <auth username="openstack">
Dec  6 01:58:29 np0005548731 nova_compute[232433]:        <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec  6 01:58:29 np0005548731 nova_compute[232433]:      </auth>
Dec  6 01:58:29 np0005548731 nova_compute[232433]:      <target dev="vda" bus="virtio"/>
Dec  6 01:58:29 np0005548731 nova_compute[232433]:    </disk>
Dec  6 01:58:29 np0005548731 nova_compute[232433]:    <disk type="network" device="cdrom">
Dec  6 01:58:29 np0005548731 nova_compute[232433]:      <driver type="raw" cache="none"/>
Dec  6 01:58:29 np0005548731 nova_compute[232433]:      <source protocol="rbd" name="vms/012052dd-fee1-4ac6-bc17-336aff7b5db3_disk.config">
Dec  6 01:58:29 np0005548731 nova_compute[232433]:        <host name="192.168.122.100" port="6789"/>
Dec  6 01:58:29 np0005548731 nova_compute[232433]:        <host name="192.168.122.102" port="6789"/>
Dec  6 01:58:29 np0005548731 nova_compute[232433]:        <host name="192.168.122.101" port="6789"/>
Dec  6 01:58:29 np0005548731 nova_compute[232433]:      </source>
Dec  6 01:58:29 np0005548731 nova_compute[232433]:      <auth username="openstack">
Dec  6 01:58:29 np0005548731 nova_compute[232433]:        <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec  6 01:58:29 np0005548731 nova_compute[232433]:      </auth>
Dec  6 01:58:29 np0005548731 nova_compute[232433]:      <target dev="sda" bus="sata"/>
Dec  6 01:58:29 np0005548731 nova_compute[232433]:    </disk>
Dec  6 01:58:29 np0005548731 nova_compute[232433]:    <interface type="ethernet">
Dec  6 01:58:29 np0005548731 nova_compute[232433]:      <mac address="fa:16:3e:8c:5e:b5"/>
Dec  6 01:58:29 np0005548731 nova_compute[232433]:      <model type="virtio"/>
Dec  6 01:58:29 np0005548731 nova_compute[232433]:      <driver name="vhost" rx_queue_size="512"/>
Dec  6 01:58:29 np0005548731 nova_compute[232433]:      <mtu size="1442"/>
Dec  6 01:58:29 np0005548731 nova_compute[232433]:      <target dev="tap4cce2b8b-a8"/>
Dec  6 01:58:29 np0005548731 nova_compute[232433]:    </interface>
Dec  6 01:58:29 np0005548731 nova_compute[232433]:    <serial type="pty">
Dec  6 01:58:29 np0005548731 nova_compute[232433]:      <log file="/var/lib/nova/instances/012052dd-fee1-4ac6-bc17-336aff7b5db3/console.log" append="off"/>
Dec  6 01:58:29 np0005548731 nova_compute[232433]:    </serial>
Dec  6 01:58:29 np0005548731 nova_compute[232433]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec  6 01:58:29 np0005548731 nova_compute[232433]:    <video>
Dec  6 01:58:29 np0005548731 nova_compute[232433]:      <model type="virtio"/>
Dec  6 01:58:29 np0005548731 nova_compute[232433]:    </video>
Dec  6 01:58:29 np0005548731 nova_compute[232433]:    <input type="tablet" bus="usb"/>
Dec  6 01:58:29 np0005548731 nova_compute[232433]:    <rng model="virtio">
Dec  6 01:58:29 np0005548731 nova_compute[232433]:      <backend model="random">/dev/urandom</backend>
Dec  6 01:58:29 np0005548731 nova_compute[232433]:    </rng>
Dec  6 01:58:29 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root"/>
Dec  6 01:58:29 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 01:58:29 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 01:58:29 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 01:58:29 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 01:58:29 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 01:58:29 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 01:58:29 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 01:58:29 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 01:58:29 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 01:58:29 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 01:58:29 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 01:58:29 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 01:58:29 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 01:58:29 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 01:58:29 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 01:58:29 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 01:58:29 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 01:58:29 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 01:58:29 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 01:58:29 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 01:58:29 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 01:58:29 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 01:58:29 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 01:58:29 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 01:58:29 np0005548731 nova_compute[232433]:    <controller type="usb" index="0"/>
Dec  6 01:58:29 np0005548731 nova_compute[232433]:    <memballoon model="virtio">
Dec  6 01:58:29 np0005548731 nova_compute[232433]:      <stats period="10"/>
Dec  6 01:58:29 np0005548731 nova_compute[232433]:    </memballoon>
Dec  6 01:58:29 np0005548731 nova_compute[232433]:  </devices>
Dec  6 01:58:29 np0005548731 nova_compute[232433]: </domain>
Dec  6 01:58:29 np0005548731 nova_compute[232433]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec  6 01:58:29 np0005548731 nova_compute[232433]: 2025-12-06 06:58:29.519 232437 DEBUG nova.compute.manager [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] [instance: 012052dd-fee1-4ac6-bc17-336aff7b5db3] Preparing to wait for external event network-vif-plugged-4cce2b8b-a836-47f0-a6e4-adec68eed375 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec  6 01:58:29 np0005548731 nova_compute[232433]: 2025-12-06 06:58:29.520 232437 DEBUG oslo_concurrency.lockutils [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] Acquiring lock "012052dd-fee1-4ac6-bc17-336aff7b5db3-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 01:58:29 np0005548731 nova_compute[232433]: 2025-12-06 06:58:29.521 232437 DEBUG oslo_concurrency.lockutils [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] Lock "012052dd-fee1-4ac6-bc17-336aff7b5db3-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 01:58:29 np0005548731 nova_compute[232433]: 2025-12-06 06:58:29.522 232437 DEBUG oslo_concurrency.lockutils [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] Lock "012052dd-fee1-4ac6-bc17-336aff7b5db3-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 01:58:29 np0005548731 nova_compute[232433]: 2025-12-06 06:58:29.523 232437 DEBUG nova.virt.libvirt.vif [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T06:57:37Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-tempest.common.compute-instance-2024863607-1',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-2024863607-1',id=3,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='066c314d67e347f6a49e8e3e27998441',ramdisk_id='',reservation_id='r-bb9k3jjn',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AutoAllocateNetworkTest-1572395875',owner_user_name='tempest-AutoAllocateNetworkTest-1572395875-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T06:57:48Z,user_data=None,user_id='e5122185c6194067bdb22d6ba8205dca',uuid=012052dd-fee1-4ac6-bc17-336aff7b5db3,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "4cce2b8b-a836-47f0-a6e4-adec68eed375", "address": "fa:16:3e:8c:5e:b5", "network": {"id": "fa805a2c-a79c-458b-b658-8e0534714a02", "bridge": "br-int", "label": "auto_allocated_network", "subnets": [{"cidr": "fdfe:381f:8400::/64", "dns": [], "gateway": {"address": "fdfe:381f:8400::1", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "fdfe:381f:8400::2dd", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true}}, {"cidr": "10.1.0.0/26", "dns": [], "gateway": {"address": "10.1.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.0.50", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "066c314d67e347f6a49e8e3e27998441", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4cce2b8b-a8", "ovs_interfaceid": "4cce2b8b-a836-47f0-a6e4-adec68eed375", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec  6 01:58:29 np0005548731 nova_compute[232433]: 2025-12-06 06:58:29.523 232437 DEBUG nova.network.os_vif_util [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] Converting VIF {"id": "4cce2b8b-a836-47f0-a6e4-adec68eed375", "address": "fa:16:3e:8c:5e:b5", "network": {"id": "fa805a2c-a79c-458b-b658-8e0534714a02", "bridge": "br-int", "label": "auto_allocated_network", "subnets": [{"cidr": "fdfe:381f:8400::/64", "dns": [], "gateway": {"address": "fdfe:381f:8400::1", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "fdfe:381f:8400::2dd", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true}}, {"cidr": "10.1.0.0/26", "dns": [], "gateway": {"address": "10.1.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.0.50", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "066c314d67e347f6a49e8e3e27998441", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4cce2b8b-a8", "ovs_interfaceid": "4cce2b8b-a836-47f0-a6e4-adec68eed375", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 01:58:29 np0005548731 nova_compute[232433]: 2025-12-06 06:58:29.524 232437 DEBUG nova.network.os_vif_util [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8c:5e:b5,bridge_name='br-int',has_traffic_filtering=True,id=4cce2b8b-a836-47f0-a6e4-adec68eed375,network=Network(fa805a2c-a79c-458b-b658-8e0534714a02),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4cce2b8b-a8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 01:58:29 np0005548731 nova_compute[232433]: 2025-12-06 06:58:29.524 232437 DEBUG os_vif [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:8c:5e:b5,bridge_name='br-int',has_traffic_filtering=True,id=4cce2b8b-a836-47f0-a6e4-adec68eed375,network=Network(fa805a2c-a79c-458b-b658-8e0534714a02),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4cce2b8b-a8') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec  6 01:58:29 np0005548731 nova_compute[232433]: 2025-12-06 06:58:29.525 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 01:58:29 np0005548731 nova_compute[232433]: 2025-12-06 06:58:29.526 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 01:58:29 np0005548731 nova_compute[232433]: 2025-12-06 06:58:29.526 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  6 01:58:29 np0005548731 nova_compute[232433]: 2025-12-06 06:58:29.534 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 01:58:29 np0005548731 nova_compute[232433]: 2025-12-06 06:58:29.535 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4cce2b8b-a8, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 01:58:29 np0005548731 nova_compute[232433]: 2025-12-06 06:58:29.537 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap4cce2b8b-a8, col_values=(('external_ids', {'iface-id': '4cce2b8b-a836-47f0-a6e4-adec68eed375', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:8c:5e:b5', 'vm-uuid': '012052dd-fee1-4ac6-bc17-336aff7b5db3'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 01:58:29 np0005548731 NetworkManager[49182]: <info>  [1765004309.5396] manager: (tap4cce2b8b-a8): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/31)
Dec  6 01:58:29 np0005548731 nova_compute[232433]: 2025-12-06 06:58:29.542 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 01:58:29 np0005548731 nova_compute[232433]: 2025-12-06 06:58:29.551 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  6 01:58:29 np0005548731 nova_compute[232433]: 2025-12-06 06:58:29.552 232437 INFO os_vif [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:8c:5e:b5,bridge_name='br-int',has_traffic_filtering=True,id=4cce2b8b-a836-47f0-a6e4-adec68eed375,network=Network(fa805a2c-a79c-458b-b658-8e0534714a02),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4cce2b8b-a8')#033[00m
Dec  6 01:58:29 np0005548731 nova_compute[232433]: 2025-12-06 06:58:29.620 232437 DEBUG nova.virt.libvirt.driver [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  6 01:58:29 np0005548731 nova_compute[232433]: 2025-12-06 06:58:29.621 232437 DEBUG nova.virt.libvirt.driver [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  6 01:58:29 np0005548731 nova_compute[232433]: 2025-12-06 06:58:29.621 232437 DEBUG nova.virt.libvirt.driver [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] No VIF found with MAC fa:16:3e:8c:5e:b5, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec  6 01:58:29 np0005548731 nova_compute[232433]: 2025-12-06 06:58:29.622 232437 INFO nova.virt.libvirt.driver [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] [instance: 012052dd-fee1-4ac6-bc17-336aff7b5db3] Using config drive#033[00m
Dec  6 01:58:29 np0005548731 nova_compute[232433]: 2025-12-06 06:58:29.664 232437 DEBUG nova.storage.rbd_utils [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] rbd image 012052dd-fee1-4ac6-bc17-336aff7b5db3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 01:58:30 np0005548731 nova_compute[232433]: 2025-12-06 06:58:30.111 232437 INFO nova.virt.libvirt.driver [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] [instance: 012052dd-fee1-4ac6-bc17-336aff7b5db3] Creating config drive at /var/lib/nova/instances/012052dd-fee1-4ac6-bc17-336aff7b5db3/disk.config
Dec  6 01:58:30 np0005548731 nova_compute[232433]: 2025-12-06 06:58:30.117 232437 DEBUG oslo_concurrency.processutils [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/012052dd-fee1-4ac6-bc17-336aff7b5db3/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpbsagpe0d execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  6 01:58:30 np0005548731 nova_compute[232433]: 2025-12-06 06:58:30.245 232437 DEBUG oslo_concurrency.processutils [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/012052dd-fee1-4ac6-bc17-336aff7b5db3/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpbsagpe0d" returned: 0 in 0.128s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  6 01:58:30 np0005548731 nova_compute[232433]: 2025-12-06 06:58:30.272 232437 DEBUG nova.storage.rbd_utils [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] rbd image 012052dd-fee1-4ac6-bc17-336aff7b5db3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec  6 01:58:30 np0005548731 nova_compute[232433]: 2025-12-06 06:58:30.276 232437 DEBUG oslo_concurrency.processutils [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/012052dd-fee1-4ac6-bc17-336aff7b5db3/disk.config 012052dd-fee1-4ac6-bc17-336aff7b5db3_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  6 01:58:30 np0005548731 nova_compute[232433]: 2025-12-06 06:58:30.556 232437 DEBUG oslo_concurrency.processutils [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/012052dd-fee1-4ac6-bc17-336aff7b5db3/disk.config 012052dd-fee1-4ac6-bc17-336aff7b5db3_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.280s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  6 01:58:30 np0005548731 nova_compute[232433]: 2025-12-06 06:58:30.558 232437 INFO nova.virt.libvirt.driver [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] [instance: 012052dd-fee1-4ac6-bc17-336aff7b5db3] Deleting local config drive /var/lib/nova/instances/012052dd-fee1-4ac6-bc17-336aff7b5db3/disk.config because it was imported into RBD.
Dec  6 01:58:30 np0005548731 kernel: tap4cce2b8b-a8: entered promiscuous mode
Dec  6 01:58:30 np0005548731 NetworkManager[49182]: <info>  [1765004310.5996] manager: (tap4cce2b8b-a8): new Tun device (/org/freedesktop/NetworkManager/Devices/32)
Dec  6 01:58:30 np0005548731 ovn_controller[133927]: 2025-12-06T06:58:30Z|00032|binding|INFO|Claiming lport 4cce2b8b-a836-47f0-a6e4-adec68eed375 for this chassis.
Dec  6 01:58:30 np0005548731 ovn_controller[133927]: 2025-12-06T06:58:30Z|00033|binding|INFO|4cce2b8b-a836-47f0-a6e4-adec68eed375: Claiming fa:16:3e:8c:5e:b5 10.1.0.50 fdfe:381f:8400::2dd
Dec  6 01:58:30 np0005548731 nova_compute[232433]: 2025-12-06 06:58:30.601 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 01:58:30 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:58:30.609 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8c:5e:b5 10.1.0.50 fdfe:381f:8400::2dd'], port_security=['fa:16:3e:8c:5e:b5 10.1.0.50 fdfe:381f:8400::2dd'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.1.0.50/26 fdfe:381f:8400::2dd/64', 'neutron:device_id': '012052dd-fee1-4ac6-bc17-336aff7b5db3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fa805a2c-a79c-458b-b658-8e0534714a02', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '066c314d67e347f6a49e8e3e27998441', 'neutron:revision_number': '2', 'neutron:security_group_ids': '460e28b2-b45f-4429-a2b9-8f57e45c4e5b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1a47e992-7383-43fa-bf34-745cbd8b74f1, chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], logical_port=4cce2b8b-a836-47f0-a6e4-adec68eed375) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec  6 01:58:30 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:58:30.613 143965 INFO neutron.agent.ovn.metadata.agent [-] Port 4cce2b8b-a836-47f0-a6e4-adec68eed375 in datapath fa805a2c-a79c-458b-b658-8e0534714a02 bound to our chassis
Dec  6 01:58:30 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:58:30.615 143965 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network fa805a2c-a79c-458b-b658-8e0534714a02
Dec  6 01:58:30 np0005548731 ovn_controller[133927]: 2025-12-06T06:58:30Z|00034|binding|INFO|Setting lport 4cce2b8b-a836-47f0-a6e4-adec68eed375 ovn-installed in OVS
Dec  6 01:58:30 np0005548731 ovn_controller[133927]: 2025-12-06T06:58:30Z|00035|binding|INFO|Setting lport 4cce2b8b-a836-47f0-a6e4-adec68eed375 up in Southbound
Dec  6 01:58:30 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:58:30.631 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[cf4acf44-13ea-4d3b-a002-928ad8ef800f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  6 01:58:30 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:58:30.633 143965 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapfa805a2c-a1 in ovnmeta-fa805a2c-a79c-458b-b658-8e0534714a02 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Dec  6 01:58:30 np0005548731 systemd-udevd[237005]: Network interface NamePolicy= disabled on kernel command line.
Dec  6 01:58:30 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:58:30.636 236668 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapfa805a2c-a0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec  6 01:58:30 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:58:30.636 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[cf2f2ea9-eea5-4b07-a85d-14c6b4fc701c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  6 01:58:30 np0005548731 nova_compute[232433]: 2025-12-06 06:58:30.641 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 01:58:30 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:58:30.642 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[66ca59e1-6b1e-4c6b-a2b6-3bbf143cecfd]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  6 01:58:30 np0005548731 systemd-machined[195355]: New machine qemu-3-instance-00000003.
Dec  6 01:58:30 np0005548731 NetworkManager[49182]: <info>  [1765004310.6573] device (tap4cce2b8b-a8): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec  6 01:58:30 np0005548731 NetworkManager[49182]: <info>  [1765004310.6580] device (tap4cce2b8b-a8): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec  6 01:58:30 np0005548731 systemd[1]: Started Virtual Machine qemu-3-instance-00000003.
Dec  6 01:58:30 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:58:30.670 144079 DEBUG oslo.privsep.daemon [-] privsep: reply[19d60757-e4d8-434f-b1e5-ea27d0ee2a96]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  6 01:58:30 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:58:30.686 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[980c397d-0fae-4c66-b749-4c4c6f882822]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  6 01:58:30 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:58:30.722 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[c7b45a40-fed5-40d2-a1ab-8cecf008653c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  6 01:58:30 np0005548731 NetworkManager[49182]: <info>  [1765004310.7280] manager: (tapfa805a2c-a0): new Veth device (/org/freedesktop/NetworkManager/Devices/33)
Dec  6 01:58:30 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:58:30.727 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[b87cc3a9-d5c7-40db-9ce2-bcadf370360c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  6 01:58:30 np0005548731 systemd-udevd[237010]: Network interface NamePolicy= disabled on kernel command line.
Dec  6 01:58:30 np0005548731 nova_compute[232433]: 2025-12-06 06:58:30.740 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 01:58:30 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:58:30.762 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[1d892644-84d0-4194-b147-79322818afe6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  6 01:58:30 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:58:30.765 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[6afd77e7-0cb8-459b-ba8d-d248e9224295]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  6 01:58:30 np0005548731 NetworkManager[49182]: <info>  [1765004310.7887] device (tapfa805a2c-a0): carrier: link connected
Dec  6 01:58:30 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:58:30.794 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[fc8c5934-5fbb-4cd7-80a2-bd70ad9d3cee]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  6 01:58:30 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:58:30.813 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[9b731103-5cae-4be2-9d4e-f86868cbb01d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfa805a2c-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:71:f0:92'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 15], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 457279, 'reachable_time': 16978, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 237039, 'error': None, 'target': 'ovnmeta-fa805a2c-a79c-458b-b658-8e0534714a02', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  6 01:58:30 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:58:30.830 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[66c557f6-1502-4843-859c-fa658889c58d]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe71:f092'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 457279, 'tstamp': 457279}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 237040, 'error': None, 'target': 'ovnmeta-fa805a2c-a79c-458b-b658-8e0534714a02', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  6 01:58:30 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:58:30.850 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[c34bdbfe-7401-4279-b9eb-a4dbb3f8ae5a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfa805a2c-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:71:f0:92'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 15], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 457279, 'reachable_time': 16978, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 237041, 'error': None, 'target': 'ovnmeta-fa805a2c-a79c-458b-b658-8e0534714a02', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  6 01:58:30 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:58:30.881 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[86d08fbb-aeb1-4906-a14e-010409a71f72]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  6 01:58:30 np0005548731 nova_compute[232433]: 2025-12-06 06:58:30.899 232437 DEBUG nova.network.neutron [req-1f375069-a0c0-4ada-910d-7ddb5314bb0d req-500da6b8-2113-4408-9da1-2718a3465ad5 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 012052dd-fee1-4ac6-bc17-336aff7b5db3] Updated VIF entry in instance network info cache for port 4cce2b8b-a836-47f0-a6e4-adec68eed375. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec  6 01:58:30 np0005548731 nova_compute[232433]: 2025-12-06 06:58:30.900 232437 DEBUG nova.network.neutron [req-1f375069-a0c0-4ada-910d-7ddb5314bb0d req-500da6b8-2113-4408-9da1-2718a3465ad5 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 012052dd-fee1-4ac6-bc17-336aff7b5db3] Updating instance_info_cache with network_info: [{"id": "4cce2b8b-a836-47f0-a6e4-adec68eed375", "address": "fa:16:3e:8c:5e:b5", "network": {"id": "fa805a2c-a79c-458b-b658-8e0534714a02", "bridge": "br-int", "label": "auto_allocated_network", "subnets": [{"cidr": "fdfe:381f:8400::/64", "dns": [], "gateway": {"address": "fdfe:381f:8400::1", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "fdfe:381f:8400::2dd", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true}}, {"cidr": "10.1.0.0/26", "dns": [], "gateway": {"address": "10.1.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.0.50", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "066c314d67e347f6a49e8e3e27998441", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4cce2b8b-a8", "ovs_interfaceid": "4cce2b8b-a836-47f0-a6e4-adec68eed375", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec  6 01:58:30 np0005548731 nova_compute[232433]: 2025-12-06 06:58:30.917 232437 DEBUG oslo_concurrency.lockutils [req-1f375069-a0c0-4ada-910d-7ddb5314bb0d req-500da6b8-2113-4408-9da1-2718a3465ad5 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-012052dd-fee1-4ac6-bc17-336aff7b5db3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec  6 01:58:30 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:58:30.948 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[6c1d61ad-2a69-4fea-b17e-d902c86d972a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  6 01:58:30 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:58:30.949 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfa805a2c-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec  6 01:58:30 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:58:30.949 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec  6 01:58:30 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:58:30.950 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfa805a2c-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec  6 01:58:30 np0005548731 nova_compute[232433]: 2025-12-06 06:58:30.951 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 01:58:30 np0005548731 kernel: tapfa805a2c-a0: entered promiscuous mode
Dec  6 01:58:30 np0005548731 NetworkManager[49182]: <info>  [1765004310.9537] manager: (tapfa805a2c-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/34)
Dec  6 01:58:30 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:58:30.957 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapfa805a2c-a0, col_values=(('external_ids', {'iface-id': '34b96bd0-4cf5-4098-a5e5-d0a4d39a953b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec  6 01:58:30 np0005548731 nova_compute[232433]: 2025-12-06 06:58:30.958 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 01:58:30 np0005548731 ovn_controller[133927]: 2025-12-06T06:58:30Z|00036|binding|INFO|Releasing lport 34b96bd0-4cf5-4098-a5e5-d0a4d39a953b from this chassis (sb_readonly=0)
Dec  6 01:58:30 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:58:30.962 143965 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/fa805a2c-a79c-458b-b658-8e0534714a02.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/fa805a2c-a79c-458b-b658-8e0534714a02.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec  6 01:58:30 np0005548731 nova_compute[232433]: 2025-12-06 06:58:30.962 232437 DEBUG nova.compute.manager [req-a360e09f-d48c-4e95-9549-2759c8ae0758 req-23552c0c-4fa0-4c16-9a6f-c4504adfc642 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 012052dd-fee1-4ac6-bc17-336aff7b5db3] Received event network-vif-plugged-4cce2b8b-a836-47f0-a6e4-adec68eed375 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec  6 01:58:30 np0005548731 nova_compute[232433]: 2025-12-06 06:58:30.962 232437 DEBUG oslo_concurrency.lockutils [req-a360e09f-d48c-4e95-9549-2759c8ae0758 req-23552c0c-4fa0-4c16-9a6f-c4504adfc642 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "012052dd-fee1-4ac6-bc17-336aff7b5db3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  6 01:58:30 np0005548731 nova_compute[232433]: 2025-12-06 06:58:30.963 232437 DEBUG oslo_concurrency.lockutils [req-a360e09f-d48c-4e95-9549-2759c8ae0758 req-23552c0c-4fa0-4c16-9a6f-c4504adfc642 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "012052dd-fee1-4ac6-bc17-336aff7b5db3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  6 01:58:30 np0005548731 nova_compute[232433]: 2025-12-06 06:58:30.963 232437 DEBUG oslo_concurrency.lockutils [req-a360e09f-d48c-4e95-9549-2759c8ae0758 req-23552c0c-4fa0-4c16-9a6f-c4504adfc642 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "012052dd-fee1-4ac6-bc17-336aff7b5db3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  6 01:58:30 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:58:30.963 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[d74da103-d27e-4a46-b5e9-7a39be4e3708]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  6 01:58:30 np0005548731 nova_compute[232433]: 2025-12-06 06:58:30.963 232437 DEBUG nova.compute.manager [req-a360e09f-d48c-4e95-9549-2759c8ae0758 req-23552c0c-4fa0-4c16-9a6f-c4504adfc642 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 012052dd-fee1-4ac6-bc17-336aff7b5db3] Processing event network-vif-plugged-4cce2b8b-a836-47f0-a6e4-adec68eed375 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec  6 01:58:30 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:58:30.964 143965 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec  6 01:58:30 np0005548731 ovn_metadata_agent[143960]: global
Dec  6 01:58:30 np0005548731 ovn_metadata_agent[143960]:    log         /dev/log local0 debug
Dec  6 01:58:30 np0005548731 ovn_metadata_agent[143960]:    log-tag     haproxy-metadata-proxy-fa805a2c-a79c-458b-b658-8e0534714a02
Dec  6 01:58:30 np0005548731 ovn_metadata_agent[143960]:    user        root
Dec  6 01:58:30 np0005548731 ovn_metadata_agent[143960]:    group       root
Dec  6 01:58:30 np0005548731 ovn_metadata_agent[143960]:    maxconn     1024
Dec  6 01:58:30 np0005548731 ovn_metadata_agent[143960]:    pidfile     /var/lib/neutron/external/pids/fa805a2c-a79c-458b-b658-8e0534714a02.pid.haproxy
Dec  6 01:58:30 np0005548731 ovn_metadata_agent[143960]:    daemon
Dec  6 01:58:30 np0005548731 ovn_metadata_agent[143960]: 
Dec  6 01:58:30 np0005548731 ovn_metadata_agent[143960]: defaults
Dec  6 01:58:30 np0005548731 ovn_metadata_agent[143960]:    log global
Dec  6 01:58:30 np0005548731 ovn_metadata_agent[143960]:    mode http
Dec  6 01:58:30 np0005548731 ovn_metadata_agent[143960]:    option httplog
Dec  6 01:58:30 np0005548731 ovn_metadata_agent[143960]:    option dontlognull
Dec  6 01:58:30 np0005548731 ovn_metadata_agent[143960]:    option http-server-close
Dec  6 01:58:30 np0005548731 ovn_metadata_agent[143960]:    option forwardfor
Dec  6 01:58:30 np0005548731 ovn_metadata_agent[143960]:    retries                 3
Dec  6 01:58:30 np0005548731 ovn_metadata_agent[143960]:    timeout http-request    30s
Dec  6 01:58:30 np0005548731 ovn_metadata_agent[143960]:    timeout connect         30s
Dec  6 01:58:30 np0005548731 ovn_metadata_agent[143960]:    timeout client          32s
Dec  6 01:58:30 np0005548731 ovn_metadata_agent[143960]:    timeout server          32s
Dec  6 01:58:30 np0005548731 ovn_metadata_agent[143960]:    timeout http-keep-alive 30s
Dec  6 01:58:30 np0005548731 ovn_metadata_agent[143960]: 
Dec  6 01:58:30 np0005548731 ovn_metadata_agent[143960]: 
Dec  6 01:58:30 np0005548731 ovn_metadata_agent[143960]: listen listener
Dec  6 01:58:30 np0005548731 ovn_metadata_agent[143960]:    bind 169.254.169.254:80
Dec  6 01:58:30 np0005548731 ovn_metadata_agent[143960]:    server metadata /var/lib/neutron/metadata_proxy
Dec  6 01:58:30 np0005548731 ovn_metadata_agent[143960]:    http-request add-header X-OVN-Network-ID fa805a2c-a79c-458b-b658-8e0534714a02
Dec  6 01:58:30 np0005548731 ovn_metadata_agent[143960]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec  6 01:58:30 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:58:30.964 143965 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-fa805a2c-a79c-458b-b658-8e0534714a02', 'env', 'PROCESS_TAG=haproxy-fa805a2c-a79c-458b-b658-8e0534714a02', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/fa805a2c-a79c-458b-b658-8e0534714a02.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Dec  6 01:58:30 np0005548731 nova_compute[232433]: 2025-12-06 06:58:30.975 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 01:58:31 np0005548731 nova_compute[232433]: 2025-12-06 06:58:31.145 232437 DEBUG nova.compute.manager [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] [instance: 012052dd-fee1-4ac6-bc17-336aff7b5db3] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec  6 01:58:31 np0005548731 nova_compute[232433]: 2025-12-06 06:58:31.146 232437 DEBUG nova.virt.driver [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Emitting event <LifecycleEvent: 1765004311.144435, 012052dd-fee1-4ac6-bc17-336aff7b5db3 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec  6 01:58:31 np0005548731 nova_compute[232433]: 2025-12-06 06:58:31.146 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 012052dd-fee1-4ac6-bc17-336aff7b5db3] VM Started (Lifecycle Event)
Dec  6 01:58:31 np0005548731 nova_compute[232433]: 2025-12-06 06:58:31.151 232437 DEBUG nova.virt.libvirt.driver [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] [instance: 012052dd-fee1-4ac6-bc17-336aff7b5db3] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec  6 01:58:31 np0005548731 nova_compute[232433]: 2025-12-06 06:58:31.155 232437 INFO nova.virt.libvirt.driver [-] [instance: 012052dd-fee1-4ac6-bc17-336aff7b5db3] Instance spawned successfully.
Dec  6 01:58:31 np0005548731 nova_compute[232433]: 2025-12-06 06:58:31.155 232437 DEBUG nova.virt.libvirt.driver [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] [instance: 012052dd-fee1-4ac6-bc17-336aff7b5db3] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec  6 01:58:31 np0005548731 nova_compute[232433]: 2025-12-06 06:58:31.167 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 012052dd-fee1-4ac6-bc17-336aff7b5db3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec  6 01:58:31 np0005548731 nova_compute[232433]: 2025-12-06 06:58:31.173 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 012052dd-fee1-4ac6-bc17-336aff7b5db3] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec  6 01:58:31 np0005548731 nova_compute[232433]: 2025-12-06 06:58:31.175 232437 DEBUG nova.virt.libvirt.driver [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] [instance: 012052dd-fee1-4ac6-bc17-336aff7b5db3] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec  6 01:58:31 np0005548731 nova_compute[232433]: 2025-12-06 06:58:31.176 232437 DEBUG nova.virt.libvirt.driver [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] [instance: 012052dd-fee1-4ac6-bc17-336aff7b5db3] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec  6 01:58:31 np0005548731 nova_compute[232433]: 2025-12-06 06:58:31.176 232437 DEBUG nova.virt.libvirt.driver [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] [instance: 012052dd-fee1-4ac6-bc17-336aff7b5db3] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec  6 01:58:31 np0005548731 nova_compute[232433]: 2025-12-06 06:58:31.176 232437 DEBUG nova.virt.libvirt.driver [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] [instance: 012052dd-fee1-4ac6-bc17-336aff7b5db3] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec  6 01:58:31 np0005548731 nova_compute[232433]: 2025-12-06 06:58:31.177 232437 DEBUG nova.virt.libvirt.driver [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] [instance: 012052dd-fee1-4ac6-bc17-336aff7b5db3] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec  6 01:58:31 np0005548731 nova_compute[232433]: 2025-12-06 06:58:31.177 232437 DEBUG nova.virt.libvirt.driver [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] [instance: 012052dd-fee1-4ac6-bc17-336aff7b5db3] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec  6 01:58:31 np0005548731 nova_compute[232433]: 2025-12-06 06:58:31.195 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 012052dd-fee1-4ac6-bc17-336aff7b5db3] During sync_power_state the instance has a pending task (spawning). Skip.
Dec  6 01:58:31 np0005548731 nova_compute[232433]: 2025-12-06 06:58:31.195 232437 DEBUG nova.virt.driver [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Emitting event <LifecycleEvent: 1765004311.1446388, 012052dd-fee1-4ac6-bc17-336aff7b5db3 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec  6 01:58:31 np0005548731 nova_compute[232433]: 2025-12-06 06:58:31.196 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 012052dd-fee1-4ac6-bc17-336aff7b5db3] VM Paused (Lifecycle Event)
Dec  6 01:58:31 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:58:31 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:58:31 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:58:31.199 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:58:31 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:58:31 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:58:31 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:58:31.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:58:31 np0005548731 podman[237115]: 2025-12-06 06:58:31.352313116 +0000 UTC m=+0.024238838 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec  6 01:58:31 np0005548731 nova_compute[232433]: 2025-12-06 06:58:31.458 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 012052dd-fee1-4ac6-bc17-336aff7b5db3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec  6 01:58:31 np0005548731 podman[237115]: 2025-12-06 06:58:31.461905591 +0000 UTC m=+0.133831313 container create bb00dbd2f6c163987d7ac2f24de93c45df99f88bb9d033b79388c935b32ff0a8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-fa805a2c-a79c-458b-b658-8e0534714a02, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3)
Dec  6 01:58:31 np0005548731 nova_compute[232433]: 2025-12-06 06:58:31.463 232437 DEBUG nova.virt.driver [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Emitting event <LifecycleEvent: 1765004311.1492918, 012052dd-fee1-4ac6-bc17-336aff7b5db3 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec  6 01:58:31 np0005548731 nova_compute[232433]: 2025-12-06 06:58:31.463 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 012052dd-fee1-4ac6-bc17-336aff7b5db3] VM Resumed (Lifecycle Event)
Dec  6 01:58:31 np0005548731 nova_compute[232433]: 2025-12-06 06:58:31.487 232437 INFO nova.compute.manager [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] [instance: 012052dd-fee1-4ac6-bc17-336aff7b5db3] Took 43.18 seconds to spawn the instance on the hypervisor.
Dec  6 01:58:31 np0005548731 nova_compute[232433]: 2025-12-06 06:58:31.487 232437 DEBUG nova.compute.manager [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] [instance: 012052dd-fee1-4ac6-bc17-336aff7b5db3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec  6 01:58:31 np0005548731 nova_compute[232433]: 2025-12-06 06:58:31.488 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 012052dd-fee1-4ac6-bc17-336aff7b5db3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec  6 01:58:31 np0005548731 nova_compute[232433]: 2025-12-06 06:58:31.498 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 012052dd-fee1-4ac6-bc17-336aff7b5db3] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec  6 01:58:31 np0005548731 systemd[1]: Started libpod-conmon-bb00dbd2f6c163987d7ac2f24de93c45df99f88bb9d033b79388c935b32ff0a8.scope.
Dec  6 01:58:31 np0005548731 systemd[1]: Started libcrun container.
Dec  6 01:58:31 np0005548731 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd5e0581e973c2d347e5c9713ff534f66f4d093302d3f7e5567a9a7ca339ad2f/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec  6 01:58:31 np0005548731 nova_compute[232433]: 2025-12-06 06:58:31.547 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 012052dd-fee1-4ac6-bc17-336aff7b5db3] During sync_power_state the instance has a pending task (spawning). Skip.
Dec  6 01:58:31 np0005548731 nova_compute[232433]: 2025-12-06 06:58:31.576 232437 INFO nova.compute.manager [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] [instance: 012052dd-fee1-4ac6-bc17-336aff7b5db3] Took 44.86 seconds to build instance.
Dec  6 01:58:31 np0005548731 nova_compute[232433]: 2025-12-06 06:58:31.618 232437 DEBUG oslo_concurrency.lockutils [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] Lock "012052dd-fee1-4ac6-bc17-336aff7b5db3" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 44.966s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  6 01:58:31 np0005548731 podman[237115]: 2025-12-06 06:58:31.797204385 +0000 UTC m=+0.469130127 container init bb00dbd2f6c163987d7ac2f24de93c45df99f88bb9d033b79388c935b32ff0a8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-fa805a2c-a79c-458b-b658-8e0534714a02, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  6 01:58:31 np0005548731 podman[237115]: 2025-12-06 06:58:31.804974267 +0000 UTC m=+0.476899999 container start bb00dbd2f6c163987d7ac2f24de93c45df99f88bb9d033b79388c935b32ff0a8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-fa805a2c-a79c-458b-b658-8e0534714a02, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  6 01:58:31 np0005548731 neutron-haproxy-ovnmeta-fa805a2c-a79c-458b-b658-8e0534714a02[237130]: [NOTICE]   (237134) : New worker (237136) forked
Dec  6 01:58:31 np0005548731 neutron-haproxy-ovnmeta-fa805a2c-a79c-458b-b658-8e0534714a02[237130]: [NOTICE]   (237134) : Loading success.
Dec  6 01:58:32 np0005548731 nova_compute[232433]: 2025-12-06 06:58:32.395 232437 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765004297.3942668, ccb415c8-1183-4921-bc8c-1c40722eeb98 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec  6 01:58:32 np0005548731 nova_compute[232433]: 2025-12-06 06:58:32.395 232437 INFO nova.compute.manager [-] [instance: ccb415c8-1183-4921-bc8c-1c40722eeb98] VM Stopped (Lifecycle Event)
Dec  6 01:58:32 np0005548731 nova_compute[232433]: 2025-12-06 06:58:32.627 232437 DEBUG nova.compute.manager [None req-753350ba-fade-4f33-9463-baab05727911 - - - - - -] [instance: ccb415c8-1183-4921-bc8c-1c40722eeb98] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec  6 01:58:33 np0005548731 nova_compute[232433]: 2025-12-06 06:58:33.154 232437 DEBUG nova.compute.manager [req-ed1dd30c-f51b-48f0-85c6-f856547dabc6 req-49fc577e-0a57-4fab-8d35-94190970dd6a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 012052dd-fee1-4ac6-bc17-336aff7b5db3] Received event network-vif-plugged-4cce2b8b-a836-47f0-a6e4-adec68eed375 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec  6 01:58:33 np0005548731 nova_compute[232433]: 2025-12-06 06:58:33.154 232437 DEBUG oslo_concurrency.lockutils [req-ed1dd30c-f51b-48f0-85c6-f856547dabc6 req-49fc577e-0a57-4fab-8d35-94190970dd6a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "012052dd-fee1-4ac6-bc17-336aff7b5db3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  6 01:58:33 np0005548731 nova_compute[232433]: 2025-12-06 06:58:33.154 232437 DEBUG oslo_concurrency.lockutils [req-ed1dd30c-f51b-48f0-85c6-f856547dabc6 req-49fc577e-0a57-4fab-8d35-94190970dd6a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "012052dd-fee1-4ac6-bc17-336aff7b5db3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  6 01:58:33 np0005548731 nova_compute[232433]: 2025-12-06 06:58:33.154 232437 DEBUG oslo_concurrency.lockutils [req-ed1dd30c-f51b-48f0-85c6-f856547dabc6 req-49fc577e-0a57-4fab-8d35-94190970dd6a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "012052dd-fee1-4ac6-bc17-336aff7b5db3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  6 01:58:33 np0005548731 nova_compute[232433]: 2025-12-06 06:58:33.155 232437 DEBUG nova.compute.manager [req-ed1dd30c-f51b-48f0-85c6-f856547dabc6 req-49fc577e-0a57-4fab-8d35-94190970dd6a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 012052dd-fee1-4ac6-bc17-336aff7b5db3] No waiting events found dispatching network-vif-plugged-4cce2b8b-a836-47f0-a6e4-adec68eed375 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec  6 01:58:33 np0005548731 nova_compute[232433]: 2025-12-06 06:58:33.155 232437 WARNING nova.compute.manager [req-ed1dd30c-f51b-48f0-85c6-f856547dabc6 req-49fc577e-0a57-4fab-8d35-94190970dd6a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 012052dd-fee1-4ac6-bc17-336aff7b5db3] Received unexpected event network-vif-plugged-4cce2b8b-a836-47f0-a6e4-adec68eed375 for instance with vm_state active and task_state None.
Dec  6 01:58:33 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:58:33 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:58:33 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:58:33.202 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:58:33 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:58:33 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 01:58:33 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:58:33.361 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 01:58:34 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 01:58:34 np0005548731 nova_compute[232433]: 2025-12-06 06:58:34.540 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 01:58:35 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:58:35 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:58:35 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:58:35.204 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:58:35 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:58:35 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:58:35 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:58:35.363 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:58:35 np0005548731 nova_compute[232433]: 2025-12-06 06:58:35.744 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 01:58:36 np0005548731 nova_compute[232433]: 2025-12-06 06:58:36.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  6 01:58:37 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:58:37 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 01:58:37 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:58:37.208 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 01:58:37 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:58:37 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:58:37 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:58:37.366 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:58:37 np0005548731 nova_compute[232433]: 2025-12-06 06:58:37.828 232437 DEBUG oslo_concurrency.lockutils [None req-f6c379d3-82db-49af-bfff-f1712f7a5426 e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] Acquiring lock "012052dd-fee1-4ac6-bc17-336aff7b5db3" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  6 01:58:37 np0005548731 nova_compute[232433]: 2025-12-06 06:58:37.829 232437 DEBUG oslo_concurrency.lockutils [None req-f6c379d3-82db-49af-bfff-f1712f7a5426 e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] Lock "012052dd-fee1-4ac6-bc17-336aff7b5db3" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  6 01:58:37 np0005548731 nova_compute[232433]: 2025-12-06 06:58:37.830 232437 DEBUG oslo_concurrency.lockutils [None req-f6c379d3-82db-49af-bfff-f1712f7a5426 e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] Acquiring lock "012052dd-fee1-4ac6-bc17-336aff7b5db3-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  6 01:58:37 np0005548731 nova_compute[232433]: 2025-12-06 06:58:37.830 232437 DEBUG oslo_concurrency.lockutils [None req-f6c379d3-82db-49af-bfff-f1712f7a5426 e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] Lock "012052dd-fee1-4ac6-bc17-336aff7b5db3-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  6 01:58:37 np0005548731 nova_compute[232433]: 2025-12-06 06:58:37.830 232437 DEBUG oslo_concurrency.lockutils [None req-f6c379d3-82db-49af-bfff-f1712f7a5426 e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] Lock "012052dd-fee1-4ac6-bc17-336aff7b5db3-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  6 01:58:37 np0005548731 nova_compute[232433]: 2025-12-06 06:58:37.831 232437 INFO nova.compute.manager [None req-f6c379d3-82db-49af-bfff-f1712f7a5426 e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] [instance: 012052dd-fee1-4ac6-bc17-336aff7b5db3] Terminating instance
Dec  6 01:58:37 np0005548731 nova_compute[232433]: 2025-12-06 06:58:37.832 232437 DEBUG nova.compute.manager [None req-f6c379d3-82db-49af-bfff-f1712f7a5426 e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] [instance: 012052dd-fee1-4ac6-bc17-336aff7b5db3] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec  6 01:58:38 np0005548731 kernel: tap4cce2b8b-a8 (unregistering): left promiscuous mode
Dec  6 01:58:38 np0005548731 NetworkManager[49182]: <info>  [1765004318.0291] device (tap4cce2b8b-a8): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec  6 01:58:38 np0005548731 ovn_controller[133927]: 2025-12-06T06:58:38Z|00037|binding|INFO|Releasing lport 4cce2b8b-a836-47f0-a6e4-adec68eed375 from this chassis (sb_readonly=0)
Dec  6 01:58:38 np0005548731 nova_compute[232433]: 2025-12-06 06:58:38.037 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 01:58:38 np0005548731 ovn_controller[133927]: 2025-12-06T06:58:38Z|00038|binding|INFO|Setting lport 4cce2b8b-a836-47f0-a6e4-adec68eed375 down in Southbound
Dec  6 01:58:38 np0005548731 ovn_controller[133927]: 2025-12-06T06:58:38Z|00039|binding|INFO|Removing iface tap4cce2b8b-a8 ovn-installed in OVS
Dec  6 01:58:38 np0005548731 nova_compute[232433]: 2025-12-06 06:58:38.039 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 01:58:38 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:58:38.046 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8c:5e:b5 10.1.0.50 fdfe:381f:8400::2dd'], port_security=['fa:16:3e:8c:5e:b5 10.1.0.50 fdfe:381f:8400::2dd'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.1.0.50/26 fdfe:381f:8400::2dd/64', 'neutron:device_id': '012052dd-fee1-4ac6-bc17-336aff7b5db3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fa805a2c-a79c-458b-b658-8e0534714a02', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '066c314d67e347f6a49e8e3e27998441', 'neutron:revision_number': '4', 'neutron:security_group_ids': '460e28b2-b45f-4429-a2b9-8f57e45c4e5b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1a47e992-7383-43fa-bf34-745cbd8b74f1, chassis=[], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], logical_port=4cce2b8b-a836-47f0-a6e4-adec68eed375) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec  6 01:58:38 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:58:38.047 143965 INFO neutron.agent.ovn.metadata.agent [-] Port 4cce2b8b-a836-47f0-a6e4-adec68eed375 in datapath fa805a2c-a79c-458b-b658-8e0534714a02 unbound from our chassis
Dec  6 01:58:38 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:58:38.049 143965 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network fa805a2c-a79c-458b-b658-8e0534714a02, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec  6 01:58:38 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:58:38.050 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[9b471c8c-8c18-467c-a502-6498e2a00cb1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  6 01:58:38 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:58:38.051 143965 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-fa805a2c-a79c-458b-b658-8e0534714a02 namespace which is not needed anymore
Dec  6 01:58:38 np0005548731 nova_compute[232433]: 2025-12-06 06:58:38.053 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 01:58:38 np0005548731 systemd[1]: machine-qemu\x2d3\x2dinstance\x2d00000003.scope: Deactivated successfully.
Dec  6 01:58:38 np0005548731 systemd[1]: machine-qemu\x2d3\x2dinstance\x2d00000003.scope: Consumed 7.343s CPU time.
Dec  6 01:58:38 np0005548731 systemd-machined[195355]: Machine qemu-3-instance-00000003 terminated.
Dec  6 01:58:38 np0005548731 neutron-haproxy-ovnmeta-fa805a2c-a79c-458b-b658-8e0534714a02[237130]: [NOTICE]   (237134) : haproxy version is 2.8.14-c23fe91
Dec  6 01:58:38 np0005548731 neutron-haproxy-ovnmeta-fa805a2c-a79c-458b-b658-8e0534714a02[237130]: [NOTICE]   (237134) : path to executable is /usr/sbin/haproxy
Dec  6 01:58:38 np0005548731 neutron-haproxy-ovnmeta-fa805a2c-a79c-458b-b658-8e0534714a02[237130]: [WARNING]  (237134) : Exiting Master process...
Dec  6 01:58:38 np0005548731 neutron-haproxy-ovnmeta-fa805a2c-a79c-458b-b658-8e0534714a02[237130]: [WARNING]  (237134) : Exiting Master process...
Dec  6 01:58:38 np0005548731 neutron-haproxy-ovnmeta-fa805a2c-a79c-458b-b658-8e0534714a02[237130]: [ALERT]    (237134) : Current worker (237136) exited with code 143 (Terminated)
Dec  6 01:58:38 np0005548731 neutron-haproxy-ovnmeta-fa805a2c-a79c-458b-b658-8e0534714a02[237130]: [WARNING]  (237134) : All workers exited. Exiting... (0)
Dec  6 01:58:38 np0005548731 systemd[1]: libpod-bb00dbd2f6c163987d7ac2f24de93c45df99f88bb9d033b79388c935b32ff0a8.scope: Deactivated successfully.
Dec  6 01:58:38 np0005548731 conmon[237130]: conmon bb00dbd2f6c163987d7a <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-bb00dbd2f6c163987d7ac2f24de93c45df99f88bb9d033b79388c935b32ff0a8.scope/container/memory.events
Dec  6 01:58:38 np0005548731 podman[237223]: 2025-12-06 06:58:38.20068588 +0000 UTC m=+0.046955817 container died bb00dbd2f6c163987d7ac2f24de93c45df99f88bb9d033b79388c935b32ff0a8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-fa805a2c-a79c-458b-b658-8e0534714a02, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Dec  6 01:58:38 np0005548731 nova_compute[232433]: 2025-12-06 06:58:38.219 232437 INFO nova.virt.libvirt.driver [-] [instance: 012052dd-fee1-4ac6-bc17-336aff7b5db3] Instance destroyed successfully.
Dec  6 01:58:38 np0005548731 nova_compute[232433]: 2025-12-06 06:58:38.220 232437 DEBUG nova.objects.instance [None req-f6c379d3-82db-49af-bfff-f1712f7a5426 e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] Lazy-loading 'resources' on Instance uuid 012052dd-fee1-4ac6-bc17-336aff7b5db3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec  6 01:58:38 np0005548731 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-bb00dbd2f6c163987d7ac2f24de93c45df99f88bb9d033b79388c935b32ff0a8-userdata-shm.mount: Deactivated successfully.
Dec  6 01:58:38 np0005548731 systemd[1]: var-lib-containers-storage-overlay-bd5e0581e973c2d347e5c9713ff534f66f4d093302d3f7e5567a9a7ca339ad2f-merged.mount: Deactivated successfully.
Dec  6 01:58:38 np0005548731 podman[237223]: 2025-12-06 06:58:38.237934356 +0000 UTC m=+0.084204293 container cleanup bb00dbd2f6c163987d7ac2f24de93c45df99f88bb9d033b79388c935b32ff0a8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-fa805a2c-a79c-458b-b658-8e0534714a02, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Dec  6 01:58:38 np0005548731 nova_compute[232433]: 2025-12-06 06:58:38.240 232437 DEBUG nova.virt.libvirt.vif [None req-f6c379d3-82db-49af-bfff-f1712f7a5426 e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T06:57:37Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-tempest.common.compute-instance-2024863607-1',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-2024863607-1',id=3,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-06T06:58:31Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='066c314d67e347f6a49e8e3e27998441',ramdisk_id='',reservation_id='r-bb9k3jjn',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner
_project_name='tempest-AutoAllocateNetworkTest-1572395875',owner_user_name='tempest-AutoAllocateNetworkTest-1572395875-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T06:58:31Z,user_data=None,user_id='e5122185c6194067bdb22d6ba8205dca',uuid=012052dd-fee1-4ac6-bc17-336aff7b5db3,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "4cce2b8b-a836-47f0-a6e4-adec68eed375", "address": "fa:16:3e:8c:5e:b5", "network": {"id": "fa805a2c-a79c-458b-b658-8e0534714a02", "bridge": "br-int", "label": "auto_allocated_network", "subnets": [{"cidr": "fdfe:381f:8400::/64", "dns": [], "gateway": {"address": "fdfe:381f:8400::1", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "fdfe:381f:8400::2dd", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true}}, {"cidr": "10.1.0.0/26", "dns": [], "gateway": {"address": "10.1.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.0.50", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "066c314d67e347f6a49e8e3e27998441", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4cce2b8b-a8", "ovs_interfaceid": "4cce2b8b-a836-47f0-a6e4-adec68eed375", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec  6 01:58:38 np0005548731 nova_compute[232433]: 2025-12-06 06:58:38.240 232437 DEBUG nova.network.os_vif_util [None req-f6c379d3-82db-49af-bfff-f1712f7a5426 e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] Converting VIF {"id": "4cce2b8b-a836-47f0-a6e4-adec68eed375", "address": "fa:16:3e:8c:5e:b5", "network": {"id": "fa805a2c-a79c-458b-b658-8e0534714a02", "bridge": "br-int", "label": "auto_allocated_network", "subnets": [{"cidr": "fdfe:381f:8400::/64", "dns": [], "gateway": {"address": "fdfe:381f:8400::1", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "fdfe:381f:8400::2dd", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true}}, {"cidr": "10.1.0.0/26", "dns": [], "gateway": {"address": "10.1.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.0.50", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "066c314d67e347f6a49e8e3e27998441", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4cce2b8b-a8", "ovs_interfaceid": "4cce2b8b-a836-47f0-a6e4-adec68eed375", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 01:58:38 np0005548731 nova_compute[232433]: 2025-12-06 06:58:38.241 232437 DEBUG nova.network.os_vif_util [None req-f6c379d3-82db-49af-bfff-f1712f7a5426 e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8c:5e:b5,bridge_name='br-int',has_traffic_filtering=True,id=4cce2b8b-a836-47f0-a6e4-adec68eed375,network=Network(fa805a2c-a79c-458b-b658-8e0534714a02),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4cce2b8b-a8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 01:58:38 np0005548731 nova_compute[232433]: 2025-12-06 06:58:38.241 232437 DEBUG os_vif [None req-f6c379d3-82db-49af-bfff-f1712f7a5426 e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:8c:5e:b5,bridge_name='br-int',has_traffic_filtering=True,id=4cce2b8b-a836-47f0-a6e4-adec68eed375,network=Network(fa805a2c-a79c-458b-b658-8e0534714a02),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4cce2b8b-a8') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec  6 01:58:38 np0005548731 nova_compute[232433]: 2025-12-06 06:58:38.243 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 01:58:38 np0005548731 nova_compute[232433]: 2025-12-06 06:58:38.243 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4cce2b8b-a8, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 01:58:38 np0005548731 nova_compute[232433]: 2025-12-06 06:58:38.245 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 01:58:38 np0005548731 nova_compute[232433]: 2025-12-06 06:58:38.246 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 01:58:38 np0005548731 systemd[1]: libpod-conmon-bb00dbd2f6c163987d7ac2f24de93c45df99f88bb9d033b79388c935b32ff0a8.scope: Deactivated successfully.
Dec  6 01:58:38 np0005548731 nova_compute[232433]: 2025-12-06 06:58:38.249 232437 INFO os_vif [None req-f6c379d3-82db-49af-bfff-f1712f7a5426 e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:8c:5e:b5,bridge_name='br-int',has_traffic_filtering=True,id=4cce2b8b-a836-47f0-a6e4-adec68eed375,network=Network(fa805a2c-a79c-458b-b658-8e0534714a02),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4cce2b8b-a8')#033[00m
Dec  6 01:58:38 np0005548731 podman[237261]: 2025-12-06 06:58:38.307361063 +0000 UTC m=+0.042662180 container remove bb00dbd2f6c163987d7ac2f24de93c45df99f88bb9d033b79388c935b32ff0a8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-fa805a2c-a79c-458b-b658-8e0534714a02, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125)
Dec  6 01:58:38 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:58:38.312 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[e8f20377-a8e9-4c26-bcfe-b2be290af1a2]: (4, ('Sat Dec  6 06:58:38 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-fa805a2c-a79c-458b-b658-8e0534714a02 (bb00dbd2f6c163987d7ac2f24de93c45df99f88bb9d033b79388c935b32ff0a8)\nbb00dbd2f6c163987d7ac2f24de93c45df99f88bb9d033b79388c935b32ff0a8\nSat Dec  6 06:58:38 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-fa805a2c-a79c-458b-b658-8e0534714a02 (bb00dbd2f6c163987d7ac2f24de93c45df99f88bb9d033b79388c935b32ff0a8)\nbb00dbd2f6c163987d7ac2f24de93c45df99f88bb9d033b79388c935b32ff0a8\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 01:58:38 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:58:38.314 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[7b2936dd-0677-466e-8e0e-d9faf0a6da6d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 01:58:38 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:58:38.315 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfa805a2c-a0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 01:58:38 np0005548731 nova_compute[232433]: 2025-12-06 06:58:38.317 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 01:58:38 np0005548731 kernel: tapfa805a2c-a0: left promiscuous mode
Dec  6 01:58:38 np0005548731 nova_compute[232433]: 2025-12-06 06:58:38.327 232437 DEBUG nova.compute.manager [req-d0fd4376-a9bb-4ee8-9015-ce0927a89395 req-a6172995-a3f2-423c-916e-91eb8ab732eb 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 012052dd-fee1-4ac6-bc17-336aff7b5db3] Received event network-vif-unplugged-4cce2b8b-a836-47f0-a6e4-adec68eed375 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 01:58:38 np0005548731 nova_compute[232433]: 2025-12-06 06:58:38.327 232437 DEBUG oslo_concurrency.lockutils [req-d0fd4376-a9bb-4ee8-9015-ce0927a89395 req-a6172995-a3f2-423c-916e-91eb8ab732eb 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "012052dd-fee1-4ac6-bc17-336aff7b5db3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 01:58:38 np0005548731 nova_compute[232433]: 2025-12-06 06:58:38.328 232437 DEBUG oslo_concurrency.lockutils [req-d0fd4376-a9bb-4ee8-9015-ce0927a89395 req-a6172995-a3f2-423c-916e-91eb8ab732eb 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "012052dd-fee1-4ac6-bc17-336aff7b5db3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 01:58:38 np0005548731 nova_compute[232433]: 2025-12-06 06:58:38.328 232437 DEBUG oslo_concurrency.lockutils [req-d0fd4376-a9bb-4ee8-9015-ce0927a89395 req-a6172995-a3f2-423c-916e-91eb8ab732eb 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "012052dd-fee1-4ac6-bc17-336aff7b5db3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 01:58:38 np0005548731 nova_compute[232433]: 2025-12-06 06:58:38.328 232437 DEBUG nova.compute.manager [req-d0fd4376-a9bb-4ee8-9015-ce0927a89395 req-a6172995-a3f2-423c-916e-91eb8ab732eb 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 012052dd-fee1-4ac6-bc17-336aff7b5db3] No waiting events found dispatching network-vif-unplugged-4cce2b8b-a836-47f0-a6e4-adec68eed375 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 01:58:38 np0005548731 nova_compute[232433]: 2025-12-06 06:58:38.328 232437 DEBUG nova.compute.manager [req-d0fd4376-a9bb-4ee8-9015-ce0927a89395 req-a6172995-a3f2-423c-916e-91eb8ab732eb 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 012052dd-fee1-4ac6-bc17-336aff7b5db3] Received event network-vif-unplugged-4cce2b8b-a836-47f0-a6e4-adec68eed375 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Dec  6 01:58:38 np0005548731 nova_compute[232433]: 2025-12-06 06:58:38.331 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 01:58:38 np0005548731 nova_compute[232433]: 2025-12-06 06:58:38.332 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 01:58:38 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:58:38.335 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[5ff6d5a5-6ae0-4cdf-9074-5f540bd7dea4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 01:58:38 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:58:38.348 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[2b177a81-5584-4189-a114-5a118c3c2b82]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 01:58:38 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:58:38.350 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[7a13192c-c600-4e00-8fc1-e070c3f33302]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 01:58:38 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:58:38.365 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[3a638147-9b38-4eed-b044-8d20866da65c]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 457272, 'reachable_time': 20578, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 237292, 'error': None, 'target': 'ovnmeta-fa805a2c-a79c-458b-b658-8e0534714a02', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 01:58:38 np0005548731 systemd[1]: run-netns-ovnmeta\x2dfa805a2c\x2da79c\x2d458b\x2db658\x2d8e0534714a02.mount: Deactivated successfully.
Dec  6 01:58:38 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:58:38.377 144079 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-fa805a2c-a79c-458b-b658-8e0534714a02 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Dec  6 01:58:38 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:58:38.378 144079 DEBUG oslo.privsep.daemon [-] privsep: reply[ccc4d4ba-a717-4c1c-ab12-46b83315bd1f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 01:58:39 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:58:39 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:58:39 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:58:39.211 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:58:39 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:58:39 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:58:39 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:58:39.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:58:39 np0005548731 ovn_controller[133927]: 2025-12-06T06:58:39Z|00004|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:20:e0:e2 10.100.0.4
Dec  6 01:58:39 np0005548731 ovn_controller[133927]: 2025-12-06T06:58:39Z|00005|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:20:e0:e2 10.100.0.4
Dec  6 01:58:40 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 01:58:40 np0005548731 nova_compute[232433]: 2025-12-06 06:58:40.099 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 01:58:40 np0005548731 nova_compute[232433]: 2025-12-06 06:58:40.103 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 01:58:40 np0005548731 nova_compute[232433]: 2025-12-06 06:58:40.104 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  6 01:58:40 np0005548731 nova_compute[232433]: 2025-12-06 06:58:40.104 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  6 01:58:40 np0005548731 nova_compute[232433]: 2025-12-06 06:58:40.121 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] [instance: 012052dd-fee1-4ac6-bc17-336aff7b5db3] Skipping network cache update for instance because it is being deleted. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9875#033[00m
Dec  6 01:58:40 np0005548731 nova_compute[232433]: 2025-12-06 06:58:40.529 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Acquiring lock "refresh_cache-1dd278c9-e3ac-480b-9a02-2e580da2d211" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 01:58:40 np0005548731 nova_compute[232433]: 2025-12-06 06:58:40.530 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Acquired lock "refresh_cache-1dd278c9-e3ac-480b-9a02-2e580da2d211" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 01:58:40 np0005548731 nova_compute[232433]: 2025-12-06 06:58:40.530 232437 DEBUG nova.network.neutron [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] [instance: 1dd278c9-e3ac-480b-9a02-2e580da2d211] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Dec  6 01:58:40 np0005548731 nova_compute[232433]: 2025-12-06 06:58:40.530 232437 DEBUG nova.objects.instance [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lazy-loading 'info_cache' on Instance uuid 1dd278c9-e3ac-480b-9a02-2e580da2d211 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 01:58:40 np0005548731 nova_compute[232433]: 2025-12-06 06:58:40.689 232437 DEBUG nova.compute.manager [req-6aaaf5a9-f964-4cc1-95b7-f95e732c1a7d req-7ab9f303-b206-4b9c-a7ff-dc1df875a98d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 012052dd-fee1-4ac6-bc17-336aff7b5db3] Received event network-vif-plugged-4cce2b8b-a836-47f0-a6e4-adec68eed375 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 01:58:40 np0005548731 nova_compute[232433]: 2025-12-06 06:58:40.689 232437 DEBUG oslo_concurrency.lockutils [req-6aaaf5a9-f964-4cc1-95b7-f95e732c1a7d req-7ab9f303-b206-4b9c-a7ff-dc1df875a98d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "012052dd-fee1-4ac6-bc17-336aff7b5db3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 01:58:40 np0005548731 nova_compute[232433]: 2025-12-06 06:58:40.690 232437 DEBUG oslo_concurrency.lockutils [req-6aaaf5a9-f964-4cc1-95b7-f95e732c1a7d req-7ab9f303-b206-4b9c-a7ff-dc1df875a98d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "012052dd-fee1-4ac6-bc17-336aff7b5db3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 01:58:40 np0005548731 nova_compute[232433]: 2025-12-06 06:58:40.690 232437 DEBUG oslo_concurrency.lockutils [req-6aaaf5a9-f964-4cc1-95b7-f95e732c1a7d req-7ab9f303-b206-4b9c-a7ff-dc1df875a98d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "012052dd-fee1-4ac6-bc17-336aff7b5db3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 01:58:40 np0005548731 nova_compute[232433]: 2025-12-06 06:58:40.690 232437 DEBUG nova.compute.manager [req-6aaaf5a9-f964-4cc1-95b7-f95e732c1a7d req-7ab9f303-b206-4b9c-a7ff-dc1df875a98d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 012052dd-fee1-4ac6-bc17-336aff7b5db3] No waiting events found dispatching network-vif-plugged-4cce2b8b-a836-47f0-a6e4-adec68eed375 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 01:58:40 np0005548731 nova_compute[232433]: 2025-12-06 06:58:40.690 232437 WARNING nova.compute.manager [req-6aaaf5a9-f964-4cc1-95b7-f95e732c1a7d req-7ab9f303-b206-4b9c-a7ff-dc1df875a98d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 012052dd-fee1-4ac6-bc17-336aff7b5db3] Received unexpected event network-vif-plugged-4cce2b8b-a836-47f0-a6e4-adec68eed375 for instance with vm_state active and task_state deleting.#033[00m
Dec  6 01:58:40 np0005548731 nova_compute[232433]: 2025-12-06 06:58:40.746 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 01:58:41 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:58:41 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:58:41 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:58:41.214 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:58:41 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:58:41 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:58:41 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:58:41.369 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:58:42 np0005548731 nova_compute[232433]: 2025-12-06 06:58:42.644 232437 INFO nova.virt.libvirt.driver [None req-f6c379d3-82db-49af-bfff-f1712f7a5426 e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] [instance: 012052dd-fee1-4ac6-bc17-336aff7b5db3] Deleting instance files /var/lib/nova/instances/012052dd-fee1-4ac6-bc17-336aff7b5db3_del#033[00m
Dec  6 01:58:42 np0005548731 nova_compute[232433]: 2025-12-06 06:58:42.645 232437 INFO nova.virt.libvirt.driver [None req-f6c379d3-82db-49af-bfff-f1712f7a5426 e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] [instance: 012052dd-fee1-4ac6-bc17-336aff7b5db3] Deletion of /var/lib/nova/instances/012052dd-fee1-4ac6-bc17-336aff7b5db3_del complete#033[00m
Dec  6 01:58:42 np0005548731 nova_compute[232433]: 2025-12-06 06:58:42.703 232437 INFO nova.compute.manager [None req-f6c379d3-82db-49af-bfff-f1712f7a5426 e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] [instance: 012052dd-fee1-4ac6-bc17-336aff7b5db3] Took 4.87 seconds to destroy the instance on the hypervisor.#033[00m
Dec  6 01:58:42 np0005548731 nova_compute[232433]: 2025-12-06 06:58:42.704 232437 DEBUG oslo.service.loopingcall [None req-f6c379d3-82db-49af-bfff-f1712f7a5426 e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Dec  6 01:58:42 np0005548731 nova_compute[232433]: 2025-12-06 06:58:42.704 232437 DEBUG nova.compute.manager [-] [instance: 012052dd-fee1-4ac6-bc17-336aff7b5db3] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Dec  6 01:58:42 np0005548731 nova_compute[232433]: 2025-12-06 06:58:42.704 232437 DEBUG nova.network.neutron [-] [instance: 012052dd-fee1-4ac6-bc17-336aff7b5db3] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Dec  6 01:58:43 np0005548731 nova_compute[232433]: 2025-12-06 06:58:43.069 232437 DEBUG nova.network.neutron [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] [instance: 1dd278c9-e3ac-480b-9a02-2e580da2d211] Updating instance_info_cache with network_info: [{"id": "1c43f5f0-b72d-49f6-9dc5-10367318c26c", "address": "fa:16:3e:20:e0:e2", "network": {"id": "26dc03be-378a-4751-a903-cbe72fe09f14", "bridge": "br-int", "label": "tempest-VolumesAssistedSnapshotsTest-1268666002-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.197", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bccff877c0f34760846113a84f5b0709", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1c43f5f0-b7", "ovs_interfaceid": "1c43f5f0-b72d-49f6-9dc5-10367318c26c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 01:58:43 np0005548731 nova_compute[232433]: 2025-12-06 06:58:43.088 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Releasing lock "refresh_cache-1dd278c9-e3ac-480b-9a02-2e580da2d211" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 01:58:43 np0005548731 nova_compute[232433]: 2025-12-06 06:58:43.089 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] [instance: 1dd278c9-e3ac-480b-9a02-2e580da2d211] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Dec  6 01:58:43 np0005548731 nova_compute[232433]: 2025-12-06 06:58:43.089 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 01:58:43 np0005548731 nova_compute[232433]: 2025-12-06 06:58:43.090 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 01:58:43 np0005548731 nova_compute[232433]: 2025-12-06 06:58:43.090 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 01:58:43 np0005548731 nova_compute[232433]: 2025-12-06 06:58:43.090 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 01:58:43 np0005548731 nova_compute[232433]: 2025-12-06 06:58:43.091 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 01:58:43 np0005548731 nova_compute[232433]: 2025-12-06 06:58:43.091 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  6 01:58:43 np0005548731 nova_compute[232433]: 2025-12-06 06:58:43.091 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 01:58:43 np0005548731 nova_compute[232433]: 2025-12-06 06:58:43.129 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 01:58:43 np0005548731 nova_compute[232433]: 2025-12-06 06:58:43.130 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 01:58:43 np0005548731 nova_compute[232433]: 2025-12-06 06:58:43.130 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 01:58:43 np0005548731 nova_compute[232433]: 2025-12-06 06:58:43.131 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  6 01:58:43 np0005548731 nova_compute[232433]: 2025-12-06 06:58:43.131 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 01:58:43 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:58:43 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:58:43 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:58:43.217 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:58:43 np0005548731 nova_compute[232433]: 2025-12-06 06:58:43.248 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 01:58:43 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:58:43 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 01:58:43 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:58:43.371 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 01:58:43 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 01:58:43 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1246655193' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 01:58:43 np0005548731 nova_compute[232433]: 2025-12-06 06:58:43.652 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.521s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 01:58:43 np0005548731 nova_compute[232433]: 2025-12-06 06:58:43.717 232437 DEBUG nova.virt.libvirt.driver [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] skipping disk for instance-00000009 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  6 01:58:43 np0005548731 nova_compute[232433]: 2025-12-06 06:58:43.718 232437 DEBUG nova.virt.libvirt.driver [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] skipping disk for instance-00000009 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  6 01:58:43 np0005548731 nova_compute[232433]: 2025-12-06 06:58:43.822 232437 DEBUG nova.network.neutron [-] [instance: 012052dd-fee1-4ac6-bc17-336aff7b5db3] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 01:58:43 np0005548731 nova_compute[232433]: 2025-12-06 06:58:43.843 232437 INFO nova.compute.manager [-] [instance: 012052dd-fee1-4ac6-bc17-336aff7b5db3] Took 1.14 seconds to deallocate network for instance.#033[00m
Dec  6 01:58:43 np0005548731 nova_compute[232433]: 2025-12-06 06:58:43.891 232437 DEBUG oslo_concurrency.lockutils [None req-f6c379d3-82db-49af-bfff-f1712f7a5426 e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 01:58:43 np0005548731 nova_compute[232433]: 2025-12-06 06:58:43.892 232437 DEBUG oslo_concurrency.lockutils [None req-f6c379d3-82db-49af-bfff-f1712f7a5426 e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 01:58:43 np0005548731 nova_compute[232433]: 2025-12-06 06:58:43.929 232437 WARNING nova.virt.libvirt.driver [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  6 01:58:43 np0005548731 nova_compute[232433]: 2025-12-06 06:58:43.930 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4743MB free_disk=20.835464477539062GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  6 01:58:43 np0005548731 nova_compute[232433]: 2025-12-06 06:58:43.930 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 01:58:43 np0005548731 nova_compute[232433]: 2025-12-06 06:58:43.959 232437 DEBUG oslo_concurrency.processutils [None req-f6c379d3-82db-49af-bfff-f1712f7a5426 e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 01:58:43 np0005548731 nova_compute[232433]: 2025-12-06 06:58:43.988 232437 DEBUG nova.compute.manager [req-71425bee-99cd-47fd-9150-ad21a56b82b9 req-0bc3736a-59f6-40d1-a639-982d845b8768 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 012052dd-fee1-4ac6-bc17-336aff7b5db3] Received event network-vif-deleted-4cce2b8b-a836-47f0-a6e4-adec68eed375 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 01:58:44 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 01:58:44 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1330517347' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 01:58:44 np0005548731 nova_compute[232433]: 2025-12-06 06:58:44.807 232437 DEBUG oslo_concurrency.processutils [None req-f6c379d3-82db-49af-bfff-f1712f7a5426 e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.848s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 01:58:44 np0005548731 nova_compute[232433]: 2025-12-06 06:58:44.812 232437 DEBUG nova.compute.provider_tree [None req-f6c379d3-82db-49af-bfff-f1712f7a5426 e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] Inventory has not changed in ProviderTree for provider: 6d00757a-082f-486d-ae84-869a2ba2e6e7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  6 01:58:44 np0005548731 nova_compute[232433]: 2025-12-06 06:58:44.828 232437 DEBUG nova.scheduler.client.report [None req-f6c379d3-82db-49af-bfff-f1712f7a5426 e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] Inventory has not changed for provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  6 01:58:44 np0005548731 nova_compute[232433]: 2025-12-06 06:58:44.859 232437 DEBUG oslo_concurrency.lockutils [None req-f6c379d3-82db-49af-bfff-f1712f7a5426 e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.967s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 01:58:44 np0005548731 nova_compute[232433]: 2025-12-06 06:58:44.861 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.931s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 01:58:44 np0005548731 nova_compute[232433]: 2025-12-06 06:58:44.895 232437 INFO nova.scheduler.client.report [None req-f6c379d3-82db-49af-bfff-f1712f7a5426 e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] Deleted allocations for instance 012052dd-fee1-4ac6-bc17-336aff7b5db3#033[00m
Dec  6 01:58:44 np0005548731 nova_compute[232433]: 2025-12-06 06:58:44.925 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Instance 1dd278c9-e3ac-480b-9a02-2e580da2d211 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  6 01:58:44 np0005548731 nova_compute[232433]: 2025-12-06 06:58:44.926 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  6 01:58:44 np0005548731 nova_compute[232433]: 2025-12-06 06:58:44.926 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  6 01:58:44 np0005548731 nova_compute[232433]: 2025-12-06 06:58:44.971 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 01:58:44 np0005548731 nova_compute[232433]: 2025-12-06 06:58:44.994 232437 DEBUG oslo_concurrency.lockutils [None req-f6c379d3-82db-49af-bfff-f1712f7a5426 e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] Lock "012052dd-fee1-4ac6-bc17-336aff7b5db3" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 7.164s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 01:58:45 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 01:58:45 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:58:45 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 01:58:45 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:58:45.219 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 01:58:45 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:58:45 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 01:58:45 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:58:45.373 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 01:58:45 np0005548731 nova_compute[232433]: 2025-12-06 06:58:45.455 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.484s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 01:58:45 np0005548731 nova_compute[232433]: 2025-12-06 06:58:45.460 232437 DEBUG nova.compute.provider_tree [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Inventory has not changed in ProviderTree for provider: 6d00757a-082f-486d-ae84-869a2ba2e6e7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  6 01:58:45 np0005548731 nova_compute[232433]: 2025-12-06 06:58:45.475 232437 DEBUG nova.scheduler.client.report [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Inventory has not changed for provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  6 01:58:45 np0005548731 nova_compute[232433]: 2025-12-06 06:58:45.494 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  6 01:58:45 np0005548731 nova_compute[232433]: 2025-12-06 06:58:45.495 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.634s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 01:58:45 np0005548731 nova_compute[232433]: 2025-12-06 06:58:45.748 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 01:58:46 np0005548731 nova_compute[232433]: 2025-12-06 06:58:46.491 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 01:58:47 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:58:47 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:58:47 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:58:47.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:58:47 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:58:47 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:58:47 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:58:47.377 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:58:48 np0005548731 nova_compute[232433]: 2025-12-06 06:58:48.253 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 01:58:48 np0005548731 ceph-mon[77458]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #43. Immutable memtables: 0.
Dec  6 01:58:48 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-06:58:48.558995) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec  6 01:58:48 np0005548731 ceph-mon[77458]: rocksdb: [db/flush_job.cc:856] [default] [JOB 23] Flushing memtable with next log file: 43
Dec  6 01:58:48 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765004328559035, "job": 23, "event": "flush_started", "num_memtables": 1, "num_entries": 1999, "num_deletes": 258, "total_data_size": 4644573, "memory_usage": 4694112, "flush_reason": "Manual Compaction"}
Dec  6 01:58:48 np0005548731 ceph-mon[77458]: rocksdb: [db/flush_job.cc:885] [default] [JOB 23] Level-0 flush table #44: started
Dec  6 01:58:48 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765004328631271, "cf_name": "default", "job": 23, "event": "table_file_creation", "file_number": 44, "file_size": 3012158, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 21211, "largest_seqno": 23205, "table_properties": {"data_size": 3003971, "index_size": 4937, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2181, "raw_key_size": 17186, "raw_average_key_size": 19, "raw_value_size": 2987205, "raw_average_value_size": 3441, "num_data_blocks": 220, "num_entries": 868, "num_filter_entries": 868, "num_deletions": 258, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765004172, "oldest_key_time": 1765004172, "file_creation_time": 1765004328, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bc01f9a1-fda8-4008-aee1-1fa5ff4371a1", "db_session_id": "CWBL8WMDM4D79BU86E9T", "orig_file_number": 44, "seqno_to_time_mapping": "N/A"}}
Dec  6 01:58:48 np0005548731 ceph-mon[77458]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 23] Flush lasted 72348 microseconds, and 7648 cpu microseconds.
Dec  6 01:58:48 np0005548731 ceph-mon[77458]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  6 01:58:48 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-06:58:48.631334) [db/flush_job.cc:967] [default] [JOB 23] Level-0 flush table #44: 3012158 bytes OK
Dec  6 01:58:48 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-06:58:48.631365) [db/memtable_list.cc:519] [default] Level-0 commit table #44 started
Dec  6 01:58:48 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-06:58:48.633268) [db/memtable_list.cc:722] [default] Level-0 commit table #44: memtable #1 done
Dec  6 01:58:48 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-06:58:48.633284) EVENT_LOG_v1 {"time_micros": 1765004328633278, "job": 23, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec  6 01:58:48 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-06:58:48.633304) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec  6 01:58:48 np0005548731 ceph-mon[77458]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 23] Try to delete WAL files size 4635470, prev total WAL file size 4635470, number of live WAL files 2.
Dec  6 01:58:48 np0005548731 ceph-mon[77458]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000040.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  6 01:58:48 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-06:58:48.634748) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D00323533' seq:72057594037927935, type:22 .. '6C6F676D00353037' seq:0, type:0; will stop at (end)
Dec  6 01:58:48 np0005548731 ceph-mon[77458]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 24] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec  6 01:58:48 np0005548731 ceph-mon[77458]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 23 Base level 0, inputs: [44(2941KB)], [42(7480KB)]
Dec  6 01:58:48 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765004328634834, "job": 24, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [44], "files_L6": [42], "score": -1, "input_data_size": 10672226, "oldest_snapshot_seqno": -1}
Dec  6 01:58:48 np0005548731 ceph-mon[77458]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 24] Generated table #45: 5192 keys, 10457056 bytes, temperature: kUnknown
Dec  6 01:58:48 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765004328748859, "cf_name": "default", "job": 24, "event": "table_file_creation", "file_number": 45, "file_size": 10457056, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10420525, "index_size": 22469, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 12997, "raw_key_size": 131387, "raw_average_key_size": 25, "raw_value_size": 10324659, "raw_average_value_size": 1988, "num_data_blocks": 924, "num_entries": 5192, "num_filter_entries": 5192, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765002564, "oldest_key_time": 0, "file_creation_time": 1765004328, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bc01f9a1-fda8-4008-aee1-1fa5ff4371a1", "db_session_id": "CWBL8WMDM4D79BU86E9T", "orig_file_number": 45, "seqno_to_time_mapping": "N/A"}}
Dec  6 01:58:48 np0005548731 ceph-mon[77458]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  6 01:58:48 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-06:58:48.749150) [db/compaction/compaction_job.cc:1663] [default] [JOB 24] Compacted 1@0 + 1@6 files to L6 => 10457056 bytes
Dec  6 01:58:48 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-06:58:48.750853) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 93.5 rd, 91.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.9, 7.3 +0.0 blob) out(10.0 +0.0 blob), read-write-amplify(7.0) write-amplify(3.5) OK, records in: 5727, records dropped: 535 output_compression: NoCompression
Dec  6 01:58:48 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-06:58:48.750875) EVENT_LOG_v1 {"time_micros": 1765004328750865, "job": 24, "event": "compaction_finished", "compaction_time_micros": 114160, "compaction_time_cpu_micros": 29380, "output_level": 6, "num_output_files": 1, "total_output_size": 10457056, "num_input_records": 5727, "num_output_records": 5192, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec  6 01:58:48 np0005548731 ceph-mon[77458]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000044.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  6 01:58:48 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765004328751475, "job": 24, "event": "table_file_deletion", "file_number": 44}
Dec  6 01:58:48 np0005548731 ceph-mon[77458]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000042.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  6 01:58:48 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765004328753376, "job": 24, "event": "table_file_deletion", "file_number": 42}
Dec  6 01:58:48 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-06:58:48.634613) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 01:58:48 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-06:58:48.753454) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 01:58:48 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-06:58:48.753461) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 01:58:48 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-06:58:48.753463) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 01:58:48 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-06:58:48.753465) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 01:58:48 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-06:58:48.753467) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 01:58:49 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:58:49 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:58:49 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:58:49.226 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:58:49 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:58:49 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 01:58:49 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:58:49.378 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 01:58:50 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 01:58:50 np0005548731 nova_compute[232433]: 2025-12-06 06:58:50.773 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 01:58:50 np0005548731 ceph-mon[77458]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #46. Immutable memtables: 0.
Dec  6 01:58:50 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-06:58:50.823617) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec  6 01:58:50 np0005548731 ceph-mon[77458]: rocksdb: [db/flush_job.cc:856] [default] [JOB 25] Flushing memtable with next log file: 46
Dec  6 01:58:50 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765004330823658, "job": 25, "event": "flush_started", "num_memtables": 1, "num_entries": 284, "num_deletes": 251, "total_data_size": 75479, "memory_usage": 82184, "flush_reason": "Manual Compaction"}
Dec  6 01:58:50 np0005548731 ceph-mon[77458]: rocksdb: [db/flush_job.cc:885] [default] [JOB 25] Level-0 flush table #47: started
Dec  6 01:58:50 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765004330825742, "cf_name": "default", "job": 25, "event": "table_file_creation", "file_number": 47, "file_size": 49123, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 23210, "largest_seqno": 23489, "table_properties": {"data_size": 47258, "index_size": 94, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 709, "raw_key_size": 4973, "raw_average_key_size": 18, "raw_value_size": 43550, "raw_average_value_size": 161, "num_data_blocks": 4, "num_entries": 270, "num_filter_entries": 270, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765004329, "oldest_key_time": 1765004329, "file_creation_time": 1765004330, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bc01f9a1-fda8-4008-aee1-1fa5ff4371a1", "db_session_id": "CWBL8WMDM4D79BU86E9T", "orig_file_number": 47, "seqno_to_time_mapping": "N/A"}}
Dec  6 01:58:50 np0005548731 ceph-mon[77458]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 25] Flush lasted 2163 microseconds, and 859 cpu microseconds.
Dec  6 01:58:50 np0005548731 ceph-mon[77458]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  6 01:58:50 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-06:58:50.825780) [db/flush_job.cc:967] [default] [JOB 25] Level-0 flush table #47: 49123 bytes OK
Dec  6 01:58:50 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-06:58:50.825796) [db/memtable_list.cc:519] [default] Level-0 commit table #47 started
Dec  6 01:58:50 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-06:58:50.827719) [db/memtable_list.cc:722] [default] Level-0 commit table #47: memtable #1 done
Dec  6 01:58:50 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-06:58:50.827770) EVENT_LOG_v1 {"time_micros": 1765004330827758, "job": 25, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec  6 01:58:50 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-06:58:50.827797) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec  6 01:58:50 np0005548731 ceph-mon[77458]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 25] Try to delete WAL files size 73348, prev total WAL file size 73348, number of live WAL files 2.
Dec  6 01:58:50 np0005548731 ceph-mon[77458]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000043.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  6 01:58:50 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-06:58:50.828404) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031353036' seq:72057594037927935, type:22 .. '7061786F730031373538' seq:0, type:0; will stop at (end)
Dec  6 01:58:50 np0005548731 ceph-mon[77458]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 26] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec  6 01:58:50 np0005548731 ceph-mon[77458]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 25 Base level 0, inputs: [47(47KB)], [45(10211KB)]
Dec  6 01:58:50 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765004330828514, "job": 26, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [47], "files_L6": [45], "score": -1, "input_data_size": 10506179, "oldest_snapshot_seqno": -1}
Dec  6 01:58:51 np0005548731 ceph-mon[77458]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 26] Generated table #48: 4952 keys, 8462127 bytes, temperature: kUnknown
Dec  6 01:58:51 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765004331219181, "cf_name": "default", "job": 26, "event": "table_file_creation", "file_number": 48, "file_size": 8462127, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8428931, "index_size": 19710, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 12421, "raw_key_size": 127106, "raw_average_key_size": 25, "raw_value_size": 8338971, "raw_average_value_size": 1683, "num_data_blocks": 799, "num_entries": 4952, "num_filter_entries": 4952, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765002564, "oldest_key_time": 0, "file_creation_time": 1765004330, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bc01f9a1-fda8-4008-aee1-1fa5ff4371a1", "db_session_id": "CWBL8WMDM4D79BU86E9T", "orig_file_number": 48, "seqno_to_time_mapping": "N/A"}}
Dec  6 01:58:51 np0005548731 ceph-mon[77458]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  6 01:58:51 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:58:51 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:58:51 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:58:51.229 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:58:51 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-06:58:51.219464) [db/compaction/compaction_job.cc:1663] [default] [JOB 26] Compacted 1@0 + 1@6 files to L6 => 8462127 bytes
Dec  6 01:58:51 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-06:58:51.317802) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 26.9 rd, 21.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.0, 10.0 +0.0 blob) out(8.1 +0.0 blob), read-write-amplify(386.1) write-amplify(172.3) OK, records in: 5462, records dropped: 510 output_compression: NoCompression
Dec  6 01:58:51 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-06:58:51.317843) EVENT_LOG_v1 {"time_micros": 1765004331317825, "job": 26, "event": "compaction_finished", "compaction_time_micros": 390676, "compaction_time_cpu_micros": 22074, "output_level": 6, "num_output_files": 1, "total_output_size": 8462127, "num_input_records": 5462, "num_output_records": 4952, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec  6 01:58:51 np0005548731 ceph-mon[77458]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000047.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  6 01:58:51 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765004331318118, "job": 26, "event": "table_file_deletion", "file_number": 47}
Dec  6 01:58:51 np0005548731 ceph-mon[77458]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000045.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  6 01:58:51 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765004331320157, "job": 26, "event": "table_file_deletion", "file_number": 45}
Dec  6 01:58:51 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-06:58:50.828223) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 01:58:51 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-06:58:51.320270) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 01:58:51 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-06:58:51.320275) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 01:58:51 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-06:58:51.320277) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 01:58:51 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-06:58:51.320278) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 01:58:51 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-06:58:51.320280) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 01:58:51 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:58:51 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:58:51 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:58:51.380 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:58:53 np0005548731 nova_compute[232433]: 2025-12-06 06:58:53.218 232437 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765004318.2174683, 012052dd-fee1-4ac6-bc17-336aff7b5db3 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 01:58:53 np0005548731 nova_compute[232433]: 2025-12-06 06:58:53.219 232437 INFO nova.compute.manager [-] [instance: 012052dd-fee1-4ac6-bc17-336aff7b5db3] VM Stopped (Lifecycle Event)#033[00m
Dec  6 01:58:53 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:58:53 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:58:53 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:58:53.232 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:58:53 np0005548731 nova_compute[232433]: 2025-12-06 06:58:53.240 232437 DEBUG nova.compute.manager [None req-9b080ef0-505a-45da-a17b-de48426608db - - - - - -] [instance: 012052dd-fee1-4ac6-bc17-336aff7b5db3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 01:58:53 np0005548731 nova_compute[232433]: 2025-12-06 06:58:53.256 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 01:58:53 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:58:53 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:58:53 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:58:53.382 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:58:54 np0005548731 podman[237421]: 2025-12-06 06:58:54.895914941 +0000 UTC m=+0.056093390 container health_status 26c2ce9bdb83c6db5ccad7f5b2a7cb4e9354ab0235ad2f942ea42c8feda2142b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Dec  6 01:58:54 np0005548731 podman[237422]: 2025-12-06 06:58:54.921429309 +0000 UTC m=+0.080598383 container health_status a118c3becf56cfbcae208065771ebf10a65162bb7b96389971293e534823fb78 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, container_name=ovn_controller)
Dec  6 01:58:54 np0005548731 podman[237423]: 2025-12-06 06:58:54.925395356 +0000 UTC m=+0.080809208 container health_status ae4fd08cdec31d6ae2a6b91ffff7deb6ca5a8375cb52ee4b685cc6f25796824e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  6 01:58:55 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 01:58:55 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:58:55 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:58:55 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:58:55.235 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:58:55 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:58:55 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:58:55 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:58:55.384 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:58:55 np0005548731 nova_compute[232433]: 2025-12-06 06:58:55.774 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 01:58:55 np0005548731 nova_compute[232433]: 2025-12-06 06:58:55.995 232437 DEBUG oslo_concurrency.lockutils [None req-39fb2093-6b3e-497f-8716-7178e976245d 28495070d0e94414977511fb98ee8288 ac4575fdd57240a9988c7ae425f0c2aa - - default default] Acquiring lock "1dd278c9-e3ac-480b-9a02-2e580da2d211" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 01:58:55 np0005548731 nova_compute[232433]: 2025-12-06 06:58:55.996 232437 DEBUG oslo_concurrency.lockutils [None req-39fb2093-6b3e-497f-8716-7178e976245d 28495070d0e94414977511fb98ee8288 ac4575fdd57240a9988c7ae425f0c2aa - - default default] Lock "1dd278c9-e3ac-480b-9a02-2e580da2d211" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 01:58:56 np0005548731 nova_compute[232433]: 2025-12-06 06:58:56.014 232437 DEBUG nova.objects.instance [None req-39fb2093-6b3e-497f-8716-7178e976245d 28495070d0e94414977511fb98ee8288 ac4575fdd57240a9988c7ae425f0c2aa - - default default] Lazy-loading 'flavor' on Instance uuid 1dd278c9-e3ac-480b-9a02-2e580da2d211 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 01:58:56 np0005548731 nova_compute[232433]: 2025-12-06 06:58:56.090 232437 DEBUG oslo_concurrency.lockutils [None req-39fb2093-6b3e-497f-8716-7178e976245d 28495070d0e94414977511fb98ee8288 ac4575fdd57240a9988c7ae425f0c2aa - - default default] Lock "1dd278c9-e3ac-480b-9a02-2e580da2d211" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.094s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 01:58:56 np0005548731 nova_compute[232433]: 2025-12-06 06:58:56.403 232437 DEBUG oslo_concurrency.lockutils [None req-39fb2093-6b3e-497f-8716-7178e976245d 28495070d0e94414977511fb98ee8288 ac4575fdd57240a9988c7ae425f0c2aa - - default default] Acquiring lock "1dd278c9-e3ac-480b-9a02-2e580da2d211" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 01:58:56 np0005548731 nova_compute[232433]: 2025-12-06 06:58:56.404 232437 DEBUG oslo_concurrency.lockutils [None req-39fb2093-6b3e-497f-8716-7178e976245d 28495070d0e94414977511fb98ee8288 ac4575fdd57240a9988c7ae425f0c2aa - - default default] Lock "1dd278c9-e3ac-480b-9a02-2e580da2d211" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 01:58:56 np0005548731 nova_compute[232433]: 2025-12-06 06:58:56.404 232437 INFO nova.compute.manager [None req-39fb2093-6b3e-497f-8716-7178e976245d 28495070d0e94414977511fb98ee8288 ac4575fdd57240a9988c7ae425f0c2aa - - default default] [instance: 1dd278c9-e3ac-480b-9a02-2e580da2d211] Attaching volume efe8ed74-896b-4b4f-9562-1c790f1f79e8 to /dev/vdb#033[00m
Dec  6 01:58:56 np0005548731 nova_compute[232433]: 2025-12-06 06:58:56.619 232437 DEBUG os_brick.utils [None req-39fb2093-6b3e-497f-8716-7178e976245d 28495070d0e94414977511fb98ee8288 ac4575fdd57240a9988c7ae425f0c2aa - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.102', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-2.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Dec  6 01:58:56 np0005548731 nova_compute[232433]: 2025-12-06 06:58:56.621 232437 INFO oslo.privsep.daemon [None req-39fb2093-6b3e-497f-8716-7178e976245d 28495070d0e94414977511fb98ee8288 ac4575fdd57240a9988c7ae425f0c2aa - - default default] Running privsep helper: ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/nova/nova.conf', '--config-file', '/etc/nova/nova-compute.conf', '--config-dir', '/etc/nova/nova.conf.d', '--privsep_context', 'os_brick.privileged.default', '--privsep_sock_path', '/tmp/tmprjtsnqks/privsep.sock']#033[00m
Dec  6 01:58:57 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:58:57 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:58:57 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:58:57.238 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:58:57 np0005548731 nova_compute[232433]: 2025-12-06 06:58:57.337 232437 INFO oslo.privsep.daemon [None req-39fb2093-6b3e-497f-8716-7178e976245d 28495070d0e94414977511fb98ee8288 ac4575fdd57240a9988c7ae425f0c2aa - - default default] Spawned new privsep daemon via rootwrap#033[00m
Dec  6 01:58:57 np0005548731 nova_compute[232433]: 2025-12-06 06:58:57.195 237736 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m
Dec  6 01:58:57 np0005548731 nova_compute[232433]: 2025-12-06 06:58:57.199 237736 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m
Dec  6 01:58:57 np0005548731 nova_compute[232433]: 2025-12-06 06:58:57.202 237736 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_SYS_ADMIN/CAP_SYS_ADMIN/none#033[00m
Dec  6 01:58:57 np0005548731 nova_compute[232433]: 2025-12-06 06:58:57.203 237736 INFO oslo.privsep.daemon [-] privsep daemon running as pid 237736#033[00m
Dec  6 01:58:57 np0005548731 nova_compute[232433]: 2025-12-06 06:58:57.340 237736 DEBUG oslo.privsep.daemon [-] privsep: reply[6f0167a2-418d-4234-afc0-6e19bf419fc4]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 01:58:57 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:58:57 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 01:58:57 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:58:57.386 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 01:58:57 np0005548731 nova_compute[232433]: 2025-12-06 06:58:57.450 237736 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 01:58:57 np0005548731 nova_compute[232433]: 2025-12-06 06:58:57.469 237736 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.019s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 01:58:57 np0005548731 nova_compute[232433]: 2025-12-06 06:58:57.470 237736 DEBUG oslo.privsep.daemon [-] privsep: reply[f37a30a1-02de-40e9-ae27-ac663d7466ab]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 01:58:57 np0005548731 nova_compute[232433]: 2025-12-06 06:58:57.472 237736 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 01:58:57 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 01:58:57 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 01:58:57 np0005548731 nova_compute[232433]: 2025-12-06 06:58:57.481 237736 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.010s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 01:58:57 np0005548731 nova_compute[232433]: 2025-12-06 06:58:57.482 237736 DEBUG oslo.privsep.daemon [-] privsep: reply[b8097ccc-58de-432f-84ce-81d371d259ec]: (4, ('InitiatorName=iqn.1994-05.com.redhat:63778d5959f0', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 01:58:57 np0005548731 nova_compute[232433]: 2025-12-06 06:58:57.486 237736 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 01:58:57 np0005548731 nova_compute[232433]: 2025-12-06 06:58:57.498 237736 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.012s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 01:58:57 np0005548731 nova_compute[232433]: 2025-12-06 06:58:57.498 237736 DEBUG oslo.privsep.daemon [-] privsep: reply[275d3ea0-7c83-42c1-b04c-bc418841fde3]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 01:58:57 np0005548731 nova_compute[232433]: 2025-12-06 06:58:57.501 237736 DEBUG oslo.privsep.daemon [-] privsep: reply[3fdcf58b-9419-46d5-afea-db9bffe3cd43]: (4, 'ec2492e2-e0d1-43d3-b74f-744a8272a0e6') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 01:58:57 np0005548731 nova_compute[232433]: 2025-12-06 06:58:57.502 232437 DEBUG oslo_concurrency.processutils [None req-39fb2093-6b3e-497f-8716-7178e976245d 28495070d0e94414977511fb98ee8288 ac4575fdd57240a9988c7ae425f0c2aa - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 01:58:57 np0005548731 nova_compute[232433]: 2025-12-06 06:58:57.526 232437 DEBUG oslo_concurrency.processutils [None req-39fb2093-6b3e-497f-8716-7178e976245d 28495070d0e94414977511fb98ee8288 ac4575fdd57240a9988c7ae425f0c2aa - - default default] CMD "nvme version" returned: 0 in 0.024s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 01:58:57 np0005548731 nova_compute[232433]: 2025-12-06 06:58:57.529 232437 DEBUG os_brick.initiator.connectors.lightos [None req-39fb2093-6b3e-497f-8716-7178e976245d 28495070d0e94414977511fb98ee8288 ac4575fdd57240a9988c7ae425f0c2aa - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Dec  6 01:58:57 np0005548731 nova_compute[232433]: 2025-12-06 06:58:57.530 232437 DEBUG os_brick.initiator.connectors.lightos [None req-39fb2093-6b3e-497f-8716-7178e976245d 28495070d0e94414977511fb98ee8288 ac4575fdd57240a9988c7ae425f0c2aa - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Dec  6 01:58:57 np0005548731 nova_compute[232433]: 2025-12-06 06:58:57.531 232437 DEBUG os_brick.initiator.connectors.lightos [None req-39fb2093-6b3e-497f-8716-7178e976245d 28495070d0e94414977511fb98ee8288 ac4575fdd57240a9988c7ae425f0c2aa - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:bf3e0a14-a5f8-4123-aa26-e7cad37b879a dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Dec  6 01:58:57 np0005548731 nova_compute[232433]: 2025-12-06 06:58:57.531 232437 DEBUG os_brick.utils [None req-39fb2093-6b3e-497f-8716-7178e976245d 28495070d0e94414977511fb98ee8288 ac4575fdd57240a9988c7ae425f0c2aa - - default default] <== get_connector_properties: return (910ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.102', 'host': 'compute-2.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:63778d5959f0', 'do_local_attach': False, 'nvme_hostid': 'bf3e0a14-a5f8-4123-aa26-e7cad37b879a', 'system uuid': 'ec2492e2-e0d1-43d3-b74f-744a8272a0e6', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:bf3e0a14-a5f8-4123-aa26-e7cad37b879a', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Dec  6 01:58:57 np0005548731 nova_compute[232433]: 2025-12-06 06:58:57.532 232437 DEBUG nova.virt.block_device [None req-39fb2093-6b3e-497f-8716-7178e976245d 28495070d0e94414977511fb98ee8288 ac4575fdd57240a9988c7ae425f0c2aa - - default default] [instance: 1dd278c9-e3ac-480b-9a02-2e580da2d211] Updating existing volume attachment record: 91bc07dc-2e56-419d-9563-145a0a1f9ffd _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Dec  6 01:58:58 np0005548731 nova_compute[232433]: 2025-12-06 06:58:58.259 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 01:58:59 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 01:58:59 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 01:58:59 np0005548731 nova_compute[232433]: 2025-12-06 06:58:59.018 232437 DEBUG oslo_concurrency.lockutils [None req-39fb2093-6b3e-497f-8716-7178e976245d 28495070d0e94414977511fb98ee8288 ac4575fdd57240a9988c7ae425f0c2aa - - default default] Acquiring lock "cache_volume_driver" by "nova.virt.libvirt.driver.LibvirtDriver._get_volume_driver.<locals>._cache_volume_driver" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 01:58:59 np0005548731 nova_compute[232433]: 2025-12-06 06:58:59.019 232437 DEBUG oslo_concurrency.lockutils [None req-39fb2093-6b3e-497f-8716-7178e976245d 28495070d0e94414977511fb98ee8288 ac4575fdd57240a9988c7ae425f0c2aa - - default default] Lock "cache_volume_driver" acquired by "nova.virt.libvirt.driver.LibvirtDriver._get_volume_driver.<locals>._cache_volume_driver" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 01:58:59 np0005548731 nova_compute[232433]: 2025-12-06 06:58:59.020 232437 DEBUG oslo_concurrency.lockutils [None req-39fb2093-6b3e-497f-8716-7178e976245d 28495070d0e94414977511fb98ee8288 ac4575fdd57240a9988c7ae425f0c2aa - - default default] Lock "cache_volume_driver" "released" by "nova.virt.libvirt.driver.LibvirtDriver._get_volume_driver.<locals>._cache_volume_driver" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 01:58:59 np0005548731 nova_compute[232433]: 2025-12-06 06:58:59.030 232437 DEBUG nova.objects.instance [None req-39fb2093-6b3e-497f-8716-7178e976245d 28495070d0e94414977511fb98ee8288 ac4575fdd57240a9988c7ae425f0c2aa - - default default] Lazy-loading 'flavor' on Instance uuid 1dd278c9-e3ac-480b-9a02-2e580da2d211 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 01:58:59 np0005548731 nova_compute[232433]: 2025-12-06 06:58:59.054 232437 DEBUG nova.virt.libvirt.driver [None req-39fb2093-6b3e-497f-8716-7178e976245d 28495070d0e94414977511fb98ee8288 ac4575fdd57240a9988c7ae425f0c2aa - - default default] [instance: 1dd278c9-e3ac-480b-9a02-2e580da2d211] Attempting to attach volume efe8ed74-896b-4b4f-9562-1c790f1f79e8 with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168#033[00m
Dec  6 01:58:59 np0005548731 nova_compute[232433]: 2025-12-06 06:58:59.057 232437 DEBUG nova.virt.libvirt.guest [None req-39fb2093-6b3e-497f-8716-7178e976245d 28495070d0e94414977511fb98ee8288 ac4575fdd57240a9988c7ae425f0c2aa - - default default] attach device xml: <disk type="network" device="disk">
Dec  6 01:58:59 np0005548731 nova_compute[232433]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Dec  6 01:58:59 np0005548731 nova_compute[232433]:  <source protocol="rbd" name="volumes/volume-efe8ed74-896b-4b4f-9562-1c790f1f79e8">
Dec  6 01:58:59 np0005548731 nova_compute[232433]:    <host name="192.168.122.100" port="6789"/>
Dec  6 01:58:59 np0005548731 nova_compute[232433]:    <host name="192.168.122.102" port="6789"/>
Dec  6 01:58:59 np0005548731 nova_compute[232433]:    <host name="192.168.122.101" port="6789"/>
Dec  6 01:58:59 np0005548731 nova_compute[232433]:  </source>
Dec  6 01:58:59 np0005548731 nova_compute[232433]:  <auth username="openstack">
Dec  6 01:58:59 np0005548731 nova_compute[232433]:    <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec  6 01:58:59 np0005548731 nova_compute[232433]:  </auth>
Dec  6 01:58:59 np0005548731 nova_compute[232433]:  <target dev="vdb" bus="virtio"/>
Dec  6 01:58:59 np0005548731 nova_compute[232433]:  <serial>efe8ed74-896b-4b4f-9562-1c790f1f79e8</serial>
Dec  6 01:58:59 np0005548731 nova_compute[232433]: </disk>
Dec  6 01:58:59 np0005548731 nova_compute[232433]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339#033[00m
Dec  6 01:58:59 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:58:59 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:58:59 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:58:59.242 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:58:59 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:58:59 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:58:59 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:58:59.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:58:59 np0005548731 nova_compute[232433]: 2025-12-06 06:58:59.653 232437 DEBUG nova.virt.libvirt.driver [None req-39fb2093-6b3e-497f-8716-7178e976245d 28495070d0e94414977511fb98ee8288 ac4575fdd57240a9988c7ae425f0c2aa - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  6 01:58:59 np0005548731 nova_compute[232433]: 2025-12-06 06:58:59.653 232437 DEBUG nova.virt.libvirt.driver [None req-39fb2093-6b3e-497f-8716-7178e976245d 28495070d0e94414977511fb98ee8288 ac4575fdd57240a9988c7ae425f0c2aa - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  6 01:58:59 np0005548731 nova_compute[232433]: 2025-12-06 06:58:59.654 232437 DEBUG nova.virt.libvirt.driver [None req-39fb2093-6b3e-497f-8716-7178e976245d 28495070d0e94414977511fb98ee8288 ac4575fdd57240a9988c7ae425f0c2aa - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  6 01:58:59 np0005548731 nova_compute[232433]: 2025-12-06 06:58:59.654 232437 DEBUG nova.virt.libvirt.driver [None req-39fb2093-6b3e-497f-8716-7178e976245d 28495070d0e94414977511fb98ee8288 ac4575fdd57240a9988c7ae425f0c2aa - - default default] No VIF found with MAC fa:16:3e:20:e0:e2, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec  6 01:58:59 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 01:58:59 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 01:58:59 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 01:58:59 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 01:58:59 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  6 01:58:59 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 01:58:59 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  6 01:59:00 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 01:59:00 np0005548731 nova_compute[232433]: 2025-12-06 06:59:00.117 232437 DEBUG oslo_concurrency.lockutils [None req-39fb2093-6b3e-497f-8716-7178e976245d 28495070d0e94414977511fb98ee8288 ac4575fdd57240a9988c7ae425f0c2aa - - default default] Lock "1dd278c9-e3ac-480b-9a02-2e580da2d211" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 3.713s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 01:59:00 np0005548731 nova_compute[232433]: 2025-12-06 06:59:00.455 232437 DEBUG nova.virt.libvirt.driver [None req-6fd1b145-6234-4644-a1bc-f8db3826c82d 60162eb3313e412fbdb405f46832c14a a33d484d48c9436da22da22551266623 - - default default] [instance: 1dd278c9-e3ac-480b-9a02-2e580da2d211] volume_snapshot_create: create_info: {'snapshot_id': '27dacc48-1b9e-47e0-90b9-b9e62edecc83', 'type': 'qcow2', 'new_file': 'new_file'} volume_snapshot_create /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:3572#033[00m
Dec  6 01:59:00 np0005548731 nova_compute[232433]: 2025-12-06 06:59:00.462 232437 ERROR nova.virt.libvirt.driver [None req-6fd1b145-6234-4644-a1bc-f8db3826c82d 60162eb3313e412fbdb405f46832c14a a33d484d48c9436da22da22551266623 - - default default] [instance: 1dd278c9-e3ac-480b-9a02-2e580da2d211] Error occurred during volume_snapshot_create, sending error status to Cinder.: nova.exception.InternalError: Found no disk to snapshot.
Dec  6 01:59:00 np0005548731 nova_compute[232433]: 2025-12-06 06:59:00.462 232437 ERROR nova.virt.libvirt.driver [instance: 1dd278c9-e3ac-480b-9a02-2e580da2d211] Traceback (most recent call last):
Dec  6 01:59:00 np0005548731 nova_compute[232433]: 2025-12-06 06:59:00.462 232437 ERROR nova.virt.libvirt.driver [instance: 1dd278c9-e3ac-480b-9a02-2e580da2d211]   File "/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py", line 3590, in volume_snapshot_create
Dec  6 01:59:00 np0005548731 nova_compute[232433]: 2025-12-06 06:59:00.462 232437 ERROR nova.virt.libvirt.driver [instance: 1dd278c9-e3ac-480b-9a02-2e580da2d211]     self._volume_snapshot_create(context, instance, guest,
Dec  6 01:59:00 np0005548731 nova_compute[232433]: 2025-12-06 06:59:00.462 232437 ERROR nova.virt.libvirt.driver [instance: 1dd278c9-e3ac-480b-9a02-2e580da2d211]   File "/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py", line 3477, in _volume_snapshot_create
Dec  6 01:59:00 np0005548731 nova_compute[232433]: 2025-12-06 06:59:00.462 232437 ERROR nova.virt.libvirt.driver [instance: 1dd278c9-e3ac-480b-9a02-2e580da2d211]     raise exception.InternalError(msg)
Dec  6 01:59:00 np0005548731 nova_compute[232433]: 2025-12-06 06:59:00.462 232437 ERROR nova.virt.libvirt.driver [instance: 1dd278c9-e3ac-480b-9a02-2e580da2d211] nova.exception.InternalError: Found no disk to snapshot.
Dec  6 01:59:00 np0005548731 nova_compute[232433]: 2025-12-06 06:59:00.462 232437 ERROR nova.virt.libvirt.driver [instance: 1dd278c9-e3ac-480b-9a02-2e580da2d211] #033[00m
Dec  6 01:59:00 np0005548731 nova_compute[232433]: 2025-12-06 06:59:00.567 232437 DEBUG nova.virt.libvirt.driver [None req-e80a4008-669f-4cd5-a266-9186fe89a40a 60162eb3313e412fbdb405f46832c14a a33d484d48c9436da22da22551266623 - - default default] [instance: 1dd278c9-e3ac-480b-9a02-2e580da2d211] volume_snapshot_delete: delete_info: {'volume_id': 'efe8ed74-896b-4b4f-9562-1c790f1f79e8'} _volume_snapshot_delete /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:3673#033[00m
Dec  6 01:59:00 np0005548731 nova_compute[232433]: 2025-12-06 06:59:00.568 232437 ERROR nova.virt.libvirt.driver [None req-e80a4008-669f-4cd5-a266-9186fe89a40a 60162eb3313e412fbdb405f46832c14a a33d484d48c9436da22da22551266623 - - default default] [instance: 1dd278c9-e3ac-480b-9a02-2e580da2d211] Error occurred during volume_snapshot_delete, sending error status to Cinder.: KeyError: 'type'
Dec  6 01:59:00 np0005548731 nova_compute[232433]: 2025-12-06 06:59:00.568 232437 ERROR nova.virt.libvirt.driver [instance: 1dd278c9-e3ac-480b-9a02-2e580da2d211] Traceback (most recent call last):
Dec  6 01:59:00 np0005548731 nova_compute[232433]: 2025-12-06 06:59:00.568 232437 ERROR nova.virt.libvirt.driver [instance: 1dd278c9-e3ac-480b-9a02-2e580da2d211]   File "/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py", line 3846, in volume_snapshot_delete
Dec  6 01:59:00 np0005548731 nova_compute[232433]: 2025-12-06 06:59:00.568 232437 ERROR nova.virt.libvirt.driver [instance: 1dd278c9-e3ac-480b-9a02-2e580da2d211]     self._volume_snapshot_delete(context, instance, volume_id,
Dec  6 01:59:00 np0005548731 nova_compute[232433]: 2025-12-06 06:59:00.568 232437 ERROR nova.virt.libvirt.driver [instance: 1dd278c9-e3ac-480b-9a02-2e580da2d211]   File "/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py", line 3676, in _volume_snapshot_delete
Dec  6 01:59:00 np0005548731 nova_compute[232433]: 2025-12-06 06:59:00.568 232437 ERROR nova.virt.libvirt.driver [instance: 1dd278c9-e3ac-480b-9a02-2e580da2d211]     if delete_info['type'] != 'qcow2':
Dec  6 01:59:00 np0005548731 nova_compute[232433]: 2025-12-06 06:59:00.568 232437 ERROR nova.virt.libvirt.driver [instance: 1dd278c9-e3ac-480b-9a02-2e580da2d211] KeyError: 'type'
Dec  6 01:59:00 np0005548731 nova_compute[232433]: 2025-12-06 06:59:00.568 232437 ERROR nova.virt.libvirt.driver [instance: 1dd278c9-e3ac-480b-9a02-2e580da2d211] #033[00m
Dec  6 01:59:00 np0005548731 nova_compute[232433]: 2025-12-06 06:59:00.613 232437 ERROR nova.virt.libvirt.driver [None req-6fd1b145-6234-4644-a1bc-f8db3826c82d 60162eb3313e412fbdb405f46832c14a a33d484d48c9436da22da22551266623 - - default default] Failed to send updated snapshot status to volume service.: nova.exception.SnapshotNotFound: Snapshot 27dacc48-1b9e-47e0-90b9-b9e62edecc83 could not be found.
Dec  6 01:59:00 np0005548731 nova_compute[232433]: 2025-12-06 06:59:00.613 232437 ERROR nova.virt.libvirt.driver Traceback (most recent call last):
Dec  6 01:59:00 np0005548731 nova_compute[232433]: 2025-12-06 06:59:00.613 232437 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py", line 3590, in volume_snapshot_create
Dec  6 01:59:00 np0005548731 nova_compute[232433]: 2025-12-06 06:59:00.613 232437 ERROR nova.virt.libvirt.driver     self._volume_snapshot_create(context, instance, guest,
Dec  6 01:59:00 np0005548731 nova_compute[232433]: 2025-12-06 06:59:00.613 232437 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py", line 3477, in _volume_snapshot_create
Dec  6 01:59:00 np0005548731 nova_compute[232433]: 2025-12-06 06:59:00.613 232437 ERROR nova.virt.libvirt.driver     raise exception.InternalError(msg)
Dec  6 01:59:00 np0005548731 nova_compute[232433]: 2025-12-06 06:59:00.613 232437 ERROR nova.virt.libvirt.driver nova.exception.InternalError: Found no disk to snapshot.
Dec  6 01:59:00 np0005548731 nova_compute[232433]: 2025-12-06 06:59:00.613 232437 ERROR nova.virt.libvirt.driver 
Dec  6 01:59:00 np0005548731 nova_compute[232433]: 2025-12-06 06:59:00.613 232437 ERROR nova.virt.libvirt.driver During handling of the above exception, another exception occurred:
Dec  6 01:59:00 np0005548731 nova_compute[232433]: 2025-12-06 06:59:00.613 232437 ERROR nova.virt.libvirt.driver 
Dec  6 01:59:00 np0005548731 nova_compute[232433]: 2025-12-06 06:59:00.613 232437 ERROR nova.virt.libvirt.driver Traceback (most recent call last):
Dec  6 01:59:00 np0005548731 nova_compute[232433]: 2025-12-06 06:59:00.613 232437 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/volume/cinder.py", line 466, in wrapper
Dec  6 01:59:00 np0005548731 nova_compute[232433]: 2025-12-06 06:59:00.613 232437 ERROR nova.virt.libvirt.driver     res = method(self, ctx, snapshot_id, *args, **kwargs)
Dec  6 01:59:00 np0005548731 nova_compute[232433]: 2025-12-06 06:59:00.613 232437 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/volume/cinder.py", line 761, in update_snapshot_status
Dec  6 01:59:00 np0005548731 nova_compute[232433]: 2025-12-06 06:59:00.613 232437 ERROR nova.virt.libvirt.driver     vs.update_snapshot_status(
Dec  6 01:59:00 np0005548731 nova_compute[232433]: 2025-12-06 06:59:00.613 232437 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/cinderclient/v3/volume_snapshots.py", line 225, in update_snapshot_status
Dec  6 01:59:00 np0005548731 nova_compute[232433]: 2025-12-06 06:59:00.613 232437 ERROR nova.virt.libvirt.driver     return self._action('os-update_snapshot_status',
Dec  6 01:59:00 np0005548731 nova_compute[232433]: 2025-12-06 06:59:00.613 232437 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/cinderclient/v3/volume_snapshots.py", line 221, in _action
Dec  6 01:59:00 np0005548731 nova_compute[232433]: 2025-12-06 06:59:00.613 232437 ERROR nova.virt.libvirt.driver     resp, body = self.api.client.post(url, body=body)
Dec  6 01:59:00 np0005548731 nova_compute[232433]: 2025-12-06 06:59:00.613 232437 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/cinderclient/client.py", line 223, in post
Dec  6 01:59:00 np0005548731 nova_compute[232433]: 2025-12-06 06:59:00.613 232437 ERROR nova.virt.libvirt.driver     return self._cs_request(url, 'POST', **kwargs)
Dec  6 01:59:00 np0005548731 nova_compute[232433]: 2025-12-06 06:59:00.613 232437 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/cinderclient/client.py", line 211, in _cs_request
Dec  6 01:59:00 np0005548731 nova_compute[232433]: 2025-12-06 06:59:00.613 232437 ERROR nova.virt.libvirt.driver     return self.request(url, method, **kwargs)
Dec  6 01:59:00 np0005548731 nova_compute[232433]: 2025-12-06 06:59:00.613 232437 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/cinderclient/client.py", line 197, in request
Dec  6 01:59:00 np0005548731 nova_compute[232433]: 2025-12-06 06:59:00.613 232437 ERROR nova.virt.libvirt.driver     raise exceptions.from_response(resp, body)
Dec  6 01:59:00 np0005548731 nova_compute[232433]: 2025-12-06 06:59:00.613 232437 ERROR nova.virt.libvirt.driver cinderclient.exceptions.NotFound: Snapshot 27dacc48-1b9e-47e0-90b9-b9e62edecc83 could not be found. (HTTP 404) (Request-ID: req-d675b535-8226-4ec7-9bff-eca749f5fc05)
Dec  6 01:59:00 np0005548731 nova_compute[232433]: 2025-12-06 06:59:00.613 232437 ERROR nova.virt.libvirt.driver 
Dec  6 01:59:00 np0005548731 nova_compute[232433]: 2025-12-06 06:59:00.613 232437 ERROR nova.virt.libvirt.driver During handling of the above exception, another exception occurred:
Dec  6 01:59:00 np0005548731 nova_compute[232433]: 2025-12-06 06:59:00.613 232437 ERROR nova.virt.libvirt.driver 
Dec  6 01:59:00 np0005548731 nova_compute[232433]: 2025-12-06 06:59:00.613 232437 ERROR nova.virt.libvirt.driver Traceback (most recent call last):
Dec  6 01:59:00 np0005548731 nova_compute[232433]: 2025-12-06 06:59:00.613 232437 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py", line 3412, in _volume_snapshot_update_status
Dec  6 01:59:00 np0005548731 nova_compute[232433]: 2025-12-06 06:59:00.613 232437 ERROR nova.virt.libvirt.driver     self._volume_api.update_snapshot_status(context,
Dec  6 01:59:00 np0005548731 nova_compute[232433]: 2025-12-06 06:59:00.613 232437 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/volume/cinder.py", line 397, in wrapper
Dec  6 01:59:00 np0005548731 nova_compute[232433]: 2025-12-06 06:59:00.613 232437 ERROR nova.virt.libvirt.driver     res = method(self, ctx, *args, **kwargs)
Dec  6 01:59:00 np0005548731 nova_compute[232433]: 2025-12-06 06:59:00.613 232437 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/volume/cinder.py", line 468, in wrapper
Dec  6 01:59:00 np0005548731 nova_compute[232433]: 2025-12-06 06:59:00.613 232437 ERROR nova.virt.libvirt.driver     _reraise(exception.SnapshotNotFound(snapshot_id=snapshot_id))
Dec  6 01:59:00 np0005548731 nova_compute[232433]: 2025-12-06 06:59:00.613 232437 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/volume/cinder.py", line 488, in _reraise
Dec  6 01:59:00 np0005548731 nova_compute[232433]: 2025-12-06 06:59:00.613 232437 ERROR nova.virt.libvirt.driver     raise desired_exc.with_traceback(sys.exc_info()[2])
Dec  6 01:59:00 np0005548731 nova_compute[232433]: 2025-12-06 06:59:00.613 232437 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/volume/cinder.py", line 466, in wrapper
Dec  6 01:59:00 np0005548731 nova_compute[232433]: 2025-12-06 06:59:00.613 232437 ERROR nova.virt.libvirt.driver     res = method(self, ctx, snapshot_id, *args, **kwargs)
Dec  6 01:59:00 np0005548731 nova_compute[232433]: 2025-12-06 06:59:00.613 232437 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/volume/cinder.py", line 761, in update_snapshot_status
Dec  6 01:59:00 np0005548731 nova_compute[232433]: 2025-12-06 06:59:00.613 232437 ERROR nova.virt.libvirt.driver     vs.update_snapshot_status(
Dec  6 01:59:00 np0005548731 nova_compute[232433]: 2025-12-06 06:59:00.613 232437 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/cinderclient/v3/volume_snapshots.py", line 225, in update_snapshot_status
Dec  6 01:59:00 np0005548731 nova_compute[232433]: 2025-12-06 06:59:00.613 232437 ERROR nova.virt.libvirt.driver     return self._action('os-update_snapshot_status',
Dec  6 01:59:00 np0005548731 nova_compute[232433]: 2025-12-06 06:59:00.613 232437 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/cinderclient/v3/volume_snapshots.py", line 221, in _action
Dec  6 01:59:00 np0005548731 nova_compute[232433]: 2025-12-06 06:59:00.613 232437 ERROR nova.virt.libvirt.driver     resp, body = self.api.client.post(url, body=body)
Dec  6 01:59:00 np0005548731 nova_compute[232433]: 2025-12-06 06:59:00.613 232437 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/cinderclient/client.py", line 223, in post
Dec  6 01:59:00 np0005548731 nova_compute[232433]: 2025-12-06 06:59:00.613 232437 ERROR nova.virt.libvirt.driver     return self._cs_request(url, 'POST', **kwargs)
Dec  6 01:59:00 np0005548731 nova_compute[232433]: 2025-12-06 06:59:00.613 232437 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/cinderclient/client.py", line 211, in _cs_request
Dec  6 01:59:00 np0005548731 nova_compute[232433]: 2025-12-06 06:59:00.613 232437 ERROR nova.virt.libvirt.driver     return self.request(url, method, **kwargs)
Dec  6 01:59:00 np0005548731 nova_compute[232433]: 2025-12-06 06:59:00.613 232437 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/cinderclient/client.py", line 197, in request
Dec  6 01:59:00 np0005548731 nova_compute[232433]: 2025-12-06 06:59:00.613 232437 ERROR nova.virt.libvirt.driver     raise exceptions.from_response(resp, body)
Dec  6 01:59:00 np0005548731 nova_compute[232433]: 2025-12-06 06:59:00.613 232437 ERROR nova.virt.libvirt.driver nova.exception.SnapshotNotFound: Snapshot 27dacc48-1b9e-47e0-90b9-b9e62edecc83 could not be found.
Dec  6 01:59:00 np0005548731 nova_compute[232433]: 2025-12-06 06:59:00.613 232437 ERROR nova.virt.libvirt.driver #033[00m
Dec  6 01:59:00 np0005548731 nova_compute[232433]: 2025-12-06 06:59:00.619 232437 ERROR oslo_messaging.rpc.server [None req-6fd1b145-6234-4644-a1bc-f8db3826c82d 60162eb3313e412fbdb405f46832c14a a33d484d48c9436da22da22551266623 - - default default] Exception during message handling: nova.exception.InternalError: Found no disk to snapshot.
Dec  6 01:59:00 np0005548731 nova_compute[232433]: 2025-12-06 06:59:00.619 232437 ERROR oslo_messaging.rpc.server Traceback (most recent call last):
Dec  6 01:59:00 np0005548731 nova_compute[232433]: 2025-12-06 06:59:00.619 232437 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/oslo_messaging/rpc/server.py", line 165, in _process_incoming
Dec  6 01:59:00 np0005548731 nova_compute[232433]: 2025-12-06 06:59:00.619 232437 ERROR oslo_messaging.rpc.server     res = self.dispatcher.dispatch(message)
Dec  6 01:59:00 np0005548731 nova_compute[232433]: 2025-12-06 06:59:00.619 232437 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/oslo_messaging/rpc/dispatcher.py", line 309, in dispatch
Dec  6 01:59:00 np0005548731 nova_compute[232433]: 2025-12-06 06:59:00.619 232437 ERROR oslo_messaging.rpc.server     return self._do_dispatch(endpoint, method, ctxt, args)
Dec  6 01:59:00 np0005548731 nova_compute[232433]: 2025-12-06 06:59:00.619 232437 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/oslo_messaging/rpc/dispatcher.py", line 229, in _do_dispatch
Dec  6 01:59:00 np0005548731 nova_compute[232433]: 2025-12-06 06:59:00.619 232437 ERROR oslo_messaging.rpc.server     result = func(ctxt, **new_args)
Dec  6 01:59:00 np0005548731 nova_compute[232433]: 2025-12-06 06:59:00.619 232437 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/oslo_messaging/rpc/server.py", line 244, in inner
Dec  6 01:59:00 np0005548731 nova_compute[232433]: 2025-12-06 06:59:00.619 232437 ERROR oslo_messaging.rpc.server     return func(*args, **kwargs)
Dec  6 01:59:00 np0005548731 nova_compute[232433]: 2025-12-06 06:59:00.619 232437 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/nova/exception_wrapper.py", line 71, in wrapped
Dec  6 01:59:00 np0005548731 nova_compute[232433]: 2025-12-06 06:59:00.619 232437 ERROR oslo_messaging.rpc.server     _emit_versioned_exception_notification(
Dec  6 01:59:00 np0005548731 nova_compute[232433]: 2025-12-06 06:59:00.619 232437 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 227, in __exit__
Dec  6 01:59:00 np0005548731 nova_compute[232433]: 2025-12-06 06:59:00.619 232437 ERROR oslo_messaging.rpc.server     self.force_reraise()
Dec  6 01:59:00 np0005548731 nova_compute[232433]: 2025-12-06 06:59:00.619 232437 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 200, in force_reraise
Dec  6 01:59:00 np0005548731 nova_compute[232433]: 2025-12-06 06:59:00.619 232437 ERROR oslo_messaging.rpc.server     raise self.value
Dec  6 01:59:00 np0005548731 nova_compute[232433]: 2025-12-06 06:59:00.619 232437 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/nova/exception_wrapper.py", line 63, in wrapped
Dec  6 01:59:00 np0005548731 nova_compute[232433]: 2025-12-06 06:59:00.619 232437 ERROR oslo_messaging.rpc.server     return f(self, context, *args, **kw)
Dec  6 01:59:00 np0005548731 nova_compute[232433]: 2025-12-06 06:59:00.619 232437 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/nova/compute/manager.py", line 4410, in volume_snapshot_create
Dec  6 01:59:00 np0005548731 nova_compute[232433]: 2025-12-06 06:59:00.619 232437 ERROR oslo_messaging.rpc.server     self.driver.volume_snapshot_create(context, instance, volume_id,
Dec  6 01:59:00 np0005548731 nova_compute[232433]: 2025-12-06 06:59:00.619 232437 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py", line 3597, in volume_snapshot_create
Dec  6 01:59:00 np0005548731 nova_compute[232433]: 2025-12-06 06:59:00.619 232437 ERROR oslo_messaging.rpc.server     self._volume_snapshot_update_status(
Dec  6 01:59:00 np0005548731 nova_compute[232433]: 2025-12-06 06:59:00.619 232437 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 227, in __exit__
Dec  6 01:59:00 np0005548731 nova_compute[232433]: 2025-12-06 06:59:00.619 232437 ERROR oslo_messaging.rpc.server     self.force_reraise()
Dec  6 01:59:00 np0005548731 nova_compute[232433]: 2025-12-06 06:59:00.619 232437 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 200, in force_reraise
Dec  6 01:59:00 np0005548731 nova_compute[232433]: 2025-12-06 06:59:00.619 232437 ERROR oslo_messaging.rpc.server     raise self.value
Dec  6 01:59:00 np0005548731 nova_compute[232433]: 2025-12-06 06:59:00.619 232437 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py", line 3590, in volume_snapshot_create
Dec  6 01:59:00 np0005548731 nova_compute[232433]: 2025-12-06 06:59:00.619 232437 ERROR oslo_messaging.rpc.server     self._volume_snapshot_create(context, instance, guest,
Dec  6 01:59:00 np0005548731 nova_compute[232433]: 2025-12-06 06:59:00.619 232437 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py", line 3477, in _volume_snapshot_create
Dec  6 01:59:00 np0005548731 nova_compute[232433]: 2025-12-06 06:59:00.619 232437 ERROR oslo_messaging.rpc.server     raise exception.InternalError(msg)
Dec  6 01:59:00 np0005548731 nova_compute[232433]: 2025-12-06 06:59:00.619 232437 ERROR oslo_messaging.rpc.server nova.exception.InternalError: Found no disk to snapshot.
Dec  6 01:59:00 np0005548731 nova_compute[232433]: 2025-12-06 06:59:00.619 232437 ERROR oslo_messaging.rpc.server #033[00m
Dec  6 01:59:00 np0005548731 nova_compute[232433]: 2025-12-06 06:59:00.736 232437 ERROR nova.virt.libvirt.driver [None req-e80a4008-669f-4cd5-a266-9186fe89a40a 60162eb3313e412fbdb405f46832c14a a33d484d48c9436da22da22551266623 - - default default] Failed to send updated snapshot status to volume service.: nova.exception.SnapshotNotFound: Snapshot None could not be found.
Dec  6 01:59:00 np0005548731 nova_compute[232433]: 2025-12-06 06:59:00.736 232437 ERROR nova.virt.libvirt.driver Traceback (most recent call last):
Dec  6 01:59:00 np0005548731 nova_compute[232433]: 2025-12-06 06:59:00.736 232437 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py", line 3846, in volume_snapshot_delete
Dec  6 01:59:00 np0005548731 nova_compute[232433]: 2025-12-06 06:59:00.736 232437 ERROR nova.virt.libvirt.driver     self._volume_snapshot_delete(context, instance, volume_id,
Dec  6 01:59:00 np0005548731 nova_compute[232433]: 2025-12-06 06:59:00.736 232437 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py", line 3676, in _volume_snapshot_delete
Dec  6 01:59:00 np0005548731 nova_compute[232433]: 2025-12-06 06:59:00.736 232437 ERROR nova.virt.libvirt.driver     if delete_info['type'] != 'qcow2':
Dec  6 01:59:00 np0005548731 nova_compute[232433]: 2025-12-06 06:59:00.736 232437 ERROR nova.virt.libvirt.driver KeyError: 'type'
Dec  6 01:59:00 np0005548731 nova_compute[232433]: 2025-12-06 06:59:00.736 232437 ERROR nova.virt.libvirt.driver 
Dec  6 01:59:00 np0005548731 nova_compute[232433]: 2025-12-06 06:59:00.736 232437 ERROR nova.virt.libvirt.driver During handling of the above exception, another exception occurred:
Dec  6 01:59:00 np0005548731 nova_compute[232433]: 2025-12-06 06:59:00.736 232437 ERROR nova.virt.libvirt.driver 
Dec  6 01:59:00 np0005548731 nova_compute[232433]: 2025-12-06 06:59:00.736 232437 ERROR nova.virt.libvirt.driver Traceback (most recent call last):
Dec  6 01:59:00 np0005548731 nova_compute[232433]: 2025-12-06 06:59:00.736 232437 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/volume/cinder.py", line 466, in wrapper
Dec  6 01:59:00 np0005548731 nova_compute[232433]: 2025-12-06 06:59:00.736 232437 ERROR nova.virt.libvirt.driver     res = method(self, ctx, snapshot_id, *args, **kwargs)
Dec  6 01:59:00 np0005548731 nova_compute[232433]: 2025-12-06 06:59:00.736 232437 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/volume/cinder.py", line 761, in update_snapshot_status
Dec  6 01:59:00 np0005548731 nova_compute[232433]: 2025-12-06 06:59:00.736 232437 ERROR nova.virt.libvirt.driver     vs.update_snapshot_status(
Dec  6 01:59:00 np0005548731 nova_compute[232433]: 2025-12-06 06:59:00.736 232437 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/cinderclient/v3/volume_snapshots.py", line 225, in update_snapshot_status
Dec  6 01:59:00 np0005548731 nova_compute[232433]: 2025-12-06 06:59:00.736 232437 ERROR nova.virt.libvirt.driver     return self._action('os-update_snapshot_status',
Dec  6 01:59:00 np0005548731 nova_compute[232433]: 2025-12-06 06:59:00.736 232437 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/cinderclient/v3/volume_snapshots.py", line 221, in _action
Dec  6 01:59:00 np0005548731 nova_compute[232433]: 2025-12-06 06:59:00.736 232437 ERROR nova.virt.libvirt.driver     resp, body = self.api.client.post(url, body=body)
Dec  6 01:59:00 np0005548731 nova_compute[232433]: 2025-12-06 06:59:00.736 232437 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/cinderclient/client.py", line 223, in post
Dec  6 01:59:00 np0005548731 nova_compute[232433]: 2025-12-06 06:59:00.736 232437 ERROR nova.virt.libvirt.driver     return self._cs_request(url, 'POST', **kwargs)
Dec  6 01:59:00 np0005548731 nova_compute[232433]: 2025-12-06 06:59:00.736 232437 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/cinderclient/client.py", line 211, in _cs_request
Dec  6 01:59:00 np0005548731 nova_compute[232433]: 2025-12-06 06:59:00.736 232437 ERROR nova.virt.libvirt.driver     return self.request(url, method, **kwargs)
Dec  6 01:59:00 np0005548731 nova_compute[232433]: 2025-12-06 06:59:00.736 232437 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/cinderclient/client.py", line 197, in request
Dec  6 01:59:00 np0005548731 nova_compute[232433]: 2025-12-06 06:59:00.736 232437 ERROR nova.virt.libvirt.driver     raise exceptions.from_response(resp, body)
Dec  6 01:59:00 np0005548731 nova_compute[232433]: 2025-12-06 06:59:00.736 232437 ERROR nova.virt.libvirt.driver cinderclient.exceptions.NotFound: Snapshot None could not be found. (HTTP 404) (Request-ID: req-a3e591f4-810a-4365-8570-842230387e0b)
Dec  6 01:59:00 np0005548731 nova_compute[232433]: 2025-12-06 06:59:00.736 232437 ERROR nova.virt.libvirt.driver 
Dec  6 01:59:00 np0005548731 nova_compute[232433]: 2025-12-06 06:59:00.736 232437 ERROR nova.virt.libvirt.driver During handling of the above exception, another exception occurred:
Dec  6 01:59:00 np0005548731 nova_compute[232433]: 2025-12-06 06:59:00.736 232437 ERROR nova.virt.libvirt.driver 
Dec  6 01:59:00 np0005548731 nova_compute[232433]: 2025-12-06 06:59:00.736 232437 ERROR nova.virt.libvirt.driver Traceback (most recent call last):
Dec  6 01:59:00 np0005548731 nova_compute[232433]: 2025-12-06 06:59:00.736 232437 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py", line 3412, in _volume_snapshot_update_status
Dec  6 01:59:00 np0005548731 nova_compute[232433]: 2025-12-06 06:59:00.736 232437 ERROR nova.virt.libvirt.driver     self._volume_api.update_snapshot_status(context,
Dec  6 01:59:00 np0005548731 nova_compute[232433]: 2025-12-06 06:59:00.736 232437 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/volume/cinder.py", line 397, in wrapper
Dec  6 01:59:00 np0005548731 nova_compute[232433]: 2025-12-06 06:59:00.736 232437 ERROR nova.virt.libvirt.driver     res = method(self, ctx, *args, **kwargs)
Dec  6 01:59:00 np0005548731 nova_compute[232433]: 2025-12-06 06:59:00.736 232437 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/volume/cinder.py", line 468, in wrapper
Dec  6 01:59:00 np0005548731 nova_compute[232433]: 2025-12-06 06:59:00.736 232437 ERROR nova.virt.libvirt.driver     _reraise(exception.SnapshotNotFound(snapshot_id=snapshot_id))
Dec  6 01:59:00 np0005548731 nova_compute[232433]: 2025-12-06 06:59:00.736 232437 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/volume/cinder.py", line 488, in _reraise
Dec  6 01:59:00 np0005548731 nova_compute[232433]: 2025-12-06 06:59:00.736 232437 ERROR nova.virt.libvirt.driver     raise desired_exc.with_traceback(sys.exc_info()[2])
Dec  6 01:59:00 np0005548731 nova_compute[232433]: 2025-12-06 06:59:00.736 232437 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/volume/cinder.py", line 466, in wrapper
Dec  6 01:59:00 np0005548731 nova_compute[232433]: 2025-12-06 06:59:00.736 232437 ERROR nova.virt.libvirt.driver     res = method(self, ctx, snapshot_id, *args, **kwargs)
Dec  6 01:59:00 np0005548731 nova_compute[232433]: 2025-12-06 06:59:00.736 232437 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/volume/cinder.py", line 761, in update_snapshot_status
Dec  6 01:59:00 np0005548731 nova_compute[232433]: 2025-12-06 06:59:00.736 232437 ERROR nova.virt.libvirt.driver     vs.update_snapshot_status(
Dec  6 01:59:00 np0005548731 nova_compute[232433]: 2025-12-06 06:59:00.736 232437 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/cinderclient/v3/volume_snapshots.py", line 225, in update_snapshot_status
Dec  6 01:59:00 np0005548731 nova_compute[232433]: 2025-12-06 06:59:00.736 232437 ERROR nova.virt.libvirt.driver     return self._action('os-update_snapshot_status',
Dec  6 01:59:00 np0005548731 nova_compute[232433]: 2025-12-06 06:59:00.736 232437 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/cinderclient/v3/volume_snapshots.py", line 221, in _action
Dec  6 01:59:00 np0005548731 nova_compute[232433]: 2025-12-06 06:59:00.736 232437 ERROR nova.virt.libvirt.driver     resp, body = self.api.client.post(url, body=body)
Dec  6 01:59:00 np0005548731 nova_compute[232433]: 2025-12-06 06:59:00.736 232437 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/cinderclient/client.py", line 223, in post
Dec  6 01:59:00 np0005548731 nova_compute[232433]: 2025-12-06 06:59:00.736 232437 ERROR nova.virt.libvirt.driver     return self._cs_request(url, 'POST', **kwargs)
Dec  6 01:59:00 np0005548731 nova_compute[232433]: 2025-12-06 06:59:00.736 232437 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/cinderclient/client.py", line 211, in _cs_request
Dec  6 01:59:00 np0005548731 nova_compute[232433]: 2025-12-06 06:59:00.736 232437 ERROR nova.virt.libvirt.driver     return self.request(url, method, **kwargs)
Dec  6 01:59:00 np0005548731 nova_compute[232433]: 2025-12-06 06:59:00.736 232437 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/cinderclient/client.py", line 197, in request
Dec  6 01:59:00 np0005548731 nova_compute[232433]: 2025-12-06 06:59:00.736 232437 ERROR nova.virt.libvirt.driver     raise exceptions.from_response(resp, body)
Dec  6 01:59:00 np0005548731 nova_compute[232433]: 2025-12-06 06:59:00.736 232437 ERROR nova.virt.libvirt.driver nova.exception.SnapshotNotFound: Snapshot None could not be found.
Dec  6 01:59:00 np0005548731 nova_compute[232433]: 2025-12-06 06:59:00.736 232437 ERROR nova.virt.libvirt.driver #033[00m
Dec  6 01:59:00 np0005548731 nova_compute[232433]: 2025-12-06 06:59:00.738 232437 ERROR oslo_messaging.rpc.server [None req-e80a4008-669f-4cd5-a266-9186fe89a40a 60162eb3313e412fbdb405f46832c14a a33d484d48c9436da22da22551266623 - - default default] Exception during message handling: KeyError: 'type'
Dec  6 01:59:00 np0005548731 nova_compute[232433]: 2025-12-06 06:59:00.738 232437 ERROR oslo_messaging.rpc.server Traceback (most recent call last):
Dec  6 01:59:00 np0005548731 nova_compute[232433]: 2025-12-06 06:59:00.738 232437 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/oslo_messaging/rpc/server.py", line 165, in _process_incoming
Dec  6 01:59:00 np0005548731 nova_compute[232433]: 2025-12-06 06:59:00.738 232437 ERROR oslo_messaging.rpc.server     res = self.dispatcher.dispatch(message)
Dec  6 01:59:00 np0005548731 nova_compute[232433]: 2025-12-06 06:59:00.738 232437 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/oslo_messaging/rpc/dispatcher.py", line 309, in dispatch
Dec  6 01:59:00 np0005548731 nova_compute[232433]: 2025-12-06 06:59:00.738 232437 ERROR oslo_messaging.rpc.server     return self._do_dispatch(endpoint, method, ctxt, args)
Dec  6 01:59:00 np0005548731 nova_compute[232433]: 2025-12-06 06:59:00.738 232437 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/oslo_messaging/rpc/dispatcher.py", line 229, in _do_dispatch
Dec  6 01:59:00 np0005548731 nova_compute[232433]: 2025-12-06 06:59:00.738 232437 ERROR oslo_messaging.rpc.server     result = func(ctxt, **new_args)
Dec  6 01:59:00 np0005548731 nova_compute[232433]: 2025-12-06 06:59:00.738 232437 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/oslo_messaging/rpc/server.py", line 244, in inner
Dec  6 01:59:00 np0005548731 nova_compute[232433]: 2025-12-06 06:59:00.738 232437 ERROR oslo_messaging.rpc.server     return func(*args, **kwargs)
Dec  6 01:59:00 np0005548731 nova_compute[232433]: 2025-12-06 06:59:00.738 232437 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/nova/exception_wrapper.py", line 71, in wrapped
Dec  6 01:59:00 np0005548731 nova_compute[232433]: 2025-12-06 06:59:00.738 232437 ERROR oslo_messaging.rpc.server     _emit_versioned_exception_notification(
Dec  6 01:59:00 np0005548731 nova_compute[232433]: 2025-12-06 06:59:00.738 232437 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 227, in __exit__
Dec  6 01:59:00 np0005548731 nova_compute[232433]: 2025-12-06 06:59:00.738 232437 ERROR oslo_messaging.rpc.server     self.force_reraise()
Dec  6 01:59:00 np0005548731 nova_compute[232433]: 2025-12-06 06:59:00.738 232437 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 200, in force_reraise
Dec  6 01:59:00 np0005548731 nova_compute[232433]: 2025-12-06 06:59:00.738 232437 ERROR oslo_messaging.rpc.server     raise self.value
Dec  6 01:59:00 np0005548731 nova_compute[232433]: 2025-12-06 06:59:00.738 232437 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/nova/exception_wrapper.py", line 63, in wrapped
Dec  6 01:59:00 np0005548731 nova_compute[232433]: 2025-12-06 06:59:00.738 232437 ERROR oslo_messaging.rpc.server     return f(self, context, *args, **kw)
Dec  6 01:59:00 np0005548731 nova_compute[232433]: 2025-12-06 06:59:00.738 232437 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/nova/compute/manager.py", line 4422, in volume_snapshot_delete
Dec  6 01:59:00 np0005548731 nova_compute[232433]: 2025-12-06 06:59:00.738 232437 ERROR oslo_messaging.rpc.server     self.driver.volume_snapshot_delete(context, instance, volume_id,
Dec  6 01:59:00 np0005548731 nova_compute[232433]: 2025-12-06 06:59:00.738 232437 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py", line 3853, in volume_snapshot_delete
Dec  6 01:59:00 np0005548731 nova_compute[232433]: 2025-12-06 06:59:00.738 232437 ERROR oslo_messaging.rpc.server     self._volume_snapshot_update_status(
Dec  6 01:59:00 np0005548731 nova_compute[232433]: 2025-12-06 06:59:00.738 232437 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 227, in __exit__
Dec  6 01:59:00 np0005548731 nova_compute[232433]: 2025-12-06 06:59:00.738 232437 ERROR oslo_messaging.rpc.server     self.force_reraise()
Dec  6 01:59:00 np0005548731 nova_compute[232433]: 2025-12-06 06:59:00.738 232437 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 200, in force_reraise
Dec  6 01:59:00 np0005548731 nova_compute[232433]: 2025-12-06 06:59:00.738 232437 ERROR oslo_messaging.rpc.server     raise self.value
Dec  6 01:59:00 np0005548731 nova_compute[232433]: 2025-12-06 06:59:00.738 232437 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py", line 3846, in volume_snapshot_delete
Dec  6 01:59:00 np0005548731 nova_compute[232433]: 2025-12-06 06:59:00.738 232437 ERROR oslo_messaging.rpc.server     self._volume_snapshot_delete(context, instance, volume_id,
Dec  6 01:59:00 np0005548731 nova_compute[232433]: 2025-12-06 06:59:00.738 232437 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py", line 3676, in _volume_snapshot_delete
Dec  6 01:59:00 np0005548731 nova_compute[232433]: 2025-12-06 06:59:00.738 232437 ERROR oslo_messaging.rpc.server     if delete_info['type'] != 'qcow2':
Dec  6 01:59:00 np0005548731 nova_compute[232433]: 2025-12-06 06:59:00.738 232437 ERROR oslo_messaging.rpc.server KeyError: 'type'
Dec  6 01:59:00 np0005548731 nova_compute[232433]: 2025-12-06 06:59:00.738 232437 ERROR oslo_messaging.rpc.server #033[00m
Dec  6 01:59:00 np0005548731 nova_compute[232433]: 2025-12-06 06:59:00.775 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 01:59:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:59:00.843 143965 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 01:59:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:59:00.844 143965 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 01:59:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:59:00.844 143965 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 01:59:01 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:59:01 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 01:59:01 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:59:01.245 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 01:59:01 np0005548731 nova_compute[232433]: 2025-12-06 06:59:01.356 232437 DEBUG oslo_concurrency.lockutils [None req-ea81ed0f-d6ff-459f-8661-e83f68c7a96e 28495070d0e94414977511fb98ee8288 ac4575fdd57240a9988c7ae425f0c2aa - - default default] Acquiring lock "1dd278c9-e3ac-480b-9a02-2e580da2d211" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 01:59:01 np0005548731 nova_compute[232433]: 2025-12-06 06:59:01.356 232437 DEBUG oslo_concurrency.lockutils [None req-ea81ed0f-d6ff-459f-8661-e83f68c7a96e 28495070d0e94414977511fb98ee8288 ac4575fdd57240a9988c7ae425f0c2aa - - default default] Lock "1dd278c9-e3ac-480b-9a02-2e580da2d211" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 01:59:01 np0005548731 nova_compute[232433]: 2025-12-06 06:59:01.385 232437 INFO nova.compute.manager [None req-ea81ed0f-d6ff-459f-8661-e83f68c7a96e 28495070d0e94414977511fb98ee8288 ac4575fdd57240a9988c7ae425f0c2aa - - default default] [instance: 1dd278c9-e3ac-480b-9a02-2e580da2d211] Detaching volume efe8ed74-896b-4b4f-9562-1c790f1f79e8#033[00m
Dec  6 01:59:01 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:59:01 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:59:01 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:59:01.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:59:01 np0005548731 nova_compute[232433]: 2025-12-06 06:59:01.541 232437 INFO nova.virt.block_device [None req-ea81ed0f-d6ff-459f-8661-e83f68c7a96e 28495070d0e94414977511fb98ee8288 ac4575fdd57240a9988c7ae425f0c2aa - - default default] [instance: 1dd278c9-e3ac-480b-9a02-2e580da2d211] Attempting to driver detach volume efe8ed74-896b-4b4f-9562-1c790f1f79e8 from mountpoint /dev/vdb#033[00m
Dec  6 01:59:01 np0005548731 nova_compute[232433]: 2025-12-06 06:59:01.550 232437 DEBUG nova.virt.libvirt.driver [None req-ea81ed0f-d6ff-459f-8661-e83f68c7a96e 28495070d0e94414977511fb98ee8288 ac4575fdd57240a9988c7ae425f0c2aa - - default default] Attempting to detach device vdb from instance 1dd278c9-e3ac-480b-9a02-2e580da2d211 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487#033[00m
Dec  6 01:59:01 np0005548731 nova_compute[232433]: 2025-12-06 06:59:01.550 232437 DEBUG nova.virt.libvirt.guest [None req-ea81ed0f-d6ff-459f-8661-e83f68c7a96e 28495070d0e94414977511fb98ee8288 ac4575fdd57240a9988c7ae425f0c2aa - - default default] detach device xml: <disk type="network" device="disk">
Dec  6 01:59:01 np0005548731 nova_compute[232433]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Dec  6 01:59:01 np0005548731 nova_compute[232433]:  <source protocol="rbd" name="volumes/volume-efe8ed74-896b-4b4f-9562-1c790f1f79e8">
Dec  6 01:59:01 np0005548731 nova_compute[232433]:    <host name="192.168.122.100" port="6789"/>
Dec  6 01:59:01 np0005548731 nova_compute[232433]:    <host name="192.168.122.102" port="6789"/>
Dec  6 01:59:01 np0005548731 nova_compute[232433]:    <host name="192.168.122.101" port="6789"/>
Dec  6 01:59:01 np0005548731 nova_compute[232433]:  </source>
Dec  6 01:59:01 np0005548731 nova_compute[232433]:  <target dev="vdb" bus="virtio"/>
Dec  6 01:59:01 np0005548731 nova_compute[232433]:  <serial>efe8ed74-896b-4b4f-9562-1c790f1f79e8</serial>
Dec  6 01:59:01 np0005548731 nova_compute[232433]:  <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Dec  6 01:59:01 np0005548731 nova_compute[232433]: </disk>
Dec  6 01:59:01 np0005548731 nova_compute[232433]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Dec  6 01:59:01 np0005548731 nova_compute[232433]: 2025-12-06 06:59:01.559 232437 INFO nova.virt.libvirt.driver [None req-ea81ed0f-d6ff-459f-8661-e83f68c7a96e 28495070d0e94414977511fb98ee8288 ac4575fdd57240a9988c7ae425f0c2aa - - default default] Successfully detached device vdb from instance 1dd278c9-e3ac-480b-9a02-2e580da2d211 from the persistent domain config.#033[00m
Dec  6 01:59:01 np0005548731 nova_compute[232433]: 2025-12-06 06:59:01.559 232437 DEBUG nova.virt.libvirt.driver [None req-ea81ed0f-d6ff-459f-8661-e83f68c7a96e 28495070d0e94414977511fb98ee8288 ac4575fdd57240a9988c7ae425f0c2aa - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance 1dd278c9-e3ac-480b-9a02-2e580da2d211 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523#033[00m
Dec  6 01:59:01 np0005548731 nova_compute[232433]: 2025-12-06 06:59:01.560 232437 DEBUG nova.virt.libvirt.guest [None req-ea81ed0f-d6ff-459f-8661-e83f68c7a96e 28495070d0e94414977511fb98ee8288 ac4575fdd57240a9988c7ae425f0c2aa - - default default] detach device xml: <disk type="network" device="disk">
Dec  6 01:59:01 np0005548731 nova_compute[232433]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Dec  6 01:59:01 np0005548731 nova_compute[232433]:  <source protocol="rbd" name="volumes/volume-efe8ed74-896b-4b4f-9562-1c790f1f79e8">
Dec  6 01:59:01 np0005548731 nova_compute[232433]:    <host name="192.168.122.100" port="6789"/>
Dec  6 01:59:01 np0005548731 nova_compute[232433]:    <host name="192.168.122.102" port="6789"/>
Dec  6 01:59:01 np0005548731 nova_compute[232433]:    <host name="192.168.122.101" port="6789"/>
Dec  6 01:59:01 np0005548731 nova_compute[232433]:  </source>
Dec  6 01:59:01 np0005548731 nova_compute[232433]:  <target dev="vdb" bus="virtio"/>
Dec  6 01:59:01 np0005548731 nova_compute[232433]:  <serial>efe8ed74-896b-4b4f-9562-1c790f1f79e8</serial>
Dec  6 01:59:01 np0005548731 nova_compute[232433]:  <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Dec  6 01:59:01 np0005548731 nova_compute[232433]: </disk>
Dec  6 01:59:01 np0005548731 nova_compute[232433]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Dec  6 01:59:01 np0005548731 nova_compute[232433]: 2025-12-06 06:59:01.608 232437 DEBUG nova.virt.libvirt.driver [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Received event <DeviceRemovedEvent: 1765004341.6082354, 1dd278c9-e3ac-480b-9a02-2e580da2d211 => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370#033[00m
Dec  6 01:59:01 np0005548731 nova_compute[232433]: 2025-12-06 06:59:01.609 232437 DEBUG nova.virt.libvirt.driver [None req-ea81ed0f-d6ff-459f-8661-e83f68c7a96e 28495070d0e94414977511fb98ee8288 ac4575fdd57240a9988c7ae425f0c2aa - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance 1dd278c9-e3ac-480b-9a02-2e580da2d211 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599#033[00m
Dec  6 01:59:01 np0005548731 nova_compute[232433]: 2025-12-06 06:59:01.611 232437 INFO nova.virt.libvirt.driver [None req-ea81ed0f-d6ff-459f-8661-e83f68c7a96e 28495070d0e94414977511fb98ee8288 ac4575fdd57240a9988c7ae425f0c2aa - - default default] Successfully detached device vdb from instance 1dd278c9-e3ac-480b-9a02-2e580da2d211 from the live domain config.#033[00m
Dec  6 01:59:01 np0005548731 nova_compute[232433]: 2025-12-06 06:59:01.896 232437 DEBUG nova.objects.instance [None req-ea81ed0f-d6ff-459f-8661-e83f68c7a96e 28495070d0e94414977511fb98ee8288 ac4575fdd57240a9988c7ae425f0c2aa - - default default] Lazy-loading 'flavor' on Instance uuid 1dd278c9-e3ac-480b-9a02-2e580da2d211 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 01:59:01 np0005548731 nova_compute[232433]: 2025-12-06 06:59:01.932 232437 DEBUG oslo_concurrency.lockutils [None req-ea81ed0f-d6ff-459f-8661-e83f68c7a96e 28495070d0e94414977511fb98ee8288 ac4575fdd57240a9988c7ae425f0c2aa - - default default] Lock "1dd278c9-e3ac-480b-9a02-2e580da2d211" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 0.576s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 01:59:02 np0005548731 ovn_controller[133927]: 2025-12-06T06:59:02Z|00040|binding|INFO|Releasing lport 5902f277-f87f-40e0-a71c-841de11f5ec9 from this chassis (sb_readonly=0)
Dec  6 01:59:02 np0005548731 nova_compute[232433]: 2025-12-06 06:59:02.093 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 01:59:03 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:59:03 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:59:03 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:59:03.248 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:59:03 np0005548731 nova_compute[232433]: 2025-12-06 06:59:03.300 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 01:59:03 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:59:03 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:59:03 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:59:03.392 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:59:04 np0005548731 nova_compute[232433]: 2025-12-06 06:59:04.120 232437 DEBUG oslo_concurrency.lockutils [None req-6af28e06-f0e2-4535-aaf9-6f5f6a761040 4e358e8cf5374d0b818b9d1de2f01848 bccff877c0f34760846113a84f5b0709 - - default default] Acquiring lock "1dd278c9-e3ac-480b-9a02-2e580da2d211" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 01:59:04 np0005548731 nova_compute[232433]: 2025-12-06 06:59:04.121 232437 DEBUG oslo_concurrency.lockutils [None req-6af28e06-f0e2-4535-aaf9-6f5f6a761040 4e358e8cf5374d0b818b9d1de2f01848 bccff877c0f34760846113a84f5b0709 - - default default] Lock "1dd278c9-e3ac-480b-9a02-2e580da2d211" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 01:59:04 np0005548731 nova_compute[232433]: 2025-12-06 06:59:04.121 232437 DEBUG oslo_concurrency.lockutils [None req-6af28e06-f0e2-4535-aaf9-6f5f6a761040 4e358e8cf5374d0b818b9d1de2f01848 bccff877c0f34760846113a84f5b0709 - - default default] Acquiring lock "1dd278c9-e3ac-480b-9a02-2e580da2d211-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 01:59:04 np0005548731 nova_compute[232433]: 2025-12-06 06:59:04.121 232437 DEBUG oslo_concurrency.lockutils [None req-6af28e06-f0e2-4535-aaf9-6f5f6a761040 4e358e8cf5374d0b818b9d1de2f01848 bccff877c0f34760846113a84f5b0709 - - default default] Lock "1dd278c9-e3ac-480b-9a02-2e580da2d211-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 01:59:04 np0005548731 nova_compute[232433]: 2025-12-06 06:59:04.121 232437 DEBUG oslo_concurrency.lockutils [None req-6af28e06-f0e2-4535-aaf9-6f5f6a761040 4e358e8cf5374d0b818b9d1de2f01848 bccff877c0f34760846113a84f5b0709 - - default default] Lock "1dd278c9-e3ac-480b-9a02-2e580da2d211-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 01:59:04 np0005548731 nova_compute[232433]: 2025-12-06 06:59:04.122 232437 INFO nova.compute.manager [None req-6af28e06-f0e2-4535-aaf9-6f5f6a761040 4e358e8cf5374d0b818b9d1de2f01848 bccff877c0f34760846113a84f5b0709 - - default default] [instance: 1dd278c9-e3ac-480b-9a02-2e580da2d211] Terminating instance#033[00m
Dec  6 01:59:04 np0005548731 nova_compute[232433]: 2025-12-06 06:59:04.123 232437 DEBUG nova.compute.manager [None req-6af28e06-f0e2-4535-aaf9-6f5f6a761040 4e358e8cf5374d0b818b9d1de2f01848 bccff877c0f34760846113a84f5b0709 - - default default] [instance: 1dd278c9-e3ac-480b-9a02-2e580da2d211] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec  6 01:59:04 np0005548731 kernel: tap1c43f5f0-b7 (unregistering): left promiscuous mode
Dec  6 01:59:04 np0005548731 NetworkManager[49182]: <info>  [1765004344.1750] device (tap1c43f5f0-b7): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec  6 01:59:04 np0005548731 nova_compute[232433]: 2025-12-06 06:59:04.184 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 01:59:04 np0005548731 ovn_controller[133927]: 2025-12-06T06:59:04Z|00041|binding|INFO|Releasing lport 1c43f5f0-b72d-49f6-9dc5-10367318c26c from this chassis (sb_readonly=0)
Dec  6 01:59:04 np0005548731 ovn_controller[133927]: 2025-12-06T06:59:04Z|00042|binding|INFO|Setting lport 1c43f5f0-b72d-49f6-9dc5-10367318c26c down in Southbound
Dec  6 01:59:04 np0005548731 ovn_controller[133927]: 2025-12-06T06:59:04Z|00043|binding|INFO|Removing iface tap1c43f5f0-b7 ovn-installed in OVS
Dec  6 01:59:04 np0005548731 nova_compute[232433]: 2025-12-06 06:59:04.185 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 01:59:04 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:59:04.189 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:20:e0:e2 10.100.0.4'], port_security=['fa:16:3e:20:e0:e2 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '1dd278c9-e3ac-480b-9a02-2e580da2d211', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-26dc03be-378a-4751-a903-cbe72fe09f14', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'bccff877c0f34760846113a84f5b0709', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'f1b42260-c646-42f8-aae4-ef1d7199eb37', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com', 'neutron:port_fip': '192.168.122.197'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d5f82587-5432-4011-b9b3-695194943504, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], logical_port=1c43f5f0-b72d-49f6-9dc5-10367318c26c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 01:59:04 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:59:04.190 143965 INFO neutron.agent.ovn.metadata.agent [-] Port 1c43f5f0-b72d-49f6-9dc5-10367318c26c in datapath 26dc03be-378a-4751-a903-cbe72fe09f14 unbound from our chassis#033[00m
Dec  6 01:59:04 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:59:04.191 143965 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 26dc03be-378a-4751-a903-cbe72fe09f14, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec  6 01:59:04 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:59:04.192 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[2fff756f-353a-42df-8c30-8f3e16f4c5fa]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 01:59:04 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:59:04.193 143965 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-26dc03be-378a-4751-a903-cbe72fe09f14 namespace which is not needed anymore#033[00m
Dec  6 01:59:04 np0005548731 nova_compute[232433]: 2025-12-06 06:59:04.203 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 01:59:04 np0005548731 systemd[1]: machine-qemu\x2d2\x2dinstance\x2d00000009.scope: Deactivated successfully.
Dec  6 01:59:04 np0005548731 systemd[1]: machine-qemu\x2d2\x2dinstance\x2d00000009.scope: Consumed 16.106s CPU time.
Dec  6 01:59:04 np0005548731 systemd-machined[195355]: Machine qemu-2-instance-00000009 terminated.
Dec  6 01:59:04 np0005548731 neutron-haproxy-ovnmeta-26dc03be-378a-4751-a903-cbe72fe09f14[236851]: [NOTICE]   (236855) : haproxy version is 2.8.14-c23fe91
Dec  6 01:59:04 np0005548731 neutron-haproxy-ovnmeta-26dc03be-378a-4751-a903-cbe72fe09f14[236851]: [NOTICE]   (236855) : path to executable is /usr/sbin/haproxy
Dec  6 01:59:04 np0005548731 neutron-haproxy-ovnmeta-26dc03be-378a-4751-a903-cbe72fe09f14[236851]: [ALERT]    (236855) : Current worker (236857) exited with code 143 (Terminated)
Dec  6 01:59:04 np0005548731 neutron-haproxy-ovnmeta-26dc03be-378a-4751-a903-cbe72fe09f14[236851]: [WARNING]  (236855) : All workers exited. Exiting... (0)
Dec  6 01:59:04 np0005548731 systemd[1]: libpod-d8f9c245c0d315ab5aa6f3f6623b2ced3308843993fad027f58d720b16506ea9.scope: Deactivated successfully.
Dec  6 01:59:04 np0005548731 podman[237795]: 2025-12-06 06:59:04.329803954 +0000 UTC m=+0.045284796 container died d8f9c245c0d315ab5aa6f3f6623b2ced3308843993fad027f58d720b16506ea9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-26dc03be-378a-4751-a903-cbe72fe09f14, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Dec  6 01:59:04 np0005548731 nova_compute[232433]: 2025-12-06 06:59:04.383 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 01:59:04 np0005548731 nova_compute[232433]: 2025-12-06 06:59:04.393 232437 INFO nova.virt.libvirt.driver [-] [instance: 1dd278c9-e3ac-480b-9a02-2e580da2d211] Instance destroyed successfully.#033[00m
Dec  6 01:59:04 np0005548731 nova_compute[232433]: 2025-12-06 06:59:04.395 232437 DEBUG nova.objects.instance [None req-6af28e06-f0e2-4535-aaf9-6f5f6a761040 4e358e8cf5374d0b818b9d1de2f01848 bccff877c0f34760846113a84f5b0709 - - default default] Lazy-loading 'resources' on Instance uuid 1dd278c9-e3ac-480b-9a02-2e580da2d211 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 01:59:04 np0005548731 systemd[1]: var-lib-containers-storage-overlay-a4ba86cf1926c20f6b97a2c3054bb59a6dabb81b0343ad261978c24701b3d3a9-merged.mount: Deactivated successfully.
Dec  6 01:59:04 np0005548731 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-d8f9c245c0d315ab5aa6f3f6623b2ced3308843993fad027f58d720b16506ea9-userdata-shm.mount: Deactivated successfully.
Dec  6 01:59:04 np0005548731 nova_compute[232433]: 2025-12-06 06:59:04.410 232437 DEBUG nova.virt.libvirt.vif [None req-6af28e06-f0e2-4535-aaf9-6f5f6a761040 4e358e8cf5374d0b818b9d1de2f01848 bccff877c0f34760846113a84f5b0709 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T06:58:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-VolumesAssistedSnapshotsTest-server-1565128301',display_name='tempest-VolumesAssistedSnapshotsTest-server-1565128301',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-volumesassistedsnapshotstest-server-1565128301',id=9,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKZen7TNm8dgJLJTr9dDp0A2wz2tSf4zkjaCr9LsbdXZ82SH9igdlnEuLY5ag1VrL+QQmGmWXHiV5ouzMHUdEKpcjVfelrNJZX+qQ6N1KkZQiNXyXDA1UmeqB9bVkSsJIA==',key_name='tempest-keypair-559281048',keypairs=<?>,launch_index=0,launched_at=2025-12-06T06:58:19Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='bccff877c0f34760846113a84f5b0709',ramdisk_id='',reservation_id='r-1pofku7h',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-VolumesAssistedSnapshotsTest-1680544419',owner_user_name='tempest-VolumesAssistedSnapshotsTest-1680544419-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T06:58:19Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='4e358e8cf5374d0b818b9d1de2f01848',uuid=1dd278c9-e3ac-480b-9a02-2e580da2d211,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "1c43f5f0-b72d-49f6-9dc5-10367318c26c", "address": "fa:16:3e:20:e0:e2", "network": {"id": "26dc03be-378a-4751-a903-cbe72fe09f14", "bridge": "br-int", "label": "tempest-VolumesAssistedSnapshotsTest-1268666002-network", "subnets": [{"cidr": 
"10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.197", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bccff877c0f34760846113a84f5b0709", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1c43f5f0-b7", "ovs_interfaceid": "1c43f5f0-b72d-49f6-9dc5-10367318c26c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec  6 01:59:04 np0005548731 nova_compute[232433]: 2025-12-06 06:59:04.411 232437 DEBUG nova.network.os_vif_util [None req-6af28e06-f0e2-4535-aaf9-6f5f6a761040 4e358e8cf5374d0b818b9d1de2f01848 bccff877c0f34760846113a84f5b0709 - - default default] Converting VIF {"id": "1c43f5f0-b72d-49f6-9dc5-10367318c26c", "address": "fa:16:3e:20:e0:e2", "network": {"id": "26dc03be-378a-4751-a903-cbe72fe09f14", "bridge": "br-int", "label": "tempest-VolumesAssistedSnapshotsTest-1268666002-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.197", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bccff877c0f34760846113a84f5b0709", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1c43f5f0-b7", "ovs_interfaceid": "1c43f5f0-b72d-49f6-9dc5-10367318c26c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 01:59:04 np0005548731 nova_compute[232433]: 2025-12-06 06:59:04.412 232437 DEBUG nova.network.os_vif_util [None req-6af28e06-f0e2-4535-aaf9-6f5f6a761040 4e358e8cf5374d0b818b9d1de2f01848 bccff877c0f34760846113a84f5b0709 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:20:e0:e2,bridge_name='br-int',has_traffic_filtering=True,id=1c43f5f0-b72d-49f6-9dc5-10367318c26c,network=Network(26dc03be-378a-4751-a903-cbe72fe09f14),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1c43f5f0-b7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 01:59:04 np0005548731 nova_compute[232433]: 2025-12-06 06:59:04.412 232437 DEBUG os_vif [None req-6af28e06-f0e2-4535-aaf9-6f5f6a761040 4e358e8cf5374d0b818b9d1de2f01848 bccff877c0f34760846113a84f5b0709 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:20:e0:e2,bridge_name='br-int',has_traffic_filtering=True,id=1c43f5f0-b72d-49f6-9dc5-10367318c26c,network=Network(26dc03be-378a-4751-a903-cbe72fe09f14),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1c43f5f0-b7') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec  6 01:59:04 np0005548731 nova_compute[232433]: 2025-12-06 06:59:04.414 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 01:59:04 np0005548731 nova_compute[232433]: 2025-12-06 06:59:04.414 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1c43f5f0-b7, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 01:59:04 np0005548731 podman[237795]: 2025-12-06 06:59:04.414991399 +0000 UTC m=+0.130472241 container cleanup d8f9c245c0d315ab5aa6f3f6623b2ced3308843993fad027f58d720b16506ea9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-26dc03be-378a-4751-a903-cbe72fe09f14, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  6 01:59:04 np0005548731 nova_compute[232433]: 2025-12-06 06:59:04.417 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 01:59:04 np0005548731 nova_compute[232433]: 2025-12-06 06:59:04.420 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  6 01:59:04 np0005548731 nova_compute[232433]: 2025-12-06 06:59:04.422 232437 INFO os_vif [None req-6af28e06-f0e2-4535-aaf9-6f5f6a761040 4e358e8cf5374d0b818b9d1de2f01848 bccff877c0f34760846113a84f5b0709 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:20:e0:e2,bridge_name='br-int',has_traffic_filtering=True,id=1c43f5f0-b72d-49f6-9dc5-10367318c26c,network=Network(26dc03be-378a-4751-a903-cbe72fe09f14),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1c43f5f0-b7')#033[00m
Dec  6 01:59:04 np0005548731 systemd[1]: libpod-conmon-d8f9c245c0d315ab5aa6f3f6623b2ced3308843993fad027f58d720b16506ea9.scope: Deactivated successfully.
Dec  6 01:59:04 np0005548731 nova_compute[232433]: 2025-12-06 06:59:04.500 232437 DEBUG nova.compute.manager [req-12e6fda3-0322-4b99-8e5e-ab5aa473e1e7 req-cb37ba95-08d9-437e-b3a7-dc8e32ffb85d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 1dd278c9-e3ac-480b-9a02-2e580da2d211] Received event network-vif-unplugged-1c43f5f0-b72d-49f6-9dc5-10367318c26c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 01:59:04 np0005548731 nova_compute[232433]: 2025-12-06 06:59:04.500 232437 DEBUG oslo_concurrency.lockutils [req-12e6fda3-0322-4b99-8e5e-ab5aa473e1e7 req-cb37ba95-08d9-437e-b3a7-dc8e32ffb85d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "1dd278c9-e3ac-480b-9a02-2e580da2d211-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 01:59:04 np0005548731 nova_compute[232433]: 2025-12-06 06:59:04.500 232437 DEBUG oslo_concurrency.lockutils [req-12e6fda3-0322-4b99-8e5e-ab5aa473e1e7 req-cb37ba95-08d9-437e-b3a7-dc8e32ffb85d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "1dd278c9-e3ac-480b-9a02-2e580da2d211-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 01:59:04 np0005548731 nova_compute[232433]: 2025-12-06 06:59:04.501 232437 DEBUG oslo_concurrency.lockutils [req-12e6fda3-0322-4b99-8e5e-ab5aa473e1e7 req-cb37ba95-08d9-437e-b3a7-dc8e32ffb85d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "1dd278c9-e3ac-480b-9a02-2e580da2d211-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 01:59:04 np0005548731 nova_compute[232433]: 2025-12-06 06:59:04.501 232437 DEBUG nova.compute.manager [req-12e6fda3-0322-4b99-8e5e-ab5aa473e1e7 req-cb37ba95-08d9-437e-b3a7-dc8e32ffb85d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 1dd278c9-e3ac-480b-9a02-2e580da2d211] No waiting events found dispatching network-vif-unplugged-1c43f5f0-b72d-49f6-9dc5-10367318c26c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 01:59:04 np0005548731 nova_compute[232433]: 2025-12-06 06:59:04.501 232437 DEBUG nova.compute.manager [req-12e6fda3-0322-4b99-8e5e-ab5aa473e1e7 req-cb37ba95-08d9-437e-b3a7-dc8e32ffb85d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 1dd278c9-e3ac-480b-9a02-2e580da2d211] Received event network-vif-unplugged-1c43f5f0-b72d-49f6-9dc5-10367318c26c for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Dec  6 01:59:04 np0005548731 podman[237833]: 2025-12-06 06:59:04.587984512 +0000 UTC m=+0.144909104 container remove d8f9c245c0d315ab5aa6f3f6623b2ced3308843993fad027f58d720b16506ea9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-26dc03be-378a-4751-a903-cbe72fe09f14, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Dec  6 01:59:04 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:59:04.594 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[d6efec17-5c73-42e5-8a53-97bc1432aea7]: (4, ('Sat Dec  6 06:59:04 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-26dc03be-378a-4751-a903-cbe72fe09f14 (d8f9c245c0d315ab5aa6f3f6623b2ced3308843993fad027f58d720b16506ea9)\nd8f9c245c0d315ab5aa6f3f6623b2ced3308843993fad027f58d720b16506ea9\nSat Dec  6 06:59:04 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-26dc03be-378a-4751-a903-cbe72fe09f14 (d8f9c245c0d315ab5aa6f3f6623b2ced3308843993fad027f58d720b16506ea9)\nd8f9c245c0d315ab5aa6f3f6623b2ced3308843993fad027f58d720b16506ea9\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 01:59:04 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:59:04.597 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[33a5152f-2fa1-4655-9651-0bf23a9682ed]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 01:59:04 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:59:04.599 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap26dc03be-30, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 01:59:04 np0005548731 nova_compute[232433]: 2025-12-06 06:59:04.600 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 01:59:04 np0005548731 kernel: tap26dc03be-30: left promiscuous mode
Dec  6 01:59:04 np0005548731 nova_compute[232433]: 2025-12-06 06:59:04.617 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 01:59:04 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:59:04.620 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[d48f4c3e-6ea6-444b-b5c4-03e902847126]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 01:59:04 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:59:04.635 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[d3a3e679-1caa-4cbf-bf1f-23f66c3b9f16]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 01:59:04 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:59:04.638 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[af8d4888-f565-46b0-b620-ae8d47988096]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 01:59:04 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:59:04.656 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[cacbfcef-be50-4058-9e50-fbe08c7b41af]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 456578, 'reachable_time': 27921, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 237866, 'error': None, 'target': 'ovnmeta-26dc03be-378a-4751-a903-cbe72fe09f14', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 01:59:04 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:59:04.659 144079 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-26dc03be-378a-4751-a903-cbe72fe09f14 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Dec  6 01:59:04 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:59:04.659 144079 DEBUG oslo.privsep.daemon [-] privsep: reply[ab79ec40-2aa1-4e29-b840-33ab9fcc79ce]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 01:59:04 np0005548731 systemd[1]: run-netns-ovnmeta\x2d26dc03be\x2d378a\x2d4751\x2da903\x2dcbe72fe09f14.mount: Deactivated successfully.
Dec  6 01:59:05 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 01:59:05 np0005548731 nova_compute[232433]: 2025-12-06 06:59:05.106 232437 INFO nova.virt.libvirt.driver [None req-6af28e06-f0e2-4535-aaf9-6f5f6a761040 4e358e8cf5374d0b818b9d1de2f01848 bccff877c0f34760846113a84f5b0709 - - default default] [instance: 1dd278c9-e3ac-480b-9a02-2e580da2d211] Deleting instance files /var/lib/nova/instances/1dd278c9-e3ac-480b-9a02-2e580da2d211_del#033[00m
Dec  6 01:59:05 np0005548731 nova_compute[232433]: 2025-12-06 06:59:05.108 232437 INFO nova.virt.libvirt.driver [None req-6af28e06-f0e2-4535-aaf9-6f5f6a761040 4e358e8cf5374d0b818b9d1de2f01848 bccff877c0f34760846113a84f5b0709 - - default default] [instance: 1dd278c9-e3ac-480b-9a02-2e580da2d211] Deletion of /var/lib/nova/instances/1dd278c9-e3ac-480b-9a02-2e580da2d211_del complete#033[00m
Dec  6 01:59:05 np0005548731 nova_compute[232433]: 2025-12-06 06:59:05.148 232437 INFO nova.compute.manager [None req-6af28e06-f0e2-4535-aaf9-6f5f6a761040 4e358e8cf5374d0b818b9d1de2f01848 bccff877c0f34760846113a84f5b0709 - - default default] [instance: 1dd278c9-e3ac-480b-9a02-2e580da2d211] Took 1.03 seconds to destroy the instance on the hypervisor.#033[00m
Dec  6 01:59:05 np0005548731 nova_compute[232433]: 2025-12-06 06:59:05.149 232437 DEBUG oslo.service.loopingcall [None req-6af28e06-f0e2-4535-aaf9-6f5f6a761040 4e358e8cf5374d0b818b9d1de2f01848 bccff877c0f34760846113a84f5b0709 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Dec  6 01:59:05 np0005548731 nova_compute[232433]: 2025-12-06 06:59:05.149 232437 DEBUG nova.compute.manager [-] [instance: 1dd278c9-e3ac-480b-9a02-2e580da2d211] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Dec  6 01:59:05 np0005548731 nova_compute[232433]: 2025-12-06 06:59:05.150 232437 DEBUG nova.network.neutron [-] [instance: 1dd278c9-e3ac-480b-9a02-2e580da2d211] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Dec  6 01:59:05 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:59:05 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 01:59:05 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:59:05.251 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 01:59:05 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:59:05 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 01:59:05 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:59:05.394 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 01:59:05 np0005548731 nova_compute[232433]: 2025-12-06 06:59:05.778 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 01:59:06 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 01:59:06 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 01:59:06 np0005548731 nova_compute[232433]: 2025-12-06 06:59:06.073 232437 DEBUG nova.network.neutron [-] [instance: 1dd278c9-e3ac-480b-9a02-2e580da2d211] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 01:59:06 np0005548731 nova_compute[232433]: 2025-12-06 06:59:06.093 232437 INFO nova.compute.manager [-] [instance: 1dd278c9-e3ac-480b-9a02-2e580da2d211] Took 0.94 seconds to deallocate network for instance.#033[00m
Dec  6 01:59:06 np0005548731 nova_compute[232433]: 2025-12-06 06:59:06.147 232437 DEBUG oslo_concurrency.lockutils [None req-6af28e06-f0e2-4535-aaf9-6f5f6a761040 4e358e8cf5374d0b818b9d1de2f01848 bccff877c0f34760846113a84f5b0709 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 01:59:06 np0005548731 nova_compute[232433]: 2025-12-06 06:59:06.148 232437 DEBUG oslo_concurrency.lockutils [None req-6af28e06-f0e2-4535-aaf9-6f5f6a761040 4e358e8cf5374d0b818b9d1de2f01848 bccff877c0f34760846113a84f5b0709 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 01:59:06 np0005548731 nova_compute[232433]: 2025-12-06 06:59:06.199 232437 DEBUG oslo_concurrency.processutils [None req-6af28e06-f0e2-4535-aaf9-6f5f6a761040 4e358e8cf5374d0b818b9d1de2f01848 bccff877c0f34760846113a84f5b0709 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 01:59:06 np0005548731 nova_compute[232433]: 2025-12-06 06:59:06.599 232437 DEBUG nova.compute.manager [req-4405980e-0b58-4c0a-867a-e56e71732372 req-cebf5bae-5d4e-40ff-92f6-fe4acdd16028 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 1dd278c9-e3ac-480b-9a02-2e580da2d211] Received event network-vif-plugged-1c43f5f0-b72d-49f6-9dc5-10367318c26c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 01:59:06 np0005548731 nova_compute[232433]: 2025-12-06 06:59:06.600 232437 DEBUG oslo_concurrency.lockutils [req-4405980e-0b58-4c0a-867a-e56e71732372 req-cebf5bae-5d4e-40ff-92f6-fe4acdd16028 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "1dd278c9-e3ac-480b-9a02-2e580da2d211-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 01:59:06 np0005548731 nova_compute[232433]: 2025-12-06 06:59:06.601 232437 DEBUG oslo_concurrency.lockutils [req-4405980e-0b58-4c0a-867a-e56e71732372 req-cebf5bae-5d4e-40ff-92f6-fe4acdd16028 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "1dd278c9-e3ac-480b-9a02-2e580da2d211-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 01:59:06 np0005548731 nova_compute[232433]: 2025-12-06 06:59:06.601 232437 DEBUG oslo_concurrency.lockutils [req-4405980e-0b58-4c0a-867a-e56e71732372 req-cebf5bae-5d4e-40ff-92f6-fe4acdd16028 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "1dd278c9-e3ac-480b-9a02-2e580da2d211-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 01:59:06 np0005548731 nova_compute[232433]: 2025-12-06 06:59:06.601 232437 DEBUG nova.compute.manager [req-4405980e-0b58-4c0a-867a-e56e71732372 req-cebf5bae-5d4e-40ff-92f6-fe4acdd16028 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 1dd278c9-e3ac-480b-9a02-2e580da2d211] No waiting events found dispatching network-vif-plugged-1c43f5f0-b72d-49f6-9dc5-10367318c26c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 01:59:06 np0005548731 nova_compute[232433]: 2025-12-06 06:59:06.601 232437 WARNING nova.compute.manager [req-4405980e-0b58-4c0a-867a-e56e71732372 req-cebf5bae-5d4e-40ff-92f6-fe4acdd16028 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 1dd278c9-e3ac-480b-9a02-2e580da2d211] Received unexpected event network-vif-plugged-1c43f5f0-b72d-49f6-9dc5-10367318c26c for instance with vm_state deleted and task_state None.#033[00m
Dec  6 01:59:06 np0005548731 nova_compute[232433]: 2025-12-06 06:59:06.602 232437 DEBUG nova.compute.manager [req-4405980e-0b58-4c0a-867a-e56e71732372 req-cebf5bae-5d4e-40ff-92f6-fe4acdd16028 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 1dd278c9-e3ac-480b-9a02-2e580da2d211] Received event network-vif-deleted-1c43f5f0-b72d-49f6-9dc5-10367318c26c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 01:59:06 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 01:59:06 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3731078009' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 01:59:06 np0005548731 nova_compute[232433]: 2025-12-06 06:59:06.625 232437 DEBUG oslo_concurrency.processutils [None req-6af28e06-f0e2-4535-aaf9-6f5f6a761040 4e358e8cf5374d0b818b9d1de2f01848 bccff877c0f34760846113a84f5b0709 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.426s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 01:59:06 np0005548731 nova_compute[232433]: 2025-12-06 06:59:06.631 232437 DEBUG nova.compute.provider_tree [None req-6af28e06-f0e2-4535-aaf9-6f5f6a761040 4e358e8cf5374d0b818b9d1de2f01848 bccff877c0f34760846113a84f5b0709 - - default default] Inventory has not changed in ProviderTree for provider: 6d00757a-082f-486d-ae84-869a2ba2e6e7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  6 01:59:06 np0005548731 nova_compute[232433]: 2025-12-06 06:59:06.646 232437 DEBUG nova.scheduler.client.report [None req-6af28e06-f0e2-4535-aaf9-6f5f6a761040 4e358e8cf5374d0b818b9d1de2f01848 bccff877c0f34760846113a84f5b0709 - - default default] Inventory has not changed for provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  6 01:59:06 np0005548731 nova_compute[232433]: 2025-12-06 06:59:06.665 232437 DEBUG oslo_concurrency.lockutils [None req-6af28e06-f0e2-4535-aaf9-6f5f6a761040 4e358e8cf5374d0b818b9d1de2f01848 bccff877c0f34760846113a84f5b0709 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.517s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 01:59:06 np0005548731 nova_compute[232433]: 2025-12-06 06:59:06.690 232437 INFO nova.scheduler.client.report [None req-6af28e06-f0e2-4535-aaf9-6f5f6a761040 4e358e8cf5374d0b818b9d1de2f01848 bccff877c0f34760846113a84f5b0709 - - default default] Deleted allocations for instance 1dd278c9-e3ac-480b-9a02-2e580da2d211#033[00m
Dec  6 01:59:06 np0005548731 nova_compute[232433]: 2025-12-06 06:59:06.761 232437 DEBUG oslo_concurrency.lockutils [None req-6af28e06-f0e2-4535-aaf9-6f5f6a761040 4e358e8cf5374d0b818b9d1de2f01848 bccff877c0f34760846113a84f5b0709 - - default default] Lock "1dd278c9-e3ac-480b-9a02-2e580da2d211" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.641s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 01:59:07 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:59:07 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 01:59:07 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:59:07.253 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 01:59:07 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:59:07 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:59:07 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:59:07.396 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:59:09 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:59:09 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 01:59:09 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:59:09.257 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 01:59:09 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:59:09 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:59:09 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:59:09.399 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:59:09 np0005548731 nova_compute[232433]: 2025-12-06 06:59:09.418 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 01:59:10 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 01:59:10 np0005548731 nova_compute[232433]: 2025-12-06 06:59:10.752 232437 DEBUG nova.compute.manager [None req-e771028f-2a3b-412e-a830-2fbefb2a7f0c 41a5abb93e70428cb0414a3d26c3ee84 35f18a3faa574f23a2b399946979fa9d - - default default] [instance: d32579c4-62c8-41ac-9d01-b617cc7992ed] Stashing vm_state: active _prep_resize /usr/lib/python3.9/site-packages/nova/compute/manager.py:5560#033[00m
Dec  6 01:59:10 np0005548731 nova_compute[232433]: 2025-12-06 06:59:10.781 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 01:59:10 np0005548731 nova_compute[232433]: 2025-12-06 06:59:10.848 232437 DEBUG oslo_concurrency.lockutils [None req-e771028f-2a3b-412e-a830-2fbefb2a7f0c 41a5abb93e70428cb0414a3d26c3ee84 35f18a3faa574f23a2b399946979fa9d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.resize_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 01:59:10 np0005548731 nova_compute[232433]: 2025-12-06 06:59:10.849 232437 DEBUG oslo_concurrency.lockutils [None req-e771028f-2a3b-412e-a830-2fbefb2a7f0c 41a5abb93e70428cb0414a3d26c3ee84 35f18a3faa574f23a2b399946979fa9d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.resize_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 01:59:10 np0005548731 nova_compute[232433]: 2025-12-06 06:59:10.875 232437 DEBUG nova.objects.instance [None req-e771028f-2a3b-412e-a830-2fbefb2a7f0c 41a5abb93e70428cb0414a3d26c3ee84 35f18a3faa574f23a2b399946979fa9d - - default default] Lazy-loading 'pci_requests' on Instance uuid d32579c4-62c8-41ac-9d01-b617cc7992ed obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 01:59:10 np0005548731 nova_compute[232433]: 2025-12-06 06:59:10.895 232437 DEBUG nova.virt.hardware [None req-e771028f-2a3b-412e-a830-2fbefb2a7f0c 41a5abb93e70428cb0414a3d26c3ee84 35f18a3faa574f23a2b399946979fa9d - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Dec  6 01:59:10 np0005548731 nova_compute[232433]: 2025-12-06 06:59:10.896 232437 INFO nova.compute.claims [None req-e771028f-2a3b-412e-a830-2fbefb2a7f0c 41a5abb93e70428cb0414a3d26c3ee84 35f18a3faa574f23a2b399946979fa9d - - default default] [instance: d32579c4-62c8-41ac-9d01-b617cc7992ed] Claim successful on node compute-2.ctlplane.example.com#033[00m
Dec  6 01:59:10 np0005548731 nova_compute[232433]: 2025-12-06 06:59:10.897 232437 DEBUG nova.objects.instance [None req-e771028f-2a3b-412e-a830-2fbefb2a7f0c 41a5abb93e70428cb0414a3d26c3ee84 35f18a3faa574f23a2b399946979fa9d - - default default] Lazy-loading 'resources' on Instance uuid d32579c4-62c8-41ac-9d01-b617cc7992ed obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 01:59:10 np0005548731 nova_compute[232433]: 2025-12-06 06:59:10.915 232437 DEBUG nova.objects.instance [None req-e771028f-2a3b-412e-a830-2fbefb2a7f0c 41a5abb93e70428cb0414a3d26c3ee84 35f18a3faa574f23a2b399946979fa9d - - default default] Lazy-loading 'numa_topology' on Instance uuid d32579c4-62c8-41ac-9d01-b617cc7992ed obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 01:59:10 np0005548731 nova_compute[232433]: 2025-12-06 06:59:10.928 232437 DEBUG nova.objects.instance [None req-e771028f-2a3b-412e-a830-2fbefb2a7f0c 41a5abb93e70428cb0414a3d26c3ee84 35f18a3faa574f23a2b399946979fa9d - - default default] Lazy-loading 'pci_devices' on Instance uuid d32579c4-62c8-41ac-9d01-b617cc7992ed obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 01:59:10 np0005548731 nova_compute[232433]: 2025-12-06 06:59:10.975 232437 INFO nova.compute.resource_tracker [None req-e771028f-2a3b-412e-a830-2fbefb2a7f0c 41a5abb93e70428cb0414a3d26c3ee84 35f18a3faa574f23a2b399946979fa9d - - default default] [instance: d32579c4-62c8-41ac-9d01-b617cc7992ed] Updating resource usage from migration c599b82a-ac1d-47a0-919b-16ec8a9eb01d#033[00m
Dec  6 01:59:10 np0005548731 nova_compute[232433]: 2025-12-06 06:59:10.976 232437 DEBUG nova.compute.resource_tracker [None req-e771028f-2a3b-412e-a830-2fbefb2a7f0c 41a5abb93e70428cb0414a3d26c3ee84 35f18a3faa574f23a2b399946979fa9d - - default default] [instance: d32579c4-62c8-41ac-9d01-b617cc7992ed] Starting to track incoming migration c599b82a-ac1d-47a0-919b-16ec8a9eb01d with flavor 25848a18-11d9-4f11-80b5-5d005675c76d _update_usage_from_migration /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1431#033[00m
Dec  6 01:59:11 np0005548731 nova_compute[232433]: 2025-12-06 06:59:11.035 232437 DEBUG oslo_concurrency.processutils [None req-e771028f-2a3b-412e-a830-2fbefb2a7f0c 41a5abb93e70428cb0414a3d26c3ee84 35f18a3faa574f23a2b399946979fa9d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 01:59:11 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:59:11 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:59:11 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:59:11.259 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:59:11 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:59:11 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:59:11 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:59:11.401 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:59:11 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 01:59:11 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1905717733' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 01:59:11 np0005548731 nova_compute[232433]: 2025-12-06 06:59:11.513 232437 DEBUG oslo_concurrency.processutils [None req-e771028f-2a3b-412e-a830-2fbefb2a7f0c 41a5abb93e70428cb0414a3d26c3ee84 35f18a3faa574f23a2b399946979fa9d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.478s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 01:59:11 np0005548731 nova_compute[232433]: 2025-12-06 06:59:11.519 232437 DEBUG nova.compute.provider_tree [None req-e771028f-2a3b-412e-a830-2fbefb2a7f0c 41a5abb93e70428cb0414a3d26c3ee84 35f18a3faa574f23a2b399946979fa9d - - default default] Inventory has not changed in ProviderTree for provider: 6d00757a-082f-486d-ae84-869a2ba2e6e7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  6 01:59:11 np0005548731 nova_compute[232433]: 2025-12-06 06:59:11.686 232437 DEBUG nova.scheduler.client.report [None req-e771028f-2a3b-412e-a830-2fbefb2a7f0c 41a5abb93e70428cb0414a3d26c3ee84 35f18a3faa574f23a2b399946979fa9d - - default default] Inventory has not changed for provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  6 01:59:11 np0005548731 nova_compute[232433]: 2025-12-06 06:59:11.809 232437 DEBUG oslo_concurrency.lockutils [None req-e771028f-2a3b-412e-a830-2fbefb2a7f0c 41a5abb93e70428cb0414a3d26c3ee84 35f18a3faa574f23a2b399946979fa9d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.resize_claim" :: held 0.960s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 01:59:11 np0005548731 nova_compute[232433]: 2025-12-06 06:59:11.809 232437 INFO nova.compute.manager [None req-e771028f-2a3b-412e-a830-2fbefb2a7f0c 41a5abb93e70428cb0414a3d26c3ee84 35f18a3faa574f23a2b399946979fa9d - - default default] [instance: d32579c4-62c8-41ac-9d01-b617cc7992ed] Migrating#033[00m
Dec  6 01:59:11 np0005548731 nova_compute[232433]: 2025-12-06 06:59:11.809 232437 DEBUG oslo_concurrency.lockutils [None req-e771028f-2a3b-412e-a830-2fbefb2a7f0c 41a5abb93e70428cb0414a3d26c3ee84 35f18a3faa574f23a2b399946979fa9d - - default default] Acquiring lock "compute-rpcapi-router" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 01:59:11 np0005548731 nova_compute[232433]: 2025-12-06 06:59:11.810 232437 DEBUG oslo_concurrency.lockutils [None req-e771028f-2a3b-412e-a830-2fbefb2a7f0c 41a5abb93e70428cb0414a3d26c3ee84 35f18a3faa574f23a2b399946979fa9d - - default default] Acquired lock "compute-rpcapi-router" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 01:59:11 np0005548731 nova_compute[232433]: 2025-12-06 06:59:11.931 232437 INFO nova.compute.rpcapi [None req-e771028f-2a3b-412e-a830-2fbefb2a7f0c 41a5abb93e70428cb0414a3d26c3ee84 35f18a3faa574f23a2b399946979fa9d - - default default] Automatically selected compute RPC version 6.2 from minimum service version 66#033[00m
Dec  6 01:59:11 np0005548731 nova_compute[232433]: 2025-12-06 06:59:11.932 232437 DEBUG oslo_concurrency.lockutils [None req-e771028f-2a3b-412e-a830-2fbefb2a7f0c 41a5abb93e70428cb0414a3d26c3ee84 35f18a3faa574f23a2b399946979fa9d - - default default] Releasing lock "compute-rpcapi-router" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 01:59:13 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:59:13 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:59:13 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:59:13.262 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:59:13 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:59:13 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:59:13 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:59:13.404 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:59:14 np0005548731 nova_compute[232433]: 2025-12-06 06:59:14.420 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 01:59:15 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 01:59:15 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:59:15 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:59:15 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:59:15.265 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:59:15 np0005548731 systemd[1]: Created slice User Slice of UID 42436.
Dec  6 01:59:15 np0005548731 systemd[1]: Starting User Runtime Directory /run/user/42436...
Dec  6 01:59:15 np0005548731 systemd-logind[794]: New session 52 of user nova.
Dec  6 01:59:15 np0005548731 systemd[1]: Finished User Runtime Directory /run/user/42436.
Dec  6 01:59:15 np0005548731 systemd[1]: Starting User Manager for UID 42436...
Dec  6 01:59:15 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:59:15 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 01:59:15 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:59:15.405 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 01:59:15 np0005548731 nova_compute[232433]: 2025-12-06 06:59:15.424 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 01:59:15 np0005548731 systemd[238021]: Queued start job for default target Main User Target.
Dec  6 01:59:15 np0005548731 systemd[238021]: Created slice User Application Slice.
Dec  6 01:59:15 np0005548731 systemd[238021]: Started Mark boot as successful after the user session has run 2 minutes.
Dec  6 01:59:15 np0005548731 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec  6 01:59:15 np0005548731 systemd[238021]: Started Daily Cleanup of User's Temporary Directories.
Dec  6 01:59:15 np0005548731 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec  6 01:59:15 np0005548731 systemd[238021]: Reached target Paths.
Dec  6 01:59:15 np0005548731 systemd[238021]: Reached target Timers.
Dec  6 01:59:15 np0005548731 systemd[238021]: Starting D-Bus User Message Bus Socket...
Dec  6 01:59:15 np0005548731 systemd[238021]: Starting Create User's Volatile Files and Directories...
Dec  6 01:59:15 np0005548731 nova_compute[232433]: 2025-12-06 06:59:15.622 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 01:59:15 np0005548731 systemd[238021]: Finished Create User's Volatile Files and Directories.
Dec  6 01:59:15 np0005548731 systemd[238021]: Listening on D-Bus User Message Bus Socket.
Dec  6 01:59:15 np0005548731 systemd[238021]: Reached target Sockets.
Dec  6 01:59:15 np0005548731 systemd[238021]: Reached target Basic System.
Dec  6 01:59:15 np0005548731 systemd[1]: Started User Manager for UID 42436.
Dec  6 01:59:15 np0005548731 systemd[238021]: Reached target Main User Target.
Dec  6 01:59:15 np0005548731 systemd[238021]: Startup finished in 229ms.
Dec  6 01:59:15 np0005548731 systemd[1]: Started Session 52 of User nova.
Dec  6 01:59:15 np0005548731 systemd[1]: session-52.scope: Deactivated successfully.
Dec  6 01:59:15 np0005548731 systemd-logind[794]: Session 52 logged out. Waiting for processes to exit.
Dec  6 01:59:15 np0005548731 systemd-logind[794]: Removed session 52.
Dec  6 01:59:15 np0005548731 nova_compute[232433]: 2025-12-06 06:59:15.783 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 01:59:15 np0005548731 systemd-logind[794]: New session 54 of user nova.
Dec  6 01:59:15 np0005548731 systemd[1]: Started Session 54 of User nova.
Dec  6 01:59:15 np0005548731 systemd[1]: session-54.scope: Deactivated successfully.
Dec  6 01:59:15 np0005548731 systemd-logind[794]: Session 54 logged out. Waiting for processes to exit.
Dec  6 01:59:15 np0005548731 systemd-logind[794]: Removed session 54.
Dec  6 01:59:17 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:59:17 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:59:17 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:59:17.269 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:59:17 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:59:17 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 01:59:17 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:59:17.407 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 01:59:19 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:59:19 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:59:19 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:59:19.277 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:59:19 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:59:19.335 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=5, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:ec:b3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '32:72:e7:89:e0:7d'}, ipsec=False) old=SB_Global(nb_cfg=4) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 01:59:19 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:59:19.336 143965 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec  6 01:59:19 np0005548731 nova_compute[232433]: 2025-12-06 06:59:19.336 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 01:59:19 np0005548731 nova_compute[232433]: 2025-12-06 06:59:19.391 232437 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765004344.3907762, 1dd278c9-e3ac-480b-9a02-2e580da2d211 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 01:59:19 np0005548731 nova_compute[232433]: 2025-12-06 06:59:19.391 232437 INFO nova.compute.manager [-] [instance: 1dd278c9-e3ac-480b-9a02-2e580da2d211] VM Stopped (Lifecycle Event)#033[00m
Dec  6 01:59:19 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:59:19 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:59:19 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:59:19.409 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:59:19 np0005548731 nova_compute[232433]: 2025-12-06 06:59:19.421 232437 DEBUG nova.compute.manager [None req-dbec06c2-c991-4b0b-9ffc-2182f4c6adda - - - - - -] [instance: 1dd278c9-e3ac-480b-9a02-2e580da2d211] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 01:59:19 np0005548731 nova_compute[232433]: 2025-12-06 06:59:19.421 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 01:59:20 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 01:59:20 np0005548731 nova_compute[232433]: 2025-12-06 06:59:20.822 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 01:59:21 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:59:21 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:59:21 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:59:21.280 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:59:21 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 06:59:21.338 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=9f96b960-b4f2-40bd-ae99-08121f5e8b78, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '5'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 01:59:21 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:59:21 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 01:59:21 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:59:21.411 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 01:59:23 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:59:23 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:59:23 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:59:23.283 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:59:23 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:59:23 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:59:23 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:59:23.413 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:59:24 np0005548731 nova_compute[232433]: 2025-12-06 06:59:24.425 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 01:59:24 np0005548731 ceph-mon[77458]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec  6 01:59:24 np0005548731 ceph-mon[77458]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1800.0 total, 600.0 interval#012Cumulative writes: 4267 writes, 23K keys, 4267 commit groups, 1.0 writes per commit group, ingest: 0.05 GB, 0.03 MB/s#012Cumulative WAL: 4267 writes, 4267 syncs, 1.00 writes per sync, written: 0.05 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1482 writes, 7366 keys, 1482 commit groups, 1.0 writes per commit group, ingest: 15.52 MB, 0.03 MB/s#012Interval WAL: 1482 writes, 1482 syncs, 1.00 writes per sync, written: 0.02 GB, 0.03 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     14.9      1.98              0.08        13    0.152       0      0       0.0       0.0#012  L6      1/0    8.07 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   3.7     51.9     43.3      2.50              0.31        12    0.208     60K   6383       0.0       0.0#012 Sum      1/0    8.07 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   4.7     29.0     30.8      4.48              0.40        25    0.179     60K   6383       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   6.0     36.6     37.1      1.76              0.19        12    0.147     32K   3097       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   0.0     51.9     43.3      2.50              0.31        12    0.208     60K   6383       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     14.9      1.98              0.08        12    0.165       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.6      0.00              0.00         1    0.003       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1800.0 total, 600.0 interval#012Flush(GB): cumulative 0.029, interval 0.011#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.13 GB write, 0.08 MB/s write, 0.13 GB read, 0.07 MB/s read, 4.5 seconds#012Interval compaction: 0.06 GB write, 0.11 MB/s write, 0.06 GB read, 0.11 MB/s read, 1.8 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x5619171151f0#2 capacity: 304.00 MB usage: 10.67 MB table_size: 0 occupancy: 18446744073709551615 collections: 4 last_copies: 0 last_secs: 0.000108 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(584,10.18 MB,3.34949%) FilterBlock(25,172.86 KB,0.0555289%) IndexBlock(25,324.83 KB,0.104347%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Dec  6 01:59:25 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 01:59:25 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:59:25 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000025s ======
Dec  6 01:59:25 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:59:25.286 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Dec  6 01:59:25 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:59:25 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 01:59:25 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:59:25.415 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 01:59:25 np0005548731 nova_compute[232433]: 2025-12-06 06:59:25.825 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 01:59:25 np0005548731 podman[238050]: 2025-12-06 06:59:25.905449977 +0000 UTC m=+0.059833662 container health_status 26c2ce9bdb83c6db5ccad7f5b2a7cb4e9354ab0235ad2f942ea42c8feda2142b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  6 01:59:25 np0005548731 podman[238052]: 2025-12-06 06:59:25.907090868 +0000 UTC m=+0.058299805 container health_status ae4fd08cdec31d6ae2a6b91ffff7deb6ca5a8375cb52ee4b685cc6f25796824e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.vendor=CentOS)
Dec  6 01:59:25 np0005548731 podman[238051]: 2025-12-06 06:59:25.938870729 +0000 UTC m=+0.093183642 container health_status a118c3becf56cfbcae208065771ebf10a65162bb7b96389971293e534823fb78 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Dec  6 01:59:25 np0005548731 systemd[1]: Stopping User Manager for UID 42436...
Dec  6 01:59:25 np0005548731 systemd[238021]: Activating special unit Exit the Session...
Dec  6 01:59:25 np0005548731 systemd[238021]: Stopped target Main User Target.
Dec  6 01:59:25 np0005548731 systemd[238021]: Stopped target Basic System.
Dec  6 01:59:25 np0005548731 systemd[238021]: Stopped target Paths.
Dec  6 01:59:25 np0005548731 systemd[238021]: Stopped target Sockets.
Dec  6 01:59:25 np0005548731 systemd[238021]: Stopped target Timers.
Dec  6 01:59:25 np0005548731 systemd[238021]: Stopped Mark boot as successful after the user session has run 2 minutes.
Dec  6 01:59:25 np0005548731 systemd[238021]: Stopped Daily Cleanup of User's Temporary Directories.
Dec  6 01:59:25 np0005548731 systemd[238021]: Closed D-Bus User Message Bus Socket.
Dec  6 01:59:25 np0005548731 systemd[238021]: Stopped Create User's Volatile Files and Directories.
Dec  6 01:59:25 np0005548731 systemd[238021]: Removed slice User Application Slice.
Dec  6 01:59:25 np0005548731 systemd[238021]: Reached target Shutdown.
Dec  6 01:59:25 np0005548731 systemd[238021]: Finished Exit the Session.
Dec  6 01:59:25 np0005548731 systemd[238021]: Reached target Exit the Session.
Dec  6 01:59:25 np0005548731 systemd[1]: user@42436.service: Deactivated successfully.
Dec  6 01:59:25 np0005548731 systemd[1]: Stopped User Manager for UID 42436.
Dec  6 01:59:25 np0005548731 systemd[1]: Stopping User Runtime Directory /run/user/42436...
Dec  6 01:59:25 np0005548731 systemd[1]: run-user-42436.mount: Deactivated successfully.
Dec  6 01:59:25 np0005548731 systemd[1]: user-runtime-dir@42436.service: Deactivated successfully.
Dec  6 01:59:25 np0005548731 systemd[1]: Stopped User Runtime Directory /run/user/42436.
Dec  6 01:59:25 np0005548731 systemd[1]: Removed slice User Slice of UID 42436.
Dec  6 01:59:27 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:59:27 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 01:59:27 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:59:27.289 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 01:59:27 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:59:27 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:59:27 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:59:27.417 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:59:29 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:59:29 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:59:29 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:59:29.294 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:59:29 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:59:29 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 01:59:29 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:59:29.420 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 01:59:29 np0005548731 nova_compute[232433]: 2025-12-06 06:59:29.427 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 01:59:29 np0005548731 nova_compute[232433]: 2025-12-06 06:59:29.477 232437 DEBUG oslo_concurrency.lockutils [None req-e771028f-2a3b-412e-a830-2fbefb2a7f0c 41a5abb93e70428cb0414a3d26c3ee84 35f18a3faa574f23a2b399946979fa9d - - default default] Acquiring lock "refresh_cache-d32579c4-62c8-41ac-9d01-b617cc7992ed" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 01:59:29 np0005548731 nova_compute[232433]: 2025-12-06 06:59:29.478 232437 DEBUG oslo_concurrency.lockutils [None req-e771028f-2a3b-412e-a830-2fbefb2a7f0c 41a5abb93e70428cb0414a3d26c3ee84 35f18a3faa574f23a2b399946979fa9d - - default default] Acquired lock "refresh_cache-d32579c4-62c8-41ac-9d01-b617cc7992ed" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 01:59:29 np0005548731 nova_compute[232433]: 2025-12-06 06:59:29.478 232437 DEBUG nova.network.neutron [None req-e771028f-2a3b-412e-a830-2fbefb2a7f0c 41a5abb93e70428cb0414a3d26c3ee84 35f18a3faa574f23a2b399946979fa9d - - default default] [instance: d32579c4-62c8-41ac-9d01-b617cc7992ed] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec  6 01:59:29 np0005548731 nova_compute[232433]: 2025-12-06 06:59:29.666 232437 DEBUG nova.network.neutron [None req-e771028f-2a3b-412e-a830-2fbefb2a7f0c 41a5abb93e70428cb0414a3d26c3ee84 35f18a3faa574f23a2b399946979fa9d - - default default] [instance: d32579c4-62c8-41ac-9d01-b617cc7992ed] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec  6 01:59:30 np0005548731 nova_compute[232433]: 2025-12-06 06:59:30.099 232437 DEBUG nova.network.neutron [None req-e771028f-2a3b-412e-a830-2fbefb2a7f0c 41a5abb93e70428cb0414a3d26c3ee84 35f18a3faa574f23a2b399946979fa9d - - default default] [instance: d32579c4-62c8-41ac-9d01-b617cc7992ed] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 01:59:30 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 01:59:30 np0005548731 nova_compute[232433]: 2025-12-06 06:59:30.117 232437 DEBUG oslo_concurrency.lockutils [None req-e771028f-2a3b-412e-a830-2fbefb2a7f0c 41a5abb93e70428cb0414a3d26c3ee84 35f18a3faa574f23a2b399946979fa9d - - default default] Releasing lock "refresh_cache-d32579c4-62c8-41ac-9d01-b617cc7992ed" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 01:59:30 np0005548731 nova_compute[232433]: 2025-12-06 06:59:30.218 232437 DEBUG nova.virt.libvirt.driver [None req-e771028f-2a3b-412e-a830-2fbefb2a7f0c 41a5abb93e70428cb0414a3d26c3ee84 35f18a3faa574f23a2b399946979fa9d - - default default] [instance: d32579c4-62c8-41ac-9d01-b617cc7992ed] Starting finish_migration finish_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11698#033[00m
Dec  6 01:59:30 np0005548731 nova_compute[232433]: 2025-12-06 06:59:30.220 232437 DEBUG nova.virt.libvirt.driver [None req-e771028f-2a3b-412e-a830-2fbefb2a7f0c 41a5abb93e70428cb0414a3d26c3ee84 35f18a3faa574f23a2b399946979fa9d - - default default] [instance: d32579c4-62c8-41ac-9d01-b617cc7992ed] Instance directory exists: not creating _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4719#033[00m
Dec  6 01:59:30 np0005548731 nova_compute[232433]: 2025-12-06 06:59:30.220 232437 INFO nova.virt.libvirt.driver [None req-e771028f-2a3b-412e-a830-2fbefb2a7f0c 41a5abb93e70428cb0414a3d26c3ee84 35f18a3faa574f23a2b399946979fa9d - - default default] [instance: d32579c4-62c8-41ac-9d01-b617cc7992ed] Creating image(s)#033[00m
Dec  6 01:59:30 np0005548731 nova_compute[232433]: 2025-12-06 06:59:30.264 232437 DEBUG nova.storage.rbd_utils [None req-e771028f-2a3b-412e-a830-2fbefb2a7f0c 41a5abb93e70428cb0414a3d26c3ee84 35f18a3faa574f23a2b399946979fa9d - - default default] creating snapshot(nova-resize) on rbd image(d32579c4-62c8-41ac-9d01-b617cc7992ed_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Dec  6 01:59:30 np0005548731 nova_compute[232433]: 2025-12-06 06:59:30.828 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 01:59:30 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e157 e157: 3 total, 3 up, 3 in
Dec  6 01:59:30 np0005548731 nova_compute[232433]: 2025-12-06 06:59:30.963 232437 DEBUG nova.objects.instance [None req-e771028f-2a3b-412e-a830-2fbefb2a7f0c 41a5abb93e70428cb0414a3d26c3ee84 35f18a3faa574f23a2b399946979fa9d - - default default] Lazy-loading 'trusted_certs' on Instance uuid d32579c4-62c8-41ac-9d01-b617cc7992ed obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 01:59:31 np0005548731 nova_compute[232433]: 2025-12-06 06:59:31.100 232437 DEBUG nova.virt.libvirt.driver [None req-e771028f-2a3b-412e-a830-2fbefb2a7f0c 41a5abb93e70428cb0414a3d26c3ee84 35f18a3faa574f23a2b399946979fa9d - - default default] [instance: d32579c4-62c8-41ac-9d01-b617cc7992ed] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859#033[00m
Dec  6 01:59:31 np0005548731 nova_compute[232433]: 2025-12-06 06:59:31.100 232437 DEBUG nova.virt.libvirt.driver [None req-e771028f-2a3b-412e-a830-2fbefb2a7f0c 41a5abb93e70428cb0414a3d26c3ee84 35f18a3faa574f23a2b399946979fa9d - - default default] [instance: d32579c4-62c8-41ac-9d01-b617cc7992ed] Ensure instance console log exists: /var/lib/nova/instances/d32579c4-62c8-41ac-9d01-b617cc7992ed/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec  6 01:59:31 np0005548731 nova_compute[232433]: 2025-12-06 06:59:31.101 232437 DEBUG oslo_concurrency.lockutils [None req-e771028f-2a3b-412e-a830-2fbefb2a7f0c 41a5abb93e70428cb0414a3d26c3ee84 35f18a3faa574f23a2b399946979fa9d - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 01:59:31 np0005548731 nova_compute[232433]: 2025-12-06 06:59:31.101 232437 DEBUG oslo_concurrency.lockutils [None req-e771028f-2a3b-412e-a830-2fbefb2a7f0c 41a5abb93e70428cb0414a3d26c3ee84 35f18a3faa574f23a2b399946979fa9d - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 01:59:31 np0005548731 nova_compute[232433]: 2025-12-06 06:59:31.101 232437 DEBUG oslo_concurrency.lockutils [None req-e771028f-2a3b-412e-a830-2fbefb2a7f0c 41a5abb93e70428cb0414a3d26c3ee84 35f18a3faa574f23a2b399946979fa9d - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 01:59:31 np0005548731 nova_compute[232433]: 2025-12-06 06:59:31.103 232437 DEBUG nova.virt.libvirt.driver [None req-e771028f-2a3b-412e-a830-2fbefb2a7f0c 41a5abb93e70428cb0414a3d26c3ee84 35f18a3faa574f23a2b399946979fa9d - - default default] [instance: d32579c4-62c8-41ac-9d01-b617cc7992ed] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'device_type': 'disk', 'encryption_secret_uuid': None, 'disk_bus': 'virtio', 'boot_index': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'encryption_options': None, 'size': 0, 'encrypted': False, 'image_id': '6efab05d-c7cf-4770-a5c3-c806a2739063'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec  6 01:59:31 np0005548731 nova_compute[232433]: 2025-12-06 06:59:31.107 232437 WARNING nova.virt.libvirt.driver [None req-e771028f-2a3b-412e-a830-2fbefb2a7f0c 41a5abb93e70428cb0414a3d26c3ee84 35f18a3faa574f23a2b399946979fa9d - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  6 01:59:31 np0005548731 nova_compute[232433]: 2025-12-06 06:59:31.114 232437 DEBUG nova.virt.libvirt.host [None req-e771028f-2a3b-412e-a830-2fbefb2a7f0c 41a5abb93e70428cb0414a3d26c3ee84 35f18a3faa574f23a2b399946979fa9d - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec  6 01:59:31 np0005548731 nova_compute[232433]: 2025-12-06 06:59:31.115 232437 DEBUG nova.virt.libvirt.host [None req-e771028f-2a3b-412e-a830-2fbefb2a7f0c 41a5abb93e70428cb0414a3d26c3ee84 35f18a3faa574f23a2b399946979fa9d - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec  6 01:59:31 np0005548731 nova_compute[232433]: 2025-12-06 06:59:31.118 232437 DEBUG nova.virt.libvirt.host [None req-e771028f-2a3b-412e-a830-2fbefb2a7f0c 41a5abb93e70428cb0414a3d26c3ee84 35f18a3faa574f23a2b399946979fa9d - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec  6 01:59:31 np0005548731 nova_compute[232433]: 2025-12-06 06:59:31.119 232437 DEBUG nova.virt.libvirt.host [None req-e771028f-2a3b-412e-a830-2fbefb2a7f0c 41a5abb93e70428cb0414a3d26c3ee84 35f18a3faa574f23a2b399946979fa9d - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec  6 01:59:31 np0005548731 nova_compute[232433]: 2025-12-06 06:59:31.121 232437 DEBUG nova.virt.libvirt.driver [None req-e771028f-2a3b-412e-a830-2fbefb2a7f0c 41a5abb93e70428cb0414a3d26c3ee84 35f18a3faa574f23a2b399946979fa9d - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec  6 01:59:31 np0005548731 nova_compute[232433]: 2025-12-06 06:59:31.122 232437 DEBUG nova.virt.hardware [None req-e771028f-2a3b-412e-a830-2fbefb2a7f0c 41a5abb93e70428cb0414a3d26c3ee84 35f18a3faa574f23a2b399946979fa9d - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-06T06:56:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='25848a18-11d9-4f11-80b5-5d005675c76d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec  6 01:59:31 np0005548731 nova_compute[232433]: 2025-12-06 06:59:31.122 232437 DEBUG nova.virt.hardware [None req-e771028f-2a3b-412e-a830-2fbefb2a7f0c 41a5abb93e70428cb0414a3d26c3ee84 35f18a3faa574f23a2b399946979fa9d - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec  6 01:59:31 np0005548731 nova_compute[232433]: 2025-12-06 06:59:31.122 232437 DEBUG nova.virt.hardware [None req-e771028f-2a3b-412e-a830-2fbefb2a7f0c 41a5abb93e70428cb0414a3d26c3ee84 35f18a3faa574f23a2b399946979fa9d - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec  6 01:59:31 np0005548731 nova_compute[232433]: 2025-12-06 06:59:31.123 232437 DEBUG nova.virt.hardware [None req-e771028f-2a3b-412e-a830-2fbefb2a7f0c 41a5abb93e70428cb0414a3d26c3ee84 35f18a3faa574f23a2b399946979fa9d - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec  6 01:59:31 np0005548731 nova_compute[232433]: 2025-12-06 06:59:31.123 232437 DEBUG nova.virt.hardware [None req-e771028f-2a3b-412e-a830-2fbefb2a7f0c 41a5abb93e70428cb0414a3d26c3ee84 35f18a3faa574f23a2b399946979fa9d - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec  6 01:59:31 np0005548731 nova_compute[232433]: 2025-12-06 06:59:31.123 232437 DEBUG nova.virt.hardware [None req-e771028f-2a3b-412e-a830-2fbefb2a7f0c 41a5abb93e70428cb0414a3d26c3ee84 35f18a3faa574f23a2b399946979fa9d - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec  6 01:59:31 np0005548731 nova_compute[232433]: 2025-12-06 06:59:31.123 232437 DEBUG nova.virt.hardware [None req-e771028f-2a3b-412e-a830-2fbefb2a7f0c 41a5abb93e70428cb0414a3d26c3ee84 35f18a3faa574f23a2b399946979fa9d - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec  6 01:59:31 np0005548731 nova_compute[232433]: 2025-12-06 06:59:31.123 232437 DEBUG nova.virt.hardware [None req-e771028f-2a3b-412e-a830-2fbefb2a7f0c 41a5abb93e70428cb0414a3d26c3ee84 35f18a3faa574f23a2b399946979fa9d - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec  6 01:59:31 np0005548731 nova_compute[232433]: 2025-12-06 06:59:31.124 232437 DEBUG nova.virt.hardware [None req-e771028f-2a3b-412e-a830-2fbefb2a7f0c 41a5abb93e70428cb0414a3d26c3ee84 35f18a3faa574f23a2b399946979fa9d - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec  6 01:59:31 np0005548731 nova_compute[232433]: 2025-12-06 06:59:31.124 232437 DEBUG nova.virt.hardware [None req-e771028f-2a3b-412e-a830-2fbefb2a7f0c 41a5abb93e70428cb0414a3d26c3ee84 35f18a3faa574f23a2b399946979fa9d - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec  6 01:59:31 np0005548731 nova_compute[232433]: 2025-12-06 06:59:31.124 232437 DEBUG nova.virt.hardware [None req-e771028f-2a3b-412e-a830-2fbefb2a7f0c 41a5abb93e70428cb0414a3d26c3ee84 35f18a3faa574f23a2b399946979fa9d - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec  6 01:59:31 np0005548731 nova_compute[232433]: 2025-12-06 06:59:31.124 232437 DEBUG nova.objects.instance [None req-e771028f-2a3b-412e-a830-2fbefb2a7f0c 41a5abb93e70428cb0414a3d26c3ee84 35f18a3faa574f23a2b399946979fa9d - - default default] Lazy-loading 'vcpu_model' on Instance uuid d32579c4-62c8-41ac-9d01-b617cc7992ed obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 01:59:31 np0005548731 nova_compute[232433]: 2025-12-06 06:59:31.150 232437 DEBUG oslo_concurrency.processutils [None req-e771028f-2a3b-412e-a830-2fbefb2a7f0c 41a5abb93e70428cb0414a3d26c3ee84 35f18a3faa574f23a2b399946979fa9d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 01:59:31 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:59:31 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:59:31 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:59:31.297 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:59:31 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:59:31 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:59:31 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:59:31.422 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:59:32 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  6 01:59:32 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1428329051' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  6 01:59:33 np0005548731 nova_compute[232433]: 2025-12-06 06:59:33.220 232437 DEBUG oslo_concurrency.processutils [None req-e771028f-2a3b-412e-a830-2fbefb2a7f0c 41a5abb93e70428cb0414a3d26c3ee84 35f18a3faa574f23a2b399946979fa9d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.070s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 01:59:33 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:59:33 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 01:59:33 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:59:33.300 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 01:59:33 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:59:33 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:59:33 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:59:33.424 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:59:34 np0005548731 nova_compute[232433]: 2025-12-06 06:59:34.072 232437 DEBUG oslo_concurrency.processutils [None req-e771028f-2a3b-412e-a830-2fbefb2a7f0c 41a5abb93e70428cb0414a3d26c3ee84 35f18a3faa574f23a2b399946979fa9d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 01:59:34 np0005548731 nova_compute[232433]: 2025-12-06 06:59:34.430 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 01:59:34 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  6 01:59:34 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3323031020' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  6 01:59:34 np0005548731 nova_compute[232433]: 2025-12-06 06:59:34.601 232437 DEBUG oslo_concurrency.processutils [None req-e771028f-2a3b-412e-a830-2fbefb2a7f0c 41a5abb93e70428cb0414a3d26c3ee84 35f18a3faa574f23a2b399946979fa9d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.528s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 01:59:34 np0005548731 nova_compute[232433]: 2025-12-06 06:59:34.606 232437 DEBUG nova.virt.libvirt.driver [None req-e771028f-2a3b-412e-a830-2fbefb2a7f0c 41a5abb93e70428cb0414a3d26c3ee84 35f18a3faa574f23a2b399946979fa9d - - default default] [instance: d32579c4-62c8-41ac-9d01-b617cc7992ed] End _get_guest_xml xml=<domain type="kvm">
Dec  6 01:59:34 np0005548731 nova_compute[232433]:  <uuid>d32579c4-62c8-41ac-9d01-b617cc7992ed</uuid>
Dec  6 01:59:34 np0005548731 nova_compute[232433]:  <name>instance-0000000b</name>
Dec  6 01:59:34 np0005548731 nova_compute[232433]:  <memory>131072</memory>
Dec  6 01:59:34 np0005548731 nova_compute[232433]:  <vcpu>1</vcpu>
Dec  6 01:59:34 np0005548731 nova_compute[232433]:  <metadata>
Dec  6 01:59:34 np0005548731 nova_compute[232433]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec  6 01:59:34 np0005548731 nova_compute[232433]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec  6 01:59:34 np0005548731 nova_compute[232433]:      <nova:name>tempest-MigrationsAdminTest-server-822732855</nova:name>
Dec  6 01:59:34 np0005548731 nova_compute[232433]:      <nova:creationTime>2025-12-06 06:59:31</nova:creationTime>
Dec  6 01:59:34 np0005548731 nova_compute[232433]:      <nova:flavor name="m1.nano">
Dec  6 01:59:34 np0005548731 nova_compute[232433]:        <nova:memory>128</nova:memory>
Dec  6 01:59:34 np0005548731 nova_compute[232433]:        <nova:disk>1</nova:disk>
Dec  6 01:59:34 np0005548731 nova_compute[232433]:        <nova:swap>0</nova:swap>
Dec  6 01:59:34 np0005548731 nova_compute[232433]:        <nova:ephemeral>0</nova:ephemeral>
Dec  6 01:59:34 np0005548731 nova_compute[232433]:        <nova:vcpus>1</nova:vcpus>
Dec  6 01:59:34 np0005548731 nova_compute[232433]:      </nova:flavor>
Dec  6 01:59:34 np0005548731 nova_compute[232433]:      <nova:owner>
Dec  6 01:59:34 np0005548731 nova_compute[232433]:        <nova:user uuid="538aa592cfb04958ab11223ed2d98106">tempest-MigrationsAdminTest-541331030-project-member</nova:user>
Dec  6 01:59:34 np0005548731 nova_compute[232433]:        <nova:project uuid="fc6c493097a84d069d178020ca398a25">tempest-MigrationsAdminTest-541331030</nova:project>
Dec  6 01:59:34 np0005548731 nova_compute[232433]:      </nova:owner>
Dec  6 01:59:34 np0005548731 nova_compute[232433]:      <nova:root type="image" uuid="6efab05d-c7cf-4770-a5c3-c806a2739063"/>
Dec  6 01:59:34 np0005548731 nova_compute[232433]:      <nova:ports/>
Dec  6 01:59:34 np0005548731 nova_compute[232433]:    </nova:instance>
Dec  6 01:59:34 np0005548731 nova_compute[232433]:  </metadata>
Dec  6 01:59:34 np0005548731 nova_compute[232433]:  <sysinfo type="smbios">
Dec  6 01:59:34 np0005548731 nova_compute[232433]:    <system>
Dec  6 01:59:34 np0005548731 nova_compute[232433]:      <entry name="manufacturer">RDO</entry>
Dec  6 01:59:34 np0005548731 nova_compute[232433]:      <entry name="product">OpenStack Compute</entry>
Dec  6 01:59:34 np0005548731 nova_compute[232433]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec  6 01:59:34 np0005548731 nova_compute[232433]:      <entry name="serial">d32579c4-62c8-41ac-9d01-b617cc7992ed</entry>
Dec  6 01:59:34 np0005548731 nova_compute[232433]:      <entry name="uuid">d32579c4-62c8-41ac-9d01-b617cc7992ed</entry>
Dec  6 01:59:34 np0005548731 nova_compute[232433]:      <entry name="family">Virtual Machine</entry>
Dec  6 01:59:34 np0005548731 nova_compute[232433]:    </system>
Dec  6 01:59:34 np0005548731 nova_compute[232433]:  </sysinfo>
Dec  6 01:59:34 np0005548731 nova_compute[232433]:  <os>
Dec  6 01:59:34 np0005548731 nova_compute[232433]:    <type arch="x86_64" machine="q35">hvm</type>
Dec  6 01:59:34 np0005548731 nova_compute[232433]:    <boot dev="hd"/>
Dec  6 01:59:34 np0005548731 nova_compute[232433]:    <smbios mode="sysinfo"/>
Dec  6 01:59:34 np0005548731 nova_compute[232433]:  </os>
Dec  6 01:59:34 np0005548731 nova_compute[232433]:  <features>
Dec  6 01:59:34 np0005548731 nova_compute[232433]:    <acpi/>
Dec  6 01:59:34 np0005548731 nova_compute[232433]:    <apic/>
Dec  6 01:59:34 np0005548731 nova_compute[232433]:    <vmcoreinfo/>
Dec  6 01:59:34 np0005548731 nova_compute[232433]:  </features>
Dec  6 01:59:34 np0005548731 nova_compute[232433]:  <clock offset="utc">
Dec  6 01:59:34 np0005548731 nova_compute[232433]:    <timer name="pit" tickpolicy="delay"/>
Dec  6 01:59:34 np0005548731 nova_compute[232433]:    <timer name="rtc" tickpolicy="catchup"/>
Dec  6 01:59:34 np0005548731 nova_compute[232433]:    <timer name="hpet" present="no"/>
Dec  6 01:59:34 np0005548731 nova_compute[232433]:  </clock>
Dec  6 01:59:34 np0005548731 nova_compute[232433]:  <cpu mode="custom" match="exact">
Dec  6 01:59:34 np0005548731 nova_compute[232433]:    <model>Nehalem</model>
Dec  6 01:59:34 np0005548731 nova_compute[232433]:    <topology sockets="1" cores="1" threads="1"/>
Dec  6 01:59:34 np0005548731 nova_compute[232433]:  </cpu>
Dec  6 01:59:34 np0005548731 nova_compute[232433]:  <devices>
Dec  6 01:59:34 np0005548731 nova_compute[232433]:    <disk type="network" device="disk">
Dec  6 01:59:34 np0005548731 nova_compute[232433]:      <driver type="raw" cache="none"/>
Dec  6 01:59:34 np0005548731 nova_compute[232433]:      <source protocol="rbd" name="vms/d32579c4-62c8-41ac-9d01-b617cc7992ed_disk">
Dec  6 01:59:34 np0005548731 nova_compute[232433]:        <host name="192.168.122.100" port="6789"/>
Dec  6 01:59:34 np0005548731 nova_compute[232433]:        <host name="192.168.122.102" port="6789"/>
Dec  6 01:59:34 np0005548731 nova_compute[232433]:        <host name="192.168.122.101" port="6789"/>
Dec  6 01:59:34 np0005548731 nova_compute[232433]:      </source>
Dec  6 01:59:34 np0005548731 nova_compute[232433]:      <auth username="openstack">
Dec  6 01:59:34 np0005548731 nova_compute[232433]:        <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec  6 01:59:34 np0005548731 nova_compute[232433]:      </auth>
Dec  6 01:59:34 np0005548731 nova_compute[232433]:      <target dev="vda" bus="virtio"/>
Dec  6 01:59:34 np0005548731 nova_compute[232433]:    </disk>
Dec  6 01:59:34 np0005548731 nova_compute[232433]:    <disk type="network" device="cdrom">
Dec  6 01:59:34 np0005548731 nova_compute[232433]:      <driver type="raw" cache="none"/>
Dec  6 01:59:34 np0005548731 nova_compute[232433]:      <source protocol="rbd" name="vms/d32579c4-62c8-41ac-9d01-b617cc7992ed_disk.config">
Dec  6 01:59:34 np0005548731 nova_compute[232433]:        <host name="192.168.122.100" port="6789"/>
Dec  6 01:59:34 np0005548731 nova_compute[232433]:        <host name="192.168.122.102" port="6789"/>
Dec  6 01:59:34 np0005548731 nova_compute[232433]:        <host name="192.168.122.101" port="6789"/>
Dec  6 01:59:34 np0005548731 nova_compute[232433]:      </source>
Dec  6 01:59:34 np0005548731 nova_compute[232433]:      <auth username="openstack">
Dec  6 01:59:34 np0005548731 nova_compute[232433]:        <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec  6 01:59:34 np0005548731 nova_compute[232433]:      </auth>
Dec  6 01:59:34 np0005548731 nova_compute[232433]:      <target dev="sda" bus="sata"/>
Dec  6 01:59:34 np0005548731 nova_compute[232433]:    </disk>
Dec  6 01:59:34 np0005548731 nova_compute[232433]:    <serial type="pty">
Dec  6 01:59:34 np0005548731 nova_compute[232433]:      <log file="/var/lib/nova/instances/d32579c4-62c8-41ac-9d01-b617cc7992ed/console.log" append="off"/>
Dec  6 01:59:34 np0005548731 nova_compute[232433]:    </serial>
Dec  6 01:59:34 np0005548731 nova_compute[232433]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec  6 01:59:34 np0005548731 nova_compute[232433]:    <video>
Dec  6 01:59:34 np0005548731 nova_compute[232433]:      <model type="virtio"/>
Dec  6 01:59:34 np0005548731 nova_compute[232433]:    </video>
Dec  6 01:59:34 np0005548731 nova_compute[232433]:    <input type="tablet" bus="usb"/>
Dec  6 01:59:34 np0005548731 nova_compute[232433]:    <rng model="virtio">
Dec  6 01:59:34 np0005548731 nova_compute[232433]:      <backend model="random">/dev/urandom</backend>
Dec  6 01:59:34 np0005548731 nova_compute[232433]:    </rng>
Dec  6 01:59:34 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root"/>
Dec  6 01:59:34 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 01:59:34 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 01:59:34 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 01:59:34 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 01:59:34 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 01:59:34 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 01:59:34 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 01:59:34 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 01:59:34 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 01:59:34 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 01:59:34 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 01:59:34 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 01:59:34 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 01:59:34 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 01:59:34 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 01:59:34 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 01:59:34 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 01:59:34 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 01:59:34 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 01:59:34 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 01:59:34 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 01:59:34 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 01:59:34 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 01:59:34 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 01:59:34 np0005548731 nova_compute[232433]:    <controller type="usb" index="0"/>
Dec  6 01:59:34 np0005548731 nova_compute[232433]:    <memballoon model="virtio">
Dec  6 01:59:34 np0005548731 nova_compute[232433]:      <stats period="10"/>
Dec  6 01:59:34 np0005548731 nova_compute[232433]:    </memballoon>
Dec  6 01:59:34 np0005548731 nova_compute[232433]:  </devices>
Dec  6 01:59:34 np0005548731 nova_compute[232433]: </domain>
Dec  6 01:59:34 np0005548731 nova_compute[232433]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec  6 01:59:34 np0005548731 nova_compute[232433]: 2025-12-06 06:59:34.667 232437 DEBUG nova.virt.libvirt.driver [None req-e771028f-2a3b-412e-a830-2fbefb2a7f0c 41a5abb93e70428cb0414a3d26c3ee84 35f18a3faa574f23a2b399946979fa9d - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  6 01:59:34 np0005548731 nova_compute[232433]: 2025-12-06 06:59:34.668 232437 DEBUG nova.virt.libvirt.driver [None req-e771028f-2a3b-412e-a830-2fbefb2a7f0c 41a5abb93e70428cb0414a3d26c3ee84 35f18a3faa574f23a2b399946979fa9d - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  6 01:59:34 np0005548731 nova_compute[232433]: 2025-12-06 06:59:34.669 232437 INFO nova.virt.libvirt.driver [None req-e771028f-2a3b-412e-a830-2fbefb2a7f0c 41a5abb93e70428cb0414a3d26c3ee84 35f18a3faa574f23a2b399946979fa9d - - default default] [instance: d32579c4-62c8-41ac-9d01-b617cc7992ed] Using config drive#033[00m
Dec  6 01:59:34 np0005548731 systemd-machined[195355]: New machine qemu-4-instance-0000000b.
Dec  6 01:59:34 np0005548731 systemd[1]: Started Virtual Machine qemu-4-instance-0000000b.
Dec  6 01:59:35 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e157 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 01:59:35 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:59:35 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:59:35 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:59:35.303 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:59:35 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:59:35 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 01:59:35 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:59:35.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 01:59:35 np0005548731 nova_compute[232433]: 2025-12-06 06:59:35.829 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 01:59:35 np0005548731 nova_compute[232433]: 2025-12-06 06:59:35.968 232437 DEBUG nova.virt.driver [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Emitting event <LifecycleEvent: 1765004375.9681065, d32579c4-62c8-41ac-9d01-b617cc7992ed => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 01:59:35 np0005548731 nova_compute[232433]: 2025-12-06 06:59:35.969 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: d32579c4-62c8-41ac-9d01-b617cc7992ed] VM Resumed (Lifecycle Event)#033[00m
Dec  6 01:59:35 np0005548731 nova_compute[232433]: 2025-12-06 06:59:35.973 232437 DEBUG nova.compute.manager [None req-e771028f-2a3b-412e-a830-2fbefb2a7f0c 41a5abb93e70428cb0414a3d26c3ee84 35f18a3faa574f23a2b399946979fa9d - - default default] [instance: d32579c4-62c8-41ac-9d01-b617cc7992ed] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec  6 01:59:35 np0005548731 nova_compute[232433]: 2025-12-06 06:59:35.977 232437 INFO nova.virt.libvirt.driver [-] [instance: d32579c4-62c8-41ac-9d01-b617cc7992ed] Instance running successfully.#033[00m
Dec  6 01:59:35 np0005548731 virtqemud[232080]: argument unsupported: QEMU guest agent is not configured
Dec  6 01:59:35 np0005548731 nova_compute[232433]: 2025-12-06 06:59:35.980 232437 DEBUG nova.virt.libvirt.guest [None req-e771028f-2a3b-412e-a830-2fbefb2a7f0c 41a5abb93e70428cb0414a3d26c3ee84 35f18a3faa574f23a2b399946979fa9d - - default default] [instance: d32579c4-62c8-41ac-9d01-b617cc7992ed] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200#033[00m
Dec  6 01:59:35 np0005548731 nova_compute[232433]: 2025-12-06 06:59:35.980 232437 DEBUG nova.virt.libvirt.driver [None req-e771028f-2a3b-412e-a830-2fbefb2a7f0c 41a5abb93e70428cb0414a3d26c3ee84 35f18a3faa574f23a2b399946979fa9d - - default default] [instance: d32579c4-62c8-41ac-9d01-b617cc7992ed] finish_migration finished successfully. finish_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11793#033[00m
Dec  6 01:59:36 np0005548731 nova_compute[232433]: 2025-12-06 06:59:36.008 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: d32579c4-62c8-41ac-9d01-b617cc7992ed] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 01:59:36 np0005548731 nova_compute[232433]: 2025-12-06 06:59:36.020 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: d32579c4-62c8-41ac-9d01-b617cc7992ed] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: resize_finish, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  6 01:59:36 np0005548731 nova_compute[232433]: 2025-12-06 06:59:36.075 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: d32579c4-62c8-41ac-9d01-b617cc7992ed] During sync_power_state the instance has a pending task (resize_finish). Skip.#033[00m
Dec  6 01:59:36 np0005548731 nova_compute[232433]: 2025-12-06 06:59:36.076 232437 DEBUG nova.virt.driver [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Emitting event <LifecycleEvent: 1765004375.9682283, d32579c4-62c8-41ac-9d01-b617cc7992ed => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 01:59:36 np0005548731 nova_compute[232433]: 2025-12-06 06:59:36.076 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: d32579c4-62c8-41ac-9d01-b617cc7992ed] VM Started (Lifecycle Event)#033[00m
Dec  6 01:59:36 np0005548731 nova_compute[232433]: 2025-12-06 06:59:36.103 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 01:59:36 np0005548731 nova_compute[232433]: 2025-12-06 06:59:36.110 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: d32579c4-62c8-41ac-9d01-b617cc7992ed] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 01:59:36 np0005548731 nova_compute[232433]: 2025-12-06 06:59:36.113 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: d32579c4-62c8-41ac-9d01-b617cc7992ed] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: resize_finish, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  6 01:59:37 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:59:37 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:59:37 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:59:37.306 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:59:37 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:59:37 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:59:37 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:59:37.428 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:59:39 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:59:39 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 01:59:39 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:59:39.309 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 01:59:39 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:59:39 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 01:59:39 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:59:39.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 01:59:39 np0005548731 nova_compute[232433]: 2025-12-06 06:59:39.473 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 01:59:40 np0005548731 nova_compute[232433]: 2025-12-06 06:59:40.105 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 01:59:40 np0005548731 nova_compute[232433]: 2025-12-06 06:59:40.106 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  6 01:59:40 np0005548731 nova_compute[232433]: 2025-12-06 06:59:40.106 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  6 01:59:40 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e157 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 01:59:40 np0005548731 nova_compute[232433]: 2025-12-06 06:59:40.530 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Acquiring lock "refresh_cache-d32579c4-62c8-41ac-9d01-b617cc7992ed" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 01:59:40 np0005548731 nova_compute[232433]: 2025-12-06 06:59:40.530 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Acquired lock "refresh_cache-d32579c4-62c8-41ac-9d01-b617cc7992ed" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 01:59:40 np0005548731 nova_compute[232433]: 2025-12-06 06:59:40.530 232437 DEBUG nova.network.neutron [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] [instance: d32579c4-62c8-41ac-9d01-b617cc7992ed] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Dec  6 01:59:40 np0005548731 nova_compute[232433]: 2025-12-06 06:59:40.531 232437 DEBUG nova.objects.instance [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lazy-loading 'info_cache' on Instance uuid d32579c4-62c8-41ac-9d01-b617cc7992ed obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 01:59:40 np0005548731 nova_compute[232433]: 2025-12-06 06:59:40.712 232437 DEBUG nova.network.neutron [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] [instance: d32579c4-62c8-41ac-9d01-b617cc7992ed] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec  6 01:59:40 np0005548731 nova_compute[232433]: 2025-12-06 06:59:40.832 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 01:59:40 np0005548731 nova_compute[232433]: 2025-12-06 06:59:40.994 232437 DEBUG nova.network.neutron [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] [instance: d32579c4-62c8-41ac-9d01-b617cc7992ed] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 01:59:41 np0005548731 nova_compute[232433]: 2025-12-06 06:59:41.032 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Releasing lock "refresh_cache-d32579c4-62c8-41ac-9d01-b617cc7992ed" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 01:59:41 np0005548731 nova_compute[232433]: 2025-12-06 06:59:41.033 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] [instance: d32579c4-62c8-41ac-9d01-b617cc7992ed] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Dec  6 01:59:41 np0005548731 nova_compute[232433]: 2025-12-06 06:59:41.033 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 01:59:41 np0005548731 nova_compute[232433]: 2025-12-06 06:59:41.103 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 01:59:41 np0005548731 nova_compute[232433]: 2025-12-06 06:59:41.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 01:59:41 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e158 e158: 3 total, 3 up, 3 in
Dec  6 01:59:41 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:59:41 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:59:41 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:59:41.313 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:59:41 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:59:41 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:59:41 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:59:41.433 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:59:42 np0005548731 nova_compute[232433]: 2025-12-06 06:59:42.103 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 01:59:42 np0005548731 nova_compute[232433]: 2025-12-06 06:59:42.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 01:59:42 np0005548731 nova_compute[232433]: 2025-12-06 06:59:42.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 01:59:42 np0005548731 nova_compute[232433]: 2025-12-06 06:59:42.136 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 01:59:42 np0005548731 nova_compute[232433]: 2025-12-06 06:59:42.137 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 01:59:42 np0005548731 nova_compute[232433]: 2025-12-06 06:59:42.137 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 01:59:42 np0005548731 nova_compute[232433]: 2025-12-06 06:59:42.137 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  6 01:59:42 np0005548731 nova_compute[232433]: 2025-12-06 06:59:42.138 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 01:59:42 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 01:59:42 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1155863205' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 01:59:42 np0005548731 nova_compute[232433]: 2025-12-06 06:59:42.590 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.452s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 01:59:42 np0005548731 nova_compute[232433]: 2025-12-06 06:59:42.694 232437 DEBUG nova.virt.libvirt.driver [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] skipping disk for instance-0000000b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  6 01:59:42 np0005548731 nova_compute[232433]: 2025-12-06 06:59:42.695 232437 DEBUG nova.virt.libvirt.driver [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] skipping disk for instance-0000000b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  6 01:59:42 np0005548731 nova_compute[232433]: 2025-12-06 06:59:42.860 232437 WARNING nova.virt.libvirt.driver [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  6 01:59:42 np0005548731 nova_compute[232433]: 2025-12-06 06:59:42.862 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4792MB free_disk=20.921852111816406GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  6 01:59:42 np0005548731 nova_compute[232433]: 2025-12-06 06:59:42.862 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 01:59:42 np0005548731 nova_compute[232433]: 2025-12-06 06:59:42.863 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 01:59:43 np0005548731 nova_compute[232433]: 2025-12-06 06:59:43.167 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Instance d32579c4-62c8-41ac-9d01-b617cc7992ed actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  6 01:59:43 np0005548731 nova_compute[232433]: 2025-12-06 06:59:43.168 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  6 01:59:43 np0005548731 nova_compute[232433]: 2025-12-06 06:59:43.168 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  6 01:59:43 np0005548731 nova_compute[232433]: 2025-12-06 06:59:43.205 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 01:59:43 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:59:43 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:59:43 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:59:43.316 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:59:43 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:59:43 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.002000048s ======
Dec  6 01:59:43 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:59:43.436 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000048s
Dec  6 01:59:43 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 01:59:43 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/323353182' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 01:59:43 np0005548731 nova_compute[232433]: 2025-12-06 06:59:43.625 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.420s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 01:59:43 np0005548731 nova_compute[232433]: 2025-12-06 06:59:43.631 232437 DEBUG nova.compute.provider_tree [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Inventory has not changed in ProviderTree for provider: 6d00757a-082f-486d-ae84-869a2ba2e6e7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  6 01:59:43 np0005548731 nova_compute[232433]: 2025-12-06 06:59:43.648 232437 DEBUG nova.scheduler.client.report [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Inventory has not changed for provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  6 01:59:43 np0005548731 nova_compute[232433]: 2025-12-06 06:59:43.689 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  6 01:59:43 np0005548731 nova_compute[232433]: 2025-12-06 06:59:43.690 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.827s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 01:59:44 np0005548731 nova_compute[232433]: 2025-12-06 06:59:44.477 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 01:59:45 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 01:59:45 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:59:45 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:59:45 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:59:45.319 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:59:45 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:59:45 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 01:59:45 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:59:45.439 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 01:59:45 np0005548731 nova_compute[232433]: 2025-12-06 06:59:45.690 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 01:59:45 np0005548731 nova_compute[232433]: 2025-12-06 06:59:45.691 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  6 01:59:45 np0005548731 nova_compute[232433]: 2025-12-06 06:59:45.834 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 01:59:47 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:59:47 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:59:47 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:59:47.323 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:59:47 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:59:47 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:59:47 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:59:47.441 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:59:49 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:59:49 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:59:49 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:59:49.325 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:59:49 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:59:49 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:59:49 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:59:49.444 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:59:49 np0005548731 nova_compute[232433]: 2025-12-06 06:59:49.480 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 01:59:50 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 01:59:50 np0005548731 nova_compute[232433]: 2025-12-06 06:59:50.833 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 01:59:51 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:59:51 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:59:51 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:59:51.328 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:59:51 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:59:51 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 01:59:51 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:59:51.445 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 01:59:51 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e159 e159: 3 total, 3 up, 3 in
Dec  6 01:59:53 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:59:53 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:59:53 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:59:53.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:59:53 np0005548731 nova_compute[232433]: 2025-12-06 06:59:53.411 232437 DEBUG nova.compute.manager [None req-b1fd0da5-b333-4628-8281-83852c8c87de 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] [instance: 2341fd69-a672-42e2-834b-0f7b8269c7ef] Stashing vm_state: active _prep_resize /usr/lib/python3.9/site-packages/nova/compute/manager.py:5560#033[00m
Dec  6 01:59:53 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:59:53 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:59:53 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:59:53.447 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:59:53 np0005548731 nova_compute[232433]: 2025-12-06 06:59:53.569 232437 DEBUG oslo_concurrency.lockutils [None req-b1fd0da5-b333-4628-8281-83852c8c87de 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.resize_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 01:59:53 np0005548731 nova_compute[232433]: 2025-12-06 06:59:53.570 232437 DEBUG oslo_concurrency.lockutils [None req-b1fd0da5-b333-4628-8281-83852c8c87de 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.resize_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 01:59:53 np0005548731 nova_compute[232433]: 2025-12-06 06:59:53.597 232437 DEBUG nova.objects.instance [None req-b1fd0da5-b333-4628-8281-83852c8c87de 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Lazy-loading 'pci_requests' on Instance uuid 2341fd69-a672-42e2-834b-0f7b8269c7ef obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 01:59:53 np0005548731 nova_compute[232433]: 2025-12-06 06:59:53.615 232437 DEBUG nova.virt.hardware [None req-b1fd0da5-b333-4628-8281-83852c8c87de 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Dec  6 01:59:53 np0005548731 nova_compute[232433]: 2025-12-06 06:59:53.615 232437 INFO nova.compute.claims [None req-b1fd0da5-b333-4628-8281-83852c8c87de 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] [instance: 2341fd69-a672-42e2-834b-0f7b8269c7ef] Claim successful on node compute-2.ctlplane.example.com#033[00m
Dec  6 01:59:53 np0005548731 nova_compute[232433]: 2025-12-06 06:59:53.616 232437 DEBUG nova.objects.instance [None req-b1fd0da5-b333-4628-8281-83852c8c87de 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Lazy-loading 'resources' on Instance uuid 2341fd69-a672-42e2-834b-0f7b8269c7ef obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 01:59:53 np0005548731 nova_compute[232433]: 2025-12-06 06:59:53.633 232437 DEBUG nova.objects.instance [None req-b1fd0da5-b333-4628-8281-83852c8c87de 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Lazy-loading 'pci_devices' on Instance uuid 2341fd69-a672-42e2-834b-0f7b8269c7ef obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 01:59:53 np0005548731 nova_compute[232433]: 2025-12-06 06:59:53.679 232437 INFO nova.compute.resource_tracker [None req-b1fd0da5-b333-4628-8281-83852c8c87de 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] [instance: 2341fd69-a672-42e2-834b-0f7b8269c7ef] Updating resource usage from migration 0b2af42c-f92a-4adc-b824-3a317ca631b0#033[00m
Dec  6 01:59:53 np0005548731 nova_compute[232433]: 2025-12-06 06:59:53.679 232437 DEBUG nova.compute.resource_tracker [None req-b1fd0da5-b333-4628-8281-83852c8c87de 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] [instance: 2341fd69-a672-42e2-834b-0f7b8269c7ef] Starting to track incoming migration 0b2af42c-f92a-4adc-b824-3a317ca631b0 with flavor fb97f55a-36c0-42f2-8156-c1b04eb23dd0 _update_usage_from_migration /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1431#033[00m
Dec  6 01:59:53 np0005548731 nova_compute[232433]: 2025-12-06 06:59:53.756 232437 DEBUG oslo_concurrency.processutils [None req-b1fd0da5-b333-4628-8281-83852c8c87de 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 01:59:54 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 01:59:54 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/4292194976' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 01:59:54 np0005548731 nova_compute[232433]: 2025-12-06 06:59:54.201 232437 DEBUG oslo_concurrency.processutils [None req-b1fd0da5-b333-4628-8281-83852c8c87de 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.445s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 01:59:54 np0005548731 nova_compute[232433]: 2025-12-06 06:59:54.206 232437 DEBUG nova.compute.provider_tree [None req-b1fd0da5-b333-4628-8281-83852c8c87de 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Inventory has not changed in ProviderTree for provider: 6d00757a-082f-486d-ae84-869a2ba2e6e7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  6 01:59:54 np0005548731 nova_compute[232433]: 2025-12-06 06:59:54.224 232437 DEBUG nova.scheduler.client.report [None req-b1fd0da5-b333-4628-8281-83852c8c87de 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Inventory has not changed for provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  6 01:59:54 np0005548731 nova_compute[232433]: 2025-12-06 06:59:54.244 232437 DEBUG oslo_concurrency.lockutils [None req-b1fd0da5-b333-4628-8281-83852c8c87de 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.resize_claim" :: held 0.674s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 01:59:54 np0005548731 nova_compute[232433]: 2025-12-06 06:59:54.245 232437 INFO nova.compute.manager [None req-b1fd0da5-b333-4628-8281-83852c8c87de 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] [instance: 2341fd69-a672-42e2-834b-0f7b8269c7ef] Migrating#033[00m
Dec  6 01:59:54 np0005548731 nova_compute[232433]: 2025-12-06 06:59:54.537 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 01:59:55 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 01:59:55 np0005548731 systemd[1]: Created slice User Slice of UID 42436.
Dec  6 01:59:55 np0005548731 systemd[1]: Starting User Runtime Directory /run/user/42436...
Dec  6 01:59:55 np0005548731 systemd-logind[794]: New session 55 of user nova.
Dec  6 01:59:55 np0005548731 systemd[1]: Finished User Runtime Directory /run/user/42436.
Dec  6 01:59:55 np0005548731 systemd[1]: Starting User Manager for UID 42436...
Dec  6 01:59:55 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:59:55 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 01:59:55 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:59:55.333 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 01:59:55 np0005548731 systemd[238510]: Queued start job for default target Main User Target.
Dec  6 01:59:55 np0005548731 systemd[238510]: Created slice User Application Slice.
Dec  6 01:59:55 np0005548731 systemd[238510]: Started Mark boot as successful after the user session has run 2 minutes.
Dec  6 01:59:55 np0005548731 systemd[238510]: Started Daily Cleanup of User's Temporary Directories.
Dec  6 01:59:55 np0005548731 systemd[238510]: Reached target Paths.
Dec  6 01:59:55 np0005548731 systemd[238510]: Reached target Timers.
Dec  6 01:59:55 np0005548731 systemd[238510]: Starting D-Bus User Message Bus Socket...
Dec  6 01:59:55 np0005548731 systemd[238510]: Starting Create User's Volatile Files and Directories...
Dec  6 01:59:55 np0005548731 systemd[238510]: Listening on D-Bus User Message Bus Socket.
Dec  6 01:59:55 np0005548731 systemd[238510]: Reached target Sockets.
Dec  6 01:59:55 np0005548731 systemd[238510]: Finished Create User's Volatile Files and Directories.
Dec  6 01:59:55 np0005548731 systemd[238510]: Reached target Basic System.
Dec  6 01:59:55 np0005548731 systemd[238510]: Reached target Main User Target.
Dec  6 01:59:55 np0005548731 systemd[238510]: Startup finished in 141ms.
Dec  6 01:59:55 np0005548731 systemd[1]: Started User Manager for UID 42436.
Dec  6 01:59:55 np0005548731 systemd[1]: Started Session 55 of User nova.
Dec  6 01:59:55 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:59:55 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 01:59:55 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:59:55.448 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 01:59:55 np0005548731 systemd[1]: session-55.scope: Deactivated successfully.
Dec  6 01:59:55 np0005548731 systemd-logind[794]: Session 55 logged out. Waiting for processes to exit.
Dec  6 01:59:55 np0005548731 systemd-logind[794]: Removed session 55.
Dec  6 01:59:55 np0005548731 systemd-logind[794]: New session 57 of user nova.
Dec  6 01:59:55 np0005548731 systemd[1]: Started Session 57 of User nova.
Dec  6 01:59:55 np0005548731 systemd[1]: session-57.scope: Deactivated successfully.
Dec  6 01:59:55 np0005548731 systemd-logind[794]: Session 57 logged out. Waiting for processes to exit.
Dec  6 01:59:55 np0005548731 systemd-logind[794]: Removed session 57.
Dec  6 01:59:55 np0005548731 nova_compute[232433]: 2025-12-06 06:59:55.834 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 01:59:56 np0005548731 nova_compute[232433]: 2025-12-06 06:59:56.540 232437 DEBUG oslo_concurrency.lockutils [None req-fdecf097-41f0-4720-b43d-dba0eb17def6 56a80bccd06a46eb841b4a39bdc45f76 e2869bdde00d4ef8bea339be8805aa3f - - default default] Acquiring lock "4fb7ec18-30a1-4bd1-8daa-fd0ac899b57e" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 01:59:56 np0005548731 nova_compute[232433]: 2025-12-06 06:59:56.541 232437 DEBUG oslo_concurrency.lockutils [None req-fdecf097-41f0-4720-b43d-dba0eb17def6 56a80bccd06a46eb841b4a39bdc45f76 e2869bdde00d4ef8bea339be8805aa3f - - default default] Lock "4fb7ec18-30a1-4bd1-8daa-fd0ac899b57e" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 01:59:56 np0005548731 nova_compute[232433]: 2025-12-06 06:59:56.567 232437 DEBUG nova.compute.manager [None req-fdecf097-41f0-4720-b43d-dba0eb17def6 56a80bccd06a46eb841b4a39bdc45f76 e2869bdde00d4ef8bea339be8805aa3f - - default default] [instance: 4fb7ec18-30a1-4bd1-8daa-fd0ac899b57e] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Dec  6 01:59:56 np0005548731 nova_compute[232433]: 2025-12-06 06:59:56.687 232437 DEBUG oslo_concurrency.lockutils [None req-fdecf097-41f0-4720-b43d-dba0eb17def6 56a80bccd06a46eb841b4a39bdc45f76 e2869bdde00d4ef8bea339be8805aa3f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 01:59:56 np0005548731 nova_compute[232433]: 2025-12-06 06:59:56.688 232437 DEBUG oslo_concurrency.lockutils [None req-fdecf097-41f0-4720-b43d-dba0eb17def6 56a80bccd06a46eb841b4a39bdc45f76 e2869bdde00d4ef8bea339be8805aa3f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 01:59:56 np0005548731 nova_compute[232433]: 2025-12-06 06:59:56.699 232437 DEBUG nova.virt.hardware [None req-fdecf097-41f0-4720-b43d-dba0eb17def6 56a80bccd06a46eb841b4a39bdc45f76 e2869bdde00d4ef8bea339be8805aa3f - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Dec  6 01:59:56 np0005548731 nova_compute[232433]: 2025-12-06 06:59:56.700 232437 INFO nova.compute.claims [None req-fdecf097-41f0-4720-b43d-dba0eb17def6 56a80bccd06a46eb841b4a39bdc45f76 e2869bdde00d4ef8bea339be8805aa3f - - default default] [instance: 4fb7ec18-30a1-4bd1-8daa-fd0ac899b57e] Claim successful on node compute-2.ctlplane.example.com#033[00m
Dec  6 01:59:56 np0005548731 podman[238533]: 2025-12-06 06:59:56.932111392 +0000 UTC m=+0.070629108 container health_status 26c2ce9bdb83c6db5ccad7f5b2a7cb4e9354ab0235ad2f942ea42c8feda2142b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125)
Dec  6 01:59:56 np0005548731 podman[238535]: 2025-12-06 06:59:56.932336408 +0000 UTC m=+0.071681944 container health_status ae4fd08cdec31d6ae2a6b91ffff7deb6ca5a8375cb52ee4b685cc6f25796824e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=multipathd, container_name=multipathd, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Dec  6 01:59:57 np0005548731 podman[238534]: 2025-12-06 06:59:57.018407189 +0000 UTC m=+0.158104483 container health_status a118c3becf56cfbcae208065771ebf10a65162bb7b96389971293e534823fb78 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec  6 01:59:57 np0005548731 nova_compute[232433]: 2025-12-06 06:59:57.163 232437 DEBUG oslo_concurrency.processutils [None req-fdecf097-41f0-4720-b43d-dba0eb17def6 56a80bccd06a46eb841b4a39bdc45f76 e2869bdde00d4ef8bea339be8805aa3f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 01:59:57 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:59:57 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:59:57 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:59:57.336 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:59:57 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:59:57 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 01:59:57 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:59:57.450 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 01:59:57 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 01:59:57 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/843334347' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 01:59:57 np0005548731 nova_compute[232433]: 2025-12-06 06:59:57.639 232437 DEBUG oslo_concurrency.processutils [None req-fdecf097-41f0-4720-b43d-dba0eb17def6 56a80bccd06a46eb841b4a39bdc45f76 e2869bdde00d4ef8bea339be8805aa3f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 01:59:57 np0005548731 nova_compute[232433]: 2025-12-06 06:59:57.649 232437 DEBUG nova.compute.provider_tree [None req-fdecf097-41f0-4720-b43d-dba0eb17def6 56a80bccd06a46eb841b4a39bdc45f76 e2869bdde00d4ef8bea339be8805aa3f - - default default] Inventory has not changed in ProviderTree for provider: 6d00757a-082f-486d-ae84-869a2ba2e6e7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  6 01:59:57 np0005548731 nova_compute[232433]: 2025-12-06 06:59:57.671 232437 DEBUG nova.scheduler.client.report [None req-fdecf097-41f0-4720-b43d-dba0eb17def6 56a80bccd06a46eb841b4a39bdc45f76 e2869bdde00d4ef8bea339be8805aa3f - - default default] Inventory has not changed for provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  6 01:59:57 np0005548731 nova_compute[232433]: 2025-12-06 06:59:57.698 232437 DEBUG oslo_concurrency.lockutils [None req-fdecf097-41f0-4720-b43d-dba0eb17def6 56a80bccd06a46eb841b4a39bdc45f76 e2869bdde00d4ef8bea339be8805aa3f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.010s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 01:59:57 np0005548731 nova_compute[232433]: 2025-12-06 06:59:57.699 232437 DEBUG nova.compute.manager [None req-fdecf097-41f0-4720-b43d-dba0eb17def6 56a80bccd06a46eb841b4a39bdc45f76 e2869bdde00d4ef8bea339be8805aa3f - - default default] [instance: 4fb7ec18-30a1-4bd1-8daa-fd0ac899b57e] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Dec  6 01:59:57 np0005548731 nova_compute[232433]: 2025-12-06 06:59:57.754 232437 DEBUG nova.compute.manager [None req-fdecf097-41f0-4720-b43d-dba0eb17def6 56a80bccd06a46eb841b4a39bdc45f76 e2869bdde00d4ef8bea339be8805aa3f - - default default] [instance: 4fb7ec18-30a1-4bd1-8daa-fd0ac899b57e] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Dec  6 01:59:57 np0005548731 nova_compute[232433]: 2025-12-06 06:59:57.755 232437 DEBUG nova.network.neutron [None req-fdecf097-41f0-4720-b43d-dba0eb17def6 56a80bccd06a46eb841b4a39bdc45f76 e2869bdde00d4ef8bea339be8805aa3f - - default default] [instance: 4fb7ec18-30a1-4bd1-8daa-fd0ac899b57e] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Dec  6 01:59:57 np0005548731 nova_compute[232433]: 2025-12-06 06:59:57.784 232437 INFO nova.virt.libvirt.driver [None req-fdecf097-41f0-4720-b43d-dba0eb17def6 56a80bccd06a46eb841b4a39bdc45f76 e2869bdde00d4ef8bea339be8805aa3f - - default default] [instance: 4fb7ec18-30a1-4bd1-8daa-fd0ac899b57e] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Dec  6 01:59:57 np0005548731 nova_compute[232433]: 2025-12-06 06:59:57.805 232437 DEBUG nova.compute.manager [None req-fdecf097-41f0-4720-b43d-dba0eb17def6 56a80bccd06a46eb841b4a39bdc45f76 e2869bdde00d4ef8bea339be8805aa3f - - default default] [instance: 4fb7ec18-30a1-4bd1-8daa-fd0ac899b57e] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Dec  6 01:59:57 np0005548731 nova_compute[232433]: 2025-12-06 06:59:57.914 232437 DEBUG nova.compute.manager [None req-fdecf097-41f0-4720-b43d-dba0eb17def6 56a80bccd06a46eb841b4a39bdc45f76 e2869bdde00d4ef8bea339be8805aa3f - - default default] [instance: 4fb7ec18-30a1-4bd1-8daa-fd0ac899b57e] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Dec  6 01:59:57 np0005548731 nova_compute[232433]: 2025-12-06 06:59:57.915 232437 DEBUG nova.virt.libvirt.driver [None req-fdecf097-41f0-4720-b43d-dba0eb17def6 56a80bccd06a46eb841b4a39bdc45f76 e2869bdde00d4ef8bea339be8805aa3f - - default default] [instance: 4fb7ec18-30a1-4bd1-8daa-fd0ac899b57e] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Dec  6 01:59:57 np0005548731 nova_compute[232433]: 2025-12-06 06:59:57.915 232437 INFO nova.virt.libvirt.driver [None req-fdecf097-41f0-4720-b43d-dba0eb17def6 56a80bccd06a46eb841b4a39bdc45f76 e2869bdde00d4ef8bea339be8805aa3f - - default default] [instance: 4fb7ec18-30a1-4bd1-8daa-fd0ac899b57e] Creating image(s)#033[00m
Dec  6 01:59:57 np0005548731 nova_compute[232433]: 2025-12-06 06:59:57.945 232437 DEBUG nova.storage.rbd_utils [None req-fdecf097-41f0-4720-b43d-dba0eb17def6 56a80bccd06a46eb841b4a39bdc45f76 e2869bdde00d4ef8bea339be8805aa3f - - default default] rbd image 4fb7ec18-30a1-4bd1-8daa-fd0ac899b57e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 01:59:57 np0005548731 nova_compute[232433]: 2025-12-06 06:59:57.979 232437 DEBUG nova.storage.rbd_utils [None req-fdecf097-41f0-4720-b43d-dba0eb17def6 56a80bccd06a46eb841b4a39bdc45f76 e2869bdde00d4ef8bea339be8805aa3f - - default default] rbd image 4fb7ec18-30a1-4bd1-8daa-fd0ac899b57e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 01:59:58 np0005548731 nova_compute[232433]: 2025-12-06 06:59:58.009 232437 DEBUG nova.storage.rbd_utils [None req-fdecf097-41f0-4720-b43d-dba0eb17def6 56a80bccd06a46eb841b4a39bdc45f76 e2869bdde00d4ef8bea339be8805aa3f - - default default] rbd image 4fb7ec18-30a1-4bd1-8daa-fd0ac899b57e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 01:59:58 np0005548731 nova_compute[232433]: 2025-12-06 06:59:58.014 232437 DEBUG oslo_concurrency.processutils [None req-fdecf097-41f0-4720-b43d-dba0eb17def6 56a80bccd06a46eb841b4a39bdc45f76 e2869bdde00d4ef8bea339be8805aa3f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 01:59:58 np0005548731 nova_compute[232433]: 2025-12-06 06:59:58.080 232437 DEBUG nova.policy [None req-fdecf097-41f0-4720-b43d-dba0eb17def6 56a80bccd06a46eb841b4a39bdc45f76 e2869bdde00d4ef8bea339be8805aa3f - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '56a80bccd06a46eb841b4a39bdc45f76', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'e2869bdde00d4ef8bea339be8805aa3f', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Dec  6 01:59:58 np0005548731 nova_compute[232433]: 2025-12-06 06:59:58.085 232437 DEBUG oslo_concurrency.processutils [None req-fdecf097-41f0-4720-b43d-dba0eb17def6 56a80bccd06a46eb841b4a39bdc45f76 e2869bdde00d4ef8bea339be8805aa3f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 01:59:58 np0005548731 nova_compute[232433]: 2025-12-06 06:59:58.085 232437 DEBUG oslo_concurrency.lockutils [None req-fdecf097-41f0-4720-b43d-dba0eb17def6 56a80bccd06a46eb841b4a39bdc45f76 e2869bdde00d4ef8bea339be8805aa3f - - default default] Acquiring lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 01:59:58 np0005548731 nova_compute[232433]: 2025-12-06 06:59:58.086 232437 DEBUG oslo_concurrency.lockutils [None req-fdecf097-41f0-4720-b43d-dba0eb17def6 56a80bccd06a46eb841b4a39bdc45f76 e2869bdde00d4ef8bea339be8805aa3f - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 01:59:58 np0005548731 nova_compute[232433]: 2025-12-06 06:59:58.086 232437 DEBUG oslo_concurrency.lockutils [None req-fdecf097-41f0-4720-b43d-dba0eb17def6 56a80bccd06a46eb841b4a39bdc45f76 e2869bdde00d4ef8bea339be8805aa3f - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 01:59:58 np0005548731 nova_compute[232433]: 2025-12-06 06:59:58.113 232437 DEBUG nova.storage.rbd_utils [None req-fdecf097-41f0-4720-b43d-dba0eb17def6 56a80bccd06a46eb841b4a39bdc45f76 e2869bdde00d4ef8bea339be8805aa3f - - default default] rbd image 4fb7ec18-30a1-4bd1-8daa-fd0ac899b57e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 01:59:58 np0005548731 nova_compute[232433]: 2025-12-06 06:59:58.119 232437 DEBUG oslo_concurrency.processutils [None req-fdecf097-41f0-4720-b43d-dba0eb17def6 56a80bccd06a46eb841b4a39bdc45f76 e2869bdde00d4ef8bea339be8805aa3f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef 4fb7ec18-30a1-4bd1-8daa-fd0ac899b57e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 01:59:59 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:59:59 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:59:59 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:06:59:59.340 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:59:59 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 01:59:59 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 01:59:59 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:06:59:59.452 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 01:59:59 np0005548731 nova_compute[232433]: 2025-12-06 06:59:59.541 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 01:59:59 np0005548731 nova_compute[232433]: 2025-12-06 06:59:59.846 232437 DEBUG nova.network.neutron [None req-fdecf097-41f0-4720-b43d-dba0eb17def6 56a80bccd06a46eb841b4a39bdc45f76 e2869bdde00d4ef8bea339be8805aa3f - - default default] [instance: 4fb7ec18-30a1-4bd1-8daa-fd0ac899b57e] Successfully created port: ee2e3cc0-250d-488e-8216-a6ff00f323fa _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Dec  6 02:00:00 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:00:00 np0005548731 nova_compute[232433]: 2025-12-06 07:00:00.161 232437 DEBUG oslo_concurrency.processutils [None req-fdecf097-41f0-4720-b43d-dba0eb17def6 56a80bccd06a46eb841b4a39bdc45f76 e2869bdde00d4ef8bea339be8805aa3f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef 4fb7ec18-30a1-4bd1-8daa-fd0ac899b57e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.042s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:00:00 np0005548731 nova_compute[232433]: 2025-12-06 07:00:00.247 232437 DEBUG nova.storage.rbd_utils [None req-fdecf097-41f0-4720-b43d-dba0eb17def6 56a80bccd06a46eb841b4a39bdc45f76 e2869bdde00d4ef8bea339be8805aa3f - - default default] resizing rbd image 4fb7ec18-30a1-4bd1-8daa-fd0ac899b57e_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Dec  6 02:00:00 np0005548731 ceph-mon[77458]: overall HEALTH_OK
Dec  6 02:00:00 np0005548731 nova_compute[232433]: 2025-12-06 07:00:00.701 232437 DEBUG nova.objects.instance [None req-fdecf097-41f0-4720-b43d-dba0eb17def6 56a80bccd06a46eb841b4a39bdc45f76 e2869bdde00d4ef8bea339be8805aa3f - - default default] Lazy-loading 'migration_context' on Instance uuid 4fb7ec18-30a1-4bd1-8daa-fd0ac899b57e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:00:00 np0005548731 nova_compute[232433]: 2025-12-06 07:00:00.740 232437 DEBUG nova.virt.libvirt.driver [None req-fdecf097-41f0-4720-b43d-dba0eb17def6 56a80bccd06a46eb841b4a39bdc45f76 e2869bdde00d4ef8bea339be8805aa3f - - default default] [instance: 4fb7ec18-30a1-4bd1-8daa-fd0ac899b57e] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Dec  6 02:00:00 np0005548731 nova_compute[232433]: 2025-12-06 07:00:00.740 232437 DEBUG nova.virt.libvirt.driver [None req-fdecf097-41f0-4720-b43d-dba0eb17def6 56a80bccd06a46eb841b4a39bdc45f76 e2869bdde00d4ef8bea339be8805aa3f - - default default] [instance: 4fb7ec18-30a1-4bd1-8daa-fd0ac899b57e] Ensure instance console log exists: /var/lib/nova/instances/4fb7ec18-30a1-4bd1-8daa-fd0ac899b57e/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec  6 02:00:00 np0005548731 nova_compute[232433]: 2025-12-06 07:00:00.741 232437 DEBUG oslo_concurrency.lockutils [None req-fdecf097-41f0-4720-b43d-dba0eb17def6 56a80bccd06a46eb841b4a39bdc45f76 e2869bdde00d4ef8bea339be8805aa3f - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:00:00 np0005548731 nova_compute[232433]: 2025-12-06 07:00:00.742 232437 DEBUG oslo_concurrency.lockutils [None req-fdecf097-41f0-4720-b43d-dba0eb17def6 56a80bccd06a46eb841b4a39bdc45f76 e2869bdde00d4ef8bea339be8805aa3f - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:00:00 np0005548731 nova_compute[232433]: 2025-12-06 07:00:00.742 232437 DEBUG oslo_concurrency.lockutils [None req-fdecf097-41f0-4720-b43d-dba0eb17def6 56a80bccd06a46eb841b4a39bdc45f76 e2869bdde00d4ef8bea339be8805aa3f - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:00:00 np0005548731 nova_compute[232433]: 2025-12-06 07:00:00.836 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:00:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:00:00.844 143965 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:00:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:00:00.845 143965 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:00:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:00:00.845 143965 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:00:01 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:00:01 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:00:01 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:00:01.343 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:00:01 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:00:01 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:00:01 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:00:01.454 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:00:01 np0005548731 nova_compute[232433]: 2025-12-06 07:00:01.693 232437 DEBUG nova.network.neutron [None req-fdecf097-41f0-4720-b43d-dba0eb17def6 56a80bccd06a46eb841b4a39bdc45f76 e2869bdde00d4ef8bea339be8805aa3f - - default default] [instance: 4fb7ec18-30a1-4bd1-8daa-fd0ac899b57e] Successfully updated port: ee2e3cc0-250d-488e-8216-a6ff00f323fa _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Dec  6 02:00:02 np0005548731 nova_compute[232433]: 2025-12-06 07:00:02.381 232437 DEBUG oslo_concurrency.lockutils [None req-fdecf097-41f0-4720-b43d-dba0eb17def6 56a80bccd06a46eb841b4a39bdc45f76 e2869bdde00d4ef8bea339be8805aa3f - - default default] Acquiring lock "refresh_cache-4fb7ec18-30a1-4bd1-8daa-fd0ac899b57e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 02:00:02 np0005548731 nova_compute[232433]: 2025-12-06 07:00:02.381 232437 DEBUG oslo_concurrency.lockutils [None req-fdecf097-41f0-4720-b43d-dba0eb17def6 56a80bccd06a46eb841b4a39bdc45f76 e2869bdde00d4ef8bea339be8805aa3f - - default default] Acquired lock "refresh_cache-4fb7ec18-30a1-4bd1-8daa-fd0ac899b57e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 02:00:02 np0005548731 nova_compute[232433]: 2025-12-06 07:00:02.381 232437 DEBUG nova.network.neutron [None req-fdecf097-41f0-4720-b43d-dba0eb17def6 56a80bccd06a46eb841b4a39bdc45f76 e2869bdde00d4ef8bea339be8805aa3f - - default default] [instance: 4fb7ec18-30a1-4bd1-8daa-fd0ac899b57e] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec  6 02:00:02 np0005548731 nova_compute[232433]: 2025-12-06 07:00:02.493 232437 DEBUG nova.compute.manager [req-7b282070-6d0a-4f15-9a5b-0a3323b64a25 req-6a00c10d-95fb-4c8d-9393-6b2f8d9cc6c5 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 4fb7ec18-30a1-4bd1-8daa-fd0ac899b57e] Received event network-changed-ee2e3cc0-250d-488e-8216-a6ff00f323fa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:00:02 np0005548731 nova_compute[232433]: 2025-12-06 07:00:02.494 232437 DEBUG nova.compute.manager [req-7b282070-6d0a-4f15-9a5b-0a3323b64a25 req-6a00c10d-95fb-4c8d-9393-6b2f8d9cc6c5 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 4fb7ec18-30a1-4bd1-8daa-fd0ac899b57e] Refreshing instance network info cache due to event network-changed-ee2e3cc0-250d-488e-8216-a6ff00f323fa. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  6 02:00:02 np0005548731 nova_compute[232433]: 2025-12-06 07:00:02.494 232437 DEBUG oslo_concurrency.lockutils [req-7b282070-6d0a-4f15-9a5b-0a3323b64a25 req-6a00c10d-95fb-4c8d-9393-6b2f8d9cc6c5 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-4fb7ec18-30a1-4bd1-8daa-fd0ac899b57e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 02:00:02 np0005548731 nova_compute[232433]: 2025-12-06 07:00:02.747 232437 DEBUG nova.network.neutron [None req-fdecf097-41f0-4720-b43d-dba0eb17def6 56a80bccd06a46eb841b4a39bdc45f76 e2869bdde00d4ef8bea339be8805aa3f - - default default] [instance: 4fb7ec18-30a1-4bd1-8daa-fd0ac899b57e] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec  6 02:00:02 np0005548731 ceph-mgr[77818]: client.0 ms_handle_reset on v2:192.168.122.100:6800/798720280
Dec  6 02:00:03 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:00:03 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:00:03 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:00:03.346 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:00:03 np0005548731 ovn_controller[133927]: 2025-12-06T07:00:03Z|00044|memory_trim|INFO|Detected inactivity (last active 30003 ms ago): trimming memory
Dec  6 02:00:03 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:00:03 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:00:03 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:00:03.456 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:00:04 np0005548731 nova_compute[232433]: 2025-12-06 07:00:04.026 232437 DEBUG nova.network.neutron [None req-fdecf097-41f0-4720-b43d-dba0eb17def6 56a80bccd06a46eb841b4a39bdc45f76 e2869bdde00d4ef8bea339be8805aa3f - - default default] [instance: 4fb7ec18-30a1-4bd1-8daa-fd0ac899b57e] Updating instance_info_cache with network_info: [{"id": "ee2e3cc0-250d-488e-8216-a6ff00f323fa", "address": "fa:16:3e:94:fe:b1", "network": {"id": "a0468f21-498f-424d-a04a-d011b4adc72e", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-40054798-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e2869bdde00d4ef8bea339be8805aa3f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapee2e3cc0-25", "ovs_interfaceid": "ee2e3cc0-250d-488e-8216-a6ff00f323fa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 02:00:04 np0005548731 nova_compute[232433]: 2025-12-06 07:00:04.052 232437 DEBUG oslo_concurrency.lockutils [None req-fdecf097-41f0-4720-b43d-dba0eb17def6 56a80bccd06a46eb841b4a39bdc45f76 e2869bdde00d4ef8bea339be8805aa3f - - default default] Releasing lock "refresh_cache-4fb7ec18-30a1-4bd1-8daa-fd0ac899b57e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 02:00:04 np0005548731 nova_compute[232433]: 2025-12-06 07:00:04.052 232437 DEBUG nova.compute.manager [None req-fdecf097-41f0-4720-b43d-dba0eb17def6 56a80bccd06a46eb841b4a39bdc45f76 e2869bdde00d4ef8bea339be8805aa3f - - default default] [instance: 4fb7ec18-30a1-4bd1-8daa-fd0ac899b57e] Instance network_info: |[{"id": "ee2e3cc0-250d-488e-8216-a6ff00f323fa", "address": "fa:16:3e:94:fe:b1", "network": {"id": "a0468f21-498f-424d-a04a-d011b4adc72e", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-40054798-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e2869bdde00d4ef8bea339be8805aa3f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapee2e3cc0-25", "ovs_interfaceid": "ee2e3cc0-250d-488e-8216-a6ff00f323fa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Dec  6 02:00:04 np0005548731 nova_compute[232433]: 2025-12-06 07:00:04.053 232437 DEBUG oslo_concurrency.lockutils [req-7b282070-6d0a-4f15-9a5b-0a3323b64a25 req-6a00c10d-95fb-4c8d-9393-6b2f8d9cc6c5 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-4fb7ec18-30a1-4bd1-8daa-fd0ac899b57e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 02:00:04 np0005548731 nova_compute[232433]: 2025-12-06 07:00:04.053 232437 DEBUG nova.network.neutron [req-7b282070-6d0a-4f15-9a5b-0a3323b64a25 req-6a00c10d-95fb-4c8d-9393-6b2f8d9cc6c5 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 4fb7ec18-30a1-4bd1-8daa-fd0ac899b57e] Refreshing network info cache for port ee2e3cc0-250d-488e-8216-a6ff00f323fa _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  6 02:00:04 np0005548731 nova_compute[232433]: 2025-12-06 07:00:04.056 232437 DEBUG nova.virt.libvirt.driver [None req-fdecf097-41f0-4720-b43d-dba0eb17def6 56a80bccd06a46eb841b4a39bdc45f76 e2869bdde00d4ef8bea339be8805aa3f - - default default] [instance: 4fb7ec18-30a1-4bd1-8daa-fd0ac899b57e] Start _get_guest_xml network_info=[{"id": "ee2e3cc0-250d-488e-8216-a6ff00f323fa", "address": "fa:16:3e:94:fe:b1", "network": {"id": "a0468f21-498f-424d-a04a-d011b4adc72e", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-40054798-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e2869bdde00d4ef8bea339be8805aa3f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapee2e3cc0-25", "ovs_interfaceid": "ee2e3cc0-250d-488e-8216-a6ff00f323fa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'device_type': 'disk', 'encryption_secret_uuid': None, 'disk_bus': 'virtio', 'boot_index': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'encryption_options': None, 'size': 0, 'encrypted': False, 'image_id': '6efab05d-c7cf-4770-a5c3-c806a2739063'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec  6 02:00:04 np0005548731 nova_compute[232433]: 2025-12-06 07:00:04.062 232437 WARNING nova.virt.libvirt.driver [None req-fdecf097-41f0-4720-b43d-dba0eb17def6 56a80bccd06a46eb841b4a39bdc45f76 e2869bdde00d4ef8bea339be8805aa3f - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  6 02:00:04 np0005548731 nova_compute[232433]: 2025-12-06 07:00:04.076 232437 DEBUG nova.virt.libvirt.host [None req-fdecf097-41f0-4720-b43d-dba0eb17def6 56a80bccd06a46eb841b4a39bdc45f76 e2869bdde00d4ef8bea339be8805aa3f - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec  6 02:00:04 np0005548731 nova_compute[232433]: 2025-12-06 07:00:04.077 232437 DEBUG nova.virt.libvirt.host [None req-fdecf097-41f0-4720-b43d-dba0eb17def6 56a80bccd06a46eb841b4a39bdc45f76 e2869bdde00d4ef8bea339be8805aa3f - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec  6 02:00:04 np0005548731 nova_compute[232433]: 2025-12-06 07:00:04.087 232437 DEBUG nova.virt.libvirt.host [None req-fdecf097-41f0-4720-b43d-dba0eb17def6 56a80bccd06a46eb841b4a39bdc45f76 e2869bdde00d4ef8bea339be8805aa3f - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec  6 02:00:04 np0005548731 nova_compute[232433]: 2025-12-06 07:00:04.088 232437 DEBUG nova.virt.libvirt.host [None req-fdecf097-41f0-4720-b43d-dba0eb17def6 56a80bccd06a46eb841b4a39bdc45f76 e2869bdde00d4ef8bea339be8805aa3f - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec  6 02:00:04 np0005548731 nova_compute[232433]: 2025-12-06 07:00:04.089 232437 DEBUG nova.virt.libvirt.driver [None req-fdecf097-41f0-4720-b43d-dba0eb17def6 56a80bccd06a46eb841b4a39bdc45f76 e2869bdde00d4ef8bea339be8805aa3f - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec  6 02:00:04 np0005548731 nova_compute[232433]: 2025-12-06 07:00:04.089 232437 DEBUG nova.virt.hardware [None req-fdecf097-41f0-4720-b43d-dba0eb17def6 56a80bccd06a46eb841b4a39bdc45f76 e2869bdde00d4ef8bea339be8805aa3f - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-06T06:59:48Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='1324136914',id=23,is_public=True,memory_mb=128,name='tempest-flavor_with_ephemeral_0-1545631178',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec  6 02:00:04 np0005548731 nova_compute[232433]: 2025-12-06 07:00:04.090 232437 DEBUG nova.virt.hardware [None req-fdecf097-41f0-4720-b43d-dba0eb17def6 56a80bccd06a46eb841b4a39bdc45f76 e2869bdde00d4ef8bea339be8805aa3f - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec  6 02:00:04 np0005548731 nova_compute[232433]: 2025-12-06 07:00:04.090 232437 DEBUG nova.virt.hardware [None req-fdecf097-41f0-4720-b43d-dba0eb17def6 56a80bccd06a46eb841b4a39bdc45f76 e2869bdde00d4ef8bea339be8805aa3f - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec  6 02:00:04 np0005548731 nova_compute[232433]: 2025-12-06 07:00:04.090 232437 DEBUG nova.virt.hardware [None req-fdecf097-41f0-4720-b43d-dba0eb17def6 56a80bccd06a46eb841b4a39bdc45f76 e2869bdde00d4ef8bea339be8805aa3f - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec  6 02:00:04 np0005548731 nova_compute[232433]: 2025-12-06 07:00:04.091 232437 DEBUG nova.virt.hardware [None req-fdecf097-41f0-4720-b43d-dba0eb17def6 56a80bccd06a46eb841b4a39bdc45f76 e2869bdde00d4ef8bea339be8805aa3f - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec  6 02:00:04 np0005548731 nova_compute[232433]: 2025-12-06 07:00:04.091 232437 DEBUG nova.virt.hardware [None req-fdecf097-41f0-4720-b43d-dba0eb17def6 56a80bccd06a46eb841b4a39bdc45f76 e2869bdde00d4ef8bea339be8805aa3f - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec  6 02:00:04 np0005548731 nova_compute[232433]: 2025-12-06 07:00:04.091 232437 DEBUG nova.virt.hardware [None req-fdecf097-41f0-4720-b43d-dba0eb17def6 56a80bccd06a46eb841b4a39bdc45f76 e2869bdde00d4ef8bea339be8805aa3f - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec  6 02:00:04 np0005548731 nova_compute[232433]: 2025-12-06 07:00:04.091 232437 DEBUG nova.virt.hardware [None req-fdecf097-41f0-4720-b43d-dba0eb17def6 56a80bccd06a46eb841b4a39bdc45f76 e2869bdde00d4ef8bea339be8805aa3f - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec  6 02:00:04 np0005548731 nova_compute[232433]: 2025-12-06 07:00:04.092 232437 DEBUG nova.virt.hardware [None req-fdecf097-41f0-4720-b43d-dba0eb17def6 56a80bccd06a46eb841b4a39bdc45f76 e2869bdde00d4ef8bea339be8805aa3f - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec  6 02:00:04 np0005548731 nova_compute[232433]: 2025-12-06 07:00:04.092 232437 DEBUG nova.virt.hardware [None req-fdecf097-41f0-4720-b43d-dba0eb17def6 56a80bccd06a46eb841b4a39bdc45f76 e2869bdde00d4ef8bea339be8805aa3f - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec  6 02:00:04 np0005548731 nova_compute[232433]: 2025-12-06 07:00:04.092 232437 DEBUG nova.virt.hardware [None req-fdecf097-41f0-4720-b43d-dba0eb17def6 56a80bccd06a46eb841b4a39bdc45f76 e2869bdde00d4ef8bea339be8805aa3f - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec  6 02:00:04 np0005548731 nova_compute[232433]: 2025-12-06 07:00:04.095 232437 DEBUG oslo_concurrency.processutils [None req-fdecf097-41f0-4720-b43d-dba0eb17def6 56a80bccd06a46eb841b4a39bdc45f76 e2869bdde00d4ef8bea339be8805aa3f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:00:04 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  6 02:00:04 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3864384142' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  6 02:00:04 np0005548731 nova_compute[232433]: 2025-12-06 07:00:04.517 232437 DEBUG oslo_concurrency.processutils [None req-fdecf097-41f0-4720-b43d-dba0eb17def6 56a80bccd06a46eb841b4a39bdc45f76 e2869bdde00d4ef8bea339be8805aa3f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.422s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:00:04 np0005548731 nova_compute[232433]: 2025-12-06 07:00:04.541 232437 DEBUG nova.storage.rbd_utils [None req-fdecf097-41f0-4720-b43d-dba0eb17def6 56a80bccd06a46eb841b4a39bdc45f76 e2869bdde00d4ef8bea339be8805aa3f - - default default] rbd image 4fb7ec18-30a1-4bd1-8daa-fd0ac899b57e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:00:04 np0005548731 nova_compute[232433]: 2025-12-06 07:00:04.544 232437 DEBUG oslo_concurrency.processutils [None req-fdecf097-41f0-4720-b43d-dba0eb17def6 56a80bccd06a46eb841b4a39bdc45f76 e2869bdde00d4ef8bea339be8805aa3f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:00:04 np0005548731 nova_compute[232433]: 2025-12-06 07:00:04.567 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:00:04 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  6 02:00:04 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/807715224' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  6 02:00:04 np0005548731 nova_compute[232433]: 2025-12-06 07:00:04.956 232437 DEBUG oslo_concurrency.processutils [None req-fdecf097-41f0-4720-b43d-dba0eb17def6 56a80bccd06a46eb841b4a39bdc45f76 e2869bdde00d4ef8bea339be8805aa3f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.411s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:00:04 np0005548731 nova_compute[232433]: 2025-12-06 07:00:04.957 232437 DEBUG nova.virt.libvirt.vif [None req-fdecf097-41f0-4720-b43d-dba0eb17def6 56a80bccd06a46eb841b4a39bdc45f76 e2869bdde00d4ef8bea339be8805aa3f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T06:59:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersWithSpecificFlavorTestJSON-server-1375954339',display_name='tempest-ServersWithSpecificFlavorTestJSON-server-1375954339',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(23),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serverswithspecificflavortestjson-server-1375954339',id=14,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=23,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLyYd2vj0IZaejjGwrppG7vtmC1xs+0JSlQJBkmu53OyTo4w/o53nNppqE2nymdT0UkGVn1vFVEZmmF+N+AjxIMF81FiLMyyI4ZyWmUw9aTkhGmFymaAYJKyFfVSYZ1oNQ==',key_name='tempest-keypair-1145479885',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='e2869bdde00d4ef8bea339be8805aa3f',ramdisk_id='',reservation_id='r-wu9t5vze',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersWithSpecificFlavorTestJSON-631560428',owner_user_name='tempest-ServersWithSpecificFlavorTestJSON-631560428-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T06:59:57Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='56a80bccd06a46eb841b4a39bdc45f76',uuid=4fb7ec18-30a1-4bd1-8daa-fd0ac899b57e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ee2e3cc0-250d-488e-8216-a6ff00f323fa", "address": "fa:16:3e:94:fe:b1", "network": {"id": "a0468f21-498f-424d-a04a-d011b4adc72e", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-40054798-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": 
[{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e2869bdde00d4ef8bea339be8805aa3f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapee2e3cc0-25", "ovs_interfaceid": "ee2e3cc0-250d-488e-8216-a6ff00f323fa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec  6 02:00:04 np0005548731 nova_compute[232433]: 2025-12-06 07:00:04.958 232437 DEBUG nova.network.os_vif_util [None req-fdecf097-41f0-4720-b43d-dba0eb17def6 56a80bccd06a46eb841b4a39bdc45f76 e2869bdde00d4ef8bea339be8805aa3f - - default default] Converting VIF {"id": "ee2e3cc0-250d-488e-8216-a6ff00f323fa", "address": "fa:16:3e:94:fe:b1", "network": {"id": "a0468f21-498f-424d-a04a-d011b4adc72e", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-40054798-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e2869bdde00d4ef8bea339be8805aa3f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapee2e3cc0-25", "ovs_interfaceid": "ee2e3cc0-250d-488e-8216-a6ff00f323fa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 02:00:04 np0005548731 nova_compute[232433]: 2025-12-06 07:00:04.959 232437 DEBUG nova.network.os_vif_util [None req-fdecf097-41f0-4720-b43d-dba0eb17def6 56a80bccd06a46eb841b4a39bdc45f76 e2869bdde00d4ef8bea339be8805aa3f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:94:fe:b1,bridge_name='br-int',has_traffic_filtering=True,id=ee2e3cc0-250d-488e-8216-a6ff00f323fa,network=Network(a0468f21-498f-424d-a04a-d011b4adc72e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapee2e3cc0-25') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 02:00:04 np0005548731 nova_compute[232433]: 2025-12-06 07:00:04.960 232437 DEBUG nova.objects.instance [None req-fdecf097-41f0-4720-b43d-dba0eb17def6 56a80bccd06a46eb841b4a39bdc45f76 e2869bdde00d4ef8bea339be8805aa3f - - default default] Lazy-loading 'pci_devices' on Instance uuid 4fb7ec18-30a1-4bd1-8daa-fd0ac899b57e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:00:04 np0005548731 nova_compute[232433]: 2025-12-06 07:00:04.979 232437 DEBUG nova.virt.libvirt.driver [None req-fdecf097-41f0-4720-b43d-dba0eb17def6 56a80bccd06a46eb841b4a39bdc45f76 e2869bdde00d4ef8bea339be8805aa3f - - default default] [instance: 4fb7ec18-30a1-4bd1-8daa-fd0ac899b57e] End _get_guest_xml xml=<domain type="kvm">
Dec  6 02:00:04 np0005548731 nova_compute[232433]:  <uuid>4fb7ec18-30a1-4bd1-8daa-fd0ac899b57e</uuid>
Dec  6 02:00:04 np0005548731 nova_compute[232433]:  <name>instance-0000000e</name>
Dec  6 02:00:04 np0005548731 nova_compute[232433]:  <memory>131072</memory>
Dec  6 02:00:04 np0005548731 nova_compute[232433]:  <vcpu>1</vcpu>
Dec  6 02:00:04 np0005548731 nova_compute[232433]:  <metadata>
Dec  6 02:00:04 np0005548731 nova_compute[232433]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec  6 02:00:04 np0005548731 nova_compute[232433]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec  6 02:00:04 np0005548731 nova_compute[232433]:      <nova:name>tempest-ServersWithSpecificFlavorTestJSON-server-1375954339</nova:name>
Dec  6 02:00:04 np0005548731 nova_compute[232433]:      <nova:creationTime>2025-12-06 07:00:04</nova:creationTime>
Dec  6 02:00:04 np0005548731 nova_compute[232433]:      <nova:flavor name="tempest-flavor_with_ephemeral_0-1545631178">
Dec  6 02:00:04 np0005548731 nova_compute[232433]:        <nova:memory>128</nova:memory>
Dec  6 02:00:04 np0005548731 nova_compute[232433]:        <nova:disk>1</nova:disk>
Dec  6 02:00:04 np0005548731 nova_compute[232433]:        <nova:swap>0</nova:swap>
Dec  6 02:00:04 np0005548731 nova_compute[232433]:        <nova:ephemeral>0</nova:ephemeral>
Dec  6 02:00:04 np0005548731 nova_compute[232433]:        <nova:vcpus>1</nova:vcpus>
Dec  6 02:00:04 np0005548731 nova_compute[232433]:      </nova:flavor>
Dec  6 02:00:04 np0005548731 nova_compute[232433]:      <nova:owner>
Dec  6 02:00:04 np0005548731 nova_compute[232433]:        <nova:user uuid="56a80bccd06a46eb841b4a39bdc45f76">tempest-ServersWithSpecificFlavorTestJSON-631560428-project-member</nova:user>
Dec  6 02:00:04 np0005548731 nova_compute[232433]:        <nova:project uuid="e2869bdde00d4ef8bea339be8805aa3f">tempest-ServersWithSpecificFlavorTestJSON-631560428</nova:project>
Dec  6 02:00:04 np0005548731 nova_compute[232433]:      </nova:owner>
Dec  6 02:00:04 np0005548731 nova_compute[232433]:      <nova:root type="image" uuid="6efab05d-c7cf-4770-a5c3-c806a2739063"/>
Dec  6 02:00:04 np0005548731 nova_compute[232433]:      <nova:ports>
Dec  6 02:00:04 np0005548731 nova_compute[232433]:        <nova:port uuid="ee2e3cc0-250d-488e-8216-a6ff00f323fa">
Dec  6 02:00:04 np0005548731 nova_compute[232433]:          <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Dec  6 02:00:04 np0005548731 nova_compute[232433]:        </nova:port>
Dec  6 02:00:04 np0005548731 nova_compute[232433]:      </nova:ports>
Dec  6 02:00:04 np0005548731 nova_compute[232433]:    </nova:instance>
Dec  6 02:00:04 np0005548731 nova_compute[232433]:  </metadata>
Dec  6 02:00:04 np0005548731 nova_compute[232433]:  <sysinfo type="smbios">
Dec  6 02:00:04 np0005548731 nova_compute[232433]:    <system>
Dec  6 02:00:04 np0005548731 nova_compute[232433]:      <entry name="manufacturer">RDO</entry>
Dec  6 02:00:04 np0005548731 nova_compute[232433]:      <entry name="product">OpenStack Compute</entry>
Dec  6 02:00:04 np0005548731 nova_compute[232433]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec  6 02:00:04 np0005548731 nova_compute[232433]:      <entry name="serial">4fb7ec18-30a1-4bd1-8daa-fd0ac899b57e</entry>
Dec  6 02:00:04 np0005548731 nova_compute[232433]:      <entry name="uuid">4fb7ec18-30a1-4bd1-8daa-fd0ac899b57e</entry>
Dec  6 02:00:04 np0005548731 nova_compute[232433]:      <entry name="family">Virtual Machine</entry>
Dec  6 02:00:04 np0005548731 nova_compute[232433]:    </system>
Dec  6 02:00:04 np0005548731 nova_compute[232433]:  </sysinfo>
Dec  6 02:00:04 np0005548731 nova_compute[232433]:  <os>
Dec  6 02:00:04 np0005548731 nova_compute[232433]:    <type arch="x86_64" machine="q35">hvm</type>
Dec  6 02:00:04 np0005548731 nova_compute[232433]:    <boot dev="hd"/>
Dec  6 02:00:04 np0005548731 nova_compute[232433]:    <smbios mode="sysinfo"/>
Dec  6 02:00:04 np0005548731 nova_compute[232433]:  </os>
Dec  6 02:00:04 np0005548731 nova_compute[232433]:  <features>
Dec  6 02:00:04 np0005548731 nova_compute[232433]:    <acpi/>
Dec  6 02:00:04 np0005548731 nova_compute[232433]:    <apic/>
Dec  6 02:00:04 np0005548731 nova_compute[232433]:    <vmcoreinfo/>
Dec  6 02:00:04 np0005548731 nova_compute[232433]:  </features>
Dec  6 02:00:04 np0005548731 nova_compute[232433]:  <clock offset="utc">
Dec  6 02:00:04 np0005548731 nova_compute[232433]:    <timer name="pit" tickpolicy="delay"/>
Dec  6 02:00:04 np0005548731 nova_compute[232433]:    <timer name="rtc" tickpolicy="catchup"/>
Dec  6 02:00:04 np0005548731 nova_compute[232433]:    <timer name="hpet" present="no"/>
Dec  6 02:00:04 np0005548731 nova_compute[232433]:  </clock>
Dec  6 02:00:04 np0005548731 nova_compute[232433]:  <cpu mode="custom" match="exact">
Dec  6 02:00:04 np0005548731 nova_compute[232433]:    <model>Nehalem</model>
Dec  6 02:00:04 np0005548731 nova_compute[232433]:    <topology sockets="1" cores="1" threads="1"/>
Dec  6 02:00:04 np0005548731 nova_compute[232433]:  </cpu>
Dec  6 02:00:04 np0005548731 nova_compute[232433]:  <devices>
Dec  6 02:00:04 np0005548731 nova_compute[232433]:    <disk type="network" device="disk">
Dec  6 02:00:04 np0005548731 nova_compute[232433]:      <driver type="raw" cache="none"/>
Dec  6 02:00:04 np0005548731 nova_compute[232433]:      <source protocol="rbd" name="vms/4fb7ec18-30a1-4bd1-8daa-fd0ac899b57e_disk">
Dec  6 02:00:04 np0005548731 nova_compute[232433]:        <host name="192.168.122.100" port="6789"/>
Dec  6 02:00:04 np0005548731 nova_compute[232433]:        <host name="192.168.122.102" port="6789"/>
Dec  6 02:00:04 np0005548731 nova_compute[232433]:        <host name="192.168.122.101" port="6789"/>
Dec  6 02:00:04 np0005548731 nova_compute[232433]:      </source>
Dec  6 02:00:04 np0005548731 nova_compute[232433]:      <auth username="openstack">
Dec  6 02:00:04 np0005548731 nova_compute[232433]:        <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec  6 02:00:04 np0005548731 nova_compute[232433]:      </auth>
Dec  6 02:00:04 np0005548731 nova_compute[232433]:      <target dev="vda" bus="virtio"/>
Dec  6 02:00:04 np0005548731 nova_compute[232433]:    </disk>
Dec  6 02:00:04 np0005548731 nova_compute[232433]:    <disk type="network" device="cdrom">
Dec  6 02:00:04 np0005548731 nova_compute[232433]:      <driver type="raw" cache="none"/>
Dec  6 02:00:04 np0005548731 nova_compute[232433]:      <source protocol="rbd" name="vms/4fb7ec18-30a1-4bd1-8daa-fd0ac899b57e_disk.config">
Dec  6 02:00:04 np0005548731 nova_compute[232433]:        <host name="192.168.122.100" port="6789"/>
Dec  6 02:00:04 np0005548731 nova_compute[232433]:        <host name="192.168.122.102" port="6789"/>
Dec  6 02:00:04 np0005548731 nova_compute[232433]:        <host name="192.168.122.101" port="6789"/>
Dec  6 02:00:04 np0005548731 nova_compute[232433]:      </source>
Dec  6 02:00:04 np0005548731 nova_compute[232433]:      <auth username="openstack">
Dec  6 02:00:04 np0005548731 nova_compute[232433]:        <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec  6 02:00:04 np0005548731 nova_compute[232433]:      </auth>
Dec  6 02:00:04 np0005548731 nova_compute[232433]:      <target dev="sda" bus="sata"/>
Dec  6 02:00:04 np0005548731 nova_compute[232433]:    </disk>
Dec  6 02:00:04 np0005548731 nova_compute[232433]:    <interface type="ethernet">
Dec  6 02:00:04 np0005548731 nova_compute[232433]:      <mac address="fa:16:3e:94:fe:b1"/>
Dec  6 02:00:04 np0005548731 nova_compute[232433]:      <model type="virtio"/>
Dec  6 02:00:04 np0005548731 nova_compute[232433]:      <driver name="vhost" rx_queue_size="512"/>
Dec  6 02:00:04 np0005548731 nova_compute[232433]:      <mtu size="1442"/>
Dec  6 02:00:04 np0005548731 nova_compute[232433]:      <target dev="tapee2e3cc0-25"/>
Dec  6 02:00:04 np0005548731 nova_compute[232433]:    </interface>
Dec  6 02:00:04 np0005548731 nova_compute[232433]:    <serial type="pty">
Dec  6 02:00:04 np0005548731 nova_compute[232433]:      <log file="/var/lib/nova/instances/4fb7ec18-30a1-4bd1-8daa-fd0ac899b57e/console.log" append="off"/>
Dec  6 02:00:04 np0005548731 nova_compute[232433]:    </serial>
Dec  6 02:00:04 np0005548731 nova_compute[232433]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec  6 02:00:04 np0005548731 nova_compute[232433]:    <video>
Dec  6 02:00:04 np0005548731 nova_compute[232433]:      <model type="virtio"/>
Dec  6 02:00:04 np0005548731 nova_compute[232433]:    </video>
Dec  6 02:00:04 np0005548731 nova_compute[232433]:    <input type="tablet" bus="usb"/>
Dec  6 02:00:04 np0005548731 nova_compute[232433]:    <rng model="virtio">
Dec  6 02:00:04 np0005548731 nova_compute[232433]:      <backend model="random">/dev/urandom</backend>
Dec  6 02:00:04 np0005548731 nova_compute[232433]:    </rng>
Dec  6 02:00:04 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root"/>
Dec  6 02:00:04 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:00:04 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:00:04 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:00:04 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:00:04 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:00:04 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:00:04 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:00:04 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:00:04 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:00:04 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:00:04 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:00:04 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:00:04 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:00:04 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:00:04 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:00:04 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:00:04 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:00:04 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:00:04 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:00:04 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:00:04 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:00:04 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:00:04 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:00:04 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:00:04 np0005548731 nova_compute[232433]:    <controller type="usb" index="0"/>
Dec  6 02:00:04 np0005548731 nova_compute[232433]:    <memballoon model="virtio">
Dec  6 02:00:04 np0005548731 nova_compute[232433]:      <stats period="10"/>
Dec  6 02:00:04 np0005548731 nova_compute[232433]:    </memballoon>
Dec  6 02:00:04 np0005548731 nova_compute[232433]:  </devices>
Dec  6 02:00:04 np0005548731 nova_compute[232433]: </domain>
Dec  6 02:00:04 np0005548731 nova_compute[232433]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec  6 02:00:04 np0005548731 nova_compute[232433]: 2025-12-06 07:00:04.981 232437 DEBUG nova.compute.manager [None req-fdecf097-41f0-4720-b43d-dba0eb17def6 56a80bccd06a46eb841b4a39bdc45f76 e2869bdde00d4ef8bea339be8805aa3f - - default default] [instance: 4fb7ec18-30a1-4bd1-8daa-fd0ac899b57e] Preparing to wait for external event network-vif-plugged-ee2e3cc0-250d-488e-8216-a6ff00f323fa prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec  6 02:00:04 np0005548731 nova_compute[232433]: 2025-12-06 07:00:04.981 232437 DEBUG oslo_concurrency.lockutils [None req-fdecf097-41f0-4720-b43d-dba0eb17def6 56a80bccd06a46eb841b4a39bdc45f76 e2869bdde00d4ef8bea339be8805aa3f - - default default] Acquiring lock "4fb7ec18-30a1-4bd1-8daa-fd0ac899b57e-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:00:04 np0005548731 nova_compute[232433]: 2025-12-06 07:00:04.982 232437 DEBUG oslo_concurrency.lockutils [None req-fdecf097-41f0-4720-b43d-dba0eb17def6 56a80bccd06a46eb841b4a39bdc45f76 e2869bdde00d4ef8bea339be8805aa3f - - default default] Lock "4fb7ec18-30a1-4bd1-8daa-fd0ac899b57e-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:00:04 np0005548731 nova_compute[232433]: 2025-12-06 07:00:04.982 232437 DEBUG oslo_concurrency.lockutils [None req-fdecf097-41f0-4720-b43d-dba0eb17def6 56a80bccd06a46eb841b4a39bdc45f76 e2869bdde00d4ef8bea339be8805aa3f - - default default] Lock "4fb7ec18-30a1-4bd1-8daa-fd0ac899b57e-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:00:04 np0005548731 nova_compute[232433]: 2025-12-06 07:00:04.983 232437 DEBUG nova.virt.libvirt.vif [None req-fdecf097-41f0-4720-b43d-dba0eb17def6 56a80bccd06a46eb841b4a39bdc45f76 e2869bdde00d4ef8bea339be8805aa3f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T06:59:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersWithSpecificFlavorTestJSON-server-1375954339',display_name='tempest-ServersWithSpecificFlavorTestJSON-server-1375954339',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(23),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serverswithspecificflavortestjson-server-1375954339',id=14,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=23,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLyYd2vj0IZaejjGwrppG7vtmC1xs+0JSlQJBkmu53OyTo4w/o53nNppqE2nymdT0UkGVn1vFVEZmmF+N+AjxIMF81FiLMyyI4ZyWmUw9aTkhGmFymaAYJKyFfVSYZ1oNQ==',key_name='tempest-keypair-1145479885',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='e2869bdde00d4ef8bea339be8805aa3f',ramdisk_id='',reservation_id='r-wu9t5vze',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersWithSpecificFlavorTestJSON-631560428',owner_user_name='tempest-ServersWithSpecificFlavorTestJSON-631560428-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T06:59:57Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='56a80bccd06a46eb841b4a39bdc45f76',uuid=4fb7ec18-30a1-4bd1-8daa-fd0ac899b57e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ee2e3cc0-250d-488e-8216-a6ff00f323fa", "address": "fa:16:3e:94:fe:b1", "network": {"id": "a0468f21-498f-424d-a04a-d011b4adc72e", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-40054798-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, 
"ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e2869bdde00d4ef8bea339be8805aa3f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapee2e3cc0-25", "ovs_interfaceid": "ee2e3cc0-250d-488e-8216-a6ff00f323fa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec  6 02:00:04 np0005548731 nova_compute[232433]: 2025-12-06 07:00:04.983 232437 DEBUG nova.network.os_vif_util [None req-fdecf097-41f0-4720-b43d-dba0eb17def6 56a80bccd06a46eb841b4a39bdc45f76 e2869bdde00d4ef8bea339be8805aa3f - - default default] Converting VIF {"id": "ee2e3cc0-250d-488e-8216-a6ff00f323fa", "address": "fa:16:3e:94:fe:b1", "network": {"id": "a0468f21-498f-424d-a04a-d011b4adc72e", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-40054798-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e2869bdde00d4ef8bea339be8805aa3f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapee2e3cc0-25", "ovs_interfaceid": "ee2e3cc0-250d-488e-8216-a6ff00f323fa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 02:00:04 np0005548731 nova_compute[232433]: 2025-12-06 07:00:04.984 232437 DEBUG nova.network.os_vif_util [None req-fdecf097-41f0-4720-b43d-dba0eb17def6 56a80bccd06a46eb841b4a39bdc45f76 e2869bdde00d4ef8bea339be8805aa3f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:94:fe:b1,bridge_name='br-int',has_traffic_filtering=True,id=ee2e3cc0-250d-488e-8216-a6ff00f323fa,network=Network(a0468f21-498f-424d-a04a-d011b4adc72e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapee2e3cc0-25') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 02:00:04 np0005548731 nova_compute[232433]: 2025-12-06 07:00:04.984 232437 DEBUG os_vif [None req-fdecf097-41f0-4720-b43d-dba0eb17def6 56a80bccd06a46eb841b4a39bdc45f76 e2869bdde00d4ef8bea339be8805aa3f - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:94:fe:b1,bridge_name='br-int',has_traffic_filtering=True,id=ee2e3cc0-250d-488e-8216-a6ff00f323fa,network=Network(a0468f21-498f-424d-a04a-d011b4adc72e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapee2e3cc0-25') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec  6 02:00:04 np0005548731 nova_compute[232433]: 2025-12-06 07:00:04.985 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:00:04 np0005548731 nova_compute[232433]: 2025-12-06 07:00:04.986 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:00:04 np0005548731 nova_compute[232433]: 2025-12-06 07:00:04.987 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  6 02:00:04 np0005548731 nova_compute[232433]: 2025-12-06 07:00:04.992 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:00:04 np0005548731 nova_compute[232433]: 2025-12-06 07:00:04.992 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapee2e3cc0-25, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:00:04 np0005548731 nova_compute[232433]: 2025-12-06 07:00:04.993 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapee2e3cc0-25, col_values=(('external_ids', {'iface-id': 'ee2e3cc0-250d-488e-8216-a6ff00f323fa', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:94:fe:b1', 'vm-uuid': '4fb7ec18-30a1-4bd1-8daa-fd0ac899b57e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:00:04 np0005548731 nova_compute[232433]: 2025-12-06 07:00:04.994 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:00:04 np0005548731 NetworkManager[49182]: <info>  [1765004404.9959] manager: (tapee2e3cc0-25): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/35)
Dec  6 02:00:04 np0005548731 nova_compute[232433]: 2025-12-06 07:00:04.997 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  6 02:00:05 np0005548731 nova_compute[232433]: 2025-12-06 07:00:05.001 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:00:05 np0005548731 nova_compute[232433]: 2025-12-06 07:00:05.002 232437 INFO os_vif [None req-fdecf097-41f0-4720-b43d-dba0eb17def6 56a80bccd06a46eb841b4a39bdc45f76 e2869bdde00d4ef8bea339be8805aa3f - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:94:fe:b1,bridge_name='br-int',has_traffic_filtering=True,id=ee2e3cc0-250d-488e-8216-a6ff00f323fa,network=Network(a0468f21-498f-424d-a04a-d011b4adc72e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapee2e3cc0-25')#033[00m
Dec  6 02:00:05 np0005548731 nova_compute[232433]: 2025-12-06 07:00:05.045 232437 DEBUG nova.virt.libvirt.driver [None req-fdecf097-41f0-4720-b43d-dba0eb17def6 56a80bccd06a46eb841b4a39bdc45f76 e2869bdde00d4ef8bea339be8805aa3f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  6 02:00:05 np0005548731 nova_compute[232433]: 2025-12-06 07:00:05.045 232437 DEBUG nova.virt.libvirt.driver [None req-fdecf097-41f0-4720-b43d-dba0eb17def6 56a80bccd06a46eb841b4a39bdc45f76 e2869bdde00d4ef8bea339be8805aa3f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  6 02:00:05 np0005548731 nova_compute[232433]: 2025-12-06 07:00:05.046 232437 DEBUG nova.virt.libvirt.driver [None req-fdecf097-41f0-4720-b43d-dba0eb17def6 56a80bccd06a46eb841b4a39bdc45f76 e2869bdde00d4ef8bea339be8805aa3f - - default default] No VIF found with MAC fa:16:3e:94:fe:b1, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec  6 02:00:05 np0005548731 nova_compute[232433]: 2025-12-06 07:00:05.046 232437 INFO nova.virt.libvirt.driver [None req-fdecf097-41f0-4720-b43d-dba0eb17def6 56a80bccd06a46eb841b4a39bdc45f76 e2869bdde00d4ef8bea339be8805aa3f - - default default] [instance: 4fb7ec18-30a1-4bd1-8daa-fd0ac899b57e] Using config drive#033[00m
Dec  6 02:00:05 np0005548731 nova_compute[232433]: 2025-12-06 07:00:05.073 232437 DEBUG nova.storage.rbd_utils [None req-fdecf097-41f0-4720-b43d-dba0eb17def6 56a80bccd06a46eb841b4a39bdc45f76 e2869bdde00d4ef8bea339be8805aa3f - - default default] rbd image 4fb7ec18-30a1-4bd1-8daa-fd0ac899b57e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:00:05 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:00:05 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:00:05 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 02:00:05 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:00:05.349 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 02:00:05 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:00:05 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 02:00:05 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:00:05.458 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 02:00:05 np0005548731 nova_compute[232433]: 2025-12-06 07:00:05.524 232437 INFO nova.virt.libvirt.driver [None req-fdecf097-41f0-4720-b43d-dba0eb17def6 56a80bccd06a46eb841b4a39bdc45f76 e2869bdde00d4ef8bea339be8805aa3f - - default default] [instance: 4fb7ec18-30a1-4bd1-8daa-fd0ac899b57e] Creating config drive at /var/lib/nova/instances/4fb7ec18-30a1-4bd1-8daa-fd0ac899b57e/disk.config#033[00m
Dec  6 02:00:05 np0005548731 nova_compute[232433]: 2025-12-06 07:00:05.531 232437 DEBUG oslo_concurrency.processutils [None req-fdecf097-41f0-4720-b43d-dba0eb17def6 56a80bccd06a46eb841b4a39bdc45f76 e2869bdde00d4ef8bea339be8805aa3f - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/4fb7ec18-30a1-4bd1-8daa-fd0ac899b57e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpqpn8b29l execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:00:05 np0005548731 nova_compute[232433]: 2025-12-06 07:00:05.664 232437 DEBUG oslo_concurrency.processutils [None req-fdecf097-41f0-4720-b43d-dba0eb17def6 56a80bccd06a46eb841b4a39bdc45f76 e2869bdde00d4ef8bea339be8805aa3f - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/4fb7ec18-30a1-4bd1-8daa-fd0ac899b57e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpqpn8b29l" returned: 0 in 0.133s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:00:05 np0005548731 nova_compute[232433]: 2025-12-06 07:00:05.691 232437 DEBUG nova.storage.rbd_utils [None req-fdecf097-41f0-4720-b43d-dba0eb17def6 56a80bccd06a46eb841b4a39bdc45f76 e2869bdde00d4ef8bea339be8805aa3f - - default default] rbd image 4fb7ec18-30a1-4bd1-8daa-fd0ac899b57e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:00:05 np0005548731 nova_compute[232433]: 2025-12-06 07:00:05.694 232437 DEBUG oslo_concurrency.processutils [None req-fdecf097-41f0-4720-b43d-dba0eb17def6 56a80bccd06a46eb841b4a39bdc45f76 e2869bdde00d4ef8bea339be8805aa3f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/4fb7ec18-30a1-4bd1-8daa-fd0ac899b57e/disk.config 4fb7ec18-30a1-4bd1-8daa-fd0ac899b57e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:00:05 np0005548731 systemd[1]: Stopping User Manager for UID 42436...
Dec  6 02:00:05 np0005548731 systemd[238510]: Activating special unit Exit the Session...
Dec  6 02:00:05 np0005548731 systemd[238510]: Stopped target Main User Target.
Dec  6 02:00:05 np0005548731 systemd[238510]: Stopped target Basic System.
Dec  6 02:00:05 np0005548731 systemd[238510]: Stopped target Paths.
Dec  6 02:00:05 np0005548731 systemd[238510]: Stopped target Sockets.
Dec  6 02:00:05 np0005548731 systemd[238510]: Stopped target Timers.
Dec  6 02:00:05 np0005548731 systemd[238510]: Stopped Mark boot as successful after the user session has run 2 minutes.
Dec  6 02:00:05 np0005548731 systemd[238510]: Stopped Daily Cleanup of User's Temporary Directories.
Dec  6 02:00:05 np0005548731 systemd[238510]: Closed D-Bus User Message Bus Socket.
Dec  6 02:00:05 np0005548731 systemd[238510]: Stopped Create User's Volatile Files and Directories.
Dec  6 02:00:05 np0005548731 systemd[238510]: Removed slice User Application Slice.
Dec  6 02:00:05 np0005548731 systemd[238510]: Reached target Shutdown.
Dec  6 02:00:05 np0005548731 systemd[238510]: Finished Exit the Session.
Dec  6 02:00:05 np0005548731 systemd[238510]: Reached target Exit the Session.
Dec  6 02:00:05 np0005548731 systemd[1]: user@42436.service: Deactivated successfully.
Dec  6 02:00:05 np0005548731 systemd[1]: Stopped User Manager for UID 42436.
Dec  6 02:00:05 np0005548731 systemd[1]: Stopping User Runtime Directory /run/user/42436...
Dec  6 02:00:05 np0005548731 nova_compute[232433]: 2025-12-06 07:00:05.838 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:00:05 np0005548731 systemd[1]: run-user-42436.mount: Deactivated successfully.
Dec  6 02:00:05 np0005548731 systemd[1]: user-runtime-dir@42436.service: Deactivated successfully.
Dec  6 02:00:05 np0005548731 systemd[1]: Stopped User Runtime Directory /run/user/42436.
Dec  6 02:00:05 np0005548731 systemd[1]: Removed slice User Slice of UID 42436.
Dec  6 02:00:05 np0005548731 nova_compute[232433]: 2025-12-06 07:00:05.859 232437 DEBUG nova.network.neutron [req-7b282070-6d0a-4f15-9a5b-0a3323b64a25 req-6a00c10d-95fb-4c8d-9393-6b2f8d9cc6c5 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 4fb7ec18-30a1-4bd1-8daa-fd0ac899b57e] Updated VIF entry in instance network info cache for port ee2e3cc0-250d-488e-8216-a6ff00f323fa. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  6 02:00:05 np0005548731 nova_compute[232433]: 2025-12-06 07:00:05.860 232437 DEBUG nova.network.neutron [req-7b282070-6d0a-4f15-9a5b-0a3323b64a25 req-6a00c10d-95fb-4c8d-9393-6b2f8d9cc6c5 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 4fb7ec18-30a1-4bd1-8daa-fd0ac899b57e] Updating instance_info_cache with network_info: [{"id": "ee2e3cc0-250d-488e-8216-a6ff00f323fa", "address": "fa:16:3e:94:fe:b1", "network": {"id": "a0468f21-498f-424d-a04a-d011b4adc72e", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-40054798-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e2869bdde00d4ef8bea339be8805aa3f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapee2e3cc0-25", "ovs_interfaceid": "ee2e3cc0-250d-488e-8216-a6ff00f323fa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 02:00:05 np0005548731 nova_compute[232433]: 2025-12-06 07:00:05.877 232437 DEBUG oslo_concurrency.lockutils [req-7b282070-6d0a-4f15-9a5b-0a3323b64a25 req-6a00c10d-95fb-4c8d-9393-6b2f8d9cc6c5 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-4fb7ec18-30a1-4bd1-8daa-fd0ac899b57e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 02:00:07 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:00:07 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:00:07 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:00:07.352 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:00:07 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:00:07 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:00:07 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:00:07.460 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:00:07 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Dec  6 02:00:07 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 02:00:07 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Dec  6 02:00:07 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 02:00:08 np0005548731 nova_compute[232433]: 2025-12-06 07:00:08.416 232437 DEBUG oslo_concurrency.processutils [None req-fdecf097-41f0-4720-b43d-dba0eb17def6 56a80bccd06a46eb841b4a39bdc45f76 e2869bdde00d4ef8bea339be8805aa3f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/4fb7ec18-30a1-4bd1-8daa-fd0ac899b57e/disk.config 4fb7ec18-30a1-4bd1-8daa-fd0ac899b57e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.722s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:00:08 np0005548731 nova_compute[232433]: 2025-12-06 07:00:08.417 232437 INFO nova.virt.libvirt.driver [None req-fdecf097-41f0-4720-b43d-dba0eb17def6 56a80bccd06a46eb841b4a39bdc45f76 e2869bdde00d4ef8bea339be8805aa3f - - default default] [instance: 4fb7ec18-30a1-4bd1-8daa-fd0ac899b57e] Deleting local config drive /var/lib/nova/instances/4fb7ec18-30a1-4bd1-8daa-fd0ac899b57e/disk.config because it was imported into RBD.#033[00m
Dec  6 02:00:08 np0005548731 kernel: tapee2e3cc0-25: entered promiscuous mode
Dec  6 02:00:08 np0005548731 NetworkManager[49182]: <info>  [1765004408.4704] manager: (tapee2e3cc0-25): new Tun device (/org/freedesktop/NetworkManager/Devices/36)
Dec  6 02:00:08 np0005548731 ovn_controller[133927]: 2025-12-06T07:00:08Z|00045|binding|INFO|Claiming lport ee2e3cc0-250d-488e-8216-a6ff00f323fa for this chassis.
Dec  6 02:00:08 np0005548731 ovn_controller[133927]: 2025-12-06T07:00:08Z|00046|binding|INFO|ee2e3cc0-250d-488e-8216-a6ff00f323fa: Claiming fa:16:3e:94:fe:b1 10.100.0.12
Dec  6 02:00:08 np0005548731 nova_compute[232433]: 2025-12-06 07:00:08.470 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:00:08 np0005548731 nova_compute[232433]: 2025-12-06 07:00:08.476 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:00:08 np0005548731 nova_compute[232433]: 2025-12-06 07:00:08.479 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:00:08 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:00:08.488 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:94:fe:b1 10.100.0.12'], port_security=['fa:16:3e:94:fe:b1 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '4fb7ec18-30a1-4bd1-8daa-fd0ac899b57e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a0468f21-498f-424d-a04a-d011b4adc72e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e2869bdde00d4ef8bea339be8805aa3f', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'a097e786-0d4a-48b2-8e8b-48554d219e38', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1fc5a527-4a15-4cc8-9ef4-9d61b8890236, chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], logical_port=ee2e3cc0-250d-488e-8216-a6ff00f323fa) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 02:00:08 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:00:08.490 143965 INFO neutron.agent.ovn.metadata.agent [-] Port ee2e3cc0-250d-488e-8216-a6ff00f323fa in datapath a0468f21-498f-424d-a04a-d011b4adc72e bound to our chassis#033[00m
Dec  6 02:00:08 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:00:08.492 143965 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network a0468f21-498f-424d-a04a-d011b4adc72e#033[00m
Dec  6 02:00:08 np0005548731 systemd-udevd[239180]: Network interface NamePolicy= disabled on kernel command line.
Dec  6 02:00:08 np0005548731 systemd-machined[195355]: New machine qemu-5-instance-0000000e.
Dec  6 02:00:08 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:00:08.507 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[8935abb0-0f8d-4bf2-a0fb-09b235ebea7b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:00:08 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:00:08.508 143965 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapa0468f21-41 in ovnmeta-a0468f21-498f-424d-a04a-d011b4adc72e namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Dec  6 02:00:08 np0005548731 NetworkManager[49182]: <info>  [1765004408.5120] device (tapee2e3cc0-25): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec  6 02:00:08 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:00:08.512 236668 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapa0468f21-40 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Dec  6 02:00:08 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:00:08.512 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[a84a8a11-8e29-4b79-a46d-d86ad8c0abda]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:00:08 np0005548731 NetworkManager[49182]: <info>  [1765004408.5137] device (tapee2e3cc0-25): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec  6 02:00:08 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:00:08.513 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[2ca54026-4ced-45d4-8cfd-029e79a5fc8a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:00:08 np0005548731 systemd[1]: Started Virtual Machine qemu-5-instance-0000000e.
Dec  6 02:00:08 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:00:08.524 144079 DEBUG oslo.privsep.daemon [-] privsep: reply[c8cdc11f-5a04-430a-9200-6120092f8d4c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:00:08 np0005548731 nova_compute[232433]: 2025-12-06 07:00:08.548 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:00:08 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:00:08.550 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[b9ac9eb6-164d-41b8-ab2c-89b283a5046a]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:00:08 np0005548731 ovn_controller[133927]: 2025-12-06T07:00:08Z|00047|binding|INFO|Setting lport ee2e3cc0-250d-488e-8216-a6ff00f323fa ovn-installed in OVS
Dec  6 02:00:08 np0005548731 ovn_controller[133927]: 2025-12-06T07:00:08Z|00048|binding|INFO|Setting lport ee2e3cc0-250d-488e-8216-a6ff00f323fa up in Southbound
Dec  6 02:00:08 np0005548731 nova_compute[232433]: 2025-12-06 07:00:08.554 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:00:08 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:00:08.577 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[e40b93ff-206a-41db-b618-9ae8d53d85df]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:00:08 np0005548731 NetworkManager[49182]: <info>  [1765004408.5843] manager: (tapa0468f21-40): new Veth device (/org/freedesktop/NetworkManager/Devices/37)
Dec  6 02:00:08 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:00:08.584 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[3a59b4e6-b1ca-4cd8-a772-139539e515b7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:00:08 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:00:08.615 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[6a25167f-1be2-44a6-a97c-40bcfa594b24]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:00:08 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:00:08.618 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[d0db75e5-f31d-4d91-84d3-704780b5b491]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:00:08 np0005548731 NetworkManager[49182]: <info>  [1765004408.6417] device (tapa0468f21-40): carrier: link connected
Dec  6 02:00:08 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:00:08.649 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[141c51e0-115a-4d21-a1b8-cad5315a0bbb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:00:08 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  6 02:00:08 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/371361718' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  6 02:00:08 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  6 02:00:08 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/371361718' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  6 02:00:08 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:00:08.665 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[6006d2aa-152f-431c-bbb7-aa43310c9b0d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa0468f21-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ec:9d:4b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 19], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 467064, 'reachable_time': 39895, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 148, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 148, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 239214, 'error': None, 'target': 'ovnmeta-a0468f21-498f-424d-a04a-d011b4adc72e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:00:08 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:00:08.681 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[4da576f6-2d95-45b7-b414-4ff0cc7df4bb]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feec:9d4b'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 467064, 'tstamp': 467064}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 239215, 'error': None, 'target': 'ovnmeta-a0468f21-498f-424d-a04a-d011b4adc72e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:00:08 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:00:08.696 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[d246f911-85dd-4287-90d5-cea277594c7e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa0468f21-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ec:9d:4b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 19], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 467064, 'reachable_time': 39895, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 148, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 148, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 239216, 'error': None, 'target': 'ovnmeta-a0468f21-498f-424d-a04a-d011b4adc72e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:00:08 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:00:08.726 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[3c22a6c5-04b1-48b1-9846-86a6c69e353c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:00:08 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:00:08.784 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[86ae3a3e-4aea-41c4-870b-5f49c2ec3061]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:00:08 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:00:08.787 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa0468f21-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:00:08 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:00:08.787 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  6 02:00:08 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:00:08.788 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa0468f21-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:00:08 np0005548731 kernel: tapa0468f21-40: entered promiscuous mode
Dec  6 02:00:08 np0005548731 nova_compute[232433]: 2025-12-06 07:00:08.790 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:00:08 np0005548731 NetworkManager[49182]: <info>  [1765004408.7927] manager: (tapa0468f21-40): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/38)
Dec  6 02:00:08 np0005548731 nova_compute[232433]: 2025-12-06 07:00:08.792 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:00:08 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:00:08.796 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapa0468f21-40, col_values=(('external_ids', {'iface-id': '552234cc-df01-471d-8a4b-772bc37af92a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:00:08 np0005548731 ovn_controller[133927]: 2025-12-06T07:00:08Z|00049|binding|INFO|Releasing lport 552234cc-df01-471d-8a4b-772bc37af92a from this chassis (sb_readonly=0)
Dec  6 02:00:08 np0005548731 nova_compute[232433]: 2025-12-06 07:00:08.797 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:00:08 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:00:08.800 143965 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/a0468f21-498f-424d-a04a-d011b4adc72e.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/a0468f21-498f-424d-a04a-d011b4adc72e.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Dec  6 02:00:08 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:00:08.801 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[5fb145bb-b596-4f66-8216-d6af2b4e44c5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:00:08 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:00:08.802 143965 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec  6 02:00:08 np0005548731 ovn_metadata_agent[143960]: global
Dec  6 02:00:08 np0005548731 ovn_metadata_agent[143960]:    log         /dev/log local0 debug
Dec  6 02:00:08 np0005548731 ovn_metadata_agent[143960]:    log-tag     haproxy-metadata-proxy-a0468f21-498f-424d-a04a-d011b4adc72e
Dec  6 02:00:08 np0005548731 ovn_metadata_agent[143960]:    user        root
Dec  6 02:00:08 np0005548731 ovn_metadata_agent[143960]:    group       root
Dec  6 02:00:08 np0005548731 ovn_metadata_agent[143960]:    maxconn     1024
Dec  6 02:00:08 np0005548731 ovn_metadata_agent[143960]:    pidfile     /var/lib/neutron/external/pids/a0468f21-498f-424d-a04a-d011b4adc72e.pid.haproxy
Dec  6 02:00:08 np0005548731 ovn_metadata_agent[143960]:    daemon
Dec  6 02:00:08 np0005548731 ovn_metadata_agent[143960]: 
Dec  6 02:00:08 np0005548731 ovn_metadata_agent[143960]: defaults
Dec  6 02:00:08 np0005548731 ovn_metadata_agent[143960]:    log global
Dec  6 02:00:08 np0005548731 ovn_metadata_agent[143960]:    mode http
Dec  6 02:00:08 np0005548731 ovn_metadata_agent[143960]:    option httplog
Dec  6 02:00:08 np0005548731 ovn_metadata_agent[143960]:    option dontlognull
Dec  6 02:00:08 np0005548731 ovn_metadata_agent[143960]:    option http-server-close
Dec  6 02:00:08 np0005548731 ovn_metadata_agent[143960]:    option forwardfor
Dec  6 02:00:08 np0005548731 ovn_metadata_agent[143960]:    retries                 3
Dec  6 02:00:08 np0005548731 ovn_metadata_agent[143960]:    timeout http-request    30s
Dec  6 02:00:08 np0005548731 ovn_metadata_agent[143960]:    timeout connect         30s
Dec  6 02:00:08 np0005548731 ovn_metadata_agent[143960]:    timeout client          32s
Dec  6 02:00:08 np0005548731 ovn_metadata_agent[143960]:    timeout server          32s
Dec  6 02:00:08 np0005548731 ovn_metadata_agent[143960]:    timeout http-keep-alive 30s
Dec  6 02:00:08 np0005548731 ovn_metadata_agent[143960]: 
Dec  6 02:00:08 np0005548731 ovn_metadata_agent[143960]: 
Dec  6 02:00:08 np0005548731 ovn_metadata_agent[143960]: listen listener
Dec  6 02:00:08 np0005548731 ovn_metadata_agent[143960]:    bind 169.254.169.254:80
Dec  6 02:00:08 np0005548731 ovn_metadata_agent[143960]:    server metadata /var/lib/neutron/metadata_proxy
Dec  6 02:00:08 np0005548731 ovn_metadata_agent[143960]:    http-request add-header X-OVN-Network-ID a0468f21-498f-424d-a04a-d011b4adc72e
Dec  6 02:00:08 np0005548731 ovn_metadata_agent[143960]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Dec  6 02:00:08 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:00:08.803 143965 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-a0468f21-498f-424d-a04a-d011b4adc72e', 'env', 'PROCESS_TAG=haproxy-a0468f21-498f-424d-a04a-d011b4adc72e', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/a0468f21-498f-424d-a04a-d011b4adc72e.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Dec  6 02:00:08 np0005548731 nova_compute[232433]: 2025-12-06 07:00:08.811 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:00:09 np0005548731 nova_compute[232433]: 2025-12-06 07:00:09.121 232437 DEBUG nova.compute.manager [req-96eefa63-f3bf-4bc7-9bec-ca0fdc9ed158 req-99bb835d-fe9f-4e73-b5c6-2d220e27bcab 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 4fb7ec18-30a1-4bd1-8daa-fd0ac899b57e] Received event network-vif-plugged-ee2e3cc0-250d-488e-8216-a6ff00f323fa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:00:09 np0005548731 nova_compute[232433]: 2025-12-06 07:00:09.122 232437 DEBUG oslo_concurrency.lockutils [req-96eefa63-f3bf-4bc7-9bec-ca0fdc9ed158 req-99bb835d-fe9f-4e73-b5c6-2d220e27bcab 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "4fb7ec18-30a1-4bd1-8daa-fd0ac899b57e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:00:09 np0005548731 nova_compute[232433]: 2025-12-06 07:00:09.123 232437 DEBUG oslo_concurrency.lockutils [req-96eefa63-f3bf-4bc7-9bec-ca0fdc9ed158 req-99bb835d-fe9f-4e73-b5c6-2d220e27bcab 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "4fb7ec18-30a1-4bd1-8daa-fd0ac899b57e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:00:09 np0005548731 nova_compute[232433]: 2025-12-06 07:00:09.123 232437 DEBUG oslo_concurrency.lockutils [req-96eefa63-f3bf-4bc7-9bec-ca0fdc9ed158 req-99bb835d-fe9f-4e73-b5c6-2d220e27bcab 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "4fb7ec18-30a1-4bd1-8daa-fd0ac899b57e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:00:09 np0005548731 nova_compute[232433]: 2025-12-06 07:00:09.123 232437 DEBUG nova.compute.manager [req-96eefa63-f3bf-4bc7-9bec-ca0fdc9ed158 req-99bb835d-fe9f-4e73-b5c6-2d220e27bcab 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 4fb7ec18-30a1-4bd1-8daa-fd0ac899b57e] Processing event network-vif-plugged-ee2e3cc0-250d-488e-8216-a6ff00f323fa _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Dec  6 02:00:09 np0005548731 podman[239264]: 2025-12-06 07:00:09.174673799 +0000 UTC m=+0.048959415 container create f944bb6036baad392c1d6fa230ecdfcd6c3b8a80bfcd957de95c777c50ef009b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a0468f21-498f-424d-a04a-d011b4adc72e, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Dec  6 02:00:09 np0005548731 systemd[1]: Started libpod-conmon-f944bb6036baad392c1d6fa230ecdfcd6c3b8a80bfcd957de95c777c50ef009b.scope.
Dec  6 02:00:09 np0005548731 systemd[1]: Started libcrun container.
Dec  6 02:00:09 np0005548731 podman[239264]: 2025-12-06 07:00:09.148290731 +0000 UTC m=+0.022576367 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec  6 02:00:09 np0005548731 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/34c8aaba6f169a361864c3f2b99d4a2fb99bbdb82f32f84de98abee1f7396024/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec  6 02:00:09 np0005548731 podman[239264]: 2025-12-06 07:00:09.25540548 +0000 UTC m=+0.129691116 container init f944bb6036baad392c1d6fa230ecdfcd6c3b8a80bfcd957de95c777c50ef009b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a0468f21-498f-424d-a04a-d011b4adc72e, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Dec  6 02:00:09 np0005548731 podman[239264]: 2025-12-06 07:00:09.262354798 +0000 UTC m=+0.136640414 container start f944bb6036baad392c1d6fa230ecdfcd6c3b8a80bfcd957de95c777c50ef009b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a0468f21-498f-424d-a04a-d011b4adc72e, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec  6 02:00:09 np0005548731 neutron-haproxy-ovnmeta-a0468f21-498f-424d-a04a-d011b4adc72e[239282]: [NOTICE]   (239286) : New worker (239288) forked
Dec  6 02:00:09 np0005548731 neutron-haproxy-ovnmeta-a0468f21-498f-424d-a04a-d011b4adc72e[239282]: [NOTICE]   (239286) : Loading success.
Dec  6 02:00:09 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:00:09 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:00:09 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:00:09.355 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:00:09 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:00:09 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 02:00:09 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:00:09.462 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 02:00:09 np0005548731 nova_compute[232433]: 2025-12-06 07:00:09.573 232437 DEBUG nova.compute.manager [None req-fdecf097-41f0-4720-b43d-dba0eb17def6 56a80bccd06a46eb841b4a39bdc45f76 e2869bdde00d4ef8bea339be8805aa3f - - default default] [instance: 4fb7ec18-30a1-4bd1-8daa-fd0ac899b57e] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec  6 02:00:09 np0005548731 nova_compute[232433]: 2025-12-06 07:00:09.574 232437 DEBUG nova.virt.driver [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Emitting event <LifecycleEvent: 1765004409.5729656, 4fb7ec18-30a1-4bd1-8daa-fd0ac899b57e => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 02:00:09 np0005548731 nova_compute[232433]: 2025-12-06 07:00:09.574 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 4fb7ec18-30a1-4bd1-8daa-fd0ac899b57e] VM Started (Lifecycle Event)#033[00m
Dec  6 02:00:09 np0005548731 nova_compute[232433]: 2025-12-06 07:00:09.577 232437 DEBUG nova.virt.libvirt.driver [None req-fdecf097-41f0-4720-b43d-dba0eb17def6 56a80bccd06a46eb841b4a39bdc45f76 e2869bdde00d4ef8bea339be8805aa3f - - default default] [instance: 4fb7ec18-30a1-4bd1-8daa-fd0ac899b57e] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Dec  6 02:00:09 np0005548731 nova_compute[232433]: 2025-12-06 07:00:09.580 232437 INFO nova.virt.libvirt.driver [-] [instance: 4fb7ec18-30a1-4bd1-8daa-fd0ac899b57e] Instance spawned successfully.#033[00m
Dec  6 02:00:09 np0005548731 nova_compute[232433]: 2025-12-06 07:00:09.581 232437 DEBUG nova.virt.libvirt.driver [None req-fdecf097-41f0-4720-b43d-dba0eb17def6 56a80bccd06a46eb841b4a39bdc45f76 e2869bdde00d4ef8bea339be8805aa3f - - default default] [instance: 4fb7ec18-30a1-4bd1-8daa-fd0ac899b57e] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Dec  6 02:00:09 np0005548731 nova_compute[232433]: 2025-12-06 07:00:09.676 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 4fb7ec18-30a1-4bd1-8daa-fd0ac899b57e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:00:09 np0005548731 nova_compute[232433]: 2025-12-06 07:00:09.679 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 4fb7ec18-30a1-4bd1-8daa-fd0ac899b57e] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  6 02:00:09 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  6 02:00:09 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 02:00:09 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  6 02:00:09 np0005548731 nova_compute[232433]: 2025-12-06 07:00:09.692 232437 DEBUG nova.virt.libvirt.driver [None req-fdecf097-41f0-4720-b43d-dba0eb17def6 56a80bccd06a46eb841b4a39bdc45f76 e2869bdde00d4ef8bea339be8805aa3f - - default default] [instance: 4fb7ec18-30a1-4bd1-8daa-fd0ac899b57e] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 02:00:09 np0005548731 nova_compute[232433]: 2025-12-06 07:00:09.693 232437 DEBUG nova.virt.libvirt.driver [None req-fdecf097-41f0-4720-b43d-dba0eb17def6 56a80bccd06a46eb841b4a39bdc45f76 e2869bdde00d4ef8bea339be8805aa3f - - default default] [instance: 4fb7ec18-30a1-4bd1-8daa-fd0ac899b57e] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 02:00:09 np0005548731 nova_compute[232433]: 2025-12-06 07:00:09.693 232437 DEBUG nova.virt.libvirt.driver [None req-fdecf097-41f0-4720-b43d-dba0eb17def6 56a80bccd06a46eb841b4a39bdc45f76 e2869bdde00d4ef8bea339be8805aa3f - - default default] [instance: 4fb7ec18-30a1-4bd1-8daa-fd0ac899b57e] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 02:00:09 np0005548731 nova_compute[232433]: 2025-12-06 07:00:09.694 232437 DEBUG nova.virt.libvirt.driver [None req-fdecf097-41f0-4720-b43d-dba0eb17def6 56a80bccd06a46eb841b4a39bdc45f76 e2869bdde00d4ef8bea339be8805aa3f - - default default] [instance: 4fb7ec18-30a1-4bd1-8daa-fd0ac899b57e] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 02:00:09 np0005548731 nova_compute[232433]: 2025-12-06 07:00:09.694 232437 DEBUG nova.virt.libvirt.driver [None req-fdecf097-41f0-4720-b43d-dba0eb17def6 56a80bccd06a46eb841b4a39bdc45f76 e2869bdde00d4ef8bea339be8805aa3f - - default default] [instance: 4fb7ec18-30a1-4bd1-8daa-fd0ac899b57e] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 02:00:09 np0005548731 nova_compute[232433]: 2025-12-06 07:00:09.695 232437 DEBUG nova.virt.libvirt.driver [None req-fdecf097-41f0-4720-b43d-dba0eb17def6 56a80bccd06a46eb841b4a39bdc45f76 e2869bdde00d4ef8bea339be8805aa3f - - default default] [instance: 4fb7ec18-30a1-4bd1-8daa-fd0ac899b57e] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 02:00:09 np0005548731 nova_compute[232433]: 2025-12-06 07:00:09.734 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 4fb7ec18-30a1-4bd1-8daa-fd0ac899b57e] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  6 02:00:09 np0005548731 nova_compute[232433]: 2025-12-06 07:00:09.737 232437 DEBUG nova.virt.driver [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Emitting event <LifecycleEvent: 1765004409.5738142, 4fb7ec18-30a1-4bd1-8daa-fd0ac899b57e => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 02:00:09 np0005548731 nova_compute[232433]: 2025-12-06 07:00:09.738 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 4fb7ec18-30a1-4bd1-8daa-fd0ac899b57e] VM Paused (Lifecycle Event)#033[00m
Dec  6 02:00:09 np0005548731 nova_compute[232433]: 2025-12-06 07:00:09.770 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 4fb7ec18-30a1-4bd1-8daa-fd0ac899b57e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:00:09 np0005548731 nova_compute[232433]: 2025-12-06 07:00:09.774 232437 DEBUG nova.virt.driver [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Emitting event <LifecycleEvent: 1765004409.5768526, 4fb7ec18-30a1-4bd1-8daa-fd0ac899b57e => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 02:00:09 np0005548731 nova_compute[232433]: 2025-12-06 07:00:09.775 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 4fb7ec18-30a1-4bd1-8daa-fd0ac899b57e] VM Resumed (Lifecycle Event)#033[00m
Dec  6 02:00:09 np0005548731 nova_compute[232433]: 2025-12-06 07:00:09.780 232437 INFO nova.compute.manager [None req-fdecf097-41f0-4720-b43d-dba0eb17def6 56a80bccd06a46eb841b4a39bdc45f76 e2869bdde00d4ef8bea339be8805aa3f - - default default] [instance: 4fb7ec18-30a1-4bd1-8daa-fd0ac899b57e] Took 11.87 seconds to spawn the instance on the hypervisor.#033[00m
Dec  6 02:00:09 np0005548731 nova_compute[232433]: 2025-12-06 07:00:09.780 232437 DEBUG nova.compute.manager [None req-fdecf097-41f0-4720-b43d-dba0eb17def6 56a80bccd06a46eb841b4a39bdc45f76 e2869bdde00d4ef8bea339be8805aa3f - - default default] [instance: 4fb7ec18-30a1-4bd1-8daa-fd0ac899b57e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:00:09 np0005548731 nova_compute[232433]: 2025-12-06 07:00:09.791 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 4fb7ec18-30a1-4bd1-8daa-fd0ac899b57e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:00:09 np0005548731 nova_compute[232433]: 2025-12-06 07:00:09.794 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 4fb7ec18-30a1-4bd1-8daa-fd0ac899b57e] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  6 02:00:09 np0005548731 nova_compute[232433]: 2025-12-06 07:00:09.818 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 4fb7ec18-30a1-4bd1-8daa-fd0ac899b57e] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  6 02:00:09 np0005548731 nova_compute[232433]: 2025-12-06 07:00:09.872 232437 INFO nova.compute.manager [None req-fdecf097-41f0-4720-b43d-dba0eb17def6 56a80bccd06a46eb841b4a39bdc45f76 e2869bdde00d4ef8bea339be8805aa3f - - default default] [instance: 4fb7ec18-30a1-4bd1-8daa-fd0ac899b57e] Took 13.21 seconds to build instance.#033[00m
Dec  6 02:00:09 np0005548731 nova_compute[232433]: 2025-12-06 07:00:09.895 232437 DEBUG oslo_concurrency.lockutils [None req-fdecf097-41f0-4720-b43d-dba0eb17def6 56a80bccd06a46eb841b4a39bdc45f76 e2869bdde00d4ef8bea339be8805aa3f - - default default] Lock "4fb7ec18-30a1-4bd1-8daa-fd0ac899b57e" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.354s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:00:09 np0005548731 nova_compute[232433]: 2025-12-06 07:00:09.995 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:00:10 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:00:10 np0005548731 nova_compute[232433]: 2025-12-06 07:00:10.840 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:00:11 np0005548731 nova_compute[232433]: 2025-12-06 07:00:11.273 232437 DEBUG nova.compute.manager [req-809b82ea-6c2d-45f3-aed5-1f11e5a4a4b3 req-51291f5f-1042-4899-a692-134609e2ae2c 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 4fb7ec18-30a1-4bd1-8daa-fd0ac899b57e] Received event network-vif-plugged-ee2e3cc0-250d-488e-8216-a6ff00f323fa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:00:11 np0005548731 nova_compute[232433]: 2025-12-06 07:00:11.273 232437 DEBUG oslo_concurrency.lockutils [req-809b82ea-6c2d-45f3-aed5-1f11e5a4a4b3 req-51291f5f-1042-4899-a692-134609e2ae2c 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "4fb7ec18-30a1-4bd1-8daa-fd0ac899b57e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:00:11 np0005548731 nova_compute[232433]: 2025-12-06 07:00:11.274 232437 DEBUG oslo_concurrency.lockutils [req-809b82ea-6c2d-45f3-aed5-1f11e5a4a4b3 req-51291f5f-1042-4899-a692-134609e2ae2c 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "4fb7ec18-30a1-4bd1-8daa-fd0ac899b57e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:00:11 np0005548731 nova_compute[232433]: 2025-12-06 07:00:11.274 232437 DEBUG oslo_concurrency.lockutils [req-809b82ea-6c2d-45f3-aed5-1f11e5a4a4b3 req-51291f5f-1042-4899-a692-134609e2ae2c 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "4fb7ec18-30a1-4bd1-8daa-fd0ac899b57e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:00:11 np0005548731 nova_compute[232433]: 2025-12-06 07:00:11.274 232437 DEBUG nova.compute.manager [req-809b82ea-6c2d-45f3-aed5-1f11e5a4a4b3 req-51291f5f-1042-4899-a692-134609e2ae2c 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 4fb7ec18-30a1-4bd1-8daa-fd0ac899b57e] No waiting events found dispatching network-vif-plugged-ee2e3cc0-250d-488e-8216-a6ff00f323fa pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 02:00:11 np0005548731 nova_compute[232433]: 2025-12-06 07:00:11.274 232437 WARNING nova.compute.manager [req-809b82ea-6c2d-45f3-aed5-1f11e5a4a4b3 req-51291f5f-1042-4899-a692-134609e2ae2c 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 4fb7ec18-30a1-4bd1-8daa-fd0ac899b57e] Received unexpected event network-vif-plugged-ee2e3cc0-250d-488e-8216-a6ff00f323fa for instance with vm_state active and task_state None.#033[00m
Dec  6 02:00:11 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:00:11 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 02:00:11 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:00:11.358 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 02:00:11 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:00:11 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:00:11 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:00:11.466 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:00:11 np0005548731 nova_compute[232433]: 2025-12-06 07:00:11.674 232437 DEBUG oslo_concurrency.lockutils [None req-b1fd0da5-b333-4628-8281-83852c8c87de 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Acquiring lock "refresh_cache-2341fd69-a672-42e2-834b-0f7b8269c7ef" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 02:00:11 np0005548731 nova_compute[232433]: 2025-12-06 07:00:11.675 232437 DEBUG oslo_concurrency.lockutils [None req-b1fd0da5-b333-4628-8281-83852c8c87de 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Acquired lock "refresh_cache-2341fd69-a672-42e2-834b-0f7b8269c7ef" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 02:00:11 np0005548731 nova_compute[232433]: 2025-12-06 07:00:11.675 232437 DEBUG nova.network.neutron [None req-b1fd0da5-b333-4628-8281-83852c8c87de 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] [instance: 2341fd69-a672-42e2-834b-0f7b8269c7ef] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec  6 02:00:11 np0005548731 nova_compute[232433]: 2025-12-06 07:00:11.827 232437 DEBUG nova.network.neutron [None req-b1fd0da5-b333-4628-8281-83852c8c87de 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] [instance: 2341fd69-a672-42e2-834b-0f7b8269c7ef] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec  6 02:00:12 np0005548731 nova_compute[232433]: 2025-12-06 07:00:12.180 232437 DEBUG nova.network.neutron [None req-b1fd0da5-b333-4628-8281-83852c8c87de 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] [instance: 2341fd69-a672-42e2-834b-0f7b8269c7ef] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 02:00:12 np0005548731 nova_compute[232433]: 2025-12-06 07:00:12.200 232437 DEBUG oslo_concurrency.lockutils [None req-b1fd0da5-b333-4628-8281-83852c8c87de 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Releasing lock "refresh_cache-2341fd69-a672-42e2-834b-0f7b8269c7ef" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 02:00:12 np0005548731 nova_compute[232433]: 2025-12-06 07:00:12.314 232437 DEBUG nova.virt.libvirt.driver [None req-b1fd0da5-b333-4628-8281-83852c8c87de 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] [instance: 2341fd69-a672-42e2-834b-0f7b8269c7ef] Starting finish_migration finish_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11698#033[00m
Dec  6 02:00:12 np0005548731 nova_compute[232433]: 2025-12-06 07:00:12.317 232437 DEBUG nova.virt.libvirt.driver [None req-b1fd0da5-b333-4628-8281-83852c8c87de 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] [instance: 2341fd69-a672-42e2-834b-0f7b8269c7ef] Instance directory exists: not creating _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4719#033[00m
Dec  6 02:00:12 np0005548731 nova_compute[232433]: 2025-12-06 07:00:12.317 232437 INFO nova.virt.libvirt.driver [None req-b1fd0da5-b333-4628-8281-83852c8c87de 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] [instance: 2341fd69-a672-42e2-834b-0f7b8269c7ef] Creating image(s)#033[00m
Dec  6 02:00:12 np0005548731 nova_compute[232433]: 2025-12-06 07:00:12.366 232437 DEBUG nova.storage.rbd_utils [None req-b1fd0da5-b333-4628-8281-83852c8c87de 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] creating snapshot(nova-resize) on rbd image(2341fd69-a672-42e2-834b-0f7b8269c7ef_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Dec  6 02:00:12 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e160 e160: 3 total, 3 up, 3 in
Dec  6 02:00:12 np0005548731 nova_compute[232433]: 2025-12-06 07:00:12.588 232437 DEBUG nova.objects.instance [None req-b1fd0da5-b333-4628-8281-83852c8c87de 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Lazy-loading 'trusted_certs' on Instance uuid 2341fd69-a672-42e2-834b-0f7b8269c7ef obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:00:12 np0005548731 nova_compute[232433]: 2025-12-06 07:00:12.725 232437 DEBUG nova.virt.libvirt.driver [None req-b1fd0da5-b333-4628-8281-83852c8c87de 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] [instance: 2341fd69-a672-42e2-834b-0f7b8269c7ef] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859#033[00m
Dec  6 02:00:12 np0005548731 nova_compute[232433]: 2025-12-06 07:00:12.726 232437 DEBUG nova.virt.libvirt.driver [None req-b1fd0da5-b333-4628-8281-83852c8c87de 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] [instance: 2341fd69-a672-42e2-834b-0f7b8269c7ef] Ensure instance console log exists: /var/lib/nova/instances/2341fd69-a672-42e2-834b-0f7b8269c7ef/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec  6 02:00:12 np0005548731 nova_compute[232433]: 2025-12-06 07:00:12.726 232437 DEBUG oslo_concurrency.lockutils [None req-b1fd0da5-b333-4628-8281-83852c8c87de 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:00:12 np0005548731 nova_compute[232433]: 2025-12-06 07:00:12.726 232437 DEBUG oslo_concurrency.lockutils [None req-b1fd0da5-b333-4628-8281-83852c8c87de 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:00:12 np0005548731 nova_compute[232433]: 2025-12-06 07:00:12.727 232437 DEBUG oslo_concurrency.lockutils [None req-b1fd0da5-b333-4628-8281-83852c8c87de 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:00:12 np0005548731 nova_compute[232433]: 2025-12-06 07:00:12.730 232437 DEBUG nova.virt.libvirt.driver [None req-b1fd0da5-b333-4628-8281-83852c8c87de 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] [instance: 2341fd69-a672-42e2-834b-0f7b8269c7ef] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'device_type': 'disk', 'encryption_secret_uuid': None, 'disk_bus': 'virtio', 'boot_index': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'encryption_options': None, 'size': 0, 'encrypted': False, 'image_id': '6efab05d-c7cf-4770-a5c3-c806a2739063'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec  6 02:00:12 np0005548731 nova_compute[232433]: 2025-12-06 07:00:12.737 232437 WARNING nova.virt.libvirt.driver [None req-b1fd0da5-b333-4628-8281-83852c8c87de 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  6 02:00:12 np0005548731 nova_compute[232433]: 2025-12-06 07:00:12.744 232437 DEBUG nova.virt.libvirt.host [None req-b1fd0da5-b333-4628-8281-83852c8c87de 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec  6 02:00:12 np0005548731 nova_compute[232433]: 2025-12-06 07:00:12.745 232437 DEBUG nova.virt.libvirt.host [None req-b1fd0da5-b333-4628-8281-83852c8c87de 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec  6 02:00:12 np0005548731 nova_compute[232433]: 2025-12-06 07:00:12.749 232437 DEBUG nova.virt.libvirt.host [None req-b1fd0da5-b333-4628-8281-83852c8c87de 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec  6 02:00:12 np0005548731 nova_compute[232433]: 2025-12-06 07:00:12.749 232437 DEBUG nova.virt.libvirt.host [None req-b1fd0da5-b333-4628-8281-83852c8c87de 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec  6 02:00:12 np0005548731 nova_compute[232433]: 2025-12-06 07:00:12.750 232437 DEBUG nova.virt.libvirt.driver [None req-b1fd0da5-b333-4628-8281-83852c8c87de 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec  6 02:00:12 np0005548731 nova_compute[232433]: 2025-12-06 07:00:12.750 232437 DEBUG nova.virt.hardware [None req-b1fd0da5-b333-4628-8281-83852c8c87de 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-06T06:56:24Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fb97f55a-36c0-42f2-8156-c1b04eb23dd0',id=2,is_public=True,memory_mb=192,name='m1.micro',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec  6 02:00:12 np0005548731 nova_compute[232433]: 2025-12-06 07:00:12.751 232437 DEBUG nova.virt.hardware [None req-b1fd0da5-b333-4628-8281-83852c8c87de 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec  6 02:00:12 np0005548731 nova_compute[232433]: 2025-12-06 07:00:12.751 232437 DEBUG nova.virt.hardware [None req-b1fd0da5-b333-4628-8281-83852c8c87de 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec  6 02:00:12 np0005548731 nova_compute[232433]: 2025-12-06 07:00:12.751 232437 DEBUG nova.virt.hardware [None req-b1fd0da5-b333-4628-8281-83852c8c87de 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec  6 02:00:12 np0005548731 nova_compute[232433]: 2025-12-06 07:00:12.751 232437 DEBUG nova.virt.hardware [None req-b1fd0da5-b333-4628-8281-83852c8c87de 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec  6 02:00:12 np0005548731 nova_compute[232433]: 2025-12-06 07:00:12.752 232437 DEBUG nova.virt.hardware [None req-b1fd0da5-b333-4628-8281-83852c8c87de 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec  6 02:00:12 np0005548731 nova_compute[232433]: 2025-12-06 07:00:12.752 232437 DEBUG nova.virt.hardware [None req-b1fd0da5-b333-4628-8281-83852c8c87de 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec  6 02:00:12 np0005548731 nova_compute[232433]: 2025-12-06 07:00:12.752 232437 DEBUG nova.virt.hardware [None req-b1fd0da5-b333-4628-8281-83852c8c87de 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec  6 02:00:12 np0005548731 nova_compute[232433]: 2025-12-06 07:00:12.752 232437 DEBUG nova.virt.hardware [None req-b1fd0da5-b333-4628-8281-83852c8c87de 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec  6 02:00:12 np0005548731 nova_compute[232433]: 2025-12-06 07:00:12.753 232437 DEBUG nova.virt.hardware [None req-b1fd0da5-b333-4628-8281-83852c8c87de 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec  6 02:00:12 np0005548731 nova_compute[232433]: 2025-12-06 07:00:12.753 232437 DEBUG nova.virt.hardware [None req-b1fd0da5-b333-4628-8281-83852c8c87de 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec  6 02:00:12 np0005548731 nova_compute[232433]: 2025-12-06 07:00:12.753 232437 DEBUG nova.objects.instance [None req-b1fd0da5-b333-4628-8281-83852c8c87de 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 2341fd69-a672-42e2-834b-0f7b8269c7ef obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:00:12 np0005548731 nova_compute[232433]: 2025-12-06 07:00:12.776 232437 DEBUG oslo_concurrency.processutils [None req-b1fd0da5-b333-4628-8281-83852c8c87de 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:00:13 np0005548731 nova_compute[232433]: 2025-12-06 07:00:13.173 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:00:13 np0005548731 NetworkManager[49182]: <info>  [1765004413.1744] manager: (patch-br-int-to-provnet-9e78c1a1-68f4-477a-abaa-13a98bde06e5): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/39)
Dec  6 02:00:13 np0005548731 NetworkManager[49182]: <info>  [1765004413.1751] manager: (patch-provnet-9e78c1a1-68f4-477a-abaa-13a98bde06e5-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/40)
Dec  6 02:00:13 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  6 02:00:13 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1111738318' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  6 02:00:13 np0005548731 nova_compute[232433]: 2025-12-06 07:00:13.220 232437 DEBUG oslo_concurrency.processutils [None req-b1fd0da5-b333-4628-8281-83852c8c87de 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.445s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:00:13 np0005548731 ovn_controller[133927]: 2025-12-06T07:00:13Z|00050|binding|INFO|Releasing lport 552234cc-df01-471d-8a4b-772bc37af92a from this chassis (sb_readonly=0)
Dec  6 02:00:13 np0005548731 nova_compute[232433]: 2025-12-06 07:00:13.265 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:00:13 np0005548731 nova_compute[232433]: 2025-12-06 07:00:13.275 232437 DEBUG oslo_concurrency.processutils [None req-b1fd0da5-b333-4628-8281-83852c8c87de 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:00:13 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:00:13 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:00:13 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:00:13.362 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:00:13 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:00:13 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:00:13 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:00:13.467 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:00:13 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  6 02:00:13 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3130597809' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  6 02:00:13 np0005548731 nova_compute[232433]: 2025-12-06 07:00:13.804 232437 DEBUG oslo_concurrency.processutils [None req-b1fd0da5-b333-4628-8281-83852c8c87de 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.529s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:00:13 np0005548731 nova_compute[232433]: 2025-12-06 07:00:13.808 232437 DEBUG nova.virt.libvirt.driver [None req-b1fd0da5-b333-4628-8281-83852c8c87de 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] [instance: 2341fd69-a672-42e2-834b-0f7b8269c7ef] End _get_guest_xml xml=<domain type="kvm">
Dec  6 02:00:13 np0005548731 nova_compute[232433]:  <uuid>2341fd69-a672-42e2-834b-0f7b8269c7ef</uuid>
Dec  6 02:00:13 np0005548731 nova_compute[232433]:  <name>instance-0000000d</name>
Dec  6 02:00:13 np0005548731 nova_compute[232433]:  <memory>196608</memory>
Dec  6 02:00:13 np0005548731 nova_compute[232433]:  <vcpu>1</vcpu>
Dec  6 02:00:13 np0005548731 nova_compute[232433]:  <metadata>
Dec  6 02:00:13 np0005548731 nova_compute[232433]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec  6 02:00:13 np0005548731 nova_compute[232433]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec  6 02:00:13 np0005548731 nova_compute[232433]:      <nova:name>tempest-MigrationsAdminTest-server-524387245</nova:name>
Dec  6 02:00:13 np0005548731 nova_compute[232433]:      <nova:creationTime>2025-12-06 07:00:12</nova:creationTime>
Dec  6 02:00:13 np0005548731 nova_compute[232433]:      <nova:flavor name="m1.micro">
Dec  6 02:00:13 np0005548731 nova_compute[232433]:        <nova:memory>192</nova:memory>
Dec  6 02:00:13 np0005548731 nova_compute[232433]:        <nova:disk>1</nova:disk>
Dec  6 02:00:13 np0005548731 nova_compute[232433]:        <nova:swap>0</nova:swap>
Dec  6 02:00:13 np0005548731 nova_compute[232433]:        <nova:ephemeral>0</nova:ephemeral>
Dec  6 02:00:13 np0005548731 nova_compute[232433]:        <nova:vcpus>1</nova:vcpus>
Dec  6 02:00:13 np0005548731 nova_compute[232433]:      </nova:flavor>
Dec  6 02:00:13 np0005548731 nova_compute[232433]:      <nova:owner>
Dec  6 02:00:13 np0005548731 nova_compute[232433]:        <nova:user uuid="538aa592cfb04958ab11223ed2d98106">tempest-MigrationsAdminTest-541331030-project-member</nova:user>
Dec  6 02:00:13 np0005548731 nova_compute[232433]:        <nova:project uuid="fc6c493097a84d069d178020ca398a25">tempest-MigrationsAdminTest-541331030</nova:project>
Dec  6 02:00:13 np0005548731 nova_compute[232433]:      </nova:owner>
Dec  6 02:00:13 np0005548731 nova_compute[232433]:      <nova:root type="image" uuid="6efab05d-c7cf-4770-a5c3-c806a2739063"/>
Dec  6 02:00:13 np0005548731 nova_compute[232433]:      <nova:ports/>
Dec  6 02:00:13 np0005548731 nova_compute[232433]:    </nova:instance>
Dec  6 02:00:13 np0005548731 nova_compute[232433]:  </metadata>
Dec  6 02:00:13 np0005548731 nova_compute[232433]:  <sysinfo type="smbios">
Dec  6 02:00:13 np0005548731 nova_compute[232433]:    <system>
Dec  6 02:00:13 np0005548731 nova_compute[232433]:      <entry name="manufacturer">RDO</entry>
Dec  6 02:00:13 np0005548731 nova_compute[232433]:      <entry name="product">OpenStack Compute</entry>
Dec  6 02:00:13 np0005548731 nova_compute[232433]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec  6 02:00:13 np0005548731 nova_compute[232433]:      <entry name="serial">2341fd69-a672-42e2-834b-0f7b8269c7ef</entry>
Dec  6 02:00:13 np0005548731 nova_compute[232433]:      <entry name="uuid">2341fd69-a672-42e2-834b-0f7b8269c7ef</entry>
Dec  6 02:00:13 np0005548731 nova_compute[232433]:      <entry name="family">Virtual Machine</entry>
Dec  6 02:00:13 np0005548731 nova_compute[232433]:    </system>
Dec  6 02:00:13 np0005548731 nova_compute[232433]:  </sysinfo>
Dec  6 02:00:13 np0005548731 nova_compute[232433]:  <os>
Dec  6 02:00:13 np0005548731 nova_compute[232433]:    <type arch="x86_64" machine="q35">hvm</type>
Dec  6 02:00:13 np0005548731 nova_compute[232433]:    <boot dev="hd"/>
Dec  6 02:00:13 np0005548731 nova_compute[232433]:    <smbios mode="sysinfo"/>
Dec  6 02:00:13 np0005548731 nova_compute[232433]:  </os>
Dec  6 02:00:13 np0005548731 nova_compute[232433]:  <features>
Dec  6 02:00:13 np0005548731 nova_compute[232433]:    <acpi/>
Dec  6 02:00:13 np0005548731 nova_compute[232433]:    <apic/>
Dec  6 02:00:13 np0005548731 nova_compute[232433]:    <vmcoreinfo/>
Dec  6 02:00:13 np0005548731 nova_compute[232433]:  </features>
Dec  6 02:00:13 np0005548731 nova_compute[232433]:  <clock offset="utc">
Dec  6 02:00:13 np0005548731 nova_compute[232433]:    <timer name="pit" tickpolicy="delay"/>
Dec  6 02:00:13 np0005548731 nova_compute[232433]:    <timer name="rtc" tickpolicy="catchup"/>
Dec  6 02:00:13 np0005548731 nova_compute[232433]:    <timer name="hpet" present="no"/>
Dec  6 02:00:13 np0005548731 nova_compute[232433]:  </clock>
Dec  6 02:00:13 np0005548731 nova_compute[232433]:  <cpu mode="custom" match="exact">
Dec  6 02:00:13 np0005548731 nova_compute[232433]:    <model>Nehalem</model>
Dec  6 02:00:13 np0005548731 nova_compute[232433]:    <topology sockets="1" cores="1" threads="1"/>
Dec  6 02:00:13 np0005548731 nova_compute[232433]:  </cpu>
Dec  6 02:00:13 np0005548731 nova_compute[232433]:  <devices>
Dec  6 02:00:13 np0005548731 nova_compute[232433]:    <disk type="network" device="disk">
Dec  6 02:00:13 np0005548731 nova_compute[232433]:      <driver type="raw" cache="none"/>
Dec  6 02:00:13 np0005548731 nova_compute[232433]:      <source protocol="rbd" name="vms/2341fd69-a672-42e2-834b-0f7b8269c7ef_disk">
Dec  6 02:00:13 np0005548731 nova_compute[232433]:        <host name="192.168.122.100" port="6789"/>
Dec  6 02:00:13 np0005548731 nova_compute[232433]:        <host name="192.168.122.102" port="6789"/>
Dec  6 02:00:13 np0005548731 nova_compute[232433]:        <host name="192.168.122.101" port="6789"/>
Dec  6 02:00:13 np0005548731 nova_compute[232433]:      </source>
Dec  6 02:00:13 np0005548731 nova_compute[232433]:      <auth username="openstack">
Dec  6 02:00:13 np0005548731 nova_compute[232433]:        <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec  6 02:00:13 np0005548731 nova_compute[232433]:      </auth>
Dec  6 02:00:13 np0005548731 nova_compute[232433]:      <target dev="vda" bus="virtio"/>
Dec  6 02:00:13 np0005548731 nova_compute[232433]:    </disk>
Dec  6 02:00:13 np0005548731 nova_compute[232433]:    <disk type="network" device="cdrom">
Dec  6 02:00:13 np0005548731 nova_compute[232433]:      <driver type="raw" cache="none"/>
Dec  6 02:00:13 np0005548731 nova_compute[232433]:      <source protocol="rbd" name="vms/2341fd69-a672-42e2-834b-0f7b8269c7ef_disk.config">
Dec  6 02:00:13 np0005548731 nova_compute[232433]:        <host name="192.168.122.100" port="6789"/>
Dec  6 02:00:13 np0005548731 nova_compute[232433]:        <host name="192.168.122.102" port="6789"/>
Dec  6 02:00:13 np0005548731 nova_compute[232433]:        <host name="192.168.122.101" port="6789"/>
Dec  6 02:00:13 np0005548731 nova_compute[232433]:      </source>
Dec  6 02:00:13 np0005548731 nova_compute[232433]:      <auth username="openstack">
Dec  6 02:00:13 np0005548731 nova_compute[232433]:        <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec  6 02:00:13 np0005548731 nova_compute[232433]:      </auth>
Dec  6 02:00:13 np0005548731 nova_compute[232433]:      <target dev="sda" bus="sata"/>
Dec  6 02:00:13 np0005548731 nova_compute[232433]:    </disk>
Dec  6 02:00:13 np0005548731 nova_compute[232433]:    <serial type="pty">
Dec  6 02:00:13 np0005548731 nova_compute[232433]:      <log file="/var/lib/nova/instances/2341fd69-a672-42e2-834b-0f7b8269c7ef/console.log" append="off"/>
Dec  6 02:00:13 np0005548731 nova_compute[232433]:    </serial>
Dec  6 02:00:13 np0005548731 nova_compute[232433]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec  6 02:00:13 np0005548731 nova_compute[232433]:    <video>
Dec  6 02:00:13 np0005548731 nova_compute[232433]:      <model type="virtio"/>
Dec  6 02:00:13 np0005548731 nova_compute[232433]:    </video>
Dec  6 02:00:13 np0005548731 nova_compute[232433]:    <input type="tablet" bus="usb"/>
Dec  6 02:00:13 np0005548731 nova_compute[232433]:    <rng model="virtio">
Dec  6 02:00:13 np0005548731 nova_compute[232433]:      <backend model="random">/dev/urandom</backend>
Dec  6 02:00:13 np0005548731 nova_compute[232433]:    </rng>
Dec  6 02:00:13 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root"/>
Dec  6 02:00:13 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:00:13 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:00:13 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:00:13 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:00:13 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:00:13 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:00:13 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:00:13 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:00:13 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:00:13 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:00:13 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:00:13 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:00:13 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:00:13 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:00:13 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:00:13 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:00:13 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:00:13 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:00:13 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:00:13 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:00:13 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:00:13 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:00:13 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:00:13 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:00:13 np0005548731 nova_compute[232433]:    <controller type="usb" index="0"/>
Dec  6 02:00:13 np0005548731 nova_compute[232433]:    <memballoon model="virtio">
Dec  6 02:00:13 np0005548731 nova_compute[232433]:      <stats period="10"/>
Dec  6 02:00:13 np0005548731 nova_compute[232433]:    </memballoon>
Dec  6 02:00:13 np0005548731 nova_compute[232433]:  </devices>
Dec  6 02:00:13 np0005548731 nova_compute[232433]: </domain>
Dec  6 02:00:13 np0005548731 nova_compute[232433]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec  6 02:00:13 np0005548731 nova_compute[232433]: 2025-12-06 07:00:13.879 232437 DEBUG nova.virt.libvirt.driver [None req-b1fd0da5-b333-4628-8281-83852c8c87de 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  6 02:00:13 np0005548731 nova_compute[232433]: 2025-12-06 07:00:13.880 232437 DEBUG nova.virt.libvirt.driver [None req-b1fd0da5-b333-4628-8281-83852c8c87de 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  6 02:00:13 np0005548731 nova_compute[232433]: 2025-12-06 07:00:13.880 232437 INFO nova.virt.libvirt.driver [None req-b1fd0da5-b333-4628-8281-83852c8c87de 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] [instance: 2341fd69-a672-42e2-834b-0f7b8269c7ef] Using config drive#033[00m
Dec  6 02:00:13 np0005548731 systemd-machined[195355]: New machine qemu-6-instance-0000000d.
Dec  6 02:00:14 np0005548731 systemd[1]: Started Virtual Machine qemu-6-instance-0000000d.
Dec  6 02:00:14 np0005548731 nova_compute[232433]: 2025-12-06 07:00:14.214 232437 DEBUG nova.compute.manager [req-789ba80b-72b8-46b9-9cb3-8b536de91bb2 req-34270ea2-de87-4640-9ad1-e83f82a233c0 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 4fb7ec18-30a1-4bd1-8daa-fd0ac899b57e] Received event network-changed-ee2e3cc0-250d-488e-8216-a6ff00f323fa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:00:14 np0005548731 nova_compute[232433]: 2025-12-06 07:00:14.215 232437 DEBUG nova.compute.manager [req-789ba80b-72b8-46b9-9cb3-8b536de91bb2 req-34270ea2-de87-4640-9ad1-e83f82a233c0 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 4fb7ec18-30a1-4bd1-8daa-fd0ac899b57e] Refreshing instance network info cache due to event network-changed-ee2e3cc0-250d-488e-8216-a6ff00f323fa. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  6 02:00:14 np0005548731 nova_compute[232433]: 2025-12-06 07:00:14.215 232437 DEBUG oslo_concurrency.lockutils [req-789ba80b-72b8-46b9-9cb3-8b536de91bb2 req-34270ea2-de87-4640-9ad1-e83f82a233c0 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-4fb7ec18-30a1-4bd1-8daa-fd0ac899b57e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 02:00:14 np0005548731 nova_compute[232433]: 2025-12-06 07:00:14.215 232437 DEBUG oslo_concurrency.lockutils [req-789ba80b-72b8-46b9-9cb3-8b536de91bb2 req-34270ea2-de87-4640-9ad1-e83f82a233c0 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-4fb7ec18-30a1-4bd1-8daa-fd0ac899b57e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 02:00:14 np0005548731 nova_compute[232433]: 2025-12-06 07:00:14.215 232437 DEBUG nova.network.neutron [req-789ba80b-72b8-46b9-9cb3-8b536de91bb2 req-34270ea2-de87-4640-9ad1-e83f82a233c0 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 4fb7ec18-30a1-4bd1-8daa-fd0ac899b57e] Refreshing network info cache for port ee2e3cc0-250d-488e-8216-a6ff00f323fa _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  6 02:00:14 np0005548731 nova_compute[232433]: 2025-12-06 07:00:14.892 232437 DEBUG nova.virt.driver [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Emitting event <LifecycleEvent: 1765004414.8921888, 2341fd69-a672-42e2-834b-0f7b8269c7ef => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 02:00:14 np0005548731 nova_compute[232433]: 2025-12-06 07:00:14.892 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 2341fd69-a672-42e2-834b-0f7b8269c7ef] VM Resumed (Lifecycle Event)#033[00m
Dec  6 02:00:14 np0005548731 nova_compute[232433]: 2025-12-06 07:00:14.910 232437 DEBUG nova.compute.manager [None req-b1fd0da5-b333-4628-8281-83852c8c87de 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] [instance: 2341fd69-a672-42e2-834b-0f7b8269c7ef] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec  6 02:00:14 np0005548731 nova_compute[232433]: 2025-12-06 07:00:14.914 232437 INFO nova.virt.libvirt.driver [-] [instance: 2341fd69-a672-42e2-834b-0f7b8269c7ef] Instance running successfully.#033[00m
Dec  6 02:00:14 np0005548731 virtqemud[232080]: argument unsupported: QEMU guest agent is not configured
Dec  6 02:00:14 np0005548731 nova_compute[232433]: 2025-12-06 07:00:14.917 232437 DEBUG nova.virt.libvirt.guest [None req-b1fd0da5-b333-4628-8281-83852c8c87de 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] [instance: 2341fd69-a672-42e2-834b-0f7b8269c7ef] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200#033[00m
Dec  6 02:00:14 np0005548731 nova_compute[232433]: 2025-12-06 07:00:14.917 232437 DEBUG nova.virt.libvirt.driver [None req-b1fd0da5-b333-4628-8281-83852c8c87de 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] [instance: 2341fd69-a672-42e2-834b-0f7b8269c7ef] finish_migration finished successfully. finish_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11793#033[00m
Dec  6 02:00:14 np0005548731 nova_compute[232433]: 2025-12-06 07:00:14.922 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 2341fd69-a672-42e2-834b-0f7b8269c7ef] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:00:14 np0005548731 nova_compute[232433]: 2025-12-06 07:00:14.927 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 2341fd69-a672-42e2-834b-0f7b8269c7ef] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: resize_finish, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  6 02:00:14 np0005548731 nova_compute[232433]: 2025-12-06 07:00:14.957 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 2341fd69-a672-42e2-834b-0f7b8269c7ef] During sync_power_state the instance has a pending task (resize_finish). Skip.#033[00m
Dec  6 02:00:14 np0005548731 nova_compute[232433]: 2025-12-06 07:00:14.958 232437 DEBUG nova.virt.driver [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Emitting event <LifecycleEvent: 1765004414.909261, 2341fd69-a672-42e2-834b-0f7b8269c7ef => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 02:00:14 np0005548731 nova_compute[232433]: 2025-12-06 07:00:14.958 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 2341fd69-a672-42e2-834b-0f7b8269c7ef] VM Started (Lifecycle Event)#033[00m
Dec  6 02:00:14 np0005548731 nova_compute[232433]: 2025-12-06 07:00:14.998 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:00:15 np0005548731 nova_compute[232433]: 2025-12-06 07:00:15.004 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 2341fd69-a672-42e2-834b-0f7b8269c7ef] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:00:15 np0005548731 nova_compute[232433]: 2025-12-06 07:00:15.008 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 2341fd69-a672-42e2-834b-0f7b8269c7ef] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: resize_finish, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  6 02:00:15 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:00:15 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:00:15 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 02:00:15 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:00:15.364 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 02:00:15 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:00:15 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:00:15 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:00:15.469 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:00:15 np0005548731 nova_compute[232433]: 2025-12-06 07:00:15.842 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:00:16 np0005548731 nova_compute[232433]: 2025-12-06 07:00:16.731 232437 DEBUG nova.network.neutron [req-789ba80b-72b8-46b9-9cb3-8b536de91bb2 req-34270ea2-de87-4640-9ad1-e83f82a233c0 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 4fb7ec18-30a1-4bd1-8daa-fd0ac899b57e] Updated VIF entry in instance network info cache for port ee2e3cc0-250d-488e-8216-a6ff00f323fa. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  6 02:00:16 np0005548731 nova_compute[232433]: 2025-12-06 07:00:16.732 232437 DEBUG nova.network.neutron [req-789ba80b-72b8-46b9-9cb3-8b536de91bb2 req-34270ea2-de87-4640-9ad1-e83f82a233c0 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 4fb7ec18-30a1-4bd1-8daa-fd0ac899b57e] Updating instance_info_cache with network_info: [{"id": "ee2e3cc0-250d-488e-8216-a6ff00f323fa", "address": "fa:16:3e:94:fe:b1", "network": {"id": "a0468f21-498f-424d-a04a-d011b4adc72e", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-40054798-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.193", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e2869bdde00d4ef8bea339be8805aa3f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapee2e3cc0-25", "ovs_interfaceid": "ee2e3cc0-250d-488e-8216-a6ff00f323fa", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 02:00:16 np0005548731 nova_compute[232433]: 2025-12-06 07:00:16.759 232437 DEBUG oslo_concurrency.lockutils [req-789ba80b-72b8-46b9-9cb3-8b536de91bb2 req-34270ea2-de87-4640-9ad1-e83f82a233c0 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-4fb7ec18-30a1-4bd1-8daa-fd0ac899b57e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 02:00:17 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:00:17 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 02:00:17 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:00:17.367 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 02:00:17 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:00:17 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:00:17 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:00:17.472 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:00:18 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 02:00:18 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 02:00:18 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e161 e161: 3 total, 3 up, 3 in
Dec  6 02:00:19 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:00:19 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 02:00:19 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:00:19.370 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 02:00:19 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:00:19 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:00:19 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:00:19.510 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:00:20 np0005548731 nova_compute[232433]: 2025-12-06 07:00:20.001 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:00:20 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:00:20 np0005548731 nova_compute[232433]: 2025-12-06 07:00:20.844 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:00:21 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:00:21 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:00:21 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:00:21.373 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:00:21 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:00:21 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:00:21 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:00:21.512 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:00:23 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:00:23 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:00:23 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:00:23.377 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:00:23 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:00:23 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 02:00:23 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:00:23.514 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 02:00:23 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:00:23.569 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=6, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:ec:b3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '32:72:e7:89:e0:7d'}, ipsec=False) old=SB_Global(nb_cfg=5) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 02:00:23 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:00:23.571 143965 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec  6 02:00:23 np0005548731 nova_compute[232433]: 2025-12-06 07:00:23.570 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:00:25 np0005548731 nova_compute[232433]: 2025-12-06 07:00:25.004 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:00:25 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:00:25 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:00:25 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:00:25 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:00:25.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:00:25 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:00:25 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:00:25 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:00:25.516 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:00:25 np0005548731 nova_compute[232433]: 2025-12-06 07:00:25.845 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:00:26 np0005548731 ovn_controller[133927]: 2025-12-06T07:00:26Z|00006|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:94:fe:b1 10.100.0.12
Dec  6 02:00:26 np0005548731 ovn_controller[133927]: 2025-12-06T07:00:26Z|00007|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:94:fe:b1 10.100.0.12
Dec  6 02:00:26 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:00:26.572 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=9f96b960-b4f2-40bd-ae99-08121f5e8b78, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '6'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:00:27 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:00:27 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:00:27 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:00:27.384 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:00:27 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:00:27 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:00:27 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:00:27.519 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:00:27 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e162 e162: 3 total, 3 up, 3 in
Dec  6 02:00:27 np0005548731 podman[239643]: 2025-12-06 07:00:27.93363633 +0000 UTC m=+0.073321444 container health_status ae4fd08cdec31d6ae2a6b91ffff7deb6ca5a8375cb52ee4b685cc6f25796824e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=multipathd, io.buildah.version=1.41.3)
Dec  6 02:00:27 np0005548731 podman[239641]: 2025-12-06 07:00:27.953738795 +0000 UTC m=+0.092985858 container health_status 26c2ce9bdb83c6db5ccad7f5b2a7cb4e9354ab0235ad2f942ea42c8feda2142b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  6 02:00:27 np0005548731 podman[239642]: 2025-12-06 07:00:27.96055774 +0000 UTC m=+0.100042660 container health_status a118c3becf56cfbcae208065771ebf10a65162bb7b96389971293e534823fb78 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, managed_by=edpm_ansible)
Dec  6 02:00:29 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:00:29 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:00:29 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:00:29.387 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:00:29 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:00:29 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 02:00:29 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:00:29.521 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 02:00:30 np0005548731 nova_compute[232433]: 2025-12-06 07:00:30.007 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:00:30 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:00:30 np0005548731 ceph-osd[80147]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #44. Immutable memtables: 1.
Dec  6 02:00:30 np0005548731 nova_compute[232433]: 2025-12-06 07:00:30.847 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:00:31 np0005548731 ceph-osd[80147]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec  6 02:00:31 np0005548731 ceph-osd[80147]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1802.4 total, 600.0 interval#012Cumulative writes: 9661 writes, 39K keys, 9661 commit groups, 1.0 writes per commit group, ingest: 0.04 GB, 0.02 MB/s#012Cumulative WAL: 9661 writes, 2471 syncs, 3.91 writes per sync, written: 0.04 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 4257 writes, 16K keys, 4257 commit groups, 1.0 writes per commit group, ingest: 20.97 MB, 0.03 MB/s#012Interval WAL: 4257 writes, 1550 syncs, 2.75 writes per sync, written: 0.02 GB, 0.03 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec  6 02:00:31 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:00:31 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:00:31 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:00:31.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:00:31 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:00:31 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 02:00:31 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:00:31.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 02:00:33 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:00:33 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:00:33 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:00:33.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:00:33 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:00:33 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 02:00:33 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:00:33.525 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 02:00:35 np0005548731 nova_compute[232433]: 2025-12-06 07:00:35.011 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:00:35 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:00:35 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:00:35 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:00:35 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:00:35.393 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:00:35 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:00:35 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:00:35 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:00:35.526 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:00:35 np0005548731 nova_compute[232433]: 2025-12-06 07:00:35.849 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:00:37 np0005548731 nova_compute[232433]: 2025-12-06 07:00:37.106 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:00:37 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:00:37 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:00:37 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:00:37.396 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:00:37 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:00:37 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:00:37 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:00:37.528 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:00:39 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:00:39 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 02:00:39 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:00:39.399 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 02:00:39 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:00:39 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:00:39 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:00:39.531 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:00:40 np0005548731 nova_compute[232433]: 2025-12-06 07:00:40.015 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:00:40 np0005548731 nova_compute[232433]: 2025-12-06 07:00:40.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:00:40 np0005548731 nova_compute[232433]: 2025-12-06 07:00:40.105 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  6 02:00:40 np0005548731 nova_compute[232433]: 2025-12-06 07:00:40.105 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  6 02:00:40 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:00:40 np0005548731 nova_compute[232433]: 2025-12-06 07:00:40.298 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Acquiring lock "refresh_cache-d32579c4-62c8-41ac-9d01-b617cc7992ed" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 02:00:40 np0005548731 nova_compute[232433]: 2025-12-06 07:00:40.298 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Acquired lock "refresh_cache-d32579c4-62c8-41ac-9d01-b617cc7992ed" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 02:00:40 np0005548731 nova_compute[232433]: 2025-12-06 07:00:40.298 232437 DEBUG nova.network.neutron [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] [instance: d32579c4-62c8-41ac-9d01-b617cc7992ed] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Dec  6 02:00:40 np0005548731 nova_compute[232433]: 2025-12-06 07:00:40.298 232437 DEBUG nova.objects.instance [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lazy-loading 'info_cache' on Instance uuid d32579c4-62c8-41ac-9d01-b617cc7992ed obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:00:40 np0005548731 nova_compute[232433]: 2025-12-06 07:00:40.852 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:00:41 np0005548731 nova_compute[232433]: 2025-12-06 07:00:41.156 232437 DEBUG nova.network.neutron [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] [instance: d32579c4-62c8-41ac-9d01-b617cc7992ed] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec  6 02:00:41 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:00:41 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:00:41 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:00:41.402 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:00:41 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:00:41 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:00:41 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:00:41.533 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:00:42 np0005548731 nova_compute[232433]: 2025-12-06 07:00:42.374 232437 DEBUG nova.network.neutron [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] [instance: d32579c4-62c8-41ac-9d01-b617cc7992ed] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 02:00:42 np0005548731 nova_compute[232433]: 2025-12-06 07:00:42.495 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Releasing lock "refresh_cache-d32579c4-62c8-41ac-9d01-b617cc7992ed" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 02:00:42 np0005548731 nova_compute[232433]: 2025-12-06 07:00:42.496 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] [instance: d32579c4-62c8-41ac-9d01-b617cc7992ed] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Dec  6 02:00:42 np0005548731 nova_compute[232433]: 2025-12-06 07:00:42.496 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:00:42 np0005548731 nova_compute[232433]: 2025-12-06 07:00:42.496 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:00:43 np0005548731 nova_compute[232433]: 2025-12-06 07:00:43.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:00:43 np0005548731 nova_compute[232433]: 2025-12-06 07:00:43.105 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:00:43 np0005548731 ovn_controller[133927]: 2025-12-06T07:00:43Z|00051|memory_trim|INFO|Detected inactivity (last active 30003 ms ago): trimming memory
Dec  6 02:00:43 np0005548731 nova_compute[232433]: 2025-12-06 07:00:43.326 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:00:43 np0005548731 nova_compute[232433]: 2025-12-06 07:00:43.326 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:00:43 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:00:43 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:00:43 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:00:43.405 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:00:43 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:00:43 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:00:43 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:00:43.535 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:00:44 np0005548731 nova_compute[232433]: 2025-12-06 07:00:44.168 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:00:44 np0005548731 nova_compute[232433]: 2025-12-06 07:00:44.169 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:00:44 np0005548731 nova_compute[232433]: 2025-12-06 07:00:44.170 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:00:44 np0005548731 nova_compute[232433]: 2025-12-06 07:00:44.170 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  6 02:00:44 np0005548731 nova_compute[232433]: 2025-12-06 07:00:44.171 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:00:44 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 02:00:44 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2444872434' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 02:00:44 np0005548731 nova_compute[232433]: 2025-12-06 07:00:44.691 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.520s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:00:44 np0005548731 nova_compute[232433]: 2025-12-06 07:00:44.845 232437 DEBUG nova.virt.libvirt.driver [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] skipping disk for instance-0000000e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  6 02:00:44 np0005548731 nova_compute[232433]: 2025-12-06 07:00:44.846 232437 DEBUG nova.virt.libvirt.driver [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] skipping disk for instance-0000000e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  6 02:00:44 np0005548731 nova_compute[232433]: 2025-12-06 07:00:44.849 232437 DEBUG nova.virt.libvirt.driver [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] skipping disk for instance-0000000d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  6 02:00:44 np0005548731 nova_compute[232433]: 2025-12-06 07:00:44.850 232437 DEBUG nova.virt.libvirt.driver [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] skipping disk for instance-0000000d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  6 02:00:44 np0005548731 nova_compute[232433]: 2025-12-06 07:00:44.852 232437 DEBUG nova.virt.libvirt.driver [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] skipping disk for instance-0000000b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  6 02:00:44 np0005548731 nova_compute[232433]: 2025-12-06 07:00:44.853 232437 DEBUG nova.virt.libvirt.driver [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] skipping disk for instance-0000000b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  6 02:00:45 np0005548731 nova_compute[232433]: 2025-12-06 07:00:45.057 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:00:45 np0005548731 nova_compute[232433]: 2025-12-06 07:00:45.072 232437 WARNING nova.virt.libvirt.driver [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  6 02:00:45 np0005548731 nova_compute[232433]: 2025-12-06 07:00:45.073 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4318MB free_disk=20.801654815673828GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  6 02:00:45 np0005548731 nova_compute[232433]: 2025-12-06 07:00:45.073 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:00:45 np0005548731 nova_compute[232433]: 2025-12-06 07:00:45.073 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:00:45 np0005548731 nova_compute[232433]: 2025-12-06 07:00:45.147 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Instance d32579c4-62c8-41ac-9d01-b617cc7992ed actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  6 02:00:45 np0005548731 nova_compute[232433]: 2025-12-06 07:00:45.147 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Instance 2341fd69-a672-42e2-834b-0f7b8269c7ef actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 192, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  6 02:00:45 np0005548731 nova_compute[232433]: 2025-12-06 07:00:45.148 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Instance 4fb7ec18-30a1-4bd1-8daa-fd0ac899b57e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  6 02:00:45 np0005548731 nova_compute[232433]: 2025-12-06 07:00:45.148 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  6 02:00:45 np0005548731 nova_compute[232433]: 2025-12-06 07:00:45.148 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7680MB used_ram=960MB phys_disk=20GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  6 02:00:45 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:00:45 np0005548731 nova_compute[232433]: 2025-12-06 07:00:45.231 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:00:45 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:00:45 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:00:45 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:00:45.408 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:00:45 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:00:45 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:00:45 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:00:45.537 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:00:45 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 02:00:45 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3634300850' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 02:00:45 np0005548731 nova_compute[232433]: 2025-12-06 07:00:45.754 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.523s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:00:45 np0005548731 nova_compute[232433]: 2025-12-06 07:00:45.761 232437 DEBUG nova.compute.provider_tree [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Inventory has not changed in ProviderTree for provider: 6d00757a-082f-486d-ae84-869a2ba2e6e7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  6 02:00:45 np0005548731 nova_compute[232433]: 2025-12-06 07:00:45.830 232437 DEBUG nova.scheduler.client.report [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Inventory has not changed for provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  6 02:00:45 np0005548731 nova_compute[232433]: 2025-12-06 07:00:45.853 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:00:45 np0005548731 nova_compute[232433]: 2025-12-06 07:00:45.891 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  6 02:00:45 np0005548731 nova_compute[232433]: 2025-12-06 07:00:45.891 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.818s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:00:46 np0005548731 nova_compute[232433]: 2025-12-06 07:00:46.669 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:00:46 np0005548731 nova_compute[232433]: 2025-12-06 07:00:46.671 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:00:46 np0005548731 nova_compute[232433]: 2025-12-06 07:00:46.671 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  6 02:00:47 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:00:47 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:00:47 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:00:47.411 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:00:47 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:00:47 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 02:00:47 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:00:47.539 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 02:00:49 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:00:49 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:00:49 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:00:49.416 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:00:49 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:00:49 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:00:49 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:00:49.541 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:00:49 np0005548731 nova_compute[232433]: 2025-12-06 07:00:49.778 232437 DEBUG oslo_concurrency.lockutils [None req-daa4d182-198b-4c3f-a5e9-b29438516860 56a80bccd06a46eb841b4a39bdc45f76 e2869bdde00d4ef8bea339be8805aa3f - - default default] Acquiring lock "4fb7ec18-30a1-4bd1-8daa-fd0ac899b57e" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:00:49 np0005548731 nova_compute[232433]: 2025-12-06 07:00:49.778 232437 DEBUG oslo_concurrency.lockutils [None req-daa4d182-198b-4c3f-a5e9-b29438516860 56a80bccd06a46eb841b4a39bdc45f76 e2869bdde00d4ef8bea339be8805aa3f - - default default] Lock "4fb7ec18-30a1-4bd1-8daa-fd0ac899b57e" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:00:49 np0005548731 nova_compute[232433]: 2025-12-06 07:00:49.779 232437 DEBUG oslo_concurrency.lockutils [None req-daa4d182-198b-4c3f-a5e9-b29438516860 56a80bccd06a46eb841b4a39bdc45f76 e2869bdde00d4ef8bea339be8805aa3f - - default default] Acquiring lock "4fb7ec18-30a1-4bd1-8daa-fd0ac899b57e-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:00:49 np0005548731 nova_compute[232433]: 2025-12-06 07:00:49.779 232437 DEBUG oslo_concurrency.lockutils [None req-daa4d182-198b-4c3f-a5e9-b29438516860 56a80bccd06a46eb841b4a39bdc45f76 e2869bdde00d4ef8bea339be8805aa3f - - default default] Lock "4fb7ec18-30a1-4bd1-8daa-fd0ac899b57e-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:00:49 np0005548731 nova_compute[232433]: 2025-12-06 07:00:49.779 232437 DEBUG oslo_concurrency.lockutils [None req-daa4d182-198b-4c3f-a5e9-b29438516860 56a80bccd06a46eb841b4a39bdc45f76 e2869bdde00d4ef8bea339be8805aa3f - - default default] Lock "4fb7ec18-30a1-4bd1-8daa-fd0ac899b57e-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:00:49 np0005548731 nova_compute[232433]: 2025-12-06 07:00:49.780 232437 INFO nova.compute.manager [None req-daa4d182-198b-4c3f-a5e9-b29438516860 56a80bccd06a46eb841b4a39bdc45f76 e2869bdde00d4ef8bea339be8805aa3f - - default default] [instance: 4fb7ec18-30a1-4bd1-8daa-fd0ac899b57e] Terminating instance#033[00m
Dec  6 02:00:49 np0005548731 nova_compute[232433]: 2025-12-06 07:00:49.781 232437 DEBUG nova.compute.manager [None req-daa4d182-198b-4c3f-a5e9-b29438516860 56a80bccd06a46eb841b4a39bdc45f76 e2869bdde00d4ef8bea339be8805aa3f - - default default] [instance: 4fb7ec18-30a1-4bd1-8daa-fd0ac899b57e] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec  6 02:00:50 np0005548731 nova_compute[232433]: 2025-12-06 07:00:50.062 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:00:50 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:00:50 np0005548731 nova_compute[232433]: 2025-12-06 07:00:50.891 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:00:51 np0005548731 kernel: tapee2e3cc0-25 (unregistering): left promiscuous mode
Dec  6 02:00:51 np0005548731 NetworkManager[49182]: <info>  [1765004451.0277] device (tapee2e3cc0-25): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec  6 02:00:51 np0005548731 ovn_controller[133927]: 2025-12-06T07:00:51Z|00052|binding|INFO|Releasing lport ee2e3cc0-250d-488e-8216-a6ff00f323fa from this chassis (sb_readonly=0)
Dec  6 02:00:51 np0005548731 nova_compute[232433]: 2025-12-06 07:00:51.040 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:00:51 np0005548731 ovn_controller[133927]: 2025-12-06T07:00:51Z|00053|binding|INFO|Setting lport ee2e3cc0-250d-488e-8216-a6ff00f323fa down in Southbound
Dec  6 02:00:51 np0005548731 ovn_controller[133927]: 2025-12-06T07:00:51Z|00054|binding|INFO|Removing iface tapee2e3cc0-25 ovn-installed in OVS
Dec  6 02:00:51 np0005548731 nova_compute[232433]: 2025-12-06 07:00:51.066 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:00:51 np0005548731 systemd[1]: machine-qemu\x2d5\x2dinstance\x2d0000000e.scope: Deactivated successfully.
Dec  6 02:00:51 np0005548731 systemd[1]: machine-qemu\x2d5\x2dinstance\x2d0000000e.scope: Consumed 15.773s CPU time.
Dec  6 02:00:51 np0005548731 systemd-machined[195355]: Machine qemu-5-instance-0000000e terminated.
Dec  6 02:00:51 np0005548731 nova_compute[232433]: 2025-12-06 07:00:51.223 232437 INFO nova.virt.libvirt.driver [-] [instance: 4fb7ec18-30a1-4bd1-8daa-fd0ac899b57e] Instance destroyed successfully.#033[00m
Dec  6 02:00:51 np0005548731 nova_compute[232433]: 2025-12-06 07:00:51.223 232437 DEBUG nova.objects.instance [None req-daa4d182-198b-4c3f-a5e9-b29438516860 56a80bccd06a46eb841b4a39bdc45f76 e2869bdde00d4ef8bea339be8805aa3f - - default default] Lazy-loading 'resources' on Instance uuid 4fb7ec18-30a1-4bd1-8daa-fd0ac899b57e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:00:51 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:00:51 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:00:51 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:00:51.420 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:00:51 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:00:51.484 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:94:fe:b1 10.100.0.12'], port_security=['fa:16:3e:94:fe:b1 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '4fb7ec18-30a1-4bd1-8daa-fd0ac899b57e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a0468f21-498f-424d-a04a-d011b4adc72e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e2869bdde00d4ef8bea339be8805aa3f', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'a097e786-0d4a-48b2-8e8b-48554d219e38', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com', 'neutron:port_fip': '192.168.122.193'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1fc5a527-4a15-4cc8-9ef4-9d61b8890236, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], logical_port=ee2e3cc0-250d-488e-8216-a6ff00f323fa) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 02:00:51 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:00:51.485 143965 INFO neutron.agent.ovn.metadata.agent [-] Port ee2e3cc0-250d-488e-8216-a6ff00f323fa in datapath a0468f21-498f-424d-a04a-d011b4adc72e unbound from our chassis#033[00m
Dec  6 02:00:51 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:00:51.487 143965 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network a0468f21-498f-424d-a04a-d011b4adc72e, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec  6 02:00:51 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:00:51.488 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[35949eeb-3a00-4459-925f-00833f0d86e4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:00:51 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:00:51.488 143965 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-a0468f21-498f-424d-a04a-d011b4adc72e namespace which is not needed anymore#033[00m
Dec  6 02:00:51 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:00:51 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 02:00:51 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:00:51.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 02:00:51 np0005548731 neutron-haproxy-ovnmeta-a0468f21-498f-424d-a04a-d011b4adc72e[239282]: [NOTICE]   (239286) : haproxy version is 2.8.14-c23fe91
Dec  6 02:00:51 np0005548731 neutron-haproxy-ovnmeta-a0468f21-498f-424d-a04a-d011b4adc72e[239282]: [NOTICE]   (239286) : path to executable is /usr/sbin/haproxy
Dec  6 02:00:51 np0005548731 neutron-haproxy-ovnmeta-a0468f21-498f-424d-a04a-d011b4adc72e[239282]: [WARNING]  (239286) : Exiting Master process...
Dec  6 02:00:51 np0005548731 neutron-haproxy-ovnmeta-a0468f21-498f-424d-a04a-d011b4adc72e[239282]: [WARNING]  (239286) : Exiting Master process...
Dec  6 02:00:51 np0005548731 neutron-haproxy-ovnmeta-a0468f21-498f-424d-a04a-d011b4adc72e[239282]: [ALERT]    (239286) : Current worker (239288) exited with code 143 (Terminated)
Dec  6 02:00:51 np0005548731 neutron-haproxy-ovnmeta-a0468f21-498f-424d-a04a-d011b4adc72e[239282]: [WARNING]  (239286) : All workers exited. Exiting... (0)
Dec  6 02:00:51 np0005548731 systemd[1]: libpod-f944bb6036baad392c1d6fa230ecdfcd6c3b8a80bfcd957de95c777c50ef009b.scope: Deactivated successfully.
Dec  6 02:00:51 np0005548731 podman[239851]: 2025-12-06 07:00:51.62536035 +0000 UTC m=+0.048864473 container died f944bb6036baad392c1d6fa230ecdfcd6c3b8a80bfcd957de95c777c50ef009b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a0468f21-498f-424d-a04a-d011b4adc72e, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Dec  6 02:00:51 np0005548731 nova_compute[232433]: 2025-12-06 07:00:51.642 232437 DEBUG nova.virt.libvirt.vif [None req-daa4d182-198b-4c3f-a5e9-b29438516860 56a80bccd06a46eb841b4a39bdc45f76 e2869bdde00d4ef8bea339be8805aa3f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T06:59:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersWithSpecificFlavorTestJSON-server-1375954339',display_name='tempest-ServersWithSpecificFlavorTestJSON-server-1375954339',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(23),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serverswithspecificflavortestjson-server-1375954339',id=14,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=23,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLyYd2vj0IZaejjGwrppG7vtmC1xs+0JSlQJBkmu53OyTo4w/o53nNppqE2nymdT0UkGVn1vFVEZmmF+N+AjxIMF81FiLMyyI4ZyWmUw9aTkhGmFymaAYJKyFfVSYZ1oNQ==',key_name='tempest-keypair-1145479885',keypairs=<?>,launch_index=0,launched_at=2025-12-06T07:00:09Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='e2869bdde00d4ef8bea339be8805aa3f',ramdisk_id='',reservation_id='r-wu9t5vze',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersWithSpecificFlavorTestJSON-631560428',owner_user_name='tempest-ServersWithSpecificFlavorTestJSON-631560428-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T07:00:09Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='56a80bccd06a46eb841b4a39bdc45f76',uuid=4fb7ec18-30a1-4bd1-8daa-fd0ac899b57e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "ee2e3cc0-250d-488e-8216-a6ff00f323fa", "address": "fa:16:3e:94:fe:b1", "network": {"id": "a0468f21-498f-424d-a04a-d011b4adc72e", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-40054798-network", "subnets": [{"cidr": 
"10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.193", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e2869bdde00d4ef8bea339be8805aa3f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapee2e3cc0-25", "ovs_interfaceid": "ee2e3cc0-250d-488e-8216-a6ff00f323fa", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec  6 02:00:51 np0005548731 nova_compute[232433]: 2025-12-06 07:00:51.643 232437 DEBUG nova.network.os_vif_util [None req-daa4d182-198b-4c3f-a5e9-b29438516860 56a80bccd06a46eb841b4a39bdc45f76 e2869bdde00d4ef8bea339be8805aa3f - - default default] Converting VIF {"id": "ee2e3cc0-250d-488e-8216-a6ff00f323fa", "address": "fa:16:3e:94:fe:b1", "network": {"id": "a0468f21-498f-424d-a04a-d011b4adc72e", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-40054798-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.193", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e2869bdde00d4ef8bea339be8805aa3f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapee2e3cc0-25", "ovs_interfaceid": "ee2e3cc0-250d-488e-8216-a6ff00f323fa", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 02:00:51 np0005548731 nova_compute[232433]: 2025-12-06 07:00:51.644 232437 DEBUG nova.network.os_vif_util [None req-daa4d182-198b-4c3f-a5e9-b29438516860 56a80bccd06a46eb841b4a39bdc45f76 e2869bdde00d4ef8bea339be8805aa3f - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:94:fe:b1,bridge_name='br-int',has_traffic_filtering=True,id=ee2e3cc0-250d-488e-8216-a6ff00f323fa,network=Network(a0468f21-498f-424d-a04a-d011b4adc72e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapee2e3cc0-25') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 02:00:51 np0005548731 nova_compute[232433]: 2025-12-06 07:00:51.644 232437 DEBUG os_vif [None req-daa4d182-198b-4c3f-a5e9-b29438516860 56a80bccd06a46eb841b4a39bdc45f76 e2869bdde00d4ef8bea339be8805aa3f - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:94:fe:b1,bridge_name='br-int',has_traffic_filtering=True,id=ee2e3cc0-250d-488e-8216-a6ff00f323fa,network=Network(a0468f21-498f-424d-a04a-d011b4adc72e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapee2e3cc0-25') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec  6 02:00:51 np0005548731 nova_compute[232433]: 2025-12-06 07:00:51.648 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:00:51 np0005548731 nova_compute[232433]: 2025-12-06 07:00:51.648 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapee2e3cc0-25, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:00:51 np0005548731 nova_compute[232433]: 2025-12-06 07:00:51.651 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:00:51 np0005548731 nova_compute[232433]: 2025-12-06 07:00:51.652 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:00:51 np0005548731 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-f944bb6036baad392c1d6fa230ecdfcd6c3b8a80bfcd957de95c777c50ef009b-userdata-shm.mount: Deactivated successfully.
Dec  6 02:00:51 np0005548731 nova_compute[232433]: 2025-12-06 07:00:51.655 232437 INFO os_vif [None req-daa4d182-198b-4c3f-a5e9-b29438516860 56a80bccd06a46eb841b4a39bdc45f76 e2869bdde00d4ef8bea339be8805aa3f - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:94:fe:b1,bridge_name='br-int',has_traffic_filtering=True,id=ee2e3cc0-250d-488e-8216-a6ff00f323fa,network=Network(a0468f21-498f-424d-a04a-d011b4adc72e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapee2e3cc0-25')#033[00m
Dec  6 02:00:51 np0005548731 systemd[1]: var-lib-containers-storage-overlay-34c8aaba6f169a361864c3f2b99d4a2fb99bbdb82f32f84de98abee1f7396024-merged.mount: Deactivated successfully.
Dec  6 02:00:51 np0005548731 podman[239851]: 2025-12-06 07:00:51.668638286 +0000 UTC m=+0.092142399 container cleanup f944bb6036baad392c1d6fa230ecdfcd6c3b8a80bfcd957de95c777c50ef009b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a0468f21-498f-424d-a04a-d011b4adc72e, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Dec  6 02:00:51 np0005548731 systemd[1]: libpod-conmon-f944bb6036baad392c1d6fa230ecdfcd6c3b8a80bfcd957de95c777c50ef009b.scope: Deactivated successfully.
Dec  6 02:00:51 np0005548731 podman[239896]: 2025-12-06 07:00:51.741530728 +0000 UTC m=+0.048648597 container remove f944bb6036baad392c1d6fa230ecdfcd6c3b8a80bfcd957de95c777c50ef009b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a0468f21-498f-424d-a04a-d011b4adc72e, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0)
Dec  6 02:00:51 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:00:51.747 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[fb75e889-11b3-4125-8d5d-c0b04d965bc6]: (4, ('Sat Dec  6 07:00:51 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-a0468f21-498f-424d-a04a-d011b4adc72e (f944bb6036baad392c1d6fa230ecdfcd6c3b8a80bfcd957de95c777c50ef009b)\nf944bb6036baad392c1d6fa230ecdfcd6c3b8a80bfcd957de95c777c50ef009b\nSat Dec  6 07:00:51 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-a0468f21-498f-424d-a04a-d011b4adc72e (f944bb6036baad392c1d6fa230ecdfcd6c3b8a80bfcd957de95c777c50ef009b)\nf944bb6036baad392c1d6fa230ecdfcd6c3b8a80bfcd957de95c777c50ef009b\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:00:51 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:00:51.749 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[04165894-42ad-4ac9-82dd-3640f18b2ffd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:00:51 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:00:51.750 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa0468f21-40, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:00:51 np0005548731 nova_compute[232433]: 2025-12-06 07:00:51.752 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:00:51 np0005548731 kernel: tapa0468f21-40: left promiscuous mode
Dec  6 02:00:51 np0005548731 nova_compute[232433]: 2025-12-06 07:00:51.768 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:00:51 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:00:51.772 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[119452b8-0840-4c33-bcde-d57ed7075ef4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:00:51 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:00:51.786 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[0991e4cd-243f-49a3-8674-205a507862b1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:00:51 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:00:51.788 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[a69420a3-355d-4ca8-9d5f-c005cd10fae7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:00:51 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:00:51.804 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[d4a45c92-308e-4e87-9340-a40e08499d39]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 467057, 'reachable_time': 20650, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 239914, 'error': None, 'target': 'ovnmeta-a0468f21-498f-424d-a04a-d011b4adc72e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:00:51 np0005548731 systemd[1]: run-netns-ovnmeta\x2da0468f21\x2d498f\x2d424d\x2da04a\x2dd011b4adc72e.mount: Deactivated successfully.
Dec  6 02:00:51 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:00:51.809 144079 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-a0468f21-498f-424d-a04a-d011b4adc72e deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Dec  6 02:00:51 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:00:51.809 144079 DEBUG oslo.privsep.daemon [-] privsep: reply[edc279bd-121e-4acd-9740-574e115446c8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:00:52 np0005548731 nova_compute[232433]: 2025-12-06 07:00:52.183 232437 INFO nova.virt.libvirt.driver [None req-daa4d182-198b-4c3f-a5e9-b29438516860 56a80bccd06a46eb841b4a39bdc45f76 e2869bdde00d4ef8bea339be8805aa3f - - default default] [instance: 4fb7ec18-30a1-4bd1-8daa-fd0ac899b57e] Deleting instance files /var/lib/nova/instances/4fb7ec18-30a1-4bd1-8daa-fd0ac899b57e_del#033[00m
Dec  6 02:00:52 np0005548731 nova_compute[232433]: 2025-12-06 07:00:52.184 232437 INFO nova.virt.libvirt.driver [None req-daa4d182-198b-4c3f-a5e9-b29438516860 56a80bccd06a46eb841b4a39bdc45f76 e2869bdde00d4ef8bea339be8805aa3f - - default default] [instance: 4fb7ec18-30a1-4bd1-8daa-fd0ac899b57e] Deletion of /var/lib/nova/instances/4fb7ec18-30a1-4bd1-8daa-fd0ac899b57e_del complete#033[00m
Dec  6 02:00:52 np0005548731 nova_compute[232433]: 2025-12-06 07:00:52.447 232437 INFO nova.compute.manager [None req-daa4d182-198b-4c3f-a5e9-b29438516860 56a80bccd06a46eb841b4a39bdc45f76 e2869bdde00d4ef8bea339be8805aa3f - - default default] [instance: 4fb7ec18-30a1-4bd1-8daa-fd0ac899b57e] Took 2.67 seconds to destroy the instance on the hypervisor.#033[00m
Dec  6 02:00:52 np0005548731 nova_compute[232433]: 2025-12-06 07:00:52.448 232437 DEBUG oslo.service.loopingcall [None req-daa4d182-198b-4c3f-a5e9-b29438516860 56a80bccd06a46eb841b4a39bdc45f76 e2869bdde00d4ef8bea339be8805aa3f - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Dec  6 02:00:52 np0005548731 nova_compute[232433]: 2025-12-06 07:00:52.448 232437 DEBUG nova.compute.manager [-] [instance: 4fb7ec18-30a1-4bd1-8daa-fd0ac899b57e] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Dec  6 02:00:52 np0005548731 nova_compute[232433]: 2025-12-06 07:00:52.448 232437 DEBUG nova.network.neutron [-] [instance: 4fb7ec18-30a1-4bd1-8daa-fd0ac899b57e] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Dec  6 02:00:53 np0005548731 nova_compute[232433]: 2025-12-06 07:00:53.403 232437 DEBUG nova.compute.manager [req-0de6a395-8f05-4193-aa32-93bd7c80937c req-b52a7ffd-c031-4068-aa6a-1317f182ecf2 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 4fb7ec18-30a1-4bd1-8daa-fd0ac899b57e] Received event network-vif-unplugged-ee2e3cc0-250d-488e-8216-a6ff00f323fa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:00:53 np0005548731 nova_compute[232433]: 2025-12-06 07:00:53.404 232437 DEBUG oslo_concurrency.lockutils [req-0de6a395-8f05-4193-aa32-93bd7c80937c req-b52a7ffd-c031-4068-aa6a-1317f182ecf2 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "4fb7ec18-30a1-4bd1-8daa-fd0ac899b57e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:00:53 np0005548731 nova_compute[232433]: 2025-12-06 07:00:53.404 232437 DEBUG oslo_concurrency.lockutils [req-0de6a395-8f05-4193-aa32-93bd7c80937c req-b52a7ffd-c031-4068-aa6a-1317f182ecf2 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "4fb7ec18-30a1-4bd1-8daa-fd0ac899b57e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:00:53 np0005548731 nova_compute[232433]: 2025-12-06 07:00:53.404 232437 DEBUG oslo_concurrency.lockutils [req-0de6a395-8f05-4193-aa32-93bd7c80937c req-b52a7ffd-c031-4068-aa6a-1317f182ecf2 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "4fb7ec18-30a1-4bd1-8daa-fd0ac899b57e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:00:53 np0005548731 nova_compute[232433]: 2025-12-06 07:00:53.405 232437 DEBUG nova.compute.manager [req-0de6a395-8f05-4193-aa32-93bd7c80937c req-b52a7ffd-c031-4068-aa6a-1317f182ecf2 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 4fb7ec18-30a1-4bd1-8daa-fd0ac899b57e] No waiting events found dispatching network-vif-unplugged-ee2e3cc0-250d-488e-8216-a6ff00f323fa pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 02:00:53 np0005548731 nova_compute[232433]: 2025-12-06 07:00:53.405 232437 DEBUG nova.compute.manager [req-0de6a395-8f05-4193-aa32-93bd7c80937c req-b52a7ffd-c031-4068-aa6a-1317f182ecf2 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 4fb7ec18-30a1-4bd1-8daa-fd0ac899b57e] Received event network-vif-unplugged-ee2e3cc0-250d-488e-8216-a6ff00f323fa for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Dec  6 02:00:53 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:00:53 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:00:53 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:00:53.422 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:00:53 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:00:53 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 02:00:53 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:00:53.545 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 02:00:55 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:00:55 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:00:55 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:00:55 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:00:55.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:00:55 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:00:55 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:00:55 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:00:55.548 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:00:55 np0005548731 nova_compute[232433]: 2025-12-06 07:00:55.883 232437 DEBUG nova.network.neutron [-] [instance: 4fb7ec18-30a1-4bd1-8daa-fd0ac899b57e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 02:00:55 np0005548731 nova_compute[232433]: 2025-12-06 07:00:55.894 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:00:55 np0005548731 nova_compute[232433]: 2025-12-06 07:00:55.901 232437 INFO nova.compute.manager [-] [instance: 4fb7ec18-30a1-4bd1-8daa-fd0ac899b57e] Took 3.45 seconds to deallocate network for instance.#033[00m
Dec  6 02:00:55 np0005548731 nova_compute[232433]: 2025-12-06 07:00:55.925 232437 DEBUG nova.compute.manager [req-1c7699ba-61a1-4b9c-b527-9060c55aefca req-b1f6b087-b555-44b9-b968-ef77a51e4927 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 4fb7ec18-30a1-4bd1-8daa-fd0ac899b57e] Received event network-vif-plugged-ee2e3cc0-250d-488e-8216-a6ff00f323fa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:00:55 np0005548731 nova_compute[232433]: 2025-12-06 07:00:55.925 232437 DEBUG oslo_concurrency.lockutils [req-1c7699ba-61a1-4b9c-b527-9060c55aefca req-b1f6b087-b555-44b9-b968-ef77a51e4927 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "4fb7ec18-30a1-4bd1-8daa-fd0ac899b57e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:00:55 np0005548731 nova_compute[232433]: 2025-12-06 07:00:55.926 232437 DEBUG oslo_concurrency.lockutils [req-1c7699ba-61a1-4b9c-b527-9060c55aefca req-b1f6b087-b555-44b9-b968-ef77a51e4927 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "4fb7ec18-30a1-4bd1-8daa-fd0ac899b57e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:00:55 np0005548731 nova_compute[232433]: 2025-12-06 07:00:55.926 232437 DEBUG oslo_concurrency.lockutils [req-1c7699ba-61a1-4b9c-b527-9060c55aefca req-b1f6b087-b555-44b9-b968-ef77a51e4927 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "4fb7ec18-30a1-4bd1-8daa-fd0ac899b57e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:00:55 np0005548731 nova_compute[232433]: 2025-12-06 07:00:55.926 232437 DEBUG nova.compute.manager [req-1c7699ba-61a1-4b9c-b527-9060c55aefca req-b1f6b087-b555-44b9-b968-ef77a51e4927 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 4fb7ec18-30a1-4bd1-8daa-fd0ac899b57e] No waiting events found dispatching network-vif-plugged-ee2e3cc0-250d-488e-8216-a6ff00f323fa pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 02:00:55 np0005548731 nova_compute[232433]: 2025-12-06 07:00:55.927 232437 WARNING nova.compute.manager [req-1c7699ba-61a1-4b9c-b527-9060c55aefca req-b1f6b087-b555-44b9-b968-ef77a51e4927 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 4fb7ec18-30a1-4bd1-8daa-fd0ac899b57e] Received unexpected event network-vif-plugged-ee2e3cc0-250d-488e-8216-a6ff00f323fa for instance with vm_state active and task_state deleting.#033[00m
Dec  6 02:00:55 np0005548731 nova_compute[232433]: 2025-12-06 07:00:55.966 232437 DEBUG oslo_concurrency.lockutils [None req-daa4d182-198b-4c3f-a5e9-b29438516860 56a80bccd06a46eb841b4a39bdc45f76 e2869bdde00d4ef8bea339be8805aa3f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:00:55 np0005548731 nova_compute[232433]: 2025-12-06 07:00:55.967 232437 DEBUG oslo_concurrency.lockutils [None req-daa4d182-198b-4c3f-a5e9-b29438516860 56a80bccd06a46eb841b4a39bdc45f76 e2869bdde00d4ef8bea339be8805aa3f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:00:55 np0005548731 nova_compute[232433]: 2025-12-06 07:00:55.980 232437 DEBUG nova.compute.manager [req-d33f5d21-79e5-41d1-97f4-3a401f95c3fe req-bd6956ed-8de2-44a4-9ea5-6a519ec064cb 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 4fb7ec18-30a1-4bd1-8daa-fd0ac899b57e] Received event network-vif-deleted-ee2e3cc0-250d-488e-8216-a6ff00f323fa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:00:56 np0005548731 nova_compute[232433]: 2025-12-06 07:00:56.072 232437 DEBUG oslo_concurrency.processutils [None req-daa4d182-198b-4c3f-a5e9-b29438516860 56a80bccd06a46eb841b4a39bdc45f76 e2869bdde00d4ef8bea339be8805aa3f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:00:56 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 02:00:56 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/487672312' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 02:00:56 np0005548731 nova_compute[232433]: 2025-12-06 07:00:56.523 232437 DEBUG oslo_concurrency.processutils [None req-daa4d182-198b-4c3f-a5e9-b29438516860 56a80bccd06a46eb841b4a39bdc45f76 e2869bdde00d4ef8bea339be8805aa3f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:00:56 np0005548731 nova_compute[232433]: 2025-12-06 07:00:56.529 232437 DEBUG nova.compute.provider_tree [None req-daa4d182-198b-4c3f-a5e9-b29438516860 56a80bccd06a46eb841b4a39bdc45f76 e2869bdde00d4ef8bea339be8805aa3f - - default default] Inventory has not changed in ProviderTree for provider: 6d00757a-082f-486d-ae84-869a2ba2e6e7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  6 02:00:56 np0005548731 nova_compute[232433]: 2025-12-06 07:00:56.546 232437 DEBUG nova.scheduler.client.report [None req-daa4d182-198b-4c3f-a5e9-b29438516860 56a80bccd06a46eb841b4a39bdc45f76 e2869bdde00d4ef8bea339be8805aa3f - - default default] Inventory has not changed for provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  6 02:00:56 np0005548731 nova_compute[232433]: 2025-12-06 07:00:56.571 232437 DEBUG oslo_concurrency.lockutils [None req-daa4d182-198b-4c3f-a5e9-b29438516860 56a80bccd06a46eb841b4a39bdc45f76 e2869bdde00d4ef8bea339be8805aa3f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.604s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:00:56 np0005548731 nova_compute[232433]: 2025-12-06 07:00:56.618 232437 INFO nova.scheduler.client.report [None req-daa4d182-198b-4c3f-a5e9-b29438516860 56a80bccd06a46eb841b4a39bdc45f76 e2869bdde00d4ef8bea339be8805aa3f - - default default] Deleted allocations for instance 4fb7ec18-30a1-4bd1-8daa-fd0ac899b57e#033[00m
Dec  6 02:00:56 np0005548731 nova_compute[232433]: 2025-12-06 07:00:56.651 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:00:56 np0005548731 nova_compute[232433]: 2025-12-06 07:00:56.685 232437 DEBUG oslo_concurrency.lockutils [None req-daa4d182-198b-4c3f-a5e9-b29438516860 56a80bccd06a46eb841b4a39bdc45f76 e2869bdde00d4ef8bea339be8805aa3f - - default default] Lock "4fb7ec18-30a1-4bd1-8daa-fd0ac899b57e" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 6.906s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:00:57 np0005548731 nova_compute[232433]: 2025-12-06 07:00:57.137 232437 DEBUG oslo_concurrency.lockutils [None req-eada883f-c420-4743-83b7-c804aa458a68 56a80bccd06a46eb841b4a39bdc45f76 e2869bdde00d4ef8bea339be8805aa3f - - default default] Acquiring lock "f9728486-7db3-4d21-bd7a-b41b891489a5" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:00:57 np0005548731 nova_compute[232433]: 2025-12-06 07:00:57.137 232437 DEBUG oslo_concurrency.lockutils [None req-eada883f-c420-4743-83b7-c804aa458a68 56a80bccd06a46eb841b4a39bdc45f76 e2869bdde00d4ef8bea339be8805aa3f - - default default] Lock "f9728486-7db3-4d21-bd7a-b41b891489a5" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:00:57 np0005548731 nova_compute[232433]: 2025-12-06 07:00:57.156 232437 DEBUG nova.compute.manager [None req-eada883f-c420-4743-83b7-c804aa458a68 56a80bccd06a46eb841b4a39bdc45f76 e2869bdde00d4ef8bea339be8805aa3f - - default default] [instance: f9728486-7db3-4d21-bd7a-b41b891489a5] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Dec  6 02:00:57 np0005548731 nova_compute[232433]: 2025-12-06 07:00:57.226 232437 DEBUG oslo_concurrency.lockutils [None req-eada883f-c420-4743-83b7-c804aa458a68 56a80bccd06a46eb841b4a39bdc45f76 e2869bdde00d4ef8bea339be8805aa3f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:00:57 np0005548731 nova_compute[232433]: 2025-12-06 07:00:57.227 232437 DEBUG oslo_concurrency.lockutils [None req-eada883f-c420-4743-83b7-c804aa458a68 56a80bccd06a46eb841b4a39bdc45f76 e2869bdde00d4ef8bea339be8805aa3f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:00:57 np0005548731 nova_compute[232433]: 2025-12-06 07:00:57.234 232437 DEBUG nova.virt.hardware [None req-eada883f-c420-4743-83b7-c804aa458a68 56a80bccd06a46eb841b4a39bdc45f76 e2869bdde00d4ef8bea339be8805aa3f - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Dec  6 02:00:57 np0005548731 nova_compute[232433]: 2025-12-06 07:00:57.234 232437 INFO nova.compute.claims [None req-eada883f-c420-4743-83b7-c804aa458a68 56a80bccd06a46eb841b4a39bdc45f76 e2869bdde00d4ef8bea339be8805aa3f - - default default] [instance: f9728486-7db3-4d21-bd7a-b41b891489a5] Claim successful on node compute-2.ctlplane.example.com#033[00m
Dec  6 02:00:57 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:00:57 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:00:57 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:00:57.429 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:00:57 np0005548731 nova_compute[232433]: 2025-12-06 07:00:57.473 232437 DEBUG oslo_concurrency.processutils [None req-eada883f-c420-4743-83b7-c804aa458a68 56a80bccd06a46eb841b4a39bdc45f76 e2869bdde00d4ef8bea339be8805aa3f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:00:57 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:00:57 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 02:00:57 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:00:57.550 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 02:00:57 np0005548731 nova_compute[232433]: 2025-12-06 07:00:57.665 232437 DEBUG nova.compute.manager [None req-a5e4b5dd-27d7-4e35-b18c-720c4b7c81fb 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] [instance: 8818a36b-f8ca-411f-9037-85036a64a941] Stashing vm_state: active _prep_resize /usr/lib/python3.9/site-packages/nova/compute/manager.py:5560#033[00m
Dec  6 02:00:57 np0005548731 nova_compute[232433]: 2025-12-06 07:00:57.768 232437 DEBUG oslo_concurrency.lockutils [None req-a5e4b5dd-27d7-4e35-b18c-720c4b7c81fb 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.resize_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:00:57 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 02:00:57 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1289067618' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 02:00:57 np0005548731 nova_compute[232433]: 2025-12-06 07:00:57.955 232437 DEBUG oslo_concurrency.processutils [None req-eada883f-c420-4743-83b7-c804aa458a68 56a80bccd06a46eb841b4a39bdc45f76 e2869bdde00d4ef8bea339be8805aa3f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.482s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:00:57 np0005548731 nova_compute[232433]: 2025-12-06 07:00:57.961 232437 DEBUG nova.compute.provider_tree [None req-eada883f-c420-4743-83b7-c804aa458a68 56a80bccd06a46eb841b4a39bdc45f76 e2869bdde00d4ef8bea339be8805aa3f - - default default] Inventory has not changed in ProviderTree for provider: 6d00757a-082f-486d-ae84-869a2ba2e6e7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  6 02:00:57 np0005548731 nova_compute[232433]: 2025-12-06 07:00:57.974 232437 DEBUG nova.scheduler.client.report [None req-eada883f-c420-4743-83b7-c804aa458a68 56a80bccd06a46eb841b4a39bdc45f76 e2869bdde00d4ef8bea339be8805aa3f - - default default] Inventory has not changed for provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  6 02:00:57 np0005548731 nova_compute[232433]: 2025-12-06 07:00:57.998 232437 DEBUG oslo_concurrency.lockutils [None req-eada883f-c420-4743-83b7-c804aa458a68 56a80bccd06a46eb841b4a39bdc45f76 e2869bdde00d4ef8bea339be8805aa3f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.771s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:00:57 np0005548731 nova_compute[232433]: 2025-12-06 07:00:57.999 232437 DEBUG nova.compute.manager [None req-eada883f-c420-4743-83b7-c804aa458a68 56a80bccd06a46eb841b4a39bdc45f76 e2869bdde00d4ef8bea339be8805aa3f - - default default] [instance: f9728486-7db3-4d21-bd7a-b41b891489a5] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Dec  6 02:00:58 np0005548731 nova_compute[232433]: 2025-12-06 07:00:58.002 232437 DEBUG oslo_concurrency.lockutils [None req-a5e4b5dd-27d7-4e35-b18c-720c4b7c81fb 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.resize_claim" :: waited 0.233s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:00:58 np0005548731 nova_compute[232433]: 2025-12-06 07:00:58.049 232437 DEBUG nova.objects.instance [None req-a5e4b5dd-27d7-4e35-b18c-720c4b7c81fb 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Lazy-loading 'pci_requests' on Instance uuid 8818a36b-f8ca-411f-9037-85036a64a941 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:00:58 np0005548731 nova_compute[232433]: 2025-12-06 07:00:58.080 232437 DEBUG nova.virt.hardware [None req-a5e4b5dd-27d7-4e35-b18c-720c4b7c81fb 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Dec  6 02:00:58 np0005548731 nova_compute[232433]: 2025-12-06 07:00:58.080 232437 INFO nova.compute.claims [None req-a5e4b5dd-27d7-4e35-b18c-720c4b7c81fb 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] [instance: 8818a36b-f8ca-411f-9037-85036a64a941] Claim successful on node compute-2.ctlplane.example.com#033[00m
Dec  6 02:00:58 np0005548731 nova_compute[232433]: 2025-12-06 07:00:58.081 232437 DEBUG nova.objects.instance [None req-a5e4b5dd-27d7-4e35-b18c-720c4b7c81fb 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Lazy-loading 'resources' on Instance uuid 8818a36b-f8ca-411f-9037-85036a64a941 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:00:58 np0005548731 nova_compute[232433]: 2025-12-06 07:00:58.082 232437 DEBUG nova.compute.manager [None req-eada883f-c420-4743-83b7-c804aa458a68 56a80bccd06a46eb841b4a39bdc45f76 e2869bdde00d4ef8bea339be8805aa3f - - default default] [instance: f9728486-7db3-4d21-bd7a-b41b891489a5] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Dec  6 02:00:58 np0005548731 nova_compute[232433]: 2025-12-06 07:00:58.082 232437 DEBUG nova.network.neutron [None req-eada883f-c420-4743-83b7-c804aa458a68 56a80bccd06a46eb841b4a39bdc45f76 e2869bdde00d4ef8bea339be8805aa3f - - default default] [instance: f9728486-7db3-4d21-bd7a-b41b891489a5] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Dec  6 02:00:58 np0005548731 nova_compute[232433]: 2025-12-06 07:00:58.107 232437 DEBUG nova.objects.instance [None req-a5e4b5dd-27d7-4e35-b18c-720c4b7c81fb 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Lazy-loading 'pci_devices' on Instance uuid 8818a36b-f8ca-411f-9037-85036a64a941 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:00:58 np0005548731 nova_compute[232433]: 2025-12-06 07:00:58.111 232437 INFO nova.virt.libvirt.driver [None req-eada883f-c420-4743-83b7-c804aa458a68 56a80bccd06a46eb841b4a39bdc45f76 e2869bdde00d4ef8bea339be8805aa3f - - default default] [instance: f9728486-7db3-4d21-bd7a-b41b891489a5] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Dec  6 02:00:58 np0005548731 nova_compute[232433]: 2025-12-06 07:00:58.164 232437 INFO nova.compute.resource_tracker [None req-a5e4b5dd-27d7-4e35-b18c-720c4b7c81fb 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] [instance: 8818a36b-f8ca-411f-9037-85036a64a941] Updating resource usage from migration 9dc19b80-b16f-48ae-b0fd-0e7051a28cdf#033[00m
Dec  6 02:00:58 np0005548731 nova_compute[232433]: 2025-12-06 07:00:58.164 232437 DEBUG nova.compute.resource_tracker [None req-a5e4b5dd-27d7-4e35-b18c-720c4b7c81fb 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] [instance: 8818a36b-f8ca-411f-9037-85036a64a941] Starting to track incoming migration 9dc19b80-b16f-48ae-b0fd-0e7051a28cdf with flavor fb97f55a-36c0-42f2-8156-c1b04eb23dd0 _update_usage_from_migration /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1431#033[00m
Dec  6 02:00:58 np0005548731 nova_compute[232433]: 2025-12-06 07:00:58.166 232437 DEBUG nova.compute.manager [None req-eada883f-c420-4743-83b7-c804aa458a68 56a80bccd06a46eb841b4a39bdc45f76 e2869bdde00d4ef8bea339be8805aa3f - - default default] [instance: f9728486-7db3-4d21-bd7a-b41b891489a5] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Dec  6 02:00:58 np0005548731 nova_compute[232433]: 2025-12-06 07:00:58.264 232437 DEBUG nova.compute.manager [None req-eada883f-c420-4743-83b7-c804aa458a68 56a80bccd06a46eb841b4a39bdc45f76 e2869bdde00d4ef8bea339be8805aa3f - - default default] [instance: f9728486-7db3-4d21-bd7a-b41b891489a5] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Dec  6 02:00:58 np0005548731 nova_compute[232433]: 2025-12-06 07:00:58.266 232437 DEBUG nova.virt.libvirt.driver [None req-eada883f-c420-4743-83b7-c804aa458a68 56a80bccd06a46eb841b4a39bdc45f76 e2869bdde00d4ef8bea339be8805aa3f - - default default] [instance: f9728486-7db3-4d21-bd7a-b41b891489a5] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Dec  6 02:00:58 np0005548731 nova_compute[232433]: 2025-12-06 07:00:58.266 232437 INFO nova.virt.libvirt.driver [None req-eada883f-c420-4743-83b7-c804aa458a68 56a80bccd06a46eb841b4a39bdc45f76 e2869bdde00d4ef8bea339be8805aa3f - - default default] [instance: f9728486-7db3-4d21-bd7a-b41b891489a5] Creating image(s)#033[00m
Dec  6 02:00:58 np0005548731 nova_compute[232433]: 2025-12-06 07:00:58.289 232437 DEBUG nova.storage.rbd_utils [None req-eada883f-c420-4743-83b7-c804aa458a68 56a80bccd06a46eb841b4a39bdc45f76 e2869bdde00d4ef8bea339be8805aa3f - - default default] rbd image f9728486-7db3-4d21-bd7a-b41b891489a5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:00:58 np0005548731 nova_compute[232433]: 2025-12-06 07:00:58.312 232437 DEBUG nova.storage.rbd_utils [None req-eada883f-c420-4743-83b7-c804aa458a68 56a80bccd06a46eb841b4a39bdc45f76 e2869bdde00d4ef8bea339be8805aa3f - - default default] rbd image f9728486-7db3-4d21-bd7a-b41b891489a5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:00:58 np0005548731 nova_compute[232433]: 2025-12-06 07:00:58.334 232437 DEBUG nova.storage.rbd_utils [None req-eada883f-c420-4743-83b7-c804aa458a68 56a80bccd06a46eb841b4a39bdc45f76 e2869bdde00d4ef8bea339be8805aa3f - - default default] rbd image f9728486-7db3-4d21-bd7a-b41b891489a5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:00:58 np0005548731 nova_compute[232433]: 2025-12-06 07:00:58.338 232437 DEBUG oslo_concurrency.processutils [None req-eada883f-c420-4743-83b7-c804aa458a68 56a80bccd06a46eb841b4a39bdc45f76 e2869bdde00d4ef8bea339be8805aa3f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:00:58 np0005548731 nova_compute[232433]: 2025-12-06 07:00:58.357 232437 DEBUG oslo_concurrency.processutils [None req-a5e4b5dd-27d7-4e35-b18c-720c4b7c81fb 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:00:58 np0005548731 nova_compute[232433]: 2025-12-06 07:00:58.397 232437 DEBUG oslo_concurrency.processutils [None req-eada883f-c420-4743-83b7-c804aa458a68 56a80bccd06a46eb841b4a39bdc45f76 e2869bdde00d4ef8bea339be8805aa3f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:00:58 np0005548731 nova_compute[232433]: 2025-12-06 07:00:58.398 232437 DEBUG oslo_concurrency.lockutils [None req-eada883f-c420-4743-83b7-c804aa458a68 56a80bccd06a46eb841b4a39bdc45f76 e2869bdde00d4ef8bea339be8805aa3f - - default default] Acquiring lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:00:58 np0005548731 nova_compute[232433]: 2025-12-06 07:00:58.399 232437 DEBUG oslo_concurrency.lockutils [None req-eada883f-c420-4743-83b7-c804aa458a68 56a80bccd06a46eb841b4a39bdc45f76 e2869bdde00d4ef8bea339be8805aa3f - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:00:58 np0005548731 nova_compute[232433]: 2025-12-06 07:00:58.399 232437 DEBUG oslo_concurrency.lockutils [None req-eada883f-c420-4743-83b7-c804aa458a68 56a80bccd06a46eb841b4a39bdc45f76 e2869bdde00d4ef8bea339be8805aa3f - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:00:58 np0005548731 nova_compute[232433]: 2025-12-06 07:00:58.422 232437 DEBUG nova.storage.rbd_utils [None req-eada883f-c420-4743-83b7-c804aa458a68 56a80bccd06a46eb841b4a39bdc45f76 e2869bdde00d4ef8bea339be8805aa3f - - default default] rbd image f9728486-7db3-4d21-bd7a-b41b891489a5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:00:58 np0005548731 nova_compute[232433]: 2025-12-06 07:00:58.426 232437 DEBUG oslo_concurrency.processutils [None req-eada883f-c420-4743-83b7-c804aa458a68 56a80bccd06a46eb841b4a39bdc45f76 e2869bdde00d4ef8bea339be8805aa3f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef f9728486-7db3-4d21-bd7a-b41b891489a5_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:00:58 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 02:00:58 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/4229951688' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 02:00:58 np0005548731 nova_compute[232433]: 2025-12-06 07:00:58.805 232437 DEBUG oslo_concurrency.processutils [None req-a5e4b5dd-27d7-4e35-b18c-720c4b7c81fb 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.448s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:00:58 np0005548731 nova_compute[232433]: 2025-12-06 07:00:58.812 232437 DEBUG nova.compute.provider_tree [None req-a5e4b5dd-27d7-4e35-b18c-720c4b7c81fb 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Inventory has not changed in ProviderTree for provider: 6d00757a-082f-486d-ae84-869a2ba2e6e7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  6 02:00:58 np0005548731 podman[240130]: 2025-12-06 07:00:58.896595011 +0000 UTC m=+0.054639192 container health_status 26c2ce9bdb83c6db5ccad7f5b2a7cb4e9354ab0235ad2f942ea42c8feda2142b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_metadata_agent)
Dec  6 02:00:58 np0005548731 podman[240132]: 2025-12-06 07:00:58.903089749 +0000 UTC m=+0.056586860 container health_status ae4fd08cdec31d6ae2a6b91ffff7deb6ca5a8375cb52ee4b685cc6f25796824e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  6 02:00:58 np0005548731 podman[240131]: 2025-12-06 07:00:58.93045811 +0000 UTC m=+0.085496478 container health_status a118c3becf56cfbcae208065771ebf10a65162bb7b96389971293e534823fb78 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec  6 02:00:59 np0005548731 nova_compute[232433]: 2025-12-06 07:00:59.041 232437 DEBUG nova.scheduler.client.report [None req-a5e4b5dd-27d7-4e35-b18c-720c4b7c81fb 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Inventory has not changed for provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  6 02:00:59 np0005548731 nova_compute[232433]: 2025-12-06 07:00:59.223 232437 DEBUG nova.policy [None req-eada883f-c420-4743-83b7-c804aa458a68 56a80bccd06a46eb841b4a39bdc45f76 e2869bdde00d4ef8bea339be8805aa3f - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '56a80bccd06a46eb841b4a39bdc45f76', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'e2869bdde00d4ef8bea339be8805aa3f', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Dec  6 02:00:59 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:00:59 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:00:59 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:00:59.432 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:00:59 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:00:59 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:00:59 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:00:59.552 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:00:59 np0005548731 nova_compute[232433]: 2025-12-06 07:00:59.767 232437 DEBUG oslo_concurrency.lockutils [None req-a5e4b5dd-27d7-4e35-b18c-720c4b7c81fb 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.resize_claim" :: held 1.766s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:00:59 np0005548731 nova_compute[232433]: 2025-12-06 07:00:59.768 232437 INFO nova.compute.manager [None req-a5e4b5dd-27d7-4e35-b18c-720c4b7c81fb 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] [instance: 8818a36b-f8ca-411f-9037-85036a64a941] Migrating#033[00m
Dec  6 02:00:59 np0005548731 nova_compute[232433]: 2025-12-06 07:00:59.973 232437 DEBUG oslo_concurrency.processutils [None req-eada883f-c420-4743-83b7-c804aa458a68 56a80bccd06a46eb841b4a39bdc45f76 e2869bdde00d4ef8bea339be8805aa3f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef f9728486-7db3-4d21-bd7a-b41b891489a5_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.547s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:01:00 np0005548731 nova_compute[232433]: 2025-12-06 07:01:00.051 232437 DEBUG nova.storage.rbd_utils [None req-eada883f-c420-4743-83b7-c804aa458a68 56a80bccd06a46eb841b4a39bdc45f76 e2869bdde00d4ef8bea339be8805aa3f - - default default] resizing rbd image f9728486-7db3-4d21-bd7a-b41b891489a5_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Dec  6 02:01:00 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:01:00 np0005548731 nova_compute[232433]: 2025-12-06 07:01:00.398 232437 DEBUG nova.objects.instance [None req-eada883f-c420-4743-83b7-c804aa458a68 56a80bccd06a46eb841b4a39bdc45f76 e2869bdde00d4ef8bea339be8805aa3f - - default default] Lazy-loading 'migration_context' on Instance uuid f9728486-7db3-4d21-bd7a-b41b891489a5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:01:00 np0005548731 nova_compute[232433]: 2025-12-06 07:01:00.443 232437 DEBUG nova.storage.rbd_utils [None req-eada883f-c420-4743-83b7-c804aa458a68 56a80bccd06a46eb841b4a39bdc45f76 e2869bdde00d4ef8bea339be8805aa3f - - default default] rbd image f9728486-7db3-4d21-bd7a-b41b891489a5_disk.eph0 does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:01:00 np0005548731 nova_compute[232433]: 2025-12-06 07:01:00.473 232437 DEBUG nova.storage.rbd_utils [None req-eada883f-c420-4743-83b7-c804aa458a68 56a80bccd06a46eb841b4a39bdc45f76 e2869bdde00d4ef8bea339be8805aa3f - - default default] rbd image f9728486-7db3-4d21-bd7a-b41b891489a5_disk.eph0 does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:01:00 np0005548731 nova_compute[232433]: 2025-12-06 07:01:00.476 232437 DEBUG oslo_concurrency.lockutils [None req-eada883f-c420-4743-83b7-c804aa458a68 56a80bccd06a46eb841b4a39bdc45f76 e2869bdde00d4ef8bea339be8805aa3f - - default default] Acquiring lock "ephemeral_1_0706d66" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:01:00 np0005548731 nova_compute[232433]: 2025-12-06 07:01:00.477 232437 DEBUG oslo_concurrency.lockutils [None req-eada883f-c420-4743-83b7-c804aa458a68 56a80bccd06a46eb841b4a39bdc45f76 e2869bdde00d4ef8bea339be8805aa3f - - default default] Lock "ephemeral_1_0706d66" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:01:00 np0005548731 nova_compute[232433]: 2025-12-06 07:01:00.477 232437 DEBUG oslo_concurrency.processutils [None req-eada883f-c420-4743-83b7-c804aa458a68 56a80bccd06a46eb841b4a39bdc45f76 e2869bdde00d4ef8bea339be8805aa3f - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f raw /var/lib/nova/instances/_base/ephemeral_1_0706d66 1G execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:01:00 np0005548731 nova_compute[232433]: 2025-12-06 07:01:00.503 232437 DEBUG oslo_concurrency.processutils [None req-eada883f-c420-4743-83b7-c804aa458a68 56a80bccd06a46eb841b4a39bdc45f76 e2869bdde00d4ef8bea339be8805aa3f - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f raw /var/lib/nova/instances/_base/ephemeral_1_0706d66 1G" returned: 0 in 0.026s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:01:00 np0005548731 nova_compute[232433]: 2025-12-06 07:01:00.505 232437 DEBUG oslo_concurrency.processutils [None req-eada883f-c420-4743-83b7-c804aa458a68 56a80bccd06a46eb841b4a39bdc45f76 e2869bdde00d4ef8bea339be8805aa3f - - default default] Running cmd (subprocess): mkfs -t vfat -n ephemeral0 /var/lib/nova/instances/_base/ephemeral_1_0706d66 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:01:00 np0005548731 nova_compute[232433]: 2025-12-06 07:01:00.548 232437 DEBUG oslo_concurrency.processutils [None req-eada883f-c420-4743-83b7-c804aa458a68 56a80bccd06a46eb841b4a39bdc45f76 e2869bdde00d4ef8bea339be8805aa3f - - default default] CMD "mkfs -t vfat -n ephemeral0 /var/lib/nova/instances/_base/ephemeral_1_0706d66" returned: 0 in 0.043s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:01:00 np0005548731 nova_compute[232433]: 2025-12-06 07:01:00.549 232437 DEBUG oslo_concurrency.lockutils [None req-eada883f-c420-4743-83b7-c804aa458a68 56a80bccd06a46eb841b4a39bdc45f76 e2869bdde00d4ef8bea339be8805aa3f - - default default] Lock "ephemeral_1_0706d66" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.073s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:01:00 np0005548731 nova_compute[232433]: 2025-12-06 07:01:00.576 232437 DEBUG nova.storage.rbd_utils [None req-eada883f-c420-4743-83b7-c804aa458a68 56a80bccd06a46eb841b4a39bdc45f76 e2869bdde00d4ef8bea339be8805aa3f - - default default] rbd image f9728486-7db3-4d21-bd7a-b41b891489a5_disk.eph0 does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:01:00 np0005548731 nova_compute[232433]: 2025-12-06 07:01:00.581 232437 DEBUG oslo_concurrency.processutils [None req-eada883f-c420-4743-83b7-c804aa458a68 56a80bccd06a46eb841b4a39bdc45f76 e2869bdde00d4ef8bea339be8805aa3f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ephemeral_1_0706d66 f9728486-7db3-4d21-bd7a-b41b891489a5_disk.eph0 --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:01:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:01:00.845 143965 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:01:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:01:00.846 143965 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:01:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:01:00.846 143965 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:01:00 np0005548731 nova_compute[232433]: 2025-12-06 07:01:00.896 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:01:01 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:01:01 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:01:01 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:01:01.435 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:01:01 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:01:01 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:01:01 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:01:01.554 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:01:01 np0005548731 nova_compute[232433]: 2025-12-06 07:01:01.653 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:01:01 np0005548731 nova_compute[232433]: 2025-12-06 07:01:01.665 232437 DEBUG oslo_concurrency.processutils [None req-eada883f-c420-4743-83b7-c804aa458a68 56a80bccd06a46eb841b4a39bdc45f76 e2869bdde00d4ef8bea339be8805aa3f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ephemeral_1_0706d66 f9728486-7db3-4d21-bd7a-b41b891489a5_disk.eph0 --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.084s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:01:01 np0005548731 nova_compute[232433]: 2025-12-06 07:01:01.755 232437 DEBUG nova.virt.libvirt.driver [None req-eada883f-c420-4743-83b7-c804aa458a68 56a80bccd06a46eb841b4a39bdc45f76 e2869bdde00d4ef8bea339be8805aa3f - - default default] [instance: f9728486-7db3-4d21-bd7a-b41b891489a5] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Dec  6 02:01:01 np0005548731 nova_compute[232433]: 2025-12-06 07:01:01.756 232437 DEBUG nova.virt.libvirt.driver [None req-eada883f-c420-4743-83b7-c804aa458a68 56a80bccd06a46eb841b4a39bdc45f76 e2869bdde00d4ef8bea339be8805aa3f - - default default] [instance: f9728486-7db3-4d21-bd7a-b41b891489a5] Ensure instance console log exists: /var/lib/nova/instances/f9728486-7db3-4d21-bd7a-b41b891489a5/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec  6 02:01:01 np0005548731 nova_compute[232433]: 2025-12-06 07:01:01.756 232437 DEBUG oslo_concurrency.lockutils [None req-eada883f-c420-4743-83b7-c804aa458a68 56a80bccd06a46eb841b4a39bdc45f76 e2869bdde00d4ef8bea339be8805aa3f - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:01:01 np0005548731 nova_compute[232433]: 2025-12-06 07:01:01.757 232437 DEBUG oslo_concurrency.lockutils [None req-eada883f-c420-4743-83b7-c804aa458a68 56a80bccd06a46eb841b4a39bdc45f76 e2869bdde00d4ef8bea339be8805aa3f - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:01:01 np0005548731 nova_compute[232433]: 2025-12-06 07:01:01.757 232437 DEBUG oslo_concurrency.lockutils [None req-eada883f-c420-4743-83b7-c804aa458a68 56a80bccd06a46eb841b4a39bdc45f76 e2869bdde00d4ef8bea339be8805aa3f - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:01:02 np0005548731 nova_compute[232433]: 2025-12-06 07:01:02.677 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:01:03 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:01:03 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:01:03 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:01:03.438 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:01:03 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:01:03 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:01:03 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:01:03.556 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:01:05 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:01:05 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:01:05 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:01:05 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:01:05.441 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:01:05 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:01:05 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 02:01:05 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:01:05.558 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 02:01:05 np0005548731 nova_compute[232433]: 2025-12-06 07:01:05.600 232437 DEBUG nova.network.neutron [None req-eada883f-c420-4743-83b7-c804aa458a68 56a80bccd06a46eb841b4a39bdc45f76 e2869bdde00d4ef8bea339be8805aa3f - - default default] [instance: f9728486-7db3-4d21-bd7a-b41b891489a5] Successfully created port: 196b1de8-9246-4ec3-91b5-3686f8691152 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Dec  6 02:01:05 np0005548731 nova_compute[232433]: 2025-12-06 07:01:05.942 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:01:06 np0005548731 systemd[1]: Created slice User Slice of UID 42436.
Dec  6 02:01:06 np0005548731 systemd[1]: Starting User Runtime Directory /run/user/42436...
Dec  6 02:01:06 np0005548731 systemd-logind[794]: New session 58 of user nova.
Dec  6 02:01:06 np0005548731 systemd[1]: Finished User Runtime Directory /run/user/42436.
Dec  6 02:01:06 np0005548731 systemd[1]: Starting User Manager for UID 42436...
Dec  6 02:01:06 np0005548731 nova_compute[232433]: 2025-12-06 07:01:06.221 232437 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765004451.2192683, 4fb7ec18-30a1-4bd1-8daa-fd0ac899b57e => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 02:01:06 np0005548731 nova_compute[232433]: 2025-12-06 07:01:06.222 232437 INFO nova.compute.manager [-] [instance: 4fb7ec18-30a1-4bd1-8daa-fd0ac899b57e] VM Stopped (Lifecycle Event)#033[00m
Dec  6 02:01:06 np0005548731 nova_compute[232433]: 2025-12-06 07:01:06.271 232437 DEBUG nova.compute.manager [None req-f714139d-63e2-4ee4-a0c1-9c57adf35196 - - - - - -] [instance: 4fb7ec18-30a1-4bd1-8daa-fd0ac899b57e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:01:06 np0005548731 systemd[240415]: Queued start job for default target Main User Target.
Dec  6 02:01:06 np0005548731 systemd[240415]: Created slice User Application Slice.
Dec  6 02:01:06 np0005548731 systemd[240415]: Started Mark boot as successful after the user session has run 2 minutes.
Dec  6 02:01:06 np0005548731 systemd[240415]: Started Daily Cleanup of User's Temporary Directories.
Dec  6 02:01:06 np0005548731 systemd[240415]: Reached target Paths.
Dec  6 02:01:06 np0005548731 systemd[240415]: Reached target Timers.
Dec  6 02:01:06 np0005548731 systemd[240415]: Starting D-Bus User Message Bus Socket...
Dec  6 02:01:06 np0005548731 systemd[240415]: Starting Create User's Volatile Files and Directories...
Dec  6 02:01:06 np0005548731 systemd[240415]: Finished Create User's Volatile Files and Directories.
Dec  6 02:01:06 np0005548731 systemd[240415]: Listening on D-Bus User Message Bus Socket.
Dec  6 02:01:06 np0005548731 systemd[240415]: Reached target Sockets.
Dec  6 02:01:06 np0005548731 systemd[240415]: Reached target Basic System.
Dec  6 02:01:06 np0005548731 systemd[240415]: Reached target Main User Target.
Dec  6 02:01:06 np0005548731 systemd[240415]: Startup finished in 188ms.
Dec  6 02:01:06 np0005548731 systemd[1]: Started User Manager for UID 42436.
Dec  6 02:01:06 np0005548731 systemd[1]: Started Session 58 of User nova.
Dec  6 02:01:06 np0005548731 systemd[1]: session-58.scope: Deactivated successfully.
Dec  6 02:01:06 np0005548731 systemd-logind[794]: Session 58 logged out. Waiting for processes to exit.
Dec  6 02:01:06 np0005548731 systemd-logind[794]: Removed session 58.
Dec  6 02:01:06 np0005548731 systemd-logind[794]: New session 60 of user nova.
Dec  6 02:01:06 np0005548731 systemd[1]: Started Session 60 of User nova.
Dec  6 02:01:06 np0005548731 systemd[1]: session-60.scope: Deactivated successfully.
Dec  6 02:01:06 np0005548731 systemd-logind[794]: Session 60 logged out. Waiting for processes to exit.
Dec  6 02:01:06 np0005548731 systemd-logind[794]: Removed session 60.
Dec  6 02:01:06 np0005548731 nova_compute[232433]: 2025-12-06 07:01:06.655 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:01:06 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 02:01:06 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/347649379' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 02:01:07 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:01:07 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:01:07 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:01:07.444 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:01:07 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:01:07 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 02:01:07 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:01:07.560 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 02:01:07 np0005548731 nova_compute[232433]: 2025-12-06 07:01:07.702 232437 DEBUG nova.network.neutron [None req-eada883f-c420-4743-83b7-c804aa458a68 56a80bccd06a46eb841b4a39bdc45f76 e2869bdde00d4ef8bea339be8805aa3f - - default default] [instance: f9728486-7db3-4d21-bd7a-b41b891489a5] Successfully updated port: 196b1de8-9246-4ec3-91b5-3686f8691152 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Dec  6 02:01:07 np0005548731 nova_compute[232433]: 2025-12-06 07:01:07.879 232437 DEBUG oslo_concurrency.lockutils [None req-eada883f-c420-4743-83b7-c804aa458a68 56a80bccd06a46eb841b4a39bdc45f76 e2869bdde00d4ef8bea339be8805aa3f - - default default] Acquiring lock "refresh_cache-f9728486-7db3-4d21-bd7a-b41b891489a5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 02:01:07 np0005548731 nova_compute[232433]: 2025-12-06 07:01:07.880 232437 DEBUG oslo_concurrency.lockutils [None req-eada883f-c420-4743-83b7-c804aa458a68 56a80bccd06a46eb841b4a39bdc45f76 e2869bdde00d4ef8bea339be8805aa3f - - default default] Acquired lock "refresh_cache-f9728486-7db3-4d21-bd7a-b41b891489a5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 02:01:07 np0005548731 nova_compute[232433]: 2025-12-06 07:01:07.880 232437 DEBUG nova.network.neutron [None req-eada883f-c420-4743-83b7-c804aa458a68 56a80bccd06a46eb841b4a39bdc45f76 e2869bdde00d4ef8bea339be8805aa3f - - default default] [instance: f9728486-7db3-4d21-bd7a-b41b891489a5] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec  6 02:01:07 np0005548731 nova_compute[232433]: 2025-12-06 07:01:07.884 232437 DEBUG nova.compute.manager [req-750fd448-e6aa-42ad-a59d-8e71c78c5b9e req-9b5bd306-8a53-4ff2-91e8-36bd5ea0ea5e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f9728486-7db3-4d21-bd7a-b41b891489a5] Received event network-changed-196b1de8-9246-4ec3-91b5-3686f8691152 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:01:07 np0005548731 nova_compute[232433]: 2025-12-06 07:01:07.884 232437 DEBUG nova.compute.manager [req-750fd448-e6aa-42ad-a59d-8e71c78c5b9e req-9b5bd306-8a53-4ff2-91e8-36bd5ea0ea5e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f9728486-7db3-4d21-bd7a-b41b891489a5] Refreshing instance network info cache due to event network-changed-196b1de8-9246-4ec3-91b5-3686f8691152. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  6 02:01:07 np0005548731 nova_compute[232433]: 2025-12-06 07:01:07.884 232437 DEBUG oslo_concurrency.lockutils [req-750fd448-e6aa-42ad-a59d-8e71c78c5b9e req-9b5bd306-8a53-4ff2-91e8-36bd5ea0ea5e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-f9728486-7db3-4d21-bd7a-b41b891489a5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 02:01:08 np0005548731 nova_compute[232433]: 2025-12-06 07:01:08.271 232437 DEBUG nova.network.neutron [None req-eada883f-c420-4743-83b7-c804aa458a68 56a80bccd06a46eb841b4a39bdc45f76 e2869bdde00d4ef8bea339be8805aa3f - - default default] [instance: f9728486-7db3-4d21-bd7a-b41b891489a5] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec  6 02:01:09 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:01:09 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:01:09 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:01:09.446 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:01:09 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:01:09 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:01:09 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:01:09.562 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:01:10 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:01:10 np0005548731 nova_compute[232433]: 2025-12-06 07:01:10.420 232437 DEBUG oslo_concurrency.lockutils [None req-a5e4b5dd-27d7-4e35-b18c-720c4b7c81fb 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Acquiring lock "refresh_cache-8818a36b-f8ca-411f-9037-85036a64a941" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 02:01:10 np0005548731 nova_compute[232433]: 2025-12-06 07:01:10.421 232437 DEBUG oslo_concurrency.lockutils [None req-a5e4b5dd-27d7-4e35-b18c-720c4b7c81fb 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Acquired lock "refresh_cache-8818a36b-f8ca-411f-9037-85036a64a941" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 02:01:10 np0005548731 nova_compute[232433]: 2025-12-06 07:01:10.421 232437 DEBUG nova.network.neutron [None req-a5e4b5dd-27d7-4e35-b18c-720c4b7c81fb 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] [instance: 8818a36b-f8ca-411f-9037-85036a64a941] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec  6 02:01:10 np0005548731 nova_compute[232433]: 2025-12-06 07:01:10.945 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:01:11 np0005548731 nova_compute[232433]: 2025-12-06 07:01:11.168 232437 DEBUG nova.network.neutron [None req-a5e4b5dd-27d7-4e35-b18c-720c4b7c81fb 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] [instance: 8818a36b-f8ca-411f-9037-85036a64a941] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec  6 02:01:11 np0005548731 nova_compute[232433]: 2025-12-06 07:01:11.202 232437 DEBUG nova.network.neutron [None req-eada883f-c420-4743-83b7-c804aa458a68 56a80bccd06a46eb841b4a39bdc45f76 e2869bdde00d4ef8bea339be8805aa3f - - default default] [instance: f9728486-7db3-4d21-bd7a-b41b891489a5] Updating instance_info_cache with network_info: [{"id": "196b1de8-9246-4ec3-91b5-3686f8691152", "address": "fa:16:3e:ff:51:7a", "network": {"id": "a0468f21-498f-424d-a04a-d011b4adc72e", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-40054798-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e2869bdde00d4ef8bea339be8805aa3f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap196b1de8-92", "ovs_interfaceid": "196b1de8-9246-4ec3-91b5-3686f8691152", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 02:01:11 np0005548731 nova_compute[232433]: 2025-12-06 07:01:11.445 232437 DEBUG nova.network.neutron [None req-a5e4b5dd-27d7-4e35-b18c-720c4b7c81fb 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] [instance: 8818a36b-f8ca-411f-9037-85036a64a941] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 02:01:11 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:01:11 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:01:11 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:01:11.449 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:01:11 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:01:11 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:01:11 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:01:11.565 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:01:11 np0005548731 nova_compute[232433]: 2025-12-06 07:01:11.658 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:01:11 np0005548731 nova_compute[232433]: 2025-12-06 07:01:11.832 232437 DEBUG oslo_concurrency.lockutils [None req-eada883f-c420-4743-83b7-c804aa458a68 56a80bccd06a46eb841b4a39bdc45f76 e2869bdde00d4ef8bea339be8805aa3f - - default default] Releasing lock "refresh_cache-f9728486-7db3-4d21-bd7a-b41b891489a5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 02:01:11 np0005548731 nova_compute[232433]: 2025-12-06 07:01:11.833 232437 DEBUG nova.compute.manager [None req-eada883f-c420-4743-83b7-c804aa458a68 56a80bccd06a46eb841b4a39bdc45f76 e2869bdde00d4ef8bea339be8805aa3f - - default default] [instance: f9728486-7db3-4d21-bd7a-b41b891489a5] Instance network_info: |[{"id": "196b1de8-9246-4ec3-91b5-3686f8691152", "address": "fa:16:3e:ff:51:7a", "network": {"id": "a0468f21-498f-424d-a04a-d011b4adc72e", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-40054798-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e2869bdde00d4ef8bea339be8805aa3f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap196b1de8-92", "ovs_interfaceid": "196b1de8-9246-4ec3-91b5-3686f8691152", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Dec  6 02:01:11 np0005548731 nova_compute[232433]: 2025-12-06 07:01:11.833 232437 DEBUG oslo_concurrency.lockutils [req-750fd448-e6aa-42ad-a59d-8e71c78c5b9e req-9b5bd306-8a53-4ff2-91e8-36bd5ea0ea5e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-f9728486-7db3-4d21-bd7a-b41b891489a5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 02:01:11 np0005548731 nova_compute[232433]: 2025-12-06 07:01:11.834 232437 DEBUG nova.network.neutron [req-750fd448-e6aa-42ad-a59d-8e71c78c5b9e req-9b5bd306-8a53-4ff2-91e8-36bd5ea0ea5e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f9728486-7db3-4d21-bd7a-b41b891489a5] Refreshing network info cache for port 196b1de8-9246-4ec3-91b5-3686f8691152 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  6 02:01:11 np0005548731 nova_compute[232433]: 2025-12-06 07:01:11.837 232437 DEBUG nova.virt.libvirt.driver [None req-eada883f-c420-4743-83b7-c804aa458a68 56a80bccd06a46eb841b4a39bdc45f76 e2869bdde00d4ef8bea339be8805aa3f - - default default] [instance: f9728486-7db3-4d21-bd7a-b41b891489a5] Start _get_guest_xml network_info=[{"id": "196b1de8-9246-4ec3-91b5-3686f8691152", "address": "fa:16:3e:ff:51:7a", "network": {"id": "a0468f21-498f-424d-a04a-d011b4adc72e", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-40054798-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e2869bdde00d4ef8bea339be8805aa3f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap196b1de8-92", "ovs_interfaceid": "196b1de8-9246-4ec3-91b5-3686f8691152", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.eph0': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'device_type': 'disk', 'encryption_secret_uuid': None, 'disk_bus': 'virtio', 'boot_index': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'encryption_options': None, 'size': 0, 'encrypted': False, 'image_id': '6efab05d-c7cf-4770-a5c3-c806a2739063'}], 'ephemerals': [{'guest_format': None, 'device_type': 'disk', 'encryption_secret_uuid': None, 'disk_bus': 'virtio', 'encryption_format': None, 'device_name': '/dev/vdb', 'encryption_options': None, 'size': 1, 'encrypted': False}], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec  6 02:01:11 np0005548731 nova_compute[232433]: 2025-12-06 07:01:11.843 232437 WARNING nova.virt.libvirt.driver [None req-eada883f-c420-4743-83b7-c804aa458a68 56a80bccd06a46eb841b4a39bdc45f76 e2869bdde00d4ef8bea339be8805aa3f - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  6 02:01:11 np0005548731 nova_compute[232433]: 2025-12-06 07:01:11.852 232437 DEBUG nova.virt.libvirt.host [None req-eada883f-c420-4743-83b7-c804aa458a68 56a80bccd06a46eb841b4a39bdc45f76 e2869bdde00d4ef8bea339be8805aa3f - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec  6 02:01:11 np0005548731 nova_compute[232433]: 2025-12-06 07:01:11.853 232437 DEBUG nova.virt.libvirt.host [None req-eada883f-c420-4743-83b7-c804aa458a68 56a80bccd06a46eb841b4a39bdc45f76 e2869bdde00d4ef8bea339be8805aa3f - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec  6 02:01:11 np0005548731 nova_compute[232433]: 2025-12-06 07:01:11.857 232437 DEBUG nova.virt.libvirt.host [None req-eada883f-c420-4743-83b7-c804aa458a68 56a80bccd06a46eb841b4a39bdc45f76 e2869bdde00d4ef8bea339be8805aa3f - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec  6 02:01:11 np0005548731 nova_compute[232433]: 2025-12-06 07:01:11.857 232437 DEBUG nova.virt.libvirt.host [None req-eada883f-c420-4743-83b7-c804aa458a68 56a80bccd06a46eb841b4a39bdc45f76 e2869bdde00d4ef8bea339be8805aa3f - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec  6 02:01:11 np0005548731 nova_compute[232433]: 2025-12-06 07:01:11.859 232437 DEBUG nova.virt.libvirt.driver [None req-eada883f-c420-4743-83b7-c804aa458a68 56a80bccd06a46eb841b4a39bdc45f76 e2869bdde00d4ef8bea339be8805aa3f - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec  6 02:01:11 np0005548731 nova_compute[232433]: 2025-12-06 07:01:11.859 232437 DEBUG nova.virt.hardware [None req-eada883f-c420-4743-83b7-c804aa458a68 56a80bccd06a46eb841b4a39bdc45f76 e2869bdde00d4ef8bea339be8805aa3f - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-06T06:59:48Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=1,extra_specs={hw_rng:allowed='True'},flavorid='1891906518',id=22,is_public=True,memory_mb=128,name='tempest-flavor_with_ephemeral_1-857994228',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec  6 02:01:11 np0005548731 nova_compute[232433]: 2025-12-06 07:01:11.859 232437 DEBUG nova.virt.hardware [None req-eada883f-c420-4743-83b7-c804aa458a68 56a80bccd06a46eb841b4a39bdc45f76 e2869bdde00d4ef8bea339be8805aa3f - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec  6 02:01:11 np0005548731 nova_compute[232433]: 2025-12-06 07:01:11.860 232437 DEBUG nova.virt.hardware [None req-eada883f-c420-4743-83b7-c804aa458a68 56a80bccd06a46eb841b4a39bdc45f76 e2869bdde00d4ef8bea339be8805aa3f - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec  6 02:01:11 np0005548731 nova_compute[232433]: 2025-12-06 07:01:11.860 232437 DEBUG nova.virt.hardware [None req-eada883f-c420-4743-83b7-c804aa458a68 56a80bccd06a46eb841b4a39bdc45f76 e2869bdde00d4ef8bea339be8805aa3f - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec  6 02:01:11 np0005548731 nova_compute[232433]: 2025-12-06 07:01:11.860 232437 DEBUG nova.virt.hardware [None req-eada883f-c420-4743-83b7-c804aa458a68 56a80bccd06a46eb841b4a39bdc45f76 e2869bdde00d4ef8bea339be8805aa3f - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec  6 02:01:11 np0005548731 nova_compute[232433]: 2025-12-06 07:01:11.860 232437 DEBUG nova.virt.hardware [None req-eada883f-c420-4743-83b7-c804aa458a68 56a80bccd06a46eb841b4a39bdc45f76 e2869bdde00d4ef8bea339be8805aa3f - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec  6 02:01:11 np0005548731 nova_compute[232433]: 2025-12-06 07:01:11.861 232437 DEBUG nova.virt.hardware [None req-eada883f-c420-4743-83b7-c804aa458a68 56a80bccd06a46eb841b4a39bdc45f76 e2869bdde00d4ef8bea339be8805aa3f - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec  6 02:01:11 np0005548731 nova_compute[232433]: 2025-12-06 07:01:11.861 232437 DEBUG nova.virt.hardware [None req-eada883f-c420-4743-83b7-c804aa458a68 56a80bccd06a46eb841b4a39bdc45f76 e2869bdde00d4ef8bea339be8805aa3f - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec  6 02:01:11 np0005548731 nova_compute[232433]: 2025-12-06 07:01:11.861 232437 DEBUG nova.virt.hardware [None req-eada883f-c420-4743-83b7-c804aa458a68 56a80bccd06a46eb841b4a39bdc45f76 e2869bdde00d4ef8bea339be8805aa3f - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec  6 02:01:11 np0005548731 nova_compute[232433]: 2025-12-06 07:01:11.861 232437 DEBUG nova.virt.hardware [None req-eada883f-c420-4743-83b7-c804aa458a68 56a80bccd06a46eb841b4a39bdc45f76 e2869bdde00d4ef8bea339be8805aa3f - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec  6 02:01:11 np0005548731 nova_compute[232433]: 2025-12-06 07:01:11.862 232437 DEBUG nova.virt.hardware [None req-eada883f-c420-4743-83b7-c804aa458a68 56a80bccd06a46eb841b4a39bdc45f76 e2869bdde00d4ef8bea339be8805aa3f - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec  6 02:01:11 np0005548731 nova_compute[232433]: 2025-12-06 07:01:11.865 232437 DEBUG oslo_concurrency.processutils [None req-eada883f-c420-4743-83b7-c804aa458a68 56a80bccd06a46eb841b4a39bdc45f76 e2869bdde00d4ef8bea339be8805aa3f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:01:11 np0005548731 nova_compute[232433]: 2025-12-06 07:01:11.891 232437 DEBUG oslo_concurrency.lockutils [None req-a5e4b5dd-27d7-4e35-b18c-720c4b7c81fb 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Releasing lock "refresh_cache-8818a36b-f8ca-411f-9037-85036a64a941" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 02:01:12 np0005548731 nova_compute[232433]: 2025-12-06 07:01:12.239 232437 DEBUG nova.virt.libvirt.driver [None req-a5e4b5dd-27d7-4e35-b18c-720c4b7c81fb 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] [instance: 8818a36b-f8ca-411f-9037-85036a64a941] Starting finish_migration finish_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11698#033[00m
Dec  6 02:01:12 np0005548731 nova_compute[232433]: 2025-12-06 07:01:12.241 232437 DEBUG nova.virt.libvirt.driver [None req-a5e4b5dd-27d7-4e35-b18c-720c4b7c81fb 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] [instance: 8818a36b-f8ca-411f-9037-85036a64a941] Instance directory exists: not creating _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4719#033[00m
Dec  6 02:01:12 np0005548731 nova_compute[232433]: 2025-12-06 07:01:12.242 232437 INFO nova.virt.libvirt.driver [None req-a5e4b5dd-27d7-4e35-b18c-720c4b7c81fb 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] [instance: 8818a36b-f8ca-411f-9037-85036a64a941] Creating image(s)#033[00m
Dec  6 02:01:12 np0005548731 nova_compute[232433]: 2025-12-06 07:01:12.282 232437 DEBUG nova.storage.rbd_utils [None req-a5e4b5dd-27d7-4e35-b18c-720c4b7c81fb 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] creating snapshot(nova-resize) on rbd image(8818a36b-f8ca-411f-9037-85036a64a941_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Dec  6 02:01:12 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  6 02:01:12 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/698066887' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  6 02:01:12 np0005548731 nova_compute[232433]: 2025-12-06 07:01:12.441 232437 DEBUG oslo_concurrency.processutils [None req-eada883f-c420-4743-83b7-c804aa458a68 56a80bccd06a46eb841b4a39bdc45f76 e2869bdde00d4ef8bea339be8805aa3f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.575s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:01:12 np0005548731 nova_compute[232433]: 2025-12-06 07:01:12.442 232437 DEBUG oslo_concurrency.processutils [None req-eada883f-c420-4743-83b7-c804aa458a68 56a80bccd06a46eb841b4a39bdc45f76 e2869bdde00d4ef8bea339be8805aa3f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:01:12 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  6 02:01:12 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2348888828' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  6 02:01:12 np0005548731 nova_compute[232433]: 2025-12-06 07:01:12.889 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:01:12 np0005548731 nova_compute[232433]: 2025-12-06 07:01:12.893 232437 DEBUG oslo_concurrency.processutils [None req-eada883f-c420-4743-83b7-c804aa458a68 56a80bccd06a46eb841b4a39bdc45f76 e2869bdde00d4ef8bea339be8805aa3f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:01:12 np0005548731 nova_compute[232433]: 2025-12-06 07:01:12.918 232437 DEBUG nova.storage.rbd_utils [None req-eada883f-c420-4743-83b7-c804aa458a68 56a80bccd06a46eb841b4a39bdc45f76 e2869bdde00d4ef8bea339be8805aa3f - - default default] rbd image f9728486-7db3-4d21-bd7a-b41b891489a5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:01:12 np0005548731 nova_compute[232433]: 2025-12-06 07:01:12.922 232437 DEBUG oslo_concurrency.processutils [None req-eada883f-c420-4743-83b7-c804aa458a68 56a80bccd06a46eb841b4a39bdc45f76 e2869bdde00d4ef8bea339be8805aa3f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:01:13 np0005548731 nova_compute[232433]: 2025-12-06 07:01:13.354 232437 DEBUG nova.network.neutron [req-750fd448-e6aa-42ad-a59d-8e71c78c5b9e req-9b5bd306-8a53-4ff2-91e8-36bd5ea0ea5e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f9728486-7db3-4d21-bd7a-b41b891489a5] Updated VIF entry in instance network info cache for port 196b1de8-9246-4ec3-91b5-3686f8691152. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  6 02:01:13 np0005548731 nova_compute[232433]: 2025-12-06 07:01:13.355 232437 DEBUG nova.network.neutron [req-750fd448-e6aa-42ad-a59d-8e71c78c5b9e req-9b5bd306-8a53-4ff2-91e8-36bd5ea0ea5e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f9728486-7db3-4d21-bd7a-b41b891489a5] Updating instance_info_cache with network_info: [{"id": "196b1de8-9246-4ec3-91b5-3686f8691152", "address": "fa:16:3e:ff:51:7a", "network": {"id": "a0468f21-498f-424d-a04a-d011b4adc72e", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-40054798-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e2869bdde00d4ef8bea339be8805aa3f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap196b1de8-92", "ovs_interfaceid": "196b1de8-9246-4ec3-91b5-3686f8691152", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 02:01:13 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  6 02:01:13 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3837711917' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  6 02:01:13 np0005548731 nova_compute[232433]: 2025-12-06 07:01:13.392 232437 DEBUG oslo_concurrency.lockutils [req-750fd448-e6aa-42ad-a59d-8e71c78c5b9e req-9b5bd306-8a53-4ff2-91e8-36bd5ea0ea5e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-f9728486-7db3-4d21-bd7a-b41b891489a5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 02:01:13 np0005548731 nova_compute[232433]: 2025-12-06 07:01:13.397 232437 DEBUG oslo_concurrency.processutils [None req-eada883f-c420-4743-83b7-c804aa458a68 56a80bccd06a46eb841b4a39bdc45f76 e2869bdde00d4ef8bea339be8805aa3f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.475s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:01:13 np0005548731 nova_compute[232433]: 2025-12-06 07:01:13.398 232437 DEBUG nova.virt.libvirt.vif [None req-eada883f-c420-4743-83b7-c804aa458a68 56a80bccd06a46eb841b4a39bdc45f76 e2869bdde00d4ef8bea339be8805aa3f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T07:00:56Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersWithSpecificFlavorTestJSON-server-1632796309',display_name='tempest-ServersWithSpecificFlavorTestJSON-server-1632796309',ec2_ids=EC2Ids,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(22),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serverswithspecificflavortestjson-server-1632796309',id=16,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=22,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLyYd2vj0IZaejjGwrppG7vtmC1xs+0JSlQJBkmu53OyTo4w/o53nNppqE2nymdT0UkGVn1vFVEZmmF+N+AjxIMF81FiLMyyI4ZyWmUw9aTkhGmFymaAYJKyFfVSYZ1oNQ==',key_name='tempest-keypair-1145479885',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='e2869bdde00d4ef8bea339be8805aa3f',ramdisk_id='',reservation_id='r-g9dey3es',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersWithSpecificFlavorTestJSON-631560428',owner_user_name='tempest-ServersWithSpecificFlavorTestJSON-631560428-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:00:58Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='56a80bccd06a46eb841b4a39bdc45f76',uuid=f9728486-7db3-4d21-bd7a-b41b891489a5,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "196b1de8-9246-4ec3-91b5-3686f8691152", "address": "fa:16:3e:ff:51:7a", "network": {"id": "a0468f21-498f-424d-a04a-d011b4adc72e", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-40054798-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": 
[{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e2869bdde00d4ef8bea339be8805aa3f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap196b1de8-92", "ovs_interfaceid": "196b1de8-9246-4ec3-91b5-3686f8691152", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec  6 02:01:13 np0005548731 nova_compute[232433]: 2025-12-06 07:01:13.399 232437 DEBUG nova.network.os_vif_util [None req-eada883f-c420-4743-83b7-c804aa458a68 56a80bccd06a46eb841b4a39bdc45f76 e2869bdde00d4ef8bea339be8805aa3f - - default default] Converting VIF {"id": "196b1de8-9246-4ec3-91b5-3686f8691152", "address": "fa:16:3e:ff:51:7a", "network": {"id": "a0468f21-498f-424d-a04a-d011b4adc72e", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-40054798-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e2869bdde00d4ef8bea339be8805aa3f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap196b1de8-92", "ovs_interfaceid": "196b1de8-9246-4ec3-91b5-3686f8691152", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 02:01:13 np0005548731 nova_compute[232433]: 2025-12-06 07:01:13.400 232437 DEBUG nova.network.os_vif_util [None req-eada883f-c420-4743-83b7-c804aa458a68 56a80bccd06a46eb841b4a39bdc45f76 e2869bdde00d4ef8bea339be8805aa3f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ff:51:7a,bridge_name='br-int',has_traffic_filtering=True,id=196b1de8-9246-4ec3-91b5-3686f8691152,network=Network(a0468f21-498f-424d-a04a-d011b4adc72e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap196b1de8-92') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 02:01:13 np0005548731 nova_compute[232433]: 2025-12-06 07:01:13.401 232437 DEBUG nova.objects.instance [None req-eada883f-c420-4743-83b7-c804aa458a68 56a80bccd06a46eb841b4a39bdc45f76 e2869bdde00d4ef8bea339be8805aa3f - - default default] Lazy-loading 'pci_devices' on Instance uuid f9728486-7db3-4d21-bd7a-b41b891489a5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:01:13 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:01:13 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:01:13 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:01:13.452 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:01:13 np0005548731 nova_compute[232433]: 2025-12-06 07:01:13.505 232437 DEBUG nova.virt.libvirt.driver [None req-eada883f-c420-4743-83b7-c804aa458a68 56a80bccd06a46eb841b4a39bdc45f76 e2869bdde00d4ef8bea339be8805aa3f - - default default] [instance: f9728486-7db3-4d21-bd7a-b41b891489a5] End _get_guest_xml xml=<domain type="kvm">
Dec  6 02:01:13 np0005548731 nova_compute[232433]:  <uuid>f9728486-7db3-4d21-bd7a-b41b891489a5</uuid>
Dec  6 02:01:13 np0005548731 nova_compute[232433]:  <name>instance-00000010</name>
Dec  6 02:01:13 np0005548731 nova_compute[232433]:  <memory>131072</memory>
Dec  6 02:01:13 np0005548731 nova_compute[232433]:  <vcpu>1</vcpu>
Dec  6 02:01:13 np0005548731 nova_compute[232433]:  <metadata>
Dec  6 02:01:13 np0005548731 nova_compute[232433]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec  6 02:01:13 np0005548731 nova_compute[232433]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec  6 02:01:13 np0005548731 nova_compute[232433]:      <nova:name>tempest-ServersWithSpecificFlavorTestJSON-server-1632796309</nova:name>
Dec  6 02:01:13 np0005548731 nova_compute[232433]:      <nova:creationTime>2025-12-06 07:01:11</nova:creationTime>
Dec  6 02:01:13 np0005548731 nova_compute[232433]:      <nova:flavor name="tempest-flavor_with_ephemeral_1-857994228">
Dec  6 02:01:13 np0005548731 nova_compute[232433]:        <nova:memory>128</nova:memory>
Dec  6 02:01:13 np0005548731 nova_compute[232433]:        <nova:disk>1</nova:disk>
Dec  6 02:01:13 np0005548731 nova_compute[232433]:        <nova:swap>0</nova:swap>
Dec  6 02:01:13 np0005548731 nova_compute[232433]:        <nova:ephemeral>1</nova:ephemeral>
Dec  6 02:01:13 np0005548731 nova_compute[232433]:        <nova:vcpus>1</nova:vcpus>
Dec  6 02:01:13 np0005548731 nova_compute[232433]:      </nova:flavor>
Dec  6 02:01:13 np0005548731 nova_compute[232433]:      <nova:owner>
Dec  6 02:01:13 np0005548731 nova_compute[232433]:        <nova:user uuid="56a80bccd06a46eb841b4a39bdc45f76">tempest-ServersWithSpecificFlavorTestJSON-631560428-project-member</nova:user>
Dec  6 02:01:13 np0005548731 nova_compute[232433]:        <nova:project uuid="e2869bdde00d4ef8bea339be8805aa3f">tempest-ServersWithSpecificFlavorTestJSON-631560428</nova:project>
Dec  6 02:01:13 np0005548731 nova_compute[232433]:      </nova:owner>
Dec  6 02:01:13 np0005548731 nova_compute[232433]:      <nova:root type="image" uuid="6efab05d-c7cf-4770-a5c3-c806a2739063"/>
Dec  6 02:01:13 np0005548731 nova_compute[232433]:      <nova:ports>
Dec  6 02:01:13 np0005548731 nova_compute[232433]:        <nova:port uuid="196b1de8-9246-4ec3-91b5-3686f8691152">
Dec  6 02:01:13 np0005548731 nova_compute[232433]:          <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Dec  6 02:01:13 np0005548731 nova_compute[232433]:        </nova:port>
Dec  6 02:01:13 np0005548731 nova_compute[232433]:      </nova:ports>
Dec  6 02:01:13 np0005548731 nova_compute[232433]:    </nova:instance>
Dec  6 02:01:13 np0005548731 nova_compute[232433]:  </metadata>
Dec  6 02:01:13 np0005548731 nova_compute[232433]:  <sysinfo type="smbios">
Dec  6 02:01:13 np0005548731 nova_compute[232433]:    <system>
Dec  6 02:01:13 np0005548731 nova_compute[232433]:      <entry name="manufacturer">RDO</entry>
Dec  6 02:01:13 np0005548731 nova_compute[232433]:      <entry name="product">OpenStack Compute</entry>
Dec  6 02:01:13 np0005548731 nova_compute[232433]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec  6 02:01:13 np0005548731 nova_compute[232433]:      <entry name="serial">f9728486-7db3-4d21-bd7a-b41b891489a5</entry>
Dec  6 02:01:13 np0005548731 nova_compute[232433]:      <entry name="uuid">f9728486-7db3-4d21-bd7a-b41b891489a5</entry>
Dec  6 02:01:13 np0005548731 nova_compute[232433]:      <entry name="family">Virtual Machine</entry>
Dec  6 02:01:13 np0005548731 nova_compute[232433]:    </system>
Dec  6 02:01:13 np0005548731 nova_compute[232433]:  </sysinfo>
Dec  6 02:01:13 np0005548731 nova_compute[232433]:  <os>
Dec  6 02:01:13 np0005548731 nova_compute[232433]:    <type arch="x86_64" machine="q35">hvm</type>
Dec  6 02:01:13 np0005548731 nova_compute[232433]:    <boot dev="hd"/>
Dec  6 02:01:13 np0005548731 nova_compute[232433]:    <smbios mode="sysinfo"/>
Dec  6 02:01:13 np0005548731 nova_compute[232433]:  </os>
Dec  6 02:01:13 np0005548731 nova_compute[232433]:  <features>
Dec  6 02:01:13 np0005548731 nova_compute[232433]:    <acpi/>
Dec  6 02:01:13 np0005548731 nova_compute[232433]:    <apic/>
Dec  6 02:01:13 np0005548731 nova_compute[232433]:    <vmcoreinfo/>
Dec  6 02:01:13 np0005548731 nova_compute[232433]:  </features>
Dec  6 02:01:13 np0005548731 nova_compute[232433]:  <clock offset="utc">
Dec  6 02:01:13 np0005548731 nova_compute[232433]:    <timer name="pit" tickpolicy="delay"/>
Dec  6 02:01:13 np0005548731 nova_compute[232433]:    <timer name="rtc" tickpolicy="catchup"/>
Dec  6 02:01:13 np0005548731 nova_compute[232433]:    <timer name="hpet" present="no"/>
Dec  6 02:01:13 np0005548731 nova_compute[232433]:  </clock>
Dec  6 02:01:13 np0005548731 nova_compute[232433]:  <cpu mode="custom" match="exact">
Dec  6 02:01:13 np0005548731 nova_compute[232433]:    <model>Nehalem</model>
Dec  6 02:01:13 np0005548731 nova_compute[232433]:    <topology sockets="1" cores="1" threads="1"/>
Dec  6 02:01:13 np0005548731 nova_compute[232433]:  </cpu>
Dec  6 02:01:13 np0005548731 nova_compute[232433]:  <devices>
Dec  6 02:01:13 np0005548731 nova_compute[232433]:    <disk type="network" device="disk">
Dec  6 02:01:13 np0005548731 nova_compute[232433]:      <driver type="raw" cache="none"/>
Dec  6 02:01:13 np0005548731 nova_compute[232433]:      <source protocol="rbd" name="vms/f9728486-7db3-4d21-bd7a-b41b891489a5_disk">
Dec  6 02:01:13 np0005548731 nova_compute[232433]:        <host name="192.168.122.100" port="6789"/>
Dec  6 02:01:13 np0005548731 nova_compute[232433]:        <host name="192.168.122.102" port="6789"/>
Dec  6 02:01:13 np0005548731 nova_compute[232433]:        <host name="192.168.122.101" port="6789"/>
Dec  6 02:01:13 np0005548731 nova_compute[232433]:      </source>
Dec  6 02:01:13 np0005548731 nova_compute[232433]:      <auth username="openstack">
Dec  6 02:01:13 np0005548731 nova_compute[232433]:        <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec  6 02:01:13 np0005548731 nova_compute[232433]:      </auth>
Dec  6 02:01:13 np0005548731 nova_compute[232433]:      <target dev="vda" bus="virtio"/>
Dec  6 02:01:13 np0005548731 nova_compute[232433]:    </disk>
Dec  6 02:01:13 np0005548731 nova_compute[232433]:    <disk type="network" device="disk">
Dec  6 02:01:13 np0005548731 nova_compute[232433]:      <driver type="raw" cache="none"/>
Dec  6 02:01:13 np0005548731 nova_compute[232433]:      <source protocol="rbd" name="vms/f9728486-7db3-4d21-bd7a-b41b891489a5_disk.eph0">
Dec  6 02:01:13 np0005548731 nova_compute[232433]:        <host name="192.168.122.100" port="6789"/>
Dec  6 02:01:13 np0005548731 nova_compute[232433]:        <host name="192.168.122.102" port="6789"/>
Dec  6 02:01:13 np0005548731 nova_compute[232433]:        <host name="192.168.122.101" port="6789"/>
Dec  6 02:01:13 np0005548731 nova_compute[232433]:      </source>
Dec  6 02:01:13 np0005548731 nova_compute[232433]:      <auth username="openstack">
Dec  6 02:01:13 np0005548731 nova_compute[232433]:        <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec  6 02:01:13 np0005548731 nova_compute[232433]:      </auth>
Dec  6 02:01:13 np0005548731 nova_compute[232433]:      <target dev="vdb" bus="virtio"/>
Dec  6 02:01:13 np0005548731 nova_compute[232433]:    </disk>
Dec  6 02:01:13 np0005548731 nova_compute[232433]:    <disk type="network" device="cdrom">
Dec  6 02:01:13 np0005548731 nova_compute[232433]:      <driver type="raw" cache="none"/>
Dec  6 02:01:13 np0005548731 nova_compute[232433]:      <source protocol="rbd" name="vms/f9728486-7db3-4d21-bd7a-b41b891489a5_disk.config">
Dec  6 02:01:13 np0005548731 nova_compute[232433]:        <host name="192.168.122.100" port="6789"/>
Dec  6 02:01:13 np0005548731 nova_compute[232433]:        <host name="192.168.122.102" port="6789"/>
Dec  6 02:01:13 np0005548731 nova_compute[232433]:        <host name="192.168.122.101" port="6789"/>
Dec  6 02:01:13 np0005548731 nova_compute[232433]:      </source>
Dec  6 02:01:13 np0005548731 nova_compute[232433]:      <auth username="openstack">
Dec  6 02:01:13 np0005548731 nova_compute[232433]:        <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec  6 02:01:13 np0005548731 nova_compute[232433]:      </auth>
Dec  6 02:01:13 np0005548731 nova_compute[232433]:      <target dev="sda" bus="sata"/>
Dec  6 02:01:13 np0005548731 nova_compute[232433]:    </disk>
Dec  6 02:01:13 np0005548731 nova_compute[232433]:    <interface type="ethernet">
Dec  6 02:01:13 np0005548731 nova_compute[232433]:      <mac address="fa:16:3e:ff:51:7a"/>
Dec  6 02:01:13 np0005548731 nova_compute[232433]:      <model type="virtio"/>
Dec  6 02:01:13 np0005548731 nova_compute[232433]:      <driver name="vhost" rx_queue_size="512"/>
Dec  6 02:01:13 np0005548731 nova_compute[232433]:      <mtu size="1442"/>
Dec  6 02:01:13 np0005548731 nova_compute[232433]:      <target dev="tap196b1de8-92"/>
Dec  6 02:01:13 np0005548731 nova_compute[232433]:    </interface>
Dec  6 02:01:13 np0005548731 nova_compute[232433]:    <serial type="pty">
Dec  6 02:01:13 np0005548731 nova_compute[232433]:      <log file="/var/lib/nova/instances/f9728486-7db3-4d21-bd7a-b41b891489a5/console.log" append="off"/>
Dec  6 02:01:13 np0005548731 nova_compute[232433]:    </serial>
Dec  6 02:01:13 np0005548731 nova_compute[232433]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec  6 02:01:13 np0005548731 nova_compute[232433]:    <video>
Dec  6 02:01:13 np0005548731 nova_compute[232433]:      <model type="virtio"/>
Dec  6 02:01:13 np0005548731 nova_compute[232433]:    </video>
Dec  6 02:01:13 np0005548731 nova_compute[232433]:    <input type="tablet" bus="usb"/>
Dec  6 02:01:13 np0005548731 nova_compute[232433]:    <rng model="virtio">
Dec  6 02:01:13 np0005548731 nova_compute[232433]:      <backend model="random">/dev/urandom</backend>
Dec  6 02:01:13 np0005548731 nova_compute[232433]:    </rng>
Dec  6 02:01:13 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root"/>
Dec  6 02:01:13 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:01:13 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:01:13 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:01:13 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:01:13 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:01:13 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:01:13 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:01:13 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:01:13 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:01:13 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:01:13 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:01:13 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:01:13 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:01:13 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:01:13 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:01:13 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:01:13 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:01:13 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:01:13 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:01:13 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:01:13 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:01:13 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:01:13 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:01:13 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:01:13 np0005548731 nova_compute[232433]:    <controller type="usb" index="0"/>
Dec  6 02:01:13 np0005548731 nova_compute[232433]:    <memballoon model="virtio">
Dec  6 02:01:13 np0005548731 nova_compute[232433]:      <stats period="10"/>
Dec  6 02:01:13 np0005548731 nova_compute[232433]:    </memballoon>
Dec  6 02:01:13 np0005548731 nova_compute[232433]:  </devices>
Dec  6 02:01:13 np0005548731 nova_compute[232433]: </domain>
Dec  6 02:01:13 np0005548731 nova_compute[232433]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec  6 02:01:13 np0005548731 nova_compute[232433]: 2025-12-06 07:01:13.506 232437 DEBUG nova.compute.manager [None req-eada883f-c420-4743-83b7-c804aa458a68 56a80bccd06a46eb841b4a39bdc45f76 e2869bdde00d4ef8bea339be8805aa3f - - default default] [instance: f9728486-7db3-4d21-bd7a-b41b891489a5] Preparing to wait for external event network-vif-plugged-196b1de8-9246-4ec3-91b5-3686f8691152 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec  6 02:01:13 np0005548731 nova_compute[232433]: 2025-12-06 07:01:13.506 232437 DEBUG oslo_concurrency.lockutils [None req-eada883f-c420-4743-83b7-c804aa458a68 56a80bccd06a46eb841b4a39bdc45f76 e2869bdde00d4ef8bea339be8805aa3f - - default default] Acquiring lock "f9728486-7db3-4d21-bd7a-b41b891489a5-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:01:13 np0005548731 nova_compute[232433]: 2025-12-06 07:01:13.506 232437 DEBUG oslo_concurrency.lockutils [None req-eada883f-c420-4743-83b7-c804aa458a68 56a80bccd06a46eb841b4a39bdc45f76 e2869bdde00d4ef8bea339be8805aa3f - - default default] Lock "f9728486-7db3-4d21-bd7a-b41b891489a5-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:01:13 np0005548731 nova_compute[232433]: 2025-12-06 07:01:13.506 232437 DEBUG oslo_concurrency.lockutils [None req-eada883f-c420-4743-83b7-c804aa458a68 56a80bccd06a46eb841b4a39bdc45f76 e2869bdde00d4ef8bea339be8805aa3f - - default default] Lock "f9728486-7db3-4d21-bd7a-b41b891489a5-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:01:13 np0005548731 nova_compute[232433]: 2025-12-06 07:01:13.507 232437 DEBUG nova.virt.libvirt.vif [None req-eada883f-c420-4743-83b7-c804aa458a68 56a80bccd06a46eb841b4a39bdc45f76 e2869bdde00d4ef8bea339be8805aa3f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T07:00:56Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersWithSpecificFlavorTestJSON-server-1632796309',display_name='tempest-ServersWithSpecificFlavorTestJSON-server-1632796309',ec2_ids=EC2Ids,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(22),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serverswithspecificflavortestjson-server-1632796309',id=16,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=22,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLyYd2vj0IZaejjGwrppG7vtmC1xs+0JSlQJBkmu53OyTo4w/o53nNppqE2nymdT0UkGVn1vFVEZmmF+N+AjxIMF81FiLMyyI4ZyWmUw9aTkhGmFymaAYJKyFfVSYZ1oNQ==',key_name='tempest-keypair-1145479885',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='e2869bdde00d4ef8bea339be8805aa3f',ramdisk_id='',reservation_id='r-g9dey3es',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersWithSpecificFlavorTestJSON-631560428',owner_user_name='tempest-ServersWithSpecificFlavorTestJSON-631560428-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:00:58Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='56a80bccd06a46eb841b4a39bdc45f76',uuid=f9728486-7db3-4d21-bd7a-b41b891489a5,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "196b1de8-9246-4ec3-91b5-3686f8691152", "address": "fa:16:3e:ff:51:7a", "network": {"id": "a0468f21-498f-424d-a04a-d011b4adc72e", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-40054798-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, 
"ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e2869bdde00d4ef8bea339be8805aa3f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap196b1de8-92", "ovs_interfaceid": "196b1de8-9246-4ec3-91b5-3686f8691152", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec  6 02:01:13 np0005548731 nova_compute[232433]: 2025-12-06 07:01:13.508 232437 DEBUG nova.network.os_vif_util [None req-eada883f-c420-4743-83b7-c804aa458a68 56a80bccd06a46eb841b4a39bdc45f76 e2869bdde00d4ef8bea339be8805aa3f - - default default] Converting VIF {"id": "196b1de8-9246-4ec3-91b5-3686f8691152", "address": "fa:16:3e:ff:51:7a", "network": {"id": "a0468f21-498f-424d-a04a-d011b4adc72e", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-40054798-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e2869bdde00d4ef8bea339be8805aa3f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap196b1de8-92", "ovs_interfaceid": "196b1de8-9246-4ec3-91b5-3686f8691152", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 02:01:13 np0005548731 nova_compute[232433]: 2025-12-06 07:01:13.508 232437 DEBUG nova.network.os_vif_util [None req-eada883f-c420-4743-83b7-c804aa458a68 56a80bccd06a46eb841b4a39bdc45f76 e2869bdde00d4ef8bea339be8805aa3f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ff:51:7a,bridge_name='br-int',has_traffic_filtering=True,id=196b1de8-9246-4ec3-91b5-3686f8691152,network=Network(a0468f21-498f-424d-a04a-d011b4adc72e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap196b1de8-92') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 02:01:13 np0005548731 nova_compute[232433]: 2025-12-06 07:01:13.509 232437 DEBUG os_vif [None req-eada883f-c420-4743-83b7-c804aa458a68 56a80bccd06a46eb841b4a39bdc45f76 e2869bdde00d4ef8bea339be8805aa3f - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ff:51:7a,bridge_name='br-int',has_traffic_filtering=True,id=196b1de8-9246-4ec3-91b5-3686f8691152,network=Network(a0468f21-498f-424d-a04a-d011b4adc72e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap196b1de8-92') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec  6 02:01:13 np0005548731 nova_compute[232433]: 2025-12-06 07:01:13.509 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:01:13 np0005548731 nova_compute[232433]: 2025-12-06 07:01:13.510 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:01:13 np0005548731 nova_compute[232433]: 2025-12-06 07:01:13.510 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  6 02:01:13 np0005548731 nova_compute[232433]: 2025-12-06 07:01:13.513 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:01:13 np0005548731 nova_compute[232433]: 2025-12-06 07:01:13.514 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap196b1de8-92, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:01:13 np0005548731 nova_compute[232433]: 2025-12-06 07:01:13.514 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap196b1de8-92, col_values=(('external_ids', {'iface-id': '196b1de8-9246-4ec3-91b5-3686f8691152', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:ff:51:7a', 'vm-uuid': 'f9728486-7db3-4d21-bd7a-b41b891489a5'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:01:13 np0005548731 nova_compute[232433]: 2025-12-06 07:01:13.516 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:01:13 np0005548731 NetworkManager[49182]: <info>  [1765004473.5169] manager: (tap196b1de8-92): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/41)
Dec  6 02:01:13 np0005548731 nova_compute[232433]: 2025-12-06 07:01:13.518 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  6 02:01:13 np0005548731 nova_compute[232433]: 2025-12-06 07:01:13.523 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:01:13 np0005548731 nova_compute[232433]: 2025-12-06 07:01:13.524 232437 INFO os_vif [None req-eada883f-c420-4743-83b7-c804aa458a68 56a80bccd06a46eb841b4a39bdc45f76 e2869bdde00d4ef8bea339be8805aa3f - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ff:51:7a,bridge_name='br-int',has_traffic_filtering=True,id=196b1de8-9246-4ec3-91b5-3686f8691152,network=Network(a0468f21-498f-424d-a04a-d011b4adc72e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap196b1de8-92')#033[00m
Dec  6 02:01:13 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:01:13 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:01:13 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:01:13.567 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:01:13 np0005548731 nova_compute[232433]: 2025-12-06 07:01:13.595 232437 DEBUG nova.virt.libvirt.driver [None req-eada883f-c420-4743-83b7-c804aa458a68 56a80bccd06a46eb841b4a39bdc45f76 e2869bdde00d4ef8bea339be8805aa3f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  6 02:01:13 np0005548731 nova_compute[232433]: 2025-12-06 07:01:13.596 232437 DEBUG nova.virt.libvirt.driver [None req-eada883f-c420-4743-83b7-c804aa458a68 56a80bccd06a46eb841b4a39bdc45f76 e2869bdde00d4ef8bea339be8805aa3f - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  6 02:01:13 np0005548731 nova_compute[232433]: 2025-12-06 07:01:13.596 232437 DEBUG nova.virt.libvirt.driver [None req-eada883f-c420-4743-83b7-c804aa458a68 56a80bccd06a46eb841b4a39bdc45f76 e2869bdde00d4ef8bea339be8805aa3f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  6 02:01:13 np0005548731 nova_compute[232433]: 2025-12-06 07:01:13.597 232437 DEBUG nova.virt.libvirt.driver [None req-eada883f-c420-4743-83b7-c804aa458a68 56a80bccd06a46eb841b4a39bdc45f76 e2869bdde00d4ef8bea339be8805aa3f - - default default] No VIF found with MAC fa:16:3e:ff:51:7a, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec  6 02:01:13 np0005548731 nova_compute[232433]: 2025-12-06 07:01:13.597 232437 INFO nova.virt.libvirt.driver [None req-eada883f-c420-4743-83b7-c804aa458a68 56a80bccd06a46eb841b4a39bdc45f76 e2869bdde00d4ef8bea339be8805aa3f - - default default] [instance: f9728486-7db3-4d21-bd7a-b41b891489a5] Using config drive#033[00m
Dec  6 02:01:13 np0005548731 nova_compute[232433]: 2025-12-06 07:01:13.633 232437 DEBUG nova.storage.rbd_utils [None req-eada883f-c420-4743-83b7-c804aa458a68 56a80bccd06a46eb841b4a39bdc45f76 e2869bdde00d4ef8bea339be8805aa3f - - default default] rbd image f9728486-7db3-4d21-bd7a-b41b891489a5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:01:14 np0005548731 nova_compute[232433]: 2025-12-06 07:01:14.303 232437 INFO nova.virt.libvirt.driver [None req-eada883f-c420-4743-83b7-c804aa458a68 56a80bccd06a46eb841b4a39bdc45f76 e2869bdde00d4ef8bea339be8805aa3f - - default default] [instance: f9728486-7db3-4d21-bd7a-b41b891489a5] Creating config drive at /var/lib/nova/instances/f9728486-7db3-4d21-bd7a-b41b891489a5/disk.config#033[00m
Dec  6 02:01:14 np0005548731 nova_compute[232433]: 2025-12-06 07:01:14.309 232437 DEBUG oslo_concurrency.processutils [None req-eada883f-c420-4743-83b7-c804aa458a68 56a80bccd06a46eb841b4a39bdc45f76 e2869bdde00d4ef8bea339be8805aa3f - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/f9728486-7db3-4d21-bd7a-b41b891489a5/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpzfaew125 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:01:14 np0005548731 nova_compute[232433]: 2025-12-06 07:01:14.443 232437 DEBUG oslo_concurrency.processutils [None req-eada883f-c420-4743-83b7-c804aa458a68 56a80bccd06a46eb841b4a39bdc45f76 e2869bdde00d4ef8bea339be8805aa3f - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/f9728486-7db3-4d21-bd7a-b41b891489a5/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpzfaew125" returned: 0 in 0.134s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:01:14 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e163 e163: 3 total, 3 up, 3 in
Dec  6 02:01:14 np0005548731 nova_compute[232433]: 2025-12-06 07:01:14.473 232437 DEBUG nova.storage.rbd_utils [None req-eada883f-c420-4743-83b7-c804aa458a68 56a80bccd06a46eb841b4a39bdc45f76 e2869bdde00d4ef8bea339be8805aa3f - - default default] rbd image f9728486-7db3-4d21-bd7a-b41b891489a5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:01:14 np0005548731 nova_compute[232433]: 2025-12-06 07:01:14.477 232437 DEBUG oslo_concurrency.processutils [None req-eada883f-c420-4743-83b7-c804aa458a68 56a80bccd06a46eb841b4a39bdc45f76 e2869bdde00d4ef8bea339be8805aa3f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/f9728486-7db3-4d21-bd7a-b41b891489a5/disk.config f9728486-7db3-4d21-bd7a-b41b891489a5_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:01:15 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e163 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:01:15 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:01:15 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:01:15 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:01:15.455 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:01:15 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:01:15 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:01:15 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:01:15.570 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:01:15 np0005548731 nova_compute[232433]: 2025-12-06 07:01:15.781 232437 DEBUG nova.objects.instance [None req-a5e4b5dd-27d7-4e35-b18c-720c4b7c81fb 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Lazy-loading 'trusted_certs' on Instance uuid 8818a36b-f8ca-411f-9037-85036a64a941 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:01:15 np0005548731 nova_compute[232433]: 2025-12-06 07:01:15.872 232437 DEBUG oslo_concurrency.processutils [None req-eada883f-c420-4743-83b7-c804aa458a68 56a80bccd06a46eb841b4a39bdc45f76 e2869bdde00d4ef8bea339be8805aa3f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/f9728486-7db3-4d21-bd7a-b41b891489a5/disk.config f9728486-7db3-4d21-bd7a-b41b891489a5_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.396s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:01:15 np0005548731 nova_compute[232433]: 2025-12-06 07:01:15.873 232437 INFO nova.virt.libvirt.driver [None req-eada883f-c420-4743-83b7-c804aa458a68 56a80bccd06a46eb841b4a39bdc45f76 e2869bdde00d4ef8bea339be8805aa3f - - default default] [instance: f9728486-7db3-4d21-bd7a-b41b891489a5] Deleting local config drive /var/lib/nova/instances/f9728486-7db3-4d21-bd7a-b41b891489a5/disk.config because it was imported into RBD.#033[00m
Dec  6 02:01:15 np0005548731 kernel: tap196b1de8-92: entered promiscuous mode
Dec  6 02:01:15 np0005548731 NetworkManager[49182]: <info>  [1765004475.9221] manager: (tap196b1de8-92): new Tun device (/org/freedesktop/NetworkManager/Devices/42)
Dec  6 02:01:15 np0005548731 ovn_controller[133927]: 2025-12-06T07:01:15Z|00055|binding|INFO|Claiming lport 196b1de8-9246-4ec3-91b5-3686f8691152 for this chassis.
Dec  6 02:01:15 np0005548731 ovn_controller[133927]: 2025-12-06T07:01:15Z|00056|binding|INFO|196b1de8-9246-4ec3-91b5-3686f8691152: Claiming fa:16:3e:ff:51:7a 10.100.0.3
Dec  6 02:01:15 np0005548731 nova_compute[232433]: 2025-12-06 07:01:15.928 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:01:15 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:01:15.934 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ff:51:7a 10.100.0.3'], port_security=['fa:16:3e:ff:51:7a 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'f9728486-7db3-4d21-bd7a-b41b891489a5', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a0468f21-498f-424d-a04a-d011b4adc72e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e2869bdde00d4ef8bea339be8805aa3f', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'a097e786-0d4a-48b2-8e8b-48554d219e38', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1fc5a527-4a15-4cc8-9ef4-9d61b8890236, chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], logical_port=196b1de8-9246-4ec3-91b5-3686f8691152) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 02:01:15 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:01:15.937 143965 INFO neutron.agent.ovn.metadata.agent [-] Port 196b1de8-9246-4ec3-91b5-3686f8691152 in datapath a0468f21-498f-424d-a04a-d011b4adc72e bound to our chassis#033[00m
Dec  6 02:01:15 np0005548731 ovn_controller[133927]: 2025-12-06T07:01:15Z|00057|binding|INFO|Setting lport 196b1de8-9246-4ec3-91b5-3686f8691152 ovn-installed in OVS
Dec  6 02:01:15 np0005548731 ovn_controller[133927]: 2025-12-06T07:01:15Z|00058|binding|INFO|Setting lport 196b1de8-9246-4ec3-91b5-3686f8691152 up in Southbound
Dec  6 02:01:15 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:01:15.940 143965 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network a0468f21-498f-424d-a04a-d011b4adc72e#033[00m
Dec  6 02:01:15 np0005548731 nova_compute[232433]: 2025-12-06 07:01:15.941 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:01:15 np0005548731 nova_compute[232433]: 2025-12-06 07:01:15.944 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:01:15 np0005548731 nova_compute[232433]: 2025-12-06 07:01:15.951 232437 DEBUG nova.virt.libvirt.driver [None req-a5e4b5dd-27d7-4e35-b18c-720c4b7c81fb 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] [instance: 8818a36b-f8ca-411f-9037-85036a64a941] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859#033[00m
Dec  6 02:01:15 np0005548731 nova_compute[232433]: 2025-12-06 07:01:15.951 232437 DEBUG nova.virt.libvirt.driver [None req-a5e4b5dd-27d7-4e35-b18c-720c4b7c81fb 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] [instance: 8818a36b-f8ca-411f-9037-85036a64a941] Ensure instance console log exists: /var/lib/nova/instances/8818a36b-f8ca-411f-9037-85036a64a941/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec  6 02:01:15 np0005548731 nova_compute[232433]: 2025-12-06 07:01:15.952 232437 DEBUG oslo_concurrency.lockutils [None req-a5e4b5dd-27d7-4e35-b18c-720c4b7c81fb 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:01:15 np0005548731 nova_compute[232433]: 2025-12-06 07:01:15.952 232437 DEBUG oslo_concurrency.lockutils [None req-a5e4b5dd-27d7-4e35-b18c-720c4b7c81fb 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:01:15 np0005548731 nova_compute[232433]: 2025-12-06 07:01:15.952 232437 DEBUG oslo_concurrency.lockutils [None req-a5e4b5dd-27d7-4e35-b18c-720c4b7c81fb 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:01:15 np0005548731 nova_compute[232433]: 2025-12-06 07:01:15.954 232437 DEBUG nova.virt.libvirt.driver [None req-a5e4b5dd-27d7-4e35-b18c-720c4b7c81fb 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] [instance: 8818a36b-f8ca-411f-9037-85036a64a941] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'device_type': 'disk', 'encryption_secret_uuid': None, 'disk_bus': 'virtio', 'boot_index': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'encryption_options': None, 'size': 0, 'encrypted': False, 'image_id': '6efab05d-c7cf-4770-a5c3-c806a2739063'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec  6 02:01:15 np0005548731 nova_compute[232433]: 2025-12-06 07:01:15.955 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:01:15 np0005548731 systemd-udevd[240722]: Network interface NamePolicy= disabled on kernel command line.
Dec  6 02:01:15 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:01:15.956 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[274d88b3-e78d-4c1e-8616-360a326d28b4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:01:15 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:01:15.958 143965 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapa0468f21-41 in ovnmeta-a0468f21-498f-424d-a04a-d011b4adc72e namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Dec  6 02:01:15 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:01:15.960 236668 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapa0468f21-40 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Dec  6 02:01:15 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:01:15.961 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[3ed14a47-886b-437e-9e7b-9640e494c269]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:01:15 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:01:15.962 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[d89e41ee-fc66-487d-a2da-33cdfb876cc1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:01:15 np0005548731 nova_compute[232433]: 2025-12-06 07:01:15.965 232437 WARNING nova.virt.libvirt.driver [None req-a5e4b5dd-27d7-4e35-b18c-720c4b7c81fb 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  6 02:01:15 np0005548731 systemd-machined[195355]: New machine qemu-7-instance-00000010.
Dec  6 02:01:15 np0005548731 nova_compute[232433]: 2025-12-06 07:01:15.970 232437 DEBUG nova.virt.libvirt.host [None req-a5e4b5dd-27d7-4e35-b18c-720c4b7c81fb 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec  6 02:01:15 np0005548731 nova_compute[232433]: 2025-12-06 07:01:15.971 232437 DEBUG nova.virt.libvirt.host [None req-a5e4b5dd-27d7-4e35-b18c-720c4b7c81fb 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec  6 02:01:15 np0005548731 NetworkManager[49182]: <info>  [1765004475.9752] device (tap196b1de8-92): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec  6 02:01:15 np0005548731 NetworkManager[49182]: <info>  [1765004475.9761] device (tap196b1de8-92): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec  6 02:01:15 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:01:15.975 144079 DEBUG oslo.privsep.daemon [-] privsep: reply[45e48bc7-4fff-4f9a-847e-e084a3453f85]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:01:15 np0005548731 systemd[1]: Started Virtual Machine qemu-7-instance-00000010.
Dec  6 02:01:15 np0005548731 nova_compute[232433]: 2025-12-06 07:01:15.980 232437 DEBUG nova.virt.libvirt.host [None req-a5e4b5dd-27d7-4e35-b18c-720c4b7c81fb 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec  6 02:01:15 np0005548731 nova_compute[232433]: 2025-12-06 07:01:15.981 232437 DEBUG nova.virt.libvirt.host [None req-a5e4b5dd-27d7-4e35-b18c-720c4b7c81fb 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec  6 02:01:15 np0005548731 nova_compute[232433]: 2025-12-06 07:01:15.983 232437 DEBUG nova.virt.libvirt.driver [None req-a5e4b5dd-27d7-4e35-b18c-720c4b7c81fb 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec  6 02:01:15 np0005548731 nova_compute[232433]: 2025-12-06 07:01:15.983 232437 DEBUG nova.virt.hardware [None req-a5e4b5dd-27d7-4e35-b18c-720c4b7c81fb 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-06T06:56:24Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fb97f55a-36c0-42f2-8156-c1b04eb23dd0',id=2,is_public=True,memory_mb=192,name='m1.micro',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec  6 02:01:15 np0005548731 nova_compute[232433]: 2025-12-06 07:01:15.984 232437 DEBUG nova.virt.hardware [None req-a5e4b5dd-27d7-4e35-b18c-720c4b7c81fb 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec  6 02:01:15 np0005548731 nova_compute[232433]: 2025-12-06 07:01:15.984 232437 DEBUG nova.virt.hardware [None req-a5e4b5dd-27d7-4e35-b18c-720c4b7c81fb 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec  6 02:01:15 np0005548731 nova_compute[232433]: 2025-12-06 07:01:15.984 232437 DEBUG nova.virt.hardware [None req-a5e4b5dd-27d7-4e35-b18c-720c4b7c81fb 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec  6 02:01:15 np0005548731 nova_compute[232433]: 2025-12-06 07:01:15.984 232437 DEBUG nova.virt.hardware [None req-a5e4b5dd-27d7-4e35-b18c-720c4b7c81fb 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec  6 02:01:15 np0005548731 nova_compute[232433]: 2025-12-06 07:01:15.984 232437 DEBUG nova.virt.hardware [None req-a5e4b5dd-27d7-4e35-b18c-720c4b7c81fb 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec  6 02:01:15 np0005548731 nova_compute[232433]: 2025-12-06 07:01:15.984 232437 DEBUG nova.virt.hardware [None req-a5e4b5dd-27d7-4e35-b18c-720c4b7c81fb 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec  6 02:01:15 np0005548731 nova_compute[232433]: 2025-12-06 07:01:15.985 232437 DEBUG nova.virt.hardware [None req-a5e4b5dd-27d7-4e35-b18c-720c4b7c81fb 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec  6 02:01:15 np0005548731 nova_compute[232433]: 2025-12-06 07:01:15.985 232437 DEBUG nova.virt.hardware [None req-a5e4b5dd-27d7-4e35-b18c-720c4b7c81fb 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec  6 02:01:15 np0005548731 nova_compute[232433]: 2025-12-06 07:01:15.985 232437 DEBUG nova.virt.hardware [None req-a5e4b5dd-27d7-4e35-b18c-720c4b7c81fb 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec  6 02:01:15 np0005548731 nova_compute[232433]: 2025-12-06 07:01:15.985 232437 DEBUG nova.virt.hardware [None req-a5e4b5dd-27d7-4e35-b18c-720c4b7c81fb 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec  6 02:01:15 np0005548731 nova_compute[232433]: 2025-12-06 07:01:15.985 232437 DEBUG nova.objects.instance [None req-a5e4b5dd-27d7-4e35-b18c-720c4b7c81fb 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 8818a36b-f8ca-411f-9037-85036a64a941 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:01:15 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:01:15.989 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[a7439b40-91ce-41ce-9920-56b4bb0117c0]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:01:16 np0005548731 nova_compute[232433]: 2025-12-06 07:01:16.016 232437 DEBUG oslo_concurrency.processutils [None req-a5e4b5dd-27d7-4e35-b18c-720c4b7c81fb 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:01:16 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:01:16.025 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[d85455f8-3d79-487d-b9c5-e8e96f44df96]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:01:16 np0005548731 NetworkManager[49182]: <info>  [1765004476.0323] manager: (tapa0468f21-40): new Veth device (/org/freedesktop/NetworkManager/Devices/43)
Dec  6 02:01:16 np0005548731 systemd-udevd[240726]: Network interface NamePolicy= disabled on kernel command line.
Dec  6 02:01:16 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:01:16.031 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[8cd109f5-f975-4075-8c99-4ecaec76fe3d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:01:16 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:01:16.073 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[827f6f50-8a9b-43f3-8da3-9620ba897b29]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:01:16 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:01:16.076 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[4d05211f-1102-4223-9482-1ea2511aa785]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:01:16 np0005548731 NetworkManager[49182]: <info>  [1765004476.1060] device (tapa0468f21-40): carrier: link connected
Dec  6 02:01:16 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:01:16.113 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[3544f25a-badc-4dd4-aaa4-125b5cec7636]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:01:16 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:01:16.131 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[5a2bd523-6e72-40e3-a4d9-4a4a14a2f77c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa0468f21-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ec:9d:4b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 22], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 473811, 'reachable_time': 25857, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 152, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 152, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 240756, 'error': None, 'target': 'ovnmeta-a0468f21-498f-424d-a04a-d011b4adc72e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:01:16 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:01:16.151 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[cb9ca0e1-0d4c-4858-9cb9-2f091fae3f55]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feec:9d4b'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 473811, 'tstamp': 473811}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 240758, 'error': None, 'target': 'ovnmeta-a0468f21-498f-424d-a04a-d011b4adc72e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:01:16 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:01:16.175 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[c7785aec-af93-4e39-b62f-a5c9bfc7b9ed]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa0468f21-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ec:9d:4b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 22], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 473811, 'reachable_time': 25857, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 152, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 152, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 240776, 'error': None, 'target': 'ovnmeta-a0468f21-498f-424d-a04a-d011b4adc72e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:01:16 np0005548731 nova_compute[232433]: 2025-12-06 07:01:16.189 232437 DEBUG nova.compute.manager [req-8beac833-bde4-42fe-99fa-fee616a6dd93 req-91d8953e-90b6-4b24-87a4-8fa0faf5595e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f9728486-7db3-4d21-bd7a-b41b891489a5] Received event network-vif-plugged-196b1de8-9246-4ec3-91b5-3686f8691152 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:01:16 np0005548731 nova_compute[232433]: 2025-12-06 07:01:16.189 232437 DEBUG oslo_concurrency.lockutils [req-8beac833-bde4-42fe-99fa-fee616a6dd93 req-91d8953e-90b6-4b24-87a4-8fa0faf5595e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "f9728486-7db3-4d21-bd7a-b41b891489a5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:01:16 np0005548731 nova_compute[232433]: 2025-12-06 07:01:16.190 232437 DEBUG oslo_concurrency.lockutils [req-8beac833-bde4-42fe-99fa-fee616a6dd93 req-91d8953e-90b6-4b24-87a4-8fa0faf5595e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "f9728486-7db3-4d21-bd7a-b41b891489a5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:01:16 np0005548731 nova_compute[232433]: 2025-12-06 07:01:16.190 232437 DEBUG oslo_concurrency.lockutils [req-8beac833-bde4-42fe-99fa-fee616a6dd93 req-91d8953e-90b6-4b24-87a4-8fa0faf5595e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "f9728486-7db3-4d21-bd7a-b41b891489a5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:01:16 np0005548731 nova_compute[232433]: 2025-12-06 07:01:16.190 232437 DEBUG nova.compute.manager [req-8beac833-bde4-42fe-99fa-fee616a6dd93 req-91d8953e-90b6-4b24-87a4-8fa0faf5595e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f9728486-7db3-4d21-bd7a-b41b891489a5] Processing event network-vif-plugged-196b1de8-9246-4ec3-91b5-3686f8691152 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Dec  6 02:01:16 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:01:16.213 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[4974b87a-a860-4d36-8dbd-9d07d34d3b54]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:01:16 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:01:16.280 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[3c1437f8-e26f-44bc-8e71-ad2f8e687f46]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:01:16 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:01:16.282 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa0468f21-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:01:16 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:01:16.282 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  6 02:01:16 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:01:16.283 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa0468f21-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:01:16 np0005548731 nova_compute[232433]: 2025-12-06 07:01:16.285 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:01:16 np0005548731 NetworkManager[49182]: <info>  [1765004476.2856] manager: (tapa0468f21-40): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/44)
Dec  6 02:01:16 np0005548731 kernel: tapa0468f21-40: entered promiscuous mode
Dec  6 02:01:16 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:01:16.288 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapa0468f21-40, col_values=(('external_ids', {'iface-id': '552234cc-df01-471d-8a4b-772bc37af92a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:01:16 np0005548731 nova_compute[232433]: 2025-12-06 07:01:16.289 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:01:16 np0005548731 ovn_controller[133927]: 2025-12-06T07:01:16Z|00059|binding|INFO|Releasing lport 552234cc-df01-471d-8a4b-772bc37af92a from this chassis (sb_readonly=0)
Dec  6 02:01:16 np0005548731 nova_compute[232433]: 2025-12-06 07:01:16.304 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:01:16 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:01:16.306 143965 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/a0468f21-498f-424d-a04a-d011b4adc72e.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/a0468f21-498f-424d-a04a-d011b4adc72e.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Dec  6 02:01:16 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:01:16.307 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[e2595c5e-d0f0-4815-a44e-a7bc82111dae]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:01:16 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:01:16.308 143965 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec  6 02:01:16 np0005548731 ovn_metadata_agent[143960]: global
Dec  6 02:01:16 np0005548731 ovn_metadata_agent[143960]:    log         /dev/log local0 debug
Dec  6 02:01:16 np0005548731 ovn_metadata_agent[143960]:    log-tag     haproxy-metadata-proxy-a0468f21-498f-424d-a04a-d011b4adc72e
Dec  6 02:01:16 np0005548731 ovn_metadata_agent[143960]:    user        root
Dec  6 02:01:16 np0005548731 ovn_metadata_agent[143960]:    group       root
Dec  6 02:01:16 np0005548731 ovn_metadata_agent[143960]:    maxconn     1024
Dec  6 02:01:16 np0005548731 ovn_metadata_agent[143960]:    pidfile     /var/lib/neutron/external/pids/a0468f21-498f-424d-a04a-d011b4adc72e.pid.haproxy
Dec  6 02:01:16 np0005548731 ovn_metadata_agent[143960]:    daemon
Dec  6 02:01:16 np0005548731 ovn_metadata_agent[143960]: 
Dec  6 02:01:16 np0005548731 ovn_metadata_agent[143960]: defaults
Dec  6 02:01:16 np0005548731 ovn_metadata_agent[143960]:    log global
Dec  6 02:01:16 np0005548731 ovn_metadata_agent[143960]:    mode http
Dec  6 02:01:16 np0005548731 ovn_metadata_agent[143960]:    option httplog
Dec  6 02:01:16 np0005548731 ovn_metadata_agent[143960]:    option dontlognull
Dec  6 02:01:16 np0005548731 ovn_metadata_agent[143960]:    option http-server-close
Dec  6 02:01:16 np0005548731 ovn_metadata_agent[143960]:    option forwardfor
Dec  6 02:01:16 np0005548731 ovn_metadata_agent[143960]:    retries                 3
Dec  6 02:01:16 np0005548731 ovn_metadata_agent[143960]:    timeout http-request    30s
Dec  6 02:01:16 np0005548731 ovn_metadata_agent[143960]:    timeout connect         30s
Dec  6 02:01:16 np0005548731 ovn_metadata_agent[143960]:    timeout client          32s
Dec  6 02:01:16 np0005548731 ovn_metadata_agent[143960]:    timeout server          32s
Dec  6 02:01:16 np0005548731 ovn_metadata_agent[143960]:    timeout http-keep-alive 30s
Dec  6 02:01:16 np0005548731 ovn_metadata_agent[143960]: 
Dec  6 02:01:16 np0005548731 ovn_metadata_agent[143960]: 
Dec  6 02:01:16 np0005548731 ovn_metadata_agent[143960]: listen listener
Dec  6 02:01:16 np0005548731 ovn_metadata_agent[143960]:    bind 169.254.169.254:80
Dec  6 02:01:16 np0005548731 ovn_metadata_agent[143960]:    server metadata /var/lib/neutron/metadata_proxy
Dec  6 02:01:16 np0005548731 ovn_metadata_agent[143960]:    http-request add-header X-OVN-Network-ID a0468f21-498f-424d-a04a-d011b4adc72e
Dec  6 02:01:16 np0005548731 ovn_metadata_agent[143960]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Dec  6 02:01:16 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:01:16.311 143965 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-a0468f21-498f-424d-a04a-d011b4adc72e', 'env', 'PROCESS_TAG=haproxy-a0468f21-498f-424d-a04a-d011b4adc72e', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/a0468f21-498f-424d-a04a-d011b4adc72e.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Dec  6 02:01:16 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  6 02:01:16 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1114402516' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  6 02:01:16 np0005548731 nova_compute[232433]: 2025-12-06 07:01:16.537 232437 DEBUG oslo_concurrency.processutils [None req-a5e4b5dd-27d7-4e35-b18c-720c4b7c81fb 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.520s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:01:16 np0005548731 nova_compute[232433]: 2025-12-06 07:01:16.635 232437 DEBUG oslo_concurrency.processutils [None req-a5e4b5dd-27d7-4e35-b18c-720c4b7c81fb 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:01:16 np0005548731 podman[240844]: 2025-12-06 07:01:16.715625992 +0000 UTC m=+0.063787333 container create 1907dcb2b1fa0249395a3763340cc52cf5ba778467d056fbfbedb6bd331905f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a0468f21-498f-424d-a04a-d011b4adc72e, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Dec  6 02:01:16 np0005548731 systemd[1]: Stopping User Manager for UID 42436...
Dec  6 02:01:16 np0005548731 systemd[240415]: Activating special unit Exit the Session...
Dec  6 02:01:16 np0005548731 systemd[240415]: Stopped target Main User Target.
Dec  6 02:01:16 np0005548731 systemd[240415]: Stopped target Basic System.
Dec  6 02:01:16 np0005548731 systemd[240415]: Stopped target Paths.
Dec  6 02:01:16 np0005548731 systemd[240415]: Stopped target Sockets.
Dec  6 02:01:16 np0005548731 systemd[240415]: Stopped target Timers.
Dec  6 02:01:16 np0005548731 systemd[240415]: Stopped Mark boot as successful after the user session has run 2 minutes.
Dec  6 02:01:16 np0005548731 systemd[240415]: Stopped Daily Cleanup of User's Temporary Directories.
Dec  6 02:01:16 np0005548731 systemd[240415]: Closed D-Bus User Message Bus Socket.
Dec  6 02:01:16 np0005548731 systemd[240415]: Stopped Create User's Volatile Files and Directories.
Dec  6 02:01:16 np0005548731 systemd[240415]: Removed slice User Application Slice.
Dec  6 02:01:16 np0005548731 systemd[240415]: Reached target Shutdown.
Dec  6 02:01:16 np0005548731 systemd[240415]: Finished Exit the Session.
Dec  6 02:01:16 np0005548731 systemd[240415]: Reached target Exit the Session.
Dec  6 02:01:16 np0005548731 systemd[1]: user@42436.service: Deactivated successfully.
Dec  6 02:01:16 np0005548731 systemd[1]: Stopped User Manager for UID 42436.
Dec  6 02:01:16 np0005548731 systemd[1]: Stopping User Runtime Directory /run/user/42436...
Dec  6 02:01:16 np0005548731 systemd[1]: Started libpod-conmon-1907dcb2b1fa0249395a3763340cc52cf5ba778467d056fbfbedb6bd331905f7.scope.
Dec  6 02:01:16 np0005548731 podman[240844]: 2025-12-06 07:01:16.685994665 +0000 UTC m=+0.034156036 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec  6 02:01:16 np0005548731 systemd[1]: run-user-42436.mount: Deactivated successfully.
Dec  6 02:01:16 np0005548731 systemd[1]: user-runtime-dir@42436.service: Deactivated successfully.
Dec  6 02:01:16 np0005548731 systemd[1]: Stopped User Runtime Directory /run/user/42436.
Dec  6 02:01:16 np0005548731 systemd[1]: Removed slice User Slice of UID 42436.
Dec  6 02:01:16 np0005548731 systemd[1]: Started libcrun container.
Dec  6 02:01:16 np0005548731 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/60f215e87b0be28edf72877c728ad3ff6077ebac27a5fccbb50ff687037a1efd/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec  6 02:01:16 np0005548731 podman[240844]: 2025-12-06 07:01:16.838189185 +0000 UTC m=+0.186350526 container init 1907dcb2b1fa0249395a3763340cc52cf5ba778467d056fbfbedb6bd331905f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a0468f21-498f-424d-a04a-d011b4adc72e, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Dec  6 02:01:16 np0005548731 podman[240844]: 2025-12-06 07:01:16.847584482 +0000 UTC m=+0.195745833 container start 1907dcb2b1fa0249395a3763340cc52cf5ba778467d056fbfbedb6bd331905f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a0468f21-498f-424d-a04a-d011b4adc72e, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS)
Dec  6 02:01:16 np0005548731 neutron-haproxy-ovnmeta-a0468f21-498f-424d-a04a-d011b4adc72e[240909]: [NOTICE]   (240928) : New worker (240931) forked
Dec  6 02:01:16 np0005548731 neutron-haproxy-ovnmeta-a0468f21-498f-424d-a04a-d011b4adc72e[240909]: [NOTICE]   (240928) : Loading success.
Dec  6 02:01:16 np0005548731 nova_compute[232433]: 2025-12-06 07:01:16.969 232437 DEBUG nova.virt.driver [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Emitting event <LifecycleEvent: 1765004476.9687276, f9728486-7db3-4d21-bd7a-b41b891489a5 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 02:01:16 np0005548731 nova_compute[232433]: 2025-12-06 07:01:16.970 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: f9728486-7db3-4d21-bd7a-b41b891489a5] VM Started (Lifecycle Event)#033[00m
Dec  6 02:01:16 np0005548731 nova_compute[232433]: 2025-12-06 07:01:16.973 232437 DEBUG nova.compute.manager [None req-eada883f-c420-4743-83b7-c804aa458a68 56a80bccd06a46eb841b4a39bdc45f76 e2869bdde00d4ef8bea339be8805aa3f - - default default] [instance: f9728486-7db3-4d21-bd7a-b41b891489a5] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec  6 02:01:16 np0005548731 nova_compute[232433]: 2025-12-06 07:01:16.978 232437 DEBUG nova.virt.libvirt.driver [None req-eada883f-c420-4743-83b7-c804aa458a68 56a80bccd06a46eb841b4a39bdc45f76 e2869bdde00d4ef8bea339be8805aa3f - - default default] [instance: f9728486-7db3-4d21-bd7a-b41b891489a5] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Dec  6 02:01:16 np0005548731 nova_compute[232433]: 2025-12-06 07:01:16.982 232437 INFO nova.virt.libvirt.driver [-] [instance: f9728486-7db3-4d21-bd7a-b41b891489a5] Instance spawned successfully.#033[00m
Dec  6 02:01:16 np0005548731 nova_compute[232433]: 2025-12-06 07:01:16.983 232437 DEBUG nova.virt.libvirt.driver [None req-eada883f-c420-4743-83b7-c804aa458a68 56a80bccd06a46eb841b4a39bdc45f76 e2869bdde00d4ef8bea339be8805aa3f - - default default] [instance: f9728486-7db3-4d21-bd7a-b41b891489a5] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Dec  6 02:01:16 np0005548731 nova_compute[232433]: 2025-12-06 07:01:16.998 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: f9728486-7db3-4d21-bd7a-b41b891489a5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:01:17 np0005548731 nova_compute[232433]: 2025-12-06 07:01:17.013 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: f9728486-7db3-4d21-bd7a-b41b891489a5] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  6 02:01:17 np0005548731 nova_compute[232433]: 2025-12-06 07:01:17.017 232437 DEBUG nova.virt.libvirt.driver [None req-eada883f-c420-4743-83b7-c804aa458a68 56a80bccd06a46eb841b4a39bdc45f76 e2869bdde00d4ef8bea339be8805aa3f - - default default] [instance: f9728486-7db3-4d21-bd7a-b41b891489a5] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 02:01:17 np0005548731 nova_compute[232433]: 2025-12-06 07:01:17.017 232437 DEBUG nova.virt.libvirt.driver [None req-eada883f-c420-4743-83b7-c804aa458a68 56a80bccd06a46eb841b4a39bdc45f76 e2869bdde00d4ef8bea339be8805aa3f - - default default] [instance: f9728486-7db3-4d21-bd7a-b41b891489a5] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 02:01:17 np0005548731 nova_compute[232433]: 2025-12-06 07:01:17.018 232437 DEBUG nova.virt.libvirt.driver [None req-eada883f-c420-4743-83b7-c804aa458a68 56a80bccd06a46eb841b4a39bdc45f76 e2869bdde00d4ef8bea339be8805aa3f - - default default] [instance: f9728486-7db3-4d21-bd7a-b41b891489a5] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 02:01:17 np0005548731 nova_compute[232433]: 2025-12-06 07:01:17.018 232437 DEBUG nova.virt.libvirt.driver [None req-eada883f-c420-4743-83b7-c804aa458a68 56a80bccd06a46eb841b4a39bdc45f76 e2869bdde00d4ef8bea339be8805aa3f - - default default] [instance: f9728486-7db3-4d21-bd7a-b41b891489a5] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 02:01:17 np0005548731 nova_compute[232433]: 2025-12-06 07:01:17.018 232437 DEBUG nova.virt.libvirt.driver [None req-eada883f-c420-4743-83b7-c804aa458a68 56a80bccd06a46eb841b4a39bdc45f76 e2869bdde00d4ef8bea339be8805aa3f - - default default] [instance: f9728486-7db3-4d21-bd7a-b41b891489a5] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 02:01:17 np0005548731 nova_compute[232433]: 2025-12-06 07:01:17.019 232437 DEBUG nova.virt.libvirt.driver [None req-eada883f-c420-4743-83b7-c804aa458a68 56a80bccd06a46eb841b4a39bdc45f76 e2869bdde00d4ef8bea339be8805aa3f - - default default] [instance: f9728486-7db3-4d21-bd7a-b41b891489a5] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 02:01:17 np0005548731 nova_compute[232433]: 2025-12-06 07:01:17.045 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: f9728486-7db3-4d21-bd7a-b41b891489a5] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  6 02:01:17 np0005548731 nova_compute[232433]: 2025-12-06 07:01:17.045 232437 DEBUG nova.virt.driver [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Emitting event <LifecycleEvent: 1765004476.9728532, f9728486-7db3-4d21-bd7a-b41b891489a5 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 02:01:17 np0005548731 nova_compute[232433]: 2025-12-06 07:01:17.045 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: f9728486-7db3-4d21-bd7a-b41b891489a5] VM Paused (Lifecycle Event)#033[00m
Dec  6 02:01:17 np0005548731 nova_compute[232433]: 2025-12-06 07:01:17.069 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: f9728486-7db3-4d21-bd7a-b41b891489a5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:01:17 np0005548731 nova_compute[232433]: 2025-12-06 07:01:17.073 232437 DEBUG nova.virt.driver [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Emitting event <LifecycleEvent: 1765004476.9783757, f9728486-7db3-4d21-bd7a-b41b891489a5 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 02:01:17 np0005548731 nova_compute[232433]: 2025-12-06 07:01:17.073 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: f9728486-7db3-4d21-bd7a-b41b891489a5] VM Resumed (Lifecycle Event)#033[00m
Dec  6 02:01:17 np0005548731 nova_compute[232433]: 2025-12-06 07:01:17.078 232437 INFO nova.compute.manager [None req-eada883f-c420-4743-83b7-c804aa458a68 56a80bccd06a46eb841b4a39bdc45f76 e2869bdde00d4ef8bea339be8805aa3f - - default default] [instance: f9728486-7db3-4d21-bd7a-b41b891489a5] Took 18.81 seconds to spawn the instance on the hypervisor.#033[00m
Dec  6 02:01:17 np0005548731 nova_compute[232433]: 2025-12-06 07:01:17.079 232437 DEBUG nova.compute.manager [None req-eada883f-c420-4743-83b7-c804aa458a68 56a80bccd06a46eb841b4a39bdc45f76 e2869bdde00d4ef8bea339be8805aa3f - - default default] [instance: f9728486-7db3-4d21-bd7a-b41b891489a5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:01:17 np0005548731 nova_compute[232433]: 2025-12-06 07:01:17.095 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: f9728486-7db3-4d21-bd7a-b41b891489a5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:01:17 np0005548731 nova_compute[232433]: 2025-12-06 07:01:17.100 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: f9728486-7db3-4d21-bd7a-b41b891489a5] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  6 02:01:17 np0005548731 nova_compute[232433]: 2025-12-06 07:01:17.128 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: f9728486-7db3-4d21-bd7a-b41b891489a5] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  6 02:01:17 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  6 02:01:17 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/4001666700' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  6 02:01:17 np0005548731 nova_compute[232433]: 2025-12-06 07:01:17.148 232437 DEBUG oslo_concurrency.processutils [None req-a5e4b5dd-27d7-4e35-b18c-720c4b7c81fb 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.513s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:01:17 np0005548731 nova_compute[232433]: 2025-12-06 07:01:17.151 232437 DEBUG nova.virt.libvirt.driver [None req-a5e4b5dd-27d7-4e35-b18c-720c4b7c81fb 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] [instance: 8818a36b-f8ca-411f-9037-85036a64a941] End _get_guest_xml xml=<domain type="kvm">
Dec  6 02:01:17 np0005548731 nova_compute[232433]:  <uuid>8818a36b-f8ca-411f-9037-85036a64a941</uuid>
Dec  6 02:01:17 np0005548731 nova_compute[232433]:  <name>instance-0000000f</name>
Dec  6 02:01:17 np0005548731 nova_compute[232433]:  <memory>196608</memory>
Dec  6 02:01:17 np0005548731 nova_compute[232433]:  <vcpu>1</vcpu>
Dec  6 02:01:17 np0005548731 nova_compute[232433]:  <metadata>
Dec  6 02:01:17 np0005548731 nova_compute[232433]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec  6 02:01:17 np0005548731 nova_compute[232433]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec  6 02:01:17 np0005548731 nova_compute[232433]:      <nova:name>tempest-MigrationsAdminTest-server-1896797625</nova:name>
Dec  6 02:01:17 np0005548731 nova_compute[232433]:      <nova:creationTime>2025-12-06 07:01:15</nova:creationTime>
Dec  6 02:01:17 np0005548731 nova_compute[232433]:      <nova:flavor name="m1.micro">
Dec  6 02:01:17 np0005548731 nova_compute[232433]:        <nova:memory>192</nova:memory>
Dec  6 02:01:17 np0005548731 nova_compute[232433]:        <nova:disk>1</nova:disk>
Dec  6 02:01:17 np0005548731 nova_compute[232433]:        <nova:swap>0</nova:swap>
Dec  6 02:01:17 np0005548731 nova_compute[232433]:        <nova:ephemeral>0</nova:ephemeral>
Dec  6 02:01:17 np0005548731 nova_compute[232433]:        <nova:vcpus>1</nova:vcpus>
Dec  6 02:01:17 np0005548731 nova_compute[232433]:      </nova:flavor>
Dec  6 02:01:17 np0005548731 nova_compute[232433]:      <nova:owner>
Dec  6 02:01:17 np0005548731 nova_compute[232433]:        <nova:user uuid="538aa592cfb04958ab11223ed2d98106">tempest-MigrationsAdminTest-541331030-project-member</nova:user>
Dec  6 02:01:17 np0005548731 nova_compute[232433]:        <nova:project uuid="fc6c493097a84d069d178020ca398a25">tempest-MigrationsAdminTest-541331030</nova:project>
Dec  6 02:01:17 np0005548731 nova_compute[232433]:      </nova:owner>
Dec  6 02:01:17 np0005548731 nova_compute[232433]:      <nova:root type="image" uuid="6efab05d-c7cf-4770-a5c3-c806a2739063"/>
Dec  6 02:01:17 np0005548731 nova_compute[232433]:      <nova:ports/>
Dec  6 02:01:17 np0005548731 nova_compute[232433]:    </nova:instance>
Dec  6 02:01:17 np0005548731 nova_compute[232433]:  </metadata>
Dec  6 02:01:17 np0005548731 nova_compute[232433]:  <sysinfo type="smbios">
Dec  6 02:01:17 np0005548731 nova_compute[232433]:    <system>
Dec  6 02:01:17 np0005548731 nova_compute[232433]:      <entry name="manufacturer">RDO</entry>
Dec  6 02:01:17 np0005548731 nova_compute[232433]:      <entry name="product">OpenStack Compute</entry>
Dec  6 02:01:17 np0005548731 nova_compute[232433]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec  6 02:01:17 np0005548731 nova_compute[232433]:      <entry name="serial">8818a36b-f8ca-411f-9037-85036a64a941</entry>
Dec  6 02:01:17 np0005548731 nova_compute[232433]:      <entry name="uuid">8818a36b-f8ca-411f-9037-85036a64a941</entry>
Dec  6 02:01:17 np0005548731 nova_compute[232433]:      <entry name="family">Virtual Machine</entry>
Dec  6 02:01:17 np0005548731 nova_compute[232433]:    </system>
Dec  6 02:01:17 np0005548731 nova_compute[232433]:  </sysinfo>
Dec  6 02:01:17 np0005548731 nova_compute[232433]:  <os>
Dec  6 02:01:17 np0005548731 nova_compute[232433]:    <type arch="x86_64" machine="q35">hvm</type>
Dec  6 02:01:17 np0005548731 nova_compute[232433]:    <boot dev="hd"/>
Dec  6 02:01:17 np0005548731 nova_compute[232433]:    <smbios mode="sysinfo"/>
Dec  6 02:01:17 np0005548731 nova_compute[232433]:  </os>
Dec  6 02:01:17 np0005548731 nova_compute[232433]:  <features>
Dec  6 02:01:17 np0005548731 nova_compute[232433]:    <acpi/>
Dec  6 02:01:17 np0005548731 nova_compute[232433]:    <apic/>
Dec  6 02:01:17 np0005548731 nova_compute[232433]:    <vmcoreinfo/>
Dec  6 02:01:17 np0005548731 nova_compute[232433]:  </features>
Dec  6 02:01:17 np0005548731 nova_compute[232433]:  <clock offset="utc">
Dec  6 02:01:17 np0005548731 nova_compute[232433]:    <timer name="pit" tickpolicy="delay"/>
Dec  6 02:01:17 np0005548731 nova_compute[232433]:    <timer name="rtc" tickpolicy="catchup"/>
Dec  6 02:01:17 np0005548731 nova_compute[232433]:    <timer name="hpet" present="no"/>
Dec  6 02:01:17 np0005548731 nova_compute[232433]:  </clock>
Dec  6 02:01:17 np0005548731 nova_compute[232433]:  <cpu mode="custom" match="exact">
Dec  6 02:01:17 np0005548731 nova_compute[232433]:    <model>Nehalem</model>
Dec  6 02:01:17 np0005548731 nova_compute[232433]:    <topology sockets="1" cores="1" threads="1"/>
Dec  6 02:01:17 np0005548731 nova_compute[232433]:  </cpu>
Dec  6 02:01:17 np0005548731 nova_compute[232433]:  <devices>
Dec  6 02:01:17 np0005548731 nova_compute[232433]:    <disk type="network" device="disk">
Dec  6 02:01:17 np0005548731 nova_compute[232433]:      <driver type="raw" cache="none"/>
Dec  6 02:01:17 np0005548731 nova_compute[232433]:      <source protocol="rbd" name="vms/8818a36b-f8ca-411f-9037-85036a64a941_disk">
Dec  6 02:01:17 np0005548731 nova_compute[232433]:        <host name="192.168.122.100" port="6789"/>
Dec  6 02:01:17 np0005548731 nova_compute[232433]:        <host name="192.168.122.102" port="6789"/>
Dec  6 02:01:17 np0005548731 nova_compute[232433]:        <host name="192.168.122.101" port="6789"/>
Dec  6 02:01:17 np0005548731 nova_compute[232433]:      </source>
Dec  6 02:01:17 np0005548731 nova_compute[232433]:      <auth username="openstack">
Dec  6 02:01:17 np0005548731 nova_compute[232433]:        <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec  6 02:01:17 np0005548731 nova_compute[232433]:      </auth>
Dec  6 02:01:17 np0005548731 nova_compute[232433]:      <target dev="vda" bus="virtio"/>
Dec  6 02:01:17 np0005548731 nova_compute[232433]:    </disk>
Dec  6 02:01:17 np0005548731 nova_compute[232433]:    <disk type="network" device="cdrom">
Dec  6 02:01:17 np0005548731 nova_compute[232433]:      <driver type="raw" cache="none"/>
Dec  6 02:01:17 np0005548731 nova_compute[232433]:      <source protocol="rbd" name="vms/8818a36b-f8ca-411f-9037-85036a64a941_disk.config">
Dec  6 02:01:17 np0005548731 nova_compute[232433]:        <host name="192.168.122.100" port="6789"/>
Dec  6 02:01:17 np0005548731 nova_compute[232433]:        <host name="192.168.122.102" port="6789"/>
Dec  6 02:01:17 np0005548731 nova_compute[232433]:        <host name="192.168.122.101" port="6789"/>
Dec  6 02:01:17 np0005548731 nova_compute[232433]:      </source>
Dec  6 02:01:17 np0005548731 nova_compute[232433]:      <auth username="openstack">
Dec  6 02:01:17 np0005548731 nova_compute[232433]:        <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec  6 02:01:17 np0005548731 nova_compute[232433]:      </auth>
Dec  6 02:01:17 np0005548731 nova_compute[232433]:      <target dev="sda" bus="sata"/>
Dec  6 02:01:17 np0005548731 nova_compute[232433]:    </disk>
Dec  6 02:01:17 np0005548731 nova_compute[232433]:    <serial type="pty">
Dec  6 02:01:17 np0005548731 nova_compute[232433]:      <log file="/var/lib/nova/instances/8818a36b-f8ca-411f-9037-85036a64a941/console.log" append="off"/>
Dec  6 02:01:17 np0005548731 nova_compute[232433]:    </serial>
Dec  6 02:01:17 np0005548731 nova_compute[232433]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec  6 02:01:17 np0005548731 nova_compute[232433]:    <video>
Dec  6 02:01:17 np0005548731 nova_compute[232433]:      <model type="virtio"/>
Dec  6 02:01:17 np0005548731 nova_compute[232433]:    </video>
Dec  6 02:01:17 np0005548731 nova_compute[232433]:    <input type="tablet" bus="usb"/>
Dec  6 02:01:17 np0005548731 nova_compute[232433]:    <rng model="virtio">
Dec  6 02:01:17 np0005548731 nova_compute[232433]:      <backend model="random">/dev/urandom</backend>
Dec  6 02:01:17 np0005548731 nova_compute[232433]:    </rng>
Dec  6 02:01:17 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root"/>
Dec  6 02:01:17 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:01:17 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:01:17 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:01:17 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:01:17 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:01:17 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:01:17 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:01:17 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:01:17 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:01:17 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:01:17 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:01:17 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:01:17 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:01:17 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:01:17 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:01:17 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:01:17 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:01:17 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:01:17 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:01:17 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:01:17 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:01:17 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:01:17 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:01:17 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:01:17 np0005548731 nova_compute[232433]:    <controller type="usb" index="0"/>
Dec  6 02:01:17 np0005548731 nova_compute[232433]:    <memballoon model="virtio">
Dec  6 02:01:17 np0005548731 nova_compute[232433]:      <stats period="10"/>
Dec  6 02:01:17 np0005548731 nova_compute[232433]:    </memballoon>
Dec  6 02:01:17 np0005548731 nova_compute[232433]:  </devices>
Dec  6 02:01:17 np0005548731 nova_compute[232433]: </domain>
Dec  6 02:01:17 np0005548731 nova_compute[232433]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec  6 02:01:17 np0005548731 nova_compute[232433]: 2025-12-06 07:01:17.154 232437 INFO nova.compute.manager [None req-eada883f-c420-4743-83b7-c804aa458a68 56a80bccd06a46eb841b4a39bdc45f76 e2869bdde00d4ef8bea339be8805aa3f - - default default] [instance: f9728486-7db3-4d21-bd7a-b41b891489a5] Took 19.95 seconds to build instance.#033[00m
Dec  6 02:01:17 np0005548731 nova_compute[232433]: 2025-12-06 07:01:17.175 232437 DEBUG oslo_concurrency.lockutils [None req-eada883f-c420-4743-83b7-c804aa458a68 56a80bccd06a46eb841b4a39bdc45f76 e2869bdde00d4ef8bea339be8805aa3f - - default default] Lock "f9728486-7db3-4d21-bd7a-b41b891489a5" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 20.037s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:01:17 np0005548731 nova_compute[232433]: 2025-12-06 07:01:17.248 232437 DEBUG nova.virt.libvirt.driver [None req-a5e4b5dd-27d7-4e35-b18c-720c4b7c81fb 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  6 02:01:17 np0005548731 nova_compute[232433]: 2025-12-06 07:01:17.249 232437 DEBUG nova.virt.libvirt.driver [None req-a5e4b5dd-27d7-4e35-b18c-720c4b7c81fb 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  6 02:01:17 np0005548731 nova_compute[232433]: 2025-12-06 07:01:17.250 232437 INFO nova.virt.libvirt.driver [None req-a5e4b5dd-27d7-4e35-b18c-720c4b7c81fb 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] [instance: 8818a36b-f8ca-411f-9037-85036a64a941] Using config drive#033[00m
Dec  6 02:01:17 np0005548731 systemd-machined[195355]: New machine qemu-8-instance-0000000f.
Dec  6 02:01:17 np0005548731 systemd[1]: Started Virtual Machine qemu-8-instance-0000000f.
Dec  6 02:01:17 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:01:17 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:01:17 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:01:17.457 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:01:17 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:01:17 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 02:01:17 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:01:17.571 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 02:01:17 np0005548731 nova_compute[232433]: 2025-12-06 07:01:17.931 232437 DEBUG nova.virt.driver [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Emitting event <LifecycleEvent: 1765004477.930272, 8818a36b-f8ca-411f-9037-85036a64a941 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 02:01:17 np0005548731 nova_compute[232433]: 2025-12-06 07:01:17.932 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 8818a36b-f8ca-411f-9037-85036a64a941] VM Resumed (Lifecycle Event)#033[00m
Dec  6 02:01:17 np0005548731 nova_compute[232433]: 2025-12-06 07:01:17.934 232437 DEBUG nova.compute.manager [None req-a5e4b5dd-27d7-4e35-b18c-720c4b7c81fb 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] [instance: 8818a36b-f8ca-411f-9037-85036a64a941] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec  6 02:01:17 np0005548731 nova_compute[232433]: 2025-12-06 07:01:17.940 232437 INFO nova.virt.libvirt.driver [-] [instance: 8818a36b-f8ca-411f-9037-85036a64a941] Instance running successfully.#033[00m
Dec  6 02:01:17 np0005548731 virtqemud[232080]: argument unsupported: QEMU guest agent is not configured
Dec  6 02:01:17 np0005548731 nova_compute[232433]: 2025-12-06 07:01:17.943 232437 DEBUG nova.virt.libvirt.guest [None req-a5e4b5dd-27d7-4e35-b18c-720c4b7c81fb 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] [instance: 8818a36b-f8ca-411f-9037-85036a64a941] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200#033[00m
Dec  6 02:01:17 np0005548731 nova_compute[232433]: 2025-12-06 07:01:17.943 232437 DEBUG nova.virt.libvirt.driver [None req-a5e4b5dd-27d7-4e35-b18c-720c4b7c81fb 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] [instance: 8818a36b-f8ca-411f-9037-85036a64a941] finish_migration finished successfully. finish_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11793#033[00m
Dec  6 02:01:17 np0005548731 nova_compute[232433]: 2025-12-06 07:01:17.949 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 8818a36b-f8ca-411f-9037-85036a64a941] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:01:17 np0005548731 nova_compute[232433]: 2025-12-06 07:01:17.953 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 8818a36b-f8ca-411f-9037-85036a64a941] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: resize_finish, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  6 02:01:17 np0005548731 nova_compute[232433]: 2025-12-06 07:01:17.973 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 8818a36b-f8ca-411f-9037-85036a64a941] During sync_power_state the instance has a pending task (resize_finish). Skip.#033[00m
Dec  6 02:01:17 np0005548731 nova_compute[232433]: 2025-12-06 07:01:17.973 232437 DEBUG nova.virt.driver [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Emitting event <LifecycleEvent: 1765004477.9304113, 8818a36b-f8ca-411f-9037-85036a64a941 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 02:01:17 np0005548731 nova_compute[232433]: 2025-12-06 07:01:17.973 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 8818a36b-f8ca-411f-9037-85036a64a941] VM Started (Lifecycle Event)#033[00m
Dec  6 02:01:18 np0005548731 nova_compute[232433]: 2025-12-06 07:01:18.009 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 8818a36b-f8ca-411f-9037-85036a64a941] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:01:18 np0005548731 nova_compute[232433]: 2025-12-06 07:01:18.012 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 8818a36b-f8ca-411f-9037-85036a64a941] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: resize_finish, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  6 02:01:18 np0005548731 nova_compute[232433]: 2025-12-06 07:01:18.304 232437 DEBUG nova.compute.manager [req-c8899b97-b49a-4f39-a132-b072f77ecc26 req-78a78a72-ae52-4b1d-8455-cf6411dd1e58 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f9728486-7db3-4d21-bd7a-b41b891489a5] Received event network-vif-plugged-196b1de8-9246-4ec3-91b5-3686f8691152 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:01:18 np0005548731 nova_compute[232433]: 2025-12-06 07:01:18.305 232437 DEBUG oslo_concurrency.lockutils [req-c8899b97-b49a-4f39-a132-b072f77ecc26 req-78a78a72-ae52-4b1d-8455-cf6411dd1e58 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "f9728486-7db3-4d21-bd7a-b41b891489a5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:01:18 np0005548731 nova_compute[232433]: 2025-12-06 07:01:18.305 232437 DEBUG oslo_concurrency.lockutils [req-c8899b97-b49a-4f39-a132-b072f77ecc26 req-78a78a72-ae52-4b1d-8455-cf6411dd1e58 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "f9728486-7db3-4d21-bd7a-b41b891489a5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:01:18 np0005548731 nova_compute[232433]: 2025-12-06 07:01:18.305 232437 DEBUG oslo_concurrency.lockutils [req-c8899b97-b49a-4f39-a132-b072f77ecc26 req-78a78a72-ae52-4b1d-8455-cf6411dd1e58 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "f9728486-7db3-4d21-bd7a-b41b891489a5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:01:18 np0005548731 nova_compute[232433]: 2025-12-06 07:01:18.305 232437 DEBUG nova.compute.manager [req-c8899b97-b49a-4f39-a132-b072f77ecc26 req-78a78a72-ae52-4b1d-8455-cf6411dd1e58 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f9728486-7db3-4d21-bd7a-b41b891489a5] No waiting events found dispatching network-vif-plugged-196b1de8-9246-4ec3-91b5-3686f8691152 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 02:01:18 np0005548731 nova_compute[232433]: 2025-12-06 07:01:18.306 232437 WARNING nova.compute.manager [req-c8899b97-b49a-4f39-a132-b072f77ecc26 req-78a78a72-ae52-4b1d-8455-cf6411dd1e58 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f9728486-7db3-4d21-bd7a-b41b891489a5] Received unexpected event network-vif-plugged-196b1de8-9246-4ec3-91b5-3686f8691152 for instance with vm_state active and task_state None.#033[00m
Dec  6 02:01:18 np0005548731 nova_compute[232433]: 2025-12-06 07:01:18.516 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:01:18 np0005548731 nova_compute[232433]: 2025-12-06 07:01:18.986 232437 DEBUG oslo_concurrency.lockutils [None req-26a485bc-18e9-4d48-b72c-3ef42ef584a2 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Acquiring lock "refresh_cache-8818a36b-f8ca-411f-9037-85036a64a941" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 02:01:18 np0005548731 nova_compute[232433]: 2025-12-06 07:01:18.987 232437 DEBUG oslo_concurrency.lockutils [None req-26a485bc-18e9-4d48-b72c-3ef42ef584a2 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Acquired lock "refresh_cache-8818a36b-f8ca-411f-9037-85036a64a941" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 02:01:18 np0005548731 nova_compute[232433]: 2025-12-06 07:01:18.987 232437 DEBUG nova.network.neutron [None req-26a485bc-18e9-4d48-b72c-3ef42ef584a2 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] [instance: 8818a36b-f8ca-411f-9037-85036a64a941] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec  6 02:01:19 np0005548731 nova_compute[232433]: 2025-12-06 07:01:19.139 232437 DEBUG nova.network.neutron [None req-26a485bc-18e9-4d48-b72c-3ef42ef584a2 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] [instance: 8818a36b-f8ca-411f-9037-85036a64a941] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec  6 02:01:19 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:01:19 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.002000047s ======
Dec  6 02:01:19 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:01:19.463 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000047s
Dec  6 02:01:19 np0005548731 nova_compute[232433]: 2025-12-06 07:01:19.487 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:01:19 np0005548731 podman[241185]: 2025-12-06 07:01:19.493233395 +0000 UTC m=+0.089471384 container exec 541e7276f6038dd900483907d1613bdfcbf99ea3714cf53a920a7189986fab54 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-mon-compute-2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Dec  6 02:01:19 np0005548731 nova_compute[232433]: 2025-12-06 07:01:19.514 232437 DEBUG nova.network.neutron [None req-26a485bc-18e9-4d48-b72c-3ef42ef584a2 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] [instance: 8818a36b-f8ca-411f-9037-85036a64a941] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 02:01:19 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:01:19 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 02:01:19 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:01:19.573 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 02:01:19 np0005548731 nova_compute[232433]: 2025-12-06 07:01:19.629 232437 DEBUG oslo_concurrency.lockutils [None req-26a485bc-18e9-4d48-b72c-3ef42ef584a2 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Releasing lock "refresh_cache-8818a36b-f8ca-411f-9037-85036a64a941" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 02:01:19 np0005548731 podman[241185]: 2025-12-06 07:01:19.674869386 +0000 UTC m=+0.271107375 container exec_died 541e7276f6038dd900483907d1613bdfcbf99ea3714cf53a920a7189986fab54 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-mon-compute-2, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  6 02:01:20 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e163 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:01:20 np0005548731 podman[241337]: 2025-12-06 07:01:20.278735143 +0000 UTC m=+0.054847357 container exec 42755d5ddab81b86d84b40c9464a8af637e2d196c1a098a61593efb6a9875167 (image=quay.io/ceph/haproxy:2.3, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-haproxy-rgw-default-compute-2-nyemkw)
Dec  6 02:01:20 np0005548731 podman[241337]: 2025-12-06 07:01:20.28894322 +0000 UTC m=+0.065055404 container exec_died 42755d5ddab81b86d84b40c9464a8af637e2d196c1a098a61593efb6a9875167 (image=quay.io/ceph/haproxy:2.3, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-haproxy-rgw-default-compute-2-nyemkw)
Dec  6 02:01:20 np0005548731 podman[241403]: 2025-12-06 07:01:20.538974794 +0000 UTC m=+0.079213526 container exec 60f905acca99db805341691264f7f91f9f4f43c91aa728d21396595dfca7cf39 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-keepalived-rgw-default-compute-2-cossgt, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.tags=Ceph keepalived, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, vendor=Red Hat, Inc., io.k8s.display-name=Keepalived on RHEL 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, name=keepalived, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=keepalived for Ceph, io.buildah.version=1.28.2, summary=Provides keepalived on RHEL 9 for Ceph., build-date=2023-02-22T09:23:20, com.redhat.component=keepalived-container, io.openshift.expose-services=, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1793, vcs-type=git, version=2.2.4, architecture=x86_64)
Dec  6 02:01:20 np0005548731 podman[241403]: 2025-12-06 07:01:20.553148686 +0000 UTC m=+0.093387418 container exec_died 60f905acca99db805341691264f7f91f9f4f43c91aa728d21396595dfca7cf39 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-keepalived-rgw-default-compute-2-cossgt, distribution-scope=public, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, description=keepalived for Ceph, summary=Provides keepalived on RHEL 9 for Ceph., vendor=Red Hat, Inc., com.redhat.component=keepalived-container, io.k8s.display-name=Keepalived on RHEL 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.28.2, build-date=2023-02-22T09:23:20, io.openshift.tags=Ceph keepalived, release=1793, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, architecture=x86_64, name=keepalived, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, version=2.2.4, io.openshift.expose-services=)
Dec  6 02:01:20 np0005548731 nova_compute[232433]: 2025-12-06 07:01:20.952 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:01:21 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:01:21 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 02:01:21 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:01:21.467 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 02:01:21 np0005548731 systemd[1]: machine-qemu\x2d8\x2dinstance\x2d0000000f.scope: Deactivated successfully.
Dec  6 02:01:21 np0005548731 systemd[1]: machine-qemu\x2d8\x2dinstance\x2d0000000f.scope: Consumed 2.257s CPU time.
Dec  6 02:01:21 np0005548731 systemd-machined[195355]: Machine qemu-8-instance-0000000f terminated.
Dec  6 02:01:21 np0005548731 nova_compute[232433]: 2025-12-06 07:01:21.538 232437 DEBUG nova.compute.manager [req-e7e465c1-441c-41d5-b23e-db79672d9ca6 req-a8a48a4c-6868-46b6-8325-2fe27d48b615 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f9728486-7db3-4d21-bd7a-b41b891489a5] Received event network-changed-196b1de8-9246-4ec3-91b5-3686f8691152 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:01:21 np0005548731 nova_compute[232433]: 2025-12-06 07:01:21.539 232437 DEBUG nova.compute.manager [req-e7e465c1-441c-41d5-b23e-db79672d9ca6 req-a8a48a4c-6868-46b6-8325-2fe27d48b615 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f9728486-7db3-4d21-bd7a-b41b891489a5] Refreshing instance network info cache due to event network-changed-196b1de8-9246-4ec3-91b5-3686f8691152. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  6 02:01:21 np0005548731 nova_compute[232433]: 2025-12-06 07:01:21.539 232437 DEBUG oslo_concurrency.lockutils [req-e7e465c1-441c-41d5-b23e-db79672d9ca6 req-a8a48a4c-6868-46b6-8325-2fe27d48b615 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-f9728486-7db3-4d21-bd7a-b41b891489a5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 02:01:21 np0005548731 nova_compute[232433]: 2025-12-06 07:01:21.539 232437 DEBUG oslo_concurrency.lockutils [req-e7e465c1-441c-41d5-b23e-db79672d9ca6 req-a8a48a4c-6868-46b6-8325-2fe27d48b615 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-f9728486-7db3-4d21-bd7a-b41b891489a5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 02:01:21 np0005548731 nova_compute[232433]: 2025-12-06 07:01:21.540 232437 DEBUG nova.network.neutron [req-e7e465c1-441c-41d5-b23e-db79672d9ca6 req-a8a48a4c-6868-46b6-8325-2fe27d48b615 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f9728486-7db3-4d21-bd7a-b41b891489a5] Refreshing network info cache for port 196b1de8-9246-4ec3-91b5-3686f8691152 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  6 02:01:21 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:01:21 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:01:21 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:01:21.575 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:01:21 np0005548731 nova_compute[232433]: 2025-12-06 07:01:21.715 232437 INFO nova.virt.libvirt.driver [-] [instance: 8818a36b-f8ca-411f-9037-85036a64a941] Instance destroyed successfully.#033[00m
Dec  6 02:01:21 np0005548731 nova_compute[232433]: 2025-12-06 07:01:21.717 232437 DEBUG nova.objects.instance [None req-26a485bc-18e9-4d48-b72c-3ef42ef584a2 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Lazy-loading 'resources' on Instance uuid 8818a36b-f8ca-411f-9037-85036a64a941 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:01:21 np0005548731 nova_compute[232433]: 2025-12-06 07:01:21.750 232437 DEBUG oslo_concurrency.lockutils [None req-26a485bc-18e9-4d48-b72c-3ef42ef584a2 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_dest" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:01:21 np0005548731 nova_compute[232433]: 2025-12-06 07:01:21.751 232437 DEBUG oslo_concurrency.lockutils [None req-26a485bc-18e9-4d48-b72c-3ef42ef584a2 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_dest" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:01:21 np0005548731 nova_compute[232433]: 2025-12-06 07:01:21.995 232437 DEBUG nova.objects.instance [None req-26a485bc-18e9-4d48-b72c-3ef42ef584a2 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Lazy-loading 'migration_context' on Instance uuid 8818a36b-f8ca-411f-9037-85036a64a941 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:01:22 np0005548731 nova_compute[232433]: 2025-12-06 07:01:22.134 232437 DEBUG oslo_concurrency.processutils [None req-26a485bc-18e9-4d48-b72c-3ef42ef584a2 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:01:22 np0005548731 nova_compute[232433]: 2025-12-06 07:01:22.455 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:01:22 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 02:01:22 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/938090136' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 02:01:22 np0005548731 nova_compute[232433]: 2025-12-06 07:01:22.630 232437 DEBUG oslo_concurrency.processutils [None req-26a485bc-18e9-4d48-b72c-3ef42ef584a2 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.495s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:01:22 np0005548731 nova_compute[232433]: 2025-12-06 07:01:22.638 232437 DEBUG nova.compute.provider_tree [None req-26a485bc-18e9-4d48-b72c-3ef42ef584a2 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Inventory has not changed in ProviderTree for provider: 6d00757a-082f-486d-ae84-869a2ba2e6e7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  6 02:01:22 np0005548731 nova_compute[232433]: 2025-12-06 07:01:22.755 232437 DEBUG nova.scheduler.client.report [None req-26a485bc-18e9-4d48-b72c-3ef42ef584a2 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Inventory has not changed for provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  6 02:01:22 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 02:01:22 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 02:01:22 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  6 02:01:22 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 02:01:22 np0005548731 nova_compute[232433]: 2025-12-06 07:01:22.984 232437 DEBUG oslo_concurrency.lockutils [None req-26a485bc-18e9-4d48-b72c-3ef42ef584a2 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_dest" :: held 1.233s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:01:23 np0005548731 nova_compute[232433]: 2025-12-06 07:01:23.370 232437 DEBUG nova.network.neutron [req-e7e465c1-441c-41d5-b23e-db79672d9ca6 req-a8a48a4c-6868-46b6-8325-2fe27d48b615 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f9728486-7db3-4d21-bd7a-b41b891489a5] Updated VIF entry in instance network info cache for port 196b1de8-9246-4ec3-91b5-3686f8691152. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  6 02:01:23 np0005548731 nova_compute[232433]: 2025-12-06 07:01:23.371 232437 DEBUG nova.network.neutron [req-e7e465c1-441c-41d5-b23e-db79672d9ca6 req-a8a48a4c-6868-46b6-8325-2fe27d48b615 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f9728486-7db3-4d21-bd7a-b41b891489a5] Updating instance_info_cache with network_info: [{"id": "196b1de8-9246-4ec3-91b5-3686f8691152", "address": "fa:16:3e:ff:51:7a", "network": {"id": "a0468f21-498f-424d-a04a-d011b4adc72e", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-40054798-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.193", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e2869bdde00d4ef8bea339be8805aa3f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap196b1de8-92", "ovs_interfaceid": "196b1de8-9246-4ec3-91b5-3686f8691152", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 02:01:23 np0005548731 nova_compute[232433]: 2025-12-06 07:01:23.396 232437 DEBUG oslo_concurrency.lockutils [req-e7e465c1-441c-41d5-b23e-db79672d9ca6 req-a8a48a4c-6868-46b6-8325-2fe27d48b615 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-f9728486-7db3-4d21-bd7a-b41b891489a5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 02:01:23 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:01:23 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:01:23 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:01:23.470 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:01:23 np0005548731 nova_compute[232433]: 2025-12-06 07:01:23.518 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:01:23 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:01:23 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:01:23 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:01:23.577 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:01:23 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:01:23.871 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=7, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:ec:b3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '32:72:e7:89:e0:7d'}, ipsec=False) old=SB_Global(nb_cfg=6) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 02:01:23 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:01:23.872 143965 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec  6 02:01:23 np0005548731 nova_compute[232433]: 2025-12-06 07:01:23.872 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:01:24 np0005548731 nova_compute[232433]: 2025-12-06 07:01:24.734 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:01:25 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e163 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:01:25 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:01:25 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:01:25 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:01:25.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:01:25 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:01:25 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:01:25 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:01:25.579 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:01:25 np0005548731 nova_compute[232433]: 2025-12-06 07:01:25.954 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:01:26 np0005548731 nova_compute[232433]: 2025-12-06 07:01:26.394 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:01:26 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  6 02:01:27 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:01:27 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:01:27 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:01:27.477 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:01:27 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:01:27 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:01:27 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:01:27.581 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:01:28 np0005548731 nova_compute[232433]: 2025-12-06 07:01:28.520 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:01:29 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 02:01:29 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 02:01:29 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:01:29 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 02:01:29 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:01:29.480 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 02:01:29 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:01:29 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:01:29 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:01:29.583 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:01:29 np0005548731 podman[241648]: 2025-12-06 07:01:29.915903027 +0000 UTC m=+0.066461517 container health_status ae4fd08cdec31d6ae2a6b91ffff7deb6ca5a8375cb52ee4b685cc6f25796824e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec  6 02:01:29 np0005548731 podman[241646]: 2025-12-06 07:01:29.931818873 +0000 UTC m=+0.081775319 container health_status 26c2ce9bdb83c6db5ccad7f5b2a7cb4e9354ab0235ad2f942ea42c8feda2142b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible)
Dec  6 02:01:29 np0005548731 podman[241647]: 2025-12-06 07:01:29.948497476 +0000 UTC m=+0.100327807 container health_status a118c3becf56cfbcae208065771ebf10a65162bb7b96389971293e534823fb78 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  6 02:01:30 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e163 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:01:30 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e164 e164: 3 total, 3 up, 3 in
Dec  6 02:01:30 np0005548731 nova_compute[232433]: 2025-12-06 07:01:30.957 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:01:31 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:01:31 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:01:31 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:01:31.483 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:01:31 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:01:31 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:01:31 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:01:31.585 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:01:31 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:01:31.875 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=9f96b960-b4f2-40bd-ae99-08121f5e8b78, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '7'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:01:33 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:01:33 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:01:33 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:01:33.487 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:01:33 np0005548731 nova_compute[232433]: 2025-12-06 07:01:33.546 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:01:33 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:01:33 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 02:01:33 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:01:33.587 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 02:01:33 np0005548731 nova_compute[232433]: 2025-12-06 07:01:33.722 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:01:34 np0005548731 ovn_controller[133927]: 2025-12-06T07:01:34Z|00008|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:ff:51:7a 10.100.0.3
Dec  6 02:01:34 np0005548731 ovn_controller[133927]: 2025-12-06T07:01:34Z|00009|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:ff:51:7a 10.100.0.3
Dec  6 02:01:35 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:01:35 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:01:35 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:01:35 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:01:35.490 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:01:35 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:01:35 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:01:35 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:01:35.589 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:01:35 np0005548731 nova_compute[232433]: 2025-12-06 07:01:35.959 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:01:36 np0005548731 nova_compute[232433]: 2025-12-06 07:01:36.715 232437 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765004481.713974, 8818a36b-f8ca-411f-9037-85036a64a941 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 02:01:36 np0005548731 nova_compute[232433]: 2025-12-06 07:01:36.715 232437 INFO nova.compute.manager [-] [instance: 8818a36b-f8ca-411f-9037-85036a64a941] VM Stopped (Lifecycle Event)#033[00m
Dec  6 02:01:36 np0005548731 nova_compute[232433]: 2025-12-06 07:01:36.836 232437 DEBUG nova.compute.manager [None req-48a18d64-769d-49f6-919b-01f1b0855d39 - - - - - -] [instance: 8818a36b-f8ca-411f-9037-85036a64a941] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:01:37 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:01:37 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:01:37 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:01:37.494 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:01:37 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:01:37 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:01:37 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:01:37.592 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:01:38 np0005548731 nova_compute[232433]: 2025-12-06 07:01:38.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:01:38 np0005548731 nova_compute[232433]: 2025-12-06 07:01:38.549 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:01:39 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:01:39 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:01:39 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:01:39.497 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:01:39 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:01:39 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:01:39 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:01:39.595 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:01:40 np0005548731 nova_compute[232433]: 2025-12-06 07:01:40.027 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:01:40 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:01:40 np0005548731 nova_compute[232433]: 2025-12-06 07:01:40.961 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:01:41 np0005548731 nova_compute[232433]: 2025-12-06 07:01:41.099 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:01:41 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:01:41 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:01:41 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:01:41.500 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:01:41 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:01:41 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 02:01:41 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:01:41.597 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 02:01:42 np0005548731 nova_compute[232433]: 2025-12-06 07:01:42.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:01:42 np0005548731 nova_compute[232433]: 2025-12-06 07:01:42.105 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  6 02:01:42 np0005548731 nova_compute[232433]: 2025-12-06 07:01:42.266 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Acquiring lock "refresh_cache-2341fd69-a672-42e2-834b-0f7b8269c7ef" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 02:01:42 np0005548731 nova_compute[232433]: 2025-12-06 07:01:42.267 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Acquired lock "refresh_cache-2341fd69-a672-42e2-834b-0f7b8269c7ef" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 02:01:42 np0005548731 nova_compute[232433]: 2025-12-06 07:01:42.267 232437 DEBUG nova.network.neutron [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] [instance: 2341fd69-a672-42e2-834b-0f7b8269c7ef] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Dec  6 02:01:42 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e165 e165: 3 total, 3 up, 3 in
Dec  6 02:01:42 np0005548731 nova_compute[232433]: 2025-12-06 07:01:42.434 232437 DEBUG nova.network.neutron [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] [instance: 2341fd69-a672-42e2-834b-0f7b8269c7ef] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec  6 02:01:42 np0005548731 nova_compute[232433]: 2025-12-06 07:01:42.764 232437 DEBUG nova.network.neutron [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] [instance: 2341fd69-a672-42e2-834b-0f7b8269c7ef] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 02:01:42 np0005548731 nova_compute[232433]: 2025-12-06 07:01:42.795 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Releasing lock "refresh_cache-2341fd69-a672-42e2-834b-0f7b8269c7ef" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 02:01:42 np0005548731 nova_compute[232433]: 2025-12-06 07:01:42.796 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] [instance: 2341fd69-a672-42e2-834b-0f7b8269c7ef] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Dec  6 02:01:42 np0005548731 nova_compute[232433]: 2025-12-06 07:01:42.796 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:01:43 np0005548731 nova_compute[232433]: 2025-12-06 07:01:43.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:01:43 np0005548731 nova_compute[232433]: 2025-12-06 07:01:43.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:01:43 np0005548731 nova_compute[232433]: 2025-12-06 07:01:43.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:01:43 np0005548731 nova_compute[232433]: 2025-12-06 07:01:43.130 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:01:43 np0005548731 nova_compute[232433]: 2025-12-06 07:01:43.130 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:01:43 np0005548731 nova_compute[232433]: 2025-12-06 07:01:43.130 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:01:43 np0005548731 nova_compute[232433]: 2025-12-06 07:01:43.130 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  6 02:01:43 np0005548731 nova_compute[232433]: 2025-12-06 07:01:43.131 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:01:43 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:01:43 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 02:01:43 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:01:43.502 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 02:01:43 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:01:43 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:01:43 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:01:43.599 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:01:43 np0005548731 nova_compute[232433]: 2025-12-06 07:01:43.600 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:01:43 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 02:01:43 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1879664780' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 02:01:43 np0005548731 nova_compute[232433]: 2025-12-06 07:01:43.695 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.564s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:01:43 np0005548731 nova_compute[232433]: 2025-12-06 07:01:43.786 232437 DEBUG nova.virt.libvirt.driver [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] skipping disk for instance-0000000d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  6 02:01:43 np0005548731 nova_compute[232433]: 2025-12-06 07:01:43.787 232437 DEBUG nova.virt.libvirt.driver [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] skipping disk for instance-0000000d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  6 02:01:43 np0005548731 nova_compute[232433]: 2025-12-06 07:01:43.790 232437 DEBUG nova.virt.libvirt.driver [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] skipping disk for instance-00000010 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  6 02:01:43 np0005548731 nova_compute[232433]: 2025-12-06 07:01:43.791 232437 DEBUG nova.virt.libvirt.driver [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] skipping disk for instance-00000010 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  6 02:01:43 np0005548731 nova_compute[232433]: 2025-12-06 07:01:43.791 232437 DEBUG nova.virt.libvirt.driver [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] skipping disk for instance-00000010 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  6 02:01:43 np0005548731 nova_compute[232433]: 2025-12-06 07:01:43.793 232437 DEBUG nova.virt.libvirt.driver [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] skipping disk for instance-0000000b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  6 02:01:43 np0005548731 nova_compute[232433]: 2025-12-06 07:01:43.794 232437 DEBUG nova.virt.libvirt.driver [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] skipping disk for instance-0000000b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  6 02:01:43 np0005548731 nova_compute[232433]: 2025-12-06 07:01:43.943 232437 WARNING nova.virt.libvirt.driver [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  6 02:01:43 np0005548731 nova_compute[232433]: 2025-12-06 07:01:43.944 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4301MB free_disk=20.7884521484375GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  6 02:01:43 np0005548731 nova_compute[232433]: 2025-12-06 07:01:43.944 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:01:43 np0005548731 nova_compute[232433]: 2025-12-06 07:01:43.945 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:01:44 np0005548731 nova_compute[232433]: 2025-12-06 07:01:44.025 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Instance d32579c4-62c8-41ac-9d01-b617cc7992ed actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  6 02:01:44 np0005548731 nova_compute[232433]: 2025-12-06 07:01:44.026 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Instance 2341fd69-a672-42e2-834b-0f7b8269c7ef actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 192, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  6 02:01:44 np0005548731 nova_compute[232433]: 2025-12-06 07:01:44.026 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Instance f9728486-7db3-4d21-bd7a-b41b891489a5 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  6 02:01:44 np0005548731 nova_compute[232433]: 2025-12-06 07:01:44.027 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  6 02:01:44 np0005548731 nova_compute[232433]: 2025-12-06 07:01:44.027 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7680MB used_ram=960MB phys_disk=20GB used_disk=4GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  6 02:01:44 np0005548731 nova_compute[232433]: 2025-12-06 07:01:44.093 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:01:44 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 02:01:44 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1355878612' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 02:01:44 np0005548731 nova_compute[232433]: 2025-12-06 07:01:44.572 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.479s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:01:44 np0005548731 nova_compute[232433]: 2025-12-06 07:01:44.578 232437 DEBUG nova.compute.provider_tree [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Inventory has not changed in ProviderTree for provider: 6d00757a-082f-486d-ae84-869a2ba2e6e7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  6 02:01:44 np0005548731 nova_compute[232433]: 2025-12-06 07:01:44.596 232437 DEBUG nova.scheduler.client.report [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Inventory has not changed for provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  6 02:01:44 np0005548731 nova_compute[232433]: 2025-12-06 07:01:44.618 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  6 02:01:44 np0005548731 nova_compute[232433]: 2025-12-06 07:01:44.619 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.674s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:01:45 np0005548731 nova_compute[232433]: 2025-12-06 07:01:45.020 232437 DEBUG oslo_concurrency.lockutils [None req-9de8e524-0c42-41b4-8da5-6a25ee29c9c4 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Acquiring lock "646ce79a-a40a-4b2a-9af5-90c98c98b72a" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:01:45 np0005548731 nova_compute[232433]: 2025-12-06 07:01:45.020 232437 DEBUG oslo_concurrency.lockutils [None req-9de8e524-0c42-41b4-8da5-6a25ee29c9c4 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Lock "646ce79a-a40a-4b2a-9af5-90c98c98b72a" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:01:45 np0005548731 nova_compute[232433]: 2025-12-06 07:01:45.042 232437 DEBUG nova.compute.manager [None req-9de8e524-0c42-41b4-8da5-6a25ee29c9c4 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] [instance: 646ce79a-a40a-4b2a-9af5-90c98c98b72a] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Dec  6 02:01:45 np0005548731 nova_compute[232433]: 2025-12-06 07:01:45.109 232437 DEBUG oslo_concurrency.lockutils [None req-9de8e524-0c42-41b4-8da5-6a25ee29c9c4 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:01:45 np0005548731 nova_compute[232433]: 2025-12-06 07:01:45.110 232437 DEBUG oslo_concurrency.lockutils [None req-9de8e524-0c42-41b4-8da5-6a25ee29c9c4 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:01:45 np0005548731 nova_compute[232433]: 2025-12-06 07:01:45.116 232437 DEBUG nova.virt.hardware [None req-9de8e524-0c42-41b4-8da5-6a25ee29c9c4 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Dec  6 02:01:45 np0005548731 nova_compute[232433]: 2025-12-06 07:01:45.116 232437 INFO nova.compute.claims [None req-9de8e524-0c42-41b4-8da5-6a25ee29c9c4 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] [instance: 646ce79a-a40a-4b2a-9af5-90c98c98b72a] Claim successful on node compute-2.ctlplane.example.com#033[00m
Dec  6 02:01:45 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:01:45 np0005548731 nova_compute[232433]: 2025-12-06 07:01:45.267 232437 DEBUG oslo_concurrency.processutils [None req-9de8e524-0c42-41b4-8da5-6a25ee29c9c4 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:01:45 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:01:45 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:01:45 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:01:45.507 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:01:45 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:01:45 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:01:45 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:01:45.601 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:01:45 np0005548731 nova_compute[232433]: 2025-12-06 07:01:45.619 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:01:45 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 02:01:45 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2755476702' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 02:01:45 np0005548731 nova_compute[232433]: 2025-12-06 07:01:45.712 232437 DEBUG oslo_concurrency.processutils [None req-9de8e524-0c42-41b4-8da5-6a25ee29c9c4 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.444s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:01:45 np0005548731 nova_compute[232433]: 2025-12-06 07:01:45.718 232437 DEBUG nova.compute.provider_tree [None req-9de8e524-0c42-41b4-8da5-6a25ee29c9c4 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Inventory has not changed in ProviderTree for provider: 6d00757a-082f-486d-ae84-869a2ba2e6e7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  6 02:01:45 np0005548731 nova_compute[232433]: 2025-12-06 07:01:45.745 232437 DEBUG nova.scheduler.client.report [None req-9de8e524-0c42-41b4-8da5-6a25ee29c9c4 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Inventory has not changed for provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  6 02:01:45 np0005548731 nova_compute[232433]: 2025-12-06 07:01:45.766 232437 DEBUG oslo_concurrency.lockutils [None req-9de8e524-0c42-41b4-8da5-6a25ee29c9c4 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.657s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:01:45 np0005548731 nova_compute[232433]: 2025-12-06 07:01:45.767 232437 DEBUG nova.compute.manager [None req-9de8e524-0c42-41b4-8da5-6a25ee29c9c4 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] [instance: 646ce79a-a40a-4b2a-9af5-90c98c98b72a] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Dec  6 02:01:45 np0005548731 nova_compute[232433]: 2025-12-06 07:01:45.818 232437 DEBUG nova.compute.manager [None req-9de8e524-0c42-41b4-8da5-6a25ee29c9c4 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] [instance: 646ce79a-a40a-4b2a-9af5-90c98c98b72a] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Dec  6 02:01:45 np0005548731 nova_compute[232433]: 2025-12-06 07:01:45.818 232437 DEBUG nova.network.neutron [None req-9de8e524-0c42-41b4-8da5-6a25ee29c9c4 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] [instance: 646ce79a-a40a-4b2a-9af5-90c98c98b72a] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Dec  6 02:01:45 np0005548731 nova_compute[232433]: 2025-12-06 07:01:45.841 232437 INFO nova.virt.libvirt.driver [None req-9de8e524-0c42-41b4-8da5-6a25ee29c9c4 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] [instance: 646ce79a-a40a-4b2a-9af5-90c98c98b72a] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Dec  6 02:01:45 np0005548731 nova_compute[232433]: 2025-12-06 07:01:45.859 232437 DEBUG nova.compute.manager [None req-9de8e524-0c42-41b4-8da5-6a25ee29c9c4 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] [instance: 646ce79a-a40a-4b2a-9af5-90c98c98b72a] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Dec  6 02:01:45 np0005548731 nova_compute[232433]: 2025-12-06 07:01:45.947 232437 DEBUG nova.compute.manager [None req-9de8e524-0c42-41b4-8da5-6a25ee29c9c4 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] [instance: 646ce79a-a40a-4b2a-9af5-90c98c98b72a] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Dec  6 02:01:45 np0005548731 nova_compute[232433]: 2025-12-06 07:01:45.948 232437 DEBUG nova.virt.libvirt.driver [None req-9de8e524-0c42-41b4-8da5-6a25ee29c9c4 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] [instance: 646ce79a-a40a-4b2a-9af5-90c98c98b72a] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Dec  6 02:01:45 np0005548731 nova_compute[232433]: 2025-12-06 07:01:45.949 232437 INFO nova.virt.libvirt.driver [None req-9de8e524-0c42-41b4-8da5-6a25ee29c9c4 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] [instance: 646ce79a-a40a-4b2a-9af5-90c98c98b72a] Creating image(s)#033[00m
Dec  6 02:01:46 np0005548731 nova_compute[232433]: 2025-12-06 07:01:46.029 232437 DEBUG nova.storage.rbd_utils [None req-9de8e524-0c42-41b4-8da5-6a25ee29c9c4 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] rbd image 646ce79a-a40a-4b2a-9af5-90c98c98b72a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:01:46 np0005548731 nova_compute[232433]: 2025-12-06 07:01:46.064 232437 DEBUG nova.storage.rbd_utils [None req-9de8e524-0c42-41b4-8da5-6a25ee29c9c4 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] rbd image 646ce79a-a40a-4b2a-9af5-90c98c98b72a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:01:46 np0005548731 nova_compute[232433]: 2025-12-06 07:01:46.094 232437 DEBUG nova.storage.rbd_utils [None req-9de8e524-0c42-41b4-8da5-6a25ee29c9c4 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] rbd image 646ce79a-a40a-4b2a-9af5-90c98c98b72a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:01:46 np0005548731 nova_compute[232433]: 2025-12-06 07:01:46.098 232437 DEBUG oslo_concurrency.processutils [None req-9de8e524-0c42-41b4-8da5-6a25ee29c9c4 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:01:46 np0005548731 nova_compute[232433]: 2025-12-06 07:01:46.121 232437 DEBUG nova.policy [None req-9de8e524-0c42-41b4-8da5-6a25ee29c9c4 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'a3cae056210a400fa5e3495fe827d29a', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'b6179a8b65c2484eb7ca1e068d93a58c', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Dec  6 02:01:46 np0005548731 nova_compute[232433]: 2025-12-06 07:01:46.123 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:01:46 np0005548731 nova_compute[232433]: 2025-12-06 07:01:46.127 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:01:46 np0005548731 nova_compute[232433]: 2025-12-06 07:01:46.127 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  6 02:01:46 np0005548731 nova_compute[232433]: 2025-12-06 07:01:46.161 232437 DEBUG oslo_concurrency.processutils [None req-9de8e524-0c42-41b4-8da5-6a25ee29c9c4 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:01:46 np0005548731 nova_compute[232433]: 2025-12-06 07:01:46.162 232437 DEBUG oslo_concurrency.lockutils [None req-9de8e524-0c42-41b4-8da5-6a25ee29c9c4 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Acquiring lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:01:46 np0005548731 nova_compute[232433]: 2025-12-06 07:01:46.163 232437 DEBUG oslo_concurrency.lockutils [None req-9de8e524-0c42-41b4-8da5-6a25ee29c9c4 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:01:46 np0005548731 nova_compute[232433]: 2025-12-06 07:01:46.163 232437 DEBUG oslo_concurrency.lockutils [None req-9de8e524-0c42-41b4-8da5-6a25ee29c9c4 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:01:46 np0005548731 nova_compute[232433]: 2025-12-06 07:01:46.191 232437 DEBUG nova.storage.rbd_utils [None req-9de8e524-0c42-41b4-8da5-6a25ee29c9c4 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] rbd image 646ce79a-a40a-4b2a-9af5-90c98c98b72a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:01:46 np0005548731 nova_compute[232433]: 2025-12-06 07:01:46.197 232437 DEBUG oslo_concurrency.processutils [None req-9de8e524-0c42-41b4-8da5-6a25ee29c9c4 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef 646ce79a-a40a-4b2a-9af5-90c98c98b72a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:01:47 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:01:47 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:01:47 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:01:47.510 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:01:47 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:01:47 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 02:01:47 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:01:47.603 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 02:01:48 np0005548731 nova_compute[232433]: 2025-12-06 07:01:48.344 232437 DEBUG oslo_concurrency.processutils [None req-9de8e524-0c42-41b4-8da5-6a25ee29c9c4 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef 646ce79a-a40a-4b2a-9af5-90c98c98b72a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.148s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:01:48 np0005548731 nova_compute[232433]: 2025-12-06 07:01:48.380 232437 DEBUG nova.network.neutron [None req-9de8e524-0c42-41b4-8da5-6a25ee29c9c4 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] [instance: 646ce79a-a40a-4b2a-9af5-90c98c98b72a] Successfully created port: 45a08bd2-8b6a-4af2-bd49-4df95910daa9 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Dec  6 02:01:48 np0005548731 nova_compute[232433]: 2025-12-06 07:01:48.428 232437 DEBUG nova.storage.rbd_utils [None req-9de8e524-0c42-41b4-8da5-6a25ee29c9c4 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] resizing rbd image 646ce79a-a40a-4b2a-9af5-90c98c98b72a_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Dec  6 02:01:48 np0005548731 nova_compute[232433]: 2025-12-06 07:01:48.528 232437 DEBUG nova.objects.instance [None req-9de8e524-0c42-41b4-8da5-6a25ee29c9c4 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Lazy-loading 'migration_context' on Instance uuid 646ce79a-a40a-4b2a-9af5-90c98c98b72a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:01:48 np0005548731 nova_compute[232433]: 2025-12-06 07:01:48.591 232437 DEBUG nova.virt.libvirt.driver [None req-9de8e524-0c42-41b4-8da5-6a25ee29c9c4 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] [instance: 646ce79a-a40a-4b2a-9af5-90c98c98b72a] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Dec  6 02:01:48 np0005548731 nova_compute[232433]: 2025-12-06 07:01:48.592 232437 DEBUG nova.virt.libvirt.driver [None req-9de8e524-0c42-41b4-8da5-6a25ee29c9c4 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] [instance: 646ce79a-a40a-4b2a-9af5-90c98c98b72a] Ensure instance console log exists: /var/lib/nova/instances/646ce79a-a40a-4b2a-9af5-90c98c98b72a/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec  6 02:01:48 np0005548731 nova_compute[232433]: 2025-12-06 07:01:48.592 232437 DEBUG oslo_concurrency.lockutils [None req-9de8e524-0c42-41b4-8da5-6a25ee29c9c4 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:01:48 np0005548731 nova_compute[232433]: 2025-12-06 07:01:48.593 232437 DEBUG oslo_concurrency.lockutils [None req-9de8e524-0c42-41b4-8da5-6a25ee29c9c4 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:01:48 np0005548731 nova_compute[232433]: 2025-12-06 07:01:48.593 232437 DEBUG oslo_concurrency.lockutils [None req-9de8e524-0c42-41b4-8da5-6a25ee29c9c4 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:01:48 np0005548731 nova_compute[232433]: 2025-12-06 07:01:48.602 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:01:49 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:01:49 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:01:49 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:01:49.513 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:01:49 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:01:49 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:01:49 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:01:49.605 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:01:49 np0005548731 nova_compute[232433]: 2025-12-06 07:01:49.975 232437 DEBUG nova.network.neutron [None req-9de8e524-0c42-41b4-8da5-6a25ee29c9c4 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] [instance: 646ce79a-a40a-4b2a-9af5-90c98c98b72a] Successfully updated port: 45a08bd2-8b6a-4af2-bd49-4df95910daa9 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Dec  6 02:01:50 np0005548731 nova_compute[232433]: 2025-12-06 07:01:50.102 232437 DEBUG oslo_concurrency.lockutils [None req-9de8e524-0c42-41b4-8da5-6a25ee29c9c4 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Acquiring lock "refresh_cache-646ce79a-a40a-4b2a-9af5-90c98c98b72a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 02:01:50 np0005548731 nova_compute[232433]: 2025-12-06 07:01:50.102 232437 DEBUG oslo_concurrency.lockutils [None req-9de8e524-0c42-41b4-8da5-6a25ee29c9c4 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Acquired lock "refresh_cache-646ce79a-a40a-4b2a-9af5-90c98c98b72a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 02:01:50 np0005548731 nova_compute[232433]: 2025-12-06 07:01:50.102 232437 DEBUG nova.network.neutron [None req-9de8e524-0c42-41b4-8da5-6a25ee29c9c4 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] [instance: 646ce79a-a40a-4b2a-9af5-90c98c98b72a] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec  6 02:01:50 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:01:50 np0005548731 nova_compute[232433]: 2025-12-06 07:01:50.381 232437 DEBUG nova.compute.manager [req-7ed15d09-e420-4355-b4ed-69ed8164afce req-868976ae-13da-40c2-ba71-b22ced7a16df 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 646ce79a-a40a-4b2a-9af5-90c98c98b72a] Received event network-changed-45a08bd2-8b6a-4af2-bd49-4df95910daa9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:01:50 np0005548731 nova_compute[232433]: 2025-12-06 07:01:50.381 232437 DEBUG nova.compute.manager [req-7ed15d09-e420-4355-b4ed-69ed8164afce req-868976ae-13da-40c2-ba71-b22ced7a16df 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 646ce79a-a40a-4b2a-9af5-90c98c98b72a] Refreshing instance network info cache due to event network-changed-45a08bd2-8b6a-4af2-bd49-4df95910daa9. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  6 02:01:50 np0005548731 nova_compute[232433]: 2025-12-06 07:01:50.382 232437 DEBUG oslo_concurrency.lockutils [req-7ed15d09-e420-4355-b4ed-69ed8164afce req-868976ae-13da-40c2-ba71-b22ced7a16df 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-646ce79a-a40a-4b2a-9af5-90c98c98b72a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 02:01:50 np0005548731 nova_compute[232433]: 2025-12-06 07:01:50.529 232437 DEBUG nova.network.neutron [None req-9de8e524-0c42-41b4-8da5-6a25ee29c9c4 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] [instance: 646ce79a-a40a-4b2a-9af5-90c98c98b72a] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec  6 02:01:50 np0005548731 nova_compute[232433]: 2025-12-06 07:01:50.964 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:01:51 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:01:51 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:01:51 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:01:51.516 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:01:51 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:01:51 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:01:51 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:01:51.607 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:01:52 np0005548731 nova_compute[232433]: 2025-12-06 07:01:52.247 232437 DEBUG nova.network.neutron [None req-9de8e524-0c42-41b4-8da5-6a25ee29c9c4 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] [instance: 646ce79a-a40a-4b2a-9af5-90c98c98b72a] Updating instance_info_cache with network_info: [{"id": "45a08bd2-8b6a-4af2-bd49-4df95910daa9", "address": "fa:16:3e:12:9e:14", "network": {"id": "9451d867-0aba-464d-b4d9-f947b887e903", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-291936370-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b6179a8b65c2484eb7ca1e068d93a58c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap45a08bd2-8b", "ovs_interfaceid": "45a08bd2-8b6a-4af2-bd49-4df95910daa9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 02:01:52 np0005548731 nova_compute[232433]: 2025-12-06 07:01:52.354 232437 DEBUG oslo_concurrency.lockutils [None req-9de8e524-0c42-41b4-8da5-6a25ee29c9c4 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Releasing lock "refresh_cache-646ce79a-a40a-4b2a-9af5-90c98c98b72a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 02:01:52 np0005548731 nova_compute[232433]: 2025-12-06 07:01:52.355 232437 DEBUG nova.compute.manager [None req-9de8e524-0c42-41b4-8da5-6a25ee29c9c4 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] [instance: 646ce79a-a40a-4b2a-9af5-90c98c98b72a] Instance network_info: |[{"id": "45a08bd2-8b6a-4af2-bd49-4df95910daa9", "address": "fa:16:3e:12:9e:14", "network": {"id": "9451d867-0aba-464d-b4d9-f947b887e903", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-291936370-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b6179a8b65c2484eb7ca1e068d93a58c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap45a08bd2-8b", "ovs_interfaceid": "45a08bd2-8b6a-4af2-bd49-4df95910daa9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Dec  6 02:01:52 np0005548731 nova_compute[232433]: 2025-12-06 07:01:52.355 232437 DEBUG oslo_concurrency.lockutils [req-7ed15d09-e420-4355-b4ed-69ed8164afce req-868976ae-13da-40c2-ba71-b22ced7a16df 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-646ce79a-a40a-4b2a-9af5-90c98c98b72a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 02:01:52 np0005548731 nova_compute[232433]: 2025-12-06 07:01:52.356 232437 DEBUG nova.network.neutron [req-7ed15d09-e420-4355-b4ed-69ed8164afce req-868976ae-13da-40c2-ba71-b22ced7a16df 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 646ce79a-a40a-4b2a-9af5-90c98c98b72a] Refreshing network info cache for port 45a08bd2-8b6a-4af2-bd49-4df95910daa9 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  6 02:01:52 np0005548731 nova_compute[232433]: 2025-12-06 07:01:52.359 232437 DEBUG nova.virt.libvirt.driver [None req-9de8e524-0c42-41b4-8da5-6a25ee29c9c4 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] [instance: 646ce79a-a40a-4b2a-9af5-90c98c98b72a] Start _get_guest_xml network_info=[{"id": "45a08bd2-8b6a-4af2-bd49-4df95910daa9", "address": "fa:16:3e:12:9e:14", "network": {"id": "9451d867-0aba-464d-b4d9-f947b887e903", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-291936370-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b6179a8b65c2484eb7ca1e068d93a58c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap45a08bd2-8b", "ovs_interfaceid": "45a08bd2-8b6a-4af2-bd49-4df95910daa9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'device_type': 'disk', 'encryption_secret_uuid': None, 'disk_bus': 'virtio', 'boot_index': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'encryption_options': None, 'size': 0, 'encrypted': False, 'image_id': '6efab05d-c7cf-4770-a5c3-c806a2739063'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec  6 02:01:52 np0005548731 nova_compute[232433]: 2025-12-06 07:01:52.363 232437 WARNING nova.virt.libvirt.driver [None req-9de8e524-0c42-41b4-8da5-6a25ee29c9c4 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  6 02:01:52 np0005548731 nova_compute[232433]: 2025-12-06 07:01:52.368 232437 DEBUG nova.virt.libvirt.host [None req-9de8e524-0c42-41b4-8da5-6a25ee29c9c4 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec  6 02:01:52 np0005548731 nova_compute[232433]: 2025-12-06 07:01:52.369 232437 DEBUG nova.virt.libvirt.host [None req-9de8e524-0c42-41b4-8da5-6a25ee29c9c4 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec  6 02:01:52 np0005548731 nova_compute[232433]: 2025-12-06 07:01:52.372 232437 DEBUG nova.virt.libvirt.host [None req-9de8e524-0c42-41b4-8da5-6a25ee29c9c4 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec  6 02:01:52 np0005548731 nova_compute[232433]: 2025-12-06 07:01:52.373 232437 DEBUG nova.virt.libvirt.host [None req-9de8e524-0c42-41b4-8da5-6a25ee29c9c4 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec  6 02:01:52 np0005548731 nova_compute[232433]: 2025-12-06 07:01:52.374 232437 DEBUG nova.virt.libvirt.driver [None req-9de8e524-0c42-41b4-8da5-6a25ee29c9c4 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec  6 02:01:52 np0005548731 nova_compute[232433]: 2025-12-06 07:01:52.375 232437 DEBUG nova.virt.hardware [None req-9de8e524-0c42-41b4-8da5-6a25ee29c9c4 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-06T06:56:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='25848a18-11d9-4f11-80b5-5d005675c76d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec  6 02:01:52 np0005548731 nova_compute[232433]: 2025-12-06 07:01:52.376 232437 DEBUG nova.virt.hardware [None req-9de8e524-0c42-41b4-8da5-6a25ee29c9c4 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec  6 02:01:52 np0005548731 nova_compute[232433]: 2025-12-06 07:01:52.376 232437 DEBUG nova.virt.hardware [None req-9de8e524-0c42-41b4-8da5-6a25ee29c9c4 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec  6 02:01:52 np0005548731 nova_compute[232433]: 2025-12-06 07:01:52.376 232437 DEBUG nova.virt.hardware [None req-9de8e524-0c42-41b4-8da5-6a25ee29c9c4 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec  6 02:01:52 np0005548731 nova_compute[232433]: 2025-12-06 07:01:52.377 232437 DEBUG nova.virt.hardware [None req-9de8e524-0c42-41b4-8da5-6a25ee29c9c4 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec  6 02:01:52 np0005548731 nova_compute[232433]: 2025-12-06 07:01:52.377 232437 DEBUG nova.virt.hardware [None req-9de8e524-0c42-41b4-8da5-6a25ee29c9c4 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec  6 02:01:52 np0005548731 nova_compute[232433]: 2025-12-06 07:01:52.377 232437 DEBUG nova.virt.hardware [None req-9de8e524-0c42-41b4-8da5-6a25ee29c9c4 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec  6 02:01:52 np0005548731 nova_compute[232433]: 2025-12-06 07:01:52.378 232437 DEBUG nova.virt.hardware [None req-9de8e524-0c42-41b4-8da5-6a25ee29c9c4 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec  6 02:01:52 np0005548731 nova_compute[232433]: 2025-12-06 07:01:52.378 232437 DEBUG nova.virt.hardware [None req-9de8e524-0c42-41b4-8da5-6a25ee29c9c4 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec  6 02:01:52 np0005548731 nova_compute[232433]: 2025-12-06 07:01:52.378 232437 DEBUG nova.virt.hardware [None req-9de8e524-0c42-41b4-8da5-6a25ee29c9c4 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec  6 02:01:52 np0005548731 nova_compute[232433]: 2025-12-06 07:01:52.379 232437 DEBUG nova.virt.hardware [None req-9de8e524-0c42-41b4-8da5-6a25ee29c9c4 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec  6 02:01:52 np0005548731 nova_compute[232433]: 2025-12-06 07:01:52.383 232437 DEBUG oslo_concurrency.processutils [None req-9de8e524-0c42-41b4-8da5-6a25ee29c9c4 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:01:53 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  6 02:01:53 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2352722957' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  6 02:01:53 np0005548731 nova_compute[232433]: 2025-12-06 07:01:53.107 232437 DEBUG oslo_concurrency.processutils [None req-9de8e524-0c42-41b4-8da5-6a25ee29c9c4 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.724s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:01:53 np0005548731 nova_compute[232433]: 2025-12-06 07:01:53.137 232437 DEBUG nova.storage.rbd_utils [None req-9de8e524-0c42-41b4-8da5-6a25ee29c9c4 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] rbd image 646ce79a-a40a-4b2a-9af5-90c98c98b72a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:01:53 np0005548731 nova_compute[232433]: 2025-12-06 07:01:53.141 232437 DEBUG oslo_concurrency.processutils [None req-9de8e524-0c42-41b4-8da5-6a25ee29c9c4 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:01:53 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:01:53 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:01:53 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:01:53.519 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:01:53 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:01:53 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:01:53 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:01:53.609 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:01:53 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  6 02:01:53 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2918283785' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  6 02:01:53 np0005548731 nova_compute[232433]: 2025-12-06 07:01:53.611 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:01:53 np0005548731 nova_compute[232433]: 2025-12-06 07:01:53.690 232437 DEBUG oslo_concurrency.processutils [None req-9de8e524-0c42-41b4-8da5-6a25ee29c9c4 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.549s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:01:53 np0005548731 nova_compute[232433]: 2025-12-06 07:01:53.692 232437 DEBUG nova.virt.libvirt.vif [None req-9de8e524-0c42-41b4-8da5-6a25ee29c9c4 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T07:01:44Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-1264066457',display_name='tempest-ServersAdminTestJSON-server-1264066457',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-1264066457',id=20,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b6179a8b65c2484eb7ca1e068d93a58c',ramdisk_id='',reservation_id='r-ktncwgme',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersAdminTestJSON-1902776367',owner_user_name='tempest-ServersAdminTestJSON-19
02776367-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:01:45Z,user_data=None,user_id='a3cae056210a400fa5e3495fe827d29a',uuid=646ce79a-a40a-4b2a-9af5-90c98c98b72a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "45a08bd2-8b6a-4af2-bd49-4df95910daa9", "address": "fa:16:3e:12:9e:14", "network": {"id": "9451d867-0aba-464d-b4d9-f947b887e903", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-291936370-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b6179a8b65c2484eb7ca1e068d93a58c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap45a08bd2-8b", "ovs_interfaceid": "45a08bd2-8b6a-4af2-bd49-4df95910daa9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec  6 02:01:53 np0005548731 nova_compute[232433]: 2025-12-06 07:01:53.692 232437 DEBUG nova.network.os_vif_util [None req-9de8e524-0c42-41b4-8da5-6a25ee29c9c4 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Converting VIF {"id": "45a08bd2-8b6a-4af2-bd49-4df95910daa9", "address": "fa:16:3e:12:9e:14", "network": {"id": "9451d867-0aba-464d-b4d9-f947b887e903", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-291936370-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b6179a8b65c2484eb7ca1e068d93a58c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap45a08bd2-8b", "ovs_interfaceid": "45a08bd2-8b6a-4af2-bd49-4df95910daa9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 02:01:53 np0005548731 nova_compute[232433]: 2025-12-06 07:01:53.693 232437 DEBUG nova.network.os_vif_util [None req-9de8e524-0c42-41b4-8da5-6a25ee29c9c4 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:12:9e:14,bridge_name='br-int',has_traffic_filtering=True,id=45a08bd2-8b6a-4af2-bd49-4df95910daa9,network=Network(9451d867-0aba-464d-b4d9-f947b887e903),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap45a08bd2-8b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 02:01:53 np0005548731 nova_compute[232433]: 2025-12-06 07:01:53.694 232437 DEBUG nova.objects.instance [None req-9de8e524-0c42-41b4-8da5-6a25ee29c9c4 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Lazy-loading 'pci_devices' on Instance uuid 646ce79a-a40a-4b2a-9af5-90c98c98b72a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:01:53 np0005548731 nova_compute[232433]: 2025-12-06 07:01:53.811 232437 DEBUG nova.virt.libvirt.driver [None req-9de8e524-0c42-41b4-8da5-6a25ee29c9c4 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] [instance: 646ce79a-a40a-4b2a-9af5-90c98c98b72a] End _get_guest_xml xml=<domain type="kvm">
Dec  6 02:01:53 np0005548731 nova_compute[232433]:  <uuid>646ce79a-a40a-4b2a-9af5-90c98c98b72a</uuid>
Dec  6 02:01:53 np0005548731 nova_compute[232433]:  <name>instance-00000014</name>
Dec  6 02:01:53 np0005548731 nova_compute[232433]:  <memory>131072</memory>
Dec  6 02:01:53 np0005548731 nova_compute[232433]:  <vcpu>1</vcpu>
Dec  6 02:01:53 np0005548731 nova_compute[232433]:  <metadata>
Dec  6 02:01:53 np0005548731 nova_compute[232433]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec  6 02:01:53 np0005548731 nova_compute[232433]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec  6 02:01:53 np0005548731 nova_compute[232433]:      <nova:name>tempest-ServersAdminTestJSON-server-1264066457</nova:name>
Dec  6 02:01:53 np0005548731 nova_compute[232433]:      <nova:creationTime>2025-12-06 07:01:52</nova:creationTime>
Dec  6 02:01:53 np0005548731 nova_compute[232433]:      <nova:flavor name="m1.nano">
Dec  6 02:01:53 np0005548731 nova_compute[232433]:        <nova:memory>128</nova:memory>
Dec  6 02:01:53 np0005548731 nova_compute[232433]:        <nova:disk>1</nova:disk>
Dec  6 02:01:53 np0005548731 nova_compute[232433]:        <nova:swap>0</nova:swap>
Dec  6 02:01:53 np0005548731 nova_compute[232433]:        <nova:ephemeral>0</nova:ephemeral>
Dec  6 02:01:53 np0005548731 nova_compute[232433]:        <nova:vcpus>1</nova:vcpus>
Dec  6 02:01:53 np0005548731 nova_compute[232433]:      </nova:flavor>
Dec  6 02:01:53 np0005548731 nova_compute[232433]:      <nova:owner>
Dec  6 02:01:53 np0005548731 nova_compute[232433]:        <nova:user uuid="a3cae056210a400fa5e3495fe827d29a">tempest-ServersAdminTestJSON-1902776367-project-member</nova:user>
Dec  6 02:01:53 np0005548731 nova_compute[232433]:        <nova:project uuid="b6179a8b65c2484eb7ca1e068d93a58c">tempest-ServersAdminTestJSON-1902776367</nova:project>
Dec  6 02:01:53 np0005548731 nova_compute[232433]:      </nova:owner>
Dec  6 02:01:53 np0005548731 nova_compute[232433]:      <nova:root type="image" uuid="6efab05d-c7cf-4770-a5c3-c806a2739063"/>
Dec  6 02:01:53 np0005548731 nova_compute[232433]:      <nova:ports>
Dec  6 02:01:53 np0005548731 nova_compute[232433]:        <nova:port uuid="45a08bd2-8b6a-4af2-bd49-4df95910daa9">
Dec  6 02:01:53 np0005548731 nova_compute[232433]:          <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Dec  6 02:01:53 np0005548731 nova_compute[232433]:        </nova:port>
Dec  6 02:01:53 np0005548731 nova_compute[232433]:      </nova:ports>
Dec  6 02:01:53 np0005548731 nova_compute[232433]:    </nova:instance>
Dec  6 02:01:53 np0005548731 nova_compute[232433]:  </metadata>
Dec  6 02:01:53 np0005548731 nova_compute[232433]:  <sysinfo type="smbios">
Dec  6 02:01:53 np0005548731 nova_compute[232433]:    <system>
Dec  6 02:01:53 np0005548731 nova_compute[232433]:      <entry name="manufacturer">RDO</entry>
Dec  6 02:01:53 np0005548731 nova_compute[232433]:      <entry name="product">OpenStack Compute</entry>
Dec  6 02:01:53 np0005548731 nova_compute[232433]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec  6 02:01:53 np0005548731 nova_compute[232433]:      <entry name="serial">646ce79a-a40a-4b2a-9af5-90c98c98b72a</entry>
Dec  6 02:01:53 np0005548731 nova_compute[232433]:      <entry name="uuid">646ce79a-a40a-4b2a-9af5-90c98c98b72a</entry>
Dec  6 02:01:53 np0005548731 nova_compute[232433]:      <entry name="family">Virtual Machine</entry>
Dec  6 02:01:53 np0005548731 nova_compute[232433]:    </system>
Dec  6 02:01:53 np0005548731 nova_compute[232433]:  </sysinfo>
Dec  6 02:01:53 np0005548731 nova_compute[232433]:  <os>
Dec  6 02:01:53 np0005548731 nova_compute[232433]:    <type arch="x86_64" machine="q35">hvm</type>
Dec  6 02:01:53 np0005548731 nova_compute[232433]:    <boot dev="hd"/>
Dec  6 02:01:53 np0005548731 nova_compute[232433]:    <smbios mode="sysinfo"/>
Dec  6 02:01:53 np0005548731 nova_compute[232433]:  </os>
Dec  6 02:01:53 np0005548731 nova_compute[232433]:  <features>
Dec  6 02:01:53 np0005548731 nova_compute[232433]:    <acpi/>
Dec  6 02:01:53 np0005548731 nova_compute[232433]:    <apic/>
Dec  6 02:01:53 np0005548731 nova_compute[232433]:    <vmcoreinfo/>
Dec  6 02:01:53 np0005548731 nova_compute[232433]:  </features>
Dec  6 02:01:53 np0005548731 nova_compute[232433]:  <clock offset="utc">
Dec  6 02:01:53 np0005548731 nova_compute[232433]:    <timer name="pit" tickpolicy="delay"/>
Dec  6 02:01:53 np0005548731 nova_compute[232433]:    <timer name="rtc" tickpolicy="catchup"/>
Dec  6 02:01:53 np0005548731 nova_compute[232433]:    <timer name="hpet" present="no"/>
Dec  6 02:01:53 np0005548731 nova_compute[232433]:  </clock>
Dec  6 02:01:53 np0005548731 nova_compute[232433]:  <cpu mode="custom" match="exact">
Dec  6 02:01:53 np0005548731 nova_compute[232433]:    <model>Nehalem</model>
Dec  6 02:01:53 np0005548731 nova_compute[232433]:    <topology sockets="1" cores="1" threads="1"/>
Dec  6 02:01:53 np0005548731 nova_compute[232433]:  </cpu>
Dec  6 02:01:53 np0005548731 nova_compute[232433]:  <devices>
Dec  6 02:01:53 np0005548731 nova_compute[232433]:    <disk type="network" device="disk">
Dec  6 02:01:53 np0005548731 nova_compute[232433]:      <driver type="raw" cache="none"/>
Dec  6 02:01:53 np0005548731 nova_compute[232433]:      <source protocol="rbd" name="vms/646ce79a-a40a-4b2a-9af5-90c98c98b72a_disk">
Dec  6 02:01:53 np0005548731 nova_compute[232433]:        <host name="192.168.122.100" port="6789"/>
Dec  6 02:01:53 np0005548731 nova_compute[232433]:        <host name="192.168.122.102" port="6789"/>
Dec  6 02:01:53 np0005548731 nova_compute[232433]:        <host name="192.168.122.101" port="6789"/>
Dec  6 02:01:53 np0005548731 nova_compute[232433]:      </source>
Dec  6 02:01:53 np0005548731 nova_compute[232433]:      <auth username="openstack">
Dec  6 02:01:53 np0005548731 nova_compute[232433]:        <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec  6 02:01:53 np0005548731 nova_compute[232433]:      </auth>
Dec  6 02:01:53 np0005548731 nova_compute[232433]:      <target dev="vda" bus="virtio"/>
Dec  6 02:01:53 np0005548731 nova_compute[232433]:    </disk>
Dec  6 02:01:53 np0005548731 nova_compute[232433]:    <disk type="network" device="cdrom">
Dec  6 02:01:53 np0005548731 nova_compute[232433]:      <driver type="raw" cache="none"/>
Dec  6 02:01:53 np0005548731 nova_compute[232433]:      <source protocol="rbd" name="vms/646ce79a-a40a-4b2a-9af5-90c98c98b72a_disk.config">
Dec  6 02:01:53 np0005548731 nova_compute[232433]:        <host name="192.168.122.100" port="6789"/>
Dec  6 02:01:53 np0005548731 nova_compute[232433]:        <host name="192.168.122.102" port="6789"/>
Dec  6 02:01:53 np0005548731 nova_compute[232433]:        <host name="192.168.122.101" port="6789"/>
Dec  6 02:01:53 np0005548731 nova_compute[232433]:      </source>
Dec  6 02:01:53 np0005548731 nova_compute[232433]:      <auth username="openstack">
Dec  6 02:01:53 np0005548731 nova_compute[232433]:        <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec  6 02:01:53 np0005548731 nova_compute[232433]:      </auth>
Dec  6 02:01:53 np0005548731 nova_compute[232433]:      <target dev="sda" bus="sata"/>
Dec  6 02:01:53 np0005548731 nova_compute[232433]:    </disk>
Dec  6 02:01:53 np0005548731 nova_compute[232433]:    <interface type="ethernet">
Dec  6 02:01:53 np0005548731 nova_compute[232433]:      <mac address="fa:16:3e:12:9e:14"/>
Dec  6 02:01:53 np0005548731 nova_compute[232433]:      <model type="virtio"/>
Dec  6 02:01:53 np0005548731 nova_compute[232433]:      <driver name="vhost" rx_queue_size="512"/>
Dec  6 02:01:53 np0005548731 nova_compute[232433]:      <mtu size="1442"/>
Dec  6 02:01:53 np0005548731 nova_compute[232433]:      <target dev="tap45a08bd2-8b"/>
Dec  6 02:01:53 np0005548731 nova_compute[232433]:    </interface>
Dec  6 02:01:53 np0005548731 nova_compute[232433]:    <serial type="pty">
Dec  6 02:01:53 np0005548731 nova_compute[232433]:      <log file="/var/lib/nova/instances/646ce79a-a40a-4b2a-9af5-90c98c98b72a/console.log" append="off"/>
Dec  6 02:01:53 np0005548731 nova_compute[232433]:    </serial>
Dec  6 02:01:53 np0005548731 nova_compute[232433]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec  6 02:01:53 np0005548731 nova_compute[232433]:    <video>
Dec  6 02:01:53 np0005548731 nova_compute[232433]:      <model type="virtio"/>
Dec  6 02:01:53 np0005548731 nova_compute[232433]:    </video>
Dec  6 02:01:53 np0005548731 nova_compute[232433]:    <input type="tablet" bus="usb"/>
Dec  6 02:01:53 np0005548731 nova_compute[232433]:    <rng model="virtio">
Dec  6 02:01:53 np0005548731 nova_compute[232433]:      <backend model="random">/dev/urandom</backend>
Dec  6 02:01:53 np0005548731 nova_compute[232433]:    </rng>
Dec  6 02:01:53 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root"/>
Dec  6 02:01:53 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:01:53 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:01:53 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:01:53 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:01:53 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:01:53 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:01:53 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:01:53 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:01:53 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:01:53 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:01:53 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:01:53 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:01:53 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:01:53 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:01:53 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:01:53 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:01:53 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:01:53 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:01:53 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:01:53 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:01:53 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:01:53 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:01:53 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:01:53 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:01:53 np0005548731 nova_compute[232433]:    <controller type="usb" index="0"/>
Dec  6 02:01:53 np0005548731 nova_compute[232433]:    <memballoon model="virtio">
Dec  6 02:01:53 np0005548731 nova_compute[232433]:      <stats period="10"/>
Dec  6 02:01:53 np0005548731 nova_compute[232433]:    </memballoon>
Dec  6 02:01:53 np0005548731 nova_compute[232433]:  </devices>
Dec  6 02:01:53 np0005548731 nova_compute[232433]: </domain>
Dec  6 02:01:53 np0005548731 nova_compute[232433]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec  6 02:01:53 np0005548731 nova_compute[232433]: 2025-12-06 07:01:53.812 232437 DEBUG nova.compute.manager [None req-9de8e524-0c42-41b4-8da5-6a25ee29c9c4 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] [instance: 646ce79a-a40a-4b2a-9af5-90c98c98b72a] Preparing to wait for external event network-vif-plugged-45a08bd2-8b6a-4af2-bd49-4df95910daa9 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec  6 02:01:53 np0005548731 nova_compute[232433]: 2025-12-06 07:01:53.812 232437 DEBUG oslo_concurrency.lockutils [None req-9de8e524-0c42-41b4-8da5-6a25ee29c9c4 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Acquiring lock "646ce79a-a40a-4b2a-9af5-90c98c98b72a-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:01:53 np0005548731 nova_compute[232433]: 2025-12-06 07:01:53.813 232437 DEBUG oslo_concurrency.lockutils [None req-9de8e524-0c42-41b4-8da5-6a25ee29c9c4 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Lock "646ce79a-a40a-4b2a-9af5-90c98c98b72a-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:01:53 np0005548731 nova_compute[232433]: 2025-12-06 07:01:53.813 232437 DEBUG oslo_concurrency.lockutils [None req-9de8e524-0c42-41b4-8da5-6a25ee29c9c4 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Lock "646ce79a-a40a-4b2a-9af5-90c98c98b72a-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:01:53 np0005548731 nova_compute[232433]: 2025-12-06 07:01:53.813 232437 DEBUG nova.virt.libvirt.vif [None req-9de8e524-0c42-41b4-8da5-6a25ee29c9c4 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T07:01:44Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-1264066457',display_name='tempest-ServersAdminTestJSON-server-1264066457',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-1264066457',id=20,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b6179a8b65c2484eb7ca1e068d93a58c',ramdisk_id='',reservation_id='r-ktncwgme',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersAdminTestJSON-1902776367',owner_user_name='tempest-ServersAdminT
estJSON-1902776367-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:01:45Z,user_data=None,user_id='a3cae056210a400fa5e3495fe827d29a',uuid=646ce79a-a40a-4b2a-9af5-90c98c98b72a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "45a08bd2-8b6a-4af2-bd49-4df95910daa9", "address": "fa:16:3e:12:9e:14", "network": {"id": "9451d867-0aba-464d-b4d9-f947b887e903", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-291936370-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b6179a8b65c2484eb7ca1e068d93a58c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap45a08bd2-8b", "ovs_interfaceid": "45a08bd2-8b6a-4af2-bd49-4df95910daa9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec  6 02:01:53 np0005548731 nova_compute[232433]: 2025-12-06 07:01:53.814 232437 DEBUG nova.network.os_vif_util [None req-9de8e524-0c42-41b4-8da5-6a25ee29c9c4 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Converting VIF {"id": "45a08bd2-8b6a-4af2-bd49-4df95910daa9", "address": "fa:16:3e:12:9e:14", "network": {"id": "9451d867-0aba-464d-b4d9-f947b887e903", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-291936370-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b6179a8b65c2484eb7ca1e068d93a58c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap45a08bd2-8b", "ovs_interfaceid": "45a08bd2-8b6a-4af2-bd49-4df95910daa9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 02:01:53 np0005548731 nova_compute[232433]: 2025-12-06 07:01:53.814 232437 DEBUG nova.network.os_vif_util [None req-9de8e524-0c42-41b4-8da5-6a25ee29c9c4 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:12:9e:14,bridge_name='br-int',has_traffic_filtering=True,id=45a08bd2-8b6a-4af2-bd49-4df95910daa9,network=Network(9451d867-0aba-464d-b4d9-f947b887e903),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap45a08bd2-8b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 02:01:53 np0005548731 nova_compute[232433]: 2025-12-06 07:01:53.815 232437 DEBUG os_vif [None req-9de8e524-0c42-41b4-8da5-6a25ee29c9c4 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:12:9e:14,bridge_name='br-int',has_traffic_filtering=True,id=45a08bd2-8b6a-4af2-bd49-4df95910daa9,network=Network(9451d867-0aba-464d-b4d9-f947b887e903),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap45a08bd2-8b') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec  6 02:01:53 np0005548731 nova_compute[232433]: 2025-12-06 07:01:53.815 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:01:53 np0005548731 nova_compute[232433]: 2025-12-06 07:01:53.815 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:01:53 np0005548731 nova_compute[232433]: 2025-12-06 07:01:53.816 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  6 02:01:53 np0005548731 nova_compute[232433]: 2025-12-06 07:01:53.819 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:01:53 np0005548731 nova_compute[232433]: 2025-12-06 07:01:53.820 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap45a08bd2-8b, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:01:53 np0005548731 nova_compute[232433]: 2025-12-06 07:01:53.820 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap45a08bd2-8b, col_values=(('external_ids', {'iface-id': '45a08bd2-8b6a-4af2-bd49-4df95910daa9', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:12:9e:14', 'vm-uuid': '646ce79a-a40a-4b2a-9af5-90c98c98b72a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:01:53 np0005548731 nova_compute[232433]: 2025-12-06 07:01:53.822 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:01:53 np0005548731 NetworkManager[49182]: <info>  [1765004513.8229] manager: (tap45a08bd2-8b): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/45)
Dec  6 02:01:53 np0005548731 nova_compute[232433]: 2025-12-06 07:01:53.825 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  6 02:01:53 np0005548731 nova_compute[232433]: 2025-12-06 07:01:53.829 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:01:53 np0005548731 nova_compute[232433]: 2025-12-06 07:01:53.829 232437 INFO os_vif [None req-9de8e524-0c42-41b4-8da5-6a25ee29c9c4 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:12:9e:14,bridge_name='br-int',has_traffic_filtering=True,id=45a08bd2-8b6a-4af2-bd49-4df95910daa9,network=Network(9451d867-0aba-464d-b4d9-f947b887e903),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap45a08bd2-8b')#033[00m
Dec  6 02:01:53 np0005548731 nova_compute[232433]: 2025-12-06 07:01:53.889 232437 DEBUG nova.virt.libvirt.driver [None req-9de8e524-0c42-41b4-8da5-6a25ee29c9c4 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  6 02:01:53 np0005548731 nova_compute[232433]: 2025-12-06 07:01:53.890 232437 DEBUG nova.virt.libvirt.driver [None req-9de8e524-0c42-41b4-8da5-6a25ee29c9c4 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  6 02:01:53 np0005548731 nova_compute[232433]: 2025-12-06 07:01:53.890 232437 DEBUG nova.virt.libvirt.driver [None req-9de8e524-0c42-41b4-8da5-6a25ee29c9c4 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] No VIF found with MAC fa:16:3e:12:9e:14, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec  6 02:01:53 np0005548731 nova_compute[232433]: 2025-12-06 07:01:53.891 232437 INFO nova.virt.libvirt.driver [None req-9de8e524-0c42-41b4-8da5-6a25ee29c9c4 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] [instance: 646ce79a-a40a-4b2a-9af5-90c98c98b72a] Using config drive#033[00m
Dec  6 02:01:53 np0005548731 nova_compute[232433]: 2025-12-06 07:01:53.917 232437 DEBUG nova.storage.rbd_utils [None req-9de8e524-0c42-41b4-8da5-6a25ee29c9c4 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] rbd image 646ce79a-a40a-4b2a-9af5-90c98c98b72a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:01:54 np0005548731 nova_compute[232433]: 2025-12-06 07:01:54.257 232437 DEBUG oslo_concurrency.lockutils [None req-e63ddeb8-b7bc-4a39-86fc-208d32c575a1 56a80bccd06a46eb841b4a39bdc45f76 e2869bdde00d4ef8bea339be8805aa3f - - default default] Acquiring lock "f9728486-7db3-4d21-bd7a-b41b891489a5" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:01:54 np0005548731 nova_compute[232433]: 2025-12-06 07:01:54.258 232437 DEBUG oslo_concurrency.lockutils [None req-e63ddeb8-b7bc-4a39-86fc-208d32c575a1 56a80bccd06a46eb841b4a39bdc45f76 e2869bdde00d4ef8bea339be8805aa3f - - default default] Lock "f9728486-7db3-4d21-bd7a-b41b891489a5" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:01:54 np0005548731 nova_compute[232433]: 2025-12-06 07:01:54.258 232437 DEBUG oslo_concurrency.lockutils [None req-e63ddeb8-b7bc-4a39-86fc-208d32c575a1 56a80bccd06a46eb841b4a39bdc45f76 e2869bdde00d4ef8bea339be8805aa3f - - default default] Acquiring lock "f9728486-7db3-4d21-bd7a-b41b891489a5-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:01:54 np0005548731 nova_compute[232433]: 2025-12-06 07:01:54.259 232437 DEBUG oslo_concurrency.lockutils [None req-e63ddeb8-b7bc-4a39-86fc-208d32c575a1 56a80bccd06a46eb841b4a39bdc45f76 e2869bdde00d4ef8bea339be8805aa3f - - default default] Lock "f9728486-7db3-4d21-bd7a-b41b891489a5-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:01:54 np0005548731 nova_compute[232433]: 2025-12-06 07:01:54.259 232437 DEBUG oslo_concurrency.lockutils [None req-e63ddeb8-b7bc-4a39-86fc-208d32c575a1 56a80bccd06a46eb841b4a39bdc45f76 e2869bdde00d4ef8bea339be8805aa3f - - default default] Lock "f9728486-7db3-4d21-bd7a-b41b891489a5-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:01:54 np0005548731 nova_compute[232433]: 2025-12-06 07:01:54.260 232437 INFO nova.compute.manager [None req-e63ddeb8-b7bc-4a39-86fc-208d32c575a1 56a80bccd06a46eb841b4a39bdc45f76 e2869bdde00d4ef8bea339be8805aa3f - - default default] [instance: f9728486-7db3-4d21-bd7a-b41b891489a5] Terminating instance#033[00m
Dec  6 02:01:54 np0005548731 nova_compute[232433]: 2025-12-06 07:01:54.261 232437 DEBUG nova.compute.manager [None req-e63ddeb8-b7bc-4a39-86fc-208d32c575a1 56a80bccd06a46eb841b4a39bdc45f76 e2869bdde00d4ef8bea339be8805aa3f - - default default] [instance: f9728486-7db3-4d21-bd7a-b41b891489a5] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec  6 02:01:54 np0005548731 nova_compute[232433]: 2025-12-06 07:01:54.262 232437 DEBUG nova.network.neutron [req-7ed15d09-e420-4355-b4ed-69ed8164afce req-868976ae-13da-40c2-ba71-b22ced7a16df 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 646ce79a-a40a-4b2a-9af5-90c98c98b72a] Updated VIF entry in instance network info cache for port 45a08bd2-8b6a-4af2-bd49-4df95910daa9. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  6 02:01:54 np0005548731 nova_compute[232433]: 2025-12-06 07:01:54.262 232437 DEBUG nova.network.neutron [req-7ed15d09-e420-4355-b4ed-69ed8164afce req-868976ae-13da-40c2-ba71-b22ced7a16df 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 646ce79a-a40a-4b2a-9af5-90c98c98b72a] Updating instance_info_cache with network_info: [{"id": "45a08bd2-8b6a-4af2-bd49-4df95910daa9", "address": "fa:16:3e:12:9e:14", "network": {"id": "9451d867-0aba-464d-b4d9-f947b887e903", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-291936370-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b6179a8b65c2484eb7ca1e068d93a58c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap45a08bd2-8b", "ovs_interfaceid": "45a08bd2-8b6a-4af2-bd49-4df95910daa9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 02:01:54 np0005548731 nova_compute[232433]: 2025-12-06 07:01:54.307 232437 DEBUG oslo_concurrency.lockutils [req-7ed15d09-e420-4355-b4ed-69ed8164afce req-868976ae-13da-40c2-ba71-b22ced7a16df 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-646ce79a-a40a-4b2a-9af5-90c98c98b72a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 02:01:54 np0005548731 kernel: tap196b1de8-92 (unregistering): left promiscuous mode
Dec  6 02:01:54 np0005548731 nova_compute[232433]: 2025-12-06 07:01:54.342 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:01:54 np0005548731 NetworkManager[49182]: <info>  [1765004514.3434] device (tap196b1de8-92): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec  6 02:01:54 np0005548731 ovn_controller[133927]: 2025-12-06T07:01:54Z|00060|binding|INFO|Releasing lport 196b1de8-9246-4ec3-91b5-3686f8691152 from this chassis (sb_readonly=0)
Dec  6 02:01:54 np0005548731 nova_compute[232433]: 2025-12-06 07:01:54.357 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:01:54 np0005548731 ovn_controller[133927]: 2025-12-06T07:01:54Z|00061|binding|INFO|Setting lport 196b1de8-9246-4ec3-91b5-3686f8691152 down in Southbound
Dec  6 02:01:54 np0005548731 ovn_controller[133927]: 2025-12-06T07:01:54Z|00062|binding|INFO|Removing iface tap196b1de8-92 ovn-installed in OVS
Dec  6 02:01:54 np0005548731 nova_compute[232433]: 2025-12-06 07:01:54.360 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:01:54 np0005548731 nova_compute[232433]: 2025-12-06 07:01:54.375 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:01:54 np0005548731 systemd[1]: machine-qemu\x2d7\x2dinstance\x2d00000010.scope: Deactivated successfully.
Dec  6 02:01:54 np0005548731 systemd[1]: machine-qemu\x2d7\x2dinstance\x2d00000010.scope: Consumed 17.395s CPU time.
Dec  6 02:01:54 np0005548731 systemd-machined[195355]: Machine qemu-7-instance-00000010 terminated.
Dec  6 02:01:54 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:01:54.473 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ff:51:7a 10.100.0.3'], port_security=['fa:16:3e:ff:51:7a 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'f9728486-7db3-4d21-bd7a-b41b891489a5', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a0468f21-498f-424d-a04a-d011b4adc72e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e2869bdde00d4ef8bea339be8805aa3f', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'a097e786-0d4a-48b2-8e8b-48554d219e38', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com', 'neutron:port_fip': '192.168.122.193'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1fc5a527-4a15-4cc8-9ef4-9d61b8890236, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], logical_port=196b1de8-9246-4ec3-91b5-3686f8691152) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 02:01:54 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:01:54.475 143965 INFO neutron.agent.ovn.metadata.agent [-] Port 196b1de8-9246-4ec3-91b5-3686f8691152 in datapath a0468f21-498f-424d-a04a-d011b4adc72e unbound from our chassis#033[00m
Dec  6 02:01:54 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:01:54.476 143965 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network a0468f21-498f-424d-a04a-d011b4adc72e, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec  6 02:01:54 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:01:54.478 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[4d8b63dc-b9da-4222-890a-fafb988d2420]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:01:54 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:01:54.478 143965 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-a0468f21-498f-424d-a04a-d011b4adc72e namespace which is not needed anymore#033[00m
Dec  6 02:01:54 np0005548731 nova_compute[232433]: 2025-12-06 07:01:54.484 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:01:54 np0005548731 nova_compute[232433]: 2025-12-06 07:01:54.490 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:01:54 np0005548731 nova_compute[232433]: 2025-12-06 07:01:54.496 232437 INFO nova.virt.libvirt.driver [-] [instance: f9728486-7db3-4d21-bd7a-b41b891489a5] Instance destroyed successfully.#033[00m
Dec  6 02:01:54 np0005548731 nova_compute[232433]: 2025-12-06 07:01:54.496 232437 DEBUG nova.objects.instance [None req-e63ddeb8-b7bc-4a39-86fc-208d32c575a1 56a80bccd06a46eb841b4a39bdc45f76 e2869bdde00d4ef8bea339be8805aa3f - - default default] Lazy-loading 'resources' on Instance uuid f9728486-7db3-4d21-bd7a-b41b891489a5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:01:54 np0005548731 neutron-haproxy-ovnmeta-a0468f21-498f-424d-a04a-d011b4adc72e[240909]: [NOTICE]   (240928) : haproxy version is 2.8.14-c23fe91
Dec  6 02:01:54 np0005548731 neutron-haproxy-ovnmeta-a0468f21-498f-424d-a04a-d011b4adc72e[240909]: [NOTICE]   (240928) : path to executable is /usr/sbin/haproxy
Dec  6 02:01:54 np0005548731 neutron-haproxy-ovnmeta-a0468f21-498f-424d-a04a-d011b4adc72e[240909]: [WARNING]  (240928) : Exiting Master process...
Dec  6 02:01:54 np0005548731 neutron-haproxy-ovnmeta-a0468f21-498f-424d-a04a-d011b4adc72e[240909]: [ALERT]    (240928) : Current worker (240931) exited with code 143 (Terminated)
Dec  6 02:01:54 np0005548731 neutron-haproxy-ovnmeta-a0468f21-498f-424d-a04a-d011b4adc72e[240909]: [WARNING]  (240928) : All workers exited. Exiting... (0)
Dec  6 02:01:54 np0005548731 nova_compute[232433]: 2025-12-06 07:01:54.614 232437 INFO nova.virt.libvirt.driver [None req-9de8e524-0c42-41b4-8da5-6a25ee29c9c4 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] [instance: 646ce79a-a40a-4b2a-9af5-90c98c98b72a] Creating config drive at /var/lib/nova/instances/646ce79a-a40a-4b2a-9af5-90c98c98b72a/disk.config#033[00m
Dec  6 02:01:54 np0005548731 systemd[1]: libpod-1907dcb2b1fa0249395a3763340cc52cf5ba778467d056fbfbedb6bd331905f7.scope: Deactivated successfully.
Dec  6 02:01:54 np0005548731 nova_compute[232433]: 2025-12-06 07:01:54.619 232437 DEBUG oslo_concurrency.processutils [None req-9de8e524-0c42-41b4-8da5-6a25ee29c9c4 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/646ce79a-a40a-4b2a-9af5-90c98c98b72a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpouqnjs62 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:01:54 np0005548731 podman[242124]: 2025-12-06 07:01:54.623969517 +0000 UTC m=+0.051637719 container died 1907dcb2b1fa0249395a3763340cc52cf5ba778467d056fbfbedb6bd331905f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a0468f21-498f-424d-a04a-d011b4adc72e, tcib_managed=true, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  6 02:01:54 np0005548731 nova_compute[232433]: 2025-12-06 07:01:54.644 232437 DEBUG nova.virt.libvirt.vif [None req-e63ddeb8-b7bc-4a39-86fc-208d32c575a1 56a80bccd06a46eb841b4a39bdc45f76 e2869bdde00d4ef8bea339be8805aa3f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T07:00:56Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersWithSpecificFlavorTestJSON-server-1632796309',display_name='tempest-ServersWithSpecificFlavorTestJSON-server-1632796309',ec2_ids=<?>,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(22),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serverswithspecificflavortestjson-server-1632796309',id=16,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=22,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLyYd2vj0IZaejjGwrppG7vtmC1xs+0JSlQJBkmu53OyTo4w/o53nNppqE2nymdT0UkGVn1vFVEZmmF+N+AjxIMF81FiLMyyI4ZyWmUw9aTkhGmFymaAYJKyFfVSYZ1oNQ==',key_name='tempest-keypair-1145479885',keypairs=<?>,launch_index=0,launched_at=2025-12-06T07:01:17Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='e2869bdde00d4ef8bea339be8805aa3f',ramdisk_id='',reservation_id='r-g9dey3es',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersWithSpecificFlavorTestJSON-631560428',owner_user_name='tempest-ServersWithSpecificFlavorTestJSON-631560428-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T07:01:17Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='56a80bccd06a46eb841b4a39bdc45f76',uuid=f9728486-7db3-4d21-bd7a-b41b891489a5,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "196b1de8-9246-4ec3-91b5-3686f8691152", "address": "fa:16:3e:ff:51:7a", "network": {"id": "a0468f21-498f-424d-a04a-d011b4adc72e", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-40054798-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.193", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e2869bdde00d4ef8bea339be8805aa3f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap196b1de8-92", "ovs_interfaceid": "196b1de8-9246-4ec3-91b5-3686f8691152", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec  6 02:01:54 np0005548731 nova_compute[232433]: 2025-12-06 07:01:54.645 232437 DEBUG nova.network.os_vif_util [None req-e63ddeb8-b7bc-4a39-86fc-208d32c575a1 56a80bccd06a46eb841b4a39bdc45f76 e2869bdde00d4ef8bea339be8805aa3f - - default default] Converting VIF {"id": "196b1de8-9246-4ec3-91b5-3686f8691152", "address": "fa:16:3e:ff:51:7a", "network": {"id": "a0468f21-498f-424d-a04a-d011b4adc72e", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-40054798-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.193", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e2869bdde00d4ef8bea339be8805aa3f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap196b1de8-92", "ovs_interfaceid": "196b1de8-9246-4ec3-91b5-3686f8691152", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 02:01:54 np0005548731 nova_compute[232433]: 2025-12-06 07:01:54.646 232437 DEBUG nova.network.os_vif_util [None req-e63ddeb8-b7bc-4a39-86fc-208d32c575a1 56a80bccd06a46eb841b4a39bdc45f76 e2869bdde00d4ef8bea339be8805aa3f - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:ff:51:7a,bridge_name='br-int',has_traffic_filtering=True,id=196b1de8-9246-4ec3-91b5-3686f8691152,network=Network(a0468f21-498f-424d-a04a-d011b4adc72e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap196b1de8-92') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 02:01:54 np0005548731 nova_compute[232433]: 2025-12-06 07:01:54.647 232437 DEBUG os_vif [None req-e63ddeb8-b7bc-4a39-86fc-208d32c575a1 56a80bccd06a46eb841b4a39bdc45f76 e2869bdde00d4ef8bea339be8805aa3f - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:ff:51:7a,bridge_name='br-int',has_traffic_filtering=True,id=196b1de8-9246-4ec3-91b5-3686f8691152,network=Network(a0468f21-498f-424d-a04a-d011b4adc72e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap196b1de8-92') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec  6 02:01:54 np0005548731 nova_compute[232433]: 2025-12-06 07:01:54.650 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:01:54 np0005548731 nova_compute[232433]: 2025-12-06 07:01:54.651 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap196b1de8-92, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:01:54 np0005548731 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-1907dcb2b1fa0249395a3763340cc52cf5ba778467d056fbfbedb6bd331905f7-userdata-shm.mount: Deactivated successfully.
Dec  6 02:01:54 np0005548731 nova_compute[232433]: 2025-12-06 07:01:54.656 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:01:54 np0005548731 nova_compute[232433]: 2025-12-06 07:01:54.658 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  6 02:01:54 np0005548731 systemd[1]: var-lib-containers-storage-overlay-60f215e87b0be28edf72877c728ad3ff6077ebac27a5fccbb50ff687037a1efd-merged.mount: Deactivated successfully.
Dec  6 02:01:54 np0005548731 nova_compute[232433]: 2025-12-06 07:01:54.661 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:01:54 np0005548731 nova_compute[232433]: 2025-12-06 07:01:54.665 232437 INFO os_vif [None req-e63ddeb8-b7bc-4a39-86fc-208d32c575a1 56a80bccd06a46eb841b4a39bdc45f76 e2869bdde00d4ef8bea339be8805aa3f - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:ff:51:7a,bridge_name='br-int',has_traffic_filtering=True,id=196b1de8-9246-4ec3-91b5-3686f8691152,network=Network(a0468f21-498f-424d-a04a-d011b4adc72e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap196b1de8-92')#033[00m
Dec  6 02:01:54 np0005548731 podman[242124]: 2025-12-06 07:01:54.676870566 +0000 UTC m=+0.104538778 container cleanup 1907dcb2b1fa0249395a3763340cc52cf5ba778467d056fbfbedb6bd331905f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a0468f21-498f-424d-a04a-d011b4adc72e, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  6 02:01:54 np0005548731 systemd[1]: libpod-conmon-1907dcb2b1fa0249395a3763340cc52cf5ba778467d056fbfbedb6bd331905f7.scope: Deactivated successfully.
Dec  6 02:01:54 np0005548731 podman[242174]: 2025-12-06 07:01:54.743310702 +0000 UTC m=+0.045487300 container remove 1907dcb2b1fa0249395a3763340cc52cf5ba778467d056fbfbedb6bd331905f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a0468f21-498f-424d-a04a-d011b4adc72e, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125)
Dec  6 02:01:54 np0005548731 nova_compute[232433]: 2025-12-06 07:01:54.749 232437 DEBUG oslo_concurrency.processutils [None req-9de8e524-0c42-41b4-8da5-6a25ee29c9c4 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/646ce79a-a40a-4b2a-9af5-90c98c98b72a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpouqnjs62" returned: 0 in 0.129s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:01:54 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:01:54.748 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[8815e0e2-22ec-4bfc-8ccb-497999a9367c]: (4, ('Sat Dec  6 07:01:54 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-a0468f21-498f-424d-a04a-d011b4adc72e (1907dcb2b1fa0249395a3763340cc52cf5ba778467d056fbfbedb6bd331905f7)\n1907dcb2b1fa0249395a3763340cc52cf5ba778467d056fbfbedb6bd331905f7\nSat Dec  6 07:01:54 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-a0468f21-498f-424d-a04a-d011b4adc72e (1907dcb2b1fa0249395a3763340cc52cf5ba778467d056fbfbedb6bd331905f7)\n1907dcb2b1fa0249395a3763340cc52cf5ba778467d056fbfbedb6bd331905f7\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:01:54 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:01:54.750 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[0aed4ff7-f889-4432-9e2c-e412035d11d2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:01:54 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:01:54.751 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa0468f21-40, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:01:54 np0005548731 kernel: tapa0468f21-40: left promiscuous mode
Dec  6 02:01:54 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:01:54.774 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[7c23ce9a-ca1e-47c5-86e4-a2ebeb3771b1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:01:54 np0005548731 nova_compute[232433]: 2025-12-06 07:01:54.780 232437 DEBUG nova.storage.rbd_utils [None req-9de8e524-0c42-41b4-8da5-6a25ee29c9c4 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] rbd image 646ce79a-a40a-4b2a-9af5-90c98c98b72a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:01:54 np0005548731 nova_compute[232433]: 2025-12-06 07:01:54.786 232437 DEBUG oslo_concurrency.processutils [None req-9de8e524-0c42-41b4-8da5-6a25ee29c9c4 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/646ce79a-a40a-4b2a-9af5-90c98c98b72a/disk.config 646ce79a-a40a-4b2a-9af5-90c98c98b72a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:01:54 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:01:54.787 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[e1d2672b-1c79-4c9a-bf1d-cef3d25ca627]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:01:54 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:01:54.788 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[d16546d5-ce89-426f-9f36-bc0ab0964826]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:01:54 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:01:54.811 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[14a7ea38-80bc-4d40-9c06-2c179e63c51f]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 473802, 'reachable_time': 25635, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 242212, 'error': None, 'target': 'ovnmeta-a0468f21-498f-424d-a04a-d011b4adc72e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:01:54 np0005548731 nova_compute[232433]: 2025-12-06 07:01:54.813 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:01:54 np0005548731 systemd[1]: run-netns-ovnmeta\x2da0468f21\x2d498f\x2d424d\x2da04a\x2dd011b4adc72e.mount: Deactivated successfully.
Dec  6 02:01:54 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:01:54.815 144079 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-a0468f21-498f-424d-a04a-d011b4adc72e deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Dec  6 02:01:54 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:01:54.816 144079 DEBUG oslo.privsep.daemon [-] privsep: reply[f67302f3-7eaf-44cd-b67a-9f72f8252742]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:01:55 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:01:55 np0005548731 nova_compute[232433]: 2025-12-06 07:01:55.344 232437 DEBUG oslo_concurrency.processutils [None req-9de8e524-0c42-41b4-8da5-6a25ee29c9c4 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/646ce79a-a40a-4b2a-9af5-90c98c98b72a/disk.config 646ce79a-a40a-4b2a-9af5-90c98c98b72a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.558s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:01:55 np0005548731 nova_compute[232433]: 2025-12-06 07:01:55.346 232437 INFO nova.virt.libvirt.driver [None req-9de8e524-0c42-41b4-8da5-6a25ee29c9c4 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] [instance: 646ce79a-a40a-4b2a-9af5-90c98c98b72a] Deleting local config drive /var/lib/nova/instances/646ce79a-a40a-4b2a-9af5-90c98c98b72a/disk.config because it was imported into RBD.#033[00m
Dec  6 02:01:55 np0005548731 kernel: tap45a08bd2-8b: entered promiscuous mode
Dec  6 02:01:55 np0005548731 NetworkManager[49182]: <info>  [1765004515.4152] manager: (tap45a08bd2-8b): new Tun device (/org/freedesktop/NetworkManager/Devices/46)
Dec  6 02:01:55 np0005548731 ovn_controller[133927]: 2025-12-06T07:01:55Z|00063|binding|INFO|Claiming lport 45a08bd2-8b6a-4af2-bd49-4df95910daa9 for this chassis.
Dec  6 02:01:55 np0005548731 ovn_controller[133927]: 2025-12-06T07:01:55Z|00064|binding|INFO|45a08bd2-8b6a-4af2-bd49-4df95910daa9: Claiming fa:16:3e:12:9e:14 10.100.0.10
Dec  6 02:01:55 np0005548731 systemd-udevd[242093]: Network interface NamePolicy= disabled on kernel command line.
Dec  6 02:01:55 np0005548731 nova_compute[232433]: 2025-12-06 07:01:55.415 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:01:55 np0005548731 NetworkManager[49182]: <info>  [1765004515.4304] device (tap45a08bd2-8b): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec  6 02:01:55 np0005548731 ovn_controller[133927]: 2025-12-06T07:01:55Z|00065|binding|INFO|Setting lport 45a08bd2-8b6a-4af2-bd49-4df95910daa9 ovn-installed in OVS
Dec  6 02:01:55 np0005548731 NetworkManager[49182]: <info>  [1765004515.4319] device (tap45a08bd2-8b): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec  6 02:01:55 np0005548731 nova_compute[232433]: 2025-12-06 07:01:55.434 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:01:55 np0005548731 systemd-machined[195355]: New machine qemu-9-instance-00000014.
Dec  6 02:01:55 np0005548731 systemd[1]: Started Virtual Machine qemu-9-instance-00000014.
Dec  6 02:01:55 np0005548731 ovn_controller[133927]: 2025-12-06T07:01:55Z|00066|binding|INFO|Setting lport 45a08bd2-8b6a-4af2-bd49-4df95910daa9 up in Southbound
Dec  6 02:01:55 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:01:55.511 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:12:9e:14 10.100.0.10'], port_security=['fa:16:3e:12:9e:14 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '646ce79a-a40a-4b2a-9af5-90c98c98b72a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9451d867-0aba-464d-b4d9-f947b887e903', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b6179a8b65c2484eb7ca1e068d93a58c', 'neutron:revision_number': '2', 'neutron:security_group_ids': '4b3eef2d-12bb-4dc8-aa99-2a680fa42d41', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=294a4822-9a42-4d06-8976-2cf65d54c6f2, chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], logical_port=45a08bd2-8b6a-4af2-bd49-4df95910daa9) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 02:01:55 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:01:55.513 143965 INFO neutron.agent.ovn.metadata.agent [-] Port 45a08bd2-8b6a-4af2-bd49-4df95910daa9 in datapath 9451d867-0aba-464d-b4d9-f947b887e903 bound to our chassis#033[00m
Dec  6 02:01:55 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:01:55.515 143965 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 9451d867-0aba-464d-b4d9-f947b887e903#033[00m
Dec  6 02:01:55 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:01:55 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:01:55 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:01:55.522 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:01:55 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:01:55.528 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[3ab0d7be-c308-4eea-bf7b-0e538123ee9d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:01:55 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:01:55.529 143965 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap9451d867-01 in ovnmeta-9451d867-0aba-464d-b4d9-f947b887e903 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Dec  6 02:01:55 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:01:55.530 236668 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap9451d867-00 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Dec  6 02:01:55 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:01:55.530 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[19b76e94-497d-4bf9-8741-41167c541a09]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:01:55 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:01:55.531 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[a500208e-f1f2-423c-b89c-a84dd9122697]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:01:55 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:01:55.544 144079 DEBUG oslo.privsep.daemon [-] privsep: reply[63c903a2-5f4d-442a-9450-017fddd20591]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:01:55 np0005548731 nova_compute[232433]: 2025-12-06 07:01:55.560 232437 DEBUG nova.compute.manager [req-00c82b41-4441-4a36-a505-8b24cc5782de req-ce43f228-ccf1-437d-b5a0-62238c5d838c 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f9728486-7db3-4d21-bd7a-b41b891489a5] Received event network-vif-unplugged-196b1de8-9246-4ec3-91b5-3686f8691152 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:01:55 np0005548731 nova_compute[232433]: 2025-12-06 07:01:55.560 232437 DEBUG oslo_concurrency.lockutils [req-00c82b41-4441-4a36-a505-8b24cc5782de req-ce43f228-ccf1-437d-b5a0-62238c5d838c 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "f9728486-7db3-4d21-bd7a-b41b891489a5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:01:55 np0005548731 nova_compute[232433]: 2025-12-06 07:01:55.561 232437 DEBUG oslo_concurrency.lockutils [req-00c82b41-4441-4a36-a505-8b24cc5782de req-ce43f228-ccf1-437d-b5a0-62238c5d838c 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "f9728486-7db3-4d21-bd7a-b41b891489a5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:01:55 np0005548731 nova_compute[232433]: 2025-12-06 07:01:55.561 232437 DEBUG oslo_concurrency.lockutils [req-00c82b41-4441-4a36-a505-8b24cc5782de req-ce43f228-ccf1-437d-b5a0-62238c5d838c 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "f9728486-7db3-4d21-bd7a-b41b891489a5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:01:55 np0005548731 nova_compute[232433]: 2025-12-06 07:01:55.562 232437 DEBUG nova.compute.manager [req-00c82b41-4441-4a36-a505-8b24cc5782de req-ce43f228-ccf1-437d-b5a0-62238c5d838c 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f9728486-7db3-4d21-bd7a-b41b891489a5] No waiting events found dispatching network-vif-unplugged-196b1de8-9246-4ec3-91b5-3686f8691152 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 02:01:55 np0005548731 nova_compute[232433]: 2025-12-06 07:01:55.562 232437 DEBUG nova.compute.manager [req-00c82b41-4441-4a36-a505-8b24cc5782de req-ce43f228-ccf1-437d-b5a0-62238c5d838c 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f9728486-7db3-4d21-bd7a-b41b891489a5] Received event network-vif-unplugged-196b1de8-9246-4ec3-91b5-3686f8691152 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Dec  6 02:01:55 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:01:55.563 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[f2e9b4a8-6840-44f5-92f4-13b0d3a54560]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:01:55 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:01:55.604 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[8c735ea9-2096-43cb-b43f-de8c8fa32c7b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:01:55 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:01:55 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 02:01:55 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:01:55.611 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 02:01:55 np0005548731 NetworkManager[49182]: <info>  [1765004515.6134] manager: (tap9451d867-00): new Veth device (/org/freedesktop/NetworkManager/Devices/47)
Dec  6 02:01:55 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:01:55.615 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[33506ab1-1ce5-4eb4-ab66-b6ec76210c88]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:01:55 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:01:55.651 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[73b67859-b967-43fa-9178-b3314d40074d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:01:55 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:01:55.653 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[93ceaede-3da3-44fb-84e1-a96fb0361dab]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:01:55 np0005548731 NetworkManager[49182]: <info>  [1765004515.6765] device (tap9451d867-00): carrier: link connected
Dec  6 02:01:55 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:01:55.682 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[a0a92cd1-de28-4095-beb5-bde854a4bf52]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:01:55 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:01:55.698 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[9aa07bef-34c4-467a-8acd-89a0d8ad3da3]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9451d867-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2d:04:5e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 25], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 477768, 'reachable_time': 17100, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 242327, 'error': None, 'target': 'ovnmeta-9451d867-0aba-464d-b4d9-f947b887e903', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:01:55 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:01:55.719 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[b1ee67f6-ccf4-45f3-89aa-f04ed1e869d9]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe2d:45e'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 477768, 'tstamp': 477768}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 242328, 'error': None, 'target': 'ovnmeta-9451d867-0aba-464d-b4d9-f947b887e903', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:01:55 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:01:55.737 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[44959322-4da3-4c0a-8dc2-401345bb2cd0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9451d867-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2d:04:5e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 25], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 477768, 'reachable_time': 17100, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 242344, 'error': None, 'target': 'ovnmeta-9451d867-0aba-464d-b4d9-f947b887e903', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:01:55 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:01:55.777 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[66f4944c-da38-45ba-9805-b95260b9a739]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:01:55 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:01:55.853 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[f4805b80-e944-4d89-a68c-da956f6b8a46]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:01:55 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:01:55.855 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9451d867-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:01:55 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:01:55.855 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  6 02:01:55 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:01:55.855 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9451d867-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:01:55 np0005548731 NetworkManager[49182]: <info>  [1765004515.9009] manager: (tap9451d867-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/48)
Dec  6 02:01:55 np0005548731 kernel: tap9451d867-00: entered promiscuous mode
Dec  6 02:01:55 np0005548731 nova_compute[232433]: 2025-12-06 07:01:55.901 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:01:55 np0005548731 nova_compute[232433]: 2025-12-06 07:01:55.905 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:01:55 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:01:55.905 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap9451d867-00, col_values=(('external_ids', {'iface-id': 'fed07814-3a76-4798-8d3b-90759d15a8cf'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:01:55 np0005548731 ovn_controller[133927]: 2025-12-06T07:01:55Z|00067|binding|INFO|Releasing lport fed07814-3a76-4798-8d3b-90759d15a8cf from this chassis (sb_readonly=0)
Dec  6 02:01:55 np0005548731 nova_compute[232433]: 2025-12-06 07:01:55.907 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:01:55 np0005548731 nova_compute[232433]: 2025-12-06 07:01:55.921 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:01:55 np0005548731 nova_compute[232433]: 2025-12-06 07:01:55.922 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:01:55 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:01:55.923 143965 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/9451d867-0aba-464d-b4d9-f947b887e903.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/9451d867-0aba-464d-b4d9-f947b887e903.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Dec  6 02:01:55 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:01:55.925 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[c4b7635c-d1f5-496f-88f8-e990d4a4ef75]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:01:55 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:01:55.925 143965 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec  6 02:01:55 np0005548731 ovn_metadata_agent[143960]: global
Dec  6 02:01:55 np0005548731 ovn_metadata_agent[143960]:    log         /dev/log local0 debug
Dec  6 02:01:55 np0005548731 ovn_metadata_agent[143960]:    log-tag     haproxy-metadata-proxy-9451d867-0aba-464d-b4d9-f947b887e903
Dec  6 02:01:55 np0005548731 ovn_metadata_agent[143960]:    user        root
Dec  6 02:01:55 np0005548731 ovn_metadata_agent[143960]:    group       root
Dec  6 02:01:55 np0005548731 ovn_metadata_agent[143960]:    maxconn     1024
Dec  6 02:01:55 np0005548731 ovn_metadata_agent[143960]:    pidfile     /var/lib/neutron/external/pids/9451d867-0aba-464d-b4d9-f947b887e903.pid.haproxy
Dec  6 02:01:55 np0005548731 ovn_metadata_agent[143960]:    daemon
Dec  6 02:01:55 np0005548731 ovn_metadata_agent[143960]: 
Dec  6 02:01:55 np0005548731 ovn_metadata_agent[143960]: defaults
Dec  6 02:01:55 np0005548731 ovn_metadata_agent[143960]:    log global
Dec  6 02:01:55 np0005548731 ovn_metadata_agent[143960]:    mode http
Dec  6 02:01:55 np0005548731 ovn_metadata_agent[143960]:    option httplog
Dec  6 02:01:55 np0005548731 ovn_metadata_agent[143960]:    option dontlognull
Dec  6 02:01:55 np0005548731 ovn_metadata_agent[143960]:    option http-server-close
Dec  6 02:01:55 np0005548731 ovn_metadata_agent[143960]:    option forwardfor
Dec  6 02:01:55 np0005548731 ovn_metadata_agent[143960]:    retries                 3
Dec  6 02:01:55 np0005548731 ovn_metadata_agent[143960]:    timeout http-request    30s
Dec  6 02:01:55 np0005548731 ovn_metadata_agent[143960]:    timeout connect         30s
Dec  6 02:01:55 np0005548731 ovn_metadata_agent[143960]:    timeout client          32s
Dec  6 02:01:55 np0005548731 ovn_metadata_agent[143960]:    timeout server          32s
Dec  6 02:01:55 np0005548731 ovn_metadata_agent[143960]:    timeout http-keep-alive 30s
Dec  6 02:01:55 np0005548731 ovn_metadata_agent[143960]: 
Dec  6 02:01:55 np0005548731 ovn_metadata_agent[143960]: 
Dec  6 02:01:55 np0005548731 ovn_metadata_agent[143960]: listen listener
Dec  6 02:01:55 np0005548731 ovn_metadata_agent[143960]:    bind 169.254.169.254:80
Dec  6 02:01:55 np0005548731 ovn_metadata_agent[143960]:    server metadata /var/lib/neutron/metadata_proxy
Dec  6 02:01:55 np0005548731 ovn_metadata_agent[143960]:    http-request add-header X-OVN-Network-ID 9451d867-0aba-464d-b4d9-f947b887e903
Dec  6 02:01:55 np0005548731 ovn_metadata_agent[143960]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec  6 02:01:55 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:01:55.926 143965 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-9451d867-0aba-464d-b4d9-f947b887e903', 'env', 'PROCESS_TAG=haproxy-9451d867-0aba-464d-b4d9-f947b887e903', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/9451d867-0aba-464d-b4d9-f947b887e903.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Dec  6 02:01:55 np0005548731 nova_compute[232433]: 2025-12-06 07:01:55.943 232437 DEBUG nova.virt.driver [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Emitting event <LifecycleEvent: 1765004515.941991, 646ce79a-a40a-4b2a-9af5-90c98c98b72a => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec  6 02:01:55 np0005548731 nova_compute[232433]: 2025-12-06 07:01:55.943 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 646ce79a-a40a-4b2a-9af5-90c98c98b72a] VM Started (Lifecycle Event)
Dec  6 02:01:55 np0005548731 nova_compute[232433]: 2025-12-06 07:01:55.968 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 02:01:56 np0005548731 nova_compute[232433]: 2025-12-06 07:01:56.039 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 646ce79a-a40a-4b2a-9af5-90c98c98b72a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec  6 02:01:56 np0005548731 nova_compute[232433]: 2025-12-06 07:01:56.044 232437 DEBUG nova.virt.driver [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Emitting event <LifecycleEvent: 1765004515.9423428, 646ce79a-a40a-4b2a-9af5-90c98c98b72a => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec  6 02:01:56 np0005548731 nova_compute[232433]: 2025-12-06 07:01:56.044 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 646ce79a-a40a-4b2a-9af5-90c98c98b72a] VM Paused (Lifecycle Event)
Dec  6 02:01:56 np0005548731 nova_compute[232433]: 2025-12-06 07:01:56.068 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 646ce79a-a40a-4b2a-9af5-90c98c98b72a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec  6 02:01:56 np0005548731 nova_compute[232433]: 2025-12-06 07:01:56.073 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 646ce79a-a40a-4b2a-9af5-90c98c98b72a] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec  6 02:01:56 np0005548731 nova_compute[232433]: 2025-12-06 07:01:56.098 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 646ce79a-a40a-4b2a-9af5-90c98c98b72a] During sync_power_state the instance has a pending task (spawning). Skip.
Dec  6 02:01:56 np0005548731 podman[242405]: 2025-12-06 07:01:56.354740636 +0000 UTC m=+0.046999806 container create cbd6730de15aad03699214fa16a0916b52358932a2b188eaaa529f36f62c9fed (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9451d867-0aba-464d-b4d9-f947b887e903, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS)
Dec  6 02:01:56 np0005548731 systemd[1]: Started libpod-conmon-cbd6730de15aad03699214fa16a0916b52358932a2b188eaaa529f36f62c9fed.scope.
Dec  6 02:01:56 np0005548731 nova_compute[232433]: 2025-12-06 07:01:56.413 232437 DEBUG nova.compute.manager [req-52ee7791-f5a5-47eb-b33c-31ef3262dd50 req-ff84f204-9fed-4c05-90ec-cc66193b4bb8 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 646ce79a-a40a-4b2a-9af5-90c98c98b72a] Received event network-vif-plugged-45a08bd2-8b6a-4af2-bd49-4df95910daa9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec  6 02:01:56 np0005548731 nova_compute[232433]: 2025-12-06 07:01:56.414 232437 DEBUG oslo_concurrency.lockutils [req-52ee7791-f5a5-47eb-b33c-31ef3262dd50 req-ff84f204-9fed-4c05-90ec-cc66193b4bb8 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "646ce79a-a40a-4b2a-9af5-90c98c98b72a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  6 02:01:56 np0005548731 nova_compute[232433]: 2025-12-06 07:01:56.414 232437 DEBUG oslo_concurrency.lockutils [req-52ee7791-f5a5-47eb-b33c-31ef3262dd50 req-ff84f204-9fed-4c05-90ec-cc66193b4bb8 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "646ce79a-a40a-4b2a-9af5-90c98c98b72a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  6 02:01:56 np0005548731 nova_compute[232433]: 2025-12-06 07:01:56.414 232437 DEBUG oslo_concurrency.lockutils [req-52ee7791-f5a5-47eb-b33c-31ef3262dd50 req-ff84f204-9fed-4c05-90ec-cc66193b4bb8 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "646ce79a-a40a-4b2a-9af5-90c98c98b72a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  6 02:01:56 np0005548731 nova_compute[232433]: 2025-12-06 07:01:56.414 232437 DEBUG nova.compute.manager [req-52ee7791-f5a5-47eb-b33c-31ef3262dd50 req-ff84f204-9fed-4c05-90ec-cc66193b4bb8 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 646ce79a-a40a-4b2a-9af5-90c98c98b72a] Processing event network-vif-plugged-45a08bd2-8b6a-4af2-bd49-4df95910daa9 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec  6 02:01:56 np0005548731 nova_compute[232433]: 2025-12-06 07:01:56.415 232437 DEBUG nova.compute.manager [None req-9de8e524-0c42-41b4-8da5-6a25ee29c9c4 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] [instance: 646ce79a-a40a-4b2a-9af5-90c98c98b72a] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec  6 02:01:56 np0005548731 nova_compute[232433]: 2025-12-06 07:01:56.418 232437 DEBUG nova.virt.driver [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Emitting event <LifecycleEvent: 1765004516.4186509, 646ce79a-a40a-4b2a-9af5-90c98c98b72a => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec  6 02:01:56 np0005548731 nova_compute[232433]: 2025-12-06 07:01:56.419 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 646ce79a-a40a-4b2a-9af5-90c98c98b72a] VM Resumed (Lifecycle Event)
Dec  6 02:01:56 np0005548731 nova_compute[232433]: 2025-12-06 07:01:56.420 232437 DEBUG nova.virt.libvirt.driver [None req-9de8e524-0c42-41b4-8da5-6a25ee29c9c4 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] [instance: 646ce79a-a40a-4b2a-9af5-90c98c98b72a] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec  6 02:01:56 np0005548731 systemd[1]: Started libcrun container.
Dec  6 02:01:56 np0005548731 nova_compute[232433]: 2025-12-06 07:01:56.425 232437 INFO nova.virt.libvirt.driver [-] [instance: 646ce79a-a40a-4b2a-9af5-90c98c98b72a] Instance spawned successfully.
Dec  6 02:01:56 np0005548731 nova_compute[232433]: 2025-12-06 07:01:56.426 232437 DEBUG nova.virt.libvirt.driver [None req-9de8e524-0c42-41b4-8da5-6a25ee29c9c4 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] [instance: 646ce79a-a40a-4b2a-9af5-90c98c98b72a] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec  6 02:01:56 np0005548731 podman[242405]: 2025-12-06 07:01:56.33256457 +0000 UTC m=+0.024823770 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec  6 02:01:56 np0005548731 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cfb10c1571a5fcd19dbb221e1972ccd726d9597723dd400af553f0aaf0207147/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec  6 02:01:56 np0005548731 podman[242405]: 2025-12-06 07:01:56.442714183 +0000 UTC m=+0.134973373 container init cbd6730de15aad03699214fa16a0916b52358932a2b188eaaa529f36f62c9fed (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9451d867-0aba-464d-b4d9-f947b887e903, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  6 02:01:56 np0005548731 podman[242405]: 2025-12-06 07:01:56.44795547 +0000 UTC m=+0.140214640 container start cbd6730de15aad03699214fa16a0916b52358932a2b188eaaa529f36f62c9fed (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9451d867-0aba-464d-b4d9-f947b887e903, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec  6 02:01:56 np0005548731 neutron-haproxy-ovnmeta-9451d867-0aba-464d-b4d9-f947b887e903[242420]: [NOTICE]   (242424) : New worker (242426) forked
Dec  6 02:01:56 np0005548731 neutron-haproxy-ovnmeta-9451d867-0aba-464d-b4d9-f947b887e903[242420]: [NOTICE]   (242424) : Loading success.
Dec  6 02:01:56 np0005548731 nova_compute[232433]: 2025-12-06 07:01:56.619 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 646ce79a-a40a-4b2a-9af5-90c98c98b72a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec  6 02:01:56 np0005548731 nova_compute[232433]: 2025-12-06 07:01:56.625 232437 DEBUG nova.virt.libvirt.driver [None req-9de8e524-0c42-41b4-8da5-6a25ee29c9c4 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] [instance: 646ce79a-a40a-4b2a-9af5-90c98c98b72a] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec  6 02:01:56 np0005548731 nova_compute[232433]: 2025-12-06 07:01:56.626 232437 DEBUG nova.virt.libvirt.driver [None req-9de8e524-0c42-41b4-8da5-6a25ee29c9c4 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] [instance: 646ce79a-a40a-4b2a-9af5-90c98c98b72a] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec  6 02:01:56 np0005548731 nova_compute[232433]: 2025-12-06 07:01:56.626 232437 DEBUG nova.virt.libvirt.driver [None req-9de8e524-0c42-41b4-8da5-6a25ee29c9c4 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] [instance: 646ce79a-a40a-4b2a-9af5-90c98c98b72a] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec  6 02:01:56 np0005548731 nova_compute[232433]: 2025-12-06 07:01:56.627 232437 DEBUG nova.virt.libvirt.driver [None req-9de8e524-0c42-41b4-8da5-6a25ee29c9c4 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] [instance: 646ce79a-a40a-4b2a-9af5-90c98c98b72a] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec  6 02:01:56 np0005548731 nova_compute[232433]: 2025-12-06 07:01:56.627 232437 DEBUG nova.virt.libvirt.driver [None req-9de8e524-0c42-41b4-8da5-6a25ee29c9c4 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] [instance: 646ce79a-a40a-4b2a-9af5-90c98c98b72a] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec  6 02:01:56 np0005548731 nova_compute[232433]: 2025-12-06 07:01:56.628 232437 DEBUG nova.virt.libvirt.driver [None req-9de8e524-0c42-41b4-8da5-6a25ee29c9c4 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] [instance: 646ce79a-a40a-4b2a-9af5-90c98c98b72a] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec  6 02:01:56 np0005548731 nova_compute[232433]: 2025-12-06 07:01:56.630 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 646ce79a-a40a-4b2a-9af5-90c98c98b72a] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec  6 02:01:56 np0005548731 nova_compute[232433]: 2025-12-06 07:01:56.729 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 646ce79a-a40a-4b2a-9af5-90c98c98b72a] During sync_power_state the instance has a pending task (spawning). Skip.
Dec  6 02:01:56 np0005548731 nova_compute[232433]: 2025-12-06 07:01:56.846 232437 INFO nova.compute.manager [None req-9de8e524-0c42-41b4-8da5-6a25ee29c9c4 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] [instance: 646ce79a-a40a-4b2a-9af5-90c98c98b72a] Took 10.90 seconds to spawn the instance on the hypervisor.
Dec  6 02:01:56 np0005548731 nova_compute[232433]: 2025-12-06 07:01:56.847 232437 DEBUG nova.compute.manager [None req-9de8e524-0c42-41b4-8da5-6a25ee29c9c4 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] [instance: 646ce79a-a40a-4b2a-9af5-90c98c98b72a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec  6 02:01:56 np0005548731 nova_compute[232433]: 2025-12-06 07:01:56.960 232437 INFO nova.compute.manager [None req-9de8e524-0c42-41b4-8da5-6a25ee29c9c4 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] [instance: 646ce79a-a40a-4b2a-9af5-90c98c98b72a] Took 11.87 seconds to build instance.
Dec  6 02:01:57 np0005548731 nova_compute[232433]: 2025-12-06 07:01:57.046 232437 DEBUG oslo_concurrency.lockutils [None req-9de8e524-0c42-41b4-8da5-6a25ee29c9c4 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Lock "646ce79a-a40a-4b2a-9af5-90c98c98b72a" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.026s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  6 02:01:57 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:01:57 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 02:01:57 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:01:57.524 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 02:01:57 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:01:57 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:01:57 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:01:57.613 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:01:57 np0005548731 nova_compute[232433]: 2025-12-06 07:01:57.700 232437 DEBUG nova.compute.manager [req-f6b6f4af-175b-425f-aa3f-fb0f6e302086 req-203ca71b-16aa-47aa-9101-851bca69453f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f9728486-7db3-4d21-bd7a-b41b891489a5] Received event network-vif-plugged-196b1de8-9246-4ec3-91b5-3686f8691152 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec  6 02:01:57 np0005548731 nova_compute[232433]: 2025-12-06 07:01:57.700 232437 DEBUG oslo_concurrency.lockutils [req-f6b6f4af-175b-425f-aa3f-fb0f6e302086 req-203ca71b-16aa-47aa-9101-851bca69453f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "f9728486-7db3-4d21-bd7a-b41b891489a5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  6 02:01:57 np0005548731 nova_compute[232433]: 2025-12-06 07:01:57.701 232437 DEBUG oslo_concurrency.lockutils [req-f6b6f4af-175b-425f-aa3f-fb0f6e302086 req-203ca71b-16aa-47aa-9101-851bca69453f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "f9728486-7db3-4d21-bd7a-b41b891489a5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  6 02:01:57 np0005548731 nova_compute[232433]: 2025-12-06 07:01:57.701 232437 DEBUG oslo_concurrency.lockutils [req-f6b6f4af-175b-425f-aa3f-fb0f6e302086 req-203ca71b-16aa-47aa-9101-851bca69453f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "f9728486-7db3-4d21-bd7a-b41b891489a5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  6 02:01:57 np0005548731 nova_compute[232433]: 2025-12-06 07:01:57.701 232437 DEBUG nova.compute.manager [req-f6b6f4af-175b-425f-aa3f-fb0f6e302086 req-203ca71b-16aa-47aa-9101-851bca69453f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f9728486-7db3-4d21-bd7a-b41b891489a5] No waiting events found dispatching network-vif-plugged-196b1de8-9246-4ec3-91b5-3686f8691152 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec  6 02:01:57 np0005548731 nova_compute[232433]: 2025-12-06 07:01:57.702 232437 WARNING nova.compute.manager [req-f6b6f4af-175b-425f-aa3f-fb0f6e302086 req-203ca71b-16aa-47aa-9101-851bca69453f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f9728486-7db3-4d21-bd7a-b41b891489a5] Received unexpected event network-vif-plugged-196b1de8-9246-4ec3-91b5-3686f8691152 for instance with vm_state active and task_state deleting.
Dec  6 02:01:59 np0005548731 nova_compute[232433]: 2025-12-06 07:01:59.143 232437 DEBUG nova.compute.manager [req-f90c9120-4cde-4111-aaff-a6d7273286a3 req-0d5a2db1-defc-49ab-9be8-53649689b902 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 646ce79a-a40a-4b2a-9af5-90c98c98b72a] Received event network-vif-plugged-45a08bd2-8b6a-4af2-bd49-4df95910daa9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec  6 02:01:59 np0005548731 nova_compute[232433]: 2025-12-06 07:01:59.144 232437 DEBUG oslo_concurrency.lockutils [req-f90c9120-4cde-4111-aaff-a6d7273286a3 req-0d5a2db1-defc-49ab-9be8-53649689b902 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "646ce79a-a40a-4b2a-9af5-90c98c98b72a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  6 02:01:59 np0005548731 nova_compute[232433]: 2025-12-06 07:01:59.144 232437 DEBUG oslo_concurrency.lockutils [req-f90c9120-4cde-4111-aaff-a6d7273286a3 req-0d5a2db1-defc-49ab-9be8-53649689b902 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "646ce79a-a40a-4b2a-9af5-90c98c98b72a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  6 02:01:59 np0005548731 nova_compute[232433]: 2025-12-06 07:01:59.145 232437 DEBUG oslo_concurrency.lockutils [req-f90c9120-4cde-4111-aaff-a6d7273286a3 req-0d5a2db1-defc-49ab-9be8-53649689b902 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "646ce79a-a40a-4b2a-9af5-90c98c98b72a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  6 02:01:59 np0005548731 nova_compute[232433]: 2025-12-06 07:01:59.145 232437 DEBUG nova.compute.manager [req-f90c9120-4cde-4111-aaff-a6d7273286a3 req-0d5a2db1-defc-49ab-9be8-53649689b902 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 646ce79a-a40a-4b2a-9af5-90c98c98b72a] No waiting events found dispatching network-vif-plugged-45a08bd2-8b6a-4af2-bd49-4df95910daa9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec  6 02:01:59 np0005548731 nova_compute[232433]: 2025-12-06 07:01:59.145 232437 WARNING nova.compute.manager [req-f90c9120-4cde-4111-aaff-a6d7273286a3 req-0d5a2db1-defc-49ab-9be8-53649689b902 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 646ce79a-a40a-4b2a-9af5-90c98c98b72a] Received unexpected event network-vif-plugged-45a08bd2-8b6a-4af2-bd49-4df95910daa9 for instance with vm_state active and task_state None.
Dec  6 02:01:59 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:01:59 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:01:59 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:01:59.527 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:01:59 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:01:59 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:01:59 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:01:59.615 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:01:59 np0005548731 nova_compute[232433]: 2025-12-06 07:01:59.657 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 02:02:00 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:02:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:02:00.846 143965 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  6 02:02:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:02:00.847 143965 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  6 02:02:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:02:00.847 143965 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  6 02:02:00 np0005548731 podman[242439]: 2025-12-06 07:02:00.906420226 +0000 UTC m=+0.060976144 container health_status ae4fd08cdec31d6ae2a6b91ffff7deb6ca5a8375cb52ee4b685cc6f25796824e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Dec  6 02:02:00 np0005548731 podman[242437]: 2025-12-06 07:02:00.924307389 +0000 UTC m=+0.086556744 container health_status 26c2ce9bdb83c6db5ccad7f5b2a7cb4e9354ab0235ad2f942ea42c8feda2142b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible)
Dec  6 02:02:00 np0005548731 podman[242438]: 2025-12-06 07:02:00.955482492 +0000 UTC m=+0.114427976 container health_status a118c3becf56cfbcae208065771ebf10a65162bb7b96389971293e534823fb78 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Dec  6 02:02:00 np0005548731 nova_compute[232433]: 2025-12-06 07:02:00.969 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 02:02:01 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:02:01 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.002000047s ======
Dec  6 02:02:01 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:02:01.529 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000047s
Dec  6 02:02:01 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:02:01 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:02:01 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:02:01.617 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:02:02 np0005548731 nova_compute[232433]: 2025-12-06 07:02:02.982 232437 INFO nova.virt.libvirt.driver [None req-e63ddeb8-b7bc-4a39-86fc-208d32c575a1 56a80bccd06a46eb841b4a39bdc45f76 e2869bdde00d4ef8bea339be8805aa3f - - default default] [instance: f9728486-7db3-4d21-bd7a-b41b891489a5] Deleting instance files /var/lib/nova/instances/f9728486-7db3-4d21-bd7a-b41b891489a5_del#033[00m
Dec  6 02:02:02 np0005548731 nova_compute[232433]: 2025-12-06 07:02:02.983 232437 INFO nova.virt.libvirt.driver [None req-e63ddeb8-b7bc-4a39-86fc-208d32c575a1 56a80bccd06a46eb841b4a39bdc45f76 e2869bdde00d4ef8bea339be8805aa3f - - default default] [instance: f9728486-7db3-4d21-bd7a-b41b891489a5] Deletion of /var/lib/nova/instances/f9728486-7db3-4d21-bd7a-b41b891489a5_del complete#033[00m
Dec  6 02:02:03 np0005548731 nova_compute[232433]: 2025-12-06 07:02:03.214 232437 INFO nova.compute.manager [None req-e63ddeb8-b7bc-4a39-86fc-208d32c575a1 56a80bccd06a46eb841b4a39bdc45f76 e2869bdde00d4ef8bea339be8805aa3f - - default default] [instance: f9728486-7db3-4d21-bd7a-b41b891489a5] Took 8.95 seconds to destroy the instance on the hypervisor.#033[00m
Dec  6 02:02:03 np0005548731 nova_compute[232433]: 2025-12-06 07:02:03.215 232437 DEBUG oslo.service.loopingcall [None req-e63ddeb8-b7bc-4a39-86fc-208d32c575a1 56a80bccd06a46eb841b4a39bdc45f76 e2869bdde00d4ef8bea339be8805aa3f - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Dec  6 02:02:03 np0005548731 nova_compute[232433]: 2025-12-06 07:02:03.216 232437 DEBUG nova.compute.manager [-] [instance: f9728486-7db3-4d21-bd7a-b41b891489a5] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Dec  6 02:02:03 np0005548731 nova_compute[232433]: 2025-12-06 07:02:03.216 232437 DEBUG nova.network.neutron [-] [instance: f9728486-7db3-4d21-bd7a-b41b891489a5] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Dec  6 02:02:03 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:02:03 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:02:03 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:02:03.534 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:02:03 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:02:03 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:02:03 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:02:03.619 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:02:04 np0005548731 nova_compute[232433]: 2025-12-06 07:02:04.660 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:02:05 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:02:05 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:02:05 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 02:02:05 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:02:05.536 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 02:02:05 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:02:05 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:02:05 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:02:05.621 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:02:05 np0005548731 nova_compute[232433]: 2025-12-06 07:02:05.971 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:02:06 np0005548731 nova_compute[232433]: 2025-12-06 07:02:06.051 232437 DEBUG nova.network.neutron [-] [instance: f9728486-7db3-4d21-bd7a-b41b891489a5] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 02:02:06 np0005548731 nova_compute[232433]: 2025-12-06 07:02:06.108 232437 INFO nova.compute.manager [-] [instance: f9728486-7db3-4d21-bd7a-b41b891489a5] Took 2.89 seconds to deallocate network for instance.#033[00m
Dec  6 02:02:06 np0005548731 nova_compute[232433]: 2025-12-06 07:02:06.194 232437 DEBUG oslo_concurrency.lockutils [None req-e63ddeb8-b7bc-4a39-86fc-208d32c575a1 56a80bccd06a46eb841b4a39bdc45f76 e2869bdde00d4ef8bea339be8805aa3f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:02:06 np0005548731 nova_compute[232433]: 2025-12-06 07:02:06.195 232437 DEBUG oslo_concurrency.lockutils [None req-e63ddeb8-b7bc-4a39-86fc-208d32c575a1 56a80bccd06a46eb841b4a39bdc45f76 e2869bdde00d4ef8bea339be8805aa3f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:02:06 np0005548731 nova_compute[232433]: 2025-12-06 07:02:06.711 232437 DEBUG oslo_concurrency.processutils [None req-e63ddeb8-b7bc-4a39-86fc-208d32c575a1 56a80bccd06a46eb841b4a39bdc45f76 e2869bdde00d4ef8bea339be8805aa3f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:02:07 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 02:02:07 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1796991372' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 02:02:07 np0005548731 nova_compute[232433]: 2025-12-06 07:02:07.173 232437 DEBUG oslo_concurrency.processutils [None req-e63ddeb8-b7bc-4a39-86fc-208d32c575a1 56a80bccd06a46eb841b4a39bdc45f76 e2869bdde00d4ef8bea339be8805aa3f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.462s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:02:07 np0005548731 nova_compute[232433]: 2025-12-06 07:02:07.179 232437 DEBUG nova.compute.provider_tree [None req-e63ddeb8-b7bc-4a39-86fc-208d32c575a1 56a80bccd06a46eb841b4a39bdc45f76 e2869bdde00d4ef8bea339be8805aa3f - - default default] Inventory has not changed in ProviderTree for provider: 6d00757a-082f-486d-ae84-869a2ba2e6e7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  6 02:02:07 np0005548731 nova_compute[232433]: 2025-12-06 07:02:07.364 232437 DEBUG nova.scheduler.client.report [None req-e63ddeb8-b7bc-4a39-86fc-208d32c575a1 56a80bccd06a46eb841b4a39bdc45f76 e2869bdde00d4ef8bea339be8805aa3f - - default default] Inventory has not changed for provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  6 02:02:07 np0005548731 nova_compute[232433]: 2025-12-06 07:02:07.414 232437 DEBUG oslo_concurrency.lockutils [None req-e63ddeb8-b7bc-4a39-86fc-208d32c575a1 56a80bccd06a46eb841b4a39bdc45f76 e2869bdde00d4ef8bea339be8805aa3f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.219s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:02:07 np0005548731 nova_compute[232433]: 2025-12-06 07:02:07.468 232437 INFO nova.scheduler.client.report [None req-e63ddeb8-b7bc-4a39-86fc-208d32c575a1 56a80bccd06a46eb841b4a39bdc45f76 e2869bdde00d4ef8bea339be8805aa3f - - default default] Deleted allocations for instance f9728486-7db3-4d21-bd7a-b41b891489a5#033[00m
Dec  6 02:02:07 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:02:07 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:02:07 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:02:07.539 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:02:07 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:02:07 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:02:07 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:02:07.623 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:02:07 np0005548731 nova_compute[232433]: 2025-12-06 07:02:07.656 232437 DEBUG oslo_concurrency.lockutils [None req-e63ddeb8-b7bc-4a39-86fc-208d32c575a1 56a80bccd06a46eb841b4a39bdc45f76 e2869bdde00d4ef8bea339be8805aa3f - - default default] Lock "f9728486-7db3-4d21-bd7a-b41b891489a5" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 13.397s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:02:08 np0005548731 nova_compute[232433]: 2025-12-06 07:02:08.262 232437 DEBUG nova.compute.manager [req-b68ce598-9b50-4bc1-b5e1-2f48ab13ab90 req-8943187b-b8e4-441c-b706-8d2de1edb8f2 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f9728486-7db3-4d21-bd7a-b41b891489a5] Received event network-vif-deleted-196b1de8-9246-4ec3-91b5-3686f8691152 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:02:09 np0005548731 nova_compute[232433]: 2025-12-06 07:02:09.494 232437 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765004514.4936833, f9728486-7db3-4d21-bd7a-b41b891489a5 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 02:02:09 np0005548731 nova_compute[232433]: 2025-12-06 07:02:09.495 232437 INFO nova.compute.manager [-] [instance: f9728486-7db3-4d21-bd7a-b41b891489a5] VM Stopped (Lifecycle Event)#033[00m
Dec  6 02:02:09 np0005548731 nova_compute[232433]: 2025-12-06 07:02:09.524 232437 DEBUG nova.compute.manager [None req-78b0a1a0-ebf4-457c-a895-0e7c24f834b8 - - - - - -] [instance: f9728486-7db3-4d21-bd7a-b41b891489a5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:02:09 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:02:09 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:02:09 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:02:09.542 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:02:09 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:02:09 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:02:09 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:02:09.625 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:02:09 np0005548731 nova_compute[232433]: 2025-12-06 07:02:09.663 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:02:10 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:02:10 np0005548731 ovn_controller[133927]: 2025-12-06T07:02:10Z|00010|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:12:9e:14 10.100.0.10
Dec  6 02:02:10 np0005548731 ovn_controller[133927]: 2025-12-06T07:02:10Z|00011|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:12:9e:14 10.100.0.10
Dec  6 02:02:10 np0005548731 nova_compute[232433]: 2025-12-06 07:02:10.973 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:02:11 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:02:11 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:02:11 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:02:11.545 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:02:11 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:02:11 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 02:02:11 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:02:11.627 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 02:02:13 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:02:13 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.003000070s ======
Dec  6 02:02:13 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:02:13.548 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.003000070s
Dec  6 02:02:13 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:02:13 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:02:13 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:02:13.629 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:02:13 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e166 e166: 3 total, 3 up, 3 in
Dec  6 02:02:14 np0005548731 nova_compute[232433]: 2025-12-06 07:02:14.666 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:02:15 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e166 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:02:15 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:02:15 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:02:15 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:02:15.553 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:02:15 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:02:15 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:02:15 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:02:15.631 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:02:16 np0005548731 nova_compute[232433]: 2025-12-06 07:02:16.006 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:02:16 np0005548731 ceph-mon[77458]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #49. Immutable memtables: 0.
Dec  6 02:02:16 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:02:16.237127) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec  6 02:02:16 np0005548731 ceph-mon[77458]: rocksdb: [db/flush_job.cc:856] [default] [JOB 27] Flushing memtable with next log file: 49
Dec  6 02:02:16 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765004536237209, "job": 27, "event": "flush_started", "num_memtables": 1, "num_entries": 2471, "num_deletes": 254, "total_data_size": 5961498, "memory_usage": 6052544, "flush_reason": "Manual Compaction"}
Dec  6 02:02:16 np0005548731 ceph-mon[77458]: rocksdb: [db/flush_job.cc:885] [default] [JOB 27] Level-0 flush table #50: started
Dec  6 02:02:16 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765004536274651, "cf_name": "default", "job": 27, "event": "table_file_creation", "file_number": 50, "file_size": 3869021, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 23494, "largest_seqno": 25960, "table_properties": {"data_size": 3858904, "index_size": 6419, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2629, "raw_key_size": 21670, "raw_average_key_size": 20, "raw_value_size": 3838448, "raw_average_value_size": 3683, "num_data_blocks": 282, "num_entries": 1042, "num_filter_entries": 1042, "num_deletions": 254, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765004331, "oldest_key_time": 1765004331, "file_creation_time": 1765004536, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bc01f9a1-fda8-4008-aee1-1fa5ff4371a1", "db_session_id": "CWBL8WMDM4D79BU86E9T", "orig_file_number": 50, "seqno_to_time_mapping": "N/A"}}
Dec  6 02:02:16 np0005548731 ceph-mon[77458]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 27] Flush lasted 37592 microseconds, and 13921 cpu microseconds.
Dec  6 02:02:16 np0005548731 ceph-mon[77458]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  6 02:02:16 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:02:16.274715) [db/flush_job.cc:967] [default] [JOB 27] Level-0 flush table #50: 3869021 bytes OK
Dec  6 02:02:16 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:02:16.274747) [db/memtable_list.cc:519] [default] Level-0 commit table #50 started
Dec  6 02:02:16 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:02:16.283706) [db/memtable_list.cc:722] [default] Level-0 commit table #50: memtable #1 done
Dec  6 02:02:16 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:02:16.283754) EVENT_LOG_v1 {"time_micros": 1765004536283746, "job": 27, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec  6 02:02:16 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:02:16.283771) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec  6 02:02:16 np0005548731 ceph-mon[77458]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 27] Try to delete WAL files size 5950451, prev total WAL file size 5950451, number of live WAL files 2.
Dec  6 02:02:16 np0005548731 ceph-mon[77458]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000046.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  6 02:02:16 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:02:16.285511) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031373537' seq:72057594037927935, type:22 .. '7061786F730032303039' seq:0, type:0; will stop at (end)
Dec  6 02:02:16 np0005548731 ceph-mon[77458]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 28] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec  6 02:02:16 np0005548731 ceph-mon[77458]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 27 Base level 0, inputs: [50(3778KB)], [48(8263KB)]
Dec  6 02:02:16 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765004536285577, "job": 28, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [50], "files_L6": [48], "score": -1, "input_data_size": 12331148, "oldest_snapshot_seqno": -1}
Dec  6 02:02:16 np0005548731 ceph-mon[77458]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 28] Generated table #51: 5467 keys, 10328787 bytes, temperature: kUnknown
Dec  6 02:02:16 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765004536379047, "cf_name": "default", "job": 28, "event": "table_file_creation", "file_number": 51, "file_size": 10328787, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10290656, "index_size": 23365, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13701, "raw_key_size": 138802, "raw_average_key_size": 25, "raw_value_size": 10190288, "raw_average_value_size": 1863, "num_data_blocks": 954, "num_entries": 5467, "num_filter_entries": 5467, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765002564, "oldest_key_time": 0, "file_creation_time": 1765004536, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bc01f9a1-fda8-4008-aee1-1fa5ff4371a1", "db_session_id": "CWBL8WMDM4D79BU86E9T", "orig_file_number": 51, "seqno_to_time_mapping": "N/A"}}
Dec  6 02:02:16 np0005548731 ceph-mon[77458]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  6 02:02:16 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:02:16.379707) [db/compaction/compaction_job.cc:1663] [default] [JOB 28] Compacted 1@0 + 1@6 files to L6 => 10328787 bytes
Dec  6 02:02:16 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:02:16.381661) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 131.3 rd, 110.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.7, 8.1 +0.0 blob) out(9.9 +0.0 blob), read-write-amplify(5.9) write-amplify(2.7) OK, records in: 5994, records dropped: 527 output_compression: NoCompression
Dec  6 02:02:16 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:02:16.381689) EVENT_LOG_v1 {"time_micros": 1765004536381676, "job": 28, "event": "compaction_finished", "compaction_time_micros": 93938, "compaction_time_cpu_micros": 38944, "output_level": 6, "num_output_files": 1, "total_output_size": 10328787, "num_input_records": 5994, "num_output_records": 5467, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec  6 02:02:16 np0005548731 ceph-mon[77458]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000050.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  6 02:02:16 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765004536382680, "job": 28, "event": "table_file_deletion", "file_number": 50}
Dec  6 02:02:16 np0005548731 ceph-mon[77458]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000048.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  6 02:02:16 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765004536384508, "job": 28, "event": "table_file_deletion", "file_number": 48}
Dec  6 02:02:16 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:02:16.285390) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 02:02:16 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:02:16.384728) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 02:02:16 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:02:16.384739) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 02:02:16 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:02:16.384741) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 02:02:16 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:02:16.384744) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 02:02:16 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:02:16.384746) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 02:02:17 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:02:17 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:02:17 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:02:17.556 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:02:17 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:02:17 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:02:17 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:02:17.633 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:02:18 np0005548731 ovn_controller[133927]: 2025-12-06T07:02:18Z|00068|binding|INFO|Releasing lport fed07814-3a76-4798-8d3b-90759d15a8cf from this chassis (sb_readonly=0)
Dec  6 02:02:18 np0005548731 nova_compute[232433]: 2025-12-06 07:02:18.337 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:02:18 np0005548731 ovn_controller[133927]: 2025-12-06T07:02:18Z|00069|binding|INFO|Releasing lport fed07814-3a76-4798-8d3b-90759d15a8cf from this chassis (sb_readonly=0)
Dec  6 02:02:18 np0005548731 nova_compute[232433]: 2025-12-06 07:02:18.641 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:02:19 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:02:19 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 02:02:19 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:02:19.560 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 02:02:19 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:02:19 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:02:19 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:02:19.636 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:02:19 np0005548731 nova_compute[232433]: 2025-12-06 07:02:19.667 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:02:20 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e166 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:02:21 np0005548731 nova_compute[232433]: 2025-12-06 07:02:21.008 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:02:21 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:02:21 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 02:02:21 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:02:21.563 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 02:02:21 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:02:21 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:02:21 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:02:21.638 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:02:22 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e167 e167: 3 total, 3 up, 3 in
Dec  6 02:02:23 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:02:23 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:02:23 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:02:23.566 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:02:23 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:02:23 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:02:23 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:02:23.640 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:02:24 np0005548731 nova_compute[232433]: 2025-12-06 07:02:24.115 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:02:24 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:02:24.115 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=8, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:ec:b3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '32:72:e7:89:e0:7d'}, ipsec=False) old=SB_Global(nb_cfg=7) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 02:02:24 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:02:24.116 143965 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec  6 02:02:24 np0005548731 nova_compute[232433]: 2025-12-06 07:02:24.669 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:02:25 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:02:25 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:02:25 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:02:25 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:02:25.569 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:02:25 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:02:25 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:02:25 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:02:25.642 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:02:26 np0005548731 nova_compute[232433]: 2025-12-06 07:02:26.047 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:02:27 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:02:27 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:02:27 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:02:27.571 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:02:27 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:02:27 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:02:27 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:02:27.644 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:02:29 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:02:29 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:02:29 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:02:29.574 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:02:29 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:02:29 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:02:29 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:02:29.646 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:02:29 np0005548731 nova_compute[232433]: 2025-12-06 07:02:29.672 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:02:30 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  6 02:02:30 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 02:02:30 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  6 02:02:30 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:02:31 np0005548731 nova_compute[232433]: 2025-12-06 07:02:31.049 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:02:31 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:02:31 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 02:02:31 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:02:31.577 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 02:02:31 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:02:31 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:02:31 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:02:31.648 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:02:31 np0005548731 podman[242718]: 2025-12-06 07:02:31.900915728 +0000 UTC m=+0.057302622 container health_status 26c2ce9bdb83c6db5ccad7f5b2a7cb4e9354ab0235ad2f942ea42c8feda2142b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, 
org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec  6 02:02:31 np0005548731 podman[242720]: 2025-12-06 07:02:31.908050779 +0000 UTC m=+0.059884743 container health_status ae4fd08cdec31d6ae2a6b91ffff7deb6ca5a8375cb52ee4b685cc6f25796824e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Dec  6 02:02:31 np0005548731 podman[242719]: 2025-12-06 07:02:31.981688543 +0000 UTC m=+0.123031164 container health_status a118c3becf56cfbcae208065771ebf10a65162bb7b96389971293e534823fb78 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, container_name=ovn_controller, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec  6 02:02:32 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:02:32.119 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=9f96b960-b4f2-40bd-ae99-08121f5e8b78, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '8'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:02:32 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e168 e168: 3 total, 3 up, 3 in
Dec  6 02:02:33 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:02:33 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:02:33 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:02:33.580 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:02:33 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:02:33 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 02:02:33 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:02:33.650 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 02:02:34 np0005548731 nova_compute[232433]: 2025-12-06 07:02:34.674 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:02:35 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e168 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:02:35 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:02:35 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:02:35 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:02:35.583 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:02:35 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:02:35 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:02:35 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:02:35.652 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:02:35 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 02:02:35 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 02:02:36 np0005548731 nova_compute[232433]: 2025-12-06 07:02:36.051 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:02:37 np0005548731 nova_compute[232433]: 2025-12-06 07:02:37.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:02:37 np0005548731 nova_compute[232433]: 2025-12-06 07:02:37.105 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Dec  6 02:02:37 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:02:37 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:02:37 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:02:37.586 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:02:37 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:02:37 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:02:37 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:02:37.654 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:02:37 np0005548731 nova_compute[232433]: 2025-12-06 07:02:37.885 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Dec  6 02:02:38 np0005548731 nova_compute[232433]: 2025-12-06 07:02:38.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:02:38 np0005548731 nova_compute[232433]: 2025-12-06 07:02:38.105 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:02:39 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:02:39 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 02:02:39 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:02:39.589 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 02:02:39 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:02:39 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:02:39 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:02:39.656 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:02:39 np0005548731 nova_compute[232433]: 2025-12-06 07:02:39.676 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:02:40 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e168 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:02:41 np0005548731 nova_compute[232433]: 2025-12-06 07:02:41.053 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:02:41 np0005548731 nova_compute[232433]: 2025-12-06 07:02:41.130 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:02:41 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:02:41 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:02:41 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:02:41.592 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:02:41 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:02:41 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:02:41 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:02:41.659 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:02:42 np0005548731 nova_compute[232433]: 2025-12-06 07:02:42.105 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:02:42 np0005548731 nova_compute[232433]: 2025-12-06 07:02:42.693 232437 DEBUG oslo_concurrency.lockutils [None req-1f43dd13-813d-420a-90ef-decd49de0950 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Acquiring lock "2341fd69-a672-42e2-834b-0f7b8269c7ef" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:02:42 np0005548731 nova_compute[232433]: 2025-12-06 07:02:42.693 232437 DEBUG oslo_concurrency.lockutils [None req-1f43dd13-813d-420a-90ef-decd49de0950 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Lock "2341fd69-a672-42e2-834b-0f7b8269c7ef" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:02:42 np0005548731 nova_compute[232433]: 2025-12-06 07:02:42.693 232437 DEBUG oslo_concurrency.lockutils [None req-1f43dd13-813d-420a-90ef-decd49de0950 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Acquiring lock "2341fd69-a672-42e2-834b-0f7b8269c7ef-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:02:42 np0005548731 nova_compute[232433]: 2025-12-06 07:02:42.694 232437 DEBUG oslo_concurrency.lockutils [None req-1f43dd13-813d-420a-90ef-decd49de0950 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Lock "2341fd69-a672-42e2-834b-0f7b8269c7ef-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:02:42 np0005548731 nova_compute[232433]: 2025-12-06 07:02:42.694 232437 DEBUG oslo_concurrency.lockutils [None req-1f43dd13-813d-420a-90ef-decd49de0950 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Lock "2341fd69-a672-42e2-834b-0f7b8269c7ef-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:02:42 np0005548731 nova_compute[232433]: 2025-12-06 07:02:42.696 232437 INFO nova.compute.manager [None req-1f43dd13-813d-420a-90ef-decd49de0950 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] [instance: 2341fd69-a672-42e2-834b-0f7b8269c7ef] Terminating instance#033[00m
Dec  6 02:02:42 np0005548731 nova_compute[232433]: 2025-12-06 07:02:42.696 232437 DEBUG oslo_concurrency.lockutils [None req-1f43dd13-813d-420a-90ef-decd49de0950 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Acquiring lock "refresh_cache-2341fd69-a672-42e2-834b-0f7b8269c7ef" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 02:02:42 np0005548731 nova_compute[232433]: 2025-12-06 07:02:42.696 232437 DEBUG oslo_concurrency.lockutils [None req-1f43dd13-813d-420a-90ef-decd49de0950 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Acquired lock "refresh_cache-2341fd69-a672-42e2-834b-0f7b8269c7ef" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 02:02:42 np0005548731 nova_compute[232433]: 2025-12-06 07:02:42.697 232437 DEBUG nova.network.neutron [None req-1f43dd13-813d-420a-90ef-decd49de0950 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] [instance: 2341fd69-a672-42e2-834b-0f7b8269c7ef] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec  6 02:02:42 np0005548731 nova_compute[232433]: 2025-12-06 07:02:42.986 232437 DEBUG nova.network.neutron [None req-1f43dd13-813d-420a-90ef-decd49de0950 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] [instance: 2341fd69-a672-42e2-834b-0f7b8269c7ef] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec  6 02:02:43 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:02:43 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:02:43 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:02:43.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:02:43 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:02:43 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:02:43 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:02:43.660 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:02:44 np0005548731 nova_compute[232433]: 2025-12-06 07:02:44.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:02:44 np0005548731 nova_compute[232433]: 2025-12-06 07:02:44.104 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  6 02:02:44 np0005548731 nova_compute[232433]: 2025-12-06 07:02:44.125 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Dec  6 02:02:44 np0005548731 nova_compute[232433]: 2025-12-06 07:02:44.126 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:02:44 np0005548731 nova_compute[232433]: 2025-12-06 07:02:44.126 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:02:44 np0005548731 nova_compute[232433]: 2025-12-06 07:02:44.158 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:02:44 np0005548731 nova_compute[232433]: 2025-12-06 07:02:44.159 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:02:44 np0005548731 nova_compute[232433]: 2025-12-06 07:02:44.159 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:02:44 np0005548731 nova_compute[232433]: 2025-12-06 07:02:44.159 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  6 02:02:44 np0005548731 nova_compute[232433]: 2025-12-06 07:02:44.160 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:02:44 np0005548731 nova_compute[232433]: 2025-12-06 07:02:44.234 232437 DEBUG nova.network.neutron [None req-1f43dd13-813d-420a-90ef-decd49de0950 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] [instance: 2341fd69-a672-42e2-834b-0f7b8269c7ef] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 02:02:44 np0005548731 nova_compute[232433]: 2025-12-06 07:02:44.264 232437 DEBUG oslo_concurrency.lockutils [None req-1f43dd13-813d-420a-90ef-decd49de0950 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Releasing lock "refresh_cache-2341fd69-a672-42e2-834b-0f7b8269c7ef" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 02:02:44 np0005548731 nova_compute[232433]: 2025-12-06 07:02:44.265 232437 DEBUG nova.compute.manager [None req-1f43dd13-813d-420a-90ef-decd49de0950 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] [instance: 2341fd69-a672-42e2-834b-0f7b8269c7ef] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec  6 02:02:44 np0005548731 systemd[1]: machine-qemu\x2d6\x2dinstance\x2d0000000d.scope: Deactivated successfully.
Dec  6 02:02:44 np0005548731 systemd[1]: machine-qemu\x2d6\x2dinstance\x2d0000000d.scope: Consumed 19.973s CPU time.
Dec  6 02:02:44 np0005548731 systemd-machined[195355]: Machine qemu-6-instance-0000000d terminated.
Dec  6 02:02:44 np0005548731 nova_compute[232433]: 2025-12-06 07:02:44.491 232437 INFO nova.virt.libvirt.driver [-] [instance: 2341fd69-a672-42e2-834b-0f7b8269c7ef] Instance destroyed successfully.#033[00m
Dec  6 02:02:44 np0005548731 nova_compute[232433]: 2025-12-06 07:02:44.492 232437 DEBUG nova.objects.instance [None req-1f43dd13-813d-420a-90ef-decd49de0950 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Lazy-loading 'resources' on Instance uuid 2341fd69-a672-42e2-834b-0f7b8269c7ef obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:02:44 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 02:02:44 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1988349296' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 02:02:44 np0005548731 nova_compute[232433]: 2025-12-06 07:02:44.627 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.467s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:02:44 np0005548731 nova_compute[232433]: 2025-12-06 07:02:44.679 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:02:44 np0005548731 nova_compute[232433]: 2025-12-06 07:02:44.763 232437 DEBUG nova.virt.libvirt.driver [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] skipping disk for instance-0000000d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  6 02:02:44 np0005548731 nova_compute[232433]: 2025-12-06 07:02:44.763 232437 DEBUG nova.virt.libvirt.driver [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] skipping disk for instance-0000000d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  6 02:02:44 np0005548731 nova_compute[232433]: 2025-12-06 07:02:44.767 232437 DEBUG nova.virt.libvirt.driver [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] skipping disk for instance-00000014 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  6 02:02:44 np0005548731 nova_compute[232433]: 2025-12-06 07:02:44.767 232437 DEBUG nova.virt.libvirt.driver [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] skipping disk for instance-00000014 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  6 02:02:44 np0005548731 nova_compute[232433]: 2025-12-06 07:02:44.772 232437 DEBUG nova.virt.libvirt.driver [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] skipping disk for instance-0000000b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  6 02:02:44 np0005548731 nova_compute[232433]: 2025-12-06 07:02:44.772 232437 DEBUG nova.virt.libvirt.driver [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] skipping disk for instance-0000000b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  6 02:02:44 np0005548731 nova_compute[232433]: 2025-12-06 07:02:44.900 232437 INFO nova.virt.libvirt.driver [None req-1f43dd13-813d-420a-90ef-decd49de0950 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] [instance: 2341fd69-a672-42e2-834b-0f7b8269c7ef] Deleting instance files /var/lib/nova/instances/2341fd69-a672-42e2-834b-0f7b8269c7ef_del#033[00m
Dec  6 02:02:44 np0005548731 nova_compute[232433]: 2025-12-06 07:02:44.901 232437 INFO nova.virt.libvirt.driver [None req-1f43dd13-813d-420a-90ef-decd49de0950 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] [instance: 2341fd69-a672-42e2-834b-0f7b8269c7ef] Deletion of /var/lib/nova/instances/2341fd69-a672-42e2-834b-0f7b8269c7ef_del complete#033[00m
Dec  6 02:02:44 np0005548731 nova_compute[232433]: 2025-12-06 07:02:44.933 232437 WARNING nova.virt.libvirt.driver [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  6 02:02:44 np0005548731 nova_compute[232433]: 2025-12-06 07:02:44.934 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4410MB free_disk=20.695194244384766GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, 
"label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  6 02:02:44 np0005548731 nova_compute[232433]: 2025-12-06 07:02:44.935 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:02:44 np0005548731 nova_compute[232433]: 2025-12-06 07:02:44.935 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:02:44 np0005548731 nova_compute[232433]: 2025-12-06 07:02:44.980 232437 INFO nova.compute.manager [None req-1f43dd13-813d-420a-90ef-decd49de0950 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] [instance: 2341fd69-a672-42e2-834b-0f7b8269c7ef] Took 0.71 seconds to destroy the instance on the hypervisor.#033[00m
Dec  6 02:02:44 np0005548731 nova_compute[232433]: 2025-12-06 07:02:44.983 232437 DEBUG oslo.service.loopingcall [None req-1f43dd13-813d-420a-90ef-decd49de0950 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Dec  6 02:02:44 np0005548731 nova_compute[232433]: 2025-12-06 07:02:44.983 232437 DEBUG nova.compute.manager [-] [instance: 2341fd69-a672-42e2-834b-0f7b8269c7ef] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Dec  6 02:02:44 np0005548731 nova_compute[232433]: 2025-12-06 07:02:44.983 232437 DEBUG nova.network.neutron [-] [instance: 2341fd69-a672-42e2-834b-0f7b8269c7ef] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Dec  6 02:02:45 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e168 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:02:45 np0005548731 nova_compute[232433]: 2025-12-06 07:02:45.253 232437 DEBUG nova.network.neutron [-] [instance: 2341fd69-a672-42e2-834b-0f7b8269c7ef] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec  6 02:02:45 np0005548731 nova_compute[232433]: 2025-12-06 07:02:45.305 232437 DEBUG nova.network.neutron [-] [instance: 2341fd69-a672-42e2-834b-0f7b8269c7ef] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 02:02:45 np0005548731 nova_compute[232433]: 2025-12-06 07:02:45.353 232437 INFO nova.compute.manager [-] [instance: 2341fd69-a672-42e2-834b-0f7b8269c7ef] Took 0.37 seconds to deallocate network for instance.#033[00m
Dec  6 02:02:45 np0005548731 nova_compute[232433]: 2025-12-06 07:02:45.363 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Instance d32579c4-62c8-41ac-9d01-b617cc7992ed actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  6 02:02:45 np0005548731 nova_compute[232433]: 2025-12-06 07:02:45.363 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Instance 2341fd69-a672-42e2-834b-0f7b8269c7ef actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 192, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  6 02:02:45 np0005548731 nova_compute[232433]: 2025-12-06 07:02:45.363 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Instance 646ce79a-a40a-4b2a-9af5-90c98c98b72a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  6 02:02:45 np0005548731 nova_compute[232433]: 2025-12-06 07:02:45.363 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  6 02:02:45 np0005548731 nova_compute[232433]: 2025-12-06 07:02:45.364 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7680MB used_ram=960MB phys_disk=20GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  6 02:02:45 np0005548731 nova_compute[232433]: 2025-12-06 07:02:45.453 232437 DEBUG oslo_concurrency.lockutils [None req-1f43dd13-813d-420a-90ef-decd49de0950 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:02:45 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:02:45 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 02:02:45 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:02:45.595 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 02:02:45 np0005548731 nova_compute[232433]: 2025-12-06 07:02:45.602 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:02:45 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:02:45 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:02:45 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:02:45.662 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:02:46 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 02:02:46 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1248195527' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 02:02:46 np0005548731 nova_compute[232433]: 2025-12-06 07:02:46.051 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.448s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:02:46 np0005548731 nova_compute[232433]: 2025-12-06 07:02:46.055 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:02:46 np0005548731 nova_compute[232433]: 2025-12-06 07:02:46.059 232437 DEBUG nova.compute.provider_tree [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Inventory has not changed in ProviderTree for provider: 6d00757a-082f-486d-ae84-869a2ba2e6e7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  6 02:02:46 np0005548731 nova_compute[232433]: 2025-12-06 07:02:46.078 232437 DEBUG nova.scheduler.client.report [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Inventory has not changed for provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  6 02:02:46 np0005548731 nova_compute[232433]: 2025-12-06 07:02:46.109 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  6 02:02:46 np0005548731 nova_compute[232433]: 2025-12-06 07:02:46.110 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.175s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:02:46 np0005548731 nova_compute[232433]: 2025-12-06 07:02:46.110 232437 DEBUG oslo_concurrency.lockutils [None req-1f43dd13-813d-420a-90ef-decd49de0950 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.657s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:02:46 np0005548731 nova_compute[232433]: 2025-12-06 07:02:46.111 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:02:46 np0005548731 nova_compute[232433]: 2025-12-06 07:02:46.111 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Dec  6 02:02:46 np0005548731 nova_compute[232433]: 2025-12-06 07:02:46.355 232437 DEBUG oslo_concurrency.processutils [None req-1f43dd13-813d-420a-90ef-decd49de0950 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:02:46 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 02:02:46 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2579202930' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 02:02:46 np0005548731 nova_compute[232433]: 2025-12-06 07:02:46.796 232437 DEBUG oslo_concurrency.processutils [None req-1f43dd13-813d-420a-90ef-decd49de0950 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.441s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:02:46 np0005548731 nova_compute[232433]: 2025-12-06 07:02:46.801 232437 DEBUG nova.compute.provider_tree [None req-1f43dd13-813d-420a-90ef-decd49de0950 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Inventory has not changed in ProviderTree for provider: 6d00757a-082f-486d-ae84-869a2ba2e6e7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  6 02:02:46 np0005548731 nova_compute[232433]: 2025-12-06 07:02:46.838 232437 DEBUG nova.scheduler.client.report [None req-1f43dd13-813d-420a-90ef-decd49de0950 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Inventory has not changed for provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  6 02:02:46 np0005548731 nova_compute[232433]: 2025-12-06 07:02:46.880 232437 DEBUG oslo_concurrency.lockutils [None req-1f43dd13-813d-420a-90ef-decd49de0950 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.770s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:02:46 np0005548731 nova_compute[232433]: 2025-12-06 07:02:46.937 232437 INFO nova.scheduler.client.report [None req-1f43dd13-813d-420a-90ef-decd49de0950 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Deleted allocations for instance 2341fd69-a672-42e2-834b-0f7b8269c7ef#033[00m
Dec  6 02:02:47 np0005548731 nova_compute[232433]: 2025-12-06 07:02:47.043 232437 DEBUG oslo_concurrency.lockutils [None req-1f43dd13-813d-420a-90ef-decd49de0950 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Lock "2341fd69-a672-42e2-834b-0f7b8269c7ef" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.350s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:02:47 np0005548731 nova_compute[232433]: 2025-12-06 07:02:47.109 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:02:47 np0005548731 nova_compute[232433]: 2025-12-06 07:02:47.147 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:02:47 np0005548731 nova_compute[232433]: 2025-12-06 07:02:47.148 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:02:47 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:02:47 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:02:47 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:02:47.598 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:02:47 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:02:47 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 02:02:47 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:02:47.664 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 02:02:48 np0005548731 nova_compute[232433]: 2025-12-06 07:02:48.105 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:02:48 np0005548731 nova_compute[232433]: 2025-12-06 07:02:48.105 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  6 02:02:48 np0005548731 nova_compute[232433]: 2025-12-06 07:02:48.408 232437 DEBUG oslo_concurrency.lockutils [None req-42c4f7ff-3340-4bfb-b0bb-90e0f0f7b528 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Acquiring lock "d32579c4-62c8-41ac-9d01-b617cc7992ed" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:02:48 np0005548731 nova_compute[232433]: 2025-12-06 07:02:48.408 232437 DEBUG oslo_concurrency.lockutils [None req-42c4f7ff-3340-4bfb-b0bb-90e0f0f7b528 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Lock "d32579c4-62c8-41ac-9d01-b617cc7992ed" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:02:48 np0005548731 nova_compute[232433]: 2025-12-06 07:02:48.409 232437 DEBUG oslo_concurrency.lockutils [None req-42c4f7ff-3340-4bfb-b0bb-90e0f0f7b528 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Acquiring lock "d32579c4-62c8-41ac-9d01-b617cc7992ed-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:02:48 np0005548731 nova_compute[232433]: 2025-12-06 07:02:48.409 232437 DEBUG oslo_concurrency.lockutils [None req-42c4f7ff-3340-4bfb-b0bb-90e0f0f7b528 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Lock "d32579c4-62c8-41ac-9d01-b617cc7992ed-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:02:48 np0005548731 nova_compute[232433]: 2025-12-06 07:02:48.409 232437 DEBUG oslo_concurrency.lockutils [None req-42c4f7ff-3340-4bfb-b0bb-90e0f0f7b528 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Lock "d32579c4-62c8-41ac-9d01-b617cc7992ed-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:02:48 np0005548731 nova_compute[232433]: 2025-12-06 07:02:48.410 232437 INFO nova.compute.manager [None req-42c4f7ff-3340-4bfb-b0bb-90e0f0f7b528 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] [instance: d32579c4-62c8-41ac-9d01-b617cc7992ed] Terminating instance#033[00m
Dec  6 02:02:48 np0005548731 nova_compute[232433]: 2025-12-06 07:02:48.411 232437 DEBUG oslo_concurrency.lockutils [None req-42c4f7ff-3340-4bfb-b0bb-90e0f0f7b528 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Acquiring lock "refresh_cache-d32579c4-62c8-41ac-9d01-b617cc7992ed" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 02:02:48 np0005548731 nova_compute[232433]: 2025-12-06 07:02:48.411 232437 DEBUG oslo_concurrency.lockutils [None req-42c4f7ff-3340-4bfb-b0bb-90e0f0f7b528 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Acquired lock "refresh_cache-d32579c4-62c8-41ac-9d01-b617cc7992ed" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 02:02:48 np0005548731 nova_compute[232433]: 2025-12-06 07:02:48.411 232437 DEBUG nova.network.neutron [None req-42c4f7ff-3340-4bfb-b0bb-90e0f0f7b528 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] [instance: d32579c4-62c8-41ac-9d01-b617cc7992ed] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec  6 02:02:48 np0005548731 nova_compute[232433]: 2025-12-06 07:02:48.684 232437 DEBUG nova.network.neutron [None req-42c4f7ff-3340-4bfb-b0bb-90e0f0f7b528 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] [instance: d32579c4-62c8-41ac-9d01-b617cc7992ed] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec  6 02:02:49 np0005548731 nova_compute[232433]: 2025-12-06 07:02:49.079 232437 DEBUG oslo_concurrency.lockutils [None req-d7beabbb-bb09-4514-b32c-d7618151a9f6 6805353f6bf048f9b406a1e565a13f11 dc1bc9517198484ab30d93ebd5d88c35 - - default default] Acquiring lock "4dd1863a-3cb1-4e42-8611-ed9587a0b6ce" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:02:49 np0005548731 nova_compute[232433]: 2025-12-06 07:02:49.079 232437 DEBUG oslo_concurrency.lockutils [None req-d7beabbb-bb09-4514-b32c-d7618151a9f6 6805353f6bf048f9b406a1e565a13f11 dc1bc9517198484ab30d93ebd5d88c35 - - default default] Lock "4dd1863a-3cb1-4e42-8611-ed9587a0b6ce" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:02:49 np0005548731 nova_compute[232433]: 2025-12-06 07:02:49.111 232437 DEBUG nova.compute.manager [None req-d7beabbb-bb09-4514-b32c-d7618151a9f6 6805353f6bf048f9b406a1e565a13f11 dc1bc9517198484ab30d93ebd5d88c35 - - default default] [instance: 4dd1863a-3cb1-4e42-8611-ed9587a0b6ce] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Dec  6 02:02:49 np0005548731 nova_compute[232433]: 2025-12-06 07:02:49.207 232437 DEBUG oslo_concurrency.lockutils [None req-d7beabbb-bb09-4514-b32c-d7618151a9f6 6805353f6bf048f9b406a1e565a13f11 dc1bc9517198484ab30d93ebd5d88c35 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:02:49 np0005548731 nova_compute[232433]: 2025-12-06 07:02:49.207 232437 DEBUG oslo_concurrency.lockutils [None req-d7beabbb-bb09-4514-b32c-d7618151a9f6 6805353f6bf048f9b406a1e565a13f11 dc1bc9517198484ab30d93ebd5d88c35 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:02:49 np0005548731 nova_compute[232433]: 2025-12-06 07:02:49.213 232437 DEBUG nova.virt.hardware [None req-d7beabbb-bb09-4514-b32c-d7618151a9f6 6805353f6bf048f9b406a1e565a13f11 dc1bc9517198484ab30d93ebd5d88c35 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Dec  6 02:02:49 np0005548731 nova_compute[232433]: 2025-12-06 07:02:49.213 232437 INFO nova.compute.claims [None req-d7beabbb-bb09-4514-b32c-d7618151a9f6 6805353f6bf048f9b406a1e565a13f11 dc1bc9517198484ab30d93ebd5d88c35 - - default default] [instance: 4dd1863a-3cb1-4e42-8611-ed9587a0b6ce] Claim successful on node compute-2.ctlplane.example.com#033[00m
Dec  6 02:02:49 np0005548731 nova_compute[232433]: 2025-12-06 07:02:49.309 232437 DEBUG nova.network.neutron [None req-42c4f7ff-3340-4bfb-b0bb-90e0f0f7b528 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] [instance: d32579c4-62c8-41ac-9d01-b617cc7992ed] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 02:02:49 np0005548731 nova_compute[232433]: 2025-12-06 07:02:49.364 232437 DEBUG oslo_concurrency.lockutils [None req-42c4f7ff-3340-4bfb-b0bb-90e0f0f7b528 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Releasing lock "refresh_cache-d32579c4-62c8-41ac-9d01-b617cc7992ed" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 02:02:49 np0005548731 nova_compute[232433]: 2025-12-06 07:02:49.365 232437 DEBUG nova.compute.manager [None req-42c4f7ff-3340-4bfb-b0bb-90e0f0f7b528 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] [instance: d32579c4-62c8-41ac-9d01-b617cc7992ed] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec  6 02:02:49 np0005548731 systemd[1]: machine-qemu\x2d4\x2dinstance\x2d0000000b.scope: Deactivated successfully.
Dec  6 02:02:49 np0005548731 systemd[1]: machine-qemu\x2d4\x2dinstance\x2d0000000b.scope: Consumed 20.990s CPU time.
Dec  6 02:02:49 np0005548731 systemd-machined[195355]: Machine qemu-4-instance-0000000b terminated.
Dec  6 02:02:49 np0005548731 nova_compute[232433]: 2025-12-06 07:02:49.450 232437 DEBUG oslo_concurrency.processutils [None req-d7beabbb-bb09-4514-b32c-d7618151a9f6 6805353f6bf048f9b406a1e565a13f11 dc1bc9517198484ab30d93ebd5d88c35 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:02:49 np0005548731 nova_compute[232433]: 2025-12-06 07:02:49.583 232437 INFO nova.virt.libvirt.driver [-] [instance: d32579c4-62c8-41ac-9d01-b617cc7992ed] Instance destroyed successfully.#033[00m
Dec  6 02:02:49 np0005548731 nova_compute[232433]: 2025-12-06 07:02:49.583 232437 DEBUG nova.objects.instance [None req-42c4f7ff-3340-4bfb-b0bb-90e0f0f7b528 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Lazy-loading 'resources' on Instance uuid d32579c4-62c8-41ac-9d01-b617cc7992ed obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:02:49 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:02:49 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:02:49 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:02:49.600 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:02:49 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:02:49 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:02:49 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:02:49.666 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:02:49 np0005548731 nova_compute[232433]: 2025-12-06 07:02:49.682 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:02:49 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 02:02:49 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/46249981' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 02:02:49 np0005548731 nova_compute[232433]: 2025-12-06 07:02:49.910 232437 DEBUG oslo_concurrency.processutils [None req-d7beabbb-bb09-4514-b32c-d7618151a9f6 6805353f6bf048f9b406a1e565a13f11 dc1bc9517198484ab30d93ebd5d88c35 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.460s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:02:49 np0005548731 nova_compute[232433]: 2025-12-06 07:02:49.914 232437 DEBUG nova.compute.provider_tree [None req-d7beabbb-bb09-4514-b32c-d7618151a9f6 6805353f6bf048f9b406a1e565a13f11 dc1bc9517198484ab30d93ebd5d88c35 - - default default] Inventory has not changed in ProviderTree for provider: 6d00757a-082f-486d-ae84-869a2ba2e6e7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  6 02:02:49 np0005548731 nova_compute[232433]: 2025-12-06 07:02:49.945 232437 DEBUG nova.scheduler.client.report [None req-d7beabbb-bb09-4514-b32c-d7618151a9f6 6805353f6bf048f9b406a1e565a13f11 dc1bc9517198484ab30d93ebd5d88c35 - - default default] Inventory has not changed for provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  6 02:02:49 np0005548731 nova_compute[232433]: 2025-12-06 07:02:49.971 232437 DEBUG oslo_concurrency.lockutils [None req-d7beabbb-bb09-4514-b32c-d7618151a9f6 6805353f6bf048f9b406a1e565a13f11 dc1bc9517198484ab30d93ebd5d88c35 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.764s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:02:49 np0005548731 nova_compute[232433]: 2025-12-06 07:02:49.972 232437 DEBUG nova.compute.manager [None req-d7beabbb-bb09-4514-b32c-d7618151a9f6 6805353f6bf048f9b406a1e565a13f11 dc1bc9517198484ab30d93ebd5d88c35 - - default default] [instance: 4dd1863a-3cb1-4e42-8611-ed9587a0b6ce] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Dec  6 02:02:50 np0005548731 nova_compute[232433]: 2025-12-06 07:02:50.016 232437 INFO nova.virt.libvirt.driver [None req-42c4f7ff-3340-4bfb-b0bb-90e0f0f7b528 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] [instance: d32579c4-62c8-41ac-9d01-b617cc7992ed] Deleting instance files /var/lib/nova/instances/d32579c4-62c8-41ac-9d01-b617cc7992ed_del#033[00m
Dec  6 02:02:50 np0005548731 nova_compute[232433]: 2025-12-06 07:02:50.017 232437 INFO nova.virt.libvirt.driver [None req-42c4f7ff-3340-4bfb-b0bb-90e0f0f7b528 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] [instance: d32579c4-62c8-41ac-9d01-b617cc7992ed] Deletion of /var/lib/nova/instances/d32579c4-62c8-41ac-9d01-b617cc7992ed_del complete#033[00m
Dec  6 02:02:50 np0005548731 nova_compute[232433]: 2025-12-06 07:02:50.023 232437 DEBUG nova.compute.manager [None req-d7beabbb-bb09-4514-b32c-d7618151a9f6 6805353f6bf048f9b406a1e565a13f11 dc1bc9517198484ab30d93ebd5d88c35 - - default default] [instance: 4dd1863a-3cb1-4e42-8611-ed9587a0b6ce] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Dec  6 02:02:50 np0005548731 nova_compute[232433]: 2025-12-06 07:02:50.023 232437 DEBUG nova.network.neutron [None req-d7beabbb-bb09-4514-b32c-d7618151a9f6 6805353f6bf048f9b406a1e565a13f11 dc1bc9517198484ab30d93ebd5d88c35 - - default default] [instance: 4dd1863a-3cb1-4e42-8611-ed9587a0b6ce] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Dec  6 02:02:50 np0005548731 nova_compute[232433]: 2025-12-06 07:02:50.059 232437 INFO nova.virt.libvirt.driver [None req-d7beabbb-bb09-4514-b32c-d7618151a9f6 6805353f6bf048f9b406a1e565a13f11 dc1bc9517198484ab30d93ebd5d88c35 - - default default] [instance: 4dd1863a-3cb1-4e42-8611-ed9587a0b6ce] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Dec  6 02:02:50 np0005548731 nova_compute[232433]: 2025-12-06 07:02:50.089 232437 INFO nova.compute.manager [None req-42c4f7ff-3340-4bfb-b0bb-90e0f0f7b528 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] [instance: d32579c4-62c8-41ac-9d01-b617cc7992ed] Took 0.72 seconds to destroy the instance on the hypervisor.#033[00m
Dec  6 02:02:50 np0005548731 nova_compute[232433]: 2025-12-06 07:02:50.090 232437 DEBUG oslo.service.loopingcall [None req-42c4f7ff-3340-4bfb-b0bb-90e0f0f7b528 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Dec  6 02:02:50 np0005548731 nova_compute[232433]: 2025-12-06 07:02:50.090 232437 DEBUG nova.compute.manager [None req-d7beabbb-bb09-4514-b32c-d7618151a9f6 6805353f6bf048f9b406a1e565a13f11 dc1bc9517198484ab30d93ebd5d88c35 - - default default] [instance: 4dd1863a-3cb1-4e42-8611-ed9587a0b6ce] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Dec  6 02:02:50 np0005548731 nova_compute[232433]: 2025-12-06 07:02:50.094 232437 DEBUG nova.compute.manager [-] [instance: d32579c4-62c8-41ac-9d01-b617cc7992ed] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Dec  6 02:02:50 np0005548731 nova_compute[232433]: 2025-12-06 07:02:50.094 232437 DEBUG nova.network.neutron [-] [instance: d32579c4-62c8-41ac-9d01-b617cc7992ed] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Dec  6 02:02:50 np0005548731 nova_compute[232433]: 2025-12-06 07:02:50.155 232437 INFO nova.virt.block_device [None req-d7beabbb-bb09-4514-b32c-d7618151a9f6 6805353f6bf048f9b406a1e565a13f11 dc1bc9517198484ab30d93ebd5d88c35 - - default default] [instance: 4dd1863a-3cb1-4e42-8611-ed9587a0b6ce] Booting with volume 82a6fabc-b798-4571-a13e-f9c67a3f9413 at /dev/vda#033[00m
Dec  6 02:02:50 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e168 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:02:50 np0005548731 nova_compute[232433]: 2025-12-06 07:02:50.536 232437 DEBUG nova.network.neutron [-] [instance: d32579c4-62c8-41ac-9d01-b617cc7992ed] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec  6 02:02:50 np0005548731 nova_compute[232433]: 2025-12-06 07:02:50.556 232437 DEBUG nova.network.neutron [-] [instance: d32579c4-62c8-41ac-9d01-b617cc7992ed] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 02:02:50 np0005548731 nova_compute[232433]: 2025-12-06 07:02:50.583 232437 INFO nova.compute.manager [-] [instance: d32579c4-62c8-41ac-9d01-b617cc7992ed] Took 0.49 seconds to deallocate network for instance.#033[00m
Dec  6 02:02:50 np0005548731 nova_compute[232433]: 2025-12-06 07:02:50.615 232437 DEBUG os_brick.utils [None req-d7beabbb-bb09-4514-b32c-d7618151a9f6 6805353f6bf048f9b406a1e565a13f11 dc1bc9517198484ab30d93ebd5d88c35 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.102', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-2.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Dec  6 02:02:50 np0005548731 nova_compute[232433]: 2025-12-06 07:02:50.617 237736 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:02:50 np0005548731 nova_compute[232433]: 2025-12-06 07:02:50.631 237736 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.014s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:02:50 np0005548731 nova_compute[232433]: 2025-12-06 07:02:50.631 237736 DEBUG oslo.privsep.daemon [-] privsep: reply[b0f4ae8c-5d8d-431d-ac97-b28a54bbda25]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:02:50 np0005548731 nova_compute[232433]: 2025-12-06 07:02:50.632 237736 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:02:50 np0005548731 nova_compute[232433]: 2025-12-06 07:02:50.639 237736 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.007s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:02:50 np0005548731 nova_compute[232433]: 2025-12-06 07:02:50.640 237736 DEBUG oslo.privsep.daemon [-] privsep: reply[e8fab2f5-206f-4445-ae4a-9bf7fa1fe10e]: (4, ('InitiatorName=iqn.1994-05.com.redhat:63778d5959f0', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:02:50 np0005548731 nova_compute[232433]: 2025-12-06 07:02:50.641 237736 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:02:50 np0005548731 nova_compute[232433]: 2025-12-06 07:02:50.649 237736 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.007s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:02:50 np0005548731 nova_compute[232433]: 2025-12-06 07:02:50.649 237736 DEBUG oslo.privsep.daemon [-] privsep: reply[0fb181a1-9891-4b3d-a35c-20ebc17caaf2]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:02:50 np0005548731 nova_compute[232433]: 2025-12-06 07:02:50.651 237736 DEBUG oslo.privsep.daemon [-] privsep: reply[1cd9bb4f-3d1a-4d2c-a59b-953fbce9a71c]: (4, 'ec2492e2-e0d1-43d3-b74f-744a8272a0e6') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:02:50 np0005548731 nova_compute[232433]: 2025-12-06 07:02:50.652 232437 DEBUG oslo_concurrency.processutils [None req-d7beabbb-bb09-4514-b32c-d7618151a9f6 6805353f6bf048f9b406a1e565a13f11 dc1bc9517198484ab30d93ebd5d88c35 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:02:50 np0005548731 nova_compute[232433]: 2025-12-06 07:02:50.679 232437 DEBUG oslo_concurrency.lockutils [None req-42c4f7ff-3340-4bfb-b0bb-90e0f0f7b528 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:02:50 np0005548731 nova_compute[232433]: 2025-12-06 07:02:50.680 232437 DEBUG oslo_concurrency.lockutils [None req-42c4f7ff-3340-4bfb-b0bb-90e0f0f7b528 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:02:50 np0005548731 nova_compute[232433]: 2025-12-06 07:02:50.681 232437 DEBUG oslo_concurrency.processutils [None req-d7beabbb-bb09-4514-b32c-d7618151a9f6 6805353f6bf048f9b406a1e565a13f11 dc1bc9517198484ab30d93ebd5d88c35 - - default default] CMD "nvme version" returned: 0 in 0.030s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:02:50 np0005548731 nova_compute[232433]: 2025-12-06 07:02:50.686 232437 DEBUG nova.policy [None req-d7beabbb-bb09-4514-b32c-d7618151a9f6 6805353f6bf048f9b406a1e565a13f11 dc1bc9517198484ab30d93ebd5d88c35 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '6805353f6bf048f9b406a1e565a13f11', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'dc1bc9517198484ab30d93ebd5d88c35', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Dec  6 02:02:50 np0005548731 nova_compute[232433]: 2025-12-06 07:02:50.688 232437 DEBUG os_brick.initiator.connectors.lightos [None req-d7beabbb-bb09-4514-b32c-d7618151a9f6 6805353f6bf048f9b406a1e565a13f11 dc1bc9517198484ab30d93ebd5d88c35 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Dec  6 02:02:50 np0005548731 nova_compute[232433]: 2025-12-06 07:02:50.688 232437 DEBUG os_brick.initiator.connectors.lightos [None req-d7beabbb-bb09-4514-b32c-d7618151a9f6 6805353f6bf048f9b406a1e565a13f11 dc1bc9517198484ab30d93ebd5d88c35 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Dec  6 02:02:50 np0005548731 nova_compute[232433]: 2025-12-06 07:02:50.689 232437 DEBUG os_brick.initiator.connectors.lightos [None req-d7beabbb-bb09-4514-b32c-d7618151a9f6 6805353f6bf048f9b406a1e565a13f11 dc1bc9517198484ab30d93ebd5d88c35 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:bf3e0a14-a5f8-4123-aa26-e7cad37b879a dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Dec  6 02:02:50 np0005548731 nova_compute[232433]: 2025-12-06 07:02:50.689 232437 DEBUG os_brick.utils [None req-d7beabbb-bb09-4514-b32c-d7618151a9f6 6805353f6bf048f9b406a1e565a13f11 dc1bc9517198484ab30d93ebd5d88c35 - - default default] <== get_connector_properties: return (73ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.102', 'host': 'compute-2.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:63778d5959f0', 'do_local_attach': False, 'nvme_hostid': 'bf3e0a14-a5f8-4123-aa26-e7cad37b879a', 'system uuid': 'ec2492e2-e0d1-43d3-b74f-744a8272a0e6', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:bf3e0a14-a5f8-4123-aa26-e7cad37b879a', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Dec  6 02:02:50 np0005548731 nova_compute[232433]: 2025-12-06 07:02:50.689 232437 DEBUG nova.virt.block_device [None req-d7beabbb-bb09-4514-b32c-d7618151a9f6 6805353f6bf048f9b406a1e565a13f11 dc1bc9517198484ab30d93ebd5d88c35 - - default default] [instance: 4dd1863a-3cb1-4e42-8611-ed9587a0b6ce] Updating existing volume attachment record: 1f7e2e99-fed4-47ad-9a87-fb8a1b33200d _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Dec  6 02:02:50 np0005548731 nova_compute[232433]: 2025-12-06 07:02:50.928 232437 DEBUG oslo_concurrency.processutils [None req-42c4f7ff-3340-4bfb-b0bb-90e0f0f7b528 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:02:51 np0005548731 nova_compute[232433]: 2025-12-06 07:02:51.058 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:02:51 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 02:02:51 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/4028376923' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 02:02:51 np0005548731 nova_compute[232433]: 2025-12-06 07:02:51.402 232437 DEBUG oslo_concurrency.processutils [None req-42c4f7ff-3340-4bfb-b0bb-90e0f0f7b528 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.474s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:02:51 np0005548731 nova_compute[232433]: 2025-12-06 07:02:51.407 232437 DEBUG nova.compute.provider_tree [None req-42c4f7ff-3340-4bfb-b0bb-90e0f0f7b528 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Inventory has not changed in ProviderTree for provider: 6d00757a-082f-486d-ae84-869a2ba2e6e7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  6 02:02:51 np0005548731 nova_compute[232433]: 2025-12-06 07:02:51.440 232437 DEBUG nova.scheduler.client.report [None req-42c4f7ff-3340-4bfb-b0bb-90e0f0f7b528 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Inventory has not changed for provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  6 02:02:51 np0005548731 nova_compute[232433]: 2025-12-06 07:02:51.465 232437 DEBUG oslo_concurrency.lockutils [None req-42c4f7ff-3340-4bfb-b0bb-90e0f0f7b528 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.785s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:02:51 np0005548731 nova_compute[232433]: 2025-12-06 07:02:51.495 232437 INFO nova.scheduler.client.report [None req-42c4f7ff-3340-4bfb-b0bb-90e0f0f7b528 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Deleted allocations for instance d32579c4-62c8-41ac-9d01-b617cc7992ed#033[00m
Dec  6 02:02:51 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:02:51 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:02:51 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:02:51.602 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:02:51 np0005548731 nova_compute[232433]: 2025-12-06 07:02:51.607 232437 DEBUG oslo_concurrency.lockutils [None req-42c4f7ff-3340-4bfb-b0bb-90e0f0f7b528 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Lock "d32579c4-62c8-41ac-9d01-b617cc7992ed" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.198s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:02:51 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:02:51 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 02:02:51 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:02:51.668 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 02:02:51 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  6 02:02:51 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3290223524' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  6 02:02:51 np0005548731 nova_compute[232433]: 2025-12-06 07:02:51.912 232437 DEBUG nova.network.neutron [None req-d7beabbb-bb09-4514-b32c-d7618151a9f6 6805353f6bf048f9b406a1e565a13f11 dc1bc9517198484ab30d93ebd5d88c35 - - default default] [instance: 4dd1863a-3cb1-4e42-8611-ed9587a0b6ce] Successfully created port: 510079bd-5c00-4b8a-b2eb-d63f0ffc3d69 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Dec  6 02:02:52 np0005548731 nova_compute[232433]: 2025-12-06 07:02:52.906 232437 DEBUG nova.compute.manager [None req-d7beabbb-bb09-4514-b32c-d7618151a9f6 6805353f6bf048f9b406a1e565a13f11 dc1bc9517198484ab30d93ebd5d88c35 - - default default] [instance: 4dd1863a-3cb1-4e42-8611-ed9587a0b6ce] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Dec  6 02:02:52 np0005548731 nova_compute[232433]: 2025-12-06 07:02:52.907 232437 DEBUG nova.virt.libvirt.driver [None req-d7beabbb-bb09-4514-b32c-d7618151a9f6 6805353f6bf048f9b406a1e565a13f11 dc1bc9517198484ab30d93ebd5d88c35 - - default default] [instance: 4dd1863a-3cb1-4e42-8611-ed9587a0b6ce] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Dec  6 02:02:52 np0005548731 nova_compute[232433]: 2025-12-06 07:02:52.908 232437 INFO nova.virt.libvirt.driver [None req-d7beabbb-bb09-4514-b32c-d7618151a9f6 6805353f6bf048f9b406a1e565a13f11 dc1bc9517198484ab30d93ebd5d88c35 - - default default] [instance: 4dd1863a-3cb1-4e42-8611-ed9587a0b6ce] Creating image(s)#033[00m
Dec  6 02:02:52 np0005548731 nova_compute[232433]: 2025-12-06 07:02:52.908 232437 DEBUG nova.virt.libvirt.driver [None req-d7beabbb-bb09-4514-b32c-d7618151a9f6 6805353f6bf048f9b406a1e565a13f11 dc1bc9517198484ab30d93ebd5d88c35 - - default default] [instance: 4dd1863a-3cb1-4e42-8611-ed9587a0b6ce] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859#033[00m
Dec  6 02:02:52 np0005548731 nova_compute[232433]: 2025-12-06 07:02:52.908 232437 DEBUG nova.virt.libvirt.driver [None req-d7beabbb-bb09-4514-b32c-d7618151a9f6 6805353f6bf048f9b406a1e565a13f11 dc1bc9517198484ab30d93ebd5d88c35 - - default default] [instance: 4dd1863a-3cb1-4e42-8611-ed9587a0b6ce] Ensure instance console log exists: /var/lib/nova/instances/4dd1863a-3cb1-4e42-8611-ed9587a0b6ce/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec  6 02:02:52 np0005548731 nova_compute[232433]: 2025-12-06 07:02:52.909 232437 DEBUG oslo_concurrency.lockutils [None req-d7beabbb-bb09-4514-b32c-d7618151a9f6 6805353f6bf048f9b406a1e565a13f11 dc1bc9517198484ab30d93ebd5d88c35 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:02:52 np0005548731 nova_compute[232433]: 2025-12-06 07:02:52.909 232437 DEBUG oslo_concurrency.lockutils [None req-d7beabbb-bb09-4514-b32c-d7618151a9f6 6805353f6bf048f9b406a1e565a13f11 dc1bc9517198484ab30d93ebd5d88c35 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:02:52 np0005548731 nova_compute[232433]: 2025-12-06 07:02:52.909 232437 DEBUG oslo_concurrency.lockutils [None req-d7beabbb-bb09-4514-b32c-d7618151a9f6 6805353f6bf048f9b406a1e565a13f11 dc1bc9517198484ab30d93ebd5d88c35 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:02:53 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:02:53 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:02:53 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:02:53.604 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:02:53 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:02:53 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:02:53 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:02:53.670 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:02:54 np0005548731 nova_compute[232433]: 2025-12-06 07:02:54.250 232437 DEBUG nova.network.neutron [None req-d7beabbb-bb09-4514-b32c-d7618151a9f6 6805353f6bf048f9b406a1e565a13f11 dc1bc9517198484ab30d93ebd5d88c35 - - default default] [instance: 4dd1863a-3cb1-4e42-8611-ed9587a0b6ce] Successfully updated port: 510079bd-5c00-4b8a-b2eb-d63f0ffc3d69 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Dec  6 02:02:54 np0005548731 nova_compute[232433]: 2025-12-06 07:02:54.275 232437 DEBUG oslo_concurrency.lockutils [None req-d7beabbb-bb09-4514-b32c-d7618151a9f6 6805353f6bf048f9b406a1e565a13f11 dc1bc9517198484ab30d93ebd5d88c35 - - default default] Acquiring lock "refresh_cache-4dd1863a-3cb1-4e42-8611-ed9587a0b6ce" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 02:02:54 np0005548731 nova_compute[232433]: 2025-12-06 07:02:54.275 232437 DEBUG oslo_concurrency.lockutils [None req-d7beabbb-bb09-4514-b32c-d7618151a9f6 6805353f6bf048f9b406a1e565a13f11 dc1bc9517198484ab30d93ebd5d88c35 - - default default] Acquired lock "refresh_cache-4dd1863a-3cb1-4e42-8611-ed9587a0b6ce" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 02:02:54 np0005548731 nova_compute[232433]: 2025-12-06 07:02:54.276 232437 DEBUG nova.network.neutron [None req-d7beabbb-bb09-4514-b32c-d7618151a9f6 6805353f6bf048f9b406a1e565a13f11 dc1bc9517198484ab30d93ebd5d88c35 - - default default] [instance: 4dd1863a-3cb1-4e42-8611-ed9587a0b6ce] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec  6 02:02:54 np0005548731 nova_compute[232433]: 2025-12-06 07:02:54.580 232437 DEBUG nova.compute.manager [req-13b71c1d-fd7d-4552-b74c-df26865e6d1d req-23f00ed3-afcb-4804-bcbc-715894342732 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 4dd1863a-3cb1-4e42-8611-ed9587a0b6ce] Received event network-changed-510079bd-5c00-4b8a-b2eb-d63f0ffc3d69 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:02:54 np0005548731 nova_compute[232433]: 2025-12-06 07:02:54.581 232437 DEBUG nova.compute.manager [req-13b71c1d-fd7d-4552-b74c-df26865e6d1d req-23f00ed3-afcb-4804-bcbc-715894342732 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 4dd1863a-3cb1-4e42-8611-ed9587a0b6ce] Refreshing instance network info cache due to event network-changed-510079bd-5c00-4b8a-b2eb-d63f0ffc3d69. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  6 02:02:54 np0005548731 nova_compute[232433]: 2025-12-06 07:02:54.581 232437 DEBUG oslo_concurrency.lockutils [req-13b71c1d-fd7d-4552-b74c-df26865e6d1d req-23f00ed3-afcb-4804-bcbc-715894342732 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-4dd1863a-3cb1-4e42-8611-ed9587a0b6ce" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 02:02:54 np0005548731 nova_compute[232433]: 2025-12-06 07:02:54.685 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:02:54 np0005548731 nova_compute[232433]: 2025-12-06 07:02:54.716 232437 DEBUG nova.network.neutron [None req-d7beabbb-bb09-4514-b32c-d7618151a9f6 6805353f6bf048f9b406a1e565a13f11 dc1bc9517198484ab30d93ebd5d88c35 - - default default] [instance: 4dd1863a-3cb1-4e42-8611-ed9587a0b6ce] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec  6 02:02:55 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e168 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:02:55 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:02:55 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:02:55 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:02:55.607 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:02:55 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:02:55 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:02:55 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:02:55.671 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:02:56 np0005548731 nova_compute[232433]: 2025-12-06 07:02:56.059 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:02:56 np0005548731 nova_compute[232433]: 2025-12-06 07:02:56.143 232437 DEBUG nova.network.neutron [None req-d7beabbb-bb09-4514-b32c-d7618151a9f6 6805353f6bf048f9b406a1e565a13f11 dc1bc9517198484ab30d93ebd5d88c35 - - default default] [instance: 4dd1863a-3cb1-4e42-8611-ed9587a0b6ce] Updating instance_info_cache with network_info: [{"id": "510079bd-5c00-4b8a-b2eb-d63f0ffc3d69", "address": "fa:16:3e:30:77:d0", "network": {"id": "dfa287bf-10c3-40fc-8071-37bb7f801357", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-1283365408-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dc1bc9517198484ab30d93ebd5d88c35", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap510079bd-5c", "ovs_interfaceid": "510079bd-5c00-4b8a-b2eb-d63f0ffc3d69", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 02:02:56 np0005548731 nova_compute[232433]: 2025-12-06 07:02:56.159 232437 DEBUG oslo_concurrency.lockutils [None req-d7beabbb-bb09-4514-b32c-d7618151a9f6 6805353f6bf048f9b406a1e565a13f11 dc1bc9517198484ab30d93ebd5d88c35 - - default default] Releasing lock "refresh_cache-4dd1863a-3cb1-4e42-8611-ed9587a0b6ce" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 02:02:56 np0005548731 nova_compute[232433]: 2025-12-06 07:02:56.159 232437 DEBUG nova.compute.manager [None req-d7beabbb-bb09-4514-b32c-d7618151a9f6 6805353f6bf048f9b406a1e565a13f11 dc1bc9517198484ab30d93ebd5d88c35 - - default default] [instance: 4dd1863a-3cb1-4e42-8611-ed9587a0b6ce] Instance network_info: |[{"id": "510079bd-5c00-4b8a-b2eb-d63f0ffc3d69", "address": "fa:16:3e:30:77:d0", "network": {"id": "dfa287bf-10c3-40fc-8071-37bb7f801357", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-1283365408-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dc1bc9517198484ab30d93ebd5d88c35", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap510079bd-5c", "ovs_interfaceid": "510079bd-5c00-4b8a-b2eb-d63f0ffc3d69", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Dec  6 02:02:56 np0005548731 nova_compute[232433]: 2025-12-06 07:02:56.159 232437 DEBUG oslo_concurrency.lockutils [req-13b71c1d-fd7d-4552-b74c-df26865e6d1d req-23f00ed3-afcb-4804-bcbc-715894342732 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-4dd1863a-3cb1-4e42-8611-ed9587a0b6ce" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 02:02:56 np0005548731 nova_compute[232433]: 2025-12-06 07:02:56.159 232437 DEBUG nova.network.neutron [req-13b71c1d-fd7d-4552-b74c-df26865e6d1d req-23f00ed3-afcb-4804-bcbc-715894342732 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 4dd1863a-3cb1-4e42-8611-ed9587a0b6ce] Refreshing network info cache for port 510079bd-5c00-4b8a-b2eb-d63f0ffc3d69 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  6 02:02:56 np0005548731 nova_compute[232433]: 2025-12-06 07:02:56.162 232437 DEBUG nova.virt.libvirt.driver [None req-d7beabbb-bb09-4514-b32c-d7618151a9f6 6805353f6bf048f9b406a1e565a13f11 dc1bc9517198484ab30d93ebd5d88c35 - - default default] [instance: 4dd1863a-3cb1-4e42-8611-ed9587a0b6ce] Start _get_guest_xml network_info=[{"id": "510079bd-5c00-4b8a-b2eb-d63f0ffc3d69", "address": "fa:16:3e:30:77:d0", "network": {"id": "dfa287bf-10c3-40fc-8071-37bb7f801357", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-1283365408-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dc1bc9517198484ab30d93ebd5d88c35", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap510079bd-5c", "ovs_interfaceid": "510079bd-5c00-4b8a-b2eb-d63f0ffc3d69", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'guest_format': None, 'device_type': 'disk', 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-82a6fabc-b798-4571-a13e-f9c67a3f9413', 'hosts': ['192.168.122.100', '192.168.122.102', '192.168.122.101'], 'ports': ['6789', '6789', '6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '82a6fabc-b798-4571-a13e-f9c67a3f9413', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': '4dd1863a-3cb1-4e42-8611-ed9587a0b6ce', 'attached_at': '', 'detached_at': '', 'volume_id': '82a6fabc-b798-4571-a13e-f9c67a3f9413', 'serial': '82a6fabc-b798-4571-a13e-f9c67a3f9413'}, 'disk_bus': 'virtio', 'boot_index': 0, 'delete_on_termination': True, 'mount_device': '/dev/vda', 'attachment_id': '1f7e2e99-fed4-47ad-9a87-fb8a1b33200d', 'volume_type': None}], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec  6 02:02:56 np0005548731 nova_compute[232433]: 2025-12-06 07:02:56.166 232437 WARNING nova.virt.libvirt.driver [None req-d7beabbb-bb09-4514-b32c-d7618151a9f6 6805353f6bf048f9b406a1e565a13f11 dc1bc9517198484ab30d93ebd5d88c35 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  6 02:02:56 np0005548731 nova_compute[232433]: 2025-12-06 07:02:56.176 232437 DEBUG nova.virt.libvirt.host [None req-d7beabbb-bb09-4514-b32c-d7618151a9f6 6805353f6bf048f9b406a1e565a13f11 dc1bc9517198484ab30d93ebd5d88c35 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec  6 02:02:56 np0005548731 nova_compute[232433]: 2025-12-06 07:02:56.177 232437 DEBUG nova.virt.libvirt.host [None req-d7beabbb-bb09-4514-b32c-d7618151a9f6 6805353f6bf048f9b406a1e565a13f11 dc1bc9517198484ab30d93ebd5d88c35 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec  6 02:02:56 np0005548731 nova_compute[232433]: 2025-12-06 07:02:56.180 232437 DEBUG nova.virt.libvirt.host [None req-d7beabbb-bb09-4514-b32c-d7618151a9f6 6805353f6bf048f9b406a1e565a13f11 dc1bc9517198484ab30d93ebd5d88c35 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec  6 02:02:56 np0005548731 nova_compute[232433]: 2025-12-06 07:02:56.181 232437 DEBUG nova.virt.libvirt.host [None req-d7beabbb-bb09-4514-b32c-d7618151a9f6 6805353f6bf048f9b406a1e565a13f11 dc1bc9517198484ab30d93ebd5d88c35 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec  6 02:02:56 np0005548731 nova_compute[232433]: 2025-12-06 07:02:56.181 232437 DEBUG nova.virt.libvirt.driver [None req-d7beabbb-bb09-4514-b32c-d7618151a9f6 6805353f6bf048f9b406a1e565a13f11 dc1bc9517198484ab30d93ebd5d88c35 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec  6 02:02:56 np0005548731 nova_compute[232433]: 2025-12-06 07:02:56.182 232437 DEBUG nova.virt.hardware [None req-d7beabbb-bb09-4514-b32c-d7618151a9f6 6805353f6bf048f9b406a1e565a13f11 dc1bc9517198484ab30d93ebd5d88c35 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-06T06:56:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='25848a18-11d9-4f11-80b5-5d005675c76d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec  6 02:02:56 np0005548731 nova_compute[232433]: 2025-12-06 07:02:56.182 232437 DEBUG nova.virt.hardware [None req-d7beabbb-bb09-4514-b32c-d7618151a9f6 6805353f6bf048f9b406a1e565a13f11 dc1bc9517198484ab30d93ebd5d88c35 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec  6 02:02:56 np0005548731 nova_compute[232433]: 2025-12-06 07:02:56.182 232437 DEBUG nova.virt.hardware [None req-d7beabbb-bb09-4514-b32c-d7618151a9f6 6805353f6bf048f9b406a1e565a13f11 dc1bc9517198484ab30d93ebd5d88c35 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec  6 02:02:56 np0005548731 nova_compute[232433]: 2025-12-06 07:02:56.182 232437 DEBUG nova.virt.hardware [None req-d7beabbb-bb09-4514-b32c-d7618151a9f6 6805353f6bf048f9b406a1e565a13f11 dc1bc9517198484ab30d93ebd5d88c35 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec  6 02:02:56 np0005548731 nova_compute[232433]: 2025-12-06 07:02:56.183 232437 DEBUG nova.virt.hardware [None req-d7beabbb-bb09-4514-b32c-d7618151a9f6 6805353f6bf048f9b406a1e565a13f11 dc1bc9517198484ab30d93ebd5d88c35 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec  6 02:02:56 np0005548731 nova_compute[232433]: 2025-12-06 07:02:56.183 232437 DEBUG nova.virt.hardware [None req-d7beabbb-bb09-4514-b32c-d7618151a9f6 6805353f6bf048f9b406a1e565a13f11 dc1bc9517198484ab30d93ebd5d88c35 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec  6 02:02:56 np0005548731 nova_compute[232433]: 2025-12-06 07:02:56.183 232437 DEBUG nova.virt.hardware [None req-d7beabbb-bb09-4514-b32c-d7618151a9f6 6805353f6bf048f9b406a1e565a13f11 dc1bc9517198484ab30d93ebd5d88c35 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec  6 02:02:56 np0005548731 nova_compute[232433]: 2025-12-06 07:02:56.183 232437 DEBUG nova.virt.hardware [None req-d7beabbb-bb09-4514-b32c-d7618151a9f6 6805353f6bf048f9b406a1e565a13f11 dc1bc9517198484ab30d93ebd5d88c35 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec  6 02:02:56 np0005548731 nova_compute[232433]: 2025-12-06 07:02:56.183 232437 DEBUG nova.virt.hardware [None req-d7beabbb-bb09-4514-b32c-d7618151a9f6 6805353f6bf048f9b406a1e565a13f11 dc1bc9517198484ab30d93ebd5d88c35 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec  6 02:02:56 np0005548731 nova_compute[232433]: 2025-12-06 07:02:56.184 232437 DEBUG nova.virt.hardware [None req-d7beabbb-bb09-4514-b32c-d7618151a9f6 6805353f6bf048f9b406a1e565a13f11 dc1bc9517198484ab30d93ebd5d88c35 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec  6 02:02:56 np0005548731 nova_compute[232433]: 2025-12-06 07:02:56.184 232437 DEBUG nova.virt.hardware [None req-d7beabbb-bb09-4514-b32c-d7618151a9f6 6805353f6bf048f9b406a1e565a13f11 dc1bc9517198484ab30d93ebd5d88c35 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec  6 02:02:56 np0005548731 nova_compute[232433]: 2025-12-06 07:02:56.207 232437 DEBUG nova.storage.rbd_utils [None req-d7beabbb-bb09-4514-b32c-d7618151a9f6 6805353f6bf048f9b406a1e565a13f11 dc1bc9517198484ab30d93ebd5d88c35 - - default default] rbd image 4dd1863a-3cb1-4e42-8611-ed9587a0b6ce_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:02:56 np0005548731 nova_compute[232433]: 2025-12-06 07:02:56.211 232437 DEBUG oslo_concurrency.processutils [None req-d7beabbb-bb09-4514-b32c-d7618151a9f6 6805353f6bf048f9b406a1e565a13f11 dc1bc9517198484ab30d93ebd5d88c35 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:02:56 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  6 02:02:56 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2544000975' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  6 02:02:56 np0005548731 nova_compute[232433]: 2025-12-06 07:02:56.663 232437 DEBUG oslo_concurrency.processutils [None req-d7beabbb-bb09-4514-b32c-d7618151a9f6 6805353f6bf048f9b406a1e565a13f11 dc1bc9517198484ab30d93ebd5d88c35 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.452s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:02:56 np0005548731 nova_compute[232433]: 2025-12-06 07:02:56.685 232437 DEBUG nova.virt.libvirt.vif [None req-d7beabbb-bb09-4514-b32c-d7618151a9f6 6805353f6bf048f9b406a1e565a13f11 dc1bc9517198484ab30d93ebd5d88c35 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T07:02:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-LiveAutoBlockMigrationV225Test-server-509443515',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-liveautoblockmigrationv225test-server-509443515',id=23,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='dc1bc9517198484ab30d93ebd5d88c35',ramdisk_id='',reservation_id='r-m8zqnud8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-LiveAutoBlockMigrationV225Test-252281632',owner_user_name='tempest-LiveAutoBlockMigrationV225Test-252281632-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:02:50Z,user_data=None,user_id='6805353f6bf048f9b406a1e565a13f11',uuid=4dd1863a-3cb1-4e42-8611-ed9587a0b6ce,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "510079bd-5c00-4b8a-b2eb-d63f0ffc3d69", "address": "fa:16:3e:30:77:d0", "network": {"id": "dfa287bf-10c3-40fc-8071-37bb7f801357", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-1283365408-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dc1bc9517198484ab30d93ebd5d88c35", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap510079bd-5c", "ovs_interfaceid": "510079bd-5c00-4b8a-b2eb-d63f0ffc3d69", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec  6 02:02:56 np0005548731 nova_compute[232433]: 2025-12-06 07:02:56.686 232437 DEBUG nova.network.os_vif_util [None req-d7beabbb-bb09-4514-b32c-d7618151a9f6 6805353f6bf048f9b406a1e565a13f11 dc1bc9517198484ab30d93ebd5d88c35 - - default default] Converting VIF {"id": "510079bd-5c00-4b8a-b2eb-d63f0ffc3d69", "address": "fa:16:3e:30:77:d0", "network": {"id": "dfa287bf-10c3-40fc-8071-37bb7f801357", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-1283365408-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dc1bc9517198484ab30d93ebd5d88c35", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap510079bd-5c", "ovs_interfaceid": "510079bd-5c00-4b8a-b2eb-d63f0ffc3d69", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 02:02:56 np0005548731 nova_compute[232433]: 2025-12-06 07:02:56.686 232437 DEBUG nova.network.os_vif_util [None req-d7beabbb-bb09-4514-b32c-d7618151a9f6 6805353f6bf048f9b406a1e565a13f11 dc1bc9517198484ab30d93ebd5d88c35 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:30:77:d0,bridge_name='br-int',has_traffic_filtering=True,id=510079bd-5c00-4b8a-b2eb-d63f0ffc3d69,network=Network(dfa287bf-10c3-40fc-8071-37bb7f801357),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap510079bd-5c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 02:02:56 np0005548731 nova_compute[232433]: 2025-12-06 07:02:56.687 232437 DEBUG nova.objects.instance [None req-d7beabbb-bb09-4514-b32c-d7618151a9f6 6805353f6bf048f9b406a1e565a13f11 dc1bc9517198484ab30d93ebd5d88c35 - - default default] Lazy-loading 'pci_devices' on Instance uuid 4dd1863a-3cb1-4e42-8611-ed9587a0b6ce obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:02:56 np0005548731 nova_compute[232433]: 2025-12-06 07:02:56.699 232437 DEBUG nova.virt.libvirt.driver [None req-d7beabbb-bb09-4514-b32c-d7618151a9f6 6805353f6bf048f9b406a1e565a13f11 dc1bc9517198484ab30d93ebd5d88c35 - - default default] [instance: 4dd1863a-3cb1-4e42-8611-ed9587a0b6ce] End _get_guest_xml xml=<domain type="kvm">
Dec  6 02:02:56 np0005548731 nova_compute[232433]:  <uuid>4dd1863a-3cb1-4e42-8611-ed9587a0b6ce</uuid>
Dec  6 02:02:56 np0005548731 nova_compute[232433]:  <name>instance-00000017</name>
Dec  6 02:02:56 np0005548731 nova_compute[232433]:  <memory>131072</memory>
Dec  6 02:02:56 np0005548731 nova_compute[232433]:  <vcpu>1</vcpu>
Dec  6 02:02:56 np0005548731 nova_compute[232433]:  <metadata>
Dec  6 02:02:56 np0005548731 nova_compute[232433]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec  6 02:02:56 np0005548731 nova_compute[232433]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec  6 02:02:56 np0005548731 nova_compute[232433]:      <nova:name>tempest-LiveAutoBlockMigrationV225Test-server-509443515</nova:name>
Dec  6 02:02:56 np0005548731 nova_compute[232433]:      <nova:creationTime>2025-12-06 07:02:56</nova:creationTime>
Dec  6 02:02:56 np0005548731 nova_compute[232433]:      <nova:flavor name="m1.nano">
Dec  6 02:02:56 np0005548731 nova_compute[232433]:        <nova:memory>128</nova:memory>
Dec  6 02:02:56 np0005548731 nova_compute[232433]:        <nova:disk>1</nova:disk>
Dec  6 02:02:56 np0005548731 nova_compute[232433]:        <nova:swap>0</nova:swap>
Dec  6 02:02:56 np0005548731 nova_compute[232433]:        <nova:ephemeral>0</nova:ephemeral>
Dec  6 02:02:56 np0005548731 nova_compute[232433]:        <nova:vcpus>1</nova:vcpus>
Dec  6 02:02:56 np0005548731 nova_compute[232433]:      </nova:flavor>
Dec  6 02:02:56 np0005548731 nova_compute[232433]:      <nova:owner>
Dec  6 02:02:56 np0005548731 nova_compute[232433]:        <nova:user uuid="6805353f6bf048f9b406a1e565a13f11">tempest-LiveAutoBlockMigrationV225Test-252281632-project-member</nova:user>
Dec  6 02:02:56 np0005548731 nova_compute[232433]:        <nova:project uuid="dc1bc9517198484ab30d93ebd5d88c35">tempest-LiveAutoBlockMigrationV225Test-252281632</nova:project>
Dec  6 02:02:56 np0005548731 nova_compute[232433]:      </nova:owner>
Dec  6 02:02:56 np0005548731 nova_compute[232433]:      <nova:ports>
Dec  6 02:02:56 np0005548731 nova_compute[232433]:        <nova:port uuid="510079bd-5c00-4b8a-b2eb-d63f0ffc3d69">
Dec  6 02:02:56 np0005548731 nova_compute[232433]:          <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Dec  6 02:02:56 np0005548731 nova_compute[232433]:        </nova:port>
Dec  6 02:02:56 np0005548731 nova_compute[232433]:      </nova:ports>
Dec  6 02:02:56 np0005548731 nova_compute[232433]:    </nova:instance>
Dec  6 02:02:56 np0005548731 nova_compute[232433]:  </metadata>
Dec  6 02:02:56 np0005548731 nova_compute[232433]:  <sysinfo type="smbios">
Dec  6 02:02:56 np0005548731 nova_compute[232433]:    <system>
Dec  6 02:02:56 np0005548731 nova_compute[232433]:      <entry name="manufacturer">RDO</entry>
Dec  6 02:02:56 np0005548731 nova_compute[232433]:      <entry name="product">OpenStack Compute</entry>
Dec  6 02:02:56 np0005548731 nova_compute[232433]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec  6 02:02:56 np0005548731 nova_compute[232433]:      <entry name="serial">4dd1863a-3cb1-4e42-8611-ed9587a0b6ce</entry>
Dec  6 02:02:56 np0005548731 nova_compute[232433]:      <entry name="uuid">4dd1863a-3cb1-4e42-8611-ed9587a0b6ce</entry>
Dec  6 02:02:56 np0005548731 nova_compute[232433]:      <entry name="family">Virtual Machine</entry>
Dec  6 02:02:56 np0005548731 nova_compute[232433]:    </system>
Dec  6 02:02:56 np0005548731 nova_compute[232433]:  </sysinfo>
Dec  6 02:02:56 np0005548731 nova_compute[232433]:  <os>
Dec  6 02:02:56 np0005548731 nova_compute[232433]:    <type arch="x86_64" machine="q35">hvm</type>
Dec  6 02:02:56 np0005548731 nova_compute[232433]:    <boot dev="hd"/>
Dec  6 02:02:56 np0005548731 nova_compute[232433]:    <smbios mode="sysinfo"/>
Dec  6 02:02:56 np0005548731 nova_compute[232433]:  </os>
Dec  6 02:02:56 np0005548731 nova_compute[232433]:  <features>
Dec  6 02:02:56 np0005548731 nova_compute[232433]:    <acpi/>
Dec  6 02:02:56 np0005548731 nova_compute[232433]:    <apic/>
Dec  6 02:02:56 np0005548731 nova_compute[232433]:    <vmcoreinfo/>
Dec  6 02:02:56 np0005548731 nova_compute[232433]:  </features>
Dec  6 02:02:56 np0005548731 nova_compute[232433]:  <clock offset="utc">
Dec  6 02:02:56 np0005548731 nova_compute[232433]:    <timer name="pit" tickpolicy="delay"/>
Dec  6 02:02:56 np0005548731 nova_compute[232433]:    <timer name="rtc" tickpolicy="catchup"/>
Dec  6 02:02:56 np0005548731 nova_compute[232433]:    <timer name="hpet" present="no"/>
Dec  6 02:02:56 np0005548731 nova_compute[232433]:  </clock>
Dec  6 02:02:56 np0005548731 nova_compute[232433]:  <cpu mode="custom" match="exact">
Dec  6 02:02:56 np0005548731 nova_compute[232433]:    <model>Nehalem</model>
Dec  6 02:02:56 np0005548731 nova_compute[232433]:    <topology sockets="1" cores="1" threads="1"/>
Dec  6 02:02:56 np0005548731 nova_compute[232433]:  </cpu>
Dec  6 02:02:56 np0005548731 nova_compute[232433]:  <devices>
Dec  6 02:02:56 np0005548731 nova_compute[232433]:    <disk type="network" device="cdrom">
Dec  6 02:02:56 np0005548731 nova_compute[232433]:      <driver type="raw" cache="none"/>
Dec  6 02:02:56 np0005548731 nova_compute[232433]:      <source protocol="rbd" name="vms/4dd1863a-3cb1-4e42-8611-ed9587a0b6ce_disk.config">
Dec  6 02:02:56 np0005548731 nova_compute[232433]:        <host name="192.168.122.100" port="6789"/>
Dec  6 02:02:56 np0005548731 nova_compute[232433]:        <host name="192.168.122.102" port="6789"/>
Dec  6 02:02:56 np0005548731 nova_compute[232433]:        <host name="192.168.122.101" port="6789"/>
Dec  6 02:02:56 np0005548731 nova_compute[232433]:      </source>
Dec  6 02:02:56 np0005548731 nova_compute[232433]:      <auth username="openstack">
Dec  6 02:02:56 np0005548731 nova_compute[232433]:        <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec  6 02:02:56 np0005548731 nova_compute[232433]:      </auth>
Dec  6 02:02:56 np0005548731 nova_compute[232433]:      <target dev="sda" bus="sata"/>
Dec  6 02:02:56 np0005548731 nova_compute[232433]:    </disk>
Dec  6 02:02:56 np0005548731 nova_compute[232433]:    <disk type="network" device="disk">
Dec  6 02:02:56 np0005548731 nova_compute[232433]:      <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Dec  6 02:02:56 np0005548731 nova_compute[232433]:      <source protocol="rbd" name="volumes/volume-82a6fabc-b798-4571-a13e-f9c67a3f9413">
Dec  6 02:02:56 np0005548731 nova_compute[232433]:        <host name="192.168.122.100" port="6789"/>
Dec  6 02:02:56 np0005548731 nova_compute[232433]:        <host name="192.168.122.102" port="6789"/>
Dec  6 02:02:56 np0005548731 nova_compute[232433]:        <host name="192.168.122.101" port="6789"/>
Dec  6 02:02:56 np0005548731 nova_compute[232433]:      </source>
Dec  6 02:02:56 np0005548731 nova_compute[232433]:      <auth username="openstack">
Dec  6 02:02:56 np0005548731 nova_compute[232433]:        <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec  6 02:02:56 np0005548731 nova_compute[232433]:      </auth>
Dec  6 02:02:56 np0005548731 nova_compute[232433]:      <target dev="vda" bus="virtio"/>
Dec  6 02:02:56 np0005548731 nova_compute[232433]:      <serial>82a6fabc-b798-4571-a13e-f9c67a3f9413</serial>
Dec  6 02:02:56 np0005548731 nova_compute[232433]:    </disk>
Dec  6 02:02:56 np0005548731 nova_compute[232433]:    <interface type="ethernet">
Dec  6 02:02:56 np0005548731 nova_compute[232433]:      <mac address="fa:16:3e:30:77:d0"/>
Dec  6 02:02:56 np0005548731 nova_compute[232433]:      <model type="virtio"/>
Dec  6 02:02:56 np0005548731 nova_compute[232433]:      <driver name="vhost" rx_queue_size="512"/>
Dec  6 02:02:56 np0005548731 nova_compute[232433]:      <mtu size="1442"/>
Dec  6 02:02:56 np0005548731 nova_compute[232433]:      <target dev="tap510079bd-5c"/>
Dec  6 02:02:56 np0005548731 nova_compute[232433]:    </interface>
Dec  6 02:02:56 np0005548731 nova_compute[232433]:    <serial type="pty">
Dec  6 02:02:56 np0005548731 nova_compute[232433]:      <log file="/var/lib/nova/instances/4dd1863a-3cb1-4e42-8611-ed9587a0b6ce/console.log" append="off"/>
Dec  6 02:02:56 np0005548731 nova_compute[232433]:    </serial>
Dec  6 02:02:56 np0005548731 nova_compute[232433]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec  6 02:02:56 np0005548731 nova_compute[232433]:    <video>
Dec  6 02:02:56 np0005548731 nova_compute[232433]:      <model type="virtio"/>
Dec  6 02:02:56 np0005548731 nova_compute[232433]:    </video>
Dec  6 02:02:56 np0005548731 nova_compute[232433]:    <input type="tablet" bus="usb"/>
Dec  6 02:02:56 np0005548731 nova_compute[232433]:    <rng model="virtio">
Dec  6 02:02:56 np0005548731 nova_compute[232433]:      <backend model="random">/dev/urandom</backend>
Dec  6 02:02:56 np0005548731 nova_compute[232433]:    </rng>
Dec  6 02:02:56 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root"/>
Dec  6 02:02:56 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:02:56 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:02:56 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:02:56 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:02:56 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:02:56 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:02:56 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:02:56 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:02:56 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:02:56 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:02:56 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:02:56 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:02:56 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:02:56 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:02:56 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:02:56 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:02:56 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:02:56 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:02:56 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:02:56 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:02:56 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:02:56 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:02:56 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:02:56 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:02:56 np0005548731 nova_compute[232433]:    <controller type="usb" index="0"/>
Dec  6 02:02:56 np0005548731 nova_compute[232433]:    <memballoon model="virtio">
Dec  6 02:02:56 np0005548731 nova_compute[232433]:      <stats period="10"/>
Dec  6 02:02:56 np0005548731 nova_compute[232433]:    </memballoon>
Dec  6 02:02:56 np0005548731 nova_compute[232433]:  </devices>
Dec  6 02:02:56 np0005548731 nova_compute[232433]: </domain>
Dec  6 02:02:56 np0005548731 nova_compute[232433]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec  6 02:02:56 np0005548731 nova_compute[232433]: 2025-12-06 07:02:56.700 232437 DEBUG nova.compute.manager [None req-d7beabbb-bb09-4514-b32c-d7618151a9f6 6805353f6bf048f9b406a1e565a13f11 dc1bc9517198484ab30d93ebd5d88c35 - - default default] [instance: 4dd1863a-3cb1-4e42-8611-ed9587a0b6ce] Preparing to wait for external event network-vif-plugged-510079bd-5c00-4b8a-b2eb-d63f0ffc3d69 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec  6 02:02:56 np0005548731 nova_compute[232433]: 2025-12-06 07:02:56.701 232437 DEBUG oslo_concurrency.lockutils [None req-d7beabbb-bb09-4514-b32c-d7618151a9f6 6805353f6bf048f9b406a1e565a13f11 dc1bc9517198484ab30d93ebd5d88c35 - - default default] Acquiring lock "4dd1863a-3cb1-4e42-8611-ed9587a0b6ce-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:02:56 np0005548731 nova_compute[232433]: 2025-12-06 07:02:56.701 232437 DEBUG oslo_concurrency.lockutils [None req-d7beabbb-bb09-4514-b32c-d7618151a9f6 6805353f6bf048f9b406a1e565a13f11 dc1bc9517198484ab30d93ebd5d88c35 - - default default] Lock "4dd1863a-3cb1-4e42-8611-ed9587a0b6ce-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:02:56 np0005548731 nova_compute[232433]: 2025-12-06 07:02:56.701 232437 DEBUG oslo_concurrency.lockutils [None req-d7beabbb-bb09-4514-b32c-d7618151a9f6 6805353f6bf048f9b406a1e565a13f11 dc1bc9517198484ab30d93ebd5d88c35 - - default default] Lock "4dd1863a-3cb1-4e42-8611-ed9587a0b6ce-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:02:56 np0005548731 nova_compute[232433]: 2025-12-06 07:02:56.702 232437 DEBUG nova.virt.libvirt.vif [None req-d7beabbb-bb09-4514-b32c-d7618151a9f6 6805353f6bf048f9b406a1e565a13f11 dc1bc9517198484ab30d93ebd5d88c35 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T07:02:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-LiveAutoBlockMigrationV225Test-server-509443515',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-liveautoblockmigrationv225test-server-509443515',id=23,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='dc1bc9517198484ab30d93ebd5d88c35',ramdisk_id='',reservation_id='r-m8zqnud8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-LiveAutoBlockMigrationV225Test-252281632',owner_user_name='tempest-LiveAutoBlockMigrationV225Test-252281632-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=
None,updated_at=2025-12-06T07:02:50Z,user_data=None,user_id='6805353f6bf048f9b406a1e565a13f11',uuid=4dd1863a-3cb1-4e42-8611-ed9587a0b6ce,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "510079bd-5c00-4b8a-b2eb-d63f0ffc3d69", "address": "fa:16:3e:30:77:d0", "network": {"id": "dfa287bf-10c3-40fc-8071-37bb7f801357", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-1283365408-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dc1bc9517198484ab30d93ebd5d88c35", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap510079bd-5c", "ovs_interfaceid": "510079bd-5c00-4b8a-b2eb-d63f0ffc3d69", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec  6 02:02:56 np0005548731 nova_compute[232433]: 2025-12-06 07:02:56.702 232437 DEBUG nova.network.os_vif_util [None req-d7beabbb-bb09-4514-b32c-d7618151a9f6 6805353f6bf048f9b406a1e565a13f11 dc1bc9517198484ab30d93ebd5d88c35 - - default default] Converting VIF {"id": "510079bd-5c00-4b8a-b2eb-d63f0ffc3d69", "address": "fa:16:3e:30:77:d0", "network": {"id": "dfa287bf-10c3-40fc-8071-37bb7f801357", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-1283365408-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dc1bc9517198484ab30d93ebd5d88c35", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap510079bd-5c", "ovs_interfaceid": "510079bd-5c00-4b8a-b2eb-d63f0ffc3d69", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 02:02:56 np0005548731 nova_compute[232433]: 2025-12-06 07:02:56.702 232437 DEBUG nova.network.os_vif_util [None req-d7beabbb-bb09-4514-b32c-d7618151a9f6 6805353f6bf048f9b406a1e565a13f11 dc1bc9517198484ab30d93ebd5d88c35 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:30:77:d0,bridge_name='br-int',has_traffic_filtering=True,id=510079bd-5c00-4b8a-b2eb-d63f0ffc3d69,network=Network(dfa287bf-10c3-40fc-8071-37bb7f801357),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap510079bd-5c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 02:02:56 np0005548731 nova_compute[232433]: 2025-12-06 07:02:56.703 232437 DEBUG os_vif [None req-d7beabbb-bb09-4514-b32c-d7618151a9f6 6805353f6bf048f9b406a1e565a13f11 dc1bc9517198484ab30d93ebd5d88c35 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:30:77:d0,bridge_name='br-int',has_traffic_filtering=True,id=510079bd-5c00-4b8a-b2eb-d63f0ffc3d69,network=Network(dfa287bf-10c3-40fc-8071-37bb7f801357),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap510079bd-5c') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec  6 02:02:56 np0005548731 nova_compute[232433]: 2025-12-06 07:02:56.703 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:02:56 np0005548731 nova_compute[232433]: 2025-12-06 07:02:56.704 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:02:56 np0005548731 nova_compute[232433]: 2025-12-06 07:02:56.704 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  6 02:02:56 np0005548731 nova_compute[232433]: 2025-12-06 07:02:56.708 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:02:56 np0005548731 nova_compute[232433]: 2025-12-06 07:02:56.708 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap510079bd-5c, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:02:56 np0005548731 nova_compute[232433]: 2025-12-06 07:02:56.708 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap510079bd-5c, col_values=(('external_ids', {'iface-id': '510079bd-5c00-4b8a-b2eb-d63f0ffc3d69', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:30:77:d0', 'vm-uuid': '4dd1863a-3cb1-4e42-8611-ed9587a0b6ce'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:02:56 np0005548731 NetworkManager[49182]: <info>  [1765004576.7108] manager: (tap510079bd-5c): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/49)
Dec  6 02:02:56 np0005548731 nova_compute[232433]: 2025-12-06 07:02:56.712 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  6 02:02:56 np0005548731 nova_compute[232433]: 2025-12-06 07:02:56.716 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:02:56 np0005548731 nova_compute[232433]: 2025-12-06 07:02:56.716 232437 INFO os_vif [None req-d7beabbb-bb09-4514-b32c-d7618151a9f6 6805353f6bf048f9b406a1e565a13f11 dc1bc9517198484ab30d93ebd5d88c35 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:30:77:d0,bridge_name='br-int',has_traffic_filtering=True,id=510079bd-5c00-4b8a-b2eb-d63f0ffc3d69,network=Network(dfa287bf-10c3-40fc-8071-37bb7f801357),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap510079bd-5c')#033[00m
Dec  6 02:02:56 np0005548731 nova_compute[232433]: 2025-12-06 07:02:56.776 232437 DEBUG nova.virt.libvirt.driver [None req-d7beabbb-bb09-4514-b32c-d7618151a9f6 6805353f6bf048f9b406a1e565a13f11 dc1bc9517198484ab30d93ebd5d88c35 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  6 02:02:56 np0005548731 nova_compute[232433]: 2025-12-06 07:02:56.776 232437 DEBUG nova.virt.libvirt.driver [None req-d7beabbb-bb09-4514-b32c-d7618151a9f6 6805353f6bf048f9b406a1e565a13f11 dc1bc9517198484ab30d93ebd5d88c35 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  6 02:02:56 np0005548731 nova_compute[232433]: 2025-12-06 07:02:56.777 232437 DEBUG nova.virt.libvirt.driver [None req-d7beabbb-bb09-4514-b32c-d7618151a9f6 6805353f6bf048f9b406a1e565a13f11 dc1bc9517198484ab30d93ebd5d88c35 - - default default] No VIF found with MAC fa:16:3e:30:77:d0, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec  6 02:02:56 np0005548731 nova_compute[232433]: 2025-12-06 07:02:56.777 232437 INFO nova.virt.libvirt.driver [None req-d7beabbb-bb09-4514-b32c-d7618151a9f6 6805353f6bf048f9b406a1e565a13f11 dc1bc9517198484ab30d93ebd5d88c35 - - default default] [instance: 4dd1863a-3cb1-4e42-8611-ed9587a0b6ce] Using config drive#033[00m
Dec  6 02:02:56 np0005548731 nova_compute[232433]: 2025-12-06 07:02:56.798 232437 DEBUG nova.storage.rbd_utils [None req-d7beabbb-bb09-4514-b32c-d7618151a9f6 6805353f6bf048f9b406a1e565a13f11 dc1bc9517198484ab30d93ebd5d88c35 - - default default] rbd image 4dd1863a-3cb1-4e42-8611-ed9587a0b6ce_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:02:57 np0005548731 nova_compute[232433]: 2025-12-06 07:02:57.421 232437 INFO nova.virt.libvirt.driver [None req-d7beabbb-bb09-4514-b32c-d7618151a9f6 6805353f6bf048f9b406a1e565a13f11 dc1bc9517198484ab30d93ebd5d88c35 - - default default] [instance: 4dd1863a-3cb1-4e42-8611-ed9587a0b6ce] Creating config drive at /var/lib/nova/instances/4dd1863a-3cb1-4e42-8611-ed9587a0b6ce/disk.config#033[00m
Dec  6 02:02:57 np0005548731 nova_compute[232433]: 2025-12-06 07:02:57.430 232437 DEBUG oslo_concurrency.processutils [None req-d7beabbb-bb09-4514-b32c-d7618151a9f6 6805353f6bf048f9b406a1e565a13f11 dc1bc9517198484ab30d93ebd5d88c35 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/4dd1863a-3cb1-4e42-8611-ed9587a0b6ce/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpkkxme8rx execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:02:57 np0005548731 nova_compute[232433]: 2025-12-06 07:02:57.565 232437 DEBUG oslo_concurrency.processutils [None req-d7beabbb-bb09-4514-b32c-d7618151a9f6 6805353f6bf048f9b406a1e565a13f11 dc1bc9517198484ab30d93ebd5d88c35 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/4dd1863a-3cb1-4e42-8611-ed9587a0b6ce/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpkkxme8rx" returned: 0 in 0.134s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:02:57 np0005548731 nova_compute[232433]: 2025-12-06 07:02:57.600 232437 DEBUG nova.storage.rbd_utils [None req-d7beabbb-bb09-4514-b32c-d7618151a9f6 6805353f6bf048f9b406a1e565a13f11 dc1bc9517198484ab30d93ebd5d88c35 - - default default] rbd image 4dd1863a-3cb1-4e42-8611-ed9587a0b6ce_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:02:57 np0005548731 nova_compute[232433]: 2025-12-06 07:02:57.604 232437 DEBUG oslo_concurrency.processutils [None req-d7beabbb-bb09-4514-b32c-d7618151a9f6 6805353f6bf048f9b406a1e565a13f11 dc1bc9517198484ab30d93ebd5d88c35 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/4dd1863a-3cb1-4e42-8611-ed9587a0b6ce/disk.config 4dd1863a-3cb1-4e42-8611-ed9587a0b6ce_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:02:57 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:02:57 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:02:57 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:02:57.611 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:02:57 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:02:57 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:02:57 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:02:57.673 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:02:58 np0005548731 nova_compute[232433]: 2025-12-06 07:02:58.021 232437 DEBUG oslo_concurrency.processutils [None req-d7beabbb-bb09-4514-b32c-d7618151a9f6 6805353f6bf048f9b406a1e565a13f11 dc1bc9517198484ab30d93ebd5d88c35 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/4dd1863a-3cb1-4e42-8611-ed9587a0b6ce/disk.config 4dd1863a-3cb1-4e42-8611-ed9587a0b6ce_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.416s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:02:58 np0005548731 nova_compute[232433]: 2025-12-06 07:02:58.022 232437 INFO nova.virt.libvirt.driver [None req-d7beabbb-bb09-4514-b32c-d7618151a9f6 6805353f6bf048f9b406a1e565a13f11 dc1bc9517198484ab30d93ebd5d88c35 - - default default] [instance: 4dd1863a-3cb1-4e42-8611-ed9587a0b6ce] Deleting local config drive /var/lib/nova/instances/4dd1863a-3cb1-4e42-8611-ed9587a0b6ce/disk.config because it was imported into RBD.#033[00m
Dec  6 02:02:58 np0005548731 kernel: tap510079bd-5c: entered promiscuous mode
Dec  6 02:02:58 np0005548731 NetworkManager[49182]: <info>  [1765004578.0811] manager: (tap510079bd-5c): new Tun device (/org/freedesktop/NetworkManager/Devices/50)
Dec  6 02:02:58 np0005548731 ovn_controller[133927]: 2025-12-06T07:02:58Z|00070|binding|INFO|Claiming lport 510079bd-5c00-4b8a-b2eb-d63f0ffc3d69 for this chassis.
Dec  6 02:02:58 np0005548731 ovn_controller[133927]: 2025-12-06T07:02:58Z|00071|binding|INFO|510079bd-5c00-4b8a-b2eb-d63f0ffc3d69: Claiming fa:16:3e:30:77:d0 10.100.0.8
Dec  6 02:02:58 np0005548731 nova_compute[232433]: 2025-12-06 07:02:58.083 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:02:58 np0005548731 nova_compute[232433]: 2025-12-06 07:02:58.088 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:02:58 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:02:58.096 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:30:77:d0 10.100.0.8'], port_security=['fa:16:3e:30:77:d0 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '4dd1863a-3cb1-4e42-8611-ed9587a0b6ce', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-dfa287bf-10c3-40fc-8071-37bb7f801357', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'dc1bc9517198484ab30d93ebd5d88c35', 'neutron:revision_number': '2', 'neutron:security_group_ids': '0f21756f-764b-4e57-8875-8daafaed0f4c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7640d9cc-b332-470c-9f54-e0b0e119f55f, chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], logical_port=510079bd-5c00-4b8a-b2eb-d63f0ffc3d69) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 02:02:58 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:02:58.097 143965 INFO neutron.agent.ovn.metadata.agent [-] Port 510079bd-5c00-4b8a-b2eb-d63f0ffc3d69 in datapath dfa287bf-10c3-40fc-8071-37bb7f801357 bound to our chassis#033[00m
Dec  6 02:02:58 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:02:58.099 143965 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network dfa287bf-10c3-40fc-8071-37bb7f801357#033[00m
Dec  6 02:02:58 np0005548731 systemd-udevd[243220]: Network interface NamePolicy= disabled on kernel command line.
Dec  6 02:02:58 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:02:58.117 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[087cdaf5-2c36-47b5-b62f-20b8194517b4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:02:58 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:02:58.118 143965 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapdfa287bf-11 in ovnmeta-dfa287bf-10c3-40fc-8071-37bb7f801357 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Dec  6 02:02:58 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:02:58.121 236668 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapdfa287bf-10 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Dec  6 02:02:58 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:02:58.121 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[b7f1ad5b-6581-4e10-911e-bbfcefd3ebcf]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:02:58 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:02:58.122 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[5f50452b-fa5d-4ecd-ade4-69a1922dbc30]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:02:58 np0005548731 systemd-machined[195355]: New machine qemu-10-instance-00000017.
Dec  6 02:02:58 np0005548731 NetworkManager[49182]: <info>  [1765004578.1323] device (tap510079bd-5c): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec  6 02:02:58 np0005548731 NetworkManager[49182]: <info>  [1765004578.1331] device (tap510079bd-5c): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec  6 02:02:58 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:02:58.137 144079 DEBUG oslo.privsep.daemon [-] privsep: reply[eaedfc07-1606-4e57-ada1-cfc9051d3d2b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:02:58 np0005548731 systemd[1]: Started Virtual Machine qemu-10-instance-00000017.
Dec  6 02:02:58 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:02:58.157 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[2be618e0-2c51-47ec-a8f1-b0bdd9e1142e]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:02:58 np0005548731 nova_compute[232433]: 2025-12-06 07:02:58.156 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:02:58 np0005548731 ovn_controller[133927]: 2025-12-06T07:02:58Z|00072|binding|INFO|Setting lport 510079bd-5c00-4b8a-b2eb-d63f0ffc3d69 ovn-installed in OVS
Dec  6 02:02:58 np0005548731 ovn_controller[133927]: 2025-12-06T07:02:58Z|00073|binding|INFO|Setting lport 510079bd-5c00-4b8a-b2eb-d63f0ffc3d69 up in Southbound
Dec  6 02:02:58 np0005548731 nova_compute[232433]: 2025-12-06 07:02:58.165 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:02:58 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:02:58.196 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[b75203db-d187-419e-a645-a8652628d8ff]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:02:58 np0005548731 NetworkManager[49182]: <info>  [1765004578.2045] manager: (tapdfa287bf-10): new Veth device (/org/freedesktop/NetworkManager/Devices/51)
Dec  6 02:02:58 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:02:58.203 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[0c2672a6-9a3c-4a43-956f-5a2e58aa2670]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:02:58 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:02:58.245 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[aa668c02-5a82-4b49-af16-8e217ac4fb27]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:02:58 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:02:58.249 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[23dceb9d-c58e-4d7c-9d1e-e737729227c7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:02:58 np0005548731 nova_compute[232433]: 2025-12-06 07:02:58.250 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:02:58 np0005548731 NetworkManager[49182]: <info>  [1765004578.2832] device (tapdfa287bf-10): carrier: link connected
Dec  6 02:02:58 np0005548731 nova_compute[232433]: 2025-12-06 07:02:58.298 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Triggering sync for uuid 646ce79a-a40a-4b2a-9af5-90c98c98b72a _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m
Dec  6 02:02:58 np0005548731 nova_compute[232433]: 2025-12-06 07:02:58.299 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Triggering sync for uuid 4dd1863a-3cb1-4e42-8611-ed9587a0b6ce _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m
Dec  6 02:02:58 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:02:58.297 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[5a2d6902-d363-4691-8e56-8ce1117ccd9e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:02:58 np0005548731 nova_compute[232433]: 2025-12-06 07:02:58.299 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Acquiring lock "646ce79a-a40a-4b2a-9af5-90c98c98b72a" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:02:58 np0005548731 nova_compute[232433]: 2025-12-06 07:02:58.299 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "646ce79a-a40a-4b2a-9af5-90c98c98b72a" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:02:58 np0005548731 nova_compute[232433]: 2025-12-06 07:02:58.300 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Acquiring lock "4dd1863a-3cb1-4e42-8611-ed9587a0b6ce" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:02:58 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:02:58.319 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[d707641f-9187-48e7-acf5-d180163d1b79]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapdfa287bf-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:55:a3:41'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 27], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 484028, 'reachable_time': 32253, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 243254, 'error': None, 'target': 'ovnmeta-dfa287bf-10c3-40fc-8071-37bb7f801357', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:02:58 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:02:58.343 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[65fe1e77-0b36-410d-acf8-c488d633e971]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe55:a341'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 484028, 'tstamp': 484028}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 243255, 'error': None, 'target': 'ovnmeta-dfa287bf-10c3-40fc-8071-37bb7f801357', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:02:58 np0005548731 nova_compute[232433]: 2025-12-06 07:02:58.345 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "646ce79a-a40a-4b2a-9af5-90c98c98b72a" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.046s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:02:58 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:02:58.362 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[4b0bde15-eeee-4682-883c-8a8186bf5530]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapdfa287bf-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:55:a3:41'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 27], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 484028, 'reachable_time': 32253, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 243256, 'error': None, 'target': 'ovnmeta-dfa287bf-10c3-40fc-8071-37bb7f801357', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:02:58 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:02:58.403 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[ebbc9c7c-606d-4455-ac0c-1df1c2a97a8d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:02:58 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:02:58.472 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[453ba01e-d603-4b04-9979-bb4c6d2175cb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:02:58 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:02:58.473 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapdfa287bf-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:02:58 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:02:58.473 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  6 02:02:58 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:02:58.474 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapdfa287bf-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:02:58 np0005548731 kernel: tapdfa287bf-10: entered promiscuous mode
Dec  6 02:02:58 np0005548731 nova_compute[232433]: 2025-12-06 07:02:58.475 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:02:58 np0005548731 NetworkManager[49182]: <info>  [1765004578.4761] manager: (tapdfa287bf-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/52)
Dec  6 02:02:58 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:02:58.483 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapdfa287bf-10, col_values=(('external_ids', {'iface-id': 'a8b489de-cf80-4c12-869a-5e807cdbba8c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:02:58 np0005548731 nova_compute[232433]: 2025-12-06 07:02:58.484 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:02:58 np0005548731 ovn_controller[133927]: 2025-12-06T07:02:58Z|00074|binding|INFO|Releasing lport a8b489de-cf80-4c12-869a-5e807cdbba8c from this chassis (sb_readonly=0)
Dec  6 02:02:58 np0005548731 nova_compute[232433]: 2025-12-06 07:02:58.496 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:02:58 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:02:58.497 143965 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/dfa287bf-10c3-40fc-8071-37bb7f801357.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/dfa287bf-10c3-40fc-8071-37bb7f801357.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Dec  6 02:02:58 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:02:58.498 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[1cb3174f-fc1d-4157-b75f-d25ddfa96a20]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:02:58 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:02:58.499 143965 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec  6 02:02:58 np0005548731 ovn_metadata_agent[143960]: global
Dec  6 02:02:58 np0005548731 ovn_metadata_agent[143960]:    log         /dev/log local0 debug
Dec  6 02:02:58 np0005548731 ovn_metadata_agent[143960]:    log-tag     haproxy-metadata-proxy-dfa287bf-10c3-40fc-8071-37bb7f801357
Dec  6 02:02:58 np0005548731 ovn_metadata_agent[143960]:    user        root
Dec  6 02:02:58 np0005548731 ovn_metadata_agent[143960]:    group       root
Dec  6 02:02:58 np0005548731 ovn_metadata_agent[143960]:    maxconn     1024
Dec  6 02:02:58 np0005548731 ovn_metadata_agent[143960]:    pidfile     /var/lib/neutron/external/pids/dfa287bf-10c3-40fc-8071-37bb7f801357.pid.haproxy
Dec  6 02:02:58 np0005548731 ovn_metadata_agent[143960]:    daemon
Dec  6 02:02:58 np0005548731 ovn_metadata_agent[143960]: 
Dec  6 02:02:58 np0005548731 ovn_metadata_agent[143960]: defaults
Dec  6 02:02:58 np0005548731 ovn_metadata_agent[143960]:    log global
Dec  6 02:02:58 np0005548731 ovn_metadata_agent[143960]:    mode http
Dec  6 02:02:58 np0005548731 ovn_metadata_agent[143960]:    option httplog
Dec  6 02:02:58 np0005548731 ovn_metadata_agent[143960]:    option dontlognull
Dec  6 02:02:58 np0005548731 ovn_metadata_agent[143960]:    option http-server-close
Dec  6 02:02:58 np0005548731 ovn_metadata_agent[143960]:    option forwardfor
Dec  6 02:02:58 np0005548731 ovn_metadata_agent[143960]:    retries                 3
Dec  6 02:02:58 np0005548731 ovn_metadata_agent[143960]:    timeout http-request    30s
Dec  6 02:02:58 np0005548731 ovn_metadata_agent[143960]:    timeout connect         30s
Dec  6 02:02:58 np0005548731 ovn_metadata_agent[143960]:    timeout client          32s
Dec  6 02:02:58 np0005548731 ovn_metadata_agent[143960]:    timeout server          32s
Dec  6 02:02:58 np0005548731 ovn_metadata_agent[143960]:    timeout http-keep-alive 30s
Dec  6 02:02:58 np0005548731 ovn_metadata_agent[143960]: 
Dec  6 02:02:58 np0005548731 ovn_metadata_agent[143960]: 
Dec  6 02:02:58 np0005548731 ovn_metadata_agent[143960]: listen listener
Dec  6 02:02:58 np0005548731 ovn_metadata_agent[143960]:    bind 169.254.169.254:80
Dec  6 02:02:58 np0005548731 ovn_metadata_agent[143960]:    server metadata /var/lib/neutron/metadata_proxy
Dec  6 02:02:58 np0005548731 ovn_metadata_agent[143960]:    http-request add-header X-OVN-Network-ID dfa287bf-10c3-40fc-8071-37bb7f801357
Dec  6 02:02:58 np0005548731 ovn_metadata_agent[143960]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Dec  6 02:02:58 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:02:58.499 143965 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-dfa287bf-10c3-40fc-8071-37bb7f801357', 'env', 'PROCESS_TAG=haproxy-dfa287bf-10c3-40fc-8071-37bb7f801357', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/dfa287bf-10c3-40fc-8071-37bb7f801357.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Dec  6 02:02:58 np0005548731 nova_compute[232433]: 2025-12-06 07:02:58.521 232437 DEBUG nova.network.neutron [req-13b71c1d-fd7d-4552-b74c-df26865e6d1d req-23f00ed3-afcb-4804-bcbc-715894342732 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 4dd1863a-3cb1-4e42-8611-ed9587a0b6ce] Updated VIF entry in instance network info cache for port 510079bd-5c00-4b8a-b2eb-d63f0ffc3d69. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  6 02:02:58 np0005548731 nova_compute[232433]: 2025-12-06 07:02:58.522 232437 DEBUG nova.network.neutron [req-13b71c1d-fd7d-4552-b74c-df26865e6d1d req-23f00ed3-afcb-4804-bcbc-715894342732 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 4dd1863a-3cb1-4e42-8611-ed9587a0b6ce] Updating instance_info_cache with network_info: [{"id": "510079bd-5c00-4b8a-b2eb-d63f0ffc3d69", "address": "fa:16:3e:30:77:d0", "network": {"id": "dfa287bf-10c3-40fc-8071-37bb7f801357", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-1283365408-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dc1bc9517198484ab30d93ebd5d88c35", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap510079bd-5c", "ovs_interfaceid": "510079bd-5c00-4b8a-b2eb-d63f0ffc3d69", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 02:02:58 np0005548731 nova_compute[232433]: 2025-12-06 07:02:58.558 232437 DEBUG oslo_concurrency.lockutils [req-13b71c1d-fd7d-4552-b74c-df26865e6d1d req-23f00ed3-afcb-4804-bcbc-715894342732 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-4dd1863a-3cb1-4e42-8611-ed9587a0b6ce" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 02:02:58 np0005548731 nova_compute[232433]: 2025-12-06 07:02:58.725 232437 DEBUG nova.virt.driver [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Emitting event <LifecycleEvent: 1765004578.7249343, 4dd1863a-3cb1-4e42-8611-ed9587a0b6ce => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 02:02:58 np0005548731 nova_compute[232433]: 2025-12-06 07:02:58.726 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 4dd1863a-3cb1-4e42-8611-ed9587a0b6ce] VM Started (Lifecycle Event)#033[00m
Dec  6 02:02:58 np0005548731 nova_compute[232433]: 2025-12-06 07:02:58.842 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 4dd1863a-3cb1-4e42-8611-ed9587a0b6ce] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:02:58 np0005548731 nova_compute[232433]: 2025-12-06 07:02:58.846 232437 DEBUG nova.virt.driver [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Emitting event <LifecycleEvent: 1765004578.7261155, 4dd1863a-3cb1-4e42-8611-ed9587a0b6ce => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 02:02:58 np0005548731 nova_compute[232433]: 2025-12-06 07:02:58.847 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 4dd1863a-3cb1-4e42-8611-ed9587a0b6ce] VM Paused (Lifecycle Event)#033[00m
Dec  6 02:02:58 np0005548731 nova_compute[232433]: 2025-12-06 07:02:58.871 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 4dd1863a-3cb1-4e42-8611-ed9587a0b6ce] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:02:58 np0005548731 nova_compute[232433]: 2025-12-06 07:02:58.874 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 4dd1863a-3cb1-4e42-8611-ed9587a0b6ce] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  6 02:02:58 np0005548731 podman[243330]: 2025-12-06 07:02:58.891791619 +0000 UTC m=+0.050527538 container create 4ca0db57b70a2363087ccb115bca9f8655560bacb4b33a2c652e921590a21e08 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-dfa287bf-10c3-40fc-8071-37bb7f801357, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true)
Dec  6 02:02:58 np0005548731 nova_compute[232433]: 2025-12-06 07:02:58.901 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 4dd1863a-3cb1-4e42-8611-ed9587a0b6ce] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  6 02:02:58 np0005548731 systemd[1]: Started libpod-conmon-4ca0db57b70a2363087ccb115bca9f8655560bacb4b33a2c652e921590a21e08.scope.
Dec  6 02:02:58 np0005548731 systemd[1]: Started libcrun container.
Dec  6 02:02:58 np0005548731 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8b778ffe9e87bfe2430b648dce6006328d764b6ad545f38b39394c2d9f58d7ef/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec  6 02:02:58 np0005548731 podman[243330]: 2025-12-06 07:02:58.865381832 +0000 UTC m=+0.024117771 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec  6 02:02:58 np0005548731 podman[243330]: 2025-12-06 07:02:58.981333155 +0000 UTC m=+0.140069104 container init 4ca0db57b70a2363087ccb115bca9f8655560bacb4b33a2c652e921590a21e08 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-dfa287bf-10c3-40fc-8071-37bb7f801357, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  6 02:02:58 np0005548731 podman[243330]: 2025-12-06 07:02:58.98650883 +0000 UTC m=+0.145244749 container start 4ca0db57b70a2363087ccb115bca9f8655560bacb4b33a2c652e921590a21e08 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-dfa287bf-10c3-40fc-8071-37bb7f801357, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Dec  6 02:02:59 np0005548731 neutron-haproxy-ovnmeta-dfa287bf-10c3-40fc-8071-37bb7f801357[243345]: [NOTICE]   (243349) : New worker (243351) forked
Dec  6 02:02:59 np0005548731 neutron-haproxy-ovnmeta-dfa287bf-10c3-40fc-8071-37bb7f801357[243345]: [NOTICE]   (243349) : Loading success.
Dec  6 02:02:59 np0005548731 nova_compute[232433]: 2025-12-06 07:02:59.489 232437 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765004564.488659, 2341fd69-a672-42e2-834b-0f7b8269c7ef => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 02:02:59 np0005548731 nova_compute[232433]: 2025-12-06 07:02:59.490 232437 INFO nova.compute.manager [-] [instance: 2341fd69-a672-42e2-834b-0f7b8269c7ef] VM Stopped (Lifecycle Event)#033[00m
Dec  6 02:02:59 np0005548731 nova_compute[232433]: 2025-12-06 07:02:59.560 232437 DEBUG nova.compute.manager [None req-b0c3bb59-480d-4502-8231-8b9a3503e3ed - - - - - -] [instance: 2341fd69-a672-42e2-834b-0f7b8269c7ef] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:02:59 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:02:59 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 02:02:59 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:02:59.613 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 02:02:59 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:02:59 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:02:59 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:02:59.675 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:02:59 np0005548731 nova_compute[232433]: 2025-12-06 07:02:59.884 232437 DEBUG nova.compute.manager [req-a28044ba-5ef8-45ce-8bff-95d6ef18e7ef req-4d1bb4b9-5b0b-48bf-832d-8bf8d4a8ce8e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 4dd1863a-3cb1-4e42-8611-ed9587a0b6ce] Received event network-vif-plugged-510079bd-5c00-4b8a-b2eb-d63f0ffc3d69 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:02:59 np0005548731 nova_compute[232433]: 2025-12-06 07:02:59.884 232437 DEBUG oslo_concurrency.lockutils [req-a28044ba-5ef8-45ce-8bff-95d6ef18e7ef req-4d1bb4b9-5b0b-48bf-832d-8bf8d4a8ce8e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "4dd1863a-3cb1-4e42-8611-ed9587a0b6ce-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:02:59 np0005548731 nova_compute[232433]: 2025-12-06 07:02:59.885 232437 DEBUG oslo_concurrency.lockutils [req-a28044ba-5ef8-45ce-8bff-95d6ef18e7ef req-4d1bb4b9-5b0b-48bf-832d-8bf8d4a8ce8e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "4dd1863a-3cb1-4e42-8611-ed9587a0b6ce-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:02:59 np0005548731 nova_compute[232433]: 2025-12-06 07:02:59.885 232437 DEBUG oslo_concurrency.lockutils [req-a28044ba-5ef8-45ce-8bff-95d6ef18e7ef req-4d1bb4b9-5b0b-48bf-832d-8bf8d4a8ce8e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "4dd1863a-3cb1-4e42-8611-ed9587a0b6ce-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:02:59 np0005548731 nova_compute[232433]: 2025-12-06 07:02:59.885 232437 DEBUG nova.compute.manager [req-a28044ba-5ef8-45ce-8bff-95d6ef18e7ef req-4d1bb4b9-5b0b-48bf-832d-8bf8d4a8ce8e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 4dd1863a-3cb1-4e42-8611-ed9587a0b6ce] Processing event network-vif-plugged-510079bd-5c00-4b8a-b2eb-d63f0ffc3d69 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Dec  6 02:02:59 np0005548731 nova_compute[232433]: 2025-12-06 07:02:59.886 232437 DEBUG nova.compute.manager [None req-d7beabbb-bb09-4514-b32c-d7618151a9f6 6805353f6bf048f9b406a1e565a13f11 dc1bc9517198484ab30d93ebd5d88c35 - - default default] [instance: 4dd1863a-3cb1-4e42-8611-ed9587a0b6ce] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec  6 02:02:59 np0005548731 nova_compute[232433]: 2025-12-06 07:02:59.889 232437 DEBUG nova.virt.driver [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Emitting event <LifecycleEvent: 1765004579.8892086, 4dd1863a-3cb1-4e42-8611-ed9587a0b6ce => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 02:02:59 np0005548731 nova_compute[232433]: 2025-12-06 07:02:59.890 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 4dd1863a-3cb1-4e42-8611-ed9587a0b6ce] VM Resumed (Lifecycle Event)#033[00m
Dec  6 02:02:59 np0005548731 nova_compute[232433]: 2025-12-06 07:02:59.891 232437 DEBUG nova.virt.libvirt.driver [None req-d7beabbb-bb09-4514-b32c-d7618151a9f6 6805353f6bf048f9b406a1e565a13f11 dc1bc9517198484ab30d93ebd5d88c35 - - default default] [instance: 4dd1863a-3cb1-4e42-8611-ed9587a0b6ce] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Dec  6 02:02:59 np0005548731 nova_compute[232433]: 2025-12-06 07:02:59.894 232437 INFO nova.virt.libvirt.driver [-] [instance: 4dd1863a-3cb1-4e42-8611-ed9587a0b6ce] Instance spawned successfully.#033[00m
Dec  6 02:02:59 np0005548731 nova_compute[232433]: 2025-12-06 07:02:59.894 232437 DEBUG nova.virt.libvirt.driver [None req-d7beabbb-bb09-4514-b32c-d7618151a9f6 6805353f6bf048f9b406a1e565a13f11 dc1bc9517198484ab30d93ebd5d88c35 - - default default] [instance: 4dd1863a-3cb1-4e42-8611-ed9587a0b6ce] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Dec  6 02:02:59 np0005548731 nova_compute[232433]: 2025-12-06 07:02:59.917 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 4dd1863a-3cb1-4e42-8611-ed9587a0b6ce] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:02:59 np0005548731 nova_compute[232433]: 2025-12-06 07:02:59.923 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 4dd1863a-3cb1-4e42-8611-ed9587a0b6ce] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  6 02:02:59 np0005548731 nova_compute[232433]: 2025-12-06 07:02:59.926 232437 DEBUG nova.virt.libvirt.driver [None req-d7beabbb-bb09-4514-b32c-d7618151a9f6 6805353f6bf048f9b406a1e565a13f11 dc1bc9517198484ab30d93ebd5d88c35 - - default default] [instance: 4dd1863a-3cb1-4e42-8611-ed9587a0b6ce] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 02:02:59 np0005548731 nova_compute[232433]: 2025-12-06 07:02:59.927 232437 DEBUG nova.virt.libvirt.driver [None req-d7beabbb-bb09-4514-b32c-d7618151a9f6 6805353f6bf048f9b406a1e565a13f11 dc1bc9517198484ab30d93ebd5d88c35 - - default default] [instance: 4dd1863a-3cb1-4e42-8611-ed9587a0b6ce] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 02:02:59 np0005548731 nova_compute[232433]: 2025-12-06 07:02:59.927 232437 DEBUG nova.virt.libvirt.driver [None req-d7beabbb-bb09-4514-b32c-d7618151a9f6 6805353f6bf048f9b406a1e565a13f11 dc1bc9517198484ab30d93ebd5d88c35 - - default default] [instance: 4dd1863a-3cb1-4e42-8611-ed9587a0b6ce] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 02:02:59 np0005548731 nova_compute[232433]: 2025-12-06 07:02:59.927 232437 DEBUG nova.virt.libvirt.driver [None req-d7beabbb-bb09-4514-b32c-d7618151a9f6 6805353f6bf048f9b406a1e565a13f11 dc1bc9517198484ab30d93ebd5d88c35 - - default default] [instance: 4dd1863a-3cb1-4e42-8611-ed9587a0b6ce] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 02:02:59 np0005548731 nova_compute[232433]: 2025-12-06 07:02:59.928 232437 DEBUG nova.virt.libvirt.driver [None req-d7beabbb-bb09-4514-b32c-d7618151a9f6 6805353f6bf048f9b406a1e565a13f11 dc1bc9517198484ab30d93ebd5d88c35 - - default default] [instance: 4dd1863a-3cb1-4e42-8611-ed9587a0b6ce] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 02:02:59 np0005548731 nova_compute[232433]: 2025-12-06 07:02:59.928 232437 DEBUG nova.virt.libvirt.driver [None req-d7beabbb-bb09-4514-b32c-d7618151a9f6 6805353f6bf048f9b406a1e565a13f11 dc1bc9517198484ab30d93ebd5d88c35 - - default default] [instance: 4dd1863a-3cb1-4e42-8611-ed9587a0b6ce] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 02:02:59 np0005548731 nova_compute[232433]: 2025-12-06 07:02:59.973 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 4dd1863a-3cb1-4e42-8611-ed9587a0b6ce] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  6 02:03:00 np0005548731 nova_compute[232433]: 2025-12-06 07:03:00.022 232437 INFO nova.compute.manager [None req-d7beabbb-bb09-4514-b32c-d7618151a9f6 6805353f6bf048f9b406a1e565a13f11 dc1bc9517198484ab30d93ebd5d88c35 - - default default] [instance: 4dd1863a-3cb1-4e42-8611-ed9587a0b6ce] Took 7.12 seconds to spawn the instance on the hypervisor.#033[00m
Dec  6 02:03:00 np0005548731 nova_compute[232433]: 2025-12-06 07:03:00.023 232437 DEBUG nova.compute.manager [None req-d7beabbb-bb09-4514-b32c-d7618151a9f6 6805353f6bf048f9b406a1e565a13f11 dc1bc9517198484ab30d93ebd5d88c35 - - default default] [instance: 4dd1863a-3cb1-4e42-8611-ed9587a0b6ce] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:03:00 np0005548731 nova_compute[232433]: 2025-12-06 07:03:00.112 232437 INFO nova.compute.manager [None req-d7beabbb-bb09-4514-b32c-d7618151a9f6 6805353f6bf048f9b406a1e565a13f11 dc1bc9517198484ab30d93ebd5d88c35 - - default default] [instance: 4dd1863a-3cb1-4e42-8611-ed9587a0b6ce] Took 10.93 seconds to build instance.#033[00m
Dec  6 02:03:00 np0005548731 nova_compute[232433]: 2025-12-06 07:03:00.138 232437 DEBUG oslo_concurrency.lockutils [None req-d7beabbb-bb09-4514-b32c-d7618151a9f6 6805353f6bf048f9b406a1e565a13f11 dc1bc9517198484ab30d93ebd5d88c35 - - default default] Lock "4dd1863a-3cb1-4e42-8611-ed9587a0b6ce" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.058s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:03:00 np0005548731 nova_compute[232433]: 2025-12-06 07:03:00.138 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "4dd1863a-3cb1-4e42-8611-ed9587a0b6ce" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 1.838s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:03:00 np0005548731 nova_compute[232433]: 2025-12-06 07:03:00.138 232437 INFO nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] [instance: 4dd1863a-3cb1-4e42-8611-ed9587a0b6ce] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  6 02:03:00 np0005548731 nova_compute[232433]: 2025-12-06 07:03:00.139 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "4dd1863a-3cb1-4e42-8611-ed9587a0b6ce" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:03:00 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e168 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:03:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:03:00.847 143965 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:03:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:03:00.848 143965 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:03:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:03:00.848 143965 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:03:01 np0005548731 nova_compute[232433]: 2025-12-06 07:03:01.061 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:03:01 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:03:01 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:03:01 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:03:01.616 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:03:01 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:03:01 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 02:03:01 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:03:01.677 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 02:03:01 np0005548731 nova_compute[232433]: 2025-12-06 07:03:01.711 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:03:02 np0005548731 nova_compute[232433]: 2025-12-06 07:03:02.006 232437 DEBUG nova.compute.manager [req-df499ce8-a34e-44f0-8a84-41a58a3f6066 req-9ff05284-7cf9-4f49-bf63-2dcbfafbacde 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 4dd1863a-3cb1-4e42-8611-ed9587a0b6ce] Received event network-vif-plugged-510079bd-5c00-4b8a-b2eb-d63f0ffc3d69 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:03:02 np0005548731 nova_compute[232433]: 2025-12-06 07:03:02.007 232437 DEBUG oslo_concurrency.lockutils [req-df499ce8-a34e-44f0-8a84-41a58a3f6066 req-9ff05284-7cf9-4f49-bf63-2dcbfafbacde 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "4dd1863a-3cb1-4e42-8611-ed9587a0b6ce-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:03:02 np0005548731 nova_compute[232433]: 2025-12-06 07:03:02.008 232437 DEBUG oslo_concurrency.lockutils [req-df499ce8-a34e-44f0-8a84-41a58a3f6066 req-9ff05284-7cf9-4f49-bf63-2dcbfafbacde 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "4dd1863a-3cb1-4e42-8611-ed9587a0b6ce-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:03:02 np0005548731 nova_compute[232433]: 2025-12-06 07:03:02.009 232437 DEBUG oslo_concurrency.lockutils [req-df499ce8-a34e-44f0-8a84-41a58a3f6066 req-9ff05284-7cf9-4f49-bf63-2dcbfafbacde 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "4dd1863a-3cb1-4e42-8611-ed9587a0b6ce-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:03:02 np0005548731 nova_compute[232433]: 2025-12-06 07:03:02.009 232437 DEBUG nova.compute.manager [req-df499ce8-a34e-44f0-8a84-41a58a3f6066 req-9ff05284-7cf9-4f49-bf63-2dcbfafbacde 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 4dd1863a-3cb1-4e42-8611-ed9587a0b6ce] No waiting events found dispatching network-vif-plugged-510079bd-5c00-4b8a-b2eb-d63f0ffc3d69 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 02:03:02 np0005548731 nova_compute[232433]: 2025-12-06 07:03:02.010 232437 WARNING nova.compute.manager [req-df499ce8-a34e-44f0-8a84-41a58a3f6066 req-9ff05284-7cf9-4f49-bf63-2dcbfafbacde 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 4dd1863a-3cb1-4e42-8611-ed9587a0b6ce] Received unexpected event network-vif-plugged-510079bd-5c00-4b8a-b2eb-d63f0ffc3d69 for instance with vm_state active and task_state None.#033[00m
Dec  6 02:03:02 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:03:02.124 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=9, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:ec:b3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '32:72:e7:89:e0:7d'}, ipsec=False) old=SB_Global(nb_cfg=8) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 02:03:02 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:03:02.125 143965 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec  6 02:03:02 np0005548731 nova_compute[232433]: 2025-12-06 07:03:02.126 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:03:02 np0005548731 podman[243362]: 2025-12-06 07:03:02.903354936 +0000 UTC m=+0.061638485 container health_status 26c2ce9bdb83c6db5ccad7f5b2a7cb4e9354ab0235ad2f942ea42c8feda2142b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS)
Dec  6 02:03:02 np0005548731 podman[243364]: 2025-12-06 07:03:02.905799645 +0000 UTC m=+0.065159881 container health_status ae4fd08cdec31d6ae2a6b91ffff7deb6ca5a8375cb52ee4b685cc6f25796824e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec  6 02:03:02 np0005548731 podman[243363]: 2025-12-06 07:03:02.924582877 +0000 UTC m=+0.086903524 container health_status a118c3becf56cfbcae208065771ebf10a65162bb7b96389971293e534823fb78 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_controller, org.label-schema.vendor=CentOS)
Dec  6 02:03:03 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:03:03 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:03:03 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:03:03.619 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:03:03 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:03:03 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:03:03 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:03:03.681 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:03:04 np0005548731 nova_compute[232433]: 2025-12-06 07:03:04.581 232437 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765004569.5808709, d32579c4-62c8-41ac-9d01-b617cc7992ed => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 02:03:04 np0005548731 nova_compute[232433]: 2025-12-06 07:03:04.582 232437 INFO nova.compute.manager [-] [instance: d32579c4-62c8-41ac-9d01-b617cc7992ed] VM Stopped (Lifecycle Event)#033[00m
Dec  6 02:03:04 np0005548731 nova_compute[232433]: 2025-12-06 07:03:04.603 232437 DEBUG nova.compute.manager [None req-ab799570-d1d4-4822-88a8-d4937213b92f - - - - - -] [instance: d32579c4-62c8-41ac-9d01-b617cc7992ed] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:03:05 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e168 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:03:05 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:03:05 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:03:05 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:03:05.622 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:03:05 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:03:05 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:03:05 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:03:05.682 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:03:06 np0005548731 nova_compute[232433]: 2025-12-06 07:03:06.064 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:03:06 np0005548731 nova_compute[232433]: 2025-12-06 07:03:06.696 232437 DEBUG nova.virt.libvirt.driver [None req-c2fcd6f4-b0d0-4f9c-bbbb-814408eff4b8 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] [instance: 4dd1863a-3cb1-4e42-8611-ed9587a0b6ce] Check if temp file /var/lib/nova/instances/tmp9cr5jvub exists to indicate shared storage is being used for migration. Exists? False _check_shared_storage_test_file /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10065#033[00m
Dec  6 02:03:06 np0005548731 nova_compute[232433]: 2025-12-06 07:03:06.697 232437 DEBUG nova.compute.manager [None req-c2fcd6f4-b0d0-4f9c-bbbb-814408eff4b8 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] source check data is LibvirtLiveMigrateData(bdms=<?>,block_migration=False,disk_available_mb=19456,disk_over_commit=<?>,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmp9cr5jvub',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='4dd1863a-3cb1-4e42-8611-ed9587a0b6ce',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=True,migration=<?>,old_vol_attachment_ids=<?>,serial_listen_addr=None,serial_listen_ports=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) check_can_live_migrate_source /usr/lib/python3.9/site-packages/nova/compute/manager.py:8587#033[00m
Dec  6 02:03:06 np0005548731 nova_compute[232433]: 2025-12-06 07:03:06.713 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:03:07 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:03:07 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:03:07 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:03:07.625 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:03:07 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:03:07 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:03:07 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:03:07.684 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:03:08 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  6 02:03:08 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4229558082' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  6 02:03:08 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  6 02:03:08 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4229558082' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  6 02:03:09 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:03:09 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:03:09 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:03:09.628 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:03:09 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:03:09 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:03:09 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:03:09.686 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:03:10 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e168 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:03:11 np0005548731 nova_compute[232433]: 2025-12-06 07:03:11.066 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:03:11 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:03:11.128 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=9f96b960-b4f2-40bd-ae99-08121f5e8b78, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '9'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:03:11 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:03:11 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:03:11 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:03:11.631 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:03:11 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:03:11 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:03:11 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:03:11.688 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:03:11 np0005548731 nova_compute[232433]: 2025-12-06 07:03:11.716 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:03:13 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:03:13 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:03:13 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:03:13.635 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:03:13 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:03:13 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 02:03:13 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:03:13.690 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 02:03:13 np0005548731 ovn_controller[133927]: 2025-12-06T07:03:13Z|00012|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:30:77:d0 10.100.0.8
Dec  6 02:03:13 np0005548731 ovn_controller[133927]: 2025-12-06T07:03:13Z|00013|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:30:77:d0 10.100.0.8
Dec  6 02:03:14 np0005548731 nova_compute[232433]: 2025-12-06 07:03:14.739 232437 DEBUG nova.compute.manager [req-071a88ab-14e8-40a3-8af2-41ce2c1629bc req-d086270c-6a17-4430-8b10-a2cc25fec572 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 4dd1863a-3cb1-4e42-8611-ed9587a0b6ce] Received event network-vif-unplugged-510079bd-5c00-4b8a-b2eb-d63f0ffc3d69 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:03:14 np0005548731 nova_compute[232433]: 2025-12-06 07:03:14.739 232437 DEBUG oslo_concurrency.lockutils [req-071a88ab-14e8-40a3-8af2-41ce2c1629bc req-d086270c-6a17-4430-8b10-a2cc25fec572 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "4dd1863a-3cb1-4e42-8611-ed9587a0b6ce-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:03:14 np0005548731 nova_compute[232433]: 2025-12-06 07:03:14.740 232437 DEBUG oslo_concurrency.lockutils [req-071a88ab-14e8-40a3-8af2-41ce2c1629bc req-d086270c-6a17-4430-8b10-a2cc25fec572 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "4dd1863a-3cb1-4e42-8611-ed9587a0b6ce-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:03:14 np0005548731 nova_compute[232433]: 2025-12-06 07:03:14.740 232437 DEBUG oslo_concurrency.lockutils [req-071a88ab-14e8-40a3-8af2-41ce2c1629bc req-d086270c-6a17-4430-8b10-a2cc25fec572 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "4dd1863a-3cb1-4e42-8611-ed9587a0b6ce-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:03:14 np0005548731 nova_compute[232433]: 2025-12-06 07:03:14.740 232437 DEBUG nova.compute.manager [req-071a88ab-14e8-40a3-8af2-41ce2c1629bc req-d086270c-6a17-4430-8b10-a2cc25fec572 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 4dd1863a-3cb1-4e42-8611-ed9587a0b6ce] No waiting events found dispatching network-vif-unplugged-510079bd-5c00-4b8a-b2eb-d63f0ffc3d69 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 02:03:14 np0005548731 nova_compute[232433]: 2025-12-06 07:03:14.740 232437 DEBUG nova.compute.manager [req-071a88ab-14e8-40a3-8af2-41ce2c1629bc req-d086270c-6a17-4430-8b10-a2cc25fec572 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 4dd1863a-3cb1-4e42-8611-ed9587a0b6ce] Received event network-vif-unplugged-510079bd-5c00-4b8a-b2eb-d63f0ffc3d69 for instance with task_state migrating. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Dec  6 02:03:15 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e168 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:03:15 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:03:15 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:03:15 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:03:15.638 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:03:15 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:03:15 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:03:15 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:03:15.692 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:03:16 np0005548731 nova_compute[232433]: 2025-12-06 07:03:16.068 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:03:16 np0005548731 nova_compute[232433]: 2025-12-06 07:03:16.173 232437 INFO nova.compute.manager [None req-c2fcd6f4-b0d0-4f9c-bbbb-814408eff4b8 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] [instance: 4dd1863a-3cb1-4e42-8611-ed9587a0b6ce] Took 7.92 seconds for pre_live_migration on destination host compute-1.ctlplane.example.com.#033[00m
Dec  6 02:03:16 np0005548731 nova_compute[232433]: 2025-12-06 07:03:16.173 232437 DEBUG nova.compute.manager [None req-c2fcd6f4-b0d0-4f9c-bbbb-814408eff4b8 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] [instance: 4dd1863a-3cb1-4e42-8611-ed9587a0b6ce] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec  6 02:03:16 np0005548731 nova_compute[232433]: 2025-12-06 07:03:16.192 232437 DEBUG nova.compute.manager [None req-c2fcd6f4-b0d0-4f9c-bbbb-814408eff4b8 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] live_migration data is LibvirtLiveMigrateData(bdms=[LibvirtLiveMigrateBDMInfo],block_migration=False,disk_available_mb=19456,disk_over_commit=<?>,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmp9cr5jvub',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='4dd1863a-3cb1-4e42-8611-ed9587a0b6ce',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=True,migration=Migration(380a6d0b-c91a-4634-9fab-997f293c0ae5),old_vol_attachment_ids={82a6fabc-b798-4571-a13e-f9c67a3f9413='1f7e2e99-fed4-47ad-9a87-fb8a1b33200d'},serial_listen_addr=None,serial_listen_ports=[],src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=[],target_connect_addr=None,vifs=[VIFMigrateData],wait_for_vif_plugged=True) _do_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:8939#033[00m
Dec  6 02:03:16 np0005548731 nova_compute[232433]: 2025-12-06 07:03:16.195 232437 DEBUG nova.objects.instance [None req-c2fcd6f4-b0d0-4f9c-bbbb-814408eff4b8 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] Lazy-loading 'migration_context' on Instance uuid 4dd1863a-3cb1-4e42-8611-ed9587a0b6ce obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:03:16 np0005548731 nova_compute[232433]: 2025-12-06 07:03:16.197 232437 DEBUG nova.virt.libvirt.driver [None req-c2fcd6f4-b0d0-4f9c-bbbb-814408eff4b8 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] [instance: 4dd1863a-3cb1-4e42-8611-ed9587a0b6ce] Starting monitoring of live migration _live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10639#033[00m
Dec  6 02:03:16 np0005548731 nova_compute[232433]: 2025-12-06 07:03:16.198 232437 DEBUG nova.virt.libvirt.driver [None req-c2fcd6f4-b0d0-4f9c-bbbb-814408eff4b8 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] [instance: 4dd1863a-3cb1-4e42-8611-ed9587a0b6ce] Operation thread is still running _live_migration_monitor /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10440#033[00m
Dec  6 02:03:16 np0005548731 nova_compute[232433]: 2025-12-06 07:03:16.199 232437 DEBUG nova.virt.libvirt.driver [None req-c2fcd6f4-b0d0-4f9c-bbbb-814408eff4b8 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] [instance: 4dd1863a-3cb1-4e42-8611-ed9587a0b6ce] Migration not running yet _live_migration_monitor /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10449#033[00m
Dec  6 02:03:16 np0005548731 nova_compute[232433]: 2025-12-06 07:03:16.218 232437 DEBUG nova.virt.libvirt.migration [None req-c2fcd6f4-b0d0-4f9c-bbbb-814408eff4b8 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] Find same serial number: pos=1, serial=82a6fabc-b798-4571-a13e-f9c67a3f9413 _update_volume_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:242#033[00m
Dec  6 02:03:16 np0005548731 nova_compute[232433]: 2025-12-06 07:03:16.219 232437 DEBUG nova.virt.libvirt.vif [None req-c2fcd6f4-b0d0-4f9c-bbbb-814408eff4b8 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T07:02:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-LiveAutoBlockMigrationV225Test-server-509443515',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-liveautoblockmigrationv225test-server-509443515',id=23,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-06T07:03:00Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='dc1bc9517198484ab30d93ebd5d88c35',ramdisk_id='',reservation_id='r-m8zqnud8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-LiveAutoBlockMigrationV225Test-252281632',owner_user_name='tempest-LiveAutoBlockMigrationV225Test-252281632-project-member'},tags=<?>,task_state='migrating',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T07:03:00Z,user_data=None,user_id='6805353f6bf048f9b406a1e565a13f11',uuid=4dd1863a-3cb1-4e42-8611-ed9587a0b6ce,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "510079bd-5c00-4b8a-b2eb-d63f0ffc3d69", "address": "fa:16:3e:30:77:d0", "network": {"id": "dfa287bf-10c3-40fc-8071-37bb7f801357", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-1283365408-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dc1bc9517198484ab30d93ebd5d88c35", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tap510079bd-5c", "ovs_interfaceid": "510079bd-5c00-4b8a-b2eb-d63f0ffc3d69", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec  6 02:03:16 np0005548731 nova_compute[232433]: 2025-12-06 07:03:16.220 232437 DEBUG nova.network.os_vif_util [None req-c2fcd6f4-b0d0-4f9c-bbbb-814408eff4b8 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] Converting VIF {"id": "510079bd-5c00-4b8a-b2eb-d63f0ffc3d69", "address": "fa:16:3e:30:77:d0", "network": {"id": "dfa287bf-10c3-40fc-8071-37bb7f801357", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-1283365408-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dc1bc9517198484ab30d93ebd5d88c35", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tap510079bd-5c", "ovs_interfaceid": "510079bd-5c00-4b8a-b2eb-d63f0ffc3d69", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 02:03:16 np0005548731 nova_compute[232433]: 2025-12-06 07:03:16.220 232437 DEBUG nova.network.os_vif_util [None req-c2fcd6f4-b0d0-4f9c-bbbb-814408eff4b8 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:30:77:d0,bridge_name='br-int',has_traffic_filtering=True,id=510079bd-5c00-4b8a-b2eb-d63f0ffc3d69,network=Network(dfa287bf-10c3-40fc-8071-37bb7f801357),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap510079bd-5c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 02:03:16 np0005548731 nova_compute[232433]: 2025-12-06 07:03:16.220 232437 DEBUG nova.virt.libvirt.migration [None req-c2fcd6f4-b0d0-4f9c-bbbb-814408eff4b8 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] [instance: 4dd1863a-3cb1-4e42-8611-ed9587a0b6ce] Updating guest XML with vif config: <interface type="ethernet">
Dec  6 02:03:16 np0005548731 nova_compute[232433]:  <mac address="fa:16:3e:30:77:d0"/>
Dec  6 02:03:16 np0005548731 nova_compute[232433]:  <model type="virtio"/>
Dec  6 02:03:16 np0005548731 nova_compute[232433]:  <driver name="vhost" rx_queue_size="512"/>
Dec  6 02:03:16 np0005548731 nova_compute[232433]:  <mtu size="1442"/>
Dec  6 02:03:16 np0005548731 nova_compute[232433]:  <target dev="tap510079bd-5c"/>
Dec  6 02:03:16 np0005548731 nova_compute[232433]: </interface>
Dec  6 02:03:16 np0005548731 nova_compute[232433]: _update_vif_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:388#033[00m
Dec  6 02:03:16 np0005548731 nova_compute[232433]: 2025-12-06 07:03:16.221 232437 DEBUG nova.virt.libvirt.driver [None req-c2fcd6f4-b0d0-4f9c-bbbb-814408eff4b8 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] [instance: 4dd1863a-3cb1-4e42-8611-ed9587a0b6ce] About to invoke the migrate API _live_migration_operation /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10272#033[00m
Dec  6 02:03:16 np0005548731 nova_compute[232433]: 2025-12-06 07:03:16.701 232437 DEBUG nova.virt.libvirt.migration [None req-c2fcd6f4-b0d0-4f9c-bbbb-814408eff4b8 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] [instance: 4dd1863a-3cb1-4e42-8611-ed9587a0b6ce] Current None elapsed 0 steps [(0, 50), (150, 95), (300, 140), (450, 185), (600, 230), (750, 275), (900, 320), (1050, 365), (1200, 410), (1350, 455), (1500, 500)] update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:512#033[00m
Dec  6 02:03:16 np0005548731 nova_compute[232433]: 2025-12-06 07:03:16.702 232437 INFO nova.virt.libvirt.migration [None req-c2fcd6f4-b0d0-4f9c-bbbb-814408eff4b8 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] [instance: 4dd1863a-3cb1-4e42-8611-ed9587a0b6ce] Increasing downtime to 50 ms after 0 sec elapsed time#033[00m
Dec  6 02:03:16 np0005548731 nova_compute[232433]: 2025-12-06 07:03:16.718 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:03:16 np0005548731 nova_compute[232433]: 2025-12-06 07:03:16.794 232437 INFO nova.virt.libvirt.driver [None req-c2fcd6f4-b0d0-4f9c-bbbb-814408eff4b8 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] [instance: 4dd1863a-3cb1-4e42-8611-ed9587a0b6ce] Migration running for 0 secs, memory 100% remaining (bytes processed=0, remaining=0, total=0); disk 100% remaining (bytes processed=0, remaining=0, total=0).#033[00m
Dec  6 02:03:16 np0005548731 nova_compute[232433]: 2025-12-06 07:03:16.918 232437 DEBUG nova.compute.manager [req-b8767e0a-969c-4c37-8d8e-0e4d606ae494 req-c61eda28-3faa-423a-8e35-200a5e8fb358 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 4dd1863a-3cb1-4e42-8611-ed9587a0b6ce] Received event network-vif-plugged-510079bd-5c00-4b8a-b2eb-d63f0ffc3d69 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:03:16 np0005548731 nova_compute[232433]: 2025-12-06 07:03:16.919 232437 DEBUG oslo_concurrency.lockutils [req-b8767e0a-969c-4c37-8d8e-0e4d606ae494 req-c61eda28-3faa-423a-8e35-200a5e8fb358 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "4dd1863a-3cb1-4e42-8611-ed9587a0b6ce-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:03:16 np0005548731 nova_compute[232433]: 2025-12-06 07:03:16.919 232437 DEBUG oslo_concurrency.lockutils [req-b8767e0a-969c-4c37-8d8e-0e4d606ae494 req-c61eda28-3faa-423a-8e35-200a5e8fb358 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "4dd1863a-3cb1-4e42-8611-ed9587a0b6ce-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:03:16 np0005548731 nova_compute[232433]: 2025-12-06 07:03:16.919 232437 DEBUG oslo_concurrency.lockutils [req-b8767e0a-969c-4c37-8d8e-0e4d606ae494 req-c61eda28-3faa-423a-8e35-200a5e8fb358 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "4dd1863a-3cb1-4e42-8611-ed9587a0b6ce-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:03:16 np0005548731 nova_compute[232433]: 2025-12-06 07:03:16.919 232437 DEBUG nova.compute.manager [req-b8767e0a-969c-4c37-8d8e-0e4d606ae494 req-c61eda28-3faa-423a-8e35-200a5e8fb358 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 4dd1863a-3cb1-4e42-8611-ed9587a0b6ce] No waiting events found dispatching network-vif-plugged-510079bd-5c00-4b8a-b2eb-d63f0ffc3d69 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 02:03:16 np0005548731 nova_compute[232433]: 2025-12-06 07:03:16.920 232437 WARNING nova.compute.manager [req-b8767e0a-969c-4c37-8d8e-0e4d606ae494 req-c61eda28-3faa-423a-8e35-200a5e8fb358 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 4dd1863a-3cb1-4e42-8611-ed9587a0b6ce] Received unexpected event network-vif-plugged-510079bd-5c00-4b8a-b2eb-d63f0ffc3d69 for instance with vm_state active and task_state migrating.#033[00m
Dec  6 02:03:16 np0005548731 nova_compute[232433]: 2025-12-06 07:03:16.920 232437 DEBUG nova.compute.manager [req-b8767e0a-969c-4c37-8d8e-0e4d606ae494 req-c61eda28-3faa-423a-8e35-200a5e8fb358 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 4dd1863a-3cb1-4e42-8611-ed9587a0b6ce] Received event network-changed-510079bd-5c00-4b8a-b2eb-d63f0ffc3d69 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:03:16 np0005548731 nova_compute[232433]: 2025-12-06 07:03:16.920 232437 DEBUG nova.compute.manager [req-b8767e0a-969c-4c37-8d8e-0e4d606ae494 req-c61eda28-3faa-423a-8e35-200a5e8fb358 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 4dd1863a-3cb1-4e42-8611-ed9587a0b6ce] Refreshing instance network info cache due to event network-changed-510079bd-5c00-4b8a-b2eb-d63f0ffc3d69. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  6 02:03:16 np0005548731 nova_compute[232433]: 2025-12-06 07:03:16.920 232437 DEBUG oslo_concurrency.lockutils [req-b8767e0a-969c-4c37-8d8e-0e4d606ae494 req-c61eda28-3faa-423a-8e35-200a5e8fb358 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-4dd1863a-3cb1-4e42-8611-ed9587a0b6ce" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 02:03:16 np0005548731 nova_compute[232433]: 2025-12-06 07:03:16.921 232437 DEBUG oslo_concurrency.lockutils [req-b8767e0a-969c-4c37-8d8e-0e4d606ae494 req-c61eda28-3faa-423a-8e35-200a5e8fb358 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-4dd1863a-3cb1-4e42-8611-ed9587a0b6ce" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 02:03:16 np0005548731 nova_compute[232433]: 2025-12-06 07:03:16.921 232437 DEBUG nova.network.neutron [req-b8767e0a-969c-4c37-8d8e-0e4d606ae494 req-c61eda28-3faa-423a-8e35-200a5e8fb358 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 4dd1863a-3cb1-4e42-8611-ed9587a0b6ce] Refreshing network info cache for port 510079bd-5c00-4b8a-b2eb-d63f0ffc3d69 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  6 02:03:17 np0005548731 nova_compute[232433]: 2025-12-06 07:03:17.297 232437 DEBUG nova.virt.libvirt.migration [None req-c2fcd6f4-b0d0-4f9c-bbbb-814408eff4b8 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] [instance: 4dd1863a-3cb1-4e42-8611-ed9587a0b6ce] Current 50 elapsed 1 steps [(0, 50), (150, 95), (300, 140), (450, 185), (600, 230), (750, 275), (900, 320), (1050, 365), (1200, 410), (1350, 455), (1500, 500)] update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:512#033[00m
Dec  6 02:03:17 np0005548731 nova_compute[232433]: 2025-12-06 07:03:17.297 232437 DEBUG nova.virt.libvirt.migration [None req-c2fcd6f4-b0d0-4f9c-bbbb-814408eff4b8 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] [instance: 4dd1863a-3cb1-4e42-8611-ed9587a0b6ce] Downtime does not need to change update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:525#033[00m
Dec  6 02:03:17 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:03:17 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:03:17 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:03:17.640 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:03:17 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:03:17 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:03:17 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:03:17.694 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:03:17 np0005548731 nova_compute[232433]: 2025-12-06 07:03:17.800 232437 DEBUG nova.virt.libvirt.migration [None req-c2fcd6f4-b0d0-4f9c-bbbb-814408eff4b8 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] [instance: 4dd1863a-3cb1-4e42-8611-ed9587a0b6ce] Current 50 elapsed 1 steps [(0, 50), (150, 95), (300, 140), (450, 185), (600, 230), (750, 275), (900, 320), (1050, 365), (1200, 410), (1350, 455), (1500, 500)] update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:512#033[00m
Dec  6 02:03:17 np0005548731 nova_compute[232433]: 2025-12-06 07:03:17.800 232437 DEBUG nova.virt.libvirt.migration [None req-c2fcd6f4-b0d0-4f9c-bbbb-814408eff4b8 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] [instance: 4dd1863a-3cb1-4e42-8611-ed9587a0b6ce] Downtime does not need to change update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:525#033[00m
Dec  6 02:03:18 np0005548731 nova_compute[232433]: 2025-12-06 07:03:18.304 232437 DEBUG nova.virt.libvirt.migration [None req-c2fcd6f4-b0d0-4f9c-bbbb-814408eff4b8 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] [instance: 4dd1863a-3cb1-4e42-8611-ed9587a0b6ce] Current 50 elapsed 2 steps [(0, 50), (150, 95), (300, 140), (450, 185), (600, 230), (750, 275), (900, 320), (1050, 365), (1200, 410), (1350, 455), (1500, 500)] update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:512#033[00m
Dec  6 02:03:18 np0005548731 nova_compute[232433]: 2025-12-06 07:03:18.304 232437 DEBUG nova.virt.libvirt.migration [None req-c2fcd6f4-b0d0-4f9c-bbbb-814408eff4b8 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] [instance: 4dd1863a-3cb1-4e42-8611-ed9587a0b6ce] Downtime does not need to change update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:525#033[00m
Dec  6 02:03:18 np0005548731 nova_compute[232433]: 2025-12-06 07:03:18.588 232437 DEBUG nova.virt.driver [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Emitting event <LifecycleEvent: 1765004598.5877206, 4dd1863a-3cb1-4e42-8611-ed9587a0b6ce => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 02:03:18 np0005548731 nova_compute[232433]: 2025-12-06 07:03:18.588 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 4dd1863a-3cb1-4e42-8611-ed9587a0b6ce] VM Paused (Lifecycle Event)#033[00m
Dec  6 02:03:18 np0005548731 nova_compute[232433]: 2025-12-06 07:03:18.622 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 4dd1863a-3cb1-4e42-8611-ed9587a0b6ce] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:03:18 np0005548731 nova_compute[232433]: 2025-12-06 07:03:18.625 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 4dd1863a-3cb1-4e42-8611-ed9587a0b6ce] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: migrating, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  6 02:03:18 np0005548731 nova_compute[232433]: 2025-12-06 07:03:18.645 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 4dd1863a-3cb1-4e42-8611-ed9587a0b6ce] During sync_power_state the instance has a pending task (migrating). Skip.#033[00m
Dec  6 02:03:18 np0005548731 kernel: tap510079bd-5c (unregistering): left promiscuous mode
Dec  6 02:03:18 np0005548731 NetworkManager[49182]: <info>  [1765004598.7850] device (tap510079bd-5c): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec  6 02:03:18 np0005548731 ovn_controller[133927]: 2025-12-06T07:03:18Z|00075|binding|INFO|Releasing lport 510079bd-5c00-4b8a-b2eb-d63f0ffc3d69 from this chassis (sb_readonly=0)
Dec  6 02:03:18 np0005548731 nova_compute[232433]: 2025-12-06 07:03:18.793 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:03:18 np0005548731 ovn_controller[133927]: 2025-12-06T07:03:18Z|00076|binding|INFO|Setting lport 510079bd-5c00-4b8a-b2eb-d63f0ffc3d69 down in Southbound
Dec  6 02:03:18 np0005548731 ovn_controller[133927]: 2025-12-06T07:03:18Z|00077|binding|INFO|Removing iface tap510079bd-5c ovn-installed in OVS
Dec  6 02:03:18 np0005548731 nova_compute[232433]: 2025-12-06 07:03:18.798 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:03:18 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:03:18.804 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:30:77:d0 10.100.0.8'], port_security=['fa:16:3e:30:77:d0 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com,compute-1.ctlplane.example.com', 'activation-strategy': 'rarp', 'additional-chassis-activated': '03fe054d-d727-4af3-9c5e-92e57505f242'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '4dd1863a-3cb1-4e42-8611-ed9587a0b6ce', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-dfa287bf-10c3-40fc-8071-37bb7f801357', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'dc1bc9517198484ab30d93ebd5d88c35', 'neutron:revision_number': '8', 'neutron:security_group_ids': '0f21756f-764b-4e57-8875-8daafaed0f4c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7640d9cc-b332-470c-9f54-e0b0e119f55f, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], logical_port=510079bd-5c00-4b8a-b2eb-d63f0ffc3d69) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 02:03:18 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:03:18.806 143965 INFO neutron.agent.ovn.metadata.agent [-] Port 510079bd-5c00-4b8a-b2eb-d63f0ffc3d69 in datapath dfa287bf-10c3-40fc-8071-37bb7f801357 unbound from our chassis#033[00m
Dec  6 02:03:18 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:03:18.808 143965 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network dfa287bf-10c3-40fc-8071-37bb7f801357, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec  6 02:03:18 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:03:18.809 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[6e12c130-134a-4342-939e-a3331d6cff65]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:03:18 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:03:18.810 143965 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-dfa287bf-10c3-40fc-8071-37bb7f801357 namespace which is not needed anymore#033[00m
Dec  6 02:03:18 np0005548731 nova_compute[232433]: 2025-12-06 07:03:18.811 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:03:18 np0005548731 systemd[1]: machine-qemu\x2d10\x2dinstance\x2d00000017.scope: Deactivated successfully.
Dec  6 02:03:18 np0005548731 systemd[1]: machine-qemu\x2d10\x2dinstance\x2d00000017.scope: Consumed 14.469s CPU time.
Dec  6 02:03:18 np0005548731 systemd-machined[195355]: Machine qemu-10-instance-00000017 terminated.
Dec  6 02:03:18 np0005548731 virtqemud[232080]: Unable to get XATTR trusted.libvirt.security.ref_selinux on volumes/volume-82a6fabc-b798-4571-a13e-f9c67a3f9413: No such file or directory
Dec  6 02:03:18 np0005548731 virtqemud[232080]: Unable to get XATTR trusted.libvirt.security.ref_dac on volumes/volume-82a6fabc-b798-4571-a13e-f9c67a3f9413: No such file or directory
Dec  6 02:03:18 np0005548731 neutron-haproxy-ovnmeta-dfa287bf-10c3-40fc-8071-37bb7f801357[243345]: [NOTICE]   (243349) : haproxy version is 2.8.14-c23fe91
Dec  6 02:03:18 np0005548731 neutron-haproxy-ovnmeta-dfa287bf-10c3-40fc-8071-37bb7f801357[243345]: [NOTICE]   (243349) : path to executable is /usr/sbin/haproxy
Dec  6 02:03:18 np0005548731 neutron-haproxy-ovnmeta-dfa287bf-10c3-40fc-8071-37bb7f801357[243345]: [WARNING]  (243349) : Exiting Master process...
Dec  6 02:03:18 np0005548731 neutron-haproxy-ovnmeta-dfa287bf-10c3-40fc-8071-37bb7f801357[243345]: [ALERT]    (243349) : Current worker (243351) exited with code 143 (Terminated)
Dec  6 02:03:18 np0005548731 neutron-haproxy-ovnmeta-dfa287bf-10c3-40fc-8071-37bb7f801357[243345]: [WARNING]  (243349) : All workers exited. Exiting... (0)
Dec  6 02:03:18 np0005548731 systemd[1]: libpod-4ca0db57b70a2363087ccb115bca9f8655560bacb4b33a2c652e921590a21e08.scope: Deactivated successfully.
Dec  6 02:03:18 np0005548731 podman[243512]: 2025-12-06 07:03:18.951657816 +0000 UTC m=+0.048469638 container died 4ca0db57b70a2363087ccb115bca9f8655560bacb4b33a2c652e921590a21e08 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-dfa287bf-10c3-40fc-8071-37bb7f801357, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team)
Dec  6 02:03:18 np0005548731 nova_compute[232433]: 2025-12-06 07:03:18.969 232437 DEBUG nova.virt.libvirt.guest [None req-c2fcd6f4-b0d0-4f9c-bbbb-814408eff4b8 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] Domain has shutdown/gone away: Requested operation is not valid: domain is not running get_job_info /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:688#033[00m
Dec  6 02:03:18 np0005548731 nova_compute[232433]: 2025-12-06 07:03:18.970 232437 INFO nova.virt.libvirt.driver [None req-c2fcd6f4-b0d0-4f9c-bbbb-814408eff4b8 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] [instance: 4dd1863a-3cb1-4e42-8611-ed9587a0b6ce] Migration operation has completed#033[00m
Dec  6 02:03:18 np0005548731 nova_compute[232433]: 2025-12-06 07:03:18.971 232437 INFO nova.compute.manager [None req-c2fcd6f4-b0d0-4f9c-bbbb-814408eff4b8 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] [instance: 4dd1863a-3cb1-4e42-8611-ed9587a0b6ce] _post_live_migration() is started..#033[00m
Dec  6 02:03:18 np0005548731 nova_compute[232433]: 2025-12-06 07:03:18.972 232437 DEBUG nova.virt.libvirt.driver [None req-c2fcd6f4-b0d0-4f9c-bbbb-814408eff4b8 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] [instance: 4dd1863a-3cb1-4e42-8611-ed9587a0b6ce] Migrate API has completed _live_migration_operation /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10279#033[00m
Dec  6 02:03:18 np0005548731 nova_compute[232433]: 2025-12-06 07:03:18.973 232437 DEBUG nova.virt.libvirt.driver [None req-c2fcd6f4-b0d0-4f9c-bbbb-814408eff4b8 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] [instance: 4dd1863a-3cb1-4e42-8611-ed9587a0b6ce] Migration operation thread has finished _live_migration_operation /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10327#033[00m
Dec  6 02:03:18 np0005548731 nova_compute[232433]: 2025-12-06 07:03:18.973 232437 DEBUG nova.virt.libvirt.driver [None req-c2fcd6f4-b0d0-4f9c-bbbb-814408eff4b8 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] [instance: 4dd1863a-3cb1-4e42-8611-ed9587a0b6ce] Migration operation thread notification thread_finished /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10630#033[00m
Dec  6 02:03:18 np0005548731 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-4ca0db57b70a2363087ccb115bca9f8655560bacb4b33a2c652e921590a21e08-userdata-shm.mount: Deactivated successfully.
Dec  6 02:03:18 np0005548731 systemd[1]: var-lib-containers-storage-overlay-8b778ffe9e87bfe2430b648dce6006328d764b6ad545f38b39394c2d9f58d7ef-merged.mount: Deactivated successfully.
Dec  6 02:03:19 np0005548731 podman[243512]: 2025-12-06 07:03:19.004747674 +0000 UTC m=+0.101559496 container cleanup 4ca0db57b70a2363087ccb115bca9f8655560bacb4b33a2c652e921590a21e08 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-dfa287bf-10c3-40fc-8071-37bb7f801357, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec  6 02:03:19 np0005548731 systemd[1]: libpod-conmon-4ca0db57b70a2363087ccb115bca9f8655560bacb4b33a2c652e921590a21e08.scope: Deactivated successfully.
Dec  6 02:03:19 np0005548731 podman[243556]: 2025-12-06 07:03:19.072763533 +0000 UTC m=+0.045337994 container remove 4ca0db57b70a2363087ccb115bca9f8655560bacb4b33a2c652e921590a21e08 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-dfa287bf-10c3-40fc-8071-37bb7f801357, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Dec  6 02:03:19 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:03:19.079 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[02c1df90-051d-461d-8087-325ecf37d62a]: (4, ('Sat Dec  6 07:03:18 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-dfa287bf-10c3-40fc-8071-37bb7f801357 (4ca0db57b70a2363087ccb115bca9f8655560bacb4b33a2c652e921590a21e08)\n4ca0db57b70a2363087ccb115bca9f8655560bacb4b33a2c652e921590a21e08\nSat Dec  6 07:03:19 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-dfa287bf-10c3-40fc-8071-37bb7f801357 (4ca0db57b70a2363087ccb115bca9f8655560bacb4b33a2c652e921590a21e08)\n4ca0db57b70a2363087ccb115bca9f8655560bacb4b33a2c652e921590a21e08\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:03:19 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:03:19.083 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[4b4d6ef9-f2da-48ef-951d-49b61c8caf1e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:03:19 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:03:19.085 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapdfa287bf-10, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:03:19 np0005548731 nova_compute[232433]: 2025-12-06 07:03:19.088 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:03:19 np0005548731 kernel: tapdfa287bf-10: left promiscuous mode
Dec  6 02:03:19 np0005548731 nova_compute[232433]: 2025-12-06 07:03:19.106 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:03:19 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:03:19.110 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[0db73123-eb02-4117-8b55-e8d9c3026823]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:03:19 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:03:19.132 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[0cd2c8f9-3550-493f-b2b8-be45e7e2ba4b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:03:19 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:03:19.134 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[860cee1d-03b7-4f1f-a826-b6260f85c300]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:03:19 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:03:19.149 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[a0d1f5f1-10cb-43cb-b409-9967fc881815]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 484019, 'reachable_time': 21757, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 243575, 'error': None, 'target': 'ovnmeta-dfa287bf-10c3-40fc-8071-37bb7f801357', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:03:19 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:03:19.153 144079 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-dfa287bf-10c3-40fc-8071-37bb7f801357 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Dec  6 02:03:19 np0005548731 systemd[1]: run-netns-ovnmeta\x2ddfa287bf\x2d10c3\x2d40fc\x2d8071\x2d37bb7f801357.mount: Deactivated successfully.
Dec  6 02:03:19 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:03:19.153 144079 DEBUG oslo.privsep.daemon [-] privsep: reply[cbba11dc-bc66-4081-92ef-ee7fe021763d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:03:19 np0005548731 nova_compute[232433]: 2025-12-06 07:03:19.171 232437 DEBUG nova.network.neutron [req-b8767e0a-969c-4c37-8d8e-0e4d606ae494 req-c61eda28-3faa-423a-8e35-200a5e8fb358 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 4dd1863a-3cb1-4e42-8611-ed9587a0b6ce] Updated VIF entry in instance network info cache for port 510079bd-5c00-4b8a-b2eb-d63f0ffc3d69. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  6 02:03:19 np0005548731 nova_compute[232433]: 2025-12-06 07:03:19.171 232437 DEBUG nova.network.neutron [req-b8767e0a-969c-4c37-8d8e-0e4d606ae494 req-c61eda28-3faa-423a-8e35-200a5e8fb358 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 4dd1863a-3cb1-4e42-8611-ed9587a0b6ce] Updating instance_info_cache with network_info: [{"id": "510079bd-5c00-4b8a-b2eb-d63f0ffc3d69", "address": "fa:16:3e:30:77:d0", "network": {"id": "dfa287bf-10c3-40fc-8071-37bb7f801357", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-1283365408-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dc1bc9517198484ab30d93ebd5d88c35", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap510079bd-5c", "ovs_interfaceid": "510079bd-5c00-4b8a-b2eb-d63f0ffc3d69", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"migrating_to": "compute-1.ctlplane.example.com"}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 02:03:19 np0005548731 nova_compute[232433]: 2025-12-06 07:03:19.194 232437 DEBUG oslo_concurrency.lockutils [req-b8767e0a-969c-4c37-8d8e-0e4d606ae494 req-c61eda28-3faa-423a-8e35-200a5e8fb358 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-4dd1863a-3cb1-4e42-8611-ed9587a0b6ce" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 02:03:19 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:03:19 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:03:19 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:03:19.643 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:03:19 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:03:19 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:03:19 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:03:19.696 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:03:20 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e168 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:03:20 np0005548731 nova_compute[232433]: 2025-12-06 07:03:20.973 232437 DEBUG nova.compute.manager [req-804372f1-20f4-40a4-acf0-a0ebb9291b68 req-02fdd4c6-912f-470b-832a-0698a4454b90 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 4dd1863a-3cb1-4e42-8611-ed9587a0b6ce] Received event network-vif-unplugged-510079bd-5c00-4b8a-b2eb-d63f0ffc3d69 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:03:20 np0005548731 nova_compute[232433]: 2025-12-06 07:03:20.974 232437 DEBUG oslo_concurrency.lockutils [req-804372f1-20f4-40a4-acf0-a0ebb9291b68 req-02fdd4c6-912f-470b-832a-0698a4454b90 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "4dd1863a-3cb1-4e42-8611-ed9587a0b6ce-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:03:20 np0005548731 nova_compute[232433]: 2025-12-06 07:03:20.974 232437 DEBUG oslo_concurrency.lockutils [req-804372f1-20f4-40a4-acf0-a0ebb9291b68 req-02fdd4c6-912f-470b-832a-0698a4454b90 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "4dd1863a-3cb1-4e42-8611-ed9587a0b6ce-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:03:20 np0005548731 nova_compute[232433]: 2025-12-06 07:03:20.974 232437 DEBUG oslo_concurrency.lockutils [req-804372f1-20f4-40a4-acf0-a0ebb9291b68 req-02fdd4c6-912f-470b-832a-0698a4454b90 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "4dd1863a-3cb1-4e42-8611-ed9587a0b6ce-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:03:20 np0005548731 nova_compute[232433]: 2025-12-06 07:03:20.975 232437 DEBUG nova.compute.manager [req-804372f1-20f4-40a4-acf0-a0ebb9291b68 req-02fdd4c6-912f-470b-832a-0698a4454b90 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 4dd1863a-3cb1-4e42-8611-ed9587a0b6ce] No waiting events found dispatching network-vif-unplugged-510079bd-5c00-4b8a-b2eb-d63f0ffc3d69 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 02:03:20 np0005548731 nova_compute[232433]: 2025-12-06 07:03:20.975 232437 DEBUG nova.compute.manager [req-804372f1-20f4-40a4-acf0-a0ebb9291b68 req-02fdd4c6-912f-470b-832a-0698a4454b90 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 4dd1863a-3cb1-4e42-8611-ed9587a0b6ce] Received event network-vif-unplugged-510079bd-5c00-4b8a-b2eb-d63f0ffc3d69 for instance with task_state migrating. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Dec  6 02:03:21 np0005548731 nova_compute[232433]: 2025-12-06 07:03:21.070 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:03:21 np0005548731 nova_compute[232433]: 2025-12-06 07:03:21.161 232437 DEBUG nova.compute.manager [req-7e21dec8-4723-4f5a-b40c-c1f1c68acf15 req-3a1781b4-fcb1-4e87-b35b-899bf04d1576 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 4dd1863a-3cb1-4e42-8611-ed9587a0b6ce] Received event network-vif-plugged-510079bd-5c00-4b8a-b2eb-d63f0ffc3d69 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:03:21 np0005548731 nova_compute[232433]: 2025-12-06 07:03:21.162 232437 DEBUG oslo_concurrency.lockutils [req-7e21dec8-4723-4f5a-b40c-c1f1c68acf15 req-3a1781b4-fcb1-4e87-b35b-899bf04d1576 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "4dd1863a-3cb1-4e42-8611-ed9587a0b6ce-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:03:21 np0005548731 nova_compute[232433]: 2025-12-06 07:03:21.162 232437 DEBUG oslo_concurrency.lockutils [req-7e21dec8-4723-4f5a-b40c-c1f1c68acf15 req-3a1781b4-fcb1-4e87-b35b-899bf04d1576 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "4dd1863a-3cb1-4e42-8611-ed9587a0b6ce-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:03:21 np0005548731 nova_compute[232433]: 2025-12-06 07:03:21.163 232437 DEBUG oslo_concurrency.lockutils [req-7e21dec8-4723-4f5a-b40c-c1f1c68acf15 req-3a1781b4-fcb1-4e87-b35b-899bf04d1576 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "4dd1863a-3cb1-4e42-8611-ed9587a0b6ce-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:03:21 np0005548731 nova_compute[232433]: 2025-12-06 07:03:21.163 232437 DEBUG nova.compute.manager [req-7e21dec8-4723-4f5a-b40c-c1f1c68acf15 req-3a1781b4-fcb1-4e87-b35b-899bf04d1576 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 4dd1863a-3cb1-4e42-8611-ed9587a0b6ce] No waiting events found dispatching network-vif-plugged-510079bd-5c00-4b8a-b2eb-d63f0ffc3d69 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 02:03:21 np0005548731 nova_compute[232433]: 2025-12-06 07:03:21.163 232437 WARNING nova.compute.manager [req-7e21dec8-4723-4f5a-b40c-c1f1c68acf15 req-3a1781b4-fcb1-4e87-b35b-899bf04d1576 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 4dd1863a-3cb1-4e42-8611-ed9587a0b6ce] Received unexpected event network-vif-plugged-510079bd-5c00-4b8a-b2eb-d63f0ffc3d69 for instance with vm_state active and task_state migrating.#033[00m
Dec  6 02:03:21 np0005548731 nova_compute[232433]: 2025-12-06 07:03:21.213 232437 DEBUG nova.network.neutron [None req-c2fcd6f4-b0d0-4f9c-bbbb-814408eff4b8 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] Activated binding for port 510079bd-5c00-4b8a-b2eb-d63f0ffc3d69 and host compute-1.ctlplane.example.com migrate_instance_start /usr/lib/python3.9/site-packages/nova/network/neutron.py:3181#033[00m
Dec  6 02:03:21 np0005548731 nova_compute[232433]: 2025-12-06 07:03:21.214 232437 DEBUG nova.compute.manager [None req-c2fcd6f4-b0d0-4f9c-bbbb-814408eff4b8 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] [instance: 4dd1863a-3cb1-4e42-8611-ed9587a0b6ce] Calling driver.post_live_migration_at_source with original source VIFs from migrate_data: [{"id": "510079bd-5c00-4b8a-b2eb-d63f0ffc3d69", "address": "fa:16:3e:30:77:d0", "network": {"id": "dfa287bf-10c3-40fc-8071-37bb7f801357", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-1283365408-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dc1bc9517198484ab30d93ebd5d88c35", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap510079bd-5c", "ovs_interfaceid": "510079bd-5c00-4b8a-b2eb-d63f0ffc3d69", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] _post_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:9326#033[00m
Dec  6 02:03:21 np0005548731 nova_compute[232433]: 2025-12-06 07:03:21.214 232437 DEBUG nova.virt.libvirt.vif [None req-c2fcd6f4-b0d0-4f9c-bbbb-814408eff4b8 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T07:02:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-LiveAutoBlockMigrationV225Test-server-509443515',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-liveautoblockmigrationv225test-server-509443515',id=23,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-06T07:03:00Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='dc1bc9517198484ab30d93ebd5d88c35',ramdisk_id='',reservation_id='r-m8zqnud8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-LiveAutoBlockMigrationV225Test-252281632',owner_
user_name='tempest-LiveAutoBlockMigrationV225Test-252281632-project-member'},tags=<?>,task_state='migrating',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T07:03:05Z,user_data=None,user_id='6805353f6bf048f9b406a1e565a13f11',uuid=4dd1863a-3cb1-4e42-8611-ed9587a0b6ce,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "510079bd-5c00-4b8a-b2eb-d63f0ffc3d69", "address": "fa:16:3e:30:77:d0", "network": {"id": "dfa287bf-10c3-40fc-8071-37bb7f801357", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-1283365408-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dc1bc9517198484ab30d93ebd5d88c35", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap510079bd-5c", "ovs_interfaceid": "510079bd-5c00-4b8a-b2eb-d63f0ffc3d69", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec  6 02:03:21 np0005548731 nova_compute[232433]: 2025-12-06 07:03:21.215 232437 DEBUG nova.network.os_vif_util [None req-c2fcd6f4-b0d0-4f9c-bbbb-814408eff4b8 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] Converting VIF {"id": "510079bd-5c00-4b8a-b2eb-d63f0ffc3d69", "address": "fa:16:3e:30:77:d0", "network": {"id": "dfa287bf-10c3-40fc-8071-37bb7f801357", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-1283365408-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dc1bc9517198484ab30d93ebd5d88c35", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap510079bd-5c", "ovs_interfaceid": "510079bd-5c00-4b8a-b2eb-d63f0ffc3d69", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 02:03:21 np0005548731 nova_compute[232433]: 2025-12-06 07:03:21.215 232437 DEBUG nova.network.os_vif_util [None req-c2fcd6f4-b0d0-4f9c-bbbb-814408eff4b8 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:30:77:d0,bridge_name='br-int',has_traffic_filtering=True,id=510079bd-5c00-4b8a-b2eb-d63f0ffc3d69,network=Network(dfa287bf-10c3-40fc-8071-37bb7f801357),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap510079bd-5c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 02:03:21 np0005548731 nova_compute[232433]: 2025-12-06 07:03:21.216 232437 DEBUG os_vif [None req-c2fcd6f4-b0d0-4f9c-bbbb-814408eff4b8 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:30:77:d0,bridge_name='br-int',has_traffic_filtering=True,id=510079bd-5c00-4b8a-b2eb-d63f0ffc3d69,network=Network(dfa287bf-10c3-40fc-8071-37bb7f801357),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap510079bd-5c') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec  6 02:03:21 np0005548731 nova_compute[232433]: 2025-12-06 07:03:21.217 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:03:21 np0005548731 nova_compute[232433]: 2025-12-06 07:03:21.217 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap510079bd-5c, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:03:21 np0005548731 nova_compute[232433]: 2025-12-06 07:03:21.219 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:03:21 np0005548731 nova_compute[232433]: 2025-12-06 07:03:21.220 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:03:21 np0005548731 nova_compute[232433]: 2025-12-06 07:03:21.222 232437 INFO os_vif [None req-c2fcd6f4-b0d0-4f9c-bbbb-814408eff4b8 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:30:77:d0,bridge_name='br-int',has_traffic_filtering=True,id=510079bd-5c00-4b8a-b2eb-d63f0ffc3d69,network=Network(dfa287bf-10c3-40fc-8071-37bb7f801357),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap510079bd-5c')#033[00m
Dec  6 02:03:21 np0005548731 nova_compute[232433]: 2025-12-06 07:03:21.222 232437 DEBUG oslo_concurrency.lockutils [None req-c2fcd6f4-b0d0-4f9c-bbbb-814408eff4b8 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.free_pci_device_allocations_for_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:03:21 np0005548731 nova_compute[232433]: 2025-12-06 07:03:21.222 232437 DEBUG oslo_concurrency.lockutils [None req-c2fcd6f4-b0d0-4f9c-bbbb-814408eff4b8 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.free_pci_device_allocations_for_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:03:21 np0005548731 nova_compute[232433]: 2025-12-06 07:03:21.223 232437 DEBUG oslo_concurrency.lockutils [None req-c2fcd6f4-b0d0-4f9c-bbbb-814408eff4b8 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.free_pci_device_allocations_for_instance" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:03:21 np0005548731 nova_compute[232433]: 2025-12-06 07:03:21.223 232437 DEBUG nova.compute.manager [None req-c2fcd6f4-b0d0-4f9c-bbbb-814408eff4b8 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] [instance: 4dd1863a-3cb1-4e42-8611-ed9587a0b6ce] Calling driver.cleanup from _post_live_migration _post_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:9349#033[00m
Dec  6 02:03:21 np0005548731 nova_compute[232433]: 2025-12-06 07:03:21.223 232437 INFO nova.virt.libvirt.driver [None req-c2fcd6f4-b0d0-4f9c-bbbb-814408eff4b8 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] [instance: 4dd1863a-3cb1-4e42-8611-ed9587a0b6ce] Deleting instance files /var/lib/nova/instances/4dd1863a-3cb1-4e42-8611-ed9587a0b6ce_del#033[00m
Dec  6 02:03:21 np0005548731 nova_compute[232433]: 2025-12-06 07:03:21.224 232437 INFO nova.virt.libvirt.driver [None req-c2fcd6f4-b0d0-4f9c-bbbb-814408eff4b8 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] [instance: 4dd1863a-3cb1-4e42-8611-ed9587a0b6ce] Deletion of /var/lib/nova/instances/4dd1863a-3cb1-4e42-8611-ed9587a0b6ce_del complete#033[00m
Dec  6 02:03:21 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:03:21 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:03:21 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:03:21.647 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:03:21 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:03:21 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:03:21 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:03:21.698 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:03:23 np0005548731 nova_compute[232433]: 2025-12-06 07:03:23.246 232437 DEBUG nova.compute.manager [req-fd362657-b408-456e-8d7d-5e1fdb0d8c13 req-91d7ce65-935b-4db5-b9a2-1c5fafa7abb6 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 4dd1863a-3cb1-4e42-8611-ed9587a0b6ce] Received event network-vif-plugged-510079bd-5c00-4b8a-b2eb-d63f0ffc3d69 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:03:23 np0005548731 nova_compute[232433]: 2025-12-06 07:03:23.247 232437 DEBUG oslo_concurrency.lockutils [req-fd362657-b408-456e-8d7d-5e1fdb0d8c13 req-91d7ce65-935b-4db5-b9a2-1c5fafa7abb6 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "4dd1863a-3cb1-4e42-8611-ed9587a0b6ce-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:03:23 np0005548731 nova_compute[232433]: 2025-12-06 07:03:23.247 232437 DEBUG oslo_concurrency.lockutils [req-fd362657-b408-456e-8d7d-5e1fdb0d8c13 req-91d7ce65-935b-4db5-b9a2-1c5fafa7abb6 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "4dd1863a-3cb1-4e42-8611-ed9587a0b6ce-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:03:23 np0005548731 nova_compute[232433]: 2025-12-06 07:03:23.247 232437 DEBUG oslo_concurrency.lockutils [req-fd362657-b408-456e-8d7d-5e1fdb0d8c13 req-91d7ce65-935b-4db5-b9a2-1c5fafa7abb6 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "4dd1863a-3cb1-4e42-8611-ed9587a0b6ce-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:03:23 np0005548731 nova_compute[232433]: 2025-12-06 07:03:23.247 232437 DEBUG nova.compute.manager [req-fd362657-b408-456e-8d7d-5e1fdb0d8c13 req-91d7ce65-935b-4db5-b9a2-1c5fafa7abb6 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 4dd1863a-3cb1-4e42-8611-ed9587a0b6ce] No waiting events found dispatching network-vif-plugged-510079bd-5c00-4b8a-b2eb-d63f0ffc3d69 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 02:03:23 np0005548731 nova_compute[232433]: 2025-12-06 07:03:23.248 232437 WARNING nova.compute.manager [req-fd362657-b408-456e-8d7d-5e1fdb0d8c13 req-91d7ce65-935b-4db5-b9a2-1c5fafa7abb6 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 4dd1863a-3cb1-4e42-8611-ed9587a0b6ce] Received unexpected event network-vif-plugged-510079bd-5c00-4b8a-b2eb-d63f0ffc3d69 for instance with vm_state active and task_state migrating.#033[00m
Dec  6 02:03:23 np0005548731 nova_compute[232433]: 2025-12-06 07:03:23.248 232437 DEBUG nova.compute.manager [req-fd362657-b408-456e-8d7d-5e1fdb0d8c13 req-91d7ce65-935b-4db5-b9a2-1c5fafa7abb6 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 4dd1863a-3cb1-4e42-8611-ed9587a0b6ce] Received event network-vif-plugged-510079bd-5c00-4b8a-b2eb-d63f0ffc3d69 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:03:23 np0005548731 nova_compute[232433]: 2025-12-06 07:03:23.248 232437 DEBUG oslo_concurrency.lockutils [req-fd362657-b408-456e-8d7d-5e1fdb0d8c13 req-91d7ce65-935b-4db5-b9a2-1c5fafa7abb6 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "4dd1863a-3cb1-4e42-8611-ed9587a0b6ce-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:03:23 np0005548731 nova_compute[232433]: 2025-12-06 07:03:23.248 232437 DEBUG oslo_concurrency.lockutils [req-fd362657-b408-456e-8d7d-5e1fdb0d8c13 req-91d7ce65-935b-4db5-b9a2-1c5fafa7abb6 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "4dd1863a-3cb1-4e42-8611-ed9587a0b6ce-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:03:23 np0005548731 nova_compute[232433]: 2025-12-06 07:03:23.249 232437 DEBUG oslo_concurrency.lockutils [req-fd362657-b408-456e-8d7d-5e1fdb0d8c13 req-91d7ce65-935b-4db5-b9a2-1c5fafa7abb6 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "4dd1863a-3cb1-4e42-8611-ed9587a0b6ce-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:03:23 np0005548731 nova_compute[232433]: 2025-12-06 07:03:23.249 232437 DEBUG nova.compute.manager [req-fd362657-b408-456e-8d7d-5e1fdb0d8c13 req-91d7ce65-935b-4db5-b9a2-1c5fafa7abb6 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 4dd1863a-3cb1-4e42-8611-ed9587a0b6ce] No waiting events found dispatching network-vif-plugged-510079bd-5c00-4b8a-b2eb-d63f0ffc3d69 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 02:03:23 np0005548731 nova_compute[232433]: 2025-12-06 07:03:23.249 232437 WARNING nova.compute.manager [req-fd362657-b408-456e-8d7d-5e1fdb0d8c13 req-91d7ce65-935b-4db5-b9a2-1c5fafa7abb6 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 4dd1863a-3cb1-4e42-8611-ed9587a0b6ce] Received unexpected event network-vif-plugged-510079bd-5c00-4b8a-b2eb-d63f0ffc3d69 for instance with vm_state active and task_state migrating.#033[00m
Dec  6 02:03:23 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:03:23 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:03:23 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:03:23.650 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:03:23 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:03:23 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:03:23 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:03:23.700 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:03:25 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e168 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:03:25 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:03:25 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:03:25 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:03:25.652 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:03:25 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:03:25 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:03:25 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:03:25.701 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:03:26 np0005548731 nova_compute[232433]: 2025-12-06 07:03:26.072 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:03:26 np0005548731 nova_compute[232433]: 2025-12-06 07:03:26.220 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:03:27 np0005548731 nova_compute[232433]: 2025-12-06 07:03:27.181 232437 DEBUG oslo_concurrency.lockutils [None req-c2fcd6f4-b0d0-4f9c-bbbb-814408eff4b8 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] Acquiring lock "4dd1863a-3cb1-4e42-8611-ed9587a0b6ce-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:03:27 np0005548731 nova_compute[232433]: 2025-12-06 07:03:27.181 232437 DEBUG oslo_concurrency.lockutils [None req-c2fcd6f4-b0d0-4f9c-bbbb-814408eff4b8 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] Lock "4dd1863a-3cb1-4e42-8611-ed9587a0b6ce-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:03:27 np0005548731 nova_compute[232433]: 2025-12-06 07:03:27.181 232437 DEBUG oslo_concurrency.lockutils [None req-c2fcd6f4-b0d0-4f9c-bbbb-814408eff4b8 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] Lock "4dd1863a-3cb1-4e42-8611-ed9587a0b6ce-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:03:27 np0005548731 nova_compute[232433]: 2025-12-06 07:03:27.203 232437 DEBUG oslo_concurrency.lockutils [None req-c2fcd6f4-b0d0-4f9c-bbbb-814408eff4b8 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:03:27 np0005548731 nova_compute[232433]: 2025-12-06 07:03:27.204 232437 DEBUG oslo_concurrency.lockutils [None req-c2fcd6f4-b0d0-4f9c-bbbb-814408eff4b8 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:03:27 np0005548731 nova_compute[232433]: 2025-12-06 07:03:27.204 232437 DEBUG oslo_concurrency.lockutils [None req-c2fcd6f4-b0d0-4f9c-bbbb-814408eff4b8 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:03:27 np0005548731 nova_compute[232433]: 2025-12-06 07:03:27.204 232437 DEBUG nova.compute.resource_tracker [None req-c2fcd6f4-b0d0-4f9c-bbbb-814408eff4b8 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  6 02:03:27 np0005548731 nova_compute[232433]: 2025-12-06 07:03:27.205 232437 DEBUG oslo_concurrency.processutils [None req-c2fcd6f4-b0d0-4f9c-bbbb-814408eff4b8 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:03:27 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 02:03:27 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/877678940' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 02:03:27 np0005548731 nova_compute[232433]: 2025-12-06 07:03:27.648 232437 DEBUG oslo_concurrency.processutils [None req-c2fcd6f4-b0d0-4f9c-bbbb-814408eff4b8 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.443s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:03:27 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:03:27 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:03:27 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:03:27.655 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:03:27 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:03:27 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:03:27 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:03:27.702 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:03:27 np0005548731 nova_compute[232433]: 2025-12-06 07:03:27.736 232437 DEBUG nova.virt.libvirt.driver [None req-c2fcd6f4-b0d0-4f9c-bbbb-814408eff4b8 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] skipping disk for instance-00000014 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  6 02:03:27 np0005548731 nova_compute[232433]: 2025-12-06 07:03:27.737 232437 DEBUG nova.virt.libvirt.driver [None req-c2fcd6f4-b0d0-4f9c-bbbb-814408eff4b8 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] skipping disk for instance-00000014 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  6 02:03:27 np0005548731 nova_compute[232433]: 2025-12-06 07:03:27.905 232437 WARNING nova.virt.libvirt.driver [None req-c2fcd6f4-b0d0-4f9c-bbbb-814408eff4b8 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  6 02:03:27 np0005548731 nova_compute[232433]: 2025-12-06 07:03:27.906 232437 DEBUG nova.compute.resource_tracker [None req-c2fcd6f4-b0d0-4f9c-bbbb-814408eff4b8 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4552MB free_disk=20.77063751220703GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  6 02:03:27 np0005548731 nova_compute[232433]: 2025-12-06 07:03:27.907 232437 DEBUG oslo_concurrency.lockutils [None req-c2fcd6f4-b0d0-4f9c-bbbb-814408eff4b8 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:03:27 np0005548731 nova_compute[232433]: 2025-12-06 07:03:27.907 232437 DEBUG oslo_concurrency.lockutils [None req-c2fcd6f4-b0d0-4f9c-bbbb-814408eff4b8 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:03:27 np0005548731 nova_compute[232433]: 2025-12-06 07:03:27.959 232437 DEBUG nova.compute.resource_tracker [None req-c2fcd6f4-b0d0-4f9c-bbbb-814408eff4b8 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] Migration for instance 4dd1863a-3cb1-4e42-8611-ed9587a0b6ce refers to another host's instance! _pair_instances_to_migrations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:903#033[00m
Dec  6 02:03:27 np0005548731 nova_compute[232433]: 2025-12-06 07:03:27.978 232437 DEBUG nova.compute.resource_tracker [None req-c2fcd6f4-b0d0-4f9c-bbbb-814408eff4b8 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] [instance: 4dd1863a-3cb1-4e42-8611-ed9587a0b6ce] Skipping migration as instance is neither resizing nor live-migrating. _update_usage_from_migrations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1491#033[00m
Dec  6 02:03:28 np0005548731 nova_compute[232433]: 2025-12-06 07:03:28.007 232437 DEBUG nova.compute.resource_tracker [None req-c2fcd6f4-b0d0-4f9c-bbbb-814408eff4b8 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] Instance 646ce79a-a40a-4b2a-9af5-90c98c98b72a actively managed on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128, 'DISK_GB': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  6 02:03:28 np0005548731 nova_compute[232433]: 2025-12-06 07:03:28.007 232437 DEBUG nova.compute.resource_tracker [None req-c2fcd6f4-b0d0-4f9c-bbbb-814408eff4b8 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] Migration 380a6d0b-c91a-4634-9fab-997f293c0ae5 is active on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1640#033[00m
Dec  6 02:03:28 np0005548731 nova_compute[232433]: 2025-12-06 07:03:28.007 232437 DEBUG nova.compute.resource_tracker [None req-c2fcd6f4-b0d0-4f9c-bbbb-814408eff4b8 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  6 02:03:28 np0005548731 nova_compute[232433]: 2025-12-06 07:03:28.008 232437 DEBUG nova.compute.resource_tracker [None req-c2fcd6f4-b0d0-4f9c-bbbb-814408eff4b8 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  6 02:03:28 np0005548731 nova_compute[232433]: 2025-12-06 07:03:28.024 232437 DEBUG nova.scheduler.client.report [None req-c2fcd6f4-b0d0-4f9c-bbbb-814408eff4b8 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] Refreshing inventories for resource provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Dec  6 02:03:28 np0005548731 nova_compute[232433]: 2025-12-06 07:03:28.041 232437 DEBUG nova.scheduler.client.report [None req-c2fcd6f4-b0d0-4f9c-bbbb-814408eff4b8 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] Updating ProviderTree inventory for provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Dec  6 02:03:28 np0005548731 nova_compute[232433]: 2025-12-06 07:03:28.042 232437 DEBUG nova.compute.provider_tree [None req-c2fcd6f4-b0d0-4f9c-bbbb-814408eff4b8 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] Updating inventory in ProviderTree for provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Dec  6 02:03:28 np0005548731 nova_compute[232433]: 2025-12-06 07:03:28.063 232437 DEBUG nova.scheduler.client.report [None req-c2fcd6f4-b0d0-4f9c-bbbb-814408eff4b8 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] Refreshing aggregate associations for resource provider 6d00757a-082f-486d-ae84-869a2ba2e6e7, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Dec  6 02:03:28 np0005548731 nova_compute[232433]: 2025-12-06 07:03:28.104 232437 DEBUG nova.scheduler.client.report [None req-c2fcd6f4-b0d0-4f9c-bbbb-814408eff4b8 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] Refreshing trait associations for resource provider 6d00757a-082f-486d-ae84-869a2ba2e6e7, traits: COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_STORAGE_BUS_USB,COMPUTE_ACCELERATORS,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_TRUSTED_CERTS,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_SSE,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_SSE2,HW_CPU_X86_SSE41,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_STORAGE_BUS_SATA,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_SECURITY_TPM_1_2,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_VOLUME_EXTEND,COMPUTE_NODE,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_SECURITY_TPM_2_0,COMPUTE_VIOMMU_MODEL_AUTO,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_SSE42,COMPUTE_STORAGE_BUS_FDC,COMPUTE_RESCUE_BFV,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_IMAGE_TYPE_ARI _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Dec  6 02:03:28 np0005548731 nova_compute[232433]: 2025-12-06 07:03:28.169 232437 DEBUG oslo_concurrency.processutils [None req-c2fcd6f4-b0d0-4f9c-bbbb-814408eff4b8 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:03:28 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 02:03:28 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2969829256' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 02:03:28 np0005548731 nova_compute[232433]: 2025-12-06 07:03:28.603 232437 DEBUG oslo_concurrency.processutils [None req-c2fcd6f4-b0d0-4f9c-bbbb-814408eff4b8 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.434s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:03:28 np0005548731 nova_compute[232433]: 2025-12-06 07:03:28.611 232437 DEBUG nova.compute.provider_tree [None req-c2fcd6f4-b0d0-4f9c-bbbb-814408eff4b8 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] Inventory has not changed in ProviderTree for provider: 6d00757a-082f-486d-ae84-869a2ba2e6e7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  6 02:03:28 np0005548731 nova_compute[232433]: 2025-12-06 07:03:28.628 232437 DEBUG nova.scheduler.client.report [None req-c2fcd6f4-b0d0-4f9c-bbbb-814408eff4b8 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] Inventory has not changed for provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  6 02:03:28 np0005548731 nova_compute[232433]: 2025-12-06 07:03:28.655 232437 DEBUG nova.compute.resource_tracker [None req-c2fcd6f4-b0d0-4f9c-bbbb-814408eff4b8 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  6 02:03:28 np0005548731 nova_compute[232433]: 2025-12-06 07:03:28.655 232437 DEBUG oslo_concurrency.lockutils [None req-c2fcd6f4-b0d0-4f9c-bbbb-814408eff4b8 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.748s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:03:28 np0005548731 nova_compute[232433]: 2025-12-06 07:03:28.660 232437 INFO nova.compute.manager [None req-c2fcd6f4-b0d0-4f9c-bbbb-814408eff4b8 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] [instance: 4dd1863a-3cb1-4e42-8611-ed9587a0b6ce] Migrating instance to compute-1.ctlplane.example.com finished successfully.#033[00m
Dec  6 02:03:28 np0005548731 nova_compute[232433]: 2025-12-06 07:03:28.763 232437 INFO nova.scheduler.client.report [None req-c2fcd6f4-b0d0-4f9c-bbbb-814408eff4b8 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] Deleted allocation for migration 380a6d0b-c91a-4634-9fab-997f293c0ae5#033[00m
Dec  6 02:03:28 np0005548731 nova_compute[232433]: 2025-12-06 07:03:28.764 232437 DEBUG nova.virt.libvirt.driver [None req-c2fcd6f4-b0d0-4f9c-bbbb-814408eff4b8 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] [instance: 4dd1863a-3cb1-4e42-8611-ed9587a0b6ce] Live migration monitoring is all done _live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10662#033[00m
Dec  6 02:03:29 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:03:29 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:03:29 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:03:29.658 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:03:29 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:03:29 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:03:29 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:03:29.705 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:03:29 np0005548731 nova_compute[232433]: 2025-12-06 07:03:29.977 232437 DEBUG nova.virt.libvirt.driver [None req-5fc21d04-82e3-43dc-91cc-cf8bb9b7a810 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] [instance: 4dd1863a-3cb1-4e42-8611-ed9587a0b6ce] Creating tmpfile /var/lib/nova/instances/tmpeswar0ys to notify to other compute nodes that they should mount the same storage. _create_shared_storage_test_file /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10041#033[00m
Dec  6 02:03:30 np0005548731 nova_compute[232433]: 2025-12-06 07:03:30.137 232437 DEBUG nova.compute.manager [None req-5fc21d04-82e3-43dc-91cc-cf8bb9b7a810 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] destination check data is LibvirtLiveMigrateData(bdms=<?>,block_migration=<?>,disk_available_mb=19456,disk_over_commit=<?>,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpeswar0ys',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path=<?>,is_shared_block_storage=<?>,is_shared_instance_path=<?>,is_volume_backed=<?>,migration=<?>,old_vol_attachment_ids=<?>,serial_listen_addr=None,serial_listen_ports=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) check_can_live_migrate_destination /usr/lib/python3.9/site-packages/nova/compute/manager.py:8476#033[00m
Dec  6 02:03:30 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e168 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:03:30 np0005548731 nova_compute[232433]: 2025-12-06 07:03:30.994 232437 DEBUG nova.compute.manager [None req-5fc21d04-82e3-43dc-91cc-cf8bb9b7a810 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] pre_live_migration data is LibvirtLiveMigrateData(bdms=<?>,block_migration=False,disk_available_mb=19456,disk_over_commit=<?>,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpeswar0ys',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='4dd1863a-3cb1-4e42-8611-ed9587a0b6ce',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=True,migration=<?>,old_vol_attachment_ids=<?>,serial_listen_addr=None,serial_listen_ports=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) pre_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:8604#033[00m
Dec  6 02:03:31 np0005548731 nova_compute[232433]: 2025-12-06 07:03:31.026 232437 DEBUG oslo_concurrency.lockutils [None req-5fc21d04-82e3-43dc-91cc-cf8bb9b7a810 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] Acquiring lock "refresh_cache-4dd1863a-3cb1-4e42-8611-ed9587a0b6ce" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 02:03:31 np0005548731 nova_compute[232433]: 2025-12-06 07:03:31.026 232437 DEBUG oslo_concurrency.lockutils [None req-5fc21d04-82e3-43dc-91cc-cf8bb9b7a810 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] Acquired lock "refresh_cache-4dd1863a-3cb1-4e42-8611-ed9587a0b6ce" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 02:03:31 np0005548731 nova_compute[232433]: 2025-12-06 07:03:31.026 232437 DEBUG nova.network.neutron [None req-5fc21d04-82e3-43dc-91cc-cf8bb9b7a810 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] [instance: 4dd1863a-3cb1-4e42-8611-ed9587a0b6ce] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec  6 02:03:31 np0005548731 nova_compute[232433]: 2025-12-06 07:03:31.074 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:03:31 np0005548731 nova_compute[232433]: 2025-12-06 07:03:31.221 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:03:31 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:03:31 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:03:31 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:03:31.661 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:03:31 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:03:31 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:03:31 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:03:31.706 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:03:33 np0005548731 nova_compute[232433]: 2025-12-06 07:03:33.129 232437 DEBUG nova.network.neutron [None req-5fc21d04-82e3-43dc-91cc-cf8bb9b7a810 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] [instance: 4dd1863a-3cb1-4e42-8611-ed9587a0b6ce] Updating instance_info_cache with network_info: [{"id": "510079bd-5c00-4b8a-b2eb-d63f0ffc3d69", "address": "fa:16:3e:30:77:d0", "network": {"id": "dfa287bf-10c3-40fc-8071-37bb7f801357", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-1283365408-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dc1bc9517198484ab30d93ebd5d88c35", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap510079bd-5c", "ovs_interfaceid": "510079bd-5c00-4b8a-b2eb-d63f0ffc3d69", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 02:03:33 np0005548731 nova_compute[232433]: 2025-12-06 07:03:33.143 232437 DEBUG oslo_concurrency.lockutils [None req-5fc21d04-82e3-43dc-91cc-cf8bb9b7a810 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] Releasing lock "refresh_cache-4dd1863a-3cb1-4e42-8611-ed9587a0b6ce" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 02:03:33 np0005548731 nova_compute[232433]: 2025-12-06 07:03:33.144 232437 DEBUG os_brick.utils [None req-5fc21d04-82e3-43dc-91cc-cf8bb9b7a810 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.102', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-2.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Dec  6 02:03:33 np0005548731 nova_compute[232433]: 2025-12-06 07:03:33.145 237736 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:03:33 np0005548731 nova_compute[232433]: 2025-12-06 07:03:33.157 237736 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.012s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:03:33 np0005548731 nova_compute[232433]: 2025-12-06 07:03:33.157 237736 DEBUG oslo.privsep.daemon [-] privsep: reply[78f7ced4-31b7-4d29-86c6-61823fd537e2]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:03:33 np0005548731 nova_compute[232433]: 2025-12-06 07:03:33.159 237736 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:03:33 np0005548731 nova_compute[232433]: 2025-12-06 07:03:33.170 237736 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.011s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:03:33 np0005548731 nova_compute[232433]: 2025-12-06 07:03:33.170 237736 DEBUG oslo.privsep.daemon [-] privsep: reply[01edf336-8baf-4b74-b866-8cf25433e105]: (4, ('InitiatorName=iqn.1994-05.com.redhat:63778d5959f0', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:03:33 np0005548731 nova_compute[232433]: 2025-12-06 07:03:33.172 237736 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:03:33 np0005548731 nova_compute[232433]: 2025-12-06 07:03:33.184 237736 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.012s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:03:33 np0005548731 nova_compute[232433]: 2025-12-06 07:03:33.184 237736 DEBUG oslo.privsep.daemon [-] privsep: reply[71cd3c2d-527e-4ca8-9f3f-e1869358fc4c]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:03:33 np0005548731 nova_compute[232433]: 2025-12-06 07:03:33.186 237736 DEBUG oslo.privsep.daemon [-] privsep: reply[7ad3d626-96ad-4410-a6cf-745ac63eae36]: (4, 'ec2492e2-e0d1-43d3-b74f-744a8272a0e6') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:03:33 np0005548731 nova_compute[232433]: 2025-12-06 07:03:33.187 232437 DEBUG oslo_concurrency.processutils [None req-5fc21d04-82e3-43dc-91cc-cf8bb9b7a810 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:03:33 np0005548731 nova_compute[232433]: 2025-12-06 07:03:33.214 232437 DEBUG oslo_concurrency.processutils [None req-5fc21d04-82e3-43dc-91cc-cf8bb9b7a810 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] CMD "nvme version" returned: 0 in 0.028s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:03:33 np0005548731 nova_compute[232433]: 2025-12-06 07:03:33.217 232437 DEBUG os_brick.initiator.connectors.lightos [None req-5fc21d04-82e3-43dc-91cc-cf8bb9b7a810 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Dec  6 02:03:33 np0005548731 nova_compute[232433]: 2025-12-06 07:03:33.217 232437 DEBUG os_brick.initiator.connectors.lightos [None req-5fc21d04-82e3-43dc-91cc-cf8bb9b7a810 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Dec  6 02:03:33 np0005548731 nova_compute[232433]: 2025-12-06 07:03:33.218 232437 DEBUG os_brick.initiator.connectors.lightos [None req-5fc21d04-82e3-43dc-91cc-cf8bb9b7a810 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:bf3e0a14-a5f8-4123-aa26-e7cad37b879a dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Dec  6 02:03:33 np0005548731 nova_compute[232433]: 2025-12-06 07:03:33.218 232437 DEBUG os_brick.utils [None req-5fc21d04-82e3-43dc-91cc-cf8bb9b7a810 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] <== get_connector_properties: return (72ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.102', 'host': 'compute-2.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:63778d5959f0', 'do_local_attach': False, 'nvme_hostid': 'bf3e0a14-a5f8-4123-aa26-e7cad37b879a', 'system uuid': 'ec2492e2-e0d1-43d3-b74f-744a8272a0e6', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:bf3e0a14-a5f8-4123-aa26-e7cad37b879a', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Dec  6 02:03:33 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:03:33 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:03:33 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:03:33.664 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:03:33 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:03:33 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:03:33 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:03:33.708 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:03:33 np0005548731 podman[243638]: 2025-12-06 07:03:33.898715956 +0000 UTC m=+0.055307274 container health_status ae4fd08cdec31d6ae2a6b91ffff7deb6ca5a8375cb52ee4b685cc6f25796824e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, container_name=multipathd, org.label-schema.vendor=CentOS)
Dec  6 02:03:33 np0005548731 podman[243636]: 2025-12-06 07:03:33.911954094 +0000 UTC m=+0.071905652 container health_status 26c2ce9bdb83c6db5ccad7f5b2a7cb4e9354ab0235ad2f942ea42c8feda2142b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0)
Dec  6 02:03:33 np0005548731 podman[243637]: 2025-12-06 07:03:33.919345162 +0000 UTC m=+0.077542518 container health_status a118c3becf56cfbcae208065771ebf10a65162bb7b96389971293e534823fb78 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec  6 02:03:33 np0005548731 nova_compute[232433]: 2025-12-06 07:03:33.970 232437 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765004598.9690418, 4dd1863a-3cb1-4e42-8611-ed9587a0b6ce => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 02:03:33 np0005548731 nova_compute[232433]: 2025-12-06 07:03:33.970 232437 INFO nova.compute.manager [-] [instance: 4dd1863a-3cb1-4e42-8611-ed9587a0b6ce] VM Stopped (Lifecycle Event)#033[00m
Dec  6 02:03:33 np0005548731 nova_compute[232433]: 2025-12-06 07:03:33.999 232437 DEBUG nova.compute.manager [None req-01837937-1d75-4fce-b5ca-5b1d10038a9b - - - - - -] [instance: 4dd1863a-3cb1-4e42-8611-ed9587a0b6ce] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:03:34 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  6 02:03:34 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/299817' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  6 02:03:34 np0005548731 nova_compute[232433]: 2025-12-06 07:03:34.644 232437 DEBUG nova.virt.libvirt.driver [None req-5fc21d04-82e3-43dc-91cc-cf8bb9b7a810 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] [instance: 4dd1863a-3cb1-4e42-8611-ed9587a0b6ce] migrate_data in pre_live_migration: LibvirtLiveMigrateData(bdms=<?>,block_migration=False,disk_available_mb=19456,disk_over_commit=<?>,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpeswar0ys',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='4dd1863a-3cb1-4e42-8611-ed9587a0b6ce',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=True,migration=<?>,old_vol_attachment_ids={82a6fabc-b798-4571-a13e-f9c67a3f9413='48e3400e-42f6-4554-8920-994a6676cae0'},serial_listen_addr=None,serial_listen_ports=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) pre_live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10827#033[00m
Dec  6 02:03:34 np0005548731 nova_compute[232433]: 2025-12-06 07:03:34.644 232437 DEBUG nova.virt.libvirt.driver [None req-5fc21d04-82e3-43dc-91cc-cf8bb9b7a810 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] [instance: 4dd1863a-3cb1-4e42-8611-ed9587a0b6ce] Creating instance directory: /var/lib/nova/instances/4dd1863a-3cb1-4e42-8611-ed9587a0b6ce pre_live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10840#033[00m
Dec  6 02:03:34 np0005548731 nova_compute[232433]: 2025-12-06 07:03:34.645 232437 DEBUG nova.virt.libvirt.driver [None req-5fc21d04-82e3-43dc-91cc-cf8bb9b7a810 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] [instance: 4dd1863a-3cb1-4e42-8611-ed9587a0b6ce] Ensure instance console log exists: /var/lib/nova/instances/4dd1863a-3cb1-4e42-8611-ed9587a0b6ce/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec  6 02:03:34 np0005548731 nova_compute[232433]: 2025-12-06 07:03:34.645 232437 DEBUG nova.virt.libvirt.driver [None req-5fc21d04-82e3-43dc-91cc-cf8bb9b7a810 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] [instance: 4dd1863a-3cb1-4e42-8611-ed9587a0b6ce] Connecting volumes before live migration. pre_live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10901#033[00m
Dec  6 02:03:34 np0005548731 nova_compute[232433]: 2025-12-06 07:03:34.648 232437 DEBUG nova.virt.libvirt.driver [None req-5fc21d04-82e3-43dc-91cc-cf8bb9b7a810 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] [instance: 4dd1863a-3cb1-4e42-8611-ed9587a0b6ce] Plugging VIFs using destination host port bindings before live migration. _pre_live_migration_plug_vifs /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10794#033[00m
Dec  6 02:03:34 np0005548731 nova_compute[232433]: 2025-12-06 07:03:34.649 232437 DEBUG nova.virt.libvirt.vif [None req-5fc21d04-82e3-43dc-91cc-cf8bb9b7a810 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-12-06T07:02:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-LiveAutoBlockMigrationV225Test-server-509443515',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-liveautoblockmigrationv225test-server-509443515',id=23,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-06T07:03:00Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='dc1bc9517198484ab30d93ebd5d88c35',ramdisk_id='',reservation_id='r-m8zqnud8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='1',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-LiveAutoBlockMigrationV225Test-252281632',owner_user_name='tempest-LiveAutoBlockMigrationV225Test-252281632-project-member'},tags=<?>,task_state='migrating',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T07:03:26Z,user_data=None,user_id='6805353f6bf048f9b406a1e565a13f11',uuid=4dd1863a-3cb1-4e42-8611-ed9587a0b6ce,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "510079bd-5c00-4b8a-b2eb-d63f0ffc3d69", "address": "fa:16:3e:30:77:d0", "network": {"id": "dfa287bf-10c3-40fc-8071-37bb7f801357", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-1283365408-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dc1bc9517198484ab30d93ebd5d88c35", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tap510079bd-5c", "ovs_interfaceid": "510079bd-5c00-4b8a-b2eb-d63f0ffc3d69", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec  6 02:03:34 np0005548731 nova_compute[232433]: 2025-12-06 07:03:34.649 232437 DEBUG nova.network.os_vif_util [None req-5fc21d04-82e3-43dc-91cc-cf8bb9b7a810 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] Converting VIF {"id": "510079bd-5c00-4b8a-b2eb-d63f0ffc3d69", "address": "fa:16:3e:30:77:d0", "network": {"id": "dfa287bf-10c3-40fc-8071-37bb7f801357", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-1283365408-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dc1bc9517198484ab30d93ebd5d88c35", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tap510079bd-5c", "ovs_interfaceid": "510079bd-5c00-4b8a-b2eb-d63f0ffc3d69", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 02:03:34 np0005548731 nova_compute[232433]: 2025-12-06 07:03:34.650 232437 DEBUG nova.network.os_vif_util [None req-5fc21d04-82e3-43dc-91cc-cf8bb9b7a810 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:30:77:d0,bridge_name='br-int',has_traffic_filtering=True,id=510079bd-5c00-4b8a-b2eb-d63f0ffc3d69,network=Network(dfa287bf-10c3-40fc-8071-37bb7f801357),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap510079bd-5c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec  6 02:03:34 np0005548731 nova_compute[232433]: 2025-12-06 07:03:34.650 232437 DEBUG os_vif [None req-5fc21d04-82e3-43dc-91cc-cf8bb9b7a810 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] Plugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:30:77:d0,bridge_name='br-int',has_traffic_filtering=True,id=510079bd-5c00-4b8a-b2eb-d63f0ffc3d69,network=Network(dfa287bf-10c3-40fc-8071-37bb7f801357),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap510079bd-5c') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec  6 02:03:34 np0005548731 nova_compute[232433]: 2025-12-06 07:03:34.650 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 02:03:34 np0005548731 nova_compute[232433]: 2025-12-06 07:03:34.651 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec  6 02:03:34 np0005548731 nova_compute[232433]: 2025-12-06 07:03:34.651 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec  6 02:03:34 np0005548731 nova_compute[232433]: 2025-12-06 07:03:34.653 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 02:03:34 np0005548731 nova_compute[232433]: 2025-12-06 07:03:34.653 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap510079bd-5c, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec  6 02:03:34 np0005548731 nova_compute[232433]: 2025-12-06 07:03:34.654 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap510079bd-5c, col_values=(('external_ids', {'iface-id': '510079bd-5c00-4b8a-b2eb-d63f0ffc3d69', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:30:77:d0', 'vm-uuid': '4dd1863a-3cb1-4e42-8611-ed9587a0b6ce'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec  6 02:03:34 np0005548731 nova_compute[232433]: 2025-12-06 07:03:34.655 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 02:03:34 np0005548731 NetworkManager[49182]: <info>  [1765004614.6561] manager: (tap510079bd-5c): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/53)
Dec  6 02:03:34 np0005548731 nova_compute[232433]: 2025-12-06 07:03:34.656 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec  6 02:03:34 np0005548731 nova_compute[232433]: 2025-12-06 07:03:34.659 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 02:03:34 np0005548731 nova_compute[232433]: 2025-12-06 07:03:34.660 232437 INFO os_vif [None req-5fc21d04-82e3-43dc-91cc-cf8bb9b7a810 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] Successfully plugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:30:77:d0,bridge_name='br-int',has_traffic_filtering=True,id=510079bd-5c00-4b8a-b2eb-d63f0ffc3d69,network=Network(dfa287bf-10c3-40fc-8071-37bb7f801357),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap510079bd-5c')
Dec  6 02:03:34 np0005548731 nova_compute[232433]: 2025-12-06 07:03:34.662 232437 DEBUG nova.virt.libvirt.driver [None req-5fc21d04-82e3-43dc-91cc-cf8bb9b7a810 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] No dst_numa_info in migrate_data, no cores to power up in pre_live_migration. pre_live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10954
Dec  6 02:03:34 np0005548731 nova_compute[232433]: 2025-12-06 07:03:34.662 232437 DEBUG nova.compute.manager [None req-5fc21d04-82e3-43dc-91cc-cf8bb9b7a810 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] driver pre_live_migration data is LibvirtLiveMigrateData(bdms=[LibvirtLiveMigrateBDMInfo],block_migration=False,disk_available_mb=19456,disk_over_commit=<?>,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpeswar0ys',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='4dd1863a-3cb1-4e42-8611-ed9587a0b6ce',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=True,migration=<?>,old_vol_attachment_ids={82a6fabc-b798-4571-a13e-f9c67a3f9413='48e3400e-42f6-4554-8920-994a6676cae0'},serial_listen_addr=None,serial_listen_ports=[],src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=[],target_connect_addr=None,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) pre_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:8668
Dec  6 02:03:35 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e168 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:03:35 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:03:35 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:03:35 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:03:35.667 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:03:35 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:03:35 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:03:35 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:03:35.710 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:03:36 np0005548731 nova_compute[232433]: 2025-12-06 07:03:36.076 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 02:03:36 np0005548731 nova_compute[232433]: 2025-12-06 07:03:36.255 232437 DEBUG nova.network.neutron [None req-5fc21d04-82e3-43dc-91cc-cf8bb9b7a810 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] [instance: 4dd1863a-3cb1-4e42-8611-ed9587a0b6ce] Port 510079bd-5c00-4b8a-b2eb-d63f0ffc3d69 updated with migration profile {'os_vif_delegation': True, 'migrating_to': 'compute-2.ctlplane.example.com'} successfully _setup_migration_port_profile /usr/lib/python3.9/site-packages/nova/network/neutron.py:354
Dec  6 02:03:36 np0005548731 nova_compute[232433]: 2025-12-06 07:03:36.726 232437 DEBUG nova.compute.manager [None req-5fc21d04-82e3-43dc-91cc-cf8bb9b7a810 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] pre_live_migration result data is LibvirtLiveMigrateData(bdms=[LibvirtLiveMigrateBDMInfo],block_migration=False,disk_available_mb=19456,disk_over_commit=<?>,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpeswar0ys',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='4dd1863a-3cb1-4e42-8611-ed9587a0b6ce',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=True,migration=<?>,old_vol_attachment_ids={82a6fabc-b798-4571-a13e-f9c67a3f9413='48e3400e-42f6-4554-8920-994a6676cae0'},serial_listen_addr=None,serial_listen_ports=[],src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=[],target_connect_addr=None,vifs=[VIFMigrateData],wait_for_vif_plugged=True) pre_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:8723
Dec  6 02:03:36 np0005548731 podman[244025]: 2025-12-06 07:03:36.74917602 +0000 UTC m=+0.046799417 container create 59d0d912c5309b99a954d66b9266963cdee77f6d0721a695b095a18069bc32f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_maxwell, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  6 02:03:36 np0005548731 systemd[1]: Started libpod-conmon-59d0d912c5309b99a954d66b9266963cdee77f6d0721a695b095a18069bc32f3.scope.
Dec  6 02:03:36 np0005548731 systemd[1]: Started libcrun container.
Dec  6 02:03:36 np0005548731 podman[244025]: 2025-12-06 07:03:36.723675646 +0000 UTC m=+0.021299104 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  6 02:03:36 np0005548731 podman[244025]: 2025-12-06 07:03:36.833138212 +0000 UTC m=+0.130761629 container init 59d0d912c5309b99a954d66b9266963cdee77f6d0721a695b095a18069bc32f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_maxwell, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  6 02:03:36 np0005548731 podman[244025]: 2025-12-06 07:03:36.839599788 +0000 UTC m=+0.137223185 container start 59d0d912c5309b99a954d66b9266963cdee77f6d0721a695b095a18069bc32f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_maxwell, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  6 02:03:36 np0005548731 podman[244025]: 2025-12-06 07:03:36.842282823 +0000 UTC m=+0.139906220 container attach 59d0d912c5309b99a954d66b9266963cdee77f6d0721a695b095a18069bc32f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_maxwell, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  6 02:03:36 np0005548731 flamboyant_maxwell[244041]: 167 167
Dec  6 02:03:36 np0005548731 podman[244025]: 2025-12-06 07:03:36.850580973 +0000 UTC m=+0.148204390 container died 59d0d912c5309b99a954d66b9266963cdee77f6d0721a695b095a18069bc32f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_maxwell, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Dec  6 02:03:36 np0005548731 systemd[1]: Starting libvirt proxy daemon...
Dec  6 02:03:36 np0005548731 systemd[1]: libpod-59d0d912c5309b99a954d66b9266963cdee77f6d0721a695b095a18069bc32f3.scope: Deactivated successfully.
Dec  6 02:03:36 np0005548731 systemd[1]: var-lib-containers-storage-overlay-c98a80da508adf2ee3497cbabb54110433213fc69857c3ebc0674c1cab65b7fb-merged.mount: Deactivated successfully.
Dec  6 02:03:36 np0005548731 systemd[1]: Started libvirt proxy daemon.
Dec  6 02:03:36 np0005548731 podman[244025]: 2025-12-06 07:03:36.88909946 +0000 UTC m=+0.186722857 container remove 59d0d912c5309b99a954d66b9266963cdee77f6d0721a695b095a18069bc32f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_maxwell, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  6 02:03:36 np0005548731 systemd[1]: libpod-conmon-59d0d912c5309b99a954d66b9266963cdee77f6d0721a695b095a18069bc32f3.scope: Deactivated successfully.
Dec  6 02:03:36 np0005548731 kernel: tap510079bd-5c: entered promiscuous mode
Dec  6 02:03:37 np0005548731 NetworkManager[49182]: <info>  [1765004617.0014] manager: (tap510079bd-5c): new Tun device (/org/freedesktop/NetworkManager/Devices/54)
Dec  6 02:03:37 np0005548731 ovn_controller[133927]: 2025-12-06T07:03:37Z|00078|binding|INFO|Claiming lport 510079bd-5c00-4b8a-b2eb-d63f0ffc3d69 for this additional chassis.
Dec  6 02:03:37 np0005548731 ovn_controller[133927]: 2025-12-06T07:03:37Z|00079|binding|INFO|510079bd-5c00-4b8a-b2eb-d63f0ffc3d69: Claiming fa:16:3e:30:77:d0 10.100.0.8
Dec  6 02:03:37 np0005548731 nova_compute[232433]: 2025-12-06 07:03:37.000 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 02:03:37 np0005548731 ovn_controller[133927]: 2025-12-06T07:03:37Z|00080|binding|INFO|Setting lport 510079bd-5c00-4b8a-b2eb-d63f0ffc3d69 ovn-installed in OVS
Dec  6 02:03:37 np0005548731 nova_compute[232433]: 2025-12-06 07:03:37.019 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 02:03:37 np0005548731 nova_compute[232433]: 2025-12-06 07:03:37.021 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 02:03:37 np0005548731 nova_compute[232433]: 2025-12-06 07:03:37.023 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 02:03:37 np0005548731 systemd-machined[195355]: New machine qemu-11-instance-00000017.
Dec  6 02:03:37 np0005548731 systemd[1]: Started Virtual Machine qemu-11-instance-00000017.
Dec  6 02:03:37 np0005548731 systemd-udevd[244111]: Network interface NamePolicy= disabled on kernel command line.
Dec  6 02:03:37 np0005548731 podman[244094]: 2025-12-06 07:03:37.060102829 +0000 UTC m=+0.045625790 container create 7003940b8bbf36719b38204eabeb79d6d683bf690889a8f4e81229b5355d3db5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_keldysh, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0)
Dec  6 02:03:37 np0005548731 NetworkManager[49182]: <info>  [1765004617.0657] device (tap510079bd-5c): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec  6 02:03:37 np0005548731 NetworkManager[49182]: <info>  [1765004617.0669] device (tap510079bd-5c): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec  6 02:03:37 np0005548731 systemd[1]: Started libpod-conmon-7003940b8bbf36719b38204eabeb79d6d683bf690889a8f4e81229b5355d3db5.scope.
Dec  6 02:03:37 np0005548731 systemd[1]: Started libcrun container.
Dec  6 02:03:37 np0005548731 podman[244094]: 2025-12-06 07:03:37.039610915 +0000 UTC m=+0.025133906 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  6 02:03:37 np0005548731 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b57724d00c680d3e17ad55393c95f89fb931d2cd9a650bd38d628cacee418498/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  6 02:03:37 np0005548731 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b57724d00c680d3e17ad55393c95f89fb931d2cd9a650bd38d628cacee418498/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  6 02:03:37 np0005548731 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b57724d00c680d3e17ad55393c95f89fb931d2cd9a650bd38d628cacee418498/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  6 02:03:37 np0005548731 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b57724d00c680d3e17ad55393c95f89fb931d2cd9a650bd38d628cacee418498/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  6 02:03:37 np0005548731 podman[244094]: 2025-12-06 07:03:37.161128222 +0000 UTC m=+0.146651193 container init 7003940b8bbf36719b38204eabeb79d6d683bf690889a8f4e81229b5355d3db5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_keldysh, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Dec  6 02:03:37 np0005548731 podman[244094]: 2025-12-06 07:03:37.168987341 +0000 UTC m=+0.154510312 container start 7003940b8bbf36719b38204eabeb79d6d683bf690889a8f4e81229b5355d3db5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_keldysh, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Dec  6 02:03:37 np0005548731 podman[244094]: 2025-12-06 07:03:37.17269264 +0000 UTC m=+0.158215611 container attach 7003940b8bbf36719b38204eabeb79d6d683bf690889a8f4e81229b5355d3db5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_keldysh, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Dec  6 02:03:37 np0005548731 nova_compute[232433]: 2025-12-06 07:03:37.538 232437 DEBUG nova.virt.driver [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Emitting event <LifecycleEvent: 1765004617.5383706, 4dd1863a-3cb1-4e42-8611-ed9587a0b6ce => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec  6 02:03:37 np0005548731 nova_compute[232433]: 2025-12-06 07:03:37.539 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 4dd1863a-3cb1-4e42-8611-ed9587a0b6ce] VM Started (Lifecycle Event)
Dec  6 02:03:37 np0005548731 nova_compute[232433]: 2025-12-06 07:03:37.562 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 4dd1863a-3cb1-4e42-8611-ed9587a0b6ce] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec  6 02:03:37 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:03:37 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:03:37 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:03:37.670 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:03:37 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:03:37 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:03:37 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:03:37.712 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:03:38 np0005548731 nova_compute[232433]: 2025-12-06 07:03:38.017 232437 DEBUG nova.virt.driver [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Emitting event <LifecycleEvent: 1765004618.0174205, 4dd1863a-3cb1-4e42-8611-ed9587a0b6ce => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec  6 02:03:38 np0005548731 nova_compute[232433]: 2025-12-06 07:03:38.019 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 4dd1863a-3cb1-4e42-8611-ed9587a0b6ce] VM Resumed (Lifecycle Event)
Dec  6 02:03:38 np0005548731 nova_compute[232433]: 2025-12-06 07:03:38.048 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 4dd1863a-3cb1-4e42-8611-ed9587a0b6ce] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec  6 02:03:38 np0005548731 nova_compute[232433]: 2025-12-06 07:03:38.051 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 4dd1863a-3cb1-4e42-8611-ed9587a0b6ce] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: migrating, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec  6 02:03:38 np0005548731 nova_compute[232433]: 2025-12-06 07:03:38.082 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 4dd1863a-3cb1-4e42-8611-ed9587a0b6ce] During the sync_power process the instance has moved from host compute-1.ctlplane.example.com to host compute-2.ctlplane.example.com
Dec  6 02:03:38 np0005548731 suspicious_keldysh[244122]: [
Dec  6 02:03:38 np0005548731 suspicious_keldysh[244122]:    {
Dec  6 02:03:38 np0005548731 suspicious_keldysh[244122]:        "available": false,
Dec  6 02:03:38 np0005548731 suspicious_keldysh[244122]:        "ceph_device": false,
Dec  6 02:03:38 np0005548731 suspicious_keldysh[244122]:        "device_id": "QEMU_DVD-ROM_QM00001",
Dec  6 02:03:38 np0005548731 suspicious_keldysh[244122]:        "lsm_data": {},
Dec  6 02:03:38 np0005548731 suspicious_keldysh[244122]:        "lvs": [],
Dec  6 02:03:38 np0005548731 suspicious_keldysh[244122]:        "path": "/dev/sr0",
Dec  6 02:03:38 np0005548731 suspicious_keldysh[244122]:        "rejected_reasons": [
Dec  6 02:03:38 np0005548731 suspicious_keldysh[244122]:            "Insufficient space (<5GB)",
Dec  6 02:03:38 np0005548731 suspicious_keldysh[244122]:            "Has a FileSystem"
Dec  6 02:03:38 np0005548731 suspicious_keldysh[244122]:        ],
Dec  6 02:03:38 np0005548731 suspicious_keldysh[244122]:        "sys_api": {
Dec  6 02:03:38 np0005548731 suspicious_keldysh[244122]:            "actuators": null,
Dec  6 02:03:38 np0005548731 suspicious_keldysh[244122]:            "device_nodes": "sr0",
Dec  6 02:03:38 np0005548731 suspicious_keldysh[244122]:            "devname": "sr0",
Dec  6 02:03:38 np0005548731 suspicious_keldysh[244122]:            "human_readable_size": "482.00 KB",
Dec  6 02:03:38 np0005548731 suspicious_keldysh[244122]:            "id_bus": "ata",
Dec  6 02:03:38 np0005548731 suspicious_keldysh[244122]:            "model": "QEMU DVD-ROM",
Dec  6 02:03:38 np0005548731 suspicious_keldysh[244122]:            "nr_requests": "2",
Dec  6 02:03:38 np0005548731 suspicious_keldysh[244122]:            "parent": "/dev/sr0",
Dec  6 02:03:38 np0005548731 suspicious_keldysh[244122]:            "partitions": {},
Dec  6 02:03:38 np0005548731 suspicious_keldysh[244122]:            "path": "/dev/sr0",
Dec  6 02:03:38 np0005548731 suspicious_keldysh[244122]:            "removable": "1",
Dec  6 02:03:38 np0005548731 suspicious_keldysh[244122]:            "rev": "2.5+",
Dec  6 02:03:38 np0005548731 suspicious_keldysh[244122]:            "ro": "0",
Dec  6 02:03:38 np0005548731 suspicious_keldysh[244122]:            "rotational": "1",
Dec  6 02:03:38 np0005548731 suspicious_keldysh[244122]:            "sas_address": "",
Dec  6 02:03:38 np0005548731 suspicious_keldysh[244122]:            "sas_device_handle": "",
Dec  6 02:03:38 np0005548731 suspicious_keldysh[244122]:            "scheduler_mode": "mq-deadline",
Dec  6 02:03:38 np0005548731 suspicious_keldysh[244122]:            "sectors": 0,
Dec  6 02:03:38 np0005548731 suspicious_keldysh[244122]:            "sectorsize": "2048",
Dec  6 02:03:38 np0005548731 suspicious_keldysh[244122]:            "size": 493568.0,
Dec  6 02:03:38 np0005548731 suspicious_keldysh[244122]:            "support_discard": "2048",
Dec  6 02:03:38 np0005548731 suspicious_keldysh[244122]:            "type": "disk",
Dec  6 02:03:38 np0005548731 suspicious_keldysh[244122]:            "vendor": "QEMU"
Dec  6 02:03:38 np0005548731 suspicious_keldysh[244122]:        }
Dec  6 02:03:38 np0005548731 suspicious_keldysh[244122]:    }
Dec  6 02:03:38 np0005548731 suspicious_keldysh[244122]: ]
Dec  6 02:03:38 np0005548731 systemd[1]: libpod-7003940b8bbf36719b38204eabeb79d6d683bf690889a8f4e81229b5355d3db5.scope: Deactivated successfully.
Dec  6 02:03:38 np0005548731 systemd[1]: libpod-7003940b8bbf36719b38204eabeb79d6d683bf690889a8f4e81229b5355d3db5.scope: Consumed 1.216s CPU time.
Dec  6 02:03:38 np0005548731 podman[245316]: 2025-12-06 07:03:38.46579405 +0000 UTC m=+0.023629459 container died 7003940b8bbf36719b38204eabeb79d6d683bf690889a8f4e81229b5355d3db5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_keldysh, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Dec  6 02:03:38 np0005548731 systemd[1]: var-lib-containers-storage-overlay-b57724d00c680d3e17ad55393c95f89fb931d2cd9a650bd38d628cacee418498-merged.mount: Deactivated successfully.
Dec  6 02:03:38 np0005548731 podman[245316]: 2025-12-06 07:03:38.518159062 +0000 UTC m=+0.075994461 container remove 7003940b8bbf36719b38204eabeb79d6d683bf690889a8f4e81229b5355d3db5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_keldysh, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec  6 02:03:38 np0005548731 systemd[1]: libpod-conmon-7003940b8bbf36719b38204eabeb79d6d683bf690889a8f4e81229b5355d3db5.scope: Deactivated successfully.
Dec  6 02:03:39 np0005548731 nova_compute[232433]: 2025-12-06 07:03:39.656 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 02:03:39 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:03:39 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 02:03:39 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:03:39.672 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 02:03:39 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:03:39 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:03:39 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:03:39.714 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:03:39 np0005548731 ovn_controller[133927]: 2025-12-06T07:03:39Z|00081|binding|INFO|Claiming lport 510079bd-5c00-4b8a-b2eb-d63f0ffc3d69 for this chassis.
Dec  6 02:03:39 np0005548731 ovn_controller[133927]: 2025-12-06T07:03:39Z|00082|binding|INFO|510079bd-5c00-4b8a-b2eb-d63f0ffc3d69: Claiming fa:16:3e:30:77:d0 10.100.0.8
Dec  6 02:03:39 np0005548731 ovn_controller[133927]: 2025-12-06T07:03:39Z|00083|binding|INFO|Setting lport 510079bd-5c00-4b8a-b2eb-d63f0ffc3d69 up in Southbound
Dec  6 02:03:39 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:03:39.780 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:30:77:d0 10.100.0.8'], port_security=['fa:16:3e:30:77:d0 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '4dd1863a-3cb1-4e42-8611-ed9587a0b6ce', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-dfa287bf-10c3-40fc-8071-37bb7f801357', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'dc1bc9517198484ab30d93ebd5d88c35', 'neutron:revision_number': '19', 'neutron:security_group_ids': '0f21756f-764b-4e57-8875-8daafaed0f4c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7640d9cc-b332-470c-9f54-e0b0e119f55f, chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], logical_port=510079bd-5c00-4b8a-b2eb-d63f0ffc3d69) old=Port_Binding(up=[False], additional_chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec  6 02:03:39 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:03:39.782 143965 INFO neutron.agent.ovn.metadata.agent [-] Port 510079bd-5c00-4b8a-b2eb-d63f0ffc3d69 in datapath dfa287bf-10c3-40fc-8071-37bb7f801357 bound to our chassis
Dec  6 02:03:39 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:03:39.784 143965 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network dfa287bf-10c3-40fc-8071-37bb7f801357
Dec  6 02:03:39 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:03:39.795 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[41a66be3-06cf-4f6d-a0cc-74b2b35e413a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  6 02:03:39 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:03:39.796 143965 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapdfa287bf-11 in ovnmeta-dfa287bf-10c3-40fc-8071-37bb7f801357 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Dec  6 02:03:39 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:03:39.798 236668 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapdfa287bf-10 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec  6 02:03:39 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:03:39.798 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[3a50b34f-c1e1-4d70-8248-d5076b49db46]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  6 02:03:39 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:03:39.799 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[8835dff3-5334-4eb2-a6fe-a8a1d637fa41]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  6 02:03:39 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:03:39.812 144079 DEBUG oslo.privsep.daemon [-] privsep: reply[5cc372d5-e31c-443e-8592-1ceccf13a269]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  6 02:03:39 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:03:39.826 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[9745fd66-cf23-4dc0-b971-e975ffccfbf5]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:03:39 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:03:39.856 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[44b8674a-b7be-4285-9d2b-af7b155c5bd3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:03:39 np0005548731 systemd-udevd[244113]: Network interface NamePolicy= disabled on kernel command line.
Dec  6 02:03:39 np0005548731 NetworkManager[49182]: <info>  [1765004619.8634] manager: (tapdfa287bf-10): new Veth device (/org/freedesktop/NetworkManager/Devices/55)
Dec  6 02:03:39 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:03:39.862 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[ae3b7b02-0f18-42e8-a68b-79eb08b63275]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:03:39 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:03:39.901 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[36f3e4e8-e5b0-4d4f-bd20-3346055023d7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:03:39 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:03:39.905 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[1e2e7abe-ca76-46c4-998e-9c697d5e9a43]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:03:39 np0005548731 NetworkManager[49182]: <info>  [1765004619.9325] device (tapdfa287bf-10): carrier: link connected
Dec  6 02:03:39 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:03:39.941 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[f2a350f4-1d00-4f48-9051-bbc430ec5b50]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:03:39 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:03:39.961 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[b29eb75b-02d1-4962-943c-baf02ada6652]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapdfa287bf-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:55:a3:41'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 30], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 488193, 'reachable_time': 38792, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 245355, 'error': None, 'target': 'ovnmeta-dfa287bf-10c3-40fc-8071-37bb7f801357', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:03:39 np0005548731 nova_compute[232433]: 2025-12-06 07:03:39.972 232437 INFO nova.compute.manager [None req-5fc21d04-82e3-43dc-91cc-cf8bb9b7a810 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] [instance: 4dd1863a-3cb1-4e42-8611-ed9587a0b6ce] Post operation of migration started#033[00m
Dec  6 02:03:39 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:03:39.982 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[5346e3c8-8f33-4061-8871-b98ccd9e7e07]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe55:a341'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 488193, 'tstamp': 488193}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 245356, 'error': None, 'target': 'ovnmeta-dfa287bf-10c3-40fc-8071-37bb7f801357', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:03:40 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:03:40.001 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[05623eaa-812a-4189-907c-43634c4b7d95]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapdfa287bf-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:55:a3:41'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 30], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 488193, 'reachable_time': 38792, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 152, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 152, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 245357, 'error': None, 'target': 'ovnmeta-dfa287bf-10c3-40fc-8071-37bb7f801357', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:03:40 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:03:40.039 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[1b209dd0-b294-44f5-8679-0fc396c865c6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:03:40 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:03:40.102 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[0b9b7479-ccc5-4cb1-9048-462be3408eba]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:03:40 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:03:40.104 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapdfa287bf-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:03:40 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:03:40.104 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  6 02:03:40 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:03:40.104 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapdfa287bf-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:03:40 np0005548731 NetworkManager[49182]: <info>  [1765004620.1069] manager: (tapdfa287bf-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/56)
Dec  6 02:03:40 np0005548731 nova_compute[232433]: 2025-12-06 07:03:40.106 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:03:40 np0005548731 kernel: tapdfa287bf-10: entered promiscuous mode
Dec  6 02:03:40 np0005548731 nova_compute[232433]: 2025-12-06 07:03:40.109 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:03:40 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:03:40.110 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapdfa287bf-10, col_values=(('external_ids', {'iface-id': 'a8b489de-cf80-4c12-869a-5e807cdbba8c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:03:40 np0005548731 nova_compute[232433]: 2025-12-06 07:03:40.111 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:03:40 np0005548731 ovn_controller[133927]: 2025-12-06T07:03:40Z|00084|binding|INFO|Releasing lport a8b489de-cf80-4c12-869a-5e807cdbba8c from this chassis (sb_readonly=0)
Dec  6 02:03:40 np0005548731 nova_compute[232433]: 2025-12-06 07:03:40.126 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:03:40 np0005548731 nova_compute[232433]: 2025-12-06 07:03:40.128 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:03:40 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:03:40.128 143965 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/dfa287bf-10c3-40fc-8071-37bb7f801357.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/dfa287bf-10c3-40fc-8071-37bb7f801357.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Dec  6 02:03:40 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:03:40.129 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[48fd1f19-defc-49b8-8015-682ad4f626d2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:03:40 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:03:40.130 143965 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec  6 02:03:40 np0005548731 ovn_metadata_agent[143960]: global
Dec  6 02:03:40 np0005548731 ovn_metadata_agent[143960]:    log         /dev/log local0 debug
Dec  6 02:03:40 np0005548731 ovn_metadata_agent[143960]:    log-tag     haproxy-metadata-proxy-dfa287bf-10c3-40fc-8071-37bb7f801357
Dec  6 02:03:40 np0005548731 ovn_metadata_agent[143960]:    user        root
Dec  6 02:03:40 np0005548731 ovn_metadata_agent[143960]:    group       root
Dec  6 02:03:40 np0005548731 ovn_metadata_agent[143960]:    maxconn     1024
Dec  6 02:03:40 np0005548731 ovn_metadata_agent[143960]:    pidfile     /var/lib/neutron/external/pids/dfa287bf-10c3-40fc-8071-37bb7f801357.pid.haproxy
Dec  6 02:03:40 np0005548731 ovn_metadata_agent[143960]:    daemon
Dec  6 02:03:40 np0005548731 ovn_metadata_agent[143960]: 
Dec  6 02:03:40 np0005548731 ovn_metadata_agent[143960]: defaults
Dec  6 02:03:40 np0005548731 ovn_metadata_agent[143960]:    log global
Dec  6 02:03:40 np0005548731 ovn_metadata_agent[143960]:    mode http
Dec  6 02:03:40 np0005548731 ovn_metadata_agent[143960]:    option httplog
Dec  6 02:03:40 np0005548731 ovn_metadata_agent[143960]:    option dontlognull
Dec  6 02:03:40 np0005548731 ovn_metadata_agent[143960]:    option http-server-close
Dec  6 02:03:40 np0005548731 ovn_metadata_agent[143960]:    option forwardfor
Dec  6 02:03:40 np0005548731 ovn_metadata_agent[143960]:    retries                 3
Dec  6 02:03:40 np0005548731 ovn_metadata_agent[143960]:    timeout http-request    30s
Dec  6 02:03:40 np0005548731 ovn_metadata_agent[143960]:    timeout connect         30s
Dec  6 02:03:40 np0005548731 ovn_metadata_agent[143960]:    timeout client          32s
Dec  6 02:03:40 np0005548731 ovn_metadata_agent[143960]:    timeout server          32s
Dec  6 02:03:40 np0005548731 ovn_metadata_agent[143960]:    timeout http-keep-alive 30s
Dec  6 02:03:40 np0005548731 ovn_metadata_agent[143960]: 
Dec  6 02:03:40 np0005548731 ovn_metadata_agent[143960]: 
Dec  6 02:03:40 np0005548731 ovn_metadata_agent[143960]: listen listener
Dec  6 02:03:40 np0005548731 ovn_metadata_agent[143960]:    bind 169.254.169.254:80
Dec  6 02:03:40 np0005548731 ovn_metadata_agent[143960]:    server metadata /var/lib/neutron/metadata_proxy
Dec  6 02:03:40 np0005548731 ovn_metadata_agent[143960]:    http-request add-header X-OVN-Network-ID dfa287bf-10c3-40fc-8071-37bb7f801357
Dec  6 02:03:40 np0005548731 ovn_metadata_agent[143960]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Dec  6 02:03:40 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:03:40.131 143965 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-dfa287bf-10c3-40fc-8071-37bb7f801357', 'env', 'PROCESS_TAG=haproxy-dfa287bf-10c3-40fc-8071-37bb7f801357', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/dfa287bf-10c3-40fc-8071-37bb7f801357.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Dec  6 02:03:40 np0005548731 nova_compute[232433]: 2025-12-06 07:03:40.154 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:03:40 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e168 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:03:40 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 02:03:40 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 02:03:40 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  6 02:03:40 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 02:03:40 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  6 02:03:40 np0005548731 nova_compute[232433]: 2025-12-06 07:03:40.394 232437 DEBUG oslo_concurrency.lockutils [None req-5fc21d04-82e3-43dc-91cc-cf8bb9b7a810 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] Acquiring lock "refresh_cache-4dd1863a-3cb1-4e42-8611-ed9587a0b6ce" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 02:03:40 np0005548731 nova_compute[232433]: 2025-12-06 07:03:40.395 232437 DEBUG oslo_concurrency.lockutils [None req-5fc21d04-82e3-43dc-91cc-cf8bb9b7a810 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] Acquired lock "refresh_cache-4dd1863a-3cb1-4e42-8611-ed9587a0b6ce" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 02:03:40 np0005548731 nova_compute[232433]: 2025-12-06 07:03:40.395 232437 DEBUG nova.network.neutron [None req-5fc21d04-82e3-43dc-91cc-cf8bb9b7a810 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] [instance: 4dd1863a-3cb1-4e42-8611-ed9587a0b6ce] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec  6 02:03:40 np0005548731 podman[245390]: 2025-12-06 07:03:40.496107116 +0000 UTC m=+0.043923890 container create 84d0a632e061abc1f71de0cd7232ca6e9877ef5099bb6a1f4ead67bd0bd18d03 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-dfa287bf-10c3-40fc-8071-37bb7f801357, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  6 02:03:40 np0005548731 systemd[1]: Started libpod-conmon-84d0a632e061abc1f71de0cd7232ca6e9877ef5099bb6a1f4ead67bd0bd18d03.scope.
Dec  6 02:03:40 np0005548731 systemd[1]: Started libcrun container.
Dec  6 02:03:40 np0005548731 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a2d44bbfcbda0aa48aaa7fb1219b310ce275eb18cbb1ff9923e067e5a335bb9/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec  6 02:03:40 np0005548731 podman[245390]: 2025-12-06 07:03:40.473081061 +0000 UTC m=+0.020897855 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec  6 02:03:40 np0005548731 podman[245390]: 2025-12-06 07:03:40.579830361 +0000 UTC m=+0.127647165 container init 84d0a632e061abc1f71de0cd7232ca6e9877ef5099bb6a1f4ead67bd0bd18d03 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-dfa287bf-10c3-40fc-8071-37bb7f801357, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125)
Dec  6 02:03:40 np0005548731 podman[245390]: 2025-12-06 07:03:40.585265592 +0000 UTC m=+0.133082366 container start 84d0a632e061abc1f71de0cd7232ca6e9877ef5099bb6a1f4ead67bd0bd18d03 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-dfa287bf-10c3-40fc-8071-37bb7f801357, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125)
Dec  6 02:03:40 np0005548731 neutron-haproxy-ovnmeta-dfa287bf-10c3-40fc-8071-37bb7f801357[245405]: [NOTICE]   (245409) : New worker (245411) forked
Dec  6 02:03:40 np0005548731 neutron-haproxy-ovnmeta-dfa287bf-10c3-40fc-8071-37bb7f801357[245405]: [NOTICE]   (245409) : Loading success.
Dec  6 02:03:41 np0005548731 nova_compute[232433]: 2025-12-06 07:03:41.080 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:03:41 np0005548731 nova_compute[232433]: 2025-12-06 07:03:41.100 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:03:41 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:03:41 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:03:41 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:03:41.675 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:03:41 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:03:41 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:03:41 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:03:41.716 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:03:42 np0005548731 nova_compute[232433]: 2025-12-06 07:03:42.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:03:43 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:03:43 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:03:43 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:03:43.678 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:03:43 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:03:43 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:03:43 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:03:43.718 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:03:44 np0005548731 nova_compute[232433]: 2025-12-06 07:03:44.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:03:44 np0005548731 nova_compute[232433]: 2025-12-06 07:03:44.330 232437 DEBUG nova.network.neutron [None req-5fc21d04-82e3-43dc-91cc-cf8bb9b7a810 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] [instance: 4dd1863a-3cb1-4e42-8611-ed9587a0b6ce] Updating instance_info_cache with network_info: [{"id": "510079bd-5c00-4b8a-b2eb-d63f0ffc3d69", "address": "fa:16:3e:30:77:d0", "network": {"id": "dfa287bf-10c3-40fc-8071-37bb7f801357", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-1283365408-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dc1bc9517198484ab30d93ebd5d88c35", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap510079bd-5c", "ovs_interfaceid": "510079bd-5c00-4b8a-b2eb-d63f0ffc3d69", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 02:03:44 np0005548731 nova_compute[232433]: 2025-12-06 07:03:44.392 232437 DEBUG oslo_concurrency.lockutils [None req-5fc21d04-82e3-43dc-91cc-cf8bb9b7a810 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] Releasing lock "refresh_cache-4dd1863a-3cb1-4e42-8611-ed9587a0b6ce" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 02:03:44 np0005548731 nova_compute[232433]: 2025-12-06 07:03:44.420 232437 DEBUG oslo_concurrency.lockutils [None req-5fc21d04-82e3-43dc-91cc-cf8bb9b7a810 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.allocate_pci_devices_for_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:03:44 np0005548731 nova_compute[232433]: 2025-12-06 07:03:44.421 232437 DEBUG oslo_concurrency.lockutils [None req-5fc21d04-82e3-43dc-91cc-cf8bb9b7a810 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.allocate_pci_devices_for_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:03:44 np0005548731 nova_compute[232433]: 2025-12-06 07:03:44.421 232437 DEBUG oslo_concurrency.lockutils [None req-5fc21d04-82e3-43dc-91cc-cf8bb9b7a810 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.allocate_pci_devices_for_instance" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:03:44 np0005548731 nova_compute[232433]: 2025-12-06 07:03:44.425 232437 INFO nova.virt.libvirt.driver [None req-5fc21d04-82e3-43dc-91cc-cf8bb9b7a810 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] [instance: 4dd1863a-3cb1-4e42-8611-ed9587a0b6ce] Sending announce-self command to QEMU monitor. Attempt 1 of 3#033[00m
Dec  6 02:03:44 np0005548731 virtqemud[232080]: Domain id=11 name='instance-00000017' uuid=4dd1863a-3cb1-4e42-8611-ed9587a0b6ce is tainted: custom-monitor
Dec  6 02:03:44 np0005548731 nova_compute[232433]: 2025-12-06 07:03:44.658 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:03:45 np0005548731 nova_compute[232433]: 2025-12-06 07:03:45.103 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:03:45 np0005548731 nova_compute[232433]: 2025-12-06 07:03:45.104 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  6 02:03:45 np0005548731 nova_compute[232433]: 2025-12-06 07:03:45.104 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  6 02:03:45 np0005548731 nova_compute[232433]: 2025-12-06 07:03:45.263 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Acquiring lock "refresh_cache-646ce79a-a40a-4b2a-9af5-90c98c98b72a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 02:03:45 np0005548731 nova_compute[232433]: 2025-12-06 07:03:45.264 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Acquired lock "refresh_cache-646ce79a-a40a-4b2a-9af5-90c98c98b72a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 02:03:45 np0005548731 nova_compute[232433]: 2025-12-06 07:03:45.264 232437 DEBUG nova.network.neutron [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] [instance: 646ce79a-a40a-4b2a-9af5-90c98c98b72a] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Dec  6 02:03:45 np0005548731 nova_compute[232433]: 2025-12-06 07:03:45.264 232437 DEBUG nova.objects.instance [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lazy-loading 'info_cache' on Instance uuid 646ce79a-a40a-4b2a-9af5-90c98c98b72a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:03:45 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e168 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:03:45 np0005548731 nova_compute[232433]: 2025-12-06 07:03:45.432 232437 INFO nova.virt.libvirt.driver [None req-5fc21d04-82e3-43dc-91cc-cf8bb9b7a810 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] [instance: 4dd1863a-3cb1-4e42-8611-ed9587a0b6ce] Sending announce-self command to QEMU monitor. Attempt 2 of 3#033[00m
Dec  6 02:03:45 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:03:45 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:03:45 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:03:45.682 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:03:45 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:03:45 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:03:45 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:03:45.721 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:03:45 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 02:03:45 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 02:03:46 np0005548731 nova_compute[232433]: 2025-12-06 07:03:46.082 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:03:46 np0005548731 nova_compute[232433]: 2025-12-06 07:03:46.437 232437 INFO nova.virt.libvirt.driver [None req-5fc21d04-82e3-43dc-91cc-cf8bb9b7a810 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] [instance: 4dd1863a-3cb1-4e42-8611-ed9587a0b6ce] Sending announce-self command to QEMU monitor. Attempt 3 of 3#033[00m
Dec  6 02:03:46 np0005548731 nova_compute[232433]: 2025-12-06 07:03:46.443 232437 DEBUG nova.compute.manager [None req-5fc21d04-82e3-43dc-91cc-cf8bb9b7a810 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] [instance: 4dd1863a-3cb1-4e42-8611-ed9587a0b6ce] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:03:46 np0005548731 nova_compute[232433]: 2025-12-06 07:03:46.462 232437 DEBUG nova.objects.instance [None req-5fc21d04-82e3-43dc-91cc-cf8bb9b7a810 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] [instance: 4dd1863a-3cb1-4e42-8611-ed9587a0b6ce] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032#033[00m
Dec  6 02:03:47 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:03:47 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:03:47 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:03:47.685 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:03:47 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:03:47 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 02:03:47 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:03:47.723 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 02:03:48 np0005548731 nova_compute[232433]: 2025-12-06 07:03:48.410 232437 DEBUG nova.network.neutron [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] [instance: 646ce79a-a40a-4b2a-9af5-90c98c98b72a] Updating instance_info_cache with network_info: [{"id": "45a08bd2-8b6a-4af2-bd49-4df95910daa9", "address": "fa:16:3e:12:9e:14", "network": {"id": "9451d867-0aba-464d-b4d9-f947b887e903", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-291936370-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b6179a8b65c2484eb7ca1e068d93a58c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap45a08bd2-8b", "ovs_interfaceid": "45a08bd2-8b6a-4af2-bd49-4df95910daa9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 02:03:48 np0005548731 nova_compute[232433]: 2025-12-06 07:03:48.433 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Releasing lock "refresh_cache-646ce79a-a40a-4b2a-9af5-90c98c98b72a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 02:03:48 np0005548731 nova_compute[232433]: 2025-12-06 07:03:48.434 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] [instance: 646ce79a-a40a-4b2a-9af5-90c98c98b72a] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Dec  6 02:03:48 np0005548731 nova_compute[232433]: 2025-12-06 07:03:48.434 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:03:48 np0005548731 nova_compute[232433]: 2025-12-06 07:03:48.434 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:03:48 np0005548731 nova_compute[232433]: 2025-12-06 07:03:48.434 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:03:48 np0005548731 nova_compute[232433]: 2025-12-06 07:03:48.599 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:03:48 np0005548731 nova_compute[232433]: 2025-12-06 07:03:48.599 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:03:48 np0005548731 nova_compute[232433]: 2025-12-06 07:03:48.600 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:03:48 np0005548731 nova_compute[232433]: 2025-12-06 07:03:48.600 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  6 02:03:48 np0005548731 nova_compute[232433]: 2025-12-06 07:03:48.600 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:03:49 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 02:03:49 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2535004301' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 02:03:49 np0005548731 nova_compute[232433]: 2025-12-06 07:03:49.074 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.474s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:03:49 np0005548731 nova_compute[232433]: 2025-12-06 07:03:49.150 232437 DEBUG nova.virt.libvirt.driver [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] skipping disk for instance-00000017 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  6 02:03:49 np0005548731 nova_compute[232433]: 2025-12-06 07:03:49.151 232437 DEBUG nova.virt.libvirt.driver [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] skipping disk for instance-00000017 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  6 02:03:49 np0005548731 nova_compute[232433]: 2025-12-06 07:03:49.154 232437 DEBUG nova.virt.libvirt.driver [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] skipping disk for instance-00000014 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  6 02:03:49 np0005548731 nova_compute[232433]: 2025-12-06 07:03:49.154 232437 DEBUG nova.virt.libvirt.driver [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] skipping disk for instance-00000014 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  6 02:03:49 np0005548731 ceph-osd[80147]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #45. Immutable memtables: 2.
Dec  6 02:03:49 np0005548731 nova_compute[232433]: 2025-12-06 07:03:49.303 232437 WARNING nova.virt.libvirt.driver [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  6 02:03:49 np0005548731 nova_compute[232433]: 2025-12-06 07:03:49.304 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4421MB free_disk=20.83074188232422GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  6 02:03:49 np0005548731 nova_compute[232433]: 2025-12-06 07:03:49.304 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:03:49 np0005548731 nova_compute[232433]: 2025-12-06 07:03:49.305 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:03:49 np0005548731 nova_compute[232433]: 2025-12-06 07:03:49.375 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Applying migration context for instance 4dd1863a-3cb1-4e42-8611-ed9587a0b6ce as it has an incoming, in-progress migration 180afc56-4a01-4d3b-b7b6-eba0219610c0. Migration status is running _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:950#033[00m
Dec  6 02:03:49 np0005548731 nova_compute[232433]: 2025-12-06 07:03:49.375 232437 DEBUG nova.objects.instance [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] [instance: 4dd1863a-3cb1-4e42-8611-ed9587a0b6ce] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032#033[00m
Dec  6 02:03:49 np0005548731 nova_compute[232433]: 2025-12-06 07:03:49.376 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] [instance: 4dd1863a-3cb1-4e42-8611-ed9587a0b6ce] Skipping migration as instance is neither resizing nor live-migrating. _update_usage_from_migrations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1491#033[00m
Dec  6 02:03:49 np0005548731 nova_compute[232433]: 2025-12-06 07:03:49.410 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Instance 646ce79a-a40a-4b2a-9af5-90c98c98b72a actively managed on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128, 'DISK_GB': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  6 02:03:49 np0005548731 nova_compute[232433]: 2025-12-06 07:03:49.411 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Instance 4dd1863a-3cb1-4e42-8611-ed9587a0b6ce actively managed on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  6 02:03:49 np0005548731 nova_compute[232433]: 2025-12-06 07:03:49.411 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  6 02:03:49 np0005548731 nova_compute[232433]: 2025-12-06 07:03:49.411 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7680MB used_ram=768MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  6 02:03:49 np0005548731 nova_compute[232433]: 2025-12-06 07:03:49.555 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:03:49 np0005548731 nova_compute[232433]: 2025-12-06 07:03:49.661 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:03:49 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:03:49 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:03:49 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:03:49.688 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:03:49 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:03:49 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:03:49 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:03:49.725 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:03:49 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 02:03:49 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2473430784' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 02:03:50 np0005548731 nova_compute[232433]: 2025-12-06 07:03:50.018 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:03:50 np0005548731 nova_compute[232433]: 2025-12-06 07:03:50.023 232437 DEBUG nova.compute.provider_tree [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Inventory has not changed in ProviderTree for provider: 6d00757a-082f-486d-ae84-869a2ba2e6e7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  6 02:03:50 np0005548731 nova_compute[232433]: 2025-12-06 07:03:50.037 232437 DEBUG nova.scheduler.client.report [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Inventory has not changed for provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  6 02:03:50 np0005548731 nova_compute[232433]: 2025-12-06 07:03:50.059 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  6 02:03:50 np0005548731 nova_compute[232433]: 2025-12-06 07:03:50.059 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.754s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:03:50 np0005548731 nova_compute[232433]: 2025-12-06 07:03:50.156 232437 DEBUG oslo_concurrency.lockutils [None req-1b17e9c9-95a7-476d-904a-b53a93986e12 6805353f6bf048f9b406a1e565a13f11 dc1bc9517198484ab30d93ebd5d88c35 - - default default] Acquiring lock "4dd1863a-3cb1-4e42-8611-ed9587a0b6ce" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:03:50 np0005548731 nova_compute[232433]: 2025-12-06 07:03:50.157 232437 DEBUG oslo_concurrency.lockutils [None req-1b17e9c9-95a7-476d-904a-b53a93986e12 6805353f6bf048f9b406a1e565a13f11 dc1bc9517198484ab30d93ebd5d88c35 - - default default] Lock "4dd1863a-3cb1-4e42-8611-ed9587a0b6ce" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:03:50 np0005548731 nova_compute[232433]: 2025-12-06 07:03:50.158 232437 DEBUG oslo_concurrency.lockutils [None req-1b17e9c9-95a7-476d-904a-b53a93986e12 6805353f6bf048f9b406a1e565a13f11 dc1bc9517198484ab30d93ebd5d88c35 - - default default] Acquiring lock "4dd1863a-3cb1-4e42-8611-ed9587a0b6ce-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:03:50 np0005548731 nova_compute[232433]: 2025-12-06 07:03:50.158 232437 DEBUG oslo_concurrency.lockutils [None req-1b17e9c9-95a7-476d-904a-b53a93986e12 6805353f6bf048f9b406a1e565a13f11 dc1bc9517198484ab30d93ebd5d88c35 - - default default] Lock "4dd1863a-3cb1-4e42-8611-ed9587a0b6ce-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:03:50 np0005548731 nova_compute[232433]: 2025-12-06 07:03:50.159 232437 DEBUG oslo_concurrency.lockutils [None req-1b17e9c9-95a7-476d-904a-b53a93986e12 6805353f6bf048f9b406a1e565a13f11 dc1bc9517198484ab30d93ebd5d88c35 - - default default] Lock "4dd1863a-3cb1-4e42-8611-ed9587a0b6ce-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:03:50 np0005548731 nova_compute[232433]: 2025-12-06 07:03:50.160 232437 INFO nova.compute.manager [None req-1b17e9c9-95a7-476d-904a-b53a93986e12 6805353f6bf048f9b406a1e565a13f11 dc1bc9517198484ab30d93ebd5d88c35 - - default default] [instance: 4dd1863a-3cb1-4e42-8611-ed9587a0b6ce] Terminating instance#033[00m
Dec  6 02:03:50 np0005548731 nova_compute[232433]: 2025-12-06 07:03:50.161 232437 DEBUG nova.compute.manager [None req-1b17e9c9-95a7-476d-904a-b53a93986e12 6805353f6bf048f9b406a1e565a13f11 dc1bc9517198484ab30d93ebd5d88c35 - - default default] [instance: 4dd1863a-3cb1-4e42-8611-ed9587a0b6ce] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec  6 02:03:50 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e168 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:03:50 np0005548731 kernel: tap510079bd-5c (unregistering): left promiscuous mode
Dec  6 02:03:50 np0005548731 NetworkManager[49182]: <info>  [1765004630.9225] device (tap510079bd-5c): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec  6 02:03:50 np0005548731 ovn_controller[133927]: 2025-12-06T07:03:50Z|00085|binding|INFO|Releasing lport 510079bd-5c00-4b8a-b2eb-d63f0ffc3d69 from this chassis (sb_readonly=0)
Dec  6 02:03:50 np0005548731 ovn_controller[133927]: 2025-12-06T07:03:50Z|00086|binding|INFO|Setting lport 510079bd-5c00-4b8a-b2eb-d63f0ffc3d69 down in Southbound
Dec  6 02:03:50 np0005548731 ovn_controller[133927]: 2025-12-06T07:03:50Z|00087|binding|INFO|Removing iface tap510079bd-5c ovn-installed in OVS
Dec  6 02:03:50 np0005548731 nova_compute[232433]: 2025-12-06 07:03:50.932 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:03:50 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:03:50.939 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:30:77:d0 10.100.0.8'], port_security=['fa:16:3e:30:77:d0 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '4dd1863a-3cb1-4e42-8611-ed9587a0b6ce', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-dfa287bf-10c3-40fc-8071-37bb7f801357', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'dc1bc9517198484ab30d93ebd5d88c35', 'neutron:revision_number': '21', 'neutron:security_group_ids': '0f21756f-764b-4e57-8875-8daafaed0f4c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7640d9cc-b332-470c-9f54-e0b0e119f55f, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], logical_port=510079bd-5c00-4b8a-b2eb-d63f0ffc3d69) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 02:03:50 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:03:50.940 143965 INFO neutron.agent.ovn.metadata.agent [-] Port 510079bd-5c00-4b8a-b2eb-d63f0ffc3d69 in datapath dfa287bf-10c3-40fc-8071-37bb7f801357 unbound from our chassis#033[00m
Dec  6 02:03:50 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:03:50.942 143965 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network dfa287bf-10c3-40fc-8071-37bb7f801357, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec  6 02:03:50 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:03:50.944 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[257b3c6f-8022-45e3-8e67-dfa5261f891a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:03:50 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:03:50.944 143965 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-dfa287bf-10c3-40fc-8071-37bb7f801357 namespace which is not needed anymore#033[00m
Dec  6 02:03:50 np0005548731 nova_compute[232433]: 2025-12-06 07:03:50.948 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:03:50 np0005548731 systemd[1]: machine-qemu\x2d11\x2dinstance\x2d00000017.scope: Deactivated successfully.
Dec  6 02:03:50 np0005548731 systemd[1]: machine-qemu\x2d11\x2dinstance\x2d00000017.scope: Consumed 1.409s CPU time.
Dec  6 02:03:50 np0005548731 systemd-machined[195355]: Machine qemu-11-instance-00000017 terminated.
Dec  6 02:03:51 np0005548731 neutron-haproxy-ovnmeta-dfa287bf-10c3-40fc-8071-37bb7f801357[245405]: [NOTICE]   (245409) : haproxy version is 2.8.14-c23fe91
Dec  6 02:03:51 np0005548731 neutron-haproxy-ovnmeta-dfa287bf-10c3-40fc-8071-37bb7f801357[245405]: [NOTICE]   (245409) : path to executable is /usr/sbin/haproxy
Dec  6 02:03:51 np0005548731 neutron-haproxy-ovnmeta-dfa287bf-10c3-40fc-8071-37bb7f801357[245405]: [WARNING]  (245409) : Exiting Master process...
Dec  6 02:03:51 np0005548731 neutron-haproxy-ovnmeta-dfa287bf-10c3-40fc-8071-37bb7f801357[245405]: [ALERT]    (245409) : Current worker (245411) exited with code 143 (Terminated)
Dec  6 02:03:51 np0005548731 neutron-haproxy-ovnmeta-dfa287bf-10c3-40fc-8071-37bb7f801357[245405]: [WARNING]  (245409) : All workers exited. Exiting... (0)
Dec  6 02:03:51 np0005548731 systemd[1]: libpod-84d0a632e061abc1f71de0cd7232ca6e9877ef5099bb6a1f4ead67bd0bd18d03.scope: Deactivated successfully.
Dec  6 02:03:51 np0005548731 podman[245544]: 2025-12-06 07:03:51.075888111 +0000 UTC m=+0.043273933 container died 84d0a632e061abc1f71de0cd7232ca6e9877ef5099bb6a1f4ead67bd0bd18d03 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-dfa287bf-10c3-40fc-8071-37bb7f801357, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3)
Dec  6 02:03:51 np0005548731 nova_compute[232433]: 2025-12-06 07:03:51.084 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:03:51 np0005548731 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-84d0a632e061abc1f71de0cd7232ca6e9877ef5099bb6a1f4ead67bd0bd18d03-userdata-shm.mount: Deactivated successfully.
Dec  6 02:03:51 np0005548731 systemd[1]: var-lib-containers-storage-overlay-8a2d44bbfcbda0aa48aaa7fb1219b310ce275eb18cbb1ff9923e067e5a335bb9-merged.mount: Deactivated successfully.
Dec  6 02:03:51 np0005548731 nova_compute[232433]: 2025-12-06 07:03:51.124 232437 DEBUG oslo_concurrency.lockutils [None req-bde758fc-8653-4b50-ba81-8d273476feeb a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Acquiring lock "646ce79a-a40a-4b2a-9af5-90c98c98b72a" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:03:51 np0005548731 nova_compute[232433]: 2025-12-06 07:03:51.125 232437 DEBUG oslo_concurrency.lockutils [None req-bde758fc-8653-4b50-ba81-8d273476feeb a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Lock "646ce79a-a40a-4b2a-9af5-90c98c98b72a" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:03:51 np0005548731 podman[245544]: 2025-12-06 07:03:51.125906705 +0000 UTC m=+0.093292527 container cleanup 84d0a632e061abc1f71de0cd7232ca6e9877ef5099bb6a1f4ead67bd0bd18d03 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-dfa287bf-10c3-40fc-8071-37bb7f801357, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Dec  6 02:03:51 np0005548731 nova_compute[232433]: 2025-12-06 07:03:51.125 232437 DEBUG oslo_concurrency.lockutils [None req-bde758fc-8653-4b50-ba81-8d273476feeb a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Acquiring lock "646ce79a-a40a-4b2a-9af5-90c98c98b72a-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:03:51 np0005548731 nova_compute[232433]: 2025-12-06 07:03:51.126 232437 DEBUG oslo_concurrency.lockutils [None req-bde758fc-8653-4b50-ba81-8d273476feeb a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Lock "646ce79a-a40a-4b2a-9af5-90c98c98b72a-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:03:51 np0005548731 nova_compute[232433]: 2025-12-06 07:03:51.126 232437 DEBUG oslo_concurrency.lockutils [None req-bde758fc-8653-4b50-ba81-8d273476feeb a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Lock "646ce79a-a40a-4b2a-9af5-90c98c98b72a-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:03:51 np0005548731 nova_compute[232433]: 2025-12-06 07:03:51.128 232437 INFO nova.compute.manager [None req-bde758fc-8653-4b50-ba81-8d273476feeb a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] [instance: 646ce79a-a40a-4b2a-9af5-90c98c98b72a] Terminating instance#033[00m
Dec  6 02:03:51 np0005548731 nova_compute[232433]: 2025-12-06 07:03:51.129 232437 DEBUG nova.compute.manager [None req-bde758fc-8653-4b50-ba81-8d273476feeb a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] [instance: 646ce79a-a40a-4b2a-9af5-90c98c98b72a] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec  6 02:03:51 np0005548731 systemd[1]: libpod-conmon-84d0a632e061abc1f71de0cd7232ca6e9877ef5099bb6a1f4ead67bd0bd18d03.scope: Deactivated successfully.
Dec  6 02:03:51 np0005548731 kernel: tap45a08bd2-8b (unregistering): left promiscuous mode
Dec  6 02:03:51 np0005548731 NetworkManager[49182]: <info>  [1765004631.1754] device (tap45a08bd2-8b): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec  6 02:03:51 np0005548731 nova_compute[232433]: 2025-12-06 07:03:51.182 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:03:51 np0005548731 nova_compute[232433]: 2025-12-06 07:03:51.189 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:03:51 np0005548731 ovn_controller[133927]: 2025-12-06T07:03:51Z|00088|binding|INFO|Releasing lport 45a08bd2-8b6a-4af2-bd49-4df95910daa9 from this chassis (sb_readonly=0)
Dec  6 02:03:51 np0005548731 ovn_controller[133927]: 2025-12-06T07:03:51Z|00089|binding|INFO|Setting lport 45a08bd2-8b6a-4af2-bd49-4df95910daa9 down in Southbound
Dec  6 02:03:51 np0005548731 ovn_controller[133927]: 2025-12-06T07:03:51Z|00090|binding|INFO|Removing iface tap45a08bd2-8b ovn-installed in OVS
Dec  6 02:03:51 np0005548731 nova_compute[232433]: 2025-12-06 07:03:51.194 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:03:51 np0005548731 podman[245575]: 2025-12-06 07:03:51.19546095 +0000 UTC m=+0.048884328 container remove 84d0a632e061abc1f71de0cd7232ca6e9877ef5099bb6a1f4ead67bd0bd18d03 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-dfa287bf-10c3-40fc-8071-37bb7f801357, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true)
Dec  6 02:03:51 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:03:51.199 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:12:9e:14 10.100.0.10'], port_security=['fa:16:3e:12:9e:14 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '646ce79a-a40a-4b2a-9af5-90c98c98b72a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9451d867-0aba-464d-b4d9-f947b887e903', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b6179a8b65c2484eb7ca1e068d93a58c', 'neutron:revision_number': '4', 'neutron:security_group_ids': '4b3eef2d-12bb-4dc8-aa99-2a680fa42d41', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=294a4822-9a42-4d06-8976-2cf65d54c6f2, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], logical_port=45a08bd2-8b6a-4af2-bd49-4df95910daa9) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 02:03:51 np0005548731 nova_compute[232433]: 2025-12-06 07:03:51.201 232437 INFO nova.virt.libvirt.driver [-] [instance: 4dd1863a-3cb1-4e42-8611-ed9587a0b6ce] Instance destroyed successfully.#033[00m
Dec  6 02:03:51 np0005548731 nova_compute[232433]: 2025-12-06 07:03:51.202 232437 DEBUG nova.objects.instance [None req-1b17e9c9-95a7-476d-904a-b53a93986e12 6805353f6bf048f9b406a1e565a13f11 dc1bc9517198484ab30d93ebd5d88c35 - - default default] Lazy-loading 'resources' on Instance uuid 4dd1863a-3cb1-4e42-8611-ed9587a0b6ce obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:03:51 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:03:51.203 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[a1094fd0-7ac6-4143-a159-d94d19ffc219]: (4, ('Sat Dec  6 07:03:51 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-dfa287bf-10c3-40fc-8071-37bb7f801357 (84d0a632e061abc1f71de0cd7232ca6e9877ef5099bb6a1f4ead67bd0bd18d03)\n84d0a632e061abc1f71de0cd7232ca6e9877ef5099bb6a1f4ead67bd0bd18d03\nSat Dec  6 07:03:51 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-dfa287bf-10c3-40fc-8071-37bb7f801357 (84d0a632e061abc1f71de0cd7232ca6e9877ef5099bb6a1f4ead67bd0bd18d03)\n84d0a632e061abc1f71de0cd7232ca6e9877ef5099bb6a1f4ead67bd0bd18d03\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:03:51 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:03:51.204 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[461a0b82-3b0e-438c-8c7b-9dca0266f237]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:03:51 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:03:51.205 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapdfa287bf-10, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:03:51 np0005548731 nova_compute[232433]: 2025-12-06 07:03:51.207 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:03:51 np0005548731 kernel: tapdfa287bf-10: left promiscuous mode
Dec  6 02:03:51 np0005548731 nova_compute[232433]: 2025-12-06 07:03:51.217 232437 DEBUG nova.virt.libvirt.vif [None req-1b17e9c9-95a7-476d-904a-b53a93986e12 6805353f6bf048f9b406a1e565a13f11 dc1bc9517198484ab30d93ebd5d88c35 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-12-06T07:02:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-LiveAutoBlockMigrationV225Test-server-509443515',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-liveautoblockmigrationv225test-server-509443515',id=23,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-06T07:03:00Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='dc1bc9517198484ab30d93ebd5d88c35',ramdisk_id='',reservation_id='r-m8zqnud8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='2',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-LiveAutoBlockMigrationV225Test-252281632',owner
_user_name='tempest-LiveAutoBlockMigrationV225Test-252281632-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T07:03:46Z,user_data=None,user_id='6805353f6bf048f9b406a1e565a13f11',uuid=4dd1863a-3cb1-4e42-8611-ed9587a0b6ce,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "510079bd-5c00-4b8a-b2eb-d63f0ffc3d69", "address": "fa:16:3e:30:77:d0", "network": {"id": "dfa287bf-10c3-40fc-8071-37bb7f801357", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-1283365408-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dc1bc9517198484ab30d93ebd5d88c35", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap510079bd-5c", "ovs_interfaceid": "510079bd-5c00-4b8a-b2eb-d63f0ffc3d69", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec  6 02:03:51 np0005548731 nova_compute[232433]: 2025-12-06 07:03:51.218 232437 DEBUG nova.network.os_vif_util [None req-1b17e9c9-95a7-476d-904a-b53a93986e12 6805353f6bf048f9b406a1e565a13f11 dc1bc9517198484ab30d93ebd5d88c35 - - default default] Converting VIF {"id": "510079bd-5c00-4b8a-b2eb-d63f0ffc3d69", "address": "fa:16:3e:30:77:d0", "network": {"id": "dfa287bf-10c3-40fc-8071-37bb7f801357", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-1283365408-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dc1bc9517198484ab30d93ebd5d88c35", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap510079bd-5c", "ovs_interfaceid": "510079bd-5c00-4b8a-b2eb-d63f0ffc3d69", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 02:03:51 np0005548731 nova_compute[232433]: 2025-12-06 07:03:51.219 232437 DEBUG nova.network.os_vif_util [None req-1b17e9c9-95a7-476d-904a-b53a93986e12 6805353f6bf048f9b406a1e565a13f11 dc1bc9517198484ab30d93ebd5d88c35 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:30:77:d0,bridge_name='br-int',has_traffic_filtering=True,id=510079bd-5c00-4b8a-b2eb-d63f0ffc3d69,network=Network(dfa287bf-10c3-40fc-8071-37bb7f801357),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap510079bd-5c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 02:03:51 np0005548731 nova_compute[232433]: 2025-12-06 07:03:51.219 232437 DEBUG os_vif [None req-1b17e9c9-95a7-476d-904a-b53a93986e12 6805353f6bf048f9b406a1e565a13f11 dc1bc9517198484ab30d93ebd5d88c35 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:30:77:d0,bridge_name='br-int',has_traffic_filtering=True,id=510079bd-5c00-4b8a-b2eb-d63f0ffc3d69,network=Network(dfa287bf-10c3-40fc-8071-37bb7f801357),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap510079bd-5c') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec  6 02:03:51 np0005548731 nova_compute[232433]: 2025-12-06 07:03:51.221 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:03:51 np0005548731 nova_compute[232433]: 2025-12-06 07:03:51.221 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap510079bd-5c, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:03:51 np0005548731 nova_compute[232433]: 2025-12-06 07:03:51.223 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:03:51 np0005548731 nova_compute[232433]: 2025-12-06 07:03:51.225 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  6 02:03:51 np0005548731 nova_compute[232433]: 2025-12-06 07:03:51.231 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:03:51 np0005548731 systemd[1]: machine-qemu\x2d9\x2dinstance\x2d00000014.scope: Deactivated successfully.
Dec  6 02:03:51 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:03:51.234 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[e9f8df25-4ff9-4de5-967e-51e723f1163d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:03:51 np0005548731 systemd[1]: machine-qemu\x2d9\x2dinstance\x2d00000014.scope: Consumed 19.330s CPU time.
Dec  6 02:03:51 np0005548731 nova_compute[232433]: 2025-12-06 07:03:51.234 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:03:51 np0005548731 systemd-machined[195355]: Machine qemu-9-instance-00000014 terminated.
Dec  6 02:03:51 np0005548731 nova_compute[232433]: 2025-12-06 07:03:51.238 232437 INFO os_vif [None req-1b17e9c9-95a7-476d-904a-b53a93986e12 6805353f6bf048f9b406a1e565a13f11 dc1bc9517198484ab30d93ebd5d88c35 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:30:77:d0,bridge_name='br-int',has_traffic_filtering=True,id=510079bd-5c00-4b8a-b2eb-d63f0ffc3d69,network=Network(dfa287bf-10c3-40fc-8071-37bb7f801357),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap510079bd-5c')#033[00m
Dec  6 02:03:51 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:03:51.243 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[52b5762b-9cdf-4e67-855a-9810a97d6c0e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:03:51 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:03:51.245 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[927286ea-e516-4f7c-8897-7ec80aab8af9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:03:51 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:03:51.261 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[dde221f7-3909-4e41-845f-cf4d8d2aa239]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 488185, 'reachable_time': 28748, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 245613, 'error': None, 'target': 'ovnmeta-dfa287bf-10c3-40fc-8071-37bb7f801357', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:03:51 np0005548731 systemd[1]: run-netns-ovnmeta\x2ddfa287bf\x2d10c3\x2d40fc\x2d8071\x2d37bb7f801357.mount: Deactivated successfully.
Dec  6 02:03:51 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:03:51.266 144079 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-dfa287bf-10c3-40fc-8071-37bb7f801357 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Dec  6 02:03:51 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:03:51.267 144079 DEBUG oslo.privsep.daemon [-] privsep: reply[dc0ba6f8-b32a-44ba-877b-acc8afd8b64d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:03:51 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:03:51.268 143965 INFO neutron.agent.ovn.metadata.agent [-] Port 45a08bd2-8b6a-4af2-bd49-4df95910daa9 in datapath 9451d867-0aba-464d-b4d9-f947b887e903 unbound from our chassis#033[00m
Dec  6 02:03:51 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:03:51.270 143965 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 9451d867-0aba-464d-b4d9-f947b887e903, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec  6 02:03:51 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:03:51.271 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[db851762-9573-4e56-94fd-eff386aa86e4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:03:51 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:03:51.271 143965 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-9451d867-0aba-464d-b4d9-f947b887e903 namespace which is not needed anymore#033[00m
Dec  6 02:03:51 np0005548731 nova_compute[232433]: 2025-12-06 07:03:51.359 232437 INFO nova.virt.libvirt.driver [-] [instance: 646ce79a-a40a-4b2a-9af5-90c98c98b72a] Instance destroyed successfully.#033[00m
Dec  6 02:03:51 np0005548731 nova_compute[232433]: 2025-12-06 07:03:51.360 232437 DEBUG nova.objects.instance [None req-bde758fc-8653-4b50-ba81-8d273476feeb a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Lazy-loading 'resources' on Instance uuid 646ce79a-a40a-4b2a-9af5-90c98c98b72a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:03:51 np0005548731 nova_compute[232433]: 2025-12-06 07:03:51.380 232437 DEBUG nova.virt.libvirt.vif [None req-bde758fc-8653-4b50-ba81-8d273476feeb a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T07:01:44Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-1264066457',display_name='tempest-ServersAdminTestJSON-server-1264066457',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-1264066457',id=20,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-06T07:01:56Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='b6179a8b65c2484eb7ca1e068d93a58c',ramdisk_id='',reservation_id='r-ktncwgme',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min
_disk='1',image_min_ram='0',owner_project_name='tempest-ServersAdminTestJSON-1902776367',owner_user_name='tempest-ServersAdminTestJSON-1902776367-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T07:01:56Z,user_data=None,user_id='a3cae056210a400fa5e3495fe827d29a',uuid=646ce79a-a40a-4b2a-9af5-90c98c98b72a,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "45a08bd2-8b6a-4af2-bd49-4df95910daa9", "address": "fa:16:3e:12:9e:14", "network": {"id": "9451d867-0aba-464d-b4d9-f947b887e903", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-291936370-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b6179a8b65c2484eb7ca1e068d93a58c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap45a08bd2-8b", "ovs_interfaceid": "45a08bd2-8b6a-4af2-bd49-4df95910daa9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec  6 02:03:51 np0005548731 nova_compute[232433]: 2025-12-06 07:03:51.380 232437 DEBUG nova.network.os_vif_util [None req-bde758fc-8653-4b50-ba81-8d273476feeb a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Converting VIF {"id": "45a08bd2-8b6a-4af2-bd49-4df95910daa9", "address": "fa:16:3e:12:9e:14", "network": {"id": "9451d867-0aba-464d-b4d9-f947b887e903", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-291936370-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b6179a8b65c2484eb7ca1e068d93a58c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap45a08bd2-8b", "ovs_interfaceid": "45a08bd2-8b6a-4af2-bd49-4df95910daa9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 02:03:51 np0005548731 nova_compute[232433]: 2025-12-06 07:03:51.381 232437 DEBUG nova.network.os_vif_util [None req-bde758fc-8653-4b50-ba81-8d273476feeb a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:12:9e:14,bridge_name='br-int',has_traffic_filtering=True,id=45a08bd2-8b6a-4af2-bd49-4df95910daa9,network=Network(9451d867-0aba-464d-b4d9-f947b887e903),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap45a08bd2-8b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 02:03:51 np0005548731 nova_compute[232433]: 2025-12-06 07:03:51.381 232437 DEBUG os_vif [None req-bde758fc-8653-4b50-ba81-8d273476feeb a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:12:9e:14,bridge_name='br-int',has_traffic_filtering=True,id=45a08bd2-8b6a-4af2-bd49-4df95910daa9,network=Network(9451d867-0aba-464d-b4d9-f947b887e903),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap45a08bd2-8b') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec  6 02:03:51 np0005548731 nova_compute[232433]: 2025-12-06 07:03:51.382 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:03:51 np0005548731 nova_compute[232433]: 2025-12-06 07:03:51.383 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap45a08bd2-8b, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:03:51 np0005548731 nova_compute[232433]: 2025-12-06 07:03:51.384 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:03:51 np0005548731 nova_compute[232433]: 2025-12-06 07:03:51.385 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:03:51 np0005548731 nova_compute[232433]: 2025-12-06 07:03:51.387 232437 INFO os_vif [None req-bde758fc-8653-4b50-ba81-8d273476feeb a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:12:9e:14,bridge_name='br-int',has_traffic_filtering=True,id=45a08bd2-8b6a-4af2-bd49-4df95910daa9,network=Network(9451d867-0aba-464d-b4d9-f947b887e903),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap45a08bd2-8b')#033[00m
Dec  6 02:03:51 np0005548731 neutron-haproxy-ovnmeta-9451d867-0aba-464d-b4d9-f947b887e903[242420]: [NOTICE]   (242424) : haproxy version is 2.8.14-c23fe91
Dec  6 02:03:51 np0005548731 neutron-haproxy-ovnmeta-9451d867-0aba-464d-b4d9-f947b887e903[242420]: [NOTICE]   (242424) : path to executable is /usr/sbin/haproxy
Dec  6 02:03:51 np0005548731 neutron-haproxy-ovnmeta-9451d867-0aba-464d-b4d9-f947b887e903[242420]: [WARNING]  (242424) : Exiting Master process...
Dec  6 02:03:51 np0005548731 neutron-haproxy-ovnmeta-9451d867-0aba-464d-b4d9-f947b887e903[242420]: [ALERT]    (242424) : Current worker (242426) exited with code 143 (Terminated)
Dec  6 02:03:51 np0005548731 neutron-haproxy-ovnmeta-9451d867-0aba-464d-b4d9-f947b887e903[242420]: [WARNING]  (242424) : All workers exited. Exiting... (0)
Dec  6 02:03:51 np0005548731 systemd[1]: libpod-cbd6730de15aad03699214fa16a0916b52358932a2b188eaaa529f36f62c9fed.scope: Deactivated successfully.
Dec  6 02:03:51 np0005548731 podman[245641]: 2025-12-06 07:03:51.402963777 +0000 UTC m=+0.047776031 container died cbd6730de15aad03699214fa16a0916b52358932a2b188eaaa529f36f62c9fed (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9451d867-0aba-464d-b4d9-f947b887e903, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true)
Dec  6 02:03:51 np0005548731 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-cbd6730de15aad03699214fa16a0916b52358932a2b188eaaa529f36f62c9fed-userdata-shm.mount: Deactivated successfully.
Dec  6 02:03:51 np0005548731 systemd[1]: var-lib-containers-storage-overlay-cfb10c1571a5fcd19dbb221e1972ccd726d9597723dd400af553f0aaf0207147-merged.mount: Deactivated successfully.
Dec  6 02:03:51 np0005548731 podman[245641]: 2025-12-06 07:03:51.435367699 +0000 UTC m=+0.080179953 container cleanup cbd6730de15aad03699214fa16a0916b52358932a2b188eaaa529f36f62c9fed (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9451d867-0aba-464d-b4d9-f947b887e903, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec  6 02:03:51 np0005548731 systemd[1]: libpod-conmon-cbd6730de15aad03699214fa16a0916b52358932a2b188eaaa529f36f62c9fed.scope: Deactivated successfully.
Dec  6 02:03:51 np0005548731 podman[245696]: 2025-12-06 07:03:51.49980999 +0000 UTC m=+0.042069775 container remove cbd6730de15aad03699214fa16a0916b52358932a2b188eaaa529f36f62c9fed (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9451d867-0aba-464d-b4d9-f947b887e903, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec  6 02:03:51 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:03:51.505 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[4c1dd20f-2919-43c6-8af1-33186b84b1fb]: (4, ('Sat Dec  6 07:03:51 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-9451d867-0aba-464d-b4d9-f947b887e903 (cbd6730de15aad03699214fa16a0916b52358932a2b188eaaa529f36f62c9fed)\ncbd6730de15aad03699214fa16a0916b52358932a2b188eaaa529f36f62c9fed\nSat Dec  6 07:03:51 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-9451d867-0aba-464d-b4d9-f947b887e903 (cbd6730de15aad03699214fa16a0916b52358932a2b188eaaa529f36f62c9fed)\ncbd6730de15aad03699214fa16a0916b52358932a2b188eaaa529f36f62c9fed\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:03:51 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:03:51.506 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[fef2613f-1a33-4aa3-91de-ffb6177f0bbe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:03:51 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:03:51.507 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9451d867-00, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:03:51 np0005548731 nova_compute[232433]: 2025-12-06 07:03:51.509 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:03:51 np0005548731 kernel: tap9451d867-00: left promiscuous mode
Dec  6 02:03:51 np0005548731 nova_compute[232433]: 2025-12-06 07:03:51.523 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:03:51 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:03:51.526 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[8e1b2a22-dd13-4f05-8818-f1ffa6c0a992]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:03:51 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:03:51.549 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[85c7ba1c-4fac-4c1c-a3c1-f2635e55edb9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:03:51 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:03:51.550 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[1e61cab2-de61-43a1-85bb-a7b7764b0a93]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:03:51 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:03:51.570 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[412dc53a-89d8-48b0-b82f-01ac4a0d7f20]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 477760, 'reachable_time': 25969, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 245714, 'error': None, 'target': 'ovnmeta-9451d867-0aba-464d-b4d9-f947b887e903', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:03:51 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:03:51.572 144079 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-9451d867-0aba-464d-b4d9-f947b887e903 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Dec  6 02:03:51 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:03:51.572 144079 DEBUG oslo.privsep.daemon [-] privsep: reply[96759ab2-db6f-4c45-a817-838259709f03]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:03:51 np0005548731 nova_compute[232433]: 2025-12-06 07:03:51.599 232437 DEBUG nova.compute.manager [req-0d900078-f9dd-40b0-97cb-4ebbaaa58215 req-b1d659b3-a7d6-417a-8d4a-f58569c59461 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 646ce79a-a40a-4b2a-9af5-90c98c98b72a] Received event network-vif-unplugged-45a08bd2-8b6a-4af2-bd49-4df95910daa9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:03:51 np0005548731 nova_compute[232433]: 2025-12-06 07:03:51.599 232437 DEBUG oslo_concurrency.lockutils [req-0d900078-f9dd-40b0-97cb-4ebbaaa58215 req-b1d659b3-a7d6-417a-8d4a-f58569c59461 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "646ce79a-a40a-4b2a-9af5-90c98c98b72a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:03:51 np0005548731 nova_compute[232433]: 2025-12-06 07:03:51.600 232437 DEBUG oslo_concurrency.lockutils [req-0d900078-f9dd-40b0-97cb-4ebbaaa58215 req-b1d659b3-a7d6-417a-8d4a-f58569c59461 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "646ce79a-a40a-4b2a-9af5-90c98c98b72a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:03:51 np0005548731 nova_compute[232433]: 2025-12-06 07:03:51.600 232437 DEBUG oslo_concurrency.lockutils [req-0d900078-f9dd-40b0-97cb-4ebbaaa58215 req-b1d659b3-a7d6-417a-8d4a-f58569c59461 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "646ce79a-a40a-4b2a-9af5-90c98c98b72a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:03:51 np0005548731 nova_compute[232433]: 2025-12-06 07:03:51.600 232437 DEBUG nova.compute.manager [req-0d900078-f9dd-40b0-97cb-4ebbaaa58215 req-b1d659b3-a7d6-417a-8d4a-f58569c59461 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 646ce79a-a40a-4b2a-9af5-90c98c98b72a] No waiting events found dispatching network-vif-unplugged-45a08bd2-8b6a-4af2-bd49-4df95910daa9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 02:03:51 np0005548731 nova_compute[232433]: 2025-12-06 07:03:51.600 232437 DEBUG nova.compute.manager [req-0d900078-f9dd-40b0-97cb-4ebbaaa58215 req-b1d659b3-a7d6-417a-8d4a-f58569c59461 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 646ce79a-a40a-4b2a-9af5-90c98c98b72a] Received event network-vif-unplugged-45a08bd2-8b6a-4af2-bd49-4df95910daa9 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Dec  6 02:03:51 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:03:51 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:03:51 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:03:51.691 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:03:51 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:03:51 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 02:03:51 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:03:51.727 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 02:03:51 np0005548731 nova_compute[232433]: 2025-12-06 07:03:51.729 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:03:51 np0005548731 nova_compute[232433]: 2025-12-06 07:03:51.730 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  6 02:03:51 np0005548731 nova_compute[232433]: 2025-12-06 07:03:51.830 232437 INFO nova.virt.libvirt.driver [None req-1b17e9c9-95a7-476d-904a-b53a93986e12 6805353f6bf048f9b406a1e565a13f11 dc1bc9517198484ab30d93ebd5d88c35 - - default default] [instance: 4dd1863a-3cb1-4e42-8611-ed9587a0b6ce] Deleting instance files /var/lib/nova/instances/4dd1863a-3cb1-4e42-8611-ed9587a0b6ce_del#033[00m
Dec  6 02:03:51 np0005548731 nova_compute[232433]: 2025-12-06 07:03:51.831 232437 INFO nova.virt.libvirt.driver [None req-1b17e9c9-95a7-476d-904a-b53a93986e12 6805353f6bf048f9b406a1e565a13f11 dc1bc9517198484ab30d93ebd5d88c35 - - default default] [instance: 4dd1863a-3cb1-4e42-8611-ed9587a0b6ce] Deletion of /var/lib/nova/instances/4dd1863a-3cb1-4e42-8611-ed9587a0b6ce_del complete#033[00m
Dec  6 02:03:51 np0005548731 nova_compute[232433]: 2025-12-06 07:03:51.885 232437 INFO nova.compute.manager [None req-1b17e9c9-95a7-476d-904a-b53a93986e12 6805353f6bf048f9b406a1e565a13f11 dc1bc9517198484ab30d93ebd5d88c35 - - default default] [instance: 4dd1863a-3cb1-4e42-8611-ed9587a0b6ce] Took 1.72 seconds to destroy the instance on the hypervisor.#033[00m
Dec  6 02:03:51 np0005548731 nova_compute[232433]: 2025-12-06 07:03:51.885 232437 DEBUG oslo.service.loopingcall [None req-1b17e9c9-95a7-476d-904a-b53a93986e12 6805353f6bf048f9b406a1e565a13f11 dc1bc9517198484ab30d93ebd5d88c35 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Dec  6 02:03:51 np0005548731 nova_compute[232433]: 2025-12-06 07:03:51.886 232437 DEBUG nova.compute.manager [-] [instance: 4dd1863a-3cb1-4e42-8611-ed9587a0b6ce] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Dec  6 02:03:51 np0005548731 nova_compute[232433]: 2025-12-06 07:03:51.886 232437 DEBUG nova.network.neutron [-] [instance: 4dd1863a-3cb1-4e42-8611-ed9587a0b6ce] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Dec  6 02:03:52 np0005548731 systemd[1]: run-netns-ovnmeta\x2d9451d867\x2d0aba\x2d464d\x2db4d9\x2df947b887e903.mount: Deactivated successfully.
Dec  6 02:03:52 np0005548731 nova_compute[232433]: 2025-12-06 07:03:52.848 232437 DEBUG nova.compute.manager [req-91b37c9c-8b4d-40df-8cf0-da9024939374 req-433e5d8a-4eb8-49fc-886c-a05f22b589e4 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 4dd1863a-3cb1-4e42-8611-ed9587a0b6ce] Received event network-vif-unplugged-510079bd-5c00-4b8a-b2eb-d63f0ffc3d69 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:03:52 np0005548731 nova_compute[232433]: 2025-12-06 07:03:52.848 232437 DEBUG oslo_concurrency.lockutils [req-91b37c9c-8b4d-40df-8cf0-da9024939374 req-433e5d8a-4eb8-49fc-886c-a05f22b589e4 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "4dd1863a-3cb1-4e42-8611-ed9587a0b6ce-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:03:52 np0005548731 nova_compute[232433]: 2025-12-06 07:03:52.848 232437 DEBUG oslo_concurrency.lockutils [req-91b37c9c-8b4d-40df-8cf0-da9024939374 req-433e5d8a-4eb8-49fc-886c-a05f22b589e4 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "4dd1863a-3cb1-4e42-8611-ed9587a0b6ce-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:03:52 np0005548731 nova_compute[232433]: 2025-12-06 07:03:52.848 232437 DEBUG oslo_concurrency.lockutils [req-91b37c9c-8b4d-40df-8cf0-da9024939374 req-433e5d8a-4eb8-49fc-886c-a05f22b589e4 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "4dd1863a-3cb1-4e42-8611-ed9587a0b6ce-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:03:52 np0005548731 nova_compute[232433]: 2025-12-06 07:03:52.849 232437 DEBUG nova.compute.manager [req-91b37c9c-8b4d-40df-8cf0-da9024939374 req-433e5d8a-4eb8-49fc-886c-a05f22b589e4 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 4dd1863a-3cb1-4e42-8611-ed9587a0b6ce] No waiting events found dispatching network-vif-unplugged-510079bd-5c00-4b8a-b2eb-d63f0ffc3d69 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 02:03:52 np0005548731 nova_compute[232433]: 2025-12-06 07:03:52.849 232437 DEBUG nova.compute.manager [req-91b37c9c-8b4d-40df-8cf0-da9024939374 req-433e5d8a-4eb8-49fc-886c-a05f22b589e4 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 4dd1863a-3cb1-4e42-8611-ed9587a0b6ce] Received event network-vif-unplugged-510079bd-5c00-4b8a-b2eb-d63f0ffc3d69 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Dec  6 02:03:52 np0005548731 nova_compute[232433]: 2025-12-06 07:03:52.849 232437 DEBUG nova.compute.manager [req-91b37c9c-8b4d-40df-8cf0-da9024939374 req-433e5d8a-4eb8-49fc-886c-a05f22b589e4 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 4dd1863a-3cb1-4e42-8611-ed9587a0b6ce] Received event network-vif-plugged-510079bd-5c00-4b8a-b2eb-d63f0ffc3d69 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:03:52 np0005548731 nova_compute[232433]: 2025-12-06 07:03:52.849 232437 DEBUG oslo_concurrency.lockutils [req-91b37c9c-8b4d-40df-8cf0-da9024939374 req-433e5d8a-4eb8-49fc-886c-a05f22b589e4 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "4dd1863a-3cb1-4e42-8611-ed9587a0b6ce-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:03:52 np0005548731 nova_compute[232433]: 2025-12-06 07:03:52.849 232437 DEBUG oslo_concurrency.lockutils [req-91b37c9c-8b4d-40df-8cf0-da9024939374 req-433e5d8a-4eb8-49fc-886c-a05f22b589e4 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "4dd1863a-3cb1-4e42-8611-ed9587a0b6ce-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:03:52 np0005548731 nova_compute[232433]: 2025-12-06 07:03:52.850 232437 DEBUG oslo_concurrency.lockutils [req-91b37c9c-8b4d-40df-8cf0-da9024939374 req-433e5d8a-4eb8-49fc-886c-a05f22b589e4 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "4dd1863a-3cb1-4e42-8611-ed9587a0b6ce-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:03:52 np0005548731 nova_compute[232433]: 2025-12-06 07:03:52.850 232437 DEBUG nova.compute.manager [req-91b37c9c-8b4d-40df-8cf0-da9024939374 req-433e5d8a-4eb8-49fc-886c-a05f22b589e4 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 4dd1863a-3cb1-4e42-8611-ed9587a0b6ce] No waiting events found dispatching network-vif-plugged-510079bd-5c00-4b8a-b2eb-d63f0ffc3d69 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 02:03:52 np0005548731 nova_compute[232433]: 2025-12-06 07:03:52.850 232437 WARNING nova.compute.manager [req-91b37c9c-8b4d-40df-8cf0-da9024939374 req-433e5d8a-4eb8-49fc-886c-a05f22b589e4 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 4dd1863a-3cb1-4e42-8611-ed9587a0b6ce] Received unexpected event network-vif-plugged-510079bd-5c00-4b8a-b2eb-d63f0ffc3d69 for instance with vm_state active and task_state deleting.
Dec  6 02:03:53 np0005548731 nova_compute[232433]: 2025-12-06 07:03:53.013 232437 INFO nova.virt.libvirt.driver [None req-bde758fc-8653-4b50-ba81-8d273476feeb a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] [instance: 646ce79a-a40a-4b2a-9af5-90c98c98b72a] Deleting instance files /var/lib/nova/instances/646ce79a-a40a-4b2a-9af5-90c98c98b72a_del
Dec  6 02:03:53 np0005548731 nova_compute[232433]: 2025-12-06 07:03:53.014 232437 INFO nova.virt.libvirt.driver [None req-bde758fc-8653-4b50-ba81-8d273476feeb a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] [instance: 646ce79a-a40a-4b2a-9af5-90c98c98b72a] Deletion of /var/lib/nova/instances/646ce79a-a40a-4b2a-9af5-90c98c98b72a_del complete
Dec  6 02:03:53 np0005548731 nova_compute[232433]: 2025-12-06 07:03:53.212 232437 INFO nova.compute.manager [None req-bde758fc-8653-4b50-ba81-8d273476feeb a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] [instance: 646ce79a-a40a-4b2a-9af5-90c98c98b72a] Took 2.08 seconds to destroy the instance on the hypervisor.
Dec  6 02:03:53 np0005548731 nova_compute[232433]: 2025-12-06 07:03:53.212 232437 DEBUG oslo.service.loopingcall [None req-bde758fc-8653-4b50-ba81-8d273476feeb a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec  6 02:03:53 np0005548731 nova_compute[232433]: 2025-12-06 07:03:53.213 232437 DEBUG nova.compute.manager [-] [instance: 646ce79a-a40a-4b2a-9af5-90c98c98b72a] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec  6 02:03:53 np0005548731 nova_compute[232433]: 2025-12-06 07:03:53.213 232437 DEBUG nova.network.neutron [-] [instance: 646ce79a-a40a-4b2a-9af5-90c98c98b72a] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec  6 02:03:53 np0005548731 nova_compute[232433]: 2025-12-06 07:03:53.480 232437 DEBUG nova.network.neutron [-] [instance: 4dd1863a-3cb1-4e42-8611-ed9587a0b6ce] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec  6 02:03:53 np0005548731 nova_compute[232433]: 2025-12-06 07:03:53.499 232437 INFO nova.compute.manager [-] [instance: 4dd1863a-3cb1-4e42-8611-ed9587a0b6ce] Took 1.61 seconds to deallocate network for instance.
Dec  6 02:03:53 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:03:53 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:03:53 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:03:53.694 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:03:53 np0005548731 nova_compute[232433]: 2025-12-06 07:03:53.715 232437 DEBUG nova.compute.manager [req-b2f108a3-d3d2-415a-af19-ed641f1ed219 req-32e84315-66b4-482d-add7-0a5510f55f78 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 646ce79a-a40a-4b2a-9af5-90c98c98b72a] Received event network-vif-plugged-45a08bd2-8b6a-4af2-bd49-4df95910daa9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec  6 02:03:53 np0005548731 nova_compute[232433]: 2025-12-06 07:03:53.715 232437 DEBUG oslo_concurrency.lockutils [req-b2f108a3-d3d2-415a-af19-ed641f1ed219 req-32e84315-66b4-482d-add7-0a5510f55f78 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "646ce79a-a40a-4b2a-9af5-90c98c98b72a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  6 02:03:53 np0005548731 nova_compute[232433]: 2025-12-06 07:03:53.716 232437 DEBUG oslo_concurrency.lockutils [req-b2f108a3-d3d2-415a-af19-ed641f1ed219 req-32e84315-66b4-482d-add7-0a5510f55f78 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "646ce79a-a40a-4b2a-9af5-90c98c98b72a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  6 02:03:53 np0005548731 nova_compute[232433]: 2025-12-06 07:03:53.716 232437 DEBUG oslo_concurrency.lockutils [req-b2f108a3-d3d2-415a-af19-ed641f1ed219 req-32e84315-66b4-482d-add7-0a5510f55f78 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "646ce79a-a40a-4b2a-9af5-90c98c98b72a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  6 02:03:53 np0005548731 nova_compute[232433]: 2025-12-06 07:03:53.716 232437 DEBUG nova.compute.manager [req-b2f108a3-d3d2-415a-af19-ed641f1ed219 req-32e84315-66b4-482d-add7-0a5510f55f78 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 646ce79a-a40a-4b2a-9af5-90c98c98b72a] No waiting events found dispatching network-vif-plugged-45a08bd2-8b6a-4af2-bd49-4df95910daa9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec  6 02:03:53 np0005548731 nova_compute[232433]: 2025-12-06 07:03:53.716 232437 WARNING nova.compute.manager [req-b2f108a3-d3d2-415a-af19-ed641f1ed219 req-32e84315-66b4-482d-add7-0a5510f55f78 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 646ce79a-a40a-4b2a-9af5-90c98c98b72a] Received unexpected event network-vif-plugged-45a08bd2-8b6a-4af2-bd49-4df95910daa9 for instance with vm_state active and task_state deleting.
Dec  6 02:03:53 np0005548731 nova_compute[232433]: 2025-12-06 07:03:53.716 232437 DEBUG nova.compute.manager [req-b2f108a3-d3d2-415a-af19-ed641f1ed219 req-32e84315-66b4-482d-add7-0a5510f55f78 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 4dd1863a-3cb1-4e42-8611-ed9587a0b6ce] Received event network-vif-deleted-510079bd-5c00-4b8a-b2eb-d63f0ffc3d69 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec  6 02:03:53 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:03:53 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:03:53 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:03:53.729 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:03:53 np0005548731 nova_compute[232433]: 2025-12-06 07:03:53.918 232437 INFO nova.compute.manager [None req-1b17e9c9-95a7-476d-904a-b53a93986e12 6805353f6bf048f9b406a1e565a13f11 dc1bc9517198484ab30d93ebd5d88c35 - - default default] [instance: 4dd1863a-3cb1-4e42-8611-ed9587a0b6ce] Took 0.42 seconds to detach 1 volumes for instance.
Dec  6 02:03:53 np0005548731 nova_compute[232433]: 2025-12-06 07:03:53.920 232437 DEBUG nova.compute.manager [None req-1b17e9c9-95a7-476d-904a-b53a93986e12 6805353f6bf048f9b406a1e565a13f11 dc1bc9517198484ab30d93ebd5d88c35 - - default default] [instance: 4dd1863a-3cb1-4e42-8611-ed9587a0b6ce] Deleting volume: 82a6fabc-b798-4571-a13e-f9c67a3f9413 _cleanup_volumes /usr/lib/python3.9/site-packages/nova/compute/manager.py:3217
Dec  6 02:03:54 np0005548731 nova_compute[232433]: 2025-12-06 07:03:54.204 232437 DEBUG oslo_concurrency.lockutils [None req-1b17e9c9-95a7-476d-904a-b53a93986e12 6805353f6bf048f9b406a1e565a13f11 dc1bc9517198484ab30d93ebd5d88c35 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  6 02:03:54 np0005548731 nova_compute[232433]: 2025-12-06 07:03:54.205 232437 DEBUG oslo_concurrency.lockutils [None req-1b17e9c9-95a7-476d-904a-b53a93986e12 6805353f6bf048f9b406a1e565a13f11 dc1bc9517198484ab30d93ebd5d88c35 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  6 02:03:54 np0005548731 nova_compute[232433]: 2025-12-06 07:03:54.293 232437 DEBUG oslo_concurrency.processutils [None req-1b17e9c9-95a7-476d-904a-b53a93986e12 6805353f6bf048f9b406a1e565a13f11 dc1bc9517198484ab30d93ebd5d88c35 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  6 02:03:54 np0005548731 nova_compute[232433]: 2025-12-06 07:03:54.440 232437 DEBUG nova.network.neutron [-] [instance: 646ce79a-a40a-4b2a-9af5-90c98c98b72a] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec  6 02:03:54 np0005548731 nova_compute[232433]: 2025-12-06 07:03:54.459 232437 INFO nova.compute.manager [-] [instance: 646ce79a-a40a-4b2a-9af5-90c98c98b72a] Took 1.25 seconds to deallocate network for instance.
Dec  6 02:03:54 np0005548731 nova_compute[232433]: 2025-12-06 07:03:54.527 232437 DEBUG oslo_concurrency.lockutils [None req-bde758fc-8653-4b50-ba81-8d273476feeb a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  6 02:03:54 np0005548731 nova_compute[232433]: 2025-12-06 07:03:54.601 232437 DEBUG nova.compute.manager [req-fe8cc36e-ded3-4353-a809-5d70fd7c543a req-65e80e8f-1635-43de-bd8c-5315065ed14d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 646ce79a-a40a-4b2a-9af5-90c98c98b72a] Received event network-vif-deleted-45a08bd2-8b6a-4af2-bd49-4df95910daa9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec  6 02:03:54 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 02:03:54 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/761473893' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 02:03:54 np0005548731 nova_compute[232433]: 2025-12-06 07:03:54.727 232437 DEBUG oslo_concurrency.processutils [None req-1b17e9c9-95a7-476d-904a-b53a93986e12 6805353f6bf048f9b406a1e565a13f11 dc1bc9517198484ab30d93ebd5d88c35 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.434s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  6 02:03:54 np0005548731 nova_compute[232433]: 2025-12-06 07:03:54.733 232437 DEBUG nova.compute.provider_tree [None req-1b17e9c9-95a7-476d-904a-b53a93986e12 6805353f6bf048f9b406a1e565a13f11 dc1bc9517198484ab30d93ebd5d88c35 - - default default] Inventory has not changed in ProviderTree for provider: 6d00757a-082f-486d-ae84-869a2ba2e6e7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec  6 02:03:54 np0005548731 nova_compute[232433]: 2025-12-06 07:03:54.751 232437 DEBUG nova.scheduler.client.report [None req-1b17e9c9-95a7-476d-904a-b53a93986e12 6805353f6bf048f9b406a1e565a13f11 dc1bc9517198484ab30d93ebd5d88c35 - - default default] Inventory has not changed for provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec  6 02:03:54 np0005548731 nova_compute[232433]: 2025-12-06 07:03:54.778 232437 DEBUG oslo_concurrency.lockutils [None req-1b17e9c9-95a7-476d-904a-b53a93986e12 6805353f6bf048f9b406a1e565a13f11 dc1bc9517198484ab30d93ebd5d88c35 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.573s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  6 02:03:54 np0005548731 nova_compute[232433]: 2025-12-06 07:03:54.780 232437 DEBUG oslo_concurrency.lockutils [None req-bde758fc-8653-4b50-ba81-8d273476feeb a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.253s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  6 02:03:54 np0005548731 nova_compute[232433]: 2025-12-06 07:03:54.819 232437 INFO nova.scheduler.client.report [None req-1b17e9c9-95a7-476d-904a-b53a93986e12 6805353f6bf048f9b406a1e565a13f11 dc1bc9517198484ab30d93ebd5d88c35 - - default default] Deleted allocations for instance 4dd1863a-3cb1-4e42-8611-ed9587a0b6ce
Dec  6 02:03:54 np0005548731 nova_compute[232433]: 2025-12-06 07:03:54.837 232437 DEBUG oslo_concurrency.processutils [None req-bde758fc-8653-4b50-ba81-8d273476feeb a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  6 02:03:54 np0005548731 nova_compute[232433]: 2025-12-06 07:03:54.918 232437 DEBUG oslo_concurrency.lockutils [None req-1b17e9c9-95a7-476d-904a-b53a93986e12 6805353f6bf048f9b406a1e565a13f11 dc1bc9517198484ab30d93ebd5d88c35 - - default default] Lock "4dd1863a-3cb1-4e42-8611-ed9587a0b6ce" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.761s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  6 02:03:55 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 02:03:55 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3510907800' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 02:03:55 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e168 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:03:55 np0005548731 nova_compute[232433]: 2025-12-06 07:03:55.287 232437 DEBUG oslo_concurrency.processutils [None req-bde758fc-8653-4b50-ba81-8d273476feeb a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.449s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  6 02:03:55 np0005548731 nova_compute[232433]: 2025-12-06 07:03:55.292 232437 DEBUG nova.compute.provider_tree [None req-bde758fc-8653-4b50-ba81-8d273476feeb a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Inventory has not changed in ProviderTree for provider: 6d00757a-082f-486d-ae84-869a2ba2e6e7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec  6 02:03:55 np0005548731 nova_compute[232433]: 2025-12-06 07:03:55.310 232437 DEBUG nova.scheduler.client.report [None req-bde758fc-8653-4b50-ba81-8d273476feeb a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Inventory has not changed for provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec  6 02:03:55 np0005548731 nova_compute[232433]: 2025-12-06 07:03:55.374 232437 DEBUG oslo_concurrency.lockutils [None req-bde758fc-8653-4b50-ba81-8d273476feeb a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.594s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  6 02:03:55 np0005548731 nova_compute[232433]: 2025-12-06 07:03:55.441 232437 INFO nova.scheduler.client.report [None req-bde758fc-8653-4b50-ba81-8d273476feeb a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Deleted allocations for instance 646ce79a-a40a-4b2a-9af5-90c98c98b72a
Dec  6 02:03:55 np0005548731 nova_compute[232433]: 2025-12-06 07:03:55.534 232437 DEBUG oslo_concurrency.lockutils [None req-bde758fc-8653-4b50-ba81-8d273476feeb a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Lock "646ce79a-a40a-4b2a-9af5-90c98c98b72a" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.410s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  6 02:03:55 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:03:55 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:03:55 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:03:55.697 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:03:55 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:03:55 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:03:55 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:03:55.731 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:03:56 np0005548731 nova_compute[232433]: 2025-12-06 07:03:56.086 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 02:03:56 np0005548731 nova_compute[232433]: 2025-12-06 07:03:56.385 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 02:03:57 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:03:57 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:03:57 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:03:57.699 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:03:57 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:03:57 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 02:03:57 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:03:57.733 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 02:03:59 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:03:59 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:03:59 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:03:59.701 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:03:59 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:03:59 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:03:59 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:03:59.735 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:04:00 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e168 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:04:00 np0005548731 nova_compute[232433]: 2025-12-06 07:04:00.538 232437 DEBUG oslo_concurrency.lockutils [None req-84d66094-e3f3-43ea-9822-fd77441963a6 e53d35ee2a994a8a9b397f772571123e 7bc18054c7b54331a9f141e9c35a9d00 - - default default] Acquiring lock "0d75b3d9-be73-41d3-a29e-752a5f62702c" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  6 02:04:00 np0005548731 nova_compute[232433]: 2025-12-06 07:04:00.539 232437 DEBUG oslo_concurrency.lockutils [None req-84d66094-e3f3-43ea-9822-fd77441963a6 e53d35ee2a994a8a9b397f772571123e 7bc18054c7b54331a9f141e9c35a9d00 - - default default] Lock "0d75b3d9-be73-41d3-a29e-752a5f62702c" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  6 02:04:00 np0005548731 nova_compute[232433]: 2025-12-06 07:04:00.580 232437 DEBUG nova.compute.manager [None req-84d66094-e3f3-43ea-9822-fd77441963a6 e53d35ee2a994a8a9b397f772571123e 7bc18054c7b54331a9f141e9c35a9d00 - - default default] [instance: 0d75b3d9-be73-41d3-a29e-752a5f62702c] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec  6 02:04:00 np0005548731 nova_compute[232433]: 2025-12-06 07:04:00.671 232437 DEBUG oslo_concurrency.lockutils [None req-84d66094-e3f3-43ea-9822-fd77441963a6 e53d35ee2a994a8a9b397f772571123e 7bc18054c7b54331a9f141e9c35a9d00 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  6 02:04:00 np0005548731 nova_compute[232433]: 2025-12-06 07:04:00.672 232437 DEBUG oslo_concurrency.lockutils [None req-84d66094-e3f3-43ea-9822-fd77441963a6 e53d35ee2a994a8a9b397f772571123e 7bc18054c7b54331a9f141e9c35a9d00 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  6 02:04:00 np0005548731 nova_compute[232433]: 2025-12-06 07:04:00.680 232437 DEBUG nova.virt.hardware [None req-84d66094-e3f3-43ea-9822-fd77441963a6 e53d35ee2a994a8a9b397f772571123e 7bc18054c7b54331a9f141e9c35a9d00 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec  6 02:04:00 np0005548731 nova_compute[232433]: 2025-12-06 07:04:00.681 232437 INFO nova.compute.claims [None req-84d66094-e3f3-43ea-9822-fd77441963a6 e53d35ee2a994a8a9b397f772571123e 7bc18054c7b54331a9f141e9c35a9d00 - - default default] [instance: 0d75b3d9-be73-41d3-a29e-752a5f62702c] Claim successful on node compute-2.ctlplane.example.com
Dec  6 02:04:00 np0005548731 nova_compute[232433]: 2025-12-06 07:04:00.831 232437 DEBUG oslo_concurrency.processutils [None req-84d66094-e3f3-43ea-9822-fd77441963a6 e53d35ee2a994a8a9b397f772571123e 7bc18054c7b54331a9f141e9c35a9d00 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  6 02:04:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:04:00.848 143965 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  6 02:04:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:04:00.849 143965 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  6 02:04:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:04:00.849 143965 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  6 02:04:01 np0005548731 nova_compute[232433]: 2025-12-06 07:04:01.088 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 02:04:01 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 02:04:01 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1303928982' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 02:04:01 np0005548731 nova_compute[232433]: 2025-12-06 07:04:01.315 232437 DEBUG oslo_concurrency.processutils [None req-84d66094-e3f3-43ea-9822-fd77441963a6 e53d35ee2a994a8a9b397f772571123e 7bc18054c7b54331a9f141e9c35a9d00 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.484s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  6 02:04:01 np0005548731 nova_compute[232433]: 2025-12-06 07:04:01.321 232437 DEBUG nova.compute.provider_tree [None req-84d66094-e3f3-43ea-9822-fd77441963a6 e53d35ee2a994a8a9b397f772571123e 7bc18054c7b54331a9f141e9c35a9d00 - - default default] Inventory has not changed in ProviderTree for provider: 6d00757a-082f-486d-ae84-869a2ba2e6e7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec  6 02:04:01 np0005548731 nova_compute[232433]: 2025-12-06 07:04:01.386 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 02:04:01 np0005548731 nova_compute[232433]: 2025-12-06 07:04:01.496 232437 DEBUG nova.scheduler.client.report [None req-84d66094-e3f3-43ea-9822-fd77441963a6 e53d35ee2a994a8a9b397f772571123e 7bc18054c7b54331a9f141e9c35a9d00 - - default default] Inventory has not changed for provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec  6 02:04:01 np0005548731 nova_compute[232433]: 2025-12-06 07:04:01.533 232437 DEBUG oslo_concurrency.lockutils [None req-84d66094-e3f3-43ea-9822-fd77441963a6 e53d35ee2a994a8a9b397f772571123e 7bc18054c7b54331a9f141e9c35a9d00 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.861s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  6 02:04:01 np0005548731 nova_compute[232433]: 2025-12-06 07:04:01.534 232437 DEBUG nova.compute.manager [None req-84d66094-e3f3-43ea-9822-fd77441963a6 e53d35ee2a994a8a9b397f772571123e 7bc18054c7b54331a9f141e9c35a9d00 - - default default] [instance: 0d75b3d9-be73-41d3-a29e-752a5f62702c] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec  6 02:04:01 np0005548731 nova_compute[232433]: 2025-12-06 07:04:01.617 232437 DEBUG nova.compute.manager [None req-84d66094-e3f3-43ea-9822-fd77441963a6 e53d35ee2a994a8a9b397f772571123e 7bc18054c7b54331a9f141e9c35a9d00 - - default default] [instance: 0d75b3d9-be73-41d3-a29e-752a5f62702c] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec  6 02:04:01 np0005548731 nova_compute[232433]: 2025-12-06 07:04:01.617 232437 DEBUG nova.network.neutron [None req-84d66094-e3f3-43ea-9822-fd77441963a6 e53d35ee2a994a8a9b397f772571123e 7bc18054c7b54331a9f141e9c35a9d00 - - default default] [instance: 0d75b3d9-be73-41d3-a29e-752a5f62702c] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec  6 02:04:01 np0005548731 nova_compute[232433]: 2025-12-06 07:04:01.638 232437 INFO nova.virt.libvirt.driver [None req-84d66094-e3f3-43ea-9822-fd77441963a6 e53d35ee2a994a8a9b397f772571123e 7bc18054c7b54331a9f141e9c35a9d00 - - default default] [instance: 0d75b3d9-be73-41d3-a29e-752a5f62702c] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec  6 02:04:01 np0005548731 nova_compute[232433]: 2025-12-06 07:04:01.672 232437 DEBUG nova.compute.manager [None req-84d66094-e3f3-43ea-9822-fd77441963a6 e53d35ee2a994a8a9b397f772571123e 7bc18054c7b54331a9f141e9c35a9d00 - - default default] [instance: 0d75b3d9-be73-41d3-a29e-752a5f62702c] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec  6 02:04:01 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:04:01 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:04:01 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:04:01.704 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:04:01 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:04:01 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:04:01 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:04:01.737 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:04:01 np0005548731 nova_compute[232433]: 2025-12-06 07:04:01.757 232437 DEBUG nova.compute.manager [None req-84d66094-e3f3-43ea-9822-fd77441963a6 e53d35ee2a994a8a9b397f772571123e 7bc18054c7b54331a9f141e9c35a9d00 - - default default] [instance: 0d75b3d9-be73-41d3-a29e-752a5f62702c] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec  6 02:04:01 np0005548731 nova_compute[232433]: 2025-12-06 07:04:01.758 232437 DEBUG nova.virt.libvirt.driver [None req-84d66094-e3f3-43ea-9822-fd77441963a6 e53d35ee2a994a8a9b397f772571123e 7bc18054c7b54331a9f141e9c35a9d00 - - default default] [instance: 0d75b3d9-be73-41d3-a29e-752a5f62702c] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec  6 02:04:01 np0005548731 nova_compute[232433]: 2025-12-06 07:04:01.759 232437 INFO nova.virt.libvirt.driver [None req-84d66094-e3f3-43ea-9822-fd77441963a6 e53d35ee2a994a8a9b397f772571123e 7bc18054c7b54331a9f141e9c35a9d00 - - default default] [instance: 0d75b3d9-be73-41d3-a29e-752a5f62702c] Creating image(s)
Dec  6 02:04:01 np0005548731 nova_compute[232433]: 2025-12-06 07:04:01.787 232437 DEBUG nova.storage.rbd_utils [None req-84d66094-e3f3-43ea-9822-fd77441963a6 e53d35ee2a994a8a9b397f772571123e 7bc18054c7b54331a9f141e9c35a9d00 - - default default] rbd image 0d75b3d9-be73-41d3-a29e-752a5f62702c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec  6 02:04:01 np0005548731 nova_compute[232433]: 2025-12-06 07:04:01.815 232437 DEBUG nova.storage.rbd_utils [None req-84d66094-e3f3-43ea-9822-fd77441963a6 e53d35ee2a994a8a9b397f772571123e 7bc18054c7b54331a9f141e9c35a9d00 - - default default] rbd image 0d75b3d9-be73-41d3-a29e-752a5f62702c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec  6 02:04:01 np0005548731 nova_compute[232433]: 2025-12-06 07:04:01.848 232437 DEBUG nova.storage.rbd_utils [None req-84d66094-e3f3-43ea-9822-fd77441963a6 e53d35ee2a994a8a9b397f772571123e 7bc18054c7b54331a9f141e9c35a9d00 - - default default] rbd image 0d75b3d9-be73-41d3-a29e-752a5f62702c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec  6 02:04:01 np0005548731 nova_compute[232433]: 2025-12-06 07:04:01.851 232437 DEBUG oslo_concurrency.processutils [None req-84d66094-e3f3-43ea-9822-fd77441963a6 e53d35ee2a994a8a9b397f772571123e 7bc18054c7b54331a9f141e9c35a9d00 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  6 02:04:01 np0005548731 nova_compute[232433]: 2025-12-06 07:04:01.916 232437 DEBUG oslo_concurrency.processutils [None req-84d66094-e3f3-43ea-9822-fd77441963a6 e53d35ee2a994a8a9b397f772571123e 7bc18054c7b54331a9f141e9c35a9d00 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  6 02:04:01 np0005548731 nova_compute[232433]: 2025-12-06 07:04:01.919 232437 DEBUG oslo_concurrency.lockutils [None req-84d66094-e3f3-43ea-9822-fd77441963a6 e53d35ee2a994a8a9b397f772571123e 7bc18054c7b54331a9f141e9c35a9d00 - - default default] Acquiring lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  6 02:04:01 np0005548731 nova_compute[232433]: 2025-12-06 07:04:01.921 232437 DEBUG oslo_concurrency.lockutils [None req-84d66094-e3f3-43ea-9822-fd77441963a6 e53d35ee2a994a8a9b397f772571123e 7bc18054c7b54331a9f141e9c35a9d00 - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  6 02:04:01 np0005548731 nova_compute[232433]: 2025-12-06 07:04:01.922 232437 DEBUG oslo_concurrency.lockutils [None req-84d66094-e3f3-43ea-9822-fd77441963a6 e53d35ee2a994a8a9b397f772571123e 7bc18054c7b54331a9f141e9c35a9d00 - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:04:01 np0005548731 nova_compute[232433]: 2025-12-06 07:04:01.957 232437 DEBUG nova.storage.rbd_utils [None req-84d66094-e3f3-43ea-9822-fd77441963a6 e53d35ee2a994a8a9b397f772571123e 7bc18054c7b54331a9f141e9c35a9d00 - - default default] rbd image 0d75b3d9-be73-41d3-a29e-752a5f62702c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:04:01 np0005548731 nova_compute[232433]: 2025-12-06 07:04:01.962 232437 DEBUG oslo_concurrency.processutils [None req-84d66094-e3f3-43ea-9822-fd77441963a6 e53d35ee2a994a8a9b397f772571123e 7bc18054c7b54331a9f141e9c35a9d00 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef 0d75b3d9-be73-41d3-a29e-752a5f62702c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:04:02 np0005548731 nova_compute[232433]: 2025-12-06 07:04:02.368 232437 DEBUG oslo_concurrency.processutils [None req-84d66094-e3f3-43ea-9822-fd77441963a6 e53d35ee2a994a8a9b397f772571123e 7bc18054c7b54331a9f141e9c35a9d00 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef 0d75b3d9-be73-41d3-a29e-752a5f62702c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.406s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:04:02 np0005548731 nova_compute[232433]: 2025-12-06 07:04:02.445 232437 DEBUG nova.storage.rbd_utils [None req-84d66094-e3f3-43ea-9822-fd77441963a6 e53d35ee2a994a8a9b397f772571123e 7bc18054c7b54331a9f141e9c35a9d00 - - default default] resizing rbd image 0d75b3d9-be73-41d3-a29e-752a5f62702c_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Dec  6 02:04:02 np0005548731 nova_compute[232433]: 2025-12-06 07:04:02.550 232437 DEBUG nova.objects.instance [None req-84d66094-e3f3-43ea-9822-fd77441963a6 e53d35ee2a994a8a9b397f772571123e 7bc18054c7b54331a9f141e9c35a9d00 - - default default] Lazy-loading 'migration_context' on Instance uuid 0d75b3d9-be73-41d3-a29e-752a5f62702c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:04:02 np0005548731 nova_compute[232433]: 2025-12-06 07:04:02.620 232437 DEBUG nova.virt.libvirt.driver [None req-84d66094-e3f3-43ea-9822-fd77441963a6 e53d35ee2a994a8a9b397f772571123e 7bc18054c7b54331a9f141e9c35a9d00 - - default default] [instance: 0d75b3d9-be73-41d3-a29e-752a5f62702c] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Dec  6 02:04:02 np0005548731 nova_compute[232433]: 2025-12-06 07:04:02.621 232437 DEBUG nova.virt.libvirt.driver [None req-84d66094-e3f3-43ea-9822-fd77441963a6 e53d35ee2a994a8a9b397f772571123e 7bc18054c7b54331a9f141e9c35a9d00 - - default default] [instance: 0d75b3d9-be73-41d3-a29e-752a5f62702c] Ensure instance console log exists: /var/lib/nova/instances/0d75b3d9-be73-41d3-a29e-752a5f62702c/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec  6 02:04:02 np0005548731 nova_compute[232433]: 2025-12-06 07:04:02.622 232437 DEBUG oslo_concurrency.lockutils [None req-84d66094-e3f3-43ea-9822-fd77441963a6 e53d35ee2a994a8a9b397f772571123e 7bc18054c7b54331a9f141e9c35a9d00 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:04:02 np0005548731 nova_compute[232433]: 2025-12-06 07:04:02.622 232437 DEBUG oslo_concurrency.lockutils [None req-84d66094-e3f3-43ea-9822-fd77441963a6 e53d35ee2a994a8a9b397f772571123e 7bc18054c7b54331a9f141e9c35a9d00 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:04:02 np0005548731 nova_compute[232433]: 2025-12-06 07:04:02.622 232437 DEBUG oslo_concurrency.lockutils [None req-84d66094-e3f3-43ea-9822-fd77441963a6 e53d35ee2a994a8a9b397f772571123e 7bc18054c7b54331a9f141e9c35a9d00 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:04:02 np0005548731 nova_compute[232433]: 2025-12-06 07:04:02.835 232437 DEBUG nova.network.neutron [None req-84d66094-e3f3-43ea-9822-fd77441963a6 e53d35ee2a994a8a9b397f772571123e 7bc18054c7b54331a9f141e9c35a9d00 - - default default] [instance: 0d75b3d9-be73-41d3-a29e-752a5f62702c] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188#033[00m
Dec  6 02:04:02 np0005548731 nova_compute[232433]: 2025-12-06 07:04:02.835 232437 DEBUG nova.compute.manager [None req-84d66094-e3f3-43ea-9822-fd77441963a6 e53d35ee2a994a8a9b397f772571123e 7bc18054c7b54331a9f141e9c35a9d00 - - default default] [instance: 0d75b3d9-be73-41d3-a29e-752a5f62702c] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Dec  6 02:04:02 np0005548731 nova_compute[232433]: 2025-12-06 07:04:02.841 232437 DEBUG nova.virt.libvirt.driver [None req-84d66094-e3f3-43ea-9822-fd77441963a6 e53d35ee2a994a8a9b397f772571123e 7bc18054c7b54331a9f141e9c35a9d00 - - default default] [instance: 0d75b3d9-be73-41d3-a29e-752a5f62702c] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'device_type': 'disk', 'encryption_secret_uuid': None, 'disk_bus': 'virtio', 'boot_index': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'encryption_options': None, 'size': 0, 'encrypted': False, 'image_id': '6efab05d-c7cf-4770-a5c3-c806a2739063'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec  6 02:04:02 np0005548731 nova_compute[232433]: 2025-12-06 07:04:02.845 232437 WARNING nova.virt.libvirt.driver [None req-84d66094-e3f3-43ea-9822-fd77441963a6 e53d35ee2a994a8a9b397f772571123e 7bc18054c7b54331a9f141e9c35a9d00 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  6 02:04:02 np0005548731 nova_compute[232433]: 2025-12-06 07:04:02.850 232437 DEBUG nova.virt.libvirt.host [None req-84d66094-e3f3-43ea-9822-fd77441963a6 e53d35ee2a994a8a9b397f772571123e 7bc18054c7b54331a9f141e9c35a9d00 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec  6 02:04:02 np0005548731 nova_compute[232433]: 2025-12-06 07:04:02.851 232437 DEBUG nova.virt.libvirt.host [None req-84d66094-e3f3-43ea-9822-fd77441963a6 e53d35ee2a994a8a9b397f772571123e 7bc18054c7b54331a9f141e9c35a9d00 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec  6 02:04:02 np0005548731 nova_compute[232433]: 2025-12-06 07:04:02.855 232437 DEBUG nova.virt.libvirt.host [None req-84d66094-e3f3-43ea-9822-fd77441963a6 e53d35ee2a994a8a9b397f772571123e 7bc18054c7b54331a9f141e9c35a9d00 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec  6 02:04:02 np0005548731 nova_compute[232433]: 2025-12-06 07:04:02.855 232437 DEBUG nova.virt.libvirt.host [None req-84d66094-e3f3-43ea-9822-fd77441963a6 e53d35ee2a994a8a9b397f772571123e 7bc18054c7b54331a9f141e9c35a9d00 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec  6 02:04:02 np0005548731 nova_compute[232433]: 2025-12-06 07:04:02.856 232437 DEBUG nova.virt.libvirt.driver [None req-84d66094-e3f3-43ea-9822-fd77441963a6 e53d35ee2a994a8a9b397f772571123e 7bc18054c7b54331a9f141e9c35a9d00 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec  6 02:04:02 np0005548731 nova_compute[232433]: 2025-12-06 07:04:02.857 232437 DEBUG nova.virt.hardware [None req-84d66094-e3f3-43ea-9822-fd77441963a6 e53d35ee2a994a8a9b397f772571123e 7bc18054c7b54331a9f141e9c35a9d00 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-06T06:56:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='25848a18-11d9-4f11-80b5-5d005675c76d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec  6 02:04:02 np0005548731 nova_compute[232433]: 2025-12-06 07:04:02.857 232437 DEBUG nova.virt.hardware [None req-84d66094-e3f3-43ea-9822-fd77441963a6 e53d35ee2a994a8a9b397f772571123e 7bc18054c7b54331a9f141e9c35a9d00 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec  6 02:04:02 np0005548731 nova_compute[232433]: 2025-12-06 07:04:02.857 232437 DEBUG nova.virt.hardware [None req-84d66094-e3f3-43ea-9822-fd77441963a6 e53d35ee2a994a8a9b397f772571123e 7bc18054c7b54331a9f141e9c35a9d00 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec  6 02:04:02 np0005548731 nova_compute[232433]: 2025-12-06 07:04:02.858 232437 DEBUG nova.virt.hardware [None req-84d66094-e3f3-43ea-9822-fd77441963a6 e53d35ee2a994a8a9b397f772571123e 7bc18054c7b54331a9f141e9c35a9d00 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec  6 02:04:02 np0005548731 nova_compute[232433]: 2025-12-06 07:04:02.858 232437 DEBUG nova.virt.hardware [None req-84d66094-e3f3-43ea-9822-fd77441963a6 e53d35ee2a994a8a9b397f772571123e 7bc18054c7b54331a9f141e9c35a9d00 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec  6 02:04:02 np0005548731 nova_compute[232433]: 2025-12-06 07:04:02.858 232437 DEBUG nova.virt.hardware [None req-84d66094-e3f3-43ea-9822-fd77441963a6 e53d35ee2a994a8a9b397f772571123e 7bc18054c7b54331a9f141e9c35a9d00 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec  6 02:04:02 np0005548731 nova_compute[232433]: 2025-12-06 07:04:02.858 232437 DEBUG nova.virt.hardware [None req-84d66094-e3f3-43ea-9822-fd77441963a6 e53d35ee2a994a8a9b397f772571123e 7bc18054c7b54331a9f141e9c35a9d00 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec  6 02:04:02 np0005548731 nova_compute[232433]: 2025-12-06 07:04:02.859 232437 DEBUG nova.virt.hardware [None req-84d66094-e3f3-43ea-9822-fd77441963a6 e53d35ee2a994a8a9b397f772571123e 7bc18054c7b54331a9f141e9c35a9d00 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec  6 02:04:02 np0005548731 nova_compute[232433]: 2025-12-06 07:04:02.859 232437 DEBUG nova.virt.hardware [None req-84d66094-e3f3-43ea-9822-fd77441963a6 e53d35ee2a994a8a9b397f772571123e 7bc18054c7b54331a9f141e9c35a9d00 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec  6 02:04:02 np0005548731 nova_compute[232433]: 2025-12-06 07:04:02.859 232437 DEBUG nova.virt.hardware [None req-84d66094-e3f3-43ea-9822-fd77441963a6 e53d35ee2a994a8a9b397f772571123e 7bc18054c7b54331a9f141e9c35a9d00 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec  6 02:04:02 np0005548731 nova_compute[232433]: 2025-12-06 07:04:02.859 232437 DEBUG nova.virt.hardware [None req-84d66094-e3f3-43ea-9822-fd77441963a6 e53d35ee2a994a8a9b397f772571123e 7bc18054c7b54331a9f141e9c35a9d00 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec  6 02:04:02 np0005548731 nova_compute[232433]: 2025-12-06 07:04:02.862 232437 DEBUG oslo_concurrency.processutils [None req-84d66094-e3f3-43ea-9822-fd77441963a6 e53d35ee2a994a8a9b397f772571123e 7bc18054c7b54331a9f141e9c35a9d00 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:04:03 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  6 02:04:03 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1730755814' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  6 02:04:03 np0005548731 nova_compute[232433]: 2025-12-06 07:04:03.285 232437 DEBUG oslo_concurrency.processutils [None req-84d66094-e3f3-43ea-9822-fd77441963a6 e53d35ee2a994a8a9b397f772571123e 7bc18054c7b54331a9f141e9c35a9d00 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.423s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  6 02:04:03 np0005548731 nova_compute[232433]: 2025-12-06 07:04:03.310 232437 DEBUG nova.storage.rbd_utils [None req-84d66094-e3f3-43ea-9822-fd77441963a6 e53d35ee2a994a8a9b397f772571123e 7bc18054c7b54331a9f141e9c35a9d00 - - default default] rbd image 0d75b3d9-be73-41d3-a29e-752a5f62702c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec  6 02:04:03 np0005548731 nova_compute[232433]: 2025-12-06 07:04:03.314 232437 DEBUG oslo_concurrency.processutils [None req-84d66094-e3f3-43ea-9822-fd77441963a6 e53d35ee2a994a8a9b397f772571123e 7bc18054c7b54331a9f141e9c35a9d00 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  6 02:04:03 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:04:03 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:04:03 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:04:03.707 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:04:03 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  6 02:04:03 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3035910838' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  6 02:04:03 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:04:03 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:04:03 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:04:03.739 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:04:03 np0005548731 nova_compute[232433]: 2025-12-06 07:04:03.741 232437 DEBUG oslo_concurrency.processutils [None req-84d66094-e3f3-43ea-9822-fd77441963a6 e53d35ee2a994a8a9b397f772571123e 7bc18054c7b54331a9f141e9c35a9d00 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.427s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  6 02:04:03 np0005548731 nova_compute[232433]: 2025-12-06 07:04:03.743 232437 DEBUG nova.objects.instance [None req-84d66094-e3f3-43ea-9822-fd77441963a6 e53d35ee2a994a8a9b397f772571123e 7bc18054c7b54331a9f141e9c35a9d00 - - default default] Lazy-loading 'pci_devices' on Instance uuid 0d75b3d9-be73-41d3-a29e-752a5f62702c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec  6 02:04:03 np0005548731 nova_compute[232433]: 2025-12-06 07:04:03.780 232437 DEBUG nova.virt.libvirt.driver [None req-84d66094-e3f3-43ea-9822-fd77441963a6 e53d35ee2a994a8a9b397f772571123e 7bc18054c7b54331a9f141e9c35a9d00 - - default default] [instance: 0d75b3d9-be73-41d3-a29e-752a5f62702c] End _get_guest_xml xml=<domain type="kvm">
Dec  6 02:04:03 np0005548731 nova_compute[232433]:  <uuid>0d75b3d9-be73-41d3-a29e-752a5f62702c</uuid>
Dec  6 02:04:03 np0005548731 nova_compute[232433]:  <name>instance-00000019</name>
Dec  6 02:04:03 np0005548731 nova_compute[232433]:  <memory>131072</memory>
Dec  6 02:04:03 np0005548731 nova_compute[232433]:  <vcpu>1</vcpu>
Dec  6 02:04:03 np0005548731 nova_compute[232433]:  <metadata>
Dec  6 02:04:03 np0005548731 nova_compute[232433]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec  6 02:04:03 np0005548731 nova_compute[232433]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec  6 02:04:03 np0005548731 nova_compute[232433]:      <nova:name>tempest-ServerDiagnosticsTest-server-123434567</nova:name>
Dec  6 02:04:03 np0005548731 nova_compute[232433]:      <nova:creationTime>2025-12-06 07:04:02</nova:creationTime>
Dec  6 02:04:03 np0005548731 nova_compute[232433]:      <nova:flavor name="m1.nano">
Dec  6 02:04:03 np0005548731 nova_compute[232433]:        <nova:memory>128</nova:memory>
Dec  6 02:04:03 np0005548731 nova_compute[232433]:        <nova:disk>1</nova:disk>
Dec  6 02:04:03 np0005548731 nova_compute[232433]:        <nova:swap>0</nova:swap>
Dec  6 02:04:03 np0005548731 nova_compute[232433]:        <nova:ephemeral>0</nova:ephemeral>
Dec  6 02:04:03 np0005548731 nova_compute[232433]:        <nova:vcpus>1</nova:vcpus>
Dec  6 02:04:03 np0005548731 nova_compute[232433]:      </nova:flavor>
Dec  6 02:04:03 np0005548731 nova_compute[232433]:      <nova:owner>
Dec  6 02:04:03 np0005548731 nova_compute[232433]:        <nova:user uuid="e53d35ee2a994a8a9b397f772571123e">tempest-ServerDiagnosticsTest-1295686787-project-member</nova:user>
Dec  6 02:04:03 np0005548731 nova_compute[232433]:        <nova:project uuid="7bc18054c7b54331a9f141e9c35a9d00">tempest-ServerDiagnosticsTest-1295686787</nova:project>
Dec  6 02:04:03 np0005548731 nova_compute[232433]:      </nova:owner>
Dec  6 02:04:03 np0005548731 nova_compute[232433]:      <nova:root type="image" uuid="6efab05d-c7cf-4770-a5c3-c806a2739063"/>
Dec  6 02:04:03 np0005548731 nova_compute[232433]:      <nova:ports/>
Dec  6 02:04:03 np0005548731 nova_compute[232433]:    </nova:instance>
Dec  6 02:04:03 np0005548731 nova_compute[232433]:  </metadata>
Dec  6 02:04:03 np0005548731 nova_compute[232433]:  <sysinfo type="smbios">
Dec  6 02:04:03 np0005548731 nova_compute[232433]:    <system>
Dec  6 02:04:03 np0005548731 nova_compute[232433]:      <entry name="manufacturer">RDO</entry>
Dec  6 02:04:03 np0005548731 nova_compute[232433]:      <entry name="product">OpenStack Compute</entry>
Dec  6 02:04:03 np0005548731 nova_compute[232433]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec  6 02:04:03 np0005548731 nova_compute[232433]:      <entry name="serial">0d75b3d9-be73-41d3-a29e-752a5f62702c</entry>
Dec  6 02:04:03 np0005548731 nova_compute[232433]:      <entry name="uuid">0d75b3d9-be73-41d3-a29e-752a5f62702c</entry>
Dec  6 02:04:03 np0005548731 nova_compute[232433]:      <entry name="family">Virtual Machine</entry>
Dec  6 02:04:03 np0005548731 nova_compute[232433]:    </system>
Dec  6 02:04:03 np0005548731 nova_compute[232433]:  </sysinfo>
Dec  6 02:04:03 np0005548731 nova_compute[232433]:  <os>
Dec  6 02:04:03 np0005548731 nova_compute[232433]:    <type arch="x86_64" machine="q35">hvm</type>
Dec  6 02:04:03 np0005548731 nova_compute[232433]:    <boot dev="hd"/>
Dec  6 02:04:03 np0005548731 nova_compute[232433]:    <smbios mode="sysinfo"/>
Dec  6 02:04:03 np0005548731 nova_compute[232433]:  </os>
Dec  6 02:04:03 np0005548731 nova_compute[232433]:  <features>
Dec  6 02:04:03 np0005548731 nova_compute[232433]:    <acpi/>
Dec  6 02:04:03 np0005548731 nova_compute[232433]:    <apic/>
Dec  6 02:04:03 np0005548731 nova_compute[232433]:    <vmcoreinfo/>
Dec  6 02:04:03 np0005548731 nova_compute[232433]:  </features>
Dec  6 02:04:03 np0005548731 nova_compute[232433]:  <clock offset="utc">
Dec  6 02:04:03 np0005548731 nova_compute[232433]:    <timer name="pit" tickpolicy="delay"/>
Dec  6 02:04:03 np0005548731 nova_compute[232433]:    <timer name="rtc" tickpolicy="catchup"/>
Dec  6 02:04:03 np0005548731 nova_compute[232433]:    <timer name="hpet" present="no"/>
Dec  6 02:04:03 np0005548731 nova_compute[232433]:  </clock>
Dec  6 02:04:03 np0005548731 nova_compute[232433]:  <cpu mode="custom" match="exact">
Dec  6 02:04:03 np0005548731 nova_compute[232433]:    <model>Nehalem</model>
Dec  6 02:04:03 np0005548731 nova_compute[232433]:    <topology sockets="1" cores="1" threads="1"/>
Dec  6 02:04:03 np0005548731 nova_compute[232433]:  </cpu>
Dec  6 02:04:03 np0005548731 nova_compute[232433]:  <devices>
Dec  6 02:04:03 np0005548731 nova_compute[232433]:    <disk type="network" device="disk">
Dec  6 02:04:03 np0005548731 nova_compute[232433]:      <driver type="raw" cache="none"/>
Dec  6 02:04:03 np0005548731 nova_compute[232433]:      <source protocol="rbd" name="vms/0d75b3d9-be73-41d3-a29e-752a5f62702c_disk">
Dec  6 02:04:03 np0005548731 nova_compute[232433]:        <host name="192.168.122.100" port="6789"/>
Dec  6 02:04:03 np0005548731 nova_compute[232433]:        <host name="192.168.122.102" port="6789"/>
Dec  6 02:04:03 np0005548731 nova_compute[232433]:        <host name="192.168.122.101" port="6789"/>
Dec  6 02:04:03 np0005548731 nova_compute[232433]:      </source>
Dec  6 02:04:03 np0005548731 nova_compute[232433]:      <auth username="openstack">
Dec  6 02:04:03 np0005548731 nova_compute[232433]:        <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec  6 02:04:03 np0005548731 nova_compute[232433]:      </auth>
Dec  6 02:04:03 np0005548731 nova_compute[232433]:      <target dev="vda" bus="virtio"/>
Dec  6 02:04:03 np0005548731 nova_compute[232433]:    </disk>
Dec  6 02:04:03 np0005548731 nova_compute[232433]:    <disk type="network" device="cdrom">
Dec  6 02:04:03 np0005548731 nova_compute[232433]:      <driver type="raw" cache="none"/>
Dec  6 02:04:03 np0005548731 nova_compute[232433]:      <source protocol="rbd" name="vms/0d75b3d9-be73-41d3-a29e-752a5f62702c_disk.config">
Dec  6 02:04:03 np0005548731 nova_compute[232433]:        <host name="192.168.122.100" port="6789"/>
Dec  6 02:04:03 np0005548731 nova_compute[232433]:        <host name="192.168.122.102" port="6789"/>
Dec  6 02:04:03 np0005548731 nova_compute[232433]:        <host name="192.168.122.101" port="6789"/>
Dec  6 02:04:03 np0005548731 nova_compute[232433]:      </source>
Dec  6 02:04:03 np0005548731 nova_compute[232433]:      <auth username="openstack">
Dec  6 02:04:03 np0005548731 nova_compute[232433]:        <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec  6 02:04:03 np0005548731 nova_compute[232433]:      </auth>
Dec  6 02:04:03 np0005548731 nova_compute[232433]:      <target dev="sda" bus="sata"/>
Dec  6 02:04:03 np0005548731 nova_compute[232433]:    </disk>
Dec  6 02:04:03 np0005548731 nova_compute[232433]:    <serial type="pty">
Dec  6 02:04:03 np0005548731 nova_compute[232433]:      <log file="/var/lib/nova/instances/0d75b3d9-be73-41d3-a29e-752a5f62702c/console.log" append="off"/>
Dec  6 02:04:03 np0005548731 nova_compute[232433]:    </serial>
Dec  6 02:04:03 np0005548731 nova_compute[232433]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec  6 02:04:03 np0005548731 nova_compute[232433]:    <video>
Dec  6 02:04:03 np0005548731 nova_compute[232433]:      <model type="virtio"/>
Dec  6 02:04:03 np0005548731 nova_compute[232433]:    </video>
Dec  6 02:04:03 np0005548731 nova_compute[232433]:    <input type="tablet" bus="usb"/>
Dec  6 02:04:03 np0005548731 nova_compute[232433]:    <rng model="virtio">
Dec  6 02:04:03 np0005548731 nova_compute[232433]:      <backend model="random">/dev/urandom</backend>
Dec  6 02:04:03 np0005548731 nova_compute[232433]:    </rng>
Dec  6 02:04:03 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root"/>
Dec  6 02:04:03 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:04:03 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:04:03 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:04:03 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:04:03 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:04:03 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:04:03 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:04:03 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:04:03 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:04:03 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:04:03 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:04:03 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:04:03 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:04:03 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:04:03 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:04:03 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:04:03 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:04:03 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:04:03 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:04:03 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:04:03 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:04:03 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:04:03 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:04:03 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:04:03 np0005548731 nova_compute[232433]:    <controller type="usb" index="0"/>
Dec  6 02:04:03 np0005548731 nova_compute[232433]:    <memballoon model="virtio">
Dec  6 02:04:03 np0005548731 nova_compute[232433]:      <stats period="10"/>
Dec  6 02:04:03 np0005548731 nova_compute[232433]:    </memballoon>
Dec  6 02:04:03 np0005548731 nova_compute[232433]:  </devices>
Dec  6 02:04:03 np0005548731 nova_compute[232433]: </domain>
Dec  6 02:04:03 np0005548731 nova_compute[232433]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec  6 02:04:03 np0005548731 nova_compute[232433]: 2025-12-06 07:04:03.912 232437 DEBUG nova.virt.libvirt.driver [None req-84d66094-e3f3-43ea-9822-fd77441963a6 e53d35ee2a994a8a9b397f772571123e 7bc18054c7b54331a9f141e9c35a9d00 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  6 02:04:03 np0005548731 nova_compute[232433]: 2025-12-06 07:04:03.913 232437 DEBUG nova.virt.libvirt.driver [None req-84d66094-e3f3-43ea-9822-fd77441963a6 e53d35ee2a994a8a9b397f772571123e 7bc18054c7b54331a9f141e9c35a9d00 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  6 02:04:03 np0005548731 nova_compute[232433]: 2025-12-06 07:04:03.914 232437 INFO nova.virt.libvirt.driver [None req-84d66094-e3f3-43ea-9822-fd77441963a6 e53d35ee2a994a8a9b397f772571123e 7bc18054c7b54331a9f141e9c35a9d00 - - default default] [instance: 0d75b3d9-be73-41d3-a29e-752a5f62702c] Using config drive#033[00m
Dec  6 02:04:03 np0005548731 nova_compute[232433]: 2025-12-06 07:04:03.958 232437 DEBUG nova.storage.rbd_utils [None req-84d66094-e3f3-43ea-9822-fd77441963a6 e53d35ee2a994a8a9b397f772571123e 7bc18054c7b54331a9f141e9c35a9d00 - - default default] rbd image 0d75b3d9-be73-41d3-a29e-752a5f62702c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:04:04 np0005548731 nova_compute[232433]: 2025-12-06 07:04:04.113 232437 INFO nova.virt.libvirt.driver [None req-84d66094-e3f3-43ea-9822-fd77441963a6 e53d35ee2a994a8a9b397f772571123e 7bc18054c7b54331a9f141e9c35a9d00 - - default default] [instance: 0d75b3d9-be73-41d3-a29e-752a5f62702c] Creating config drive at /var/lib/nova/instances/0d75b3d9-be73-41d3-a29e-752a5f62702c/disk.config#033[00m
Dec  6 02:04:04 np0005548731 nova_compute[232433]: 2025-12-06 07:04:04.120 232437 DEBUG oslo_concurrency.processutils [None req-84d66094-e3f3-43ea-9822-fd77441963a6 e53d35ee2a994a8a9b397f772571123e 7bc18054c7b54331a9f141e9c35a9d00 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/0d75b3d9-be73-41d3-a29e-752a5f62702c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp004_1w2j execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:04:04 np0005548731 nova_compute[232433]: 2025-12-06 07:04:04.250 232437 DEBUG oslo_concurrency.processutils [None req-84d66094-e3f3-43ea-9822-fd77441963a6 e53d35ee2a994a8a9b397f772571123e 7bc18054c7b54331a9f141e9c35a9d00 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/0d75b3d9-be73-41d3-a29e-752a5f62702c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp004_1w2j" returned: 0 in 0.129s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:04:04 np0005548731 nova_compute[232433]: 2025-12-06 07:04:04.281 232437 DEBUG nova.storage.rbd_utils [None req-84d66094-e3f3-43ea-9822-fd77441963a6 e53d35ee2a994a8a9b397f772571123e 7bc18054c7b54331a9f141e9c35a9d00 - - default default] rbd image 0d75b3d9-be73-41d3-a29e-752a5f62702c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:04:04 np0005548731 nova_compute[232433]: 2025-12-06 07:04:04.284 232437 DEBUG oslo_concurrency.processutils [None req-84d66094-e3f3-43ea-9822-fd77441963a6 e53d35ee2a994a8a9b397f772571123e 7bc18054c7b54331a9f141e9c35a9d00 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/0d75b3d9-be73-41d3-a29e-752a5f62702c/disk.config 0d75b3d9-be73-41d3-a29e-752a5f62702c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:04:04 np0005548731 nova_compute[232433]: 2025-12-06 07:04:04.442 232437 DEBUG oslo_concurrency.processutils [None req-84d66094-e3f3-43ea-9822-fd77441963a6 e53d35ee2a994a8a9b397f772571123e 7bc18054c7b54331a9f141e9c35a9d00 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/0d75b3d9-be73-41d3-a29e-752a5f62702c/disk.config 0d75b3d9-be73-41d3-a29e-752a5f62702c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.158s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:04:04 np0005548731 nova_compute[232433]: 2025-12-06 07:04:04.444 232437 INFO nova.virt.libvirt.driver [None req-84d66094-e3f3-43ea-9822-fd77441963a6 e53d35ee2a994a8a9b397f772571123e 7bc18054c7b54331a9f141e9c35a9d00 - - default default] [instance: 0d75b3d9-be73-41d3-a29e-752a5f62702c] Deleting local config drive /var/lib/nova/instances/0d75b3d9-be73-41d3-a29e-752a5f62702c/disk.config because it was imported into RBD.#033[00m
Dec  6 02:04:04 np0005548731 systemd-machined[195355]: New machine qemu-12-instance-00000019.
Dec  6 02:04:04 np0005548731 systemd[1]: Started Virtual Machine qemu-12-instance-00000019.
Dec  6 02:04:04 np0005548731 podman[246138]: 2025-12-06 07:04:04.59697266 +0000 UTC m=+0.068203124 container health_status ae4fd08cdec31d6ae2a6b91ffff7deb6ca5a8375cb52ee4b685cc6f25796824e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true)
Dec  6 02:04:04 np0005548731 podman[246136]: 2025-12-06 07:04:04.599034889 +0000 UTC m=+0.075917319 container health_status 26c2ce9bdb83c6db5ccad7f5b2a7cb4e9354ab0235ad2f942ea42c8feda2142b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125)
Dec  6 02:04:04 np0005548731 podman[246137]: 2025-12-06 07:04:04.616276615 +0000 UTC m=+0.090109872 container health_status a118c3becf56cfbcae208065771ebf10a65162bb7b96389971293e534823fb78 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  6 02:04:05 np0005548731 nova_compute[232433]: 2025-12-06 07:04:05.128 232437 DEBUG nova.virt.driver [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Emitting event <LifecycleEvent: 1765004645.1276925, 0d75b3d9-be73-41d3-a29e-752a5f62702c => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 02:04:05 np0005548731 nova_compute[232433]: 2025-12-06 07:04:05.128 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 0d75b3d9-be73-41d3-a29e-752a5f62702c] VM Resumed (Lifecycle Event)#033[00m
Dec  6 02:04:05 np0005548731 nova_compute[232433]: 2025-12-06 07:04:05.131 232437 DEBUG nova.compute.manager [None req-84d66094-e3f3-43ea-9822-fd77441963a6 e53d35ee2a994a8a9b397f772571123e 7bc18054c7b54331a9f141e9c35a9d00 - - default default] [instance: 0d75b3d9-be73-41d3-a29e-752a5f62702c] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec  6 02:04:05 np0005548731 nova_compute[232433]: 2025-12-06 07:04:05.131 232437 DEBUG nova.virt.libvirt.driver [None req-84d66094-e3f3-43ea-9822-fd77441963a6 e53d35ee2a994a8a9b397f772571123e 7bc18054c7b54331a9f141e9c35a9d00 - - default default] [instance: 0d75b3d9-be73-41d3-a29e-752a5f62702c] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Dec  6 02:04:05 np0005548731 nova_compute[232433]: 2025-12-06 07:04:05.134 232437 INFO nova.virt.libvirt.driver [-] [instance: 0d75b3d9-be73-41d3-a29e-752a5f62702c] Instance spawned successfully.#033[00m
Dec  6 02:04:05 np0005548731 nova_compute[232433]: 2025-12-06 07:04:05.135 232437 DEBUG nova.virt.libvirt.driver [None req-84d66094-e3f3-43ea-9822-fd77441963a6 e53d35ee2a994a8a9b397f772571123e 7bc18054c7b54331a9f141e9c35a9d00 - - default default] [instance: 0d75b3d9-be73-41d3-a29e-752a5f62702c] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Dec  6 02:04:05 np0005548731 nova_compute[232433]: 2025-12-06 07:04:05.157 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 0d75b3d9-be73-41d3-a29e-752a5f62702c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:04:05 np0005548731 nova_compute[232433]: 2025-12-06 07:04:05.163 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 0d75b3d9-be73-41d3-a29e-752a5f62702c] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  6 02:04:05 np0005548731 nova_compute[232433]: 2025-12-06 07:04:05.166 232437 DEBUG nova.virt.libvirt.driver [None req-84d66094-e3f3-43ea-9822-fd77441963a6 e53d35ee2a994a8a9b397f772571123e 7bc18054c7b54331a9f141e9c35a9d00 - - default default] [instance: 0d75b3d9-be73-41d3-a29e-752a5f62702c] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 02:04:05 np0005548731 nova_compute[232433]: 2025-12-06 07:04:05.167 232437 DEBUG nova.virt.libvirt.driver [None req-84d66094-e3f3-43ea-9822-fd77441963a6 e53d35ee2a994a8a9b397f772571123e 7bc18054c7b54331a9f141e9c35a9d00 - - default default] [instance: 0d75b3d9-be73-41d3-a29e-752a5f62702c] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 02:04:05 np0005548731 nova_compute[232433]: 2025-12-06 07:04:05.167 232437 DEBUG nova.virt.libvirt.driver [None req-84d66094-e3f3-43ea-9822-fd77441963a6 e53d35ee2a994a8a9b397f772571123e 7bc18054c7b54331a9f141e9c35a9d00 - - default default] [instance: 0d75b3d9-be73-41d3-a29e-752a5f62702c] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 02:04:05 np0005548731 nova_compute[232433]: 2025-12-06 07:04:05.168 232437 DEBUG nova.virt.libvirt.driver [None req-84d66094-e3f3-43ea-9822-fd77441963a6 e53d35ee2a994a8a9b397f772571123e 7bc18054c7b54331a9f141e9c35a9d00 - - default default] [instance: 0d75b3d9-be73-41d3-a29e-752a5f62702c] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 02:04:05 np0005548731 nova_compute[232433]: 2025-12-06 07:04:05.168 232437 DEBUG nova.virt.libvirt.driver [None req-84d66094-e3f3-43ea-9822-fd77441963a6 e53d35ee2a994a8a9b397f772571123e 7bc18054c7b54331a9f141e9c35a9d00 - - default default] [instance: 0d75b3d9-be73-41d3-a29e-752a5f62702c] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 02:04:05 np0005548731 nova_compute[232433]: 2025-12-06 07:04:05.169 232437 DEBUG nova.virt.libvirt.driver [None req-84d66094-e3f3-43ea-9822-fd77441963a6 e53d35ee2a994a8a9b397f772571123e 7bc18054c7b54331a9f141e9c35a9d00 - - default default] [instance: 0d75b3d9-be73-41d3-a29e-752a5f62702c] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 02:04:05 np0005548731 nova_compute[232433]: 2025-12-06 07:04:05.192 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 0d75b3d9-be73-41d3-a29e-752a5f62702c] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  6 02:04:05 np0005548731 nova_compute[232433]: 2025-12-06 07:04:05.193 232437 DEBUG nova.virt.driver [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Emitting event <LifecycleEvent: 1765004645.1282437, 0d75b3d9-be73-41d3-a29e-752a5f62702c => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 02:04:05 np0005548731 nova_compute[232433]: 2025-12-06 07:04:05.193 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 0d75b3d9-be73-41d3-a29e-752a5f62702c] VM Started (Lifecycle Event)#033[00m
Dec  6 02:04:05 np0005548731 nova_compute[232433]: 2025-12-06 07:04:05.219 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 0d75b3d9-be73-41d3-a29e-752a5f62702c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:04:05 np0005548731 nova_compute[232433]: 2025-12-06 07:04:05.223 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 0d75b3d9-be73-41d3-a29e-752a5f62702c] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  6 02:04:05 np0005548731 nova_compute[232433]: 2025-12-06 07:04:05.248 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 0d75b3d9-be73-41d3-a29e-752a5f62702c] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  6 02:04:05 np0005548731 nova_compute[232433]: 2025-12-06 07:04:05.249 232437 INFO nova.compute.manager [None req-84d66094-e3f3-43ea-9822-fd77441963a6 e53d35ee2a994a8a9b397f772571123e 7bc18054c7b54331a9f141e9c35a9d00 - - default default] [instance: 0d75b3d9-be73-41d3-a29e-752a5f62702c] Took 3.49 seconds to spawn the instance on the hypervisor.#033[00m
Dec  6 02:04:05 np0005548731 nova_compute[232433]: 2025-12-06 07:04:05.250 232437 DEBUG nova.compute.manager [None req-84d66094-e3f3-43ea-9822-fd77441963a6 e53d35ee2a994a8a9b397f772571123e 7bc18054c7b54331a9f141e9c35a9d00 - - default default] [instance: 0d75b3d9-be73-41d3-a29e-752a5f62702c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:04:05 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e168 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:04:05 np0005548731 nova_compute[232433]: 2025-12-06 07:04:05.324 232437 INFO nova.compute.manager [None req-84d66094-e3f3-43ea-9822-fd77441963a6 e53d35ee2a994a8a9b397f772571123e 7bc18054c7b54331a9f141e9c35a9d00 - - default default] [instance: 0d75b3d9-be73-41d3-a29e-752a5f62702c] Took 4.68 seconds to build instance.#033[00m
Dec  6 02:04:05 np0005548731 nova_compute[232433]: 2025-12-06 07:04:05.342 232437 DEBUG oslo_concurrency.lockutils [None req-84d66094-e3f3-43ea-9822-fd77441963a6 e53d35ee2a994a8a9b397f772571123e 7bc18054c7b54331a9f141e9c35a9d00 - - default default] Lock "0d75b3d9-be73-41d3-a29e-752a5f62702c" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 4.803s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:04:05 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:04:05 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:04:05 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:04:05.709 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:04:05 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:04:05 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 02:04:05 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:04:05.741 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 02:04:06 np0005548731 nova_compute[232433]: 2025-12-06 07:04:06.090 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:04:06 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:04:06.103 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=10, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:ec:b3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '32:72:e7:89:e0:7d'}, ipsec=False) old=SB_Global(nb_cfg=9) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 02:04:06 np0005548731 nova_compute[232433]: 2025-12-06 07:04:06.104 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:04:06 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:04:06.105 143965 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec  6 02:04:06 np0005548731 nova_compute[232433]: 2025-12-06 07:04:06.200 232437 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765004631.1985075, 4dd1863a-3cb1-4e42-8611-ed9587a0b6ce => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 02:04:06 np0005548731 nova_compute[232433]: 2025-12-06 07:04:06.200 232437 INFO nova.compute.manager [-] [instance: 4dd1863a-3cb1-4e42-8611-ed9587a0b6ce] VM Stopped (Lifecycle Event)#033[00m
Dec  6 02:04:06 np0005548731 nova_compute[232433]: 2025-12-06 07:04:06.232 232437 DEBUG nova.compute.manager [None req-2c25ad59-c337-49a3-b2ad-979dc4d3e93b - - - - - -] [instance: 4dd1863a-3cb1-4e42-8611-ed9587a0b6ce] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:04:06 np0005548731 nova_compute[232433]: 2025-12-06 07:04:06.359 232437 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765004631.3582492, 646ce79a-a40a-4b2a-9af5-90c98c98b72a => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 02:04:06 np0005548731 nova_compute[232433]: 2025-12-06 07:04:06.359 232437 INFO nova.compute.manager [-] [instance: 646ce79a-a40a-4b2a-9af5-90c98c98b72a] VM Stopped (Lifecycle Event)#033[00m
Dec  6 02:04:06 np0005548731 nova_compute[232433]: 2025-12-06 07:04:06.381 232437 DEBUG nova.compute.manager [None req-1832855b-2592-4f7e-84d1-755d3b93f007 - - - - - -] [instance: 646ce79a-a40a-4b2a-9af5-90c98c98b72a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:04:06 np0005548731 nova_compute[232433]: 2025-12-06 07:04:06.388 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:04:06 np0005548731 nova_compute[232433]: 2025-12-06 07:04:06.519 232437 DEBUG nova.compute.manager [None req-d4ed1764-4b3a-414b-a46e-753871aa9f71 65ee878cf7144ceaaac06cf1c473333f 418031d879994067872c57f047bea151 - - default default] [instance: 0d75b3d9-be73-41d3-a29e-752a5f62702c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:04:06 np0005548731 nova_compute[232433]: 2025-12-06 07:04:06.523 232437 INFO nova.compute.manager [None req-d4ed1764-4b3a-414b-a46e-753871aa9f71 65ee878cf7144ceaaac06cf1c473333f 418031d879994067872c57f047bea151 - - default default] [instance: 0d75b3d9-be73-41d3-a29e-752a5f62702c] Retrieving diagnostics#033[00m
Dec  6 02:04:06 np0005548731 nova_compute[232433]: 2025-12-06 07:04:06.709 232437 DEBUG oslo_concurrency.lockutils [None req-4f8a628b-c8b8-4532-925c-10126ae48726 e53d35ee2a994a8a9b397f772571123e 7bc18054c7b54331a9f141e9c35a9d00 - - default default] Acquiring lock "0d75b3d9-be73-41d3-a29e-752a5f62702c" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:04:06 np0005548731 nova_compute[232433]: 2025-12-06 07:04:06.710 232437 DEBUG oslo_concurrency.lockutils [None req-4f8a628b-c8b8-4532-925c-10126ae48726 e53d35ee2a994a8a9b397f772571123e 7bc18054c7b54331a9f141e9c35a9d00 - - default default] Lock "0d75b3d9-be73-41d3-a29e-752a5f62702c" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:04:06 np0005548731 nova_compute[232433]: 2025-12-06 07:04:06.710 232437 DEBUG oslo_concurrency.lockutils [None req-4f8a628b-c8b8-4532-925c-10126ae48726 e53d35ee2a994a8a9b397f772571123e 7bc18054c7b54331a9f141e9c35a9d00 - - default default] Acquiring lock "0d75b3d9-be73-41d3-a29e-752a5f62702c-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:04:06 np0005548731 nova_compute[232433]: 2025-12-06 07:04:06.710 232437 DEBUG oslo_concurrency.lockutils [None req-4f8a628b-c8b8-4532-925c-10126ae48726 e53d35ee2a994a8a9b397f772571123e 7bc18054c7b54331a9f141e9c35a9d00 - - default default] Lock "0d75b3d9-be73-41d3-a29e-752a5f62702c-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:04:06 np0005548731 nova_compute[232433]: 2025-12-06 07:04:06.711 232437 DEBUG oslo_concurrency.lockutils [None req-4f8a628b-c8b8-4532-925c-10126ae48726 e53d35ee2a994a8a9b397f772571123e 7bc18054c7b54331a9f141e9c35a9d00 - - default default] Lock "0d75b3d9-be73-41d3-a29e-752a5f62702c-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:04:06 np0005548731 nova_compute[232433]: 2025-12-06 07:04:06.712 232437 INFO nova.compute.manager [None req-4f8a628b-c8b8-4532-925c-10126ae48726 e53d35ee2a994a8a9b397f772571123e 7bc18054c7b54331a9f141e9c35a9d00 - - default default] [instance: 0d75b3d9-be73-41d3-a29e-752a5f62702c] Terminating instance#033[00m
Dec  6 02:04:06 np0005548731 nova_compute[232433]: 2025-12-06 07:04:06.712 232437 DEBUG oslo_concurrency.lockutils [None req-4f8a628b-c8b8-4532-925c-10126ae48726 e53d35ee2a994a8a9b397f772571123e 7bc18054c7b54331a9f141e9c35a9d00 - - default default] Acquiring lock "refresh_cache-0d75b3d9-be73-41d3-a29e-752a5f62702c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 02:04:06 np0005548731 nova_compute[232433]: 2025-12-06 07:04:06.713 232437 DEBUG oslo_concurrency.lockutils [None req-4f8a628b-c8b8-4532-925c-10126ae48726 e53d35ee2a994a8a9b397f772571123e 7bc18054c7b54331a9f141e9c35a9d00 - - default default] Acquired lock "refresh_cache-0d75b3d9-be73-41d3-a29e-752a5f62702c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 02:04:06 np0005548731 nova_compute[232433]: 2025-12-06 07:04:06.713 232437 DEBUG nova.network.neutron [None req-4f8a628b-c8b8-4532-925c-10126ae48726 e53d35ee2a994a8a9b397f772571123e 7bc18054c7b54331a9f141e9c35a9d00 - - default default] [instance: 0d75b3d9-be73-41d3-a29e-752a5f62702c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec  6 02:04:06 np0005548731 nova_compute[232433]: 2025-12-06 07:04:06.947 232437 DEBUG nova.network.neutron [None req-4f8a628b-c8b8-4532-925c-10126ae48726 e53d35ee2a994a8a9b397f772571123e 7bc18054c7b54331a9f141e9c35a9d00 - - default default] [instance: 0d75b3d9-be73-41d3-a29e-752a5f62702c] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec  6 02:04:07 np0005548731 nova_compute[232433]: 2025-12-06 07:04:07.335 232437 DEBUG nova.network.neutron [None req-4f8a628b-c8b8-4532-925c-10126ae48726 e53d35ee2a994a8a9b397f772571123e 7bc18054c7b54331a9f141e9c35a9d00 - - default default] [instance: 0d75b3d9-be73-41d3-a29e-752a5f62702c] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 02:04:07 np0005548731 nova_compute[232433]: 2025-12-06 07:04:07.361 232437 DEBUG oslo_concurrency.lockutils [None req-4f8a628b-c8b8-4532-925c-10126ae48726 e53d35ee2a994a8a9b397f772571123e 7bc18054c7b54331a9f141e9c35a9d00 - - default default] Releasing lock "refresh_cache-0d75b3d9-be73-41d3-a29e-752a5f62702c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 02:04:07 np0005548731 nova_compute[232433]: 2025-12-06 07:04:07.362 232437 DEBUG nova.compute.manager [None req-4f8a628b-c8b8-4532-925c-10126ae48726 e53d35ee2a994a8a9b397f772571123e 7bc18054c7b54331a9f141e9c35a9d00 - - default default] [instance: 0d75b3d9-be73-41d3-a29e-752a5f62702c] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec  6 02:04:07 np0005548731 systemd[1]: machine-qemu\x2d12\x2dinstance\x2d00000019.scope: Deactivated successfully.
Dec  6 02:04:07 np0005548731 systemd[1]: machine-qemu\x2d12\x2dinstance\x2d00000019.scope: Consumed 2.963s CPU time.
Dec  6 02:04:07 np0005548731 systemd-machined[195355]: Machine qemu-12-instance-00000019 terminated.
Dec  6 02:04:07 np0005548731 nova_compute[232433]: 2025-12-06 07:04:07.580 232437 INFO nova.virt.libvirt.driver [-] [instance: 0d75b3d9-be73-41d3-a29e-752a5f62702c] Instance destroyed successfully.#033[00m
Dec  6 02:04:07 np0005548731 nova_compute[232433]: 2025-12-06 07:04:07.581 232437 DEBUG nova.objects.instance [None req-4f8a628b-c8b8-4532-925c-10126ae48726 e53d35ee2a994a8a9b397f772571123e 7bc18054c7b54331a9f141e9c35a9d00 - - default default] Lazy-loading 'resources' on Instance uuid 0d75b3d9-be73-41d3-a29e-752a5f62702c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:04:07 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:04:07 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:04:07 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:04:07.712 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:04:07 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:04:07 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:04:07 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:04:07.743 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:04:08 np0005548731 nova_compute[232433]: 2025-12-06 07:04:08.376 232437 INFO nova.virt.libvirt.driver [None req-4f8a628b-c8b8-4532-925c-10126ae48726 e53d35ee2a994a8a9b397f772571123e 7bc18054c7b54331a9f141e9c35a9d00 - - default default] [instance: 0d75b3d9-be73-41d3-a29e-752a5f62702c] Deleting instance files /var/lib/nova/instances/0d75b3d9-be73-41d3-a29e-752a5f62702c_del
Dec  6 02:04:08 np0005548731 nova_compute[232433]: 2025-12-06 07:04:08.377 232437 INFO nova.virt.libvirt.driver [None req-4f8a628b-c8b8-4532-925c-10126ae48726 e53d35ee2a994a8a9b397f772571123e 7bc18054c7b54331a9f141e9c35a9d00 - - default default] [instance: 0d75b3d9-be73-41d3-a29e-752a5f62702c] Deletion of /var/lib/nova/instances/0d75b3d9-be73-41d3-a29e-752a5f62702c_del complete
Dec  6 02:04:08 np0005548731 nova_compute[232433]: 2025-12-06 07:04:08.438 232437 INFO nova.compute.manager [None req-4f8a628b-c8b8-4532-925c-10126ae48726 e53d35ee2a994a8a9b397f772571123e 7bc18054c7b54331a9f141e9c35a9d00 - - default default] [instance: 0d75b3d9-be73-41d3-a29e-752a5f62702c] Took 1.08 seconds to destroy the instance on the hypervisor.
Dec  6 02:04:08 np0005548731 nova_compute[232433]: 2025-12-06 07:04:08.439 232437 DEBUG oslo.service.loopingcall [None req-4f8a628b-c8b8-4532-925c-10126ae48726 e53d35ee2a994a8a9b397f772571123e 7bc18054c7b54331a9f141e9c35a9d00 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec  6 02:04:08 np0005548731 nova_compute[232433]: 2025-12-06 07:04:08.439 232437 DEBUG nova.compute.manager [-] [instance: 0d75b3d9-be73-41d3-a29e-752a5f62702c] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec  6 02:04:08 np0005548731 nova_compute[232433]: 2025-12-06 07:04:08.439 232437 DEBUG nova.network.neutron [-] [instance: 0d75b3d9-be73-41d3-a29e-752a5f62702c] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec  6 02:04:08 np0005548731 nova_compute[232433]: 2025-12-06 07:04:08.628 232437 DEBUG nova.network.neutron [-] [instance: 0d75b3d9-be73-41d3-a29e-752a5f62702c] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec  6 02:04:08 np0005548731 nova_compute[232433]: 2025-12-06 07:04:08.643 232437 DEBUG nova.network.neutron [-] [instance: 0d75b3d9-be73-41d3-a29e-752a5f62702c] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec  6 02:04:08 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  6 02:04:08 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1565463242' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  6 02:04:08 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  6 02:04:08 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1565463242' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  6 02:04:08 np0005548731 nova_compute[232433]: 2025-12-06 07:04:08.659 232437 INFO nova.compute.manager [-] [instance: 0d75b3d9-be73-41d3-a29e-752a5f62702c] Took 0.22 seconds to deallocate network for instance.
Dec  6 02:04:08 np0005548731 nova_compute[232433]: 2025-12-06 07:04:08.719 232437 DEBUG oslo_concurrency.lockutils [None req-4f8a628b-c8b8-4532-925c-10126ae48726 e53d35ee2a994a8a9b397f772571123e 7bc18054c7b54331a9f141e9c35a9d00 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  6 02:04:08 np0005548731 nova_compute[232433]: 2025-12-06 07:04:08.720 232437 DEBUG oslo_concurrency.lockutils [None req-4f8a628b-c8b8-4532-925c-10126ae48726 e53d35ee2a994a8a9b397f772571123e 7bc18054c7b54331a9f141e9c35a9d00 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  6 02:04:08 np0005548731 nova_compute[232433]: 2025-12-06 07:04:08.780 232437 DEBUG oslo_concurrency.processutils [None req-4f8a628b-c8b8-4532-925c-10126ae48726 e53d35ee2a994a8a9b397f772571123e 7bc18054c7b54331a9f141e9c35a9d00 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  6 02:04:08 np0005548731 nova_compute[232433]: 2025-12-06 07:04:08.949 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 02:04:09 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 02:04:09 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3303593592' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 02:04:09 np0005548731 nova_compute[232433]: 2025-12-06 07:04:09.199 232437 DEBUG oslo_concurrency.processutils [None req-4f8a628b-c8b8-4532-925c-10126ae48726 e53d35ee2a994a8a9b397f772571123e 7bc18054c7b54331a9f141e9c35a9d00 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.419s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  6 02:04:09 np0005548731 nova_compute[232433]: 2025-12-06 07:04:09.205 232437 DEBUG nova.compute.provider_tree [None req-4f8a628b-c8b8-4532-925c-10126ae48726 e53d35ee2a994a8a9b397f772571123e 7bc18054c7b54331a9f141e9c35a9d00 - - default default] Inventory has not changed in ProviderTree for provider: 6d00757a-082f-486d-ae84-869a2ba2e6e7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec  6 02:04:09 np0005548731 nova_compute[232433]: 2025-12-06 07:04:09.227 232437 DEBUG nova.scheduler.client.report [None req-4f8a628b-c8b8-4532-925c-10126ae48726 e53d35ee2a994a8a9b397f772571123e 7bc18054c7b54331a9f141e9c35a9d00 - - default default] Inventory has not changed for provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec  6 02:04:09 np0005548731 nova_compute[232433]: 2025-12-06 07:04:09.263 232437 DEBUG oslo_concurrency.lockutils [None req-4f8a628b-c8b8-4532-925c-10126ae48726 e53d35ee2a994a8a9b397f772571123e 7bc18054c7b54331a9f141e9c35a9d00 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.543s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  6 02:04:09 np0005548731 nova_compute[232433]: 2025-12-06 07:04:09.315 232437 INFO nova.scheduler.client.report [None req-4f8a628b-c8b8-4532-925c-10126ae48726 e53d35ee2a994a8a9b397f772571123e 7bc18054c7b54331a9f141e9c35a9d00 - - default default] Deleted allocations for instance 0d75b3d9-be73-41d3-a29e-752a5f62702c
Dec  6 02:04:09 np0005548731 nova_compute[232433]: 2025-12-06 07:04:09.387 232437 DEBUG oslo_concurrency.lockutils [None req-4f8a628b-c8b8-4532-925c-10126ae48726 e53d35ee2a994a8a9b397f772571123e 7bc18054c7b54331a9f141e9c35a9d00 - - default default] Lock "0d75b3d9-be73-41d3-a29e-752a5f62702c" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.678s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  6 02:04:09 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:04:09 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:04:09 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:04:09.715 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:04:09 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:04:09 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:04:09 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:04:09.744 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:04:10 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e168 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:04:11 np0005548731 nova_compute[232433]: 2025-12-06 07:04:11.100 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 02:04:11 np0005548731 nova_compute[232433]: 2025-12-06 07:04:11.390 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 02:04:11 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:04:11 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:04:11 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:04:11.717 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:04:11 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:04:11 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:04:11 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:04:11.746 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:04:13 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:04:13.107 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=9f96b960-b4f2-40bd-ae99-08121f5e8b78, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '10'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec  6 02:04:13 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:04:13 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:04:13 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:04:13.720 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:04:13 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:04:13 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:04:13 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:04:13.748 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:04:15 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e168 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:04:15 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:04:15 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:04:15 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:04:15.724 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:04:15 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:04:15 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:04:15 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:04:15.750 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:04:16 np0005548731 nova_compute[232433]: 2025-12-06 07:04:16.102 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 02:04:16 np0005548731 nova_compute[232433]: 2025-12-06 07:04:16.392 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 02:04:17 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:04:17 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:04:17 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:04:17.727 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:04:17 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:04:17 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 02:04:17 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:04:17.752 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 02:04:19 np0005548731 nova_compute[232433]: 2025-12-06 07:04:19.173 232437 DEBUG oslo_concurrency.lockutils [None req-79fb1680-82bd-464f-9ecb-e088fd75c9fe e3271801a7ec4e8c98140835d902af20 80ff51d0b3b24875aba4fff9772ae201 - - default default] Acquiring lock "c8598dee-32a7-4624-91be-ec0e2dbb738b" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  6 02:04:19 np0005548731 nova_compute[232433]: 2025-12-06 07:04:19.173 232437 DEBUG oslo_concurrency.lockutils [None req-79fb1680-82bd-464f-9ecb-e088fd75c9fe e3271801a7ec4e8c98140835d902af20 80ff51d0b3b24875aba4fff9772ae201 - - default default] Lock "c8598dee-32a7-4624-91be-ec0e2dbb738b" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  6 02:04:19 np0005548731 nova_compute[232433]: 2025-12-06 07:04:19.194 232437 DEBUG nova.compute.manager [None req-79fb1680-82bd-464f-9ecb-e088fd75c9fe e3271801a7ec4e8c98140835d902af20 80ff51d0b3b24875aba4fff9772ae201 - - default default] [instance: c8598dee-32a7-4624-91be-ec0e2dbb738b] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec  6 02:04:19 np0005548731 nova_compute[232433]: 2025-12-06 07:04:19.302 232437 DEBUG oslo_concurrency.lockutils [None req-79fb1680-82bd-464f-9ecb-e088fd75c9fe e3271801a7ec4e8c98140835d902af20 80ff51d0b3b24875aba4fff9772ae201 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  6 02:04:19 np0005548731 nova_compute[232433]: 2025-12-06 07:04:19.303 232437 DEBUG oslo_concurrency.lockutils [None req-79fb1680-82bd-464f-9ecb-e088fd75c9fe e3271801a7ec4e8c98140835d902af20 80ff51d0b3b24875aba4fff9772ae201 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  6 02:04:19 np0005548731 nova_compute[232433]: 2025-12-06 07:04:19.309 232437 DEBUG nova.virt.hardware [None req-79fb1680-82bd-464f-9ecb-e088fd75c9fe e3271801a7ec4e8c98140835d902af20 80ff51d0b3b24875aba4fff9772ae201 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec  6 02:04:19 np0005548731 nova_compute[232433]: 2025-12-06 07:04:19.310 232437 INFO nova.compute.claims [None req-79fb1680-82bd-464f-9ecb-e088fd75c9fe e3271801a7ec4e8c98140835d902af20 80ff51d0b3b24875aba4fff9772ae201 - - default default] [instance: c8598dee-32a7-4624-91be-ec0e2dbb738b] Claim successful on node compute-2.ctlplane.example.com
Dec  6 02:04:19 np0005548731 nova_compute[232433]: 2025-12-06 07:04:19.429 232437 DEBUG oslo_concurrency.processutils [None req-79fb1680-82bd-464f-9ecb-e088fd75c9fe e3271801a7ec4e8c98140835d902af20 80ff51d0b3b24875aba4fff9772ae201 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  6 02:04:19 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:04:19 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:04:19 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:04:19.730 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:04:19 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:04:19 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:04:19 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:04:19.754 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:04:19 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 02:04:19 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/886785863' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 02:04:19 np0005548731 nova_compute[232433]: 2025-12-06 07:04:19.869 232437 DEBUG oslo_concurrency.processutils [None req-79fb1680-82bd-464f-9ecb-e088fd75c9fe e3271801a7ec4e8c98140835d902af20 80ff51d0b3b24875aba4fff9772ae201 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.440s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  6 02:04:19 np0005548731 nova_compute[232433]: 2025-12-06 07:04:19.875 232437 DEBUG nova.compute.provider_tree [None req-79fb1680-82bd-464f-9ecb-e088fd75c9fe e3271801a7ec4e8c98140835d902af20 80ff51d0b3b24875aba4fff9772ae201 - - default default] Inventory has not changed in ProviderTree for provider: 6d00757a-082f-486d-ae84-869a2ba2e6e7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec  6 02:04:19 np0005548731 nova_compute[232433]: 2025-12-06 07:04:19.895 232437 DEBUG nova.scheduler.client.report [None req-79fb1680-82bd-464f-9ecb-e088fd75c9fe e3271801a7ec4e8c98140835d902af20 80ff51d0b3b24875aba4fff9772ae201 - - default default] Inventory has not changed for provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec  6 02:04:19 np0005548731 nova_compute[232433]: 2025-12-06 07:04:19.923 232437 DEBUG oslo_concurrency.lockutils [None req-79fb1680-82bd-464f-9ecb-e088fd75c9fe e3271801a7ec4e8c98140835d902af20 80ff51d0b3b24875aba4fff9772ae201 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.620s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  6 02:04:19 np0005548731 nova_compute[232433]: 2025-12-06 07:04:19.924 232437 DEBUG nova.compute.manager [None req-79fb1680-82bd-464f-9ecb-e088fd75c9fe e3271801a7ec4e8c98140835d902af20 80ff51d0b3b24875aba4fff9772ae201 - - default default] [instance: c8598dee-32a7-4624-91be-ec0e2dbb738b] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec  6 02:04:19 np0005548731 nova_compute[232433]: 2025-12-06 07:04:19.980 232437 DEBUG nova.compute.manager [None req-79fb1680-82bd-464f-9ecb-e088fd75c9fe e3271801a7ec4e8c98140835d902af20 80ff51d0b3b24875aba4fff9772ae201 - - default default] [instance: c8598dee-32a7-4624-91be-ec0e2dbb738b] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec  6 02:04:19 np0005548731 nova_compute[232433]: 2025-12-06 07:04:19.981 232437 DEBUG nova.network.neutron [None req-79fb1680-82bd-464f-9ecb-e088fd75c9fe e3271801a7ec4e8c98140835d902af20 80ff51d0b3b24875aba4fff9772ae201 - - default default] [instance: c8598dee-32a7-4624-91be-ec0e2dbb738b] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec  6 02:04:20 np0005548731 nova_compute[232433]: 2025-12-06 07:04:20.002 232437 INFO nova.virt.libvirt.driver [None req-79fb1680-82bd-464f-9ecb-e088fd75c9fe e3271801a7ec4e8c98140835d902af20 80ff51d0b3b24875aba4fff9772ae201 - - default default] [instance: c8598dee-32a7-4624-91be-ec0e2dbb738b] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec  6 02:04:20 np0005548731 nova_compute[232433]: 2025-12-06 07:04:20.020 232437 DEBUG nova.compute.manager [None req-79fb1680-82bd-464f-9ecb-e088fd75c9fe e3271801a7ec4e8c98140835d902af20 80ff51d0b3b24875aba4fff9772ae201 - - default default] [instance: c8598dee-32a7-4624-91be-ec0e2dbb738b] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec  6 02:04:20 np0005548731 nova_compute[232433]: 2025-12-06 07:04:20.116 232437 DEBUG nova.compute.manager [None req-79fb1680-82bd-464f-9ecb-e088fd75c9fe e3271801a7ec4e8c98140835d902af20 80ff51d0b3b24875aba4fff9772ae201 - - default default] [instance: c8598dee-32a7-4624-91be-ec0e2dbb738b] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec  6 02:04:20 np0005548731 nova_compute[232433]: 2025-12-06 07:04:20.118 232437 DEBUG nova.virt.libvirt.driver [None req-79fb1680-82bd-464f-9ecb-e088fd75c9fe e3271801a7ec4e8c98140835d902af20 80ff51d0b3b24875aba4fff9772ae201 - - default default] [instance: c8598dee-32a7-4624-91be-ec0e2dbb738b] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec  6 02:04:20 np0005548731 nova_compute[232433]: 2025-12-06 07:04:20.118 232437 INFO nova.virt.libvirt.driver [None req-79fb1680-82bd-464f-9ecb-e088fd75c9fe e3271801a7ec4e8c98140835d902af20 80ff51d0b3b24875aba4fff9772ae201 - - default default] [instance: c8598dee-32a7-4624-91be-ec0e2dbb738b] Creating image(s)
Dec  6 02:04:20 np0005548731 nova_compute[232433]: 2025-12-06 07:04:20.143 232437 DEBUG nova.storage.rbd_utils [None req-79fb1680-82bd-464f-9ecb-e088fd75c9fe e3271801a7ec4e8c98140835d902af20 80ff51d0b3b24875aba4fff9772ae201 - - default default] rbd image c8598dee-32a7-4624-91be-ec0e2dbb738b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec  6 02:04:20 np0005548731 nova_compute[232433]: 2025-12-06 07:04:20.171 232437 DEBUG nova.storage.rbd_utils [None req-79fb1680-82bd-464f-9ecb-e088fd75c9fe e3271801a7ec4e8c98140835d902af20 80ff51d0b3b24875aba4fff9772ae201 - - default default] rbd image c8598dee-32a7-4624-91be-ec0e2dbb738b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec  6 02:04:20 np0005548731 nova_compute[232433]: 2025-12-06 07:04:20.200 232437 DEBUG nova.storage.rbd_utils [None req-79fb1680-82bd-464f-9ecb-e088fd75c9fe e3271801a7ec4e8c98140835d902af20 80ff51d0b3b24875aba4fff9772ae201 - - default default] rbd image c8598dee-32a7-4624-91be-ec0e2dbb738b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec  6 02:04:20 np0005548731 nova_compute[232433]: 2025-12-06 07:04:20.205 232437 DEBUG oslo_concurrency.processutils [None req-79fb1680-82bd-464f-9ecb-e088fd75c9fe e3271801a7ec4e8c98140835d902af20 80ff51d0b3b24875aba4fff9772ae201 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  6 02:04:20 np0005548731 nova_compute[232433]: 2025-12-06 07:04:20.272 232437 DEBUG oslo_concurrency.processutils [None req-79fb1680-82bd-464f-9ecb-e088fd75c9fe e3271801a7ec4e8c98140835d902af20 80ff51d0b3b24875aba4fff9772ae201 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  6 02:04:20 np0005548731 nova_compute[232433]: 2025-12-06 07:04:20.273 232437 DEBUG oslo_concurrency.lockutils [None req-79fb1680-82bd-464f-9ecb-e088fd75c9fe e3271801a7ec4e8c98140835d902af20 80ff51d0b3b24875aba4fff9772ae201 - - default default] Acquiring lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  6 02:04:20 np0005548731 nova_compute[232433]: 2025-12-06 07:04:20.273 232437 DEBUG oslo_concurrency.lockutils [None req-79fb1680-82bd-464f-9ecb-e088fd75c9fe e3271801a7ec4e8c98140835d902af20 80ff51d0b3b24875aba4fff9772ae201 - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  6 02:04:20 np0005548731 nova_compute[232433]: 2025-12-06 07:04:20.274 232437 DEBUG oslo_concurrency.lockutils [None req-79fb1680-82bd-464f-9ecb-e088fd75c9fe e3271801a7ec4e8c98140835d902af20 80ff51d0b3b24875aba4fff9772ae201 - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  6 02:04:20 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e168 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:04:20 np0005548731 nova_compute[232433]: 2025-12-06 07:04:20.299 232437 DEBUG nova.storage.rbd_utils [None req-79fb1680-82bd-464f-9ecb-e088fd75c9fe e3271801a7ec4e8c98140835d902af20 80ff51d0b3b24875aba4fff9772ae201 - - default default] rbd image c8598dee-32a7-4624-91be-ec0e2dbb738b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec  6 02:04:20 np0005548731 nova_compute[232433]: 2025-12-06 07:04:20.302 232437 DEBUG oslo_concurrency.processutils [None req-79fb1680-82bd-464f-9ecb-e088fd75c9fe e3271801a7ec4e8c98140835d902af20 80ff51d0b3b24875aba4fff9772ae201 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef c8598dee-32a7-4624-91be-ec0e2dbb738b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  6 02:04:20 np0005548731 nova_compute[232433]: 2025-12-06 07:04:20.676 232437 DEBUG nova.network.neutron [None req-79fb1680-82bd-464f-9ecb-e088fd75c9fe e3271801a7ec4e8c98140835d902af20 80ff51d0b3b24875aba4fff9772ae201 - - default default] [instance: c8598dee-32a7-4624-91be-ec0e2dbb738b] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188
Dec  6 02:04:20 np0005548731 nova_compute[232433]: 2025-12-06 07:04:20.676 232437 DEBUG nova.compute.manager [None req-79fb1680-82bd-464f-9ecb-e088fd75c9fe e3271801a7ec4e8c98140835d902af20 80ff51d0b3b24875aba4fff9772ae201 - - default default] [instance: c8598dee-32a7-4624-91be-ec0e2dbb738b] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec  6 02:04:21 np0005548731 nova_compute[232433]: 2025-12-06 07:04:21.139 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 02:04:21 np0005548731 nova_compute[232433]: 2025-12-06 07:04:21.171 232437 DEBUG oslo_concurrency.processutils [None req-79fb1680-82bd-464f-9ecb-e088fd75c9fe e3271801a7ec4e8c98140835d902af20 80ff51d0b3b24875aba4fff9772ae201 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef c8598dee-32a7-4624-91be-ec0e2dbb738b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.868s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  6 02:04:21 np0005548731 nova_compute[232433]: 2025-12-06 07:04:21.247 232437 DEBUG nova.storage.rbd_utils [None req-79fb1680-82bd-464f-9ecb-e088fd75c9fe e3271801a7ec4e8c98140835d902af20 80ff51d0b3b24875aba4fff9772ae201 - - default default] resizing rbd image c8598dee-32a7-4624-91be-ec0e2dbb738b_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Dec  6 02:04:21 np0005548731 nova_compute[232433]: 2025-12-06 07:04:21.349 232437 DEBUG nova.objects.instance [None req-79fb1680-82bd-464f-9ecb-e088fd75c9fe e3271801a7ec4e8c98140835d902af20 80ff51d0b3b24875aba4fff9772ae201 - - default default] Lazy-loading 'migration_context' on Instance uuid c8598dee-32a7-4624-91be-ec0e2dbb738b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec  6 02:04:21 np0005548731 nova_compute[232433]: 2025-12-06 07:04:21.377 232437 DEBUG nova.virt.libvirt.driver [None req-79fb1680-82bd-464f-9ecb-e088fd75c9fe e3271801a7ec4e8c98140835d902af20 80ff51d0b3b24875aba4fff9772ae201 - - default default] [instance: c8598dee-32a7-4624-91be-ec0e2dbb738b] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec  6 02:04:21 np0005548731 nova_compute[232433]: 2025-12-06 07:04:21.378 232437 DEBUG nova.virt.libvirt.driver [None req-79fb1680-82bd-464f-9ecb-e088fd75c9fe e3271801a7ec4e8c98140835d902af20 80ff51d0b3b24875aba4fff9772ae201 - - default default] [instance: c8598dee-32a7-4624-91be-ec0e2dbb738b] Ensure instance console log exists: /var/lib/nova/instances/c8598dee-32a7-4624-91be-ec0e2dbb738b/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec  6 02:04:21 np0005548731 nova_compute[232433]: 2025-12-06 07:04:21.378 232437 DEBUG oslo_concurrency.lockutils [None req-79fb1680-82bd-464f-9ecb-e088fd75c9fe e3271801a7ec4e8c98140835d902af20 80ff51d0b3b24875aba4fff9772ae201 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  6 02:04:21 np0005548731 nova_compute[232433]: 2025-12-06 07:04:21.379 232437 DEBUG oslo_concurrency.lockutils [None req-79fb1680-82bd-464f-9ecb-e088fd75c9fe e3271801a7ec4e8c98140835d902af20 80ff51d0b3b24875aba4fff9772ae201 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  6 02:04:21 np0005548731 nova_compute[232433]: 2025-12-06 07:04:21.379 232437 DEBUG oslo_concurrency.lockutils [None req-79fb1680-82bd-464f-9ecb-e088fd75c9fe e3271801a7ec4e8c98140835d902af20 80ff51d0b3b24875aba4fff9772ae201 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:04:21 np0005548731 nova_compute[232433]: 2025-12-06 07:04:21.381 232437 DEBUG nova.virt.libvirt.driver [None req-79fb1680-82bd-464f-9ecb-e088fd75c9fe e3271801a7ec4e8c98140835d902af20 80ff51d0b3b24875aba4fff9772ae201 - - default default] [instance: c8598dee-32a7-4624-91be-ec0e2dbb738b] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'device_type': 'disk', 'encryption_secret_uuid': None, 'disk_bus': 'virtio', 'boot_index': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'encryption_options': None, 'size': 0, 'encrypted': False, 'image_id': '6efab05d-c7cf-4770-a5c3-c806a2739063'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec  6 02:04:21 np0005548731 nova_compute[232433]: 2025-12-06 07:04:21.387 232437 WARNING nova.virt.libvirt.driver [None req-79fb1680-82bd-464f-9ecb-e088fd75c9fe e3271801a7ec4e8c98140835d902af20 80ff51d0b3b24875aba4fff9772ae201 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  6 02:04:21 np0005548731 nova_compute[232433]: 2025-12-06 07:04:21.393 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:04:21 np0005548731 nova_compute[232433]: 2025-12-06 07:04:21.399 232437 DEBUG nova.virt.libvirt.host [None req-79fb1680-82bd-464f-9ecb-e088fd75c9fe e3271801a7ec4e8c98140835d902af20 80ff51d0b3b24875aba4fff9772ae201 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec  6 02:04:21 np0005548731 nova_compute[232433]: 2025-12-06 07:04:21.400 232437 DEBUG nova.virt.libvirt.host [None req-79fb1680-82bd-464f-9ecb-e088fd75c9fe e3271801a7ec4e8c98140835d902af20 80ff51d0b3b24875aba4fff9772ae201 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec  6 02:04:21 np0005548731 nova_compute[232433]: 2025-12-06 07:04:21.404 232437 DEBUG nova.virt.libvirt.host [None req-79fb1680-82bd-464f-9ecb-e088fd75c9fe e3271801a7ec4e8c98140835d902af20 80ff51d0b3b24875aba4fff9772ae201 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec  6 02:04:21 np0005548731 nova_compute[232433]: 2025-12-06 07:04:21.405 232437 DEBUG nova.virt.libvirt.host [None req-79fb1680-82bd-464f-9ecb-e088fd75c9fe e3271801a7ec4e8c98140835d902af20 80ff51d0b3b24875aba4fff9772ae201 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec  6 02:04:21 np0005548731 nova_compute[232433]: 2025-12-06 07:04:21.406 232437 DEBUG nova.virt.libvirt.driver [None req-79fb1680-82bd-464f-9ecb-e088fd75c9fe e3271801a7ec4e8c98140835d902af20 80ff51d0b3b24875aba4fff9772ae201 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec  6 02:04:21 np0005548731 nova_compute[232433]: 2025-12-06 07:04:21.407 232437 DEBUG nova.virt.hardware [None req-79fb1680-82bd-464f-9ecb-e088fd75c9fe e3271801a7ec4e8c98140835d902af20 80ff51d0b3b24875aba4fff9772ae201 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-06T06:56:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='25848a18-11d9-4f11-80b5-5d005675c76d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec  6 02:04:21 np0005548731 nova_compute[232433]: 2025-12-06 07:04:21.407 232437 DEBUG nova.virt.hardware [None req-79fb1680-82bd-464f-9ecb-e088fd75c9fe e3271801a7ec4e8c98140835d902af20 80ff51d0b3b24875aba4fff9772ae201 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec  6 02:04:21 np0005548731 nova_compute[232433]: 2025-12-06 07:04:21.408 232437 DEBUG nova.virt.hardware [None req-79fb1680-82bd-464f-9ecb-e088fd75c9fe e3271801a7ec4e8c98140835d902af20 80ff51d0b3b24875aba4fff9772ae201 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec  6 02:04:21 np0005548731 nova_compute[232433]: 2025-12-06 07:04:21.408 232437 DEBUG nova.virt.hardware [None req-79fb1680-82bd-464f-9ecb-e088fd75c9fe e3271801a7ec4e8c98140835d902af20 80ff51d0b3b24875aba4fff9772ae201 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec  6 02:04:21 np0005548731 nova_compute[232433]: 2025-12-06 07:04:21.408 232437 DEBUG nova.virt.hardware [None req-79fb1680-82bd-464f-9ecb-e088fd75c9fe e3271801a7ec4e8c98140835d902af20 80ff51d0b3b24875aba4fff9772ae201 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec  6 02:04:21 np0005548731 nova_compute[232433]: 2025-12-06 07:04:21.409 232437 DEBUG nova.virt.hardware [None req-79fb1680-82bd-464f-9ecb-e088fd75c9fe e3271801a7ec4e8c98140835d902af20 80ff51d0b3b24875aba4fff9772ae201 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec  6 02:04:21 np0005548731 nova_compute[232433]: 2025-12-06 07:04:21.409 232437 DEBUG nova.virt.hardware [None req-79fb1680-82bd-464f-9ecb-e088fd75c9fe e3271801a7ec4e8c98140835d902af20 80ff51d0b3b24875aba4fff9772ae201 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec  6 02:04:21 np0005548731 nova_compute[232433]: 2025-12-06 07:04:21.409 232437 DEBUG nova.virt.hardware [None req-79fb1680-82bd-464f-9ecb-e088fd75c9fe e3271801a7ec4e8c98140835d902af20 80ff51d0b3b24875aba4fff9772ae201 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec  6 02:04:21 np0005548731 nova_compute[232433]: 2025-12-06 07:04:21.410 232437 DEBUG nova.virt.hardware [None req-79fb1680-82bd-464f-9ecb-e088fd75c9fe e3271801a7ec4e8c98140835d902af20 80ff51d0b3b24875aba4fff9772ae201 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec  6 02:04:21 np0005548731 nova_compute[232433]: 2025-12-06 07:04:21.410 232437 DEBUG nova.virt.hardware [None req-79fb1680-82bd-464f-9ecb-e088fd75c9fe e3271801a7ec4e8c98140835d902af20 80ff51d0b3b24875aba4fff9772ae201 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec  6 02:04:21 np0005548731 nova_compute[232433]: 2025-12-06 07:04:21.410 232437 DEBUG nova.virt.hardware [None req-79fb1680-82bd-464f-9ecb-e088fd75c9fe e3271801a7ec4e8c98140835d902af20 80ff51d0b3b24875aba4fff9772ae201 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec  6 02:04:21 np0005548731 nova_compute[232433]: 2025-12-06 07:04:21.414 232437 DEBUG oslo_concurrency.processutils [None req-79fb1680-82bd-464f-9ecb-e088fd75c9fe e3271801a7ec4e8c98140835d902af20 80ff51d0b3b24875aba4fff9772ae201 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:04:21 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:04:21 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 02:04:21 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:04:21.733 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 02:04:21 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:04:21 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:04:21 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:04:21.756 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:04:21 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  6 02:04:21 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3895071870' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  6 02:04:21 np0005548731 nova_compute[232433]: 2025-12-06 07:04:21.918 232437 DEBUG oslo_concurrency.processutils [None req-79fb1680-82bd-464f-9ecb-e088fd75c9fe e3271801a7ec4e8c98140835d902af20 80ff51d0b3b24875aba4fff9772ae201 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.504s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:04:21 np0005548731 nova_compute[232433]: 2025-12-06 07:04:21.950 232437 DEBUG nova.storage.rbd_utils [None req-79fb1680-82bd-464f-9ecb-e088fd75c9fe e3271801a7ec4e8c98140835d902af20 80ff51d0b3b24875aba4fff9772ae201 - - default default] rbd image c8598dee-32a7-4624-91be-ec0e2dbb738b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:04:21 np0005548731 nova_compute[232433]: 2025-12-06 07:04:21.957 232437 DEBUG oslo_concurrency.processutils [None req-79fb1680-82bd-464f-9ecb-e088fd75c9fe e3271801a7ec4e8c98140835d902af20 80ff51d0b3b24875aba4fff9772ae201 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:04:22 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  6 02:04:22 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1359816576' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  6 02:04:22 np0005548731 nova_compute[232433]: 2025-12-06 07:04:22.370 232437 DEBUG oslo_concurrency.processutils [None req-79fb1680-82bd-464f-9ecb-e088fd75c9fe e3271801a7ec4e8c98140835d902af20 80ff51d0b3b24875aba4fff9772ae201 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.413s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:04:22 np0005548731 nova_compute[232433]: 2025-12-06 07:04:22.372 232437 DEBUG nova.objects.instance [None req-79fb1680-82bd-464f-9ecb-e088fd75c9fe e3271801a7ec4e8c98140835d902af20 80ff51d0b3b24875aba4fff9772ae201 - - default default] Lazy-loading 'pci_devices' on Instance uuid c8598dee-32a7-4624-91be-ec0e2dbb738b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:04:22 np0005548731 nova_compute[232433]: 2025-12-06 07:04:22.420 232437 DEBUG nova.virt.libvirt.driver [None req-79fb1680-82bd-464f-9ecb-e088fd75c9fe e3271801a7ec4e8c98140835d902af20 80ff51d0b3b24875aba4fff9772ae201 - - default default] [instance: c8598dee-32a7-4624-91be-ec0e2dbb738b] End _get_guest_xml xml=<domain type="kvm">
Dec  6 02:04:22 np0005548731 nova_compute[232433]:  <uuid>c8598dee-32a7-4624-91be-ec0e2dbb738b</uuid>
Dec  6 02:04:22 np0005548731 nova_compute[232433]:  <name>instance-0000001a</name>
Dec  6 02:04:22 np0005548731 nova_compute[232433]:  <memory>131072</memory>
Dec  6 02:04:22 np0005548731 nova_compute[232433]:  <vcpu>1</vcpu>
Dec  6 02:04:22 np0005548731 nova_compute[232433]:  <metadata>
Dec  6 02:04:22 np0005548731 nova_compute[232433]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec  6 02:04:22 np0005548731 nova_compute[232433]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec  6 02:04:22 np0005548731 nova_compute[232433]:      <nova:name>tempest-ServerDiagnosticsNegativeTest-server-2033065459</nova:name>
Dec  6 02:04:22 np0005548731 nova_compute[232433]:      <nova:creationTime>2025-12-06 07:04:21</nova:creationTime>
Dec  6 02:04:22 np0005548731 nova_compute[232433]:      <nova:flavor name="m1.nano">
Dec  6 02:04:22 np0005548731 nova_compute[232433]:        <nova:memory>128</nova:memory>
Dec  6 02:04:22 np0005548731 nova_compute[232433]:        <nova:disk>1</nova:disk>
Dec  6 02:04:22 np0005548731 nova_compute[232433]:        <nova:swap>0</nova:swap>
Dec  6 02:04:22 np0005548731 nova_compute[232433]:        <nova:ephemeral>0</nova:ephemeral>
Dec  6 02:04:22 np0005548731 nova_compute[232433]:        <nova:vcpus>1</nova:vcpus>
Dec  6 02:04:22 np0005548731 nova_compute[232433]:      </nova:flavor>
Dec  6 02:04:22 np0005548731 nova_compute[232433]:      <nova:owner>
Dec  6 02:04:22 np0005548731 nova_compute[232433]:        <nova:user uuid="e3271801a7ec4e8c98140835d902af20">tempest-ServerDiagnosticsNegativeTest-990899949-project-member</nova:user>
Dec  6 02:04:22 np0005548731 nova_compute[232433]:        <nova:project uuid="80ff51d0b3b24875aba4fff9772ae201">tempest-ServerDiagnosticsNegativeTest-990899949</nova:project>
Dec  6 02:04:22 np0005548731 nova_compute[232433]:      </nova:owner>
Dec  6 02:04:22 np0005548731 nova_compute[232433]:      <nova:root type="image" uuid="6efab05d-c7cf-4770-a5c3-c806a2739063"/>
Dec  6 02:04:22 np0005548731 nova_compute[232433]:      <nova:ports/>
Dec  6 02:04:22 np0005548731 nova_compute[232433]:    </nova:instance>
Dec  6 02:04:22 np0005548731 nova_compute[232433]:  </metadata>
Dec  6 02:04:22 np0005548731 nova_compute[232433]:  <sysinfo type="smbios">
Dec  6 02:04:22 np0005548731 nova_compute[232433]:    <system>
Dec  6 02:04:22 np0005548731 nova_compute[232433]:      <entry name="manufacturer">RDO</entry>
Dec  6 02:04:22 np0005548731 nova_compute[232433]:      <entry name="product">OpenStack Compute</entry>
Dec  6 02:04:22 np0005548731 nova_compute[232433]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec  6 02:04:22 np0005548731 nova_compute[232433]:      <entry name="serial">c8598dee-32a7-4624-91be-ec0e2dbb738b</entry>
Dec  6 02:04:22 np0005548731 nova_compute[232433]:      <entry name="uuid">c8598dee-32a7-4624-91be-ec0e2dbb738b</entry>
Dec  6 02:04:22 np0005548731 nova_compute[232433]:      <entry name="family">Virtual Machine</entry>
Dec  6 02:04:22 np0005548731 nova_compute[232433]:    </system>
Dec  6 02:04:22 np0005548731 nova_compute[232433]:  </sysinfo>
Dec  6 02:04:22 np0005548731 nova_compute[232433]:  <os>
Dec  6 02:04:22 np0005548731 nova_compute[232433]:    <type arch="x86_64" machine="q35">hvm</type>
Dec  6 02:04:22 np0005548731 nova_compute[232433]:    <boot dev="hd"/>
Dec  6 02:04:22 np0005548731 nova_compute[232433]:    <smbios mode="sysinfo"/>
Dec  6 02:04:22 np0005548731 nova_compute[232433]:  </os>
Dec  6 02:04:22 np0005548731 nova_compute[232433]:  <features>
Dec  6 02:04:22 np0005548731 nova_compute[232433]:    <acpi/>
Dec  6 02:04:22 np0005548731 nova_compute[232433]:    <apic/>
Dec  6 02:04:22 np0005548731 nova_compute[232433]:    <vmcoreinfo/>
Dec  6 02:04:22 np0005548731 nova_compute[232433]:  </features>
Dec  6 02:04:22 np0005548731 nova_compute[232433]:  <clock offset="utc">
Dec  6 02:04:22 np0005548731 nova_compute[232433]:    <timer name="pit" tickpolicy="delay"/>
Dec  6 02:04:22 np0005548731 nova_compute[232433]:    <timer name="rtc" tickpolicy="catchup"/>
Dec  6 02:04:22 np0005548731 nova_compute[232433]:    <timer name="hpet" present="no"/>
Dec  6 02:04:22 np0005548731 nova_compute[232433]:  </clock>
Dec  6 02:04:22 np0005548731 nova_compute[232433]:  <cpu mode="custom" match="exact">
Dec  6 02:04:22 np0005548731 nova_compute[232433]:    <model>Nehalem</model>
Dec  6 02:04:22 np0005548731 nova_compute[232433]:    <topology sockets="1" cores="1" threads="1"/>
Dec  6 02:04:22 np0005548731 nova_compute[232433]:  </cpu>
Dec  6 02:04:22 np0005548731 nova_compute[232433]:  <devices>
Dec  6 02:04:22 np0005548731 nova_compute[232433]:    <disk type="network" device="disk">
Dec  6 02:04:22 np0005548731 nova_compute[232433]:      <driver type="raw" cache="none"/>
Dec  6 02:04:22 np0005548731 nova_compute[232433]:      <source protocol="rbd" name="vms/c8598dee-32a7-4624-91be-ec0e2dbb738b_disk">
Dec  6 02:04:22 np0005548731 nova_compute[232433]:        <host name="192.168.122.100" port="6789"/>
Dec  6 02:04:22 np0005548731 nova_compute[232433]:        <host name="192.168.122.102" port="6789"/>
Dec  6 02:04:22 np0005548731 nova_compute[232433]:        <host name="192.168.122.101" port="6789"/>
Dec  6 02:04:22 np0005548731 nova_compute[232433]:      </source>
Dec  6 02:04:22 np0005548731 nova_compute[232433]:      <auth username="openstack">
Dec  6 02:04:22 np0005548731 nova_compute[232433]:        <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec  6 02:04:22 np0005548731 nova_compute[232433]:      </auth>
Dec  6 02:04:22 np0005548731 nova_compute[232433]:      <target dev="vda" bus="virtio"/>
Dec  6 02:04:22 np0005548731 nova_compute[232433]:    </disk>
Dec  6 02:04:22 np0005548731 nova_compute[232433]:    <disk type="network" device="cdrom">
Dec  6 02:04:22 np0005548731 nova_compute[232433]:      <driver type="raw" cache="none"/>
Dec  6 02:04:22 np0005548731 nova_compute[232433]:      <source protocol="rbd" name="vms/c8598dee-32a7-4624-91be-ec0e2dbb738b_disk.config">
Dec  6 02:04:22 np0005548731 nova_compute[232433]:        <host name="192.168.122.100" port="6789"/>
Dec  6 02:04:22 np0005548731 nova_compute[232433]:        <host name="192.168.122.102" port="6789"/>
Dec  6 02:04:22 np0005548731 nova_compute[232433]:        <host name="192.168.122.101" port="6789"/>
Dec  6 02:04:22 np0005548731 nova_compute[232433]:      </source>
Dec  6 02:04:22 np0005548731 nova_compute[232433]:      <auth username="openstack">
Dec  6 02:04:22 np0005548731 nova_compute[232433]:        <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec  6 02:04:22 np0005548731 nova_compute[232433]:      </auth>
Dec  6 02:04:22 np0005548731 nova_compute[232433]:      <target dev="sda" bus="sata"/>
Dec  6 02:04:22 np0005548731 nova_compute[232433]:    </disk>
Dec  6 02:04:22 np0005548731 nova_compute[232433]:    <serial type="pty">
Dec  6 02:04:22 np0005548731 nova_compute[232433]:      <log file="/var/lib/nova/instances/c8598dee-32a7-4624-91be-ec0e2dbb738b/console.log" append="off"/>
Dec  6 02:04:22 np0005548731 nova_compute[232433]:    </serial>
Dec  6 02:04:22 np0005548731 nova_compute[232433]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec  6 02:04:22 np0005548731 nova_compute[232433]:    <video>
Dec  6 02:04:22 np0005548731 nova_compute[232433]:      <model type="virtio"/>
Dec  6 02:04:22 np0005548731 nova_compute[232433]:    </video>
Dec  6 02:04:22 np0005548731 nova_compute[232433]:    <input type="tablet" bus="usb"/>
Dec  6 02:04:22 np0005548731 nova_compute[232433]:    <rng model="virtio">
Dec  6 02:04:22 np0005548731 nova_compute[232433]:      <backend model="random">/dev/urandom</backend>
Dec  6 02:04:22 np0005548731 nova_compute[232433]:    </rng>
Dec  6 02:04:22 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root"/>
Dec  6 02:04:22 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:04:22 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:04:22 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:04:22 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:04:22 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:04:22 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:04:22 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:04:22 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:04:22 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:04:22 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:04:22 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:04:22 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:04:22 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:04:22 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:04:22 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:04:22 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:04:22 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:04:22 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:04:22 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:04:22 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:04:22 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:04:22 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:04:22 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:04:22 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:04:22 np0005548731 nova_compute[232433]:    <controller type="usb" index="0"/>
Dec  6 02:04:22 np0005548731 nova_compute[232433]:    <memballoon model="virtio">
Dec  6 02:04:22 np0005548731 nova_compute[232433]:      <stats period="10"/>
Dec  6 02:04:22 np0005548731 nova_compute[232433]:    </memballoon>
Dec  6 02:04:22 np0005548731 nova_compute[232433]:  </devices>
Dec  6 02:04:22 np0005548731 nova_compute[232433]: </domain>
Dec  6 02:04:22 np0005548731 nova_compute[232433]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec  6 02:04:22 np0005548731 nova_compute[232433]: 2025-12-06 07:04:22.477 232437 DEBUG nova.virt.libvirt.driver [None req-79fb1680-82bd-464f-9ecb-e088fd75c9fe e3271801a7ec4e8c98140835d902af20 80ff51d0b3b24875aba4fff9772ae201 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  6 02:04:22 np0005548731 nova_compute[232433]: 2025-12-06 07:04:22.478 232437 DEBUG nova.virt.libvirt.driver [None req-79fb1680-82bd-464f-9ecb-e088fd75c9fe e3271801a7ec4e8c98140835d902af20 80ff51d0b3b24875aba4fff9772ae201 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  6 02:04:22 np0005548731 nova_compute[232433]: 2025-12-06 07:04:22.478 232437 INFO nova.virt.libvirt.driver [None req-79fb1680-82bd-464f-9ecb-e088fd75c9fe e3271801a7ec4e8c98140835d902af20 80ff51d0b3b24875aba4fff9772ae201 - - default default] [instance: c8598dee-32a7-4624-91be-ec0e2dbb738b] Using config drive#033[00m
Dec  6 02:04:22 np0005548731 nova_compute[232433]: 2025-12-06 07:04:22.505 232437 DEBUG nova.storage.rbd_utils [None req-79fb1680-82bd-464f-9ecb-e088fd75c9fe e3271801a7ec4e8c98140835d902af20 80ff51d0b3b24875aba4fff9772ae201 - - default default] rbd image c8598dee-32a7-4624-91be-ec0e2dbb738b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:04:22 np0005548731 nova_compute[232433]: 2025-12-06 07:04:22.579 232437 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765004647.578753, 0d75b3d9-be73-41d3-a29e-752a5f62702c => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 02:04:22 np0005548731 nova_compute[232433]: 2025-12-06 07:04:22.580 232437 INFO nova.compute.manager [-] [instance: 0d75b3d9-be73-41d3-a29e-752a5f62702c] VM Stopped (Lifecycle Event)#033[00m
Dec  6 02:04:22 np0005548731 nova_compute[232433]: 2025-12-06 07:04:22.599 232437 DEBUG nova.compute.manager [None req-9044e9b1-7b84-4c02-81bb-fecd98ef7e80 - - - - - -] [instance: 0d75b3d9-be73-41d3-a29e-752a5f62702c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:04:22 np0005548731 nova_compute[232433]: 2025-12-06 07:04:22.692 232437 INFO nova.virt.libvirt.driver [None req-79fb1680-82bd-464f-9ecb-e088fd75c9fe e3271801a7ec4e8c98140835d902af20 80ff51d0b3b24875aba4fff9772ae201 - - default default] [instance: c8598dee-32a7-4624-91be-ec0e2dbb738b] Creating config drive at /var/lib/nova/instances/c8598dee-32a7-4624-91be-ec0e2dbb738b/disk.config#033[00m
Dec  6 02:04:22 np0005548731 nova_compute[232433]: 2025-12-06 07:04:22.697 232437 DEBUG oslo_concurrency.processutils [None req-79fb1680-82bd-464f-9ecb-e088fd75c9fe e3271801a7ec4e8c98140835d902af20 80ff51d0b3b24875aba4fff9772ae201 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/c8598dee-32a7-4624-91be-ec0e2dbb738b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpeg8q68ij execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:04:22 np0005548731 nova_compute[232433]: 2025-12-06 07:04:22.823 232437 DEBUG oslo_concurrency.processutils [None req-79fb1680-82bd-464f-9ecb-e088fd75c9fe e3271801a7ec4e8c98140835d902af20 80ff51d0b3b24875aba4fff9772ae201 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/c8598dee-32a7-4624-91be-ec0e2dbb738b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpeg8q68ij" returned: 0 in 0.126s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:04:22 np0005548731 nova_compute[232433]: 2025-12-06 07:04:22.852 232437 DEBUG nova.storage.rbd_utils [None req-79fb1680-82bd-464f-9ecb-e088fd75c9fe e3271801a7ec4e8c98140835d902af20 80ff51d0b3b24875aba4fff9772ae201 - - default default] rbd image c8598dee-32a7-4624-91be-ec0e2dbb738b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:04:22 np0005548731 nova_compute[232433]: 2025-12-06 07:04:22.856 232437 DEBUG oslo_concurrency.processutils [None req-79fb1680-82bd-464f-9ecb-e088fd75c9fe e3271801a7ec4e8c98140835d902af20 80ff51d0b3b24875aba4fff9772ae201 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/c8598dee-32a7-4624-91be-ec0e2dbb738b/disk.config c8598dee-32a7-4624-91be-ec0e2dbb738b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:04:23 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:04:23 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:04:23 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:04:23.736 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:04:23 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:04:23 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:04:23 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:04:23.758 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:04:23 np0005548731 nova_compute[232433]: 2025-12-06 07:04:23.882 232437 DEBUG oslo_concurrency.processutils [None req-79fb1680-82bd-464f-9ecb-e088fd75c9fe e3271801a7ec4e8c98140835d902af20 80ff51d0b3b24875aba4fff9772ae201 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/c8598dee-32a7-4624-91be-ec0e2dbb738b/disk.config c8598dee-32a7-4624-91be-ec0e2dbb738b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.026s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:04:23 np0005548731 nova_compute[232433]: 2025-12-06 07:04:23.883 232437 INFO nova.virt.libvirt.driver [None req-79fb1680-82bd-464f-9ecb-e088fd75c9fe e3271801a7ec4e8c98140835d902af20 80ff51d0b3b24875aba4fff9772ae201 - - default default] [instance: c8598dee-32a7-4624-91be-ec0e2dbb738b] Deleting local config drive /var/lib/nova/instances/c8598dee-32a7-4624-91be-ec0e2dbb738b/disk.config because it was imported into RBD.#033[00m
Dec  6 02:04:23 np0005548731 systemd-machined[195355]: New machine qemu-13-instance-0000001a.
Dec  6 02:04:23 np0005548731 systemd[1]: Started Virtual Machine qemu-13-instance-0000001a.
Dec  6 02:04:24 np0005548731 nova_compute[232433]: 2025-12-06 07:04:24.583 232437 DEBUG nova.virt.driver [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Emitting event <LifecycleEvent: 1765004664.5811987, c8598dee-32a7-4624-91be-ec0e2dbb738b => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 02:04:24 np0005548731 nova_compute[232433]: 2025-12-06 07:04:24.584 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: c8598dee-32a7-4624-91be-ec0e2dbb738b] VM Resumed (Lifecycle Event)#033[00m
Dec  6 02:04:24 np0005548731 nova_compute[232433]: 2025-12-06 07:04:24.587 232437 DEBUG nova.compute.manager [None req-79fb1680-82bd-464f-9ecb-e088fd75c9fe e3271801a7ec4e8c98140835d902af20 80ff51d0b3b24875aba4fff9772ae201 - - default default] [instance: c8598dee-32a7-4624-91be-ec0e2dbb738b] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec  6 02:04:24 np0005548731 nova_compute[232433]: 2025-12-06 07:04:24.587 232437 DEBUG nova.virt.libvirt.driver [None req-79fb1680-82bd-464f-9ecb-e088fd75c9fe e3271801a7ec4e8c98140835d902af20 80ff51d0b3b24875aba4fff9772ae201 - - default default] [instance: c8598dee-32a7-4624-91be-ec0e2dbb738b] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Dec  6 02:04:24 np0005548731 nova_compute[232433]: 2025-12-06 07:04:24.592 232437 INFO nova.virt.libvirt.driver [-] [instance: c8598dee-32a7-4624-91be-ec0e2dbb738b] Instance spawned successfully.#033[00m
Dec  6 02:04:24 np0005548731 nova_compute[232433]: 2025-12-06 07:04:24.592 232437 DEBUG nova.virt.libvirt.driver [None req-79fb1680-82bd-464f-9ecb-e088fd75c9fe e3271801a7ec4e8c98140835d902af20 80ff51d0b3b24875aba4fff9772ae201 - - default default] [instance: c8598dee-32a7-4624-91be-ec0e2dbb738b] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Dec  6 02:04:24 np0005548731 nova_compute[232433]: 2025-12-06 07:04:24.630 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: c8598dee-32a7-4624-91be-ec0e2dbb738b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:04:24 np0005548731 nova_compute[232433]: 2025-12-06 07:04:24.635 232437 DEBUG nova.virt.libvirt.driver [None req-79fb1680-82bd-464f-9ecb-e088fd75c9fe e3271801a7ec4e8c98140835d902af20 80ff51d0b3b24875aba4fff9772ae201 - - default default] [instance: c8598dee-32a7-4624-91be-ec0e2dbb738b] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 02:04:24 np0005548731 nova_compute[232433]: 2025-12-06 07:04:24.635 232437 DEBUG nova.virt.libvirt.driver [None req-79fb1680-82bd-464f-9ecb-e088fd75c9fe e3271801a7ec4e8c98140835d902af20 80ff51d0b3b24875aba4fff9772ae201 - - default default] [instance: c8598dee-32a7-4624-91be-ec0e2dbb738b] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 02:04:24 np0005548731 nova_compute[232433]: 2025-12-06 07:04:24.637 232437 DEBUG nova.virt.libvirt.driver [None req-79fb1680-82bd-464f-9ecb-e088fd75c9fe e3271801a7ec4e8c98140835d902af20 80ff51d0b3b24875aba4fff9772ae201 - - default default] [instance: c8598dee-32a7-4624-91be-ec0e2dbb738b] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 02:04:24 np0005548731 nova_compute[232433]: 2025-12-06 07:04:24.637 232437 DEBUG nova.virt.libvirt.driver [None req-79fb1680-82bd-464f-9ecb-e088fd75c9fe e3271801a7ec4e8c98140835d902af20 80ff51d0b3b24875aba4fff9772ae201 - - default default] [instance: c8598dee-32a7-4624-91be-ec0e2dbb738b] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 02:04:24 np0005548731 nova_compute[232433]: 2025-12-06 07:04:24.638 232437 DEBUG nova.virt.libvirt.driver [None req-79fb1680-82bd-464f-9ecb-e088fd75c9fe e3271801a7ec4e8c98140835d902af20 80ff51d0b3b24875aba4fff9772ae201 - - default default] [instance: c8598dee-32a7-4624-91be-ec0e2dbb738b] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 02:04:24 np0005548731 nova_compute[232433]: 2025-12-06 07:04:24.638 232437 DEBUG nova.virt.libvirt.driver [None req-79fb1680-82bd-464f-9ecb-e088fd75c9fe e3271801a7ec4e8c98140835d902af20 80ff51d0b3b24875aba4fff9772ae201 - - default default] [instance: c8598dee-32a7-4624-91be-ec0e2dbb738b] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 02:04:24 np0005548731 nova_compute[232433]: 2025-12-06 07:04:24.643 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: c8598dee-32a7-4624-91be-ec0e2dbb738b] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  6 02:04:24 np0005548731 nova_compute[232433]: 2025-12-06 07:04:24.671 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: c8598dee-32a7-4624-91be-ec0e2dbb738b] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  6 02:04:24 np0005548731 nova_compute[232433]: 2025-12-06 07:04:24.672 232437 DEBUG nova.virt.driver [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Emitting event <LifecycleEvent: 1765004664.5825553, c8598dee-32a7-4624-91be-ec0e2dbb738b => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 02:04:24 np0005548731 nova_compute[232433]: 2025-12-06 07:04:24.672 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: c8598dee-32a7-4624-91be-ec0e2dbb738b] VM Started (Lifecycle Event)#033[00m
Dec  6 02:04:24 np0005548731 nova_compute[232433]: 2025-12-06 07:04:24.699 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: c8598dee-32a7-4624-91be-ec0e2dbb738b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:04:24 np0005548731 nova_compute[232433]: 2025-12-06 07:04:24.702 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: c8598dee-32a7-4624-91be-ec0e2dbb738b] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  6 02:04:24 np0005548731 nova_compute[232433]: 2025-12-06 07:04:24.707 232437 INFO nova.compute.manager [None req-79fb1680-82bd-464f-9ecb-e088fd75c9fe e3271801a7ec4e8c98140835d902af20 80ff51d0b3b24875aba4fff9772ae201 - - default default] [instance: c8598dee-32a7-4624-91be-ec0e2dbb738b] Took 4.59 seconds to spawn the instance on the hypervisor.#033[00m
Dec  6 02:04:24 np0005548731 nova_compute[232433]: 2025-12-06 07:04:24.708 232437 DEBUG nova.compute.manager [None req-79fb1680-82bd-464f-9ecb-e088fd75c9fe e3271801a7ec4e8c98140835d902af20 80ff51d0b3b24875aba4fff9772ae201 - - default default] [instance: c8598dee-32a7-4624-91be-ec0e2dbb738b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:04:24 np0005548731 nova_compute[232433]: 2025-12-06 07:04:24.720 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: c8598dee-32a7-4624-91be-ec0e2dbb738b] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  6 02:04:24 np0005548731 nova_compute[232433]: 2025-12-06 07:04:24.786 232437 INFO nova.compute.manager [None req-79fb1680-82bd-464f-9ecb-e088fd75c9fe e3271801a7ec4e8c98140835d902af20 80ff51d0b3b24875aba4fff9772ae201 - - default default] [instance: c8598dee-32a7-4624-91be-ec0e2dbb738b] Took 5.53 seconds to build instance.#033[00m
Dec  6 02:04:24 np0005548731 nova_compute[232433]: 2025-12-06 07:04:24.800 232437 DEBUG oslo_concurrency.lockutils [None req-79fb1680-82bd-464f-9ecb-e088fd75c9fe e3271801a7ec4e8c98140835d902af20 80ff51d0b3b24875aba4fff9772ae201 - - default default] Lock "c8598dee-32a7-4624-91be-ec0e2dbb738b" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 5.627s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:04:25 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e168 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:04:25 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:04:25 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:04:25 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:04:25.738 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:04:25 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:04:25 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:04:25 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:04:25.760 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:04:26 np0005548731 nova_compute[232433]: 2025-12-06 07:04:26.098 232437 DEBUG oslo_concurrency.lockutils [None req-92730943-4a2e-498d-9848-e5049fc30375 e3271801a7ec4e8c98140835d902af20 80ff51d0b3b24875aba4fff9772ae201 - - default default] Acquiring lock "c8598dee-32a7-4624-91be-ec0e2dbb738b" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:04:26 np0005548731 nova_compute[232433]: 2025-12-06 07:04:26.099 232437 DEBUG oslo_concurrency.lockutils [None req-92730943-4a2e-498d-9848-e5049fc30375 e3271801a7ec4e8c98140835d902af20 80ff51d0b3b24875aba4fff9772ae201 - - default default] Lock "c8598dee-32a7-4624-91be-ec0e2dbb738b" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:04:26 np0005548731 nova_compute[232433]: 2025-12-06 07:04:26.099 232437 DEBUG oslo_concurrency.lockutils [None req-92730943-4a2e-498d-9848-e5049fc30375 e3271801a7ec4e8c98140835d902af20 80ff51d0b3b24875aba4fff9772ae201 - - default default] Acquiring lock "c8598dee-32a7-4624-91be-ec0e2dbb738b-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:04:26 np0005548731 nova_compute[232433]: 2025-12-06 07:04:26.099 232437 DEBUG oslo_concurrency.lockutils [None req-92730943-4a2e-498d-9848-e5049fc30375 e3271801a7ec4e8c98140835d902af20 80ff51d0b3b24875aba4fff9772ae201 - - default default] Lock "c8598dee-32a7-4624-91be-ec0e2dbb738b-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:04:26 np0005548731 nova_compute[232433]: 2025-12-06 07:04:26.099 232437 DEBUG oslo_concurrency.lockutils [None req-92730943-4a2e-498d-9848-e5049fc30375 e3271801a7ec4e8c98140835d902af20 80ff51d0b3b24875aba4fff9772ae201 - - default default] Lock "c8598dee-32a7-4624-91be-ec0e2dbb738b-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:04:26 np0005548731 nova_compute[232433]: 2025-12-06 07:04:26.101 232437 INFO nova.compute.manager [None req-92730943-4a2e-498d-9848-e5049fc30375 e3271801a7ec4e8c98140835d902af20 80ff51d0b3b24875aba4fff9772ae201 - - default default] [instance: c8598dee-32a7-4624-91be-ec0e2dbb738b] Terminating instance#033[00m
Dec  6 02:04:26 np0005548731 nova_compute[232433]: 2025-12-06 07:04:26.101 232437 DEBUG oslo_concurrency.lockutils [None req-92730943-4a2e-498d-9848-e5049fc30375 e3271801a7ec4e8c98140835d902af20 80ff51d0b3b24875aba4fff9772ae201 - - default default] Acquiring lock "refresh_cache-c8598dee-32a7-4624-91be-ec0e2dbb738b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 02:04:26 np0005548731 nova_compute[232433]: 2025-12-06 07:04:26.102 232437 DEBUG oslo_concurrency.lockutils [None req-92730943-4a2e-498d-9848-e5049fc30375 e3271801a7ec4e8c98140835d902af20 80ff51d0b3b24875aba4fff9772ae201 - - default default] Acquired lock "refresh_cache-c8598dee-32a7-4624-91be-ec0e2dbb738b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 02:04:26 np0005548731 nova_compute[232433]: 2025-12-06 07:04:26.102 232437 DEBUG nova.network.neutron [None req-92730943-4a2e-498d-9848-e5049fc30375 e3271801a7ec4e8c98140835d902af20 80ff51d0b3b24875aba4fff9772ae201 - - default default] [instance: c8598dee-32a7-4624-91be-ec0e2dbb738b] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec  6 02:04:26 np0005548731 nova_compute[232433]: 2025-12-06 07:04:26.143 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:04:26 np0005548731 nova_compute[232433]: 2025-12-06 07:04:26.395 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:04:26 np0005548731 nova_compute[232433]: 2025-12-06 07:04:26.523 232437 DEBUG nova.network.neutron [None req-92730943-4a2e-498d-9848-e5049fc30375 e3271801a7ec4e8c98140835d902af20 80ff51d0b3b24875aba4fff9772ae201 - - default default] [instance: c8598dee-32a7-4624-91be-ec0e2dbb738b] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec  6 02:04:26 np0005548731 nova_compute[232433]: 2025-12-06 07:04:26.812 232437 DEBUG nova.network.neutron [None req-92730943-4a2e-498d-9848-e5049fc30375 e3271801a7ec4e8c98140835d902af20 80ff51d0b3b24875aba4fff9772ae201 - - default default] [instance: c8598dee-32a7-4624-91be-ec0e2dbb738b] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 02:04:26 np0005548731 nova_compute[232433]: 2025-12-06 07:04:26.827 232437 DEBUG oslo_concurrency.lockutils [None req-92730943-4a2e-498d-9848-e5049fc30375 e3271801a7ec4e8c98140835d902af20 80ff51d0b3b24875aba4fff9772ae201 - - default default] Releasing lock "refresh_cache-c8598dee-32a7-4624-91be-ec0e2dbb738b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 02:04:26 np0005548731 nova_compute[232433]: 2025-12-06 07:04:26.828 232437 DEBUG nova.compute.manager [None req-92730943-4a2e-498d-9848-e5049fc30375 e3271801a7ec4e8c98140835d902af20 80ff51d0b3b24875aba4fff9772ae201 - - default default] [instance: c8598dee-32a7-4624-91be-ec0e2dbb738b] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec  6 02:04:27 np0005548731 systemd[1]: machine-qemu\x2d13\x2dinstance\x2d0000001a.scope: Deactivated successfully.
Dec  6 02:04:27 np0005548731 systemd[1]: machine-qemu\x2d13\x2dinstance\x2d0000001a.scope: Consumed 2.941s CPU time.
Dec  6 02:04:27 np0005548731 systemd-machined[195355]: Machine qemu-13-instance-0000001a terminated.
Dec  6 02:04:27 np0005548731 nova_compute[232433]: 2025-12-06 07:04:27.445 232437 INFO nova.virt.libvirt.driver [-] [instance: c8598dee-32a7-4624-91be-ec0e2dbb738b] Instance destroyed successfully.#033[00m
Dec  6 02:04:27 np0005548731 nova_compute[232433]: 2025-12-06 07:04:27.446 232437 DEBUG nova.objects.instance [None req-92730943-4a2e-498d-9848-e5049fc30375 e3271801a7ec4e8c98140835d902af20 80ff51d0b3b24875aba4fff9772ae201 - - default default] Lazy-loading 'resources' on Instance uuid c8598dee-32a7-4624-91be-ec0e2dbb738b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:04:27 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:04:27 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:04:27 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:04:27.742 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:04:27 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:04:27 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:04:27 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:04:27.763 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:04:29 np0005548731 nova_compute[232433]: 2025-12-06 07:04:29.611 232437 INFO nova.virt.libvirt.driver [None req-92730943-4a2e-498d-9848-e5049fc30375 e3271801a7ec4e8c98140835d902af20 80ff51d0b3b24875aba4fff9772ae201 - - default default] [instance: c8598dee-32a7-4624-91be-ec0e2dbb738b] Deleting instance files /var/lib/nova/instances/c8598dee-32a7-4624-91be-ec0e2dbb738b_del#033[00m
Dec  6 02:04:29 np0005548731 nova_compute[232433]: 2025-12-06 07:04:29.612 232437 INFO nova.virt.libvirt.driver [None req-92730943-4a2e-498d-9848-e5049fc30375 e3271801a7ec4e8c98140835d902af20 80ff51d0b3b24875aba4fff9772ae201 - - default default] [instance: c8598dee-32a7-4624-91be-ec0e2dbb738b] Deletion of /var/lib/nova/instances/c8598dee-32a7-4624-91be-ec0e2dbb738b_del complete#033[00m
Dec  6 02:04:29 np0005548731 nova_compute[232433]: 2025-12-06 07:04:29.670 232437 INFO nova.compute.manager [None req-92730943-4a2e-498d-9848-e5049fc30375 e3271801a7ec4e8c98140835d902af20 80ff51d0b3b24875aba4fff9772ae201 - - default default] [instance: c8598dee-32a7-4624-91be-ec0e2dbb738b] Took 2.84 seconds to destroy the instance on the hypervisor.#033[00m
Dec  6 02:04:29 np0005548731 nova_compute[232433]: 2025-12-06 07:04:29.670 232437 DEBUG oslo.service.loopingcall [None req-92730943-4a2e-498d-9848-e5049fc30375 e3271801a7ec4e8c98140835d902af20 80ff51d0b3b24875aba4fff9772ae201 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Dec  6 02:04:29 np0005548731 nova_compute[232433]: 2025-12-06 07:04:29.670 232437 DEBUG nova.compute.manager [-] [instance: c8598dee-32a7-4624-91be-ec0e2dbb738b] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Dec  6 02:04:29 np0005548731 nova_compute[232433]: 2025-12-06 07:04:29.671 232437 DEBUG nova.network.neutron [-] [instance: c8598dee-32a7-4624-91be-ec0e2dbb738b] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Dec  6 02:04:29 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:04:29 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:04:29 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:04:29.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:04:29 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:04:29 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 02:04:29 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:04:29.765 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 02:04:30 np0005548731 nova_compute[232433]: 2025-12-06 07:04:30.264 232437 DEBUG nova.network.neutron [-] [instance: c8598dee-32a7-4624-91be-ec0e2dbb738b] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec  6 02:04:30 np0005548731 nova_compute[232433]: 2025-12-06 07:04:30.280 232437 DEBUG nova.network.neutron [-] [instance: c8598dee-32a7-4624-91be-ec0e2dbb738b] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 02:04:30 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e168 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:04:30 np0005548731 nova_compute[232433]: 2025-12-06 07:04:30.307 232437 INFO nova.compute.manager [-] [instance: c8598dee-32a7-4624-91be-ec0e2dbb738b] Took 0.64 seconds to deallocate network for instance.#033[00m
Dec  6 02:04:30 np0005548731 nova_compute[232433]: 2025-12-06 07:04:30.369 232437 DEBUG oslo_concurrency.lockutils [None req-92730943-4a2e-498d-9848-e5049fc30375 e3271801a7ec4e8c98140835d902af20 80ff51d0b3b24875aba4fff9772ae201 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:04:30 np0005548731 nova_compute[232433]: 2025-12-06 07:04:30.370 232437 DEBUG oslo_concurrency.lockutils [None req-92730943-4a2e-498d-9848-e5049fc30375 e3271801a7ec4e8c98140835d902af20 80ff51d0b3b24875aba4fff9772ae201 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:04:30 np0005548731 nova_compute[232433]: 2025-12-06 07:04:30.424 232437 DEBUG oslo_concurrency.processutils [None req-92730943-4a2e-498d-9848-e5049fc30375 e3271801a7ec4e8c98140835d902af20 80ff51d0b3b24875aba4fff9772ae201 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:04:30 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 02:04:30 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1089812738' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 02:04:30 np0005548731 nova_compute[232433]: 2025-12-06 07:04:30.882 232437 DEBUG oslo_concurrency.processutils [None req-92730943-4a2e-498d-9848-e5049fc30375 e3271801a7ec4e8c98140835d902af20 80ff51d0b3b24875aba4fff9772ae201 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.457s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:04:30 np0005548731 nova_compute[232433]: 2025-12-06 07:04:30.888 232437 DEBUG nova.compute.provider_tree [None req-92730943-4a2e-498d-9848-e5049fc30375 e3271801a7ec4e8c98140835d902af20 80ff51d0b3b24875aba4fff9772ae201 - - default default] Inventory has not changed in ProviderTree for provider: 6d00757a-082f-486d-ae84-869a2ba2e6e7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  6 02:04:30 np0005548731 nova_compute[232433]: 2025-12-06 07:04:30.915 232437 DEBUG nova.scheduler.client.report [None req-92730943-4a2e-498d-9848-e5049fc30375 e3271801a7ec4e8c98140835d902af20 80ff51d0b3b24875aba4fff9772ae201 - - default default] Inventory has not changed for provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  6 02:04:30 np0005548731 nova_compute[232433]: 2025-12-06 07:04:30.938 232437 DEBUG oslo_concurrency.lockutils [None req-92730943-4a2e-498d-9848-e5049fc30375 e3271801a7ec4e8c98140835d902af20 80ff51d0b3b24875aba4fff9772ae201 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.568s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:04:30 np0005548731 nova_compute[232433]: 2025-12-06 07:04:30.964 232437 INFO nova.scheduler.client.report [None req-92730943-4a2e-498d-9848-e5049fc30375 e3271801a7ec4e8c98140835d902af20 80ff51d0b3b24875aba4fff9772ae201 - - default default] Deleted allocations for instance c8598dee-32a7-4624-91be-ec0e2dbb738b#033[00m
Dec  6 02:04:31 np0005548731 nova_compute[232433]: 2025-12-06 07:04:31.021 232437 DEBUG oslo_concurrency.lockutils [None req-92730943-4a2e-498d-9848-e5049fc30375 e3271801a7ec4e8c98140835d902af20 80ff51d0b3b24875aba4fff9772ae201 - - default default] Lock "c8598dee-32a7-4624-91be-ec0e2dbb738b" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.922s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:04:31 np0005548731 nova_compute[232433]: 2025-12-06 07:04:31.144 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:04:31 np0005548731 nova_compute[232433]: 2025-12-06 07:04:31.397 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:04:31 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:04:31 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:04:31 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:04:31.748 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:04:31 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:04:31 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:04:31 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:04:31.767 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:04:33 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:04:33 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:04:33 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:04:33.751 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:04:33 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:04:33 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:04:33 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:04:33.769 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:04:34 np0005548731 podman[246764]: 2025-12-06 07:04:34.908364604 +0000 UTC m=+0.068385702 container health_status 26c2ce9bdb83c6db5ccad7f5b2a7cb4e9354ab0235ad2f942ea42c8feda2142b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Dec  6 02:04:35 np0005548731 podman[246766]: 2025-12-06 07:04:35.000182221 +0000 UTC m=+0.158896827 container health_status ae4fd08cdec31d6ae2a6b91ffff7deb6ca5a8375cb52ee4b685cc6f25796824e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Dec  6 02:04:35 np0005548731 podman[246765]: 2025-12-06 07:04:35.018320688 +0000 UTC m=+0.178341926 container health_status a118c3becf56cfbcae208065771ebf10a65162bb7b96389971293e534823fb78 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Dec  6 02:04:35 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e168 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:04:35 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:04:35 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:04:35 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:04:35.754 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:04:35 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:04:35 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:04:35 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:04:35.771 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:04:36 np0005548731 nova_compute[232433]: 2025-12-06 07:04:36.146 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:04:36 np0005548731 nova_compute[232433]: 2025-12-06 07:04:36.399 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:04:37 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:04:37 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:04:37 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:04:37.757 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:04:37 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:04:37 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:04:37 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:04:37.773 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:04:38 np0005548731 nova_compute[232433]: 2025-12-06 07:04:38.032 232437 DEBUG oslo_concurrency.lockutils [None req-02ab9025-6254-4426-8f5c-ac86696939e3 756e3e1fa7e44042bdf37a6cdd877fac 9e86c61372e24db392d4a12ca71f7e00 - - default default] Acquiring lock "1712064c-6ba8-4660-972f-1e827a40781a" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:04:38 np0005548731 nova_compute[232433]: 2025-12-06 07:04:38.033 232437 DEBUG oslo_concurrency.lockutils [None req-02ab9025-6254-4426-8f5c-ac86696939e3 756e3e1fa7e44042bdf37a6cdd877fac 9e86c61372e24db392d4a12ca71f7e00 - - default default] Lock "1712064c-6ba8-4660-972f-1e827a40781a" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:04:38 np0005548731 nova_compute[232433]: 2025-12-06 07:04:38.050 232437 DEBUG nova.compute.manager [None req-02ab9025-6254-4426-8f5c-ac86696939e3 756e3e1fa7e44042bdf37a6cdd877fac 9e86c61372e24db392d4a12ca71f7e00 - - default default] [instance: 1712064c-6ba8-4660-972f-1e827a40781a] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Dec  6 02:04:38 np0005548731 nova_compute[232433]: 2025-12-06 07:04:38.157 232437 DEBUG oslo_concurrency.lockutils [None req-02ab9025-6254-4426-8f5c-ac86696939e3 756e3e1fa7e44042bdf37a6cdd877fac 9e86c61372e24db392d4a12ca71f7e00 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:04:38 np0005548731 nova_compute[232433]: 2025-12-06 07:04:38.158 232437 DEBUG oslo_concurrency.lockutils [None req-02ab9025-6254-4426-8f5c-ac86696939e3 756e3e1fa7e44042bdf37a6cdd877fac 9e86c61372e24db392d4a12ca71f7e00 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:04:38 np0005548731 nova_compute[232433]: 2025-12-06 07:04:38.166 232437 DEBUG nova.virt.hardware [None req-02ab9025-6254-4426-8f5c-ac86696939e3 756e3e1fa7e44042bdf37a6cdd877fac 9e86c61372e24db392d4a12ca71f7e00 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Dec  6 02:04:38 np0005548731 nova_compute[232433]: 2025-12-06 07:04:38.166 232437 INFO nova.compute.claims [None req-02ab9025-6254-4426-8f5c-ac86696939e3 756e3e1fa7e44042bdf37a6cdd877fac 9e86c61372e24db392d4a12ca71f7e00 - - default default] [instance: 1712064c-6ba8-4660-972f-1e827a40781a] Claim successful on node compute-2.ctlplane.example.com#033[00m
Dec  6 02:04:38 np0005548731 nova_compute[232433]: 2025-12-06 07:04:38.259 232437 DEBUG oslo_concurrency.processutils [None req-02ab9025-6254-4426-8f5c-ac86696939e3 756e3e1fa7e44042bdf37a6cdd877fac 9e86c61372e24db392d4a12ca71f7e00 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:04:38 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 02:04:38 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/554912748' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 02:04:38 np0005548731 nova_compute[232433]: 2025-12-06 07:04:38.681 232437 DEBUG oslo_concurrency.processutils [None req-02ab9025-6254-4426-8f5c-ac86696939e3 756e3e1fa7e44042bdf37a6cdd877fac 9e86c61372e24db392d4a12ca71f7e00 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.422s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:04:38 np0005548731 nova_compute[232433]: 2025-12-06 07:04:38.687 232437 DEBUG nova.compute.provider_tree [None req-02ab9025-6254-4426-8f5c-ac86696939e3 756e3e1fa7e44042bdf37a6cdd877fac 9e86c61372e24db392d4a12ca71f7e00 - - default default] Inventory has not changed in ProviderTree for provider: 6d00757a-082f-486d-ae84-869a2ba2e6e7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  6 02:04:38 np0005548731 nova_compute[232433]: 2025-12-06 07:04:38.704 232437 DEBUG nova.scheduler.client.report [None req-02ab9025-6254-4426-8f5c-ac86696939e3 756e3e1fa7e44042bdf37a6cdd877fac 9e86c61372e24db392d4a12ca71f7e00 - - default default] Inventory has not changed for provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  6 02:04:38 np0005548731 nova_compute[232433]: 2025-12-06 07:04:38.836 232437 DEBUG oslo_concurrency.lockutils [None req-02ab9025-6254-4426-8f5c-ac86696939e3 756e3e1fa7e44042bdf37a6cdd877fac 9e86c61372e24db392d4a12ca71f7e00 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.678s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:04:38 np0005548731 nova_compute[232433]: 2025-12-06 07:04:38.837 232437 DEBUG nova.compute.manager [None req-02ab9025-6254-4426-8f5c-ac86696939e3 756e3e1fa7e44042bdf37a6cdd877fac 9e86c61372e24db392d4a12ca71f7e00 - - default default] [instance: 1712064c-6ba8-4660-972f-1e827a40781a] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Dec  6 02:04:38 np0005548731 nova_compute[232433]: 2025-12-06 07:04:38.946 232437 DEBUG nova.compute.manager [None req-02ab9025-6254-4426-8f5c-ac86696939e3 756e3e1fa7e44042bdf37a6cdd877fac 9e86c61372e24db392d4a12ca71f7e00 - - default default] [instance: 1712064c-6ba8-4660-972f-1e827a40781a] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Dec  6 02:04:38 np0005548731 nova_compute[232433]: 2025-12-06 07:04:38.947 232437 DEBUG nova.network.neutron [None req-02ab9025-6254-4426-8f5c-ac86696939e3 756e3e1fa7e44042bdf37a6cdd877fac 9e86c61372e24db392d4a12ca71f7e00 - - default default] [instance: 1712064c-6ba8-4660-972f-1e827a40781a] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Dec  6 02:04:38 np0005548731 nova_compute[232433]: 2025-12-06 07:04:38.964 232437 INFO nova.virt.libvirt.driver [None req-02ab9025-6254-4426-8f5c-ac86696939e3 756e3e1fa7e44042bdf37a6cdd877fac 9e86c61372e24db392d4a12ca71f7e00 - - default default] [instance: 1712064c-6ba8-4660-972f-1e827a40781a] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Dec  6 02:04:39 np0005548731 nova_compute[232433]: 2025-12-06 07:04:39.010 232437 DEBUG nova.compute.manager [None req-02ab9025-6254-4426-8f5c-ac86696939e3 756e3e1fa7e44042bdf37a6cdd877fac 9e86c61372e24db392d4a12ca71f7e00 - - default default] [instance: 1712064c-6ba8-4660-972f-1e827a40781a] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Dec  6 02:04:39 np0005548731 nova_compute[232433]: 2025-12-06 07:04:39.069 232437 INFO nova.virt.block_device [None req-02ab9025-6254-4426-8f5c-ac86696939e3 756e3e1fa7e44042bdf37a6cdd877fac 9e86c61372e24db392d4a12ca71f7e00 - - default default] [instance: 1712064c-6ba8-4660-972f-1e827a40781a] Booting with volume 2b74c89f-1018-4ad4-8af8-84109979b9c7 at /dev/vda#033[00m
Dec  6 02:04:39 np0005548731 nova_compute[232433]: 2025-12-06 07:04:39.254 232437 DEBUG os_brick.utils [None req-02ab9025-6254-4426-8f5c-ac86696939e3 756e3e1fa7e44042bdf37a6cdd877fac 9e86c61372e24db392d4a12ca71f7e00 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.102', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-2.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Dec  6 02:04:39 np0005548731 nova_compute[232433]: 2025-12-06 07:04:39.255 237736 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:04:39 np0005548731 nova_compute[232433]: 2025-12-06 07:04:39.265 237736 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:04:39 np0005548731 nova_compute[232433]: 2025-12-06 07:04:39.265 237736 DEBUG oslo.privsep.daemon [-] privsep: reply[448c11d4-35f1-4351-8e8d-842b5d989442]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:04:39 np0005548731 nova_compute[232433]: 2025-12-06 07:04:39.266 237736 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:04:39 np0005548731 nova_compute[232433]: 2025-12-06 07:04:39.274 237736 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.007s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:04:39 np0005548731 nova_compute[232433]: 2025-12-06 07:04:39.274 237736 DEBUG oslo.privsep.daemon [-] privsep: reply[d026b15e-ce6e-4125-8aef-3246ccfe08e4]: (4, ('InitiatorName=iqn.1994-05.com.redhat:63778d5959f0', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:04:39 np0005548731 nova_compute[232433]: 2025-12-06 07:04:39.276 237736 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:04:39 np0005548731 nova_compute[232433]: 2025-12-06 07:04:39.284 237736 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:04:39 np0005548731 nova_compute[232433]: 2025-12-06 07:04:39.284 237736 DEBUG oslo.privsep.daemon [-] privsep: reply[59fbdcdc-ddeb-40dc-96a3-12133bb7a2e3]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:04:39 np0005548731 nova_compute[232433]: 2025-12-06 07:04:39.286 237736 DEBUG oslo.privsep.daemon [-] privsep: reply[7f0d804e-8605-4fa1-9b53-9f66cdd89c4f]: (4, 'ec2492e2-e0d1-43d3-b74f-744a8272a0e6') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:04:39 np0005548731 nova_compute[232433]: 2025-12-06 07:04:39.286 232437 DEBUG oslo_concurrency.processutils [None req-02ab9025-6254-4426-8f5c-ac86696939e3 756e3e1fa7e44042bdf37a6cdd877fac 9e86c61372e24db392d4a12ca71f7e00 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:04:39 np0005548731 nova_compute[232433]: 2025-12-06 07:04:39.314 232437 DEBUG oslo_concurrency.processutils [None req-02ab9025-6254-4426-8f5c-ac86696939e3 756e3e1fa7e44042bdf37a6cdd877fac 9e86c61372e24db392d4a12ca71f7e00 - - default default] CMD "nvme version" returned: 0 in 0.027s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:04:39 np0005548731 nova_compute[232433]: 2025-12-06 07:04:39.317 232437 DEBUG os_brick.initiator.connectors.lightos [None req-02ab9025-6254-4426-8f5c-ac86696939e3 756e3e1fa7e44042bdf37a6cdd877fac 9e86c61372e24db392d4a12ca71f7e00 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Dec  6 02:04:39 np0005548731 nova_compute[232433]: 2025-12-06 07:04:39.317 232437 DEBUG os_brick.initiator.connectors.lightos [None req-02ab9025-6254-4426-8f5c-ac86696939e3 756e3e1fa7e44042bdf37a6cdd877fac 9e86c61372e24db392d4a12ca71f7e00 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Dec  6 02:04:39 np0005548731 nova_compute[232433]: 2025-12-06 07:04:39.317 232437 DEBUG os_brick.initiator.connectors.lightos [None req-02ab9025-6254-4426-8f5c-ac86696939e3 756e3e1fa7e44042bdf37a6cdd877fac 9e86c61372e24db392d4a12ca71f7e00 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:bf3e0a14-a5f8-4123-aa26-e7cad37b879a dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Dec  6 02:04:39 np0005548731 nova_compute[232433]: 2025-12-06 07:04:39.318 232437 DEBUG os_brick.utils [None req-02ab9025-6254-4426-8f5c-ac86696939e3 756e3e1fa7e44042bdf37a6cdd877fac 9e86c61372e24db392d4a12ca71f7e00 - - default default] <== get_connector_properties: return (63ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.102', 'host': 'compute-2.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:63778d5959f0', 'do_local_attach': False, 'nvme_hostid': 'bf3e0a14-a5f8-4123-aa26-e7cad37b879a', 'system uuid': 'ec2492e2-e0d1-43d3-b74f-744a8272a0e6', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:bf3e0a14-a5f8-4123-aa26-e7cad37b879a', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Dec  6 02:04:39 np0005548731 nova_compute[232433]: 2025-12-06 07:04:39.318 232437 DEBUG nova.virt.block_device [None req-02ab9025-6254-4426-8f5c-ac86696939e3 756e3e1fa7e44042bdf37a6cdd877fac 9e86c61372e24db392d4a12ca71f7e00 - - default default] [instance: 1712064c-6ba8-4660-972f-1e827a40781a] Updating existing volume attachment record: b2a91c5a-7187-4625-b7d3-9dd1c6ec4d90 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Dec  6 02:04:39 np0005548731 nova_compute[232433]: 2025-12-06 07:04:39.660 232437 DEBUG nova.policy [None req-02ab9025-6254-4426-8f5c-ac86696939e3 756e3e1fa7e44042bdf37a6cdd877fac 9e86c61372e24db392d4a12ca71f7e00 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '756e3e1fa7e44042bdf37a6cdd877fac', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '9e86c61372e24db392d4a12ca71f7e00', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Dec  6 02:04:39 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:04:39 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:04:39 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:04:39.760 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:04:39 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:04:39 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:04:39 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:04:39.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:04:40 np0005548731 nova_compute[232433]: 2025-12-06 07:04:40.105 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:04:40 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e168 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:04:40 np0005548731 nova_compute[232433]: 2025-12-06 07:04:40.475 232437 DEBUG nova.compute.manager [None req-02ab9025-6254-4426-8f5c-ac86696939e3 756e3e1fa7e44042bdf37a6cdd877fac 9e86c61372e24db392d4a12ca71f7e00 - - default default] [instance: 1712064c-6ba8-4660-972f-1e827a40781a] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Dec  6 02:04:40 np0005548731 nova_compute[232433]: 2025-12-06 07:04:40.477 232437 DEBUG nova.virt.libvirt.driver [None req-02ab9025-6254-4426-8f5c-ac86696939e3 756e3e1fa7e44042bdf37a6cdd877fac 9e86c61372e24db392d4a12ca71f7e00 - - default default] [instance: 1712064c-6ba8-4660-972f-1e827a40781a] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Dec  6 02:04:40 np0005548731 nova_compute[232433]: 2025-12-06 07:04:40.478 232437 INFO nova.virt.libvirt.driver [None req-02ab9025-6254-4426-8f5c-ac86696939e3 756e3e1fa7e44042bdf37a6cdd877fac 9e86c61372e24db392d4a12ca71f7e00 - - default default] [instance: 1712064c-6ba8-4660-972f-1e827a40781a] Creating image(s)#033[00m
Dec  6 02:04:40 np0005548731 nova_compute[232433]: 2025-12-06 07:04:40.478 232437 DEBUG nova.virt.libvirt.driver [None req-02ab9025-6254-4426-8f5c-ac86696939e3 756e3e1fa7e44042bdf37a6cdd877fac 9e86c61372e24db392d4a12ca71f7e00 - - default default] [instance: 1712064c-6ba8-4660-972f-1e827a40781a] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859#033[00m
Dec  6 02:04:40 np0005548731 nova_compute[232433]: 2025-12-06 07:04:40.479 232437 DEBUG nova.virt.libvirt.driver [None req-02ab9025-6254-4426-8f5c-ac86696939e3 756e3e1fa7e44042bdf37a6cdd877fac 9e86c61372e24db392d4a12ca71f7e00 - - default default] [instance: 1712064c-6ba8-4660-972f-1e827a40781a] Ensure instance console log exists: /var/lib/nova/instances/1712064c-6ba8-4660-972f-1e827a40781a/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec  6 02:04:40 np0005548731 nova_compute[232433]: 2025-12-06 07:04:40.479 232437 DEBUG oslo_concurrency.lockutils [None req-02ab9025-6254-4426-8f5c-ac86696939e3 756e3e1fa7e44042bdf37a6cdd877fac 9e86c61372e24db392d4a12ca71f7e00 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:04:40 np0005548731 nova_compute[232433]: 2025-12-06 07:04:40.479 232437 DEBUG oslo_concurrency.lockutils [None req-02ab9025-6254-4426-8f5c-ac86696939e3 756e3e1fa7e44042bdf37a6cdd877fac 9e86c61372e24db392d4a12ca71f7e00 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:04:40 np0005548731 nova_compute[232433]: 2025-12-06 07:04:40.480 232437 DEBUG oslo_concurrency.lockutils [None req-02ab9025-6254-4426-8f5c-ac86696939e3 756e3e1fa7e44042bdf37a6cdd877fac 9e86c61372e24db392d4a12ca71f7e00 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:04:40 np0005548731 nova_compute[232433]: 2025-12-06 07:04:40.618 232437 DEBUG nova.network.neutron [None req-02ab9025-6254-4426-8f5c-ac86696939e3 756e3e1fa7e44042bdf37a6cdd877fac 9e86c61372e24db392d4a12ca71f7e00 - - default default] [instance: 1712064c-6ba8-4660-972f-1e827a40781a] Successfully created port: 9e200480-deff-4f39-9945-230d157ac906 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Dec  6 02:04:41 np0005548731 nova_compute[232433]: 2025-12-06 07:04:41.178 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:04:41 np0005548731 nova_compute[232433]: 2025-12-06 07:04:41.401 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:04:41 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:04:41 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:04:41 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:04:41.763 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:04:41 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:04:41 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:04:41 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:04:41.777 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:04:42 np0005548731 nova_compute[232433]: 2025-12-06 07:04:42.099 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:04:42 np0005548731 nova_compute[232433]: 2025-12-06 07:04:42.444 232437 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765004667.443117, c8598dee-32a7-4624-91be-ec0e2dbb738b => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 02:04:42 np0005548731 nova_compute[232433]: 2025-12-06 07:04:42.445 232437 INFO nova.compute.manager [-] [instance: c8598dee-32a7-4624-91be-ec0e2dbb738b] VM Stopped (Lifecycle Event)#033[00m
Dec  6 02:04:42 np0005548731 nova_compute[232433]: 2025-12-06 07:04:42.476 232437 DEBUG nova.compute.manager [None req-778b78bb-695d-43bf-8e13-4985e110f9c8 - - - - - -] [instance: c8598dee-32a7-4624-91be-ec0e2dbb738b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:04:42 np0005548731 nova_compute[232433]: 2025-12-06 07:04:42.881 232437 DEBUG nova.network.neutron [None req-02ab9025-6254-4426-8f5c-ac86696939e3 756e3e1fa7e44042bdf37a6cdd877fac 9e86c61372e24db392d4a12ca71f7e00 - - default default] [instance: 1712064c-6ba8-4660-972f-1e827a40781a] Successfully updated port: 9e200480-deff-4f39-9945-230d157ac906 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Dec  6 02:04:42 np0005548731 nova_compute[232433]: 2025-12-06 07:04:42.903 232437 DEBUG oslo_concurrency.lockutils [None req-02ab9025-6254-4426-8f5c-ac86696939e3 756e3e1fa7e44042bdf37a6cdd877fac 9e86c61372e24db392d4a12ca71f7e00 - - default default] Acquiring lock "refresh_cache-1712064c-6ba8-4660-972f-1e827a40781a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 02:04:42 np0005548731 nova_compute[232433]: 2025-12-06 07:04:42.904 232437 DEBUG oslo_concurrency.lockutils [None req-02ab9025-6254-4426-8f5c-ac86696939e3 756e3e1fa7e44042bdf37a6cdd877fac 9e86c61372e24db392d4a12ca71f7e00 - - default default] Acquired lock "refresh_cache-1712064c-6ba8-4660-972f-1e827a40781a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 02:04:42 np0005548731 nova_compute[232433]: 2025-12-06 07:04:42.904 232437 DEBUG nova.network.neutron [None req-02ab9025-6254-4426-8f5c-ac86696939e3 756e3e1fa7e44042bdf37a6cdd877fac 9e86c61372e24db392d4a12ca71f7e00 - - default default] [instance: 1712064c-6ba8-4660-972f-1e827a40781a] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec  6 02:04:43 np0005548731 nova_compute[232433]: 2025-12-06 07:04:43.046 232437 DEBUG nova.compute.manager [req-10315195-7e26-4cb7-84a4-c51b3b1c6d51 req-4745da03-3854-4580-97b3-66f1230dcd09 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 1712064c-6ba8-4660-972f-1e827a40781a] Received event network-changed-9e200480-deff-4f39-9945-230d157ac906 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:04:43 np0005548731 nova_compute[232433]: 2025-12-06 07:04:43.046 232437 DEBUG nova.compute.manager [req-10315195-7e26-4cb7-84a4-c51b3b1c6d51 req-4745da03-3854-4580-97b3-66f1230dcd09 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 1712064c-6ba8-4660-972f-1e827a40781a] Refreshing instance network info cache due to event network-changed-9e200480-deff-4f39-9945-230d157ac906. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  6 02:04:43 np0005548731 nova_compute[232433]: 2025-12-06 07:04:43.046 232437 DEBUG oslo_concurrency.lockutils [req-10315195-7e26-4cb7-84a4-c51b3b1c6d51 req-4745da03-3854-4580-97b3-66f1230dcd09 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-1712064c-6ba8-4660-972f-1e827a40781a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 02:04:43 np0005548731 nova_compute[232433]: 2025-12-06 07:04:43.636 232437 DEBUG nova.network.neutron [None req-02ab9025-6254-4426-8f5c-ac86696939e3 756e3e1fa7e44042bdf37a6cdd877fac 9e86c61372e24db392d4a12ca71f7e00 - - default default] [instance: 1712064c-6ba8-4660-972f-1e827a40781a] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec  6 02:04:43 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:04:43 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 02:04:43 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:04:43.766 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 02:04:43 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:04:43 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:04:43 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:04:43.779 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:04:44 np0005548731 nova_compute[232433]: 2025-12-06 07:04:44.103 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:04:44 np0005548731 nova_compute[232433]: 2025-12-06 07:04:44.817 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:04:44 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:04:44.817 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=11, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:ec:b3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '32:72:e7:89:e0:7d'}, ipsec=False) old=SB_Global(nb_cfg=10) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 02:04:44 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:04:44.819 143965 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec  6 02:04:45 np0005548731 nova_compute[232433]: 2025-12-06 07:04:45.070 232437 DEBUG nova.network.neutron [None req-02ab9025-6254-4426-8f5c-ac86696939e3 756e3e1fa7e44042bdf37a6cdd877fac 9e86c61372e24db392d4a12ca71f7e00 - - default default] [instance: 1712064c-6ba8-4660-972f-1e827a40781a] Updating instance_info_cache with network_info: [{"id": "9e200480-deff-4f39-9945-230d157ac906", "address": "fa:16:3e:e9:3c:15", "network": {"id": "9238b9b5-08f5-4634-bd05-370e3192b201", "bridge": "br-int", "label": "tempest-LiveMigrationTest-1420048747-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9e86c61372e24db392d4a12ca71f7e00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9e200480-de", "ovs_interfaceid": "9e200480-deff-4f39-9945-230d157ac906", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 02:04:45 np0005548731 nova_compute[232433]: 2025-12-06 07:04:45.095 232437 DEBUG oslo_concurrency.lockutils [None req-02ab9025-6254-4426-8f5c-ac86696939e3 756e3e1fa7e44042bdf37a6cdd877fac 9e86c61372e24db392d4a12ca71f7e00 - - default default] Releasing lock "refresh_cache-1712064c-6ba8-4660-972f-1e827a40781a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 02:04:45 np0005548731 nova_compute[232433]: 2025-12-06 07:04:45.096 232437 DEBUG nova.compute.manager [None req-02ab9025-6254-4426-8f5c-ac86696939e3 756e3e1fa7e44042bdf37a6cdd877fac 9e86c61372e24db392d4a12ca71f7e00 - - default default] [instance: 1712064c-6ba8-4660-972f-1e827a40781a] Instance network_info: |[{"id": "9e200480-deff-4f39-9945-230d157ac906", "address": "fa:16:3e:e9:3c:15", "network": {"id": "9238b9b5-08f5-4634-bd05-370e3192b201", "bridge": "br-int", "label": "tempest-LiveMigrationTest-1420048747-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9e86c61372e24db392d4a12ca71f7e00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9e200480-de", "ovs_interfaceid": "9e200480-deff-4f39-9945-230d157ac906", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Dec  6 02:04:45 np0005548731 nova_compute[232433]: 2025-12-06 07:04:45.096 232437 DEBUG oslo_concurrency.lockutils [req-10315195-7e26-4cb7-84a4-c51b3b1c6d51 req-4745da03-3854-4580-97b3-66f1230dcd09 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-1712064c-6ba8-4660-972f-1e827a40781a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 02:04:45 np0005548731 nova_compute[232433]: 2025-12-06 07:04:45.096 232437 DEBUG nova.network.neutron [req-10315195-7e26-4cb7-84a4-c51b3b1c6d51 req-4745da03-3854-4580-97b3-66f1230dcd09 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 1712064c-6ba8-4660-972f-1e827a40781a] Refreshing network info cache for port 9e200480-deff-4f39-9945-230d157ac906 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  6 02:04:45 np0005548731 nova_compute[232433]: 2025-12-06 07:04:45.099 232437 DEBUG nova.virt.libvirt.driver [None req-02ab9025-6254-4426-8f5c-ac86696939e3 756e3e1fa7e44042bdf37a6cdd877fac 9e86c61372e24db392d4a12ca71f7e00 - - default default] [instance: 1712064c-6ba8-4660-972f-1e827a40781a] Start _get_guest_xml network_info=[{"id": "9e200480-deff-4f39-9945-230d157ac906", "address": "fa:16:3e:e9:3c:15", "network": {"id": "9238b9b5-08f5-4634-bd05-370e3192b201", "bridge": "br-int", "label": "tempest-LiveMigrationTest-1420048747-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9e86c61372e24db392d4a12ca71f7e00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9e200480-de", "ovs_interfaceid": "9e200480-deff-4f39-9945-230d157ac906", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': 
'/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'guest_format': None, 'device_type': 'disk', 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-2b74c89f-1018-4ad4-8af8-84109979b9c7', 'hosts': ['192.168.122.100', '192.168.122.102', '192.168.122.101'], 'ports': ['6789', '6789', '6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '2b74c89f-1018-4ad4-8af8-84109979b9c7', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': '1712064c-6ba8-4660-972f-1e827a40781a', 'attached_at': '', 'detached_at': '', 'volume_id': '2b74c89f-1018-4ad4-8af8-84109979b9c7', 'serial': '2b74c89f-1018-4ad4-8af8-84109979b9c7'}, 'disk_bus': 'virtio', 'boot_index': 0, 'delete_on_termination': True, 'mount_device': '/dev/vda', 'attachment_id': 'b2a91c5a-7187-4625-b7d3-9dd1c6ec4d90', 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec  6 02:04:45 np0005548731 nova_compute[232433]: 2025-12-06 07:04:45.103 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:04:45 np0005548731 nova_compute[232433]: 2025-12-06 07:04:45.103 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  6 02:04:45 np0005548731 nova_compute[232433]: 2025-12-06 07:04:45.103 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  6 02:04:45 np0005548731 nova_compute[232433]: 2025-12-06 07:04:45.105 232437 WARNING nova.virt.libvirt.driver [None req-02ab9025-6254-4426-8f5c-ac86696939e3 756e3e1fa7e44042bdf37a6cdd877fac 9e86c61372e24db392d4a12ca71f7e00 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  6 02:04:45 np0005548731 nova_compute[232433]: 2025-12-06 07:04:45.110 232437 DEBUG nova.virt.libvirt.host [None req-02ab9025-6254-4426-8f5c-ac86696939e3 756e3e1fa7e44042bdf37a6cdd877fac 9e86c61372e24db392d4a12ca71f7e00 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec  6 02:04:45 np0005548731 nova_compute[232433]: 2025-12-06 07:04:45.111 232437 DEBUG nova.virt.libvirt.host [None req-02ab9025-6254-4426-8f5c-ac86696939e3 756e3e1fa7e44042bdf37a6cdd877fac 9e86c61372e24db392d4a12ca71f7e00 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec  6 02:04:45 np0005548731 nova_compute[232433]: 2025-12-06 07:04:45.113 232437 DEBUG nova.virt.libvirt.host [None req-02ab9025-6254-4426-8f5c-ac86696939e3 756e3e1fa7e44042bdf37a6cdd877fac 9e86c61372e24db392d4a12ca71f7e00 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec  6 02:04:45 np0005548731 nova_compute[232433]: 2025-12-06 07:04:45.114 232437 DEBUG nova.virt.libvirt.host [None req-02ab9025-6254-4426-8f5c-ac86696939e3 756e3e1fa7e44042bdf37a6cdd877fac 9e86c61372e24db392d4a12ca71f7e00 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec  6 02:04:45 np0005548731 nova_compute[232433]: 2025-12-06 07:04:45.115 232437 DEBUG nova.virt.libvirt.driver [None req-02ab9025-6254-4426-8f5c-ac86696939e3 756e3e1fa7e44042bdf37a6cdd877fac 9e86c61372e24db392d4a12ca71f7e00 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec  6 02:04:45 np0005548731 nova_compute[232433]: 2025-12-06 07:04:45.115 232437 DEBUG nova.virt.hardware [None req-02ab9025-6254-4426-8f5c-ac86696939e3 756e3e1fa7e44042bdf37a6cdd877fac 9e86c61372e24db392d4a12ca71f7e00 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-06T06:56:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='25848a18-11d9-4f11-80b5-5d005675c76d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec  6 02:04:45 np0005548731 nova_compute[232433]: 2025-12-06 07:04:45.116 232437 DEBUG nova.virt.hardware [None req-02ab9025-6254-4426-8f5c-ac86696939e3 756e3e1fa7e44042bdf37a6cdd877fac 9e86c61372e24db392d4a12ca71f7e00 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec  6 02:04:45 np0005548731 nova_compute[232433]: 2025-12-06 07:04:45.116 232437 DEBUG nova.virt.hardware [None req-02ab9025-6254-4426-8f5c-ac86696939e3 756e3e1fa7e44042bdf37a6cdd877fac 9e86c61372e24db392d4a12ca71f7e00 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec  6 02:04:45 np0005548731 nova_compute[232433]: 2025-12-06 07:04:45.116 232437 DEBUG nova.virt.hardware [None req-02ab9025-6254-4426-8f5c-ac86696939e3 756e3e1fa7e44042bdf37a6cdd877fac 9e86c61372e24db392d4a12ca71f7e00 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec  6 02:04:45 np0005548731 nova_compute[232433]: 2025-12-06 07:04:45.116 232437 DEBUG nova.virt.hardware [None req-02ab9025-6254-4426-8f5c-ac86696939e3 756e3e1fa7e44042bdf37a6cdd877fac 9e86c61372e24db392d4a12ca71f7e00 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec  6 02:04:45 np0005548731 nova_compute[232433]: 2025-12-06 07:04:45.116 232437 DEBUG nova.virt.hardware [None req-02ab9025-6254-4426-8f5c-ac86696939e3 756e3e1fa7e44042bdf37a6cdd877fac 9e86c61372e24db392d4a12ca71f7e00 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec  6 02:04:45 np0005548731 nova_compute[232433]: 2025-12-06 07:04:45.116 232437 DEBUG nova.virt.hardware [None req-02ab9025-6254-4426-8f5c-ac86696939e3 756e3e1fa7e44042bdf37a6cdd877fac 9e86c61372e24db392d4a12ca71f7e00 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec  6 02:04:45 np0005548731 nova_compute[232433]: 2025-12-06 07:04:45.117 232437 DEBUG nova.virt.hardware [None req-02ab9025-6254-4426-8f5c-ac86696939e3 756e3e1fa7e44042bdf37a6cdd877fac 9e86c61372e24db392d4a12ca71f7e00 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec  6 02:04:45 np0005548731 nova_compute[232433]: 2025-12-06 07:04:45.117 232437 DEBUG nova.virt.hardware [None req-02ab9025-6254-4426-8f5c-ac86696939e3 756e3e1fa7e44042bdf37a6cdd877fac 9e86c61372e24db392d4a12ca71f7e00 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec  6 02:04:45 np0005548731 nova_compute[232433]: 2025-12-06 07:04:45.117 232437 DEBUG nova.virt.hardware [None req-02ab9025-6254-4426-8f5c-ac86696939e3 756e3e1fa7e44042bdf37a6cdd877fac 9e86c61372e24db392d4a12ca71f7e00 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec  6 02:04:45 np0005548731 nova_compute[232433]: 2025-12-06 07:04:45.117 232437 DEBUG nova.virt.hardware [None req-02ab9025-6254-4426-8f5c-ac86696939e3 756e3e1fa7e44042bdf37a6cdd877fac 9e86c61372e24db392d4a12ca71f7e00 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec  6 02:04:45 np0005548731 nova_compute[232433]: 2025-12-06 07:04:45.148 232437 DEBUG nova.storage.rbd_utils [None req-02ab9025-6254-4426-8f5c-ac86696939e3 756e3e1fa7e44042bdf37a6cdd877fac 9e86c61372e24db392d4a12ca71f7e00 - - default default] rbd image 1712064c-6ba8-4660-972f-1e827a40781a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:04:45 np0005548731 nova_compute[232433]: 2025-12-06 07:04:45.152 232437 DEBUG oslo_concurrency.processutils [None req-02ab9025-6254-4426-8f5c-ac86696939e3 756e3e1fa7e44042bdf37a6cdd877fac 9e86c61372e24db392d4a12ca71f7e00 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:04:45 np0005548731 nova_compute[232433]: 2025-12-06 07:04:45.178 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] [instance: 1712064c-6ba8-4660-972f-1e827a40781a] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Dec  6 02:04:45 np0005548731 nova_compute[232433]: 2025-12-06 07:04:45.179 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Dec  6 02:04:45 np0005548731 nova_compute[232433]: 2025-12-06 07:04:45.179 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:04:45 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e168 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:04:45 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  6 02:04:45 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1474950124' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  6 02:04:45 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:04:45 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 02:04:45 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:04:45.769 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 02:04:45 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:04:45 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:04:45 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:04:45.781 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:04:45 np0005548731 nova_compute[232433]: 2025-12-06 07:04:45.821 232437 DEBUG oslo_concurrency.processutils [None req-02ab9025-6254-4426-8f5c-ac86696939e3 756e3e1fa7e44042bdf37a6cdd877fac 9e86c61372e24db392d4a12ca71f7e00 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.669s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:04:45 np0005548731 nova_compute[232433]: 2025-12-06 07:04:45.865 232437 DEBUG nova.virt.libvirt.vif [None req-02ab9025-6254-4426-8f5c-ac86696939e3 756e3e1fa7e44042bdf37a6cdd877fac 9e86c61372e24db392d4a12ca71f7e00 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T07:04:37Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-LiveMigrationTest-server-1567617044',display_name='tempest-LiveMigrationTest-server-1567617044',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-livemigrationtest-server-1567617044',id=27,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='9e86c61372e24db392d4a12ca71f7e00',ramdisk_id='',reservation_id='r-i5n1flwt',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-LiveMigrationTest-854827502',owner_user_name='tempest-LiveMigrationTest-854827502-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=202
5-12-06T07:04:39Z,user_data=None,user_id='756e3e1fa7e44042bdf37a6cdd877fac',uuid=1712064c-6ba8-4660-972f-1e827a40781a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "9e200480-deff-4f39-9945-230d157ac906", "address": "fa:16:3e:e9:3c:15", "network": {"id": "9238b9b5-08f5-4634-bd05-370e3192b201", "bridge": "br-int", "label": "tempest-LiveMigrationTest-1420048747-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9e86c61372e24db392d4a12ca71f7e00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9e200480-de", "ovs_interfaceid": "9e200480-deff-4f39-9945-230d157ac906", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec  6 02:04:45 np0005548731 nova_compute[232433]: 2025-12-06 07:04:45.867 232437 DEBUG nova.network.os_vif_util [None req-02ab9025-6254-4426-8f5c-ac86696939e3 756e3e1fa7e44042bdf37a6cdd877fac 9e86c61372e24db392d4a12ca71f7e00 - - default default] Converting VIF {"id": "9e200480-deff-4f39-9945-230d157ac906", "address": "fa:16:3e:e9:3c:15", "network": {"id": "9238b9b5-08f5-4634-bd05-370e3192b201", "bridge": "br-int", "label": "tempest-LiveMigrationTest-1420048747-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9e86c61372e24db392d4a12ca71f7e00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9e200480-de", "ovs_interfaceid": "9e200480-deff-4f39-9945-230d157ac906", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 02:04:45 np0005548731 nova_compute[232433]: 2025-12-06 07:04:45.868 232437 DEBUG nova.network.os_vif_util [None req-02ab9025-6254-4426-8f5c-ac86696939e3 756e3e1fa7e44042bdf37a6cdd877fac 9e86c61372e24db392d4a12ca71f7e00 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e9:3c:15,bridge_name='br-int',has_traffic_filtering=True,id=9e200480-deff-4f39-9945-230d157ac906,network=Network(9238b9b5-08f5-4634-bd05-370e3192b201),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9e200480-de') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 02:04:45 np0005548731 nova_compute[232433]: 2025-12-06 07:04:45.871 232437 DEBUG nova.objects.instance [None req-02ab9025-6254-4426-8f5c-ac86696939e3 756e3e1fa7e44042bdf37a6cdd877fac 9e86c61372e24db392d4a12ca71f7e00 - - default default] Lazy-loading 'pci_devices' on Instance uuid 1712064c-6ba8-4660-972f-1e827a40781a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:04:45 np0005548731 nova_compute[232433]: 2025-12-06 07:04:45.890 232437 DEBUG nova.virt.libvirt.driver [None req-02ab9025-6254-4426-8f5c-ac86696939e3 756e3e1fa7e44042bdf37a6cdd877fac 9e86c61372e24db392d4a12ca71f7e00 - - default default] [instance: 1712064c-6ba8-4660-972f-1e827a40781a] End _get_guest_xml xml=<domain type="kvm">
Dec  6 02:04:45 np0005548731 nova_compute[232433]:  <uuid>1712064c-6ba8-4660-972f-1e827a40781a</uuid>
Dec  6 02:04:45 np0005548731 nova_compute[232433]:  <name>instance-0000001b</name>
Dec  6 02:04:45 np0005548731 nova_compute[232433]:  <memory>131072</memory>
Dec  6 02:04:45 np0005548731 nova_compute[232433]:  <vcpu>1</vcpu>
Dec  6 02:04:45 np0005548731 nova_compute[232433]:  <metadata>
Dec  6 02:04:45 np0005548731 nova_compute[232433]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec  6 02:04:45 np0005548731 nova_compute[232433]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec  6 02:04:45 np0005548731 nova_compute[232433]:      <nova:name>tempest-LiveMigrationTest-server-1567617044</nova:name>
Dec  6 02:04:45 np0005548731 nova_compute[232433]:      <nova:creationTime>2025-12-06 07:04:45</nova:creationTime>
Dec  6 02:04:45 np0005548731 nova_compute[232433]:      <nova:flavor name="m1.nano">
Dec  6 02:04:45 np0005548731 nova_compute[232433]:        <nova:memory>128</nova:memory>
Dec  6 02:04:45 np0005548731 nova_compute[232433]:        <nova:disk>1</nova:disk>
Dec  6 02:04:45 np0005548731 nova_compute[232433]:        <nova:swap>0</nova:swap>
Dec  6 02:04:45 np0005548731 nova_compute[232433]:        <nova:ephemeral>0</nova:ephemeral>
Dec  6 02:04:45 np0005548731 nova_compute[232433]:        <nova:vcpus>1</nova:vcpus>
Dec  6 02:04:45 np0005548731 nova_compute[232433]:      </nova:flavor>
Dec  6 02:04:45 np0005548731 nova_compute[232433]:      <nova:owner>
Dec  6 02:04:45 np0005548731 nova_compute[232433]:        <nova:user uuid="756e3e1fa7e44042bdf37a6cdd877fac">tempest-LiveMigrationTest-854827502-project-member</nova:user>
Dec  6 02:04:45 np0005548731 nova_compute[232433]:        <nova:project uuid="9e86c61372e24db392d4a12ca71f7e00">tempest-LiveMigrationTest-854827502</nova:project>
Dec  6 02:04:45 np0005548731 nova_compute[232433]:      </nova:owner>
Dec  6 02:04:45 np0005548731 nova_compute[232433]:      <nova:ports>
Dec  6 02:04:45 np0005548731 nova_compute[232433]:        <nova:port uuid="9e200480-deff-4f39-9945-230d157ac906">
Dec  6 02:04:45 np0005548731 nova_compute[232433]:          <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Dec  6 02:04:45 np0005548731 nova_compute[232433]:        </nova:port>
Dec  6 02:04:45 np0005548731 nova_compute[232433]:      </nova:ports>
Dec  6 02:04:45 np0005548731 nova_compute[232433]:    </nova:instance>
Dec  6 02:04:45 np0005548731 nova_compute[232433]:  </metadata>
Dec  6 02:04:45 np0005548731 nova_compute[232433]:  <sysinfo type="smbios">
Dec  6 02:04:45 np0005548731 nova_compute[232433]:    <system>
Dec  6 02:04:45 np0005548731 nova_compute[232433]:      <entry name="manufacturer">RDO</entry>
Dec  6 02:04:45 np0005548731 nova_compute[232433]:      <entry name="product">OpenStack Compute</entry>
Dec  6 02:04:45 np0005548731 nova_compute[232433]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec  6 02:04:45 np0005548731 nova_compute[232433]:      <entry name="serial">1712064c-6ba8-4660-972f-1e827a40781a</entry>
Dec  6 02:04:45 np0005548731 nova_compute[232433]:      <entry name="uuid">1712064c-6ba8-4660-972f-1e827a40781a</entry>
Dec  6 02:04:45 np0005548731 nova_compute[232433]:      <entry name="family">Virtual Machine</entry>
Dec  6 02:04:45 np0005548731 nova_compute[232433]:    </system>
Dec  6 02:04:45 np0005548731 nova_compute[232433]:  </sysinfo>
Dec  6 02:04:45 np0005548731 nova_compute[232433]:  <os>
Dec  6 02:04:45 np0005548731 nova_compute[232433]:    <type arch="x86_64" machine="q35">hvm</type>
Dec  6 02:04:45 np0005548731 nova_compute[232433]:    <boot dev="hd"/>
Dec  6 02:04:45 np0005548731 nova_compute[232433]:    <smbios mode="sysinfo"/>
Dec  6 02:04:45 np0005548731 nova_compute[232433]:  </os>
Dec  6 02:04:45 np0005548731 nova_compute[232433]:  <features>
Dec  6 02:04:45 np0005548731 nova_compute[232433]:    <acpi/>
Dec  6 02:04:45 np0005548731 nova_compute[232433]:    <apic/>
Dec  6 02:04:45 np0005548731 nova_compute[232433]:    <vmcoreinfo/>
Dec  6 02:04:45 np0005548731 nova_compute[232433]:  </features>
Dec  6 02:04:45 np0005548731 nova_compute[232433]:  <clock offset="utc">
Dec  6 02:04:45 np0005548731 nova_compute[232433]:    <timer name="pit" tickpolicy="delay"/>
Dec  6 02:04:45 np0005548731 nova_compute[232433]:    <timer name="rtc" tickpolicy="catchup"/>
Dec  6 02:04:45 np0005548731 nova_compute[232433]:    <timer name="hpet" present="no"/>
Dec  6 02:04:45 np0005548731 nova_compute[232433]:  </clock>
Dec  6 02:04:45 np0005548731 nova_compute[232433]:  <cpu mode="custom" match="exact">
Dec  6 02:04:45 np0005548731 nova_compute[232433]:    <model>Nehalem</model>
Dec  6 02:04:45 np0005548731 nova_compute[232433]:    <topology sockets="1" cores="1" threads="1"/>
Dec  6 02:04:45 np0005548731 nova_compute[232433]:  </cpu>
Dec  6 02:04:45 np0005548731 nova_compute[232433]:  <devices>
Dec  6 02:04:45 np0005548731 nova_compute[232433]:    <disk type="network" device="cdrom">
Dec  6 02:04:45 np0005548731 nova_compute[232433]:      <driver type="raw" cache="none"/>
Dec  6 02:04:45 np0005548731 nova_compute[232433]:      <source protocol="rbd" name="vms/1712064c-6ba8-4660-972f-1e827a40781a_disk.config">
Dec  6 02:04:45 np0005548731 nova_compute[232433]:        <host name="192.168.122.100" port="6789"/>
Dec  6 02:04:45 np0005548731 nova_compute[232433]:        <host name="192.168.122.102" port="6789"/>
Dec  6 02:04:45 np0005548731 nova_compute[232433]:        <host name="192.168.122.101" port="6789"/>
Dec  6 02:04:45 np0005548731 nova_compute[232433]:      </source>
Dec  6 02:04:45 np0005548731 nova_compute[232433]:      <auth username="openstack">
Dec  6 02:04:45 np0005548731 nova_compute[232433]:        <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec  6 02:04:45 np0005548731 nova_compute[232433]:      </auth>
Dec  6 02:04:45 np0005548731 nova_compute[232433]:      <target dev="sda" bus="sata"/>
Dec  6 02:04:45 np0005548731 nova_compute[232433]:    </disk>
Dec  6 02:04:45 np0005548731 nova_compute[232433]:    <disk type="network" device="disk">
Dec  6 02:04:45 np0005548731 nova_compute[232433]:      <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Dec  6 02:04:45 np0005548731 nova_compute[232433]:      <source protocol="rbd" name="volumes/volume-2b74c89f-1018-4ad4-8af8-84109979b9c7">
Dec  6 02:04:45 np0005548731 nova_compute[232433]:        <host name="192.168.122.100" port="6789"/>
Dec  6 02:04:45 np0005548731 nova_compute[232433]:        <host name="192.168.122.102" port="6789"/>
Dec  6 02:04:45 np0005548731 nova_compute[232433]:        <host name="192.168.122.101" port="6789"/>
Dec  6 02:04:45 np0005548731 nova_compute[232433]:      </source>
Dec  6 02:04:45 np0005548731 nova_compute[232433]:      <auth username="openstack">
Dec  6 02:04:45 np0005548731 nova_compute[232433]:        <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec  6 02:04:45 np0005548731 nova_compute[232433]:      </auth>
Dec  6 02:04:45 np0005548731 nova_compute[232433]:      <target dev="vda" bus="virtio"/>
Dec  6 02:04:45 np0005548731 nova_compute[232433]:      <serial>2b74c89f-1018-4ad4-8af8-84109979b9c7</serial>
Dec  6 02:04:45 np0005548731 nova_compute[232433]:    </disk>
Dec  6 02:04:45 np0005548731 nova_compute[232433]:    <interface type="ethernet">
Dec  6 02:04:45 np0005548731 nova_compute[232433]:      <mac address="fa:16:3e:e9:3c:15"/>
Dec  6 02:04:45 np0005548731 nova_compute[232433]:      <model type="virtio"/>
Dec  6 02:04:45 np0005548731 nova_compute[232433]:      <driver name="vhost" rx_queue_size="512"/>
Dec  6 02:04:45 np0005548731 nova_compute[232433]:      <mtu size="1442"/>
Dec  6 02:04:45 np0005548731 nova_compute[232433]:      <target dev="tap9e200480-de"/>
Dec  6 02:04:45 np0005548731 nova_compute[232433]:    </interface>
Dec  6 02:04:45 np0005548731 nova_compute[232433]:    <serial type="pty">
Dec  6 02:04:45 np0005548731 nova_compute[232433]:      <log file="/var/lib/nova/instances/1712064c-6ba8-4660-972f-1e827a40781a/console.log" append="off"/>
Dec  6 02:04:45 np0005548731 nova_compute[232433]:    </serial>
Dec  6 02:04:45 np0005548731 nova_compute[232433]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec  6 02:04:45 np0005548731 nova_compute[232433]:    <video>
Dec  6 02:04:45 np0005548731 nova_compute[232433]:      <model type="virtio"/>
Dec  6 02:04:45 np0005548731 nova_compute[232433]:    </video>
Dec  6 02:04:45 np0005548731 nova_compute[232433]:    <input type="tablet" bus="usb"/>
Dec  6 02:04:45 np0005548731 nova_compute[232433]:    <rng model="virtio">
Dec  6 02:04:45 np0005548731 nova_compute[232433]:      <backend model="random">/dev/urandom</backend>
Dec  6 02:04:45 np0005548731 nova_compute[232433]:    </rng>
Dec  6 02:04:45 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root"/>
Dec  6 02:04:45 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:04:45 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:04:45 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:04:45 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:04:45 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:04:45 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:04:45 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:04:45 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:04:45 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:04:45 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:04:45 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:04:45 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:04:45 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:04:45 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:04:45 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:04:45 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:04:45 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:04:45 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:04:45 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:04:45 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:04:45 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:04:45 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:04:45 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:04:45 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:04:45 np0005548731 nova_compute[232433]:    <controller type="usb" index="0"/>
Dec  6 02:04:45 np0005548731 nova_compute[232433]:    <memballoon model="virtio">
Dec  6 02:04:45 np0005548731 nova_compute[232433]:      <stats period="10"/>
Dec  6 02:04:45 np0005548731 nova_compute[232433]:    </memballoon>
Dec  6 02:04:45 np0005548731 nova_compute[232433]:  </devices>
Dec  6 02:04:45 np0005548731 nova_compute[232433]: </domain>
Dec  6 02:04:45 np0005548731 nova_compute[232433]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec  6 02:04:45 np0005548731 nova_compute[232433]: 2025-12-06 07:04:45.891 232437 DEBUG nova.compute.manager [None req-02ab9025-6254-4426-8f5c-ac86696939e3 756e3e1fa7e44042bdf37a6cdd877fac 9e86c61372e24db392d4a12ca71f7e00 - - default default] [instance: 1712064c-6ba8-4660-972f-1e827a40781a] Preparing to wait for external event network-vif-plugged-9e200480-deff-4f39-9945-230d157ac906 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec  6 02:04:45 np0005548731 nova_compute[232433]: 2025-12-06 07:04:45.891 232437 DEBUG oslo_concurrency.lockutils [None req-02ab9025-6254-4426-8f5c-ac86696939e3 756e3e1fa7e44042bdf37a6cdd877fac 9e86c61372e24db392d4a12ca71f7e00 - - default default] Acquiring lock "1712064c-6ba8-4660-972f-1e827a40781a-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:04:45 np0005548731 nova_compute[232433]: 2025-12-06 07:04:45.892 232437 DEBUG oslo_concurrency.lockutils [None req-02ab9025-6254-4426-8f5c-ac86696939e3 756e3e1fa7e44042bdf37a6cdd877fac 9e86c61372e24db392d4a12ca71f7e00 - - default default] Lock "1712064c-6ba8-4660-972f-1e827a40781a-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:04:45 np0005548731 nova_compute[232433]: 2025-12-06 07:04:45.892 232437 DEBUG oslo_concurrency.lockutils [None req-02ab9025-6254-4426-8f5c-ac86696939e3 756e3e1fa7e44042bdf37a6cdd877fac 9e86c61372e24db392d4a12ca71f7e00 - - default default] Lock "1712064c-6ba8-4660-972f-1e827a40781a-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:04:45 np0005548731 nova_compute[232433]: 2025-12-06 07:04:45.894 232437 DEBUG nova.virt.libvirt.vif [None req-02ab9025-6254-4426-8f5c-ac86696939e3 756e3e1fa7e44042bdf37a6cdd877fac 9e86c61372e24db392d4a12ca71f7e00 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T07:04:37Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-LiveMigrationTest-server-1567617044',display_name='tempest-LiveMigrationTest-server-1567617044',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-livemigrationtest-server-1567617044',id=27,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='9e86c61372e24db392d4a12ca71f7e00',ramdisk_id='',reservation_id='r-i5n1flwt',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-LiveMigrationTest-854827502',owner_user_name='tempest-LiveMigrationTest-854827502-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,upda
ted_at=2025-12-06T07:04:39Z,user_data=None,user_id='756e3e1fa7e44042bdf37a6cdd877fac',uuid=1712064c-6ba8-4660-972f-1e827a40781a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "9e200480-deff-4f39-9945-230d157ac906", "address": "fa:16:3e:e9:3c:15", "network": {"id": "9238b9b5-08f5-4634-bd05-370e3192b201", "bridge": "br-int", "label": "tempest-LiveMigrationTest-1420048747-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9e86c61372e24db392d4a12ca71f7e00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9e200480-de", "ovs_interfaceid": "9e200480-deff-4f39-9945-230d157ac906", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec  6 02:04:45 np0005548731 nova_compute[232433]: 2025-12-06 07:04:45.894 232437 DEBUG nova.network.os_vif_util [None req-02ab9025-6254-4426-8f5c-ac86696939e3 756e3e1fa7e44042bdf37a6cdd877fac 9e86c61372e24db392d4a12ca71f7e00 - - default default] Converting VIF {"id": "9e200480-deff-4f39-9945-230d157ac906", "address": "fa:16:3e:e9:3c:15", "network": {"id": "9238b9b5-08f5-4634-bd05-370e3192b201", "bridge": "br-int", "label": "tempest-LiveMigrationTest-1420048747-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9e86c61372e24db392d4a12ca71f7e00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9e200480-de", "ovs_interfaceid": "9e200480-deff-4f39-9945-230d157ac906", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 02:04:45 np0005548731 nova_compute[232433]: 2025-12-06 07:04:45.896 232437 DEBUG nova.network.os_vif_util [None req-02ab9025-6254-4426-8f5c-ac86696939e3 756e3e1fa7e44042bdf37a6cdd877fac 9e86c61372e24db392d4a12ca71f7e00 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e9:3c:15,bridge_name='br-int',has_traffic_filtering=True,id=9e200480-deff-4f39-9945-230d157ac906,network=Network(9238b9b5-08f5-4634-bd05-370e3192b201),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9e200480-de') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 02:04:45 np0005548731 nova_compute[232433]: 2025-12-06 07:04:45.897 232437 DEBUG os_vif [None req-02ab9025-6254-4426-8f5c-ac86696939e3 756e3e1fa7e44042bdf37a6cdd877fac 9e86c61372e24db392d4a12ca71f7e00 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e9:3c:15,bridge_name='br-int',has_traffic_filtering=True,id=9e200480-deff-4f39-9945-230d157ac906,network=Network(9238b9b5-08f5-4634-bd05-370e3192b201),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9e200480-de') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec  6 02:04:45 np0005548731 nova_compute[232433]: 2025-12-06 07:04:45.899 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:04:45 np0005548731 nova_compute[232433]: 2025-12-06 07:04:45.900 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:04:45 np0005548731 nova_compute[232433]: 2025-12-06 07:04:45.901 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  6 02:04:45 np0005548731 nova_compute[232433]: 2025-12-06 07:04:45.911 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:04:45 np0005548731 nova_compute[232433]: 2025-12-06 07:04:45.913 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9e200480-de, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:04:45 np0005548731 nova_compute[232433]: 2025-12-06 07:04:45.914 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap9e200480-de, col_values=(('external_ids', {'iface-id': '9e200480-deff-4f39-9945-230d157ac906', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:e9:3c:15', 'vm-uuid': '1712064c-6ba8-4660-972f-1e827a40781a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:04:45 np0005548731 nova_compute[232433]: 2025-12-06 07:04:45.917 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:04:45 np0005548731 NetworkManager[49182]: <info>  [1765004685.9185] manager: (tap9e200480-de): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/57)
Dec  6 02:04:45 np0005548731 nova_compute[232433]: 2025-12-06 07:04:45.919 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  6 02:04:45 np0005548731 nova_compute[232433]: 2025-12-06 07:04:45.924 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:04:45 np0005548731 nova_compute[232433]: 2025-12-06 07:04:45.925 232437 INFO os_vif [None req-02ab9025-6254-4426-8f5c-ac86696939e3 756e3e1fa7e44042bdf37a6cdd877fac 9e86c61372e24db392d4a12ca71f7e00 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e9:3c:15,bridge_name='br-int',has_traffic_filtering=True,id=9e200480-deff-4f39-9945-230d157ac906,network=Network(9238b9b5-08f5-4634-bd05-370e3192b201),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9e200480-de')#033[00m
Dec  6 02:04:45 np0005548731 nova_compute[232433]: 2025-12-06 07:04:45.996 232437 DEBUG nova.virt.libvirt.driver [None req-02ab9025-6254-4426-8f5c-ac86696939e3 756e3e1fa7e44042bdf37a6cdd877fac 9e86c61372e24db392d4a12ca71f7e00 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  6 02:04:45 np0005548731 nova_compute[232433]: 2025-12-06 07:04:45.996 232437 DEBUG nova.virt.libvirt.driver [None req-02ab9025-6254-4426-8f5c-ac86696939e3 756e3e1fa7e44042bdf37a6cdd877fac 9e86c61372e24db392d4a12ca71f7e00 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  6 02:04:45 np0005548731 nova_compute[232433]: 2025-12-06 07:04:45.997 232437 DEBUG nova.virt.libvirt.driver [None req-02ab9025-6254-4426-8f5c-ac86696939e3 756e3e1fa7e44042bdf37a6cdd877fac 9e86c61372e24db392d4a12ca71f7e00 - - default default] No VIF found with MAC fa:16:3e:e9:3c:15, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec  6 02:04:45 np0005548731 nova_compute[232433]: 2025-12-06 07:04:45.997 232437 INFO nova.virt.libvirt.driver [None req-02ab9025-6254-4426-8f5c-ac86696939e3 756e3e1fa7e44042bdf37a6cdd877fac 9e86c61372e24db392d4a12ca71f7e00 - - default default] [instance: 1712064c-6ba8-4660-972f-1e827a40781a] Using config drive#033[00m
Dec  6 02:04:46 np0005548731 nova_compute[232433]: 2025-12-06 07:04:46.038 232437 DEBUG nova.storage.rbd_utils [None req-02ab9025-6254-4426-8f5c-ac86696939e3 756e3e1fa7e44042bdf37a6cdd877fac 9e86c61372e24db392d4a12ca71f7e00 - - default default] rbd image 1712064c-6ba8-4660-972f-1e827a40781a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:04:46 np0005548731 nova_compute[232433]: 2025-12-06 07:04:46.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:04:46 np0005548731 nova_compute[232433]: 2025-12-06 07:04:46.237 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:04:46 np0005548731 nova_compute[232433]: 2025-12-06 07:04:46.736 232437 INFO nova.virt.libvirt.driver [None req-02ab9025-6254-4426-8f5c-ac86696939e3 756e3e1fa7e44042bdf37a6cdd877fac 9e86c61372e24db392d4a12ca71f7e00 - - default default] [instance: 1712064c-6ba8-4660-972f-1e827a40781a] Creating config drive at /var/lib/nova/instances/1712064c-6ba8-4660-972f-1e827a40781a/disk.config#033[00m
Dec  6 02:04:46 np0005548731 nova_compute[232433]: 2025-12-06 07:04:46.741 232437 DEBUG oslo_concurrency.processutils [None req-02ab9025-6254-4426-8f5c-ac86696939e3 756e3e1fa7e44042bdf37a6cdd877fac 9e86c61372e24db392d4a12ca71f7e00 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/1712064c-6ba8-4660-972f-1e827a40781a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmptze9oc_5 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:04:46 np0005548731 nova_compute[232433]: 2025-12-06 07:04:46.870 232437 DEBUG oslo_concurrency.processutils [None req-02ab9025-6254-4426-8f5c-ac86696939e3 756e3e1fa7e44042bdf37a6cdd877fac 9e86c61372e24db392d4a12ca71f7e00 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/1712064c-6ba8-4660-972f-1e827a40781a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmptze9oc_5" returned: 0 in 0.130s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:04:46 np0005548731 nova_compute[232433]: 2025-12-06 07:04:46.908 232437 DEBUG nova.storage.rbd_utils [None req-02ab9025-6254-4426-8f5c-ac86696939e3 756e3e1fa7e44042bdf37a6cdd877fac 9e86c61372e24db392d4a12ca71f7e00 - - default default] rbd image 1712064c-6ba8-4660-972f-1e827a40781a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:04:46 np0005548731 nova_compute[232433]: 2025-12-06 07:04:46.914 232437 DEBUG oslo_concurrency.processutils [None req-02ab9025-6254-4426-8f5c-ac86696939e3 756e3e1fa7e44042bdf37a6cdd877fac 9e86c61372e24db392d4a12ca71f7e00 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/1712064c-6ba8-4660-972f-1e827a40781a/disk.config 1712064c-6ba8-4660-972f-1e827a40781a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:04:47 np0005548731 nova_compute[232433]: 2025-12-06 07:04:47.101 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:04:47 np0005548731 nova_compute[232433]: 2025-12-06 07:04:47.123 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:04:47 np0005548731 nova_compute[232433]: 2025-12-06 07:04:47.124 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:04:47 np0005548731 nova_compute[232433]: 2025-12-06 07:04:47.144 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:04:47 np0005548731 nova_compute[232433]: 2025-12-06 07:04:47.144 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:04:47 np0005548731 nova_compute[232433]: 2025-12-06 07:04:47.144 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:04:47 np0005548731 nova_compute[232433]: 2025-12-06 07:04:47.144 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  6 02:04:47 np0005548731 nova_compute[232433]: 2025-12-06 07:04:47.145 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:04:47 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Dec  6 02:04:47 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  6 02:04:47 np0005548731 nova_compute[232433]: 2025-12-06 07:04:47.466 232437 DEBUG nova.network.neutron [req-10315195-7e26-4cb7-84a4-c51b3b1c6d51 req-4745da03-3854-4580-97b3-66f1230dcd09 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 1712064c-6ba8-4660-972f-1e827a40781a] Updated VIF entry in instance network info cache for port 9e200480-deff-4f39-9945-230d157ac906. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  6 02:04:47 np0005548731 nova_compute[232433]: 2025-12-06 07:04:47.467 232437 DEBUG nova.network.neutron [req-10315195-7e26-4cb7-84a4-c51b3b1c6d51 req-4745da03-3854-4580-97b3-66f1230dcd09 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 1712064c-6ba8-4660-972f-1e827a40781a] Updating instance_info_cache with network_info: [{"id": "9e200480-deff-4f39-9945-230d157ac906", "address": "fa:16:3e:e9:3c:15", "network": {"id": "9238b9b5-08f5-4634-bd05-370e3192b201", "bridge": "br-int", "label": "tempest-LiveMigrationTest-1420048747-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9e86c61372e24db392d4a12ca71f7e00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9e200480-de", "ovs_interfaceid": "9e200480-deff-4f39-9945-230d157ac906", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 02:04:47 np0005548731 nova_compute[232433]: 2025-12-06 07:04:47.484 232437 DEBUG oslo_concurrency.lockutils [req-10315195-7e26-4cb7-84a4-c51b3b1c6d51 req-4745da03-3854-4580-97b3-66f1230dcd09 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-1712064c-6ba8-4660-972f-1e827a40781a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 02:04:47 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 02:04:47 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/599834630' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 02:04:47 np0005548731 nova_compute[232433]: 2025-12-06 07:04:47.614 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.469s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:04:47 np0005548731 nova_compute[232433]: 2025-12-06 07:04:47.697 232437 DEBUG nova.virt.libvirt.driver [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] skipping disk for instance-0000001b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  6 02:04:47 np0005548731 nova_compute[232433]: 2025-12-06 07:04:47.698 232437 DEBUG nova.virt.libvirt.driver [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] skipping disk for instance-0000001b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  6 02:04:47 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:04:47 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 02:04:47 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:04:47.771 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 02:04:47 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:04:47 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:04:47 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:04:47.784 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:04:47 np0005548731 nova_compute[232433]: 2025-12-06 07:04:47.849 232437 WARNING nova.virt.libvirt.driver [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  6 02:04:47 np0005548731 nova_compute[232433]: 2025-12-06 07:04:47.850 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4794MB free_disk=20.910202026367188GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, 
"label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  6 02:04:47 np0005548731 nova_compute[232433]: 2025-12-06 07:04:47.851 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:04:47 np0005548731 nova_compute[232433]: 2025-12-06 07:04:47.851 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:04:47 np0005548731 nova_compute[232433]: 2025-12-06 07:04:47.930 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Instance 1712064c-6ba8-4660-972f-1e827a40781a actively managed on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  6 02:04:47 np0005548731 nova_compute[232433]: 2025-12-06 07:04:47.930 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  6 02:04:47 np0005548731 nova_compute[232433]: 2025-12-06 07:04:47.931 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  6 02:04:47 np0005548731 nova_compute[232433]: 2025-12-06 07:04:47.975 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:04:48 np0005548731 ovn_controller[133927]: 2025-12-06T07:04:48Z|00091|memory_trim|INFO|Detected inactivity (last active 30001 ms ago): trimming memory
Dec  6 02:04:48 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 02:04:48 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/119544535' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 02:04:48 np0005548731 nova_compute[232433]: 2025-12-06 07:04:48.399 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.425s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:04:48 np0005548731 nova_compute[232433]: 2025-12-06 07:04:48.405 232437 DEBUG nova.compute.provider_tree [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Inventory has not changed in ProviderTree for provider: 6d00757a-082f-486d-ae84-869a2ba2e6e7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  6 02:04:48 np0005548731 nova_compute[232433]: 2025-12-06 07:04:48.465 232437 DEBUG nova.scheduler.client.report [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Inventory has not changed for provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  6 02:04:48 np0005548731 nova_compute[232433]: 2025-12-06 07:04:48.509 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  6 02:04:48 np0005548731 nova_compute[232433]: 2025-12-06 07:04:48.509 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.658s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:04:48 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 02:04:48 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  6 02:04:48 np0005548731 nova_compute[232433]: 2025-12-06 07:04:48.750 232437 DEBUG oslo_concurrency.processutils [None req-02ab9025-6254-4426-8f5c-ac86696939e3 756e3e1fa7e44042bdf37a6cdd877fac 9e86c61372e24db392d4a12ca71f7e00 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/1712064c-6ba8-4660-972f-1e827a40781a/disk.config 1712064c-6ba8-4660-972f-1e827a40781a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.837s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:04:48 np0005548731 nova_compute[232433]: 2025-12-06 07:04:48.751 232437 INFO nova.virt.libvirt.driver [None req-02ab9025-6254-4426-8f5c-ac86696939e3 756e3e1fa7e44042bdf37a6cdd877fac 9e86c61372e24db392d4a12ca71f7e00 - - default default] [instance: 1712064c-6ba8-4660-972f-1e827a40781a] Deleting local config drive /var/lib/nova/instances/1712064c-6ba8-4660-972f-1e827a40781a/disk.config because it was imported into RBD.#033[00m
Dec  6 02:04:48 np0005548731 kernel: tap9e200480-de: entered promiscuous mode
Dec  6 02:04:48 np0005548731 ovn_controller[133927]: 2025-12-06T07:04:48Z|00092|binding|INFO|Claiming lport 9e200480-deff-4f39-9945-230d157ac906 for this chassis.
Dec  6 02:04:48 np0005548731 ovn_controller[133927]: 2025-12-06T07:04:48Z|00093|binding|INFO|9e200480-deff-4f39-9945-230d157ac906: Claiming fa:16:3e:e9:3c:15 10.100.0.13
Dec  6 02:04:48 np0005548731 nova_compute[232433]: 2025-12-06 07:04:48.810 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:04:48 np0005548731 NetworkManager[49182]: <info>  [1765004688.8117] manager: (tap9e200480-de): new Tun device (/org/freedesktop/NetworkManager/Devices/58)
Dec  6 02:04:48 np0005548731 nova_compute[232433]: 2025-12-06 07:04:48.816 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:04:48 np0005548731 systemd-udevd[247199]: Network interface NamePolicy= disabled on kernel command line.
Dec  6 02:04:48 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:04:48.834 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e9:3c:15 10.100.0.13'], port_security=['fa:16:3e:e9:3c:15 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '1712064c-6ba8-4660-972f-1e827a40781a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9238b9b5-08f5-4634-bd05-370e3192b201', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9e86c61372e24db392d4a12ca71f7e00', 'neutron:revision_number': '2', 'neutron:security_group_ids': '36a83e30-1797-4590-94d1-4f6fcbdcefb2', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=db7e9816-53a6-4d9a-be4a-e5a8dcf8a64b, chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], logical_port=9e200480-deff-4f39-9945-230d157ac906) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 02:04:48 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:04:48.835 143965 INFO neutron.agent.ovn.metadata.agent [-] Port 9e200480-deff-4f39-9945-230d157ac906 in datapath 9238b9b5-08f5-4634-bd05-370e3192b201 bound to our chassis#033[00m
Dec  6 02:04:48 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:04:48.836 143965 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 9238b9b5-08f5-4634-bd05-370e3192b201#033[00m
Dec  6 02:04:48 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:04:48.849 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[9e731d39-a572-427f-bb75-2c41625accae]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:04:48 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:04:48.850 143965 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap9238b9b5-01 in ovnmeta-9238b9b5-08f5-4634-bd05-370e3192b201 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Dec  6 02:04:48 np0005548731 NetworkManager[49182]: <info>  [1765004688.8514] device (tap9e200480-de): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec  6 02:04:48 np0005548731 NetworkManager[49182]: <info>  [1765004688.8524] device (tap9e200480-de): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec  6 02:04:48 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:04:48.852 236668 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap9238b9b5-00 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Dec  6 02:04:48 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:04:48.852 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[b3a50b90-0caf-4a60-83b0-493eda124d11]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:04:48 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:04:48.853 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[a90f43d3-357a-40f2-a783-a0ee17987430]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:04:48 np0005548731 systemd-machined[195355]: New machine qemu-14-instance-0000001b.
Dec  6 02:04:48 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:04:48.865 144079 DEBUG oslo.privsep.daemon [-] privsep: reply[3eff3865-50fc-4b19-a3b7-a20d8df9b56c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:04:48 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:04:48.879 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[7496d135-72f8-4d0e-98d9-c847d77fa0f7]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:04:48 np0005548731 systemd[1]: Started Virtual Machine qemu-14-instance-0000001b.
Dec  6 02:04:48 np0005548731 nova_compute[232433]: 2025-12-06 07:04:48.884 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:04:48 np0005548731 ovn_controller[133927]: 2025-12-06T07:04:48Z|00094|binding|INFO|Setting lport 9e200480-deff-4f39-9945-230d157ac906 ovn-installed in OVS
Dec  6 02:04:48 np0005548731 ovn_controller[133927]: 2025-12-06T07:04:48Z|00095|binding|INFO|Setting lport 9e200480-deff-4f39-9945-230d157ac906 up in Southbound
Dec  6 02:04:48 np0005548731 nova_compute[232433]: 2025-12-06 07:04:48.890 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:04:48 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:04:48.909 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[6fc4221c-01f1-44a2-b8e3-aedd9801126e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:04:48 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:04:48.916 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[d9519bb9-9e0b-4777-870f-275a34741943]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:04:48 np0005548731 NetworkManager[49182]: <info>  [1765004688.9176] manager: (tap9238b9b5-00): new Veth device (/org/freedesktop/NetworkManager/Devices/59)
Dec  6 02:04:48 np0005548731 systemd-udevd[247205]: Network interface NamePolicy= disabled on kernel command line.
Dec  6 02:04:48 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:04:48.949 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[f8cd3bef-74fe-4d42-b511-59ff8f446c42]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:04:48 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:04:48.952 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[dab9e200-635b-4463-a369-7cd886acae12]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:04:48 np0005548731 NetworkManager[49182]: <info>  [1765004688.9797] device (tap9238b9b5-00): carrier: link connected
Dec  6 02:04:48 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:04:48.985 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[9d6d6303-f6b8-4dde-b925-2db07064dc00]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:04:49 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:04:49.004 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[bb13551d-a3a5-4505-ab45-e0a585f2946a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9238b9b5-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:51:4f:f3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 34], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 495098, 'reachable_time': 36876, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 247235, 'error': None, 'target': 'ovnmeta-9238b9b5-08f5-4634-bd05-370e3192b201', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:04:49 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:04:49.017 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[9dfcdf9f-9f7e-4af7-b6f2-d81b6756a45a]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe51:4ff3'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 495098, 'tstamp': 495098}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 247236, 'error': None, 'target': 'ovnmeta-9238b9b5-08f5-4634-bd05-370e3192b201', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:04:49 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:04:49.034 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[4d39f856-79d5-4797-b56d-ab898a4c0f8d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9238b9b5-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:51:4f:f3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 34], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 495098, 'reachable_time': 36876, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 247237, 'error': None, 'target': 'ovnmeta-9238b9b5-08f5-4634-bd05-370e3192b201', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:04:49 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:04:49.062 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[f589e892-00a4-4616-855d-8532fc93e412]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:04:49 np0005548731 nova_compute[232433]: 2025-12-06 07:04:49.076 232437 DEBUG nova.compute.manager [req-fd4da4ae-8110-4bf5-a385-55e923c5ab7e req-dc13a404-6f9a-40bb-80ca-9fdfa92890e1 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 1712064c-6ba8-4660-972f-1e827a40781a] Received event network-vif-plugged-9e200480-deff-4f39-9945-230d157ac906 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:04:49 np0005548731 nova_compute[232433]: 2025-12-06 07:04:49.077 232437 DEBUG oslo_concurrency.lockutils [req-fd4da4ae-8110-4bf5-a385-55e923c5ab7e req-dc13a404-6f9a-40bb-80ca-9fdfa92890e1 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "1712064c-6ba8-4660-972f-1e827a40781a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:04:49 np0005548731 nova_compute[232433]: 2025-12-06 07:04:49.077 232437 DEBUG oslo_concurrency.lockutils [req-fd4da4ae-8110-4bf5-a385-55e923c5ab7e req-dc13a404-6f9a-40bb-80ca-9fdfa92890e1 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "1712064c-6ba8-4660-972f-1e827a40781a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:04:49 np0005548731 nova_compute[232433]: 2025-12-06 07:04:49.077 232437 DEBUG oslo_concurrency.lockutils [req-fd4da4ae-8110-4bf5-a385-55e923c5ab7e req-dc13a404-6f9a-40bb-80ca-9fdfa92890e1 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "1712064c-6ba8-4660-972f-1e827a40781a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:04:49 np0005548731 nova_compute[232433]: 2025-12-06 07:04:49.077 232437 DEBUG nova.compute.manager [req-fd4da4ae-8110-4bf5-a385-55e923c5ab7e req-dc13a404-6f9a-40bb-80ca-9fdfa92890e1 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 1712064c-6ba8-4660-972f-1e827a40781a] Processing event network-vif-plugged-9e200480-deff-4f39-9945-230d157ac906 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Dec  6 02:04:49 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:04:49.113 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[879630eb-06ba-4312-9710-5b77b78b3403]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:04:49 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:04:49.116 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9238b9b5-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:04:49 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:04:49.117 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  6 02:04:49 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:04:49.117 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9238b9b5-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:04:49 np0005548731 nova_compute[232433]: 2025-12-06 07:04:49.119 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:04:49 np0005548731 NetworkManager[49182]: <info>  [1765004689.1200] manager: (tap9238b9b5-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/60)
Dec  6 02:04:49 np0005548731 kernel: tap9238b9b5-00: entered promiscuous mode
Dec  6 02:04:49 np0005548731 nova_compute[232433]: 2025-12-06 07:04:49.122 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:04:49 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:04:49.123 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap9238b9b5-00, col_values=(('external_ids', {'iface-id': '5c223717-35ae-4662-bf3f-55f7a73b7a9a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:04:49 np0005548731 nova_compute[232433]: 2025-12-06 07:04:49.124 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:04:49 np0005548731 ovn_controller[133927]: 2025-12-06T07:04:49Z|00096|binding|INFO|Releasing lport 5c223717-35ae-4662-bf3f-55f7a73b7a9a from this chassis (sb_readonly=0)
Dec  6 02:04:49 np0005548731 nova_compute[232433]: 2025-12-06 07:04:49.141 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:04:49 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:04:49.143 143965 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/9238b9b5-08f5-4634-bd05-370e3192b201.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/9238b9b5-08f5-4634-bd05-370e3192b201.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Dec  6 02:04:49 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:04:49.145 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[161327af-ad26-412a-9898-6541682343f4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:04:49 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:04:49.146 143965 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec  6 02:04:49 np0005548731 ovn_metadata_agent[143960]: global
Dec  6 02:04:49 np0005548731 ovn_metadata_agent[143960]:    log         /dev/log local0 debug
Dec  6 02:04:49 np0005548731 ovn_metadata_agent[143960]:    log-tag     haproxy-metadata-proxy-9238b9b5-08f5-4634-bd05-370e3192b201
Dec  6 02:04:49 np0005548731 ovn_metadata_agent[143960]:    user        root
Dec  6 02:04:49 np0005548731 ovn_metadata_agent[143960]:    group       root
Dec  6 02:04:49 np0005548731 ovn_metadata_agent[143960]:    maxconn     1024
Dec  6 02:04:49 np0005548731 ovn_metadata_agent[143960]:    pidfile     /var/lib/neutron/external/pids/9238b9b5-08f5-4634-bd05-370e3192b201.pid.haproxy
Dec  6 02:04:49 np0005548731 ovn_metadata_agent[143960]:    daemon
Dec  6 02:04:49 np0005548731 ovn_metadata_agent[143960]: 
Dec  6 02:04:49 np0005548731 ovn_metadata_agent[143960]: defaults
Dec  6 02:04:49 np0005548731 ovn_metadata_agent[143960]:    log global
Dec  6 02:04:49 np0005548731 ovn_metadata_agent[143960]:    mode http
Dec  6 02:04:49 np0005548731 ovn_metadata_agent[143960]:    option httplog
Dec  6 02:04:49 np0005548731 ovn_metadata_agent[143960]:    option dontlognull
Dec  6 02:04:49 np0005548731 ovn_metadata_agent[143960]:    option http-server-close
Dec  6 02:04:49 np0005548731 ovn_metadata_agent[143960]:    option forwardfor
Dec  6 02:04:49 np0005548731 ovn_metadata_agent[143960]:    retries                 3
Dec  6 02:04:49 np0005548731 ovn_metadata_agent[143960]:    timeout http-request    30s
Dec  6 02:04:49 np0005548731 ovn_metadata_agent[143960]:    timeout connect         30s
Dec  6 02:04:49 np0005548731 ovn_metadata_agent[143960]:    timeout client          32s
Dec  6 02:04:49 np0005548731 ovn_metadata_agent[143960]:    timeout server          32s
Dec  6 02:04:49 np0005548731 ovn_metadata_agent[143960]:    timeout http-keep-alive 30s
Dec  6 02:04:49 np0005548731 ovn_metadata_agent[143960]: 
Dec  6 02:04:49 np0005548731 ovn_metadata_agent[143960]: 
Dec  6 02:04:49 np0005548731 ovn_metadata_agent[143960]: listen listener
Dec  6 02:04:49 np0005548731 ovn_metadata_agent[143960]:    bind 169.254.169.254:80
Dec  6 02:04:49 np0005548731 ovn_metadata_agent[143960]:    server metadata /var/lib/neutron/metadata_proxy
Dec  6 02:04:49 np0005548731 ovn_metadata_agent[143960]:    http-request add-header X-OVN-Network-ID 9238b9b5-08f5-4634-bd05-370e3192b201
Dec  6 02:04:49 np0005548731 ovn_metadata_agent[143960]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Dec  6 02:04:49 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:04:49.149 143965 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-9238b9b5-08f5-4634-bd05-370e3192b201', 'env', 'PROCESS_TAG=haproxy-9238b9b5-08f5-4634-bd05-370e3192b201', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/9238b9b5-08f5-4634-bd05-370e3192b201.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Dec  6 02:04:49 np0005548731 podman[247269]: 2025-12-06 07:04:49.516638354 +0000 UTC m=+0.052486029 container create 80e8b91aa7d1d7dce9f6baac26eb43e941e4db62f11d9e35907c1718a5fef792 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9238b9b5-08f5-4634-bd05-370e3192b201, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Dec  6 02:04:49 np0005548731 systemd[1]: Started libpod-conmon-80e8b91aa7d1d7dce9f6baac26eb43e941e4db62f11d9e35907c1718a5fef792.scope.
Dec  6 02:04:49 np0005548731 systemd[1]: Started libcrun container.
Dec  6 02:04:49 np0005548731 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/57c3a69aedb396a4fd0f857de1ee900a93cd23ab78ffe9e885847b7c6137018e/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec  6 02:04:49 np0005548731 podman[247269]: 2025-12-06 07:04:49.486962537 +0000 UTC m=+0.022810242 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec  6 02:04:49 np0005548731 podman[247269]: 2025-12-06 07:04:49.593871028 +0000 UTC m=+0.129718723 container init 80e8b91aa7d1d7dce9f6baac26eb43e941e4db62f11d9e35907c1718a5fef792 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9238b9b5-08f5-4634-bd05-370e3192b201, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  6 02:04:49 np0005548731 podman[247269]: 2025-12-06 07:04:49.600485248 +0000 UTC m=+0.136332903 container start 80e8b91aa7d1d7dce9f6baac26eb43e941e4db62f11d9e35907c1718a5fef792 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9238b9b5-08f5-4634-bd05-370e3192b201, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  6 02:04:49 np0005548731 neutron-haproxy-ovnmeta-9238b9b5-08f5-4634-bd05-370e3192b201[247283]: [NOTICE]   (247287) : New worker (247289) forked
Dec  6 02:04:49 np0005548731 neutron-haproxy-ovnmeta-9238b9b5-08f5-4634-bd05-370e3192b201[247283]: [NOTICE]   (247287) : Loading success.
Dec  6 02:04:49 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:04:49 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:04:49 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:04:49.773 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:04:49 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:04:49 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:04:49 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:04:49.786 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:04:50 np0005548731 nova_compute[232433]: 2025-12-06 07:04:50.133 232437 DEBUG nova.virt.driver [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Emitting event <LifecycleEvent: 1765004690.1332808, 1712064c-6ba8-4660-972f-1e827a40781a => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 02:04:50 np0005548731 nova_compute[232433]: 2025-12-06 07:04:50.134 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 1712064c-6ba8-4660-972f-1e827a40781a] VM Started (Lifecycle Event)#033[00m
Dec  6 02:04:50 np0005548731 nova_compute[232433]: 2025-12-06 07:04:50.136 232437 DEBUG nova.compute.manager [None req-02ab9025-6254-4426-8f5c-ac86696939e3 756e3e1fa7e44042bdf37a6cdd877fac 9e86c61372e24db392d4a12ca71f7e00 - - default default] [instance: 1712064c-6ba8-4660-972f-1e827a40781a] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec  6 02:04:50 np0005548731 nova_compute[232433]: 2025-12-06 07:04:50.140 232437 DEBUG nova.virt.libvirt.driver [None req-02ab9025-6254-4426-8f5c-ac86696939e3 756e3e1fa7e44042bdf37a6cdd877fac 9e86c61372e24db392d4a12ca71f7e00 - - default default] [instance: 1712064c-6ba8-4660-972f-1e827a40781a] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Dec  6 02:04:50 np0005548731 nova_compute[232433]: 2025-12-06 07:04:50.142 232437 INFO nova.virt.libvirt.driver [-] [instance: 1712064c-6ba8-4660-972f-1e827a40781a] Instance spawned successfully.#033[00m
Dec  6 02:04:50 np0005548731 nova_compute[232433]: 2025-12-06 07:04:50.143 232437 DEBUG nova.virt.libvirt.driver [None req-02ab9025-6254-4426-8f5c-ac86696939e3 756e3e1fa7e44042bdf37a6cdd877fac 9e86c61372e24db392d4a12ca71f7e00 - - default default] [instance: 1712064c-6ba8-4660-972f-1e827a40781a] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Dec  6 02:04:50 np0005548731 nova_compute[232433]: 2025-12-06 07:04:50.169 232437 DEBUG nova.virt.libvirt.driver [None req-02ab9025-6254-4426-8f5c-ac86696939e3 756e3e1fa7e44042bdf37a6cdd877fac 9e86c61372e24db392d4a12ca71f7e00 - - default default] [instance: 1712064c-6ba8-4660-972f-1e827a40781a] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 02:04:50 np0005548731 nova_compute[232433]: 2025-12-06 07:04:50.169 232437 DEBUG nova.virt.libvirt.driver [None req-02ab9025-6254-4426-8f5c-ac86696939e3 756e3e1fa7e44042bdf37a6cdd877fac 9e86c61372e24db392d4a12ca71f7e00 - - default default] [instance: 1712064c-6ba8-4660-972f-1e827a40781a] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 02:04:50 np0005548731 nova_compute[232433]: 2025-12-06 07:04:50.169 232437 DEBUG nova.virt.libvirt.driver [None req-02ab9025-6254-4426-8f5c-ac86696939e3 756e3e1fa7e44042bdf37a6cdd877fac 9e86c61372e24db392d4a12ca71f7e00 - - default default] [instance: 1712064c-6ba8-4660-972f-1e827a40781a] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 02:04:50 np0005548731 nova_compute[232433]: 2025-12-06 07:04:50.170 232437 DEBUG nova.virt.libvirt.driver [None req-02ab9025-6254-4426-8f5c-ac86696939e3 756e3e1fa7e44042bdf37a6cdd877fac 9e86c61372e24db392d4a12ca71f7e00 - - default default] [instance: 1712064c-6ba8-4660-972f-1e827a40781a] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 02:04:50 np0005548731 nova_compute[232433]: 2025-12-06 07:04:50.170 232437 DEBUG nova.virt.libvirt.driver [None req-02ab9025-6254-4426-8f5c-ac86696939e3 756e3e1fa7e44042bdf37a6cdd877fac 9e86c61372e24db392d4a12ca71f7e00 - - default default] [instance: 1712064c-6ba8-4660-972f-1e827a40781a] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 02:04:50 np0005548731 nova_compute[232433]: 2025-12-06 07:04:50.170 232437 DEBUG nova.virt.libvirt.driver [None req-02ab9025-6254-4426-8f5c-ac86696939e3 756e3e1fa7e44042bdf37a6cdd877fac 9e86c61372e24db392d4a12ca71f7e00 - - default default] [instance: 1712064c-6ba8-4660-972f-1e827a40781a] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 02:04:50 np0005548731 nova_compute[232433]: 2025-12-06 07:04:50.174 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 1712064c-6ba8-4660-972f-1e827a40781a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:04:50 np0005548731 nova_compute[232433]: 2025-12-06 07:04:50.176 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 1712064c-6ba8-4660-972f-1e827a40781a] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  6 02:04:50 np0005548731 nova_compute[232433]: 2025-12-06 07:04:50.212 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 1712064c-6ba8-4660-972f-1e827a40781a] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  6 02:04:50 np0005548731 nova_compute[232433]: 2025-12-06 07:04:50.212 232437 DEBUG nova.virt.driver [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Emitting event <LifecycleEvent: 1765004690.1362438, 1712064c-6ba8-4660-972f-1e827a40781a => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 02:04:50 np0005548731 nova_compute[232433]: 2025-12-06 07:04:50.213 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 1712064c-6ba8-4660-972f-1e827a40781a] VM Paused (Lifecycle Event)#033[00m
Dec  6 02:04:50 np0005548731 nova_compute[232433]: 2025-12-06 07:04:50.249 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 1712064c-6ba8-4660-972f-1e827a40781a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:04:50 np0005548731 nova_compute[232433]: 2025-12-06 07:04:50.252 232437 DEBUG nova.virt.driver [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Emitting event <LifecycleEvent: 1765004690.1394794, 1712064c-6ba8-4660-972f-1e827a40781a => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 02:04:50 np0005548731 nova_compute[232433]: 2025-12-06 07:04:50.252 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 1712064c-6ba8-4660-972f-1e827a40781a] VM Resumed (Lifecycle Event)#033[00m
Dec  6 02:04:50 np0005548731 nova_compute[232433]: 2025-12-06 07:04:50.260 232437 INFO nova.compute.manager [None req-02ab9025-6254-4426-8f5c-ac86696939e3 756e3e1fa7e44042bdf37a6cdd877fac 9e86c61372e24db392d4a12ca71f7e00 - - default default] [instance: 1712064c-6ba8-4660-972f-1e827a40781a] Took 9.78 seconds to spawn the instance on the hypervisor.#033[00m
Dec  6 02:04:50 np0005548731 nova_compute[232433]: 2025-12-06 07:04:50.260 232437 DEBUG nova.compute.manager [None req-02ab9025-6254-4426-8f5c-ac86696939e3 756e3e1fa7e44042bdf37a6cdd877fac 9e86c61372e24db392d4a12ca71f7e00 - - default default] [instance: 1712064c-6ba8-4660-972f-1e827a40781a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:04:50 np0005548731 nova_compute[232433]: 2025-12-06 07:04:50.286 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 1712064c-6ba8-4660-972f-1e827a40781a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:04:50 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e168 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:04:50 np0005548731 nova_compute[232433]: 2025-12-06 07:04:50.289 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 1712064c-6ba8-4660-972f-1e827a40781a] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  6 02:04:50 np0005548731 nova_compute[232433]: 2025-12-06 07:04:50.316 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 1712064c-6ba8-4660-972f-1e827a40781a] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  6 02:04:50 np0005548731 nova_compute[232433]: 2025-12-06 07:04:50.333 232437 INFO nova.compute.manager [None req-02ab9025-6254-4426-8f5c-ac86696939e3 756e3e1fa7e44042bdf37a6cdd877fac 9e86c61372e24db392d4a12ca71f7e00 - - default default] [instance: 1712064c-6ba8-4660-972f-1e827a40781a] Took 12.21 seconds to build instance.#033[00m
Dec  6 02:04:50 np0005548731 nova_compute[232433]: 2025-12-06 07:04:50.367 232437 DEBUG oslo_concurrency.lockutils [None req-02ab9025-6254-4426-8f5c-ac86696939e3 756e3e1fa7e44042bdf37a6cdd877fac 9e86c61372e24db392d4a12ca71f7e00 - - default default] Lock "1712064c-6ba8-4660-972f-1e827a40781a" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.334s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:04:50 np0005548731 nova_compute[232433]: 2025-12-06 07:04:50.919 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:04:50 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e169 e169: 3 total, 3 up, 3 in
Dec  6 02:04:51 np0005548731 nova_compute[232433]: 2025-12-06 07:04:51.152 232437 DEBUG nova.compute.manager [req-0f8a7a7e-cad3-426a-9836-9f252cc870e9 req-f6507e73-237a-4bf7-929c-13b0e5965183 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 1712064c-6ba8-4660-972f-1e827a40781a] Received event network-vif-plugged-9e200480-deff-4f39-9945-230d157ac906 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:04:51 np0005548731 nova_compute[232433]: 2025-12-06 07:04:51.153 232437 DEBUG oslo_concurrency.lockutils [req-0f8a7a7e-cad3-426a-9836-9f252cc870e9 req-f6507e73-237a-4bf7-929c-13b0e5965183 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "1712064c-6ba8-4660-972f-1e827a40781a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:04:51 np0005548731 nova_compute[232433]: 2025-12-06 07:04:51.153 232437 DEBUG oslo_concurrency.lockutils [req-0f8a7a7e-cad3-426a-9836-9f252cc870e9 req-f6507e73-237a-4bf7-929c-13b0e5965183 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "1712064c-6ba8-4660-972f-1e827a40781a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:04:51 np0005548731 nova_compute[232433]: 2025-12-06 07:04:51.153 232437 DEBUG oslo_concurrency.lockutils [req-0f8a7a7e-cad3-426a-9836-9f252cc870e9 req-f6507e73-237a-4bf7-929c-13b0e5965183 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "1712064c-6ba8-4660-972f-1e827a40781a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:04:51 np0005548731 nova_compute[232433]: 2025-12-06 07:04:51.154 232437 DEBUG nova.compute.manager [req-0f8a7a7e-cad3-426a-9836-9f252cc870e9 req-f6507e73-237a-4bf7-929c-13b0e5965183 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 1712064c-6ba8-4660-972f-1e827a40781a] No waiting events found dispatching network-vif-plugged-9e200480-deff-4f39-9945-230d157ac906 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 02:04:51 np0005548731 nova_compute[232433]: 2025-12-06 07:04:51.154 232437 WARNING nova.compute.manager [req-0f8a7a7e-cad3-426a-9836-9f252cc870e9 req-f6507e73-237a-4bf7-929c-13b0e5965183 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 1712064c-6ba8-4660-972f-1e827a40781a] Received unexpected event network-vif-plugged-9e200480-deff-4f39-9945-230d157ac906 for instance with vm_state active and task_state None.#033[00m
Dec  6 02:04:51 np0005548731 nova_compute[232433]: 2025-12-06 07:04:51.269 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:04:51 np0005548731 nova_compute[232433]: 2025-12-06 07:04:51.490 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:04:51 np0005548731 nova_compute[232433]: 2025-12-06 07:04:51.490 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  6 02:04:51 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:04:51 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:04:51 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:04:51.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:04:51 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:04:51 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 02:04:51 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:04:51.786 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 02:04:53 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:04:53 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:04:53 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:04:53.777 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:04:53 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:04:53 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:04:53 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:04:53.788 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:04:53 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:04:53.821 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=9f96b960-b4f2-40bd-ae99-08121f5e8b78, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '11'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:04:54 np0005548731 nova_compute[232433]: 2025-12-06 07:04:54.060 232437 DEBUG nova.virt.libvirt.driver [None req-7217ce72-b1f2-4333-8cf6-95e5e446949b daab2cfaa69a4e9f819a57290bfd54d9 646c511edd3f4a5c93117e8dcfea183b - - default default] [instance: 1712064c-6ba8-4660-972f-1e827a40781a] Check if temp file /var/lib/nova/instances/tmpnrrgb899 exists to indicate shared storage is being used for migration. Exists? False _check_shared_storage_test_file /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10065#033[00m
Dec  6 02:04:54 np0005548731 nova_compute[232433]: 2025-12-06 07:04:54.060 232437 DEBUG nova.compute.manager [None req-7217ce72-b1f2-4333-8cf6-95e5e446949b daab2cfaa69a4e9f819a57290bfd54d9 646c511edd3f4a5c93117e8dcfea183b - - default default] source check data is LibvirtLiveMigrateData(bdms=<?>,block_migration=False,disk_available_mb=19456,disk_over_commit=False,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpnrrgb899',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='1712064c-6ba8-4660-972f-1e827a40781a',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=True,migration=<?>,old_vol_attachment_ids=<?>,serial_listen_addr=None,serial_listen_ports=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) check_can_live_migrate_source /usr/lib/python3.9/site-packages/nova/compute/manager.py:8587#033[00m
Dec  6 02:04:55 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 02:04:55 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 02:04:55 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:04:55 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:04:55 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:04:55 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:04:55.780 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:04:55 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:04:55 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:04:55 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:04:55.790 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:04:55 np0005548731 nova_compute[232433]: 2025-12-06 07:04:55.923 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:04:56 np0005548731 nova_compute[232433]: 2025-12-06 07:04:56.283 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:04:56 np0005548731 nova_compute[232433]: 2025-12-06 07:04:56.325 232437 DEBUG oslo_concurrency.lockutils [None req-796f3a12-8f70-4777-897c-858f8d53a969 e5f62143343c4499a86f710385c0c2f8 edcc68bbf9cd4c9189e59964db884a26 - - default default] Acquiring lock "9b0dea9b-128d-43a8-aedd-6a023517b89f" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:04:56 np0005548731 nova_compute[232433]: 2025-12-06 07:04:56.326 232437 DEBUG oslo_concurrency.lockutils [None req-796f3a12-8f70-4777-897c-858f8d53a969 e5f62143343c4499a86f710385c0c2f8 edcc68bbf9cd4c9189e59964db884a26 - - default default] Lock "9b0dea9b-128d-43a8-aedd-6a023517b89f" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:04:56 np0005548731 nova_compute[232433]: 2025-12-06 07:04:56.343 232437 DEBUG nova.compute.manager [None req-796f3a12-8f70-4777-897c-858f8d53a969 e5f62143343c4499a86f710385c0c2f8 edcc68bbf9cd4c9189e59964db884a26 - - default default] [instance: 9b0dea9b-128d-43a8-aedd-6a023517b89f] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Dec  6 02:04:56 np0005548731 nova_compute[232433]: 2025-12-06 07:04:56.413 232437 DEBUG oslo_concurrency.lockutils [None req-796f3a12-8f70-4777-897c-858f8d53a969 e5f62143343c4499a86f710385c0c2f8 edcc68bbf9cd4c9189e59964db884a26 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:04:56 np0005548731 nova_compute[232433]: 2025-12-06 07:04:56.414 232437 DEBUG oslo_concurrency.lockutils [None req-796f3a12-8f70-4777-897c-858f8d53a969 e5f62143343c4499a86f710385c0c2f8 edcc68bbf9cd4c9189e59964db884a26 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:04:56 np0005548731 nova_compute[232433]: 2025-12-06 07:04:56.420 232437 DEBUG nova.virt.hardware [None req-796f3a12-8f70-4777-897c-858f8d53a969 e5f62143343c4499a86f710385c0c2f8 edcc68bbf9cd4c9189e59964db884a26 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Dec  6 02:04:56 np0005548731 nova_compute[232433]: 2025-12-06 07:04:56.421 232437 INFO nova.compute.claims [None req-796f3a12-8f70-4777-897c-858f8d53a969 e5f62143343c4499a86f710385c0c2f8 edcc68bbf9cd4c9189e59964db884a26 - - default default] [instance: 9b0dea9b-128d-43a8-aedd-6a023517b89f] Claim successful on node compute-2.ctlplane.example.com#033[00m
Dec  6 02:04:56 np0005548731 nova_compute[232433]: 2025-12-06 07:04:56.541 232437 DEBUG oslo_concurrency.processutils [None req-796f3a12-8f70-4777-897c-858f8d53a969 e5f62143343c4499a86f710385c0c2f8 edcc68bbf9cd4c9189e59964db884a26 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:04:56 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 02:04:56 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/32107808' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 02:04:56 np0005548731 nova_compute[232433]: 2025-12-06 07:04:56.963 232437 DEBUG oslo_concurrency.processutils [None req-796f3a12-8f70-4777-897c-858f8d53a969 e5f62143343c4499a86f710385c0c2f8 edcc68bbf9cd4c9189e59964db884a26 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.423s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:04:56 np0005548731 nova_compute[232433]: 2025-12-06 07:04:56.969 232437 DEBUG nova.compute.provider_tree [None req-796f3a12-8f70-4777-897c-858f8d53a969 e5f62143343c4499a86f710385c0c2f8 edcc68bbf9cd4c9189e59964db884a26 - - default default] Inventory has not changed in ProviderTree for provider: 6d00757a-082f-486d-ae84-869a2ba2e6e7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  6 02:04:57 np0005548731 nova_compute[232433]: 2025-12-06 07:04:57.002 232437 DEBUG nova.scheduler.client.report [None req-796f3a12-8f70-4777-897c-858f8d53a969 e5f62143343c4499a86f710385c0c2f8 edcc68bbf9cd4c9189e59964db884a26 - - default default] Inventory has not changed for provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  6 02:04:57 np0005548731 nova_compute[232433]: 2025-12-06 07:04:57.163 232437 DEBUG oslo_concurrency.lockutils [None req-796f3a12-8f70-4777-897c-858f8d53a969 e5f62143343c4499a86f710385c0c2f8 edcc68bbf9cd4c9189e59964db884a26 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.749s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:04:57 np0005548731 nova_compute[232433]: 2025-12-06 07:04:57.164 232437 DEBUG nova.compute.manager [None req-796f3a12-8f70-4777-897c-858f8d53a969 e5f62143343c4499a86f710385c0c2f8 edcc68bbf9cd4c9189e59964db884a26 - - default default] [instance: 9b0dea9b-128d-43a8-aedd-6a023517b89f] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Dec  6 02:04:57 np0005548731 nova_compute[232433]: 2025-12-06 07:04:57.208 232437 DEBUG nova.compute.manager [None req-796f3a12-8f70-4777-897c-858f8d53a969 e5f62143343c4499a86f710385c0c2f8 edcc68bbf9cd4c9189e59964db884a26 - - default default] [instance: 9b0dea9b-128d-43a8-aedd-6a023517b89f] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Dec  6 02:04:57 np0005548731 nova_compute[232433]: 2025-12-06 07:04:57.208 232437 DEBUG nova.network.neutron [None req-796f3a12-8f70-4777-897c-858f8d53a969 e5f62143343c4499a86f710385c0c2f8 edcc68bbf9cd4c9189e59964db884a26 - - default default] [instance: 9b0dea9b-128d-43a8-aedd-6a023517b89f] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Dec  6 02:04:57 np0005548731 nova_compute[232433]: 2025-12-06 07:04:57.227 232437 INFO nova.virt.libvirt.driver [None req-796f3a12-8f70-4777-897c-858f8d53a969 e5f62143343c4499a86f710385c0c2f8 edcc68bbf9cd4c9189e59964db884a26 - - default default] [instance: 9b0dea9b-128d-43a8-aedd-6a023517b89f] Ignoring supplied device name: /dev/sda. Libvirt can't honour user-supplied dev names#033[00m
Dec  6 02:04:57 np0005548731 nova_compute[232433]: 2025-12-06 07:04:57.249 232437 DEBUG nova.compute.manager [None req-796f3a12-8f70-4777-897c-858f8d53a969 e5f62143343c4499a86f710385c0c2f8 edcc68bbf9cd4c9189e59964db884a26 - - default default] [instance: 9b0dea9b-128d-43a8-aedd-6a023517b89f] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Dec  6 02:04:57 np0005548731 nova_compute[232433]: 2025-12-06 07:04:57.368 232437 DEBUG nova.compute.manager [None req-796f3a12-8f70-4777-897c-858f8d53a969 e5f62143343c4499a86f710385c0c2f8 edcc68bbf9cd4c9189e59964db884a26 - - default default] [instance: 9b0dea9b-128d-43a8-aedd-6a023517b89f] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Dec  6 02:04:57 np0005548731 nova_compute[232433]: 2025-12-06 07:04:57.369 232437 DEBUG nova.virt.libvirt.driver [None req-796f3a12-8f70-4777-897c-858f8d53a969 e5f62143343c4499a86f710385c0c2f8 edcc68bbf9cd4c9189e59964db884a26 - - default default] [instance: 9b0dea9b-128d-43a8-aedd-6a023517b89f] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Dec  6 02:04:57 np0005548731 nova_compute[232433]: 2025-12-06 07:04:57.370 232437 INFO nova.virt.libvirt.driver [None req-796f3a12-8f70-4777-897c-858f8d53a969 e5f62143343c4499a86f710385c0c2f8 edcc68bbf9cd4c9189e59964db884a26 - - default default] [instance: 9b0dea9b-128d-43a8-aedd-6a023517b89f] Creating image(s)#033[00m
Dec  6 02:04:57 np0005548731 nova_compute[232433]: 2025-12-06 07:04:57.394 232437 DEBUG nova.storage.rbd_utils [None req-796f3a12-8f70-4777-897c-858f8d53a969 e5f62143343c4499a86f710385c0c2f8 edcc68bbf9cd4c9189e59964db884a26 - - default default] rbd image 9b0dea9b-128d-43a8-aedd-6a023517b89f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:04:57 np0005548731 nova_compute[232433]: 2025-12-06 07:04:57.422 232437 DEBUG nova.storage.rbd_utils [None req-796f3a12-8f70-4777-897c-858f8d53a969 e5f62143343c4499a86f710385c0c2f8 edcc68bbf9cd4c9189e59964db884a26 - - default default] rbd image 9b0dea9b-128d-43a8-aedd-6a023517b89f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:04:57 np0005548731 nova_compute[232433]: 2025-12-06 07:04:57.451 232437 DEBUG nova.storage.rbd_utils [None req-796f3a12-8f70-4777-897c-858f8d53a969 e5f62143343c4499a86f710385c0c2f8 edcc68bbf9cd4c9189e59964db884a26 - - default default] rbd image 9b0dea9b-128d-43a8-aedd-6a023517b89f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:04:57 np0005548731 nova_compute[232433]: 2025-12-06 07:04:57.455 232437 DEBUG oslo_concurrency.lockutils [None req-796f3a12-8f70-4777-897c-858f8d53a969 e5f62143343c4499a86f710385c0c2f8 edcc68bbf9cd4c9189e59964db884a26 - - default default] Acquiring lock "3061ff2caf019ed81096e4bbefa75aa61b352e98" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:04:57 np0005548731 nova_compute[232433]: 2025-12-06 07:04:57.456 232437 DEBUG oslo_concurrency.lockutils [None req-796f3a12-8f70-4777-897c-858f8d53a969 e5f62143343c4499a86f710385c0c2f8 edcc68bbf9cd4c9189e59964db884a26 - - default default] Lock "3061ff2caf019ed81096e4bbefa75aa61b352e98" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:04:57 np0005548731 nova_compute[232433]: 2025-12-06 07:04:57.459 232437 DEBUG nova.policy [None req-796f3a12-8f70-4777-897c-858f8d53a969 e5f62143343c4499a86f710385c0c2f8 edcc68bbf9cd4c9189e59964db884a26 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'e5f62143343c4499a86f710385c0c2f8', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'edcc68bbf9cd4c9189e59964db884a26', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Dec  6 02:04:57 np0005548731 nova_compute[232433]: 2025-12-06 07:04:57.726 232437 DEBUG nova.virt.libvirt.imagebackend [None req-796f3a12-8f70-4777-897c-858f8d53a969 e5f62143343c4499a86f710385c0c2f8 edcc68bbf9cd4c9189e59964db884a26 - - default default] Image locations are: [{'url': 'rbd://40a1bae4-cf76-5610-8dab-c75116dfe0bb/images/544e5cd1-78c2-46b6-bec1-176dc9a97c75/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://40a1bae4-cf76-5610-8dab-c75116dfe0bb/images/544e5cd1-78c2-46b6-bec1-176dc9a97c75/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085#033[00m
Dec  6 02:04:57 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:04:57 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:04:57 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:04:57.784 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:04:57 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:04:57 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:04:57 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:04:57.792 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:04:58 np0005548731 nova_compute[232433]: 2025-12-06 07:04:58.203 232437 DEBUG nova.compute.manager [req-56089010-9b70-49ed-b03a-855df925b023 req-4b700f16-cbed-48a6-a02a-f67fc86491cd 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 1712064c-6ba8-4660-972f-1e827a40781a] Received event network-vif-unplugged-9e200480-deff-4f39-9945-230d157ac906 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:04:58 np0005548731 nova_compute[232433]: 2025-12-06 07:04:58.204 232437 DEBUG oslo_concurrency.lockutils [req-56089010-9b70-49ed-b03a-855df925b023 req-4b700f16-cbed-48a6-a02a-f67fc86491cd 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "1712064c-6ba8-4660-972f-1e827a40781a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:04:58 np0005548731 nova_compute[232433]: 2025-12-06 07:04:58.205 232437 DEBUG oslo_concurrency.lockutils [req-56089010-9b70-49ed-b03a-855df925b023 req-4b700f16-cbed-48a6-a02a-f67fc86491cd 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "1712064c-6ba8-4660-972f-1e827a40781a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:04:58 np0005548731 nova_compute[232433]: 2025-12-06 07:04:58.205 232437 DEBUG oslo_concurrency.lockutils [req-56089010-9b70-49ed-b03a-855df925b023 req-4b700f16-cbed-48a6-a02a-f67fc86491cd 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "1712064c-6ba8-4660-972f-1e827a40781a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:04:58 np0005548731 nova_compute[232433]: 2025-12-06 07:04:58.205 232437 DEBUG nova.compute.manager [req-56089010-9b70-49ed-b03a-855df925b023 req-4b700f16-cbed-48a6-a02a-f67fc86491cd 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 1712064c-6ba8-4660-972f-1e827a40781a] No waiting events found dispatching network-vif-unplugged-9e200480-deff-4f39-9945-230d157ac906 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 02:04:58 np0005548731 nova_compute[232433]: 2025-12-06 07:04:58.205 232437 DEBUG nova.compute.manager [req-56089010-9b70-49ed-b03a-855df925b023 req-4b700f16-cbed-48a6-a02a-f67fc86491cd 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 1712064c-6ba8-4660-972f-1e827a40781a] Received event network-vif-unplugged-9e200480-deff-4f39-9945-230d157ac906 for instance with task_state migrating. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Dec  6 02:04:58 np0005548731 nova_compute[232433]: 2025-12-06 07:04:58.267 232437 DEBUG nova.network.neutron [None req-796f3a12-8f70-4777-897c-858f8d53a969 e5f62143343c4499a86f710385c0c2f8 edcc68bbf9cd4c9189e59964db884a26 - - default default] [instance: 9b0dea9b-128d-43a8-aedd-6a023517b89f] Successfully created port: c25d2974-9b24-4cff-862e-5f43e77d56be _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Dec  6 02:04:58 np0005548731 nova_compute[232433]: 2025-12-06 07:04:58.914 232437 INFO nova.compute.manager [None req-7217ce72-b1f2-4333-8cf6-95e5e446949b daab2cfaa69a4e9f819a57290bfd54d9 646c511edd3f4a5c93117e8dcfea183b - - default default] [instance: 1712064c-6ba8-4660-972f-1e827a40781a] Took 3.89 seconds for pre_live_migration on destination host compute-1.ctlplane.example.com.#033[00m
Dec  6 02:04:58 np0005548731 nova_compute[232433]: 2025-12-06 07:04:58.914 232437 DEBUG nova.compute.manager [None req-7217ce72-b1f2-4333-8cf6-95e5e446949b daab2cfaa69a4e9f819a57290bfd54d9 646c511edd3f4a5c93117e8dcfea183b - - default default] [instance: 1712064c-6ba8-4660-972f-1e827a40781a] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec  6 02:04:58 np0005548731 nova_compute[232433]: 2025-12-06 07:04:58.941 232437 DEBUG nova.compute.manager [None req-7217ce72-b1f2-4333-8cf6-95e5e446949b daab2cfaa69a4e9f819a57290bfd54d9 646c511edd3f4a5c93117e8dcfea183b - - default default] live_migration data is LibvirtLiveMigrateData(bdms=[LibvirtLiveMigrateBDMInfo],block_migration=False,disk_available_mb=19456,disk_over_commit=False,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpnrrgb899',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='1712064c-6ba8-4660-972f-1e827a40781a',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=True,migration=Migration(66ff6ed2-80e5-4a70-9b00-2852c6d84ac4),old_vol_attachment_ids={2b74c89f-1018-4ad4-8af8-84109979b9c7='b2a91c5a-7187-4625-b7d3-9dd1c6ec4d90'},serial_listen_addr=None,serial_listen_ports=[],src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=[],target_connect_addr=None,vifs=[VIFMigrateData],wait_for_vif_plugged=True) _do_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:8939#033[00m
Dec  6 02:04:58 np0005548731 nova_compute[232433]: 2025-12-06 07:04:58.944 232437 DEBUG nova.objects.instance [None req-7217ce72-b1f2-4333-8cf6-95e5e446949b daab2cfaa69a4e9f819a57290bfd54d9 646c511edd3f4a5c93117e8dcfea183b - - default default] Lazy-loading 'migration_context' on Instance uuid 1712064c-6ba8-4660-972f-1e827a40781a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:04:58 np0005548731 nova_compute[232433]: 2025-12-06 07:04:58.945 232437 DEBUG nova.virt.libvirt.driver [None req-7217ce72-b1f2-4333-8cf6-95e5e446949b daab2cfaa69a4e9f819a57290bfd54d9 646c511edd3f4a5c93117e8dcfea183b - - default default] [instance: 1712064c-6ba8-4660-972f-1e827a40781a] Starting monitoring of live migration _live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10639#033[00m
Dec  6 02:04:58 np0005548731 nova_compute[232433]: 2025-12-06 07:04:58.947 232437 DEBUG nova.virt.libvirt.driver [None req-7217ce72-b1f2-4333-8cf6-95e5e446949b daab2cfaa69a4e9f819a57290bfd54d9 646c511edd3f4a5c93117e8dcfea183b - - default default] [instance: 1712064c-6ba8-4660-972f-1e827a40781a] Operation thread is still running _live_migration_monitor /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10440#033[00m
Dec  6 02:04:58 np0005548731 nova_compute[232433]: 2025-12-06 07:04:58.947 232437 DEBUG nova.virt.libvirt.driver [None req-7217ce72-b1f2-4333-8cf6-95e5e446949b daab2cfaa69a4e9f819a57290bfd54d9 646c511edd3f4a5c93117e8dcfea183b - - default default] [instance: 1712064c-6ba8-4660-972f-1e827a40781a] Migration not running yet _live_migration_monitor /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10449#033[00m
Dec  6 02:04:58 np0005548731 nova_compute[232433]: 2025-12-06 07:04:58.974 232437 DEBUG nova.virt.libvirt.migration [None req-7217ce72-b1f2-4333-8cf6-95e5e446949b daab2cfaa69a4e9f819a57290bfd54d9 646c511edd3f4a5c93117e8dcfea183b - - default default] Find same serial number: pos=1, serial=2b74c89f-1018-4ad4-8af8-84109979b9c7 _update_volume_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:242#033[00m
Dec  6 02:04:58 np0005548731 nova_compute[232433]: 2025-12-06 07:04:58.976 232437 DEBUG nova.virt.libvirt.vif [None req-7217ce72-b1f2-4333-8cf6-95e5e446949b daab2cfaa69a4e9f819a57290bfd54d9 646c511edd3f4a5c93117e8dcfea183b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T07:04:37Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-LiveMigrationTest-server-1567617044',display_name='tempest-LiveMigrationTest-server-1567617044',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-livemigrationtest-server-1567617044',id=27,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-06T07:04:50Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='9e86c61372e24db392d4a12ca71f7e00',ramdisk_id='',reservation_id='r-i5n1flwt',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-LiveMigrationTest-854827502',owner_user_name='tempest-LiveMigrationTest-854827502-project-member'},tags=<?>,task_state='migrating',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T07:04:50Z,user_data=None,user_id='756e3e1fa7e44042bdf37a6cdd877fac',uuid=1712064c-6ba8-4660-972f-1e827a40781a,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "9e200480-deff-4f39-9945-230d157ac906", "address": "fa:16:3e:e9:3c:15", "network": {"id": "9238b9b5-08f5-4634-bd05-370e3192b201", "bridge": "br-int", "label": "tempest-LiveMigrationTest-1420048747-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9e86c61372e24db392d4a12ca71f7e00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tap9e200480-de", "ovs_interfaceid": "9e200480-deff-4f39-9945-230d157ac906", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec  6 02:04:58 np0005548731 nova_compute[232433]: 2025-12-06 07:04:58.976 232437 DEBUG nova.network.os_vif_util [None req-7217ce72-b1f2-4333-8cf6-95e5e446949b daab2cfaa69a4e9f819a57290bfd54d9 646c511edd3f4a5c93117e8dcfea183b - - default default] Converting VIF {"id": "9e200480-deff-4f39-9945-230d157ac906", "address": "fa:16:3e:e9:3c:15", "network": {"id": "9238b9b5-08f5-4634-bd05-370e3192b201", "bridge": "br-int", "label": "tempest-LiveMigrationTest-1420048747-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9e86c61372e24db392d4a12ca71f7e00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tap9e200480-de", "ovs_interfaceid": "9e200480-deff-4f39-9945-230d157ac906", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 02:04:58 np0005548731 nova_compute[232433]: 2025-12-06 07:04:58.977 232437 DEBUG nova.network.os_vif_util [None req-7217ce72-b1f2-4333-8cf6-95e5e446949b daab2cfaa69a4e9f819a57290bfd54d9 646c511edd3f4a5c93117e8dcfea183b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e9:3c:15,bridge_name='br-int',has_traffic_filtering=True,id=9e200480-deff-4f39-9945-230d157ac906,network=Network(9238b9b5-08f5-4634-bd05-370e3192b201),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9e200480-de') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 02:04:58 np0005548731 nova_compute[232433]: 2025-12-06 07:04:58.978 232437 DEBUG nova.virt.libvirt.migration [None req-7217ce72-b1f2-4333-8cf6-95e5e446949b daab2cfaa69a4e9f819a57290bfd54d9 646c511edd3f4a5c93117e8dcfea183b - - default default] [instance: 1712064c-6ba8-4660-972f-1e827a40781a] Updating guest XML with vif config: <interface type="ethernet">
Dec  6 02:04:58 np0005548731 nova_compute[232433]:  <mac address="fa:16:3e:e9:3c:15"/>
Dec  6 02:04:58 np0005548731 nova_compute[232433]:  <model type="virtio"/>
Dec  6 02:04:58 np0005548731 nova_compute[232433]:  <driver name="vhost" rx_queue_size="512"/>
Dec  6 02:04:58 np0005548731 nova_compute[232433]:  <mtu size="1442"/>
Dec  6 02:04:58 np0005548731 nova_compute[232433]:  <target dev="tap9e200480-de"/>
Dec  6 02:04:58 np0005548731 nova_compute[232433]: </interface>
Dec  6 02:04:58 np0005548731 nova_compute[232433]: _update_vif_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:388#033[00m
Dec  6 02:04:58 np0005548731 nova_compute[232433]: 2025-12-06 07:04:58.979 232437 DEBUG nova.virt.libvirt.driver [None req-7217ce72-b1f2-4333-8cf6-95e5e446949b daab2cfaa69a4e9f819a57290bfd54d9 646c511edd3f4a5c93117e8dcfea183b - - default default] [instance: 1712064c-6ba8-4660-972f-1e827a40781a] About to invoke the migrate API _live_migration_operation /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10272#033[00m
Dec  6 02:04:59 np0005548731 nova_compute[232433]: 2025-12-06 07:04:59.221 232437 DEBUG oslo_concurrency.processutils [None req-796f3a12-8f70-4777-897c-858f8d53a969 e5f62143343c4499a86f710385c0c2f8 edcc68bbf9cd4c9189e59964db884a26 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/3061ff2caf019ed81096e4bbefa75aa61b352e98.part --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:04:59 np0005548731 nova_compute[232433]: 2025-12-06 07:04:59.259 232437 DEBUG nova.network.neutron [None req-796f3a12-8f70-4777-897c-858f8d53a969 e5f62143343c4499a86f710385c0c2f8 edcc68bbf9cd4c9189e59964db884a26 - - default default] [instance: 9b0dea9b-128d-43a8-aedd-6a023517b89f] Successfully updated port: c25d2974-9b24-4cff-862e-5f43e77d56be _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Dec  6 02:04:59 np0005548731 nova_compute[232433]: 2025-12-06 07:04:59.277 232437 DEBUG oslo_concurrency.lockutils [None req-796f3a12-8f70-4777-897c-858f8d53a969 e5f62143343c4499a86f710385c0c2f8 edcc68bbf9cd4c9189e59964db884a26 - - default default] Acquiring lock "refresh_cache-9b0dea9b-128d-43a8-aedd-6a023517b89f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 02:04:59 np0005548731 nova_compute[232433]: 2025-12-06 07:04:59.278 232437 DEBUG oslo_concurrency.lockutils [None req-796f3a12-8f70-4777-897c-858f8d53a969 e5f62143343c4499a86f710385c0c2f8 edcc68bbf9cd4c9189e59964db884a26 - - default default] Acquired lock "refresh_cache-9b0dea9b-128d-43a8-aedd-6a023517b89f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 02:04:59 np0005548731 nova_compute[232433]: 2025-12-06 07:04:59.278 232437 DEBUG nova.network.neutron [None req-796f3a12-8f70-4777-897c-858f8d53a969 e5f62143343c4499a86f710385c0c2f8 edcc68bbf9cd4c9189e59964db884a26 - - default default] [instance: 9b0dea9b-128d-43a8-aedd-6a023517b89f] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec  6 02:04:59 np0005548731 nova_compute[232433]: 2025-12-06 07:04:59.284 232437 DEBUG oslo_concurrency.processutils [None req-796f3a12-8f70-4777-897c-858f8d53a969 e5f62143343c4499a86f710385c0c2f8 edcc68bbf9cd4c9189e59964db884a26 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/3061ff2caf019ed81096e4bbefa75aa61b352e98.part --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:04:59 np0005548731 nova_compute[232433]: 2025-12-06 07:04:59.285 232437 DEBUG nova.virt.images [None req-796f3a12-8f70-4777-897c-858f8d53a969 e5f62143343c4499a86f710385c0c2f8 edcc68bbf9cd4c9189e59964db884a26 - - default default] 544e5cd1-78c2-46b6-bec1-176dc9a97c75 was qcow2, converting to raw fetch_to_raw /usr/lib/python3.9/site-packages/nova/virt/images.py:242#033[00m
Dec  6 02:04:59 np0005548731 nova_compute[232433]: 2025-12-06 07:04:59.286 232437 DEBUG nova.privsep.utils [None req-796f3a12-8f70-4777-897c-858f8d53a969 e5f62143343c4499a86f710385c0c2f8 edcc68bbf9cd4c9189e59964db884a26 - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63#033[00m
Dec  6 02:04:59 np0005548731 nova_compute[232433]: 2025-12-06 07:04:59.287 232437 DEBUG oslo_concurrency.processutils [None req-796f3a12-8f70-4777-897c-858f8d53a969 e5f62143343c4499a86f710385c0c2f8 edcc68bbf9cd4c9189e59964db884a26 - - default default] Running cmd (subprocess): qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/3061ff2caf019ed81096e4bbefa75aa61b352e98.part /var/lib/nova/instances/_base/3061ff2caf019ed81096e4bbefa75aa61b352e98.converted execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:04:59 np0005548731 nova_compute[232433]: 2025-12-06 07:04:59.407 232437 DEBUG nova.compute.manager [req-65d8f278-50c6-44ac-adff-caea33e45e77 req-87779111-ce1b-40a3-b72f-97ce8e84aa18 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 9b0dea9b-128d-43a8-aedd-6a023517b89f] Received event network-changed-c25d2974-9b24-4cff-862e-5f43e77d56be external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:04:59 np0005548731 nova_compute[232433]: 2025-12-06 07:04:59.408 232437 DEBUG nova.compute.manager [req-65d8f278-50c6-44ac-adff-caea33e45e77 req-87779111-ce1b-40a3-b72f-97ce8e84aa18 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 9b0dea9b-128d-43a8-aedd-6a023517b89f] Refreshing instance network info cache due to event network-changed-c25d2974-9b24-4cff-862e-5f43e77d56be. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  6 02:04:59 np0005548731 nova_compute[232433]: 2025-12-06 07:04:59.408 232437 DEBUG oslo_concurrency.lockutils [req-65d8f278-50c6-44ac-adff-caea33e45e77 req-87779111-ce1b-40a3-b72f-97ce8e84aa18 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-9b0dea9b-128d-43a8-aedd-6a023517b89f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 02:04:59 np0005548731 nova_compute[232433]: 2025-12-06 07:04:59.450 232437 DEBUG nova.virt.libvirt.migration [None req-7217ce72-b1f2-4333-8cf6-95e5e446949b daab2cfaa69a4e9f819a57290bfd54d9 646c511edd3f4a5c93117e8dcfea183b - - default default] [instance: 1712064c-6ba8-4660-972f-1e827a40781a] Current None elapsed 0 steps [(0, 50), (150, 95), (300, 140), (450, 185), (600, 230), (750, 275), (900, 320), (1050, 365), (1200, 410), (1350, 455), (1500, 500)] update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:512#033[00m
Dec  6 02:04:59 np0005548731 nova_compute[232433]: 2025-12-06 07:04:59.450 232437 INFO nova.virt.libvirt.migration [None req-7217ce72-b1f2-4333-8cf6-95e5e446949b daab2cfaa69a4e9f819a57290bfd54d9 646c511edd3f4a5c93117e8dcfea183b - - default default] [instance: 1712064c-6ba8-4660-972f-1e827a40781a] Increasing downtime to 50 ms after 0 sec elapsed time#033[00m
Dec  6 02:04:59 np0005548731 nova_compute[232433]: 2025-12-06 07:04:59.460 232437 DEBUG nova.network.neutron [None req-796f3a12-8f70-4777-897c-858f8d53a969 e5f62143343c4499a86f710385c0c2f8 edcc68bbf9cd4c9189e59964db884a26 - - default default] [instance: 9b0dea9b-128d-43a8-aedd-6a023517b89f] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec  6 02:04:59 np0005548731 nova_compute[232433]: 2025-12-06 07:04:59.463 232437 DEBUG oslo_concurrency.processutils [None req-796f3a12-8f70-4777-897c-858f8d53a969 e5f62143343c4499a86f710385c0c2f8 edcc68bbf9cd4c9189e59964db884a26 - - default default] CMD "qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/3061ff2caf019ed81096e4bbefa75aa61b352e98.part /var/lib/nova/instances/_base/3061ff2caf019ed81096e4bbefa75aa61b352e98.converted" returned: 0 in 0.177s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:04:59 np0005548731 nova_compute[232433]: 2025-12-06 07:04:59.468 232437 DEBUG oslo_concurrency.processutils [None req-796f3a12-8f70-4777-897c-858f8d53a969 e5f62143343c4499a86f710385c0c2f8 edcc68bbf9cd4c9189e59964db884a26 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/3061ff2caf019ed81096e4bbefa75aa61b352e98.converted --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:04:59 np0005548731 nova_compute[232433]: 2025-12-06 07:04:59.531 232437 DEBUG oslo_concurrency.processutils [None req-796f3a12-8f70-4777-897c-858f8d53a969 e5f62143343c4499a86f710385c0c2f8 edcc68bbf9cd4c9189e59964db884a26 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/3061ff2caf019ed81096e4bbefa75aa61b352e98.converted --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:04:59 np0005548731 nova_compute[232433]: 2025-12-06 07:04:59.532 232437 DEBUG oslo_concurrency.lockutils [None req-796f3a12-8f70-4777-897c-858f8d53a969 e5f62143343c4499a86f710385c0c2f8 edcc68bbf9cd4c9189e59964db884a26 - - default default] Lock "3061ff2caf019ed81096e4bbefa75aa61b352e98" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 2.076s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:04:59 np0005548731 nova_compute[232433]: 2025-12-06 07:04:59.555 232437 DEBUG nova.storage.rbd_utils [None req-796f3a12-8f70-4777-897c-858f8d53a969 e5f62143343c4499a86f710385c0c2f8 edcc68bbf9cd4c9189e59964db884a26 - - default default] rbd image 9b0dea9b-128d-43a8-aedd-6a023517b89f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:04:59 np0005548731 nova_compute[232433]: 2025-12-06 07:04:59.559 232437 DEBUG oslo_concurrency.processutils [None req-796f3a12-8f70-4777-897c-858f8d53a969 e5f62143343c4499a86f710385c0c2f8 edcc68bbf9cd4c9189e59964db884a26 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/3061ff2caf019ed81096e4bbefa75aa61b352e98 9b0dea9b-128d-43a8-aedd-6a023517b89f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:04:59 np0005548731 nova_compute[232433]: 2025-12-06 07:04:59.585 232437 INFO nova.virt.libvirt.driver [None req-7217ce72-b1f2-4333-8cf6-95e5e446949b daab2cfaa69a4e9f819a57290bfd54d9 646c511edd3f4a5c93117e8dcfea183b - - default default] [instance: 1712064c-6ba8-4660-972f-1e827a40781a] Migration running for 0 secs, memory 100% remaining (bytes processed=0, remaining=0, total=0); disk 100% remaining (bytes processed=0, remaining=0, total=0).#033[00m
Dec  6 02:04:59 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:04:59 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:04:59 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:04:59.787 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:04:59 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:04:59 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:04:59 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:04:59.794 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:04:59 np0005548731 nova_compute[232433]: 2025-12-06 07:04:59.909 232437 DEBUG oslo_concurrency.processutils [None req-796f3a12-8f70-4777-897c-858f8d53a969 e5f62143343c4499a86f710385c0c2f8 edcc68bbf9cd4c9189e59964db884a26 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/3061ff2caf019ed81096e4bbefa75aa61b352e98 9b0dea9b-128d-43a8-aedd-6a023517b89f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.350s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:04:59 np0005548731 nova_compute[232433]: 2025-12-06 07:04:59.990 232437 DEBUG nova.storage.rbd_utils [None req-796f3a12-8f70-4777-897c-858f8d53a969 e5f62143343c4499a86f710385c0c2f8 edcc68bbf9cd4c9189e59964db884a26 - - default default] resizing rbd image 9b0dea9b-128d-43a8-aedd-6a023517b89f_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Dec  6 02:05:00 np0005548731 nova_compute[232433]: 2025-12-06 07:05:00.089 232437 DEBUG nova.virt.libvirt.migration [None req-7217ce72-b1f2-4333-8cf6-95e5e446949b daab2cfaa69a4e9f819a57290bfd54d9 646c511edd3f4a5c93117e8dcfea183b - - default default] [instance: 1712064c-6ba8-4660-972f-1e827a40781a] Current 50 elapsed 1 steps [(0, 50), (150, 95), (300, 140), (450, 185), (600, 230), (750, 275), (900, 320), (1050, 365), (1200, 410), (1350, 455), (1500, 500)] update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:512#033[00m
Dec  6 02:05:00 np0005548731 nova_compute[232433]: 2025-12-06 07:05:00.090 232437 DEBUG nova.virt.libvirt.migration [None req-7217ce72-b1f2-4333-8cf6-95e5e446949b daab2cfaa69a4e9f819a57290bfd54d9 646c511edd3f4a5c93117e8dcfea183b - - default default] [instance: 1712064c-6ba8-4660-972f-1e827a40781a] Downtime does not need to change update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:525#033[00m
Dec  6 02:05:00 np0005548731 nova_compute[232433]: 2025-12-06 07:05:00.232 232437 DEBUG nova.virt.driver [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Emitting event <LifecycleEvent: 1765004700.2260156, 1712064c-6ba8-4660-972f-1e827a40781a => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 02:05:00 np0005548731 nova_compute[232433]: 2025-12-06 07:05:00.233 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 1712064c-6ba8-4660-972f-1e827a40781a] VM Paused (Lifecycle Event)#033[00m
Dec  6 02:05:00 np0005548731 nova_compute[232433]: 2025-12-06 07:05:00.240 232437 DEBUG nova.objects.instance [None req-796f3a12-8f70-4777-897c-858f8d53a969 e5f62143343c4499a86f710385c0c2f8 edcc68bbf9cd4c9189e59964db884a26 - - default default] Lazy-loading 'migration_context' on Instance uuid 9b0dea9b-128d-43a8-aedd-6a023517b89f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:05:00 np0005548731 nova_compute[232433]: 2025-12-06 07:05:00.260 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 1712064c-6ba8-4660-972f-1e827a40781a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:05:00 np0005548731 nova_compute[232433]: 2025-12-06 07:05:00.261 232437 DEBUG nova.virt.libvirt.driver [None req-796f3a12-8f70-4777-897c-858f8d53a969 e5f62143343c4499a86f710385c0c2f8 edcc68bbf9cd4c9189e59964db884a26 - - default default] [instance: 9b0dea9b-128d-43a8-aedd-6a023517b89f] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Dec  6 02:05:00 np0005548731 nova_compute[232433]: 2025-12-06 07:05:00.261 232437 DEBUG nova.virt.libvirt.driver [None req-796f3a12-8f70-4777-897c-858f8d53a969 e5f62143343c4499a86f710385c0c2f8 edcc68bbf9cd4c9189e59964db884a26 - - default default] [instance: 9b0dea9b-128d-43a8-aedd-6a023517b89f] Ensure instance console log exists: /var/lib/nova/instances/9b0dea9b-128d-43a8-aedd-6a023517b89f/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec  6 02:05:00 np0005548731 nova_compute[232433]: 2025-12-06 07:05:00.262 232437 DEBUG oslo_concurrency.lockutils [None req-796f3a12-8f70-4777-897c-858f8d53a969 e5f62143343c4499a86f710385c0c2f8 edcc68bbf9cd4c9189e59964db884a26 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:05:00 np0005548731 nova_compute[232433]: 2025-12-06 07:05:00.262 232437 DEBUG oslo_concurrency.lockutils [None req-796f3a12-8f70-4777-897c-858f8d53a969 e5f62143343c4499a86f710385c0c2f8 edcc68bbf9cd4c9189e59964db884a26 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:05:00 np0005548731 nova_compute[232433]: 2025-12-06 07:05:00.263 232437 DEBUG oslo_concurrency.lockutils [None req-796f3a12-8f70-4777-897c-858f8d53a969 e5f62143343c4499a86f710385c0c2f8 edcc68bbf9cd4c9189e59964db884a26 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:05:00 np0005548731 nova_compute[232433]: 2025-12-06 07:05:00.265 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 1712064c-6ba8-4660-972f-1e827a40781a] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: migrating, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  6 02:05:00 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:05:00 np0005548731 nova_compute[232433]: 2025-12-06 07:05:00.289 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 1712064c-6ba8-4660-972f-1e827a40781a] During sync_power_state the instance has a pending task (migrating). Skip.#033[00m
Dec  6 02:05:00 np0005548731 kernel: tap9e200480-de (unregistering): left promiscuous mode
Dec  6 02:05:00 np0005548731 NetworkManager[49182]: <info>  [1765004700.4182] device (tap9e200480-de): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec  6 02:05:00 np0005548731 nova_compute[232433]: 2025-12-06 07:05:00.427 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:05:00 np0005548731 ovn_controller[133927]: 2025-12-06T07:05:00Z|00097|binding|INFO|Releasing lport 9e200480-deff-4f39-9945-230d157ac906 from this chassis (sb_readonly=0)
Dec  6 02:05:00 np0005548731 ovn_controller[133927]: 2025-12-06T07:05:00Z|00098|binding|INFO|Setting lport 9e200480-deff-4f39-9945-230d157ac906 down in Southbound
Dec  6 02:05:00 np0005548731 ovn_controller[133927]: 2025-12-06T07:05:00Z|00099|binding|INFO|Removing iface tap9e200480-de ovn-installed in OVS
Dec  6 02:05:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:05:00.436 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e9:3c:15 10.100.0.13'], port_security=['fa:16:3e:e9:3c:15 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com,compute-1.ctlplane.example.com', 'activation-strategy': 'rarp', 'additional-chassis-activated': '03fe054d-d727-4af3-9c5e-92e57505f242'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '1712064c-6ba8-4660-972f-1e827a40781a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9238b9b5-08f5-4634-bd05-370e3192b201', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9e86c61372e24db392d4a12ca71f7e00', 'neutron:revision_number': '8', 'neutron:security_group_ids': '36a83e30-1797-4590-94d1-4f6fcbdcefb2', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=db7e9816-53a6-4d9a-be4a-e5a8dcf8a64b, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], logical_port=9e200480-deff-4f39-9945-230d157ac906) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 02:05:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:05:00.438 143965 INFO neutron.agent.ovn.metadata.agent [-] Port 9e200480-deff-4f39-9945-230d157ac906 in datapath 9238b9b5-08f5-4634-bd05-370e3192b201 unbound from our chassis#033[00m
Dec  6 02:05:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:05:00.439 143965 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 9238b9b5-08f5-4634-bd05-370e3192b201, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec  6 02:05:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:05:00.441 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[8714f8d3-a51f-4825-9b9a-aac17d76e777]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:05:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:05:00.441 143965 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-9238b9b5-08f5-4634-bd05-370e3192b201 namespace which is not needed anymore#033[00m
Dec  6 02:05:00 np0005548731 nova_compute[232433]: 2025-12-06 07:05:00.452 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:05:00 np0005548731 systemd[1]: machine-qemu\x2d14\x2dinstance\x2d0000001b.scope: Deactivated successfully.
Dec  6 02:05:00 np0005548731 systemd[1]: machine-qemu\x2d14\x2dinstance\x2d0000001b.scope: Consumed 11.567s CPU time.
Dec  6 02:05:00 np0005548731 systemd-machined[195355]: Machine qemu-14-instance-0000001b terminated.
Dec  6 02:05:00 np0005548731 neutron-haproxy-ovnmeta-9238b9b5-08f5-4634-bd05-370e3192b201[247283]: [NOTICE]   (247287) : haproxy version is 2.8.14-c23fe91
Dec  6 02:05:00 np0005548731 neutron-haproxy-ovnmeta-9238b9b5-08f5-4634-bd05-370e3192b201[247283]: [NOTICE]   (247287) : path to executable is /usr/sbin/haproxy
Dec  6 02:05:00 np0005548731 neutron-haproxy-ovnmeta-9238b9b5-08f5-4634-bd05-370e3192b201[247283]: [WARNING]  (247287) : Exiting Master process...
Dec  6 02:05:00 np0005548731 neutron-haproxy-ovnmeta-9238b9b5-08f5-4634-bd05-370e3192b201[247283]: [ALERT]    (247287) : Current worker (247289) exited with code 143 (Terminated)
Dec  6 02:05:00 np0005548731 neutron-haproxy-ovnmeta-9238b9b5-08f5-4634-bd05-370e3192b201[247283]: [WARNING]  (247287) : All workers exited. Exiting... (0)
Dec  6 02:05:00 np0005548731 systemd[1]: libpod-80e8b91aa7d1d7dce9f6baac26eb43e941e4db62f11d9e35907c1718a5fef792.scope: Deactivated successfully.
Dec  6 02:05:00 np0005548731 podman[247677]: 2025-12-06 07:05:00.578928347 +0000 UTC m=+0.049689331 container died 80e8b91aa7d1d7dce9f6baac26eb43e941e4db62f11d9e35907c1718a5fef792 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9238b9b5-08f5-4634-bd05-370e3192b201, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS)
Dec  6 02:05:00 np0005548731 virtqemud[232080]: Unable to get XATTR trusted.libvirt.security.ref_selinux on volumes/volume-2b74c89f-1018-4ad4-8af8-84109979b9c7: No such file or directory
Dec  6 02:05:00 np0005548731 virtqemud[232080]: Unable to get XATTR trusted.libvirt.security.ref_dac on volumes/volume-2b74c89f-1018-4ad4-8af8-84109979b9c7: No such file or directory
Dec  6 02:05:00 np0005548731 nova_compute[232433]: 2025-12-06 07:05:00.593 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:05:00 np0005548731 nova_compute[232433]: 2025-12-06 07:05:00.599 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:05:00 np0005548731 nova_compute[232433]: 2025-12-06 07:05:00.601 232437 DEBUG nova.virt.libvirt.guest [None req-7217ce72-b1f2-4333-8cf6-95e5e446949b daab2cfaa69a4e9f819a57290bfd54d9 646c511edd3f4a5c93117e8dcfea183b - - default default] Domain has shutdown/gone away: Requested operation is not valid: domain is not running get_job_info /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:688#033[00m
Dec  6 02:05:00 np0005548731 nova_compute[232433]: 2025-12-06 07:05:00.601 232437 INFO nova.virt.libvirt.driver [None req-7217ce72-b1f2-4333-8cf6-95e5e446949b daab2cfaa69a4e9f819a57290bfd54d9 646c511edd3f4a5c93117e8dcfea183b - - default default] [instance: 1712064c-6ba8-4660-972f-1e827a40781a] Migration operation has completed#033[00m
Dec  6 02:05:00 np0005548731 nova_compute[232433]: 2025-12-06 07:05:00.601 232437 INFO nova.compute.manager [None req-7217ce72-b1f2-4333-8cf6-95e5e446949b daab2cfaa69a4e9f819a57290bfd54d9 646c511edd3f4a5c93117e8dcfea183b - - default default] [instance: 1712064c-6ba8-4660-972f-1e827a40781a] _post_live_migration() is started..#033[00m
Dec  6 02:05:00 np0005548731 nova_compute[232433]: 2025-12-06 07:05:00.603 232437 DEBUG nova.virt.libvirt.driver [None req-7217ce72-b1f2-4333-8cf6-95e5e446949b daab2cfaa69a4e9f819a57290bfd54d9 646c511edd3f4a5c93117e8dcfea183b - - default default] [instance: 1712064c-6ba8-4660-972f-1e827a40781a] Migrate API has completed _live_migration_operation /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10279#033[00m
Dec  6 02:05:00 np0005548731 nova_compute[232433]: 2025-12-06 07:05:00.603 232437 DEBUG nova.virt.libvirt.driver [None req-7217ce72-b1f2-4333-8cf6-95e5e446949b daab2cfaa69a4e9f819a57290bfd54d9 646c511edd3f4a5c93117e8dcfea183b - - default default] [instance: 1712064c-6ba8-4660-972f-1e827a40781a] Migration operation thread has finished _live_migration_operation /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10327#033[00m
Dec  6 02:05:00 np0005548731 nova_compute[232433]: 2025-12-06 07:05:00.604 232437 DEBUG nova.virt.libvirt.driver [None req-7217ce72-b1f2-4333-8cf6-95e5e446949b daab2cfaa69a4e9f819a57290bfd54d9 646c511edd3f4a5c93117e8dcfea183b - - default default] [instance: 1712064c-6ba8-4660-972f-1e827a40781a] Migration operation thread notification thread_finished /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10630#033[00m
Dec  6 02:05:00 np0005548731 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-80e8b91aa7d1d7dce9f6baac26eb43e941e4db62f11d9e35907c1718a5fef792-userdata-shm.mount: Deactivated successfully.
Dec  6 02:05:00 np0005548731 systemd[1]: var-lib-containers-storage-overlay-57c3a69aedb396a4fd0f857de1ee900a93cd23ab78ffe9e885847b7c6137018e-merged.mount: Deactivated successfully.
Dec  6 02:05:00 np0005548731 podman[247677]: 2025-12-06 07:05:00.622826056 +0000 UTC m=+0.093587040 container cleanup 80e8b91aa7d1d7dce9f6baac26eb43e941e4db62f11d9e35907c1718a5fef792 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9238b9b5-08f5-4634-bd05-370e3192b201, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team)
Dec  6 02:05:00 np0005548731 systemd[1]: libpod-conmon-80e8b91aa7d1d7dce9f6baac26eb43e941e4db62f11d9e35907c1718a5fef792.scope: Deactivated successfully.
Dec  6 02:05:00 np0005548731 podman[247718]: 2025-12-06 07:05:00.695350157 +0000 UTC m=+0.045730245 container remove 80e8b91aa7d1d7dce9f6baac26eb43e941e4db62f11d9e35907c1718a5fef792 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9238b9b5-08f5-4634-bd05-370e3192b201, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec  6 02:05:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:05:00.702 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[d3b3cbfb-069e-4ced-9da6-81433214d86d]: (4, ('Sat Dec  6 07:05:00 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-9238b9b5-08f5-4634-bd05-370e3192b201 (80e8b91aa7d1d7dce9f6baac26eb43e941e4db62f11d9e35907c1718a5fef792)\n80e8b91aa7d1d7dce9f6baac26eb43e941e4db62f11d9e35907c1718a5fef792\nSat Dec  6 07:05:00 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-9238b9b5-08f5-4634-bd05-370e3192b201 (80e8b91aa7d1d7dce9f6baac26eb43e941e4db62f11d9e35907c1718a5fef792)\n80e8b91aa7d1d7dce9f6baac26eb43e941e4db62f11d9e35907c1718a5fef792\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:05:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:05:00.704 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[a81a0076-2f4e-47b0-897a-34061e033061]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:05:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:05:00.705 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9238b9b5-00, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:05:00 np0005548731 nova_compute[232433]: 2025-12-06 07:05:00.707 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:05:00 np0005548731 kernel: tap9238b9b5-00: left promiscuous mode
Dec  6 02:05:00 np0005548731 nova_compute[232433]: 2025-12-06 07:05:00.726 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:05:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:05:00.729 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[1ff2e321-88e7-420a-84df-218645e92c16]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:05:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:05:00.746 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[e7f1cc5f-aec0-46f7-8370-9c5dee67fc68]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:05:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:05:00.747 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[ea4304ed-fb11-44da-be60-9456fe803c3f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:05:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:05:00.760 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[95566d9a-15d8-4e2c-9584-24022a38c28b]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 495091, 'reachable_time': 25787, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 247737, 'error': None, 'target': 'ovnmeta-9238b9b5-08f5-4634-bd05-370e3192b201', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:05:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:05:00.762 144079 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-9238b9b5-08f5-4634-bd05-370e3192b201 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Dec  6 02:05:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:05:00.762 144079 DEBUG oslo.privsep.daemon [-] privsep: reply[103f8bcc-c844-4f6e-9d94-85210b5403f3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:05:00 np0005548731 systemd[1]: run-netns-ovnmeta\x2d9238b9b5\x2d08f5\x2d4634\x2dbd05\x2d370e3192b201.mount: Deactivated successfully.
Dec  6 02:05:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:05:00.849 143965 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:05:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:05:00.850 143965 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:05:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:05:00.850 143965 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:05:00 np0005548731 nova_compute[232433]: 2025-12-06 07:05:00.956 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:05:01 np0005548731 nova_compute[232433]: 2025-12-06 07:05:01.010 232437 DEBUG nova.compute.manager [req-5c2e6880-28ac-49d5-8e52-b0051d3426f4 req-59baf75d-d683-49f3-a84c-7e2f4d593674 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 1712064c-6ba8-4660-972f-1e827a40781a] Received event network-vif-plugged-9e200480-deff-4f39-9945-230d157ac906 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:05:01 np0005548731 nova_compute[232433]: 2025-12-06 07:05:01.011 232437 DEBUG oslo_concurrency.lockutils [req-5c2e6880-28ac-49d5-8e52-b0051d3426f4 req-59baf75d-d683-49f3-a84c-7e2f4d593674 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "1712064c-6ba8-4660-972f-1e827a40781a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:05:01 np0005548731 nova_compute[232433]: 2025-12-06 07:05:01.012 232437 DEBUG oslo_concurrency.lockutils [req-5c2e6880-28ac-49d5-8e52-b0051d3426f4 req-59baf75d-d683-49f3-a84c-7e2f4d593674 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "1712064c-6ba8-4660-972f-1e827a40781a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:05:01 np0005548731 nova_compute[232433]: 2025-12-06 07:05:01.012 232437 DEBUG oslo_concurrency.lockutils [req-5c2e6880-28ac-49d5-8e52-b0051d3426f4 req-59baf75d-d683-49f3-a84c-7e2f4d593674 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "1712064c-6ba8-4660-972f-1e827a40781a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:05:01 np0005548731 nova_compute[232433]: 2025-12-06 07:05:01.012 232437 DEBUG nova.compute.manager [req-5c2e6880-28ac-49d5-8e52-b0051d3426f4 req-59baf75d-d683-49f3-a84c-7e2f4d593674 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 1712064c-6ba8-4660-972f-1e827a40781a] No waiting events found dispatching network-vif-plugged-9e200480-deff-4f39-9945-230d157ac906 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 02:05:01 np0005548731 nova_compute[232433]: 2025-12-06 07:05:01.013 232437 WARNING nova.compute.manager [req-5c2e6880-28ac-49d5-8e52-b0051d3426f4 req-59baf75d-d683-49f3-a84c-7e2f4d593674 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 1712064c-6ba8-4660-972f-1e827a40781a] Received unexpected event network-vif-plugged-9e200480-deff-4f39-9945-230d157ac906 for instance with vm_state active and task_state migrating.#033[00m
Dec  6 02:05:01 np0005548731 nova_compute[232433]: 2025-12-06 07:05:01.013 232437 DEBUG nova.compute.manager [req-5c2e6880-28ac-49d5-8e52-b0051d3426f4 req-59baf75d-d683-49f3-a84c-7e2f4d593674 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 1712064c-6ba8-4660-972f-1e827a40781a] Received event network-changed-9e200480-deff-4f39-9945-230d157ac906 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:05:01 np0005548731 nova_compute[232433]: 2025-12-06 07:05:01.013 232437 DEBUG nova.compute.manager [req-5c2e6880-28ac-49d5-8e52-b0051d3426f4 req-59baf75d-d683-49f3-a84c-7e2f4d593674 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 1712064c-6ba8-4660-972f-1e827a40781a] Refreshing instance network info cache due to event network-changed-9e200480-deff-4f39-9945-230d157ac906. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  6 02:05:01 np0005548731 nova_compute[232433]: 2025-12-06 07:05:01.014 232437 DEBUG oslo_concurrency.lockutils [req-5c2e6880-28ac-49d5-8e52-b0051d3426f4 req-59baf75d-d683-49f3-a84c-7e2f4d593674 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-1712064c-6ba8-4660-972f-1e827a40781a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 02:05:01 np0005548731 nova_compute[232433]: 2025-12-06 07:05:01.014 232437 DEBUG oslo_concurrency.lockutils [req-5c2e6880-28ac-49d5-8e52-b0051d3426f4 req-59baf75d-d683-49f3-a84c-7e2f4d593674 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-1712064c-6ba8-4660-972f-1e827a40781a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 02:05:01 np0005548731 nova_compute[232433]: 2025-12-06 07:05:01.015 232437 DEBUG nova.network.neutron [req-5c2e6880-28ac-49d5-8e52-b0051d3426f4 req-59baf75d-d683-49f3-a84c-7e2f4d593674 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 1712064c-6ba8-4660-972f-1e827a40781a] Refreshing network info cache for port 9e200480-deff-4f39-9945-230d157ac906 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  6 02:05:01 np0005548731 nova_compute[232433]: 2025-12-06 07:05:01.170 232437 DEBUG nova.compute.manager [req-bcd6b4a1-215a-4f7c-a3b2-0aee25c16492 req-db45a840-8f0d-4923-9959-b44af0698169 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 1712064c-6ba8-4660-972f-1e827a40781a] Received event network-vif-unplugged-9e200480-deff-4f39-9945-230d157ac906 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:05:01 np0005548731 nova_compute[232433]: 2025-12-06 07:05:01.171 232437 DEBUG oslo_concurrency.lockutils [req-bcd6b4a1-215a-4f7c-a3b2-0aee25c16492 req-db45a840-8f0d-4923-9959-b44af0698169 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "1712064c-6ba8-4660-972f-1e827a40781a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:05:01 np0005548731 nova_compute[232433]: 2025-12-06 07:05:01.171 232437 DEBUG oslo_concurrency.lockutils [req-bcd6b4a1-215a-4f7c-a3b2-0aee25c16492 req-db45a840-8f0d-4923-9959-b44af0698169 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "1712064c-6ba8-4660-972f-1e827a40781a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:05:01 np0005548731 nova_compute[232433]: 2025-12-06 07:05:01.171 232437 DEBUG oslo_concurrency.lockutils [req-bcd6b4a1-215a-4f7c-a3b2-0aee25c16492 req-db45a840-8f0d-4923-9959-b44af0698169 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "1712064c-6ba8-4660-972f-1e827a40781a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:05:01 np0005548731 nova_compute[232433]: 2025-12-06 07:05:01.171 232437 DEBUG nova.compute.manager [req-bcd6b4a1-215a-4f7c-a3b2-0aee25c16492 req-db45a840-8f0d-4923-9959-b44af0698169 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 1712064c-6ba8-4660-972f-1e827a40781a] No waiting events found dispatching network-vif-unplugged-9e200480-deff-4f39-9945-230d157ac906 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 02:05:01 np0005548731 nova_compute[232433]: 2025-12-06 07:05:01.172 232437 DEBUG nova.compute.manager [req-bcd6b4a1-215a-4f7c-a3b2-0aee25c16492 req-db45a840-8f0d-4923-9959-b44af0698169 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 1712064c-6ba8-4660-972f-1e827a40781a] Received event network-vif-unplugged-9e200480-deff-4f39-9945-230d157ac906 for instance with task_state migrating. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Dec  6 02:05:01 np0005548731 nova_compute[232433]: 2025-12-06 07:05:01.285 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:05:01 np0005548731 nova_compute[232433]: 2025-12-06 07:05:01.664 232437 DEBUG nova.network.neutron [None req-796f3a12-8f70-4777-897c-858f8d53a969 e5f62143343c4499a86f710385c0c2f8 edcc68bbf9cd4c9189e59964db884a26 - - default default] [instance: 9b0dea9b-128d-43a8-aedd-6a023517b89f] Updating instance_info_cache with network_info: [{"id": "c25d2974-9b24-4cff-862e-5f43e77d56be", "address": "fa:16:3e:36:88:b5", "network": {"id": "ebf7a358-f0c9-48b5-8485-23c08f737784", "bridge": "br-int", "label": "tempest-AttachSCSIVolumeTestJSON-863315842-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "edcc68bbf9cd4c9189e59964db884a26", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc25d2974-9b", "ovs_interfaceid": "c25d2974-9b24-4cff-862e-5f43e77d56be", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 02:05:01 np0005548731 nova_compute[232433]: 2025-12-06 07:05:01.683 232437 DEBUG oslo_concurrency.lockutils [None req-796f3a12-8f70-4777-897c-858f8d53a969 e5f62143343c4499a86f710385c0c2f8 edcc68bbf9cd4c9189e59964db884a26 - - default default] Releasing lock "refresh_cache-9b0dea9b-128d-43a8-aedd-6a023517b89f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 02:05:01 np0005548731 nova_compute[232433]: 2025-12-06 07:05:01.684 232437 DEBUG nova.compute.manager [None req-796f3a12-8f70-4777-897c-858f8d53a969 e5f62143343c4499a86f710385c0c2f8 edcc68bbf9cd4c9189e59964db884a26 - - default default] [instance: 9b0dea9b-128d-43a8-aedd-6a023517b89f] Instance network_info: |[{"id": "c25d2974-9b24-4cff-862e-5f43e77d56be", "address": "fa:16:3e:36:88:b5", "network": {"id": "ebf7a358-f0c9-48b5-8485-23c08f737784", "bridge": "br-int", "label": "tempest-AttachSCSIVolumeTestJSON-863315842-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "edcc68bbf9cd4c9189e59964db884a26", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc25d2974-9b", "ovs_interfaceid": "c25d2974-9b24-4cff-862e-5f43e77d56be", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Dec  6 02:05:01 np0005548731 nova_compute[232433]: 2025-12-06 07:05:01.684 232437 DEBUG oslo_concurrency.lockutils [req-65d8f278-50c6-44ac-adff-caea33e45e77 req-87779111-ce1b-40a3-b72f-97ce8e84aa18 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-9b0dea9b-128d-43a8-aedd-6a023517b89f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 02:05:01 np0005548731 nova_compute[232433]: 2025-12-06 07:05:01.685 232437 DEBUG nova.network.neutron [req-65d8f278-50c6-44ac-adff-caea33e45e77 req-87779111-ce1b-40a3-b72f-97ce8e84aa18 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 9b0dea9b-128d-43a8-aedd-6a023517b89f] Refreshing network info cache for port c25d2974-9b24-4cff-862e-5f43e77d56be _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  6 02:05:01 np0005548731 nova_compute[232433]: 2025-12-06 07:05:01.687 232437 DEBUG nova.virt.libvirt.driver [None req-796f3a12-8f70-4777-897c-858f8d53a969 e5f62143343c4499a86f710385c0c2f8 edcc68bbf9cd4c9189e59964db884a26 - - default default] [instance: 9b0dea9b-128d-43a8-aedd-6a023517b89f] Start _get_guest_xml network_info=[{"id": "c25d2974-9b24-4cff-862e-5f43e77d56be", "address": "fa:16:3e:36:88:b5", "network": {"id": "ebf7a358-f0c9-48b5-8485-23c08f737784", "bridge": "br-int", "label": "tempest-AttachSCSIVolumeTestJSON-863315842-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "edcc68bbf9cd4c9189e59964db884a26", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc25d2974-9b", "ovs_interfaceid": "c25d2974-9b24-4cff-862e-5f43e77d56be", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'scsi', 'cdrom_bus': 'scsi', 'mapping': {'root': {'bus': 'scsi', 'dev': 'sda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'scsi', 'dev': 'sda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'scsi', 'dev': 'sdb', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T07:04:49Z,direct_url=<?>,disk_format='qcow2',id=544e5cd1-78c2-46b6-bec1-176dc9a97c75,min_disk=0,min_ram=0,name='',owner='9175aeced50c47b9b3aba5550ef48619',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T07:04:51Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/sda', 'image': [{'guest_format': None, 'device_type': 'disk', 'encryption_secret_uuid': None, 'disk_bus': 'scsi', 'boot_index': 0, 'encryption_format': None, 'device_name': '/dev/sda', 'encryption_options': None, 'size': 0, 'encrypted': False, 'image_id': '544e5cd1-78c2-46b6-bec1-176dc9a97c75'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec  6 02:05:01 np0005548731 nova_compute[232433]: 2025-12-06 07:05:01.691 232437 WARNING nova.virt.libvirt.driver [None req-796f3a12-8f70-4777-897c-858f8d53a969 e5f62143343c4499a86f710385c0c2f8 edcc68bbf9cd4c9189e59964db884a26 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  6 02:05:01 np0005548731 nova_compute[232433]: 2025-12-06 07:05:01.696 232437 DEBUG nova.virt.libvirt.host [None req-796f3a12-8f70-4777-897c-858f8d53a969 e5f62143343c4499a86f710385c0c2f8 edcc68bbf9cd4c9189e59964db884a26 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec  6 02:05:01 np0005548731 nova_compute[232433]: 2025-12-06 07:05:01.696 232437 DEBUG nova.virt.libvirt.host [None req-796f3a12-8f70-4777-897c-858f8d53a969 e5f62143343c4499a86f710385c0c2f8 edcc68bbf9cd4c9189e59964db884a26 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec  6 02:05:01 np0005548731 nova_compute[232433]: 2025-12-06 07:05:01.702 232437 DEBUG nova.virt.libvirt.host [None req-796f3a12-8f70-4777-897c-858f8d53a969 e5f62143343c4499a86f710385c0c2f8 edcc68bbf9cd4c9189e59964db884a26 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec  6 02:05:01 np0005548731 nova_compute[232433]: 2025-12-06 07:05:01.702 232437 DEBUG nova.virt.libvirt.host [None req-796f3a12-8f70-4777-897c-858f8d53a969 e5f62143343c4499a86f710385c0c2f8 edcc68bbf9cd4c9189e59964db884a26 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec  6 02:05:01 np0005548731 nova_compute[232433]: 2025-12-06 07:05:01.704 232437 DEBUG nova.virt.libvirt.driver [None req-796f3a12-8f70-4777-897c-858f8d53a969 e5f62143343c4499a86f710385c0c2f8 edcc68bbf9cd4c9189e59964db884a26 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec  6 02:05:01 np0005548731 nova_compute[232433]: 2025-12-06 07:05:01.704 232437 DEBUG nova.virt.hardware [None req-796f3a12-8f70-4777-897c-858f8d53a969 e5f62143343c4499a86f710385c0c2f8 edcc68bbf9cd4c9189e59964db884a26 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-06T06:56:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='25848a18-11d9-4f11-80b5-5d005675c76d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T07:04:49Z,direct_url=<?>,disk_format='qcow2',id=544e5cd1-78c2-46b6-bec1-176dc9a97c75,min_disk=0,min_ram=0,name='',owner='9175aeced50c47b9b3aba5550ef48619',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T07:04:51Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec  6 02:05:01 np0005548731 nova_compute[232433]: 2025-12-06 07:05:01.704 232437 DEBUG nova.virt.hardware [None req-796f3a12-8f70-4777-897c-858f8d53a969 e5f62143343c4499a86f710385c0c2f8 edcc68bbf9cd4c9189e59964db884a26 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec  6 02:05:01 np0005548731 nova_compute[232433]: 2025-12-06 07:05:01.705 232437 DEBUG nova.virt.hardware [None req-796f3a12-8f70-4777-897c-858f8d53a969 e5f62143343c4499a86f710385c0c2f8 edcc68bbf9cd4c9189e59964db884a26 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec  6 02:05:01 np0005548731 nova_compute[232433]: 2025-12-06 07:05:01.705 232437 DEBUG nova.virt.hardware [None req-796f3a12-8f70-4777-897c-858f8d53a969 e5f62143343c4499a86f710385c0c2f8 edcc68bbf9cd4c9189e59964db884a26 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec  6 02:05:01 np0005548731 nova_compute[232433]: 2025-12-06 07:05:01.705 232437 DEBUG nova.virt.hardware [None req-796f3a12-8f70-4777-897c-858f8d53a969 e5f62143343c4499a86f710385c0c2f8 edcc68bbf9cd4c9189e59964db884a26 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec  6 02:05:01 np0005548731 nova_compute[232433]: 2025-12-06 07:05:01.706 232437 DEBUG nova.virt.hardware [None req-796f3a12-8f70-4777-897c-858f8d53a969 e5f62143343c4499a86f710385c0c2f8 edcc68bbf9cd4c9189e59964db884a26 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec  6 02:05:01 np0005548731 nova_compute[232433]: 2025-12-06 07:05:01.706 232437 DEBUG nova.virt.hardware [None req-796f3a12-8f70-4777-897c-858f8d53a969 e5f62143343c4499a86f710385c0c2f8 edcc68bbf9cd4c9189e59964db884a26 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec  6 02:05:01 np0005548731 nova_compute[232433]: 2025-12-06 07:05:01.706 232437 DEBUG nova.virt.hardware [None req-796f3a12-8f70-4777-897c-858f8d53a969 e5f62143343c4499a86f710385c0c2f8 edcc68bbf9cd4c9189e59964db884a26 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec  6 02:05:01 np0005548731 nova_compute[232433]: 2025-12-06 07:05:01.707 232437 DEBUG nova.virt.hardware [None req-796f3a12-8f70-4777-897c-858f8d53a969 e5f62143343c4499a86f710385c0c2f8 edcc68bbf9cd4c9189e59964db884a26 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec  6 02:05:01 np0005548731 nova_compute[232433]: 2025-12-06 07:05:01.707 232437 DEBUG nova.virt.hardware [None req-796f3a12-8f70-4777-897c-858f8d53a969 e5f62143343c4499a86f710385c0c2f8 edcc68bbf9cd4c9189e59964db884a26 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec  6 02:05:01 np0005548731 nova_compute[232433]: 2025-12-06 07:05:01.707 232437 DEBUG nova.virt.hardware [None req-796f3a12-8f70-4777-897c-858f8d53a969 e5f62143343c4499a86f710385c0c2f8 edcc68bbf9cd4c9189e59964db884a26 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec  6 02:05:01 np0005548731 nova_compute[232433]: 2025-12-06 07:05:01.710 232437 DEBUG oslo_concurrency.processutils [None req-796f3a12-8f70-4777-897c-858f8d53a969 e5f62143343c4499a86f710385c0c2f8 edcc68bbf9cd4c9189e59964db884a26 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:05:01 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:05:01 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:05:01 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:05:01.790 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:05:01 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:05:01 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:05:01 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:05:01.796 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:05:02 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  6 02:05:02 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2461479803' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  6 02:05:02 np0005548731 nova_compute[232433]: 2025-12-06 07:05:02.147 232437 DEBUG oslo_concurrency.processutils [None req-796f3a12-8f70-4777-897c-858f8d53a969 e5f62143343c4499a86f710385c0c2f8 edcc68bbf9cd4c9189e59964db884a26 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.437s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:05:02 np0005548731 nova_compute[232433]: 2025-12-06 07:05:02.170 232437 DEBUG nova.storage.rbd_utils [None req-796f3a12-8f70-4777-897c-858f8d53a969 e5f62143343c4499a86f710385c0c2f8 edcc68bbf9cd4c9189e59964db884a26 - - default default] rbd image 9b0dea9b-128d-43a8-aedd-6a023517b89f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:05:02 np0005548731 nova_compute[232433]: 2025-12-06 07:05:02.174 232437 DEBUG oslo_concurrency.processutils [None req-796f3a12-8f70-4777-897c-858f8d53a969 e5f62143343c4499a86f710385c0c2f8 edcc68bbf9cd4c9189e59964db884a26 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:05:02 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  6 02:05:02 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3823637738' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  6 02:05:02 np0005548731 nova_compute[232433]: 2025-12-06 07:05:02.608 232437 DEBUG oslo_concurrency.processutils [None req-796f3a12-8f70-4777-897c-858f8d53a969 e5f62143343c4499a86f710385c0c2f8 edcc68bbf9cd4c9189e59964db884a26 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.434s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:05:02 np0005548731 nova_compute[232433]: 2025-12-06 07:05:02.610 232437 DEBUG nova.virt.libvirt.vif [None req-796f3a12-8f70-4777-897c-858f8d53a969 e5f62143343c4499a86f710385c0c2f8 edcc68bbf9cd4c9189e59964db884a26 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T07:04:55Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachSCSIVolumeTestJSON-server-171472628',display_name='tempest-AttachSCSIVolumeTestJSON-server-171472628',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-attachscsivolumetestjson-server-171472628',id=31,image_ref='544e5cd1-78c2-46b6-bec1-176dc9a97c75',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIrVdsMwjNq9Fmk++jiOapSC71MhALV02yfiLTSDaFdCaMi5WTLHgeXWQ+IJhjXnQReNKmErMLIKuEbeAIKRK2LT9KgsLL26Bf1g2xLyRtjiFSK2z6roei561T41/p3POQ==',key_name='tempest-keypair-398633617',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='edcc68bbf9cd4c9189e59964db884a26',ramdisk_id='',reservation_id='r-db5ee3x2',resources=None,root_device_name='/dev/sda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='544e5cd1-78c2-46b6-bec1-176dc9a97c75',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='scsi',image_hw_disk_bus='scsi',image_hw_machine_type='q35',image_hw_scsi_model='virtio-scsi',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachSCSIVolumeTestJSON-105555555',owner_user_name='tempest-AttachSCSIVolumeTestJSON-105555555-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:04:57Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='e5f62143343c4499a86f710385c0c2f8',uuid=9b0dea9b-128d-43a8-aedd-6a023517b89f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c25d2974-9b24-4cff-862e-5f43e77d56be", "address": "fa:16:3e:36:88:b5", "network": {"id": "ebf7a358-f0c9-48b5-8485-23c08f737784", "bridge": "br-int", "label": "tempest-AttachSCSIVolumeTestJSON-863315842-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 
4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "edcc68bbf9cd4c9189e59964db884a26", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc25d2974-9b", "ovs_interfaceid": "c25d2974-9b24-4cff-862e-5f43e77d56be", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec  6 02:05:02 np0005548731 nova_compute[232433]: 2025-12-06 07:05:02.611 232437 DEBUG nova.network.os_vif_util [None req-796f3a12-8f70-4777-897c-858f8d53a969 e5f62143343c4499a86f710385c0c2f8 edcc68bbf9cd4c9189e59964db884a26 - - default default] Converting VIF {"id": "c25d2974-9b24-4cff-862e-5f43e77d56be", "address": "fa:16:3e:36:88:b5", "network": {"id": "ebf7a358-f0c9-48b5-8485-23c08f737784", "bridge": "br-int", "label": "tempest-AttachSCSIVolumeTestJSON-863315842-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "edcc68bbf9cd4c9189e59964db884a26", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc25d2974-9b", "ovs_interfaceid": "c25d2974-9b24-4cff-862e-5f43e77d56be", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 02:05:02 np0005548731 nova_compute[232433]: 2025-12-06 07:05:02.612 232437 DEBUG nova.network.os_vif_util [None req-796f3a12-8f70-4777-897c-858f8d53a969 e5f62143343c4499a86f710385c0c2f8 edcc68bbf9cd4c9189e59964db884a26 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:36:88:b5,bridge_name='br-int',has_traffic_filtering=True,id=c25d2974-9b24-4cff-862e-5f43e77d56be,network=Network(ebf7a358-f0c9-48b5-8485-23c08f737784),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc25d2974-9b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 02:05:02 np0005548731 nova_compute[232433]: 2025-12-06 07:05:02.613 232437 DEBUG nova.objects.instance [None req-796f3a12-8f70-4777-897c-858f8d53a969 e5f62143343c4499a86f710385c0c2f8 edcc68bbf9cd4c9189e59964db884a26 - - default default] Lazy-loading 'pci_devices' on Instance uuid 9b0dea9b-128d-43a8-aedd-6a023517b89f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:05:02 np0005548731 nova_compute[232433]: 2025-12-06 07:05:02.631 232437 DEBUG nova.virt.libvirt.driver [None req-796f3a12-8f70-4777-897c-858f8d53a969 e5f62143343c4499a86f710385c0c2f8 edcc68bbf9cd4c9189e59964db884a26 - - default default] [instance: 9b0dea9b-128d-43a8-aedd-6a023517b89f] End _get_guest_xml xml=<domain type="kvm">
Dec  6 02:05:02 np0005548731 nova_compute[232433]:  <uuid>9b0dea9b-128d-43a8-aedd-6a023517b89f</uuid>
Dec  6 02:05:02 np0005548731 nova_compute[232433]:  <name>instance-0000001f</name>
Dec  6 02:05:02 np0005548731 nova_compute[232433]:  <memory>131072</memory>
Dec  6 02:05:02 np0005548731 nova_compute[232433]:  <vcpu>1</vcpu>
Dec  6 02:05:02 np0005548731 nova_compute[232433]:  <metadata>
Dec  6 02:05:02 np0005548731 nova_compute[232433]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec  6 02:05:02 np0005548731 nova_compute[232433]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec  6 02:05:02 np0005548731 nova_compute[232433]:      <nova:name>tempest-AttachSCSIVolumeTestJSON-server-171472628</nova:name>
Dec  6 02:05:02 np0005548731 nova_compute[232433]:      <nova:creationTime>2025-12-06 07:05:01</nova:creationTime>
Dec  6 02:05:02 np0005548731 nova_compute[232433]:      <nova:flavor name="m1.nano">
Dec  6 02:05:02 np0005548731 nova_compute[232433]:        <nova:memory>128</nova:memory>
Dec  6 02:05:02 np0005548731 nova_compute[232433]:        <nova:disk>1</nova:disk>
Dec  6 02:05:02 np0005548731 nova_compute[232433]:        <nova:swap>0</nova:swap>
Dec  6 02:05:02 np0005548731 nova_compute[232433]:        <nova:ephemeral>0</nova:ephemeral>
Dec  6 02:05:02 np0005548731 nova_compute[232433]:        <nova:vcpus>1</nova:vcpus>
Dec  6 02:05:02 np0005548731 nova_compute[232433]:      </nova:flavor>
Dec  6 02:05:02 np0005548731 nova_compute[232433]:      <nova:owner>
Dec  6 02:05:02 np0005548731 nova_compute[232433]:        <nova:user uuid="e5f62143343c4499a86f710385c0c2f8">tempest-AttachSCSIVolumeTestJSON-105555555-project-member</nova:user>
Dec  6 02:05:02 np0005548731 nova_compute[232433]:        <nova:project uuid="edcc68bbf9cd4c9189e59964db884a26">tempest-AttachSCSIVolumeTestJSON-105555555</nova:project>
Dec  6 02:05:02 np0005548731 nova_compute[232433]:      </nova:owner>
Dec  6 02:05:02 np0005548731 nova_compute[232433]:      <nova:root type="image" uuid="544e5cd1-78c2-46b6-bec1-176dc9a97c75"/>
Dec  6 02:05:02 np0005548731 nova_compute[232433]:      <nova:ports>
Dec  6 02:05:02 np0005548731 nova_compute[232433]:        <nova:port uuid="c25d2974-9b24-4cff-862e-5f43e77d56be">
Dec  6 02:05:02 np0005548731 nova_compute[232433]:          <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Dec  6 02:05:02 np0005548731 nova_compute[232433]:        </nova:port>
Dec  6 02:05:02 np0005548731 nova_compute[232433]:      </nova:ports>
Dec  6 02:05:02 np0005548731 nova_compute[232433]:    </nova:instance>
Dec  6 02:05:02 np0005548731 nova_compute[232433]:  </metadata>
Dec  6 02:05:02 np0005548731 nova_compute[232433]:  <sysinfo type="smbios">
Dec  6 02:05:02 np0005548731 nova_compute[232433]:    <system>
Dec  6 02:05:02 np0005548731 nova_compute[232433]:      <entry name="manufacturer">RDO</entry>
Dec  6 02:05:02 np0005548731 nova_compute[232433]:      <entry name="product">OpenStack Compute</entry>
Dec  6 02:05:02 np0005548731 nova_compute[232433]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec  6 02:05:02 np0005548731 nova_compute[232433]:      <entry name="serial">9b0dea9b-128d-43a8-aedd-6a023517b89f</entry>
Dec  6 02:05:02 np0005548731 nova_compute[232433]:      <entry name="uuid">9b0dea9b-128d-43a8-aedd-6a023517b89f</entry>
Dec  6 02:05:02 np0005548731 nova_compute[232433]:      <entry name="family">Virtual Machine</entry>
Dec  6 02:05:02 np0005548731 nova_compute[232433]:    </system>
Dec  6 02:05:02 np0005548731 nova_compute[232433]:  </sysinfo>
Dec  6 02:05:02 np0005548731 nova_compute[232433]:  <os>
Dec  6 02:05:02 np0005548731 nova_compute[232433]:    <type arch="x86_64" machine="q35">hvm</type>
Dec  6 02:05:02 np0005548731 nova_compute[232433]:    <boot dev="hd"/>
Dec  6 02:05:02 np0005548731 nova_compute[232433]:    <smbios mode="sysinfo"/>
Dec  6 02:05:02 np0005548731 nova_compute[232433]:  </os>
Dec  6 02:05:02 np0005548731 nova_compute[232433]:  <features>
Dec  6 02:05:02 np0005548731 nova_compute[232433]:    <acpi/>
Dec  6 02:05:02 np0005548731 nova_compute[232433]:    <apic/>
Dec  6 02:05:02 np0005548731 nova_compute[232433]:    <vmcoreinfo/>
Dec  6 02:05:02 np0005548731 nova_compute[232433]:  </features>
Dec  6 02:05:02 np0005548731 nova_compute[232433]:  <clock offset="utc">
Dec  6 02:05:02 np0005548731 nova_compute[232433]:    <timer name="pit" tickpolicy="delay"/>
Dec  6 02:05:02 np0005548731 nova_compute[232433]:    <timer name="rtc" tickpolicy="catchup"/>
Dec  6 02:05:02 np0005548731 nova_compute[232433]:    <timer name="hpet" present="no"/>
Dec  6 02:05:02 np0005548731 nova_compute[232433]:  </clock>
Dec  6 02:05:02 np0005548731 nova_compute[232433]:  <cpu mode="custom" match="exact">
Dec  6 02:05:02 np0005548731 nova_compute[232433]:    <model>Nehalem</model>
Dec  6 02:05:02 np0005548731 nova_compute[232433]:    <topology sockets="1" cores="1" threads="1"/>
Dec  6 02:05:02 np0005548731 nova_compute[232433]:  </cpu>
Dec  6 02:05:02 np0005548731 nova_compute[232433]:  <devices>
Dec  6 02:05:02 np0005548731 nova_compute[232433]:    <disk type="network" device="disk">
Dec  6 02:05:02 np0005548731 nova_compute[232433]:      <driver type="raw" cache="none"/>
Dec  6 02:05:02 np0005548731 nova_compute[232433]:      <source protocol="rbd" name="vms/9b0dea9b-128d-43a8-aedd-6a023517b89f_disk">
Dec  6 02:05:02 np0005548731 nova_compute[232433]:        <host name="192.168.122.100" port="6789"/>
Dec  6 02:05:02 np0005548731 nova_compute[232433]:        <host name="192.168.122.102" port="6789"/>
Dec  6 02:05:02 np0005548731 nova_compute[232433]:        <host name="192.168.122.101" port="6789"/>
Dec  6 02:05:02 np0005548731 nova_compute[232433]:      </source>
Dec  6 02:05:02 np0005548731 nova_compute[232433]:      <auth username="openstack">
Dec  6 02:05:02 np0005548731 nova_compute[232433]:        <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec  6 02:05:02 np0005548731 nova_compute[232433]:      </auth>
Dec  6 02:05:02 np0005548731 nova_compute[232433]:      <target dev="sda" bus="scsi"/>
Dec  6 02:05:02 np0005548731 nova_compute[232433]:      <address type="drive" controller="0" unit="0"/>
Dec  6 02:05:02 np0005548731 nova_compute[232433]:    </disk>
Dec  6 02:05:02 np0005548731 nova_compute[232433]:    <disk type="network" device="cdrom">
Dec  6 02:05:02 np0005548731 nova_compute[232433]:      <driver type="raw" cache="none"/>
Dec  6 02:05:02 np0005548731 nova_compute[232433]:      <source protocol="rbd" name="vms/9b0dea9b-128d-43a8-aedd-6a023517b89f_disk.config">
Dec  6 02:05:02 np0005548731 nova_compute[232433]:        <host name="192.168.122.100" port="6789"/>
Dec  6 02:05:02 np0005548731 nova_compute[232433]:        <host name="192.168.122.102" port="6789"/>
Dec  6 02:05:02 np0005548731 nova_compute[232433]:        <host name="192.168.122.101" port="6789"/>
Dec  6 02:05:02 np0005548731 nova_compute[232433]:      </source>
Dec  6 02:05:02 np0005548731 nova_compute[232433]:      <auth username="openstack">
Dec  6 02:05:02 np0005548731 nova_compute[232433]:        <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec  6 02:05:02 np0005548731 nova_compute[232433]:      </auth>
Dec  6 02:05:02 np0005548731 nova_compute[232433]:      <target dev="sdb" bus="scsi"/>
Dec  6 02:05:02 np0005548731 nova_compute[232433]:      <address type="drive" controller="0" unit="1"/>
Dec  6 02:05:02 np0005548731 nova_compute[232433]:    </disk>
Dec  6 02:05:02 np0005548731 nova_compute[232433]:    <controller type="scsi" index="0" model="virtio-scsi"/>
Dec  6 02:05:02 np0005548731 nova_compute[232433]:    <interface type="ethernet">
Dec  6 02:05:02 np0005548731 nova_compute[232433]:      <mac address="fa:16:3e:36:88:b5"/>
Dec  6 02:05:02 np0005548731 nova_compute[232433]:      <model type="virtio"/>
Dec  6 02:05:02 np0005548731 nova_compute[232433]:      <driver name="vhost" rx_queue_size="512"/>
Dec  6 02:05:02 np0005548731 nova_compute[232433]:      <mtu size="1442"/>
Dec  6 02:05:02 np0005548731 nova_compute[232433]:      <target dev="tapc25d2974-9b"/>
Dec  6 02:05:02 np0005548731 nova_compute[232433]:    </interface>
Dec  6 02:05:02 np0005548731 nova_compute[232433]:    <serial type="pty">
Dec  6 02:05:02 np0005548731 nova_compute[232433]:      <log file="/var/lib/nova/instances/9b0dea9b-128d-43a8-aedd-6a023517b89f/console.log" append="off"/>
Dec  6 02:05:02 np0005548731 nova_compute[232433]:    </serial>
Dec  6 02:05:02 np0005548731 nova_compute[232433]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec  6 02:05:02 np0005548731 nova_compute[232433]:    <video>
Dec  6 02:05:02 np0005548731 nova_compute[232433]:      <model type="virtio"/>
Dec  6 02:05:02 np0005548731 nova_compute[232433]:    </video>
Dec  6 02:05:02 np0005548731 nova_compute[232433]:    <input type="tablet" bus="usb"/>
Dec  6 02:05:02 np0005548731 nova_compute[232433]:    <rng model="virtio">
Dec  6 02:05:02 np0005548731 nova_compute[232433]:      <backend model="random">/dev/urandom</backend>
Dec  6 02:05:02 np0005548731 nova_compute[232433]:    </rng>
Dec  6 02:05:02 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root"/>
Dec  6 02:05:02 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:05:02 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:05:02 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:05:02 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:05:02 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:05:02 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:05:02 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:05:02 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:05:02 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:05:02 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:05:02 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:05:02 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:05:02 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:05:02 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:05:02 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:05:02 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:05:02 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:05:02 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:05:02 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:05:02 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:05:02 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:05:02 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:05:02 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:05:02 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:05:02 np0005548731 nova_compute[232433]:    <controller type="usb" index="0"/>
Dec  6 02:05:02 np0005548731 nova_compute[232433]:    <memballoon model="virtio">
Dec  6 02:05:02 np0005548731 nova_compute[232433]:      <stats period="10"/>
Dec  6 02:05:02 np0005548731 nova_compute[232433]:    </memballoon>
Dec  6 02:05:02 np0005548731 nova_compute[232433]:  </devices>
Dec  6 02:05:02 np0005548731 nova_compute[232433]: </domain>
Dec  6 02:05:02 np0005548731 nova_compute[232433]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec  6 02:05:02 np0005548731 nova_compute[232433]: 2025-12-06 07:05:02.632 232437 DEBUG nova.compute.manager [None req-796f3a12-8f70-4777-897c-858f8d53a969 e5f62143343c4499a86f710385c0c2f8 edcc68bbf9cd4c9189e59964db884a26 - - default default] [instance: 9b0dea9b-128d-43a8-aedd-6a023517b89f] Preparing to wait for external event network-vif-plugged-c25d2974-9b24-4cff-862e-5f43e77d56be prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec  6 02:05:02 np0005548731 nova_compute[232433]: 2025-12-06 07:05:02.633 232437 DEBUG oslo_concurrency.lockutils [None req-796f3a12-8f70-4777-897c-858f8d53a969 e5f62143343c4499a86f710385c0c2f8 edcc68bbf9cd4c9189e59964db884a26 - - default default] Acquiring lock "9b0dea9b-128d-43a8-aedd-6a023517b89f-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:05:02 np0005548731 nova_compute[232433]: 2025-12-06 07:05:02.633 232437 DEBUG oslo_concurrency.lockutils [None req-796f3a12-8f70-4777-897c-858f8d53a969 e5f62143343c4499a86f710385c0c2f8 edcc68bbf9cd4c9189e59964db884a26 - - default default] Lock "9b0dea9b-128d-43a8-aedd-6a023517b89f-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:05:02 np0005548731 nova_compute[232433]: 2025-12-06 07:05:02.634 232437 DEBUG oslo_concurrency.lockutils [None req-796f3a12-8f70-4777-897c-858f8d53a969 e5f62143343c4499a86f710385c0c2f8 edcc68bbf9cd4c9189e59964db884a26 - - default default] Lock "9b0dea9b-128d-43a8-aedd-6a023517b89f-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:05:02 np0005548731 nova_compute[232433]: 2025-12-06 07:05:02.634 232437 DEBUG nova.virt.libvirt.vif [None req-796f3a12-8f70-4777-897c-858f8d53a969 e5f62143343c4499a86f710385c0c2f8 edcc68bbf9cd4c9189e59964db884a26 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T07:04:55Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachSCSIVolumeTestJSON-server-171472628',display_name='tempest-AttachSCSIVolumeTestJSON-server-171472628',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-attachscsivolumetestjson-server-171472628',id=31,image_ref='544e5cd1-78c2-46b6-bec1-176dc9a97c75',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIrVdsMwjNq9Fmk++jiOapSC71MhALV02yfiLTSDaFdCaMi5WTLHgeXWQ+IJhjXnQReNKmErMLIKuEbeAIKRK2LT9KgsLL26Bf1g2xLyRtjiFSK2z6roei561T41/p3POQ==',key_name='tempest-keypair-398633617',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='edcc68bbf9cd4c9189e59964db884a26',ramdisk_id='',reservation_id='r-db5ee3x2',resources=None,root_device_name='/dev/sda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='544e5cd1-78c2-46b6-bec1-176dc9a97c75',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='scsi',image_hw_disk_bus='scsi',image_hw_machine_type='q35',image_hw_scsi_model='virtio-scsi',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachSCSIVolumeTestJSON-105555555',owner_user_name='tempest-AttachSCSIVolumeTestJSON-105555555-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:04:57Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='e5f62143343c4499a86f710385c0c2f8',uuid=9b0dea9b-128d-43a8-aedd-6a023517b89f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c25d2974-9b24-4cff-862e-5f43e77d56be", "address": "fa:16:3e:36:88:b5", "network": {"id": "ebf7a358-f0c9-48b5-8485-23c08f737784", "bridge": "br-int", "label": "tempest-AttachSCSIVolumeTestJSON-863315842-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "edcc68bbf9cd4c9189e59964db884a26", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc25d2974-9b", "ovs_interfaceid": "c25d2974-9b24-4cff-862e-5f43e77d56be", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec  6 02:05:02 np0005548731 nova_compute[232433]: 2025-12-06 07:05:02.634 232437 DEBUG nova.network.os_vif_util [None req-796f3a12-8f70-4777-897c-858f8d53a969 e5f62143343c4499a86f710385c0c2f8 edcc68bbf9cd4c9189e59964db884a26 - - default default] Converting VIF {"id": "c25d2974-9b24-4cff-862e-5f43e77d56be", "address": "fa:16:3e:36:88:b5", "network": {"id": "ebf7a358-f0c9-48b5-8485-23c08f737784", "bridge": "br-int", "label": "tempest-AttachSCSIVolumeTestJSON-863315842-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "edcc68bbf9cd4c9189e59964db884a26", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc25d2974-9b", "ovs_interfaceid": "c25d2974-9b24-4cff-862e-5f43e77d56be", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 02:05:02 np0005548731 nova_compute[232433]: 2025-12-06 07:05:02.635 232437 DEBUG nova.network.os_vif_util [None req-796f3a12-8f70-4777-897c-858f8d53a969 e5f62143343c4499a86f710385c0c2f8 edcc68bbf9cd4c9189e59964db884a26 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:36:88:b5,bridge_name='br-int',has_traffic_filtering=True,id=c25d2974-9b24-4cff-862e-5f43e77d56be,network=Network(ebf7a358-f0c9-48b5-8485-23c08f737784),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc25d2974-9b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 02:05:02 np0005548731 nova_compute[232433]: 2025-12-06 07:05:02.635 232437 DEBUG os_vif [None req-796f3a12-8f70-4777-897c-858f8d53a969 e5f62143343c4499a86f710385c0c2f8 edcc68bbf9cd4c9189e59964db884a26 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:36:88:b5,bridge_name='br-int',has_traffic_filtering=True,id=c25d2974-9b24-4cff-862e-5f43e77d56be,network=Network(ebf7a358-f0c9-48b5-8485-23c08f737784),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc25d2974-9b') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec  6 02:05:02 np0005548731 nova_compute[232433]: 2025-12-06 07:05:02.636 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:05:02 np0005548731 nova_compute[232433]: 2025-12-06 07:05:02.636 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:05:02 np0005548731 nova_compute[232433]: 2025-12-06 07:05:02.637 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  6 02:05:02 np0005548731 nova_compute[232433]: 2025-12-06 07:05:02.641 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:05:02 np0005548731 nova_compute[232433]: 2025-12-06 07:05:02.641 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc25d2974-9b, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:05:02 np0005548731 nova_compute[232433]: 2025-12-06 07:05:02.642 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapc25d2974-9b, col_values=(('external_ids', {'iface-id': 'c25d2974-9b24-4cff-862e-5f43e77d56be', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:36:88:b5', 'vm-uuid': '9b0dea9b-128d-43a8-aedd-6a023517b89f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:05:02 np0005548731 nova_compute[232433]: 2025-12-06 07:05:02.643 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:05:02 np0005548731 NetworkManager[49182]: <info>  [1765004702.6443] manager: (tapc25d2974-9b): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/61)
Dec  6 02:05:02 np0005548731 nova_compute[232433]: 2025-12-06 07:05:02.645 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  6 02:05:02 np0005548731 nova_compute[232433]: 2025-12-06 07:05:02.652 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:05:02 np0005548731 nova_compute[232433]: 2025-12-06 07:05:02.653 232437 INFO os_vif [None req-796f3a12-8f70-4777-897c-858f8d53a969 e5f62143343c4499a86f710385c0c2f8 edcc68bbf9cd4c9189e59964db884a26 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:36:88:b5,bridge_name='br-int',has_traffic_filtering=True,id=c25d2974-9b24-4cff-862e-5f43e77d56be,network=Network(ebf7a358-f0c9-48b5-8485-23c08f737784),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc25d2974-9b')#033[00m
Dec  6 02:05:02 np0005548731 nova_compute[232433]: 2025-12-06 07:05:02.713 232437 DEBUG nova.virt.libvirt.driver [None req-796f3a12-8f70-4777-897c-858f8d53a969 e5f62143343c4499a86f710385c0c2f8 edcc68bbf9cd4c9189e59964db884a26 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  6 02:05:02 np0005548731 nova_compute[232433]: 2025-12-06 07:05:02.714 232437 DEBUG nova.virt.libvirt.driver [None req-796f3a12-8f70-4777-897c-858f8d53a969 e5f62143343c4499a86f710385c0c2f8 edcc68bbf9cd4c9189e59964db884a26 - - default default] No BDM found with device name sdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  6 02:05:02 np0005548731 nova_compute[232433]: 2025-12-06 07:05:02.716 232437 DEBUG nova.virt.libvirt.driver [None req-796f3a12-8f70-4777-897c-858f8d53a969 e5f62143343c4499a86f710385c0c2f8 edcc68bbf9cd4c9189e59964db884a26 - - default default] No VIF found with MAC fa:16:3e:36:88:b5, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec  6 02:05:02 np0005548731 nova_compute[232433]: 2025-12-06 07:05:02.717 232437 INFO nova.virt.libvirt.driver [None req-796f3a12-8f70-4777-897c-858f8d53a969 e5f62143343c4499a86f710385c0c2f8 edcc68bbf9cd4c9189e59964db884a26 - - default default] [instance: 9b0dea9b-128d-43a8-aedd-6a023517b89f] Using config drive#033[00m
Dec  6 02:05:02 np0005548731 nova_compute[232433]: 2025-12-06 07:05:02.745 232437 DEBUG nova.storage.rbd_utils [None req-796f3a12-8f70-4777-897c-858f8d53a969 e5f62143343c4499a86f710385c0c2f8 edcc68bbf9cd4c9189e59964db884a26 - - default default] rbd image 9b0dea9b-128d-43a8-aedd-6a023517b89f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:05:02 np0005548731 nova_compute[232433]: 2025-12-06 07:05:02.992 232437 DEBUG nova.network.neutron [None req-7217ce72-b1f2-4333-8cf6-95e5e446949b daab2cfaa69a4e9f819a57290bfd54d9 646c511edd3f4a5c93117e8dcfea183b - - default default] Activated binding for port 9e200480-deff-4f39-9945-230d157ac906 and host compute-1.ctlplane.example.com migrate_instance_start /usr/lib/python3.9/site-packages/nova/network/neutron.py:3181#033[00m
Dec  6 02:05:02 np0005548731 nova_compute[232433]: 2025-12-06 07:05:02.993 232437 DEBUG nova.compute.manager [None req-7217ce72-b1f2-4333-8cf6-95e5e446949b daab2cfaa69a4e9f819a57290bfd54d9 646c511edd3f4a5c93117e8dcfea183b - - default default] [instance: 1712064c-6ba8-4660-972f-1e827a40781a] Calling driver.post_live_migration_at_source with original source VIFs from migrate_data: [{"id": "9e200480-deff-4f39-9945-230d157ac906", "address": "fa:16:3e:e9:3c:15", "network": {"id": "9238b9b5-08f5-4634-bd05-370e3192b201", "bridge": "br-int", "label": "tempest-LiveMigrationTest-1420048747-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9e86c61372e24db392d4a12ca71f7e00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9e200480-de", "ovs_interfaceid": "9e200480-deff-4f39-9945-230d157ac906", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] _post_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:9326#033[00m
Dec  6 02:05:02 np0005548731 nova_compute[232433]: 2025-12-06 07:05:02.994 232437 DEBUG nova.virt.libvirt.vif [None req-7217ce72-b1f2-4333-8cf6-95e5e446949b daab2cfaa69a4e9f819a57290bfd54d9 646c511edd3f4a5c93117e8dcfea183b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T07:04:37Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-LiveMigrationTest-server-1567617044',display_name='tempest-LiveMigrationTest-server-1567617044',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-livemigrationtest-server-1567617044',id=27,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-06T07:04:50Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='9e86c61372e24db392d4a12ca71f7e00',ramdisk_id='',reservation_id='r-i5n1flwt',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-LiveMigrationTest-854827502',owner_user_name='tempest-LiveMigrationTest-854827502-project-member'},tags=<?>,task_state='migrating',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T07:04:53Z,user_data=None,user_id='756e3e1fa7e44042bdf37a6cdd877fac',uuid=1712064c-6ba8-4660-972f-1e827a40781a,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "9e200480-deff-4f39-9945-230d157ac906", "address": "fa:16:3e:e9:3c:15", "network": {"id": "9238b9b5-08f5-4634-bd05-370e3192b201", "bridge": "br-int", "label": "tempest-LiveMigrationTest-1420048747-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9e86c61372e24db392d4a12ca71f7e00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9e200480-de", "ovs_interfaceid": "9e200480-deff-4f39-9945-230d157ac906", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec  6 02:05:02 np0005548731 nova_compute[232433]: 2025-12-06 07:05:02.994 232437 DEBUG nova.network.os_vif_util [None req-7217ce72-b1f2-4333-8cf6-95e5e446949b daab2cfaa69a4e9f819a57290bfd54d9 646c511edd3f4a5c93117e8dcfea183b - - default default] Converting VIF {"id": "9e200480-deff-4f39-9945-230d157ac906", "address": "fa:16:3e:e9:3c:15", "network": {"id": "9238b9b5-08f5-4634-bd05-370e3192b201", "bridge": "br-int", "label": "tempest-LiveMigrationTest-1420048747-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9e86c61372e24db392d4a12ca71f7e00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9e200480-de", "ovs_interfaceid": "9e200480-deff-4f39-9945-230d157ac906", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 02:05:02 np0005548731 nova_compute[232433]: 2025-12-06 07:05:02.995 232437 DEBUG nova.network.os_vif_util [None req-7217ce72-b1f2-4333-8cf6-95e5e446949b daab2cfaa69a4e9f819a57290bfd54d9 646c511edd3f4a5c93117e8dcfea183b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e9:3c:15,bridge_name='br-int',has_traffic_filtering=True,id=9e200480-deff-4f39-9945-230d157ac906,network=Network(9238b9b5-08f5-4634-bd05-370e3192b201),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9e200480-de') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 02:05:02 np0005548731 nova_compute[232433]: 2025-12-06 07:05:02.995 232437 DEBUG os_vif [None req-7217ce72-b1f2-4333-8cf6-95e5e446949b daab2cfaa69a4e9f819a57290bfd54d9 646c511edd3f4a5c93117e8dcfea183b - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e9:3c:15,bridge_name='br-int',has_traffic_filtering=True,id=9e200480-deff-4f39-9945-230d157ac906,network=Network(9238b9b5-08f5-4634-bd05-370e3192b201),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9e200480-de') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec  6 02:05:02 np0005548731 nova_compute[232433]: 2025-12-06 07:05:02.996 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:05:02 np0005548731 nova_compute[232433]: 2025-12-06 07:05:02.997 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9e200480-de, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:05:02 np0005548731 nova_compute[232433]: 2025-12-06 07:05:02.998 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:05:03 np0005548731 nova_compute[232433]: 2025-12-06 07:05:03.000 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  6 02:05:03 np0005548731 nova_compute[232433]: 2025-12-06 07:05:03.002 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:05:03 np0005548731 nova_compute[232433]: 2025-12-06 07:05:03.004 232437 INFO os_vif [None req-7217ce72-b1f2-4333-8cf6-95e5e446949b daab2cfaa69a4e9f819a57290bfd54d9 646c511edd3f4a5c93117e8dcfea183b - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e9:3c:15,bridge_name='br-int',has_traffic_filtering=True,id=9e200480-deff-4f39-9945-230d157ac906,network=Network(9238b9b5-08f5-4634-bd05-370e3192b201),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9e200480-de')#033[00m
Dec  6 02:05:03 np0005548731 nova_compute[232433]: 2025-12-06 07:05:03.005 232437 DEBUG oslo_concurrency.lockutils [None req-7217ce72-b1f2-4333-8cf6-95e5e446949b daab2cfaa69a4e9f819a57290bfd54d9 646c511edd3f4a5c93117e8dcfea183b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.free_pci_device_allocations_for_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:05:03 np0005548731 nova_compute[232433]: 2025-12-06 07:05:03.005 232437 DEBUG oslo_concurrency.lockutils [None req-7217ce72-b1f2-4333-8cf6-95e5e446949b daab2cfaa69a4e9f819a57290bfd54d9 646c511edd3f4a5c93117e8dcfea183b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.free_pci_device_allocations_for_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:05:03 np0005548731 nova_compute[232433]: 2025-12-06 07:05:03.005 232437 DEBUG oslo_concurrency.lockutils [None req-7217ce72-b1f2-4333-8cf6-95e5e446949b daab2cfaa69a4e9f819a57290bfd54d9 646c511edd3f4a5c93117e8dcfea183b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.free_pci_device_allocations_for_instance" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:05:03 np0005548731 nova_compute[232433]: 2025-12-06 07:05:03.006 232437 DEBUG nova.compute.manager [None req-7217ce72-b1f2-4333-8cf6-95e5e446949b daab2cfaa69a4e9f819a57290bfd54d9 646c511edd3f4a5c93117e8dcfea183b - - default default] [instance: 1712064c-6ba8-4660-972f-1e827a40781a] Calling driver.cleanup from _post_live_migration _post_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:9349#033[00m
Dec  6 02:05:03 np0005548731 nova_compute[232433]: 2025-12-06 07:05:03.006 232437 INFO nova.virt.libvirt.driver [None req-7217ce72-b1f2-4333-8cf6-95e5e446949b daab2cfaa69a4e9f819a57290bfd54d9 646c511edd3f4a5c93117e8dcfea183b - - default default] [instance: 1712064c-6ba8-4660-972f-1e827a40781a] Deleting instance files /var/lib/nova/instances/1712064c-6ba8-4660-972f-1e827a40781a_del#033[00m
Dec  6 02:05:03 np0005548731 nova_compute[232433]: 2025-12-06 07:05:03.006 232437 INFO nova.virt.libvirt.driver [None req-7217ce72-b1f2-4333-8cf6-95e5e446949b daab2cfaa69a4e9f819a57290bfd54d9 646c511edd3f4a5c93117e8dcfea183b - - default default] [instance: 1712064c-6ba8-4660-972f-1e827a40781a] Deletion of /var/lib/nova/instances/1712064c-6ba8-4660-972f-1e827a40781a_del complete#033[00m
Dec  6 02:05:03 np0005548731 nova_compute[232433]: 2025-12-06 07:05:03.179 232437 DEBUG nova.compute.manager [req-6914740a-847b-4cf2-b63b-ecfcca29f882 req-931c28d4-71cb-48ce-9947-59b913f97d31 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 1712064c-6ba8-4660-972f-1e827a40781a] Received event network-vif-unplugged-9e200480-deff-4f39-9945-230d157ac906 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:05:03 np0005548731 nova_compute[232433]: 2025-12-06 07:05:03.180 232437 DEBUG oslo_concurrency.lockutils [req-6914740a-847b-4cf2-b63b-ecfcca29f882 req-931c28d4-71cb-48ce-9947-59b913f97d31 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "1712064c-6ba8-4660-972f-1e827a40781a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:05:03 np0005548731 nova_compute[232433]: 2025-12-06 07:05:03.180 232437 DEBUG oslo_concurrency.lockutils [req-6914740a-847b-4cf2-b63b-ecfcca29f882 req-931c28d4-71cb-48ce-9947-59b913f97d31 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "1712064c-6ba8-4660-972f-1e827a40781a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:05:03 np0005548731 nova_compute[232433]: 2025-12-06 07:05:03.180 232437 DEBUG oslo_concurrency.lockutils [req-6914740a-847b-4cf2-b63b-ecfcca29f882 req-931c28d4-71cb-48ce-9947-59b913f97d31 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "1712064c-6ba8-4660-972f-1e827a40781a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:05:03 np0005548731 nova_compute[232433]: 2025-12-06 07:05:03.181 232437 DEBUG nova.compute.manager [req-6914740a-847b-4cf2-b63b-ecfcca29f882 req-931c28d4-71cb-48ce-9947-59b913f97d31 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 1712064c-6ba8-4660-972f-1e827a40781a] No waiting events found dispatching network-vif-unplugged-9e200480-deff-4f39-9945-230d157ac906 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 02:05:03 np0005548731 nova_compute[232433]: 2025-12-06 07:05:03.181 232437 DEBUG nova.compute.manager [req-6914740a-847b-4cf2-b63b-ecfcca29f882 req-931c28d4-71cb-48ce-9947-59b913f97d31 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 1712064c-6ba8-4660-972f-1e827a40781a] Received event network-vif-unplugged-9e200480-deff-4f39-9945-230d157ac906 for instance with task_state migrating. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Dec  6 02:05:03 np0005548731 nova_compute[232433]: 2025-12-06 07:05:03.299 232437 DEBUG nova.compute.manager [req-d4e17c96-8549-45b2-ad21-d4f726bb1135 req-b8b46a68-b3b5-4a35-af0b-efe695b4156f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 1712064c-6ba8-4660-972f-1e827a40781a] Received event network-vif-plugged-9e200480-deff-4f39-9945-230d157ac906 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:05:03 np0005548731 nova_compute[232433]: 2025-12-06 07:05:03.299 232437 DEBUG oslo_concurrency.lockutils [req-d4e17c96-8549-45b2-ad21-d4f726bb1135 req-b8b46a68-b3b5-4a35-af0b-efe695b4156f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "1712064c-6ba8-4660-972f-1e827a40781a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:05:03 np0005548731 nova_compute[232433]: 2025-12-06 07:05:03.300 232437 DEBUG oslo_concurrency.lockutils [req-d4e17c96-8549-45b2-ad21-d4f726bb1135 req-b8b46a68-b3b5-4a35-af0b-efe695b4156f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "1712064c-6ba8-4660-972f-1e827a40781a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:05:03 np0005548731 nova_compute[232433]: 2025-12-06 07:05:03.300 232437 DEBUG oslo_concurrency.lockutils [req-d4e17c96-8549-45b2-ad21-d4f726bb1135 req-b8b46a68-b3b5-4a35-af0b-efe695b4156f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "1712064c-6ba8-4660-972f-1e827a40781a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:05:03 np0005548731 nova_compute[232433]: 2025-12-06 07:05:03.300 232437 DEBUG nova.compute.manager [req-d4e17c96-8549-45b2-ad21-d4f726bb1135 req-b8b46a68-b3b5-4a35-af0b-efe695b4156f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 1712064c-6ba8-4660-972f-1e827a40781a] No waiting events found dispatching network-vif-plugged-9e200480-deff-4f39-9945-230d157ac906 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 02:05:03 np0005548731 nova_compute[232433]: 2025-12-06 07:05:03.301 232437 WARNING nova.compute.manager [req-d4e17c96-8549-45b2-ad21-d4f726bb1135 req-b8b46a68-b3b5-4a35-af0b-efe695b4156f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 1712064c-6ba8-4660-972f-1e827a40781a] Received unexpected event network-vif-plugged-9e200480-deff-4f39-9945-230d157ac906 for instance with vm_state active and task_state migrating.#033[00m
Dec  6 02:05:03 np0005548731 nova_compute[232433]: 2025-12-06 07:05:03.301 232437 DEBUG nova.compute.manager [req-d4e17c96-8549-45b2-ad21-d4f726bb1135 req-b8b46a68-b3b5-4a35-af0b-efe695b4156f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 1712064c-6ba8-4660-972f-1e827a40781a] Received event network-vif-plugged-9e200480-deff-4f39-9945-230d157ac906 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:05:03 np0005548731 nova_compute[232433]: 2025-12-06 07:05:03.301 232437 DEBUG oslo_concurrency.lockutils [req-d4e17c96-8549-45b2-ad21-d4f726bb1135 req-b8b46a68-b3b5-4a35-af0b-efe695b4156f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "1712064c-6ba8-4660-972f-1e827a40781a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:05:03 np0005548731 nova_compute[232433]: 2025-12-06 07:05:03.302 232437 DEBUG oslo_concurrency.lockutils [req-d4e17c96-8549-45b2-ad21-d4f726bb1135 req-b8b46a68-b3b5-4a35-af0b-efe695b4156f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "1712064c-6ba8-4660-972f-1e827a40781a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:05:03 np0005548731 nova_compute[232433]: 2025-12-06 07:05:03.302 232437 DEBUG oslo_concurrency.lockutils [req-d4e17c96-8549-45b2-ad21-d4f726bb1135 req-b8b46a68-b3b5-4a35-af0b-efe695b4156f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "1712064c-6ba8-4660-972f-1e827a40781a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:05:03 np0005548731 nova_compute[232433]: 2025-12-06 07:05:03.302 232437 DEBUG nova.compute.manager [req-d4e17c96-8549-45b2-ad21-d4f726bb1135 req-b8b46a68-b3b5-4a35-af0b-efe695b4156f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 1712064c-6ba8-4660-972f-1e827a40781a] No waiting events found dispatching network-vif-plugged-9e200480-deff-4f39-9945-230d157ac906 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 02:05:03 np0005548731 nova_compute[232433]: 2025-12-06 07:05:03.302 232437 WARNING nova.compute.manager [req-d4e17c96-8549-45b2-ad21-d4f726bb1135 req-b8b46a68-b3b5-4a35-af0b-efe695b4156f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 1712064c-6ba8-4660-972f-1e827a40781a] Received unexpected event network-vif-plugged-9e200480-deff-4f39-9945-230d157ac906 for instance with vm_state active and task_state migrating.#033[00m
Dec  6 02:05:03 np0005548731 nova_compute[232433]: 2025-12-06 07:05:03.303 232437 DEBUG nova.compute.manager [req-d4e17c96-8549-45b2-ad21-d4f726bb1135 req-b8b46a68-b3b5-4a35-af0b-efe695b4156f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 1712064c-6ba8-4660-972f-1e827a40781a] Received event network-vif-plugged-9e200480-deff-4f39-9945-230d157ac906 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:05:03 np0005548731 nova_compute[232433]: 2025-12-06 07:05:03.303 232437 DEBUG oslo_concurrency.lockutils [req-d4e17c96-8549-45b2-ad21-d4f726bb1135 req-b8b46a68-b3b5-4a35-af0b-efe695b4156f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "1712064c-6ba8-4660-972f-1e827a40781a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:05:03 np0005548731 nova_compute[232433]: 2025-12-06 07:05:03.303 232437 DEBUG oslo_concurrency.lockutils [req-d4e17c96-8549-45b2-ad21-d4f726bb1135 req-b8b46a68-b3b5-4a35-af0b-efe695b4156f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "1712064c-6ba8-4660-972f-1e827a40781a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:05:03 np0005548731 nova_compute[232433]: 2025-12-06 07:05:03.303 232437 DEBUG oslo_concurrency.lockutils [req-d4e17c96-8549-45b2-ad21-d4f726bb1135 req-b8b46a68-b3b5-4a35-af0b-efe695b4156f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "1712064c-6ba8-4660-972f-1e827a40781a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:05:03 np0005548731 nova_compute[232433]: 2025-12-06 07:05:03.304 232437 DEBUG nova.compute.manager [req-d4e17c96-8549-45b2-ad21-d4f726bb1135 req-b8b46a68-b3b5-4a35-af0b-efe695b4156f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 1712064c-6ba8-4660-972f-1e827a40781a] No waiting events found dispatching network-vif-plugged-9e200480-deff-4f39-9945-230d157ac906 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 02:05:03 np0005548731 nova_compute[232433]: 2025-12-06 07:05:03.304 232437 WARNING nova.compute.manager [req-d4e17c96-8549-45b2-ad21-d4f726bb1135 req-b8b46a68-b3b5-4a35-af0b-efe695b4156f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 1712064c-6ba8-4660-972f-1e827a40781a] Received unexpected event network-vif-plugged-9e200480-deff-4f39-9945-230d157ac906 for instance with vm_state active and task_state migrating.#033[00m
Dec  6 02:05:03 np0005548731 nova_compute[232433]: 2025-12-06 07:05:03.399 232437 INFO nova.virt.libvirt.driver [None req-796f3a12-8f70-4777-897c-858f8d53a969 e5f62143343c4499a86f710385c0c2f8 edcc68bbf9cd4c9189e59964db884a26 - - default default] [instance: 9b0dea9b-128d-43a8-aedd-6a023517b89f] Creating config drive at /var/lib/nova/instances/9b0dea9b-128d-43a8-aedd-6a023517b89f/disk.config#033[00m
Dec  6 02:05:03 np0005548731 nova_compute[232433]: 2025-12-06 07:05:03.405 232437 DEBUG oslo_concurrency.processutils [None req-796f3a12-8f70-4777-897c-858f8d53a969 e5f62143343c4499a86f710385c0c2f8 edcc68bbf9cd4c9189e59964db884a26 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/9b0dea9b-128d-43a8-aedd-6a023517b89f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpukpntkfr execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:05:03 np0005548731 nova_compute[232433]: 2025-12-06 07:05:03.533 232437 DEBUG oslo_concurrency.processutils [None req-796f3a12-8f70-4777-897c-858f8d53a969 e5f62143343c4499a86f710385c0c2f8 edcc68bbf9cd4c9189e59964db884a26 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/9b0dea9b-128d-43a8-aedd-6a023517b89f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpukpntkfr" returned: 0 in 0.128s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:05:03 np0005548731 nova_compute[232433]: 2025-12-06 07:05:03.555 232437 DEBUG nova.storage.rbd_utils [None req-796f3a12-8f70-4777-897c-858f8d53a969 e5f62143343c4499a86f710385c0c2f8 edcc68bbf9cd4c9189e59964db884a26 - - default default] rbd image 9b0dea9b-128d-43a8-aedd-6a023517b89f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:05:03 np0005548731 nova_compute[232433]: 2025-12-06 07:05:03.559 232437 DEBUG oslo_concurrency.processutils [None req-796f3a12-8f70-4777-897c-858f8d53a969 e5f62143343c4499a86f710385c0c2f8 edcc68bbf9cd4c9189e59964db884a26 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/9b0dea9b-128d-43a8-aedd-6a023517b89f/disk.config 9b0dea9b-128d-43a8-aedd-6a023517b89f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:05:03 np0005548731 nova_compute[232433]: 2025-12-06 07:05:03.586 232437 DEBUG nova.network.neutron [req-5c2e6880-28ac-49d5-8e52-b0051d3426f4 req-59baf75d-d683-49f3-a84c-7e2f4d593674 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 1712064c-6ba8-4660-972f-1e827a40781a] Updated VIF entry in instance network info cache for port 9e200480-deff-4f39-9945-230d157ac906. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  6 02:05:03 np0005548731 nova_compute[232433]: 2025-12-06 07:05:03.587 232437 DEBUG nova.network.neutron [req-5c2e6880-28ac-49d5-8e52-b0051d3426f4 req-59baf75d-d683-49f3-a84c-7e2f4d593674 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 1712064c-6ba8-4660-972f-1e827a40781a] Updating instance_info_cache with network_info: [{"id": "9e200480-deff-4f39-9945-230d157ac906", "address": "fa:16:3e:e9:3c:15", "network": {"id": "9238b9b5-08f5-4634-bd05-370e3192b201", "bridge": "br-int", "label": "tempest-LiveMigrationTest-1420048747-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9e86c61372e24db392d4a12ca71f7e00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9e200480-de", "ovs_interfaceid": "9e200480-deff-4f39-9945-230d157ac906", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"migrating_to": "compute-1.ctlplane.example.com"}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 02:05:03 np0005548731 nova_compute[232433]: 2025-12-06 07:05:03.609 232437 DEBUG oslo_concurrency.lockutils [req-5c2e6880-28ac-49d5-8e52-b0051d3426f4 req-59baf75d-d683-49f3-a84c-7e2f4d593674 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-1712064c-6ba8-4660-972f-1e827a40781a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 02:05:03 np0005548731 nova_compute[232433]: 2025-12-06 07:05:03.723 232437 DEBUG oslo_concurrency.processutils [None req-796f3a12-8f70-4777-897c-858f8d53a969 e5f62143343c4499a86f710385c0c2f8 edcc68bbf9cd4c9189e59964db884a26 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/9b0dea9b-128d-43a8-aedd-6a023517b89f/disk.config 9b0dea9b-128d-43a8-aedd-6a023517b89f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.164s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:05:03 np0005548731 nova_compute[232433]: 2025-12-06 07:05:03.724 232437 INFO nova.virt.libvirt.driver [None req-796f3a12-8f70-4777-897c-858f8d53a969 e5f62143343c4499a86f710385c0c2f8 edcc68bbf9cd4c9189e59964db884a26 - - default default] [instance: 9b0dea9b-128d-43a8-aedd-6a023517b89f] Deleting local config drive /var/lib/nova/instances/9b0dea9b-128d-43a8-aedd-6a023517b89f/disk.config because it was imported into RBD.#033[00m
Dec  6 02:05:03 np0005548731 kernel: tapc25d2974-9b: entered promiscuous mode
Dec  6 02:05:03 np0005548731 NetworkManager[49182]: <info>  [1765004703.7797] manager: (tapc25d2974-9b): new Tun device (/org/freedesktop/NetworkManager/Devices/62)
Dec  6 02:05:03 np0005548731 ovn_controller[133927]: 2025-12-06T07:05:03Z|00100|binding|INFO|Claiming lport c25d2974-9b24-4cff-862e-5f43e77d56be for this chassis.
Dec  6 02:05:03 np0005548731 ovn_controller[133927]: 2025-12-06T07:05:03Z|00101|binding|INFO|c25d2974-9b24-4cff-862e-5f43e77d56be: Claiming fa:16:3e:36:88:b5 10.100.0.5
Dec  6 02:05:03 np0005548731 nova_compute[232433]: 2025-12-06 07:05:03.782 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:05:03 np0005548731 nova_compute[232433]: 2025-12-06 07:05:03.785 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:05:03 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:05:03.791 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:36:88:b5 10.100.0.5'], port_security=['fa:16:3e:36:88:b5 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '9b0dea9b-128d-43a8-aedd-6a023517b89f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ebf7a358-f0c9-48b5-8485-23c08f737784', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'edcc68bbf9cd4c9189e59964db884a26', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'c2ea38d4-7079-4d8e-8d81-023b2f93461c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=63636421-5d43-464e-9459-29f5c8bd3cf7, chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], logical_port=c25d2974-9b24-4cff-862e-5f43e77d56be) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 02:05:03 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:05:03.792 143965 INFO neutron.agent.ovn.metadata.agent [-] Port c25d2974-9b24-4cff-862e-5f43e77d56be in datapath ebf7a358-f0c9-48b5-8485-23c08f737784 bound to our chassis#033[00m
Dec  6 02:05:03 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:05:03 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:05:03.794 143965 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ebf7a358-f0c9-48b5-8485-23c08f737784#033[00m
Dec  6 02:05:03 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:05:03 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:05:03.792 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:05:03 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:05:03 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:05:03 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:05:03.799 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:05:03 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:05:03.806 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[4fd528bd-d60c-4767-ac89-e9064fab4048]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:05:03 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:05:03.807 143965 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapebf7a358-f1 in ovnmeta-ebf7a358-f0c9-48b5-8485-23c08f737784 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Dec  6 02:05:03 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:05:03.809 236668 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapebf7a358-f0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Dec  6 02:05:03 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:05:03.809 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[0cab5df6-863c-49c9-88c8-43d87b1fd7a9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:05:03 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:05:03.810 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[0fa4a273-3798-485c-a977-c488a5b8f5c6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:05:03 np0005548731 systemd-machined[195355]: New machine qemu-15-instance-0000001f.
Dec  6 02:05:03 np0005548731 systemd-udevd[247877]: Network interface NamePolicy= disabled on kernel command line.
Dec  6 02:05:03 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:05:03.820 144079 DEBUG oslo.privsep.daemon [-] privsep: reply[4ded622d-2ae3-44a5-bc98-1a856244f554]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:05:03 np0005548731 NetworkManager[49182]: <info>  [1765004703.8282] device (tapc25d2974-9b): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec  6 02:05:03 np0005548731 NetworkManager[49182]: <info>  [1765004703.8289] device (tapc25d2974-9b): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec  6 02:05:03 np0005548731 systemd[1]: Started Virtual Machine qemu-15-instance-0000001f.
Dec  6 02:05:03 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:05:03.843 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[1281802e-7dcd-4d3b-95cf-2562b32e122a]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:05:03 np0005548731 nova_compute[232433]: 2025-12-06 07:05:03.854 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:05:03 np0005548731 ovn_controller[133927]: 2025-12-06T07:05:03Z|00102|binding|INFO|Setting lport c25d2974-9b24-4cff-862e-5f43e77d56be ovn-installed in OVS
Dec  6 02:05:03 np0005548731 ovn_controller[133927]: 2025-12-06T07:05:03Z|00103|binding|INFO|Setting lport c25d2974-9b24-4cff-862e-5f43e77d56be up in Southbound
Dec  6 02:05:03 np0005548731 nova_compute[232433]: 2025-12-06 07:05:03.861 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:05:03 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:05:03.875 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[7ca50961-79a2-4781-be85-e12e9c6fdf0e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:05:03 np0005548731 NetworkManager[49182]: <info>  [1765004703.8810] manager: (tapebf7a358-f0): new Veth device (/org/freedesktop/NetworkManager/Devices/63)
Dec  6 02:05:03 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:05:03.880 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[9dae7d14-3a28-4aaa-83b8-065b5c3634fb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:05:03 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:05:03.906 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[82b93818-8741-4b55-99fe-5cc8fc5392cd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:05:03 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:05:03.908 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[a030763b-b343-40db-adbc-34cf6f6763c3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:05:03 np0005548731 NetworkManager[49182]: <info>  [1765004703.9324] device (tapebf7a358-f0): carrier: link connected
Dec  6 02:05:03 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:05:03.939 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[f300d61d-0bb6-4d08-ae15-689cedabb3c8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:05:03 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:05:03.955 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[d856241d-918e-4ae2-a93f-d61b03fc0c78]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapebf7a358-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e5:9b:08'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 37], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 496593, 'reachable_time': 41575, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 247909, 'error': None, 'target': 'ovnmeta-ebf7a358-f0c9-48b5-8485-23c08f737784', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:05:03 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:05:03.970 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[b5060e52-098e-431d-9817-8dbe6296a4fb]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fee5:9b08'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 496593, 'tstamp': 496593}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 247910, 'error': None, 'target': 'ovnmeta-ebf7a358-f0c9-48b5-8485-23c08f737784', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:05:03 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:05:03.986 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[6c93072a-ead9-4f11-8c8e-3b478e198e16]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapebf7a358-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e5:9b:08'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 37], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 496593, 'reachable_time': 41575, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 247911, 'error': None, 'target': 'ovnmeta-ebf7a358-f0c9-48b5-8485-23c08f737784', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:05:04 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:05:04.013 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[84039cd9-f3e1-4684-a765-c46d8bbf7903]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:05:04 np0005548731 nova_compute[232433]: 2025-12-06 07:05:04.030 232437 DEBUG nova.network.neutron [req-65d8f278-50c6-44ac-adff-caea33e45e77 req-87779111-ce1b-40a3-b72f-97ce8e84aa18 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 9b0dea9b-128d-43a8-aedd-6a023517b89f] Updated VIF entry in instance network info cache for port c25d2974-9b24-4cff-862e-5f43e77d56be. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  6 02:05:04 np0005548731 nova_compute[232433]: 2025-12-06 07:05:04.031 232437 DEBUG nova.network.neutron [req-65d8f278-50c6-44ac-adff-caea33e45e77 req-87779111-ce1b-40a3-b72f-97ce8e84aa18 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 9b0dea9b-128d-43a8-aedd-6a023517b89f] Updating instance_info_cache with network_info: [{"id": "c25d2974-9b24-4cff-862e-5f43e77d56be", "address": "fa:16:3e:36:88:b5", "network": {"id": "ebf7a358-f0c9-48b5-8485-23c08f737784", "bridge": "br-int", "label": "tempest-AttachSCSIVolumeTestJSON-863315842-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "edcc68bbf9cd4c9189e59964db884a26", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc25d2974-9b", "ovs_interfaceid": "c25d2974-9b24-4cff-862e-5f43e77d56be", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 02:05:04 np0005548731 nova_compute[232433]: 2025-12-06 07:05:04.050 232437 DEBUG oslo_concurrency.lockutils [req-65d8f278-50c6-44ac-adff-caea33e45e77 req-87779111-ce1b-40a3-b72f-97ce8e84aa18 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-9b0dea9b-128d-43a8-aedd-6a023517b89f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 02:05:04 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:05:04.064 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[b83fec6d-35c0-4dbf-8490-c3efd8f3115a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:05:04 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:05:04.066 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapebf7a358-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:05:04 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:05:04.066 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  6 02:05:04 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:05:04.066 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapebf7a358-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:05:04 np0005548731 kernel: tapebf7a358-f0: entered promiscuous mode
Dec  6 02:05:04 np0005548731 NetworkManager[49182]: <info>  [1765004704.0687] manager: (tapebf7a358-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/64)
Dec  6 02:05:04 np0005548731 nova_compute[232433]: 2025-12-06 07:05:04.068 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:05:04 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:05:04.070 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapebf7a358-f0, col_values=(('external_ids', {'iface-id': '3d3c7bb3-2528-4c1d-8ea8-94e539e1498f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:05:04 np0005548731 ovn_controller[133927]: 2025-12-06T07:05:04Z|00104|binding|INFO|Releasing lport 3d3c7bb3-2528-4c1d-8ea8-94e539e1498f from this chassis (sb_readonly=0)
Dec  6 02:05:04 np0005548731 nova_compute[232433]: 2025-12-06 07:05:04.072 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:05:04 np0005548731 nova_compute[232433]: 2025-12-06 07:05:04.093 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:05:04 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:05:04.095 143965 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/ebf7a358-f0c9-48b5-8485-23c08f737784.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/ebf7a358-f0c9-48b5-8485-23c08f737784.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Dec  6 02:05:04 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:05:04.095 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[8f7ec71a-62a6-4a04-88e8-95b8bd64a1d9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:05:04 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:05:04.096 143965 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec  6 02:05:04 np0005548731 ovn_metadata_agent[143960]: global
Dec  6 02:05:04 np0005548731 ovn_metadata_agent[143960]:    log         /dev/log local0 debug
Dec  6 02:05:04 np0005548731 ovn_metadata_agent[143960]:    log-tag     haproxy-metadata-proxy-ebf7a358-f0c9-48b5-8485-23c08f737784
Dec  6 02:05:04 np0005548731 ovn_metadata_agent[143960]:    user        root
Dec  6 02:05:04 np0005548731 ovn_metadata_agent[143960]:    group       root
Dec  6 02:05:04 np0005548731 ovn_metadata_agent[143960]:    maxconn     1024
Dec  6 02:05:04 np0005548731 ovn_metadata_agent[143960]:    pidfile     /var/lib/neutron/external/pids/ebf7a358-f0c9-48b5-8485-23c08f737784.pid.haproxy
Dec  6 02:05:04 np0005548731 ovn_metadata_agent[143960]:    daemon
Dec  6 02:05:04 np0005548731 ovn_metadata_agent[143960]: 
Dec  6 02:05:04 np0005548731 ovn_metadata_agent[143960]: defaults
Dec  6 02:05:04 np0005548731 ovn_metadata_agent[143960]:    log global
Dec  6 02:05:04 np0005548731 ovn_metadata_agent[143960]:    mode http
Dec  6 02:05:04 np0005548731 ovn_metadata_agent[143960]:    option httplog
Dec  6 02:05:04 np0005548731 ovn_metadata_agent[143960]:    option dontlognull
Dec  6 02:05:04 np0005548731 ovn_metadata_agent[143960]:    option http-server-close
Dec  6 02:05:04 np0005548731 ovn_metadata_agent[143960]:    option forwardfor
Dec  6 02:05:04 np0005548731 ovn_metadata_agent[143960]:    retries                 3
Dec  6 02:05:04 np0005548731 ovn_metadata_agent[143960]:    timeout http-request    30s
Dec  6 02:05:04 np0005548731 ovn_metadata_agent[143960]:    timeout connect         30s
Dec  6 02:05:04 np0005548731 ovn_metadata_agent[143960]:    timeout client          32s
Dec  6 02:05:04 np0005548731 ovn_metadata_agent[143960]:    timeout server          32s
Dec  6 02:05:04 np0005548731 ovn_metadata_agent[143960]:    timeout http-keep-alive 30s
Dec  6 02:05:04 np0005548731 ovn_metadata_agent[143960]: 
Dec  6 02:05:04 np0005548731 ovn_metadata_agent[143960]: 
Dec  6 02:05:04 np0005548731 ovn_metadata_agent[143960]: listen listener
Dec  6 02:05:04 np0005548731 ovn_metadata_agent[143960]:    bind 169.254.169.254:80
Dec  6 02:05:04 np0005548731 ovn_metadata_agent[143960]:    server metadata /var/lib/neutron/metadata_proxy
Dec  6 02:05:04 np0005548731 ovn_metadata_agent[143960]:    http-request add-header X-OVN-Network-ID ebf7a358-f0c9-48b5-8485-23c08f737784
Dec  6 02:05:04 np0005548731 ovn_metadata_agent[143960]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Dec  6 02:05:04 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:05:04.100 143965 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-ebf7a358-f0c9-48b5-8485-23c08f737784', 'env', 'PROCESS_TAG=haproxy-ebf7a358-f0c9-48b5-8485-23c08f737784', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/ebf7a358-f0c9-48b5-8485-23c08f737784.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Dec  6 02:05:04 np0005548731 podman[247980]: 2025-12-06 07:05:04.407461845 +0000 UTC m=+0.022957285 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec  6 02:05:04 np0005548731 podman[247980]: 2025-12-06 07:05:04.526003667 +0000 UTC m=+0.141499087 container create b17337e9bcc25543e1aa454227bf8d98fd08bee74adbf5b4b634cc0ecba20c01 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ebf7a358-f0c9-48b5-8485-23c08f737784, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec  6 02:05:04 np0005548731 nova_compute[232433]: 2025-12-06 07:05:04.538 232437 DEBUG nova.virt.driver [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Emitting event <LifecycleEvent: 1765004704.5379155, 9b0dea9b-128d-43a8-aedd-6a023517b89f => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 02:05:04 np0005548731 nova_compute[232433]: 2025-12-06 07:05:04.539 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 9b0dea9b-128d-43a8-aedd-6a023517b89f] VM Started (Lifecycle Event)#033[00m
Dec  6 02:05:04 np0005548731 nova_compute[232433]: 2025-12-06 07:05:04.558 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 9b0dea9b-128d-43a8-aedd-6a023517b89f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:05:04 np0005548731 nova_compute[232433]: 2025-12-06 07:05:04.562 232437 DEBUG nova.virt.driver [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Emitting event <LifecycleEvent: 1765004704.5388365, 9b0dea9b-128d-43a8-aedd-6a023517b89f => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 02:05:04 np0005548731 nova_compute[232433]: 2025-12-06 07:05:04.563 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 9b0dea9b-128d-43a8-aedd-6a023517b89f] VM Paused (Lifecycle Event)#033[00m
Dec  6 02:05:04 np0005548731 systemd[1]: Started libpod-conmon-b17337e9bcc25543e1aa454227bf8d98fd08bee74adbf5b4b634cc0ecba20c01.scope.
Dec  6 02:05:04 np0005548731 systemd[1]: Started libcrun container.
Dec  6 02:05:04 np0005548731 nova_compute[232433]: 2025-12-06 07:05:04.585 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 9b0dea9b-128d-43a8-aedd-6a023517b89f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:05:04 np0005548731 nova_compute[232433]: 2025-12-06 07:05:04.588 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 9b0dea9b-128d-43a8-aedd-6a023517b89f] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  6 02:05:04 np0005548731 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b96bb5e717cc7709ccb52113173f318e9970e66e3e6e795b15ba126cd5dd877/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec  6 02:05:04 np0005548731 podman[247980]: 2025-12-06 07:05:04.600450724 +0000 UTC m=+0.215946184 container init b17337e9bcc25543e1aa454227bf8d98fd08bee74adbf5b4b634cc0ecba20c01 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ebf7a358-f0c9-48b5-8485-23c08f737784, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Dec  6 02:05:04 np0005548731 podman[247980]: 2025-12-06 07:05:04.605835754 +0000 UTC m=+0.221331184 container start b17337e9bcc25543e1aa454227bf8d98fd08bee74adbf5b4b634cc0ecba20c01 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ebf7a358-f0c9-48b5-8485-23c08f737784, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  6 02:05:04 np0005548731 nova_compute[232433]: 2025-12-06 07:05:04.612 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 9b0dea9b-128d-43a8-aedd-6a023517b89f] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  6 02:05:04 np0005548731 neutron-haproxy-ovnmeta-ebf7a358-f0c9-48b5-8485-23c08f737784[248001]: [NOTICE]   (248005) : New worker (248007) forked
Dec  6 02:05:04 np0005548731 neutron-haproxy-ovnmeta-ebf7a358-f0c9-48b5-8485-23c08f737784[248001]: [NOTICE]   (248005) : Loading success.
Dec  6 02:05:05 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:05:05 np0005548731 nova_compute[232433]: 2025-12-06 07:05:05.422 232437 DEBUG nova.compute.manager [req-60207ec9-9bbc-42cd-b884-e93853d7562b req-31babe5c-a052-48e2-980c-07d56beb8c4d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 1712064c-6ba8-4660-972f-1e827a40781a] Received event network-vif-plugged-9e200480-deff-4f39-9945-230d157ac906 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:05:05 np0005548731 nova_compute[232433]: 2025-12-06 07:05:05.423 232437 DEBUG oslo_concurrency.lockutils [req-60207ec9-9bbc-42cd-b884-e93853d7562b req-31babe5c-a052-48e2-980c-07d56beb8c4d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "1712064c-6ba8-4660-972f-1e827a40781a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:05:05 np0005548731 nova_compute[232433]: 2025-12-06 07:05:05.423 232437 DEBUG oslo_concurrency.lockutils [req-60207ec9-9bbc-42cd-b884-e93853d7562b req-31babe5c-a052-48e2-980c-07d56beb8c4d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "1712064c-6ba8-4660-972f-1e827a40781a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:05:05 np0005548731 nova_compute[232433]: 2025-12-06 07:05:05.423 232437 DEBUG oslo_concurrency.lockutils [req-60207ec9-9bbc-42cd-b884-e93853d7562b req-31babe5c-a052-48e2-980c-07d56beb8c4d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "1712064c-6ba8-4660-972f-1e827a40781a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:05:05 np0005548731 nova_compute[232433]: 2025-12-06 07:05:05.423 232437 DEBUG nova.compute.manager [req-60207ec9-9bbc-42cd-b884-e93853d7562b req-31babe5c-a052-48e2-980c-07d56beb8c4d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 1712064c-6ba8-4660-972f-1e827a40781a] No waiting events found dispatching network-vif-plugged-9e200480-deff-4f39-9945-230d157ac906 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 02:05:05 np0005548731 nova_compute[232433]: 2025-12-06 07:05:05.424 232437 WARNING nova.compute.manager [req-60207ec9-9bbc-42cd-b884-e93853d7562b req-31babe5c-a052-48e2-980c-07d56beb8c4d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 1712064c-6ba8-4660-972f-1e827a40781a] Received unexpected event network-vif-plugged-9e200480-deff-4f39-9945-230d157ac906 for instance with vm_state active and task_state migrating.#033[00m
Dec  6 02:05:05 np0005548731 nova_compute[232433]: 2025-12-06 07:05:05.424 232437 DEBUG nova.compute.manager [req-60207ec9-9bbc-42cd-b884-e93853d7562b req-31babe5c-a052-48e2-980c-07d56beb8c4d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 9b0dea9b-128d-43a8-aedd-6a023517b89f] Received event network-vif-plugged-c25d2974-9b24-4cff-862e-5f43e77d56be external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:05:05 np0005548731 nova_compute[232433]: 2025-12-06 07:05:05.424 232437 DEBUG oslo_concurrency.lockutils [req-60207ec9-9bbc-42cd-b884-e93853d7562b req-31babe5c-a052-48e2-980c-07d56beb8c4d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "9b0dea9b-128d-43a8-aedd-6a023517b89f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:05:05 np0005548731 nova_compute[232433]: 2025-12-06 07:05:05.424 232437 DEBUG oslo_concurrency.lockutils [req-60207ec9-9bbc-42cd-b884-e93853d7562b req-31babe5c-a052-48e2-980c-07d56beb8c4d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "9b0dea9b-128d-43a8-aedd-6a023517b89f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:05:05 np0005548731 nova_compute[232433]: 2025-12-06 07:05:05.424 232437 DEBUG oslo_concurrency.lockutils [req-60207ec9-9bbc-42cd-b884-e93853d7562b req-31babe5c-a052-48e2-980c-07d56beb8c4d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "9b0dea9b-128d-43a8-aedd-6a023517b89f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:05:05 np0005548731 nova_compute[232433]: 2025-12-06 07:05:05.425 232437 DEBUG nova.compute.manager [req-60207ec9-9bbc-42cd-b884-e93853d7562b req-31babe5c-a052-48e2-980c-07d56beb8c4d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 9b0dea9b-128d-43a8-aedd-6a023517b89f] Processing event network-vif-plugged-c25d2974-9b24-4cff-862e-5f43e77d56be _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Dec  6 02:05:05 np0005548731 nova_compute[232433]: 2025-12-06 07:05:05.425 232437 DEBUG nova.compute.manager [req-60207ec9-9bbc-42cd-b884-e93853d7562b req-31babe5c-a052-48e2-980c-07d56beb8c4d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 9b0dea9b-128d-43a8-aedd-6a023517b89f] Received event network-vif-plugged-c25d2974-9b24-4cff-862e-5f43e77d56be external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:05:05 np0005548731 nova_compute[232433]: 2025-12-06 07:05:05.425 232437 DEBUG oslo_concurrency.lockutils [req-60207ec9-9bbc-42cd-b884-e93853d7562b req-31babe5c-a052-48e2-980c-07d56beb8c4d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "9b0dea9b-128d-43a8-aedd-6a023517b89f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:05:05 np0005548731 nova_compute[232433]: 2025-12-06 07:05:05.425 232437 DEBUG oslo_concurrency.lockutils [req-60207ec9-9bbc-42cd-b884-e93853d7562b req-31babe5c-a052-48e2-980c-07d56beb8c4d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "9b0dea9b-128d-43a8-aedd-6a023517b89f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:05:05 np0005548731 nova_compute[232433]: 2025-12-06 07:05:05.425 232437 DEBUG oslo_concurrency.lockutils [req-60207ec9-9bbc-42cd-b884-e93853d7562b req-31babe5c-a052-48e2-980c-07d56beb8c4d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "9b0dea9b-128d-43a8-aedd-6a023517b89f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:05:05 np0005548731 nova_compute[232433]: 2025-12-06 07:05:05.426 232437 DEBUG nova.compute.manager [req-60207ec9-9bbc-42cd-b884-e93853d7562b req-31babe5c-a052-48e2-980c-07d56beb8c4d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 9b0dea9b-128d-43a8-aedd-6a023517b89f] No waiting events found dispatching network-vif-plugged-c25d2974-9b24-4cff-862e-5f43e77d56be pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 02:05:05 np0005548731 nova_compute[232433]: 2025-12-06 07:05:05.426 232437 WARNING nova.compute.manager [req-60207ec9-9bbc-42cd-b884-e93853d7562b req-31babe5c-a052-48e2-980c-07d56beb8c4d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 9b0dea9b-128d-43a8-aedd-6a023517b89f] Received unexpected event network-vif-plugged-c25d2974-9b24-4cff-862e-5f43e77d56be for instance with vm_state building and task_state spawning.#033[00m
Dec  6 02:05:05 np0005548731 nova_compute[232433]: 2025-12-06 07:05:05.426 232437 DEBUG nova.compute.manager [None req-796f3a12-8f70-4777-897c-858f8d53a969 e5f62143343c4499a86f710385c0c2f8 edcc68bbf9cd4c9189e59964db884a26 - - default default] [instance: 9b0dea9b-128d-43a8-aedd-6a023517b89f] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec  6 02:05:05 np0005548731 nova_compute[232433]: 2025-12-06 07:05:05.430 232437 DEBUG nova.virt.driver [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Emitting event <LifecycleEvent: 1765004705.4303977, 9b0dea9b-128d-43a8-aedd-6a023517b89f => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 02:05:05 np0005548731 nova_compute[232433]: 2025-12-06 07:05:05.431 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 9b0dea9b-128d-43a8-aedd-6a023517b89f] VM Resumed (Lifecycle Event)#033[00m
Dec  6 02:05:05 np0005548731 nova_compute[232433]: 2025-12-06 07:05:05.432 232437 DEBUG nova.virt.libvirt.driver [None req-796f3a12-8f70-4777-897c-858f8d53a969 e5f62143343c4499a86f710385c0c2f8 edcc68bbf9cd4c9189e59964db884a26 - - default default] [instance: 9b0dea9b-128d-43a8-aedd-6a023517b89f] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Dec  6 02:05:05 np0005548731 nova_compute[232433]: 2025-12-06 07:05:05.435 232437 INFO nova.virt.libvirt.driver [-] [instance: 9b0dea9b-128d-43a8-aedd-6a023517b89f] Instance spawned successfully.#033[00m
Dec  6 02:05:05 np0005548731 nova_compute[232433]: 2025-12-06 07:05:05.436 232437 DEBUG nova.virt.libvirt.driver [None req-796f3a12-8f70-4777-897c-858f8d53a969 e5f62143343c4499a86f710385c0c2f8 edcc68bbf9cd4c9189e59964db884a26 - - default default] [instance: 9b0dea9b-128d-43a8-aedd-6a023517b89f] Attempting to register defaults for the following image properties: ['hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Dec  6 02:05:05 np0005548731 nova_compute[232433]: 2025-12-06 07:05:05.441 232437 DEBUG nova.virt.libvirt.driver [None req-796f3a12-8f70-4777-897c-858f8d53a969 e5f62143343c4499a86f710385c0c2f8 edcc68bbf9cd4c9189e59964db884a26 - - default default] [instance: 9b0dea9b-128d-43a8-aedd-6a023517b89f] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 02:05:05 np0005548731 nova_compute[232433]: 2025-12-06 07:05:05.442 232437 DEBUG nova.virt.libvirt.driver [None req-796f3a12-8f70-4777-897c-858f8d53a969 e5f62143343c4499a86f710385c0c2f8 edcc68bbf9cd4c9189e59964db884a26 - - default default] [instance: 9b0dea9b-128d-43a8-aedd-6a023517b89f] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 02:05:05 np0005548731 nova_compute[232433]: 2025-12-06 07:05:05.442 232437 DEBUG nova.virt.libvirt.driver [None req-796f3a12-8f70-4777-897c-858f8d53a969 e5f62143343c4499a86f710385c0c2f8 edcc68bbf9cd4c9189e59964db884a26 - - default default] [instance: 9b0dea9b-128d-43a8-aedd-6a023517b89f] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 02:05:05 np0005548731 nova_compute[232433]: 2025-12-06 07:05:05.442 232437 DEBUG nova.virt.libvirt.driver [None req-796f3a12-8f70-4777-897c-858f8d53a969 e5f62143343c4499a86f710385c0c2f8 edcc68bbf9cd4c9189e59964db884a26 - - default default] [instance: 9b0dea9b-128d-43a8-aedd-6a023517b89f] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 02:05:05 np0005548731 nova_compute[232433]: 2025-12-06 07:05:05.451 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 9b0dea9b-128d-43a8-aedd-6a023517b89f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:05:05 np0005548731 nova_compute[232433]: 2025-12-06 07:05:05.454 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 9b0dea9b-128d-43a8-aedd-6a023517b89f] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  6 02:05:05 np0005548731 nova_compute[232433]: 2025-12-06 07:05:05.476 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 9b0dea9b-128d-43a8-aedd-6a023517b89f] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  6 02:05:05 np0005548731 nova_compute[232433]: 2025-12-06 07:05:05.501 232437 INFO nova.compute.manager [None req-796f3a12-8f70-4777-897c-858f8d53a969 e5f62143343c4499a86f710385c0c2f8 edcc68bbf9cd4c9189e59964db884a26 - - default default] [instance: 9b0dea9b-128d-43a8-aedd-6a023517b89f] Took 8.13 seconds to spawn the instance on the hypervisor.#033[00m
Dec  6 02:05:05 np0005548731 nova_compute[232433]: 2025-12-06 07:05:05.502 232437 DEBUG nova.compute.manager [None req-796f3a12-8f70-4777-897c-858f8d53a969 e5f62143343c4499a86f710385c0c2f8 edcc68bbf9cd4c9189e59964db884a26 - - default default] [instance: 9b0dea9b-128d-43a8-aedd-6a023517b89f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:05:05 np0005548731 nova_compute[232433]: 2025-12-06 07:05:05.563 232437 INFO nova.compute.manager [None req-796f3a12-8f70-4777-897c-858f8d53a969 e5f62143343c4499a86f710385c0c2f8 edcc68bbf9cd4c9189e59964db884a26 - - default default] [instance: 9b0dea9b-128d-43a8-aedd-6a023517b89f] Took 9.17 seconds to build instance.#033[00m
Dec  6 02:05:05 np0005548731 nova_compute[232433]: 2025-12-06 07:05:05.584 232437 DEBUG oslo_concurrency.lockutils [None req-796f3a12-8f70-4777-897c-858f8d53a969 e5f62143343c4499a86f710385c0c2f8 edcc68bbf9cd4c9189e59964db884a26 - - default default] Lock "9b0dea9b-128d-43a8-aedd-6a023517b89f" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.258s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:05:05 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:05:05 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:05:05 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:05:05.796 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:05:05 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:05:05 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 02:05:05 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:05:05.801 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 02:05:05 np0005548731 podman[248016]: 2025-12-06 07:05:05.905430048 +0000 UTC m=+0.058456222 container health_status 26c2ce9bdb83c6db5ccad7f5b2a7cb4e9354ab0235ad2f942ea42c8feda2142b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent)
Dec  6 02:05:05 np0005548731 podman[248018]: 2025-12-06 07:05:05.928379062 +0000 UTC m=+0.073008473 container health_status ae4fd08cdec31d6ae2a6b91ffff7deb6ca5a8375cb52ee4b685cc6f25796824e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=multipathd, org.label-schema.license=GPLv2, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3)
Dec  6 02:05:05 np0005548731 podman[248017]: 2025-12-06 07:05:05.942514334 +0000 UTC m=+0.091629763 container health_status a118c3becf56cfbcae208065771ebf10a65162bb7b96389971293e534823fb78 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  6 02:05:06 np0005548731 nova_compute[232433]: 2025-12-06 07:05:06.286 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:05:07 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:05:07 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:05:07 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:05:07.799 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:05:07 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:05:07 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:05:07 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:05:07.803 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:05:08 np0005548731 nova_compute[232433]: 2025-12-06 07:05:08.000 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:05:08 np0005548731 nova_compute[232433]: 2025-12-06 07:05:08.842 232437 DEBUG oslo_concurrency.lockutils [None req-7217ce72-b1f2-4333-8cf6-95e5e446949b daab2cfaa69a4e9f819a57290bfd54d9 646c511edd3f4a5c93117e8dcfea183b - - default default] Acquiring lock "1712064c-6ba8-4660-972f-1e827a40781a-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:05:08 np0005548731 nova_compute[232433]: 2025-12-06 07:05:08.842 232437 DEBUG oslo_concurrency.lockutils [None req-7217ce72-b1f2-4333-8cf6-95e5e446949b daab2cfaa69a4e9f819a57290bfd54d9 646c511edd3f4a5c93117e8dcfea183b - - default default] Lock "1712064c-6ba8-4660-972f-1e827a40781a-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:05:08 np0005548731 nova_compute[232433]: 2025-12-06 07:05:08.843 232437 DEBUG oslo_concurrency.lockutils [None req-7217ce72-b1f2-4333-8cf6-95e5e446949b daab2cfaa69a4e9f819a57290bfd54d9 646c511edd3f4a5c93117e8dcfea183b - - default default] Lock "1712064c-6ba8-4660-972f-1e827a40781a-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:05:08 np0005548731 nova_compute[232433]: 2025-12-06 07:05:08.886 232437 DEBUG oslo_concurrency.lockutils [None req-7217ce72-b1f2-4333-8cf6-95e5e446949b daab2cfaa69a4e9f819a57290bfd54d9 646c511edd3f4a5c93117e8dcfea183b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:05:08 np0005548731 nova_compute[232433]: 2025-12-06 07:05:08.886 232437 DEBUG oslo_concurrency.lockutils [None req-7217ce72-b1f2-4333-8cf6-95e5e446949b daab2cfaa69a4e9f819a57290bfd54d9 646c511edd3f4a5c93117e8dcfea183b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:05:08 np0005548731 nova_compute[232433]: 2025-12-06 07:05:08.887 232437 DEBUG oslo_concurrency.lockutils [None req-7217ce72-b1f2-4333-8cf6-95e5e446949b daab2cfaa69a4e9f819a57290bfd54d9 646c511edd3f4a5c93117e8dcfea183b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:05:08 np0005548731 nova_compute[232433]: 2025-12-06 07:05:08.887 232437 DEBUG nova.compute.resource_tracker [None req-7217ce72-b1f2-4333-8cf6-95e5e446949b daab2cfaa69a4e9f819a57290bfd54d9 646c511edd3f4a5c93117e8dcfea183b - - default default] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  6 02:05:08 np0005548731 nova_compute[232433]: 2025-12-06 07:05:08.887 232437 DEBUG oslo_concurrency.processutils [None req-7217ce72-b1f2-4333-8cf6-95e5e446949b daab2cfaa69a4e9f819a57290bfd54d9 646c511edd3f4a5c93117e8dcfea183b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:05:09 np0005548731 NetworkManager[49182]: <info>  [1765004709.0076] manager: (patch-provnet-9e78c1a1-68f4-477a-abaa-13a98bde06e5-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/65)
Dec  6 02:05:09 np0005548731 nova_compute[232433]: 2025-12-06 07:05:09.007 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:05:09 np0005548731 NetworkManager[49182]: <info>  [1765004709.0086] manager: (patch-br-int-to-provnet-9e78c1a1-68f4-477a-abaa-13a98bde06e5): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/66)
Dec  6 02:05:09 np0005548731 nova_compute[232433]: 2025-12-06 07:05:09.129 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:05:09 np0005548731 ovn_controller[133927]: 2025-12-06T07:05:09Z|00105|binding|INFO|Releasing lport 3d3c7bb3-2528-4c1d-8ea8-94e539e1498f from this chassis (sb_readonly=0)
Dec  6 02:05:09 np0005548731 nova_compute[232433]: 2025-12-06 07:05:09.147 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:05:09 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 02:05:09 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2765720048' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 02:05:09 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  6 02:05:09 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1546777161' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  6 02:05:09 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  6 02:05:09 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1546777161' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  6 02:05:09 np0005548731 nova_compute[232433]: 2025-12-06 07:05:09.322 232437 DEBUG oslo_concurrency.processutils [None req-7217ce72-b1f2-4333-8cf6-95e5e446949b daab2cfaa69a4e9f819a57290bfd54d9 646c511edd3f4a5c93117e8dcfea183b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.435s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:05:09 np0005548731 nova_compute[232433]: 2025-12-06 07:05:09.409 232437 DEBUG nova.virt.libvirt.driver [None req-7217ce72-b1f2-4333-8cf6-95e5e446949b daab2cfaa69a4e9f819a57290bfd54d9 646c511edd3f4a5c93117e8dcfea183b - - default default] skipping disk for instance-0000001f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  6 02:05:09 np0005548731 nova_compute[232433]: 2025-12-06 07:05:09.410 232437 DEBUG nova.virt.libvirt.driver [None req-7217ce72-b1f2-4333-8cf6-95e5e446949b daab2cfaa69a4e9f819a57290bfd54d9 646c511edd3f4a5c93117e8dcfea183b - - default default] skipping disk for instance-0000001f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  6 02:05:09 np0005548731 nova_compute[232433]: 2025-12-06 07:05:09.574 232437 WARNING nova.virt.libvirt.driver [None req-7217ce72-b1f2-4333-8cf6-95e5e446949b daab2cfaa69a4e9f819a57290bfd54d9 646c511edd3f4a5c93117e8dcfea183b - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  6 02:05:09 np0005548731 nova_compute[232433]: 2025-12-06 07:05:09.576 232437 DEBUG nova.compute.resource_tracker [None req-7217ce72-b1f2-4333-8cf6-95e5e446949b daab2cfaa69a4e9f819a57290bfd54d9 646c511edd3f4a5c93117e8dcfea183b - - default default] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4611MB free_disk=20.8201904296875GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": 
"0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  6 02:05:09 np0005548731 nova_compute[232433]: 2025-12-06 07:05:09.576 232437 DEBUG oslo_concurrency.lockutils [None req-7217ce72-b1f2-4333-8cf6-95e5e446949b daab2cfaa69a4e9f819a57290bfd54d9 646c511edd3f4a5c93117e8dcfea183b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:05:09 np0005548731 nova_compute[232433]: 2025-12-06 07:05:09.576 232437 DEBUG oslo_concurrency.lockutils [None req-7217ce72-b1f2-4333-8cf6-95e5e446949b daab2cfaa69a4e9f819a57290bfd54d9 646c511edd3f4a5c93117e8dcfea183b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:05:09 np0005548731 nova_compute[232433]: 2025-12-06 07:05:09.626 232437 DEBUG nova.compute.resource_tracker [None req-7217ce72-b1f2-4333-8cf6-95e5e446949b daab2cfaa69a4e9f819a57290bfd54d9 646c511edd3f4a5c93117e8dcfea183b - - default default] Migration for instance 1712064c-6ba8-4660-972f-1e827a40781a refers to another host's instance! _pair_instances_to_migrations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:903#033[00m
Dec  6 02:05:09 np0005548731 nova_compute[232433]: 2025-12-06 07:05:09.649 232437 DEBUG nova.compute.resource_tracker [None req-7217ce72-b1f2-4333-8cf6-95e5e446949b daab2cfaa69a4e9f819a57290bfd54d9 646c511edd3f4a5c93117e8dcfea183b - - default default] [instance: 1712064c-6ba8-4660-972f-1e827a40781a] Skipping migration as instance is neither resizing nor live-migrating. _update_usage_from_migrations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1491#033[00m
Dec  6 02:05:09 np0005548731 nova_compute[232433]: 2025-12-06 07:05:09.678 232437 DEBUG nova.compute.resource_tracker [None req-7217ce72-b1f2-4333-8cf6-95e5e446949b daab2cfaa69a4e9f819a57290bfd54d9 646c511edd3f4a5c93117e8dcfea183b - - default default] Migration 66ff6ed2-80e5-4a70-9b00-2852c6d84ac4 is active on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1640#033[00m
Dec  6 02:05:09 np0005548731 nova_compute[232433]: 2025-12-06 07:05:09.679 232437 DEBUG nova.compute.resource_tracker [None req-7217ce72-b1f2-4333-8cf6-95e5e446949b daab2cfaa69a4e9f819a57290bfd54d9 646c511edd3f4a5c93117e8dcfea183b - - default default] Instance 9b0dea9b-128d-43a8-aedd-6a023517b89f actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  6 02:05:09 np0005548731 nova_compute[232433]: 2025-12-06 07:05:09.680 232437 DEBUG nova.compute.resource_tracker [None req-7217ce72-b1f2-4333-8cf6-95e5e446949b daab2cfaa69a4e9f819a57290bfd54d9 646c511edd3f4a5c93117e8dcfea183b - - default default] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  6 02:05:09 np0005548731 nova_compute[232433]: 2025-12-06 07:05:09.680 232437 DEBUG nova.compute.resource_tracker [None req-7217ce72-b1f2-4333-8cf6-95e5e446949b daab2cfaa69a4e9f819a57290bfd54d9 646c511edd3f4a5c93117e8dcfea183b - - default default] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  6 02:05:09 np0005548731 nova_compute[232433]: 2025-12-06 07:05:09.736 232437 DEBUG oslo_concurrency.processutils [None req-7217ce72-b1f2-4333-8cf6-95e5e446949b daab2cfaa69a4e9f819a57290bfd54d9 646c511edd3f4a5c93117e8dcfea183b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:05:09 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:05:09 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:05:09 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:05:09.802 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:05:09 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:05:09 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:05:09 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:05:09.805 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:05:09 np0005548731 nova_compute[232433]: 2025-12-06 07:05:09.908 232437 DEBUG nova.compute.manager [req-e582b03f-7925-426a-98bc-8886b3318236 req-4c6045cd-1894-40e8-bd40-4901c622c29e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 9b0dea9b-128d-43a8-aedd-6a023517b89f] Received event network-changed-c25d2974-9b24-4cff-862e-5f43e77d56be external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:05:09 np0005548731 nova_compute[232433]: 2025-12-06 07:05:09.909 232437 DEBUG nova.compute.manager [req-e582b03f-7925-426a-98bc-8886b3318236 req-4c6045cd-1894-40e8-bd40-4901c622c29e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 9b0dea9b-128d-43a8-aedd-6a023517b89f] Refreshing instance network info cache due to event network-changed-c25d2974-9b24-4cff-862e-5f43e77d56be. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  6 02:05:09 np0005548731 nova_compute[232433]: 2025-12-06 07:05:09.909 232437 DEBUG oslo_concurrency.lockutils [req-e582b03f-7925-426a-98bc-8886b3318236 req-4c6045cd-1894-40e8-bd40-4901c622c29e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-9b0dea9b-128d-43a8-aedd-6a023517b89f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 02:05:09 np0005548731 nova_compute[232433]: 2025-12-06 07:05:09.910 232437 DEBUG oslo_concurrency.lockutils [req-e582b03f-7925-426a-98bc-8886b3318236 req-4c6045cd-1894-40e8-bd40-4901c622c29e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-9b0dea9b-128d-43a8-aedd-6a023517b89f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 02:05:09 np0005548731 nova_compute[232433]: 2025-12-06 07:05:09.910 232437 DEBUG nova.network.neutron [req-e582b03f-7925-426a-98bc-8886b3318236 req-4c6045cd-1894-40e8-bd40-4901c622c29e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 9b0dea9b-128d-43a8-aedd-6a023517b89f] Refreshing network info cache for port c25d2974-9b24-4cff-862e-5f43e77d56be _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  6 02:05:10 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 02:05:10 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/4125302750' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 02:05:10 np0005548731 nova_compute[232433]: 2025-12-06 07:05:10.204 232437 DEBUG oslo_concurrency.processutils [None req-7217ce72-b1f2-4333-8cf6-95e5e446949b daab2cfaa69a4e9f819a57290bfd54d9 646c511edd3f4a5c93117e8dcfea183b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.468s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:05:10 np0005548731 nova_compute[232433]: 2025-12-06 07:05:10.211 232437 DEBUG nova.compute.provider_tree [None req-7217ce72-b1f2-4333-8cf6-95e5e446949b daab2cfaa69a4e9f819a57290bfd54d9 646c511edd3f4a5c93117e8dcfea183b - - default default] Inventory has not changed in ProviderTree for provider: 6d00757a-082f-486d-ae84-869a2ba2e6e7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  6 02:05:10 np0005548731 nova_compute[232433]: 2025-12-06 07:05:10.229 232437 DEBUG nova.scheduler.client.report [None req-7217ce72-b1f2-4333-8cf6-95e5e446949b daab2cfaa69a4e9f819a57290bfd54d9 646c511edd3f4a5c93117e8dcfea183b - - default default] Inventory has not changed for provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  6 02:05:10 np0005548731 nova_compute[232433]: 2025-12-06 07:05:10.253 232437 DEBUG nova.compute.resource_tracker [None req-7217ce72-b1f2-4333-8cf6-95e5e446949b daab2cfaa69a4e9f819a57290bfd54d9 646c511edd3f4a5c93117e8dcfea183b - - default default] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  6 02:05:10 np0005548731 nova_compute[232433]: 2025-12-06 07:05:10.254 232437 DEBUG oslo_concurrency.lockutils [None req-7217ce72-b1f2-4333-8cf6-95e5e446949b daab2cfaa69a4e9f819a57290bfd54d9 646c511edd3f4a5c93117e8dcfea183b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.678s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:05:10 np0005548731 nova_compute[232433]: 2025-12-06 07:05:10.260 232437 INFO nova.compute.manager [None req-7217ce72-b1f2-4333-8cf6-95e5e446949b daab2cfaa69a4e9f819a57290bfd54d9 646c511edd3f4a5c93117e8dcfea183b - - default default] [instance: 1712064c-6ba8-4660-972f-1e827a40781a] Migrating instance to compute-1.ctlplane.example.com finished successfully.#033[00m
Dec  6 02:05:10 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:05:10 np0005548731 nova_compute[232433]: 2025-12-06 07:05:10.352 232437 INFO nova.scheduler.client.report [None req-7217ce72-b1f2-4333-8cf6-95e5e446949b daab2cfaa69a4e9f819a57290bfd54d9 646c511edd3f4a5c93117e8dcfea183b - - default default] Deleted allocation for migration 66ff6ed2-80e5-4a70-9b00-2852c6d84ac4#033[00m
Dec  6 02:05:10 np0005548731 nova_compute[232433]: 2025-12-06 07:05:10.352 232437 DEBUG nova.virt.libvirt.driver [None req-7217ce72-b1f2-4333-8cf6-95e5e446949b daab2cfaa69a4e9f819a57290bfd54d9 646c511edd3f4a5c93117e8dcfea183b - - default default] [instance: 1712064c-6ba8-4660-972f-1e827a40781a] Live migration monitoring is all done _live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10662#033[00m
Dec  6 02:05:11 np0005548731 nova_compute[232433]: 2025-12-06 07:05:11.288 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:05:11 np0005548731 nova_compute[232433]: 2025-12-06 07:05:11.793 232437 DEBUG nova.network.neutron [req-e582b03f-7925-426a-98bc-8886b3318236 req-4c6045cd-1894-40e8-bd40-4901c622c29e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 9b0dea9b-128d-43a8-aedd-6a023517b89f] Updated VIF entry in instance network info cache for port c25d2974-9b24-4cff-862e-5f43e77d56be. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  6 02:05:11 np0005548731 nova_compute[232433]: 2025-12-06 07:05:11.794 232437 DEBUG nova.network.neutron [req-e582b03f-7925-426a-98bc-8886b3318236 req-4c6045cd-1894-40e8-bd40-4901c622c29e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 9b0dea9b-128d-43a8-aedd-6a023517b89f] Updating instance_info_cache with network_info: [{"id": "c25d2974-9b24-4cff-862e-5f43e77d56be", "address": "fa:16:3e:36:88:b5", "network": {"id": "ebf7a358-f0c9-48b5-8485-23c08f737784", "bridge": "br-int", "label": "tempest-AttachSCSIVolumeTestJSON-863315842-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.225", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "edcc68bbf9cd4c9189e59964db884a26", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc25d2974-9b", "ovs_interfaceid": "c25d2974-9b24-4cff-862e-5f43e77d56be", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 02:05:11 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:05:11 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:05:11 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:05:11.806 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:05:11 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:05:11 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:05:11 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:05:11.808 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:05:11 np0005548731 nova_compute[232433]: 2025-12-06 07:05:11.821 232437 DEBUG oslo_concurrency.lockutils [req-e582b03f-7925-426a-98bc-8886b3318236 req-4c6045cd-1894-40e8-bd40-4901c622c29e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-9b0dea9b-128d-43a8-aedd-6a023517b89f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 02:05:12 np0005548731 nova_compute[232433]: 2025-12-06 07:05:12.180 232437 DEBUG nova.virt.libvirt.driver [None req-9e1e7cc8-64bd-4145-90a5-d4ac65535ddd daab2cfaa69a4e9f819a57290bfd54d9 646c511edd3f4a5c93117e8dcfea183b - - default default] [instance: 1712064c-6ba8-4660-972f-1e827a40781a] Creating tmpfile /var/lib/nova/instances/tmpwy14j4_p to notify to other compute nodes that they should mount the same storage. _create_shared_storage_test_file /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10041#033[00m
Dec  6 02:05:12 np0005548731 nova_compute[232433]: 2025-12-06 07:05:12.182 232437 DEBUG nova.compute.manager [None req-9e1e7cc8-64bd-4145-90a5-d4ac65535ddd daab2cfaa69a4e9f819a57290bfd54d9 646c511edd3f4a5c93117e8dcfea183b - - default default] destination check data is LibvirtLiveMigrateData(bdms=<?>,block_migration=False,disk_available_mb=19456,disk_over_commit=False,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpwy14j4_p',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path=<?>,is_shared_block_storage=<?>,is_shared_instance_path=<?>,is_volume_backed=<?>,migration=<?>,old_vol_attachment_ids=<?>,serial_listen_addr=None,serial_listen_ports=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) check_can_live_migrate_destination /usr/lib/python3.9/site-packages/nova/compute/manager.py:8476#033[00m
Dec  6 02:05:12 np0005548731 nova_compute[232433]: 2025-12-06 07:05:12.966 232437 DEBUG nova.compute.manager [None req-9e1e7cc8-64bd-4145-90a5-d4ac65535ddd daab2cfaa69a4e9f819a57290bfd54d9 646c511edd3f4a5c93117e8dcfea183b - - default default] pre_live_migration data is LibvirtLiveMigrateData(bdms=<?>,block_migration=False,disk_available_mb=19456,disk_over_commit=False,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpwy14j4_p',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='1712064c-6ba8-4660-972f-1e827a40781a',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=True,migration=<?>,old_vol_attachment_ids=<?>,serial_listen_addr=None,serial_listen_ports=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) pre_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:8604#033[00m
Dec  6 02:05:12 np0005548731 nova_compute[232433]: 2025-12-06 07:05:12.992 232437 DEBUG oslo_concurrency.lockutils [None req-9e1e7cc8-64bd-4145-90a5-d4ac65535ddd daab2cfaa69a4e9f819a57290bfd54d9 646c511edd3f4a5c93117e8dcfea183b - - default default] Acquiring lock "refresh_cache-1712064c-6ba8-4660-972f-1e827a40781a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 02:05:12 np0005548731 nova_compute[232433]: 2025-12-06 07:05:12.992 232437 DEBUG oslo_concurrency.lockutils [None req-9e1e7cc8-64bd-4145-90a5-d4ac65535ddd daab2cfaa69a4e9f819a57290bfd54d9 646c511edd3f4a5c93117e8dcfea183b - - default default] Acquired lock "refresh_cache-1712064c-6ba8-4660-972f-1e827a40781a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 02:05:12 np0005548731 nova_compute[232433]: 2025-12-06 07:05:12.993 232437 DEBUG nova.network.neutron [None req-9e1e7cc8-64bd-4145-90a5-d4ac65535ddd daab2cfaa69a4e9f819a57290bfd54d9 646c511edd3f4a5c93117e8dcfea183b - - default default] [instance: 1712064c-6ba8-4660-972f-1e827a40781a] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec  6 02:05:13 np0005548731 nova_compute[232433]: 2025-12-06 07:05:13.040 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:05:13 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:05:13 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4486f0 =====
Dec  6 02:05:13 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4486f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:05:13 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:05:13 np0005548731 radosgw[81136]: beast: 0x7fc5ee4486f0: 192.168.122.100 - anonymous [06/Dec/2025:07:05:13.810 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:05:13 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:05:13.809 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:05:14 np0005548731 nova_compute[232433]: 2025-12-06 07:05:14.157 232437 DEBUG nova.network.neutron [None req-9e1e7cc8-64bd-4145-90a5-d4ac65535ddd daab2cfaa69a4e9f819a57290bfd54d9 646c511edd3f4a5c93117e8dcfea183b - - default default] [instance: 1712064c-6ba8-4660-972f-1e827a40781a] Updating instance_info_cache with network_info: [{"id": "9e200480-deff-4f39-9945-230d157ac906", "address": "fa:16:3e:e9:3c:15", "network": {"id": "9238b9b5-08f5-4634-bd05-370e3192b201", "bridge": "br-int", "label": "tempest-LiveMigrationTest-1420048747-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9e86c61372e24db392d4a12ca71f7e00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9e200480-de", "ovs_interfaceid": "9e200480-deff-4f39-9945-230d157ac906", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 02:05:14 np0005548731 nova_compute[232433]: 2025-12-06 07:05:14.171 232437 DEBUG oslo_concurrency.lockutils [None req-9e1e7cc8-64bd-4145-90a5-d4ac65535ddd daab2cfaa69a4e9f819a57290bfd54d9 646c511edd3f4a5c93117e8dcfea183b - - default default] Releasing lock "refresh_cache-1712064c-6ba8-4660-972f-1e827a40781a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 02:05:14 np0005548731 nova_compute[232433]: 2025-12-06 07:05:14.172 232437 DEBUG os_brick.utils [None req-9e1e7cc8-64bd-4145-90a5-d4ac65535ddd daab2cfaa69a4e9f819a57290bfd54d9 646c511edd3f4a5c93117e8dcfea183b - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.102', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-2.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Dec  6 02:05:14 np0005548731 nova_compute[232433]: 2025-12-06 07:05:14.173 237736 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:05:14 np0005548731 nova_compute[232433]: 2025-12-06 07:05:14.183 237736 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:05:14 np0005548731 nova_compute[232433]: 2025-12-06 07:05:14.183 237736 DEBUG oslo.privsep.daemon [-] privsep: reply[a4e2e871-3ac4-49bf-9a7f-e26c56aae34d]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:05:14 np0005548731 nova_compute[232433]: 2025-12-06 07:05:14.184 237736 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:05:14 np0005548731 nova_compute[232433]: 2025-12-06 07:05:14.190 237736 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.007s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:05:14 np0005548731 nova_compute[232433]: 2025-12-06 07:05:14.191 237736 DEBUG oslo.privsep.daemon [-] privsep: reply[07c1781e-7803-4b98-8363-43bd39d2e7d2]: (4, ('InitiatorName=iqn.1994-05.com.redhat:63778d5959f0', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:05:14 np0005548731 nova_compute[232433]: 2025-12-06 07:05:14.193 237736 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:05:14 np0005548731 nova_compute[232433]: 2025-12-06 07:05:14.208 237736 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.015s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:05:14 np0005548731 nova_compute[232433]: 2025-12-06 07:05:14.208 237736 DEBUG oslo.privsep.daemon [-] privsep: reply[6ccefd62-ba21-4fd8-9849-d73bff791151]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:05:14 np0005548731 nova_compute[232433]: 2025-12-06 07:05:14.210 237736 DEBUG oslo.privsep.daemon [-] privsep: reply[8e9c3340-a5a1-40bb-baa5-f029ace8d791]: (4, 'ec2492e2-e0d1-43d3-b74f-744a8272a0e6') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:05:14 np0005548731 nova_compute[232433]: 2025-12-06 07:05:14.210 232437 DEBUG oslo_concurrency.processutils [None req-9e1e7cc8-64bd-4145-90a5-d4ac65535ddd daab2cfaa69a4e9f819a57290bfd54d9 646c511edd3f4a5c93117e8dcfea183b - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:05:14 np0005548731 nova_compute[232433]: 2025-12-06 07:05:14.249 232437 DEBUG oslo_concurrency.processutils [None req-9e1e7cc8-64bd-4145-90a5-d4ac65535ddd daab2cfaa69a4e9f819a57290bfd54d9 646c511edd3f4a5c93117e8dcfea183b - - default default] CMD "nvme version" returned: 0 in 0.039s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:05:14 np0005548731 nova_compute[232433]: 2025-12-06 07:05:14.252 232437 DEBUG os_brick.initiator.connectors.lightos [None req-9e1e7cc8-64bd-4145-90a5-d4ac65535ddd daab2cfaa69a4e9f819a57290bfd54d9 646c511edd3f4a5c93117e8dcfea183b - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Dec  6 02:05:14 np0005548731 nova_compute[232433]: 2025-12-06 07:05:14.252 232437 DEBUG os_brick.initiator.connectors.lightos [None req-9e1e7cc8-64bd-4145-90a5-d4ac65535ddd daab2cfaa69a4e9f819a57290bfd54d9 646c511edd3f4a5c93117e8dcfea183b - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Dec  6 02:05:14 np0005548731 nova_compute[232433]: 2025-12-06 07:05:14.253 232437 DEBUG os_brick.initiator.connectors.lightos [None req-9e1e7cc8-64bd-4145-90a5-d4ac65535ddd daab2cfaa69a4e9f819a57290bfd54d9 646c511edd3f4a5c93117e8dcfea183b - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:bf3e0a14-a5f8-4123-aa26-e7cad37b879a dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Dec  6 02:05:14 np0005548731 nova_compute[232433]: 2025-12-06 07:05:14.253 232437 DEBUG os_brick.utils [None req-9e1e7cc8-64bd-4145-90a5-d4ac65535ddd daab2cfaa69a4e9f819a57290bfd54d9 646c511edd3f4a5c93117e8dcfea183b - - default default] <== get_connector_properties: return (79ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.102', 'host': 'compute-2.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:63778d5959f0', 'do_local_attach': False, 'nvme_hostid': 'bf3e0a14-a5f8-4123-aa26-e7cad37b879a', 'system uuid': 'ec2492e2-e0d1-43d3-b74f-744a8272a0e6', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:bf3e0a14-a5f8-4123-aa26-e7cad37b879a', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Dec  6 02:05:15 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:05:15 np0005548731 nova_compute[232433]: 2025-12-06 07:05:15.532 232437 DEBUG nova.virt.libvirt.driver [None req-9e1e7cc8-64bd-4145-90a5-d4ac65535ddd daab2cfaa69a4e9f819a57290bfd54d9 646c511edd3f4a5c93117e8dcfea183b - - default default] [instance: 1712064c-6ba8-4660-972f-1e827a40781a] migrate_data in pre_live_migration: LibvirtLiveMigrateData(bdms=<?>,block_migration=False,disk_available_mb=19456,disk_over_commit=False,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpwy14j4_p',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='1712064c-6ba8-4660-972f-1e827a40781a',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=True,migration=<?>,old_vol_attachment_ids={2b74c89f-1018-4ad4-8af8-84109979b9c7='9c9d8800-f1b1-4d13-bc69-4ef58df68604'},serial_listen_addr=None,serial_listen_ports=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) pre_live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10827#033[00m
Dec  6 02:05:15 np0005548731 nova_compute[232433]: 2025-12-06 07:05:15.534 232437 DEBUG nova.virt.libvirt.driver [None req-9e1e7cc8-64bd-4145-90a5-d4ac65535ddd daab2cfaa69a4e9f819a57290bfd54d9 646c511edd3f4a5c93117e8dcfea183b - - default default] [instance: 1712064c-6ba8-4660-972f-1e827a40781a] Creating instance directory: /var/lib/nova/instances/1712064c-6ba8-4660-972f-1e827a40781a pre_live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10840#033[00m
Dec  6 02:05:15 np0005548731 nova_compute[232433]: 2025-12-06 07:05:15.534 232437 DEBUG nova.virt.libvirt.driver [None req-9e1e7cc8-64bd-4145-90a5-d4ac65535ddd daab2cfaa69a4e9f819a57290bfd54d9 646c511edd3f4a5c93117e8dcfea183b - - default default] [instance: 1712064c-6ba8-4660-972f-1e827a40781a] Ensure instance console log exists: /var/lib/nova/instances/1712064c-6ba8-4660-972f-1e827a40781a/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec  6 02:05:15 np0005548731 nova_compute[232433]: 2025-12-06 07:05:15.535 232437 DEBUG nova.virt.libvirt.driver [None req-9e1e7cc8-64bd-4145-90a5-d4ac65535ddd daab2cfaa69a4e9f819a57290bfd54d9 646c511edd3f4a5c93117e8dcfea183b - - default default] [instance: 1712064c-6ba8-4660-972f-1e827a40781a] Connecting volumes before live migration. pre_live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10901#033[00m
Dec  6 02:05:15 np0005548731 nova_compute[232433]: 2025-12-06 07:05:15.539 232437 DEBUG nova.virt.libvirt.driver [None req-9e1e7cc8-64bd-4145-90a5-d4ac65535ddd daab2cfaa69a4e9f819a57290bfd54d9 646c511edd3f4a5c93117e8dcfea183b - - default default] [instance: 1712064c-6ba8-4660-972f-1e827a40781a] Plugging VIFs using destination host port bindings before live migration. _pre_live_migration_plug_vifs /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10794#033[00m
Dec  6 02:05:15 np0005548731 nova_compute[232433]: 2025-12-06 07:05:15.542 232437 DEBUG nova.virt.libvirt.vif [None req-9e1e7cc8-64bd-4145-90a5-d4ac65535ddd daab2cfaa69a4e9f819a57290bfd54d9 646c511edd3f4a5c93117e8dcfea183b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-12-06T07:04:37Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-LiveMigrationTest-server-1567617044',display_name='tempest-LiveMigrationTest-server-1567617044',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-livemigrationtest-server-1567617044',id=27,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-06T07:04:50Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='9e86c61372e24db392d4a12ca71f7e00',ramdisk_id='',reservation_id='r-i5n1flwt',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-LiveMigrationTest-854827502',owner_user_name='tempest-LiveMigrationTest-854827502-project-member'},tags=<?>,task_state='migrating',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T07:05:07Z,user_data=None,user_id='756e3e1fa7e44042bdf37a6cdd877fac',uuid=1712064c-6ba8-4660-972f-1e827a40781a,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "9e200480-deff-4f39-9945-230d157ac906", "address": "fa:16:3e:e9:3c:15", "network": {"id": "9238b9b5-08f5-4634-bd05-370e3192b201", "bridge": "br-int", "label": "tempest-LiveMigrationTest-1420048747-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9e86c61372e24db392d4a12ca71f7e00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tap9e200480-de", "ovs_interfaceid": "9e200480-deff-4f39-9945-230d157ac906", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec  6 02:05:15 np0005548731 nova_compute[232433]: 2025-12-06 07:05:15.543 232437 DEBUG nova.network.os_vif_util [None req-9e1e7cc8-64bd-4145-90a5-d4ac65535ddd daab2cfaa69a4e9f819a57290bfd54d9 646c511edd3f4a5c93117e8dcfea183b - - default default] Converting VIF {"id": "9e200480-deff-4f39-9945-230d157ac906", "address": "fa:16:3e:e9:3c:15", "network": {"id": "9238b9b5-08f5-4634-bd05-370e3192b201", "bridge": "br-int", "label": "tempest-LiveMigrationTest-1420048747-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9e86c61372e24db392d4a12ca71f7e00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tap9e200480-de", "ovs_interfaceid": "9e200480-deff-4f39-9945-230d157ac906", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 02:05:15 np0005548731 nova_compute[232433]: 2025-12-06 07:05:15.546 232437 DEBUG nova.network.os_vif_util [None req-9e1e7cc8-64bd-4145-90a5-d4ac65535ddd daab2cfaa69a4e9f819a57290bfd54d9 646c511edd3f4a5c93117e8dcfea183b - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:e9:3c:15,bridge_name='br-int',has_traffic_filtering=True,id=9e200480-deff-4f39-9945-230d157ac906,network=Network(9238b9b5-08f5-4634-bd05-370e3192b201),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9e200480-de') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 02:05:15 np0005548731 nova_compute[232433]: 2025-12-06 07:05:15.547 232437 DEBUG os_vif [None req-9e1e7cc8-64bd-4145-90a5-d4ac65535ddd daab2cfaa69a4e9f819a57290bfd54d9 646c511edd3f4a5c93117e8dcfea183b - - default default] Plugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:e9:3c:15,bridge_name='br-int',has_traffic_filtering=True,id=9e200480-deff-4f39-9945-230d157ac906,network=Network(9238b9b5-08f5-4634-bd05-370e3192b201),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9e200480-de') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec  6 02:05:15 np0005548731 nova_compute[232433]: 2025-12-06 07:05:15.549 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:05:15 np0005548731 nova_compute[232433]: 2025-12-06 07:05:15.550 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:05:15 np0005548731 nova_compute[232433]: 2025-12-06 07:05:15.551 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  6 02:05:15 np0005548731 nova_compute[232433]: 2025-12-06 07:05:15.555 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:05:15 np0005548731 nova_compute[232433]: 2025-12-06 07:05:15.556 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9e200480-de, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:05:15 np0005548731 nova_compute[232433]: 2025-12-06 07:05:15.557 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap9e200480-de, col_values=(('external_ids', {'iface-id': '9e200480-deff-4f39-9945-230d157ac906', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:e9:3c:15', 'vm-uuid': '1712064c-6ba8-4660-972f-1e827a40781a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:05:15 np0005548731 NetworkManager[49182]: <info>  [1765004715.5604] manager: (tap9e200480-de): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/67)
Dec  6 02:05:15 np0005548731 nova_compute[232433]: 2025-12-06 07:05:15.559 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:05:15 np0005548731 nova_compute[232433]: 2025-12-06 07:05:15.564 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  6 02:05:15 np0005548731 nova_compute[232433]: 2025-12-06 07:05:15.568 232437 INFO os_vif [None req-9e1e7cc8-64bd-4145-90a5-d4ac65535ddd daab2cfaa69a4e9f819a57290bfd54d9 646c511edd3f4a5c93117e8dcfea183b - - default default] Successfully plugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:e9:3c:15,bridge_name='br-int',has_traffic_filtering=True,id=9e200480-deff-4f39-9945-230d157ac906,network=Network(9238b9b5-08f5-4634-bd05-370e3192b201),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9e200480-de')#033[00m
Dec  6 02:05:15 np0005548731 nova_compute[232433]: 2025-12-06 07:05:15.572 232437 DEBUG nova.virt.libvirt.driver [None req-9e1e7cc8-64bd-4145-90a5-d4ac65535ddd daab2cfaa69a4e9f819a57290bfd54d9 646c511edd3f4a5c93117e8dcfea183b - - default default] No dst_numa_info in migrate_data, no cores to power up in pre_live_migration. pre_live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10954#033[00m
Dec  6 02:05:15 np0005548731 nova_compute[232433]: 2025-12-06 07:05:15.572 232437 DEBUG nova.compute.manager [None req-9e1e7cc8-64bd-4145-90a5-d4ac65535ddd daab2cfaa69a4e9f819a57290bfd54d9 646c511edd3f4a5c93117e8dcfea183b - - default default] driver pre_live_migration data is LibvirtLiveMigrateData(bdms=[LibvirtLiveMigrateBDMInfo],block_migration=False,disk_available_mb=19456,disk_over_commit=False,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpwy14j4_p',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='1712064c-6ba8-4660-972f-1e827a40781a',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=True,migration=<?>,old_vol_attachment_ids={2b74c89f-1018-4ad4-8af8-84109979b9c7='9c9d8800-f1b1-4d13-bc69-4ef58df68604'},serial_listen_addr=None,serial_listen_ports=[],src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=[],target_connect_addr=None,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) pre_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:8668#033[00m
Dec  6 02:05:15 np0005548731 nova_compute[232433]: 2025-12-06 07:05:15.602 232437 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765004700.600969, 1712064c-6ba8-4660-972f-1e827a40781a => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 02:05:15 np0005548731 nova_compute[232433]: 2025-12-06 07:05:15.602 232437 INFO nova.compute.manager [-] [instance: 1712064c-6ba8-4660-972f-1e827a40781a] VM Stopped (Lifecycle Event)#033[00m
Dec  6 02:05:15 np0005548731 nova_compute[232433]: 2025-12-06 07:05:15.619 232437 DEBUG nova.compute.manager [None req-4349d193-7832-4a56-a932-c079a4805d5a - - - - - -] [instance: 1712064c-6ba8-4660-972f-1e827a40781a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:05:15 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:05:15 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:05:15 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:05:15.811 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:05:15 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:05:15 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 02:05:15 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:05:15.812 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 02:05:16 np0005548731 nova_compute[232433]: 2025-12-06 07:05:16.318 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:05:16 np0005548731 nova_compute[232433]: 2025-12-06 07:05:16.498 232437 DEBUG nova.network.neutron [None req-9e1e7cc8-64bd-4145-90a5-d4ac65535ddd daab2cfaa69a4e9f819a57290bfd54d9 646c511edd3f4a5c93117e8dcfea183b - - default default] [instance: 1712064c-6ba8-4660-972f-1e827a40781a] Port 9e200480-deff-4f39-9945-230d157ac906 updated with migration profile {'os_vif_delegation': True, 'migrating_to': 'compute-2.ctlplane.example.com'} successfully _setup_migration_port_profile /usr/lib/python3.9/site-packages/nova/network/neutron.py:354#033[00m
Dec  6 02:05:16 np0005548731 nova_compute[232433]: 2025-12-06 07:05:16.846 232437 DEBUG nova.compute.manager [None req-9e1e7cc8-64bd-4145-90a5-d4ac65535ddd daab2cfaa69a4e9f819a57290bfd54d9 646c511edd3f4a5c93117e8dcfea183b - - default default] pre_live_migration result data is LibvirtLiveMigrateData(bdms=[LibvirtLiveMigrateBDMInfo],block_migration=False,disk_available_mb=19456,disk_over_commit=False,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpwy14j4_p',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='1712064c-6ba8-4660-972f-1e827a40781a',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=True,migration=<?>,old_vol_attachment_ids={2b74c89f-1018-4ad4-8af8-84109979b9c7='9c9d8800-f1b1-4d13-bc69-4ef58df68604'},serial_listen_addr=None,serial_listen_ports=[],src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=[],target_connect_addr=None,vifs=[VIFMigrateData],wait_for_vif_plugged=True) pre_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:8723#033[00m
Dec  6 02:05:17 np0005548731 kernel: tap9e200480-de: entered promiscuous mode
Dec  6 02:05:17 np0005548731 NetworkManager[49182]: <info>  [1765004717.0615] manager: (tap9e200480-de): new Tun device (/org/freedesktop/NetworkManager/Devices/68)
Dec  6 02:05:17 np0005548731 ovn_controller[133927]: 2025-12-06T07:05:17Z|00106|binding|INFO|Claiming lport 9e200480-deff-4f39-9945-230d157ac906 for this additional chassis.
Dec  6 02:05:17 np0005548731 ovn_controller[133927]: 2025-12-06T07:05:17Z|00107|binding|INFO|9e200480-deff-4f39-9945-230d157ac906: Claiming fa:16:3e:e9:3c:15 10.100.0.13
Dec  6 02:05:17 np0005548731 nova_compute[232433]: 2025-12-06 07:05:17.063 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:05:17 np0005548731 ovn_controller[133927]: 2025-12-06T07:05:17Z|00108|binding|INFO|Setting lport 9e200480-deff-4f39-9945-230d157ac906 ovn-installed in OVS
Dec  6 02:05:17 np0005548731 nova_compute[232433]: 2025-12-06 07:05:17.083 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:05:17 np0005548731 nova_compute[232433]: 2025-12-06 07:05:17.087 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:05:17 np0005548731 systemd-udevd[248202]: Network interface NamePolicy= disabled on kernel command line.
Dec  6 02:05:17 np0005548731 systemd-machined[195355]: New machine qemu-16-instance-0000001b.
Dec  6 02:05:17 np0005548731 systemd[1]: Started Virtual Machine qemu-16-instance-0000001b.
Dec  6 02:05:17 np0005548731 NetworkManager[49182]: <info>  [1765004717.1037] device (tap9e200480-de): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec  6 02:05:17 np0005548731 NetworkManager[49182]: <info>  [1765004717.1050] device (tap9e200480-de): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec  6 02:05:17 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e170 e170: 3 total, 3 up, 3 in
Dec  6 02:05:17 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:05:17 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:05:17 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:05:17.813 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:05:17 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:05:17 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:05:17 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:05:17.815 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:05:17 np0005548731 nova_compute[232433]: 2025-12-06 07:05:17.939 232437 DEBUG nova.virt.driver [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Emitting event <LifecycleEvent: 1765004717.9389482, 1712064c-6ba8-4660-972f-1e827a40781a => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 02:05:17 np0005548731 nova_compute[232433]: 2025-12-06 07:05:17.940 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 1712064c-6ba8-4660-972f-1e827a40781a] VM Started (Lifecycle Event)#033[00m
Dec  6 02:05:17 np0005548731 nova_compute[232433]: 2025-12-06 07:05:17.966 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 1712064c-6ba8-4660-972f-1e827a40781a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:05:18 np0005548731 nova_compute[232433]: 2025-12-06 07:05:18.464 232437 DEBUG nova.virt.driver [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Emitting event <LifecycleEvent: 1765004718.464604, 1712064c-6ba8-4660-972f-1e827a40781a => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 02:05:18 np0005548731 nova_compute[232433]: 2025-12-06 07:05:18.465 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 1712064c-6ba8-4660-972f-1e827a40781a] VM Resumed (Lifecycle Event)#033[00m
Dec  6 02:05:18 np0005548731 nova_compute[232433]: 2025-12-06 07:05:18.484 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 1712064c-6ba8-4660-972f-1e827a40781a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:05:18 np0005548731 nova_compute[232433]: 2025-12-06 07:05:18.488 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 1712064c-6ba8-4660-972f-1e827a40781a] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: migrating, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  6 02:05:18 np0005548731 nova_compute[232433]: 2025-12-06 07:05:18.506 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 1712064c-6ba8-4660-972f-1e827a40781a] During the sync_power process the instance has moved from host compute-1.ctlplane.example.com to host compute-2.ctlplane.example.com#033[00m
Dec  6 02:05:19 np0005548731 ovn_controller[133927]: 2025-12-06T07:05:19Z|00014|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:36:88:b5 10.100.0.5
Dec  6 02:05:19 np0005548731 ovn_controller[133927]: 2025-12-06T07:05:19Z|00015|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:36:88:b5 10.100.0.5
Dec  6 02:05:19 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e171 e171: 3 total, 3 up, 3 in
Dec  6 02:05:19 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:05:19 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:05:19 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:05:19.815 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:05:19 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:05:19 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 02:05:19 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:05:19.817 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 02:05:20 np0005548731 ovn_controller[133927]: 2025-12-06T07:05:20Z|00109|binding|INFO|Claiming lport 9e200480-deff-4f39-9945-230d157ac906 for this chassis.
Dec  6 02:05:20 np0005548731 ovn_controller[133927]: 2025-12-06T07:05:20Z|00110|binding|INFO|9e200480-deff-4f39-9945-230d157ac906: Claiming fa:16:3e:e9:3c:15 10.100.0.13
Dec  6 02:05:20 np0005548731 ovn_controller[133927]: 2025-12-06T07:05:20Z|00111|binding|INFO|Setting lport 9e200480-deff-4f39-9945-230d157ac906 up in Southbound
Dec  6 02:05:20 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:05:20.116 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e9:3c:15 10.100.0.13'], port_security=['fa:16:3e:e9:3c:15 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '1712064c-6ba8-4660-972f-1e827a40781a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9238b9b5-08f5-4634-bd05-370e3192b201', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9e86c61372e24db392d4a12ca71f7e00', 'neutron:revision_number': '21', 'neutron:security_group_ids': '36a83e30-1797-4590-94d1-4f6fcbdcefb2', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=db7e9816-53a6-4d9a-be4a-e5a8dcf8a64b, chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], logical_port=9e200480-deff-4f39-9945-230d157ac906) old=Port_Binding(up=[False], additional_chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 02:05:20 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:05:20.117 143965 INFO neutron.agent.ovn.metadata.agent [-] Port 9e200480-deff-4f39-9945-230d157ac906 in datapath 9238b9b5-08f5-4634-bd05-370e3192b201 bound to our chassis#033[00m
Dec  6 02:05:20 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:05:20.119 143965 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 9238b9b5-08f5-4634-bd05-370e3192b201#033[00m
Dec  6 02:05:20 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:05:20.131 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[0d23b974-e2c4-40f4-95b4-3bff5bb1fee1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:05:20 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:05:20.132 143965 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap9238b9b5-01 in ovnmeta-9238b9b5-08f5-4634-bd05-370e3192b201 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Dec  6 02:05:20 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:05:20.134 236668 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap9238b9b5-00 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Dec  6 02:05:20 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:05:20.134 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[4470312b-b9d9-4d86-ba35-f81bed7efcbd]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:05:20 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:05:20.135 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[d5325baa-b4b3-405d-8da2-bb37b09fc718]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:05:20 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:05:20.150 144079 DEBUG oslo.privsep.daemon [-] privsep: reply[930143cc-b4dd-4a25-b6bb-7ef70e6842fb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:05:20 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:05:20.169 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[b887b4e6-a55e-4ff4-aa20-b443590e9b60]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:05:20 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:05:20.196 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[4de2fa11-280d-4c5b-8fd0-4fa66773f2aa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:05:20 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:05:20.201 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[d6bc8bf0-1b94-46cc-a767-9e69ffc010d5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:05:20 np0005548731 NetworkManager[49182]: <info>  [1765004720.2028] manager: (tap9238b9b5-00): new Veth device (/org/freedesktop/NetworkManager/Devices/69)
Dec  6 02:05:20 np0005548731 systemd-udevd[248262]: Network interface NamePolicy= disabled on kernel command line.
Dec  6 02:05:20 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:05:20.247 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[0b09ca7c-d05f-48ba-8794-7a562c18c7c3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:05:20 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:05:20.250 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[b3f1974a-90db-41ce-bc8d-8a4b1974800e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:05:20 np0005548731 NetworkManager[49182]: <info>  [1765004720.2747] device (tap9238b9b5-00): carrier: link connected
Dec  6 02:05:20 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:05:20.280 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[5729f342-0351-4c60-b3d8-2674028dc2dd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:05:20 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e171 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:05:20 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:05:20.300 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[6364fd72-eeea-4f22-80f4-19c8e84e84d3]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9238b9b5-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:51:4f:f3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 39], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 498228, 'reachable_time': 37172, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 248281, 'error': None, 'target': 'ovnmeta-9238b9b5-08f5-4634-bd05-370e3192b201', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:05:20 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:05:20.316 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[76ef978c-76e1-45b2-889e-5fdc2923b8d9]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe51:4ff3'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 498228, 'tstamp': 498228}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 248282, 'error': None, 'target': 'ovnmeta-9238b9b5-08f5-4634-bd05-370e3192b201', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:05:20 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:05:20.336 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[399d5b6b-f1f8-4fcb-9819-daf8199afbcb]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9238b9b5-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:51:4f:f3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 39], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 498228, 'reachable_time': 37172, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 248283, 'error': None, 'target': 'ovnmeta-9238b9b5-08f5-4634-bd05-370e3192b201', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:05:20 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:05:20.375 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[628b2eb1-53c3-40fe-9f47-82d8294140aa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:05:20 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:05:20.452 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[6fa2a5c4-9935-4f76-8288-4de8e87d0539]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:05:20 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:05:20.454 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9238b9b5-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:05:20 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:05:20.454 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  6 02:05:20 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:05:20.455 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9238b9b5-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:05:20 np0005548731 nova_compute[232433]: 2025-12-06 07:05:20.457 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:05:20 np0005548731 NetworkManager[49182]: <info>  [1765004720.4584] manager: (tap9238b9b5-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/70)
Dec  6 02:05:20 np0005548731 kernel: tap9238b9b5-00: entered promiscuous mode
Dec  6 02:05:20 np0005548731 nova_compute[232433]: 2025-12-06 07:05:20.459 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:05:20 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:05:20.461 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap9238b9b5-00, col_values=(('external_ids', {'iface-id': '5c223717-35ae-4662-bf3f-55f7a73b7a9a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:05:20 np0005548731 ovn_controller[133927]: 2025-12-06T07:05:20Z|00112|binding|INFO|Releasing lport 5c223717-35ae-4662-bf3f-55f7a73b7a9a from this chassis (sb_readonly=0)
Dec  6 02:05:20 np0005548731 nova_compute[232433]: 2025-12-06 07:05:20.462 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:05:20 np0005548731 nova_compute[232433]: 2025-12-06 07:05:20.479 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:05:20 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:05:20.480 143965 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/9238b9b5-08f5-4634-bd05-370e3192b201.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/9238b9b5-08f5-4634-bd05-370e3192b201.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Dec  6 02:05:20 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:05:20.481 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[5ee17bb0-61a0-48ed-aebf-d02e9efa9140]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:05:20 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:05:20.481 143965 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec  6 02:05:20 np0005548731 ovn_metadata_agent[143960]: global
Dec  6 02:05:20 np0005548731 ovn_metadata_agent[143960]:    log         /dev/log local0 debug
Dec  6 02:05:20 np0005548731 ovn_metadata_agent[143960]:    log-tag     haproxy-metadata-proxy-9238b9b5-08f5-4634-bd05-370e3192b201
Dec  6 02:05:20 np0005548731 ovn_metadata_agent[143960]:    user        root
Dec  6 02:05:20 np0005548731 ovn_metadata_agent[143960]:    group       root
Dec  6 02:05:20 np0005548731 ovn_metadata_agent[143960]:    maxconn     1024
Dec  6 02:05:20 np0005548731 ovn_metadata_agent[143960]:    pidfile     /var/lib/neutron/external/pids/9238b9b5-08f5-4634-bd05-370e3192b201.pid.haproxy
Dec  6 02:05:20 np0005548731 ovn_metadata_agent[143960]:    daemon
Dec  6 02:05:20 np0005548731 ovn_metadata_agent[143960]: 
Dec  6 02:05:20 np0005548731 ovn_metadata_agent[143960]: defaults
Dec  6 02:05:20 np0005548731 ovn_metadata_agent[143960]:    log global
Dec  6 02:05:20 np0005548731 ovn_metadata_agent[143960]:    mode http
Dec  6 02:05:20 np0005548731 ovn_metadata_agent[143960]:    option httplog
Dec  6 02:05:20 np0005548731 ovn_metadata_agent[143960]:    option dontlognull
Dec  6 02:05:20 np0005548731 ovn_metadata_agent[143960]:    option http-server-close
Dec  6 02:05:20 np0005548731 ovn_metadata_agent[143960]:    option forwardfor
Dec  6 02:05:20 np0005548731 ovn_metadata_agent[143960]:    retries                 3
Dec  6 02:05:20 np0005548731 ovn_metadata_agent[143960]:    timeout http-request    30s
Dec  6 02:05:20 np0005548731 ovn_metadata_agent[143960]:    timeout connect         30s
Dec  6 02:05:20 np0005548731 ovn_metadata_agent[143960]:    timeout client          32s
Dec  6 02:05:20 np0005548731 ovn_metadata_agent[143960]:    timeout server          32s
Dec  6 02:05:20 np0005548731 ovn_metadata_agent[143960]:    timeout http-keep-alive 30s
Dec  6 02:05:20 np0005548731 ovn_metadata_agent[143960]: 
Dec  6 02:05:20 np0005548731 ovn_metadata_agent[143960]: 
Dec  6 02:05:20 np0005548731 ovn_metadata_agent[143960]: listen listener
Dec  6 02:05:20 np0005548731 ovn_metadata_agent[143960]:    bind 169.254.169.254:80
Dec  6 02:05:20 np0005548731 ovn_metadata_agent[143960]:    server metadata /var/lib/neutron/metadata_proxy
Dec  6 02:05:20 np0005548731 ovn_metadata_agent[143960]:    http-request add-header X-OVN-Network-ID 9238b9b5-08f5-4634-bd05-370e3192b201
Dec  6 02:05:20 np0005548731 ovn_metadata_agent[143960]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Dec  6 02:05:20 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:05:20.482 143965 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-9238b9b5-08f5-4634-bd05-370e3192b201', 'env', 'PROCESS_TAG=haproxy-9238b9b5-08f5-4634-bd05-370e3192b201', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/9238b9b5-08f5-4634-bd05-370e3192b201.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Dec  6 02:05:20 np0005548731 nova_compute[232433]: 2025-12-06 07:05:20.560 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:05:20 np0005548731 nova_compute[232433]: 2025-12-06 07:05:20.720 232437 INFO nova.compute.manager [None req-9e1e7cc8-64bd-4145-90a5-d4ac65535ddd daab2cfaa69a4e9f819a57290bfd54d9 646c511edd3f4a5c93117e8dcfea183b - - default default] [instance: 1712064c-6ba8-4660-972f-1e827a40781a] Post operation of migration started#033[00m
Dec  6 02:05:20 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e172 e172: 3 total, 3 up, 3 in
Dec  6 02:05:20 np0005548731 podman[248315]: 2025-12-06 07:05:20.886381207 +0000 UTC m=+0.045509101 container create 09d77df2d33c01ac7ff3bd9c760ce79a3845e7ddbc2d647dc58420bfa7ee40f1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9238b9b5-08f5-4634-bd05-370e3192b201, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  6 02:05:20 np0005548731 systemd[1]: Started libpod-conmon-09d77df2d33c01ac7ff3bd9c760ce79a3845e7ddbc2d647dc58420bfa7ee40f1.scope.
Dec  6 02:05:20 np0005548731 podman[248315]: 2025-12-06 07:05:20.862008148 +0000 UTC m=+0.021136062 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec  6 02:05:20 np0005548731 systemd[1]: Started libcrun container.
Dec  6 02:05:20 np0005548731 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/57720d5c7ffb5d45755fb50a9f44802408027939e17d9c3fa96c9357d467507b/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec  6 02:05:20 np0005548731 podman[248315]: 2025-12-06 07:05:20.987443546 +0000 UTC m=+0.146571460 container init 09d77df2d33c01ac7ff3bd9c760ce79a3845e7ddbc2d647dc58420bfa7ee40f1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9238b9b5-08f5-4634-bd05-370e3192b201, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  6 02:05:20 np0005548731 podman[248315]: 2025-12-06 07:05:20.992479217 +0000 UTC m=+0.151607111 container start 09d77df2d33c01ac7ff3bd9c760ce79a3845e7ddbc2d647dc58420bfa7ee40f1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9238b9b5-08f5-4634-bd05-370e3192b201, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2)
Dec  6 02:05:21 np0005548731 neutron-haproxy-ovnmeta-9238b9b5-08f5-4634-bd05-370e3192b201[248331]: [NOTICE]   (248335) : New worker (248337) forked
Dec  6 02:05:21 np0005548731 neutron-haproxy-ovnmeta-9238b9b5-08f5-4634-bd05-370e3192b201[248331]: [NOTICE]   (248335) : Loading success.
Dec  6 02:05:21 np0005548731 nova_compute[232433]: 2025-12-06 07:05:21.282 232437 DEBUG oslo_concurrency.lockutils [None req-9e1e7cc8-64bd-4145-90a5-d4ac65535ddd daab2cfaa69a4e9f819a57290bfd54d9 646c511edd3f4a5c93117e8dcfea183b - - default default] Acquiring lock "refresh_cache-1712064c-6ba8-4660-972f-1e827a40781a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 02:05:21 np0005548731 nova_compute[232433]: 2025-12-06 07:05:21.283 232437 DEBUG oslo_concurrency.lockutils [None req-9e1e7cc8-64bd-4145-90a5-d4ac65535ddd daab2cfaa69a4e9f819a57290bfd54d9 646c511edd3f4a5c93117e8dcfea183b - - default default] Acquired lock "refresh_cache-1712064c-6ba8-4660-972f-1e827a40781a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 02:05:21 np0005548731 nova_compute[232433]: 2025-12-06 07:05:21.283 232437 DEBUG nova.network.neutron [None req-9e1e7cc8-64bd-4145-90a5-d4ac65535ddd daab2cfaa69a4e9f819a57290bfd54d9 646c511edd3f4a5c93117e8dcfea183b - - default default] [instance: 1712064c-6ba8-4660-972f-1e827a40781a] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec  6 02:05:21 np0005548731 nova_compute[232433]: 2025-12-06 07:05:21.348 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:05:21 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:05:21 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:05:21 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:05:21.817 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:05:21 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:05:21 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:05:21 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:05:21.820 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:05:22 np0005548731 nova_compute[232433]: 2025-12-06 07:05:22.613 232437 DEBUG nova.network.neutron [None req-9e1e7cc8-64bd-4145-90a5-d4ac65535ddd daab2cfaa69a4e9f819a57290bfd54d9 646c511edd3f4a5c93117e8dcfea183b - - default default] [instance: 1712064c-6ba8-4660-972f-1e827a40781a] Updating instance_info_cache with network_info: [{"id": "9e200480-deff-4f39-9945-230d157ac906", "address": "fa:16:3e:e9:3c:15", "network": {"id": "9238b9b5-08f5-4634-bd05-370e3192b201", "bridge": "br-int", "label": "tempest-LiveMigrationTest-1420048747-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9e86c61372e24db392d4a12ca71f7e00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9e200480-de", "ovs_interfaceid": "9e200480-deff-4f39-9945-230d157ac906", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 02:05:22 np0005548731 nova_compute[232433]: 2025-12-06 07:05:22.632 232437 DEBUG oslo_concurrency.lockutils [None req-9e1e7cc8-64bd-4145-90a5-d4ac65535ddd daab2cfaa69a4e9f819a57290bfd54d9 646c511edd3f4a5c93117e8dcfea183b - - default default] Releasing lock "refresh_cache-1712064c-6ba8-4660-972f-1e827a40781a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 02:05:22 np0005548731 nova_compute[232433]: 2025-12-06 07:05:22.648 232437 DEBUG oslo_concurrency.lockutils [None req-9e1e7cc8-64bd-4145-90a5-d4ac65535ddd daab2cfaa69a4e9f819a57290bfd54d9 646c511edd3f4a5c93117e8dcfea183b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.allocate_pci_devices_for_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:05:22 np0005548731 nova_compute[232433]: 2025-12-06 07:05:22.648 232437 DEBUG oslo_concurrency.lockutils [None req-9e1e7cc8-64bd-4145-90a5-d4ac65535ddd daab2cfaa69a4e9f819a57290bfd54d9 646c511edd3f4a5c93117e8dcfea183b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.allocate_pci_devices_for_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:05:22 np0005548731 nova_compute[232433]: 2025-12-06 07:05:22.649 232437 DEBUG oslo_concurrency.lockutils [None req-9e1e7cc8-64bd-4145-90a5-d4ac65535ddd daab2cfaa69a4e9f819a57290bfd54d9 646c511edd3f4a5c93117e8dcfea183b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.allocate_pci_devices_for_instance" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:05:22 np0005548731 nova_compute[232433]: 2025-12-06 07:05:22.654 232437 INFO nova.virt.libvirt.driver [None req-9e1e7cc8-64bd-4145-90a5-d4ac65535ddd daab2cfaa69a4e9f819a57290bfd54d9 646c511edd3f4a5c93117e8dcfea183b - - default default] [instance: 1712064c-6ba8-4660-972f-1e827a40781a] Sending announce-self command to QEMU monitor. Attempt 1 of 3#033[00m
Dec  6 02:05:22 np0005548731 virtqemud[232080]: Domain id=16 name='instance-0000001b' uuid=1712064c-6ba8-4660-972f-1e827a40781a is tainted: custom-monitor
Dec  6 02:05:23 np0005548731 nova_compute[232433]: 2025-12-06 07:05:23.662 232437 INFO nova.virt.libvirt.driver [None req-9e1e7cc8-64bd-4145-90a5-d4ac65535ddd daab2cfaa69a4e9f819a57290bfd54d9 646c511edd3f4a5c93117e8dcfea183b - - default default] [instance: 1712064c-6ba8-4660-972f-1e827a40781a] Sending announce-self command to QEMU monitor. Attempt 2 of 3#033[00m
Dec  6 02:05:23 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:05:23 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:05:23 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:05:23.819 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:05:23 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:05:23 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 02:05:23 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:05:23.822 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 02:05:24 np0005548731 ceph-mon[77458]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #52. Immutable memtables: 0.
Dec  6 02:05:24 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:05:24.559266) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec  6 02:05:24 np0005548731 ceph-mon[77458]: rocksdb: [db/flush_job.cc:856] [default] [JOB 29] Flushing memtable with next log file: 52
Dec  6 02:05:24 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765004724559340, "job": 29, "event": "flush_started", "num_memtables": 1, "num_entries": 2534, "num_deletes": 509, "total_data_size": 5180055, "memory_usage": 5244080, "flush_reason": "Manual Compaction"}
Dec  6 02:05:24 np0005548731 ceph-mon[77458]: rocksdb: [db/flush_job.cc:885] [default] [JOB 29] Level-0 flush table #53: started
Dec  6 02:05:24 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765004724576457, "cf_name": "default", "job": 29, "event": "table_file_creation", "file_number": 53, "file_size": 2368532, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 25965, "largest_seqno": 28494, "table_properties": {"data_size": 2360433, "index_size": 4081, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2949, "raw_key_size": 23222, "raw_average_key_size": 19, "raw_value_size": 2340778, "raw_average_value_size": 2014, "num_data_blocks": 180, "num_entries": 1162, "num_filter_entries": 1162, "num_deletions": 509, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765004537, "oldest_key_time": 1765004537, "file_creation_time": 1765004724, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bc01f9a1-fda8-4008-aee1-1fa5ff4371a1", "db_session_id": "CWBL8WMDM4D79BU86E9T", "orig_file_number": 53, "seqno_to_time_mapping": "N/A"}}
Dec  6 02:05:24 np0005548731 ceph-mon[77458]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 29] Flush lasted 17270 microseconds, and 6706 cpu microseconds.
Dec  6 02:05:24 np0005548731 ceph-mon[77458]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  6 02:05:24 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:05:24.576510) [db/flush_job.cc:967] [default] [JOB 29] Level-0 flush table #53: 2368532 bytes OK
Dec  6 02:05:24 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:05:24.576578) [db/memtable_list.cc:519] [default] Level-0 commit table #53 started
Dec  6 02:05:24 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:05:24.578147) [db/memtable_list.cc:722] [default] Level-0 commit table #53: memtable #1 done
Dec  6 02:05:24 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:05:24.578160) EVENT_LOG_v1 {"time_micros": 1765004724578156, "job": 29, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec  6 02:05:24 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:05:24.578181) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec  6 02:05:24 np0005548731 ceph-mon[77458]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 29] Try to delete WAL files size 5167832, prev total WAL file size 5167832, number of live WAL files 2.
Dec  6 02:05:24 np0005548731 ceph-mon[77458]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000049.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  6 02:05:24 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:05:24.579357) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D00353036' seq:72057594037927935, type:22 .. '6C6F676D00373538' seq:0, type:0; will stop at (end)
Dec  6 02:05:24 np0005548731 ceph-mon[77458]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 30] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec  6 02:05:24 np0005548731 ceph-mon[77458]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 29 Base level 0, inputs: [53(2313KB)], [51(10086KB)]
Dec  6 02:05:24 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765004724579415, "job": 30, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [53], "files_L6": [51], "score": -1, "input_data_size": 12697319, "oldest_snapshot_seqno": -1}
Dec  6 02:05:24 np0005548731 ceph-mon[77458]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 30] Generated table #54: 5658 keys, 9729838 bytes, temperature: kUnknown
Dec  6 02:05:24 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765004724655920, "cf_name": "default", "job": 30, "event": "table_file_creation", "file_number": 54, "file_size": 9729838, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9691830, "index_size": 22730, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 14213, "raw_key_size": 144702, "raw_average_key_size": 25, "raw_value_size": 9589699, "raw_average_value_size": 1694, "num_data_blocks": 921, "num_entries": 5658, "num_filter_entries": 5658, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765002564, "oldest_key_time": 0, "file_creation_time": 1765004724, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bc01f9a1-fda8-4008-aee1-1fa5ff4371a1", "db_session_id": "CWBL8WMDM4D79BU86E9T", "orig_file_number": 54, "seqno_to_time_mapping": "N/A"}}
Dec  6 02:05:24 np0005548731 ceph-mon[77458]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  6 02:05:24 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:05:24.656170) [db/compaction/compaction_job.cc:1663] [default] [JOB 30] Compacted 1@0 + 1@6 files to L6 => 9729838 bytes
Dec  6 02:05:24 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:05:24.657658) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 165.8 rd, 127.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.3, 9.9 +0.0 blob) out(9.3 +0.0 blob), read-write-amplify(9.5) write-amplify(4.1) OK, records in: 6629, records dropped: 971 output_compression: NoCompression
Dec  6 02:05:24 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:05:24.657675) EVENT_LOG_v1 {"time_micros": 1765004724657667, "job": 30, "event": "compaction_finished", "compaction_time_micros": 76581, "compaction_time_cpu_micros": 33083, "output_level": 6, "num_output_files": 1, "total_output_size": 9729838, "num_input_records": 6629, "num_output_records": 5658, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec  6 02:05:24 np0005548731 ceph-mon[77458]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000053.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  6 02:05:24 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765004724658100, "job": 30, "event": "table_file_deletion", "file_number": 53}
Dec  6 02:05:24 np0005548731 ceph-mon[77458]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000051.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  6 02:05:24 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765004724659826, "job": 30, "event": "table_file_deletion", "file_number": 51}
Dec  6 02:05:24 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:05:24.579261) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 02:05:24 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:05:24.659860) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 02:05:24 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:05:24.659864) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 02:05:24 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:05:24.659866) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 02:05:24 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:05:24.659867) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 02:05:24 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:05:24.659868) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 02:05:24 np0005548731 nova_compute[232433]: 2025-12-06 07:05:24.668 232437 INFO nova.virt.libvirt.driver [None req-9e1e7cc8-64bd-4145-90a5-d4ac65535ddd daab2cfaa69a4e9f819a57290bfd54d9 646c511edd3f4a5c93117e8dcfea183b - - default default] [instance: 1712064c-6ba8-4660-972f-1e827a40781a] Sending announce-self command to QEMU monitor. Attempt 3 of 3#033[00m
Dec  6 02:05:24 np0005548731 nova_compute[232433]: 2025-12-06 07:05:24.673 232437 DEBUG nova.compute.manager [None req-9e1e7cc8-64bd-4145-90a5-d4ac65535ddd daab2cfaa69a4e9f819a57290bfd54d9 646c511edd3f4a5c93117e8dcfea183b - - default default] [instance: 1712064c-6ba8-4660-972f-1e827a40781a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:05:24 np0005548731 nova_compute[232433]: 2025-12-06 07:05:24.693 232437 DEBUG nova.objects.instance [None req-9e1e7cc8-64bd-4145-90a5-d4ac65535ddd daab2cfaa69a4e9f819a57290bfd54d9 646c511edd3f4a5c93117e8dcfea183b - - default default] [instance: 1712064c-6ba8-4660-972f-1e827a40781a] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032#033[00m
Dec  6 02:05:25 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e172 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:05:25 np0005548731 nova_compute[232433]: 2025-12-06 07:05:25.563 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:05:25 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:05:25 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:05:25 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:05:25.821 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:05:25 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:05:25 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:05:25 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:05:25.825 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:05:26 np0005548731 nova_compute[232433]: 2025-12-06 07:05:26.350 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:05:27 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e173 e173: 3 total, 3 up, 3 in
Dec  6 02:05:27 np0005548731 nova_compute[232433]: 2025-12-06 07:05:27.410 232437 DEBUG oslo_concurrency.lockutils [None req-7c48049c-fae9-4ee2-b467-e01cf52b1444 e5f62143343c4499a86f710385c0c2f8 edcc68bbf9cd4c9189e59964db884a26 - - default default] Acquiring lock "9b0dea9b-128d-43a8-aedd-6a023517b89f" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:05:27 np0005548731 nova_compute[232433]: 2025-12-06 07:05:27.411 232437 DEBUG oslo_concurrency.lockutils [None req-7c48049c-fae9-4ee2-b467-e01cf52b1444 e5f62143343c4499a86f710385c0c2f8 edcc68bbf9cd4c9189e59964db884a26 - - default default] Lock "9b0dea9b-128d-43a8-aedd-6a023517b89f" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:05:27 np0005548731 nova_compute[232433]: 2025-12-06 07:05:27.428 232437 DEBUG nova.objects.instance [None req-7c48049c-fae9-4ee2-b467-e01cf52b1444 e5f62143343c4499a86f710385c0c2f8 edcc68bbf9cd4c9189e59964db884a26 - - default default] Lazy-loading 'flavor' on Instance uuid 9b0dea9b-128d-43a8-aedd-6a023517b89f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:05:27 np0005548731 nova_compute[232433]: 2025-12-06 07:05:27.529 232437 DEBUG oslo_concurrency.lockutils [None req-7c48049c-fae9-4ee2-b467-e01cf52b1444 e5f62143343c4499a86f710385c0c2f8 edcc68bbf9cd4c9189e59964db884a26 - - default default] Lock "9b0dea9b-128d-43a8-aedd-6a023517b89f" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.118s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:05:27 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:05:27 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:05:27 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:05:27.823 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:05:27 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:05:27 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:05:27 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:05:27.828 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:05:27 np0005548731 nova_compute[232433]: 2025-12-06 07:05:27.833 232437 DEBUG oslo_concurrency.lockutils [None req-08327e52-c891-4437-85d8-9a6c91a2a6e5 756e3e1fa7e44042bdf37a6cdd877fac 9e86c61372e24db392d4a12ca71f7e00 - - default default] Acquiring lock "1712064c-6ba8-4660-972f-1e827a40781a" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:05:27 np0005548731 nova_compute[232433]: 2025-12-06 07:05:27.833 232437 DEBUG oslo_concurrency.lockutils [None req-08327e52-c891-4437-85d8-9a6c91a2a6e5 756e3e1fa7e44042bdf37a6cdd877fac 9e86c61372e24db392d4a12ca71f7e00 - - default default] Lock "1712064c-6ba8-4660-972f-1e827a40781a" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:05:27 np0005548731 nova_compute[232433]: 2025-12-06 07:05:27.834 232437 DEBUG oslo_concurrency.lockutils [None req-08327e52-c891-4437-85d8-9a6c91a2a6e5 756e3e1fa7e44042bdf37a6cdd877fac 9e86c61372e24db392d4a12ca71f7e00 - - default default] Acquiring lock "1712064c-6ba8-4660-972f-1e827a40781a-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:05:27 np0005548731 nova_compute[232433]: 2025-12-06 07:05:27.834 232437 DEBUG oslo_concurrency.lockutils [None req-08327e52-c891-4437-85d8-9a6c91a2a6e5 756e3e1fa7e44042bdf37a6cdd877fac 9e86c61372e24db392d4a12ca71f7e00 - - default default] Lock "1712064c-6ba8-4660-972f-1e827a40781a-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:05:27 np0005548731 nova_compute[232433]: 2025-12-06 07:05:27.834 232437 DEBUG oslo_concurrency.lockutils [None req-08327e52-c891-4437-85d8-9a6c91a2a6e5 756e3e1fa7e44042bdf37a6cdd877fac 9e86c61372e24db392d4a12ca71f7e00 - - default default] Lock "1712064c-6ba8-4660-972f-1e827a40781a-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:05:27 np0005548731 nova_compute[232433]: 2025-12-06 07:05:27.836 232437 INFO nova.compute.manager [None req-08327e52-c891-4437-85d8-9a6c91a2a6e5 756e3e1fa7e44042bdf37a6cdd877fac 9e86c61372e24db392d4a12ca71f7e00 - - default default] [instance: 1712064c-6ba8-4660-972f-1e827a40781a] Terminating instance#033[00m
Dec  6 02:05:27 np0005548731 nova_compute[232433]: 2025-12-06 07:05:27.837 232437 DEBUG nova.compute.manager [None req-08327e52-c891-4437-85d8-9a6c91a2a6e5 756e3e1fa7e44042bdf37a6cdd877fac 9e86c61372e24db392d4a12ca71f7e00 - - default default] [instance: 1712064c-6ba8-4660-972f-1e827a40781a] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec  6 02:05:27 np0005548731 kernel: tap9e200480-de (unregistering): left promiscuous mode
Dec  6 02:05:27 np0005548731 NetworkManager[49182]: <info>  [1765004727.8907] device (tap9e200480-de): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec  6 02:05:27 np0005548731 ovn_controller[133927]: 2025-12-06T07:05:27Z|00113|binding|INFO|Releasing lport 9e200480-deff-4f39-9945-230d157ac906 from this chassis (sb_readonly=0)
Dec  6 02:05:27 np0005548731 ovn_controller[133927]: 2025-12-06T07:05:27Z|00114|binding|INFO|Setting lport 9e200480-deff-4f39-9945-230d157ac906 down in Southbound
Dec  6 02:05:27 np0005548731 ovn_controller[133927]: 2025-12-06T07:05:27Z|00115|binding|INFO|Removing iface tap9e200480-de ovn-installed in OVS
Dec  6 02:05:27 np0005548731 nova_compute[232433]: 2025-12-06 07:05:27.902 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:05:27 np0005548731 nova_compute[232433]: 2025-12-06 07:05:27.905 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:05:27 np0005548731 nova_compute[232433]: 2025-12-06 07:05:27.906 232437 DEBUG oslo_concurrency.lockutils [None req-7c48049c-fae9-4ee2-b467-e01cf52b1444 e5f62143343c4499a86f710385c0c2f8 edcc68bbf9cd4c9189e59964db884a26 - - default default] Acquiring lock "9b0dea9b-128d-43a8-aedd-6a023517b89f" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:05:27 np0005548731 nova_compute[232433]: 2025-12-06 07:05:27.906 232437 DEBUG oslo_concurrency.lockutils [None req-7c48049c-fae9-4ee2-b467-e01cf52b1444 e5f62143343c4499a86f710385c0c2f8 edcc68bbf9cd4c9189e59964db884a26 - - default default] Lock "9b0dea9b-128d-43a8-aedd-6a023517b89f" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:05:27 np0005548731 nova_compute[232433]: 2025-12-06 07:05:27.906 232437 INFO nova.compute.manager [None req-7c48049c-fae9-4ee2-b467-e01cf52b1444 e5f62143343c4499a86f710385c0c2f8 edcc68bbf9cd4c9189e59964db884a26 - - default default] [instance: 9b0dea9b-128d-43a8-aedd-6a023517b89f] Attaching volume 72f42200-60a4-4bc0-ac08-8ab63713f05e to /dev/sdc#033[00m
Dec  6 02:05:27 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:05:27.909 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e9:3c:15 10.100.0.13'], port_security=['fa:16:3e:e9:3c:15 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '1712064c-6ba8-4660-972f-1e827a40781a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9238b9b5-08f5-4634-bd05-370e3192b201', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9e86c61372e24db392d4a12ca71f7e00', 'neutron:revision_number': '23', 'neutron:security_group_ids': '36a83e30-1797-4590-94d1-4f6fcbdcefb2', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=db7e9816-53a6-4d9a-be4a-e5a8dcf8a64b, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], logical_port=9e200480-deff-4f39-9945-230d157ac906) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 02:05:27 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:05:27.911 143965 INFO neutron.agent.ovn.metadata.agent [-] Port 9e200480-deff-4f39-9945-230d157ac906 in datapath 9238b9b5-08f5-4634-bd05-370e3192b201 unbound from our chassis#033[00m
Dec  6 02:05:27 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:05:27.913 143965 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 9238b9b5-08f5-4634-bd05-370e3192b201, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec  6 02:05:27 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:05:27.917 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[c2ed0adf-aedf-47e7-b656-1f4d535a5732]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:05:27 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:05:27.918 143965 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-9238b9b5-08f5-4634-bd05-370e3192b201 namespace which is not needed anymore#033[00m
Dec  6 02:05:27 np0005548731 nova_compute[232433]: 2025-12-06 07:05:27.924 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:05:27 np0005548731 systemd[1]: machine-qemu\x2d16\x2dinstance\x2d0000001b.scope: Deactivated successfully.
Dec  6 02:05:27 np0005548731 systemd[1]: machine-qemu\x2d16\x2dinstance\x2d0000001b.scope: Consumed 1.603s CPU time.
Dec  6 02:05:27 np0005548731 systemd-machined[195355]: Machine qemu-16-instance-0000001b terminated.
Dec  6 02:05:28 np0005548731 neutron-haproxy-ovnmeta-9238b9b5-08f5-4634-bd05-370e3192b201[248331]: [NOTICE]   (248335) : haproxy version is 2.8.14-c23fe91
Dec  6 02:05:28 np0005548731 neutron-haproxy-ovnmeta-9238b9b5-08f5-4634-bd05-370e3192b201[248331]: [NOTICE]   (248335) : path to executable is /usr/sbin/haproxy
Dec  6 02:05:28 np0005548731 neutron-haproxy-ovnmeta-9238b9b5-08f5-4634-bd05-370e3192b201[248331]: [WARNING]  (248335) : Exiting Master process...
Dec  6 02:05:28 np0005548731 neutron-haproxy-ovnmeta-9238b9b5-08f5-4634-bd05-370e3192b201[248331]: [ALERT]    (248335) : Current worker (248337) exited with code 143 (Terminated)
Dec  6 02:05:28 np0005548731 neutron-haproxy-ovnmeta-9238b9b5-08f5-4634-bd05-370e3192b201[248331]: [WARNING]  (248335) : All workers exited. Exiting... (0)
Dec  6 02:05:28 np0005548731 systemd[1]: libpod-09d77df2d33c01ac7ff3bd9c760ce79a3845e7ddbc2d647dc58420bfa7ee40f1.scope: Deactivated successfully.
Dec  6 02:05:28 np0005548731 podman[248373]: 2025-12-06 07:05:28.064215232 +0000 UTC m=+0.048690997 container died 09d77df2d33c01ac7ff3bd9c760ce79a3845e7ddbc2d647dc58420bfa7ee40f1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9238b9b5-08f5-4634-bd05-370e3192b201, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2)
Dec  6 02:05:28 np0005548731 nova_compute[232433]: 2025-12-06 07:05:28.085 232437 INFO nova.virt.libvirt.driver [-] [instance: 1712064c-6ba8-4660-972f-1e827a40781a] Instance destroyed successfully.#033[00m
Dec  6 02:05:28 np0005548731 nova_compute[232433]: 2025-12-06 07:05:28.087 232437 DEBUG nova.objects.instance [None req-08327e52-c891-4437-85d8-9a6c91a2a6e5 756e3e1fa7e44042bdf37a6cdd877fac 9e86c61372e24db392d4a12ca71f7e00 - - default default] Lazy-loading 'resources' on Instance uuid 1712064c-6ba8-4660-972f-1e827a40781a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:05:28 np0005548731 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-09d77df2d33c01ac7ff3bd9c760ce79a3845e7ddbc2d647dc58420bfa7ee40f1-userdata-shm.mount: Deactivated successfully.
Dec  6 02:05:28 np0005548731 systemd[1]: var-lib-containers-storage-overlay-57720d5c7ffb5d45755fb50a9f44802408027939e17d9c3fa96c9357d467507b-merged.mount: Deactivated successfully.
Dec  6 02:05:28 np0005548731 podman[248373]: 2025-12-06 07:05:28.105783296 +0000 UTC m=+0.090259061 container cleanup 09d77df2d33c01ac7ff3bd9c760ce79a3845e7ddbc2d647dc58420bfa7ee40f1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9238b9b5-08f5-4634-bd05-370e3192b201, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec  6 02:05:28 np0005548731 nova_compute[232433]: 2025-12-06 07:05:28.110 232437 DEBUG nova.virt.libvirt.vif [None req-08327e52-c891-4437-85d8-9a6c91a2a6e5 756e3e1fa7e44042bdf37a6cdd877fac 9e86c61372e24db392d4a12ca71f7e00 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-12-06T07:04:37Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-LiveMigrationTest-server-1567617044',display_name='tempest-LiveMigrationTest-server-1567617044',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-livemigrationtest-server-1567617044',id=27,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-06T07:04:50Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='9e86c61372e24db392d4a12ca71f7e00',ramdisk_id='',reservation_id='r-i5n1flwt',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='2',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-LiveMigrationTest-854827502',o
wner_user_name='tempest-LiveMigrationTest-854827502-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T07:05:24Z,user_data=None,user_id='756e3e1fa7e44042bdf37a6cdd877fac',uuid=1712064c-6ba8-4660-972f-1e827a40781a,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "9e200480-deff-4f39-9945-230d157ac906", "address": "fa:16:3e:e9:3c:15", "network": {"id": "9238b9b5-08f5-4634-bd05-370e3192b201", "bridge": "br-int", "label": "tempest-LiveMigrationTest-1420048747-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9e86c61372e24db392d4a12ca71f7e00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9e200480-de", "ovs_interfaceid": "9e200480-deff-4f39-9945-230d157ac906", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec  6 02:05:28 np0005548731 nova_compute[232433]: 2025-12-06 07:05:28.111 232437 DEBUG nova.network.os_vif_util [None req-08327e52-c891-4437-85d8-9a6c91a2a6e5 756e3e1fa7e44042bdf37a6cdd877fac 9e86c61372e24db392d4a12ca71f7e00 - - default default] Converting VIF {"id": "9e200480-deff-4f39-9945-230d157ac906", "address": "fa:16:3e:e9:3c:15", "network": {"id": "9238b9b5-08f5-4634-bd05-370e3192b201", "bridge": "br-int", "label": "tempest-LiveMigrationTest-1420048747-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9e86c61372e24db392d4a12ca71f7e00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9e200480-de", "ovs_interfaceid": "9e200480-deff-4f39-9945-230d157ac906", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 02:05:28 np0005548731 nova_compute[232433]: 2025-12-06 07:05:28.111 232437 DEBUG nova.network.os_vif_util [None req-08327e52-c891-4437-85d8-9a6c91a2a6e5 756e3e1fa7e44042bdf37a6cdd877fac 9e86c61372e24db392d4a12ca71f7e00 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:e9:3c:15,bridge_name='br-int',has_traffic_filtering=True,id=9e200480-deff-4f39-9945-230d157ac906,network=Network(9238b9b5-08f5-4634-bd05-370e3192b201),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9e200480-de') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 02:05:28 np0005548731 nova_compute[232433]: 2025-12-06 07:05:28.111 232437 DEBUG os_vif [None req-08327e52-c891-4437-85d8-9a6c91a2a6e5 756e3e1fa7e44042bdf37a6cdd877fac 9e86c61372e24db392d4a12ca71f7e00 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:e9:3c:15,bridge_name='br-int',has_traffic_filtering=True,id=9e200480-deff-4f39-9945-230d157ac906,network=Network(9238b9b5-08f5-4634-bd05-370e3192b201),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9e200480-de') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec  6 02:05:28 np0005548731 nova_compute[232433]: 2025-12-06 07:05:28.113 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:05:28 np0005548731 nova_compute[232433]: 2025-12-06 07:05:28.114 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9e200480-de, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:05:28 np0005548731 nova_compute[232433]: 2025-12-06 07:05:28.117 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:05:28 np0005548731 nova_compute[232433]: 2025-12-06 07:05:28.118 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  6 02:05:28 np0005548731 nova_compute[232433]: 2025-12-06 07:05:28.118 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:05:28 np0005548731 systemd[1]: libpod-conmon-09d77df2d33c01ac7ff3bd9c760ce79a3845e7ddbc2d647dc58420bfa7ee40f1.scope: Deactivated successfully.
Dec  6 02:05:28 np0005548731 nova_compute[232433]: 2025-12-06 07:05:28.121 232437 INFO os_vif [None req-08327e52-c891-4437-85d8-9a6c91a2a6e5 756e3e1fa7e44042bdf37a6cdd877fac 9e86c61372e24db392d4a12ca71f7e00 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:e9:3c:15,bridge_name='br-int',has_traffic_filtering=True,id=9e200480-deff-4f39-9945-230d157ac906,network=Network(9238b9b5-08f5-4634-bd05-370e3192b201),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9e200480-de')#033[00m
Dec  6 02:05:28 np0005548731 podman[248410]: 2025-12-06 07:05:28.170871017 +0000 UTC m=+0.041437911 container remove 09d77df2d33c01ac7ff3bd9c760ce79a3845e7ddbc2d647dc58420bfa7ee40f1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9238b9b5-08f5-4634-bd05-370e3192b201, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Dec  6 02:05:28 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:05:28.176 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[7340f1a7-5995-42ca-860b-c4046aeb9be5]: (4, ('Sat Dec  6 07:05:27 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-9238b9b5-08f5-4634-bd05-370e3192b201 (09d77df2d33c01ac7ff3bd9c760ce79a3845e7ddbc2d647dc58420bfa7ee40f1)\n09d77df2d33c01ac7ff3bd9c760ce79a3845e7ddbc2d647dc58420bfa7ee40f1\nSat Dec  6 07:05:28 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-9238b9b5-08f5-4634-bd05-370e3192b201 (09d77df2d33c01ac7ff3bd9c760ce79a3845e7ddbc2d647dc58420bfa7ee40f1)\n09d77df2d33c01ac7ff3bd9c760ce79a3845e7ddbc2d647dc58420bfa7ee40f1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:05:28 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:05:28.180 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[193c40c0-0584-4a9f-995c-4591321bfd09]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:05:28 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:05:28.182 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9238b9b5-00, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:05:28 np0005548731 nova_compute[232433]: 2025-12-06 07:05:28.184 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:05:28 np0005548731 kernel: tap9238b9b5-00: left promiscuous mode
Dec  6 02:05:28 np0005548731 nova_compute[232433]: 2025-12-06 07:05:28.196 232437 DEBUG nova.compute.manager [req-a18f4303-a677-405a-abfa-496a7c283036 req-2cb012e8-84d8-4eac-89e6-8e20f6d485ed 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 1712064c-6ba8-4660-972f-1e827a40781a] Received event network-vif-unplugged-9e200480-deff-4f39-9945-230d157ac906 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:05:28 np0005548731 nova_compute[232433]: 2025-12-06 07:05:28.197 232437 DEBUG oslo_concurrency.lockutils [req-a18f4303-a677-405a-abfa-496a7c283036 req-2cb012e8-84d8-4eac-89e6-8e20f6d485ed 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "1712064c-6ba8-4660-972f-1e827a40781a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:05:28 np0005548731 nova_compute[232433]: 2025-12-06 07:05:28.197 232437 DEBUG oslo_concurrency.lockutils [req-a18f4303-a677-405a-abfa-496a7c283036 req-2cb012e8-84d8-4eac-89e6-8e20f6d485ed 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "1712064c-6ba8-4660-972f-1e827a40781a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:05:28 np0005548731 nova_compute[232433]: 2025-12-06 07:05:28.197 232437 DEBUG oslo_concurrency.lockutils [req-a18f4303-a677-405a-abfa-496a7c283036 req-2cb012e8-84d8-4eac-89e6-8e20f6d485ed 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "1712064c-6ba8-4660-972f-1e827a40781a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:05:28 np0005548731 nova_compute[232433]: 2025-12-06 07:05:28.197 232437 DEBUG nova.compute.manager [req-a18f4303-a677-405a-abfa-496a7c283036 req-2cb012e8-84d8-4eac-89e6-8e20f6d485ed 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 1712064c-6ba8-4660-972f-1e827a40781a] No waiting events found dispatching network-vif-unplugged-9e200480-deff-4f39-9945-230d157ac906 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 02:05:28 np0005548731 nova_compute[232433]: 2025-12-06 07:05:28.197 232437 DEBUG nova.compute.manager [req-a18f4303-a677-405a-abfa-496a7c283036 req-2cb012e8-84d8-4eac-89e6-8e20f6d485ed 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 1712064c-6ba8-4660-972f-1e827a40781a] Received event network-vif-unplugged-9e200480-deff-4f39-9945-230d157ac906 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Dec  6 02:05:28 np0005548731 nova_compute[232433]: 2025-12-06 07:05:28.199 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:05:28 np0005548731 nova_compute[232433]: 2025-12-06 07:05:28.200 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:05:28 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:05:28.202 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[ae11a518-1460-439a-b608-d62193fa97d0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:05:28 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:05:28.223 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[88c4e9a1-bafb-43e3-a01f-0b242570a25b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:05:28 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:05:28.225 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[0401b344-178c-43aa-9c8b-b728c846c26f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:05:28 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:05:28.244 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[4ccfc95d-c4f4-4801-bfcb-2f83d1dc1515]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 498219, 'reachable_time': 38386, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 248445, 'error': None, 'target': 'ovnmeta-9238b9b5-08f5-4634-bd05-370e3192b201', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:05:28 np0005548731 nova_compute[232433]: 2025-12-06 07:05:28.246 232437 DEBUG os_brick.utils [None req-7c48049c-fae9-4ee2-b467-e01cf52b1444 e5f62143343c4499a86f710385c0c2f8 edcc68bbf9cd4c9189e59964db884a26 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.102', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-2.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Dec  6 02:05:28 np0005548731 nova_compute[232433]: 2025-12-06 07:05:28.249 237736 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:05:28 np0005548731 systemd[1]: run-netns-ovnmeta\x2d9238b9b5\x2d08f5\x2d4634\x2dbd05\x2d370e3192b201.mount: Deactivated successfully.
Dec  6 02:05:28 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:05:28.250 144079 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-9238b9b5-08f5-4634-bd05-370e3192b201 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Dec  6 02:05:28 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:05:28.250 144079 DEBUG oslo.privsep.daemon [-] privsep: reply[6145a960-c4f3-493a-b21e-112ebd07f682]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:05:28 np0005548731 nova_compute[232433]: 2025-12-06 07:05:28.264 237736 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.014s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:05:28 np0005548731 nova_compute[232433]: 2025-12-06 07:05:28.264 237736 DEBUG oslo.privsep.daemon [-] privsep: reply[d3ae8b2e-fc82-46b4-9ccb-92f368490fc9]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:05:28 np0005548731 nova_compute[232433]: 2025-12-06 07:05:28.265 237736 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:05:28 np0005548731 nova_compute[232433]: 2025-12-06 07:05:28.275 237736 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:05:28 np0005548731 nova_compute[232433]: 2025-12-06 07:05:28.275 237736 DEBUG oslo.privsep.daemon [-] privsep: reply[1c3857c9-eff8-4448-a2fc-9b0571e1188d]: (4, ('InitiatorName=iqn.1994-05.com.redhat:63778d5959f0', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:05:28 np0005548731 nova_compute[232433]: 2025-12-06 07:05:28.276 237736 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:05:28 np0005548731 nova_compute[232433]: 2025-12-06 07:05:28.289 237736 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.012s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:05:28 np0005548731 nova_compute[232433]: 2025-12-06 07:05:28.289 237736 DEBUG oslo.privsep.daemon [-] privsep: reply[32cc3b08-3e2c-4d3c-9360-edb4d2978a4c]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:05:28 np0005548731 nova_compute[232433]: 2025-12-06 07:05:28.290 237736 DEBUG oslo.privsep.daemon [-] privsep: reply[cb3547bc-3c33-42c0-8542-75dde1247cf4]: (4, 'ec2492e2-e0d1-43d3-b74f-744a8272a0e6') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:05:28 np0005548731 nova_compute[232433]: 2025-12-06 07:05:28.291 232437 DEBUG oslo_concurrency.processutils [None req-7c48049c-fae9-4ee2-b467-e01cf52b1444 e5f62143343c4499a86f710385c0c2f8 edcc68bbf9cd4c9189e59964db884a26 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:05:28 np0005548731 nova_compute[232433]: 2025-12-06 07:05:28.315 232437 DEBUG oslo_concurrency.processutils [None req-7c48049c-fae9-4ee2-b467-e01cf52b1444 e5f62143343c4499a86f710385c0c2f8 edcc68bbf9cd4c9189e59964db884a26 - - default default] CMD "nvme version" returned: 0 in 0.024s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:05:28 np0005548731 nova_compute[232433]: 2025-12-06 07:05:28.317 232437 DEBUG os_brick.initiator.connectors.lightos [None req-7c48049c-fae9-4ee2-b467-e01cf52b1444 e5f62143343c4499a86f710385c0c2f8 edcc68bbf9cd4c9189e59964db884a26 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Dec  6 02:05:28 np0005548731 nova_compute[232433]: 2025-12-06 07:05:28.318 232437 DEBUG os_brick.initiator.connectors.lightos [None req-7c48049c-fae9-4ee2-b467-e01cf52b1444 e5f62143343c4499a86f710385c0c2f8 edcc68bbf9cd4c9189e59964db884a26 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Dec  6 02:05:28 np0005548731 nova_compute[232433]: 2025-12-06 07:05:28.318 232437 DEBUG os_brick.initiator.connectors.lightos [None req-7c48049c-fae9-4ee2-b467-e01cf52b1444 e5f62143343c4499a86f710385c0c2f8 edcc68bbf9cd4c9189e59964db884a26 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:bf3e0a14-a5f8-4123-aa26-e7cad37b879a dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Dec  6 02:05:28 np0005548731 nova_compute[232433]: 2025-12-06 07:05:28.318 232437 DEBUG os_brick.utils [None req-7c48049c-fae9-4ee2-b467-e01cf52b1444 e5f62143343c4499a86f710385c0c2f8 edcc68bbf9cd4c9189e59964db884a26 - - default default] <== get_connector_properties: return (70ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.102', 'host': 'compute-2.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:63778d5959f0', 'do_local_attach': False, 'nvme_hostid': 'bf3e0a14-a5f8-4123-aa26-e7cad37b879a', 'system uuid': 'ec2492e2-e0d1-43d3-b74f-744a8272a0e6', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:bf3e0a14-a5f8-4123-aa26-e7cad37b879a', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Dec  6 02:05:28 np0005548731 nova_compute[232433]: 2025-12-06 07:05:28.318 232437 DEBUG nova.virt.block_device [None req-7c48049c-fae9-4ee2-b467-e01cf52b1444 e5f62143343c4499a86f710385c0c2f8 edcc68bbf9cd4c9189e59964db884a26 - - default default] [instance: 9b0dea9b-128d-43a8-aedd-6a023517b89f] Updating existing volume attachment record: fc6b029d-c725-493f-bfa2-32d3f1426e64 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Dec  6 02:05:28 np0005548731 nova_compute[232433]: 2025-12-06 07:05:28.453 232437 INFO nova.virt.libvirt.driver [None req-08327e52-c891-4437-85d8-9a6c91a2a6e5 756e3e1fa7e44042bdf37a6cdd877fac 9e86c61372e24db392d4a12ca71f7e00 - - default default] [instance: 1712064c-6ba8-4660-972f-1e827a40781a] Deleting instance files /var/lib/nova/instances/1712064c-6ba8-4660-972f-1e827a40781a_del#033[00m
Dec  6 02:05:28 np0005548731 nova_compute[232433]: 2025-12-06 07:05:28.456 232437 INFO nova.virt.libvirt.driver [None req-08327e52-c891-4437-85d8-9a6c91a2a6e5 756e3e1fa7e44042bdf37a6cdd877fac 9e86c61372e24db392d4a12ca71f7e00 - - default default] [instance: 1712064c-6ba8-4660-972f-1e827a40781a] Deletion of /var/lib/nova/instances/1712064c-6ba8-4660-972f-1e827a40781a_del complete#033[00m
Dec  6 02:05:28 np0005548731 nova_compute[232433]: 2025-12-06 07:05:28.515 232437 INFO nova.compute.manager [None req-08327e52-c891-4437-85d8-9a6c91a2a6e5 756e3e1fa7e44042bdf37a6cdd877fac 9e86c61372e24db392d4a12ca71f7e00 - - default default] [instance: 1712064c-6ba8-4660-972f-1e827a40781a] Took 0.68 seconds to destroy the instance on the hypervisor.#033[00m
Dec  6 02:05:28 np0005548731 nova_compute[232433]: 2025-12-06 07:05:28.516 232437 DEBUG oslo.service.loopingcall [None req-08327e52-c891-4437-85d8-9a6c91a2a6e5 756e3e1fa7e44042bdf37a6cdd877fac 9e86c61372e24db392d4a12ca71f7e00 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Dec  6 02:05:28 np0005548731 nova_compute[232433]: 2025-12-06 07:05:28.516 232437 DEBUG nova.compute.manager [-] [instance: 1712064c-6ba8-4660-972f-1e827a40781a] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Dec  6 02:05:28 np0005548731 nova_compute[232433]: 2025-12-06 07:05:28.516 232437 DEBUG nova.network.neutron [-] [instance: 1712064c-6ba8-4660-972f-1e827a40781a] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Dec  6 02:05:29 np0005548731 nova_compute[232433]: 2025-12-06 07:05:29.262 232437 DEBUG nova.objects.instance [None req-7c48049c-fae9-4ee2-b467-e01cf52b1444 e5f62143343c4499a86f710385c0c2f8 edcc68bbf9cd4c9189e59964db884a26 - - default default] Lazy-loading 'flavor' on Instance uuid 9b0dea9b-128d-43a8-aedd-6a023517b89f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:05:29 np0005548731 nova_compute[232433]: 2025-12-06 07:05:29.318 232437 DEBUG nova.virt.libvirt.guest [None req-7c48049c-fae9-4ee2-b467-e01cf52b1444 e5f62143343c4499a86f710385c0c2f8 edcc68bbf9cd4c9189e59964db884a26 - - default default] attach device xml: <disk type="network" device="disk">
Dec  6 02:05:29 np0005548731 nova_compute[232433]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Dec  6 02:05:29 np0005548731 nova_compute[232433]:  <source protocol="rbd" name="volumes/volume-72f42200-60a4-4bc0-ac08-8ab63713f05e">
Dec  6 02:05:29 np0005548731 nova_compute[232433]:    <host name="192.168.122.100" port="6789"/>
Dec  6 02:05:29 np0005548731 nova_compute[232433]:    <host name="192.168.122.102" port="6789"/>
Dec  6 02:05:29 np0005548731 nova_compute[232433]:    <host name="192.168.122.101" port="6789"/>
Dec  6 02:05:29 np0005548731 nova_compute[232433]:  </source>
Dec  6 02:05:29 np0005548731 nova_compute[232433]:  <auth username="openstack">
Dec  6 02:05:29 np0005548731 nova_compute[232433]:    <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec  6 02:05:29 np0005548731 nova_compute[232433]:  </auth>
Dec  6 02:05:29 np0005548731 nova_compute[232433]:  <target dev="sdc" bus="scsi"/>
Dec  6 02:05:29 np0005548731 nova_compute[232433]:  <serial>72f42200-60a4-4bc0-ac08-8ab63713f05e</serial>
Dec  6 02:05:29 np0005548731 nova_compute[232433]:  <address type="drive" controller="0" unit="2"/>
Dec  6 02:05:29 np0005548731 nova_compute[232433]: </disk>
Dec  6 02:05:29 np0005548731 nova_compute[232433]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339#033[00m
Dec  6 02:05:29 np0005548731 nova_compute[232433]: 2025-12-06 07:05:29.319 232437 DEBUG nova.network.neutron [-] [instance: 1712064c-6ba8-4660-972f-1e827a40781a] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 02:05:29 np0005548731 nova_compute[232433]: 2025-12-06 07:05:29.431 232437 INFO nova.compute.manager [-] [instance: 1712064c-6ba8-4660-972f-1e827a40781a] Took 0.92 seconds to deallocate network for instance.#033[00m
Dec  6 02:05:29 np0005548731 nova_compute[232433]: 2025-12-06 07:05:29.447 232437 DEBUG nova.virt.libvirt.driver [None req-7c48049c-fae9-4ee2-b467-e01cf52b1444 e5f62143343c4499a86f710385c0c2f8 edcc68bbf9cd4c9189e59964db884a26 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  6 02:05:29 np0005548731 nova_compute[232433]: 2025-12-06 07:05:29.447 232437 DEBUG nova.virt.libvirt.driver [None req-7c48049c-fae9-4ee2-b467-e01cf52b1444 e5f62143343c4499a86f710385c0c2f8 edcc68bbf9cd4c9189e59964db884a26 - - default default] No BDM found with device name sdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  6 02:05:29 np0005548731 nova_compute[232433]: 2025-12-06 07:05:29.448 232437 DEBUG nova.virt.libvirt.driver [None req-7c48049c-fae9-4ee2-b467-e01cf52b1444 e5f62143343c4499a86f710385c0c2f8 edcc68bbf9cd4c9189e59964db884a26 - - default default] No BDM found with device name sdc, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  6 02:05:29 np0005548731 nova_compute[232433]: 2025-12-06 07:05:29.448 232437 DEBUG nova.virt.libvirt.driver [None req-7c48049c-fae9-4ee2-b467-e01cf52b1444 e5f62143343c4499a86f710385c0c2f8 edcc68bbf9cd4c9189e59964db884a26 - - default default] No VIF found with MAC fa:16:3e:36:88:b5, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec  6 02:05:29 np0005548731 nova_compute[232433]: 2025-12-06 07:05:29.773 232437 INFO nova.compute.manager [None req-08327e52-c891-4437-85d8-9a6c91a2a6e5 756e3e1fa7e44042bdf37a6cdd877fac 9e86c61372e24db392d4a12ca71f7e00 - - default default] [instance: 1712064c-6ba8-4660-972f-1e827a40781a] Took 0.34 seconds to detach 1 volumes for instance.#033[00m
Dec  6 02:05:29 np0005548731 nova_compute[232433]: 2025-12-06 07:05:29.776 232437 DEBUG nova.compute.manager [None req-08327e52-c891-4437-85d8-9a6c91a2a6e5 756e3e1fa7e44042bdf37a6cdd877fac 9e86c61372e24db392d4a12ca71f7e00 - - default default] [instance: 1712064c-6ba8-4660-972f-1e827a40781a] Deleting volume: 2b74c89f-1018-4ad4-8af8-84109979b9c7 _cleanup_volumes /usr/lib/python3.9/site-packages/nova/compute/manager.py:3217#033[00m
Dec  6 02:05:29 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:05:29 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:05:29 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:05:29.826 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:05:29 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:05:29 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:05:29 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:05:29.831 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:05:29 np0005548731 nova_compute[232433]: 2025-12-06 07:05:29.956 232437 DEBUG oslo_concurrency.lockutils [None req-7c48049c-fae9-4ee2-b467-e01cf52b1444 e5f62143343c4499a86f710385c0c2f8 edcc68bbf9cd4c9189e59964db884a26 - - default default] Lock "9b0dea9b-128d-43a8-aedd-6a023517b89f" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 2.050s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:05:30 np0005548731 nova_compute[232433]: 2025-12-06 07:05:30.100 232437 DEBUG oslo_concurrency.lockutils [None req-08327e52-c891-4437-85d8-9a6c91a2a6e5 756e3e1fa7e44042bdf37a6cdd877fac 9e86c61372e24db392d4a12ca71f7e00 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:05:30 np0005548731 nova_compute[232433]: 2025-12-06 07:05:30.100 232437 DEBUG oslo_concurrency.lockutils [None req-08327e52-c891-4437-85d8-9a6c91a2a6e5 756e3e1fa7e44042bdf37a6cdd877fac 9e86c61372e24db392d4a12ca71f7e00 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:05:30 np0005548731 nova_compute[232433]: 2025-12-06 07:05:30.105 232437 DEBUG oslo_concurrency.lockutils [None req-08327e52-c891-4437-85d8-9a6c91a2a6e5 756e3e1fa7e44042bdf37a6cdd877fac 9e86c61372e24db392d4a12ca71f7e00 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.005s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:05:30 np0005548731 nova_compute[232433]: 2025-12-06 07:05:30.136 232437 INFO nova.scheduler.client.report [None req-08327e52-c891-4437-85d8-9a6c91a2a6e5 756e3e1fa7e44042bdf37a6cdd877fac 9e86c61372e24db392d4a12ca71f7e00 - - default default] Deleted allocations for instance 1712064c-6ba8-4660-972f-1e827a40781a#033[00m
Dec  6 02:05:30 np0005548731 nova_compute[232433]: 2025-12-06 07:05:30.219 232437 DEBUG oslo_concurrency.lockutils [None req-08327e52-c891-4437-85d8-9a6c91a2a6e5 756e3e1fa7e44042bdf37a6cdd877fac 9e86c61372e24db392d4a12ca71f7e00 - - default default] Lock "1712064c-6ba8-4660-972f-1e827a40781a" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.386s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:05:30 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:05:30 np0005548731 nova_compute[232433]: 2025-12-06 07:05:30.391 232437 DEBUG nova.compute.manager [req-1647c08e-a54c-48cd-b1a2-21525008b250 req-324bf346-4b93-4288-9214-1b805f378d4b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 1712064c-6ba8-4660-972f-1e827a40781a] Received event network-vif-plugged-9e200480-deff-4f39-9945-230d157ac906 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:05:30 np0005548731 nova_compute[232433]: 2025-12-06 07:05:30.392 232437 DEBUG oslo_concurrency.lockutils [req-1647c08e-a54c-48cd-b1a2-21525008b250 req-324bf346-4b93-4288-9214-1b805f378d4b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "1712064c-6ba8-4660-972f-1e827a40781a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:05:30 np0005548731 nova_compute[232433]: 2025-12-06 07:05:30.392 232437 DEBUG oslo_concurrency.lockutils [req-1647c08e-a54c-48cd-b1a2-21525008b250 req-324bf346-4b93-4288-9214-1b805f378d4b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "1712064c-6ba8-4660-972f-1e827a40781a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:05:30 np0005548731 nova_compute[232433]: 2025-12-06 07:05:30.392 232437 DEBUG oslo_concurrency.lockutils [req-1647c08e-a54c-48cd-b1a2-21525008b250 req-324bf346-4b93-4288-9214-1b805f378d4b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "1712064c-6ba8-4660-972f-1e827a40781a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:05:30 np0005548731 nova_compute[232433]: 2025-12-06 07:05:30.392 232437 DEBUG nova.compute.manager [req-1647c08e-a54c-48cd-b1a2-21525008b250 req-324bf346-4b93-4288-9214-1b805f378d4b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 1712064c-6ba8-4660-972f-1e827a40781a] No waiting events found dispatching network-vif-plugged-9e200480-deff-4f39-9945-230d157ac906 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 02:05:30 np0005548731 nova_compute[232433]: 2025-12-06 07:05:30.393 232437 WARNING nova.compute.manager [req-1647c08e-a54c-48cd-b1a2-21525008b250 req-324bf346-4b93-4288-9214-1b805f378d4b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 1712064c-6ba8-4660-972f-1e827a40781a] Received unexpected event network-vif-plugged-9e200480-deff-4f39-9945-230d157ac906 for instance with vm_state deleted and task_state None.#033[00m
Dec  6 02:05:30 np0005548731 nova_compute[232433]: 2025-12-06 07:05:30.393 232437 DEBUG nova.compute.manager [req-1647c08e-a54c-48cd-b1a2-21525008b250 req-324bf346-4b93-4288-9214-1b805f378d4b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 1712064c-6ba8-4660-972f-1e827a40781a] Received event network-vif-deleted-9e200480-deff-4f39-9945-230d157ac906 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:05:31 np0005548731 nova_compute[232433]: 2025-12-06 07:05:31.274 232437 DEBUG oslo_concurrency.lockutils [None req-340fdb05-5e7d-4d19-be5f-7692d19c15fb e5f62143343c4499a86f710385c0c2f8 edcc68bbf9cd4c9189e59964db884a26 - - default default] Acquiring lock "9b0dea9b-128d-43a8-aedd-6a023517b89f" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:05:31 np0005548731 nova_compute[232433]: 2025-12-06 07:05:31.274 232437 DEBUG oslo_concurrency.lockutils [None req-340fdb05-5e7d-4d19-be5f-7692d19c15fb e5f62143343c4499a86f710385c0c2f8 edcc68bbf9cd4c9189e59964db884a26 - - default default] Lock "9b0dea9b-128d-43a8-aedd-6a023517b89f" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:05:31 np0005548731 nova_compute[232433]: 2025-12-06 07:05:31.296 232437 INFO nova.compute.manager [None req-340fdb05-5e7d-4d19-be5f-7692d19c15fb e5f62143343c4499a86f710385c0c2f8 edcc68bbf9cd4c9189e59964db884a26 - - default default] [instance: 9b0dea9b-128d-43a8-aedd-6a023517b89f] Detaching volume 72f42200-60a4-4bc0-ac08-8ab63713f05e#033[00m
Dec  6 02:05:31 np0005548731 nova_compute[232433]: 2025-12-06 07:05:31.374 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:05:31 np0005548731 nova_compute[232433]: 2025-12-06 07:05:31.490 232437 INFO nova.virt.block_device [None req-340fdb05-5e7d-4d19-be5f-7692d19c15fb e5f62143343c4499a86f710385c0c2f8 edcc68bbf9cd4c9189e59964db884a26 - - default default] [instance: 9b0dea9b-128d-43a8-aedd-6a023517b89f] Attempting to driver detach volume 72f42200-60a4-4bc0-ac08-8ab63713f05e from mountpoint /dev/sdc#033[00m
Dec  6 02:05:31 np0005548731 nova_compute[232433]: 2025-12-06 07:05:31.500 232437 DEBUG nova.virt.libvirt.driver [None req-340fdb05-5e7d-4d19-be5f-7692d19c15fb e5f62143343c4499a86f710385c0c2f8 edcc68bbf9cd4c9189e59964db884a26 - - default default] Attempting to detach device sdc from instance 9b0dea9b-128d-43a8-aedd-6a023517b89f from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487#033[00m
Dec  6 02:05:31 np0005548731 nova_compute[232433]: 2025-12-06 07:05:31.500 232437 DEBUG nova.virt.libvirt.guest [None req-340fdb05-5e7d-4d19-be5f-7692d19c15fb e5f62143343c4499a86f710385c0c2f8 edcc68bbf9cd4c9189e59964db884a26 - - default default] detach device xml: <disk type="network" device="disk">
Dec  6 02:05:31 np0005548731 nova_compute[232433]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Dec  6 02:05:31 np0005548731 nova_compute[232433]:  <source protocol="rbd" name="volumes/volume-72f42200-60a4-4bc0-ac08-8ab63713f05e">
Dec  6 02:05:31 np0005548731 nova_compute[232433]:    <host name="192.168.122.100" port="6789"/>
Dec  6 02:05:31 np0005548731 nova_compute[232433]:    <host name="192.168.122.102" port="6789"/>
Dec  6 02:05:31 np0005548731 nova_compute[232433]:    <host name="192.168.122.101" port="6789"/>
Dec  6 02:05:31 np0005548731 nova_compute[232433]:  </source>
Dec  6 02:05:31 np0005548731 nova_compute[232433]:  <target dev="sdc" bus="scsi"/>
Dec  6 02:05:31 np0005548731 nova_compute[232433]:  <serial>72f42200-60a4-4bc0-ac08-8ab63713f05e</serial>
Dec  6 02:05:31 np0005548731 nova_compute[232433]:  <address type="drive" controller="0" bus="0" target="0" unit="2"/>
Dec  6 02:05:31 np0005548731 nova_compute[232433]: </disk>
Dec  6 02:05:31 np0005548731 nova_compute[232433]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Dec  6 02:05:31 np0005548731 nova_compute[232433]: 2025-12-06 07:05:31.508 232437 INFO nova.virt.libvirt.driver [None req-340fdb05-5e7d-4d19-be5f-7692d19c15fb e5f62143343c4499a86f710385c0c2f8 edcc68bbf9cd4c9189e59964db884a26 - - default default] Successfully detached device sdc from instance 9b0dea9b-128d-43a8-aedd-6a023517b89f from the persistent domain config.#033[00m
Dec  6 02:05:31 np0005548731 nova_compute[232433]: 2025-12-06 07:05:31.508 232437 DEBUG nova.virt.libvirt.driver [None req-340fdb05-5e7d-4d19-be5f-7692d19c15fb e5f62143343c4499a86f710385c0c2f8 edcc68bbf9cd4c9189e59964db884a26 - - default default] (1/8): Attempting to detach device sdc with device alias scsi0-0-0-2 from instance 9b0dea9b-128d-43a8-aedd-6a023517b89f from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523#033[00m
Dec  6 02:05:31 np0005548731 nova_compute[232433]: 2025-12-06 07:05:31.509 232437 DEBUG nova.virt.libvirt.guest [None req-340fdb05-5e7d-4d19-be5f-7692d19c15fb e5f62143343c4499a86f710385c0c2f8 edcc68bbf9cd4c9189e59964db884a26 - - default default] detach device xml: <disk type="network" device="disk">
Dec  6 02:05:31 np0005548731 nova_compute[232433]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Dec  6 02:05:31 np0005548731 nova_compute[232433]:  <source protocol="rbd" name="volumes/volume-72f42200-60a4-4bc0-ac08-8ab63713f05e">
Dec  6 02:05:31 np0005548731 nova_compute[232433]:    <host name="192.168.122.100" port="6789"/>
Dec  6 02:05:31 np0005548731 nova_compute[232433]:    <host name="192.168.122.102" port="6789"/>
Dec  6 02:05:31 np0005548731 nova_compute[232433]:    <host name="192.168.122.101" port="6789"/>
Dec  6 02:05:31 np0005548731 nova_compute[232433]:  </source>
Dec  6 02:05:31 np0005548731 nova_compute[232433]:  <target dev="sdc" bus="scsi"/>
Dec  6 02:05:31 np0005548731 nova_compute[232433]:  <serial>72f42200-60a4-4bc0-ac08-8ab63713f05e</serial>
Dec  6 02:05:31 np0005548731 nova_compute[232433]:  <address type="drive" controller="0" bus="0" target="0" unit="2"/>
Dec  6 02:05:31 np0005548731 nova_compute[232433]: </disk>
Dec  6 02:05:31 np0005548731 nova_compute[232433]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Dec  6 02:05:31 np0005548731 nova_compute[232433]: 2025-12-06 07:05:31.612 232437 DEBUG nova.virt.libvirt.driver [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Received event <DeviceRemovedEvent: 1765004731.6121778, 9b0dea9b-128d-43a8-aedd-6a023517b89f => scsi0-0-0-2> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370#033[00m
Dec  6 02:05:31 np0005548731 nova_compute[232433]: 2025-12-06 07:05:31.614 232437 DEBUG nova.virt.libvirt.driver [None req-340fdb05-5e7d-4d19-be5f-7692d19c15fb e5f62143343c4499a86f710385c0c2f8 edcc68bbf9cd4c9189e59964db884a26 - - default default] Start waiting for the detach event from libvirt for device sdc with device alias scsi0-0-0-2 for instance 9b0dea9b-128d-43a8-aedd-6a023517b89f _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599#033[00m
Dec  6 02:05:31 np0005548731 nova_compute[232433]: 2025-12-06 07:05:31.616 232437 INFO nova.virt.libvirt.driver [None req-340fdb05-5e7d-4d19-be5f-7692d19c15fb e5f62143343c4499a86f710385c0c2f8 edcc68bbf9cd4c9189e59964db884a26 - - default default] Successfully detached device sdc from instance 9b0dea9b-128d-43a8-aedd-6a023517b89f from the live domain config.#033[00m
Dec  6 02:05:31 np0005548731 nova_compute[232433]: 2025-12-06 07:05:31.801 232437 DEBUG nova.objects.instance [None req-340fdb05-5e7d-4d19-be5f-7692d19c15fb e5f62143343c4499a86f710385c0c2f8 edcc68bbf9cd4c9189e59964db884a26 - - default default] Lazy-loading 'flavor' on Instance uuid 9b0dea9b-128d-43a8-aedd-6a023517b89f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:05:31 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:05:31 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:05:31 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:05:31.828 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:05:31 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:05:31 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:05:31 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:05:31.834 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:05:31 np0005548731 nova_compute[232433]: 2025-12-06 07:05:31.853 232437 DEBUG oslo_concurrency.lockutils [None req-340fdb05-5e7d-4d19-be5f-7692d19c15fb e5f62143343c4499a86f710385c0c2f8 edcc68bbf9cd4c9189e59964db884a26 - - default default] Lock "9b0dea9b-128d-43a8-aedd-6a023517b89f" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 0.579s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:05:32 np0005548731 nova_compute[232433]: 2025-12-06 07:05:32.198 232437 DEBUG oslo_concurrency.lockutils [None req-7081ab52-51f2-43cf-b1c4-7f5603432b2a 80e7264f985d465b9dddefd6429bf7f6 c5ae70f9b4104dcca7240b5f78eaabcd - - default default] Acquiring lock "71af9824-c0c1-45e6-b9bc-5e16c9e6a43c" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:05:32 np0005548731 nova_compute[232433]: 2025-12-06 07:05:32.198 232437 DEBUG oslo_concurrency.lockutils [None req-7081ab52-51f2-43cf-b1c4-7f5603432b2a 80e7264f985d465b9dddefd6429bf7f6 c5ae70f9b4104dcca7240b5f78eaabcd - - default default] Lock "71af9824-c0c1-45e6-b9bc-5e16c9e6a43c" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:05:32 np0005548731 nova_compute[232433]: 2025-12-06 07:05:32.213 232437 DEBUG nova.compute.manager [None req-7081ab52-51f2-43cf-b1c4-7f5603432b2a 80e7264f985d465b9dddefd6429bf7f6 c5ae70f9b4104dcca7240b5f78eaabcd - - default default] [instance: 71af9824-c0c1-45e6-b9bc-5e16c9e6a43c] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Dec  6 02:05:32 np0005548731 nova_compute[232433]: 2025-12-06 07:05:32.379 232437 DEBUG oslo_concurrency.lockutils [None req-7081ab52-51f2-43cf-b1c4-7f5603432b2a 80e7264f985d465b9dddefd6429bf7f6 c5ae70f9b4104dcca7240b5f78eaabcd - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:05:32 np0005548731 nova_compute[232433]: 2025-12-06 07:05:32.379 232437 DEBUG oslo_concurrency.lockutils [None req-7081ab52-51f2-43cf-b1c4-7f5603432b2a 80e7264f985d465b9dddefd6429bf7f6 c5ae70f9b4104dcca7240b5f78eaabcd - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:05:32 np0005548731 nova_compute[232433]: 2025-12-06 07:05:32.386 232437 DEBUG nova.virt.hardware [None req-7081ab52-51f2-43cf-b1c4-7f5603432b2a 80e7264f985d465b9dddefd6429bf7f6 c5ae70f9b4104dcca7240b5f78eaabcd - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Dec  6 02:05:32 np0005548731 nova_compute[232433]: 2025-12-06 07:05:32.386 232437 INFO nova.compute.claims [None req-7081ab52-51f2-43cf-b1c4-7f5603432b2a 80e7264f985d465b9dddefd6429bf7f6 c5ae70f9b4104dcca7240b5f78eaabcd - - default default] [instance: 71af9824-c0c1-45e6-b9bc-5e16c9e6a43c] Claim successful on node compute-2.ctlplane.example.com#033[00m
Dec  6 02:05:32 np0005548731 nova_compute[232433]: 2025-12-06 07:05:32.581 232437 DEBUG oslo_concurrency.processutils [None req-7081ab52-51f2-43cf-b1c4-7f5603432b2a 80e7264f985d465b9dddefd6429bf7f6 c5ae70f9b4104dcca7240b5f78eaabcd - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:05:33 np0005548731 nova_compute[232433]: 2025-12-06 07:05:33.007 232437 DEBUG oslo_concurrency.lockutils [None req-6c7c2152-ac6a-463d-8a14-75d76369ef0a e5f62143343c4499a86f710385c0c2f8 edcc68bbf9cd4c9189e59964db884a26 - - default default] Acquiring lock "9b0dea9b-128d-43a8-aedd-6a023517b89f" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:05:33 np0005548731 nova_compute[232433]: 2025-12-06 07:05:33.008 232437 DEBUG oslo_concurrency.lockutils [None req-6c7c2152-ac6a-463d-8a14-75d76369ef0a e5f62143343c4499a86f710385c0c2f8 edcc68bbf9cd4c9189e59964db884a26 - - default default] Lock "9b0dea9b-128d-43a8-aedd-6a023517b89f" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:05:33 np0005548731 nova_compute[232433]: 2025-12-06 07:05:33.008 232437 DEBUG oslo_concurrency.lockutils [None req-6c7c2152-ac6a-463d-8a14-75d76369ef0a e5f62143343c4499a86f710385c0c2f8 edcc68bbf9cd4c9189e59964db884a26 - - default default] Acquiring lock "9b0dea9b-128d-43a8-aedd-6a023517b89f-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:05:33 np0005548731 nova_compute[232433]: 2025-12-06 07:05:33.009 232437 DEBUG oslo_concurrency.lockutils [None req-6c7c2152-ac6a-463d-8a14-75d76369ef0a e5f62143343c4499a86f710385c0c2f8 edcc68bbf9cd4c9189e59964db884a26 - - default default] Lock "9b0dea9b-128d-43a8-aedd-6a023517b89f-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:05:33 np0005548731 nova_compute[232433]: 2025-12-06 07:05:33.009 232437 DEBUG oslo_concurrency.lockutils [None req-6c7c2152-ac6a-463d-8a14-75d76369ef0a e5f62143343c4499a86f710385c0c2f8 edcc68bbf9cd4c9189e59964db884a26 - - default default] Lock "9b0dea9b-128d-43a8-aedd-6a023517b89f-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:05:33 np0005548731 nova_compute[232433]: 2025-12-06 07:05:33.011 232437 INFO nova.compute.manager [None req-6c7c2152-ac6a-463d-8a14-75d76369ef0a e5f62143343c4499a86f710385c0c2f8 edcc68bbf9cd4c9189e59964db884a26 - - default default] [instance: 9b0dea9b-128d-43a8-aedd-6a023517b89f] Terminating instance#033[00m
Dec  6 02:05:33 np0005548731 nova_compute[232433]: 2025-12-06 07:05:33.012 232437 DEBUG nova.compute.manager [None req-6c7c2152-ac6a-463d-8a14-75d76369ef0a e5f62143343c4499a86f710385c0c2f8 edcc68bbf9cd4c9189e59964db884a26 - - default default] [instance: 9b0dea9b-128d-43a8-aedd-6a023517b89f] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec  6 02:05:33 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 02:05:33 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3394481438' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 02:05:33 np0005548731 kernel: tapc25d2974-9b (unregistering): left promiscuous mode
Dec  6 02:05:33 np0005548731 NetworkManager[49182]: <info>  [1765004733.0553] device (tapc25d2974-9b): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec  6 02:05:33 np0005548731 nova_compute[232433]: 2025-12-06 07:05:33.062 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:05:33 np0005548731 ovn_controller[133927]: 2025-12-06T07:05:33Z|00116|binding|INFO|Releasing lport c25d2974-9b24-4cff-862e-5f43e77d56be from this chassis (sb_readonly=0)
Dec  6 02:05:33 np0005548731 ovn_controller[133927]: 2025-12-06T07:05:33Z|00117|binding|INFO|Setting lport c25d2974-9b24-4cff-862e-5f43e77d56be down in Southbound
Dec  6 02:05:33 np0005548731 ovn_controller[133927]: 2025-12-06T07:05:33Z|00118|binding|INFO|Removing iface tapc25d2974-9b ovn-installed in OVS
Dec  6 02:05:33 np0005548731 nova_compute[232433]: 2025-12-06 07:05:33.070 232437 DEBUG oslo_concurrency.processutils [None req-7081ab52-51f2-43cf-b1c4-7f5603432b2a 80e7264f985d465b9dddefd6429bf7f6 c5ae70f9b4104dcca7240b5f78eaabcd - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.489s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:05:33 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:05:33.070 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:36:88:b5 10.100.0.5'], port_security=['fa:16:3e:36:88:b5 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '9b0dea9b-128d-43a8-aedd-6a023517b89f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ebf7a358-f0c9-48b5-8485-23c08f737784', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'edcc68bbf9cd4c9189e59964db884a26', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'c2ea38d4-7079-4d8e-8d81-023b2f93461c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com', 'neutron:port_fip': '192.168.122.225'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=63636421-5d43-464e-9459-29f5c8bd3cf7, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], logical_port=c25d2974-9b24-4cff-862e-5f43e77d56be) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 02:05:33 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:05:33.072 143965 INFO neutron.agent.ovn.metadata.agent [-] Port c25d2974-9b24-4cff-862e-5f43e77d56be in datapath ebf7a358-f0c9-48b5-8485-23c08f737784 unbound from our chassis#033[00m
Dec  6 02:05:33 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:05:33.075 143965 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network ebf7a358-f0c9-48b5-8485-23c08f737784, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec  6 02:05:33 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:05:33.076 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[9b46f305-ceb5-4f9b-8aab-3dab4557d962]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:05:33 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:05:33.077 143965 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-ebf7a358-f0c9-48b5-8485-23c08f737784 namespace which is not needed anymore#033[00m
Dec  6 02:05:33 np0005548731 nova_compute[232433]: 2025-12-06 07:05:33.078 232437 DEBUG nova.compute.provider_tree [None req-7081ab52-51f2-43cf-b1c4-7f5603432b2a 80e7264f985d465b9dddefd6429bf7f6 c5ae70f9b4104dcca7240b5f78eaabcd - - default default] Inventory has not changed in ProviderTree for provider: 6d00757a-082f-486d-ae84-869a2ba2e6e7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  6 02:05:33 np0005548731 nova_compute[232433]: 2025-12-06 07:05:33.086 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:05:33 np0005548731 nova_compute[232433]: 2025-12-06 07:05:33.099 232437 DEBUG nova.scheduler.client.report [None req-7081ab52-51f2-43cf-b1c4-7f5603432b2a 80e7264f985d465b9dddefd6429bf7f6 c5ae70f9b4104dcca7240b5f78eaabcd - - default default] Inventory has not changed for provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  6 02:05:33 np0005548731 nova_compute[232433]: 2025-12-06 07:05:33.117 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:05:33 np0005548731 systemd[1]: machine-qemu\x2d15\x2dinstance\x2d0000001f.scope: Deactivated successfully.
Dec  6 02:05:33 np0005548731 systemd[1]: machine-qemu\x2d15\x2dinstance\x2d0000001f.scope: Consumed 15.095s CPU time.
Dec  6 02:05:33 np0005548731 systemd-machined[195355]: Machine qemu-15-instance-0000001f terminated.
Dec  6 02:05:33 np0005548731 nova_compute[232433]: 2025-12-06 07:05:33.138 232437 DEBUG oslo_concurrency.lockutils [None req-7081ab52-51f2-43cf-b1c4-7f5603432b2a 80e7264f985d465b9dddefd6429bf7f6 c5ae70f9b4104dcca7240b5f78eaabcd - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.758s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:05:33 np0005548731 nova_compute[232433]: 2025-12-06 07:05:33.138 232437 DEBUG nova.compute.manager [None req-7081ab52-51f2-43cf-b1c4-7f5603432b2a 80e7264f985d465b9dddefd6429bf7f6 c5ae70f9b4104dcca7240b5f78eaabcd - - default default] [instance: 71af9824-c0c1-45e6-b9bc-5e16c9e6a43c] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Dec  6 02:05:33 np0005548731 nova_compute[232433]: 2025-12-06 07:05:33.208 232437 DEBUG nova.compute.manager [None req-7081ab52-51f2-43cf-b1c4-7f5603432b2a 80e7264f985d465b9dddefd6429bf7f6 c5ae70f9b4104dcca7240b5f78eaabcd - - default default] [instance: 71af9824-c0c1-45e6-b9bc-5e16c9e6a43c] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Dec  6 02:05:33 np0005548731 nova_compute[232433]: 2025-12-06 07:05:33.208 232437 DEBUG nova.network.neutron [None req-7081ab52-51f2-43cf-b1c4-7f5603432b2a 80e7264f985d465b9dddefd6429bf7f6 c5ae70f9b4104dcca7240b5f78eaabcd - - default default] [instance: 71af9824-c0c1-45e6-b9bc-5e16c9e6a43c] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Dec  6 02:05:33 np0005548731 neutron-haproxy-ovnmeta-ebf7a358-f0c9-48b5-8485-23c08f737784[248001]: [NOTICE]   (248005) : haproxy version is 2.8.14-c23fe91
Dec  6 02:05:33 np0005548731 neutron-haproxy-ovnmeta-ebf7a358-f0c9-48b5-8485-23c08f737784[248001]: [NOTICE]   (248005) : path to executable is /usr/sbin/haproxy
Dec  6 02:05:33 np0005548731 neutron-haproxy-ovnmeta-ebf7a358-f0c9-48b5-8485-23c08f737784[248001]: [WARNING]  (248005) : Exiting Master process...
Dec  6 02:05:33 np0005548731 neutron-haproxy-ovnmeta-ebf7a358-f0c9-48b5-8485-23c08f737784[248001]: [ALERT]    (248005) : Current worker (248007) exited with code 143 (Terminated)
Dec  6 02:05:33 np0005548731 neutron-haproxy-ovnmeta-ebf7a358-f0c9-48b5-8485-23c08f737784[248001]: [WARNING]  (248005) : All workers exited. Exiting... (0)
Dec  6 02:05:33 np0005548731 systemd[1]: libpod-b17337e9bcc25543e1aa454227bf8d98fd08bee74adbf5b4b634cc0ecba20c01.scope: Deactivated successfully.
Dec  6 02:05:33 np0005548731 podman[248526]: 2025-12-06 07:05:33.221796907 +0000 UTC m=+0.051815632 container died b17337e9bcc25543e1aa454227bf8d98fd08bee74adbf5b4b634cc0ecba20c01 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ebf7a358-f0c9-48b5-8485-23c08f737784, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec  6 02:05:33 np0005548731 nova_compute[232433]: 2025-12-06 07:05:33.229 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:05:33 np0005548731 nova_compute[232433]: 2025-12-06 07:05:33.232 232437 INFO nova.virt.libvirt.driver [None req-7081ab52-51f2-43cf-b1c4-7f5603432b2a 80e7264f985d465b9dddefd6429bf7f6 c5ae70f9b4104dcca7240b5f78eaabcd - - default default] [instance: 71af9824-c0c1-45e6-b9bc-5e16c9e6a43c] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Dec  6 02:05:33 np0005548731 nova_compute[232433]: 2025-12-06 07:05:33.235 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:05:33 np0005548731 nova_compute[232433]: 2025-12-06 07:05:33.246 232437 INFO nova.virt.libvirt.driver [-] [instance: 9b0dea9b-128d-43a8-aedd-6a023517b89f] Instance destroyed successfully.#033[00m
Dec  6 02:05:33 np0005548731 nova_compute[232433]: 2025-12-06 07:05:33.246 232437 DEBUG nova.objects.instance [None req-6c7c2152-ac6a-463d-8a14-75d76369ef0a e5f62143343c4499a86f710385c0c2f8 edcc68bbf9cd4c9189e59964db884a26 - - default default] Lazy-loading 'resources' on Instance uuid 9b0dea9b-128d-43a8-aedd-6a023517b89f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:05:33 np0005548731 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-b17337e9bcc25543e1aa454227bf8d98fd08bee74adbf5b4b634cc0ecba20c01-userdata-shm.mount: Deactivated successfully.
Dec  6 02:05:33 np0005548731 systemd[1]: var-lib-containers-storage-overlay-4b96bb5e717cc7709ccb52113173f318e9970e66e3e6e795b15ba126cd5dd877-merged.mount: Deactivated successfully.
Dec  6 02:05:33 np0005548731 podman[248526]: 2025-12-06 07:05:33.264453526 +0000 UTC m=+0.094472241 container cleanup b17337e9bcc25543e1aa454227bf8d98fd08bee74adbf5b4b634cc0ecba20c01 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ebf7a358-f0c9-48b5-8485-23c08f737784, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.license=GPLv2)
Dec  6 02:05:33 np0005548731 nova_compute[232433]: 2025-12-06 07:05:33.265 232437 DEBUG nova.virt.libvirt.vif [None req-6c7c2152-ac6a-463d-8a14-75d76369ef0a e5f62143343c4499a86f710385c0c2f8 edcc68bbf9cd4c9189e59964db884a26 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T07:04:55Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachSCSIVolumeTestJSON-server-171472628',display_name='tempest-AttachSCSIVolumeTestJSON-server-171472628',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-attachscsivolumetestjson-server-171472628',id=31,image_ref='544e5cd1-78c2-46b6-bec1-176dc9a97c75',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIrVdsMwjNq9Fmk++jiOapSC71MhALV02yfiLTSDaFdCaMi5WTLHgeXWQ+IJhjXnQReNKmErMLIKuEbeAIKRK2LT9KgsLL26Bf1g2xLyRtjiFSK2z6roei561T41/p3POQ==',key_name='tempest-keypair-398633617',keypairs=<?>,launch_index=0,launched_at=2025-12-06T07:05:05Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='edcc68bbf9cd4c9189e59964db884a26',ramdisk_id='',reservation_id='r-db5ee3x2',resources=None,root_device_name='/dev/sda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='544e5cd1-78c2-46b6-bec1-176dc9a97c75',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='scsi',image_hw_disk_bus='scsi',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_scsi_model='virtio-scsi',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachSCSIVolumeTestJSON-105555555',owner_user_name='tempest-AttachSCSIVolumeTestJSON-105555555-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T07:05:05Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='e5f62143343c4499a86f710385c0c2f8',uuid=9b0dea9b-128d-43a8-aedd-6a023517b89f,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "c25d2974-9b24-4cff-862e-5f43e77d56be", "address": "fa:16:3e:36:88:b5", "network": {"id": "ebf7a358-f0c9-48b5-8485-23c08f737784", "bridge": "br-int", "label": "tempest-AttachSCSIVolumeTestJSON-863315842-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": 
[], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.225", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "edcc68bbf9cd4c9189e59964db884a26", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc25d2974-9b", "ovs_interfaceid": "c25d2974-9b24-4cff-862e-5f43e77d56be", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec  6 02:05:33 np0005548731 nova_compute[232433]: 2025-12-06 07:05:33.265 232437 DEBUG nova.network.os_vif_util [None req-6c7c2152-ac6a-463d-8a14-75d76369ef0a e5f62143343c4499a86f710385c0c2f8 edcc68bbf9cd4c9189e59964db884a26 - - default default] Converting VIF {"id": "c25d2974-9b24-4cff-862e-5f43e77d56be", "address": "fa:16:3e:36:88:b5", "network": {"id": "ebf7a358-f0c9-48b5-8485-23c08f737784", "bridge": "br-int", "label": "tempest-AttachSCSIVolumeTestJSON-863315842-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.225", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "edcc68bbf9cd4c9189e59964db884a26", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc25d2974-9b", "ovs_interfaceid": "c25d2974-9b24-4cff-862e-5f43e77d56be", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 02:05:33 np0005548731 nova_compute[232433]: 2025-12-06 07:05:33.266 232437 DEBUG nova.network.os_vif_util [None req-6c7c2152-ac6a-463d-8a14-75d76369ef0a e5f62143343c4499a86f710385c0c2f8 edcc68bbf9cd4c9189e59964db884a26 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:36:88:b5,bridge_name='br-int',has_traffic_filtering=True,id=c25d2974-9b24-4cff-862e-5f43e77d56be,network=Network(ebf7a358-f0c9-48b5-8485-23c08f737784),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc25d2974-9b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 02:05:33 np0005548731 nova_compute[232433]: 2025-12-06 07:05:33.266 232437 DEBUG os_vif [None req-6c7c2152-ac6a-463d-8a14-75d76369ef0a e5f62143343c4499a86f710385c0c2f8 edcc68bbf9cd4c9189e59964db884a26 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:36:88:b5,bridge_name='br-int',has_traffic_filtering=True,id=c25d2974-9b24-4cff-862e-5f43e77d56be,network=Network(ebf7a358-f0c9-48b5-8485-23c08f737784),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc25d2974-9b') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec  6 02:05:33 np0005548731 nova_compute[232433]: 2025-12-06 07:05:33.271 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:05:33 np0005548731 nova_compute[232433]: 2025-12-06 07:05:33.272 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc25d2974-9b, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:05:33 np0005548731 nova_compute[232433]: 2025-12-06 07:05:33.274 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:05:33 np0005548731 nova_compute[232433]: 2025-12-06 07:05:33.276 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:05:33 np0005548731 nova_compute[232433]: 2025-12-06 07:05:33.279 232437 INFO os_vif [None req-6c7c2152-ac6a-463d-8a14-75d76369ef0a e5f62143343c4499a86f710385c0c2f8 edcc68bbf9cd4c9189e59964db884a26 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:36:88:b5,bridge_name='br-int',has_traffic_filtering=True,id=c25d2974-9b24-4cff-862e-5f43e77d56be,network=Network(ebf7a358-f0c9-48b5-8485-23c08f737784),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc25d2974-9b')#033[00m
Dec  6 02:05:33 np0005548731 systemd[1]: libpod-conmon-b17337e9bcc25543e1aa454227bf8d98fd08bee74adbf5b4b634cc0ecba20c01.scope: Deactivated successfully.
Dec  6 02:05:33 np0005548731 nova_compute[232433]: 2025-12-06 07:05:33.302 232437 DEBUG nova.compute.manager [None req-7081ab52-51f2-43cf-b1c4-7f5603432b2a 80e7264f985d465b9dddefd6429bf7f6 c5ae70f9b4104dcca7240b5f78eaabcd - - default default] [instance: 71af9824-c0c1-45e6-b9bc-5e16c9e6a43c] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Dec  6 02:05:33 np0005548731 podman[248564]: 2025-12-06 07:05:33.342959381 +0000 UTC m=+0.047348003 container remove b17337e9bcc25543e1aa454227bf8d98fd08bee74adbf5b4b634cc0ecba20c01 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ebf7a358-f0c9-48b5-8485-23c08f737784, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Dec  6 02:05:33 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:05:33.349 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[95dbc4f6-7251-46c9-84e5-60ae7617c0d4]: (4, ('Sat Dec  6 07:05:33 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-ebf7a358-f0c9-48b5-8485-23c08f737784 (b17337e9bcc25543e1aa454227bf8d98fd08bee74adbf5b4b634cc0ecba20c01)\nb17337e9bcc25543e1aa454227bf8d98fd08bee74adbf5b4b634cc0ecba20c01\nSat Dec  6 07:05:33 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-ebf7a358-f0c9-48b5-8485-23c08f737784 (b17337e9bcc25543e1aa454227bf8d98fd08bee74adbf5b4b634cc0ecba20c01)\nb17337e9bcc25543e1aa454227bf8d98fd08bee74adbf5b4b634cc0ecba20c01\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:05:33 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:05:33.351 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[fa2b32fc-3246-4089-9b53-798cc53cea97]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:05:33 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:05:33.352 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapebf7a358-f0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:05:33 np0005548731 nova_compute[232433]: 2025-12-06 07:05:33.354 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:05:33 np0005548731 kernel: tapebf7a358-f0: left promiscuous mode
Dec  6 02:05:33 np0005548731 nova_compute[232433]: 2025-12-06 07:05:33.355 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:05:33 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:05:33.358 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[4fc5b517-d3eb-4e66-97da-d6a1596a2f4c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:05:33 np0005548731 nova_compute[232433]: 2025-12-06 07:05:33.370 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:05:33 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:05:33.374 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[47ff361e-e4d6-475a-bf12-6b993585bbd8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:05:33 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:05:33.375 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[146ba420-17a3-4a93-bc0d-8ef95987c882]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:05:33 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:05:33.393 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[1cd3cd39-813f-4782-a8de-04c80d1acb24]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 496587, 'reachable_time': 19594, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 248598, 'error': None, 'target': 'ovnmeta-ebf7a358-f0c9-48b5-8485-23c08f737784', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:05:33 np0005548731 systemd[1]: run-netns-ovnmeta\x2debf7a358\x2df0c9\x2d48b5\x2d8485\x2d23c08f737784.mount: Deactivated successfully.
Dec  6 02:05:33 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:05:33.397 144079 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-ebf7a358-f0c9-48b5-8485-23c08f737784 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Dec  6 02:05:33 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:05:33.397 144079 DEBUG oslo.privsep.daemon [-] privsep: reply[c9c2aa7f-2786-4860-b119-3ed9f575df0a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:05:33 np0005548731 nova_compute[232433]: 2025-12-06 07:05:33.456 232437 DEBUG nova.compute.manager [None req-7081ab52-51f2-43cf-b1c4-7f5603432b2a 80e7264f985d465b9dddefd6429bf7f6 c5ae70f9b4104dcca7240b5f78eaabcd - - default default] [instance: 71af9824-c0c1-45e6-b9bc-5e16c9e6a43c] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Dec  6 02:05:33 np0005548731 nova_compute[232433]: 2025-12-06 07:05:33.457 232437 DEBUG nova.virt.libvirt.driver [None req-7081ab52-51f2-43cf-b1c4-7f5603432b2a 80e7264f985d465b9dddefd6429bf7f6 c5ae70f9b4104dcca7240b5f78eaabcd - - default default] [instance: 71af9824-c0c1-45e6-b9bc-5e16c9e6a43c] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Dec  6 02:05:33 np0005548731 nova_compute[232433]: 2025-12-06 07:05:33.458 232437 INFO nova.virt.libvirt.driver [None req-7081ab52-51f2-43cf-b1c4-7f5603432b2a 80e7264f985d465b9dddefd6429bf7f6 c5ae70f9b4104dcca7240b5f78eaabcd - - default default] [instance: 71af9824-c0c1-45e6-b9bc-5e16c9e6a43c] Creating image(s)#033[00m
Dec  6 02:05:33 np0005548731 nova_compute[232433]: 2025-12-06 07:05:33.500 232437 DEBUG nova.storage.rbd_utils [None req-7081ab52-51f2-43cf-b1c4-7f5603432b2a 80e7264f985d465b9dddefd6429bf7f6 c5ae70f9b4104dcca7240b5f78eaabcd - - default default] rbd image 71af9824-c0c1-45e6-b9bc-5e16c9e6a43c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:05:33 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:05:33.517 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=12, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:ec:b3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '32:72:e7:89:e0:7d'}, ipsec=False) old=SB_Global(nb_cfg=11) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 02:05:33 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:05:33.522 143965 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec  6 02:05:33 np0005548731 nova_compute[232433]: 2025-12-06 07:05:33.534 232437 DEBUG nova.storage.rbd_utils [None req-7081ab52-51f2-43cf-b1c4-7f5603432b2a 80e7264f985d465b9dddefd6429bf7f6 c5ae70f9b4104dcca7240b5f78eaabcd - - default default] rbd image 71af9824-c0c1-45e6-b9bc-5e16c9e6a43c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:05:33 np0005548731 nova_compute[232433]: 2025-12-06 07:05:33.570 232437 DEBUG nova.storage.rbd_utils [None req-7081ab52-51f2-43cf-b1c4-7f5603432b2a 80e7264f985d465b9dddefd6429bf7f6 c5ae70f9b4104dcca7240b5f78eaabcd - - default default] rbd image 71af9824-c0c1-45e6-b9bc-5e16c9e6a43c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:05:33 np0005548731 nova_compute[232433]: 2025-12-06 07:05:33.575 232437 DEBUG oslo_concurrency.processutils [None req-7081ab52-51f2-43cf-b1c4-7f5603432b2a 80e7264f985d465b9dddefd6429bf7f6 c5ae70f9b4104dcca7240b5f78eaabcd - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:05:33 np0005548731 nova_compute[232433]: 2025-12-06 07:05:33.603 232437 DEBUG nova.network.neutron [None req-7081ab52-51f2-43cf-b1c4-7f5603432b2a 80e7264f985d465b9dddefd6429bf7f6 c5ae70f9b4104dcca7240b5f78eaabcd - - default default] [instance: 71af9824-c0c1-45e6-b9bc-5e16c9e6a43c] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188#033[00m
Dec  6 02:05:33 np0005548731 nova_compute[232433]: 2025-12-06 07:05:33.604 232437 DEBUG nova.compute.manager [None req-7081ab52-51f2-43cf-b1c4-7f5603432b2a 80e7264f985d465b9dddefd6429bf7f6 c5ae70f9b4104dcca7240b5f78eaabcd - - default default] [instance: 71af9824-c0c1-45e6-b9bc-5e16c9e6a43c] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Dec  6 02:05:33 np0005548731 nova_compute[232433]: 2025-12-06 07:05:33.604 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:05:33 np0005548731 nova_compute[232433]: 2025-12-06 07:05:33.641 232437 DEBUG oslo_concurrency.processutils [None req-7081ab52-51f2-43cf-b1c4-7f5603432b2a 80e7264f985d465b9dddefd6429bf7f6 c5ae70f9b4104dcca7240b5f78eaabcd - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:05:33 np0005548731 nova_compute[232433]: 2025-12-06 07:05:33.642 232437 DEBUG oslo_concurrency.lockutils [None req-7081ab52-51f2-43cf-b1c4-7f5603432b2a 80e7264f985d465b9dddefd6429bf7f6 c5ae70f9b4104dcca7240b5f78eaabcd - - default default] Acquiring lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:05:33 np0005548731 nova_compute[232433]: 2025-12-06 07:05:33.643 232437 DEBUG oslo_concurrency.lockutils [None req-7081ab52-51f2-43cf-b1c4-7f5603432b2a 80e7264f985d465b9dddefd6429bf7f6 c5ae70f9b4104dcca7240b5f78eaabcd - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:05:33 np0005548731 nova_compute[232433]: 2025-12-06 07:05:33.643 232437 DEBUG oslo_concurrency.lockutils [None req-7081ab52-51f2-43cf-b1c4-7f5603432b2a 80e7264f985d465b9dddefd6429bf7f6 c5ae70f9b4104dcca7240b5f78eaabcd - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:05:33 np0005548731 nova_compute[232433]: 2025-12-06 07:05:33.677 232437 DEBUG nova.storage.rbd_utils [None req-7081ab52-51f2-43cf-b1c4-7f5603432b2a 80e7264f985d465b9dddefd6429bf7f6 c5ae70f9b4104dcca7240b5f78eaabcd - - default default] rbd image 71af9824-c0c1-45e6-b9bc-5e16c9e6a43c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:05:33 np0005548731 nova_compute[232433]: 2025-12-06 07:05:33.680 232437 DEBUG oslo_concurrency.processutils [None req-7081ab52-51f2-43cf-b1c4-7f5603432b2a 80e7264f985d465b9dddefd6429bf7f6 c5ae70f9b4104dcca7240b5f78eaabcd - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef 71af9824-c0c1-45e6-b9bc-5e16c9e6a43c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:05:33 np0005548731 nova_compute[232433]: 2025-12-06 07:05:33.704 232437 DEBUG nova.compute.manager [req-926ca24d-7e32-4ace-992b-80b13f1ff422 req-643f790e-99c4-4632-a137-50fb0131418f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 9b0dea9b-128d-43a8-aedd-6a023517b89f] Received event network-vif-unplugged-c25d2974-9b24-4cff-862e-5f43e77d56be external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:05:33 np0005548731 nova_compute[232433]: 2025-12-06 07:05:33.704 232437 DEBUG oslo_concurrency.lockutils [req-926ca24d-7e32-4ace-992b-80b13f1ff422 req-643f790e-99c4-4632-a137-50fb0131418f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "9b0dea9b-128d-43a8-aedd-6a023517b89f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:05:33 np0005548731 nova_compute[232433]: 2025-12-06 07:05:33.705 232437 DEBUG oslo_concurrency.lockutils [req-926ca24d-7e32-4ace-992b-80b13f1ff422 req-643f790e-99c4-4632-a137-50fb0131418f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "9b0dea9b-128d-43a8-aedd-6a023517b89f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:05:33 np0005548731 nova_compute[232433]: 2025-12-06 07:05:33.705 232437 DEBUG oslo_concurrency.lockutils [req-926ca24d-7e32-4ace-992b-80b13f1ff422 req-643f790e-99c4-4632-a137-50fb0131418f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "9b0dea9b-128d-43a8-aedd-6a023517b89f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:05:33 np0005548731 nova_compute[232433]: 2025-12-06 07:05:33.705 232437 DEBUG nova.compute.manager [req-926ca24d-7e32-4ace-992b-80b13f1ff422 req-643f790e-99c4-4632-a137-50fb0131418f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 9b0dea9b-128d-43a8-aedd-6a023517b89f] No waiting events found dispatching network-vif-unplugged-c25d2974-9b24-4cff-862e-5f43e77d56be pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 02:05:33 np0005548731 nova_compute[232433]: 2025-12-06 07:05:33.705 232437 DEBUG nova.compute.manager [req-926ca24d-7e32-4ace-992b-80b13f1ff422 req-643f790e-99c4-4632-a137-50fb0131418f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 9b0dea9b-128d-43a8-aedd-6a023517b89f] Received event network-vif-unplugged-c25d2974-9b24-4cff-862e-5f43e77d56be for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Dec  6 02:05:33 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:05:33 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:05:33 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:05:33.829 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:05:33 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:05:33 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 02:05:33 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:05:33.837 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 02:05:33 np0005548731 nova_compute[232433]: 2025-12-06 07:05:33.947 232437 INFO nova.virt.libvirt.driver [None req-6c7c2152-ac6a-463d-8a14-75d76369ef0a e5f62143343c4499a86f710385c0c2f8 edcc68bbf9cd4c9189e59964db884a26 - - default default] [instance: 9b0dea9b-128d-43a8-aedd-6a023517b89f] Deleting instance files /var/lib/nova/instances/9b0dea9b-128d-43a8-aedd-6a023517b89f_del#033[00m
Dec  6 02:05:33 np0005548731 nova_compute[232433]: 2025-12-06 07:05:33.949 232437 INFO nova.virt.libvirt.driver [None req-6c7c2152-ac6a-463d-8a14-75d76369ef0a e5f62143343c4499a86f710385c0c2f8 edcc68bbf9cd4c9189e59964db884a26 - - default default] [instance: 9b0dea9b-128d-43a8-aedd-6a023517b89f] Deletion of /var/lib/nova/instances/9b0dea9b-128d-43a8-aedd-6a023517b89f_del complete#033[00m
Dec  6 02:05:33 np0005548731 nova_compute[232433]: 2025-12-06 07:05:33.976 232437 DEBUG oslo_concurrency.processutils [None req-7081ab52-51f2-43cf-b1c4-7f5603432b2a 80e7264f985d465b9dddefd6429bf7f6 c5ae70f9b4104dcca7240b5f78eaabcd - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef 71af9824-c0c1-45e6-b9bc-5e16c9e6a43c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.296s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:05:34 np0005548731 nova_compute[232433]: 2025-12-06 07:05:34.043 232437 INFO nova.compute.manager [None req-6c7c2152-ac6a-463d-8a14-75d76369ef0a e5f62143343c4499a86f710385c0c2f8 edcc68bbf9cd4c9189e59964db884a26 - - default default] [instance: 9b0dea9b-128d-43a8-aedd-6a023517b89f] Took 1.03 seconds to destroy the instance on the hypervisor.#033[00m
Dec  6 02:05:34 np0005548731 nova_compute[232433]: 2025-12-06 07:05:34.043 232437 DEBUG oslo.service.loopingcall [None req-6c7c2152-ac6a-463d-8a14-75d76369ef0a e5f62143343c4499a86f710385c0c2f8 edcc68bbf9cd4c9189e59964db884a26 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Dec  6 02:05:34 np0005548731 nova_compute[232433]: 2025-12-06 07:05:34.044 232437 DEBUG nova.compute.manager [-] [instance: 9b0dea9b-128d-43a8-aedd-6a023517b89f] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Dec  6 02:05:34 np0005548731 nova_compute[232433]: 2025-12-06 07:05:34.044 232437 DEBUG nova.network.neutron [-] [instance: 9b0dea9b-128d-43a8-aedd-6a023517b89f] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Dec  6 02:05:34 np0005548731 nova_compute[232433]: 2025-12-06 07:05:34.051 232437 DEBUG nova.storage.rbd_utils [None req-7081ab52-51f2-43cf-b1c4-7f5603432b2a 80e7264f985d465b9dddefd6429bf7f6 c5ae70f9b4104dcca7240b5f78eaabcd - - default default] resizing rbd image 71af9824-c0c1-45e6-b9bc-5e16c9e6a43c_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Dec  6 02:05:34 np0005548731 nova_compute[232433]: 2025-12-06 07:05:34.150 232437 DEBUG nova.objects.instance [None req-7081ab52-51f2-43cf-b1c4-7f5603432b2a 80e7264f985d465b9dddefd6429bf7f6 c5ae70f9b4104dcca7240b5f78eaabcd - - default default] Lazy-loading 'migration_context' on Instance uuid 71af9824-c0c1-45e6-b9bc-5e16c9e6a43c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:05:34 np0005548731 nova_compute[232433]: 2025-12-06 07:05:34.163 232437 DEBUG nova.virt.libvirt.driver [None req-7081ab52-51f2-43cf-b1c4-7f5603432b2a 80e7264f985d465b9dddefd6429bf7f6 c5ae70f9b4104dcca7240b5f78eaabcd - - default default] [instance: 71af9824-c0c1-45e6-b9bc-5e16c9e6a43c] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Dec  6 02:05:34 np0005548731 nova_compute[232433]: 2025-12-06 07:05:34.163 232437 DEBUG nova.virt.libvirt.driver [None req-7081ab52-51f2-43cf-b1c4-7f5603432b2a 80e7264f985d465b9dddefd6429bf7f6 c5ae70f9b4104dcca7240b5f78eaabcd - - default default] [instance: 71af9824-c0c1-45e6-b9bc-5e16c9e6a43c] Ensure instance console log exists: /var/lib/nova/instances/71af9824-c0c1-45e6-b9bc-5e16c9e6a43c/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec  6 02:05:34 np0005548731 nova_compute[232433]: 2025-12-06 07:05:34.163 232437 DEBUG oslo_concurrency.lockutils [None req-7081ab52-51f2-43cf-b1c4-7f5603432b2a 80e7264f985d465b9dddefd6429bf7f6 c5ae70f9b4104dcca7240b5f78eaabcd - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:05:34 np0005548731 nova_compute[232433]: 2025-12-06 07:05:34.164 232437 DEBUG oslo_concurrency.lockutils [None req-7081ab52-51f2-43cf-b1c4-7f5603432b2a 80e7264f985d465b9dddefd6429bf7f6 c5ae70f9b4104dcca7240b5f78eaabcd - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:05:34 np0005548731 nova_compute[232433]: 2025-12-06 07:05:34.164 232437 DEBUG oslo_concurrency.lockutils [None req-7081ab52-51f2-43cf-b1c4-7f5603432b2a 80e7264f985d465b9dddefd6429bf7f6 c5ae70f9b4104dcca7240b5f78eaabcd - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:05:34 np0005548731 nova_compute[232433]: 2025-12-06 07:05:34.165 232437 DEBUG nova.virt.libvirt.driver [None req-7081ab52-51f2-43cf-b1c4-7f5603432b2a 80e7264f985d465b9dddefd6429bf7f6 c5ae70f9b4104dcca7240b5f78eaabcd - - default default] [instance: 71af9824-c0c1-45e6-b9bc-5e16c9e6a43c] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'device_type': 'disk', 'encryption_secret_uuid': None, 'disk_bus': 'virtio', 'boot_index': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'encryption_options': None, 'size': 0, 'encrypted': False, 'image_id': '6efab05d-c7cf-4770-a5c3-c806a2739063'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec  6 02:05:34 np0005548731 nova_compute[232433]: 2025-12-06 07:05:34.169 232437 WARNING nova.virt.libvirt.driver [None req-7081ab52-51f2-43cf-b1c4-7f5603432b2a 80e7264f985d465b9dddefd6429bf7f6 c5ae70f9b4104dcca7240b5f78eaabcd - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  6 02:05:34 np0005548731 nova_compute[232433]: 2025-12-06 07:05:34.173 232437 DEBUG nova.virt.libvirt.host [None req-7081ab52-51f2-43cf-b1c4-7f5603432b2a 80e7264f985d465b9dddefd6429bf7f6 c5ae70f9b4104dcca7240b5f78eaabcd - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec  6 02:05:34 np0005548731 nova_compute[232433]: 2025-12-06 07:05:34.174 232437 DEBUG nova.virt.libvirt.host [None req-7081ab52-51f2-43cf-b1c4-7f5603432b2a 80e7264f985d465b9dddefd6429bf7f6 c5ae70f9b4104dcca7240b5f78eaabcd - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec  6 02:05:34 np0005548731 nova_compute[232433]: 2025-12-06 07:05:34.176 232437 DEBUG nova.virt.libvirt.host [None req-7081ab52-51f2-43cf-b1c4-7f5603432b2a 80e7264f985d465b9dddefd6429bf7f6 c5ae70f9b4104dcca7240b5f78eaabcd - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec  6 02:05:34 np0005548731 nova_compute[232433]: 2025-12-06 07:05:34.177 232437 DEBUG nova.virt.libvirt.host [None req-7081ab52-51f2-43cf-b1c4-7f5603432b2a 80e7264f985d465b9dddefd6429bf7f6 c5ae70f9b4104dcca7240b5f78eaabcd - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec  6 02:05:34 np0005548731 nova_compute[232433]: 2025-12-06 07:05:34.178 232437 DEBUG nova.virt.libvirt.driver [None req-7081ab52-51f2-43cf-b1c4-7f5603432b2a 80e7264f985d465b9dddefd6429bf7f6 c5ae70f9b4104dcca7240b5f78eaabcd - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec  6 02:05:34 np0005548731 nova_compute[232433]: 2025-12-06 07:05:34.178 232437 DEBUG nova.virt.hardware [None req-7081ab52-51f2-43cf-b1c4-7f5603432b2a 80e7264f985d465b9dddefd6429bf7f6 c5ae70f9b4104dcca7240b5f78eaabcd - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-06T06:56:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='25848a18-11d9-4f11-80b5-5d005675c76d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec  6 02:05:34 np0005548731 nova_compute[232433]: 2025-12-06 07:05:34.178 232437 DEBUG nova.virt.hardware [None req-7081ab52-51f2-43cf-b1c4-7f5603432b2a 80e7264f985d465b9dddefd6429bf7f6 c5ae70f9b4104dcca7240b5f78eaabcd - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec  6 02:05:34 np0005548731 nova_compute[232433]: 2025-12-06 07:05:34.178 232437 DEBUG nova.virt.hardware [None req-7081ab52-51f2-43cf-b1c4-7f5603432b2a 80e7264f985d465b9dddefd6429bf7f6 c5ae70f9b4104dcca7240b5f78eaabcd - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec  6 02:05:34 np0005548731 nova_compute[232433]: 2025-12-06 07:05:34.179 232437 DEBUG nova.virt.hardware [None req-7081ab52-51f2-43cf-b1c4-7f5603432b2a 80e7264f985d465b9dddefd6429bf7f6 c5ae70f9b4104dcca7240b5f78eaabcd - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec  6 02:05:34 np0005548731 nova_compute[232433]: 2025-12-06 07:05:34.179 232437 DEBUG nova.virt.hardware [None req-7081ab52-51f2-43cf-b1c4-7f5603432b2a 80e7264f985d465b9dddefd6429bf7f6 c5ae70f9b4104dcca7240b5f78eaabcd - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec  6 02:05:34 np0005548731 nova_compute[232433]: 2025-12-06 07:05:34.179 232437 DEBUG nova.virt.hardware [None req-7081ab52-51f2-43cf-b1c4-7f5603432b2a 80e7264f985d465b9dddefd6429bf7f6 c5ae70f9b4104dcca7240b5f78eaabcd - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec  6 02:05:34 np0005548731 nova_compute[232433]: 2025-12-06 07:05:34.179 232437 DEBUG nova.virt.hardware [None req-7081ab52-51f2-43cf-b1c4-7f5603432b2a 80e7264f985d465b9dddefd6429bf7f6 c5ae70f9b4104dcca7240b5f78eaabcd - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec  6 02:05:34 np0005548731 nova_compute[232433]: 2025-12-06 07:05:34.179 232437 DEBUG nova.virt.hardware [None req-7081ab52-51f2-43cf-b1c4-7f5603432b2a 80e7264f985d465b9dddefd6429bf7f6 c5ae70f9b4104dcca7240b5f78eaabcd - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec  6 02:05:34 np0005548731 nova_compute[232433]: 2025-12-06 07:05:34.179 232437 DEBUG nova.virt.hardware [None req-7081ab52-51f2-43cf-b1c4-7f5603432b2a 80e7264f985d465b9dddefd6429bf7f6 c5ae70f9b4104dcca7240b5f78eaabcd - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec  6 02:05:34 np0005548731 nova_compute[232433]: 2025-12-06 07:05:34.179 232437 DEBUG nova.virt.hardware [None req-7081ab52-51f2-43cf-b1c4-7f5603432b2a 80e7264f985d465b9dddefd6429bf7f6 c5ae70f9b4104dcca7240b5f78eaabcd - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec  6 02:05:34 np0005548731 nova_compute[232433]: 2025-12-06 07:05:34.180 232437 DEBUG nova.virt.hardware [None req-7081ab52-51f2-43cf-b1c4-7f5603432b2a 80e7264f985d465b9dddefd6429bf7f6 c5ae70f9b4104dcca7240b5f78eaabcd - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec  6 02:05:34 np0005548731 nova_compute[232433]: 2025-12-06 07:05:34.182 232437 DEBUG oslo_concurrency.processutils [None req-7081ab52-51f2-43cf-b1c4-7f5603432b2a 80e7264f985d465b9dddefd6429bf7f6 c5ae70f9b4104dcca7240b5f78eaabcd - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:05:34 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  6 02:05:34 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2942269010' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  6 02:05:34 np0005548731 nova_compute[232433]: 2025-12-06 07:05:34.814 232437 DEBUG oslo_concurrency.processutils [None req-7081ab52-51f2-43cf-b1c4-7f5603432b2a 80e7264f985d465b9dddefd6429bf7f6 c5ae70f9b4104dcca7240b5f78eaabcd - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.632s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:05:34 np0005548731 nova_compute[232433]: 2025-12-06 07:05:34.843 232437 DEBUG nova.storage.rbd_utils [None req-7081ab52-51f2-43cf-b1c4-7f5603432b2a 80e7264f985d465b9dddefd6429bf7f6 c5ae70f9b4104dcca7240b5f78eaabcd - - default default] rbd image 71af9824-c0c1-45e6-b9bc-5e16c9e6a43c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:05:34 np0005548731 nova_compute[232433]: 2025-12-06 07:05:34.847 232437 DEBUG oslo_concurrency.processutils [None req-7081ab52-51f2-43cf-b1c4-7f5603432b2a 80e7264f985d465b9dddefd6429bf7f6 c5ae70f9b4104dcca7240b5f78eaabcd - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:05:34 np0005548731 ceph-mon[77458]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #55. Immutable memtables: 0.
Dec  6 02:05:34 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:05:34.940513) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec  6 02:05:34 np0005548731 ceph-mon[77458]: rocksdb: [db/flush_job.cc:856] [default] [JOB 31] Flushing memtable with next log file: 55
Dec  6 02:05:34 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765004734940597, "job": 31, "event": "flush_started", "num_memtables": 1, "num_entries": 401, "num_deletes": 252, "total_data_size": 354159, "memory_usage": 362904, "flush_reason": "Manual Compaction"}
Dec  6 02:05:34 np0005548731 ceph-mon[77458]: rocksdb: [db/flush_job.cc:885] [default] [JOB 31] Level-0 flush table #56: started
Dec  6 02:05:34 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765004734944327, "cf_name": "default", "job": 31, "event": "table_file_creation", "file_number": 56, "file_size": 233059, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 28500, "largest_seqno": 28895, "table_properties": {"data_size": 230750, "index_size": 409, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 837, "raw_key_size": 6069, "raw_average_key_size": 19, "raw_value_size": 226001, "raw_average_value_size": 712, "num_data_blocks": 18, "num_entries": 317, "num_filter_entries": 317, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765004724, "oldest_key_time": 1765004724, "file_creation_time": 1765004734, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bc01f9a1-fda8-4008-aee1-1fa5ff4371a1", "db_session_id": "CWBL8WMDM4D79BU86E9T", "orig_file_number": 56, "seqno_to_time_mapping": "N/A"}}
Dec  6 02:05:34 np0005548731 ceph-mon[77458]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 31] Flush lasted 3853 microseconds, and 1908 cpu microseconds.
Dec  6 02:05:34 np0005548731 ceph-mon[77458]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  6 02:05:34 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:05:34.944395) [db/flush_job.cc:967] [default] [JOB 31] Level-0 flush table #56: 233059 bytes OK
Dec  6 02:05:34 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:05:34.944419) [db/memtable_list.cc:519] [default] Level-0 commit table #56 started
Dec  6 02:05:34 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:05:34.946342) [db/memtable_list.cc:722] [default] Level-0 commit table #56: memtable #1 done
Dec  6 02:05:34 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:05:34.946368) EVENT_LOG_v1 {"time_micros": 1765004734946360, "job": 31, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec  6 02:05:34 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:05:34.946391) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec  6 02:05:34 np0005548731 ceph-mon[77458]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 31] Try to delete WAL files size 351544, prev total WAL file size 351544, number of live WAL files 2.
Dec  6 02:05:34 np0005548731 ceph-mon[77458]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000052.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  6 02:05:34 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:05:34.946838) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032303038' seq:72057594037927935, type:22 .. '7061786F730032323630' seq:0, type:0; will stop at (end)
Dec  6 02:05:34 np0005548731 ceph-mon[77458]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 32] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec  6 02:05:34 np0005548731 ceph-mon[77458]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 31 Base level 0, inputs: [56(227KB)], [54(9501KB)]
Dec  6 02:05:34 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765004734946896, "job": 32, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [56], "files_L6": [54], "score": -1, "input_data_size": 9962897, "oldest_snapshot_seqno": -1}
Dec  6 02:05:34 np0005548731 ceph-mon[77458]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 32] Generated table #57: 5458 keys, 7990452 bytes, temperature: kUnknown
Dec  6 02:05:34 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765004734994127, "cf_name": "default", "job": 32, "event": "table_file_creation", "file_number": 57, "file_size": 7990452, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7955321, "index_size": 20365, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13701, "raw_key_size": 141268, "raw_average_key_size": 25, "raw_value_size": 7858153, "raw_average_value_size": 1439, "num_data_blocks": 813, "num_entries": 5458, "num_filter_entries": 5458, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765002564, "oldest_key_time": 0, "file_creation_time": 1765004734, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bc01f9a1-fda8-4008-aee1-1fa5ff4371a1", "db_session_id": "CWBL8WMDM4D79BU86E9T", "orig_file_number": 57, "seqno_to_time_mapping": "N/A"}}
Dec  6 02:05:34 np0005548731 ceph-mon[77458]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  6 02:05:34 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:05:34.994487) [db/compaction/compaction_job.cc:1663] [default] [JOB 32] Compacted 1@0 + 1@6 files to L6 => 7990452 bytes
Dec  6 02:05:34 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:05:34.996560) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 210.4 rd, 168.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.2, 9.3 +0.0 blob) out(7.6 +0.0 blob), read-write-amplify(77.0) write-amplify(34.3) OK, records in: 5975, records dropped: 517 output_compression: NoCompression
Dec  6 02:05:34 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:05:34.996582) EVENT_LOG_v1 {"time_micros": 1765004734996571, "job": 32, "event": "compaction_finished", "compaction_time_micros": 47357, "compaction_time_cpu_micros": 21747, "output_level": 6, "num_output_files": 1, "total_output_size": 7990452, "num_input_records": 5975, "num_output_records": 5458, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec  6 02:05:34 np0005548731 ceph-mon[77458]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000056.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  6 02:05:34 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765004734996770, "job": 32, "event": "table_file_deletion", "file_number": 56}
Dec  6 02:05:34 np0005548731 ceph-mon[77458]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000054.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  6 02:05:34 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765004734998471, "job": 32, "event": "table_file_deletion", "file_number": 54}
Dec  6 02:05:34 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:05:34.946784) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 02:05:34 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:05:34.998576) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 02:05:34 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:05:34.998583) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 02:05:34 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:05:34.998585) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 02:05:34 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:05:34.998587) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 02:05:34 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:05:34.998589) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 02:05:35 np0005548731 nova_compute[232433]: 2025-12-06 07:05:35.260 232437 DEBUG nova.network.neutron [-] [instance: 9b0dea9b-128d-43a8-aedd-6a023517b89f] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 02:05:35 np0005548731 nova_compute[232433]: 2025-12-06 07:05:35.285 232437 INFO nova.compute.manager [-] [instance: 9b0dea9b-128d-43a8-aedd-6a023517b89f] Took 1.24 seconds to deallocate network for instance.#033[00m
Dec  6 02:05:35 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:05:35 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  6 02:05:35 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1388583674' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  6 02:05:35 np0005548731 nova_compute[232433]: 2025-12-06 07:05:35.326 232437 DEBUG oslo_concurrency.processutils [None req-7081ab52-51f2-43cf-b1c4-7f5603432b2a 80e7264f985d465b9dddefd6429bf7f6 c5ae70f9b4104dcca7240b5f78eaabcd - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.479s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:05:35 np0005548731 nova_compute[232433]: 2025-12-06 07:05:35.327 232437 DEBUG nova.objects.instance [None req-7081ab52-51f2-43cf-b1c4-7f5603432b2a 80e7264f985d465b9dddefd6429bf7f6 c5ae70f9b4104dcca7240b5f78eaabcd - - default default] Lazy-loading 'pci_devices' on Instance uuid 71af9824-c0c1-45e6-b9bc-5e16c9e6a43c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:05:35 np0005548731 nova_compute[232433]: 2025-12-06 07:05:35.333 232437 DEBUG oslo_concurrency.lockutils [None req-6c7c2152-ac6a-463d-8a14-75d76369ef0a e5f62143343c4499a86f710385c0c2f8 edcc68bbf9cd4c9189e59964db884a26 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:05:35 np0005548731 nova_compute[232433]: 2025-12-06 07:05:35.333 232437 DEBUG oslo_concurrency.lockutils [None req-6c7c2152-ac6a-463d-8a14-75d76369ef0a e5f62143343c4499a86f710385c0c2f8 edcc68bbf9cd4c9189e59964db884a26 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:05:35 np0005548731 nova_compute[232433]: 2025-12-06 07:05:35.343 232437 DEBUG nova.virt.libvirt.driver [None req-7081ab52-51f2-43cf-b1c4-7f5603432b2a 80e7264f985d465b9dddefd6429bf7f6 c5ae70f9b4104dcca7240b5f78eaabcd - - default default] [instance: 71af9824-c0c1-45e6-b9bc-5e16c9e6a43c] End _get_guest_xml xml=<domain type="kvm">
Dec  6 02:05:35 np0005548731 nova_compute[232433]:  <uuid>71af9824-c0c1-45e6-b9bc-5e16c9e6a43c</uuid>
Dec  6 02:05:35 np0005548731 nova_compute[232433]:  <name>instance-00000020</name>
Dec  6 02:05:35 np0005548731 nova_compute[232433]:  <memory>131072</memory>
Dec  6 02:05:35 np0005548731 nova_compute[232433]:  <vcpu>1</vcpu>
Dec  6 02:05:35 np0005548731 nova_compute[232433]:  <metadata>
Dec  6 02:05:35 np0005548731 nova_compute[232433]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec  6 02:05:35 np0005548731 nova_compute[232433]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec  6 02:05:35 np0005548731 nova_compute[232433]:      <nova:name>tempest-TenantUsagesTestJSON-server-1228138022</nova:name>
Dec  6 02:05:35 np0005548731 nova_compute[232433]:      <nova:creationTime>2025-12-06 07:05:34</nova:creationTime>
Dec  6 02:05:35 np0005548731 nova_compute[232433]:      <nova:flavor name="m1.nano">
Dec  6 02:05:35 np0005548731 nova_compute[232433]:        <nova:memory>128</nova:memory>
Dec  6 02:05:35 np0005548731 nova_compute[232433]:        <nova:disk>1</nova:disk>
Dec  6 02:05:35 np0005548731 nova_compute[232433]:        <nova:swap>0</nova:swap>
Dec  6 02:05:35 np0005548731 nova_compute[232433]:        <nova:ephemeral>0</nova:ephemeral>
Dec  6 02:05:35 np0005548731 nova_compute[232433]:        <nova:vcpus>1</nova:vcpus>
Dec  6 02:05:35 np0005548731 nova_compute[232433]:      </nova:flavor>
Dec  6 02:05:35 np0005548731 nova_compute[232433]:      <nova:owner>
Dec  6 02:05:35 np0005548731 nova_compute[232433]:        <nova:user uuid="80e7264f985d465b9dddefd6429bf7f6">tempest-TenantUsagesTestJSON-21697043-project-member</nova:user>
Dec  6 02:05:35 np0005548731 nova_compute[232433]:        <nova:project uuid="c5ae70f9b4104dcca7240b5f78eaabcd">tempest-TenantUsagesTestJSON-21697043</nova:project>
Dec  6 02:05:35 np0005548731 nova_compute[232433]:      </nova:owner>
Dec  6 02:05:35 np0005548731 nova_compute[232433]:      <nova:root type="image" uuid="6efab05d-c7cf-4770-a5c3-c806a2739063"/>
Dec  6 02:05:35 np0005548731 nova_compute[232433]:      <nova:ports/>
Dec  6 02:05:35 np0005548731 nova_compute[232433]:    </nova:instance>
Dec  6 02:05:35 np0005548731 nova_compute[232433]:  </metadata>
Dec  6 02:05:35 np0005548731 nova_compute[232433]:  <sysinfo type="smbios">
Dec  6 02:05:35 np0005548731 nova_compute[232433]:    <system>
Dec  6 02:05:35 np0005548731 nova_compute[232433]:      <entry name="manufacturer">RDO</entry>
Dec  6 02:05:35 np0005548731 nova_compute[232433]:      <entry name="product">OpenStack Compute</entry>
Dec  6 02:05:35 np0005548731 nova_compute[232433]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec  6 02:05:35 np0005548731 nova_compute[232433]:      <entry name="serial">71af9824-c0c1-45e6-b9bc-5e16c9e6a43c</entry>
Dec  6 02:05:35 np0005548731 nova_compute[232433]:      <entry name="uuid">71af9824-c0c1-45e6-b9bc-5e16c9e6a43c</entry>
Dec  6 02:05:35 np0005548731 nova_compute[232433]:      <entry name="family">Virtual Machine</entry>
Dec  6 02:05:35 np0005548731 nova_compute[232433]:    </system>
Dec  6 02:05:35 np0005548731 nova_compute[232433]:  </sysinfo>
Dec  6 02:05:35 np0005548731 nova_compute[232433]:  <os>
Dec  6 02:05:35 np0005548731 nova_compute[232433]:    <type arch="x86_64" machine="q35">hvm</type>
Dec  6 02:05:35 np0005548731 nova_compute[232433]:    <boot dev="hd"/>
Dec  6 02:05:35 np0005548731 nova_compute[232433]:    <smbios mode="sysinfo"/>
Dec  6 02:05:35 np0005548731 nova_compute[232433]:  </os>
Dec  6 02:05:35 np0005548731 nova_compute[232433]:  <features>
Dec  6 02:05:35 np0005548731 nova_compute[232433]:    <acpi/>
Dec  6 02:05:35 np0005548731 nova_compute[232433]:    <apic/>
Dec  6 02:05:35 np0005548731 nova_compute[232433]:    <vmcoreinfo/>
Dec  6 02:05:35 np0005548731 nova_compute[232433]:  </features>
Dec  6 02:05:35 np0005548731 nova_compute[232433]:  <clock offset="utc">
Dec  6 02:05:35 np0005548731 nova_compute[232433]:    <timer name="pit" tickpolicy="delay"/>
Dec  6 02:05:35 np0005548731 nova_compute[232433]:    <timer name="rtc" tickpolicy="catchup"/>
Dec  6 02:05:35 np0005548731 nova_compute[232433]:    <timer name="hpet" present="no"/>
Dec  6 02:05:35 np0005548731 nova_compute[232433]:  </clock>
Dec  6 02:05:35 np0005548731 nova_compute[232433]:  <cpu mode="custom" match="exact">
Dec  6 02:05:35 np0005548731 nova_compute[232433]:    <model>Nehalem</model>
Dec  6 02:05:35 np0005548731 nova_compute[232433]:    <topology sockets="1" cores="1" threads="1"/>
Dec  6 02:05:35 np0005548731 nova_compute[232433]:  </cpu>
Dec  6 02:05:35 np0005548731 nova_compute[232433]:  <devices>
Dec  6 02:05:35 np0005548731 nova_compute[232433]:    <disk type="network" device="disk">
Dec  6 02:05:35 np0005548731 nova_compute[232433]:      <driver type="raw" cache="none"/>
Dec  6 02:05:35 np0005548731 nova_compute[232433]:      <source protocol="rbd" name="vms/71af9824-c0c1-45e6-b9bc-5e16c9e6a43c_disk">
Dec  6 02:05:35 np0005548731 nova_compute[232433]:        <host name="192.168.122.100" port="6789"/>
Dec  6 02:05:35 np0005548731 nova_compute[232433]:        <host name="192.168.122.102" port="6789"/>
Dec  6 02:05:35 np0005548731 nova_compute[232433]:        <host name="192.168.122.101" port="6789"/>
Dec  6 02:05:35 np0005548731 nova_compute[232433]:      </source>
Dec  6 02:05:35 np0005548731 nova_compute[232433]:      <auth username="openstack">
Dec  6 02:05:35 np0005548731 nova_compute[232433]:        <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec  6 02:05:35 np0005548731 nova_compute[232433]:      </auth>
Dec  6 02:05:35 np0005548731 nova_compute[232433]:      <target dev="vda" bus="virtio"/>
Dec  6 02:05:35 np0005548731 nova_compute[232433]:    </disk>
Dec  6 02:05:35 np0005548731 nova_compute[232433]:    <disk type="network" device="cdrom">
Dec  6 02:05:35 np0005548731 nova_compute[232433]:      <driver type="raw" cache="none"/>
Dec  6 02:05:35 np0005548731 nova_compute[232433]:      <source protocol="rbd" name="vms/71af9824-c0c1-45e6-b9bc-5e16c9e6a43c_disk.config">
Dec  6 02:05:35 np0005548731 nova_compute[232433]:        <host name="192.168.122.100" port="6789"/>
Dec  6 02:05:35 np0005548731 nova_compute[232433]:        <host name="192.168.122.102" port="6789"/>
Dec  6 02:05:35 np0005548731 nova_compute[232433]:        <host name="192.168.122.101" port="6789"/>
Dec  6 02:05:35 np0005548731 nova_compute[232433]:      </source>
Dec  6 02:05:35 np0005548731 nova_compute[232433]:      <auth username="openstack">
Dec  6 02:05:35 np0005548731 nova_compute[232433]:        <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec  6 02:05:35 np0005548731 nova_compute[232433]:      </auth>
Dec  6 02:05:35 np0005548731 nova_compute[232433]:      <target dev="sda" bus="sata"/>
Dec  6 02:05:35 np0005548731 nova_compute[232433]:    </disk>
Dec  6 02:05:35 np0005548731 nova_compute[232433]:    <serial type="pty">
Dec  6 02:05:35 np0005548731 nova_compute[232433]:      <log file="/var/lib/nova/instances/71af9824-c0c1-45e6-b9bc-5e16c9e6a43c/console.log" append="off"/>
Dec  6 02:05:35 np0005548731 nova_compute[232433]:    </serial>
Dec  6 02:05:35 np0005548731 nova_compute[232433]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec  6 02:05:35 np0005548731 nova_compute[232433]:    <video>
Dec  6 02:05:35 np0005548731 nova_compute[232433]:      <model type="virtio"/>
Dec  6 02:05:35 np0005548731 nova_compute[232433]:    </video>
Dec  6 02:05:35 np0005548731 nova_compute[232433]:    <input type="tablet" bus="usb"/>
Dec  6 02:05:35 np0005548731 nova_compute[232433]:    <rng model="virtio">
Dec  6 02:05:35 np0005548731 nova_compute[232433]:      <backend model="random">/dev/urandom</backend>
Dec  6 02:05:35 np0005548731 nova_compute[232433]:    </rng>
Dec  6 02:05:35 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root"/>
Dec  6 02:05:35 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:05:35 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:05:35 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:05:35 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:05:35 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:05:35 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:05:35 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:05:35 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:05:35 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:05:35 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:05:35 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:05:35 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:05:35 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:05:35 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:05:35 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:05:35 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:05:35 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:05:35 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:05:35 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:05:35 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:05:35 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:05:35 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:05:35 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:05:35 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:05:35 np0005548731 nova_compute[232433]:    <controller type="usb" index="0"/>
Dec  6 02:05:35 np0005548731 nova_compute[232433]:    <memballoon model="virtio">
Dec  6 02:05:35 np0005548731 nova_compute[232433]:      <stats period="10"/>
Dec  6 02:05:35 np0005548731 nova_compute[232433]:    </memballoon>
Dec  6 02:05:35 np0005548731 nova_compute[232433]:  </devices>
Dec  6 02:05:35 np0005548731 nova_compute[232433]: </domain>
Dec  6 02:05:35 np0005548731 nova_compute[232433]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec  6 02:05:35 np0005548731 nova_compute[232433]: 2025-12-06 07:05:35.390 232437 DEBUG nova.virt.libvirt.driver [None req-7081ab52-51f2-43cf-b1c4-7f5603432b2a 80e7264f985d465b9dddefd6429bf7f6 c5ae70f9b4104dcca7240b5f78eaabcd - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  6 02:05:35 np0005548731 nova_compute[232433]: 2025-12-06 07:05:35.390 232437 DEBUG nova.virt.libvirt.driver [None req-7081ab52-51f2-43cf-b1c4-7f5603432b2a 80e7264f985d465b9dddefd6429bf7f6 c5ae70f9b4104dcca7240b5f78eaabcd - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  6 02:05:35 np0005548731 nova_compute[232433]: 2025-12-06 07:05:35.391 232437 INFO nova.virt.libvirt.driver [None req-7081ab52-51f2-43cf-b1c4-7f5603432b2a 80e7264f985d465b9dddefd6429bf7f6 c5ae70f9b4104dcca7240b5f78eaabcd - - default default] [instance: 71af9824-c0c1-45e6-b9bc-5e16c9e6a43c] Using config drive
Dec  6 02:05:35 np0005548731 nova_compute[232433]: 2025-12-06 07:05:35.419 232437 DEBUG nova.storage.rbd_utils [None req-7081ab52-51f2-43cf-b1c4-7f5603432b2a 80e7264f985d465b9dddefd6429bf7f6 c5ae70f9b4104dcca7240b5f78eaabcd - - default default] rbd image 71af9824-c0c1-45e6-b9bc-5e16c9e6a43c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec  6 02:05:35 np0005548731 nova_compute[232433]: 2025-12-06 07:05:35.462 232437 DEBUG oslo_concurrency.processutils [None req-6c7c2152-ac6a-463d-8a14-75d76369ef0a e5f62143343c4499a86f710385c0c2f8 edcc68bbf9cd4c9189e59964db884a26 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  6 02:05:35 np0005548731 nova_compute[232433]: 2025-12-06 07:05:35.612 232437 INFO nova.virt.libvirt.driver [None req-7081ab52-51f2-43cf-b1c4-7f5603432b2a 80e7264f985d465b9dddefd6429bf7f6 c5ae70f9b4104dcca7240b5f78eaabcd - - default default] [instance: 71af9824-c0c1-45e6-b9bc-5e16c9e6a43c] Creating config drive at /var/lib/nova/instances/71af9824-c0c1-45e6-b9bc-5e16c9e6a43c/disk.config
Dec  6 02:05:35 np0005548731 nova_compute[232433]: 2025-12-06 07:05:35.617 232437 DEBUG oslo_concurrency.processutils [None req-7081ab52-51f2-43cf-b1c4-7f5603432b2a 80e7264f985d465b9dddefd6429bf7f6 c5ae70f9b4104dcca7240b5f78eaabcd - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/71af9824-c0c1-45e6-b9bc-5e16c9e6a43c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp9wbyovq8 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  6 02:05:35 np0005548731 nova_compute[232433]: 2025-12-06 07:05:35.743 232437 DEBUG oslo_concurrency.processutils [None req-7081ab52-51f2-43cf-b1c4-7f5603432b2a 80e7264f985d465b9dddefd6429bf7f6 c5ae70f9b4104dcca7240b5f78eaabcd - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/71af9824-c0c1-45e6-b9bc-5e16c9e6a43c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp9wbyovq8" returned: 0 in 0.126s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  6 02:05:35 np0005548731 nova_compute[232433]: 2025-12-06 07:05:35.773 232437 DEBUG nova.storage.rbd_utils [None req-7081ab52-51f2-43cf-b1c4-7f5603432b2a 80e7264f985d465b9dddefd6429bf7f6 c5ae70f9b4104dcca7240b5f78eaabcd - - default default] rbd image 71af9824-c0c1-45e6-b9bc-5e16c9e6a43c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec  6 02:05:35 np0005548731 nova_compute[232433]: 2025-12-06 07:05:35.776 232437 DEBUG oslo_concurrency.processutils [None req-7081ab52-51f2-43cf-b1c4-7f5603432b2a 80e7264f985d465b9dddefd6429bf7f6 c5ae70f9b4104dcca7240b5f78eaabcd - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/71af9824-c0c1-45e6-b9bc-5e16c9e6a43c/disk.config 71af9824-c0c1-45e6-b9bc-5e16c9e6a43c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  6 02:05:35 np0005548731 nova_compute[232433]: 2025-12-06 07:05:35.801 232437 DEBUG nova.compute.manager [req-7e75a6d7-cad2-4e40-9a4d-1a525255e604 req-4effa209-8c0d-4d2f-91c3-257adbce37e3 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 9b0dea9b-128d-43a8-aedd-6a023517b89f] Received event network-vif-plugged-c25d2974-9b24-4cff-862e-5f43e77d56be external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec  6 02:05:35 np0005548731 nova_compute[232433]: 2025-12-06 07:05:35.801 232437 DEBUG oslo_concurrency.lockutils [req-7e75a6d7-cad2-4e40-9a4d-1a525255e604 req-4effa209-8c0d-4d2f-91c3-257adbce37e3 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "9b0dea9b-128d-43a8-aedd-6a023517b89f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  6 02:05:35 np0005548731 nova_compute[232433]: 2025-12-06 07:05:35.801 232437 DEBUG oslo_concurrency.lockutils [req-7e75a6d7-cad2-4e40-9a4d-1a525255e604 req-4effa209-8c0d-4d2f-91c3-257adbce37e3 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "9b0dea9b-128d-43a8-aedd-6a023517b89f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  6 02:05:35 np0005548731 nova_compute[232433]: 2025-12-06 07:05:35.802 232437 DEBUG oslo_concurrency.lockutils [req-7e75a6d7-cad2-4e40-9a4d-1a525255e604 req-4effa209-8c0d-4d2f-91c3-257adbce37e3 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "9b0dea9b-128d-43a8-aedd-6a023517b89f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  6 02:05:35 np0005548731 nova_compute[232433]: 2025-12-06 07:05:35.802 232437 DEBUG nova.compute.manager [req-7e75a6d7-cad2-4e40-9a4d-1a525255e604 req-4effa209-8c0d-4d2f-91c3-257adbce37e3 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 9b0dea9b-128d-43a8-aedd-6a023517b89f] No waiting events found dispatching network-vif-plugged-c25d2974-9b24-4cff-862e-5f43e77d56be pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec  6 02:05:35 np0005548731 nova_compute[232433]: 2025-12-06 07:05:35.802 232437 WARNING nova.compute.manager [req-7e75a6d7-cad2-4e40-9a4d-1a525255e604 req-4effa209-8c0d-4d2f-91c3-257adbce37e3 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 9b0dea9b-128d-43a8-aedd-6a023517b89f] Received unexpected event network-vif-plugged-c25d2974-9b24-4cff-862e-5f43e77d56be for instance with vm_state deleted and task_state None.
Dec  6 02:05:35 np0005548731 nova_compute[232433]: 2025-12-06 07:05:35.802 232437 DEBUG nova.compute.manager [req-7e75a6d7-cad2-4e40-9a4d-1a525255e604 req-4effa209-8c0d-4d2f-91c3-257adbce37e3 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 9b0dea9b-128d-43a8-aedd-6a023517b89f] Received event network-vif-deleted-c25d2974-9b24-4cff-862e-5f43e77d56be external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec  6 02:05:35 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:05:35 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:05:35 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:05:35.831 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:05:35 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:05:35 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:05:35 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:05:35.840 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:05:35 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 02:05:35 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1868520597' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 02:05:35 np0005548731 nova_compute[232433]: 2025-12-06 07:05:35.932 232437 DEBUG oslo_concurrency.processutils [None req-6c7c2152-ac6a-463d-8a14-75d76369ef0a e5f62143343c4499a86f710385c0c2f8 edcc68bbf9cd4c9189e59964db884a26 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  6 02:05:35 np0005548731 nova_compute[232433]: 2025-12-06 07:05:35.938 232437 DEBUG nova.compute.provider_tree [None req-6c7c2152-ac6a-463d-8a14-75d76369ef0a e5f62143343c4499a86f710385c0c2f8 edcc68bbf9cd4c9189e59964db884a26 - - default default] Inventory has not changed in ProviderTree for provider: 6d00757a-082f-486d-ae84-869a2ba2e6e7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec  6 02:05:35 np0005548731 nova_compute[232433]: 2025-12-06 07:05:35.943 232437 DEBUG oslo_concurrency.processutils [None req-7081ab52-51f2-43cf-b1c4-7f5603432b2a 80e7264f985d465b9dddefd6429bf7f6 c5ae70f9b4104dcca7240b5f78eaabcd - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/71af9824-c0c1-45e6-b9bc-5e16c9e6a43c/disk.config 71af9824-c0c1-45e6-b9bc-5e16c9e6a43c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.167s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  6 02:05:35 np0005548731 nova_compute[232433]: 2025-12-06 07:05:35.944 232437 INFO nova.virt.libvirt.driver [None req-7081ab52-51f2-43cf-b1c4-7f5603432b2a 80e7264f985d465b9dddefd6429bf7f6 c5ae70f9b4104dcca7240b5f78eaabcd - - default default] [instance: 71af9824-c0c1-45e6-b9bc-5e16c9e6a43c] Deleting local config drive /var/lib/nova/instances/71af9824-c0c1-45e6-b9bc-5e16c9e6a43c/disk.config because it was imported into RBD.
Dec  6 02:05:35 np0005548731 nova_compute[232433]: 2025-12-06 07:05:35.954 232437 DEBUG nova.scheduler.client.report [None req-6c7c2152-ac6a-463d-8a14-75d76369ef0a e5f62143343c4499a86f710385c0c2f8 edcc68bbf9cd4c9189e59964db884a26 - - default default] Inventory has not changed for provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec  6 02:05:35 np0005548731 nova_compute[232433]: 2025-12-06 07:05:35.974 232437 DEBUG oslo_concurrency.lockutils [None req-6c7c2152-ac6a-463d-8a14-75d76369ef0a e5f62143343c4499a86f710385c0c2f8 edcc68bbf9cd4c9189e59964db884a26 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.641s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  6 02:05:35 np0005548731 nova_compute[232433]: 2025-12-06 07:05:35.998 232437 INFO nova.scheduler.client.report [None req-6c7c2152-ac6a-463d-8a14-75d76369ef0a e5f62143343c4499a86f710385c0c2f8 edcc68bbf9cd4c9189e59964db884a26 - - default default] Deleted allocations for instance 9b0dea9b-128d-43a8-aedd-6a023517b89f
Dec  6 02:05:36 np0005548731 systemd-machined[195355]: New machine qemu-17-instance-00000020.
Dec  6 02:05:36 np0005548731 nova_compute[232433]: 2025-12-06 07:05:36.056 232437 DEBUG oslo_concurrency.lockutils [None req-6c7c2152-ac6a-463d-8a14-75d76369ef0a e5f62143343c4499a86f710385c0c2f8 edcc68bbf9cd4c9189e59964db884a26 - - default default] Lock "9b0dea9b-128d-43a8-aedd-6a023517b89f" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.048s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  6 02:05:36 np0005548731 systemd[1]: Started Virtual Machine qemu-17-instance-00000020.
Dec  6 02:05:36 np0005548731 podman[248916]: 2025-12-06 07:05:36.105577786 +0000 UTC m=+0.073383322 container health_status 26c2ce9bdb83c6db5ccad7f5b2a7cb4e9354ab0235ad2f942ea42c8feda2142b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent)
Dec  6 02:05:36 np0005548731 podman[248918]: 2025-12-06 07:05:36.112440422 +0000 UTC m=+0.073268760 container health_status ae4fd08cdec31d6ae2a6b91ffff7deb6ca5a8375cb52ee4b685cc6f25796824e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=multipathd, managed_by=edpm_ansible)
Dec  6 02:05:36 np0005548731 podman[248917]: 2025-12-06 07:05:36.166074217 +0000 UTC m=+0.133784051 container health_status a118c3becf56cfbcae208065771ebf10a65162bb7b96389971293e534823fb78 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Dec  6 02:05:36 np0005548731 nova_compute[232433]: 2025-12-06 07:05:36.377 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 02:05:36 np0005548731 nova_compute[232433]: 2025-12-06 07:05:36.575 232437 DEBUG nova.virt.driver [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Emitting event <LifecycleEvent: 1765004736.57445, 71af9824-c0c1-45e6-b9bc-5e16c9e6a43c => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec  6 02:05:36 np0005548731 nova_compute[232433]: 2025-12-06 07:05:36.576 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 71af9824-c0c1-45e6-b9bc-5e16c9e6a43c] VM Resumed (Lifecycle Event)
Dec  6 02:05:36 np0005548731 nova_compute[232433]: 2025-12-06 07:05:36.578 232437 DEBUG nova.compute.manager [None req-7081ab52-51f2-43cf-b1c4-7f5603432b2a 80e7264f985d465b9dddefd6429bf7f6 c5ae70f9b4104dcca7240b5f78eaabcd - - default default] [instance: 71af9824-c0c1-45e6-b9bc-5e16c9e6a43c] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec  6 02:05:36 np0005548731 nova_compute[232433]: 2025-12-06 07:05:36.579 232437 DEBUG nova.virt.libvirt.driver [None req-7081ab52-51f2-43cf-b1c4-7f5603432b2a 80e7264f985d465b9dddefd6429bf7f6 c5ae70f9b4104dcca7240b5f78eaabcd - - default default] [instance: 71af9824-c0c1-45e6-b9bc-5e16c9e6a43c] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec  6 02:05:36 np0005548731 nova_compute[232433]: 2025-12-06 07:05:36.582 232437 INFO nova.virt.libvirt.driver [-] [instance: 71af9824-c0c1-45e6-b9bc-5e16c9e6a43c] Instance spawned successfully.
Dec  6 02:05:36 np0005548731 nova_compute[232433]: 2025-12-06 07:05:36.582 232437 DEBUG nova.virt.libvirt.driver [None req-7081ab52-51f2-43cf-b1c4-7f5603432b2a 80e7264f985d465b9dddefd6429bf7f6 c5ae70f9b4104dcca7240b5f78eaabcd - - default default] [instance: 71af9824-c0c1-45e6-b9bc-5e16c9e6a43c] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec  6 02:05:36 np0005548731 nova_compute[232433]: 2025-12-06 07:05:36.599 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 71af9824-c0c1-45e6-b9bc-5e16c9e6a43c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec  6 02:05:36 np0005548731 nova_compute[232433]: 2025-12-06 07:05:36.605 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 71af9824-c0c1-45e6-b9bc-5e16c9e6a43c] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec  6 02:05:36 np0005548731 nova_compute[232433]: 2025-12-06 07:05:36.608 232437 DEBUG nova.virt.libvirt.driver [None req-7081ab52-51f2-43cf-b1c4-7f5603432b2a 80e7264f985d465b9dddefd6429bf7f6 c5ae70f9b4104dcca7240b5f78eaabcd - - default default] [instance: 71af9824-c0c1-45e6-b9bc-5e16c9e6a43c] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec  6 02:05:36 np0005548731 nova_compute[232433]: 2025-12-06 07:05:36.608 232437 DEBUG nova.virt.libvirt.driver [None req-7081ab52-51f2-43cf-b1c4-7f5603432b2a 80e7264f985d465b9dddefd6429bf7f6 c5ae70f9b4104dcca7240b5f78eaabcd - - default default] [instance: 71af9824-c0c1-45e6-b9bc-5e16c9e6a43c] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec  6 02:05:36 np0005548731 nova_compute[232433]: 2025-12-06 07:05:36.609 232437 DEBUG nova.virt.libvirt.driver [None req-7081ab52-51f2-43cf-b1c4-7f5603432b2a 80e7264f985d465b9dddefd6429bf7f6 c5ae70f9b4104dcca7240b5f78eaabcd - - default default] [instance: 71af9824-c0c1-45e6-b9bc-5e16c9e6a43c] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec  6 02:05:36 np0005548731 nova_compute[232433]: 2025-12-06 07:05:36.609 232437 DEBUG nova.virt.libvirt.driver [None req-7081ab52-51f2-43cf-b1c4-7f5603432b2a 80e7264f985d465b9dddefd6429bf7f6 c5ae70f9b4104dcca7240b5f78eaabcd - - default default] [instance: 71af9824-c0c1-45e6-b9bc-5e16c9e6a43c] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec  6 02:05:36 np0005548731 nova_compute[232433]: 2025-12-06 07:05:36.610 232437 DEBUG nova.virt.libvirt.driver [None req-7081ab52-51f2-43cf-b1c4-7f5603432b2a 80e7264f985d465b9dddefd6429bf7f6 c5ae70f9b4104dcca7240b5f78eaabcd - - default default] [instance: 71af9824-c0c1-45e6-b9bc-5e16c9e6a43c] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec  6 02:05:36 np0005548731 nova_compute[232433]: 2025-12-06 07:05:36.610 232437 DEBUG nova.virt.libvirt.driver [None req-7081ab52-51f2-43cf-b1c4-7f5603432b2a 80e7264f985d465b9dddefd6429bf7f6 c5ae70f9b4104dcca7240b5f78eaabcd - - default default] [instance: 71af9824-c0c1-45e6-b9bc-5e16c9e6a43c] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec  6 02:05:36 np0005548731 nova_compute[232433]: 2025-12-06 07:05:36.633 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 71af9824-c0c1-45e6-b9bc-5e16c9e6a43c] During sync_power_state the instance has a pending task (spawning). Skip.
Dec  6 02:05:36 np0005548731 nova_compute[232433]: 2025-12-06 07:05:36.634 232437 DEBUG nova.virt.driver [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Emitting event <LifecycleEvent: 1765004736.57595, 71af9824-c0c1-45e6-b9bc-5e16c9e6a43c => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec  6 02:05:36 np0005548731 nova_compute[232433]: 2025-12-06 07:05:36.634 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 71af9824-c0c1-45e6-b9bc-5e16c9e6a43c] VM Started (Lifecycle Event)
Dec  6 02:05:36 np0005548731 nova_compute[232433]: 2025-12-06 07:05:36.677 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 71af9824-c0c1-45e6-b9bc-5e16c9e6a43c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec  6 02:05:36 np0005548731 nova_compute[232433]: 2025-12-06 07:05:36.682 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 71af9824-c0c1-45e6-b9bc-5e16c9e6a43c] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec  6 02:05:36 np0005548731 nova_compute[232433]: 2025-12-06 07:05:36.692 232437 INFO nova.compute.manager [None req-7081ab52-51f2-43cf-b1c4-7f5603432b2a 80e7264f985d465b9dddefd6429bf7f6 c5ae70f9b4104dcca7240b5f78eaabcd - - default default] [instance: 71af9824-c0c1-45e6-b9bc-5e16c9e6a43c] Took 3.24 seconds to spawn the instance on the hypervisor.
Dec  6 02:05:36 np0005548731 nova_compute[232433]: 2025-12-06 07:05:36.692 232437 DEBUG nova.compute.manager [None req-7081ab52-51f2-43cf-b1c4-7f5603432b2a 80e7264f985d465b9dddefd6429bf7f6 c5ae70f9b4104dcca7240b5f78eaabcd - - default default] [instance: 71af9824-c0c1-45e6-b9bc-5e16c9e6a43c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec  6 02:05:36 np0005548731 nova_compute[232433]: 2025-12-06 07:05:36.722 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 71af9824-c0c1-45e6-b9bc-5e16c9e6a43c] During sync_power_state the instance has a pending task (spawning). Skip.
Dec  6 02:05:36 np0005548731 nova_compute[232433]: 2025-12-06 07:05:36.769 232437 INFO nova.compute.manager [None req-7081ab52-51f2-43cf-b1c4-7f5603432b2a 80e7264f985d465b9dddefd6429bf7f6 c5ae70f9b4104dcca7240b5f78eaabcd - - default default] [instance: 71af9824-c0c1-45e6-b9bc-5e16c9e6a43c] Took 4.47 seconds to build instance.
Dec  6 02:05:36 np0005548731 nova_compute[232433]: 2025-12-06 07:05:36.786 232437 DEBUG oslo_concurrency.lockutils [None req-7081ab52-51f2-43cf-b1c4-7f5603432b2a 80e7264f985d465b9dddefd6429bf7f6 c5ae70f9b4104dcca7240b5f78eaabcd - - default default] Lock "71af9824-c0c1-45e6-b9bc-5e16c9e6a43c" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 4.587s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  6 02:05:37 np0005548731 nova_compute[232433]: 2025-12-06 07:05:37.771 232437 DEBUG oslo_concurrency.lockutils [None req-62fb3994-045e-48e5-bdae-a03fbf7ecec1 80e7264f985d465b9dddefd6429bf7f6 c5ae70f9b4104dcca7240b5f78eaabcd - - default default] Acquiring lock "71af9824-c0c1-45e6-b9bc-5e16c9e6a43c" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  6 02:05:37 np0005548731 nova_compute[232433]: 2025-12-06 07:05:37.772 232437 DEBUG oslo_concurrency.lockutils [None req-62fb3994-045e-48e5-bdae-a03fbf7ecec1 80e7264f985d465b9dddefd6429bf7f6 c5ae70f9b4104dcca7240b5f78eaabcd - - default default] Lock "71af9824-c0c1-45e6-b9bc-5e16c9e6a43c" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  6 02:05:37 np0005548731 nova_compute[232433]: 2025-12-06 07:05:37.773 232437 DEBUG oslo_concurrency.lockutils [None req-62fb3994-045e-48e5-bdae-a03fbf7ecec1 80e7264f985d465b9dddefd6429bf7f6 c5ae70f9b4104dcca7240b5f78eaabcd - - default default] Acquiring lock "71af9824-c0c1-45e6-b9bc-5e16c9e6a43c-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  6 02:05:37 np0005548731 nova_compute[232433]: 2025-12-06 07:05:37.773 232437 DEBUG oslo_concurrency.lockutils [None req-62fb3994-045e-48e5-bdae-a03fbf7ecec1 80e7264f985d465b9dddefd6429bf7f6 c5ae70f9b4104dcca7240b5f78eaabcd - - default default] Lock "71af9824-c0c1-45e6-b9bc-5e16c9e6a43c-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  6 02:05:37 np0005548731 nova_compute[232433]: 2025-12-06 07:05:37.773 232437 DEBUG oslo_concurrency.lockutils [None req-62fb3994-045e-48e5-bdae-a03fbf7ecec1 80e7264f985d465b9dddefd6429bf7f6 c5ae70f9b4104dcca7240b5f78eaabcd - - default default] Lock "71af9824-c0c1-45e6-b9bc-5e16c9e6a43c-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  6 02:05:37 np0005548731 nova_compute[232433]: 2025-12-06 07:05:37.775 232437 INFO nova.compute.manager [None req-62fb3994-045e-48e5-bdae-a03fbf7ecec1 80e7264f985d465b9dddefd6429bf7f6 c5ae70f9b4104dcca7240b5f78eaabcd - - default default] [instance: 71af9824-c0c1-45e6-b9bc-5e16c9e6a43c] Terminating instance
Dec  6 02:05:37 np0005548731 nova_compute[232433]: 2025-12-06 07:05:37.776 232437 DEBUG oslo_concurrency.lockutils [None req-62fb3994-045e-48e5-bdae-a03fbf7ecec1 80e7264f985d465b9dddefd6429bf7f6 c5ae70f9b4104dcca7240b5f78eaabcd - - default default] Acquiring lock "refresh_cache-71af9824-c0c1-45e6-b9bc-5e16c9e6a43c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec  6 02:05:37 np0005548731 nova_compute[232433]: 2025-12-06 07:05:37.776 232437 DEBUG oslo_concurrency.lockutils [None req-62fb3994-045e-48e5-bdae-a03fbf7ecec1 80e7264f985d465b9dddefd6429bf7f6 c5ae70f9b4104dcca7240b5f78eaabcd - - default default] Acquired lock "refresh_cache-71af9824-c0c1-45e6-b9bc-5e16c9e6a43c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec  6 02:05:37 np0005548731 nova_compute[232433]: 2025-12-06 07:05:37.776 232437 DEBUG nova.network.neutron [None req-62fb3994-045e-48e5-bdae-a03fbf7ecec1 80e7264f985d465b9dddefd6429bf7f6 c5ae70f9b4104dcca7240b5f78eaabcd - - default default] [instance: 71af9824-c0c1-45e6-b9bc-5e16c9e6a43c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec  6 02:05:37 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:05:37 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:05:37 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:05:37.833 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:05:37 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:05:37 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:05:37 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:05:37.843 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:05:37 np0005548731 nova_compute[232433]: 2025-12-06 07:05:37.975 232437 DEBUG nova.network.neutron [None req-62fb3994-045e-48e5-bdae-a03fbf7ecec1 80e7264f985d465b9dddefd6429bf7f6 c5ae70f9b4104dcca7240b5f78eaabcd - - default default] [instance: 71af9824-c0c1-45e6-b9bc-5e16c9e6a43c] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec  6 02:05:38 np0005548731 nova_compute[232433]: 2025-12-06 07:05:38.249 232437 DEBUG nova.network.neutron [None req-62fb3994-045e-48e5-bdae-a03fbf7ecec1 80e7264f985d465b9dddefd6429bf7f6 c5ae70f9b4104dcca7240b5f78eaabcd - - default default] [instance: 71af9824-c0c1-45e6-b9bc-5e16c9e6a43c] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec  6 02:05:38 np0005548731 nova_compute[232433]: 2025-12-06 07:05:38.266 232437 DEBUG oslo_concurrency.lockutils [None req-62fb3994-045e-48e5-bdae-a03fbf7ecec1 80e7264f985d465b9dddefd6429bf7f6 c5ae70f9b4104dcca7240b5f78eaabcd - - default default] Releasing lock "refresh_cache-71af9824-c0c1-45e6-b9bc-5e16c9e6a43c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec  6 02:05:38 np0005548731 nova_compute[232433]: 2025-12-06 07:05:38.267 232437 DEBUG nova.compute.manager [None req-62fb3994-045e-48e5-bdae-a03fbf7ecec1 80e7264f985d465b9dddefd6429bf7f6 c5ae70f9b4104dcca7240b5f78eaabcd - - default default] [instance: 71af9824-c0c1-45e6-b9bc-5e16c9e6a43c] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec  6 02:05:38 np0005548731 nova_compute[232433]: 2025-12-06 07:05:38.276 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 02:05:38 np0005548731 systemd[1]: machine-qemu\x2d17\x2dinstance\x2d00000020.scope: Deactivated successfully.
Dec  6 02:05:38 np0005548731 systemd[1]: machine-qemu\x2d17\x2dinstance\x2d00000020.scope: Consumed 2.304s CPU time.
Dec  6 02:05:38 np0005548731 systemd-machined[195355]: Machine qemu-17-instance-00000020 terminated.
Dec  6 02:05:38 np0005548731 nova_compute[232433]: 2025-12-06 07:05:38.484 232437 INFO nova.virt.libvirt.driver [-] [instance: 71af9824-c0c1-45e6-b9bc-5e16c9e6a43c] Instance destroyed successfully.
Dec  6 02:05:38 np0005548731 nova_compute[232433]: 2025-12-06 07:05:38.485 232437 DEBUG nova.objects.instance [None req-62fb3994-045e-48e5-bdae-a03fbf7ecec1 80e7264f985d465b9dddefd6429bf7f6 c5ae70f9b4104dcca7240b5f78eaabcd - - default default] Lazy-loading 'resources' on Instance uuid 71af9824-c0c1-45e6-b9bc-5e16c9e6a43c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec  6 02:05:39 np0005548731 nova_compute[232433]: 2025-12-06 07:05:39.012 232437 INFO nova.virt.libvirt.driver [None req-62fb3994-045e-48e5-bdae-a03fbf7ecec1 80e7264f985d465b9dddefd6429bf7f6 c5ae70f9b4104dcca7240b5f78eaabcd - - default default] [instance: 71af9824-c0c1-45e6-b9bc-5e16c9e6a43c] Deleting instance files /var/lib/nova/instances/71af9824-c0c1-45e6-b9bc-5e16c9e6a43c_del
Dec  6 02:05:39 np0005548731 nova_compute[232433]: 2025-12-06 07:05:39.013 232437 INFO nova.virt.libvirt.driver [None req-62fb3994-045e-48e5-bdae-a03fbf7ecec1 80e7264f985d465b9dddefd6429bf7f6 c5ae70f9b4104dcca7240b5f78eaabcd - - default default] [instance: 71af9824-c0c1-45e6-b9bc-5e16c9e6a43c] Deletion of /var/lib/nova/instances/71af9824-c0c1-45e6-b9bc-5e16c9e6a43c_del complete
Dec  6 02:05:39 np0005548731 nova_compute[232433]: 2025-12-06 07:05:39.073 232437 INFO nova.compute.manager [None req-62fb3994-045e-48e5-bdae-a03fbf7ecec1 80e7264f985d465b9dddefd6429bf7f6 c5ae70f9b4104dcca7240b5f78eaabcd - - default default] [instance: 71af9824-c0c1-45e6-b9bc-5e16c9e6a43c] Took 0.81 seconds to destroy the instance on the hypervisor.
Dec  6 02:05:39 np0005548731 nova_compute[232433]: 2025-12-06 07:05:39.073 232437 DEBUG oslo.service.loopingcall [None req-62fb3994-045e-48e5-bdae-a03fbf7ecec1 80e7264f985d465b9dddefd6429bf7f6 c5ae70f9b4104dcca7240b5f78eaabcd - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec  6 02:05:39 np0005548731 nova_compute[232433]: 2025-12-06 07:05:39.074 232437 DEBUG nova.compute.manager [-] [instance: 71af9824-c0c1-45e6-b9bc-5e16c9e6a43c] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec  6 02:05:39 np0005548731 nova_compute[232433]: 2025-12-06 07:05:39.074 232437 DEBUG nova.network.neutron [-] [instance: 71af9824-c0c1-45e6-b9bc-5e16c9e6a43c] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec  6 02:05:39 np0005548731 nova_compute[232433]: 2025-12-06 07:05:39.211 232437 DEBUG nova.network.neutron [-] [instance: 71af9824-c0c1-45e6-b9bc-5e16c9e6a43c] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec  6 02:05:39 np0005548731 nova_compute[232433]: 2025-12-06 07:05:39.224 232437 DEBUG nova.network.neutron [-] [instance: 71af9824-c0c1-45e6-b9bc-5e16c9e6a43c] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec  6 02:05:39 np0005548731 nova_compute[232433]: 2025-12-06 07:05:39.239 232437 INFO nova.compute.manager [-] [instance: 71af9824-c0c1-45e6-b9bc-5e16c9e6a43c] Took 0.17 seconds to deallocate network for instance.
Dec  6 02:05:39 np0005548731 nova_compute[232433]: 2025-12-06 07:05:39.291 232437 DEBUG oslo_concurrency.lockutils [None req-62fb3994-045e-48e5-bdae-a03fbf7ecec1 80e7264f985d465b9dddefd6429bf7f6 c5ae70f9b4104dcca7240b5f78eaabcd - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  6 02:05:39 np0005548731 nova_compute[232433]: 2025-12-06 07:05:39.292 232437 DEBUG oslo_concurrency.lockutils [None req-62fb3994-045e-48e5-bdae-a03fbf7ecec1 80e7264f985d465b9dddefd6429bf7f6 c5ae70f9b4104dcca7240b5f78eaabcd - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  6 02:05:39 np0005548731 nova_compute[232433]: 2025-12-06 07:05:39.329 232437 DEBUG oslo_concurrency.processutils [None req-62fb3994-045e-48e5-bdae-a03fbf7ecec1 80e7264f985d465b9dddefd6429bf7f6 c5ae70f9b4104dcca7240b5f78eaabcd - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  6 02:05:39 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:05:39.525 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=9f96b960-b4f2-40bd-ae99-08121f5e8b78, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '12'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec  6 02:05:39 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e174 e174: 3 total, 3 up, 3 in
Dec  6 02:05:39 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 02:05:39 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3372856054' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 02:05:39 np0005548731 nova_compute[232433]: 2025-12-06 07:05:39.782 232437 DEBUG oslo_concurrency.processutils [None req-62fb3994-045e-48e5-bdae-a03fbf7ecec1 80e7264f985d465b9dddefd6429bf7f6 c5ae70f9b4104dcca7240b5f78eaabcd - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.453s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  6 02:05:39 np0005548731 nova_compute[232433]: 2025-12-06 07:05:39.790 232437 DEBUG nova.compute.provider_tree [None req-62fb3994-045e-48e5-bdae-a03fbf7ecec1 80e7264f985d465b9dddefd6429bf7f6 c5ae70f9b4104dcca7240b5f78eaabcd - - default default] Inventory has not changed in ProviderTree for provider: 6d00757a-082f-486d-ae84-869a2ba2e6e7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec  6 02:05:39 np0005548731 nova_compute[232433]: 2025-12-06 07:05:39.811 232437 DEBUG nova.scheduler.client.report [None req-62fb3994-045e-48e5-bdae-a03fbf7ecec1 80e7264f985d465b9dddefd6429bf7f6 c5ae70f9b4104dcca7240b5f78eaabcd - - default default] Inventory has not changed for provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec  6 02:05:39 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:05:39 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:05:39 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:05:39.835 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:05:39 np0005548731 nova_compute[232433]: 2025-12-06 07:05:39.837 232437 DEBUG oslo_concurrency.lockutils [None req-62fb3994-045e-48e5-bdae-a03fbf7ecec1 80e7264f985d465b9dddefd6429bf7f6 c5ae70f9b4104dcca7240b5f78eaabcd - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.545s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  6 02:05:39 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:05:39 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:05:39 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:05:39.846 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:05:39 np0005548731 nova_compute[232433]: 2025-12-06 07:05:39.883 232437 INFO nova.scheduler.client.report [None req-62fb3994-045e-48e5-bdae-a03fbf7ecec1 80e7264f985d465b9dddefd6429bf7f6 c5ae70f9b4104dcca7240b5f78eaabcd - - default default] Deleted allocations for instance 71af9824-c0c1-45e6-b9bc-5e16c9e6a43c
Dec  6 02:05:39 np0005548731 nova_compute[232433]: 2025-12-06 07:05:39.964 232437 DEBUG oslo_concurrency.lockutils [None req-62fb3994-045e-48e5-bdae-a03fbf7ecec1 80e7264f985d465b9dddefd6429bf7f6 c5ae70f9b4104dcca7240b5f78eaabcd - - default default] Lock "71af9824-c0c1-45e6-b9bc-5e16c9e6a43c" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.192s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  6 02:05:40 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e174 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:05:41 np0005548731 nova_compute[232433]: 2025-12-06 07:05:41.058 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 02:05:41 np0005548731 nova_compute[232433]: 2025-12-06 07:05:41.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  6 02:05:41 np0005548731 nova_compute[232433]: 2025-12-06 07:05:41.343 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 02:05:41 np0005548731 nova_compute[232433]: 2025-12-06 07:05:41.378 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 02:05:41 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:05:41 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 02:05:41 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:05:41.837 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 02:05:41 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:05:41 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:05:41 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:05:41.849 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:05:42 np0005548731 nova_compute[232433]: 2025-12-06 07:05:42.099 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  6 02:05:43 np0005548731 nova_compute[232433]: 2025-12-06 07:05:43.081 232437 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765004728.0799642, 1712064c-6ba8-4660-972f-1e827a40781a => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec  6 02:05:43 np0005548731 nova_compute[232433]: 2025-12-06 07:05:43.082 232437 INFO nova.compute.manager [-] [instance: 1712064c-6ba8-4660-972f-1e827a40781a] VM Stopped (Lifecycle Event)
Dec  6 02:05:43 np0005548731 nova_compute[232433]: 2025-12-06 07:05:43.110 232437 DEBUG nova.compute.manager [None req-fd4c2f2f-9812-41ff-b084-62e8478e890a - - - - - -] [instance: 1712064c-6ba8-4660-972f-1e827a40781a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec  6 02:05:43 np0005548731 nova_compute[232433]: 2025-12-06 07:05:43.279 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 02:05:43 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:05:43 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 02:05:43 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:05:43.839 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 02:05:43 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:05:43 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:05:43 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:05:43.852 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:05:45 np0005548731 nova_compute[232433]: 2025-12-06 07:05:45.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  6 02:05:45 np0005548731 nova_compute[232433]: 2025-12-06 07:05:45.105 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec  6 02:05:45 np0005548731 nova_compute[232433]: 2025-12-06 07:05:45.105 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec  6 02:05:45 np0005548731 nova_compute[232433]: 2025-12-06 07:05:45.126 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec  6 02:05:45 np0005548731 nova_compute[232433]: 2025-12-06 07:05:45.127 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  6 02:05:45 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e174 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:05:45 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:05:45 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:05:45 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:05:45.841 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:05:45 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:05:45 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:05:45 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:05:45.855 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:05:46 np0005548731 nova_compute[232433]: 2025-12-06 07:05:46.103 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  6 02:05:46 np0005548731 nova_compute[232433]: 2025-12-06 07:05:46.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  6 02:05:46 np0005548731 nova_compute[232433]: 2025-12-06 07:05:46.380 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 02:05:47 np0005548731 nova_compute[232433]: 2025-12-06 07:05:47.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  6 02:05:47 np0005548731 nova_compute[232433]: 2025-12-06 07:05:47.132 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  6 02:05:47 np0005548731 nova_compute[232433]: 2025-12-06 07:05:47.132 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  6 02:05:47 np0005548731 nova_compute[232433]: 2025-12-06 07:05:47.133 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  6 02:05:47 np0005548731 nova_compute[232433]: 2025-12-06 07:05:47.133 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec  6 02:05:47 np0005548731 nova_compute[232433]: 2025-12-06 07:05:47.133 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  6 02:05:47 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e175 e175: 3 total, 3 up, 3 in
Dec  6 02:05:47 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 02:05:47 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/109364209' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 02:05:47 np0005548731 nova_compute[232433]: 2025-12-06 07:05:47.570 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.437s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  6 02:05:47 np0005548731 nova_compute[232433]: 2025-12-06 07:05:47.721 232437 WARNING nova.virt.libvirt.driver [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec  6 02:05:47 np0005548731 nova_compute[232433]: 2025-12-06 07:05:47.722 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4757MB free_disk=20.9427490234375GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec  6 02:05:47 np0005548731 nova_compute[232433]: 2025-12-06 07:05:47.723 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  6 02:05:47 np0005548731 nova_compute[232433]: 2025-12-06 07:05:47.723 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  6 02:05:47 np0005548731 nova_compute[232433]: 2025-12-06 07:05:47.789 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec  6 02:05:47 np0005548731 nova_compute[232433]: 2025-12-06 07:05:47.789 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec  6 02:05:47 np0005548731 nova_compute[232433]: 2025-12-06 07:05:47.814 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  6 02:05:47 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:05:47 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 02:05:47 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:05:47.843 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 02:05:47 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:05:47 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:05:47 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:05:47.858 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:05:48 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 02:05:48 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3546733337' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 02:05:48 np0005548731 nova_compute[232433]: 2025-12-06 07:05:48.228 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.414s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  6 02:05:48 np0005548731 nova_compute[232433]: 2025-12-06 07:05:48.234 232437 DEBUG nova.compute.provider_tree [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Inventory has not changed in ProviderTree for provider: 6d00757a-082f-486d-ae84-869a2ba2e6e7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec  6 02:05:48 np0005548731 nova_compute[232433]: 2025-12-06 07:05:48.245 232437 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765004733.2444787, 9b0dea9b-128d-43a8-aedd-6a023517b89f => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec  6 02:05:48 np0005548731 nova_compute[232433]: 2025-12-06 07:05:48.246 232437 INFO nova.compute.manager [-] [instance: 9b0dea9b-128d-43a8-aedd-6a023517b89f] VM Stopped (Lifecycle Event)
Dec  6 02:05:48 np0005548731 nova_compute[232433]: 2025-12-06 07:05:48.261 232437 DEBUG nova.scheduler.client.report [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Inventory has not changed for provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec  6 02:05:48 np0005548731 nova_compute[232433]: 2025-12-06 07:05:48.283 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 02:05:48 np0005548731 nova_compute[232433]: 2025-12-06 07:05:48.300 232437 DEBUG nova.compute.manager [None req-a89c0334-80c1-498b-8c22-5957a5e9f21e - - - - - -] [instance: 9b0dea9b-128d-43a8-aedd-6a023517b89f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec  6 02:05:48 np0005548731 nova_compute[232433]: 2025-12-06 07:05:48.307 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec  6 02:05:48 np0005548731 nova_compute[232433]: 2025-12-06 07:05:48.307 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.585s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  6 02:05:49 np0005548731 nova_compute[232433]: 2025-12-06 07:05:49.308 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  6 02:05:49 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:05:49 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:05:49 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:05:49.845 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:05:49 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:05:49 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 02:05:49 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:05:49.861 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 02:05:50 np0005548731 nova_compute[232433]: 2025-12-06 07:05:50.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  6 02:05:50 np0005548731 nova_compute[232433]: 2025-12-06 07:05:50.104 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec  6 02:05:50 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e175 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:05:51 np0005548731 nova_compute[232433]: 2025-12-06 07:05:51.382 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 02:05:51 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:05:51 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:05:51 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:05:51.847 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:05:51 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:05:51 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:05:51 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:05:51.865 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:05:53 np0005548731 nova_compute[232433]: 2025-12-06 07:05:53.286 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:05:53 np0005548731 nova_compute[232433]: 2025-12-06 07:05:53.483 232437 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765004738.4825916, 71af9824-c0c1-45e6-b9bc-5e16c9e6a43c => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 02:05:53 np0005548731 nova_compute[232433]: 2025-12-06 07:05:53.484 232437 INFO nova.compute.manager [-] [instance: 71af9824-c0c1-45e6-b9bc-5e16c9e6a43c] VM Stopped (Lifecycle Event)#033[00m
Dec  6 02:05:53 np0005548731 nova_compute[232433]: 2025-12-06 07:05:53.607 232437 DEBUG nova.compute.manager [None req-eb259516-8a34-4ebf-979e-e7ef75f2a854 - - - - - -] [instance: 71af9824-c0c1-45e6-b9bc-5e16c9e6a43c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:05:53 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:05:53 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:05:53 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:05:53.850 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:05:53 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:05:53 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:05:53 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:05:53.869 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:05:55 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e175 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:05:55 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  6 02:05:55 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 02:05:55 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  6 02:05:55 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:05:55 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:05:55 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:05:55.852 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:05:55 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:05:55 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:05:55 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:05:55.872 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:05:56 np0005548731 nova_compute[232433]: 2025-12-06 07:05:56.385 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:05:57 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e176 e176: 3 total, 3 up, 3 in
Dec  6 02:05:57 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:05:57 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:05:57 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:05:57.854 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:05:57 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:05:57 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:05:57 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:05:57.876 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:05:58 np0005548731 nova_compute[232433]: 2025-12-06 07:05:58.289 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:05:58 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e177 e177: 3 total, 3 up, 3 in
Dec  6 02:05:59 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e178 e178: 3 total, 3 up, 3 in
Dec  6 02:05:59 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:05:59 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:05:59 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:05:59.857 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:05:59 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:05:59 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:05:59 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:05:59.880 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:06:00 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e178 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:06:00 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 02:06:00 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 02:06:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:06:00.850 143965 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:06:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:06:00.850 143965 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:06:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:06:00.851 143965 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:06:01 np0005548731 nova_compute[232433]: 2025-12-06 07:06:01.386 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:06:01 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:06:01 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:06:01 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:06:01.858 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:06:01 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:06:01 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:06:01 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:06:01.883 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:06:03 np0005548731 nova_compute[232433]: 2025-12-06 07:06:03.292 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:06:03 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:06:03 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 02:06:03 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:06:03.860 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 02:06:03 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:06:03 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:06:03 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:06:03.886 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:06:05 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e178 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:06:05 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:06:05 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 02:06:05 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:06:05.862 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 02:06:05 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:06:05 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 02:06:05 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:06:05.889 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 02:06:06 np0005548731 nova_compute[232433]: 2025-12-06 07:06:06.427 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:06:06 np0005548731 podman[249415]: 2025-12-06 07:06:06.917664837 +0000 UTC m=+0.065768109 container health_status 26c2ce9bdb83c6db5ccad7f5b2a7cb4e9354ab0235ad2f942ea42c8feda2142b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125)
Dec  6 02:06:06 np0005548731 podman[249417]: 2025-12-06 07:06:06.928426107 +0000 UTC m=+0.072317227 container health_status ae4fd08cdec31d6ae2a6b91ffff7deb6ca5a8375cb52ee4b685cc6f25796824e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Dec  6 02:06:06 np0005548731 podman[249416]: 2025-12-06 07:06:06.933927679 +0000 UTC m=+0.092156455 container health_status a118c3becf56cfbcae208065771ebf10a65162bb7b96389971293e534823fb78 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec  6 02:06:07 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e179 e179: 3 total, 3 up, 3 in
Dec  6 02:06:07 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:06:07 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:06:07 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:06:07.864 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:06:07 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:06:07 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:06:07 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:06:07.892 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:06:08 np0005548731 nova_compute[232433]: 2025-12-06 07:06:08.216 232437 DEBUG oslo_concurrency.lockutils [None req-bc945fb2-8503-4ea8-b8ba-e6d664b97ef6 331b8c2798cf4e52a2f60b73cf2831a3 0bb2a00a1f30495ca45c1cb0c482b67c - - default default] Acquiring lock "58618778-1470-4993-b50d-a24f19c41ed3" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:06:08 np0005548731 nova_compute[232433]: 2025-12-06 07:06:08.217 232437 DEBUG oslo_concurrency.lockutils [None req-bc945fb2-8503-4ea8-b8ba-e6d664b97ef6 331b8c2798cf4e52a2f60b73cf2831a3 0bb2a00a1f30495ca45c1cb0c482b67c - - default default] Lock "58618778-1470-4993-b50d-a24f19c41ed3" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:06:08 np0005548731 nova_compute[232433]: 2025-12-06 07:06:08.234 232437 DEBUG nova.compute.manager [None req-bc945fb2-8503-4ea8-b8ba-e6d664b97ef6 331b8c2798cf4e52a2f60b73cf2831a3 0bb2a00a1f30495ca45c1cb0c482b67c - - default default] [instance: 58618778-1470-4993-b50d-a24f19c41ed3] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Dec  6 02:06:08 np0005548731 nova_compute[232433]: 2025-12-06 07:06:08.299 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:06:08 np0005548731 nova_compute[232433]: 2025-12-06 07:06:08.321 232437 DEBUG oslo_concurrency.lockutils [None req-bc945fb2-8503-4ea8-b8ba-e6d664b97ef6 331b8c2798cf4e52a2f60b73cf2831a3 0bb2a00a1f30495ca45c1cb0c482b67c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:06:08 np0005548731 nova_compute[232433]: 2025-12-06 07:06:08.321 232437 DEBUG oslo_concurrency.lockutils [None req-bc945fb2-8503-4ea8-b8ba-e6d664b97ef6 331b8c2798cf4e52a2f60b73cf2831a3 0bb2a00a1f30495ca45c1cb0c482b67c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:06:08 np0005548731 nova_compute[232433]: 2025-12-06 07:06:08.327 232437 DEBUG nova.virt.hardware [None req-bc945fb2-8503-4ea8-b8ba-e6d664b97ef6 331b8c2798cf4e52a2f60b73cf2831a3 0bb2a00a1f30495ca45c1cb0c482b67c - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Dec  6 02:06:08 np0005548731 nova_compute[232433]: 2025-12-06 07:06:08.327 232437 INFO nova.compute.claims [None req-bc945fb2-8503-4ea8-b8ba-e6d664b97ef6 331b8c2798cf4e52a2f60b73cf2831a3 0bb2a00a1f30495ca45c1cb0c482b67c - - default default] [instance: 58618778-1470-4993-b50d-a24f19c41ed3] Claim successful on node compute-2.ctlplane.example.com#033[00m
Dec  6 02:06:08 np0005548731 nova_compute[232433]: 2025-12-06 07:06:08.553 232437 DEBUG oslo_concurrency.processutils [None req-bc945fb2-8503-4ea8-b8ba-e6d664b97ef6 331b8c2798cf4e52a2f60b73cf2831a3 0bb2a00a1f30495ca45c1cb0c482b67c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:06:09 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 02:06:09 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2450534043' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 02:06:09 np0005548731 nova_compute[232433]: 2025-12-06 07:06:09.036 232437 DEBUG oslo_concurrency.processutils [None req-bc945fb2-8503-4ea8-b8ba-e6d664b97ef6 331b8c2798cf4e52a2f60b73cf2831a3 0bb2a00a1f30495ca45c1cb0c482b67c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.482s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:06:09 np0005548731 nova_compute[232433]: 2025-12-06 07:06:09.041 232437 DEBUG nova.compute.provider_tree [None req-bc945fb2-8503-4ea8-b8ba-e6d664b97ef6 331b8c2798cf4e52a2f60b73cf2831a3 0bb2a00a1f30495ca45c1cb0c482b67c - - default default] Inventory has not changed in ProviderTree for provider: 6d00757a-082f-486d-ae84-869a2ba2e6e7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  6 02:06:09 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:06:09 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:06:09 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:06:09.866 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:06:09 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:06:09 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:06:09 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:06:09.895 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:06:10 np0005548731 nova_compute[232433]: 2025-12-06 07:06:10.124 232437 DEBUG nova.scheduler.client.report [None req-bc945fb2-8503-4ea8-b8ba-e6d664b97ef6 331b8c2798cf4e52a2f60b73cf2831a3 0bb2a00a1f30495ca45c1cb0c482b67c - - default default] Inventory has not changed for provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  6 02:06:10 np0005548731 nova_compute[232433]: 2025-12-06 07:06:10.156 232437 DEBUG oslo_concurrency.lockutils [None req-bc945fb2-8503-4ea8-b8ba-e6d664b97ef6 331b8c2798cf4e52a2f60b73cf2831a3 0bb2a00a1f30495ca45c1cb0c482b67c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.835s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:06:10 np0005548731 nova_compute[232433]: 2025-12-06 07:06:10.158 232437 DEBUG nova.compute.manager [None req-bc945fb2-8503-4ea8-b8ba-e6d664b97ef6 331b8c2798cf4e52a2f60b73cf2831a3 0bb2a00a1f30495ca45c1cb0c482b67c - - default default] [instance: 58618778-1470-4993-b50d-a24f19c41ed3] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Dec  6 02:06:10 np0005548731 nova_compute[232433]: 2025-12-06 07:06:10.222 232437 DEBUG nova.compute.manager [None req-bc945fb2-8503-4ea8-b8ba-e6d664b97ef6 331b8c2798cf4e52a2f60b73cf2831a3 0bb2a00a1f30495ca45c1cb0c482b67c - - default default] [instance: 58618778-1470-4993-b50d-a24f19c41ed3] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Dec  6 02:06:10 np0005548731 nova_compute[232433]: 2025-12-06 07:06:10.223 232437 DEBUG nova.network.neutron [None req-bc945fb2-8503-4ea8-b8ba-e6d664b97ef6 331b8c2798cf4e52a2f60b73cf2831a3 0bb2a00a1f30495ca45c1cb0c482b67c - - default default] [instance: 58618778-1470-4993-b50d-a24f19c41ed3] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Dec  6 02:06:10 np0005548731 nova_compute[232433]: 2025-12-06 07:06:10.249 232437 INFO nova.virt.libvirt.driver [None req-bc945fb2-8503-4ea8-b8ba-e6d664b97ef6 331b8c2798cf4e52a2f60b73cf2831a3 0bb2a00a1f30495ca45c1cb0c482b67c - - default default] [instance: 58618778-1470-4993-b50d-a24f19c41ed3] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Dec  6 02:06:10 np0005548731 nova_compute[232433]: 2025-12-06 07:06:10.281 232437 DEBUG nova.compute.manager [None req-bc945fb2-8503-4ea8-b8ba-e6d664b97ef6 331b8c2798cf4e52a2f60b73cf2831a3 0bb2a00a1f30495ca45c1cb0c482b67c - - default default] [instance: 58618778-1470-4993-b50d-a24f19c41ed3] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Dec  6 02:06:10 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:06:10 np0005548731 nova_compute[232433]: 2025-12-06 07:06:10.464 232437 DEBUG nova.compute.manager [None req-bc945fb2-8503-4ea8-b8ba-e6d664b97ef6 331b8c2798cf4e52a2f60b73cf2831a3 0bb2a00a1f30495ca45c1cb0c482b67c - - default default] [instance: 58618778-1470-4993-b50d-a24f19c41ed3] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Dec  6 02:06:10 np0005548731 nova_compute[232433]: 2025-12-06 07:06:10.466 232437 DEBUG nova.virt.libvirt.driver [None req-bc945fb2-8503-4ea8-b8ba-e6d664b97ef6 331b8c2798cf4e52a2f60b73cf2831a3 0bb2a00a1f30495ca45c1cb0c482b67c - - default default] [instance: 58618778-1470-4993-b50d-a24f19c41ed3] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Dec  6 02:06:10 np0005548731 nova_compute[232433]: 2025-12-06 07:06:10.467 232437 INFO nova.virt.libvirt.driver [None req-bc945fb2-8503-4ea8-b8ba-e6d664b97ef6 331b8c2798cf4e52a2f60b73cf2831a3 0bb2a00a1f30495ca45c1cb0c482b67c - - default default] [instance: 58618778-1470-4993-b50d-a24f19c41ed3] Creating image(s)#033[00m
Dec  6 02:06:10 np0005548731 nova_compute[232433]: 2025-12-06 07:06:10.514 232437 DEBUG nova.storage.rbd_utils [None req-bc945fb2-8503-4ea8-b8ba-e6d664b97ef6 331b8c2798cf4e52a2f60b73cf2831a3 0bb2a00a1f30495ca45c1cb0c482b67c - - default default] rbd image 58618778-1470-4993-b50d-a24f19c41ed3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:06:10 np0005548731 nova_compute[232433]: 2025-12-06 07:06:10.554 232437 DEBUG nova.storage.rbd_utils [None req-bc945fb2-8503-4ea8-b8ba-e6d664b97ef6 331b8c2798cf4e52a2f60b73cf2831a3 0bb2a00a1f30495ca45c1cb0c482b67c - - default default] rbd image 58618778-1470-4993-b50d-a24f19c41ed3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:06:10 np0005548731 nova_compute[232433]: 2025-12-06 07:06:10.583 232437 DEBUG nova.storage.rbd_utils [None req-bc945fb2-8503-4ea8-b8ba-e6d664b97ef6 331b8c2798cf4e52a2f60b73cf2831a3 0bb2a00a1f30495ca45c1cb0c482b67c - - default default] rbd image 58618778-1470-4993-b50d-a24f19c41ed3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:06:10 np0005548731 nova_compute[232433]: 2025-12-06 07:06:10.587 232437 DEBUG oslo_concurrency.processutils [None req-bc945fb2-8503-4ea8-b8ba-e6d664b97ef6 331b8c2798cf4e52a2f60b73cf2831a3 0bb2a00a1f30495ca45c1cb0c482b67c - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:06:10 np0005548731 nova_compute[232433]: 2025-12-06 07:06:10.650 232437 DEBUG nova.policy [None req-bc945fb2-8503-4ea8-b8ba-e6d664b97ef6 331b8c2798cf4e52a2f60b73cf2831a3 0bb2a00a1f30495ca45c1cb0c482b67c - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '331b8c2798cf4e52a2f60b73cf2831a3', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '0bb2a00a1f30495ca45c1cb0c482b67c', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Dec  6 02:06:10 np0005548731 nova_compute[232433]: 2025-12-06 07:06:10.680 232437 DEBUG oslo_concurrency.processutils [None req-bc945fb2-8503-4ea8-b8ba-e6d664b97ef6 331b8c2798cf4e52a2f60b73cf2831a3 0bb2a00a1f30495ca45c1cb0c482b67c - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json" returned: 0 in 0.093s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:06:10 np0005548731 nova_compute[232433]: 2025-12-06 07:06:10.681 232437 DEBUG oslo_concurrency.lockutils [None req-bc945fb2-8503-4ea8-b8ba-e6d664b97ef6 331b8c2798cf4e52a2f60b73cf2831a3 0bb2a00a1f30495ca45c1cb0c482b67c - - default default] Acquiring lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:06:10 np0005548731 nova_compute[232433]: 2025-12-06 07:06:10.682 232437 DEBUG oslo_concurrency.lockutils [None req-bc945fb2-8503-4ea8-b8ba-e6d664b97ef6 331b8c2798cf4e52a2f60b73cf2831a3 0bb2a00a1f30495ca45c1cb0c482b67c - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:06:10 np0005548731 nova_compute[232433]: 2025-12-06 07:06:10.682 232437 DEBUG oslo_concurrency.lockutils [None req-bc945fb2-8503-4ea8-b8ba-e6d664b97ef6 331b8c2798cf4e52a2f60b73cf2831a3 0bb2a00a1f30495ca45c1cb0c482b67c - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:06:10 np0005548731 nova_compute[232433]: 2025-12-06 07:06:10.713 232437 DEBUG nova.storage.rbd_utils [None req-bc945fb2-8503-4ea8-b8ba-e6d664b97ef6 331b8c2798cf4e52a2f60b73cf2831a3 0bb2a00a1f30495ca45c1cb0c482b67c - - default default] rbd image 58618778-1470-4993-b50d-a24f19c41ed3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:06:10 np0005548731 nova_compute[232433]: 2025-12-06 07:06:10.717 232437 DEBUG oslo_concurrency.processutils [None req-bc945fb2-8503-4ea8-b8ba-e6d664b97ef6 331b8c2798cf4e52a2f60b73cf2831a3 0bb2a00a1f30495ca45c1cb0c482b67c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef 58618778-1470-4993-b50d-a24f19c41ed3_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:06:11 np0005548731 nova_compute[232433]: 2025-12-06 07:06:11.103 232437 DEBUG oslo_concurrency.processutils [None req-bc945fb2-8503-4ea8-b8ba-e6d664b97ef6 331b8c2798cf4e52a2f60b73cf2831a3 0bb2a00a1f30495ca45c1cb0c482b67c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef 58618778-1470-4993-b50d-a24f19c41ed3_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.386s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:06:11 np0005548731 nova_compute[232433]: 2025-12-06 07:06:11.170 232437 DEBUG nova.storage.rbd_utils [None req-bc945fb2-8503-4ea8-b8ba-e6d664b97ef6 331b8c2798cf4e52a2f60b73cf2831a3 0bb2a00a1f30495ca45c1cb0c482b67c - - default default] resizing rbd image 58618778-1470-4993-b50d-a24f19c41ed3_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Dec  6 02:06:11 np0005548731 nova_compute[232433]: 2025-12-06 07:06:11.291 232437 DEBUG nova.objects.instance [None req-bc945fb2-8503-4ea8-b8ba-e6d664b97ef6 331b8c2798cf4e52a2f60b73cf2831a3 0bb2a00a1f30495ca45c1cb0c482b67c - - default default] Lazy-loading 'migration_context' on Instance uuid 58618778-1470-4993-b50d-a24f19c41ed3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:06:11 np0005548731 nova_compute[232433]: 2025-12-06 07:06:11.428 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:06:11 np0005548731 nova_compute[232433]: 2025-12-06 07:06:11.628 232437 DEBUG nova.virt.libvirt.driver [None req-bc945fb2-8503-4ea8-b8ba-e6d664b97ef6 331b8c2798cf4e52a2f60b73cf2831a3 0bb2a00a1f30495ca45c1cb0c482b67c - - default default] [instance: 58618778-1470-4993-b50d-a24f19c41ed3] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Dec  6 02:06:11 np0005548731 nova_compute[232433]: 2025-12-06 07:06:11.629 232437 DEBUG nova.virt.libvirt.driver [None req-bc945fb2-8503-4ea8-b8ba-e6d664b97ef6 331b8c2798cf4e52a2f60b73cf2831a3 0bb2a00a1f30495ca45c1cb0c482b67c - - default default] [instance: 58618778-1470-4993-b50d-a24f19c41ed3] Ensure instance console log exists: /var/lib/nova/instances/58618778-1470-4993-b50d-a24f19c41ed3/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec  6 02:06:11 np0005548731 nova_compute[232433]: 2025-12-06 07:06:11.629 232437 DEBUG oslo_concurrency.lockutils [None req-bc945fb2-8503-4ea8-b8ba-e6d664b97ef6 331b8c2798cf4e52a2f60b73cf2831a3 0bb2a00a1f30495ca45c1cb0c482b67c - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:06:11 np0005548731 nova_compute[232433]: 2025-12-06 07:06:11.630 232437 DEBUG oslo_concurrency.lockutils [None req-bc945fb2-8503-4ea8-b8ba-e6d664b97ef6 331b8c2798cf4e52a2f60b73cf2831a3 0bb2a00a1f30495ca45c1cb0c482b67c - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:06:11 np0005548731 nova_compute[232433]: 2025-12-06 07:06:11.630 232437 DEBUG oslo_concurrency.lockutils [None req-bc945fb2-8503-4ea8-b8ba-e6d664b97ef6 331b8c2798cf4e52a2f60b73cf2831a3 0bb2a00a1f30495ca45c1cb0c482b67c - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:06:11 np0005548731 nova_compute[232433]: 2025-12-06 07:06:11.659 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:06:11 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:06:11.659 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=13, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:ec:b3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '32:72:e7:89:e0:7d'}, ipsec=False) old=SB_Global(nb_cfg=12) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 02:06:11 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:06:11.660 143965 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec  6 02:06:11 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:06:11 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:06:11 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:06:11.869 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:06:11 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:06:11 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:06:11 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:06:11.898 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:06:12 np0005548731 nova_compute[232433]: 2025-12-06 07:06:12.152 232437 DEBUG nova.network.neutron [None req-bc945fb2-8503-4ea8-b8ba-e6d664b97ef6 331b8c2798cf4e52a2f60b73cf2831a3 0bb2a00a1f30495ca45c1cb0c482b67c - - default default] [instance: 58618778-1470-4993-b50d-a24f19c41ed3] Successfully created port: 123afe9f-34c7-4df6-bb33-a09d55405a44 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Dec  6 02:06:13 np0005548731 nova_compute[232433]: 2025-12-06 07:06:13.301 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:06:13 np0005548731 nova_compute[232433]: 2025-12-06 07:06:13.389 232437 DEBUG nova.network.neutron [None req-bc945fb2-8503-4ea8-b8ba-e6d664b97ef6 331b8c2798cf4e52a2f60b73cf2831a3 0bb2a00a1f30495ca45c1cb0c482b67c - - default default] [instance: 58618778-1470-4993-b50d-a24f19c41ed3] Successfully updated port: 123afe9f-34c7-4df6-bb33-a09d55405a44 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Dec  6 02:06:13 np0005548731 nova_compute[232433]: 2025-12-06 07:06:13.466 232437 DEBUG oslo_concurrency.lockutils [None req-bc945fb2-8503-4ea8-b8ba-e6d664b97ef6 331b8c2798cf4e52a2f60b73cf2831a3 0bb2a00a1f30495ca45c1cb0c482b67c - - default default] Acquiring lock "refresh_cache-58618778-1470-4993-b50d-a24f19c41ed3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 02:06:13 np0005548731 nova_compute[232433]: 2025-12-06 07:06:13.466 232437 DEBUG oslo_concurrency.lockutils [None req-bc945fb2-8503-4ea8-b8ba-e6d664b97ef6 331b8c2798cf4e52a2f60b73cf2831a3 0bb2a00a1f30495ca45c1cb0c482b67c - - default default] Acquired lock "refresh_cache-58618778-1470-4993-b50d-a24f19c41ed3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 02:06:13 np0005548731 nova_compute[232433]: 2025-12-06 07:06:13.467 232437 DEBUG nova.network.neutron [None req-bc945fb2-8503-4ea8-b8ba-e6d664b97ef6 331b8c2798cf4e52a2f60b73cf2831a3 0bb2a00a1f30495ca45c1cb0c482b67c - - default default] [instance: 58618778-1470-4993-b50d-a24f19c41ed3] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec  6 02:06:13 np0005548731 nova_compute[232433]: 2025-12-06 07:06:13.684 232437 DEBUG nova.compute.manager [req-13bdf92b-dd58-4c0e-be58-d6b527c43fc5 req-6de11225-3d6e-415e-b921-460d1af0c96a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 58618778-1470-4993-b50d-a24f19c41ed3] Received event network-changed-123afe9f-34c7-4df6-bb33-a09d55405a44 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:06:13 np0005548731 nova_compute[232433]: 2025-12-06 07:06:13.684 232437 DEBUG nova.compute.manager [req-13bdf92b-dd58-4c0e-be58-d6b527c43fc5 req-6de11225-3d6e-415e-b921-460d1af0c96a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 58618778-1470-4993-b50d-a24f19c41ed3] Refreshing instance network info cache due to event network-changed-123afe9f-34c7-4df6-bb33-a09d55405a44. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  6 02:06:13 np0005548731 nova_compute[232433]: 2025-12-06 07:06:13.685 232437 DEBUG oslo_concurrency.lockutils [req-13bdf92b-dd58-4c0e-be58-d6b527c43fc5 req-6de11225-3d6e-415e-b921-460d1af0c96a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-58618778-1470-4993-b50d-a24f19c41ed3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 02:06:13 np0005548731 nova_compute[232433]: 2025-12-06 07:06:13.775 232437 DEBUG nova.network.neutron [None req-bc945fb2-8503-4ea8-b8ba-e6d664b97ef6 331b8c2798cf4e52a2f60b73cf2831a3 0bb2a00a1f30495ca45c1cb0c482b67c - - default default] [instance: 58618778-1470-4993-b50d-a24f19c41ed3] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec  6 02:06:13 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:06:13 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:06:13 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:06:13.871 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:06:13 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:06:13 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:06:13 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:06:13.901 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:06:15 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:06:15 np0005548731 nova_compute[232433]: 2025-12-06 07:06:15.749 232437 DEBUG nova.network.neutron [None req-bc945fb2-8503-4ea8-b8ba-e6d664b97ef6 331b8c2798cf4e52a2f60b73cf2831a3 0bb2a00a1f30495ca45c1cb0c482b67c - - default default] [instance: 58618778-1470-4993-b50d-a24f19c41ed3] Updating instance_info_cache with network_info: [{"id": "123afe9f-34c7-4df6-bb33-a09d55405a44", "address": "fa:16:3e:ea:b3:94", "network": {"id": "f63e3ad4-d91f-42aa-b6cb-baa14900e53e", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationNegativeTestJSON-625876326-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0bb2a00a1f30495ca45c1cb0c482b67c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap123afe9f-34", "ovs_interfaceid": "123afe9f-34c7-4df6-bb33-a09d55405a44", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 02:06:15 np0005548731 nova_compute[232433]: 2025-12-06 07:06:15.785 232437 DEBUG oslo_concurrency.lockutils [None req-bc945fb2-8503-4ea8-b8ba-e6d664b97ef6 331b8c2798cf4e52a2f60b73cf2831a3 0bb2a00a1f30495ca45c1cb0c482b67c - - default default] Releasing lock "refresh_cache-58618778-1470-4993-b50d-a24f19c41ed3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 02:06:15 np0005548731 nova_compute[232433]: 2025-12-06 07:06:15.785 232437 DEBUG nova.compute.manager [None req-bc945fb2-8503-4ea8-b8ba-e6d664b97ef6 331b8c2798cf4e52a2f60b73cf2831a3 0bb2a00a1f30495ca45c1cb0c482b67c - - default default] [instance: 58618778-1470-4993-b50d-a24f19c41ed3] Instance network_info: |[{"id": "123afe9f-34c7-4df6-bb33-a09d55405a44", "address": "fa:16:3e:ea:b3:94", "network": {"id": "f63e3ad4-d91f-42aa-b6cb-baa14900e53e", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationNegativeTestJSON-625876326-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0bb2a00a1f30495ca45c1cb0c482b67c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap123afe9f-34", "ovs_interfaceid": "123afe9f-34c7-4df6-bb33-a09d55405a44", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Dec  6 02:06:15 np0005548731 nova_compute[232433]: 2025-12-06 07:06:15.786 232437 DEBUG oslo_concurrency.lockutils [req-13bdf92b-dd58-4c0e-be58-d6b527c43fc5 req-6de11225-3d6e-415e-b921-460d1af0c96a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-58618778-1470-4993-b50d-a24f19c41ed3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 02:06:15 np0005548731 nova_compute[232433]: 2025-12-06 07:06:15.787 232437 DEBUG nova.network.neutron [req-13bdf92b-dd58-4c0e-be58-d6b527c43fc5 req-6de11225-3d6e-415e-b921-460d1af0c96a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 58618778-1470-4993-b50d-a24f19c41ed3] Refreshing network info cache for port 123afe9f-34c7-4df6-bb33-a09d55405a44 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  6 02:06:15 np0005548731 nova_compute[232433]: 2025-12-06 07:06:15.792 232437 DEBUG nova.virt.libvirt.driver [None req-bc945fb2-8503-4ea8-b8ba-e6d664b97ef6 331b8c2798cf4e52a2f60b73cf2831a3 0bb2a00a1f30495ca45c1cb0c482b67c - - default default] [instance: 58618778-1470-4993-b50d-a24f19c41ed3] Start _get_guest_xml network_info=[{"id": "123afe9f-34c7-4df6-bb33-a09d55405a44", "address": "fa:16:3e:ea:b3:94", "network": {"id": "f63e3ad4-d91f-42aa-b6cb-baa14900e53e", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationNegativeTestJSON-625876326-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0bb2a00a1f30495ca45c1cb0c482b67c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap123afe9f-34", "ovs_interfaceid": "123afe9f-34c7-4df6-bb33-a09d55405a44", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'device_type': 'disk', 'encryption_secret_uuid': None, 'disk_bus': 'virtio', 'boot_index': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'encryption_options': None, 'size': 0, 'encrypted': False, 'image_id': '6efab05d-c7cf-4770-a5c3-c806a2739063'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec  6 02:06:15 np0005548731 nova_compute[232433]: 2025-12-06 07:06:15.798 232437 WARNING nova.virt.libvirt.driver [None req-bc945fb2-8503-4ea8-b8ba-e6d664b97ef6 331b8c2798cf4e52a2f60b73cf2831a3 0bb2a00a1f30495ca45c1cb0c482b67c - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  6 02:06:15 np0005548731 nova_compute[232433]: 2025-12-06 07:06:15.805 232437 DEBUG nova.virt.libvirt.host [None req-bc945fb2-8503-4ea8-b8ba-e6d664b97ef6 331b8c2798cf4e52a2f60b73cf2831a3 0bb2a00a1f30495ca45c1cb0c482b67c - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec  6 02:06:15 np0005548731 nova_compute[232433]: 2025-12-06 07:06:15.806 232437 DEBUG nova.virt.libvirt.host [None req-bc945fb2-8503-4ea8-b8ba-e6d664b97ef6 331b8c2798cf4e52a2f60b73cf2831a3 0bb2a00a1f30495ca45c1cb0c482b67c - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec  6 02:06:15 np0005548731 nova_compute[232433]: 2025-12-06 07:06:15.810 232437 DEBUG nova.virt.libvirt.host [None req-bc945fb2-8503-4ea8-b8ba-e6d664b97ef6 331b8c2798cf4e52a2f60b73cf2831a3 0bb2a00a1f30495ca45c1cb0c482b67c - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec  6 02:06:15 np0005548731 nova_compute[232433]: 2025-12-06 07:06:15.811 232437 DEBUG nova.virt.libvirt.host [None req-bc945fb2-8503-4ea8-b8ba-e6d664b97ef6 331b8c2798cf4e52a2f60b73cf2831a3 0bb2a00a1f30495ca45c1cb0c482b67c - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec  6 02:06:15 np0005548731 nova_compute[232433]: 2025-12-06 07:06:15.813 232437 DEBUG nova.virt.libvirt.driver [None req-bc945fb2-8503-4ea8-b8ba-e6d664b97ef6 331b8c2798cf4e52a2f60b73cf2831a3 0bb2a00a1f30495ca45c1cb0c482b67c - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec  6 02:06:15 np0005548731 nova_compute[232433]: 2025-12-06 07:06:15.813 232437 DEBUG nova.virt.hardware [None req-bc945fb2-8503-4ea8-b8ba-e6d664b97ef6 331b8c2798cf4e52a2f60b73cf2831a3 0bb2a00a1f30495ca45c1cb0c482b67c - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-06T06:56:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='25848a18-11d9-4f11-80b5-5d005675c76d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec  6 02:06:15 np0005548731 nova_compute[232433]: 2025-12-06 07:06:15.814 232437 DEBUG nova.virt.hardware [None req-bc945fb2-8503-4ea8-b8ba-e6d664b97ef6 331b8c2798cf4e52a2f60b73cf2831a3 0bb2a00a1f30495ca45c1cb0c482b67c - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec  6 02:06:15 np0005548731 nova_compute[232433]: 2025-12-06 07:06:15.815 232437 DEBUG nova.virt.hardware [None req-bc945fb2-8503-4ea8-b8ba-e6d664b97ef6 331b8c2798cf4e52a2f60b73cf2831a3 0bb2a00a1f30495ca45c1cb0c482b67c - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec  6 02:06:15 np0005548731 nova_compute[232433]: 2025-12-06 07:06:15.816 232437 DEBUG nova.virt.hardware [None req-bc945fb2-8503-4ea8-b8ba-e6d664b97ef6 331b8c2798cf4e52a2f60b73cf2831a3 0bb2a00a1f30495ca45c1cb0c482b67c - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec  6 02:06:15 np0005548731 nova_compute[232433]: 2025-12-06 07:06:15.816 232437 DEBUG nova.virt.hardware [None req-bc945fb2-8503-4ea8-b8ba-e6d664b97ef6 331b8c2798cf4e52a2f60b73cf2831a3 0bb2a00a1f30495ca45c1cb0c482b67c - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec  6 02:06:15 np0005548731 nova_compute[232433]: 2025-12-06 07:06:15.817 232437 DEBUG nova.virt.hardware [None req-bc945fb2-8503-4ea8-b8ba-e6d664b97ef6 331b8c2798cf4e52a2f60b73cf2831a3 0bb2a00a1f30495ca45c1cb0c482b67c - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec  6 02:06:15 np0005548731 nova_compute[232433]: 2025-12-06 07:06:15.817 232437 DEBUG nova.virt.hardware [None req-bc945fb2-8503-4ea8-b8ba-e6d664b97ef6 331b8c2798cf4e52a2f60b73cf2831a3 0bb2a00a1f30495ca45c1cb0c482b67c - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec  6 02:06:15 np0005548731 nova_compute[232433]: 2025-12-06 07:06:15.818 232437 DEBUG nova.virt.hardware [None req-bc945fb2-8503-4ea8-b8ba-e6d664b97ef6 331b8c2798cf4e52a2f60b73cf2831a3 0bb2a00a1f30495ca45c1cb0c482b67c - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec  6 02:06:15 np0005548731 nova_compute[232433]: 2025-12-06 07:06:15.819 232437 DEBUG nova.virt.hardware [None req-bc945fb2-8503-4ea8-b8ba-e6d664b97ef6 331b8c2798cf4e52a2f60b73cf2831a3 0bb2a00a1f30495ca45c1cb0c482b67c - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec  6 02:06:15 np0005548731 nova_compute[232433]: 2025-12-06 07:06:15.819 232437 DEBUG nova.virt.hardware [None req-bc945fb2-8503-4ea8-b8ba-e6d664b97ef6 331b8c2798cf4e52a2f60b73cf2831a3 0bb2a00a1f30495ca45c1cb0c482b67c - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec  6 02:06:15 np0005548731 nova_compute[232433]: 2025-12-06 07:06:15.820 232437 DEBUG nova.virt.hardware [None req-bc945fb2-8503-4ea8-b8ba-e6d664b97ef6 331b8c2798cf4e52a2f60b73cf2831a3 0bb2a00a1f30495ca45c1cb0c482b67c - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec  6 02:06:15 np0005548731 nova_compute[232433]: 2025-12-06 07:06:15.825 232437 DEBUG oslo_concurrency.processutils [None req-bc945fb2-8503-4ea8-b8ba-e6d664b97ef6 331b8c2798cf4e52a2f60b73cf2831a3 0bb2a00a1f30495ca45c1cb0c482b67c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:06:15 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:06:15 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:06:15 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:06:15.873 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:06:15 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:06:15 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:06:15 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:06:15.905 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:06:16 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  6 02:06:16 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1384302522' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  6 02:06:16 np0005548731 nova_compute[232433]: 2025-12-06 07:06:16.282 232437 DEBUG oslo_concurrency.processutils [None req-bc945fb2-8503-4ea8-b8ba-e6d664b97ef6 331b8c2798cf4e52a2f60b73cf2831a3 0bb2a00a1f30495ca45c1cb0c482b67c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:06:16 np0005548731 nova_compute[232433]: 2025-12-06 07:06:16.314 232437 DEBUG nova.storage.rbd_utils [None req-bc945fb2-8503-4ea8-b8ba-e6d664b97ef6 331b8c2798cf4e52a2f60b73cf2831a3 0bb2a00a1f30495ca45c1cb0c482b67c - - default default] rbd image 58618778-1470-4993-b50d-a24f19c41ed3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:06:16 np0005548731 nova_compute[232433]: 2025-12-06 07:06:16.319 232437 DEBUG oslo_concurrency.processutils [None req-bc945fb2-8503-4ea8-b8ba-e6d664b97ef6 331b8c2798cf4e52a2f60b73cf2831a3 0bb2a00a1f30495ca45c1cb0c482b67c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:06:16 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e180 e180: 3 total, 3 up, 3 in
Dec  6 02:06:16 np0005548731 nova_compute[232433]: 2025-12-06 07:06:16.430 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:06:16 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:06:16.662 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=9f96b960-b4f2-40bd-ae99-08121f5e8b78, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '13'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:06:16 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  6 02:06:16 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1542511494' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  6 02:06:16 np0005548731 nova_compute[232433]: 2025-12-06 07:06:16.811 232437 DEBUG oslo_concurrency.processutils [None req-bc945fb2-8503-4ea8-b8ba-e6d664b97ef6 331b8c2798cf4e52a2f60b73cf2831a3 0bb2a00a1f30495ca45c1cb0c482b67c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.493s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:06:16 np0005548731 nova_compute[232433]: 2025-12-06 07:06:16.813 232437 DEBUG nova.virt.libvirt.vif [None req-bc945fb2-8503-4ea8-b8ba-e6d664b97ef6 331b8c2798cf4e52a2f60b73cf2831a3 0bb2a00a1f30495ca45c1cb0c482b67c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T07:06:06Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-FloatingIPsAssociationNegativeTestJSON-server-407211273',display_name='tempest-FloatingIPsAssociationNegativeTestJSON-server-407211273',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-floatingipsassociationnegativetestjson-server-407211273',id=35,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='0bb2a00a1f30495ca45c1cb0c482b67c',ramdisk_id='',reservation_id='r-1sv0yve3',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-FloatingIPsAssociationNegativeTestJSON-2036403844',owner_user_name='tempest-FloatingIPsAssociationNegativeTestJSON-2036403844-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:06:10Z,user_data=None,user_id='331b8c2798cf4e52a2f60b73cf2831a3',uuid=58618778-1470-4993-b50d-a24f19c41ed3,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "123afe9f-34c7-4df6-bb33-a09d55405a44", "address": "fa:16:3e:ea:b3:94", "network": {"id": "f63e3ad4-d91f-42aa-b6cb-baa14900e53e", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationNegativeTestJSON-625876326-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0bb2a00a1f30495ca45c1cb0c482b67c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap123afe9f-34", "ovs_interfaceid": "123afe9f-34c7-4df6-bb33-a09d55405a44", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec  6 02:06:16 np0005548731 nova_compute[232433]: 2025-12-06 07:06:16.813 232437 DEBUG nova.network.os_vif_util [None req-bc945fb2-8503-4ea8-b8ba-e6d664b97ef6 331b8c2798cf4e52a2f60b73cf2831a3 0bb2a00a1f30495ca45c1cb0c482b67c - - default default] Converting VIF {"id": "123afe9f-34c7-4df6-bb33-a09d55405a44", "address": "fa:16:3e:ea:b3:94", "network": {"id": "f63e3ad4-d91f-42aa-b6cb-baa14900e53e", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationNegativeTestJSON-625876326-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0bb2a00a1f30495ca45c1cb0c482b67c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap123afe9f-34", "ovs_interfaceid": "123afe9f-34c7-4df6-bb33-a09d55405a44", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 02:06:16 np0005548731 nova_compute[232433]: 2025-12-06 07:06:16.814 232437 DEBUG nova.network.os_vif_util [None req-bc945fb2-8503-4ea8-b8ba-e6d664b97ef6 331b8c2798cf4e52a2f60b73cf2831a3 0bb2a00a1f30495ca45c1cb0c482b67c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ea:b3:94,bridge_name='br-int',has_traffic_filtering=True,id=123afe9f-34c7-4df6-bb33-a09d55405a44,network=Network(f63e3ad4-d91f-42aa-b6cb-baa14900e53e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap123afe9f-34') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 02:06:16 np0005548731 nova_compute[232433]: 2025-12-06 07:06:16.815 232437 DEBUG nova.objects.instance [None req-bc945fb2-8503-4ea8-b8ba-e6d664b97ef6 331b8c2798cf4e52a2f60b73cf2831a3 0bb2a00a1f30495ca45c1cb0c482b67c - - default default] Lazy-loading 'pci_devices' on Instance uuid 58618778-1470-4993-b50d-a24f19c41ed3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:06:16 np0005548731 nova_compute[232433]: 2025-12-06 07:06:16.848 232437 DEBUG nova.virt.libvirt.driver [None req-bc945fb2-8503-4ea8-b8ba-e6d664b97ef6 331b8c2798cf4e52a2f60b73cf2831a3 0bb2a00a1f30495ca45c1cb0c482b67c - - default default] [instance: 58618778-1470-4993-b50d-a24f19c41ed3] End _get_guest_xml xml=<domain type="kvm">
Dec  6 02:06:16 np0005548731 nova_compute[232433]:  <uuid>58618778-1470-4993-b50d-a24f19c41ed3</uuid>
Dec  6 02:06:16 np0005548731 nova_compute[232433]:  <name>instance-00000023</name>
Dec  6 02:06:16 np0005548731 nova_compute[232433]:  <memory>131072</memory>
Dec  6 02:06:16 np0005548731 nova_compute[232433]:  <vcpu>1</vcpu>
Dec  6 02:06:16 np0005548731 nova_compute[232433]:  <metadata>
Dec  6 02:06:16 np0005548731 nova_compute[232433]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec  6 02:06:16 np0005548731 nova_compute[232433]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec  6 02:06:16 np0005548731 nova_compute[232433]:      <nova:name>tempest-FloatingIPsAssociationNegativeTestJSON-server-407211273</nova:name>
Dec  6 02:06:16 np0005548731 nova_compute[232433]:      <nova:creationTime>2025-12-06 07:06:15</nova:creationTime>
Dec  6 02:06:16 np0005548731 nova_compute[232433]:      <nova:flavor name="m1.nano">
Dec  6 02:06:16 np0005548731 nova_compute[232433]:        <nova:memory>128</nova:memory>
Dec  6 02:06:16 np0005548731 nova_compute[232433]:        <nova:disk>1</nova:disk>
Dec  6 02:06:16 np0005548731 nova_compute[232433]:        <nova:swap>0</nova:swap>
Dec  6 02:06:16 np0005548731 nova_compute[232433]:        <nova:ephemeral>0</nova:ephemeral>
Dec  6 02:06:16 np0005548731 nova_compute[232433]:        <nova:vcpus>1</nova:vcpus>
Dec  6 02:06:16 np0005548731 nova_compute[232433]:      </nova:flavor>
Dec  6 02:06:16 np0005548731 nova_compute[232433]:      <nova:owner>
Dec  6 02:06:16 np0005548731 nova_compute[232433]:        <nova:user uuid="331b8c2798cf4e52a2f60b73cf2831a3">tempest-FloatingIPsAssociationNegativeTestJSON-2036403844-project-member</nova:user>
Dec  6 02:06:16 np0005548731 nova_compute[232433]:        <nova:project uuid="0bb2a00a1f30495ca45c1cb0c482b67c">tempest-FloatingIPsAssociationNegativeTestJSON-2036403844</nova:project>
Dec  6 02:06:16 np0005548731 nova_compute[232433]:      </nova:owner>
Dec  6 02:06:16 np0005548731 nova_compute[232433]:      <nova:root type="image" uuid="6efab05d-c7cf-4770-a5c3-c806a2739063"/>
Dec  6 02:06:16 np0005548731 nova_compute[232433]:      <nova:ports>
Dec  6 02:06:16 np0005548731 nova_compute[232433]:        <nova:port uuid="123afe9f-34c7-4df6-bb33-a09d55405a44">
Dec  6 02:06:16 np0005548731 nova_compute[232433]:          <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Dec  6 02:06:16 np0005548731 nova_compute[232433]:        </nova:port>
Dec  6 02:06:16 np0005548731 nova_compute[232433]:      </nova:ports>
Dec  6 02:06:16 np0005548731 nova_compute[232433]:    </nova:instance>
Dec  6 02:06:16 np0005548731 nova_compute[232433]:  </metadata>
Dec  6 02:06:16 np0005548731 nova_compute[232433]:  <sysinfo type="smbios">
Dec  6 02:06:16 np0005548731 nova_compute[232433]:    <system>
Dec  6 02:06:16 np0005548731 nova_compute[232433]:      <entry name="manufacturer">RDO</entry>
Dec  6 02:06:16 np0005548731 nova_compute[232433]:      <entry name="product">OpenStack Compute</entry>
Dec  6 02:06:16 np0005548731 nova_compute[232433]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec  6 02:06:16 np0005548731 nova_compute[232433]:      <entry name="serial">58618778-1470-4993-b50d-a24f19c41ed3</entry>
Dec  6 02:06:16 np0005548731 nova_compute[232433]:      <entry name="uuid">58618778-1470-4993-b50d-a24f19c41ed3</entry>
Dec  6 02:06:16 np0005548731 nova_compute[232433]:      <entry name="family">Virtual Machine</entry>
Dec  6 02:06:16 np0005548731 nova_compute[232433]:    </system>
Dec  6 02:06:16 np0005548731 nova_compute[232433]:  </sysinfo>
Dec  6 02:06:16 np0005548731 nova_compute[232433]:  <os>
Dec  6 02:06:16 np0005548731 nova_compute[232433]:    <type arch="x86_64" machine="q35">hvm</type>
Dec  6 02:06:16 np0005548731 nova_compute[232433]:    <boot dev="hd"/>
Dec  6 02:06:16 np0005548731 nova_compute[232433]:    <smbios mode="sysinfo"/>
Dec  6 02:06:16 np0005548731 nova_compute[232433]:  </os>
Dec  6 02:06:16 np0005548731 nova_compute[232433]:  <features>
Dec  6 02:06:16 np0005548731 nova_compute[232433]:    <acpi/>
Dec  6 02:06:16 np0005548731 nova_compute[232433]:    <apic/>
Dec  6 02:06:16 np0005548731 nova_compute[232433]:    <vmcoreinfo/>
Dec  6 02:06:16 np0005548731 nova_compute[232433]:  </features>
Dec  6 02:06:16 np0005548731 nova_compute[232433]:  <clock offset="utc">
Dec  6 02:06:16 np0005548731 nova_compute[232433]:    <timer name="pit" tickpolicy="delay"/>
Dec  6 02:06:16 np0005548731 nova_compute[232433]:    <timer name="rtc" tickpolicy="catchup"/>
Dec  6 02:06:16 np0005548731 nova_compute[232433]:    <timer name="hpet" present="no"/>
Dec  6 02:06:16 np0005548731 nova_compute[232433]:  </clock>
Dec  6 02:06:16 np0005548731 nova_compute[232433]:  <cpu mode="custom" match="exact">
Dec  6 02:06:16 np0005548731 nova_compute[232433]:    <model>Nehalem</model>
Dec  6 02:06:16 np0005548731 nova_compute[232433]:    <topology sockets="1" cores="1" threads="1"/>
Dec  6 02:06:16 np0005548731 nova_compute[232433]:  </cpu>
Dec  6 02:06:16 np0005548731 nova_compute[232433]:  <devices>
Dec  6 02:06:16 np0005548731 nova_compute[232433]:    <disk type="network" device="disk">
Dec  6 02:06:16 np0005548731 nova_compute[232433]:      <driver type="raw" cache="none"/>
Dec  6 02:06:16 np0005548731 nova_compute[232433]:      <source protocol="rbd" name="vms/58618778-1470-4993-b50d-a24f19c41ed3_disk">
Dec  6 02:06:16 np0005548731 nova_compute[232433]:        <host name="192.168.122.100" port="6789"/>
Dec  6 02:06:16 np0005548731 nova_compute[232433]:        <host name="192.168.122.102" port="6789"/>
Dec  6 02:06:16 np0005548731 nova_compute[232433]:        <host name="192.168.122.101" port="6789"/>
Dec  6 02:06:16 np0005548731 nova_compute[232433]:      </source>
Dec  6 02:06:16 np0005548731 nova_compute[232433]:      <auth username="openstack">
Dec  6 02:06:16 np0005548731 nova_compute[232433]:        <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec  6 02:06:16 np0005548731 nova_compute[232433]:      </auth>
Dec  6 02:06:16 np0005548731 nova_compute[232433]:      <target dev="vda" bus="virtio"/>
Dec  6 02:06:16 np0005548731 nova_compute[232433]:    </disk>
Dec  6 02:06:16 np0005548731 nova_compute[232433]:    <disk type="network" device="cdrom">
Dec  6 02:06:16 np0005548731 nova_compute[232433]:      <driver type="raw" cache="none"/>
Dec  6 02:06:16 np0005548731 nova_compute[232433]:      <source protocol="rbd" name="vms/58618778-1470-4993-b50d-a24f19c41ed3_disk.config">
Dec  6 02:06:16 np0005548731 nova_compute[232433]:        <host name="192.168.122.100" port="6789"/>
Dec  6 02:06:16 np0005548731 nova_compute[232433]:        <host name="192.168.122.102" port="6789"/>
Dec  6 02:06:16 np0005548731 nova_compute[232433]:        <host name="192.168.122.101" port="6789"/>
Dec  6 02:06:16 np0005548731 nova_compute[232433]:      </source>
Dec  6 02:06:16 np0005548731 nova_compute[232433]:      <auth username="openstack">
Dec  6 02:06:16 np0005548731 nova_compute[232433]:        <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec  6 02:06:16 np0005548731 nova_compute[232433]:      </auth>
Dec  6 02:06:16 np0005548731 nova_compute[232433]:      <target dev="sda" bus="sata"/>
Dec  6 02:06:16 np0005548731 nova_compute[232433]:    </disk>
Dec  6 02:06:16 np0005548731 nova_compute[232433]:    <interface type="ethernet">
Dec  6 02:06:16 np0005548731 nova_compute[232433]:      <mac address="fa:16:3e:ea:b3:94"/>
Dec  6 02:06:16 np0005548731 nova_compute[232433]:      <model type="virtio"/>
Dec  6 02:06:16 np0005548731 nova_compute[232433]:      <driver name="vhost" rx_queue_size="512"/>
Dec  6 02:06:16 np0005548731 nova_compute[232433]:      <mtu size="1442"/>
Dec  6 02:06:16 np0005548731 nova_compute[232433]:      <target dev="tap123afe9f-34"/>
Dec  6 02:06:16 np0005548731 nova_compute[232433]:    </interface>
Dec  6 02:06:16 np0005548731 nova_compute[232433]:    <serial type="pty">
Dec  6 02:06:16 np0005548731 nova_compute[232433]:      <log file="/var/lib/nova/instances/58618778-1470-4993-b50d-a24f19c41ed3/console.log" append="off"/>
Dec  6 02:06:16 np0005548731 nova_compute[232433]:    </serial>
Dec  6 02:06:16 np0005548731 nova_compute[232433]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec  6 02:06:16 np0005548731 nova_compute[232433]:    <video>
Dec  6 02:06:16 np0005548731 nova_compute[232433]:      <model type="virtio"/>
Dec  6 02:06:16 np0005548731 nova_compute[232433]:    </video>
Dec  6 02:06:16 np0005548731 nova_compute[232433]:    <input type="tablet" bus="usb"/>
Dec  6 02:06:16 np0005548731 nova_compute[232433]:    <rng model="virtio">
Dec  6 02:06:16 np0005548731 nova_compute[232433]:      <backend model="random">/dev/urandom</backend>
Dec  6 02:06:16 np0005548731 nova_compute[232433]:    </rng>
Dec  6 02:06:16 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root"/>
Dec  6 02:06:16 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:06:16 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:06:16 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:06:16 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:06:16 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:06:16 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:06:16 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:06:16 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:06:16 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:06:16 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:06:16 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:06:16 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:06:16 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:06:16 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:06:16 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:06:16 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:06:16 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:06:16 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:06:16 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:06:16 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:06:16 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:06:16 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:06:16 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:06:16 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:06:16 np0005548731 nova_compute[232433]:    <controller type="usb" index="0"/>
Dec  6 02:06:16 np0005548731 nova_compute[232433]:    <memballoon model="virtio">
Dec  6 02:06:16 np0005548731 nova_compute[232433]:      <stats period="10"/>
Dec  6 02:06:16 np0005548731 nova_compute[232433]:    </memballoon>
Dec  6 02:06:16 np0005548731 nova_compute[232433]:  </devices>
Dec  6 02:06:16 np0005548731 nova_compute[232433]: </domain>
Dec  6 02:06:16 np0005548731 nova_compute[232433]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec  6 02:06:16 np0005548731 nova_compute[232433]: 2025-12-06 07:06:16.849 232437 DEBUG nova.compute.manager [None req-bc945fb2-8503-4ea8-b8ba-e6d664b97ef6 331b8c2798cf4e52a2f60b73cf2831a3 0bb2a00a1f30495ca45c1cb0c482b67c - - default default] [instance: 58618778-1470-4993-b50d-a24f19c41ed3] Preparing to wait for external event network-vif-plugged-123afe9f-34c7-4df6-bb33-a09d55405a44 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec  6 02:06:16 np0005548731 nova_compute[232433]: 2025-12-06 07:06:16.850 232437 DEBUG oslo_concurrency.lockutils [None req-bc945fb2-8503-4ea8-b8ba-e6d664b97ef6 331b8c2798cf4e52a2f60b73cf2831a3 0bb2a00a1f30495ca45c1cb0c482b67c - - default default] Acquiring lock "58618778-1470-4993-b50d-a24f19c41ed3-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:06:16 np0005548731 nova_compute[232433]: 2025-12-06 07:06:16.850 232437 DEBUG oslo_concurrency.lockutils [None req-bc945fb2-8503-4ea8-b8ba-e6d664b97ef6 331b8c2798cf4e52a2f60b73cf2831a3 0bb2a00a1f30495ca45c1cb0c482b67c - - default default] Lock "58618778-1470-4993-b50d-a24f19c41ed3-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:06:16 np0005548731 nova_compute[232433]: 2025-12-06 07:06:16.850 232437 DEBUG oslo_concurrency.lockutils [None req-bc945fb2-8503-4ea8-b8ba-e6d664b97ef6 331b8c2798cf4e52a2f60b73cf2831a3 0bb2a00a1f30495ca45c1cb0c482b67c - - default default] Lock "58618778-1470-4993-b50d-a24f19c41ed3-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:06:16 np0005548731 nova_compute[232433]: 2025-12-06 07:06:16.851 232437 DEBUG nova.virt.libvirt.vif [None req-bc945fb2-8503-4ea8-b8ba-e6d664b97ef6 331b8c2798cf4e52a2f60b73cf2831a3 0bb2a00a1f30495ca45c1cb0c482b67c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T07:06:06Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-FloatingIPsAssociationNegativeTestJSON-server-407211273',display_name='tempest-FloatingIPsAssociationNegativeTestJSON-server-407211273',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-floatingipsassociationnegativetestjson-server-407211273',id=35,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='0bb2a00a1f30495ca45c1cb0c482b67c',ramdisk_id='',reservation_id='r-1sv0yve3',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-FloatingIPsAssociationNegativeTestJSON-2036403844',owner_user_name='tempest-FloatingIPsAssociationNegativeTestJSON-2036403844-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:06:10Z,user_data=None,user_id='331b8c2798cf4e52a2f60b73cf2831a3',uuid=58618778-1470-4993-b50d-a24f19c41ed3,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "123afe9f-34c7-4df6-bb33-a09d55405a44", "address": "fa:16:3e:ea:b3:94", "network": {"id": "f63e3ad4-d91f-42aa-b6cb-baa14900e53e", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationNegativeTestJSON-625876326-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0bb2a00a1f30495ca45c1cb0c482b67c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap123afe9f-34", "ovs_interfaceid": "123afe9f-34c7-4df6-bb33-a09d55405a44", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec  6 02:06:16 np0005548731 nova_compute[232433]: 2025-12-06 07:06:16.852 232437 DEBUG nova.network.os_vif_util [None req-bc945fb2-8503-4ea8-b8ba-e6d664b97ef6 331b8c2798cf4e52a2f60b73cf2831a3 0bb2a00a1f30495ca45c1cb0c482b67c - - default default] Converting VIF {"id": "123afe9f-34c7-4df6-bb33-a09d55405a44", "address": "fa:16:3e:ea:b3:94", "network": {"id": "f63e3ad4-d91f-42aa-b6cb-baa14900e53e", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationNegativeTestJSON-625876326-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0bb2a00a1f30495ca45c1cb0c482b67c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap123afe9f-34", "ovs_interfaceid": "123afe9f-34c7-4df6-bb33-a09d55405a44", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 02:06:16 np0005548731 nova_compute[232433]: 2025-12-06 07:06:16.852 232437 DEBUG nova.network.os_vif_util [None req-bc945fb2-8503-4ea8-b8ba-e6d664b97ef6 331b8c2798cf4e52a2f60b73cf2831a3 0bb2a00a1f30495ca45c1cb0c482b67c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ea:b3:94,bridge_name='br-int',has_traffic_filtering=True,id=123afe9f-34c7-4df6-bb33-a09d55405a44,network=Network(f63e3ad4-d91f-42aa-b6cb-baa14900e53e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap123afe9f-34') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 02:06:16 np0005548731 nova_compute[232433]: 2025-12-06 07:06:16.852 232437 DEBUG os_vif [None req-bc945fb2-8503-4ea8-b8ba-e6d664b97ef6 331b8c2798cf4e52a2f60b73cf2831a3 0bb2a00a1f30495ca45c1cb0c482b67c - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ea:b3:94,bridge_name='br-int',has_traffic_filtering=True,id=123afe9f-34c7-4df6-bb33-a09d55405a44,network=Network(f63e3ad4-d91f-42aa-b6cb-baa14900e53e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap123afe9f-34') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec  6 02:06:16 np0005548731 nova_compute[232433]: 2025-12-06 07:06:16.853 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:06:16 np0005548731 nova_compute[232433]: 2025-12-06 07:06:16.854 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:06:16 np0005548731 nova_compute[232433]: 2025-12-06 07:06:16.854 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  6 02:06:16 np0005548731 nova_compute[232433]: 2025-12-06 07:06:16.858 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:06:16 np0005548731 nova_compute[232433]: 2025-12-06 07:06:16.858 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap123afe9f-34, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:06:16 np0005548731 nova_compute[232433]: 2025-12-06 07:06:16.858 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap123afe9f-34, col_values=(('external_ids', {'iface-id': '123afe9f-34c7-4df6-bb33-a09d55405a44', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:ea:b3:94', 'vm-uuid': '58618778-1470-4993-b50d-a24f19c41ed3'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:06:16 np0005548731 nova_compute[232433]: 2025-12-06 07:06:16.860 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:06:16 np0005548731 NetworkManager[49182]: <info>  [1765004776.8621] manager: (tap123afe9f-34): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/71)
Dec  6 02:06:16 np0005548731 nova_compute[232433]: 2025-12-06 07:06:16.863 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  6 02:06:16 np0005548731 nova_compute[232433]: 2025-12-06 07:06:16.869 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:06:16 np0005548731 nova_compute[232433]: 2025-12-06 07:06:16.871 232437 INFO os_vif [None req-bc945fb2-8503-4ea8-b8ba-e6d664b97ef6 331b8c2798cf4e52a2f60b73cf2831a3 0bb2a00a1f30495ca45c1cb0c482b67c - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ea:b3:94,bridge_name='br-int',has_traffic_filtering=True,id=123afe9f-34c7-4df6-bb33-a09d55405a44,network=Network(f63e3ad4-d91f-42aa-b6cb-baa14900e53e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap123afe9f-34')#033[00m
Dec  6 02:06:17 np0005548731 nova_compute[232433]: 2025-12-06 07:06:17.158 232437 DEBUG nova.virt.libvirt.driver [None req-bc945fb2-8503-4ea8-b8ba-e6d664b97ef6 331b8c2798cf4e52a2f60b73cf2831a3 0bb2a00a1f30495ca45c1cb0c482b67c - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  6 02:06:17 np0005548731 nova_compute[232433]: 2025-12-06 07:06:17.159 232437 DEBUG nova.virt.libvirt.driver [None req-bc945fb2-8503-4ea8-b8ba-e6d664b97ef6 331b8c2798cf4e52a2f60b73cf2831a3 0bb2a00a1f30495ca45c1cb0c482b67c - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  6 02:06:17 np0005548731 nova_compute[232433]: 2025-12-06 07:06:17.159 232437 DEBUG nova.virt.libvirt.driver [None req-bc945fb2-8503-4ea8-b8ba-e6d664b97ef6 331b8c2798cf4e52a2f60b73cf2831a3 0bb2a00a1f30495ca45c1cb0c482b67c - - default default] No VIF found with MAC fa:16:3e:ea:b3:94, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec  6 02:06:17 np0005548731 nova_compute[232433]: 2025-12-06 07:06:17.160 232437 INFO nova.virt.libvirt.driver [None req-bc945fb2-8503-4ea8-b8ba-e6d664b97ef6 331b8c2798cf4e52a2f60b73cf2831a3 0bb2a00a1f30495ca45c1cb0c482b67c - - default default] [instance: 58618778-1470-4993-b50d-a24f19c41ed3] Using config drive#033[00m
Dec  6 02:06:17 np0005548731 nova_compute[232433]: 2025-12-06 07:06:17.192 232437 DEBUG nova.storage.rbd_utils [None req-bc945fb2-8503-4ea8-b8ba-e6d664b97ef6 331b8c2798cf4e52a2f60b73cf2831a3 0bb2a00a1f30495ca45c1cb0c482b67c - - default default] rbd image 58618778-1470-4993-b50d-a24f19c41ed3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:06:17 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:06:17 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 02:06:17 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:06:17.874 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 02:06:17 np0005548731 nova_compute[232433]: 2025-12-06 07:06:17.886 232437 INFO nova.virt.libvirt.driver [None req-bc945fb2-8503-4ea8-b8ba-e6d664b97ef6 331b8c2798cf4e52a2f60b73cf2831a3 0bb2a00a1f30495ca45c1cb0c482b67c - - default default] [instance: 58618778-1470-4993-b50d-a24f19c41ed3] Creating config drive at /var/lib/nova/instances/58618778-1470-4993-b50d-a24f19c41ed3/disk.config#033[00m
Dec  6 02:06:17 np0005548731 nova_compute[232433]: 2025-12-06 07:06:17.892 232437 DEBUG oslo_concurrency.processutils [None req-bc945fb2-8503-4ea8-b8ba-e6d664b97ef6 331b8c2798cf4e52a2f60b73cf2831a3 0bb2a00a1f30495ca45c1cb0c482b67c - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/58618778-1470-4993-b50d-a24f19c41ed3/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpcaam2ebd execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:06:17 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:06:17 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 02:06:17 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:06:17.907 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 02:06:18 np0005548731 nova_compute[232433]: 2025-12-06 07:06:18.027 232437 DEBUG oslo_concurrency.processutils [None req-bc945fb2-8503-4ea8-b8ba-e6d664b97ef6 331b8c2798cf4e52a2f60b73cf2831a3 0bb2a00a1f30495ca45c1cb0c482b67c - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/58618778-1470-4993-b50d-a24f19c41ed3/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpcaam2ebd" returned: 0 in 0.135s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  6 02:06:18 np0005548731 nova_compute[232433]: 2025-12-06 07:06:18.059 232437 DEBUG nova.storage.rbd_utils [None req-bc945fb2-8503-4ea8-b8ba-e6d664b97ef6 331b8c2798cf4e52a2f60b73cf2831a3 0bb2a00a1f30495ca45c1cb0c482b67c - - default default] rbd image 58618778-1470-4993-b50d-a24f19c41ed3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec  6 02:06:18 np0005548731 nova_compute[232433]: 2025-12-06 07:06:18.064 232437 DEBUG oslo_concurrency.processutils [None req-bc945fb2-8503-4ea8-b8ba-e6d664b97ef6 331b8c2798cf4e52a2f60b73cf2831a3 0bb2a00a1f30495ca45c1cb0c482b67c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/58618778-1470-4993-b50d-a24f19c41ed3/disk.config 58618778-1470-4993-b50d-a24f19c41ed3_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  6 02:06:18 np0005548731 nova_compute[232433]: 2025-12-06 07:06:18.214 232437 DEBUG nova.network.neutron [req-13bdf92b-dd58-4c0e-be58-d6b527c43fc5 req-6de11225-3d6e-415e-b921-460d1af0c96a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 58618778-1470-4993-b50d-a24f19c41ed3] Updated VIF entry in instance network info cache for port 123afe9f-34c7-4df6-bb33-a09d55405a44. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec  6 02:06:18 np0005548731 nova_compute[232433]: 2025-12-06 07:06:18.215 232437 DEBUG nova.network.neutron [req-13bdf92b-dd58-4c0e-be58-d6b527c43fc5 req-6de11225-3d6e-415e-b921-460d1af0c96a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 58618778-1470-4993-b50d-a24f19c41ed3] Updating instance_info_cache with network_info: [{"id": "123afe9f-34c7-4df6-bb33-a09d55405a44", "address": "fa:16:3e:ea:b3:94", "network": {"id": "f63e3ad4-d91f-42aa-b6cb-baa14900e53e", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationNegativeTestJSON-625876326-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0bb2a00a1f30495ca45c1cb0c482b67c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap123afe9f-34", "ovs_interfaceid": "123afe9f-34c7-4df6-bb33-a09d55405a44", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec  6 02:06:18 np0005548731 nova_compute[232433]: 2025-12-06 07:06:18.219 232437 DEBUG oslo_concurrency.processutils [None req-bc945fb2-8503-4ea8-b8ba-e6d664b97ef6 331b8c2798cf4e52a2f60b73cf2831a3 0bb2a00a1f30495ca45c1cb0c482b67c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/58618778-1470-4993-b50d-a24f19c41ed3/disk.config 58618778-1470-4993-b50d-a24f19c41ed3_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.156s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  6 02:06:18 np0005548731 nova_compute[232433]: 2025-12-06 07:06:18.219 232437 INFO nova.virt.libvirt.driver [None req-bc945fb2-8503-4ea8-b8ba-e6d664b97ef6 331b8c2798cf4e52a2f60b73cf2831a3 0bb2a00a1f30495ca45c1cb0c482b67c - - default default] [instance: 58618778-1470-4993-b50d-a24f19c41ed3] Deleting local config drive /var/lib/nova/instances/58618778-1470-4993-b50d-a24f19c41ed3/disk.config because it was imported into RBD.
Dec  6 02:06:18 np0005548731 nova_compute[232433]: 2025-12-06 07:06:18.245 232437 DEBUG oslo_concurrency.lockutils [req-13bdf92b-dd58-4c0e-be58-d6b527c43fc5 req-6de11225-3d6e-415e-b921-460d1af0c96a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-58618778-1470-4993-b50d-a24f19c41ed3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec  6 02:06:18 np0005548731 kernel: tap123afe9f-34: entered promiscuous mode
Dec  6 02:06:18 np0005548731 NetworkManager[49182]: <info>  [1765004778.2693] manager: (tap123afe9f-34): new Tun device (/org/freedesktop/NetworkManager/Devices/72)
Dec  6 02:06:18 np0005548731 ovn_controller[133927]: 2025-12-06T07:06:18Z|00119|binding|INFO|Claiming lport 123afe9f-34c7-4df6-bb33-a09d55405a44 for this chassis.
Dec  6 02:06:18 np0005548731 ovn_controller[133927]: 2025-12-06T07:06:18Z|00120|binding|INFO|123afe9f-34c7-4df6-bb33-a09d55405a44: Claiming fa:16:3e:ea:b3:94 10.100.0.9
Dec  6 02:06:18 np0005548731 nova_compute[232433]: 2025-12-06 07:06:18.269 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 02:06:18 np0005548731 nova_compute[232433]: 2025-12-06 07:06:18.273 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 02:06:18 np0005548731 nova_compute[232433]: 2025-12-06 07:06:18.276 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 02:06:18 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:06:18.283 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ea:b3:94 10.100.0.9'], port_security=['fa:16:3e:ea:b3:94 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '58618778-1470-4993-b50d-a24f19c41ed3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f63e3ad4-d91f-42aa-b6cb-baa14900e53e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0bb2a00a1f30495ca45c1cb0c482b67c', 'neutron:revision_number': '2', 'neutron:security_group_ids': '04b7001d-6cea-4ba1-942a-8b94540b27a3', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d4a168e9-2e71-4459-a445-c21df5cbec14, chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], logical_port=123afe9f-34c7-4df6-bb33-a09d55405a44) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec  6 02:06:18 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:06:18.284 143965 INFO neutron.agent.ovn.metadata.agent [-] Port 123afe9f-34c7-4df6-bb33-a09d55405a44 in datapath f63e3ad4-d91f-42aa-b6cb-baa14900e53e bound to our chassis
Dec  6 02:06:18 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:06:18.285 143965 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f63e3ad4-d91f-42aa-b6cb-baa14900e53e
Dec  6 02:06:18 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:06:18.295 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[3e996fc4-69bf-4ca1-a1be-cf0d5e2169fa]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  6 02:06:18 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:06:18.296 143965 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapf63e3ad4-d1 in ovnmeta-f63e3ad4-d91f-42aa-b6cb-baa14900e53e namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Dec  6 02:06:18 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:06:18.298 236668 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapf63e3ad4-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec  6 02:06:18 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:06:18.298 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[bf6c573b-feaf-4d7e-b671-d43ad724335f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  6 02:06:18 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:06:18.299 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[9287e22a-36d5-41c6-a98c-6996cbf091f3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  6 02:06:18 np0005548731 systemd-udevd[249857]: Network interface NamePolicy= disabled on kernel command line.
Dec  6 02:06:18 np0005548731 systemd-machined[195355]: New machine qemu-18-instance-00000023.
Dec  6 02:06:18 np0005548731 NetworkManager[49182]: <info>  [1765004778.3121] device (tap123afe9f-34): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec  6 02:06:18 np0005548731 NetworkManager[49182]: <info>  [1765004778.3134] device (tap123afe9f-34): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec  6 02:06:18 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:06:18.312 144079 DEBUG oslo.privsep.daemon [-] privsep: reply[729b1c84-682f-4857-861c-4ec52723f12f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  6 02:06:18 np0005548731 systemd[1]: Started Virtual Machine qemu-18-instance-00000023.
Dec  6 02:06:18 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:06:18.336 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[bdd76120-0531-485a-b912-d48a91c2abe8]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  6 02:06:18 np0005548731 nova_compute[232433]: 2025-12-06 07:06:18.346 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 02:06:18 np0005548731 ovn_controller[133927]: 2025-12-06T07:06:18Z|00121|binding|INFO|Setting lport 123afe9f-34c7-4df6-bb33-a09d55405a44 ovn-installed in OVS
Dec  6 02:06:18 np0005548731 ovn_controller[133927]: 2025-12-06T07:06:18Z|00122|binding|INFO|Setting lport 123afe9f-34c7-4df6-bb33-a09d55405a44 up in Southbound
Dec  6 02:06:18 np0005548731 nova_compute[232433]: 2025-12-06 07:06:18.351 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 02:06:18 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:06:18.362 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[9acdb258-cad9-4701-8c66-47a8eb6e9a41]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  6 02:06:18 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:06:18.366 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[64f4a010-8a47-423b-ae2d-69ce3551749d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  6 02:06:18 np0005548731 NetworkManager[49182]: <info>  [1765004778.3677] manager: (tapf63e3ad4-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/73)
Dec  6 02:06:18 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:06:18.404 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[9d47f00c-18fb-4772-a364-3afe751814dc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  6 02:06:18 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:06:18.406 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[ccd41339-4e11-40f7-a02d-7535b7a02f7f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  6 02:06:18 np0005548731 NetworkManager[49182]: <info>  [1765004778.4276] device (tapf63e3ad4-d0): carrier: link connected
Dec  6 02:06:18 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:06:18.433 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[edc6760f-7c11-42c2-a7e7-f76c2b5fb02c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  6 02:06:18 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:06:18.448 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[a535b39a-fc1b-4346-ac41-d1c11d83f21c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf63e3ad4-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f2:e8:93'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 43], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 504043, 'reachable_time': 27101, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 148, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 148, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 249889, 'error': None, 'target': 'ovnmeta-f63e3ad4-d91f-42aa-b6cb-baa14900e53e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  6 02:06:18 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:06:18.460 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[470d90e7-a436-4e7c-b149-aa9b3ff471ac]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fef2:e893'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 504043, 'tstamp': 504043}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 249890, 'error': None, 'target': 'ovnmeta-f63e3ad4-d91f-42aa-b6cb-baa14900e53e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  6 02:06:18 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:06:18.476 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[9bf9cd00-1249-4ead-8a58-6c32c5253828]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf63e3ad4-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f2:e8:93'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 43], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 504043, 'reachable_time': 27101, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 148, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 148, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 249891, 'error': None, 'target': 'ovnmeta-f63e3ad4-d91f-42aa-b6cb-baa14900e53e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  6 02:06:18 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:06:18.510 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[0f8d54d8-2cd8-4f70-8f7e-2e4f53a4160e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  6 02:06:18 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:06:18.575 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[e3be9c80-9cc5-4ab8-9191-3acfb51003cd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  6 02:06:18 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:06:18.577 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf63e3ad4-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec  6 02:06:18 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:06:18.577 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec  6 02:06:18 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:06:18.578 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf63e3ad4-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec  6 02:06:18 np0005548731 nova_compute[232433]: 2025-12-06 07:06:18.579 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 02:06:18 np0005548731 NetworkManager[49182]: <info>  [1765004778.5798] manager: (tapf63e3ad4-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/74)
Dec  6 02:06:18 np0005548731 kernel: tapf63e3ad4-d0: entered promiscuous mode
Dec  6 02:06:18 np0005548731 nova_compute[232433]: 2025-12-06 07:06:18.581 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 02:06:18 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:06:18.582 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf63e3ad4-d0, col_values=(('external_ids', {'iface-id': '19fc600e-599c-4898-addb-eabbd44a4381'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec  6 02:06:18 np0005548731 nova_compute[232433]: 2025-12-06 07:06:18.583 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 02:06:18 np0005548731 ovn_controller[133927]: 2025-12-06T07:06:18Z|00123|binding|INFO|Releasing lport 19fc600e-599c-4898-addb-eabbd44a4381 from this chassis (sb_readonly=0)
Dec  6 02:06:18 np0005548731 nova_compute[232433]: 2025-12-06 07:06:18.599 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 02:06:18 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:06:18.600 143965 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/f63e3ad4-d91f-42aa-b6cb-baa14900e53e.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/f63e3ad4-d91f-42aa-b6cb-baa14900e53e.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec  6 02:06:18 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:06:18.601 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[b21fb568-1dc0-4144-b912-0c306dd0a941]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  6 02:06:18 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:06:18.602 143965 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec  6 02:06:18 np0005548731 ovn_metadata_agent[143960]: global
Dec  6 02:06:18 np0005548731 ovn_metadata_agent[143960]:    log         /dev/log local0 debug
Dec  6 02:06:18 np0005548731 ovn_metadata_agent[143960]:    log-tag     haproxy-metadata-proxy-f63e3ad4-d91f-42aa-b6cb-baa14900e53e
Dec  6 02:06:18 np0005548731 ovn_metadata_agent[143960]:    user        root
Dec  6 02:06:18 np0005548731 ovn_metadata_agent[143960]:    group       root
Dec  6 02:06:18 np0005548731 ovn_metadata_agent[143960]:    maxconn     1024
Dec  6 02:06:18 np0005548731 ovn_metadata_agent[143960]:    pidfile     /var/lib/neutron/external/pids/f63e3ad4-d91f-42aa-b6cb-baa14900e53e.pid.haproxy
Dec  6 02:06:18 np0005548731 ovn_metadata_agent[143960]:    daemon
Dec  6 02:06:18 np0005548731 ovn_metadata_agent[143960]: 
Dec  6 02:06:18 np0005548731 ovn_metadata_agent[143960]: defaults
Dec  6 02:06:18 np0005548731 ovn_metadata_agent[143960]:    log global
Dec  6 02:06:18 np0005548731 ovn_metadata_agent[143960]:    mode http
Dec  6 02:06:18 np0005548731 ovn_metadata_agent[143960]:    option httplog
Dec  6 02:06:18 np0005548731 ovn_metadata_agent[143960]:    option dontlognull
Dec  6 02:06:18 np0005548731 ovn_metadata_agent[143960]:    option http-server-close
Dec  6 02:06:18 np0005548731 ovn_metadata_agent[143960]:    option forwardfor
Dec  6 02:06:18 np0005548731 ovn_metadata_agent[143960]:    retries                 3
Dec  6 02:06:18 np0005548731 ovn_metadata_agent[143960]:    timeout http-request    30s
Dec  6 02:06:18 np0005548731 ovn_metadata_agent[143960]:    timeout connect         30s
Dec  6 02:06:18 np0005548731 ovn_metadata_agent[143960]:    timeout client          32s
Dec  6 02:06:18 np0005548731 ovn_metadata_agent[143960]:    timeout server          32s
Dec  6 02:06:18 np0005548731 ovn_metadata_agent[143960]:    timeout http-keep-alive 30s
Dec  6 02:06:18 np0005548731 ovn_metadata_agent[143960]: 
Dec  6 02:06:18 np0005548731 ovn_metadata_agent[143960]: 
Dec  6 02:06:18 np0005548731 ovn_metadata_agent[143960]: listen listener
Dec  6 02:06:18 np0005548731 ovn_metadata_agent[143960]:    bind 169.254.169.254:80
Dec  6 02:06:18 np0005548731 ovn_metadata_agent[143960]:    server metadata /var/lib/neutron/metadata_proxy
Dec  6 02:06:18 np0005548731 ovn_metadata_agent[143960]:    http-request add-header X-OVN-Network-ID f63e3ad4-d91f-42aa-b6cb-baa14900e53e
Dec  6 02:06:18 np0005548731 ovn_metadata_agent[143960]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec  6 02:06:18 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:06:18.603 143965 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-f63e3ad4-d91f-42aa-b6cb-baa14900e53e', 'env', 'PROCESS_TAG=haproxy-f63e3ad4-d91f-42aa-b6cb-baa14900e53e', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/f63e3ad4-d91f-42aa-b6cb-baa14900e53e.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Dec  6 02:06:18 np0005548731 nova_compute[232433]: 2025-12-06 07:06:18.752 232437 DEBUG nova.virt.driver [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Emitting event <LifecycleEvent: 1765004778.7518475, 58618778-1470-4993-b50d-a24f19c41ed3 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec  6 02:06:18 np0005548731 nova_compute[232433]: 2025-12-06 07:06:18.753 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 58618778-1470-4993-b50d-a24f19c41ed3] VM Started (Lifecycle Event)
Dec  6 02:06:18 np0005548731 nova_compute[232433]: 2025-12-06 07:06:18.771 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 58618778-1470-4993-b50d-a24f19c41ed3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec  6 02:06:18 np0005548731 nova_compute[232433]: 2025-12-06 07:06:18.775 232437 DEBUG nova.virt.driver [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Emitting event <LifecycleEvent: 1765004778.7519674, 58618778-1470-4993-b50d-a24f19c41ed3 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec  6 02:06:18 np0005548731 nova_compute[232433]: 2025-12-06 07:06:18.775 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 58618778-1470-4993-b50d-a24f19c41ed3] VM Paused (Lifecycle Event)
Dec  6 02:06:18 np0005548731 nova_compute[232433]: 2025-12-06 07:06:18.798 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 58618778-1470-4993-b50d-a24f19c41ed3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec  6 02:06:18 np0005548731 nova_compute[232433]: 2025-12-06 07:06:18.800 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 58618778-1470-4993-b50d-a24f19c41ed3] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec  6 02:06:18 np0005548731 nova_compute[232433]: 2025-12-06 07:06:18.819 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 58618778-1470-4993-b50d-a24f19c41ed3] During sync_power_state the instance has a pending task (spawning). Skip.
Dec  6 02:06:18 np0005548731 podman[249966]: 2025-12-06 07:06:18.951061016 +0000 UTC m=+0.054005714 container create 128748be4643dc7e49842184c2bba747e73e2eff335ae8087c1fd92547e50a7d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f63e3ad4-d91f-42aa-b6cb-baa14900e53e, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Dec  6 02:06:18 np0005548731 systemd[1]: Started libpod-conmon-128748be4643dc7e49842184c2bba747e73e2eff335ae8087c1fd92547e50a7d.scope.
Dec  6 02:06:19 np0005548731 systemd[1]: Started libcrun container.
Dec  6 02:06:19 np0005548731 podman[249966]: 2025-12-06 07:06:18.925729495 +0000 UTC m=+0.028674223 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec  6 02:06:19 np0005548731 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ea32a38affadf8ea0056bd4ae7c69f3a2e9be5dfcb2463973210c2f0731f39d9/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec  6 02:06:19 np0005548731 podman[249966]: 2025-12-06 07:06:19.037389411 +0000 UTC m=+0.140334129 container init 128748be4643dc7e49842184c2bba747e73e2eff335ae8087c1fd92547e50a7d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f63e3ad4-d91f-42aa-b6cb-baa14900e53e, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Dec  6 02:06:19 np0005548731 podman[249966]: 2025-12-06 07:06:19.044340869 +0000 UTC m=+0.147285567 container start 128748be4643dc7e49842184c2bba747e73e2eff335ae8087c1fd92547e50a7d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f63e3ad4-d91f-42aa-b6cb-baa14900e53e, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec  6 02:06:19 np0005548731 neutron-haproxy-ovnmeta-f63e3ad4-d91f-42aa-b6cb-baa14900e53e[249981]: [NOTICE]   (249985) : New worker (249987) forked
Dec  6 02:06:19 np0005548731 neutron-haproxy-ovnmeta-f63e3ad4-d91f-42aa-b6cb-baa14900e53e[249981]: [NOTICE]   (249985) : Loading success.
Dec  6 02:06:19 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:06:19 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:06:19 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:06:19.877 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:06:19 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:06:19 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:06:19 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:06:19.911 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:06:20 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e180 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:06:20 np0005548731 nova_compute[232433]: 2025-12-06 07:06:20.704 232437 DEBUG nova.compute.manager [req-4f256411-765b-4798-ab83-1272683addb6 req-173f4c7c-12b1-4ce8-a47e-251c9002c959 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 58618778-1470-4993-b50d-a24f19c41ed3] Received event network-vif-plugged-123afe9f-34c7-4df6-bb33-a09d55405a44 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:06:20 np0005548731 nova_compute[232433]: 2025-12-06 07:06:20.705 232437 DEBUG oslo_concurrency.lockutils [req-4f256411-765b-4798-ab83-1272683addb6 req-173f4c7c-12b1-4ce8-a47e-251c9002c959 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "58618778-1470-4993-b50d-a24f19c41ed3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:06:20 np0005548731 nova_compute[232433]: 2025-12-06 07:06:20.705 232437 DEBUG oslo_concurrency.lockutils [req-4f256411-765b-4798-ab83-1272683addb6 req-173f4c7c-12b1-4ce8-a47e-251c9002c959 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "58618778-1470-4993-b50d-a24f19c41ed3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:06:20 np0005548731 nova_compute[232433]: 2025-12-06 07:06:20.705 232437 DEBUG oslo_concurrency.lockutils [req-4f256411-765b-4798-ab83-1272683addb6 req-173f4c7c-12b1-4ce8-a47e-251c9002c959 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "58618778-1470-4993-b50d-a24f19c41ed3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:06:20 np0005548731 nova_compute[232433]: 2025-12-06 07:06:20.705 232437 DEBUG nova.compute.manager [req-4f256411-765b-4798-ab83-1272683addb6 req-173f4c7c-12b1-4ce8-a47e-251c9002c959 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 58618778-1470-4993-b50d-a24f19c41ed3] Processing event network-vif-plugged-123afe9f-34c7-4df6-bb33-a09d55405a44 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Dec  6 02:06:20 np0005548731 nova_compute[232433]: 2025-12-06 07:06:20.705 232437 DEBUG nova.compute.manager [req-4f256411-765b-4798-ab83-1272683addb6 req-173f4c7c-12b1-4ce8-a47e-251c9002c959 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 58618778-1470-4993-b50d-a24f19c41ed3] Received event network-vif-plugged-123afe9f-34c7-4df6-bb33-a09d55405a44 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:06:20 np0005548731 nova_compute[232433]: 2025-12-06 07:06:20.706 232437 DEBUG oslo_concurrency.lockutils [req-4f256411-765b-4798-ab83-1272683addb6 req-173f4c7c-12b1-4ce8-a47e-251c9002c959 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "58618778-1470-4993-b50d-a24f19c41ed3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:06:20 np0005548731 nova_compute[232433]: 2025-12-06 07:06:20.706 232437 DEBUG oslo_concurrency.lockutils [req-4f256411-765b-4798-ab83-1272683addb6 req-173f4c7c-12b1-4ce8-a47e-251c9002c959 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "58618778-1470-4993-b50d-a24f19c41ed3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:06:20 np0005548731 nova_compute[232433]: 2025-12-06 07:06:20.706 232437 DEBUG oslo_concurrency.lockutils [req-4f256411-765b-4798-ab83-1272683addb6 req-173f4c7c-12b1-4ce8-a47e-251c9002c959 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "58618778-1470-4993-b50d-a24f19c41ed3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:06:20 np0005548731 nova_compute[232433]: 2025-12-06 07:06:20.706 232437 DEBUG nova.compute.manager [req-4f256411-765b-4798-ab83-1272683addb6 req-173f4c7c-12b1-4ce8-a47e-251c9002c959 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 58618778-1470-4993-b50d-a24f19c41ed3] No waiting events found dispatching network-vif-plugged-123afe9f-34c7-4df6-bb33-a09d55405a44 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 02:06:20 np0005548731 nova_compute[232433]: 2025-12-06 07:06:20.706 232437 WARNING nova.compute.manager [req-4f256411-765b-4798-ab83-1272683addb6 req-173f4c7c-12b1-4ce8-a47e-251c9002c959 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 58618778-1470-4993-b50d-a24f19c41ed3] Received unexpected event network-vif-plugged-123afe9f-34c7-4df6-bb33-a09d55405a44 for instance with vm_state building and task_state spawning.#033[00m
Dec  6 02:06:20 np0005548731 nova_compute[232433]: 2025-12-06 07:06:20.707 232437 DEBUG nova.compute.manager [None req-bc945fb2-8503-4ea8-b8ba-e6d664b97ef6 331b8c2798cf4e52a2f60b73cf2831a3 0bb2a00a1f30495ca45c1cb0c482b67c - - default default] [instance: 58618778-1470-4993-b50d-a24f19c41ed3] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec  6 02:06:20 np0005548731 nova_compute[232433]: 2025-12-06 07:06:20.710 232437 DEBUG nova.virt.driver [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Emitting event <LifecycleEvent: 1765004780.7099943, 58618778-1470-4993-b50d-a24f19c41ed3 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 02:06:20 np0005548731 nova_compute[232433]: 2025-12-06 07:06:20.710 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 58618778-1470-4993-b50d-a24f19c41ed3] VM Resumed (Lifecycle Event)#033[00m
Dec  6 02:06:20 np0005548731 nova_compute[232433]: 2025-12-06 07:06:20.712 232437 DEBUG nova.virt.libvirt.driver [None req-bc945fb2-8503-4ea8-b8ba-e6d664b97ef6 331b8c2798cf4e52a2f60b73cf2831a3 0bb2a00a1f30495ca45c1cb0c482b67c - - default default] [instance: 58618778-1470-4993-b50d-a24f19c41ed3] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Dec  6 02:06:20 np0005548731 nova_compute[232433]: 2025-12-06 07:06:20.714 232437 INFO nova.virt.libvirt.driver [-] [instance: 58618778-1470-4993-b50d-a24f19c41ed3] Instance spawned successfully.#033[00m
Dec  6 02:06:20 np0005548731 nova_compute[232433]: 2025-12-06 07:06:20.715 232437 DEBUG nova.virt.libvirt.driver [None req-bc945fb2-8503-4ea8-b8ba-e6d664b97ef6 331b8c2798cf4e52a2f60b73cf2831a3 0bb2a00a1f30495ca45c1cb0c482b67c - - default default] [instance: 58618778-1470-4993-b50d-a24f19c41ed3] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Dec  6 02:06:20 np0005548731 nova_compute[232433]: 2025-12-06 07:06:20.731 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 58618778-1470-4993-b50d-a24f19c41ed3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:06:20 np0005548731 nova_compute[232433]: 2025-12-06 07:06:20.736 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 58618778-1470-4993-b50d-a24f19c41ed3] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  6 02:06:20 np0005548731 nova_compute[232433]: 2025-12-06 07:06:20.739 232437 DEBUG nova.virt.libvirt.driver [None req-bc945fb2-8503-4ea8-b8ba-e6d664b97ef6 331b8c2798cf4e52a2f60b73cf2831a3 0bb2a00a1f30495ca45c1cb0c482b67c - - default default] [instance: 58618778-1470-4993-b50d-a24f19c41ed3] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 02:06:20 np0005548731 nova_compute[232433]: 2025-12-06 07:06:20.739 232437 DEBUG nova.virt.libvirt.driver [None req-bc945fb2-8503-4ea8-b8ba-e6d664b97ef6 331b8c2798cf4e52a2f60b73cf2831a3 0bb2a00a1f30495ca45c1cb0c482b67c - - default default] [instance: 58618778-1470-4993-b50d-a24f19c41ed3] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 02:06:20 np0005548731 nova_compute[232433]: 2025-12-06 07:06:20.739 232437 DEBUG nova.virt.libvirt.driver [None req-bc945fb2-8503-4ea8-b8ba-e6d664b97ef6 331b8c2798cf4e52a2f60b73cf2831a3 0bb2a00a1f30495ca45c1cb0c482b67c - - default default] [instance: 58618778-1470-4993-b50d-a24f19c41ed3] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 02:06:20 np0005548731 nova_compute[232433]: 2025-12-06 07:06:20.740 232437 DEBUG nova.virt.libvirt.driver [None req-bc945fb2-8503-4ea8-b8ba-e6d664b97ef6 331b8c2798cf4e52a2f60b73cf2831a3 0bb2a00a1f30495ca45c1cb0c482b67c - - default default] [instance: 58618778-1470-4993-b50d-a24f19c41ed3] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 02:06:20 np0005548731 nova_compute[232433]: 2025-12-06 07:06:20.740 232437 DEBUG nova.virt.libvirt.driver [None req-bc945fb2-8503-4ea8-b8ba-e6d664b97ef6 331b8c2798cf4e52a2f60b73cf2831a3 0bb2a00a1f30495ca45c1cb0c482b67c - - default default] [instance: 58618778-1470-4993-b50d-a24f19c41ed3] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 02:06:20 np0005548731 nova_compute[232433]: 2025-12-06 07:06:20.741 232437 DEBUG nova.virt.libvirt.driver [None req-bc945fb2-8503-4ea8-b8ba-e6d664b97ef6 331b8c2798cf4e52a2f60b73cf2831a3 0bb2a00a1f30495ca45c1cb0c482b67c - - default default] [instance: 58618778-1470-4993-b50d-a24f19c41ed3] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 02:06:20 np0005548731 nova_compute[232433]: 2025-12-06 07:06:20.769 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 58618778-1470-4993-b50d-a24f19c41ed3] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  6 02:06:20 np0005548731 nova_compute[232433]: 2025-12-06 07:06:20.798 232437 INFO nova.compute.manager [None req-bc945fb2-8503-4ea8-b8ba-e6d664b97ef6 331b8c2798cf4e52a2f60b73cf2831a3 0bb2a00a1f30495ca45c1cb0c482b67c - - default default] [instance: 58618778-1470-4993-b50d-a24f19c41ed3] Took 10.33 seconds to spawn the instance on the hypervisor.#033[00m
Dec  6 02:06:20 np0005548731 nova_compute[232433]: 2025-12-06 07:06:20.798 232437 DEBUG nova.compute.manager [None req-bc945fb2-8503-4ea8-b8ba-e6d664b97ef6 331b8c2798cf4e52a2f60b73cf2831a3 0bb2a00a1f30495ca45c1cb0c482b67c - - default default] [instance: 58618778-1470-4993-b50d-a24f19c41ed3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:06:20 np0005548731 nova_compute[232433]: 2025-12-06 07:06:20.873 232437 INFO nova.compute.manager [None req-bc945fb2-8503-4ea8-b8ba-e6d664b97ef6 331b8c2798cf4e52a2f60b73cf2831a3 0bb2a00a1f30495ca45c1cb0c482b67c - - default default] [instance: 58618778-1470-4993-b50d-a24f19c41ed3] Took 12.58 seconds to build instance.#033[00m
Dec  6 02:06:20 np0005548731 nova_compute[232433]: 2025-12-06 07:06:20.891 232437 DEBUG oslo_concurrency.lockutils [None req-bc945fb2-8503-4ea8-b8ba-e6d664b97ef6 331b8c2798cf4e52a2f60b73cf2831a3 0bb2a00a1f30495ca45c1cb0c482b67c - - default default] Lock "58618778-1470-4993-b50d-a24f19c41ed3" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.674s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:06:21 np0005548731 nova_compute[232433]: 2025-12-06 07:06:21.434 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:06:21 np0005548731 nova_compute[232433]: 2025-12-06 07:06:21.861 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:06:21 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:06:21 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:06:21 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:06:21.879 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:06:21 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:06:21 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:06:21 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:06:21.912 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:06:22 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e181 e181: 3 total, 3 up, 3 in
Dec  6 02:06:23 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:06:23 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:06:23 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:06:23.881 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:06:23 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:06:23 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:06:23 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:06:23.915 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:06:25 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e181 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:06:25 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:06:25 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000025s ======
Dec  6 02:06:25 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:06:25.883 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Dec  6 02:06:25 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:06:25 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:06:25 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:06:25.918 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:06:26 np0005548731 nova_compute[232433]: 2025-12-06 07:06:26.501 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:06:26 np0005548731 nova_compute[232433]: 2025-12-06 07:06:26.788 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:06:26 np0005548731 NetworkManager[49182]: <info>  [1765004786.7888] manager: (patch-br-int-to-provnet-9e78c1a1-68f4-477a-abaa-13a98bde06e5): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/75)
Dec  6 02:06:26 np0005548731 NetworkManager[49182]: <info>  [1765004786.7902] manager: (patch-provnet-9e78c1a1-68f4-477a-abaa-13a98bde06e5-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/76)
Dec  6 02:06:26 np0005548731 nova_compute[232433]: 2025-12-06 07:06:26.863 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:06:26 np0005548731 nova_compute[232433]: 2025-12-06 07:06:26.872 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:06:26 np0005548731 ovn_controller[133927]: 2025-12-06T07:06:26Z|00124|binding|INFO|Releasing lport 19fc600e-599c-4898-addb-eabbd44a4381 from this chassis (sb_readonly=0)
Dec  6 02:06:26 np0005548731 nova_compute[232433]: 2025-12-06 07:06:26.881 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:06:27 np0005548731 nova_compute[232433]: 2025-12-06 07:06:27.155 232437 DEBUG nova.compute.manager [req-4c3e7a5b-f373-41c3-9d19-eabe8f5bb604 req-6950b853-a476-4f0d-a272-8b78b28f321b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 58618778-1470-4993-b50d-a24f19c41ed3] Received event network-changed-123afe9f-34c7-4df6-bb33-a09d55405a44 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:06:27 np0005548731 nova_compute[232433]: 2025-12-06 07:06:27.156 232437 DEBUG nova.compute.manager [req-4c3e7a5b-f373-41c3-9d19-eabe8f5bb604 req-6950b853-a476-4f0d-a272-8b78b28f321b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 58618778-1470-4993-b50d-a24f19c41ed3] Refreshing instance network info cache due to event network-changed-123afe9f-34c7-4df6-bb33-a09d55405a44. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  6 02:06:27 np0005548731 nova_compute[232433]: 2025-12-06 07:06:27.156 232437 DEBUG oslo_concurrency.lockutils [req-4c3e7a5b-f373-41c3-9d19-eabe8f5bb604 req-6950b853-a476-4f0d-a272-8b78b28f321b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-58618778-1470-4993-b50d-a24f19c41ed3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 02:06:27 np0005548731 nova_compute[232433]: 2025-12-06 07:06:27.157 232437 DEBUG oslo_concurrency.lockutils [req-4c3e7a5b-f373-41c3-9d19-eabe8f5bb604 req-6950b853-a476-4f0d-a272-8b78b28f321b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-58618778-1470-4993-b50d-a24f19c41ed3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 02:06:27 np0005548731 nova_compute[232433]: 2025-12-06 07:06:27.157 232437 DEBUG nova.network.neutron [req-4c3e7a5b-f373-41c3-9d19-eabe8f5bb604 req-6950b853-a476-4f0d-a272-8b78b28f321b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 58618778-1470-4993-b50d-a24f19c41ed3] Refreshing network info cache for port 123afe9f-34c7-4df6-bb33-a09d55405a44 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  6 02:06:27 np0005548731 nova_compute[232433]: 2025-12-06 07:06:27.584 232437 DEBUG oslo_concurrency.lockutils [None req-611617ba-ed67-4fee-9144-9b356b074a70 c9025bd3f0854dff80d9408800d6b76b 503b2dfdce9d47598a8b9de4b15e1d45 - - default default] Acquiring lock "43acf76c-4b6b-4e85-9f2a-607bd18f7906" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:06:27 np0005548731 nova_compute[232433]: 2025-12-06 07:06:27.585 232437 DEBUG oslo_concurrency.lockutils [None req-611617ba-ed67-4fee-9144-9b356b074a70 c9025bd3f0854dff80d9408800d6b76b 503b2dfdce9d47598a8b9de4b15e1d45 - - default default] Lock "43acf76c-4b6b-4e85-9f2a-607bd18f7906" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:06:27 np0005548731 nova_compute[232433]: 2025-12-06 07:06:27.604 232437 DEBUG nova.compute.manager [None req-611617ba-ed67-4fee-9144-9b356b074a70 c9025bd3f0854dff80d9408800d6b76b 503b2dfdce9d47598a8b9de4b15e1d45 - - default default] [instance: 43acf76c-4b6b-4e85-9f2a-607bd18f7906] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Dec  6 02:06:27 np0005548731 nova_compute[232433]: 2025-12-06 07:06:27.687 232437 DEBUG oslo_concurrency.lockutils [None req-611617ba-ed67-4fee-9144-9b356b074a70 c9025bd3f0854dff80d9408800d6b76b 503b2dfdce9d47598a8b9de4b15e1d45 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:06:27 np0005548731 nova_compute[232433]: 2025-12-06 07:06:27.687 232437 DEBUG oslo_concurrency.lockutils [None req-611617ba-ed67-4fee-9144-9b356b074a70 c9025bd3f0854dff80d9408800d6b76b 503b2dfdce9d47598a8b9de4b15e1d45 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:06:27 np0005548731 nova_compute[232433]: 2025-12-06 07:06:27.693 232437 DEBUG nova.virt.hardware [None req-611617ba-ed67-4fee-9144-9b356b074a70 c9025bd3f0854dff80d9408800d6b76b 503b2dfdce9d47598a8b9de4b15e1d45 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Dec  6 02:06:27 np0005548731 nova_compute[232433]: 2025-12-06 07:06:27.693 232437 INFO nova.compute.claims [None req-611617ba-ed67-4fee-9144-9b356b074a70 c9025bd3f0854dff80d9408800d6b76b 503b2dfdce9d47598a8b9de4b15e1d45 - - default default] [instance: 43acf76c-4b6b-4e85-9f2a-607bd18f7906] Claim successful on node compute-2.ctlplane.example.com#033[00m
Dec  6 02:06:27 np0005548731 nova_compute[232433]: 2025-12-06 07:06:27.818 232437 DEBUG oslo_concurrency.processutils [None req-611617ba-ed67-4fee-9144-9b356b074a70 c9025bd3f0854dff80d9408800d6b76b 503b2dfdce9d47598a8b9de4b15e1d45 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:06:27 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:06:27 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:06:27 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:06:27.885 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:06:27 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:06:27 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:06:27 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:06:27.921 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:06:28 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 02:06:28 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3325075958' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 02:06:28 np0005548731 nova_compute[232433]: 2025-12-06 07:06:28.321 232437 DEBUG oslo_concurrency.processutils [None req-611617ba-ed67-4fee-9144-9b356b074a70 c9025bd3f0854dff80d9408800d6b76b 503b2dfdce9d47598a8b9de4b15e1d45 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.503s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:06:28 np0005548731 nova_compute[232433]: 2025-12-06 07:06:28.328 232437 DEBUG nova.compute.provider_tree [None req-611617ba-ed67-4fee-9144-9b356b074a70 c9025bd3f0854dff80d9408800d6b76b 503b2dfdce9d47598a8b9de4b15e1d45 - - default default] Inventory has not changed in ProviderTree for provider: 6d00757a-082f-486d-ae84-869a2ba2e6e7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  6 02:06:28 np0005548731 nova_compute[232433]: 2025-12-06 07:06:28.356 232437 DEBUG nova.scheduler.client.report [None req-611617ba-ed67-4fee-9144-9b356b074a70 c9025bd3f0854dff80d9408800d6b76b 503b2dfdce9d47598a8b9de4b15e1d45 - - default default] Inventory has not changed for provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  6 02:06:28 np0005548731 nova_compute[232433]: 2025-12-06 07:06:28.388 232437 DEBUG oslo_concurrency.lockutils [None req-611617ba-ed67-4fee-9144-9b356b074a70 c9025bd3f0854dff80d9408800d6b76b 503b2dfdce9d47598a8b9de4b15e1d45 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.701s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:06:28 np0005548731 nova_compute[232433]: 2025-12-06 07:06:28.389 232437 DEBUG nova.compute.manager [None req-611617ba-ed67-4fee-9144-9b356b074a70 c9025bd3f0854dff80d9408800d6b76b 503b2dfdce9d47598a8b9de4b15e1d45 - - default default] [instance: 43acf76c-4b6b-4e85-9f2a-607bd18f7906] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Dec  6 02:06:28 np0005548731 nova_compute[232433]: 2025-12-06 07:06:28.442 232437 DEBUG nova.compute.manager [None req-611617ba-ed67-4fee-9144-9b356b074a70 c9025bd3f0854dff80d9408800d6b76b 503b2dfdce9d47598a8b9de4b15e1d45 - - default default] [instance: 43acf76c-4b6b-4e85-9f2a-607bd18f7906] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Dec  6 02:06:28 np0005548731 nova_compute[232433]: 2025-12-06 07:06:28.443 232437 DEBUG nova.network.neutron [None req-611617ba-ed67-4fee-9144-9b356b074a70 c9025bd3f0854dff80d9408800d6b76b 503b2dfdce9d47598a8b9de4b15e1d45 - - default default] [instance: 43acf76c-4b6b-4e85-9f2a-607bd18f7906] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Dec  6 02:06:28 np0005548731 nova_compute[232433]: 2025-12-06 07:06:28.467 232437 INFO nova.virt.libvirt.driver [None req-611617ba-ed67-4fee-9144-9b356b074a70 c9025bd3f0854dff80d9408800d6b76b 503b2dfdce9d47598a8b9de4b15e1d45 - - default default] [instance: 43acf76c-4b6b-4e85-9f2a-607bd18f7906] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Dec  6 02:06:28 np0005548731 nova_compute[232433]: 2025-12-06 07:06:28.484 232437 DEBUG nova.compute.manager [None req-611617ba-ed67-4fee-9144-9b356b074a70 c9025bd3f0854dff80d9408800d6b76b 503b2dfdce9d47598a8b9de4b15e1d45 - - default default] [instance: 43acf76c-4b6b-4e85-9f2a-607bd18f7906] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Dec  6 02:06:28 np0005548731 nova_compute[232433]: 2025-12-06 07:06:28.584 232437 DEBUG nova.compute.manager [None req-611617ba-ed67-4fee-9144-9b356b074a70 c9025bd3f0854dff80d9408800d6b76b 503b2dfdce9d47598a8b9de4b15e1d45 - - default default] [instance: 43acf76c-4b6b-4e85-9f2a-607bd18f7906] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Dec  6 02:06:28 np0005548731 nova_compute[232433]: 2025-12-06 07:06:28.586 232437 DEBUG nova.virt.libvirt.driver [None req-611617ba-ed67-4fee-9144-9b356b074a70 c9025bd3f0854dff80d9408800d6b76b 503b2dfdce9d47598a8b9de4b15e1d45 - - default default] [instance: 43acf76c-4b6b-4e85-9f2a-607bd18f7906] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Dec  6 02:06:28 np0005548731 nova_compute[232433]: 2025-12-06 07:06:28.587 232437 INFO nova.virt.libvirt.driver [None req-611617ba-ed67-4fee-9144-9b356b074a70 c9025bd3f0854dff80d9408800d6b76b 503b2dfdce9d47598a8b9de4b15e1d45 - - default default] [instance: 43acf76c-4b6b-4e85-9f2a-607bd18f7906] Creating image(s)#033[00m
Dec  6 02:06:28 np0005548731 nova_compute[232433]: 2025-12-06 07:06:28.626 232437 DEBUG nova.storage.rbd_utils [None req-611617ba-ed67-4fee-9144-9b356b074a70 c9025bd3f0854dff80d9408800d6b76b 503b2dfdce9d47598a8b9de4b15e1d45 - - default default] rbd image 43acf76c-4b6b-4e85-9f2a-607bd18f7906_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:06:28 np0005548731 nova_compute[232433]: 2025-12-06 07:06:28.654 232437 DEBUG nova.storage.rbd_utils [None req-611617ba-ed67-4fee-9144-9b356b074a70 c9025bd3f0854dff80d9408800d6b76b 503b2dfdce9d47598a8b9de4b15e1d45 - - default default] rbd image 43acf76c-4b6b-4e85-9f2a-607bd18f7906_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:06:28 np0005548731 nova_compute[232433]: 2025-12-06 07:06:28.684 232437 DEBUG nova.storage.rbd_utils [None req-611617ba-ed67-4fee-9144-9b356b074a70 c9025bd3f0854dff80d9408800d6b76b 503b2dfdce9d47598a8b9de4b15e1d45 - - default default] rbd image 43acf76c-4b6b-4e85-9f2a-607bd18f7906_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:06:28 np0005548731 nova_compute[232433]: 2025-12-06 07:06:28.688 232437 DEBUG oslo_concurrency.processutils [None req-611617ba-ed67-4fee-9144-9b356b074a70 c9025bd3f0854dff80d9408800d6b76b 503b2dfdce9d47598a8b9de4b15e1d45 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:06:28 np0005548731 nova_compute[232433]: 2025-12-06 07:06:28.715 232437 DEBUG nova.network.neutron [req-4c3e7a5b-f373-41c3-9d19-eabe8f5bb604 req-6950b853-a476-4f0d-a272-8b78b28f321b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 58618778-1470-4993-b50d-a24f19c41ed3] Updated VIF entry in instance network info cache for port 123afe9f-34c7-4df6-bb33-a09d55405a44. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  6 02:06:28 np0005548731 nova_compute[232433]: 2025-12-06 07:06:28.716 232437 DEBUG nova.network.neutron [req-4c3e7a5b-f373-41c3-9d19-eabe8f5bb604 req-6950b853-a476-4f0d-a272-8b78b28f321b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 58618778-1470-4993-b50d-a24f19c41ed3] Updating instance_info_cache with network_info: [{"id": "123afe9f-34c7-4df6-bb33-a09d55405a44", "address": "fa:16:3e:ea:b3:94", "network": {"id": "f63e3ad4-d91f-42aa-b6cb-baa14900e53e", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationNegativeTestJSON-625876326-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.194", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0bb2a00a1f30495ca45c1cb0c482b67c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap123afe9f-34", "ovs_interfaceid": "123afe9f-34c7-4df6-bb33-a09d55405a44", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 02:06:28 np0005548731 nova_compute[232433]: 2025-12-06 07:06:28.719 232437 DEBUG nova.policy [None req-611617ba-ed67-4fee-9144-9b356b074a70 c9025bd3f0854dff80d9408800d6b76b 503b2dfdce9d47598a8b9de4b15e1d45 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'c9025bd3f0854dff80d9408800d6b76b', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '503b2dfdce9d47598a8b9de4b15e1d45', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Dec  6 02:06:28 np0005548731 nova_compute[232433]: 2025-12-06 07:06:28.736 232437 DEBUG oslo_concurrency.lockutils [req-4c3e7a5b-f373-41c3-9d19-eabe8f5bb604 req-6950b853-a476-4f0d-a272-8b78b28f321b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-58618778-1470-4993-b50d-a24f19c41ed3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 02:06:28 np0005548731 nova_compute[232433]: 2025-12-06 07:06:28.751 232437 DEBUG oslo_concurrency.processutils [None req-611617ba-ed67-4fee-9144-9b356b074a70 c9025bd3f0854dff80d9408800d6b76b 503b2dfdce9d47598a8b9de4b15e1d45 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:06:28 np0005548731 nova_compute[232433]: 2025-12-06 07:06:28.751 232437 DEBUG oslo_concurrency.lockutils [None req-611617ba-ed67-4fee-9144-9b356b074a70 c9025bd3f0854dff80d9408800d6b76b 503b2dfdce9d47598a8b9de4b15e1d45 - - default default] Acquiring lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:06:28 np0005548731 nova_compute[232433]: 2025-12-06 07:06:28.752 232437 DEBUG oslo_concurrency.lockutils [None req-611617ba-ed67-4fee-9144-9b356b074a70 c9025bd3f0854dff80d9408800d6b76b 503b2dfdce9d47598a8b9de4b15e1d45 - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:06:28 np0005548731 nova_compute[232433]: 2025-12-06 07:06:28.752 232437 DEBUG oslo_concurrency.lockutils [None req-611617ba-ed67-4fee-9144-9b356b074a70 c9025bd3f0854dff80d9408800d6b76b 503b2dfdce9d47598a8b9de4b15e1d45 - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:06:28 np0005548731 nova_compute[232433]: 2025-12-06 07:06:28.774 232437 DEBUG nova.storage.rbd_utils [None req-611617ba-ed67-4fee-9144-9b356b074a70 c9025bd3f0854dff80d9408800d6b76b 503b2dfdce9d47598a8b9de4b15e1d45 - - default default] rbd image 43acf76c-4b6b-4e85-9f2a-607bd18f7906_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:06:28 np0005548731 nova_compute[232433]: 2025-12-06 07:06:28.777 232437 DEBUG oslo_concurrency.processutils [None req-611617ba-ed67-4fee-9144-9b356b074a70 c9025bd3f0854dff80d9408800d6b76b 503b2dfdce9d47598a8b9de4b15e1d45 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef 43acf76c-4b6b-4e85-9f2a-607bd18f7906_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:06:29 np0005548731 nova_compute[232433]: 2025-12-06 07:06:29.039 232437 DEBUG oslo_concurrency.processutils [None req-611617ba-ed67-4fee-9144-9b356b074a70 c9025bd3f0854dff80d9408800d6b76b 503b2dfdce9d47598a8b9de4b15e1d45 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef 43acf76c-4b6b-4e85-9f2a-607bd18f7906_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.262s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:06:29 np0005548731 nova_compute[232433]: 2025-12-06 07:06:29.131 232437 DEBUG nova.storage.rbd_utils [None req-611617ba-ed67-4fee-9144-9b356b074a70 c9025bd3f0854dff80d9408800d6b76b 503b2dfdce9d47598a8b9de4b15e1d45 - - default default] resizing rbd image 43acf76c-4b6b-4e85-9f2a-607bd18f7906_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Dec  6 02:06:29 np0005548731 nova_compute[232433]: 2025-12-06 07:06:29.245 232437 DEBUG nova.objects.instance [None req-611617ba-ed67-4fee-9144-9b356b074a70 c9025bd3f0854dff80d9408800d6b76b 503b2dfdce9d47598a8b9de4b15e1d45 - - default default] Lazy-loading 'migration_context' on Instance uuid 43acf76c-4b6b-4e85-9f2a-607bd18f7906 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:06:29 np0005548731 nova_compute[232433]: 2025-12-06 07:06:29.259 232437 DEBUG nova.virt.libvirt.driver [None req-611617ba-ed67-4fee-9144-9b356b074a70 c9025bd3f0854dff80d9408800d6b76b 503b2dfdce9d47598a8b9de4b15e1d45 - - default default] [instance: 43acf76c-4b6b-4e85-9f2a-607bd18f7906] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Dec  6 02:06:29 np0005548731 nova_compute[232433]: 2025-12-06 07:06:29.259 232437 DEBUG nova.virt.libvirt.driver [None req-611617ba-ed67-4fee-9144-9b356b074a70 c9025bd3f0854dff80d9408800d6b76b 503b2dfdce9d47598a8b9de4b15e1d45 - - default default] [instance: 43acf76c-4b6b-4e85-9f2a-607bd18f7906] Ensure instance console log exists: /var/lib/nova/instances/43acf76c-4b6b-4e85-9f2a-607bd18f7906/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec  6 02:06:29 np0005548731 nova_compute[232433]: 2025-12-06 07:06:29.260 232437 DEBUG oslo_concurrency.lockutils [None req-611617ba-ed67-4fee-9144-9b356b074a70 c9025bd3f0854dff80d9408800d6b76b 503b2dfdce9d47598a8b9de4b15e1d45 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:06:29 np0005548731 nova_compute[232433]: 2025-12-06 07:06:29.260 232437 DEBUG oslo_concurrency.lockutils [None req-611617ba-ed67-4fee-9144-9b356b074a70 c9025bd3f0854dff80d9408800d6b76b 503b2dfdce9d47598a8b9de4b15e1d45 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:06:29 np0005548731 nova_compute[232433]: 2025-12-06 07:06:29.260 232437 DEBUG oslo_concurrency.lockutils [None req-611617ba-ed67-4fee-9144-9b356b074a70 c9025bd3f0854dff80d9408800d6b76b 503b2dfdce9d47598a8b9de4b15e1d45 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:06:29 np0005548731 nova_compute[232433]: 2025-12-06 07:06:29.289 232437 DEBUG nova.network.neutron [None req-611617ba-ed67-4fee-9144-9b356b074a70 c9025bd3f0854dff80d9408800d6b76b 503b2dfdce9d47598a8b9de4b15e1d45 - - default default] [instance: 43acf76c-4b6b-4e85-9f2a-607bd18f7906] Successfully created port: 46e3c9e1-9d7d-48aa-9603-46bfbfe6b2a8 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Dec  6 02:06:29 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:06:29 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:06:29 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:06:29.887 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:06:29 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:06:29 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:06:29 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:06:29.924 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:06:30 np0005548731 nova_compute[232433]: 2025-12-06 07:06:30.069 232437 DEBUG nova.network.neutron [None req-611617ba-ed67-4fee-9144-9b356b074a70 c9025bd3f0854dff80d9408800d6b76b 503b2dfdce9d47598a8b9de4b15e1d45 - - default default] [instance: 43acf76c-4b6b-4e85-9f2a-607bd18f7906] Successfully updated port: 46e3c9e1-9d7d-48aa-9603-46bfbfe6b2a8 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Dec  6 02:06:30 np0005548731 nova_compute[232433]: 2025-12-06 07:06:30.091 232437 DEBUG oslo_concurrency.lockutils [None req-611617ba-ed67-4fee-9144-9b356b074a70 c9025bd3f0854dff80d9408800d6b76b 503b2dfdce9d47598a8b9de4b15e1d45 - - default default] Acquiring lock "refresh_cache-43acf76c-4b6b-4e85-9f2a-607bd18f7906" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 02:06:30 np0005548731 nova_compute[232433]: 2025-12-06 07:06:30.091 232437 DEBUG oslo_concurrency.lockutils [None req-611617ba-ed67-4fee-9144-9b356b074a70 c9025bd3f0854dff80d9408800d6b76b 503b2dfdce9d47598a8b9de4b15e1d45 - - default default] Acquired lock "refresh_cache-43acf76c-4b6b-4e85-9f2a-607bd18f7906" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 02:06:30 np0005548731 nova_compute[232433]: 2025-12-06 07:06:30.091 232437 DEBUG nova.network.neutron [None req-611617ba-ed67-4fee-9144-9b356b074a70 c9025bd3f0854dff80d9408800d6b76b 503b2dfdce9d47598a8b9de4b15e1d45 - - default default] [instance: 43acf76c-4b6b-4e85-9f2a-607bd18f7906] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec  6 02:06:30 np0005548731 nova_compute[232433]: 2025-12-06 07:06:30.154 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:06:30 np0005548731 nova_compute[232433]: 2025-12-06 07:06:30.186 232437 DEBUG nova.compute.manager [req-b5b2d07a-64ea-4726-adf7-7dd1d5a47391 req-efd99a15-8ff3-49d3-99b3-03c7e447b6df 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 43acf76c-4b6b-4e85-9f2a-607bd18f7906] Received event network-changed-46e3c9e1-9d7d-48aa-9603-46bfbfe6b2a8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:06:30 np0005548731 nova_compute[232433]: 2025-12-06 07:06:30.187 232437 DEBUG nova.compute.manager [req-b5b2d07a-64ea-4726-adf7-7dd1d5a47391 req-efd99a15-8ff3-49d3-99b3-03c7e447b6df 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 43acf76c-4b6b-4e85-9f2a-607bd18f7906] Refreshing instance network info cache due to event network-changed-46e3c9e1-9d7d-48aa-9603-46bfbfe6b2a8. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  6 02:06:30 np0005548731 nova_compute[232433]: 2025-12-06 07:06:30.187 232437 DEBUG oslo_concurrency.lockutils [req-b5b2d07a-64ea-4726-adf7-7dd1d5a47391 req-efd99a15-8ff3-49d3-99b3-03c7e447b6df 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-43acf76c-4b6b-4e85-9f2a-607bd18f7906" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 02:06:30 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e181 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:06:30 np0005548731 nova_compute[232433]: 2025-12-06 07:06:30.332 232437 DEBUG nova.network.neutron [None req-611617ba-ed67-4fee-9144-9b356b074a70 c9025bd3f0854dff80d9408800d6b76b 503b2dfdce9d47598a8b9de4b15e1d45 - - default default] [instance: 43acf76c-4b6b-4e85-9f2a-607bd18f7906] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec  6 02:06:31 np0005548731 nova_compute[232433]: 2025-12-06 07:06:31.503 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:06:31 np0005548731 nova_compute[232433]: 2025-12-06 07:06:31.864 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:06:31 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:06:31 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000025s ======
Dec  6 02:06:31 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:06:31.889 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Dec  6 02:06:31 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:06:31 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:06:31 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:06:31.927 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:06:31 np0005548731 nova_compute[232433]: 2025-12-06 07:06:31.961 232437 DEBUG nova.network.neutron [None req-611617ba-ed67-4fee-9144-9b356b074a70 c9025bd3f0854dff80d9408800d6b76b 503b2dfdce9d47598a8b9de4b15e1d45 - - default default] [instance: 43acf76c-4b6b-4e85-9f2a-607bd18f7906] Updating instance_info_cache with network_info: [{"id": "46e3c9e1-9d7d-48aa-9603-46bfbfe6b2a8", "address": "fa:16:3e:c3:1f:9a", "network": {"id": "344a2c5d-4516-4e02-9384-4797cfc76497", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-1967587837-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "503b2dfdce9d47598a8b9de4b15e1d45", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap46e3c9e1-9d", "ovs_interfaceid": "46e3c9e1-9d7d-48aa-9603-46bfbfe6b2a8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 02:06:31 np0005548731 nova_compute[232433]: 2025-12-06 07:06:31.991 232437 DEBUG oslo_concurrency.lockutils [None req-611617ba-ed67-4fee-9144-9b356b074a70 c9025bd3f0854dff80d9408800d6b76b 503b2dfdce9d47598a8b9de4b15e1d45 - - default default] Releasing lock "refresh_cache-43acf76c-4b6b-4e85-9f2a-607bd18f7906" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 02:06:31 np0005548731 nova_compute[232433]: 2025-12-06 07:06:31.992 232437 DEBUG nova.compute.manager [None req-611617ba-ed67-4fee-9144-9b356b074a70 c9025bd3f0854dff80d9408800d6b76b 503b2dfdce9d47598a8b9de4b15e1d45 - - default default] [instance: 43acf76c-4b6b-4e85-9f2a-607bd18f7906] Instance network_info: |[{"id": "46e3c9e1-9d7d-48aa-9603-46bfbfe6b2a8", "address": "fa:16:3e:c3:1f:9a", "network": {"id": "344a2c5d-4516-4e02-9384-4797cfc76497", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-1967587837-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "503b2dfdce9d47598a8b9de4b15e1d45", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap46e3c9e1-9d", "ovs_interfaceid": "46e3c9e1-9d7d-48aa-9603-46bfbfe6b2a8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Dec  6 02:06:31 np0005548731 nova_compute[232433]: 2025-12-06 07:06:31.993 232437 DEBUG oslo_concurrency.lockutils [req-b5b2d07a-64ea-4726-adf7-7dd1d5a47391 req-efd99a15-8ff3-49d3-99b3-03c7e447b6df 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-43acf76c-4b6b-4e85-9f2a-607bd18f7906" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 02:06:31 np0005548731 nova_compute[232433]: 2025-12-06 07:06:31.993 232437 DEBUG nova.network.neutron [req-b5b2d07a-64ea-4726-adf7-7dd1d5a47391 req-efd99a15-8ff3-49d3-99b3-03c7e447b6df 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 43acf76c-4b6b-4e85-9f2a-607bd18f7906] Refreshing network info cache for port 46e3c9e1-9d7d-48aa-9603-46bfbfe6b2a8 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  6 02:06:31 np0005548731 nova_compute[232433]: 2025-12-06 07:06:31.996 232437 DEBUG nova.virt.libvirt.driver [None req-611617ba-ed67-4fee-9144-9b356b074a70 c9025bd3f0854dff80d9408800d6b76b 503b2dfdce9d47598a8b9de4b15e1d45 - - default default] [instance: 43acf76c-4b6b-4e85-9f2a-607bd18f7906] Start _get_guest_xml network_info=[{"id": "46e3c9e1-9d7d-48aa-9603-46bfbfe6b2a8", "address": "fa:16:3e:c3:1f:9a", "network": {"id": "344a2c5d-4516-4e02-9384-4797cfc76497", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-1967587837-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "503b2dfdce9d47598a8b9de4b15e1d45", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap46e3c9e1-9d", "ovs_interfaceid": "46e3c9e1-9d7d-48aa-9603-46bfbfe6b2a8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'device_type': 'disk', 'encryption_secret_uuid': None, 'disk_bus': 'virtio', 'boot_index': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'encryption_options': None, 'size': 0, 'encrypted': False, 'image_id': '6efab05d-c7cf-4770-a5c3-c806a2739063'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec  6 02:06:32 np0005548731 nova_compute[232433]: 2025-12-06 07:06:32.000 232437 WARNING nova.virt.libvirt.driver [None req-611617ba-ed67-4fee-9144-9b356b074a70 c9025bd3f0854dff80d9408800d6b76b 503b2dfdce9d47598a8b9de4b15e1d45 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  6 02:06:32 np0005548731 nova_compute[232433]: 2025-12-06 07:06:32.005 232437 DEBUG nova.virt.libvirt.host [None req-611617ba-ed67-4fee-9144-9b356b074a70 c9025bd3f0854dff80d9408800d6b76b 503b2dfdce9d47598a8b9de4b15e1d45 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec  6 02:06:32 np0005548731 nova_compute[232433]: 2025-12-06 07:06:32.006 232437 DEBUG nova.virt.libvirt.host [None req-611617ba-ed67-4fee-9144-9b356b074a70 c9025bd3f0854dff80d9408800d6b76b 503b2dfdce9d47598a8b9de4b15e1d45 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec  6 02:06:32 np0005548731 nova_compute[232433]: 2025-12-06 07:06:32.011 232437 DEBUG nova.virt.libvirt.host [None req-611617ba-ed67-4fee-9144-9b356b074a70 c9025bd3f0854dff80d9408800d6b76b 503b2dfdce9d47598a8b9de4b15e1d45 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec  6 02:06:32 np0005548731 nova_compute[232433]: 2025-12-06 07:06:32.012 232437 DEBUG nova.virt.libvirt.host [None req-611617ba-ed67-4fee-9144-9b356b074a70 c9025bd3f0854dff80d9408800d6b76b 503b2dfdce9d47598a8b9de4b15e1d45 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec  6 02:06:32 np0005548731 nova_compute[232433]: 2025-12-06 07:06:32.013 232437 DEBUG nova.virt.libvirt.driver [None req-611617ba-ed67-4fee-9144-9b356b074a70 c9025bd3f0854dff80d9408800d6b76b 503b2dfdce9d47598a8b9de4b15e1d45 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec  6 02:06:32 np0005548731 nova_compute[232433]: 2025-12-06 07:06:32.014 232437 DEBUG nova.virt.hardware [None req-611617ba-ed67-4fee-9144-9b356b074a70 c9025bd3f0854dff80d9408800d6b76b 503b2dfdce9d47598a8b9de4b15e1d45 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-06T06:56:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='25848a18-11d9-4f11-80b5-5d005675c76d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec  6 02:06:32 np0005548731 nova_compute[232433]: 2025-12-06 07:06:32.014 232437 DEBUG nova.virt.hardware [None req-611617ba-ed67-4fee-9144-9b356b074a70 c9025bd3f0854dff80d9408800d6b76b 503b2dfdce9d47598a8b9de4b15e1d45 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec  6 02:06:32 np0005548731 nova_compute[232433]: 2025-12-06 07:06:32.015 232437 DEBUG nova.virt.hardware [None req-611617ba-ed67-4fee-9144-9b356b074a70 c9025bd3f0854dff80d9408800d6b76b 503b2dfdce9d47598a8b9de4b15e1d45 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec  6 02:06:32 np0005548731 nova_compute[232433]: 2025-12-06 07:06:32.015 232437 DEBUG nova.virt.hardware [None req-611617ba-ed67-4fee-9144-9b356b074a70 c9025bd3f0854dff80d9408800d6b76b 503b2dfdce9d47598a8b9de4b15e1d45 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec  6 02:06:32 np0005548731 nova_compute[232433]: 2025-12-06 07:06:32.015 232437 DEBUG nova.virt.hardware [None req-611617ba-ed67-4fee-9144-9b356b074a70 c9025bd3f0854dff80d9408800d6b76b 503b2dfdce9d47598a8b9de4b15e1d45 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec  6 02:06:32 np0005548731 nova_compute[232433]: 2025-12-06 07:06:32.016 232437 DEBUG nova.virt.hardware [None req-611617ba-ed67-4fee-9144-9b356b074a70 c9025bd3f0854dff80d9408800d6b76b 503b2dfdce9d47598a8b9de4b15e1d45 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec  6 02:06:32 np0005548731 nova_compute[232433]: 2025-12-06 07:06:32.016 232437 DEBUG nova.virt.hardware [None req-611617ba-ed67-4fee-9144-9b356b074a70 c9025bd3f0854dff80d9408800d6b76b 503b2dfdce9d47598a8b9de4b15e1d45 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec  6 02:06:32 np0005548731 nova_compute[232433]: 2025-12-06 07:06:32.016 232437 DEBUG nova.virt.hardware [None req-611617ba-ed67-4fee-9144-9b356b074a70 c9025bd3f0854dff80d9408800d6b76b 503b2dfdce9d47598a8b9de4b15e1d45 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec  6 02:06:32 np0005548731 nova_compute[232433]: 2025-12-06 07:06:32.017 232437 DEBUG nova.virt.hardware [None req-611617ba-ed67-4fee-9144-9b356b074a70 c9025bd3f0854dff80d9408800d6b76b 503b2dfdce9d47598a8b9de4b15e1d45 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec  6 02:06:32 np0005548731 nova_compute[232433]: 2025-12-06 07:06:32.017 232437 DEBUG nova.virt.hardware [None req-611617ba-ed67-4fee-9144-9b356b074a70 c9025bd3f0854dff80d9408800d6b76b 503b2dfdce9d47598a8b9de4b15e1d45 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec  6 02:06:32 np0005548731 nova_compute[232433]: 2025-12-06 07:06:32.017 232437 DEBUG nova.virt.hardware [None req-611617ba-ed67-4fee-9144-9b356b074a70 c9025bd3f0854dff80d9408800d6b76b 503b2dfdce9d47598a8b9de4b15e1d45 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec  6 02:06:32 np0005548731 nova_compute[232433]: 2025-12-06 07:06:32.020 232437 DEBUG oslo_concurrency.processutils [None req-611617ba-ed67-4fee-9144-9b356b074a70 c9025bd3f0854dff80d9408800d6b76b 503b2dfdce9d47598a8b9de4b15e1d45 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:06:32 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  6 02:06:32 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2455388943' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  6 02:06:32 np0005548731 nova_compute[232433]: 2025-12-06 07:06:32.604 232437 DEBUG oslo_concurrency.processutils [None req-611617ba-ed67-4fee-9144-9b356b074a70 c9025bd3f0854dff80d9408800d6b76b 503b2dfdce9d47598a8b9de4b15e1d45 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.583s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:06:32 np0005548731 nova_compute[232433]: 2025-12-06 07:06:32.635 232437 DEBUG nova.storage.rbd_utils [None req-611617ba-ed67-4fee-9144-9b356b074a70 c9025bd3f0854dff80d9408800d6b76b 503b2dfdce9d47598a8b9de4b15e1d45 - - default default] rbd image 43acf76c-4b6b-4e85-9f2a-607bd18f7906_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:06:32 np0005548731 nova_compute[232433]: 2025-12-06 07:06:32.644 232437 DEBUG oslo_concurrency.processutils [None req-611617ba-ed67-4fee-9144-9b356b074a70 c9025bd3f0854dff80d9408800d6b76b 503b2dfdce9d47598a8b9de4b15e1d45 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:06:33 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  6 02:06:33 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1436394406' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  6 02:06:33 np0005548731 nova_compute[232433]: 2025-12-06 07:06:33.138 232437 DEBUG oslo_concurrency.processutils [None req-611617ba-ed67-4fee-9144-9b356b074a70 c9025bd3f0854dff80d9408800d6b76b 503b2dfdce9d47598a8b9de4b15e1d45 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.494s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:06:33 np0005548731 nova_compute[232433]: 2025-12-06 07:06:33.140 232437 DEBUG nova.virt.libvirt.vif [None req-611617ba-ed67-4fee-9144-9b356b074a70 c9025bd3f0854dff80d9408800d6b76b 503b2dfdce9d47598a8b9de4b15e1d45 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T07:06:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-FloatingIPsAssociationTestJSON-server-386327972',display_name='tempest-FloatingIPsAssociationTestJSON-server-386327972',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-floatingipsassociationtestjson-server-386327972',id=37,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='503b2dfdce9d47598a8b9de4b15e1d45',ramdisk_id='',reservation_id='r-ocfaists',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-FloatingIPsAssociationTestJSON-263213844',owner_user_name='tempest-FloatingIPsAssociationTestJSON-263213844-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:06:28Z,user_data=None,user_id='c9025bd3f0854dff80d9408800d6b76b',uuid=43acf76c-4b6b-4e85-9f2a-607bd18f7906,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "46e3c9e1-9d7d-48aa-9603-46bfbfe6b2a8", "address": "fa:16:3e:c3:1f:9a", "network": {"id": "344a2c5d-4516-4e02-9384-4797cfc76497", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-1967587837-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "503b2dfdce9d47598a8b9de4b15e1d45", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap46e3c9e1-9d", "ovs_interfaceid": "46e3c9e1-9d7d-48aa-9603-46bfbfe6b2a8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec  6 02:06:33 np0005548731 nova_compute[232433]: 2025-12-06 07:06:33.140 232437 DEBUG nova.network.os_vif_util [None req-611617ba-ed67-4fee-9144-9b356b074a70 c9025bd3f0854dff80d9408800d6b76b 503b2dfdce9d47598a8b9de4b15e1d45 - - default default] Converting VIF {"id": "46e3c9e1-9d7d-48aa-9603-46bfbfe6b2a8", "address": "fa:16:3e:c3:1f:9a", "network": {"id": "344a2c5d-4516-4e02-9384-4797cfc76497", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-1967587837-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "503b2dfdce9d47598a8b9de4b15e1d45", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap46e3c9e1-9d", "ovs_interfaceid": "46e3c9e1-9d7d-48aa-9603-46bfbfe6b2a8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 02:06:33 np0005548731 nova_compute[232433]: 2025-12-06 07:06:33.142 232437 DEBUG nova.network.os_vif_util [None req-611617ba-ed67-4fee-9144-9b356b074a70 c9025bd3f0854dff80d9408800d6b76b 503b2dfdce9d47598a8b9de4b15e1d45 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c3:1f:9a,bridge_name='br-int',has_traffic_filtering=True,id=46e3c9e1-9d7d-48aa-9603-46bfbfe6b2a8,network=Network(344a2c5d-4516-4e02-9384-4797cfc76497),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap46e3c9e1-9d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 02:06:33 np0005548731 nova_compute[232433]: 2025-12-06 07:06:33.143 232437 DEBUG nova.objects.instance [None req-611617ba-ed67-4fee-9144-9b356b074a70 c9025bd3f0854dff80d9408800d6b76b 503b2dfdce9d47598a8b9de4b15e1d45 - - default default] Lazy-loading 'pci_devices' on Instance uuid 43acf76c-4b6b-4e85-9f2a-607bd18f7906 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:06:33 np0005548731 nova_compute[232433]: 2025-12-06 07:06:33.162 232437 DEBUG nova.virt.libvirt.driver [None req-611617ba-ed67-4fee-9144-9b356b074a70 c9025bd3f0854dff80d9408800d6b76b 503b2dfdce9d47598a8b9de4b15e1d45 - - default default] [instance: 43acf76c-4b6b-4e85-9f2a-607bd18f7906] End _get_guest_xml xml=<domain type="kvm">
Dec  6 02:06:33 np0005548731 nova_compute[232433]:  <uuid>43acf76c-4b6b-4e85-9f2a-607bd18f7906</uuid>
Dec  6 02:06:33 np0005548731 nova_compute[232433]:  <name>instance-00000025</name>
Dec  6 02:06:33 np0005548731 nova_compute[232433]:  <memory>131072</memory>
Dec  6 02:06:33 np0005548731 nova_compute[232433]:  <vcpu>1</vcpu>
Dec  6 02:06:33 np0005548731 nova_compute[232433]:  <metadata>
Dec  6 02:06:33 np0005548731 nova_compute[232433]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec  6 02:06:33 np0005548731 nova_compute[232433]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec  6 02:06:33 np0005548731 nova_compute[232433]:      <nova:name>tempest-FloatingIPsAssociationTestJSON-server-386327972</nova:name>
Dec  6 02:06:33 np0005548731 nova_compute[232433]:      <nova:creationTime>2025-12-06 07:06:32</nova:creationTime>
Dec  6 02:06:33 np0005548731 nova_compute[232433]:      <nova:flavor name="m1.nano">
Dec  6 02:06:33 np0005548731 nova_compute[232433]:        <nova:memory>128</nova:memory>
Dec  6 02:06:33 np0005548731 nova_compute[232433]:        <nova:disk>1</nova:disk>
Dec  6 02:06:33 np0005548731 nova_compute[232433]:        <nova:swap>0</nova:swap>
Dec  6 02:06:33 np0005548731 nova_compute[232433]:        <nova:ephemeral>0</nova:ephemeral>
Dec  6 02:06:33 np0005548731 nova_compute[232433]:        <nova:vcpus>1</nova:vcpus>
Dec  6 02:06:33 np0005548731 nova_compute[232433]:      </nova:flavor>
Dec  6 02:06:33 np0005548731 nova_compute[232433]:      <nova:owner>
Dec  6 02:06:33 np0005548731 nova_compute[232433]:        <nova:user uuid="c9025bd3f0854dff80d9408800d6b76b">tempest-FloatingIPsAssociationTestJSON-263213844-project-member</nova:user>
Dec  6 02:06:33 np0005548731 nova_compute[232433]:        <nova:project uuid="503b2dfdce9d47598a8b9de4b15e1d45">tempest-FloatingIPsAssociationTestJSON-263213844</nova:project>
Dec  6 02:06:33 np0005548731 nova_compute[232433]:      </nova:owner>
Dec  6 02:06:33 np0005548731 nova_compute[232433]:      <nova:root type="image" uuid="6efab05d-c7cf-4770-a5c3-c806a2739063"/>
Dec  6 02:06:33 np0005548731 nova_compute[232433]:      <nova:ports>
Dec  6 02:06:33 np0005548731 nova_compute[232433]:        <nova:port uuid="46e3c9e1-9d7d-48aa-9603-46bfbfe6b2a8">
Dec  6 02:06:33 np0005548731 nova_compute[232433]:          <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Dec  6 02:06:33 np0005548731 nova_compute[232433]:        </nova:port>
Dec  6 02:06:33 np0005548731 nova_compute[232433]:      </nova:ports>
Dec  6 02:06:33 np0005548731 nova_compute[232433]:    </nova:instance>
Dec  6 02:06:33 np0005548731 nova_compute[232433]:  </metadata>
Dec  6 02:06:33 np0005548731 nova_compute[232433]:  <sysinfo type="smbios">
Dec  6 02:06:33 np0005548731 nova_compute[232433]:    <system>
Dec  6 02:06:33 np0005548731 nova_compute[232433]:      <entry name="manufacturer">RDO</entry>
Dec  6 02:06:33 np0005548731 nova_compute[232433]:      <entry name="product">OpenStack Compute</entry>
Dec  6 02:06:33 np0005548731 nova_compute[232433]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec  6 02:06:33 np0005548731 nova_compute[232433]:      <entry name="serial">43acf76c-4b6b-4e85-9f2a-607bd18f7906</entry>
Dec  6 02:06:33 np0005548731 nova_compute[232433]:      <entry name="uuid">43acf76c-4b6b-4e85-9f2a-607bd18f7906</entry>
Dec  6 02:06:33 np0005548731 nova_compute[232433]:      <entry name="family">Virtual Machine</entry>
Dec  6 02:06:33 np0005548731 nova_compute[232433]:    </system>
Dec  6 02:06:33 np0005548731 nova_compute[232433]:  </sysinfo>
Dec  6 02:06:33 np0005548731 nova_compute[232433]:  <os>
Dec  6 02:06:33 np0005548731 nova_compute[232433]:    <type arch="x86_64" machine="q35">hvm</type>
Dec  6 02:06:33 np0005548731 nova_compute[232433]:    <boot dev="hd"/>
Dec  6 02:06:33 np0005548731 nova_compute[232433]:    <smbios mode="sysinfo"/>
Dec  6 02:06:33 np0005548731 nova_compute[232433]:  </os>
Dec  6 02:06:33 np0005548731 nova_compute[232433]:  <features>
Dec  6 02:06:33 np0005548731 nova_compute[232433]:    <acpi/>
Dec  6 02:06:33 np0005548731 nova_compute[232433]:    <apic/>
Dec  6 02:06:33 np0005548731 nova_compute[232433]:    <vmcoreinfo/>
Dec  6 02:06:33 np0005548731 nova_compute[232433]:  </features>
Dec  6 02:06:33 np0005548731 nova_compute[232433]:  <clock offset="utc">
Dec  6 02:06:33 np0005548731 nova_compute[232433]:    <timer name="pit" tickpolicy="delay"/>
Dec  6 02:06:33 np0005548731 nova_compute[232433]:    <timer name="rtc" tickpolicy="catchup"/>
Dec  6 02:06:33 np0005548731 nova_compute[232433]:    <timer name="hpet" present="no"/>
Dec  6 02:06:33 np0005548731 nova_compute[232433]:  </clock>
Dec  6 02:06:33 np0005548731 nova_compute[232433]:  <cpu mode="custom" match="exact">
Dec  6 02:06:33 np0005548731 nova_compute[232433]:    <model>Nehalem</model>
Dec  6 02:06:33 np0005548731 nova_compute[232433]:    <topology sockets="1" cores="1" threads="1"/>
Dec  6 02:06:33 np0005548731 nova_compute[232433]:  </cpu>
Dec  6 02:06:33 np0005548731 nova_compute[232433]:  <devices>
Dec  6 02:06:33 np0005548731 nova_compute[232433]:    <disk type="network" device="disk">
Dec  6 02:06:33 np0005548731 nova_compute[232433]:      <driver type="raw" cache="none"/>
Dec  6 02:06:33 np0005548731 nova_compute[232433]:      <source protocol="rbd" name="vms/43acf76c-4b6b-4e85-9f2a-607bd18f7906_disk">
Dec  6 02:06:33 np0005548731 nova_compute[232433]:        <host name="192.168.122.100" port="6789"/>
Dec  6 02:06:33 np0005548731 nova_compute[232433]:        <host name="192.168.122.102" port="6789"/>
Dec  6 02:06:33 np0005548731 nova_compute[232433]:        <host name="192.168.122.101" port="6789"/>
Dec  6 02:06:33 np0005548731 nova_compute[232433]:      </source>
Dec  6 02:06:33 np0005548731 nova_compute[232433]:      <auth username="openstack">
Dec  6 02:06:33 np0005548731 nova_compute[232433]:        <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec  6 02:06:33 np0005548731 nova_compute[232433]:      </auth>
Dec  6 02:06:33 np0005548731 nova_compute[232433]:      <target dev="vda" bus="virtio"/>
Dec  6 02:06:33 np0005548731 nova_compute[232433]:    </disk>
Dec  6 02:06:33 np0005548731 nova_compute[232433]:    <disk type="network" device="cdrom">
Dec  6 02:06:33 np0005548731 nova_compute[232433]:      <driver type="raw" cache="none"/>
Dec  6 02:06:33 np0005548731 nova_compute[232433]:      <source protocol="rbd" name="vms/43acf76c-4b6b-4e85-9f2a-607bd18f7906_disk.config">
Dec  6 02:06:33 np0005548731 nova_compute[232433]:        <host name="192.168.122.100" port="6789"/>
Dec  6 02:06:33 np0005548731 nova_compute[232433]:        <host name="192.168.122.102" port="6789"/>
Dec  6 02:06:33 np0005548731 nova_compute[232433]:        <host name="192.168.122.101" port="6789"/>
Dec  6 02:06:33 np0005548731 nova_compute[232433]:      </source>
Dec  6 02:06:33 np0005548731 nova_compute[232433]:      <auth username="openstack">
Dec  6 02:06:33 np0005548731 nova_compute[232433]:        <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec  6 02:06:33 np0005548731 nova_compute[232433]:      </auth>
Dec  6 02:06:33 np0005548731 nova_compute[232433]:      <target dev="sda" bus="sata"/>
Dec  6 02:06:33 np0005548731 nova_compute[232433]:    </disk>
Dec  6 02:06:33 np0005548731 nova_compute[232433]:    <interface type="ethernet">
Dec  6 02:06:33 np0005548731 nova_compute[232433]:      <mac address="fa:16:3e:c3:1f:9a"/>
Dec  6 02:06:33 np0005548731 nova_compute[232433]:      <model type="virtio"/>
Dec  6 02:06:33 np0005548731 nova_compute[232433]:      <driver name="vhost" rx_queue_size="512"/>
Dec  6 02:06:33 np0005548731 nova_compute[232433]:      <mtu size="1442"/>
Dec  6 02:06:33 np0005548731 nova_compute[232433]:      <target dev="tap46e3c9e1-9d"/>
Dec  6 02:06:33 np0005548731 nova_compute[232433]:    </interface>
Dec  6 02:06:33 np0005548731 nova_compute[232433]:    <serial type="pty">
Dec  6 02:06:33 np0005548731 nova_compute[232433]:      <log file="/var/lib/nova/instances/43acf76c-4b6b-4e85-9f2a-607bd18f7906/console.log" append="off"/>
Dec  6 02:06:33 np0005548731 nova_compute[232433]:    </serial>
Dec  6 02:06:33 np0005548731 nova_compute[232433]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec  6 02:06:33 np0005548731 nova_compute[232433]:    <video>
Dec  6 02:06:33 np0005548731 nova_compute[232433]:      <model type="virtio"/>
Dec  6 02:06:33 np0005548731 nova_compute[232433]:    </video>
Dec  6 02:06:33 np0005548731 nova_compute[232433]:    <input type="tablet" bus="usb"/>
Dec  6 02:06:33 np0005548731 nova_compute[232433]:    <rng model="virtio">
Dec  6 02:06:33 np0005548731 nova_compute[232433]:      <backend model="random">/dev/urandom</backend>
Dec  6 02:06:33 np0005548731 nova_compute[232433]:    </rng>
Dec  6 02:06:33 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root"/>
Dec  6 02:06:33 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:06:33 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:06:33 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:06:33 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:06:33 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:06:33 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:06:33 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:06:33 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:06:33 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:06:33 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:06:33 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:06:33 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:06:33 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:06:33 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:06:33 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:06:33 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:06:33 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:06:33 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:06:33 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:06:33 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:06:33 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:06:33 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:06:33 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:06:33 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:06:33 np0005548731 nova_compute[232433]:    <controller type="usb" index="0"/>
Dec  6 02:06:33 np0005548731 nova_compute[232433]:    <memballoon model="virtio">
Dec  6 02:06:33 np0005548731 nova_compute[232433]:      <stats period="10"/>
Dec  6 02:06:33 np0005548731 nova_compute[232433]:    </memballoon>
Dec  6 02:06:33 np0005548731 nova_compute[232433]:  </devices>
Dec  6 02:06:33 np0005548731 nova_compute[232433]: </domain>
Dec  6 02:06:33 np0005548731 nova_compute[232433]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec  6 02:06:33 np0005548731 nova_compute[232433]: 2025-12-06 07:06:33.164 232437 DEBUG nova.compute.manager [None req-611617ba-ed67-4fee-9144-9b356b074a70 c9025bd3f0854dff80d9408800d6b76b 503b2dfdce9d47598a8b9de4b15e1d45 - - default default] [instance: 43acf76c-4b6b-4e85-9f2a-607bd18f7906] Preparing to wait for external event network-vif-plugged-46e3c9e1-9d7d-48aa-9603-46bfbfe6b2a8 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec  6 02:06:33 np0005548731 nova_compute[232433]: 2025-12-06 07:06:33.164 232437 DEBUG oslo_concurrency.lockutils [None req-611617ba-ed67-4fee-9144-9b356b074a70 c9025bd3f0854dff80d9408800d6b76b 503b2dfdce9d47598a8b9de4b15e1d45 - - default default] Acquiring lock "43acf76c-4b6b-4e85-9f2a-607bd18f7906-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:06:33 np0005548731 nova_compute[232433]: 2025-12-06 07:06:33.165 232437 DEBUG oslo_concurrency.lockutils [None req-611617ba-ed67-4fee-9144-9b356b074a70 c9025bd3f0854dff80d9408800d6b76b 503b2dfdce9d47598a8b9de4b15e1d45 - - default default] Lock "43acf76c-4b6b-4e85-9f2a-607bd18f7906-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:06:33 np0005548731 nova_compute[232433]: 2025-12-06 07:06:33.165 232437 DEBUG oslo_concurrency.lockutils [None req-611617ba-ed67-4fee-9144-9b356b074a70 c9025bd3f0854dff80d9408800d6b76b 503b2dfdce9d47598a8b9de4b15e1d45 - - default default] Lock "43acf76c-4b6b-4e85-9f2a-607bd18f7906-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:06:33 np0005548731 nova_compute[232433]: 2025-12-06 07:06:33.166 232437 DEBUG nova.virt.libvirt.vif [None req-611617ba-ed67-4fee-9144-9b356b074a70 c9025bd3f0854dff80d9408800d6b76b 503b2dfdce9d47598a8b9de4b15e1d45 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T07:06:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-FloatingIPsAssociationTestJSON-server-386327972',display_name='tempest-FloatingIPsAssociationTestJSON-server-386327972',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-floatingipsassociationtestjson-server-386327972',id=37,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='503b2dfdce9d47598a8b9de4b15e1d45',ramdisk_id='',reservation_id='r-ocfaists',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-FloatingIPsAssociationTestJSON-263213844',owner_user_name='tempest-FloatingIPsAssociationTestJSON-263213844-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:06:28Z,user_data=None,user_id='c9025bd3f0854dff80d9408800d6b76b',uuid=43acf76c-4b6b-4e85-9f2a-607bd18f7906,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "46e3c9e1-9d7d-48aa-9603-46bfbfe6b2a8", "address": "fa:16:3e:c3:1f:9a", "network": {"id": "344a2c5d-4516-4e02-9384-4797cfc76497", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-1967587837-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "503b2dfdce9d47598a8b9de4b15e1d45", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap46e3c9e1-9d", "ovs_interfaceid": "46e3c9e1-9d7d-48aa-9603-46bfbfe6b2a8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec  6 02:06:33 np0005548731 nova_compute[232433]: 2025-12-06 07:06:33.166 232437 DEBUG nova.network.os_vif_util [None req-611617ba-ed67-4fee-9144-9b356b074a70 c9025bd3f0854dff80d9408800d6b76b 503b2dfdce9d47598a8b9de4b15e1d45 - - default default] Converting VIF {"id": "46e3c9e1-9d7d-48aa-9603-46bfbfe6b2a8", "address": "fa:16:3e:c3:1f:9a", "network": {"id": "344a2c5d-4516-4e02-9384-4797cfc76497", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-1967587837-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "503b2dfdce9d47598a8b9de4b15e1d45", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap46e3c9e1-9d", "ovs_interfaceid": "46e3c9e1-9d7d-48aa-9603-46bfbfe6b2a8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 02:06:33 np0005548731 nova_compute[232433]: 2025-12-06 07:06:33.166 232437 DEBUG nova.network.os_vif_util [None req-611617ba-ed67-4fee-9144-9b356b074a70 c9025bd3f0854dff80d9408800d6b76b 503b2dfdce9d47598a8b9de4b15e1d45 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c3:1f:9a,bridge_name='br-int',has_traffic_filtering=True,id=46e3c9e1-9d7d-48aa-9603-46bfbfe6b2a8,network=Network(344a2c5d-4516-4e02-9384-4797cfc76497),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap46e3c9e1-9d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 02:06:33 np0005548731 nova_compute[232433]: 2025-12-06 07:06:33.167 232437 DEBUG os_vif [None req-611617ba-ed67-4fee-9144-9b356b074a70 c9025bd3f0854dff80d9408800d6b76b 503b2dfdce9d47598a8b9de4b15e1d45 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c3:1f:9a,bridge_name='br-int',has_traffic_filtering=True,id=46e3c9e1-9d7d-48aa-9603-46bfbfe6b2a8,network=Network(344a2c5d-4516-4e02-9384-4797cfc76497),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap46e3c9e1-9d') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec  6 02:06:33 np0005548731 nova_compute[232433]: 2025-12-06 07:06:33.167 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:06:33 np0005548731 nova_compute[232433]: 2025-12-06 07:06:33.168 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:06:33 np0005548731 nova_compute[232433]: 2025-12-06 07:06:33.168 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  6 02:06:33 np0005548731 nova_compute[232433]: 2025-12-06 07:06:33.171 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:06:33 np0005548731 nova_compute[232433]: 2025-12-06 07:06:33.171 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap46e3c9e1-9d, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:06:33 np0005548731 nova_compute[232433]: 2025-12-06 07:06:33.171 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap46e3c9e1-9d, col_values=(('external_ids', {'iface-id': '46e3c9e1-9d7d-48aa-9603-46bfbfe6b2a8', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:c3:1f:9a', 'vm-uuid': '43acf76c-4b6b-4e85-9f2a-607bd18f7906'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:06:33 np0005548731 nova_compute[232433]: 2025-12-06 07:06:33.173 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:06:33 np0005548731 NetworkManager[49182]: <info>  [1765004793.1751] manager: (tap46e3c9e1-9d): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/77)
Dec  6 02:06:33 np0005548731 nova_compute[232433]: 2025-12-06 07:06:33.176 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  6 02:06:33 np0005548731 nova_compute[232433]: 2025-12-06 07:06:33.182 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:06:33 np0005548731 nova_compute[232433]: 2025-12-06 07:06:33.183 232437 INFO os_vif [None req-611617ba-ed67-4fee-9144-9b356b074a70 c9025bd3f0854dff80d9408800d6b76b 503b2dfdce9d47598a8b9de4b15e1d45 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c3:1f:9a,bridge_name='br-int',has_traffic_filtering=True,id=46e3c9e1-9d7d-48aa-9603-46bfbfe6b2a8,network=Network(344a2c5d-4516-4e02-9384-4797cfc76497),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap46e3c9e1-9d')#033[00m
Dec  6 02:06:33 np0005548731 nova_compute[232433]: 2025-12-06 07:06:33.249 232437 DEBUG nova.virt.libvirt.driver [None req-611617ba-ed67-4fee-9144-9b356b074a70 c9025bd3f0854dff80d9408800d6b76b 503b2dfdce9d47598a8b9de4b15e1d45 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  6 02:06:33 np0005548731 nova_compute[232433]: 2025-12-06 07:06:33.249 232437 DEBUG nova.virt.libvirt.driver [None req-611617ba-ed67-4fee-9144-9b356b074a70 c9025bd3f0854dff80d9408800d6b76b 503b2dfdce9d47598a8b9de4b15e1d45 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  6 02:06:33 np0005548731 nova_compute[232433]: 2025-12-06 07:06:33.249 232437 DEBUG nova.virt.libvirt.driver [None req-611617ba-ed67-4fee-9144-9b356b074a70 c9025bd3f0854dff80d9408800d6b76b 503b2dfdce9d47598a8b9de4b15e1d45 - - default default] No VIF found with MAC fa:16:3e:c3:1f:9a, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec  6 02:06:33 np0005548731 nova_compute[232433]: 2025-12-06 07:06:33.250 232437 INFO nova.virt.libvirt.driver [None req-611617ba-ed67-4fee-9144-9b356b074a70 c9025bd3f0854dff80d9408800d6b76b 503b2dfdce9d47598a8b9de4b15e1d45 - - default default] [instance: 43acf76c-4b6b-4e85-9f2a-607bd18f7906] Using config drive#033[00m
Dec  6 02:06:33 np0005548731 nova_compute[232433]: 2025-12-06 07:06:33.279 232437 DEBUG nova.storage.rbd_utils [None req-611617ba-ed67-4fee-9144-9b356b074a70 c9025bd3f0854dff80d9408800d6b76b 503b2dfdce9d47598a8b9de4b15e1d45 - - default default] rbd image 43acf76c-4b6b-4e85-9f2a-607bd18f7906_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:06:33 np0005548731 nova_compute[232433]: 2025-12-06 07:06:33.428 232437 DEBUG nova.network.neutron [req-b5b2d07a-64ea-4726-adf7-7dd1d5a47391 req-efd99a15-8ff3-49d3-99b3-03c7e447b6df 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 43acf76c-4b6b-4e85-9f2a-607bd18f7906] Updated VIF entry in instance network info cache for port 46e3c9e1-9d7d-48aa-9603-46bfbfe6b2a8. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  6 02:06:33 np0005548731 nova_compute[232433]: 2025-12-06 07:06:33.429 232437 DEBUG nova.network.neutron [req-b5b2d07a-64ea-4726-adf7-7dd1d5a47391 req-efd99a15-8ff3-49d3-99b3-03c7e447b6df 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 43acf76c-4b6b-4e85-9f2a-607bd18f7906] Updating instance_info_cache with network_info: [{"id": "46e3c9e1-9d7d-48aa-9603-46bfbfe6b2a8", "address": "fa:16:3e:c3:1f:9a", "network": {"id": "344a2c5d-4516-4e02-9384-4797cfc76497", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-1967587837-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "503b2dfdce9d47598a8b9de4b15e1d45", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap46e3c9e1-9d", "ovs_interfaceid": "46e3c9e1-9d7d-48aa-9603-46bfbfe6b2a8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 02:06:33 np0005548731 nova_compute[232433]: 2025-12-06 07:06:33.453 232437 DEBUG oslo_concurrency.lockutils [req-b5b2d07a-64ea-4726-adf7-7dd1d5a47391 req-efd99a15-8ff3-49d3-99b3-03c7e447b6df 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-43acf76c-4b6b-4e85-9f2a-607bd18f7906" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 02:06:33 np0005548731 nova_compute[232433]: 2025-12-06 07:06:33.532 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:06:33 np0005548731 nova_compute[232433]: 2025-12-06 07:06:33.680 232437 INFO nova.virt.libvirt.driver [None req-611617ba-ed67-4fee-9144-9b356b074a70 c9025bd3f0854dff80d9408800d6b76b 503b2dfdce9d47598a8b9de4b15e1d45 - - default default] [instance: 43acf76c-4b6b-4e85-9f2a-607bd18f7906] Creating config drive at /var/lib/nova/instances/43acf76c-4b6b-4e85-9f2a-607bd18f7906/disk.config#033[00m
Dec  6 02:06:33 np0005548731 nova_compute[232433]: 2025-12-06 07:06:33.685 232437 DEBUG oslo_concurrency.processutils [None req-611617ba-ed67-4fee-9144-9b356b074a70 c9025bd3f0854dff80d9408800d6b76b 503b2dfdce9d47598a8b9de4b15e1d45 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/43acf76c-4b6b-4e85-9f2a-607bd18f7906/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp3fxq_dg8 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:06:33 np0005548731 nova_compute[232433]: 2025-12-06 07:06:33.813 232437 DEBUG oslo_concurrency.processutils [None req-611617ba-ed67-4fee-9144-9b356b074a70 c9025bd3f0854dff80d9408800d6b76b 503b2dfdce9d47598a8b9de4b15e1d45 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/43acf76c-4b6b-4e85-9f2a-607bd18f7906/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp3fxq_dg8" returned: 0 in 0.127s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:06:33 np0005548731 nova_compute[232433]: 2025-12-06 07:06:33.846 232437 DEBUG nova.storage.rbd_utils [None req-611617ba-ed67-4fee-9144-9b356b074a70 c9025bd3f0854dff80d9408800d6b76b 503b2dfdce9d47598a8b9de4b15e1d45 - - default default] rbd image 43acf76c-4b6b-4e85-9f2a-607bd18f7906_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:06:33 np0005548731 nova_compute[232433]: 2025-12-06 07:06:33.852 232437 DEBUG oslo_concurrency.processutils [None req-611617ba-ed67-4fee-9144-9b356b074a70 c9025bd3f0854dff80d9408800d6b76b 503b2dfdce9d47598a8b9de4b15e1d45 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/43acf76c-4b6b-4e85-9f2a-607bd18f7906/disk.config 43acf76c-4b6b-4e85-9f2a-607bd18f7906_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:06:33 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:06:33 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:06:33 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:06:33.891 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:06:33 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:06:33 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:06:33 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:06:33.930 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:06:34 np0005548731 nova_compute[232433]: 2025-12-06 07:06:34.025 232437 DEBUG oslo_concurrency.processutils [None req-611617ba-ed67-4fee-9144-9b356b074a70 c9025bd3f0854dff80d9408800d6b76b 503b2dfdce9d47598a8b9de4b15e1d45 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/43acf76c-4b6b-4e85-9f2a-607bd18f7906/disk.config 43acf76c-4b6b-4e85-9f2a-607bd18f7906_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.173s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:06:34 np0005548731 nova_compute[232433]: 2025-12-06 07:06:34.026 232437 INFO nova.virt.libvirt.driver [None req-611617ba-ed67-4fee-9144-9b356b074a70 c9025bd3f0854dff80d9408800d6b76b 503b2dfdce9d47598a8b9de4b15e1d45 - - default default] [instance: 43acf76c-4b6b-4e85-9f2a-607bd18f7906] Deleting local config drive /var/lib/nova/instances/43acf76c-4b6b-4e85-9f2a-607bd18f7906/disk.config because it was imported into RBD.#033[00m
Dec  6 02:06:34 np0005548731 kernel: tap46e3c9e1-9d: entered promiscuous mode
Dec  6 02:06:34 np0005548731 nova_compute[232433]: 2025-12-06 07:06:34.085 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:06:34 np0005548731 ovn_controller[133927]: 2025-12-06T07:06:34Z|00125|binding|INFO|Claiming lport 46e3c9e1-9d7d-48aa-9603-46bfbfe6b2a8 for this chassis.
Dec  6 02:06:34 np0005548731 ovn_controller[133927]: 2025-12-06T07:06:34Z|00126|binding|INFO|46e3c9e1-9d7d-48aa-9603-46bfbfe6b2a8: Claiming fa:16:3e:c3:1f:9a 10.100.0.12
Dec  6 02:06:34 np0005548731 NetworkManager[49182]: <info>  [1765004794.0910] manager: (tap46e3c9e1-9d): new Tun device (/org/freedesktop/NetworkManager/Devices/78)
Dec  6 02:06:34 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:06:34.093 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c3:1f:9a 10.100.0.12'], port_security=['fa:16:3e:c3:1f:9a 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '43acf76c-4b6b-4e85-9f2a-607bd18f7906', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-344a2c5d-4516-4e02-9384-4797cfc76497', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '503b2dfdce9d47598a8b9de4b15e1d45', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'cd9b8be6-903c-4fbc-b82c-3bc27105f7c3', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9327b272-f30f-48a7-bcf4-62320d61cbdc, chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], logical_port=46e3c9e1-9d7d-48aa-9603-46bfbfe6b2a8) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 02:06:34 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:06:34.095 143965 INFO neutron.agent.ovn.metadata.agent [-] Port 46e3c9e1-9d7d-48aa-9603-46bfbfe6b2a8 in datapath 344a2c5d-4516-4e02-9384-4797cfc76497 bound to our chassis#033[00m
Dec  6 02:06:34 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:06:34.097 143965 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 344a2c5d-4516-4e02-9384-4797cfc76497#033[00m
Dec  6 02:06:34 np0005548731 ovn_controller[133927]: 2025-12-06T07:06:34Z|00127|binding|INFO|Setting lport 46e3c9e1-9d7d-48aa-9603-46bfbfe6b2a8 ovn-installed in OVS
Dec  6 02:06:34 np0005548731 ovn_controller[133927]: 2025-12-06T07:06:34Z|00128|binding|INFO|Setting lport 46e3c9e1-9d7d-48aa-9603-46bfbfe6b2a8 up in Southbound
Dec  6 02:06:34 np0005548731 nova_compute[232433]: 2025-12-06 07:06:34.107 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:06:34 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:06:34.110 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[0306f985-6ca3-47cf-a621-dce5a7e0a6eb]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:06:34 np0005548731 nova_compute[232433]: 2025-12-06 07:06:34.113 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:06:34 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:06:34.115 143965 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap344a2c5d-41 in ovnmeta-344a2c5d-4516-4e02-9384-4797cfc76497 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Dec  6 02:06:34 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:06:34.119 236668 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap344a2c5d-40 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Dec  6 02:06:34 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:06:34.120 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[81b76d28-c260-412e-b7e5-1488aa7081e3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:06:34 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:06:34.121 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[ff791b38-dc97-4fc3-8c98-a86676809f56]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:06:34 np0005548731 systemd-udevd[250329]: Network interface NamePolicy= disabled on kernel command line.
Dec  6 02:06:34 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:06:34.138 144079 DEBUG oslo.privsep.daemon [-] privsep: reply[0d9e12b1-72d1-425a-a1b3-8235c17242d0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:06:34 np0005548731 NetworkManager[49182]: <info>  [1765004794.1468] device (tap46e3c9e1-9d): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec  6 02:06:34 np0005548731 NetworkManager[49182]: <info>  [1765004794.1475] device (tap46e3c9e1-9d): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec  6 02:06:34 np0005548731 systemd-machined[195355]: New machine qemu-19-instance-00000025.
Dec  6 02:06:34 np0005548731 systemd[1]: Started Virtual Machine qemu-19-instance-00000025.
Dec  6 02:06:34 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:06:34.173 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[a53c401a-d95c-4219-88ed-fee3392d1654]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:06:34 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:06:34.203 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[30eeda05-9eda-4395-a88d-4a0d2af4d876]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:06:34 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:06:34.209 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[9df6b637-82dd-438f-98e6-0b56ceaf5d0c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:06:34 np0005548731 NetworkManager[49182]: <info>  [1765004794.2108] manager: (tap344a2c5d-40): new Veth device (/org/freedesktop/NetworkManager/Devices/79)
Dec  6 02:06:34 np0005548731 systemd-udevd[250332]: Network interface NamePolicy= disabled on kernel command line.
Dec  6 02:06:34 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:06:34.244 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[58b985ca-e286-47e8-b4ee-e4330bec640f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:06:34 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:06:34.248 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[d5715e11-2017-4a42-83fa-a52706412bbd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:06:34 np0005548731 NetworkManager[49182]: <info>  [1765004794.2730] device (tap344a2c5d-40): carrier: link connected
Dec  6 02:06:34 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:06:34.283 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[80d23876-eea5-4d90-b0e1-a813298f404f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:06:34 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:06:34.303 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[ae6a0f99-e09b-41ad-8259-6e0516d3b462]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap344a2c5d-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:fd:2c:87'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 45], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 505628, 'reachable_time': 33575, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 250362, 'error': None, 'target': 'ovnmeta-344a2c5d-4516-4e02-9384-4797cfc76497', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:06:34 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:06:34.317 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[0e726cae-6550-4f35-be1b-b07fe9ec72ed]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fefd:2c87'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 505628, 'tstamp': 505628}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 250363, 'error': None, 'target': 'ovnmeta-344a2c5d-4516-4e02-9384-4797cfc76497', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:06:34 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:06:34.336 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[d6556ea9-1a1a-4477-895f-cc6ac71ba896]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap344a2c5d-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:fd:2c:87'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 45], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 505628, 'reachable_time': 33575, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 250364, 'error': None, 'target': 'ovnmeta-344a2c5d-4516-4e02-9384-4797cfc76497', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:06:34 np0005548731 nova_compute[232433]: 2025-12-06 07:06:34.342 232437 DEBUG nova.compute.manager [req-962f7544-8ce0-4900-b9c2-03b9c181d43e req-cbc18356-77eb-4b0c-9e02-09e8fb72960c 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 43acf76c-4b6b-4e85-9f2a-607bd18f7906] Received event network-vif-plugged-46e3c9e1-9d7d-48aa-9603-46bfbfe6b2a8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:06:34 np0005548731 nova_compute[232433]: 2025-12-06 07:06:34.343 232437 DEBUG oslo_concurrency.lockutils [req-962f7544-8ce0-4900-b9c2-03b9c181d43e req-cbc18356-77eb-4b0c-9e02-09e8fb72960c 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "43acf76c-4b6b-4e85-9f2a-607bd18f7906-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:06:34 np0005548731 nova_compute[232433]: 2025-12-06 07:06:34.343 232437 DEBUG oslo_concurrency.lockutils [req-962f7544-8ce0-4900-b9c2-03b9c181d43e req-cbc18356-77eb-4b0c-9e02-09e8fb72960c 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "43acf76c-4b6b-4e85-9f2a-607bd18f7906-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:06:34 np0005548731 nova_compute[232433]: 2025-12-06 07:06:34.343 232437 DEBUG oslo_concurrency.lockutils [req-962f7544-8ce0-4900-b9c2-03b9c181d43e req-cbc18356-77eb-4b0c-9e02-09e8fb72960c 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "43acf76c-4b6b-4e85-9f2a-607bd18f7906-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:06:34 np0005548731 nova_compute[232433]: 2025-12-06 07:06:34.343 232437 DEBUG nova.compute.manager [req-962f7544-8ce0-4900-b9c2-03b9c181d43e req-cbc18356-77eb-4b0c-9e02-09e8fb72960c 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 43acf76c-4b6b-4e85-9f2a-607bd18f7906] Processing event network-vif-plugged-46e3c9e1-9d7d-48aa-9603-46bfbfe6b2a8 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Dec  6 02:06:34 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:06:34.365 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[4f2a317d-e43f-4516-8ea3-de4ecee4de7f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:06:34 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:06:34.430 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[bce304e4-4d94-4ba1-a060-0299bb0f227b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:06:34 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:06:34.432 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap344a2c5d-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:06:34 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:06:34.432 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  6 02:06:34 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:06:34.433 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap344a2c5d-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:06:34 np0005548731 nova_compute[232433]: 2025-12-06 07:06:34.434 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:06:34 np0005548731 kernel: tap344a2c5d-40: entered promiscuous mode
Dec  6 02:06:34 np0005548731 NetworkManager[49182]: <info>  [1765004794.4355] manager: (tap344a2c5d-40): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/80)
Dec  6 02:06:34 np0005548731 nova_compute[232433]: 2025-12-06 07:06:34.437 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:06:34 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:06:34.437 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap344a2c5d-40, col_values=(('external_ids', {'iface-id': '0a001355-8429-4726-b883-743f03fd3e79'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:06:34 np0005548731 nova_compute[232433]: 2025-12-06 07:06:34.438 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:06:34 np0005548731 ovn_controller[133927]: 2025-12-06T07:06:34Z|00129|binding|INFO|Releasing lport 0a001355-8429-4726-b883-743f03fd3e79 from this chassis (sb_readonly=0)
Dec  6 02:06:34 np0005548731 nova_compute[232433]: 2025-12-06 07:06:34.452 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:06:34 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:06:34.454 143965 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/344a2c5d-4516-4e02-9384-4797cfc76497.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/344a2c5d-4516-4e02-9384-4797cfc76497.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Dec  6 02:06:34 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:06:34.457 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[eedba5b2-4acc-4ee0-8737-c3ae110eee0a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:06:34 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:06:34.458 143965 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec  6 02:06:34 np0005548731 ovn_metadata_agent[143960]: global
Dec  6 02:06:34 np0005548731 ovn_metadata_agent[143960]:    log         /dev/log local0 debug
Dec  6 02:06:34 np0005548731 ovn_metadata_agent[143960]:    log-tag     haproxy-metadata-proxy-344a2c5d-4516-4e02-9384-4797cfc76497
Dec  6 02:06:34 np0005548731 ovn_metadata_agent[143960]:    user        root
Dec  6 02:06:34 np0005548731 ovn_metadata_agent[143960]:    group       root
Dec  6 02:06:34 np0005548731 ovn_metadata_agent[143960]:    maxconn     1024
Dec  6 02:06:34 np0005548731 ovn_metadata_agent[143960]:    pidfile     /var/lib/neutron/external/pids/344a2c5d-4516-4e02-9384-4797cfc76497.pid.haproxy
Dec  6 02:06:34 np0005548731 ovn_metadata_agent[143960]:    daemon
Dec  6 02:06:34 np0005548731 ovn_metadata_agent[143960]: 
Dec  6 02:06:34 np0005548731 ovn_metadata_agent[143960]: defaults
Dec  6 02:06:34 np0005548731 ovn_metadata_agent[143960]:    log global
Dec  6 02:06:34 np0005548731 ovn_metadata_agent[143960]:    mode http
Dec  6 02:06:34 np0005548731 ovn_metadata_agent[143960]:    option httplog
Dec  6 02:06:34 np0005548731 ovn_metadata_agent[143960]:    option dontlognull
Dec  6 02:06:34 np0005548731 ovn_metadata_agent[143960]:    option http-server-close
Dec  6 02:06:34 np0005548731 ovn_metadata_agent[143960]:    option forwardfor
Dec  6 02:06:34 np0005548731 ovn_metadata_agent[143960]:    retries                 3
Dec  6 02:06:34 np0005548731 ovn_metadata_agent[143960]:    timeout http-request    30s
Dec  6 02:06:34 np0005548731 ovn_metadata_agent[143960]:    timeout connect         30s
Dec  6 02:06:34 np0005548731 ovn_metadata_agent[143960]:    timeout client          32s
Dec  6 02:06:34 np0005548731 ovn_metadata_agent[143960]:    timeout server          32s
Dec  6 02:06:34 np0005548731 ovn_metadata_agent[143960]:    timeout http-keep-alive 30s
Dec  6 02:06:34 np0005548731 ovn_metadata_agent[143960]: 
Dec  6 02:06:34 np0005548731 ovn_metadata_agent[143960]: 
Dec  6 02:06:34 np0005548731 ovn_metadata_agent[143960]: listen listener
Dec  6 02:06:34 np0005548731 ovn_metadata_agent[143960]:    bind 169.254.169.254:80
Dec  6 02:06:34 np0005548731 ovn_metadata_agent[143960]:    server metadata /var/lib/neutron/metadata_proxy
Dec  6 02:06:34 np0005548731 ovn_metadata_agent[143960]:    http-request add-header X-OVN-Network-ID 344a2c5d-4516-4e02-9384-4797cfc76497
Dec  6 02:06:34 np0005548731 ovn_metadata_agent[143960]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Dec  6 02:06:34 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:06:34.460 143965 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-344a2c5d-4516-4e02-9384-4797cfc76497', 'env', 'PROCESS_TAG=haproxy-344a2c5d-4516-4e02-9384-4797cfc76497', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/344a2c5d-4516-4e02-9384-4797cfc76497.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Dec  6 02:06:34 np0005548731 nova_compute[232433]: 2025-12-06 07:06:34.775 232437 DEBUG nova.compute.manager [None req-611617ba-ed67-4fee-9144-9b356b074a70 c9025bd3f0854dff80d9408800d6b76b 503b2dfdce9d47598a8b9de4b15e1d45 - - default default] [instance: 43acf76c-4b6b-4e85-9f2a-607bd18f7906] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec  6 02:06:34 np0005548731 nova_compute[232433]: 2025-12-06 07:06:34.776 232437 DEBUG nova.virt.driver [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Emitting event <LifecycleEvent: 1765004794.7741048, 43acf76c-4b6b-4e85-9f2a-607bd18f7906 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 02:06:34 np0005548731 nova_compute[232433]: 2025-12-06 07:06:34.776 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 43acf76c-4b6b-4e85-9f2a-607bd18f7906] VM Started (Lifecycle Event)#033[00m
Dec  6 02:06:34 np0005548731 nova_compute[232433]: 2025-12-06 07:06:34.784 232437 DEBUG nova.virt.libvirt.driver [None req-611617ba-ed67-4fee-9144-9b356b074a70 c9025bd3f0854dff80d9408800d6b76b 503b2dfdce9d47598a8b9de4b15e1d45 - - default default] [instance: 43acf76c-4b6b-4e85-9f2a-607bd18f7906] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Dec  6 02:06:34 np0005548731 nova_compute[232433]: 2025-12-06 07:06:34.788 232437 INFO nova.virt.libvirt.driver [-] [instance: 43acf76c-4b6b-4e85-9f2a-607bd18f7906] Instance spawned successfully.#033[00m
Dec  6 02:06:34 np0005548731 nova_compute[232433]: 2025-12-06 07:06:34.788 232437 DEBUG nova.virt.libvirt.driver [None req-611617ba-ed67-4fee-9144-9b356b074a70 c9025bd3f0854dff80d9408800d6b76b 503b2dfdce9d47598a8b9de4b15e1d45 - - default default] [instance: 43acf76c-4b6b-4e85-9f2a-607bd18f7906] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Dec  6 02:06:34 np0005548731 nova_compute[232433]: 2025-12-06 07:06:34.796 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 43acf76c-4b6b-4e85-9f2a-607bd18f7906] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:06:34 np0005548731 nova_compute[232433]: 2025-12-06 07:06:34.800 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 43acf76c-4b6b-4e85-9f2a-607bd18f7906] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  6 02:06:34 np0005548731 nova_compute[232433]: 2025-12-06 07:06:34.810 232437 DEBUG nova.virt.libvirt.driver [None req-611617ba-ed67-4fee-9144-9b356b074a70 c9025bd3f0854dff80d9408800d6b76b 503b2dfdce9d47598a8b9de4b15e1d45 - - default default] [instance: 43acf76c-4b6b-4e85-9f2a-607bd18f7906] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 02:06:34 np0005548731 nova_compute[232433]: 2025-12-06 07:06:34.810 232437 DEBUG nova.virt.libvirt.driver [None req-611617ba-ed67-4fee-9144-9b356b074a70 c9025bd3f0854dff80d9408800d6b76b 503b2dfdce9d47598a8b9de4b15e1d45 - - default default] [instance: 43acf76c-4b6b-4e85-9f2a-607bd18f7906] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 02:06:34 np0005548731 nova_compute[232433]: 2025-12-06 07:06:34.811 232437 DEBUG nova.virt.libvirt.driver [None req-611617ba-ed67-4fee-9144-9b356b074a70 c9025bd3f0854dff80d9408800d6b76b 503b2dfdce9d47598a8b9de4b15e1d45 - - default default] [instance: 43acf76c-4b6b-4e85-9f2a-607bd18f7906] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 02:06:34 np0005548731 nova_compute[232433]: 2025-12-06 07:06:34.811 232437 DEBUG nova.virt.libvirt.driver [None req-611617ba-ed67-4fee-9144-9b356b074a70 c9025bd3f0854dff80d9408800d6b76b 503b2dfdce9d47598a8b9de4b15e1d45 - - default default] [instance: 43acf76c-4b6b-4e85-9f2a-607bd18f7906] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 02:06:34 np0005548731 nova_compute[232433]: 2025-12-06 07:06:34.812 232437 DEBUG nova.virt.libvirt.driver [None req-611617ba-ed67-4fee-9144-9b356b074a70 c9025bd3f0854dff80d9408800d6b76b 503b2dfdce9d47598a8b9de4b15e1d45 - - default default] [instance: 43acf76c-4b6b-4e85-9f2a-607bd18f7906] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 02:06:34 np0005548731 nova_compute[232433]: 2025-12-06 07:06:34.812 232437 DEBUG nova.virt.libvirt.driver [None req-611617ba-ed67-4fee-9144-9b356b074a70 c9025bd3f0854dff80d9408800d6b76b 503b2dfdce9d47598a8b9de4b15e1d45 - - default default] [instance: 43acf76c-4b6b-4e85-9f2a-607bd18f7906] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 02:06:34 np0005548731 nova_compute[232433]: 2025-12-06 07:06:34.817 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 43acf76c-4b6b-4e85-9f2a-607bd18f7906] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  6 02:06:34 np0005548731 nova_compute[232433]: 2025-12-06 07:06:34.817 232437 DEBUG nova.virt.driver [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Emitting event <LifecycleEvent: 1765004794.774605, 43acf76c-4b6b-4e85-9f2a-607bd18f7906 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 02:06:34 np0005548731 nova_compute[232433]: 2025-12-06 07:06:34.818 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 43acf76c-4b6b-4e85-9f2a-607bd18f7906] VM Paused (Lifecycle Event)#033[00m
Dec  6 02:06:34 np0005548731 podman[250436]: 2025-12-06 07:06:34.819957976 +0000 UTC m=+0.050565567 container create f89fc0c453cac15fc96657e99da47768b01e3597b38329da91fef5b6b74355e0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-344a2c5d-4516-4e02-9384-4797cfc76497, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Dec  6 02:06:34 np0005548731 nova_compute[232433]: 2025-12-06 07:06:34.849 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 43acf76c-4b6b-4e85-9f2a-607bd18f7906] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:06:34 np0005548731 systemd[1]: Started libpod-conmon-f89fc0c453cac15fc96657e99da47768b01e3597b38329da91fef5b6b74355e0.scope.
Dec  6 02:06:34 np0005548731 nova_compute[232433]: 2025-12-06 07:06:34.854 232437 DEBUG nova.virt.driver [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Emitting event <LifecycleEvent: 1765004794.7828405, 43acf76c-4b6b-4e85-9f2a-607bd18f7906 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 02:06:34 np0005548731 nova_compute[232433]: 2025-12-06 07:06:34.854 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 43acf76c-4b6b-4e85-9f2a-607bd18f7906] VM Resumed (Lifecycle Event)#033[00m
Dec  6 02:06:34 np0005548731 systemd[1]: Started libcrun container.
Dec  6 02:06:34 np0005548731 nova_compute[232433]: 2025-12-06 07:06:34.885 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 43acf76c-4b6b-4e85-9f2a-607bd18f7906] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:06:34 np0005548731 podman[250436]: 2025-12-06 07:06:34.792520197 +0000 UTC m=+0.023127828 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec  6 02:06:34 np0005548731 nova_compute[232433]: 2025-12-06 07:06:34.889 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 43acf76c-4b6b-4e85-9f2a-607bd18f7906] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  6 02:06:34 np0005548731 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3319244f0375c034cfe602eca227eb0fd63d1a65c0a36070546be19678681177/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec  6 02:06:34 np0005548731 nova_compute[232433]: 2025-12-06 07:06:34.908 232437 INFO nova.compute.manager [None req-611617ba-ed67-4fee-9144-9b356b074a70 c9025bd3f0854dff80d9408800d6b76b 503b2dfdce9d47598a8b9de4b15e1d45 - - default default] [instance: 43acf76c-4b6b-4e85-9f2a-607bd18f7906] Took 6.32 seconds to spawn the instance on the hypervisor.#033[00m
Dec  6 02:06:34 np0005548731 nova_compute[232433]: 2025-12-06 07:06:34.909 232437 DEBUG nova.compute.manager [None req-611617ba-ed67-4fee-9144-9b356b074a70 c9025bd3f0854dff80d9408800d6b76b 503b2dfdce9d47598a8b9de4b15e1d45 - - default default] [instance: 43acf76c-4b6b-4e85-9f2a-607bd18f7906] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:06:34 np0005548731 podman[250436]: 2025-12-06 07:06:34.916460639 +0000 UTC m=+0.147068230 container init f89fc0c453cac15fc96657e99da47768b01e3597b38329da91fef5b6b74355e0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-344a2c5d-4516-4e02-9384-4797cfc76497, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2)
Dec  6 02:06:34 np0005548731 podman[250436]: 2025-12-06 07:06:34.923112741 +0000 UTC m=+0.153720332 container start f89fc0c453cac15fc96657e99da47768b01e3597b38329da91fef5b6b74355e0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-344a2c5d-4516-4e02-9384-4797cfc76497, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  6 02:06:34 np0005548731 nova_compute[232433]: 2025-12-06 07:06:34.940 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 43acf76c-4b6b-4e85-9f2a-607bd18f7906] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  6 02:06:34 np0005548731 neutron-haproxy-ovnmeta-344a2c5d-4516-4e02-9384-4797cfc76497[250449]: [NOTICE]   (250453) : New worker (250455) forked
Dec  6 02:06:34 np0005548731 neutron-haproxy-ovnmeta-344a2c5d-4516-4e02-9384-4797cfc76497[250449]: [NOTICE]   (250453) : Loading success.
Dec  6 02:06:34 np0005548731 nova_compute[232433]: 2025-12-06 07:06:34.973 232437 INFO nova.compute.manager [None req-611617ba-ed67-4fee-9144-9b356b074a70 c9025bd3f0854dff80d9408800d6b76b 503b2dfdce9d47598a8b9de4b15e1d45 - - default default] [instance: 43acf76c-4b6b-4e85-9f2a-607bd18f7906] Took 7.31 seconds to build instance.#033[00m
Dec  6 02:06:34 np0005548731 nova_compute[232433]: 2025-12-06 07:06:34.990 232437 DEBUG oslo_concurrency.lockutils [None req-611617ba-ed67-4fee-9144-9b356b074a70 c9025bd3f0854dff80d9408800d6b76b 503b2dfdce9d47598a8b9de4b15e1d45 - - default default] Lock "43acf76c-4b6b-4e85-9f2a-607bd18f7906" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.405s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:06:35 np0005548731 nova_compute[232433]: 2025-12-06 07:06:35.117 232437 DEBUG nova.compute.manager [req-a61d9784-f59d-46e4-bbb0-fee1515ded35 req-8eb64f90-ebc0-401a-b161-be389c79a787 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 58618778-1470-4993-b50d-a24f19c41ed3] Received event network-changed-123afe9f-34c7-4df6-bb33-a09d55405a44 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:06:35 np0005548731 nova_compute[232433]: 2025-12-06 07:06:35.117 232437 DEBUG nova.compute.manager [req-a61d9784-f59d-46e4-bbb0-fee1515ded35 req-8eb64f90-ebc0-401a-b161-be389c79a787 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 58618778-1470-4993-b50d-a24f19c41ed3] Refreshing instance network info cache due to event network-changed-123afe9f-34c7-4df6-bb33-a09d55405a44. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  6 02:06:35 np0005548731 nova_compute[232433]: 2025-12-06 07:06:35.117 232437 DEBUG oslo_concurrency.lockutils [req-a61d9784-f59d-46e4-bbb0-fee1515ded35 req-8eb64f90-ebc0-401a-b161-be389c79a787 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-58618778-1470-4993-b50d-a24f19c41ed3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 02:06:35 np0005548731 nova_compute[232433]: 2025-12-06 07:06:35.118 232437 DEBUG oslo_concurrency.lockutils [req-a61d9784-f59d-46e4-bbb0-fee1515ded35 req-8eb64f90-ebc0-401a-b161-be389c79a787 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-58618778-1470-4993-b50d-a24f19c41ed3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 02:06:35 np0005548731 nova_compute[232433]: 2025-12-06 07:06:35.118 232437 DEBUG nova.network.neutron [req-a61d9784-f59d-46e4-bbb0-fee1515ded35 req-8eb64f90-ebc0-401a-b161-be389c79a787 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 58618778-1470-4993-b50d-a24f19c41ed3] Refreshing network info cache for port 123afe9f-34c7-4df6-bb33-a09d55405a44 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  6 02:06:35 np0005548731 ovn_controller[133927]: 2025-12-06T07:06:35Z|00016|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:ea:b3:94 10.100.0.9
Dec  6 02:06:35 np0005548731 ovn_controller[133927]: 2025-12-06T07:06:35Z|00017|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:ea:b3:94 10.100.0.9
Dec  6 02:06:35 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e181 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:06:35 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:06:35 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000025s ======
Dec  6 02:06:35 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:06:35.894 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Dec  6 02:06:35 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:06:35 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:06:35 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:06:35.935 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:06:36 np0005548731 nova_compute[232433]: 2025-12-06 07:06:36.226 232437 DEBUG nova.network.neutron [req-a61d9784-f59d-46e4-bbb0-fee1515ded35 req-8eb64f90-ebc0-401a-b161-be389c79a787 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 58618778-1470-4993-b50d-a24f19c41ed3] Updated VIF entry in instance network info cache for port 123afe9f-34c7-4df6-bb33-a09d55405a44. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  6 02:06:36 np0005548731 nova_compute[232433]: 2025-12-06 07:06:36.227 232437 DEBUG nova.network.neutron [req-a61d9784-f59d-46e4-bbb0-fee1515ded35 req-8eb64f90-ebc0-401a-b161-be389c79a787 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 58618778-1470-4993-b50d-a24f19c41ed3] Updating instance_info_cache with network_info: [{"id": "123afe9f-34c7-4df6-bb33-a09d55405a44", "address": "fa:16:3e:ea:b3:94", "network": {"id": "f63e3ad4-d91f-42aa-b6cb-baa14900e53e", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationNegativeTestJSON-625876326-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0bb2a00a1f30495ca45c1cb0c482b67c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap123afe9f-34", "ovs_interfaceid": "123afe9f-34c7-4df6-bb33-a09d55405a44", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 02:06:36 np0005548731 nova_compute[232433]: 2025-12-06 07:06:36.251 232437 DEBUG oslo_concurrency.lockutils [req-a61d9784-f59d-46e4-bbb0-fee1515ded35 req-8eb64f90-ebc0-401a-b161-be389c79a787 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-58618778-1470-4993-b50d-a24f19c41ed3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 02:06:36 np0005548731 nova_compute[232433]: 2025-12-06 07:06:36.459 232437 DEBUG nova.compute.manager [req-797242a5-d638-40e9-bc47-e324197e0d64 req-5acd4582-6e3f-4ac0-b29d-f63b2c2ace06 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 43acf76c-4b6b-4e85-9f2a-607bd18f7906] Received event network-vif-plugged-46e3c9e1-9d7d-48aa-9603-46bfbfe6b2a8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:06:36 np0005548731 nova_compute[232433]: 2025-12-06 07:06:36.460 232437 DEBUG oslo_concurrency.lockutils [req-797242a5-d638-40e9-bc47-e324197e0d64 req-5acd4582-6e3f-4ac0-b29d-f63b2c2ace06 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "43acf76c-4b6b-4e85-9f2a-607bd18f7906-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:06:36 np0005548731 nova_compute[232433]: 2025-12-06 07:06:36.460 232437 DEBUG oslo_concurrency.lockutils [req-797242a5-d638-40e9-bc47-e324197e0d64 req-5acd4582-6e3f-4ac0-b29d-f63b2c2ace06 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "43acf76c-4b6b-4e85-9f2a-607bd18f7906-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:06:36 np0005548731 nova_compute[232433]: 2025-12-06 07:06:36.460 232437 DEBUG oslo_concurrency.lockutils [req-797242a5-d638-40e9-bc47-e324197e0d64 req-5acd4582-6e3f-4ac0-b29d-f63b2c2ace06 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "43acf76c-4b6b-4e85-9f2a-607bd18f7906-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:06:36 np0005548731 nova_compute[232433]: 2025-12-06 07:06:36.460 232437 DEBUG nova.compute.manager [req-797242a5-d638-40e9-bc47-e324197e0d64 req-5acd4582-6e3f-4ac0-b29d-f63b2c2ace06 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 43acf76c-4b6b-4e85-9f2a-607bd18f7906] No waiting events found dispatching network-vif-plugged-46e3c9e1-9d7d-48aa-9603-46bfbfe6b2a8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 02:06:36 np0005548731 nova_compute[232433]: 2025-12-06 07:06:36.461 232437 WARNING nova.compute.manager [req-797242a5-d638-40e9-bc47-e324197e0d64 req-5acd4582-6e3f-4ac0-b29d-f63b2c2ace06 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 43acf76c-4b6b-4e85-9f2a-607bd18f7906] Received unexpected event network-vif-plugged-46e3c9e1-9d7d-48aa-9603-46bfbfe6b2a8 for instance with vm_state active and task_state None.#033[00m
Dec  6 02:06:36 np0005548731 nova_compute[232433]: 2025-12-06 07:06:36.506 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:06:37 np0005548731 podman[250491]: 2025-12-06 07:06:37.298419695 +0000 UTC m=+0.062207648 container health_status ae4fd08cdec31d6ae2a6b91ffff7deb6ca5a8375cb52ee4b685cc6f25796824e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=multipathd, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Dec  6 02:06:37 np0005548731 podman[250489]: 2025-12-06 07:06:37.320984637 +0000 UTC m=+0.091180466 container health_status 26c2ce9bdb83c6db5ccad7f5b2a7cb4e9354ab0235ad2f942ea42c8feda2142b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, 
org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3)
Dec  6 02:06:37 np0005548731 podman[250490]: 2025-12-06 07:06:37.341472017 +0000 UTC m=+0.103247698 container health_status a118c3becf56cfbcae208065771ebf10a65162bb7b96389971293e534823fb78 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, container_name=ovn_controller, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Dec  6 02:06:37 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:06:37 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:06:37 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:06:37.896 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:06:37 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:06:37 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:06:37 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:06:37.938 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:06:38 np0005548731 nova_compute[232433]: 2025-12-06 07:06:38.174 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:06:38 np0005548731 nova_compute[232433]: 2025-12-06 07:06:38.525 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:06:39 np0005548731 nova_compute[232433]: 2025-12-06 07:06:39.442 232437 DEBUG oslo_concurrency.lockutils [None req-86a2d933-a6eb-4a8f-aa86-f181b7d1dc29 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] Acquiring lock "001c930d-1294-433f-b9df-38beb3123844" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:06:39 np0005548731 nova_compute[232433]: 2025-12-06 07:06:39.442 232437 DEBUG oslo_concurrency.lockutils [None req-86a2d933-a6eb-4a8f-aa86-f181b7d1dc29 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] Lock "001c930d-1294-433f-b9df-38beb3123844" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:06:39 np0005548731 nova_compute[232433]: 2025-12-06 07:06:39.485 232437 DEBUG nova.compute.manager [None req-86a2d933-a6eb-4a8f-aa86-f181b7d1dc29 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] [instance: 001c930d-1294-433f-b9df-38beb3123844] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Dec  6 02:06:39 np0005548731 nova_compute[232433]: 2025-12-06 07:06:39.609 232437 DEBUG oslo_concurrency.lockutils [None req-86a2d933-a6eb-4a8f-aa86-f181b7d1dc29 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:06:39 np0005548731 nova_compute[232433]: 2025-12-06 07:06:39.610 232437 DEBUG oslo_concurrency.lockutils [None req-86a2d933-a6eb-4a8f-aa86-f181b7d1dc29 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:06:39 np0005548731 nova_compute[232433]: 2025-12-06 07:06:39.616 232437 DEBUG nova.virt.hardware [None req-86a2d933-a6eb-4a8f-aa86-f181b7d1dc29 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Dec  6 02:06:39 np0005548731 nova_compute[232433]: 2025-12-06 07:06:39.616 232437 INFO nova.compute.claims [None req-86a2d933-a6eb-4a8f-aa86-f181b7d1dc29 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] [instance: 001c930d-1294-433f-b9df-38beb3123844] Claim successful on node compute-2.ctlplane.example.com#033[00m
Dec  6 02:06:39 np0005548731 nova_compute[232433]: 2025-12-06 07:06:39.740 232437 DEBUG oslo_concurrency.processutils [None req-86a2d933-a6eb-4a8f-aa86-f181b7d1dc29 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:06:39 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:06:39 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000025s ======
Dec  6 02:06:39 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:06:39.898 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Dec  6 02:06:39 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:06:39 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:06:39 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:06:39.941 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:06:40 np0005548731 nova_compute[232433]: 2025-12-06 07:06:40.017 232437 DEBUG oslo_concurrency.lockutils [None req-34eb1552-ccd7-4744-a2da-9a40351e02c4 331b8c2798cf4e52a2f60b73cf2831a3 0bb2a00a1f30495ca45c1cb0c482b67c - - default default] Acquiring lock "58618778-1470-4993-b50d-a24f19c41ed3" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:06:40 np0005548731 nova_compute[232433]: 2025-12-06 07:06:40.017 232437 DEBUG oslo_concurrency.lockutils [None req-34eb1552-ccd7-4744-a2da-9a40351e02c4 331b8c2798cf4e52a2f60b73cf2831a3 0bb2a00a1f30495ca45c1cb0c482b67c - - default default] Lock "58618778-1470-4993-b50d-a24f19c41ed3" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:06:40 np0005548731 nova_compute[232433]: 2025-12-06 07:06:40.018 232437 DEBUG oslo_concurrency.lockutils [None req-34eb1552-ccd7-4744-a2da-9a40351e02c4 331b8c2798cf4e52a2f60b73cf2831a3 0bb2a00a1f30495ca45c1cb0c482b67c - - default default] Acquiring lock "58618778-1470-4993-b50d-a24f19c41ed3-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:06:40 np0005548731 nova_compute[232433]: 2025-12-06 07:06:40.018 232437 DEBUG oslo_concurrency.lockutils [None req-34eb1552-ccd7-4744-a2da-9a40351e02c4 331b8c2798cf4e52a2f60b73cf2831a3 0bb2a00a1f30495ca45c1cb0c482b67c - - default default] Lock "58618778-1470-4993-b50d-a24f19c41ed3-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:06:40 np0005548731 nova_compute[232433]: 2025-12-06 07:06:40.018 232437 DEBUG oslo_concurrency.lockutils [None req-34eb1552-ccd7-4744-a2da-9a40351e02c4 331b8c2798cf4e52a2f60b73cf2831a3 0bb2a00a1f30495ca45c1cb0c482b67c - - default default] Lock "58618778-1470-4993-b50d-a24f19c41ed3-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:06:40 np0005548731 nova_compute[232433]: 2025-12-06 07:06:40.020 232437 INFO nova.compute.manager [None req-34eb1552-ccd7-4744-a2da-9a40351e02c4 331b8c2798cf4e52a2f60b73cf2831a3 0bb2a00a1f30495ca45c1cb0c482b67c - - default default] [instance: 58618778-1470-4993-b50d-a24f19c41ed3] Terminating instance#033[00m
Dec  6 02:06:40 np0005548731 nova_compute[232433]: 2025-12-06 07:06:40.020 232437 DEBUG nova.compute.manager [None req-34eb1552-ccd7-4744-a2da-9a40351e02c4 331b8c2798cf4e52a2f60b73cf2831a3 0bb2a00a1f30495ca45c1cb0c482b67c - - default default] [instance: 58618778-1470-4993-b50d-a24f19c41ed3] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec  6 02:06:40 np0005548731 kernel: tap123afe9f-34 (unregistering): left promiscuous mode
Dec  6 02:06:40 np0005548731 NetworkManager[49182]: <info>  [1765004800.0599] device (tap123afe9f-34): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec  6 02:06:40 np0005548731 ovn_controller[133927]: 2025-12-06T07:06:40Z|00130|binding|INFO|Releasing lport 123afe9f-34c7-4df6-bb33-a09d55405a44 from this chassis (sb_readonly=0)
Dec  6 02:06:40 np0005548731 ovn_controller[133927]: 2025-12-06T07:06:40Z|00131|binding|INFO|Setting lport 123afe9f-34c7-4df6-bb33-a09d55405a44 down in Southbound
Dec  6 02:06:40 np0005548731 ovn_controller[133927]: 2025-12-06T07:06:40Z|00132|binding|INFO|Removing iface tap123afe9f-34 ovn-installed in OVS
Dec  6 02:06:40 np0005548731 nova_compute[232433]: 2025-12-06 07:06:40.128 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:06:40 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:06:40.135 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ea:b3:94 10.100.0.9'], port_security=['fa:16:3e:ea:b3:94 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '58618778-1470-4993-b50d-a24f19c41ed3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f63e3ad4-d91f-42aa-b6cb-baa14900e53e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0bb2a00a1f30495ca45c1cb0c482b67c', 'neutron:revision_number': '4', 'neutron:security_group_ids': '04b7001d-6cea-4ba1-942a-8b94540b27a3', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d4a168e9-2e71-4459-a445-c21df5cbec14, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], logical_port=123afe9f-34c7-4df6-bb33-a09d55405a44) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 02:06:40 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:06:40.136 143965 INFO neutron.agent.ovn.metadata.agent [-] Port 123afe9f-34c7-4df6-bb33-a09d55405a44 in datapath f63e3ad4-d91f-42aa-b6cb-baa14900e53e unbound from our chassis#033[00m
Dec  6 02:06:40 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:06:40.137 143965 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network f63e3ad4-d91f-42aa-b6cb-baa14900e53e, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec  6 02:06:40 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:06:40.138 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[afae8764-47c5-40c8-a29c-238c1eb91e1c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:06:40 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:06:40.138 143965 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-f63e3ad4-d91f-42aa-b6cb-baa14900e53e namespace which is not needed anymore#033[00m
Dec  6 02:06:40 np0005548731 nova_compute[232433]: 2025-12-06 07:06:40.145 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:06:40 np0005548731 systemd[1]: machine-qemu\x2d18\x2dinstance\x2d00000023.scope: Deactivated successfully.
Dec  6 02:06:40 np0005548731 systemd[1]: machine-qemu\x2d18\x2dinstance\x2d00000023.scope: Consumed 14.263s CPU time.
Dec  6 02:06:40 np0005548731 systemd-machined[195355]: Machine qemu-18-instance-00000023 terminated.
Dec  6 02:06:40 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 02:06:40 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3249629797' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 02:06:40 np0005548731 nova_compute[232433]: 2025-12-06 07:06:40.183 232437 DEBUG oslo_concurrency.processutils [None req-86a2d933-a6eb-4a8f-aa86-f181b7d1dc29 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.443s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:06:40 np0005548731 nova_compute[232433]: 2025-12-06 07:06:40.189 232437 DEBUG nova.compute.provider_tree [None req-86a2d933-a6eb-4a8f-aa86-f181b7d1dc29 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] Inventory has not changed in ProviderTree for provider: 6d00757a-082f-486d-ae84-869a2ba2e6e7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  6 02:06:40 np0005548731 nova_compute[232433]: 2025-12-06 07:06:40.211 232437 DEBUG nova.scheduler.client.report [None req-86a2d933-a6eb-4a8f-aa86-f181b7d1dc29 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] Inventory has not changed for provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  6 02:06:40 np0005548731 nova_compute[232433]: 2025-12-06 07:06:40.240 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:06:40 np0005548731 nova_compute[232433]: 2025-12-06 07:06:40.244 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:06:40 np0005548731 nova_compute[232433]: 2025-12-06 07:06:40.252 232437 INFO nova.virt.libvirt.driver [-] [instance: 58618778-1470-4993-b50d-a24f19c41ed3] Instance destroyed successfully.#033[00m
Dec  6 02:06:40 np0005548731 nova_compute[232433]: 2025-12-06 07:06:40.253 232437 DEBUG nova.objects.instance [None req-34eb1552-ccd7-4744-a2da-9a40351e02c4 331b8c2798cf4e52a2f60b73cf2831a3 0bb2a00a1f30495ca45c1cb0c482b67c - - default default] Lazy-loading 'resources' on Instance uuid 58618778-1470-4993-b50d-a24f19c41ed3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:06:40 np0005548731 nova_compute[232433]: 2025-12-06 07:06:40.274 232437 DEBUG oslo_concurrency.lockutils [None req-86a2d933-a6eb-4a8f-aa86-f181b7d1dc29 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.664s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:06:40 np0005548731 nova_compute[232433]: 2025-12-06 07:06:40.276 232437 DEBUG nova.virt.libvirt.vif [None req-34eb1552-ccd7-4744-a2da-9a40351e02c4 331b8c2798cf4e52a2f60b73cf2831a3 0bb2a00a1f30495ca45c1cb0c482b67c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T07:06:06Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-FloatingIPsAssociationNegativeTestJSON-server-407211273',display_name='tempest-FloatingIPsAssociationNegativeTestJSON-server-407211273',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-floatingipsassociationnegativetestjson-server-407211273',id=35,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-06T07:06:20Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='0bb2a00a1f30495ca45c1cb0c482b67c',ramdisk_id='',reservation_id='r-1sv0yve3',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_m
odel='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-FloatingIPsAssociationNegativeTestJSON-2036403844',owner_user_name='tempest-FloatingIPsAssociationNegativeTestJSON-2036403844-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T07:06:20Z,user_data=None,user_id='331b8c2798cf4e52a2f60b73cf2831a3',uuid=58618778-1470-4993-b50d-a24f19c41ed3,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "123afe9f-34c7-4df6-bb33-a09d55405a44", "address": "fa:16:3e:ea:b3:94", "network": {"id": "f63e3ad4-d91f-42aa-b6cb-baa14900e53e", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationNegativeTestJSON-625876326-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0bb2a00a1f30495ca45c1cb0c482b67c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap123afe9f-34", "ovs_interfaceid": "123afe9f-34c7-4df6-bb33-a09d55405a44", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec  6 02:06:40 np0005548731 nova_compute[232433]: 2025-12-06 07:06:40.276 232437 DEBUG nova.network.os_vif_util [None req-34eb1552-ccd7-4744-a2da-9a40351e02c4 331b8c2798cf4e52a2f60b73cf2831a3 0bb2a00a1f30495ca45c1cb0c482b67c - - default default] Converting VIF {"id": "123afe9f-34c7-4df6-bb33-a09d55405a44", "address": "fa:16:3e:ea:b3:94", "network": {"id": "f63e3ad4-d91f-42aa-b6cb-baa14900e53e", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationNegativeTestJSON-625876326-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0bb2a00a1f30495ca45c1cb0c482b67c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap123afe9f-34", "ovs_interfaceid": "123afe9f-34c7-4df6-bb33-a09d55405a44", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 02:06:40 np0005548731 nova_compute[232433]: 2025-12-06 07:06:40.277 232437 DEBUG nova.network.os_vif_util [None req-34eb1552-ccd7-4744-a2da-9a40351e02c4 331b8c2798cf4e52a2f60b73cf2831a3 0bb2a00a1f30495ca45c1cb0c482b67c - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:ea:b3:94,bridge_name='br-int',has_traffic_filtering=True,id=123afe9f-34c7-4df6-bb33-a09d55405a44,network=Network(f63e3ad4-d91f-42aa-b6cb-baa14900e53e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap123afe9f-34') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 02:06:40 np0005548731 nova_compute[232433]: 2025-12-06 07:06:40.277 232437 DEBUG os_vif [None req-34eb1552-ccd7-4744-a2da-9a40351e02c4 331b8c2798cf4e52a2f60b73cf2831a3 0bb2a00a1f30495ca45c1cb0c482b67c - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:ea:b3:94,bridge_name='br-int',has_traffic_filtering=True,id=123afe9f-34c7-4df6-bb33-a09d55405a44,network=Network(f63e3ad4-d91f-42aa-b6cb-baa14900e53e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap123afe9f-34') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec  6 02:06:40 np0005548731 neutron-haproxy-ovnmeta-f63e3ad4-d91f-42aa-b6cb-baa14900e53e[249981]: [NOTICE]   (249985) : haproxy version is 2.8.14-c23fe91
Dec  6 02:06:40 np0005548731 neutron-haproxy-ovnmeta-f63e3ad4-d91f-42aa-b6cb-baa14900e53e[249981]: [NOTICE]   (249985) : path to executable is /usr/sbin/haproxy
Dec  6 02:06:40 np0005548731 neutron-haproxy-ovnmeta-f63e3ad4-d91f-42aa-b6cb-baa14900e53e[249981]: [WARNING]  (249985) : Exiting Master process...
Dec  6 02:06:40 np0005548731 nova_compute[232433]: 2025-12-06 07:06:40.279 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:06:40 np0005548731 nova_compute[232433]: 2025-12-06 07:06:40.279 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap123afe9f-34, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:06:40 np0005548731 neutron-haproxy-ovnmeta-f63e3ad4-d91f-42aa-b6cb-baa14900e53e[249981]: [ALERT]    (249985) : Current worker (249987) exited with code 143 (Terminated)
Dec  6 02:06:40 np0005548731 neutron-haproxy-ovnmeta-f63e3ad4-d91f-42aa-b6cb-baa14900e53e[249981]: [WARNING]  (249985) : All workers exited. Exiting... (0)
Dec  6 02:06:40 np0005548731 systemd[1]: libpod-128748be4643dc7e49842184c2bba747e73e2eff335ae8087c1fd92547e50a7d.scope: Deactivated successfully.
Dec  6 02:06:40 np0005548731 nova_compute[232433]: 2025-12-06 07:06:40.281 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:06:40 np0005548731 nova_compute[232433]: 2025-12-06 07:06:40.284 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  6 02:06:40 np0005548731 nova_compute[232433]: 2025-12-06 07:06:40.286 232437 INFO os_vif [None req-34eb1552-ccd7-4744-a2da-9a40351e02c4 331b8c2798cf4e52a2f60b73cf2831a3 0bb2a00a1f30495ca45c1cb0c482b67c - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:ea:b3:94,bridge_name='br-int',has_traffic_filtering=True,id=123afe9f-34c7-4df6-bb33-a09d55405a44,network=Network(f63e3ad4-d91f-42aa-b6cb-baa14900e53e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap123afe9f-34')#033[00m
Dec  6 02:06:40 np0005548731 podman[250624]: 2025-12-06 07:06:40.289979257 +0000 UTC m=+0.053939835 container died 128748be4643dc7e49842184c2bba747e73e2eff335ae8087c1fd92547e50a7d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f63e3ad4-d91f-42aa-b6cb-baa14900e53e, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.vendor=CentOS)
Dec  6 02:06:40 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e181 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:06:40 np0005548731 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-128748be4643dc7e49842184c2bba747e73e2eff335ae8087c1fd92547e50a7d-userdata-shm.mount: Deactivated successfully.
Dec  6 02:06:40 np0005548731 systemd[1]: var-lib-containers-storage-overlay-ea32a38affadf8ea0056bd4ae7c69f3a2e9be5dfcb2463973210c2f0731f39d9-merged.mount: Deactivated successfully.
Dec  6 02:06:40 np0005548731 podman[250624]: 2025-12-06 07:06:40.326194012 +0000 UTC m=+0.090154580 container cleanup 128748be4643dc7e49842184c2bba747e73e2eff335ae8087c1fd92547e50a7d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f63e3ad4-d91f-42aa-b6cb-baa14900e53e, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  6 02:06:40 np0005548731 systemd[1]: libpod-conmon-128748be4643dc7e49842184c2bba747e73e2eff335ae8087c1fd92547e50a7d.scope: Deactivated successfully.
Dec  6 02:06:40 np0005548731 nova_compute[232433]: 2025-12-06 07:06:40.356 232437 DEBUG oslo_concurrency.lockutils [None req-86a2d933-a6eb-4a8f-aa86-f181b7d1dc29 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] Acquiring lock "4d9584e0-fddd-493b-86a5-2cdb9087cce3" by "nova.compute.manager.ComputeManager._validate_instance_group_policy.<locals>._do_validation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:06:40 np0005548731 nova_compute[232433]: 2025-12-06 07:06:40.357 232437 DEBUG oslo_concurrency.lockutils [None req-86a2d933-a6eb-4a8f-aa86-f181b7d1dc29 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] Lock "4d9584e0-fddd-493b-86a5-2cdb9087cce3" acquired by "nova.compute.manager.ComputeManager._validate_instance_group_policy.<locals>._do_validation" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:06:40 np0005548731 podman[250680]: 2025-12-06 07:06:40.387425813 +0000 UTC m=+0.040686292 container remove 128748be4643dc7e49842184c2bba747e73e2eff335ae8087c1fd92547e50a7d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f63e3ad4-d91f-42aa-b6cb-baa14900e53e, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS)
Dec  6 02:06:40 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:06:40.392 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[5100e289-9a8c-4a97-8c85-20afe6183f85]: (4, ('Sat Dec  6 07:06:40 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-f63e3ad4-d91f-42aa-b6cb-baa14900e53e (128748be4643dc7e49842184c2bba747e73e2eff335ae8087c1fd92547e50a7d)\n128748be4643dc7e49842184c2bba747e73e2eff335ae8087c1fd92547e50a7d\nSat Dec  6 07:06:40 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-f63e3ad4-d91f-42aa-b6cb-baa14900e53e (128748be4643dc7e49842184c2bba747e73e2eff335ae8087c1fd92547e50a7d)\n128748be4643dc7e49842184c2bba747e73e2eff335ae8087c1fd92547e50a7d\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:06:40 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:06:40.394 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[9a1438ea-b68e-4f5d-8c02-5f49e930227d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:06:40 np0005548731 nova_compute[232433]: 2025-12-06 07:06:40.394 232437 DEBUG nova.compute.manager [None req-86a2d933-a6eb-4a8f-aa86-f181b7d1dc29 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] [instance: 001c930d-1294-433f-b9df-38beb3123844] No node specified, defaulting to compute-2.ctlplane.example.com _get_nodename /usr/lib/python3.9/site-packages/nova/compute/manager.py:10505#033[00m
Dec  6 02:06:40 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:06:40.395 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf63e3ad4-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:06:40 np0005548731 kernel: tapf63e3ad4-d0: left promiscuous mode
Dec  6 02:06:40 np0005548731 nova_compute[232433]: 2025-12-06 07:06:40.397 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:06:40 np0005548731 nova_compute[232433]: 2025-12-06 07:06:40.399 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:06:40 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:06:40.401 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[a2fd129d-35ec-4418-91a1-563255c37eec]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:06:40 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:06:40.411 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[d9802d05-067b-445c-a73b-68458510248f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:06:40 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:06:40.412 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[a9f47a06-375a-4574-92d0-836c78028df8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:06:40 np0005548731 nova_compute[232433]: 2025-12-06 07:06:40.414 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:06:40 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:06:40.427 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[fb5c4fd2-1712-4aca-a014-85025ea9cce6]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 504036, 'reachable_time': 43121, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 250695, 'error': None, 'target': 'ovnmeta-f63e3ad4-d91f-42aa-b6cb-baa14900e53e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:06:40 np0005548731 systemd[1]: run-netns-ovnmeta\x2df63e3ad4\x2dd91f\x2d42aa\x2db6cb\x2dbaa14900e53e.mount: Deactivated successfully.
Dec  6 02:06:40 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:06:40.431 144079 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-f63e3ad4-d91f-42aa-b6cb-baa14900e53e deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Dec  6 02:06:40 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:06:40.432 144079 DEBUG oslo.privsep.daemon [-] privsep: reply[e7c526fd-61e6-4baf-9277-a0a6e5cfe563]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:06:40 np0005548731 nova_compute[232433]: 2025-12-06 07:06:40.466 232437 DEBUG oslo_concurrency.lockutils [None req-86a2d933-a6eb-4a8f-aa86-f181b7d1dc29 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] Lock "4d9584e0-fddd-493b-86a5-2cdb9087cce3" "released" by "nova.compute.manager.ComputeManager._validate_instance_group_policy.<locals>._do_validation" :: held 0.109s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:06:40 np0005548731 nova_compute[232433]: 2025-12-06 07:06:40.466 232437 DEBUG nova.compute.manager [None req-86a2d933-a6eb-4a8f-aa86-f181b7d1dc29 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] [instance: 001c930d-1294-433f-b9df-38beb3123844] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Dec  6 02:06:40 np0005548731 nova_compute[232433]: 2025-12-06 07:06:40.546 232437 DEBUG nova.compute.manager [None req-86a2d933-a6eb-4a8f-aa86-f181b7d1dc29 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] [instance: 001c930d-1294-433f-b9df-38beb3123844] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Dec  6 02:06:40 np0005548731 nova_compute[232433]: 2025-12-06 07:06:40.547 232437 DEBUG nova.network.neutron [None req-86a2d933-a6eb-4a8f-aa86-f181b7d1dc29 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] [instance: 001c930d-1294-433f-b9df-38beb3123844] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Dec  6 02:06:40 np0005548731 nova_compute[232433]: 2025-12-06 07:06:40.591 232437 INFO nova.virt.libvirt.driver [None req-86a2d933-a6eb-4a8f-aa86-f181b7d1dc29 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] [instance: 001c930d-1294-433f-b9df-38beb3123844] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Dec  6 02:06:40 np0005548731 nova_compute[232433]: 2025-12-06 07:06:40.627 232437 INFO nova.virt.libvirt.driver [None req-34eb1552-ccd7-4744-a2da-9a40351e02c4 331b8c2798cf4e52a2f60b73cf2831a3 0bb2a00a1f30495ca45c1cb0c482b67c - - default default] [instance: 58618778-1470-4993-b50d-a24f19c41ed3] Deleting instance files /var/lib/nova/instances/58618778-1470-4993-b50d-a24f19c41ed3_del#033[00m
Dec  6 02:06:40 np0005548731 nova_compute[232433]: 2025-12-06 07:06:40.628 232437 INFO nova.virt.libvirt.driver [None req-34eb1552-ccd7-4744-a2da-9a40351e02c4 331b8c2798cf4e52a2f60b73cf2831a3 0bb2a00a1f30495ca45c1cb0c482b67c - - default default] [instance: 58618778-1470-4993-b50d-a24f19c41ed3] Deletion of /var/lib/nova/instances/58618778-1470-4993-b50d-a24f19c41ed3_del complete#033[00m
Dec  6 02:06:40 np0005548731 nova_compute[232433]: 2025-12-06 07:06:40.635 232437 DEBUG nova.compute.manager [None req-86a2d933-a6eb-4a8f-aa86-f181b7d1dc29 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] [instance: 001c930d-1294-433f-b9df-38beb3123844] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Dec  6 02:06:40 np0005548731 nova_compute[232433]: 2025-12-06 07:06:40.761 232437 INFO nova.compute.manager [None req-34eb1552-ccd7-4744-a2da-9a40351e02c4 331b8c2798cf4e52a2f60b73cf2831a3 0bb2a00a1f30495ca45c1cb0c482b67c - - default default] [instance: 58618778-1470-4993-b50d-a24f19c41ed3] Took 0.74 seconds to destroy the instance on the hypervisor.#033[00m
Dec  6 02:06:40 np0005548731 nova_compute[232433]: 2025-12-06 07:06:40.762 232437 DEBUG oslo.service.loopingcall [None req-34eb1552-ccd7-4744-a2da-9a40351e02c4 331b8c2798cf4e52a2f60b73cf2831a3 0bb2a00a1f30495ca45c1cb0c482b67c - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Dec  6 02:06:40 np0005548731 nova_compute[232433]: 2025-12-06 07:06:40.762 232437 DEBUG nova.compute.manager [-] [instance: 58618778-1470-4993-b50d-a24f19c41ed3] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Dec  6 02:06:40 np0005548731 nova_compute[232433]: 2025-12-06 07:06:40.763 232437 DEBUG nova.network.neutron [-] [instance: 58618778-1470-4993-b50d-a24f19c41ed3] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Dec  6 02:06:40 np0005548731 nova_compute[232433]: 2025-12-06 07:06:40.862 232437 DEBUG nova.compute.manager [None req-86a2d933-a6eb-4a8f-aa86-f181b7d1dc29 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] [instance: 001c930d-1294-433f-b9df-38beb3123844] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Dec  6 02:06:40 np0005548731 nova_compute[232433]: 2025-12-06 07:06:40.863 232437 DEBUG nova.virt.libvirt.driver [None req-86a2d933-a6eb-4a8f-aa86-f181b7d1dc29 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] [instance: 001c930d-1294-433f-b9df-38beb3123844] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Dec  6 02:06:40 np0005548731 nova_compute[232433]: 2025-12-06 07:06:40.864 232437 INFO nova.virt.libvirt.driver [None req-86a2d933-a6eb-4a8f-aa86-f181b7d1dc29 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] [instance: 001c930d-1294-433f-b9df-38beb3123844] Creating image(s)#033[00m
Dec  6 02:06:40 np0005548731 nova_compute[232433]: 2025-12-06 07:06:40.895 232437 DEBUG nova.storage.rbd_utils [None req-86a2d933-a6eb-4a8f-aa86-f181b7d1dc29 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] rbd image 001c930d-1294-433f-b9df-38beb3123844_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:06:40 np0005548731 nova_compute[232433]: 2025-12-06 07:06:40.925 232437 DEBUG nova.storage.rbd_utils [None req-86a2d933-a6eb-4a8f-aa86-f181b7d1dc29 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] rbd image 001c930d-1294-433f-b9df-38beb3123844_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:06:40 np0005548731 nova_compute[232433]: 2025-12-06 07:06:40.950 232437 DEBUG nova.storage.rbd_utils [None req-86a2d933-a6eb-4a8f-aa86-f181b7d1dc29 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] rbd image 001c930d-1294-433f-b9df-38beb3123844_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:06:40 np0005548731 nova_compute[232433]: 2025-12-06 07:06:40.954 232437 DEBUG oslo_concurrency.processutils [None req-86a2d933-a6eb-4a8f-aa86-f181b7d1dc29 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:06:41 np0005548731 nova_compute[232433]: 2025-12-06 07:06:41.029 232437 DEBUG oslo_concurrency.processutils [None req-86a2d933-a6eb-4a8f-aa86-f181b7d1dc29 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json" returned: 0 in 0.075s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:06:41 np0005548731 nova_compute[232433]: 2025-12-06 07:06:41.031 232437 DEBUG oslo_concurrency.lockutils [None req-86a2d933-a6eb-4a8f-aa86-f181b7d1dc29 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] Acquiring lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:06:41 np0005548731 nova_compute[232433]: 2025-12-06 07:06:41.032 232437 DEBUG oslo_concurrency.lockutils [None req-86a2d933-a6eb-4a8f-aa86-f181b7d1dc29 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:06:41 np0005548731 nova_compute[232433]: 2025-12-06 07:06:41.032 232437 DEBUG oslo_concurrency.lockutils [None req-86a2d933-a6eb-4a8f-aa86-f181b7d1dc29 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:06:41 np0005548731 nova_compute[232433]: 2025-12-06 07:06:41.060 232437 DEBUG nova.storage.rbd_utils [None req-86a2d933-a6eb-4a8f-aa86-f181b7d1dc29 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] rbd image 001c930d-1294-433f-b9df-38beb3123844_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:06:41 np0005548731 nova_compute[232433]: 2025-12-06 07:06:41.065 232437 DEBUG oslo_concurrency.processutils [None req-86a2d933-a6eb-4a8f-aa86-f181b7d1dc29 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef 001c930d-1294-433f-b9df-38beb3123844_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:06:41 np0005548731 nova_compute[232433]: 2025-12-06 07:06:41.105 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:06:41 np0005548731 nova_compute[232433]: 2025-12-06 07:06:41.143 232437 DEBUG nova.network.neutron [None req-86a2d933-a6eb-4a8f-aa86-f181b7d1dc29 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] [instance: 001c930d-1294-433f-b9df-38beb3123844] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188#033[00m
Dec  6 02:06:41 np0005548731 nova_compute[232433]: 2025-12-06 07:06:41.144 232437 DEBUG nova.compute.manager [None req-86a2d933-a6eb-4a8f-aa86-f181b7d1dc29 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] [instance: 001c930d-1294-433f-b9df-38beb3123844] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Dec  6 02:06:41 np0005548731 nova_compute[232433]: 2025-12-06 07:06:41.506 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:06:41 np0005548731 nova_compute[232433]: 2025-12-06 07:06:41.552 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:06:41 np0005548731 nova_compute[232433]: 2025-12-06 07:06:41.632 232437 DEBUG oslo_concurrency.processutils [None req-86a2d933-a6eb-4a8f-aa86-f181b7d1dc29 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef 001c930d-1294-433f-b9df-38beb3123844_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.567s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:06:41 np0005548731 nova_compute[232433]: 2025-12-06 07:06:41.712 232437 DEBUG nova.storage.rbd_utils [None req-86a2d933-a6eb-4a8f-aa86-f181b7d1dc29 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] resizing rbd image 001c930d-1294-433f-b9df-38beb3123844_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Dec  6 02:06:41 np0005548731 nova_compute[232433]: 2025-12-06 07:06:41.816 232437 DEBUG nova.objects.instance [None req-86a2d933-a6eb-4a8f-aa86-f181b7d1dc29 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] Lazy-loading 'migration_context' on Instance uuid 001c930d-1294-433f-b9df-38beb3123844 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:06:41 np0005548731 nova_compute[232433]: 2025-12-06 07:06:41.831 232437 DEBUG nova.virt.libvirt.driver [None req-86a2d933-a6eb-4a8f-aa86-f181b7d1dc29 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] [instance: 001c930d-1294-433f-b9df-38beb3123844] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Dec  6 02:06:41 np0005548731 nova_compute[232433]: 2025-12-06 07:06:41.831 232437 DEBUG nova.virt.libvirt.driver [None req-86a2d933-a6eb-4a8f-aa86-f181b7d1dc29 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] [instance: 001c930d-1294-433f-b9df-38beb3123844] Ensure instance console log exists: /var/lib/nova/instances/001c930d-1294-433f-b9df-38beb3123844/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec  6 02:06:41 np0005548731 nova_compute[232433]: 2025-12-06 07:06:41.832 232437 DEBUG oslo_concurrency.lockutils [None req-86a2d933-a6eb-4a8f-aa86-f181b7d1dc29 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:06:41 np0005548731 nova_compute[232433]: 2025-12-06 07:06:41.832 232437 DEBUG oslo_concurrency.lockutils [None req-86a2d933-a6eb-4a8f-aa86-f181b7d1dc29 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:06:41 np0005548731 nova_compute[232433]: 2025-12-06 07:06:41.833 232437 DEBUG oslo_concurrency.lockutils [None req-86a2d933-a6eb-4a8f-aa86-f181b7d1dc29 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:06:41 np0005548731 nova_compute[232433]: 2025-12-06 07:06:41.834 232437 DEBUG nova.virt.libvirt.driver [None req-86a2d933-a6eb-4a8f-aa86-f181b7d1dc29 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] [instance: 001c930d-1294-433f-b9df-38beb3123844] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'device_type': 'disk', 'encryption_secret_uuid': None, 'disk_bus': 'virtio', 'boot_index': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'encryption_options': None, 'size': 0, 'encrypted': False, 'image_id': '6efab05d-c7cf-4770-a5c3-c806a2739063'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec  6 02:06:41 np0005548731 nova_compute[232433]: 2025-12-06 07:06:41.839 232437 WARNING nova.virt.libvirt.driver [None req-86a2d933-a6eb-4a8f-aa86-f181b7d1dc29 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  6 02:06:41 np0005548731 nova_compute[232433]: 2025-12-06 07:06:41.843 232437 DEBUG nova.virt.libvirt.host [None req-86a2d933-a6eb-4a8f-aa86-f181b7d1dc29 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec  6 02:06:41 np0005548731 nova_compute[232433]: 2025-12-06 07:06:41.843 232437 DEBUG nova.virt.libvirt.host [None req-86a2d933-a6eb-4a8f-aa86-f181b7d1dc29 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec  6 02:06:41 np0005548731 nova_compute[232433]: 2025-12-06 07:06:41.848 232437 DEBUG nova.virt.libvirt.host [None req-86a2d933-a6eb-4a8f-aa86-f181b7d1dc29 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec  6 02:06:41 np0005548731 nova_compute[232433]: 2025-12-06 07:06:41.848 232437 DEBUG nova.virt.libvirt.host [None req-86a2d933-a6eb-4a8f-aa86-f181b7d1dc29 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec  6 02:06:41 np0005548731 nova_compute[232433]: 2025-12-06 07:06:41.850 232437 DEBUG nova.virt.libvirt.driver [None req-86a2d933-a6eb-4a8f-aa86-f181b7d1dc29 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec  6 02:06:41 np0005548731 nova_compute[232433]: 2025-12-06 07:06:41.850 232437 DEBUG nova.virt.hardware [None req-86a2d933-a6eb-4a8f-aa86-f181b7d1dc29 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-06T06:56:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='25848a18-11d9-4f11-80b5-5d005675c76d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec  6 02:06:41 np0005548731 nova_compute[232433]: 2025-12-06 07:06:41.851 232437 DEBUG nova.virt.hardware [None req-86a2d933-a6eb-4a8f-aa86-f181b7d1dc29 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec  6 02:06:41 np0005548731 nova_compute[232433]: 2025-12-06 07:06:41.851 232437 DEBUG nova.virt.hardware [None req-86a2d933-a6eb-4a8f-aa86-f181b7d1dc29 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec  6 02:06:41 np0005548731 nova_compute[232433]: 2025-12-06 07:06:41.851 232437 DEBUG nova.virt.hardware [None req-86a2d933-a6eb-4a8f-aa86-f181b7d1dc29 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec  6 02:06:41 np0005548731 nova_compute[232433]: 2025-12-06 07:06:41.851 232437 DEBUG nova.virt.hardware [None req-86a2d933-a6eb-4a8f-aa86-f181b7d1dc29 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec  6 02:06:41 np0005548731 nova_compute[232433]: 2025-12-06 07:06:41.852 232437 DEBUG nova.virt.hardware [None req-86a2d933-a6eb-4a8f-aa86-f181b7d1dc29 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec  6 02:06:41 np0005548731 nova_compute[232433]: 2025-12-06 07:06:41.852 232437 DEBUG nova.virt.hardware [None req-86a2d933-a6eb-4a8f-aa86-f181b7d1dc29 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec  6 02:06:41 np0005548731 nova_compute[232433]: 2025-12-06 07:06:41.852 232437 DEBUG nova.virt.hardware [None req-86a2d933-a6eb-4a8f-aa86-f181b7d1dc29 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec  6 02:06:41 np0005548731 nova_compute[232433]: 2025-12-06 07:06:41.852 232437 DEBUG nova.virt.hardware [None req-86a2d933-a6eb-4a8f-aa86-f181b7d1dc29 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec  6 02:06:41 np0005548731 nova_compute[232433]: 2025-12-06 07:06:41.853 232437 DEBUG nova.virt.hardware [None req-86a2d933-a6eb-4a8f-aa86-f181b7d1dc29 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec  6 02:06:41 np0005548731 nova_compute[232433]: 2025-12-06 07:06:41.853 232437 DEBUG nova.virt.hardware [None req-86a2d933-a6eb-4a8f-aa86-f181b7d1dc29 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec  6 02:06:41 np0005548731 nova_compute[232433]: 2025-12-06 07:06:41.855 232437 DEBUG oslo_concurrency.processutils [None req-86a2d933-a6eb-4a8f-aa86-f181b7d1dc29 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  6 02:06:41 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:06:41 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:06:41 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:06:41.900 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:06:41 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:06:41 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000025s ======
Dec  6 02:06:41 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:06:41.944 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Dec  6 02:06:42 np0005548731 nova_compute[232433]: 2025-12-06 07:06:42.159 232437 DEBUG nova.network.neutron [-] [instance: 58618778-1470-4993-b50d-a24f19c41ed3] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec  6 02:06:42 np0005548731 nova_compute[232433]: 2025-12-06 07:06:42.189 232437 INFO nova.compute.manager [-] [instance: 58618778-1470-4993-b50d-a24f19c41ed3] Took 1.43 seconds to deallocate network for instance.
Dec  6 02:06:42 np0005548731 nova_compute[232433]: 2025-12-06 07:06:42.258 232437 DEBUG oslo_concurrency.lockutils [None req-34eb1552-ccd7-4744-a2da-9a40351e02c4 331b8c2798cf4e52a2f60b73cf2831a3 0bb2a00a1f30495ca45c1cb0c482b67c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  6 02:06:42 np0005548731 nova_compute[232433]: 2025-12-06 07:06:42.259 232437 DEBUG oslo_concurrency.lockutils [None req-34eb1552-ccd7-4744-a2da-9a40351e02c4 331b8c2798cf4e52a2f60b73cf2831a3 0bb2a00a1f30495ca45c1cb0c482b67c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  6 02:06:42 np0005548731 nova_compute[232433]: 2025-12-06 07:06:42.332 232437 DEBUG oslo_concurrency.processutils [None req-86a2d933-a6eb-4a8f-aa86-f181b7d1dc29 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  6 02:06:42 np0005548731 nova_compute[232433]: 2025-12-06 07:06:42.356 232437 DEBUG nova.storage.rbd_utils [None req-86a2d933-a6eb-4a8f-aa86-f181b7d1dc29 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] rbd image 001c930d-1294-433f-b9df-38beb3123844_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec  6 02:06:42 np0005548731 nova_compute[232433]: 2025-12-06 07:06:42.359 232437 DEBUG oslo_concurrency.processutils [None req-86a2d933-a6eb-4a8f-aa86-f181b7d1dc29 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  6 02:06:42 np0005548731 nova_compute[232433]: 2025-12-06 07:06:42.470 232437 DEBUG oslo_concurrency.processutils [None req-34eb1552-ccd7-4744-a2da-9a40351e02c4 331b8c2798cf4e52a2f60b73cf2831a3 0bb2a00a1f30495ca45c1cb0c482b67c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  6 02:06:42 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  6 02:06:42 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3914379697' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  6 02:06:42 np0005548731 nova_compute[232433]: 2025-12-06 07:06:42.753 232437 DEBUG oslo_concurrency.processutils [None req-86a2d933-a6eb-4a8f-aa86-f181b7d1dc29 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.393s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  6 02:06:42 np0005548731 nova_compute[232433]: 2025-12-06 07:06:42.755 232437 DEBUG nova.objects.instance [None req-86a2d933-a6eb-4a8f-aa86-f181b7d1dc29 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] Lazy-loading 'pci_devices' on Instance uuid 001c930d-1294-433f-b9df-38beb3123844 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec  6 02:06:42 np0005548731 nova_compute[232433]: 2025-12-06 07:06:42.787 232437 DEBUG nova.virt.libvirt.driver [None req-86a2d933-a6eb-4a8f-aa86-f181b7d1dc29 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] [instance: 001c930d-1294-433f-b9df-38beb3123844] End _get_guest_xml xml=<domain type="kvm">
Dec  6 02:06:42 np0005548731 nova_compute[232433]:  <uuid>001c930d-1294-433f-b9df-38beb3123844</uuid>
Dec  6 02:06:42 np0005548731 nova_compute[232433]:  <name>instance-00000029</name>
Dec  6 02:06:42 np0005548731 nova_compute[232433]:  <memory>131072</memory>
Dec  6 02:06:42 np0005548731 nova_compute[232433]:  <vcpu>1</vcpu>
Dec  6 02:06:42 np0005548731 nova_compute[232433]:  <metadata>
Dec  6 02:06:42 np0005548731 nova_compute[232433]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec  6 02:06:42 np0005548731 nova_compute[232433]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec  6 02:06:42 np0005548731 nova_compute[232433]:      <nova:name>tempest-ServersOnMultiNodesTest-server-1985926905-2</nova:name>
Dec  6 02:06:42 np0005548731 nova_compute[232433]:      <nova:creationTime>2025-12-06 07:06:41</nova:creationTime>
Dec  6 02:06:42 np0005548731 nova_compute[232433]:      <nova:flavor name="m1.nano">
Dec  6 02:06:42 np0005548731 nova_compute[232433]:        <nova:memory>128</nova:memory>
Dec  6 02:06:42 np0005548731 nova_compute[232433]:        <nova:disk>1</nova:disk>
Dec  6 02:06:42 np0005548731 nova_compute[232433]:        <nova:swap>0</nova:swap>
Dec  6 02:06:42 np0005548731 nova_compute[232433]:        <nova:ephemeral>0</nova:ephemeral>
Dec  6 02:06:42 np0005548731 nova_compute[232433]:        <nova:vcpus>1</nova:vcpus>
Dec  6 02:06:42 np0005548731 nova_compute[232433]:      </nova:flavor>
Dec  6 02:06:42 np0005548731 nova_compute[232433]:      <nova:owner>
Dec  6 02:06:42 np0005548731 nova_compute[232433]:        <nova:user uuid="21af856fbd8843c2969956a9587ca48a">tempest-ServersOnMultiNodesTest-646911698-project-member</nova:user>
Dec  6 02:06:42 np0005548731 nova_compute[232433]:        <nova:project uuid="2c1f48b58dd04b828c83d6350cc4e13d">tempest-ServersOnMultiNodesTest-646911698</nova:project>
Dec  6 02:06:42 np0005548731 nova_compute[232433]:      </nova:owner>
Dec  6 02:06:42 np0005548731 nova_compute[232433]:      <nova:root type="image" uuid="6efab05d-c7cf-4770-a5c3-c806a2739063"/>
Dec  6 02:06:42 np0005548731 nova_compute[232433]:      <nova:ports/>
Dec  6 02:06:42 np0005548731 nova_compute[232433]:    </nova:instance>
Dec  6 02:06:42 np0005548731 nova_compute[232433]:  </metadata>
Dec  6 02:06:42 np0005548731 nova_compute[232433]:  <sysinfo type="smbios">
Dec  6 02:06:42 np0005548731 nova_compute[232433]:    <system>
Dec  6 02:06:42 np0005548731 nova_compute[232433]:      <entry name="manufacturer">RDO</entry>
Dec  6 02:06:42 np0005548731 nova_compute[232433]:      <entry name="product">OpenStack Compute</entry>
Dec  6 02:06:42 np0005548731 nova_compute[232433]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec  6 02:06:42 np0005548731 nova_compute[232433]:      <entry name="serial">001c930d-1294-433f-b9df-38beb3123844</entry>
Dec  6 02:06:42 np0005548731 nova_compute[232433]:      <entry name="uuid">001c930d-1294-433f-b9df-38beb3123844</entry>
Dec  6 02:06:42 np0005548731 nova_compute[232433]:      <entry name="family">Virtual Machine</entry>
Dec  6 02:06:42 np0005548731 nova_compute[232433]:    </system>
Dec  6 02:06:42 np0005548731 nova_compute[232433]:  </sysinfo>
Dec  6 02:06:42 np0005548731 nova_compute[232433]:  <os>
Dec  6 02:06:42 np0005548731 nova_compute[232433]:    <type arch="x86_64" machine="q35">hvm</type>
Dec  6 02:06:42 np0005548731 nova_compute[232433]:    <boot dev="hd"/>
Dec  6 02:06:42 np0005548731 nova_compute[232433]:    <smbios mode="sysinfo"/>
Dec  6 02:06:42 np0005548731 nova_compute[232433]:  </os>
Dec  6 02:06:42 np0005548731 nova_compute[232433]:  <features>
Dec  6 02:06:42 np0005548731 nova_compute[232433]:    <acpi/>
Dec  6 02:06:42 np0005548731 nova_compute[232433]:    <apic/>
Dec  6 02:06:42 np0005548731 nova_compute[232433]:    <vmcoreinfo/>
Dec  6 02:06:42 np0005548731 nova_compute[232433]:  </features>
Dec  6 02:06:42 np0005548731 nova_compute[232433]:  <clock offset="utc">
Dec  6 02:06:42 np0005548731 nova_compute[232433]:    <timer name="pit" tickpolicy="delay"/>
Dec  6 02:06:42 np0005548731 nova_compute[232433]:    <timer name="rtc" tickpolicy="catchup"/>
Dec  6 02:06:42 np0005548731 nova_compute[232433]:    <timer name="hpet" present="no"/>
Dec  6 02:06:42 np0005548731 nova_compute[232433]:  </clock>
Dec  6 02:06:42 np0005548731 nova_compute[232433]:  <cpu mode="custom" match="exact">
Dec  6 02:06:42 np0005548731 nova_compute[232433]:    <model>Nehalem</model>
Dec  6 02:06:42 np0005548731 nova_compute[232433]:    <topology sockets="1" cores="1" threads="1"/>
Dec  6 02:06:42 np0005548731 nova_compute[232433]:  </cpu>
Dec  6 02:06:42 np0005548731 nova_compute[232433]:  <devices>
Dec  6 02:06:42 np0005548731 nova_compute[232433]:    <disk type="network" device="disk">
Dec  6 02:06:42 np0005548731 nova_compute[232433]:      <driver type="raw" cache="none"/>
Dec  6 02:06:42 np0005548731 nova_compute[232433]:      <source protocol="rbd" name="vms/001c930d-1294-433f-b9df-38beb3123844_disk">
Dec  6 02:06:42 np0005548731 nova_compute[232433]:        <host name="192.168.122.100" port="6789"/>
Dec  6 02:06:42 np0005548731 nova_compute[232433]:        <host name="192.168.122.102" port="6789"/>
Dec  6 02:06:42 np0005548731 nova_compute[232433]:        <host name="192.168.122.101" port="6789"/>
Dec  6 02:06:42 np0005548731 nova_compute[232433]:      </source>
Dec  6 02:06:42 np0005548731 nova_compute[232433]:      <auth username="openstack">
Dec  6 02:06:42 np0005548731 nova_compute[232433]:        <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec  6 02:06:42 np0005548731 nova_compute[232433]:      </auth>
Dec  6 02:06:42 np0005548731 nova_compute[232433]:      <target dev="vda" bus="virtio"/>
Dec  6 02:06:42 np0005548731 nova_compute[232433]:    </disk>
Dec  6 02:06:42 np0005548731 nova_compute[232433]:    <disk type="network" device="cdrom">
Dec  6 02:06:42 np0005548731 nova_compute[232433]:      <driver type="raw" cache="none"/>
Dec  6 02:06:42 np0005548731 nova_compute[232433]:      <source protocol="rbd" name="vms/001c930d-1294-433f-b9df-38beb3123844_disk.config">
Dec  6 02:06:42 np0005548731 nova_compute[232433]:        <host name="192.168.122.100" port="6789"/>
Dec  6 02:06:42 np0005548731 nova_compute[232433]:        <host name="192.168.122.102" port="6789"/>
Dec  6 02:06:42 np0005548731 nova_compute[232433]:        <host name="192.168.122.101" port="6789"/>
Dec  6 02:06:42 np0005548731 nova_compute[232433]:      </source>
Dec  6 02:06:42 np0005548731 nova_compute[232433]:      <auth username="openstack">
Dec  6 02:06:42 np0005548731 nova_compute[232433]:        <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec  6 02:06:42 np0005548731 nova_compute[232433]:      </auth>
Dec  6 02:06:42 np0005548731 nova_compute[232433]:      <target dev="sda" bus="sata"/>
Dec  6 02:06:42 np0005548731 nova_compute[232433]:    </disk>
Dec  6 02:06:42 np0005548731 nova_compute[232433]:    <serial type="pty">
Dec  6 02:06:42 np0005548731 nova_compute[232433]:      <log file="/var/lib/nova/instances/001c930d-1294-433f-b9df-38beb3123844/console.log" append="off"/>
Dec  6 02:06:42 np0005548731 nova_compute[232433]:    </serial>
Dec  6 02:06:42 np0005548731 nova_compute[232433]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec  6 02:06:42 np0005548731 nova_compute[232433]:    <video>
Dec  6 02:06:42 np0005548731 nova_compute[232433]:      <model type="virtio"/>
Dec  6 02:06:42 np0005548731 nova_compute[232433]:    </video>
Dec  6 02:06:42 np0005548731 nova_compute[232433]:    <input type="tablet" bus="usb"/>
Dec  6 02:06:42 np0005548731 nova_compute[232433]:    <rng model="virtio">
Dec  6 02:06:42 np0005548731 nova_compute[232433]:      <backend model="random">/dev/urandom</backend>
Dec  6 02:06:42 np0005548731 nova_compute[232433]:    </rng>
Dec  6 02:06:42 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root"/>
Dec  6 02:06:42 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:06:42 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:06:42 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:06:42 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:06:42 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:06:42 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:06:42 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:06:42 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:06:42 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:06:42 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:06:42 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:06:42 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:06:42 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:06:42 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:06:42 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:06:42 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:06:42 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:06:42 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:06:42 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:06:42 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:06:42 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:06:42 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:06:42 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:06:42 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:06:42 np0005548731 nova_compute[232433]:    <controller type="usb" index="0"/>
Dec  6 02:06:42 np0005548731 nova_compute[232433]:    <memballoon model="virtio">
Dec  6 02:06:42 np0005548731 nova_compute[232433]:      <stats period="10"/>
Dec  6 02:06:42 np0005548731 nova_compute[232433]:    </memballoon>
Dec  6 02:06:42 np0005548731 nova_compute[232433]:  </devices>
Dec  6 02:06:42 np0005548731 nova_compute[232433]: </domain>
Dec  6 02:06:42 np0005548731 nova_compute[232433]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec  6 02:06:42 np0005548731 nova_compute[232433]: 2025-12-06 07:06:42.851 232437 DEBUG nova.virt.libvirt.driver [None req-86a2d933-a6eb-4a8f-aa86-f181b7d1dc29 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec  6 02:06:42 np0005548731 nova_compute[232433]: 2025-12-06 07:06:42.852 232437 DEBUG nova.virt.libvirt.driver [None req-86a2d933-a6eb-4a8f-aa86-f181b7d1dc29 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec  6 02:06:42 np0005548731 nova_compute[232433]: 2025-12-06 07:06:42.853 232437 INFO nova.virt.libvirt.driver [None req-86a2d933-a6eb-4a8f-aa86-f181b7d1dc29 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] [instance: 001c930d-1294-433f-b9df-38beb3123844] Using config drive
Dec  6 02:06:42 np0005548731 nova_compute[232433]: 2025-12-06 07:06:42.876 232437 DEBUG nova.storage.rbd_utils [None req-86a2d933-a6eb-4a8f-aa86-f181b7d1dc29 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] rbd image 001c930d-1294-433f-b9df-38beb3123844_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec  6 02:06:42 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 02:06:42 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/4056199244' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 02:06:42 np0005548731 nova_compute[232433]: 2025-12-06 07:06:42.932 232437 DEBUG oslo_concurrency.processutils [None req-34eb1552-ccd7-4744-a2da-9a40351e02c4 331b8c2798cf4e52a2f60b73cf2831a3 0bb2a00a1f30495ca45c1cb0c482b67c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  6 02:06:42 np0005548731 nova_compute[232433]: 2025-12-06 07:06:42.936 232437 DEBUG nova.compute.provider_tree [None req-34eb1552-ccd7-4744-a2da-9a40351e02c4 331b8c2798cf4e52a2f60b73cf2831a3 0bb2a00a1f30495ca45c1cb0c482b67c - - default default] Inventory has not changed in ProviderTree for provider: 6d00757a-082f-486d-ae84-869a2ba2e6e7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec  6 02:06:43 np0005548731 nova_compute[232433]: 2025-12-06 07:06:43.152 232437 DEBUG nova.scheduler.client.report [None req-34eb1552-ccd7-4744-a2da-9a40351e02c4 331b8c2798cf4e52a2f60b73cf2831a3 0bb2a00a1f30495ca45c1cb0c482b67c - - default default] Inventory has not changed for provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec  6 02:06:43 np0005548731 nova_compute[232433]: 2025-12-06 07:06:43.177 232437 INFO nova.virt.libvirt.driver [None req-86a2d933-a6eb-4a8f-aa86-f181b7d1dc29 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] [instance: 001c930d-1294-433f-b9df-38beb3123844] Creating config drive at /var/lib/nova/instances/001c930d-1294-433f-b9df-38beb3123844/disk.config
Dec  6 02:06:43 np0005548731 nova_compute[232433]: 2025-12-06 07:06:43.183 232437 DEBUG oslo_concurrency.processutils [None req-86a2d933-a6eb-4a8f-aa86-f181b7d1dc29 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/001c930d-1294-433f-b9df-38beb3123844/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpjivhw46_ execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  6 02:06:43 np0005548731 nova_compute[232433]: 2025-12-06 07:06:43.312 232437 DEBUG oslo_concurrency.processutils [None req-86a2d933-a6eb-4a8f-aa86-f181b7d1dc29 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/001c930d-1294-433f-b9df-38beb3123844/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpjivhw46_" returned: 0 in 0.129s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  6 02:06:43 np0005548731 nova_compute[232433]: 2025-12-06 07:06:43.337 232437 DEBUG nova.storage.rbd_utils [None req-86a2d933-a6eb-4a8f-aa86-f181b7d1dc29 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] rbd image 001c930d-1294-433f-b9df-38beb3123844_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec  6 02:06:43 np0005548731 nova_compute[232433]: 2025-12-06 07:06:43.341 232437 DEBUG oslo_concurrency.processutils [None req-86a2d933-a6eb-4a8f-aa86-f181b7d1dc29 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/001c930d-1294-433f-b9df-38beb3123844/disk.config 001c930d-1294-433f-b9df-38beb3123844_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  6 02:06:43 np0005548731 nova_compute[232433]: 2025-12-06 07:06:43.556 232437 DEBUG oslo_concurrency.processutils [None req-86a2d933-a6eb-4a8f-aa86-f181b7d1dc29 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/001c930d-1294-433f-b9df-38beb3123844/disk.config 001c930d-1294-433f-b9df-38beb3123844_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.215s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  6 02:06:43 np0005548731 nova_compute[232433]: 2025-12-06 07:06:43.557 232437 INFO nova.virt.libvirt.driver [None req-86a2d933-a6eb-4a8f-aa86-f181b7d1dc29 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] [instance: 001c930d-1294-433f-b9df-38beb3123844] Deleting local config drive /var/lib/nova/instances/001c930d-1294-433f-b9df-38beb3123844/disk.config because it was imported into RBD.
Dec  6 02:06:43 np0005548731 systemd-machined[195355]: New machine qemu-20-instance-00000029.
Dec  6 02:06:43 np0005548731 systemd[1]: Started Virtual Machine qemu-20-instance-00000029.
Dec  6 02:06:43 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:06:43 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:06:43 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:06:43.903 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:06:43 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:06:43 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000025s ======
Dec  6 02:06:43 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:06:43.947 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Dec  6 02:06:44 np0005548731 nova_compute[232433]: 2025-12-06 07:06:44.036 232437 DEBUG oslo_concurrency.lockutils [None req-34eb1552-ccd7-4744-a2da-9a40351e02c4 331b8c2798cf4e52a2f60b73cf2831a3 0bb2a00a1f30495ca45c1cb0c482b67c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.777s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  6 02:06:44 np0005548731 nova_compute[232433]: 2025-12-06 07:06:44.073 232437 INFO nova.scheduler.client.report [None req-34eb1552-ccd7-4744-a2da-9a40351e02c4 331b8c2798cf4e52a2f60b73cf2831a3 0bb2a00a1f30495ca45c1cb0c482b67c - - default default] Deleted allocations for instance 58618778-1470-4993-b50d-a24f19c41ed3
Dec  6 02:06:44 np0005548731 nova_compute[232433]: 2025-12-06 07:06:44.100 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  6 02:06:44 np0005548731 nova_compute[232433]: 2025-12-06 07:06:44.248 232437 DEBUG oslo_concurrency.lockutils [None req-34eb1552-ccd7-4744-a2da-9a40351e02c4 331b8c2798cf4e52a2f60b73cf2831a3 0bb2a00a1f30495ca45c1cb0c482b67c - - default default] Lock "58618778-1470-4993-b50d-a24f19c41ed3" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.231s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  6 02:06:44 np0005548731 nova_compute[232433]: 2025-12-06 07:06:44.399 232437 DEBUG nova.compute.manager [req-7fe7da71-b8ad-447c-ba8a-5e56bae2d1b2 req-5c0ba1d8-9247-45df-a020-878345fd1275 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 43acf76c-4b6b-4e85-9f2a-607bd18f7906] Received event network-changed-46e3c9e1-9d7d-48aa-9603-46bfbfe6b2a8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec  6 02:06:44 np0005548731 nova_compute[232433]: 2025-12-06 07:06:44.400 232437 DEBUG nova.compute.manager [req-7fe7da71-b8ad-447c-ba8a-5e56bae2d1b2 req-5c0ba1d8-9247-45df-a020-878345fd1275 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 43acf76c-4b6b-4e85-9f2a-607bd18f7906] Refreshing instance network info cache due to event network-changed-46e3c9e1-9d7d-48aa-9603-46bfbfe6b2a8. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec  6 02:06:44 np0005548731 nova_compute[232433]: 2025-12-06 07:06:44.401 232437 DEBUG oslo_concurrency.lockutils [req-7fe7da71-b8ad-447c-ba8a-5e56bae2d1b2 req-5c0ba1d8-9247-45df-a020-878345fd1275 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-43acf76c-4b6b-4e85-9f2a-607bd18f7906" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec  6 02:06:44 np0005548731 nova_compute[232433]: 2025-12-06 07:06:44.401 232437 DEBUG oslo_concurrency.lockutils [req-7fe7da71-b8ad-447c-ba8a-5e56bae2d1b2 req-5c0ba1d8-9247-45df-a020-878345fd1275 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-43acf76c-4b6b-4e85-9f2a-607bd18f7906" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec  6 02:06:44 np0005548731 nova_compute[232433]: 2025-12-06 07:06:44.402 232437 DEBUG nova.network.neutron [req-7fe7da71-b8ad-447c-ba8a-5e56bae2d1b2 req-5c0ba1d8-9247-45df-a020-878345fd1275 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 43acf76c-4b6b-4e85-9f2a-607bd18f7906] Refreshing network info cache for port 46e3c9e1-9d7d-48aa-9603-46bfbfe6b2a8 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec  6 02:06:44 np0005548731 nova_compute[232433]: 2025-12-06 07:06:44.545 232437 DEBUG nova.virt.driver [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Emitting event <LifecycleEvent: 1765004804.544921, 001c930d-1294-433f-b9df-38beb3123844 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec  6 02:06:44 np0005548731 nova_compute[232433]: 2025-12-06 07:06:44.545 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 001c930d-1294-433f-b9df-38beb3123844] VM Resumed (Lifecycle Event)
Dec  6 02:06:44 np0005548731 nova_compute[232433]: 2025-12-06 07:06:44.549 232437 DEBUG nova.compute.manager [None req-86a2d933-a6eb-4a8f-aa86-f181b7d1dc29 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] [instance: 001c930d-1294-433f-b9df-38beb3123844] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec  6 02:06:44 np0005548731 nova_compute[232433]: 2025-12-06 07:06:44.549 232437 DEBUG nova.virt.libvirt.driver [None req-86a2d933-a6eb-4a8f-aa86-f181b7d1dc29 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] [instance: 001c930d-1294-433f-b9df-38beb3123844] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec  6 02:06:44 np0005548731 nova_compute[232433]: 2025-12-06 07:06:44.554 232437 INFO nova.virt.libvirt.driver [-] [instance: 001c930d-1294-433f-b9df-38beb3123844] Instance spawned successfully.
Dec  6 02:06:44 np0005548731 nova_compute[232433]: 2025-12-06 07:06:44.554 232437 DEBUG nova.virt.libvirt.driver [None req-86a2d933-a6eb-4a8f-aa86-f181b7d1dc29 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] [instance: 001c930d-1294-433f-b9df-38beb3123844] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec  6 02:06:44 np0005548731 nova_compute[232433]: 2025-12-06 07:06:44.587 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 001c930d-1294-433f-b9df-38beb3123844] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec  6 02:06:44 np0005548731 nova_compute[232433]: 2025-12-06 07:06:44.591 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 001c930d-1294-433f-b9df-38beb3123844] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec  6 02:06:44 np0005548731 nova_compute[232433]: 2025-12-06 07:06:44.608 232437 DEBUG nova.virt.libvirt.driver [None req-86a2d933-a6eb-4a8f-aa86-f181b7d1dc29 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] [instance: 001c930d-1294-433f-b9df-38beb3123844] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 02:06:44 np0005548731 nova_compute[232433]: 2025-12-06 07:06:44.609 232437 DEBUG nova.virt.libvirt.driver [None req-86a2d933-a6eb-4a8f-aa86-f181b7d1dc29 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] [instance: 001c930d-1294-433f-b9df-38beb3123844] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 02:06:44 np0005548731 nova_compute[232433]: 2025-12-06 07:06:44.610 232437 DEBUG nova.virt.libvirt.driver [None req-86a2d933-a6eb-4a8f-aa86-f181b7d1dc29 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] [instance: 001c930d-1294-433f-b9df-38beb3123844] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 02:06:44 np0005548731 nova_compute[232433]: 2025-12-06 07:06:44.610 232437 DEBUG nova.virt.libvirt.driver [None req-86a2d933-a6eb-4a8f-aa86-f181b7d1dc29 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] [instance: 001c930d-1294-433f-b9df-38beb3123844] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 02:06:44 np0005548731 nova_compute[232433]: 2025-12-06 07:06:44.611 232437 DEBUG nova.virt.libvirt.driver [None req-86a2d933-a6eb-4a8f-aa86-f181b7d1dc29 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] [instance: 001c930d-1294-433f-b9df-38beb3123844] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 02:06:44 np0005548731 nova_compute[232433]: 2025-12-06 07:06:44.612 232437 DEBUG nova.virt.libvirt.driver [None req-86a2d933-a6eb-4a8f-aa86-f181b7d1dc29 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] [instance: 001c930d-1294-433f-b9df-38beb3123844] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 02:06:44 np0005548731 nova_compute[232433]: 2025-12-06 07:06:44.617 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 001c930d-1294-433f-b9df-38beb3123844] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  6 02:06:44 np0005548731 nova_compute[232433]: 2025-12-06 07:06:44.618 232437 DEBUG nova.virt.driver [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Emitting event <LifecycleEvent: 1765004804.5469205, 001c930d-1294-433f-b9df-38beb3123844 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 02:06:44 np0005548731 nova_compute[232433]: 2025-12-06 07:06:44.618 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 001c930d-1294-433f-b9df-38beb3123844] VM Started (Lifecycle Event)#033[00m
Dec  6 02:06:44 np0005548731 nova_compute[232433]: 2025-12-06 07:06:44.664 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 001c930d-1294-433f-b9df-38beb3123844] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:06:44 np0005548731 nova_compute[232433]: 2025-12-06 07:06:44.667 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 001c930d-1294-433f-b9df-38beb3123844] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  6 02:06:44 np0005548731 nova_compute[232433]: 2025-12-06 07:06:44.696 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 001c930d-1294-433f-b9df-38beb3123844] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  6 02:06:44 np0005548731 nova_compute[232433]: 2025-12-06 07:06:44.708 232437 INFO nova.compute.manager [None req-86a2d933-a6eb-4a8f-aa86-f181b7d1dc29 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] [instance: 001c930d-1294-433f-b9df-38beb3123844] Took 3.84 seconds to spawn the instance on the hypervisor.#033[00m
Dec  6 02:06:44 np0005548731 nova_compute[232433]: 2025-12-06 07:06:44.708 232437 DEBUG nova.compute.manager [None req-86a2d933-a6eb-4a8f-aa86-f181b7d1dc29 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] [instance: 001c930d-1294-433f-b9df-38beb3123844] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:06:44 np0005548731 nova_compute[232433]: 2025-12-06 07:06:44.781 232437 INFO nova.compute.manager [None req-86a2d933-a6eb-4a8f-aa86-f181b7d1dc29 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] [instance: 001c930d-1294-433f-b9df-38beb3123844] Took 5.21 seconds to build instance.#033[00m
Dec  6 02:06:44 np0005548731 nova_compute[232433]: 2025-12-06 07:06:44.823 232437 DEBUG oslo_concurrency.lockutils [None req-86a2d933-a6eb-4a8f-aa86-f181b7d1dc29 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] Lock "001c930d-1294-433f-b9df-38beb3123844" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 5.381s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:06:45 np0005548731 nova_compute[232433]: 2025-12-06 07:06:45.103 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:06:45 np0005548731 nova_compute[232433]: 2025-12-06 07:06:45.283 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:06:45 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e181 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:06:45 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:06:45 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:06:45 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:06:45.904 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:06:45 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:06:45 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000025s ======
Dec  6 02:06:45 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:06:45.949 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Dec  6 02:06:46 np0005548731 nova_compute[232433]: 2025-12-06 07:06:46.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:06:46 np0005548731 nova_compute[232433]: 2025-12-06 07:06:46.507 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:06:47 np0005548731 nova_compute[232433]: 2025-12-06 07:06:47.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:06:47 np0005548731 nova_compute[232433]: 2025-12-06 07:06:47.106 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  6 02:06:47 np0005548731 nova_compute[232433]: 2025-12-06 07:06:47.106 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  6 02:06:47 np0005548731 nova_compute[232433]: 2025-12-06 07:06:47.758 232437 DEBUG nova.network.neutron [req-7fe7da71-b8ad-447c-ba8a-5e56bae2d1b2 req-5c0ba1d8-9247-45df-a020-878345fd1275 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 43acf76c-4b6b-4e85-9f2a-607bd18f7906] Updated VIF entry in instance network info cache for port 46e3c9e1-9d7d-48aa-9603-46bfbfe6b2a8. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  6 02:06:47 np0005548731 nova_compute[232433]: 2025-12-06 07:06:47.759 232437 DEBUG nova.network.neutron [req-7fe7da71-b8ad-447c-ba8a-5e56bae2d1b2 req-5c0ba1d8-9247-45df-a020-878345fd1275 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 43acf76c-4b6b-4e85-9f2a-607bd18f7906] Updating instance_info_cache with network_info: [{"id": "46e3c9e1-9d7d-48aa-9603-46bfbfe6b2a8", "address": "fa:16:3e:c3:1f:9a", "network": {"id": "344a2c5d-4516-4e02-9384-4797cfc76497", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-1967587837-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.245", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "503b2dfdce9d47598a8b9de4b15e1d45", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap46e3c9e1-9d", "ovs_interfaceid": "46e3c9e1-9d7d-48aa-9603-46bfbfe6b2a8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 02:06:47 np0005548731 ceph-osd[80147]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #46. Immutable memtables: 3.
Dec  6 02:06:47 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:06:47 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:06:47 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:06:47.906 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:06:47 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:06:47 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:06:47 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:06:47.952 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:06:48 np0005548731 ovn_controller[133927]: 2025-12-06T07:06:48Z|00018|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:c3:1f:9a 10.100.0.12
Dec  6 02:06:48 np0005548731 ovn_controller[133927]: 2025-12-06T07:06:48Z|00019|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:c3:1f:9a 10.100.0.12
Dec  6 02:06:49 np0005548731 nova_compute[232433]: 2025-12-06 07:06:48.999 232437 DEBUG oslo_concurrency.lockutils [req-7fe7da71-b8ad-447c-ba8a-5e56bae2d1b2 req-5c0ba1d8-9247-45df-a020-878345fd1275 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-43acf76c-4b6b-4e85-9f2a-607bd18f7906" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 02:06:49 np0005548731 nova_compute[232433]: 2025-12-06 07:06:49.000 232437 DEBUG nova.compute.manager [req-7fe7da71-b8ad-447c-ba8a-5e56bae2d1b2 req-5c0ba1d8-9247-45df-a020-878345fd1275 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 58618778-1470-4993-b50d-a24f19c41ed3] Received event network-vif-deleted-123afe9f-34c7-4df6-bb33-a09d55405a44 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:06:49 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:06:49.268 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=14, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:ec:b3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '32:72:e7:89:e0:7d'}, ipsec=False) old=SB_Global(nb_cfg=13) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 02:06:49 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:06:49.270 143965 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec  6 02:06:49 np0005548731 nova_compute[232433]: 2025-12-06 07:06:49.271 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:06:49 np0005548731 nova_compute[232433]: 2025-12-06 07:06:49.735 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Acquiring lock "refresh_cache-43acf76c-4b6b-4e85-9f2a-607bd18f7906" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 02:06:49 np0005548731 nova_compute[232433]: 2025-12-06 07:06:49.735 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Acquired lock "refresh_cache-43acf76c-4b6b-4e85-9f2a-607bd18f7906" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 02:06:49 np0005548731 nova_compute[232433]: 2025-12-06 07:06:49.736 232437 DEBUG nova.network.neutron [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] [instance: 43acf76c-4b6b-4e85-9f2a-607bd18f7906] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Dec  6 02:06:49 np0005548731 nova_compute[232433]: 2025-12-06 07:06:49.736 232437 DEBUG nova.objects.instance [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lazy-loading 'info_cache' on Instance uuid 43acf76c-4b6b-4e85-9f2a-607bd18f7906 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:06:49 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:06:49 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:06:49 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:06:49.908 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:06:49 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:06:49 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:06:49 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:06:49.954 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:06:50 np0005548731 nova_compute[232433]: 2025-12-06 07:06:50.286 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:06:50 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e181 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:06:50 np0005548731 nova_compute[232433]: 2025-12-06 07:06:50.559 232437 DEBUG oslo_concurrency.lockutils [None req-3fbe79ee-1a90-448c-bf7f-82ff460f9a14 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] Acquiring lock "001c930d-1294-433f-b9df-38beb3123844" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:06:50 np0005548731 nova_compute[232433]: 2025-12-06 07:06:50.560 232437 DEBUG oslo_concurrency.lockutils [None req-3fbe79ee-1a90-448c-bf7f-82ff460f9a14 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] Lock "001c930d-1294-433f-b9df-38beb3123844" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:06:50 np0005548731 nova_compute[232433]: 2025-12-06 07:06:50.560 232437 DEBUG oslo_concurrency.lockutils [None req-3fbe79ee-1a90-448c-bf7f-82ff460f9a14 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] Acquiring lock "001c930d-1294-433f-b9df-38beb3123844-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:06:50 np0005548731 nova_compute[232433]: 2025-12-06 07:06:50.561 232437 DEBUG oslo_concurrency.lockutils [None req-3fbe79ee-1a90-448c-bf7f-82ff460f9a14 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] Lock "001c930d-1294-433f-b9df-38beb3123844-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:06:50 np0005548731 nova_compute[232433]: 2025-12-06 07:06:50.561 232437 DEBUG oslo_concurrency.lockutils [None req-3fbe79ee-1a90-448c-bf7f-82ff460f9a14 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] Lock "001c930d-1294-433f-b9df-38beb3123844-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:06:50 np0005548731 nova_compute[232433]: 2025-12-06 07:06:50.563 232437 INFO nova.compute.manager [None req-3fbe79ee-1a90-448c-bf7f-82ff460f9a14 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] [instance: 001c930d-1294-433f-b9df-38beb3123844] Terminating instance#033[00m
Dec  6 02:06:50 np0005548731 nova_compute[232433]: 2025-12-06 07:06:50.565 232437 DEBUG oslo_concurrency.lockutils [None req-3fbe79ee-1a90-448c-bf7f-82ff460f9a14 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] Acquiring lock "refresh_cache-001c930d-1294-433f-b9df-38beb3123844" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 02:06:50 np0005548731 nova_compute[232433]: 2025-12-06 07:06:50.566 232437 DEBUG oslo_concurrency.lockutils [None req-3fbe79ee-1a90-448c-bf7f-82ff460f9a14 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] Acquired lock "refresh_cache-001c930d-1294-433f-b9df-38beb3123844" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 02:06:50 np0005548731 nova_compute[232433]: 2025-12-06 07:06:50.566 232437 DEBUG nova.network.neutron [None req-3fbe79ee-1a90-448c-bf7f-82ff460f9a14 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] [instance: 001c930d-1294-433f-b9df-38beb3123844] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec  6 02:06:51 np0005548731 nova_compute[232433]: 2025-12-06 07:06:51.022 232437 DEBUG nova.network.neutron [None req-3fbe79ee-1a90-448c-bf7f-82ff460f9a14 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] [instance: 001c930d-1294-433f-b9df-38beb3123844] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec  6 02:06:51 np0005548731 nova_compute[232433]: 2025-12-06 07:06:51.510 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:06:51 np0005548731 nova_compute[232433]: 2025-12-06 07:06:51.657 232437 DEBUG nova.network.neutron [None req-3fbe79ee-1a90-448c-bf7f-82ff460f9a14 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] [instance: 001c930d-1294-433f-b9df-38beb3123844] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 02:06:51 np0005548731 nova_compute[232433]: 2025-12-06 07:06:51.693 232437 DEBUG oslo_concurrency.lockutils [None req-3fbe79ee-1a90-448c-bf7f-82ff460f9a14 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] Releasing lock "refresh_cache-001c930d-1294-433f-b9df-38beb3123844" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 02:06:51 np0005548731 nova_compute[232433]: 2025-12-06 07:06:51.694 232437 DEBUG nova.compute.manager [None req-3fbe79ee-1a90-448c-bf7f-82ff460f9a14 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] [instance: 001c930d-1294-433f-b9df-38beb3123844] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec  6 02:06:51 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:06:51 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:06:51 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:06:51.910 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:06:51 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:06:51 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:06:51 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:06:51.957 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:06:51 np0005548731 systemd[1]: machine-qemu\x2d20\x2dinstance\x2d00000029.scope: Deactivated successfully.
Dec  6 02:06:51 np0005548731 systemd[1]: machine-qemu\x2d20\x2dinstance\x2d00000029.scope: Consumed 8.210s CPU time.
Dec  6 02:06:51 np0005548731 systemd-machined[195355]: Machine qemu-20-instance-00000029 terminated.
Dec  6 02:06:52 np0005548731 nova_compute[232433]: 2025-12-06 07:06:52.113 232437 INFO nova.virt.libvirt.driver [-] [instance: 001c930d-1294-433f-b9df-38beb3123844] Instance destroyed successfully.#033[00m
Dec  6 02:06:52 np0005548731 nova_compute[232433]: 2025-12-06 07:06:52.114 232437 DEBUG nova.objects.instance [None req-3fbe79ee-1a90-448c-bf7f-82ff460f9a14 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] Lazy-loading 'resources' on Instance uuid 001c930d-1294-433f-b9df-38beb3123844 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:06:52 np0005548731 nova_compute[232433]: 2025-12-06 07:06:52.344 232437 DEBUG nova.network.neutron [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] [instance: 43acf76c-4b6b-4e85-9f2a-607bd18f7906] Updating instance_info_cache with network_info: [{"id": "46e3c9e1-9d7d-48aa-9603-46bfbfe6b2a8", "address": "fa:16:3e:c3:1f:9a", "network": {"id": "344a2c5d-4516-4e02-9384-4797cfc76497", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-1967587837-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.245", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "503b2dfdce9d47598a8b9de4b15e1d45", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap46e3c9e1-9d", "ovs_interfaceid": "46e3c9e1-9d7d-48aa-9603-46bfbfe6b2a8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 02:06:52 np0005548731 nova_compute[232433]: 2025-12-06 07:06:52.936 232437 INFO nova.virt.libvirt.driver [None req-3fbe79ee-1a90-448c-bf7f-82ff460f9a14 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] [instance: 001c930d-1294-433f-b9df-38beb3123844] Deleting instance files /var/lib/nova/instances/001c930d-1294-433f-b9df-38beb3123844_del#033[00m
Dec  6 02:06:52 np0005548731 nova_compute[232433]: 2025-12-06 07:06:52.937 232437 INFO nova.virt.libvirt.driver [None req-3fbe79ee-1a90-448c-bf7f-82ff460f9a14 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] [instance: 001c930d-1294-433f-b9df-38beb3123844] Deletion of /var/lib/nova/instances/001c930d-1294-433f-b9df-38beb3123844_del complete#033[00m
Dec  6 02:06:53 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:06:53.272 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=9f96b960-b4f2-40bd-ae99-08121f5e8b78, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '14'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:06:53 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:06:53 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:06:53 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:06:53.912 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:06:53 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:06:53 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000025s ======
Dec  6 02:06:53 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:06:53.960 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Dec  6 02:06:54 np0005548731 nova_compute[232433]: 2025-12-06 07:06:54.144 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Releasing lock "refresh_cache-43acf76c-4b6b-4e85-9f2a-607bd18f7906" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 02:06:54 np0005548731 nova_compute[232433]: 2025-12-06 07:06:54.145 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] [instance: 43acf76c-4b6b-4e85-9f2a-607bd18f7906] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Dec  6 02:06:54 np0005548731 nova_compute[232433]: 2025-12-06 07:06:54.145 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:06:54 np0005548731 nova_compute[232433]: 2025-12-06 07:06:54.145 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:06:54 np0005548731 nova_compute[232433]: 2025-12-06 07:06:54.146 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:06:54 np0005548731 nova_compute[232433]: 2025-12-06 07:06:54.146 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  6 02:06:54 np0005548731 nova_compute[232433]: 2025-12-06 07:06:54.146 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:06:54 np0005548731 nova_compute[232433]: 2025-12-06 07:06:54.295 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:06:54 np0005548731 nova_compute[232433]: 2025-12-06 07:06:54.295 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:06:54 np0005548731 nova_compute[232433]: 2025-12-06 07:06:54.296 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:06:54 np0005548731 nova_compute[232433]: 2025-12-06 07:06:54.296 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  6 02:06:54 np0005548731 nova_compute[232433]: 2025-12-06 07:06:54.296 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:06:54 np0005548731 nova_compute[232433]: 2025-12-06 07:06:54.326 232437 INFO nova.compute.manager [None req-3fbe79ee-1a90-448c-bf7f-82ff460f9a14 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] [instance: 001c930d-1294-433f-b9df-38beb3123844] Took 2.63 seconds to destroy the instance on the hypervisor.#033[00m
Dec  6 02:06:54 np0005548731 nova_compute[232433]: 2025-12-06 07:06:54.327 232437 DEBUG oslo.service.loopingcall [None req-3fbe79ee-1a90-448c-bf7f-82ff460f9a14 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Dec  6 02:06:54 np0005548731 nova_compute[232433]: 2025-12-06 07:06:54.327 232437 DEBUG nova.compute.manager [-] [instance: 001c930d-1294-433f-b9df-38beb3123844] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Dec  6 02:06:54 np0005548731 nova_compute[232433]: 2025-12-06 07:06:54.328 232437 DEBUG nova.network.neutron [-] [instance: 001c930d-1294-433f-b9df-38beb3123844] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Dec  6 02:06:54 np0005548731 nova_compute[232433]: 2025-12-06 07:06:54.675 232437 DEBUG nova.network.neutron [-] [instance: 001c930d-1294-433f-b9df-38beb3123844] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec  6 02:06:54 np0005548731 nova_compute[232433]: 2025-12-06 07:06:54.696 232437 DEBUG nova.network.neutron [-] [instance: 001c930d-1294-433f-b9df-38beb3123844] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 02:06:54 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 02:06:54 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/4114717483' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 02:06:54 np0005548731 nova_compute[232433]: 2025-12-06 07:06:54.722 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.425s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:06:54 np0005548731 nova_compute[232433]: 2025-12-06 07:06:54.729 232437 INFO nova.compute.manager [-] [instance: 001c930d-1294-433f-b9df-38beb3123844] Took 0.40 seconds to deallocate network for instance.#033[00m
Dec  6 02:06:54 np0005548731 nova_compute[232433]: 2025-12-06 07:06:54.810 232437 DEBUG oslo_concurrency.lockutils [None req-3fbe79ee-1a90-448c-bf7f-82ff460f9a14 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:06:54 np0005548731 nova_compute[232433]: 2025-12-06 07:06:54.811 232437 DEBUG oslo_concurrency.lockutils [None req-3fbe79ee-1a90-448c-bf7f-82ff460f9a14 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:06:54 np0005548731 nova_compute[232433]: 2025-12-06 07:06:54.822 232437 DEBUG nova.virt.libvirt.driver [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] skipping disk for instance-00000025 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  6 02:06:54 np0005548731 nova_compute[232433]: 2025-12-06 07:06:54.822 232437 DEBUG nova.virt.libvirt.driver [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] skipping disk for instance-00000025 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  6 02:06:54 np0005548731 nova_compute[232433]: 2025-12-06 07:06:54.919 232437 DEBUG oslo_concurrency.processutils [None req-3fbe79ee-1a90-448c-bf7f-82ff460f9a14 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:06:54 np0005548731 ovn_controller[133927]: 2025-12-06T07:06:54Z|00133|binding|INFO|Releasing lport 0a001355-8429-4726-b883-743f03fd3e79 from this chassis (sb_readonly=0)
Dec  6 02:06:55 np0005548731 nova_compute[232433]: 2025-12-06 07:06:55.022 232437 WARNING nova.virt.libvirt.driver [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  6 02:06:55 np0005548731 nova_compute[232433]: 2025-12-06 07:06:55.024 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4515MB free_disk=20.720535278320312GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, 
"label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  6 02:06:55 np0005548731 nova_compute[232433]: 2025-12-06 07:06:55.025 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:06:55 np0005548731 nova_compute[232433]: 2025-12-06 07:06:55.026 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:06:55 np0005548731 nova_compute[232433]: 2025-12-06 07:06:55.251 232437 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765004800.2500362, 58618778-1470-4993-b50d-a24f19c41ed3 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 02:06:55 np0005548731 nova_compute[232433]: 2025-12-06 07:06:55.252 232437 INFO nova.compute.manager [-] [instance: 58618778-1470-4993-b50d-a24f19c41ed3] VM Stopped (Lifecycle Event)#033[00m
Dec  6 02:06:55 np0005548731 nova_compute[232433]: 2025-12-06 07:06:55.280 232437 DEBUG nova.compute.manager [None req-b11b42e2-29da-4075-8649-d58a74715751 - - - - - -] [instance: 58618778-1470-4993-b50d-a24f19c41ed3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:06:55 np0005548731 nova_compute[232433]: 2025-12-06 07:06:55.287 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:06:55 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e181 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:06:55 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 02:06:55 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/199118' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 02:06:55 np0005548731 nova_compute[232433]: 2025-12-06 07:06:55.351 232437 DEBUG oslo_concurrency.processutils [None req-3fbe79ee-1a90-448c-bf7f-82ff460f9a14 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.431s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:06:55 np0005548731 nova_compute[232433]: 2025-12-06 07:06:55.356 232437 DEBUG nova.compute.provider_tree [None req-3fbe79ee-1a90-448c-bf7f-82ff460f9a14 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] Inventory has not changed in ProviderTree for provider: 6d00757a-082f-486d-ae84-869a2ba2e6e7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  6 02:06:55 np0005548731 nova_compute[232433]: 2025-12-06 07:06:55.387 232437 DEBUG nova.scheduler.client.report [None req-3fbe79ee-1a90-448c-bf7f-82ff460f9a14 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] Inventory has not changed for provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  6 02:06:55 np0005548731 nova_compute[232433]: 2025-12-06 07:06:55.432 232437 DEBUG oslo_concurrency.lockutils [None req-3fbe79ee-1a90-448c-bf7f-82ff460f9a14 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.621s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:06:55 np0005548731 nova_compute[232433]: 2025-12-06 07:06:55.434 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.409s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:06:55 np0005548731 nova_compute[232433]: 2025-12-06 07:06:55.490 232437 INFO nova.scheduler.client.report [None req-3fbe79ee-1a90-448c-bf7f-82ff460f9a14 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] Deleted allocations for instance 001c930d-1294-433f-b9df-38beb3123844#033[00m
Dec  6 02:06:55 np0005548731 nova_compute[232433]: 2025-12-06 07:06:55.534 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Instance 43acf76c-4b6b-4e85-9f2a-607bd18f7906 actively managed on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128, 'DISK_GB': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  6 02:06:55 np0005548731 nova_compute[232433]: 2025-12-06 07:06:55.534 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  6 02:06:55 np0005548731 nova_compute[232433]: 2025-12-06 07:06:55.534 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  6 02:06:55 np0005548731 nova_compute[232433]: 2025-12-06 07:06:55.580 232437 DEBUG oslo_concurrency.lockutils [None req-3fbe79ee-1a90-448c-bf7f-82ff460f9a14 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] Lock "001c930d-1294-433f-b9df-38beb3123844" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.020s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:06:55 np0005548731 nova_compute[232433]: 2025-12-06 07:06:55.589 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:06:55 np0005548731 nova_compute[232433]: 2025-12-06 07:06:55.714 232437 DEBUG nova.compute.manager [req-28814a7e-5902-4e77-8b95-e4c01b8d3cb0 req-5e5da2ff-946b-48f4-a597-ce2b6e74d05d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 43acf76c-4b6b-4e85-9f2a-607bd18f7906] Received event network-changed-46e3c9e1-9d7d-48aa-9603-46bfbfe6b2a8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:06:55 np0005548731 nova_compute[232433]: 2025-12-06 07:06:55.714 232437 DEBUG nova.compute.manager [req-28814a7e-5902-4e77-8b95-e4c01b8d3cb0 req-5e5da2ff-946b-48f4-a597-ce2b6e74d05d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 43acf76c-4b6b-4e85-9f2a-607bd18f7906] Refreshing instance network info cache due to event network-changed-46e3c9e1-9d7d-48aa-9603-46bfbfe6b2a8. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  6 02:06:55 np0005548731 nova_compute[232433]: 2025-12-06 07:06:55.715 232437 DEBUG oslo_concurrency.lockutils [req-28814a7e-5902-4e77-8b95-e4c01b8d3cb0 req-5e5da2ff-946b-48f4-a597-ce2b6e74d05d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-43acf76c-4b6b-4e85-9f2a-607bd18f7906" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 02:06:55 np0005548731 nova_compute[232433]: 2025-12-06 07:06:55.715 232437 DEBUG oslo_concurrency.lockutils [req-28814a7e-5902-4e77-8b95-e4c01b8d3cb0 req-5e5da2ff-946b-48f4-a597-ce2b6e74d05d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-43acf76c-4b6b-4e85-9f2a-607bd18f7906" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 02:06:55 np0005548731 nova_compute[232433]: 2025-12-06 07:06:55.716 232437 DEBUG nova.network.neutron [req-28814a7e-5902-4e77-8b95-e4c01b8d3cb0 req-5e5da2ff-946b-48f4-a597-ce2b6e74d05d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 43acf76c-4b6b-4e85-9f2a-607bd18f7906] Refreshing network info cache for port 46e3c9e1-9d7d-48aa-9603-46bfbfe6b2a8 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  6 02:06:55 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:06:55 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000025s ======
Dec  6 02:06:55 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:06:55.914 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Dec  6 02:06:55 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:06:55 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000025s ======
Dec  6 02:06:55 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:06:55.963 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Dec  6 02:06:55 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 02:06:55 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1662777040' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 02:06:56 np0005548731 nova_compute[232433]: 2025-12-06 07:06:56.002 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.413s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:06:56 np0005548731 nova_compute[232433]: 2025-12-06 07:06:56.007 232437 DEBUG nova.compute.provider_tree [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Inventory has not changed in ProviderTree for provider: 6d00757a-082f-486d-ae84-869a2ba2e6e7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  6 02:06:56 np0005548731 nova_compute[232433]: 2025-12-06 07:06:56.034 232437 DEBUG nova.scheduler.client.report [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Inventory has not changed for provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  6 02:06:56 np0005548731 nova_compute[232433]: 2025-12-06 07:06:56.088 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  6 02:06:56 np0005548731 nova_compute[232433]: 2025-12-06 07:06:56.089 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.655s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:06:56 np0005548731 nova_compute[232433]: 2025-12-06 07:06:56.511 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:06:57 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:06:57 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000025s ======
Dec  6 02:06:57 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:06:57.915 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Dec  6 02:06:57 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:06:57 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.029000727s ======
Dec  6 02:06:57 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:06:57.966 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.029000727s
Dec  6 02:06:58 np0005548731 nova_compute[232433]: 2025-12-06 07:06:58.084 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:06:59 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:06:59 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000025s ======
Dec  6 02:06:59 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:06:59.917 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Dec  6 02:06:59 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:07:00 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:07:00 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:06:59.998 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:07:00 np0005548731 nova_compute[232433]: 2025-12-06 07:07:00.289 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:07:00 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e181 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:07:00 np0005548731 nova_compute[232433]: 2025-12-06 07:07:00.408 232437 DEBUG nova.network.neutron [req-28814a7e-5902-4e77-8b95-e4c01b8d3cb0 req-5e5da2ff-946b-48f4-a597-ce2b6e74d05d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 43acf76c-4b6b-4e85-9f2a-607bd18f7906] Updated VIF entry in instance network info cache for port 46e3c9e1-9d7d-48aa-9603-46bfbfe6b2a8. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  6 02:07:00 np0005548731 nova_compute[232433]: 2025-12-06 07:07:00.408 232437 DEBUG nova.network.neutron [req-28814a7e-5902-4e77-8b95-e4c01b8d3cb0 req-5e5da2ff-946b-48f4-a597-ce2b6e74d05d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 43acf76c-4b6b-4e85-9f2a-607bd18f7906] Updating instance_info_cache with network_info: [{"id": "46e3c9e1-9d7d-48aa-9603-46bfbfe6b2a8", "address": "fa:16:3e:c3:1f:9a", "network": {"id": "344a2c5d-4516-4e02-9384-4797cfc76497", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-1967587837-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "503b2dfdce9d47598a8b9de4b15e1d45", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap46e3c9e1-9d", "ovs_interfaceid": "46e3c9e1-9d7d-48aa-9603-46bfbfe6b2a8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 02:07:00 np0005548731 nova_compute[232433]: 2025-12-06 07:07:00.504 232437 DEBUG oslo_concurrency.lockutils [req-28814a7e-5902-4e77-8b95-e4c01b8d3cb0 req-5e5da2ff-946b-48f4-a597-ce2b6e74d05d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-43acf76c-4b6b-4e85-9f2a-607bd18f7906" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 02:07:00 np0005548731 nova_compute[232433]: 2025-12-06 07:07:00.598 232437 DEBUG oslo_concurrency.lockutils [None req-1ae1ced4-c8f1-4f0d-90d2-d002a076abb8 c9025bd3f0854dff80d9408800d6b76b 503b2dfdce9d47598a8b9de4b15e1d45 - - default default] Acquiring lock "43acf76c-4b6b-4e85-9f2a-607bd18f7906" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:07:00 np0005548731 nova_compute[232433]: 2025-12-06 07:07:00.598 232437 DEBUG oslo_concurrency.lockutils [None req-1ae1ced4-c8f1-4f0d-90d2-d002a076abb8 c9025bd3f0854dff80d9408800d6b76b 503b2dfdce9d47598a8b9de4b15e1d45 - - default default] Lock "43acf76c-4b6b-4e85-9f2a-607bd18f7906" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:07:00 np0005548731 nova_compute[232433]: 2025-12-06 07:07:00.598 232437 DEBUG oslo_concurrency.lockutils [None req-1ae1ced4-c8f1-4f0d-90d2-d002a076abb8 c9025bd3f0854dff80d9408800d6b76b 503b2dfdce9d47598a8b9de4b15e1d45 - - default default] Acquiring lock "43acf76c-4b6b-4e85-9f2a-607bd18f7906-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:07:00 np0005548731 nova_compute[232433]: 2025-12-06 07:07:00.599 232437 DEBUG oslo_concurrency.lockutils [None req-1ae1ced4-c8f1-4f0d-90d2-d002a076abb8 c9025bd3f0854dff80d9408800d6b76b 503b2dfdce9d47598a8b9de4b15e1d45 - - default default] Lock "43acf76c-4b6b-4e85-9f2a-607bd18f7906-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:07:00 np0005548731 nova_compute[232433]: 2025-12-06 07:07:00.599 232437 DEBUG oslo_concurrency.lockutils [None req-1ae1ced4-c8f1-4f0d-90d2-d002a076abb8 c9025bd3f0854dff80d9408800d6b76b 503b2dfdce9d47598a8b9de4b15e1d45 - - default default] Lock "43acf76c-4b6b-4e85-9f2a-607bd18f7906-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:07:00 np0005548731 nova_compute[232433]: 2025-12-06 07:07:00.600 232437 INFO nova.compute.manager [None req-1ae1ced4-c8f1-4f0d-90d2-d002a076abb8 c9025bd3f0854dff80d9408800d6b76b 503b2dfdce9d47598a8b9de4b15e1d45 - - default default] [instance: 43acf76c-4b6b-4e85-9f2a-607bd18f7906] Terminating instance#033[00m
Dec  6 02:07:00 np0005548731 nova_compute[232433]: 2025-12-06 07:07:00.602 232437 DEBUG nova.compute.manager [None req-1ae1ced4-c8f1-4f0d-90d2-d002a076abb8 c9025bd3f0854dff80d9408800d6b76b 503b2dfdce9d47598a8b9de4b15e1d45 - - default default] [instance: 43acf76c-4b6b-4e85-9f2a-607bd18f7906] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec  6 02:07:00 np0005548731 kernel: tap46e3c9e1-9d (unregistering): left promiscuous mode
Dec  6 02:07:00 np0005548731 NetworkManager[49182]: <info>  [1765004820.6534] device (tap46e3c9e1-9d): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec  6 02:07:00 np0005548731 ovn_controller[133927]: 2025-12-06T07:07:00Z|00134|binding|INFO|Releasing lport 46e3c9e1-9d7d-48aa-9603-46bfbfe6b2a8 from this chassis (sb_readonly=0)
Dec  6 02:07:00 np0005548731 ovn_controller[133927]: 2025-12-06T07:07:00Z|00135|binding|INFO|Setting lport 46e3c9e1-9d7d-48aa-9603-46bfbfe6b2a8 down in Southbound
Dec  6 02:07:00 np0005548731 nova_compute[232433]: 2025-12-06 07:07:00.667 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:07:00 np0005548731 ovn_controller[133927]: 2025-12-06T07:07:00Z|00136|binding|INFO|Removing iface tap46e3c9e1-9d ovn-installed in OVS
Dec  6 02:07:00 np0005548731 nova_compute[232433]: 2025-12-06 07:07:00.668 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:07:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:07:00.674 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c3:1f:9a 10.100.0.12'], port_security=['fa:16:3e:c3:1f:9a 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '43acf76c-4b6b-4e85-9f2a-607bd18f7906', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-344a2c5d-4516-4e02-9384-4797cfc76497', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '503b2dfdce9d47598a8b9de4b15e1d45', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'cd9b8be6-903c-4fbc-b82c-3bc27105f7c3', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9327b272-f30f-48a7-bcf4-62320d61cbdc, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], logical_port=46e3c9e1-9d7d-48aa-9603-46bfbfe6b2a8) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 02:07:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:07:00.675 143965 INFO neutron.agent.ovn.metadata.agent [-] Port 46e3c9e1-9d7d-48aa-9603-46bfbfe6b2a8 in datapath 344a2c5d-4516-4e02-9384-4797cfc76497 unbound from our chassis#033[00m
Dec  6 02:07:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:07:00.677 143965 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 344a2c5d-4516-4e02-9384-4797cfc76497, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec  6 02:07:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:07:00.679 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[f4434fbf-ea50-4af9-92a8-5a1e7e92c2fe]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:07:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:07:00.680 143965 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-344a2c5d-4516-4e02-9384-4797cfc76497 namespace which is not needed anymore#033[00m
Dec  6 02:07:00 np0005548731 nova_compute[232433]: 2025-12-06 07:07:00.690 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:07:00 np0005548731 systemd[1]: machine-qemu\x2d19\x2dinstance\x2d00000025.scope: Deactivated successfully.
Dec  6 02:07:00 np0005548731 systemd[1]: machine-qemu\x2d19\x2dinstance\x2d00000025.scope: Consumed 14.853s CPU time.
Dec  6 02:07:00 np0005548731 systemd-machined[195355]: Machine qemu-19-instance-00000025 terminated.
Dec  6 02:07:00 np0005548731 neutron-haproxy-ovnmeta-344a2c5d-4516-4e02-9384-4797cfc76497[250449]: [NOTICE]   (250453) : haproxy version is 2.8.14-c23fe91
Dec  6 02:07:00 np0005548731 neutron-haproxy-ovnmeta-344a2c5d-4516-4e02-9384-4797cfc76497[250449]: [NOTICE]   (250453) : path to executable is /usr/sbin/haproxy
Dec  6 02:07:00 np0005548731 neutron-haproxy-ovnmeta-344a2c5d-4516-4e02-9384-4797cfc76497[250449]: [WARNING]  (250453) : Exiting Master process...
Dec  6 02:07:00 np0005548731 neutron-haproxy-ovnmeta-344a2c5d-4516-4e02-9384-4797cfc76497[250449]: [WARNING]  (250453) : Exiting Master process...
Dec  6 02:07:00 np0005548731 neutron-haproxy-ovnmeta-344a2c5d-4516-4e02-9384-4797cfc76497[250449]: [ALERT]    (250453) : Current worker (250455) exited with code 143 (Terminated)
Dec  6 02:07:00 np0005548731 neutron-haproxy-ovnmeta-344a2c5d-4516-4e02-9384-4797cfc76497[250449]: [WARNING]  (250453) : All workers exited. Exiting... (0)
Dec  6 02:07:00 np0005548731 systemd[1]: libpod-f89fc0c453cac15fc96657e99da47768b01e3597b38329da91fef5b6b74355e0.scope: Deactivated successfully.
Dec  6 02:07:00 np0005548731 podman[251236]: 2025-12-06 07:07:00.815948051 +0000 UTC m=+0.047837717 container died f89fc0c453cac15fc96657e99da47768b01e3597b38329da91fef5b6b74355e0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-344a2c5d-4516-4e02-9384-4797cfc76497, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec  6 02:07:00 np0005548731 nova_compute[232433]: 2025-12-06 07:07:00.820 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:07:00 np0005548731 nova_compute[232433]: 2025-12-06 07:07:00.824 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:07:00 np0005548731 nova_compute[232433]: 2025-12-06 07:07:00.836 232437 INFO nova.virt.libvirt.driver [-] [instance: 43acf76c-4b6b-4e85-9f2a-607bd18f7906] Instance destroyed successfully.#033[00m
Dec  6 02:07:00 np0005548731 nova_compute[232433]: 2025-12-06 07:07:00.837 232437 DEBUG nova.objects.instance [None req-1ae1ced4-c8f1-4f0d-90d2-d002a076abb8 c9025bd3f0854dff80d9408800d6b76b 503b2dfdce9d47598a8b9de4b15e1d45 - - default default] Lazy-loading 'resources' on Instance uuid 43acf76c-4b6b-4e85-9f2a-607bd18f7906 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:07:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:07:00.850 143965 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:07:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:07:00.851 143965 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:07:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:07:00.851 143965 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:07:00 np0005548731 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-f89fc0c453cac15fc96657e99da47768b01e3597b38329da91fef5b6b74355e0-userdata-shm.mount: Deactivated successfully.
Dec  6 02:07:00 np0005548731 systemd[1]: var-lib-containers-storage-overlay-3319244f0375c034cfe602eca227eb0fd63d1a65c0a36070546be19678681177-merged.mount: Deactivated successfully.
Dec  6 02:07:00 np0005548731 nova_compute[232433]: 2025-12-06 07:07:00.863 232437 DEBUG nova.virt.libvirt.vif [None req-1ae1ced4-c8f1-4f0d-90d2-d002a076abb8 c9025bd3f0854dff80d9408800d6b76b 503b2dfdce9d47598a8b9de4b15e1d45 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T07:06:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-FloatingIPsAssociationTestJSON-server-386327972',display_name='tempest-FloatingIPsAssociationTestJSON-server-386327972',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-floatingipsassociationtestjson-server-386327972',id=37,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-06T07:06:34Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='503b2dfdce9d47598a8b9de4b15e1d45',ramdisk_id='',reservation_id='r-ocfaists',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_v
if_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-FloatingIPsAssociationTestJSON-263213844',owner_user_name='tempest-FloatingIPsAssociationTestJSON-263213844-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T07:06:34Z,user_data=None,user_id='c9025bd3f0854dff80d9408800d6b76b',uuid=43acf76c-4b6b-4e85-9f2a-607bd18f7906,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "46e3c9e1-9d7d-48aa-9603-46bfbfe6b2a8", "address": "fa:16:3e:c3:1f:9a", "network": {"id": "344a2c5d-4516-4e02-9384-4797cfc76497", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-1967587837-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "503b2dfdce9d47598a8b9de4b15e1d45", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap46e3c9e1-9d", "ovs_interfaceid": "46e3c9e1-9d7d-48aa-9603-46bfbfe6b2a8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec  6 02:07:00 np0005548731 nova_compute[232433]: 2025-12-06 07:07:00.863 232437 DEBUG nova.network.os_vif_util [None req-1ae1ced4-c8f1-4f0d-90d2-d002a076abb8 c9025bd3f0854dff80d9408800d6b76b 503b2dfdce9d47598a8b9de4b15e1d45 - - default default] Converting VIF {"id": "46e3c9e1-9d7d-48aa-9603-46bfbfe6b2a8", "address": "fa:16:3e:c3:1f:9a", "network": {"id": "344a2c5d-4516-4e02-9384-4797cfc76497", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-1967587837-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "503b2dfdce9d47598a8b9de4b15e1d45", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap46e3c9e1-9d", "ovs_interfaceid": "46e3c9e1-9d7d-48aa-9603-46bfbfe6b2a8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 02:07:00 np0005548731 nova_compute[232433]: 2025-12-06 07:07:00.864 232437 DEBUG nova.network.os_vif_util [None req-1ae1ced4-c8f1-4f0d-90d2-d002a076abb8 c9025bd3f0854dff80d9408800d6b76b 503b2dfdce9d47598a8b9de4b15e1d45 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:c3:1f:9a,bridge_name='br-int',has_traffic_filtering=True,id=46e3c9e1-9d7d-48aa-9603-46bfbfe6b2a8,network=Network(344a2c5d-4516-4e02-9384-4797cfc76497),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap46e3c9e1-9d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 02:07:00 np0005548731 nova_compute[232433]: 2025-12-06 07:07:00.864 232437 DEBUG os_vif [None req-1ae1ced4-c8f1-4f0d-90d2-d002a076abb8 c9025bd3f0854dff80d9408800d6b76b 503b2dfdce9d47598a8b9de4b15e1d45 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:c3:1f:9a,bridge_name='br-int',has_traffic_filtering=True,id=46e3c9e1-9d7d-48aa-9603-46bfbfe6b2a8,network=Network(344a2c5d-4516-4e02-9384-4797cfc76497),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap46e3c9e1-9d') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec  6 02:07:00 np0005548731 podman[251236]: 2025-12-06 07:07:00.865587413 +0000 UTC m=+0.097477069 container cleanup f89fc0c453cac15fc96657e99da47768b01e3597b38329da91fef5b6b74355e0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-344a2c5d-4516-4e02-9384-4797cfc76497, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  6 02:07:00 np0005548731 nova_compute[232433]: 2025-12-06 07:07:00.866 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:07:00 np0005548731 nova_compute[232433]: 2025-12-06 07:07:00.867 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap46e3c9e1-9d, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:07:00 np0005548731 nova_compute[232433]: 2025-12-06 07:07:00.868 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:07:00 np0005548731 nova_compute[232433]: 2025-12-06 07:07:00.869 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:07:00 np0005548731 nova_compute[232433]: 2025-12-06 07:07:00.872 232437 INFO os_vif [None req-1ae1ced4-c8f1-4f0d-90d2-d002a076abb8 c9025bd3f0854dff80d9408800d6b76b 503b2dfdce9d47598a8b9de4b15e1d45 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:c3:1f:9a,bridge_name='br-int',has_traffic_filtering=True,id=46e3c9e1-9d7d-48aa-9603-46bfbfe6b2a8,network=Network(344a2c5d-4516-4e02-9384-4797cfc76497),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap46e3c9e1-9d')#033[00m
Dec  6 02:07:00 np0005548731 systemd[1]: libpod-conmon-f89fc0c453cac15fc96657e99da47768b01e3597b38329da91fef5b6b74355e0.scope: Deactivated successfully.
Dec  6 02:07:00 np0005548731 podman[251273]: 2025-12-06 07:07:00.934797701 +0000 UTC m=+0.041663297 container remove f89fc0c453cac15fc96657e99da47768b01e3597b38329da91fef5b6b74355e0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-344a2c5d-4516-4e02-9384-4797cfc76497, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec  6 02:07:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:07:00.940 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[1f5c88c2-9ac3-4949-96f6-c51dca0b0a70]: (4, ('Sat Dec  6 07:07:00 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-344a2c5d-4516-4e02-9384-4797cfc76497 (f89fc0c453cac15fc96657e99da47768b01e3597b38329da91fef5b6b74355e0)\nf89fc0c453cac15fc96657e99da47768b01e3597b38329da91fef5b6b74355e0\nSat Dec  6 07:07:00 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-344a2c5d-4516-4e02-9384-4797cfc76497 (f89fc0c453cac15fc96657e99da47768b01e3597b38329da91fef5b6b74355e0)\nf89fc0c453cac15fc96657e99da47768b01e3597b38329da91fef5b6b74355e0\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:07:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:07:00.941 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[c94e3457-baef-4367-8423-ee73639db1aa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:07:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:07:00.942 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap344a2c5d-40, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:07:00 np0005548731 nova_compute[232433]: 2025-12-06 07:07:00.944 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:07:00 np0005548731 kernel: tap344a2c5d-40: left promiscuous mode
Dec  6 02:07:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:07:00.948 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[b6d6459c-18e4-484a-86f4-df115a729f7d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:07:00 np0005548731 nova_compute[232433]: 2025-12-06 07:07:00.962 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:07:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:07:00.964 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[e8d65f93-f336-4b54-b2cd-220c35e2bfb4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:07:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:07:00.965 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[921b2b15-0109-42e5-b060-b96d83d82bd1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:07:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:07:00.979 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[f8bcb743-e930-4afd-96ed-84fb1ce43154]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 505620, 'reachable_time': 15436, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 251329, 'error': None, 'target': 'ovnmeta-344a2c5d-4516-4e02-9384-4797cfc76497', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:07:00 np0005548731 systemd[1]: run-netns-ovnmeta\x2d344a2c5d\x2d4516\x2d4e02\x2d9384\x2d4797cfc76497.mount: Deactivated successfully.
Dec  6 02:07:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:07:00.981 144079 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-344a2c5d-4516-4e02-9384-4797cfc76497 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Dec  6 02:07:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:07:00.982 144079 DEBUG oslo.privsep.daemon [-] privsep: reply[a2cb9f83-514a-4b01-8b8d-c09dcb91c0cd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:07:01 np0005548731 nova_compute[232433]: 2025-12-06 07:07:01.280 232437 INFO nova.virt.libvirt.driver [None req-1ae1ced4-c8f1-4f0d-90d2-d002a076abb8 c9025bd3f0854dff80d9408800d6b76b 503b2dfdce9d47598a8b9de4b15e1d45 - - default default] [instance: 43acf76c-4b6b-4e85-9f2a-607bd18f7906] Deleting instance files /var/lib/nova/instances/43acf76c-4b6b-4e85-9f2a-607bd18f7906_del#033[00m
Dec  6 02:07:01 np0005548731 nova_compute[232433]: 2025-12-06 07:07:01.281 232437 INFO nova.virt.libvirt.driver [None req-1ae1ced4-c8f1-4f0d-90d2-d002a076abb8 c9025bd3f0854dff80d9408800d6b76b 503b2dfdce9d47598a8b9de4b15e1d45 - - default default] [instance: 43acf76c-4b6b-4e85-9f2a-607bd18f7906] Deletion of /var/lib/nova/instances/43acf76c-4b6b-4e85-9f2a-607bd18f7906_del complete#033[00m
Dec  6 02:07:01 np0005548731 nova_compute[232433]: 2025-12-06 07:07:01.411 232437 INFO nova.compute.manager [None req-1ae1ced4-c8f1-4f0d-90d2-d002a076abb8 c9025bd3f0854dff80d9408800d6b76b 503b2dfdce9d47598a8b9de4b15e1d45 - - default default] [instance: 43acf76c-4b6b-4e85-9f2a-607bd18f7906] Took 0.81 seconds to destroy the instance on the hypervisor.#033[00m
Dec  6 02:07:01 np0005548731 nova_compute[232433]: 2025-12-06 07:07:01.411 232437 DEBUG oslo.service.loopingcall [None req-1ae1ced4-c8f1-4f0d-90d2-d002a076abb8 c9025bd3f0854dff80d9408800d6b76b 503b2dfdce9d47598a8b9de4b15e1d45 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Dec  6 02:07:01 np0005548731 nova_compute[232433]: 2025-12-06 07:07:01.412 232437 DEBUG nova.compute.manager [-] [instance: 43acf76c-4b6b-4e85-9f2a-607bd18f7906] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Dec  6 02:07:01 np0005548731 nova_compute[232433]: 2025-12-06 07:07:01.412 232437 DEBUG nova.network.neutron [-] [instance: 43acf76c-4b6b-4e85-9f2a-607bd18f7906] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Dec  6 02:07:01 np0005548731 nova_compute[232433]: 2025-12-06 07:07:01.512 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:07:01 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:07:01 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:07:01 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:07:01.919 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:07:02 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:07:02 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000025s ======
Dec  6 02:07:02 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:07:02.001 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Dec  6 02:07:03 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 02:07:03 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 02:07:03 np0005548731 nova_compute[232433]: 2025-12-06 07:07:03.550 232437 DEBUG nova.compute.manager [req-b41d05be-d738-4de3-b0e8-2e0a6929ecef req-f63a2050-77d9-4067-aa92-d6c77aca5358 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 43acf76c-4b6b-4e85-9f2a-607bd18f7906] Received event network-vif-unplugged-46e3c9e1-9d7d-48aa-9603-46bfbfe6b2a8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:07:03 np0005548731 nova_compute[232433]: 2025-12-06 07:07:03.550 232437 DEBUG oslo_concurrency.lockutils [req-b41d05be-d738-4de3-b0e8-2e0a6929ecef req-f63a2050-77d9-4067-aa92-d6c77aca5358 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "43acf76c-4b6b-4e85-9f2a-607bd18f7906-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:07:03 np0005548731 nova_compute[232433]: 2025-12-06 07:07:03.551 232437 DEBUG oslo_concurrency.lockutils [req-b41d05be-d738-4de3-b0e8-2e0a6929ecef req-f63a2050-77d9-4067-aa92-d6c77aca5358 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "43acf76c-4b6b-4e85-9f2a-607bd18f7906-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:07:03 np0005548731 nova_compute[232433]: 2025-12-06 07:07:03.551 232437 DEBUG oslo_concurrency.lockutils [req-b41d05be-d738-4de3-b0e8-2e0a6929ecef req-f63a2050-77d9-4067-aa92-d6c77aca5358 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "43acf76c-4b6b-4e85-9f2a-607bd18f7906-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:07:03 np0005548731 nova_compute[232433]: 2025-12-06 07:07:03.551 232437 DEBUG nova.compute.manager [req-b41d05be-d738-4de3-b0e8-2e0a6929ecef req-f63a2050-77d9-4067-aa92-d6c77aca5358 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 43acf76c-4b6b-4e85-9f2a-607bd18f7906] No waiting events found dispatching network-vif-unplugged-46e3c9e1-9d7d-48aa-9603-46bfbfe6b2a8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 02:07:03 np0005548731 nova_compute[232433]: 2025-12-06 07:07:03.552 232437 DEBUG nova.compute.manager [req-b41d05be-d738-4de3-b0e8-2e0a6929ecef req-f63a2050-77d9-4067-aa92-d6c77aca5358 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 43acf76c-4b6b-4e85-9f2a-607bd18f7906] Received event network-vif-unplugged-46e3c9e1-9d7d-48aa-9603-46bfbfe6b2a8 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Dec  6 02:07:03 np0005548731 nova_compute[232433]: 2025-12-06 07:07:03.574 232437 DEBUG nova.network.neutron [-] [instance: 43acf76c-4b6b-4e85-9f2a-607bd18f7906] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 02:07:03 np0005548731 nova_compute[232433]: 2025-12-06 07:07:03.600 232437 INFO nova.compute.manager [-] [instance: 43acf76c-4b6b-4e85-9f2a-607bd18f7906] Took 2.19 seconds to deallocate network for instance.#033[00m
Dec  6 02:07:03 np0005548731 nova_compute[232433]: 2025-12-06 07:07:03.647 232437 DEBUG nova.compute.manager [req-adcf7165-4aaf-42ed-8545-845eb66d962a req-0cfc859e-dca5-4137-b2bf-16bb8d419904 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 43acf76c-4b6b-4e85-9f2a-607bd18f7906] Received event network-vif-deleted-46e3c9e1-9d7d-48aa-9603-46bfbfe6b2a8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:07:03 np0005548731 nova_compute[232433]: 2025-12-06 07:07:03.685 232437 DEBUG oslo_concurrency.lockutils [None req-1ae1ced4-c8f1-4f0d-90d2-d002a076abb8 c9025bd3f0854dff80d9408800d6b76b 503b2dfdce9d47598a8b9de4b15e1d45 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:07:03 np0005548731 nova_compute[232433]: 2025-12-06 07:07:03.686 232437 DEBUG oslo_concurrency.lockutils [None req-1ae1ced4-c8f1-4f0d-90d2-d002a076abb8 c9025bd3f0854dff80d9408800d6b76b 503b2dfdce9d47598a8b9de4b15e1d45 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:07:03 np0005548731 nova_compute[232433]: 2025-12-06 07:07:03.743 232437 DEBUG oslo_concurrency.processutils [None req-1ae1ced4-c8f1-4f0d-90d2-d002a076abb8 c9025bd3f0854dff80d9408800d6b76b 503b2dfdce9d47598a8b9de4b15e1d45 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:07:03 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:07:03 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:07:03 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:07:03.921 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:07:04 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:07:04 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 02:07:04 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:07:04.004 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 02:07:04 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 02:07:04 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1887594548' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 02:07:04 np0005548731 nova_compute[232433]: 2025-12-06 07:07:04.185 232437 DEBUG oslo_concurrency.processutils [None req-1ae1ced4-c8f1-4f0d-90d2-d002a076abb8 c9025bd3f0854dff80d9408800d6b76b 503b2dfdce9d47598a8b9de4b15e1d45 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.443s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:07:04 np0005548731 nova_compute[232433]: 2025-12-06 07:07:04.191 232437 DEBUG nova.compute.provider_tree [None req-1ae1ced4-c8f1-4f0d-90d2-d002a076abb8 c9025bd3f0854dff80d9408800d6b76b 503b2dfdce9d47598a8b9de4b15e1d45 - - default default] Inventory has not changed in ProviderTree for provider: 6d00757a-082f-486d-ae84-869a2ba2e6e7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  6 02:07:04 np0005548731 nova_compute[232433]: 2025-12-06 07:07:04.212 232437 DEBUG nova.scheduler.client.report [None req-1ae1ced4-c8f1-4f0d-90d2-d002a076abb8 c9025bd3f0854dff80d9408800d6b76b 503b2dfdce9d47598a8b9de4b15e1d45 - - default default] Inventory has not changed for provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  6 02:07:04 np0005548731 nova_compute[232433]: 2025-12-06 07:07:04.246 232437 DEBUG oslo_concurrency.lockutils [None req-1ae1ced4-c8f1-4f0d-90d2-d002a076abb8 c9025bd3f0854dff80d9408800d6b76b 503b2dfdce9d47598a8b9de4b15e1d45 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.560s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:07:04 np0005548731 nova_compute[232433]: 2025-12-06 07:07:04.288 232437 INFO nova.scheduler.client.report [None req-1ae1ced4-c8f1-4f0d-90d2-d002a076abb8 c9025bd3f0854dff80d9408800d6b76b 503b2dfdce9d47598a8b9de4b15e1d45 - - default default] Deleted allocations for instance 43acf76c-4b6b-4e85-9f2a-607bd18f7906#033[00m
Dec  6 02:07:04 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  6 02:07:04 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 02:07:04 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  6 02:07:04 np0005548731 nova_compute[232433]: 2025-12-06 07:07:04.536 232437 DEBUG oslo_concurrency.lockutils [None req-1ae1ced4-c8f1-4f0d-90d2-d002a076abb8 c9025bd3f0854dff80d9408800d6b76b 503b2dfdce9d47598a8b9de4b15e1d45 - - default default] Lock "43acf76c-4b6b-4e85-9f2a-607bd18f7906" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.938s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:07:05 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e181 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:07:05 np0005548731 nova_compute[232433]: 2025-12-06 07:07:05.742 232437 DEBUG nova.compute.manager [req-cb565aac-df0d-431c-a466-497b82cf7983 req-4b24b434-a2a5-4fac-92ea-34dd252931e3 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 43acf76c-4b6b-4e85-9f2a-607bd18f7906] Received event network-vif-plugged-46e3c9e1-9d7d-48aa-9603-46bfbfe6b2a8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:07:05 np0005548731 nova_compute[232433]: 2025-12-06 07:07:05.742 232437 DEBUG oslo_concurrency.lockutils [req-cb565aac-df0d-431c-a466-497b82cf7983 req-4b24b434-a2a5-4fac-92ea-34dd252931e3 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "43acf76c-4b6b-4e85-9f2a-607bd18f7906-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:07:05 np0005548731 nova_compute[232433]: 2025-12-06 07:07:05.742 232437 DEBUG oslo_concurrency.lockutils [req-cb565aac-df0d-431c-a466-497b82cf7983 req-4b24b434-a2a5-4fac-92ea-34dd252931e3 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "43acf76c-4b6b-4e85-9f2a-607bd18f7906-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:07:05 np0005548731 nova_compute[232433]: 2025-12-06 07:07:05.742 232437 DEBUG oslo_concurrency.lockutils [req-cb565aac-df0d-431c-a466-497b82cf7983 req-4b24b434-a2a5-4fac-92ea-34dd252931e3 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "43acf76c-4b6b-4e85-9f2a-607bd18f7906-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:07:05 np0005548731 nova_compute[232433]: 2025-12-06 07:07:05.743 232437 DEBUG nova.compute.manager [req-cb565aac-df0d-431c-a466-497b82cf7983 req-4b24b434-a2a5-4fac-92ea-34dd252931e3 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 43acf76c-4b6b-4e85-9f2a-607bd18f7906] No waiting events found dispatching network-vif-plugged-46e3c9e1-9d7d-48aa-9603-46bfbfe6b2a8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 02:07:05 np0005548731 nova_compute[232433]: 2025-12-06 07:07:05.743 232437 WARNING nova.compute.manager [req-cb565aac-df0d-431c-a466-497b82cf7983 req-4b24b434-a2a5-4fac-92ea-34dd252931e3 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 43acf76c-4b6b-4e85-9f2a-607bd18f7906] Received unexpected event network-vif-plugged-46e3c9e1-9d7d-48aa-9603-46bfbfe6b2a8 for instance with vm_state deleted and task_state None.#033[00m
Dec  6 02:07:05 np0005548731 nova_compute[232433]: 2025-12-06 07:07:05.870 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:07:05 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:07:05 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:07:05 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:07:05.924 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:07:06 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:07:06 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:07:06 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:07:06.007 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:07:06 np0005548731 nova_compute[232433]: 2025-12-06 07:07:06.514 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:07:07 np0005548731 nova_compute[232433]: 2025-12-06 07:07:07.112 232437 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765004812.1104026, 001c930d-1294-433f-b9df-38beb3123844 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 02:07:07 np0005548731 nova_compute[232433]: 2025-12-06 07:07:07.113 232437 INFO nova.compute.manager [-] [instance: 001c930d-1294-433f-b9df-38beb3123844] VM Stopped (Lifecycle Event)#033[00m
Dec  6 02:07:07 np0005548731 nova_compute[232433]: 2025-12-06 07:07:07.139 232437 DEBUG nova.compute.manager [None req-6a4998ce-d846-430c-8d6b-a065b25703a3 - - - - - -] [instance: 001c930d-1294-433f-b9df-38beb3123844] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:07:07 np0005548731 podman[251464]: 2025-12-06 07:07:07.897486605 +0000 UTC m=+0.059685062 container health_status 26c2ce9bdb83c6db5ccad7f5b2a7cb4e9354ab0235ad2f942ea42c8feda2142b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Dec  6 02:07:07 np0005548731 podman[251466]: 2025-12-06 07:07:07.905503022 +0000 UTC m=+0.064713772 container health_status ae4fd08cdec31d6ae2a6b91ffff7deb6ca5a8375cb52ee4b685cc6f25796824e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd)
Dec  6 02:07:07 np0005548731 podman[251465]: 2025-12-06 07:07:07.918969831 +0000 UTC m=+0.079460864 container health_status a118c3becf56cfbcae208065771ebf10a65162bb7b96389971293e534823fb78 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Dec  6 02:07:07 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:07:07 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:07:07 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:07:07.925 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:07:08 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:07:08 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:07:08 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:07:08.010 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:07:08 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  6 02:07:08 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1832287170' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  6 02:07:08 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  6 02:07:08 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1832287170' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  6 02:07:09 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 02:07:09 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 02:07:09 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:07:09 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000025s ======
Dec  6 02:07:09 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:07:09.927 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Dec  6 02:07:10 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:07:10 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:07:10 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:07:10.013 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:07:10 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e181 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:07:10 np0005548731 nova_compute[232433]: 2025-12-06 07:07:10.874 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:07:11 np0005548731 nova_compute[232433]: 2025-12-06 07:07:11.515 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:07:11 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:07:11 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:07:11 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:07:11.929 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:07:12 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:07:12 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000025s ======
Dec  6 02:07:12 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:07:12.016 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Dec  6 02:07:12 np0005548731 nova_compute[232433]: 2025-12-06 07:07:12.178 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:07:13 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:07:13 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:07:13 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:07:13.932 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:07:14 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:07:14 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:07:14 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:07:14.019 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:07:15 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e181 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:07:15 np0005548731 nova_compute[232433]: 2025-12-06 07:07:15.834 232437 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765004820.8329422, 43acf76c-4b6b-4e85-9f2a-607bd18f7906 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 02:07:15 np0005548731 nova_compute[232433]: 2025-12-06 07:07:15.834 232437 INFO nova.compute.manager [-] [instance: 43acf76c-4b6b-4e85-9f2a-607bd18f7906] VM Stopped (Lifecycle Event)#033[00m
Dec  6 02:07:15 np0005548731 nova_compute[232433]: 2025-12-06 07:07:15.878 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:07:15 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:07:15 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:07:15 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:07:15.935 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:07:16 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:07:16 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000025s ======
Dec  6 02:07:16 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:07:16.024 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Dec  6 02:07:16 np0005548731 nova_compute[232433]: 2025-12-06 07:07:16.063 232437 DEBUG nova.compute.manager [None req-13b1d8b3-dc1b-465b-8acd-40488e6497e3 - - - - - -] [instance: 43acf76c-4b6b-4e85-9f2a-607bd18f7906] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:07:16 np0005548731 nova_compute[232433]: 2025-12-06 07:07:16.518 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:07:17 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:07:17 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:07:17 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:07:17.937 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:07:18 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:07:18 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:07:18 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:07:18.028 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:07:18 np0005548731 systemd[1]: virtproxyd.service: Deactivated successfully.
Dec  6 02:07:19 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:07:19 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:07:19 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:07:19.939 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:07:20 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:07:20 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:07:20 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:07:20.032 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:07:20 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e181 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:07:20 np0005548731 nova_compute[232433]: 2025-12-06 07:07:20.881 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:07:21 np0005548731 nova_compute[232433]: 2025-12-06 07:07:21.520 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:07:21 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:07:21 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000025s ======
Dec  6 02:07:21 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:07:21.941 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Dec  6 02:07:22 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:07:22 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000025s ======
Dec  6 02:07:22 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:07:22.035 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Dec  6 02:07:23 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:07:23 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000025s ======
Dec  6 02:07:23 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:07:23.943 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Dec  6 02:07:24 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:07:24 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:07:24 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:07:24.038 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:07:24 np0005548731 nova_compute[232433]: 2025-12-06 07:07:24.510 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:07:25 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e181 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:07:25 np0005548731 nova_compute[232433]: 2025-12-06 07:07:25.884 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:07:25 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:07:25 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:07:25 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:07:25.945 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:07:26 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:07:26 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000025s ======
Dec  6 02:07:26 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:07:26.041 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Dec  6 02:07:26 np0005548731 nova_compute[232433]: 2025-12-06 07:07:26.522 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:07:27 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:07:27 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:07:27 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:07:27.947 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:07:28 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:07:28 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:07:28 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:07:28.044 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:07:29 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:07:29 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:07:29 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:07:29.949 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:07:30 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:07:30 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000025s ======
Dec  6 02:07:30 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:07:30.047 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Dec  6 02:07:30 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e181 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:07:30 np0005548731 nova_compute[232433]: 2025-12-06 07:07:30.887 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:07:31 np0005548731 nova_compute[232433]: 2025-12-06 07:07:31.524 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:07:31 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:07:31 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000025s ======
Dec  6 02:07:31 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:07:31.951 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Dec  6 02:07:32 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:07:32 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000025s ======
Dec  6 02:07:32 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:07:32.050 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Dec  6 02:07:33 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:07:33 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000025s ======
Dec  6 02:07:33 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:07:33.953 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Dec  6 02:07:34 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:07:34 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:07:34 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:07:34.054 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:07:35 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e181 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:07:35 np0005548731 nova_compute[232433]: 2025-12-06 07:07:35.890 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:07:35 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:07:35 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:07:35 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:07:35.954 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:07:36 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:07:36 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:07:36 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:07:36.057 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:07:36 np0005548731 nova_compute[232433]: 2025-12-06 07:07:36.526 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:07:36 np0005548731 nova_compute[232433]: 2025-12-06 07:07:36.671 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:07:36 np0005548731 nova_compute[232433]: 2025-12-06 07:07:36.950 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:07:37 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:07:37 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:07:37 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:07:37.957 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:07:38 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:07:38 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:07:38 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:07:38.059 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:07:38 np0005548731 podman[251698]: 2025-12-06 07:07:38.896026414 +0000 UTC m=+0.053716098 container health_status 26c2ce9bdb83c6db5ccad7f5b2a7cb4e9354ab0235ad2f942ea42c8feda2142b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  6 02:07:38 np0005548731 podman[251700]: 2025-12-06 07:07:38.908578658 +0000 UTC m=+0.061605982 container health_status ae4fd08cdec31d6ae2a6b91ffff7deb6ca5a8375cb52ee4b685cc6f25796824e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Dec  6 02:07:38 np0005548731 podman[251699]: 2025-12-06 07:07:38.92643463 +0000 UTC m=+0.079843444 container health_status a118c3becf56cfbcae208065771ebf10a65162bb7b96389971293e534823fb78 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller)
Dec  6 02:07:39 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:07:39 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 02:07:39 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:07:39.958 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 02:07:40 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:07:40 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:07:40 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:07:40.061 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:07:40 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e181 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:07:40 np0005548731 nova_compute[232433]: 2025-12-06 07:07:40.892 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:07:41 np0005548731 nova_compute[232433]: 2025-12-06 07:07:41.527 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:07:41 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:07:41 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:07:41 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:07:41.960 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:07:42 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:07:42 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000025s ======
Dec  6 02:07:42 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:07:42.064 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Dec  6 02:07:42 np0005548731 nova_compute[232433]: 2025-12-06 07:07:42.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:07:43 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:07:43 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:07:43 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:07:43.962 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:07:44 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:07:44 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:07:44 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:07:44.068 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:07:45 np0005548731 nova_compute[232433]: 2025-12-06 07:07:45.099 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:07:45 np0005548731 nova_compute[232433]: 2025-12-06 07:07:45.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:07:45 np0005548731 nova_compute[232433]: 2025-12-06 07:07:45.142 232437 DEBUG oslo_concurrency.lockutils [None req-82f40fba-72a4-4cee-be3b-c39a4f1dd43a b7c2f91caf1444c68dcf6bd66966d67e c2236fb6443441618c69ad660b0932dd - - default default] Acquiring lock "19dd5782-9191-431a-bc5b-c5c3d41fc07c" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:07:45 np0005548731 nova_compute[232433]: 2025-12-06 07:07:45.142 232437 DEBUG oslo_concurrency.lockutils [None req-82f40fba-72a4-4cee-be3b-c39a4f1dd43a b7c2f91caf1444c68dcf6bd66966d67e c2236fb6443441618c69ad660b0932dd - - default default] Lock "19dd5782-9191-431a-bc5b-c5c3d41fc07c" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:07:45 np0005548731 nova_compute[232433]: 2025-12-06 07:07:45.170 232437 DEBUG nova.compute.manager [None req-82f40fba-72a4-4cee-be3b-c39a4f1dd43a b7c2f91caf1444c68dcf6bd66966d67e c2236fb6443441618c69ad660b0932dd - - default default] [instance: 19dd5782-9191-431a-bc5b-c5c3d41fc07c] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Dec  6 02:07:45 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e181 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:07:45 np0005548731 nova_compute[232433]: 2025-12-06 07:07:45.766 232437 DEBUG oslo_concurrency.lockutils [None req-82f40fba-72a4-4cee-be3b-c39a4f1dd43a b7c2f91caf1444c68dcf6bd66966d67e c2236fb6443441618c69ad660b0932dd - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:07:45 np0005548731 nova_compute[232433]: 2025-12-06 07:07:45.767 232437 DEBUG oslo_concurrency.lockutils [None req-82f40fba-72a4-4cee-be3b-c39a4f1dd43a b7c2f91caf1444c68dcf6bd66966d67e c2236fb6443441618c69ad660b0932dd - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:07:45 np0005548731 nova_compute[232433]: 2025-12-06 07:07:45.773 232437 DEBUG nova.virt.hardware [None req-82f40fba-72a4-4cee-be3b-c39a4f1dd43a b7c2f91caf1444c68dcf6bd66966d67e c2236fb6443441618c69ad660b0932dd - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Dec  6 02:07:45 np0005548731 nova_compute[232433]: 2025-12-06 07:07:45.773 232437 INFO nova.compute.claims [None req-82f40fba-72a4-4cee-be3b-c39a4f1dd43a b7c2f91caf1444c68dcf6bd66966d67e c2236fb6443441618c69ad660b0932dd - - default default] [instance: 19dd5782-9191-431a-bc5b-c5c3d41fc07c] Claim successful on node compute-2.ctlplane.example.com#033[00m
Dec  6 02:07:45 np0005548731 nova_compute[232433]: 2025-12-06 07:07:45.896 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:07:45 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:07:45 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:07:45 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:07:45.964 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:07:46 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:07:46 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:07:46 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:07:46.071 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:07:46 np0005548731 nova_compute[232433]: 2025-12-06 07:07:46.122 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:07:46 np0005548731 nova_compute[232433]: 2025-12-06 07:07:46.123 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:07:46 np0005548731 nova_compute[232433]: 2025-12-06 07:07:46.123 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Dec  6 02:07:46 np0005548731 nova_compute[232433]: 2025-12-06 07:07:46.147 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Dec  6 02:07:46 np0005548731 nova_compute[232433]: 2025-12-06 07:07:46.529 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:07:46 np0005548731 nova_compute[232433]: 2025-12-06 07:07:46.611 232437 DEBUG oslo_concurrency.processutils [None req-82f40fba-72a4-4cee-be3b-c39a4f1dd43a b7c2f91caf1444c68dcf6bd66966d67e c2236fb6443441618c69ad660b0932dd - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:07:47 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 02:07:47 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/4060825194' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 02:07:47 np0005548731 nova_compute[232433]: 2025-12-06 07:07:47.034 232437 DEBUG oslo_concurrency.processutils [None req-82f40fba-72a4-4cee-be3b-c39a4f1dd43a b7c2f91caf1444c68dcf6bd66966d67e c2236fb6443441618c69ad660b0932dd - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.423s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:07:47 np0005548731 nova_compute[232433]: 2025-12-06 07:07:47.042 232437 DEBUG nova.compute.provider_tree [None req-82f40fba-72a4-4cee-be3b-c39a4f1dd43a b7c2f91caf1444c68dcf6bd66966d67e c2236fb6443441618c69ad660b0932dd - - default default] Inventory has not changed in ProviderTree for provider: 6d00757a-082f-486d-ae84-869a2ba2e6e7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  6 02:07:47 np0005548731 nova_compute[232433]: 2025-12-06 07:07:47.129 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:07:47 np0005548731 nova_compute[232433]: 2025-12-06 07:07:47.129 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  6 02:07:47 np0005548731 nova_compute[232433]: 2025-12-06 07:07:47.142 232437 DEBUG nova.scheduler.client.report [None req-82f40fba-72a4-4cee-be3b-c39a4f1dd43a b7c2f91caf1444c68dcf6bd66966d67e c2236fb6443441618c69ad660b0932dd - - default default] Inventory has not changed for provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  6 02:07:47 np0005548731 nova_compute[232433]: 2025-12-06 07:07:47.285 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Dec  6 02:07:47 np0005548731 nova_compute[232433]: 2025-12-06 07:07:47.286 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:07:47 np0005548731 nova_compute[232433]: 2025-12-06 07:07:47.297 232437 DEBUG oslo_concurrency.lockutils [None req-82f40fba-72a4-4cee-be3b-c39a4f1dd43a b7c2f91caf1444c68dcf6bd66966d67e c2236fb6443441618c69ad660b0932dd - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.530s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:07:47 np0005548731 nova_compute[232433]: 2025-12-06 07:07:47.298 232437 DEBUG nova.compute.manager [None req-82f40fba-72a4-4cee-be3b-c39a4f1dd43a b7c2f91caf1444c68dcf6bd66966d67e c2236fb6443441618c69ad660b0932dd - - default default] [instance: 19dd5782-9191-431a-bc5b-c5c3d41fc07c] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Dec  6 02:07:47 np0005548731 nova_compute[232433]: 2025-12-06 07:07:47.473 232437 DEBUG nova.compute.manager [None req-82f40fba-72a4-4cee-be3b-c39a4f1dd43a b7c2f91caf1444c68dcf6bd66966d67e c2236fb6443441618c69ad660b0932dd - - default default] [instance: 19dd5782-9191-431a-bc5b-c5c3d41fc07c] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Dec  6 02:07:47 np0005548731 nova_compute[232433]: 2025-12-06 07:07:47.473 232437 DEBUG nova.network.neutron [None req-82f40fba-72a4-4cee-be3b-c39a4f1dd43a b7c2f91caf1444c68dcf6bd66966d67e c2236fb6443441618c69ad660b0932dd - - default default] [instance: 19dd5782-9191-431a-bc5b-c5c3d41fc07c] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Dec  6 02:07:47 np0005548731 nova_compute[232433]: 2025-12-06 07:07:47.682 232437 INFO nova.virt.libvirt.driver [None req-82f40fba-72a4-4cee-be3b-c39a4f1dd43a b7c2f91caf1444c68dcf6bd66966d67e c2236fb6443441618c69ad660b0932dd - - default default] [instance: 19dd5782-9191-431a-bc5b-c5c3d41fc07c] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Dec  6 02:07:47 np0005548731 nova_compute[232433]: 2025-12-06 07:07:47.754 232437 DEBUG nova.compute.manager [None req-82f40fba-72a4-4cee-be3b-c39a4f1dd43a b7c2f91caf1444c68dcf6bd66966d67e c2236fb6443441618c69ad660b0932dd - - default default] [instance: 19dd5782-9191-431a-bc5b-c5c3d41fc07c] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Dec  6 02:07:47 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:07:47 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:07:47 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:07:47.966 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:07:48 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:07:48 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:07:48 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:07:48.074 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:07:48 np0005548731 nova_compute[232433]: 2025-12-06 07:07:48.147 232437 DEBUG nova.compute.manager [None req-82f40fba-72a4-4cee-be3b-c39a4f1dd43a b7c2f91caf1444c68dcf6bd66966d67e c2236fb6443441618c69ad660b0932dd - - default default] [instance: 19dd5782-9191-431a-bc5b-c5c3d41fc07c] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Dec  6 02:07:48 np0005548731 nova_compute[232433]: 2025-12-06 07:07:48.149 232437 DEBUG nova.virt.libvirt.driver [None req-82f40fba-72a4-4cee-be3b-c39a4f1dd43a b7c2f91caf1444c68dcf6bd66966d67e c2236fb6443441618c69ad660b0932dd - - default default] [instance: 19dd5782-9191-431a-bc5b-c5c3d41fc07c] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Dec  6 02:07:48 np0005548731 nova_compute[232433]: 2025-12-06 07:07:48.149 232437 INFO nova.virt.libvirt.driver [None req-82f40fba-72a4-4cee-be3b-c39a4f1dd43a b7c2f91caf1444c68dcf6bd66966d67e c2236fb6443441618c69ad660b0932dd - - default default] [instance: 19dd5782-9191-431a-bc5b-c5c3d41fc07c] Creating image(s)#033[00m
Dec  6 02:07:48 np0005548731 nova_compute[232433]: 2025-12-06 07:07:48.174 232437 DEBUG nova.storage.rbd_utils [None req-82f40fba-72a4-4cee-be3b-c39a4f1dd43a b7c2f91caf1444c68dcf6bd66966d67e c2236fb6443441618c69ad660b0932dd - - default default] rbd image 19dd5782-9191-431a-bc5b-c5c3d41fc07c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:07:48 np0005548731 nova_compute[232433]: 2025-12-06 07:07:48.197 232437 DEBUG nova.storage.rbd_utils [None req-82f40fba-72a4-4cee-be3b-c39a4f1dd43a b7c2f91caf1444c68dcf6bd66966d67e c2236fb6443441618c69ad660b0932dd - - default default] rbd image 19dd5782-9191-431a-bc5b-c5c3d41fc07c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:07:48 np0005548731 nova_compute[232433]: 2025-12-06 07:07:48.223 232437 DEBUG nova.storage.rbd_utils [None req-82f40fba-72a4-4cee-be3b-c39a4f1dd43a b7c2f91caf1444c68dcf6bd66966d67e c2236fb6443441618c69ad660b0932dd - - default default] rbd image 19dd5782-9191-431a-bc5b-c5c3d41fc07c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:07:48 np0005548731 nova_compute[232433]: 2025-12-06 07:07:48.226 232437 DEBUG oslo_concurrency.processutils [None req-82f40fba-72a4-4cee-be3b-c39a4f1dd43a b7c2f91caf1444c68dcf6bd66966d67e c2236fb6443441618c69ad660b0932dd - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:07:48 np0005548731 nova_compute[232433]: 2025-12-06 07:07:48.248 232437 DEBUG nova.policy [None req-82f40fba-72a4-4cee-be3b-c39a4f1dd43a b7c2f91caf1444c68dcf6bd66966d67e c2236fb6443441618c69ad660b0932dd - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'b7c2f91caf1444c68dcf6bd66966d67e', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'c2236fb6443441618c69ad660b0932dd', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Dec  6 02:07:48 np0005548731 nova_compute[232433]: 2025-12-06 07:07:48.290 232437 DEBUG oslo_concurrency.processutils [None req-82f40fba-72a4-4cee-be3b-c39a4f1dd43a b7c2f91caf1444c68dcf6bd66966d67e c2236fb6443441618c69ad660b0932dd - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:07:48 np0005548731 nova_compute[232433]: 2025-12-06 07:07:48.291 232437 DEBUG oslo_concurrency.lockutils [None req-82f40fba-72a4-4cee-be3b-c39a4f1dd43a b7c2f91caf1444c68dcf6bd66966d67e c2236fb6443441618c69ad660b0932dd - - default default] Acquiring lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:07:48 np0005548731 nova_compute[232433]: 2025-12-06 07:07:48.292 232437 DEBUG oslo_concurrency.lockutils [None req-82f40fba-72a4-4cee-be3b-c39a4f1dd43a b7c2f91caf1444c68dcf6bd66966d67e c2236fb6443441618c69ad660b0932dd - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:07:48 np0005548731 nova_compute[232433]: 2025-12-06 07:07:48.292 232437 DEBUG oslo_concurrency.lockutils [None req-82f40fba-72a4-4cee-be3b-c39a4f1dd43a b7c2f91caf1444c68dcf6bd66966d67e c2236fb6443441618c69ad660b0932dd - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:07:48 np0005548731 nova_compute[232433]: 2025-12-06 07:07:48.315 232437 DEBUG nova.storage.rbd_utils [None req-82f40fba-72a4-4cee-be3b-c39a4f1dd43a b7c2f91caf1444c68dcf6bd66966d67e c2236fb6443441618c69ad660b0932dd - - default default] rbd image 19dd5782-9191-431a-bc5b-c5c3d41fc07c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:07:48 np0005548731 nova_compute[232433]: 2025-12-06 07:07:48.317 232437 DEBUG oslo_concurrency.processutils [None req-82f40fba-72a4-4cee-be3b-c39a4f1dd43a b7c2f91caf1444c68dcf6bd66966d67e c2236fb6443441618c69ad660b0932dd - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef 19dd5782-9191-431a-bc5b-c5c3d41fc07c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:07:49 np0005548731 nova_compute[232433]: 2025-12-06 07:07:49.080 232437 DEBUG oslo_concurrency.processutils [None req-82f40fba-72a4-4cee-be3b-c39a4f1dd43a b7c2f91caf1444c68dcf6bd66966d67e c2236fb6443441618c69ad660b0932dd - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef 19dd5782-9191-431a-bc5b-c5c3d41fc07c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.762s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:07:49 np0005548731 nova_compute[232433]: 2025-12-06 07:07:49.111 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:07:49 np0005548731 nova_compute[232433]: 2025-12-06 07:07:49.144 232437 DEBUG nova.storage.rbd_utils [None req-82f40fba-72a4-4cee-be3b-c39a4f1dd43a b7c2f91caf1444c68dcf6bd66966d67e c2236fb6443441618c69ad660b0932dd - - default default] resizing rbd image 19dd5782-9191-431a-bc5b-c5c3d41fc07c_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Dec  6 02:07:49 np0005548731 nova_compute[232433]: 2025-12-06 07:07:49.250 232437 DEBUG nova.objects.instance [None req-82f40fba-72a4-4cee-be3b-c39a4f1dd43a b7c2f91caf1444c68dcf6bd66966d67e c2236fb6443441618c69ad660b0932dd - - default default] Lazy-loading 'migration_context' on Instance uuid 19dd5782-9191-431a-bc5b-c5c3d41fc07c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:07:49 np0005548731 nova_compute[232433]: 2025-12-06 07:07:49.413 232437 DEBUG nova.virt.libvirt.driver [None req-82f40fba-72a4-4cee-be3b-c39a4f1dd43a b7c2f91caf1444c68dcf6bd66966d67e c2236fb6443441618c69ad660b0932dd - - default default] [instance: 19dd5782-9191-431a-bc5b-c5c3d41fc07c] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Dec  6 02:07:49 np0005548731 nova_compute[232433]: 2025-12-06 07:07:49.414 232437 DEBUG nova.virt.libvirt.driver [None req-82f40fba-72a4-4cee-be3b-c39a4f1dd43a b7c2f91caf1444c68dcf6bd66966d67e c2236fb6443441618c69ad660b0932dd - - default default] [instance: 19dd5782-9191-431a-bc5b-c5c3d41fc07c] Ensure instance console log exists: /var/lib/nova/instances/19dd5782-9191-431a-bc5b-c5c3d41fc07c/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec  6 02:07:49 np0005548731 nova_compute[232433]: 2025-12-06 07:07:49.414 232437 DEBUG oslo_concurrency.lockutils [None req-82f40fba-72a4-4cee-be3b-c39a4f1dd43a b7c2f91caf1444c68dcf6bd66966d67e c2236fb6443441618c69ad660b0932dd - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:07:49 np0005548731 nova_compute[232433]: 2025-12-06 07:07:49.414 232437 DEBUG oslo_concurrency.lockutils [None req-82f40fba-72a4-4cee-be3b-c39a4f1dd43a b7c2f91caf1444c68dcf6bd66966d67e c2236fb6443441618c69ad660b0932dd - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:07:49 np0005548731 nova_compute[232433]: 2025-12-06 07:07:49.415 232437 DEBUG oslo_concurrency.lockutils [None req-82f40fba-72a4-4cee-be3b-c39a4f1dd43a b7c2f91caf1444c68dcf6bd66966d67e c2236fb6443441618c69ad660b0932dd - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:07:49 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:07:49 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:07:49 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:07:49.968 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:07:50 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:07:50.040 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=15, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:ec:b3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '32:72:e7:89:e0:7d'}, ipsec=False) old=SB_Global(nb_cfg=14) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 02:07:50 np0005548731 nova_compute[232433]: 2025-12-06 07:07:50.041 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:07:50 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:07:50.042 143965 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec  6 02:07:50 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:07:50 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:07:50 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:07:50.077 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:07:50 np0005548731 nova_compute[232433]: 2025-12-06 07:07:50.103 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:07:50 np0005548731 nova_compute[232433]: 2025-12-06 07:07:50.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:07:50 np0005548731 nova_compute[232433]: 2025-12-06 07:07:50.104 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  6 02:07:50 np0005548731 nova_compute[232433]: 2025-12-06 07:07:50.105 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:07:50 np0005548731 nova_compute[232433]: 2025-12-06 07:07:50.145 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:07:50 np0005548731 nova_compute[232433]: 2025-12-06 07:07:50.145 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:07:50 np0005548731 nova_compute[232433]: 2025-12-06 07:07:50.146 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:07:50 np0005548731 nova_compute[232433]: 2025-12-06 07:07:50.146 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  6 02:07:50 np0005548731 nova_compute[232433]: 2025-12-06 07:07:50.146 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:07:50 np0005548731 nova_compute[232433]: 2025-12-06 07:07:50.180 232437 DEBUG nova.network.neutron [None req-82f40fba-72a4-4cee-be3b-c39a4f1dd43a b7c2f91caf1444c68dcf6bd66966d67e c2236fb6443441618c69ad660b0932dd - - default default] [instance: 19dd5782-9191-431a-bc5b-c5c3d41fc07c] Successfully created port: bc53fd4b-9f72-4ae5-bc5f-122f83e81ba4 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Dec  6 02:07:50 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e181 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:07:50 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 02:07:50 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2763915287' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 02:07:50 np0005548731 nova_compute[232433]: 2025-12-06 07:07:50.559 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.413s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:07:50 np0005548731 nova_compute[232433]: 2025-12-06 07:07:50.743 232437 WARNING nova.virt.libvirt.driver [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  6 02:07:50 np0005548731 nova_compute[232433]: 2025-12-06 07:07:50.745 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4730MB free_disk=20.921947479248047GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  6 02:07:50 np0005548731 nova_compute[232433]: 2025-12-06 07:07:50.745 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:07:50 np0005548731 nova_compute[232433]: 2025-12-06 07:07:50.745 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:07:50 np0005548731 nova_compute[232433]: 2025-12-06 07:07:50.898 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:07:50 np0005548731 nova_compute[232433]: 2025-12-06 07:07:50.929 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Instance 19dd5782-9191-431a-bc5b-c5c3d41fc07c actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  6 02:07:50 np0005548731 nova_compute[232433]: 2025-12-06 07:07:50.930 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  6 02:07:50 np0005548731 nova_compute[232433]: 2025-12-06 07:07:50.930 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  6 02:07:50 np0005548731 nova_compute[232433]: 2025-12-06 07:07:50.975 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:07:51 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 02:07:51 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2213247620' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 02:07:51 np0005548731 nova_compute[232433]: 2025-12-06 07:07:51.375 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.401s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:07:51 np0005548731 nova_compute[232433]: 2025-12-06 07:07:51.381 232437 DEBUG nova.compute.provider_tree [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Inventory has not changed in ProviderTree for provider: 6d00757a-082f-486d-ae84-869a2ba2e6e7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  6 02:07:51 np0005548731 nova_compute[232433]: 2025-12-06 07:07:51.531 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:07:51 np0005548731 nova_compute[232433]: 2025-12-06 07:07:51.558 232437 DEBUG nova.scheduler.client.report [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Inventory has not changed for provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  6 02:07:51 np0005548731 nova_compute[232433]: 2025-12-06 07:07:51.612 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  6 02:07:51 np0005548731 nova_compute[232433]: 2025-12-06 07:07:51.612 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.867s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:07:51 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:07:51 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:07:51 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:07:51.970 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:07:52 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:07:52.044 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=9f96b960-b4f2-40bd-ae99-08121f5e8b78, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '15'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:07:52 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:07:52 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:07:52 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:07:52.080 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:07:52 np0005548731 nova_compute[232433]: 2025-12-06 07:07:52.584 232437 DEBUG nova.network.neutron [None req-82f40fba-72a4-4cee-be3b-c39a4f1dd43a b7c2f91caf1444c68dcf6bd66966d67e c2236fb6443441618c69ad660b0932dd - - default default] [instance: 19dd5782-9191-431a-bc5b-c5c3d41fc07c] Successfully updated port: bc53fd4b-9f72-4ae5-bc5f-122f83e81ba4 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Dec  6 02:07:52 np0005548731 nova_compute[232433]: 2025-12-06 07:07:52.603 232437 DEBUG oslo_concurrency.lockutils [None req-82f40fba-72a4-4cee-be3b-c39a4f1dd43a b7c2f91caf1444c68dcf6bd66966d67e c2236fb6443441618c69ad660b0932dd - - default default] Acquiring lock "refresh_cache-19dd5782-9191-431a-bc5b-c5c3d41fc07c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 02:07:52 np0005548731 nova_compute[232433]: 2025-12-06 07:07:52.604 232437 DEBUG oslo_concurrency.lockutils [None req-82f40fba-72a4-4cee-be3b-c39a4f1dd43a b7c2f91caf1444c68dcf6bd66966d67e c2236fb6443441618c69ad660b0932dd - - default default] Acquired lock "refresh_cache-19dd5782-9191-431a-bc5b-c5c3d41fc07c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 02:07:52 np0005548731 nova_compute[232433]: 2025-12-06 07:07:52.604 232437 DEBUG nova.network.neutron [None req-82f40fba-72a4-4cee-be3b-c39a4f1dd43a b7c2f91caf1444c68dcf6bd66966d67e c2236fb6443441618c69ad660b0932dd - - default default] [instance: 19dd5782-9191-431a-bc5b-c5c3d41fc07c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec  6 02:07:52 np0005548731 nova_compute[232433]: 2025-12-06 07:07:52.902 232437 DEBUG nova.network.neutron [None req-82f40fba-72a4-4cee-be3b-c39a4f1dd43a b7c2f91caf1444c68dcf6bd66966d67e c2236fb6443441618c69ad660b0932dd - - default default] [instance: 19dd5782-9191-431a-bc5b-c5c3d41fc07c] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec  6 02:07:53 np0005548731 nova_compute[232433]: 2025-12-06 07:07:53.675 232437 DEBUG nova.compute.manager [req-fdd26c73-3c0c-4bdc-9f09-ee4477da5912 req-3dec1196-46fe-4441-8166-b659378ec39a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 19dd5782-9191-431a-bc5b-c5c3d41fc07c] Received event network-changed-bc53fd4b-9f72-4ae5-bc5f-122f83e81ba4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:07:53 np0005548731 nova_compute[232433]: 2025-12-06 07:07:53.675 232437 DEBUG nova.compute.manager [req-fdd26c73-3c0c-4bdc-9f09-ee4477da5912 req-3dec1196-46fe-4441-8166-b659378ec39a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 19dd5782-9191-431a-bc5b-c5c3d41fc07c] Refreshing instance network info cache due to event network-changed-bc53fd4b-9f72-4ae5-bc5f-122f83e81ba4. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  6 02:07:53 np0005548731 nova_compute[232433]: 2025-12-06 07:07:53.676 232437 DEBUG oslo_concurrency.lockutils [req-fdd26c73-3c0c-4bdc-9f09-ee4477da5912 req-3dec1196-46fe-4441-8166-b659378ec39a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-19dd5782-9191-431a-bc5b-c5c3d41fc07c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 02:07:53 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:07:53 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000025s ======
Dec  6 02:07:53 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:07:53.972 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Dec  6 02:07:54 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:07:54 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:07:54 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:07:54.083 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:07:54 np0005548731 nova_compute[232433]: 2025-12-06 07:07:54.843 232437 DEBUG nova.network.neutron [None req-82f40fba-72a4-4cee-be3b-c39a4f1dd43a b7c2f91caf1444c68dcf6bd66966d67e c2236fb6443441618c69ad660b0932dd - - default default] [instance: 19dd5782-9191-431a-bc5b-c5c3d41fc07c] Updating instance_info_cache with network_info: [{"id": "bc53fd4b-9f72-4ae5-bc5f-122f83e81ba4", "address": "fa:16:3e:32:66:5e", "network": {"id": "81801c54-c543-4e0f-94c1-cdf24ddb6bce", "bridge": "br-int", "label": "tempest-UpdateMultiattachVolumeNegativeTest-1315951976-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c2236fb6443441618c69ad660b0932dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbc53fd4b-9f", "ovs_interfaceid": "bc53fd4b-9f72-4ae5-bc5f-122f83e81ba4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 02:07:54 np0005548731 nova_compute[232433]: 2025-12-06 07:07:54.881 232437 DEBUG oslo_concurrency.lockutils [None req-82f40fba-72a4-4cee-be3b-c39a4f1dd43a b7c2f91caf1444c68dcf6bd66966d67e c2236fb6443441618c69ad660b0932dd - - default default] Releasing lock "refresh_cache-19dd5782-9191-431a-bc5b-c5c3d41fc07c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 02:07:54 np0005548731 nova_compute[232433]: 2025-12-06 07:07:54.882 232437 DEBUG nova.compute.manager [None req-82f40fba-72a4-4cee-be3b-c39a4f1dd43a b7c2f91caf1444c68dcf6bd66966d67e c2236fb6443441618c69ad660b0932dd - - default default] [instance: 19dd5782-9191-431a-bc5b-c5c3d41fc07c] Instance network_info: |[{"id": "bc53fd4b-9f72-4ae5-bc5f-122f83e81ba4", "address": "fa:16:3e:32:66:5e", "network": {"id": "81801c54-c543-4e0f-94c1-cdf24ddb6bce", "bridge": "br-int", "label": "tempest-UpdateMultiattachVolumeNegativeTest-1315951976-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c2236fb6443441618c69ad660b0932dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbc53fd4b-9f", "ovs_interfaceid": "bc53fd4b-9f72-4ae5-bc5f-122f83e81ba4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Dec  6 02:07:54 np0005548731 nova_compute[232433]: 2025-12-06 07:07:54.883 232437 DEBUG oslo_concurrency.lockutils [req-fdd26c73-3c0c-4bdc-9f09-ee4477da5912 req-3dec1196-46fe-4441-8166-b659378ec39a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-19dd5782-9191-431a-bc5b-c5c3d41fc07c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 02:07:54 np0005548731 nova_compute[232433]: 2025-12-06 07:07:54.884 232437 DEBUG nova.network.neutron [req-fdd26c73-3c0c-4bdc-9f09-ee4477da5912 req-3dec1196-46fe-4441-8166-b659378ec39a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 19dd5782-9191-431a-bc5b-c5c3d41fc07c] Refreshing network info cache for port bc53fd4b-9f72-4ae5-bc5f-122f83e81ba4 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  6 02:07:54 np0005548731 nova_compute[232433]: 2025-12-06 07:07:54.889 232437 DEBUG nova.virt.libvirt.driver [None req-82f40fba-72a4-4cee-be3b-c39a4f1dd43a b7c2f91caf1444c68dcf6bd66966d67e c2236fb6443441618c69ad660b0932dd - - default default] [instance: 19dd5782-9191-431a-bc5b-c5c3d41fc07c] Start _get_guest_xml network_info=[{"id": "bc53fd4b-9f72-4ae5-bc5f-122f83e81ba4", "address": "fa:16:3e:32:66:5e", "network": {"id": "81801c54-c543-4e0f-94c1-cdf24ddb6bce", "bridge": "br-int", "label": "tempest-UpdateMultiattachVolumeNegativeTest-1315951976-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c2236fb6443441618c69ad660b0932dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbc53fd4b-9f", "ovs_interfaceid": "bc53fd4b-9f72-4ae5-bc5f-122f83e81ba4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'device_type': 'disk', 'encryption_secret_uuid': None, 'disk_bus': 'virtio', 'boot_index': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'encryption_options': None, 'size': 0, 'encrypted': False, 'image_id': '6efab05d-c7cf-4770-a5c3-c806a2739063'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec  6 02:07:54 np0005548731 nova_compute[232433]: 2025-12-06 07:07:54.894 232437 WARNING nova.virt.libvirt.driver [None req-82f40fba-72a4-4cee-be3b-c39a4f1dd43a b7c2f91caf1444c68dcf6bd66966d67e c2236fb6443441618c69ad660b0932dd - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  6 02:07:54 np0005548731 nova_compute[232433]: 2025-12-06 07:07:54.898 232437 DEBUG nova.virt.libvirt.host [None req-82f40fba-72a4-4cee-be3b-c39a4f1dd43a b7c2f91caf1444c68dcf6bd66966d67e c2236fb6443441618c69ad660b0932dd - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec  6 02:07:54 np0005548731 nova_compute[232433]: 2025-12-06 07:07:54.898 232437 DEBUG nova.virt.libvirt.host [None req-82f40fba-72a4-4cee-be3b-c39a4f1dd43a b7c2f91caf1444c68dcf6bd66966d67e c2236fb6443441618c69ad660b0932dd - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec  6 02:07:54 np0005548731 nova_compute[232433]: 2025-12-06 07:07:54.904 232437 DEBUG nova.virt.libvirt.host [None req-82f40fba-72a4-4cee-be3b-c39a4f1dd43a b7c2f91caf1444c68dcf6bd66966d67e c2236fb6443441618c69ad660b0932dd - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec  6 02:07:54 np0005548731 nova_compute[232433]: 2025-12-06 07:07:54.905 232437 DEBUG nova.virt.libvirt.host [None req-82f40fba-72a4-4cee-be3b-c39a4f1dd43a b7c2f91caf1444c68dcf6bd66966d67e c2236fb6443441618c69ad660b0932dd - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec  6 02:07:54 np0005548731 nova_compute[232433]: 2025-12-06 07:07:54.906 232437 DEBUG nova.virt.libvirt.driver [None req-82f40fba-72a4-4cee-be3b-c39a4f1dd43a b7c2f91caf1444c68dcf6bd66966d67e c2236fb6443441618c69ad660b0932dd - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec  6 02:07:54 np0005548731 nova_compute[232433]: 2025-12-06 07:07:54.906 232437 DEBUG nova.virt.hardware [None req-82f40fba-72a4-4cee-be3b-c39a4f1dd43a b7c2f91caf1444c68dcf6bd66966d67e c2236fb6443441618c69ad660b0932dd - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-06T06:56:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='25848a18-11d9-4f11-80b5-5d005675c76d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec  6 02:07:54 np0005548731 nova_compute[232433]: 2025-12-06 07:07:54.907 232437 DEBUG nova.virt.hardware [None req-82f40fba-72a4-4cee-be3b-c39a4f1dd43a b7c2f91caf1444c68dcf6bd66966d67e c2236fb6443441618c69ad660b0932dd - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec  6 02:07:54 np0005548731 nova_compute[232433]: 2025-12-06 07:07:54.907 232437 DEBUG nova.virt.hardware [None req-82f40fba-72a4-4cee-be3b-c39a4f1dd43a b7c2f91caf1444c68dcf6bd66966d67e c2236fb6443441618c69ad660b0932dd - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec  6 02:07:54 np0005548731 nova_compute[232433]: 2025-12-06 07:07:54.907 232437 DEBUG nova.virt.hardware [None req-82f40fba-72a4-4cee-be3b-c39a4f1dd43a b7c2f91caf1444c68dcf6bd66966d67e c2236fb6443441618c69ad660b0932dd - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec  6 02:07:54 np0005548731 nova_compute[232433]: 2025-12-06 07:07:54.908 232437 DEBUG nova.virt.hardware [None req-82f40fba-72a4-4cee-be3b-c39a4f1dd43a b7c2f91caf1444c68dcf6bd66966d67e c2236fb6443441618c69ad660b0932dd - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec  6 02:07:54 np0005548731 nova_compute[232433]: 2025-12-06 07:07:54.908 232437 DEBUG nova.virt.hardware [None req-82f40fba-72a4-4cee-be3b-c39a4f1dd43a b7c2f91caf1444c68dcf6bd66966d67e c2236fb6443441618c69ad660b0932dd - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec  6 02:07:54 np0005548731 nova_compute[232433]: 2025-12-06 07:07:54.908 232437 DEBUG nova.virt.hardware [None req-82f40fba-72a4-4cee-be3b-c39a4f1dd43a b7c2f91caf1444c68dcf6bd66966d67e c2236fb6443441618c69ad660b0932dd - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec  6 02:07:54 np0005548731 nova_compute[232433]: 2025-12-06 07:07:54.909 232437 DEBUG nova.virt.hardware [None req-82f40fba-72a4-4cee-be3b-c39a4f1dd43a b7c2f91caf1444c68dcf6bd66966d67e c2236fb6443441618c69ad660b0932dd - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec  6 02:07:54 np0005548731 nova_compute[232433]: 2025-12-06 07:07:54.909 232437 DEBUG nova.virt.hardware [None req-82f40fba-72a4-4cee-be3b-c39a4f1dd43a b7c2f91caf1444c68dcf6bd66966d67e c2236fb6443441618c69ad660b0932dd - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec  6 02:07:54 np0005548731 nova_compute[232433]: 2025-12-06 07:07:54.909 232437 DEBUG nova.virt.hardware [None req-82f40fba-72a4-4cee-be3b-c39a4f1dd43a b7c2f91caf1444c68dcf6bd66966d67e c2236fb6443441618c69ad660b0932dd - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec  6 02:07:54 np0005548731 nova_compute[232433]: 2025-12-06 07:07:54.909 232437 DEBUG nova.virt.hardware [None req-82f40fba-72a4-4cee-be3b-c39a4f1dd43a b7c2f91caf1444c68dcf6bd66966d67e c2236fb6443441618c69ad660b0932dd - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec  6 02:07:54 np0005548731 nova_compute[232433]: 2025-12-06 07:07:54.912 232437 DEBUG oslo_concurrency.processutils [None req-82f40fba-72a4-4cee-be3b-c39a4f1dd43a b7c2f91caf1444c68dcf6bd66966d67e c2236fb6443441618c69ad660b0932dd - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:07:55 np0005548731 nova_compute[232433]: 2025-12-06 07:07:55.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:07:55 np0005548731 nova_compute[232433]: 2025-12-06 07:07:55.105 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Dec  6 02:07:55 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e181 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:07:55 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  6 02:07:55 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/797452098' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  6 02:07:55 np0005548731 nova_compute[232433]: 2025-12-06 07:07:55.341 232437 DEBUG oslo_concurrency.processutils [None req-82f40fba-72a4-4cee-be3b-c39a4f1dd43a b7c2f91caf1444c68dcf6bd66966d67e c2236fb6443441618c69ad660b0932dd - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.428s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:07:55 np0005548731 nova_compute[232433]: 2025-12-06 07:07:55.364 232437 DEBUG nova.storage.rbd_utils [None req-82f40fba-72a4-4cee-be3b-c39a4f1dd43a b7c2f91caf1444c68dcf6bd66966d67e c2236fb6443441618c69ad660b0932dd - - default default] rbd image 19dd5782-9191-431a-bc5b-c5c3d41fc07c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:07:55 np0005548731 nova_compute[232433]: 2025-12-06 07:07:55.368 232437 DEBUG oslo_concurrency.processutils [None req-82f40fba-72a4-4cee-be3b-c39a4f1dd43a b7c2f91caf1444c68dcf6bd66966d67e c2236fb6443441618c69ad660b0932dd - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:07:55 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  6 02:07:55 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3883785481' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  6 02:07:55 np0005548731 nova_compute[232433]: 2025-12-06 07:07:55.792 232437 DEBUG oslo_concurrency.processutils [None req-82f40fba-72a4-4cee-be3b-c39a4f1dd43a b7c2f91caf1444c68dcf6bd66966d67e c2236fb6443441618c69ad660b0932dd - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.424s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:07:55 np0005548731 nova_compute[232433]: 2025-12-06 07:07:55.793 232437 DEBUG nova.virt.libvirt.vif [None req-82f40fba-72a4-4cee-be3b-c39a4f1dd43a b7c2f91caf1444c68dcf6bd66966d67e c2236fb6443441618c69ad660b0932dd - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T07:07:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-UpdateMultiattachVolumeNegativeTest-server-947925773',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-updatemultiattachvolumenegativetest-server-947925773',id=44,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLxNM3sYiKzp1+NUb/2De95mJs+SH1m1mxJqrRUBmO5hfmaPmDEbehrvD5V1R8B784do2XvfbohRCGUURA5SWZg3+xgRCk33hYLHpa3KVfm+1jfWf3JbwWLhSjPluoLz2w==',key_name='tempest-keypair-1560414487',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c2236fb6443441618c69ad660b0932dd',ramdisk_id='',reservation_id='r-tbc21eth',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-UpdateMultiattachVolumeNegativeTest-1086119788',owner_user_name='tempest-UpdateMultiattachVolumeNegativeTest-1086119788-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:07:47Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='b7c2f91caf1444c68dcf6bd66966d67e',uuid=19dd5782-9191-431a-bc5b-c5c3d41fc07c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "bc53fd4b-9f72-4ae5-bc5f-122f83e81ba4", "address": "fa:16:3e:32:66:5e", "network": {"id": "81801c54-c543-4e0f-94c1-cdf24ddb6bce", "bridge": "br-int", "label": "tempest-UpdateMultiattachVolumeNegativeTest-1315951976-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, 
"ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c2236fb6443441618c69ad660b0932dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbc53fd4b-9f", "ovs_interfaceid": "bc53fd4b-9f72-4ae5-bc5f-122f83e81ba4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec  6 02:07:55 np0005548731 nova_compute[232433]: 2025-12-06 07:07:55.794 232437 DEBUG nova.network.os_vif_util [None req-82f40fba-72a4-4cee-be3b-c39a4f1dd43a b7c2f91caf1444c68dcf6bd66966d67e c2236fb6443441618c69ad660b0932dd - - default default] Converting VIF {"id": "bc53fd4b-9f72-4ae5-bc5f-122f83e81ba4", "address": "fa:16:3e:32:66:5e", "network": {"id": "81801c54-c543-4e0f-94c1-cdf24ddb6bce", "bridge": "br-int", "label": "tempest-UpdateMultiattachVolumeNegativeTest-1315951976-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c2236fb6443441618c69ad660b0932dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbc53fd4b-9f", "ovs_interfaceid": "bc53fd4b-9f72-4ae5-bc5f-122f83e81ba4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 02:07:55 np0005548731 nova_compute[232433]: 2025-12-06 07:07:55.795 232437 DEBUG nova.network.os_vif_util [None req-82f40fba-72a4-4cee-be3b-c39a4f1dd43a b7c2f91caf1444c68dcf6bd66966d67e c2236fb6443441618c69ad660b0932dd - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:32:66:5e,bridge_name='br-int',has_traffic_filtering=True,id=bc53fd4b-9f72-4ae5-bc5f-122f83e81ba4,network=Network(81801c54-c543-4e0f-94c1-cdf24ddb6bce),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbc53fd4b-9f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 02:07:55 np0005548731 nova_compute[232433]: 2025-12-06 07:07:55.796 232437 DEBUG nova.objects.instance [None req-82f40fba-72a4-4cee-be3b-c39a4f1dd43a b7c2f91caf1444c68dcf6bd66966d67e c2236fb6443441618c69ad660b0932dd - - default default] Lazy-loading 'pci_devices' on Instance uuid 19dd5782-9191-431a-bc5b-c5c3d41fc07c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:07:55 np0005548731 nova_compute[232433]: 2025-12-06 07:07:55.816 232437 DEBUG nova.virt.libvirt.driver [None req-82f40fba-72a4-4cee-be3b-c39a4f1dd43a b7c2f91caf1444c68dcf6bd66966d67e c2236fb6443441618c69ad660b0932dd - - default default] [instance: 19dd5782-9191-431a-bc5b-c5c3d41fc07c] End _get_guest_xml xml=<domain type="kvm">
Dec  6 02:07:55 np0005548731 nova_compute[232433]:  <uuid>19dd5782-9191-431a-bc5b-c5c3d41fc07c</uuid>
Dec  6 02:07:55 np0005548731 nova_compute[232433]:  <name>instance-0000002c</name>
Dec  6 02:07:55 np0005548731 nova_compute[232433]:  <memory>131072</memory>
Dec  6 02:07:55 np0005548731 nova_compute[232433]:  <vcpu>1</vcpu>
Dec  6 02:07:55 np0005548731 nova_compute[232433]:  <metadata>
Dec  6 02:07:55 np0005548731 nova_compute[232433]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec  6 02:07:55 np0005548731 nova_compute[232433]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec  6 02:07:55 np0005548731 nova_compute[232433]:      <nova:name>tempest-UpdateMultiattachVolumeNegativeTest-server-947925773</nova:name>
Dec  6 02:07:55 np0005548731 nova_compute[232433]:      <nova:creationTime>2025-12-06 07:07:54</nova:creationTime>
Dec  6 02:07:55 np0005548731 nova_compute[232433]:      <nova:flavor name="m1.nano">
Dec  6 02:07:55 np0005548731 nova_compute[232433]:        <nova:memory>128</nova:memory>
Dec  6 02:07:55 np0005548731 nova_compute[232433]:        <nova:disk>1</nova:disk>
Dec  6 02:07:55 np0005548731 nova_compute[232433]:        <nova:swap>0</nova:swap>
Dec  6 02:07:55 np0005548731 nova_compute[232433]:        <nova:ephemeral>0</nova:ephemeral>
Dec  6 02:07:55 np0005548731 nova_compute[232433]:        <nova:vcpus>1</nova:vcpus>
Dec  6 02:07:55 np0005548731 nova_compute[232433]:      </nova:flavor>
Dec  6 02:07:55 np0005548731 nova_compute[232433]:      <nova:owner>
Dec  6 02:07:55 np0005548731 nova_compute[232433]:        <nova:user uuid="b7c2f91caf1444c68dcf6bd66966d67e">tempest-UpdateMultiattachVolumeNegativeTest-1086119788-project-member</nova:user>
Dec  6 02:07:55 np0005548731 nova_compute[232433]:        <nova:project uuid="c2236fb6443441618c69ad660b0932dd">tempest-UpdateMultiattachVolumeNegativeTest-1086119788</nova:project>
Dec  6 02:07:55 np0005548731 nova_compute[232433]:      </nova:owner>
Dec  6 02:07:55 np0005548731 nova_compute[232433]:      <nova:root type="image" uuid="6efab05d-c7cf-4770-a5c3-c806a2739063"/>
Dec  6 02:07:55 np0005548731 nova_compute[232433]:      <nova:ports>
Dec  6 02:07:55 np0005548731 nova_compute[232433]:        <nova:port uuid="bc53fd4b-9f72-4ae5-bc5f-122f83e81ba4">
Dec  6 02:07:55 np0005548731 nova_compute[232433]:          <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Dec  6 02:07:55 np0005548731 nova_compute[232433]:        </nova:port>
Dec  6 02:07:55 np0005548731 nova_compute[232433]:      </nova:ports>
Dec  6 02:07:55 np0005548731 nova_compute[232433]:    </nova:instance>
Dec  6 02:07:55 np0005548731 nova_compute[232433]:  </metadata>
Dec  6 02:07:55 np0005548731 nova_compute[232433]:  <sysinfo type="smbios">
Dec  6 02:07:55 np0005548731 nova_compute[232433]:    <system>
Dec  6 02:07:55 np0005548731 nova_compute[232433]:      <entry name="manufacturer">RDO</entry>
Dec  6 02:07:55 np0005548731 nova_compute[232433]:      <entry name="product">OpenStack Compute</entry>
Dec  6 02:07:55 np0005548731 nova_compute[232433]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec  6 02:07:55 np0005548731 nova_compute[232433]:      <entry name="serial">19dd5782-9191-431a-bc5b-c5c3d41fc07c</entry>
Dec  6 02:07:55 np0005548731 nova_compute[232433]:      <entry name="uuid">19dd5782-9191-431a-bc5b-c5c3d41fc07c</entry>
Dec  6 02:07:55 np0005548731 nova_compute[232433]:      <entry name="family">Virtual Machine</entry>
Dec  6 02:07:55 np0005548731 nova_compute[232433]:    </system>
Dec  6 02:07:55 np0005548731 nova_compute[232433]:  </sysinfo>
Dec  6 02:07:55 np0005548731 nova_compute[232433]:  <os>
Dec  6 02:07:55 np0005548731 nova_compute[232433]:    <type arch="x86_64" machine="q35">hvm</type>
Dec  6 02:07:55 np0005548731 nova_compute[232433]:    <boot dev="hd"/>
Dec  6 02:07:55 np0005548731 nova_compute[232433]:    <smbios mode="sysinfo"/>
Dec  6 02:07:55 np0005548731 nova_compute[232433]:  </os>
Dec  6 02:07:55 np0005548731 nova_compute[232433]:  <features>
Dec  6 02:07:55 np0005548731 nova_compute[232433]:    <acpi/>
Dec  6 02:07:55 np0005548731 nova_compute[232433]:    <apic/>
Dec  6 02:07:55 np0005548731 nova_compute[232433]:    <vmcoreinfo/>
Dec  6 02:07:55 np0005548731 nova_compute[232433]:  </features>
Dec  6 02:07:55 np0005548731 nova_compute[232433]:  <clock offset="utc">
Dec  6 02:07:55 np0005548731 nova_compute[232433]:    <timer name="pit" tickpolicy="delay"/>
Dec  6 02:07:55 np0005548731 nova_compute[232433]:    <timer name="rtc" tickpolicy="catchup"/>
Dec  6 02:07:55 np0005548731 nova_compute[232433]:    <timer name="hpet" present="no"/>
Dec  6 02:07:55 np0005548731 nova_compute[232433]:  </clock>
Dec  6 02:07:55 np0005548731 nova_compute[232433]:  <cpu mode="custom" match="exact">
Dec  6 02:07:55 np0005548731 nova_compute[232433]:    <model>Nehalem</model>
Dec  6 02:07:55 np0005548731 nova_compute[232433]:    <topology sockets="1" cores="1" threads="1"/>
Dec  6 02:07:55 np0005548731 nova_compute[232433]:  </cpu>
Dec  6 02:07:55 np0005548731 nova_compute[232433]:  <devices>
Dec  6 02:07:55 np0005548731 nova_compute[232433]:    <disk type="network" device="disk">
Dec  6 02:07:55 np0005548731 nova_compute[232433]:      <driver type="raw" cache="none"/>
Dec  6 02:07:55 np0005548731 nova_compute[232433]:      <source protocol="rbd" name="vms/19dd5782-9191-431a-bc5b-c5c3d41fc07c_disk">
Dec  6 02:07:55 np0005548731 nova_compute[232433]:        <host name="192.168.122.100" port="6789"/>
Dec  6 02:07:55 np0005548731 nova_compute[232433]:        <host name="192.168.122.102" port="6789"/>
Dec  6 02:07:55 np0005548731 nova_compute[232433]:        <host name="192.168.122.101" port="6789"/>
Dec  6 02:07:55 np0005548731 nova_compute[232433]:      </source>
Dec  6 02:07:55 np0005548731 nova_compute[232433]:      <auth username="openstack">
Dec  6 02:07:55 np0005548731 nova_compute[232433]:        <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec  6 02:07:55 np0005548731 nova_compute[232433]:      </auth>
Dec  6 02:07:55 np0005548731 nova_compute[232433]:      <target dev="vda" bus="virtio"/>
Dec  6 02:07:55 np0005548731 nova_compute[232433]:    </disk>
Dec  6 02:07:55 np0005548731 nova_compute[232433]:    <disk type="network" device="cdrom">
Dec  6 02:07:55 np0005548731 nova_compute[232433]:      <driver type="raw" cache="none"/>
Dec  6 02:07:55 np0005548731 nova_compute[232433]:      <source protocol="rbd" name="vms/19dd5782-9191-431a-bc5b-c5c3d41fc07c_disk.config">
Dec  6 02:07:55 np0005548731 nova_compute[232433]:        <host name="192.168.122.100" port="6789"/>
Dec  6 02:07:55 np0005548731 nova_compute[232433]:        <host name="192.168.122.102" port="6789"/>
Dec  6 02:07:55 np0005548731 nova_compute[232433]:        <host name="192.168.122.101" port="6789"/>
Dec  6 02:07:55 np0005548731 nova_compute[232433]:      </source>
Dec  6 02:07:55 np0005548731 nova_compute[232433]:      <auth username="openstack">
Dec  6 02:07:55 np0005548731 nova_compute[232433]:        <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec  6 02:07:55 np0005548731 nova_compute[232433]:      </auth>
Dec  6 02:07:55 np0005548731 nova_compute[232433]:      <target dev="sda" bus="sata"/>
Dec  6 02:07:55 np0005548731 nova_compute[232433]:    </disk>
Dec  6 02:07:55 np0005548731 nova_compute[232433]:    <interface type="ethernet">
Dec  6 02:07:55 np0005548731 nova_compute[232433]:      <mac address="fa:16:3e:32:66:5e"/>
Dec  6 02:07:55 np0005548731 nova_compute[232433]:      <model type="virtio"/>
Dec  6 02:07:55 np0005548731 nova_compute[232433]:      <driver name="vhost" rx_queue_size="512"/>
Dec  6 02:07:55 np0005548731 nova_compute[232433]:      <mtu size="1442"/>
Dec  6 02:07:55 np0005548731 nova_compute[232433]:      <target dev="tapbc53fd4b-9f"/>
Dec  6 02:07:55 np0005548731 nova_compute[232433]:    </interface>
Dec  6 02:07:55 np0005548731 nova_compute[232433]:    <serial type="pty">
Dec  6 02:07:55 np0005548731 nova_compute[232433]:      <log file="/var/lib/nova/instances/19dd5782-9191-431a-bc5b-c5c3d41fc07c/console.log" append="off"/>
Dec  6 02:07:55 np0005548731 nova_compute[232433]:    </serial>
Dec  6 02:07:55 np0005548731 nova_compute[232433]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec  6 02:07:55 np0005548731 nova_compute[232433]:    <video>
Dec  6 02:07:55 np0005548731 nova_compute[232433]:      <model type="virtio"/>
Dec  6 02:07:55 np0005548731 nova_compute[232433]:    </video>
Dec  6 02:07:55 np0005548731 nova_compute[232433]:    <input type="tablet" bus="usb"/>
Dec  6 02:07:55 np0005548731 nova_compute[232433]:    <rng model="virtio">
Dec  6 02:07:55 np0005548731 nova_compute[232433]:      <backend model="random">/dev/urandom</backend>
Dec  6 02:07:55 np0005548731 nova_compute[232433]:    </rng>
Dec  6 02:07:55 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root"/>
Dec  6 02:07:55 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:07:55 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:07:55 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:07:55 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:07:55 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:07:55 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:07:55 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:07:55 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:07:55 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:07:55 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:07:55 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:07:55 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:07:55 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:07:55 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:07:55 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:07:55 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:07:55 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:07:55 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:07:55 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:07:55 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:07:55 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:07:55 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:07:55 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:07:55 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:07:55 np0005548731 nova_compute[232433]:    <controller type="usb" index="0"/>
Dec  6 02:07:55 np0005548731 nova_compute[232433]:    <memballoon model="virtio">
Dec  6 02:07:55 np0005548731 nova_compute[232433]:      <stats period="10"/>
Dec  6 02:07:55 np0005548731 nova_compute[232433]:    </memballoon>
Dec  6 02:07:55 np0005548731 nova_compute[232433]:  </devices>
Dec  6 02:07:55 np0005548731 nova_compute[232433]: </domain>
Dec  6 02:07:55 np0005548731 nova_compute[232433]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec  6 02:07:55 np0005548731 nova_compute[232433]: 2025-12-06 07:07:55.817 232437 DEBUG nova.compute.manager [None req-82f40fba-72a4-4cee-be3b-c39a4f1dd43a b7c2f91caf1444c68dcf6bd66966d67e c2236fb6443441618c69ad660b0932dd - - default default] [instance: 19dd5782-9191-431a-bc5b-c5c3d41fc07c] Preparing to wait for external event network-vif-plugged-bc53fd4b-9f72-4ae5-bc5f-122f83e81ba4 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec  6 02:07:55 np0005548731 nova_compute[232433]: 2025-12-06 07:07:55.817 232437 DEBUG oslo_concurrency.lockutils [None req-82f40fba-72a4-4cee-be3b-c39a4f1dd43a b7c2f91caf1444c68dcf6bd66966d67e c2236fb6443441618c69ad660b0932dd - - default default] Acquiring lock "19dd5782-9191-431a-bc5b-c5c3d41fc07c-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:07:55 np0005548731 nova_compute[232433]: 2025-12-06 07:07:55.818 232437 DEBUG oslo_concurrency.lockutils [None req-82f40fba-72a4-4cee-be3b-c39a4f1dd43a b7c2f91caf1444c68dcf6bd66966d67e c2236fb6443441618c69ad660b0932dd - - default default] Lock "19dd5782-9191-431a-bc5b-c5c3d41fc07c-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:07:55 np0005548731 nova_compute[232433]: 2025-12-06 07:07:55.818 232437 DEBUG oslo_concurrency.lockutils [None req-82f40fba-72a4-4cee-be3b-c39a4f1dd43a b7c2f91caf1444c68dcf6bd66966d67e c2236fb6443441618c69ad660b0932dd - - default default] Lock "19dd5782-9191-431a-bc5b-c5c3d41fc07c-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:07:55 np0005548731 nova_compute[232433]: 2025-12-06 07:07:55.819 232437 DEBUG nova.virt.libvirt.vif [None req-82f40fba-72a4-4cee-be3b-c39a4f1dd43a b7c2f91caf1444c68dcf6bd66966d67e c2236fb6443441618c69ad660b0932dd - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T07:07:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-UpdateMultiattachVolumeNegativeTest-server-947925773',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-updatemultiattachvolumenegativetest-server-947925773',id=44,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLxNM3sYiKzp1+NUb/2De95mJs+SH1m1mxJqrRUBmO5hfmaPmDEbehrvD5V1R8B784do2XvfbohRCGUURA5SWZg3+xgRCk33hYLHpa3KVfm+1jfWf3JbwWLhSjPluoLz2w==',key_name='tempest-keypair-1560414487',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c2236fb6443441618c69ad660b0932dd',ramdisk_id='',reservation_id='r-tbc21eth',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-UpdateMultiattachVolumeNegativeTest-1086119788',owner_user_name='tempest-UpdateMultiattachVolumeNegativeTest-1086119788-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:07:47Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='b7c2f91caf1444c68dcf6bd66966d67e',uuid=19dd5782-9191-431a-bc5b-c5c3d41fc07c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "bc53fd4b-9f72-4ae5-bc5f-122f83e81ba4", "address": "fa:16:3e:32:66:5e", "network": {"id": "81801c54-c543-4e0f-94c1-cdf24ddb6bce", "bridge": "br-int", "label": "tempest-UpdateMultiattachVolumeNegativeTest-1315951976-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, 
"meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c2236fb6443441618c69ad660b0932dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbc53fd4b-9f", "ovs_interfaceid": "bc53fd4b-9f72-4ae5-bc5f-122f83e81ba4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec  6 02:07:55 np0005548731 nova_compute[232433]: 2025-12-06 07:07:55.819 232437 DEBUG nova.network.os_vif_util [None req-82f40fba-72a4-4cee-be3b-c39a4f1dd43a b7c2f91caf1444c68dcf6bd66966d67e c2236fb6443441618c69ad660b0932dd - - default default] Converting VIF {"id": "bc53fd4b-9f72-4ae5-bc5f-122f83e81ba4", "address": "fa:16:3e:32:66:5e", "network": {"id": "81801c54-c543-4e0f-94c1-cdf24ddb6bce", "bridge": "br-int", "label": "tempest-UpdateMultiattachVolumeNegativeTest-1315951976-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c2236fb6443441618c69ad660b0932dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbc53fd4b-9f", "ovs_interfaceid": "bc53fd4b-9f72-4ae5-bc5f-122f83e81ba4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 02:07:55 np0005548731 nova_compute[232433]: 2025-12-06 07:07:55.821 232437 DEBUG nova.network.os_vif_util [None req-82f40fba-72a4-4cee-be3b-c39a4f1dd43a b7c2f91caf1444c68dcf6bd66966d67e c2236fb6443441618c69ad660b0932dd - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:32:66:5e,bridge_name='br-int',has_traffic_filtering=True,id=bc53fd4b-9f72-4ae5-bc5f-122f83e81ba4,network=Network(81801c54-c543-4e0f-94c1-cdf24ddb6bce),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbc53fd4b-9f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 02:07:55 np0005548731 nova_compute[232433]: 2025-12-06 07:07:55.821 232437 DEBUG os_vif [None req-82f40fba-72a4-4cee-be3b-c39a4f1dd43a b7c2f91caf1444c68dcf6bd66966d67e c2236fb6443441618c69ad660b0932dd - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:32:66:5e,bridge_name='br-int',has_traffic_filtering=True,id=bc53fd4b-9f72-4ae5-bc5f-122f83e81ba4,network=Network(81801c54-c543-4e0f-94c1-cdf24ddb6bce),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbc53fd4b-9f') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec  6 02:07:55 np0005548731 nova_compute[232433]: 2025-12-06 07:07:55.822 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:07:55 np0005548731 nova_compute[232433]: 2025-12-06 07:07:55.822 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:07:55 np0005548731 nova_compute[232433]: 2025-12-06 07:07:55.823 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  6 02:07:55 np0005548731 nova_compute[232433]: 2025-12-06 07:07:55.827 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:07:55 np0005548731 nova_compute[232433]: 2025-12-06 07:07:55.827 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapbc53fd4b-9f, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:07:55 np0005548731 nova_compute[232433]: 2025-12-06 07:07:55.828 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapbc53fd4b-9f, col_values=(('external_ids', {'iface-id': 'bc53fd4b-9f72-4ae5-bc5f-122f83e81ba4', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:32:66:5e', 'vm-uuid': '19dd5782-9191-431a-bc5b-c5c3d41fc07c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:07:55 np0005548731 nova_compute[232433]: 2025-12-06 07:07:55.830 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:07:55 np0005548731 NetworkManager[49182]: <info>  [1765004875.8317] manager: (tapbc53fd4b-9f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/81)
Dec  6 02:07:55 np0005548731 nova_compute[232433]: 2025-12-06 07:07:55.832 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  6 02:07:55 np0005548731 nova_compute[232433]: 2025-12-06 07:07:55.838 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:07:55 np0005548731 nova_compute[232433]: 2025-12-06 07:07:55.839 232437 INFO os_vif [None req-82f40fba-72a4-4cee-be3b-c39a4f1dd43a b7c2f91caf1444c68dcf6bd66966d67e c2236fb6443441618c69ad660b0932dd - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:32:66:5e,bridge_name='br-int',has_traffic_filtering=True,id=bc53fd4b-9f72-4ae5-bc5f-122f83e81ba4,network=Network(81801c54-c543-4e0f-94c1-cdf24ddb6bce),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbc53fd4b-9f')#033[00m
Dec  6 02:07:55 np0005548731 nova_compute[232433]: 2025-12-06 07:07:55.891 232437 DEBUG nova.virt.libvirt.driver [None req-82f40fba-72a4-4cee-be3b-c39a4f1dd43a b7c2f91caf1444c68dcf6bd66966d67e c2236fb6443441618c69ad660b0932dd - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  6 02:07:55 np0005548731 nova_compute[232433]: 2025-12-06 07:07:55.892 232437 DEBUG nova.virt.libvirt.driver [None req-82f40fba-72a4-4cee-be3b-c39a4f1dd43a b7c2f91caf1444c68dcf6bd66966d67e c2236fb6443441618c69ad660b0932dd - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  6 02:07:55 np0005548731 nova_compute[232433]: 2025-12-06 07:07:55.892 232437 DEBUG nova.virt.libvirt.driver [None req-82f40fba-72a4-4cee-be3b-c39a4f1dd43a b7c2f91caf1444c68dcf6bd66966d67e c2236fb6443441618c69ad660b0932dd - - default default] No VIF found with MAC fa:16:3e:32:66:5e, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec  6 02:07:55 np0005548731 nova_compute[232433]: 2025-12-06 07:07:55.893 232437 INFO nova.virt.libvirt.driver [None req-82f40fba-72a4-4cee-be3b-c39a4f1dd43a b7c2f91caf1444c68dcf6bd66966d67e c2236fb6443441618c69ad660b0932dd - - default default] [instance: 19dd5782-9191-431a-bc5b-c5c3d41fc07c] Using config drive#033[00m
Dec  6 02:07:55 np0005548731 nova_compute[232433]: 2025-12-06 07:07:55.920 232437 DEBUG nova.storage.rbd_utils [None req-82f40fba-72a4-4cee-be3b-c39a4f1dd43a b7c2f91caf1444c68dcf6bd66966d67e c2236fb6443441618c69ad660b0932dd - - default default] rbd image 19dd5782-9191-431a-bc5b-c5c3d41fc07c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:07:55 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:07:55 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000025s ======
Dec  6 02:07:55 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:07:55.975 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Dec  6 02:07:56 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:07:56 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000025s ======
Dec  6 02:07:56 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:07:56.087 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Dec  6 02:07:56 np0005548731 nova_compute[232433]: 2025-12-06 07:07:56.533 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:07:57 np0005548731 nova_compute[232433]: 2025-12-06 07:07:57.161 232437 INFO nova.virt.libvirt.driver [None req-82f40fba-72a4-4cee-be3b-c39a4f1dd43a b7c2f91caf1444c68dcf6bd66966d67e c2236fb6443441618c69ad660b0932dd - - default default] [instance: 19dd5782-9191-431a-bc5b-c5c3d41fc07c] Creating config drive at /var/lib/nova/instances/19dd5782-9191-431a-bc5b-c5c3d41fc07c/disk.config#033[00m
Dec  6 02:07:57 np0005548731 nova_compute[232433]: 2025-12-06 07:07:57.168 232437 DEBUG oslo_concurrency.processutils [None req-82f40fba-72a4-4cee-be3b-c39a4f1dd43a b7c2f91caf1444c68dcf6bd66966d67e c2236fb6443441618c69ad660b0932dd - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/19dd5782-9191-431a-bc5b-c5c3d41fc07c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpb9287ebe execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:07:57 np0005548731 nova_compute[232433]: 2025-12-06 07:07:57.301 232437 DEBUG oslo_concurrency.processutils [None req-82f40fba-72a4-4cee-be3b-c39a4f1dd43a b7c2f91caf1444c68dcf6bd66966d67e c2236fb6443441618c69ad660b0932dd - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/19dd5782-9191-431a-bc5b-c5c3d41fc07c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpb9287ebe" returned: 0 in 0.132s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:07:57 np0005548731 nova_compute[232433]: 2025-12-06 07:07:57.331 232437 DEBUG nova.storage.rbd_utils [None req-82f40fba-72a4-4cee-be3b-c39a4f1dd43a b7c2f91caf1444c68dcf6bd66966d67e c2236fb6443441618c69ad660b0932dd - - default default] rbd image 19dd5782-9191-431a-bc5b-c5c3d41fc07c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:07:57 np0005548731 nova_compute[232433]: 2025-12-06 07:07:57.335 232437 DEBUG oslo_concurrency.processutils [None req-82f40fba-72a4-4cee-be3b-c39a4f1dd43a b7c2f91caf1444c68dcf6bd66966d67e c2236fb6443441618c69ad660b0932dd - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/19dd5782-9191-431a-bc5b-c5c3d41fc07c/disk.config 19dd5782-9191-431a-bc5b-c5c3d41fc07c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:07:57 np0005548731 nova_compute[232433]: 2025-12-06 07:07:57.558 232437 DEBUG oslo_concurrency.processutils [None req-82f40fba-72a4-4cee-be3b-c39a4f1dd43a b7c2f91caf1444c68dcf6bd66966d67e c2236fb6443441618c69ad660b0932dd - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/19dd5782-9191-431a-bc5b-c5c3d41fc07c/disk.config 19dd5782-9191-431a-bc5b-c5c3d41fc07c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.222s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:07:57 np0005548731 nova_compute[232433]: 2025-12-06 07:07:57.559 232437 INFO nova.virt.libvirt.driver [None req-82f40fba-72a4-4cee-be3b-c39a4f1dd43a b7c2f91caf1444c68dcf6bd66966d67e c2236fb6443441618c69ad660b0932dd - - default default] [instance: 19dd5782-9191-431a-bc5b-c5c3d41fc07c] Deleting local config drive /var/lib/nova/instances/19dd5782-9191-431a-bc5b-c5c3d41fc07c/disk.config because it was imported into RBD.#033[00m
Dec  6 02:07:57 np0005548731 kernel: tapbc53fd4b-9f: entered promiscuous mode
Dec  6 02:07:57 np0005548731 NetworkManager[49182]: <info>  [1765004877.6361] manager: (tapbc53fd4b-9f): new Tun device (/org/freedesktop/NetworkManager/Devices/82)
Dec  6 02:07:57 np0005548731 ovn_controller[133927]: 2025-12-06T07:07:57Z|00137|binding|INFO|Claiming lport bc53fd4b-9f72-4ae5-bc5f-122f83e81ba4 for this chassis.
Dec  6 02:07:57 np0005548731 ovn_controller[133927]: 2025-12-06T07:07:57Z|00138|binding|INFO|bc53fd4b-9f72-4ae5-bc5f-122f83e81ba4: Claiming fa:16:3e:32:66:5e 10.100.0.14
Dec  6 02:07:57 np0005548731 nova_compute[232433]: 2025-12-06 07:07:57.636 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:07:57 np0005548731 nova_compute[232433]: 2025-12-06 07:07:57.642 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:07:57 np0005548731 NetworkManager[49182]: <info>  [1765004877.6550] manager: (patch-provnet-9e78c1a1-68f4-477a-abaa-13a98bde06e5-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/83)
Dec  6 02:07:57 np0005548731 nova_compute[232433]: 2025-12-06 07:07:57.654 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:07:57 np0005548731 NetworkManager[49182]: <info>  [1765004877.6558] manager: (patch-br-int-to-provnet-9e78c1a1-68f4-477a-abaa-13a98bde06e5): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/84)
Dec  6 02:07:57 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:07:57.673 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:32:66:5e 10.100.0.14'], port_security=['fa:16:3e:32:66:5e 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '19dd5782-9191-431a-bc5b-c5c3d41fc07c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-81801c54-c543-4e0f-94c1-cdf24ddb6bce', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c2236fb6443441618c69ad660b0932dd', 'neutron:revision_number': '2', 'neutron:security_group_ids': '1faad6c4-429b-4e24-95b0-2a5a82e4cdc8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ed024f0c-371e-4b3f-951f-85c5c8e0d63d, chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], logical_port=bc53fd4b-9f72-4ae5-bc5f-122f83e81ba4) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 02:07:57 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:07:57.674 143965 INFO neutron.agent.ovn.metadata.agent [-] Port bc53fd4b-9f72-4ae5-bc5f-122f83e81ba4 in datapath 81801c54-c543-4e0f-94c1-cdf24ddb6bce bound to our chassis#033[00m
Dec  6 02:07:57 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:07:57.676 143965 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 81801c54-c543-4e0f-94c1-cdf24ddb6bce#033[00m
Dec  6 02:07:57 np0005548731 systemd-udevd[252136]: Network interface NamePolicy= disabled on kernel command line.
Dec  6 02:07:57 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:07:57.690 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[5d6abea1-dc6c-4035-8115-246864cff50e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:07:57 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:07:57.691 143965 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap81801c54-c1 in ovnmeta-81801c54-c543-4e0f-94c1-cdf24ddb6bce namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Dec  6 02:07:57 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:07:57.695 236668 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap81801c54-c0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Dec  6 02:07:57 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:07:57.695 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[349e3335-299d-43fb-95bc-9fe2d887091e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:07:57 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:07:57.696 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[9a291d68-099a-4d11-b4a7-95ea7c80547e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:07:57 np0005548731 NetworkManager[49182]: <info>  [1765004877.6994] device (tapbc53fd4b-9f): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec  6 02:07:57 np0005548731 NetworkManager[49182]: <info>  [1765004877.7009] device (tapbc53fd4b-9f): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec  6 02:07:57 np0005548731 systemd-machined[195355]: New machine qemu-21-instance-0000002c.
Dec  6 02:07:57 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:07:57.707 144079 DEBUG oslo.privsep.daemon [-] privsep: reply[4ab40c15-110a-4b1b-b793-2c87103026a2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:07:57 np0005548731 systemd[1]: Started Virtual Machine qemu-21-instance-0000002c.
Dec  6 02:07:57 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:07:57.733 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[4e831c6f-f828-4eaf-a911-33f3e5ae0179]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:07:57 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:07:57.762 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[6d10852d-ccad-4d44-887a-a7876ac54c3c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:07:57 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:07:57.767 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[2643558d-f0d4-4be4-9769-29f233f85f31]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:07:57 np0005548731 NetworkManager[49182]: <info>  [1765004877.7683] manager: (tap81801c54-c0): new Veth device (/org/freedesktop/NetworkManager/Devices/85)
Dec  6 02:07:57 np0005548731 systemd-udevd[252143]: Network interface NamePolicy= disabled on kernel command line.
Dec  6 02:07:57 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:07:57.798 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[9ac45198-2335-4e8e-af00-518d98c245d9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:07:57 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:07:57.802 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[cc1f7a66-5303-4e7e-bfbf-ac71228700ea]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:07:57 np0005548731 NetworkManager[49182]: <info>  [1765004877.8278] device (tap81801c54-c0): carrier: link connected
Dec  6 02:07:57 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:07:57.833 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[dad2ae12-27ca-4e6f-8e70-e6a2dd73bd7e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:07:57 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:07:57.848 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[bb78142d-08bd-4dec-b403-6e461385475d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap81801c54-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e5:5f:24'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 49], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 513983, 'reachable_time': 27178, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 252222, 'error': None, 'target': 'ovnmeta-81801c54-c543-4e0f-94c1-cdf24ddb6bce', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:07:57 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:07:57.862 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[46956c61-4bac-4f47-9134-382a1ad9dbee]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fee5:5f24'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 513983, 'tstamp': 513983}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 252223, 'error': None, 'target': 'ovnmeta-81801c54-c543-4e0f-94c1-cdf24ddb6bce', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:07:57 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:07:57.877 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[5d786469-ca7d-44bb-a348-86cd02fb6715]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap81801c54-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e5:5f:24'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 49], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 513983, 'reachable_time': 27178, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 252224, 'error': None, 'target': 'ovnmeta-81801c54-c543-4e0f-94c1-cdf24ddb6bce', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:07:57 np0005548731 nova_compute[232433]: 2025-12-06 07:07:57.896 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:07:57 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:07:57.913 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[a71fff98-2abe-4f1c-a2a5-cc309d8f0ba7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:07:57 np0005548731 nova_compute[232433]: 2025-12-06 07:07:57.920 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:07:57 np0005548731 ovn_controller[133927]: 2025-12-06T07:07:57Z|00139|binding|INFO|Setting lport bc53fd4b-9f72-4ae5-bc5f-122f83e81ba4 ovn-installed in OVS
Dec  6 02:07:57 np0005548731 ovn_controller[133927]: 2025-12-06T07:07:57Z|00140|binding|INFO|Setting lport bc53fd4b-9f72-4ae5-bc5f-122f83e81ba4 up in Southbound
Dec  6 02:07:57 np0005548731 nova_compute[232433]: 2025-12-06 07:07:57.932 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:07:57 np0005548731 nova_compute[232433]: 2025-12-06 07:07:57.939 232437 DEBUG nova.network.neutron [req-fdd26c73-3c0c-4bdc-9f09-ee4477da5912 req-3dec1196-46fe-4441-8166-b659378ec39a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 19dd5782-9191-431a-bc5b-c5c3d41fc07c] Updated VIF entry in instance network info cache for port bc53fd4b-9f72-4ae5-bc5f-122f83e81ba4. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  6 02:07:57 np0005548731 nova_compute[232433]: 2025-12-06 07:07:57.939 232437 DEBUG nova.network.neutron [req-fdd26c73-3c0c-4bdc-9f09-ee4477da5912 req-3dec1196-46fe-4441-8166-b659378ec39a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 19dd5782-9191-431a-bc5b-c5c3d41fc07c] Updating instance_info_cache with network_info: [{"id": "bc53fd4b-9f72-4ae5-bc5f-122f83e81ba4", "address": "fa:16:3e:32:66:5e", "network": {"id": "81801c54-c543-4e0f-94c1-cdf24ddb6bce", "bridge": "br-int", "label": "tempest-UpdateMultiattachVolumeNegativeTest-1315951976-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c2236fb6443441618c69ad660b0932dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbc53fd4b-9f", "ovs_interfaceid": "bc53fd4b-9f72-4ae5-bc5f-122f83e81ba4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 02:07:57 np0005548731 nova_compute[232433]: 2025-12-06 07:07:57.966 232437 DEBUG oslo_concurrency.lockutils [req-fdd26c73-3c0c-4bdc-9f09-ee4477da5912 req-3dec1196-46fe-4441-8166-b659378ec39a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-19dd5782-9191-431a-bc5b-c5c3d41fc07c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 02:07:57 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:07:57.976 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[49f61351-50b5-463f-9005-345b3090b5e5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:07:57 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:07:57.977 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap81801c54-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:07:57 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:07:57.978 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  6 02:07:57 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:07:57 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:07:57 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:07:57.977 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:07:57 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:07:57.979 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap81801c54-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:07:57 np0005548731 nova_compute[232433]: 2025-12-06 07:07:57.980 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:07:57 np0005548731 NetworkManager[49182]: <info>  [1765004877.9816] manager: (tap81801c54-c0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/86)
Dec  6 02:07:57 np0005548731 kernel: tap81801c54-c0: entered promiscuous mode
Dec  6 02:07:57 np0005548731 nova_compute[232433]: 2025-12-06 07:07:57.983 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:07:57 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:07:57.985 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap81801c54-c0, col_values=(('external_ids', {'iface-id': '6cb7c4fe-88f9-400c-a7b5-d8d7714c7b41'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:07:57 np0005548731 ovn_controller[133927]: 2025-12-06T07:07:57Z|00141|binding|INFO|Releasing lport 6cb7c4fe-88f9-400c-a7b5-d8d7714c7b41 from this chassis (sb_readonly=0)
Dec  6 02:07:57 np0005548731 nova_compute[232433]: 2025-12-06 07:07:57.986 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:07:58 np0005548731 nova_compute[232433]: 2025-12-06 07:07:58.036 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:07:58 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:07:58.037 143965 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/81801c54-c543-4e0f-94c1-cdf24ddb6bce.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/81801c54-c543-4e0f-94c1-cdf24ddb6bce.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Dec  6 02:07:58 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:07:58.037 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[d87337b7-3428-4615-9f83-43c94c70b05f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:07:58 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:07:58.038 143965 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec  6 02:07:58 np0005548731 ovn_metadata_agent[143960]: global
Dec  6 02:07:58 np0005548731 ovn_metadata_agent[143960]:    log         /dev/log local0 debug
Dec  6 02:07:58 np0005548731 ovn_metadata_agent[143960]:    log-tag     haproxy-metadata-proxy-81801c54-c543-4e0f-94c1-cdf24ddb6bce
Dec  6 02:07:58 np0005548731 ovn_metadata_agent[143960]:    user        root
Dec  6 02:07:58 np0005548731 ovn_metadata_agent[143960]:    group       root
Dec  6 02:07:58 np0005548731 ovn_metadata_agent[143960]:    maxconn     1024
Dec  6 02:07:58 np0005548731 ovn_metadata_agent[143960]:    pidfile     /var/lib/neutron/external/pids/81801c54-c543-4e0f-94c1-cdf24ddb6bce.pid.haproxy
Dec  6 02:07:58 np0005548731 ovn_metadata_agent[143960]:    daemon
Dec  6 02:07:58 np0005548731 ovn_metadata_agent[143960]: 
Dec  6 02:07:58 np0005548731 ovn_metadata_agent[143960]: defaults
Dec  6 02:07:58 np0005548731 ovn_metadata_agent[143960]:    log global
Dec  6 02:07:58 np0005548731 ovn_metadata_agent[143960]:    mode http
Dec  6 02:07:58 np0005548731 ovn_metadata_agent[143960]:    option httplog
Dec  6 02:07:58 np0005548731 ovn_metadata_agent[143960]:    option dontlognull
Dec  6 02:07:58 np0005548731 ovn_metadata_agent[143960]:    option http-server-close
Dec  6 02:07:58 np0005548731 ovn_metadata_agent[143960]:    option forwardfor
Dec  6 02:07:58 np0005548731 ovn_metadata_agent[143960]:    retries                 3
Dec  6 02:07:58 np0005548731 ovn_metadata_agent[143960]:    timeout http-request    30s
Dec  6 02:07:58 np0005548731 ovn_metadata_agent[143960]:    timeout connect         30s
Dec  6 02:07:58 np0005548731 ovn_metadata_agent[143960]:    timeout client          32s
Dec  6 02:07:58 np0005548731 ovn_metadata_agent[143960]:    timeout server          32s
Dec  6 02:07:58 np0005548731 ovn_metadata_agent[143960]:    timeout http-keep-alive 30s
Dec  6 02:07:58 np0005548731 ovn_metadata_agent[143960]: 
Dec  6 02:07:58 np0005548731 ovn_metadata_agent[143960]: 
Dec  6 02:07:58 np0005548731 ovn_metadata_agent[143960]: listen listener
Dec  6 02:07:58 np0005548731 ovn_metadata_agent[143960]:    bind 169.254.169.254:80
Dec  6 02:07:58 np0005548731 ovn_metadata_agent[143960]:    server metadata /var/lib/neutron/metadata_proxy
Dec  6 02:07:58 np0005548731 ovn_metadata_agent[143960]:    http-request add-header X-OVN-Network-ID 81801c54-c543-4e0f-94c1-cdf24ddb6bce
Dec  6 02:07:58 np0005548731 ovn_metadata_agent[143960]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Dec  6 02:07:58 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:07:58.039 143965 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-81801c54-c543-4e0f-94c1-cdf24ddb6bce', 'env', 'PROCESS_TAG=haproxy-81801c54-c543-4e0f-94c1-cdf24ddb6bce', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/81801c54-c543-4e0f-94c1-cdf24ddb6bce.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Dec  6 02:07:58 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:07:58 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 02:07:58 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:07:58.090 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 02:07:58 np0005548731 nova_compute[232433]: 2025-12-06 07:07:58.147 232437 DEBUG nova.virt.driver [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Emitting event <LifecycleEvent: 1765004878.146907, 19dd5782-9191-431a-bc5b-c5c3d41fc07c => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 02:07:58 np0005548731 nova_compute[232433]: 2025-12-06 07:07:58.148 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 19dd5782-9191-431a-bc5b-c5c3d41fc07c] VM Started (Lifecycle Event)#033[00m
Dec  6 02:07:58 np0005548731 nova_compute[232433]: 2025-12-06 07:07:58.176 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 19dd5782-9191-431a-bc5b-c5c3d41fc07c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:07:58 np0005548731 nova_compute[232433]: 2025-12-06 07:07:58.179 232437 DEBUG nova.virt.driver [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Emitting event <LifecycleEvent: 1765004878.1470318, 19dd5782-9191-431a-bc5b-c5c3d41fc07c => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 02:07:58 np0005548731 nova_compute[232433]: 2025-12-06 07:07:58.179 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 19dd5782-9191-431a-bc5b-c5c3d41fc07c] VM Paused (Lifecycle Event)#033[00m
Dec  6 02:07:58 np0005548731 nova_compute[232433]: 2025-12-06 07:07:58.201 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 19dd5782-9191-431a-bc5b-c5c3d41fc07c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:07:58 np0005548731 nova_compute[232433]: 2025-12-06 07:07:58.204 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 19dd5782-9191-431a-bc5b-c5c3d41fc07c] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  6 02:07:58 np0005548731 nova_compute[232433]: 2025-12-06 07:07:58.227 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 19dd5782-9191-431a-bc5b-c5c3d41fc07c] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  6 02:07:58 np0005548731 podman[252299]: 2025-12-06 07:07:58.405699015 +0000 UTC m=+0.053679457 container create 42a2fe6957e37c14e64e20fb0ac5a344924386e969c67d2fde4a800276268b99 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-81801c54-c543-4e0f-94c1-cdf24ddb6bce, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, tcib_managed=true)
Dec  6 02:07:58 np0005548731 systemd[1]: Started libpod-conmon-42a2fe6957e37c14e64e20fb0ac5a344924386e969c67d2fde4a800276268b99.scope.
Dec  6 02:07:58 np0005548731 systemd[1]: Started libcrun container.
Dec  6 02:07:58 np0005548731 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/01fc91097e38cf0acd665342660705e6eb4e311235234e7b86d036b11cf6ebd1/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec  6 02:07:58 np0005548731 podman[252299]: 2025-12-06 07:07:58.374152521 +0000 UTC m=+0.022132973 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec  6 02:07:58 np0005548731 podman[252299]: 2025-12-06 07:07:58.476464493 +0000 UTC m=+0.124444895 container init 42a2fe6957e37c14e64e20fb0ac5a344924386e969c67d2fde4a800276268b99 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-81801c54-c543-4e0f-94c1-cdf24ddb6bce, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Dec  6 02:07:58 np0005548731 podman[252299]: 2025-12-06 07:07:58.482386336 +0000 UTC m=+0.130366708 container start 42a2fe6957e37c14e64e20fb0ac5a344924386e969c67d2fde4a800276268b99 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-81801c54-c543-4e0f-94c1-cdf24ddb6bce, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Dec  6 02:07:58 np0005548731 neutron-haproxy-ovnmeta-81801c54-c543-4e0f-94c1-cdf24ddb6bce[252314]: [NOTICE]   (252318) : New worker (252320) forked
Dec  6 02:07:58 np0005548731 neutron-haproxy-ovnmeta-81801c54-c543-4e0f-94c1-cdf24ddb6bce[252314]: [NOTICE]   (252318) : Loading success.
Dec  6 02:07:59 np0005548731 nova_compute[232433]: 2025-12-06 07:07:59.572 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:07:59 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:07:59 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:07:59 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:07:59.979 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:08:00 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:08:00 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:08:00 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:08:00.093 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:08:00 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e181 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:08:00 np0005548731 nova_compute[232433]: 2025-12-06 07:08:00.831 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:08:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:08:00.851 143965 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:08:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:08:00.852 143965 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:08:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:08:00.852 143965 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:08:01 np0005548731 nova_compute[232433]: 2025-12-06 07:08:01.535 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:08:01 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:08:01 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000025s ======
Dec  6 02:08:01 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:08:01.981 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Dec  6 02:08:02 np0005548731 nova_compute[232433]: 2025-12-06 07:08:02.073 232437 DEBUG nova.compute.manager [req-a3dcb664-f5e1-46ec-8209-227a0345c84c req-84a7c280-7068-4352-85ee-5038030f5641 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 19dd5782-9191-431a-bc5b-c5c3d41fc07c] Received event network-vif-plugged-bc53fd4b-9f72-4ae5-bc5f-122f83e81ba4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:08:02 np0005548731 nova_compute[232433]: 2025-12-06 07:08:02.074 232437 DEBUG oslo_concurrency.lockutils [req-a3dcb664-f5e1-46ec-8209-227a0345c84c req-84a7c280-7068-4352-85ee-5038030f5641 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "19dd5782-9191-431a-bc5b-c5c3d41fc07c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:08:02 np0005548731 nova_compute[232433]: 2025-12-06 07:08:02.074 232437 DEBUG oslo_concurrency.lockutils [req-a3dcb664-f5e1-46ec-8209-227a0345c84c req-84a7c280-7068-4352-85ee-5038030f5641 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "19dd5782-9191-431a-bc5b-c5c3d41fc07c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:08:02 np0005548731 nova_compute[232433]: 2025-12-06 07:08:02.075 232437 DEBUG oslo_concurrency.lockutils [req-a3dcb664-f5e1-46ec-8209-227a0345c84c req-84a7c280-7068-4352-85ee-5038030f5641 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "19dd5782-9191-431a-bc5b-c5c3d41fc07c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:08:02 np0005548731 nova_compute[232433]: 2025-12-06 07:08:02.075 232437 DEBUG nova.compute.manager [req-a3dcb664-f5e1-46ec-8209-227a0345c84c req-84a7c280-7068-4352-85ee-5038030f5641 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 19dd5782-9191-431a-bc5b-c5c3d41fc07c] Processing event network-vif-plugged-bc53fd4b-9f72-4ae5-bc5f-122f83e81ba4 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Dec  6 02:08:02 np0005548731 nova_compute[232433]: 2025-12-06 07:08:02.076 232437 DEBUG nova.compute.manager [None req-82f40fba-72a4-4cee-be3b-c39a4f1dd43a b7c2f91caf1444c68dcf6bd66966d67e c2236fb6443441618c69ad660b0932dd - - default default] [instance: 19dd5782-9191-431a-bc5b-c5c3d41fc07c] Instance event wait completed in 3 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec  6 02:08:02 np0005548731 nova_compute[232433]: 2025-12-06 07:08:02.081 232437 DEBUG nova.virt.driver [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Emitting event <LifecycleEvent: 1765004882.0808516, 19dd5782-9191-431a-bc5b-c5c3d41fc07c => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 02:08:02 np0005548731 nova_compute[232433]: 2025-12-06 07:08:02.081 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 19dd5782-9191-431a-bc5b-c5c3d41fc07c] VM Resumed (Lifecycle Event)#033[00m
Dec  6 02:08:02 np0005548731 nova_compute[232433]: 2025-12-06 07:08:02.084 232437 DEBUG nova.virt.libvirt.driver [None req-82f40fba-72a4-4cee-be3b-c39a4f1dd43a b7c2f91caf1444c68dcf6bd66966d67e c2236fb6443441618c69ad660b0932dd - - default default] [instance: 19dd5782-9191-431a-bc5b-c5c3d41fc07c] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Dec  6 02:08:02 np0005548731 nova_compute[232433]: 2025-12-06 07:08:02.089 232437 INFO nova.virt.libvirt.driver [-] [instance: 19dd5782-9191-431a-bc5b-c5c3d41fc07c] Instance spawned successfully.#033[00m
Dec  6 02:08:02 np0005548731 nova_compute[232433]: 2025-12-06 07:08:02.090 232437 DEBUG nova.virt.libvirt.driver [None req-82f40fba-72a4-4cee-be3b-c39a4f1dd43a b7c2f91caf1444c68dcf6bd66966d67e c2236fb6443441618c69ad660b0932dd - - default default] [instance: 19dd5782-9191-431a-bc5b-c5c3d41fc07c] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Dec  6 02:08:02 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:08:02 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:08:02 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:08:02.097 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:08:02 np0005548731 nova_compute[232433]: 2025-12-06 07:08:02.120 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 19dd5782-9191-431a-bc5b-c5c3d41fc07c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:08:02 np0005548731 nova_compute[232433]: 2025-12-06 07:08:02.127 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 19dd5782-9191-431a-bc5b-c5c3d41fc07c] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  6 02:08:02 np0005548731 nova_compute[232433]: 2025-12-06 07:08:02.129 232437 DEBUG nova.virt.libvirt.driver [None req-82f40fba-72a4-4cee-be3b-c39a4f1dd43a b7c2f91caf1444c68dcf6bd66966d67e c2236fb6443441618c69ad660b0932dd - - default default] [instance: 19dd5782-9191-431a-bc5b-c5c3d41fc07c] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 02:08:02 np0005548731 nova_compute[232433]: 2025-12-06 07:08:02.129 232437 DEBUG nova.virt.libvirt.driver [None req-82f40fba-72a4-4cee-be3b-c39a4f1dd43a b7c2f91caf1444c68dcf6bd66966d67e c2236fb6443441618c69ad660b0932dd - - default default] [instance: 19dd5782-9191-431a-bc5b-c5c3d41fc07c] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 02:08:02 np0005548731 nova_compute[232433]: 2025-12-06 07:08:02.130 232437 DEBUG nova.virt.libvirt.driver [None req-82f40fba-72a4-4cee-be3b-c39a4f1dd43a b7c2f91caf1444c68dcf6bd66966d67e c2236fb6443441618c69ad660b0932dd - - default default] [instance: 19dd5782-9191-431a-bc5b-c5c3d41fc07c] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 02:08:02 np0005548731 nova_compute[232433]: 2025-12-06 07:08:02.130 232437 DEBUG nova.virt.libvirt.driver [None req-82f40fba-72a4-4cee-be3b-c39a4f1dd43a b7c2f91caf1444c68dcf6bd66966d67e c2236fb6443441618c69ad660b0932dd - - default default] [instance: 19dd5782-9191-431a-bc5b-c5c3d41fc07c] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 02:08:02 np0005548731 nova_compute[232433]: 2025-12-06 07:08:02.130 232437 DEBUG nova.virt.libvirt.driver [None req-82f40fba-72a4-4cee-be3b-c39a4f1dd43a b7c2f91caf1444c68dcf6bd66966d67e c2236fb6443441618c69ad660b0932dd - - default default] [instance: 19dd5782-9191-431a-bc5b-c5c3d41fc07c] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 02:08:02 np0005548731 nova_compute[232433]: 2025-12-06 07:08:02.131 232437 DEBUG nova.virt.libvirt.driver [None req-82f40fba-72a4-4cee-be3b-c39a4f1dd43a b7c2f91caf1444c68dcf6bd66966d67e c2236fb6443441618c69ad660b0932dd - - default default] [instance: 19dd5782-9191-431a-bc5b-c5c3d41fc07c] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 02:08:02 np0005548731 nova_compute[232433]: 2025-12-06 07:08:02.183 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 19dd5782-9191-431a-bc5b-c5c3d41fc07c] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  6 02:08:02 np0005548731 nova_compute[232433]: 2025-12-06 07:08:02.216 232437 INFO nova.compute.manager [None req-82f40fba-72a4-4cee-be3b-c39a4f1dd43a b7c2f91caf1444c68dcf6bd66966d67e c2236fb6443441618c69ad660b0932dd - - default default] [instance: 19dd5782-9191-431a-bc5b-c5c3d41fc07c] Took 14.07 seconds to spawn the instance on the hypervisor.#033[00m
Dec  6 02:08:02 np0005548731 nova_compute[232433]: 2025-12-06 07:08:02.217 232437 DEBUG nova.compute.manager [None req-82f40fba-72a4-4cee-be3b-c39a4f1dd43a b7c2f91caf1444c68dcf6bd66966d67e c2236fb6443441618c69ad660b0932dd - - default default] [instance: 19dd5782-9191-431a-bc5b-c5c3d41fc07c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:08:02 np0005548731 nova_compute[232433]: 2025-12-06 07:08:02.283 232437 INFO nova.compute.manager [None req-82f40fba-72a4-4cee-be3b-c39a4f1dd43a b7c2f91caf1444c68dcf6bd66966d67e c2236fb6443441618c69ad660b0932dd - - default default] [instance: 19dd5782-9191-431a-bc5b-c5c3d41fc07c] Took 17.04 seconds to build instance.#033[00m
Dec  6 02:08:02 np0005548731 nova_compute[232433]: 2025-12-06 07:08:02.301 232437 DEBUG oslo_concurrency.lockutils [None req-82f40fba-72a4-4cee-be3b-c39a4f1dd43a b7c2f91caf1444c68dcf6bd66966d67e c2236fb6443441618c69ad660b0932dd - - default default] Lock "19dd5782-9191-431a-bc5b-c5c3d41fc07c" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 17.159s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:08:03 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:08:03 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:08:03 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:08:03.983 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:08:04 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:08:04 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:08:04 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:08:04.100 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:08:04 np0005548731 nova_compute[232433]: 2025-12-06 07:08:04.210 232437 DEBUG nova.compute.manager [req-32267286-2e0b-4ca1-b154-04d273b6b66e req-e551cf5b-e90f-4a37-9331-298aa6fc4ba5 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 19dd5782-9191-431a-bc5b-c5c3d41fc07c] Received event network-vif-plugged-bc53fd4b-9f72-4ae5-bc5f-122f83e81ba4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:08:04 np0005548731 nova_compute[232433]: 2025-12-06 07:08:04.211 232437 DEBUG oslo_concurrency.lockutils [req-32267286-2e0b-4ca1-b154-04d273b6b66e req-e551cf5b-e90f-4a37-9331-298aa6fc4ba5 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "19dd5782-9191-431a-bc5b-c5c3d41fc07c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:08:04 np0005548731 nova_compute[232433]: 2025-12-06 07:08:04.212 232437 DEBUG oslo_concurrency.lockutils [req-32267286-2e0b-4ca1-b154-04d273b6b66e req-e551cf5b-e90f-4a37-9331-298aa6fc4ba5 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "19dd5782-9191-431a-bc5b-c5c3d41fc07c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:08:04 np0005548731 nova_compute[232433]: 2025-12-06 07:08:04.212 232437 DEBUG oslo_concurrency.lockutils [req-32267286-2e0b-4ca1-b154-04d273b6b66e req-e551cf5b-e90f-4a37-9331-298aa6fc4ba5 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "19dd5782-9191-431a-bc5b-c5c3d41fc07c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:08:04 np0005548731 nova_compute[232433]: 2025-12-06 07:08:04.212 232437 DEBUG nova.compute.manager [req-32267286-2e0b-4ca1-b154-04d273b6b66e req-e551cf5b-e90f-4a37-9331-298aa6fc4ba5 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 19dd5782-9191-431a-bc5b-c5c3d41fc07c] No waiting events found dispatching network-vif-plugged-bc53fd4b-9f72-4ae5-bc5f-122f83e81ba4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 02:08:04 np0005548731 nova_compute[232433]: 2025-12-06 07:08:04.213 232437 WARNING nova.compute.manager [req-32267286-2e0b-4ca1-b154-04d273b6b66e req-e551cf5b-e90f-4a37-9331-298aa6fc4ba5 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 19dd5782-9191-431a-bc5b-c5c3d41fc07c] Received unexpected event network-vif-plugged-bc53fd4b-9f72-4ae5-bc5f-122f83e81ba4 for instance with vm_state active and task_state None.#033[00m
Dec  6 02:08:05 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e181 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:08:05 np0005548731 nova_compute[232433]: 2025-12-06 07:08:05.834 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:08:05 np0005548731 ovn_controller[133927]: 2025-12-06T07:08:05Z|00142|binding|INFO|Releasing lport 6cb7c4fe-88f9-400c-a7b5-d8d7714c7b41 from this chassis (sb_readonly=0)
Dec  6 02:08:05 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:08:05 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:08:05 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:08:05.986 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:08:06 np0005548731 nova_compute[232433]: 2025-12-06 07:08:06.018 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:08:06 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:08:06 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:08:06 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:08:06.104 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:08:06 np0005548731 nova_compute[232433]: 2025-12-06 07:08:06.343 232437 DEBUG nova.compute.manager [req-114b27e9-d542-47fd-becc-efc2c82f3e4f req-791bc8f7-a0e8-480f-ad94-3aea36736175 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 19dd5782-9191-431a-bc5b-c5c3d41fc07c] Received event network-changed-bc53fd4b-9f72-4ae5-bc5f-122f83e81ba4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:08:06 np0005548731 nova_compute[232433]: 2025-12-06 07:08:06.343 232437 DEBUG nova.compute.manager [req-114b27e9-d542-47fd-becc-efc2c82f3e4f req-791bc8f7-a0e8-480f-ad94-3aea36736175 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 19dd5782-9191-431a-bc5b-c5c3d41fc07c] Refreshing instance network info cache due to event network-changed-bc53fd4b-9f72-4ae5-bc5f-122f83e81ba4. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  6 02:08:06 np0005548731 nova_compute[232433]: 2025-12-06 07:08:06.345 232437 DEBUG oslo_concurrency.lockutils [req-114b27e9-d542-47fd-becc-efc2c82f3e4f req-791bc8f7-a0e8-480f-ad94-3aea36736175 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-19dd5782-9191-431a-bc5b-c5c3d41fc07c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 02:08:06 np0005548731 nova_compute[232433]: 2025-12-06 07:08:06.345 232437 DEBUG oslo_concurrency.lockutils [req-114b27e9-d542-47fd-becc-efc2c82f3e4f req-791bc8f7-a0e8-480f-ad94-3aea36736175 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-19dd5782-9191-431a-bc5b-c5c3d41fc07c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 02:08:06 np0005548731 nova_compute[232433]: 2025-12-06 07:08:06.346 232437 DEBUG nova.network.neutron [req-114b27e9-d542-47fd-becc-efc2c82f3e4f req-791bc8f7-a0e8-480f-ad94-3aea36736175 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 19dd5782-9191-431a-bc5b-c5c3d41fc07c] Refreshing network info cache for port bc53fd4b-9f72-4ae5-bc5f-122f83e81ba4 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  6 02:08:06 np0005548731 nova_compute[232433]: 2025-12-06 07:08:06.539 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:08:06 np0005548731 ovn_controller[133927]: 2025-12-06T07:08:06Z|00143|binding|INFO|Releasing lport 6cb7c4fe-88f9-400c-a7b5-d8d7714c7b41 from this chassis (sb_readonly=0)
Dec  6 02:08:06 np0005548731 nova_compute[232433]: 2025-12-06 07:08:06.975 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:08:07 np0005548731 nova_compute[232433]: 2025-12-06 07:08:07.177 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:08:07 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:08:08 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:08:08 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:08:07.988 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:08:08 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:08:08 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:08:08 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:08:08.107 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:08:08 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  6 02:08:08 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2977333581' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  6 02:08:08 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  6 02:08:08 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2977333581' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  6 02:08:09 np0005548731 podman[252360]: 2025-12-06 07:08:09.730060992 +0000 UTC m=+0.077119813 container health_status ae4fd08cdec31d6ae2a6b91ffff7deb6ca5a8375cb52ee4b685cc6f25796824e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Dec  6 02:08:09 np0005548731 podman[252358]: 2025-12-06 07:08:09.740986055 +0000 UTC m=+0.087743858 container health_status 26c2ce9bdb83c6db5ccad7f5b2a7cb4e9354ab0235ad2f942ea42c8feda2142b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Dec  6 02:08:09 np0005548731 podman[252359]: 2025-12-06 07:08:09.757313137 +0000 UTC m=+0.104366577 container health_status a118c3becf56cfbcae208065771ebf10a65162bb7b96389971293e534823fb78 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Dec  6 02:08:09 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:08:09 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000025s ======
Dec  6 02:08:09 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:08:09.990 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Dec  6 02:08:10 np0005548731 nova_compute[232433]: 2025-12-06 07:08:10.007 232437 DEBUG nova.network.neutron [req-114b27e9-d542-47fd-becc-efc2c82f3e4f req-791bc8f7-a0e8-480f-ad94-3aea36736175 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 19dd5782-9191-431a-bc5b-c5c3d41fc07c] Updated VIF entry in instance network info cache for port bc53fd4b-9f72-4ae5-bc5f-122f83e81ba4. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  6 02:08:10 np0005548731 nova_compute[232433]: 2025-12-06 07:08:10.008 232437 DEBUG nova.network.neutron [req-114b27e9-d542-47fd-becc-efc2c82f3e4f req-791bc8f7-a0e8-480f-ad94-3aea36736175 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 19dd5782-9191-431a-bc5b-c5c3d41fc07c] Updating instance_info_cache with network_info: [{"id": "bc53fd4b-9f72-4ae5-bc5f-122f83e81ba4", "address": "fa:16:3e:32:66:5e", "network": {"id": "81801c54-c543-4e0f-94c1-cdf24ddb6bce", "bridge": "br-int", "label": "tempest-UpdateMultiattachVolumeNegativeTest-1315951976-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.224", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c2236fb6443441618c69ad660b0932dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbc53fd4b-9f", "ovs_interfaceid": "bc53fd4b-9f72-4ae5-bc5f-122f83e81ba4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 02:08:10 np0005548731 nova_compute[232433]: 2025-12-06 07:08:10.033 232437 DEBUG oslo_concurrency.lockutils [req-114b27e9-d542-47fd-becc-efc2c82f3e4f req-791bc8f7-a0e8-480f-ad94-3aea36736175 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-19dd5782-9191-431a-bc5b-c5c3d41fc07c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 02:08:10 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:08:10 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:08:10 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:08:10.110 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:08:10 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e181 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:08:10 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 02:08:10 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 02:08:10 np0005548731 nova_compute[232433]: 2025-12-06 07:08:10.837 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:08:11 np0005548731 nova_compute[232433]: 2025-12-06 07:08:11.539 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:08:11 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 02:08:11 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 02:08:11 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  6 02:08:11 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 02:08:11 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  6 02:08:11 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:08:11 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:08:11 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:08:11.993 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:08:12 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:08:12 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:08:12 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:08:12.114 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:08:13 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:08:13 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:08:13 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:08:13.995 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:08:14 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:08:14 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:08:14 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:08:14.117 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:08:14 np0005548731 nova_compute[232433]: 2025-12-06 07:08:14.916 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:08:15 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e181 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:08:15 np0005548731 nova_compute[232433]: 2025-12-06 07:08:15.840 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:08:15 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:08:15 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000025s ======
Dec  6 02:08:15 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:08:15.996 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Dec  6 02:08:16 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:08:16 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000025s ======
Dec  6 02:08:16 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:08:16.118 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Dec  6 02:08:16 np0005548731 ovn_controller[133927]: 2025-12-06T07:08:16Z|00020|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:32:66:5e 10.100.0.14
Dec  6 02:08:16 np0005548731 ovn_controller[133927]: 2025-12-06T07:08:16Z|00021|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:32:66:5e 10.100.0.14
Dec  6 02:08:16 np0005548731 nova_compute[232433]: 2025-12-06 07:08:16.541 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:08:17 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e182 e182: 3 total, 3 up, 3 in
Dec  6 02:08:18 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:08:18 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:08:18 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:08:17.998 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:08:18 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:08:18 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:08:18 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:08:18.122 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:08:19 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 02:08:19 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 02:08:20 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:08:20 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:08:20 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:08:20.000 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:08:20 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:08:20 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:08:20 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:08:20.125 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:08:20 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e182 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:08:20 np0005548731 nova_compute[232433]: 2025-12-06 07:08:20.843 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:08:21 np0005548731 nova_compute[232433]: 2025-12-06 07:08:21.335 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:08:21 np0005548731 nova_compute[232433]: 2025-12-06 07:08:21.542 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:08:22 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:08:22 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:08:22 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:08:22.002 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:08:22 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:08:22 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000025s ======
Dec  6 02:08:22 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:08:22.127 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Dec  6 02:08:24 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:08:24 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:08:24 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:08:24.004 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:08:24 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:08:24 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000026s ======
Dec  6 02:08:24 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:08:24.130 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  6 02:08:25 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e182 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:08:25 np0005548731 nova_compute[232433]: 2025-12-06 07:08:25.846 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:08:25 np0005548731 nova_compute[232433]: 2025-12-06 07:08:25.952 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:08:26 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:08:26 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:08:26 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:08:26.006 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:08:26 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:08:26 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:08:26 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:08:26.133 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:08:26 np0005548731 nova_compute[232433]: 2025-12-06 07:08:26.547 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:08:28 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:08:28 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:08:28 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:08:28.008 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:08:28 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:08:28 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:08:28 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:08:28.136 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:08:30 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:08:30 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:08:30 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:08:30.010 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:08:30 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:08:30 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 02:08:30 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:08:30.139 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 02:08:30 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e182 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:08:30 np0005548731 nova_compute[232433]: 2025-12-06 07:08:30.850 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:08:31 np0005548731 nova_compute[232433]: 2025-12-06 07:08:31.547 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:08:32 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:08:32 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:08:32 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:08:32.012 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:08:32 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:08:32 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:08:32 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:08:32.143 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:08:34 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:08:34 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:08:34 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:08:34.014 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:08:34 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:08:34 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:08:34 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:08:34.146 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:08:35 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e182 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:08:35 np0005548731 nova_compute[232433]: 2025-12-06 07:08:35.852 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:08:36 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:08:36 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:08:36 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:08:36.017 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:08:36 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:08:36 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:08:36 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:08:36.151 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:08:36 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e183 e183: 3 total, 3 up, 3 in
Dec  6 02:08:36 np0005548731 nova_compute[232433]: 2025-12-06 07:08:36.548 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:08:38 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:08:38 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:08:38 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:08:38.019 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:08:38 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:08:38 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 02:08:38 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:08:38.154 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 02:08:39 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  6 02:08:39 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4017702487' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  6 02:08:39 np0005548731 podman[252691]: 2025-12-06 07:08:39.900306303 +0000 UTC m=+0.055896345 container health_status 26c2ce9bdb83c6db5ccad7f5b2a7cb4e9354ab0235ad2f942ea42c8feda2142b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Dec  6 02:08:39 np0005548731 podman[252693]: 2025-12-06 07:08:39.900310043 +0000 UTC m=+0.054248125 container health_status ae4fd08cdec31d6ae2a6b91ffff7deb6ca5a8375cb52ee4b685cc6f25796824e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team)
Dec  6 02:08:39 np0005548731 podman[252692]: 2025-12-06 07:08:39.92827946 +0000 UTC m=+0.084884876 container health_status a118c3becf56cfbcae208065771ebf10a65162bb7b96389971293e534823fb78 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ovn_controller, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec  6 02:08:40 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:08:40 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:08:40 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:08:40.021 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:08:40 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:08:40 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:08:40 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:08:40.157 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:08:40 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:08:40 np0005548731 nova_compute[232433]: 2025-12-06 07:08:40.853 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:08:41 np0005548731 nova_compute[232433]: 2025-12-06 07:08:41.168 232437 DEBUG oslo_concurrency.lockutils [None req-034b4912-197f-4028-be17-f3a1d2d2cd4e b7c2f91caf1444c68dcf6bd66966d67e c2236fb6443441618c69ad660b0932dd - - default default] Acquiring lock "19dd5782-9191-431a-bc5b-c5c3d41fc07c" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:08:41 np0005548731 nova_compute[232433]: 2025-12-06 07:08:41.169 232437 DEBUG oslo_concurrency.lockutils [None req-034b4912-197f-4028-be17-f3a1d2d2cd4e b7c2f91caf1444c68dcf6bd66966d67e c2236fb6443441618c69ad660b0932dd - - default default] Lock "19dd5782-9191-431a-bc5b-c5c3d41fc07c" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:08:41 np0005548731 nova_compute[232433]: 2025-12-06 07:08:41.183 232437 DEBUG nova.objects.instance [None req-034b4912-197f-4028-be17-f3a1d2d2cd4e b7c2f91caf1444c68dcf6bd66966d67e c2236fb6443441618c69ad660b0932dd - - default default] Lazy-loading 'flavor' on Instance uuid 19dd5782-9191-431a-bc5b-c5c3d41fc07c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:08:41 np0005548731 nova_compute[232433]: 2025-12-06 07:08:41.227 232437 DEBUG oslo_concurrency.lockutils [None req-034b4912-197f-4028-be17-f3a1d2d2cd4e b7c2f91caf1444c68dcf6bd66966d67e c2236fb6443441618c69ad660b0932dd - - default default] Lock "19dd5782-9191-431a-bc5b-c5c3d41fc07c" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.059s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:08:41 np0005548731 nova_compute[232433]: 2025-12-06 07:08:41.477 232437 DEBUG oslo_concurrency.lockutils [None req-034b4912-197f-4028-be17-f3a1d2d2cd4e b7c2f91caf1444c68dcf6bd66966d67e c2236fb6443441618c69ad660b0932dd - - default default] Acquiring lock "19dd5782-9191-431a-bc5b-c5c3d41fc07c" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:08:41 np0005548731 nova_compute[232433]: 2025-12-06 07:08:41.477 232437 DEBUG oslo_concurrency.lockutils [None req-034b4912-197f-4028-be17-f3a1d2d2cd4e b7c2f91caf1444c68dcf6bd66966d67e c2236fb6443441618c69ad660b0932dd - - default default] Lock "19dd5782-9191-431a-bc5b-c5c3d41fc07c" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:08:41 np0005548731 nova_compute[232433]: 2025-12-06 07:08:41.477 232437 INFO nova.compute.manager [None req-034b4912-197f-4028-be17-f3a1d2d2cd4e b7c2f91caf1444c68dcf6bd66966d67e c2236fb6443441618c69ad660b0932dd - - default default] [instance: 19dd5782-9191-431a-bc5b-c5c3d41fc07c] Attaching volume 86c40c18-0a1f-4cdf-8dd9-d48abd4ceed6 to /dev/vdb#033[00m
Dec  6 02:08:41 np0005548731 nova_compute[232433]: 2025-12-06 07:08:41.551 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:08:41 np0005548731 nova_compute[232433]: 2025-12-06 07:08:41.834 232437 DEBUG os_brick.utils [None req-034b4912-197f-4028-be17-f3a1d2d2cd4e b7c2f91caf1444c68dcf6bd66966d67e c2236fb6443441618c69ad660b0932dd - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.102', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-2.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Dec  6 02:08:41 np0005548731 nova_compute[232433]: 2025-12-06 07:08:41.837 237736 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:08:41 np0005548731 nova_compute[232433]: 2025-12-06 07:08:41.849 237736 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.012s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:08:41 np0005548731 nova_compute[232433]: 2025-12-06 07:08:41.850 237736 DEBUG oslo.privsep.daemon [-] privsep: reply[85f51026-bb6d-4fe8-b390-b8a30c13b429]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:08:41 np0005548731 nova_compute[232433]: 2025-12-06 07:08:41.852 237736 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:08:41 np0005548731 nova_compute[232433]: 2025-12-06 07:08:41.860 237736 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:08:41 np0005548731 nova_compute[232433]: 2025-12-06 07:08:41.861 237736 DEBUG oslo.privsep.daemon [-] privsep: reply[4edf59f2-1ae9-4e92-b346-da8f8ea7ab3c]: (4, ('InitiatorName=iqn.1994-05.com.redhat:63778d5959f0', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:08:41 np0005548731 nova_compute[232433]: 2025-12-06 07:08:41.863 237736 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:08:41 np0005548731 nova_compute[232433]: 2025-12-06 07:08:41.876 237736 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.013s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:08:41 np0005548731 nova_compute[232433]: 2025-12-06 07:08:41.877 237736 DEBUG oslo.privsep.daemon [-] privsep: reply[42596966-3498-4254-a3fa-484678d00a5f]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:08:41 np0005548731 nova_compute[232433]: 2025-12-06 07:08:41.878 237736 DEBUG oslo.privsep.daemon [-] privsep: reply[c1cfede4-9e33-48b9-a47c-a98b138c6513]: (4, 'ec2492e2-e0d1-43d3-b74f-744a8272a0e6') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:08:41 np0005548731 nova_compute[232433]: 2025-12-06 07:08:41.879 232437 DEBUG oslo_concurrency.processutils [None req-034b4912-197f-4028-be17-f3a1d2d2cd4e b7c2f91caf1444c68dcf6bd66966d67e c2236fb6443441618c69ad660b0932dd - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:08:41 np0005548731 nova_compute[232433]: 2025-12-06 07:08:41.912 232437 DEBUG oslo_concurrency.processutils [None req-034b4912-197f-4028-be17-f3a1d2d2cd4e b7c2f91caf1444c68dcf6bd66966d67e c2236fb6443441618c69ad660b0932dd - - default default] CMD "nvme version" returned: 0 in 0.033s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:08:41 np0005548731 nova_compute[232433]: 2025-12-06 07:08:41.915 232437 DEBUG os_brick.initiator.connectors.lightos [None req-034b4912-197f-4028-be17-f3a1d2d2cd4e b7c2f91caf1444c68dcf6bd66966d67e c2236fb6443441618c69ad660b0932dd - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Dec  6 02:08:41 np0005548731 nova_compute[232433]: 2025-12-06 07:08:41.915 232437 DEBUG os_brick.initiator.connectors.lightos [None req-034b4912-197f-4028-be17-f3a1d2d2cd4e b7c2f91caf1444c68dcf6bd66966d67e c2236fb6443441618c69ad660b0932dd - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Dec  6 02:08:41 np0005548731 nova_compute[232433]: 2025-12-06 07:08:41.916 232437 DEBUG os_brick.initiator.connectors.lightos [None req-034b4912-197f-4028-be17-f3a1d2d2cd4e b7c2f91caf1444c68dcf6bd66966d67e c2236fb6443441618c69ad660b0932dd - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:bf3e0a14-a5f8-4123-aa26-e7cad37b879a dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Dec  6 02:08:41 np0005548731 nova_compute[232433]: 2025-12-06 07:08:41.916 232437 DEBUG os_brick.utils [None req-034b4912-197f-4028-be17-f3a1d2d2cd4e b7c2f91caf1444c68dcf6bd66966d67e c2236fb6443441618c69ad660b0932dd - - default default] <== get_connector_properties: return (80ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.102', 'host': 'compute-2.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:63778d5959f0', 'do_local_attach': False, 'nvme_hostid': 'bf3e0a14-a5f8-4123-aa26-e7cad37b879a', 'system uuid': 'ec2492e2-e0d1-43d3-b74f-744a8272a0e6', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:bf3e0a14-a5f8-4123-aa26-e7cad37b879a', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Dec  6 02:08:41 np0005548731 nova_compute[232433]: 2025-12-06 07:08:41.916 232437 DEBUG nova.virt.block_device [None req-034b4912-197f-4028-be17-f3a1d2d2cd4e b7c2f91caf1444c68dcf6bd66966d67e c2236fb6443441618c69ad660b0932dd - - default default] [instance: 19dd5782-9191-431a-bc5b-c5c3d41fc07c] Updating existing volume attachment record: ac8b39ec-4c0b-4c95-beaf-af3f9984f2dd _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Dec  6 02:08:42 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:08:42 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 02:08:42 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:08:42.023 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 02:08:42 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:08:42 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:08:42 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:08:42.160 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:08:42 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e184 e184: 3 total, 3 up, 3 in
Dec  6 02:08:42 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  6 02:08:42 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2836448693' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  6 02:08:42 np0005548731 nova_compute[232433]: 2025-12-06 07:08:42.792 232437 DEBUG nova.objects.instance [None req-034b4912-197f-4028-be17-f3a1d2d2cd4e b7c2f91caf1444c68dcf6bd66966d67e c2236fb6443441618c69ad660b0932dd - - default default] Lazy-loading 'flavor' on Instance uuid 19dd5782-9191-431a-bc5b-c5c3d41fc07c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:08:42 np0005548731 nova_compute[232433]: 2025-12-06 07:08:42.816 232437 DEBUG nova.virt.libvirt.driver [None req-034b4912-197f-4028-be17-f3a1d2d2cd4e b7c2f91caf1444c68dcf6bd66966d67e c2236fb6443441618c69ad660b0932dd - - default default] [instance: 19dd5782-9191-431a-bc5b-c5c3d41fc07c] Attempting to attach volume 86c40c18-0a1f-4cdf-8dd9-d48abd4ceed6 with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168#033[00m
Dec  6 02:08:42 np0005548731 nova_compute[232433]: 2025-12-06 07:08:42.818 232437 DEBUG nova.virt.libvirt.guest [None req-034b4912-197f-4028-be17-f3a1d2d2cd4e b7c2f91caf1444c68dcf6bd66966d67e c2236fb6443441618c69ad660b0932dd - - default default] attach device xml: <disk type="network" device="disk">
Dec  6 02:08:42 np0005548731 nova_compute[232433]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Dec  6 02:08:42 np0005548731 nova_compute[232433]:  <source protocol="rbd" name="volumes/volume-86c40c18-0a1f-4cdf-8dd9-d48abd4ceed6">
Dec  6 02:08:42 np0005548731 nova_compute[232433]:    <host name="192.168.122.100" port="6789"/>
Dec  6 02:08:42 np0005548731 nova_compute[232433]:    <host name="192.168.122.102" port="6789"/>
Dec  6 02:08:42 np0005548731 nova_compute[232433]:    <host name="192.168.122.101" port="6789"/>
Dec  6 02:08:42 np0005548731 nova_compute[232433]:  </source>
Dec  6 02:08:42 np0005548731 nova_compute[232433]:  <auth username="openstack">
Dec  6 02:08:42 np0005548731 nova_compute[232433]:    <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec  6 02:08:42 np0005548731 nova_compute[232433]:  </auth>
Dec  6 02:08:42 np0005548731 nova_compute[232433]:  <target dev="vdb" bus="virtio"/>
Dec  6 02:08:42 np0005548731 nova_compute[232433]:  <serial>86c40c18-0a1f-4cdf-8dd9-d48abd4ceed6</serial>
Dec  6 02:08:42 np0005548731 nova_compute[232433]:  <shareable/>
Dec  6 02:08:42 np0005548731 nova_compute[232433]: </disk>
Dec  6 02:08:42 np0005548731 nova_compute[232433]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339#033[00m
Dec  6 02:08:42 np0005548731 nova_compute[232433]: 2025-12-06 07:08:42.922 232437 DEBUG nova.virt.libvirt.driver [None req-034b4912-197f-4028-be17-f3a1d2d2cd4e b7c2f91caf1444c68dcf6bd66966d67e c2236fb6443441618c69ad660b0932dd - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  6 02:08:42 np0005548731 nova_compute[232433]: 2025-12-06 07:08:42.924 232437 DEBUG nova.virt.libvirt.driver [None req-034b4912-197f-4028-be17-f3a1d2d2cd4e b7c2f91caf1444c68dcf6bd66966d67e c2236fb6443441618c69ad660b0932dd - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  6 02:08:42 np0005548731 nova_compute[232433]: 2025-12-06 07:08:42.925 232437 DEBUG nova.virt.libvirt.driver [None req-034b4912-197f-4028-be17-f3a1d2d2cd4e b7c2f91caf1444c68dcf6bd66966d67e c2236fb6443441618c69ad660b0932dd - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  6 02:08:42 np0005548731 nova_compute[232433]: 2025-12-06 07:08:42.925 232437 DEBUG nova.virt.libvirt.driver [None req-034b4912-197f-4028-be17-f3a1d2d2cd4e b7c2f91caf1444c68dcf6bd66966d67e c2236fb6443441618c69ad660b0932dd - - default default] No VIF found with MAC fa:16:3e:32:66:5e, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec  6 02:08:43 np0005548731 nova_compute[232433]: 2025-12-06 07:08:43.122 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:08:43 np0005548731 nova_compute[232433]: 2025-12-06 07:08:43.149 232437 DEBUG oslo_concurrency.lockutils [None req-034b4912-197f-4028-be17-f3a1d2d2cd4e b7c2f91caf1444c68dcf6bd66966d67e c2236fb6443441618c69ad660b0932dd - - default default] Lock "19dd5782-9191-431a-bc5b-c5c3d41fc07c" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 1.672s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:08:43 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e185 e185: 3 total, 3 up, 3 in
Dec  6 02:08:44 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:08:44 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:08:44 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:08:44.025 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:08:44 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:08:44 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:08:44 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:08:44.163 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:08:45 np0005548731 nova_compute[232433]: 2025-12-06 07:08:45.100 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:08:45 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e186 e186: 3 total, 3 up, 3 in
Dec  6 02:08:45 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:08:45 np0005548731 nova_compute[232433]: 2025-12-06 07:08:45.856 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:08:46 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:08:46 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:08:46 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:08:46.027 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:08:46 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:08:46 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:08:46 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:08:46.166 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:08:46 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e187 e187: 3 total, 3 up, 3 in
Dec  6 02:08:46 np0005548731 nova_compute[232433]: 2025-12-06 07:08:46.553 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:08:47 np0005548731 nova_compute[232433]: 2025-12-06 07:08:47.144 232437 DEBUG oslo_concurrency.lockutils [None req-66900ac7-116b-4b5f-9974-aca8ffce1bcb b7c2f91caf1444c68dcf6bd66966d67e c2236fb6443441618c69ad660b0932dd - - default default] Acquiring lock "19dd5782-9191-431a-bc5b-c5c3d41fc07c" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:08:47 np0005548731 nova_compute[232433]: 2025-12-06 07:08:47.144 232437 DEBUG oslo_concurrency.lockutils [None req-66900ac7-116b-4b5f-9974-aca8ffce1bcb b7c2f91caf1444c68dcf6bd66966d67e c2236fb6443441618c69ad660b0932dd - - default default] Lock "19dd5782-9191-431a-bc5b-c5c3d41fc07c" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:08:47 np0005548731 nova_compute[232433]: 2025-12-06 07:08:47.172 232437 INFO nova.compute.manager [None req-66900ac7-116b-4b5f-9974-aca8ffce1bcb b7c2f91caf1444c68dcf6bd66966d67e c2236fb6443441618c69ad660b0932dd - - default default] [instance: 19dd5782-9191-431a-bc5b-c5c3d41fc07c] Detaching volume 86c40c18-0a1f-4cdf-8dd9-d48abd4ceed6#033[00m
Dec  6 02:08:47 np0005548731 nova_compute[232433]: 2025-12-06 07:08:47.340 232437 INFO nova.virt.block_device [None req-66900ac7-116b-4b5f-9974-aca8ffce1bcb b7c2f91caf1444c68dcf6bd66966d67e c2236fb6443441618c69ad660b0932dd - - default default] [instance: 19dd5782-9191-431a-bc5b-c5c3d41fc07c] Attempting to driver detach volume 86c40c18-0a1f-4cdf-8dd9-d48abd4ceed6 from mountpoint /dev/vdb#033[00m
Dec  6 02:08:47 np0005548731 nova_compute[232433]: 2025-12-06 07:08:47.348 232437 DEBUG nova.virt.libvirt.driver [None req-66900ac7-116b-4b5f-9974-aca8ffce1bcb b7c2f91caf1444c68dcf6bd66966d67e c2236fb6443441618c69ad660b0932dd - - default default] Attempting to detach device vdb from instance 19dd5782-9191-431a-bc5b-c5c3d41fc07c from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487#033[00m
Dec  6 02:08:47 np0005548731 nova_compute[232433]: 2025-12-06 07:08:47.349 232437 DEBUG nova.virt.libvirt.guest [None req-66900ac7-116b-4b5f-9974-aca8ffce1bcb b7c2f91caf1444c68dcf6bd66966d67e c2236fb6443441618c69ad660b0932dd - - default default] detach device xml: <disk type="network" device="disk">
Dec  6 02:08:47 np0005548731 nova_compute[232433]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Dec  6 02:08:47 np0005548731 nova_compute[232433]:  <source protocol="rbd" name="volumes/volume-86c40c18-0a1f-4cdf-8dd9-d48abd4ceed6">
Dec  6 02:08:47 np0005548731 nova_compute[232433]:    <host name="192.168.122.100" port="6789"/>
Dec  6 02:08:47 np0005548731 nova_compute[232433]:    <host name="192.168.122.102" port="6789"/>
Dec  6 02:08:47 np0005548731 nova_compute[232433]:    <host name="192.168.122.101" port="6789"/>
Dec  6 02:08:47 np0005548731 nova_compute[232433]:  </source>
Dec  6 02:08:47 np0005548731 nova_compute[232433]:  <target dev="vdb" bus="virtio"/>
Dec  6 02:08:47 np0005548731 nova_compute[232433]:  <serial>86c40c18-0a1f-4cdf-8dd9-d48abd4ceed6</serial>
Dec  6 02:08:47 np0005548731 nova_compute[232433]:  <shareable/>
Dec  6 02:08:47 np0005548731 nova_compute[232433]:  <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Dec  6 02:08:47 np0005548731 nova_compute[232433]: </disk>
Dec  6 02:08:47 np0005548731 nova_compute[232433]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Dec  6 02:08:47 np0005548731 nova_compute[232433]: 2025-12-06 07:08:47.356 232437 INFO nova.virt.libvirt.driver [None req-66900ac7-116b-4b5f-9974-aca8ffce1bcb b7c2f91caf1444c68dcf6bd66966d67e c2236fb6443441618c69ad660b0932dd - - default default] Successfully detached device vdb from instance 19dd5782-9191-431a-bc5b-c5c3d41fc07c from the persistent domain config.#033[00m
Dec  6 02:08:47 np0005548731 nova_compute[232433]: 2025-12-06 07:08:47.357 232437 DEBUG nova.virt.libvirt.driver [None req-66900ac7-116b-4b5f-9974-aca8ffce1bcb b7c2f91caf1444c68dcf6bd66966d67e c2236fb6443441618c69ad660b0932dd - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance 19dd5782-9191-431a-bc5b-c5c3d41fc07c from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523#033[00m
Dec  6 02:08:47 np0005548731 nova_compute[232433]: 2025-12-06 07:08:47.357 232437 DEBUG nova.virt.libvirt.guest [None req-66900ac7-116b-4b5f-9974-aca8ffce1bcb b7c2f91caf1444c68dcf6bd66966d67e c2236fb6443441618c69ad660b0932dd - - default default] detach device xml: <disk type="network" device="disk">
Dec  6 02:08:47 np0005548731 nova_compute[232433]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Dec  6 02:08:47 np0005548731 nova_compute[232433]:  <source protocol="rbd" name="volumes/volume-86c40c18-0a1f-4cdf-8dd9-d48abd4ceed6">
Dec  6 02:08:47 np0005548731 nova_compute[232433]:    <host name="192.168.122.100" port="6789"/>
Dec  6 02:08:47 np0005548731 nova_compute[232433]:    <host name="192.168.122.102" port="6789"/>
Dec  6 02:08:47 np0005548731 nova_compute[232433]:    <host name="192.168.122.101" port="6789"/>
Dec  6 02:08:47 np0005548731 nova_compute[232433]:  </source>
Dec  6 02:08:47 np0005548731 nova_compute[232433]:  <target dev="vdb" bus="virtio"/>
Dec  6 02:08:47 np0005548731 nova_compute[232433]:  <serial>86c40c18-0a1f-4cdf-8dd9-d48abd4ceed6</serial>
Dec  6 02:08:47 np0005548731 nova_compute[232433]:  <shareable/>
Dec  6 02:08:47 np0005548731 nova_compute[232433]:  <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Dec  6 02:08:47 np0005548731 nova_compute[232433]: </disk>
Dec  6 02:08:47 np0005548731 nova_compute[232433]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Dec  6 02:08:47 np0005548731 nova_compute[232433]: 2025-12-06 07:08:47.408 232437 DEBUG nova.virt.libvirt.driver [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Received event <DeviceRemovedEvent: 1765004927.4082103, 19dd5782-9191-431a-bc5b-c5c3d41fc07c => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370#033[00m
Dec  6 02:08:47 np0005548731 nova_compute[232433]: 2025-12-06 07:08:47.410 232437 DEBUG nova.virt.libvirt.driver [None req-66900ac7-116b-4b5f-9974-aca8ffce1bcb b7c2f91caf1444c68dcf6bd66966d67e c2236fb6443441618c69ad660b0932dd - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance 19dd5782-9191-431a-bc5b-c5c3d41fc07c _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599#033[00m
Dec  6 02:08:47 np0005548731 nova_compute[232433]: 2025-12-06 07:08:47.412 232437 INFO nova.virt.libvirt.driver [None req-66900ac7-116b-4b5f-9974-aca8ffce1bcb b7c2f91caf1444c68dcf6bd66966d67e c2236fb6443441618c69ad660b0932dd - - default default] Successfully detached device vdb from instance 19dd5782-9191-431a-bc5b-c5c3d41fc07c from the live domain config.#033[00m
Dec  6 02:08:47 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e188 e188: 3 total, 3 up, 3 in
Dec  6 02:08:48 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:08:48 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:08:48 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:08:48.029 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:08:48 np0005548731 nova_compute[232433]: 2025-12-06 07:08:48.103 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:08:48 np0005548731 nova_compute[232433]: 2025-12-06 07:08:48.157 232437 DEBUG nova.objects.instance [None req-66900ac7-116b-4b5f-9974-aca8ffce1bcb b7c2f91caf1444c68dcf6bd66966d67e c2236fb6443441618c69ad660b0932dd - - default default] Lazy-loading 'flavor' on Instance uuid 19dd5782-9191-431a-bc5b-c5c3d41fc07c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:08:48 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:08:48 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:08:48 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:08:48.170 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:08:48 np0005548731 nova_compute[232433]: 2025-12-06 07:08:48.195 232437 DEBUG oslo_concurrency.lockutils [None req-66900ac7-116b-4b5f-9974-aca8ffce1bcb b7c2f91caf1444c68dcf6bd66966d67e c2236fb6443441618c69ad660b0932dd - - default default] Lock "19dd5782-9191-431a-bc5b-c5c3d41fc07c" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 1.051s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:08:49 np0005548731 nova_compute[232433]: 2025-12-06 07:08:49.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:08:49 np0005548731 nova_compute[232433]: 2025-12-06 07:08:49.105 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  6 02:08:49 np0005548731 nova_compute[232433]: 2025-12-06 07:08:49.105 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  6 02:08:49 np0005548731 nova_compute[232433]: 2025-12-06 07:08:49.911 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Acquiring lock "refresh_cache-19dd5782-9191-431a-bc5b-c5c3d41fc07c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 02:08:49 np0005548731 nova_compute[232433]: 2025-12-06 07:08:49.911 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Acquired lock "refresh_cache-19dd5782-9191-431a-bc5b-c5c3d41fc07c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 02:08:49 np0005548731 nova_compute[232433]: 2025-12-06 07:08:49.911 232437 DEBUG nova.network.neutron [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] [instance: 19dd5782-9191-431a-bc5b-c5c3d41fc07c] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Dec  6 02:08:49 np0005548731 nova_compute[232433]: 2025-12-06 07:08:49.912 232437 DEBUG nova.objects.instance [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lazy-loading 'info_cache' on Instance uuid 19dd5782-9191-431a-bc5b-c5c3d41fc07c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:08:50 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:08:50 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:08:50 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:08:50.031 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:08:50 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:08:50 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:08:50 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:08:50.174 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:08:50 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e188 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:08:50 np0005548731 nova_compute[232433]: 2025-12-06 07:08:50.607 232437 DEBUG oslo_concurrency.lockutils [None req-99ab4e2c-2ec5-4ef9-a698-c13dfcd5f0d5 b7c2f91caf1444c68dcf6bd66966d67e c2236fb6443441618c69ad660b0932dd - - default default] Acquiring lock "19dd5782-9191-431a-bc5b-c5c3d41fc07c" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:08:50 np0005548731 nova_compute[232433]: 2025-12-06 07:08:50.607 232437 DEBUG oslo_concurrency.lockutils [None req-99ab4e2c-2ec5-4ef9-a698-c13dfcd5f0d5 b7c2f91caf1444c68dcf6bd66966d67e c2236fb6443441618c69ad660b0932dd - - default default] Lock "19dd5782-9191-431a-bc5b-c5c3d41fc07c" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:08:50 np0005548731 nova_compute[232433]: 2025-12-06 07:08:50.607 232437 DEBUG oslo_concurrency.lockutils [None req-99ab4e2c-2ec5-4ef9-a698-c13dfcd5f0d5 b7c2f91caf1444c68dcf6bd66966d67e c2236fb6443441618c69ad660b0932dd - - default default] Acquiring lock "19dd5782-9191-431a-bc5b-c5c3d41fc07c-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:08:50 np0005548731 nova_compute[232433]: 2025-12-06 07:08:50.608 232437 DEBUG oslo_concurrency.lockutils [None req-99ab4e2c-2ec5-4ef9-a698-c13dfcd5f0d5 b7c2f91caf1444c68dcf6bd66966d67e c2236fb6443441618c69ad660b0932dd - - default default] Lock "19dd5782-9191-431a-bc5b-c5c3d41fc07c-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:08:50 np0005548731 nova_compute[232433]: 2025-12-06 07:08:50.608 232437 DEBUG oslo_concurrency.lockutils [None req-99ab4e2c-2ec5-4ef9-a698-c13dfcd5f0d5 b7c2f91caf1444c68dcf6bd66966d67e c2236fb6443441618c69ad660b0932dd - - default default] Lock "19dd5782-9191-431a-bc5b-c5c3d41fc07c-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:08:50 np0005548731 nova_compute[232433]: 2025-12-06 07:08:50.609 232437 INFO nova.compute.manager [None req-99ab4e2c-2ec5-4ef9-a698-c13dfcd5f0d5 b7c2f91caf1444c68dcf6bd66966d67e c2236fb6443441618c69ad660b0932dd - - default default] [instance: 19dd5782-9191-431a-bc5b-c5c3d41fc07c] Terminating instance#033[00m
Dec  6 02:08:50 np0005548731 nova_compute[232433]: 2025-12-06 07:08:50.610 232437 DEBUG nova.compute.manager [None req-99ab4e2c-2ec5-4ef9-a698-c13dfcd5f0d5 b7c2f91caf1444c68dcf6bd66966d67e c2236fb6443441618c69ad660b0932dd - - default default] [instance: 19dd5782-9191-431a-bc5b-c5c3d41fc07c] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec  6 02:08:50 np0005548731 kernel: tapbc53fd4b-9f (unregistering): left promiscuous mode
Dec  6 02:08:50 np0005548731 NetworkManager[49182]: <info>  [1765004930.6702] device (tapbc53fd4b-9f): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec  6 02:08:50 np0005548731 ovn_controller[133927]: 2025-12-06T07:08:50Z|00144|binding|INFO|Releasing lport bc53fd4b-9f72-4ae5-bc5f-122f83e81ba4 from this chassis (sb_readonly=0)
Dec  6 02:08:50 np0005548731 ovn_controller[133927]: 2025-12-06T07:08:50Z|00145|binding|INFO|Setting lport bc53fd4b-9f72-4ae5-bc5f-122f83e81ba4 down in Southbound
Dec  6 02:08:50 np0005548731 ovn_controller[133927]: 2025-12-06T07:08:50Z|00146|binding|INFO|Removing iface tapbc53fd4b-9f ovn-installed in OVS
Dec  6 02:08:50 np0005548731 nova_compute[232433]: 2025-12-06 07:08:50.683 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:08:50 np0005548731 nova_compute[232433]: 2025-12-06 07:08:50.684 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:08:50 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:08:50.689 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:32:66:5e 10.100.0.14'], port_security=['fa:16:3e:32:66:5e 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '19dd5782-9191-431a-bc5b-c5c3d41fc07c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-81801c54-c543-4e0f-94c1-cdf24ddb6bce', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c2236fb6443441618c69ad660b0932dd', 'neutron:revision_number': '4', 'neutron:security_group_ids': '1faad6c4-429b-4e24-95b0-2a5a82e4cdc8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com', 'neutron:port_fip': '192.168.122.224'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ed024f0c-371e-4b3f-951f-85c5c8e0d63d, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], logical_port=bc53fd4b-9f72-4ae5-bc5f-122f83e81ba4) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 02:08:50 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:08:50.690 143965 INFO neutron.agent.ovn.metadata.agent [-] Port bc53fd4b-9f72-4ae5-bc5f-122f83e81ba4 in datapath 81801c54-c543-4e0f-94c1-cdf24ddb6bce unbound from our chassis#033[00m
Dec  6 02:08:50 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:08:50.692 143965 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 81801c54-c543-4e0f-94c1-cdf24ddb6bce, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec  6 02:08:50 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:08:50.693 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[688465db-9b8a-4faf-bbb6-3feae332a9d0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:08:50 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:08:50.693 143965 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-81801c54-c543-4e0f-94c1-cdf24ddb6bce namespace which is not needed anymore#033[00m
Dec  6 02:08:50 np0005548731 nova_compute[232433]: 2025-12-06 07:08:50.706 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:08:50 np0005548731 systemd[1]: machine-qemu\x2d21\x2dinstance\x2d0000002c.scope: Deactivated successfully.
Dec  6 02:08:50 np0005548731 systemd[1]: machine-qemu\x2d21\x2dinstance\x2d0000002c.scope: Consumed 15.284s CPU time.
Dec  6 02:08:50 np0005548731 systemd-machined[195355]: Machine qemu-21-instance-0000002c terminated.
Dec  6 02:08:50 np0005548731 neutron-haproxy-ovnmeta-81801c54-c543-4e0f-94c1-cdf24ddb6bce[252314]: [NOTICE]   (252318) : haproxy version is 2.8.14-c23fe91
Dec  6 02:08:50 np0005548731 neutron-haproxy-ovnmeta-81801c54-c543-4e0f-94c1-cdf24ddb6bce[252314]: [NOTICE]   (252318) : path to executable is /usr/sbin/haproxy
Dec  6 02:08:50 np0005548731 neutron-haproxy-ovnmeta-81801c54-c543-4e0f-94c1-cdf24ddb6bce[252314]: [WARNING]  (252318) : Exiting Master process...
Dec  6 02:08:50 np0005548731 neutron-haproxy-ovnmeta-81801c54-c543-4e0f-94c1-cdf24ddb6bce[252314]: [ALERT]    (252318) : Current worker (252320) exited with code 143 (Terminated)
Dec  6 02:08:50 np0005548731 neutron-haproxy-ovnmeta-81801c54-c543-4e0f-94c1-cdf24ddb6bce[252314]: [WARNING]  (252318) : All workers exited. Exiting... (0)
Dec  6 02:08:50 np0005548731 systemd[1]: libpod-42a2fe6957e37c14e64e20fb0ac5a344924386e969c67d2fde4a800276268b99.scope: Deactivated successfully.
Dec  6 02:08:50 np0005548731 nova_compute[232433]: 2025-12-06 07:08:50.828 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:08:50 np0005548731 podman[252810]: 2025-12-06 07:08:50.831458006 +0000 UTC m=+0.039869676 container died 42a2fe6957e37c14e64e20fb0ac5a344924386e969c67d2fde4a800276268b99 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-81801c54-c543-4e0f-94c1-cdf24ddb6bce, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  6 02:08:50 np0005548731 nova_compute[232433]: 2025-12-06 07:08:50.837 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:08:50 np0005548731 nova_compute[232433]: 2025-12-06 07:08:50.842 232437 INFO nova.virt.libvirt.driver [-] [instance: 19dd5782-9191-431a-bc5b-c5c3d41fc07c] Instance destroyed successfully.#033[00m
Dec  6 02:08:50 np0005548731 nova_compute[232433]: 2025-12-06 07:08:50.842 232437 DEBUG nova.objects.instance [None req-99ab4e2c-2ec5-4ef9-a698-c13dfcd5f0d5 b7c2f91caf1444c68dcf6bd66966d67e c2236fb6443441618c69ad660b0932dd - - default default] Lazy-loading 'resources' on Instance uuid 19dd5782-9191-431a-bc5b-c5c3d41fc07c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:08:50 np0005548731 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-42a2fe6957e37c14e64e20fb0ac5a344924386e969c67d2fde4a800276268b99-userdata-shm.mount: Deactivated successfully.
Dec  6 02:08:50 np0005548731 nova_compute[232433]: 2025-12-06 07:08:50.858 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:08:50 np0005548731 systemd[1]: var-lib-containers-storage-overlay-01fc91097e38cf0acd665342660705e6eb4e311235234e7b86d036b11cf6ebd1-merged.mount: Deactivated successfully.
Dec  6 02:08:50 np0005548731 nova_compute[232433]: 2025-12-06 07:08:50.861 232437 DEBUG nova.virt.libvirt.vif [None req-99ab4e2c-2ec5-4ef9-a698-c13dfcd5f0d5 b7c2f91caf1444c68dcf6bd66966d67e c2236fb6443441618c69ad660b0932dd - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T07:07:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-UpdateMultiattachVolumeNegativeTest-server-947925773',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-updatemultiattachvolumenegativetest-server-947925773',id=44,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLxNM3sYiKzp1+NUb/2De95mJs+SH1m1mxJqrRUBmO5hfmaPmDEbehrvD5V1R8B784do2XvfbohRCGUURA5SWZg3+xgRCk33hYLHpa3KVfm+1jfWf3JbwWLhSjPluoLz2w==',key_name='tempest-keypair-1560414487',keypairs=<?>,launch_index=0,launched_at=2025-12-06T07:08:02Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='c2236fb6443441618c69ad660b0932dd',ramdisk_id='',reservation_id='r-tbc21eth',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-UpdateMultiattachVolumeNegativeTest-1086119788',owner_user_name='tempest-UpdateMultiattachVolumeNegativeTest-1086119788-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T07:08:02Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='b7c2f91caf1444c68dcf6bd66966d67e',uuid=19dd5782-9191-431a-bc5b-c5c3d41fc07c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "bc53fd4b-9f72-4ae5-bc5f-122f83e81ba4", "address": "fa:16:3e:32:66:5e", "network": {"id": "81801c54-c543-4e0f-94c1-cdf24ddb6bce", "bridge": "br-int", "label": "tempest-UpdateMultiattachVolumeNegativeTest-1315951976-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.224", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c2236fb6443441618c69ad660b0932dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbc53fd4b-9f", "ovs_interfaceid": "bc53fd4b-9f72-4ae5-bc5f-122f83e81ba4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec  6 02:08:50 np0005548731 nova_compute[232433]: 2025-12-06 07:08:50.861 232437 DEBUG nova.network.os_vif_util [None req-99ab4e2c-2ec5-4ef9-a698-c13dfcd5f0d5 b7c2f91caf1444c68dcf6bd66966d67e c2236fb6443441618c69ad660b0932dd - - default default] Converting VIF {"id": "bc53fd4b-9f72-4ae5-bc5f-122f83e81ba4", "address": "fa:16:3e:32:66:5e", "network": {"id": "81801c54-c543-4e0f-94c1-cdf24ddb6bce", "bridge": "br-int", "label": "tempest-UpdateMultiattachVolumeNegativeTest-1315951976-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.224", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c2236fb6443441618c69ad660b0932dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbc53fd4b-9f", "ovs_interfaceid": "bc53fd4b-9f72-4ae5-bc5f-122f83e81ba4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 02:08:50 np0005548731 nova_compute[232433]: 2025-12-06 07:08:50.862 232437 DEBUG nova.network.os_vif_util [None req-99ab4e2c-2ec5-4ef9-a698-c13dfcd5f0d5 b7c2f91caf1444c68dcf6bd66966d67e c2236fb6443441618c69ad660b0932dd - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:32:66:5e,bridge_name='br-int',has_traffic_filtering=True,id=bc53fd4b-9f72-4ae5-bc5f-122f83e81ba4,network=Network(81801c54-c543-4e0f-94c1-cdf24ddb6bce),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbc53fd4b-9f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 02:08:50 np0005548731 nova_compute[232433]: 2025-12-06 07:08:50.863 232437 DEBUG os_vif [None req-99ab4e2c-2ec5-4ef9-a698-c13dfcd5f0d5 b7c2f91caf1444c68dcf6bd66966d67e c2236fb6443441618c69ad660b0932dd - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:32:66:5e,bridge_name='br-int',has_traffic_filtering=True,id=bc53fd4b-9f72-4ae5-bc5f-122f83e81ba4,network=Network(81801c54-c543-4e0f-94c1-cdf24ddb6bce),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbc53fd4b-9f') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec  6 02:08:50 np0005548731 nova_compute[232433]: 2025-12-06 07:08:50.864 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:08:50 np0005548731 nova_compute[232433]: 2025-12-06 07:08:50.865 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbc53fd4b-9f, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:08:50 np0005548731 nova_compute[232433]: 2025-12-06 07:08:50.866 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:08:50 np0005548731 nova_compute[232433]: 2025-12-06 07:08:50.867 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:08:50 np0005548731 podman[252810]: 2025-12-06 07:08:50.869696882 +0000 UTC m=+0.078108552 container cleanup 42a2fe6957e37c14e64e20fb0ac5a344924386e969c67d2fde4a800276268b99 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-81801c54-c543-4e0f-94c1-cdf24ddb6bce, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Dec  6 02:08:50 np0005548731 nova_compute[232433]: 2025-12-06 07:08:50.869 232437 INFO os_vif [None req-99ab4e2c-2ec5-4ef9-a698-c13dfcd5f0d5 b7c2f91caf1444c68dcf6bd66966d67e c2236fb6443441618c69ad660b0932dd - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:32:66:5e,bridge_name='br-int',has_traffic_filtering=True,id=bc53fd4b-9f72-4ae5-bc5f-122f83e81ba4,network=Network(81801c54-c543-4e0f-94c1-cdf24ddb6bce),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbc53fd4b-9f')#033[00m
Dec  6 02:08:50 np0005548731 systemd[1]: libpod-conmon-42a2fe6957e37c14e64e20fb0ac5a344924386e969c67d2fde4a800276268b99.scope: Deactivated successfully.
Dec  6 02:08:50 np0005548731 podman[252858]: 2025-12-06 07:08:50.934970272 +0000 UTC m=+0.040643525 container remove 42a2fe6957e37c14e64e20fb0ac5a344924386e969c67d2fde4a800276268b99 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-81801c54-c543-4e0f-94c1-cdf24ddb6bce, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec  6 02:08:50 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:08:50.940 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[d01c6fe0-a9cb-4be0-aa69-c52ddc4169f7]: (4, ('Sat Dec  6 07:08:50 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-81801c54-c543-4e0f-94c1-cdf24ddb6bce (42a2fe6957e37c14e64e20fb0ac5a344924386e969c67d2fde4a800276268b99)\n42a2fe6957e37c14e64e20fb0ac5a344924386e969c67d2fde4a800276268b99\nSat Dec  6 07:08:50 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-81801c54-c543-4e0f-94c1-cdf24ddb6bce (42a2fe6957e37c14e64e20fb0ac5a344924386e969c67d2fde4a800276268b99)\n42a2fe6957e37c14e64e20fb0ac5a344924386e969c67d2fde4a800276268b99\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:08:50 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:08:50.942 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[362c695e-f916-4afe-be30-f4e893cb49be]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:08:50 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:08:50.942 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap81801c54-c0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:08:50 np0005548731 nova_compute[232433]: 2025-12-06 07:08:50.944 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:08:50 np0005548731 kernel: tap81801c54-c0: left promiscuous mode
Dec  6 02:08:50 np0005548731 nova_compute[232433]: 2025-12-06 07:08:50.957 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:08:50 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:08:50.960 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[bae5240e-b2e3-4eeb-9977-c9235618ea99]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:08:50 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:08:50.973 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[0cc241fd-7ab2-4172-bd0c-294ab09100d0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:08:50 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:08:50.975 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[cb0efe2a-76c5-4bf5-b0f1-a15bbde2f682]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:08:50 np0005548731 nova_compute[232433]: 2025-12-06 07:08:50.980 232437 DEBUG nova.compute.manager [req-dad1775f-bdda-488c-9a94-8b695295ff19 req-b0637a27-5740-4bdc-9797-cf8e8a2d606a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 19dd5782-9191-431a-bc5b-c5c3d41fc07c] Received event network-vif-unplugged-bc53fd4b-9f72-4ae5-bc5f-122f83e81ba4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:08:50 np0005548731 nova_compute[232433]: 2025-12-06 07:08:50.980 232437 DEBUG oslo_concurrency.lockutils [req-dad1775f-bdda-488c-9a94-8b695295ff19 req-b0637a27-5740-4bdc-9797-cf8e8a2d606a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "19dd5782-9191-431a-bc5b-c5c3d41fc07c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:08:50 np0005548731 nova_compute[232433]: 2025-12-06 07:08:50.980 232437 DEBUG oslo_concurrency.lockutils [req-dad1775f-bdda-488c-9a94-8b695295ff19 req-b0637a27-5740-4bdc-9797-cf8e8a2d606a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "19dd5782-9191-431a-bc5b-c5c3d41fc07c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:08:50 np0005548731 nova_compute[232433]: 2025-12-06 07:08:50.981 232437 DEBUG oslo_concurrency.lockutils [req-dad1775f-bdda-488c-9a94-8b695295ff19 req-b0637a27-5740-4bdc-9797-cf8e8a2d606a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "19dd5782-9191-431a-bc5b-c5c3d41fc07c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:08:50 np0005548731 nova_compute[232433]: 2025-12-06 07:08:50.981 232437 DEBUG nova.compute.manager [req-dad1775f-bdda-488c-9a94-8b695295ff19 req-b0637a27-5740-4bdc-9797-cf8e8a2d606a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 19dd5782-9191-431a-bc5b-c5c3d41fc07c] No waiting events found dispatching network-vif-unplugged-bc53fd4b-9f72-4ae5-bc5f-122f83e81ba4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 02:08:50 np0005548731 nova_compute[232433]: 2025-12-06 07:08:50.981 232437 DEBUG nova.compute.manager [req-dad1775f-bdda-488c-9a94-8b695295ff19 req-b0637a27-5740-4bdc-9797-cf8e8a2d606a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 19dd5782-9191-431a-bc5b-c5c3d41fc07c] Received event network-vif-unplugged-bc53fd4b-9f72-4ae5-bc5f-122f83e81ba4 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Dec  6 02:08:50 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:08:50.989 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[fec4e002-a636-4f20-b2bd-df96ef42c37d]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 513976, 'reachable_time': 36984, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 252884, 'error': None, 'target': 'ovnmeta-81801c54-c543-4e0f-94c1-cdf24ddb6bce', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:08:50 np0005548731 systemd[1]: run-netns-ovnmeta\x2d81801c54\x2dc543\x2d4e0f\x2d94c1\x2dcdf24ddb6bce.mount: Deactivated successfully.
Dec  6 02:08:50 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:08:50.992 144079 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-81801c54-c543-4e0f-94c1-cdf24ddb6bce deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Dec  6 02:08:50 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:08:50.992 144079 DEBUG oslo.privsep.daemon [-] privsep: reply[078d8606-5a2a-4ec8-a796-5b783b770929]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:08:51 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:08:51.192 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=16, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:ec:b3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '32:72:e7:89:e0:7d'}, ipsec=False) old=SB_Global(nb_cfg=15) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 02:08:51 np0005548731 nova_compute[232433]: 2025-12-06 07:08:51.192 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:08:51 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:08:51.193 143965 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec  6 02:08:51 np0005548731 ceph-mon[77458]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #58. Immutable memtables: 0.
Dec  6 02:08:51 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:08:51.263842) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec  6 02:08:51 np0005548731 ceph-mon[77458]: rocksdb: [db/flush_job.cc:856] [default] [JOB 33] Flushing memtable with next log file: 58
Dec  6 02:08:51 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765004931263921, "job": 33, "event": "flush_started", "num_memtables": 1, "num_entries": 2479, "num_deletes": 254, "total_data_size": 5744787, "memory_usage": 5822888, "flush_reason": "Manual Compaction"}
Dec  6 02:08:51 np0005548731 ceph-mon[77458]: rocksdb: [db/flush_job.cc:885] [default] [JOB 33] Level-0 flush table #59: started
Dec  6 02:08:51 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765004931282947, "cf_name": "default", "job": 33, "event": "table_file_creation", "file_number": 59, "file_size": 3744233, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 28900, "largest_seqno": 31374, "table_properties": {"data_size": 3734041, "index_size": 6494, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2629, "raw_key_size": 21767, "raw_average_key_size": 20, "raw_value_size": 3713428, "raw_average_value_size": 3567, "num_data_blocks": 282, "num_entries": 1041, "num_filter_entries": 1041, "num_deletions": 254, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765004735, "oldest_key_time": 1765004735, "file_creation_time": 1765004931, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bc01f9a1-fda8-4008-aee1-1fa5ff4371a1", "db_session_id": "CWBL8WMDM4D79BU86E9T", "orig_file_number": 59, "seqno_to_time_mapping": "N/A"}}
Dec  6 02:08:51 np0005548731 ceph-mon[77458]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 33] Flush lasted 19126 microseconds, and 7687 cpu microseconds.
Dec  6 02:08:51 np0005548731 ceph-mon[77458]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  6 02:08:51 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:08:51.282986) [db/flush_job.cc:967] [default] [JOB 33] Level-0 flush table #59: 3744233 bytes OK
Dec  6 02:08:51 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:08:51.283004) [db/memtable_list.cc:519] [default] Level-0 commit table #59 started
Dec  6 02:08:51 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:08:51.284648) [db/memtable_list.cc:722] [default] Level-0 commit table #59: memtable #1 done
Dec  6 02:08:51 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:08:51.284661) EVENT_LOG_v1 {"time_micros": 1765004931284656, "job": 33, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec  6 02:08:51 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:08:51.284680) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec  6 02:08:51 np0005548731 ceph-mon[77458]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 33] Try to delete WAL files size 5733744, prev total WAL file size 5733744, number of live WAL files 2.
Dec  6 02:08:51 np0005548731 ceph-mon[77458]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000055.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  6 02:08:51 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:08:51.286274) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032323539' seq:72057594037927935, type:22 .. '7061786F730032353131' seq:0, type:0; will stop at (end)
Dec  6 02:08:51 np0005548731 ceph-mon[77458]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 34] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec  6 02:08:51 np0005548731 ceph-mon[77458]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 33 Base level 0, inputs: [59(3656KB)], [57(7803KB)]
Dec  6 02:08:51 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765004931286317, "job": 34, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [59], "files_L6": [57], "score": -1, "input_data_size": 11734685, "oldest_snapshot_seqno": -1}
Dec  6 02:08:51 np0005548731 ceph-mon[77458]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 34] Generated table #60: 5974 keys, 9720240 bytes, temperature: kUnknown
Dec  6 02:08:51 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765004931347978, "cf_name": "default", "job": 34, "event": "table_file_creation", "file_number": 60, "file_size": 9720240, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9680164, "index_size": 24030, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 14981, "raw_key_size": 153169, "raw_average_key_size": 25, "raw_value_size": 9572552, "raw_average_value_size": 1602, "num_data_blocks": 965, "num_entries": 5974, "num_filter_entries": 5974, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765002564, "oldest_key_time": 0, "file_creation_time": 1765004931, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bc01f9a1-fda8-4008-aee1-1fa5ff4371a1", "db_session_id": "CWBL8WMDM4D79BU86E9T", "orig_file_number": 60, "seqno_to_time_mapping": "N/A"}}
Dec  6 02:08:51 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e189 e189: 3 total, 3 up, 3 in
Dec  6 02:08:51 np0005548731 ceph-mon[77458]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  6 02:08:51 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:08:51.348243) [db/compaction/compaction_job.cc:1663] [default] [JOB 34] Compacted 1@0 + 1@6 files to L6 => 9720240 bytes
Dec  6 02:08:51 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:08:51.352741) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 190.0 rd, 157.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.6, 7.6 +0.0 blob) out(9.3 +0.0 blob), read-write-amplify(5.7) write-amplify(2.6) OK, records in: 6499, records dropped: 525 output_compression: NoCompression
Dec  6 02:08:51 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:08:51.352804) EVENT_LOG_v1 {"time_micros": 1765004931352778, "job": 34, "event": "compaction_finished", "compaction_time_micros": 61750, "compaction_time_cpu_micros": 22907, "output_level": 6, "num_output_files": 1, "total_output_size": 9720240, "num_input_records": 6499, "num_output_records": 5974, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec  6 02:08:51 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:08:51.286209) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 02:08:51 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:08:51.354744) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 02:08:51 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:08:51.354749) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 02:08:51 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:08:51.354752) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 02:08:51 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:08:51.354754) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 02:08:51 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:08:51.354756) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 02:08:51 np0005548731 ceph-mon[77458]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000059.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  6 02:08:51 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765004931355417, "job": 0, "event": "table_file_deletion", "file_number": 59}
Dec  6 02:08:51 np0005548731 ceph-mon[77458]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000057.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  6 02:08:51 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765004931356800, "job": 0, "event": "table_file_deletion", "file_number": 57}
Dec  6 02:08:51 np0005548731 nova_compute[232433]: 2025-12-06 07:08:51.533 232437 DEBUG nova.network.neutron [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] [instance: 19dd5782-9191-431a-bc5b-c5c3d41fc07c] Updating instance_info_cache with network_info: [{"id": "bc53fd4b-9f72-4ae5-bc5f-122f83e81ba4", "address": "fa:16:3e:32:66:5e", "network": {"id": "81801c54-c543-4e0f-94c1-cdf24ddb6bce", "bridge": "br-int", "label": "tempest-UpdateMultiattachVolumeNegativeTest-1315951976-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.224", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c2236fb6443441618c69ad660b0932dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbc53fd4b-9f", "ovs_interfaceid": "bc53fd4b-9f72-4ae5-bc5f-122f83e81ba4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 02:08:51 np0005548731 nova_compute[232433]: 2025-12-06 07:08:51.540 232437 INFO nova.virt.libvirt.driver [None req-99ab4e2c-2ec5-4ef9-a698-c13dfcd5f0d5 b7c2f91caf1444c68dcf6bd66966d67e c2236fb6443441618c69ad660b0932dd - - default default] [instance: 19dd5782-9191-431a-bc5b-c5c3d41fc07c] Deleting instance files /var/lib/nova/instances/19dd5782-9191-431a-bc5b-c5c3d41fc07c_del#033[00m
Dec  6 02:08:51 np0005548731 nova_compute[232433]: 2025-12-06 07:08:51.541 232437 INFO nova.virt.libvirt.driver [None req-99ab4e2c-2ec5-4ef9-a698-c13dfcd5f0d5 b7c2f91caf1444c68dcf6bd66966d67e c2236fb6443441618c69ad660b0932dd - - default default] [instance: 19dd5782-9191-431a-bc5b-c5c3d41fc07c] Deletion of /var/lib/nova/instances/19dd5782-9191-431a-bc5b-c5c3d41fc07c_del complete#033[00m
Dec  6 02:08:51 np0005548731 nova_compute[232433]: 2025-12-06 07:08:51.551 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Releasing lock "refresh_cache-19dd5782-9191-431a-bc5b-c5c3d41fc07c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 02:08:51 np0005548731 nova_compute[232433]: 2025-12-06 07:08:51.551 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] [instance: 19dd5782-9191-431a-bc5b-c5c3d41fc07c] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Dec  6 02:08:51 np0005548731 nova_compute[232433]: 2025-12-06 07:08:51.551 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:08:51 np0005548731 nova_compute[232433]: 2025-12-06 07:08:51.551 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:08:51 np0005548731 nova_compute[232433]: 2025-12-06 07:08:51.551 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:08:51 np0005548731 nova_compute[232433]: 2025-12-06 07:08:51.551 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  6 02:08:51 np0005548731 nova_compute[232433]: 2025-12-06 07:08:51.552 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:08:51 np0005548731 nova_compute[232433]: 2025-12-06 07:08:51.557 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:08:51 np0005548731 nova_compute[232433]: 2025-12-06 07:08:51.580 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:08:51 np0005548731 nova_compute[232433]: 2025-12-06 07:08:51.581 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:08:51 np0005548731 nova_compute[232433]: 2025-12-06 07:08:51.581 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:08:51 np0005548731 nova_compute[232433]: 2025-12-06 07:08:51.581 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  6 02:08:51 np0005548731 nova_compute[232433]: 2025-12-06 07:08:51.581 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:08:51 np0005548731 nova_compute[232433]: 2025-12-06 07:08:51.608 232437 INFO nova.compute.manager [None req-99ab4e2c-2ec5-4ef9-a698-c13dfcd5f0d5 b7c2f91caf1444c68dcf6bd66966d67e c2236fb6443441618c69ad660b0932dd - - default default] [instance: 19dd5782-9191-431a-bc5b-c5c3d41fc07c] Took 1.00 seconds to destroy the instance on the hypervisor.#033[00m
Dec  6 02:08:51 np0005548731 nova_compute[232433]: 2025-12-06 07:08:51.609 232437 DEBUG oslo.service.loopingcall [None req-99ab4e2c-2ec5-4ef9-a698-c13dfcd5f0d5 b7c2f91caf1444c68dcf6bd66966d67e c2236fb6443441618c69ad660b0932dd - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Dec  6 02:08:51 np0005548731 nova_compute[232433]: 2025-12-06 07:08:51.609 232437 DEBUG nova.compute.manager [-] [instance: 19dd5782-9191-431a-bc5b-c5c3d41fc07c] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Dec  6 02:08:51 np0005548731 nova_compute[232433]: 2025-12-06 07:08:51.610 232437 DEBUG nova.network.neutron [-] [instance: 19dd5782-9191-431a-bc5b-c5c3d41fc07c] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Dec  6 02:08:51 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 02:08:51 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2587063817' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 02:08:52 np0005548731 nova_compute[232433]: 2025-12-06 07:08:52.008 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.426s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:08:52 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:08:52 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:08:52 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:08:52.033 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:08:52 np0005548731 nova_compute[232433]: 2025-12-06 07:08:52.177 232437 WARNING nova.virt.libvirt.driver [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  6 02:08:52 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:08:52 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:08:52 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:08:52.178 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:08:52 np0005548731 nova_compute[232433]: 2025-12-06 07:08:52.179 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4743MB free_disk=20.84299087524414GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  6 02:08:52 np0005548731 nova_compute[232433]: 2025-12-06 07:08:52.179 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:08:52 np0005548731 nova_compute[232433]: 2025-12-06 07:08:52.179 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:08:52 np0005548731 nova_compute[232433]: 2025-12-06 07:08:52.300 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Instance 19dd5782-9191-431a-bc5b-c5c3d41fc07c actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  6 02:08:52 np0005548731 nova_compute[232433]: 2025-12-06 07:08:52.301 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  6 02:08:52 np0005548731 nova_compute[232433]: 2025-12-06 07:08:52.301 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  6 02:08:52 np0005548731 nova_compute[232433]: 2025-12-06 07:08:52.331 232437 DEBUG nova.scheduler.client.report [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Refreshing inventories for resource provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Dec  6 02:08:52 np0005548731 nova_compute[232433]: 2025-12-06 07:08:52.382 232437 DEBUG nova.scheduler.client.report [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Updating ProviderTree inventory for provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Dec  6 02:08:52 np0005548731 nova_compute[232433]: 2025-12-06 07:08:52.382 232437 DEBUG nova.compute.provider_tree [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Updating inventory in ProviderTree for provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Dec  6 02:08:52 np0005548731 nova_compute[232433]: 2025-12-06 07:08:52.397 232437 DEBUG nova.scheduler.client.report [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Refreshing aggregate associations for resource provider 6d00757a-082f-486d-ae84-869a2ba2e6e7, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Dec  6 02:08:52 np0005548731 nova_compute[232433]: 2025-12-06 07:08:52.443 232437 DEBUG nova.scheduler.client.report [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Refreshing trait associations for resource provider 6d00757a-082f-486d-ae84-869a2ba2e6e7, traits: COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_STORAGE_BUS_USB,COMPUTE_ACCELERATORS,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_TRUSTED_CERTS,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_SSE,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_SSE2,HW_CPU_X86_SSE41,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_STORAGE_BUS_SATA,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_SECURITY_TPM_1_2,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_VOLUME_EXTEND,COMPUTE_NODE,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_SECURITY_TPM_2_0,COMPUTE_VIOMMU_MODEL_AUTO,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_SSE42,COMPUTE_STORAGE_BUS_FDC,COMPUTE_RESCUE_BFV,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_IMAGE_TYPE_ARI _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Dec  6 02:08:52 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e190 e190: 3 total, 3 up, 3 in
Dec  6 02:08:52 np0005548731 nova_compute[232433]: 2025-12-06 07:08:52.518 232437 DEBUG nova.network.neutron [-] [instance: 19dd5782-9191-431a-bc5b-c5c3d41fc07c] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 02:08:52 np0005548731 nova_compute[232433]: 2025-12-06 07:08:52.521 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:08:52 np0005548731 nova_compute[232433]: 2025-12-06 07:08:52.549 232437 INFO nova.compute.manager [-] [instance: 19dd5782-9191-431a-bc5b-c5c3d41fc07c] Took 0.94 seconds to deallocate network for instance.#033[00m
Dec  6 02:08:52 np0005548731 nova_compute[232433]: 2025-12-06 07:08:52.583 232437 DEBUG nova.compute.manager [req-a3bb530c-2de9-4eaa-b53b-e28e7b116381 req-ee6f3bed-6a73-4ec7-b134-8264471cd8dd 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 19dd5782-9191-431a-bc5b-c5c3d41fc07c] Received event network-vif-deleted-bc53fd4b-9f72-4ae5-bc5f-122f83e81ba4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:08:52 np0005548731 nova_compute[232433]: 2025-12-06 07:08:52.606 232437 DEBUG oslo_concurrency.lockutils [None req-99ab4e2c-2ec5-4ef9-a698-c13dfcd5f0d5 b7c2f91caf1444c68dcf6bd66966d67e c2236fb6443441618c69ad660b0932dd - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:08:52 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 02:08:52 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1084889315' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 02:08:52 np0005548731 nova_compute[232433]: 2025-12-06 07:08:52.997 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:08:53 np0005548731 nova_compute[232433]: 2025-12-06 07:08:53.004 232437 DEBUG nova.compute.provider_tree [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Inventory has not changed in ProviderTree for provider: 6d00757a-082f-486d-ae84-869a2ba2e6e7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  6 02:08:53 np0005548731 nova_compute[232433]: 2025-12-06 07:08:53.114 232437 DEBUG nova.scheduler.client.report [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Inventory has not changed for provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  6 02:08:53 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:08:53.195 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=9f96b960-b4f2-40bd-ae99-08121f5e8b78, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '16'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:08:53 np0005548731 nova_compute[232433]: 2025-12-06 07:08:53.206 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  6 02:08:53 np0005548731 nova_compute[232433]: 2025-12-06 07:08:53.207 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.027s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:08:53 np0005548731 nova_compute[232433]: 2025-12-06 07:08:53.207 232437 DEBUG oslo_concurrency.lockutils [None req-99ab4e2c-2ec5-4ef9-a698-c13dfcd5f0d5 b7c2f91caf1444c68dcf6bd66966d67e c2236fb6443441618c69ad660b0932dd - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.601s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:08:53 np0005548731 nova_compute[232433]: 2025-12-06 07:08:53.259 232437 DEBUG oslo_concurrency.processutils [None req-99ab4e2c-2ec5-4ef9-a698-c13dfcd5f0d5 b7c2f91caf1444c68dcf6bd66966d67e c2236fb6443441618c69ad660b0932dd - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:08:53 np0005548731 nova_compute[232433]: 2025-12-06 07:08:53.551 232437 DEBUG nova.compute.manager [req-57cd51af-eb0d-43b5-9670-8d838f7602ed req-8ebc0d9a-7542-463d-853c-b29893840713 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 19dd5782-9191-431a-bc5b-c5c3d41fc07c] Received event network-vif-plugged-bc53fd4b-9f72-4ae5-bc5f-122f83e81ba4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:08:53 np0005548731 nova_compute[232433]: 2025-12-06 07:08:53.552 232437 DEBUG oslo_concurrency.lockutils [req-57cd51af-eb0d-43b5-9670-8d838f7602ed req-8ebc0d9a-7542-463d-853c-b29893840713 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "19dd5782-9191-431a-bc5b-c5c3d41fc07c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:08:53 np0005548731 nova_compute[232433]: 2025-12-06 07:08:53.552 232437 DEBUG oslo_concurrency.lockutils [req-57cd51af-eb0d-43b5-9670-8d838f7602ed req-8ebc0d9a-7542-463d-853c-b29893840713 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "19dd5782-9191-431a-bc5b-c5c3d41fc07c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:08:53 np0005548731 nova_compute[232433]: 2025-12-06 07:08:53.552 232437 DEBUG oslo_concurrency.lockutils [req-57cd51af-eb0d-43b5-9670-8d838f7602ed req-8ebc0d9a-7542-463d-853c-b29893840713 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "19dd5782-9191-431a-bc5b-c5c3d41fc07c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:08:53 np0005548731 nova_compute[232433]: 2025-12-06 07:08:53.552 232437 DEBUG nova.compute.manager [req-57cd51af-eb0d-43b5-9670-8d838f7602ed req-8ebc0d9a-7542-463d-853c-b29893840713 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 19dd5782-9191-431a-bc5b-c5c3d41fc07c] No waiting events found dispatching network-vif-plugged-bc53fd4b-9f72-4ae5-bc5f-122f83e81ba4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 02:08:53 np0005548731 nova_compute[232433]: 2025-12-06 07:08:53.553 232437 WARNING nova.compute.manager [req-57cd51af-eb0d-43b5-9670-8d838f7602ed req-8ebc0d9a-7542-463d-853c-b29893840713 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 19dd5782-9191-431a-bc5b-c5c3d41fc07c] Received unexpected event network-vif-plugged-bc53fd4b-9f72-4ae5-bc5f-122f83e81ba4 for instance with vm_state deleted and task_state None.#033[00m
Dec  6 02:08:53 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 02:08:53 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/4138188771' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 02:08:53 np0005548731 nova_compute[232433]: 2025-12-06 07:08:53.726 232437 DEBUG oslo_concurrency.processutils [None req-99ab4e2c-2ec5-4ef9-a698-c13dfcd5f0d5 b7c2f91caf1444c68dcf6bd66966d67e c2236fb6443441618c69ad660b0932dd - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.467s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:08:53 np0005548731 nova_compute[232433]: 2025-12-06 07:08:53.732 232437 DEBUG nova.compute.provider_tree [None req-99ab4e2c-2ec5-4ef9-a698-c13dfcd5f0d5 b7c2f91caf1444c68dcf6bd66966d67e c2236fb6443441618c69ad660b0932dd - - default default] Inventory has not changed in ProviderTree for provider: 6d00757a-082f-486d-ae84-869a2ba2e6e7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  6 02:08:53 np0005548731 nova_compute[232433]: 2025-12-06 07:08:53.753 232437 DEBUG nova.scheduler.client.report [None req-99ab4e2c-2ec5-4ef9-a698-c13dfcd5f0d5 b7c2f91caf1444c68dcf6bd66966d67e c2236fb6443441618c69ad660b0932dd - - default default] Inventory has not changed for provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  6 02:08:53 np0005548731 nova_compute[232433]: 2025-12-06 07:08:53.760 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:08:53 np0005548731 nova_compute[232433]: 2025-12-06 07:08:53.783 232437 DEBUG oslo_concurrency.lockutils [None req-99ab4e2c-2ec5-4ef9-a698-c13dfcd5f0d5 b7c2f91caf1444c68dcf6bd66966d67e c2236fb6443441618c69ad660b0932dd - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.576s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:08:53 np0005548731 nova_compute[232433]: 2025-12-06 07:08:53.825 232437 INFO nova.scheduler.client.report [None req-99ab4e2c-2ec5-4ef9-a698-c13dfcd5f0d5 b7c2f91caf1444c68dcf6bd66966d67e c2236fb6443441618c69ad660b0932dd - - default default] Deleted allocations for instance 19dd5782-9191-431a-bc5b-c5c3d41fc07c#033[00m
Dec  6 02:08:53 np0005548731 nova_compute[232433]: 2025-12-06 07:08:53.908 232437 DEBUG oslo_concurrency.lockutils [None req-99ab4e2c-2ec5-4ef9-a698-c13dfcd5f0d5 b7c2f91caf1444c68dcf6bd66966d67e c2236fb6443441618c69ad660b0932dd - - default default] Lock "19dd5782-9191-431a-bc5b-c5c3d41fc07c" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.301s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:08:54 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:08:54 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:08:54 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:08:54.035 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:08:54 np0005548731 nova_compute[232433]: 2025-12-06 07:08:54.099 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:08:54 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:08:54 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:08:54 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:08:54.181 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:08:55 np0005548731 nova_compute[232433]: 2025-12-06 07:08:55.036 232437 DEBUG oslo_concurrency.lockutils [None req-47d07374-e9b5-4d6e-b620-6c73e0ce2e5b 3786fc2472ec43adb27b29bfa497a6a2 6f86ab5a5bf14cb6b789f065cc8ca04a - - default default] Acquiring lock "8d469882-2e42-4956-8009-73a78a8e5c80" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:08:55 np0005548731 nova_compute[232433]: 2025-12-06 07:08:55.036 232437 DEBUG oslo_concurrency.lockutils [None req-47d07374-e9b5-4d6e-b620-6c73e0ce2e5b 3786fc2472ec43adb27b29bfa497a6a2 6f86ab5a5bf14cb6b789f065cc8ca04a - - default default] Lock "8d469882-2e42-4956-8009-73a78a8e5c80" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:08:55 np0005548731 nova_compute[232433]: 2025-12-06 07:08:55.064 232437 DEBUG nova.compute.manager [None req-47d07374-e9b5-4d6e-b620-6c73e0ce2e5b 3786fc2472ec43adb27b29bfa497a6a2 6f86ab5a5bf14cb6b789f065cc8ca04a - - default default] [instance: 8d469882-2e42-4956-8009-73a78a8e5c80] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Dec  6 02:08:55 np0005548731 nova_compute[232433]: 2025-12-06 07:08:55.126 232437 DEBUG oslo_concurrency.lockutils [None req-47d07374-e9b5-4d6e-b620-6c73e0ce2e5b 3786fc2472ec43adb27b29bfa497a6a2 6f86ab5a5bf14cb6b789f065cc8ca04a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:08:55 np0005548731 nova_compute[232433]: 2025-12-06 07:08:55.127 232437 DEBUG oslo_concurrency.lockutils [None req-47d07374-e9b5-4d6e-b620-6c73e0ce2e5b 3786fc2472ec43adb27b29bfa497a6a2 6f86ab5a5bf14cb6b789f065cc8ca04a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:08:55 np0005548731 nova_compute[232433]: 2025-12-06 07:08:55.134 232437 DEBUG nova.virt.hardware [None req-47d07374-e9b5-4d6e-b620-6c73e0ce2e5b 3786fc2472ec43adb27b29bfa497a6a2 6f86ab5a5bf14cb6b789f065cc8ca04a - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Dec  6 02:08:55 np0005548731 nova_compute[232433]: 2025-12-06 07:08:55.134 232437 INFO nova.compute.claims [None req-47d07374-e9b5-4d6e-b620-6c73e0ce2e5b 3786fc2472ec43adb27b29bfa497a6a2 6f86ab5a5bf14cb6b789f065cc8ca04a - - default default] [instance: 8d469882-2e42-4956-8009-73a78a8e5c80] Claim successful on node compute-2.ctlplane.example.com#033[00m
Dec  6 02:08:55 np0005548731 nova_compute[232433]: 2025-12-06 07:08:55.267 232437 DEBUG oslo_concurrency.processutils [None req-47d07374-e9b5-4d6e-b620-6c73e0ce2e5b 3786fc2472ec43adb27b29bfa497a6a2 6f86ab5a5bf14cb6b789f065cc8ca04a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:08:55 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e190 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:08:55 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 02:08:55 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1692578340' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 02:08:55 np0005548731 nova_compute[232433]: 2025-12-06 07:08:55.711 232437 DEBUG oslo_concurrency.processutils [None req-47d07374-e9b5-4d6e-b620-6c73e0ce2e5b 3786fc2472ec43adb27b29bfa497a6a2 6f86ab5a5bf14cb6b789f065cc8ca04a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.444s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:08:55 np0005548731 nova_compute[232433]: 2025-12-06 07:08:55.717 232437 DEBUG nova.compute.provider_tree [None req-47d07374-e9b5-4d6e-b620-6c73e0ce2e5b 3786fc2472ec43adb27b29bfa497a6a2 6f86ab5a5bf14cb6b789f065cc8ca04a - - default default] Inventory has not changed in ProviderTree for provider: 6d00757a-082f-486d-ae84-869a2ba2e6e7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  6 02:08:55 np0005548731 nova_compute[232433]: 2025-12-06 07:08:55.733 232437 DEBUG nova.scheduler.client.report [None req-47d07374-e9b5-4d6e-b620-6c73e0ce2e5b 3786fc2472ec43adb27b29bfa497a6a2 6f86ab5a5bf14cb6b789f065cc8ca04a - - default default] Inventory has not changed for provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec  6 02:08:55 np0005548731 nova_compute[232433]: 2025-12-06 07:08:55.755 232437 DEBUG oslo_concurrency.lockutils [None req-47d07374-e9b5-4d6e-b620-6c73e0ce2e5b 3786fc2472ec43adb27b29bfa497a6a2 6f86ab5a5bf14cb6b789f065cc8ca04a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.628s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  6 02:08:55 np0005548731 nova_compute[232433]: 2025-12-06 07:08:55.756 232437 DEBUG nova.compute.manager [None req-47d07374-e9b5-4d6e-b620-6c73e0ce2e5b 3786fc2472ec43adb27b29bfa497a6a2 6f86ab5a5bf14cb6b789f065cc8ca04a - - default default] [instance: 8d469882-2e42-4956-8009-73a78a8e5c80] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec  6 02:08:55 np0005548731 nova_compute[232433]: 2025-12-06 07:08:55.799 232437 DEBUG nova.compute.manager [None req-47d07374-e9b5-4d6e-b620-6c73e0ce2e5b 3786fc2472ec43adb27b29bfa497a6a2 6f86ab5a5bf14cb6b789f065cc8ca04a - - default default] [instance: 8d469882-2e42-4956-8009-73a78a8e5c80] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec  6 02:08:55 np0005548731 nova_compute[232433]: 2025-12-06 07:08:55.800 232437 DEBUG nova.network.neutron [None req-47d07374-e9b5-4d6e-b620-6c73e0ce2e5b 3786fc2472ec43adb27b29bfa497a6a2 6f86ab5a5bf14cb6b789f065cc8ca04a - - default default] [instance: 8d469882-2e42-4956-8009-73a78a8e5c80] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec  6 02:08:55 np0005548731 nova_compute[232433]: 2025-12-06 07:08:55.822 232437 INFO nova.virt.libvirt.driver [None req-47d07374-e9b5-4d6e-b620-6c73e0ce2e5b 3786fc2472ec43adb27b29bfa497a6a2 6f86ab5a5bf14cb6b789f065cc8ca04a - - default default] [instance: 8d469882-2e42-4956-8009-73a78a8e5c80] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec  6 02:08:55 np0005548731 nova_compute[232433]: 2025-12-06 07:08:55.838 232437 DEBUG nova.compute.manager [None req-47d07374-e9b5-4d6e-b620-6c73e0ce2e5b 3786fc2472ec43adb27b29bfa497a6a2 6f86ab5a5bf14cb6b789f065cc8ca04a - - default default] [instance: 8d469882-2e42-4956-8009-73a78a8e5c80] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec  6 02:08:55 np0005548731 nova_compute[232433]: 2025-12-06 07:08:55.868 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 02:08:55 np0005548731 nova_compute[232433]: 2025-12-06 07:08:55.935 232437 DEBUG nova.compute.manager [None req-47d07374-e9b5-4d6e-b620-6c73e0ce2e5b 3786fc2472ec43adb27b29bfa497a6a2 6f86ab5a5bf14cb6b789f065cc8ca04a - - default default] [instance: 8d469882-2e42-4956-8009-73a78a8e5c80] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec  6 02:08:55 np0005548731 nova_compute[232433]: 2025-12-06 07:08:55.936 232437 DEBUG nova.virt.libvirt.driver [None req-47d07374-e9b5-4d6e-b620-6c73e0ce2e5b 3786fc2472ec43adb27b29bfa497a6a2 6f86ab5a5bf14cb6b789f065cc8ca04a - - default default] [instance: 8d469882-2e42-4956-8009-73a78a8e5c80] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec  6 02:08:55 np0005548731 nova_compute[232433]: 2025-12-06 07:08:55.936 232437 INFO nova.virt.libvirt.driver [None req-47d07374-e9b5-4d6e-b620-6c73e0ce2e5b 3786fc2472ec43adb27b29bfa497a6a2 6f86ab5a5bf14cb6b789f065cc8ca04a - - default default] [instance: 8d469882-2e42-4956-8009-73a78a8e5c80] Creating image(s)
Dec  6 02:08:55 np0005548731 nova_compute[232433]: 2025-12-06 07:08:55.965 232437 DEBUG nova.storage.rbd_utils [None req-47d07374-e9b5-4d6e-b620-6c73e0ce2e5b 3786fc2472ec43adb27b29bfa497a6a2 6f86ab5a5bf14cb6b789f065cc8ca04a - - default default] rbd image 8d469882-2e42-4956-8009-73a78a8e5c80_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec  6 02:08:55 np0005548731 nova_compute[232433]: 2025-12-06 07:08:55.998 232437 DEBUG nova.storage.rbd_utils [None req-47d07374-e9b5-4d6e-b620-6c73e0ce2e5b 3786fc2472ec43adb27b29bfa497a6a2 6f86ab5a5bf14cb6b789f065cc8ca04a - - default default] rbd image 8d469882-2e42-4956-8009-73a78a8e5c80_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec  6 02:08:56 np0005548731 nova_compute[232433]: 2025-12-06 07:08:56.027 232437 DEBUG nova.storage.rbd_utils [None req-47d07374-e9b5-4d6e-b620-6c73e0ce2e5b 3786fc2472ec43adb27b29bfa497a6a2 6f86ab5a5bf14cb6b789f065cc8ca04a - - default default] rbd image 8d469882-2e42-4956-8009-73a78a8e5c80_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec  6 02:08:56 np0005548731 nova_compute[232433]: 2025-12-06 07:08:56.031 232437 DEBUG oslo_concurrency.processutils [None req-47d07374-e9b5-4d6e-b620-6c73e0ce2e5b 3786fc2472ec43adb27b29bfa497a6a2 6f86ab5a5bf14cb6b789f065cc8ca04a - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  6 02:08:56 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:08:56 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:08:56 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:08:56.037 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:08:56 np0005548731 nova_compute[232433]: 2025-12-06 07:08:56.090 232437 DEBUG oslo_concurrency.processutils [None req-47d07374-e9b5-4d6e-b620-6c73e0ce2e5b 3786fc2472ec43adb27b29bfa497a6a2 6f86ab5a5bf14cb6b789f065cc8ca04a - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  6 02:08:56 np0005548731 nova_compute[232433]: 2025-12-06 07:08:56.091 232437 DEBUG oslo_concurrency.lockutils [None req-47d07374-e9b5-4d6e-b620-6c73e0ce2e5b 3786fc2472ec43adb27b29bfa497a6a2 6f86ab5a5bf14cb6b789f065cc8ca04a - - default default] Acquiring lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  6 02:08:56 np0005548731 nova_compute[232433]: 2025-12-06 07:08:56.091 232437 DEBUG oslo_concurrency.lockutils [None req-47d07374-e9b5-4d6e-b620-6c73e0ce2e5b 3786fc2472ec43adb27b29bfa497a6a2 6f86ab5a5bf14cb6b789f065cc8ca04a - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  6 02:08:56 np0005548731 nova_compute[232433]: 2025-12-06 07:08:56.092 232437 DEBUG oslo_concurrency.lockutils [None req-47d07374-e9b5-4d6e-b620-6c73e0ce2e5b 3786fc2472ec43adb27b29bfa497a6a2 6f86ab5a5bf14cb6b789f065cc8ca04a - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  6 02:08:56 np0005548731 nova_compute[232433]: 2025-12-06 07:08:56.121 232437 DEBUG nova.storage.rbd_utils [None req-47d07374-e9b5-4d6e-b620-6c73e0ce2e5b 3786fc2472ec43adb27b29bfa497a6a2 6f86ab5a5bf14cb6b789f065cc8ca04a - - default default] rbd image 8d469882-2e42-4956-8009-73a78a8e5c80_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec  6 02:08:56 np0005548731 nova_compute[232433]: 2025-12-06 07:08:56.125 232437 DEBUG oslo_concurrency.processutils [None req-47d07374-e9b5-4d6e-b620-6c73e0ce2e5b 3786fc2472ec43adb27b29bfa497a6a2 6f86ab5a5bf14cb6b789f065cc8ca04a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef 8d469882-2e42-4956-8009-73a78a8e5c80_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  6 02:08:56 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:08:56 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:08:56 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:08:56.183 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:08:56 np0005548731 nova_compute[232433]: 2025-12-06 07:08:56.558 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 02:08:57 np0005548731 nova_compute[232433]: 2025-12-06 07:08:57.907 232437 DEBUG nova.network.neutron [None req-47d07374-e9b5-4d6e-b620-6c73e0ce2e5b 3786fc2472ec43adb27b29bfa497a6a2 6f86ab5a5bf14cb6b789f065cc8ca04a - - default default] [instance: 8d469882-2e42-4956-8009-73a78a8e5c80] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188
Dec  6 02:08:57 np0005548731 nova_compute[232433]: 2025-12-06 07:08:57.907 232437 DEBUG nova.compute.manager [None req-47d07374-e9b5-4d6e-b620-6c73e0ce2e5b 3786fc2472ec43adb27b29bfa497a6a2 6f86ab5a5bf14cb6b789f065cc8ca04a - - default default] [instance: 8d469882-2e42-4956-8009-73a78a8e5c80] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec  6 02:08:58 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:08:58 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:08:58 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:08:58.039 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:08:58 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:08:58 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:08:58 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:08:58.185 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:08:59 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e191 e191: 3 total, 3 up, 3 in
Dec  6 02:08:59 np0005548731 nova_compute[232433]: 2025-12-06 07:08:59.197 232437 DEBUG oslo_concurrency.processutils [None req-47d07374-e9b5-4d6e-b620-6c73e0ce2e5b 3786fc2472ec43adb27b29bfa497a6a2 6f86ab5a5bf14cb6b789f065cc8ca04a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef 8d469882-2e42-4956-8009-73a78a8e5c80_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 3.073s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  6 02:08:59 np0005548731 nova_compute[232433]: 2025-12-06 07:08:59.508 232437 DEBUG nova.storage.rbd_utils [None req-47d07374-e9b5-4d6e-b620-6c73e0ce2e5b 3786fc2472ec43adb27b29bfa497a6a2 6f86ab5a5bf14cb6b789f065cc8ca04a - - default default] resizing rbd image 8d469882-2e42-4956-8009-73a78a8e5c80_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Dec  6 02:08:59 np0005548731 nova_compute[232433]: 2025-12-06 07:08:59.612 232437 DEBUG nova.objects.instance [None req-47d07374-e9b5-4d6e-b620-6c73e0ce2e5b 3786fc2472ec43adb27b29bfa497a6a2 6f86ab5a5bf14cb6b789f065cc8ca04a - - default default] Lazy-loading 'migration_context' on Instance uuid 8d469882-2e42-4956-8009-73a78a8e5c80 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec  6 02:08:59 np0005548731 nova_compute[232433]: 2025-12-06 07:08:59.628 232437 DEBUG nova.virt.libvirt.driver [None req-47d07374-e9b5-4d6e-b620-6c73e0ce2e5b 3786fc2472ec43adb27b29bfa497a6a2 6f86ab5a5bf14cb6b789f065cc8ca04a - - default default] [instance: 8d469882-2e42-4956-8009-73a78a8e5c80] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec  6 02:08:59 np0005548731 nova_compute[232433]: 2025-12-06 07:08:59.628 232437 DEBUG nova.virt.libvirt.driver [None req-47d07374-e9b5-4d6e-b620-6c73e0ce2e5b 3786fc2472ec43adb27b29bfa497a6a2 6f86ab5a5bf14cb6b789f065cc8ca04a - - default default] [instance: 8d469882-2e42-4956-8009-73a78a8e5c80] Ensure instance console log exists: /var/lib/nova/instances/8d469882-2e42-4956-8009-73a78a8e5c80/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec  6 02:08:59 np0005548731 nova_compute[232433]: 2025-12-06 07:08:59.629 232437 DEBUG oslo_concurrency.lockutils [None req-47d07374-e9b5-4d6e-b620-6c73e0ce2e5b 3786fc2472ec43adb27b29bfa497a6a2 6f86ab5a5bf14cb6b789f065cc8ca04a - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  6 02:08:59 np0005548731 nova_compute[232433]: 2025-12-06 07:08:59.629 232437 DEBUG oslo_concurrency.lockutils [None req-47d07374-e9b5-4d6e-b620-6c73e0ce2e5b 3786fc2472ec43adb27b29bfa497a6a2 6f86ab5a5bf14cb6b789f065cc8ca04a - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  6 02:08:59 np0005548731 nova_compute[232433]: 2025-12-06 07:08:59.629 232437 DEBUG oslo_concurrency.lockutils [None req-47d07374-e9b5-4d6e-b620-6c73e0ce2e5b 3786fc2472ec43adb27b29bfa497a6a2 6f86ab5a5bf14cb6b789f065cc8ca04a - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  6 02:08:59 np0005548731 nova_compute[232433]: 2025-12-06 07:08:59.631 232437 DEBUG nova.virt.libvirt.driver [None req-47d07374-e9b5-4d6e-b620-6c73e0ce2e5b 3786fc2472ec43adb27b29bfa497a6a2 6f86ab5a5bf14cb6b789f065cc8ca04a - - default default] [instance: 8d469882-2e42-4956-8009-73a78a8e5c80] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'device_type': 'disk', 'encryption_secret_uuid': None, 'disk_bus': 'virtio', 'boot_index': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'encryption_options': None, 'size': 0, 'encrypted': False, 'image_id': '6efab05d-c7cf-4770-a5c3-c806a2739063'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec  6 02:08:59 np0005548731 nova_compute[232433]: 2025-12-06 07:08:59.635 232437 WARNING nova.virt.libvirt.driver [None req-47d07374-e9b5-4d6e-b620-6c73e0ce2e5b 3786fc2472ec43adb27b29bfa497a6a2 6f86ab5a5bf14cb6b789f065cc8ca04a - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec  6 02:08:59 np0005548731 nova_compute[232433]: 2025-12-06 07:08:59.640 232437 DEBUG nova.virt.libvirt.host [None req-47d07374-e9b5-4d6e-b620-6c73e0ce2e5b 3786fc2472ec43adb27b29bfa497a6a2 6f86ab5a5bf14cb6b789f065cc8ca04a - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec  6 02:08:59 np0005548731 nova_compute[232433]: 2025-12-06 07:08:59.641 232437 DEBUG nova.virt.libvirt.host [None req-47d07374-e9b5-4d6e-b620-6c73e0ce2e5b 3786fc2472ec43adb27b29bfa497a6a2 6f86ab5a5bf14cb6b789f065cc8ca04a - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec  6 02:08:59 np0005548731 nova_compute[232433]: 2025-12-06 07:08:59.644 232437 DEBUG nova.virt.libvirt.host [None req-47d07374-e9b5-4d6e-b620-6c73e0ce2e5b 3786fc2472ec43adb27b29bfa497a6a2 6f86ab5a5bf14cb6b789f065cc8ca04a - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec  6 02:08:59 np0005548731 nova_compute[232433]: 2025-12-06 07:08:59.644 232437 DEBUG nova.virt.libvirt.host [None req-47d07374-e9b5-4d6e-b620-6c73e0ce2e5b 3786fc2472ec43adb27b29bfa497a6a2 6f86ab5a5bf14cb6b789f065cc8ca04a - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec  6 02:08:59 np0005548731 nova_compute[232433]: 2025-12-06 07:08:59.645 232437 DEBUG nova.virt.libvirt.driver [None req-47d07374-e9b5-4d6e-b620-6c73e0ce2e5b 3786fc2472ec43adb27b29bfa497a6a2 6f86ab5a5bf14cb6b789f065cc8ca04a - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec  6 02:08:59 np0005548731 nova_compute[232433]: 2025-12-06 07:08:59.646 232437 DEBUG nova.virt.hardware [None req-47d07374-e9b5-4d6e-b620-6c73e0ce2e5b 3786fc2472ec43adb27b29bfa497a6a2 6f86ab5a5bf14cb6b789f065cc8ca04a - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-06T06:56:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='25848a18-11d9-4f11-80b5-5d005675c76d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec  6 02:08:59 np0005548731 nova_compute[232433]: 2025-12-06 07:08:59.646 232437 DEBUG nova.virt.hardware [None req-47d07374-e9b5-4d6e-b620-6c73e0ce2e5b 3786fc2472ec43adb27b29bfa497a6a2 6f86ab5a5bf14cb6b789f065cc8ca04a - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec  6 02:08:59 np0005548731 nova_compute[232433]: 2025-12-06 07:08:59.647 232437 DEBUG nova.virt.hardware [None req-47d07374-e9b5-4d6e-b620-6c73e0ce2e5b 3786fc2472ec43adb27b29bfa497a6a2 6f86ab5a5bf14cb6b789f065cc8ca04a - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec  6 02:08:59 np0005548731 nova_compute[232433]: 2025-12-06 07:08:59.647 232437 DEBUG nova.virt.hardware [None req-47d07374-e9b5-4d6e-b620-6c73e0ce2e5b 3786fc2472ec43adb27b29bfa497a6a2 6f86ab5a5bf14cb6b789f065cc8ca04a - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec  6 02:08:59 np0005548731 nova_compute[232433]: 2025-12-06 07:08:59.647 232437 DEBUG nova.virt.hardware [None req-47d07374-e9b5-4d6e-b620-6c73e0ce2e5b 3786fc2472ec43adb27b29bfa497a6a2 6f86ab5a5bf14cb6b789f065cc8ca04a - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec  6 02:08:59 np0005548731 nova_compute[232433]: 2025-12-06 07:08:59.647 232437 DEBUG nova.virt.hardware [None req-47d07374-e9b5-4d6e-b620-6c73e0ce2e5b 3786fc2472ec43adb27b29bfa497a6a2 6f86ab5a5bf14cb6b789f065cc8ca04a - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec  6 02:08:59 np0005548731 nova_compute[232433]: 2025-12-06 07:08:59.648 232437 DEBUG nova.virt.hardware [None req-47d07374-e9b5-4d6e-b620-6c73e0ce2e5b 3786fc2472ec43adb27b29bfa497a6a2 6f86ab5a5bf14cb6b789f065cc8ca04a - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec  6 02:08:59 np0005548731 nova_compute[232433]: 2025-12-06 07:08:59.648 232437 DEBUG nova.virt.hardware [None req-47d07374-e9b5-4d6e-b620-6c73e0ce2e5b 3786fc2472ec43adb27b29bfa497a6a2 6f86ab5a5bf14cb6b789f065cc8ca04a - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec  6 02:08:59 np0005548731 nova_compute[232433]: 2025-12-06 07:08:59.648 232437 DEBUG nova.virt.hardware [None req-47d07374-e9b5-4d6e-b620-6c73e0ce2e5b 3786fc2472ec43adb27b29bfa497a6a2 6f86ab5a5bf14cb6b789f065cc8ca04a - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec  6 02:08:59 np0005548731 nova_compute[232433]: 2025-12-06 07:08:59.649 232437 DEBUG nova.virt.hardware [None req-47d07374-e9b5-4d6e-b620-6c73e0ce2e5b 3786fc2472ec43adb27b29bfa497a6a2 6f86ab5a5bf14cb6b789f065cc8ca04a - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec  6 02:08:59 np0005548731 nova_compute[232433]: 2025-12-06 07:08:59.649 232437 DEBUG nova.virt.hardware [None req-47d07374-e9b5-4d6e-b620-6c73e0ce2e5b 3786fc2472ec43adb27b29bfa497a6a2 6f86ab5a5bf14cb6b789f065cc8ca04a - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec  6 02:08:59 np0005548731 nova_compute[232433]: 2025-12-06 07:08:59.652 232437 DEBUG oslo_concurrency.processutils [None req-47d07374-e9b5-4d6e-b620-6c73e0ce2e5b 3786fc2472ec43adb27b29bfa497a6a2 6f86ab5a5bf14cb6b789f065cc8ca04a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  6 02:09:00 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:09:00 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:09:00 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:09:00.040 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:09:00 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  6 02:09:00 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2855757156' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  6 02:09:00 np0005548731 nova_compute[232433]: 2025-12-06 07:09:00.108 232437 DEBUG oslo_concurrency.processutils [None req-47d07374-e9b5-4d6e-b620-6c73e0ce2e5b 3786fc2472ec43adb27b29bfa497a6a2 6f86ab5a5bf14cb6b789f065cc8ca04a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.456s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  6 02:09:00 np0005548731 nova_compute[232433]: 2025-12-06 07:09:00.135 232437 DEBUG nova.storage.rbd_utils [None req-47d07374-e9b5-4d6e-b620-6c73e0ce2e5b 3786fc2472ec43adb27b29bfa497a6a2 6f86ab5a5bf14cb6b789f065cc8ca04a - - default default] rbd image 8d469882-2e42-4956-8009-73a78a8e5c80_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec  6 02:09:00 np0005548731 nova_compute[232433]: 2025-12-06 07:09:00.139 232437 DEBUG oslo_concurrency.processutils [None req-47d07374-e9b5-4d6e-b620-6c73e0ce2e5b 3786fc2472ec43adb27b29bfa497a6a2 6f86ab5a5bf14cb6b789f065cc8ca04a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  6 02:09:00 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:09:00 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:09:00 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:09:00.187 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:09:00 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e191 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:09:00 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  6 02:09:00 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2898849422' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  6 02:09:00 np0005548731 nova_compute[232433]: 2025-12-06 07:09:00.773 232437 DEBUG oslo_concurrency.processutils [None req-47d07374-e9b5-4d6e-b620-6c73e0ce2e5b 3786fc2472ec43adb27b29bfa497a6a2 6f86ab5a5bf14cb6b789f065cc8ca04a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.634s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  6 02:09:00 np0005548731 nova_compute[232433]: 2025-12-06 07:09:00.775 232437 DEBUG nova.objects.instance [None req-47d07374-e9b5-4d6e-b620-6c73e0ce2e5b 3786fc2472ec43adb27b29bfa497a6a2 6f86ab5a5bf14cb6b789f065cc8ca04a - - default default] Lazy-loading 'pci_devices' on Instance uuid 8d469882-2e42-4956-8009-73a78a8e5c80 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec  6 02:09:00 np0005548731 nova_compute[232433]: 2025-12-06 07:09:00.792 232437 DEBUG nova.virt.libvirt.driver [None req-47d07374-e9b5-4d6e-b620-6c73e0ce2e5b 3786fc2472ec43adb27b29bfa497a6a2 6f86ab5a5bf14cb6b789f065cc8ca04a - - default default] [instance: 8d469882-2e42-4956-8009-73a78a8e5c80] End _get_guest_xml xml=<domain type="kvm">
Dec  6 02:09:00 np0005548731 nova_compute[232433]:  <uuid>8d469882-2e42-4956-8009-73a78a8e5c80</uuid>
Dec  6 02:09:00 np0005548731 nova_compute[232433]:  <name>instance-00000030</name>
Dec  6 02:09:00 np0005548731 nova_compute[232433]:  <memory>131072</memory>
Dec  6 02:09:00 np0005548731 nova_compute[232433]:  <vcpu>1</vcpu>
Dec  6 02:09:00 np0005548731 nova_compute[232433]:  <metadata>
Dec  6 02:09:00 np0005548731 nova_compute[232433]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec  6 02:09:00 np0005548731 nova_compute[232433]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec  6 02:09:00 np0005548731 nova_compute[232433]:      <nova:name>tempest-ListImageFiltersTestJSON-server-1023164263</nova:name>
Dec  6 02:09:00 np0005548731 nova_compute[232433]:      <nova:creationTime>2025-12-06 07:08:59</nova:creationTime>
Dec  6 02:09:00 np0005548731 nova_compute[232433]:      <nova:flavor name="m1.nano">
Dec  6 02:09:00 np0005548731 nova_compute[232433]:        <nova:memory>128</nova:memory>
Dec  6 02:09:00 np0005548731 nova_compute[232433]:        <nova:disk>1</nova:disk>
Dec  6 02:09:00 np0005548731 nova_compute[232433]:        <nova:swap>0</nova:swap>
Dec  6 02:09:00 np0005548731 nova_compute[232433]:        <nova:ephemeral>0</nova:ephemeral>
Dec  6 02:09:00 np0005548731 nova_compute[232433]:        <nova:vcpus>1</nova:vcpus>
Dec  6 02:09:00 np0005548731 nova_compute[232433]:      </nova:flavor>
Dec  6 02:09:00 np0005548731 nova_compute[232433]:      <nova:owner>
Dec  6 02:09:00 np0005548731 nova_compute[232433]:        <nova:user uuid="3786fc2472ec43adb27b29bfa497a6a2">tempest-ListImageFiltersTestJSON-1891540223-project-member</nova:user>
Dec  6 02:09:00 np0005548731 nova_compute[232433]:        <nova:project uuid="6f86ab5a5bf14cb6b789f065cc8ca04a">tempest-ListImageFiltersTestJSON-1891540223</nova:project>
Dec  6 02:09:00 np0005548731 nova_compute[232433]:      </nova:owner>
Dec  6 02:09:00 np0005548731 nova_compute[232433]:      <nova:root type="image" uuid="6efab05d-c7cf-4770-a5c3-c806a2739063"/>
Dec  6 02:09:00 np0005548731 nova_compute[232433]:      <nova:ports/>
Dec  6 02:09:00 np0005548731 nova_compute[232433]:    </nova:instance>
Dec  6 02:09:00 np0005548731 nova_compute[232433]:  </metadata>
Dec  6 02:09:00 np0005548731 nova_compute[232433]:  <sysinfo type="smbios">
Dec  6 02:09:00 np0005548731 nova_compute[232433]:    <system>
Dec  6 02:09:00 np0005548731 nova_compute[232433]:      <entry name="manufacturer">RDO</entry>
Dec  6 02:09:00 np0005548731 nova_compute[232433]:      <entry name="product">OpenStack Compute</entry>
Dec  6 02:09:00 np0005548731 nova_compute[232433]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec  6 02:09:00 np0005548731 nova_compute[232433]:      <entry name="serial">8d469882-2e42-4956-8009-73a78a8e5c80</entry>
Dec  6 02:09:00 np0005548731 nova_compute[232433]:      <entry name="uuid">8d469882-2e42-4956-8009-73a78a8e5c80</entry>
Dec  6 02:09:00 np0005548731 nova_compute[232433]:      <entry name="family">Virtual Machine</entry>
Dec  6 02:09:00 np0005548731 nova_compute[232433]:    </system>
Dec  6 02:09:00 np0005548731 nova_compute[232433]:  </sysinfo>
Dec  6 02:09:00 np0005548731 nova_compute[232433]:  <os>
Dec  6 02:09:00 np0005548731 nova_compute[232433]:    <type arch="x86_64" machine="q35">hvm</type>
Dec  6 02:09:00 np0005548731 nova_compute[232433]:    <boot dev="hd"/>
Dec  6 02:09:00 np0005548731 nova_compute[232433]:    <smbios mode="sysinfo"/>
Dec  6 02:09:00 np0005548731 nova_compute[232433]:  </os>
Dec  6 02:09:00 np0005548731 nova_compute[232433]:  <features>
Dec  6 02:09:00 np0005548731 nova_compute[232433]:    <acpi/>
Dec  6 02:09:00 np0005548731 nova_compute[232433]:    <apic/>
Dec  6 02:09:00 np0005548731 nova_compute[232433]:    <vmcoreinfo/>
Dec  6 02:09:00 np0005548731 nova_compute[232433]:  </features>
Dec  6 02:09:00 np0005548731 nova_compute[232433]:  <clock offset="utc">
Dec  6 02:09:00 np0005548731 nova_compute[232433]:    <timer name="pit" tickpolicy="delay"/>
Dec  6 02:09:00 np0005548731 nova_compute[232433]:    <timer name="rtc" tickpolicy="catchup"/>
Dec  6 02:09:00 np0005548731 nova_compute[232433]:    <timer name="hpet" present="no"/>
Dec  6 02:09:00 np0005548731 nova_compute[232433]:  </clock>
Dec  6 02:09:00 np0005548731 nova_compute[232433]:  <cpu mode="custom" match="exact">
Dec  6 02:09:00 np0005548731 nova_compute[232433]:    <model>Nehalem</model>
Dec  6 02:09:00 np0005548731 nova_compute[232433]:    <topology sockets="1" cores="1" threads="1"/>
Dec  6 02:09:00 np0005548731 nova_compute[232433]:  </cpu>
Dec  6 02:09:00 np0005548731 nova_compute[232433]:  <devices>
Dec  6 02:09:00 np0005548731 nova_compute[232433]:    <disk type="network" device="disk">
Dec  6 02:09:00 np0005548731 nova_compute[232433]:      <driver type="raw" cache="none"/>
Dec  6 02:09:00 np0005548731 nova_compute[232433]:      <source protocol="rbd" name="vms/8d469882-2e42-4956-8009-73a78a8e5c80_disk">
Dec  6 02:09:00 np0005548731 nova_compute[232433]:        <host name="192.168.122.100" port="6789"/>
Dec  6 02:09:00 np0005548731 nova_compute[232433]:        <host name="192.168.122.102" port="6789"/>
Dec  6 02:09:00 np0005548731 nova_compute[232433]:        <host name="192.168.122.101" port="6789"/>
Dec  6 02:09:00 np0005548731 nova_compute[232433]:      </source>
Dec  6 02:09:00 np0005548731 nova_compute[232433]:      <auth username="openstack">
Dec  6 02:09:00 np0005548731 nova_compute[232433]:        <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec  6 02:09:00 np0005548731 nova_compute[232433]:      </auth>
Dec  6 02:09:00 np0005548731 nova_compute[232433]:      <target dev="vda" bus="virtio"/>
Dec  6 02:09:00 np0005548731 nova_compute[232433]:    </disk>
Dec  6 02:09:00 np0005548731 nova_compute[232433]:    <disk type="network" device="cdrom">
Dec  6 02:09:00 np0005548731 nova_compute[232433]:      <driver type="raw" cache="none"/>
Dec  6 02:09:00 np0005548731 nova_compute[232433]:      <source protocol="rbd" name="vms/8d469882-2e42-4956-8009-73a78a8e5c80_disk.config">
Dec  6 02:09:00 np0005548731 nova_compute[232433]:        <host name="192.168.122.100" port="6789"/>
Dec  6 02:09:00 np0005548731 nova_compute[232433]:        <host name="192.168.122.102" port="6789"/>
Dec  6 02:09:00 np0005548731 nova_compute[232433]:        <host name="192.168.122.101" port="6789"/>
Dec  6 02:09:00 np0005548731 nova_compute[232433]:      </source>
Dec  6 02:09:00 np0005548731 nova_compute[232433]:      <auth username="openstack">
Dec  6 02:09:00 np0005548731 nova_compute[232433]:        <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec  6 02:09:00 np0005548731 nova_compute[232433]:      </auth>
Dec  6 02:09:00 np0005548731 nova_compute[232433]:      <target dev="sda" bus="sata"/>
Dec  6 02:09:00 np0005548731 nova_compute[232433]:    </disk>
Dec  6 02:09:00 np0005548731 nova_compute[232433]:    <serial type="pty">
Dec  6 02:09:00 np0005548731 nova_compute[232433]:      <log file="/var/lib/nova/instances/8d469882-2e42-4956-8009-73a78a8e5c80/console.log" append="off"/>
Dec  6 02:09:00 np0005548731 nova_compute[232433]:    </serial>
Dec  6 02:09:00 np0005548731 nova_compute[232433]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec  6 02:09:00 np0005548731 nova_compute[232433]:    <video>
Dec  6 02:09:00 np0005548731 nova_compute[232433]:      <model type="virtio"/>
Dec  6 02:09:00 np0005548731 nova_compute[232433]:    </video>
Dec  6 02:09:00 np0005548731 nova_compute[232433]:    <input type="tablet" bus="usb"/>
Dec  6 02:09:00 np0005548731 nova_compute[232433]:    <rng model="virtio">
Dec  6 02:09:00 np0005548731 nova_compute[232433]:      <backend model="random">/dev/urandom</backend>
Dec  6 02:09:00 np0005548731 nova_compute[232433]:    </rng>
Dec  6 02:09:00 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root"/>
Dec  6 02:09:00 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:09:00 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:09:00 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:09:00 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:09:00 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:09:00 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:09:00 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:09:00 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:09:00 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:09:00 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:09:00 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:09:00 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:09:00 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:09:00 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:09:00 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:09:00 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:09:00 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:09:00 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:09:00 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:09:00 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:09:00 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:09:00 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:09:00 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:09:00 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:09:00 np0005548731 nova_compute[232433]:    <controller type="usb" index="0"/>
Dec  6 02:09:00 np0005548731 nova_compute[232433]:    <memballoon model="virtio">
Dec  6 02:09:00 np0005548731 nova_compute[232433]:      <stats period="10"/>
Dec  6 02:09:00 np0005548731 nova_compute[232433]:    </memballoon>
Dec  6 02:09:00 np0005548731 nova_compute[232433]:  </devices>
Dec  6 02:09:00 np0005548731 nova_compute[232433]: </domain>
Dec  6 02:09:00 np0005548731 nova_compute[232433]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec  6 02:09:00 np0005548731 nova_compute[232433]: 2025-12-06 07:09:00.844 232437 DEBUG nova.virt.libvirt.driver [None req-47d07374-e9b5-4d6e-b620-6c73e0ce2e5b 3786fc2472ec43adb27b29bfa497a6a2 6f86ab5a5bf14cb6b789f065cc8ca04a - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  6 02:09:00 np0005548731 nova_compute[232433]: 2025-12-06 07:09:00.844 232437 DEBUG nova.virt.libvirt.driver [None req-47d07374-e9b5-4d6e-b620-6c73e0ce2e5b 3786fc2472ec43adb27b29bfa497a6a2 6f86ab5a5bf14cb6b789f065cc8ca04a - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  6 02:09:00 np0005548731 nova_compute[232433]: 2025-12-06 07:09:00.845 232437 INFO nova.virt.libvirt.driver [None req-47d07374-e9b5-4d6e-b620-6c73e0ce2e5b 3786fc2472ec43adb27b29bfa497a6a2 6f86ab5a5bf14cb6b789f065cc8ca04a - - default default] [instance: 8d469882-2e42-4956-8009-73a78a8e5c80] Using config drive#033[00m
Dec  6 02:09:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:09:00.851 143965 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:09:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:09:00.852 143965 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:09:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:09:00.852 143965 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:09:00 np0005548731 nova_compute[232433]: 2025-12-06 07:09:00.884 232437 DEBUG nova.storage.rbd_utils [None req-47d07374-e9b5-4d6e-b620-6c73e0ce2e5b 3786fc2472ec43adb27b29bfa497a6a2 6f86ab5a5bf14cb6b789f065cc8ca04a - - default default] rbd image 8d469882-2e42-4956-8009-73a78a8e5c80_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:09:00 np0005548731 nova_compute[232433]: 2025-12-06 07:09:00.892 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:09:01 np0005548731 nova_compute[232433]: 2025-12-06 07:09:01.561 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:09:01 np0005548731 nova_compute[232433]: 2025-12-06 07:09:01.904 232437 INFO nova.virt.libvirt.driver [None req-47d07374-e9b5-4d6e-b620-6c73e0ce2e5b 3786fc2472ec43adb27b29bfa497a6a2 6f86ab5a5bf14cb6b789f065cc8ca04a - - default default] [instance: 8d469882-2e42-4956-8009-73a78a8e5c80] Creating config drive at /var/lib/nova/instances/8d469882-2e42-4956-8009-73a78a8e5c80/disk.config#033[00m
Dec  6 02:09:01 np0005548731 nova_compute[232433]: 2025-12-06 07:09:01.909 232437 DEBUG oslo_concurrency.processutils [None req-47d07374-e9b5-4d6e-b620-6c73e0ce2e5b 3786fc2472ec43adb27b29bfa497a6a2 6f86ab5a5bf14cb6b789f065cc8ca04a - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/8d469882-2e42-4956-8009-73a78a8e5c80/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpvxb4kwsf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:09:02 np0005548731 nova_compute[232433]: 2025-12-06 07:09:02.038 232437 DEBUG oslo_concurrency.processutils [None req-47d07374-e9b5-4d6e-b620-6c73e0ce2e5b 3786fc2472ec43adb27b29bfa497a6a2 6f86ab5a5bf14cb6b789f065cc8ca04a - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/8d469882-2e42-4956-8009-73a78a8e5c80/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpvxb4kwsf" returned: 0 in 0.129s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:09:02 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:09:02 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:09:02 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:09:02.042 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:09:02 np0005548731 nova_compute[232433]: 2025-12-06 07:09:02.067 232437 DEBUG nova.storage.rbd_utils [None req-47d07374-e9b5-4d6e-b620-6c73e0ce2e5b 3786fc2472ec43adb27b29bfa497a6a2 6f86ab5a5bf14cb6b789f065cc8ca04a - - default default] rbd image 8d469882-2e42-4956-8009-73a78a8e5c80_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:09:02 np0005548731 nova_compute[232433]: 2025-12-06 07:09:02.072 232437 DEBUG oslo_concurrency.processutils [None req-47d07374-e9b5-4d6e-b620-6c73e0ce2e5b 3786fc2472ec43adb27b29bfa497a6a2 6f86ab5a5bf14cb6b789f065cc8ca04a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/8d469882-2e42-4956-8009-73a78a8e5c80/disk.config 8d469882-2e42-4956-8009-73a78a8e5c80_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:09:02 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:09:02 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:09:02 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:09:02.190 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:09:02 np0005548731 nova_compute[232433]: 2025-12-06 07:09:02.436 232437 DEBUG oslo_concurrency.processutils [None req-47d07374-e9b5-4d6e-b620-6c73e0ce2e5b 3786fc2472ec43adb27b29bfa497a6a2 6f86ab5a5bf14cb6b789f065cc8ca04a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/8d469882-2e42-4956-8009-73a78a8e5c80/disk.config 8d469882-2e42-4956-8009-73a78a8e5c80_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.364s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:09:02 np0005548731 nova_compute[232433]: 2025-12-06 07:09:02.438 232437 INFO nova.virt.libvirt.driver [None req-47d07374-e9b5-4d6e-b620-6c73e0ce2e5b 3786fc2472ec43adb27b29bfa497a6a2 6f86ab5a5bf14cb6b789f065cc8ca04a - - default default] [instance: 8d469882-2e42-4956-8009-73a78a8e5c80] Deleting local config drive /var/lib/nova/instances/8d469882-2e42-4956-8009-73a78a8e5c80/disk.config because it was imported into RBD.#033[00m
Dec  6 02:09:02 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e192 e192: 3 total, 3 up, 3 in
Dec  6 02:09:02 np0005548731 systemd-machined[195355]: New machine qemu-22-instance-00000030.
Dec  6 02:09:02 np0005548731 systemd[1]: Started Virtual Machine qemu-22-instance-00000030.
Dec  6 02:09:02 np0005548731 nova_compute[232433]: 2025-12-06 07:09:02.985 232437 DEBUG nova.virt.driver [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Emitting event <LifecycleEvent: 1765004942.9843442, 8d469882-2e42-4956-8009-73a78a8e5c80 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 02:09:02 np0005548731 nova_compute[232433]: 2025-12-06 07:09:02.986 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 8d469882-2e42-4956-8009-73a78a8e5c80] VM Resumed (Lifecycle Event)#033[00m
Dec  6 02:09:02 np0005548731 nova_compute[232433]: 2025-12-06 07:09:02.989 232437 DEBUG nova.compute.manager [None req-47d07374-e9b5-4d6e-b620-6c73e0ce2e5b 3786fc2472ec43adb27b29bfa497a6a2 6f86ab5a5bf14cb6b789f065cc8ca04a - - default default] [instance: 8d469882-2e42-4956-8009-73a78a8e5c80] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec  6 02:09:02 np0005548731 nova_compute[232433]: 2025-12-06 07:09:02.989 232437 DEBUG nova.virt.libvirt.driver [None req-47d07374-e9b5-4d6e-b620-6c73e0ce2e5b 3786fc2472ec43adb27b29bfa497a6a2 6f86ab5a5bf14cb6b789f065cc8ca04a - - default default] [instance: 8d469882-2e42-4956-8009-73a78a8e5c80] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Dec  6 02:09:02 np0005548731 nova_compute[232433]: 2025-12-06 07:09:02.993 232437 INFO nova.virt.libvirt.driver [-] [instance: 8d469882-2e42-4956-8009-73a78a8e5c80] Instance spawned successfully.#033[00m
Dec  6 02:09:02 np0005548731 nova_compute[232433]: 2025-12-06 07:09:02.993 232437 DEBUG nova.virt.libvirt.driver [None req-47d07374-e9b5-4d6e-b620-6c73e0ce2e5b 3786fc2472ec43adb27b29bfa497a6a2 6f86ab5a5bf14cb6b789f065cc8ca04a - - default default] [instance: 8d469882-2e42-4956-8009-73a78a8e5c80] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Dec  6 02:09:03 np0005548731 nova_compute[232433]: 2025-12-06 07:09:03.021 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 8d469882-2e42-4956-8009-73a78a8e5c80] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:09:03 np0005548731 nova_compute[232433]: 2025-12-06 07:09:03.028 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 8d469882-2e42-4956-8009-73a78a8e5c80] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  6 02:09:03 np0005548731 nova_compute[232433]: 2025-12-06 07:09:03.032 232437 DEBUG nova.virt.libvirt.driver [None req-47d07374-e9b5-4d6e-b620-6c73e0ce2e5b 3786fc2472ec43adb27b29bfa497a6a2 6f86ab5a5bf14cb6b789f065cc8ca04a - - default default] [instance: 8d469882-2e42-4956-8009-73a78a8e5c80] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 02:09:03 np0005548731 nova_compute[232433]: 2025-12-06 07:09:03.033 232437 DEBUG nova.virt.libvirt.driver [None req-47d07374-e9b5-4d6e-b620-6c73e0ce2e5b 3786fc2472ec43adb27b29bfa497a6a2 6f86ab5a5bf14cb6b789f065cc8ca04a - - default default] [instance: 8d469882-2e42-4956-8009-73a78a8e5c80] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 02:09:03 np0005548731 nova_compute[232433]: 2025-12-06 07:09:03.033 232437 DEBUG nova.virt.libvirt.driver [None req-47d07374-e9b5-4d6e-b620-6c73e0ce2e5b 3786fc2472ec43adb27b29bfa497a6a2 6f86ab5a5bf14cb6b789f065cc8ca04a - - default default] [instance: 8d469882-2e42-4956-8009-73a78a8e5c80] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 02:09:03 np0005548731 nova_compute[232433]: 2025-12-06 07:09:03.034 232437 DEBUG nova.virt.libvirt.driver [None req-47d07374-e9b5-4d6e-b620-6c73e0ce2e5b 3786fc2472ec43adb27b29bfa497a6a2 6f86ab5a5bf14cb6b789f065cc8ca04a - - default default] [instance: 8d469882-2e42-4956-8009-73a78a8e5c80] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 02:09:03 np0005548731 nova_compute[232433]: 2025-12-06 07:09:03.035 232437 DEBUG nova.virt.libvirt.driver [None req-47d07374-e9b5-4d6e-b620-6c73e0ce2e5b 3786fc2472ec43adb27b29bfa497a6a2 6f86ab5a5bf14cb6b789f065cc8ca04a - - default default] [instance: 8d469882-2e42-4956-8009-73a78a8e5c80] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 02:09:03 np0005548731 nova_compute[232433]: 2025-12-06 07:09:03.036 232437 DEBUG nova.virt.libvirt.driver [None req-47d07374-e9b5-4d6e-b620-6c73e0ce2e5b 3786fc2472ec43adb27b29bfa497a6a2 6f86ab5a5bf14cb6b789f065cc8ca04a - - default default] [instance: 8d469882-2e42-4956-8009-73a78a8e5c80] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 02:09:03 np0005548731 nova_compute[232433]: 2025-12-06 07:09:03.085 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 8d469882-2e42-4956-8009-73a78a8e5c80] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  6 02:09:03 np0005548731 nova_compute[232433]: 2025-12-06 07:09:03.086 232437 DEBUG nova.virt.driver [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Emitting event <LifecycleEvent: 1765004942.9857516, 8d469882-2e42-4956-8009-73a78a8e5c80 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 02:09:03 np0005548731 nova_compute[232433]: 2025-12-06 07:09:03.086 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 8d469882-2e42-4956-8009-73a78a8e5c80] VM Started (Lifecycle Event)#033[00m
Dec  6 02:09:03 np0005548731 nova_compute[232433]: 2025-12-06 07:09:03.118 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 8d469882-2e42-4956-8009-73a78a8e5c80] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:09:03 np0005548731 nova_compute[232433]: 2025-12-06 07:09:03.122 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 8d469882-2e42-4956-8009-73a78a8e5c80] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  6 02:09:03 np0005548731 nova_compute[232433]: 2025-12-06 07:09:03.127 232437 INFO nova.compute.manager [None req-47d07374-e9b5-4d6e-b620-6c73e0ce2e5b 3786fc2472ec43adb27b29bfa497a6a2 6f86ab5a5bf14cb6b789f065cc8ca04a - - default default] [instance: 8d469882-2e42-4956-8009-73a78a8e5c80] Took 7.19 seconds to spawn the instance on the hypervisor.#033[00m
Dec  6 02:09:03 np0005548731 nova_compute[232433]: 2025-12-06 07:09:03.127 232437 DEBUG nova.compute.manager [None req-47d07374-e9b5-4d6e-b620-6c73e0ce2e5b 3786fc2472ec43adb27b29bfa497a6a2 6f86ab5a5bf14cb6b789f065cc8ca04a - - default default] [instance: 8d469882-2e42-4956-8009-73a78a8e5c80] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:09:03 np0005548731 nova_compute[232433]: 2025-12-06 07:09:03.142 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 8d469882-2e42-4956-8009-73a78a8e5c80] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  6 02:09:03 np0005548731 nova_compute[232433]: 2025-12-06 07:09:03.189 232437 INFO nova.compute.manager [None req-47d07374-e9b5-4d6e-b620-6c73e0ce2e5b 3786fc2472ec43adb27b29bfa497a6a2 6f86ab5a5bf14cb6b789f065cc8ca04a - - default default] [instance: 8d469882-2e42-4956-8009-73a78a8e5c80] Took 8.08 seconds to build instance.#033[00m
Dec  6 02:09:03 np0005548731 nova_compute[232433]: 2025-12-06 07:09:03.206 232437 DEBUG oslo_concurrency.lockutils [None req-47d07374-e9b5-4d6e-b620-6c73e0ce2e5b 3786fc2472ec43adb27b29bfa497a6a2 6f86ab5a5bf14cb6b789f065cc8ca04a - - default default] Lock "8d469882-2e42-4956-8009-73a78a8e5c80" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.169s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:09:03 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e193 e193: 3 total, 3 up, 3 in
Dec  6 02:09:04 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:09:04 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:09:04 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:09:04.044 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:09:04 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:09:04 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 02:09:04 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:09:04.193 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 02:09:05 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e193 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:09:05 np0005548731 nova_compute[232433]: 2025-12-06 07:09:05.840 232437 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765004930.838725, 19dd5782-9191-431a-bc5b-c5c3d41fc07c => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 02:09:05 np0005548731 nova_compute[232433]: 2025-12-06 07:09:05.840 232437 INFO nova.compute.manager [-] [instance: 19dd5782-9191-431a-bc5b-c5c3d41fc07c] VM Stopped (Lifecycle Event)#033[00m
Dec  6 02:09:05 np0005548731 nova_compute[232433]: 2025-12-06 07:09:05.869 232437 DEBUG nova.compute.manager [None req-c05a72c3-9748-4c2f-a642-3af589f3afbb - - - - - -] [instance: 19dd5782-9191-431a-bc5b-c5c3d41fc07c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:09:05 np0005548731 nova_compute[232433]: 2025-12-06 07:09:05.900 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:09:06 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:09:06 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:09:06 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:09:06.046 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:09:06 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:09:06 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:09:06 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:09:06.197 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:09:06 np0005548731 nova_compute[232433]: 2025-12-06 07:09:06.563 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:09:06 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e194 e194: 3 total, 3 up, 3 in
Dec  6 02:09:07 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e195 e195: 3 total, 3 up, 3 in
Dec  6 02:09:08 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:09:08 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 02:09:08 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:09:08.048 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 02:09:08 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:09:08 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:09:08 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:09:08.199 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:09:09 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e196 e196: 3 total, 3 up, 3 in
Dec  6 02:09:10 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:09:10 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:09:10 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:09:10.050 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:09:10 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:09:10 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:09:10 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:09:10.201 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:09:10 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  6 02:09:10 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1388176424' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  6 02:09:10 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  6 02:09:10 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1388176424' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  6 02:09:10 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e196 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:09:10 np0005548731 nova_compute[232433]: 2025-12-06 07:09:10.903 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:09:10 np0005548731 podman[253378]: 2025-12-06 07:09:10.924641931 +0000 UTC m=+0.071207755 container health_status 26c2ce9bdb83c6db5ccad7f5b2a7cb4e9354ab0235ad2f942ea42c8feda2142b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Dec  6 02:09:10 np0005548731 podman[253380]: 2025-12-06 07:09:10.94151179 +0000 UTC m=+0.088351930 container health_status ae4fd08cdec31d6ae2a6b91ffff7deb6ca5a8375cb52ee4b685cc6f25796824e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Dec  6 02:09:10 np0005548731 podman[253379]: 2025-12-06 07:09:10.984410419 +0000 UTC m=+0.131944345 container health_status a118c3becf56cfbcae208065771ebf10a65162bb7b96389971293e534823fb78 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Dec  6 02:09:11 np0005548731 nova_compute[232433]: 2025-12-06 07:09:11.565 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:09:12 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:09:12 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:09:12 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:09:12.052 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:09:12 np0005548731 nova_compute[232433]: 2025-12-06 07:09:12.179 232437 DEBUG nova.compute.manager [None req-fdae895e-3310-48da-a3bd-8ff8cd76d199 3786fc2472ec43adb27b29bfa497a6a2 6f86ab5a5bf14cb6b789f065cc8ca04a - - default default] [instance: 8d469882-2e42-4956-8009-73a78a8e5c80] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:09:12 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:09:12 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:09:12 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:09:12.205 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:09:12 np0005548731 nova_compute[232433]: 2025-12-06 07:09:12.223 232437 INFO nova.compute.manager [None req-fdae895e-3310-48da-a3bd-8ff8cd76d199 3786fc2472ec43adb27b29bfa497a6a2 6f86ab5a5bf14cb6b789f065cc8ca04a - - default default] [instance: 8d469882-2e42-4956-8009-73a78a8e5c80] instance snapshotting#033[00m
Dec  6 02:09:12 np0005548731 nova_compute[232433]: 2025-12-06 07:09:12.502 232437 INFO nova.virt.libvirt.driver [None req-fdae895e-3310-48da-a3bd-8ff8cd76d199 3786fc2472ec43adb27b29bfa497a6a2 6f86ab5a5bf14cb6b789f065cc8ca04a - - default default] [instance: 8d469882-2e42-4956-8009-73a78a8e5c80] Beginning live snapshot process#033[00m
Dec  6 02:09:12 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e197 e197: 3 total, 3 up, 3 in
Dec  6 02:09:12 np0005548731 nova_compute[232433]: 2025-12-06 07:09:12.667 232437 DEBUG nova.virt.libvirt.imagebackend [None req-fdae895e-3310-48da-a3bd-8ff8cd76d199 3786fc2472ec43adb27b29bfa497a6a2 6f86ab5a5bf14cb6b789f065cc8ca04a - - default default] No parent info for 6efab05d-c7cf-4770-a5c3-c806a2739063; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163#033[00m
Dec  6 02:09:12 np0005548731 nova_compute[232433]: 2025-12-06 07:09:12.853 232437 DEBUG nova.storage.rbd_utils [None req-fdae895e-3310-48da-a3bd-8ff8cd76d199 3786fc2472ec43adb27b29bfa497a6a2 6f86ab5a5bf14cb6b789f065cc8ca04a - - default default] creating snapshot(263de0c6ce244c27868fae2284e5d11b) on rbd image(8d469882-2e42-4956-8009-73a78a8e5c80_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Dec  6 02:09:13 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e198 e198: 3 total, 3 up, 3 in
Dec  6 02:09:13 np0005548731 nova_compute[232433]: 2025-12-06 07:09:13.662 232437 DEBUG nova.storage.rbd_utils [None req-fdae895e-3310-48da-a3bd-8ff8cd76d199 3786fc2472ec43adb27b29bfa497a6a2 6f86ab5a5bf14cb6b789f065cc8ca04a - - default default] cloning vms/8d469882-2e42-4956-8009-73a78a8e5c80_disk@263de0c6ce244c27868fae2284e5d11b to images/f0e8ecfe-6e18-4449-a7f5-bd1505cb90f6 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Dec  6 02:09:13 np0005548731 nova_compute[232433]: 2025-12-06 07:09:13.795 232437 DEBUG nova.storage.rbd_utils [None req-fdae895e-3310-48da-a3bd-8ff8cd76d199 3786fc2472ec43adb27b29bfa497a6a2 6f86ab5a5bf14cb6b789f065cc8ca04a - - default default] flattening images/f0e8ecfe-6e18-4449-a7f5-bd1505cb90f6 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m
Dec  6 02:09:14 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:09:14 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 02:09:14 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:09:14.055 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 02:09:14 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:09:14 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:09:14 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:09:14.207 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:09:14 np0005548731 nova_compute[232433]: 2025-12-06 07:09:14.361 232437 DEBUG nova.storage.rbd_utils [None req-fdae895e-3310-48da-a3bd-8ff8cd76d199 3786fc2472ec43adb27b29bfa497a6a2 6f86ab5a5bf14cb6b789f065cc8ca04a - - default default] removing snapshot(263de0c6ce244c27868fae2284e5d11b) on rbd image(8d469882-2e42-4956-8009-73a78a8e5c80_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Dec  6 02:09:14 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e199 e199: 3 total, 3 up, 3 in
Dec  6 02:09:14 np0005548731 nova_compute[232433]: 2025-12-06 07:09:14.973 232437 DEBUG nova.storage.rbd_utils [None req-fdae895e-3310-48da-a3bd-8ff8cd76d199 3786fc2472ec43adb27b29bfa497a6a2 6f86ab5a5bf14cb6b789f065cc8ca04a - - default default] creating snapshot(snap) on rbd image(f0e8ecfe-6e18-4449-a7f5-bd1505cb90f6) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Dec  6 02:09:15 np0005548731 nova_compute[232433]: 2025-12-06 07:09:15.223 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:09:15 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e199 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:09:15 np0005548731 nova_compute[232433]: 2025-12-06 07:09:15.491 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:09:15 np0005548731 nova_compute[232433]: 2025-12-06 07:09:15.905 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:09:15 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e200 e200: 3 total, 3 up, 3 in
Dec  6 02:09:16 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:09:16 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:09:16 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:09:16.059 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:09:16 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:09:16 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:09:16 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:09:16.210 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:09:16 np0005548731 nova_compute[232433]: 2025-12-06 07:09:16.566 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:09:17 np0005548731 nova_compute[232433]: 2025-12-06 07:09:17.797 232437 INFO nova.virt.libvirt.driver [None req-fdae895e-3310-48da-a3bd-8ff8cd76d199 3786fc2472ec43adb27b29bfa497a6a2 6f86ab5a5bf14cb6b789f065cc8ca04a - - default default] [instance: 8d469882-2e42-4956-8009-73a78a8e5c80] Snapshot image upload complete#033[00m
Dec  6 02:09:17 np0005548731 nova_compute[232433]: 2025-12-06 07:09:17.798 232437 INFO nova.compute.manager [None req-fdae895e-3310-48da-a3bd-8ff8cd76d199 3786fc2472ec43adb27b29bfa497a6a2 6f86ab5a5bf14cb6b789f065cc8ca04a - - default default] [instance: 8d469882-2e42-4956-8009-73a78a8e5c80] Took 5.57 seconds to snapshot the instance on the hypervisor.#033[00m
Dec  6 02:09:18 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e201 e201: 3 total, 3 up, 3 in
Dec  6 02:09:18 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:09:18 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:09:18 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:09:18.062 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:09:18 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:09:18 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:09:18 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:09:18.215 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:09:20 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:09:20 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:09:20 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:09:20.064 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:09:20 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:09:20 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:09:20 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:09:20.218 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:09:20 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e201 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:09:20 np0005548731 nova_compute[232433]: 2025-12-06 07:09:20.908 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:09:21 np0005548731 nova_compute[232433]: 2025-12-06 07:09:21.568 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:09:22 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 02:09:22 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 02:09:22 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  6 02:09:22 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 02:09:22 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  6 02:09:22 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:09:22 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 02:09:22 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:09:22.066 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 02:09:22 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:09:22 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:09:22 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:09:22.221 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:09:22 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e202 e202: 3 total, 3 up, 3 in
Dec  6 02:09:24 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:09:24 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:09:24 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:09:24.068 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:09:24 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:09:24 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:09:24 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:09:24.224 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:09:24 np0005548731 ceph-mon[77458]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec  6 02:09:24 np0005548731 ceph-mon[77458]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 2400.0 total, 600.0 interval#012Cumulative writes: 5915 writes, 31K keys, 5915 commit groups, 1.0 writes per commit group, ingest: 0.06 GB, 0.03 MB/s#012Cumulative WAL: 5915 writes, 5915 syncs, 1.00 writes per sync, written: 0.06 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1648 writes, 7985 keys, 1648 commit groups, 1.0 writes per commit group, ingest: 16.49 MB, 0.03 MB/s#012Interval WAL: 1648 writes, 1648 syncs, 1.00 writes per sync, written: 0.02 GB, 0.03 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     19.0      2.06              0.12        17    0.121       0      0       0.0       0.0#012  L6      1/0    9.27 MB   0.0      0.2     0.0      0.1       0.1      0.0       0.0   3.7     62.7     51.9      2.78              0.43        16    0.174     85K   8923       0.0       0.0#012 Sum      1/0    9.27 MB   0.0      0.2     0.0      0.1       0.2      0.0       0.0   4.7     36.0     37.9      4.84              0.55        33    0.147     85K   8923       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   4.7    124.7    128.0      0.36              0.15         8    0.045     25K   2540       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.2     0.0      0.1       0.1      0.0       0.0   0.0     62.7     51.9      2.78              0.43        16    0.174     85K   8923       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     19.1      2.06              0.12        16    0.129       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.6      0.00              0.00         1    0.003       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 2400.0 total, 600.0 interval#012Flush(GB): cumulative 0.038, interval 0.010#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.18 GB write, 0.08 MB/s write, 0.17 GB read, 0.07 MB/s read, 4.8 seconds#012Interval compaction: 0.04 GB write, 0.08 MB/s write, 0.04 GB read, 0.07 MB/s read, 0.4 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x5619171151f0#2 capacity: 304.00 MB usage: 18.21 MB table_size: 0 occupancy: 18446744073709551615 collections: 5 last_copies: 0 last_secs: 0.000167 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(1027,17.55 MB,5.77179%) FilterBlock(33,246.48 KB,0.0791801%) IndexBlock(33,434.86 KB,0.139693%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Dec  6 02:09:25 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e203 e203: 3 total, 3 up, 3 in
Dec  6 02:09:25 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e203 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:09:25 np0005548731 nova_compute[232433]: 2025-12-06 07:09:25.912 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:09:26 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:09:26 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 02:09:26 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:09:26.070 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 02:09:26 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e204 e204: 3 total, 3 up, 3 in
Dec  6 02:09:26 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:09:26 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:09:26 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:09:26.226 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:09:26 np0005548731 nova_compute[232433]: 2025-12-06 07:09:26.570 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:09:28 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:09:28 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:09:28 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:09:28.072 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:09:28 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:09:28 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:09:28 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:09:28.230 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:09:28 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 02:09:28 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 02:09:28 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e205 e205: 3 total, 3 up, 3 in
Dec  6 02:09:29 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:09:29.988 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=17, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:ec:b3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '32:72:e7:89:e0:7d'}, ipsec=False) old=SB_Global(nb_cfg=16) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 02:09:29 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:09:29.989 143965 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec  6 02:09:29 np0005548731 nova_compute[232433]: 2025-12-06 07:09:29.989 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:09:30 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:09:30 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:09:30 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:09:30.074 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:09:30 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:09:30 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:09:30 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:09:30.232 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:09:30 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e205 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:09:30 np0005548731 nova_compute[232433]: 2025-12-06 07:09:30.915 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:09:31 np0005548731 nova_compute[232433]: 2025-12-06 07:09:31.572 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:09:32 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:09:32 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:09:32 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:09:32.076 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:09:32 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:09:32 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:09:32 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:09:32.235 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:09:32 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e206 e206: 3 total, 3 up, 3 in
Dec  6 02:09:33 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:09:33.991 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=9f96b960-b4f2-40bd-ae99-08121f5e8b78, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '17'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:09:34 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:09:34 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:09:34 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:09:34.078 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:09:34 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:09:34 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:09:34 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:09:34.239 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:09:35 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e206 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:09:35 np0005548731 nova_compute[232433]: 2025-12-06 07:09:35.918 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:09:36 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:09:36 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:09:36 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:09:36.080 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:09:36 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:09:36 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:09:36 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:09:36.241 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:09:36 np0005548731 nova_compute[232433]: 2025-12-06 07:09:36.573 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:09:38 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:09:38 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:09:38 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:09:38.082 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:09:38 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e207 e207: 3 total, 3 up, 3 in
Dec  6 02:09:38 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:09:38 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:09:38 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:09:38.244 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:09:40 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:09:40 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:09:40 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:09:40.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:09:40 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:09:40 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:09:40 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:09:40.246 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:09:40 np0005548731 nova_compute[232433]: 2025-12-06 07:09:40.921 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:09:41 np0005548731 nova_compute[232433]: 2025-12-06 07:09:41.575 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:09:41 np0005548731 podman[253881]: 2025-12-06 07:09:41.908525341 +0000 UTC m=+0.055094525 container health_status 26c2ce9bdb83c6db5ccad7f5b2a7cb4e9354ab0235ad2f942ea42c8feda2142b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  6 02:09:41 np0005548731 podman[253882]: 2025-12-06 07:09:41.935203646 +0000 UTC m=+0.081863312 container health_status a118c3becf56cfbcae208065771ebf10a65162bb7b96389971293e534823fb78 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec  6 02:09:41 np0005548731 podman[253883]: 2025-12-06 07:09:41.938045685 +0000 UTC m=+0.083750349 container health_status ae4fd08cdec31d6ae2a6b91ffff7deb6ca5a8375cb52ee4b685cc6f25796824e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_id=multipathd)
Dec  6 02:09:42 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:09:42 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:09:42 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:09:42.085 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:09:42 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:09:42 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:09:42 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:09:42.249 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:09:42 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e207 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:09:44 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:09:44 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:09:44 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:09:44.087 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:09:44 np0005548731 nova_compute[232433]: 2025-12-06 07:09:44.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:09:44 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:09:44 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:09:44 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:09:44.252 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:09:45 np0005548731 nova_compute[232433]: 2025-12-06 07:09:45.099 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:09:45 np0005548731 nova_compute[232433]: 2025-12-06 07:09:45.924 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:09:46 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:09:46 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:09:46 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:09:46.089 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:09:46 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:09:46 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:09:46 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:09:46.256 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:09:46 np0005548731 nova_compute[232433]: 2025-12-06 07:09:46.576 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:09:47 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e207 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:09:48 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:09:48 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:09:48 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:09:48.091 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:09:48 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:09:48 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:09:48 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:09:48.258 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:09:49 np0005548731 nova_compute[232433]: 2025-12-06 07:09:49.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:09:49 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e208 e208: 3 total, 3 up, 3 in
Dec  6 02:09:50 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:09:50 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:09:50 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:09:50.093 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:09:50 np0005548731 nova_compute[232433]: 2025-12-06 07:09:50.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:09:50 np0005548731 nova_compute[232433]: 2025-12-06 07:09:50.104 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  6 02:09:50 np0005548731 nova_compute[232433]: 2025-12-06 07:09:50.104 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  6 02:09:50 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:09:50 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:09:50 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:09:50.262 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:09:50 np0005548731 nova_compute[232433]: 2025-12-06 07:09:50.730 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Acquiring lock "refresh_cache-8d469882-2e42-4956-8009-73a78a8e5c80" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 02:09:50 np0005548731 nova_compute[232433]: 2025-12-06 07:09:50.730 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Acquired lock "refresh_cache-8d469882-2e42-4956-8009-73a78a8e5c80" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 02:09:50 np0005548731 nova_compute[232433]: 2025-12-06 07:09:50.730 232437 DEBUG nova.network.neutron [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] [instance: 8d469882-2e42-4956-8009-73a78a8e5c80] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Dec  6 02:09:50 np0005548731 nova_compute[232433]: 2025-12-06 07:09:50.731 232437 DEBUG nova.objects.instance [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lazy-loading 'info_cache' on Instance uuid 8d469882-2e42-4956-8009-73a78a8e5c80 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:09:50 np0005548731 nova_compute[232433]: 2025-12-06 07:09:50.926 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:09:51 np0005548731 nova_compute[232433]: 2025-12-06 07:09:51.148 232437 DEBUG nova.network.neutron [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] [instance: 8d469882-2e42-4956-8009-73a78a8e5c80] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec  6 02:09:51 np0005548731 nova_compute[232433]: 2025-12-06 07:09:51.578 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:09:51 np0005548731 nova_compute[232433]: 2025-12-06 07:09:51.705 232437 DEBUG nova.network.neutron [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] [instance: 8d469882-2e42-4956-8009-73a78a8e5c80] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 02:09:51 np0005548731 nova_compute[232433]: 2025-12-06 07:09:51.734 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Releasing lock "refresh_cache-8d469882-2e42-4956-8009-73a78a8e5c80" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 02:09:51 np0005548731 nova_compute[232433]: 2025-12-06 07:09:51.735 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] [instance: 8d469882-2e42-4956-8009-73a78a8e5c80] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Dec  6 02:09:51 np0005548731 nova_compute[232433]: 2025-12-06 07:09:51.735 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:09:51 np0005548731 nova_compute[232433]: 2025-12-06 07:09:51.735 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:09:51 np0005548731 nova_compute[232433]: 2025-12-06 07:09:51.736 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:09:51 np0005548731 nova_compute[232433]: 2025-12-06 07:09:51.736 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  6 02:09:52 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:09:52 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:09:52 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:09:52.095 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:09:52 np0005548731 nova_compute[232433]: 2025-12-06 07:09:52.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:09:52 np0005548731 nova_compute[232433]: 2025-12-06 07:09:52.105 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:09:52 np0005548731 nova_compute[232433]: 2025-12-06 07:09:52.130 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:09:52 np0005548731 nova_compute[232433]: 2025-12-06 07:09:52.131 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:09:52 np0005548731 nova_compute[232433]: 2025-12-06 07:09:52.131 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:09:52 np0005548731 nova_compute[232433]: 2025-12-06 07:09:52.131 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  6 02:09:52 np0005548731 nova_compute[232433]: 2025-12-06 07:09:52.132 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:09:52 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e208 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:09:52 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:09:52 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:09:52 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:09:52.265 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:09:52 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 02:09:52 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/599168457' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 02:09:52 np0005548731 nova_compute[232433]: 2025-12-06 07:09:52.718 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.586s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:09:52 np0005548731 nova_compute[232433]: 2025-12-06 07:09:52.797 232437 DEBUG nova.virt.libvirt.driver [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] skipping disk for instance-00000030 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  6 02:09:52 np0005548731 nova_compute[232433]: 2025-12-06 07:09:52.797 232437 DEBUG nova.virt.libvirt.driver [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] skipping disk for instance-00000030 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  6 02:09:52 np0005548731 nova_compute[232433]: 2025-12-06 07:09:52.938 232437 WARNING nova.virt.libvirt.driver [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  6 02:09:52 np0005548731 nova_compute[232433]: 2025-12-06 07:09:52.940 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4559MB free_disk=20.83084487915039GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, 
"label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  6 02:09:52 np0005548731 nova_compute[232433]: 2025-12-06 07:09:52.940 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:09:52 np0005548731 nova_compute[232433]: 2025-12-06 07:09:52.940 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:09:53 np0005548731 nova_compute[232433]: 2025-12-06 07:09:53.046 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Instance 8d469882-2e42-4956-8009-73a78a8e5c80 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  6 02:09:53 np0005548731 nova_compute[232433]: 2025-12-06 07:09:53.046 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  6 02:09:53 np0005548731 nova_compute[232433]: 2025-12-06 07:09:53.046 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  6 02:09:53 np0005548731 nova_compute[232433]: 2025-12-06 07:09:53.169 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:09:53 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e209 e209: 3 total, 3 up, 3 in
Dec  6 02:09:53 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 02:09:53 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2339433088' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 02:09:53 np0005548731 nova_compute[232433]: 2025-12-06 07:09:53.776 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.607s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:09:53 np0005548731 nova_compute[232433]: 2025-12-06 07:09:53.784 232437 DEBUG nova.compute.provider_tree [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Inventory has not changed in ProviderTree for provider: 6d00757a-082f-486d-ae84-869a2ba2e6e7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  6 02:09:54 np0005548731 nova_compute[232433]: 2025-12-06 07:09:54.010 232437 DEBUG nova.scheduler.client.report [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Inventory has not changed for provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  6 02:09:54 np0005548731 nova_compute[232433]: 2025-12-06 07:09:54.075 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  6 02:09:54 np0005548731 nova_compute[232433]: 2025-12-06 07:09:54.075 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.135s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:09:54 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:09:54 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:09:54 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:09:54.097 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:09:54 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:09:54 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:09:54 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:09:54.267 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:09:55 np0005548731 nova_compute[232433]: 2025-12-06 07:09:55.929 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:09:56 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:09:56 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:09:56 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:09:56.100 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:09:56 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:09:56 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:09:56 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:09:56.270 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:09:56 np0005548731 nova_compute[232433]: 2025-12-06 07:09:56.580 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:09:58 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e210 e210: 3 total, 3 up, 3 in
Dec  6 02:09:58 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:09:58 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 02:09:58 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:09:58.102 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 02:09:58 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:09:58 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:09:58 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:09:58.273 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:09:58 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e210 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:09:59 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e211 e211: 3 total, 3 up, 3 in
Dec  6 02:10:00 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:10:00 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:10:00 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:10:00.105 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:10:00 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:10:00 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:10:00 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:10:00.277 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:10:00 np0005548731 nova_compute[232433]: 2025-12-06 07:10:00.429 232437 DEBUG oslo_concurrency.lockutils [None req-9da39169-c12d-45ac-a875-726fb13d78ed 3786fc2472ec43adb27b29bfa497a6a2 6f86ab5a5bf14cb6b789f065cc8ca04a - - default default] Acquiring lock "8d469882-2e42-4956-8009-73a78a8e5c80" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:10:00 np0005548731 nova_compute[232433]: 2025-12-06 07:10:00.430 232437 DEBUG oslo_concurrency.lockutils [None req-9da39169-c12d-45ac-a875-726fb13d78ed 3786fc2472ec43adb27b29bfa497a6a2 6f86ab5a5bf14cb6b789f065cc8ca04a - - default default] Lock "8d469882-2e42-4956-8009-73a78a8e5c80" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:10:00 np0005548731 nova_compute[232433]: 2025-12-06 07:10:00.430 232437 DEBUG oslo_concurrency.lockutils [None req-9da39169-c12d-45ac-a875-726fb13d78ed 3786fc2472ec43adb27b29bfa497a6a2 6f86ab5a5bf14cb6b789f065cc8ca04a - - default default] Acquiring lock "8d469882-2e42-4956-8009-73a78a8e5c80-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:10:00 np0005548731 nova_compute[232433]: 2025-12-06 07:10:00.430 232437 DEBUG oslo_concurrency.lockutils [None req-9da39169-c12d-45ac-a875-726fb13d78ed 3786fc2472ec43adb27b29bfa497a6a2 6f86ab5a5bf14cb6b789f065cc8ca04a - - default default] Lock "8d469882-2e42-4956-8009-73a78a8e5c80-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:10:00 np0005548731 nova_compute[232433]: 2025-12-06 07:10:00.430 232437 DEBUG oslo_concurrency.lockutils [None req-9da39169-c12d-45ac-a875-726fb13d78ed 3786fc2472ec43adb27b29bfa497a6a2 6f86ab5a5bf14cb6b789f065cc8ca04a - - default default] Lock "8d469882-2e42-4956-8009-73a78a8e5c80-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:10:00 np0005548731 nova_compute[232433]: 2025-12-06 07:10:00.432 232437 INFO nova.compute.manager [None req-9da39169-c12d-45ac-a875-726fb13d78ed 3786fc2472ec43adb27b29bfa497a6a2 6f86ab5a5bf14cb6b789f065cc8ca04a - - default default] [instance: 8d469882-2e42-4956-8009-73a78a8e5c80] Terminating instance#033[00m
Dec  6 02:10:00 np0005548731 nova_compute[232433]: 2025-12-06 07:10:00.432 232437 DEBUG oslo_concurrency.lockutils [None req-9da39169-c12d-45ac-a875-726fb13d78ed 3786fc2472ec43adb27b29bfa497a6a2 6f86ab5a5bf14cb6b789f065cc8ca04a - - default default] Acquiring lock "refresh_cache-8d469882-2e42-4956-8009-73a78a8e5c80" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 02:10:00 np0005548731 nova_compute[232433]: 2025-12-06 07:10:00.433 232437 DEBUG oslo_concurrency.lockutils [None req-9da39169-c12d-45ac-a875-726fb13d78ed 3786fc2472ec43adb27b29bfa497a6a2 6f86ab5a5bf14cb6b789f065cc8ca04a - - default default] Acquired lock "refresh_cache-8d469882-2e42-4956-8009-73a78a8e5c80" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 02:10:00 np0005548731 nova_compute[232433]: 2025-12-06 07:10:00.433 232437 DEBUG nova.network.neutron [None req-9da39169-c12d-45ac-a875-726fb13d78ed 3786fc2472ec43adb27b29bfa497a6a2 6f86ab5a5bf14cb6b789f065cc8ca04a - - default default] [instance: 8d469882-2e42-4956-8009-73a78a8e5c80] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec  6 02:10:00 np0005548731 nova_compute[232433]: 2025-12-06 07:10:00.780 232437 DEBUG nova.network.neutron [None req-9da39169-c12d-45ac-a875-726fb13d78ed 3786fc2472ec43adb27b29bfa497a6a2 6f86ab5a5bf14cb6b789f065cc8ca04a - - default default] [instance: 8d469882-2e42-4956-8009-73a78a8e5c80] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec  6 02:10:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:10:00.853 143965 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:10:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:10:00.853 143965 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:10:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:10:00.853 143965 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:10:00 np0005548731 nova_compute[232433]: 2025-12-06 07:10:00.978 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:10:01 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  6 02:10:01 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1088743501' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  6 02:10:01 np0005548731 nova_compute[232433]: 2025-12-06 07:10:01.581 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:10:01 np0005548731 nova_compute[232433]: 2025-12-06 07:10:01.640 232437 DEBUG nova.network.neutron [None req-9da39169-c12d-45ac-a875-726fb13d78ed 3786fc2472ec43adb27b29bfa497a6a2 6f86ab5a5bf14cb6b789f065cc8ca04a - - default default] [instance: 8d469882-2e42-4956-8009-73a78a8e5c80] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 02:10:01 np0005548731 ceph-mon[77458]: overall HEALTH_OK
Dec  6 02:10:01 np0005548731 nova_compute[232433]: 2025-12-06 07:10:01.675 232437 DEBUG oslo_concurrency.lockutils [None req-9da39169-c12d-45ac-a875-726fb13d78ed 3786fc2472ec43adb27b29bfa497a6a2 6f86ab5a5bf14cb6b789f065cc8ca04a - - default default] Releasing lock "refresh_cache-8d469882-2e42-4956-8009-73a78a8e5c80" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 02:10:01 np0005548731 nova_compute[232433]: 2025-12-06 07:10:01.676 232437 DEBUG nova.compute.manager [None req-9da39169-c12d-45ac-a875-726fb13d78ed 3786fc2472ec43adb27b29bfa497a6a2 6f86ab5a5bf14cb6b789f065cc8ca04a - - default default] [instance: 8d469882-2e42-4956-8009-73a78a8e5c80] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec  6 02:10:02 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:10:02 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:10:02 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:10:02.107 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:10:02 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  6 02:10:02 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1088743501' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  6 02:10:02 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:10:02 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:10:02 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:10:02.280 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:10:02 np0005548731 systemd[1]: machine-qemu\x2d22\x2dinstance\x2d00000030.scope: Deactivated successfully.
Dec  6 02:10:02 np0005548731 systemd[1]: machine-qemu\x2d22\x2dinstance\x2d00000030.scope: Consumed 15.974s CPU time.
Dec  6 02:10:02 np0005548731 systemd-machined[195355]: Machine qemu-22-instance-00000030 terminated.
Dec  6 02:10:02 np0005548731 nova_compute[232433]: 2025-12-06 07:10:02.502 232437 INFO nova.virt.libvirt.driver [-] [instance: 8d469882-2e42-4956-8009-73a78a8e5c80] Instance destroyed successfully.#033[00m
Dec  6 02:10:02 np0005548731 nova_compute[232433]: 2025-12-06 07:10:02.503 232437 DEBUG nova.objects.instance [None req-9da39169-c12d-45ac-a875-726fb13d78ed 3786fc2472ec43adb27b29bfa497a6a2 6f86ab5a5bf14cb6b789f065cc8ca04a - - default default] Lazy-loading 'resources' on Instance uuid 8d469882-2e42-4956-8009-73a78a8e5c80 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:10:03 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e212 e212: 3 total, 3 up, 3 in
Dec  6 02:10:03 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e212 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:10:04 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:10:04 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:10:04 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:10:04.109 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:10:04 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:10:04 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:10:04 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:10:04.282 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:10:05 np0005548731 nova_compute[232433]: 2025-12-06 07:10:05.980 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:10:06 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:10:06 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:10:06 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:10:06.111 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:10:06 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:10:06 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:10:06 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:10:06.285 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:10:06 np0005548731 nova_compute[232433]: 2025-12-06 07:10:06.562 232437 INFO nova.virt.libvirt.driver [None req-9da39169-c12d-45ac-a875-726fb13d78ed 3786fc2472ec43adb27b29bfa497a6a2 6f86ab5a5bf14cb6b789f065cc8ca04a - - default default] [instance: 8d469882-2e42-4956-8009-73a78a8e5c80] Deleting instance files /var/lib/nova/instances/8d469882-2e42-4956-8009-73a78a8e5c80_del#033[00m
Dec  6 02:10:06 np0005548731 nova_compute[232433]: 2025-12-06 07:10:06.563 232437 INFO nova.virt.libvirt.driver [None req-9da39169-c12d-45ac-a875-726fb13d78ed 3786fc2472ec43adb27b29bfa497a6a2 6f86ab5a5bf14cb6b789f065cc8ca04a - - default default] [instance: 8d469882-2e42-4956-8009-73a78a8e5c80] Deletion of /var/lib/nova/instances/8d469882-2e42-4956-8009-73a78a8e5c80_del complete#033[00m
Dec  6 02:10:06 np0005548731 nova_compute[232433]: 2025-12-06 07:10:06.638 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:10:06 np0005548731 nova_compute[232433]: 2025-12-06 07:10:06.642 232437 INFO nova.compute.manager [None req-9da39169-c12d-45ac-a875-726fb13d78ed 3786fc2472ec43adb27b29bfa497a6a2 6f86ab5a5bf14cb6b789f065cc8ca04a - - default default] [instance: 8d469882-2e42-4956-8009-73a78a8e5c80] Took 4.97 seconds to destroy the instance on the hypervisor.#033[00m
Dec  6 02:10:06 np0005548731 nova_compute[232433]: 2025-12-06 07:10:06.642 232437 DEBUG oslo.service.loopingcall [None req-9da39169-c12d-45ac-a875-726fb13d78ed 3786fc2472ec43adb27b29bfa497a6a2 6f86ab5a5bf14cb6b789f065cc8ca04a - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Dec  6 02:10:06 np0005548731 nova_compute[232433]: 2025-12-06 07:10:06.643 232437 DEBUG nova.compute.manager [-] [instance: 8d469882-2e42-4956-8009-73a78a8e5c80] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Dec  6 02:10:06 np0005548731 nova_compute[232433]: 2025-12-06 07:10:06.643 232437 DEBUG nova.network.neutron [-] [instance: 8d469882-2e42-4956-8009-73a78a8e5c80] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Dec  6 02:10:06 np0005548731 nova_compute[232433]: 2025-12-06 07:10:06.982 232437 DEBUG nova.network.neutron [-] [instance: 8d469882-2e42-4956-8009-73a78a8e5c80] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec  6 02:10:07 np0005548731 nova_compute[232433]: 2025-12-06 07:10:07.012 232437 DEBUG nova.network.neutron [-] [instance: 8d469882-2e42-4956-8009-73a78a8e5c80] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 02:10:07 np0005548731 nova_compute[232433]: 2025-12-06 07:10:07.040 232437 INFO nova.compute.manager [-] [instance: 8d469882-2e42-4956-8009-73a78a8e5c80] Took 0.40 seconds to deallocate network for instance.#033[00m
Dec  6 02:10:07 np0005548731 nova_compute[232433]: 2025-12-06 07:10:07.104 232437 DEBUG oslo_concurrency.lockutils [None req-9da39169-c12d-45ac-a875-726fb13d78ed 3786fc2472ec43adb27b29bfa497a6a2 6f86ab5a5bf14cb6b789f065cc8ca04a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:10:07 np0005548731 nova_compute[232433]: 2025-12-06 07:10:07.105 232437 DEBUG oslo_concurrency.lockutils [None req-9da39169-c12d-45ac-a875-726fb13d78ed 3786fc2472ec43adb27b29bfa497a6a2 6f86ab5a5bf14cb6b789f065cc8ca04a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:10:07 np0005548731 nova_compute[232433]: 2025-12-06 07:10:07.174 232437 DEBUG oslo_concurrency.processutils [None req-9da39169-c12d-45ac-a875-726fb13d78ed 3786fc2472ec43adb27b29bfa497a6a2 6f86ab5a5bf14cb6b789f065cc8ca04a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:10:07 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 02:10:07 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3796797556' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 02:10:07 np0005548731 nova_compute[232433]: 2025-12-06 07:10:07.847 232437 DEBUG oslo_concurrency.processutils [None req-9da39169-c12d-45ac-a875-726fb13d78ed 3786fc2472ec43adb27b29bfa497a6a2 6f86ab5a5bf14cb6b789f065cc8ca04a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.673s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:10:07 np0005548731 nova_compute[232433]: 2025-12-06 07:10:07.856 232437 DEBUG nova.compute.provider_tree [None req-9da39169-c12d-45ac-a875-726fb13d78ed 3786fc2472ec43adb27b29bfa497a6a2 6f86ab5a5bf14cb6b789f065cc8ca04a - - default default] Inventory has not changed in ProviderTree for provider: 6d00757a-082f-486d-ae84-869a2ba2e6e7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  6 02:10:07 np0005548731 nova_compute[232433]: 2025-12-06 07:10:07.884 232437 DEBUG nova.scheduler.client.report [None req-9da39169-c12d-45ac-a875-726fb13d78ed 3786fc2472ec43adb27b29bfa497a6a2 6f86ab5a5bf14cb6b789f065cc8ca04a - - default default] Inventory has not changed for provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  6 02:10:07 np0005548731 nova_compute[232433]: 2025-12-06 07:10:07.923 232437 DEBUG oslo_concurrency.lockutils [None req-9da39169-c12d-45ac-a875-726fb13d78ed 3786fc2472ec43adb27b29bfa497a6a2 6f86ab5a5bf14cb6b789f065cc8ca04a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.818s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:10:07 np0005548731 nova_compute[232433]: 2025-12-06 07:10:07.976 232437 INFO nova.scheduler.client.report [None req-9da39169-c12d-45ac-a875-726fb13d78ed 3786fc2472ec43adb27b29bfa497a6a2 6f86ab5a5bf14cb6b789f065cc8ca04a - - default default] Deleted allocations for instance 8d469882-2e42-4956-8009-73a78a8e5c80#033[00m
Dec  6 02:10:08 np0005548731 nova_compute[232433]: 2025-12-06 07:10:08.048 232437 DEBUG oslo_concurrency.lockutils [None req-9da39169-c12d-45ac-a875-726fb13d78ed 3786fc2472ec43adb27b29bfa497a6a2 6f86ab5a5bf14cb6b789f065cc8ca04a - - default default] Lock "8d469882-2e42-4956-8009-73a78a8e5c80" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 7.618s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:10:08 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:10:08 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 02:10:08 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:10:08.114 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 02:10:08 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:10:08 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 02:10:08 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:10:08.288 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 02:10:08 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e212 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:10:09 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e213 e213: 3 total, 3 up, 3 in
Dec  6 02:10:10 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:10:10 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:10:10 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:10:10.117 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:10:10 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:10:10 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:10:10 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:10:10.291 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:10:10 np0005548731 nova_compute[232433]: 2025-12-06 07:10:10.983 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:10:11 np0005548731 nova_compute[232433]: 2025-12-06 07:10:11.641 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:10:12 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:10:12 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:10:12 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:10:12.119 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:10:12 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:10:12 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:10:12 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:10:12.294 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:10:12 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:10:12.354 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=18, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:ec:b3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '32:72:e7:89:e0:7d'}, ipsec=False) old=SB_Global(nb_cfg=17) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 02:10:12 np0005548731 nova_compute[232433]: 2025-12-06 07:10:12.354 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:10:12 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:10:12.355 143965 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec  6 02:10:12 np0005548731 podman[254094]: 2025-12-06 07:10:12.904710591 +0000 UTC m=+0.061184603 container health_status 26c2ce9bdb83c6db5ccad7f5b2a7cb4e9354ab0235ad2f942ea42c8feda2142b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible)
Dec  6 02:10:12 np0005548731 podman[254096]: 2025-12-06 07:10:12.925052154 +0000 UTC m=+0.066527952 container health_status ae4fd08cdec31d6ae2a6b91ffff7deb6ca5a8375cb52ee4b685cc6f25796824e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3)
Dec  6 02:10:12 np0005548731 podman[254095]: 2025-12-06 07:10:12.940282183 +0000 UTC m=+0.094250554 container health_status a118c3becf56cfbcae208065771ebf10a65162bb7b96389971293e534823fb78 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Dec  6 02:10:13 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:10:14 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:10:14 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:10:14 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:10:14.121 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:10:14 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:10:14 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:10:14 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:10:14.297 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:10:15 np0005548731 nova_compute[232433]: 2025-12-06 07:10:15.986 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:10:16 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:10:16 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:10:16 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:10:16.123 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:10:16 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:10:16 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:10:16 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:10:16.300 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:10:16 np0005548731 nova_compute[232433]: 2025-12-06 07:10:16.644 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:10:17 np0005548731 nova_compute[232433]: 2025-12-06 07:10:17.501 232437 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765005002.4997673, 8d469882-2e42-4956-8009-73a78a8e5c80 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 02:10:17 np0005548731 nova_compute[232433]: 2025-12-06 07:10:17.502 232437 INFO nova.compute.manager [-] [instance: 8d469882-2e42-4956-8009-73a78a8e5c80] VM Stopped (Lifecycle Event)#033[00m
Dec  6 02:10:17 np0005548731 nova_compute[232433]: 2025-12-06 07:10:17.573 232437 DEBUG nova.compute.manager [None req-dc3841b2-f7de-43e6-b7ba-c552dfb983d4 - - - - - -] [instance: 8d469882-2e42-4956-8009-73a78a8e5c80] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:10:18 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:10:18 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:10:18 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:10:18.125 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:10:18 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:10:18 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:10:18 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:10:18.303 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:10:20 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:10:20 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.002000047s ======
Dec  6 02:10:20 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:10:20.127 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000047s
Dec  6 02:10:20 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:10:20 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:10:20 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:10:20 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:10:20.306 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:10:20 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:10:20.358 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=9f96b960-b4f2-40bd-ae99-08121f5e8b78, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '18'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:10:20 np0005548731 nova_compute[232433]: 2025-12-06 07:10:20.988 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:10:21 np0005548731 nova_compute[232433]: 2025-12-06 07:10:21.646 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:10:21 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e214 e214: 3 total, 3 up, 3 in
Dec  6 02:10:22 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:10:22 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 02:10:22 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:10:22.130 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 02:10:22 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:10:22 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:10:22 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:10:22.309 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:10:24 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:10:24 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:10:24 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:10:24.132 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:10:24 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:10:24 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:10:24 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:10:24.311 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:10:25 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e215 e215: 3 total, 3 up, 3 in
Dec  6 02:10:25 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e215 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:10:25 np0005548731 nova_compute[232433]: 2025-12-06 07:10:25.990 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:10:26 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:10:26 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:10:26 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:10:26.134 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:10:26 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:10:26 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:10:26 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:10:26.313 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:10:26 np0005548731 nova_compute[232433]: 2025-12-06 07:10:26.647 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:10:27 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e216 e216: 3 total, 3 up, 3 in
Dec  6 02:10:28 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:10:28 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:10:28 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:10:28.136 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:10:28 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:10:28 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:10:28 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:10:28.315 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:10:29 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 02:10:29 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 02:10:29 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Dec  6 02:10:29 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Dec  6 02:10:29 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  6 02:10:29 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 02:10:29 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  6 02:10:30 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:10:30 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:10:30 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:10:30.138 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:10:30 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e216 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:10:30 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:10:30 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:10:30 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:10:30.318 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:10:30 np0005548731 nova_compute[232433]: 2025-12-06 07:10:30.994 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:10:31 np0005548731 ceph-osd[80147]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec  6 02:10:31 np0005548731 ceph-osd[80147]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 2402.4 total, 600.0 interval#012Cumulative writes: 22K writes, 92K keys, 22K commit groups, 1.0 writes per commit group, ingest: 0.09 GB, 0.04 MB/s#012Cumulative WAL: 22K writes, 7340 syncs, 3.04 writes per sync, written: 0.09 GB, 0.04 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 12K writes, 53K keys, 12K commit groups, 1.0 writes per commit group, ingest: 54.84 MB, 0.09 MB/s#012Interval WAL: 12K writes, 4869 syncs, 2.60 writes per sync, written: 0.05 GB, 0.09 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
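The `#012` and `#033` sequences in the RocksDB stats dump above are rsyslog's escaping of control characters as `#` plus three octal digits (`#012` is newline, `#033` is ESC). A small sketch to undo that escaping so the multi-line dump reads as the daemon emitted it (note it will also rewrite any literal `#` followed by three octal digits, which is acceptable for ad-hoc triage):

```python
import re

# rsyslog renders control characters as '#' + three octal digits.
OCTAL_ESC = re.compile(r'#([0-7]{3})')

def unescape(line: str) -> str:
    """Replace rsyslog octal escapes with the control characters they encode."""
    return OCTAL_ESC.sub(lambda m: chr(int(m.group(1), 8)), line)
```

For example, `unescape("** DB Stats **#012Uptime(secs): 2402.4")` restores the line break before `Uptime`.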
Dec  6 02:10:31 np0005548731 nova_compute[232433]: 2025-12-06 07:10:31.648 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:10:32 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:10:32 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:10:32 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:10:32.140 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:10:32 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:10:32 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 02:10:32 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:10:32.321 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 02:10:33 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e217 e217: 3 total, 3 up, 3 in
Dec  6 02:10:34 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:10:34 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:10:34 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:10:34.142 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:10:34 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:10:34 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:10:34 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:10:34.324 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:10:35 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e217 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:10:35 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 02:10:35 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 02:10:35 np0005548731 nova_compute[232433]: 2025-12-06 07:10:35.998 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:10:36 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:10:36 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:10:36 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:10:36.144 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:10:36 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:10:36 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:10:36 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:10:36.327 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:10:36 np0005548731 nova_compute[232433]: 2025-12-06 07:10:36.650 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:10:38 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:10:38 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:10:38 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:10:38.146 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:10:38 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e218 e218: 3 total, 3 up, 3 in
Dec  6 02:10:38 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:10:38 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:10:38 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:10:38.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:10:40 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:10:40 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:10:40 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:10:40.148 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:10:40 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e218 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:10:40 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:10:40 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:10:40 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:10:40.333 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:10:41 np0005548731 nova_compute[232433]: 2025-12-06 07:10:41.001 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:10:41 np0005548731 nova_compute[232433]: 2025-12-06 07:10:41.676 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:10:42 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:10:42 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:10:42 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:10:42.151 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:10:42 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:10:42 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:10:42 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:10:42.337 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:10:43 np0005548731 podman[254573]: 2025-12-06 07:10:43.9285645 +0000 UTC m=+0.078567884 container health_status 26c2ce9bdb83c6db5ccad7f5b2a7cb4e9354ab0235ad2f942ea42c8feda2142b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent)
Dec  6 02:10:43 np0005548731 podman[254575]: 2025-12-06 07:10:43.928524419 +0000 UTC m=+0.072382802 container health_status ae4fd08cdec31d6ae2a6b91ffff7deb6ca5a8375cb52ee4b685cc6f25796824e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=multipathd, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Dec  6 02:10:44 np0005548731 podman[254574]: 2025-12-06 07:10:44.009742967 +0000 UTC m=+0.153107058 container health_status a118c3becf56cfbcae208065771ebf10a65162bb7b96389971293e534823fb78 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, config_id=ovn_controller)
Dec  6 02:10:44 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:10:44 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:10:44 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:10:44.152 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:10:44 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:10:44 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:10:44 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:10:44.340 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:10:45 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e218 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:10:46 np0005548731 nova_compute[232433]: 2025-12-06 07:10:46.004 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:10:46 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:10:46 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:10:46 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:10:46.154 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:10:46 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:10:46 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:10:46 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:10:46.344 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:10:46 np0005548731 nova_compute[232433]: 2025-12-06 07:10:46.677 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:10:47 np0005548731 nova_compute[232433]: 2025-12-06 07:10:47.070 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:10:47 np0005548731 nova_compute[232433]: 2025-12-06 07:10:47.071 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:10:47 np0005548731 nova_compute[232433]: 2025-12-06 07:10:47.646 232437 DEBUG oslo_concurrency.lockutils [None req-251994b0-76dd-420f-968c-0378d5e29d4f ef5fe6979bc448399956738faf6c8d3f 50fdc02b371b4fb9b701031b47f328f4 - - default default] Acquiring lock "34bfd039-04be-43ae-8bee-e37471f0fabe" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:10:47 np0005548731 nova_compute[232433]: 2025-12-06 07:10:47.646 232437 DEBUG oslo_concurrency.lockutils [None req-251994b0-76dd-420f-968c-0378d5e29d4f ef5fe6979bc448399956738faf6c8d3f 50fdc02b371b4fb9b701031b47f328f4 - - default default] Lock "34bfd039-04be-43ae-8bee-e37471f0fabe" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:10:47 np0005548731 nova_compute[232433]: 2025-12-06 07:10:47.674 232437 DEBUG nova.compute.manager [None req-251994b0-76dd-420f-968c-0378d5e29d4f ef5fe6979bc448399956738faf6c8d3f 50fdc02b371b4fb9b701031b47f328f4 - - default default] [instance: 34bfd039-04be-43ae-8bee-e37471f0fabe] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Dec  6 02:10:47 np0005548731 nova_compute[232433]: 2025-12-06 07:10:47.809 232437 DEBUG oslo_concurrency.lockutils [None req-251994b0-76dd-420f-968c-0378d5e29d4f ef5fe6979bc448399956738faf6c8d3f 50fdc02b371b4fb9b701031b47f328f4 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:10:47 np0005548731 nova_compute[232433]: 2025-12-06 07:10:47.809 232437 DEBUG oslo_concurrency.lockutils [None req-251994b0-76dd-420f-968c-0378d5e29d4f ef5fe6979bc448399956738faf6c8d3f 50fdc02b371b4fb9b701031b47f328f4 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:10:47 np0005548731 nova_compute[232433]: 2025-12-06 07:10:47.819 232437 DEBUG nova.virt.hardware [None req-251994b0-76dd-420f-968c-0378d5e29d4f ef5fe6979bc448399956738faf6c8d3f 50fdc02b371b4fb9b701031b47f328f4 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Dec  6 02:10:47 np0005548731 nova_compute[232433]: 2025-12-06 07:10:47.819 232437 INFO nova.compute.claims [None req-251994b0-76dd-420f-968c-0378d5e29d4f ef5fe6979bc448399956738faf6c8d3f 50fdc02b371b4fb9b701031b47f328f4 - - default default] [instance: 34bfd039-04be-43ae-8bee-e37471f0fabe] Claim successful on node compute-2.ctlplane.example.com#033[00m
Dec  6 02:10:48 np0005548731 nova_compute[232433]: 2025-12-06 07:10:48.016 232437 DEBUG oslo_concurrency.processutils [None req-251994b0-76dd-420f-968c-0378d5e29d4f ef5fe6979bc448399956738faf6c8d3f 50fdc02b371b4fb9b701031b47f328f4 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:10:48 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:10:48 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:10:48 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:10:48.156 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:10:48 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:10:48 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:10:48 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:10:48.347 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:10:48 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 02:10:48 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1779807841' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 02:10:48 np0005548731 nova_compute[232433]: 2025-12-06 07:10:48.434 232437 DEBUG oslo_concurrency.processutils [None req-251994b0-76dd-420f-968c-0378d5e29d4f ef5fe6979bc448399956738faf6c8d3f 50fdc02b371b4fb9b701031b47f328f4 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.418s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:10:48 np0005548731 nova_compute[232433]: 2025-12-06 07:10:48.440 232437 DEBUG nova.compute.provider_tree [None req-251994b0-76dd-420f-968c-0378d5e29d4f ef5fe6979bc448399956738faf6c8d3f 50fdc02b371b4fb9b701031b47f328f4 - - default default] Inventory has not changed in ProviderTree for provider: 6d00757a-082f-486d-ae84-869a2ba2e6e7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  6 02:10:48 np0005548731 nova_compute[232433]: 2025-12-06 07:10:48.457 232437 DEBUG nova.scheduler.client.report [None req-251994b0-76dd-420f-968c-0378d5e29d4f ef5fe6979bc448399956738faf6c8d3f 50fdc02b371b4fb9b701031b47f328f4 - - default default] Inventory has not changed for provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  6 02:10:48 np0005548731 nova_compute[232433]: 2025-12-06 07:10:48.495 232437 DEBUG oslo_concurrency.lockutils [None req-251994b0-76dd-420f-968c-0378d5e29d4f ef5fe6979bc448399956738faf6c8d3f 50fdc02b371b4fb9b701031b47f328f4 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.686s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:10:48 np0005548731 nova_compute[232433]: 2025-12-06 07:10:48.497 232437 DEBUG nova.compute.manager [None req-251994b0-76dd-420f-968c-0378d5e29d4f ef5fe6979bc448399956738faf6c8d3f 50fdc02b371b4fb9b701031b47f328f4 - - default default] [instance: 34bfd039-04be-43ae-8bee-e37471f0fabe] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Dec  6 02:10:48 np0005548731 nova_compute[232433]: 2025-12-06 07:10:48.549 232437 DEBUG nova.compute.manager [None req-251994b0-76dd-420f-968c-0378d5e29d4f ef5fe6979bc448399956738faf6c8d3f 50fdc02b371b4fb9b701031b47f328f4 - - default default] [instance: 34bfd039-04be-43ae-8bee-e37471f0fabe] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Dec  6 02:10:48 np0005548731 nova_compute[232433]: 2025-12-06 07:10:48.550 232437 DEBUG nova.network.neutron [None req-251994b0-76dd-420f-968c-0378d5e29d4f ef5fe6979bc448399956738faf6c8d3f 50fdc02b371b4fb9b701031b47f328f4 - - default default] [instance: 34bfd039-04be-43ae-8bee-e37471f0fabe] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Dec  6 02:10:48 np0005548731 nova_compute[232433]: 2025-12-06 07:10:48.588 232437 INFO nova.virt.libvirt.driver [None req-251994b0-76dd-420f-968c-0378d5e29d4f ef5fe6979bc448399956738faf6c8d3f 50fdc02b371b4fb9b701031b47f328f4 - - default default] [instance: 34bfd039-04be-43ae-8bee-e37471f0fabe] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Dec  6 02:10:48 np0005548731 nova_compute[232433]: 2025-12-06 07:10:48.614 232437 DEBUG nova.compute.manager [None req-251994b0-76dd-420f-968c-0378d5e29d4f ef5fe6979bc448399956738faf6c8d3f 50fdc02b371b4fb9b701031b47f328f4 - - default default] [instance: 34bfd039-04be-43ae-8bee-e37471f0fabe] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Dec  6 02:10:48 np0005548731 nova_compute[232433]: 2025-12-06 07:10:48.741 232437 DEBUG nova.compute.manager [None req-251994b0-76dd-420f-968c-0378d5e29d4f ef5fe6979bc448399956738faf6c8d3f 50fdc02b371b4fb9b701031b47f328f4 - - default default] [instance: 34bfd039-04be-43ae-8bee-e37471f0fabe] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Dec  6 02:10:48 np0005548731 nova_compute[232433]: 2025-12-06 07:10:48.743 232437 DEBUG nova.virt.libvirt.driver [None req-251994b0-76dd-420f-968c-0378d5e29d4f ef5fe6979bc448399956738faf6c8d3f 50fdc02b371b4fb9b701031b47f328f4 - - default default] [instance: 34bfd039-04be-43ae-8bee-e37471f0fabe] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Dec  6 02:10:48 np0005548731 nova_compute[232433]: 2025-12-06 07:10:48.743 232437 INFO nova.virt.libvirt.driver [None req-251994b0-76dd-420f-968c-0378d5e29d4f ef5fe6979bc448399956738faf6c8d3f 50fdc02b371b4fb9b701031b47f328f4 - - default default] [instance: 34bfd039-04be-43ae-8bee-e37471f0fabe] Creating image(s)#033[00m
Dec  6 02:10:48 np0005548731 nova_compute[232433]: 2025-12-06 07:10:48.772 232437 DEBUG nova.storage.rbd_utils [None req-251994b0-76dd-420f-968c-0378d5e29d4f ef5fe6979bc448399956738faf6c8d3f 50fdc02b371b4fb9b701031b47f328f4 - - default default] rbd image 34bfd039-04be-43ae-8bee-e37471f0fabe_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:10:48 np0005548731 nova_compute[232433]: 2025-12-06 07:10:48.798 232437 DEBUG nova.storage.rbd_utils [None req-251994b0-76dd-420f-968c-0378d5e29d4f ef5fe6979bc448399956738faf6c8d3f 50fdc02b371b4fb9b701031b47f328f4 - - default default] rbd image 34bfd039-04be-43ae-8bee-e37471f0fabe_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:10:48 np0005548731 nova_compute[232433]: 2025-12-06 07:10:48.826 232437 DEBUG nova.storage.rbd_utils [None req-251994b0-76dd-420f-968c-0378d5e29d4f ef5fe6979bc448399956738faf6c8d3f 50fdc02b371b4fb9b701031b47f328f4 - - default default] rbd image 34bfd039-04be-43ae-8bee-e37471f0fabe_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:10:48 np0005548731 nova_compute[232433]: 2025-12-06 07:10:48.829 232437 DEBUG oslo_concurrency.processutils [None req-251994b0-76dd-420f-968c-0378d5e29d4f ef5fe6979bc448399956738faf6c8d3f 50fdc02b371b4fb9b701031b47f328f4 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:10:48 np0005548731 nova_compute[232433]: 2025-12-06 07:10:48.893 232437 DEBUG oslo_concurrency.processutils [None req-251994b0-76dd-420f-968c-0378d5e29d4f ef5fe6979bc448399956738faf6c8d3f 50fdc02b371b4fb9b701031b47f328f4 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:10:48 np0005548731 nova_compute[232433]: 2025-12-06 07:10:48.894 232437 DEBUG oslo_concurrency.lockutils [None req-251994b0-76dd-420f-968c-0378d5e29d4f ef5fe6979bc448399956738faf6c8d3f 50fdc02b371b4fb9b701031b47f328f4 - - default default] Acquiring lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:10:48 np0005548731 nova_compute[232433]: 2025-12-06 07:10:48.895 232437 DEBUG oslo_concurrency.lockutils [None req-251994b0-76dd-420f-968c-0378d5e29d4f ef5fe6979bc448399956738faf6c8d3f 50fdc02b371b4fb9b701031b47f328f4 - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:10:48 np0005548731 nova_compute[232433]: 2025-12-06 07:10:48.895 232437 DEBUG oslo_concurrency.lockutils [None req-251994b0-76dd-420f-968c-0378d5e29d4f ef5fe6979bc448399956738faf6c8d3f 50fdc02b371b4fb9b701031b47f328f4 - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:10:48 np0005548731 nova_compute[232433]: 2025-12-06 07:10:48.926 232437 DEBUG nova.storage.rbd_utils [None req-251994b0-76dd-420f-968c-0378d5e29d4f ef5fe6979bc448399956738faf6c8d3f 50fdc02b371b4fb9b701031b47f328f4 - - default default] rbd image 34bfd039-04be-43ae-8bee-e37471f0fabe_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:10:48 np0005548731 nova_compute[232433]: 2025-12-06 07:10:48.930 232437 DEBUG oslo_concurrency.processutils [None req-251994b0-76dd-420f-968c-0378d5e29d4f ef5fe6979bc448399956738faf6c8d3f 50fdc02b371b4fb9b701031b47f328f4 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef 34bfd039-04be-43ae-8bee-e37471f0fabe_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:10:49 np0005548731 nova_compute[232433]: 2025-12-06 07:10:49.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:10:49 np0005548731 nova_compute[232433]: 2025-12-06 07:10:49.150 232437 DEBUG nova.policy [None req-251994b0-76dd-420f-968c-0378d5e29d4f ef5fe6979bc448399956738faf6c8d3f 50fdc02b371b4fb9b701031b47f328f4 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'ef5fe6979bc448399956738faf6c8d3f', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '50fdc02b371b4fb9b701031b47f328f4', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Dec  6 02:10:50 np0005548731 nova_compute[232433]: 2025-12-06 07:10:50.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:10:50 np0005548731 nova_compute[232433]: 2025-12-06 07:10:50.105 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  6 02:10:50 np0005548731 nova_compute[232433]: 2025-12-06 07:10:50.105 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  6 02:10:50 np0005548731 nova_compute[232433]: 2025-12-06 07:10:50.150 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] [instance: 34bfd039-04be-43ae-8bee-e37471f0fabe] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Dec  6 02:10:50 np0005548731 nova_compute[232433]: 2025-12-06 07:10:50.150 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Dec  6 02:10:50 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:10:50 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:10:50 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:10:50.158 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:10:50 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:10:50 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:10:50 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:10:50.350 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:10:50 np0005548731 nova_compute[232433]: 2025-12-06 07:10:50.771 232437 DEBUG nova.network.neutron [None req-251994b0-76dd-420f-968c-0378d5e29d4f ef5fe6979bc448399956738faf6c8d3f 50fdc02b371b4fb9b701031b47f328f4 - - default default] [instance: 34bfd039-04be-43ae-8bee-e37471f0fabe] Successfully created port: 7a36ceaa-7b3d-424c-91e9-3dfcaac10f3c _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Dec  6 02:10:50 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e218 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:10:51 np0005548731 nova_compute[232433]: 2025-12-06 07:10:51.006 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:10:51 np0005548731 nova_compute[232433]: 2025-12-06 07:10:51.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:10:51 np0005548731 nova_compute[232433]: 2025-12-06 07:10:51.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:10:51 np0005548731 nova_compute[232433]: 2025-12-06 07:10:51.105 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  6 02:10:51 np0005548731 nova_compute[232433]: 2025-12-06 07:10:51.679 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:10:51 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:10:51.734 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=19, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:ec:b3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '32:72:e7:89:e0:7d'}, ipsec=False) old=SB_Global(nb_cfg=18) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 02:10:51 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:10:51.735 143965 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec  6 02:10:51 np0005548731 nova_compute[232433]: 2025-12-06 07:10:51.776 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:10:52 np0005548731 nova_compute[232433]: 2025-12-06 07:10:52.105 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:10:52 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:10:52 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:10:52 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:10:52.160 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:10:52 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:10:52 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:10:52 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:10:52.353 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:10:52 np0005548731 nova_compute[232433]: 2025-12-06 07:10:52.894 232437 DEBUG oslo_concurrency.processutils [None req-251994b0-76dd-420f-968c-0378d5e29d4f ef5fe6979bc448399956738faf6c8d3f 50fdc02b371b4fb9b701031b47f328f4 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef 34bfd039-04be-43ae-8bee-e37471f0fabe_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 3.964s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:10:52 np0005548731 nova_compute[232433]: 2025-12-06 07:10:52.968 232437 DEBUG nova.storage.rbd_utils [None req-251994b0-76dd-420f-968c-0378d5e29d4f ef5fe6979bc448399956738faf6c8d3f 50fdc02b371b4fb9b701031b47f328f4 - - default default] resizing rbd image 34bfd039-04be-43ae-8bee-e37471f0fabe_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Dec  6 02:10:53 np0005548731 nova_compute[232433]: 2025-12-06 07:10:53.432 232437 DEBUG nova.objects.instance [None req-251994b0-76dd-420f-968c-0378d5e29d4f ef5fe6979bc448399956738faf6c8d3f 50fdc02b371b4fb9b701031b47f328f4 - - default default] Lazy-loading 'migration_context' on Instance uuid 34bfd039-04be-43ae-8bee-e37471f0fabe obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:10:53 np0005548731 nova_compute[232433]: 2025-12-06 07:10:53.464 232437 DEBUG nova.virt.libvirt.driver [None req-251994b0-76dd-420f-968c-0378d5e29d4f ef5fe6979bc448399956738faf6c8d3f 50fdc02b371b4fb9b701031b47f328f4 - - default default] [instance: 34bfd039-04be-43ae-8bee-e37471f0fabe] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Dec  6 02:10:53 np0005548731 nova_compute[232433]: 2025-12-06 07:10:53.465 232437 DEBUG nova.virt.libvirt.driver [None req-251994b0-76dd-420f-968c-0378d5e29d4f ef5fe6979bc448399956738faf6c8d3f 50fdc02b371b4fb9b701031b47f328f4 - - default default] [instance: 34bfd039-04be-43ae-8bee-e37471f0fabe] Ensure instance console log exists: /var/lib/nova/instances/34bfd039-04be-43ae-8bee-e37471f0fabe/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec  6 02:10:53 np0005548731 nova_compute[232433]: 2025-12-06 07:10:53.465 232437 DEBUG oslo_concurrency.lockutils [None req-251994b0-76dd-420f-968c-0378d5e29d4f ef5fe6979bc448399956738faf6c8d3f 50fdc02b371b4fb9b701031b47f328f4 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:10:53 np0005548731 nova_compute[232433]: 2025-12-06 07:10:53.466 232437 DEBUG oslo_concurrency.lockutils [None req-251994b0-76dd-420f-968c-0378d5e29d4f ef5fe6979bc448399956738faf6c8d3f 50fdc02b371b4fb9b701031b47f328f4 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:10:53 np0005548731 nova_compute[232433]: 2025-12-06 07:10:53.466 232437 DEBUG oslo_concurrency.lockutils [None req-251994b0-76dd-420f-968c-0378d5e29d4f ef5fe6979bc448399956738faf6c8d3f 50fdc02b371b4fb9b701031b47f328f4 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:10:53 np0005548731 nova_compute[232433]: 2025-12-06 07:10:53.617 232437 DEBUG nova.network.neutron [None req-251994b0-76dd-420f-968c-0378d5e29d4f ef5fe6979bc448399956738faf6c8d3f 50fdc02b371b4fb9b701031b47f328f4 - - default default] [instance: 34bfd039-04be-43ae-8bee-e37471f0fabe] Successfully updated port: 7a36ceaa-7b3d-424c-91e9-3dfcaac10f3c _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Dec  6 02:10:53 np0005548731 nova_compute[232433]: 2025-12-06 07:10:53.642 232437 DEBUG oslo_concurrency.lockutils [None req-251994b0-76dd-420f-968c-0378d5e29d4f ef5fe6979bc448399956738faf6c8d3f 50fdc02b371b4fb9b701031b47f328f4 - - default default] Acquiring lock "refresh_cache-34bfd039-04be-43ae-8bee-e37471f0fabe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 02:10:53 np0005548731 nova_compute[232433]: 2025-12-06 07:10:53.642 232437 DEBUG oslo_concurrency.lockutils [None req-251994b0-76dd-420f-968c-0378d5e29d4f ef5fe6979bc448399956738faf6c8d3f 50fdc02b371b4fb9b701031b47f328f4 - - default default] Acquired lock "refresh_cache-34bfd039-04be-43ae-8bee-e37471f0fabe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 02:10:53 np0005548731 nova_compute[232433]: 2025-12-06 07:10:53.642 232437 DEBUG nova.network.neutron [None req-251994b0-76dd-420f-968c-0378d5e29d4f ef5fe6979bc448399956738faf6c8d3f 50fdc02b371b4fb9b701031b47f328f4 - - default default] [instance: 34bfd039-04be-43ae-8bee-e37471f0fabe] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec  6 02:10:53 np0005548731 nova_compute[232433]: 2025-12-06 07:10:53.757 232437 DEBUG nova.compute.manager [req-efe25949-6e42-4c05-9277-dd7ca0db827a req-456a30a6-ee8a-4522-88b1-31ae7fb9a21f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 34bfd039-04be-43ae-8bee-e37471f0fabe] Received event network-changed-7a36ceaa-7b3d-424c-91e9-3dfcaac10f3c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:10:53 np0005548731 nova_compute[232433]: 2025-12-06 07:10:53.757 232437 DEBUG nova.compute.manager [req-efe25949-6e42-4c05-9277-dd7ca0db827a req-456a30a6-ee8a-4522-88b1-31ae7fb9a21f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 34bfd039-04be-43ae-8bee-e37471f0fabe] Refreshing instance network info cache due to event network-changed-7a36ceaa-7b3d-424c-91e9-3dfcaac10f3c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  6 02:10:53 np0005548731 nova_compute[232433]: 2025-12-06 07:10:53.758 232437 DEBUG oslo_concurrency.lockutils [req-efe25949-6e42-4c05-9277-dd7ca0db827a req-456a30a6-ee8a-4522-88b1-31ae7fb9a21f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-34bfd039-04be-43ae-8bee-e37471f0fabe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 02:10:53 np0005548731 nova_compute[232433]: 2025-12-06 07:10:53.938 232437 DEBUG nova.network.neutron [None req-251994b0-76dd-420f-968c-0378d5e29d4f ef5fe6979bc448399956738faf6c8d3f 50fdc02b371b4fb9b701031b47f328f4 - - default default] [instance: 34bfd039-04be-43ae-8bee-e37471f0fabe] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec  6 02:10:54 np0005548731 nova_compute[232433]: 2025-12-06 07:10:54.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:10:54 np0005548731 nova_compute[232433]: 2025-12-06 07:10:54.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:10:54 np0005548731 nova_compute[232433]: 2025-12-06 07:10:54.139 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:10:54 np0005548731 nova_compute[232433]: 2025-12-06 07:10:54.139 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:10:54 np0005548731 nova_compute[232433]: 2025-12-06 07:10:54.139 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:10:54 np0005548731 nova_compute[232433]: 2025-12-06 07:10:54.140 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  6 02:10:54 np0005548731 nova_compute[232433]: 2025-12-06 07:10:54.140 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:10:54 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:10:54 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:10:54 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:10:54.162 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:10:54 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:10:54 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:10:54 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:10:54.356 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:10:54 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 02:10:54 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/4067379754' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 02:10:54 np0005548731 nova_compute[232433]: 2025-12-06 07:10:54.555 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.415s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:10:54 np0005548731 nova_compute[232433]: 2025-12-06 07:10:54.719 232437 WARNING nova.virt.libvirt.driver [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  6 02:10:54 np0005548731 nova_compute[232433]: 2025-12-06 07:10:54.720 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4755MB free_disk=20.988269805908203GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  6 02:10:54 np0005548731 nova_compute[232433]: 2025-12-06 07:10:54.720 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:10:54 np0005548731 nova_compute[232433]: 2025-12-06 07:10:54.720 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:10:54 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:10:54.737 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=9f96b960-b4f2-40bd-ae99-08121f5e8b78, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '19'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:10:54 np0005548731 nova_compute[232433]: 2025-12-06 07:10:54.824 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Instance 34bfd039-04be-43ae-8bee-e37471f0fabe actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  6 02:10:54 np0005548731 nova_compute[232433]: 2025-12-06 07:10:54.824 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  6 02:10:54 np0005548731 nova_compute[232433]: 2025-12-06 07:10:54.824 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  6 02:10:54 np0005548731 nova_compute[232433]: 2025-12-06 07:10:54.862 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:10:55 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 02:10:55 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2114790814' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 02:10:55 np0005548731 nova_compute[232433]: 2025-12-06 07:10:55.291 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.429s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:10:55 np0005548731 nova_compute[232433]: 2025-12-06 07:10:55.296 232437 DEBUG nova.compute.provider_tree [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Inventory has not changed in ProviderTree for provider: 6d00757a-082f-486d-ae84-869a2ba2e6e7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  6 02:10:55 np0005548731 nova_compute[232433]: 2025-12-06 07:10:55.329 232437 DEBUG nova.scheduler.client.report [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Inventory has not changed for provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  6 02:10:55 np0005548731 nova_compute[232433]: 2025-12-06 07:10:55.396 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  6 02:10:55 np0005548731 nova_compute[232433]: 2025-12-06 07:10:55.396 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.676s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:10:55 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e218 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:10:56 np0005548731 nova_compute[232433]: 2025-12-06 07:10:56.010 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:10:56 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:10:56 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:10:56 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:10:56.164 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:10:56 np0005548731 nova_compute[232433]: 2025-12-06 07:10:56.199 232437 DEBUG nova.network.neutron [None req-251994b0-76dd-420f-968c-0378d5e29d4f ef5fe6979bc448399956738faf6c8d3f 50fdc02b371b4fb9b701031b47f328f4 - - default default] [instance: 34bfd039-04be-43ae-8bee-e37471f0fabe] Updating instance_info_cache with network_info: [{"id": "7a36ceaa-7b3d-424c-91e9-3dfcaac10f3c", "address": "fa:16:3e:25:a6:a1", "network": {"id": "e6ac552d-917a-49f3-abe3-8df7221907a4", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-591275388-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "50fdc02b371b4fb9b701031b47f328f4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7a36ceaa-7b", "ovs_interfaceid": "7a36ceaa-7b3d-424c-91e9-3dfcaac10f3c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 02:10:56 np0005548731 nova_compute[232433]: 2025-12-06 07:10:56.261 232437 DEBUG oslo_concurrency.lockutils [None req-251994b0-76dd-420f-968c-0378d5e29d4f ef5fe6979bc448399956738faf6c8d3f 50fdc02b371b4fb9b701031b47f328f4 - - default default] Releasing lock "refresh_cache-34bfd039-04be-43ae-8bee-e37471f0fabe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 02:10:56 np0005548731 nova_compute[232433]: 2025-12-06 07:10:56.262 232437 DEBUG nova.compute.manager [None req-251994b0-76dd-420f-968c-0378d5e29d4f ef5fe6979bc448399956738faf6c8d3f 50fdc02b371b4fb9b701031b47f328f4 - - default default] [instance: 34bfd039-04be-43ae-8bee-e37471f0fabe] Instance network_info: |[{"id": "7a36ceaa-7b3d-424c-91e9-3dfcaac10f3c", "address": "fa:16:3e:25:a6:a1", "network": {"id": "e6ac552d-917a-49f3-abe3-8df7221907a4", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-591275388-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "50fdc02b371b4fb9b701031b47f328f4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7a36ceaa-7b", "ovs_interfaceid": "7a36ceaa-7b3d-424c-91e9-3dfcaac10f3c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Dec  6 02:10:56 np0005548731 nova_compute[232433]: 2025-12-06 07:10:56.262 232437 DEBUG oslo_concurrency.lockutils [req-efe25949-6e42-4c05-9277-dd7ca0db827a req-456a30a6-ee8a-4522-88b1-31ae7fb9a21f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-34bfd039-04be-43ae-8bee-e37471f0fabe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 02:10:56 np0005548731 nova_compute[232433]: 2025-12-06 07:10:56.262 232437 DEBUG nova.network.neutron [req-efe25949-6e42-4c05-9277-dd7ca0db827a req-456a30a6-ee8a-4522-88b1-31ae7fb9a21f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 34bfd039-04be-43ae-8bee-e37471f0fabe] Refreshing network info cache for port 7a36ceaa-7b3d-424c-91e9-3dfcaac10f3c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  6 02:10:56 np0005548731 nova_compute[232433]: 2025-12-06 07:10:56.268 232437 DEBUG nova.virt.libvirt.driver [None req-251994b0-76dd-420f-968c-0378d5e29d4f ef5fe6979bc448399956738faf6c8d3f 50fdc02b371b4fb9b701031b47f328f4 - - default default] [instance: 34bfd039-04be-43ae-8bee-e37471f0fabe] Start _get_guest_xml network_info=[{"id": "7a36ceaa-7b3d-424c-91e9-3dfcaac10f3c", "address": "fa:16:3e:25:a6:a1", "network": {"id": "e6ac552d-917a-49f3-abe3-8df7221907a4", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-591275388-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "50fdc02b371b4fb9b701031b47f328f4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7a36ceaa-7b", "ovs_interfaceid": "7a36ceaa-7b3d-424c-91e9-3dfcaac10f3c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'device_type': 'disk', 'encryption_secret_uuid': None, 'disk_bus': 'virtio', 'boot_index': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'encryption_options': None, 'size': 0, 'encrypted': False, 'image_id': '6efab05d-c7cf-4770-a5c3-c806a2739063'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec  6 02:10:56 np0005548731 nova_compute[232433]: 2025-12-06 07:10:56.272 232437 WARNING nova.virt.libvirt.driver [None req-251994b0-76dd-420f-968c-0378d5e29d4f ef5fe6979bc448399956738faf6c8d3f 50fdc02b371b4fb9b701031b47f328f4 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  6 02:10:56 np0005548731 nova_compute[232433]: 2025-12-06 07:10:56.279 232437 DEBUG nova.virt.libvirt.host [None req-251994b0-76dd-420f-968c-0378d5e29d4f ef5fe6979bc448399956738faf6c8d3f 50fdc02b371b4fb9b701031b47f328f4 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec  6 02:10:56 np0005548731 nova_compute[232433]: 2025-12-06 07:10:56.280 232437 DEBUG nova.virt.libvirt.host [None req-251994b0-76dd-420f-968c-0378d5e29d4f ef5fe6979bc448399956738faf6c8d3f 50fdc02b371b4fb9b701031b47f328f4 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec  6 02:10:56 np0005548731 nova_compute[232433]: 2025-12-06 07:10:56.296 232437 DEBUG nova.virt.libvirt.host [None req-251994b0-76dd-420f-968c-0378d5e29d4f ef5fe6979bc448399956738faf6c8d3f 50fdc02b371b4fb9b701031b47f328f4 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec  6 02:10:56 np0005548731 nova_compute[232433]: 2025-12-06 07:10:56.297 232437 DEBUG nova.virt.libvirt.host [None req-251994b0-76dd-420f-968c-0378d5e29d4f ef5fe6979bc448399956738faf6c8d3f 50fdc02b371b4fb9b701031b47f328f4 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec  6 02:10:56 np0005548731 nova_compute[232433]: 2025-12-06 07:10:56.298 232437 DEBUG nova.virt.libvirt.driver [None req-251994b0-76dd-420f-968c-0378d5e29d4f ef5fe6979bc448399956738faf6c8d3f 50fdc02b371b4fb9b701031b47f328f4 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec  6 02:10:56 np0005548731 nova_compute[232433]: 2025-12-06 07:10:56.298 232437 DEBUG nova.virt.hardware [None req-251994b0-76dd-420f-968c-0378d5e29d4f ef5fe6979bc448399956738faf6c8d3f 50fdc02b371b4fb9b701031b47f328f4 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-06T06:56:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='25848a18-11d9-4f11-80b5-5d005675c76d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec  6 02:10:56 np0005548731 nova_compute[232433]: 2025-12-06 07:10:56.299 232437 DEBUG nova.virt.hardware [None req-251994b0-76dd-420f-968c-0378d5e29d4f ef5fe6979bc448399956738faf6c8d3f 50fdc02b371b4fb9b701031b47f328f4 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec  6 02:10:56 np0005548731 nova_compute[232433]: 2025-12-06 07:10:56.299 232437 DEBUG nova.virt.hardware [None req-251994b0-76dd-420f-968c-0378d5e29d4f ef5fe6979bc448399956738faf6c8d3f 50fdc02b371b4fb9b701031b47f328f4 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec  6 02:10:56 np0005548731 nova_compute[232433]: 2025-12-06 07:10:56.299 232437 DEBUG nova.virt.hardware [None req-251994b0-76dd-420f-968c-0378d5e29d4f ef5fe6979bc448399956738faf6c8d3f 50fdc02b371b4fb9b701031b47f328f4 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec  6 02:10:56 np0005548731 nova_compute[232433]: 2025-12-06 07:10:56.300 232437 DEBUG nova.virt.hardware [None req-251994b0-76dd-420f-968c-0378d5e29d4f ef5fe6979bc448399956738faf6c8d3f 50fdc02b371b4fb9b701031b47f328f4 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec  6 02:10:56 np0005548731 nova_compute[232433]: 2025-12-06 07:10:56.300 232437 DEBUG nova.virt.hardware [None req-251994b0-76dd-420f-968c-0378d5e29d4f ef5fe6979bc448399956738faf6c8d3f 50fdc02b371b4fb9b701031b47f328f4 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec  6 02:10:56 np0005548731 nova_compute[232433]: 2025-12-06 07:10:56.300 232437 DEBUG nova.virt.hardware [None req-251994b0-76dd-420f-968c-0378d5e29d4f ef5fe6979bc448399956738faf6c8d3f 50fdc02b371b4fb9b701031b47f328f4 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec  6 02:10:56 np0005548731 nova_compute[232433]: 2025-12-06 07:10:56.300 232437 DEBUG nova.virt.hardware [None req-251994b0-76dd-420f-968c-0378d5e29d4f ef5fe6979bc448399956738faf6c8d3f 50fdc02b371b4fb9b701031b47f328f4 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec  6 02:10:56 np0005548731 nova_compute[232433]: 2025-12-06 07:10:56.300 232437 DEBUG nova.virt.hardware [None req-251994b0-76dd-420f-968c-0378d5e29d4f ef5fe6979bc448399956738faf6c8d3f 50fdc02b371b4fb9b701031b47f328f4 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec  6 02:10:56 np0005548731 nova_compute[232433]: 2025-12-06 07:10:56.301 232437 DEBUG nova.virt.hardware [None req-251994b0-76dd-420f-968c-0378d5e29d4f ef5fe6979bc448399956738faf6c8d3f 50fdc02b371b4fb9b701031b47f328f4 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec  6 02:10:56 np0005548731 nova_compute[232433]: 2025-12-06 07:10:56.301 232437 DEBUG nova.virt.hardware [None req-251994b0-76dd-420f-968c-0378d5e29d4f ef5fe6979bc448399956738faf6c8d3f 50fdc02b371b4fb9b701031b47f328f4 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec  6 02:10:56 np0005548731 nova_compute[232433]: 2025-12-06 07:10:56.304 232437 DEBUG oslo_concurrency.processutils [None req-251994b0-76dd-420f-968c-0378d5e29d4f ef5fe6979bc448399956738faf6c8d3f 50fdc02b371b4fb9b701031b47f328f4 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:10:56 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:10:56 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:10:56 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:10:56.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:10:56 np0005548731 nova_compute[232433]: 2025-12-06 07:10:56.680 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:10:56 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  6 02:10:56 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1615937358' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  6 02:10:56 np0005548731 nova_compute[232433]: 2025-12-06 07:10:56.811 232437 DEBUG oslo_concurrency.processutils [None req-251994b0-76dd-420f-968c-0378d5e29d4f ef5fe6979bc448399956738faf6c8d3f 50fdc02b371b4fb9b701031b47f328f4 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.507s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:10:56 np0005548731 nova_compute[232433]: 2025-12-06 07:10:56.835 232437 DEBUG nova.storage.rbd_utils [None req-251994b0-76dd-420f-968c-0378d5e29d4f ef5fe6979bc448399956738faf6c8d3f 50fdc02b371b4fb9b701031b47f328f4 - - default default] rbd image 34bfd039-04be-43ae-8bee-e37471f0fabe_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:10:56 np0005548731 nova_compute[232433]: 2025-12-06 07:10:56.839 232437 DEBUG oslo_concurrency.processutils [None req-251994b0-76dd-420f-968c-0378d5e29d4f ef5fe6979bc448399956738faf6c8d3f 50fdc02b371b4fb9b701031b47f328f4 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:10:57 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  6 02:10:57 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1468335041' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  6 02:10:57 np0005548731 nova_compute[232433]: 2025-12-06 07:10:57.290 232437 DEBUG oslo_concurrency.processutils [None req-251994b0-76dd-420f-968c-0378d5e29d4f ef5fe6979bc448399956738faf6c8d3f 50fdc02b371b4fb9b701031b47f328f4 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.452s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:10:57 np0005548731 nova_compute[232433]: 2025-12-06 07:10:57.293 232437 DEBUG nova.virt.libvirt.vif [None req-251994b0-76dd-420f-968c-0378d5e29d4f ef5fe6979bc448399956738faf6c8d3f 50fdc02b371b4fb9b701031b47f328f4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T07:10:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachInterfacesUnderV243Test-server-310644807',display_name='tempest-AttachInterfacesUnderV243Test-server-310644807',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-attachinterfacesunderv243test-server-310644807',id=50,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHbMMSUWiyMtN9cuuBp2EC9fia2AoVr6gsZjeT011Z/dGbvmY9q/cWhY7F6phVv+P4yf4RM8TJprhk1AbPCa7DbOPIhqhtk2bfgBisn7iMaQg0dDtm8aXGE+ltU88q+vDA==',key_name='tempest-keypair-427389425',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='50fdc02b371b4fb9b701031b47f328f4',ramdisk_id='',reservation_id='r-1lkgof8x',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesUnderV243Test-486339634',owner_user_name='tempest-AttachInterfacesUnderV243Test-486339634-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:10:48Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='ef5fe6979bc448399956738faf6c8d3f',uuid=34bfd039-04be-43ae-8bee-e37471f0fabe,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "7a36ceaa-7b3d-424c-91e9-3dfcaac10f3c", "address": "fa:16:3e:25:a6:a1", "network": {"id": "e6ac552d-917a-49f3-abe3-8df7221907a4", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-591275388-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "50fdc02b371b4fb9b701031b47f328f4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7a36ceaa-7b", "ovs_interfaceid": "7a36ceaa-7b3d-424c-91e9-3dfcaac10f3c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec  6 02:10:57 np0005548731 nova_compute[232433]: 2025-12-06 07:10:57.293 232437 DEBUG nova.network.os_vif_util [None req-251994b0-76dd-420f-968c-0378d5e29d4f ef5fe6979bc448399956738faf6c8d3f 50fdc02b371b4fb9b701031b47f328f4 - - default default] Converting VIF {"id": "7a36ceaa-7b3d-424c-91e9-3dfcaac10f3c", "address": "fa:16:3e:25:a6:a1", "network": {"id": "e6ac552d-917a-49f3-abe3-8df7221907a4", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-591275388-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "50fdc02b371b4fb9b701031b47f328f4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7a36ceaa-7b", "ovs_interfaceid": "7a36ceaa-7b3d-424c-91e9-3dfcaac10f3c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 02:10:57 np0005548731 nova_compute[232433]: 2025-12-06 07:10:57.294 232437 DEBUG nova.network.os_vif_util [None req-251994b0-76dd-420f-968c-0378d5e29d4f ef5fe6979bc448399956738faf6c8d3f 50fdc02b371b4fb9b701031b47f328f4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:25:a6:a1,bridge_name='br-int',has_traffic_filtering=True,id=7a36ceaa-7b3d-424c-91e9-3dfcaac10f3c,network=Network(e6ac552d-917a-49f3-abe3-8df7221907a4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7a36ceaa-7b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 02:10:57 np0005548731 nova_compute[232433]: 2025-12-06 07:10:57.295 232437 DEBUG nova.objects.instance [None req-251994b0-76dd-420f-968c-0378d5e29d4f ef5fe6979bc448399956738faf6c8d3f 50fdc02b371b4fb9b701031b47f328f4 - - default default] Lazy-loading 'pci_devices' on Instance uuid 34bfd039-04be-43ae-8bee-e37471f0fabe obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:10:57 np0005548731 nova_compute[232433]: 2025-12-06 07:10:57.312 232437 DEBUG nova.virt.libvirt.driver [None req-251994b0-76dd-420f-968c-0378d5e29d4f ef5fe6979bc448399956738faf6c8d3f 50fdc02b371b4fb9b701031b47f328f4 - - default default] [instance: 34bfd039-04be-43ae-8bee-e37471f0fabe] End _get_guest_xml xml=<domain type="kvm">
Dec  6 02:10:57 np0005548731 nova_compute[232433]:  <uuid>34bfd039-04be-43ae-8bee-e37471f0fabe</uuid>
Dec  6 02:10:57 np0005548731 nova_compute[232433]:  <name>instance-00000032</name>
Dec  6 02:10:57 np0005548731 nova_compute[232433]:  <memory>131072</memory>
Dec  6 02:10:57 np0005548731 nova_compute[232433]:  <vcpu>1</vcpu>
Dec  6 02:10:57 np0005548731 nova_compute[232433]:  <metadata>
Dec  6 02:10:57 np0005548731 nova_compute[232433]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec  6 02:10:57 np0005548731 nova_compute[232433]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec  6 02:10:57 np0005548731 nova_compute[232433]:      <nova:name>tempest-AttachInterfacesUnderV243Test-server-310644807</nova:name>
Dec  6 02:10:57 np0005548731 nova_compute[232433]:      <nova:creationTime>2025-12-06 07:10:56</nova:creationTime>
Dec  6 02:10:57 np0005548731 nova_compute[232433]:      <nova:flavor name="m1.nano">
Dec  6 02:10:57 np0005548731 nova_compute[232433]:        <nova:memory>128</nova:memory>
Dec  6 02:10:57 np0005548731 nova_compute[232433]:        <nova:disk>1</nova:disk>
Dec  6 02:10:57 np0005548731 nova_compute[232433]:        <nova:swap>0</nova:swap>
Dec  6 02:10:57 np0005548731 nova_compute[232433]:        <nova:ephemeral>0</nova:ephemeral>
Dec  6 02:10:57 np0005548731 nova_compute[232433]:        <nova:vcpus>1</nova:vcpus>
Dec  6 02:10:57 np0005548731 nova_compute[232433]:      </nova:flavor>
Dec  6 02:10:57 np0005548731 nova_compute[232433]:      <nova:owner>
Dec  6 02:10:57 np0005548731 nova_compute[232433]:        <nova:user uuid="ef5fe6979bc448399956738faf6c8d3f">tempest-AttachInterfacesUnderV243Test-486339634-project-member</nova:user>
Dec  6 02:10:57 np0005548731 nova_compute[232433]:        <nova:project uuid="50fdc02b371b4fb9b701031b47f328f4">tempest-AttachInterfacesUnderV243Test-486339634</nova:project>
Dec  6 02:10:57 np0005548731 nova_compute[232433]:      </nova:owner>
Dec  6 02:10:57 np0005548731 nova_compute[232433]:      <nova:root type="image" uuid="6efab05d-c7cf-4770-a5c3-c806a2739063"/>
Dec  6 02:10:57 np0005548731 nova_compute[232433]:      <nova:ports>
Dec  6 02:10:57 np0005548731 nova_compute[232433]:        <nova:port uuid="7a36ceaa-7b3d-424c-91e9-3dfcaac10f3c">
Dec  6 02:10:57 np0005548731 nova_compute[232433]:          <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Dec  6 02:10:57 np0005548731 nova_compute[232433]:        </nova:port>
Dec  6 02:10:57 np0005548731 nova_compute[232433]:      </nova:ports>
Dec  6 02:10:57 np0005548731 nova_compute[232433]:    </nova:instance>
Dec  6 02:10:57 np0005548731 nova_compute[232433]:  </metadata>
Dec  6 02:10:57 np0005548731 nova_compute[232433]:  <sysinfo type="smbios">
Dec  6 02:10:57 np0005548731 nova_compute[232433]:    <system>
Dec  6 02:10:57 np0005548731 nova_compute[232433]:      <entry name="manufacturer">RDO</entry>
Dec  6 02:10:57 np0005548731 nova_compute[232433]:      <entry name="product">OpenStack Compute</entry>
Dec  6 02:10:57 np0005548731 nova_compute[232433]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec  6 02:10:57 np0005548731 nova_compute[232433]:      <entry name="serial">34bfd039-04be-43ae-8bee-e37471f0fabe</entry>
Dec  6 02:10:57 np0005548731 nova_compute[232433]:      <entry name="uuid">34bfd039-04be-43ae-8bee-e37471f0fabe</entry>
Dec  6 02:10:57 np0005548731 nova_compute[232433]:      <entry name="family">Virtual Machine</entry>
Dec  6 02:10:57 np0005548731 nova_compute[232433]:    </system>
Dec  6 02:10:57 np0005548731 nova_compute[232433]:  </sysinfo>
Dec  6 02:10:57 np0005548731 nova_compute[232433]:  <os>
Dec  6 02:10:57 np0005548731 nova_compute[232433]:    <type arch="x86_64" machine="q35">hvm</type>
Dec  6 02:10:57 np0005548731 nova_compute[232433]:    <boot dev="hd"/>
Dec  6 02:10:57 np0005548731 nova_compute[232433]:    <smbios mode="sysinfo"/>
Dec  6 02:10:57 np0005548731 nova_compute[232433]:  </os>
Dec  6 02:10:57 np0005548731 nova_compute[232433]:  <features>
Dec  6 02:10:57 np0005548731 nova_compute[232433]:    <acpi/>
Dec  6 02:10:57 np0005548731 nova_compute[232433]:    <apic/>
Dec  6 02:10:57 np0005548731 nova_compute[232433]:    <vmcoreinfo/>
Dec  6 02:10:57 np0005548731 nova_compute[232433]:  </features>
Dec  6 02:10:57 np0005548731 nova_compute[232433]:  <clock offset="utc">
Dec  6 02:10:57 np0005548731 nova_compute[232433]:    <timer name="pit" tickpolicy="delay"/>
Dec  6 02:10:57 np0005548731 nova_compute[232433]:    <timer name="rtc" tickpolicy="catchup"/>
Dec  6 02:10:57 np0005548731 nova_compute[232433]:    <timer name="hpet" present="no"/>
Dec  6 02:10:57 np0005548731 nova_compute[232433]:  </clock>
Dec  6 02:10:57 np0005548731 nova_compute[232433]:  <cpu mode="custom" match="exact">
Dec  6 02:10:57 np0005548731 nova_compute[232433]:    <model>Nehalem</model>
Dec  6 02:10:57 np0005548731 nova_compute[232433]:    <topology sockets="1" cores="1" threads="1"/>
Dec  6 02:10:57 np0005548731 nova_compute[232433]:  </cpu>
Dec  6 02:10:57 np0005548731 nova_compute[232433]:  <devices>
Dec  6 02:10:57 np0005548731 nova_compute[232433]:    <disk type="network" device="disk">
Dec  6 02:10:57 np0005548731 nova_compute[232433]:      <driver type="raw" cache="none"/>
Dec  6 02:10:57 np0005548731 nova_compute[232433]:      <source protocol="rbd" name="vms/34bfd039-04be-43ae-8bee-e37471f0fabe_disk">
Dec  6 02:10:57 np0005548731 nova_compute[232433]:        <host name="192.168.122.100" port="6789"/>
Dec  6 02:10:57 np0005548731 nova_compute[232433]:        <host name="192.168.122.102" port="6789"/>
Dec  6 02:10:57 np0005548731 nova_compute[232433]:        <host name="192.168.122.101" port="6789"/>
Dec  6 02:10:57 np0005548731 nova_compute[232433]:      </source>
Dec  6 02:10:57 np0005548731 nova_compute[232433]:      <auth username="openstack">
Dec  6 02:10:57 np0005548731 nova_compute[232433]:        <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec  6 02:10:57 np0005548731 nova_compute[232433]:      </auth>
Dec  6 02:10:57 np0005548731 nova_compute[232433]:      <target dev="vda" bus="virtio"/>
Dec  6 02:10:57 np0005548731 nova_compute[232433]:    </disk>
Dec  6 02:10:57 np0005548731 nova_compute[232433]:    <disk type="network" device="cdrom">
Dec  6 02:10:57 np0005548731 nova_compute[232433]:      <driver type="raw" cache="none"/>
Dec  6 02:10:57 np0005548731 nova_compute[232433]:      <source protocol="rbd" name="vms/34bfd039-04be-43ae-8bee-e37471f0fabe_disk.config">
Dec  6 02:10:57 np0005548731 nova_compute[232433]:        <host name="192.168.122.100" port="6789"/>
Dec  6 02:10:57 np0005548731 nova_compute[232433]:        <host name="192.168.122.102" port="6789"/>
Dec  6 02:10:57 np0005548731 nova_compute[232433]:        <host name="192.168.122.101" port="6789"/>
Dec  6 02:10:57 np0005548731 nova_compute[232433]:      </source>
Dec  6 02:10:57 np0005548731 nova_compute[232433]:      <auth username="openstack">
Dec  6 02:10:57 np0005548731 nova_compute[232433]:        <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec  6 02:10:57 np0005548731 nova_compute[232433]:      </auth>
Dec  6 02:10:57 np0005548731 nova_compute[232433]:      <target dev="sda" bus="sata"/>
Dec  6 02:10:57 np0005548731 nova_compute[232433]:    </disk>
Dec  6 02:10:57 np0005548731 nova_compute[232433]:    <interface type="ethernet">
Dec  6 02:10:57 np0005548731 nova_compute[232433]:      <mac address="fa:16:3e:25:a6:a1"/>
Dec  6 02:10:57 np0005548731 nova_compute[232433]:      <model type="virtio"/>
Dec  6 02:10:57 np0005548731 nova_compute[232433]:      <driver name="vhost" rx_queue_size="512"/>
Dec  6 02:10:57 np0005548731 nova_compute[232433]:      <mtu size="1442"/>
Dec  6 02:10:57 np0005548731 nova_compute[232433]:      <target dev="tap7a36ceaa-7b"/>
Dec  6 02:10:57 np0005548731 nova_compute[232433]:    </interface>
Dec  6 02:10:57 np0005548731 nova_compute[232433]:    <serial type="pty">
Dec  6 02:10:57 np0005548731 nova_compute[232433]:      <log file="/var/lib/nova/instances/34bfd039-04be-43ae-8bee-e37471f0fabe/console.log" append="off"/>
Dec  6 02:10:57 np0005548731 nova_compute[232433]:    </serial>
Dec  6 02:10:57 np0005548731 nova_compute[232433]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec  6 02:10:57 np0005548731 nova_compute[232433]:    <video>
Dec  6 02:10:57 np0005548731 nova_compute[232433]:      <model type="virtio"/>
Dec  6 02:10:57 np0005548731 nova_compute[232433]:    </video>
Dec  6 02:10:57 np0005548731 nova_compute[232433]:    <input type="tablet" bus="usb"/>
Dec  6 02:10:57 np0005548731 nova_compute[232433]:    <rng model="virtio">
Dec  6 02:10:57 np0005548731 nova_compute[232433]:      <backend model="random">/dev/urandom</backend>
Dec  6 02:10:57 np0005548731 nova_compute[232433]:    </rng>
Dec  6 02:10:57 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root"/>
Dec  6 02:10:57 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:10:57 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:10:57 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:10:57 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:10:57 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:10:57 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:10:57 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:10:57 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:10:57 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:10:57 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:10:57 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:10:57 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:10:57 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:10:57 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:10:57 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:10:57 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:10:57 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:10:57 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:10:57 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:10:57 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:10:57 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:10:57 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:10:57 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:10:57 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:10:57 np0005548731 nova_compute[232433]:    <controller type="usb" index="0"/>
Dec  6 02:10:57 np0005548731 nova_compute[232433]:    <memballoon model="virtio">
Dec  6 02:10:57 np0005548731 nova_compute[232433]:      <stats period="10"/>
Dec  6 02:10:57 np0005548731 nova_compute[232433]:    </memballoon>
Dec  6 02:10:57 np0005548731 nova_compute[232433]:  </devices>
Dec  6 02:10:57 np0005548731 nova_compute[232433]: </domain>
Dec  6 02:10:57 np0005548731 nova_compute[232433]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec  6 02:10:57 np0005548731 nova_compute[232433]: 2025-12-06 07:10:57.314 232437 DEBUG nova.compute.manager [None req-251994b0-76dd-420f-968c-0378d5e29d4f ef5fe6979bc448399956738faf6c8d3f 50fdc02b371b4fb9b701031b47f328f4 - - default default] [instance: 34bfd039-04be-43ae-8bee-e37471f0fabe] Preparing to wait for external event network-vif-plugged-7a36ceaa-7b3d-424c-91e9-3dfcaac10f3c prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec  6 02:10:57 np0005548731 nova_compute[232433]: 2025-12-06 07:10:57.315 232437 DEBUG oslo_concurrency.lockutils [None req-251994b0-76dd-420f-968c-0378d5e29d4f ef5fe6979bc448399956738faf6c8d3f 50fdc02b371b4fb9b701031b47f328f4 - - default default] Acquiring lock "34bfd039-04be-43ae-8bee-e37471f0fabe-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:10:57 np0005548731 nova_compute[232433]: 2025-12-06 07:10:57.315 232437 DEBUG oslo_concurrency.lockutils [None req-251994b0-76dd-420f-968c-0378d5e29d4f ef5fe6979bc448399956738faf6c8d3f 50fdc02b371b4fb9b701031b47f328f4 - - default default] Lock "34bfd039-04be-43ae-8bee-e37471f0fabe-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:10:57 np0005548731 nova_compute[232433]: 2025-12-06 07:10:57.316 232437 DEBUG oslo_concurrency.lockutils [None req-251994b0-76dd-420f-968c-0378d5e29d4f ef5fe6979bc448399956738faf6c8d3f 50fdc02b371b4fb9b701031b47f328f4 - - default default] Lock "34bfd039-04be-43ae-8bee-e37471f0fabe-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:10:57 np0005548731 nova_compute[232433]: 2025-12-06 07:10:57.316 232437 DEBUG nova.virt.libvirt.vif [None req-251994b0-76dd-420f-968c-0378d5e29d4f ef5fe6979bc448399956738faf6c8d3f 50fdc02b371b4fb9b701031b47f328f4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T07:10:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachInterfacesUnderV243Test-server-310644807',display_name='tempest-AttachInterfacesUnderV243Test-server-310644807',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-attachinterfacesunderv243test-server-310644807',id=50,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHbMMSUWiyMtN9cuuBp2EC9fia2AoVr6gsZjeT011Z/dGbvmY9q/cWhY7F6phVv+P4yf4RM8TJprhk1AbPCa7DbOPIhqhtk2bfgBisn7iMaQg0dDtm8aXGE+ltU88q+vDA==',key_name='tempest-keypair-427389425',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='50fdc02b371b4fb9b701031b47f328f4',ramdisk_id='',reservation_id='r-1lkgof8x',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesUnderV243Test-486339634',owner_user_name='tempest-AttachInterfacesUnderV243Test-486339634-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:10:48Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='ef5fe6979bc448399956738faf6c8d3f',uuid=34bfd039-04be-43ae-8bee-e37471f0fabe,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "7a36ceaa-7b3d-424c-91e9-3dfcaac10f3c", "address": "fa:16:3e:25:a6:a1", "network": {"id": "e6ac552d-917a-49f3-abe3-8df7221907a4", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-591275388-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": 
[{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "50fdc02b371b4fb9b701031b47f328f4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7a36ceaa-7b", "ovs_interfaceid": "7a36ceaa-7b3d-424c-91e9-3dfcaac10f3c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec  6 02:10:57 np0005548731 nova_compute[232433]: 2025-12-06 07:10:57.317 232437 DEBUG nova.network.os_vif_util [None req-251994b0-76dd-420f-968c-0378d5e29d4f ef5fe6979bc448399956738faf6c8d3f 50fdc02b371b4fb9b701031b47f328f4 - - default default] Converting VIF {"id": "7a36ceaa-7b3d-424c-91e9-3dfcaac10f3c", "address": "fa:16:3e:25:a6:a1", "network": {"id": "e6ac552d-917a-49f3-abe3-8df7221907a4", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-591275388-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "50fdc02b371b4fb9b701031b47f328f4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7a36ceaa-7b", "ovs_interfaceid": "7a36ceaa-7b3d-424c-91e9-3dfcaac10f3c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 02:10:57 np0005548731 nova_compute[232433]: 2025-12-06 07:10:57.317 232437 DEBUG nova.network.os_vif_util [None req-251994b0-76dd-420f-968c-0378d5e29d4f ef5fe6979bc448399956738faf6c8d3f 50fdc02b371b4fb9b701031b47f328f4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:25:a6:a1,bridge_name='br-int',has_traffic_filtering=True,id=7a36ceaa-7b3d-424c-91e9-3dfcaac10f3c,network=Network(e6ac552d-917a-49f3-abe3-8df7221907a4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7a36ceaa-7b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 02:10:57 np0005548731 nova_compute[232433]: 2025-12-06 07:10:57.318 232437 DEBUG os_vif [None req-251994b0-76dd-420f-968c-0378d5e29d4f ef5fe6979bc448399956738faf6c8d3f 50fdc02b371b4fb9b701031b47f328f4 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:25:a6:a1,bridge_name='br-int',has_traffic_filtering=True,id=7a36ceaa-7b3d-424c-91e9-3dfcaac10f3c,network=Network(e6ac552d-917a-49f3-abe3-8df7221907a4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7a36ceaa-7b') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec  6 02:10:57 np0005548731 nova_compute[232433]: 2025-12-06 07:10:57.319 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:10:57 np0005548731 nova_compute[232433]: 2025-12-06 07:10:57.319 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:10:57 np0005548731 nova_compute[232433]: 2025-12-06 07:10:57.320 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  6 02:10:57 np0005548731 nova_compute[232433]: 2025-12-06 07:10:57.325 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:10:57 np0005548731 nova_compute[232433]: 2025-12-06 07:10:57.325 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7a36ceaa-7b, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:10:57 np0005548731 nova_compute[232433]: 2025-12-06 07:10:57.326 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap7a36ceaa-7b, col_values=(('external_ids', {'iface-id': '7a36ceaa-7b3d-424c-91e9-3dfcaac10f3c', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:25:a6:a1', 'vm-uuid': '34bfd039-04be-43ae-8bee-e37471f0fabe'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:10:57 np0005548731 nova_compute[232433]: 2025-12-06 07:10:57.327 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:10:57 np0005548731 NetworkManager[49182]: <info>  [1765005057.3283] manager: (tap7a36ceaa-7b): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/87)
Dec  6 02:10:57 np0005548731 nova_compute[232433]: 2025-12-06 07:10:57.329 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  6 02:10:57 np0005548731 nova_compute[232433]: 2025-12-06 07:10:57.334 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:10:57 np0005548731 nova_compute[232433]: 2025-12-06 07:10:57.335 232437 INFO os_vif [None req-251994b0-76dd-420f-968c-0378d5e29d4f ef5fe6979bc448399956738faf6c8d3f 50fdc02b371b4fb9b701031b47f328f4 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:25:a6:a1,bridge_name='br-int',has_traffic_filtering=True,id=7a36ceaa-7b3d-424c-91e9-3dfcaac10f3c,network=Network(e6ac552d-917a-49f3-abe3-8df7221907a4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7a36ceaa-7b')#033[00m
Dec  6 02:10:57 np0005548731 nova_compute[232433]: 2025-12-06 07:10:57.392 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:10:57 np0005548731 nova_compute[232433]: 2025-12-06 07:10:57.407 232437 DEBUG nova.virt.libvirt.driver [None req-251994b0-76dd-420f-968c-0378d5e29d4f ef5fe6979bc448399956738faf6c8d3f 50fdc02b371b4fb9b701031b47f328f4 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  6 02:10:57 np0005548731 nova_compute[232433]: 2025-12-06 07:10:57.407 232437 DEBUG nova.virt.libvirt.driver [None req-251994b0-76dd-420f-968c-0378d5e29d4f ef5fe6979bc448399956738faf6c8d3f 50fdc02b371b4fb9b701031b47f328f4 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  6 02:10:57 np0005548731 nova_compute[232433]: 2025-12-06 07:10:57.407 232437 DEBUG nova.virt.libvirt.driver [None req-251994b0-76dd-420f-968c-0378d5e29d4f ef5fe6979bc448399956738faf6c8d3f 50fdc02b371b4fb9b701031b47f328f4 - - default default] No VIF found with MAC fa:16:3e:25:a6:a1, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec  6 02:10:57 np0005548731 nova_compute[232433]: 2025-12-06 07:10:57.408 232437 INFO nova.virt.libvirt.driver [None req-251994b0-76dd-420f-968c-0378d5e29d4f ef5fe6979bc448399956738faf6c8d3f 50fdc02b371b4fb9b701031b47f328f4 - - default default] [instance: 34bfd039-04be-43ae-8bee-e37471f0fabe] Using config drive#033[00m
Dec  6 02:10:57 np0005548731 nova_compute[232433]: 2025-12-06 07:10:57.430 232437 DEBUG nova.storage.rbd_utils [None req-251994b0-76dd-420f-968c-0378d5e29d4f ef5fe6979bc448399956738faf6c8d3f 50fdc02b371b4fb9b701031b47f328f4 - - default default] rbd image 34bfd039-04be-43ae-8bee-e37471f0fabe_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:10:57 np0005548731 ovn_controller[133927]: 2025-12-06T07:10:57Z|00147|memory_trim|INFO|Detected inactivity (last active 30001 ms ago): trimming memory
Dec  6 02:10:58 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:10:58 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:10:58 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:10:58.166 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:10:58 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:10:58 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:10:58 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:10:58.362 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:10:58 np0005548731 nova_compute[232433]: 2025-12-06 07:10:58.374 232437 INFO nova.virt.libvirt.driver [None req-251994b0-76dd-420f-968c-0378d5e29d4f ef5fe6979bc448399956738faf6c8d3f 50fdc02b371b4fb9b701031b47f328f4 - - default default] [instance: 34bfd039-04be-43ae-8bee-e37471f0fabe] Creating config drive at /var/lib/nova/instances/34bfd039-04be-43ae-8bee-e37471f0fabe/disk.config#033[00m
Dec  6 02:10:58 np0005548731 nova_compute[232433]: 2025-12-06 07:10:58.382 232437 DEBUG oslo_concurrency.processutils [None req-251994b0-76dd-420f-968c-0378d5e29d4f ef5fe6979bc448399956738faf6c8d3f 50fdc02b371b4fb9b701031b47f328f4 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/34bfd039-04be-43ae-8bee-e37471f0fabe/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpitg8g75b execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:10:58 np0005548731 nova_compute[232433]: 2025-12-06 07:10:58.512 232437 DEBUG oslo_concurrency.processutils [None req-251994b0-76dd-420f-968c-0378d5e29d4f ef5fe6979bc448399956738faf6c8d3f 50fdc02b371b4fb9b701031b47f328f4 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/34bfd039-04be-43ae-8bee-e37471f0fabe/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpitg8g75b" returned: 0 in 0.130s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:10:58 np0005548731 nova_compute[232433]: 2025-12-06 07:10:58.537 232437 DEBUG nova.storage.rbd_utils [None req-251994b0-76dd-420f-968c-0378d5e29d4f ef5fe6979bc448399956738faf6c8d3f 50fdc02b371b4fb9b701031b47f328f4 - - default default] rbd image 34bfd039-04be-43ae-8bee-e37471f0fabe_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:10:58 np0005548731 nova_compute[232433]: 2025-12-06 07:10:58.542 232437 DEBUG oslo_concurrency.processutils [None req-251994b0-76dd-420f-968c-0378d5e29d4f ef5fe6979bc448399956738faf6c8d3f 50fdc02b371b4fb9b701031b47f328f4 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/34bfd039-04be-43ae-8bee-e37471f0fabe/disk.config 34bfd039-04be-43ae-8bee-e37471f0fabe_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:10:59 np0005548731 nova_compute[232433]: 2025-12-06 07:10:59.075 232437 DEBUG nova.network.neutron [req-efe25949-6e42-4c05-9277-dd7ca0db827a req-456a30a6-ee8a-4522-88b1-31ae7fb9a21f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 34bfd039-04be-43ae-8bee-e37471f0fabe] Updated VIF entry in instance network info cache for port 7a36ceaa-7b3d-424c-91e9-3dfcaac10f3c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  6 02:10:59 np0005548731 nova_compute[232433]: 2025-12-06 07:10:59.076 232437 DEBUG nova.network.neutron [req-efe25949-6e42-4c05-9277-dd7ca0db827a req-456a30a6-ee8a-4522-88b1-31ae7fb9a21f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 34bfd039-04be-43ae-8bee-e37471f0fabe] Updating instance_info_cache with network_info: [{"id": "7a36ceaa-7b3d-424c-91e9-3dfcaac10f3c", "address": "fa:16:3e:25:a6:a1", "network": {"id": "e6ac552d-917a-49f3-abe3-8df7221907a4", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-591275388-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "50fdc02b371b4fb9b701031b47f328f4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7a36ceaa-7b", "ovs_interfaceid": "7a36ceaa-7b3d-424c-91e9-3dfcaac10f3c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 02:10:59 np0005548731 nova_compute[232433]: 2025-12-06 07:10:59.099 232437 DEBUG oslo_concurrency.lockutils [req-efe25949-6e42-4c05-9277-dd7ca0db827a req-456a30a6-ee8a-4522-88b1-31ae7fb9a21f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-34bfd039-04be-43ae-8bee-e37471f0fabe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 02:10:59 np0005548731 nova_compute[232433]: 2025-12-06 07:10:59.238 232437 DEBUG oslo_concurrency.processutils [None req-251994b0-76dd-420f-968c-0378d5e29d4f ef5fe6979bc448399956738faf6c8d3f 50fdc02b371b4fb9b701031b47f328f4 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/34bfd039-04be-43ae-8bee-e37471f0fabe/disk.config 34bfd039-04be-43ae-8bee-e37471f0fabe_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.696s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:10:59 np0005548731 nova_compute[232433]: 2025-12-06 07:10:59.238 232437 INFO nova.virt.libvirt.driver [None req-251994b0-76dd-420f-968c-0378d5e29d4f ef5fe6979bc448399956738faf6c8d3f 50fdc02b371b4fb9b701031b47f328f4 - - default default] [instance: 34bfd039-04be-43ae-8bee-e37471f0fabe] Deleting local config drive /var/lib/nova/instances/34bfd039-04be-43ae-8bee-e37471f0fabe/disk.config because it was imported into RBD.#033[00m
Dec  6 02:10:59 np0005548731 kernel: tap7a36ceaa-7b: entered promiscuous mode
Dec  6 02:10:59 np0005548731 nova_compute[232433]: 2025-12-06 07:10:59.290 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:10:59 np0005548731 ovn_controller[133927]: 2025-12-06T07:10:59Z|00148|binding|INFO|Claiming lport 7a36ceaa-7b3d-424c-91e9-3dfcaac10f3c for this chassis.
Dec  6 02:10:59 np0005548731 ovn_controller[133927]: 2025-12-06T07:10:59Z|00149|binding|INFO|7a36ceaa-7b3d-424c-91e9-3dfcaac10f3c: Claiming fa:16:3e:25:a6:a1 10.100.0.7
Dec  6 02:10:59 np0005548731 NetworkManager[49182]: <info>  [1765005059.2916] manager: (tap7a36ceaa-7b): new Tun device (/org/freedesktop/NetworkManager/Devices/88)
Dec  6 02:10:59 np0005548731 nova_compute[232433]: 2025-12-06 07:10:59.293 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:10:59 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:10:59.306 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:25:a6:a1 10.100.0.7'], port_security=['fa:16:3e:25:a6:a1 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '34bfd039-04be-43ae-8bee-e37471f0fabe', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e6ac552d-917a-49f3-abe3-8df7221907a4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '50fdc02b371b4fb9b701031b47f328f4', 'neutron:revision_number': '2', 'neutron:security_group_ids': '21d8aee7-6b55-46aa-97b6-6c9d816e3a65', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=eaee5e93-8d75-41c7-b44d-1f644f0df898, chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], logical_port=7a36ceaa-7b3d-424c-91e9-3dfcaac10f3c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 02:10:59 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:10:59.307 143965 INFO neutron.agent.ovn.metadata.agent [-] Port 7a36ceaa-7b3d-424c-91e9-3dfcaac10f3c in datapath e6ac552d-917a-49f3-abe3-8df7221907a4 bound to our chassis#033[00m
Dec  6 02:10:59 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:10:59.309 143965 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network e6ac552d-917a-49f3-abe3-8df7221907a4#033[00m
Dec  6 02:10:59 np0005548731 systemd-udevd[255063]: Network interface NamePolicy= disabled on kernel command line.
Dec  6 02:10:59 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:10:59.321 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[cd642a08-8446-4011-b032-b6e7100e004d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:10:59 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:10:59.322 143965 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tape6ac552d-91 in ovnmeta-e6ac552d-917a-49f3-abe3-8df7221907a4 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Dec  6 02:10:59 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:10:59.325 236668 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tape6ac552d-90 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Dec  6 02:10:59 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:10:59.325 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[93146ca0-7d37-4e9d-b9db-dbc5e4d8873f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:10:59 np0005548731 systemd-machined[195355]: New machine qemu-23-instance-00000032.
Dec  6 02:10:59 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:10:59.326 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[2dca37dd-83f4-4a7f-835d-1f6eb92cb0d8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:10:59 np0005548731 NetworkManager[49182]: <info>  [1765005059.3390] device (tap7a36ceaa-7b): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec  6 02:10:59 np0005548731 NetworkManager[49182]: <info>  [1765005059.3404] device (tap7a36ceaa-7b): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec  6 02:10:59 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:10:59.339 144079 DEBUG oslo.privsep.daemon [-] privsep: reply[8c0e547c-251b-4cc6-a6e3-800907fc83c8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:10:59 np0005548731 systemd[1]: Started Virtual Machine qemu-23-instance-00000032.
Dec  6 02:10:59 np0005548731 nova_compute[232433]: 2025-12-06 07:10:59.360 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:10:59 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:10:59.365 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[82f1e9f5-015f-4aa4-968a-4bba38ed6c3a]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:10:59 np0005548731 ovn_controller[133927]: 2025-12-06T07:10:59Z|00150|binding|INFO|Setting lport 7a36ceaa-7b3d-424c-91e9-3dfcaac10f3c ovn-installed in OVS
Dec  6 02:10:59 np0005548731 ovn_controller[133927]: 2025-12-06T07:10:59Z|00151|binding|INFO|Setting lport 7a36ceaa-7b3d-424c-91e9-3dfcaac10f3c up in Southbound
Dec  6 02:10:59 np0005548731 nova_compute[232433]: 2025-12-06 07:10:59.371 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:10:59 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:10:59.401 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[be3c58fe-754d-442f-8592-d3865ba967c4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:10:59 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:10:59.406 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[6df3d171-f9e5-4e98-b7f0-7060e0a7e13d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:10:59 np0005548731 NetworkManager[49182]: <info>  [1765005059.4079] manager: (tape6ac552d-90): new Veth device (/org/freedesktop/NetworkManager/Devices/89)
Dec  6 02:10:59 np0005548731 systemd-udevd[255067]: Network interface NamePolicy= disabled on kernel command line.
Dec  6 02:10:59 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:10:59.436 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[3bc46f63-b0c6-42a3-a089-fe51c3884e8f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:10:59 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:10:59.440 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[e8b42bdb-fe60-4965-a48f-b2622eb050f8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:10:59 np0005548731 NetworkManager[49182]: <info>  [1765005059.4642] device (tape6ac552d-90): carrier: link connected
Dec  6 02:10:59 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:10:59.470 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[42908440-2b9a-4d71-8887-f6d442e61a73]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:10:59 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:10:59.485 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[40e41a76-be81-4c04-bae5-28c249ac2d45]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape6ac552d-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ca:86:a5'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 52], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 532147, 'reachable_time': 16211, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 255096, 'error': None, 'target': 'ovnmeta-e6ac552d-917a-49f3-abe3-8df7221907a4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:10:59 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:10:59.500 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[fb400bfd-00d3-4f3c-aa81-d6cbfe64ed73]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feca:86a5'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 532147, 'tstamp': 532147}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 255097, 'error': None, 'target': 'ovnmeta-e6ac552d-917a-49f3-abe3-8df7221907a4', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:10:59 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:10:59.515 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[13f1ba64-9f78-4687-bf7c-31682dbe070d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape6ac552d-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ca:86:a5'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 52], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 532147, 'reachable_time': 16211, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 255098, 'error': None, 'target': 'ovnmeta-e6ac552d-917a-49f3-abe3-8df7221907a4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:10:59 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:10:59.542 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[04189548-eccc-48da-822a-a982ca98058a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:10:59 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:10:59.609 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[2978e48d-d1d5-457b-9ca6-e881c1704d42]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:10:59 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:10:59.612 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape6ac552d-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:10:59 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:10:59.612 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  6 02:10:59 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:10:59.612 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape6ac552d-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:10:59 np0005548731 nova_compute[232433]: 2025-12-06 07:10:59.614 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:10:59 np0005548731 kernel: tape6ac552d-90: entered promiscuous mode
Dec  6 02:10:59 np0005548731 NetworkManager[49182]: <info>  [1765005059.6149] manager: (tape6ac552d-90): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/90)
Dec  6 02:10:59 np0005548731 nova_compute[232433]: 2025-12-06 07:10:59.616 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:10:59 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:10:59.617 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tape6ac552d-90, col_values=(('external_ids', {'iface-id': 'c0d8bdd8-dc44-440a-8665-b4279c85df6c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:10:59 np0005548731 nova_compute[232433]: 2025-12-06 07:10:59.618 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:10:59 np0005548731 ovn_controller[133927]: 2025-12-06T07:10:59Z|00152|binding|INFO|Releasing lport c0d8bdd8-dc44-440a-8665-b4279c85df6c from this chassis (sb_readonly=0)
Dec  6 02:10:59 np0005548731 nova_compute[232433]: 2025-12-06 07:10:59.673 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:10:59 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:10:59.674 143965 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/e6ac552d-917a-49f3-abe3-8df7221907a4.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/e6ac552d-917a-49f3-abe3-8df7221907a4.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Dec  6 02:10:59 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:10:59.675 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[2eb7b239-2ce6-4e81-9ec7-a065d7eafd52]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:10:59 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:10:59.675 143965 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec  6 02:10:59 np0005548731 ovn_metadata_agent[143960]: global
Dec  6 02:10:59 np0005548731 ovn_metadata_agent[143960]:    log         /dev/log local0 debug
Dec  6 02:10:59 np0005548731 ovn_metadata_agent[143960]:    log-tag     haproxy-metadata-proxy-e6ac552d-917a-49f3-abe3-8df7221907a4
Dec  6 02:10:59 np0005548731 ovn_metadata_agent[143960]:    user        root
Dec  6 02:10:59 np0005548731 ovn_metadata_agent[143960]:    group       root
Dec  6 02:10:59 np0005548731 ovn_metadata_agent[143960]:    maxconn     1024
Dec  6 02:10:59 np0005548731 ovn_metadata_agent[143960]:    pidfile     /var/lib/neutron/external/pids/e6ac552d-917a-49f3-abe3-8df7221907a4.pid.haproxy
Dec  6 02:10:59 np0005548731 ovn_metadata_agent[143960]:    daemon
Dec  6 02:10:59 np0005548731 ovn_metadata_agent[143960]: 
Dec  6 02:10:59 np0005548731 ovn_metadata_agent[143960]: defaults
Dec  6 02:10:59 np0005548731 ovn_metadata_agent[143960]:    log global
Dec  6 02:10:59 np0005548731 ovn_metadata_agent[143960]:    mode http
Dec  6 02:10:59 np0005548731 ovn_metadata_agent[143960]:    option httplog
Dec  6 02:10:59 np0005548731 ovn_metadata_agent[143960]:    option dontlognull
Dec  6 02:10:59 np0005548731 ovn_metadata_agent[143960]:    option http-server-close
Dec  6 02:10:59 np0005548731 ovn_metadata_agent[143960]:    option forwardfor
Dec  6 02:10:59 np0005548731 ovn_metadata_agent[143960]:    retries                 3
Dec  6 02:10:59 np0005548731 ovn_metadata_agent[143960]:    timeout http-request    30s
Dec  6 02:10:59 np0005548731 ovn_metadata_agent[143960]:    timeout connect         30s
Dec  6 02:10:59 np0005548731 ovn_metadata_agent[143960]:    timeout client          32s
Dec  6 02:10:59 np0005548731 ovn_metadata_agent[143960]:    timeout server          32s
Dec  6 02:10:59 np0005548731 ovn_metadata_agent[143960]:    timeout http-keep-alive 30s
Dec  6 02:10:59 np0005548731 ovn_metadata_agent[143960]: 
Dec  6 02:10:59 np0005548731 ovn_metadata_agent[143960]: 
Dec  6 02:10:59 np0005548731 ovn_metadata_agent[143960]: listen listener
Dec  6 02:10:59 np0005548731 ovn_metadata_agent[143960]:    bind 169.254.169.254:80
Dec  6 02:10:59 np0005548731 ovn_metadata_agent[143960]:    server metadata /var/lib/neutron/metadata_proxy
Dec  6 02:10:59 np0005548731 ovn_metadata_agent[143960]:    http-request add-header X-OVN-Network-ID e6ac552d-917a-49f3-abe3-8df7221907a4
Dec  6 02:10:59 np0005548731 ovn_metadata_agent[143960]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Dec  6 02:10:59 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:10:59.676 143965 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-e6ac552d-917a-49f3-abe3-8df7221907a4', 'env', 'PROCESS_TAG=haproxy-e6ac552d-917a-49f3-abe3-8df7221907a4', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/e6ac552d-917a-49f3-abe3-8df7221907a4.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Dec  6 02:10:59 np0005548731 nova_compute[232433]: 2025-12-06 07:10:59.872 232437 DEBUG nova.virt.driver [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Emitting event <LifecycleEvent: 1765005059.8721538, 34bfd039-04be-43ae-8bee-e37471f0fabe => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 02:10:59 np0005548731 nova_compute[232433]: 2025-12-06 07:10:59.873 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 34bfd039-04be-43ae-8bee-e37471f0fabe] VM Started (Lifecycle Event)#033[00m
Dec  6 02:10:59 np0005548731 nova_compute[232433]: 2025-12-06 07:10:59.920 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 34bfd039-04be-43ae-8bee-e37471f0fabe] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:10:59 np0005548731 nova_compute[232433]: 2025-12-06 07:10:59.923 232437 DEBUG nova.virt.driver [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Emitting event <LifecycleEvent: 1765005059.8723762, 34bfd039-04be-43ae-8bee-e37471f0fabe => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 02:10:59 np0005548731 nova_compute[232433]: 2025-12-06 07:10:59.924 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 34bfd039-04be-43ae-8bee-e37471f0fabe] VM Paused (Lifecycle Event)#033[00m
Dec  6 02:10:59 np0005548731 nova_compute[232433]: 2025-12-06 07:10:59.940 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 34bfd039-04be-43ae-8bee-e37471f0fabe] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:10:59 np0005548731 nova_compute[232433]: 2025-12-06 07:10:59.942 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 34bfd039-04be-43ae-8bee-e37471f0fabe] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  6 02:10:59 np0005548731 nova_compute[232433]: 2025-12-06 07:10:59.960 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 34bfd039-04be-43ae-8bee-e37471f0fabe] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  6 02:11:00 np0005548731 podman[255168]: 2025-12-06 07:11:00.012202161 +0000 UTC m=+0.023247286 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec  6 02:11:00 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:11:00 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:11:00 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:11:00.168 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:11:00 np0005548731 nova_compute[232433]: 2025-12-06 07:11:00.269 232437 DEBUG nova.compute.manager [req-f227fd28-f8f2-4255-8213-7cffe5c2d545 req-7eb01483-04f7-443e-9b7c-b9ce1b67731a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 34bfd039-04be-43ae-8bee-e37471f0fabe] Received event network-vif-plugged-7a36ceaa-7b3d-424c-91e9-3dfcaac10f3c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:11:00 np0005548731 nova_compute[232433]: 2025-12-06 07:11:00.269 232437 DEBUG oslo_concurrency.lockutils [req-f227fd28-f8f2-4255-8213-7cffe5c2d545 req-7eb01483-04f7-443e-9b7c-b9ce1b67731a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "34bfd039-04be-43ae-8bee-e37471f0fabe-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:11:00 np0005548731 nova_compute[232433]: 2025-12-06 07:11:00.269 232437 DEBUG oslo_concurrency.lockutils [req-f227fd28-f8f2-4255-8213-7cffe5c2d545 req-7eb01483-04f7-443e-9b7c-b9ce1b67731a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "34bfd039-04be-43ae-8bee-e37471f0fabe-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:11:00 np0005548731 nova_compute[232433]: 2025-12-06 07:11:00.269 232437 DEBUG oslo_concurrency.lockutils [req-f227fd28-f8f2-4255-8213-7cffe5c2d545 req-7eb01483-04f7-443e-9b7c-b9ce1b67731a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "34bfd039-04be-43ae-8bee-e37471f0fabe-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:11:00 np0005548731 nova_compute[232433]: 2025-12-06 07:11:00.270 232437 DEBUG nova.compute.manager [req-f227fd28-f8f2-4255-8213-7cffe5c2d545 req-7eb01483-04f7-443e-9b7c-b9ce1b67731a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 34bfd039-04be-43ae-8bee-e37471f0fabe] Processing event network-vif-plugged-7a36ceaa-7b3d-424c-91e9-3dfcaac10f3c _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Dec  6 02:11:00 np0005548731 nova_compute[232433]: 2025-12-06 07:11:00.270 232437 DEBUG nova.compute.manager [None req-251994b0-76dd-420f-968c-0378d5e29d4f ef5fe6979bc448399956738faf6c8d3f 50fdc02b371b4fb9b701031b47f328f4 - - default default] [instance: 34bfd039-04be-43ae-8bee-e37471f0fabe] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec  6 02:11:00 np0005548731 nova_compute[232433]: 2025-12-06 07:11:00.274 232437 DEBUG nova.virt.driver [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Emitting event <LifecycleEvent: 1765005060.27315, 34bfd039-04be-43ae-8bee-e37471f0fabe => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 02:11:00 np0005548731 nova_compute[232433]: 2025-12-06 07:11:00.274 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 34bfd039-04be-43ae-8bee-e37471f0fabe] VM Resumed (Lifecycle Event)
Dec  6 02:11:00 np0005548731 nova_compute[232433]: 2025-12-06 07:11:00.275 232437 DEBUG nova.virt.libvirt.driver [None req-251994b0-76dd-420f-968c-0378d5e29d4f ef5fe6979bc448399956738faf6c8d3f 50fdc02b371b4fb9b701031b47f328f4 - - default default] [instance: 34bfd039-04be-43ae-8bee-e37471f0fabe] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec  6 02:11:00 np0005548731 nova_compute[232433]: 2025-12-06 07:11:00.278 232437 INFO nova.virt.libvirt.driver [-] [instance: 34bfd039-04be-43ae-8bee-e37471f0fabe] Instance spawned successfully.
Dec  6 02:11:00 np0005548731 nova_compute[232433]: 2025-12-06 07:11:00.278 232437 DEBUG nova.virt.libvirt.driver [None req-251994b0-76dd-420f-968c-0378d5e29d4f ef5fe6979bc448399956738faf6c8d3f 50fdc02b371b4fb9b701031b47f328f4 - - default default] [instance: 34bfd039-04be-43ae-8bee-e37471f0fabe] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec  6 02:11:00 np0005548731 nova_compute[232433]: 2025-12-06 07:11:00.307 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 34bfd039-04be-43ae-8bee-e37471f0fabe] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec  6 02:11:00 np0005548731 nova_compute[232433]: 2025-12-06 07:11:00.312 232437 DEBUG nova.virt.libvirt.driver [None req-251994b0-76dd-420f-968c-0378d5e29d4f ef5fe6979bc448399956738faf6c8d3f 50fdc02b371b4fb9b701031b47f328f4 - - default default] [instance: 34bfd039-04be-43ae-8bee-e37471f0fabe] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec  6 02:11:00 np0005548731 nova_compute[232433]: 2025-12-06 07:11:00.312 232437 DEBUG nova.virt.libvirt.driver [None req-251994b0-76dd-420f-968c-0378d5e29d4f ef5fe6979bc448399956738faf6c8d3f 50fdc02b371b4fb9b701031b47f328f4 - - default default] [instance: 34bfd039-04be-43ae-8bee-e37471f0fabe] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec  6 02:11:00 np0005548731 nova_compute[232433]: 2025-12-06 07:11:00.313 232437 DEBUG nova.virt.libvirt.driver [None req-251994b0-76dd-420f-968c-0378d5e29d4f ef5fe6979bc448399956738faf6c8d3f 50fdc02b371b4fb9b701031b47f328f4 - - default default] [instance: 34bfd039-04be-43ae-8bee-e37471f0fabe] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec  6 02:11:00 np0005548731 nova_compute[232433]: 2025-12-06 07:11:00.313 232437 DEBUG nova.virt.libvirt.driver [None req-251994b0-76dd-420f-968c-0378d5e29d4f ef5fe6979bc448399956738faf6c8d3f 50fdc02b371b4fb9b701031b47f328f4 - - default default] [instance: 34bfd039-04be-43ae-8bee-e37471f0fabe] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec  6 02:11:00 np0005548731 nova_compute[232433]: 2025-12-06 07:11:00.313 232437 DEBUG nova.virt.libvirt.driver [None req-251994b0-76dd-420f-968c-0378d5e29d4f ef5fe6979bc448399956738faf6c8d3f 50fdc02b371b4fb9b701031b47f328f4 - - default default] [instance: 34bfd039-04be-43ae-8bee-e37471f0fabe] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec  6 02:11:00 np0005548731 nova_compute[232433]: 2025-12-06 07:11:00.313 232437 DEBUG nova.virt.libvirt.driver [None req-251994b0-76dd-420f-968c-0378d5e29d4f ef5fe6979bc448399956738faf6c8d3f 50fdc02b371b4fb9b701031b47f328f4 - - default default] [instance: 34bfd039-04be-43ae-8bee-e37471f0fabe] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec  6 02:11:00 np0005548731 nova_compute[232433]: 2025-12-06 07:11:00.317 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 34bfd039-04be-43ae-8bee-e37471f0fabe] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec  6 02:11:00 np0005548731 nova_compute[232433]: 2025-12-06 07:11:00.359 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 34bfd039-04be-43ae-8bee-e37471f0fabe] During sync_power_state the instance has a pending task (spawning). Skip.
Dec  6 02:11:00 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:11:00 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:11:00 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:11:00.365 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:11:00 np0005548731 nova_compute[232433]: 2025-12-06 07:11:00.391 232437 INFO nova.compute.manager [None req-251994b0-76dd-420f-968c-0378d5e29d4f ef5fe6979bc448399956738faf6c8d3f 50fdc02b371b4fb9b701031b47f328f4 - - default default] [instance: 34bfd039-04be-43ae-8bee-e37471f0fabe] Took 11.65 seconds to spawn the instance on the hypervisor.
Dec  6 02:11:00 np0005548731 nova_compute[232433]: 2025-12-06 07:11:00.391 232437 DEBUG nova.compute.manager [None req-251994b0-76dd-420f-968c-0378d5e29d4f ef5fe6979bc448399956738faf6c8d3f 50fdc02b371b4fb9b701031b47f328f4 - - default default] [instance: 34bfd039-04be-43ae-8bee-e37471f0fabe] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec  6 02:11:00 np0005548731 podman[255168]: 2025-12-06 07:11:00.420732004 +0000 UTC m=+0.431777159 container create 8ad82285b90b63e9482ade089c91e86904bfdb2dca9a815395a67b2ea07af1ac (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e6ac552d-917a-49f3-abe3-8df7221907a4, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3)
Dec  6 02:11:00 np0005548731 nova_compute[232433]: 2025-12-06 07:11:00.478 232437 INFO nova.compute.manager [None req-251994b0-76dd-420f-968c-0378d5e29d4f ef5fe6979bc448399956738faf6c8d3f 50fdc02b371b4fb9b701031b47f328f4 - - default default] [instance: 34bfd039-04be-43ae-8bee-e37471f0fabe] Took 12.72 seconds to build instance.
Dec  6 02:11:00 np0005548731 nova_compute[232433]: 2025-12-06 07:11:00.511 232437 DEBUG oslo_concurrency.lockutils [None req-251994b0-76dd-420f-968c-0378d5e29d4f ef5fe6979bc448399956738faf6c8d3f 50fdc02b371b4fb9b701031b47f328f4 - - default default] Lock "34bfd039-04be-43ae-8bee-e37471f0fabe" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.864s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  6 02:11:00 np0005548731 systemd[1]: Started libpod-conmon-8ad82285b90b63e9482ade089c91e86904bfdb2dca9a815395a67b2ea07af1ac.scope.
Dec  6 02:11:00 np0005548731 systemd[1]: Started libcrun container.
Dec  6 02:11:00 np0005548731 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/277f9e71b6f7a59ec50be33bc92a57dff5871cf40928814d6628811506229787/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec  6 02:11:00 np0005548731 podman[255168]: 2025-12-06 07:11:00.561393922 +0000 UTC m=+0.572439067 container init 8ad82285b90b63e9482ade089c91e86904bfdb2dca9a815395a67b2ea07af1ac (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e6ac552d-917a-49f3-abe3-8df7221907a4, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  6 02:11:00 np0005548731 podman[255168]: 2025-12-06 07:11:00.568793325 +0000 UTC m=+0.579838440 container start 8ad82285b90b63e9482ade089c91e86904bfdb2dca9a815395a67b2ea07af1ac (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e6ac552d-917a-49f3-abe3-8df7221907a4, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec  6 02:11:00 np0005548731 neutron-haproxy-ovnmeta-e6ac552d-917a-49f3-abe3-8df7221907a4[255184]: [NOTICE]   (255188) : New worker (255190) forked
Dec  6 02:11:00 np0005548731 neutron-haproxy-ovnmeta-e6ac552d-917a-49f3-abe3-8df7221907a4[255184]: [NOTICE]   (255188) : Loading success.
Dec  6 02:11:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:11:00.853 143965 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  6 02:11:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:11:00.854 143965 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  6 02:11:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:11:00.855 143965 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  6 02:11:00 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e218 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:11:01 np0005548731 nova_compute[232433]: 2025-12-06 07:11:01.683 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 02:11:02 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:11:02 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:11:02 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:11:02.170 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:11:02 np0005548731 nova_compute[232433]: 2025-12-06 07:11:02.327 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 02:11:02 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:11:02 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:11:02 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:11:02.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:11:02 np0005548731 nova_compute[232433]: 2025-12-06 07:11:02.448 232437 DEBUG nova.compute.manager [req-2c28281f-8658-4ec3-839f-a72ed825e7a4 req-40c673ff-4bda-4826-b076-ed5230b4aa61 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 34bfd039-04be-43ae-8bee-e37471f0fabe] Received event network-vif-plugged-7a36ceaa-7b3d-424c-91e9-3dfcaac10f3c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec  6 02:11:02 np0005548731 nova_compute[232433]: 2025-12-06 07:11:02.449 232437 DEBUG oslo_concurrency.lockutils [req-2c28281f-8658-4ec3-839f-a72ed825e7a4 req-40c673ff-4bda-4826-b076-ed5230b4aa61 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "34bfd039-04be-43ae-8bee-e37471f0fabe-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  6 02:11:02 np0005548731 nova_compute[232433]: 2025-12-06 07:11:02.449 232437 DEBUG oslo_concurrency.lockutils [req-2c28281f-8658-4ec3-839f-a72ed825e7a4 req-40c673ff-4bda-4826-b076-ed5230b4aa61 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "34bfd039-04be-43ae-8bee-e37471f0fabe-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  6 02:11:02 np0005548731 nova_compute[232433]: 2025-12-06 07:11:02.449 232437 DEBUG oslo_concurrency.lockutils [req-2c28281f-8658-4ec3-839f-a72ed825e7a4 req-40c673ff-4bda-4826-b076-ed5230b4aa61 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "34bfd039-04be-43ae-8bee-e37471f0fabe-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  6 02:11:02 np0005548731 nova_compute[232433]: 2025-12-06 07:11:02.450 232437 DEBUG nova.compute.manager [req-2c28281f-8658-4ec3-839f-a72ed825e7a4 req-40c673ff-4bda-4826-b076-ed5230b4aa61 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 34bfd039-04be-43ae-8bee-e37471f0fabe] No waiting events found dispatching network-vif-plugged-7a36ceaa-7b3d-424c-91e9-3dfcaac10f3c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec  6 02:11:02 np0005548731 nova_compute[232433]: 2025-12-06 07:11:02.450 232437 WARNING nova.compute.manager [req-2c28281f-8658-4ec3-839f-a72ed825e7a4 req-40c673ff-4bda-4826-b076-ed5230b4aa61 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 34bfd039-04be-43ae-8bee-e37471f0fabe] Received unexpected event network-vif-plugged-7a36ceaa-7b3d-424c-91e9-3dfcaac10f3c for instance with vm_state active and task_state None.
Dec  6 02:11:02 np0005548731 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec  6 02:11:04 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:11:04 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:11:04 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:11:04.171 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:11:04 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:11:04 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:11:04 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:11:04.371 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:11:04 np0005548731 NetworkManager[49182]: <info>  [1765005064.6570] manager: (patch-provnet-9e78c1a1-68f4-477a-abaa-13a98bde06e5-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/91)
Dec  6 02:11:04 np0005548731 NetworkManager[49182]: <info>  [1765005064.6582] manager: (patch-br-int-to-provnet-9e78c1a1-68f4-477a-abaa-13a98bde06e5): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/92)
Dec  6 02:11:04 np0005548731 nova_compute[232433]: 2025-12-06 07:11:04.657 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 02:11:04 np0005548731 nova_compute[232433]: 2025-12-06 07:11:04.765 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 02:11:04 np0005548731 ovn_controller[133927]: 2025-12-06T07:11:04Z|00153|binding|INFO|Releasing lport c0d8bdd8-dc44-440a-8665-b4279c85df6c from this chassis (sb_readonly=0)
Dec  6 02:11:04 np0005548731 nova_compute[232433]: 2025-12-06 07:11:04.778 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 02:11:05 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e218 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:11:06 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:11:06 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:11:06 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:11:06.172 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:11:06 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:11:06 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:11:06 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:11:06.373 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:11:06 np0005548731 nova_compute[232433]: 2025-12-06 07:11:06.502 232437 DEBUG nova.compute.manager [req-2d0548c0-aed8-413c-9bdf-6564875e5a43 req-f4c60693-9e3e-4749-a0d7-eadb03eb6e1b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 34bfd039-04be-43ae-8bee-e37471f0fabe] Received event network-changed-7a36ceaa-7b3d-424c-91e9-3dfcaac10f3c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec  6 02:11:06 np0005548731 nova_compute[232433]: 2025-12-06 07:11:06.503 232437 DEBUG nova.compute.manager [req-2d0548c0-aed8-413c-9bdf-6564875e5a43 req-f4c60693-9e3e-4749-a0d7-eadb03eb6e1b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 34bfd039-04be-43ae-8bee-e37471f0fabe] Refreshing instance network info cache due to event network-changed-7a36ceaa-7b3d-424c-91e9-3dfcaac10f3c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec  6 02:11:06 np0005548731 nova_compute[232433]: 2025-12-06 07:11:06.503 232437 DEBUG oslo_concurrency.lockutils [req-2d0548c0-aed8-413c-9bdf-6564875e5a43 req-f4c60693-9e3e-4749-a0d7-eadb03eb6e1b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-34bfd039-04be-43ae-8bee-e37471f0fabe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec  6 02:11:06 np0005548731 nova_compute[232433]: 2025-12-06 07:11:06.503 232437 DEBUG oslo_concurrency.lockutils [req-2d0548c0-aed8-413c-9bdf-6564875e5a43 req-f4c60693-9e3e-4749-a0d7-eadb03eb6e1b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-34bfd039-04be-43ae-8bee-e37471f0fabe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec  6 02:11:06 np0005548731 nova_compute[232433]: 2025-12-06 07:11:06.503 232437 DEBUG nova.network.neutron [req-2d0548c0-aed8-413c-9bdf-6564875e5a43 req-f4c60693-9e3e-4749-a0d7-eadb03eb6e1b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 34bfd039-04be-43ae-8bee-e37471f0fabe] Refreshing network info cache for port 7a36ceaa-7b3d-424c-91e9-3dfcaac10f3c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec  6 02:11:06 np0005548731 nova_compute[232433]: 2025-12-06 07:11:06.685 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 02:11:07 np0005548731 nova_compute[232433]: 2025-12-06 07:11:07.329 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 02:11:08 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:11:08 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:11:08 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:11:08.174 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:11:08 np0005548731 nova_compute[232433]: 2025-12-06 07:11:08.353 232437 DEBUG nova.network.neutron [req-2d0548c0-aed8-413c-9bdf-6564875e5a43 req-f4c60693-9e3e-4749-a0d7-eadb03eb6e1b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 34bfd039-04be-43ae-8bee-e37471f0fabe] Updated VIF entry in instance network info cache for port 7a36ceaa-7b3d-424c-91e9-3dfcaac10f3c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec  6 02:11:08 np0005548731 nova_compute[232433]: 2025-12-06 07:11:08.354 232437 DEBUG nova.network.neutron [req-2d0548c0-aed8-413c-9bdf-6564875e5a43 req-f4c60693-9e3e-4749-a0d7-eadb03eb6e1b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 34bfd039-04be-43ae-8bee-e37471f0fabe] Updating instance_info_cache with network_info: [{"id": "7a36ceaa-7b3d-424c-91e9-3dfcaac10f3c", "address": "fa:16:3e:25:a6:a1", "network": {"id": "e6ac552d-917a-49f3-abe3-8df7221907a4", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-591275388-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.188", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "50fdc02b371b4fb9b701031b47f328f4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7a36ceaa-7b", "ovs_interfaceid": "7a36ceaa-7b3d-424c-91e9-3dfcaac10f3c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec  6 02:11:08 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:11:08 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:11:08 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:11:08.376 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:11:08 np0005548731 nova_compute[232433]: 2025-12-06 07:11:08.398 232437 DEBUG oslo_concurrency.lockutils [req-2d0548c0-aed8-413c-9bdf-6564875e5a43 req-f4c60693-9e3e-4749-a0d7-eadb03eb6e1b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-34bfd039-04be-43ae-8bee-e37471f0fabe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec  6 02:11:08 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  6 02:11:08 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3960716737' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  6 02:11:08 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  6 02:11:08 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3960716737' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  6 02:11:10 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:11:10 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:11:10 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:11:10.175 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:11:10 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:11:10 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:11:10 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:11:10.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:11:10 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e218 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:11:11 np0005548731 nova_compute[232433]: 2025-12-06 07:11:11.686 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 02:11:12 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:11:12 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:11:12 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:11:12.176 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:11:12 np0005548731 nova_compute[232433]: 2025-12-06 07:11:12.331 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 02:11:12 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:11:12 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:11:12 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:11:12.382 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:11:13 np0005548731 nova_compute[232433]: 2025-12-06 07:11:13.364 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 02:11:14 np0005548731 ovn_controller[133927]: 2025-12-06T07:11:14Z|00022|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:25:a6:a1 10.100.0.7
Dec  6 02:11:14 np0005548731 ovn_controller[133927]: 2025-12-06T07:11:14Z|00023|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:25:a6:a1 10.100.0.7
Dec  6 02:11:14 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:11:14 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:11:14 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:11:14.178 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:11:14 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:11:14 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:11:14 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:11:14.385 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:11:14 np0005548731 podman[255209]: 2025-12-06 07:11:14.892893721 +0000 UTC m=+0.050414598 container health_status 26c2ce9bdb83c6db5ccad7f5b2a7cb4e9354ab0235ad2f942ea42c8feda2142b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3)
Dec  6 02:11:14 np0005548731 podman[255211]: 2025-12-06 07:11:14.915593422 +0000 UTC m=+0.069286184 container health_status ae4fd08cdec31d6ae2a6b91ffff7deb6ca5a8375cb52ee4b685cc6f25796824e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  6 02:11:14 np0005548731 podman[255210]: 2025-12-06 07:11:14.943377539 +0000 UTC m=+0.102237259 container health_status a118c3becf56cfbcae208065771ebf10a65162bb7b96389971293e534823fb78 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.build-date=20251125)
Dec  6 02:11:15 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e218 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:11:16 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:11:16 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:11:16 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:11:16.179 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:11:16 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:11:16 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:11:16 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:11:16.388 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:11:16 np0005548731 nova_compute[232433]: 2025-12-06 07:11:16.687 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:11:17 np0005548731 nova_compute[232433]: 2025-12-06 07:11:17.332 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:11:18 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:11:18 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:11:18 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:11:18.181 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:11:18 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:11:18 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:11:18 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:11:18.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:11:19 np0005548731 nova_compute[232433]: 2025-12-06 07:11:19.329 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:11:20 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:11:20 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:11:20 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:11:20.183 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:11:20 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:11:20 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:11:20 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:11:20.394 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:11:20 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e218 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:11:21 np0005548731 nova_compute[232433]: 2025-12-06 07:11:21.689 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:11:22 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:11:22 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:11:22 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:11:22.185 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:11:22 np0005548731 nova_compute[232433]: 2025-12-06 07:11:22.333 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:11:22 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:11:22 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:11:22 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:11:22.397 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:11:23 np0005548731 nova_compute[232433]: 2025-12-06 07:11:23.424 232437 DEBUG nova.objects.instance [None req-b6a0a8ae-c428-4e1f-ac42-7279a61d096d ef5fe6979bc448399956738faf6c8d3f 50fdc02b371b4fb9b701031b47f328f4 - - default default] Lazy-loading 'flavor' on Instance uuid 34bfd039-04be-43ae-8bee-e37471f0fabe obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:11:23 np0005548731 nova_compute[232433]: 2025-12-06 07:11:23.476 232437 DEBUG oslo_concurrency.lockutils [None req-b6a0a8ae-c428-4e1f-ac42-7279a61d096d ef5fe6979bc448399956738faf6c8d3f 50fdc02b371b4fb9b701031b47f328f4 - - default default] Acquiring lock "refresh_cache-34bfd039-04be-43ae-8bee-e37471f0fabe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 02:11:23 np0005548731 nova_compute[232433]: 2025-12-06 07:11:23.477 232437 DEBUG oslo_concurrency.lockutils [None req-b6a0a8ae-c428-4e1f-ac42-7279a61d096d ef5fe6979bc448399956738faf6c8d3f 50fdc02b371b4fb9b701031b47f328f4 - - default default] Acquired lock "refresh_cache-34bfd039-04be-43ae-8bee-e37471f0fabe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 02:11:24 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:11:24 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:11:24 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:11:24.188 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:11:24 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:11:24 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:11:24 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:11:24.401 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:11:25 np0005548731 ovn_controller[133927]: 2025-12-06T07:11:25Z|00154|binding|INFO|Releasing lport c0d8bdd8-dc44-440a-8665-b4279c85df6c from this chassis (sb_readonly=0)
Dec  6 02:11:25 np0005548731 nova_compute[232433]: 2025-12-06 07:11:25.690 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:11:25 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e218 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:11:26 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:11:26 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:11:26 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:11:26.190 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:11:26 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:11:26 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:11:26 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:11:26.405 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:11:26 np0005548731 nova_compute[232433]: 2025-12-06 07:11:26.692 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:11:27 np0005548731 nova_compute[232433]: 2025-12-06 07:11:27.173 232437 DEBUG nova.network.neutron [None req-b6a0a8ae-c428-4e1f-ac42-7279a61d096d ef5fe6979bc448399956738faf6c8d3f 50fdc02b371b4fb9b701031b47f328f4 - - default default] [instance: 34bfd039-04be-43ae-8bee-e37471f0fabe] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec  6 02:11:27 np0005548731 nova_compute[232433]: 2025-12-06 07:11:27.390 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:11:27 np0005548731 nova_compute[232433]: 2025-12-06 07:11:27.428 232437 DEBUG nova.compute.manager [req-afe7744a-8b43-4d61-be9b-a667a29ca7d4 req-8d285ddf-ee2d-4183-8314-b2dec7208bed 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 34bfd039-04be-43ae-8bee-e37471f0fabe] Received event network-changed-7a36ceaa-7b3d-424c-91e9-3dfcaac10f3c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:11:27 np0005548731 nova_compute[232433]: 2025-12-06 07:11:27.429 232437 DEBUG nova.compute.manager [req-afe7744a-8b43-4d61-be9b-a667a29ca7d4 req-8d285ddf-ee2d-4183-8314-b2dec7208bed 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 34bfd039-04be-43ae-8bee-e37471f0fabe] Refreshing instance network info cache due to event network-changed-7a36ceaa-7b3d-424c-91e9-3dfcaac10f3c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  6 02:11:27 np0005548731 nova_compute[232433]: 2025-12-06 07:11:27.429 232437 DEBUG oslo_concurrency.lockutils [req-afe7744a-8b43-4d61-be9b-a667a29ca7d4 req-8d285ddf-ee2d-4183-8314-b2dec7208bed 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-34bfd039-04be-43ae-8bee-e37471f0fabe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 02:11:28 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:11:28 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:11:28 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:11:28.192 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:11:28 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:11:28 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:11:28 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:11:28.408 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:11:29 np0005548731 nova_compute[232433]: 2025-12-06 07:11:29.272 232437 DEBUG nova.network.neutron [None req-b6a0a8ae-c428-4e1f-ac42-7279a61d096d ef5fe6979bc448399956738faf6c8d3f 50fdc02b371b4fb9b701031b47f328f4 - - default default] [instance: 34bfd039-04be-43ae-8bee-e37471f0fabe] Updating instance_info_cache with network_info: [{"id": "7a36ceaa-7b3d-424c-91e9-3dfcaac10f3c", "address": "fa:16:3e:25:a6:a1", "network": {"id": "e6ac552d-917a-49f3-abe3-8df7221907a4", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-591275388-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}, {"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.188", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "50fdc02b371b4fb9b701031b47f328f4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7a36ceaa-7b", "ovs_interfaceid": "7a36ceaa-7b3d-424c-91e9-3dfcaac10f3c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 02:11:29 np0005548731 nova_compute[232433]: 2025-12-06 07:11:29.294 232437 DEBUG oslo_concurrency.lockutils [None req-b6a0a8ae-c428-4e1f-ac42-7279a61d096d ef5fe6979bc448399956738faf6c8d3f 50fdc02b371b4fb9b701031b47f328f4 - - default default] Releasing lock "refresh_cache-34bfd039-04be-43ae-8bee-e37471f0fabe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 02:11:29 np0005548731 nova_compute[232433]: 2025-12-06 07:11:29.295 232437 DEBUG nova.compute.manager [None req-b6a0a8ae-c428-4e1f-ac42-7279a61d096d ef5fe6979bc448399956738faf6c8d3f 50fdc02b371b4fb9b701031b47f328f4 - - default default] [instance: 34bfd039-04be-43ae-8bee-e37471f0fabe] Inject network info _inject_network_info /usr/lib/python3.9/site-packages/nova/compute/manager.py:7144#033[00m
Dec  6 02:11:29 np0005548731 nova_compute[232433]: 2025-12-06 07:11:29.295 232437 DEBUG nova.compute.manager [None req-b6a0a8ae-c428-4e1f-ac42-7279a61d096d ef5fe6979bc448399956738faf6c8d3f 50fdc02b371b4fb9b701031b47f328f4 - - default default] [instance: 34bfd039-04be-43ae-8bee-e37471f0fabe] network_info to inject: |[{"id": "7a36ceaa-7b3d-424c-91e9-3dfcaac10f3c", "address": "fa:16:3e:25:a6:a1", "network": {"id": "e6ac552d-917a-49f3-abe3-8df7221907a4", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-591275388-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}, {"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.188", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "50fdc02b371b4fb9b701031b47f328f4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7a36ceaa-7b", "ovs_interfaceid": "7a36ceaa-7b3d-424c-91e9-3dfcaac10f3c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _inject_network_info /usr/lib/python3.9/site-packages/nova/compute/manager.py:7145#033[00m
Dec  6 02:11:29 np0005548731 nova_compute[232433]: 2025-12-06 07:11:29.300 232437 DEBUG oslo_concurrency.lockutils [req-afe7744a-8b43-4d61-be9b-a667a29ca7d4 req-8d285ddf-ee2d-4183-8314-b2dec7208bed 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-34bfd039-04be-43ae-8bee-e37471f0fabe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 02:11:29 np0005548731 nova_compute[232433]: 2025-12-06 07:11:29.300 232437 DEBUG nova.network.neutron [req-afe7744a-8b43-4d61-be9b-a667a29ca7d4 req-8d285ddf-ee2d-4183-8314-b2dec7208bed 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 34bfd039-04be-43ae-8bee-e37471f0fabe] Refreshing network info cache for port 7a36ceaa-7b3d-424c-91e9-3dfcaac10f3c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  6 02:11:30 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:11:30 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:11:30 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:11:30.194 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:11:30 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:11:30 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:11:30 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:11:30.411 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:11:30 np0005548731 nova_compute[232433]: 2025-12-06 07:11:30.779 232437 DEBUG nova.objects.instance [None req-dc4f4dd6-d8d9-4ea6-84db-8f7dfdda311e ef5fe6979bc448399956738faf6c8d3f 50fdc02b371b4fb9b701031b47f328f4 - - default default] Lazy-loading 'flavor' on Instance uuid 34bfd039-04be-43ae-8bee-e37471f0fabe obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:11:30 np0005548731 nova_compute[232433]: 2025-12-06 07:11:30.820 232437 DEBUG oslo_concurrency.lockutils [None req-dc4f4dd6-d8d9-4ea6-84db-8f7dfdda311e ef5fe6979bc448399956738faf6c8d3f 50fdc02b371b4fb9b701031b47f328f4 - - default default] Acquiring lock "refresh_cache-34bfd039-04be-43ae-8bee-e37471f0fabe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 02:11:31 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e218 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:11:31 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e219 e219: 3 total, 3 up, 3 in
Dec  6 02:11:31 np0005548731 nova_compute[232433]: 2025-12-06 07:11:31.694 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:11:32 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:11:32 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:11:32 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:11:32.196 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:11:32 np0005548731 nova_compute[232433]: 2025-12-06 07:11:32.325 232437 DEBUG nova.network.neutron [req-afe7744a-8b43-4d61-be9b-a667a29ca7d4 req-8d285ddf-ee2d-4183-8314-b2dec7208bed 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 34bfd039-04be-43ae-8bee-e37471f0fabe] Updated VIF entry in instance network info cache for port 7a36ceaa-7b3d-424c-91e9-3dfcaac10f3c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  6 02:11:32 np0005548731 nova_compute[232433]: 2025-12-06 07:11:32.326 232437 DEBUG nova.network.neutron [req-afe7744a-8b43-4d61-be9b-a667a29ca7d4 req-8d285ddf-ee2d-4183-8314-b2dec7208bed 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 34bfd039-04be-43ae-8bee-e37471f0fabe] Updating instance_info_cache with network_info: [{"id": "7a36ceaa-7b3d-424c-91e9-3dfcaac10f3c", "address": "fa:16:3e:25:a6:a1", "network": {"id": "e6ac552d-917a-49f3-abe3-8df7221907a4", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-591275388-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}, {"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.188", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "50fdc02b371b4fb9b701031b47f328f4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7a36ceaa-7b", "ovs_interfaceid": "7a36ceaa-7b3d-424c-91e9-3dfcaac10f3c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 02:11:32 np0005548731 nova_compute[232433]: 2025-12-06 07:11:32.392 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:11:32 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:11:32 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:11:32 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:11:32.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:11:32 np0005548731 nova_compute[232433]: 2025-12-06 07:11:32.485 232437 DEBUG oslo_concurrency.lockutils [req-afe7744a-8b43-4d61-be9b-a667a29ca7d4 req-8d285ddf-ee2d-4183-8314-b2dec7208bed 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-34bfd039-04be-43ae-8bee-e37471f0fabe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 02:11:32 np0005548731 nova_compute[232433]: 2025-12-06 07:11:32.485 232437 DEBUG oslo_concurrency.lockutils [None req-dc4f4dd6-d8d9-4ea6-84db-8f7dfdda311e ef5fe6979bc448399956738faf6c8d3f 50fdc02b371b4fb9b701031b47f328f4 - - default default] Acquired lock "refresh_cache-34bfd039-04be-43ae-8bee-e37471f0fabe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 02:11:32 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e220 e220: 3 total, 3 up, 3 in
Dec  6 02:11:33 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:11:33.462 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=20, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:ec:b3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '32:72:e7:89:e0:7d'}, ipsec=False) old=SB_Global(nb_cfg=19) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 02:11:33 np0005548731 nova_compute[232433]: 2025-12-06 07:11:33.463 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:11:33 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:11:33.465 143965 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec  6 02:11:33 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e221 e221: 3 total, 3 up, 3 in
Dec  6 02:11:34 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:11:34 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:11:34 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:11:34.198 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:11:34 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:11:34 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:11:34 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:11:34.417 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:11:35 np0005548731 nova_compute[232433]: 2025-12-06 07:11:35.030 232437 DEBUG nova.network.neutron [None req-dc4f4dd6-d8d9-4ea6-84db-8f7dfdda311e ef5fe6979bc448399956738faf6c8d3f 50fdc02b371b4fb9b701031b47f328f4 - - default default] [instance: 34bfd039-04be-43ae-8bee-e37471f0fabe] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec  6 02:11:36 np0005548731 podman[255500]: 2025-12-06 07:11:36.026370469 +0000 UTC m=+0.076743009 container exec 541e7276f6038dd900483907d1613bdfcbf99ea3714cf53a920a7189986fab54 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-mon-compute-2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Dec  6 02:11:36 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e221 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:11:36 np0005548731 podman[255500]: 2025-12-06 07:11:36.153606916 +0000 UTC m=+0.203979466 container exec_died 541e7276f6038dd900483907d1613bdfcbf99ea3714cf53a920a7189986fab54 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-mon-compute-2, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  6 02:11:36 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:11:36 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:11:36 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:11:36.200 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:11:36 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:11:36 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:11:36 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:11:36.420 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:11:36 np0005548731 nova_compute[232433]: 2025-12-06 07:11:36.696 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:11:36 np0005548731 podman[255654]: 2025-12-06 07:11:36.744714444 +0000 UTC m=+0.046090851 container exec 42755d5ddab81b86d84b40c9464a8af637e2d196c1a098a61593efb6a9875167 (image=quay.io/ceph/haproxy:2.3, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-haproxy-rgw-default-compute-2-nyemkw)
Dec  6 02:11:36 np0005548731 podman[255654]: 2025-12-06 07:11:36.757873069 +0000 UTC m=+0.059249486 container exec_died 42755d5ddab81b86d84b40c9464a8af637e2d196c1a098a61593efb6a9875167 (image=quay.io/ceph/haproxy:2.3, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-haproxy-rgw-default-compute-2-nyemkw)
Dec  6 02:11:36 np0005548731 nova_compute[232433]: 2025-12-06 07:11:36.888 232437 DEBUG nova.compute.manager [req-5aadf657-ab15-41c2-a3e3-82f4629d4913 req-8cfd697b-cb31-4a45-83e6-9b040c4b5c52 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 34bfd039-04be-43ae-8bee-e37471f0fabe] Received event network-changed-7a36ceaa-7b3d-424c-91e9-3dfcaac10f3c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:11:36 np0005548731 nova_compute[232433]: 2025-12-06 07:11:36.888 232437 DEBUG nova.compute.manager [req-5aadf657-ab15-41c2-a3e3-82f4629d4913 req-8cfd697b-cb31-4a45-83e6-9b040c4b5c52 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 34bfd039-04be-43ae-8bee-e37471f0fabe] Refreshing instance network info cache due to event network-changed-7a36ceaa-7b3d-424c-91e9-3dfcaac10f3c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  6 02:11:36 np0005548731 nova_compute[232433]: 2025-12-06 07:11:36.888 232437 DEBUG oslo_concurrency.lockutils [req-5aadf657-ab15-41c2-a3e3-82f4629d4913 req-8cfd697b-cb31-4a45-83e6-9b040c4b5c52 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-34bfd039-04be-43ae-8bee-e37471f0fabe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 02:11:36 np0005548731 podman[255722]: 2025-12-06 07:11:36.951368094 +0000 UTC m=+0.048781386 container exec 60f905acca99db805341691264f7f91f9f4f43c91aa728d21396595dfca7cf39 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-keepalived-rgw-default-compute-2-cossgt, vcs-type=git, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.k8s.display-name=Keepalived on RHEL 9, release=1793, summary=Provides keepalived on RHEL 9 for Ceph., build-date=2023-02-22T09:23:20, io.openshift.expose-services=, version=2.2.4, com.redhat.component=keepalived-container, description=keepalived for Ceph, io.buildah.version=1.28.2, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=keepalived, architecture=x86_64, io.openshift.tags=Ceph keepalived, vendor=Red Hat, Inc., vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9)
Dec  6 02:11:36 np0005548731 podman[255722]: 2025-12-06 07:11:36.962781797 +0000 UTC m=+0.060195039 container exec_died 60f905acca99db805341691264f7f91f9f4f43c91aa728d21396595dfca7cf39 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-keepalived-rgw-default-compute-2-cossgt, summary=Provides keepalived on RHEL 9 for Ceph., io.buildah.version=1.28.2, io.openshift.tags=Ceph keepalived, version=2.2.4, com.redhat.component=keepalived-container, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vendor=Red Hat, Inc., release=1793, build-date=2023-02-22T09:23:20, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, description=keepalived for Ceph, io.k8s.display-name=Keepalived on RHEL 9, name=keepalived, io.openshift.expose-services=, vcs-type=git)
Dec  6 02:11:37 np0005548731 nova_compute[232433]: 2025-12-06 07:11:37.087 232437 DEBUG nova.network.neutron [None req-dc4f4dd6-d8d9-4ea6-84db-8f7dfdda311e ef5fe6979bc448399956738faf6c8d3f 50fdc02b371b4fb9b701031b47f328f4 - - default default] [instance: 34bfd039-04be-43ae-8bee-e37471f0fabe] Updating instance_info_cache with network_info: [{"id": "7a36ceaa-7b3d-424c-91e9-3dfcaac10f3c", "address": "fa:16:3e:25:a6:a1", "network": {"id": "e6ac552d-917a-49f3-abe3-8df7221907a4", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-591275388-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.188", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "50fdc02b371b4fb9b701031b47f328f4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7a36ceaa-7b", "ovs_interfaceid": "7a36ceaa-7b3d-424c-91e9-3dfcaac10f3c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 02:11:37 np0005548731 nova_compute[232433]: 2025-12-06 07:11:37.196 232437 DEBUG oslo_concurrency.lockutils [None req-dc4f4dd6-d8d9-4ea6-84db-8f7dfdda311e ef5fe6979bc448399956738faf6c8d3f 50fdc02b371b4fb9b701031b47f328f4 - - default default] Releasing lock "refresh_cache-34bfd039-04be-43ae-8bee-e37471f0fabe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 02:11:37 np0005548731 nova_compute[232433]: 2025-12-06 07:11:37.196 232437 DEBUG nova.compute.manager [None req-dc4f4dd6-d8d9-4ea6-84db-8f7dfdda311e ef5fe6979bc448399956738faf6c8d3f 50fdc02b371b4fb9b701031b47f328f4 - - default default] [instance: 34bfd039-04be-43ae-8bee-e37471f0fabe] Inject network info _inject_network_info /usr/lib/python3.9/site-packages/nova/compute/manager.py:7144#033[00m
Dec  6 02:11:37 np0005548731 nova_compute[232433]: 2025-12-06 07:11:37.196 232437 DEBUG nova.compute.manager [None req-dc4f4dd6-d8d9-4ea6-84db-8f7dfdda311e ef5fe6979bc448399956738faf6c8d3f 50fdc02b371b4fb9b701031b47f328f4 - - default default] [instance: 34bfd039-04be-43ae-8bee-e37471f0fabe] network_info to inject: |[{"id": "7a36ceaa-7b3d-424c-91e9-3dfcaac10f3c", "address": "fa:16:3e:25:a6:a1", "network": {"id": "e6ac552d-917a-49f3-abe3-8df7221907a4", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-591275388-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.188", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "50fdc02b371b4fb9b701031b47f328f4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7a36ceaa-7b", "ovs_interfaceid": "7a36ceaa-7b3d-424c-91e9-3dfcaac10f3c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _inject_network_info /usr/lib/python3.9/site-packages/nova/compute/manager.py:7145#033[00m
Dec  6 02:11:37 np0005548731 nova_compute[232433]: 2025-12-06 07:11:37.198 232437 DEBUG oslo_concurrency.lockutils [req-5aadf657-ab15-41c2-a3e3-82f4629d4913 req-8cfd697b-cb31-4a45-83e6-9b040c4b5c52 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-34bfd039-04be-43ae-8bee-e37471f0fabe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 02:11:37 np0005548731 nova_compute[232433]: 2025-12-06 07:11:37.198 232437 DEBUG nova.network.neutron [req-5aadf657-ab15-41c2-a3e3-82f4629d4913 req-8cfd697b-cb31-4a45-83e6-9b040c4b5c52 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 34bfd039-04be-43ae-8bee-e37471f0fabe] Refreshing network info cache for port 7a36ceaa-7b3d-424c-91e9-3dfcaac10f3c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  6 02:11:37 np0005548731 nova_compute[232433]: 2025-12-06 07:11:37.394 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:11:37 np0005548731 nova_compute[232433]: 2025-12-06 07:11:37.679 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:11:38 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 02:11:38 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 02:11:38 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  6 02:11:38 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 02:11:38 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  6 02:11:38 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e222 e222: 3 total, 3 up, 3 in
Dec  6 02:11:38 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:11:38 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:11:38 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:11:38.202 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:11:38 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:11:38 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:11:38 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:11:38.423 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:11:38 np0005548731 nova_compute[232433]: 2025-12-06 07:11:38.473 232437 DEBUG oslo_concurrency.lockutils [None req-deacd633-1a3d-41b8-8877-4422f95a91c5 ef5fe6979bc448399956738faf6c8d3f 50fdc02b371b4fb9b701031b47f328f4 - - default default] Acquiring lock "34bfd039-04be-43ae-8bee-e37471f0fabe" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:11:38 np0005548731 nova_compute[232433]: 2025-12-06 07:11:38.473 232437 DEBUG oslo_concurrency.lockutils [None req-deacd633-1a3d-41b8-8877-4422f95a91c5 ef5fe6979bc448399956738faf6c8d3f 50fdc02b371b4fb9b701031b47f328f4 - - default default] Lock "34bfd039-04be-43ae-8bee-e37471f0fabe" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:11:38 np0005548731 nova_compute[232433]: 2025-12-06 07:11:38.473 232437 DEBUG oslo_concurrency.lockutils [None req-deacd633-1a3d-41b8-8877-4422f95a91c5 ef5fe6979bc448399956738faf6c8d3f 50fdc02b371b4fb9b701031b47f328f4 - - default default] Acquiring lock "34bfd039-04be-43ae-8bee-e37471f0fabe-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:11:38 np0005548731 nova_compute[232433]: 2025-12-06 07:11:38.474 232437 DEBUG oslo_concurrency.lockutils [None req-deacd633-1a3d-41b8-8877-4422f95a91c5 ef5fe6979bc448399956738faf6c8d3f 50fdc02b371b4fb9b701031b47f328f4 - - default default] Lock "34bfd039-04be-43ae-8bee-e37471f0fabe-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:11:38 np0005548731 nova_compute[232433]: 2025-12-06 07:11:38.474 232437 DEBUG oslo_concurrency.lockutils [None req-deacd633-1a3d-41b8-8877-4422f95a91c5 ef5fe6979bc448399956738faf6c8d3f 50fdc02b371b4fb9b701031b47f328f4 - - default default] Lock "34bfd039-04be-43ae-8bee-e37471f0fabe-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:11:38 np0005548731 nova_compute[232433]: 2025-12-06 07:11:38.475 232437 INFO nova.compute.manager [None req-deacd633-1a3d-41b8-8877-4422f95a91c5 ef5fe6979bc448399956738faf6c8d3f 50fdc02b371b4fb9b701031b47f328f4 - - default default] [instance: 34bfd039-04be-43ae-8bee-e37471f0fabe] Terminating instance#033[00m
Dec  6 02:11:38 np0005548731 nova_compute[232433]: 2025-12-06 07:11:38.476 232437 DEBUG nova.compute.manager [None req-deacd633-1a3d-41b8-8877-4422f95a91c5 ef5fe6979bc448399956738faf6c8d3f 50fdc02b371b4fb9b701031b47f328f4 - - default default] [instance: 34bfd039-04be-43ae-8bee-e37471f0fabe] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec  6 02:11:38 np0005548731 kernel: tap7a36ceaa-7b (unregistering): left promiscuous mode
Dec  6 02:11:38 np0005548731 NetworkManager[49182]: <info>  [1765005098.5335] device (tap7a36ceaa-7b): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec  6 02:11:38 np0005548731 ovn_controller[133927]: 2025-12-06T07:11:38Z|00155|binding|INFO|Releasing lport 7a36ceaa-7b3d-424c-91e9-3dfcaac10f3c from this chassis (sb_readonly=0)
Dec  6 02:11:38 np0005548731 ovn_controller[133927]: 2025-12-06T07:11:38Z|00156|binding|INFO|Setting lport 7a36ceaa-7b3d-424c-91e9-3dfcaac10f3c down in Southbound
Dec  6 02:11:38 np0005548731 nova_compute[232433]: 2025-12-06 07:11:38.577 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:11:38 np0005548731 ovn_controller[133927]: 2025-12-06T07:11:38Z|00157|binding|INFO|Removing iface tap7a36ceaa-7b ovn-installed in OVS
Dec  6 02:11:38 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:11:38.585 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:25:a6:a1 10.100.0.7'], port_security=['fa:16:3e:25:a6:a1 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '34bfd039-04be-43ae-8bee-e37471f0fabe', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e6ac552d-917a-49f3-abe3-8df7221907a4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '50fdc02b371b4fb9b701031b47f328f4', 'neutron:revision_number': '6', 'neutron:security_group_ids': '21d8aee7-6b55-46aa-97b6-6c9d816e3a65', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com', 'neutron:port_fip': '192.168.122.188'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=eaee5e93-8d75-41c7-b44d-1f644f0df898, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], logical_port=7a36ceaa-7b3d-424c-91e9-3dfcaac10f3c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 02:11:38 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:11:38.586 143965 INFO neutron.agent.ovn.metadata.agent [-] Port 7a36ceaa-7b3d-424c-91e9-3dfcaac10f3c in datapath e6ac552d-917a-49f3-abe3-8df7221907a4 unbound from our chassis#033[00m
Dec  6 02:11:38 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:11:38.587 143965 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network e6ac552d-917a-49f3-abe3-8df7221907a4, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec  6 02:11:38 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:11:38.589 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[9cf8659e-ac4e-48e9-ace4-f7569c4764cc]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:11:38 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:11:38.590 143965 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-e6ac552d-917a-49f3-abe3-8df7221907a4 namespace which is not needed anymore#033[00m
Dec  6 02:11:38 np0005548731 nova_compute[232433]: 2025-12-06 07:11:38.597 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:11:38 np0005548731 systemd[1]: machine-qemu\x2d23\x2dinstance\x2d00000032.scope: Deactivated successfully.
Dec  6 02:11:38 np0005548731 systemd[1]: machine-qemu\x2d23\x2dinstance\x2d00000032.scope: Consumed 14.802s CPU time.
Dec  6 02:11:38 np0005548731 systemd-machined[195355]: Machine qemu-23-instance-00000032 terminated.
Dec  6 02:11:38 np0005548731 nova_compute[232433]: 2025-12-06 07:11:38.711 232437 INFO nova.virt.libvirt.driver [-] [instance: 34bfd039-04be-43ae-8bee-e37471f0fabe] Instance destroyed successfully.#033[00m
Dec  6 02:11:38 np0005548731 nova_compute[232433]: 2025-12-06 07:11:38.730 232437 DEBUG nova.objects.instance [None req-deacd633-1a3d-41b8-8877-4422f95a91c5 ef5fe6979bc448399956738faf6c8d3f 50fdc02b371b4fb9b701031b47f328f4 - - default default] Lazy-loading 'resources' on Instance uuid 34bfd039-04be-43ae-8bee-e37471f0fabe obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:11:38 np0005548731 neutron-haproxy-ovnmeta-e6ac552d-917a-49f3-abe3-8df7221907a4[255184]: [NOTICE]   (255188) : haproxy version is 2.8.14-c23fe91
Dec  6 02:11:38 np0005548731 neutron-haproxy-ovnmeta-e6ac552d-917a-49f3-abe3-8df7221907a4[255184]: [NOTICE]   (255188) : path to executable is /usr/sbin/haproxy
Dec  6 02:11:38 np0005548731 neutron-haproxy-ovnmeta-e6ac552d-917a-49f3-abe3-8df7221907a4[255184]: [WARNING]  (255188) : Exiting Master process...
Dec  6 02:11:38 np0005548731 neutron-haproxy-ovnmeta-e6ac552d-917a-49f3-abe3-8df7221907a4[255184]: [WARNING]  (255188) : Exiting Master process...
Dec  6 02:11:38 np0005548731 neutron-haproxy-ovnmeta-e6ac552d-917a-49f3-abe3-8df7221907a4[255184]: [ALERT]    (255188) : Current worker (255190) exited with code 143 (Terminated)
Dec  6 02:11:38 np0005548731 neutron-haproxy-ovnmeta-e6ac552d-917a-49f3-abe3-8df7221907a4[255184]: [WARNING]  (255188) : All workers exited. Exiting... (0)
Dec  6 02:11:38 np0005548731 systemd[1]: libpod-8ad82285b90b63e9482ade089c91e86904bfdb2dca9a815395a67b2ea07af1ac.scope: Deactivated successfully.
Dec  6 02:11:38 np0005548731 nova_compute[232433]: 2025-12-06 07:11:38.750 232437 DEBUG nova.virt.libvirt.vif [None req-deacd633-1a3d-41b8-8877-4422f95a91c5 ef5fe6979bc448399956738faf6c8d3f 50fdc02b371b4fb9b701031b47f328f4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T07:10:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesUnderV243Test-server-310644807',display_name='tempest-AttachInterfacesUnderV243Test-server-310644807',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-attachinterfacesunderv243test-server-310644807',id=50,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHbMMSUWiyMtN9cuuBp2EC9fia2AoVr6gsZjeT011Z/dGbvmY9q/cWhY7F6phVv+P4yf4RM8TJprhk1AbPCa7DbOPIhqhtk2bfgBisn7iMaQg0dDtm8aXGE+ltU88q+vDA==',key_name='tempest-keypair-427389425',keypairs=<?>,launch_index=0,launched_at=2025-12-06T07:11:00Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='50fdc02b371b4fb9b701031b47f328f4',ramdisk_id='',reservation_id='r-1lkgof8x',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesUnderV243Test-486339634',owner_user_name='tempest-AttachInterfacesUnderV243Test-486339634-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T07:11:37Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='ef5fe6979bc448399956738faf6c8d3f',uuid=34bfd039-04be-43ae-8bee-e37471f0fabe,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "7a36ceaa-7b3d-424c-91e9-3dfcaac10f3c", "address": "fa:16:3e:25:a6:a1", "network": {"id": "e6ac552d-917a-49f3-abe3-8df7221907a4", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-591275388-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.188", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "50fdc02b371b4fb9b701031b47f328f4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7a36ceaa-7b", "ovs_interfaceid": "7a36ceaa-7b3d-424c-91e9-3dfcaac10f3c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec  6 02:11:38 np0005548731 nova_compute[232433]: 2025-12-06 07:11:38.750 232437 DEBUG nova.network.os_vif_util [None req-deacd633-1a3d-41b8-8877-4422f95a91c5 ef5fe6979bc448399956738faf6c8d3f 50fdc02b371b4fb9b701031b47f328f4 - - default default] Converting VIF {"id": "7a36ceaa-7b3d-424c-91e9-3dfcaac10f3c", "address": "fa:16:3e:25:a6:a1", "network": {"id": "e6ac552d-917a-49f3-abe3-8df7221907a4", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-591275388-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.188", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "50fdc02b371b4fb9b701031b47f328f4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7a36ceaa-7b", "ovs_interfaceid": "7a36ceaa-7b3d-424c-91e9-3dfcaac10f3c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 02:11:38 np0005548731 nova_compute[232433]: 2025-12-06 07:11:38.751 232437 DEBUG nova.network.os_vif_util [None req-deacd633-1a3d-41b8-8877-4422f95a91c5 ef5fe6979bc448399956738faf6c8d3f 50fdc02b371b4fb9b701031b47f328f4 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:25:a6:a1,bridge_name='br-int',has_traffic_filtering=True,id=7a36ceaa-7b3d-424c-91e9-3dfcaac10f3c,network=Network(e6ac552d-917a-49f3-abe3-8df7221907a4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7a36ceaa-7b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 02:11:38 np0005548731 nova_compute[232433]: 2025-12-06 07:11:38.751 232437 DEBUG os_vif [None req-deacd633-1a3d-41b8-8877-4422f95a91c5 ef5fe6979bc448399956738faf6c8d3f 50fdc02b371b4fb9b701031b47f328f4 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:25:a6:a1,bridge_name='br-int',has_traffic_filtering=True,id=7a36ceaa-7b3d-424c-91e9-3dfcaac10f3c,network=Network(e6ac552d-917a-49f3-abe3-8df7221907a4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7a36ceaa-7b') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec  6 02:11:38 np0005548731 podman[255912]: 2025-12-06 07:11:38.752068597 +0000 UTC m=+0.066788602 container died 8ad82285b90b63e9482ade089c91e86904bfdb2dca9a815395a67b2ea07af1ac (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e6ac552d-917a-49f3-abe3-8df7221907a4, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Dec  6 02:11:38 np0005548731 nova_compute[232433]: 2025-12-06 07:11:38.753 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:11:38 np0005548731 nova_compute[232433]: 2025-12-06 07:11:38.754 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7a36ceaa-7b, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:11:38 np0005548731 nova_compute[232433]: 2025-12-06 07:11:38.755 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:11:38 np0005548731 nova_compute[232433]: 2025-12-06 07:11:38.756 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:11:38 np0005548731 nova_compute[232433]: 2025-12-06 07:11:38.760 232437 INFO os_vif [None req-deacd633-1a3d-41b8-8877-4422f95a91c5 ef5fe6979bc448399956738faf6c8d3f 50fdc02b371b4fb9b701031b47f328f4 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:25:a6:a1,bridge_name='br-int',has_traffic_filtering=True,id=7a36ceaa-7b3d-424c-91e9-3dfcaac10f3c,network=Network(e6ac552d-917a-49f3-abe3-8df7221907a4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7a36ceaa-7b')#033[00m
Dec  6 02:11:38 np0005548731 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-8ad82285b90b63e9482ade089c91e86904bfdb2dca9a815395a67b2ea07af1ac-userdata-shm.mount: Deactivated successfully.
Dec  6 02:11:38 np0005548731 systemd[1]: var-lib-containers-storage-overlay-277f9e71b6f7a59ec50be33bc92a57dff5871cf40928814d6628811506229787-merged.mount: Deactivated successfully.
Dec  6 02:11:38 np0005548731 podman[255912]: 2025-12-06 07:11:38.794085746 +0000 UTC m=+0.108805741 container cleanup 8ad82285b90b63e9482ade089c91e86904bfdb2dca9a815395a67b2ea07af1ac (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e6ac552d-917a-49f3-abe3-8df7221907a4, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  6 02:11:38 np0005548731 systemd[1]: libpod-conmon-8ad82285b90b63e9482ade089c91e86904bfdb2dca9a815395a67b2ea07af1ac.scope: Deactivated successfully.
Dec  6 02:11:38 np0005548731 podman[255969]: 2025-12-06 07:11:38.853323582 +0000 UTC m=+0.036929335 container remove 8ad82285b90b63e9482ade089c91e86904bfdb2dca9a815395a67b2ea07af1ac (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e6ac552d-917a-49f3-abe3-8df7221907a4, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Dec  6 02:11:38 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:11:38.859 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[789231a7-a49f-4398-b58d-bc8bace4d248]: (4, ('Sat Dec  6 07:11:38 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-e6ac552d-917a-49f3-abe3-8df7221907a4 (8ad82285b90b63e9482ade089c91e86904bfdb2dca9a815395a67b2ea07af1ac)\n8ad82285b90b63e9482ade089c91e86904bfdb2dca9a815395a67b2ea07af1ac\nSat Dec  6 07:11:38 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-e6ac552d-917a-49f3-abe3-8df7221907a4 (8ad82285b90b63e9482ade089c91e86904bfdb2dca9a815395a67b2ea07af1ac)\n8ad82285b90b63e9482ade089c91e86904bfdb2dca9a815395a67b2ea07af1ac\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:11:38 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:11:38.861 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[a1c88bec-da89-45d6-8c79-ddfb92c00294]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:11:38 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:11:38.862 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape6ac552d-90, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:11:38 np0005548731 nova_compute[232433]: 2025-12-06 07:11:38.864 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:11:38 np0005548731 kernel: tape6ac552d-90: left promiscuous mode
Dec  6 02:11:38 np0005548731 nova_compute[232433]: 2025-12-06 07:11:38.865 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:11:38 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:11:38.868 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[f1e2cab0-af0d-4255-b899-52abc81b8900]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:11:38 np0005548731 nova_compute[232433]: 2025-12-06 07:11:38.879 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:11:38 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:11:38.881 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[89f92a38-896e-4aea-af41-b315def8d7a4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:11:38 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:11:38.882 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[18561175-4432-4e9c-aba4-09253b91e2df]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:11:38 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:11:38.898 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[192a3f3d-c8e7-41c5-b696-b47c370c02b6]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 532140, 'reachable_time': 24224, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 255984, 'error': None, 'target': 'ovnmeta-e6ac552d-917a-49f3-abe3-8df7221907a4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:11:38 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:11:38.901 144079 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-e6ac552d-917a-49f3-abe3-8df7221907a4 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Dec  6 02:11:38 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:11:38.901 144079 DEBUG oslo.privsep.daemon [-] privsep: reply[60532122-2a06-4c3b-b632-13160220e259]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:11:38 np0005548731 systemd[1]: run-netns-ovnmeta\x2de6ac552d\x2d917a\x2d49f3\x2dabe3\x2d8df7221907a4.mount: Deactivated successfully.
Dec  6 02:11:39 np0005548731 nova_compute[232433]: 2025-12-06 07:11:39.416 232437 DEBUG nova.network.neutron [req-5aadf657-ab15-41c2-a3e3-82f4629d4913 req-8cfd697b-cb31-4a45-83e6-9b040c4b5c52 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 34bfd039-04be-43ae-8bee-e37471f0fabe] Updated VIF entry in instance network info cache for port 7a36ceaa-7b3d-424c-91e9-3dfcaac10f3c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  6 02:11:39 np0005548731 nova_compute[232433]: 2025-12-06 07:11:39.417 232437 DEBUG nova.network.neutron [req-5aadf657-ab15-41c2-a3e3-82f4629d4913 req-8cfd697b-cb31-4a45-83e6-9b040c4b5c52 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 34bfd039-04be-43ae-8bee-e37471f0fabe] Updating instance_info_cache with network_info: [{"id": "7a36ceaa-7b3d-424c-91e9-3dfcaac10f3c", "address": "fa:16:3e:25:a6:a1", "network": {"id": "e6ac552d-917a-49f3-abe3-8df7221907a4", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-591275388-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.188", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "50fdc02b371b4fb9b701031b47f328f4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7a36ceaa-7b", "ovs_interfaceid": "7a36ceaa-7b3d-424c-91e9-3dfcaac10f3c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 02:11:39 np0005548731 nova_compute[232433]: 2025-12-06 07:11:39.458 232437 DEBUG oslo_concurrency.lockutils [req-5aadf657-ab15-41c2-a3e3-82f4629d4913 req-8cfd697b-cb31-4a45-83e6-9b040c4b5c52 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-34bfd039-04be-43ae-8bee-e37471f0fabe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 02:11:39 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:11:39.468 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=9f96b960-b4f2-40bd-ae99-08121f5e8b78, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '20'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:11:39 np0005548731 nova_compute[232433]: 2025-12-06 07:11:39.473 232437 DEBUG nova.compute.manager [req-4c20f2d8-37f8-4ed9-acdc-05c5946b25a0 req-0c984535-bdb6-4c27-848a-f74bb31afcc7 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 34bfd039-04be-43ae-8bee-e37471f0fabe] Received event network-vif-unplugged-7a36ceaa-7b3d-424c-91e9-3dfcaac10f3c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:11:39 np0005548731 nova_compute[232433]: 2025-12-06 07:11:39.473 232437 DEBUG oslo_concurrency.lockutils [req-4c20f2d8-37f8-4ed9-acdc-05c5946b25a0 req-0c984535-bdb6-4c27-848a-f74bb31afcc7 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "34bfd039-04be-43ae-8bee-e37471f0fabe-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:11:39 np0005548731 nova_compute[232433]: 2025-12-06 07:11:39.474 232437 DEBUG oslo_concurrency.lockutils [req-4c20f2d8-37f8-4ed9-acdc-05c5946b25a0 req-0c984535-bdb6-4c27-848a-f74bb31afcc7 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "34bfd039-04be-43ae-8bee-e37471f0fabe-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:11:39 np0005548731 nova_compute[232433]: 2025-12-06 07:11:39.474 232437 DEBUG oslo_concurrency.lockutils [req-4c20f2d8-37f8-4ed9-acdc-05c5946b25a0 req-0c984535-bdb6-4c27-848a-f74bb31afcc7 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "34bfd039-04be-43ae-8bee-e37471f0fabe-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:11:39 np0005548731 nova_compute[232433]: 2025-12-06 07:11:39.474 232437 DEBUG nova.compute.manager [req-4c20f2d8-37f8-4ed9-acdc-05c5946b25a0 req-0c984535-bdb6-4c27-848a-f74bb31afcc7 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 34bfd039-04be-43ae-8bee-e37471f0fabe] No waiting events found dispatching network-vif-unplugged-7a36ceaa-7b3d-424c-91e9-3dfcaac10f3c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 02:11:39 np0005548731 nova_compute[232433]: 2025-12-06 07:11:39.474 232437 DEBUG nova.compute.manager [req-4c20f2d8-37f8-4ed9-acdc-05c5946b25a0 req-0c984535-bdb6-4c27-848a-f74bb31afcc7 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 34bfd039-04be-43ae-8bee-e37471f0fabe] Received event network-vif-unplugged-7a36ceaa-7b3d-424c-91e9-3dfcaac10f3c for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Dec  6 02:11:40 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:11:40 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:11:40 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:11:40.204 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:11:40 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:11:40 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:11:40 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:11:40.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:11:41 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e222 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:11:41 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e223 e223: 3 total, 3 up, 3 in
Dec  6 02:11:41 np0005548731 nova_compute[232433]: 2025-12-06 07:11:41.699 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:11:41 np0005548731 nova_compute[232433]: 2025-12-06 07:11:41.860 232437 INFO nova.virt.libvirt.driver [None req-deacd633-1a3d-41b8-8877-4422f95a91c5 ef5fe6979bc448399956738faf6c8d3f 50fdc02b371b4fb9b701031b47f328f4 - - default default] [instance: 34bfd039-04be-43ae-8bee-e37471f0fabe] Deleting instance files /var/lib/nova/instances/34bfd039-04be-43ae-8bee-e37471f0fabe_del#033[00m
Dec  6 02:11:41 np0005548731 nova_compute[232433]: 2025-12-06 07:11:41.861 232437 INFO nova.virt.libvirt.driver [None req-deacd633-1a3d-41b8-8877-4422f95a91c5 ef5fe6979bc448399956738faf6c8d3f 50fdc02b371b4fb9b701031b47f328f4 - - default default] [instance: 34bfd039-04be-43ae-8bee-e37471f0fabe] Deletion of /var/lib/nova/instances/34bfd039-04be-43ae-8bee-e37471f0fabe_del complete#033[00m
Dec  6 02:11:42 np0005548731 nova_compute[232433]: 2025-12-06 07:11:41.999 232437 DEBUG nova.compute.manager [req-82d52056-ecf0-4b66-a4c1-097bfed61a5f req-b134d773-2168-4bf8-be30-d66b28c12485 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 34bfd039-04be-43ae-8bee-e37471f0fabe] Received event network-vif-plugged-7a36ceaa-7b3d-424c-91e9-3dfcaac10f3c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:11:42 np0005548731 nova_compute[232433]: 2025-12-06 07:11:42.000 232437 DEBUG oslo_concurrency.lockutils [req-82d52056-ecf0-4b66-a4c1-097bfed61a5f req-b134d773-2168-4bf8-be30-d66b28c12485 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "34bfd039-04be-43ae-8bee-e37471f0fabe-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:11:42 np0005548731 nova_compute[232433]: 2025-12-06 07:11:42.000 232437 DEBUG oslo_concurrency.lockutils [req-82d52056-ecf0-4b66-a4c1-097bfed61a5f req-b134d773-2168-4bf8-be30-d66b28c12485 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "34bfd039-04be-43ae-8bee-e37471f0fabe-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:11:42 np0005548731 nova_compute[232433]: 2025-12-06 07:11:42.000 232437 DEBUG oslo_concurrency.lockutils [req-82d52056-ecf0-4b66-a4c1-097bfed61a5f req-b134d773-2168-4bf8-be30-d66b28c12485 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "34bfd039-04be-43ae-8bee-e37471f0fabe-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:11:42 np0005548731 nova_compute[232433]: 2025-12-06 07:11:42.001 232437 DEBUG nova.compute.manager [req-82d52056-ecf0-4b66-a4c1-097bfed61a5f req-b134d773-2168-4bf8-be30-d66b28c12485 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 34bfd039-04be-43ae-8bee-e37471f0fabe] No waiting events found dispatching network-vif-plugged-7a36ceaa-7b3d-424c-91e9-3dfcaac10f3c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 02:11:42 np0005548731 nova_compute[232433]: 2025-12-06 07:11:42.001 232437 WARNING nova.compute.manager [req-82d52056-ecf0-4b66-a4c1-097bfed61a5f req-b134d773-2168-4bf8-be30-d66b28c12485 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 34bfd039-04be-43ae-8bee-e37471f0fabe] Received unexpected event network-vif-plugged-7a36ceaa-7b3d-424c-91e9-3dfcaac10f3c for instance with vm_state active and task_state deleting.#033[00m
Dec  6 02:11:42 np0005548731 nova_compute[232433]: 2025-12-06 07:11:42.147 232437 INFO nova.compute.manager [None req-deacd633-1a3d-41b8-8877-4422f95a91c5 ef5fe6979bc448399956738faf6c8d3f 50fdc02b371b4fb9b701031b47f328f4 - - default default] [instance: 34bfd039-04be-43ae-8bee-e37471f0fabe] Took 3.67 seconds to destroy the instance on the hypervisor.#033[00m
Dec  6 02:11:42 np0005548731 nova_compute[232433]: 2025-12-06 07:11:42.148 232437 DEBUG oslo.service.loopingcall [None req-deacd633-1a3d-41b8-8877-4422f95a91c5 ef5fe6979bc448399956738faf6c8d3f 50fdc02b371b4fb9b701031b47f328f4 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Dec  6 02:11:42 np0005548731 nova_compute[232433]: 2025-12-06 07:11:42.148 232437 DEBUG nova.compute.manager [-] [instance: 34bfd039-04be-43ae-8bee-e37471f0fabe] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Dec  6 02:11:42 np0005548731 nova_compute[232433]: 2025-12-06 07:11:42.149 232437 DEBUG nova.network.neutron [-] [instance: 34bfd039-04be-43ae-8bee-e37471f0fabe] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Dec  6 02:11:42 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:11:42 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:11:42 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:11:42.206 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:11:42 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:11:42 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:11:42 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:11:42.429 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:11:43 np0005548731 nova_compute[232433]: 2025-12-06 07:11:43.405 232437 DEBUG nova.network.neutron [-] [instance: 34bfd039-04be-43ae-8bee-e37471f0fabe] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 02:11:43 np0005548731 nova_compute[232433]: 2025-12-06 07:11:43.461 232437 INFO nova.compute.manager [-] [instance: 34bfd039-04be-43ae-8bee-e37471f0fabe] Took 1.31 seconds to deallocate network for instance.#033[00m
Dec  6 02:11:43 np0005548731 nova_compute[232433]: 2025-12-06 07:11:43.528 232437 DEBUG oslo_concurrency.lockutils [None req-deacd633-1a3d-41b8-8877-4422f95a91c5 ef5fe6979bc448399956738faf6c8d3f 50fdc02b371b4fb9b701031b47f328f4 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:11:43 np0005548731 nova_compute[232433]: 2025-12-06 07:11:43.529 232437 DEBUG oslo_concurrency.lockutils [None req-deacd633-1a3d-41b8-8877-4422f95a91c5 ef5fe6979bc448399956738faf6c8d3f 50fdc02b371b4fb9b701031b47f328f4 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:11:43 np0005548731 nova_compute[232433]: 2025-12-06 07:11:43.623 232437 DEBUG oslo_concurrency.processutils [None req-deacd633-1a3d-41b8-8877-4422f95a91c5 ef5fe6979bc448399956738faf6c8d3f 50fdc02b371b4fb9b701031b47f328f4 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:11:43 np0005548731 nova_compute[232433]: 2025-12-06 07:11:43.757 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:11:43 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e224 e224: 3 total, 3 up, 3 in
Dec  6 02:11:44 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 02:11:44 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2485373490' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 02:11:44 np0005548731 nova_compute[232433]: 2025-12-06 07:11:44.052 232437 DEBUG oslo_concurrency.processutils [None req-deacd633-1a3d-41b8-8877-4422f95a91c5 ef5fe6979bc448399956738faf6c8d3f 50fdc02b371b4fb9b701031b47f328f4 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.428s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:11:44 np0005548731 nova_compute[232433]: 2025-12-06 07:11:44.059 232437 DEBUG nova.compute.provider_tree [None req-deacd633-1a3d-41b8-8877-4422f95a91c5 ef5fe6979bc448399956738faf6c8d3f 50fdc02b371b4fb9b701031b47f328f4 - - default default] Inventory has not changed in ProviderTree for provider: 6d00757a-082f-486d-ae84-869a2ba2e6e7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  6 02:11:44 np0005548731 nova_compute[232433]: 2025-12-06 07:11:44.072 232437 DEBUG nova.scheduler.client.report [None req-deacd633-1a3d-41b8-8877-4422f95a91c5 ef5fe6979bc448399956738faf6c8d3f 50fdc02b371b4fb9b701031b47f328f4 - - default default] Inventory has not changed for provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  6 02:11:44 np0005548731 nova_compute[232433]: 2025-12-06 07:11:44.087 232437 DEBUG nova.compute.manager [req-fc7179e9-2ee5-47b6-9bee-b3f27b7afb7e req-e0f877c4-4399-493c-83d2-0a5b0d93b624 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 34bfd039-04be-43ae-8bee-e37471f0fabe] Received event network-vif-deleted-7a36ceaa-7b3d-424c-91e9-3dfcaac10f3c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:11:44 np0005548731 nova_compute[232433]: 2025-12-06 07:11:44.103 232437 DEBUG oslo_concurrency.lockutils [None req-deacd633-1a3d-41b8-8877-4422f95a91c5 ef5fe6979bc448399956738faf6c8d3f 50fdc02b371b4fb9b701031b47f328f4 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.574s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:11:44 np0005548731 nova_compute[232433]: 2025-12-06 07:11:44.143 232437 INFO nova.scheduler.client.report [None req-deacd633-1a3d-41b8-8877-4422f95a91c5 ef5fe6979bc448399956738faf6c8d3f 50fdc02b371b4fb9b701031b47f328f4 - - default default] Deleted allocations for instance 34bfd039-04be-43ae-8bee-e37471f0fabe#033[00m
Dec  6 02:11:44 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:11:44 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:11:44 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:11:44.208 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:11:44 np0005548731 nova_compute[232433]: 2025-12-06 07:11:44.268 232437 DEBUG oslo_concurrency.lockutils [None req-deacd633-1a3d-41b8-8877-4422f95a91c5 ef5fe6979bc448399956738faf6c8d3f 50fdc02b371b4fb9b701031b47f328f4 - - default default] Lock "34bfd039-04be-43ae-8bee-e37471f0fabe" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.795s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:11:44 np0005548731 nova_compute[232433]: 2025-12-06 07:11:44.403 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:11:44 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:11:44 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:11:44 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:11:44.432 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:11:44 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 02:11:44 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 02:11:45 np0005548731 nova_compute[232433]: 2025-12-06 07:11:45.145 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:11:45 np0005548731 podman[256113]: 2025-12-06 07:11:45.926723404 +0000 UTC m=+0.085931697 container health_status ae4fd08cdec31d6ae2a6b91ffff7deb6ca5a8375cb52ee4b685cc6f25796824e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  6 02:11:45 np0005548731 podman[256111]: 2025-12-06 07:11:45.933689056 +0000 UTC m=+0.098053646 container health_status 26c2ce9bdb83c6db5ccad7f5b2a7cb4e9354ab0235ad2f942ea42c8feda2142b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Dec  6 02:11:45 np0005548731 podman[256112]: 2025-12-06 07:11:45.96742389 +0000 UTC m=+0.116629356 container health_status a118c3becf56cfbcae208065771ebf10a65162bb7b96389971293e534823fb78 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec  6 02:11:46 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e224 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:11:46 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:11:46 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:11:46 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:11:46.209 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:11:46 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:11:46 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:11:46 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:11:46.435 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:11:46 np0005548731 nova_compute[232433]: 2025-12-06 07:11:46.701 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:11:48 np0005548731 nova_compute[232433]: 2025-12-06 07:11:48.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:11:48 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:11:48 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:11:48 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:11:48.211 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:11:48 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:11:48 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:11:48 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:11:48.438 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:11:48 np0005548731 nova_compute[232433]: 2025-12-06 07:11:48.760 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:11:50 np0005548731 nova_compute[232433]: 2025-12-06 07:11:50.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:11:50 np0005548731 nova_compute[232433]: 2025-12-06 07:11:50.105 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  6 02:11:50 np0005548731 nova_compute[232433]: 2025-12-06 07:11:50.105 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  6 02:11:50 np0005548731 nova_compute[232433]: 2025-12-06 07:11:50.123 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Dec  6 02:11:50 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:11:50 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:11:50 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:11:50.212 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:11:50 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:11:50 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:11:50 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:11:50.440 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:11:51 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e224 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:11:51 np0005548731 nova_compute[232433]: 2025-12-06 07:11:51.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:11:51 np0005548731 nova_compute[232433]: 2025-12-06 07:11:51.703 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:11:52 np0005548731 nova_compute[232433]: 2025-12-06 07:11:52.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:11:52 np0005548731 nova_compute[232433]: 2025-12-06 07:11:52.104 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  6 02:11:52 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:11:52 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:11:52 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:11:52.214 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:11:52 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:11:52 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:11:52 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:11:52.442 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:11:53 np0005548731 nova_compute[232433]: 2025-12-06 07:11:53.071 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:11:53 np0005548731 nova_compute[232433]: 2025-12-06 07:11:53.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:11:53 np0005548731 nova_compute[232433]: 2025-12-06 07:11:53.300 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:11:53 np0005548731 nova_compute[232433]: 2025-12-06 07:11:53.587 232437 DEBUG oslo_concurrency.lockutils [None req-fb03423a-abd1-4381-acb1-4ca57ec9d394 337447c5cffc48bf8256c9166a6ff0e2 da24f0e2d59745828feaaecfeb9fed45 - - default default] Acquiring lock "cd3678f8-d4b4-4c8e-9c3a-be877c8a2268" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:11:53 np0005548731 nova_compute[232433]: 2025-12-06 07:11:53.588 232437 DEBUG oslo_concurrency.lockutils [None req-fb03423a-abd1-4381-acb1-4ca57ec9d394 337447c5cffc48bf8256c9166a6ff0e2 da24f0e2d59745828feaaecfeb9fed45 - - default default] Lock "cd3678f8-d4b4-4c8e-9c3a-be877c8a2268" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:11:53 np0005548731 nova_compute[232433]: 2025-12-06 07:11:53.607 232437 DEBUG nova.compute.manager [None req-fb03423a-abd1-4381-acb1-4ca57ec9d394 337447c5cffc48bf8256c9166a6ff0e2 da24f0e2d59745828feaaecfeb9fed45 - - default default] [instance: cd3678f8-d4b4-4c8e-9c3a-be877c8a2268] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Dec  6 02:11:53 np0005548731 nova_compute[232433]: 2025-12-06 07:11:53.709 232437 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765005098.7085836, 34bfd039-04be-43ae-8bee-e37471f0fabe => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 02:11:53 np0005548731 nova_compute[232433]: 2025-12-06 07:11:53.710 232437 INFO nova.compute.manager [-] [instance: 34bfd039-04be-43ae-8bee-e37471f0fabe] VM Stopped (Lifecycle Event)#033[00m
Dec  6 02:11:53 np0005548731 nova_compute[232433]: 2025-12-06 07:11:53.718 232437 DEBUG oslo_concurrency.lockutils [None req-fb03423a-abd1-4381-acb1-4ca57ec9d394 337447c5cffc48bf8256c9166a6ff0e2 da24f0e2d59745828feaaecfeb9fed45 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:11:53 np0005548731 nova_compute[232433]: 2025-12-06 07:11:53.719 232437 DEBUG oslo_concurrency.lockutils [None req-fb03423a-abd1-4381-acb1-4ca57ec9d394 337447c5cffc48bf8256c9166a6ff0e2 da24f0e2d59745828feaaecfeb9fed45 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:11:53 np0005548731 nova_compute[232433]: 2025-12-06 07:11:53.727 232437 DEBUG nova.virt.hardware [None req-fb03423a-abd1-4381-acb1-4ca57ec9d394 337447c5cffc48bf8256c9166a6ff0e2 da24f0e2d59745828feaaecfeb9fed45 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Dec  6 02:11:53 np0005548731 nova_compute[232433]: 2025-12-06 07:11:53.728 232437 INFO nova.compute.claims [None req-fb03423a-abd1-4381-acb1-4ca57ec9d394 337447c5cffc48bf8256c9166a6ff0e2 da24f0e2d59745828feaaecfeb9fed45 - - default default] [instance: cd3678f8-d4b4-4c8e-9c3a-be877c8a2268] Claim successful on node compute-2.ctlplane.example.com#033[00m
Dec  6 02:11:53 np0005548731 nova_compute[232433]: 2025-12-06 07:11:53.735 232437 DEBUG nova.compute.manager [None req-052adcd5-6a9e-475f-abee-de2d97b741ce - - - - - -] [instance: 34bfd039-04be-43ae-8bee-e37471f0fabe] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:11:53 np0005548731 nova_compute[232433]: 2025-12-06 07:11:53.762 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:11:53 np0005548731 nova_compute[232433]: 2025-12-06 07:11:53.906 232437 DEBUG oslo_concurrency.processutils [None req-fb03423a-abd1-4381-acb1-4ca57ec9d394 337447c5cffc48bf8256c9166a6ff0e2 da24f0e2d59745828feaaecfeb9fed45 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:11:54 np0005548731 nova_compute[232433]: 2025-12-06 07:11:54.105 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:11:54 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:11:54 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:11:54 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:11:54.216 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:11:54 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 02:11:54 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/159950985' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 02:11:54 np0005548731 nova_compute[232433]: 2025-12-06 07:11:54.378 232437 DEBUG oslo_concurrency.processutils [None req-fb03423a-abd1-4381-acb1-4ca57ec9d394 337447c5cffc48bf8256c9166a6ff0e2 da24f0e2d59745828feaaecfeb9fed45 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.472s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
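The log above shows nova shelling out to `ceph df --format=json --id openstack` while sizing storage for the new instance. A minimal sketch of consuming that JSON, assuming the usual Ceph CLI schema (`pools` list with per-pool `stats.bytes_used` / `stats.max_avail`); the sample document and its numbers are made up for illustration:

```python
import json

# Trimmed, made-up sample in the shape `ceph df --format=json` emits.
sample = '''
{
  "stats": {"total_bytes": 64424509440, "total_avail_bytes": 42949672960},
  "pools": [
    {"name": "vms", "stats": {"bytes_used": 1073741824, "max_avail": 21474836480}}
  ]
}
'''

def pool_usage(df_json: str, pool_name: str):
    """Return (bytes_used, max_avail) for one pool from `ceph df` JSON."""
    data = json.loads(df_json)
    for pool in data["pools"]:
        if pool["name"] == pool_name:
            stats = pool["stats"]
            return stats["bytes_used"], stats["max_avail"]
    raise KeyError(pool_name)

used, avail = pool_usage(sample, "vms")
print(used // 2**20, "MiB used;", avail // 2**30, "GiB available")
```

Nova's rbd image backend runs this query both for the instance claim (req-fb03423a above) and the periodic `update_available_resource` task, which is why the same command appears twice within a second.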
Dec  6 02:11:54 np0005548731 nova_compute[232433]: 2025-12-06 07:11:54.385 232437 DEBUG nova.compute.provider_tree [None req-fb03423a-abd1-4381-acb1-4ca57ec9d394 337447c5cffc48bf8256c9166a6ff0e2 da24f0e2d59745828feaaecfeb9fed45 - - default default] Inventory has not changed in ProviderTree for provider: 6d00757a-082f-486d-ae84-869a2ba2e6e7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  6 02:11:54 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:11:54 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:11:54 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:11:54.445 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:11:54 np0005548731 nova_compute[232433]: 2025-12-06 07:11:54.450 232437 DEBUG nova.scheduler.client.report [None req-fb03423a-abd1-4381-acb1-4ca57ec9d394 337447c5cffc48bf8256c9166a6ff0e2 da24f0e2d59745828feaaecfeb9fed45 - - default default] Inventory has not changed for provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  6 02:11:54 np0005548731 nova_compute[232433]: 2025-12-06 07:11:54.871 232437 DEBUG oslo_concurrency.lockutils [None req-fb03423a-abd1-4381-acb1-4ca57ec9d394 337447c5cffc48bf8256c9166a6ff0e2 da24f0e2d59745828feaaecfeb9fed45 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.152s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:11:54 np0005548731 nova_compute[232433]: 2025-12-06 07:11:54.872 232437 DEBUG nova.compute.manager [None req-fb03423a-abd1-4381-acb1-4ca57ec9d394 337447c5cffc48bf8256c9166a6ff0e2 da24f0e2d59745828feaaecfeb9fed45 - - default default] [instance: cd3678f8-d4b4-4c8e-9c3a-be877c8a2268] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Dec  6 02:11:55 np0005548731 nova_compute[232433]: 2025-12-06 07:11:55.069 232437 DEBUG nova.compute.manager [None req-fb03423a-abd1-4381-acb1-4ca57ec9d394 337447c5cffc48bf8256c9166a6ff0e2 da24f0e2d59745828feaaecfeb9fed45 - - default default] [instance: cd3678f8-d4b4-4c8e-9c3a-be877c8a2268] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Dec  6 02:11:55 np0005548731 nova_compute[232433]: 2025-12-06 07:11:55.069 232437 DEBUG nova.network.neutron [None req-fb03423a-abd1-4381-acb1-4ca57ec9d394 337447c5cffc48bf8256c9166a6ff0e2 da24f0e2d59745828feaaecfeb9fed45 - - default default] [instance: cd3678f8-d4b4-4c8e-9c3a-be877c8a2268] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Dec  6 02:11:55 np0005548731 nova_compute[232433]: 2025-12-06 07:11:55.096 232437 INFO nova.virt.libvirt.driver [None req-fb03423a-abd1-4381-acb1-4ca57ec9d394 337447c5cffc48bf8256c9166a6ff0e2 da24f0e2d59745828feaaecfeb9fed45 - - default default] [instance: cd3678f8-d4b4-4c8e-9c3a-be877c8a2268] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Dec  6 02:11:55 np0005548731 nova_compute[232433]: 2025-12-06 07:11:55.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:11:55 np0005548731 nova_compute[232433]: 2025-12-06 07:11:55.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:11:55 np0005548731 nova_compute[232433]: 2025-12-06 07:11:55.113 232437 DEBUG nova.compute.manager [None req-fb03423a-abd1-4381-acb1-4ca57ec9d394 337447c5cffc48bf8256c9166a6ff0e2 da24f0e2d59745828feaaecfeb9fed45 - - default default] [instance: cd3678f8-d4b4-4c8e-9c3a-be877c8a2268] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Dec  6 02:11:55 np0005548731 nova_compute[232433]: 2025-12-06 07:11:55.136 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:11:55 np0005548731 nova_compute[232433]: 2025-12-06 07:11:55.136 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:11:55 np0005548731 nova_compute[232433]: 2025-12-06 07:11:55.137 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:11:55 np0005548731 nova_compute[232433]: 2025-12-06 07:11:55.137 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  6 02:11:55 np0005548731 nova_compute[232433]: 2025-12-06 07:11:55.137 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:11:55 np0005548731 nova_compute[232433]: 2025-12-06 07:11:55.213 232437 DEBUG nova.compute.manager [None req-fb03423a-abd1-4381-acb1-4ca57ec9d394 337447c5cffc48bf8256c9166a6ff0e2 da24f0e2d59745828feaaecfeb9fed45 - - default default] [instance: cd3678f8-d4b4-4c8e-9c3a-be877c8a2268] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Dec  6 02:11:55 np0005548731 nova_compute[232433]: 2025-12-06 07:11:55.215 232437 DEBUG nova.virt.libvirt.driver [None req-fb03423a-abd1-4381-acb1-4ca57ec9d394 337447c5cffc48bf8256c9166a6ff0e2 da24f0e2d59745828feaaecfeb9fed45 - - default default] [instance: cd3678f8-d4b4-4c8e-9c3a-be877c8a2268] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Dec  6 02:11:55 np0005548731 nova_compute[232433]: 2025-12-06 07:11:55.216 232437 INFO nova.virt.libvirt.driver [None req-fb03423a-abd1-4381-acb1-4ca57ec9d394 337447c5cffc48bf8256c9166a6ff0e2 da24f0e2d59745828feaaecfeb9fed45 - - default default] [instance: cd3678f8-d4b4-4c8e-9c3a-be877c8a2268] Creating image(s)#033[00m
Dec  6 02:11:55 np0005548731 nova_compute[232433]: 2025-12-06 07:11:55.248 232437 DEBUG nova.storage.rbd_utils [None req-fb03423a-abd1-4381-acb1-4ca57ec9d394 337447c5cffc48bf8256c9166a6ff0e2 da24f0e2d59745828feaaecfeb9fed45 - - default default] rbd image cd3678f8-d4b4-4c8e-9c3a-be877c8a2268_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:11:55 np0005548731 nova_compute[232433]: 2025-12-06 07:11:55.283 232437 DEBUG nova.storage.rbd_utils [None req-fb03423a-abd1-4381-acb1-4ca57ec9d394 337447c5cffc48bf8256c9166a6ff0e2 da24f0e2d59745828feaaecfeb9fed45 - - default default] rbd image cd3678f8-d4b4-4c8e-9c3a-be877c8a2268_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:11:55 np0005548731 nova_compute[232433]: 2025-12-06 07:11:55.315 232437 DEBUG nova.storage.rbd_utils [None req-fb03423a-abd1-4381-acb1-4ca57ec9d394 337447c5cffc48bf8256c9166a6ff0e2 da24f0e2d59745828feaaecfeb9fed45 - - default default] rbd image cd3678f8-d4b4-4c8e-9c3a-be877c8a2268_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:11:55 np0005548731 nova_compute[232433]: 2025-12-06 07:11:55.319 232437 DEBUG oslo_concurrency.processutils [None req-fb03423a-abd1-4381-acb1-4ca57ec9d394 337447c5cffc48bf8256c9166a6ff0e2 da24f0e2d59745828feaaecfeb9fed45 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:11:55 np0005548731 nova_compute[232433]: 2025-12-06 07:11:55.395 232437 DEBUG oslo_concurrency.processutils [None req-fb03423a-abd1-4381-acb1-4ca57ec9d394 337447c5cffc48bf8256c9166a6ff0e2 da24f0e2d59745828feaaecfeb9fed45 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json" returned: 0 in 0.076s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:11:55 np0005548731 nova_compute[232433]: 2025-12-06 07:11:55.397 232437 DEBUG oslo_concurrency.lockutils [None req-fb03423a-abd1-4381-acb1-4ca57ec9d394 337447c5cffc48bf8256c9166a6ff0e2 da24f0e2d59745828feaaecfeb9fed45 - - default default] Acquiring lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:11:55 np0005548731 nova_compute[232433]: 2025-12-06 07:11:55.397 232437 DEBUG oslo_concurrency.lockutils [None req-fb03423a-abd1-4381-acb1-4ca57ec9d394 337447c5cffc48bf8256c9166a6ff0e2 da24f0e2d59745828feaaecfeb9fed45 - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:11:55 np0005548731 nova_compute[232433]: 2025-12-06 07:11:55.398 232437 DEBUG oslo_concurrency.lockutils [None req-fb03423a-abd1-4381-acb1-4ca57ec9d394 337447c5cffc48bf8256c9166a6ff0e2 da24f0e2d59745828feaaecfeb9fed45 - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:11:55 np0005548731 nova_compute[232433]: 2025-12-06 07:11:55.421 232437 DEBUG nova.storage.rbd_utils [None req-fb03423a-abd1-4381-acb1-4ca57ec9d394 337447c5cffc48bf8256c9166a6ff0e2 da24f0e2d59745828feaaecfeb9fed45 - - default default] rbd image cd3678f8-d4b4-4c8e-9c3a-be877c8a2268_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:11:55 np0005548731 nova_compute[232433]: 2025-12-06 07:11:55.424 232437 DEBUG oslo_concurrency.processutils [None req-fb03423a-abd1-4381-acb1-4ca57ec9d394 337447c5cffc48bf8256c9166a6ff0e2 da24f0e2d59745828feaaecfeb9fed45 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef cd3678f8-d4b4-4c8e-9c3a-be877c8a2268_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:11:55 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 02:11:55 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2847361391' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 02:11:55 np0005548731 nova_compute[232433]: 2025-12-06 07:11:55.626 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.489s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:11:55 np0005548731 nova_compute[232433]: 2025-12-06 07:11:55.687 232437 DEBUG nova.policy [None req-fb03423a-abd1-4381-acb1-4ca57ec9d394 337447c5cffc48bf8256c9166a6ff0e2 da24f0e2d59745828feaaecfeb9fed45 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '337447c5cffc48bf8256c9166a6ff0e2', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'da24f0e2d59745828feaaecfeb9fed45', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Dec  6 02:11:55 np0005548731 nova_compute[232433]: 2025-12-06 07:11:55.851 232437 WARNING nova.virt.libvirt.driver [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  6 02:11:55 np0005548731 nova_compute[232433]: 2025-12-06 07:11:55.852 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4697MB free_disk=20.94676971435547GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  6 02:11:55 np0005548731 nova_compute[232433]: 2025-12-06 07:11:55.852 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:11:55 np0005548731 nova_compute[232433]: 2025-12-06 07:11:55.852 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:11:55 np0005548731 nova_compute[232433]: 2025-12-06 07:11:55.930 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Instance cd3678f8-d4b4-4c8e-9c3a-be877c8a2268 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  6 02:11:55 np0005548731 nova_compute[232433]: 2025-12-06 07:11:55.931 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  6 02:11:55 np0005548731 nova_compute[232433]: 2025-12-06 07:11:55.931 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  6 02:11:55 np0005548731 nova_compute[232433]: 2025-12-06 07:11:55.967 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:11:56 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e224 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:11:56 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:11:56 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:11:56 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:11:56.218 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:11:56 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 02:11:56 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1606640908' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 02:11:56 np0005548731 nova_compute[232433]: 2025-12-06 07:11:56.405 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.438s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:11:56 np0005548731 nova_compute[232433]: 2025-12-06 07:11:56.413 232437 DEBUG nova.compute.provider_tree [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Inventory has not changed in ProviderTree for provider: 6d00757a-082f-486d-ae84-869a2ba2e6e7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  6 02:11:56 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:11:56 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:11:56 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:11:56.448 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
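The radosgw `beast:` lines that recur every two seconds are haproxy-style health checks (`HEAD /` from the load-balancer VIPs), each carrying client, status, and a `latency=` field. A small sketch that extracts those fields; the regex is inferred from the line layout shown here, not from a documented format:

```python
import re

# Two access-log lines copied from the radosgw "beast" frontend output above.
lines = [
    'beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:11:50.212 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s',
    'beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:11:50.440 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s',
]

pattern = re.compile(
    r'beast: \S+: (?P<client>\S+) .*"(?P<verb>\S+) (?P<path>\S+) [^"]*" '
    r'(?P<status>\d+) \d+ .* latency=(?P<latency>[\d.]+)s'
)

for line in lines:
    m = pattern.search(line)
    if m:
        print(m["client"], m["verb"], m["path"], m["status"], float(m["latency"]))
```

Filtering out these anonymous `HEAD /` probes is usually the first step when hunting for real S3 traffic in a log like this one.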
Dec  6 02:11:56 np0005548731 nova_compute[232433]: 2025-12-06 07:11:56.579 232437 DEBUG nova.scheduler.client.report [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Inventory has not changed for provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  6 02:11:56 np0005548731 nova_compute[232433]: 2025-12-06 07:11:56.664 232437 DEBUG oslo_concurrency.processutils [None req-fb03423a-abd1-4381-acb1-4ca57ec9d394 337447c5cffc48bf8256c9166a6ff0e2 da24f0e2d59745828feaaecfeb9fed45 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef cd3678f8-d4b4-4c8e-9c3a-be877c8a2268_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.240s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:11:56 np0005548731 nova_compute[232433]: 2025-12-06 07:11:56.701 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  6 02:11:56 np0005548731 nova_compute[232433]: 2025-12-06 07:11:56.701 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.849s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:11:56 np0005548731 nova_compute[232433]: 2025-12-06 07:11:56.739 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:11:56 np0005548731 nova_compute[232433]: 2025-12-06 07:11:56.746 232437 DEBUG nova.storage.rbd_utils [None req-fb03423a-abd1-4381-acb1-4ca57ec9d394 337447c5cffc48bf8256c9166a6ff0e2 da24f0e2d59745828feaaecfeb9fed45 - - default default] resizing rbd image cd3678f8-d4b4-4c8e-9c3a-be877c8a2268_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Dec  6 02:11:57 np0005548731 nova_compute[232433]: 2025-12-06 07:11:57.426 232437 DEBUG nova.network.neutron [None req-fb03423a-abd1-4381-acb1-4ca57ec9d394 337447c5cffc48bf8256c9166a6ff0e2 da24f0e2d59745828feaaecfeb9fed45 - - default default] [instance: cd3678f8-d4b4-4c8e-9c3a-be877c8a2268] Successfully created port: 1693e5d5-d58b-4161-a25e-13d925d5185a _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Dec  6 02:11:57 np0005548731 nova_compute[232433]: 2025-12-06 07:11:57.971 232437 DEBUG nova.objects.instance [None req-fb03423a-abd1-4381-acb1-4ca57ec9d394 337447c5cffc48bf8256c9166a6ff0e2 da24f0e2d59745828feaaecfeb9fed45 - - default default] Lazy-loading 'migration_context' on Instance uuid cd3678f8-d4b4-4c8e-9c3a-be877c8a2268 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:11:57 np0005548731 nova_compute[232433]: 2025-12-06 07:11:57.987 232437 DEBUG nova.virt.libvirt.driver [None req-fb03423a-abd1-4381-acb1-4ca57ec9d394 337447c5cffc48bf8256c9166a6ff0e2 da24f0e2d59745828feaaecfeb9fed45 - - default default] [instance: cd3678f8-d4b4-4c8e-9c3a-be877c8a2268] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Dec  6 02:11:57 np0005548731 nova_compute[232433]: 2025-12-06 07:11:57.988 232437 DEBUG nova.virt.libvirt.driver [None req-fb03423a-abd1-4381-acb1-4ca57ec9d394 337447c5cffc48bf8256c9166a6ff0e2 da24f0e2d59745828feaaecfeb9fed45 - - default default] [instance: cd3678f8-d4b4-4c8e-9c3a-be877c8a2268] Ensure instance console log exists: /var/lib/nova/instances/cd3678f8-d4b4-4c8e-9c3a-be877c8a2268/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec  6 02:11:57 np0005548731 nova_compute[232433]: 2025-12-06 07:11:57.988 232437 DEBUG oslo_concurrency.lockutils [None req-fb03423a-abd1-4381-acb1-4ca57ec9d394 337447c5cffc48bf8256c9166a6ff0e2 da24f0e2d59745828feaaecfeb9fed45 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:11:57 np0005548731 nova_compute[232433]: 2025-12-06 07:11:57.989 232437 DEBUG oslo_concurrency.lockutils [None req-fb03423a-abd1-4381-acb1-4ca57ec9d394 337447c5cffc48bf8256c9166a6ff0e2 da24f0e2d59745828feaaecfeb9fed45 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:11:57 np0005548731 nova_compute[232433]: 2025-12-06 07:11:57.989 232437 DEBUG oslo_concurrency.lockutils [None req-fb03423a-abd1-4381-acb1-4ca57ec9d394 337447c5cffc48bf8256c9166a6ff0e2 da24f0e2d59745828feaaecfeb9fed45 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:11:58 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:11:58 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:11:58 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:11:58.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:11:58 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:11:58 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:11:58 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:11:58.451 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:11:58 np0005548731 nova_compute[232433]: 2025-12-06 07:11:58.764 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:11:59 np0005548731 nova_compute[232433]: 2025-12-06 07:11:59.166 232437 DEBUG nova.network.neutron [None req-fb03423a-abd1-4381-acb1-4ca57ec9d394 337447c5cffc48bf8256c9166a6ff0e2 da24f0e2d59745828feaaecfeb9fed45 - - default default] [instance: cd3678f8-d4b4-4c8e-9c3a-be877c8a2268] Successfully created port: f8184177-3443-451c-8620-29c0cb8524ae _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Dec  6 02:12:00 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:12:00 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:12:00 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:12:00.224 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:12:00 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:12:00 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:12:00 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:12:00.455 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:12:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:12:00.854 143965 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:12:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:12:00.855 143965 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:12:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:12:00.855 143965 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:12:01 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e224 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:12:01 np0005548731 nova_compute[232433]: 2025-12-06 07:12:01.240 232437 DEBUG nova.network.neutron [None req-fb03423a-abd1-4381-acb1-4ca57ec9d394 337447c5cffc48bf8256c9166a6ff0e2 da24f0e2d59745828feaaecfeb9fed45 - - default default] [instance: cd3678f8-d4b4-4c8e-9c3a-be877c8a2268] Successfully updated port: 1693e5d5-d58b-4161-a25e-13d925d5185a _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Dec  6 02:12:01 np0005548731 nova_compute[232433]: 2025-12-06 07:12:01.424 232437 DEBUG nova.compute.manager [req-1f190b45-984a-474d-af32-00cb2e852aca req-c6ec5398-3697-4f53-8091-1b8216b140a1 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: cd3678f8-d4b4-4c8e-9c3a-be877c8a2268] Received event network-changed-1693e5d5-d58b-4161-a25e-13d925d5185a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:12:01 np0005548731 nova_compute[232433]: 2025-12-06 07:12:01.425 232437 DEBUG nova.compute.manager [req-1f190b45-984a-474d-af32-00cb2e852aca req-c6ec5398-3697-4f53-8091-1b8216b140a1 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: cd3678f8-d4b4-4c8e-9c3a-be877c8a2268] Refreshing instance network info cache due to event network-changed-1693e5d5-d58b-4161-a25e-13d925d5185a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  6 02:12:01 np0005548731 nova_compute[232433]: 2025-12-06 07:12:01.425 232437 DEBUG oslo_concurrency.lockutils [req-1f190b45-984a-474d-af32-00cb2e852aca req-c6ec5398-3697-4f53-8091-1b8216b140a1 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-cd3678f8-d4b4-4c8e-9c3a-be877c8a2268" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 02:12:01 np0005548731 nova_compute[232433]: 2025-12-06 07:12:01.425 232437 DEBUG oslo_concurrency.lockutils [req-1f190b45-984a-474d-af32-00cb2e852aca req-c6ec5398-3697-4f53-8091-1b8216b140a1 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-cd3678f8-d4b4-4c8e-9c3a-be877c8a2268" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 02:12:01 np0005548731 nova_compute[232433]: 2025-12-06 07:12:01.426 232437 DEBUG nova.network.neutron [req-1f190b45-984a-474d-af32-00cb2e852aca req-c6ec5398-3697-4f53-8091-1b8216b140a1 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: cd3678f8-d4b4-4c8e-9c3a-be877c8a2268] Refreshing network info cache for port 1693e5d5-d58b-4161-a25e-13d925d5185a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  6 02:12:01 np0005548731 ceph-mon[77458]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #61. Immutable memtables: 0.
Dec  6 02:12:01 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:12:01.451641) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec  6 02:12:01 np0005548731 ceph-mon[77458]: rocksdb: [db/flush_job.cc:856] [default] [JOB 35] Flushing memtable with next log file: 61
Dec  6 02:12:01 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765005121451717, "job": 35, "event": "flush_started", "num_memtables": 1, "num_entries": 2604, "num_deletes": 269, "total_data_size": 6001329, "memory_usage": 6069824, "flush_reason": "Manual Compaction"}
Dec  6 02:12:01 np0005548731 ceph-mon[77458]: rocksdb: [db/flush_job.cc:885] [default] [JOB 35] Level-0 flush table #62: started
Dec  6 02:12:01 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765005121477294, "cf_name": "default", "job": 35, "event": "table_file_creation", "file_number": 62, "file_size": 3921227, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 31379, "largest_seqno": 33978, "table_properties": {"data_size": 3910173, "index_size": 7228, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2757, "raw_key_size": 23428, "raw_average_key_size": 21, "raw_value_size": 3888025, "raw_average_value_size": 3566, "num_data_blocks": 310, "num_entries": 1090, "num_filter_entries": 1090, "num_deletions": 269, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765004931, "oldest_key_time": 1765004931, "file_creation_time": 1765005121, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bc01f9a1-fda8-4008-aee1-1fa5ff4371a1", "db_session_id": "CWBL8WMDM4D79BU86E9T", "orig_file_number": 62, "seqno_to_time_mapping": "N/A"}}
Dec  6 02:12:01 np0005548731 ceph-mon[77458]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 35] Flush lasted 26013 microseconds, and 8581 cpu microseconds.
Dec  6 02:12:01 np0005548731 ceph-mon[77458]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  6 02:12:01 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:12:01.477673) [db/flush_job.cc:967] [default] [JOB 35] Level-0 flush table #62: 3921227 bytes OK
Dec  6 02:12:01 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:12:01.477784) [db/memtable_list.cc:519] [default] Level-0 commit table #62 started
Dec  6 02:12:01 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:12:01.480678) [db/memtable_list.cc:722] [default] Level-0 commit table #62: memtable #1 done
Dec  6 02:12:01 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:12:01.480714) EVENT_LOG_v1 {"time_micros": 1765005121480704, "job": 35, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec  6 02:12:01 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:12:01.480736) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec  6 02:12:01 np0005548731 ceph-mon[77458]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 35] Try to delete WAL files size 5989612, prev total WAL file size 5989612, number of live WAL files 2.
Dec  6 02:12:01 np0005548731 ceph-mon[77458]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000058.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  6 02:12:01 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:12:01.482698) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032353130' seq:72057594037927935, type:22 .. '7061786F730032373632' seq:0, type:0; will stop at (end)
Dec  6 02:12:01 np0005548731 ceph-mon[77458]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 36] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec  6 02:12:01 np0005548731 ceph-mon[77458]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 35 Base level 0, inputs: [62(3829KB)], [60(9492KB)]
Dec  6 02:12:01 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765005121482784, "job": 36, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [62], "files_L6": [60], "score": -1, "input_data_size": 13641467, "oldest_snapshot_seqno": -1}
Dec  6 02:12:01 np0005548731 ceph-mon[77458]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 36] Generated table #63: 6517 keys, 11756237 bytes, temperature: kUnknown
Dec  6 02:12:01 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765005121641422, "cf_name": "default", "job": 36, "event": "table_file_creation", "file_number": 63, "file_size": 11756237, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11710266, "index_size": 28581, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16325, "raw_key_size": 165751, "raw_average_key_size": 25, "raw_value_size": 11590741, "raw_average_value_size": 1778, "num_data_blocks": 1153, "num_entries": 6517, "num_filter_entries": 6517, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765002564, "oldest_key_time": 0, "file_creation_time": 1765005121, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bc01f9a1-fda8-4008-aee1-1fa5ff4371a1", "db_session_id": "CWBL8WMDM4D79BU86E9T", "orig_file_number": 63, "seqno_to_time_mapping": "N/A"}}
Dec  6 02:12:01 np0005548731 ceph-mon[77458]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  6 02:12:01 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:12:01.642067) [db/compaction/compaction_job.cc:1663] [default] [JOB 36] Compacted 1@0 + 1@6 files to L6 => 11756237 bytes
Dec  6 02:12:01 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:12:01.645984) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 85.8 rd, 74.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.7, 9.3 +0.0 blob) out(11.2 +0.0 blob), read-write-amplify(6.5) write-amplify(3.0) OK, records in: 7064, records dropped: 547 output_compression: NoCompression
Dec  6 02:12:01 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:12:01.646040) EVENT_LOG_v1 {"time_micros": 1765005121646015, "job": 36, "event": "compaction_finished", "compaction_time_micros": 158913, "compaction_time_cpu_micros": 26101, "output_level": 6, "num_output_files": 1, "total_output_size": 11756237, "num_input_records": 7064, "num_output_records": 6517, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec  6 02:12:01 np0005548731 ceph-mon[77458]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000062.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  6 02:12:01 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765005121648244, "job": 36, "event": "table_file_deletion", "file_number": 62}
Dec  6 02:12:01 np0005548731 ceph-mon[77458]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000060.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  6 02:12:01 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765005121651059, "job": 36, "event": "table_file_deletion", "file_number": 60}
Dec  6 02:12:01 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:12:01.482611) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 02:12:01 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:12:01.651154) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 02:12:01 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:12:01.651161) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 02:12:01 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:12:01.651162) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 02:12:01 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:12:01.651164) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 02:12:01 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:12:01.651166) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 02:12:01 np0005548731 nova_compute[232433]: 2025-12-06 07:12:01.706 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:12:02 np0005548731 nova_compute[232433]: 2025-12-06 07:12:02.173 232437 DEBUG nova.network.neutron [req-1f190b45-984a-474d-af32-00cb2e852aca req-c6ec5398-3697-4f53-8091-1b8216b140a1 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: cd3678f8-d4b4-4c8e-9c3a-be877c8a2268] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec  6 02:12:02 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:12:02 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:12:02 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:12:02.226 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:12:02 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:12:02 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:12:02 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:12:02.458 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:12:02 np0005548731 nova_compute[232433]: 2025-12-06 07:12:02.551 232437 DEBUG nova.network.neutron [req-1f190b45-984a-474d-af32-00cb2e852aca req-c6ec5398-3697-4f53-8091-1b8216b140a1 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: cd3678f8-d4b4-4c8e-9c3a-be877c8a2268] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 02:12:02 np0005548731 nova_compute[232433]: 2025-12-06 07:12:02.570 232437 DEBUG oslo_concurrency.lockutils [req-1f190b45-984a-474d-af32-00cb2e852aca req-c6ec5398-3697-4f53-8091-1b8216b140a1 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-cd3678f8-d4b4-4c8e-9c3a-be877c8a2268" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 02:12:03 np0005548731 nova_compute[232433]: 2025-12-06 07:12:03.399 232437 DEBUG nova.network.neutron [None req-fb03423a-abd1-4381-acb1-4ca57ec9d394 337447c5cffc48bf8256c9166a6ff0e2 da24f0e2d59745828feaaecfeb9fed45 - - default default] [instance: cd3678f8-d4b4-4c8e-9c3a-be877c8a2268] Successfully updated port: f8184177-3443-451c-8620-29c0cb8524ae _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Dec  6 02:12:03 np0005548731 nova_compute[232433]: 2025-12-06 07:12:03.431 232437 DEBUG oslo_concurrency.lockutils [None req-fb03423a-abd1-4381-acb1-4ca57ec9d394 337447c5cffc48bf8256c9166a6ff0e2 da24f0e2d59745828feaaecfeb9fed45 - - default default] Acquiring lock "refresh_cache-cd3678f8-d4b4-4c8e-9c3a-be877c8a2268" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 02:12:03 np0005548731 nova_compute[232433]: 2025-12-06 07:12:03.431 232437 DEBUG oslo_concurrency.lockutils [None req-fb03423a-abd1-4381-acb1-4ca57ec9d394 337447c5cffc48bf8256c9166a6ff0e2 da24f0e2d59745828feaaecfeb9fed45 - - default default] Acquired lock "refresh_cache-cd3678f8-d4b4-4c8e-9c3a-be877c8a2268" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 02:12:03 np0005548731 nova_compute[232433]: 2025-12-06 07:12:03.432 232437 DEBUG nova.network.neutron [None req-fb03423a-abd1-4381-acb1-4ca57ec9d394 337447c5cffc48bf8256c9166a6ff0e2 da24f0e2d59745828feaaecfeb9fed45 - - default default] [instance: cd3678f8-d4b4-4c8e-9c3a-be877c8a2268] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec  6 02:12:03 np0005548731 nova_compute[232433]: 2025-12-06 07:12:03.542 232437 DEBUG nova.compute.manager [req-10c9548d-522b-4414-b56d-f3cf82d112d7 req-8c90c912-46c9-4500-95d7-f8c7117679c0 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: cd3678f8-d4b4-4c8e-9c3a-be877c8a2268] Received event network-changed-f8184177-3443-451c-8620-29c0cb8524ae external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:12:03 np0005548731 nova_compute[232433]: 2025-12-06 07:12:03.543 232437 DEBUG nova.compute.manager [req-10c9548d-522b-4414-b56d-f3cf82d112d7 req-8c90c912-46c9-4500-95d7-f8c7117679c0 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: cd3678f8-d4b4-4c8e-9c3a-be877c8a2268] Refreshing instance network info cache due to event network-changed-f8184177-3443-451c-8620-29c0cb8524ae. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  6 02:12:03 np0005548731 nova_compute[232433]: 2025-12-06 07:12:03.544 232437 DEBUG oslo_concurrency.lockutils [req-10c9548d-522b-4414-b56d-f3cf82d112d7 req-8c90c912-46c9-4500-95d7-f8c7117679c0 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-cd3678f8-d4b4-4c8e-9c3a-be877c8a2268" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 02:12:03 np0005548731 nova_compute[232433]: 2025-12-06 07:12:03.596 232437 DEBUG nova.network.neutron [None req-fb03423a-abd1-4381-acb1-4ca57ec9d394 337447c5cffc48bf8256c9166a6ff0e2 da24f0e2d59745828feaaecfeb9fed45 - - default default] [instance: cd3678f8-d4b4-4c8e-9c3a-be877c8a2268] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec  6 02:12:03 np0005548731 nova_compute[232433]: 2025-12-06 07:12:03.767 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:12:04 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:12:04 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 02:12:04 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:12:04.228 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 02:12:04 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:12:04 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:12:04 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:12:04.461 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:12:06 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e224 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:12:06 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:12:06 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:12:06 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:12:06.231 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:12:06 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:12:06 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:12:06 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:12:06.465 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:12:06 np0005548731 nova_compute[232433]: 2025-12-06 07:12:06.583 232437 DEBUG nova.network.neutron [None req-fb03423a-abd1-4381-acb1-4ca57ec9d394 337447c5cffc48bf8256c9166a6ff0e2 da24f0e2d59745828feaaecfeb9fed45 - - default default] [instance: cd3678f8-d4b4-4c8e-9c3a-be877c8a2268] Updating instance_info_cache with network_info: [{"id": "1693e5d5-d58b-4161-a25e-13d925d5185a", "address": "fa:16:3e:07:90:6a", "network": {"id": "76036c60-8614-4d35-bbfa-05b970e10c21", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-744030909", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.115", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da24f0e2d59745828feaaecfeb9fed45", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1693e5d5-d5", "ovs_interfaceid": "1693e5d5-d58b-4161-a25e-13d925d5185a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "f8184177-3443-451c-8620-29c0cb8524ae", "address": "fa:16:3e:e7:2b:1a", "network": {"id": "603d56cb-3f26-4fa4-a8ba-1d17c9191d9d", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-254058360", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.92", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da24f0e2d59745828feaaecfeb9fed45", "mtu": 1442, "physical_network": null, 
"tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf8184177-34", "ovs_interfaceid": "f8184177-3443-451c-8620-29c0cb8524ae", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 02:12:06 np0005548731 nova_compute[232433]: 2025-12-06 07:12:06.610 232437 DEBUG oslo_concurrency.lockutils [None req-fb03423a-abd1-4381-acb1-4ca57ec9d394 337447c5cffc48bf8256c9166a6ff0e2 da24f0e2d59745828feaaecfeb9fed45 - - default default] Releasing lock "refresh_cache-cd3678f8-d4b4-4c8e-9c3a-be877c8a2268" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 02:12:06 np0005548731 nova_compute[232433]: 2025-12-06 07:12:06.610 232437 DEBUG nova.compute.manager [None req-fb03423a-abd1-4381-acb1-4ca57ec9d394 337447c5cffc48bf8256c9166a6ff0e2 da24f0e2d59745828feaaecfeb9fed45 - - default default] [instance: cd3678f8-d4b4-4c8e-9c3a-be877c8a2268] Instance network_info: |[{"id": "1693e5d5-d58b-4161-a25e-13d925d5185a", "address": "fa:16:3e:07:90:6a", "network": {"id": "76036c60-8614-4d35-bbfa-05b970e10c21", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-744030909", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.115", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da24f0e2d59745828feaaecfeb9fed45", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1693e5d5-d5", "ovs_interfaceid": "1693e5d5-d58b-4161-a25e-13d925d5185a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "f8184177-3443-451c-8620-29c0cb8524ae", "address": "fa:16:3e:e7:2b:1a", "network": {"id": "603d56cb-3f26-4fa4-a8ba-1d17c9191d9d", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-254058360", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.92", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da24f0e2d59745828feaaecfeb9fed45", "mtu": 1442, "physical_network": null, "tunneled": true}}, 
"type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf8184177-34", "ovs_interfaceid": "f8184177-3443-451c-8620-29c0cb8524ae", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Dec  6 02:12:06 np0005548731 nova_compute[232433]: 2025-12-06 07:12:06.611 232437 DEBUG oslo_concurrency.lockutils [req-10c9548d-522b-4414-b56d-f3cf82d112d7 req-8c90c912-46c9-4500-95d7-f8c7117679c0 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-cd3678f8-d4b4-4c8e-9c3a-be877c8a2268" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 02:12:06 np0005548731 nova_compute[232433]: 2025-12-06 07:12:06.611 232437 DEBUG nova.network.neutron [req-10c9548d-522b-4414-b56d-f3cf82d112d7 req-8c90c912-46c9-4500-95d7-f8c7117679c0 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: cd3678f8-d4b4-4c8e-9c3a-be877c8a2268] Refreshing network info cache for port f8184177-3443-451c-8620-29c0cb8524ae _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  6 02:12:06 np0005548731 ceph-osd[80147]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #47. Immutable memtables: 4.
Dec  6 02:12:06 np0005548731 nova_compute[232433]: 2025-12-06 07:12:06.616 232437 DEBUG nova.virt.libvirt.driver [None req-fb03423a-abd1-4381-acb1-4ca57ec9d394 337447c5cffc48bf8256c9166a6ff0e2 da24f0e2d59745828feaaecfeb9fed45 - - default default] [instance: cd3678f8-d4b4-4c8e-9c3a-be877c8a2268] Start _get_guest_xml network_info=[{"id": "1693e5d5-d58b-4161-a25e-13d925d5185a", "address": "fa:16:3e:07:90:6a", "network": {"id": "76036c60-8614-4d35-bbfa-05b970e10c21", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-744030909", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.115", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da24f0e2d59745828feaaecfeb9fed45", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1693e5d5-d5", "ovs_interfaceid": "1693e5d5-d58b-4161-a25e-13d925d5185a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "f8184177-3443-451c-8620-29c0cb8524ae", "address": "fa:16:3e:e7:2b:1a", "network": {"id": "603d56cb-3f26-4fa4-a8ba-1d17c9191d9d", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-254058360", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.92", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da24f0e2d59745828feaaecfeb9fed45", "mtu": 1442, "physical_network": null, "tunneled": 
true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf8184177-34", "ovs_interfaceid": "f8184177-3443-451c-8620-29c0cb8524ae", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'device_type': 'disk', 'encryption_secret_uuid': None, 'disk_bus': 'virtio', 'boot_index': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'encryption_options': None, 'size': 0, 'encrypted': False, 'image_id': '6efab05d-c7cf-4770-a5c3-c806a2739063'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec  6 02:12:06 np0005548731 nova_compute[232433]: 2025-12-06 07:12:06.619 232437 WARNING nova.virt.libvirt.driver [None req-fb03423a-abd1-4381-acb1-4ca57ec9d394 337447c5cffc48bf8256c9166a6ff0e2 da24f0e2d59745828feaaecfeb9fed45 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  6 02:12:06 np0005548731 nova_compute[232433]: 2025-12-06 07:12:06.626 232437 DEBUG nova.virt.libvirt.host [None req-fb03423a-abd1-4381-acb1-4ca57ec9d394 337447c5cffc48bf8256c9166a6ff0e2 da24f0e2d59745828feaaecfeb9fed45 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec  6 02:12:06 np0005548731 nova_compute[232433]: 2025-12-06 07:12:06.626 232437 DEBUG nova.virt.libvirt.host [None req-fb03423a-abd1-4381-acb1-4ca57ec9d394 337447c5cffc48bf8256c9166a6ff0e2 da24f0e2d59745828feaaecfeb9fed45 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec  6 02:12:06 np0005548731 nova_compute[232433]: 2025-12-06 07:12:06.634 232437 DEBUG nova.virt.libvirt.host [None req-fb03423a-abd1-4381-acb1-4ca57ec9d394 337447c5cffc48bf8256c9166a6ff0e2 da24f0e2d59745828feaaecfeb9fed45 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec  6 02:12:06 np0005548731 nova_compute[232433]: 2025-12-06 07:12:06.635 232437 DEBUG nova.virt.libvirt.host [None req-fb03423a-abd1-4381-acb1-4ca57ec9d394 337447c5cffc48bf8256c9166a6ff0e2 da24f0e2d59745828feaaecfeb9fed45 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec  6 02:12:06 np0005548731 nova_compute[232433]: 2025-12-06 07:12:06.637 232437 DEBUG nova.virt.libvirt.driver [None req-fb03423a-abd1-4381-acb1-4ca57ec9d394 337447c5cffc48bf8256c9166a6ff0e2 da24f0e2d59745828feaaecfeb9fed45 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec  6 02:12:06 np0005548731 nova_compute[232433]: 2025-12-06 07:12:06.637 232437 DEBUG nova.virt.hardware [None req-fb03423a-abd1-4381-acb1-4ca57ec9d394 337447c5cffc48bf8256c9166a6ff0e2 da24f0e2d59745828feaaecfeb9fed45 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-06T06:56:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='25848a18-11d9-4f11-80b5-5d005675c76d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec  6 02:12:06 np0005548731 nova_compute[232433]: 2025-12-06 07:12:06.638 232437 DEBUG nova.virt.hardware [None req-fb03423a-abd1-4381-acb1-4ca57ec9d394 337447c5cffc48bf8256c9166a6ff0e2 da24f0e2d59745828feaaecfeb9fed45 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec  6 02:12:06 np0005548731 nova_compute[232433]: 2025-12-06 07:12:06.638 232437 DEBUG nova.virt.hardware [None req-fb03423a-abd1-4381-acb1-4ca57ec9d394 337447c5cffc48bf8256c9166a6ff0e2 da24f0e2d59745828feaaecfeb9fed45 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec  6 02:12:06 np0005548731 nova_compute[232433]: 2025-12-06 07:12:06.638 232437 DEBUG nova.virt.hardware [None req-fb03423a-abd1-4381-acb1-4ca57ec9d394 337447c5cffc48bf8256c9166a6ff0e2 da24f0e2d59745828feaaecfeb9fed45 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec  6 02:12:06 np0005548731 nova_compute[232433]: 2025-12-06 07:12:06.638 232437 DEBUG nova.virt.hardware [None req-fb03423a-abd1-4381-acb1-4ca57ec9d394 337447c5cffc48bf8256c9166a6ff0e2 da24f0e2d59745828feaaecfeb9fed45 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec  6 02:12:06 np0005548731 nova_compute[232433]: 2025-12-06 07:12:06.638 232437 DEBUG nova.virt.hardware [None req-fb03423a-abd1-4381-acb1-4ca57ec9d394 337447c5cffc48bf8256c9166a6ff0e2 da24f0e2d59745828feaaecfeb9fed45 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec  6 02:12:06 np0005548731 nova_compute[232433]: 2025-12-06 07:12:06.639 232437 DEBUG nova.virt.hardware [None req-fb03423a-abd1-4381-acb1-4ca57ec9d394 337447c5cffc48bf8256c9166a6ff0e2 da24f0e2d59745828feaaecfeb9fed45 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec  6 02:12:06 np0005548731 nova_compute[232433]: 2025-12-06 07:12:06.639 232437 DEBUG nova.virt.hardware [None req-fb03423a-abd1-4381-acb1-4ca57ec9d394 337447c5cffc48bf8256c9166a6ff0e2 da24f0e2d59745828feaaecfeb9fed45 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec  6 02:12:06 np0005548731 nova_compute[232433]: 2025-12-06 07:12:06.639 232437 DEBUG nova.virt.hardware [None req-fb03423a-abd1-4381-acb1-4ca57ec9d394 337447c5cffc48bf8256c9166a6ff0e2 da24f0e2d59745828feaaecfeb9fed45 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec  6 02:12:06 np0005548731 nova_compute[232433]: 2025-12-06 07:12:06.639 232437 DEBUG nova.virt.hardware [None req-fb03423a-abd1-4381-acb1-4ca57ec9d394 337447c5cffc48bf8256c9166a6ff0e2 da24f0e2d59745828feaaecfeb9fed45 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec  6 02:12:06 np0005548731 nova_compute[232433]: 2025-12-06 07:12:06.640 232437 DEBUG nova.virt.hardware [None req-fb03423a-abd1-4381-acb1-4ca57ec9d394 337447c5cffc48bf8256c9166a6ff0e2 da24f0e2d59745828feaaecfeb9fed45 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec  6 02:12:06 np0005548731 nova_compute[232433]: 2025-12-06 07:12:06.643 232437 DEBUG oslo_concurrency.processutils [None req-fb03423a-abd1-4381-acb1-4ca57ec9d394 337447c5cffc48bf8256c9166a6ff0e2 da24f0e2d59745828feaaecfeb9fed45 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:12:06 np0005548731 nova_compute[232433]: 2025-12-06 07:12:06.708 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:12:07 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  6 02:12:07 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/885063946' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  6 02:12:07 np0005548731 nova_compute[232433]: 2025-12-06 07:12:07.059 232437 DEBUG oslo_concurrency.processutils [None req-fb03423a-abd1-4381-acb1-4ca57ec9d394 337447c5cffc48bf8256c9166a6ff0e2 da24f0e2d59745828feaaecfeb9fed45 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.415s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:12:07 np0005548731 nova_compute[232433]: 2025-12-06 07:12:07.079 232437 DEBUG nova.storage.rbd_utils [None req-fb03423a-abd1-4381-acb1-4ca57ec9d394 337447c5cffc48bf8256c9166a6ff0e2 da24f0e2d59745828feaaecfeb9fed45 - - default default] rbd image cd3678f8-d4b4-4c8e-9c3a-be877c8a2268_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:12:07 np0005548731 nova_compute[232433]: 2025-12-06 07:12:07.082 232437 DEBUG oslo_concurrency.processutils [None req-fb03423a-abd1-4381-acb1-4ca57ec9d394 337447c5cffc48bf8256c9166a6ff0e2 da24f0e2d59745828feaaecfeb9fed45 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:12:07 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  6 02:12:07 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/506353290' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  6 02:12:07 np0005548731 nova_compute[232433]: 2025-12-06 07:12:07.495 232437 DEBUG oslo_concurrency.processutils [None req-fb03423a-abd1-4381-acb1-4ca57ec9d394 337447c5cffc48bf8256c9166a6ff0e2 da24f0e2d59745828feaaecfeb9fed45 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.413s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:12:07 np0005548731 nova_compute[232433]: 2025-12-06 07:12:07.497 232437 DEBUG nova.virt.libvirt.vif [None req-fb03423a-abd1-4381-acb1-4ca57ec9d394 337447c5cffc48bf8256c9166a6ff0e2 da24f0e2d59745828feaaecfeb9fed45 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T07:11:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-173632355',display_name='tempest-ServersTestMultiNic-server-173632355',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-173632355',id=55,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='da24f0e2d59745828feaaecfeb9fed45',ramdisk_id='',reservation_id='r-lbtrnv84',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestMultiNic-889484419',owner_user_name='tempest-ServersTestMultiNic-889484419-p
roject-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:11:55Z,user_data=None,user_id='337447c5cffc48bf8256c9166a6ff0e2',uuid=cd3678f8-d4b4-4c8e-9c3a-be877c8a2268,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1693e5d5-d58b-4161-a25e-13d925d5185a", "address": "fa:16:3e:07:90:6a", "network": {"id": "76036c60-8614-4d35-bbfa-05b970e10c21", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-744030909", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.115", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da24f0e2d59745828feaaecfeb9fed45", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1693e5d5-d5", "ovs_interfaceid": "1693e5d5-d58b-4161-a25e-13d925d5185a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec  6 02:12:07 np0005548731 nova_compute[232433]: 2025-12-06 07:12:07.497 232437 DEBUG nova.network.os_vif_util [None req-fb03423a-abd1-4381-acb1-4ca57ec9d394 337447c5cffc48bf8256c9166a6ff0e2 da24f0e2d59745828feaaecfeb9fed45 - - default default] Converting VIF {"id": "1693e5d5-d58b-4161-a25e-13d925d5185a", "address": "fa:16:3e:07:90:6a", "network": {"id": "76036c60-8614-4d35-bbfa-05b970e10c21", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-744030909", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.115", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da24f0e2d59745828feaaecfeb9fed45", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1693e5d5-d5", "ovs_interfaceid": "1693e5d5-d58b-4161-a25e-13d925d5185a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 02:12:07 np0005548731 nova_compute[232433]: 2025-12-06 07:12:07.497 232437 DEBUG nova.network.os_vif_util [None req-fb03423a-abd1-4381-acb1-4ca57ec9d394 337447c5cffc48bf8256c9166a6ff0e2 da24f0e2d59745828feaaecfeb9fed45 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:07:90:6a,bridge_name='br-int',has_traffic_filtering=True,id=1693e5d5-d58b-4161-a25e-13d925d5185a,network=Network(76036c60-8614-4d35-bbfa-05b970e10c21),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1693e5d5-d5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 02:12:07 np0005548731 nova_compute[232433]: 2025-12-06 07:12:07.498 232437 DEBUG nova.virt.libvirt.vif [None req-fb03423a-abd1-4381-acb1-4ca57ec9d394 337447c5cffc48bf8256c9166a6ff0e2 da24f0e2d59745828feaaecfeb9fed45 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T07:11:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-173632355',display_name='tempest-ServersTestMultiNic-server-173632355',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-173632355',id=55,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='da24f0e2d59745828feaaecfeb9fed45',ramdisk_id='',reservation_id='r-lbtrnv84',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestMultiNic-889484419',owner_user_name='tempest-ServersTestMultiNic-889484419-p
roject-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:11:55Z,user_data=None,user_id='337447c5cffc48bf8256c9166a6ff0e2',uuid=cd3678f8-d4b4-4c8e-9c3a-be877c8a2268,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f8184177-3443-451c-8620-29c0cb8524ae", "address": "fa:16:3e:e7:2b:1a", "network": {"id": "603d56cb-3f26-4fa4-a8ba-1d17c9191d9d", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-254058360", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.92", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da24f0e2d59745828feaaecfeb9fed45", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf8184177-34", "ovs_interfaceid": "f8184177-3443-451c-8620-29c0cb8524ae", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec  6 02:12:07 np0005548731 nova_compute[232433]: 2025-12-06 07:12:07.498 232437 DEBUG nova.network.os_vif_util [None req-fb03423a-abd1-4381-acb1-4ca57ec9d394 337447c5cffc48bf8256c9166a6ff0e2 da24f0e2d59745828feaaecfeb9fed45 - - default default] Converting VIF {"id": "f8184177-3443-451c-8620-29c0cb8524ae", "address": "fa:16:3e:e7:2b:1a", "network": {"id": "603d56cb-3f26-4fa4-a8ba-1d17c9191d9d", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-254058360", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.92", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da24f0e2d59745828feaaecfeb9fed45", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf8184177-34", "ovs_interfaceid": "f8184177-3443-451c-8620-29c0cb8524ae", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 02:12:07 np0005548731 nova_compute[232433]: 2025-12-06 07:12:07.499 232437 DEBUG nova.network.os_vif_util [None req-fb03423a-abd1-4381-acb1-4ca57ec9d394 337447c5cffc48bf8256c9166a6ff0e2 da24f0e2d59745828feaaecfeb9fed45 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e7:2b:1a,bridge_name='br-int',has_traffic_filtering=True,id=f8184177-3443-451c-8620-29c0cb8524ae,network=Network(603d56cb-3f26-4fa4-a8ba-1d17c9191d9d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf8184177-34') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 02:12:07 np0005548731 nova_compute[232433]: 2025-12-06 07:12:07.500 232437 DEBUG nova.objects.instance [None req-fb03423a-abd1-4381-acb1-4ca57ec9d394 337447c5cffc48bf8256c9166a6ff0e2 da24f0e2d59745828feaaecfeb9fed45 - - default default] Lazy-loading 'pci_devices' on Instance uuid cd3678f8-d4b4-4c8e-9c3a-be877c8a2268 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:12:07 np0005548731 nova_compute[232433]: 2025-12-06 07:12:07.532 232437 DEBUG nova.virt.libvirt.driver [None req-fb03423a-abd1-4381-acb1-4ca57ec9d394 337447c5cffc48bf8256c9166a6ff0e2 da24f0e2d59745828feaaecfeb9fed45 - - default default] [instance: cd3678f8-d4b4-4c8e-9c3a-be877c8a2268] End _get_guest_xml xml=<domain type="kvm">
Dec  6 02:12:07 np0005548731 nova_compute[232433]:  <uuid>cd3678f8-d4b4-4c8e-9c3a-be877c8a2268</uuid>
Dec  6 02:12:07 np0005548731 nova_compute[232433]:  <name>instance-00000037</name>
Dec  6 02:12:07 np0005548731 nova_compute[232433]:  <memory>131072</memory>
Dec  6 02:12:07 np0005548731 nova_compute[232433]:  <vcpu>1</vcpu>
Dec  6 02:12:07 np0005548731 nova_compute[232433]:  <metadata>
Dec  6 02:12:07 np0005548731 nova_compute[232433]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec  6 02:12:07 np0005548731 nova_compute[232433]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec  6 02:12:07 np0005548731 nova_compute[232433]:      <nova:name>tempest-ServersTestMultiNic-server-173632355</nova:name>
Dec  6 02:12:07 np0005548731 nova_compute[232433]:      <nova:creationTime>2025-12-06 07:12:06</nova:creationTime>
Dec  6 02:12:07 np0005548731 nova_compute[232433]:      <nova:flavor name="m1.nano">
Dec  6 02:12:07 np0005548731 nova_compute[232433]:        <nova:memory>128</nova:memory>
Dec  6 02:12:07 np0005548731 nova_compute[232433]:        <nova:disk>1</nova:disk>
Dec  6 02:12:07 np0005548731 nova_compute[232433]:        <nova:swap>0</nova:swap>
Dec  6 02:12:07 np0005548731 nova_compute[232433]:        <nova:ephemeral>0</nova:ephemeral>
Dec  6 02:12:07 np0005548731 nova_compute[232433]:        <nova:vcpus>1</nova:vcpus>
Dec  6 02:12:07 np0005548731 nova_compute[232433]:      </nova:flavor>
Dec  6 02:12:07 np0005548731 nova_compute[232433]:      <nova:owner>
Dec  6 02:12:07 np0005548731 nova_compute[232433]:        <nova:user uuid="337447c5cffc48bf8256c9166a6ff0e2">tempest-ServersTestMultiNic-889484419-project-member</nova:user>
Dec  6 02:12:07 np0005548731 nova_compute[232433]:        <nova:project uuid="da24f0e2d59745828feaaecfeb9fed45">tempest-ServersTestMultiNic-889484419</nova:project>
Dec  6 02:12:07 np0005548731 nova_compute[232433]:      </nova:owner>
Dec  6 02:12:07 np0005548731 nova_compute[232433]:      <nova:root type="image" uuid="6efab05d-c7cf-4770-a5c3-c806a2739063"/>
Dec  6 02:12:07 np0005548731 nova_compute[232433]:      <nova:ports>
Dec  6 02:12:07 np0005548731 nova_compute[232433]:        <nova:port uuid="1693e5d5-d58b-4161-a25e-13d925d5185a">
Dec  6 02:12:07 np0005548731 nova_compute[232433]:          <nova:ip type="fixed" address="10.100.0.115" ipVersion="4"/>
Dec  6 02:12:07 np0005548731 nova_compute[232433]:        </nova:port>
Dec  6 02:12:07 np0005548731 nova_compute[232433]:        <nova:port uuid="f8184177-3443-451c-8620-29c0cb8524ae">
Dec  6 02:12:07 np0005548731 nova_compute[232433]:          <nova:ip type="fixed" address="10.100.1.92" ipVersion="4"/>
Dec  6 02:12:07 np0005548731 nova_compute[232433]:        </nova:port>
Dec  6 02:12:07 np0005548731 nova_compute[232433]:      </nova:ports>
Dec  6 02:12:07 np0005548731 nova_compute[232433]:    </nova:instance>
Dec  6 02:12:07 np0005548731 nova_compute[232433]:  </metadata>
Dec  6 02:12:07 np0005548731 nova_compute[232433]:  <sysinfo type="smbios">
Dec  6 02:12:07 np0005548731 nova_compute[232433]:    <system>
Dec  6 02:12:07 np0005548731 nova_compute[232433]:      <entry name="manufacturer">RDO</entry>
Dec  6 02:12:07 np0005548731 nova_compute[232433]:      <entry name="product">OpenStack Compute</entry>
Dec  6 02:12:07 np0005548731 nova_compute[232433]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec  6 02:12:07 np0005548731 nova_compute[232433]:      <entry name="serial">cd3678f8-d4b4-4c8e-9c3a-be877c8a2268</entry>
Dec  6 02:12:07 np0005548731 nova_compute[232433]:      <entry name="uuid">cd3678f8-d4b4-4c8e-9c3a-be877c8a2268</entry>
Dec  6 02:12:07 np0005548731 nova_compute[232433]:      <entry name="family">Virtual Machine</entry>
Dec  6 02:12:07 np0005548731 nova_compute[232433]:    </system>
Dec  6 02:12:07 np0005548731 nova_compute[232433]:  </sysinfo>
Dec  6 02:12:07 np0005548731 nova_compute[232433]:  <os>
Dec  6 02:12:07 np0005548731 nova_compute[232433]:    <type arch="x86_64" machine="q35">hvm</type>
Dec  6 02:12:07 np0005548731 nova_compute[232433]:    <boot dev="hd"/>
Dec  6 02:12:07 np0005548731 nova_compute[232433]:    <smbios mode="sysinfo"/>
Dec  6 02:12:07 np0005548731 nova_compute[232433]:  </os>
Dec  6 02:12:07 np0005548731 nova_compute[232433]:  <features>
Dec  6 02:12:07 np0005548731 nova_compute[232433]:    <acpi/>
Dec  6 02:12:07 np0005548731 nova_compute[232433]:    <apic/>
Dec  6 02:12:07 np0005548731 nova_compute[232433]:    <vmcoreinfo/>
Dec  6 02:12:07 np0005548731 nova_compute[232433]:  </features>
Dec  6 02:12:07 np0005548731 nova_compute[232433]:  <clock offset="utc">
Dec  6 02:12:07 np0005548731 nova_compute[232433]:    <timer name="pit" tickpolicy="delay"/>
Dec  6 02:12:07 np0005548731 nova_compute[232433]:    <timer name="rtc" tickpolicy="catchup"/>
Dec  6 02:12:07 np0005548731 nova_compute[232433]:    <timer name="hpet" present="no"/>
Dec  6 02:12:07 np0005548731 nova_compute[232433]:  </clock>
Dec  6 02:12:07 np0005548731 nova_compute[232433]:  <cpu mode="custom" match="exact">
Dec  6 02:12:07 np0005548731 nova_compute[232433]:    <model>Nehalem</model>
Dec  6 02:12:07 np0005548731 nova_compute[232433]:    <topology sockets="1" cores="1" threads="1"/>
Dec  6 02:12:07 np0005548731 nova_compute[232433]:  </cpu>
Dec  6 02:12:07 np0005548731 nova_compute[232433]:  <devices>
Dec  6 02:12:07 np0005548731 nova_compute[232433]:    <disk type="network" device="disk">
Dec  6 02:12:07 np0005548731 nova_compute[232433]:      <driver type="raw" cache="none"/>
Dec  6 02:12:07 np0005548731 nova_compute[232433]:      <source protocol="rbd" name="vms/cd3678f8-d4b4-4c8e-9c3a-be877c8a2268_disk">
Dec  6 02:12:07 np0005548731 nova_compute[232433]:        <host name="192.168.122.100" port="6789"/>
Dec  6 02:12:07 np0005548731 nova_compute[232433]:        <host name="192.168.122.102" port="6789"/>
Dec  6 02:12:07 np0005548731 nova_compute[232433]:        <host name="192.168.122.101" port="6789"/>
Dec  6 02:12:07 np0005548731 nova_compute[232433]:      </source>
Dec  6 02:12:07 np0005548731 nova_compute[232433]:      <auth username="openstack">
Dec  6 02:12:07 np0005548731 nova_compute[232433]:        <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec  6 02:12:07 np0005548731 nova_compute[232433]:      </auth>
Dec  6 02:12:07 np0005548731 nova_compute[232433]:      <target dev="vda" bus="virtio"/>
Dec  6 02:12:07 np0005548731 nova_compute[232433]:    </disk>
Dec  6 02:12:07 np0005548731 nova_compute[232433]:    <disk type="network" device="cdrom">
Dec  6 02:12:07 np0005548731 nova_compute[232433]:      <driver type="raw" cache="none"/>
Dec  6 02:12:07 np0005548731 nova_compute[232433]:      <source protocol="rbd" name="vms/cd3678f8-d4b4-4c8e-9c3a-be877c8a2268_disk.config">
Dec  6 02:12:07 np0005548731 nova_compute[232433]:        <host name="192.168.122.100" port="6789"/>
Dec  6 02:12:07 np0005548731 nova_compute[232433]:        <host name="192.168.122.102" port="6789"/>
Dec  6 02:12:07 np0005548731 nova_compute[232433]:        <host name="192.168.122.101" port="6789"/>
Dec  6 02:12:07 np0005548731 nova_compute[232433]:      </source>
Dec  6 02:12:07 np0005548731 nova_compute[232433]:      <auth username="openstack">
Dec  6 02:12:07 np0005548731 nova_compute[232433]:        <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec  6 02:12:07 np0005548731 nova_compute[232433]:      </auth>
Dec  6 02:12:07 np0005548731 nova_compute[232433]:      <target dev="sda" bus="sata"/>
Dec  6 02:12:07 np0005548731 nova_compute[232433]:    </disk>
Dec  6 02:12:07 np0005548731 nova_compute[232433]:    <interface type="ethernet">
Dec  6 02:12:07 np0005548731 nova_compute[232433]:      <mac address="fa:16:3e:07:90:6a"/>
Dec  6 02:12:07 np0005548731 nova_compute[232433]:      <model type="virtio"/>
Dec  6 02:12:07 np0005548731 nova_compute[232433]:      <driver name="vhost" rx_queue_size="512"/>
Dec  6 02:12:07 np0005548731 nova_compute[232433]:      <mtu size="1442"/>
Dec  6 02:12:07 np0005548731 nova_compute[232433]:      <target dev="tap1693e5d5-d5"/>
Dec  6 02:12:07 np0005548731 nova_compute[232433]:    </interface>
Dec  6 02:12:07 np0005548731 nova_compute[232433]:    <interface type="ethernet">
Dec  6 02:12:07 np0005548731 nova_compute[232433]:      <mac address="fa:16:3e:e7:2b:1a"/>
Dec  6 02:12:07 np0005548731 nova_compute[232433]:      <model type="virtio"/>
Dec  6 02:12:07 np0005548731 nova_compute[232433]:      <driver name="vhost" rx_queue_size="512"/>
Dec  6 02:12:07 np0005548731 nova_compute[232433]:      <mtu size="1442"/>
Dec  6 02:12:07 np0005548731 nova_compute[232433]:      <target dev="tapf8184177-34"/>
Dec  6 02:12:07 np0005548731 nova_compute[232433]:    </interface>
Dec  6 02:12:07 np0005548731 nova_compute[232433]:    <serial type="pty">
Dec  6 02:12:07 np0005548731 nova_compute[232433]:      <log file="/var/lib/nova/instances/cd3678f8-d4b4-4c8e-9c3a-be877c8a2268/console.log" append="off"/>
Dec  6 02:12:07 np0005548731 nova_compute[232433]:    </serial>
Dec  6 02:12:07 np0005548731 nova_compute[232433]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec  6 02:12:07 np0005548731 nova_compute[232433]:    <video>
Dec  6 02:12:07 np0005548731 nova_compute[232433]:      <model type="virtio"/>
Dec  6 02:12:07 np0005548731 nova_compute[232433]:    </video>
Dec  6 02:12:07 np0005548731 nova_compute[232433]:    <input type="tablet" bus="usb"/>
Dec  6 02:12:07 np0005548731 nova_compute[232433]:    <rng model="virtio">
Dec  6 02:12:07 np0005548731 nova_compute[232433]:      <backend model="random">/dev/urandom</backend>
Dec  6 02:12:07 np0005548731 nova_compute[232433]:    </rng>
Dec  6 02:12:07 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root"/>
Dec  6 02:12:07 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:12:07 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:12:07 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:12:07 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:12:07 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:12:07 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:12:07 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:12:07 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:12:07 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:12:07 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:12:07 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:12:07 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:12:07 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:12:07 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:12:07 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:12:07 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:12:07 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:12:07 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:12:07 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:12:07 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:12:07 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:12:07 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:12:07 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:12:07 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:12:07 np0005548731 nova_compute[232433]:    <controller type="usb" index="0"/>
Dec  6 02:12:07 np0005548731 nova_compute[232433]:    <memballoon model="virtio">
Dec  6 02:12:07 np0005548731 nova_compute[232433]:      <stats period="10"/>
Dec  6 02:12:07 np0005548731 nova_compute[232433]:    </memballoon>
Dec  6 02:12:07 np0005548731 nova_compute[232433]:  </devices>
Dec  6 02:12:07 np0005548731 nova_compute[232433]: </domain>
Dec  6 02:12:07 np0005548731 nova_compute[232433]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec  6 02:12:07 np0005548731 nova_compute[232433]: 2025-12-06 07:12:07.532 232437 DEBUG nova.compute.manager [None req-fb03423a-abd1-4381-acb1-4ca57ec9d394 337447c5cffc48bf8256c9166a6ff0e2 da24f0e2d59745828feaaecfeb9fed45 - - default default] [instance: cd3678f8-d4b4-4c8e-9c3a-be877c8a2268] Preparing to wait for external event network-vif-plugged-1693e5d5-d58b-4161-a25e-13d925d5185a prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec  6 02:12:07 np0005548731 nova_compute[232433]: 2025-12-06 07:12:07.533 232437 DEBUG oslo_concurrency.lockutils [None req-fb03423a-abd1-4381-acb1-4ca57ec9d394 337447c5cffc48bf8256c9166a6ff0e2 da24f0e2d59745828feaaecfeb9fed45 - - default default] Acquiring lock "cd3678f8-d4b4-4c8e-9c3a-be877c8a2268-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:12:07 np0005548731 nova_compute[232433]: 2025-12-06 07:12:07.533 232437 DEBUG oslo_concurrency.lockutils [None req-fb03423a-abd1-4381-acb1-4ca57ec9d394 337447c5cffc48bf8256c9166a6ff0e2 da24f0e2d59745828feaaecfeb9fed45 - - default default] Lock "cd3678f8-d4b4-4c8e-9c3a-be877c8a2268-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:12:07 np0005548731 nova_compute[232433]: 2025-12-06 07:12:07.533 232437 DEBUG oslo_concurrency.lockutils [None req-fb03423a-abd1-4381-acb1-4ca57ec9d394 337447c5cffc48bf8256c9166a6ff0e2 da24f0e2d59745828feaaecfeb9fed45 - - default default] Lock "cd3678f8-d4b4-4c8e-9c3a-be877c8a2268-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:12:07 np0005548731 nova_compute[232433]: 2025-12-06 07:12:07.534 232437 DEBUG nova.compute.manager [None req-fb03423a-abd1-4381-acb1-4ca57ec9d394 337447c5cffc48bf8256c9166a6ff0e2 da24f0e2d59745828feaaecfeb9fed45 - - default default] [instance: cd3678f8-d4b4-4c8e-9c3a-be877c8a2268] Preparing to wait for external event network-vif-plugged-f8184177-3443-451c-8620-29c0cb8524ae prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec  6 02:12:07 np0005548731 nova_compute[232433]: 2025-12-06 07:12:07.534 232437 DEBUG oslo_concurrency.lockutils [None req-fb03423a-abd1-4381-acb1-4ca57ec9d394 337447c5cffc48bf8256c9166a6ff0e2 da24f0e2d59745828feaaecfeb9fed45 - - default default] Acquiring lock "cd3678f8-d4b4-4c8e-9c3a-be877c8a2268-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:12:07 np0005548731 nova_compute[232433]: 2025-12-06 07:12:07.534 232437 DEBUG oslo_concurrency.lockutils [None req-fb03423a-abd1-4381-acb1-4ca57ec9d394 337447c5cffc48bf8256c9166a6ff0e2 da24f0e2d59745828feaaecfeb9fed45 - - default default] Lock "cd3678f8-d4b4-4c8e-9c3a-be877c8a2268-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:12:07 np0005548731 nova_compute[232433]: 2025-12-06 07:12:07.535 232437 DEBUG oslo_concurrency.lockutils [None req-fb03423a-abd1-4381-acb1-4ca57ec9d394 337447c5cffc48bf8256c9166a6ff0e2 da24f0e2d59745828feaaecfeb9fed45 - - default default] Lock "cd3678f8-d4b4-4c8e-9c3a-be877c8a2268-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:12:07 np0005548731 nova_compute[232433]: 2025-12-06 07:12:07.535 232437 DEBUG nova.virt.libvirt.vif [None req-fb03423a-abd1-4381-acb1-4ca57ec9d394 337447c5cffc48bf8256c9166a6ff0e2 da24f0e2d59745828feaaecfeb9fed45 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T07:11:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-173632355',display_name='tempest-ServersTestMultiNic-server-173632355',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-173632355',id=55,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='da24f0e2d59745828feaaecfeb9fed45',ramdisk_id='',reservation_id='r-lbtrnv84',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestMultiNic-889484419',owner_user_name='tempest-ServersTestMultiNic-8
89484419-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:11:55Z,user_data=None,user_id='337447c5cffc48bf8256c9166a6ff0e2',uuid=cd3678f8-d4b4-4c8e-9c3a-be877c8a2268,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1693e5d5-d58b-4161-a25e-13d925d5185a", "address": "fa:16:3e:07:90:6a", "network": {"id": "76036c60-8614-4d35-bbfa-05b970e10c21", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-744030909", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.115", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da24f0e2d59745828feaaecfeb9fed45", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1693e5d5-d5", "ovs_interfaceid": "1693e5d5-d58b-4161-a25e-13d925d5185a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec  6 02:12:07 np0005548731 nova_compute[232433]: 2025-12-06 07:12:07.535 232437 DEBUG nova.network.os_vif_util [None req-fb03423a-abd1-4381-acb1-4ca57ec9d394 337447c5cffc48bf8256c9166a6ff0e2 da24f0e2d59745828feaaecfeb9fed45 - - default default] Converting VIF {"id": "1693e5d5-d58b-4161-a25e-13d925d5185a", "address": "fa:16:3e:07:90:6a", "network": {"id": "76036c60-8614-4d35-bbfa-05b970e10c21", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-744030909", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.115", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da24f0e2d59745828feaaecfeb9fed45", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1693e5d5-d5", "ovs_interfaceid": "1693e5d5-d58b-4161-a25e-13d925d5185a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 02:12:07 np0005548731 nova_compute[232433]: 2025-12-06 07:12:07.536 232437 DEBUG nova.network.os_vif_util [None req-fb03423a-abd1-4381-acb1-4ca57ec9d394 337447c5cffc48bf8256c9166a6ff0e2 da24f0e2d59745828feaaecfeb9fed45 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:07:90:6a,bridge_name='br-int',has_traffic_filtering=True,id=1693e5d5-d58b-4161-a25e-13d925d5185a,network=Network(76036c60-8614-4d35-bbfa-05b970e10c21),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1693e5d5-d5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 02:12:07 np0005548731 nova_compute[232433]: 2025-12-06 07:12:07.536 232437 DEBUG os_vif [None req-fb03423a-abd1-4381-acb1-4ca57ec9d394 337447c5cffc48bf8256c9166a6ff0e2 da24f0e2d59745828feaaecfeb9fed45 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:07:90:6a,bridge_name='br-int',has_traffic_filtering=True,id=1693e5d5-d58b-4161-a25e-13d925d5185a,network=Network(76036c60-8614-4d35-bbfa-05b970e10c21),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1693e5d5-d5') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec  6 02:12:07 np0005548731 nova_compute[232433]: 2025-12-06 07:12:07.537 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:12:07 np0005548731 nova_compute[232433]: 2025-12-06 07:12:07.537 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:12:07 np0005548731 nova_compute[232433]: 2025-12-06 07:12:07.538 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  6 02:12:07 np0005548731 nova_compute[232433]: 2025-12-06 07:12:07.540 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:12:07 np0005548731 nova_compute[232433]: 2025-12-06 07:12:07.540 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1693e5d5-d5, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:12:07 np0005548731 nova_compute[232433]: 2025-12-06 07:12:07.541 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap1693e5d5-d5, col_values=(('external_ids', {'iface-id': '1693e5d5-d58b-4161-a25e-13d925d5185a', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:07:90:6a', 'vm-uuid': 'cd3678f8-d4b4-4c8e-9c3a-be877c8a2268'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:12:07 np0005548731 nova_compute[232433]: 2025-12-06 07:12:07.542 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:12:07 np0005548731 NetworkManager[49182]: <info>  [1765005127.5433] manager: (tap1693e5d5-d5): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/93)
Dec  6 02:12:07 np0005548731 nova_compute[232433]: 2025-12-06 07:12:07.545 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  6 02:12:07 np0005548731 nova_compute[232433]: 2025-12-06 07:12:07.548 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:12:07 np0005548731 nova_compute[232433]: 2025-12-06 07:12:07.549 232437 INFO os_vif [None req-fb03423a-abd1-4381-acb1-4ca57ec9d394 337447c5cffc48bf8256c9166a6ff0e2 da24f0e2d59745828feaaecfeb9fed45 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:07:90:6a,bridge_name='br-int',has_traffic_filtering=True,id=1693e5d5-d58b-4161-a25e-13d925d5185a,network=Network(76036c60-8614-4d35-bbfa-05b970e10c21),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1693e5d5-d5')#033[00m
Dec  6 02:12:07 np0005548731 nova_compute[232433]: 2025-12-06 07:12:07.550 232437 DEBUG nova.virt.libvirt.vif [None req-fb03423a-abd1-4381-acb1-4ca57ec9d394 337447c5cffc48bf8256c9166a6ff0e2 da24f0e2d59745828feaaecfeb9fed45 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T07:11:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-173632355',display_name='tempest-ServersTestMultiNic-server-173632355',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-173632355',id=55,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='da24f0e2d59745828feaaecfeb9fed45',ramdisk_id='',reservation_id='r-lbtrnv84',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestMultiNic-889484419',owner_user_name='tempest-ServersTestMultiNic-8
89484419-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:11:55Z,user_data=None,user_id='337447c5cffc48bf8256c9166a6ff0e2',uuid=cd3678f8-d4b4-4c8e-9c3a-be877c8a2268,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f8184177-3443-451c-8620-29c0cb8524ae", "address": "fa:16:3e:e7:2b:1a", "network": {"id": "603d56cb-3f26-4fa4-a8ba-1d17c9191d9d", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-254058360", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.92", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da24f0e2d59745828feaaecfeb9fed45", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf8184177-34", "ovs_interfaceid": "f8184177-3443-451c-8620-29c0cb8524ae", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec  6 02:12:07 np0005548731 nova_compute[232433]: 2025-12-06 07:12:07.550 232437 DEBUG nova.network.os_vif_util [None req-fb03423a-abd1-4381-acb1-4ca57ec9d394 337447c5cffc48bf8256c9166a6ff0e2 da24f0e2d59745828feaaecfeb9fed45 - - default default] Converting VIF {"id": "f8184177-3443-451c-8620-29c0cb8524ae", "address": "fa:16:3e:e7:2b:1a", "network": {"id": "603d56cb-3f26-4fa4-a8ba-1d17c9191d9d", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-254058360", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.92", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da24f0e2d59745828feaaecfeb9fed45", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf8184177-34", "ovs_interfaceid": "f8184177-3443-451c-8620-29c0cb8524ae", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 02:12:07 np0005548731 nova_compute[232433]: 2025-12-06 07:12:07.551 232437 DEBUG nova.network.os_vif_util [None req-fb03423a-abd1-4381-acb1-4ca57ec9d394 337447c5cffc48bf8256c9166a6ff0e2 da24f0e2d59745828feaaecfeb9fed45 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e7:2b:1a,bridge_name='br-int',has_traffic_filtering=True,id=f8184177-3443-451c-8620-29c0cb8524ae,network=Network(603d56cb-3f26-4fa4-a8ba-1d17c9191d9d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf8184177-34') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 02:12:07 np0005548731 nova_compute[232433]: 2025-12-06 07:12:07.551 232437 DEBUG os_vif [None req-fb03423a-abd1-4381-acb1-4ca57ec9d394 337447c5cffc48bf8256c9166a6ff0e2 da24f0e2d59745828feaaecfeb9fed45 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e7:2b:1a,bridge_name='br-int',has_traffic_filtering=True,id=f8184177-3443-451c-8620-29c0cb8524ae,network=Network(603d56cb-3f26-4fa4-a8ba-1d17c9191d9d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf8184177-34') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec  6 02:12:07 np0005548731 nova_compute[232433]: 2025-12-06 07:12:07.552 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:12:07 np0005548731 nova_compute[232433]: 2025-12-06 07:12:07.552 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:12:07 np0005548731 nova_compute[232433]: 2025-12-06 07:12:07.552 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  6 02:12:07 np0005548731 nova_compute[232433]: 2025-12-06 07:12:07.555 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:12:07 np0005548731 nova_compute[232433]: 2025-12-06 07:12:07.555 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf8184177-34, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:12:07 np0005548731 nova_compute[232433]: 2025-12-06 07:12:07.556 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapf8184177-34, col_values=(('external_ids', {'iface-id': 'f8184177-3443-451c-8620-29c0cb8524ae', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:e7:2b:1a', 'vm-uuid': 'cd3678f8-d4b4-4c8e-9c3a-be877c8a2268'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:12:07 np0005548731 nova_compute[232433]: 2025-12-06 07:12:07.557 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:12:07 np0005548731 NetworkManager[49182]: <info>  [1765005127.5583] manager: (tapf8184177-34): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/94)
Dec  6 02:12:07 np0005548731 nova_compute[232433]: 2025-12-06 07:12:07.559 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  6 02:12:07 np0005548731 nova_compute[232433]: 2025-12-06 07:12:07.566 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:12:07 np0005548731 nova_compute[232433]: 2025-12-06 07:12:07.567 232437 INFO os_vif [None req-fb03423a-abd1-4381-acb1-4ca57ec9d394 337447c5cffc48bf8256c9166a6ff0e2 da24f0e2d59745828feaaecfeb9fed45 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e7:2b:1a,bridge_name='br-int',has_traffic_filtering=True,id=f8184177-3443-451c-8620-29c0cb8524ae,network=Network(603d56cb-3f26-4fa4-a8ba-1d17c9191d9d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf8184177-34')#033[00m
Dec  6 02:12:07 np0005548731 nova_compute[232433]: 2025-12-06 07:12:07.629 232437 DEBUG nova.virt.libvirt.driver [None req-fb03423a-abd1-4381-acb1-4ca57ec9d394 337447c5cffc48bf8256c9166a6ff0e2 da24f0e2d59745828feaaecfeb9fed45 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  6 02:12:07 np0005548731 nova_compute[232433]: 2025-12-06 07:12:07.630 232437 DEBUG nova.virt.libvirt.driver [None req-fb03423a-abd1-4381-acb1-4ca57ec9d394 337447c5cffc48bf8256c9166a6ff0e2 da24f0e2d59745828feaaecfeb9fed45 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  6 02:12:07 np0005548731 nova_compute[232433]: 2025-12-06 07:12:07.630 232437 DEBUG nova.virt.libvirt.driver [None req-fb03423a-abd1-4381-acb1-4ca57ec9d394 337447c5cffc48bf8256c9166a6ff0e2 da24f0e2d59745828feaaecfeb9fed45 - - default default] No VIF found with MAC fa:16:3e:07:90:6a, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec  6 02:12:07 np0005548731 nova_compute[232433]: 2025-12-06 07:12:07.630 232437 DEBUG nova.virt.libvirt.driver [None req-fb03423a-abd1-4381-acb1-4ca57ec9d394 337447c5cffc48bf8256c9166a6ff0e2 da24f0e2d59745828feaaecfeb9fed45 - - default default] No VIF found with MAC fa:16:3e:e7:2b:1a, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec  6 02:12:07 np0005548731 nova_compute[232433]: 2025-12-06 07:12:07.631 232437 INFO nova.virt.libvirt.driver [None req-fb03423a-abd1-4381-acb1-4ca57ec9d394 337447c5cffc48bf8256c9166a6ff0e2 da24f0e2d59745828feaaecfeb9fed45 - - default default] [instance: cd3678f8-d4b4-4c8e-9c3a-be877c8a2268] Using config drive#033[00m
Dec  6 02:12:07 np0005548731 nova_compute[232433]: 2025-12-06 07:12:07.654 232437 DEBUG nova.storage.rbd_utils [None req-fb03423a-abd1-4381-acb1-4ca57ec9d394 337447c5cffc48bf8256c9166a6ff0e2 da24f0e2d59745828feaaecfeb9fed45 - - default default] rbd image cd3678f8-d4b4-4c8e-9c3a-be877c8a2268_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:12:08 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:12:08 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:12:08 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:12:08.233 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:12:08 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:12:08 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:12:08 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:12:08.468 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:12:08 np0005548731 nova_compute[232433]: 2025-12-06 07:12:08.533 232437 INFO nova.virt.libvirt.driver [None req-fb03423a-abd1-4381-acb1-4ca57ec9d394 337447c5cffc48bf8256c9166a6ff0e2 da24f0e2d59745828feaaecfeb9fed45 - - default default] [instance: cd3678f8-d4b4-4c8e-9c3a-be877c8a2268] Creating config drive at /var/lib/nova/instances/cd3678f8-d4b4-4c8e-9c3a-be877c8a2268/disk.config#033[00m
Dec  6 02:12:08 np0005548731 nova_compute[232433]: 2025-12-06 07:12:08.538 232437 DEBUG oslo_concurrency.processutils [None req-fb03423a-abd1-4381-acb1-4ca57ec9d394 337447c5cffc48bf8256c9166a6ff0e2 da24f0e2d59745828feaaecfeb9fed45 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/cd3678f8-d4b4-4c8e-9c3a-be877c8a2268/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpdn41tg3q execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:12:08 np0005548731 nova_compute[232433]: 2025-12-06 07:12:08.679 232437 DEBUG oslo_concurrency.processutils [None req-fb03423a-abd1-4381-acb1-4ca57ec9d394 337447c5cffc48bf8256c9166a6ff0e2 da24f0e2d59745828feaaecfeb9fed45 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/cd3678f8-d4b4-4c8e-9c3a-be877c8a2268/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpdn41tg3q" returned: 0 in 0.141s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:12:08 np0005548731 nova_compute[232433]: 2025-12-06 07:12:08.780 232437 DEBUG nova.storage.rbd_utils [None req-fb03423a-abd1-4381-acb1-4ca57ec9d394 337447c5cffc48bf8256c9166a6ff0e2 da24f0e2d59745828feaaecfeb9fed45 - - default default] rbd image cd3678f8-d4b4-4c8e-9c3a-be877c8a2268_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:12:08 np0005548731 nova_compute[232433]: 2025-12-06 07:12:08.784 232437 DEBUG oslo_concurrency.processutils [None req-fb03423a-abd1-4381-acb1-4ca57ec9d394 337447c5cffc48bf8256c9166a6ff0e2 da24f0e2d59745828feaaecfeb9fed45 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/cd3678f8-d4b4-4c8e-9c3a-be877c8a2268/disk.config cd3678f8-d4b4-4c8e-9c3a-be877c8a2268_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:12:09 np0005548731 nova_compute[232433]: 2025-12-06 07:12:09.186 232437 DEBUG oslo_concurrency.processutils [None req-fb03423a-abd1-4381-acb1-4ca57ec9d394 337447c5cffc48bf8256c9166a6ff0e2 da24f0e2d59745828feaaecfeb9fed45 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/cd3678f8-d4b4-4c8e-9c3a-be877c8a2268/disk.config cd3678f8-d4b4-4c8e-9c3a-be877c8a2268_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.402s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:12:09 np0005548731 nova_compute[232433]: 2025-12-06 07:12:09.187 232437 INFO nova.virt.libvirt.driver [None req-fb03423a-abd1-4381-acb1-4ca57ec9d394 337447c5cffc48bf8256c9166a6ff0e2 da24f0e2d59745828feaaecfeb9fed45 - - default default] [instance: cd3678f8-d4b4-4c8e-9c3a-be877c8a2268] Deleting local config drive /var/lib/nova/instances/cd3678f8-d4b4-4c8e-9c3a-be877c8a2268/disk.config because it was imported into RBD.#033[00m
Dec  6 02:12:09 np0005548731 NetworkManager[49182]: <info>  [1765005129.2276] manager: (tap1693e5d5-d5): new Tun device (/org/freedesktop/NetworkManager/Devices/95)
Dec  6 02:12:09 np0005548731 kernel: tap1693e5d5-d5: entered promiscuous mode
Dec  6 02:12:09 np0005548731 ovn_controller[133927]: 2025-12-06T07:12:09Z|00158|binding|INFO|Claiming lport 1693e5d5-d58b-4161-a25e-13d925d5185a for this chassis.
Dec  6 02:12:09 np0005548731 ovn_controller[133927]: 2025-12-06T07:12:09Z|00159|binding|INFO|1693e5d5-d58b-4161-a25e-13d925d5185a: Claiming fa:16:3e:07:90:6a 10.100.0.115
Dec  6 02:12:09 np0005548731 nova_compute[232433]: 2025-12-06 07:12:09.231 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:12:09 np0005548731 NetworkManager[49182]: <info>  [1765005129.2416] manager: (tapf8184177-34): new Tun device (/org/freedesktop/NetworkManager/Devices/96)
Dec  6 02:12:09 np0005548731 systemd-udevd[256610]: Network interface NamePolicy= disabled on kernel command line.
Dec  6 02:12:09 np0005548731 systemd-udevd[256611]: Network interface NamePolicy= disabled on kernel command line.
Dec  6 02:12:09 np0005548731 kernel: tapf8184177-34: entered promiscuous mode
Dec  6 02:12:09 np0005548731 NetworkManager[49182]: <info>  [1765005129.2652] device (tap1693e5d5-d5): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec  6 02:12:09 np0005548731 NetworkManager[49182]: <info>  [1765005129.2662] device (tap1693e5d5-d5): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec  6 02:12:09 np0005548731 NetworkManager[49182]: <info>  [1765005129.2693] device (tapf8184177-34): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec  6 02:12:09 np0005548731 NetworkManager[49182]: <info>  [1765005129.2705] device (tapf8184177-34): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec  6 02:12:09 np0005548731 nova_compute[232433]: 2025-12-06 07:12:09.270 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:12:09 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:12:09.273 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:07:90:6a 10.100.0.115'], port_security=['fa:16:3e:07:90:6a 10.100.0.115'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.115/24', 'neutron:device_id': 'cd3678f8-d4b4-4c8e-9c3a-be877c8a2268', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-76036c60-8614-4d35-bbfa-05b970e10c21', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'da24f0e2d59745828feaaecfeb9fed45', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'f4b40e0f-999e-445b-b382-f95dc10d9fcc', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=22245ff4-133d-40ce-b29e-390d22212125, chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], logical_port=1693e5d5-d58b-4161-a25e-13d925d5185a) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 02:12:09 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:12:09.274 143965 INFO neutron.agent.ovn.metadata.agent [-] Port 1693e5d5-d58b-4161-a25e-13d925d5185a in datapath 76036c60-8614-4d35-bbfa-05b970e10c21 bound to our chassis#033[00m
Dec  6 02:12:09 np0005548731 ovn_controller[133927]: 2025-12-06T07:12:09Z|00160|if_status|INFO|Not updating pb chassis for f8184177-3443-451c-8620-29c0cb8524ae now as sb is readonly
Dec  6 02:12:09 np0005548731 ovn_controller[133927]: 2025-12-06T07:12:09Z|00161|binding|INFO|Claiming lport f8184177-3443-451c-8620-29c0cb8524ae for this chassis.
Dec  6 02:12:09 np0005548731 ovn_controller[133927]: 2025-12-06T07:12:09Z|00162|binding|INFO|f8184177-3443-451c-8620-29c0cb8524ae: Claiming fa:16:3e:e7:2b:1a 10.100.1.92
Dec  6 02:12:09 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:12:09.275 143965 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 76036c60-8614-4d35-bbfa-05b970e10c21#033[00m
Dec  6 02:12:09 np0005548731 nova_compute[232433]: 2025-12-06 07:12:09.276 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:12:09 np0005548731 ovn_controller[133927]: 2025-12-06T07:12:09Z|00163|binding|INFO|Setting lport 1693e5d5-d58b-4161-a25e-13d925d5185a ovn-installed in OVS
Dec  6 02:12:09 np0005548731 nova_compute[232433]: 2025-12-06 07:12:09.279 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:12:09 np0005548731 ovn_controller[133927]: 2025-12-06T07:12:09Z|00164|binding|INFO|Setting lport 1693e5d5-d58b-4161-a25e-13d925d5185a up in Southbound
Dec  6 02:12:09 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:12:09.284 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e7:2b:1a 10.100.1.92'], port_security=['fa:16:3e:e7:2b:1a 10.100.1.92'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.1.92/24', 'neutron:device_id': 'cd3678f8-d4b4-4c8e-9c3a-be877c8a2268', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-603d56cb-3f26-4fa4-a8ba-1d17c9191d9d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'da24f0e2d59745828feaaecfeb9fed45', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'f4b40e0f-999e-445b-b382-f95dc10d9fcc', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=cd97d84a-e2d6-40b1-84e7-9c6868be6694, chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], logical_port=f8184177-3443-451c-8620-29c0cb8524ae) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 02:12:09 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:12:09.285 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[c43e18f8-8312-4fb3-8aa0-f8b8cc4a3dcb]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:12:09 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:12:09.286 143965 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap76036c60-81 in ovnmeta-76036c60-8614-4d35-bbfa-05b970e10c21 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Dec  6 02:12:09 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:12:09.288 236668 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap76036c60-80 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Dec  6 02:12:09 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:12:09.288 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[ada84cd1-122f-48e7-94fd-612575594154]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:12:09 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:12:09.289 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[b83454b3-4616-425c-9d31-88b6c2e7c806]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:12:09 np0005548731 systemd-machined[195355]: New machine qemu-24-instance-00000037.
Dec  6 02:12:09 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:12:09.300 144079 DEBUG oslo.privsep.daemon [-] privsep: reply[e6b2d9d7-8ccd-4d6f-b292-7ef4cc48c831]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:12:09 np0005548731 systemd[1]: Started Virtual Machine qemu-24-instance-00000037.
Dec  6 02:12:09 np0005548731 ovn_controller[133927]: 2025-12-06T07:12:09Z|00165|binding|INFO|Setting lport f8184177-3443-451c-8620-29c0cb8524ae ovn-installed in OVS
Dec  6 02:12:09 np0005548731 ovn_controller[133927]: 2025-12-06T07:12:09Z|00166|binding|INFO|Setting lport f8184177-3443-451c-8620-29c0cb8524ae up in Southbound
Dec  6 02:12:09 np0005548731 nova_compute[232433]: 2025-12-06 07:12:09.315 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:12:09 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:12:09.324 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[6a8b0280-2e24-48af-b187-279a7f6ec21d]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:12:09 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:12:09.351 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[355a29ae-8411-43af-b667-3d2c692ff24f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:12:09 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:12:09.357 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[d267002e-0f66-49d3-8b0b-e30337e0a014]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:12:09 np0005548731 NetworkManager[49182]: <info>  [1765005129.3592] manager: (tap76036c60-80): new Veth device (/org/freedesktop/NetworkManager/Devices/97)
Dec  6 02:12:09 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:12:09.384 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[23a3034a-7c14-481c-bb0f-28f1ceabf05d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:12:09 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:12:09.387 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[0ab88b93-603b-4e8d-9b16-7cb6bd7418d4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:12:09 np0005548731 NetworkManager[49182]: <info>  [1765005129.4035] device (tap76036c60-80): carrier: link connected
Dec  6 02:12:09 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:12:09.407 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[c1f8df84-4e17-4b62-89b3-3dce4148c571]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:12:09 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:12:09.421 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[ef9772fa-6c8d-4bef-90d6-99e2e0714116]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap76036c60-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:cd:dc:d3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 56], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 539141, 'reachable_time': 25955, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 256647, 'error': None, 'target': 'ovnmeta-76036c60-8614-4d35-bbfa-05b970e10c21', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:12:09 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:12:09.433 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[ae11dd42-e4c6-45e3-bd13-720efd4b77c4]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fecd:dcd3'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 539141, 'tstamp': 539141}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 256648, 'error': None, 'target': 'ovnmeta-76036c60-8614-4d35-bbfa-05b970e10c21', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:12:09 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:12:09.445 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[9962381c-0677-4481-bba3-a20a4a94d425]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap76036c60-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:cd:dc:d3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 56], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 539141, 'reachable_time': 25955, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 256649, 'error': None, 'target': 'ovnmeta-76036c60-8614-4d35-bbfa-05b970e10c21', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:12:09 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:12:09.469 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[c235bbdd-877c-4053-afeb-fe66ac993147]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:12:09 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:12:09.532 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[6bf9fc4d-3fd0-4736-a8d0-c59656e2dbe9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:12:09 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:12:09.533 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap76036c60-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:12:09 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:12:09.534 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  6 02:12:09 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:12:09.534 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap76036c60-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:12:09 np0005548731 nova_compute[232433]: 2025-12-06 07:12:09.535 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:12:09 np0005548731 NetworkManager[49182]: <info>  [1765005129.5364] manager: (tap76036c60-80): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/98)
Dec  6 02:12:09 np0005548731 kernel: tap76036c60-80: entered promiscuous mode
Dec  6 02:12:09 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:12:09.541 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap76036c60-80, col_values=(('external_ids', {'iface-id': '98411c9e-5be2-4f98-8fbe-495f9d474c33'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:12:09 np0005548731 nova_compute[232433]: 2025-12-06 07:12:09.542 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:12:09 np0005548731 nova_compute[232433]: 2025-12-06 07:12:09.543 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:12:09 np0005548731 ovn_controller[133927]: 2025-12-06T07:12:09Z|00167|binding|INFO|Releasing lport 98411c9e-5be2-4f98-8fbe-495f9d474c33 from this chassis (sb_readonly=0)
Dec  6 02:12:09 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:12:09.545 143965 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/76036c60-8614-4d35-bbfa-05b970e10c21.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/76036c60-8614-4d35-bbfa-05b970e10c21.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Dec  6 02:12:09 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:12:09.546 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[ec6ef7cf-0d12-45af-81f0-29e3a56b3a6d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:12:09 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:12:09.547 143965 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec  6 02:12:09 np0005548731 ovn_metadata_agent[143960]: global
Dec  6 02:12:09 np0005548731 ovn_metadata_agent[143960]:    log         /dev/log local0 debug
Dec  6 02:12:09 np0005548731 ovn_metadata_agent[143960]:    log-tag     haproxy-metadata-proxy-76036c60-8614-4d35-bbfa-05b970e10c21
Dec  6 02:12:09 np0005548731 ovn_metadata_agent[143960]:    user        root
Dec  6 02:12:09 np0005548731 ovn_metadata_agent[143960]:    group       root
Dec  6 02:12:09 np0005548731 ovn_metadata_agent[143960]:    maxconn     1024
Dec  6 02:12:09 np0005548731 ovn_metadata_agent[143960]:    pidfile     /var/lib/neutron/external/pids/76036c60-8614-4d35-bbfa-05b970e10c21.pid.haproxy
Dec  6 02:12:09 np0005548731 ovn_metadata_agent[143960]:    daemon
Dec  6 02:12:09 np0005548731 ovn_metadata_agent[143960]: 
Dec  6 02:12:09 np0005548731 ovn_metadata_agent[143960]: defaults
Dec  6 02:12:09 np0005548731 ovn_metadata_agent[143960]:    log global
Dec  6 02:12:09 np0005548731 ovn_metadata_agent[143960]:    mode http
Dec  6 02:12:09 np0005548731 ovn_metadata_agent[143960]:    option httplog
Dec  6 02:12:09 np0005548731 ovn_metadata_agent[143960]:    option dontlognull
Dec  6 02:12:09 np0005548731 ovn_metadata_agent[143960]:    option http-server-close
Dec  6 02:12:09 np0005548731 ovn_metadata_agent[143960]:    option forwardfor
Dec  6 02:12:09 np0005548731 ovn_metadata_agent[143960]:    retries                 3
Dec  6 02:12:09 np0005548731 ovn_metadata_agent[143960]:    timeout http-request    30s
Dec  6 02:12:09 np0005548731 ovn_metadata_agent[143960]:    timeout connect         30s
Dec  6 02:12:09 np0005548731 ovn_metadata_agent[143960]:    timeout client          32s
Dec  6 02:12:09 np0005548731 ovn_metadata_agent[143960]:    timeout server          32s
Dec  6 02:12:09 np0005548731 ovn_metadata_agent[143960]:    timeout http-keep-alive 30s
Dec  6 02:12:09 np0005548731 ovn_metadata_agent[143960]: 
Dec  6 02:12:09 np0005548731 ovn_metadata_agent[143960]: 
Dec  6 02:12:09 np0005548731 ovn_metadata_agent[143960]: listen listener
Dec  6 02:12:09 np0005548731 ovn_metadata_agent[143960]:    bind 169.254.169.254:80
Dec  6 02:12:09 np0005548731 ovn_metadata_agent[143960]:    server metadata /var/lib/neutron/metadata_proxy
Dec  6 02:12:09 np0005548731 ovn_metadata_agent[143960]:    http-request add-header X-OVN-Network-ID 76036c60-8614-4d35-bbfa-05b970e10c21
Dec  6 02:12:09 np0005548731 ovn_metadata_agent[143960]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Dec  6 02:12:09 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:12:09.547 143965 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-76036c60-8614-4d35-bbfa-05b970e10c21', 'env', 'PROCESS_TAG=haproxy-76036c60-8614-4d35-bbfa-05b970e10c21', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/76036c60-8614-4d35-bbfa-05b970e10c21.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Dec  6 02:12:09 np0005548731 nova_compute[232433]: 2025-12-06 07:12:09.555 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:12:09 np0005548731 nova_compute[232433]: 2025-12-06 07:12:09.688 232437 DEBUG nova.network.neutron [req-10c9548d-522b-4414-b56d-f3cf82d112d7 req-8c90c912-46c9-4500-95d7-f8c7117679c0 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: cd3678f8-d4b4-4c8e-9c3a-be877c8a2268] Updated VIF entry in instance network info cache for port f8184177-3443-451c-8620-29c0cb8524ae. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  6 02:12:09 np0005548731 nova_compute[232433]: 2025-12-06 07:12:09.689 232437 DEBUG nova.network.neutron [req-10c9548d-522b-4414-b56d-f3cf82d112d7 req-8c90c912-46c9-4500-95d7-f8c7117679c0 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: cd3678f8-d4b4-4c8e-9c3a-be877c8a2268] Updating instance_info_cache with network_info: [{"id": "1693e5d5-d58b-4161-a25e-13d925d5185a", "address": "fa:16:3e:07:90:6a", "network": {"id": "76036c60-8614-4d35-bbfa-05b970e10c21", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-744030909", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.115", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da24f0e2d59745828feaaecfeb9fed45", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1693e5d5-d5", "ovs_interfaceid": "1693e5d5-d58b-4161-a25e-13d925d5185a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "f8184177-3443-451c-8620-29c0cb8524ae", "address": "fa:16:3e:e7:2b:1a", "network": {"id": "603d56cb-3f26-4fa4-a8ba-1d17c9191d9d", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-254058360", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.92", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da24f0e2d59745828feaaecfeb9fed45", 
"mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf8184177-34", "ovs_interfaceid": "f8184177-3443-451c-8620-29c0cb8524ae", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 02:12:09 np0005548731 nova_compute[232433]: 2025-12-06 07:12:09.712 232437 DEBUG nova.compute.manager [req-115dfce5-578f-46e4-9200-e975042763be req-e1bdf8cf-96f5-45a9-938a-4d8754c0efa9 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: cd3678f8-d4b4-4c8e-9c3a-be877c8a2268] Received event network-vif-plugged-1693e5d5-d58b-4161-a25e-13d925d5185a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:12:09 np0005548731 nova_compute[232433]: 2025-12-06 07:12:09.713 232437 DEBUG oslo_concurrency.lockutils [req-115dfce5-578f-46e4-9200-e975042763be req-e1bdf8cf-96f5-45a9-938a-4d8754c0efa9 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "cd3678f8-d4b4-4c8e-9c3a-be877c8a2268-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:12:09 np0005548731 nova_compute[232433]: 2025-12-06 07:12:09.713 232437 DEBUG oslo_concurrency.lockutils [req-115dfce5-578f-46e4-9200-e975042763be req-e1bdf8cf-96f5-45a9-938a-4d8754c0efa9 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "cd3678f8-d4b4-4c8e-9c3a-be877c8a2268-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:12:09 np0005548731 nova_compute[232433]: 2025-12-06 07:12:09.713 232437 DEBUG oslo_concurrency.lockutils [req-115dfce5-578f-46e4-9200-e975042763be req-e1bdf8cf-96f5-45a9-938a-4d8754c0efa9 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "cd3678f8-d4b4-4c8e-9c3a-be877c8a2268-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:12:09 np0005548731 nova_compute[232433]: 2025-12-06 07:12:09.714 232437 DEBUG nova.compute.manager [req-115dfce5-578f-46e4-9200-e975042763be req-e1bdf8cf-96f5-45a9-938a-4d8754c0efa9 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: cd3678f8-d4b4-4c8e-9c3a-be877c8a2268] Processing event network-vif-plugged-1693e5d5-d58b-4161-a25e-13d925d5185a _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Dec  6 02:12:09 np0005548731 nova_compute[232433]: 2025-12-06 07:12:09.715 232437 DEBUG oslo_concurrency.lockutils [req-10c9548d-522b-4414-b56d-f3cf82d112d7 req-8c90c912-46c9-4500-95d7-f8c7117679c0 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-cd3678f8-d4b4-4c8e-9c3a-be877c8a2268" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 02:12:09 np0005548731 nova_compute[232433]: 2025-12-06 07:12:09.800 232437 DEBUG nova.virt.driver [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Emitting event <LifecycleEvent: 1765005129.8003426, cd3678f8-d4b4-4c8e-9c3a-be877c8a2268 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 02:12:09 np0005548731 nova_compute[232433]: 2025-12-06 07:12:09.801 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: cd3678f8-d4b4-4c8e-9c3a-be877c8a2268] VM Started (Lifecycle Event)#033[00m
Dec  6 02:12:09 np0005548731 nova_compute[232433]: 2025-12-06 07:12:09.849 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: cd3678f8-d4b4-4c8e-9c3a-be877c8a2268] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:12:09 np0005548731 nova_compute[232433]: 2025-12-06 07:12:09.854 232437 DEBUG nova.virt.driver [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Emitting event <LifecycleEvent: 1765005129.8011696, cd3678f8-d4b4-4c8e-9c3a-be877c8a2268 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 02:12:09 np0005548731 nova_compute[232433]: 2025-12-06 07:12:09.854 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: cd3678f8-d4b4-4c8e-9c3a-be877c8a2268] VM Paused (Lifecycle Event)#033[00m
Dec  6 02:12:09 np0005548731 nova_compute[232433]: 2025-12-06 07:12:09.884 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: cd3678f8-d4b4-4c8e-9c3a-be877c8a2268] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:12:09 np0005548731 nova_compute[232433]: 2025-12-06 07:12:09.887 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: cd3678f8-d4b4-4c8e-9c3a-be877c8a2268] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  6 02:12:09 np0005548731 podman[256724]: 2025-12-06 07:12:09.90365467 +0000 UTC m=+0.049205768 container create 82dd2fa045c618d67b8d539230687b4712695377b1903e14982afed020d7421d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-76036c60-8614-4d35-bbfa-05b970e10c21, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3)
Dec  6 02:12:09 np0005548731 nova_compute[232433]: 2025-12-06 07:12:09.915 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: cd3678f8-d4b4-4c8e-9c3a-be877c8a2268] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  6 02:12:09 np0005548731 systemd[1]: Started libpod-conmon-82dd2fa045c618d67b8d539230687b4712695377b1903e14982afed020d7421d.scope.
Dec  6 02:12:09 np0005548731 systemd[1]: Started libcrun container.
Dec  6 02:12:09 np0005548731 nova_compute[232433]: 2025-12-06 07:12:09.965 232437 DEBUG nova.compute.manager [req-2c768e0d-388c-4ea5-a339-63ee38423838 req-19abc762-771c-4e3d-9e8e-03153f524171 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: cd3678f8-d4b4-4c8e-9c3a-be877c8a2268] Received event network-vif-plugged-f8184177-3443-451c-8620-29c0cb8524ae external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:12:09 np0005548731 nova_compute[232433]: 2025-12-06 07:12:09.966 232437 DEBUG oslo_concurrency.lockutils [req-2c768e0d-388c-4ea5-a339-63ee38423838 req-19abc762-771c-4e3d-9e8e-03153f524171 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "cd3678f8-d4b4-4c8e-9c3a-be877c8a2268-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:12:09 np0005548731 nova_compute[232433]: 2025-12-06 07:12:09.966 232437 DEBUG oslo_concurrency.lockutils [req-2c768e0d-388c-4ea5-a339-63ee38423838 req-19abc762-771c-4e3d-9e8e-03153f524171 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "cd3678f8-d4b4-4c8e-9c3a-be877c8a2268-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:12:09 np0005548731 nova_compute[232433]: 2025-12-06 07:12:09.966 232437 DEBUG oslo_concurrency.lockutils [req-2c768e0d-388c-4ea5-a339-63ee38423838 req-19abc762-771c-4e3d-9e8e-03153f524171 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "cd3678f8-d4b4-4c8e-9c3a-be877c8a2268-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:12:09 np0005548731 nova_compute[232433]: 2025-12-06 07:12:09.967 232437 DEBUG nova.compute.manager [req-2c768e0d-388c-4ea5-a339-63ee38423838 req-19abc762-771c-4e3d-9e8e-03153f524171 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: cd3678f8-d4b4-4c8e-9c3a-be877c8a2268] Processing event network-vif-plugged-f8184177-3443-451c-8620-29c0cb8524ae _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Dec  6 02:12:09 np0005548731 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3267f8fc0e6a6c98628a066de0865ae51fee55ed6d044ba87be00b2143d0eb5d/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec  6 02:12:09 np0005548731 nova_compute[232433]: 2025-12-06 07:12:09.967 232437 DEBUG nova.compute.manager [None req-fb03423a-abd1-4381-acb1-4ca57ec9d394 337447c5cffc48bf8256c9166a6ff0e2 da24f0e2d59745828feaaecfeb9fed45 - - default default] [instance: cd3678f8-d4b4-4c8e-9c3a-be877c8a2268] Instance event wait completed in 0 seconds for network-vif-plugged,network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec  6 02:12:09 np0005548731 podman[256724]: 2025-12-06 07:12:09.878114299 +0000 UTC m=+0.023665417 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec  6 02:12:09 np0005548731 nova_compute[232433]: 2025-12-06 07:12:09.974 232437 DEBUG nova.virt.libvirt.driver [None req-fb03423a-abd1-4381-acb1-4ca57ec9d394 337447c5cffc48bf8256c9166a6ff0e2 da24f0e2d59745828feaaecfeb9fed45 - - default default] [instance: cd3678f8-d4b4-4c8e-9c3a-be877c8a2268] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Dec  6 02:12:09 np0005548731 nova_compute[232433]: 2025-12-06 07:12:09.975 232437 DEBUG nova.virt.driver [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Emitting event <LifecycleEvent: 1765005129.974492, cd3678f8-d4b4-4c8e-9c3a-be877c8a2268 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 02:12:09 np0005548731 nova_compute[232433]: 2025-12-06 07:12:09.975 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: cd3678f8-d4b4-4c8e-9c3a-be877c8a2268] VM Resumed (Lifecycle Event)#033[00m
Dec  6 02:12:09 np0005548731 nova_compute[232433]: 2025-12-06 07:12:09.978 232437 INFO nova.virt.libvirt.driver [-] [instance: cd3678f8-d4b4-4c8e-9c3a-be877c8a2268] Instance spawned successfully.#033[00m
Dec  6 02:12:09 np0005548731 podman[256724]: 2025-12-06 07:12:09.97925887 +0000 UTC m=+0.124809998 container init 82dd2fa045c618d67b8d539230687b4712695377b1903e14982afed020d7421d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-76036c60-8614-4d35-bbfa-05b970e10c21, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Dec  6 02:12:09 np0005548731 nova_compute[232433]: 2025-12-06 07:12:09.979 232437 DEBUG nova.virt.libvirt.driver [None req-fb03423a-abd1-4381-acb1-4ca57ec9d394 337447c5cffc48bf8256c9166a6ff0e2 da24f0e2d59745828feaaecfeb9fed45 - - default default] [instance: cd3678f8-d4b4-4c8e-9c3a-be877c8a2268] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Dec  6 02:12:09 np0005548731 podman[256724]: 2025-12-06 07:12:09.984733515 +0000 UTC m=+0.130284603 container start 82dd2fa045c618d67b8d539230687b4712695377b1903e14982afed020d7421d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-76036c60-8614-4d35-bbfa-05b970e10c21, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, io.buildah.version=1.41.3)
Dec  6 02:12:10 np0005548731 neutron-haproxy-ovnmeta-76036c60-8614-4d35-bbfa-05b970e10c21[256739]: [NOTICE]   (256743) : New worker (256745) forked
Dec  6 02:12:10 np0005548731 neutron-haproxy-ovnmeta-76036c60-8614-4d35-bbfa-05b970e10c21[256739]: [NOTICE]   (256743) : Loading success.
Dec  6 02:12:10 np0005548731 nova_compute[232433]: 2025-12-06 07:12:10.008 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: cd3678f8-d4b4-4c8e-9c3a-be877c8a2268] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:12:10 np0005548731 nova_compute[232433]: 2025-12-06 07:12:10.013 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: cd3678f8-d4b4-4c8e-9c3a-be877c8a2268] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  6 02:12:10 np0005548731 nova_compute[232433]: 2025-12-06 07:12:10.016 232437 DEBUG nova.virt.libvirt.driver [None req-fb03423a-abd1-4381-acb1-4ca57ec9d394 337447c5cffc48bf8256c9166a6ff0e2 da24f0e2d59745828feaaecfeb9fed45 - - default default] [instance: cd3678f8-d4b4-4c8e-9c3a-be877c8a2268] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 02:12:10 np0005548731 nova_compute[232433]: 2025-12-06 07:12:10.016 232437 DEBUG nova.virt.libvirt.driver [None req-fb03423a-abd1-4381-acb1-4ca57ec9d394 337447c5cffc48bf8256c9166a6ff0e2 da24f0e2d59745828feaaecfeb9fed45 - - default default] [instance: cd3678f8-d4b4-4c8e-9c3a-be877c8a2268] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 02:12:10 np0005548731 nova_compute[232433]: 2025-12-06 07:12:10.017 232437 DEBUG nova.virt.libvirt.driver [None req-fb03423a-abd1-4381-acb1-4ca57ec9d394 337447c5cffc48bf8256c9166a6ff0e2 da24f0e2d59745828feaaecfeb9fed45 - - default default] [instance: cd3678f8-d4b4-4c8e-9c3a-be877c8a2268] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 02:12:10 np0005548731 nova_compute[232433]: 2025-12-06 07:12:10.017 232437 DEBUG nova.virt.libvirt.driver [None req-fb03423a-abd1-4381-acb1-4ca57ec9d394 337447c5cffc48bf8256c9166a6ff0e2 da24f0e2d59745828feaaecfeb9fed45 - - default default] [instance: cd3678f8-d4b4-4c8e-9c3a-be877c8a2268] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 02:12:10 np0005548731 nova_compute[232433]: 2025-12-06 07:12:10.017 232437 DEBUG nova.virt.libvirt.driver [None req-fb03423a-abd1-4381-acb1-4ca57ec9d394 337447c5cffc48bf8256c9166a6ff0e2 da24f0e2d59745828feaaecfeb9fed45 - - default default] [instance: cd3678f8-d4b4-4c8e-9c3a-be877c8a2268] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 02:12:10 np0005548731 nova_compute[232433]: 2025-12-06 07:12:10.018 232437 DEBUG nova.virt.libvirt.driver [None req-fb03423a-abd1-4381-acb1-4ca57ec9d394 337447c5cffc48bf8256c9166a6ff0e2 da24f0e2d59745828feaaecfeb9fed45 - - default default] [instance: cd3678f8-d4b4-4c8e-9c3a-be877c8a2268] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 02:12:10 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:12:10.036 143965 INFO neutron.agent.ovn.metadata.agent [-] Port f8184177-3443-451c-8620-29c0cb8524ae in datapath 603d56cb-3f26-4fa4-a8ba-1d17c9191d9d unbound from our chassis#033[00m
Dec  6 02:12:10 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:12:10.038 143965 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 603d56cb-3f26-4fa4-a8ba-1d17c9191d9d#033[00m
Dec  6 02:12:10 np0005548731 nova_compute[232433]: 2025-12-06 07:12:10.046 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: cd3678f8-d4b4-4c8e-9c3a-be877c8a2268] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  6 02:12:10 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:12:10.048 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[d4b3e187-c11c-4509-8920-04210f8e4de9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:12:10 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:12:10.049 143965 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap603d56cb-31 in ovnmeta-603d56cb-3f26-4fa4-a8ba-1d17c9191d9d namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Dec  6 02:12:10 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:12:10.051 236668 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap603d56cb-30 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Dec  6 02:12:10 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:12:10.051 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[cb7e4365-4f3a-4776-89e3-e7eb184b6833]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:12:10 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:12:10.052 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[2d61c3dd-dac3-42a3-a796-5a78ed7a8964]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:12:10 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:12:10.062 144079 DEBUG oslo.privsep.daemon [-] privsep: reply[79a04930-05bf-4c5a-9ffa-c1c517839432]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:12:10 np0005548731 nova_compute[232433]: 2025-12-06 07:12:10.083 232437 INFO nova.compute.manager [None req-fb03423a-abd1-4381-acb1-4ca57ec9d394 337447c5cffc48bf8256c9166a6ff0e2 da24f0e2d59745828feaaecfeb9fed45 - - default default] [instance: cd3678f8-d4b4-4c8e-9c3a-be877c8a2268] Took 14.87 seconds to spawn the instance on the hypervisor.#033[00m
Dec  6 02:12:10 np0005548731 nova_compute[232433]: 2025-12-06 07:12:10.084 232437 DEBUG nova.compute.manager [None req-fb03423a-abd1-4381-acb1-4ca57ec9d394 337447c5cffc48bf8256c9166a6ff0e2 da24f0e2d59745828feaaecfeb9fed45 - - default default] [instance: cd3678f8-d4b4-4c8e-9c3a-be877c8a2268] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:12:10 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:12:10.086 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[57be9783-d87e-49bb-b414-e3f4b9482f05]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:12:10 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:12:10.112 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[c5a4a8ba-b730-42c6-9219-4bf1628a00c5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:12:10 np0005548731 NetworkManager[49182]: <info>  [1765005130.1202] manager: (tap603d56cb-30): new Veth device (/org/freedesktop/NetworkManager/Devices/99)
Dec  6 02:12:10 np0005548731 systemd-udevd[256636]: Network interface NamePolicy= disabled on kernel command line.
Dec  6 02:12:10 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:12:10.122 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[9eddb214-7f37-477b-a5aa-bbe67b468199]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:12:10 np0005548731 nova_compute[232433]: 2025-12-06 07:12:10.142 232437 INFO nova.compute.manager [None req-fb03423a-abd1-4381-acb1-4ca57ec9d394 337447c5cffc48bf8256c9166a6ff0e2 da24f0e2d59745828feaaecfeb9fed45 - - default default] [instance: cd3678f8-d4b4-4c8e-9c3a-be877c8a2268] Took 16.46 seconds to build instance.#033[00m
Dec  6 02:12:10 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:12:10.157 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[aca70b75-2c36-4aca-bc5d-c0ac1f98f16d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:12:10 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:12:10.159 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[f9176541-1fe4-46c9-ad37-7d29dcf7fe8e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:12:10 np0005548731 nova_compute[232433]: 2025-12-06 07:12:10.164 232437 DEBUG oslo_concurrency.lockutils [None req-fb03423a-abd1-4381-acb1-4ca57ec9d394 337447c5cffc48bf8256c9166a6ff0e2 da24f0e2d59745828feaaecfeb9fed45 - - default default] Lock "cd3678f8-d4b4-4c8e-9c3a-be877c8a2268" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 16.576s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:12:10 np0005548731 NetworkManager[49182]: <info>  [1765005130.1832] device (tap603d56cb-30): carrier: link connected
Dec  6 02:12:10 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:12:10.190 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[3fbcc136-37d8-426d-8cb0-5cdce50c4a9e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:12:10 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:12:10.208 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[9a3c3986-df9c-4e30-bf38-65ad7fc591d1]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap603d56cb-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:91:1a:2e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 57], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 539219, 'reachable_time': 15801, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 256765, 'error': None, 'target': 'ovnmeta-603d56cb-3f26-4fa4-a8ba-1d17c9191d9d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:12:10 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:12:10.221 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[f6783b76-3779-4db7-9d17-ccc1702c72e1]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe91:1a2e'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 539219, 'tstamp': 539219}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 256766, 'error': None, 'target': 'ovnmeta-603d56cb-3f26-4fa4-a8ba-1d17c9191d9d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:12:10 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:12:10 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:12:10 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:12:10.235 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:12:10 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:12:10.237 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[892f99b3-1dbf-47f2-9785-a555501fc906]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap603d56cb-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:91:1a:2e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 57], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 539219, 'reachable_time': 15801, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 256767, 'error': None, 'target': 'ovnmeta-603d56cb-3f26-4fa4-a8ba-1d17c9191d9d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:12:10 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:12:10.265 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[7597d6da-c72d-4ed9-b51a-eeb95cd9f5b9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:12:10 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:12:10.317 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[bd05a423-1daa-4cb4-8e9a-c10f54a58bfc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:12:10 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:12:10.319 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap603d56cb-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:12:10 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:12:10.320 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  6 02:12:10 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:12:10.320 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap603d56cb-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:12:10 np0005548731 kernel: tap603d56cb-30: entered promiscuous mode
Dec  6 02:12:10 np0005548731 NetworkManager[49182]: <info>  [1765005130.3224] manager: (tap603d56cb-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/100)
Dec  6 02:12:10 np0005548731 nova_compute[232433]: 2025-12-06 07:12:10.323 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:12:10 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:12:10.328 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap603d56cb-30, col_values=(('external_ids', {'iface-id': '91c9e515-4d3f-49c9-be86-3cd5700577ef'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:12:10 np0005548731 ovn_controller[133927]: 2025-12-06T07:12:10Z|00168|binding|INFO|Releasing lport 91c9e515-4d3f-49c9-be86-3cd5700577ef from this chassis (sb_readonly=0)
Dec  6 02:12:10 np0005548731 nova_compute[232433]: 2025-12-06 07:12:10.329 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:12:10 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:12:10.332 143965 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/603d56cb-3f26-4fa4-a8ba-1d17c9191d9d.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/603d56cb-3f26-4fa4-a8ba-1d17c9191d9d.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Dec  6 02:12:10 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:12:10.333 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[b54fd666-c6de-415e-be9e-7fd3649b768c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:12:10 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:12:10.334 143965 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec  6 02:12:10 np0005548731 ovn_metadata_agent[143960]: global
Dec  6 02:12:10 np0005548731 ovn_metadata_agent[143960]:    log         /dev/log local0 debug
Dec  6 02:12:10 np0005548731 ovn_metadata_agent[143960]:    log-tag     haproxy-metadata-proxy-603d56cb-3f26-4fa4-a8ba-1d17c9191d9d
Dec  6 02:12:10 np0005548731 ovn_metadata_agent[143960]:    user        root
Dec  6 02:12:10 np0005548731 ovn_metadata_agent[143960]:    group       root
Dec  6 02:12:10 np0005548731 ovn_metadata_agent[143960]:    maxconn     1024
Dec  6 02:12:10 np0005548731 ovn_metadata_agent[143960]:    pidfile     /var/lib/neutron/external/pids/603d56cb-3f26-4fa4-a8ba-1d17c9191d9d.pid.haproxy
Dec  6 02:12:10 np0005548731 ovn_metadata_agent[143960]:    daemon
Dec  6 02:12:10 np0005548731 ovn_metadata_agent[143960]: 
Dec  6 02:12:10 np0005548731 ovn_metadata_agent[143960]: defaults
Dec  6 02:12:10 np0005548731 ovn_metadata_agent[143960]:    log global
Dec  6 02:12:10 np0005548731 ovn_metadata_agent[143960]:    mode http
Dec  6 02:12:10 np0005548731 ovn_metadata_agent[143960]:    option httplog
Dec  6 02:12:10 np0005548731 ovn_metadata_agent[143960]:    option dontlognull
Dec  6 02:12:10 np0005548731 ovn_metadata_agent[143960]:    option http-server-close
Dec  6 02:12:10 np0005548731 ovn_metadata_agent[143960]:    option forwardfor
Dec  6 02:12:10 np0005548731 ovn_metadata_agent[143960]:    retries                 3
Dec  6 02:12:10 np0005548731 ovn_metadata_agent[143960]:    timeout http-request    30s
Dec  6 02:12:10 np0005548731 ovn_metadata_agent[143960]:    timeout connect         30s
Dec  6 02:12:10 np0005548731 ovn_metadata_agent[143960]:    timeout client          32s
Dec  6 02:12:10 np0005548731 ovn_metadata_agent[143960]:    timeout server          32s
Dec  6 02:12:10 np0005548731 ovn_metadata_agent[143960]:    timeout http-keep-alive 30s
Dec  6 02:12:10 np0005548731 ovn_metadata_agent[143960]: 
Dec  6 02:12:10 np0005548731 ovn_metadata_agent[143960]: 
Dec  6 02:12:10 np0005548731 ovn_metadata_agent[143960]: listen listener
Dec  6 02:12:10 np0005548731 ovn_metadata_agent[143960]:    bind 169.254.169.254:80
Dec  6 02:12:10 np0005548731 ovn_metadata_agent[143960]:    server metadata /var/lib/neutron/metadata_proxy
Dec  6 02:12:10 np0005548731 ovn_metadata_agent[143960]:    http-request add-header X-OVN-Network-ID 603d56cb-3f26-4fa4-a8ba-1d17c9191d9d
Dec  6 02:12:10 np0005548731 ovn_metadata_agent[143960]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Dec  6 02:12:10 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:12:10.335 143965 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-603d56cb-3f26-4fa4-a8ba-1d17c9191d9d', 'env', 'PROCESS_TAG=haproxy-603d56cb-3f26-4fa4-a8ba-1d17c9191d9d', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/603d56cb-3f26-4fa4-a8ba-1d17c9191d9d.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Dec  6 02:12:10 np0005548731 nova_compute[232433]: 2025-12-06 07:12:10.342 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:12:10 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:12:10 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:12:10 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:12:10.471 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:12:10 np0005548731 podman[256799]: 2025-12-06 07:12:10.712892473 +0000 UTC m=+0.063482100 container create 7c7ce4f7e7c1062341fbb9ac7a91d6c31fa8110e86449bc067af1a46c21412cd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-603d56cb-3f26-4fa4-a8ba-1d17c9191d9d, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Dec  6 02:12:10 np0005548731 systemd[1]: Started libpod-conmon-7c7ce4f7e7c1062341fbb9ac7a91d6c31fa8110e86449bc067af1a46c21412cd.scope.
Dec  6 02:12:10 np0005548731 podman[256799]: 2025-12-06 07:12:10.681994619 +0000 UTC m=+0.032584266 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec  6 02:12:10 np0005548731 systemd[1]: Started libcrun container.
Dec  6 02:12:10 np0005548731 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ded3c3755d373eda975ed8ce60a35d930b40458e6ff9044257215700b0e0d76e/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec  6 02:12:10 np0005548731 podman[256799]: 2025-12-06 07:12:10.811076031 +0000 UTC m=+0.161665678 container init 7c7ce4f7e7c1062341fbb9ac7a91d6c31fa8110e86449bc067af1a46c21412cd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-603d56cb-3f26-4fa4-a8ba-1d17c9191d9d, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  6 02:12:10 np0005548731 podman[256799]: 2025-12-06 07:12:10.823403206 +0000 UTC m=+0.173992863 container start 7c7ce4f7e7c1062341fbb9ac7a91d6c31fa8110e86449bc067af1a46c21412cd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-603d56cb-3f26-4fa4-a8ba-1d17c9191d9d, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3)
Dec  6 02:12:10 np0005548731 neutron-haproxy-ovnmeta-603d56cb-3f26-4fa4-a8ba-1d17c9191d9d[256815]: [NOTICE]   (256819) : New worker (256821) forked
Dec  6 02:12:10 np0005548731 neutron-haproxy-ovnmeta-603d56cb-3f26-4fa4-a8ba-1d17c9191d9d[256815]: [NOTICE]   (256819) : Loading success.
Dec  6 02:12:11 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e224 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:12:11 np0005548731 nova_compute[232433]: 2025-12-06 07:12:11.712 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:12:12 np0005548731 nova_compute[232433]: 2025-12-06 07:12:12.068 232437 DEBUG nova.compute.manager [req-b41684d8-4b62-4c3f-ae6c-a11e02854c15 req-40380f0d-aaf8-4ce8-bd4d-4772dad45ccd 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: cd3678f8-d4b4-4c8e-9c3a-be877c8a2268] Received event network-vif-plugged-1693e5d5-d58b-4161-a25e-13d925d5185a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:12:12 np0005548731 nova_compute[232433]: 2025-12-06 07:12:12.068 232437 DEBUG oslo_concurrency.lockutils [req-b41684d8-4b62-4c3f-ae6c-a11e02854c15 req-40380f0d-aaf8-4ce8-bd4d-4772dad45ccd 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "cd3678f8-d4b4-4c8e-9c3a-be877c8a2268-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:12:12 np0005548731 nova_compute[232433]: 2025-12-06 07:12:12.068 232437 DEBUG oslo_concurrency.lockutils [req-b41684d8-4b62-4c3f-ae6c-a11e02854c15 req-40380f0d-aaf8-4ce8-bd4d-4772dad45ccd 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "cd3678f8-d4b4-4c8e-9c3a-be877c8a2268-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:12:12 np0005548731 nova_compute[232433]: 2025-12-06 07:12:12.069 232437 DEBUG oslo_concurrency.lockutils [req-b41684d8-4b62-4c3f-ae6c-a11e02854c15 req-40380f0d-aaf8-4ce8-bd4d-4772dad45ccd 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "cd3678f8-d4b4-4c8e-9c3a-be877c8a2268-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:12:12 np0005548731 nova_compute[232433]: 2025-12-06 07:12:12.069 232437 DEBUG nova.compute.manager [req-b41684d8-4b62-4c3f-ae6c-a11e02854c15 req-40380f0d-aaf8-4ce8-bd4d-4772dad45ccd 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: cd3678f8-d4b4-4c8e-9c3a-be877c8a2268] No waiting events found dispatching network-vif-plugged-1693e5d5-d58b-4161-a25e-13d925d5185a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 02:12:12 np0005548731 nova_compute[232433]: 2025-12-06 07:12:12.069 232437 WARNING nova.compute.manager [req-b41684d8-4b62-4c3f-ae6c-a11e02854c15 req-40380f0d-aaf8-4ce8-bd4d-4772dad45ccd 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: cd3678f8-d4b4-4c8e-9c3a-be877c8a2268] Received unexpected event network-vif-plugged-1693e5d5-d58b-4161-a25e-13d925d5185a for instance with vm_state active and task_state None.#033[00m
Dec  6 02:12:12 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:12:12 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:12:12 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:12:12.237 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:12:12 np0005548731 nova_compute[232433]: 2025-12-06 07:12:12.322 232437 DEBUG nova.compute.manager [req-63537daf-b050-4d92-9a1c-48017c357a2b req-830343ce-4c17-4dca-af0a-4ad81ebc2282 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: cd3678f8-d4b4-4c8e-9c3a-be877c8a2268] Received event network-vif-plugged-f8184177-3443-451c-8620-29c0cb8524ae external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:12:12 np0005548731 nova_compute[232433]: 2025-12-06 07:12:12.323 232437 DEBUG oslo_concurrency.lockutils [req-63537daf-b050-4d92-9a1c-48017c357a2b req-830343ce-4c17-4dca-af0a-4ad81ebc2282 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "cd3678f8-d4b4-4c8e-9c3a-be877c8a2268-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:12:12 np0005548731 nova_compute[232433]: 2025-12-06 07:12:12.323 232437 DEBUG oslo_concurrency.lockutils [req-63537daf-b050-4d92-9a1c-48017c357a2b req-830343ce-4c17-4dca-af0a-4ad81ebc2282 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "cd3678f8-d4b4-4c8e-9c3a-be877c8a2268-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:12:12 np0005548731 nova_compute[232433]: 2025-12-06 07:12:12.323 232437 DEBUG oslo_concurrency.lockutils [req-63537daf-b050-4d92-9a1c-48017c357a2b req-830343ce-4c17-4dca-af0a-4ad81ebc2282 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "cd3678f8-d4b4-4c8e-9c3a-be877c8a2268-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:12:12 np0005548731 nova_compute[232433]: 2025-12-06 07:12:12.323 232437 DEBUG nova.compute.manager [req-63537daf-b050-4d92-9a1c-48017c357a2b req-830343ce-4c17-4dca-af0a-4ad81ebc2282 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: cd3678f8-d4b4-4c8e-9c3a-be877c8a2268] No waiting events found dispatching network-vif-plugged-f8184177-3443-451c-8620-29c0cb8524ae pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 02:12:12 np0005548731 nova_compute[232433]: 2025-12-06 07:12:12.324 232437 WARNING nova.compute.manager [req-63537daf-b050-4d92-9a1c-48017c357a2b req-830343ce-4c17-4dca-af0a-4ad81ebc2282 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: cd3678f8-d4b4-4c8e-9c3a-be877c8a2268] Received unexpected event network-vif-plugged-f8184177-3443-451c-8620-29c0cb8524ae for instance with vm_state active and task_state None.#033[00m
Dec  6 02:12:12 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:12:12 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:12:12 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:12:12.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:12:12 np0005548731 nova_compute[232433]: 2025-12-06 07:12:12.558 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:12:14 np0005548731 nova_compute[232433]: 2025-12-06 07:12:14.237 232437 DEBUG oslo_concurrency.lockutils [None req-59ce60c1-26e1-4cf4-aa90-74d9a413b684 337447c5cffc48bf8256c9166a6ff0e2 da24f0e2d59745828feaaecfeb9fed45 - - default default] Acquiring lock "cd3678f8-d4b4-4c8e-9c3a-be877c8a2268" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:12:14 np0005548731 nova_compute[232433]: 2025-12-06 07:12:14.238 232437 DEBUG oslo_concurrency.lockutils [None req-59ce60c1-26e1-4cf4-aa90-74d9a413b684 337447c5cffc48bf8256c9166a6ff0e2 da24f0e2d59745828feaaecfeb9fed45 - - default default] Lock "cd3678f8-d4b4-4c8e-9c3a-be877c8a2268" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:12:14 np0005548731 nova_compute[232433]: 2025-12-06 07:12:14.238 232437 DEBUG oslo_concurrency.lockutils [None req-59ce60c1-26e1-4cf4-aa90-74d9a413b684 337447c5cffc48bf8256c9166a6ff0e2 da24f0e2d59745828feaaecfeb9fed45 - - default default] Acquiring lock "cd3678f8-d4b4-4c8e-9c3a-be877c8a2268-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:12:14 np0005548731 nova_compute[232433]: 2025-12-06 07:12:14.238 232437 DEBUG oslo_concurrency.lockutils [None req-59ce60c1-26e1-4cf4-aa90-74d9a413b684 337447c5cffc48bf8256c9166a6ff0e2 da24f0e2d59745828feaaecfeb9fed45 - - default default] Lock "cd3678f8-d4b4-4c8e-9c3a-be877c8a2268-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:12:14 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:12:14 np0005548731 nova_compute[232433]: 2025-12-06 07:12:14.239 232437 DEBUG oslo_concurrency.lockutils [None req-59ce60c1-26e1-4cf4-aa90-74d9a413b684 337447c5cffc48bf8256c9166a6ff0e2 da24f0e2d59745828feaaecfeb9fed45 - - default default] Lock "cd3678f8-d4b4-4c8e-9c3a-be877c8a2268-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:12:14 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:12:14 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:12:14.239 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:12:14 np0005548731 nova_compute[232433]: 2025-12-06 07:12:14.241 232437 INFO nova.compute.manager [None req-59ce60c1-26e1-4cf4-aa90-74d9a413b684 337447c5cffc48bf8256c9166a6ff0e2 da24f0e2d59745828feaaecfeb9fed45 - - default default] [instance: cd3678f8-d4b4-4c8e-9c3a-be877c8a2268] Terminating instance#033[00m
Dec  6 02:12:14 np0005548731 nova_compute[232433]: 2025-12-06 07:12:14.242 232437 DEBUG nova.compute.manager [None req-59ce60c1-26e1-4cf4-aa90-74d9a413b684 337447c5cffc48bf8256c9166a6ff0e2 da24f0e2d59745828feaaecfeb9fed45 - - default default] [instance: cd3678f8-d4b4-4c8e-9c3a-be877c8a2268] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec  6 02:12:14 np0005548731 kernel: tap1693e5d5-d5 (unregistering): left promiscuous mode
Dec  6 02:12:14 np0005548731 NetworkManager[49182]: <info>  [1765005134.2834] device (tap1693e5d5-d5): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec  6 02:12:14 np0005548731 nova_compute[232433]: 2025-12-06 07:12:14.292 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:12:14 np0005548731 ovn_controller[133927]: 2025-12-06T07:12:14Z|00169|binding|INFO|Releasing lport 1693e5d5-d58b-4161-a25e-13d925d5185a from this chassis (sb_readonly=0)
Dec  6 02:12:14 np0005548731 ovn_controller[133927]: 2025-12-06T07:12:14Z|00170|binding|INFO|Setting lport 1693e5d5-d58b-4161-a25e-13d925d5185a down in Southbound
Dec  6 02:12:14 np0005548731 ovn_controller[133927]: 2025-12-06T07:12:14Z|00171|binding|INFO|Removing iface tap1693e5d5-d5 ovn-installed in OVS
Dec  6 02:12:14 np0005548731 nova_compute[232433]: 2025-12-06 07:12:14.294 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:12:14 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:12:14.299 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:07:90:6a 10.100.0.115'], port_security=['fa:16:3e:07:90:6a 10.100.0.115'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.115/24', 'neutron:device_id': 'cd3678f8-d4b4-4c8e-9c3a-be877c8a2268', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-76036c60-8614-4d35-bbfa-05b970e10c21', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'da24f0e2d59745828feaaecfeb9fed45', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'f4b40e0f-999e-445b-b382-f95dc10d9fcc', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=22245ff4-133d-40ce-b29e-390d22212125, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], logical_port=1693e5d5-d58b-4161-a25e-13d925d5185a) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 02:12:14 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:12:14.300 143965 INFO neutron.agent.ovn.metadata.agent [-] Port 1693e5d5-d58b-4161-a25e-13d925d5185a in datapath 76036c60-8614-4d35-bbfa-05b970e10c21 unbound from our chassis#033[00m
Dec  6 02:12:14 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:12:14.302 143965 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 76036c60-8614-4d35-bbfa-05b970e10c21, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec  6 02:12:14 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:12:14.302 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[f8a9f153-5227-40c2-b4b1-9410d76fb6de]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:12:14 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:12:14.303 143965 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-76036c60-8614-4d35-bbfa-05b970e10c21 namespace which is not needed anymore#033[00m
Dec  6 02:12:14 np0005548731 nova_compute[232433]: 2025-12-06 07:12:14.307 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:12:14 np0005548731 kernel: tapf8184177-34 (unregistering): left promiscuous mode
Dec  6 02:12:14 np0005548731 NetworkManager[49182]: <info>  [1765005134.3174] device (tapf8184177-34): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec  6 02:12:14 np0005548731 nova_compute[232433]: 2025-12-06 07:12:14.319 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:12:14 np0005548731 ovn_controller[133927]: 2025-12-06T07:12:14Z|00172|binding|INFO|Releasing lport f8184177-3443-451c-8620-29c0cb8524ae from this chassis (sb_readonly=0)
Dec  6 02:12:14 np0005548731 ovn_controller[133927]: 2025-12-06T07:12:14Z|00173|binding|INFO|Setting lport f8184177-3443-451c-8620-29c0cb8524ae down in Southbound
Dec  6 02:12:14 np0005548731 ovn_controller[133927]: 2025-12-06T07:12:14Z|00174|binding|INFO|Removing iface tapf8184177-34 ovn-installed in OVS
Dec  6 02:12:14 np0005548731 nova_compute[232433]: 2025-12-06 07:12:14.329 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:12:14 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:12:14.336 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e7:2b:1a 10.100.1.92'], port_security=['fa:16:3e:e7:2b:1a 10.100.1.92'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.1.92/24', 'neutron:device_id': 'cd3678f8-d4b4-4c8e-9c3a-be877c8a2268', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-603d56cb-3f26-4fa4-a8ba-1d17c9191d9d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'da24f0e2d59745828feaaecfeb9fed45', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'f4b40e0f-999e-445b-b382-f95dc10d9fcc', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=cd97d84a-e2d6-40b1-84e7-9c6868be6694, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], logical_port=f8184177-3443-451c-8620-29c0cb8524ae) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 02:12:14 np0005548731 nova_compute[232433]: 2025-12-06 07:12:14.344 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:12:14 np0005548731 systemd[1]: machine-qemu\x2d24\x2dinstance\x2d00000037.scope: Deactivated successfully.
Dec  6 02:12:14 np0005548731 systemd[1]: machine-qemu\x2d24\x2dinstance\x2d00000037.scope: Consumed 4.887s CPU time.
Dec  6 02:12:14 np0005548731 systemd-machined[195355]: Machine qemu-24-instance-00000037 terminated.
Dec  6 02:12:14 np0005548731 neutron-haproxy-ovnmeta-76036c60-8614-4d35-bbfa-05b970e10c21[256739]: [NOTICE]   (256743) : haproxy version is 2.8.14-c23fe91
Dec  6 02:12:14 np0005548731 neutron-haproxy-ovnmeta-76036c60-8614-4d35-bbfa-05b970e10c21[256739]: [NOTICE]   (256743) : path to executable is /usr/sbin/haproxy
Dec  6 02:12:14 np0005548731 neutron-haproxy-ovnmeta-76036c60-8614-4d35-bbfa-05b970e10c21[256739]: [WARNING]  (256743) : Exiting Master process...
Dec  6 02:12:14 np0005548731 neutron-haproxy-ovnmeta-76036c60-8614-4d35-bbfa-05b970e10c21[256739]: [ALERT]    (256743) : Current worker (256745) exited with code 143 (Terminated)
Dec  6 02:12:14 np0005548731 neutron-haproxy-ovnmeta-76036c60-8614-4d35-bbfa-05b970e10c21[256739]: [WARNING]  (256743) : All workers exited. Exiting... (0)
Dec  6 02:12:14 np0005548731 systemd[1]: libpod-82dd2fa045c618d67b8d539230687b4712695377b1903e14982afed020d7421d.scope: Deactivated successfully.
Dec  6 02:12:14 np0005548731 podman[256860]: 2025-12-06 07:12:14.4311843 +0000 UTC m=+0.042088002 container died 82dd2fa045c618d67b8d539230687b4712695377b1903e14982afed020d7421d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-76036c60-8614-4d35-bbfa-05b970e10c21, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, io.buildah.version=1.41.3)
Dec  6 02:12:14 np0005548731 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-82dd2fa045c618d67b8d539230687b4712695377b1903e14982afed020d7421d-userdata-shm.mount: Deactivated successfully.
Dec  6 02:12:14 np0005548731 NetworkManager[49182]: <info>  [1765005134.4690] manager: (tap1693e5d5-d5): new Tun device (/org/freedesktop/NetworkManager/Devices/101)
Dec  6 02:12:14 np0005548731 systemd[1]: var-lib-containers-storage-overlay-3267f8fc0e6a6c98628a066de0865ae51fee55ed6d044ba87be00b2143d0eb5d-merged.mount: Deactivated successfully.
Dec  6 02:12:14 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:12:14 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:12:14 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:12:14.477 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:12:14 np0005548731 NetworkManager[49182]: <info>  [1765005134.4819] manager: (tapf8184177-34): new Tun device (/org/freedesktop/NetworkManager/Devices/102)
Dec  6 02:12:14 np0005548731 podman[256860]: 2025-12-06 07:12:14.481933734 +0000 UTC m=+0.092837416 container cleanup 82dd2fa045c618d67b8d539230687b4712695377b1903e14982afed020d7421d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-76036c60-8614-4d35-bbfa-05b970e10c21, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true)
Dec  6 02:12:14 np0005548731 systemd[1]: libpod-conmon-82dd2fa045c618d67b8d539230687b4712695377b1903e14982afed020d7421d.scope: Deactivated successfully.
Dec  6 02:12:14 np0005548731 nova_compute[232433]: 2025-12-06 07:12:14.499 232437 INFO nova.virt.libvirt.driver [-] [instance: cd3678f8-d4b4-4c8e-9c3a-be877c8a2268] Instance destroyed successfully.#033[00m
Dec  6 02:12:14 np0005548731 nova_compute[232433]: 2025-12-06 07:12:14.500 232437 DEBUG nova.objects.instance [None req-59ce60c1-26e1-4cf4-aa90-74d9a413b684 337447c5cffc48bf8256c9166a6ff0e2 da24f0e2d59745828feaaecfeb9fed45 - - default default] Lazy-loading 'resources' on Instance uuid cd3678f8-d4b4-4c8e-9c3a-be877c8a2268 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:12:14 np0005548731 nova_compute[232433]: 2025-12-06 07:12:14.518 232437 DEBUG nova.virt.libvirt.vif [None req-59ce60c1-26e1-4cf4-aa90-74d9a413b684 337447c5cffc48bf8256c9166a6ff0e2 da24f0e2d59745828feaaecfeb9fed45 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T07:11:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-173632355',display_name='tempest-ServersTestMultiNic-server-173632355',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-173632355',id=55,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-06T07:12:10Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='da24f0e2d59745828feaaecfeb9fed45',ramdisk_id='',reservation_id='r-lbtrnv84',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk=
'1',image_min_ram='0',owner_project_name='tempest-ServersTestMultiNic-889484419',owner_user_name='tempest-ServersTestMultiNic-889484419-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T07:12:10Z,user_data=None,user_id='337447c5cffc48bf8256c9166a6ff0e2',uuid=cd3678f8-d4b4-4c8e-9c3a-be877c8a2268,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "1693e5d5-d58b-4161-a25e-13d925d5185a", "address": "fa:16:3e:07:90:6a", "network": {"id": "76036c60-8614-4d35-bbfa-05b970e10c21", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-744030909", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.115", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da24f0e2d59745828feaaecfeb9fed45", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1693e5d5-d5", "ovs_interfaceid": "1693e5d5-d58b-4161-a25e-13d925d5185a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec  6 02:12:14 np0005548731 nova_compute[232433]: 2025-12-06 07:12:14.519 232437 DEBUG nova.network.os_vif_util [None req-59ce60c1-26e1-4cf4-aa90-74d9a413b684 337447c5cffc48bf8256c9166a6ff0e2 da24f0e2d59745828feaaecfeb9fed45 - - default default] Converting VIF {"id": "1693e5d5-d58b-4161-a25e-13d925d5185a", "address": "fa:16:3e:07:90:6a", "network": {"id": "76036c60-8614-4d35-bbfa-05b970e10c21", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-744030909", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.115", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da24f0e2d59745828feaaecfeb9fed45", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1693e5d5-d5", "ovs_interfaceid": "1693e5d5-d58b-4161-a25e-13d925d5185a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 02:12:14 np0005548731 nova_compute[232433]: 2025-12-06 07:12:14.519 232437 DEBUG nova.network.os_vif_util [None req-59ce60c1-26e1-4cf4-aa90-74d9a413b684 337447c5cffc48bf8256c9166a6ff0e2 da24f0e2d59745828feaaecfeb9fed45 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:07:90:6a,bridge_name='br-int',has_traffic_filtering=True,id=1693e5d5-d58b-4161-a25e-13d925d5185a,network=Network(76036c60-8614-4d35-bbfa-05b970e10c21),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1693e5d5-d5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 02:12:14 np0005548731 nova_compute[232433]: 2025-12-06 07:12:14.520 232437 DEBUG os_vif [None req-59ce60c1-26e1-4cf4-aa90-74d9a413b684 337447c5cffc48bf8256c9166a6ff0e2 da24f0e2d59745828feaaecfeb9fed45 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:07:90:6a,bridge_name='br-int',has_traffic_filtering=True,id=1693e5d5-d58b-4161-a25e-13d925d5185a,network=Network(76036c60-8614-4d35-bbfa-05b970e10c21),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1693e5d5-d5') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec  6 02:12:14 np0005548731 nova_compute[232433]: 2025-12-06 07:12:14.521 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:12:14 np0005548731 nova_compute[232433]: 2025-12-06 07:12:14.521 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1693e5d5-d5, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:12:14 np0005548731 nova_compute[232433]: 2025-12-06 07:12:14.522 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:12:14 np0005548731 nova_compute[232433]: 2025-12-06 07:12:14.524 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  6 02:12:14 np0005548731 nova_compute[232433]: 2025-12-06 07:12:14.527 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:12:14 np0005548731 nova_compute[232433]: 2025-12-06 07:12:14.529 232437 INFO os_vif [None req-59ce60c1-26e1-4cf4-aa90-74d9a413b684 337447c5cffc48bf8256c9166a6ff0e2 da24f0e2d59745828feaaecfeb9fed45 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:07:90:6a,bridge_name='br-int',has_traffic_filtering=True,id=1693e5d5-d58b-4161-a25e-13d925d5185a,network=Network(76036c60-8614-4d35-bbfa-05b970e10c21),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1693e5d5-d5')#033[00m
Dec  6 02:12:14 np0005548731 nova_compute[232433]: 2025-12-06 07:12:14.530 232437 DEBUG nova.virt.libvirt.vif [None req-59ce60c1-26e1-4cf4-aa90-74d9a413b684 337447c5cffc48bf8256c9166a6ff0e2 da24f0e2d59745828feaaecfeb9fed45 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T07:11:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-173632355',display_name='tempest-ServersTestMultiNic-server-173632355',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-173632355',id=55,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-06T07:12:10Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='da24f0e2d59745828feaaecfeb9fed45',ramdisk_id='',reservation_id='r-lbtrnv84',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk=
'1',image_min_ram='0',owner_project_name='tempest-ServersTestMultiNic-889484419',owner_user_name='tempest-ServersTestMultiNic-889484419-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T07:12:10Z,user_data=None,user_id='337447c5cffc48bf8256c9166a6ff0e2',uuid=cd3678f8-d4b4-4c8e-9c3a-be877c8a2268,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "f8184177-3443-451c-8620-29c0cb8524ae", "address": "fa:16:3e:e7:2b:1a", "network": {"id": "603d56cb-3f26-4fa4-a8ba-1d17c9191d9d", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-254058360", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.92", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da24f0e2d59745828feaaecfeb9fed45", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf8184177-34", "ovs_interfaceid": "f8184177-3443-451c-8620-29c0cb8524ae", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec  6 02:12:14 np0005548731 nova_compute[232433]: 2025-12-06 07:12:14.530 232437 DEBUG nova.network.os_vif_util [None req-59ce60c1-26e1-4cf4-aa90-74d9a413b684 337447c5cffc48bf8256c9166a6ff0e2 da24f0e2d59745828feaaecfeb9fed45 - - default default] Converting VIF {"id": "f8184177-3443-451c-8620-29c0cb8524ae", "address": "fa:16:3e:e7:2b:1a", "network": {"id": "603d56cb-3f26-4fa4-a8ba-1d17c9191d9d", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-254058360", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.92", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da24f0e2d59745828feaaecfeb9fed45", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf8184177-34", "ovs_interfaceid": "f8184177-3443-451c-8620-29c0cb8524ae", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 02:12:14 np0005548731 nova_compute[232433]: 2025-12-06 07:12:14.531 232437 DEBUG nova.network.os_vif_util [None req-59ce60c1-26e1-4cf4-aa90-74d9a413b684 337447c5cffc48bf8256c9166a6ff0e2 da24f0e2d59745828feaaecfeb9fed45 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e7:2b:1a,bridge_name='br-int',has_traffic_filtering=True,id=f8184177-3443-451c-8620-29c0cb8524ae,network=Network(603d56cb-3f26-4fa4-a8ba-1d17c9191d9d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf8184177-34') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 02:12:14 np0005548731 nova_compute[232433]: 2025-12-06 07:12:14.532 232437 DEBUG os_vif [None req-59ce60c1-26e1-4cf4-aa90-74d9a413b684 337447c5cffc48bf8256c9166a6ff0e2 da24f0e2d59745828feaaecfeb9fed45 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e7:2b:1a,bridge_name='br-int',has_traffic_filtering=True,id=f8184177-3443-451c-8620-29c0cb8524ae,network=Network(603d56cb-3f26-4fa4-a8ba-1d17c9191d9d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf8184177-34') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec  6 02:12:14 np0005548731 nova_compute[232433]: 2025-12-06 07:12:14.533 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:12:14 np0005548731 nova_compute[232433]: 2025-12-06 07:12:14.533 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf8184177-34, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:12:14 np0005548731 nova_compute[232433]: 2025-12-06 07:12:14.534 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:12:14 np0005548731 nova_compute[232433]: 2025-12-06 07:12:14.535 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:12:14 np0005548731 nova_compute[232433]: 2025-12-06 07:12:14.537 232437 INFO os_vif [None req-59ce60c1-26e1-4cf4-aa90-74d9a413b684 337447c5cffc48bf8256c9166a6ff0e2 da24f0e2d59745828feaaecfeb9fed45 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e7:2b:1a,bridge_name='br-int',has_traffic_filtering=True,id=f8184177-3443-451c-8620-29c0cb8524ae,network=Network(603d56cb-3f26-4fa4-a8ba-1d17c9191d9d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf8184177-34')#033[00m
Dec  6 02:12:14 np0005548731 nova_compute[232433]: 2025-12-06 07:12:14.554 232437 DEBUG nova.compute.manager [req-788bc8d2-8b65-4ac3-9ec3-5516ae1b50f3 req-a90054d0-3a5b-4dd6-b3cb-6f25f9076bac 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: cd3678f8-d4b4-4c8e-9c3a-be877c8a2268] Received event network-vif-unplugged-1693e5d5-d58b-4161-a25e-13d925d5185a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:12:14 np0005548731 nova_compute[232433]: 2025-12-06 07:12:14.554 232437 DEBUG oslo_concurrency.lockutils [req-788bc8d2-8b65-4ac3-9ec3-5516ae1b50f3 req-a90054d0-3a5b-4dd6-b3cb-6f25f9076bac 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "cd3678f8-d4b4-4c8e-9c3a-be877c8a2268-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:12:14 np0005548731 nova_compute[232433]: 2025-12-06 07:12:14.555 232437 DEBUG oslo_concurrency.lockutils [req-788bc8d2-8b65-4ac3-9ec3-5516ae1b50f3 req-a90054d0-3a5b-4dd6-b3cb-6f25f9076bac 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "cd3678f8-d4b4-4c8e-9c3a-be877c8a2268-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:12:14 np0005548731 nova_compute[232433]: 2025-12-06 07:12:14.555 232437 DEBUG oslo_concurrency.lockutils [req-788bc8d2-8b65-4ac3-9ec3-5516ae1b50f3 req-a90054d0-3a5b-4dd6-b3cb-6f25f9076bac 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "cd3678f8-d4b4-4c8e-9c3a-be877c8a2268-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:12:14 np0005548731 nova_compute[232433]: 2025-12-06 07:12:14.555 232437 DEBUG nova.compute.manager [req-788bc8d2-8b65-4ac3-9ec3-5516ae1b50f3 req-a90054d0-3a5b-4dd6-b3cb-6f25f9076bac 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: cd3678f8-d4b4-4c8e-9c3a-be877c8a2268] No waiting events found dispatching network-vif-unplugged-1693e5d5-d58b-4161-a25e-13d925d5185a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 02:12:14 np0005548731 nova_compute[232433]: 2025-12-06 07:12:14.556 232437 DEBUG nova.compute.manager [req-788bc8d2-8b65-4ac3-9ec3-5516ae1b50f3 req-a90054d0-3a5b-4dd6-b3cb-6f25f9076bac 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: cd3678f8-d4b4-4c8e-9c3a-be877c8a2268] Received event network-vif-unplugged-1693e5d5-d58b-4161-a25e-13d925d5185a for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Dec  6 02:12:14 np0005548731 podman[256909]: 2025-12-06 07:12:14.570564047 +0000 UTC m=+0.046260526 container remove 82dd2fa045c618d67b8d539230687b4712695377b1903e14982afed020d7421d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-76036c60-8614-4d35-bbfa-05b970e10c21, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec  6 02:12:14 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:12:14.576 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[8132e75f-c98c-40ab-a8af-40589088872b]: (4, ('Sat Dec  6 07:12:14 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-76036c60-8614-4d35-bbfa-05b970e10c21 (82dd2fa045c618d67b8d539230687b4712695377b1903e14982afed020d7421d)\n82dd2fa045c618d67b8d539230687b4712695377b1903e14982afed020d7421d\nSat Dec  6 07:12:14 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-76036c60-8614-4d35-bbfa-05b970e10c21 (82dd2fa045c618d67b8d539230687b4712695377b1903e14982afed020d7421d)\n82dd2fa045c618d67b8d539230687b4712695377b1903e14982afed020d7421d\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:12:14 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:12:14.577 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[9edda364-1684-4940-853f-2770f877eb57]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:12:14 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:12:14.578 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap76036c60-80, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:12:14 np0005548731 nova_compute[232433]: 2025-12-06 07:12:14.579 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:12:14 np0005548731 kernel: tap76036c60-80: left promiscuous mode
Dec  6 02:12:14 np0005548731 nova_compute[232433]: 2025-12-06 07:12:14.598 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:12:14 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:12:14.602 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[f3d84543-9528-49ff-b4e2-8cfa54d3c98c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:12:14 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:12:14.616 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[2c39c835-d366-4297-9966-ddf6e6ecfc77]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:12:14 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:12:14.618 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[75524cf1-336d-491c-bb83-00040eefd1ab]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:12:14 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:12:14.634 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[357f8737-76b5-41a5-ac04-66f932700caa]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 539135, 'reachable_time': 27495, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 256942, 'error': None, 'target': 'ovnmeta-76036c60-8614-4d35-bbfa-05b970e10c21', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:12:14 np0005548731 systemd[1]: run-netns-ovnmeta\x2d76036c60\x2d8614\x2d4d35\x2dbbfa\x2d05b970e10c21.mount: Deactivated successfully.
Dec  6 02:12:14 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:12:14.638 144079 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-76036c60-8614-4d35-bbfa-05b970e10c21 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Dec  6 02:12:14 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:12:14.639 144079 DEBUG oslo.privsep.daemon [-] privsep: reply[f30e59e8-4b98-40aa-a28d-58088ab5bc2a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:12:14 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:12:14.640 143965 INFO neutron.agent.ovn.metadata.agent [-] Port f8184177-3443-451c-8620-29c0cb8524ae in datapath 603d56cb-3f26-4fa4-a8ba-1d17c9191d9d unbound from our chassis#033[00m
Dec  6 02:12:14 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:12:14.641 143965 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 603d56cb-3f26-4fa4-a8ba-1d17c9191d9d, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec  6 02:12:14 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:12:14.642 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[758c0adf-915b-4268-afc9-4c6ea10b8537]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:12:14 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:12:14.643 143965 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-603d56cb-3f26-4fa4-a8ba-1d17c9191d9d namespace which is not needed anymore#033[00m
Dec  6 02:12:14 np0005548731 nova_compute[232433]: 2025-12-06 07:12:14.653 232437 DEBUG nova.compute.manager [req-94f7a252-055d-48f6-96bc-7de3987e7362 req-fb524bf1-6ad7-407b-8d1f-6c16349c6584 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: cd3678f8-d4b4-4c8e-9c3a-be877c8a2268] Received event network-vif-unplugged-f8184177-3443-451c-8620-29c0cb8524ae external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:12:14 np0005548731 nova_compute[232433]: 2025-12-06 07:12:14.654 232437 DEBUG oslo_concurrency.lockutils [req-94f7a252-055d-48f6-96bc-7de3987e7362 req-fb524bf1-6ad7-407b-8d1f-6c16349c6584 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "cd3678f8-d4b4-4c8e-9c3a-be877c8a2268-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:12:14 np0005548731 nova_compute[232433]: 2025-12-06 07:12:14.654 232437 DEBUG oslo_concurrency.lockutils [req-94f7a252-055d-48f6-96bc-7de3987e7362 req-fb524bf1-6ad7-407b-8d1f-6c16349c6584 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "cd3678f8-d4b4-4c8e-9c3a-be877c8a2268-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:12:14 np0005548731 nova_compute[232433]: 2025-12-06 07:12:14.654 232437 DEBUG oslo_concurrency.lockutils [req-94f7a252-055d-48f6-96bc-7de3987e7362 req-fb524bf1-6ad7-407b-8d1f-6c16349c6584 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "cd3678f8-d4b4-4c8e-9c3a-be877c8a2268-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:12:14 np0005548731 nova_compute[232433]: 2025-12-06 07:12:14.654 232437 DEBUG nova.compute.manager [req-94f7a252-055d-48f6-96bc-7de3987e7362 req-fb524bf1-6ad7-407b-8d1f-6c16349c6584 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: cd3678f8-d4b4-4c8e-9c3a-be877c8a2268] No waiting events found dispatching network-vif-unplugged-f8184177-3443-451c-8620-29c0cb8524ae pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 02:12:14 np0005548731 nova_compute[232433]: 2025-12-06 07:12:14.655 232437 DEBUG nova.compute.manager [req-94f7a252-055d-48f6-96bc-7de3987e7362 req-fb524bf1-6ad7-407b-8d1f-6c16349c6584 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: cd3678f8-d4b4-4c8e-9c3a-be877c8a2268] Received event network-vif-unplugged-f8184177-3443-451c-8620-29c0cb8524ae for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Dec  6 02:12:14 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:12:14.714 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=21, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:ec:b3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '32:72:e7:89:e0:7d'}, ipsec=False) old=SB_Global(nb_cfg=20) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 02:12:14 np0005548731 nova_compute[232433]: 2025-12-06 07:12:14.715 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:12:14 np0005548731 neutron-haproxy-ovnmeta-603d56cb-3f26-4fa4-a8ba-1d17c9191d9d[256815]: [NOTICE]   (256819) : haproxy version is 2.8.14-c23fe91
Dec  6 02:12:14 np0005548731 neutron-haproxy-ovnmeta-603d56cb-3f26-4fa4-a8ba-1d17c9191d9d[256815]: [NOTICE]   (256819) : path to executable is /usr/sbin/haproxy
Dec  6 02:12:14 np0005548731 neutron-haproxy-ovnmeta-603d56cb-3f26-4fa4-a8ba-1d17c9191d9d[256815]: [WARNING]  (256819) : Exiting Master process...
Dec  6 02:12:14 np0005548731 neutron-haproxy-ovnmeta-603d56cb-3f26-4fa4-a8ba-1d17c9191d9d[256815]: [ALERT]    (256819) : Current worker (256821) exited with code 143 (Terminated)
Dec  6 02:12:14 np0005548731 neutron-haproxy-ovnmeta-603d56cb-3f26-4fa4-a8ba-1d17c9191d9d[256815]: [WARNING]  (256819) : All workers exited. Exiting... (0)
Dec  6 02:12:14 np0005548731 systemd[1]: libpod-7c7ce4f7e7c1062341fbb9ac7a91d6c31fa8110e86449bc067af1a46c21412cd.scope: Deactivated successfully.
Dec  6 02:12:14 np0005548731 podman[256960]: 2025-12-06 07:12:14.798201816 +0000 UTC m=+0.054524079 container died 7c7ce4f7e7c1062341fbb9ac7a91d6c31fa8110e86449bc067af1a46c21412cd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-603d56cb-3f26-4fa4-a8ba-1d17c9191d9d, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Dec  6 02:12:14 np0005548731 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-7c7ce4f7e7c1062341fbb9ac7a91d6c31fa8110e86449bc067af1a46c21412cd-userdata-shm.mount: Deactivated successfully.
Dec  6 02:12:14 np0005548731 systemd[1]: var-lib-containers-storage-overlay-ded3c3755d373eda975ed8ce60a35d930b40458e6ff9044257215700b0e0d76e-merged.mount: Deactivated successfully.
Dec  6 02:12:14 np0005548731 podman[256960]: 2025-12-06 07:12:14.830953557 +0000 UTC m=+0.087275800 container cleanup 7c7ce4f7e7c1062341fbb9ac7a91d6c31fa8110e86449bc067af1a46c21412cd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-603d56cb-3f26-4fa4-a8ba-1d17c9191d9d, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  6 02:12:14 np0005548731 systemd[1]: libpod-conmon-7c7ce4f7e7c1062341fbb9ac7a91d6c31fa8110e86449bc067af1a46c21412cd.scope: Deactivated successfully.
Dec  6 02:12:14 np0005548731 podman[256990]: 2025-12-06 07:12:14.898025655 +0000 UTC m=+0.043046915 container remove 7c7ce4f7e7c1062341fbb9ac7a91d6c31fa8110e86449bc067af1a46c21412cd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-603d56cb-3f26-4fa4-a8ba-1d17c9191d9d, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec  6 02:12:14 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:12:14.903 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[f2d44f0a-c3ed-427a-bfd6-6d2080cf69ad]: (4, ('Sat Dec  6 07:12:14 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-603d56cb-3f26-4fa4-a8ba-1d17c9191d9d (7c7ce4f7e7c1062341fbb9ac7a91d6c31fa8110e86449bc067af1a46c21412cd)\n7c7ce4f7e7c1062341fbb9ac7a91d6c31fa8110e86449bc067af1a46c21412cd\nSat Dec  6 07:12:14 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-603d56cb-3f26-4fa4-a8ba-1d17c9191d9d (7c7ce4f7e7c1062341fbb9ac7a91d6c31fa8110e86449bc067af1a46c21412cd)\n7c7ce4f7e7c1062341fbb9ac7a91d6c31fa8110e86449bc067af1a46c21412cd\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:12:14 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:12:14.904 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[50538800-ad58-427f-8b4e-9421741960ec]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:12:14 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:12:14.905 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap603d56cb-30, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:12:14 np0005548731 nova_compute[232433]: 2025-12-06 07:12:14.906 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:12:14 np0005548731 kernel: tap603d56cb-30: left promiscuous mode
Dec  6 02:12:14 np0005548731 nova_compute[232433]: 2025-12-06 07:12:14.919 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:12:14 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:12:14.921 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[0f65683f-fddb-4a88-b7b2-decbbb4c4827]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:12:14 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:12:14.937 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[58458009-7360-4fba-9061-c3263a26f9fe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:12:14 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:12:14.938 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[6065a6c8-2324-4341-9c1d-2356eb8d6507]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:12:14 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:12:14.954 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[d09a66f8-c71a-4c7c-875b-b20f417c9214]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 539211, 'reachable_time': 34351, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 257005, 'error': None, 'target': 'ovnmeta-603d56cb-3f26-4fa4-a8ba-1d17c9191d9d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:12:14 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:12:14.956 144079 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-603d56cb-3f26-4fa4-a8ba-1d17c9191d9d deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Dec  6 02:12:14 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:12:14.956 144079 DEBUG oslo.privsep.daemon [-] privsep: reply[88a9c332-e6d9-4206-a631-b91140e70158]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:12:14 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:12:14.957 143965 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec  6 02:12:15 np0005548731 systemd[1]: run-netns-ovnmeta\x2d603d56cb\x2d3f26\x2d4fa4\x2da8ba\x2d1d17c9191d9d.mount: Deactivated successfully.
Dec  6 02:12:16 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e224 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:12:16 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:12:16 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:12:16 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:12:16.241 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:12:16 np0005548731 ceph-mds[82074]: mds.beacon.cephfs.compute-2.tjfgow missed beacon ack from the monitors
Dec  6 02:12:16 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:12:16 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:12:16 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:12:16.480 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:12:16 np0005548731 nova_compute[232433]: 2025-12-06 07:12:16.713 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:12:16 np0005548731 nova_compute[232433]: 2025-12-06 07:12:16.763 232437 DEBUG nova.compute.manager [req-41579677-8a61-48bf-ba2a-43a9d8c0ec74 req-4c2c3bf7-ebb5-4552-b076-2729917f520d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: cd3678f8-d4b4-4c8e-9c3a-be877c8a2268] Received event network-vif-plugged-f8184177-3443-451c-8620-29c0cb8524ae external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:12:16 np0005548731 nova_compute[232433]: 2025-12-06 07:12:16.764 232437 DEBUG oslo_concurrency.lockutils [req-41579677-8a61-48bf-ba2a-43a9d8c0ec74 req-4c2c3bf7-ebb5-4552-b076-2729917f520d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "cd3678f8-d4b4-4c8e-9c3a-be877c8a2268-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:12:16 np0005548731 nova_compute[232433]: 2025-12-06 07:12:16.764 232437 DEBUG oslo_concurrency.lockutils [req-41579677-8a61-48bf-ba2a-43a9d8c0ec74 req-4c2c3bf7-ebb5-4552-b076-2729917f520d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "cd3678f8-d4b4-4c8e-9c3a-be877c8a2268-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:12:16 np0005548731 nova_compute[232433]: 2025-12-06 07:12:16.764 232437 DEBUG oslo_concurrency.lockutils [req-41579677-8a61-48bf-ba2a-43a9d8c0ec74 req-4c2c3bf7-ebb5-4552-b076-2729917f520d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "cd3678f8-d4b4-4c8e-9c3a-be877c8a2268-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:12:16 np0005548731 nova_compute[232433]: 2025-12-06 07:12:16.765 232437 DEBUG nova.compute.manager [req-41579677-8a61-48bf-ba2a-43a9d8c0ec74 req-4c2c3bf7-ebb5-4552-b076-2729917f520d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: cd3678f8-d4b4-4c8e-9c3a-be877c8a2268] No waiting events found dispatching network-vif-plugged-f8184177-3443-451c-8620-29c0cb8524ae pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 02:12:16 np0005548731 nova_compute[232433]: 2025-12-06 07:12:16.765 232437 WARNING nova.compute.manager [req-41579677-8a61-48bf-ba2a-43a9d8c0ec74 req-4c2c3bf7-ebb5-4552-b076-2729917f520d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: cd3678f8-d4b4-4c8e-9c3a-be877c8a2268] Received unexpected event network-vif-plugged-f8184177-3443-451c-8620-29c0cb8524ae for instance with vm_state active and task_state deleting.#033[00m
Dec  6 02:12:16 np0005548731 podman[257009]: 2025-12-06 07:12:16.897211766 +0000 UTC m=+0.054191931 container health_status ae4fd08cdec31d6ae2a6b91ffff7deb6ca5a8375cb52ee4b685cc6f25796824e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Dec  6 02:12:16 np0005548731 podman[257007]: 2025-12-06 07:12:16.897355389 +0000 UTC m=+0.061160413 container health_status 26c2ce9bdb83c6db5ccad7f5b2a7cb4e9354ab0235ad2f942ea42c8feda2142b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, 
managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Dec  6 02:12:16 np0005548731 podman[257008]: 2025-12-06 07:12:16.932246342 +0000 UTC m=+0.095141514 container health_status a118c3becf56cfbcae208065771ebf10a65162bb7b96389971293e534823fb78 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Dec  6 02:12:16 np0005548731 nova_compute[232433]: 2025-12-06 07:12:16.958 232437 DEBUG nova.compute.manager [req-06e4b6e1-0327-4ac3-b61f-8400fa892710 req-40612366-0132-47ec-a7ee-f4049a127ba8 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: cd3678f8-d4b4-4c8e-9c3a-be877c8a2268] Received event network-vif-plugged-1693e5d5-d58b-4161-a25e-13d925d5185a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:12:16 np0005548731 nova_compute[232433]: 2025-12-06 07:12:16.958 232437 DEBUG oslo_concurrency.lockutils [req-06e4b6e1-0327-4ac3-b61f-8400fa892710 req-40612366-0132-47ec-a7ee-f4049a127ba8 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "cd3678f8-d4b4-4c8e-9c3a-be877c8a2268-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:12:16 np0005548731 nova_compute[232433]: 2025-12-06 07:12:16.959 232437 DEBUG oslo_concurrency.lockutils [req-06e4b6e1-0327-4ac3-b61f-8400fa892710 req-40612366-0132-47ec-a7ee-f4049a127ba8 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "cd3678f8-d4b4-4c8e-9c3a-be877c8a2268-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:12:16 np0005548731 nova_compute[232433]: 2025-12-06 07:12:16.959 232437 DEBUG oslo_concurrency.lockutils [req-06e4b6e1-0327-4ac3-b61f-8400fa892710 req-40612366-0132-47ec-a7ee-f4049a127ba8 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "cd3678f8-d4b4-4c8e-9c3a-be877c8a2268-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:12:16 np0005548731 nova_compute[232433]: 2025-12-06 07:12:16.959 232437 DEBUG nova.compute.manager [req-06e4b6e1-0327-4ac3-b61f-8400fa892710 req-40612366-0132-47ec-a7ee-f4049a127ba8 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: cd3678f8-d4b4-4c8e-9c3a-be877c8a2268] No waiting events found dispatching network-vif-plugged-1693e5d5-d58b-4161-a25e-13d925d5185a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 02:12:16 np0005548731 nova_compute[232433]: 2025-12-06 07:12:16.959 232437 WARNING nova.compute.manager [req-06e4b6e1-0327-4ac3-b61f-8400fa892710 req-40612366-0132-47ec-a7ee-f4049a127ba8 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: cd3678f8-d4b4-4c8e-9c3a-be877c8a2268] Received unexpected event network-vif-plugged-1693e5d5-d58b-4161-a25e-13d925d5185a for instance with vm_state active and task_state deleting.#033[00m
Dec  6 02:12:18 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:12:18 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:12:18 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:12:18.243 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:12:18 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:12:18 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:12:18 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:12:18.482 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:12:19 np0005548731 nova_compute[232433]: 2025-12-06 07:12:19.537 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:12:19 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).paxos(paxos updating c 2762..3274) lease_timeout -- calling new election
Dec  6 02:12:19 np0005548731 ceph-mon[77458]: log_channel(cluster) log [INF] : mon.compute-2 calling monitor election
Dec  6 02:12:19 np0005548731 ceph-mon[77458]: paxos.1).electionLogic(46) init, last seen epoch 46
Dec  6 02:12:19 np0005548731 ceph-mon[77458]: mon.compute-2@1(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec  6 02:12:19 np0005548731 ceph-mon[77458]: mon.compute-2@1(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec  6 02:12:20 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:12:20 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:12:20 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:12:20.245 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:12:20 np0005548731 ceph-mds[82074]: mds.beacon.cephfs.compute-2.tjfgow missed beacon ack from the monitors
Dec  6 02:12:20 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:12:20 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:12:20 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:12:20.485 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:12:21 np0005548731 nova_compute[232433]: 2025-12-06 07:12:21.714 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:12:22 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:12:22 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:12:22 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:12:22.247 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:12:22 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:12:22 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:12:22 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:12:22.488 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:12:24 np0005548731 ceph-mon[77458]: mon.compute-2@1(electing) e3 handle_auth_request failed to assign global_id
Dec  6 02:12:24 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:12:24 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:12:24 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:12:24.249 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:12:24 np0005548731 ceph-mds[82074]: mds.beacon.cephfs.compute-2.tjfgow missed beacon ack from the monitors
Dec  6 02:12:24 np0005548731 ceph-mon[77458]: mon.compute-2@1(electing) e3 handle_auth_request failed to assign global_id
Dec  6 02:12:24 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:12:24 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:12:24 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:12:24.491 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:12:24 np0005548731 nova_compute[232433]: 2025-12-06 07:12:24.540 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:12:24 np0005548731 ceph-mon[77458]: mon.compute-2@1(electing) e3 handle_auth_request failed to assign global_id
Dec  6 02:12:24 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:12:24.958 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=9f96b960-b4f2-40bd-ae99-08121f5e8b78, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '21'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:12:24 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e225 e225: 3 total, 3 up, 3 in
Dec  6 02:12:24 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec  6 02:12:25 np0005548731 nova_compute[232433]: 2025-12-06 07:12:25.513 232437 INFO nova.virt.libvirt.driver [None req-59ce60c1-26e1-4cf4-aa90-74d9a413b684 337447c5cffc48bf8256c9166a6ff0e2 da24f0e2d59745828feaaecfeb9fed45 - - default default] [instance: cd3678f8-d4b4-4c8e-9c3a-be877c8a2268] Deleting instance files /var/lib/nova/instances/cd3678f8-d4b4-4c8e-9c3a-be877c8a2268_del#033[00m
Dec  6 02:12:25 np0005548731 nova_compute[232433]: 2025-12-06 07:12:25.514 232437 INFO nova.virt.libvirt.driver [None req-59ce60c1-26e1-4cf4-aa90-74d9a413b684 337447c5cffc48bf8256c9166a6ff0e2 da24f0e2d59745828feaaecfeb9fed45 - - default default] [instance: cd3678f8-d4b4-4c8e-9c3a-be877c8a2268] Deletion of /var/lib/nova/instances/cd3678f8-d4b4-4c8e-9c3a-be877c8a2268_del complete#033[00m
Dec  6 02:12:25 np0005548731 nova_compute[232433]: 2025-12-06 07:12:25.562 232437 INFO nova.compute.manager [None req-59ce60c1-26e1-4cf4-aa90-74d9a413b684 337447c5cffc48bf8256c9166a6ff0e2 da24f0e2d59745828feaaecfeb9fed45 - - default default] [instance: cd3678f8-d4b4-4c8e-9c3a-be877c8a2268] Took 11.32 seconds to destroy the instance on the hypervisor.#033[00m
Dec  6 02:12:25 np0005548731 nova_compute[232433]: 2025-12-06 07:12:25.563 232437 DEBUG oslo.service.loopingcall [None req-59ce60c1-26e1-4cf4-aa90-74d9a413b684 337447c5cffc48bf8256c9166a6ff0e2 da24f0e2d59745828feaaecfeb9fed45 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Dec  6 02:12:25 np0005548731 nova_compute[232433]: 2025-12-06 07:12:25.563 232437 DEBUG nova.compute.manager [-] [instance: cd3678f8-d4b4-4c8e-9c3a-be877c8a2268] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Dec  6 02:12:25 np0005548731 nova_compute[232433]: 2025-12-06 07:12:25.564 232437 DEBUG nova.network.neutron [-] [instance: cd3678f8-d4b4-4c8e-9c3a-be877c8a2268] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Dec  6 02:12:25 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 02:12:25 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3405406043' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 02:12:25 np0005548731 ceph-mon[77458]: mon.compute-2 calling monitor election
Dec  6 02:12:25 np0005548731 ceph-mon[77458]: mon.compute-0 calling monitor election
Dec  6 02:12:25 np0005548731 ceph-mon[77458]: mon.compute-0 is new leader, mons compute-0,compute-2 in quorum (ranks 0,1)
Dec  6 02:12:25 np0005548731 ceph-mon[77458]: Health check failed: 1/3 mons down, quorum compute-0,compute-2 (MON_DOWN)
Dec  6 02:12:25 np0005548731 ceph-mon[77458]: Health detail: HEALTH_WARN 1/3 mons down, quorum compute-0,compute-2
Dec  6 02:12:25 np0005548731 ceph-mon[77458]: [WRN] MON_DOWN: 1/3 mons down, quorum compute-0,compute-2
Dec  6 02:12:25 np0005548731 ceph-mon[77458]:    mon.compute-1 (rank 2) addr [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] is down (out of quorum)
Dec  6 02:12:25 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e226 e226: 3 total, 3 up, 3 in
Dec  6 02:12:26 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e226 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:12:26 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:12:26 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:12:26 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:12:26.251 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:12:26 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:12:26 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:12:26 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:12:26.494 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:12:26 np0005548731 nova_compute[232433]: 2025-12-06 07:12:26.716 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:12:26 np0005548731 nova_compute[232433]: 2025-12-06 07:12:26.759 232437 DEBUG nova.compute.manager [req-1a051ceb-39e9-4d9b-887f-45746475f762 req-e16f67e3-8bd9-4706-9301-f27be8182d95 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: cd3678f8-d4b4-4c8e-9c3a-be877c8a2268] Received event network-vif-deleted-1693e5d5-d58b-4161-a25e-13d925d5185a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:12:26 np0005548731 nova_compute[232433]: 2025-12-06 07:12:26.759 232437 INFO nova.compute.manager [req-1a051ceb-39e9-4d9b-887f-45746475f762 req-e16f67e3-8bd9-4706-9301-f27be8182d95 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: cd3678f8-d4b4-4c8e-9c3a-be877c8a2268] Neutron deleted interface 1693e5d5-d58b-4161-a25e-13d925d5185a; detaching it from the instance and deleting it from the info cache#033[00m
Dec  6 02:12:26 np0005548731 nova_compute[232433]: 2025-12-06 07:12:26.760 232437 DEBUG nova.network.neutron [req-1a051ceb-39e9-4d9b-887f-45746475f762 req-e16f67e3-8bd9-4706-9301-f27be8182d95 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: cd3678f8-d4b4-4c8e-9c3a-be877c8a2268] Updating instance_info_cache with network_info: [{"id": "f8184177-3443-451c-8620-29c0cb8524ae", "address": "fa:16:3e:e7:2b:1a", "network": {"id": "603d56cb-3f26-4fa4-a8ba-1d17c9191d9d", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-254058360", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.92", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da24f0e2d59745828feaaecfeb9fed45", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf8184177-34", "ovs_interfaceid": "f8184177-3443-451c-8620-29c0cb8524ae", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 02:12:26 np0005548731 nova_compute[232433]: 2025-12-06 07:12:26.782 232437 DEBUG nova.compute.manager [req-1a051ceb-39e9-4d9b-887f-45746475f762 req-e16f67e3-8bd9-4706-9301-f27be8182d95 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: cd3678f8-d4b4-4c8e-9c3a-be877c8a2268] Detach interface failed, port_id=1693e5d5-d58b-4161-a25e-13d925d5185a, reason: Instance cd3678f8-d4b4-4c8e-9c3a-be877c8a2268 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882#033[00m
Dec  6 02:12:27 np0005548731 nova_compute[232433]: 2025-12-06 07:12:27.085 232437 DEBUG nova.network.neutron [-] [instance: cd3678f8-d4b4-4c8e-9c3a-be877c8a2268] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 02:12:27 np0005548731 nova_compute[232433]: 2025-12-06 07:12:27.103 232437 INFO nova.compute.manager [-] [instance: cd3678f8-d4b4-4c8e-9c3a-be877c8a2268] Took 1.54 seconds to deallocate network for instance.#033[00m
Dec  6 02:12:27 np0005548731 nova_compute[232433]: 2025-12-06 07:12:27.143 232437 DEBUG oslo_concurrency.lockutils [None req-59ce60c1-26e1-4cf4-aa90-74d9a413b684 337447c5cffc48bf8256c9166a6ff0e2 da24f0e2d59745828feaaecfeb9fed45 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:12:27 np0005548731 nova_compute[232433]: 2025-12-06 07:12:27.144 232437 DEBUG oslo_concurrency.lockutils [None req-59ce60c1-26e1-4cf4-aa90-74d9a413b684 337447c5cffc48bf8256c9166a6ff0e2 da24f0e2d59745828feaaecfeb9fed45 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:12:27 np0005548731 nova_compute[232433]: 2025-12-06 07:12:27.205 232437 DEBUG oslo_concurrency.processutils [None req-59ce60c1-26e1-4cf4-aa90-74d9a413b684 337447c5cffc48bf8256c9166a6ff0e2 da24f0e2d59745828feaaecfeb9fed45 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:12:27 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 02:12:27 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/821615078' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 02:12:27 np0005548731 nova_compute[232433]: 2025-12-06 07:12:27.643 232437 DEBUG oslo_concurrency.processutils [None req-59ce60c1-26e1-4cf4-aa90-74d9a413b684 337447c5cffc48bf8256c9166a6ff0e2 da24f0e2d59745828feaaecfeb9fed45 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.438s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:12:27 np0005548731 nova_compute[232433]: 2025-12-06 07:12:27.651 232437 DEBUG nova.compute.provider_tree [None req-59ce60c1-26e1-4cf4-aa90-74d9a413b684 337447c5cffc48bf8256c9166a6ff0e2 da24f0e2d59745828feaaecfeb9fed45 - - default default] Inventory has not changed in ProviderTree for provider: 6d00757a-082f-486d-ae84-869a2ba2e6e7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  6 02:12:27 np0005548731 nova_compute[232433]: 2025-12-06 07:12:27.668 232437 DEBUG nova.scheduler.client.report [None req-59ce60c1-26e1-4cf4-aa90-74d9a413b684 337447c5cffc48bf8256c9166a6ff0e2 da24f0e2d59745828feaaecfeb9fed45 - - default default] Inventory has not changed for provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  6 02:12:27 np0005548731 nova_compute[232433]: 2025-12-06 07:12:27.689 232437 DEBUG oslo_concurrency.lockutils [None req-59ce60c1-26e1-4cf4-aa90-74d9a413b684 337447c5cffc48bf8256c9166a6ff0e2 da24f0e2d59745828feaaecfeb9fed45 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.545s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:12:27 np0005548731 nova_compute[232433]: 2025-12-06 07:12:27.732 232437 INFO nova.scheduler.client.report [None req-59ce60c1-26e1-4cf4-aa90-74d9a413b684 337447c5cffc48bf8256c9166a6ff0e2 da24f0e2d59745828feaaecfeb9fed45 - - default default] Deleted allocations for instance cd3678f8-d4b4-4c8e-9c3a-be877c8a2268#033[00m
Dec  6 02:12:27 np0005548731 nova_compute[232433]: 2025-12-06 07:12:27.796 232437 DEBUG oslo_concurrency.lockutils [None req-59ce60c1-26e1-4cf4-aa90-74d9a413b684 337447c5cffc48bf8256c9166a6ff0e2 da24f0e2d59745828feaaecfeb9fed45 - - default default] Lock "cd3678f8-d4b4-4c8e-9c3a-be877c8a2268" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 13.559s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:12:28 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:12:28 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:12:28 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:12:28.252 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:12:28 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:12:28 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:12:28 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:12:28.495 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:12:28 np0005548731 nova_compute[232433]: 2025-12-06 07:12:28.896 232437 DEBUG nova.compute.manager [req-6cfb8c8b-0cd9-4219-89e7-0c6641d0f1df req-f6c89590-8283-4a71-9e9a-b2deb839f706 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: cd3678f8-d4b4-4c8e-9c3a-be877c8a2268] Received event network-vif-deleted-f8184177-3443-451c-8620-29c0cb8524ae external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:12:29 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e227 e227: 3 total, 3 up, 3 in
Dec  6 02:12:29 np0005548731 nova_compute[232433]: 2025-12-06 07:12:29.244 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:12:29 np0005548731 nova_compute[232433]: 2025-12-06 07:12:29.498 232437 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765005134.4976301, cd3678f8-d4b4-4c8e-9c3a-be877c8a2268 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 02:12:29 np0005548731 nova_compute[232433]: 2025-12-06 07:12:29.499 232437 INFO nova.compute.manager [-] [instance: cd3678f8-d4b4-4c8e-9c3a-be877c8a2268] VM Stopped (Lifecycle Event)#033[00m
Dec  6 02:12:29 np0005548731 nova_compute[232433]: 2025-12-06 07:12:29.536 232437 DEBUG nova.compute.manager [None req-1b46bcb3-e95e-4df2-9bf5-f64e0b8a5882 - - - - - -] [instance: cd3678f8-d4b4-4c8e-9c3a-be877c8a2268] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:12:29 np0005548731 nova_compute[232433]: 2025-12-06 07:12:29.541 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:12:29 np0005548731 ceph-mon[77458]: log_channel(cluster) log [INF] : mon.compute-2 calling monitor election
Dec  6 02:12:29 np0005548731 ceph-mon[77458]: paxos.1).electionLogic(50) init, last seen epoch 50
Dec  6 02:12:29 np0005548731 ceph-mon[77458]: mon.compute-2@1(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec  6 02:12:29 np0005548731 ceph-mon[77458]: mon.compute-2@1(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec  6 02:12:29 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec  6 02:12:30 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:12:30 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:12:30 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:12:30.254 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:12:30 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e228 e228: 3 total, 3 up, 3 in
Dec  6 02:12:30 np0005548731 ceph-mon[77458]: mon.compute-0 calling monitor election
Dec  6 02:12:30 np0005548731 ceph-mon[77458]: mon.compute-2 calling monitor election
Dec  6 02:12:30 np0005548731 ceph-mon[77458]: mon.compute-0 is new leader, mons compute-0,compute-2,compute-1 in quorum (ranks 0,1,2)
Dec  6 02:12:30 np0005548731 ceph-mon[77458]: Health check cleared: MON_DOWN (was: 1/3 mons down, quorum compute-0,compute-2)
Dec  6 02:12:30 np0005548731 ceph-mon[77458]: Cluster is now healthy
Dec  6 02:12:30 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:12:30 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:12:30 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:12:30.498 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:12:31 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e228 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:12:31 np0005548731 ceph-mon[77458]: mon.compute-1 calling monitor election
Dec  6 02:12:31 np0005548731 ceph-mon[77458]: overall HEALTH_OK
Dec  6 02:12:31 np0005548731 nova_compute[232433]: 2025-12-06 07:12:31.719 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:12:32 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:12:32 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:12:32 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:12:32.254 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:12:32 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:12:32 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:12:32 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:12:32.501 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:12:33 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e229 e229: 3 total, 3 up, 3 in
Dec  6 02:12:34 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:12:34 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:12:34 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:12:34.256 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:12:34 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:12:34 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:12:34 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:12:34.503 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:12:34 np0005548731 nova_compute[232433]: 2025-12-06 07:12:34.545 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:12:36 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e229 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:12:36 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:12:36 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:12:36 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:12:36.258 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:12:36 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:12:36 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:12:36 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:12:36.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:12:36 np0005548731 nova_compute[232433]: 2025-12-06 07:12:36.721 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:12:38 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:12:38 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:12:38 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:12:38.259 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:12:38 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:12:38 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:12:38 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:12:38.508 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:12:38 np0005548731 nova_compute[232433]: 2025-12-06 07:12:38.781 232437 DEBUG oslo_concurrency.lockutils [None req-17dbf88e-0ded-4420-a9e7-f4b26ef4ec41 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Acquiring lock "a4b6c016-d090-4c71-9a1d-1f0debbdf5b0" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:12:38 np0005548731 nova_compute[232433]: 2025-12-06 07:12:38.781 232437 DEBUG oslo_concurrency.lockutils [None req-17dbf88e-0ded-4420-a9e7-f4b26ef4ec41 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Lock "a4b6c016-d090-4c71-9a1d-1f0debbdf5b0" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:12:38 np0005548731 nova_compute[232433]: 2025-12-06 07:12:38.812 232437 DEBUG nova.compute.manager [None req-17dbf88e-0ded-4420-a9e7-f4b26ef4ec41 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] [instance: a4b6c016-d090-4c71-9a1d-1f0debbdf5b0] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Dec  6 02:12:38 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e230 e230: 3 total, 3 up, 3 in
Dec  6 02:12:39 np0005548731 nova_compute[232433]: 2025-12-06 07:12:39.120 232437 DEBUG oslo_concurrency.lockutils [None req-17dbf88e-0ded-4420-a9e7-f4b26ef4ec41 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:12:39 np0005548731 nova_compute[232433]: 2025-12-06 07:12:39.121 232437 DEBUG oslo_concurrency.lockutils [None req-17dbf88e-0ded-4420-a9e7-f4b26ef4ec41 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:12:39 np0005548731 nova_compute[232433]: 2025-12-06 07:12:39.127 232437 DEBUG nova.virt.hardware [None req-17dbf88e-0ded-4420-a9e7-f4b26ef4ec41 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Dec  6 02:12:39 np0005548731 nova_compute[232433]: 2025-12-06 07:12:39.127 232437 INFO nova.compute.claims [None req-17dbf88e-0ded-4420-a9e7-f4b26ef4ec41 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] [instance: a4b6c016-d090-4c71-9a1d-1f0debbdf5b0] Claim successful on node compute-2.ctlplane.example.com#033[00m
Dec  6 02:12:39 np0005548731 nova_compute[232433]: 2025-12-06 07:12:39.275 232437 DEBUG oslo_concurrency.processutils [None req-17dbf88e-0ded-4420-a9e7-f4b26ef4ec41 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:12:39 np0005548731 nova_compute[232433]: 2025-12-06 07:12:39.549 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:12:39 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 02:12:39 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1662024483' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 02:12:39 np0005548731 nova_compute[232433]: 2025-12-06 07:12:39.684 232437 DEBUG oslo_concurrency.processutils [None req-17dbf88e-0ded-4420-a9e7-f4b26ef4ec41 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.409s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:12:39 np0005548731 nova_compute[232433]: 2025-12-06 07:12:39.690 232437 DEBUG nova.compute.provider_tree [None req-17dbf88e-0ded-4420-a9e7-f4b26ef4ec41 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Inventory has not changed in ProviderTree for provider: 6d00757a-082f-486d-ae84-869a2ba2e6e7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  6 02:12:39 np0005548731 nova_compute[232433]: 2025-12-06 07:12:39.706 232437 DEBUG nova.scheduler.client.report [None req-17dbf88e-0ded-4420-a9e7-f4b26ef4ec41 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Inventory has not changed for provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  6 02:12:39 np0005548731 nova_compute[232433]: 2025-12-06 07:12:39.726 232437 DEBUG oslo_concurrency.lockutils [None req-17dbf88e-0ded-4420-a9e7-f4b26ef4ec41 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.605s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:12:39 np0005548731 nova_compute[232433]: 2025-12-06 07:12:39.727 232437 DEBUG nova.compute.manager [None req-17dbf88e-0ded-4420-a9e7-f4b26ef4ec41 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] [instance: a4b6c016-d090-4c71-9a1d-1f0debbdf5b0] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Dec  6 02:12:39 np0005548731 nova_compute[232433]: 2025-12-06 07:12:39.772 232437 DEBUG nova.compute.manager [None req-17dbf88e-0ded-4420-a9e7-f4b26ef4ec41 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] [instance: a4b6c016-d090-4c71-9a1d-1f0debbdf5b0] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Dec  6 02:12:39 np0005548731 nova_compute[232433]: 2025-12-06 07:12:39.772 232437 DEBUG nova.network.neutron [None req-17dbf88e-0ded-4420-a9e7-f4b26ef4ec41 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] [instance: a4b6c016-d090-4c71-9a1d-1f0debbdf5b0] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Dec  6 02:12:39 np0005548731 nova_compute[232433]: 2025-12-06 07:12:39.788 232437 INFO nova.virt.libvirt.driver [None req-17dbf88e-0ded-4420-a9e7-f4b26ef4ec41 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] [instance: a4b6c016-d090-4c71-9a1d-1f0debbdf5b0] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Dec  6 02:12:39 np0005548731 nova_compute[232433]: 2025-12-06 07:12:39.817 232437 DEBUG nova.compute.manager [None req-17dbf88e-0ded-4420-a9e7-f4b26ef4ec41 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] [instance: a4b6c016-d090-4c71-9a1d-1f0debbdf5b0] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Dec  6 02:12:39 np0005548731 nova_compute[232433]: 2025-12-06 07:12:39.902 232437 DEBUG nova.compute.manager [None req-17dbf88e-0ded-4420-a9e7-f4b26ef4ec41 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] [instance: a4b6c016-d090-4c71-9a1d-1f0debbdf5b0] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Dec  6 02:12:39 np0005548731 nova_compute[232433]: 2025-12-06 07:12:39.903 232437 DEBUG nova.virt.libvirt.driver [None req-17dbf88e-0ded-4420-a9e7-f4b26ef4ec41 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] [instance: a4b6c016-d090-4c71-9a1d-1f0debbdf5b0] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Dec  6 02:12:39 np0005548731 nova_compute[232433]: 2025-12-06 07:12:39.904 232437 INFO nova.virt.libvirt.driver [None req-17dbf88e-0ded-4420-a9e7-f4b26ef4ec41 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] [instance: a4b6c016-d090-4c71-9a1d-1f0debbdf5b0] Creating image(s)#033[00m
Dec  6 02:12:39 np0005548731 nova_compute[232433]: 2025-12-06 07:12:39.932 232437 DEBUG nova.storage.rbd_utils [None req-17dbf88e-0ded-4420-a9e7-f4b26ef4ec41 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] rbd image a4b6c016-d090-4c71-9a1d-1f0debbdf5b0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:12:39 np0005548731 nova_compute[232433]: 2025-12-06 07:12:39.959 232437 DEBUG nova.storage.rbd_utils [None req-17dbf88e-0ded-4420-a9e7-f4b26ef4ec41 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] rbd image a4b6c016-d090-4c71-9a1d-1f0debbdf5b0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:12:39 np0005548731 nova_compute[232433]: 2025-12-06 07:12:39.987 232437 DEBUG nova.storage.rbd_utils [None req-17dbf88e-0ded-4420-a9e7-f4b26ef4ec41 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] rbd image a4b6c016-d090-4c71-9a1d-1f0debbdf5b0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:12:39 np0005548731 nova_compute[232433]: 2025-12-06 07:12:39.991 232437 DEBUG oslo_concurrency.processutils [None req-17dbf88e-0ded-4420-a9e7-f4b26ef4ec41 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:12:40 np0005548731 nova_compute[232433]: 2025-12-06 07:12:40.056 232437 DEBUG oslo_concurrency.processutils [None req-17dbf88e-0ded-4420-a9e7-f4b26ef4ec41 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:12:40 np0005548731 nova_compute[232433]: 2025-12-06 07:12:40.058 232437 DEBUG oslo_concurrency.lockutils [None req-17dbf88e-0ded-4420-a9e7-f4b26ef4ec41 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Acquiring lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:12:40 np0005548731 nova_compute[232433]: 2025-12-06 07:12:40.058 232437 DEBUG oslo_concurrency.lockutils [None req-17dbf88e-0ded-4420-a9e7-f4b26ef4ec41 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:12:40 np0005548731 nova_compute[232433]: 2025-12-06 07:12:40.059 232437 DEBUG oslo_concurrency.lockutils [None req-17dbf88e-0ded-4420-a9e7-f4b26ef4ec41 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:12:40 np0005548731 nova_compute[232433]: 2025-12-06 07:12:40.086 232437 DEBUG nova.storage.rbd_utils [None req-17dbf88e-0ded-4420-a9e7-f4b26ef4ec41 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] rbd image a4b6c016-d090-4c71-9a1d-1f0debbdf5b0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:12:40 np0005548731 nova_compute[232433]: 2025-12-06 07:12:40.091 232437 DEBUG oslo_concurrency.processutils [None req-17dbf88e-0ded-4420-a9e7-f4b26ef4ec41 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef a4b6c016-d090-4c71-9a1d-1f0debbdf5b0_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:12:40 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:12:40 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:12:40 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:12:40.261 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:12:40 np0005548731 nova_compute[232433]: 2025-12-06 07:12:40.354 232437 DEBUG nova.policy [None req-17dbf88e-0ded-4420-a9e7-f4b26ef4ec41 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'bdd7994b0ebb4035a373b6560aa7dbcf', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'af7365adc05f4624a08a71cd5a77ada6', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Dec  6 02:12:40 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:12:40 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:12:40 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:12:40.511 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:12:41 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e230 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:12:41 np0005548731 nova_compute[232433]: 2025-12-06 07:12:41.122 232437 DEBUG nova.network.neutron [None req-17dbf88e-0ded-4420-a9e7-f4b26ef4ec41 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] [instance: a4b6c016-d090-4c71-9a1d-1f0debbdf5b0] Successfully created port: 4fe57424-f705-4ca6-8802-348f30338843 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Dec  6 02:12:41 np0005548731 nova_compute[232433]: 2025-12-06 07:12:41.721 232437 DEBUG oslo_concurrency.processutils [None req-17dbf88e-0ded-4420-a9e7-f4b26ef4ec41 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef a4b6c016-d090-4c71-9a1d-1f0debbdf5b0_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.629s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:12:41 np0005548731 nova_compute[232433]: 2025-12-06 07:12:41.755 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:12:41 np0005548731 nova_compute[232433]: 2025-12-06 07:12:41.795 232437 DEBUG nova.storage.rbd_utils [None req-17dbf88e-0ded-4420-a9e7-f4b26ef4ec41 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] resizing rbd image a4b6c016-d090-4c71-9a1d-1f0debbdf5b0_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Dec  6 02:12:42 np0005548731 nova_compute[232433]: 2025-12-06 07:12:42.042 232437 DEBUG nova.network.neutron [None req-17dbf88e-0ded-4420-a9e7-f4b26ef4ec41 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] [instance: a4b6c016-d090-4c71-9a1d-1f0debbdf5b0] Successfully updated port: 4fe57424-f705-4ca6-8802-348f30338843 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Dec  6 02:12:42 np0005548731 nova_compute[232433]: 2025-12-06 07:12:42.056 232437 DEBUG oslo_concurrency.lockutils [None req-17dbf88e-0ded-4420-a9e7-f4b26ef4ec41 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Acquiring lock "refresh_cache-a4b6c016-d090-4c71-9a1d-1f0debbdf5b0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 02:12:42 np0005548731 nova_compute[232433]: 2025-12-06 07:12:42.057 232437 DEBUG oslo_concurrency.lockutils [None req-17dbf88e-0ded-4420-a9e7-f4b26ef4ec41 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Acquired lock "refresh_cache-a4b6c016-d090-4c71-9a1d-1f0debbdf5b0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 02:12:42 np0005548731 nova_compute[232433]: 2025-12-06 07:12:42.057 232437 DEBUG nova.network.neutron [None req-17dbf88e-0ded-4420-a9e7-f4b26ef4ec41 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] [instance: a4b6c016-d090-4c71-9a1d-1f0debbdf5b0] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec  6 02:12:42 np0005548731 nova_compute[232433]: 2025-12-06 07:12:42.138 232437 DEBUG nova.compute.manager [req-e940571f-2a69-4c98-b501-9f2e1cd0cb46 req-b40b02f8-3a9e-4bb3-8952-6c7d73d06c28 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: a4b6c016-d090-4c71-9a1d-1f0debbdf5b0] Received event network-changed-4fe57424-f705-4ca6-8802-348f30338843 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:12:42 np0005548731 nova_compute[232433]: 2025-12-06 07:12:42.139 232437 DEBUG nova.compute.manager [req-e940571f-2a69-4c98-b501-9f2e1cd0cb46 req-b40b02f8-3a9e-4bb3-8952-6c7d73d06c28 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: a4b6c016-d090-4c71-9a1d-1f0debbdf5b0] Refreshing instance network info cache due to event network-changed-4fe57424-f705-4ca6-8802-348f30338843. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  6 02:12:42 np0005548731 nova_compute[232433]: 2025-12-06 07:12:42.139 232437 DEBUG oslo_concurrency.lockutils [req-e940571f-2a69-4c98-b501-9f2e1cd0cb46 req-b40b02f8-3a9e-4bb3-8952-6c7d73d06c28 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-a4b6c016-d090-4c71-9a1d-1f0debbdf5b0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 02:12:42 np0005548731 nova_compute[232433]: 2025-12-06 07:12:42.196 232437 DEBUG nova.objects.instance [None req-17dbf88e-0ded-4420-a9e7-f4b26ef4ec41 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Lazy-loading 'migration_context' on Instance uuid a4b6c016-d090-4c71-9a1d-1f0debbdf5b0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:12:42 np0005548731 nova_compute[232433]: 2025-12-06 07:12:42.210 232437 DEBUG nova.virt.libvirt.driver [None req-17dbf88e-0ded-4420-a9e7-f4b26ef4ec41 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] [instance: a4b6c016-d090-4c71-9a1d-1f0debbdf5b0] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Dec  6 02:12:42 np0005548731 nova_compute[232433]: 2025-12-06 07:12:42.211 232437 DEBUG nova.virt.libvirt.driver [None req-17dbf88e-0ded-4420-a9e7-f4b26ef4ec41 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] [instance: a4b6c016-d090-4c71-9a1d-1f0debbdf5b0] Ensure instance console log exists: /var/lib/nova/instances/a4b6c016-d090-4c71-9a1d-1f0debbdf5b0/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec  6 02:12:42 np0005548731 nova_compute[232433]: 2025-12-06 07:12:42.211 232437 DEBUG oslo_concurrency.lockutils [None req-17dbf88e-0ded-4420-a9e7-f4b26ef4ec41 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:12:42 np0005548731 nova_compute[232433]: 2025-12-06 07:12:42.212 232437 DEBUG oslo_concurrency.lockutils [None req-17dbf88e-0ded-4420-a9e7-f4b26ef4ec41 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:12:42 np0005548731 nova_compute[232433]: 2025-12-06 07:12:42.212 232437 DEBUG oslo_concurrency.lockutils [None req-17dbf88e-0ded-4420-a9e7-f4b26ef4ec41 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:12:42 np0005548731 nova_compute[232433]: 2025-12-06 07:12:42.244 232437 DEBUG nova.network.neutron [None req-17dbf88e-0ded-4420-a9e7-f4b26ef4ec41 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] [instance: a4b6c016-d090-4c71-9a1d-1f0debbdf5b0] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec  6 02:12:42 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:12:42 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:12:42 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:12:42.262 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:12:42 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:12:42 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:12:42 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:12:42.513 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:12:43 np0005548731 nova_compute[232433]: 2025-12-06 07:12:43.784 232437 DEBUG nova.network.neutron [None req-17dbf88e-0ded-4420-a9e7-f4b26ef4ec41 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] [instance: a4b6c016-d090-4c71-9a1d-1f0debbdf5b0] Updating instance_info_cache with network_info: [{"id": "4fe57424-f705-4ca6-8802-348f30338843", "address": "fa:16:3e:6b:7a:68", "network": {"id": "2b0835d7-87e4-46cc-8a94-e4e042bd4bad", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1132836552-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "af7365adc05f4624a08a71cd5a77ada6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4fe57424-f7", "ovs_interfaceid": "4fe57424-f705-4ca6-8802-348f30338843", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 02:12:43 np0005548731 nova_compute[232433]: 2025-12-06 07:12:43.806 232437 DEBUG oslo_concurrency.lockutils [None req-17dbf88e-0ded-4420-a9e7-f4b26ef4ec41 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Releasing lock "refresh_cache-a4b6c016-d090-4c71-9a1d-1f0debbdf5b0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 02:12:43 np0005548731 nova_compute[232433]: 2025-12-06 07:12:43.807 232437 DEBUG nova.compute.manager [None req-17dbf88e-0ded-4420-a9e7-f4b26ef4ec41 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] [instance: a4b6c016-d090-4c71-9a1d-1f0debbdf5b0] Instance network_info: |[{"id": "4fe57424-f705-4ca6-8802-348f30338843", "address": "fa:16:3e:6b:7a:68", "network": {"id": "2b0835d7-87e4-46cc-8a94-e4e042bd4bad", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1132836552-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "af7365adc05f4624a08a71cd5a77ada6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4fe57424-f7", "ovs_interfaceid": "4fe57424-f705-4ca6-8802-348f30338843", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Dec  6 02:12:43 np0005548731 nova_compute[232433]: 2025-12-06 07:12:43.808 232437 DEBUG oslo_concurrency.lockutils [req-e940571f-2a69-4c98-b501-9f2e1cd0cb46 req-b40b02f8-3a9e-4bb3-8952-6c7d73d06c28 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-a4b6c016-d090-4c71-9a1d-1f0debbdf5b0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 02:12:43 np0005548731 nova_compute[232433]: 2025-12-06 07:12:43.808 232437 DEBUG nova.network.neutron [req-e940571f-2a69-4c98-b501-9f2e1cd0cb46 req-b40b02f8-3a9e-4bb3-8952-6c7d73d06c28 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: a4b6c016-d090-4c71-9a1d-1f0debbdf5b0] Refreshing network info cache for port 4fe57424-f705-4ca6-8802-348f30338843 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  6 02:12:43 np0005548731 nova_compute[232433]: 2025-12-06 07:12:43.811 232437 DEBUG nova.virt.libvirt.driver [None req-17dbf88e-0ded-4420-a9e7-f4b26ef4ec41 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] [instance: a4b6c016-d090-4c71-9a1d-1f0debbdf5b0] Start _get_guest_xml network_info=[{"id": "4fe57424-f705-4ca6-8802-348f30338843", "address": "fa:16:3e:6b:7a:68", "network": {"id": "2b0835d7-87e4-46cc-8a94-e4e042bd4bad", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1132836552-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "af7365adc05f4624a08a71cd5a77ada6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4fe57424-f7", "ovs_interfaceid": "4fe57424-f705-4ca6-8802-348f30338843", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'device_type': 'disk', 'encryption_secret_uuid': None, 'disk_bus': 'virtio', 'boot_index': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'encryption_options': None, 'size': 0, 'encrypted': False, 'image_id': '6efab05d-c7cf-4770-a5c3-c806a2739063'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec  6 02:12:43 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e231 e231: 3 total, 3 up, 3 in
Dec  6 02:12:43 np0005548731 nova_compute[232433]: 2025-12-06 07:12:43.821 232437 WARNING nova.virt.libvirt.driver [None req-17dbf88e-0ded-4420-a9e7-f4b26ef4ec41 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  6 02:12:43 np0005548731 nova_compute[232433]: 2025-12-06 07:12:43.830 232437 DEBUG nova.virt.libvirt.host [None req-17dbf88e-0ded-4420-a9e7-f4b26ef4ec41 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec  6 02:12:43 np0005548731 nova_compute[232433]: 2025-12-06 07:12:43.831 232437 DEBUG nova.virt.libvirt.host [None req-17dbf88e-0ded-4420-a9e7-f4b26ef4ec41 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec  6 02:12:43 np0005548731 nova_compute[232433]: 2025-12-06 07:12:43.840 232437 DEBUG nova.virt.libvirt.host [None req-17dbf88e-0ded-4420-a9e7-f4b26ef4ec41 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec  6 02:12:43 np0005548731 nova_compute[232433]: 2025-12-06 07:12:43.841 232437 DEBUG nova.virt.libvirt.host [None req-17dbf88e-0ded-4420-a9e7-f4b26ef4ec41 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec  6 02:12:43 np0005548731 nova_compute[232433]: 2025-12-06 07:12:43.843 232437 DEBUG nova.virt.libvirt.driver [None req-17dbf88e-0ded-4420-a9e7-f4b26ef4ec41 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec  6 02:12:43 np0005548731 nova_compute[232433]: 2025-12-06 07:12:43.844 232437 DEBUG nova.virt.hardware [None req-17dbf88e-0ded-4420-a9e7-f4b26ef4ec41 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-06T06:56:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='25848a18-11d9-4f11-80b5-5d005675c76d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec  6 02:12:43 np0005548731 nova_compute[232433]: 2025-12-06 07:12:43.845 232437 DEBUG nova.virt.hardware [None req-17dbf88e-0ded-4420-a9e7-f4b26ef4ec41 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec  6 02:12:43 np0005548731 nova_compute[232433]: 2025-12-06 07:12:43.845 232437 DEBUG nova.virt.hardware [None req-17dbf88e-0ded-4420-a9e7-f4b26ef4ec41 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec  6 02:12:43 np0005548731 nova_compute[232433]: 2025-12-06 07:12:43.846 232437 DEBUG nova.virt.hardware [None req-17dbf88e-0ded-4420-a9e7-f4b26ef4ec41 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec  6 02:12:43 np0005548731 nova_compute[232433]: 2025-12-06 07:12:43.846 232437 DEBUG nova.virt.hardware [None req-17dbf88e-0ded-4420-a9e7-f4b26ef4ec41 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec  6 02:12:43 np0005548731 nova_compute[232433]: 2025-12-06 07:12:43.846 232437 DEBUG nova.virt.hardware [None req-17dbf88e-0ded-4420-a9e7-f4b26ef4ec41 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec  6 02:12:43 np0005548731 nova_compute[232433]: 2025-12-06 07:12:43.847 232437 DEBUG nova.virt.hardware [None req-17dbf88e-0ded-4420-a9e7-f4b26ef4ec41 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec  6 02:12:43 np0005548731 nova_compute[232433]: 2025-12-06 07:12:43.847 232437 DEBUG nova.virt.hardware [None req-17dbf88e-0ded-4420-a9e7-f4b26ef4ec41 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec  6 02:12:43 np0005548731 nova_compute[232433]: 2025-12-06 07:12:43.847 232437 DEBUG nova.virt.hardware [None req-17dbf88e-0ded-4420-a9e7-f4b26ef4ec41 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec  6 02:12:43 np0005548731 nova_compute[232433]: 2025-12-06 07:12:43.848 232437 DEBUG nova.virt.hardware [None req-17dbf88e-0ded-4420-a9e7-f4b26ef4ec41 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec  6 02:12:43 np0005548731 nova_compute[232433]: 2025-12-06 07:12:43.848 232437 DEBUG nova.virt.hardware [None req-17dbf88e-0ded-4420-a9e7-f4b26ef4ec41 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec  6 02:12:43 np0005548731 nova_compute[232433]: 2025-12-06 07:12:43.855 232437 DEBUG oslo_concurrency.processutils [None req-17dbf88e-0ded-4420-a9e7-f4b26ef4ec41 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:12:44 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:12:44 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:12:44 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:12:44.263 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:12:44 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  6 02:12:44 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/478934728' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  6 02:12:44 np0005548731 nova_compute[232433]: 2025-12-06 07:12:44.321 232437 DEBUG oslo_concurrency.processutils [None req-17dbf88e-0ded-4420-a9e7-f4b26ef4ec41 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.466s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:12:44 np0005548731 nova_compute[232433]: 2025-12-06 07:12:44.347 232437 DEBUG nova.storage.rbd_utils [None req-17dbf88e-0ded-4420-a9e7-f4b26ef4ec41 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] rbd image a4b6c016-d090-4c71-9a1d-1f0debbdf5b0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:12:44 np0005548731 nova_compute[232433]: 2025-12-06 07:12:44.352 232437 DEBUG oslo_concurrency.processutils [None req-17dbf88e-0ded-4420-a9e7-f4b26ef4ec41 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:12:44 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:12:44 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.002000048s ======
Dec  6 02:12:44 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:12:44.515 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000048s
Dec  6 02:12:44 np0005548731 nova_compute[232433]: 2025-12-06 07:12:44.552 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:12:44 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  6 02:12:44 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1382731662' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  6 02:12:44 np0005548731 nova_compute[232433]: 2025-12-06 07:12:44.819 232437 DEBUG oslo_concurrency.processutils [None req-17dbf88e-0ded-4420-a9e7-f4b26ef4ec41 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.467s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:12:44 np0005548731 nova_compute[232433]: 2025-12-06 07:12:44.820 232437 DEBUG nova.virt.libvirt.vif [None req-17dbf88e-0ded-4420-a9e7-f4b26ef4ec41 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T07:12:37Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-388885895',display_name='tempest-ImagesTestJSON-server-388885895',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-imagestestjson-server-388885895',id=57,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='af7365adc05f4624a08a71cd5a77ada6',ramdisk_id='',reservation_id='r-u6vou0lx',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesTestJSON-134159412',owner_user_name='tempest-ImagesTestJSON-134159412-project-member'},tags=TagL
ist,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:12:39Z,user_data=None,user_id='bdd7994b0ebb4035a373b6560aa7dbcf',uuid=a4b6c016-d090-4c71-9a1d-1f0debbdf5b0,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "4fe57424-f705-4ca6-8802-348f30338843", "address": "fa:16:3e:6b:7a:68", "network": {"id": "2b0835d7-87e4-46cc-8a94-e4e042bd4bad", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1132836552-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "af7365adc05f4624a08a71cd5a77ada6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4fe57424-f7", "ovs_interfaceid": "4fe57424-f705-4ca6-8802-348f30338843", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec  6 02:12:44 np0005548731 nova_compute[232433]: 2025-12-06 07:12:44.821 232437 DEBUG nova.network.os_vif_util [None req-17dbf88e-0ded-4420-a9e7-f4b26ef4ec41 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Converting VIF {"id": "4fe57424-f705-4ca6-8802-348f30338843", "address": "fa:16:3e:6b:7a:68", "network": {"id": "2b0835d7-87e4-46cc-8a94-e4e042bd4bad", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1132836552-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "af7365adc05f4624a08a71cd5a77ada6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4fe57424-f7", "ovs_interfaceid": "4fe57424-f705-4ca6-8802-348f30338843", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 02:12:44 np0005548731 nova_compute[232433]: 2025-12-06 07:12:44.821 232437 DEBUG nova.network.os_vif_util [None req-17dbf88e-0ded-4420-a9e7-f4b26ef4ec41 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6b:7a:68,bridge_name='br-int',has_traffic_filtering=True,id=4fe57424-f705-4ca6-8802-348f30338843,network=Network(2b0835d7-87e4-46cc-8a94-e4e042bd4bad),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4fe57424-f7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 02:12:44 np0005548731 nova_compute[232433]: 2025-12-06 07:12:44.822 232437 DEBUG nova.objects.instance [None req-17dbf88e-0ded-4420-a9e7-f4b26ef4ec41 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Lazy-loading 'pci_devices' on Instance uuid a4b6c016-d090-4c71-9a1d-1f0debbdf5b0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:12:44 np0005548731 nova_compute[232433]: 2025-12-06 07:12:44.840 232437 DEBUG nova.virt.libvirt.driver [None req-17dbf88e-0ded-4420-a9e7-f4b26ef4ec41 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] [instance: a4b6c016-d090-4c71-9a1d-1f0debbdf5b0] End _get_guest_xml xml=<domain type="kvm">
Dec  6 02:12:44 np0005548731 nova_compute[232433]:  <uuid>a4b6c016-d090-4c71-9a1d-1f0debbdf5b0</uuid>
Dec  6 02:12:44 np0005548731 nova_compute[232433]:  <name>instance-00000039</name>
Dec  6 02:12:44 np0005548731 nova_compute[232433]:  <memory>131072</memory>
Dec  6 02:12:44 np0005548731 nova_compute[232433]:  <vcpu>1</vcpu>
Dec  6 02:12:44 np0005548731 nova_compute[232433]:  <metadata>
Dec  6 02:12:44 np0005548731 nova_compute[232433]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec  6 02:12:44 np0005548731 nova_compute[232433]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec  6 02:12:44 np0005548731 nova_compute[232433]:      <nova:name>tempest-ImagesTestJSON-server-388885895</nova:name>
Dec  6 02:12:44 np0005548731 nova_compute[232433]:      <nova:creationTime>2025-12-06 07:12:43</nova:creationTime>
Dec  6 02:12:44 np0005548731 nova_compute[232433]:      <nova:flavor name="m1.nano">
Dec  6 02:12:44 np0005548731 nova_compute[232433]:        <nova:memory>128</nova:memory>
Dec  6 02:12:44 np0005548731 nova_compute[232433]:        <nova:disk>1</nova:disk>
Dec  6 02:12:44 np0005548731 nova_compute[232433]:        <nova:swap>0</nova:swap>
Dec  6 02:12:44 np0005548731 nova_compute[232433]:        <nova:ephemeral>0</nova:ephemeral>
Dec  6 02:12:44 np0005548731 nova_compute[232433]:        <nova:vcpus>1</nova:vcpus>
Dec  6 02:12:44 np0005548731 nova_compute[232433]:      </nova:flavor>
Dec  6 02:12:44 np0005548731 nova_compute[232433]:      <nova:owner>
Dec  6 02:12:44 np0005548731 nova_compute[232433]:        <nova:user uuid="bdd7994b0ebb4035a373b6560aa7dbcf">tempest-ImagesTestJSON-134159412-project-member</nova:user>
Dec  6 02:12:44 np0005548731 nova_compute[232433]:        <nova:project uuid="af7365adc05f4624a08a71cd5a77ada6">tempest-ImagesTestJSON-134159412</nova:project>
Dec  6 02:12:44 np0005548731 nova_compute[232433]:      </nova:owner>
Dec  6 02:12:44 np0005548731 nova_compute[232433]:      <nova:root type="image" uuid="6efab05d-c7cf-4770-a5c3-c806a2739063"/>
Dec  6 02:12:44 np0005548731 nova_compute[232433]:      <nova:ports>
Dec  6 02:12:44 np0005548731 nova_compute[232433]:        <nova:port uuid="4fe57424-f705-4ca6-8802-348f30338843">
Dec  6 02:12:44 np0005548731 nova_compute[232433]:          <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Dec  6 02:12:44 np0005548731 nova_compute[232433]:        </nova:port>
Dec  6 02:12:44 np0005548731 nova_compute[232433]:      </nova:ports>
Dec  6 02:12:44 np0005548731 nova_compute[232433]:    </nova:instance>
Dec  6 02:12:44 np0005548731 nova_compute[232433]:  </metadata>
Dec  6 02:12:44 np0005548731 nova_compute[232433]:  <sysinfo type="smbios">
Dec  6 02:12:44 np0005548731 nova_compute[232433]:    <system>
Dec  6 02:12:44 np0005548731 nova_compute[232433]:      <entry name="manufacturer">RDO</entry>
Dec  6 02:12:44 np0005548731 nova_compute[232433]:      <entry name="product">OpenStack Compute</entry>
Dec  6 02:12:44 np0005548731 nova_compute[232433]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec  6 02:12:44 np0005548731 nova_compute[232433]:      <entry name="serial">a4b6c016-d090-4c71-9a1d-1f0debbdf5b0</entry>
Dec  6 02:12:44 np0005548731 nova_compute[232433]:      <entry name="uuid">a4b6c016-d090-4c71-9a1d-1f0debbdf5b0</entry>
Dec  6 02:12:44 np0005548731 nova_compute[232433]:      <entry name="family">Virtual Machine</entry>
Dec  6 02:12:44 np0005548731 nova_compute[232433]:    </system>
Dec  6 02:12:44 np0005548731 nova_compute[232433]:  </sysinfo>
Dec  6 02:12:44 np0005548731 nova_compute[232433]:  <os>
Dec  6 02:12:44 np0005548731 nova_compute[232433]:    <type arch="x86_64" machine="q35">hvm</type>
Dec  6 02:12:44 np0005548731 nova_compute[232433]:    <boot dev="hd"/>
Dec  6 02:12:44 np0005548731 nova_compute[232433]:    <smbios mode="sysinfo"/>
Dec  6 02:12:44 np0005548731 nova_compute[232433]:  </os>
Dec  6 02:12:44 np0005548731 nova_compute[232433]:  <features>
Dec  6 02:12:44 np0005548731 nova_compute[232433]:    <acpi/>
Dec  6 02:12:44 np0005548731 nova_compute[232433]:    <apic/>
Dec  6 02:12:44 np0005548731 nova_compute[232433]:    <vmcoreinfo/>
Dec  6 02:12:44 np0005548731 nova_compute[232433]:  </features>
Dec  6 02:12:44 np0005548731 nova_compute[232433]:  <clock offset="utc">
Dec  6 02:12:44 np0005548731 nova_compute[232433]:    <timer name="pit" tickpolicy="delay"/>
Dec  6 02:12:44 np0005548731 nova_compute[232433]:    <timer name="rtc" tickpolicy="catchup"/>
Dec  6 02:12:44 np0005548731 nova_compute[232433]:    <timer name="hpet" present="no"/>
Dec  6 02:12:44 np0005548731 nova_compute[232433]:  </clock>
Dec  6 02:12:44 np0005548731 nova_compute[232433]:  <cpu mode="custom" match="exact">
Dec  6 02:12:44 np0005548731 nova_compute[232433]:    <model>Nehalem</model>
Dec  6 02:12:44 np0005548731 nova_compute[232433]:    <topology sockets="1" cores="1" threads="1"/>
Dec  6 02:12:44 np0005548731 nova_compute[232433]:  </cpu>
Dec  6 02:12:44 np0005548731 nova_compute[232433]:  <devices>
Dec  6 02:12:44 np0005548731 nova_compute[232433]:    <disk type="network" device="disk">
Dec  6 02:12:44 np0005548731 nova_compute[232433]:      <driver type="raw" cache="none"/>
Dec  6 02:12:44 np0005548731 nova_compute[232433]:      <source protocol="rbd" name="vms/a4b6c016-d090-4c71-9a1d-1f0debbdf5b0_disk">
Dec  6 02:12:44 np0005548731 nova_compute[232433]:        <host name="192.168.122.100" port="6789"/>
Dec  6 02:12:44 np0005548731 nova_compute[232433]:        <host name="192.168.122.102" port="6789"/>
Dec  6 02:12:44 np0005548731 nova_compute[232433]:        <host name="192.168.122.101" port="6789"/>
Dec  6 02:12:44 np0005548731 nova_compute[232433]:      </source>
Dec  6 02:12:44 np0005548731 nova_compute[232433]:      <auth username="openstack">
Dec  6 02:12:44 np0005548731 nova_compute[232433]:        <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec  6 02:12:44 np0005548731 nova_compute[232433]:      </auth>
Dec  6 02:12:44 np0005548731 nova_compute[232433]:      <target dev="vda" bus="virtio"/>
Dec  6 02:12:44 np0005548731 nova_compute[232433]:    </disk>
Dec  6 02:12:44 np0005548731 nova_compute[232433]:    <disk type="network" device="cdrom">
Dec  6 02:12:44 np0005548731 nova_compute[232433]:      <driver type="raw" cache="none"/>
Dec  6 02:12:44 np0005548731 nova_compute[232433]:      <source protocol="rbd" name="vms/a4b6c016-d090-4c71-9a1d-1f0debbdf5b0_disk.config">
Dec  6 02:12:44 np0005548731 nova_compute[232433]:        <host name="192.168.122.100" port="6789"/>
Dec  6 02:12:44 np0005548731 nova_compute[232433]:        <host name="192.168.122.102" port="6789"/>
Dec  6 02:12:44 np0005548731 nova_compute[232433]:        <host name="192.168.122.101" port="6789"/>
Dec  6 02:12:44 np0005548731 nova_compute[232433]:      </source>
Dec  6 02:12:44 np0005548731 nova_compute[232433]:      <auth username="openstack">
Dec  6 02:12:44 np0005548731 nova_compute[232433]:        <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec  6 02:12:44 np0005548731 nova_compute[232433]:      </auth>
Dec  6 02:12:44 np0005548731 nova_compute[232433]:      <target dev="sda" bus="sata"/>
Dec  6 02:12:44 np0005548731 nova_compute[232433]:    </disk>
Dec  6 02:12:44 np0005548731 nova_compute[232433]:    <interface type="ethernet">
Dec  6 02:12:44 np0005548731 nova_compute[232433]:      <mac address="fa:16:3e:6b:7a:68"/>
Dec  6 02:12:44 np0005548731 nova_compute[232433]:      <model type="virtio"/>
Dec  6 02:12:44 np0005548731 nova_compute[232433]:      <driver name="vhost" rx_queue_size="512"/>
Dec  6 02:12:44 np0005548731 nova_compute[232433]:      <mtu size="1442"/>
Dec  6 02:12:44 np0005548731 nova_compute[232433]:      <target dev="tap4fe57424-f7"/>
Dec  6 02:12:44 np0005548731 nova_compute[232433]:    </interface>
Dec  6 02:12:44 np0005548731 nova_compute[232433]:    <serial type="pty">
Dec  6 02:12:44 np0005548731 nova_compute[232433]:      <log file="/var/lib/nova/instances/a4b6c016-d090-4c71-9a1d-1f0debbdf5b0/console.log" append="off"/>
Dec  6 02:12:44 np0005548731 nova_compute[232433]:    </serial>
Dec  6 02:12:44 np0005548731 nova_compute[232433]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec  6 02:12:44 np0005548731 nova_compute[232433]:    <video>
Dec  6 02:12:44 np0005548731 nova_compute[232433]:      <model type="virtio"/>
Dec  6 02:12:44 np0005548731 nova_compute[232433]:    </video>
Dec  6 02:12:44 np0005548731 nova_compute[232433]:    <input type="tablet" bus="usb"/>
Dec  6 02:12:44 np0005548731 nova_compute[232433]:    <rng model="virtio">
Dec  6 02:12:44 np0005548731 nova_compute[232433]:      <backend model="random">/dev/urandom</backend>
Dec  6 02:12:44 np0005548731 nova_compute[232433]:    </rng>
Dec  6 02:12:44 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root"/>
Dec  6 02:12:44 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:12:44 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:12:44 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:12:44 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:12:44 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:12:44 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:12:44 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:12:44 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:12:44 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:12:44 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:12:44 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:12:44 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:12:44 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:12:44 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:12:44 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:12:44 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:12:44 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:12:44 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:12:44 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:12:44 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:12:44 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:12:44 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:12:44 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:12:44 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:12:44 np0005548731 nova_compute[232433]:    <controller type="usb" index="0"/>
Dec  6 02:12:44 np0005548731 nova_compute[232433]:    <memballoon model="virtio">
Dec  6 02:12:44 np0005548731 nova_compute[232433]:      <stats period="10"/>
Dec  6 02:12:44 np0005548731 nova_compute[232433]:    </memballoon>
Dec  6 02:12:44 np0005548731 nova_compute[232433]:  </devices>
Dec  6 02:12:44 np0005548731 nova_compute[232433]: </domain>
Dec  6 02:12:44 np0005548731 nova_compute[232433]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec  6 02:12:44 np0005548731 nova_compute[232433]: 2025-12-06 07:12:44.841 232437 DEBUG nova.compute.manager [None req-17dbf88e-0ded-4420-a9e7-f4b26ef4ec41 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] [instance: a4b6c016-d090-4c71-9a1d-1f0debbdf5b0] Preparing to wait for external event network-vif-plugged-4fe57424-f705-4ca6-8802-348f30338843 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec  6 02:12:44 np0005548731 nova_compute[232433]: 2025-12-06 07:12:44.842 232437 DEBUG oslo_concurrency.lockutils [None req-17dbf88e-0ded-4420-a9e7-f4b26ef4ec41 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Acquiring lock "a4b6c016-d090-4c71-9a1d-1f0debbdf5b0-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:12:44 np0005548731 nova_compute[232433]: 2025-12-06 07:12:44.842 232437 DEBUG oslo_concurrency.lockutils [None req-17dbf88e-0ded-4420-a9e7-f4b26ef4ec41 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Lock "a4b6c016-d090-4c71-9a1d-1f0debbdf5b0-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:12:44 np0005548731 nova_compute[232433]: 2025-12-06 07:12:44.843 232437 DEBUG oslo_concurrency.lockutils [None req-17dbf88e-0ded-4420-a9e7-f4b26ef4ec41 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Lock "a4b6c016-d090-4c71-9a1d-1f0debbdf5b0-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:12:44 np0005548731 nova_compute[232433]: 2025-12-06 07:12:44.843 232437 DEBUG nova.virt.libvirt.vif [None req-17dbf88e-0ded-4420-a9e7-f4b26ef4ec41 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T07:12:37Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-388885895',display_name='tempest-ImagesTestJSON-server-388885895',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-imagestestjson-server-388885895',id=57,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='af7365adc05f4624a08a71cd5a77ada6',ramdisk_id='',reservation_id='r-u6vou0lx',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesTestJSON-134159412',owner_user_name='tempest-ImagesTestJSON-134159412-project-member'}
,tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:12:39Z,user_data=None,user_id='bdd7994b0ebb4035a373b6560aa7dbcf',uuid=a4b6c016-d090-4c71-9a1d-1f0debbdf5b0,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "4fe57424-f705-4ca6-8802-348f30338843", "address": "fa:16:3e:6b:7a:68", "network": {"id": "2b0835d7-87e4-46cc-8a94-e4e042bd4bad", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1132836552-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "af7365adc05f4624a08a71cd5a77ada6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4fe57424-f7", "ovs_interfaceid": "4fe57424-f705-4ca6-8802-348f30338843", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec  6 02:12:44 np0005548731 nova_compute[232433]: 2025-12-06 07:12:44.844 232437 DEBUG nova.network.os_vif_util [None req-17dbf88e-0ded-4420-a9e7-f4b26ef4ec41 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Converting VIF {"id": "4fe57424-f705-4ca6-8802-348f30338843", "address": "fa:16:3e:6b:7a:68", "network": {"id": "2b0835d7-87e4-46cc-8a94-e4e042bd4bad", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1132836552-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "af7365adc05f4624a08a71cd5a77ada6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4fe57424-f7", "ovs_interfaceid": "4fe57424-f705-4ca6-8802-348f30338843", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 02:12:44 np0005548731 nova_compute[232433]: 2025-12-06 07:12:44.845 232437 DEBUG nova.network.os_vif_util [None req-17dbf88e-0ded-4420-a9e7-f4b26ef4ec41 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6b:7a:68,bridge_name='br-int',has_traffic_filtering=True,id=4fe57424-f705-4ca6-8802-348f30338843,network=Network(2b0835d7-87e4-46cc-8a94-e4e042bd4bad),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4fe57424-f7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 02:12:44 np0005548731 nova_compute[232433]: 2025-12-06 07:12:44.845 232437 DEBUG os_vif [None req-17dbf88e-0ded-4420-a9e7-f4b26ef4ec41 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:6b:7a:68,bridge_name='br-int',has_traffic_filtering=True,id=4fe57424-f705-4ca6-8802-348f30338843,network=Network(2b0835d7-87e4-46cc-8a94-e4e042bd4bad),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4fe57424-f7') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec  6 02:12:44 np0005548731 nova_compute[232433]: 2025-12-06 07:12:44.845 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:12:44 np0005548731 nova_compute[232433]: 2025-12-06 07:12:44.846 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:12:44 np0005548731 nova_compute[232433]: 2025-12-06 07:12:44.846 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  6 02:12:44 np0005548731 nova_compute[232433]: 2025-12-06 07:12:44.849 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:12:44 np0005548731 nova_compute[232433]: 2025-12-06 07:12:44.849 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4fe57424-f7, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:12:44 np0005548731 nova_compute[232433]: 2025-12-06 07:12:44.850 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap4fe57424-f7, col_values=(('external_ids', {'iface-id': '4fe57424-f705-4ca6-8802-348f30338843', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:6b:7a:68', 'vm-uuid': 'a4b6c016-d090-4c71-9a1d-1f0debbdf5b0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:12:44 np0005548731 NetworkManager[49182]: <info>  [1765005164.8518] manager: (tap4fe57424-f7): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/103)
Dec  6 02:12:44 np0005548731 nova_compute[232433]: 2025-12-06 07:12:44.853 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  6 02:12:44 np0005548731 nova_compute[232433]: 2025-12-06 07:12:44.857 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:12:44 np0005548731 nova_compute[232433]: 2025-12-06 07:12:44.858 232437 INFO os_vif [None req-17dbf88e-0ded-4420-a9e7-f4b26ef4ec41 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:6b:7a:68,bridge_name='br-int',has_traffic_filtering=True,id=4fe57424-f705-4ca6-8802-348f30338843,network=Network(2b0835d7-87e4-46cc-8a94-e4e042bd4bad),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4fe57424-f7')#033[00m
Dec  6 02:12:44 np0005548731 nova_compute[232433]: 2025-12-06 07:12:44.923 232437 DEBUG nova.virt.libvirt.driver [None req-17dbf88e-0ded-4420-a9e7-f4b26ef4ec41 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  6 02:12:44 np0005548731 nova_compute[232433]: 2025-12-06 07:12:44.925 232437 DEBUG nova.virt.libvirt.driver [None req-17dbf88e-0ded-4420-a9e7-f4b26ef4ec41 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  6 02:12:44 np0005548731 nova_compute[232433]: 2025-12-06 07:12:44.925 232437 DEBUG nova.virt.libvirt.driver [None req-17dbf88e-0ded-4420-a9e7-f4b26ef4ec41 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] No VIF found with MAC fa:16:3e:6b:7a:68, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec  6 02:12:44 np0005548731 nova_compute[232433]: 2025-12-06 07:12:44.925 232437 INFO nova.virt.libvirt.driver [None req-17dbf88e-0ded-4420-a9e7-f4b26ef4ec41 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] [instance: a4b6c016-d090-4c71-9a1d-1f0debbdf5b0] Using config drive#033[00m
Dec  6 02:12:44 np0005548731 nova_compute[232433]: 2025-12-06 07:12:44.944 232437 DEBUG nova.storage.rbd_utils [None req-17dbf88e-0ded-4420-a9e7-f4b26ef4ec41 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] rbd image a4b6c016-d090-4c71-9a1d-1f0debbdf5b0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:12:45 np0005548731 nova_compute[232433]: 2025-12-06 07:12:45.219 232437 DEBUG nova.network.neutron [req-e940571f-2a69-4c98-b501-9f2e1cd0cb46 req-b40b02f8-3a9e-4bb3-8952-6c7d73d06c28 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: a4b6c016-d090-4c71-9a1d-1f0debbdf5b0] Updated VIF entry in instance network info cache for port 4fe57424-f705-4ca6-8802-348f30338843. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  6 02:12:45 np0005548731 nova_compute[232433]: 2025-12-06 07:12:45.219 232437 DEBUG nova.network.neutron [req-e940571f-2a69-4c98-b501-9f2e1cd0cb46 req-b40b02f8-3a9e-4bb3-8952-6c7d73d06c28 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: a4b6c016-d090-4c71-9a1d-1f0debbdf5b0] Updating instance_info_cache with network_info: [{"id": "4fe57424-f705-4ca6-8802-348f30338843", "address": "fa:16:3e:6b:7a:68", "network": {"id": "2b0835d7-87e4-46cc-8a94-e4e042bd4bad", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1132836552-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "af7365adc05f4624a08a71cd5a77ada6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4fe57424-f7", "ovs_interfaceid": "4fe57424-f705-4ca6-8802-348f30338843", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 02:12:45 np0005548731 nova_compute[232433]: 2025-12-06 07:12:45.236 232437 DEBUG oslo_concurrency.lockutils [req-e940571f-2a69-4c98-b501-9f2e1cd0cb46 req-b40b02f8-3a9e-4bb3-8952-6c7d73d06c28 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-a4b6c016-d090-4c71-9a1d-1f0debbdf5b0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 02:12:45 np0005548731 nova_compute[232433]: 2025-12-06 07:12:45.360 232437 INFO nova.virt.libvirt.driver [None req-17dbf88e-0ded-4420-a9e7-f4b26ef4ec41 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] [instance: a4b6c016-d090-4c71-9a1d-1f0debbdf5b0] Creating config drive at /var/lib/nova/instances/a4b6c016-d090-4c71-9a1d-1f0debbdf5b0/disk.config#033[00m
Dec  6 02:12:45 np0005548731 nova_compute[232433]: 2025-12-06 07:12:45.366 232437 DEBUG oslo_concurrency.processutils [None req-17dbf88e-0ded-4420-a9e7-f4b26ef4ec41 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/a4b6c016-d090-4c71-9a1d-1f0debbdf5b0/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpcm8fbwr8 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:12:45 np0005548731 nova_compute[232433]: 2025-12-06 07:12:45.499 232437 DEBUG oslo_concurrency.processutils [None req-17dbf88e-0ded-4420-a9e7-f4b26ef4ec41 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/a4b6c016-d090-4c71-9a1d-1f0debbdf5b0/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpcm8fbwr8" returned: 0 in 0.133s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:12:45 np0005548731 nova_compute[232433]: 2025-12-06 07:12:45.526 232437 DEBUG nova.storage.rbd_utils [None req-17dbf88e-0ded-4420-a9e7-f4b26ef4ec41 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] rbd image a4b6c016-d090-4c71-9a1d-1f0debbdf5b0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:12:45 np0005548731 nova_compute[232433]: 2025-12-06 07:12:45.531 232437 DEBUG oslo_concurrency.processutils [None req-17dbf88e-0ded-4420-a9e7-f4b26ef4ec41 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/a4b6c016-d090-4c71-9a1d-1f0debbdf5b0/disk.config a4b6c016-d090-4c71-9a1d-1f0debbdf5b0_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:12:45 np0005548731 nova_compute[232433]: 2025-12-06 07:12:45.691 232437 DEBUG oslo_concurrency.processutils [None req-17dbf88e-0ded-4420-a9e7-f4b26ef4ec41 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/a4b6c016-d090-4c71-9a1d-1f0debbdf5b0/disk.config a4b6c016-d090-4c71-9a1d-1f0debbdf5b0_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.160s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:12:45 np0005548731 nova_compute[232433]: 2025-12-06 07:12:45.694 232437 INFO nova.virt.libvirt.driver [None req-17dbf88e-0ded-4420-a9e7-f4b26ef4ec41 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] [instance: a4b6c016-d090-4c71-9a1d-1f0debbdf5b0] Deleting local config drive /var/lib/nova/instances/a4b6c016-d090-4c71-9a1d-1f0debbdf5b0/disk.config because it was imported into RBD.#033[00m
Dec  6 02:12:45 np0005548731 kernel: tap4fe57424-f7: entered promiscuous mode
Dec  6 02:12:45 np0005548731 NetworkManager[49182]: <info>  [1765005165.7617] manager: (tap4fe57424-f7): new Tun device (/org/freedesktop/NetworkManager/Devices/104)
Dec  6 02:12:45 np0005548731 ovn_controller[133927]: 2025-12-06T07:12:45Z|00175|binding|INFO|Claiming lport 4fe57424-f705-4ca6-8802-348f30338843 for this chassis.
Dec  6 02:12:45 np0005548731 ovn_controller[133927]: 2025-12-06T07:12:45Z|00176|binding|INFO|4fe57424-f705-4ca6-8802-348f30338843: Claiming fa:16:3e:6b:7a:68 10.100.0.4
Dec  6 02:12:45 np0005548731 nova_compute[232433]: 2025-12-06 07:12:45.762 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:12:45 np0005548731 nova_compute[232433]: 2025-12-06 07:12:45.766 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:12:45 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:12:45.778 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6b:7a:68 10.100.0.4'], port_security=['fa:16:3e:6b:7a:68 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'a4b6c016-d090-4c71-9a1d-1f0debbdf5b0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2b0835d7-87e4-46cc-8a94-e4e042bd4bad', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'af7365adc05f4624a08a71cd5a77ada6', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'b536f2c5-b22f-47bf-a47f-57e098f673a0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a7e40662-9f9d-450b-8c39-94d50ba422c6, chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], logical_port=4fe57424-f705-4ca6-8802-348f30338843) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 02:12:45 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:12:45.779 143965 INFO neutron.agent.ovn.metadata.agent [-] Port 4fe57424-f705-4ca6-8802-348f30338843 in datapath 2b0835d7-87e4-46cc-8a94-e4e042bd4bad bound to our chassis#033[00m
Dec  6 02:12:45 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:12:45.781 143965 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 2b0835d7-87e4-46cc-8a94-e4e042bd4bad#033[00m
Dec  6 02:12:45 np0005548731 systemd-udevd[257656]: Network interface NamePolicy= disabled on kernel command line.
Dec  6 02:12:45 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:12:45.796 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[e203e315-6163-43b2-aed0-a324655d1d42]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:12:45 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:12:45.798 143965 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap2b0835d7-81 in ovnmeta-2b0835d7-87e4-46cc-8a94-e4e042bd4bad namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Dec  6 02:12:45 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:12:45.800 236668 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap2b0835d7-80 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Dec  6 02:12:45 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:12:45.800 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[bd9b0e23-156e-410f-8210-ff2c7b664442]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:12:45 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:12:45.802 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[65392963-e0c4-4cbb-a096-2151e3d3641d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:12:45 np0005548731 systemd-machined[195355]: New machine qemu-25-instance-00000039.
Dec  6 02:12:45 np0005548731 NetworkManager[49182]: <info>  [1765005165.8097] device (tap4fe57424-f7): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec  6 02:12:45 np0005548731 NetworkManager[49182]: <info>  [1765005165.8104] device (tap4fe57424-f7): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec  6 02:12:45 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:12:45.814 144079 DEBUG oslo.privsep.daemon [-] privsep: reply[f35491cc-9df0-4532-9ebd-a8dd891d4ca6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:12:45 np0005548731 systemd[1]: Started Virtual Machine qemu-25-instance-00000039.
Dec  6 02:12:45 np0005548731 nova_compute[232433]: 2025-12-06 07:12:45.837 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:12:45 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:12:45.842 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[afcb9168-a193-4160-85dd-cd9804258257]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:12:45 np0005548731 ovn_controller[133927]: 2025-12-06T07:12:45Z|00177|binding|INFO|Setting lport 4fe57424-f705-4ca6-8802-348f30338843 ovn-installed in OVS
Dec  6 02:12:45 np0005548731 ovn_controller[133927]: 2025-12-06T07:12:45Z|00178|binding|INFO|Setting lport 4fe57424-f705-4ca6-8802-348f30338843 up in Southbound
Dec  6 02:12:45 np0005548731 nova_compute[232433]: 2025-12-06 07:12:45.845 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:12:45 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  6 02:12:45 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 02:12:45 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  6 02:12:45 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:12:45.877 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[90310f11-15d7-4515-ab9b-3f0b0faa7cc7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:12:45 np0005548731 NetworkManager[49182]: <info>  [1765005165.8855] manager: (tap2b0835d7-80): new Veth device (/org/freedesktop/NetworkManager/Devices/105)
Dec  6 02:12:45 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:12:45.885 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[280ea605-446f-442b-9dae-3a095a47109b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:12:45 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:12:45.923 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[31202a88-7999-4deb-8d90-d2f0fad8182c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:12:45 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:12:45.927 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[6e3239fd-0313-47c6-96a7-9abb0c93f045]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:12:45 np0005548731 NetworkManager[49182]: <info>  [1765005165.9540] device (tap2b0835d7-80): carrier: link connected
Dec  6 02:12:45 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:12:45.960 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[40b187ae-5a52-44bd-b846-726367f3a648]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:12:45 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:12:45.977 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[ef40f37c-1b4b-48c1-a551-163513f145af]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2b0835d7-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9e:4e:19'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 61], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 542796, 'reachable_time': 33464, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 257689, 'error': None, 'target': 'ovnmeta-2b0835d7-87e4-46cc-8a94-e4e042bd4bad', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:12:45 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:12:45.992 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[81eaa3f7-988a-4ec4-8c7a-4f4790d04aa7]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe9e:4e19'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 542796, 'tstamp': 542796}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 257690, 'error': None, 'target': 'ovnmeta-2b0835d7-87e4-46cc-8a94-e4e042bd4bad', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:12:46 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:12:46.008 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[e700691c-7941-4526-aaa0-a6d94884e31e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2b0835d7-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9e:4e:19'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 61], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 542796, 'reachable_time': 33464, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 257691, 'error': None, 'target': 'ovnmeta-2b0835d7-87e4-46cc-8a94-e4e042bd4bad', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:12:46 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:12:46.044 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[0b4eeb8c-30e5-4a2f-a119-8e51f6c42385]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:12:46 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e231 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:12:46 np0005548731 nova_compute[232433]: 2025-12-06 07:12:46.065 232437 DEBUG nova.compute.manager [req-205c8ccb-d79b-4c6c-8824-cca8f266a151 req-0fa57b95-3a2b-4b1a-9f0f-75832422e64a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: a4b6c016-d090-4c71-9a1d-1f0debbdf5b0] Received event network-vif-plugged-4fe57424-f705-4ca6-8802-348f30338843 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:12:46 np0005548731 nova_compute[232433]: 2025-12-06 07:12:46.066 232437 DEBUG oslo_concurrency.lockutils [req-205c8ccb-d79b-4c6c-8824-cca8f266a151 req-0fa57b95-3a2b-4b1a-9f0f-75832422e64a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "a4b6c016-d090-4c71-9a1d-1f0debbdf5b0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:12:46 np0005548731 nova_compute[232433]: 2025-12-06 07:12:46.066 232437 DEBUG oslo_concurrency.lockutils [req-205c8ccb-d79b-4c6c-8824-cca8f266a151 req-0fa57b95-3a2b-4b1a-9f0f-75832422e64a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "a4b6c016-d090-4c71-9a1d-1f0debbdf5b0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:12:46 np0005548731 nova_compute[232433]: 2025-12-06 07:12:46.067 232437 DEBUG oslo_concurrency.lockutils [req-205c8ccb-d79b-4c6c-8824-cca8f266a151 req-0fa57b95-3a2b-4b1a-9f0f-75832422e64a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "a4b6c016-d090-4c71-9a1d-1f0debbdf5b0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:12:46 np0005548731 nova_compute[232433]: 2025-12-06 07:12:46.067 232437 DEBUG nova.compute.manager [req-205c8ccb-d79b-4c6c-8824-cca8f266a151 req-0fa57b95-3a2b-4b1a-9f0f-75832422e64a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: a4b6c016-d090-4c71-9a1d-1f0debbdf5b0] Processing event network-vif-plugged-4fe57424-f705-4ca6-8802-348f30338843 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Dec  6 02:12:46 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:12:46.100 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[3452bc31-a851-41e0-8c9c-0568c2ca70ab]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:12:46 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:12:46.102 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2b0835d7-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:12:46 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:12:46.102 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  6 02:12:46 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:12:46.103 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2b0835d7-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:12:46 np0005548731 NetworkManager[49182]: <info>  [1765005166.1053] manager: (tap2b0835d7-80): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/106)
Dec  6 02:12:46 np0005548731 kernel: tap2b0835d7-80: entered promiscuous mode
Dec  6 02:12:46 np0005548731 nova_compute[232433]: 2025-12-06 07:12:46.106 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:12:46 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:12:46.107 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap2b0835d7-80, col_values=(('external_ids', {'iface-id': '87f2c5b0-3684-4269-9fbf-5a4dfd5a8759'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:12:46 np0005548731 ovn_controller[133927]: 2025-12-06T07:12:46Z|00179|binding|INFO|Releasing lport 87f2c5b0-3684-4269-9fbf-5a4dfd5a8759 from this chassis (sb_readonly=0)
Dec  6 02:12:46 np0005548731 nova_compute[232433]: 2025-12-06 07:12:46.123 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:12:46 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:12:46.124 143965 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/2b0835d7-87e4-46cc-8a94-e4e042bd4bad.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/2b0835d7-87e4-46cc-8a94-e4e042bd4bad.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Dec  6 02:12:46 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:12:46.125 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[718be338-7cea-411b-ad72-f9c078caa639]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:12:46 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:12:46.126 143965 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec  6 02:12:46 np0005548731 ovn_metadata_agent[143960]: global
Dec  6 02:12:46 np0005548731 ovn_metadata_agent[143960]:    log         /dev/log local0 debug
Dec  6 02:12:46 np0005548731 ovn_metadata_agent[143960]:    log-tag     haproxy-metadata-proxy-2b0835d7-87e4-46cc-8a94-e4e042bd4bad
Dec  6 02:12:46 np0005548731 ovn_metadata_agent[143960]:    user        root
Dec  6 02:12:46 np0005548731 ovn_metadata_agent[143960]:    group       root
Dec  6 02:12:46 np0005548731 ovn_metadata_agent[143960]:    maxconn     1024
Dec  6 02:12:46 np0005548731 ovn_metadata_agent[143960]:    pidfile     /var/lib/neutron/external/pids/2b0835d7-87e4-46cc-8a94-e4e042bd4bad.pid.haproxy
Dec  6 02:12:46 np0005548731 ovn_metadata_agent[143960]:    daemon
Dec  6 02:12:46 np0005548731 ovn_metadata_agent[143960]: 
Dec  6 02:12:46 np0005548731 ovn_metadata_agent[143960]: defaults
Dec  6 02:12:46 np0005548731 ovn_metadata_agent[143960]:    log global
Dec  6 02:12:46 np0005548731 ovn_metadata_agent[143960]:    mode http
Dec  6 02:12:46 np0005548731 ovn_metadata_agent[143960]:    option httplog
Dec  6 02:12:46 np0005548731 ovn_metadata_agent[143960]:    option dontlognull
Dec  6 02:12:46 np0005548731 ovn_metadata_agent[143960]:    option http-server-close
Dec  6 02:12:46 np0005548731 ovn_metadata_agent[143960]:    option forwardfor
Dec  6 02:12:46 np0005548731 ovn_metadata_agent[143960]:    retries                 3
Dec  6 02:12:46 np0005548731 ovn_metadata_agent[143960]:    timeout http-request    30s
Dec  6 02:12:46 np0005548731 ovn_metadata_agent[143960]:    timeout connect         30s
Dec  6 02:12:46 np0005548731 ovn_metadata_agent[143960]:    timeout client          32s
Dec  6 02:12:46 np0005548731 ovn_metadata_agent[143960]:    timeout server          32s
Dec  6 02:12:46 np0005548731 ovn_metadata_agent[143960]:    timeout http-keep-alive 30s
Dec  6 02:12:46 np0005548731 ovn_metadata_agent[143960]: 
Dec  6 02:12:46 np0005548731 ovn_metadata_agent[143960]: 
Dec  6 02:12:46 np0005548731 ovn_metadata_agent[143960]: listen listener
Dec  6 02:12:46 np0005548731 ovn_metadata_agent[143960]:    bind 169.254.169.254:80
Dec  6 02:12:46 np0005548731 ovn_metadata_agent[143960]:    server metadata /var/lib/neutron/metadata_proxy
Dec  6 02:12:46 np0005548731 ovn_metadata_agent[143960]:    http-request add-header X-OVN-Network-ID 2b0835d7-87e4-46cc-8a94-e4e042bd4bad
Dec  6 02:12:46 np0005548731 ovn_metadata_agent[143960]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Dec  6 02:12:46 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:12:46.128 143965 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-2b0835d7-87e4-46cc-8a94-e4e042bd4bad', 'env', 'PROCESS_TAG=haproxy-2b0835d7-87e4-46cc-8a94-e4e042bd4bad', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/2b0835d7-87e4-46cc-8a94-e4e042bd4bad.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Dec  6 02:12:46 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:12:46 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:12:46 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:12:46.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:12:46 np0005548731 podman[257724]: 2025-12-06 07:12:46.508360168 +0000 UTC m=+0.053794441 container create f923946d21c853117fc795f46a561bd2aac4a72651caf163dc6736666382a8b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2b0835d7-87e4-46cc-8a94-e4e042bd4bad, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3)
Dec  6 02:12:46 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:12:46 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:12:46 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:12:46.518 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:12:46 np0005548731 systemd[1]: Started libpod-conmon-f923946d21c853117fc795f46a561bd2aac4a72651caf163dc6736666382a8b1.scope.
Dec  6 02:12:46 np0005548731 podman[257724]: 2025-12-06 07:12:46.475374872 +0000 UTC m=+0.020809165 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec  6 02:12:46 np0005548731 systemd[1]: Started libcrun container.
Dec  6 02:12:46 np0005548731 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5fa4ce46220201c83ab1481a99c9623519030f166798bc97285506565cf95719/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec  6 02:12:46 np0005548731 podman[257724]: 2025-12-06 07:12:46.608919925 +0000 UTC m=+0.154354188 container init f923946d21c853117fc795f46a561bd2aac4a72651caf163dc6736666382a8b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2b0835d7-87e4-46cc-8a94-e4e042bd4bad, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  6 02:12:46 np0005548731 podman[257724]: 2025-12-06 07:12:46.615002555 +0000 UTC m=+0.160436828 container start f923946d21c853117fc795f46a561bd2aac4a72651caf163dc6736666382a8b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2b0835d7-87e4-46cc-8a94-e4e042bd4bad, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  6 02:12:46 np0005548731 neutron-haproxy-ovnmeta-2b0835d7-87e4-46cc-8a94-e4e042bd4bad[257758]: [NOTICE]   (257762) : New worker (257764) forked
Dec  6 02:12:46 np0005548731 neutron-haproxy-ovnmeta-2b0835d7-87e4-46cc-8a94-e4e042bd4bad[257758]: [NOTICE]   (257762) : Loading success.
Dec  6 02:12:46 np0005548731 nova_compute[232433]: 2025-12-06 07:12:46.697 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:12:46 np0005548731 nova_compute[232433]: 2025-12-06 07:12:46.725 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:12:46 np0005548731 nova_compute[232433]: 2025-12-06 07:12:46.787 232437 DEBUG nova.compute.manager [None req-17dbf88e-0ded-4420-a9e7-f4b26ef4ec41 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] [instance: a4b6c016-d090-4c71-9a1d-1f0debbdf5b0] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec  6 02:12:46 np0005548731 nova_compute[232433]: 2025-12-06 07:12:46.789 232437 DEBUG nova.virt.driver [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Emitting event <LifecycleEvent: 1765005166.7882156, a4b6c016-d090-4c71-9a1d-1f0debbdf5b0 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 02:12:46 np0005548731 nova_compute[232433]: 2025-12-06 07:12:46.789 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: a4b6c016-d090-4c71-9a1d-1f0debbdf5b0] VM Started (Lifecycle Event)#033[00m
Dec  6 02:12:46 np0005548731 nova_compute[232433]: 2025-12-06 07:12:46.793 232437 DEBUG nova.virt.libvirt.driver [None req-17dbf88e-0ded-4420-a9e7-f4b26ef4ec41 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] [instance: a4b6c016-d090-4c71-9a1d-1f0debbdf5b0] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Dec  6 02:12:46 np0005548731 nova_compute[232433]: 2025-12-06 07:12:46.796 232437 INFO nova.virt.libvirt.driver [-] [instance: a4b6c016-d090-4c71-9a1d-1f0debbdf5b0] Instance spawned successfully.#033[00m
Dec  6 02:12:46 np0005548731 nova_compute[232433]: 2025-12-06 07:12:46.796 232437 DEBUG nova.virt.libvirt.driver [None req-17dbf88e-0ded-4420-a9e7-f4b26ef4ec41 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] [instance: a4b6c016-d090-4c71-9a1d-1f0debbdf5b0] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Dec  6 02:12:46 np0005548731 nova_compute[232433]: 2025-12-06 07:12:46.818 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: a4b6c016-d090-4c71-9a1d-1f0debbdf5b0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:12:46 np0005548731 nova_compute[232433]: 2025-12-06 07:12:46.823 232437 DEBUG nova.virt.libvirt.driver [None req-17dbf88e-0ded-4420-a9e7-f4b26ef4ec41 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] [instance: a4b6c016-d090-4c71-9a1d-1f0debbdf5b0] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 02:12:46 np0005548731 nova_compute[232433]: 2025-12-06 07:12:46.824 232437 DEBUG nova.virt.libvirt.driver [None req-17dbf88e-0ded-4420-a9e7-f4b26ef4ec41 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] [instance: a4b6c016-d090-4c71-9a1d-1f0debbdf5b0] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 02:12:46 np0005548731 nova_compute[232433]: 2025-12-06 07:12:46.824 232437 DEBUG nova.virt.libvirt.driver [None req-17dbf88e-0ded-4420-a9e7-f4b26ef4ec41 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] [instance: a4b6c016-d090-4c71-9a1d-1f0debbdf5b0] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 02:12:46 np0005548731 nova_compute[232433]: 2025-12-06 07:12:46.825 232437 DEBUG nova.virt.libvirt.driver [None req-17dbf88e-0ded-4420-a9e7-f4b26ef4ec41 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] [instance: a4b6c016-d090-4c71-9a1d-1f0debbdf5b0] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 02:12:46 np0005548731 nova_compute[232433]: 2025-12-06 07:12:46.825 232437 DEBUG nova.virt.libvirt.driver [None req-17dbf88e-0ded-4420-a9e7-f4b26ef4ec41 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] [instance: a4b6c016-d090-4c71-9a1d-1f0debbdf5b0] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 02:12:46 np0005548731 nova_compute[232433]: 2025-12-06 07:12:46.825 232437 DEBUG nova.virt.libvirt.driver [None req-17dbf88e-0ded-4420-a9e7-f4b26ef4ec41 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] [instance: a4b6c016-d090-4c71-9a1d-1f0debbdf5b0] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 02:12:46 np0005548731 nova_compute[232433]: 2025-12-06 07:12:46.828 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: a4b6c016-d090-4c71-9a1d-1f0debbdf5b0] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  6 02:12:46 np0005548731 nova_compute[232433]: 2025-12-06 07:12:46.861 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: a4b6c016-d090-4c71-9a1d-1f0debbdf5b0] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  6 02:12:46 np0005548731 nova_compute[232433]: 2025-12-06 07:12:46.862 232437 DEBUG nova.virt.driver [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Emitting event <LifecycleEvent: 1765005166.7883344, a4b6c016-d090-4c71-9a1d-1f0debbdf5b0 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 02:12:46 np0005548731 nova_compute[232433]: 2025-12-06 07:12:46.862 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: a4b6c016-d090-4c71-9a1d-1f0debbdf5b0] VM Paused (Lifecycle Event)#033[00m
Dec  6 02:12:46 np0005548731 nova_compute[232433]: 2025-12-06 07:12:46.881 232437 INFO nova.compute.manager [None req-17dbf88e-0ded-4420-a9e7-f4b26ef4ec41 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] [instance: a4b6c016-d090-4c71-9a1d-1f0debbdf5b0] Took 6.98 seconds to spawn the instance on the hypervisor.#033[00m
Dec  6 02:12:46 np0005548731 nova_compute[232433]: 2025-12-06 07:12:46.881 232437 DEBUG nova.compute.manager [None req-17dbf88e-0ded-4420-a9e7-f4b26ef4ec41 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] [instance: a4b6c016-d090-4c71-9a1d-1f0debbdf5b0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:12:46 np0005548731 nova_compute[232433]: 2025-12-06 07:12:46.882 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: a4b6c016-d090-4c71-9a1d-1f0debbdf5b0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:12:46 np0005548731 nova_compute[232433]: 2025-12-06 07:12:46.888 232437 DEBUG nova.virt.driver [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Emitting event <LifecycleEvent: 1765005166.793091, a4b6c016-d090-4c71-9a1d-1f0debbdf5b0 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 02:12:46 np0005548731 nova_compute[232433]: 2025-12-06 07:12:46.888 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: a4b6c016-d090-4c71-9a1d-1f0debbdf5b0] VM Resumed (Lifecycle Event)#033[00m
Dec  6 02:12:46 np0005548731 nova_compute[232433]: 2025-12-06 07:12:46.924 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: a4b6c016-d090-4c71-9a1d-1f0debbdf5b0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:12:46 np0005548731 nova_compute[232433]: 2025-12-06 07:12:46.928 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: a4b6c016-d090-4c71-9a1d-1f0debbdf5b0] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  6 02:12:46 np0005548731 nova_compute[232433]: 2025-12-06 07:12:46.944 232437 INFO nova.compute.manager [None req-17dbf88e-0ded-4420-a9e7-f4b26ef4ec41 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] [instance: a4b6c016-d090-4c71-9a1d-1f0debbdf5b0] Took 8.07 seconds to build instance.#033[00m
Dec  6 02:12:46 np0005548731 nova_compute[232433]: 2025-12-06 07:12:46.963 232437 DEBUG oslo_concurrency.lockutils [None req-17dbf88e-0ded-4420-a9e7-f4b26ef4ec41 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Lock "a4b6c016-d090-4c71-9a1d-1f0debbdf5b0" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.182s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:12:47 np0005548731 podman[257797]: 2025-12-06 07:12:47.898504117 +0000 UTC m=+0.058798385 container health_status 26c2ce9bdb83c6db5ccad7f5b2a7cb4e9354ab0235ad2f942ea42c8feda2142b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, 
container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent)
Dec  6 02:12:47 np0005548731 podman[257799]: 2025-12-06 07:12:47.920572283 +0000 UTC m=+0.074174695 container health_status ae4fd08cdec31d6ae2a6b91ffff7deb6ca5a8375cb52ee4b685cc6f25796824e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  6 02:12:47 np0005548731 podman[257798]: 2025-12-06 07:12:47.93744033 +0000 UTC m=+0.093196196 container health_status a118c3becf56cfbcae208065771ebf10a65162bb7b96389971293e534823fb78 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3)
Dec  6 02:12:48 np0005548731 nova_compute[232433]: 2025-12-06 07:12:48.190 232437 DEBUG nova.compute.manager [req-08f76dcb-53c2-4f49-80c7-0605efb93bbc req-b47adc32-a45c-452e-af8f-93da883ed2d6 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: a4b6c016-d090-4c71-9a1d-1f0debbdf5b0] Received event network-vif-plugged-4fe57424-f705-4ca6-8802-348f30338843 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:12:48 np0005548731 nova_compute[232433]: 2025-12-06 07:12:48.190 232437 DEBUG oslo_concurrency.lockutils [req-08f76dcb-53c2-4f49-80c7-0605efb93bbc req-b47adc32-a45c-452e-af8f-93da883ed2d6 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "a4b6c016-d090-4c71-9a1d-1f0debbdf5b0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:12:48 np0005548731 nova_compute[232433]: 2025-12-06 07:12:48.190 232437 DEBUG oslo_concurrency.lockutils [req-08f76dcb-53c2-4f49-80c7-0605efb93bbc req-b47adc32-a45c-452e-af8f-93da883ed2d6 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "a4b6c016-d090-4c71-9a1d-1f0debbdf5b0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:12:48 np0005548731 nova_compute[232433]: 2025-12-06 07:12:48.191 232437 DEBUG oslo_concurrency.lockutils [req-08f76dcb-53c2-4f49-80c7-0605efb93bbc req-b47adc32-a45c-452e-af8f-93da883ed2d6 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "a4b6c016-d090-4c71-9a1d-1f0debbdf5b0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:12:48 np0005548731 nova_compute[232433]: 2025-12-06 07:12:48.191 232437 DEBUG nova.compute.manager [req-08f76dcb-53c2-4f49-80c7-0605efb93bbc req-b47adc32-a45c-452e-af8f-93da883ed2d6 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: a4b6c016-d090-4c71-9a1d-1f0debbdf5b0] No waiting events found dispatching network-vif-plugged-4fe57424-f705-4ca6-8802-348f30338843 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 02:12:48 np0005548731 nova_compute[232433]: 2025-12-06 07:12:48.191 232437 WARNING nova.compute.manager [req-08f76dcb-53c2-4f49-80c7-0605efb93bbc req-b47adc32-a45c-452e-af8f-93da883ed2d6 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: a4b6c016-d090-4c71-9a1d-1f0debbdf5b0] Received unexpected event network-vif-plugged-4fe57424-f705-4ca6-8802-348f30338843 for instance with vm_state active and task_state None.#033[00m
Dec  6 02:12:48 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:12:48 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:12:48 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:12:48.266 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:12:48 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:12:48 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:12:48 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:12:48.521 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:12:49 np0005548731 nova_compute[232433]: 2025-12-06 07:12:49.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:12:49 np0005548731 nova_compute[232433]: 2025-12-06 07:12:49.851 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:12:49 np0005548731 nova_compute[232433]: 2025-12-06 07:12:49.873 232437 DEBUG nova.objects.instance [None req-1b7a672a-0d93-4e42-9503-aa614cf9c6a0 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Lazy-loading 'pci_devices' on Instance uuid a4b6c016-d090-4c71-9a1d-1f0debbdf5b0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:12:49 np0005548731 nova_compute[232433]: 2025-12-06 07:12:49.897 232437 DEBUG nova.virt.driver [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Emitting event <LifecycleEvent: 1765005169.8977594, a4b6c016-d090-4c71-9a1d-1f0debbdf5b0 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 02:12:49 np0005548731 nova_compute[232433]: 2025-12-06 07:12:49.898 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: a4b6c016-d090-4c71-9a1d-1f0debbdf5b0] VM Paused (Lifecycle Event)#033[00m
Dec  6 02:12:49 np0005548731 nova_compute[232433]: 2025-12-06 07:12:49.918 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: a4b6c016-d090-4c71-9a1d-1f0debbdf5b0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:12:49 np0005548731 nova_compute[232433]: 2025-12-06 07:12:49.921 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: a4b6c016-d090-4c71-9a1d-1f0debbdf5b0] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: suspending, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  6 02:12:49 np0005548731 nova_compute[232433]: 2025-12-06 07:12:49.938 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: a4b6c016-d090-4c71-9a1d-1f0debbdf5b0] During sync_power_state the instance has a pending task (suspending). Skip.#033[00m
Dec  6 02:12:50 np0005548731 nova_compute[232433]: 2025-12-06 07:12:50.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:12:50 np0005548731 nova_compute[232433]: 2025-12-06 07:12:50.104 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  6 02:12:50 np0005548731 nova_compute[232433]: 2025-12-06 07:12:50.105 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  6 02:12:50 np0005548731 nova_compute[232433]: 2025-12-06 07:12:50.123 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Acquiring lock "refresh_cache-a4b6c016-d090-4c71-9a1d-1f0debbdf5b0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 02:12:50 np0005548731 nova_compute[232433]: 2025-12-06 07:12:50.123 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Acquired lock "refresh_cache-a4b6c016-d090-4c71-9a1d-1f0debbdf5b0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 02:12:50 np0005548731 nova_compute[232433]: 2025-12-06 07:12:50.124 232437 DEBUG nova.network.neutron [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] [instance: a4b6c016-d090-4c71-9a1d-1f0debbdf5b0] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Dec  6 02:12:50 np0005548731 nova_compute[232433]: 2025-12-06 07:12:50.124 232437 DEBUG nova.objects.instance [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lazy-loading 'info_cache' on Instance uuid a4b6c016-d090-4c71-9a1d-1f0debbdf5b0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:12:50 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:12:50 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:12:50 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:12:50.269 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:12:50 np0005548731 kernel: tap4fe57424-f7 (unregistering): left promiscuous mode
Dec  6 02:12:50 np0005548731 NetworkManager[49182]: <info>  [1765005170.4480] device (tap4fe57424-f7): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec  6 02:12:50 np0005548731 ovn_controller[133927]: 2025-12-06T07:12:50Z|00180|binding|INFO|Releasing lport 4fe57424-f705-4ca6-8802-348f30338843 from this chassis (sb_readonly=0)
Dec  6 02:12:50 np0005548731 ovn_controller[133927]: 2025-12-06T07:12:50Z|00181|binding|INFO|Setting lport 4fe57424-f705-4ca6-8802-348f30338843 down in Southbound
Dec  6 02:12:50 np0005548731 ovn_controller[133927]: 2025-12-06T07:12:50Z|00182|binding|INFO|Removing iface tap4fe57424-f7 ovn-installed in OVS
Dec  6 02:12:50 np0005548731 nova_compute[232433]: 2025-12-06 07:12:50.465 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:12:50 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:12:50.473 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6b:7a:68 10.100.0.4'], port_security=['fa:16:3e:6b:7a:68 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'a4b6c016-d090-4c71-9a1d-1f0debbdf5b0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2b0835d7-87e4-46cc-8a94-e4e042bd4bad', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'af7365adc05f4624a08a71cd5a77ada6', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'b536f2c5-b22f-47bf-a47f-57e098f673a0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a7e40662-9f9d-450b-8c39-94d50ba422c6, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], logical_port=4fe57424-f705-4ca6-8802-348f30338843) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 02:12:50 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:12:50.474 143965 INFO neutron.agent.ovn.metadata.agent [-] Port 4fe57424-f705-4ca6-8802-348f30338843 in datapath 2b0835d7-87e4-46cc-8a94-e4e042bd4bad unbound from our chassis#033[00m
Dec  6 02:12:50 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:12:50.476 143965 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 2b0835d7-87e4-46cc-8a94-e4e042bd4bad, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec  6 02:12:50 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:12:50.477 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[ee111bae-ee72-4f04-931a-ded41758d3c4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:12:50 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:12:50.477 143965 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-2b0835d7-87e4-46cc-8a94-e4e042bd4bad namespace which is not needed anymore#033[00m
Dec  6 02:12:50 np0005548731 nova_compute[232433]: 2025-12-06 07:12:50.483 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:12:50 np0005548731 systemd[1]: machine-qemu\x2d25\x2dinstance\x2d00000039.scope: Deactivated successfully.
Dec  6 02:12:50 np0005548731 systemd[1]: machine-qemu\x2d25\x2dinstance\x2d00000039.scope: Consumed 4.067s CPU time.
Dec  6 02:12:50 np0005548731 systemd-machined[195355]: Machine qemu-25-instance-00000039 terminated.
Dec  6 02:12:50 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:12:50 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:12:50 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:12:50.524 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:12:50 np0005548731 nova_compute[232433]: 2025-12-06 07:12:50.617 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:12:50 np0005548731 nova_compute[232433]: 2025-12-06 07:12:50.622 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:12:50 np0005548731 neutron-haproxy-ovnmeta-2b0835d7-87e4-46cc-8a94-e4e042bd4bad[257758]: [NOTICE]   (257762) : haproxy version is 2.8.14-c23fe91
Dec  6 02:12:50 np0005548731 neutron-haproxy-ovnmeta-2b0835d7-87e4-46cc-8a94-e4e042bd4bad[257758]: [NOTICE]   (257762) : path to executable is /usr/sbin/haproxy
Dec  6 02:12:50 np0005548731 neutron-haproxy-ovnmeta-2b0835d7-87e4-46cc-8a94-e4e042bd4bad[257758]: [WARNING]  (257762) : Exiting Master process...
Dec  6 02:12:50 np0005548731 nova_compute[232433]: 2025-12-06 07:12:50.628 232437 DEBUG nova.compute.manager [None req-1b7a672a-0d93-4e42-9503-aa614cf9c6a0 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] [instance: a4b6c016-d090-4c71-9a1d-1f0debbdf5b0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:12:50 np0005548731 neutron-haproxy-ovnmeta-2b0835d7-87e4-46cc-8a94-e4e042bd4bad[257758]: [ALERT]    (257762) : Current worker (257764) exited with code 143 (Terminated)
Dec  6 02:12:50 np0005548731 neutron-haproxy-ovnmeta-2b0835d7-87e4-46cc-8a94-e4e042bd4bad[257758]: [WARNING]  (257762) : All workers exited. Exiting... (0)
Dec  6 02:12:50 np0005548731 systemd[1]: libpod-f923946d21c853117fc795f46a561bd2aac4a72651caf163dc6736666382a8b1.scope: Deactivated successfully.
Dec  6 02:12:50 np0005548731 podman[257887]: 2025-12-06 07:12:50.635977261 +0000 UTC m=+0.057998834 container died f923946d21c853117fc795f46a561bd2aac4a72651caf163dc6736666382a8b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2b0835d7-87e4-46cc-8a94-e4e042bd4bad, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3)
Dec  6 02:12:50 np0005548731 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-f923946d21c853117fc795f46a561bd2aac4a72651caf163dc6736666382a8b1-userdata-shm.mount: Deactivated successfully.
Dec  6 02:12:50 np0005548731 systemd[1]: var-lib-containers-storage-overlay-5fa4ce46220201c83ab1481a99c9623519030f166798bc97285506565cf95719-merged.mount: Deactivated successfully.
Dec  6 02:12:50 np0005548731 podman[257887]: 2025-12-06 07:12:50.690768765 +0000 UTC m=+0.112790328 container cleanup f923946d21c853117fc795f46a561bd2aac4a72651caf163dc6736666382a8b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2b0835d7-87e4-46cc-8a94-e4e042bd4bad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Dec  6 02:12:50 np0005548731 systemd[1]: libpod-conmon-f923946d21c853117fc795f46a561bd2aac4a72651caf163dc6736666382a8b1.scope: Deactivated successfully.
Dec  6 02:12:50 np0005548731 podman[257926]: 2025-12-06 07:12:50.749150578 +0000 UTC m=+0.036360743 container remove f923946d21c853117fc795f46a561bd2aac4a72651caf163dc6736666382a8b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2b0835d7-87e4-46cc-8a94-e4e042bd4bad, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec  6 02:12:50 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:12:50.754 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[b5a60dda-24a3-45f1-ae7f-9954d980646c]: (4, ('Sat Dec  6 07:12:50 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-2b0835d7-87e4-46cc-8a94-e4e042bd4bad (f923946d21c853117fc795f46a561bd2aac4a72651caf163dc6736666382a8b1)\nf923946d21c853117fc795f46a561bd2aac4a72651caf163dc6736666382a8b1\nSat Dec  6 07:12:50 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-2b0835d7-87e4-46cc-8a94-e4e042bd4bad (f923946d21c853117fc795f46a561bd2aac4a72651caf163dc6736666382a8b1)\nf923946d21c853117fc795f46a561bd2aac4a72651caf163dc6736666382a8b1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:12:50 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:12:50.756 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[e8324800-2f30-4db1-a419-01ab49bfb995]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:12:50 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:12:50.757 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2b0835d7-80, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:12:50 np0005548731 nova_compute[232433]: 2025-12-06 07:12:50.758 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:12:50 np0005548731 kernel: tap2b0835d7-80: left promiscuous mode
Dec  6 02:12:50 np0005548731 nova_compute[232433]: 2025-12-06 07:12:50.774 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:12:50 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:12:50.777 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[b545c0e9-6fab-43db-8aa9-352f9d0d60a6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:12:50 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:12:50.788 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[fbae2da3-b9ad-4c6c-bfd2-05aacf3a1f5f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:12:50 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:12:50.789 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[03d1accd-7ac3-4699-bd77-a4bc8194028c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:12:50 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:12:50.804 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[ea3ea33d-c35f-4afb-8c3a-4169f8a92c7a]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 542787, 'reachable_time': 35626, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 257945, 'error': None, 'target': 'ovnmeta-2b0835d7-87e4-46cc-8a94-e4e042bd4bad', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:12:50 np0005548731 systemd[1]: run-netns-ovnmeta\x2d2b0835d7\x2d87e4\x2d46cc\x2d8a94\x2de4e042bd4bad.mount: Deactivated successfully.
Dec  6 02:12:50 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:12:50.806 144079 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-2b0835d7-87e4-46cc-8a94-e4e042bd4bad deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Dec  6 02:12:50 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:12:50.807 144079 DEBUG oslo.privsep.daemon [-] privsep: reply[cd2ada3a-0c02-420b-a9af-2a914000f16a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:12:51 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e231 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:12:51 np0005548731 nova_compute[232433]: 2025-12-06 07:12:51.395 232437 DEBUG nova.compute.manager [req-fa8af519-943c-43cd-93cf-176d8f524022 req-5cb86ef4-8f2d-4a99-bffb-a2f0036d5b40 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: a4b6c016-d090-4c71-9a1d-1f0debbdf5b0] Received event network-vif-unplugged-4fe57424-f705-4ca6-8802-348f30338843 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:12:51 np0005548731 nova_compute[232433]: 2025-12-06 07:12:51.396 232437 DEBUG oslo_concurrency.lockutils [req-fa8af519-943c-43cd-93cf-176d8f524022 req-5cb86ef4-8f2d-4a99-bffb-a2f0036d5b40 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "a4b6c016-d090-4c71-9a1d-1f0debbdf5b0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:12:51 np0005548731 nova_compute[232433]: 2025-12-06 07:12:51.396 232437 DEBUG oslo_concurrency.lockutils [req-fa8af519-943c-43cd-93cf-176d8f524022 req-5cb86ef4-8f2d-4a99-bffb-a2f0036d5b40 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "a4b6c016-d090-4c71-9a1d-1f0debbdf5b0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:12:51 np0005548731 nova_compute[232433]: 2025-12-06 07:12:51.397 232437 DEBUG oslo_concurrency.lockutils [req-fa8af519-943c-43cd-93cf-176d8f524022 req-5cb86ef4-8f2d-4a99-bffb-a2f0036d5b40 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "a4b6c016-d090-4c71-9a1d-1f0debbdf5b0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:12:51 np0005548731 nova_compute[232433]: 2025-12-06 07:12:51.397 232437 DEBUG nova.compute.manager [req-fa8af519-943c-43cd-93cf-176d8f524022 req-5cb86ef4-8f2d-4a99-bffb-a2f0036d5b40 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: a4b6c016-d090-4c71-9a1d-1f0debbdf5b0] No waiting events found dispatching network-vif-unplugged-4fe57424-f705-4ca6-8802-348f30338843 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 02:12:51 np0005548731 nova_compute[232433]: 2025-12-06 07:12:51.397 232437 WARNING nova.compute.manager [req-fa8af519-943c-43cd-93cf-176d8f524022 req-5cb86ef4-8f2d-4a99-bffb-a2f0036d5b40 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: a4b6c016-d090-4c71-9a1d-1f0debbdf5b0] Received unexpected event network-vif-unplugged-4fe57424-f705-4ca6-8802-348f30338843 for instance with vm_state suspended and task_state None.#033[00m
Dec  6 02:12:51 np0005548731 nova_compute[232433]: 2025-12-06 07:12:51.727 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:12:52 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:12:52 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:12:52 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:12:52.271 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:12:52 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:12:52.395 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=22, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:ec:b3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '32:72:e7:89:e0:7d'}, ipsec=False) old=SB_Global(nb_cfg=21) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 02:12:52 np0005548731 nova_compute[232433]: 2025-12-06 07:12:52.396 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:12:52 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:12:52.396 143965 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec  6 02:12:52 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:12:52 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:12:52 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:12:52.528 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:12:53 np0005548731 nova_compute[232433]: 2025-12-06 07:12:53.115 232437 DEBUG nova.network.neutron [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] [instance: a4b6c016-d090-4c71-9a1d-1f0debbdf5b0] Updating instance_info_cache with network_info: [{"id": "4fe57424-f705-4ca6-8802-348f30338843", "address": "fa:16:3e:6b:7a:68", "network": {"id": "2b0835d7-87e4-46cc-8a94-e4e042bd4bad", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1132836552-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "af7365adc05f4624a08a71cd5a77ada6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4fe57424-f7", "ovs_interfaceid": "4fe57424-f705-4ca6-8802-348f30338843", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 02:12:53 np0005548731 nova_compute[232433]: 2025-12-06 07:12:53.155 232437 DEBUG nova.compute.manager [None req-a7f5920f-0eb3-4385-9ef9-0e6aa249522e bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] [instance: a4b6c016-d090-4c71-9a1d-1f0debbdf5b0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:12:53 np0005548731 nova_compute[232433]: 2025-12-06 07:12:53.156 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Releasing lock "refresh_cache-a4b6c016-d090-4c71-9a1d-1f0debbdf5b0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 02:12:53 np0005548731 nova_compute[232433]: 2025-12-06 07:12:53.156 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] [instance: a4b6c016-d090-4c71-9a1d-1f0debbdf5b0] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Dec  6 02:12:53 np0005548731 nova_compute[232433]: 2025-12-06 07:12:53.156 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:12:53 np0005548731 nova_compute[232433]: 2025-12-06 07:12:53.157 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:12:53 np0005548731 nova_compute[232433]: 2025-12-06 07:12:53.157 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  6 02:12:53 np0005548731 nova_compute[232433]: 2025-12-06 07:12:53.157 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:12:53 np0005548731 nova_compute[232433]: 2025-12-06 07:12:53.158 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Dec  6 02:12:53 np0005548731 nova_compute[232433]: 2025-12-06 07:12:53.185 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Dec  6 02:12:53 np0005548731 nova_compute[232433]: 2025-12-06 07:12:53.185 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:12:53 np0005548731 nova_compute[232433]: 2025-12-06 07:12:53.205 232437 INFO nova.compute.manager [None req-a7f5920f-0eb3-4385-9ef9-0e6aa249522e bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] [instance: a4b6c016-d090-4c71-9a1d-1f0debbdf5b0] instance snapshotting#033[00m
Dec  6 02:12:53 np0005548731 nova_compute[232433]: 2025-12-06 07:12:53.205 232437 WARNING nova.compute.manager [None req-a7f5920f-0eb3-4385-9ef9-0e6aa249522e bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] [instance: a4b6c016-d090-4c71-9a1d-1f0debbdf5b0] trying to snapshot a non-running instance: (state: 4 expected: 1)#033[00m
Dec  6 02:12:53 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:12:53.398 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=9f96b960-b4f2-40bd-ae99-08121f5e8b78, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '22'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:12:53 np0005548731 nova_compute[232433]: 2025-12-06 07:12:53.482 232437 INFO nova.virt.libvirt.driver [None req-a7f5920f-0eb3-4385-9ef9-0e6aa249522e bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] [instance: a4b6c016-d090-4c71-9a1d-1f0debbdf5b0] Beginning cold snapshot process#033[00m
Dec  6 02:12:53 np0005548731 nova_compute[232433]: 2025-12-06 07:12:53.518 232437 DEBUG nova.compute.manager [req-1d1249d4-ba7e-45b0-ba2a-29dc1f232349 req-b66666ac-8b0b-4338-a4df-0846eb265979 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: a4b6c016-d090-4c71-9a1d-1f0debbdf5b0] Received event network-vif-plugged-4fe57424-f705-4ca6-8802-348f30338843 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:12:53 np0005548731 nova_compute[232433]: 2025-12-06 07:12:53.518 232437 DEBUG oslo_concurrency.lockutils [req-1d1249d4-ba7e-45b0-ba2a-29dc1f232349 req-b66666ac-8b0b-4338-a4df-0846eb265979 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "a4b6c016-d090-4c71-9a1d-1f0debbdf5b0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:12:53 np0005548731 nova_compute[232433]: 2025-12-06 07:12:53.519 232437 DEBUG oslo_concurrency.lockutils [req-1d1249d4-ba7e-45b0-ba2a-29dc1f232349 req-b66666ac-8b0b-4338-a4df-0846eb265979 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "a4b6c016-d090-4c71-9a1d-1f0debbdf5b0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:12:53 np0005548731 nova_compute[232433]: 2025-12-06 07:12:53.519 232437 DEBUG oslo_concurrency.lockutils [req-1d1249d4-ba7e-45b0-ba2a-29dc1f232349 req-b66666ac-8b0b-4338-a4df-0846eb265979 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "a4b6c016-d090-4c71-9a1d-1f0debbdf5b0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:12:53 np0005548731 nova_compute[232433]: 2025-12-06 07:12:53.520 232437 DEBUG nova.compute.manager [req-1d1249d4-ba7e-45b0-ba2a-29dc1f232349 req-b66666ac-8b0b-4338-a4df-0846eb265979 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: a4b6c016-d090-4c71-9a1d-1f0debbdf5b0] No waiting events found dispatching network-vif-plugged-4fe57424-f705-4ca6-8802-348f30338843 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 02:12:53 np0005548731 nova_compute[232433]: 2025-12-06 07:12:53.520 232437 WARNING nova.compute.manager [req-1d1249d4-ba7e-45b0-ba2a-29dc1f232349 req-b66666ac-8b0b-4338-a4df-0846eb265979 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: a4b6c016-d090-4c71-9a1d-1f0debbdf5b0] Received unexpected event network-vif-plugged-4fe57424-f705-4ca6-8802-348f30338843 for instance with vm_state suspended and task_state image_snapshot.#033[00m
Dec  6 02:12:53 np0005548731 nova_compute[232433]: 2025-12-06 07:12:53.836 232437 DEBUG nova.virt.libvirt.imagebackend [None req-a7f5920f-0eb3-4385-9ef9-0e6aa249522e bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] No parent info for 6efab05d-c7cf-4770-a5c3-c806a2739063; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163#033[00m
Dec  6 02:12:54 np0005548731 nova_compute[232433]: 2025-12-06 07:12:54.104 232437 DEBUG nova.storage.rbd_utils [None req-a7f5920f-0eb3-4385-9ef9-0e6aa249522e bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] creating snapshot(af09b99fc56e4929a1a6a6c0d3c533b5) on rbd image(a4b6c016-d090-4c71-9a1d-1f0debbdf5b0_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Dec  6 02:12:54 np0005548731 nova_compute[232433]: 2025-12-06 07:12:54.148 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:12:54 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:12:54 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:12:54 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:12:54.273 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:12:54 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e232 e232: 3 total, 3 up, 3 in
Dec  6 02:12:54 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:12:54 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:12:54 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:12:54.530 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:12:54 np0005548731 nova_compute[232433]: 2025-12-06 07:12:54.582 232437 DEBUG nova.storage.rbd_utils [None req-a7f5920f-0eb3-4385-9ef9-0e6aa249522e bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] cloning vms/a4b6c016-d090-4c71-9a1d-1f0debbdf5b0_disk@af09b99fc56e4929a1a6a6c0d3c533b5 to images/9b6e4b95-ff8f-4732-b604-e00530db621e clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Dec  6 02:12:54 np0005548731 nova_compute[232433]: 2025-12-06 07:12:54.715 232437 DEBUG nova.storage.rbd_utils [None req-a7f5920f-0eb3-4385-9ef9-0e6aa249522e bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] flattening images/9b6e4b95-ff8f-4732-b604-e00530db621e flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m
Dec  6 02:12:54 np0005548731 nova_compute[232433]: 2025-12-06 07:12:54.856 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:12:55 np0005548731 nova_compute[232433]: 2025-12-06 07:12:55.118 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:12:55 np0005548731 nova_compute[232433]: 2025-12-06 07:12:55.119 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:12:55 np0005548731 nova_compute[232433]: 2025-12-06 07:12:55.145 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:12:55 np0005548731 nova_compute[232433]: 2025-12-06 07:12:55.145 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:12:55 np0005548731 nova_compute[232433]: 2025-12-06 07:12:55.146 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:12:55 np0005548731 nova_compute[232433]: 2025-12-06 07:12:55.146 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  6 02:12:55 np0005548731 nova_compute[232433]: 2025-12-06 07:12:55.146 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:12:55 np0005548731 nova_compute[232433]: 2025-12-06 07:12:55.187 232437 DEBUG nova.storage.rbd_utils [None req-a7f5920f-0eb3-4385-9ef9-0e6aa249522e bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] removing snapshot(af09b99fc56e4929a1a6a6c0d3c533b5) on rbd image(a4b6c016-d090-4c71-9a1d-1f0debbdf5b0_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Dec  6 02:12:55 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 02:12:55 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 02:12:55 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 02:12:55 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2861895599' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 02:12:55 np0005548731 nova_compute[232433]: 2025-12-06 07:12:55.653 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.507s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:12:55 np0005548731 nova_compute[232433]: 2025-12-06 07:12:55.731 232437 DEBUG nova.virt.libvirt.driver [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] skipping disk for instance-00000039 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  6 02:12:55 np0005548731 nova_compute[232433]: 2025-12-06 07:12:55.731 232437 DEBUG nova.virt.libvirt.driver [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] skipping disk for instance-00000039 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  6 02:12:55 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e233 e233: 3 total, 3 up, 3 in
Dec  6 02:12:55 np0005548731 nova_compute[232433]: 2025-12-06 07:12:55.791 232437 DEBUG nova.storage.rbd_utils [None req-a7f5920f-0eb3-4385-9ef9-0e6aa249522e bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] creating snapshot(snap) on rbd image(9b6e4b95-ff8f-4732-b604-e00530db621e) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Dec  6 02:12:55 np0005548731 nova_compute[232433]: 2025-12-06 07:12:55.914 232437 WARNING nova.virt.libvirt.driver [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  6 02:12:55 np0005548731 nova_compute[232433]: 2025-12-06 07:12:55.915 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4675MB free_disk=20.901775360107422GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  6 02:12:55 np0005548731 nova_compute[232433]: 2025-12-06 07:12:55.915 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:12:55 np0005548731 nova_compute[232433]: 2025-12-06 07:12:55.916 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:12:56 np0005548731 nova_compute[232433]: 2025-12-06 07:12:56.050 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Instance a4b6c016-d090-4c71-9a1d-1f0debbdf5b0 actively managed on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128, 'DISK_GB': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  6 02:12:56 np0005548731 nova_compute[232433]: 2025-12-06 07:12:56.050 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  6 02:12:56 np0005548731 nova_compute[232433]: 2025-12-06 07:12:56.050 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  6 02:12:56 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e233 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:12:56 np0005548731 nova_compute[232433]: 2025-12-06 07:12:56.229 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:12:56 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:12:56 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:12:56 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:12:56.274 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:12:56 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:12:56 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:12:56 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:12:56.532 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:12:56 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 02:12:56 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2384847691' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 02:12:56 np0005548731 nova_compute[232433]: 2025-12-06 07:12:56.715 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.485s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:12:56 np0005548731 nova_compute[232433]: 2025-12-06 07:12:56.725 232437 DEBUG nova.compute.provider_tree [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Inventory has not changed in ProviderTree for provider: 6d00757a-082f-486d-ae84-869a2ba2e6e7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  6 02:12:56 np0005548731 nova_compute[232433]: 2025-12-06 07:12:56.736 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:12:56 np0005548731 nova_compute[232433]: 2025-12-06 07:12:56.754 232437 DEBUG nova.scheduler.client.report [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Inventory has not changed for provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  6 02:12:56 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e234 e234: 3 total, 3 up, 3 in
Dec  6 02:12:56 np0005548731 nova_compute[232433]: 2025-12-06 07:12:56.781 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  6 02:12:56 np0005548731 nova_compute[232433]: 2025-12-06 07:12:56.782 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.866s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:12:57 np0005548731 nova_compute[232433]: 2025-12-06 07:12:57.768 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:12:57 np0005548731 nova_compute[232433]: 2025-12-06 07:12:57.792 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:12:58 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:12:58 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:12:58 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:12:58.277 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:12:58 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:12:58 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:12:58 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:12:58.534 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:12:59 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e235 e235: 3 total, 3 up, 3 in
Dec  6 02:12:59 np0005548731 nova_compute[232433]: 2025-12-06 07:12:59.860 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:13:00 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:13:00 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:13:00 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:13:00.279 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:13:00 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:13:00 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:13:00 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:13:00.536 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:13:00 np0005548731 nova_compute[232433]: 2025-12-06 07:13:00.648 232437 INFO nova.virt.libvirt.driver [None req-a7f5920f-0eb3-4385-9ef9-0e6aa249522e bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] [instance: a4b6c016-d090-4c71-9a1d-1f0debbdf5b0] Snapshot image upload complete#033[00m
Dec  6 02:13:00 np0005548731 nova_compute[232433]: 2025-12-06 07:13:00.649 232437 INFO nova.compute.manager [None req-a7f5920f-0eb3-4385-9ef9-0e6aa249522e bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] [instance: a4b6c016-d090-4c71-9a1d-1f0debbdf5b0] Took 7.44 seconds to snapshot the instance on the hypervisor.#033[00m
Dec  6 02:13:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:13:00.855 143965 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:13:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:13:00.856 143965 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:13:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:13:00.856 143965 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:13:01 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e235 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:13:01 np0005548731 nova_compute[232433]: 2025-12-06 07:13:01.738 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:13:02 np0005548731 nova_compute[232433]: 2025-12-06 07:13:02.248 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:13:02 np0005548731 nova_compute[232433]: 2025-12-06 07:13:02.273 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Triggering sync for uuid a4b6c016-d090-4c71-9a1d-1f0debbdf5b0 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m
Dec  6 02:13:02 np0005548731 nova_compute[232433]: 2025-12-06 07:13:02.273 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Acquiring lock "a4b6c016-d090-4c71-9a1d-1f0debbdf5b0" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:13:02 np0005548731 nova_compute[232433]: 2025-12-06 07:13:02.273 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "a4b6c016-d090-4c71-9a1d-1f0debbdf5b0" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:13:02 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:13:02 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:13:02 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:13:02.280 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:13:02 np0005548731 nova_compute[232433]: 2025-12-06 07:13:02.292 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "a4b6c016-d090-4c71-9a1d-1f0debbdf5b0" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.019s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:13:02 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:13:02 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:13:02 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:13:02.539 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:13:03 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e236 e236: 3 total, 3 up, 3 in
Dec  6 02:13:04 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:13:04 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 02:13:04 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:13:04.281 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 02:13:04 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:13:04 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:13:04 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:13:04.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:13:04 np0005548731 nova_compute[232433]: 2025-12-06 07:13:04.865 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:13:04 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e237 e237: 3 total, 3 up, 3 in
Dec  6 02:13:05 np0005548731 nova_compute[232433]: 2025-12-06 07:13:05.115 232437 DEBUG oslo_concurrency.lockutils [None req-51055c14-f933-4675-a947-54ca41396d4a bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Acquiring lock "a4b6c016-d090-4c71-9a1d-1f0debbdf5b0" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:13:05 np0005548731 nova_compute[232433]: 2025-12-06 07:13:05.115 232437 DEBUG oslo_concurrency.lockutils [None req-51055c14-f933-4675-a947-54ca41396d4a bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Lock "a4b6c016-d090-4c71-9a1d-1f0debbdf5b0" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:13:05 np0005548731 nova_compute[232433]: 2025-12-06 07:13:05.115 232437 DEBUG oslo_concurrency.lockutils [None req-51055c14-f933-4675-a947-54ca41396d4a bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Acquiring lock "a4b6c016-d090-4c71-9a1d-1f0debbdf5b0-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:13:05 np0005548731 nova_compute[232433]: 2025-12-06 07:13:05.116 232437 DEBUG oslo_concurrency.lockutils [None req-51055c14-f933-4675-a947-54ca41396d4a bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Lock "a4b6c016-d090-4c71-9a1d-1f0debbdf5b0-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:13:05 np0005548731 nova_compute[232433]: 2025-12-06 07:13:05.116 232437 DEBUG oslo_concurrency.lockutils [None req-51055c14-f933-4675-a947-54ca41396d4a bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Lock "a4b6c016-d090-4c71-9a1d-1f0debbdf5b0-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:13:05 np0005548731 nova_compute[232433]: 2025-12-06 07:13:05.117 232437 INFO nova.compute.manager [None req-51055c14-f933-4675-a947-54ca41396d4a bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] [instance: a4b6c016-d090-4c71-9a1d-1f0debbdf5b0] Terminating instance#033[00m
Dec  6 02:13:05 np0005548731 nova_compute[232433]: 2025-12-06 07:13:05.118 232437 DEBUG nova.compute.manager [None req-51055c14-f933-4675-a947-54ca41396d4a bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] [instance: a4b6c016-d090-4c71-9a1d-1f0debbdf5b0] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec  6 02:13:05 np0005548731 nova_compute[232433]: 2025-12-06 07:13:05.123 232437 INFO nova.virt.libvirt.driver [-] [instance: a4b6c016-d090-4c71-9a1d-1f0debbdf5b0] Instance destroyed successfully.#033[00m
Dec  6 02:13:05 np0005548731 nova_compute[232433]: 2025-12-06 07:13:05.123 232437 DEBUG nova.objects.instance [None req-51055c14-f933-4675-a947-54ca41396d4a bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Lazy-loading 'resources' on Instance uuid a4b6c016-d090-4c71-9a1d-1f0debbdf5b0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:13:05 np0005548731 nova_compute[232433]: 2025-12-06 07:13:05.137 232437 DEBUG nova.virt.libvirt.vif [None req-51055c14-f933-4675-a947-54ca41396d4a bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T07:12:37Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-388885895',display_name='tempest-ImagesTestJSON-server-388885895',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-imagestestjson-server-388885895',id=57,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-06T07:12:46Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='af7365adc05f4624a08a71cd5a77ada6',ramdisk_id='',reservation_id='r-u6vou0lx',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-ImagesTestJSON-134159412',owner_user_name='tempest-ImagesTestJSON-134159412-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T07:13:00Z,user_data=None,user_id='bdd7994b0ebb4035a373b6560aa7dbcf',uuid=a4b6c016-d090-4c71-9a1d-1f0debbdf5b0,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='suspended') vif={"id": "4fe57424-f705-4ca6-8802-348f30338843", "address": "fa:16:3e:6b:7a:68", "network": {"id": "2b0835d7-87e4-46cc-8a94-e4e042bd4bad", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1132836552-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "af7365adc05f4624a08a71cd5a77ada6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4fe57424-f7", "ovs_interfaceid": "4fe57424-f705-4ca6-8802-348f30338843", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec  6 02:13:05 np0005548731 nova_compute[232433]: 2025-12-06 07:13:05.138 232437 DEBUG nova.network.os_vif_util [None req-51055c14-f933-4675-a947-54ca41396d4a bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Converting VIF {"id": "4fe57424-f705-4ca6-8802-348f30338843", "address": "fa:16:3e:6b:7a:68", "network": {"id": "2b0835d7-87e4-46cc-8a94-e4e042bd4bad", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1132836552-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "af7365adc05f4624a08a71cd5a77ada6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4fe57424-f7", "ovs_interfaceid": "4fe57424-f705-4ca6-8802-348f30338843", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 02:13:05 np0005548731 nova_compute[232433]: 2025-12-06 07:13:05.138 232437 DEBUG nova.network.os_vif_util [None req-51055c14-f933-4675-a947-54ca41396d4a bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:6b:7a:68,bridge_name='br-int',has_traffic_filtering=True,id=4fe57424-f705-4ca6-8802-348f30338843,network=Network(2b0835d7-87e4-46cc-8a94-e4e042bd4bad),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4fe57424-f7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 02:13:05 np0005548731 nova_compute[232433]: 2025-12-06 07:13:05.139 232437 DEBUG os_vif [None req-51055c14-f933-4675-a947-54ca41396d4a bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:6b:7a:68,bridge_name='br-int',has_traffic_filtering=True,id=4fe57424-f705-4ca6-8802-348f30338843,network=Network(2b0835d7-87e4-46cc-8a94-e4e042bd4bad),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4fe57424-f7') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec  6 02:13:05 np0005548731 nova_compute[232433]: 2025-12-06 07:13:05.140 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:13:05 np0005548731 nova_compute[232433]: 2025-12-06 07:13:05.140 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4fe57424-f7, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:13:05 np0005548731 nova_compute[232433]: 2025-12-06 07:13:05.141 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:13:05 np0005548731 nova_compute[232433]: 2025-12-06 07:13:05.143 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  6 02:13:05 np0005548731 nova_compute[232433]: 2025-12-06 07:13:05.145 232437 INFO os_vif [None req-51055c14-f933-4675-a947-54ca41396d4a bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:6b:7a:68,bridge_name='br-int',has_traffic_filtering=True,id=4fe57424-f705-4ca6-8802-348f30338843,network=Network(2b0835d7-87e4-46cc-8a94-e4e042bd4bad),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4fe57424-f7')#033[00m
Dec  6 02:13:05 np0005548731 nova_compute[232433]: 2025-12-06 07:13:05.630 232437 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765005170.6291854, a4b6c016-d090-4c71-9a1d-1f0debbdf5b0 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 02:13:05 np0005548731 nova_compute[232433]: 2025-12-06 07:13:05.631 232437 INFO nova.compute.manager [-] [instance: a4b6c016-d090-4c71-9a1d-1f0debbdf5b0] VM Stopped (Lifecycle Event)#033[00m
Dec  6 02:13:05 np0005548731 nova_compute[232433]: 2025-12-06 07:13:05.836 232437 DEBUG nova.compute.manager [None req-5d7b00cf-73c3-4188-8631-0fda3066af7b - - - - - -] [instance: a4b6c016-d090-4c71-9a1d-1f0debbdf5b0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:13:05 np0005548731 nova_compute[232433]: 2025-12-06 07:13:05.839 232437 DEBUG nova.compute.manager [None req-5d7b00cf-73c3-4188-8631-0fda3066af7b - - - - - -] [instance: a4b6c016-d090-4c71-9a1d-1f0debbdf5b0] Synchronizing instance power state after lifecycle event "Stopped"; current vm_state: suspended, current task_state: deleting, current DB power_state: 4, VM power_state: 4 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  6 02:13:05 np0005548731 nova_compute[232433]: 2025-12-06 07:13:05.858 232437 INFO nova.compute.manager [None req-5d7b00cf-73c3-4188-8631-0fda3066af7b - - - - - -] [instance: a4b6c016-d090-4c71-9a1d-1f0debbdf5b0] During sync_power_state the instance has a pending task (deleting). Skip.#033[00m
Dec  6 02:13:06 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:13:06 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 02:13:06 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:13:06.282 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 02:13:06 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:13:06 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:13:06 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:13:06.545 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:13:06 np0005548731 nova_compute[232433]: 2025-12-06 07:13:06.740 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:13:07 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e237 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:13:07 np0005548731 nova_compute[232433]: 2025-12-06 07:13:07.660 232437 DEBUG oslo_concurrency.lockutils [None req-f853c39b-54c0-4885-a9e4-a966668cbd27 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Acquiring lock "db2b4e57-20af-415b-ad7b-ac5b7197acb6" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:13:07 np0005548731 nova_compute[232433]: 2025-12-06 07:13:07.660 232437 DEBUG oslo_concurrency.lockutils [None req-f853c39b-54c0-4885-a9e4-a966668cbd27 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Lock "db2b4e57-20af-415b-ad7b-ac5b7197acb6" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:13:07 np0005548731 nova_compute[232433]: 2025-12-06 07:13:07.676 232437 DEBUG nova.compute.manager [None req-f853c39b-54c0-4885-a9e4-a966668cbd27 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] [instance: db2b4e57-20af-415b-ad7b-ac5b7197acb6] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Dec  6 02:13:07 np0005548731 nova_compute[232433]: 2025-12-06 07:13:07.752 232437 DEBUG oslo_concurrency.lockutils [None req-f853c39b-54c0-4885-a9e4-a966668cbd27 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:13:07 np0005548731 nova_compute[232433]: 2025-12-06 07:13:07.753 232437 DEBUG oslo_concurrency.lockutils [None req-f853c39b-54c0-4885-a9e4-a966668cbd27 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:13:07 np0005548731 nova_compute[232433]: 2025-12-06 07:13:07.760 232437 DEBUG nova.virt.hardware [None req-f853c39b-54c0-4885-a9e4-a966668cbd27 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Dec  6 02:13:07 np0005548731 nova_compute[232433]: 2025-12-06 07:13:07.760 232437 INFO nova.compute.claims [None req-f853c39b-54c0-4885-a9e4-a966668cbd27 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] [instance: db2b4e57-20af-415b-ad7b-ac5b7197acb6] Claim successful on node compute-2.ctlplane.example.com#033[00m
Dec  6 02:13:07 np0005548731 nova_compute[232433]: 2025-12-06 07:13:07.893 232437 DEBUG oslo_concurrency.processutils [None req-f853c39b-54c0-4885-a9e4-a966668cbd27 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:13:08 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:13:08 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:13:08 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:13:08.284 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:13:08 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 02:13:08 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/247600855' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 02:13:08 np0005548731 nova_compute[232433]: 2025-12-06 07:13:08.343 232437 DEBUG oslo_concurrency.processutils [None req-f853c39b-54c0-4885-a9e4-a966668cbd27 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.449s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:13:08 np0005548731 nova_compute[232433]: 2025-12-06 07:13:08.348 232437 DEBUG nova.compute.provider_tree [None req-f853c39b-54c0-4885-a9e4-a966668cbd27 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Inventory has not changed in ProviderTree for provider: 6d00757a-082f-486d-ae84-869a2ba2e6e7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  6 02:13:08 np0005548731 nova_compute[232433]: 2025-12-06 07:13:08.367 232437 DEBUG nova.scheduler.client.report [None req-f853c39b-54c0-4885-a9e4-a966668cbd27 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Inventory has not changed for provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  6 02:13:08 np0005548731 nova_compute[232433]: 2025-12-06 07:13:08.392 232437 DEBUG oslo_concurrency.lockutils [None req-f853c39b-54c0-4885-a9e4-a966668cbd27 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.639s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:13:08 np0005548731 nova_compute[232433]: 2025-12-06 07:13:08.393 232437 DEBUG nova.compute.manager [None req-f853c39b-54c0-4885-a9e4-a966668cbd27 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] [instance: db2b4e57-20af-415b-ad7b-ac5b7197acb6] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Dec  6 02:13:08 np0005548731 nova_compute[232433]: 2025-12-06 07:13:08.451 232437 DEBUG nova.compute.manager [None req-f853c39b-54c0-4885-a9e4-a966668cbd27 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] [instance: db2b4e57-20af-415b-ad7b-ac5b7197acb6] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Dec  6 02:13:08 np0005548731 nova_compute[232433]: 2025-12-06 07:13:08.452 232437 DEBUG nova.network.neutron [None req-f853c39b-54c0-4885-a9e4-a966668cbd27 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] [instance: db2b4e57-20af-415b-ad7b-ac5b7197acb6] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Dec  6 02:13:08 np0005548731 nova_compute[232433]: 2025-12-06 07:13:08.478 232437 INFO nova.virt.libvirt.driver [None req-f853c39b-54c0-4885-a9e4-a966668cbd27 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] [instance: db2b4e57-20af-415b-ad7b-ac5b7197acb6] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Dec  6 02:13:08 np0005548731 nova_compute[232433]: 2025-12-06 07:13:08.512 232437 DEBUG nova.compute.manager [None req-f853c39b-54c0-4885-a9e4-a966668cbd27 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] [instance: db2b4e57-20af-415b-ad7b-ac5b7197acb6] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Dec  6 02:13:08 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:13:08 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:13:08 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:13:08.547 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:13:08 np0005548731 nova_compute[232433]: 2025-12-06 07:13:08.631 232437 DEBUG nova.compute.manager [None req-f853c39b-54c0-4885-a9e4-a966668cbd27 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] [instance: db2b4e57-20af-415b-ad7b-ac5b7197acb6] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Dec  6 02:13:08 np0005548731 nova_compute[232433]: 2025-12-06 07:13:08.632 232437 DEBUG nova.virt.libvirt.driver [None req-f853c39b-54c0-4885-a9e4-a966668cbd27 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] [instance: db2b4e57-20af-415b-ad7b-ac5b7197acb6] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Dec  6 02:13:08 np0005548731 nova_compute[232433]: 2025-12-06 07:13:08.633 232437 INFO nova.virt.libvirt.driver [None req-f853c39b-54c0-4885-a9e4-a966668cbd27 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] [instance: db2b4e57-20af-415b-ad7b-ac5b7197acb6] Creating image(s)#033[00m
Dec  6 02:13:08 np0005548731 nova_compute[232433]: 2025-12-06 07:13:08.664 232437 DEBUG nova.storage.rbd_utils [None req-f853c39b-54c0-4885-a9e4-a966668cbd27 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] rbd image db2b4e57-20af-415b-ad7b-ac5b7197acb6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:13:08 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  6 02:13:08 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2172624807' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  6 02:13:08 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  6 02:13:08 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2172624807' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  6 02:13:08 np0005548731 nova_compute[232433]: 2025-12-06 07:13:08.695 232437 DEBUG nova.storage.rbd_utils [None req-f853c39b-54c0-4885-a9e4-a966668cbd27 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] rbd image db2b4e57-20af-415b-ad7b-ac5b7197acb6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:13:08 np0005548731 nova_compute[232433]: 2025-12-06 07:13:08.733 232437 DEBUG nova.storage.rbd_utils [None req-f853c39b-54c0-4885-a9e4-a966668cbd27 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] rbd image db2b4e57-20af-415b-ad7b-ac5b7197acb6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:13:08 np0005548731 nova_compute[232433]: 2025-12-06 07:13:08.736 232437 DEBUG oslo_concurrency.processutils [None req-f853c39b-54c0-4885-a9e4-a966668cbd27 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:13:08 np0005548731 nova_compute[232433]: 2025-12-06 07:13:08.796 232437 DEBUG oslo_concurrency.processutils [None req-f853c39b-54c0-4885-a9e4-a966668cbd27 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:13:08 np0005548731 nova_compute[232433]: 2025-12-06 07:13:08.797 232437 DEBUG oslo_concurrency.lockutils [None req-f853c39b-54c0-4885-a9e4-a966668cbd27 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Acquiring lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:13:08 np0005548731 nova_compute[232433]: 2025-12-06 07:13:08.798 232437 DEBUG oslo_concurrency.lockutils [None req-f853c39b-54c0-4885-a9e4-a966668cbd27 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:13:08 np0005548731 nova_compute[232433]: 2025-12-06 07:13:08.798 232437 DEBUG oslo_concurrency.lockutils [None req-f853c39b-54c0-4885-a9e4-a966668cbd27 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:13:09 np0005548731 nova_compute[232433]: 2025-12-06 07:13:09.962 232437 DEBUG nova.storage.rbd_utils [None req-f853c39b-54c0-4885-a9e4-a966668cbd27 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] rbd image db2b4e57-20af-415b-ad7b-ac5b7197acb6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:13:09 np0005548731 nova_compute[232433]: 2025-12-06 07:13:09.966 232437 DEBUG oslo_concurrency.processutils [None req-f853c39b-54c0-4885-a9e4-a966668cbd27 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef db2b4e57-20af-415b-ad7b-ac5b7197acb6_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:13:09 np0005548731 nova_compute[232433]: 2025-12-06 07:13:09.990 232437 DEBUG nova.policy [None req-f853c39b-54c0-4885-a9e4-a966668cbd27 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'bdd7994b0ebb4035a373b6560aa7dbcf', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'af7365adc05f4624a08a71cd5a77ada6', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Dec  6 02:13:09 np0005548731 nova_compute[232433]: 2025-12-06 07:13:09.993 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:13:09 np0005548731 nova_compute[232433]: 2025-12-06 07:13:09.993 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Dec  6 02:13:10 np0005548731 nova_compute[232433]: 2025-12-06 07:13:10.142 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:13:10 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:13:10 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:13:10 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:13:10.283 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:13:10 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:13:10 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:13:10 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:13:10.550 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:13:10 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e238 e238: 3 total, 3 up, 3 in
Dec  6 02:13:11 np0005548731 nova_compute[232433]: 2025-12-06 07:13:11.142 232437 DEBUG nova.network.neutron [None req-f853c39b-54c0-4885-a9e4-a966668cbd27 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] [instance: db2b4e57-20af-415b-ad7b-ac5b7197acb6] Successfully created port: 1af5cec6-b2ca-49b8-973c-78801de95fd1 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Dec  6 02:13:11 np0005548731 nova_compute[232433]: 2025-12-06 07:13:11.742 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:13:12 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e238 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:13:12 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:13:12 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:13:12 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:13:12.285 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:13:12 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:13:12 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:13:12 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:13:12.553 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:13:12 np0005548731 nova_compute[232433]: 2025-12-06 07:13:12.739 232437 DEBUG nova.network.neutron [None req-f853c39b-54c0-4885-a9e4-a966668cbd27 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] [instance: db2b4e57-20af-415b-ad7b-ac5b7197acb6] Successfully updated port: 1af5cec6-b2ca-49b8-973c-78801de95fd1 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Dec  6 02:13:12 np0005548731 nova_compute[232433]: 2025-12-06 07:13:12.772 232437 DEBUG oslo_concurrency.lockutils [None req-f853c39b-54c0-4885-a9e4-a966668cbd27 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Acquiring lock "refresh_cache-db2b4e57-20af-415b-ad7b-ac5b7197acb6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 02:13:12 np0005548731 nova_compute[232433]: 2025-12-06 07:13:12.772 232437 DEBUG oslo_concurrency.lockutils [None req-f853c39b-54c0-4885-a9e4-a966668cbd27 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Acquired lock "refresh_cache-db2b4e57-20af-415b-ad7b-ac5b7197acb6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 02:13:12 np0005548731 nova_compute[232433]: 2025-12-06 07:13:12.773 232437 DEBUG nova.network.neutron [None req-f853c39b-54c0-4885-a9e4-a966668cbd27 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] [instance: db2b4e57-20af-415b-ad7b-ac5b7197acb6] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec  6 02:13:12 np0005548731 nova_compute[232433]: 2025-12-06 07:13:12.836 232437 DEBUG nova.compute.manager [req-b0c40481-6a8b-42b8-9599-aaef29708058 req-0da0f430-3c37-422b-8a7e-72f7df5b26e2 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: db2b4e57-20af-415b-ad7b-ac5b7197acb6] Received event network-changed-1af5cec6-b2ca-49b8-973c-78801de95fd1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:13:12 np0005548731 nova_compute[232433]: 2025-12-06 07:13:12.836 232437 DEBUG nova.compute.manager [req-b0c40481-6a8b-42b8-9599-aaef29708058 req-0da0f430-3c37-422b-8a7e-72f7df5b26e2 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: db2b4e57-20af-415b-ad7b-ac5b7197acb6] Refreshing instance network info cache due to event network-changed-1af5cec6-b2ca-49b8-973c-78801de95fd1. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  6 02:13:12 np0005548731 nova_compute[232433]: 2025-12-06 07:13:12.837 232437 DEBUG oslo_concurrency.lockutils [req-b0c40481-6a8b-42b8-9599-aaef29708058 req-0da0f430-3c37-422b-8a7e-72f7df5b26e2 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-db2b4e57-20af-415b-ad7b-ac5b7197acb6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 02:13:12 np0005548731 nova_compute[232433]: 2025-12-06 07:13:12.945 232437 DEBUG nova.network.neutron [None req-f853c39b-54c0-4885-a9e4-a966668cbd27 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] [instance: db2b4e57-20af-415b-ad7b-ac5b7197acb6] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec  6 02:13:13 np0005548731 nova_compute[232433]: 2025-12-06 07:13:13.168 232437 DEBUG oslo_concurrency.processutils [None req-f853c39b-54c0-4885-a9e4-a966668cbd27 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef db2b4e57-20af-415b-ad7b-ac5b7197acb6_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 3.202s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:13:13 np0005548731 nova_compute[232433]: 2025-12-06 07:13:13.232 232437 DEBUG nova.storage.rbd_utils [None req-f853c39b-54c0-4885-a9e4-a966668cbd27 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] resizing rbd image db2b4e57-20af-415b-ad7b-ac5b7197acb6_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Dec  6 02:13:13 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e239 e239: 3 total, 3 up, 3 in
Dec  6 02:13:13 np0005548731 nova_compute[232433]: 2025-12-06 07:13:13.602 232437 DEBUG nova.objects.instance [None req-f853c39b-54c0-4885-a9e4-a966668cbd27 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Lazy-loading 'migration_context' on Instance uuid db2b4e57-20af-415b-ad7b-ac5b7197acb6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:13:13 np0005548731 nova_compute[232433]: 2025-12-06 07:13:13.636 232437 INFO nova.virt.libvirt.driver [None req-51055c14-f933-4675-a947-54ca41396d4a bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] [instance: a4b6c016-d090-4c71-9a1d-1f0debbdf5b0] Deleting instance files /var/lib/nova/instances/a4b6c016-d090-4c71-9a1d-1f0debbdf5b0_del#033[00m
Dec  6 02:13:13 np0005548731 nova_compute[232433]: 2025-12-06 07:13:13.638 232437 INFO nova.virt.libvirt.driver [None req-51055c14-f933-4675-a947-54ca41396d4a bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] [instance: a4b6c016-d090-4c71-9a1d-1f0debbdf5b0] Deletion of /var/lib/nova/instances/a4b6c016-d090-4c71-9a1d-1f0debbdf5b0_del complete#033[00m
Dec  6 02:13:13 np0005548731 nova_compute[232433]: 2025-12-06 07:13:13.642 232437 DEBUG nova.virt.libvirt.driver [None req-f853c39b-54c0-4885-a9e4-a966668cbd27 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] [instance: db2b4e57-20af-415b-ad7b-ac5b7197acb6] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Dec  6 02:13:13 np0005548731 nova_compute[232433]: 2025-12-06 07:13:13.643 232437 DEBUG nova.virt.libvirt.driver [None req-f853c39b-54c0-4885-a9e4-a966668cbd27 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] [instance: db2b4e57-20af-415b-ad7b-ac5b7197acb6] Ensure instance console log exists: /var/lib/nova/instances/db2b4e57-20af-415b-ad7b-ac5b7197acb6/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec  6 02:13:13 np0005548731 nova_compute[232433]: 2025-12-06 07:13:13.644 232437 DEBUG oslo_concurrency.lockutils [None req-f853c39b-54c0-4885-a9e4-a966668cbd27 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:13:13 np0005548731 nova_compute[232433]: 2025-12-06 07:13:13.644 232437 DEBUG oslo_concurrency.lockutils [None req-f853c39b-54c0-4885-a9e4-a966668cbd27 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:13:13 np0005548731 nova_compute[232433]: 2025-12-06 07:13:13.644 232437 DEBUG oslo_concurrency.lockutils [None req-f853c39b-54c0-4885-a9e4-a966668cbd27 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:13:13 np0005548731 nova_compute[232433]: 2025-12-06 07:13:13.751 232437 INFO nova.compute.manager [None req-51055c14-f933-4675-a947-54ca41396d4a bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] [instance: a4b6c016-d090-4c71-9a1d-1f0debbdf5b0] Took 8.63 seconds to destroy the instance on the hypervisor.#033[00m
Dec  6 02:13:13 np0005548731 nova_compute[232433]: 2025-12-06 07:13:13.751 232437 DEBUG oslo.service.loopingcall [None req-51055c14-f933-4675-a947-54ca41396d4a bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Dec  6 02:13:13 np0005548731 nova_compute[232433]: 2025-12-06 07:13:13.752 232437 DEBUG nova.compute.manager [-] [instance: a4b6c016-d090-4c71-9a1d-1f0debbdf5b0] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Dec  6 02:13:13 np0005548731 nova_compute[232433]: 2025-12-06 07:13:13.752 232437 DEBUG nova.network.neutron [-] [instance: a4b6c016-d090-4c71-9a1d-1f0debbdf5b0] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Dec  6 02:13:14 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:13:14 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:13:14 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:13:14.286 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:13:14 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:13:14 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:13:14 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:13:14.555 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:13:14 np0005548731 nova_compute[232433]: 2025-12-06 07:13:14.669 232437 DEBUG nova.network.neutron [None req-f853c39b-54c0-4885-a9e4-a966668cbd27 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] [instance: db2b4e57-20af-415b-ad7b-ac5b7197acb6] Updating instance_info_cache with network_info: [{"id": "1af5cec6-b2ca-49b8-973c-78801de95fd1", "address": "fa:16:3e:76:88:06", "network": {"id": "2b0835d7-87e4-46cc-8a94-e4e042bd4bad", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1132836552-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "af7365adc05f4624a08a71cd5a77ada6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1af5cec6-b2", "ovs_interfaceid": "1af5cec6-b2ca-49b8-973c-78801de95fd1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 02:13:14 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e240 e240: 3 total, 3 up, 3 in
Dec  6 02:13:14 np0005548731 nova_compute[232433]: 2025-12-06 07:13:14.733 232437 DEBUG oslo_concurrency.lockutils [None req-f853c39b-54c0-4885-a9e4-a966668cbd27 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Releasing lock "refresh_cache-db2b4e57-20af-415b-ad7b-ac5b7197acb6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 02:13:14 np0005548731 nova_compute[232433]: 2025-12-06 07:13:14.733 232437 DEBUG nova.compute.manager [None req-f853c39b-54c0-4885-a9e4-a966668cbd27 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] [instance: db2b4e57-20af-415b-ad7b-ac5b7197acb6] Instance network_info: |[{"id": "1af5cec6-b2ca-49b8-973c-78801de95fd1", "address": "fa:16:3e:76:88:06", "network": {"id": "2b0835d7-87e4-46cc-8a94-e4e042bd4bad", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1132836552-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "af7365adc05f4624a08a71cd5a77ada6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1af5cec6-b2", "ovs_interfaceid": "1af5cec6-b2ca-49b8-973c-78801de95fd1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Dec  6 02:13:14 np0005548731 nova_compute[232433]: 2025-12-06 07:13:14.734 232437 DEBUG oslo_concurrency.lockutils [req-b0c40481-6a8b-42b8-9599-aaef29708058 req-0da0f430-3c37-422b-8a7e-72f7df5b26e2 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-db2b4e57-20af-415b-ad7b-ac5b7197acb6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 02:13:14 np0005548731 nova_compute[232433]: 2025-12-06 07:13:14.734 232437 DEBUG nova.network.neutron [req-b0c40481-6a8b-42b8-9599-aaef29708058 req-0da0f430-3c37-422b-8a7e-72f7df5b26e2 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: db2b4e57-20af-415b-ad7b-ac5b7197acb6] Refreshing network info cache for port 1af5cec6-b2ca-49b8-973c-78801de95fd1 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  6 02:13:14 np0005548731 nova_compute[232433]: 2025-12-06 07:13:14.737 232437 DEBUG nova.virt.libvirt.driver [None req-f853c39b-54c0-4885-a9e4-a966668cbd27 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] [instance: db2b4e57-20af-415b-ad7b-ac5b7197acb6] Start _get_guest_xml network_info=[{"id": "1af5cec6-b2ca-49b8-973c-78801de95fd1", "address": "fa:16:3e:76:88:06", "network": {"id": "2b0835d7-87e4-46cc-8a94-e4e042bd4bad", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1132836552-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "af7365adc05f4624a08a71cd5a77ada6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1af5cec6-b2", "ovs_interfaceid": "1af5cec6-b2ca-49b8-973c-78801de95fd1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'device_type': 'disk', 'encryption_secret_uuid': None, 'disk_bus': 'virtio', 'boot_index': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'encryption_options': None, 'size': 0, 'encrypted': False, 'image_id': '6efab05d-c7cf-4770-a5c3-c806a2739063'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec  6 02:13:14 np0005548731 nova_compute[232433]: 2025-12-06 07:13:14.742 232437 WARNING nova.virt.libvirt.driver [None req-f853c39b-54c0-4885-a9e4-a966668cbd27 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  6 02:13:14 np0005548731 nova_compute[232433]: 2025-12-06 07:13:14.747 232437 DEBUG nova.virt.libvirt.host [None req-f853c39b-54c0-4885-a9e4-a966668cbd27 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec  6 02:13:14 np0005548731 nova_compute[232433]: 2025-12-06 07:13:14.747 232437 DEBUG nova.virt.libvirt.host [None req-f853c39b-54c0-4885-a9e4-a966668cbd27 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec  6 02:13:14 np0005548731 nova_compute[232433]: 2025-12-06 07:13:14.750 232437 DEBUG nova.virt.libvirt.host [None req-f853c39b-54c0-4885-a9e4-a966668cbd27 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec  6 02:13:14 np0005548731 nova_compute[232433]: 2025-12-06 07:13:14.751 232437 DEBUG nova.virt.libvirt.host [None req-f853c39b-54c0-4885-a9e4-a966668cbd27 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec  6 02:13:14 np0005548731 nova_compute[232433]: 2025-12-06 07:13:14.752 232437 DEBUG nova.virt.libvirt.driver [None req-f853c39b-54c0-4885-a9e4-a966668cbd27 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec  6 02:13:14 np0005548731 nova_compute[232433]: 2025-12-06 07:13:14.752 232437 DEBUG nova.virt.hardware [None req-f853c39b-54c0-4885-a9e4-a966668cbd27 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-06T06:56:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='25848a18-11d9-4f11-80b5-5d005675c76d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec  6 02:13:14 np0005548731 nova_compute[232433]: 2025-12-06 07:13:14.753 232437 DEBUG nova.virt.hardware [None req-f853c39b-54c0-4885-a9e4-a966668cbd27 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec  6 02:13:14 np0005548731 nova_compute[232433]: 2025-12-06 07:13:14.753 232437 DEBUG nova.virt.hardware [None req-f853c39b-54c0-4885-a9e4-a966668cbd27 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec  6 02:13:14 np0005548731 nova_compute[232433]: 2025-12-06 07:13:14.753 232437 DEBUG nova.virt.hardware [None req-f853c39b-54c0-4885-a9e4-a966668cbd27 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec  6 02:13:14 np0005548731 nova_compute[232433]: 2025-12-06 07:13:14.753 232437 DEBUG nova.virt.hardware [None req-f853c39b-54c0-4885-a9e4-a966668cbd27 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec  6 02:13:14 np0005548731 nova_compute[232433]: 2025-12-06 07:13:14.754 232437 DEBUG nova.virt.hardware [None req-f853c39b-54c0-4885-a9e4-a966668cbd27 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec  6 02:13:14 np0005548731 nova_compute[232433]: 2025-12-06 07:13:14.754 232437 DEBUG nova.virt.hardware [None req-f853c39b-54c0-4885-a9e4-a966668cbd27 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec  6 02:13:14 np0005548731 nova_compute[232433]: 2025-12-06 07:13:14.754 232437 DEBUG nova.virt.hardware [None req-f853c39b-54c0-4885-a9e4-a966668cbd27 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec  6 02:13:14 np0005548731 nova_compute[232433]: 2025-12-06 07:13:14.754 232437 DEBUG nova.virt.hardware [None req-f853c39b-54c0-4885-a9e4-a966668cbd27 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec  6 02:13:14 np0005548731 nova_compute[232433]: 2025-12-06 07:13:14.755 232437 DEBUG nova.virt.hardware [None req-f853c39b-54c0-4885-a9e4-a966668cbd27 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec  6 02:13:14 np0005548731 nova_compute[232433]: 2025-12-06 07:13:14.755 232437 DEBUG nova.virt.hardware [None req-f853c39b-54c0-4885-a9e4-a966668cbd27 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec  6 02:13:14 np0005548731 nova_compute[232433]: 2025-12-06 07:13:14.758 232437 DEBUG oslo_concurrency.processutils [None req-f853c39b-54c0-4885-a9e4-a966668cbd27 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:13:15 np0005548731 nova_compute[232433]: 2025-12-06 07:13:15.143 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:13:15 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  6 02:13:15 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1671056831' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  6 02:13:15 np0005548731 nova_compute[232433]: 2025-12-06 07:13:15.186 232437 DEBUG oslo_concurrency.processutils [None req-f853c39b-54c0-4885-a9e4-a966668cbd27 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.427s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:13:15 np0005548731 nova_compute[232433]: 2025-12-06 07:13:15.208 232437 DEBUG nova.storage.rbd_utils [None req-f853c39b-54c0-4885-a9e4-a966668cbd27 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] rbd image db2b4e57-20af-415b-ad7b-ac5b7197acb6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:13:15 np0005548731 nova_compute[232433]: 2025-12-06 07:13:15.211 232437 DEBUG oslo_concurrency.processutils [None req-f853c39b-54c0-4885-a9e4-a966668cbd27 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:13:15 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  6 02:13:15 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2367303753' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  6 02:13:15 np0005548731 nova_compute[232433]: 2025-12-06 07:13:15.658 232437 DEBUG oslo_concurrency.processutils [None req-f853c39b-54c0-4885-a9e4-a966668cbd27 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:13:15 np0005548731 nova_compute[232433]: 2025-12-06 07:13:15.660 232437 DEBUG nova.virt.libvirt.vif [None req-f853c39b-54c0-4885-a9e4-a966668cbd27 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T07:13:06Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-356620768',display_name='tempest-ImagesTestJSON-server-356620768',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-imagestestjson-server-356620768',id=61,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='af7365adc05f4624a08a71cd5a77ada6',ramdisk_id='',reservation_id='r-ko38uyil',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesTestJSON-134159412',owner_user_name='tempest-ImagesTestJSON-134159412-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:13:08Z,user_data=None,user_id='bdd7994b0ebb4035a373b6560aa7dbcf',uuid=db2b4e57-20af-415b-ad7b-ac5b7197acb6,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1af5cec6-b2ca-49b8-973c-78801de95fd1", "address": "fa:16:3e:76:88:06", "network": {"id": "2b0835d7-87e4-46cc-8a94-e4e042bd4bad", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1132836552-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "af7365adc05f4624a08a71cd5a77ada6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1af5cec6-b2", "ovs_interfaceid": "1af5cec6-b2ca-49b8-973c-78801de95fd1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec  6 02:13:15 np0005548731 nova_compute[232433]: 2025-12-06 07:13:15.661 232437 DEBUG nova.network.os_vif_util [None req-f853c39b-54c0-4885-a9e4-a966668cbd27 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Converting VIF {"id": "1af5cec6-b2ca-49b8-973c-78801de95fd1", "address": "fa:16:3e:76:88:06", "network": {"id": "2b0835d7-87e4-46cc-8a94-e4e042bd4bad", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1132836552-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "af7365adc05f4624a08a71cd5a77ada6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1af5cec6-b2", "ovs_interfaceid": "1af5cec6-b2ca-49b8-973c-78801de95fd1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 02:13:15 np0005548731 nova_compute[232433]: 2025-12-06 07:13:15.662 232437 DEBUG nova.network.os_vif_util [None req-f853c39b-54c0-4885-a9e4-a966668cbd27 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:76:88:06,bridge_name='br-int',has_traffic_filtering=True,id=1af5cec6-b2ca-49b8-973c-78801de95fd1,network=Network(2b0835d7-87e4-46cc-8a94-e4e042bd4bad),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1af5cec6-b2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 02:13:15 np0005548731 nova_compute[232433]: 2025-12-06 07:13:15.663 232437 DEBUG nova.objects.instance [None req-f853c39b-54c0-4885-a9e4-a966668cbd27 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Lazy-loading 'pci_devices' on Instance uuid db2b4e57-20af-415b-ad7b-ac5b7197acb6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:13:15 np0005548731 nova_compute[232433]: 2025-12-06 07:13:15.790 232437 DEBUG nova.virt.libvirt.driver [None req-f853c39b-54c0-4885-a9e4-a966668cbd27 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] [instance: db2b4e57-20af-415b-ad7b-ac5b7197acb6] End _get_guest_xml xml=<domain type="kvm">
Dec  6 02:13:15 np0005548731 nova_compute[232433]:  <uuid>db2b4e57-20af-415b-ad7b-ac5b7197acb6</uuid>
Dec  6 02:13:15 np0005548731 nova_compute[232433]:  <name>instance-0000003d</name>
Dec  6 02:13:15 np0005548731 nova_compute[232433]:  <memory>131072</memory>
Dec  6 02:13:15 np0005548731 nova_compute[232433]:  <vcpu>1</vcpu>
Dec  6 02:13:15 np0005548731 nova_compute[232433]:  <metadata>
Dec  6 02:13:15 np0005548731 nova_compute[232433]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec  6 02:13:15 np0005548731 nova_compute[232433]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec  6 02:13:15 np0005548731 nova_compute[232433]:      <nova:name>tempest-ImagesTestJSON-server-356620768</nova:name>
Dec  6 02:13:15 np0005548731 nova_compute[232433]:      <nova:creationTime>2025-12-06 07:13:14</nova:creationTime>
Dec  6 02:13:15 np0005548731 nova_compute[232433]:      <nova:flavor name="m1.nano">
Dec  6 02:13:15 np0005548731 nova_compute[232433]:        <nova:memory>128</nova:memory>
Dec  6 02:13:15 np0005548731 nova_compute[232433]:        <nova:disk>1</nova:disk>
Dec  6 02:13:15 np0005548731 nova_compute[232433]:        <nova:swap>0</nova:swap>
Dec  6 02:13:15 np0005548731 nova_compute[232433]:        <nova:ephemeral>0</nova:ephemeral>
Dec  6 02:13:15 np0005548731 nova_compute[232433]:        <nova:vcpus>1</nova:vcpus>
Dec  6 02:13:15 np0005548731 nova_compute[232433]:      </nova:flavor>
Dec  6 02:13:15 np0005548731 nova_compute[232433]:      <nova:owner>
Dec  6 02:13:15 np0005548731 nova_compute[232433]:        <nova:user uuid="bdd7994b0ebb4035a373b6560aa7dbcf">tempest-ImagesTestJSON-134159412-project-member</nova:user>
Dec  6 02:13:15 np0005548731 nova_compute[232433]:        <nova:project uuid="af7365adc05f4624a08a71cd5a77ada6">tempest-ImagesTestJSON-134159412</nova:project>
Dec  6 02:13:15 np0005548731 nova_compute[232433]:      </nova:owner>
Dec  6 02:13:15 np0005548731 nova_compute[232433]:      <nova:root type="image" uuid="6efab05d-c7cf-4770-a5c3-c806a2739063"/>
Dec  6 02:13:15 np0005548731 nova_compute[232433]:      <nova:ports>
Dec  6 02:13:15 np0005548731 nova_compute[232433]:        <nova:port uuid="1af5cec6-b2ca-49b8-973c-78801de95fd1">
Dec  6 02:13:15 np0005548731 nova_compute[232433]:          <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Dec  6 02:13:15 np0005548731 nova_compute[232433]:        </nova:port>
Dec  6 02:13:15 np0005548731 nova_compute[232433]:      </nova:ports>
Dec  6 02:13:15 np0005548731 nova_compute[232433]:    </nova:instance>
Dec  6 02:13:15 np0005548731 nova_compute[232433]:  </metadata>
Dec  6 02:13:15 np0005548731 nova_compute[232433]:  <sysinfo type="smbios">
Dec  6 02:13:15 np0005548731 nova_compute[232433]:    <system>
Dec  6 02:13:15 np0005548731 nova_compute[232433]:      <entry name="manufacturer">RDO</entry>
Dec  6 02:13:15 np0005548731 nova_compute[232433]:      <entry name="product">OpenStack Compute</entry>
Dec  6 02:13:15 np0005548731 nova_compute[232433]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec  6 02:13:15 np0005548731 nova_compute[232433]:      <entry name="serial">db2b4e57-20af-415b-ad7b-ac5b7197acb6</entry>
Dec  6 02:13:15 np0005548731 nova_compute[232433]:      <entry name="uuid">db2b4e57-20af-415b-ad7b-ac5b7197acb6</entry>
Dec  6 02:13:15 np0005548731 nova_compute[232433]:      <entry name="family">Virtual Machine</entry>
Dec  6 02:13:15 np0005548731 nova_compute[232433]:    </system>
Dec  6 02:13:15 np0005548731 nova_compute[232433]:  </sysinfo>
Dec  6 02:13:15 np0005548731 nova_compute[232433]:  <os>
Dec  6 02:13:15 np0005548731 nova_compute[232433]:    <type arch="x86_64" machine="q35">hvm</type>
Dec  6 02:13:15 np0005548731 nova_compute[232433]:    <boot dev="hd"/>
Dec  6 02:13:15 np0005548731 nova_compute[232433]:    <smbios mode="sysinfo"/>
Dec  6 02:13:15 np0005548731 nova_compute[232433]:  </os>
Dec  6 02:13:15 np0005548731 nova_compute[232433]:  <features>
Dec  6 02:13:15 np0005548731 nova_compute[232433]:    <acpi/>
Dec  6 02:13:15 np0005548731 nova_compute[232433]:    <apic/>
Dec  6 02:13:15 np0005548731 nova_compute[232433]:    <vmcoreinfo/>
Dec  6 02:13:15 np0005548731 nova_compute[232433]:  </features>
Dec  6 02:13:15 np0005548731 nova_compute[232433]:  <clock offset="utc">
Dec  6 02:13:15 np0005548731 nova_compute[232433]:    <timer name="pit" tickpolicy="delay"/>
Dec  6 02:13:15 np0005548731 nova_compute[232433]:    <timer name="rtc" tickpolicy="catchup"/>
Dec  6 02:13:15 np0005548731 nova_compute[232433]:    <timer name="hpet" present="no"/>
Dec  6 02:13:15 np0005548731 nova_compute[232433]:  </clock>
Dec  6 02:13:15 np0005548731 nova_compute[232433]:  <cpu mode="custom" match="exact">
Dec  6 02:13:15 np0005548731 nova_compute[232433]:    <model>Nehalem</model>
Dec  6 02:13:15 np0005548731 nova_compute[232433]:    <topology sockets="1" cores="1" threads="1"/>
Dec  6 02:13:15 np0005548731 nova_compute[232433]:  </cpu>
Dec  6 02:13:15 np0005548731 nova_compute[232433]:  <devices>
Dec  6 02:13:15 np0005548731 nova_compute[232433]:    <disk type="network" device="disk">
Dec  6 02:13:15 np0005548731 nova_compute[232433]:      <driver type="raw" cache="none"/>
Dec  6 02:13:15 np0005548731 nova_compute[232433]:      <source protocol="rbd" name="vms/db2b4e57-20af-415b-ad7b-ac5b7197acb6_disk">
Dec  6 02:13:15 np0005548731 nova_compute[232433]:        <host name="192.168.122.100" port="6789"/>
Dec  6 02:13:15 np0005548731 nova_compute[232433]:        <host name="192.168.122.102" port="6789"/>
Dec  6 02:13:15 np0005548731 nova_compute[232433]:        <host name="192.168.122.101" port="6789"/>
Dec  6 02:13:15 np0005548731 nova_compute[232433]:      </source>
Dec  6 02:13:15 np0005548731 nova_compute[232433]:      <auth username="openstack">
Dec  6 02:13:15 np0005548731 nova_compute[232433]:        <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec  6 02:13:15 np0005548731 nova_compute[232433]:      </auth>
Dec  6 02:13:15 np0005548731 nova_compute[232433]:      <target dev="vda" bus="virtio"/>
Dec  6 02:13:15 np0005548731 nova_compute[232433]:    </disk>
Dec  6 02:13:15 np0005548731 nova_compute[232433]:    <disk type="network" device="cdrom">
Dec  6 02:13:15 np0005548731 nova_compute[232433]:      <driver type="raw" cache="none"/>
Dec  6 02:13:15 np0005548731 nova_compute[232433]:      <source protocol="rbd" name="vms/db2b4e57-20af-415b-ad7b-ac5b7197acb6_disk.config">
Dec  6 02:13:15 np0005548731 nova_compute[232433]:        <host name="192.168.122.100" port="6789"/>
Dec  6 02:13:15 np0005548731 nova_compute[232433]:        <host name="192.168.122.102" port="6789"/>
Dec  6 02:13:15 np0005548731 nova_compute[232433]:        <host name="192.168.122.101" port="6789"/>
Dec  6 02:13:15 np0005548731 nova_compute[232433]:      </source>
Dec  6 02:13:15 np0005548731 nova_compute[232433]:      <auth username="openstack">
Dec  6 02:13:15 np0005548731 nova_compute[232433]:        <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec  6 02:13:15 np0005548731 nova_compute[232433]:      </auth>
Dec  6 02:13:15 np0005548731 nova_compute[232433]:      <target dev="sda" bus="sata"/>
Dec  6 02:13:15 np0005548731 nova_compute[232433]:    </disk>
Dec  6 02:13:15 np0005548731 nova_compute[232433]:    <interface type="ethernet">
Dec  6 02:13:15 np0005548731 nova_compute[232433]:      <mac address="fa:16:3e:76:88:06"/>
Dec  6 02:13:15 np0005548731 nova_compute[232433]:      <model type="virtio"/>
Dec  6 02:13:15 np0005548731 nova_compute[232433]:      <driver name="vhost" rx_queue_size="512"/>
Dec  6 02:13:15 np0005548731 nova_compute[232433]:      <mtu size="1442"/>
Dec  6 02:13:15 np0005548731 nova_compute[232433]:      <target dev="tap1af5cec6-b2"/>
Dec  6 02:13:15 np0005548731 nova_compute[232433]:    </interface>
Dec  6 02:13:15 np0005548731 nova_compute[232433]:    <serial type="pty">
Dec  6 02:13:15 np0005548731 nova_compute[232433]:      <log file="/var/lib/nova/instances/db2b4e57-20af-415b-ad7b-ac5b7197acb6/console.log" append="off"/>
Dec  6 02:13:15 np0005548731 nova_compute[232433]:    </serial>
Dec  6 02:13:15 np0005548731 nova_compute[232433]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec  6 02:13:15 np0005548731 nova_compute[232433]:    <video>
Dec  6 02:13:15 np0005548731 nova_compute[232433]:      <model type="virtio"/>
Dec  6 02:13:15 np0005548731 nova_compute[232433]:    </video>
Dec  6 02:13:15 np0005548731 nova_compute[232433]:    <input type="tablet" bus="usb"/>
Dec  6 02:13:15 np0005548731 nova_compute[232433]:    <rng model="virtio">
Dec  6 02:13:15 np0005548731 nova_compute[232433]:      <backend model="random">/dev/urandom</backend>
Dec  6 02:13:15 np0005548731 nova_compute[232433]:    </rng>
Dec  6 02:13:15 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root"/>
Dec  6 02:13:15 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:13:15 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:13:15 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:13:15 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:13:15 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:13:15 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:13:15 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:13:15 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:13:15 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:13:15 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:13:15 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:13:15 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:13:15 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:13:15 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:13:15 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:13:15 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:13:15 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:13:15 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:13:15 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:13:15 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:13:15 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:13:15 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:13:15 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:13:15 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:13:15 np0005548731 nova_compute[232433]:    <controller type="usb" index="0"/>
Dec  6 02:13:15 np0005548731 nova_compute[232433]:    <memballoon model="virtio">
Dec  6 02:13:15 np0005548731 nova_compute[232433]:      <stats period="10"/>
Dec  6 02:13:15 np0005548731 nova_compute[232433]:    </memballoon>
Dec  6 02:13:15 np0005548731 nova_compute[232433]:  </devices>
Dec  6 02:13:15 np0005548731 nova_compute[232433]: </domain>
Dec  6 02:13:15 np0005548731 nova_compute[232433]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec  6 02:13:15 np0005548731 nova_compute[232433]: 2025-12-06 07:13:15.792 232437 DEBUG nova.compute.manager [None req-f853c39b-54c0-4885-a9e4-a966668cbd27 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] [instance: db2b4e57-20af-415b-ad7b-ac5b7197acb6] Preparing to wait for external event network-vif-plugged-1af5cec6-b2ca-49b8-973c-78801de95fd1 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec  6 02:13:15 np0005548731 nova_compute[232433]: 2025-12-06 07:13:15.792 232437 DEBUG oslo_concurrency.lockutils [None req-f853c39b-54c0-4885-a9e4-a966668cbd27 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Acquiring lock "db2b4e57-20af-415b-ad7b-ac5b7197acb6-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:13:15 np0005548731 nova_compute[232433]: 2025-12-06 07:13:15.793 232437 DEBUG oslo_concurrency.lockutils [None req-f853c39b-54c0-4885-a9e4-a966668cbd27 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Lock "db2b4e57-20af-415b-ad7b-ac5b7197acb6-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:13:15 np0005548731 nova_compute[232433]: 2025-12-06 07:13:15.793 232437 DEBUG oslo_concurrency.lockutils [None req-f853c39b-54c0-4885-a9e4-a966668cbd27 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Lock "db2b4e57-20af-415b-ad7b-ac5b7197acb6-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:13:15 np0005548731 nova_compute[232433]: 2025-12-06 07:13:15.794 232437 DEBUG nova.virt.libvirt.vif [None req-f853c39b-54c0-4885-a9e4-a966668cbd27 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T07:13:06Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-356620768',display_name='tempest-ImagesTestJSON-server-356620768',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-imagestestjson-server-356620768',id=61,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='af7365adc05f4624a08a71cd5a77ada6',ramdisk_id='',reservation_id='r-ko38uyil',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesTestJSON-134159412',owner_user_name='tempest-ImagesTestJSON-134159412-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:13:08Z,user_data=None,user_id='bdd7994b0ebb4035a373b6560aa7dbcf',uuid=db2b4e57-20af-415b-ad7b-ac5b7197acb6,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1af5cec6-b2ca-49b8-973c-78801de95fd1", "address": "fa:16:3e:76:88:06", "network": {"id": "2b0835d7-87e4-46cc-8a94-e4e042bd4bad", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1132836552-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "af7365adc05f4624a08a71cd5a77ada6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1af5cec6-b2", "ovs_interfaceid": "1af5cec6-b2ca-49b8-973c-78801de95fd1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec  6 02:13:15 np0005548731 nova_compute[232433]: 2025-12-06 07:13:15.794 232437 DEBUG nova.network.os_vif_util [None req-f853c39b-54c0-4885-a9e4-a966668cbd27 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Converting VIF {"id": "1af5cec6-b2ca-49b8-973c-78801de95fd1", "address": "fa:16:3e:76:88:06", "network": {"id": "2b0835d7-87e4-46cc-8a94-e4e042bd4bad", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1132836552-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "af7365adc05f4624a08a71cd5a77ada6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1af5cec6-b2", "ovs_interfaceid": "1af5cec6-b2ca-49b8-973c-78801de95fd1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 02:13:15 np0005548731 nova_compute[232433]: 2025-12-06 07:13:15.794 232437 DEBUG nova.network.os_vif_util [None req-f853c39b-54c0-4885-a9e4-a966668cbd27 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:76:88:06,bridge_name='br-int',has_traffic_filtering=True,id=1af5cec6-b2ca-49b8-973c-78801de95fd1,network=Network(2b0835d7-87e4-46cc-8a94-e4e042bd4bad),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1af5cec6-b2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 02:13:15 np0005548731 nova_compute[232433]: 2025-12-06 07:13:15.795 232437 DEBUG os_vif [None req-f853c39b-54c0-4885-a9e4-a966668cbd27 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:76:88:06,bridge_name='br-int',has_traffic_filtering=True,id=1af5cec6-b2ca-49b8-973c-78801de95fd1,network=Network(2b0835d7-87e4-46cc-8a94-e4e042bd4bad),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1af5cec6-b2') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec  6 02:13:15 np0005548731 nova_compute[232433]: 2025-12-06 07:13:15.796 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:13:15 np0005548731 nova_compute[232433]: 2025-12-06 07:13:15.796 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:13:15 np0005548731 nova_compute[232433]: 2025-12-06 07:13:15.797 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  6 02:13:15 np0005548731 nova_compute[232433]: 2025-12-06 07:13:15.799 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:13:15 np0005548731 nova_compute[232433]: 2025-12-06 07:13:15.799 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1af5cec6-b2, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:13:15 np0005548731 nova_compute[232433]: 2025-12-06 07:13:15.800 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap1af5cec6-b2, col_values=(('external_ids', {'iface-id': '1af5cec6-b2ca-49b8-973c-78801de95fd1', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:76:88:06', 'vm-uuid': 'db2b4e57-20af-415b-ad7b-ac5b7197acb6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:13:15 np0005548731 NetworkManager[49182]: <info>  [1765005195.8019] manager: (tap1af5cec6-b2): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/107)
Dec  6 02:13:15 np0005548731 nova_compute[232433]: 2025-12-06 07:13:15.802 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  6 02:13:15 np0005548731 nova_compute[232433]: 2025-12-06 07:13:15.807 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:13:15 np0005548731 nova_compute[232433]: 2025-12-06 07:13:15.808 232437 INFO os_vif [None req-f853c39b-54c0-4885-a9e4-a966668cbd27 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:76:88:06,bridge_name='br-int',has_traffic_filtering=True,id=1af5cec6-b2ca-49b8-973c-78801de95fd1,network=Network(2b0835d7-87e4-46cc-8a94-e4e042bd4bad),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1af5cec6-b2')#033[00m
Dec  6 02:13:15 np0005548731 nova_compute[232433]: 2025-12-06 07:13:15.842 232437 DEBUG nova.network.neutron [-] [instance: a4b6c016-d090-4c71-9a1d-1f0debbdf5b0] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 02:13:15 np0005548731 nova_compute[232433]: 2025-12-06 07:13:15.967 232437 INFO nova.compute.manager [-] [instance: a4b6c016-d090-4c71-9a1d-1f0debbdf5b0] Took 2.22 seconds to deallocate network for instance.#033[00m
Dec  6 02:13:15 np0005548731 nova_compute[232433]: 2025-12-06 07:13:15.975 232437 DEBUG nova.virt.libvirt.driver [None req-f853c39b-54c0-4885-a9e4-a966668cbd27 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  6 02:13:15 np0005548731 nova_compute[232433]: 2025-12-06 07:13:15.976 232437 DEBUG nova.virt.libvirt.driver [None req-f853c39b-54c0-4885-a9e4-a966668cbd27 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  6 02:13:15 np0005548731 nova_compute[232433]: 2025-12-06 07:13:15.976 232437 DEBUG nova.virt.libvirt.driver [None req-f853c39b-54c0-4885-a9e4-a966668cbd27 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] No VIF found with MAC fa:16:3e:76:88:06, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec  6 02:13:15 np0005548731 nova_compute[232433]: 2025-12-06 07:13:15.976 232437 INFO nova.virt.libvirt.driver [None req-f853c39b-54c0-4885-a9e4-a966668cbd27 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] [instance: db2b4e57-20af-415b-ad7b-ac5b7197acb6] Using config drive#033[00m
Dec  6 02:13:16 np0005548731 nova_compute[232433]: 2025-12-06 07:13:16.053 232437 DEBUG nova.storage.rbd_utils [None req-f853c39b-54c0-4885-a9e4-a966668cbd27 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] rbd image db2b4e57-20af-415b-ad7b-ac5b7197acb6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:13:16 np0005548731 nova_compute[232433]: 2025-12-06 07:13:16.136 232437 DEBUG oslo_concurrency.lockutils [None req-51055c14-f933-4675-a947-54ca41396d4a bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:13:16 np0005548731 nova_compute[232433]: 2025-12-06 07:13:16.136 232437 DEBUG oslo_concurrency.lockutils [None req-51055c14-f933-4675-a947-54ca41396d4a bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:13:16 np0005548731 nova_compute[232433]: 2025-12-06 07:13:16.197 232437 DEBUG oslo_concurrency.processutils [None req-51055c14-f933-4675-a947-54ca41396d4a bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:13:16 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:13:16 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:13:16 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:13:16.288 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:13:16 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:13:16 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 02:13:16 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:13:16.557 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 02:13:16 np0005548731 nova_compute[232433]: 2025-12-06 07:13:16.559 232437 INFO nova.virt.libvirt.driver [None req-f853c39b-54c0-4885-a9e4-a966668cbd27 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] [instance: db2b4e57-20af-415b-ad7b-ac5b7197acb6] Creating config drive at /var/lib/nova/instances/db2b4e57-20af-415b-ad7b-ac5b7197acb6/disk.config#033[00m
Dec  6 02:13:16 np0005548731 nova_compute[232433]: 2025-12-06 07:13:16.567 232437 DEBUG oslo_concurrency.processutils [None req-f853c39b-54c0-4885-a9e4-a966668cbd27 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/db2b4e57-20af-415b-ad7b-ac5b7197acb6/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpkcbx4123 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:13:16 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 02:13:16 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2437772098' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 02:13:16 np0005548731 nova_compute[232433]: 2025-12-06 07:13:16.635 232437 DEBUG oslo_concurrency.processutils [None req-51055c14-f933-4675-a947-54ca41396d4a bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.438s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:13:16 np0005548731 nova_compute[232433]: 2025-12-06 07:13:16.641 232437 DEBUG nova.compute.provider_tree [None req-51055c14-f933-4675-a947-54ca41396d4a bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Inventory has not changed in ProviderTree for provider: 6d00757a-082f-486d-ae84-869a2ba2e6e7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  6 02:13:16 np0005548731 nova_compute[232433]: 2025-12-06 07:13:16.677 232437 DEBUG nova.scheduler.client.report [None req-51055c14-f933-4675-a947-54ca41396d4a bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Inventory has not changed for provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  6 02:13:16 np0005548731 nova_compute[232433]: 2025-12-06 07:13:16.700 232437 DEBUG oslo_concurrency.processutils [None req-f853c39b-54c0-4885-a9e4-a966668cbd27 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/db2b4e57-20af-415b-ad7b-ac5b7197acb6/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpkcbx4123" returned: 0 in 0.133s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:13:16 np0005548731 nova_compute[232433]: 2025-12-06 07:13:16.728 232437 DEBUG nova.storage.rbd_utils [None req-f853c39b-54c0-4885-a9e4-a966668cbd27 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] rbd image db2b4e57-20af-415b-ad7b-ac5b7197acb6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:13:16 np0005548731 nova_compute[232433]: 2025-12-06 07:13:16.732 232437 DEBUG oslo_concurrency.processutils [None req-f853c39b-54c0-4885-a9e4-a966668cbd27 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/db2b4e57-20af-415b-ad7b-ac5b7197acb6/disk.config db2b4e57-20af-415b-ad7b-ac5b7197acb6_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:13:16 np0005548731 nova_compute[232433]: 2025-12-06 07:13:16.753 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:13:16 np0005548731 nova_compute[232433]: 2025-12-06 07:13:16.768 232437 DEBUG oslo_concurrency.lockutils [None req-51055c14-f933-4675-a947-54ca41396d4a bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.632s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:13:16 np0005548731 nova_compute[232433]: 2025-12-06 07:13:16.837 232437 INFO nova.scheduler.client.report [None req-51055c14-f933-4675-a947-54ca41396d4a bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Deleted allocations for instance a4b6c016-d090-4c71-9a1d-1f0debbdf5b0#033[00m
Dec  6 02:13:16 np0005548731 nova_compute[232433]: 2025-12-06 07:13:16.998 232437 DEBUG nova.compute.manager [req-717a0d57-961a-4a54-9997-3a27c71aea88 req-9168a085-7848-41f9-86be-e4d3f921a4c7 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: a4b6c016-d090-4c71-9a1d-1f0debbdf5b0] Received event network-vif-deleted-4fe57424-f705-4ca6-8802-348f30338843 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:13:17 np0005548731 nova_compute[232433]: 2025-12-06 07:13:17.047 232437 DEBUG nova.network.neutron [req-b0c40481-6a8b-42b8-9599-aaef29708058 req-0da0f430-3c37-422b-8a7e-72f7df5b26e2 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: db2b4e57-20af-415b-ad7b-ac5b7197acb6] Updated VIF entry in instance network info cache for port 1af5cec6-b2ca-49b8-973c-78801de95fd1. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  6 02:13:17 np0005548731 nova_compute[232433]: 2025-12-06 07:13:17.048 232437 DEBUG nova.network.neutron [req-b0c40481-6a8b-42b8-9599-aaef29708058 req-0da0f430-3c37-422b-8a7e-72f7df5b26e2 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: db2b4e57-20af-415b-ad7b-ac5b7197acb6] Updating instance_info_cache with network_info: [{"id": "1af5cec6-b2ca-49b8-973c-78801de95fd1", "address": "fa:16:3e:76:88:06", "network": {"id": "2b0835d7-87e4-46cc-8a94-e4e042bd4bad", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1132836552-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "af7365adc05f4624a08a71cd5a77ada6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1af5cec6-b2", "ovs_interfaceid": "1af5cec6-b2ca-49b8-973c-78801de95fd1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 02:13:17 np0005548731 nova_compute[232433]: 2025-12-06 07:13:17.085 232437 DEBUG oslo_concurrency.lockutils [None req-51055c14-f933-4675-a947-54ca41396d4a bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Lock "a4b6c016-d090-4c71-9a1d-1f0debbdf5b0" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 11.969s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:13:17 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e240 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:13:17 np0005548731 nova_compute[232433]: 2025-12-06 07:13:17.114 232437 DEBUG oslo_concurrency.lockutils [req-b0c40481-6a8b-42b8-9599-aaef29708058 req-0da0f430-3c37-422b-8a7e-72f7df5b26e2 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-db2b4e57-20af-415b-ad7b-ac5b7197acb6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 02:13:17 np0005548731 nova_compute[232433]: 2025-12-06 07:13:17.525 232437 DEBUG oslo_concurrency.processutils [None req-f853c39b-54c0-4885-a9e4-a966668cbd27 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/db2b4e57-20af-415b-ad7b-ac5b7197acb6/disk.config db2b4e57-20af-415b-ad7b-ac5b7197acb6_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.793s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:13:17 np0005548731 nova_compute[232433]: 2025-12-06 07:13:17.526 232437 INFO nova.virt.libvirt.driver [None req-f853c39b-54c0-4885-a9e4-a966668cbd27 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] [instance: db2b4e57-20af-415b-ad7b-ac5b7197acb6] Deleting local config drive /var/lib/nova/instances/db2b4e57-20af-415b-ad7b-ac5b7197acb6/disk.config because it was imported into RBD.#033[00m
Dec  6 02:13:17 np0005548731 kernel: tap1af5cec6-b2: entered promiscuous mode
Dec  6 02:13:17 np0005548731 NetworkManager[49182]: <info>  [1765005197.5836] manager: (tap1af5cec6-b2): new Tun device (/org/freedesktop/NetworkManager/Devices/108)
Dec  6 02:13:17 np0005548731 ovn_controller[133927]: 2025-12-06T07:13:17Z|00183|binding|INFO|Claiming lport 1af5cec6-b2ca-49b8-973c-78801de95fd1 for this chassis.
Dec  6 02:13:17 np0005548731 ovn_controller[133927]: 2025-12-06T07:13:17Z|00184|binding|INFO|1af5cec6-b2ca-49b8-973c-78801de95fd1: Claiming fa:16:3e:76:88:06 10.100.0.8
Dec  6 02:13:17 np0005548731 nova_compute[232433]: 2025-12-06 07:13:17.585 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:13:17 np0005548731 ovn_controller[133927]: 2025-12-06T07:13:17Z|00185|binding|INFO|Setting lport 1af5cec6-b2ca-49b8-973c-78801de95fd1 ovn-installed in OVS
Dec  6 02:13:17 np0005548731 nova_compute[232433]: 2025-12-06 07:13:17.602 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:13:17 np0005548731 nova_compute[232433]: 2025-12-06 07:13:17.605 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:13:17 np0005548731 systemd-udevd[258610]: Network interface NamePolicy= disabled on kernel command line.
Dec  6 02:13:17 np0005548731 systemd-machined[195355]: New machine qemu-26-instance-0000003d.
Dec  6 02:13:17 np0005548731 ovn_controller[133927]: 2025-12-06T07:13:17Z|00186|binding|INFO|Setting lport 1af5cec6-b2ca-49b8-973c-78801de95fd1 up in Southbound
Dec  6 02:13:17 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:13:17.622 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:76:88:06 10.100.0.8'], port_security=['fa:16:3e:76:88:06 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'db2b4e57-20af-415b-ad7b-ac5b7197acb6', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2b0835d7-87e4-46cc-8a94-e4e042bd4bad', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'af7365adc05f4624a08a71cd5a77ada6', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'b536f2c5-b22f-47bf-a47f-57e098f673a0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a7e40662-9f9d-450b-8c39-94d50ba422c6, chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], logical_port=1af5cec6-b2ca-49b8-973c-78801de95fd1) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 02:13:17 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:13:17.625 143965 INFO neutron.agent.ovn.metadata.agent [-] Port 1af5cec6-b2ca-49b8-973c-78801de95fd1 in datapath 2b0835d7-87e4-46cc-8a94-e4e042bd4bad bound to our chassis#033[00m
Dec  6 02:13:17 np0005548731 NetworkManager[49182]: <info>  [1765005197.6299] device (tap1af5cec6-b2): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec  6 02:13:17 np0005548731 NetworkManager[49182]: <info>  [1765005197.6311] device (tap1af5cec6-b2): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec  6 02:13:17 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:13:17.628 143965 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 2b0835d7-87e4-46cc-8a94-e4e042bd4bad#033[00m
Dec  6 02:13:17 np0005548731 systemd[1]: Started Virtual Machine qemu-26-instance-0000003d.
Dec  6 02:13:17 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:13:17.647 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[a9b76721-fa02-40ef-9085-d995992abeee]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:13:17 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:13:17.648 143965 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap2b0835d7-81 in ovnmeta-2b0835d7-87e4-46cc-8a94-e4e042bd4bad namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Dec  6 02:13:17 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:13:17.651 236668 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap2b0835d7-80 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Dec  6 02:13:17 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:13:17.651 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[7ce8a655-0fca-42d7-b274-45dc05c8d42e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:13:17 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:13:17.652 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[98d20c0c-0f34-46e0-8b4c-3262b4b2913c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:13:17 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:13:17.663 144079 DEBUG oslo.privsep.daemon [-] privsep: reply[3c21d4eb-47a4-4995-8868-170fe37d0d8e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:13:17 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:13:17.688 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[2215d034-0ebb-4a5a-91c6-69f34f2dec38]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:13:17 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:13:17.719 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[68e1446f-a92f-4d0b-b53a-093393965f86]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:13:17 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:13:17.726 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[36717d25-73e1-440c-8020-f117c8a9acab]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:13:17 np0005548731 NetworkManager[49182]: <info>  [1765005197.7275] manager: (tap2b0835d7-80): new Veth device (/org/freedesktop/NetworkManager/Devices/109)
Dec  6 02:13:17 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:13:17.757 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[4caf0310-be4d-4563-ab45-cce360e322d6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:13:17 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:13:17.760 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[5acf71ad-8b5b-4854-81a0-34d4a259ecbe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:13:17 np0005548731 NetworkManager[49182]: <info>  [1765005197.7833] device (tap2b0835d7-80): carrier: link connected
Dec  6 02:13:17 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:13:17.790 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[0ad0696c-83a0-4219-acfd-fa8bd874cb06]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:13:17 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:13:17.808 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[4cf57e85-a240-4e87-a3cf-637f5e5c3f9f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2b0835d7-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9e:4e:19'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 64], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 545979, 'reachable_time': 23966, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 258644, 'error': None, 'target': 'ovnmeta-2b0835d7-87e4-46cc-8a94-e4e042bd4bad', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:13:17 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:13:17.826 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[bb94ae7b-19df-43ed-935e-4ed0a5028b9b]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe9e:4e19'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 545979, 'tstamp': 545979}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 258645, 'error': None, 'target': 'ovnmeta-2b0835d7-87e4-46cc-8a94-e4e042bd4bad', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:13:17 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:13:17.844 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[b7c963cc-db43-4142-8488-947859f9c36c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2b0835d7-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9e:4e:19'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 64], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 545979, 'reachable_time': 23966, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 258646, 'error': None, 'target': 'ovnmeta-2b0835d7-87e4-46cc-8a94-e4e042bd4bad', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:13:17 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:13:17.876 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[155659fb-d7c9-40d0-8e2f-4cf4852573f3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:13:17 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:13:17.934 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[495ad322-788e-4e33-bbcd-8e3e3508f418]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:13:17 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:13:17.935 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2b0835d7-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:13:17 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:13:17.936 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  6 02:13:17 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:13:17.936 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2b0835d7-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:13:17 np0005548731 kernel: tap2b0835d7-80: entered promiscuous mode
Dec  6 02:13:17 np0005548731 NetworkManager[49182]: <info>  [1765005197.9389] manager: (tap2b0835d7-80): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/110)
Dec  6 02:13:17 np0005548731 nova_compute[232433]: 2025-12-06 07:13:17.938 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:13:17 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:13:17.943 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap2b0835d7-80, col_values=(('external_ids', {'iface-id': '87f2c5b0-3684-4269-9fbf-5a4dfd5a8759'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:13:17 np0005548731 nova_compute[232433]: 2025-12-06 07:13:17.944 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:13:17 np0005548731 ovn_controller[133927]: 2025-12-06T07:13:17Z|00187|binding|INFO|Releasing lport 87f2c5b0-3684-4269-9fbf-5a4dfd5a8759 from this chassis (sb_readonly=0)
Dec  6 02:13:17 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:13:17.945 143965 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/2b0835d7-87e4-46cc-8a94-e4e042bd4bad.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/2b0835d7-87e4-46cc-8a94-e4e042bd4bad.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Dec  6 02:13:17 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:13:17.946 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[db744dbd-532f-4550-9de7-2080877604f2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:13:17 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:13:17.946 143965 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec  6 02:13:17 np0005548731 ovn_metadata_agent[143960]: global
Dec  6 02:13:17 np0005548731 ovn_metadata_agent[143960]:    log         /dev/log local0 debug
Dec  6 02:13:17 np0005548731 ovn_metadata_agent[143960]:    log-tag     haproxy-metadata-proxy-2b0835d7-87e4-46cc-8a94-e4e042bd4bad
Dec  6 02:13:17 np0005548731 ovn_metadata_agent[143960]:    user        root
Dec  6 02:13:17 np0005548731 ovn_metadata_agent[143960]:    group       root
Dec  6 02:13:17 np0005548731 ovn_metadata_agent[143960]:    maxconn     1024
Dec  6 02:13:17 np0005548731 ovn_metadata_agent[143960]:    pidfile     /var/lib/neutron/external/pids/2b0835d7-87e4-46cc-8a94-e4e042bd4bad.pid.haproxy
Dec  6 02:13:17 np0005548731 ovn_metadata_agent[143960]:    daemon
Dec  6 02:13:17 np0005548731 ovn_metadata_agent[143960]: 
Dec  6 02:13:17 np0005548731 ovn_metadata_agent[143960]: defaults
Dec  6 02:13:17 np0005548731 ovn_metadata_agent[143960]:    log global
Dec  6 02:13:17 np0005548731 ovn_metadata_agent[143960]:    mode http
Dec  6 02:13:17 np0005548731 ovn_metadata_agent[143960]:    option httplog
Dec  6 02:13:17 np0005548731 ovn_metadata_agent[143960]:    option dontlognull
Dec  6 02:13:17 np0005548731 ovn_metadata_agent[143960]:    option http-server-close
Dec  6 02:13:17 np0005548731 ovn_metadata_agent[143960]:    option forwardfor
Dec  6 02:13:17 np0005548731 ovn_metadata_agent[143960]:    retries                 3
Dec  6 02:13:17 np0005548731 ovn_metadata_agent[143960]:    timeout http-request    30s
Dec  6 02:13:17 np0005548731 ovn_metadata_agent[143960]:    timeout connect         30s
Dec  6 02:13:17 np0005548731 ovn_metadata_agent[143960]:    timeout client          32s
Dec  6 02:13:17 np0005548731 ovn_metadata_agent[143960]:    timeout server          32s
Dec  6 02:13:17 np0005548731 ovn_metadata_agent[143960]:    timeout http-keep-alive 30s
Dec  6 02:13:17 np0005548731 ovn_metadata_agent[143960]: 
Dec  6 02:13:17 np0005548731 ovn_metadata_agent[143960]: 
Dec  6 02:13:17 np0005548731 ovn_metadata_agent[143960]: listen listener
Dec  6 02:13:17 np0005548731 ovn_metadata_agent[143960]:    bind 169.254.169.254:80
Dec  6 02:13:17 np0005548731 ovn_metadata_agent[143960]:    server metadata /var/lib/neutron/metadata_proxy
Dec  6 02:13:17 np0005548731 ovn_metadata_agent[143960]:    http-request add-header X-OVN-Network-ID 2b0835d7-87e4-46cc-8a94-e4e042bd4bad
Dec  6 02:13:17 np0005548731 ovn_metadata_agent[143960]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Dec  6 02:13:17 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:13:17.947 143965 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-2b0835d7-87e4-46cc-8a94-e4e042bd4bad', 'env', 'PROCESS_TAG=haproxy-2b0835d7-87e4-46cc-8a94-e4e042bd4bad', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/2b0835d7-87e4-46cc-8a94-e4e042bd4bad.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Dec  6 02:13:17 np0005548731 nova_compute[232433]: 2025-12-06 07:13:17.958 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:13:18 np0005548731 nova_compute[232433]: 2025-12-06 07:13:18.285 232437 DEBUG nova.virt.driver [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Emitting event <LifecycleEvent: 1765005198.285109, db2b4e57-20af-415b-ad7b-ac5b7197acb6 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 02:13:18 np0005548731 nova_compute[232433]: 2025-12-06 07:13:18.286 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: db2b4e57-20af-415b-ad7b-ac5b7197acb6] VM Started (Lifecycle Event)#033[00m
Dec  6 02:13:18 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:13:18 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:13:18 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:13:18.291 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:13:18 np0005548731 nova_compute[232433]: 2025-12-06 07:13:18.318 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: db2b4e57-20af-415b-ad7b-ac5b7197acb6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:13:18 np0005548731 nova_compute[232433]: 2025-12-06 07:13:18.321 232437 DEBUG nova.virt.driver [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Emitting event <LifecycleEvent: 1765005198.2862267, db2b4e57-20af-415b-ad7b-ac5b7197acb6 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 02:13:18 np0005548731 nova_compute[232433]: 2025-12-06 07:13:18.321 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: db2b4e57-20af-415b-ad7b-ac5b7197acb6] VM Paused (Lifecycle Event)#033[00m
Dec  6 02:13:18 np0005548731 nova_compute[232433]: 2025-12-06 07:13:18.365 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: db2b4e57-20af-415b-ad7b-ac5b7197acb6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:13:18 np0005548731 nova_compute[232433]: 2025-12-06 07:13:18.368 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: db2b4e57-20af-415b-ad7b-ac5b7197acb6] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  6 02:13:18 np0005548731 podman[258722]: 2025-12-06 07:13:18.283210036 +0000 UTC m=+0.023121788 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec  6 02:13:18 np0005548731 nova_compute[232433]: 2025-12-06 07:13:18.403 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: db2b4e57-20af-415b-ad7b-ac5b7197acb6] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  6 02:13:18 np0005548731 podman[258722]: 2025-12-06 07:13:18.433919745 +0000 UTC m=+0.173831477 container create 4c9814240b55bbed3a054b0ec7a2c34d988199fc8eb56b36fbfd6645b5851886 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2b0835d7-87e4-46cc-8a94-e4e042bd4bad, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec  6 02:13:18 np0005548731 systemd[1]: Started libpod-conmon-4c9814240b55bbed3a054b0ec7a2c34d988199fc8eb56b36fbfd6645b5851886.scope.
Dec  6 02:13:18 np0005548731 podman[258735]: 2025-12-06 07:13:18.529682144 +0000 UTC m=+0.064325549 container health_status 26c2ce9bdb83c6db5ccad7f5b2a7cb4e9354ab0235ad2f942ea42c8feda2142b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Dec  6 02:13:18 np0005548731 podman[258737]: 2025-12-06 07:13:18.532181916 +0000 UTC m=+0.061202113 container health_status ae4fd08cdec31d6ae2a6b91ffff7deb6ca5a8375cb52ee4b685cc6f25796824e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=multipathd, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team)
Dec  6 02:13:18 np0005548731 systemd[1]: Started libcrun container.
Dec  6 02:13:18 np0005548731 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e8a6a58ba95cc9b9225bb11abe1b201739082a15770959bc947cb1f6cab60d1/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec  6 02:13:18 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:13:18 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:13:18 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:13:18.559 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:13:18 np0005548731 podman[258722]: 2025-12-06 07:13:18.683324254 +0000 UTC m=+0.423236086 container init 4c9814240b55bbed3a054b0ec7a2c34d988199fc8eb56b36fbfd6645b5851886 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2b0835d7-87e4-46cc-8a94-e4e042bd4bad, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec  6 02:13:18 np0005548731 podman[258736]: 2025-12-06 07:13:18.685890907 +0000 UTC m=+0.217982860 container health_status a118c3becf56cfbcae208065771ebf10a65162bb7b96389971293e534823fb78 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125)
Dec  6 02:13:18 np0005548731 podman[258722]: 2025-12-06 07:13:18.689586897 +0000 UTC m=+0.429498629 container start 4c9814240b55bbed3a054b0ec7a2c34d988199fc8eb56b36fbfd6645b5851886 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2b0835d7-87e4-46cc-8a94-e4e042bd4bad, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125)
Dec  6 02:13:18 np0005548731 neutron-haproxy-ovnmeta-2b0835d7-87e4-46cc-8a94-e4e042bd4bad[258792]: [NOTICE]   (258807) : New worker (258809) forked
Dec  6 02:13:18 np0005548731 neutron-haproxy-ovnmeta-2b0835d7-87e4-46cc-8a94-e4e042bd4bad[258792]: [NOTICE]   (258807) : Loading success.
Dec  6 02:13:19 np0005548731 nova_compute[232433]: 2025-12-06 07:13:19.175 232437 DEBUG nova.compute.manager [req-6746c8bd-b03e-4bd3-96a8-ab67b79d4484 req-7bbeefef-6a9b-4f4f-bec8-a6cc47048e93 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: db2b4e57-20af-415b-ad7b-ac5b7197acb6] Received event network-vif-plugged-1af5cec6-b2ca-49b8-973c-78801de95fd1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:13:19 np0005548731 nova_compute[232433]: 2025-12-06 07:13:19.175 232437 DEBUG oslo_concurrency.lockutils [req-6746c8bd-b03e-4bd3-96a8-ab67b79d4484 req-7bbeefef-6a9b-4f4f-bec8-a6cc47048e93 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "db2b4e57-20af-415b-ad7b-ac5b7197acb6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:13:19 np0005548731 nova_compute[232433]: 2025-12-06 07:13:19.175 232437 DEBUG oslo_concurrency.lockutils [req-6746c8bd-b03e-4bd3-96a8-ab67b79d4484 req-7bbeefef-6a9b-4f4f-bec8-a6cc47048e93 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "db2b4e57-20af-415b-ad7b-ac5b7197acb6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:13:19 np0005548731 nova_compute[232433]: 2025-12-06 07:13:19.176 232437 DEBUG oslo_concurrency.lockutils [req-6746c8bd-b03e-4bd3-96a8-ab67b79d4484 req-7bbeefef-6a9b-4f4f-bec8-a6cc47048e93 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "db2b4e57-20af-415b-ad7b-ac5b7197acb6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:13:19 np0005548731 nova_compute[232433]: 2025-12-06 07:13:19.176 232437 DEBUG nova.compute.manager [req-6746c8bd-b03e-4bd3-96a8-ab67b79d4484 req-7bbeefef-6a9b-4f4f-bec8-a6cc47048e93 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: db2b4e57-20af-415b-ad7b-ac5b7197acb6] Processing event network-vif-plugged-1af5cec6-b2ca-49b8-973c-78801de95fd1 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Dec  6 02:13:19 np0005548731 nova_compute[232433]: 2025-12-06 07:13:19.176 232437 DEBUG nova.compute.manager [req-6746c8bd-b03e-4bd3-96a8-ab67b79d4484 req-7bbeefef-6a9b-4f4f-bec8-a6cc47048e93 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: db2b4e57-20af-415b-ad7b-ac5b7197acb6] Received event network-vif-plugged-1af5cec6-b2ca-49b8-973c-78801de95fd1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:13:19 np0005548731 nova_compute[232433]: 2025-12-06 07:13:19.176 232437 DEBUG oslo_concurrency.lockutils [req-6746c8bd-b03e-4bd3-96a8-ab67b79d4484 req-7bbeefef-6a9b-4f4f-bec8-a6cc47048e93 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "db2b4e57-20af-415b-ad7b-ac5b7197acb6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:13:19 np0005548731 nova_compute[232433]: 2025-12-06 07:13:19.176 232437 DEBUG oslo_concurrency.lockutils [req-6746c8bd-b03e-4bd3-96a8-ab67b79d4484 req-7bbeefef-6a9b-4f4f-bec8-a6cc47048e93 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "db2b4e57-20af-415b-ad7b-ac5b7197acb6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:13:19 np0005548731 nova_compute[232433]: 2025-12-06 07:13:19.177 232437 DEBUG oslo_concurrency.lockutils [req-6746c8bd-b03e-4bd3-96a8-ab67b79d4484 req-7bbeefef-6a9b-4f4f-bec8-a6cc47048e93 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "db2b4e57-20af-415b-ad7b-ac5b7197acb6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:13:19 np0005548731 nova_compute[232433]: 2025-12-06 07:13:19.177 232437 DEBUG nova.compute.manager [req-6746c8bd-b03e-4bd3-96a8-ab67b79d4484 req-7bbeefef-6a9b-4f4f-bec8-a6cc47048e93 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: db2b4e57-20af-415b-ad7b-ac5b7197acb6] No waiting events found dispatching network-vif-plugged-1af5cec6-b2ca-49b8-973c-78801de95fd1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 02:13:19 np0005548731 nova_compute[232433]: 2025-12-06 07:13:19.177 232437 WARNING nova.compute.manager [req-6746c8bd-b03e-4bd3-96a8-ab67b79d4484 req-7bbeefef-6a9b-4f4f-bec8-a6cc47048e93 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: db2b4e57-20af-415b-ad7b-ac5b7197acb6] Received unexpected event network-vif-plugged-1af5cec6-b2ca-49b8-973c-78801de95fd1 for instance with vm_state building and task_state spawning.#033[00m
Dec  6 02:13:19 np0005548731 nova_compute[232433]: 2025-12-06 07:13:19.178 232437 DEBUG nova.compute.manager [None req-f853c39b-54c0-4885-a9e4-a966668cbd27 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] [instance: db2b4e57-20af-415b-ad7b-ac5b7197acb6] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec  6 02:13:19 np0005548731 nova_compute[232433]: 2025-12-06 07:13:19.182 232437 DEBUG nova.virt.driver [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Emitting event <LifecycleEvent: 1765005199.1811025, db2b4e57-20af-415b-ad7b-ac5b7197acb6 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 02:13:19 np0005548731 nova_compute[232433]: 2025-12-06 07:13:19.183 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: db2b4e57-20af-415b-ad7b-ac5b7197acb6] VM Resumed (Lifecycle Event)#033[00m
Dec  6 02:13:19 np0005548731 nova_compute[232433]: 2025-12-06 07:13:19.184 232437 DEBUG nova.virt.libvirt.driver [None req-f853c39b-54c0-4885-a9e4-a966668cbd27 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] [instance: db2b4e57-20af-415b-ad7b-ac5b7197acb6] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Dec  6 02:13:19 np0005548731 nova_compute[232433]: 2025-12-06 07:13:19.189 232437 INFO nova.virt.libvirt.driver [-] [instance: db2b4e57-20af-415b-ad7b-ac5b7197acb6] Instance spawned successfully.#033[00m
Dec  6 02:13:19 np0005548731 nova_compute[232433]: 2025-12-06 07:13:19.189 232437 DEBUG nova.virt.libvirt.driver [None req-f853c39b-54c0-4885-a9e4-a966668cbd27 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] [instance: db2b4e57-20af-415b-ad7b-ac5b7197acb6] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Dec  6 02:13:19 np0005548731 nova_compute[232433]: 2025-12-06 07:13:19.221 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: db2b4e57-20af-415b-ad7b-ac5b7197acb6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:13:19 np0005548731 nova_compute[232433]: 2025-12-06 07:13:19.224 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: db2b4e57-20af-415b-ad7b-ac5b7197acb6] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  6 02:13:19 np0005548731 nova_compute[232433]: 2025-12-06 07:13:19.233 232437 DEBUG nova.virt.libvirt.driver [None req-f853c39b-54c0-4885-a9e4-a966668cbd27 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] [instance: db2b4e57-20af-415b-ad7b-ac5b7197acb6] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 02:13:19 np0005548731 nova_compute[232433]: 2025-12-06 07:13:19.234 232437 DEBUG nova.virt.libvirt.driver [None req-f853c39b-54c0-4885-a9e4-a966668cbd27 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] [instance: db2b4e57-20af-415b-ad7b-ac5b7197acb6] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 02:13:19 np0005548731 nova_compute[232433]: 2025-12-06 07:13:19.234 232437 DEBUG nova.virt.libvirt.driver [None req-f853c39b-54c0-4885-a9e4-a966668cbd27 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] [instance: db2b4e57-20af-415b-ad7b-ac5b7197acb6] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 02:13:19 np0005548731 nova_compute[232433]: 2025-12-06 07:13:19.234 232437 DEBUG nova.virt.libvirt.driver [None req-f853c39b-54c0-4885-a9e4-a966668cbd27 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] [instance: db2b4e57-20af-415b-ad7b-ac5b7197acb6] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 02:13:19 np0005548731 nova_compute[232433]: 2025-12-06 07:13:19.235 232437 DEBUG nova.virt.libvirt.driver [None req-f853c39b-54c0-4885-a9e4-a966668cbd27 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] [instance: db2b4e57-20af-415b-ad7b-ac5b7197acb6] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 02:13:19 np0005548731 nova_compute[232433]: 2025-12-06 07:13:19.235 232437 DEBUG nova.virt.libvirt.driver [None req-f853c39b-54c0-4885-a9e4-a966668cbd27 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] [instance: db2b4e57-20af-415b-ad7b-ac5b7197acb6] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 02:13:19 np0005548731 nova_compute[232433]: 2025-12-06 07:13:19.281 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: db2b4e57-20af-415b-ad7b-ac5b7197acb6] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  6 02:13:19 np0005548731 nova_compute[232433]: 2025-12-06 07:13:19.408 232437 INFO nova.compute.manager [None req-f853c39b-54c0-4885-a9e4-a966668cbd27 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] [instance: db2b4e57-20af-415b-ad7b-ac5b7197acb6] Took 10.78 seconds to spawn the instance on the hypervisor.#033[00m
Dec  6 02:13:19 np0005548731 nova_compute[232433]: 2025-12-06 07:13:19.409 232437 DEBUG nova.compute.manager [None req-f853c39b-54c0-4885-a9e4-a966668cbd27 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] [instance: db2b4e57-20af-415b-ad7b-ac5b7197acb6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:13:19 np0005548731 nova_compute[232433]: 2025-12-06 07:13:19.486 232437 INFO nova.compute.manager [None req-f853c39b-54c0-4885-a9e4-a966668cbd27 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] [instance: db2b4e57-20af-415b-ad7b-ac5b7197acb6] Took 11.76 seconds to build instance.#033[00m
Dec  6 02:13:19 np0005548731 nova_compute[232433]: 2025-12-06 07:13:19.531 232437 DEBUG oslo_concurrency.lockutils [None req-f853c39b-54c0-4885-a9e4-a966668cbd27 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Lock "db2b4e57-20af-415b-ad7b-ac5b7197acb6" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.871s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:13:20 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:13:20 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:13:20 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:13:20.293 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:13:20 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:13:20 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:13:20 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:13:20.563 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:13:20 np0005548731 nova_compute[232433]: 2025-12-06 07:13:20.802 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:13:21 np0005548731 nova_compute[232433]: 2025-12-06 07:13:21.757 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:13:22 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e240 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:13:22 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:13:22 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:13:22 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:13:22.295 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:13:22 np0005548731 nova_compute[232433]: 2025-12-06 07:13:22.364 232437 DEBUG nova.compute.manager [None req-6830fa2f-2ae3-422e-b7a1-34272f9311ed bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] [instance: db2b4e57-20af-415b-ad7b-ac5b7197acb6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:13:22 np0005548731 nova_compute[232433]: 2025-12-06 07:13:22.447 232437 INFO nova.compute.manager [None req-6830fa2f-2ae3-422e-b7a1-34272f9311ed bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] [instance: db2b4e57-20af-415b-ad7b-ac5b7197acb6] instance snapshotting#033[00m
Dec  6 02:13:22 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:13:22 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:13:22 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:13:22.566 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:13:22 np0005548731 nova_compute[232433]: 2025-12-06 07:13:22.703 232437 INFO nova.virt.libvirt.driver [None req-6830fa2f-2ae3-422e-b7a1-34272f9311ed bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] [instance: db2b4e57-20af-415b-ad7b-ac5b7197acb6] Beginning live snapshot process#033[00m
Dec  6 02:13:22 np0005548731 nova_compute[232433]: 2025-12-06 07:13:22.930 232437 DEBUG nova.virt.libvirt.imagebackend [None req-6830fa2f-2ae3-422e-b7a1-34272f9311ed bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] No parent info for 6efab05d-c7cf-4770-a5c3-c806a2739063; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163#033[00m
Dec  6 02:13:23 np0005548731 nova_compute[232433]: 2025-12-06 07:13:23.176 232437 DEBUG nova.storage.rbd_utils [None req-6830fa2f-2ae3-422e-b7a1-34272f9311ed bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] creating snapshot(c86511f2d2d4499f93b5b7980494a2a8) on rbd image(db2b4e57-20af-415b-ad7b-ac5b7197acb6_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Dec  6 02:13:23 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e241 e241: 3 total, 3 up, 3 in
Dec  6 02:13:23 np0005548731 nova_compute[232433]: 2025-12-06 07:13:23.880 232437 DEBUG nova.storage.rbd_utils [None req-6830fa2f-2ae3-422e-b7a1-34272f9311ed bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] cloning vms/db2b4e57-20af-415b-ad7b-ac5b7197acb6_disk@c86511f2d2d4499f93b5b7980494a2a8 to images/6b59300d-e8aa-4c06-be5d-280ba7e39063 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Dec  6 02:13:24 np0005548731 nova_compute[232433]: 2025-12-06 07:13:24.283 232437 DEBUG oslo_concurrency.lockutils [None req-08169bad-fe5e-4898-ab3b-9c0270e4f20c a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] Acquiring lock "ad9de790-b008-4b68-8d84-f48391631cc3" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:13:24 np0005548731 nova_compute[232433]: 2025-12-06 07:13:24.284 232437 DEBUG oslo_concurrency.lockutils [None req-08169bad-fe5e-4898-ab3b-9c0270e4f20c a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] Lock "ad9de790-b008-4b68-8d84-f48391631cc3" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:13:24 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:13:24 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:13:24 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:13:24.297 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:13:24 np0005548731 nova_compute[232433]: 2025-12-06 07:13:24.305 232437 DEBUG nova.compute.manager [None req-08169bad-fe5e-4898-ab3b-9c0270e4f20c a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] [instance: ad9de790-b008-4b68-8d84-f48391631cc3] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Dec  6 02:13:24 np0005548731 nova_compute[232433]: 2025-12-06 07:13:24.314 232437 DEBUG nova.storage.rbd_utils [None req-6830fa2f-2ae3-422e-b7a1-34272f9311ed bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] flattening images/6b59300d-e8aa-4c06-be5d-280ba7e39063 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m
Dec  6 02:13:24 np0005548731 nova_compute[232433]: 2025-12-06 07:13:24.423 232437 DEBUG oslo_concurrency.lockutils [None req-08169bad-fe5e-4898-ab3b-9c0270e4f20c a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:13:24 np0005548731 nova_compute[232433]: 2025-12-06 07:13:24.424 232437 DEBUG oslo_concurrency.lockutils [None req-08169bad-fe5e-4898-ab3b-9c0270e4f20c a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:13:24 np0005548731 nova_compute[232433]: 2025-12-06 07:13:24.432 232437 DEBUG nova.virt.hardware [None req-08169bad-fe5e-4898-ab3b-9c0270e4f20c a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Dec  6 02:13:24 np0005548731 nova_compute[232433]: 2025-12-06 07:13:24.432 232437 INFO nova.compute.claims [None req-08169bad-fe5e-4898-ab3b-9c0270e4f20c a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] [instance: ad9de790-b008-4b68-8d84-f48391631cc3] Claim successful on node compute-2.ctlplane.example.com#033[00m
Dec  6 02:13:24 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:13:24 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:13:24 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:13:24.569 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:13:24 np0005548731 nova_compute[232433]: 2025-12-06 07:13:24.572 232437 DEBUG oslo_concurrency.processutils [None req-08169bad-fe5e-4898-ab3b-9c0270e4f20c a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:13:24 np0005548731 nova_compute[232433]: 2025-12-06 07:13:24.675 232437 DEBUG nova.storage.rbd_utils [None req-6830fa2f-2ae3-422e-b7a1-34272f9311ed bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] removing snapshot(c86511f2d2d4499f93b5b7980494a2a8) on rbd image(db2b4e57-20af-415b-ad7b-ac5b7197acb6_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Dec  6 02:13:24 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e242 e242: 3 total, 3 up, 3 in
Dec  6 02:13:25 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 02:13:25 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/895580667' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 02:13:25 np0005548731 nova_compute[232433]: 2025-12-06 07:13:25.024 232437 DEBUG oslo_concurrency.processutils [None req-08169bad-fe5e-4898-ab3b-9c0270e4f20c a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:13:25 np0005548731 nova_compute[232433]: 2025-12-06 07:13:25.030 232437 DEBUG nova.compute.provider_tree [None req-08169bad-fe5e-4898-ab3b-9c0270e4f20c a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] Inventory has not changed in ProviderTree for provider: 6d00757a-082f-486d-ae84-869a2ba2e6e7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  6 02:13:25 np0005548731 nova_compute[232433]: 2025-12-06 07:13:25.132 232437 DEBUG nova.scheduler.client.report [None req-08169bad-fe5e-4898-ab3b-9c0270e4f20c a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] Inventory has not changed for provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  6 02:13:25 np0005548731 nova_compute[232433]: 2025-12-06 07:13:25.196 232437 DEBUG oslo_concurrency.lockutils [None req-08169bad-fe5e-4898-ab3b-9c0270e4f20c a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.772s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:13:25 np0005548731 nova_compute[232433]: 2025-12-06 07:13:25.197 232437 DEBUG nova.compute.manager [None req-08169bad-fe5e-4898-ab3b-9c0270e4f20c a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] [instance: ad9de790-b008-4b68-8d84-f48391631cc3] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Dec  6 02:13:25 np0005548731 nova_compute[232433]: 2025-12-06 07:13:25.292 232437 DEBUG nova.compute.manager [None req-08169bad-fe5e-4898-ab3b-9c0270e4f20c a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] [instance: ad9de790-b008-4b68-8d84-f48391631cc3] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Dec  6 02:13:25 np0005548731 nova_compute[232433]: 2025-12-06 07:13:25.292 232437 DEBUG nova.network.neutron [None req-08169bad-fe5e-4898-ab3b-9c0270e4f20c a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] [instance: ad9de790-b008-4b68-8d84-f48391631cc3] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Dec  6 02:13:25 np0005548731 nova_compute[232433]: 2025-12-06 07:13:25.321 232437 INFO nova.virt.libvirt.driver [None req-08169bad-fe5e-4898-ab3b-9c0270e4f20c a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] [instance: ad9de790-b008-4b68-8d84-f48391631cc3] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Dec  6 02:13:25 np0005548731 nova_compute[232433]: 2025-12-06 07:13:25.353 232437 DEBUG nova.compute.manager [None req-08169bad-fe5e-4898-ab3b-9c0270e4f20c a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] [instance: ad9de790-b008-4b68-8d84-f48391631cc3] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Dec  6 02:13:25 np0005548731 nova_compute[232433]: 2025-12-06 07:13:25.575 232437 DEBUG nova.compute.manager [None req-08169bad-fe5e-4898-ab3b-9c0270e4f20c a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] [instance: ad9de790-b008-4b68-8d84-f48391631cc3] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Dec  6 02:13:25 np0005548731 nova_compute[232433]: 2025-12-06 07:13:25.577 232437 DEBUG nova.virt.libvirt.driver [None req-08169bad-fe5e-4898-ab3b-9c0270e4f20c a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] [instance: ad9de790-b008-4b68-8d84-f48391631cc3] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Dec  6 02:13:25 np0005548731 nova_compute[232433]: 2025-12-06 07:13:25.577 232437 INFO nova.virt.libvirt.driver [None req-08169bad-fe5e-4898-ab3b-9c0270e4f20c a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] [instance: ad9de790-b008-4b68-8d84-f48391631cc3] Creating image(s)#033[00m
Dec  6 02:13:25 np0005548731 nova_compute[232433]: 2025-12-06 07:13:25.600 232437 DEBUG nova.storage.rbd_utils [None req-08169bad-fe5e-4898-ab3b-9c0270e4f20c a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] rbd image ad9de790-b008-4b68-8d84-f48391631cc3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:13:25 np0005548731 nova_compute[232433]: 2025-12-06 07:13:25.628 232437 DEBUG nova.storage.rbd_utils [None req-08169bad-fe5e-4898-ab3b-9c0270e4f20c a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] rbd image ad9de790-b008-4b68-8d84-f48391631cc3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:13:25 np0005548731 nova_compute[232433]: 2025-12-06 07:13:25.655 232437 DEBUG nova.storage.rbd_utils [None req-08169bad-fe5e-4898-ab3b-9c0270e4f20c a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] rbd image ad9de790-b008-4b68-8d84-f48391631cc3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:13:25 np0005548731 nova_compute[232433]: 2025-12-06 07:13:25.659 232437 DEBUG oslo_concurrency.processutils [None req-08169bad-fe5e-4898-ab3b-9c0270e4f20c a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:13:25 np0005548731 nova_compute[232433]: 2025-12-06 07:13:25.721 232437 DEBUG oslo_concurrency.processutils [None req-08169bad-fe5e-4898-ab3b-9c0270e4f20c a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:13:25 np0005548731 nova_compute[232433]: 2025-12-06 07:13:25.722 232437 DEBUG oslo_concurrency.lockutils [None req-08169bad-fe5e-4898-ab3b-9c0270e4f20c a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] Acquiring lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:13:25 np0005548731 nova_compute[232433]: 2025-12-06 07:13:25.723 232437 DEBUG oslo_concurrency.lockutils [None req-08169bad-fe5e-4898-ab3b-9c0270e4f20c a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:13:25 np0005548731 nova_compute[232433]: 2025-12-06 07:13:25.723 232437 DEBUG oslo_concurrency.lockutils [None req-08169bad-fe5e-4898-ab3b-9c0270e4f20c a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:13:25 np0005548731 nova_compute[232433]: 2025-12-06 07:13:25.744 232437 DEBUG nova.storage.rbd_utils [None req-08169bad-fe5e-4898-ab3b-9c0270e4f20c a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] rbd image ad9de790-b008-4b68-8d84-f48391631cc3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:13:25 np0005548731 nova_compute[232433]: 2025-12-06 07:13:25.748 232437 DEBUG oslo_concurrency.processutils [None req-08169bad-fe5e-4898-ab3b-9c0270e4f20c a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef ad9de790-b008-4b68-8d84-f48391631cc3_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:13:25 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e243 e243: 3 total, 3 up, 3 in
Dec  6 02:13:25 np0005548731 nova_compute[232433]: 2025-12-06 07:13:25.803 232437 DEBUG nova.storage.rbd_utils [None req-6830fa2f-2ae3-422e-b7a1-34272f9311ed bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] creating snapshot(snap) on rbd image(6b59300d-e8aa-4c06-be5d-280ba7e39063) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Dec  6 02:13:25 np0005548731 nova_compute[232433]: 2025-12-06 07:13:25.859 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:13:26 np0005548731 nova_compute[232433]: 2025-12-06 07:13:26.066 232437 DEBUG nova.policy [None req-08169bad-fe5e-4898-ab3b-9c0270e4f20c a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'a1ed181a1103481fa4d0b29ce1009dca', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'c297e84c3a9f48a9a82aebc9e5ade875', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Dec  6 02:13:26 np0005548731 nova_compute[232433]: 2025-12-06 07:13:26.115 232437 DEBUG oslo_concurrency.processutils [None req-08169bad-fe5e-4898-ab3b-9c0270e4f20c a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef ad9de790-b008-4b68-8d84-f48391631cc3_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.367s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:13:26 np0005548731 nova_compute[232433]: 2025-12-06 07:13:26.186 232437 DEBUG nova.storage.rbd_utils [None req-08169bad-fe5e-4898-ab3b-9c0270e4f20c a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] resizing rbd image ad9de790-b008-4b68-8d84-f48391631cc3_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Dec  6 02:13:26 np0005548731 nova_compute[232433]: 2025-12-06 07:13:26.289 232437 DEBUG nova.objects.instance [None req-08169bad-fe5e-4898-ab3b-9c0270e4f20c a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] Lazy-loading 'migration_context' on Instance uuid ad9de790-b008-4b68-8d84-f48391631cc3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:13:26 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:13:26 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:13:26 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:13:26.299 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:13:26 np0005548731 nova_compute[232433]: 2025-12-06 07:13:26.310 232437 DEBUG nova.virt.libvirt.driver [None req-08169bad-fe5e-4898-ab3b-9c0270e4f20c a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] [instance: ad9de790-b008-4b68-8d84-f48391631cc3] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Dec  6 02:13:26 np0005548731 nova_compute[232433]: 2025-12-06 07:13:26.311 232437 DEBUG nova.virt.libvirt.driver [None req-08169bad-fe5e-4898-ab3b-9c0270e4f20c a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] [instance: ad9de790-b008-4b68-8d84-f48391631cc3] Ensure instance console log exists: /var/lib/nova/instances/ad9de790-b008-4b68-8d84-f48391631cc3/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec  6 02:13:26 np0005548731 nova_compute[232433]: 2025-12-06 07:13:26.311 232437 DEBUG oslo_concurrency.lockutils [None req-08169bad-fe5e-4898-ab3b-9c0270e4f20c a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:13:26 np0005548731 nova_compute[232433]: 2025-12-06 07:13:26.312 232437 DEBUG oslo_concurrency.lockutils [None req-08169bad-fe5e-4898-ab3b-9c0270e4f20c a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:13:26 np0005548731 nova_compute[232433]: 2025-12-06 07:13:26.312 232437 DEBUG oslo_concurrency.lockutils [None req-08169bad-fe5e-4898-ab3b-9c0270e4f20c a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:13:26 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:13:26 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:13:26 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:13:26.572 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:13:26 np0005548731 nova_compute[232433]: 2025-12-06 07:13:26.758 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:13:26 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e244 e244: 3 total, 3 up, 3 in
Dec  6 02:13:27 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e244 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:13:27 np0005548731 nova_compute[232433]: 2025-12-06 07:13:27.314 232437 DEBUG nova.network.neutron [None req-08169bad-fe5e-4898-ab3b-9c0270e4f20c a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] [instance: ad9de790-b008-4b68-8d84-f48391631cc3] Successfully created port: 79516468-9d88-48fe-801d-7d067458f7bd _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Dec  6 02:13:28 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:13:28 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:13:28 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:13:28.301 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:13:28 np0005548731 nova_compute[232433]: 2025-12-06 07:13:28.474 232437 DEBUG nova.network.neutron [None req-08169bad-fe5e-4898-ab3b-9c0270e4f20c a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] [instance: ad9de790-b008-4b68-8d84-f48391631cc3] Successfully updated port: 79516468-9d88-48fe-801d-7d067458f7bd _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Dec  6 02:13:28 np0005548731 nova_compute[232433]: 2025-12-06 07:13:28.490 232437 DEBUG oslo_concurrency.lockutils [None req-08169bad-fe5e-4898-ab3b-9c0270e4f20c a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] Acquiring lock "refresh_cache-ad9de790-b008-4b68-8d84-f48391631cc3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 02:13:28 np0005548731 nova_compute[232433]: 2025-12-06 07:13:28.491 232437 DEBUG oslo_concurrency.lockutils [None req-08169bad-fe5e-4898-ab3b-9c0270e4f20c a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] Acquired lock "refresh_cache-ad9de790-b008-4b68-8d84-f48391631cc3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 02:13:28 np0005548731 nova_compute[232433]: 2025-12-06 07:13:28.491 232437 DEBUG nova.network.neutron [None req-08169bad-fe5e-4898-ab3b-9c0270e4f20c a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] [instance: ad9de790-b008-4b68-8d84-f48391631cc3] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec  6 02:13:28 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:13:28 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:13:28 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:13:28.574 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:13:28 np0005548731 nova_compute[232433]: 2025-12-06 07:13:28.609 232437 DEBUG nova.compute.manager [req-8ba2d6d8-53bf-440e-b49c-be56780dc0cd req-1b59bb13-4c18-47ab-9ea8-5e9f4d076d3c 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: ad9de790-b008-4b68-8d84-f48391631cc3] Received event network-changed-79516468-9d88-48fe-801d-7d067458f7bd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:13:28 np0005548731 nova_compute[232433]: 2025-12-06 07:13:28.610 232437 DEBUG nova.compute.manager [req-8ba2d6d8-53bf-440e-b49c-be56780dc0cd req-1b59bb13-4c18-47ab-9ea8-5e9f4d076d3c 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: ad9de790-b008-4b68-8d84-f48391631cc3] Refreshing instance network info cache due to event network-changed-79516468-9d88-48fe-801d-7d067458f7bd. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  6 02:13:28 np0005548731 nova_compute[232433]: 2025-12-06 07:13:28.610 232437 DEBUG oslo_concurrency.lockutils [req-8ba2d6d8-53bf-440e-b49c-be56780dc0cd req-1b59bb13-4c18-47ab-9ea8-5e9f4d076d3c 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-ad9de790-b008-4b68-8d84-f48391631cc3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 02:13:28 np0005548731 nova_compute[232433]: 2025-12-06 07:13:28.743 232437 DEBUG nova.network.neutron [None req-08169bad-fe5e-4898-ab3b-9c0270e4f20c a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] [instance: ad9de790-b008-4b68-8d84-f48391631cc3] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec  6 02:13:30 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:13:30 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:13:30 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:13:30.304 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:13:30 np0005548731 nova_compute[232433]: 2025-12-06 07:13:30.319 232437 DEBUG nova.network.neutron [None req-08169bad-fe5e-4898-ab3b-9c0270e4f20c a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] [instance: ad9de790-b008-4b68-8d84-f48391631cc3] Updating instance_info_cache with network_info: [{"id": "79516468-9d88-48fe-801d-7d067458f7bd", "address": "fa:16:3e:16:6b:0c", "network": {"id": "49680c77-2db5-4d0f-bd5b-08899440c38e", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-432299576-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c297e84c3a9f48a9a82aebc9e5ade875", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap79516468-9d", "ovs_interfaceid": "79516468-9d88-48fe-801d-7d067458f7bd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 02:13:30 np0005548731 nova_compute[232433]: 2025-12-06 07:13:30.347 232437 DEBUG oslo_concurrency.lockutils [None req-08169bad-fe5e-4898-ab3b-9c0270e4f20c a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] Releasing lock "refresh_cache-ad9de790-b008-4b68-8d84-f48391631cc3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 02:13:30 np0005548731 nova_compute[232433]: 2025-12-06 07:13:30.348 232437 DEBUG nova.compute.manager [None req-08169bad-fe5e-4898-ab3b-9c0270e4f20c a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] [instance: ad9de790-b008-4b68-8d84-f48391631cc3] Instance network_info: |[{"id": "79516468-9d88-48fe-801d-7d067458f7bd", "address": "fa:16:3e:16:6b:0c", "network": {"id": "49680c77-2db5-4d0f-bd5b-08899440c38e", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-432299576-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c297e84c3a9f48a9a82aebc9e5ade875", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap79516468-9d", "ovs_interfaceid": "79516468-9d88-48fe-801d-7d067458f7bd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Dec  6 02:13:30 np0005548731 nova_compute[232433]: 2025-12-06 07:13:30.348 232437 DEBUG oslo_concurrency.lockutils [req-8ba2d6d8-53bf-440e-b49c-be56780dc0cd req-1b59bb13-4c18-47ab-9ea8-5e9f4d076d3c 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-ad9de790-b008-4b68-8d84-f48391631cc3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 02:13:30 np0005548731 nova_compute[232433]: 2025-12-06 07:13:30.349 232437 DEBUG nova.network.neutron [req-8ba2d6d8-53bf-440e-b49c-be56780dc0cd req-1b59bb13-4c18-47ab-9ea8-5e9f4d076d3c 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: ad9de790-b008-4b68-8d84-f48391631cc3] Refreshing network info cache for port 79516468-9d88-48fe-801d-7d067458f7bd _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  6 02:13:30 np0005548731 nova_compute[232433]: 2025-12-06 07:13:30.351 232437 DEBUG nova.virt.libvirt.driver [None req-08169bad-fe5e-4898-ab3b-9c0270e4f20c a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] [instance: ad9de790-b008-4b68-8d84-f48391631cc3] Start _get_guest_xml network_info=[{"id": "79516468-9d88-48fe-801d-7d067458f7bd", "address": "fa:16:3e:16:6b:0c", "network": {"id": "49680c77-2db5-4d0f-bd5b-08899440c38e", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-432299576-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c297e84c3a9f48a9a82aebc9e5ade875", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap79516468-9d", "ovs_interfaceid": "79516468-9d88-48fe-801d-7d067458f7bd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'device_type': 'disk', 'encryption_secret_uuid': None, 'disk_bus': 'virtio', 'boot_index': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'encryption_options': None, 'size': 0, 'encrypted': False, 'image_id': '6efab05d-c7cf-4770-a5c3-c806a2739063'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec  6 02:13:30 np0005548731 nova_compute[232433]: 2025-12-06 07:13:30.355 232437 WARNING nova.virt.libvirt.driver [None req-08169bad-fe5e-4898-ab3b-9c0270e4f20c a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  6 02:13:30 np0005548731 nova_compute[232433]: 2025-12-06 07:13:30.359 232437 DEBUG nova.virt.libvirt.host [None req-08169bad-fe5e-4898-ab3b-9c0270e4f20c a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec  6 02:13:30 np0005548731 nova_compute[232433]: 2025-12-06 07:13:30.360 232437 DEBUG nova.virt.libvirt.host [None req-08169bad-fe5e-4898-ab3b-9c0270e4f20c a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec  6 02:13:30 np0005548731 nova_compute[232433]: 2025-12-06 07:13:30.363 232437 DEBUG nova.virt.libvirt.host [None req-08169bad-fe5e-4898-ab3b-9c0270e4f20c a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec  6 02:13:30 np0005548731 nova_compute[232433]: 2025-12-06 07:13:30.364 232437 DEBUG nova.virt.libvirt.host [None req-08169bad-fe5e-4898-ab3b-9c0270e4f20c a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec  6 02:13:30 np0005548731 nova_compute[232433]: 2025-12-06 07:13:30.365 232437 DEBUG nova.virt.libvirt.driver [None req-08169bad-fe5e-4898-ab3b-9c0270e4f20c a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec  6 02:13:30 np0005548731 nova_compute[232433]: 2025-12-06 07:13:30.366 232437 DEBUG nova.virt.hardware [None req-08169bad-fe5e-4898-ab3b-9c0270e4f20c a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-06T06:56:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='25848a18-11d9-4f11-80b5-5d005675c76d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec  6 02:13:30 np0005548731 nova_compute[232433]: 2025-12-06 07:13:30.367 232437 DEBUG nova.virt.hardware [None req-08169bad-fe5e-4898-ab3b-9c0270e4f20c a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec  6 02:13:30 np0005548731 nova_compute[232433]: 2025-12-06 07:13:30.367 232437 DEBUG nova.virt.hardware [None req-08169bad-fe5e-4898-ab3b-9c0270e4f20c a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec  6 02:13:30 np0005548731 nova_compute[232433]: 2025-12-06 07:13:30.368 232437 DEBUG nova.virt.hardware [None req-08169bad-fe5e-4898-ab3b-9c0270e4f20c a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec  6 02:13:30 np0005548731 nova_compute[232433]: 2025-12-06 07:13:30.368 232437 DEBUG nova.virt.hardware [None req-08169bad-fe5e-4898-ab3b-9c0270e4f20c a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec  6 02:13:30 np0005548731 nova_compute[232433]: 2025-12-06 07:13:30.368 232437 DEBUG nova.virt.hardware [None req-08169bad-fe5e-4898-ab3b-9c0270e4f20c a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec  6 02:13:30 np0005548731 nova_compute[232433]: 2025-12-06 07:13:30.369 232437 DEBUG nova.virt.hardware [None req-08169bad-fe5e-4898-ab3b-9c0270e4f20c a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec  6 02:13:30 np0005548731 nova_compute[232433]: 2025-12-06 07:13:30.369 232437 DEBUG nova.virt.hardware [None req-08169bad-fe5e-4898-ab3b-9c0270e4f20c a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec  6 02:13:30 np0005548731 nova_compute[232433]: 2025-12-06 07:13:30.370 232437 DEBUG nova.virt.hardware [None req-08169bad-fe5e-4898-ab3b-9c0270e4f20c a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec  6 02:13:30 np0005548731 nova_compute[232433]: 2025-12-06 07:13:30.370 232437 DEBUG nova.virt.hardware [None req-08169bad-fe5e-4898-ab3b-9c0270e4f20c a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec  6 02:13:30 np0005548731 nova_compute[232433]: 2025-12-06 07:13:30.370 232437 DEBUG nova.virt.hardware [None req-08169bad-fe5e-4898-ab3b-9c0270e4f20c a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec  6 02:13:30 np0005548731 nova_compute[232433]: 2025-12-06 07:13:30.374 232437 DEBUG oslo_concurrency.processutils [None req-08169bad-fe5e-4898-ab3b-9c0270e4f20c a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:13:30 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:13:30 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:13:30 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:13:30.577 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:13:30 np0005548731 nova_compute[232433]: 2025-12-06 07:13:30.668 232437 INFO nova.virt.libvirt.driver [None req-6830fa2f-2ae3-422e-b7a1-34272f9311ed bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] [instance: db2b4e57-20af-415b-ad7b-ac5b7197acb6] Snapshot image upload complete#033[00m
Dec  6 02:13:30 np0005548731 nova_compute[232433]: 2025-12-06 07:13:30.670 232437 INFO nova.compute.manager [None req-6830fa2f-2ae3-422e-b7a1-34272f9311ed bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] [instance: db2b4e57-20af-415b-ad7b-ac5b7197acb6] Took 8.22 seconds to snapshot the instance on the hypervisor.#033[00m
Dec  6 02:13:30 np0005548731 nova_compute[232433]: 2025-12-06 07:13:30.860 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:13:30 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  6 02:13:30 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1271285825' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  6 02:13:31 np0005548731 nova_compute[232433]: 2025-12-06 07:13:31.696 232437 DEBUG oslo_concurrency.processutils [None req-08169bad-fe5e-4898-ab3b-9c0270e4f20c a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.322s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:13:31 np0005548731 nova_compute[232433]: 2025-12-06 07:13:31.747 232437 DEBUG nova.storage.rbd_utils [None req-08169bad-fe5e-4898-ab3b-9c0270e4f20c a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] rbd image ad9de790-b008-4b68-8d84-f48391631cc3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:13:31 np0005548731 nova_compute[232433]: 2025-12-06 07:13:31.751 232437 DEBUG oslo_concurrency.processutils [None req-08169bad-fe5e-4898-ab3b-9c0270e4f20c a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:13:31 np0005548731 nova_compute[232433]: 2025-12-06 07:13:31.775 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:13:32 np0005548731 nova_compute[232433]: 2025-12-06 07:13:32.074 232437 DEBUG nova.network.neutron [req-8ba2d6d8-53bf-440e-b49c-be56780dc0cd req-1b59bb13-4c18-47ab-9ea8-5e9f4d076d3c 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: ad9de790-b008-4b68-8d84-f48391631cc3] Updated VIF entry in instance network info cache for port 79516468-9d88-48fe-801d-7d067458f7bd. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  6 02:13:32 np0005548731 nova_compute[232433]: 2025-12-06 07:13:32.075 232437 DEBUG nova.network.neutron [req-8ba2d6d8-53bf-440e-b49c-be56780dc0cd req-1b59bb13-4c18-47ab-9ea8-5e9f4d076d3c 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: ad9de790-b008-4b68-8d84-f48391631cc3] Updating instance_info_cache with network_info: [{"id": "79516468-9d88-48fe-801d-7d067458f7bd", "address": "fa:16:3e:16:6b:0c", "network": {"id": "49680c77-2db5-4d0f-bd5b-08899440c38e", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-432299576-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c297e84c3a9f48a9a82aebc9e5ade875", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap79516468-9d", "ovs_interfaceid": "79516468-9d88-48fe-801d-7d067458f7bd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 02:13:32 np0005548731 nova_compute[232433]: 2025-12-06 07:13:32.099 232437 DEBUG oslo_concurrency.lockutils [req-8ba2d6d8-53bf-440e-b49c-be56780dc0cd req-1b59bb13-4c18-47ab-9ea8-5e9f4d076d3c 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-ad9de790-b008-4b68-8d84-f48391631cc3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 02:13:32 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e244 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:13:32 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  6 02:13:32 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3851881568' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  6 02:13:32 np0005548731 nova_compute[232433]: 2025-12-06 07:13:32.198 232437 DEBUG oslo_concurrency.processutils [None req-08169bad-fe5e-4898-ab3b-9c0270e4f20c a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:13:32 np0005548731 nova_compute[232433]: 2025-12-06 07:13:32.200 232437 DEBUG nova.virt.libvirt.vif [None req-08169bad-fe5e-4898-ab3b-9c0270e4f20c a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T07:13:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesOneServerNegativeTestJSON-server-264405700',display_name='tempest-ImagesOneServerNegativeTestJSON-server-264405700',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-imagesoneservernegativetestjson-server-264405700',id=64,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c297e84c3a9f48a9a82aebc9e5ade875',ramdisk_id='',reservation_id='r-p81emoj8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesOneServerNegativeTestJSON-324135674',owner_us
er_name='tempest-ImagesOneServerNegativeTestJSON-324135674-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:13:25Z,user_data=None,user_id='a1ed181a1103481fa4d0b29ce1009dca',uuid=ad9de790-b008-4b68-8d84-f48391631cc3,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "79516468-9d88-48fe-801d-7d067458f7bd", "address": "fa:16:3e:16:6b:0c", "network": {"id": "49680c77-2db5-4d0f-bd5b-08899440c38e", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-432299576-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c297e84c3a9f48a9a82aebc9e5ade875", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap79516468-9d", "ovs_interfaceid": "79516468-9d88-48fe-801d-7d067458f7bd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec  6 02:13:32 np0005548731 nova_compute[232433]: 2025-12-06 07:13:32.200 232437 DEBUG nova.network.os_vif_util [None req-08169bad-fe5e-4898-ab3b-9c0270e4f20c a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] Converting VIF {"id": "79516468-9d88-48fe-801d-7d067458f7bd", "address": "fa:16:3e:16:6b:0c", "network": {"id": "49680c77-2db5-4d0f-bd5b-08899440c38e", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-432299576-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c297e84c3a9f48a9a82aebc9e5ade875", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap79516468-9d", "ovs_interfaceid": "79516468-9d88-48fe-801d-7d067458f7bd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 02:13:32 np0005548731 nova_compute[232433]: 2025-12-06 07:13:32.201 232437 DEBUG nova.network.os_vif_util [None req-08169bad-fe5e-4898-ab3b-9c0270e4f20c a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:16:6b:0c,bridge_name='br-int',has_traffic_filtering=True,id=79516468-9d88-48fe-801d-7d067458f7bd,network=Network(49680c77-2db5-4d0f-bd5b-08899440c38e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap79516468-9d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 02:13:32 np0005548731 nova_compute[232433]: 2025-12-06 07:13:32.202 232437 DEBUG nova.objects.instance [None req-08169bad-fe5e-4898-ab3b-9c0270e4f20c a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] Lazy-loading 'pci_devices' on Instance uuid ad9de790-b008-4b68-8d84-f48391631cc3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:13:32 np0005548731 nova_compute[232433]: 2025-12-06 07:13:32.219 232437 DEBUG nova.virt.libvirt.driver [None req-08169bad-fe5e-4898-ab3b-9c0270e4f20c a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] [instance: ad9de790-b008-4b68-8d84-f48391631cc3] End _get_guest_xml xml=<domain type="kvm">
Dec  6 02:13:32 np0005548731 nova_compute[232433]:  <uuid>ad9de790-b008-4b68-8d84-f48391631cc3</uuid>
Dec  6 02:13:32 np0005548731 nova_compute[232433]:  <name>instance-00000040</name>
Dec  6 02:13:32 np0005548731 nova_compute[232433]:  <memory>131072</memory>
Dec  6 02:13:32 np0005548731 nova_compute[232433]:  <vcpu>1</vcpu>
Dec  6 02:13:32 np0005548731 nova_compute[232433]:  <metadata>
Dec  6 02:13:32 np0005548731 nova_compute[232433]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec  6 02:13:32 np0005548731 nova_compute[232433]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec  6 02:13:32 np0005548731 nova_compute[232433]:      <nova:name>tempest-ImagesOneServerNegativeTestJSON-server-264405700</nova:name>
Dec  6 02:13:32 np0005548731 nova_compute[232433]:      <nova:creationTime>2025-12-06 07:13:30</nova:creationTime>
Dec  6 02:13:32 np0005548731 nova_compute[232433]:      <nova:flavor name="m1.nano">
Dec  6 02:13:32 np0005548731 nova_compute[232433]:        <nova:memory>128</nova:memory>
Dec  6 02:13:32 np0005548731 nova_compute[232433]:        <nova:disk>1</nova:disk>
Dec  6 02:13:32 np0005548731 nova_compute[232433]:        <nova:swap>0</nova:swap>
Dec  6 02:13:32 np0005548731 nova_compute[232433]:        <nova:ephemeral>0</nova:ephemeral>
Dec  6 02:13:32 np0005548731 nova_compute[232433]:        <nova:vcpus>1</nova:vcpus>
Dec  6 02:13:32 np0005548731 nova_compute[232433]:      </nova:flavor>
Dec  6 02:13:32 np0005548731 nova_compute[232433]:      <nova:owner>
Dec  6 02:13:32 np0005548731 nova_compute[232433]:        <nova:user uuid="a1ed181a1103481fa4d0b29ce1009dca">tempest-ImagesOneServerNegativeTestJSON-324135674-project-member</nova:user>
Dec  6 02:13:32 np0005548731 nova_compute[232433]:        <nova:project uuid="c297e84c3a9f48a9a82aebc9e5ade875">tempest-ImagesOneServerNegativeTestJSON-324135674</nova:project>
Dec  6 02:13:32 np0005548731 nova_compute[232433]:      </nova:owner>
Dec  6 02:13:32 np0005548731 nova_compute[232433]:      <nova:root type="image" uuid="6efab05d-c7cf-4770-a5c3-c806a2739063"/>
Dec  6 02:13:32 np0005548731 nova_compute[232433]:      <nova:ports>
Dec  6 02:13:32 np0005548731 nova_compute[232433]:        <nova:port uuid="79516468-9d88-48fe-801d-7d067458f7bd">
Dec  6 02:13:32 np0005548731 nova_compute[232433]:          <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Dec  6 02:13:32 np0005548731 nova_compute[232433]:        </nova:port>
Dec  6 02:13:32 np0005548731 nova_compute[232433]:      </nova:ports>
Dec  6 02:13:32 np0005548731 nova_compute[232433]:    </nova:instance>
Dec  6 02:13:32 np0005548731 nova_compute[232433]:  </metadata>
Dec  6 02:13:32 np0005548731 nova_compute[232433]:  <sysinfo type="smbios">
Dec  6 02:13:32 np0005548731 nova_compute[232433]:    <system>
Dec  6 02:13:32 np0005548731 nova_compute[232433]:      <entry name="manufacturer">RDO</entry>
Dec  6 02:13:32 np0005548731 nova_compute[232433]:      <entry name="product">OpenStack Compute</entry>
Dec  6 02:13:32 np0005548731 nova_compute[232433]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec  6 02:13:32 np0005548731 nova_compute[232433]:      <entry name="serial">ad9de790-b008-4b68-8d84-f48391631cc3</entry>
Dec  6 02:13:32 np0005548731 nova_compute[232433]:      <entry name="uuid">ad9de790-b008-4b68-8d84-f48391631cc3</entry>
Dec  6 02:13:32 np0005548731 nova_compute[232433]:      <entry name="family">Virtual Machine</entry>
Dec  6 02:13:32 np0005548731 nova_compute[232433]:    </system>
Dec  6 02:13:32 np0005548731 nova_compute[232433]:  </sysinfo>
Dec  6 02:13:32 np0005548731 nova_compute[232433]:  <os>
Dec  6 02:13:32 np0005548731 nova_compute[232433]:    <type arch="x86_64" machine="q35">hvm</type>
Dec  6 02:13:32 np0005548731 nova_compute[232433]:    <boot dev="hd"/>
Dec  6 02:13:32 np0005548731 nova_compute[232433]:    <smbios mode="sysinfo"/>
Dec  6 02:13:32 np0005548731 nova_compute[232433]:  </os>
Dec  6 02:13:32 np0005548731 nova_compute[232433]:  <features>
Dec  6 02:13:32 np0005548731 nova_compute[232433]:    <acpi/>
Dec  6 02:13:32 np0005548731 nova_compute[232433]:    <apic/>
Dec  6 02:13:32 np0005548731 nova_compute[232433]:    <vmcoreinfo/>
Dec  6 02:13:32 np0005548731 nova_compute[232433]:  </features>
Dec  6 02:13:32 np0005548731 nova_compute[232433]:  <clock offset="utc">
Dec  6 02:13:32 np0005548731 nova_compute[232433]:    <timer name="pit" tickpolicy="delay"/>
Dec  6 02:13:32 np0005548731 nova_compute[232433]:    <timer name="rtc" tickpolicy="catchup"/>
Dec  6 02:13:32 np0005548731 nova_compute[232433]:    <timer name="hpet" present="no"/>
Dec  6 02:13:32 np0005548731 nova_compute[232433]:  </clock>
Dec  6 02:13:32 np0005548731 nova_compute[232433]:  <cpu mode="custom" match="exact">
Dec  6 02:13:32 np0005548731 nova_compute[232433]:    <model>Nehalem</model>
Dec  6 02:13:32 np0005548731 nova_compute[232433]:    <topology sockets="1" cores="1" threads="1"/>
Dec  6 02:13:32 np0005548731 nova_compute[232433]:  </cpu>
Dec  6 02:13:32 np0005548731 nova_compute[232433]:  <devices>
Dec  6 02:13:32 np0005548731 nova_compute[232433]:    <disk type="network" device="disk">
Dec  6 02:13:32 np0005548731 nova_compute[232433]:      <driver type="raw" cache="none"/>
Dec  6 02:13:32 np0005548731 nova_compute[232433]:      <source protocol="rbd" name="vms/ad9de790-b008-4b68-8d84-f48391631cc3_disk">
Dec  6 02:13:32 np0005548731 nova_compute[232433]:        <host name="192.168.122.100" port="6789"/>
Dec  6 02:13:32 np0005548731 nova_compute[232433]:        <host name="192.168.122.102" port="6789"/>
Dec  6 02:13:32 np0005548731 nova_compute[232433]:        <host name="192.168.122.101" port="6789"/>
Dec  6 02:13:32 np0005548731 nova_compute[232433]:      </source>
Dec  6 02:13:32 np0005548731 nova_compute[232433]:      <auth username="openstack">
Dec  6 02:13:32 np0005548731 nova_compute[232433]:        <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec  6 02:13:32 np0005548731 nova_compute[232433]:      </auth>
Dec  6 02:13:32 np0005548731 nova_compute[232433]:      <target dev="vda" bus="virtio"/>
Dec  6 02:13:32 np0005548731 nova_compute[232433]:    </disk>
Dec  6 02:13:32 np0005548731 nova_compute[232433]:    <disk type="network" device="cdrom">
Dec  6 02:13:32 np0005548731 nova_compute[232433]:      <driver type="raw" cache="none"/>
Dec  6 02:13:32 np0005548731 nova_compute[232433]:      <source protocol="rbd" name="vms/ad9de790-b008-4b68-8d84-f48391631cc3_disk.config">
Dec  6 02:13:32 np0005548731 nova_compute[232433]:        <host name="192.168.122.100" port="6789"/>
Dec  6 02:13:32 np0005548731 nova_compute[232433]:        <host name="192.168.122.102" port="6789"/>
Dec  6 02:13:32 np0005548731 nova_compute[232433]:        <host name="192.168.122.101" port="6789"/>
Dec  6 02:13:32 np0005548731 nova_compute[232433]:      </source>
Dec  6 02:13:32 np0005548731 nova_compute[232433]:      <auth username="openstack">
Dec  6 02:13:32 np0005548731 nova_compute[232433]:        <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec  6 02:13:32 np0005548731 nova_compute[232433]:      </auth>
Dec  6 02:13:32 np0005548731 nova_compute[232433]:      <target dev="sda" bus="sata"/>
Dec  6 02:13:32 np0005548731 nova_compute[232433]:    </disk>
Dec  6 02:13:32 np0005548731 nova_compute[232433]:    <interface type="ethernet">
Dec  6 02:13:32 np0005548731 nova_compute[232433]:      <mac address="fa:16:3e:16:6b:0c"/>
Dec  6 02:13:32 np0005548731 nova_compute[232433]:      <model type="virtio"/>
Dec  6 02:13:32 np0005548731 nova_compute[232433]:      <driver name="vhost" rx_queue_size="512"/>
Dec  6 02:13:32 np0005548731 nova_compute[232433]:      <mtu size="1442"/>
Dec  6 02:13:32 np0005548731 nova_compute[232433]:      <target dev="tap79516468-9d"/>
Dec  6 02:13:32 np0005548731 nova_compute[232433]:    </interface>
Dec  6 02:13:32 np0005548731 nova_compute[232433]:    <serial type="pty">
Dec  6 02:13:32 np0005548731 nova_compute[232433]:      <log file="/var/lib/nova/instances/ad9de790-b008-4b68-8d84-f48391631cc3/console.log" append="off"/>
Dec  6 02:13:32 np0005548731 nova_compute[232433]:    </serial>
Dec  6 02:13:32 np0005548731 nova_compute[232433]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec  6 02:13:32 np0005548731 nova_compute[232433]:    <video>
Dec  6 02:13:32 np0005548731 nova_compute[232433]:      <model type="virtio"/>
Dec  6 02:13:32 np0005548731 nova_compute[232433]:    </video>
Dec  6 02:13:32 np0005548731 nova_compute[232433]:    <input type="tablet" bus="usb"/>
Dec  6 02:13:32 np0005548731 nova_compute[232433]:    <rng model="virtio">
Dec  6 02:13:32 np0005548731 nova_compute[232433]:      <backend model="random">/dev/urandom</backend>
Dec  6 02:13:32 np0005548731 nova_compute[232433]:    </rng>
Dec  6 02:13:32 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root"/>
Dec  6 02:13:32 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:13:32 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:13:32 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:13:32 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:13:32 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:13:32 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:13:32 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:13:32 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:13:32 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:13:32 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:13:32 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:13:32 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:13:32 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:13:32 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:13:32 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:13:32 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:13:32 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:13:32 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:13:32 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:13:32 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:13:32 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:13:32 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:13:32 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:13:32 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:13:32 np0005548731 nova_compute[232433]:    <controller type="usb" index="0"/>
Dec  6 02:13:32 np0005548731 nova_compute[232433]:    <memballoon model="virtio">
Dec  6 02:13:32 np0005548731 nova_compute[232433]:      <stats period="10"/>
Dec  6 02:13:32 np0005548731 nova_compute[232433]:    </memballoon>
Dec  6 02:13:32 np0005548731 nova_compute[232433]:  </devices>
Dec  6 02:13:32 np0005548731 nova_compute[232433]: </domain>
Dec  6 02:13:32 np0005548731 nova_compute[232433]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec  6 02:13:32 np0005548731 nova_compute[232433]: 2025-12-06 07:13:32.220 232437 DEBUG nova.compute.manager [None req-08169bad-fe5e-4898-ab3b-9c0270e4f20c a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] [instance: ad9de790-b008-4b68-8d84-f48391631cc3] Preparing to wait for external event network-vif-plugged-79516468-9d88-48fe-801d-7d067458f7bd prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec  6 02:13:32 np0005548731 nova_compute[232433]: 2025-12-06 07:13:32.221 232437 DEBUG oslo_concurrency.lockutils [None req-08169bad-fe5e-4898-ab3b-9c0270e4f20c a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] Acquiring lock "ad9de790-b008-4b68-8d84-f48391631cc3-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:13:32 np0005548731 nova_compute[232433]: 2025-12-06 07:13:32.221 232437 DEBUG oslo_concurrency.lockutils [None req-08169bad-fe5e-4898-ab3b-9c0270e4f20c a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] Lock "ad9de790-b008-4b68-8d84-f48391631cc3-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:13:32 np0005548731 nova_compute[232433]: 2025-12-06 07:13:32.221 232437 DEBUG oslo_concurrency.lockutils [None req-08169bad-fe5e-4898-ab3b-9c0270e4f20c a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] Lock "ad9de790-b008-4b68-8d84-f48391631cc3-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:13:32 np0005548731 nova_compute[232433]: 2025-12-06 07:13:32.222 232437 DEBUG nova.virt.libvirt.vif [None req-08169bad-fe5e-4898-ab3b-9c0270e4f20c a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T07:13:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesOneServerNegativeTestJSON-server-264405700',display_name='tempest-ImagesOneServerNegativeTestJSON-server-264405700',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-imagesoneservernegativetestjson-server-264405700',id=64,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c297e84c3a9f48a9a82aebc9e5ade875',ramdisk_id='',reservation_id='r-p81emoj8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesOneServerNegativeTestJSON-324135674
',owner_user_name='tempest-ImagesOneServerNegativeTestJSON-324135674-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:13:25Z,user_data=None,user_id='a1ed181a1103481fa4d0b29ce1009dca',uuid=ad9de790-b008-4b68-8d84-f48391631cc3,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "79516468-9d88-48fe-801d-7d067458f7bd", "address": "fa:16:3e:16:6b:0c", "network": {"id": "49680c77-2db5-4d0f-bd5b-08899440c38e", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-432299576-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c297e84c3a9f48a9a82aebc9e5ade875", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap79516468-9d", "ovs_interfaceid": "79516468-9d88-48fe-801d-7d067458f7bd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec  6 02:13:32 np0005548731 nova_compute[232433]: 2025-12-06 07:13:32.222 232437 DEBUG nova.network.os_vif_util [None req-08169bad-fe5e-4898-ab3b-9c0270e4f20c a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] Converting VIF {"id": "79516468-9d88-48fe-801d-7d067458f7bd", "address": "fa:16:3e:16:6b:0c", "network": {"id": "49680c77-2db5-4d0f-bd5b-08899440c38e", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-432299576-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c297e84c3a9f48a9a82aebc9e5ade875", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap79516468-9d", "ovs_interfaceid": "79516468-9d88-48fe-801d-7d067458f7bd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 02:13:32 np0005548731 nova_compute[232433]: 2025-12-06 07:13:32.223 232437 DEBUG nova.network.os_vif_util [None req-08169bad-fe5e-4898-ab3b-9c0270e4f20c a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:16:6b:0c,bridge_name='br-int',has_traffic_filtering=True,id=79516468-9d88-48fe-801d-7d067458f7bd,network=Network(49680c77-2db5-4d0f-bd5b-08899440c38e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap79516468-9d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 02:13:32 np0005548731 nova_compute[232433]: 2025-12-06 07:13:32.223 232437 DEBUG os_vif [None req-08169bad-fe5e-4898-ab3b-9c0270e4f20c a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:16:6b:0c,bridge_name='br-int',has_traffic_filtering=True,id=79516468-9d88-48fe-801d-7d067458f7bd,network=Network(49680c77-2db5-4d0f-bd5b-08899440c38e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap79516468-9d') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec  6 02:13:32 np0005548731 nova_compute[232433]: 2025-12-06 07:13:32.224 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:13:32 np0005548731 nova_compute[232433]: 2025-12-06 07:13:32.224 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:13:32 np0005548731 nova_compute[232433]: 2025-12-06 07:13:32.225 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  6 02:13:32 np0005548731 nova_compute[232433]: 2025-12-06 07:13:32.228 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:13:32 np0005548731 nova_compute[232433]: 2025-12-06 07:13:32.228 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap79516468-9d, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:13:32 np0005548731 nova_compute[232433]: 2025-12-06 07:13:32.228 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap79516468-9d, col_values=(('external_ids', {'iface-id': '79516468-9d88-48fe-801d-7d067458f7bd', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:16:6b:0c', 'vm-uuid': 'ad9de790-b008-4b68-8d84-f48391631cc3'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:13:32 np0005548731 NetworkManager[49182]: <info>  [1765005212.2306] manager: (tap79516468-9d): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/111)
Dec  6 02:13:32 np0005548731 nova_compute[232433]: 2025-12-06 07:13:32.232 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  6 02:13:32 np0005548731 nova_compute[232433]: 2025-12-06 07:13:32.236 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:13:32 np0005548731 nova_compute[232433]: 2025-12-06 07:13:32.236 232437 INFO os_vif [None req-08169bad-fe5e-4898-ab3b-9c0270e4f20c a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:16:6b:0c,bridge_name='br-int',has_traffic_filtering=True,id=79516468-9d88-48fe-801d-7d067458f7bd,network=Network(49680c77-2db5-4d0f-bd5b-08899440c38e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap79516468-9d')#033[00m
Dec  6 02:13:32 np0005548731 nova_compute[232433]: 2025-12-06 07:13:32.300 232437 DEBUG nova.virt.libvirt.driver [None req-08169bad-fe5e-4898-ab3b-9c0270e4f20c a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  6 02:13:32 np0005548731 nova_compute[232433]: 2025-12-06 07:13:32.301 232437 DEBUG nova.virt.libvirt.driver [None req-08169bad-fe5e-4898-ab3b-9c0270e4f20c a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  6 02:13:32 np0005548731 nova_compute[232433]: 2025-12-06 07:13:32.302 232437 DEBUG nova.virt.libvirt.driver [None req-08169bad-fe5e-4898-ab3b-9c0270e4f20c a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] No VIF found with MAC fa:16:3e:16:6b:0c, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec  6 02:13:32 np0005548731 nova_compute[232433]: 2025-12-06 07:13:32.302 232437 INFO nova.virt.libvirt.driver [None req-08169bad-fe5e-4898-ab3b-9c0270e4f20c a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] [instance: ad9de790-b008-4b68-8d84-f48391631cc3] Using config drive#033[00m
Dec  6 02:13:32 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:13:32 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:13:32 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:13:32.305 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:13:32 np0005548731 nova_compute[232433]: 2025-12-06 07:13:32.343 232437 DEBUG nova.storage.rbd_utils [None req-08169bad-fe5e-4898-ab3b-9c0270e4f20c a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] rbd image ad9de790-b008-4b68-8d84-f48391631cc3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:13:32 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:13:32 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:13:32 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:13:32.580 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:13:32 np0005548731 nova_compute[232433]: 2025-12-06 07:13:32.718 232437 INFO nova.virt.libvirt.driver [None req-08169bad-fe5e-4898-ab3b-9c0270e4f20c a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] [instance: ad9de790-b008-4b68-8d84-f48391631cc3] Creating config drive at /var/lib/nova/instances/ad9de790-b008-4b68-8d84-f48391631cc3/disk.config#033[00m
Dec  6 02:13:32 np0005548731 nova_compute[232433]: 2025-12-06 07:13:32.724 232437 DEBUG oslo_concurrency.processutils [None req-08169bad-fe5e-4898-ab3b-9c0270e4f20c a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/ad9de790-b008-4b68-8d84-f48391631cc3/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp9k4d78wv execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:13:32 np0005548731 nova_compute[232433]: 2025-12-06 07:13:32.855 232437 DEBUG oslo_concurrency.processutils [None req-08169bad-fe5e-4898-ab3b-9c0270e4f20c a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/ad9de790-b008-4b68-8d84-f48391631cc3/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp9k4d78wv" returned: 0 in 0.131s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:13:32 np0005548731 nova_compute[232433]: 2025-12-06 07:13:32.881 232437 DEBUG nova.storage.rbd_utils [None req-08169bad-fe5e-4898-ab3b-9c0270e4f20c a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] rbd image ad9de790-b008-4b68-8d84-f48391631cc3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:13:32 np0005548731 nova_compute[232433]: 2025-12-06 07:13:32.884 232437 DEBUG oslo_concurrency.processutils [None req-08169bad-fe5e-4898-ab3b-9c0270e4f20c a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/ad9de790-b008-4b68-8d84-f48391631cc3/disk.config ad9de790-b008-4b68-8d84-f48391631cc3_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:13:33 np0005548731 nova_compute[232433]: 2025-12-06 07:13:33.040 232437 DEBUG oslo_concurrency.processutils [None req-08169bad-fe5e-4898-ab3b-9c0270e4f20c a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/ad9de790-b008-4b68-8d84-f48391631cc3/disk.config ad9de790-b008-4b68-8d84-f48391631cc3_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.155s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:13:33 np0005548731 nova_compute[232433]: 2025-12-06 07:13:33.041 232437 INFO nova.virt.libvirt.driver [None req-08169bad-fe5e-4898-ab3b-9c0270e4f20c a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] [instance: ad9de790-b008-4b68-8d84-f48391631cc3] Deleting local config drive /var/lib/nova/instances/ad9de790-b008-4b68-8d84-f48391631cc3/disk.config because it was imported into RBD.#033[00m
Dec  6 02:13:33 np0005548731 kernel: tap79516468-9d: entered promiscuous mode
Dec  6 02:13:33 np0005548731 NetworkManager[49182]: <info>  [1765005213.0844] manager: (tap79516468-9d): new Tun device (/org/freedesktop/NetworkManager/Devices/112)
Dec  6 02:13:33 np0005548731 nova_compute[232433]: 2025-12-06 07:13:33.085 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:13:33 np0005548731 ovn_controller[133927]: 2025-12-06T07:13:33Z|00188|binding|INFO|Claiming lport 79516468-9d88-48fe-801d-7d067458f7bd for this chassis.
Dec  6 02:13:33 np0005548731 ovn_controller[133927]: 2025-12-06T07:13:33Z|00189|binding|INFO|79516468-9d88-48fe-801d-7d067458f7bd: Claiming fa:16:3e:16:6b:0c 10.100.0.12
Dec  6 02:13:33 np0005548731 systemd-udevd[259336]: Network interface NamePolicy= disabled on kernel command line.
Dec  6 02:13:33 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:13:33.115 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:16:6b:0c 10.100.0.12'], port_security=['fa:16:3e:16:6b:0c 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'ad9de790-b008-4b68-8d84-f48391631cc3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-49680c77-2db5-4d0f-bd5b-08899440c38e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c297e84c3a9f48a9a82aebc9e5ade875', 'neutron:revision_number': '2', 'neutron:security_group_ids': '44ffcb05-d145-4d07-b800-cc9e3941da49', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e2602f87-ec0b-4d1c-8f8b-eee8bbcfddb2, chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], logical_port=79516468-9d88-48fe-801d-7d067458f7bd) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 02:13:33 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:13:33.117 143965 INFO neutron.agent.ovn.metadata.agent [-] Port 79516468-9d88-48fe-801d-7d067458f7bd in datapath 49680c77-2db5-4d0f-bd5b-08899440c38e bound to our chassis#033[00m
Dec  6 02:13:33 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:13:33.118 143965 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 49680c77-2db5-4d0f-bd5b-08899440c38e#033[00m
Dec  6 02:13:33 np0005548731 NetworkManager[49182]: <info>  [1765005213.1272] device (tap79516468-9d): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec  6 02:13:33 np0005548731 NetworkManager[49182]: <info>  [1765005213.1288] device (tap79516468-9d): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec  6 02:13:33 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:13:33.130 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[b32e4826-3b1d-4ff5-bc6e-9a49405df68b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:13:33 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:13:33.131 143965 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap49680c77-21 in ovnmeta-49680c77-2db5-4d0f-bd5b-08899440c38e namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Dec  6 02:13:33 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:13:33.133 236668 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap49680c77-20 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Dec  6 02:13:33 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:13:33.133 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[747bc5df-4741-43b3-8d26-bcd55d8a5ac0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:13:33 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:13:33.134 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[479708c0-5937-47db-825e-be79b8c3fff0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:13:33 np0005548731 systemd-machined[195355]: New machine qemu-27-instance-00000040.
Dec  6 02:13:33 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:13:33.144 144079 DEBUG oslo.privsep.daemon [-] privsep: reply[9551b916-8b6c-4cb0-960a-742f29f9c28d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:13:33 np0005548731 systemd[1]: Started Virtual Machine qemu-27-instance-00000040.
Dec  6 02:13:33 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:13:33.168 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[30fe7389-6ae1-4939-b32c-a3d517fcb4d4]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:13:33 np0005548731 nova_compute[232433]: 2025-12-06 07:13:33.169 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:13:33 np0005548731 ovn_controller[133927]: 2025-12-06T07:13:33Z|00190|binding|INFO|Setting lport 79516468-9d88-48fe-801d-7d067458f7bd ovn-installed in OVS
Dec  6 02:13:33 np0005548731 ovn_controller[133927]: 2025-12-06T07:13:33Z|00191|binding|INFO|Setting lport 79516468-9d88-48fe-801d-7d067458f7bd up in Southbound
Dec  6 02:13:33 np0005548731 nova_compute[232433]: 2025-12-06 07:13:33.174 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:13:33 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:13:33.200 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[e930b372-5230-4201-9ee1-d1e5b6bbf9d0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:13:33 np0005548731 systemd-udevd[259340]: Network interface NamePolicy= disabled on kernel command line.
Dec  6 02:13:33 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:13:33.206 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[59fe2c69-595a-459b-a3fc-6937614fd2c5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:13:33 np0005548731 NetworkManager[49182]: <info>  [1765005213.2068] manager: (tap49680c77-20): new Veth device (/org/freedesktop/NetworkManager/Devices/113)
Dec  6 02:13:33 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:13:33.238 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[49963d10-659a-4674-b574-d02e38edd554]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:13:33 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:13:33.242 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[b0fc4888-56a9-41b1-af1c-d5d498cdc3e1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:13:33 np0005548731 NetworkManager[49182]: <info>  [1765005213.2655] device (tap49680c77-20): carrier: link connected
Dec  6 02:13:33 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:13:33.272 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[df3a5413-f1f3-4e2f-8efa-6d66ff76626c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:13:33 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:13:33.289 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[aee60f5c-21e0-475b-b198-6a812c8dea22]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap49680c77-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a7:86:b2'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 66], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 547527, 'reachable_time': 36840, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 148, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 148, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 259372, 'error': None, 'target': 'ovnmeta-49680c77-2db5-4d0f-bd5b-08899440c38e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:13:33 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:13:33.305 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[48a6dc1f-ee3f-4092-9371-5b0633217ede]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fea7:86b2'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 547527, 'tstamp': 547527}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 259373, 'error': None, 'target': 'ovnmeta-49680c77-2db5-4d0f-bd5b-08899440c38e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:13:33 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:13:33.325 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[42c55c91-e30e-4e09-b7cb-8f06dac18555]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap49680c77-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a7:86:b2'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 66], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 547527, 'reachable_time': 36840, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 148, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 148, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 259374, 'error': None, 'target': 'ovnmeta-49680c77-2db5-4d0f-bd5b-08899440c38e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:13:33 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:13:33.360 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[45ff0c5c-5ffa-4f90-a161-f6e972c6bb33]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:13:33 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:13:33.416 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[858e1d69-9ce2-4f24-8235-330263df1fd8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:13:33 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:13:33.418 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap49680c77-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:13:33 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:13:33.418 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  6 02:13:33 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:13:33.418 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap49680c77-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:13:33 np0005548731 NetworkManager[49182]: <info>  [1765005213.4701] manager: (tap49680c77-20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/114)
Dec  6 02:13:33 np0005548731 kernel: tap49680c77-20: entered promiscuous mode
Dec  6 02:13:33 np0005548731 nova_compute[232433]: 2025-12-06 07:13:33.469 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:13:33 np0005548731 nova_compute[232433]: 2025-12-06 07:13:33.471 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:13:33 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:13:33.472 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap49680c77-20, col_values=(('external_ids', {'iface-id': '585005cf-d18f-4bcc-9942-2eb7dec20acd'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:13:33 np0005548731 ovn_controller[133927]: 2025-12-06T07:13:33Z|00192|binding|INFO|Releasing lport 585005cf-d18f-4bcc-9942-2eb7dec20acd from this chassis (sb_readonly=0)
Dec  6 02:13:33 np0005548731 nova_compute[232433]: 2025-12-06 07:13:33.474 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:13:33 np0005548731 nova_compute[232433]: 2025-12-06 07:13:33.491 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:13:33 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:13:33.492 143965 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/49680c77-2db5-4d0f-bd5b-08899440c38e.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/49680c77-2db5-4d0f-bd5b-08899440c38e.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Dec  6 02:13:33 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:13:33.493 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[582f5eb1-f3d5-4342-b1c8-d0054028af06]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:13:33 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:13:33.494 143965 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec  6 02:13:33 np0005548731 ovn_metadata_agent[143960]: global
Dec  6 02:13:33 np0005548731 ovn_metadata_agent[143960]:    log         /dev/log local0 debug
Dec  6 02:13:33 np0005548731 ovn_metadata_agent[143960]:    log-tag     haproxy-metadata-proxy-49680c77-2db5-4d0f-bd5b-08899440c38e
Dec  6 02:13:33 np0005548731 ovn_metadata_agent[143960]:    user        root
Dec  6 02:13:33 np0005548731 ovn_metadata_agent[143960]:    group       root
Dec  6 02:13:33 np0005548731 ovn_metadata_agent[143960]:    maxconn     1024
Dec  6 02:13:33 np0005548731 ovn_metadata_agent[143960]:    pidfile     /var/lib/neutron/external/pids/49680c77-2db5-4d0f-bd5b-08899440c38e.pid.haproxy
Dec  6 02:13:33 np0005548731 ovn_metadata_agent[143960]:    daemon
Dec  6 02:13:33 np0005548731 ovn_metadata_agent[143960]: 
Dec  6 02:13:33 np0005548731 ovn_metadata_agent[143960]: defaults
Dec  6 02:13:33 np0005548731 ovn_metadata_agent[143960]:    log global
Dec  6 02:13:33 np0005548731 ovn_metadata_agent[143960]:    mode http
Dec  6 02:13:33 np0005548731 ovn_metadata_agent[143960]:    option httplog
Dec  6 02:13:33 np0005548731 ovn_metadata_agent[143960]:    option dontlognull
Dec  6 02:13:33 np0005548731 ovn_metadata_agent[143960]:    option http-server-close
Dec  6 02:13:33 np0005548731 ovn_metadata_agent[143960]:    option forwardfor
Dec  6 02:13:33 np0005548731 ovn_metadata_agent[143960]:    retries                 3
Dec  6 02:13:33 np0005548731 ovn_metadata_agent[143960]:    timeout http-request    30s
Dec  6 02:13:33 np0005548731 ovn_metadata_agent[143960]:    timeout connect         30s
Dec  6 02:13:33 np0005548731 ovn_metadata_agent[143960]:    timeout client          32s
Dec  6 02:13:33 np0005548731 ovn_metadata_agent[143960]:    timeout server          32s
Dec  6 02:13:33 np0005548731 ovn_metadata_agent[143960]:    timeout http-keep-alive 30s
Dec  6 02:13:33 np0005548731 ovn_metadata_agent[143960]: 
Dec  6 02:13:33 np0005548731 ovn_metadata_agent[143960]: 
Dec  6 02:13:33 np0005548731 ovn_metadata_agent[143960]: listen listener
Dec  6 02:13:33 np0005548731 ovn_metadata_agent[143960]:    bind 169.254.169.254:80
Dec  6 02:13:33 np0005548731 ovn_metadata_agent[143960]:    server metadata /var/lib/neutron/metadata_proxy
Dec  6 02:13:33 np0005548731 ovn_metadata_agent[143960]:    http-request add-header X-OVN-Network-ID 49680c77-2db5-4d0f-bd5b-08899440c38e
Dec  6 02:13:33 np0005548731 ovn_metadata_agent[143960]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Dec  6 02:13:33 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:13:33.495 143965 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-49680c77-2db5-4d0f-bd5b-08899440c38e', 'env', 'PROCESS_TAG=haproxy-49680c77-2db5-4d0f-bd5b-08899440c38e', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/49680c77-2db5-4d0f-bd5b-08899440c38e.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Dec  6 02:13:33 np0005548731 nova_compute[232433]: 2025-12-06 07:13:33.578 232437 DEBUG nova.virt.driver [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Emitting event <LifecycleEvent: 1765005213.5777843, ad9de790-b008-4b68-8d84-f48391631cc3 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 02:13:33 np0005548731 nova_compute[232433]: 2025-12-06 07:13:33.578 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: ad9de790-b008-4b68-8d84-f48391631cc3] VM Started (Lifecycle Event)#033[00m
Dec  6 02:13:33 np0005548731 nova_compute[232433]: 2025-12-06 07:13:33.611 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: ad9de790-b008-4b68-8d84-f48391631cc3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:13:33 np0005548731 nova_compute[232433]: 2025-12-06 07:13:33.616 232437 DEBUG nova.virt.driver [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Emitting event <LifecycleEvent: 1765005213.5779335, ad9de790-b008-4b68-8d84-f48391631cc3 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 02:13:33 np0005548731 nova_compute[232433]: 2025-12-06 07:13:33.617 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: ad9de790-b008-4b68-8d84-f48391631cc3] VM Paused (Lifecycle Event)#033[00m
Dec  6 02:13:33 np0005548731 nova_compute[232433]: 2025-12-06 07:13:33.646 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: ad9de790-b008-4b68-8d84-f48391631cc3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:13:33 np0005548731 nova_compute[232433]: 2025-12-06 07:13:33.651 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: ad9de790-b008-4b68-8d84-f48391631cc3] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  6 02:13:33 np0005548731 nova_compute[232433]: 2025-12-06 07:13:33.675 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: ad9de790-b008-4b68-8d84-f48391631cc3] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  6 02:13:33 np0005548731 ovn_controller[133927]: 2025-12-06T07:13:33Z|00024|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:76:88:06 10.100.0.8
Dec  6 02:13:33 np0005548731 ovn_controller[133927]: 2025-12-06T07:13:33Z|00025|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:76:88:06 10.100.0.8
Dec  6 02:13:33 np0005548731 podman[259448]: 2025-12-06 07:13:33.89267636 +0000 UTC m=+0.054883247 container create 6bc730bf59ee5d2bb94e469da7d609da30fcebb83ab4a3ea2c826be8bdcc6113 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-49680c77-2db5-4d0f-bd5b-08899440c38e, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec  6 02:13:33 np0005548731 systemd[1]: Started libpod-conmon-6bc730bf59ee5d2bb94e469da7d609da30fcebb83ab4a3ea2c826be8bdcc6113.scope.
Dec  6 02:13:33 np0005548731 podman[259448]: 2025-12-06 07:13:33.862581322 +0000 UTC m=+0.024788229 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec  6 02:13:33 np0005548731 systemd[1]: Started libcrun container.
Dec  6 02:13:33 np0005548731 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed31754b4c6244cd0a9f8a7ab0b35dc2d7554982dfaa6af6b2707d66d95254a2/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec  6 02:13:34 np0005548731 podman[259448]: 2025-12-06 07:13:34.040112839 +0000 UTC m=+0.202319756 container init 6bc730bf59ee5d2bb94e469da7d609da30fcebb83ab4a3ea2c826be8bdcc6113 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-49680c77-2db5-4d0f-bd5b-08899440c38e, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3)
Dec  6 02:13:34 np0005548731 podman[259448]: 2025-12-06 07:13:34.046246189 +0000 UTC m=+0.208453076 container start 6bc730bf59ee5d2bb94e469da7d609da30fcebb83ab4a3ea2c826be8bdcc6113 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-49680c77-2db5-4d0f-bd5b-08899440c38e, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  6 02:13:34 np0005548731 neutron-haproxy-ovnmeta-49680c77-2db5-4d0f-bd5b-08899440c38e[259464]: [NOTICE]   (259468) : New worker (259470) forked
Dec  6 02:13:34 np0005548731 neutron-haproxy-ovnmeta-49680c77-2db5-4d0f-bd5b-08899440c38e[259464]: [NOTICE]   (259468) : Loading success.
Dec  6 02:13:34 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:13:34 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:13:34 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:13:34.306 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:13:34 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:13:34 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:13:34 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:13:34.582 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:13:34 np0005548731 nova_compute[232433]: 2025-12-06 07:13:34.952 232437 DEBUG nova.compute.manager [req-3e4820f5-b865-48a6-8db6-60127f8e451e req-cb4c07cc-28ce-428d-884a-ab585be004e5 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: ad9de790-b008-4b68-8d84-f48391631cc3] Received event network-vif-plugged-79516468-9d88-48fe-801d-7d067458f7bd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:13:34 np0005548731 nova_compute[232433]: 2025-12-06 07:13:34.953 232437 DEBUG oslo_concurrency.lockutils [req-3e4820f5-b865-48a6-8db6-60127f8e451e req-cb4c07cc-28ce-428d-884a-ab585be004e5 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "ad9de790-b008-4b68-8d84-f48391631cc3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:13:34 np0005548731 nova_compute[232433]: 2025-12-06 07:13:34.953 232437 DEBUG oslo_concurrency.lockutils [req-3e4820f5-b865-48a6-8db6-60127f8e451e req-cb4c07cc-28ce-428d-884a-ab585be004e5 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "ad9de790-b008-4b68-8d84-f48391631cc3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:13:34 np0005548731 nova_compute[232433]: 2025-12-06 07:13:34.954 232437 DEBUG oslo_concurrency.lockutils [req-3e4820f5-b865-48a6-8db6-60127f8e451e req-cb4c07cc-28ce-428d-884a-ab585be004e5 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "ad9de790-b008-4b68-8d84-f48391631cc3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:13:34 np0005548731 nova_compute[232433]: 2025-12-06 07:13:34.954 232437 DEBUG nova.compute.manager [req-3e4820f5-b865-48a6-8db6-60127f8e451e req-cb4c07cc-28ce-428d-884a-ab585be004e5 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: ad9de790-b008-4b68-8d84-f48391631cc3] Processing event network-vif-plugged-79516468-9d88-48fe-801d-7d067458f7bd _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Dec  6 02:13:34 np0005548731 nova_compute[232433]: 2025-12-06 07:13:34.955 232437 DEBUG nova.compute.manager [req-3e4820f5-b865-48a6-8db6-60127f8e451e req-cb4c07cc-28ce-428d-884a-ab585be004e5 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: ad9de790-b008-4b68-8d84-f48391631cc3] Received event network-vif-plugged-79516468-9d88-48fe-801d-7d067458f7bd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:13:34 np0005548731 nova_compute[232433]: 2025-12-06 07:13:34.955 232437 DEBUG oslo_concurrency.lockutils [req-3e4820f5-b865-48a6-8db6-60127f8e451e req-cb4c07cc-28ce-428d-884a-ab585be004e5 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "ad9de790-b008-4b68-8d84-f48391631cc3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:13:34 np0005548731 nova_compute[232433]: 2025-12-06 07:13:34.955 232437 DEBUG oslo_concurrency.lockutils [req-3e4820f5-b865-48a6-8db6-60127f8e451e req-cb4c07cc-28ce-428d-884a-ab585be004e5 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "ad9de790-b008-4b68-8d84-f48391631cc3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:13:34 np0005548731 nova_compute[232433]: 2025-12-06 07:13:34.956 232437 DEBUG oslo_concurrency.lockutils [req-3e4820f5-b865-48a6-8db6-60127f8e451e req-cb4c07cc-28ce-428d-884a-ab585be004e5 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "ad9de790-b008-4b68-8d84-f48391631cc3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:13:34 np0005548731 nova_compute[232433]: 2025-12-06 07:13:34.956 232437 DEBUG nova.compute.manager [req-3e4820f5-b865-48a6-8db6-60127f8e451e req-cb4c07cc-28ce-428d-884a-ab585be004e5 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: ad9de790-b008-4b68-8d84-f48391631cc3] No waiting events found dispatching network-vif-plugged-79516468-9d88-48fe-801d-7d067458f7bd pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 02:13:34 np0005548731 nova_compute[232433]: 2025-12-06 07:13:34.956 232437 WARNING nova.compute.manager [req-3e4820f5-b865-48a6-8db6-60127f8e451e req-cb4c07cc-28ce-428d-884a-ab585be004e5 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: ad9de790-b008-4b68-8d84-f48391631cc3] Received unexpected event network-vif-plugged-79516468-9d88-48fe-801d-7d067458f7bd for instance with vm_state building and task_state spawning.#033[00m
Dec  6 02:13:34 np0005548731 nova_compute[232433]: 2025-12-06 07:13:34.957 232437 DEBUG nova.compute.manager [None req-08169bad-fe5e-4898-ab3b-9c0270e4f20c a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] [instance: ad9de790-b008-4b68-8d84-f48391631cc3] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec  6 02:13:34 np0005548731 nova_compute[232433]: 2025-12-06 07:13:34.963 232437 DEBUG nova.virt.driver [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Emitting event <LifecycleEvent: 1765005214.9628088, ad9de790-b008-4b68-8d84-f48391631cc3 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 02:13:34 np0005548731 nova_compute[232433]: 2025-12-06 07:13:34.963 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: ad9de790-b008-4b68-8d84-f48391631cc3] VM Resumed (Lifecycle Event)#033[00m
Dec  6 02:13:34 np0005548731 nova_compute[232433]: 2025-12-06 07:13:34.967 232437 DEBUG nova.virt.libvirt.driver [None req-08169bad-fe5e-4898-ab3b-9c0270e4f20c a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] [instance: ad9de790-b008-4b68-8d84-f48391631cc3] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Dec  6 02:13:34 np0005548731 nova_compute[232433]: 2025-12-06 07:13:34.973 232437 INFO nova.virt.libvirt.driver [-] [instance: ad9de790-b008-4b68-8d84-f48391631cc3] Instance spawned successfully.#033[00m
Dec  6 02:13:34 np0005548731 nova_compute[232433]: 2025-12-06 07:13:34.973 232437 DEBUG nova.virt.libvirt.driver [None req-08169bad-fe5e-4898-ab3b-9c0270e4f20c a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] [instance: ad9de790-b008-4b68-8d84-f48391631cc3] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Dec  6 02:13:35 np0005548731 nova_compute[232433]: 2025-12-06 07:13:35.002 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: ad9de790-b008-4b68-8d84-f48391631cc3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:13:35 np0005548731 nova_compute[232433]: 2025-12-06 07:13:35.007 232437 DEBUG nova.virt.libvirt.driver [None req-08169bad-fe5e-4898-ab3b-9c0270e4f20c a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] [instance: ad9de790-b008-4b68-8d84-f48391631cc3] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 02:13:35 np0005548731 nova_compute[232433]: 2025-12-06 07:13:35.007 232437 DEBUG nova.virt.libvirt.driver [None req-08169bad-fe5e-4898-ab3b-9c0270e4f20c a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] [instance: ad9de790-b008-4b68-8d84-f48391631cc3] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 02:13:35 np0005548731 nova_compute[232433]: 2025-12-06 07:13:35.008 232437 DEBUG nova.virt.libvirt.driver [None req-08169bad-fe5e-4898-ab3b-9c0270e4f20c a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] [instance: ad9de790-b008-4b68-8d84-f48391631cc3] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 02:13:35 np0005548731 nova_compute[232433]: 2025-12-06 07:13:35.009 232437 DEBUG nova.virt.libvirt.driver [None req-08169bad-fe5e-4898-ab3b-9c0270e4f20c a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] [instance: ad9de790-b008-4b68-8d84-f48391631cc3] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 02:13:35 np0005548731 nova_compute[232433]: 2025-12-06 07:13:35.009 232437 DEBUG nova.virt.libvirt.driver [None req-08169bad-fe5e-4898-ab3b-9c0270e4f20c a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] [instance: ad9de790-b008-4b68-8d84-f48391631cc3] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 02:13:35 np0005548731 nova_compute[232433]: 2025-12-06 07:13:35.010 232437 DEBUG nova.virt.libvirt.driver [None req-08169bad-fe5e-4898-ab3b-9c0270e4f20c a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] [instance: ad9de790-b008-4b68-8d84-f48391631cc3] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 02:13:35 np0005548731 nova_compute[232433]: 2025-12-06 07:13:35.016 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: ad9de790-b008-4b68-8d84-f48391631cc3] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  6 02:13:35 np0005548731 nova_compute[232433]: 2025-12-06 07:13:35.061 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: ad9de790-b008-4b68-8d84-f48391631cc3] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  6 02:13:35 np0005548731 nova_compute[232433]: 2025-12-06 07:13:35.095 232437 INFO nova.compute.manager [None req-08169bad-fe5e-4898-ab3b-9c0270e4f20c a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] [instance: ad9de790-b008-4b68-8d84-f48391631cc3] Took 9.52 seconds to spawn the instance on the hypervisor.#033[00m
Dec  6 02:13:35 np0005548731 nova_compute[232433]: 2025-12-06 07:13:35.096 232437 DEBUG nova.compute.manager [None req-08169bad-fe5e-4898-ab3b-9c0270e4f20c a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] [instance: ad9de790-b008-4b68-8d84-f48391631cc3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:13:35 np0005548731 nova_compute[232433]: 2025-12-06 07:13:35.185 232437 INFO nova.compute.manager [None req-08169bad-fe5e-4898-ab3b-9c0270e4f20c a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] [instance: ad9de790-b008-4b68-8d84-f48391631cc3] Took 10.80 seconds to build instance.#033[00m
Dec  6 02:13:35 np0005548731 nova_compute[232433]: 2025-12-06 07:13:35.217 232437 DEBUG oslo_concurrency.lockutils [None req-08169bad-fe5e-4898-ab3b-9c0270e4f20c a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] Lock "ad9de790-b008-4b68-8d84-f48391631cc3" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.933s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:13:35 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e245 e245: 3 total, 3 up, 3 in
Dec  6 02:13:36 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:13:36 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:13:36 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:13:36.308 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:13:36 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:13:36 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:13:36 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:13:36.584 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:13:36 np0005548731 nova_compute[232433]: 2025-12-06 07:13:36.761 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:13:37 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e245 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:13:37 np0005548731 nova_compute[232433]: 2025-12-06 07:13:37.112 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:13:37 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:13:37.111 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=23, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:ec:b3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '32:72:e7:89:e0:7d'}, ipsec=False) old=SB_Global(nb_cfg=22) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 02:13:37 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:13:37.113 143965 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec  6 02:13:37 np0005548731 nova_compute[232433]: 2025-12-06 07:13:37.229 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:13:37 np0005548731 nova_compute[232433]: 2025-12-06 07:13:37.380 232437 DEBUG oslo_concurrency.lockutils [None req-015d11f4-54f1-4dba-b1b5-193256c6934d a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] Acquiring lock "ad9de790-b008-4b68-8d84-f48391631cc3" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:13:37 np0005548731 nova_compute[232433]: 2025-12-06 07:13:37.380 232437 DEBUG oslo_concurrency.lockutils [None req-015d11f4-54f1-4dba-b1b5-193256c6934d a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] Lock "ad9de790-b008-4b68-8d84-f48391631cc3" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:13:37 np0005548731 nova_compute[232433]: 2025-12-06 07:13:37.381 232437 DEBUG oslo_concurrency.lockutils [None req-015d11f4-54f1-4dba-b1b5-193256c6934d a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] Acquiring lock "ad9de790-b008-4b68-8d84-f48391631cc3-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:13:37 np0005548731 nova_compute[232433]: 2025-12-06 07:13:37.381 232437 DEBUG oslo_concurrency.lockutils [None req-015d11f4-54f1-4dba-b1b5-193256c6934d a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] Lock "ad9de790-b008-4b68-8d84-f48391631cc3-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:13:37 np0005548731 nova_compute[232433]: 2025-12-06 07:13:37.381 232437 DEBUG oslo_concurrency.lockutils [None req-015d11f4-54f1-4dba-b1b5-193256c6934d a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] Lock "ad9de790-b008-4b68-8d84-f48391631cc3-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:13:37 np0005548731 nova_compute[232433]: 2025-12-06 07:13:37.382 232437 INFO nova.compute.manager [None req-015d11f4-54f1-4dba-b1b5-193256c6934d a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] [instance: ad9de790-b008-4b68-8d84-f48391631cc3] Terminating instance#033[00m
Dec  6 02:13:37 np0005548731 nova_compute[232433]: 2025-12-06 07:13:37.383 232437 DEBUG nova.compute.manager [None req-015d11f4-54f1-4dba-b1b5-193256c6934d a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] [instance: ad9de790-b008-4b68-8d84-f48391631cc3] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec  6 02:13:37 np0005548731 kernel: tap79516468-9d (unregistering): left promiscuous mode
Dec  6 02:13:37 np0005548731 NetworkManager[49182]: <info>  [1765005217.6382] device (tap79516468-9d): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec  6 02:13:37 np0005548731 ovn_controller[133927]: 2025-12-06T07:13:37Z|00193|binding|INFO|Releasing lport 79516468-9d88-48fe-801d-7d067458f7bd from this chassis (sb_readonly=0)
Dec  6 02:13:37 np0005548731 ovn_controller[133927]: 2025-12-06T07:13:37Z|00194|binding|INFO|Setting lport 79516468-9d88-48fe-801d-7d067458f7bd down in Southbound
Dec  6 02:13:37 np0005548731 ovn_controller[133927]: 2025-12-06T07:13:37Z|00195|binding|INFO|Removing iface tap79516468-9d ovn-installed in OVS
Dec  6 02:13:37 np0005548731 nova_compute[232433]: 2025-12-06 07:13:37.647 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:13:37 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:13:37.654 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:16:6b:0c 10.100.0.12'], port_security=['fa:16:3e:16:6b:0c 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'ad9de790-b008-4b68-8d84-f48391631cc3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-49680c77-2db5-4d0f-bd5b-08899440c38e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c297e84c3a9f48a9a82aebc9e5ade875', 'neutron:revision_number': '4', 'neutron:security_group_ids': '44ffcb05-d145-4d07-b800-cc9e3941da49', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e2602f87-ec0b-4d1c-8f8b-eee8bbcfddb2, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], logical_port=79516468-9d88-48fe-801d-7d067458f7bd) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 02:13:37 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:13:37.655 143965 INFO neutron.agent.ovn.metadata.agent [-] Port 79516468-9d88-48fe-801d-7d067458f7bd in datapath 49680c77-2db5-4d0f-bd5b-08899440c38e unbound from our chassis#033[00m
Dec  6 02:13:37 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:13:37.657 143965 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 49680c77-2db5-4d0f-bd5b-08899440c38e, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec  6 02:13:37 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:13:37.659 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[4d92508f-3b8f-4521-9d89-d4f4b638473b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:13:37 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:13:37.660 143965 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-49680c77-2db5-4d0f-bd5b-08899440c38e namespace which is not needed anymore#033[00m
Dec  6 02:13:37 np0005548731 nova_compute[232433]: 2025-12-06 07:13:37.677 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:13:37 np0005548731 systemd[1]: machine-qemu\x2d27\x2dinstance\x2d00000040.scope: Deactivated successfully.
Dec  6 02:13:37 np0005548731 systemd[1]: machine-qemu\x2d27\x2dinstance\x2d00000040.scope: Consumed 2.894s CPU time.
Dec  6 02:13:37 np0005548731 systemd-machined[195355]: Machine qemu-27-instance-00000040 terminated.
Dec  6 02:13:37 np0005548731 kernel: tap79516468-9d: entered promiscuous mode
Dec  6 02:13:37 np0005548731 kernel: tap79516468-9d (unregistering): left promiscuous mode
Dec  6 02:13:37 np0005548731 neutron-haproxy-ovnmeta-49680c77-2db5-4d0f-bd5b-08899440c38e[259464]: [NOTICE]   (259468) : haproxy version is 2.8.14-c23fe91
Dec  6 02:13:37 np0005548731 neutron-haproxy-ovnmeta-49680c77-2db5-4d0f-bd5b-08899440c38e[259464]: [NOTICE]   (259468) : path to executable is /usr/sbin/haproxy
Dec  6 02:13:37 np0005548731 neutron-haproxy-ovnmeta-49680c77-2db5-4d0f-bd5b-08899440c38e[259464]: [WARNING]  (259468) : Exiting Master process...
Dec  6 02:13:37 np0005548731 neutron-haproxy-ovnmeta-49680c77-2db5-4d0f-bd5b-08899440c38e[259464]: [ALERT]    (259468) : Current worker (259470) exited with code 143 (Terminated)
Dec  6 02:13:37 np0005548731 neutron-haproxy-ovnmeta-49680c77-2db5-4d0f-bd5b-08899440c38e[259464]: [WARNING]  (259468) : All workers exited. Exiting... (0)
Dec  6 02:13:37 np0005548731 systemd[1]: libpod-6bc730bf59ee5d2bb94e469da7d609da30fcebb83ab4a3ea2c826be8bdcc6113.scope: Deactivated successfully.
Dec  6 02:13:37 np0005548731 nova_compute[232433]: 2025-12-06 07:13:37.808 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:13:37 np0005548731 podman[259504]: 2025-12-06 07:13:37.813229507 +0000 UTC m=+0.048513882 container died 6bc730bf59ee5d2bb94e469da7d609da30fcebb83ab4a3ea2c826be8bdcc6113 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-49680c77-2db5-4d0f-bd5b-08899440c38e, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Dec  6 02:13:37 np0005548731 nova_compute[232433]: 2025-12-06 07:13:37.820 232437 INFO nova.virt.libvirt.driver [-] [instance: ad9de790-b008-4b68-8d84-f48391631cc3] Instance destroyed successfully.#033[00m
Dec  6 02:13:37 np0005548731 nova_compute[232433]: 2025-12-06 07:13:37.821 232437 DEBUG nova.objects.instance [None req-015d11f4-54f1-4dba-b1b5-193256c6934d a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] Lazy-loading 'resources' on Instance uuid ad9de790-b008-4b68-8d84-f48391631cc3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:13:37 np0005548731 nova_compute[232433]: 2025-12-06 07:13:37.856 232437 DEBUG nova.virt.libvirt.vif [None req-015d11f4-54f1-4dba-b1b5-193256c6934d a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T07:13:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ImagesOneServerNegativeTestJSON-server-264405700',display_name='tempest-ImagesOneServerNegativeTestJSON-server-264405700',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-imagesoneservernegativetestjson-server-264405700',id=64,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-06T07:13:35Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='c297e84c3a9f48a9a82aebc9e5ade875',ramdisk_id='',reservation_id='r-p81emoj8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ImagesOneServerNegativeTestJSON-324135674',owner_user_name='tempest-ImagesOneServerNegativeTestJSON-324135674-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T07:13:35Z,user_data=None,user_id='a1ed181a1103481fa4d0b29ce1009dca',uuid=ad9de790-b008-4b68-8d84-f48391631cc3,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "79516468-9d88-48fe-801d-7d067458f7bd", "address": "fa:16:3e:16:6b:0c", "network": {"id": "49680c77-2db5-4d0f-bd5b-08899440c38e", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-432299576-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c297e84c3a9f48a9a82aebc9e5ade875", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap79516468-9d", "ovs_interfaceid": "79516468-9d88-48fe-801d-7d067458f7bd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec  6 02:13:37 np0005548731 nova_compute[232433]: 2025-12-06 07:13:37.857 232437 DEBUG nova.network.os_vif_util [None req-015d11f4-54f1-4dba-b1b5-193256c6934d a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] Converting VIF {"id": "79516468-9d88-48fe-801d-7d067458f7bd", "address": "fa:16:3e:16:6b:0c", "network": {"id": "49680c77-2db5-4d0f-bd5b-08899440c38e", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-432299576-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c297e84c3a9f48a9a82aebc9e5ade875", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap79516468-9d", "ovs_interfaceid": "79516468-9d88-48fe-801d-7d067458f7bd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 02:13:37 np0005548731 nova_compute[232433]: 2025-12-06 07:13:37.857 232437 DEBUG nova.network.os_vif_util [None req-015d11f4-54f1-4dba-b1b5-193256c6934d a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:16:6b:0c,bridge_name='br-int',has_traffic_filtering=True,id=79516468-9d88-48fe-801d-7d067458f7bd,network=Network(49680c77-2db5-4d0f-bd5b-08899440c38e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap79516468-9d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 02:13:37 np0005548731 nova_compute[232433]: 2025-12-06 07:13:37.858 232437 DEBUG os_vif [None req-015d11f4-54f1-4dba-b1b5-193256c6934d a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:16:6b:0c,bridge_name='br-int',has_traffic_filtering=True,id=79516468-9d88-48fe-801d-7d067458f7bd,network=Network(49680c77-2db5-4d0f-bd5b-08899440c38e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap79516468-9d') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec  6 02:13:37 np0005548731 nova_compute[232433]: 2025-12-06 07:13:37.860 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:13:37 np0005548731 nova_compute[232433]: 2025-12-06 07:13:37.860 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap79516468-9d, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:13:37 np0005548731 nova_compute[232433]: 2025-12-06 07:13:37.863 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:13:37 np0005548731 nova_compute[232433]: 2025-12-06 07:13:37.866 232437 INFO os_vif [None req-015d11f4-54f1-4dba-b1b5-193256c6934d a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:16:6b:0c,bridge_name='br-int',has_traffic_filtering=True,id=79516468-9d88-48fe-801d-7d067458f7bd,network=Network(49680c77-2db5-4d0f-bd5b-08899440c38e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap79516468-9d')#033[00m
Dec  6 02:13:38 np0005548731 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-6bc730bf59ee5d2bb94e469da7d609da30fcebb83ab4a3ea2c826be8bdcc6113-userdata-shm.mount: Deactivated successfully.
Dec  6 02:13:38 np0005548731 systemd[1]: var-lib-containers-storage-overlay-ed31754b4c6244cd0a9f8a7ab0b35dc2d7554982dfaa6af6b2707d66d95254a2-merged.mount: Deactivated successfully.
Dec  6 02:13:38 np0005548731 podman[259504]: 2025-12-06 07:13:38.227328776 +0000 UTC m=+0.462613141 container cleanup 6bc730bf59ee5d2bb94e469da7d609da30fcebb83ab4a3ea2c826be8bdcc6113 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-49680c77-2db5-4d0f-bd5b-08899440c38e, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec  6 02:13:38 np0005548731 systemd[1]: libpod-conmon-6bc730bf59ee5d2bb94e469da7d609da30fcebb83ab4a3ea2c826be8bdcc6113.scope: Deactivated successfully.
Dec  6 02:13:38 np0005548731 podman[259562]: 2025-12-06 07:13:38.294358041 +0000 UTC m=+0.046415490 container remove 6bc730bf59ee5d2bb94e469da7d609da30fcebb83ab4a3ea2c826be8bdcc6113 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-49680c77-2db5-4d0f-bd5b-08899440c38e, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Dec  6 02:13:38 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:13:38.299 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[4c748370-f8e1-4d5f-840f-1546d747cf7e]: (4, ('Sat Dec  6 07:13:37 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-49680c77-2db5-4d0f-bd5b-08899440c38e (6bc730bf59ee5d2bb94e469da7d609da30fcebb83ab4a3ea2c826be8bdcc6113)\n6bc730bf59ee5d2bb94e469da7d609da30fcebb83ab4a3ea2c826be8bdcc6113\nSat Dec  6 07:13:38 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-49680c77-2db5-4d0f-bd5b-08899440c38e (6bc730bf59ee5d2bb94e469da7d609da30fcebb83ab4a3ea2c826be8bdcc6113)\n6bc730bf59ee5d2bb94e469da7d609da30fcebb83ab4a3ea2c826be8bdcc6113\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:13:38 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:13:38.301 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[e2d21795-047e-4018-b725-9ba8b4ab89e9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:13:38 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:13:38.301 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap49680c77-20, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:13:38 np0005548731 nova_compute[232433]: 2025-12-06 07:13:38.303 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:13:38 np0005548731 kernel: tap49680c77-20: left promiscuous mode
Dec  6 02:13:38 np0005548731 nova_compute[232433]: 2025-12-06 07:13:38.305 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:13:38 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:13:38.307 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[bae82e20-bb13-4d3c-a75a-1bfbd1af85c8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:13:38 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:13:38 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:13:38 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:13:38.309 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:13:38 np0005548731 nova_compute[232433]: 2025-12-06 07:13:38.320 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:13:38 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:13:38.325 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[de40e3a7-db4a-4e6e-bb9a-32948e59eec8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:13:38 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:13:38.325 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[5ddfe283-56ca-4cef-be03-1a9c0fbecc4c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:13:38 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:13:38.340 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[b0aaec1c-c02c-43df-9ec1-6123b8701c3f]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 547520, 'reachable_time': 42282, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 259577, 'error': None, 'target': 'ovnmeta-49680c77-2db5-4d0f-bd5b-08899440c38e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:13:38 np0005548731 systemd[1]: run-netns-ovnmeta\x2d49680c77\x2d2db5\x2d4d0f\x2dbd5b\x2d08899440c38e.mount: Deactivated successfully.
Dec  6 02:13:38 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:13:38.343 144079 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-49680c77-2db5-4d0f-bd5b-08899440c38e deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Dec  6 02:13:38 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:13:38.344 144079 DEBUG oslo.privsep.daemon [-] privsep: reply[043c1d75-90b0-4f78-92c9-87bd41cc61d8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:13:38 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:13:38 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:13:38 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:13:38.587 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:13:39 np0005548731 nova_compute[232433]: 2025-12-06 07:13:39.476 232437 DEBUG nova.compute.manager [req-10da706c-ae9a-4da6-935d-013c680f29fd req-e1267900-13ff-4f98-8368-b737fbcd8744 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: ad9de790-b008-4b68-8d84-f48391631cc3] Received event network-vif-unplugged-79516468-9d88-48fe-801d-7d067458f7bd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:13:39 np0005548731 nova_compute[232433]: 2025-12-06 07:13:39.476 232437 DEBUG oslo_concurrency.lockutils [req-10da706c-ae9a-4da6-935d-013c680f29fd req-e1267900-13ff-4f98-8368-b737fbcd8744 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "ad9de790-b008-4b68-8d84-f48391631cc3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:13:39 np0005548731 nova_compute[232433]: 2025-12-06 07:13:39.477 232437 DEBUG oslo_concurrency.lockutils [req-10da706c-ae9a-4da6-935d-013c680f29fd req-e1267900-13ff-4f98-8368-b737fbcd8744 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "ad9de790-b008-4b68-8d84-f48391631cc3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:13:39 np0005548731 nova_compute[232433]: 2025-12-06 07:13:39.477 232437 DEBUG oslo_concurrency.lockutils [req-10da706c-ae9a-4da6-935d-013c680f29fd req-e1267900-13ff-4f98-8368-b737fbcd8744 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "ad9de790-b008-4b68-8d84-f48391631cc3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:13:39 np0005548731 nova_compute[232433]: 2025-12-06 07:13:39.477 232437 DEBUG nova.compute.manager [req-10da706c-ae9a-4da6-935d-013c680f29fd req-e1267900-13ff-4f98-8368-b737fbcd8744 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: ad9de790-b008-4b68-8d84-f48391631cc3] No waiting events found dispatching network-vif-unplugged-79516468-9d88-48fe-801d-7d067458f7bd pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 02:13:39 np0005548731 nova_compute[232433]: 2025-12-06 07:13:39.478 232437 DEBUG nova.compute.manager [req-10da706c-ae9a-4da6-935d-013c680f29fd req-e1267900-13ff-4f98-8368-b737fbcd8744 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: ad9de790-b008-4b68-8d84-f48391631cc3] Received event network-vif-unplugged-79516468-9d88-48fe-801d-7d067458f7bd for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Dec  6 02:13:39 np0005548731 nova_compute[232433]: 2025-12-06 07:13:39.478 232437 DEBUG nova.compute.manager [req-10da706c-ae9a-4da6-935d-013c680f29fd req-e1267900-13ff-4f98-8368-b737fbcd8744 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: ad9de790-b008-4b68-8d84-f48391631cc3] Received event network-vif-plugged-79516468-9d88-48fe-801d-7d067458f7bd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:13:39 np0005548731 nova_compute[232433]: 2025-12-06 07:13:39.478 232437 DEBUG oslo_concurrency.lockutils [req-10da706c-ae9a-4da6-935d-013c680f29fd req-e1267900-13ff-4f98-8368-b737fbcd8744 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "ad9de790-b008-4b68-8d84-f48391631cc3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:13:39 np0005548731 nova_compute[232433]: 2025-12-06 07:13:39.478 232437 DEBUG oslo_concurrency.lockutils [req-10da706c-ae9a-4da6-935d-013c680f29fd req-e1267900-13ff-4f98-8368-b737fbcd8744 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "ad9de790-b008-4b68-8d84-f48391631cc3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:13:39 np0005548731 nova_compute[232433]: 2025-12-06 07:13:39.479 232437 DEBUG oslo_concurrency.lockutils [req-10da706c-ae9a-4da6-935d-013c680f29fd req-e1267900-13ff-4f98-8368-b737fbcd8744 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "ad9de790-b008-4b68-8d84-f48391631cc3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:13:39 np0005548731 nova_compute[232433]: 2025-12-06 07:13:39.479 232437 DEBUG nova.compute.manager [req-10da706c-ae9a-4da6-935d-013c680f29fd req-e1267900-13ff-4f98-8368-b737fbcd8744 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: ad9de790-b008-4b68-8d84-f48391631cc3] No waiting events found dispatching network-vif-plugged-79516468-9d88-48fe-801d-7d067458f7bd pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 02:13:39 np0005548731 nova_compute[232433]: 2025-12-06 07:13:39.481 232437 WARNING nova.compute.manager [req-10da706c-ae9a-4da6-935d-013c680f29fd req-e1267900-13ff-4f98-8368-b737fbcd8744 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: ad9de790-b008-4b68-8d84-f48391631cc3] Received unexpected event network-vif-plugged-79516468-9d88-48fe-801d-7d067458f7bd for instance with vm_state active and task_state deleting.#033[00m
Dec  6 02:13:40 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:13:40 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:13:40 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:13:40.312 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:13:40 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:13:40 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:13:40 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:13:40.590 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:13:41 np0005548731 nova_compute[232433]: 2025-12-06 07:13:41.764 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:13:42 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e245 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:13:42 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:13:42 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:13:42 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:13:42.314 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:13:42 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:13:42 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:13:42 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:13:42.593 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:13:42 np0005548731 nova_compute[232433]: 2025-12-06 07:13:42.863 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:13:44 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:13:44 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:13:44 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:13:44.316 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:13:44 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:13:44 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:13:44 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:13:44.596 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:13:45 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:13:45.116 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=9f96b960-b4f2-40bd-ae99-08121f5e8b78, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '23'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:13:46 np0005548731 nova_compute[232433]: 2025-12-06 07:13:46.125 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:13:46 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:13:46 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:13:46 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:13:46.318 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:13:46 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:13:46 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:13:46 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:13:46.599 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:13:46 np0005548731 nova_compute[232433]: 2025-12-06 07:13:46.765 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:13:47 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e245 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:13:47 np0005548731 nova_compute[232433]: 2025-12-06 07:13:47.864 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:13:48 np0005548731 nova_compute[232433]: 2025-12-06 07:13:48.187 232437 INFO nova.virt.libvirt.driver [None req-015d11f4-54f1-4dba-b1b5-193256c6934d a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] [instance: ad9de790-b008-4b68-8d84-f48391631cc3] Deleting instance files /var/lib/nova/instances/ad9de790-b008-4b68-8d84-f48391631cc3_del#033[00m
Dec  6 02:13:48 np0005548731 nova_compute[232433]: 2025-12-06 07:13:48.188 232437 INFO nova.virt.libvirt.driver [None req-015d11f4-54f1-4dba-b1b5-193256c6934d a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] [instance: ad9de790-b008-4b68-8d84-f48391631cc3] Deletion of /var/lib/nova/instances/ad9de790-b008-4b68-8d84-f48391631cc3_del complete#033[00m
Dec  6 02:13:48 np0005548731 nova_compute[232433]: 2025-12-06 07:13:48.297 232437 INFO nova.compute.manager [None req-015d11f4-54f1-4dba-b1b5-193256c6934d a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] [instance: ad9de790-b008-4b68-8d84-f48391631cc3] Took 10.91 seconds to destroy the instance on the hypervisor.#033[00m
Dec  6 02:13:48 np0005548731 nova_compute[232433]: 2025-12-06 07:13:48.297 232437 DEBUG oslo.service.loopingcall [None req-015d11f4-54f1-4dba-b1b5-193256c6934d a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Dec  6 02:13:48 np0005548731 nova_compute[232433]: 2025-12-06 07:13:48.298 232437 DEBUG nova.compute.manager [-] [instance: ad9de790-b008-4b68-8d84-f48391631cc3] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Dec  6 02:13:48 np0005548731 nova_compute[232433]: 2025-12-06 07:13:48.298 232437 DEBUG nova.network.neutron [-] [instance: ad9de790-b008-4b68-8d84-f48391631cc3] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Dec  6 02:13:48 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:13:48 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:13:48 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:13:48.320 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:13:48 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:13:48 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:13:48 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:13:48.602 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:13:48 np0005548731 podman[259634]: 2025-12-06 07:13:48.901466567 +0000 UTC m=+0.052937460 container health_status 26c2ce9bdb83c6db5ccad7f5b2a7cb4e9354ab0235ad2f942ea42c8feda2142b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Dec  6 02:13:48 np0005548731 podman[259636]: 2025-12-06 07:13:48.907956927 +0000 UTC m=+0.054596111 container health_status ae4fd08cdec31d6ae2a6b91ffff7deb6ca5a8375cb52ee4b685cc6f25796824e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec  6 02:13:48 np0005548731 podman[259635]: 2025-12-06 07:13:48.926015499 +0000 UTC m=+0.075940364 container health_status a118c3becf56cfbcae208065771ebf10a65162bb7b96389971293e534823fb78 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125)
Dec  6 02:13:48 np0005548731 nova_compute[232433]: 2025-12-06 07:13:48.966 232437 DEBUG nova.network.neutron [-] [instance: ad9de790-b008-4b68-8d84-f48391631cc3] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 02:13:48 np0005548731 nova_compute[232433]: 2025-12-06 07:13:48.983 232437 INFO nova.compute.manager [-] [instance: ad9de790-b008-4b68-8d84-f48391631cc3] Took 0.69 seconds to deallocate network for instance.#033[00m
Dec  6 02:13:49 np0005548731 nova_compute[232433]: 2025-12-06 07:13:49.027 232437 DEBUG oslo_concurrency.lockutils [None req-015d11f4-54f1-4dba-b1b5-193256c6934d a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:13:49 np0005548731 nova_compute[232433]: 2025-12-06 07:13:49.028 232437 DEBUG oslo_concurrency.lockutils [None req-015d11f4-54f1-4dba-b1b5-193256c6934d a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:13:49 np0005548731 nova_compute[232433]: 2025-12-06 07:13:49.051 232437 DEBUG nova.compute.manager [req-daa679f7-7d3a-4a3e-bdef-48cd240f1ed7 req-efd49798-4ecc-470f-8c5b-d45ff0a63165 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: ad9de790-b008-4b68-8d84-f48391631cc3] Received event network-vif-deleted-79516468-9d88-48fe-801d-7d067458f7bd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:13:49 np0005548731 nova_compute[232433]: 2025-12-06 07:13:49.098 232437 DEBUG oslo_concurrency.processutils [None req-015d11f4-54f1-4dba-b1b5-193256c6934d a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:13:49 np0005548731 nova_compute[232433]: 2025-12-06 07:13:49.122 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:13:49 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 02:13:49 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1022856760' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 02:13:49 np0005548731 nova_compute[232433]: 2025-12-06 07:13:49.518 232437 DEBUG oslo_concurrency.processutils [None req-015d11f4-54f1-4dba-b1b5-193256c6934d a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.420s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:13:49 np0005548731 nova_compute[232433]: 2025-12-06 07:13:49.524 232437 DEBUG nova.compute.provider_tree [None req-015d11f4-54f1-4dba-b1b5-193256c6934d a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] Inventory has not changed in ProviderTree for provider: 6d00757a-082f-486d-ae84-869a2ba2e6e7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  6 02:13:49 np0005548731 nova_compute[232433]: 2025-12-06 07:13:49.545 232437 DEBUG nova.scheduler.client.report [None req-015d11f4-54f1-4dba-b1b5-193256c6934d a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] Inventory has not changed for provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  6 02:13:49 np0005548731 nova_compute[232433]: 2025-12-06 07:13:49.569 232437 DEBUG oslo_concurrency.lockutils [None req-015d11f4-54f1-4dba-b1b5-193256c6934d a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.541s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:13:49 np0005548731 nova_compute[232433]: 2025-12-06 07:13:49.596 232437 INFO nova.scheduler.client.report [None req-015d11f4-54f1-4dba-b1b5-193256c6934d a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] Deleted allocations for instance ad9de790-b008-4b68-8d84-f48391631cc3#033[00m
Dec  6 02:13:49 np0005548731 nova_compute[232433]: 2025-12-06 07:13:49.650 232437 DEBUG oslo_concurrency.lockutils [None req-015d11f4-54f1-4dba-b1b5-193256c6934d a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] Lock "ad9de790-b008-4b68-8d84-f48391631cc3" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 12.270s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:13:50 np0005548731 nova_compute[232433]: 2025-12-06 07:13:50.106 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:13:50 np0005548731 nova_compute[232433]: 2025-12-06 07:13:50.107 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  6 02:13:50 np0005548731 nova_compute[232433]: 2025-12-06 07:13:50.107 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  6 02:13:50 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:13:50 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:13:50 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:13:50.322 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:13:50 np0005548731 nova_compute[232433]: 2025-12-06 07:13:50.358 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Acquiring lock "refresh_cache-db2b4e57-20af-415b-ad7b-ac5b7197acb6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 02:13:50 np0005548731 nova_compute[232433]: 2025-12-06 07:13:50.359 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Acquired lock "refresh_cache-db2b4e57-20af-415b-ad7b-ac5b7197acb6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 02:13:50 np0005548731 nova_compute[232433]: 2025-12-06 07:13:50.359 232437 DEBUG nova.network.neutron [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] [instance: db2b4e57-20af-415b-ad7b-ac5b7197acb6] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Dec  6 02:13:50 np0005548731 nova_compute[232433]: 2025-12-06 07:13:50.359 232437 DEBUG nova.objects.instance [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lazy-loading 'info_cache' on Instance uuid db2b4e57-20af-415b-ad7b-ac5b7197acb6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:13:50 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:13:50 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:13:50 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:13:50.604 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:13:51 np0005548731 ceph-mon[77458]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #64. Immutable memtables: 0.
Dec  6 02:13:51 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:13:51.524423) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec  6 02:13:51 np0005548731 ceph-mon[77458]: rocksdb: [db/flush_job.cc:856] [default] [JOB 37] Flushing memtable with next log file: 64
Dec  6 02:13:51 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765005231524493, "job": 37, "event": "flush_started", "num_memtables": 1, "num_entries": 1780, "num_deletes": 523, "total_data_size": 2955668, "memory_usage": 2994152, "flush_reason": "Manual Compaction"}
Dec  6 02:13:51 np0005548731 ceph-mon[77458]: rocksdb: [db/flush_job.cc:885] [default] [JOB 37] Level-0 flush table #65: started
Dec  6 02:13:51 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765005231548432, "cf_name": "default", "job": 37, "event": "table_file_creation", "file_number": 65, "file_size": 1350608, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 33984, "largest_seqno": 35758, "table_properties": {"data_size": 1344222, "index_size": 2885, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2437, "raw_key_size": 19524, "raw_average_key_size": 20, "raw_value_size": 1328323, "raw_average_value_size": 1390, "num_data_blocks": 126, "num_entries": 955, "num_filter_entries": 955, "num_deletions": 523, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765005122, "oldest_key_time": 1765005122, "file_creation_time": 1765005231, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bc01f9a1-fda8-4008-aee1-1fa5ff4371a1", "db_session_id": "CWBL8WMDM4D79BU86E9T", "orig_file_number": 65, "seqno_to_time_mapping": "N/A"}}
Dec  6 02:13:51 np0005548731 ceph-mon[77458]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 37] Flush lasted 24075 microseconds, and 5340 cpu microseconds.
Dec  6 02:13:51 np0005548731 ceph-mon[77458]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  6 02:13:51 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:13:51.548500) [db/flush_job.cc:967] [default] [JOB 37] Level-0 flush table #65: 1350608 bytes OK
Dec  6 02:13:51 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:13:51.548575) [db/memtable_list.cc:519] [default] Level-0 commit table #65 started
Dec  6 02:13:51 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:13:51.550164) [db/memtable_list.cc:722] [default] Level-0 commit table #65: memtable #1 done
Dec  6 02:13:51 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:13:51.550206) EVENT_LOG_v1 {"time_micros": 1765005231550199, "job": 37, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec  6 02:13:51 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:13:51.550224) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec  6 02:13:51 np0005548731 ceph-mon[77458]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 37] Try to delete WAL files size 2946345, prev total WAL file size 2946345, number of live WAL files 2.
Dec  6 02:13:51 np0005548731 ceph-mon[77458]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000061.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  6 02:13:51 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:13:51.551330) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740031303130' seq:72057594037927935, type:22 .. '6D6772737461740031323732' seq:0, type:0; will stop at (end)
Dec  6 02:13:51 np0005548731 ceph-mon[77458]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 38] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec  6 02:13:51 np0005548731 ceph-mon[77458]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 37 Base level 0, inputs: [65(1318KB)], [63(11MB)]
Dec  6 02:13:51 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765005231551400, "job": 38, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [65], "files_L6": [63], "score": -1, "input_data_size": 13106845, "oldest_snapshot_seqno": -1}
Dec  6 02:13:51 np0005548731 nova_compute[232433]: 2025-12-06 07:13:51.766 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:13:51 np0005548731 ceph-mon[77458]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 38] Generated table #66: 6443 keys, 9571133 bytes, temperature: kUnknown
Dec  6 02:13:51 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765005231798444, "cf_name": "default", "job": 38, "event": "table_file_creation", "file_number": 66, "file_size": 9571133, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9528182, "index_size": 25753, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16133, "raw_key_size": 166240, "raw_average_key_size": 25, "raw_value_size": 9412563, "raw_average_value_size": 1460, "num_data_blocks": 1029, "num_entries": 6443, "num_filter_entries": 6443, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765002564, "oldest_key_time": 0, "file_creation_time": 1765005231, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bc01f9a1-fda8-4008-aee1-1fa5ff4371a1", "db_session_id": "CWBL8WMDM4D79BU86E9T", "orig_file_number": 66, "seqno_to_time_mapping": "N/A"}}
Dec  6 02:13:51 np0005548731 ceph-mon[77458]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  6 02:13:51 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:13:51.798758) [db/compaction/compaction_job.cc:1663] [default] [JOB 38] Compacted 1@0 + 1@6 files to L6 => 9571133 bytes
Dec  6 02:13:51 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:13:51.805501) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 53.0 rd, 38.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.3, 11.2 +0.0 blob) out(9.1 +0.0 blob), read-write-amplify(16.8) write-amplify(7.1) OK, records in: 7472, records dropped: 1029 output_compression: NoCompression
Dec  6 02:13:51 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:13:51.805565) EVENT_LOG_v1 {"time_micros": 1765005231805527, "job": 38, "event": "compaction_finished", "compaction_time_micros": 247182, "compaction_time_cpu_micros": 26009, "output_level": 6, "num_output_files": 1, "total_output_size": 9571133, "num_input_records": 7472, "num_output_records": 6443, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec  6 02:13:51 np0005548731 ceph-mon[77458]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000065.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  6 02:13:51 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765005231805935, "job": 38, "event": "table_file_deletion", "file_number": 65}
Dec  6 02:13:51 np0005548731 ceph-mon[77458]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000063.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  6 02:13:51 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765005231808147, "job": 38, "event": "table_file_deletion", "file_number": 63}
Dec  6 02:13:51 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:13:51.551225) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 02:13:51 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:13:51.808280) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 02:13:51 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:13:51.808286) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 02:13:51 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:13:51.808288) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 02:13:51 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:13:51.808289) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 02:13:51 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:13:51.808291) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 02:13:52 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e245 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:13:52 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:13:52 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:13:52 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:13:52.324 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:13:52 np0005548731 nova_compute[232433]: 2025-12-06 07:13:52.362 232437 DEBUG nova.network.neutron [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] [instance: db2b4e57-20af-415b-ad7b-ac5b7197acb6] Updating instance_info_cache with network_info: [{"id": "1af5cec6-b2ca-49b8-973c-78801de95fd1", "address": "fa:16:3e:76:88:06", "network": {"id": "2b0835d7-87e4-46cc-8a94-e4e042bd4bad", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1132836552-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "af7365adc05f4624a08a71cd5a77ada6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1af5cec6-b2", "ovs_interfaceid": "1af5cec6-b2ca-49b8-973c-78801de95fd1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 02:13:52 np0005548731 nova_compute[232433]: 2025-12-06 07:13:52.435 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Releasing lock "refresh_cache-db2b4e57-20af-415b-ad7b-ac5b7197acb6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 02:13:52 np0005548731 nova_compute[232433]: 2025-12-06 07:13:52.435 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] [instance: db2b4e57-20af-415b-ad7b-ac5b7197acb6] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Dec  6 02:13:52 np0005548731 nova_compute[232433]: 2025-12-06 07:13:52.436 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:13:52 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:13:52 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:13:52 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:13:52.606 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:13:52 np0005548731 nova_compute[232433]: 2025-12-06 07:13:52.819 232437 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765005217.8189619, ad9de790-b008-4b68-8d84-f48391631cc3 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 02:13:52 np0005548731 nova_compute[232433]: 2025-12-06 07:13:52.820 232437 INFO nova.compute.manager [-] [instance: ad9de790-b008-4b68-8d84-f48391631cc3] VM Stopped (Lifecycle Event)#033[00m
Dec  6 02:13:52 np0005548731 nova_compute[232433]: 2025-12-06 07:13:52.847 232437 DEBUG nova.compute.manager [None req-3c47d199-aeed-4bfd-94a9-da66264f4ac0 - - - - - -] [instance: ad9de790-b008-4b68-8d84-f48391631cc3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:13:52 np0005548731 nova_compute[232433]: 2025-12-06 07:13:52.867 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:13:53 np0005548731 nova_compute[232433]: 2025-12-06 07:13:53.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:13:53 np0005548731 nova_compute[232433]: 2025-12-06 07:13:53.104 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  6 02:13:54 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:13:54 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:13:54 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:13:54.327 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:13:54 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:13:54 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:13:54 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:13:54.609 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:13:55 np0005548731 nova_compute[232433]: 2025-12-06 07:13:55.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:13:56 np0005548731 nova_compute[232433]: 2025-12-06 07:13:56.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:13:56 np0005548731 nova_compute[232433]: 2025-12-06 07:13:56.105 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:13:56 np0005548731 nova_compute[232433]: 2025-12-06 07:13:56.129 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:13:56 np0005548731 nova_compute[232433]: 2025-12-06 07:13:56.130 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:13:56 np0005548731 nova_compute[232433]: 2025-12-06 07:13:56.130 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:13:56 np0005548731 nova_compute[232433]: 2025-12-06 07:13:56.130 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  6 02:13:56 np0005548731 nova_compute[232433]: 2025-12-06 07:13:56.131 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:13:56 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:13:56 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:13:56 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:13:56.328 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:13:56 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 02:13:56 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2607273512' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 02:13:56 np0005548731 nova_compute[232433]: 2025-12-06 07:13:56.589 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:13:56 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:13:56 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:13:56 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:13:56.611 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:13:56 np0005548731 nova_compute[232433]: 2025-12-06 07:13:56.658 232437 DEBUG nova.virt.libvirt.driver [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] skipping disk for instance-0000003d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  6 02:13:56 np0005548731 nova_compute[232433]: 2025-12-06 07:13:56.658 232437 DEBUG nova.virt.libvirt.driver [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] skipping disk for instance-0000003d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  6 02:13:56 np0005548731 nova_compute[232433]: 2025-12-06 07:13:56.769 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:13:56 np0005548731 nova_compute[232433]: 2025-12-06 07:13:56.815 232437 WARNING nova.virt.libvirt.driver [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  6 02:13:56 np0005548731 nova_compute[232433]: 2025-12-06 07:13:56.817 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4450MB free_disk=20.930160522460938GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  6 02:13:56 np0005548731 nova_compute[232433]: 2025-12-06 07:13:56.817 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:13:56 np0005548731 nova_compute[232433]: 2025-12-06 07:13:56.817 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:13:56 np0005548731 nova_compute[232433]: 2025-12-06 07:13:56.880 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Instance db2b4e57-20af-415b-ad7b-ac5b7197acb6 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  6 02:13:56 np0005548731 nova_compute[232433]: 2025-12-06 07:13:56.881 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  6 02:13:56 np0005548731 nova_compute[232433]: 2025-12-06 07:13:56.881 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  6 02:13:56 np0005548731 nova_compute[232433]: 2025-12-06 07:13:56.896 232437 DEBUG nova.scheduler.client.report [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Refreshing inventories for resource provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Dec  6 02:13:56 np0005548731 nova_compute[232433]: 2025-12-06 07:13:56.912 232437 DEBUG nova.scheduler.client.report [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Updating ProviderTree inventory for provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Dec  6 02:13:56 np0005548731 nova_compute[232433]: 2025-12-06 07:13:56.913 232437 DEBUG nova.compute.provider_tree [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Updating inventory in ProviderTree for provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Dec  6 02:13:56 np0005548731 nova_compute[232433]: 2025-12-06 07:13:56.926 232437 DEBUG nova.scheduler.client.report [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Refreshing aggregate associations for resource provider 6d00757a-082f-486d-ae84-869a2ba2e6e7, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Dec  6 02:13:56 np0005548731 nova_compute[232433]: 2025-12-06 07:13:56.946 232437 DEBUG nova.scheduler.client.report [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Refreshing trait associations for resource provider 6d00757a-082f-486d-ae84-869a2ba2e6e7, traits: COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_STORAGE_BUS_USB,COMPUTE_ACCELERATORS,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_TRUSTED_CERTS,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_SSE,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_SSE2,HW_CPU_X86_SSE41,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_STORAGE_BUS_SATA,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_SECURITY_TPM_1_2,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_VOLUME_EXTEND,COMPUTE_NODE,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_SECURITY_TPM_2_0,COMPUTE_VIOMMU_MODEL_AUTO,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_SSE42,COMPUTE_STORAGE_BUS_FDC,COMPUTE_RESCUE_BFV,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_IMAGE_TYPE_ARI _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Dec  6 02:13:56 np0005548731 nova_compute[232433]: 2025-12-06 07:13:56.986 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:13:57 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  6 02:13:57 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 02:13:57 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  6 02:13:57 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e245 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:13:57 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 02:13:57 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/567437665' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 02:13:57 np0005548731 nova_compute[232433]: 2025-12-06 07:13:57.452 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.466s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:13:57 np0005548731 nova_compute[232433]: 2025-12-06 07:13:57.458 232437 DEBUG nova.compute.provider_tree [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Inventory has not changed in ProviderTree for provider: 6d00757a-082f-486d-ae84-869a2ba2e6e7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  6 02:13:57 np0005548731 nova_compute[232433]: 2025-12-06 07:13:57.475 232437 DEBUG nova.scheduler.client.report [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Inventory has not changed for provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  6 02:13:57 np0005548731 nova_compute[232433]: 2025-12-06 07:13:57.499 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  6 02:13:57 np0005548731 nova_compute[232433]: 2025-12-06 07:13:57.500 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.682s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:13:57 np0005548731 nova_compute[232433]: 2025-12-06 07:13:57.868 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:13:58 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e246 e246: 3 total, 3 up, 3 in
Dec  6 02:13:58 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:13:58 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:13:58 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:13:58.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:13:58 np0005548731 ovn_controller[133927]: 2025-12-06T07:13:58Z|00196|binding|INFO|Releasing lport 87f2c5b0-3684-4269-9fbf-5a4dfd5a8759 from this chassis (sb_readonly=0)
Dec  6 02:13:58 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:13:58 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:13:58 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:13:58.613 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:13:58 np0005548731 nova_compute[232433]: 2025-12-06 07:13:58.640 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:13:59 np0005548731 nova_compute[232433]: 2025-12-06 07:13:59.501 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:14:00 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:14:00 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:14:00 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:14:00.332 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:14:00 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:14:00 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:14:00 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:14:00.617 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:14:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:14:00.856 143965 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:14:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:14:00.857 143965 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:14:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:14:00.857 143965 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:14:01 np0005548731 nova_compute[232433]: 2025-12-06 07:14:01.440 232437 DEBUG oslo_concurrency.lockutils [None req-8d57d8ae-f06f-4c83-8e21-499c50a608db bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Acquiring lock "db2b4e57-20af-415b-ad7b-ac5b7197acb6" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:14:01 np0005548731 nova_compute[232433]: 2025-12-06 07:14:01.440 232437 DEBUG oslo_concurrency.lockutils [None req-8d57d8ae-f06f-4c83-8e21-499c50a608db bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Lock "db2b4e57-20af-415b-ad7b-ac5b7197acb6" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:14:01 np0005548731 nova_compute[232433]: 2025-12-06 07:14:01.441 232437 DEBUG oslo_concurrency.lockutils [None req-8d57d8ae-f06f-4c83-8e21-499c50a608db bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Acquiring lock "db2b4e57-20af-415b-ad7b-ac5b7197acb6-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:14:01 np0005548731 nova_compute[232433]: 2025-12-06 07:14:01.441 232437 DEBUG oslo_concurrency.lockutils [None req-8d57d8ae-f06f-4c83-8e21-499c50a608db bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Lock "db2b4e57-20af-415b-ad7b-ac5b7197acb6-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:14:01 np0005548731 nova_compute[232433]: 2025-12-06 07:14:01.441 232437 DEBUG oslo_concurrency.lockutils [None req-8d57d8ae-f06f-4c83-8e21-499c50a608db bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Lock "db2b4e57-20af-415b-ad7b-ac5b7197acb6-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:14:01 np0005548731 nova_compute[232433]: 2025-12-06 07:14:01.442 232437 INFO nova.compute.manager [None req-8d57d8ae-f06f-4c83-8e21-499c50a608db bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] [instance: db2b4e57-20af-415b-ad7b-ac5b7197acb6] Terminating instance#033[00m
Dec  6 02:14:01 np0005548731 nova_compute[232433]: 2025-12-06 07:14:01.443 232437 DEBUG nova.compute.manager [None req-8d57d8ae-f06f-4c83-8e21-499c50a608db bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] [instance: db2b4e57-20af-415b-ad7b-ac5b7197acb6] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec  6 02:14:01 np0005548731 kernel: tap1af5cec6-b2 (unregistering): left promiscuous mode
Dec  6 02:14:01 np0005548731 NetworkManager[49182]: <info>  [1765005241.7521] device (tap1af5cec6-b2): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec  6 02:14:01 np0005548731 ovn_controller[133927]: 2025-12-06T07:14:01Z|00197|binding|INFO|Releasing lport 1af5cec6-b2ca-49b8-973c-78801de95fd1 from this chassis (sb_readonly=0)
Dec  6 02:14:01 np0005548731 ovn_controller[133927]: 2025-12-06T07:14:01Z|00198|binding|INFO|Setting lport 1af5cec6-b2ca-49b8-973c-78801de95fd1 down in Southbound
Dec  6 02:14:01 np0005548731 nova_compute[232433]: 2025-12-06 07:14:01.763 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:14:01 np0005548731 ovn_controller[133927]: 2025-12-06T07:14:01Z|00199|binding|INFO|Removing iface tap1af5cec6-b2 ovn-installed in OVS
Dec  6 02:14:01 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:14:01.773 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:76:88:06 10.100.0.8'], port_security=['fa:16:3e:76:88:06 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'db2b4e57-20af-415b-ad7b-ac5b7197acb6', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2b0835d7-87e4-46cc-8a94-e4e042bd4bad', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'af7365adc05f4624a08a71cd5a77ada6', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'b536f2c5-b22f-47bf-a47f-57e098f673a0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a7e40662-9f9d-450b-8c39-94d50ba422c6, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], logical_port=1af5cec6-b2ca-49b8-973c-78801de95fd1) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 02:14:01 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:14:01.774 143965 INFO neutron.agent.ovn.metadata.agent [-] Port 1af5cec6-b2ca-49b8-973c-78801de95fd1 in datapath 2b0835d7-87e4-46cc-8a94-e4e042bd4bad unbound from our chassis#033[00m
Dec  6 02:14:01 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:14:01.775 143965 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 2b0835d7-87e4-46cc-8a94-e4e042bd4bad, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec  6 02:14:01 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:14:01.777 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[b37c3698-9b35-4f21-8dfc-df0b7451644f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:14:01 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:14:01.778 143965 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-2b0835d7-87e4-46cc-8a94-e4e042bd4bad namespace which is not needed anymore#033[00m
Dec  6 02:14:01 np0005548731 nova_compute[232433]: 2025-12-06 07:14:01.800 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:14:01 np0005548731 nova_compute[232433]: 2025-12-06 07:14:01.802 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:14:01 np0005548731 systemd[1]: machine-qemu\x2d26\x2dinstance\x2d0000003d.scope: Deactivated successfully.
Dec  6 02:14:01 np0005548731 systemd[1]: machine-qemu\x2d26\x2dinstance\x2d0000003d.scope: Consumed 15.083s CPU time.
Dec  6 02:14:01 np0005548731 systemd-machined[195355]: Machine qemu-26-instance-0000003d terminated.
Dec  6 02:14:01 np0005548731 nova_compute[232433]: 2025-12-06 07:14:01.876 232437 INFO nova.virt.libvirt.driver [-] [instance: db2b4e57-20af-415b-ad7b-ac5b7197acb6] Instance destroyed successfully.#033[00m
Dec  6 02:14:01 np0005548731 nova_compute[232433]: 2025-12-06 07:14:01.877 232437 DEBUG nova.objects.instance [None req-8d57d8ae-f06f-4c83-8e21-499c50a608db bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Lazy-loading 'resources' on Instance uuid db2b4e57-20af-415b-ad7b-ac5b7197acb6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:14:01 np0005548731 nova_compute[232433]: 2025-12-06 07:14:01.893 232437 DEBUG nova.virt.libvirt.vif [None req-8d57d8ae-f06f-4c83-8e21-499c50a608db bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T07:13:06Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-356620768',display_name='tempest-ImagesTestJSON-server-356620768',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-imagestestjson-server-356620768',id=61,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-06T07:13:19Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='af7365adc05f4624a08a71cd5a77ada6',ramdisk_id='',reservation_id='r-ko38uyil',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ImagesTestJSON-134159412',owner_user_name='tempest-ImagesTestJSON-134159412-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T07:13:30Z,user_data=None,user_id='bdd7994b0ebb4035a373b6560aa7dbcf',uuid=db2b4e57-20af-415b-ad7b-ac5b7197acb6,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "1af5cec6-b2ca-49b8-973c-78801de95fd1", "address": "fa:16:3e:76:88:06", "network": {"id": "2b0835d7-87e4-46cc-8a94-e4e042bd4bad", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1132836552-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "af7365adc05f4624a08a71cd5a77ada6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1af5cec6-b2", "ovs_interfaceid": "1af5cec6-b2ca-49b8-973c-78801de95fd1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec  6 02:14:01 np0005548731 nova_compute[232433]: 2025-12-06 07:14:01.894 232437 DEBUG nova.network.os_vif_util [None req-8d57d8ae-f06f-4c83-8e21-499c50a608db bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Converting VIF {"id": "1af5cec6-b2ca-49b8-973c-78801de95fd1", "address": "fa:16:3e:76:88:06", "network": {"id": "2b0835d7-87e4-46cc-8a94-e4e042bd4bad", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1132836552-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "af7365adc05f4624a08a71cd5a77ada6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1af5cec6-b2", "ovs_interfaceid": "1af5cec6-b2ca-49b8-973c-78801de95fd1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 02:14:01 np0005548731 nova_compute[232433]: 2025-12-06 07:14:01.895 232437 DEBUG nova.network.os_vif_util [None req-8d57d8ae-f06f-4c83-8e21-499c50a608db bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:76:88:06,bridge_name='br-int',has_traffic_filtering=True,id=1af5cec6-b2ca-49b8-973c-78801de95fd1,network=Network(2b0835d7-87e4-46cc-8a94-e4e042bd4bad),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1af5cec6-b2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 02:14:01 np0005548731 nova_compute[232433]: 2025-12-06 07:14:01.895 232437 DEBUG os_vif [None req-8d57d8ae-f06f-4c83-8e21-499c50a608db bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:76:88:06,bridge_name='br-int',has_traffic_filtering=True,id=1af5cec6-b2ca-49b8-973c-78801de95fd1,network=Network(2b0835d7-87e4-46cc-8a94-e4e042bd4bad),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1af5cec6-b2') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec  6 02:14:01 np0005548731 nova_compute[232433]: 2025-12-06 07:14:01.898 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:14:01 np0005548731 nova_compute[232433]: 2025-12-06 07:14:01.898 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1af5cec6-b2, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:14:01 np0005548731 nova_compute[232433]: 2025-12-06 07:14:01.900 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:14:01 np0005548731 nova_compute[232433]: 2025-12-06 07:14:01.901 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:14:01 np0005548731 nova_compute[232433]: 2025-12-06 07:14:01.903 232437 INFO os_vif [None req-8d57d8ae-f06f-4c83-8e21-499c50a608db bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:76:88:06,bridge_name='br-int',has_traffic_filtering=True,id=1af5cec6-b2ca-49b8-973c-78801de95fd1,network=Network(2b0835d7-87e4-46cc-8a94-e4e042bd4bad),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1af5cec6-b2')#033[00m
Dec  6 02:14:01 np0005548731 neutron-haproxy-ovnmeta-2b0835d7-87e4-46cc-8a94-e4e042bd4bad[258792]: [NOTICE]   (258807) : haproxy version is 2.8.14-c23fe91
Dec  6 02:14:01 np0005548731 neutron-haproxy-ovnmeta-2b0835d7-87e4-46cc-8a94-e4e042bd4bad[258792]: [NOTICE]   (258807) : path to executable is /usr/sbin/haproxy
Dec  6 02:14:01 np0005548731 neutron-haproxy-ovnmeta-2b0835d7-87e4-46cc-8a94-e4e042bd4bad[258792]: [WARNING]  (258807) : Exiting Master process...
Dec  6 02:14:01 np0005548731 neutron-haproxy-ovnmeta-2b0835d7-87e4-46cc-8a94-e4e042bd4bad[258792]: [ALERT]    (258807) : Current worker (258809) exited with code 143 (Terminated)
Dec  6 02:14:01 np0005548731 neutron-haproxy-ovnmeta-2b0835d7-87e4-46cc-8a94-e4e042bd4bad[258792]: [WARNING]  (258807) : All workers exited. Exiting... (0)
Dec  6 02:14:01 np0005548731 systemd[1]: libpod-4c9814240b55bbed3a054b0ec7a2c34d988199fc8eb56b36fbfd6645b5851886.scope: Deactivated successfully.
Dec  6 02:14:01 np0005548731 podman[259979]: 2025-12-06 07:14:01.934127647 +0000 UTC m=+0.053399181 container died 4c9814240b55bbed3a054b0ec7a2c34d988199fc8eb56b36fbfd6645b5851886 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2b0835d7-87e4-46cc-8a94-e4e042bd4bad, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125)
Dec  6 02:14:01 np0005548731 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-4c9814240b55bbed3a054b0ec7a2c34d988199fc8eb56b36fbfd6645b5851886-userdata-shm.mount: Deactivated successfully.
Dec  6 02:14:01 np0005548731 systemd[1]: var-lib-containers-storage-overlay-2e8a6a58ba95cc9b9225bb11abe1b201739082a15770959bc947cb1f6cab60d1-merged.mount: Deactivated successfully.
Dec  6 02:14:02 np0005548731 podman[259979]: 2025-12-06 07:14:02.002709249 +0000 UTC m=+0.121980783 container cleanup 4c9814240b55bbed3a054b0ec7a2c34d988199fc8eb56b36fbfd6645b5851886 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2b0835d7-87e4-46cc-8a94-e4e042bd4bad, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_managed=true)
Dec  6 02:14:02 np0005548731 systemd[1]: libpod-conmon-4c9814240b55bbed3a054b0ec7a2c34d988199fc8eb56b36fbfd6645b5851886.scope: Deactivated successfully.
Dec  6 02:14:02 np0005548731 podman[260031]: 2025-12-06 07:14:02.070281837 +0000 UTC m=+0.042125954 container remove 4c9814240b55bbed3a054b0ec7a2c34d988199fc8eb56b36fbfd6645b5851886 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2b0835d7-87e4-46cc-8a94-e4e042bd4bad, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Dec  6 02:14:02 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:14:02.076 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[4ea3cff6-c634-454c-a7ef-4a6dd9140b3b]: (4, ('Sat Dec  6 07:14:01 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-2b0835d7-87e4-46cc-8a94-e4e042bd4bad (4c9814240b55bbed3a054b0ec7a2c34d988199fc8eb56b36fbfd6645b5851886)\n4c9814240b55bbed3a054b0ec7a2c34d988199fc8eb56b36fbfd6645b5851886\nSat Dec  6 07:14:02 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-2b0835d7-87e4-46cc-8a94-e4e042bd4bad (4c9814240b55bbed3a054b0ec7a2c34d988199fc8eb56b36fbfd6645b5851886)\n4c9814240b55bbed3a054b0ec7a2c34d988199fc8eb56b36fbfd6645b5851886\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:14:02 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:14:02.078 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[f5c96d4e-e748-43a8-b5f9-8902d76c37fa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:14:02 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:14:02.079 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2b0835d7-80, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:14:02 np0005548731 nova_compute[232433]: 2025-12-06 07:14:02.081 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:14:02 np0005548731 kernel: tap2b0835d7-80: left promiscuous mode
Dec  6 02:14:02 np0005548731 nova_compute[232433]: 2025-12-06 07:14:02.082 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:14:02 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:14:02.085 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[dd39a340-3a32-4b9d-813f-903f4dbc741f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:14:02 np0005548731 nova_compute[232433]: 2025-12-06 07:14:02.097 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:14:02 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:14:02.096 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[05ad75cf-fd6e-4e69-8386-8020a248db03]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:14:02 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:14:02.099 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[f934c0eb-d5f2-4dac-af56-910208b624f2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:14:02 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e246 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:14:02 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:14:02.116 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[d133dda3-a31f-41ba-9b35-f1f3e439b079]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 545972, 'reachable_time': 36082, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 260046, 'error': None, 'target': 'ovnmeta-2b0835d7-87e4-46cc-8a94-e4e042bd4bad', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:14:02 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:14:02.118 144079 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-2b0835d7-87e4-46cc-8a94-e4e042bd4bad deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Dec  6 02:14:02 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:14:02.118 144079 DEBUG oslo.privsep.daemon [-] privsep: reply[d7e04aad-f150-4924-a705-60a6a809cc6c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:14:02 np0005548731 systemd[1]: run-netns-ovnmeta\x2d2b0835d7\x2d87e4\x2d46cc\x2d8a94\x2de4e042bd4bad.mount: Deactivated successfully.
Dec  6 02:14:02 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:14:02 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:14:02 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:14:02.334 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:14:02 np0005548731 nova_compute[232433]: 2025-12-06 07:14:02.578 232437 DEBUG nova.compute.manager [req-9457fd1f-c622-4a5a-b553-3980621bbc27 req-557e7401-9c00-40b9-9d6e-e86bc627bcda 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: db2b4e57-20af-415b-ad7b-ac5b7197acb6] Received event network-vif-unplugged-1af5cec6-b2ca-49b8-973c-78801de95fd1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:14:02 np0005548731 nova_compute[232433]: 2025-12-06 07:14:02.578 232437 DEBUG oslo_concurrency.lockutils [req-9457fd1f-c622-4a5a-b553-3980621bbc27 req-557e7401-9c00-40b9-9d6e-e86bc627bcda 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "db2b4e57-20af-415b-ad7b-ac5b7197acb6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:14:02 np0005548731 nova_compute[232433]: 2025-12-06 07:14:02.579 232437 DEBUG oslo_concurrency.lockutils [req-9457fd1f-c622-4a5a-b553-3980621bbc27 req-557e7401-9c00-40b9-9d6e-e86bc627bcda 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "db2b4e57-20af-415b-ad7b-ac5b7197acb6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:14:02 np0005548731 nova_compute[232433]: 2025-12-06 07:14:02.579 232437 DEBUG oslo_concurrency.lockutils [req-9457fd1f-c622-4a5a-b553-3980621bbc27 req-557e7401-9c00-40b9-9d6e-e86bc627bcda 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "db2b4e57-20af-415b-ad7b-ac5b7197acb6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:14:02 np0005548731 nova_compute[232433]: 2025-12-06 07:14:02.579 232437 DEBUG nova.compute.manager [req-9457fd1f-c622-4a5a-b553-3980621bbc27 req-557e7401-9c00-40b9-9d6e-e86bc627bcda 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: db2b4e57-20af-415b-ad7b-ac5b7197acb6] No waiting events found dispatching network-vif-unplugged-1af5cec6-b2ca-49b8-973c-78801de95fd1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 02:14:02 np0005548731 nova_compute[232433]: 2025-12-06 07:14:02.579 232437 DEBUG nova.compute.manager [req-9457fd1f-c622-4a5a-b553-3980621bbc27 req-557e7401-9c00-40b9-9d6e-e86bc627bcda 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: db2b4e57-20af-415b-ad7b-ac5b7197acb6] Received event network-vif-unplugged-1af5cec6-b2ca-49b8-973c-78801de95fd1 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Dec  6 02:14:02 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:14:02 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:14:02 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:14:02.620 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:14:03 np0005548731 nova_compute[232433]: 2025-12-06 07:14:03.797 232437 INFO nova.virt.libvirt.driver [None req-8d57d8ae-f06f-4c83-8e21-499c50a608db bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] [instance: db2b4e57-20af-415b-ad7b-ac5b7197acb6] Deleting instance files /var/lib/nova/instances/db2b4e57-20af-415b-ad7b-ac5b7197acb6_del#033[00m
Dec  6 02:14:03 np0005548731 nova_compute[232433]: 2025-12-06 07:14:03.797 232437 INFO nova.virt.libvirt.driver [None req-8d57d8ae-f06f-4c83-8e21-499c50a608db bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] [instance: db2b4e57-20af-415b-ad7b-ac5b7197acb6] Deletion of /var/lib/nova/instances/db2b4e57-20af-415b-ad7b-ac5b7197acb6_del complete#033[00m
Dec  6 02:14:03 np0005548731 nova_compute[232433]: 2025-12-06 07:14:03.851 232437 INFO nova.compute.manager [None req-8d57d8ae-f06f-4c83-8e21-499c50a608db bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] [instance: db2b4e57-20af-415b-ad7b-ac5b7197acb6] Took 2.41 seconds to destroy the instance on the hypervisor.#033[00m
Dec  6 02:14:03 np0005548731 nova_compute[232433]: 2025-12-06 07:14:03.852 232437 DEBUG oslo.service.loopingcall [None req-8d57d8ae-f06f-4c83-8e21-499c50a608db bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Dec  6 02:14:03 np0005548731 nova_compute[232433]: 2025-12-06 07:14:03.852 232437 DEBUG nova.compute.manager [-] [instance: db2b4e57-20af-415b-ad7b-ac5b7197acb6] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Dec  6 02:14:03 np0005548731 nova_compute[232433]: 2025-12-06 07:14:03.853 232437 DEBUG nova.network.neutron [-] [instance: db2b4e57-20af-415b-ad7b-ac5b7197acb6] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Dec  6 02:14:04 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:14:04 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:14:04 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:14:04.336 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:14:04 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:14:04 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:14:04 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:14:04.622 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:14:04 np0005548731 nova_compute[232433]: 2025-12-06 07:14:04.843 232437 DEBUG nova.network.neutron [-] [instance: db2b4e57-20af-415b-ad7b-ac5b7197acb6] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 02:14:04 np0005548731 nova_compute[232433]: 2025-12-06 07:14:04.864 232437 INFO nova.compute.manager [-] [instance: db2b4e57-20af-415b-ad7b-ac5b7197acb6] Took 1.01 seconds to deallocate network for instance.#033[00m
Dec  6 02:14:04 np0005548731 nova_compute[232433]: 2025-12-06 07:14:04.869 232437 DEBUG nova.compute.manager [req-9f1d2778-daf7-42bc-bcf3-d1d8bab41e18 req-24860a84-4d8e-4f3c-a49c-53db86d272f0 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: db2b4e57-20af-415b-ad7b-ac5b7197acb6] Received event network-vif-plugged-1af5cec6-b2ca-49b8-973c-78801de95fd1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:14:04 np0005548731 nova_compute[232433]: 2025-12-06 07:14:04.869 232437 DEBUG oslo_concurrency.lockutils [req-9f1d2778-daf7-42bc-bcf3-d1d8bab41e18 req-24860a84-4d8e-4f3c-a49c-53db86d272f0 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "db2b4e57-20af-415b-ad7b-ac5b7197acb6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:14:04 np0005548731 nova_compute[232433]: 2025-12-06 07:14:04.870 232437 DEBUG oslo_concurrency.lockutils [req-9f1d2778-daf7-42bc-bcf3-d1d8bab41e18 req-24860a84-4d8e-4f3c-a49c-53db86d272f0 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "db2b4e57-20af-415b-ad7b-ac5b7197acb6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:14:04 np0005548731 nova_compute[232433]: 2025-12-06 07:14:04.870 232437 DEBUG oslo_concurrency.lockutils [req-9f1d2778-daf7-42bc-bcf3-d1d8bab41e18 req-24860a84-4d8e-4f3c-a49c-53db86d272f0 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "db2b4e57-20af-415b-ad7b-ac5b7197acb6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:14:04 np0005548731 nova_compute[232433]: 2025-12-06 07:14:04.870 232437 DEBUG nova.compute.manager [req-9f1d2778-daf7-42bc-bcf3-d1d8bab41e18 req-24860a84-4d8e-4f3c-a49c-53db86d272f0 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: db2b4e57-20af-415b-ad7b-ac5b7197acb6] No waiting events found dispatching network-vif-plugged-1af5cec6-b2ca-49b8-973c-78801de95fd1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 02:14:04 np0005548731 nova_compute[232433]: 2025-12-06 07:14:04.870 232437 WARNING nova.compute.manager [req-9f1d2778-daf7-42bc-bcf3-d1d8bab41e18 req-24860a84-4d8e-4f3c-a49c-53db86d272f0 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: db2b4e57-20af-415b-ad7b-ac5b7197acb6] Received unexpected event network-vif-plugged-1af5cec6-b2ca-49b8-973c-78801de95fd1 for instance with vm_state active and task_state deleting.#033[00m
Dec  6 02:14:04 np0005548731 nova_compute[232433]: 2025-12-06 07:14:04.907 232437 DEBUG nova.compute.manager [req-f2a340fe-56ae-446a-b533-0892fe8817de req-f8a03ebd-2b2e-405d-8649-4b3bba8b7826 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: db2b4e57-20af-415b-ad7b-ac5b7197acb6] Received event network-vif-deleted-1af5cec6-b2ca-49b8-973c-78801de95fd1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:14:04 np0005548731 nova_compute[232433]: 2025-12-06 07:14:04.934 232437 DEBUG oslo_concurrency.lockutils [None req-8d57d8ae-f06f-4c83-8e21-499c50a608db bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:14:04 np0005548731 nova_compute[232433]: 2025-12-06 07:14:04.935 232437 DEBUG oslo_concurrency.lockutils [None req-8d57d8ae-f06f-4c83-8e21-499c50a608db bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:14:04 np0005548731 nova_compute[232433]: 2025-12-06 07:14:04.996 232437 DEBUG oslo_concurrency.processutils [None req-8d57d8ae-f06f-4c83-8e21-499c50a608db bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:14:05 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 02:14:05 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2158123452' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 02:14:05 np0005548731 nova_compute[232433]: 2025-12-06 07:14:05.464 232437 DEBUG oslo_concurrency.processutils [None req-8d57d8ae-f06f-4c83-8e21-499c50a608db bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.468s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:14:05 np0005548731 nova_compute[232433]: 2025-12-06 07:14:05.471 232437 DEBUG nova.compute.provider_tree [None req-8d57d8ae-f06f-4c83-8e21-499c50a608db bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Inventory has not changed in ProviderTree for provider: 6d00757a-082f-486d-ae84-869a2ba2e6e7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  6 02:14:05 np0005548731 nova_compute[232433]: 2025-12-06 07:14:05.487 232437 DEBUG nova.scheduler.client.report [None req-8d57d8ae-f06f-4c83-8e21-499c50a608db bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Inventory has not changed for provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  6 02:14:05 np0005548731 nova_compute[232433]: 2025-12-06 07:14:05.518 232437 DEBUG oslo_concurrency.lockutils [None req-8d57d8ae-f06f-4c83-8e21-499c50a608db bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.583s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:14:05 np0005548731 nova_compute[232433]: 2025-12-06 07:14:05.550 232437 INFO nova.scheduler.client.report [None req-8d57d8ae-f06f-4c83-8e21-499c50a608db bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Deleted allocations for instance db2b4e57-20af-415b-ad7b-ac5b7197acb6#033[00m
Dec  6 02:14:05 np0005548731 nova_compute[232433]: 2025-12-06 07:14:05.637 232437 DEBUG oslo_concurrency.lockutils [None req-8d57d8ae-f06f-4c83-8e21-499c50a608db bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Lock "db2b4e57-20af-415b-ad7b-ac5b7197acb6" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.196s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:14:06 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:14:06 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:14:06 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:14:06.338 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:14:06 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 02:14:06 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:14:06 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:14:06 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:14:06.626 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:14:06 np0005548731 nova_compute[232433]: 2025-12-06 07:14:06.807 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:14:06 np0005548731 nova_compute[232433]: 2025-12-06 07:14:06.900 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:14:07 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e246 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:14:08 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e247 e247: 3 total, 3 up, 3 in
Dec  6 02:14:08 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:14:08 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:14:08 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:14:08.340 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:14:08 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:14:08 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:14:08 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:14:08.628 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:14:08 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 02:14:08 np0005548731 nova_compute[232433]: 2025-12-06 07:14:08.921 232437 DEBUG oslo_concurrency.lockutils [None req-84c73deb-f159-49ec-b0e1-f72828113dec 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Acquiring lock "c8403a0c-2fe6-48fe-91af-ec5aca71e12d" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:14:08 np0005548731 nova_compute[232433]: 2025-12-06 07:14:08.921 232437 DEBUG oslo_concurrency.lockutils [None req-84c73deb-f159-49ec-b0e1-f72828113dec 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Lock "c8403a0c-2fe6-48fe-91af-ec5aca71e12d" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:14:08 np0005548731 nova_compute[232433]: 2025-12-06 07:14:08.939 232437 DEBUG nova.compute.manager [None req-84c73deb-f159-49ec-b0e1-f72828113dec 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] [instance: c8403a0c-2fe6-48fe-91af-ec5aca71e12d] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Dec  6 02:14:08 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  6 02:14:08 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1032650525' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  6 02:14:08 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  6 02:14:08 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1032650525' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  6 02:14:09 np0005548731 nova_compute[232433]: 2025-12-06 07:14:09.022 232437 DEBUG oslo_concurrency.lockutils [None req-84c73deb-f159-49ec-b0e1-f72828113dec 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:14:09 np0005548731 nova_compute[232433]: 2025-12-06 07:14:09.023 232437 DEBUG oslo_concurrency.lockutils [None req-84c73deb-f159-49ec-b0e1-f72828113dec 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:14:09 np0005548731 nova_compute[232433]: 2025-12-06 07:14:09.032 232437 DEBUG nova.virt.hardware [None req-84c73deb-f159-49ec-b0e1-f72828113dec 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Dec  6 02:14:09 np0005548731 nova_compute[232433]: 2025-12-06 07:14:09.032 232437 INFO nova.compute.claims [None req-84c73deb-f159-49ec-b0e1-f72828113dec 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] [instance: c8403a0c-2fe6-48fe-91af-ec5aca71e12d] Claim successful on node compute-2.ctlplane.example.com#033[00m
Dec  6 02:14:09 np0005548731 nova_compute[232433]: 2025-12-06 07:14:09.133 232437 DEBUG oslo_concurrency.processutils [None req-84c73deb-f159-49ec-b0e1-f72828113dec 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:14:09 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 02:14:09 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1544661985' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 02:14:09 np0005548731 nova_compute[232433]: 2025-12-06 07:14:09.852 232437 DEBUG oslo_concurrency.processutils [None req-84c73deb-f159-49ec-b0e1-f72828113dec 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.719s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:14:09 np0005548731 nova_compute[232433]: 2025-12-06 07:14:09.858 232437 DEBUG nova.compute.provider_tree [None req-84c73deb-f159-49ec-b0e1-f72828113dec 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Inventory has not changed in ProviderTree for provider: 6d00757a-082f-486d-ae84-869a2ba2e6e7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  6 02:14:09 np0005548731 nova_compute[232433]: 2025-12-06 07:14:09.873 232437 DEBUG nova.scheduler.client.report [None req-84c73deb-f159-49ec-b0e1-f72828113dec 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Inventory has not changed for provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  6 02:14:09 np0005548731 nova_compute[232433]: 2025-12-06 07:14:09.899 232437 DEBUG oslo_concurrency.lockutils [None req-84c73deb-f159-49ec-b0e1-f72828113dec 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.876s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:14:09 np0005548731 nova_compute[232433]: 2025-12-06 07:14:09.900 232437 DEBUG nova.compute.manager [None req-84c73deb-f159-49ec-b0e1-f72828113dec 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] [instance: c8403a0c-2fe6-48fe-91af-ec5aca71e12d] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Dec  6 02:14:09 np0005548731 nova_compute[232433]: 2025-12-06 07:14:09.943 232437 DEBUG nova.compute.manager [None req-84c73deb-f159-49ec-b0e1-f72828113dec 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] [instance: c8403a0c-2fe6-48fe-91af-ec5aca71e12d] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Dec  6 02:14:09 np0005548731 nova_compute[232433]: 2025-12-06 07:14:09.944 232437 DEBUG nova.network.neutron [None req-84c73deb-f159-49ec-b0e1-f72828113dec 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] [instance: c8403a0c-2fe6-48fe-91af-ec5aca71e12d] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Dec  6 02:14:09 np0005548731 nova_compute[232433]: 2025-12-06 07:14:09.968 232437 INFO nova.virt.libvirt.driver [None req-84c73deb-f159-49ec-b0e1-f72828113dec 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] [instance: c8403a0c-2fe6-48fe-91af-ec5aca71e12d] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Dec  6 02:14:09 np0005548731 nova_compute[232433]: 2025-12-06 07:14:09.985 232437 DEBUG nova.compute.manager [None req-84c73deb-f159-49ec-b0e1-f72828113dec 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] [instance: c8403a0c-2fe6-48fe-91af-ec5aca71e12d] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Dec  6 02:14:10 np0005548731 nova_compute[232433]: 2025-12-06 07:14:10.085 232437 DEBUG nova.compute.manager [None req-84c73deb-f159-49ec-b0e1-f72828113dec 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] [instance: c8403a0c-2fe6-48fe-91af-ec5aca71e12d] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Dec  6 02:14:10 np0005548731 nova_compute[232433]: 2025-12-06 07:14:10.087 232437 DEBUG nova.virt.libvirt.driver [None req-84c73deb-f159-49ec-b0e1-f72828113dec 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] [instance: c8403a0c-2fe6-48fe-91af-ec5aca71e12d] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Dec  6 02:14:10 np0005548731 nova_compute[232433]: 2025-12-06 07:14:10.088 232437 INFO nova.virt.libvirt.driver [None req-84c73deb-f159-49ec-b0e1-f72828113dec 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] [instance: c8403a0c-2fe6-48fe-91af-ec5aca71e12d] Creating image(s)#033[00m
Dec  6 02:14:10 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:14:10 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:14:10 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:14:10.342 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:14:10 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:14:10 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:14:10 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:14:10.632 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:14:12 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e247 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:14:12 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:14:12 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:14:12 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:14:12.345 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:14:12 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:14:12 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:14:12 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:14:12.636 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:14:14 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:14:14.127 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=24, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:ec:b3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '32:72:e7:89:e0:7d'}, ipsec=False) old=SB_Global(nb_cfg=23) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 02:14:14 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:14:14.128 143965 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec  6 02:14:14 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:14:14 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:14:14 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:14:14.347 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:14:14 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:14:14 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:14:14 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:14:14.639 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:14:16 np0005548731 ceph-mds[82074]: mds.beacon.cephfs.compute-2.tjfgow missed beacon ack from the monitors
Dec  6 02:14:16 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:14:16 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:14:16 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:14:16.348 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:14:16 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:14:16 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:14:16 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:14:16.642 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:14:16 np0005548731 nova_compute[232433]: 2025-12-06 07:14:16.680 232437 DEBUG nova.storage.rbd_utils [None req-84c73deb-f159-49ec-b0e1-f72828113dec 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] rbd image c8403a0c-2fe6-48fe-91af-ec5aca71e12d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:14:16 np0005548731 nova_compute[232433]: 2025-12-06 07:14:16.709 232437 DEBUG nova.storage.rbd_utils [None req-84c73deb-f159-49ec-b0e1-f72828113dec 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] rbd image c8403a0c-2fe6-48fe-91af-ec5aca71e12d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:14:16 np0005548731 nova_compute[232433]: 2025-12-06 07:14:16.741 232437 DEBUG nova.storage.rbd_utils [None req-84c73deb-f159-49ec-b0e1-f72828113dec 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] rbd image c8403a0c-2fe6-48fe-91af-ec5aca71e12d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:14:16 np0005548731 nova_compute[232433]: 2025-12-06 07:14:16.745 232437 DEBUG oslo_concurrency.processutils [None req-84c73deb-f159-49ec-b0e1-f72828113dec 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:14:16 np0005548731 nova_compute[232433]: 2025-12-06 07:14:16.770 232437 DEBUG nova.policy [None req-84c73deb-f159-49ec-b0e1-f72828113dec 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '627c36bb63534e52a4b1d5adf47e6ffd', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '929e2be1488d4b80b7ad8946093a6abe', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Dec  6 02:14:16 np0005548731 nova_compute[232433]: 2025-12-06 07:14:16.772 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:14:16 np0005548731 nova_compute[232433]: 2025-12-06 07:14:16.779 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  6 02:14:16 np0005548731 nova_compute[232433]: 2025-12-06 07:14:16.807 232437 DEBUG oslo_concurrency.processutils [None req-84c73deb-f159-49ec-b0e1-f72828113dec 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:14:16 np0005548731 nova_compute[232433]: 2025-12-06 07:14:16.807 232437 DEBUG oslo_concurrency.lockutils [None req-84c73deb-f159-49ec-b0e1-f72828113dec 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Acquiring lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:14:16 np0005548731 nova_compute[232433]: 2025-12-06 07:14:16.808 232437 DEBUG oslo_concurrency.lockutils [None req-84c73deb-f159-49ec-b0e1-f72828113dec 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:14:16 np0005548731 nova_compute[232433]: 2025-12-06 07:14:16.808 232437 DEBUG oslo_concurrency.lockutils [None req-84c73deb-f159-49ec-b0e1-f72828113dec 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:14:16 np0005548731 nova_compute[232433]: 2025-12-06 07:14:16.834 232437 DEBUG nova.storage.rbd_utils [None req-84c73deb-f159-49ec-b0e1-f72828113dec 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] rbd image c8403a0c-2fe6-48fe-91af-ec5aca71e12d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:14:16 np0005548731 nova_compute[232433]: 2025-12-06 07:14:16.838 232437 DEBUG oslo_concurrency.processutils [None req-84c73deb-f159-49ec-b0e1-f72828113dec 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef c8403a0c-2fe6-48fe-91af-ec5aca71e12d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:14:16 np0005548731 nova_compute[232433]: 2025-12-06 07:14:16.860 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:14:16 np0005548731 nova_compute[232433]: 2025-12-06 07:14:16.874 232437 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765005241.8738453, db2b4e57-20af-415b-ad7b-ac5b7197acb6 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 02:14:16 np0005548731 nova_compute[232433]: 2025-12-06 07:14:16.875 232437 INFO nova.compute.manager [-] [instance: db2b4e57-20af-415b-ad7b-ac5b7197acb6] VM Stopped (Lifecycle Event)#033[00m
Dec  6 02:14:16 np0005548731 nova_compute[232433]: 2025-12-06 07:14:16.897 232437 DEBUG nova.compute.manager [None req-3adf1eda-925c-46fd-8a32-a3ee66c9c202 - - - - - -] [instance: db2b4e57-20af-415b-ad7b-ac5b7197acb6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:14:17 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e247 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:14:18 np0005548731 nova_compute[232433]: 2025-12-06 07:14:18.169 232437 DEBUG nova.network.neutron [None req-84c73deb-f159-49ec-b0e1-f72828113dec 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] [instance: c8403a0c-2fe6-48fe-91af-ec5aca71e12d] Successfully created port: a599f1a0-5413-4dc9-9ae4-d7ba512d761c _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Dec  6 02:14:18 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:14:18 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 02:14:18 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:14:18.351 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 02:14:18 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:14:18 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:14:18 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:14:18.646 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:14:19 np0005548731 nova_compute[232433]: 2025-12-06 07:14:19.677 232437 DEBUG nova.network.neutron [None req-84c73deb-f159-49ec-b0e1-f72828113dec 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] [instance: c8403a0c-2fe6-48fe-91af-ec5aca71e12d] Successfully updated port: a599f1a0-5413-4dc9-9ae4-d7ba512d761c _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Dec  6 02:14:19 np0005548731 nova_compute[232433]: 2025-12-06 07:14:19.697 232437 DEBUG oslo_concurrency.lockutils [None req-84c73deb-f159-49ec-b0e1-f72828113dec 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Acquiring lock "refresh_cache-c8403a0c-2fe6-48fe-91af-ec5aca71e12d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 02:14:19 np0005548731 nova_compute[232433]: 2025-12-06 07:14:19.698 232437 DEBUG oslo_concurrency.lockutils [None req-84c73deb-f159-49ec-b0e1-f72828113dec 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Acquired lock "refresh_cache-c8403a0c-2fe6-48fe-91af-ec5aca71e12d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 02:14:19 np0005548731 nova_compute[232433]: 2025-12-06 07:14:19.698 232437 DEBUG nova.network.neutron [None req-84c73deb-f159-49ec-b0e1-f72828113dec 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] [instance: c8403a0c-2fe6-48fe-91af-ec5aca71e12d] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec  6 02:14:19 np0005548731 nova_compute[232433]: 2025-12-06 07:14:19.785 232437 DEBUG nova.compute.manager [req-71407013-7822-4b35-9681-8597a3ec2fbd req-93343b32-0c53-4227-b0c2-5591742b6639 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: c8403a0c-2fe6-48fe-91af-ec5aca71e12d] Received event network-changed-a599f1a0-5413-4dc9-9ae4-d7ba512d761c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:14:19 np0005548731 nova_compute[232433]: 2025-12-06 07:14:19.785 232437 DEBUG nova.compute.manager [req-71407013-7822-4b35-9681-8597a3ec2fbd req-93343b32-0c53-4227-b0c2-5591742b6639 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: c8403a0c-2fe6-48fe-91af-ec5aca71e12d] Refreshing instance network info cache due to event network-changed-a599f1a0-5413-4dc9-9ae4-d7ba512d761c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  6 02:14:19 np0005548731 nova_compute[232433]: 2025-12-06 07:14:19.785 232437 DEBUG oslo_concurrency.lockutils [req-71407013-7822-4b35-9681-8597a3ec2fbd req-93343b32-0c53-4227-b0c2-5591742b6639 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-c8403a0c-2fe6-48fe-91af-ec5aca71e12d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 02:14:19 np0005548731 podman[260243]: 2025-12-06 07:14:19.923175425 +0000 UTC m=+0.076467947 container health_status 26c2ce9bdb83c6db5ccad7f5b2a7cb4e9354ab0235ad2f942ea42c8feda2142b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, managed_by=edpm_ansible)
Dec  6 02:14:19 np0005548731 podman[260245]: 2025-12-06 07:14:19.925121313 +0000 UTC m=+0.068109462 container health_status ae4fd08cdec31d6ae2a6b91ffff7deb6ca5a8375cb52ee4b685cc6f25796824e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Dec  6 02:14:19 np0005548731 podman[260244]: 2025-12-06 07:14:19.958323368 +0000 UTC m=+0.098189880 container health_status a118c3becf56cfbcae208065771ebf10a65162bb7b96389971293e534823fb78 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec  6 02:14:20 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:14:20 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:14:20 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:14:20.353 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:14:20 np0005548731 nova_compute[232433]: 2025-12-06 07:14:20.377 232437 DEBUG nova.network.neutron [None req-84c73deb-f159-49ec-b0e1-f72828113dec 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] [instance: c8403a0c-2fe6-48fe-91af-ec5aca71e12d] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec  6 02:14:20 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:14:20 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:14:20 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:14:20.648 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:14:21 np0005548731 nova_compute[232433]: 2025-12-06 07:14:21.409 232437 DEBUG nova.network.neutron [None req-84c73deb-f159-49ec-b0e1-f72828113dec 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] [instance: c8403a0c-2fe6-48fe-91af-ec5aca71e12d] Updating instance_info_cache with network_info: [{"id": "a599f1a0-5413-4dc9-9ae4-d7ba512d761c", "address": "fa:16:3e:9b:0b:0a", "network": {"id": "4d599401-3772-4e38-8cd2-d774d370af64", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-809610913-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "929e2be1488d4b80b7ad8946093a6abe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa599f1a0-54", "ovs_interfaceid": "a599f1a0-5413-4dc9-9ae4-d7ba512d761c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 02:14:21 np0005548731 nova_compute[232433]: 2025-12-06 07:14:21.429 232437 DEBUG oslo_concurrency.lockutils [None req-84c73deb-f159-49ec-b0e1-f72828113dec 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Releasing lock "refresh_cache-c8403a0c-2fe6-48fe-91af-ec5aca71e12d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 02:14:21 np0005548731 nova_compute[232433]: 2025-12-06 07:14:21.429 232437 DEBUG nova.compute.manager [None req-84c73deb-f159-49ec-b0e1-f72828113dec 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] [instance: c8403a0c-2fe6-48fe-91af-ec5aca71e12d] Instance network_info: |[{"id": "a599f1a0-5413-4dc9-9ae4-d7ba512d761c", "address": "fa:16:3e:9b:0b:0a", "network": {"id": "4d599401-3772-4e38-8cd2-d774d370af64", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-809610913-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "929e2be1488d4b80b7ad8946093a6abe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa599f1a0-54", "ovs_interfaceid": "a599f1a0-5413-4dc9-9ae4-d7ba512d761c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Dec  6 02:14:21 np0005548731 nova_compute[232433]: 2025-12-06 07:14:21.429 232437 DEBUG oslo_concurrency.lockutils [req-71407013-7822-4b35-9681-8597a3ec2fbd req-93343b32-0c53-4227-b0c2-5591742b6639 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-c8403a0c-2fe6-48fe-91af-ec5aca71e12d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 02:14:21 np0005548731 nova_compute[232433]: 2025-12-06 07:14:21.430 232437 DEBUG nova.network.neutron [req-71407013-7822-4b35-9681-8597a3ec2fbd req-93343b32-0c53-4227-b0c2-5591742b6639 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: c8403a0c-2fe6-48fe-91af-ec5aca71e12d] Refreshing network info cache for port a599f1a0-5413-4dc9-9ae4-d7ba512d761c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  6 02:14:21 np0005548731 nova_compute[232433]: 2025-12-06 07:14:21.776 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:14:21 np0005548731 nova_compute[232433]: 2025-12-06 07:14:21.812 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:14:22 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e247 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:14:22 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:14:22 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:14:22 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:14:22.356 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:14:22 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:14:22 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:14:22 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:14:22.650 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:14:22 np0005548731 nova_compute[232433]: 2025-12-06 07:14:22.732 232437 DEBUG nova.network.neutron [req-71407013-7822-4b35-9681-8597a3ec2fbd req-93343b32-0c53-4227-b0c2-5591742b6639 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: c8403a0c-2fe6-48fe-91af-ec5aca71e12d] Updated VIF entry in instance network info cache for port a599f1a0-5413-4dc9-9ae4-d7ba512d761c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  6 02:14:22 np0005548731 nova_compute[232433]: 2025-12-06 07:14:22.732 232437 DEBUG nova.network.neutron [req-71407013-7822-4b35-9681-8597a3ec2fbd req-93343b32-0c53-4227-b0c2-5591742b6639 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: c8403a0c-2fe6-48fe-91af-ec5aca71e12d] Updating instance_info_cache with network_info: [{"id": "a599f1a0-5413-4dc9-9ae4-d7ba512d761c", "address": "fa:16:3e:9b:0b:0a", "network": {"id": "4d599401-3772-4e38-8cd2-d774d370af64", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-809610913-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "929e2be1488d4b80b7ad8946093a6abe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa599f1a0-54", "ovs_interfaceid": "a599f1a0-5413-4dc9-9ae4-d7ba512d761c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 02:14:22 np0005548731 nova_compute[232433]: 2025-12-06 07:14:22.751 232437 DEBUG oslo_concurrency.lockutils [req-71407013-7822-4b35-9681-8597a3ec2fbd req-93343b32-0c53-4227-b0c2-5591742b6639 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-c8403a0c-2fe6-48fe-91af-ec5aca71e12d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 02:14:23 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:14:23.131 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=9f96b960-b4f2-40bd-ae99-08121f5e8b78, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '24'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:14:24 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:14:24 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:14:24 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:14:24.358 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:14:24 np0005548731 nova_compute[232433]: 2025-12-06 07:14:24.464 232437 DEBUG oslo_concurrency.processutils [None req-84c73deb-f159-49ec-b0e1-f72828113dec 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef c8403a0c-2fe6-48fe-91af-ec5aca71e12d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 7.626s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:14:24 np0005548731 nova_compute[232433]: 2025-12-06 07:14:24.571 232437 DEBUG nova.storage.rbd_utils [None req-84c73deb-f159-49ec-b0e1-f72828113dec 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] resizing rbd image c8403a0c-2fe6-48fe-91af-ec5aca71e12d_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Dec  6 02:14:24 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:14:24 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:14:24 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:14:24.654 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:14:26 np0005548731 nova_compute[232433]: 2025-12-06 07:14:26.021 232437 DEBUG nova.objects.instance [None req-84c73deb-f159-49ec-b0e1-f72828113dec 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Lazy-loading 'migration_context' on Instance uuid c8403a0c-2fe6-48fe-91af-ec5aca71e12d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:14:26 np0005548731 nova_compute[232433]: 2025-12-06 07:14:26.038 232437 DEBUG nova.virt.libvirt.driver [None req-84c73deb-f159-49ec-b0e1-f72828113dec 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] [instance: c8403a0c-2fe6-48fe-91af-ec5aca71e12d] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Dec  6 02:14:26 np0005548731 nova_compute[232433]: 2025-12-06 07:14:26.039 232437 DEBUG nova.virt.libvirt.driver [None req-84c73deb-f159-49ec-b0e1-f72828113dec 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] [instance: c8403a0c-2fe6-48fe-91af-ec5aca71e12d] Ensure instance console log exists: /var/lib/nova/instances/c8403a0c-2fe6-48fe-91af-ec5aca71e12d/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec  6 02:14:26 np0005548731 nova_compute[232433]: 2025-12-06 07:14:26.040 232437 DEBUG oslo_concurrency.lockutils [None req-84c73deb-f159-49ec-b0e1-f72828113dec 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:14:26 np0005548731 nova_compute[232433]: 2025-12-06 07:14:26.040 232437 DEBUG oslo_concurrency.lockutils [None req-84c73deb-f159-49ec-b0e1-f72828113dec 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:14:26 np0005548731 nova_compute[232433]: 2025-12-06 07:14:26.040 232437 DEBUG oslo_concurrency.lockutils [None req-84c73deb-f159-49ec-b0e1-f72828113dec 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:14:26 np0005548731 nova_compute[232433]: 2025-12-06 07:14:26.043 232437 DEBUG nova.virt.libvirt.driver [None req-84c73deb-f159-49ec-b0e1-f72828113dec 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] [instance: c8403a0c-2fe6-48fe-91af-ec5aca71e12d] Start _get_guest_xml network_info=[{"id": "a599f1a0-5413-4dc9-9ae4-d7ba512d761c", "address": "fa:16:3e:9b:0b:0a", "network": {"id": "4d599401-3772-4e38-8cd2-d774d370af64", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-809610913-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "929e2be1488d4b80b7ad8946093a6abe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa599f1a0-54", "ovs_interfaceid": "a599f1a0-5413-4dc9-9ae4-d7ba512d761c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'device_type': 'disk', 'encryption_secret_uuid': None, 'disk_bus': 'virtio', 'boot_index': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'encryption_options': None, 'size': 0, 'encrypted': False, 'image_id': '6efab05d-c7cf-4770-a5c3-c806a2739063'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec  6 02:14:26 np0005548731 nova_compute[232433]: 2025-12-06 07:14:26.049 232437 WARNING nova.virt.libvirt.driver [None req-84c73deb-f159-49ec-b0e1-f72828113dec 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  6 02:14:26 np0005548731 nova_compute[232433]: 2025-12-06 07:14:26.055 232437 DEBUG nova.virt.libvirt.host [None req-84c73deb-f159-49ec-b0e1-f72828113dec 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec  6 02:14:26 np0005548731 nova_compute[232433]: 2025-12-06 07:14:26.056 232437 DEBUG nova.virt.libvirt.host [None req-84c73deb-f159-49ec-b0e1-f72828113dec 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec  6 02:14:26 np0005548731 nova_compute[232433]: 2025-12-06 07:14:26.060 232437 DEBUG nova.virt.libvirt.host [None req-84c73deb-f159-49ec-b0e1-f72828113dec 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec  6 02:14:26 np0005548731 nova_compute[232433]: 2025-12-06 07:14:26.061 232437 DEBUG nova.virt.libvirt.host [None req-84c73deb-f159-49ec-b0e1-f72828113dec 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec  6 02:14:26 np0005548731 nova_compute[232433]: 2025-12-06 07:14:26.063 232437 DEBUG nova.virt.libvirt.driver [None req-84c73deb-f159-49ec-b0e1-f72828113dec 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec  6 02:14:26 np0005548731 nova_compute[232433]: 2025-12-06 07:14:26.063 232437 DEBUG nova.virt.hardware [None req-84c73deb-f159-49ec-b0e1-f72828113dec 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-06T06:56:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='25848a18-11d9-4f11-80b5-5d005675c76d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec  6 02:14:26 np0005548731 nova_compute[232433]: 2025-12-06 07:14:26.064 232437 DEBUG nova.virt.hardware [None req-84c73deb-f159-49ec-b0e1-f72828113dec 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec  6 02:14:26 np0005548731 nova_compute[232433]: 2025-12-06 07:14:26.064 232437 DEBUG nova.virt.hardware [None req-84c73deb-f159-49ec-b0e1-f72828113dec 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec  6 02:14:26 np0005548731 nova_compute[232433]: 2025-12-06 07:14:26.065 232437 DEBUG nova.virt.hardware [None req-84c73deb-f159-49ec-b0e1-f72828113dec 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec  6 02:14:26 np0005548731 nova_compute[232433]: 2025-12-06 07:14:26.065 232437 DEBUG nova.virt.hardware [None req-84c73deb-f159-49ec-b0e1-f72828113dec 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec  6 02:14:26 np0005548731 nova_compute[232433]: 2025-12-06 07:14:26.066 232437 DEBUG nova.virt.hardware [None req-84c73deb-f159-49ec-b0e1-f72828113dec 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec  6 02:14:26 np0005548731 nova_compute[232433]: 2025-12-06 07:14:26.066 232437 DEBUG nova.virt.hardware [None req-84c73deb-f159-49ec-b0e1-f72828113dec 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec  6 02:14:26 np0005548731 nova_compute[232433]: 2025-12-06 07:14:26.066 232437 DEBUG nova.virt.hardware [None req-84c73deb-f159-49ec-b0e1-f72828113dec 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec  6 02:14:26 np0005548731 nova_compute[232433]: 2025-12-06 07:14:26.067 232437 DEBUG nova.virt.hardware [None req-84c73deb-f159-49ec-b0e1-f72828113dec 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec  6 02:14:26 np0005548731 nova_compute[232433]: 2025-12-06 07:14:26.067 232437 DEBUG nova.virt.hardware [None req-84c73deb-f159-49ec-b0e1-f72828113dec 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec  6 02:14:26 np0005548731 nova_compute[232433]: 2025-12-06 07:14:26.067 232437 DEBUG nova.virt.hardware [None req-84c73deb-f159-49ec-b0e1-f72828113dec 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec  6 02:14:26 np0005548731 nova_compute[232433]: 2025-12-06 07:14:26.072 232437 DEBUG oslo_concurrency.processutils [None req-84c73deb-f159-49ec-b0e1-f72828113dec 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:14:26 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:14:26 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:14:26 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:14:26.360 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:14:26 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  6 02:14:26 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/838833154' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  6 02:14:26 np0005548731 nova_compute[232433]: 2025-12-06 07:14:26.552 232437 DEBUG oslo_concurrency.processutils [None req-84c73deb-f159-49ec-b0e1-f72828113dec 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.480s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:14:26 np0005548731 nova_compute[232433]: 2025-12-06 07:14:26.588 232437 DEBUG nova.storage.rbd_utils [None req-84c73deb-f159-49ec-b0e1-f72828113dec 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] rbd image c8403a0c-2fe6-48fe-91af-ec5aca71e12d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:14:26 np0005548731 nova_compute[232433]: 2025-12-06 07:14:26.592 232437 DEBUG oslo_concurrency.processutils [None req-84c73deb-f159-49ec-b0e1-f72828113dec 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:14:26 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:14:26 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:14:26 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:14:26.657 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:14:26 np0005548731 nova_compute[232433]: 2025-12-06 07:14:26.780 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:14:26 np0005548731 nova_compute[232433]: 2025-12-06 07:14:26.814 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:14:27 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  6 02:14:27 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2970115831' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  6 02:14:27 np0005548731 nova_compute[232433]: 2025-12-06 07:14:27.039 232437 DEBUG oslo_concurrency.processutils [None req-84c73deb-f159-49ec-b0e1-f72828113dec 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:14:27 np0005548731 nova_compute[232433]: 2025-12-06 07:14:27.041 232437 DEBUG nova.virt.libvirt.vif [None req-84c73deb-f159-49ec-b0e1-f72828113dec 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T07:14:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-893709654',display_name='tempest-ServerActionsTestJSON-server-893709654',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-893709654',id=68,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAYy9PI2opG1Yb015LzaQaZHiAr4KsuqNy5RLRivgn9w0frXJzdA9SLIokq/TNHsTv+OZ3SzlEhSSm/zy2gaUVX2tVfQksdYXi87Z2HYYYX2anFBfTxIFgh3j22gU5Usow==',key_name='tempest-keypair-1101896810',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='929e2be1488d4b80b7ad8946093a6abe',ramdisk_id='',reservation_id='r-klri94j0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestJSON-1877526843',owner_user_name='tempest-ServerActionsTestJSON-1877526843-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:14:10Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='627c36bb63534e52a4b1d5adf47e6ffd',uuid=c8403a0c-2fe6-48fe-91af-ec5aca71e12d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a599f1a0-5413-4dc9-9ae4-d7ba512d761c", "address": "fa:16:3e:9b:0b:0a", "network": {"id": "4d599401-3772-4e38-8cd2-d774d370af64", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-809610913-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": 
"fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "929e2be1488d4b80b7ad8946093a6abe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa599f1a0-54", "ovs_interfaceid": "a599f1a0-5413-4dc9-9ae4-d7ba512d761c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec  6 02:14:27 np0005548731 nova_compute[232433]: 2025-12-06 07:14:27.041 232437 DEBUG nova.network.os_vif_util [None req-84c73deb-f159-49ec-b0e1-f72828113dec 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Converting VIF {"id": "a599f1a0-5413-4dc9-9ae4-d7ba512d761c", "address": "fa:16:3e:9b:0b:0a", "network": {"id": "4d599401-3772-4e38-8cd2-d774d370af64", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-809610913-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "929e2be1488d4b80b7ad8946093a6abe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa599f1a0-54", "ovs_interfaceid": "a599f1a0-5413-4dc9-9ae4-d7ba512d761c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 02:14:27 np0005548731 nova_compute[232433]: 2025-12-06 07:14:27.042 232437 DEBUG nova.network.os_vif_util [None req-84c73deb-f159-49ec-b0e1-f72828113dec 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9b:0b:0a,bridge_name='br-int',has_traffic_filtering=True,id=a599f1a0-5413-4dc9-9ae4-d7ba512d761c,network=Network(4d599401-3772-4e38-8cd2-d774d370af64),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa599f1a0-54') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 02:14:27 np0005548731 nova_compute[232433]: 2025-12-06 07:14:27.043 232437 DEBUG nova.objects.instance [None req-84c73deb-f159-49ec-b0e1-f72828113dec 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Lazy-loading 'pci_devices' on Instance uuid c8403a0c-2fe6-48fe-91af-ec5aca71e12d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:14:27 np0005548731 nova_compute[232433]: 2025-12-06 07:14:27.058 232437 DEBUG nova.virt.libvirt.driver [None req-84c73deb-f159-49ec-b0e1-f72828113dec 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] [instance: c8403a0c-2fe6-48fe-91af-ec5aca71e12d] End _get_guest_xml xml=<domain type="kvm">
Dec  6 02:14:27 np0005548731 nova_compute[232433]:  <uuid>c8403a0c-2fe6-48fe-91af-ec5aca71e12d</uuid>
Dec  6 02:14:27 np0005548731 nova_compute[232433]:  <name>instance-00000044</name>
Dec  6 02:14:27 np0005548731 nova_compute[232433]:  <memory>131072</memory>
Dec  6 02:14:27 np0005548731 nova_compute[232433]:  <vcpu>1</vcpu>
Dec  6 02:14:27 np0005548731 nova_compute[232433]:  <metadata>
Dec  6 02:14:27 np0005548731 nova_compute[232433]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec  6 02:14:27 np0005548731 nova_compute[232433]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec  6 02:14:27 np0005548731 nova_compute[232433]:      <nova:name>tempest-ServerActionsTestJSON-server-893709654</nova:name>
Dec  6 02:14:27 np0005548731 nova_compute[232433]:      <nova:creationTime>2025-12-06 07:14:26</nova:creationTime>
Dec  6 02:14:27 np0005548731 nova_compute[232433]:      <nova:flavor name="m1.nano">
Dec  6 02:14:27 np0005548731 nova_compute[232433]:        <nova:memory>128</nova:memory>
Dec  6 02:14:27 np0005548731 nova_compute[232433]:        <nova:disk>1</nova:disk>
Dec  6 02:14:27 np0005548731 nova_compute[232433]:        <nova:swap>0</nova:swap>
Dec  6 02:14:27 np0005548731 nova_compute[232433]:        <nova:ephemeral>0</nova:ephemeral>
Dec  6 02:14:27 np0005548731 nova_compute[232433]:        <nova:vcpus>1</nova:vcpus>
Dec  6 02:14:27 np0005548731 nova_compute[232433]:      </nova:flavor>
Dec  6 02:14:27 np0005548731 nova_compute[232433]:      <nova:owner>
Dec  6 02:14:27 np0005548731 nova_compute[232433]:        <nova:user uuid="627c36bb63534e52a4b1d5adf47e6ffd">tempest-ServerActionsTestJSON-1877526843-project-member</nova:user>
Dec  6 02:14:27 np0005548731 nova_compute[232433]:        <nova:project uuid="929e2be1488d4b80b7ad8946093a6abe">tempest-ServerActionsTestJSON-1877526843</nova:project>
Dec  6 02:14:27 np0005548731 nova_compute[232433]:      </nova:owner>
Dec  6 02:14:27 np0005548731 nova_compute[232433]:      <nova:root type="image" uuid="6efab05d-c7cf-4770-a5c3-c806a2739063"/>
Dec  6 02:14:27 np0005548731 nova_compute[232433]:      <nova:ports>
Dec  6 02:14:27 np0005548731 nova_compute[232433]:        <nova:port uuid="a599f1a0-5413-4dc9-9ae4-d7ba512d761c">
Dec  6 02:14:27 np0005548731 nova_compute[232433]:          <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Dec  6 02:14:27 np0005548731 nova_compute[232433]:        </nova:port>
Dec  6 02:14:27 np0005548731 nova_compute[232433]:      </nova:ports>
Dec  6 02:14:27 np0005548731 nova_compute[232433]:    </nova:instance>
Dec  6 02:14:27 np0005548731 nova_compute[232433]:  </metadata>
Dec  6 02:14:27 np0005548731 nova_compute[232433]:  <sysinfo type="smbios">
Dec  6 02:14:27 np0005548731 nova_compute[232433]:    <system>
Dec  6 02:14:27 np0005548731 nova_compute[232433]:      <entry name="manufacturer">RDO</entry>
Dec  6 02:14:27 np0005548731 nova_compute[232433]:      <entry name="product">OpenStack Compute</entry>
Dec  6 02:14:27 np0005548731 nova_compute[232433]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec  6 02:14:27 np0005548731 nova_compute[232433]:      <entry name="serial">c8403a0c-2fe6-48fe-91af-ec5aca71e12d</entry>
Dec  6 02:14:27 np0005548731 nova_compute[232433]:      <entry name="uuid">c8403a0c-2fe6-48fe-91af-ec5aca71e12d</entry>
Dec  6 02:14:27 np0005548731 nova_compute[232433]:      <entry name="family">Virtual Machine</entry>
Dec  6 02:14:27 np0005548731 nova_compute[232433]:    </system>
Dec  6 02:14:27 np0005548731 nova_compute[232433]:  </sysinfo>
Dec  6 02:14:27 np0005548731 nova_compute[232433]:  <os>
Dec  6 02:14:27 np0005548731 nova_compute[232433]:    <type arch="x86_64" machine="q35">hvm</type>
Dec  6 02:14:27 np0005548731 nova_compute[232433]:    <boot dev="hd"/>
Dec  6 02:14:27 np0005548731 nova_compute[232433]:    <smbios mode="sysinfo"/>
Dec  6 02:14:27 np0005548731 nova_compute[232433]:  </os>
Dec  6 02:14:27 np0005548731 nova_compute[232433]:  <features>
Dec  6 02:14:27 np0005548731 nova_compute[232433]:    <acpi/>
Dec  6 02:14:27 np0005548731 nova_compute[232433]:    <apic/>
Dec  6 02:14:27 np0005548731 nova_compute[232433]:    <vmcoreinfo/>
Dec  6 02:14:27 np0005548731 nova_compute[232433]:  </features>
Dec  6 02:14:27 np0005548731 nova_compute[232433]:  <clock offset="utc">
Dec  6 02:14:27 np0005548731 nova_compute[232433]:    <timer name="pit" tickpolicy="delay"/>
Dec  6 02:14:27 np0005548731 nova_compute[232433]:    <timer name="rtc" tickpolicy="catchup"/>
Dec  6 02:14:27 np0005548731 nova_compute[232433]:    <timer name="hpet" present="no"/>
Dec  6 02:14:27 np0005548731 nova_compute[232433]:  </clock>
Dec  6 02:14:27 np0005548731 nova_compute[232433]:  <cpu mode="custom" match="exact">
Dec  6 02:14:27 np0005548731 nova_compute[232433]:    <model>Nehalem</model>
Dec  6 02:14:27 np0005548731 nova_compute[232433]:    <topology sockets="1" cores="1" threads="1"/>
Dec  6 02:14:27 np0005548731 nova_compute[232433]:  </cpu>
Dec  6 02:14:27 np0005548731 nova_compute[232433]:  <devices>
Dec  6 02:14:27 np0005548731 nova_compute[232433]:    <disk type="network" device="disk">
Dec  6 02:14:27 np0005548731 nova_compute[232433]:      <driver type="raw" cache="none"/>
Dec  6 02:14:27 np0005548731 nova_compute[232433]:      <source protocol="rbd" name="vms/c8403a0c-2fe6-48fe-91af-ec5aca71e12d_disk">
Dec  6 02:14:27 np0005548731 nova_compute[232433]:        <host name="192.168.122.100" port="6789"/>
Dec  6 02:14:27 np0005548731 nova_compute[232433]:        <host name="192.168.122.102" port="6789"/>
Dec  6 02:14:27 np0005548731 nova_compute[232433]:        <host name="192.168.122.101" port="6789"/>
Dec  6 02:14:27 np0005548731 nova_compute[232433]:      </source>
Dec  6 02:14:27 np0005548731 nova_compute[232433]:      <auth username="openstack">
Dec  6 02:14:27 np0005548731 nova_compute[232433]:        <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec  6 02:14:27 np0005548731 nova_compute[232433]:      </auth>
Dec  6 02:14:27 np0005548731 nova_compute[232433]:      <target dev="vda" bus="virtio"/>
Dec  6 02:14:27 np0005548731 nova_compute[232433]:    </disk>
Dec  6 02:14:27 np0005548731 nova_compute[232433]:    <disk type="network" device="cdrom">
Dec  6 02:14:27 np0005548731 nova_compute[232433]:      <driver type="raw" cache="none"/>
Dec  6 02:14:27 np0005548731 nova_compute[232433]:      <source protocol="rbd" name="vms/c8403a0c-2fe6-48fe-91af-ec5aca71e12d_disk.config">
Dec  6 02:14:27 np0005548731 nova_compute[232433]:        <host name="192.168.122.100" port="6789"/>
Dec  6 02:14:27 np0005548731 nova_compute[232433]:        <host name="192.168.122.102" port="6789"/>
Dec  6 02:14:27 np0005548731 nova_compute[232433]:        <host name="192.168.122.101" port="6789"/>
Dec  6 02:14:27 np0005548731 nova_compute[232433]:      </source>
Dec  6 02:14:27 np0005548731 nova_compute[232433]:      <auth username="openstack">
Dec  6 02:14:27 np0005548731 nova_compute[232433]:        <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec  6 02:14:27 np0005548731 nova_compute[232433]:      </auth>
Dec  6 02:14:27 np0005548731 nova_compute[232433]:      <target dev="sda" bus="sata"/>
Dec  6 02:14:27 np0005548731 nova_compute[232433]:    </disk>
Dec  6 02:14:27 np0005548731 nova_compute[232433]:    <interface type="ethernet">
Dec  6 02:14:27 np0005548731 nova_compute[232433]:      <mac address="fa:16:3e:9b:0b:0a"/>
Dec  6 02:14:27 np0005548731 nova_compute[232433]:      <model type="virtio"/>
Dec  6 02:14:27 np0005548731 nova_compute[232433]:      <driver name="vhost" rx_queue_size="512"/>
Dec  6 02:14:27 np0005548731 nova_compute[232433]:      <mtu size="1442"/>
Dec  6 02:14:27 np0005548731 nova_compute[232433]:      <target dev="tapa599f1a0-54"/>
Dec  6 02:14:27 np0005548731 nova_compute[232433]:    </interface>
Dec  6 02:14:27 np0005548731 nova_compute[232433]:    <serial type="pty">
Dec  6 02:14:27 np0005548731 nova_compute[232433]:      <log file="/var/lib/nova/instances/c8403a0c-2fe6-48fe-91af-ec5aca71e12d/console.log" append="off"/>
Dec  6 02:14:27 np0005548731 nova_compute[232433]:    </serial>
Dec  6 02:14:27 np0005548731 nova_compute[232433]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec  6 02:14:27 np0005548731 nova_compute[232433]:    <video>
Dec  6 02:14:27 np0005548731 nova_compute[232433]:      <model type="virtio"/>
Dec  6 02:14:27 np0005548731 nova_compute[232433]:    </video>
Dec  6 02:14:27 np0005548731 nova_compute[232433]:    <input type="tablet" bus="usb"/>
Dec  6 02:14:27 np0005548731 nova_compute[232433]:    <rng model="virtio">
Dec  6 02:14:27 np0005548731 nova_compute[232433]:      <backend model="random">/dev/urandom</backend>
Dec  6 02:14:27 np0005548731 nova_compute[232433]:    </rng>
Dec  6 02:14:27 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root"/>
Dec  6 02:14:27 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:14:27 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:14:27 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:14:27 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:14:27 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:14:27 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:14:27 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:14:27 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:14:27 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:14:27 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:14:27 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:14:27 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:14:27 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:14:27 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:14:27 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:14:27 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:14:27 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:14:27 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:14:27 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:14:27 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:14:27 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:14:27 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:14:27 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:14:27 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:14:27 np0005548731 nova_compute[232433]:    <controller type="usb" index="0"/>
Dec  6 02:14:27 np0005548731 nova_compute[232433]:    <memballoon model="virtio">
Dec  6 02:14:27 np0005548731 nova_compute[232433]:      <stats period="10"/>
Dec  6 02:14:27 np0005548731 nova_compute[232433]:    </memballoon>
Dec  6 02:14:27 np0005548731 nova_compute[232433]:  </devices>
Dec  6 02:14:27 np0005548731 nova_compute[232433]: </domain>
Dec  6 02:14:27 np0005548731 nova_compute[232433]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec  6 02:14:27 np0005548731 nova_compute[232433]: 2025-12-06 07:14:27.060 232437 DEBUG nova.compute.manager [None req-84c73deb-f159-49ec-b0e1-f72828113dec 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] [instance: c8403a0c-2fe6-48fe-91af-ec5aca71e12d] Preparing to wait for external event network-vif-plugged-a599f1a0-5413-4dc9-9ae4-d7ba512d761c prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec  6 02:14:27 np0005548731 nova_compute[232433]: 2025-12-06 07:14:27.060 232437 DEBUG oslo_concurrency.lockutils [None req-84c73deb-f159-49ec-b0e1-f72828113dec 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Acquiring lock "c8403a0c-2fe6-48fe-91af-ec5aca71e12d-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:14:27 np0005548731 nova_compute[232433]: 2025-12-06 07:14:27.060 232437 DEBUG oslo_concurrency.lockutils [None req-84c73deb-f159-49ec-b0e1-f72828113dec 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Lock "c8403a0c-2fe6-48fe-91af-ec5aca71e12d-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:14:27 np0005548731 nova_compute[232433]: 2025-12-06 07:14:27.061 232437 DEBUG oslo_concurrency.lockutils [None req-84c73deb-f159-49ec-b0e1-f72828113dec 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Lock "c8403a0c-2fe6-48fe-91af-ec5aca71e12d-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:14:27 np0005548731 nova_compute[232433]: 2025-12-06 07:14:27.061 232437 DEBUG nova.virt.libvirt.vif [None req-84c73deb-f159-49ec-b0e1-f72828113dec 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T07:14:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-893709654',display_name='tempest-ServerActionsTestJSON-server-893709654',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-893709654',id=68,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAYy9PI2opG1Yb015LzaQaZHiAr4KsuqNy5RLRivgn9w0frXJzdA9SLIokq/TNHsTv+OZ3SzlEhSSm/zy2gaUVX2tVfQksdYXi87Z2HYYYX2anFBfTxIFgh3j22gU5Usow==',key_name='tempest-keypair-1101896810',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='929e2be1488d4b80b7ad8946093a6abe',ramdisk_id='',reservation_id='r-klri94j0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestJSON-1877526843',owner_user_name='tempest-ServerActionsTestJSON-1877526843-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:14:10Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='627c36bb63534e52a4b1d5adf47e6ffd',uuid=c8403a0c-2fe6-48fe-91af-ec5aca71e12d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a599f1a0-5413-4dc9-9ae4-d7ba512d761c", "address": "fa:16:3e:9b:0b:0a", "network": {"id": "4d599401-3772-4e38-8cd2-d774d370af64", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-809610913-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "929e2be1488d4b80b7ad8946093a6abe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa599f1a0-54", "ovs_interfaceid": "a599f1a0-5413-4dc9-9ae4-d7ba512d761c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec  6 02:14:27 np0005548731 nova_compute[232433]: 2025-12-06 07:14:27.062 232437 DEBUG nova.network.os_vif_util [None req-84c73deb-f159-49ec-b0e1-f72828113dec 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Converting VIF {"id": "a599f1a0-5413-4dc9-9ae4-d7ba512d761c", "address": "fa:16:3e:9b:0b:0a", "network": {"id": "4d599401-3772-4e38-8cd2-d774d370af64", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-809610913-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "929e2be1488d4b80b7ad8946093a6abe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa599f1a0-54", "ovs_interfaceid": "a599f1a0-5413-4dc9-9ae4-d7ba512d761c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 02:14:27 np0005548731 nova_compute[232433]: 2025-12-06 07:14:27.062 232437 DEBUG nova.network.os_vif_util [None req-84c73deb-f159-49ec-b0e1-f72828113dec 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9b:0b:0a,bridge_name='br-int',has_traffic_filtering=True,id=a599f1a0-5413-4dc9-9ae4-d7ba512d761c,network=Network(4d599401-3772-4e38-8cd2-d774d370af64),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa599f1a0-54') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 02:14:27 np0005548731 nova_compute[232433]: 2025-12-06 07:14:27.063 232437 DEBUG os_vif [None req-84c73deb-f159-49ec-b0e1-f72828113dec 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:9b:0b:0a,bridge_name='br-int',has_traffic_filtering=True,id=a599f1a0-5413-4dc9-9ae4-d7ba512d761c,network=Network(4d599401-3772-4e38-8cd2-d774d370af64),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa599f1a0-54') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec  6 02:14:27 np0005548731 nova_compute[232433]: 2025-12-06 07:14:27.063 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:14:27 np0005548731 nova_compute[232433]: 2025-12-06 07:14:27.064 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:14:27 np0005548731 nova_compute[232433]: 2025-12-06 07:14:27.064 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  6 02:14:27 np0005548731 nova_compute[232433]: 2025-12-06 07:14:27.068 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:14:27 np0005548731 nova_compute[232433]: 2025-12-06 07:14:27.068 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa599f1a0-54, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:14:27 np0005548731 nova_compute[232433]: 2025-12-06 07:14:27.069 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapa599f1a0-54, col_values=(('external_ids', {'iface-id': 'a599f1a0-5413-4dc9-9ae4-d7ba512d761c', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:9b:0b:0a', 'vm-uuid': 'c8403a0c-2fe6-48fe-91af-ec5aca71e12d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:14:27 np0005548731 NetworkManager[49182]: <info>  [1765005267.0710] manager: (tapa599f1a0-54): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/115)
Dec  6 02:14:27 np0005548731 nova_compute[232433]: 2025-12-06 07:14:27.072 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  6 02:14:27 np0005548731 nova_compute[232433]: 2025-12-06 07:14:27.075 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:14:27 np0005548731 nova_compute[232433]: 2025-12-06 07:14:27.076 232437 INFO os_vif [None req-84c73deb-f159-49ec-b0e1-f72828113dec 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:9b:0b:0a,bridge_name='br-int',has_traffic_filtering=True,id=a599f1a0-5413-4dc9-9ae4-d7ba512d761c,network=Network(4d599401-3772-4e38-8cd2-d774d370af64),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa599f1a0-54')#033[00m
Dec  6 02:14:27 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e247 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:14:27 np0005548731 nova_compute[232433]: 2025-12-06 07:14:27.272 232437 DEBUG nova.virt.libvirt.driver [None req-84c73deb-f159-49ec-b0e1-f72828113dec 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  6 02:14:27 np0005548731 nova_compute[232433]: 2025-12-06 07:14:27.272 232437 DEBUG nova.virt.libvirt.driver [None req-84c73deb-f159-49ec-b0e1-f72828113dec 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  6 02:14:27 np0005548731 nova_compute[232433]: 2025-12-06 07:14:27.273 232437 DEBUG nova.virt.libvirt.driver [None req-84c73deb-f159-49ec-b0e1-f72828113dec 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] No VIF found with MAC fa:16:3e:9b:0b:0a, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec  6 02:14:27 np0005548731 nova_compute[232433]: 2025-12-06 07:14:27.274 232437 INFO nova.virt.libvirt.driver [None req-84c73deb-f159-49ec-b0e1-f72828113dec 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] [instance: c8403a0c-2fe6-48fe-91af-ec5aca71e12d] Using config drive#033[00m
Dec  6 02:14:27 np0005548731 nova_compute[232433]: 2025-12-06 07:14:27.303 232437 DEBUG nova.storage.rbd_utils [None req-84c73deb-f159-49ec-b0e1-f72828113dec 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] rbd image c8403a0c-2fe6-48fe-91af-ec5aca71e12d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:14:27 np0005548731 nova_compute[232433]: 2025-12-06 07:14:27.596 232437 INFO nova.virt.libvirt.driver [None req-84c73deb-f159-49ec-b0e1-f72828113dec 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] [instance: c8403a0c-2fe6-48fe-91af-ec5aca71e12d] Creating config drive at /var/lib/nova/instances/c8403a0c-2fe6-48fe-91af-ec5aca71e12d/disk.config#033[00m
Dec  6 02:14:27 np0005548731 nova_compute[232433]: 2025-12-06 07:14:27.604 232437 DEBUG oslo_concurrency.processutils [None req-84c73deb-f159-49ec-b0e1-f72828113dec 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/c8403a0c-2fe6-48fe-91af-ec5aca71e12d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp2e1dlb_u execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:14:27 np0005548731 nova_compute[232433]: 2025-12-06 07:14:27.741 232437 DEBUG oslo_concurrency.processutils [None req-84c73deb-f159-49ec-b0e1-f72828113dec 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/c8403a0c-2fe6-48fe-91af-ec5aca71e12d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp2e1dlb_u" returned: 0 in 0.138s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:14:28 np0005548731 nova_compute[232433]: 2025-12-06 07:14:28.115 232437 DEBUG nova.storage.rbd_utils [None req-84c73deb-f159-49ec-b0e1-f72828113dec 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] rbd image c8403a0c-2fe6-48fe-91af-ec5aca71e12d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:14:28 np0005548731 nova_compute[232433]: 2025-12-06 07:14:28.119 232437 DEBUG oslo_concurrency.processutils [None req-84c73deb-f159-49ec-b0e1-f72828113dec 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/c8403a0c-2fe6-48fe-91af-ec5aca71e12d/disk.config c8403a0c-2fe6-48fe-91af-ec5aca71e12d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:14:28 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:14:28 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:14:28 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:14:28.362 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:14:28 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:14:28 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 02:14:28 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:14:28.659 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 02:14:30 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:14:30 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:14:30 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:14:30.364 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:14:30 np0005548731 nova_compute[232433]: 2025-12-06 07:14:30.475 232437 DEBUG oslo_concurrency.processutils [None req-84c73deb-f159-49ec-b0e1-f72828113dec 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/c8403a0c-2fe6-48fe-91af-ec5aca71e12d/disk.config c8403a0c-2fe6-48fe-91af-ec5aca71e12d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.355s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:14:30 np0005548731 nova_compute[232433]: 2025-12-06 07:14:30.475 232437 INFO nova.virt.libvirt.driver [None req-84c73deb-f159-49ec-b0e1-f72828113dec 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] [instance: c8403a0c-2fe6-48fe-91af-ec5aca71e12d] Deleting local config drive /var/lib/nova/instances/c8403a0c-2fe6-48fe-91af-ec5aca71e12d/disk.config because it was imported into RBD.#033[00m
Dec  6 02:14:30 np0005548731 kernel: tapa599f1a0-54: entered promiscuous mode
Dec  6 02:14:30 np0005548731 ovn_controller[133927]: 2025-12-06T07:14:30Z|00200|binding|INFO|Claiming lport a599f1a0-5413-4dc9-9ae4-d7ba512d761c for this chassis.
Dec  6 02:14:30 np0005548731 ovn_controller[133927]: 2025-12-06T07:14:30Z|00201|binding|INFO|a599f1a0-5413-4dc9-9ae4-d7ba512d761c: Claiming fa:16:3e:9b:0b:0a 10.100.0.6
Dec  6 02:14:30 np0005548731 nova_compute[232433]: 2025-12-06 07:14:30.515 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:14:30 np0005548731 NetworkManager[49182]: <info>  [1765005270.5167] manager: (tapa599f1a0-54): new Tun device (/org/freedesktop/NetworkManager/Devices/116)
Dec  6 02:14:30 np0005548731 nova_compute[232433]: 2025-12-06 07:14:30.521 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:14:30 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:14:30.527 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9b:0b:0a 10.100.0.6'], port_security=['fa:16:3e:9b:0b:0a 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'c8403a0c-2fe6-48fe-91af-ec5aca71e12d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4d599401-3772-4e38-8cd2-d774d370af64', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '929e2be1488d4b80b7ad8946093a6abe', 'neutron:revision_number': '2', 'neutron:security_group_ids': '310d97ff-0e42-4be5-a68e-20cbdb7be60d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=222872e8-5260-47b5-883e-369af9b3a47f, chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], logical_port=a599f1a0-5413-4dc9-9ae4-d7ba512d761c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 02:14:30 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:14:30.528 143965 INFO neutron.agent.ovn.metadata.agent [-] Port a599f1a0-5413-4dc9-9ae4-d7ba512d761c in datapath 4d599401-3772-4e38-8cd2-d774d370af64 bound to our chassis#033[00m
Dec  6 02:14:30 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:14:30.529 143965 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 4d599401-3772-4e38-8cd2-d774d370af64#033[00m
Dec  6 02:14:30 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:14:30.541 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[678493a4-1f02-455c-9cf2-cc54bc09e436]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:14:30 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:14:30.542 143965 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap4d599401-31 in ovnmeta-4d599401-3772-4e38-8cd2-d774d370af64 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Dec  6 02:14:30 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:14:30.543 236668 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap4d599401-30 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Dec  6 02:14:30 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:14:30.544 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[bc64d2c4-b550-4a67-a9d1-262bf80f6a43]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:14:30 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:14:30.544 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[f48adcb1-8c49-41c5-93b6-0cbd2fd46075]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:14:30 np0005548731 systemd-machined[195355]: New machine qemu-28-instance-00000044.
Dec  6 02:14:30 np0005548731 systemd-udevd[260575]: Network interface NamePolicy= disabled on kernel command line.
Dec  6 02:14:30 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:14:30.554 144079 DEBUG oslo.privsep.daemon [-] privsep: reply[75aac7a7-3ecb-4b78-ac1a-f3986151d992]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:14:30 np0005548731 NetworkManager[49182]: <info>  [1765005270.5616] device (tapa599f1a0-54): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec  6 02:14:30 np0005548731 NetworkManager[49182]: <info>  [1765005270.5626] device (tapa599f1a0-54): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec  6 02:14:30 np0005548731 systemd[1]: Started Virtual Machine qemu-28-instance-00000044.
Dec  6 02:14:30 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:14:30.581 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[195bbcc7-582c-4f45-ab54-244e4c802ed9]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:14:30 np0005548731 nova_compute[232433]: 2025-12-06 07:14:30.586 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:14:30 np0005548731 ovn_controller[133927]: 2025-12-06T07:14:30Z|00202|binding|INFO|Setting lport a599f1a0-5413-4dc9-9ae4-d7ba512d761c ovn-installed in OVS
Dec  6 02:14:30 np0005548731 ovn_controller[133927]: 2025-12-06T07:14:30Z|00203|binding|INFO|Setting lport a599f1a0-5413-4dc9-9ae4-d7ba512d761c up in Southbound
Dec  6 02:14:30 np0005548731 nova_compute[232433]: 2025-12-06 07:14:30.593 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:14:30 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:14:30.611 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[b1d88653-25bc-490f-a97c-4e41a389b64d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:14:30 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:14:30.616 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[1dfc7c46-b747-4555-95fc-ea60ef73c42a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:14:30 np0005548731 NetworkManager[49182]: <info>  [1765005270.6173] manager: (tap4d599401-30): new Veth device (/org/freedesktop/NetworkManager/Devices/117)
Dec  6 02:14:30 np0005548731 systemd-udevd[260579]: Network interface NamePolicy= disabled on kernel command line.
Dec  6 02:14:30 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:14:30.647 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[5fed1754-9712-4d0b-92e5-4b071a49d34f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:14:30 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:14:30.650 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[a6352d39-f45b-46e2-8df9-6cd35c71b54e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:14:30 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:14:30 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:14:30 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:14:30.662 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:14:30 np0005548731 NetworkManager[49182]: <info>  [1765005270.6714] device (tap4d599401-30): carrier: link connected
Dec  6 02:14:30 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:14:30.677 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[264fa2ce-c454-4b1a-a505-284aa7cb92c7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:14:30 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:14:30.692 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[5fa4bc3e-d721-4056-9ee9-c730ff30df1b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4d599401-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:05:4c:b3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 70], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 553267, 'reachable_time': 37249, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 260607, 'error': None, 'target': 'ovnmeta-4d599401-3772-4e38-8cd2-d774d370af64', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:14:30 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:14:30.705 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[7bc81580-de76-4fe0-86d1-068a99ae0720]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe05:4cb3'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 553267, 'tstamp': 553267}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 260608, 'error': None, 'target': 'ovnmeta-4d599401-3772-4e38-8cd2-d774d370af64', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:14:30 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:14:30.721 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[b3f8cc1d-dc4c-4009-992e-7fcf4fd3344b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4d599401-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:05:4c:b3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 70], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 553267, 'reachable_time': 37249, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 260609, 'error': None, 'target': 'ovnmeta-4d599401-3772-4e38-8cd2-d774d370af64', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:14:30 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:14:30.749 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[b41682ba-8160-4bcc-b326-f5f49c1f51e1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:14:30 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:14:30.804 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[8ddfd4cf-da59-4f88-9087-6ce49f32af5e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:14:30 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:14:30.806 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4d599401-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:14:30 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:14:30.806 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  6 02:14:30 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:14:30.806 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4d599401-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:14:30 np0005548731 nova_compute[232433]: 2025-12-06 07:14:30.808 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:14:30 np0005548731 NetworkManager[49182]: <info>  [1765005270.8090] manager: (tap4d599401-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/118)
Dec  6 02:14:30 np0005548731 kernel: tap4d599401-30: entered promiscuous mode
Dec  6 02:14:30 np0005548731 nova_compute[232433]: 2025-12-06 07:14:30.812 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:14:30 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:14:30.813 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap4d599401-30, col_values=(('external_ids', {'iface-id': 'd5f15755-ab6a-4ce9-857e-63f6c0e19fd8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:14:30 np0005548731 ovn_controller[133927]: 2025-12-06T07:14:30Z|00204|binding|INFO|Releasing lport d5f15755-ab6a-4ce9-857e-63f6c0e19fd8 from this chassis (sb_readonly=0)
Dec  6 02:14:30 np0005548731 nova_compute[232433]: 2025-12-06 07:14:30.814 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:14:30 np0005548731 nova_compute[232433]: 2025-12-06 07:14:30.833 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:14:30 np0005548731 nova_compute[232433]: 2025-12-06 07:14:30.835 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:14:30 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:14:30.835 143965 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/4d599401-3772-4e38-8cd2-d774d370af64.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/4d599401-3772-4e38-8cd2-d774d370af64.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Dec  6 02:14:30 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:14:30.836 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[423c1fcc-39cf-4350-9cfa-a1e0a86b2eb6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:14:30 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:14:30.837 143965 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec  6 02:14:30 np0005548731 ovn_metadata_agent[143960]: global
Dec  6 02:14:30 np0005548731 ovn_metadata_agent[143960]:    log         /dev/log local0 debug
Dec  6 02:14:30 np0005548731 ovn_metadata_agent[143960]:    log-tag     haproxy-metadata-proxy-4d599401-3772-4e38-8cd2-d774d370af64
Dec  6 02:14:30 np0005548731 ovn_metadata_agent[143960]:    user        root
Dec  6 02:14:30 np0005548731 ovn_metadata_agent[143960]:    group       root
Dec  6 02:14:30 np0005548731 ovn_metadata_agent[143960]:    maxconn     1024
Dec  6 02:14:30 np0005548731 ovn_metadata_agent[143960]:    pidfile     /var/lib/neutron/external/pids/4d599401-3772-4e38-8cd2-d774d370af64.pid.haproxy
Dec  6 02:14:30 np0005548731 ovn_metadata_agent[143960]:    daemon
Dec  6 02:14:30 np0005548731 ovn_metadata_agent[143960]: 
Dec  6 02:14:30 np0005548731 ovn_metadata_agent[143960]: defaults
Dec  6 02:14:30 np0005548731 ovn_metadata_agent[143960]:    log global
Dec  6 02:14:30 np0005548731 ovn_metadata_agent[143960]:    mode http
Dec  6 02:14:30 np0005548731 ovn_metadata_agent[143960]:    option httplog
Dec  6 02:14:30 np0005548731 ovn_metadata_agent[143960]:    option dontlognull
Dec  6 02:14:30 np0005548731 ovn_metadata_agent[143960]:    option http-server-close
Dec  6 02:14:30 np0005548731 ovn_metadata_agent[143960]:    option forwardfor
Dec  6 02:14:30 np0005548731 ovn_metadata_agent[143960]:    retries                 3
Dec  6 02:14:30 np0005548731 ovn_metadata_agent[143960]:    timeout http-request    30s
Dec  6 02:14:30 np0005548731 ovn_metadata_agent[143960]:    timeout connect         30s
Dec  6 02:14:30 np0005548731 ovn_metadata_agent[143960]:    timeout client          32s
Dec  6 02:14:30 np0005548731 ovn_metadata_agent[143960]:    timeout server          32s
Dec  6 02:14:30 np0005548731 ovn_metadata_agent[143960]:    timeout http-keep-alive 30s
Dec  6 02:14:30 np0005548731 ovn_metadata_agent[143960]: 
Dec  6 02:14:30 np0005548731 ovn_metadata_agent[143960]: 
Dec  6 02:14:30 np0005548731 ovn_metadata_agent[143960]: listen listener
Dec  6 02:14:30 np0005548731 ovn_metadata_agent[143960]:    bind 169.254.169.254:80
Dec  6 02:14:30 np0005548731 ovn_metadata_agent[143960]:    server metadata /var/lib/neutron/metadata_proxy
Dec  6 02:14:30 np0005548731 ovn_metadata_agent[143960]:    http-request add-header X-OVN-Network-ID 4d599401-3772-4e38-8cd2-d774d370af64
Dec  6 02:14:30 np0005548731 ovn_metadata_agent[143960]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Dec  6 02:14:30 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:14:30.838 143965 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-4d599401-3772-4e38-8cd2-d774d370af64', 'env', 'PROCESS_TAG=haproxy-4d599401-3772-4e38-8cd2-d774d370af64', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/4d599401-3772-4e38-8cd2-d774d370af64.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Dec  6 02:14:31 np0005548731 nova_compute[232433]: 2025-12-06 07:14:31.111 232437 DEBUG nova.virt.driver [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Emitting event <LifecycleEvent: 1765005271.1112196, c8403a0c-2fe6-48fe-91af-ec5aca71e12d => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 02:14:31 np0005548731 nova_compute[232433]: 2025-12-06 07:14:31.112 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: c8403a0c-2fe6-48fe-91af-ec5aca71e12d] VM Started (Lifecycle Event)#033[00m
Dec  6 02:14:31 np0005548731 nova_compute[232433]: 2025-12-06 07:14:31.139 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: c8403a0c-2fe6-48fe-91af-ec5aca71e12d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:14:31 np0005548731 nova_compute[232433]: 2025-12-06 07:14:31.143 232437 DEBUG nova.virt.driver [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Emitting event <LifecycleEvent: 1765005271.1114337, c8403a0c-2fe6-48fe-91af-ec5aca71e12d => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 02:14:31 np0005548731 nova_compute[232433]: 2025-12-06 07:14:31.143 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: c8403a0c-2fe6-48fe-91af-ec5aca71e12d] VM Paused (Lifecycle Event)#033[00m
Dec  6 02:14:31 np0005548731 nova_compute[232433]: 2025-12-06 07:14:31.161 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: c8403a0c-2fe6-48fe-91af-ec5aca71e12d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:14:31 np0005548731 nova_compute[232433]: 2025-12-06 07:14:31.164 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: c8403a0c-2fe6-48fe-91af-ec5aca71e12d] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  6 02:14:31 np0005548731 nova_compute[232433]: 2025-12-06 07:14:31.187 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: c8403a0c-2fe6-48fe-91af-ec5aca71e12d] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  6 02:14:31 np0005548731 nova_compute[232433]: 2025-12-06 07:14:31.221 232437 DEBUG nova.compute.manager [req-a3c5b4f8-a99b-4909-a08b-eac135fa0e83 req-ccc2023e-e53a-4d73-a6b0-dbca8aca268c 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: c8403a0c-2fe6-48fe-91af-ec5aca71e12d] Received event network-vif-plugged-a599f1a0-5413-4dc9-9ae4-d7ba512d761c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:14:31 np0005548731 nova_compute[232433]: 2025-12-06 07:14:31.222 232437 DEBUG oslo_concurrency.lockutils [req-a3c5b4f8-a99b-4909-a08b-eac135fa0e83 req-ccc2023e-e53a-4d73-a6b0-dbca8aca268c 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "c8403a0c-2fe6-48fe-91af-ec5aca71e12d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:14:31 np0005548731 nova_compute[232433]: 2025-12-06 07:14:31.222 232437 DEBUG oslo_concurrency.lockutils [req-a3c5b4f8-a99b-4909-a08b-eac135fa0e83 req-ccc2023e-e53a-4d73-a6b0-dbca8aca268c 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "c8403a0c-2fe6-48fe-91af-ec5aca71e12d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:14:31 np0005548731 nova_compute[232433]: 2025-12-06 07:14:31.222 232437 DEBUG oslo_concurrency.lockutils [req-a3c5b4f8-a99b-4909-a08b-eac135fa0e83 req-ccc2023e-e53a-4d73-a6b0-dbca8aca268c 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "c8403a0c-2fe6-48fe-91af-ec5aca71e12d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:14:31 np0005548731 nova_compute[232433]: 2025-12-06 07:14:31.223 232437 DEBUG nova.compute.manager [req-a3c5b4f8-a99b-4909-a08b-eac135fa0e83 req-ccc2023e-e53a-4d73-a6b0-dbca8aca268c 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: c8403a0c-2fe6-48fe-91af-ec5aca71e12d] Processing event network-vif-plugged-a599f1a0-5413-4dc9-9ae4-d7ba512d761c _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Dec  6 02:14:31 np0005548731 nova_compute[232433]: 2025-12-06 07:14:31.223 232437 DEBUG nova.compute.manager [None req-84c73deb-f159-49ec-b0e1-f72828113dec 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] [instance: c8403a0c-2fe6-48fe-91af-ec5aca71e12d] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec  6 02:14:31 np0005548731 nova_compute[232433]: 2025-12-06 07:14:31.226 232437 DEBUG nova.virt.driver [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Emitting event <LifecycleEvent: 1765005271.2266872, c8403a0c-2fe6-48fe-91af-ec5aca71e12d => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 02:14:31 np0005548731 nova_compute[232433]: 2025-12-06 07:14:31.227 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: c8403a0c-2fe6-48fe-91af-ec5aca71e12d] VM Resumed (Lifecycle Event)#033[00m
Dec  6 02:14:31 np0005548731 nova_compute[232433]: 2025-12-06 07:14:31.228 232437 DEBUG nova.virt.libvirt.driver [None req-84c73deb-f159-49ec-b0e1-f72828113dec 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] [instance: c8403a0c-2fe6-48fe-91af-ec5aca71e12d] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Dec  6 02:14:31 np0005548731 nova_compute[232433]: 2025-12-06 07:14:31.230 232437 INFO nova.virt.libvirt.driver [-] [instance: c8403a0c-2fe6-48fe-91af-ec5aca71e12d] Instance spawned successfully.#033[00m
Dec  6 02:14:31 np0005548731 nova_compute[232433]: 2025-12-06 07:14:31.231 232437 DEBUG nova.virt.libvirt.driver [None req-84c73deb-f159-49ec-b0e1-f72828113dec 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] [instance: c8403a0c-2fe6-48fe-91af-ec5aca71e12d] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Dec  6 02:14:31 np0005548731 podman[260682]: 2025-12-06 07:14:31.176219591 +0000 UTC m=+0.022483013 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec  6 02:14:31 np0005548731 nova_compute[232433]: 2025-12-06 07:14:31.323 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: c8403a0c-2fe6-48fe-91af-ec5aca71e12d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:14:31 np0005548731 nova_compute[232433]: 2025-12-06 07:14:31.328 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: c8403a0c-2fe6-48fe-91af-ec5aca71e12d] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  6 02:14:31 np0005548731 nova_compute[232433]: 2025-12-06 07:14:31.331 232437 DEBUG nova.virt.libvirt.driver [None req-84c73deb-f159-49ec-b0e1-f72828113dec 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] [instance: c8403a0c-2fe6-48fe-91af-ec5aca71e12d] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 02:14:31 np0005548731 nova_compute[232433]: 2025-12-06 07:14:31.331 232437 DEBUG nova.virt.libvirt.driver [None req-84c73deb-f159-49ec-b0e1-f72828113dec 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] [instance: c8403a0c-2fe6-48fe-91af-ec5aca71e12d] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 02:14:31 np0005548731 nova_compute[232433]: 2025-12-06 07:14:31.332 232437 DEBUG nova.virt.libvirt.driver [None req-84c73deb-f159-49ec-b0e1-f72828113dec 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] [instance: c8403a0c-2fe6-48fe-91af-ec5aca71e12d] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 02:14:31 np0005548731 nova_compute[232433]: 2025-12-06 07:14:31.332 232437 DEBUG nova.virt.libvirt.driver [None req-84c73deb-f159-49ec-b0e1-f72828113dec 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] [instance: c8403a0c-2fe6-48fe-91af-ec5aca71e12d] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 02:14:31 np0005548731 nova_compute[232433]: 2025-12-06 07:14:31.332 232437 DEBUG nova.virt.libvirt.driver [None req-84c73deb-f159-49ec-b0e1-f72828113dec 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] [instance: c8403a0c-2fe6-48fe-91af-ec5aca71e12d] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 02:14:31 np0005548731 nova_compute[232433]: 2025-12-06 07:14:31.333 232437 DEBUG nova.virt.libvirt.driver [None req-84c73deb-f159-49ec-b0e1-f72828113dec 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] [instance: c8403a0c-2fe6-48fe-91af-ec5aca71e12d] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 02:14:31 np0005548731 nova_compute[232433]: 2025-12-06 07:14:31.371 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: c8403a0c-2fe6-48fe-91af-ec5aca71e12d] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  6 02:14:31 np0005548731 podman[260682]: 2025-12-06 07:14:31.401615551 +0000 UTC m=+0.247878943 container create a5fbdb4cd12236b89645fc9ab405c1ff58c8aa40494e676ec078c683b5de780c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4d599401-3772-4e38-8cd2-d774d370af64, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0)
Dec  6 02:14:31 np0005548731 nova_compute[232433]: 2025-12-06 07:14:31.450 232437 INFO nova.compute.manager [None req-84c73deb-f159-49ec-b0e1-f72828113dec 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] [instance: c8403a0c-2fe6-48fe-91af-ec5aca71e12d] Took 21.36 seconds to spawn the instance on the hypervisor.#033[00m
Dec  6 02:14:31 np0005548731 nova_compute[232433]: 2025-12-06 07:14:31.450 232437 DEBUG nova.compute.manager [None req-84c73deb-f159-49ec-b0e1-f72828113dec 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] [instance: c8403a0c-2fe6-48fe-91af-ec5aca71e12d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:14:31 np0005548731 systemd[1]: Started libpod-conmon-a5fbdb4cd12236b89645fc9ab405c1ff58c8aa40494e676ec078c683b5de780c.scope.
Dec  6 02:14:31 np0005548731 systemd[1]: Started libcrun container.
Dec  6 02:14:31 np0005548731 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/de922b40da32c2c2e683948d8098d87d1822b6573690dcf76d8c75364abee621/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec  6 02:14:31 np0005548731 nova_compute[232433]: 2025-12-06 07:14:31.544 232437 INFO nova.compute.manager [None req-84c73deb-f159-49ec-b0e1-f72828113dec 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] [instance: c8403a0c-2fe6-48fe-91af-ec5aca71e12d] Took 22.54 seconds to build instance.#033[00m
Dec  6 02:14:31 np0005548731 nova_compute[232433]: 2025-12-06 07:14:31.566 232437 DEBUG oslo_concurrency.lockutils [None req-84c73deb-f159-49ec-b0e1-f72828113dec 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Lock "c8403a0c-2fe6-48fe-91af-ec5aca71e12d" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 22.645s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:14:31 np0005548731 podman[260682]: 2025-12-06 07:14:31.701087599 +0000 UTC m=+0.547351011 container init a5fbdb4cd12236b89645fc9ab405c1ff58c8aa40494e676ec078c683b5de780c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4d599401-3772-4e38-8cd2-d774d370af64, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0)
Dec  6 02:14:31 np0005548731 podman[260682]: 2025-12-06 07:14:31.711741311 +0000 UTC m=+0.558004703 container start a5fbdb4cd12236b89645fc9ab405c1ff58c8aa40494e676ec078c683b5de780c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4d599401-3772-4e38-8cd2-d774d370af64, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Dec  6 02:14:31 np0005548731 neutron-haproxy-ovnmeta-4d599401-3772-4e38-8cd2-d774d370af64[260697]: [NOTICE]   (260701) : New worker (260703) forked
Dec  6 02:14:31 np0005548731 neutron-haproxy-ovnmeta-4d599401-3772-4e38-8cd2-d774d370af64[260697]: [NOTICE]   (260701) : Loading success.
Dec  6 02:14:31 np0005548731 nova_compute[232433]: 2025-12-06 07:14:31.816 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:14:32 np0005548731 nova_compute[232433]: 2025-12-06 07:14:32.071 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:14:32 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e247 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:14:32 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:14:32 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:14:32 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:14:32.366 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:14:32 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:14:32 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:14:32 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:14:32.666 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:14:32 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e248 e248: 3 total, 3 up, 3 in
Dec  6 02:14:33 np0005548731 nova_compute[232433]: 2025-12-06 07:14:33.498 232437 DEBUG nova.compute.manager [req-d38602ba-d992-4c8b-bf70-321cbd2286f8 req-b44655af-1d0c-4812-9a6a-68b2068d2805 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: c8403a0c-2fe6-48fe-91af-ec5aca71e12d] Received event network-vif-plugged-a599f1a0-5413-4dc9-9ae4-d7ba512d761c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:14:33 np0005548731 nova_compute[232433]: 2025-12-06 07:14:33.498 232437 DEBUG oslo_concurrency.lockutils [req-d38602ba-d992-4c8b-bf70-321cbd2286f8 req-b44655af-1d0c-4812-9a6a-68b2068d2805 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "c8403a0c-2fe6-48fe-91af-ec5aca71e12d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:14:33 np0005548731 nova_compute[232433]: 2025-12-06 07:14:33.499 232437 DEBUG oslo_concurrency.lockutils [req-d38602ba-d992-4c8b-bf70-321cbd2286f8 req-b44655af-1d0c-4812-9a6a-68b2068d2805 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "c8403a0c-2fe6-48fe-91af-ec5aca71e12d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:14:33 np0005548731 nova_compute[232433]: 2025-12-06 07:14:33.499 232437 DEBUG oslo_concurrency.lockutils [req-d38602ba-d992-4c8b-bf70-321cbd2286f8 req-b44655af-1d0c-4812-9a6a-68b2068d2805 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "c8403a0c-2fe6-48fe-91af-ec5aca71e12d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:14:33 np0005548731 nova_compute[232433]: 2025-12-06 07:14:33.499 232437 DEBUG nova.compute.manager [req-d38602ba-d992-4c8b-bf70-321cbd2286f8 req-b44655af-1d0c-4812-9a6a-68b2068d2805 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: c8403a0c-2fe6-48fe-91af-ec5aca71e12d] No waiting events found dispatching network-vif-plugged-a599f1a0-5413-4dc9-9ae4-d7ba512d761c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 02:14:33 np0005548731 nova_compute[232433]: 2025-12-06 07:14:33.499 232437 WARNING nova.compute.manager [req-d38602ba-d992-4c8b-bf70-321cbd2286f8 req-b44655af-1d0c-4812-9a6a-68b2068d2805 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: c8403a0c-2fe6-48fe-91af-ec5aca71e12d] Received unexpected event network-vif-plugged-a599f1a0-5413-4dc9-9ae4-d7ba512d761c for instance with vm_state active and task_state None.#033[00m
Dec  6 02:14:34 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:14:34 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:14:34 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:14:34.366 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:14:34 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:14:34 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:14:34 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:14:34.668 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:14:35 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e249 e249: 3 total, 3 up, 3 in
Dec  6 02:14:35 np0005548731 NetworkManager[49182]: <info>  [1765005275.6471] manager: (patch-br-int-to-provnet-9e78c1a1-68f4-477a-abaa-13a98bde06e5): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/119)
Dec  6 02:14:35 np0005548731 NetworkManager[49182]: <info>  [1765005275.6482] manager: (patch-provnet-9e78c1a1-68f4-477a-abaa-13a98bde06e5-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/120)
Dec  6 02:14:35 np0005548731 nova_compute[232433]: 2025-12-06 07:14:35.661 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:14:35 np0005548731 nova_compute[232433]: 2025-12-06 07:14:35.778 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:14:35 np0005548731 ovn_controller[133927]: 2025-12-06T07:14:35Z|00205|binding|INFO|Releasing lport d5f15755-ab6a-4ce9-857e-63f6c0e19fd8 from this chassis (sb_readonly=0)
Dec  6 02:14:35 np0005548731 nova_compute[232433]: 2025-12-06 07:14:35.793 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:14:36 np0005548731 nova_compute[232433]: 2025-12-06 07:14:36.091 232437 DEBUG nova.compute.manager [req-6bdc548f-9474-4d06-a064-775316b0d1e6 req-4679d320-6d83-45ce-a6a0-a253fa74b77e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: c8403a0c-2fe6-48fe-91af-ec5aca71e12d] Received event network-changed-a599f1a0-5413-4dc9-9ae4-d7ba512d761c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:14:36 np0005548731 nova_compute[232433]: 2025-12-06 07:14:36.091 232437 DEBUG nova.compute.manager [req-6bdc548f-9474-4d06-a064-775316b0d1e6 req-4679d320-6d83-45ce-a6a0-a253fa74b77e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: c8403a0c-2fe6-48fe-91af-ec5aca71e12d] Refreshing instance network info cache due to event network-changed-a599f1a0-5413-4dc9-9ae4-d7ba512d761c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  6 02:14:36 np0005548731 nova_compute[232433]: 2025-12-06 07:14:36.091 232437 DEBUG oslo_concurrency.lockutils [req-6bdc548f-9474-4d06-a064-775316b0d1e6 req-4679d320-6d83-45ce-a6a0-a253fa74b77e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-c8403a0c-2fe6-48fe-91af-ec5aca71e12d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 02:14:36 np0005548731 nova_compute[232433]: 2025-12-06 07:14:36.092 232437 DEBUG oslo_concurrency.lockutils [req-6bdc548f-9474-4d06-a064-775316b0d1e6 req-4679d320-6d83-45ce-a6a0-a253fa74b77e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-c8403a0c-2fe6-48fe-91af-ec5aca71e12d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 02:14:36 np0005548731 nova_compute[232433]: 2025-12-06 07:14:36.092 232437 DEBUG nova.network.neutron [req-6bdc548f-9474-4d06-a064-775316b0d1e6 req-4679d320-6d83-45ce-a6a0-a253fa74b77e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: c8403a0c-2fe6-48fe-91af-ec5aca71e12d] Refreshing network info cache for port a599f1a0-5413-4dc9-9ae4-d7ba512d761c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  6 02:14:36 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e250 e250: 3 total, 3 up, 3 in
Dec  6 02:14:36 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:14:36 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:14:36 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:14:36.369 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:14:36 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:14:36 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:14:36 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:14:36.671 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:14:36 np0005548731 nova_compute[232433]: 2025-12-06 07:14:36.818 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:14:37 np0005548731 nova_compute[232433]: 2025-12-06 07:14:37.072 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:14:37 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:14:37 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e251 e251: 3 total, 3 up, 3 in
Dec  6 02:14:38 np0005548731 nova_compute[232433]: 2025-12-06 07:14:38.186 232437 DEBUG nova.network.neutron [req-6bdc548f-9474-4d06-a064-775316b0d1e6 req-4679d320-6d83-45ce-a6a0-a253fa74b77e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: c8403a0c-2fe6-48fe-91af-ec5aca71e12d] Updated VIF entry in instance network info cache for port a599f1a0-5413-4dc9-9ae4-d7ba512d761c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  6 02:14:38 np0005548731 nova_compute[232433]: 2025-12-06 07:14:38.187 232437 DEBUG nova.network.neutron [req-6bdc548f-9474-4d06-a064-775316b0d1e6 req-4679d320-6d83-45ce-a6a0-a253fa74b77e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: c8403a0c-2fe6-48fe-91af-ec5aca71e12d] Updating instance_info_cache with network_info: [{"id": "a599f1a0-5413-4dc9-9ae4-d7ba512d761c", "address": "fa:16:3e:9b:0b:0a", "network": {"id": "4d599401-3772-4e38-8cd2-d774d370af64", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-809610913-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.185", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "929e2be1488d4b80b7ad8946093a6abe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa599f1a0-54", "ovs_interfaceid": "a599f1a0-5413-4dc9-9ae4-d7ba512d761c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 02:14:38 np0005548731 nova_compute[232433]: 2025-12-06 07:14:38.207 232437 DEBUG oslo_concurrency.lockutils [req-6bdc548f-9474-4d06-a064-775316b0d1e6 req-4679d320-6d83-45ce-a6a0-a253fa74b77e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-c8403a0c-2fe6-48fe-91af-ec5aca71e12d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 02:14:38 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:14:38 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:14:38 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:14:38.371 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:14:38 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:14:38 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:14:38 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:14:38.675 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:14:40 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:14:40 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:14:40 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:14:40.373 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:14:40 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:14:40 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:14:40 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:14:40.676 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:14:41 np0005548731 nova_compute[232433]: 2025-12-06 07:14:41.819 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:14:42 np0005548731 nova_compute[232433]: 2025-12-06 07:14:42.075 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:14:42 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e251 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:14:42 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:14:42 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:14:42 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:14:42.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:14:42 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:14:42 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:14:42 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:14:42.678 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:14:43 np0005548731 ovn_controller[133927]: 2025-12-06T07:14:43Z|00206|binding|INFO|Releasing lport d5f15755-ab6a-4ce9-857e-63f6c0e19fd8 from this chassis (sb_readonly=0)
Dec  6 02:14:43 np0005548731 nova_compute[232433]: 2025-12-06 07:14:43.139 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:14:43 np0005548731 ovn_controller[133927]: 2025-12-06T07:14:43Z|00207|binding|INFO|Releasing lport d5f15755-ab6a-4ce9-857e-63f6c0e19fd8 from this chassis (sb_readonly=0)
Dec  6 02:14:43 np0005548731 nova_compute[232433]: 2025-12-06 07:14:43.613 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:14:44 np0005548731 ovn_controller[133927]: 2025-12-06T07:14:44Z|00026|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:9b:0b:0a 10.100.0.6
Dec  6 02:14:44 np0005548731 ovn_controller[133927]: 2025-12-06T07:14:44Z|00027|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:9b:0b:0a 10.100.0.6
Dec  6 02:14:44 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:14:44 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:14:44 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:14:44.377 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:14:44 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:14:44 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:14:44 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:14:44.682 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:14:46 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:14:46 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 02:14:46 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:14:46.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 02:14:46 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e252 e252: 3 total, 3 up, 3 in
Dec  6 02:14:46 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:14:46 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:14:46 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:14:46.684 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:14:46 np0005548731 nova_compute[232433]: 2025-12-06 07:14:46.821 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:14:47 np0005548731 nova_compute[232433]: 2025-12-06 07:14:47.076 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:14:47 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e252 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:14:48 np0005548731 nova_compute[232433]: 2025-12-06 07:14:48.100 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:14:48 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:14:48 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:14:48 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:14:48.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:14:48 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:14:48 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:14:48 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:14:48.687 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:14:50 np0005548731 nova_compute[232433]: 2025-12-06 07:14:50.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:14:50 np0005548731 nova_compute[232433]: 2025-12-06 07:14:50.105 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  6 02:14:50 np0005548731 nova_compute[232433]: 2025-12-06 07:14:50.105 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  6 02:14:50 np0005548731 nova_compute[232433]: 2025-12-06 07:14:50.330 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:14:50 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:14:50 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:14:50 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:14:50.383 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:14:50 np0005548731 nova_compute[232433]: 2025-12-06 07:14:50.562 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Acquiring lock "refresh_cache-c8403a0c-2fe6-48fe-91af-ec5aca71e12d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 02:14:50 np0005548731 nova_compute[232433]: 2025-12-06 07:14:50.563 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Acquired lock "refresh_cache-c8403a0c-2fe6-48fe-91af-ec5aca71e12d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 02:14:50 np0005548731 nova_compute[232433]: 2025-12-06 07:14:50.563 232437 DEBUG nova.network.neutron [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] [instance: c8403a0c-2fe6-48fe-91af-ec5aca71e12d] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Dec  6 02:14:50 np0005548731 nova_compute[232433]: 2025-12-06 07:14:50.563 232437 DEBUG nova.objects.instance [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lazy-loading 'info_cache' on Instance uuid c8403a0c-2fe6-48fe-91af-ec5aca71e12d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:14:50 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:14:50 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:14:50 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:14:50.689 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:14:50 np0005548731 podman[260773]: 2025-12-06 07:14:50.927429244 +0000 UTC m=+0.059251375 container health_status 26c2ce9bdb83c6db5ccad7f5b2a7cb4e9354ab0235ad2f942ea42c8feda2142b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Dec  6 02:14:50 np0005548731 podman[260775]: 2025-12-06 07:14:50.930883968 +0000 UTC m=+0.065367934 container health_status ae4fd08cdec31d6ae2a6b91ffff7deb6ca5a8375cb52ee4b685cc6f25796824e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Dec  6 02:14:50 np0005548731 podman[260774]: 2025-12-06 07:14:50.958381743 +0000 UTC m=+0.092980873 container health_status a118c3becf56cfbcae208065771ebf10a65162bb7b96389971293e534823fb78 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=ovn_controller, io.buildah.version=1.41.3, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Dec  6 02:14:51 np0005548731 nova_compute[232433]: 2025-12-06 07:14:51.630 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:14:51 np0005548731 nova_compute[232433]: 2025-12-06 07:14:51.823 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:14:52 np0005548731 nova_compute[232433]: 2025-12-06 07:14:52.077 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:14:52 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e252 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:14:52 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:14:52 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:14:52 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:14:52.385 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:14:52 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:14:52 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:14:52 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:14:52.693 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:14:53 np0005548731 nova_compute[232433]: 2025-12-06 07:14:53.385 232437 DEBUG nova.network.neutron [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] [instance: c8403a0c-2fe6-48fe-91af-ec5aca71e12d] Updating instance_info_cache with network_info: [{"id": "a599f1a0-5413-4dc9-9ae4-d7ba512d761c", "address": "fa:16:3e:9b:0b:0a", "network": {"id": "4d599401-3772-4e38-8cd2-d774d370af64", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-809610913-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.185", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "929e2be1488d4b80b7ad8946093a6abe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa599f1a0-54", "ovs_interfaceid": "a599f1a0-5413-4dc9-9ae4-d7ba512d761c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 02:14:53 np0005548731 nova_compute[232433]: 2025-12-06 07:14:53.464 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Releasing lock "refresh_cache-c8403a0c-2fe6-48fe-91af-ec5aca71e12d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 02:14:53 np0005548731 nova_compute[232433]: 2025-12-06 07:14:53.465 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] [instance: c8403a0c-2fe6-48fe-91af-ec5aca71e12d] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Dec  6 02:14:53 np0005548731 nova_compute[232433]: 2025-12-06 07:14:53.465 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:14:53 np0005548731 nova_compute[232433]: 2025-12-06 07:14:53.466 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:14:54 np0005548731 nova_compute[232433]: 2025-12-06 07:14:54.103 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:14:54 np0005548731 nova_compute[232433]: 2025-12-06 07:14:54.104 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  6 02:14:54 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:14:54 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:14:54 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:14:54.388 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:14:54 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:14:54.474 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=25, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:ec:b3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '32:72:e7:89:e0:7d'}, ipsec=False) old=SB_Global(nb_cfg=24) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 02:14:54 np0005548731 nova_compute[232433]: 2025-12-06 07:14:54.474 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:14:54 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:14:54.475 143965 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec  6 02:14:54 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:14:54.476 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=9f96b960-b4f2-40bd-ae99-08121f5e8b78, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '25'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:14:54 np0005548731 nova_compute[232433]: 2025-12-06 07:14:54.488 232437 DEBUG oslo_concurrency.lockutils [None req-4aacfe2d-1cb2-4fae-83b9-da64ca1ea6ab 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Acquiring lock "c8403a0c-2fe6-48fe-91af-ec5aca71e12d" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:14:54 np0005548731 nova_compute[232433]: 2025-12-06 07:14:54.489 232437 DEBUG oslo_concurrency.lockutils [None req-4aacfe2d-1cb2-4fae-83b9-da64ca1ea6ab 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Lock "c8403a0c-2fe6-48fe-91af-ec5aca71e12d" acquired by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:14:54 np0005548731 nova_compute[232433]: 2025-12-06 07:14:54.489 232437 INFO nova.compute.manager [None req-4aacfe2d-1cb2-4fae-83b9-da64ca1ea6ab 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] [instance: c8403a0c-2fe6-48fe-91af-ec5aca71e12d] Rebooting instance#033[00m
Dec  6 02:14:54 np0005548731 nova_compute[232433]: 2025-12-06 07:14:54.509 232437 DEBUG oslo_concurrency.lockutils [None req-4aacfe2d-1cb2-4fae-83b9-da64ca1ea6ab 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Acquiring lock "refresh_cache-c8403a0c-2fe6-48fe-91af-ec5aca71e12d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 02:14:54 np0005548731 nova_compute[232433]: 2025-12-06 07:14:54.509 232437 DEBUG oslo_concurrency.lockutils [None req-4aacfe2d-1cb2-4fae-83b9-da64ca1ea6ab 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Acquired lock "refresh_cache-c8403a0c-2fe6-48fe-91af-ec5aca71e12d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 02:14:54 np0005548731 nova_compute[232433]: 2025-12-06 07:14:54.510 232437 DEBUG nova.network.neutron [None req-4aacfe2d-1cb2-4fae-83b9-da64ca1ea6ab 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] [instance: c8403a0c-2fe6-48fe-91af-ec5aca71e12d] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec  6 02:14:54 np0005548731 ovn_controller[133927]: 2025-12-06T07:14:54Z|00208|binding|INFO|Releasing lport d5f15755-ab6a-4ce9-857e-63f6c0e19fd8 from this chassis (sb_readonly=0)
Dec  6 02:14:54 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:14:54 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:14:54 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:14:54.699 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:14:54 np0005548731 nova_compute[232433]: 2025-12-06 07:14:54.745 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:14:54 np0005548731 nova_compute[232433]: 2025-12-06 07:14:54.876 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:14:55 np0005548731 nova_compute[232433]: 2025-12-06 07:14:55.652 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:14:55 np0005548731 nova_compute[232433]: 2025-12-06 07:14:55.992 232437 DEBUG nova.network.neutron [None req-4aacfe2d-1cb2-4fae-83b9-da64ca1ea6ab 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] [instance: c8403a0c-2fe6-48fe-91af-ec5aca71e12d] Updating instance_info_cache with network_info: [{"id": "a599f1a0-5413-4dc9-9ae4-d7ba512d761c", "address": "fa:16:3e:9b:0b:0a", "network": {"id": "4d599401-3772-4e38-8cd2-d774d370af64", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-809610913-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.185", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "929e2be1488d4b80b7ad8946093a6abe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa599f1a0-54", "ovs_interfaceid": "a599f1a0-5413-4dc9-9ae4-d7ba512d761c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 02:14:56 np0005548731 nova_compute[232433]: 2025-12-06 07:14:56.012 232437 DEBUG oslo_concurrency.lockutils [None req-4aacfe2d-1cb2-4fae-83b9-da64ca1ea6ab 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Releasing lock "refresh_cache-c8403a0c-2fe6-48fe-91af-ec5aca71e12d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 02:14:56 np0005548731 nova_compute[232433]: 2025-12-06 07:14:56.014 232437 DEBUG nova.compute.manager [None req-4aacfe2d-1cb2-4fae-83b9-da64ca1ea6ab 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] [instance: c8403a0c-2fe6-48fe-91af-ec5aca71e12d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:14:56 np0005548731 nova_compute[232433]: 2025-12-06 07:14:56.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:14:56 np0005548731 nova_compute[232433]: 2025-12-06 07:14:56.105 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:14:56 np0005548731 nova_compute[232433]: 2025-12-06 07:14:56.130 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:14:56 np0005548731 nova_compute[232433]: 2025-12-06 07:14:56.130 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:14:56 np0005548731 nova_compute[232433]: 2025-12-06 07:14:56.131 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:14:56 np0005548731 nova_compute[232433]: 2025-12-06 07:14:56.131 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  6 02:14:56 np0005548731 nova_compute[232433]: 2025-12-06 07:14:56.131 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:14:56 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:14:56 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:14:56 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:14:56.390 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:14:56 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 02:14:56 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2536976784' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 02:14:56 np0005548731 nova_compute[232433]: 2025-12-06 07:14:56.546 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.415s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:14:56 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:14:56 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:14:56 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:14:56.704 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:14:56 np0005548731 nova_compute[232433]: 2025-12-06 07:14:56.825 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:14:56 np0005548731 kernel: tapa599f1a0-54 (unregistering): left promiscuous mode
Dec  6 02:14:56 np0005548731 NetworkManager[49182]: <info>  [1765005296.9274] device (tapa599f1a0-54): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec  6 02:14:56 np0005548731 ovn_controller[133927]: 2025-12-06T07:14:56Z|00209|binding|INFO|Releasing lport a599f1a0-5413-4dc9-9ae4-d7ba512d761c from this chassis (sb_readonly=0)
Dec  6 02:14:56 np0005548731 ovn_controller[133927]: 2025-12-06T07:14:56Z|00210|binding|INFO|Setting lport a599f1a0-5413-4dc9-9ae4-d7ba512d761c down in Southbound
Dec  6 02:14:56 np0005548731 nova_compute[232433]: 2025-12-06 07:14:56.938 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:14:56 np0005548731 ovn_controller[133927]: 2025-12-06T07:14:56Z|00211|binding|INFO|Removing iface tapa599f1a0-54 ovn-installed in OVS
Dec  6 02:14:56 np0005548731 nova_compute[232433]: 2025-12-06 07:14:56.941 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:14:56 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:14:56.947 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9b:0b:0a 10.100.0.6'], port_security=['fa:16:3e:9b:0b:0a 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'c8403a0c-2fe6-48fe-91af-ec5aca71e12d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4d599401-3772-4e38-8cd2-d774d370af64', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '929e2be1488d4b80b7ad8946093a6abe', 'neutron:revision_number': '4', 'neutron:security_group_ids': '310d97ff-0e42-4be5-a68e-20cbdb7be60d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com', 'neutron:port_fip': '192.168.122.185'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=222872e8-5260-47b5-883e-369af9b3a47f, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], logical_port=a599f1a0-5413-4dc9-9ae4-d7ba512d761c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 02:14:56 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:14:56.948 143965 INFO neutron.agent.ovn.metadata.agent [-] Port a599f1a0-5413-4dc9-9ae4-d7ba512d761c in datapath 4d599401-3772-4e38-8cd2-d774d370af64 unbound from our chassis#033[00m
Dec  6 02:14:56 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:14:56.950 143965 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 4d599401-3772-4e38-8cd2-d774d370af64, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec  6 02:14:56 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:14:56.952 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[88048a0a-898c-43c6-bbd9-c2aa37af028d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:14:56 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:14:56.953 143965 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-4d599401-3772-4e38-8cd2-d774d370af64 namespace which is not needed anymore#033[00m
Dec  6 02:14:56 np0005548731 nova_compute[232433]: 2025-12-06 07:14:56.960 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:14:57 np0005548731 systemd[1]: machine-qemu\x2d28\x2dinstance\x2d00000044.scope: Deactivated successfully.
Dec  6 02:14:57 np0005548731 systemd[1]: machine-qemu\x2d28\x2dinstance\x2d00000044.scope: Consumed 13.958s CPU time.
Dec  6 02:14:57 np0005548731 systemd-machined[195355]: Machine qemu-28-instance-00000044 terminated.
Dec  6 02:14:57 np0005548731 neutron-haproxy-ovnmeta-4d599401-3772-4e38-8cd2-d774d370af64[260697]: [NOTICE]   (260701) : haproxy version is 2.8.14-c23fe91
Dec  6 02:14:57 np0005548731 neutron-haproxy-ovnmeta-4d599401-3772-4e38-8cd2-d774d370af64[260697]: [NOTICE]   (260701) : path to executable is /usr/sbin/haproxy
Dec  6 02:14:57 np0005548731 neutron-haproxy-ovnmeta-4d599401-3772-4e38-8cd2-d774d370af64[260697]: [WARNING]  (260701) : Exiting Master process...
Dec  6 02:14:57 np0005548731 neutron-haproxy-ovnmeta-4d599401-3772-4e38-8cd2-d774d370af64[260697]: [ALERT]    (260701) : Current worker (260703) exited with code 143 (Terminated)
Dec  6 02:14:57 np0005548731 neutron-haproxy-ovnmeta-4d599401-3772-4e38-8cd2-d774d370af64[260697]: [WARNING]  (260701) : All workers exited. Exiting... (0)
Dec  6 02:14:57 np0005548731 systemd[1]: libpod-a5fbdb4cd12236b89645fc9ab405c1ff58c8aa40494e676ec078c683b5de780c.scope: Deactivated successfully.
Dec  6 02:14:57 np0005548731 nova_compute[232433]: 2025-12-06 07:14:57.079 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:14:57 np0005548731 podman[260884]: 2025-12-06 07:14:57.080459194 +0000 UTC m=+0.043389506 container died a5fbdb4cd12236b89645fc9ab405c1ff58c8aa40494e676ec078c683b5de780c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4d599401-3772-4e38-8cd2-d774d370af64, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true)
Dec  6 02:14:57 np0005548731 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a5fbdb4cd12236b89645fc9ab405c1ff58c8aa40494e676ec078c683b5de780c-userdata-shm.mount: Deactivated successfully.
Dec  6 02:14:57 np0005548731 systemd[1]: var-lib-containers-storage-overlay-de922b40da32c2c2e683948d8098d87d1822b6573690dcf76d8c75364abee621-merged.mount: Deactivated successfully.
Dec  6 02:14:57 np0005548731 podman[260884]: 2025-12-06 07:14:57.122623218 +0000 UTC m=+0.085553530 container cleanup a5fbdb4cd12236b89645fc9ab405c1ff58c8aa40494e676ec078c683b5de780c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4d599401-3772-4e38-8cd2-d774d370af64, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Dec  6 02:14:57 np0005548731 systemd[1]: libpod-conmon-a5fbdb4cd12236b89645fc9ab405c1ff58c8aa40494e676ec078c683b5de780c.scope: Deactivated successfully.
Dec  6 02:14:57 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e252 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:14:57 np0005548731 nova_compute[232433]: 2025-12-06 07:14:57.183 232437 INFO nova.virt.libvirt.driver [-] [instance: c8403a0c-2fe6-48fe-91af-ec5aca71e12d] Instance destroyed successfully.#033[00m
Dec  6 02:14:57 np0005548731 nova_compute[232433]: 2025-12-06 07:14:57.185 232437 DEBUG nova.objects.instance [None req-4aacfe2d-1cb2-4fae-83b9-da64ca1ea6ab 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Lazy-loading 'resources' on Instance uuid c8403a0c-2fe6-48fe-91af-ec5aca71e12d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:14:57 np0005548731 nova_compute[232433]: 2025-12-06 07:14:57.216 232437 DEBUG nova.virt.libvirt.vif [None req-4aacfe2d-1cb2-4fae-83b9-da64ca1ea6ab 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T07:14:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-893709654',display_name='tempest-ServerActionsTestJSON-server-893709654',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-893709654',id=68,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAYy9PI2opG1Yb015LzaQaZHiAr4KsuqNy5RLRivgn9w0frXJzdA9SLIokq/TNHsTv+OZ3SzlEhSSm/zy2gaUVX2tVfQksdYXi87Z2HYYYX2anFBfTxIFgh3j22gU5Usow==',key_name='tempest-keypair-1101896810',keypairs=<?>,launch_index=0,launched_at=2025-12-06T07:14:31Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='929e2be1488d4b80b7ad8946093a6abe',ramdisk_id='',reservation_id='r-klri94j0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-1877526843',owner_user_name='tempest-ServerActionsTestJSON-1877526843-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T07:14:56Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='627c36bb63534e52a4b1d5adf47e6ffd',uuid=c8403a0c-2fe6-48fe-91af-ec5aca71e12d,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a599f1a0-5413-4dc9-9ae4-d7ba512d761c", "address": "fa:16:3e:9b:0b:0a", "network": {"id": "4d599401-3772-4e38-8cd2-d774d370af64", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-809610913-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": 
[], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.185", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "929e2be1488d4b80b7ad8946093a6abe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa599f1a0-54", "ovs_interfaceid": "a599f1a0-5413-4dc9-9ae4-d7ba512d761c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec  6 02:14:57 np0005548731 nova_compute[232433]: 2025-12-06 07:14:57.216 232437 DEBUG nova.network.os_vif_util [None req-4aacfe2d-1cb2-4fae-83b9-da64ca1ea6ab 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Converting VIF {"id": "a599f1a0-5413-4dc9-9ae4-d7ba512d761c", "address": "fa:16:3e:9b:0b:0a", "network": {"id": "4d599401-3772-4e38-8cd2-d774d370af64", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-809610913-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.185", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "929e2be1488d4b80b7ad8946093a6abe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa599f1a0-54", "ovs_interfaceid": "a599f1a0-5413-4dc9-9ae4-d7ba512d761c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 02:14:57 np0005548731 nova_compute[232433]: 2025-12-06 07:14:57.217 232437 DEBUG nova.network.os_vif_util [None req-4aacfe2d-1cb2-4fae-83b9-da64ca1ea6ab 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:9b:0b:0a,bridge_name='br-int',has_traffic_filtering=True,id=a599f1a0-5413-4dc9-9ae4-d7ba512d761c,network=Network(4d599401-3772-4e38-8cd2-d774d370af64),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa599f1a0-54') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 02:14:57 np0005548731 nova_compute[232433]: 2025-12-06 07:14:57.218 232437 DEBUG os_vif [None req-4aacfe2d-1cb2-4fae-83b9-da64ca1ea6ab 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:9b:0b:0a,bridge_name='br-int',has_traffic_filtering=True,id=a599f1a0-5413-4dc9-9ae4-d7ba512d761c,network=Network(4d599401-3772-4e38-8cd2-d774d370af64),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa599f1a0-54') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec  6 02:14:57 np0005548731 nova_compute[232433]: 2025-12-06 07:14:57.220 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:14:57 np0005548731 nova_compute[232433]: 2025-12-06 07:14:57.220 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa599f1a0-54, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:14:57 np0005548731 nova_compute[232433]: 2025-12-06 07:14:57.221 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:14:57 np0005548731 nova_compute[232433]: 2025-12-06 07:14:57.222 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:14:57 np0005548731 nova_compute[232433]: 2025-12-06 07:14:57.225 232437 INFO os_vif [None req-4aacfe2d-1cb2-4fae-83b9-da64ca1ea6ab 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:9b:0b:0a,bridge_name='br-int',has_traffic_filtering=True,id=a599f1a0-5413-4dc9-9ae4-d7ba512d761c,network=Network(4d599401-3772-4e38-8cd2-d774d370af64),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa599f1a0-54')#033[00m
Dec  6 02:14:57 np0005548731 podman[260916]: 2025-12-06 07:14:57.229626604 +0000 UTC m=+0.049324151 container remove a5fbdb4cd12236b89645fc9ab405c1ff58c8aa40494e676ec078c683b5de780c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4d599401-3772-4e38-8cd2-d774d370af64, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Dec  6 02:14:57 np0005548731 nova_compute[232433]: 2025-12-06 07:14:57.235 232437 DEBUG nova.virt.libvirt.driver [None req-4aacfe2d-1cb2-4fae-83b9-da64ca1ea6ab 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] [instance: c8403a0c-2fe6-48fe-91af-ec5aca71e12d] Start _get_guest_xml network_info=[{"id": "a599f1a0-5413-4dc9-9ae4-d7ba512d761c", "address": "fa:16:3e:9b:0b:0a", "network": {"id": "4d599401-3772-4e38-8cd2-d774d370af64", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-809610913-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.185", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "929e2be1488d4b80b7ad8946093a6abe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa599f1a0-54", "ovs_interfaceid": "a599f1a0-5413-4dc9-9ae4-d7ba512d761c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'device_type': 'disk', 'encryption_secret_uuid': None, 'disk_bus': 'virtio', 'boot_index': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'encryption_options': None, 'size': 0, 'encrypted': False, 'image_id': '6efab05d-c7cf-4770-a5c3-c806a2739063'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec  6 02:14:57 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:14:57.239 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[871a4ab8-609b-442c-812a-bb395fb379c2]: (4, ('Sat Dec  6 07:14:57 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-4d599401-3772-4e38-8cd2-d774d370af64 (a5fbdb4cd12236b89645fc9ab405c1ff58c8aa40494e676ec078c683b5de780c)\na5fbdb4cd12236b89645fc9ab405c1ff58c8aa40494e676ec078c683b5de780c\nSat Dec  6 07:14:57 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-4d599401-3772-4e38-8cd2-d774d370af64 (a5fbdb4cd12236b89645fc9ab405c1ff58c8aa40494e676ec078c683b5de780c)\na5fbdb4cd12236b89645fc9ab405c1ff58c8aa40494e676ec078c683b5de780c\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:14:57 np0005548731 nova_compute[232433]: 2025-12-06 07:14:57.241 232437 WARNING nova.virt.libvirt.driver [None req-4aacfe2d-1cb2-4fae-83b9-da64ca1ea6ab 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  6 02:14:57 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:14:57.242 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[b8c6387e-a45f-4502-ae21-3ebae4bc062d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:14:57 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:14:57.244 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4d599401-30, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:14:57 np0005548731 kernel: tap4d599401-30: left promiscuous mode
Dec  6 02:14:57 np0005548731 nova_compute[232433]: 2025-12-06 07:14:57.245 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:14:57 np0005548731 nova_compute[232433]: 2025-12-06 07:14:57.248 232437 DEBUG nova.virt.libvirt.host [None req-4aacfe2d-1cb2-4fae-83b9-da64ca1ea6ab 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec  6 02:14:57 np0005548731 nova_compute[232433]: 2025-12-06 07:14:57.249 232437 DEBUG nova.virt.libvirt.host [None req-4aacfe2d-1cb2-4fae-83b9-da64ca1ea6ab 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec  6 02:14:57 np0005548731 nova_compute[232433]: 2025-12-06 07:14:57.255 232437 DEBUG nova.virt.libvirt.host [None req-4aacfe2d-1cb2-4fae-83b9-da64ca1ea6ab 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec  6 02:14:57 np0005548731 nova_compute[232433]: 2025-12-06 07:14:57.255 232437 DEBUG nova.virt.libvirt.host [None req-4aacfe2d-1cb2-4fae-83b9-da64ca1ea6ab 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec  6 02:14:57 np0005548731 nova_compute[232433]: 2025-12-06 07:14:57.257 232437 DEBUG nova.virt.libvirt.driver [None req-4aacfe2d-1cb2-4fae-83b9-da64ca1ea6ab 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec  6 02:14:57 np0005548731 nova_compute[232433]: 2025-12-06 07:14:57.257 232437 DEBUG nova.virt.hardware [None req-4aacfe2d-1cb2-4fae-83b9-da64ca1ea6ab 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-06T06:56:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='25848a18-11d9-4f11-80b5-5d005675c76d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec  6 02:14:57 np0005548731 nova_compute[232433]: 2025-12-06 07:14:57.257 232437 DEBUG nova.virt.hardware [None req-4aacfe2d-1cb2-4fae-83b9-da64ca1ea6ab 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec  6 02:14:57 np0005548731 nova_compute[232433]: 2025-12-06 07:14:57.257 232437 DEBUG nova.virt.hardware [None req-4aacfe2d-1cb2-4fae-83b9-da64ca1ea6ab 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec  6 02:14:57 np0005548731 nova_compute[232433]: 2025-12-06 07:14:57.257 232437 DEBUG nova.virt.hardware [None req-4aacfe2d-1cb2-4fae-83b9-da64ca1ea6ab 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec  6 02:14:57 np0005548731 nova_compute[232433]: 2025-12-06 07:14:57.258 232437 DEBUG nova.virt.hardware [None req-4aacfe2d-1cb2-4fae-83b9-da64ca1ea6ab 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec  6 02:14:57 np0005548731 nova_compute[232433]: 2025-12-06 07:14:57.258 232437 DEBUG nova.virt.hardware [None req-4aacfe2d-1cb2-4fae-83b9-da64ca1ea6ab 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec  6 02:14:57 np0005548731 nova_compute[232433]: 2025-12-06 07:14:57.259 232437 DEBUG nova.virt.hardware [None req-4aacfe2d-1cb2-4fae-83b9-da64ca1ea6ab 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec  6 02:14:57 np0005548731 nova_compute[232433]: 2025-12-06 07:14:57.259 232437 DEBUG nova.virt.hardware [None req-4aacfe2d-1cb2-4fae-83b9-da64ca1ea6ab 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec  6 02:14:57 np0005548731 nova_compute[232433]: 2025-12-06 07:14:57.260 232437 DEBUG nova.virt.hardware [None req-4aacfe2d-1cb2-4fae-83b9-da64ca1ea6ab 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec  6 02:14:57 np0005548731 nova_compute[232433]: 2025-12-06 07:14:57.260 232437 DEBUG nova.virt.hardware [None req-4aacfe2d-1cb2-4fae-83b9-da64ca1ea6ab 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec  6 02:14:57 np0005548731 nova_compute[232433]: 2025-12-06 07:14:57.260 232437 DEBUG nova.virt.hardware [None req-4aacfe2d-1cb2-4fae-83b9-da64ca1ea6ab 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec  6 02:14:57 np0005548731 nova_compute[232433]: 2025-12-06 07:14:57.260 232437 DEBUG nova.objects.instance [None req-4aacfe2d-1cb2-4fae-83b9-da64ca1ea6ab 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Lazy-loading 'vcpu_model' on Instance uuid c8403a0c-2fe6-48fe-91af-ec5aca71e12d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:14:57 np0005548731 nova_compute[232433]: 2025-12-06 07:14:57.261 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:14:57 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:14:57.263 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[59746ac5-d966-4570-a57b-fa4a37d3360b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:14:57 np0005548731 nova_compute[232433]: 2025-12-06 07:14:57.265 232437 WARNING nova.virt.libvirt.driver [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Error from libvirt while getting description of instance-00000044: [Error Code 42] Domain not found: no domain with matching uuid 'c8403a0c-2fe6-48fe-91af-ec5aca71e12d' (instance-00000044): libvirt.libvirtError: Domain not found: no domain with matching uuid 'c8403a0c-2fe6-48fe-91af-ec5aca71e12d' (instance-00000044)#033[00m
Dec  6 02:14:57 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:14:57.274 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[066fc1d0-2ed0-4ee3-b896-c1a8124cf880]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:14:57 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:14:57.276 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[c4d7111a-4bfd-42c6-969a-5cfac5f84645]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:14:57 np0005548731 nova_compute[232433]: 2025-12-06 07:14:57.281 232437 DEBUG oslo_concurrency.processutils [None req-4aacfe2d-1cb2-4fae-83b9-da64ca1ea6ab 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:14:57 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:14:57.292 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[e00262e3-732a-4535-a9a3-1c54ce73df54]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 553261, 'reachable_time': 17600, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 260937, 'error': None, 'target': 'ovnmeta-4d599401-3772-4e38-8cd2-d774d370af64', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:14:57 np0005548731 systemd[1]: run-netns-ovnmeta\x2d4d599401\x2d3772\x2d4e38\x2d8cd2\x2dd774d370af64.mount: Deactivated successfully.
Dec  6 02:14:57 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:14:57.297 144079 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-4d599401-3772-4e38-8cd2-d774d370af64 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Dec  6 02:14:57 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:14:57.297 144079 DEBUG oslo.privsep.daemon [-] privsep: reply[835506c1-7b0d-4441-bece-ab7096ba05cf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:14:57 np0005548731 nova_compute[232433]: 2025-12-06 07:14:57.447 232437 WARNING nova.virt.libvirt.driver [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  6 02:14:57 np0005548731 nova_compute[232433]: 2025-12-06 07:14:57.448 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4674MB free_disk=20.942859649658203GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, 
"label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  6 02:14:57 np0005548731 nova_compute[232433]: 2025-12-06 07:14:57.448 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:14:57 np0005548731 nova_compute[232433]: 2025-12-06 07:14:57.448 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:14:57 np0005548731 nova_compute[232433]: 2025-12-06 07:14:57.532 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Instance c8403a0c-2fe6-48fe-91af-ec5aca71e12d actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  6 02:14:57 np0005548731 nova_compute[232433]: 2025-12-06 07:14:57.532 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  6 02:14:57 np0005548731 nova_compute[232433]: 2025-12-06 07:14:57.533 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  6 02:14:57 np0005548731 nova_compute[232433]: 2025-12-06 07:14:57.571 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:14:57 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  6 02:14:57 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2670706817' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  6 02:14:57 np0005548731 nova_compute[232433]: 2025-12-06 07:14:57.747 232437 DEBUG oslo_concurrency.processutils [None req-4aacfe2d-1cb2-4fae-83b9-da64ca1ea6ab 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.466s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:14:57 np0005548731 nova_compute[232433]: 2025-12-06 07:14:57.798 232437 DEBUG oslo_concurrency.processutils [None req-4aacfe2d-1cb2-4fae-83b9-da64ca1ea6ab 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:14:57 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 02:14:57 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2433529301' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 02:14:58 np0005548731 nova_compute[232433]: 2025-12-06 07:14:58.009 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.438s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:14:58 np0005548731 nova_compute[232433]: 2025-12-06 07:14:58.015 232437 DEBUG nova.compute.provider_tree [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Inventory has not changed in ProviderTree for provider: 6d00757a-082f-486d-ae84-869a2ba2e6e7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  6 02:14:58 np0005548731 nova_compute[232433]: 2025-12-06 07:14:58.038 232437 DEBUG nova.scheduler.client.report [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Inventory has not changed for provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  6 02:14:58 np0005548731 nova_compute[232433]: 2025-12-06 07:14:58.073 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  6 02:14:58 np0005548731 nova_compute[232433]: 2025-12-06 07:14:58.074 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.625s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:14:58 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  6 02:14:58 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2565456730' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  6 02:14:58 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:14:58 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:14:58 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:14:58.392 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:14:58 np0005548731 nova_compute[232433]: 2025-12-06 07:14:58.466 232437 DEBUG oslo_concurrency.processutils [None req-4aacfe2d-1cb2-4fae-83b9-da64ca1ea6ab 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.668s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:14:58 np0005548731 nova_compute[232433]: 2025-12-06 07:14:58.469 232437 DEBUG nova.virt.libvirt.vif [None req-4aacfe2d-1cb2-4fae-83b9-da64ca1ea6ab 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T07:14:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-893709654',display_name='tempest-ServerActionsTestJSON-server-893709654',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-893709654',id=68,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAYy9PI2opG1Yb015LzaQaZHiAr4KsuqNy5RLRivgn9w0frXJzdA9SLIokq/TNHsTv+OZ3SzlEhSSm/zy2gaUVX2tVfQksdYXi87Z2HYYYX2anFBfTxIFgh3j22gU5Usow==',key_name='tempest-keypair-1101896810',keypairs=<?>,launch_index=0,launched_at=2025-12-06T07:14:31Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='929e2be1488d4b80b7ad8946093a6abe',ramdisk_id='',reservation_id='r-klri94j0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-1877526843',owner_user_name='tempest-ServerActionsTestJSON-1877526843-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T07:14:56Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='627c36bb63534e52a4b1d5adf47e6ffd',uuid=c8403a0c-2fe6-48fe-91af-ec5aca71e12d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a599f1a0-5413-4dc9-9ae4-d7ba512d761c", "address": "fa:16:3e:9b:0b:0a", "network": {"id": "4d599401-3772-4e38-8cd2-d774d370af64", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-809610913-network", "subnets": [{"cidr": "10.100.0.0/28", 
"dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.185", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "929e2be1488d4b80b7ad8946093a6abe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa599f1a0-54", "ovs_interfaceid": "a599f1a0-5413-4dc9-9ae4-d7ba512d761c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec  6 02:14:58 np0005548731 nova_compute[232433]: 2025-12-06 07:14:58.470 232437 DEBUG nova.network.os_vif_util [None req-4aacfe2d-1cb2-4fae-83b9-da64ca1ea6ab 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Converting VIF {"id": "a599f1a0-5413-4dc9-9ae4-d7ba512d761c", "address": "fa:16:3e:9b:0b:0a", "network": {"id": "4d599401-3772-4e38-8cd2-d774d370af64", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-809610913-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.185", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "929e2be1488d4b80b7ad8946093a6abe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa599f1a0-54", "ovs_interfaceid": "a599f1a0-5413-4dc9-9ae4-d7ba512d761c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 02:14:58 np0005548731 nova_compute[232433]: 2025-12-06 07:14:58.470 232437 DEBUG nova.network.os_vif_util [None req-4aacfe2d-1cb2-4fae-83b9-da64ca1ea6ab 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:9b:0b:0a,bridge_name='br-int',has_traffic_filtering=True,id=a599f1a0-5413-4dc9-9ae4-d7ba512d761c,network=Network(4d599401-3772-4e38-8cd2-d774d370af64),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa599f1a0-54') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 02:14:58 np0005548731 nova_compute[232433]: 2025-12-06 07:14:58.472 232437 DEBUG nova.objects.instance [None req-4aacfe2d-1cb2-4fae-83b9-da64ca1ea6ab 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Lazy-loading 'pci_devices' on Instance uuid c8403a0c-2fe6-48fe-91af-ec5aca71e12d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:14:58 np0005548731 nova_compute[232433]: 2025-12-06 07:14:58.490 232437 DEBUG nova.virt.libvirt.driver [None req-4aacfe2d-1cb2-4fae-83b9-da64ca1ea6ab 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] [instance: c8403a0c-2fe6-48fe-91af-ec5aca71e12d] End _get_guest_xml xml=<domain type="kvm">
Dec  6 02:14:58 np0005548731 nova_compute[232433]:  <uuid>c8403a0c-2fe6-48fe-91af-ec5aca71e12d</uuid>
Dec  6 02:14:58 np0005548731 nova_compute[232433]:  <name>instance-00000044</name>
Dec  6 02:14:58 np0005548731 nova_compute[232433]:  <memory>131072</memory>
Dec  6 02:14:58 np0005548731 nova_compute[232433]:  <vcpu>1</vcpu>
Dec  6 02:14:58 np0005548731 nova_compute[232433]:  <metadata>
Dec  6 02:14:58 np0005548731 nova_compute[232433]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec  6 02:14:58 np0005548731 nova_compute[232433]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec  6 02:14:58 np0005548731 nova_compute[232433]:      <nova:name>tempest-ServerActionsTestJSON-server-893709654</nova:name>
Dec  6 02:14:58 np0005548731 nova_compute[232433]:      <nova:creationTime>2025-12-06 07:14:57</nova:creationTime>
Dec  6 02:14:58 np0005548731 nova_compute[232433]:      <nova:flavor name="m1.nano">
Dec  6 02:14:58 np0005548731 nova_compute[232433]:        <nova:memory>128</nova:memory>
Dec  6 02:14:58 np0005548731 nova_compute[232433]:        <nova:disk>1</nova:disk>
Dec  6 02:14:58 np0005548731 nova_compute[232433]:        <nova:swap>0</nova:swap>
Dec  6 02:14:58 np0005548731 nova_compute[232433]:        <nova:ephemeral>0</nova:ephemeral>
Dec  6 02:14:58 np0005548731 nova_compute[232433]:        <nova:vcpus>1</nova:vcpus>
Dec  6 02:14:58 np0005548731 nova_compute[232433]:      </nova:flavor>
Dec  6 02:14:58 np0005548731 nova_compute[232433]:      <nova:owner>
Dec  6 02:14:58 np0005548731 nova_compute[232433]:        <nova:user uuid="627c36bb63534e52a4b1d5adf47e6ffd">tempest-ServerActionsTestJSON-1877526843-project-member</nova:user>
Dec  6 02:14:58 np0005548731 nova_compute[232433]:        <nova:project uuid="929e2be1488d4b80b7ad8946093a6abe">tempest-ServerActionsTestJSON-1877526843</nova:project>
Dec  6 02:14:58 np0005548731 nova_compute[232433]:      </nova:owner>
Dec  6 02:14:58 np0005548731 nova_compute[232433]:      <nova:root type="image" uuid="6efab05d-c7cf-4770-a5c3-c806a2739063"/>
Dec  6 02:14:58 np0005548731 nova_compute[232433]:      <nova:ports>
Dec  6 02:14:58 np0005548731 nova_compute[232433]:        <nova:port uuid="a599f1a0-5413-4dc9-9ae4-d7ba512d761c">
Dec  6 02:14:58 np0005548731 nova_compute[232433]:          <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Dec  6 02:14:58 np0005548731 nova_compute[232433]:        </nova:port>
Dec  6 02:14:58 np0005548731 nova_compute[232433]:      </nova:ports>
Dec  6 02:14:58 np0005548731 nova_compute[232433]:    </nova:instance>
Dec  6 02:14:58 np0005548731 nova_compute[232433]:  </metadata>
Dec  6 02:14:58 np0005548731 nova_compute[232433]:  <sysinfo type="smbios">
Dec  6 02:14:58 np0005548731 nova_compute[232433]:    <system>
Dec  6 02:14:58 np0005548731 nova_compute[232433]:      <entry name="manufacturer">RDO</entry>
Dec  6 02:14:58 np0005548731 nova_compute[232433]:      <entry name="product">OpenStack Compute</entry>
Dec  6 02:14:58 np0005548731 nova_compute[232433]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec  6 02:14:58 np0005548731 nova_compute[232433]:      <entry name="serial">c8403a0c-2fe6-48fe-91af-ec5aca71e12d</entry>
Dec  6 02:14:58 np0005548731 nova_compute[232433]:      <entry name="uuid">c8403a0c-2fe6-48fe-91af-ec5aca71e12d</entry>
Dec  6 02:14:58 np0005548731 nova_compute[232433]:      <entry name="family">Virtual Machine</entry>
Dec  6 02:14:58 np0005548731 nova_compute[232433]:    </system>
Dec  6 02:14:58 np0005548731 nova_compute[232433]:  </sysinfo>
Dec  6 02:14:58 np0005548731 nova_compute[232433]:  <os>
Dec  6 02:14:58 np0005548731 nova_compute[232433]:    <type arch="x86_64" machine="q35">hvm</type>
Dec  6 02:14:58 np0005548731 nova_compute[232433]:    <boot dev="hd"/>
Dec  6 02:14:58 np0005548731 nova_compute[232433]:    <smbios mode="sysinfo"/>
Dec  6 02:14:58 np0005548731 nova_compute[232433]:  </os>
Dec  6 02:14:58 np0005548731 nova_compute[232433]:  <features>
Dec  6 02:14:58 np0005548731 nova_compute[232433]:    <acpi/>
Dec  6 02:14:58 np0005548731 nova_compute[232433]:    <apic/>
Dec  6 02:14:58 np0005548731 nova_compute[232433]:    <vmcoreinfo/>
Dec  6 02:14:58 np0005548731 nova_compute[232433]:  </features>
Dec  6 02:14:58 np0005548731 nova_compute[232433]:  <clock offset="utc">
Dec  6 02:14:58 np0005548731 nova_compute[232433]:    <timer name="pit" tickpolicy="delay"/>
Dec  6 02:14:58 np0005548731 nova_compute[232433]:    <timer name="rtc" tickpolicy="catchup"/>
Dec  6 02:14:58 np0005548731 nova_compute[232433]:    <timer name="hpet" present="no"/>
Dec  6 02:14:58 np0005548731 nova_compute[232433]:  </clock>
Dec  6 02:14:58 np0005548731 nova_compute[232433]:  <cpu mode="custom" match="exact">
Dec  6 02:14:58 np0005548731 nova_compute[232433]:    <model>Nehalem</model>
Dec  6 02:14:58 np0005548731 nova_compute[232433]:    <topology sockets="1" cores="1" threads="1"/>
Dec  6 02:14:58 np0005548731 nova_compute[232433]:  </cpu>
Dec  6 02:14:58 np0005548731 nova_compute[232433]:  <devices>
Dec  6 02:14:58 np0005548731 nova_compute[232433]:    <disk type="network" device="disk">
Dec  6 02:14:58 np0005548731 nova_compute[232433]:      <driver type="raw" cache="none"/>
Dec  6 02:14:58 np0005548731 nova_compute[232433]:      <source protocol="rbd" name="vms/c8403a0c-2fe6-48fe-91af-ec5aca71e12d_disk">
Dec  6 02:14:58 np0005548731 nova_compute[232433]:        <host name="192.168.122.100" port="6789"/>
Dec  6 02:14:58 np0005548731 nova_compute[232433]:        <host name="192.168.122.102" port="6789"/>
Dec  6 02:14:58 np0005548731 nova_compute[232433]:        <host name="192.168.122.101" port="6789"/>
Dec  6 02:14:58 np0005548731 nova_compute[232433]:      </source>
Dec  6 02:14:58 np0005548731 nova_compute[232433]:      <auth username="openstack">
Dec  6 02:14:58 np0005548731 nova_compute[232433]:        <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec  6 02:14:58 np0005548731 nova_compute[232433]:      </auth>
Dec  6 02:14:58 np0005548731 nova_compute[232433]:      <target dev="vda" bus="virtio"/>
Dec  6 02:14:58 np0005548731 nova_compute[232433]:    </disk>
Dec  6 02:14:58 np0005548731 nova_compute[232433]:    <disk type="network" device="cdrom">
Dec  6 02:14:58 np0005548731 nova_compute[232433]:      <driver type="raw" cache="none"/>
Dec  6 02:14:58 np0005548731 nova_compute[232433]:      <source protocol="rbd" name="vms/c8403a0c-2fe6-48fe-91af-ec5aca71e12d_disk.config">
Dec  6 02:14:58 np0005548731 nova_compute[232433]:        <host name="192.168.122.100" port="6789"/>
Dec  6 02:14:58 np0005548731 nova_compute[232433]:        <host name="192.168.122.102" port="6789"/>
Dec  6 02:14:58 np0005548731 nova_compute[232433]:        <host name="192.168.122.101" port="6789"/>
Dec  6 02:14:58 np0005548731 nova_compute[232433]:      </source>
Dec  6 02:14:58 np0005548731 nova_compute[232433]:      <auth username="openstack">
Dec  6 02:14:58 np0005548731 nova_compute[232433]:        <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec  6 02:14:58 np0005548731 nova_compute[232433]:      </auth>
Dec  6 02:14:58 np0005548731 nova_compute[232433]:      <target dev="sda" bus="sata"/>
Dec  6 02:14:58 np0005548731 nova_compute[232433]:    </disk>
Dec  6 02:14:58 np0005548731 nova_compute[232433]:    <interface type="ethernet">
Dec  6 02:14:58 np0005548731 nova_compute[232433]:      <mac address="fa:16:3e:9b:0b:0a"/>
Dec  6 02:14:58 np0005548731 nova_compute[232433]:      <model type="virtio"/>
Dec  6 02:14:58 np0005548731 nova_compute[232433]:      <driver name="vhost" rx_queue_size="512"/>
Dec  6 02:14:58 np0005548731 nova_compute[232433]:      <mtu size="1442"/>
Dec  6 02:14:58 np0005548731 nova_compute[232433]:      <target dev="tapa599f1a0-54"/>
Dec  6 02:14:58 np0005548731 nova_compute[232433]:    </interface>
Dec  6 02:14:58 np0005548731 nova_compute[232433]:    <serial type="pty">
Dec  6 02:14:58 np0005548731 nova_compute[232433]:      <log file="/var/lib/nova/instances/c8403a0c-2fe6-48fe-91af-ec5aca71e12d/console.log" append="off"/>
Dec  6 02:14:58 np0005548731 nova_compute[232433]:    </serial>
Dec  6 02:14:58 np0005548731 nova_compute[232433]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec  6 02:14:58 np0005548731 nova_compute[232433]:    <video>
Dec  6 02:14:58 np0005548731 nova_compute[232433]:      <model type="virtio"/>
Dec  6 02:14:58 np0005548731 nova_compute[232433]:    </video>
Dec  6 02:14:58 np0005548731 nova_compute[232433]:    <input type="tablet" bus="usb"/>
Dec  6 02:14:58 np0005548731 nova_compute[232433]:    <input type="keyboard" bus="usb"/>
Dec  6 02:14:58 np0005548731 nova_compute[232433]:    <rng model="virtio">
Dec  6 02:14:58 np0005548731 nova_compute[232433]:      <backend model="random">/dev/urandom</backend>
Dec  6 02:14:58 np0005548731 nova_compute[232433]:    </rng>
Dec  6 02:14:58 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root"/>
Dec  6 02:14:58 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:14:58 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:14:58 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:14:58 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:14:58 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:14:58 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:14:58 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:14:58 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:14:58 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:14:58 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:14:58 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:14:58 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:14:58 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:14:58 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:14:58 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:14:58 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:14:58 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:14:58 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:14:58 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:14:58 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:14:58 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:14:58 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:14:58 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:14:58 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:14:58 np0005548731 nova_compute[232433]:    <controller type="usb" index="0"/>
Dec  6 02:14:58 np0005548731 nova_compute[232433]:    <memballoon model="virtio">
Dec  6 02:14:58 np0005548731 nova_compute[232433]:      <stats period="10"/>
Dec  6 02:14:58 np0005548731 nova_compute[232433]:    </memballoon>
Dec  6 02:14:58 np0005548731 nova_compute[232433]:  </devices>
Dec  6 02:14:58 np0005548731 nova_compute[232433]: </domain>
Dec  6 02:14:58 np0005548731 nova_compute[232433]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec  6 02:14:58 np0005548731 nova_compute[232433]: 2025-12-06 07:14:58.491 232437 DEBUG nova.virt.libvirt.driver [None req-4aacfe2d-1cb2-4fae-83b9-da64ca1ea6ab 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] skipping disk for instance-00000044 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  6 02:14:58 np0005548731 nova_compute[232433]: 2025-12-06 07:14:58.492 232437 DEBUG nova.virt.libvirt.driver [None req-4aacfe2d-1cb2-4fae-83b9-da64ca1ea6ab 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] skipping disk for instance-00000044 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  6 02:14:58 np0005548731 nova_compute[232433]: 2025-12-06 07:14:58.493 232437 DEBUG nova.virt.libvirt.vif [None req-4aacfe2d-1cb2-4fae-83b9-da64ca1ea6ab 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T07:14:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-893709654',display_name='tempest-ServerActionsTestJSON-server-893709654',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-893709654',id=68,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAYy9PI2opG1Yb015LzaQaZHiAr4KsuqNy5RLRivgn9w0frXJzdA9SLIokq/TNHsTv+OZ3SzlEhSSm/zy2gaUVX2tVfQksdYXi87Z2HYYYX2anFBfTxIFgh3j22gU5Usow==',key_name='tempest-keypair-1101896810',keypairs=<?>,launch_index=0,launched_at=2025-12-06T07:14:31Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=<?>,power_state=1,progress=0,project_id='929e2be1488d4b80b7ad8946093a6abe',ramdisk_id='',reservation_id='r-klri94j0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-1877526843',owner_user_name='tempest-ServerActionsTestJSON-1877526843-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T07:14:56Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='627c36bb63534e52a4b1d5adf47e6ffd',uuid=c8403a0c-2fe6-48fe-91af-ec5aca71e12d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a599f1a0-5413-4dc9-9ae4-d7ba512d761c", "address": "fa:16:3e:9b:0b:0a", "network": {"id": "4d599401-3772-4e38-8cd2-d774d370af64", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-809610913-network", "subnets": [{"cidr": 
"10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.185", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "929e2be1488d4b80b7ad8946093a6abe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa599f1a0-54", "ovs_interfaceid": "a599f1a0-5413-4dc9-9ae4-d7ba512d761c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec  6 02:14:58 np0005548731 nova_compute[232433]: 2025-12-06 07:14:58.493 232437 DEBUG nova.network.os_vif_util [None req-4aacfe2d-1cb2-4fae-83b9-da64ca1ea6ab 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Converting VIF {"id": "a599f1a0-5413-4dc9-9ae4-d7ba512d761c", "address": "fa:16:3e:9b:0b:0a", "network": {"id": "4d599401-3772-4e38-8cd2-d774d370af64", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-809610913-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.185", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "929e2be1488d4b80b7ad8946093a6abe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa599f1a0-54", "ovs_interfaceid": "a599f1a0-5413-4dc9-9ae4-d7ba512d761c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 02:14:58 np0005548731 nova_compute[232433]: 2025-12-06 07:14:58.494 232437 DEBUG nova.network.os_vif_util [None req-4aacfe2d-1cb2-4fae-83b9-da64ca1ea6ab 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:9b:0b:0a,bridge_name='br-int',has_traffic_filtering=True,id=a599f1a0-5413-4dc9-9ae4-d7ba512d761c,network=Network(4d599401-3772-4e38-8cd2-d774d370af64),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa599f1a0-54') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 02:14:58 np0005548731 nova_compute[232433]: 2025-12-06 07:14:58.494 232437 DEBUG os_vif [None req-4aacfe2d-1cb2-4fae-83b9-da64ca1ea6ab 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Plugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:9b:0b:0a,bridge_name='br-int',has_traffic_filtering=True,id=a599f1a0-5413-4dc9-9ae4-d7ba512d761c,network=Network(4d599401-3772-4e38-8cd2-d774d370af64),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa599f1a0-54') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec  6 02:14:58 np0005548731 nova_compute[232433]: 2025-12-06 07:14:58.495 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:14:58 np0005548731 nova_compute[232433]: 2025-12-06 07:14:58.495 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:14:58 np0005548731 nova_compute[232433]: 2025-12-06 07:14:58.496 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  6 02:14:58 np0005548731 nova_compute[232433]: 2025-12-06 07:14:58.498 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:14:58 np0005548731 nova_compute[232433]: 2025-12-06 07:14:58.498 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa599f1a0-54, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:14:58 np0005548731 nova_compute[232433]: 2025-12-06 07:14:58.498 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapa599f1a0-54, col_values=(('external_ids', {'iface-id': 'a599f1a0-5413-4dc9-9ae4-d7ba512d761c', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:9b:0b:0a', 'vm-uuid': 'c8403a0c-2fe6-48fe-91af-ec5aca71e12d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:14:58 np0005548731 NetworkManager[49182]: <info>  [1765005298.5008] manager: (tapa599f1a0-54): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/121)
Dec  6 02:14:58 np0005548731 nova_compute[232433]: 2025-12-06 07:14:58.501 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  6 02:14:58 np0005548731 nova_compute[232433]: 2025-12-06 07:14:58.504 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:14:58 np0005548731 nova_compute[232433]: 2025-12-06 07:14:58.505 232437 INFO os_vif [None req-4aacfe2d-1cb2-4fae-83b9-da64ca1ea6ab 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Successfully plugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:9b:0b:0a,bridge_name='br-int',has_traffic_filtering=True,id=a599f1a0-5413-4dc9-9ae4-d7ba512d761c,network=Network(4d599401-3772-4e38-8cd2-d774d370af64),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa599f1a0-54')#033[00m
Dec  6 02:14:58 np0005548731 kernel: tapa599f1a0-54: entered promiscuous mode
Dec  6 02:14:58 np0005548731 systemd-udevd[260865]: Network interface NamePolicy= disabled on kernel command line.
Dec  6 02:14:58 np0005548731 ovn_controller[133927]: 2025-12-06T07:14:58Z|00212|binding|INFO|Claiming lport a599f1a0-5413-4dc9-9ae4-d7ba512d761c for this chassis.
Dec  6 02:14:58 np0005548731 NetworkManager[49182]: <info>  [1765005298.5820] manager: (tapa599f1a0-54): new Tun device (/org/freedesktop/NetworkManager/Devices/122)
Dec  6 02:14:58 np0005548731 ovn_controller[133927]: 2025-12-06T07:14:58Z|00213|binding|INFO|a599f1a0-5413-4dc9-9ae4-d7ba512d761c: Claiming fa:16:3e:9b:0b:0a 10.100.0.6
Dec  6 02:14:58 np0005548731 nova_compute[232433]: 2025-12-06 07:14:58.581 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:14:58 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:14:58.588 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9b:0b:0a 10.100.0.6'], port_security=['fa:16:3e:9b:0b:0a 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'c8403a0c-2fe6-48fe-91af-ec5aca71e12d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4d599401-3772-4e38-8cd2-d774d370af64', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '929e2be1488d4b80b7ad8946093a6abe', 'neutron:revision_number': '4', 'neutron:security_group_ids': '310d97ff-0e42-4be5-a68e-20cbdb7be60d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com', 'neutron:port_fip': '192.168.122.185'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=222872e8-5260-47b5-883e-369af9b3a47f, chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], logical_port=a599f1a0-5413-4dc9-9ae4-d7ba512d761c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 02:14:58 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:14:58.589 143965 INFO neutron.agent.ovn.metadata.agent [-] Port a599f1a0-5413-4dc9-9ae4-d7ba512d761c in datapath 4d599401-3772-4e38-8cd2-d774d370af64 bound to our chassis#033[00m
Dec  6 02:14:58 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:14:58.590 143965 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 4d599401-3772-4e38-8cd2-d774d370af64#033[00m
Dec  6 02:14:58 np0005548731 NetworkManager[49182]: <info>  [1765005298.6178] device (tapa599f1a0-54): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec  6 02:14:58 np0005548731 ovn_controller[133927]: 2025-12-06T07:14:58Z|00214|binding|INFO|Setting lport a599f1a0-5413-4dc9-9ae4-d7ba512d761c ovn-installed in OVS
Dec  6 02:14:58 np0005548731 ovn_controller[133927]: 2025-12-06T07:14:58Z|00215|binding|INFO|Setting lport a599f1a0-5413-4dc9-9ae4-d7ba512d761c up in Southbound
Dec  6 02:14:58 np0005548731 NetworkManager[49182]: <info>  [1765005298.6199] device (tapa599f1a0-54): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec  6 02:14:58 np0005548731 nova_compute[232433]: 2025-12-06 07:14:58.620 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:14:58 np0005548731 nova_compute[232433]: 2025-12-06 07:14:58.626 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:14:58 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:14:58.629 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[7bb370fd-ef88-4d49-a4b1-82443bda7ddf]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:14:58 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:14:58.629 143965 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap4d599401-31 in ovnmeta-4d599401-3772-4e38-8cd2-d774d370af64 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Dec  6 02:14:58 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:14:58.630 236668 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap4d599401-30 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Dec  6 02:14:58 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:14:58.631 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[3366fcc8-d12e-4c12-a329-7933006029cd]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:14:58 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:14:58.631 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[0d524712-aa92-4c89-9181-424a463d2574]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:14:58 np0005548731 systemd-machined[195355]: New machine qemu-29-instance-00000044.
Dec  6 02:14:58 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:14:58.645 144079 DEBUG oslo.privsep.daemon [-] privsep: reply[c3606190-3752-41cc-a63f-e6fb6c7497a3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:14:58 np0005548731 systemd[1]: Started Virtual Machine qemu-29-instance-00000044.
Dec  6 02:14:58 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:14:58.671 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[04c364c9-80af-4b6c-b442-710c7808523b]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:14:58 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:14:58.697 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[c02811c5-5aea-45a7-8ec0-84b6b67dce0c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:14:58 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:14:58.701 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[cbf1b528-4b3f-48e1-99c9-d41eb5739d3b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:14:58 np0005548731 NetworkManager[49182]: <info>  [1765005298.7032] manager: (tap4d599401-30): new Veth device (/org/freedesktop/NetworkManager/Devices/123)
Dec  6 02:14:58 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:14:58 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:14:58 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:14:58.707 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:14:58 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:14:58.731 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[8987705c-f232-4754-8ab9-3685fe75cc43]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:14:58 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:14:58.734 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[f6cdcf0c-7e10-4432-8064-5fc6c6d38ecc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:14:58 np0005548731 NetworkManager[49182]: <info>  [1765005298.7563] device (tap4d599401-30): carrier: link connected
Dec  6 02:14:58 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:14:58.763 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[2ce58fd3-44ff-4bd7-80e6-aa8e037569d6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:14:58 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:14:58.782 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[78675d80-8150-4504-a2b3-57968bcc3416]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4d599401-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:05:4c:b3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 73], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 556076, 'reachable_time': 24755, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 261068, 'error': None, 'target': 'ovnmeta-4d599401-3772-4e38-8cd2-d774d370af64', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:14:58 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:14:58.799 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[9b86f508-14f2-476e-a3ef-17ffdcc38e6a]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe05:4cb3'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 556076, 'tstamp': 556076}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 261069, 'error': None, 'target': 'ovnmeta-4d599401-3772-4e38-8cd2-d774d370af64', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:14:58 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:14:58.813 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[ca951497-ff4b-4432-bcdb-ab82a97ab256]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4d599401-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:05:4c:b3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 73], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 556076, 'reachable_time': 24755, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 261070, 'error': None, 'target': 'ovnmeta-4d599401-3772-4e38-8cd2-d774d370af64', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:14:58 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:14:58.841 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[8d690536-a59e-408a-b4b7-f1a6555ceff8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:14:58 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:14:58.892 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[a906165c-5667-4d5a-860e-4fe1f5a7474a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:14:58 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:14:58.893 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4d599401-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:14:58 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:14:58.894 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  6 02:14:58 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:14:58.894 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4d599401-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:14:58 np0005548731 nova_compute[232433]: 2025-12-06 07:14:58.896 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:14:58 np0005548731 NetworkManager[49182]: <info>  [1765005298.8970] manager: (tap4d599401-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/124)
Dec  6 02:14:58 np0005548731 kernel: tap4d599401-30: entered promiscuous mode
Dec  6 02:14:58 np0005548731 nova_compute[232433]: 2025-12-06 07:14:58.898 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:14:58 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:14:58.899 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap4d599401-30, col_values=(('external_ids', {'iface-id': 'd5f15755-ab6a-4ce9-857e-63f6c0e19fd8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:14:58 np0005548731 nova_compute[232433]: 2025-12-06 07:14:58.900 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:14:58 np0005548731 ovn_controller[133927]: 2025-12-06T07:14:58Z|00216|binding|INFO|Releasing lport d5f15755-ab6a-4ce9-857e-63f6c0e19fd8 from this chassis (sb_readonly=0)
Dec  6 02:14:58 np0005548731 nova_compute[232433]: 2025-12-06 07:14:58.913 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:14:58 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:14:58.914 143965 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/4d599401-3772-4e38-8cd2-d774d370af64.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/4d599401-3772-4e38-8cd2-d774d370af64.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Dec  6 02:14:58 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:14:58.915 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[70444224-c00d-455a-acfa-c81921e426f4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:14:58 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:14:58.915 143965 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec  6 02:14:58 np0005548731 ovn_metadata_agent[143960]: global
Dec  6 02:14:58 np0005548731 ovn_metadata_agent[143960]:    log         /dev/log local0 debug
Dec  6 02:14:58 np0005548731 ovn_metadata_agent[143960]:    log-tag     haproxy-metadata-proxy-4d599401-3772-4e38-8cd2-d774d370af64
Dec  6 02:14:58 np0005548731 ovn_metadata_agent[143960]:    user        root
Dec  6 02:14:58 np0005548731 ovn_metadata_agent[143960]:    group       root
Dec  6 02:14:58 np0005548731 ovn_metadata_agent[143960]:    maxconn     1024
Dec  6 02:14:58 np0005548731 ovn_metadata_agent[143960]:    pidfile     /var/lib/neutron/external/pids/4d599401-3772-4e38-8cd2-d774d370af64.pid.haproxy
Dec  6 02:14:58 np0005548731 ovn_metadata_agent[143960]:    daemon
Dec  6 02:14:58 np0005548731 ovn_metadata_agent[143960]: 
Dec  6 02:14:58 np0005548731 ovn_metadata_agent[143960]: defaults
Dec  6 02:14:58 np0005548731 ovn_metadata_agent[143960]:    log global
Dec  6 02:14:58 np0005548731 ovn_metadata_agent[143960]:    mode http
Dec  6 02:14:58 np0005548731 ovn_metadata_agent[143960]:    option httplog
Dec  6 02:14:58 np0005548731 ovn_metadata_agent[143960]:    option dontlognull
Dec  6 02:14:58 np0005548731 ovn_metadata_agent[143960]:    option http-server-close
Dec  6 02:14:58 np0005548731 ovn_metadata_agent[143960]:    option forwardfor
Dec  6 02:14:58 np0005548731 ovn_metadata_agent[143960]:    retries                 3
Dec  6 02:14:58 np0005548731 ovn_metadata_agent[143960]:    timeout http-request    30s
Dec  6 02:14:58 np0005548731 ovn_metadata_agent[143960]:    timeout connect         30s
Dec  6 02:14:58 np0005548731 ovn_metadata_agent[143960]:    timeout client          32s
Dec  6 02:14:58 np0005548731 ovn_metadata_agent[143960]:    timeout server          32s
Dec  6 02:14:58 np0005548731 ovn_metadata_agent[143960]:    timeout http-keep-alive 30s
Dec  6 02:14:58 np0005548731 ovn_metadata_agent[143960]: 
Dec  6 02:14:58 np0005548731 ovn_metadata_agent[143960]: 
Dec  6 02:14:58 np0005548731 ovn_metadata_agent[143960]: listen listener
Dec  6 02:14:58 np0005548731 ovn_metadata_agent[143960]:    bind 169.254.169.254:80
Dec  6 02:14:58 np0005548731 ovn_metadata_agent[143960]:    server metadata /var/lib/neutron/metadata_proxy
Dec  6 02:14:58 np0005548731 ovn_metadata_agent[143960]:    http-request add-header X-OVN-Network-ID 4d599401-3772-4e38-8cd2-d774d370af64
Dec  6 02:14:58 np0005548731 ovn_metadata_agent[143960]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Dec  6 02:14:58 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:14:58.916 143965 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-4d599401-3772-4e38-8cd2-d774d370af64', 'env', 'PROCESS_TAG=haproxy-4d599401-3772-4e38-8cd2-d774d370af64', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/4d599401-3772-4e38-8cd2-d774d370af64.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Dec  6 02:14:59 np0005548731 nova_compute[232433]: 2025-12-06 07:14:59.073 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:14:59 np0005548731 podman[261127]: 2025-12-06 07:14:59.242235645 +0000 UTC m=+0.020839992 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec  6 02:14:59 np0005548731 nova_compute[232433]: 2025-12-06 07:14:59.345 232437 DEBUG nova.virt.libvirt.host [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Removed pending event for c8403a0c-2fe6-48fe-91af-ec5aca71e12d due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Dec  6 02:14:59 np0005548731 nova_compute[232433]: 2025-12-06 07:14:59.346 232437 DEBUG nova.virt.driver [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Emitting event <LifecycleEvent: 1765005299.3449378, c8403a0c-2fe6-48fe-91af-ec5aca71e12d => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 02:14:59 np0005548731 nova_compute[232433]: 2025-12-06 07:14:59.346 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: c8403a0c-2fe6-48fe-91af-ec5aca71e12d] VM Resumed (Lifecycle Event)#033[00m
Dec  6 02:14:59 np0005548731 nova_compute[232433]: 2025-12-06 07:14:59.348 232437 DEBUG nova.compute.manager [None req-4aacfe2d-1cb2-4fae-83b9-da64ca1ea6ab 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] [instance: c8403a0c-2fe6-48fe-91af-ec5aca71e12d] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec  6 02:14:59 np0005548731 nova_compute[232433]: 2025-12-06 07:14:59.351 232437 INFO nova.virt.libvirt.driver [-] [instance: c8403a0c-2fe6-48fe-91af-ec5aca71e12d] Instance rebooted successfully.#033[00m
Dec  6 02:14:59 np0005548731 nova_compute[232433]: 2025-12-06 07:14:59.352 232437 DEBUG nova.compute.manager [None req-4aacfe2d-1cb2-4fae-83b9-da64ca1ea6ab 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] [instance: c8403a0c-2fe6-48fe-91af-ec5aca71e12d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:14:59 np0005548731 nova_compute[232433]: 2025-12-06 07:14:59.384 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: c8403a0c-2fe6-48fe-91af-ec5aca71e12d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:14:59 np0005548731 nova_compute[232433]: 2025-12-06 07:14:59.387 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: c8403a0c-2fe6-48fe-91af-ec5aca71e12d] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: reboot_started_hard, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  6 02:14:59 np0005548731 nova_compute[232433]: 2025-12-06 07:14:59.415 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: c8403a0c-2fe6-48fe-91af-ec5aca71e12d] During sync_power_state the instance has a pending task (reboot_started_hard). Skip.#033[00m
Dec  6 02:14:59 np0005548731 nova_compute[232433]: 2025-12-06 07:14:59.416 232437 DEBUG nova.virt.driver [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Emitting event <LifecycleEvent: 1765005299.3480742, c8403a0c-2fe6-48fe-91af-ec5aca71e12d => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 02:14:59 np0005548731 nova_compute[232433]: 2025-12-06 07:14:59.416 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: c8403a0c-2fe6-48fe-91af-ec5aca71e12d] VM Started (Lifecycle Event)#033[00m
Dec  6 02:14:59 np0005548731 nova_compute[232433]: 2025-12-06 07:14:59.426 232437 DEBUG oslo_concurrency.lockutils [None req-4aacfe2d-1cb2-4fae-83b9-da64ca1ea6ab 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Lock "c8403a0c-2fe6-48fe-91af-ec5aca71e12d" "released" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: held 4.937s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:14:59 np0005548731 nova_compute[232433]: 2025-12-06 07:14:59.443 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: c8403a0c-2fe6-48fe-91af-ec5aca71e12d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:14:59 np0005548731 nova_compute[232433]: 2025-12-06 07:14:59.445 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: c8403a0c-2fe6-48fe-91af-ec5aca71e12d] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  6 02:14:59 np0005548731 podman[261127]: 2025-12-06 07:14:59.55177435 +0000 UTC m=+0.330378677 container create c460140e95b5f6a336220976d6f04a618710f2082d6324097d6d44f7b3a890f3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4d599401-3772-4e38-8cd2-d774d370af64, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Dec  6 02:14:59 np0005548731 systemd[1]: Started libpod-conmon-c460140e95b5f6a336220976d6f04a618710f2082d6324097d6d44f7b3a890f3.scope.
Dec  6 02:14:59 np0005548731 systemd[1]: Started libcrun container.
Dec  6 02:14:59 np0005548731 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e6a6d5d75a97b59bde2c975b959fd3e87c820a814dcf34f4870306a23f6c57f2/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec  6 02:15:00 np0005548731 podman[261127]: 2025-12-06 07:15:00.059341834 +0000 UTC m=+0.837946181 container init c460140e95b5f6a336220976d6f04a618710f2082d6324097d6d44f7b3a890f3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4d599401-3772-4e38-8cd2-d774d370af64, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Dec  6 02:15:00 np0005548731 podman[261127]: 2025-12-06 07:15:00.065413533 +0000 UTC m=+0.844017860 container start c460140e95b5f6a336220976d6f04a618710f2082d6324097d6d44f7b3a890f3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4d599401-3772-4e38-8cd2-d774d370af64, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Dec  6 02:15:00 np0005548731 neutron-haproxy-ovnmeta-4d599401-3772-4e38-8cd2-d774d370af64[261160]: [NOTICE]   (261164) : New worker (261166) forked
Dec  6 02:15:00 np0005548731 neutron-haproxy-ovnmeta-4d599401-3772-4e38-8cd2-d774d370af64[261160]: [NOTICE]   (261164) : Loading success.
Dec  6 02:15:00 np0005548731 nova_compute[232433]: 2025-12-06 07:15:00.105 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:15:00 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:15:00 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:15:00 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:15:00.394 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:15:00 np0005548731 nova_compute[232433]: 2025-12-06 07:15:00.446 232437 INFO nova.compute.manager [None req-1f25c5eb-51d8-44fc-8a01-8551f4b35d58 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] [instance: c8403a0c-2fe6-48fe-91af-ec5aca71e12d] Get console output#033[00m
Dec  6 02:15:00 np0005548731 nova_compute[232433]: 2025-12-06 07:15:00.450 232437 INFO oslo.privsep.daemon [None req-1f25c5eb-51d8-44fc-8a01-8551f4b35d58 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Running privsep helper: ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/nova/nova.conf', '--config-file', '/etc/nova/nova-compute.conf', '--config-dir', '/etc/nova/nova.conf.d', '--privsep_context', 'nova.privsep.sys_admin_pctxt', '--privsep_sock_path', '/tmp/tmphkxbcf2k/privsep.sock']#033[00m
Dec  6 02:15:00 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:15:00 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:15:00 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:15:00.710 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:15:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:15:00.857 143965 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:15:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:15:00.858 143965 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:15:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:15:00.858 143965 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:15:01 np0005548731 nova_compute[232433]: 2025-12-06 07:15:01.169 232437 INFO oslo.privsep.daemon [None req-1f25c5eb-51d8-44fc-8a01-8551f4b35d58 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Spawned new privsep daemon via rootwrap#033[00m
Dec  6 02:15:01 np0005548731 nova_compute[232433]: 2025-12-06 07:15:01.033 261230 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m
Dec  6 02:15:01 np0005548731 nova_compute[232433]: 2025-12-06 07:15:01.037 261230 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m
Dec  6 02:15:01 np0005548731 nova_compute[232433]: 2025-12-06 07:15:01.039 261230 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/none#033[00m
Dec  6 02:15:01 np0005548731 nova_compute[232433]: 2025-12-06 07:15:01.039 261230 INFO oslo.privsep.daemon [-] privsep daemon running as pid 261230#033[00m
Dec  6 02:15:01 np0005548731 nova_compute[232433]: 2025-12-06 07:15:01.828 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:15:02 np0005548731 nova_compute[232433]: 2025-12-06 07:15:02.101 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:15:02 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e252 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:15:02 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:15:02 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:15:02 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:15:02.396 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:15:02 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:15:02 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:15:02 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:15:02.712 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:15:02 np0005548731 ceph-mgr[77818]: client.0 ms_handle_reset on v2:192.168.122.100:6800/798720280
Dec  6 02:15:03 np0005548731 nova_compute[232433]: 2025-12-06 07:15:03.501 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:15:04 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:15:04 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:15:04 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:15:04.397 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:15:04 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:15:04 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:15:04 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:15:04.715 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:15:05 np0005548731 nova_compute[232433]: 2025-12-06 07:15:05.630 232437 DEBUG nova.compute.manager [req-7c2c00be-6d72-4782-9f55-da570988be85 req-da7be891-c18a-44b5-bc9c-f59c46e47efc 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: c8403a0c-2fe6-48fe-91af-ec5aca71e12d] Received event network-vif-unplugged-a599f1a0-5413-4dc9-9ae4-d7ba512d761c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:15:05 np0005548731 nova_compute[232433]: 2025-12-06 07:15:05.631 232437 DEBUG oslo_concurrency.lockutils [req-7c2c00be-6d72-4782-9f55-da570988be85 req-da7be891-c18a-44b5-bc9c-f59c46e47efc 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "c8403a0c-2fe6-48fe-91af-ec5aca71e12d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:15:05 np0005548731 nova_compute[232433]: 2025-12-06 07:15:05.631 232437 DEBUG oslo_concurrency.lockutils [req-7c2c00be-6d72-4782-9f55-da570988be85 req-da7be891-c18a-44b5-bc9c-f59c46e47efc 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "c8403a0c-2fe6-48fe-91af-ec5aca71e12d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:15:05 np0005548731 nova_compute[232433]: 2025-12-06 07:15:05.632 232437 DEBUG oslo_concurrency.lockutils [req-7c2c00be-6d72-4782-9f55-da570988be85 req-da7be891-c18a-44b5-bc9c-f59c46e47efc 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "c8403a0c-2fe6-48fe-91af-ec5aca71e12d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:15:05 np0005548731 nova_compute[232433]: 2025-12-06 07:15:05.632 232437 DEBUG nova.compute.manager [req-7c2c00be-6d72-4782-9f55-da570988be85 req-da7be891-c18a-44b5-bc9c-f59c46e47efc 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: c8403a0c-2fe6-48fe-91af-ec5aca71e12d] No waiting events found dispatching network-vif-unplugged-a599f1a0-5413-4dc9-9ae4-d7ba512d761c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 02:15:05 np0005548731 nova_compute[232433]: 2025-12-06 07:15:05.633 232437 WARNING nova.compute.manager [req-7c2c00be-6d72-4782-9f55-da570988be85 req-da7be891-c18a-44b5-bc9c-f59c46e47efc 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: c8403a0c-2fe6-48fe-91af-ec5aca71e12d] Received unexpected event network-vif-unplugged-a599f1a0-5413-4dc9-9ae4-d7ba512d761c for instance with vm_state active and task_state None.#033[00m
Dec  6 02:15:06 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:15:06 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:15:06 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:15:06.400 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:15:06 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:15:06 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:15:06 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:15:06.717 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:15:06 np0005548731 nova_compute[232433]: 2025-12-06 07:15:06.828 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:15:07 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e252 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:15:07 np0005548731 nova_compute[232433]: 2025-12-06 07:15:07.809 232437 DEBUG nova.compute.manager [req-b83cbe3f-a2b2-4b3d-bc7b-a4bbe1ba045e req-3f856da5-7767-4b6b-8b6a-97d3a0bd255a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: c8403a0c-2fe6-48fe-91af-ec5aca71e12d] Received event network-vif-plugged-a599f1a0-5413-4dc9-9ae4-d7ba512d761c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:15:07 np0005548731 nova_compute[232433]: 2025-12-06 07:15:07.811 232437 DEBUG oslo_concurrency.lockutils [req-b83cbe3f-a2b2-4b3d-bc7b-a4bbe1ba045e req-3f856da5-7767-4b6b-8b6a-97d3a0bd255a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "c8403a0c-2fe6-48fe-91af-ec5aca71e12d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:15:07 np0005548731 nova_compute[232433]: 2025-12-06 07:15:07.812 232437 DEBUG oslo_concurrency.lockutils [req-b83cbe3f-a2b2-4b3d-bc7b-a4bbe1ba045e req-3f856da5-7767-4b6b-8b6a-97d3a0bd255a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "c8403a0c-2fe6-48fe-91af-ec5aca71e12d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:15:07 np0005548731 nova_compute[232433]: 2025-12-06 07:15:07.812 232437 DEBUG oslo_concurrency.lockutils [req-b83cbe3f-a2b2-4b3d-bc7b-a4bbe1ba045e req-3f856da5-7767-4b6b-8b6a-97d3a0bd255a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "c8403a0c-2fe6-48fe-91af-ec5aca71e12d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:15:07 np0005548731 nova_compute[232433]: 2025-12-06 07:15:07.812 232437 DEBUG nova.compute.manager [req-b83cbe3f-a2b2-4b3d-bc7b-a4bbe1ba045e req-3f856da5-7767-4b6b-8b6a-97d3a0bd255a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: c8403a0c-2fe6-48fe-91af-ec5aca71e12d] No waiting events found dispatching network-vif-plugged-a599f1a0-5413-4dc9-9ae4-d7ba512d761c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 02:15:07 np0005548731 nova_compute[232433]: 2025-12-06 07:15:07.813 232437 WARNING nova.compute.manager [req-b83cbe3f-a2b2-4b3d-bc7b-a4bbe1ba045e req-3f856da5-7767-4b6b-8b6a-97d3a0bd255a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: c8403a0c-2fe6-48fe-91af-ec5aca71e12d] Received unexpected event network-vif-plugged-a599f1a0-5413-4dc9-9ae4-d7ba512d761c for instance with vm_state active and task_state None.#033[00m
Dec  6 02:15:07 np0005548731 nova_compute[232433]: 2025-12-06 07:15:07.813 232437 DEBUG nova.compute.manager [req-b83cbe3f-a2b2-4b3d-bc7b-a4bbe1ba045e req-3f856da5-7767-4b6b-8b6a-97d3a0bd255a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: c8403a0c-2fe6-48fe-91af-ec5aca71e12d] Received event network-vif-plugged-a599f1a0-5413-4dc9-9ae4-d7ba512d761c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:15:07 np0005548731 nova_compute[232433]: 2025-12-06 07:15:07.813 232437 DEBUG oslo_concurrency.lockutils [req-b83cbe3f-a2b2-4b3d-bc7b-a4bbe1ba045e req-3f856da5-7767-4b6b-8b6a-97d3a0bd255a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "c8403a0c-2fe6-48fe-91af-ec5aca71e12d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:15:07 np0005548731 nova_compute[232433]: 2025-12-06 07:15:07.814 232437 DEBUG oslo_concurrency.lockutils [req-b83cbe3f-a2b2-4b3d-bc7b-a4bbe1ba045e req-3f856da5-7767-4b6b-8b6a-97d3a0bd255a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "c8403a0c-2fe6-48fe-91af-ec5aca71e12d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:15:07 np0005548731 nova_compute[232433]: 2025-12-06 07:15:07.814 232437 DEBUG oslo_concurrency.lockutils [req-b83cbe3f-a2b2-4b3d-bc7b-a4bbe1ba045e req-3f856da5-7767-4b6b-8b6a-97d3a0bd255a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "c8403a0c-2fe6-48fe-91af-ec5aca71e12d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:15:07 np0005548731 nova_compute[232433]: 2025-12-06 07:15:07.814 232437 DEBUG nova.compute.manager [req-b83cbe3f-a2b2-4b3d-bc7b-a4bbe1ba045e req-3f856da5-7767-4b6b-8b6a-97d3a0bd255a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: c8403a0c-2fe6-48fe-91af-ec5aca71e12d] No waiting events found dispatching network-vif-plugged-a599f1a0-5413-4dc9-9ae4-d7ba512d761c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 02:15:07 np0005548731 nova_compute[232433]: 2025-12-06 07:15:07.814 232437 WARNING nova.compute.manager [req-b83cbe3f-a2b2-4b3d-bc7b-a4bbe1ba045e req-3f856da5-7767-4b6b-8b6a-97d3a0bd255a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: c8403a0c-2fe6-48fe-91af-ec5aca71e12d] Received unexpected event network-vif-plugged-a599f1a0-5413-4dc9-9ae4-d7ba512d761c for instance with vm_state active and task_state None.#033[00m
Dec  6 02:15:07 np0005548731 nova_compute[232433]: 2025-12-06 07:15:07.815 232437 DEBUG nova.compute.manager [req-b83cbe3f-a2b2-4b3d-bc7b-a4bbe1ba045e req-3f856da5-7767-4b6b-8b6a-97d3a0bd255a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: c8403a0c-2fe6-48fe-91af-ec5aca71e12d] Received event network-vif-plugged-a599f1a0-5413-4dc9-9ae4-d7ba512d761c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:15:07 np0005548731 nova_compute[232433]: 2025-12-06 07:15:07.815 232437 DEBUG oslo_concurrency.lockutils [req-b83cbe3f-a2b2-4b3d-bc7b-a4bbe1ba045e req-3f856da5-7767-4b6b-8b6a-97d3a0bd255a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "c8403a0c-2fe6-48fe-91af-ec5aca71e12d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:15:07 np0005548731 nova_compute[232433]: 2025-12-06 07:15:07.815 232437 DEBUG oslo_concurrency.lockutils [req-b83cbe3f-a2b2-4b3d-bc7b-a4bbe1ba045e req-3f856da5-7767-4b6b-8b6a-97d3a0bd255a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "c8403a0c-2fe6-48fe-91af-ec5aca71e12d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:15:07 np0005548731 nova_compute[232433]: 2025-12-06 07:15:07.816 232437 DEBUG oslo_concurrency.lockutils [req-b83cbe3f-a2b2-4b3d-bc7b-a4bbe1ba045e req-3f856da5-7767-4b6b-8b6a-97d3a0bd255a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "c8403a0c-2fe6-48fe-91af-ec5aca71e12d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:15:07 np0005548731 nova_compute[232433]: 2025-12-06 07:15:07.817 232437 DEBUG nova.compute.manager [req-b83cbe3f-a2b2-4b3d-bc7b-a4bbe1ba045e req-3f856da5-7767-4b6b-8b6a-97d3a0bd255a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: c8403a0c-2fe6-48fe-91af-ec5aca71e12d] No waiting events found dispatching network-vif-plugged-a599f1a0-5413-4dc9-9ae4-d7ba512d761c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 02:15:07 np0005548731 nova_compute[232433]: 2025-12-06 07:15:07.818 232437 WARNING nova.compute.manager [req-b83cbe3f-a2b2-4b3d-bc7b-a4bbe1ba045e req-3f856da5-7767-4b6b-8b6a-97d3a0bd255a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: c8403a0c-2fe6-48fe-91af-ec5aca71e12d] Received unexpected event network-vif-plugged-a599f1a0-5413-4dc9-9ae4-d7ba512d761c for instance with vm_state active and task_state None.#033[00m
Dec  6 02:15:07 np0005548731 nova_compute[232433]: 2025-12-06 07:15:07.893 232437 DEBUG oslo_concurrency.lockutils [None req-566389a6-c8af-44cc-a025-8845cbfa2883 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Acquiring lock "c8403a0c-2fe6-48fe-91af-ec5aca71e12d" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:15:07 np0005548731 nova_compute[232433]: 2025-12-06 07:15:07.894 232437 DEBUG oslo_concurrency.lockutils [None req-566389a6-c8af-44cc-a025-8845cbfa2883 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Lock "c8403a0c-2fe6-48fe-91af-ec5aca71e12d" acquired by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:15:07 np0005548731 nova_compute[232433]: 2025-12-06 07:15:07.894 232437 DEBUG nova.compute.manager [None req-566389a6-c8af-44cc-a025-8845cbfa2883 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] [instance: c8403a0c-2fe6-48fe-91af-ec5aca71e12d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:15:07 np0005548731 nova_compute[232433]: 2025-12-06 07:15:07.899 232437 DEBUG nova.compute.manager [None req-566389a6-c8af-44cc-a025-8845cbfa2883 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] [instance: c8403a0c-2fe6-48fe-91af-ec5aca71e12d] Stopping instance; current vm_state: active, current task_state: powering-off, current DB power_state: 1, current VM power_state: 1 do_stop_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3338#033[00m
Dec  6 02:15:07 np0005548731 nova_compute[232433]: 2025-12-06 07:15:07.899 232437 DEBUG nova.objects.instance [None req-566389a6-c8af-44cc-a025-8845cbfa2883 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Lazy-loading 'flavor' on Instance uuid c8403a0c-2fe6-48fe-91af-ec5aca71e12d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:15:08 np0005548731 nova_compute[232433]: 2025-12-06 07:15:08.092 232437 DEBUG nova.virt.libvirt.driver [None req-566389a6-c8af-44cc-a025-8845cbfa2883 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] [instance: c8403a0c-2fe6-48fe-91af-ec5aca71e12d] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Dec  6 02:15:08 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Dec  6 02:15:08 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  6 02:15:08 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 02:15:08 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  6 02:15:08 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:15:08 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:15:08 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:15:08.402 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:15:08 np0005548731 nova_compute[232433]: 2025-12-06 07:15:08.504 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:15:08 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:15:08 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:15:08 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:15:08.720 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:15:10 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:15:10 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:15:10 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:15:10.403 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:15:10 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:15:10 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:15:10 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:15:10.722 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:15:11 np0005548731 nova_compute[232433]: 2025-12-06 07:15:11.830 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:15:12 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e252 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:15:12 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:15:12 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:15:12 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:15:12.405 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:15:12 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:15:12 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:15:12 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:15:12.726 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:15:13 np0005548731 nova_compute[232433]: 2025-12-06 07:15:13.506 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:15:13 np0005548731 ovn_controller[133927]: 2025-12-06T07:15:13Z|00028|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:9b:0b:0a 10.100.0.6
Dec  6 02:15:14 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:15:14 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:15:14 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:15:14.407 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:15:14 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:15:14 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:15:14 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:15:14.729 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:15:16 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:15:16 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:15:16 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:15:16.408 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:15:16 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 02:15:16 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:15:16 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:15:16 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:15:16.731 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:15:16 np0005548731 nova_compute[232433]: 2025-12-06 07:15:16.832 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:15:17 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e252 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:15:18 np0005548731 nova_compute[232433]: 2025-12-06 07:15:18.137 232437 DEBUG nova.virt.libvirt.driver [None req-566389a6-c8af-44cc-a025-8845cbfa2883 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] [instance: c8403a0c-2fe6-48fe-91af-ec5aca71e12d] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101#033[00m
Dec  6 02:15:18 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:15:18 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:15:18 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:15:18.409 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:15:18 np0005548731 nova_compute[232433]: 2025-12-06 07:15:18.508 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:15:18 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:15:18 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:15:18 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:15:18.734 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:15:20 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:15:20 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:15:20 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:15:20.411 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:15:20 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 02:15:20 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:15:20 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:15:20 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:15:20.736 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:15:21 np0005548731 podman[261448]: 2025-12-06 07:15:21.128588459 +0000 UTC m=+0.069961902 container health_status 26c2ce9bdb83c6db5ccad7f5b2a7cb4e9354ab0235ad2f942ea42c8feda2142b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec  6 02:15:21 np0005548731 podman[261450]: 2025-12-06 07:15:21.12985125 +0000 UTC m=+0.061891865 container health_status ae4fd08cdec31d6ae2a6b91ffff7deb6ca5a8375cb52ee4b685cc6f25796824e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3)
Dec  6 02:15:21 np0005548731 nova_compute[232433]: 2025-12-06 07:15:21.134 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:15:21 np0005548731 podman[261449]: 2025-12-06 07:15:21.158014228 +0000 UTC m=+0.095584908 container health_status a118c3becf56cfbcae208065771ebf10a65162bb7b96389971293e534823fb78 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, managed_by=edpm_ansible)
Dec  6 02:15:21 np0005548731 nova_compute[232433]: 2025-12-06 07:15:21.833 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:15:22 np0005548731 nova_compute[232433]: 2025-12-06 07:15:22.152 232437 INFO nova.virt.libvirt.driver [None req-566389a6-c8af-44cc-a025-8845cbfa2883 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] [instance: c8403a0c-2fe6-48fe-91af-ec5aca71e12d] Instance shutdown successfully after 14 seconds.#033[00m
Dec  6 02:15:22 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e252 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:15:22 np0005548731 kernel: tapa599f1a0-54 (unregistering): left promiscuous mode
Dec  6 02:15:22 np0005548731 NetworkManager[49182]: <info>  [1765005322.3682] device (tapa599f1a0-54): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec  6 02:15:22 np0005548731 ovn_controller[133927]: 2025-12-06T07:15:22Z|00217|binding|INFO|Releasing lport a599f1a0-5413-4dc9-9ae4-d7ba512d761c from this chassis (sb_readonly=0)
Dec  6 02:15:22 np0005548731 ovn_controller[133927]: 2025-12-06T07:15:22Z|00218|binding|INFO|Setting lport a599f1a0-5413-4dc9-9ae4-d7ba512d761c down in Southbound
Dec  6 02:15:22 np0005548731 ovn_controller[133927]: 2025-12-06T07:15:22Z|00219|binding|INFO|Removing iface tapa599f1a0-54 ovn-installed in OVS
Dec  6 02:15:22 np0005548731 nova_compute[232433]: 2025-12-06 07:15:22.377 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:15:22 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:15:22.383 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9b:0b:0a 10.100.0.6'], port_security=['fa:16:3e:9b:0b:0a 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'c8403a0c-2fe6-48fe-91af-ec5aca71e12d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4d599401-3772-4e38-8cd2-d774d370af64', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '929e2be1488d4b80b7ad8946093a6abe', 'neutron:revision_number': '6', 'neutron:security_group_ids': '310d97ff-0e42-4be5-a68e-20cbdb7be60d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com', 'neutron:port_fip': '192.168.122.185'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=222872e8-5260-47b5-883e-369af9b3a47f, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], logical_port=a599f1a0-5413-4dc9-9ae4-d7ba512d761c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 02:15:22 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:15:22.385 143965 INFO neutron.agent.ovn.metadata.agent [-] Port a599f1a0-5413-4dc9-9ae4-d7ba512d761c in datapath 4d599401-3772-4e38-8cd2-d774d370af64 unbound from our chassis#033[00m
Dec  6 02:15:22 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:15:22.387 143965 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 4d599401-3772-4e38-8cd2-d774d370af64, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec  6 02:15:22 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:15:22.388 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[a976c125-90f6-423f-9ec7-17a262bbe313]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:15:22 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:15:22.389 143965 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-4d599401-3772-4e38-8cd2-d774d370af64 namespace which is not needed anymore#033[00m
Dec  6 02:15:22 np0005548731 nova_compute[232433]: 2025-12-06 07:15:22.399 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:15:22 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:15:22 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:15:22 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:15:22.413 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:15:22 np0005548731 systemd[1]: machine-qemu\x2d29\x2dinstance\x2d00000044.scope: Deactivated successfully.
Dec  6 02:15:22 np0005548731 systemd[1]: machine-qemu\x2d29\x2dinstance\x2d00000044.scope: Consumed 14.746s CPU time.
Dec  6 02:15:22 np0005548731 systemd-machined[195355]: Machine qemu-29-instance-00000044 terminated.
Dec  6 02:15:22 np0005548731 neutron-haproxy-ovnmeta-4d599401-3772-4e38-8cd2-d774d370af64[261160]: [NOTICE]   (261164) : haproxy version is 2.8.14-c23fe91
Dec  6 02:15:22 np0005548731 neutron-haproxy-ovnmeta-4d599401-3772-4e38-8cd2-d774d370af64[261160]: [NOTICE]   (261164) : path to executable is /usr/sbin/haproxy
Dec  6 02:15:22 np0005548731 neutron-haproxy-ovnmeta-4d599401-3772-4e38-8cd2-d774d370af64[261160]: [WARNING]  (261164) : Exiting Master process...
Dec  6 02:15:22 np0005548731 neutron-haproxy-ovnmeta-4d599401-3772-4e38-8cd2-d774d370af64[261160]: [WARNING]  (261164) : Exiting Master process...
Dec  6 02:15:22 np0005548731 neutron-haproxy-ovnmeta-4d599401-3772-4e38-8cd2-d774d370af64[261160]: [ALERT]    (261164) : Current worker (261166) exited with code 143 (Terminated)
Dec  6 02:15:22 np0005548731 neutron-haproxy-ovnmeta-4d599401-3772-4e38-8cd2-d774d370af64[261160]: [WARNING]  (261164) : All workers exited. Exiting... (0)
Dec  6 02:15:22 np0005548731 systemd[1]: libpod-c460140e95b5f6a336220976d6f04a618710f2082d6324097d6d44f7b3a890f3.scope: Deactivated successfully.
Dec  6 02:15:22 np0005548731 podman[261560]: 2025-12-06 07:15:22.535661485 +0000 UTC m=+0.042814299 container died c460140e95b5f6a336220976d6f04a618710f2082d6324097d6d44f7b3a890f3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4d599401-3772-4e38-8cd2-d774d370af64, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec  6 02:15:22 np0005548731 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-c460140e95b5f6a336220976d6f04a618710f2082d6324097d6d44f7b3a890f3-userdata-shm.mount: Deactivated successfully.
Dec  6 02:15:22 np0005548731 systemd[1]: var-lib-containers-storage-overlay-e6a6d5d75a97b59bde2c975b959fd3e87c820a814dcf34f4870306a23f6c57f2-merged.mount: Deactivated successfully.
Dec  6 02:15:22 np0005548731 podman[261560]: 2025-12-06 07:15:22.583404313 +0000 UTC m=+0.090557117 container cleanup c460140e95b5f6a336220976d6f04a618710f2082d6324097d6d44f7b3a890f3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4d599401-3772-4e38-8cd2-d774d370af64, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  6 02:15:22 np0005548731 nova_compute[232433]: 2025-12-06 07:15:22.592 232437 INFO nova.virt.libvirt.driver [-] [instance: c8403a0c-2fe6-48fe-91af-ec5aca71e12d] Instance destroyed successfully.#033[00m
Dec  6 02:15:22 np0005548731 nova_compute[232433]: 2025-12-06 07:15:22.594 232437 DEBUG nova.objects.instance [None req-566389a6-c8af-44cc-a025-8845cbfa2883 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Lazy-loading 'numa_topology' on Instance uuid c8403a0c-2fe6-48fe-91af-ec5aca71e12d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:15:22 np0005548731 systemd[1]: libpod-conmon-c460140e95b5f6a336220976d6f04a618710f2082d6324097d6d44f7b3a890f3.scope: Deactivated successfully.
Dec  6 02:15:22 np0005548731 nova_compute[232433]: 2025-12-06 07:15:22.652 232437 DEBUG nova.compute.manager [None req-566389a6-c8af-44cc-a025-8845cbfa2883 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] [instance: c8403a0c-2fe6-48fe-91af-ec5aca71e12d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:15:22 np0005548731 podman[261599]: 2025-12-06 07:15:22.660634661 +0000 UTC m=+0.052688700 container remove c460140e95b5f6a336220976d6f04a618710f2082d6324097d6d44f7b3a890f3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4d599401-3772-4e38-8cd2-d774d370af64, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Dec  6 02:15:22 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:15:22.667 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[90e37f4e-f9c2-4d2b-b312-edc5c50738d7]: (4, ('Sat Dec  6 07:15:22 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-4d599401-3772-4e38-8cd2-d774d370af64 (c460140e95b5f6a336220976d6f04a618710f2082d6324097d6d44f7b3a890f3)\nc460140e95b5f6a336220976d6f04a618710f2082d6324097d6d44f7b3a890f3\nSat Dec  6 07:15:22 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-4d599401-3772-4e38-8cd2-d774d370af64 (c460140e95b5f6a336220976d6f04a618710f2082d6324097d6d44f7b3a890f3)\nc460140e95b5f6a336220976d6f04a618710f2082d6324097d6d44f7b3a890f3\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:15:22 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:15:22.669 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[7162cdb1-06b0-486d-9ad5-0c82b401b941]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:15:22 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:15:22.670 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4d599401-30, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:15:22 np0005548731 nova_compute[232433]: 2025-12-06 07:15:22.672 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:15:22 np0005548731 kernel: tap4d599401-30: left promiscuous mode
Dec  6 02:15:22 np0005548731 nova_compute[232433]: 2025-12-06 07:15:22.689 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:15:22 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:15:22.691 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[91dbea9b-e35f-4789-bd4a-3fd64b817e5a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:15:22 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:15:22.704 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[6805c7d7-ada5-4fd3-b823-6092d58021c2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:15:22 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:15:22.706 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[25c5977f-d8c4-4456-922c-357f3e168ab9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:15:22 np0005548731 nova_compute[232433]: 2025-12-06 07:15:22.720 232437 DEBUG nova.compute.manager [req-f00a9c6f-3ed9-4a57-a251-b638017c681a req-601f6395-9bc6-4bde-9258-bb694b17993a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: c8403a0c-2fe6-48fe-91af-ec5aca71e12d] Received event network-vif-unplugged-a599f1a0-5413-4dc9-9ae4-d7ba512d761c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:15:22 np0005548731 nova_compute[232433]: 2025-12-06 07:15:22.721 232437 DEBUG oslo_concurrency.lockutils [req-f00a9c6f-3ed9-4a57-a251-b638017c681a req-601f6395-9bc6-4bde-9258-bb694b17993a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "c8403a0c-2fe6-48fe-91af-ec5aca71e12d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:15:22 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:15:22.720 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[ad701262-11b3-4723-b08b-4157d75821bd]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 556069, 'reachable_time': 24322, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 261618, 'error': None, 'target': 'ovnmeta-4d599401-3772-4e38-8cd2-d774d370af64', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:15:22 np0005548731 nova_compute[232433]: 2025-12-06 07:15:22.721 232437 DEBUG oslo_concurrency.lockutils [req-f00a9c6f-3ed9-4a57-a251-b638017c681a req-601f6395-9bc6-4bde-9258-bb694b17993a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "c8403a0c-2fe6-48fe-91af-ec5aca71e12d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:15:22 np0005548731 nova_compute[232433]: 2025-12-06 07:15:22.721 232437 DEBUG oslo_concurrency.lockutils [req-f00a9c6f-3ed9-4a57-a251-b638017c681a req-601f6395-9bc6-4bde-9258-bb694b17993a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "c8403a0c-2fe6-48fe-91af-ec5aca71e12d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:15:22 np0005548731 nova_compute[232433]: 2025-12-06 07:15:22.721 232437 DEBUG nova.compute.manager [req-f00a9c6f-3ed9-4a57-a251-b638017c681a req-601f6395-9bc6-4bde-9258-bb694b17993a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: c8403a0c-2fe6-48fe-91af-ec5aca71e12d] No waiting events found dispatching network-vif-unplugged-a599f1a0-5413-4dc9-9ae4-d7ba512d761c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 02:15:22 np0005548731 nova_compute[232433]: 2025-12-06 07:15:22.722 232437 WARNING nova.compute.manager [req-f00a9c6f-3ed9-4a57-a251-b638017c681a req-601f6395-9bc6-4bde-9258-bb694b17993a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: c8403a0c-2fe6-48fe-91af-ec5aca71e12d] Received unexpected event network-vif-unplugged-a599f1a0-5413-4dc9-9ae4-d7ba512d761c for instance with vm_state active and task_state powering-off.#033[00m
Dec  6 02:15:22 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:15:22.723 144079 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-4d599401-3772-4e38-8cd2-d774d370af64 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Dec  6 02:15:22 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:15:22.723 144079 DEBUG oslo.privsep.daemon [-] privsep: reply[c9b43d75-159a-44fd-8760-c450a1d63d32]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:15:22 np0005548731 systemd[1]: run-netns-ovnmeta\x2d4d599401\x2d3772\x2d4e38\x2d8cd2\x2dd774d370af64.mount: Deactivated successfully.
Dec  6 02:15:22 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:15:22 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:15:22 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:15:22.739 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:15:22 np0005548731 nova_compute[232433]: 2025-12-06 07:15:22.897 232437 DEBUG oslo_concurrency.lockutils [None req-566389a6-c8af-44cc-a025-8845cbfa2883 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Lock "c8403a0c-2fe6-48fe-91af-ec5aca71e12d" "released" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: held 15.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:15:23 np0005548731 nova_compute[232433]: 2025-12-06 07:15:23.510 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:15:24 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:15:24 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:15:24 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:15:24.415 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:15:24 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:15:24 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:15:24 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:15:24.743 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:15:24 np0005548731 nova_compute[232433]: 2025-12-06 07:15:24.903 232437 DEBUG nova.compute.manager [req-73fdaaee-e636-4576-8ce7-af772b361dc3 req-9f8c900c-1a59-4c86-8c71-5cfd8902cdf4 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: c8403a0c-2fe6-48fe-91af-ec5aca71e12d] Received event network-vif-plugged-a599f1a0-5413-4dc9-9ae4-d7ba512d761c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:15:24 np0005548731 nova_compute[232433]: 2025-12-06 07:15:24.904 232437 DEBUG oslo_concurrency.lockutils [req-73fdaaee-e636-4576-8ce7-af772b361dc3 req-9f8c900c-1a59-4c86-8c71-5cfd8902cdf4 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "c8403a0c-2fe6-48fe-91af-ec5aca71e12d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:15:24 np0005548731 nova_compute[232433]: 2025-12-06 07:15:24.904 232437 DEBUG oslo_concurrency.lockutils [req-73fdaaee-e636-4576-8ce7-af772b361dc3 req-9f8c900c-1a59-4c86-8c71-5cfd8902cdf4 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "c8403a0c-2fe6-48fe-91af-ec5aca71e12d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:15:24 np0005548731 nova_compute[232433]: 2025-12-06 07:15:24.905 232437 DEBUG oslo_concurrency.lockutils [req-73fdaaee-e636-4576-8ce7-af772b361dc3 req-9f8c900c-1a59-4c86-8c71-5cfd8902cdf4 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "c8403a0c-2fe6-48fe-91af-ec5aca71e12d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:15:24 np0005548731 nova_compute[232433]: 2025-12-06 07:15:24.905 232437 DEBUG nova.compute.manager [req-73fdaaee-e636-4576-8ce7-af772b361dc3 req-9f8c900c-1a59-4c86-8c71-5cfd8902cdf4 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: c8403a0c-2fe6-48fe-91af-ec5aca71e12d] No waiting events found dispatching network-vif-plugged-a599f1a0-5413-4dc9-9ae4-d7ba512d761c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 02:15:24 np0005548731 nova_compute[232433]: 2025-12-06 07:15:24.905 232437 WARNING nova.compute.manager [req-73fdaaee-e636-4576-8ce7-af772b361dc3 req-9f8c900c-1a59-4c86-8c71-5cfd8902cdf4 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: c8403a0c-2fe6-48fe-91af-ec5aca71e12d] Received unexpected event network-vif-plugged-a599f1a0-5413-4dc9-9ae4-d7ba512d761c for instance with vm_state stopped and task_state None.#033[00m
Dec  6 02:15:25 np0005548731 nova_compute[232433]: 2025-12-06 07:15:25.533 232437 DEBUG nova.objects.instance [None req-3a5bc6d2-3dea-4380-b73f-ec980ec46271 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Lazy-loading 'flavor' on Instance uuid c8403a0c-2fe6-48fe-91af-ec5aca71e12d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:15:25 np0005548731 nova_compute[232433]: 2025-12-06 07:15:25.564 232437 DEBUG oslo_concurrency.lockutils [None req-3a5bc6d2-3dea-4380-b73f-ec980ec46271 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Acquiring lock "refresh_cache-c8403a0c-2fe6-48fe-91af-ec5aca71e12d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 02:15:25 np0005548731 nova_compute[232433]: 2025-12-06 07:15:25.565 232437 DEBUG oslo_concurrency.lockutils [None req-3a5bc6d2-3dea-4380-b73f-ec980ec46271 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Acquired lock "refresh_cache-c8403a0c-2fe6-48fe-91af-ec5aca71e12d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 02:15:25 np0005548731 nova_compute[232433]: 2025-12-06 07:15:25.566 232437 DEBUG nova.network.neutron [None req-3a5bc6d2-3dea-4380-b73f-ec980ec46271 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] [instance: c8403a0c-2fe6-48fe-91af-ec5aca71e12d] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec  6 02:15:25 np0005548731 nova_compute[232433]: 2025-12-06 07:15:25.566 232437 DEBUG nova.objects.instance [None req-3a5bc6d2-3dea-4380-b73f-ec980ec46271 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Lazy-loading 'info_cache' on Instance uuid c8403a0c-2fe6-48fe-91af-ec5aca71e12d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:15:26 np0005548731 nova_compute[232433]: 2025-12-06 07:15:26.051 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:15:26 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:15:26 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:15:26 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:15:26.417 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:15:26 np0005548731 nova_compute[232433]: 2025-12-06 07:15:26.688 232437 DEBUG nova.network.neutron [None req-3a5bc6d2-3dea-4380-b73f-ec980ec46271 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] [instance: c8403a0c-2fe6-48fe-91af-ec5aca71e12d] Updating instance_info_cache with network_info: [{"id": "a599f1a0-5413-4dc9-9ae4-d7ba512d761c", "address": "fa:16:3e:9b:0b:0a", "network": {"id": "4d599401-3772-4e38-8cd2-d774d370af64", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-809610913-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.185", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "929e2be1488d4b80b7ad8946093a6abe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa599f1a0-54", "ovs_interfaceid": "a599f1a0-5413-4dc9-9ae4-d7ba512d761c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 02:15:26 np0005548731 nova_compute[232433]: 2025-12-06 07:15:26.706 232437 DEBUG oslo_concurrency.lockutils [None req-3a5bc6d2-3dea-4380-b73f-ec980ec46271 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Releasing lock "refresh_cache-c8403a0c-2fe6-48fe-91af-ec5aca71e12d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 02:15:26 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:15:26 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:15:26 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:15:26.746 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:15:26 np0005548731 nova_compute[232433]: 2025-12-06 07:15:26.750 232437 INFO nova.virt.libvirt.driver [-] [instance: c8403a0c-2fe6-48fe-91af-ec5aca71e12d] Instance destroyed successfully.#033[00m
Dec  6 02:15:26 np0005548731 nova_compute[232433]: 2025-12-06 07:15:26.751 232437 DEBUG nova.objects.instance [None req-3a5bc6d2-3dea-4380-b73f-ec980ec46271 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Lazy-loading 'numa_topology' on Instance uuid c8403a0c-2fe6-48fe-91af-ec5aca71e12d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:15:26 np0005548731 nova_compute[232433]: 2025-12-06 07:15:26.771 232437 DEBUG nova.objects.instance [None req-3a5bc6d2-3dea-4380-b73f-ec980ec46271 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Lazy-loading 'resources' on Instance uuid c8403a0c-2fe6-48fe-91af-ec5aca71e12d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:15:26 np0005548731 nova_compute[232433]: 2025-12-06 07:15:26.785 232437 DEBUG nova.virt.libvirt.vif [None req-3a5bc6d2-3dea-4380-b73f-ec980ec46271 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T07:14:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-893709654',display_name='tempest-ServerActionsTestJSON-server-893709654',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-893709654',id=68,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAYy9PI2opG1Yb015LzaQaZHiAr4KsuqNy5RLRivgn9w0frXJzdA9SLIokq/TNHsTv+OZ3SzlEhSSm/zy2gaUVX2tVfQksdYXi87Z2HYYYX2anFBfTxIFgh3j22gU5Usow==',key_name='tempest-keypair-1101896810',keypairs=<?>,launch_index=0,launched_at=2025-12-06T07:14:31Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='929e2be1488d4b80b7ad8946093a6abe',ramdisk_id='',reservation_id='r-klri94j0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-1877526843',owner_user_name='tempest-ServerActionsTestJSON-1877526843-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T07:15:22Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='627c36bb63534e52a4b1d5adf47e6ffd',uuid=c8403a0c-2fe6-48fe-91af-ec5aca71e12d,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "a599f1a0-5413-4dc9-9ae4-d7ba512d761c", "address": "fa:16:3e:9b:0b:0a", "network": {"id": "4d599401-3772-4e38-8cd2-d774d370af64", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-809610913-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": 
{"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.185", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "929e2be1488d4b80b7ad8946093a6abe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa599f1a0-54", "ovs_interfaceid": "a599f1a0-5413-4dc9-9ae4-d7ba512d761c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec  6 02:15:26 np0005548731 nova_compute[232433]: 2025-12-06 07:15:26.785 232437 DEBUG nova.network.os_vif_util [None req-3a5bc6d2-3dea-4380-b73f-ec980ec46271 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Converting VIF {"id": "a599f1a0-5413-4dc9-9ae4-d7ba512d761c", "address": "fa:16:3e:9b:0b:0a", "network": {"id": "4d599401-3772-4e38-8cd2-d774d370af64", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-809610913-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.185", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "929e2be1488d4b80b7ad8946093a6abe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa599f1a0-54", "ovs_interfaceid": "a599f1a0-5413-4dc9-9ae4-d7ba512d761c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 02:15:26 np0005548731 nova_compute[232433]: 2025-12-06 07:15:26.786 232437 DEBUG nova.network.os_vif_util [None req-3a5bc6d2-3dea-4380-b73f-ec980ec46271 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9b:0b:0a,bridge_name='br-int',has_traffic_filtering=True,id=a599f1a0-5413-4dc9-9ae4-d7ba512d761c,network=Network(4d599401-3772-4e38-8cd2-d774d370af64),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa599f1a0-54') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 02:15:26 np0005548731 nova_compute[232433]: 2025-12-06 07:15:26.787 232437 DEBUG os_vif [None req-3a5bc6d2-3dea-4380-b73f-ec980ec46271 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:9b:0b:0a,bridge_name='br-int',has_traffic_filtering=True,id=a599f1a0-5413-4dc9-9ae4-d7ba512d761c,network=Network(4d599401-3772-4e38-8cd2-d774d370af64),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa599f1a0-54') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec  6 02:15:26 np0005548731 nova_compute[232433]: 2025-12-06 07:15:26.788 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:15:26 np0005548731 nova_compute[232433]: 2025-12-06 07:15:26.789 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa599f1a0-54, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:15:26 np0005548731 nova_compute[232433]: 2025-12-06 07:15:26.790 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:15:26 np0005548731 nova_compute[232433]: 2025-12-06 07:15:26.791 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:15:26 np0005548731 nova_compute[232433]: 2025-12-06 07:15:26.794 232437 INFO os_vif [None req-3a5bc6d2-3dea-4380-b73f-ec980ec46271 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:9b:0b:0a,bridge_name='br-int',has_traffic_filtering=True,id=a599f1a0-5413-4dc9-9ae4-d7ba512d761c,network=Network(4d599401-3772-4e38-8cd2-d774d370af64),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa599f1a0-54')#033[00m
Dec  6 02:15:26 np0005548731 nova_compute[232433]: 2025-12-06 07:15:26.800 232437 DEBUG nova.virt.libvirt.driver [None req-3a5bc6d2-3dea-4380-b73f-ec980ec46271 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] [instance: c8403a0c-2fe6-48fe-91af-ec5aca71e12d] Start _get_guest_xml network_info=[{"id": "a599f1a0-5413-4dc9-9ae4-d7ba512d761c", "address": "fa:16:3e:9b:0b:0a", "network": {"id": "4d599401-3772-4e38-8cd2-d774d370af64", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-809610913-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.185", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "929e2be1488d4b80b7ad8946093a6abe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa599f1a0-54", "ovs_interfaceid": "a599f1a0-5413-4dc9-9ae4-d7ba512d761c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'device_type': 'disk', 'encryption_secret_uuid': None, 'disk_bus': 'virtio', 'boot_index': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'encryption_options': None, 'size': 0, 'encrypted': False, 'image_id': '6efab05d-c7cf-4770-a5c3-c806a2739063'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec  6 02:15:26 np0005548731 nova_compute[232433]: 2025-12-06 07:15:26.804 232437 WARNING nova.virt.libvirt.driver [None req-3a5bc6d2-3dea-4380-b73f-ec980ec46271 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  6 02:15:26 np0005548731 nova_compute[232433]: 2025-12-06 07:15:26.811 232437 DEBUG nova.virt.libvirt.host [None req-3a5bc6d2-3dea-4380-b73f-ec980ec46271 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec  6 02:15:26 np0005548731 nova_compute[232433]: 2025-12-06 07:15:26.812 232437 DEBUG nova.virt.libvirt.host [None req-3a5bc6d2-3dea-4380-b73f-ec980ec46271 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec  6 02:15:26 np0005548731 nova_compute[232433]: 2025-12-06 07:15:26.815 232437 DEBUG nova.virt.libvirt.host [None req-3a5bc6d2-3dea-4380-b73f-ec980ec46271 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec  6 02:15:26 np0005548731 nova_compute[232433]: 2025-12-06 07:15:26.816 232437 DEBUG nova.virt.libvirt.host [None req-3a5bc6d2-3dea-4380-b73f-ec980ec46271 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec  6 02:15:26 np0005548731 nova_compute[232433]: 2025-12-06 07:15:26.817 232437 DEBUG nova.virt.libvirt.driver [None req-3a5bc6d2-3dea-4380-b73f-ec980ec46271 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec  6 02:15:26 np0005548731 nova_compute[232433]: 2025-12-06 07:15:26.818 232437 DEBUG nova.virt.hardware [None req-3a5bc6d2-3dea-4380-b73f-ec980ec46271 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-06T06:56:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='25848a18-11d9-4f11-80b5-5d005675c76d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec  6 02:15:26 np0005548731 nova_compute[232433]: 2025-12-06 07:15:26.818 232437 DEBUG nova.virt.hardware [None req-3a5bc6d2-3dea-4380-b73f-ec980ec46271 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec  6 02:15:26 np0005548731 nova_compute[232433]: 2025-12-06 07:15:26.818 232437 DEBUG nova.virt.hardware [None req-3a5bc6d2-3dea-4380-b73f-ec980ec46271 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec  6 02:15:26 np0005548731 nova_compute[232433]: 2025-12-06 07:15:26.819 232437 DEBUG nova.virt.hardware [None req-3a5bc6d2-3dea-4380-b73f-ec980ec46271 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec  6 02:15:26 np0005548731 nova_compute[232433]: 2025-12-06 07:15:26.819 232437 DEBUG nova.virt.hardware [None req-3a5bc6d2-3dea-4380-b73f-ec980ec46271 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec  6 02:15:26 np0005548731 nova_compute[232433]: 2025-12-06 07:15:26.819 232437 DEBUG nova.virt.hardware [None req-3a5bc6d2-3dea-4380-b73f-ec980ec46271 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec  6 02:15:26 np0005548731 nova_compute[232433]: 2025-12-06 07:15:26.819 232437 DEBUG nova.virt.hardware [None req-3a5bc6d2-3dea-4380-b73f-ec980ec46271 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec  6 02:15:26 np0005548731 nova_compute[232433]: 2025-12-06 07:15:26.820 232437 DEBUG nova.virt.hardware [None req-3a5bc6d2-3dea-4380-b73f-ec980ec46271 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec  6 02:15:26 np0005548731 nova_compute[232433]: 2025-12-06 07:15:26.820 232437 DEBUG nova.virt.hardware [None req-3a5bc6d2-3dea-4380-b73f-ec980ec46271 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec  6 02:15:26 np0005548731 nova_compute[232433]: 2025-12-06 07:15:26.820 232437 DEBUG nova.virt.hardware [None req-3a5bc6d2-3dea-4380-b73f-ec980ec46271 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec  6 02:15:26 np0005548731 nova_compute[232433]: 2025-12-06 07:15:26.820 232437 DEBUG nova.virt.hardware [None req-3a5bc6d2-3dea-4380-b73f-ec980ec46271 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec  6 02:15:26 np0005548731 nova_compute[232433]: 2025-12-06 07:15:26.821 232437 DEBUG nova.objects.instance [None req-3a5bc6d2-3dea-4380-b73f-ec980ec46271 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Lazy-loading 'vcpu_model' on Instance uuid c8403a0c-2fe6-48fe-91af-ec5aca71e12d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:15:26 np0005548731 nova_compute[232433]: 2025-12-06 07:15:26.835 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:15:26 np0005548731 nova_compute[232433]: 2025-12-06 07:15:26.839 232437 DEBUG oslo_concurrency.processutils [None req-3a5bc6d2-3dea-4380-b73f-ec980ec46271 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:15:27 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e252 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:15:27 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  6 02:15:27 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2013365150' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  6 02:15:27 np0005548731 nova_compute[232433]: 2025-12-06 07:15:27.280 232437 DEBUG oslo_concurrency.processutils [None req-3a5bc6d2-3dea-4380-b73f-ec980ec46271 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.441s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:15:27 np0005548731 nova_compute[232433]: 2025-12-06 07:15:27.313 232437 DEBUG oslo_concurrency.processutils [None req-3a5bc6d2-3dea-4380-b73f-ec980ec46271 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:15:27 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  6 02:15:27 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/4084160538' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  6 02:15:28 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:15:28 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:15:28 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:15:28.419 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:15:28 np0005548731 nova_compute[232433]: 2025-12-06 07:15:28.731 232437 DEBUG oslo_concurrency.processutils [None req-3a5bc6d2-3dea-4380-b73f-ec980ec46271 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.418s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:15:28 np0005548731 nova_compute[232433]: 2025-12-06 07:15:28.732 232437 DEBUG nova.virt.libvirt.vif [None req-3a5bc6d2-3dea-4380-b73f-ec980ec46271 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T07:14:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-893709654',display_name='tempest-ServerActionsTestJSON-server-893709654',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-893709654',id=68,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAYy9PI2opG1Yb015LzaQaZHiAr4KsuqNy5RLRivgn9w0frXJzdA9SLIokq/TNHsTv+OZ3SzlEhSSm/zy2gaUVX2tVfQksdYXi87Z2HYYYX2anFBfTxIFgh3j22gU5Usow==',key_name='tempest-keypair-1101896810',keypairs=<?>,launch_index=0,launched_at=2025-12-06T07:14:31Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='929e2be1488d4b80b7ad8946093a6abe',ramdisk_id='',reservation_id='r-klri94j0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-1877526843',owner_user_name='tempest-ServerActionsTestJSON-1877526843-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T07:15:22Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='627c36bb63534e52a4b1d5adf47e6ffd',uuid=c8403a0c-2fe6-48fe-91af-ec5aca71e12d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "a599f1a0-5413-4dc9-9ae4-d7ba512d761c", "address": "fa:16:3e:9b:0b:0a", "network": {"id": "4d599401-3772-4e38-8cd2-d774d370af64", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-809610913-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.185", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "929e2be1488d4b80b7ad8946093a6abe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa599f1a0-54", "ovs_interfaceid": "a599f1a0-5413-4dc9-9ae4-d7ba512d761c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec  6 02:15:28 np0005548731 nova_compute[232433]: 2025-12-06 07:15:28.733 232437 DEBUG nova.network.os_vif_util [None req-3a5bc6d2-3dea-4380-b73f-ec980ec46271 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Converting VIF {"id": "a599f1a0-5413-4dc9-9ae4-d7ba512d761c", "address": "fa:16:3e:9b:0b:0a", "network": {"id": "4d599401-3772-4e38-8cd2-d774d370af64", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-809610913-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.185", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "929e2be1488d4b80b7ad8946093a6abe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa599f1a0-54", "ovs_interfaceid": "a599f1a0-5413-4dc9-9ae4-d7ba512d761c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 02:15:28 np0005548731 nova_compute[232433]: 2025-12-06 07:15:28.733 232437 DEBUG nova.network.os_vif_util [None req-3a5bc6d2-3dea-4380-b73f-ec980ec46271 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9b:0b:0a,bridge_name='br-int',has_traffic_filtering=True,id=a599f1a0-5413-4dc9-9ae4-d7ba512d761c,network=Network(4d599401-3772-4e38-8cd2-d774d370af64),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa599f1a0-54') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 02:15:28 np0005548731 nova_compute[232433]: 2025-12-06 07:15:28.734 232437 DEBUG nova.objects.instance [None req-3a5bc6d2-3dea-4380-b73f-ec980ec46271 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Lazy-loading 'pci_devices' on Instance uuid c8403a0c-2fe6-48fe-91af-ec5aca71e12d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:15:28 np0005548731 nova_compute[232433]: 2025-12-06 07:15:28.748 232437 DEBUG nova.virt.libvirt.driver [None req-3a5bc6d2-3dea-4380-b73f-ec980ec46271 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] [instance: c8403a0c-2fe6-48fe-91af-ec5aca71e12d] End _get_guest_xml xml=<domain type="kvm">
Dec  6 02:15:28 np0005548731 nova_compute[232433]:  <uuid>c8403a0c-2fe6-48fe-91af-ec5aca71e12d</uuid>
Dec  6 02:15:28 np0005548731 nova_compute[232433]:  <name>instance-00000044</name>
Dec  6 02:15:28 np0005548731 nova_compute[232433]:  <memory>131072</memory>
Dec  6 02:15:28 np0005548731 nova_compute[232433]:  <vcpu>1</vcpu>
Dec  6 02:15:28 np0005548731 nova_compute[232433]:  <metadata>
Dec  6 02:15:28 np0005548731 nova_compute[232433]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec  6 02:15:28 np0005548731 nova_compute[232433]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec  6 02:15:28 np0005548731 nova_compute[232433]:      <nova:name>tempest-ServerActionsTestJSON-server-893709654</nova:name>
Dec  6 02:15:28 np0005548731 nova_compute[232433]:      <nova:creationTime>2025-12-06 07:15:26</nova:creationTime>
Dec  6 02:15:28 np0005548731 nova_compute[232433]:      <nova:flavor name="m1.nano">
Dec  6 02:15:28 np0005548731 nova_compute[232433]:        <nova:memory>128</nova:memory>
Dec  6 02:15:28 np0005548731 nova_compute[232433]:        <nova:disk>1</nova:disk>
Dec  6 02:15:28 np0005548731 nova_compute[232433]:        <nova:swap>0</nova:swap>
Dec  6 02:15:28 np0005548731 nova_compute[232433]:        <nova:ephemeral>0</nova:ephemeral>
Dec  6 02:15:28 np0005548731 nova_compute[232433]:        <nova:vcpus>1</nova:vcpus>
Dec  6 02:15:28 np0005548731 nova_compute[232433]:      </nova:flavor>
Dec  6 02:15:28 np0005548731 nova_compute[232433]:      <nova:owner>
Dec  6 02:15:28 np0005548731 nova_compute[232433]:        <nova:user uuid="627c36bb63534e52a4b1d5adf47e6ffd">tempest-ServerActionsTestJSON-1877526843-project-member</nova:user>
Dec  6 02:15:28 np0005548731 nova_compute[232433]:        <nova:project uuid="929e2be1488d4b80b7ad8946093a6abe">tempest-ServerActionsTestJSON-1877526843</nova:project>
Dec  6 02:15:28 np0005548731 nova_compute[232433]:      </nova:owner>
Dec  6 02:15:28 np0005548731 nova_compute[232433]:      <nova:root type="image" uuid="6efab05d-c7cf-4770-a5c3-c806a2739063"/>
Dec  6 02:15:28 np0005548731 nova_compute[232433]:      <nova:ports>
Dec  6 02:15:28 np0005548731 nova_compute[232433]:        <nova:port uuid="a599f1a0-5413-4dc9-9ae4-d7ba512d761c">
Dec  6 02:15:28 np0005548731 nova_compute[232433]:          <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Dec  6 02:15:28 np0005548731 nova_compute[232433]:        </nova:port>
Dec  6 02:15:28 np0005548731 nova_compute[232433]:      </nova:ports>
Dec  6 02:15:28 np0005548731 nova_compute[232433]:    </nova:instance>
Dec  6 02:15:28 np0005548731 nova_compute[232433]:  </metadata>
Dec  6 02:15:28 np0005548731 nova_compute[232433]:  <sysinfo type="smbios">
Dec  6 02:15:28 np0005548731 nova_compute[232433]:    <system>
Dec  6 02:15:28 np0005548731 nova_compute[232433]:      <entry name="manufacturer">RDO</entry>
Dec  6 02:15:28 np0005548731 nova_compute[232433]:      <entry name="product">OpenStack Compute</entry>
Dec  6 02:15:28 np0005548731 nova_compute[232433]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec  6 02:15:28 np0005548731 nova_compute[232433]:      <entry name="serial">c8403a0c-2fe6-48fe-91af-ec5aca71e12d</entry>
Dec  6 02:15:28 np0005548731 nova_compute[232433]:      <entry name="uuid">c8403a0c-2fe6-48fe-91af-ec5aca71e12d</entry>
Dec  6 02:15:28 np0005548731 nova_compute[232433]:      <entry name="family">Virtual Machine</entry>
Dec  6 02:15:28 np0005548731 nova_compute[232433]:    </system>
Dec  6 02:15:28 np0005548731 nova_compute[232433]:  </sysinfo>
Dec  6 02:15:28 np0005548731 nova_compute[232433]:  <os>
Dec  6 02:15:28 np0005548731 nova_compute[232433]:    <type arch="x86_64" machine="q35">hvm</type>
Dec  6 02:15:28 np0005548731 nova_compute[232433]:    <boot dev="hd"/>
Dec  6 02:15:28 np0005548731 nova_compute[232433]:    <smbios mode="sysinfo"/>
Dec  6 02:15:28 np0005548731 nova_compute[232433]:  </os>
Dec  6 02:15:28 np0005548731 nova_compute[232433]:  <features>
Dec  6 02:15:28 np0005548731 nova_compute[232433]:    <acpi/>
Dec  6 02:15:28 np0005548731 nova_compute[232433]:    <apic/>
Dec  6 02:15:28 np0005548731 nova_compute[232433]:    <vmcoreinfo/>
Dec  6 02:15:28 np0005548731 nova_compute[232433]:  </features>
Dec  6 02:15:28 np0005548731 nova_compute[232433]:  <clock offset="utc">
Dec  6 02:15:28 np0005548731 nova_compute[232433]:    <timer name="pit" tickpolicy="delay"/>
Dec  6 02:15:28 np0005548731 nova_compute[232433]:    <timer name="rtc" tickpolicy="catchup"/>
Dec  6 02:15:28 np0005548731 nova_compute[232433]:    <timer name="hpet" present="no"/>
Dec  6 02:15:28 np0005548731 nova_compute[232433]:  </clock>
Dec  6 02:15:28 np0005548731 nova_compute[232433]:  <cpu mode="custom" match="exact">
Dec  6 02:15:28 np0005548731 nova_compute[232433]:    <model>Nehalem</model>
Dec  6 02:15:28 np0005548731 nova_compute[232433]:    <topology sockets="1" cores="1" threads="1"/>
Dec  6 02:15:28 np0005548731 nova_compute[232433]:  </cpu>
Dec  6 02:15:28 np0005548731 nova_compute[232433]:  <devices>
Dec  6 02:15:28 np0005548731 nova_compute[232433]:    <disk type="network" device="disk">
Dec  6 02:15:28 np0005548731 nova_compute[232433]:      <driver type="raw" cache="none"/>
Dec  6 02:15:28 np0005548731 nova_compute[232433]:      <source protocol="rbd" name="vms/c8403a0c-2fe6-48fe-91af-ec5aca71e12d_disk">
Dec  6 02:15:28 np0005548731 nova_compute[232433]:        <host name="192.168.122.100" port="6789"/>
Dec  6 02:15:28 np0005548731 nova_compute[232433]:        <host name="192.168.122.102" port="6789"/>
Dec  6 02:15:28 np0005548731 nova_compute[232433]:        <host name="192.168.122.101" port="6789"/>
Dec  6 02:15:28 np0005548731 nova_compute[232433]:      </source>
Dec  6 02:15:28 np0005548731 nova_compute[232433]:      <auth username="openstack">
Dec  6 02:15:28 np0005548731 nova_compute[232433]:        <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec  6 02:15:28 np0005548731 nova_compute[232433]:      </auth>
Dec  6 02:15:28 np0005548731 nova_compute[232433]:      <target dev="vda" bus="virtio"/>
Dec  6 02:15:28 np0005548731 nova_compute[232433]:    </disk>
Dec  6 02:15:28 np0005548731 nova_compute[232433]:    <disk type="network" device="cdrom">
Dec  6 02:15:28 np0005548731 nova_compute[232433]:      <driver type="raw" cache="none"/>
Dec  6 02:15:28 np0005548731 nova_compute[232433]:      <source protocol="rbd" name="vms/c8403a0c-2fe6-48fe-91af-ec5aca71e12d_disk.config">
Dec  6 02:15:28 np0005548731 nova_compute[232433]:        <host name="192.168.122.100" port="6789"/>
Dec  6 02:15:28 np0005548731 nova_compute[232433]:        <host name="192.168.122.102" port="6789"/>
Dec  6 02:15:28 np0005548731 nova_compute[232433]:        <host name="192.168.122.101" port="6789"/>
Dec  6 02:15:28 np0005548731 nova_compute[232433]:      </source>
Dec  6 02:15:28 np0005548731 nova_compute[232433]:      <auth username="openstack">
Dec  6 02:15:28 np0005548731 nova_compute[232433]:        <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec  6 02:15:28 np0005548731 nova_compute[232433]:      </auth>
Dec  6 02:15:28 np0005548731 nova_compute[232433]:      <target dev="sda" bus="sata"/>
Dec  6 02:15:28 np0005548731 nova_compute[232433]:    </disk>
Dec  6 02:15:28 np0005548731 nova_compute[232433]:    <interface type="ethernet">
Dec  6 02:15:28 np0005548731 nova_compute[232433]:      <mac address="fa:16:3e:9b:0b:0a"/>
Dec  6 02:15:28 np0005548731 nova_compute[232433]:      <model type="virtio"/>
Dec  6 02:15:28 np0005548731 nova_compute[232433]:      <driver name="vhost" rx_queue_size="512"/>
Dec  6 02:15:28 np0005548731 nova_compute[232433]:      <mtu size="1442"/>
Dec  6 02:15:28 np0005548731 nova_compute[232433]:      <target dev="tapa599f1a0-54"/>
Dec  6 02:15:28 np0005548731 nova_compute[232433]:    </interface>
Dec  6 02:15:28 np0005548731 nova_compute[232433]:    <serial type="pty">
Dec  6 02:15:28 np0005548731 nova_compute[232433]:      <log file="/var/lib/nova/instances/c8403a0c-2fe6-48fe-91af-ec5aca71e12d/console.log" append="off"/>
Dec  6 02:15:28 np0005548731 nova_compute[232433]:    </serial>
Dec  6 02:15:28 np0005548731 nova_compute[232433]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec  6 02:15:28 np0005548731 nova_compute[232433]:    <video>
Dec  6 02:15:28 np0005548731 nova_compute[232433]:      <model type="virtio"/>
Dec  6 02:15:28 np0005548731 nova_compute[232433]:    </video>
Dec  6 02:15:28 np0005548731 nova_compute[232433]:    <input type="tablet" bus="usb"/>
Dec  6 02:15:28 np0005548731 nova_compute[232433]:    <input type="keyboard" bus="usb"/>
Dec  6 02:15:28 np0005548731 nova_compute[232433]:    <rng model="virtio">
Dec  6 02:15:28 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:15:28 np0005548731 nova_compute[232433]:      <backend model="random">/dev/urandom</backend>
Dec  6 02:15:28 np0005548731 nova_compute[232433]:    </rng>
Dec  6 02:15:28 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root"/>
Dec  6 02:15:28 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:15:28 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:15:28 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:15:28 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:15:28 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:15:28 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:15:28 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:15:28 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:15:28 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:15:28 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:15:28 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:15:28 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:15:28 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:15:28 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:15:28 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:15:28 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:15:28 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:15:28 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:15:28 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:15:28 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:15:28 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:15:28 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:15:28 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:15:28 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:15:28 np0005548731 nova_compute[232433]:    <controller type="usb" index="0"/>
Dec  6 02:15:28 np0005548731 nova_compute[232433]:    <memballoon model="virtio">
Dec  6 02:15:28 np0005548731 nova_compute[232433]:      <stats period="10"/>
Dec  6 02:15:28 np0005548731 nova_compute[232433]:    </memballoon>
Dec  6 02:15:28 np0005548731 nova_compute[232433]:  </devices>
Dec  6 02:15:28 np0005548731 nova_compute[232433]: </domain>
Dec  6 02:15:28 np0005548731 nova_compute[232433]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec  6 02:15:28 np0005548731 nova_compute[232433]: 2025-12-06 07:15:28.749 232437 DEBUG nova.virt.libvirt.driver [None req-3a5bc6d2-3dea-4380-b73f-ec980ec46271 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] skipping disk for instance-00000044 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  6 02:15:28 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:15:28 np0005548731 nova_compute[232433]: 2025-12-06 07:15:28.749 232437 DEBUG nova.virt.libvirt.driver [None req-3a5bc6d2-3dea-4380-b73f-ec980ec46271 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] skipping disk for instance-00000044 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  6 02:15:28 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:15:28.748 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:15:28 np0005548731 nova_compute[232433]: 2025-12-06 07:15:28.750 232437 DEBUG nova.virt.libvirt.vif [None req-3a5bc6d2-3dea-4380-b73f-ec980ec46271 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T07:14:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-893709654',display_name='tempest-ServerActionsTestJSON-server-893709654',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-893709654',id=68,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAYy9PI2opG1Yb015LzaQaZHiAr4KsuqNy5RLRivgn9w0frXJzdA9SLIokq/TNHsTv+OZ3SzlEhSSm/zy2gaUVX2tVfQksdYXi87Z2HYYYX2anFBfTxIFgh3j22gU5Usow==',key_name='tempest-keypair-1101896810',keypairs=<?>,launch_index=0,launched_at=2025-12-06T07:14:31Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=<?>,power_state=4,progress=0,project_id='929e2be1488d4b80b7ad8946093a6abe',ramdisk_id='',reservation_id='r-klri94j0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-1877526843',owner_user_name='tempest-ServerActionsTestJSON-1877526843-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T07:15:22Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='627c36bb63534e52a4b1d5adf47e6ffd',uuid=c8403a0c-2fe6-48fe-91af-ec5aca71e12d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "a599f1a0-5413-4dc9-9ae4-d7ba512d761c", "address": "fa:16:3e:9b:0b:0a", "network": {"id": "4d599401-3772-4e38-8cd2-d774d370af64", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-809610913-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.185", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "929e2be1488d4b80b7ad8946093a6abe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa599f1a0-54", "ovs_interfaceid": "a599f1a0-5413-4dc9-9ae4-d7ba512d761c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec  6 02:15:28 np0005548731 nova_compute[232433]: 2025-12-06 07:15:28.750 232437 DEBUG nova.network.os_vif_util [None req-3a5bc6d2-3dea-4380-b73f-ec980ec46271 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Converting VIF {"id": "a599f1a0-5413-4dc9-9ae4-d7ba512d761c", "address": "fa:16:3e:9b:0b:0a", "network": {"id": "4d599401-3772-4e38-8cd2-d774d370af64", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-809610913-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.185", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "929e2be1488d4b80b7ad8946093a6abe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa599f1a0-54", "ovs_interfaceid": "a599f1a0-5413-4dc9-9ae4-d7ba512d761c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 02:15:28 np0005548731 nova_compute[232433]: 2025-12-06 07:15:28.751 232437 DEBUG nova.network.os_vif_util [None req-3a5bc6d2-3dea-4380-b73f-ec980ec46271 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9b:0b:0a,bridge_name='br-int',has_traffic_filtering=True,id=a599f1a0-5413-4dc9-9ae4-d7ba512d761c,network=Network(4d599401-3772-4e38-8cd2-d774d370af64),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa599f1a0-54') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 02:15:28 np0005548731 nova_compute[232433]: 2025-12-06 07:15:28.751 232437 DEBUG os_vif [None req-3a5bc6d2-3dea-4380-b73f-ec980ec46271 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:9b:0b:0a,bridge_name='br-int',has_traffic_filtering=True,id=a599f1a0-5413-4dc9-9ae4-d7ba512d761c,network=Network(4d599401-3772-4e38-8cd2-d774d370af64),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa599f1a0-54') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec  6 02:15:28 np0005548731 nova_compute[232433]: 2025-12-06 07:15:28.751 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:15:28 np0005548731 nova_compute[232433]: 2025-12-06 07:15:28.752 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:15:28 np0005548731 nova_compute[232433]: 2025-12-06 07:15:28.752 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  6 02:15:28 np0005548731 nova_compute[232433]: 2025-12-06 07:15:28.755 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:15:28 np0005548731 nova_compute[232433]: 2025-12-06 07:15:28.755 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa599f1a0-54, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:15:28 np0005548731 nova_compute[232433]: 2025-12-06 07:15:28.756 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapa599f1a0-54, col_values=(('external_ids', {'iface-id': 'a599f1a0-5413-4dc9-9ae4-d7ba512d761c', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:9b:0b:0a', 'vm-uuid': 'c8403a0c-2fe6-48fe-91af-ec5aca71e12d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:15:28 np0005548731 nova_compute[232433]: 2025-12-06 07:15:28.757 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:15:28 np0005548731 NetworkManager[49182]: <info>  [1765005328.7579] manager: (tapa599f1a0-54): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/125)
Dec  6 02:15:28 np0005548731 nova_compute[232433]: 2025-12-06 07:15:28.759 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  6 02:15:28 np0005548731 nova_compute[232433]: 2025-12-06 07:15:28.764 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:15:28 np0005548731 nova_compute[232433]: 2025-12-06 07:15:28.765 232437 INFO os_vif [None req-3a5bc6d2-3dea-4380-b73f-ec980ec46271 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:9b:0b:0a,bridge_name='br-int',has_traffic_filtering=True,id=a599f1a0-5413-4dc9-9ae4-d7ba512d761c,network=Network(4d599401-3772-4e38-8cd2-d774d370af64),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa599f1a0-54')#033[00m
Dec  6 02:15:28 np0005548731 kernel: tapa599f1a0-54: entered promiscuous mode
Dec  6 02:15:28 np0005548731 nova_compute[232433]: 2025-12-06 07:15:28.834 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:15:28 np0005548731 ovn_controller[133927]: 2025-12-06T07:15:28Z|00220|binding|INFO|Claiming lport a599f1a0-5413-4dc9-9ae4-d7ba512d761c for this chassis.
Dec  6 02:15:28 np0005548731 ovn_controller[133927]: 2025-12-06T07:15:28Z|00221|binding|INFO|a599f1a0-5413-4dc9-9ae4-d7ba512d761c: Claiming fa:16:3e:9b:0b:0a 10.100.0.6
Dec  6 02:15:28 np0005548731 NetworkManager[49182]: <info>  [1765005328.8363] manager: (tapa599f1a0-54): new Tun device (/org/freedesktop/NetworkManager/Devices/126)
Dec  6 02:15:28 np0005548731 ovn_controller[133927]: 2025-12-06T07:15:28Z|00222|binding|INFO|Setting lport a599f1a0-5413-4dc9-9ae4-d7ba512d761c ovn-installed in OVS
Dec  6 02:15:28 np0005548731 nova_compute[232433]: 2025-12-06 07:15:28.851 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:15:28 np0005548731 nova_compute[232433]: 2025-12-06 07:15:28.853 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:15:28 np0005548731 ovn_controller[133927]: 2025-12-06T07:15:28Z|00223|binding|INFO|Setting lport a599f1a0-5413-4dc9-9ae4-d7ba512d761c up in Southbound
Dec  6 02:15:28 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:15:28.857 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9b:0b:0a 10.100.0.6'], port_security=['fa:16:3e:9b:0b:0a 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'c8403a0c-2fe6-48fe-91af-ec5aca71e12d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4d599401-3772-4e38-8cd2-d774d370af64', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '929e2be1488d4b80b7ad8946093a6abe', 'neutron:revision_number': '7', 'neutron:security_group_ids': '310d97ff-0e42-4be5-a68e-20cbdb7be60d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.185'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=222872e8-5260-47b5-883e-369af9b3a47f, chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], logical_port=a599f1a0-5413-4dc9-9ae4-d7ba512d761c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 02:15:28 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:15:28.858 143965 INFO neutron.agent.ovn.metadata.agent [-] Port a599f1a0-5413-4dc9-9ae4-d7ba512d761c in datapath 4d599401-3772-4e38-8cd2-d774d370af64 bound to our chassis#033[00m
Dec  6 02:15:28 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:15:28.859 143965 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 4d599401-3772-4e38-8cd2-d774d370af64#033[00m
Dec  6 02:15:28 np0005548731 systemd-machined[195355]: New machine qemu-30-instance-00000044.
Dec  6 02:15:28 np0005548731 systemd-udevd[261700]: Network interface NamePolicy= disabled on kernel command line.
Dec  6 02:15:28 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:15:28.869 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[e5ba56ec-93b1-4fab-9722-4b6f649c0b03]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:15:28 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:15:28.870 143965 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap4d599401-31 in ovnmeta-4d599401-3772-4e38-8cd2-d774d370af64 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Dec  6 02:15:28 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:15:28.873 236668 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap4d599401-30 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Dec  6 02:15:28 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:15:28.873 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[f875a41b-fe59-47f7-9ed4-eacfea0de646]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:15:28 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:15:28.874 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[e466a8c4-9e57-43de-b4fa-fd3fdb3daa3a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:15:28 np0005548731 systemd[1]: Started Virtual Machine qemu-30-instance-00000044.
Dec  6 02:15:28 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:15:28.883 144079 DEBUG oslo.privsep.daemon [-] privsep: reply[131906e7-e67b-4640-9936-09bd140a3c0d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:15:28 np0005548731 NetworkManager[49182]: <info>  [1765005328.8846] device (tapa599f1a0-54): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec  6 02:15:28 np0005548731 NetworkManager[49182]: <info>  [1765005328.8856] device (tapa599f1a0-54): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec  6 02:15:28 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:15:28.896 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[a01f489c-a42c-4046-b09c-5d2017feeea4]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:15:28 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:15:28.924 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[09a32350-439a-424b-88db-7880c2c89fe0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:15:28 np0005548731 systemd-udevd[261703]: Network interface NamePolicy= disabled on kernel command line.
Dec  6 02:15:28 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:15:28.930 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[0ab6db54-7892-46ba-8f9b-6078e4d6da44]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:15:28 np0005548731 NetworkManager[49182]: <info>  [1765005328.9312] manager: (tap4d599401-30): new Veth device (/org/freedesktop/NetworkManager/Devices/127)
Dec  6 02:15:28 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:15:28.959 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[c2fed019-3d8e-429e-99fc-7f50f0b2a171]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:15:28 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:15:28.962 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[e5aced6b-664f-4a6c-9024-3710089002b5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:15:28 np0005548731 NetworkManager[49182]: <info>  [1765005328.9819] device (tap4d599401-30): carrier: link connected
Dec  6 02:15:28 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:15:28.988 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[c2bfb3b1-a525-45a5-94ed-3e7db3409206]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:15:29 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:15:29.005 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[227885ba-82ce-40a6-9134-8cc731e40998]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4d599401-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:05:4c:b3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 76], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 559098, 'reachable_time': 20893, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 261732, 'error': None, 'target': 'ovnmeta-4d599401-3772-4e38-8cd2-d774d370af64', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:15:29 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:15:29.020 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[31920102-180f-42f9-81da-6c0a612e2fee]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe05:4cb3'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 559098, 'tstamp': 559098}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 261733, 'error': None, 'target': 'ovnmeta-4d599401-3772-4e38-8cd2-d774d370af64', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:15:29 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:15:29.036 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[80a1e4cf-71e8-463a-8616-d131ee964c33]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4d599401-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:05:4c:b3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 76], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 559098, 'reachable_time': 20893, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 261734, 'error': None, 'target': 'ovnmeta-4d599401-3772-4e38-8cd2-d774d370af64', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:15:29 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:15:29.064 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[c5388001-3265-4c02-a055-b0f8b7ca668b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:15:29 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:15:29.124 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[321adb1a-da1c-486f-bb43-9ac4e55a4eef]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:15:29 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:15:29.125 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4d599401-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:15:29 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:15:29.125 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  6 02:15:29 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:15:29.126 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4d599401-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:15:29 np0005548731 NetworkManager[49182]: <info>  [1765005329.1284] manager: (tap4d599401-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/128)
Dec  6 02:15:29 np0005548731 nova_compute[232433]: 2025-12-06 07:15:29.128 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:15:29 np0005548731 kernel: tap4d599401-30: entered promiscuous mode
Dec  6 02:15:29 np0005548731 nova_compute[232433]: 2025-12-06 07:15:29.130 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:15:29 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:15:29.133 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap4d599401-30, col_values=(('external_ids', {'iface-id': 'd5f15755-ab6a-4ce9-857e-63f6c0e19fd8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:15:29 np0005548731 nova_compute[232433]: 2025-12-06 07:15:29.134 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:15:29 np0005548731 ovn_controller[133927]: 2025-12-06T07:15:29Z|00224|binding|INFO|Releasing lport d5f15755-ab6a-4ce9-857e-63f6c0e19fd8 from this chassis (sb_readonly=0)
Dec  6 02:15:29 np0005548731 nova_compute[232433]: 2025-12-06 07:15:29.135 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:15:29 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:15:29.137 143965 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/4d599401-3772-4e38-8cd2-d774d370af64.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/4d599401-3772-4e38-8cd2-d774d370af64.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Dec  6 02:15:29 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:15:29.137 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[150871c4-40f9-45be-bc2c-fef9e7c2971c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:15:29 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:15:29.138 143965 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec  6 02:15:29 np0005548731 ovn_metadata_agent[143960]: global
Dec  6 02:15:29 np0005548731 ovn_metadata_agent[143960]:    log         /dev/log local0 debug
Dec  6 02:15:29 np0005548731 ovn_metadata_agent[143960]:    log-tag     haproxy-metadata-proxy-4d599401-3772-4e38-8cd2-d774d370af64
Dec  6 02:15:29 np0005548731 ovn_metadata_agent[143960]:    user        root
Dec  6 02:15:29 np0005548731 ovn_metadata_agent[143960]:    group       root
Dec  6 02:15:29 np0005548731 ovn_metadata_agent[143960]:    maxconn     1024
Dec  6 02:15:29 np0005548731 ovn_metadata_agent[143960]:    pidfile     /var/lib/neutron/external/pids/4d599401-3772-4e38-8cd2-d774d370af64.pid.haproxy
Dec  6 02:15:29 np0005548731 ovn_metadata_agent[143960]:    daemon
Dec  6 02:15:29 np0005548731 ovn_metadata_agent[143960]: 
Dec  6 02:15:29 np0005548731 ovn_metadata_agent[143960]: defaults
Dec  6 02:15:29 np0005548731 ovn_metadata_agent[143960]:    log global
Dec  6 02:15:29 np0005548731 ovn_metadata_agent[143960]:    mode http
Dec  6 02:15:29 np0005548731 ovn_metadata_agent[143960]:    option httplog
Dec  6 02:15:29 np0005548731 ovn_metadata_agent[143960]:    option dontlognull
Dec  6 02:15:29 np0005548731 ovn_metadata_agent[143960]:    option http-server-close
Dec  6 02:15:29 np0005548731 ovn_metadata_agent[143960]:    option forwardfor
Dec  6 02:15:29 np0005548731 ovn_metadata_agent[143960]:    retries                 3
Dec  6 02:15:29 np0005548731 ovn_metadata_agent[143960]:    timeout http-request    30s
Dec  6 02:15:29 np0005548731 ovn_metadata_agent[143960]:    timeout connect         30s
Dec  6 02:15:29 np0005548731 ovn_metadata_agent[143960]:    timeout client          32s
Dec  6 02:15:29 np0005548731 ovn_metadata_agent[143960]:    timeout server          32s
Dec  6 02:15:29 np0005548731 ovn_metadata_agent[143960]:    timeout http-keep-alive 30s
Dec  6 02:15:29 np0005548731 ovn_metadata_agent[143960]: 
Dec  6 02:15:29 np0005548731 ovn_metadata_agent[143960]: 
Dec  6 02:15:29 np0005548731 ovn_metadata_agent[143960]: listen listener
Dec  6 02:15:29 np0005548731 ovn_metadata_agent[143960]:    bind 169.254.169.254:80
Dec  6 02:15:29 np0005548731 ovn_metadata_agent[143960]:    server metadata /var/lib/neutron/metadata_proxy
Dec  6 02:15:29 np0005548731 ovn_metadata_agent[143960]:    http-request add-header X-OVN-Network-ID 4d599401-3772-4e38-8cd2-d774d370af64
Dec  6 02:15:29 np0005548731 ovn_metadata_agent[143960]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Dec  6 02:15:29 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:15:29.139 143965 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-4d599401-3772-4e38-8cd2-d774d370af64', 'env', 'PROCESS_TAG=haproxy-4d599401-3772-4e38-8cd2-d774d370af64', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/4d599401-3772-4e38-8cd2-d774d370af64.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Dec  6 02:15:29 np0005548731 nova_compute[232433]: 2025-12-06 07:15:29.149 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:15:29 np0005548731 podman[261784]: 2025-12-06 07:15:29.491273587 +0000 UTC m=+0.045851132 container create 90f130dae902f2ae743d2267cf75563643b1411c736d36f613e3346ec70d4246 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4d599401-3772-4e38-8cd2-d774d370af64, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  6 02:15:29 np0005548731 systemd[1]: Started libpod-conmon-90f130dae902f2ae743d2267cf75563643b1411c736d36f613e3346ec70d4246.scope.
Dec  6 02:15:29 np0005548731 systemd[1]: Started libcrun container.
Dec  6 02:15:29 np0005548731 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/540f9a3122af7249133e29e4aa48cbe38f37a2fb666c667065a24788e7d64ab9/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec  6 02:15:29 np0005548731 podman[261784]: 2025-12-06 07:15:29.466921311 +0000 UTC m=+0.021498876 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec  6 02:15:29 np0005548731 podman[261784]: 2025-12-06 07:15:29.568113876 +0000 UTC m=+0.122691441 container init 90f130dae902f2ae743d2267cf75563643b1411c736d36f613e3346ec70d4246 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4d599401-3772-4e38-8cd2-d774d370af64, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true)
Dec  6 02:15:29 np0005548731 podman[261784]: 2025-12-06 07:15:29.573422986 +0000 UTC m=+0.128000531 container start 90f130dae902f2ae743d2267cf75563643b1411c736d36f613e3346ec70d4246 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4d599401-3772-4e38-8cd2-d774d370af64, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Dec  6 02:15:29 np0005548731 neutron-haproxy-ovnmeta-4d599401-3772-4e38-8cd2-d774d370af64[261799]: [NOTICE]   (261803) : New worker (261805) forked
Dec  6 02:15:29 np0005548731 neutron-haproxy-ovnmeta-4d599401-3772-4e38-8cd2-d774d370af64[261799]: [NOTICE]   (261803) : Loading success.
Dec  6 02:15:30 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:15:30 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:15:30 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:15:30.421 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:15:30 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:15:30 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:15:30 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:15:30.750 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:15:30 np0005548731 nova_compute[232433]: 2025-12-06 07:15:30.938 232437 DEBUG nova.virt.libvirt.host [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Removed pending event for c8403a0c-2fe6-48fe-91af-ec5aca71e12d due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Dec  6 02:15:30 np0005548731 nova_compute[232433]: 2025-12-06 07:15:30.938 232437 DEBUG nova.virt.driver [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Emitting event <LifecycleEvent: 1765005330.937671, c8403a0c-2fe6-48fe-91af-ec5aca71e12d => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 02:15:30 np0005548731 nova_compute[232433]: 2025-12-06 07:15:30.939 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: c8403a0c-2fe6-48fe-91af-ec5aca71e12d] VM Resumed (Lifecycle Event)#033[00m
Dec  6 02:15:30 np0005548731 nova_compute[232433]: 2025-12-06 07:15:30.941 232437 DEBUG nova.compute.manager [None req-3a5bc6d2-3dea-4380-b73f-ec980ec46271 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] [instance: c8403a0c-2fe6-48fe-91af-ec5aca71e12d] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec  6 02:15:30 np0005548731 nova_compute[232433]: 2025-12-06 07:15:30.945 232437 INFO nova.virt.libvirt.driver [-] [instance: c8403a0c-2fe6-48fe-91af-ec5aca71e12d] Instance rebooted successfully.#033[00m
Dec  6 02:15:30 np0005548731 nova_compute[232433]: 2025-12-06 07:15:30.946 232437 DEBUG nova.compute.manager [None req-3a5bc6d2-3dea-4380-b73f-ec980ec46271 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] [instance: c8403a0c-2fe6-48fe-91af-ec5aca71e12d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:15:30 np0005548731 nova_compute[232433]: 2025-12-06 07:15:30.956 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: c8403a0c-2fe6-48fe-91af-ec5aca71e12d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:15:30 np0005548731 nova_compute[232433]: 2025-12-06 07:15:30.959 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: c8403a0c-2fe6-48fe-91af-ec5aca71e12d] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: stopped, current task_state: powering-on, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  6 02:15:30 np0005548731 nova_compute[232433]: 2025-12-06 07:15:30.997 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: c8403a0c-2fe6-48fe-91af-ec5aca71e12d] During sync_power_state the instance has a pending task (powering-on). Skip.#033[00m
Dec  6 02:15:30 np0005548731 nova_compute[232433]: 2025-12-06 07:15:30.997 232437 DEBUG nova.virt.driver [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Emitting event <LifecycleEvent: 1765005330.9380634, c8403a0c-2fe6-48fe-91af-ec5aca71e12d => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 02:15:30 np0005548731 nova_compute[232433]: 2025-12-06 07:15:30.998 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: c8403a0c-2fe6-48fe-91af-ec5aca71e12d] VM Started (Lifecycle Event)#033[00m
Dec  6 02:15:31 np0005548731 nova_compute[232433]: 2025-12-06 07:15:31.047 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: c8403a0c-2fe6-48fe-91af-ec5aca71e12d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:15:31 np0005548731 nova_compute[232433]: 2025-12-06 07:15:31.051 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: c8403a0c-2fe6-48fe-91af-ec5aca71e12d] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  6 02:15:31 np0005548731 nova_compute[232433]: 2025-12-06 07:15:31.837 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:15:32 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e252 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:15:32 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:15:32 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 02:15:32 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:15:32.423 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 02:15:32 np0005548731 nova_compute[232433]: 2025-12-06 07:15:32.623 232437 DEBUG nova.compute.manager [req-8641e620-8f71-4406-9179-944285043c94 req-4de72fce-dc2a-494c-8e0f-4449cf7b221a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: c8403a0c-2fe6-48fe-91af-ec5aca71e12d] Received event network-vif-plugged-a599f1a0-5413-4dc9-9ae4-d7ba512d761c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:15:32 np0005548731 nova_compute[232433]: 2025-12-06 07:15:32.623 232437 DEBUG oslo_concurrency.lockutils [req-8641e620-8f71-4406-9179-944285043c94 req-4de72fce-dc2a-494c-8e0f-4449cf7b221a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "c8403a0c-2fe6-48fe-91af-ec5aca71e12d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:15:32 np0005548731 nova_compute[232433]: 2025-12-06 07:15:32.623 232437 DEBUG oslo_concurrency.lockutils [req-8641e620-8f71-4406-9179-944285043c94 req-4de72fce-dc2a-494c-8e0f-4449cf7b221a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "c8403a0c-2fe6-48fe-91af-ec5aca71e12d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:15:32 np0005548731 nova_compute[232433]: 2025-12-06 07:15:32.624 232437 DEBUG oslo_concurrency.lockutils [req-8641e620-8f71-4406-9179-944285043c94 req-4de72fce-dc2a-494c-8e0f-4449cf7b221a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "c8403a0c-2fe6-48fe-91af-ec5aca71e12d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:15:32 np0005548731 nova_compute[232433]: 2025-12-06 07:15:32.624 232437 DEBUG nova.compute.manager [req-8641e620-8f71-4406-9179-944285043c94 req-4de72fce-dc2a-494c-8e0f-4449cf7b221a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: c8403a0c-2fe6-48fe-91af-ec5aca71e12d] No waiting events found dispatching network-vif-plugged-a599f1a0-5413-4dc9-9ae4-d7ba512d761c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 02:15:32 np0005548731 nova_compute[232433]: 2025-12-06 07:15:32.624 232437 WARNING nova.compute.manager [req-8641e620-8f71-4406-9179-944285043c94 req-4de72fce-dc2a-494c-8e0f-4449cf7b221a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: c8403a0c-2fe6-48fe-91af-ec5aca71e12d] Received unexpected event network-vif-plugged-a599f1a0-5413-4dc9-9ae4-d7ba512d761c for instance with vm_state active and task_state None.#033[00m
Dec  6 02:15:32 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:15:32 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:15:32 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:15:32.752 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:15:33 np0005548731 nova_compute[232433]: 2025-12-06 07:15:33.799 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:15:34 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:15:34 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:15:34 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:15:34.425 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:15:34 np0005548731 nova_compute[232433]: 2025-12-06 07:15:34.691 232437 DEBUG nova.compute.manager [req-f7ff09ca-b6aa-48b9-8f70-ac17741047ea req-a0965fc2-9dc7-4a21-897a-de0dd45214a4 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: c8403a0c-2fe6-48fe-91af-ec5aca71e12d] Received event network-vif-plugged-a599f1a0-5413-4dc9-9ae4-d7ba512d761c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:15:34 np0005548731 nova_compute[232433]: 2025-12-06 07:15:34.692 232437 DEBUG oslo_concurrency.lockutils [req-f7ff09ca-b6aa-48b9-8f70-ac17741047ea req-a0965fc2-9dc7-4a21-897a-de0dd45214a4 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "c8403a0c-2fe6-48fe-91af-ec5aca71e12d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:15:34 np0005548731 nova_compute[232433]: 2025-12-06 07:15:34.692 232437 DEBUG oslo_concurrency.lockutils [req-f7ff09ca-b6aa-48b9-8f70-ac17741047ea req-a0965fc2-9dc7-4a21-897a-de0dd45214a4 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "c8403a0c-2fe6-48fe-91af-ec5aca71e12d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:15:34 np0005548731 nova_compute[232433]: 2025-12-06 07:15:34.692 232437 DEBUG oslo_concurrency.lockutils [req-f7ff09ca-b6aa-48b9-8f70-ac17741047ea req-a0965fc2-9dc7-4a21-897a-de0dd45214a4 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "c8403a0c-2fe6-48fe-91af-ec5aca71e12d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:15:34 np0005548731 nova_compute[232433]: 2025-12-06 07:15:34.693 232437 DEBUG nova.compute.manager [req-f7ff09ca-b6aa-48b9-8f70-ac17741047ea req-a0965fc2-9dc7-4a21-897a-de0dd45214a4 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: c8403a0c-2fe6-48fe-91af-ec5aca71e12d] No waiting events found dispatching network-vif-plugged-a599f1a0-5413-4dc9-9ae4-d7ba512d761c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 02:15:34 np0005548731 nova_compute[232433]: 2025-12-06 07:15:34.693 232437 WARNING nova.compute.manager [req-f7ff09ca-b6aa-48b9-8f70-ac17741047ea req-a0965fc2-9dc7-4a21-897a-de0dd45214a4 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: c8403a0c-2fe6-48fe-91af-ec5aca71e12d] Received unexpected event network-vif-plugged-a599f1a0-5413-4dc9-9ae4-d7ba512d761c for instance with vm_state active and task_state None.#033[00m
Dec  6 02:15:34 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:15:34 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:15:34 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:15:34.755 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:15:36 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:15:36 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:15:36 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:15:36.428 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:15:36 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:15:36 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:15:36 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:15:36.759 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:15:36 np0005548731 nova_compute[232433]: 2025-12-06 07:15:36.839 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:15:36 np0005548731 nova_compute[232433]: 2025-12-06 07:15:36.846 232437 INFO nova.compute.manager [None req-92ae0c7d-aaaa-4fe4-a5eb-d8e2860521e1 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] [instance: c8403a0c-2fe6-48fe-91af-ec5aca71e12d] Pausing#033[00m
Dec  6 02:15:36 np0005548731 nova_compute[232433]: 2025-12-06 07:15:36.847 232437 DEBUG nova.objects.instance [None req-92ae0c7d-aaaa-4fe4-a5eb-d8e2860521e1 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Lazy-loading 'flavor' on Instance uuid c8403a0c-2fe6-48fe-91af-ec5aca71e12d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:15:36 np0005548731 nova_compute[232433]: 2025-12-06 07:15:36.873 232437 DEBUG nova.virt.driver [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Emitting event <LifecycleEvent: 1765005336.8728163, c8403a0c-2fe6-48fe-91af-ec5aca71e12d => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 02:15:36 np0005548731 nova_compute[232433]: 2025-12-06 07:15:36.873 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: c8403a0c-2fe6-48fe-91af-ec5aca71e12d] VM Paused (Lifecycle Event)#033[00m
Dec  6 02:15:36 np0005548731 nova_compute[232433]: 2025-12-06 07:15:36.874 232437 DEBUG nova.compute.manager [None req-92ae0c7d-aaaa-4fe4-a5eb-d8e2860521e1 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] [instance: c8403a0c-2fe6-48fe-91af-ec5aca71e12d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:15:36 np0005548731 nova_compute[232433]: 2025-12-06 07:15:36.898 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: c8403a0c-2fe6-48fe-91af-ec5aca71e12d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:15:36 np0005548731 nova_compute[232433]: 2025-12-06 07:15:36.900 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: c8403a0c-2fe6-48fe-91af-ec5aca71e12d] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: pausing, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  6 02:15:36 np0005548731 nova_compute[232433]: 2025-12-06 07:15:36.935 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: c8403a0c-2fe6-48fe-91af-ec5aca71e12d] During sync_power_state the instance has a pending task (pausing). Skip.#033[00m
Dec  6 02:15:37 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e252 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:15:38 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:15:38 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:15:38 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:15:38.431 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:15:38 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:15:38 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 02:15:38 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:15:38.763 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 02:15:38 np0005548731 nova_compute[232433]: 2025-12-06 07:15:38.802 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:15:39 np0005548731 nova_compute[232433]: 2025-12-06 07:15:39.712 232437 INFO nova.compute.manager [None req-1a1fc531-11ac-4c7d-be64-a19ad6858c34 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] [instance: c8403a0c-2fe6-48fe-91af-ec5aca71e12d] Unpausing#033[00m
Dec  6 02:15:39 np0005548731 nova_compute[232433]: 2025-12-06 07:15:39.714 232437 DEBUG nova.objects.instance [None req-1a1fc531-11ac-4c7d-be64-a19ad6858c34 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Lazy-loading 'flavor' on Instance uuid c8403a0c-2fe6-48fe-91af-ec5aca71e12d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:15:39 np0005548731 nova_compute[232433]: 2025-12-06 07:15:39.740 232437 DEBUG nova.virt.driver [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Emitting event <LifecycleEvent: 1765005339.7405133, c8403a0c-2fe6-48fe-91af-ec5aca71e12d => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 02:15:39 np0005548731 nova_compute[232433]: 2025-12-06 07:15:39.740 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: c8403a0c-2fe6-48fe-91af-ec5aca71e12d] VM Resumed (Lifecycle Event)#033[00m
Dec  6 02:15:39 np0005548731 virtqemud[232080]: argument unsupported: QEMU guest agent is not configured
Dec  6 02:15:39 np0005548731 nova_compute[232433]: 2025-12-06 07:15:39.744 232437 DEBUG nova.virt.libvirt.guest [None req-1a1fc531-11ac-4c7d-be64-a19ad6858c34 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] [instance: c8403a0c-2fe6-48fe-91af-ec5aca71e12d] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200#033[00m
Dec  6 02:15:39 np0005548731 nova_compute[232433]: 2025-12-06 07:15:39.745 232437 DEBUG nova.compute.manager [None req-1a1fc531-11ac-4c7d-be64-a19ad6858c34 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] [instance: c8403a0c-2fe6-48fe-91af-ec5aca71e12d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:15:39 np0005548731 nova_compute[232433]: 2025-12-06 07:15:39.772 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: c8403a0c-2fe6-48fe-91af-ec5aca71e12d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:15:39 np0005548731 nova_compute[232433]: 2025-12-06 07:15:39.775 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: c8403a0c-2fe6-48fe-91af-ec5aca71e12d] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: paused, current task_state: unpausing, current DB power_state: 3, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  6 02:15:39 np0005548731 nova_compute[232433]: 2025-12-06 07:15:39.809 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: c8403a0c-2fe6-48fe-91af-ec5aca71e12d] During sync_power_state the instance has a pending task (unpausing). Skip.#033[00m
Dec  6 02:15:40 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:15:40 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:15:40 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:15:40.433 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:15:40 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:15:40 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:15:40 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:15:40.765 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:15:41 np0005548731 nova_compute[232433]: 2025-12-06 07:15:41.841 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:15:42 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e252 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:15:42 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:15:42 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:15:42 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:15:42.435 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:15:42 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:15:42 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:15:42 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:15:42.769 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:15:43 np0005548731 nova_compute[232433]: 2025-12-06 07:15:43.803 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:15:44 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:15:44 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:15:44 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:15:44.438 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:15:44 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:15:44 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:15:44 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:15:44.773 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:15:46 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:15:46 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 02:15:46 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:15:46.440 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 02:15:46 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:15:46 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:15:46 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:15:46.777 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:15:46 np0005548731 nova_compute[232433]: 2025-12-06 07:15:46.843 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:15:47 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e252 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:15:48 np0005548731 ceph-mds[82074]: mds.beacon.cephfs.compute-2.tjfgow missed beacon ack from the monitors
Dec  6 02:15:48 np0005548731 nova_compute[232433]: 2025-12-06 07:15:48.376 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:15:48 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:15:48 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 02:15:48 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:15:48.443 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 02:15:48 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:15:48 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:15:48 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:15:48.780 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:15:48 np0005548731 nova_compute[232433]: 2025-12-06 07:15:48.806 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:15:50 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:15:50 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:15:50 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:15:50.445 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:15:50 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:15:50 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:15:50 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:15:50.783 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:15:51 np0005548731 nova_compute[232433]: 2025-12-06 07:15:51.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:15:51 np0005548731 nova_compute[232433]: 2025-12-06 07:15:51.105 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  6 02:15:51 np0005548731 nova_compute[232433]: 2025-12-06 07:15:51.105 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  6 02:15:51 np0005548731 ovn_controller[133927]: 2025-12-06T07:15:51Z|00029|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:9b:0b:0a 10.100.0.6
Dec  6 02:15:51 np0005548731 nova_compute[232433]: 2025-12-06 07:15:51.527 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Acquiring lock "refresh_cache-c8403a0c-2fe6-48fe-91af-ec5aca71e12d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 02:15:51 np0005548731 nova_compute[232433]: 2025-12-06 07:15:51.528 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Acquired lock "refresh_cache-c8403a0c-2fe6-48fe-91af-ec5aca71e12d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 02:15:51 np0005548731 nova_compute[232433]: 2025-12-06 07:15:51.528 232437 DEBUG nova.network.neutron [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] [instance: c8403a0c-2fe6-48fe-91af-ec5aca71e12d] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Dec  6 02:15:51 np0005548731 nova_compute[232433]: 2025-12-06 07:15:51.529 232437 DEBUG nova.objects.instance [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lazy-loading 'info_cache' on Instance uuid c8403a0c-2fe6-48fe-91af-ec5aca71e12d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:15:51 np0005548731 nova_compute[232433]: 2025-12-06 07:15:51.845 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:15:51 np0005548731 podman[261899]: 2025-12-06 07:15:51.893499109 +0000 UTC m=+0.050184389 container health_status 26c2ce9bdb83c6db5ccad7f5b2a7cb4e9354ab0235ad2f942ea42c8feda2142b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible)
Dec  6 02:15:51 np0005548731 podman[261900]: 2025-12-06 07:15:51.920214052 +0000 UTC m=+0.076861981 container health_status a118c3becf56cfbcae208065771ebf10a65162bb7b96389971293e534823fb78 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Dec  6 02:15:51 np0005548731 podman[261901]: 2025-12-06 07:15:51.924474666 +0000 UTC m=+0.079634629 container health_status ae4fd08cdec31d6ae2a6b91ffff7deb6ca5a8375cb52ee4b685cc6f25796824e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec  6 02:15:52 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e252 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:15:52 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:15:52 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:15:52 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:15:52.447 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:15:52 np0005548731 ceph-mon[77458]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #67. Immutable memtables: 0.
Dec  6 02:15:52 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:15:52.530474) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec  6 02:15:52 np0005548731 ceph-mon[77458]: rocksdb: [db/flush_job.cc:856] [default] [JOB 39] Flushing memtable with next log file: 67
Dec  6 02:15:52 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765005352530552, "job": 39, "event": "flush_started", "num_memtables": 1, "num_entries": 1372, "num_deletes": 254, "total_data_size": 2977444, "memory_usage": 3019792, "flush_reason": "Manual Compaction"}
Dec  6 02:15:52 np0005548731 ceph-mon[77458]: rocksdb: [db/flush_job.cc:885] [default] [JOB 39] Level-0 flush table #68: started
Dec  6 02:15:52 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765005352546060, "cf_name": "default", "job": 39, "event": "table_file_creation", "file_number": 68, "file_size": 1951469, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 35763, "largest_seqno": 37130, "table_properties": {"data_size": 1945462, "index_size": 3274, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1669, "raw_key_size": 13654, "raw_average_key_size": 20, "raw_value_size": 1933142, "raw_average_value_size": 2924, "num_data_blocks": 144, "num_entries": 661, "num_filter_entries": 661, "num_deletions": 254, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765005232, "oldest_key_time": 1765005232, "file_creation_time": 1765005352, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bc01f9a1-fda8-4008-aee1-1fa5ff4371a1", "db_session_id": "CWBL8WMDM4D79BU86E9T", "orig_file_number": 68, "seqno_to_time_mapping": "N/A"}}
Dec  6 02:15:52 np0005548731 ceph-mon[77458]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 39] Flush lasted 15683 microseconds, and 5505 cpu microseconds.
Dec  6 02:15:52 np0005548731 ceph-mon[77458]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  6 02:15:52 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:15:52.546155) [db/flush_job.cc:967] [default] [JOB 39] Level-0 flush table #68: 1951469 bytes OK
Dec  6 02:15:52 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:15:52.546189) [db/memtable_list.cc:519] [default] Level-0 commit table #68 started
Dec  6 02:15:52 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:15:52.549179) [db/memtable_list.cc:722] [default] Level-0 commit table #68: memtable #1 done
Dec  6 02:15:52 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:15:52.549234) EVENT_LOG_v1 {"time_micros": 1765005352549222, "job": 39, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec  6 02:15:52 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:15:52.549265) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec  6 02:15:52 np0005548731 ceph-mon[77458]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 39] Try to delete WAL files size 2970910, prev total WAL file size 2970910, number of live WAL files 2.
Dec  6 02:15:52 np0005548731 ceph-mon[77458]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000064.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  6 02:15:52 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:15:52.551049) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032373631' seq:72057594037927935, type:22 .. '7061786F730033303133' seq:0, type:0; will stop at (end)
Dec  6 02:15:52 np0005548731 ceph-mon[77458]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 40] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec  6 02:15:52 np0005548731 ceph-mon[77458]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 39 Base level 0, inputs: [68(1905KB)], [66(9346KB)]
Dec  6 02:15:52 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765005352551118, "job": 40, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [68], "files_L6": [66], "score": -1, "input_data_size": 11522602, "oldest_snapshot_seqno": -1}
Dec  6 02:15:52 np0005548731 ceph-mon[77458]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 40] Generated table #69: 6580 keys, 9589699 bytes, temperature: kUnknown
Dec  6 02:15:52 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765005352627758, "cf_name": "default", "job": 40, "event": "table_file_creation", "file_number": 69, "file_size": 9589699, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9545854, "index_size": 26261, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16517, "raw_key_size": 170089, "raw_average_key_size": 25, "raw_value_size": 9427828, "raw_average_value_size": 1432, "num_data_blocks": 1044, "num_entries": 6580, "num_filter_entries": 6580, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765002564, "oldest_key_time": 0, "file_creation_time": 1765005352, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bc01f9a1-fda8-4008-aee1-1fa5ff4371a1", "db_session_id": "CWBL8WMDM4D79BU86E9T", "orig_file_number": 69, "seqno_to_time_mapping": "N/A"}}
Dec  6 02:15:52 np0005548731 ceph-mon[77458]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  6 02:15:52 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:15:52.628156) [db/compaction/compaction_job.cc:1663] [default] [JOB 40] Compacted 1@0 + 1@6 files to L6 => 9589699 bytes
Dec  6 02:15:52 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:15:52.629844) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 150.1 rd, 124.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.9, 9.1 +0.0 blob) out(9.1 +0.0 blob), read-write-amplify(10.8) write-amplify(4.9) OK, records in: 7104, records dropped: 524 output_compression: NoCompression
Dec  6 02:15:52 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:15:52.629874) EVENT_LOG_v1 {"time_micros": 1765005352629859, "job": 40, "event": "compaction_finished", "compaction_time_micros": 76771, "compaction_time_cpu_micros": 42640, "output_level": 6, "num_output_files": 1, "total_output_size": 9589699, "num_input_records": 7104, "num_output_records": 6580, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec  6 02:15:52 np0005548731 ceph-mon[77458]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000068.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  6 02:15:52 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765005352630655, "job": 40, "event": "table_file_deletion", "file_number": 68}
Dec  6 02:15:52 np0005548731 ceph-mon[77458]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000066.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  6 02:15:52 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765005352633193, "job": 40, "event": "table_file_deletion", "file_number": 66}
Dec  6 02:15:52 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:15:52.550968) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 02:15:52 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:15:52.633325) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 02:15:52 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:15:52.633332) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 02:15:52 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:15:52.633334) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 02:15:52 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:15:52.633336) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 02:15:52 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:15:52.633338) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 02:15:52 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:15:52 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:15:52 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:15:52.787 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:15:53 np0005548731 nova_compute[232433]: 2025-12-06 07:15:53.862 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:15:53 np0005548731 nova_compute[232433]: 2025-12-06 07:15:53.898 232437 DEBUG nova.network.neutron [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] [instance: c8403a0c-2fe6-48fe-91af-ec5aca71e12d] Updating instance_info_cache with network_info: [{"id": "a599f1a0-5413-4dc9-9ae4-d7ba512d761c", "address": "fa:16:3e:9b:0b:0a", "network": {"id": "4d599401-3772-4e38-8cd2-d774d370af64", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-809610913-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.185", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "929e2be1488d4b80b7ad8946093a6abe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa599f1a0-54", "ovs_interfaceid": "a599f1a0-5413-4dc9-9ae4-d7ba512d761c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 02:15:53 np0005548731 nova_compute[232433]: 2025-12-06 07:15:53.919 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Releasing lock "refresh_cache-c8403a0c-2fe6-48fe-91af-ec5aca71e12d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 02:15:53 np0005548731 nova_compute[232433]: 2025-12-06 07:15:53.920 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] [instance: c8403a0c-2fe6-48fe-91af-ec5aca71e12d] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Dec  6 02:15:53 np0005548731 nova_compute[232433]: 2025-12-06 07:15:53.920 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:15:53 np0005548731 nova_compute[232433]: 2025-12-06 07:15:53.920 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:15:54 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:15:54 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:15:54 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:15:54.449 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:15:54 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:15:54.578 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=26, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:ec:b3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '32:72:e7:89:e0:7d'}, ipsec=False) old=SB_Global(nb_cfg=25) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 02:15:54 np0005548731 nova_compute[232433]: 2025-12-06 07:15:54.579 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:15:54 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:15:54.580 143965 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec  6 02:15:54 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:15:54 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:15:54 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:15:54.789 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:15:55 np0005548731 nova_compute[232433]: 2025-12-06 07:15:55.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:15:55 np0005548731 nova_compute[232433]: 2025-12-06 07:15:55.104 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  6 02:15:56 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:15:56 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:15:56 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:15:56.451 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:15:56 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:15:56 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:15:56 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:15:56.793 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:15:56 np0005548731 nova_compute[232433]: 2025-12-06 07:15:56.847 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:15:57 np0005548731 nova_compute[232433]: 2025-12-06 07:15:57.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:15:57 np0005548731 nova_compute[232433]: 2025-12-06 07:15:57.105 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:15:57 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e252 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:15:58 np0005548731 nova_compute[232433]: 2025-12-06 07:15:58.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:15:58 np0005548731 nova_compute[232433]: 2025-12-06 07:15:58.139 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:15:58 np0005548731 nova_compute[232433]: 2025-12-06 07:15:58.139 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:15:58 np0005548731 nova_compute[232433]: 2025-12-06 07:15:58.139 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:15:58 np0005548731 nova_compute[232433]: 2025-12-06 07:15:58.140 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  6 02:15:58 np0005548731 nova_compute[232433]: 2025-12-06 07:15:58.140 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:15:58 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:15:58 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:15:58 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:15:58.453 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:15:58 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:15:58 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:15:58 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:15:58.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:15:58 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 02:15:58 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/404538913' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 02:15:58 np0005548731 nova_compute[232433]: 2025-12-06 07:15:58.859 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.719s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:15:58 np0005548731 nova_compute[232433]: 2025-12-06 07:15:58.907 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:15:58 np0005548731 nova_compute[232433]: 2025-12-06 07:15:58.975 232437 DEBUG nova.virt.libvirt.driver [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] skipping disk for instance-00000044 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  6 02:15:58 np0005548731 nova_compute[232433]: 2025-12-06 07:15:58.976 232437 DEBUG nova.virt.libvirt.driver [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] skipping disk for instance-00000044 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  6 02:15:59 np0005548731 nova_compute[232433]: 2025-12-06 07:15:59.145 232437 WARNING nova.virt.libvirt.driver [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  6 02:15:59 np0005548731 nova_compute[232433]: 2025-12-06 07:15:59.146 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4451MB free_disk=20.855361938476562GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  6 02:15:59 np0005548731 nova_compute[232433]: 2025-12-06 07:15:59.147 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:15:59 np0005548731 nova_compute[232433]: 2025-12-06 07:15:59.147 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:15:59 np0005548731 nova_compute[232433]: 2025-12-06 07:15:59.224 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Instance c8403a0c-2fe6-48fe-91af-ec5aca71e12d actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  6 02:15:59 np0005548731 nova_compute[232433]: 2025-12-06 07:15:59.224 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  6 02:15:59 np0005548731 nova_compute[232433]: 2025-12-06 07:15:59.224 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  6 02:15:59 np0005548731 nova_compute[232433]: 2025-12-06 07:15:59.277 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:15:59 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 02:15:59 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2563791832' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 02:15:59 np0005548731 nova_compute[232433]: 2025-12-06 07:15:59.692 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.415s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:15:59 np0005548731 ovn_controller[133927]: 2025-12-06T07:15:59Z|00225|binding|INFO|Releasing lport d5f15755-ab6a-4ce9-857e-63f6c0e19fd8 from this chassis (sb_readonly=0)
Dec  6 02:15:59 np0005548731 nova_compute[232433]: 2025-12-06 07:15:59.698 232437 DEBUG nova.compute.provider_tree [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Inventory has not changed in ProviderTree for provider: 6d00757a-082f-486d-ae84-869a2ba2e6e7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  6 02:15:59 np0005548731 nova_compute[232433]: 2025-12-06 07:15:59.712 232437 DEBUG nova.scheduler.client.report [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Inventory has not changed for provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  6 02:15:59 np0005548731 nova_compute[232433]: 2025-12-06 07:15:59.734 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  6 02:15:59 np0005548731 nova_compute[232433]: 2025-12-06 07:15:59.735 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.588s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:15:59 np0005548731 nova_compute[232433]: 2025-12-06 07:15:59.750 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:16:00 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:16:00 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:16:00 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:16:00.455 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:16:00 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:16:00 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:16:00 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:16:00.798 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:16:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:16:00.858 143965 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:16:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:16:00.859 143965 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:16:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:16:00.859 143965 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:16:01 np0005548731 nova_compute[232433]: 2025-12-06 07:16:01.849 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:16:02 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e252 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:16:02 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:16:02 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:16:02 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:16:02.458 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:16:02 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:16:02.582 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=9f96b960-b4f2-40bd-ae99-08121f5e8b78, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '26'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:16:02 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:16:02 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:16:02 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:16:02.801 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:16:03 np0005548731 nova_compute[232433]: 2025-12-06 07:16:03.428 232437 DEBUG oslo_concurrency.lockutils [None req-e380e3c5-3ddc-47c6-9f9c-8aa5b6a05e74 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Acquiring lock "c8403a0c-2fe6-48fe-91af-ec5aca71e12d" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:16:03 np0005548731 nova_compute[232433]: 2025-12-06 07:16:03.429 232437 DEBUG oslo_concurrency.lockutils [None req-e380e3c5-3ddc-47c6-9f9c-8aa5b6a05e74 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Lock "c8403a0c-2fe6-48fe-91af-ec5aca71e12d" acquired by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:16:03 np0005548731 nova_compute[232433]: 2025-12-06 07:16:03.429 232437 INFO nova.compute.manager [None req-e380e3c5-3ddc-47c6-9f9c-8aa5b6a05e74 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] [instance: c8403a0c-2fe6-48fe-91af-ec5aca71e12d] Rebooting instance#033[00m
Dec  6 02:16:03 np0005548731 nova_compute[232433]: 2025-12-06 07:16:03.446 232437 DEBUG oslo_concurrency.lockutils [None req-e380e3c5-3ddc-47c6-9f9c-8aa5b6a05e74 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Acquiring lock "refresh_cache-c8403a0c-2fe6-48fe-91af-ec5aca71e12d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 02:16:03 np0005548731 nova_compute[232433]: 2025-12-06 07:16:03.446 232437 DEBUG oslo_concurrency.lockutils [None req-e380e3c5-3ddc-47c6-9f9c-8aa5b6a05e74 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Acquired lock "refresh_cache-c8403a0c-2fe6-48fe-91af-ec5aca71e12d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 02:16:03 np0005548731 nova_compute[232433]: 2025-12-06 07:16:03.446 232437 DEBUG nova.network.neutron [None req-e380e3c5-3ddc-47c6-9f9c-8aa5b6a05e74 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] [instance: c8403a0c-2fe6-48fe-91af-ec5aca71e12d] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec  6 02:16:03 np0005548731 nova_compute[232433]: 2025-12-06 07:16:03.735 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:16:03 np0005548731 nova_compute[232433]: 2025-12-06 07:16:03.911 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:16:04 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:16:04 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:16:04 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:16:04.460 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:16:04 np0005548731 nova_compute[232433]: 2025-12-06 07:16:04.637 232437 DEBUG nova.network.neutron [None req-e380e3c5-3ddc-47c6-9f9c-8aa5b6a05e74 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] [instance: c8403a0c-2fe6-48fe-91af-ec5aca71e12d] Updating instance_info_cache with network_info: [{"id": "a599f1a0-5413-4dc9-9ae4-d7ba512d761c", "address": "fa:16:3e:9b:0b:0a", "network": {"id": "4d599401-3772-4e38-8cd2-d774d370af64", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-809610913-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.185", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "929e2be1488d4b80b7ad8946093a6abe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa599f1a0-54", "ovs_interfaceid": "a599f1a0-5413-4dc9-9ae4-d7ba512d761c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 02:16:04 np0005548731 nova_compute[232433]: 2025-12-06 07:16:04.654 232437 DEBUG oslo_concurrency.lockutils [None req-e380e3c5-3ddc-47c6-9f9c-8aa5b6a05e74 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Releasing lock "refresh_cache-c8403a0c-2fe6-48fe-91af-ec5aca71e12d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 02:16:04 np0005548731 nova_compute[232433]: 2025-12-06 07:16:04.655 232437 DEBUG nova.compute.manager [None req-e380e3c5-3ddc-47c6-9f9c-8aa5b6a05e74 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] [instance: c8403a0c-2fe6-48fe-91af-ec5aca71e12d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:16:04 np0005548731 kernel: tapa599f1a0-54 (unregistering): left promiscuous mode
Dec  6 02:16:04 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:16:04 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:16:04 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:16:04.803 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:16:04 np0005548731 NetworkManager[49182]: <info>  [1765005364.8091] device (tapa599f1a0-54): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec  6 02:16:04 np0005548731 ovn_controller[133927]: 2025-12-06T07:16:04Z|00226|binding|INFO|Releasing lport a599f1a0-5413-4dc9-9ae4-d7ba512d761c from this chassis (sb_readonly=0)
Dec  6 02:16:04 np0005548731 nova_compute[232433]: 2025-12-06 07:16:04.823 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:16:04 np0005548731 ovn_controller[133927]: 2025-12-06T07:16:04Z|00227|binding|INFO|Setting lport a599f1a0-5413-4dc9-9ae4-d7ba512d761c down in Southbound
Dec  6 02:16:04 np0005548731 ovn_controller[133927]: 2025-12-06T07:16:04Z|00228|binding|INFO|Removing iface tapa599f1a0-54 ovn-installed in OVS
Dec  6 02:16:04 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:16:04.829 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9b:0b:0a 10.100.0.6'], port_security=['fa:16:3e:9b:0b:0a 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'c8403a0c-2fe6-48fe-91af-ec5aca71e12d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4d599401-3772-4e38-8cd2-d774d370af64', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '929e2be1488d4b80b7ad8946093a6abe', 'neutron:revision_number': '8', 'neutron:security_group_ids': '310d97ff-0e42-4be5-a68e-20cbdb7be60d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.185', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=222872e8-5260-47b5-883e-369af9b3a47f, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], logical_port=a599f1a0-5413-4dc9-9ae4-d7ba512d761c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 02:16:04 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:16:04.831 143965 INFO neutron.agent.ovn.metadata.agent [-] Port a599f1a0-5413-4dc9-9ae4-d7ba512d761c in datapath 4d599401-3772-4e38-8cd2-d774d370af64 unbound from our chassis#033[00m
Dec  6 02:16:04 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:16:04.832 143965 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 4d599401-3772-4e38-8cd2-d774d370af64, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec  6 02:16:04 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:16:04.834 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[378727e5-edfa-4efe-888c-c86ef4cf0f63]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:16:04 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:16:04.835 143965 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-4d599401-3772-4e38-8cd2-d774d370af64 namespace which is not needed anymore#033[00m
Dec  6 02:16:04 np0005548731 nova_compute[232433]: 2025-12-06 07:16:04.841 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:16:04 np0005548731 systemd[1]: machine-qemu\x2d30\x2dinstance\x2d00000044.scope: Deactivated successfully.
Dec  6 02:16:04 np0005548731 systemd[1]: machine-qemu\x2d30\x2dinstance\x2d00000044.scope: Consumed 14.534s CPU time.
Dec  6 02:16:04 np0005548731 systemd-machined[195355]: Machine qemu-30-instance-00000044 terminated.
Dec  6 02:16:04 np0005548731 neutron-haproxy-ovnmeta-4d599401-3772-4e38-8cd2-d774d370af64[261799]: [NOTICE]   (261803) : haproxy version is 2.8.14-c23fe91
Dec  6 02:16:04 np0005548731 neutron-haproxy-ovnmeta-4d599401-3772-4e38-8cd2-d774d370af64[261799]: [NOTICE]   (261803) : path to executable is /usr/sbin/haproxy
Dec  6 02:16:04 np0005548731 neutron-haproxy-ovnmeta-4d599401-3772-4e38-8cd2-d774d370af64[261799]: [WARNING]  (261803) : Exiting Master process...
Dec  6 02:16:04 np0005548731 neutron-haproxy-ovnmeta-4d599401-3772-4e38-8cd2-d774d370af64[261799]: [ALERT]    (261803) : Current worker (261805) exited with code 143 (Terminated)
Dec  6 02:16:04 np0005548731 neutron-haproxy-ovnmeta-4d599401-3772-4e38-8cd2-d774d370af64[261799]: [WARNING]  (261803) : All workers exited. Exiting... (0)
Dec  6 02:16:04 np0005548731 systemd[1]: libpod-90f130dae902f2ae743d2267cf75563643b1411c736d36f613e3346ec70d4246.scope: Deactivated successfully.
Dec  6 02:16:04 np0005548731 nova_compute[232433]: 2025-12-06 07:16:04.983 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:16:04 np0005548731 nova_compute[232433]: 2025-12-06 07:16:04.988 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:16:04 np0005548731 podman[262090]: 2025-12-06 07:16:04.989199584 +0000 UTC m=+0.057768434 container died 90f130dae902f2ae743d2267cf75563643b1411c736d36f613e3346ec70d4246 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4d599401-3772-4e38-8cd2-d774d370af64, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Dec  6 02:16:05 np0005548731 nova_compute[232433]: 2025-12-06 07:16:05.000 232437 INFO nova.virt.libvirt.driver [-] [instance: c8403a0c-2fe6-48fe-91af-ec5aca71e12d] Instance destroyed successfully.#033[00m
Dec  6 02:16:05 np0005548731 nova_compute[232433]: 2025-12-06 07:16:05.001 232437 DEBUG nova.objects.instance [None req-e380e3c5-3ddc-47c6-9f9c-8aa5b6a05e74 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Lazy-loading 'resources' on Instance uuid c8403a0c-2fe6-48fe-91af-ec5aca71e12d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:16:05 np0005548731 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-90f130dae902f2ae743d2267cf75563643b1411c736d36f613e3346ec70d4246-userdata-shm.mount: Deactivated successfully.
Dec  6 02:16:05 np0005548731 systemd[1]: var-lib-containers-storage-overlay-540f9a3122af7249133e29e4aa48cbe38f37a2fb666c667065a24788e7d64ab9-merged.mount: Deactivated successfully.
Dec  6 02:16:05 np0005548731 nova_compute[232433]: 2025-12-06 07:16:05.050 232437 DEBUG nova.virt.libvirt.vif [None req-e380e3c5-3ddc-47c6-9f9c-8aa5b6a05e74 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T07:14:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-893709654',display_name='tempest-ServerActionsTestJSON-server-893709654',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-893709654',id=68,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAYy9PI2opG1Yb015LzaQaZHiAr4KsuqNy5RLRivgn9w0frXJzdA9SLIokq/TNHsTv+OZ3SzlEhSSm/zy2gaUVX2tVfQksdYXi87Z2HYYYX2anFBfTxIFgh3j22gU5Usow==',key_name='tempest-keypair-1101896810',keypairs=<?>,launch_index=0,launched_at=2025-12-06T07:14:31Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='929e2be1488d4b80b7ad8946093a6abe',ramdisk_id='',reservation_id='r-klri94j0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-1877526843',owner_user_name='tempest-ServerActionsTestJSON-1877526843-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T07:16:04Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='627c36bb63534e52a4b1d5adf47e6ffd',uuid=c8403a0c-2fe6-48fe-91af-ec5aca71e12d,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a599f1a0-5413-4dc9-9ae4-d7ba512d761c", "address": "fa:16:3e:9b:0b:0a", "network": {"id": "4d599401-3772-4e38-8cd2-d774d370af64", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-809610913-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": 
[], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.185", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "929e2be1488d4b80b7ad8946093a6abe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa599f1a0-54", "ovs_interfaceid": "a599f1a0-5413-4dc9-9ae4-d7ba512d761c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec  6 02:16:05 np0005548731 nova_compute[232433]: 2025-12-06 07:16:05.050 232437 DEBUG nova.network.os_vif_util [None req-e380e3c5-3ddc-47c6-9f9c-8aa5b6a05e74 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Converting VIF {"id": "a599f1a0-5413-4dc9-9ae4-d7ba512d761c", "address": "fa:16:3e:9b:0b:0a", "network": {"id": "4d599401-3772-4e38-8cd2-d774d370af64", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-809610913-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.185", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "929e2be1488d4b80b7ad8946093a6abe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa599f1a0-54", "ovs_interfaceid": "a599f1a0-5413-4dc9-9ae4-d7ba512d761c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 02:16:05 np0005548731 nova_compute[232433]: 2025-12-06 07:16:05.051 232437 DEBUG nova.network.os_vif_util [None req-e380e3c5-3ddc-47c6-9f9c-8aa5b6a05e74 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:9b:0b:0a,bridge_name='br-int',has_traffic_filtering=True,id=a599f1a0-5413-4dc9-9ae4-d7ba512d761c,network=Network(4d599401-3772-4e38-8cd2-d774d370af64),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa599f1a0-54') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 02:16:05 np0005548731 nova_compute[232433]: 2025-12-06 07:16:05.051 232437 DEBUG os_vif [None req-e380e3c5-3ddc-47c6-9f9c-8aa5b6a05e74 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:9b:0b:0a,bridge_name='br-int',has_traffic_filtering=True,id=a599f1a0-5413-4dc9-9ae4-d7ba512d761c,network=Network(4d599401-3772-4e38-8cd2-d774d370af64),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa599f1a0-54') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec  6 02:16:05 np0005548731 nova_compute[232433]: 2025-12-06 07:16:05.053 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:16:05 np0005548731 nova_compute[232433]: 2025-12-06 07:16:05.054 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa599f1a0-54, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:16:05 np0005548731 nova_compute[232433]: 2025-12-06 07:16:05.056 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:16:05 np0005548731 podman[262090]: 2025-12-06 07:16:05.057894585 +0000 UTC m=+0.126463435 container cleanup 90f130dae902f2ae743d2267cf75563643b1411c736d36f613e3346ec70d4246 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4d599401-3772-4e38-8cd2-d774d370af64, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2)
Dec  6 02:16:05 np0005548731 nova_compute[232433]: 2025-12-06 07:16:05.059 232437 INFO os_vif [None req-e380e3c5-3ddc-47c6-9f9c-8aa5b6a05e74 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:9b:0b:0a,bridge_name='br-int',has_traffic_filtering=True,id=a599f1a0-5413-4dc9-9ae4-d7ba512d761c,network=Network(4d599401-3772-4e38-8cd2-d774d370af64),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa599f1a0-54')#033[00m
Dec  6 02:16:05 np0005548731 nova_compute[232433]: 2025-12-06 07:16:05.066 232437 DEBUG nova.virt.libvirt.driver [None req-e380e3c5-3ddc-47c6-9f9c-8aa5b6a05e74 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] [instance: c8403a0c-2fe6-48fe-91af-ec5aca71e12d] Start _get_guest_xml network_info=[{"id": "a599f1a0-5413-4dc9-9ae4-d7ba512d761c", "address": "fa:16:3e:9b:0b:0a", "network": {"id": "4d599401-3772-4e38-8cd2-d774d370af64", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-809610913-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.185", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "929e2be1488d4b80b7ad8946093a6abe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa599f1a0-54", "ovs_interfaceid": "a599f1a0-5413-4dc9-9ae4-d7ba512d761c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'device_type': 'disk', 'encryption_secret_uuid': None, 'disk_bus': 'virtio', 'boot_index': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'encryption_options': None, 'size': 0, 'encrypted': False, 'image_id': '6efab05d-c7cf-4770-a5c3-c806a2739063'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec  6 02:16:05 np0005548731 systemd[1]: libpod-conmon-90f130dae902f2ae743d2267cf75563643b1411c736d36f613e3346ec70d4246.scope: Deactivated successfully.
Dec  6 02:16:05 np0005548731 nova_compute[232433]: 2025-12-06 07:16:05.072 232437 WARNING nova.virt.libvirt.driver [None req-e380e3c5-3ddc-47c6-9f9c-8aa5b6a05e74 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  6 02:16:05 np0005548731 nova_compute[232433]: 2025-12-06 07:16:05.079 232437 DEBUG nova.virt.libvirt.host [None req-e380e3c5-3ddc-47c6-9f9c-8aa5b6a05e74 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec  6 02:16:05 np0005548731 nova_compute[232433]: 2025-12-06 07:16:05.079 232437 DEBUG nova.virt.libvirt.host [None req-e380e3c5-3ddc-47c6-9f9c-8aa5b6a05e74 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec  6 02:16:05 np0005548731 nova_compute[232433]: 2025-12-06 07:16:05.082 232437 DEBUG nova.virt.libvirt.host [None req-e380e3c5-3ddc-47c6-9f9c-8aa5b6a05e74 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec  6 02:16:05 np0005548731 nova_compute[232433]: 2025-12-06 07:16:05.083 232437 DEBUG nova.virt.libvirt.host [None req-e380e3c5-3ddc-47c6-9f9c-8aa5b6a05e74 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec  6 02:16:05 np0005548731 nova_compute[232433]: 2025-12-06 07:16:05.084 232437 DEBUG nova.virt.libvirt.driver [None req-e380e3c5-3ddc-47c6-9f9c-8aa5b6a05e74 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec  6 02:16:05 np0005548731 nova_compute[232433]: 2025-12-06 07:16:05.085 232437 DEBUG nova.virt.hardware [None req-e380e3c5-3ddc-47c6-9f9c-8aa5b6a05e74 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-06T06:56:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='25848a18-11d9-4f11-80b5-5d005675c76d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec  6 02:16:05 np0005548731 nova_compute[232433]: 2025-12-06 07:16:05.085 232437 DEBUG nova.virt.hardware [None req-e380e3c5-3ddc-47c6-9f9c-8aa5b6a05e74 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec  6 02:16:05 np0005548731 nova_compute[232433]: 2025-12-06 07:16:05.085 232437 DEBUG nova.virt.hardware [None req-e380e3c5-3ddc-47c6-9f9c-8aa5b6a05e74 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec  6 02:16:05 np0005548731 nova_compute[232433]: 2025-12-06 07:16:05.085 232437 DEBUG nova.virt.hardware [None req-e380e3c5-3ddc-47c6-9f9c-8aa5b6a05e74 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec  6 02:16:05 np0005548731 nova_compute[232433]: 2025-12-06 07:16:05.086 232437 DEBUG nova.virt.hardware [None req-e380e3c5-3ddc-47c6-9f9c-8aa5b6a05e74 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec  6 02:16:05 np0005548731 nova_compute[232433]: 2025-12-06 07:16:05.086 232437 DEBUG nova.virt.hardware [None req-e380e3c5-3ddc-47c6-9f9c-8aa5b6a05e74 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec  6 02:16:05 np0005548731 nova_compute[232433]: 2025-12-06 07:16:05.086 232437 DEBUG nova.virt.hardware [None req-e380e3c5-3ddc-47c6-9f9c-8aa5b6a05e74 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec  6 02:16:05 np0005548731 nova_compute[232433]: 2025-12-06 07:16:05.086 232437 DEBUG nova.virt.hardware [None req-e380e3c5-3ddc-47c6-9f9c-8aa5b6a05e74 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec  6 02:16:05 np0005548731 nova_compute[232433]: 2025-12-06 07:16:05.086 232437 DEBUG nova.virt.hardware [None req-e380e3c5-3ddc-47c6-9f9c-8aa5b6a05e74 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec  6 02:16:05 np0005548731 nova_compute[232433]: 2025-12-06 07:16:05.087 232437 DEBUG nova.virt.hardware [None req-e380e3c5-3ddc-47c6-9f9c-8aa5b6a05e74 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec  6 02:16:05 np0005548731 nova_compute[232433]: 2025-12-06 07:16:05.087 232437 DEBUG nova.virt.hardware [None req-e380e3c5-3ddc-47c6-9f9c-8aa5b6a05e74 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec  6 02:16:05 np0005548731 nova_compute[232433]: 2025-12-06 07:16:05.087 232437 DEBUG nova.objects.instance [None req-e380e3c5-3ddc-47c6-9f9c-8aa5b6a05e74 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Lazy-loading 'vcpu_model' on Instance uuid c8403a0c-2fe6-48fe-91af-ec5aca71e12d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:16:05 np0005548731 podman[262128]: 2025-12-06 07:16:05.136342873 +0000 UTC m=+0.051325056 container remove 90f130dae902f2ae743d2267cf75563643b1411c736d36f613e3346ec70d4246 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4d599401-3772-4e38-8cd2-d774d370af64, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Dec  6 02:16:05 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:16:05.142 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[c8e9b3a9-ec7a-4d6b-9d62-fa42a678fb13]: (4, ('Sat Dec  6 07:16:04 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-4d599401-3772-4e38-8cd2-d774d370af64 (90f130dae902f2ae743d2267cf75563643b1411c736d36f613e3346ec70d4246)\n90f130dae902f2ae743d2267cf75563643b1411c736d36f613e3346ec70d4246\nSat Dec  6 07:16:05 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-4d599401-3772-4e38-8cd2-d774d370af64 (90f130dae902f2ae743d2267cf75563643b1411c736d36f613e3346ec70d4246)\n90f130dae902f2ae743d2267cf75563643b1411c736d36f613e3346ec70d4246\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:16:05 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:16:05.144 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[0ba6d17f-2f01-4b56-8e35-ecc0666400d2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:16:05 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:16:05.145 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4d599401-30, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:16:05 np0005548731 nova_compute[232433]: 2025-12-06 07:16:05.147 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:16:05 np0005548731 kernel: tap4d599401-30: left promiscuous mode
Dec  6 02:16:05 np0005548731 nova_compute[232433]: 2025-12-06 07:16:05.161 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:16:05 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:16:05.164 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[17d12b72-a99e-4885-a3e8-d9c009d92e36]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:16:05 np0005548731 nova_compute[232433]: 2025-12-06 07:16:05.166 232437 DEBUG nova.compute.manager [req-80e180ef-4684-4406-9424-c6f7bc286a14 req-49891194-12c3-4689-9999-a8719849fb21 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: c8403a0c-2fe6-48fe-91af-ec5aca71e12d] Received event network-vif-unplugged-a599f1a0-5413-4dc9-9ae4-d7ba512d761c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:16:05 np0005548731 nova_compute[232433]: 2025-12-06 07:16:05.167 232437 DEBUG oslo_concurrency.lockutils [req-80e180ef-4684-4406-9424-c6f7bc286a14 req-49891194-12c3-4689-9999-a8719849fb21 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "c8403a0c-2fe6-48fe-91af-ec5aca71e12d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:16:05 np0005548731 nova_compute[232433]: 2025-12-06 07:16:05.168 232437 DEBUG oslo_concurrency.lockutils [req-80e180ef-4684-4406-9424-c6f7bc286a14 req-49891194-12c3-4689-9999-a8719849fb21 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "c8403a0c-2fe6-48fe-91af-ec5aca71e12d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:16:05 np0005548731 nova_compute[232433]: 2025-12-06 07:16:05.168 232437 DEBUG oslo_concurrency.lockutils [req-80e180ef-4684-4406-9424-c6f7bc286a14 req-49891194-12c3-4689-9999-a8719849fb21 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "c8403a0c-2fe6-48fe-91af-ec5aca71e12d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:16:05 np0005548731 nova_compute[232433]: 2025-12-06 07:16:05.169 232437 DEBUG nova.compute.manager [req-80e180ef-4684-4406-9424-c6f7bc286a14 req-49891194-12c3-4689-9999-a8719849fb21 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: c8403a0c-2fe6-48fe-91af-ec5aca71e12d] No waiting events found dispatching network-vif-unplugged-a599f1a0-5413-4dc9-9ae4-d7ba512d761c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 02:16:05 np0005548731 nova_compute[232433]: 2025-12-06 07:16:05.169 232437 WARNING nova.compute.manager [req-80e180ef-4684-4406-9424-c6f7bc286a14 req-49891194-12c3-4689-9999-a8719849fb21 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: c8403a0c-2fe6-48fe-91af-ec5aca71e12d] Received unexpected event network-vif-unplugged-a599f1a0-5413-4dc9-9ae4-d7ba512d761c for instance with vm_state active and task_state reboot_started_hard.#033[00m
Dec  6 02:16:05 np0005548731 nova_compute[232433]: 2025-12-06 07:16:05.174 232437 DEBUG oslo_concurrency.processutils [None req-e380e3c5-3ddc-47c6-9f9c-8aa5b6a05e74 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:16:05 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:16:05.181 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[1161cfaa-ecee-4e41-94c3-97c02bb6307d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:16:05 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:16:05.182 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[58c52460-d1e6-43d4-9ed7-cb0134470473]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:16:05 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:16:05.199 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[2f5665ee-96a8-4474-a6fd-d95490cb1121]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 559092, 'reachable_time': 41598, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 262144, 'error': None, 'target': 'ovnmeta-4d599401-3772-4e38-8cd2-d774d370af64', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:16:05 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:16:05.201 144079 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-4d599401-3772-4e38-8cd2-d774d370af64 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Dec  6 02:16:05 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:16:05.201 144079 DEBUG oslo.privsep.daemon [-] privsep: reply[70bcf01b-6007-4654-acf8-b16f88099064]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:16:05 np0005548731 systemd[1]: run-netns-ovnmeta\x2d4d599401\x2d3772\x2d4e38\x2d8cd2\x2dd774d370af64.mount: Deactivated successfully.
Dec  6 02:16:05 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  6 02:16:05 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3858770152' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  6 02:16:05 np0005548731 nova_compute[232433]: 2025-12-06 07:16:05.620 232437 DEBUG oslo_concurrency.processutils [None req-e380e3c5-3ddc-47c6-9f9c-8aa5b6a05e74 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.446s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:16:05 np0005548731 nova_compute[232433]: 2025-12-06 07:16:05.656 232437 DEBUG oslo_concurrency.processutils [None req-e380e3c5-3ddc-47c6-9f9c-8aa5b6a05e74 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:16:06 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  6 02:16:06 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2759368417' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  6 02:16:06 np0005548731 nova_compute[232433]: 2025-12-06 07:16:06.077 232437 DEBUG oslo_concurrency.processutils [None req-e380e3c5-3ddc-47c6-9f9c-8aa5b6a05e74 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.421s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:16:06 np0005548731 nova_compute[232433]: 2025-12-06 07:16:06.079 232437 DEBUG nova.virt.libvirt.vif [None req-e380e3c5-3ddc-47c6-9f9c-8aa5b6a05e74 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T07:14:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-893709654',display_name='tempest-ServerActionsTestJSON-server-893709654',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-893709654',id=68,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAYy9PI2opG1Yb015LzaQaZHiAr4KsuqNy5RLRivgn9w0frXJzdA9SLIokq/TNHsTv+OZ3SzlEhSSm/zy2gaUVX2tVfQksdYXi87Z2HYYYX2anFBfTxIFgh3j22gU5Usow==',key_name='tempest-keypair-1101896810',keypairs=<?>,launch_index=0,launched_at=2025-12-06T07:14:31Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='929e2be1488d4b80b7ad8946093a6abe',ramdisk_id='',reservation_id='r-klri94j0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-1877526843',owner_user_name='tempest-ServerActionsTestJSON-1877526843-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T07:16:04Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='627c36bb63534e52a4b1d5adf47e6ffd',uuid=c8403a0c-2fe6-48fe-91af-ec5aca71e12d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a599f1a0-5413-4dc9-9ae4-d7ba512d761c", "address": "fa:16:3e:9b:0b:0a", "network": {"id": "4d599401-3772-4e38-8cd2-d774d370af64", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-809610913-network", "subnets": [{"cidr": "10.100.0.0/28", 
"dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.185", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "929e2be1488d4b80b7ad8946093a6abe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa599f1a0-54", "ovs_interfaceid": "a599f1a0-5413-4dc9-9ae4-d7ba512d761c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec  6 02:16:06 np0005548731 nova_compute[232433]: 2025-12-06 07:16:06.080 232437 DEBUG nova.network.os_vif_util [None req-e380e3c5-3ddc-47c6-9f9c-8aa5b6a05e74 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Converting VIF {"id": "a599f1a0-5413-4dc9-9ae4-d7ba512d761c", "address": "fa:16:3e:9b:0b:0a", "network": {"id": "4d599401-3772-4e38-8cd2-d774d370af64", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-809610913-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.185", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "929e2be1488d4b80b7ad8946093a6abe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa599f1a0-54", "ovs_interfaceid": "a599f1a0-5413-4dc9-9ae4-d7ba512d761c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 02:16:06 np0005548731 nova_compute[232433]: 2025-12-06 07:16:06.081 232437 DEBUG nova.network.os_vif_util [None req-e380e3c5-3ddc-47c6-9f9c-8aa5b6a05e74 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:9b:0b:0a,bridge_name='br-int',has_traffic_filtering=True,id=a599f1a0-5413-4dc9-9ae4-d7ba512d761c,network=Network(4d599401-3772-4e38-8cd2-d774d370af64),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa599f1a0-54') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 02:16:06 np0005548731 nova_compute[232433]: 2025-12-06 07:16:06.082 232437 DEBUG nova.objects.instance [None req-e380e3c5-3ddc-47c6-9f9c-8aa5b6a05e74 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Lazy-loading 'pci_devices' on Instance uuid c8403a0c-2fe6-48fe-91af-ec5aca71e12d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:16:06 np0005548731 nova_compute[232433]: 2025-12-06 07:16:06.165 232437 DEBUG nova.virt.libvirt.driver [None req-e380e3c5-3ddc-47c6-9f9c-8aa5b6a05e74 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] [instance: c8403a0c-2fe6-48fe-91af-ec5aca71e12d] End _get_guest_xml xml=<domain type="kvm">
Dec  6 02:16:06 np0005548731 nova_compute[232433]:  <uuid>c8403a0c-2fe6-48fe-91af-ec5aca71e12d</uuid>
Dec  6 02:16:06 np0005548731 nova_compute[232433]:  <name>instance-00000044</name>
Dec  6 02:16:06 np0005548731 nova_compute[232433]:  <memory>131072</memory>
Dec  6 02:16:06 np0005548731 nova_compute[232433]:  <vcpu>1</vcpu>
Dec  6 02:16:06 np0005548731 nova_compute[232433]:  <metadata>
Dec  6 02:16:06 np0005548731 nova_compute[232433]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec  6 02:16:06 np0005548731 nova_compute[232433]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec  6 02:16:06 np0005548731 nova_compute[232433]:      <nova:name>tempest-ServerActionsTestJSON-server-893709654</nova:name>
Dec  6 02:16:06 np0005548731 nova_compute[232433]:      <nova:creationTime>2025-12-06 07:16:05</nova:creationTime>
Dec  6 02:16:06 np0005548731 nova_compute[232433]:      <nova:flavor name="m1.nano">
Dec  6 02:16:06 np0005548731 nova_compute[232433]:        <nova:memory>128</nova:memory>
Dec  6 02:16:06 np0005548731 nova_compute[232433]:        <nova:disk>1</nova:disk>
Dec  6 02:16:06 np0005548731 nova_compute[232433]:        <nova:swap>0</nova:swap>
Dec  6 02:16:06 np0005548731 nova_compute[232433]:        <nova:ephemeral>0</nova:ephemeral>
Dec  6 02:16:06 np0005548731 nova_compute[232433]:        <nova:vcpus>1</nova:vcpus>
Dec  6 02:16:06 np0005548731 nova_compute[232433]:      </nova:flavor>
Dec  6 02:16:06 np0005548731 nova_compute[232433]:      <nova:owner>
Dec  6 02:16:06 np0005548731 nova_compute[232433]:        <nova:user uuid="627c36bb63534e52a4b1d5adf47e6ffd">tempest-ServerActionsTestJSON-1877526843-project-member</nova:user>
Dec  6 02:16:06 np0005548731 nova_compute[232433]:        <nova:project uuid="929e2be1488d4b80b7ad8946093a6abe">tempest-ServerActionsTestJSON-1877526843</nova:project>
Dec  6 02:16:06 np0005548731 nova_compute[232433]:      </nova:owner>
Dec  6 02:16:06 np0005548731 nova_compute[232433]:      <nova:root type="image" uuid="6efab05d-c7cf-4770-a5c3-c806a2739063"/>
Dec  6 02:16:06 np0005548731 nova_compute[232433]:      <nova:ports>
Dec  6 02:16:06 np0005548731 nova_compute[232433]:        <nova:port uuid="a599f1a0-5413-4dc9-9ae4-d7ba512d761c">
Dec  6 02:16:06 np0005548731 nova_compute[232433]:          <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Dec  6 02:16:06 np0005548731 nova_compute[232433]:        </nova:port>
Dec  6 02:16:06 np0005548731 nova_compute[232433]:      </nova:ports>
Dec  6 02:16:06 np0005548731 nova_compute[232433]:    </nova:instance>
Dec  6 02:16:06 np0005548731 nova_compute[232433]:  </metadata>
Dec  6 02:16:06 np0005548731 nova_compute[232433]:  <sysinfo type="smbios">
Dec  6 02:16:06 np0005548731 nova_compute[232433]:    <system>
Dec  6 02:16:06 np0005548731 nova_compute[232433]:      <entry name="manufacturer">RDO</entry>
Dec  6 02:16:06 np0005548731 nova_compute[232433]:      <entry name="product">OpenStack Compute</entry>
Dec  6 02:16:06 np0005548731 nova_compute[232433]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec  6 02:16:06 np0005548731 nova_compute[232433]:      <entry name="serial">c8403a0c-2fe6-48fe-91af-ec5aca71e12d</entry>
Dec  6 02:16:06 np0005548731 nova_compute[232433]:      <entry name="uuid">c8403a0c-2fe6-48fe-91af-ec5aca71e12d</entry>
Dec  6 02:16:06 np0005548731 nova_compute[232433]:      <entry name="family">Virtual Machine</entry>
Dec  6 02:16:06 np0005548731 nova_compute[232433]:    </system>
Dec  6 02:16:06 np0005548731 nova_compute[232433]:  </sysinfo>
Dec  6 02:16:06 np0005548731 nova_compute[232433]:  <os>
Dec  6 02:16:06 np0005548731 nova_compute[232433]:    <type arch="x86_64" machine="q35">hvm</type>
Dec  6 02:16:06 np0005548731 nova_compute[232433]:    <boot dev="hd"/>
Dec  6 02:16:06 np0005548731 nova_compute[232433]:    <smbios mode="sysinfo"/>
Dec  6 02:16:06 np0005548731 nova_compute[232433]:  </os>
Dec  6 02:16:06 np0005548731 nova_compute[232433]:  <features>
Dec  6 02:16:06 np0005548731 nova_compute[232433]:    <acpi/>
Dec  6 02:16:06 np0005548731 nova_compute[232433]:    <apic/>
Dec  6 02:16:06 np0005548731 nova_compute[232433]:    <vmcoreinfo/>
Dec  6 02:16:06 np0005548731 nova_compute[232433]:  </features>
Dec  6 02:16:06 np0005548731 nova_compute[232433]:  <clock offset="utc">
Dec  6 02:16:06 np0005548731 nova_compute[232433]:    <timer name="pit" tickpolicy="delay"/>
Dec  6 02:16:06 np0005548731 nova_compute[232433]:    <timer name="rtc" tickpolicy="catchup"/>
Dec  6 02:16:06 np0005548731 nova_compute[232433]:    <timer name="hpet" present="no"/>
Dec  6 02:16:06 np0005548731 nova_compute[232433]:  </clock>
Dec  6 02:16:06 np0005548731 nova_compute[232433]:  <cpu mode="custom" match="exact">
Dec  6 02:16:06 np0005548731 nova_compute[232433]:    <model>Nehalem</model>
Dec  6 02:16:06 np0005548731 nova_compute[232433]:    <topology sockets="1" cores="1" threads="1"/>
Dec  6 02:16:06 np0005548731 nova_compute[232433]:  </cpu>
Dec  6 02:16:06 np0005548731 nova_compute[232433]:  <devices>
Dec  6 02:16:06 np0005548731 nova_compute[232433]:    <disk type="network" device="disk">
Dec  6 02:16:06 np0005548731 nova_compute[232433]:      <driver type="raw" cache="none"/>
Dec  6 02:16:06 np0005548731 nova_compute[232433]:      <source protocol="rbd" name="vms/c8403a0c-2fe6-48fe-91af-ec5aca71e12d_disk">
Dec  6 02:16:06 np0005548731 nova_compute[232433]:        <host name="192.168.122.100" port="6789"/>
Dec  6 02:16:06 np0005548731 nova_compute[232433]:        <host name="192.168.122.102" port="6789"/>
Dec  6 02:16:06 np0005548731 nova_compute[232433]:        <host name="192.168.122.101" port="6789"/>
Dec  6 02:16:06 np0005548731 nova_compute[232433]:      </source>
Dec  6 02:16:06 np0005548731 nova_compute[232433]:      <auth username="openstack">
Dec  6 02:16:06 np0005548731 nova_compute[232433]:        <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec  6 02:16:06 np0005548731 nova_compute[232433]:      </auth>
Dec  6 02:16:06 np0005548731 nova_compute[232433]:      <target dev="vda" bus="virtio"/>
Dec  6 02:16:06 np0005548731 nova_compute[232433]:    </disk>
Dec  6 02:16:06 np0005548731 nova_compute[232433]:    <disk type="network" device="cdrom">
Dec  6 02:16:06 np0005548731 nova_compute[232433]:      <driver type="raw" cache="none"/>
Dec  6 02:16:06 np0005548731 nova_compute[232433]:      <source protocol="rbd" name="vms/c8403a0c-2fe6-48fe-91af-ec5aca71e12d_disk.config">
Dec  6 02:16:06 np0005548731 nova_compute[232433]:        <host name="192.168.122.100" port="6789"/>
Dec  6 02:16:06 np0005548731 nova_compute[232433]:        <host name="192.168.122.102" port="6789"/>
Dec  6 02:16:06 np0005548731 nova_compute[232433]:        <host name="192.168.122.101" port="6789"/>
Dec  6 02:16:06 np0005548731 nova_compute[232433]:      </source>
Dec  6 02:16:06 np0005548731 nova_compute[232433]:      <auth username="openstack">
Dec  6 02:16:06 np0005548731 nova_compute[232433]:        <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec  6 02:16:06 np0005548731 nova_compute[232433]:      </auth>
Dec  6 02:16:06 np0005548731 nova_compute[232433]:      <target dev="sda" bus="sata"/>
Dec  6 02:16:06 np0005548731 nova_compute[232433]:    </disk>
Dec  6 02:16:06 np0005548731 nova_compute[232433]:    <interface type="ethernet">
Dec  6 02:16:06 np0005548731 nova_compute[232433]:      <mac address="fa:16:3e:9b:0b:0a"/>
Dec  6 02:16:06 np0005548731 nova_compute[232433]:      <model type="virtio"/>
Dec  6 02:16:06 np0005548731 nova_compute[232433]:      <driver name="vhost" rx_queue_size="512"/>
Dec  6 02:16:06 np0005548731 nova_compute[232433]:      <mtu size="1442"/>
Dec  6 02:16:06 np0005548731 nova_compute[232433]:      <target dev="tapa599f1a0-54"/>
Dec  6 02:16:06 np0005548731 nova_compute[232433]:    </interface>
Dec  6 02:16:06 np0005548731 nova_compute[232433]:    <serial type="pty">
Dec  6 02:16:06 np0005548731 nova_compute[232433]:      <log file="/var/lib/nova/instances/c8403a0c-2fe6-48fe-91af-ec5aca71e12d/console.log" append="off"/>
Dec  6 02:16:06 np0005548731 nova_compute[232433]:    </serial>
Dec  6 02:16:06 np0005548731 nova_compute[232433]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec  6 02:16:06 np0005548731 nova_compute[232433]:    <video>
Dec  6 02:16:06 np0005548731 nova_compute[232433]:      <model type="virtio"/>
Dec  6 02:16:06 np0005548731 nova_compute[232433]:    </video>
Dec  6 02:16:06 np0005548731 nova_compute[232433]:    <input type="tablet" bus="usb"/>
Dec  6 02:16:06 np0005548731 nova_compute[232433]:    <input type="keyboard" bus="usb"/>
Dec  6 02:16:06 np0005548731 nova_compute[232433]:    <rng model="virtio">
Dec  6 02:16:06 np0005548731 nova_compute[232433]:      <backend model="random">/dev/urandom</backend>
Dec  6 02:16:06 np0005548731 nova_compute[232433]:    </rng>
Dec  6 02:16:06 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root"/>
Dec  6 02:16:06 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:16:06 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:16:06 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:16:06 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:16:06 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:16:06 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:16:06 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:16:06 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:16:06 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:16:06 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:16:06 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:16:06 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:16:06 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:16:06 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:16:06 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:16:06 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:16:06 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:16:06 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:16:06 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:16:06 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:16:06 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:16:06 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:16:06 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:16:06 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:16:06 np0005548731 nova_compute[232433]:    <controller type="usb" index="0"/>
Dec  6 02:16:06 np0005548731 nova_compute[232433]:    <memballoon model="virtio">
Dec  6 02:16:06 np0005548731 nova_compute[232433]:      <stats period="10"/>
Dec  6 02:16:06 np0005548731 nova_compute[232433]:    </memballoon>
Dec  6 02:16:06 np0005548731 nova_compute[232433]:  </devices>
Dec  6 02:16:06 np0005548731 nova_compute[232433]: </domain>
Dec  6 02:16:06 np0005548731 nova_compute[232433]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec  6 02:16:06 np0005548731 nova_compute[232433]: 2025-12-06 07:16:06.166 232437 DEBUG nova.virt.libvirt.driver [None req-e380e3c5-3ddc-47c6-9f9c-8aa5b6a05e74 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] skipping disk for instance-00000044 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  6 02:16:06 np0005548731 nova_compute[232433]: 2025-12-06 07:16:06.167 232437 DEBUG nova.virt.libvirt.driver [None req-e380e3c5-3ddc-47c6-9f9c-8aa5b6a05e74 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] skipping disk for instance-00000044 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  6 02:16:06 np0005548731 nova_compute[232433]: 2025-12-06 07:16:06.167 232437 DEBUG nova.virt.libvirt.vif [None req-e380e3c5-3ddc-47c6-9f9c-8aa5b6a05e74 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T07:14:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-893709654',display_name='tempest-ServerActionsTestJSON-server-893709654',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-893709654',id=68,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAYy9PI2opG1Yb015LzaQaZHiAr4KsuqNy5RLRivgn9w0frXJzdA9SLIokq/TNHsTv+OZ3SzlEhSSm/zy2gaUVX2tVfQksdYXi87Z2HYYYX2anFBfTxIFgh3j22gU5Usow==',key_name='tempest-keypair-1101896810',keypairs=<?>,launch_index=0,launched_at=2025-12-06T07:14:31Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=<?>,power_state=1,progress=0,project_id='929e2be1488d4b80b7ad8946093a6abe',ramdisk_id='',reservation_id='r-klri94j0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-1877526843',owner_user_name='tempest-ServerActionsTestJSON-1877526843-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T07:16:04Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='627c36bb63534e52a4b1d5adf47e6ffd',uuid=c8403a0c-2fe6-48fe-91af-ec5aca71e12d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a599f1a0-5413-4dc9-9ae4-d7ba512d761c", "address": "fa:16:3e:9b:0b:0a", "network": {"id": "4d599401-3772-4e38-8cd2-d774d370af64", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-809610913-network", "subnets": [{"cidr": 
"10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.185", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "929e2be1488d4b80b7ad8946093a6abe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa599f1a0-54", "ovs_interfaceid": "a599f1a0-5413-4dc9-9ae4-d7ba512d761c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec  6 02:16:06 np0005548731 nova_compute[232433]: 2025-12-06 07:16:06.168 232437 DEBUG nova.network.os_vif_util [None req-e380e3c5-3ddc-47c6-9f9c-8aa5b6a05e74 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Converting VIF {"id": "a599f1a0-5413-4dc9-9ae4-d7ba512d761c", "address": "fa:16:3e:9b:0b:0a", "network": {"id": "4d599401-3772-4e38-8cd2-d774d370af64", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-809610913-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.185", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "929e2be1488d4b80b7ad8946093a6abe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa599f1a0-54", "ovs_interfaceid": "a599f1a0-5413-4dc9-9ae4-d7ba512d761c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 02:16:06 np0005548731 nova_compute[232433]: 2025-12-06 07:16:06.168 232437 DEBUG nova.network.os_vif_util [None req-e380e3c5-3ddc-47c6-9f9c-8aa5b6a05e74 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:9b:0b:0a,bridge_name='br-int',has_traffic_filtering=True,id=a599f1a0-5413-4dc9-9ae4-d7ba512d761c,network=Network(4d599401-3772-4e38-8cd2-d774d370af64),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa599f1a0-54') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 02:16:06 np0005548731 nova_compute[232433]: 2025-12-06 07:16:06.169 232437 DEBUG os_vif [None req-e380e3c5-3ddc-47c6-9f9c-8aa5b6a05e74 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Plugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:9b:0b:0a,bridge_name='br-int',has_traffic_filtering=True,id=a599f1a0-5413-4dc9-9ae4-d7ba512d761c,network=Network(4d599401-3772-4e38-8cd2-d774d370af64),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa599f1a0-54') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec  6 02:16:06 np0005548731 nova_compute[232433]: 2025-12-06 07:16:06.169 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:16:06 np0005548731 nova_compute[232433]: 2025-12-06 07:16:06.169 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:16:06 np0005548731 nova_compute[232433]: 2025-12-06 07:16:06.170 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  6 02:16:06 np0005548731 nova_compute[232433]: 2025-12-06 07:16:06.172 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:16:06 np0005548731 nova_compute[232433]: 2025-12-06 07:16:06.172 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa599f1a0-54, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:16:06 np0005548731 nova_compute[232433]: 2025-12-06 07:16:06.173 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapa599f1a0-54, col_values=(('external_ids', {'iface-id': 'a599f1a0-5413-4dc9-9ae4-d7ba512d761c', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:9b:0b:0a', 'vm-uuid': 'c8403a0c-2fe6-48fe-91af-ec5aca71e12d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:16:06 np0005548731 NetworkManager[49182]: <info>  [1765005366.1749] manager: (tapa599f1a0-54): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/129)
Dec  6 02:16:06 np0005548731 nova_compute[232433]: 2025-12-06 07:16:06.175 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  6 02:16:06 np0005548731 nova_compute[232433]: 2025-12-06 07:16:06.179 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:16:06 np0005548731 nova_compute[232433]: 2025-12-06 07:16:06.180 232437 INFO os_vif [None req-e380e3c5-3ddc-47c6-9f9c-8aa5b6a05e74 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Successfully plugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:9b:0b:0a,bridge_name='br-int',has_traffic_filtering=True,id=a599f1a0-5413-4dc9-9ae4-d7ba512d761c,network=Network(4d599401-3772-4e38-8cd2-d774d370af64),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa599f1a0-54')#033[00m
Dec  6 02:16:06 np0005548731 kernel: tapa599f1a0-54: entered promiscuous mode
Dec  6 02:16:06 np0005548731 NetworkManager[49182]: <info>  [1765005366.2472] manager: (tapa599f1a0-54): new Tun device (/org/freedesktop/NetworkManager/Devices/130)
Dec  6 02:16:06 np0005548731 ovn_controller[133927]: 2025-12-06T07:16:06Z|00229|binding|INFO|Claiming lport a599f1a0-5413-4dc9-9ae4-d7ba512d761c for this chassis.
Dec  6 02:16:06 np0005548731 ovn_controller[133927]: 2025-12-06T07:16:06Z|00230|binding|INFO|a599f1a0-5413-4dc9-9ae4-d7ba512d761c: Claiming fa:16:3e:9b:0b:0a 10.100.0.6
Dec  6 02:16:06 np0005548731 nova_compute[232433]: 2025-12-06 07:16:06.247 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:16:06 np0005548731 systemd-udevd[262070]: Network interface NamePolicy= disabled on kernel command line.
Dec  6 02:16:06 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:16:06.259 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9b:0b:0a 10.100.0.6'], port_security=['fa:16:3e:9b:0b:0a 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'c8403a0c-2fe6-48fe-91af-ec5aca71e12d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4d599401-3772-4e38-8cd2-d774d370af64', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '929e2be1488d4b80b7ad8946093a6abe', 'neutron:revision_number': '9', 'neutron:security_group_ids': '310d97ff-0e42-4be5-a68e-20cbdb7be60d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.185'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=222872e8-5260-47b5-883e-369af9b3a47f, chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], logical_port=a599f1a0-5413-4dc9-9ae4-d7ba512d761c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 02:16:06 np0005548731 NetworkManager[49182]: <info>  [1765005366.2609] device (tapa599f1a0-54): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec  6 02:16:06 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:16:06.260 143965 INFO neutron.agent.ovn.metadata.agent [-] Port a599f1a0-5413-4dc9-9ae4-d7ba512d761c in datapath 4d599401-3772-4e38-8cd2-d774d370af64 bound to our chassis#033[00m
Dec  6 02:16:06 np0005548731 NetworkManager[49182]: <info>  [1765005366.2622] device (tapa599f1a0-54): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec  6 02:16:06 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:16:06.262 143965 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 4d599401-3772-4e38-8cd2-d774d370af64#033[00m
Dec  6 02:16:06 np0005548731 ovn_controller[133927]: 2025-12-06T07:16:06Z|00231|binding|INFO|Setting lport a599f1a0-5413-4dc9-9ae4-d7ba512d761c ovn-installed in OVS
Dec  6 02:16:06 np0005548731 ovn_controller[133927]: 2025-12-06T07:16:06Z|00232|binding|INFO|Setting lport a599f1a0-5413-4dc9-9ae4-d7ba512d761c up in Southbound
Dec  6 02:16:06 np0005548731 nova_compute[232433]: 2025-12-06 07:16:06.264 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:16:06 np0005548731 nova_compute[232433]: 2025-12-06 07:16:06.267 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:16:06 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:16:06.274 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[f66f1919-b293-4da4-baf0-6f5fef84e7c0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:16:06 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:16:06.274 143965 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap4d599401-31 in ovnmeta-4d599401-3772-4e38-8cd2-d774d370af64 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Dec  6 02:16:06 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:16:06.278 236668 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap4d599401-30 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Dec  6 02:16:06 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:16:06.278 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[909b78df-8969-477d-8ad9-f4ba91dd30cc]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:16:06 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:16:06.279 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[e59c0d77-7d71-4a83-8cc4-62cae875a894]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:16:06 np0005548731 systemd-machined[195355]: New machine qemu-31-instance-00000044.
Dec  6 02:16:06 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:16:06.291 144079 DEBUG oslo.privsep.daemon [-] privsep: reply[cec7e142-80a3-49b7-bfc8-fe8d6f7100de]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:16:06 np0005548731 systemd[1]: Started Virtual Machine qemu-31-instance-00000044.
Dec  6 02:16:06 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:16:06.304 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[12db754e-fc7d-424c-b197-1fabe7f261d3]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:16:06 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:16:06.333 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[f38eadb6-f395-48f1-a45a-525c4d4a16eb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:16:06 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:16:06.338 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[0f4a8207-c39f-45c4-89a9-414712336f6a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:16:06 np0005548731 NetworkManager[49182]: <info>  [1765005366.3392] manager: (tap4d599401-30): new Veth device (/org/freedesktop/NetworkManager/Devices/131)
Dec  6 02:16:06 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:16:06.366 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[48a8281c-316c-4f8e-a1c7-a6874fdb69c9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:16:06 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:16:06.369 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[3e81d1f0-d811-465c-a6d5-298cb77fe18f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:16:06 np0005548731 NetworkManager[49182]: <info>  [1765005366.3903] device (tap4d599401-30): carrier: link connected
Dec  6 02:16:06 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:16:06.394 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[c7106643-b770-4c65-86cb-21621b6b2bed]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:16:06 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:16:06.409 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[39af53da-cb98-401a-9995-f4e38fd83f1d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4d599401-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:05:4c:b3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 79], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 562839, 'reachable_time': 40280, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 262252, 'error': None, 'target': 'ovnmeta-4d599401-3772-4e38-8cd2-d774d370af64', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:16:06 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:16:06.423 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[5db0f699-ee41-4bbf-b0ab-ae53e2e5ce4f]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe05:4cb3'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 562839, 'tstamp': 562839}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 262253, 'error': None, 'target': 'ovnmeta-4d599401-3772-4e38-8cd2-d774d370af64', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:16:06 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:16:06.438 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[1a9ddaf7-dbc1-4c80-abb1-a1799a05e5b5]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4d599401-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:05:4c:b3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 79], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 562839, 'reachable_time': 40280, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 262254, 'error': None, 'target': 'ovnmeta-4d599401-3772-4e38-8cd2-d774d370af64', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:16:06 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:16:06 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:16:06 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:16:06.462 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:16:06 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:16:06.466 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[2c3dedf0-018c-4ae2-aac2-a3ad10eb1dae]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:16:06 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:16:06.515 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[5de33152-9d95-416e-aef8-1c6088a8e7a6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:16:06 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:16:06.516 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4d599401-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:16:06 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:16:06.516 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  6 02:16:06 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:16:06.516 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4d599401-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:16:06 np0005548731 NetworkManager[49182]: <info>  [1765005366.5187] manager: (tap4d599401-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/132)
Dec  6 02:16:06 np0005548731 nova_compute[232433]: 2025-12-06 07:16:06.518 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:16:06 np0005548731 kernel: tap4d599401-30: entered promiscuous mode
Dec  6 02:16:06 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:16:06.521 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap4d599401-30, col_values=(('external_ids', {'iface-id': 'd5f15755-ab6a-4ce9-857e-63f6c0e19fd8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:16:06 np0005548731 ovn_controller[133927]: 2025-12-06T07:16:06Z|00233|binding|INFO|Releasing lport d5f15755-ab6a-4ce9-857e-63f6c0e19fd8 from this chassis (sb_readonly=0)
Dec  6 02:16:06 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:16:06.523 143965 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/4d599401-3772-4e38-8cd2-d774d370af64.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/4d599401-3772-4e38-8cd2-d774d370af64.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Dec  6 02:16:06 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:16:06.532 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[bafe93e3-6a41-40f9-ab11-de1086e09ffb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:16:06 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:16:06.533 143965 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec  6 02:16:06 np0005548731 ovn_metadata_agent[143960]: global
Dec  6 02:16:06 np0005548731 ovn_metadata_agent[143960]:    log         /dev/log local0 debug
Dec  6 02:16:06 np0005548731 ovn_metadata_agent[143960]:    log-tag     haproxy-metadata-proxy-4d599401-3772-4e38-8cd2-d774d370af64
Dec  6 02:16:06 np0005548731 ovn_metadata_agent[143960]:    user        root
Dec  6 02:16:06 np0005548731 ovn_metadata_agent[143960]:    group       root
Dec  6 02:16:06 np0005548731 ovn_metadata_agent[143960]:    maxconn     1024
Dec  6 02:16:06 np0005548731 ovn_metadata_agent[143960]:    pidfile     /var/lib/neutron/external/pids/4d599401-3772-4e38-8cd2-d774d370af64.pid.haproxy
Dec  6 02:16:06 np0005548731 ovn_metadata_agent[143960]:    daemon
Dec  6 02:16:06 np0005548731 ovn_metadata_agent[143960]: 
Dec  6 02:16:06 np0005548731 ovn_metadata_agent[143960]: defaults
Dec  6 02:16:06 np0005548731 ovn_metadata_agent[143960]:    log global
Dec  6 02:16:06 np0005548731 ovn_metadata_agent[143960]:    mode http
Dec  6 02:16:06 np0005548731 ovn_metadata_agent[143960]:    option httplog
Dec  6 02:16:06 np0005548731 ovn_metadata_agent[143960]:    option dontlognull
Dec  6 02:16:06 np0005548731 ovn_metadata_agent[143960]:    option http-server-close
Dec  6 02:16:06 np0005548731 ovn_metadata_agent[143960]:    option forwardfor
Dec  6 02:16:06 np0005548731 ovn_metadata_agent[143960]:    retries                 3
Dec  6 02:16:06 np0005548731 ovn_metadata_agent[143960]:    timeout http-request    30s
Dec  6 02:16:06 np0005548731 ovn_metadata_agent[143960]:    timeout connect         30s
Dec  6 02:16:06 np0005548731 ovn_metadata_agent[143960]:    timeout client          32s
Dec  6 02:16:06 np0005548731 ovn_metadata_agent[143960]:    timeout server          32s
Dec  6 02:16:06 np0005548731 ovn_metadata_agent[143960]:    timeout http-keep-alive 30s
Dec  6 02:16:06 np0005548731 ovn_metadata_agent[143960]: 
Dec  6 02:16:06 np0005548731 ovn_metadata_agent[143960]: 
Dec  6 02:16:06 np0005548731 ovn_metadata_agent[143960]: listen listener
Dec  6 02:16:06 np0005548731 ovn_metadata_agent[143960]:    bind 169.254.169.254:80
Dec  6 02:16:06 np0005548731 ovn_metadata_agent[143960]:    server metadata /var/lib/neutron/metadata_proxy
Dec  6 02:16:06 np0005548731 ovn_metadata_agent[143960]:    http-request add-header X-OVN-Network-ID 4d599401-3772-4e38-8cd2-d774d370af64
Dec  6 02:16:06 np0005548731 ovn_metadata_agent[143960]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Dec  6 02:16:06 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:16:06.534 143965 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-4d599401-3772-4e38-8cd2-d774d370af64', 'env', 'PROCESS_TAG=haproxy-4d599401-3772-4e38-8cd2-d774d370af64', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/4d599401-3772-4e38-8cd2-d774d370af64.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Dec  6 02:16:06 np0005548731 nova_compute[232433]: 2025-12-06 07:16:06.535 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:16:06 np0005548731 nova_compute[232433]: 2025-12-06 07:16:06.791 232437 DEBUG nova.virt.libvirt.host [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Removed pending event for c8403a0c-2fe6-48fe-91af-ec5aca71e12d due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Dec  6 02:16:06 np0005548731 nova_compute[232433]: 2025-12-06 07:16:06.792 232437 DEBUG nova.virt.driver [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Emitting event <LifecycleEvent: 1765005366.791404, c8403a0c-2fe6-48fe-91af-ec5aca71e12d => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 02:16:06 np0005548731 nova_compute[232433]: 2025-12-06 07:16:06.792 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: c8403a0c-2fe6-48fe-91af-ec5aca71e12d] VM Resumed (Lifecycle Event)#033[00m
Dec  6 02:16:06 np0005548731 nova_compute[232433]: 2025-12-06 07:16:06.795 232437 DEBUG nova.compute.manager [None req-e380e3c5-3ddc-47c6-9f9c-8aa5b6a05e74 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] [instance: c8403a0c-2fe6-48fe-91af-ec5aca71e12d] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec  6 02:16:06 np0005548731 nova_compute[232433]: 2025-12-06 07:16:06.799 232437 INFO nova.virt.libvirt.driver [-] [instance: c8403a0c-2fe6-48fe-91af-ec5aca71e12d] Instance rebooted successfully.#033[00m
Dec  6 02:16:06 np0005548731 nova_compute[232433]: 2025-12-06 07:16:06.800 232437 DEBUG nova.compute.manager [None req-e380e3c5-3ddc-47c6-9f9c-8aa5b6a05e74 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] [instance: c8403a0c-2fe6-48fe-91af-ec5aca71e12d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:16:06 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:16:06 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:16:06 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:16:06.805 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:16:06 np0005548731 nova_compute[232433]: 2025-12-06 07:16:06.846 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: c8403a0c-2fe6-48fe-91af-ec5aca71e12d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:16:06 np0005548731 nova_compute[232433]: 2025-12-06 07:16:06.851 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: c8403a0c-2fe6-48fe-91af-ec5aca71e12d] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: reboot_started_hard, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  6 02:16:06 np0005548731 nova_compute[232433]: 2025-12-06 07:16:06.852 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:16:06 np0005548731 nova_compute[232433]: 2025-12-06 07:16:06.863 232437 DEBUG oslo_concurrency.lockutils [None req-e380e3c5-3ddc-47c6-9f9c-8aa5b6a05e74 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Lock "c8403a0c-2fe6-48fe-91af-ec5aca71e12d" "released" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: held 3.435s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:16:06 np0005548731 nova_compute[232433]: 2025-12-06 07:16:06.873 232437 DEBUG nova.virt.driver [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Emitting event <LifecycleEvent: 1765005366.7941396, c8403a0c-2fe6-48fe-91af-ec5aca71e12d => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 02:16:06 np0005548731 nova_compute[232433]: 2025-12-06 07:16:06.873 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: c8403a0c-2fe6-48fe-91af-ec5aca71e12d] VM Started (Lifecycle Event)#033[00m
Dec  6 02:16:06 np0005548731 nova_compute[232433]: 2025-12-06 07:16:06.891 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: c8403a0c-2fe6-48fe-91af-ec5aca71e12d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:16:06 np0005548731 nova_compute[232433]: 2025-12-06 07:16:06.894 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: c8403a0c-2fe6-48fe-91af-ec5aca71e12d] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  6 02:16:06 np0005548731 podman[262327]: 2025-12-06 07:16:06.895069961 +0000 UTC m=+0.021959538 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec  6 02:16:07 np0005548731 podman[262327]: 2025-12-06 07:16:07.058222171 +0000 UTC m=+0.185111738 container create 317e1c23308267c2af425d4e07b494cff705a13752ffc5c7271932c07e5b1189 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4d599401-3772-4e38-8cd2-d774d370af64, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Dec  6 02:16:07 np0005548731 systemd[1]: Started libpod-conmon-317e1c23308267c2af425d4e07b494cff705a13752ffc5c7271932c07e5b1189.scope.
Dec  6 02:16:07 np0005548731 systemd[1]: Started libcrun container.
Dec  6 02:16:07 np0005548731 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/475c105d3a78b321a23d8ade3162af64a94a2b613eaed7e9692b9a012d35810f/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec  6 02:16:07 np0005548731 podman[262327]: 2025-12-06 07:16:07.19102954 +0000 UTC m=+0.317919137 container init 317e1c23308267c2af425d4e07b494cff705a13752ffc5c7271932c07e5b1189 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4d599401-3772-4e38-8cd2-d774d370af64, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, io.buildah.version=1.41.3)
Dec  6 02:16:07 np0005548731 podman[262327]: 2025-12-06 07:16:07.198940614 +0000 UTC m=+0.325830181 container start 317e1c23308267c2af425d4e07b494cff705a13752ffc5c7271932c07e5b1189 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4d599401-3772-4e38-8cd2-d774d370af64, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec  6 02:16:07 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e252 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:16:07 np0005548731 neutron-haproxy-ovnmeta-4d599401-3772-4e38-8cd2-d774d370af64[262342]: [NOTICE]   (262346) : New worker (262348) forked
Dec  6 02:16:07 np0005548731 neutron-haproxy-ovnmeta-4d599401-3772-4e38-8cd2-d774d370af64[262342]: [NOTICE]   (262346) : Loading success.
Dec  6 02:16:07 np0005548731 nova_compute[232433]: 2025-12-06 07:16:07.333 232437 DEBUG nova.compute.manager [req-54de7281-09dd-4662-9d72-a870e601e832 req-88a26d8f-0b93-4e3a-a7a1-527dc6963bca 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: c8403a0c-2fe6-48fe-91af-ec5aca71e12d] Received event network-vif-plugged-a599f1a0-5413-4dc9-9ae4-d7ba512d761c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:16:07 np0005548731 nova_compute[232433]: 2025-12-06 07:16:07.334 232437 DEBUG oslo_concurrency.lockutils [req-54de7281-09dd-4662-9d72-a870e601e832 req-88a26d8f-0b93-4e3a-a7a1-527dc6963bca 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "c8403a0c-2fe6-48fe-91af-ec5aca71e12d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:16:07 np0005548731 nova_compute[232433]: 2025-12-06 07:16:07.334 232437 DEBUG oslo_concurrency.lockutils [req-54de7281-09dd-4662-9d72-a870e601e832 req-88a26d8f-0b93-4e3a-a7a1-527dc6963bca 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "c8403a0c-2fe6-48fe-91af-ec5aca71e12d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:16:07 np0005548731 nova_compute[232433]: 2025-12-06 07:16:07.335 232437 DEBUG oslo_concurrency.lockutils [req-54de7281-09dd-4662-9d72-a870e601e832 req-88a26d8f-0b93-4e3a-a7a1-527dc6963bca 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "c8403a0c-2fe6-48fe-91af-ec5aca71e12d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:16:07 np0005548731 nova_compute[232433]: 2025-12-06 07:16:07.335 232437 DEBUG nova.compute.manager [req-54de7281-09dd-4662-9d72-a870e601e832 req-88a26d8f-0b93-4e3a-a7a1-527dc6963bca 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: c8403a0c-2fe6-48fe-91af-ec5aca71e12d] No waiting events found dispatching network-vif-plugged-a599f1a0-5413-4dc9-9ae4-d7ba512d761c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 02:16:07 np0005548731 nova_compute[232433]: 2025-12-06 07:16:07.335 232437 WARNING nova.compute.manager [req-54de7281-09dd-4662-9d72-a870e601e832 req-88a26d8f-0b93-4e3a-a7a1-527dc6963bca 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: c8403a0c-2fe6-48fe-91af-ec5aca71e12d] Received unexpected event network-vif-plugged-a599f1a0-5413-4dc9-9ae4-d7ba512d761c for instance with vm_state active and task_state None.#033[00m
Dec  6 02:16:07 np0005548731 nova_compute[232433]: 2025-12-06 07:16:07.335 232437 DEBUG nova.compute.manager [req-54de7281-09dd-4662-9d72-a870e601e832 req-88a26d8f-0b93-4e3a-a7a1-527dc6963bca 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: c8403a0c-2fe6-48fe-91af-ec5aca71e12d] Received event network-vif-plugged-a599f1a0-5413-4dc9-9ae4-d7ba512d761c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:16:07 np0005548731 nova_compute[232433]: 2025-12-06 07:16:07.335 232437 DEBUG oslo_concurrency.lockutils [req-54de7281-09dd-4662-9d72-a870e601e832 req-88a26d8f-0b93-4e3a-a7a1-527dc6963bca 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "c8403a0c-2fe6-48fe-91af-ec5aca71e12d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:16:07 np0005548731 nova_compute[232433]: 2025-12-06 07:16:07.336 232437 DEBUG oslo_concurrency.lockutils [req-54de7281-09dd-4662-9d72-a870e601e832 req-88a26d8f-0b93-4e3a-a7a1-527dc6963bca 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "c8403a0c-2fe6-48fe-91af-ec5aca71e12d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:16:07 np0005548731 nova_compute[232433]: 2025-12-06 07:16:07.336 232437 DEBUG oslo_concurrency.lockutils [req-54de7281-09dd-4662-9d72-a870e601e832 req-88a26d8f-0b93-4e3a-a7a1-527dc6963bca 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "c8403a0c-2fe6-48fe-91af-ec5aca71e12d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:16:07 np0005548731 nova_compute[232433]: 2025-12-06 07:16:07.336 232437 DEBUG nova.compute.manager [req-54de7281-09dd-4662-9d72-a870e601e832 req-88a26d8f-0b93-4e3a-a7a1-527dc6963bca 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: c8403a0c-2fe6-48fe-91af-ec5aca71e12d] No waiting events found dispatching network-vif-plugged-a599f1a0-5413-4dc9-9ae4-d7ba512d761c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 02:16:07 np0005548731 nova_compute[232433]: 2025-12-06 07:16:07.336 232437 WARNING nova.compute.manager [req-54de7281-09dd-4662-9d72-a870e601e832 req-88a26d8f-0b93-4e3a-a7a1-527dc6963bca 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: c8403a0c-2fe6-48fe-91af-ec5aca71e12d] Received unexpected event network-vif-plugged-a599f1a0-5413-4dc9-9ae4-d7ba512d761c for instance with vm_state active and task_state None.#033[00m
Dec  6 02:16:07 np0005548731 nova_compute[232433]: 2025-12-06 07:16:07.336 232437 DEBUG nova.compute.manager [req-54de7281-09dd-4662-9d72-a870e601e832 req-88a26d8f-0b93-4e3a-a7a1-527dc6963bca 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: c8403a0c-2fe6-48fe-91af-ec5aca71e12d] Received event network-vif-plugged-a599f1a0-5413-4dc9-9ae4-d7ba512d761c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:16:07 np0005548731 nova_compute[232433]: 2025-12-06 07:16:07.336 232437 DEBUG oslo_concurrency.lockutils [req-54de7281-09dd-4662-9d72-a870e601e832 req-88a26d8f-0b93-4e3a-a7a1-527dc6963bca 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "c8403a0c-2fe6-48fe-91af-ec5aca71e12d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:16:07 np0005548731 nova_compute[232433]: 2025-12-06 07:16:07.336 232437 DEBUG oslo_concurrency.lockutils [req-54de7281-09dd-4662-9d72-a870e601e832 req-88a26d8f-0b93-4e3a-a7a1-527dc6963bca 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "c8403a0c-2fe6-48fe-91af-ec5aca71e12d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:16:07 np0005548731 nova_compute[232433]: 2025-12-06 07:16:07.337 232437 DEBUG oslo_concurrency.lockutils [req-54de7281-09dd-4662-9d72-a870e601e832 req-88a26d8f-0b93-4e3a-a7a1-527dc6963bca 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "c8403a0c-2fe6-48fe-91af-ec5aca71e12d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:16:07 np0005548731 nova_compute[232433]: 2025-12-06 07:16:07.337 232437 DEBUG nova.compute.manager [req-54de7281-09dd-4662-9d72-a870e601e832 req-88a26d8f-0b93-4e3a-a7a1-527dc6963bca 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: c8403a0c-2fe6-48fe-91af-ec5aca71e12d] No waiting events found dispatching network-vif-plugged-a599f1a0-5413-4dc9-9ae4-d7ba512d761c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 02:16:07 np0005548731 nova_compute[232433]: 2025-12-06 07:16:07.337 232437 WARNING nova.compute.manager [req-54de7281-09dd-4662-9d72-a870e601e832 req-88a26d8f-0b93-4e3a-a7a1-527dc6963bca 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: c8403a0c-2fe6-48fe-91af-ec5aca71e12d] Received unexpected event network-vif-plugged-a599f1a0-5413-4dc9-9ae4-d7ba512d761c for instance with vm_state active and task_state None.#033[00m
Dec  6 02:16:07 np0005548731 nova_compute[232433]: 2025-12-06 07:16:07.696 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:16:08 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:16:08 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:16:08 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:16:08.464 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:16:08 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:16:08 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:16:08 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:16:08.809 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:16:10 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:16:10 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:16:10 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:16:10.466 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:16:10 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:16:10 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:16:10 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:16:10.811 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:16:11 np0005548731 nova_compute[232433]: 2025-12-06 07:16:11.175 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:16:11 np0005548731 nova_compute[232433]: 2025-12-06 07:16:11.852 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:16:12 np0005548731 nova_compute[232433]: 2025-12-06 07:16:12.025 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:16:12 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e252 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:16:12 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:16:12 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:16:12 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:16:12.468 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:16:12 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:16:12 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:16:12 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:16:12.814 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:16:14 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:16:14 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:16:14 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:16:14.470 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:16:14 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:16:14 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:16:14 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:16:14.817 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:16:16 np0005548731 nova_compute[232433]: 2025-12-06 07:16:16.179 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:16:16 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:16:16 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:16:16 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:16:16.473 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:16:16 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:16:16 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:16:16 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:16:16.820 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:16:16 np0005548731 nova_compute[232433]: 2025-12-06 07:16:16.854 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:16:17 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e252 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:16:17 np0005548731 nova_compute[232433]: 2025-12-06 07:16:17.677 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:16:17 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  6 02:16:18 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:16:18 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:16:18 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:16:18.476 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:16:18 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:16:18 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:16:18 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:16:18.824 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:16:19 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 02:16:19 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  6 02:16:19 np0005548731 ovn_controller[133927]: 2025-12-06T07:16:19Z|00030|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:9b:0b:0a 10.100.0.6
Dec  6 02:16:20 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:16:20 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:16:20 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:16:20.478 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:16:20 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:16:20 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:16:20 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:16:20.827 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:16:21 np0005548731 nova_compute[232433]: 2025-12-06 07:16:21.181 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:16:21 np0005548731 nova_compute[232433]: 2025-12-06 07:16:21.426 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:16:21 np0005548731 nova_compute[232433]: 2025-12-06 07:16:21.857 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:16:22 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e252 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:16:22 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:16:22 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:16:22 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:16:22.480 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:16:22 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:16:22 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:16:22 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:16:22.829 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:16:22 np0005548731 podman[262546]: 2025-12-06 07:16:22.901559022 +0000 UTC m=+0.060387717 container health_status 26c2ce9bdb83c6db5ccad7f5b2a7cb4e9354ab0235ad2f942ea42c8feda2142b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  6 02:16:22 np0005548731 podman[262547]: 2025-12-06 07:16:22.93130965 +0000 UTC m=+0.090140185 container health_status a118c3becf56cfbcae208065771ebf10a65162bb7b96389971293e534823fb78 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Dec  6 02:16:22 np0005548731 podman[262548]: 2025-12-06 07:16:22.932886838 +0000 UTC m=+0.086137667 container health_status ae4fd08cdec31d6ae2a6b91ffff7deb6ca5a8375cb52ee4b685cc6f25796824e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3)
Dec  6 02:16:24 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:16:24 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:16:24 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:16:24.482 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:16:24 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:16:24 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:16:24 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:16:24.832 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:16:26 np0005548731 nova_compute[232433]: 2025-12-06 07:16:26.183 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:16:26 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:16:26 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:16:26 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:16:26.485 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:16:26 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:16:26 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:16:26 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:16:26.836 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:16:26 np0005548731 nova_compute[232433]: 2025-12-06 07:16:26.859 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:16:27 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e252 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:16:28 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:16:28 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:16:28 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:16:28.487 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:16:28 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:16:28 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:16:28 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:16:28.839 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:16:30 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 02:16:30 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 02:16:30 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:16:30 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:16:30 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:16:30.490 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:16:30 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:16:30 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:16:30 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:16:30.842 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:16:31 np0005548731 nova_compute[232433]: 2025-12-06 07:16:31.187 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:16:31 np0005548731 nova_compute[232433]: 2025-12-06 07:16:31.861 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:16:32 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e252 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:16:32 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:16:32 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:16:32 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:16:32.492 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:16:32 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:16:32 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:16:32 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:16:32.845 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:16:33 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:16:33.488 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=27, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:ec:b3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '32:72:e7:89:e0:7d'}, ipsec=False) old=SB_Global(nb_cfg=26) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 02:16:33 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:16:33.489 143965 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec  6 02:16:33 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:16:33.489 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=9f96b960-b4f2-40bd-ae99-08121f5e8b78, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '27'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:16:33 np0005548731 nova_compute[232433]: 2025-12-06 07:16:33.496 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:16:33 np0005548731 ovn_controller[133927]: 2025-12-06T07:16:33Z|00234|binding|INFO|Releasing lport d5f15755-ab6a-4ce9-857e-63f6c0e19fd8 from this chassis (sb_readonly=0)
Dec  6 02:16:33 np0005548731 nova_compute[232433]: 2025-12-06 07:16:33.709 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:16:34 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:16:34 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:16:34 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:16:34.494 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:16:34 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:16:34 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:16:34 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:16:34.847 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:16:36 np0005548731 nova_compute[232433]: 2025-12-06 07:16:36.189 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:16:36 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:16:36 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:16:36 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:16:36.496 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:16:36 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:16:36 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:16:36 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:16:36.851 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:16:36 np0005548731 nova_compute[232433]: 2025-12-06 07:16:36.862 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:16:37 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e252 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:16:38 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:16:38 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:16:38 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:16:38.499 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:16:38 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:16:38 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:16:38 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:16:38.854 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:16:40 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:16:40 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:16:40 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:16:40.500 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:16:40 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:16:40 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:16:40 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:16:40.856 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:16:41 np0005548731 nova_compute[232433]: 2025-12-06 07:16:41.192 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:16:41 np0005548731 nova_compute[232433]: 2025-12-06 07:16:41.864 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:16:42 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e252 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:16:42 np0005548731 nova_compute[232433]: 2025-12-06 07:16:42.301 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:16:42 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:16:42 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:16:42 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:16:42.502 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:16:42 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:16:42 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:16:42 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:16:42.858 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:16:44 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:16:44 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:16:44 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:16:44.504 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:16:44 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:16:44 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:16:44 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:16:44.861 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:16:46 np0005548731 nova_compute[232433]: 2025-12-06 07:16:46.026 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:16:46 np0005548731 nova_compute[232433]: 2025-12-06 07:16:46.194 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:16:46 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:16:46 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:16:46 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:16:46.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:16:46 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:16:46 np0005548731 nova_compute[232433]: 2025-12-06 07:16:46.866 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:16:46 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:16:46 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:16:46.865 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:16:47 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e252 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:16:48 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:16:48 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:16:48 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:16:48.508 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:16:48 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:16:48 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:16:48 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:16:48.868 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:16:50 np0005548731 nova_compute[232433]: 2025-12-06 07:16:50.101 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:16:50 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:16:50 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:16:50 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:16:50.510 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:16:50 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:16:50 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:16:50 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:16:50.871 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:16:51 np0005548731 nova_compute[232433]: 2025-12-06 07:16:51.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:16:51 np0005548731 nova_compute[232433]: 2025-12-06 07:16:51.196 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:16:51 np0005548731 nova_compute[232433]: 2025-12-06 07:16:51.868 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:16:52 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e252 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:16:52 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:16:52 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:16:52 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:16:52.512 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:16:52 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:16:52 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:16:52 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:16:52.874 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:16:53 np0005548731 nova_compute[232433]: 2025-12-06 07:16:53.105 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:16:53 np0005548731 nova_compute[232433]: 2025-12-06 07:16:53.105 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  6 02:16:53 np0005548731 nova_compute[232433]: 2025-12-06 07:16:53.106 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  6 02:16:53 np0005548731 nova_compute[232433]: 2025-12-06 07:16:53.567 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Acquiring lock "refresh_cache-c8403a0c-2fe6-48fe-91af-ec5aca71e12d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 02:16:53 np0005548731 nova_compute[232433]: 2025-12-06 07:16:53.568 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Acquired lock "refresh_cache-c8403a0c-2fe6-48fe-91af-ec5aca71e12d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 02:16:53 np0005548731 nova_compute[232433]: 2025-12-06 07:16:53.568 232437 DEBUG nova.network.neutron [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] [instance: c8403a0c-2fe6-48fe-91af-ec5aca71e12d] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Dec  6 02:16:53 np0005548731 nova_compute[232433]: 2025-12-06 07:16:53.568 232437 DEBUG nova.objects.instance [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lazy-loading 'info_cache' on Instance uuid c8403a0c-2fe6-48fe-91af-ec5aca71e12d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:16:53 np0005548731 podman[262722]: 2025-12-06 07:16:53.897130795 +0000 UTC m=+0.056245887 container health_status 26c2ce9bdb83c6db5ccad7f5b2a7cb4e9354ab0235ad2f942ea42c8feda2142b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, managed_by=edpm_ansible, tcib_managed=true)
Dec  6 02:16:53 np0005548731 podman[262724]: 2025-12-06 07:16:53.90264278 +0000 UTC m=+0.056176145 container health_status ae4fd08cdec31d6ae2a6b91ffff7deb6ca5a8375cb52ee4b685cc6f25796824e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3)
Dec  6 02:16:53 np0005548731 podman[262723]: 2025-12-06 07:16:53.949311592 +0000 UTC m=+0.105792999 container health_status a118c3becf56cfbcae208065771ebf10a65162bb7b96389971293e534823fb78 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Dec  6 02:16:54 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:16:54 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:16:54 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:16:54.514 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:16:54 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:16:54 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:16:54 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:16:54.877 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:16:55 np0005548731 nova_compute[232433]: 2025-12-06 07:16:55.929 232437 DEBUG nova.network.neutron [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] [instance: c8403a0c-2fe6-48fe-91af-ec5aca71e12d] Updating instance_info_cache with network_info: [{"id": "a599f1a0-5413-4dc9-9ae4-d7ba512d761c", "address": "fa:16:3e:9b:0b:0a", "network": {"id": "4d599401-3772-4e38-8cd2-d774d370af64", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-809610913-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.185", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "929e2be1488d4b80b7ad8946093a6abe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa599f1a0-54", "ovs_interfaceid": "a599f1a0-5413-4dc9-9ae4-d7ba512d761c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 02:16:55 np0005548731 nova_compute[232433]: 2025-12-06 07:16:55.958 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Releasing lock "refresh_cache-c8403a0c-2fe6-48fe-91af-ec5aca71e12d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 02:16:55 np0005548731 nova_compute[232433]: 2025-12-06 07:16:55.958 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] [instance: c8403a0c-2fe6-48fe-91af-ec5aca71e12d] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Dec  6 02:16:55 np0005548731 nova_compute[232433]: 2025-12-06 07:16:55.958 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:16:56 np0005548731 nova_compute[232433]: 2025-12-06 07:16:56.198 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:16:56 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:16:56 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:16:56 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:16:56.516 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:16:56 np0005548731 nova_compute[232433]: 2025-12-06 07:16:56.870 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:16:56 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:16:56 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:16:56 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:16:56.880 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:16:57 np0005548731 ovn_controller[133927]: 2025-12-06T07:16:57Z|00235|binding|INFO|Releasing lport d5f15755-ab6a-4ce9-857e-63f6c0e19fd8 from this chassis (sb_readonly=0)
Dec  6 02:16:57 np0005548731 nova_compute[232433]: 2025-12-06 07:16:57.070 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:16:57 np0005548731 nova_compute[232433]: 2025-12-06 07:16:57.103 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:16:57 np0005548731 nova_compute[232433]: 2025-12-06 07:16:57.104 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  6 02:16:57 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e252 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:16:58 np0005548731 nova_compute[232433]: 2025-12-06 07:16:58.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:16:58 np0005548731 nova_compute[232433]: 2025-12-06 07:16:58.151 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:16:58 np0005548731 nova_compute[232433]: 2025-12-06 07:16:58.151 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:16:58 np0005548731 nova_compute[232433]: 2025-12-06 07:16:58.152 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:16:58 np0005548731 nova_compute[232433]: 2025-12-06 07:16:58.152 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  6 02:16:58 np0005548731 nova_compute[232433]: 2025-12-06 07:16:58.152 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:16:58 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:16:58 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:16:58 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:16:58.518 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:16:58 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 02:16:58 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/857765210' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 02:16:58 np0005548731 nova_compute[232433]: 2025-12-06 07:16:58.581 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.429s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:16:58 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:16:58 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:16:58 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:16:58.883 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:16:59 np0005548731 nova_compute[232433]: 2025-12-06 07:16:59.520 232437 DEBUG nova.virt.libvirt.driver [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] skipping disk for instance-00000044 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  6 02:16:59 np0005548731 nova_compute[232433]: 2025-12-06 07:16:59.521 232437 DEBUG nova.virt.libvirt.driver [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] skipping disk for instance-00000044 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  6 02:16:59 np0005548731 nova_compute[232433]: 2025-12-06 07:16:59.669 232437 WARNING nova.virt.libvirt.driver [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  6 02:16:59 np0005548731 nova_compute[232433]: 2025-12-06 07:16:59.670 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4458MB free_disk=20.921871185302734GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  6 02:16:59 np0005548731 nova_compute[232433]: 2025-12-06 07:16:59.671 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:16:59 np0005548731 nova_compute[232433]: 2025-12-06 07:16:59.671 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:16:59 np0005548731 nova_compute[232433]: 2025-12-06 07:16:59.800 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Instance c8403a0c-2fe6-48fe-91af-ec5aca71e12d actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  6 02:16:59 np0005548731 nova_compute[232433]: 2025-12-06 07:16:59.801 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  6 02:16:59 np0005548731 nova_compute[232433]: 2025-12-06 07:16:59.801 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  6 02:16:59 np0005548731 nova_compute[232433]: 2025-12-06 07:16:59.889 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:17:00 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 02:17:00 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1613157721' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 02:17:00 np0005548731 nova_compute[232433]: 2025-12-06 07:17:00.331 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.442s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:17:00 np0005548731 nova_compute[232433]: 2025-12-06 07:17:00.337 232437 DEBUG nova.compute.provider_tree [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Inventory has not changed in ProviderTree for provider: 6d00757a-082f-486d-ae84-869a2ba2e6e7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  6 02:17:00 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:17:00 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:17:00 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:17:00.522 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:17:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:17:00.859 143965 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:17:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:17:00.860 143965 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:17:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:17:00.860 143965 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:17:00 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:17:00 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:17:00 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:17:00.887 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:17:01 np0005548731 nova_compute[232433]: 2025-12-06 07:17:01.200 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:17:01 np0005548731 nova_compute[232433]: 2025-12-06 07:17:01.339 232437 DEBUG nova.scheduler.client.report [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Inventory has not changed for provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  6 02:17:01 np0005548731 nova_compute[232433]: 2025-12-06 07:17:01.340 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  6 02:17:01 np0005548731 nova_compute[232433]: 2025-12-06 07:17:01.340 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.669s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:17:01 np0005548731 nova_compute[232433]: 2025-12-06 07:17:01.872 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:17:02 np0005548731 nova_compute[232433]: 2025-12-06 07:17:02.014 232437 DEBUG oslo_concurrency.lockutils [None req-1b05c8c5-02cf-4837-8e67-324222e26c9b 197a9b0ee1db487d82542eb31e84f33e 8f938a037b8141cf9408cbf6f5cd081d - - default default] Acquiring lock "067b423b-4ac2-4ca9-9000-44b6fb2af34e" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:17:02 np0005548731 nova_compute[232433]: 2025-12-06 07:17:02.015 232437 DEBUG oslo_concurrency.lockutils [None req-1b05c8c5-02cf-4837-8e67-324222e26c9b 197a9b0ee1db487d82542eb31e84f33e 8f938a037b8141cf9408cbf6f5cd081d - - default default] Lock "067b423b-4ac2-4ca9-9000-44b6fb2af34e" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:17:02 np0005548731 nova_compute[232433]: 2025-12-06 07:17:02.058 232437 DEBUG nova.compute.manager [None req-1b05c8c5-02cf-4837-8e67-324222e26c9b 197a9b0ee1db487d82542eb31e84f33e 8f938a037b8141cf9408cbf6f5cd081d - - default default] [instance: 067b423b-4ac2-4ca9-9000-44b6fb2af34e] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Dec  6 02:17:02 np0005548731 nova_compute[232433]: 2025-12-06 07:17:02.179 232437 DEBUG oslo_concurrency.lockutils [None req-1b05c8c5-02cf-4837-8e67-324222e26c9b 197a9b0ee1db487d82542eb31e84f33e 8f938a037b8141cf9408cbf6f5cd081d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:17:02 np0005548731 nova_compute[232433]: 2025-12-06 07:17:02.179 232437 DEBUG oslo_concurrency.lockutils [None req-1b05c8c5-02cf-4837-8e67-324222e26c9b 197a9b0ee1db487d82542eb31e84f33e 8f938a037b8141cf9408cbf6f5cd081d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:17:02 np0005548731 nova_compute[232433]: 2025-12-06 07:17:02.185 232437 DEBUG nova.virt.hardware [None req-1b05c8c5-02cf-4837-8e67-324222e26c9b 197a9b0ee1db487d82542eb31e84f33e 8f938a037b8141cf9408cbf6f5cd081d - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Dec  6 02:17:02 np0005548731 nova_compute[232433]: 2025-12-06 07:17:02.186 232437 INFO nova.compute.claims [None req-1b05c8c5-02cf-4837-8e67-324222e26c9b 197a9b0ee1db487d82542eb31e84f33e 8f938a037b8141cf9408cbf6f5cd081d - - default default] [instance: 067b423b-4ac2-4ca9-9000-44b6fb2af34e] Claim successful on node compute-2.ctlplane.example.com#033[00m
Dec  6 02:17:02 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e252 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:17:02 np0005548731 nova_compute[232433]: 2025-12-06 07:17:02.325 232437 DEBUG oslo_concurrency.processutils [None req-1b05c8c5-02cf-4837-8e67-324222e26c9b 197a9b0ee1db487d82542eb31e84f33e 8f938a037b8141cf9408cbf6f5cd081d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:17:02 np0005548731 nova_compute[232433]: 2025-12-06 07:17:02.352 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:17:02 np0005548731 nova_compute[232433]: 2025-12-06 07:17:02.354 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:17:02 np0005548731 nova_compute[232433]: 2025-12-06 07:17:02.354 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:17:02 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:17:02 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:17:02 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:17:02.524 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:17:02 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 02:17:02 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3941074034' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 02:17:02 np0005548731 nova_compute[232433]: 2025-12-06 07:17:02.758 232437 DEBUG oslo_concurrency.processutils [None req-1b05c8c5-02cf-4837-8e67-324222e26c9b 197a9b0ee1db487d82542eb31e84f33e 8f938a037b8141cf9408cbf6f5cd081d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.433s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:17:02 np0005548731 nova_compute[232433]: 2025-12-06 07:17:02.765 232437 DEBUG nova.compute.provider_tree [None req-1b05c8c5-02cf-4837-8e67-324222e26c9b 197a9b0ee1db487d82542eb31e84f33e 8f938a037b8141cf9408cbf6f5cd081d - - default default] Inventory has not changed in ProviderTree for provider: 6d00757a-082f-486d-ae84-869a2ba2e6e7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  6 02:17:02 np0005548731 nova_compute[232433]: 2025-12-06 07:17:02.789 232437 DEBUG nova.scheduler.client.report [None req-1b05c8c5-02cf-4837-8e67-324222e26c9b 197a9b0ee1db487d82542eb31e84f33e 8f938a037b8141cf9408cbf6f5cd081d - - default default] Inventory has not changed for provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  6 02:17:02 np0005548731 nova_compute[232433]: 2025-12-06 07:17:02.816 232437 DEBUG oslo_concurrency.lockutils [None req-1b05c8c5-02cf-4837-8e67-324222e26c9b 197a9b0ee1db487d82542eb31e84f33e 8f938a037b8141cf9408cbf6f5cd081d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.637s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:17:02 np0005548731 nova_compute[232433]: 2025-12-06 07:17:02.817 232437 DEBUG nova.compute.manager [None req-1b05c8c5-02cf-4837-8e67-324222e26c9b 197a9b0ee1db487d82542eb31e84f33e 8f938a037b8141cf9408cbf6f5cd081d - - default default] [instance: 067b423b-4ac2-4ca9-9000-44b6fb2af34e] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Dec  6 02:17:02 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:17:02 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:17:02 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:17:02.891 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:17:02 np0005548731 nova_compute[232433]: 2025-12-06 07:17:02.910 232437 DEBUG nova.compute.manager [None req-1b05c8c5-02cf-4837-8e67-324222e26c9b 197a9b0ee1db487d82542eb31e84f33e 8f938a037b8141cf9408cbf6f5cd081d - - default default] [instance: 067b423b-4ac2-4ca9-9000-44b6fb2af34e] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Dec  6 02:17:02 np0005548731 nova_compute[232433]: 2025-12-06 07:17:02.911 232437 DEBUG nova.network.neutron [None req-1b05c8c5-02cf-4837-8e67-324222e26c9b 197a9b0ee1db487d82542eb31e84f33e 8f938a037b8141cf9408cbf6f5cd081d - - default default] [instance: 067b423b-4ac2-4ca9-9000-44b6fb2af34e] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Dec  6 02:17:02 np0005548731 nova_compute[232433]: 2025-12-06 07:17:02.946 232437 INFO nova.virt.libvirt.driver [None req-1b05c8c5-02cf-4837-8e67-324222e26c9b 197a9b0ee1db487d82542eb31e84f33e 8f938a037b8141cf9408cbf6f5cd081d - - default default] [instance: 067b423b-4ac2-4ca9-9000-44b6fb2af34e] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Dec  6 02:17:03 np0005548731 nova_compute[232433]: 2025-12-06 07:17:03.061 232437 DEBUG nova.compute.manager [None req-1b05c8c5-02cf-4837-8e67-324222e26c9b 197a9b0ee1db487d82542eb31e84f33e 8f938a037b8141cf9408cbf6f5cd081d - - default default] [instance: 067b423b-4ac2-4ca9-9000-44b6fb2af34e] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Dec  6 02:17:03 np0005548731 nova_compute[232433]: 2025-12-06 07:17:03.101 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:17:03 np0005548731 nova_compute[232433]: 2025-12-06 07:17:03.291 232437 INFO nova.virt.block_device [None req-1b05c8c5-02cf-4837-8e67-324222e26c9b 197a9b0ee1db487d82542eb31e84f33e 8f938a037b8141cf9408cbf6f5cd081d - - default default] [instance: 067b423b-4ac2-4ca9-9000-44b6fb2af34e] Booting with volume 0a534f34-78f8-4439-88a1-9db3d9b7638b at /dev/vda#033[00m
Dec  6 02:17:03 np0005548731 nova_compute[232433]: 2025-12-06 07:17:03.631 232437 DEBUG os_brick.utils [None req-1b05c8c5-02cf-4837-8e67-324222e26c9b 197a9b0ee1db487d82542eb31e84f33e 8f938a037b8141cf9408cbf6f5cd081d - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.102', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-2.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Dec  6 02:17:03 np0005548731 nova_compute[232433]: 2025-12-06 07:17:03.633 237736 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:17:03 np0005548731 nova_compute[232433]: 2025-12-06 07:17:03.645 237736 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.013s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:17:03 np0005548731 nova_compute[232433]: 2025-12-06 07:17:03.646 237736 DEBUG oslo.privsep.daemon [-] privsep: reply[12d8d7d6-5e71-4e1f-8785-9ebab0a87bc2]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:17:03 np0005548731 nova_compute[232433]: 2025-12-06 07:17:03.648 237736 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:17:03 np0005548731 nova_compute[232433]: 2025-12-06 07:17:03.656 237736 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:17:03 np0005548731 nova_compute[232433]: 2025-12-06 07:17:03.656 237736 DEBUG oslo.privsep.daemon [-] privsep: reply[f7064a3f-8f41-47d5-85b6-d80b23bd0e8d]: (4, ('InitiatorName=iqn.1994-05.com.redhat:63778d5959f0', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:17:03 np0005548731 nova_compute[232433]: 2025-12-06 07:17:03.658 237736 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:17:03 np0005548731 nova_compute[232433]: 2025-12-06 07:17:03.665 237736 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.007s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:17:03 np0005548731 nova_compute[232433]: 2025-12-06 07:17:03.666 237736 DEBUG oslo.privsep.daemon [-] privsep: reply[661aabd4-520e-4bcc-967a-74677f48ae97]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:17:03 np0005548731 nova_compute[232433]: 2025-12-06 07:17:03.667 237736 DEBUG oslo.privsep.daemon [-] privsep: reply[8c944691-2bc4-4316-a95d-4bdde74aca73]: (4, 'ec2492e2-e0d1-43d3-b74f-744a8272a0e6') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:17:03 np0005548731 nova_compute[232433]: 2025-12-06 07:17:03.667 232437 DEBUG oslo_concurrency.processutils [None req-1b05c8c5-02cf-4837-8e67-324222e26c9b 197a9b0ee1db487d82542eb31e84f33e 8f938a037b8141cf9408cbf6f5cd081d - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:17:03 np0005548731 nova_compute[232433]: 2025-12-06 07:17:03.692 232437 DEBUG oslo_concurrency.processutils [None req-1b05c8c5-02cf-4837-8e67-324222e26c9b 197a9b0ee1db487d82542eb31e84f33e 8f938a037b8141cf9408cbf6f5cd081d - - default default] CMD "nvme version" returned: 0 in 0.025s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:17:03 np0005548731 nova_compute[232433]: 2025-12-06 07:17:03.697 232437 DEBUG os_brick.initiator.connectors.lightos [None req-1b05c8c5-02cf-4837-8e67-324222e26c9b 197a9b0ee1db487d82542eb31e84f33e 8f938a037b8141cf9408cbf6f5cd081d - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Dec  6 02:17:03 np0005548731 nova_compute[232433]: 2025-12-06 07:17:03.698 232437 DEBUG os_brick.initiator.connectors.lightos [None req-1b05c8c5-02cf-4837-8e67-324222e26c9b 197a9b0ee1db487d82542eb31e84f33e 8f938a037b8141cf9408cbf6f5cd081d - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Dec  6 02:17:03 np0005548731 nova_compute[232433]: 2025-12-06 07:17:03.698 232437 DEBUG os_brick.initiator.connectors.lightos [None req-1b05c8c5-02cf-4837-8e67-324222e26c9b 197a9b0ee1db487d82542eb31e84f33e 8f938a037b8141cf9408cbf6f5cd081d - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:bf3e0a14-a5f8-4123-aa26-e7cad37b879a dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Dec  6 02:17:03 np0005548731 nova_compute[232433]: 2025-12-06 07:17:03.699 232437 DEBUG os_brick.utils [None req-1b05c8c5-02cf-4837-8e67-324222e26c9b 197a9b0ee1db487d82542eb31e84f33e 8f938a037b8141cf9408cbf6f5cd081d - - default default] <== get_connector_properties: return (67ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.102', 'host': 'compute-2.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:63778d5959f0', 'do_local_attach': False, 'nvme_hostid': 'bf3e0a14-a5f8-4123-aa26-e7cad37b879a', 'system uuid': 'ec2492e2-e0d1-43d3-b74f-744a8272a0e6', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:bf3e0a14-a5f8-4123-aa26-e7cad37b879a', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Dec  6 02:17:03 np0005548731 nova_compute[232433]: 2025-12-06 07:17:03.699 232437 DEBUG nova.virt.block_device [None req-1b05c8c5-02cf-4837-8e67-324222e26c9b 197a9b0ee1db487d82542eb31e84f33e 8f938a037b8141cf9408cbf6f5cd081d - - default default] [instance: 067b423b-4ac2-4ca9-9000-44b6fb2af34e] Updating existing volume attachment record: f4de751c-899f-475e-ac3a-d68ed0f1ae07 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Dec  6 02:17:04 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:17:04 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:17:04 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:17:04.526 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:17:04 np0005548731 nova_compute[232433]: 2025-12-06 07:17:04.858 232437 DEBUG nova.policy [None req-1b05c8c5-02cf-4837-8e67-324222e26c9b 197a9b0ee1db487d82542eb31e84f33e 8f938a037b8141cf9408cbf6f5cd081d - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '197a9b0ee1db487d82542eb31e84f33e', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '8f938a037b8141cf9408cbf6f5cd081d', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Dec  6 02:17:04 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:17:04 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:17:04 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:17:04.893 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:17:06 np0005548731 nova_compute[232433]: 2025-12-06 07:17:06.203 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:17:06 np0005548731 nova_compute[232433]: 2025-12-06 07:17:06.377 232437 DEBUG nova.network.neutron [None req-1b05c8c5-02cf-4837-8e67-324222e26c9b 197a9b0ee1db487d82542eb31e84f33e 8f938a037b8141cf9408cbf6f5cd081d - - default default] [instance: 067b423b-4ac2-4ca9-9000-44b6fb2af34e] Successfully created port: 5408643b-7986-461d-867d-9f6545eeabb3 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Dec  6 02:17:06 np0005548731 nova_compute[232433]: 2025-12-06 07:17:06.492 232437 INFO nova.virt.block_device [None req-1b05c8c5-02cf-4837-8e67-324222e26c9b 197a9b0ee1db487d82542eb31e84f33e 8f938a037b8141cf9408cbf6f5cd081d - - default default] [instance: 067b423b-4ac2-4ca9-9000-44b6fb2af34e] Booting with volume 4f7ebde6-5ba7-48cd-9d0a-8cec3b7dceb3 at /dev/vdb#033[00m
Dec  6 02:17:06 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:17:06 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:17:06 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:17:06.529 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:17:06 np0005548731 nova_compute[232433]: 2025-12-06 07:17:06.699 232437 DEBUG os_brick.utils [None req-1b05c8c5-02cf-4837-8e67-324222e26c9b 197a9b0ee1db487d82542eb31e84f33e 8f938a037b8141cf9408cbf6f5cd081d - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.102', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-2.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Dec  6 02:17:06 np0005548731 nova_compute[232433]: 2025-12-06 07:17:06.701 237736 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  6 02:17:06 np0005548731 nova_compute[232433]: 2025-12-06 07:17:06.727 237736 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.026s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  6 02:17:06 np0005548731 nova_compute[232433]: 2025-12-06 07:17:06.727 237736 DEBUG oslo.privsep.daemon [-] privsep: reply[16a0b029-bbc0-4785-8d9f-bd627d0c299e]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  6 02:17:06 np0005548731 nova_compute[232433]: 2025-12-06 07:17:06.729 237736 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  6 02:17:06 np0005548731 nova_compute[232433]: 2025-12-06 07:17:06.736 237736 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.007s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  6 02:17:06 np0005548731 nova_compute[232433]: 2025-12-06 07:17:06.737 237736 DEBUG oslo.privsep.daemon [-] privsep: reply[ea01f3a0-2009-4512-8fc1-8f1f79eb72fc]: (4, ('InitiatorName=iqn.1994-05.com.redhat:63778d5959f0', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  6 02:17:06 np0005548731 nova_compute[232433]: 2025-12-06 07:17:06.739 237736 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  6 02:17:06 np0005548731 nova_compute[232433]: 2025-12-06 07:17:06.750 237736 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.011s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  6 02:17:06 np0005548731 nova_compute[232433]: 2025-12-06 07:17:06.750 237736 DEBUG oslo.privsep.daemon [-] privsep: reply[9fceecfd-709a-4e36-bd96-22b5f6eb88e8]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  6 02:17:06 np0005548731 nova_compute[232433]: 2025-12-06 07:17:06.754 237736 DEBUG oslo.privsep.daemon [-] privsep: reply[7e7baad9-51f2-406a-bbab-e5cbf75ab27c]: (4, 'ec2492e2-e0d1-43d3-b74f-744a8272a0e6') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  6 02:17:06 np0005548731 nova_compute[232433]: 2025-12-06 07:17:06.755 232437 DEBUG oslo_concurrency.processutils [None req-1b05c8c5-02cf-4837-8e67-324222e26c9b 197a9b0ee1db487d82542eb31e84f33e 8f938a037b8141cf9408cbf6f5cd081d - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  6 02:17:06 np0005548731 nova_compute[232433]: 2025-12-06 07:17:06.775 232437 DEBUG oslo_concurrency.processutils [None req-1b05c8c5-02cf-4837-8e67-324222e26c9b 197a9b0ee1db487d82542eb31e84f33e 8f938a037b8141cf9408cbf6f5cd081d - - default default] CMD "nvme version" returned: 0 in 0.020s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  6 02:17:06 np0005548731 nova_compute[232433]: 2025-12-06 07:17:06.777 232437 DEBUG os_brick.initiator.connectors.lightos [None req-1b05c8c5-02cf-4837-8e67-324222e26c9b 197a9b0ee1db487d82542eb31e84f33e 8f938a037b8141cf9408cbf6f5cd081d - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Dec  6 02:17:06 np0005548731 nova_compute[232433]: 2025-12-06 07:17:06.777 232437 DEBUG os_brick.initiator.connectors.lightos [None req-1b05c8c5-02cf-4837-8e67-324222e26c9b 197a9b0ee1db487d82542eb31e84f33e 8f938a037b8141cf9408cbf6f5cd081d - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Dec  6 02:17:06 np0005548731 nova_compute[232433]: 2025-12-06 07:17:06.777 232437 DEBUG os_brick.initiator.connectors.lightos [None req-1b05c8c5-02cf-4837-8e67-324222e26c9b 197a9b0ee1db487d82542eb31e84f33e 8f938a037b8141cf9408cbf6f5cd081d - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:bf3e0a14-a5f8-4123-aa26-e7cad37b879a dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Dec  6 02:17:06 np0005548731 nova_compute[232433]: 2025-12-06 07:17:06.778 232437 DEBUG os_brick.utils [None req-1b05c8c5-02cf-4837-8e67-324222e26c9b 197a9b0ee1db487d82542eb31e84f33e 8f938a037b8141cf9408cbf6f5cd081d - - default default] <== get_connector_properties: return (77ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.102', 'host': 'compute-2.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:63778d5959f0', 'do_local_attach': False, 'nvme_hostid': 'bf3e0a14-a5f8-4123-aa26-e7cad37b879a', 'system uuid': 'ec2492e2-e0d1-43d3-b74f-744a8272a0e6', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:bf3e0a14-a5f8-4123-aa26-e7cad37b879a', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Dec  6 02:17:06 np0005548731 nova_compute[232433]: 2025-12-06 07:17:06.778 232437 DEBUG nova.virt.block_device [None req-1b05c8c5-02cf-4837-8e67-324222e26c9b 197a9b0ee1db487d82542eb31e84f33e 8f938a037b8141cf9408cbf6f5cd081d - - default default] [instance: 067b423b-4ac2-4ca9-9000-44b6fb2af34e] Updating existing volume attachment record: af25083f-54c8-43c2-9707-73274e5a5dea _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Dec  6 02:17:06 np0005548731 nova_compute[232433]: 2025-12-06 07:17:06.874 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 02:17:06 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:17:06 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:17:06 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:17:06.897 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:17:07 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e252 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:17:07 np0005548731 nova_compute[232433]: 2025-12-06 07:17:07.257 232437 DEBUG nova.network.neutron [None req-1b05c8c5-02cf-4837-8e67-324222e26c9b 197a9b0ee1db487d82542eb31e84f33e 8f938a037b8141cf9408cbf6f5cd081d - - default default] [instance: 067b423b-4ac2-4ca9-9000-44b6fb2af34e] Successfully created port: e27e5b98-b454-4f5a-90a9-3f7a5c8f1a91 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec  6 02:17:08 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:17:08 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:17:08 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:17:08.531 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:17:08 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:17:08 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:17:08 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:17:08.899 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:17:09 np0005548731 nova_compute[232433]: 2025-12-06 07:17:09.917 232437 INFO nova.virt.block_device [None req-1b05c8c5-02cf-4837-8e67-324222e26c9b 197a9b0ee1db487d82542eb31e84f33e 8f938a037b8141cf9408cbf6f5cd081d - - default default] [instance: 067b423b-4ac2-4ca9-9000-44b6fb2af34e] Booting with volume d3c6383c-41f5-49ad-8585-fc49088a4d05 at /dev/vdc
Dec  6 02:17:10 np0005548731 nova_compute[232433]: 2025-12-06 07:17:10.146 232437 DEBUG os_brick.utils [None req-1b05c8c5-02cf-4837-8e67-324222e26c9b 197a9b0ee1db487d82542eb31e84f33e 8f938a037b8141cf9408cbf6f5cd081d - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.102', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-2.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Dec  6 02:17:10 np0005548731 nova_compute[232433]: 2025-12-06 07:17:10.147 237736 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  6 02:17:10 np0005548731 nova_compute[232433]: 2025-12-06 07:17:10.157 237736 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.010s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  6 02:17:10 np0005548731 nova_compute[232433]: 2025-12-06 07:17:10.158 237736 DEBUG oslo.privsep.daemon [-] privsep: reply[d7726373-e828-4627-894c-4e06a236cf17]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  6 02:17:10 np0005548731 nova_compute[232433]: 2025-12-06 07:17:10.159 237736 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  6 02:17:10 np0005548731 nova_compute[232433]: 2025-12-06 07:17:10.163 232437 DEBUG nova.network.neutron [None req-1b05c8c5-02cf-4837-8e67-324222e26c9b 197a9b0ee1db487d82542eb31e84f33e 8f938a037b8141cf9408cbf6f5cd081d - - default default] [instance: 067b423b-4ac2-4ca9-9000-44b6fb2af34e] Successfully created port: 5984d840-e1cd-45dc-87cb-a337e73a753c _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec  6 02:17:10 np0005548731 nova_compute[232433]: 2025-12-06 07:17:10.167 237736 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.007s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  6 02:17:10 np0005548731 nova_compute[232433]: 2025-12-06 07:17:10.167 237736 DEBUG oslo.privsep.daemon [-] privsep: reply[86797aa8-1554-4749-9ffc-5260ee7ca9aa]: (4, ('InitiatorName=iqn.1994-05.com.redhat:63778d5959f0', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  6 02:17:10 np0005548731 nova_compute[232433]: 2025-12-06 07:17:10.168 237736 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  6 02:17:10 np0005548731 nova_compute[232433]: 2025-12-06 07:17:10.175 237736 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.007s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  6 02:17:10 np0005548731 nova_compute[232433]: 2025-12-06 07:17:10.176 237736 DEBUG oslo.privsep.daemon [-] privsep: reply[f37d622b-2d37-4ac7-a96d-ef4b942da292]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  6 02:17:10 np0005548731 nova_compute[232433]: 2025-12-06 07:17:10.177 237736 DEBUG oslo.privsep.daemon [-] privsep: reply[821e7a08-0e52-4db3-8951-495ac4bb6654]: (4, 'ec2492e2-e0d1-43d3-b74f-744a8272a0e6') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  6 02:17:10 np0005548731 nova_compute[232433]: 2025-12-06 07:17:10.177 232437 DEBUG oslo_concurrency.processutils [None req-1b05c8c5-02cf-4837-8e67-324222e26c9b 197a9b0ee1db487d82542eb31e84f33e 8f938a037b8141cf9408cbf6f5cd081d - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  6 02:17:10 np0005548731 nova_compute[232433]: 2025-12-06 07:17:10.201 232437 DEBUG oslo_concurrency.processutils [None req-1b05c8c5-02cf-4837-8e67-324222e26c9b 197a9b0ee1db487d82542eb31e84f33e 8f938a037b8141cf9408cbf6f5cd081d - - default default] CMD "nvme version" returned: 0 in 0.024s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  6 02:17:10 np0005548731 nova_compute[232433]: 2025-12-06 07:17:10.203 232437 DEBUG os_brick.initiator.connectors.lightos [None req-1b05c8c5-02cf-4837-8e67-324222e26c9b 197a9b0ee1db487d82542eb31e84f33e 8f938a037b8141cf9408cbf6f5cd081d - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Dec  6 02:17:10 np0005548731 nova_compute[232433]: 2025-12-06 07:17:10.204 232437 DEBUG os_brick.initiator.connectors.lightos [None req-1b05c8c5-02cf-4837-8e67-324222e26c9b 197a9b0ee1db487d82542eb31e84f33e 8f938a037b8141cf9408cbf6f5cd081d - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Dec  6 02:17:10 np0005548731 nova_compute[232433]: 2025-12-06 07:17:10.204 232437 DEBUG os_brick.initiator.connectors.lightos [None req-1b05c8c5-02cf-4837-8e67-324222e26c9b 197a9b0ee1db487d82542eb31e84f33e 8f938a037b8141cf9408cbf6f5cd081d - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:bf3e0a14-a5f8-4123-aa26-e7cad37b879a dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Dec  6 02:17:10 np0005548731 nova_compute[232433]: 2025-12-06 07:17:10.204 232437 DEBUG os_brick.utils [None req-1b05c8c5-02cf-4837-8e67-324222e26c9b 197a9b0ee1db487d82542eb31e84f33e 8f938a037b8141cf9408cbf6f5cd081d - - default default] <== get_connector_properties: return (58ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.102', 'host': 'compute-2.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:63778d5959f0', 'do_local_attach': False, 'nvme_hostid': 'bf3e0a14-a5f8-4123-aa26-e7cad37b879a', 'system uuid': 'ec2492e2-e0d1-43d3-b74f-744a8272a0e6', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:bf3e0a14-a5f8-4123-aa26-e7cad37b879a', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Dec  6 02:17:10 np0005548731 nova_compute[232433]: 2025-12-06 07:17:10.205 232437 DEBUG nova.virt.block_device [None req-1b05c8c5-02cf-4837-8e67-324222e26c9b 197a9b0ee1db487d82542eb31e84f33e 8f938a037b8141cf9408cbf6f5cd081d - - default default] [instance: 067b423b-4ac2-4ca9-9000-44b6fb2af34e] Updating existing volume attachment record: 8db5ccc5-1962-4524-9c53-b93313669576 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Dec  6 02:17:10 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:17:10 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:17:10 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:17:10.533 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:17:10 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:17:10 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:17:10 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:17:10.903 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:17:11 np0005548731 nova_compute[232433]: 2025-12-06 07:17:11.205 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 02:17:11 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  6 02:17:11 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2290600617' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  6 02:17:11 np0005548731 nova_compute[232433]: 2025-12-06 07:17:11.674 232437 DEBUG nova.compute.manager [None req-1b05c8c5-02cf-4837-8e67-324222e26c9b 197a9b0ee1db487d82542eb31e84f33e 8f938a037b8141cf9408cbf6f5cd081d - - default default] [instance: 067b423b-4ac2-4ca9-9000-44b6fb2af34e] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec  6 02:17:11 np0005548731 nova_compute[232433]: 2025-12-06 07:17:11.676 232437 DEBUG nova.virt.libvirt.driver [None req-1b05c8c5-02cf-4837-8e67-324222e26c9b 197a9b0ee1db487d82542eb31e84f33e 8f938a037b8141cf9408cbf6f5cd081d - - default default] [instance: 067b423b-4ac2-4ca9-9000-44b6fb2af34e] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec  6 02:17:11 np0005548731 nova_compute[232433]: 2025-12-06 07:17:11.677 232437 INFO nova.virt.libvirt.driver [None req-1b05c8c5-02cf-4837-8e67-324222e26c9b 197a9b0ee1db487d82542eb31e84f33e 8f938a037b8141cf9408cbf6f5cd081d - - default default] [instance: 067b423b-4ac2-4ca9-9000-44b6fb2af34e] Creating image(s)
Dec  6 02:17:11 np0005548731 nova_compute[232433]: 2025-12-06 07:17:11.677 232437 DEBUG nova.virt.libvirt.driver [None req-1b05c8c5-02cf-4837-8e67-324222e26c9b 197a9b0ee1db487d82542eb31e84f33e 8f938a037b8141cf9408cbf6f5cd081d - - default default] [instance: 067b423b-4ac2-4ca9-9000-44b6fb2af34e] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859
Dec  6 02:17:11 np0005548731 nova_compute[232433]: 2025-12-06 07:17:11.677 232437 DEBUG nova.virt.libvirt.driver [None req-1b05c8c5-02cf-4837-8e67-324222e26c9b 197a9b0ee1db487d82542eb31e84f33e 8f938a037b8141cf9408cbf6f5cd081d - - default default] [instance: 067b423b-4ac2-4ca9-9000-44b6fb2af34e] Ensure instance console log exists: /var/lib/nova/instances/067b423b-4ac2-4ca9-9000-44b6fb2af34e/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec  6 02:17:11 np0005548731 nova_compute[232433]: 2025-12-06 07:17:11.678 232437 DEBUG oslo_concurrency.lockutils [None req-1b05c8c5-02cf-4837-8e67-324222e26c9b 197a9b0ee1db487d82542eb31e84f33e 8f938a037b8141cf9408cbf6f5cd081d - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  6 02:17:11 np0005548731 nova_compute[232433]: 2025-12-06 07:17:11.678 232437 DEBUG oslo_concurrency.lockutils [None req-1b05c8c5-02cf-4837-8e67-324222e26c9b 197a9b0ee1db487d82542eb31e84f33e 8f938a037b8141cf9408cbf6f5cd081d - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  6 02:17:11 np0005548731 nova_compute[232433]: 2025-12-06 07:17:11.678 232437 DEBUG oslo_concurrency.lockutils [None req-1b05c8c5-02cf-4837-8e67-324222e26c9b 197a9b0ee1db487d82542eb31e84f33e 8f938a037b8141cf9408cbf6f5cd081d - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  6 02:17:11 np0005548731 nova_compute[232433]: 2025-12-06 07:17:11.786 232437 DEBUG nova.network.neutron [None req-1b05c8c5-02cf-4837-8e67-324222e26c9b 197a9b0ee1db487d82542eb31e84f33e 8f938a037b8141cf9408cbf6f5cd081d - - default default] [instance: 067b423b-4ac2-4ca9-9000-44b6fb2af34e] Successfully created port: 509f7f18-31bd-42cd-83f5-e9b62a4e200b _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec  6 02:17:11 np0005548731 nova_compute[232433]: 2025-12-06 07:17:11.876 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 02:17:12 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e252 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:17:12 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:17:12 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:17:12 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:17:12.535 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:17:12 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:17:12 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 02:17:12 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:17:12.905 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 02:17:13 np0005548731 nova_compute[232433]: 2025-12-06 07:17:13.751 232437 DEBUG nova.network.neutron [None req-1b05c8c5-02cf-4837-8e67-324222e26c9b 197a9b0ee1db487d82542eb31e84f33e 8f938a037b8141cf9408cbf6f5cd081d - - default default] [instance: 067b423b-4ac2-4ca9-9000-44b6fb2af34e] Successfully created port: 0355ba71-a598-4e22-a0b7-c5c10b61733e _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec  6 02:17:13 np0005548731 ceph-osd[80147]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #48. Immutable memtables: 5.
Dec  6 02:17:14 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:17:14 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.003000071s ======
Dec  6 02:17:14 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:17:14.537 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.003000071s
Dec  6 02:17:14 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:17:14 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:17:14 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:17:14.909 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:17:14 np0005548731 nova_compute[232433]: 2025-12-06 07:17:14.912 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 02:17:15 np0005548731 nova_compute[232433]: 2025-12-06 07:17:15.385 232437 DEBUG nova.network.neutron [None req-1b05c8c5-02cf-4837-8e67-324222e26c9b 197a9b0ee1db487d82542eb31e84f33e 8f938a037b8141cf9408cbf6f5cd081d - - default default] [instance: 067b423b-4ac2-4ca9-9000-44b6fb2af34e] Successfully updated port: 5408643b-7986-461d-867d-9f6545eeabb3 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec  6 02:17:15 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:17:15.448 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=28, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:ec:b3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '32:72:e7:89:e0:7d'}, ipsec=False) old=SB_Global(nb_cfg=27) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec  6 02:17:15 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:17:15.448 143965 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec  6 02:17:15 np0005548731 nova_compute[232433]: 2025-12-06 07:17:15.449 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 02:17:16 np0005548731 nova_compute[232433]: 2025-12-06 07:17:16.150 232437 DEBUG nova.compute.manager [req-c364206d-8b89-4ff7-a30e-6416a8fa8576 req-9fa8ddc2-2944-4a59-8b4d-da701f802e90 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 067b423b-4ac2-4ca9-9000-44b6fb2af34e] Received event network-changed-5408643b-7986-461d-867d-9f6545eeabb3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec  6 02:17:16 np0005548731 nova_compute[232433]: 2025-12-06 07:17:16.150 232437 DEBUG nova.compute.manager [req-c364206d-8b89-4ff7-a30e-6416a8fa8576 req-9fa8ddc2-2944-4a59-8b4d-da701f802e90 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 067b423b-4ac2-4ca9-9000-44b6fb2af34e] Refreshing instance network info cache due to event network-changed-5408643b-7986-461d-867d-9f6545eeabb3. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec  6 02:17:16 np0005548731 nova_compute[232433]: 2025-12-06 07:17:16.150 232437 DEBUG oslo_concurrency.lockutils [req-c364206d-8b89-4ff7-a30e-6416a8fa8576 req-9fa8ddc2-2944-4a59-8b4d-da701f802e90 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-067b423b-4ac2-4ca9-9000-44b6fb2af34e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec  6 02:17:16 np0005548731 nova_compute[232433]: 2025-12-06 07:17:16.151 232437 DEBUG oslo_concurrency.lockutils [req-c364206d-8b89-4ff7-a30e-6416a8fa8576 req-9fa8ddc2-2944-4a59-8b4d-da701f802e90 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-067b423b-4ac2-4ca9-9000-44b6fb2af34e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec  6 02:17:16 np0005548731 nova_compute[232433]: 2025-12-06 07:17:16.151 232437 DEBUG nova.network.neutron [req-c364206d-8b89-4ff7-a30e-6416a8fa8576 req-9fa8ddc2-2944-4a59-8b4d-da701f802e90 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 067b423b-4ac2-4ca9-9000-44b6fb2af34e] Refreshing network info cache for port 5408643b-7986-461d-867d-9f6545eeabb3 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec  6 02:17:16 np0005548731 nova_compute[232433]: 2025-12-06 07:17:16.207 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 02:17:16 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:17:16 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:17:16 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:17:16.541 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:17:16 np0005548731 nova_compute[232433]: 2025-12-06 07:17:16.681 232437 DEBUG nova.network.neutron [req-c364206d-8b89-4ff7-a30e-6416a8fa8576 req-9fa8ddc2-2944-4a59-8b4d-da701f802e90 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 067b423b-4ac2-4ca9-9000-44b6fb2af34e] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec  6 02:17:16 np0005548731 nova_compute[232433]: 2025-12-06 07:17:16.789 232437 DEBUG nova.network.neutron [None req-1b05c8c5-02cf-4837-8e67-324222e26c9b 197a9b0ee1db487d82542eb31e84f33e 8f938a037b8141cf9408cbf6f5cd081d - - default default] [instance: 067b423b-4ac2-4ca9-9000-44b6fb2af34e] Successfully updated port: 0f3261df-8a99-4884-8db8-4ab5250dafc3 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec  6 02:17:16 np0005548731 nova_compute[232433]: 2025-12-06 07:17:16.878 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 02:17:16 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:17:16 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:17:16 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:17:16.912 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:17:17 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e252 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:17:17 np0005548731 nova_compute[232433]: 2025-12-06 07:17:17.862 232437 DEBUG nova.network.neutron [req-c364206d-8b89-4ff7-a30e-6416a8fa8576 req-9fa8ddc2-2944-4a59-8b4d-da701f802e90 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 067b423b-4ac2-4ca9-9000-44b6fb2af34e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec  6 02:17:17 np0005548731 nova_compute[232433]: 2025-12-06 07:17:17.888 232437 DEBUG oslo_concurrency.lockutils [req-c364206d-8b89-4ff7-a30e-6416a8fa8576 req-9fa8ddc2-2944-4a59-8b4d-da701f802e90 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-067b423b-4ac2-4ca9-9000-44b6fb2af34e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec  6 02:17:18 np0005548731 nova_compute[232433]: 2025-12-06 07:17:18.254 232437 DEBUG nova.compute.manager [req-a3cd9ee0-f583-4406-8247-ef2f40975f44 req-270cbd39-e84e-4f6c-b906-0ff6415b5a88 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 067b423b-4ac2-4ca9-9000-44b6fb2af34e] Received event network-changed-0f3261df-8a99-4884-8db8-4ab5250dafc3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec  6 02:17:18 np0005548731 nova_compute[232433]: 2025-12-06 07:17:18.254 232437 DEBUG nova.compute.manager [req-a3cd9ee0-f583-4406-8247-ef2f40975f44 req-270cbd39-e84e-4f6c-b906-0ff6415b5a88 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 067b423b-4ac2-4ca9-9000-44b6fb2af34e] Refreshing instance network info cache due to event network-changed-0f3261df-8a99-4884-8db8-4ab5250dafc3. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec  6 02:17:18 np0005548731 nova_compute[232433]: 2025-12-06 07:17:18.254 232437 DEBUG oslo_concurrency.lockutils [req-a3cd9ee0-f583-4406-8247-ef2f40975f44 req-270cbd39-e84e-4f6c-b906-0ff6415b5a88 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-067b423b-4ac2-4ca9-9000-44b6fb2af34e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec  6 02:17:18 np0005548731 nova_compute[232433]: 2025-12-06 07:17:18.255 232437 DEBUG oslo_concurrency.lockutils [req-a3cd9ee0-f583-4406-8247-ef2f40975f44 req-270cbd39-e84e-4f6c-b906-0ff6415b5a88 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-067b423b-4ac2-4ca9-9000-44b6fb2af34e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec  6 02:17:18 np0005548731 nova_compute[232433]: 2025-12-06 07:17:18.255 232437 DEBUG nova.network.neutron [req-a3cd9ee0-f583-4406-8247-ef2f40975f44 req-270cbd39-e84e-4f6c-b906-0ff6415b5a88 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 067b423b-4ac2-4ca9-9000-44b6fb2af34e] Refreshing network info cache for port 0f3261df-8a99-4884-8db8-4ab5250dafc3 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec  6 02:17:18 np0005548731 nova_compute[232433]: 2025-12-06 07:17:18.338 232437 DEBUG nova.network.neutron [None req-1b05c8c5-02cf-4837-8e67-324222e26c9b 197a9b0ee1db487d82542eb31e84f33e 8f938a037b8141cf9408cbf6f5cd081d - - default default] [instance: 067b423b-4ac2-4ca9-9000-44b6fb2af34e] Successfully updated port: 0afb38e3-893f-4379-98a8-4a56e2b84f9d _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec  6 02:17:18 np0005548731 nova_compute[232433]: 2025-12-06 07:17:18.510 232437 DEBUG nova.network.neutron [req-a3cd9ee0-f583-4406-8247-ef2f40975f44 req-270cbd39-e84e-4f6c-b906-0ff6415b5a88 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 067b423b-4ac2-4ca9-9000-44b6fb2af34e] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec  6 02:17:18 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:17:18 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:17:18 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:17:18.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:17:18 np0005548731 nova_compute[232433]: 2025-12-06 07:17:18.904 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:17:18 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:17:18 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:17:18 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:17:18.915 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:17:18 np0005548731 nova_compute[232433]: 2025-12-06 07:17:18.965 232437 DEBUG nova.network.neutron [req-a3cd9ee0-f583-4406-8247-ef2f40975f44 req-270cbd39-e84e-4f6c-b906-0ff6415b5a88 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 067b423b-4ac2-4ca9-9000-44b6fb2af34e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 02:17:18 np0005548731 nova_compute[232433]: 2025-12-06 07:17:18.980 232437 DEBUG oslo_concurrency.lockutils [req-a3cd9ee0-f583-4406-8247-ef2f40975f44 req-270cbd39-e84e-4f6c-b906-0ff6415b5a88 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-067b423b-4ac2-4ca9-9000-44b6fb2af34e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 02:17:19 np0005548731 nova_compute[232433]: 2025-12-06 07:17:19.705 232437 DEBUG nova.network.neutron [None req-1b05c8c5-02cf-4837-8e67-324222e26c9b 197a9b0ee1db487d82542eb31e84f33e 8f938a037b8141cf9408cbf6f5cd081d - - default default] [instance: 067b423b-4ac2-4ca9-9000-44b6fb2af34e] Successfully updated port: e27e5b98-b454-4f5a-90a9-3f7a5c8f1a91 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Dec  6 02:17:20 np0005548731 nova_compute[232433]: 2025-12-06 07:17:20.349 232437 DEBUG nova.compute.manager [req-732c78ca-ee83-4a9a-b5ed-a358df69cfcd req-294f98fe-e520-4323-8d77-e689db86e761 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 067b423b-4ac2-4ca9-9000-44b6fb2af34e] Received event network-changed-0afb38e3-893f-4379-98a8-4a56e2b84f9d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:17:20 np0005548731 nova_compute[232433]: 2025-12-06 07:17:20.350 232437 DEBUG nova.compute.manager [req-732c78ca-ee83-4a9a-b5ed-a358df69cfcd req-294f98fe-e520-4323-8d77-e689db86e761 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 067b423b-4ac2-4ca9-9000-44b6fb2af34e] Refreshing instance network info cache due to event network-changed-0afb38e3-893f-4379-98a8-4a56e2b84f9d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  6 02:17:20 np0005548731 nova_compute[232433]: 2025-12-06 07:17:20.350 232437 DEBUG oslo_concurrency.lockutils [req-732c78ca-ee83-4a9a-b5ed-a358df69cfcd req-294f98fe-e520-4323-8d77-e689db86e761 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-067b423b-4ac2-4ca9-9000-44b6fb2af34e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 02:17:20 np0005548731 nova_compute[232433]: 2025-12-06 07:17:20.350 232437 DEBUG oslo_concurrency.lockutils [req-732c78ca-ee83-4a9a-b5ed-a358df69cfcd req-294f98fe-e520-4323-8d77-e689db86e761 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-067b423b-4ac2-4ca9-9000-44b6fb2af34e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 02:17:20 np0005548731 nova_compute[232433]: 2025-12-06 07:17:20.350 232437 DEBUG nova.network.neutron [req-732c78ca-ee83-4a9a-b5ed-a358df69cfcd req-294f98fe-e520-4323-8d77-e689db86e761 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 067b423b-4ac2-4ca9-9000-44b6fb2af34e] Refreshing network info cache for port 0afb38e3-893f-4379-98a8-4a56e2b84f9d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  6 02:17:20 np0005548731 nova_compute[232433]: 2025-12-06 07:17:20.491 232437 DEBUG nova.network.neutron [None req-1b05c8c5-02cf-4837-8e67-324222e26c9b 197a9b0ee1db487d82542eb31e84f33e 8f938a037b8141cf9408cbf6f5cd081d - - default default] [instance: 067b423b-4ac2-4ca9-9000-44b6fb2af34e] Successfully updated port: 5984d840-e1cd-45dc-87cb-a337e73a753c _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Dec  6 02:17:20 np0005548731 nova_compute[232433]: 2025-12-06 07:17:20.543 232437 DEBUG nova.network.neutron [req-732c78ca-ee83-4a9a-b5ed-a358df69cfcd req-294f98fe-e520-4323-8d77-e689db86e761 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 067b423b-4ac2-4ca9-9000-44b6fb2af34e] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec  6 02:17:20 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:17:20 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:17:20 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:17:20.546 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:17:20 np0005548731 nova_compute[232433]: 2025-12-06 07:17:20.896 232437 DEBUG nova.network.neutron [req-732c78ca-ee83-4a9a-b5ed-a358df69cfcd req-294f98fe-e520-4323-8d77-e689db86e761 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 067b423b-4ac2-4ca9-9000-44b6fb2af34e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 02:17:20 np0005548731 nova_compute[232433]: 2025-12-06 07:17:20.913 232437 DEBUG oslo_concurrency.lockutils [req-732c78ca-ee83-4a9a-b5ed-a358df69cfcd req-294f98fe-e520-4323-8d77-e689db86e761 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-067b423b-4ac2-4ca9-9000-44b6fb2af34e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 02:17:20 np0005548731 nova_compute[232433]: 2025-12-06 07:17:20.914 232437 DEBUG nova.compute.manager [req-732c78ca-ee83-4a9a-b5ed-a358df69cfcd req-294f98fe-e520-4323-8d77-e689db86e761 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 067b423b-4ac2-4ca9-9000-44b6fb2af34e] Received event network-changed-e27e5b98-b454-4f5a-90a9-3f7a5c8f1a91 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:17:20 np0005548731 nova_compute[232433]: 2025-12-06 07:17:20.914 232437 DEBUG nova.compute.manager [req-732c78ca-ee83-4a9a-b5ed-a358df69cfcd req-294f98fe-e520-4323-8d77-e689db86e761 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 067b423b-4ac2-4ca9-9000-44b6fb2af34e] Refreshing instance network info cache due to event network-changed-e27e5b98-b454-4f5a-90a9-3f7a5c8f1a91. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  6 02:17:20 np0005548731 nova_compute[232433]: 2025-12-06 07:17:20.914 232437 DEBUG oslo_concurrency.lockutils [req-732c78ca-ee83-4a9a-b5ed-a358df69cfcd req-294f98fe-e520-4323-8d77-e689db86e761 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-067b423b-4ac2-4ca9-9000-44b6fb2af34e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 02:17:20 np0005548731 nova_compute[232433]: 2025-12-06 07:17:20.914 232437 DEBUG oslo_concurrency.lockutils [req-732c78ca-ee83-4a9a-b5ed-a358df69cfcd req-294f98fe-e520-4323-8d77-e689db86e761 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-067b423b-4ac2-4ca9-9000-44b6fb2af34e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 02:17:20 np0005548731 nova_compute[232433]: 2025-12-06 07:17:20.914 232437 DEBUG nova.network.neutron [req-732c78ca-ee83-4a9a-b5ed-a358df69cfcd req-294f98fe-e520-4323-8d77-e689db86e761 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 067b423b-4ac2-4ca9-9000-44b6fb2af34e] Refreshing network info cache for port e27e5b98-b454-4f5a-90a9-3f7a5c8f1a91 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  6 02:17:20 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:17:20 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:17:20 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:17:20.917 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:17:21 np0005548731 nova_compute[232433]: 2025-12-06 07:17:21.209 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:17:21 np0005548731 nova_compute[232433]: 2025-12-06 07:17:21.444 232437 DEBUG nova.network.neutron [req-732c78ca-ee83-4a9a-b5ed-a358df69cfcd req-294f98fe-e520-4323-8d77-e689db86e761 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 067b423b-4ac2-4ca9-9000-44b6fb2af34e] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec  6 02:17:21 np0005548731 nova_compute[232433]: 2025-12-06 07:17:21.775 232437 DEBUG nova.network.neutron [None req-1b05c8c5-02cf-4837-8e67-324222e26c9b 197a9b0ee1db487d82542eb31e84f33e 8f938a037b8141cf9408cbf6f5cd081d - - default default] [instance: 067b423b-4ac2-4ca9-9000-44b6fb2af34e] Successfully updated port: 509f7f18-31bd-42cd-83f5-e9b62a4e200b _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Dec  6 02:17:21 np0005548731 nova_compute[232433]: 2025-12-06 07:17:21.851 232437 DEBUG nova.network.neutron [req-732c78ca-ee83-4a9a-b5ed-a358df69cfcd req-294f98fe-e520-4323-8d77-e689db86e761 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 067b423b-4ac2-4ca9-9000-44b6fb2af34e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 02:17:21 np0005548731 nova_compute[232433]: 2025-12-06 07:17:21.873 232437 DEBUG oslo_concurrency.lockutils [req-732c78ca-ee83-4a9a-b5ed-a358df69cfcd req-294f98fe-e520-4323-8d77-e689db86e761 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-067b423b-4ac2-4ca9-9000-44b6fb2af34e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 02:17:21 np0005548731 nova_compute[232433]: 2025-12-06 07:17:21.881 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:17:22 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e252 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:17:22 np0005548731 nova_compute[232433]: 2025-12-06 07:17:22.458 232437 DEBUG nova.compute.manager [req-6874525b-fac5-46be-a405-8bb45c28d6db req-ff9afe0e-8353-43bf-9571-c173050bab56 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 067b423b-4ac2-4ca9-9000-44b6fb2af34e] Received event network-changed-5984d840-e1cd-45dc-87cb-a337e73a753c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:17:22 np0005548731 nova_compute[232433]: 2025-12-06 07:17:22.459 232437 DEBUG nova.compute.manager [req-6874525b-fac5-46be-a405-8bb45c28d6db req-ff9afe0e-8353-43bf-9571-c173050bab56 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 067b423b-4ac2-4ca9-9000-44b6fb2af34e] Refreshing instance network info cache due to event network-changed-5984d840-e1cd-45dc-87cb-a337e73a753c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  6 02:17:22 np0005548731 nova_compute[232433]: 2025-12-06 07:17:22.459 232437 DEBUG oslo_concurrency.lockutils [req-6874525b-fac5-46be-a405-8bb45c28d6db req-ff9afe0e-8353-43bf-9571-c173050bab56 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-067b423b-4ac2-4ca9-9000-44b6fb2af34e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 02:17:22 np0005548731 nova_compute[232433]: 2025-12-06 07:17:22.460 232437 DEBUG oslo_concurrency.lockutils [req-6874525b-fac5-46be-a405-8bb45c28d6db req-ff9afe0e-8353-43bf-9571-c173050bab56 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-067b423b-4ac2-4ca9-9000-44b6fb2af34e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 02:17:22 np0005548731 nova_compute[232433]: 2025-12-06 07:17:22.460 232437 DEBUG nova.network.neutron [req-6874525b-fac5-46be-a405-8bb45c28d6db req-ff9afe0e-8353-43bf-9571-c173050bab56 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 067b423b-4ac2-4ca9-9000-44b6fb2af34e] Refreshing network info cache for port 5984d840-e1cd-45dc-87cb-a337e73a753c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  6 02:17:22 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:17:22 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:17:22 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:17:22.547 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:17:22 np0005548731 nova_compute[232433]: 2025-12-06 07:17:22.592 232437 DEBUG nova.network.neutron [None req-1b05c8c5-02cf-4837-8e67-324222e26c9b 197a9b0ee1db487d82542eb31e84f33e 8f938a037b8141cf9408cbf6f5cd081d - - default default] [instance: 067b423b-4ac2-4ca9-9000-44b6fb2af34e] Successfully updated port: 0355ba71-a598-4e22-a0b7-c5c10b61733e _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Dec  6 02:17:22 np0005548731 nova_compute[232433]: 2025-12-06 07:17:22.616 232437 DEBUG oslo_concurrency.lockutils [None req-1b05c8c5-02cf-4837-8e67-324222e26c9b 197a9b0ee1db487d82542eb31e84f33e 8f938a037b8141cf9408cbf6f5cd081d - - default default] Acquiring lock "refresh_cache-067b423b-4ac2-4ca9-9000-44b6fb2af34e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 02:17:22 np0005548731 nova_compute[232433]: 2025-12-06 07:17:22.666 232437 DEBUG nova.network.neutron [req-6874525b-fac5-46be-a405-8bb45c28d6db req-ff9afe0e-8353-43bf-9571-c173050bab56 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 067b423b-4ac2-4ca9-9000-44b6fb2af34e] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec  6 02:17:22 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:17:22 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:17:22 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:17:22.921 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:17:23 np0005548731 nova_compute[232433]: 2025-12-06 07:17:23.620 232437 DEBUG nova.network.neutron [req-6874525b-fac5-46be-a405-8bb45c28d6db req-ff9afe0e-8353-43bf-9571-c173050bab56 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 067b423b-4ac2-4ca9-9000-44b6fb2af34e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 02:17:23 np0005548731 nova_compute[232433]: 2025-12-06 07:17:23.664 232437 DEBUG oslo_concurrency.lockutils [req-6874525b-fac5-46be-a405-8bb45c28d6db req-ff9afe0e-8353-43bf-9571-c173050bab56 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-067b423b-4ac2-4ca9-9000-44b6fb2af34e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 02:17:23 np0005548731 nova_compute[232433]: 2025-12-06 07:17:23.665 232437 DEBUG nova.compute.manager [req-6874525b-fac5-46be-a405-8bb45c28d6db req-ff9afe0e-8353-43bf-9571-c173050bab56 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 067b423b-4ac2-4ca9-9000-44b6fb2af34e] Received event network-changed-509f7f18-31bd-42cd-83f5-e9b62a4e200b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:17:23 np0005548731 nova_compute[232433]: 2025-12-06 07:17:23.665 232437 DEBUG nova.compute.manager [req-6874525b-fac5-46be-a405-8bb45c28d6db req-ff9afe0e-8353-43bf-9571-c173050bab56 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 067b423b-4ac2-4ca9-9000-44b6fb2af34e] Refreshing instance network info cache due to event network-changed-509f7f18-31bd-42cd-83f5-e9b62a4e200b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  6 02:17:23 np0005548731 nova_compute[232433]: 2025-12-06 07:17:23.665 232437 DEBUG oslo_concurrency.lockutils [req-6874525b-fac5-46be-a405-8bb45c28d6db req-ff9afe0e-8353-43bf-9571-c173050bab56 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-067b423b-4ac2-4ca9-9000-44b6fb2af34e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 02:17:23 np0005548731 nova_compute[232433]: 2025-12-06 07:17:23.666 232437 DEBUG oslo_concurrency.lockutils [req-6874525b-fac5-46be-a405-8bb45c28d6db req-ff9afe0e-8353-43bf-9571-c173050bab56 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-067b423b-4ac2-4ca9-9000-44b6fb2af34e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 02:17:23 np0005548731 nova_compute[232433]: 2025-12-06 07:17:23.666 232437 DEBUG nova.network.neutron [req-6874525b-fac5-46be-a405-8bb45c28d6db req-ff9afe0e-8353-43bf-9571-c173050bab56 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 067b423b-4ac2-4ca9-9000-44b6fb2af34e] Refreshing network info cache for port 509f7f18-31bd-42cd-83f5-e9b62a4e200b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  6 02:17:24 np0005548731 nova_compute[232433]: 2025-12-06 07:17:24.221 232437 DEBUG nova.network.neutron [req-6874525b-fac5-46be-a405-8bb45c28d6db req-ff9afe0e-8353-43bf-9571-c173050bab56 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 067b423b-4ac2-4ca9-9000-44b6fb2af34e] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec  6 02:17:24 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:17:24 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:17:24 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:17:24.550 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:17:24 np0005548731 nova_compute[232433]: 2025-12-06 07:17:24.718 232437 DEBUG nova.compute.manager [req-d928de36-6752-4ebb-9548-1f7c9f7a5a4c req-e4bdbdf1-991f-4768-a697-c04e4d98a6cc 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 067b423b-4ac2-4ca9-9000-44b6fb2af34e] Received event network-changed-0355ba71-a598-4e22-a0b7-c5c10b61733e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:17:24 np0005548731 nova_compute[232433]: 2025-12-06 07:17:24.718 232437 DEBUG nova.compute.manager [req-d928de36-6752-4ebb-9548-1f7c9f7a5a4c req-e4bdbdf1-991f-4768-a697-c04e4d98a6cc 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 067b423b-4ac2-4ca9-9000-44b6fb2af34e] Refreshing instance network info cache due to event network-changed-0355ba71-a598-4e22-a0b7-c5c10b61733e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  6 02:17:24 np0005548731 nova_compute[232433]: 2025-12-06 07:17:24.718 232437 DEBUG oslo_concurrency.lockutils [req-d928de36-6752-4ebb-9548-1f7c9f7a5a4c req-e4bdbdf1-991f-4768-a697-c04e4d98a6cc 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-067b423b-4ac2-4ca9-9000-44b6fb2af34e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 02:17:24 np0005548731 podman[262990]: 2025-12-06 07:17:24.894848961 +0000 UTC m=+0.051039073 container health_status 26c2ce9bdb83c6db5ccad7f5b2a7cb4e9354ab0235ad2f942ea42c8feda2142b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent)
Dec  6 02:17:24 np0005548731 podman[262992]: 2025-12-06 07:17:24.912590423 +0000 UTC m=+0.065090325 container health_status ae4fd08cdec31d6ae2a6b91ffff7deb6ca5a8375cb52ee4b685cc6f25796824e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  6 02:17:24 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:17:24 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.002000047s ======
Dec  6 02:17:24 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:17:24.923 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000047s
Dec  6 02:17:24 np0005548731 podman[262991]: 2025-12-06 07:17:24.94245279 +0000 UTC m=+0.094526722 container health_status a118c3becf56cfbcae208065771ebf10a65162bb7b96389971293e534823fb78 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3)
Dec  6 02:17:25 np0005548731 nova_compute[232433]: 2025-12-06 07:17:25.156 232437 DEBUG nova.network.neutron [req-6874525b-fac5-46be-a405-8bb45c28d6db req-ff9afe0e-8353-43bf-9571-c173050bab56 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 067b423b-4ac2-4ca9-9000-44b6fb2af34e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 02:17:25 np0005548731 nova_compute[232433]: 2025-12-06 07:17:25.179 232437 DEBUG oslo_concurrency.lockutils [req-6874525b-fac5-46be-a405-8bb45c28d6db req-ff9afe0e-8353-43bf-9571-c173050bab56 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-067b423b-4ac2-4ca9-9000-44b6fb2af34e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 02:17:25 np0005548731 nova_compute[232433]: 2025-12-06 07:17:25.181 232437 DEBUG oslo_concurrency.lockutils [None req-1b05c8c5-02cf-4837-8e67-324222e26c9b 197a9b0ee1db487d82542eb31e84f33e 8f938a037b8141cf9408cbf6f5cd081d - - default default] Acquired lock "refresh_cache-067b423b-4ac2-4ca9-9000-44b6fb2af34e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 02:17:25 np0005548731 nova_compute[232433]: 2025-12-06 07:17:25.181 232437 DEBUG nova.network.neutron [None req-1b05c8c5-02cf-4837-8e67-324222e26c9b 197a9b0ee1db487d82542eb31e84f33e 8f938a037b8141cf9408cbf6f5cd081d - - default default] [instance: 067b423b-4ac2-4ca9-9000-44b6fb2af34e] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec  6 02:17:25 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:17:25.451 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=9f96b960-b4f2-40bd-ae99-08121f5e8b78, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '28'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:17:25 np0005548731 nova_compute[232433]: 2025-12-06 07:17:25.639 232437 DEBUG nova.network.neutron [None req-1b05c8c5-02cf-4837-8e67-324222e26c9b 197a9b0ee1db487d82542eb31e84f33e 8f938a037b8141cf9408cbf6f5cd081d - - default default] [instance: 067b423b-4ac2-4ca9-9000-44b6fb2af34e] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec  6 02:17:26 np0005548731 nova_compute[232433]: 2025-12-06 07:17:26.212 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:17:26 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:17:26 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:17:26 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:17:26.552 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:17:26 np0005548731 nova_compute[232433]: 2025-12-06 07:17:26.923 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:17:26 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:17:26 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:17:26 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:17:26.926 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:17:27 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e252 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:17:28 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:17:28 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:17:28 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:17:28.555 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:17:28 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:17:28 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 02:17:28 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:17:28.929 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 02:17:30 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:17:30 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:17:30 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:17:30.557 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:17:30 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:17:30 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:17:30 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:17:30.933 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:17:31 np0005548731 nova_compute[232433]: 2025-12-06 07:17:31.214 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:17:31 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 02:17:31 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 02:17:31 np0005548731 nova_compute[232433]: 2025-12-06 07:17:31.924 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:17:32 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e252 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:17:32 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  6 02:17:32 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 02:17:32 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  6 02:17:32 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:17:32 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:17:32 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:17:32.559 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:17:32 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:17:32 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:17:32 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:17:32.936 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:17:34 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:17:34 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:17:34 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:17:34.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:17:34 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:17:34 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:17:34 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:17:34.938 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:17:36 np0005548731 nova_compute[232433]: 2025-12-06 07:17:36.216 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:17:36 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:17:36 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:17:36 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:17:36.563 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:17:36 np0005548731 nova_compute[232433]: 2025-12-06 07:17:36.926 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:17:36 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:17:36 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:17:36 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:17:36.941 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:17:37 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e252 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:17:38 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:17:38 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 02:17:38 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:17:38.565 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 02:17:38 np0005548731 nova_compute[232433]: 2025-12-06 07:17:38.615 232437 DEBUG nova.network.neutron [None req-1b05c8c5-02cf-4837-8e67-324222e26c9b 197a9b0ee1db487d82542eb31e84f33e 8f938a037b8141cf9408cbf6f5cd081d - - default default] [instance: 067b423b-4ac2-4ca9-9000-44b6fb2af34e] Updating instance_info_cache with network_info: [{"id": "5408643b-7986-461d-867d-9f6545eeabb3", "address": "fa:16:3e:43:c0:da", "network": {"id": "1867060b-2830-47a2-bf0a-46a10388f745", "bridge": "br-int", "label": "tempest-TaggedBootDevicesTest-1327949345-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8f938a037b8141cf9408cbf6f5cd081d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5408643b-79", "ovs_interfaceid": "5408643b-7986-461d-867d-9f6545eeabb3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "0f3261df-8a99-4884-8db8-4ab5250dafc3", "address": "fa:16:3e:33:5a:4c", "network": {"id": "83f8ee1c-bee2-4425-8792-4c822f63072c", "bridge": "br-int", "label": "tempest-device-tagging-net1-1182028938", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.159", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8f938a037b8141cf9408cbf6f5cd081d", "mtu": 1442, "physical_network": 
null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0f3261df-8a", "ovs_interfaceid": "0f3261df-8a99-4884-8db8-4ab5250dafc3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}, {"id": "0afb38e3-893f-4379-98a8-4a56e2b84f9d", "address": "fa:16:3e:b0:7f:ee", "network": {"id": "83f8ee1c-bee2-4425-8792-4c822f63072c", "bridge": "br-int", "label": "tempest-device-tagging-net1-1182028938", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.142", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8f938a037b8141cf9408cbf6f5cd081d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0afb38e3-89", "ovs_interfaceid": "0afb38e3-893f-4379-98a8-4a56e2b84f9d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}, {"id": "e27e5b98-b454-4f5a-90a9-3f7a5c8f1a91", "address": "fa:16:3e:44:6f:c0", "network": {"id": "83f8ee1c-bee2-4425-8792-4c822f63072c", "bridge": "br-int", "label": "tempest-device-tagging-net1-1182028938", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.98", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": 
"8f938a037b8141cf9408cbf6f5cd081d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape27e5b98-b4", "ovs_interfaceid": "e27e5b98-b454-4f5a-90a9-3f7a5c8f1a91", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "5984d840-e1cd-45dc-87cb-a337e73a753c", "address": "fa:16:3e:0b:78:59", "network": {"id": "83f8ee1c-bee2-4425-8792-4c822f63072c", "bridge": "br-int", "label": "tempest-device-tagging-net1-1182028938", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.181", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8f938a037b8141cf9408cbf6f5cd081d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5984d840-e1", "ovs_interfaceid": "5984d840-e1cd-45dc-87cb-a337e73a753c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "509f7f18-31bd-42cd-83f5-e9b62a4e200b", "address": "fa:16:3e:3a:d0:96", "network": {"id": "8856153c-fd33-4eca-913c-fbbc6c3bd29c", "bridge": "br-int", "label": "tempest-device-tagging-net2-1411782491", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.2.2.100", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, 
"meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8f938a037b8141cf9408cbf6f5cd081d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap509f7f18-31", "ovs_interfaceid": "509f7f18-31bd-42cd-83f5-e9b62a4e200b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "0355ba71-a598-4e22-a0b7-c5c10b61733e", "address": "fa:16:3e:a4:48:7a", "network": {"id": "8856153c-fd33-4eca-913c-fbbc6c3bd29c", "bridge": "br-int", "label": "tempest-device-tagging-net2-1411782491", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.2.2.200", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8f938a037b8141cf9408cbf6f5cd081d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0355ba71-a5", "ovs_interfaceid": "0355ba71-a598-4e22-a0b7-c5c10b61733e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 02:17:38 np0005548731 nova_compute[232433]: 2025-12-06 07:17:38.649 232437 DEBUG oslo_concurrency.lockutils [None req-1b05c8c5-02cf-4837-8e67-324222e26c9b 197a9b0ee1db487d82542eb31e84f33e 8f938a037b8141cf9408cbf6f5cd081d - - default default] Releasing lock "refresh_cache-067b423b-4ac2-4ca9-9000-44b6fb2af34e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 02:17:38 np0005548731 nova_compute[232433]: 2025-12-06 07:17:38.650 232437 DEBUG nova.compute.manager [None req-1b05c8c5-02cf-4837-8e67-324222e26c9b 197a9b0ee1db487d82542eb31e84f33e 8f938a037b8141cf9408cbf6f5cd081d - - default default] [instance: 067b423b-4ac2-4ca9-9000-44b6fb2af34e] Instance network_info: |[{"id": "5408643b-7986-461d-867d-9f6545eeabb3", "address": "fa:16:3e:43:c0:da", "network": {"id": "1867060b-2830-47a2-bf0a-46a10388f745", "bridge": "br-int", "label": "tempest-TaggedBootDevicesTest-1327949345-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8f938a037b8141cf9408cbf6f5cd081d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5408643b-79", "ovs_interfaceid": "5408643b-7986-461d-867d-9f6545eeabb3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "0f3261df-8a99-4884-8db8-4ab5250dafc3", "address": "fa:16:3e:33:5a:4c", "network": {"id": "83f8ee1c-bee2-4425-8792-4c822f63072c", "bridge": "br-int", "label": "tempest-device-tagging-net1-1182028938", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.159", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8f938a037b8141cf9408cbf6f5cd081d", "mtu": 1442, "physical_network": null, "tunneled": true}}, 
"type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0f3261df-8a", "ovs_interfaceid": "0f3261df-8a99-4884-8db8-4ab5250dafc3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}, {"id": "0afb38e3-893f-4379-98a8-4a56e2b84f9d", "address": "fa:16:3e:b0:7f:ee", "network": {"id": "83f8ee1c-bee2-4425-8792-4c822f63072c", "bridge": "br-int", "label": "tempest-device-tagging-net1-1182028938", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.142", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8f938a037b8141cf9408cbf6f5cd081d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0afb38e3-89", "ovs_interfaceid": "0afb38e3-893f-4379-98a8-4a56e2b84f9d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}, {"id": "e27e5b98-b454-4f5a-90a9-3f7a5c8f1a91", "address": "fa:16:3e:44:6f:c0", "network": {"id": "83f8ee1c-bee2-4425-8792-4c822f63072c", "bridge": "br-int", "label": "tempest-device-tagging-net1-1182028938", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.98", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": 
"8f938a037b8141cf9408cbf6f5cd081d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape27e5b98-b4", "ovs_interfaceid": "e27e5b98-b454-4f5a-90a9-3f7a5c8f1a91", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "5984d840-e1cd-45dc-87cb-a337e73a753c", "address": "fa:16:3e:0b:78:59", "network": {"id": "83f8ee1c-bee2-4425-8792-4c822f63072c", "bridge": "br-int", "label": "tempest-device-tagging-net1-1182028938", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.181", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8f938a037b8141cf9408cbf6f5cd081d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5984d840-e1", "ovs_interfaceid": "5984d840-e1cd-45dc-87cb-a337e73a753c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "509f7f18-31bd-42cd-83f5-e9b62a4e200b", "address": "fa:16:3e:3a:d0:96", "network": {"id": "8856153c-fd33-4eca-913c-fbbc6c3bd29c", "bridge": "br-int", "label": "tempest-device-tagging-net2-1411782491", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.2.2.100", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, 
"meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8f938a037b8141cf9408cbf6f5cd081d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap509f7f18-31", "ovs_interfaceid": "509f7f18-31bd-42cd-83f5-e9b62a4e200b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "0355ba71-a598-4e22-a0b7-c5c10b61733e", "address": "fa:16:3e:a4:48:7a", "network": {"id": "8856153c-fd33-4eca-913c-fbbc6c3bd29c", "bridge": "br-int", "label": "tempest-device-tagging-net2-1411782491", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.2.2.200", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8f938a037b8141cf9408cbf6f5cd081d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0355ba71-a5", "ovs_interfaceid": "0355ba71-a598-4e22-a0b7-c5c10b61733e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Dec  6 02:17:38 np0005548731 nova_compute[232433]: 2025-12-06 07:17:38.650 232437 DEBUG oslo_concurrency.lockutils [req-d928de36-6752-4ebb-9548-1f7c9f7a5a4c req-e4bdbdf1-991f-4768-a697-c04e4d98a6cc 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-067b423b-4ac2-4ca9-9000-44b6fb2af34e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 02:17:38 np0005548731 nova_compute[232433]: 2025-12-06 07:17:38.651 232437 DEBUG nova.network.neutron [req-d928de36-6752-4ebb-9548-1f7c9f7a5a4c req-e4bdbdf1-991f-4768-a697-c04e4d98a6cc 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 067b423b-4ac2-4ca9-9000-44b6fb2af34e] Refreshing network info cache for port 0355ba71-a598-4e22-a0b7-c5c10b61733e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  6 02:17:38 np0005548731 nova_compute[232433]: 2025-12-06 07:17:38.660 232437 DEBUG nova.virt.libvirt.driver [None req-1b05c8c5-02cf-4837-8e67-324222e26c9b 197a9b0ee1db487d82542eb31e84f33e 8f938a037b8141cf9408cbf6f5cd081d - - default default] [instance: 067b423b-4ac2-4ca9-9000-44b6fb2af34e] Start _get_guest_xml network_info=[{"id": "5408643b-7986-461d-867d-9f6545eeabb3", "address": "fa:16:3e:43:c0:da", "network": {"id": "1867060b-2830-47a2-bf0a-46a10388f745", "bridge": "br-int", "label": "tempest-TaggedBootDevicesTest-1327949345-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8f938a037b8141cf9408cbf6f5cd081d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5408643b-79", "ovs_interfaceid": "5408643b-7986-461d-867d-9f6545eeabb3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "0f3261df-8a99-4884-8db8-4ab5250dafc3", "address": "fa:16:3e:33:5a:4c", "network": {"id": "83f8ee1c-bee2-4425-8792-4c822f63072c", "bridge": "br-int", "label": "tempest-device-tagging-net1-1182028938", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.159", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8f938a037b8141cf9408cbf6f5cd081d", "mtu": 1442, "physical_network": null, 
"tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0f3261df-8a", "ovs_interfaceid": "0f3261df-8a99-4884-8db8-4ab5250dafc3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}, {"id": "0afb38e3-893f-4379-98a8-4a56e2b84f9d", "address": "fa:16:3e:b0:7f:ee", "network": {"id": "83f8ee1c-bee2-4425-8792-4c822f63072c", "bridge": "br-int", "label": "tempest-device-tagging-net1-1182028938", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.142", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8f938a037b8141cf9408cbf6f5cd081d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0afb38e3-89", "ovs_interfaceid": "0afb38e3-893f-4379-98a8-4a56e2b84f9d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}, {"id": "e27e5b98-b454-4f5a-90a9-3f7a5c8f1a91", "address": "fa:16:3e:44:6f:c0", "network": {"id": "83f8ee1c-bee2-4425-8792-4c822f63072c", "bridge": "br-int", "label": "tempest-device-tagging-net1-1182028938", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.98", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": 
"8f938a037b8141cf9408cbf6f5cd081d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape27e5b98-b4", "ovs_interfaceid": "e27e5b98-b454-4f5a-90a9-3f7a5c8f1a91", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "5984d840-e1cd-45dc-87cb-a337e73a753c", "address": "fa:16:3e:0b:78:59", "network": {"id": "83f8ee1c-bee2-4425-8792-4c822f63072c", "bridge": "br-int", "label": "tempest-device-tagging-net1-1182028938", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.181", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8f938a037b8141cf9408cbf6f5cd081d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5984d840-e1", "ovs_interfaceid": "5984d840-e1cd-45dc-87cb-a337e73a753c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "509f7f18-31bd-42cd-83f5-e9b62a4e200b", "address": "fa:16:3e:3a:d0:96", "network": {"id": "8856153c-fd33-4eca-913c-fbbc6c3bd29c", "bridge": "br-int", "label": "tempest-device-tagging-net2-1411782491", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.2.2.100", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, 
"meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8f938a037b8141cf9408cbf6f5cd081d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap509f7f18-31", "ovs_interfaceid": "509f7f18-31bd-42cd-83f5-e9b62a4e200b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "0355ba71-a598-4e22-a0b7-c5c10b61733e", "address": "fa:16:3e:a4:48:7a", "network": {"id": "8856153c-fd33-4eca-913c-fbbc6c3bd29c", "bridge": "br-int", "label": "tempest-device-tagging-net2-1411782491", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.2.2.200", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8f938a037b8141cf9408cbf6f5cd081d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0355ba71-a5", "ovs_interfaceid": "0355ba71-a598-4e22-a0b7-c5c10b61733e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vdb': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk', 'boot_index': '2'}, '/dev/vdc': {'bus': 'virtio', 'dev': 'vdc', 'type': 'disk', 'boot_index': '3'}, 
'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12
Dec  6 02:17:38 np0005548731 nova_compute[232433]: k=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'guest_format': None, 'device_type': 'disk', 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-0a534f34-78f8-4439-88a1-9db3d9b7638b', 'hosts': ['192.168.122.100', '192.168.122.102', '192.168.122.101'], 'ports': ['6789', '6789', '6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '0a534f34-78f8-4439-88a1-9db3d9b7638b', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': '067b423b-4ac2-4ca9-9000-44b6fb2af34e', 'attached_at': '', 'detached_at': '', 'volume_id': '0a534f34-78f8-4439-88a1-9db3d9b7638b', 'serial': '0a534f34-78f8-4439-88a1-9db3d9b7638b'}, 'disk_bus': 'virtio', 'boot_index': 0, 'delete_on_termination': False, 'mount_device': '/dev/vda', 'attachment_id': 'f4de751c-899f-475e-ac3a-d68ed0f1ae07', 'volume_type': None}, {'guest_format': None, 'device_type': 'disk', 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-4f7ebde6-5ba7-48cd-9d0a-8cec3b7dceb3', 'hosts': ['192.168.122.100', '192.168.122.102', '192.168.122.101'], 'ports': ['6789', '6789', '6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '4f7ebde6-5ba7-48cd-9d0a-8cec3b7dceb3', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': '067b423b-4ac2-4ca9-9000-44b6fb2af34e', 'attached_at': '', 'detached_at': '', 
'volume_id': '4f7ebde6-5ba7-48cd-9d0a-8cec3b7dceb3', 'serial': '4f7ebde6-5ba7-48cd-9d0a-8cec3b7dceb3'}, 'disk_bus': 'virtio', 'boot_index': 1, 'delete_on_termination': False, 'mount_device': '/dev/vdb', 'attachment_id': 'af25083f-54c8-43c2-9707-73274e5a5dea', 'volume_type': None}, {'guest_format': None, 'device_type': 'disk', 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-d3c6383c-41f5-49ad-8585-fc49088a4d05', 'hosts': ['192.168.122.100', '192.168.122.102', '192.168.122.101'], 'ports': ['6789', '6789', '6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': 'd3c6383c-41f5-49ad-8585-fc49088a4d05', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': '067b423b-4ac2-4ca9-9000-44b6fb2af34e', 'attached_at': '', 'detached_at': '', 'volume_id': 'd3c6383c-41f5-49ad-8585-fc49088a4d05', 'serial': 'd3c6383c-41f5-49ad-8585-fc49088a4d05'}, 'disk_bus': 'virtio', 'boot_index': 2, 'delete_on_termination': False, 'mount_device': '/dev/vdc', 'attachment_id': '8db5ccc5-1962-4524-9c53-b93313669576', 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec  6 02:17:38 np0005548731 nova_compute[232433]: 2025-12-06 07:17:38.665 232437 WARNING nova.virt.libvirt.driver [None req-1b05c8c5-02cf-4837-8e67-324222e26c9b 197a9b0ee1db487d82542eb31e84f33e 8f938a037b8141cf9408cbf6f5cd081d - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  6 02:17:38 np0005548731 nova_compute[232433]: 2025-12-06 07:17:38.670 232437 DEBUG nova.virt.libvirt.host [None req-1b05c8c5-02cf-4837-8e67-324222e26c9b 197a9b0ee1db487d82542eb31e84f33e 8f938a037b8141cf9408cbf6f5cd081d - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec  6 02:17:38 np0005548731 nova_compute[232433]: 2025-12-06 07:17:38.671 232437 DEBUG nova.virt.libvirt.host [None req-1b05c8c5-02cf-4837-8e67-324222e26c9b 197a9b0ee1db487d82542eb31e84f33e 8f938a037b8141cf9408cbf6f5cd081d - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec  6 02:17:38 np0005548731 nova_compute[232433]: 2025-12-06 07:17:38.677 232437 DEBUG nova.virt.libvirt.host [None req-1b05c8c5-02cf-4837-8e67-324222e26c9b 197a9b0ee1db487d82542eb31e84f33e 8f938a037b8141cf9408cbf6f5cd081d - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec  6 02:17:38 np0005548731 nova_compute[232433]: 2025-12-06 07:17:38.677 232437 DEBUG nova.virt.libvirt.host [None req-1b05c8c5-02cf-4837-8e67-324222e26c9b 197a9b0ee1db487d82542eb31e84f33e 8f938a037b8141cf9408cbf6f5cd081d - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec  6 02:17:38 np0005548731 nova_compute[232433]: 2025-12-06 07:17:38.678 232437 DEBUG nova.virt.libvirt.driver [None req-1b05c8c5-02cf-4837-8e67-324222e26c9b 197a9b0ee1db487d82542eb31e84f33e 8f938a037b8141cf9408cbf6f5cd081d - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec  6 02:17:38 np0005548731 nova_compute[232433]: 2025-12-06 07:17:38.678 232437 DEBUG nova.virt.hardware [None req-1b05c8c5-02cf-4837-8e67-324222e26c9b 197a9b0ee1db487d82542eb31e84f33e 8f938a037b8141cf9408cbf6f5cd081d - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-06T06:56:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='25848a18-11d9-4f11-80b5-5d005675c76d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec  6 02:17:38 np0005548731 nova_compute[232433]: 2025-12-06 07:17:38.679 232437 DEBUG nova.virt.hardware [None req-1b05c8c5-02cf-4837-8e67-324222e26c9b 197a9b0ee1db487d82542eb31e84f33e 8f938a037b8141cf9408cbf6f5cd081d - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec  6 02:17:38 np0005548731 nova_compute[232433]: 2025-12-06 07:17:38.679 232437 DEBUG nova.virt.hardware [None req-1b05c8c5-02cf-4837-8e67-324222e26c9b 197a9b0ee1db487d82542eb31e84f33e 8f938a037b8141cf9408cbf6f5cd081d - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec  6 02:17:38 np0005548731 nova_compute[232433]: 2025-12-06 07:17:38.679 232437 DEBUG nova.virt.hardware [None req-1b05c8c5-02cf-4837-8e67-324222e26c9b 197a9b0ee1db487d82542eb31e84f33e 8f938a037b8141cf9408cbf6f5cd081d - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec  6 02:17:38 np0005548731 nova_compute[232433]: 2025-12-06 07:17:38.679 232437 DEBUG nova.virt.hardware [None req-1b05c8c5-02cf-4837-8e67-324222e26c9b 197a9b0ee1db487d82542eb31e84f33e 8f938a037b8141cf9408cbf6f5cd081d - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec  6 02:17:38 np0005548731 nova_compute[232433]: 2025-12-06 07:17:38.680 232437 DEBUG nova.virt.hardware [None req-1b05c8c5-02cf-4837-8e67-324222e26c9b 197a9b0ee1db487d82542eb31e84f33e 8f938a037b8141cf9408cbf6f5cd081d - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec  6 02:17:38 np0005548731 nova_compute[232433]: 2025-12-06 07:17:38.680 232437 DEBUG nova.virt.hardware [None req-1b05c8c5-02cf-4837-8e67-324222e26c9b 197a9b0ee1db487d82542eb31e84f33e 8f938a037b8141cf9408cbf6f5cd081d - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec  6 02:17:38 np0005548731 nova_compute[232433]: 2025-12-06 07:17:38.680 232437 DEBUG nova.virt.hardware [None req-1b05c8c5-02cf-4837-8e67-324222e26c9b 197a9b0ee1db487d82542eb31e84f33e 8f938a037b8141cf9408cbf6f5cd081d - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec  6 02:17:38 np0005548731 nova_compute[232433]: 2025-12-06 07:17:38.680 232437 DEBUG nova.virt.hardware [None req-1b05c8c5-02cf-4837-8e67-324222e26c9b 197a9b0ee1db487d82542eb31e84f33e 8f938a037b8141cf9408cbf6f5cd081d - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec  6 02:17:38 np0005548731 nova_compute[232433]: 2025-12-06 07:17:38.681 232437 DEBUG nova.virt.hardware [None req-1b05c8c5-02cf-4837-8e67-324222e26c9b 197a9b0ee1db487d82542eb31e84f33e 8f938a037b8141cf9408cbf6f5cd081d - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec  6 02:17:38 np0005548731 nova_compute[232433]: 2025-12-06 07:17:38.681 232437 DEBUG nova.virt.hardware [None req-1b05c8c5-02cf-4837-8e67-324222e26c9b 197a9b0ee1db487d82542eb31e84f33e 8f938a037b8141cf9408cbf6f5cd081d - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec  6 02:17:38 np0005548731 rsyslogd[1006]: message too long (8192) with configured size 8096, begin of message is: 2025-12-06 07:17:38.660 232437 DEBUG nova.virt.libvirt.driver [None req-1b05c8c5 [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Dec  6 02:17:38 np0005548731 nova_compute[232433]: 2025-12-06 07:17:38.830 232437 DEBUG nova.storage.rbd_utils [None req-1b05c8c5-02cf-4837-8e67-324222e26c9b 197a9b0ee1db487d82542eb31e84f33e 8f938a037b8141cf9408cbf6f5cd081d - - default default] rbd image 067b423b-4ac2-4ca9-9000-44b6fb2af34e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:17:38 np0005548731 nova_compute[232433]: 2025-12-06 07:17:38.835 232437 DEBUG oslo_concurrency.processutils [None req-1b05c8c5-02cf-4837-8e67-324222e26c9b 197a9b0ee1db487d82542eb31e84f33e 8f938a037b8141cf9408cbf6f5cd081d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:17:38 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:17:38 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:17:38 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:17:38.943 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:17:39 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  6 02:17:39 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2337843887' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  6 02:17:39 np0005548731 nova_compute[232433]: 2025-12-06 07:17:39.261 232437 DEBUG oslo_concurrency.processutils [None req-1b05c8c5-02cf-4837-8e67-324222e26c9b 197a9b0ee1db487d82542eb31e84f33e 8f938a037b8141cf9408cbf6f5cd081d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.427s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:17:39 np0005548731 nova_compute[232433]: 2025-12-06 07:17:39.381 232437 DEBUG nova.virt.libvirt.vif [None req-1b05c8c5-02cf-4837-8e67-324222e26c9b 197a9b0ee1db487d82542eb31e84f33e 8f938a037b8141cf9408cbf6f5cd081d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T07:16:55Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-device-tagging-server-1733639154',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-device-tagging-server-1733639154',id=75,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHdu5SJtQNPQGb6FScAscHwhPOc3vR8R0vyVdAVKvBqqjhGOukzZ8D/fDg4EduNBcT5BiiJbdANtRqZ5sQMLiRX9x5mErRanB2F2dkvm8uC5tUXlBwNqaOj1FVwBesGPnQ==',key_name='tempest-keypair-964256358',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='8f938a037b8141cf9408cbf6f5cd081d',ramdisk_id='',reservation_id='r-5mbntsd9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio'
,image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TaggedBootDevicesTest-128611843',owner_user_name='tempest-TaggedBootDevicesTest-128611843-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:17:03Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='197a9b0ee1db487d82542eb31e84f33e',uuid=067b423b-4ac2-4ca9-9000-44b6fb2af34e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "5408643b-7986-461d-867d-9f6545eeabb3", "address": "fa:16:3e:43:c0:da", "network": {"id": "1867060b-2830-47a2-bf0a-46a10388f745", "bridge": "br-int", "label": "tempest-TaggedBootDevicesTest-1327949345-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8f938a037b8141cf9408cbf6f5cd081d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5408643b-79", "ovs_interfaceid": "5408643b-7986-461d-867d-9f6545eeabb3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec  6 02:17:39 np0005548731 nova_compute[232433]: 2025-12-06 07:17:39.381 232437 DEBUG nova.network.os_vif_util [None req-1b05c8c5-02cf-4837-8e67-324222e26c9b 197a9b0ee1db487d82542eb31e84f33e 8f938a037b8141cf9408cbf6f5cd081d - - default default] Converting VIF {"id": "5408643b-7986-461d-867d-9f6545eeabb3", "address": "fa:16:3e:43:c0:da", "network": {"id": "1867060b-2830-47a2-bf0a-46a10388f745", "bridge": "br-int", "label": "tempest-TaggedBootDevicesTest-1327949345-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8f938a037b8141cf9408cbf6f5cd081d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5408643b-79", "ovs_interfaceid": "5408643b-7986-461d-867d-9f6545eeabb3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 02:17:39 np0005548731 nova_compute[232433]: 2025-12-06 07:17:39.382 232437 DEBUG nova.network.os_vif_util [None req-1b05c8c5-02cf-4837-8e67-324222e26c9b 197a9b0ee1db487d82542eb31e84f33e 8f938a037b8141cf9408cbf6f5cd081d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:43:c0:da,bridge_name='br-int',has_traffic_filtering=True,id=5408643b-7986-461d-867d-9f6545eeabb3,network=Network(1867060b-2830-47a2-bf0a-46a10388f745),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5408643b-79') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 02:17:39 np0005548731 nova_compute[232433]: 2025-12-06 07:17:39.383 232437 DEBUG nova.virt.libvirt.vif [None req-1b05c8c5-02cf-4837-8e67-324222e26c9b 197a9b0ee1db487d82542eb31e84f33e 8f938a037b8141cf9408cbf6f5cd081d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T07:16:55Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-device-tagging-server-1733639154',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-device-tagging-server-1733639154',id=75,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHdu5SJtQNPQGb6FScAscHwhPOc3vR8R0vyVdAVKvBqqjhGOukzZ8D/fDg4EduNBcT5BiiJbdANtRqZ5sQMLiRX9x5mErRanB2F2dkvm8uC5tUXlBwNqaOj1FVwBesGPnQ==',key_name='tempest-keypair-964256358',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='8f938a037b8141cf9408cbf6f5cd081d',ramdisk_id='',reservation_id='r-5mbntsd9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio'
,image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TaggedBootDevicesTest-128611843',owner_user_name='tempest-TaggedBootDevicesTest-128611843-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:17:03Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='197a9b0ee1db487d82542eb31e84f33e',uuid=067b423b-4ac2-4ca9-9000-44b6fb2af34e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0f3261df-8a99-4884-8db8-4ab5250dafc3", "address": "fa:16:3e:33:5a:4c", "network": {"id": "83f8ee1c-bee2-4425-8792-4c822f63072c", "bridge": "br-int", "label": "tempest-device-tagging-net1-1182028938", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.159", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8f938a037b8141cf9408cbf6f5cd081d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0f3261df-8a", "ovs_interfaceid": "0f3261df-8a99-4884-8db8-4ab5250dafc3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec  6 02:17:39 np0005548731 nova_compute[232433]: 2025-12-06 07:17:39.384 232437 DEBUG nova.network.os_vif_util [None req-1b05c8c5-02cf-4837-8e67-324222e26c9b 197a9b0ee1db487d82542eb31e84f33e 8f938a037b8141cf9408cbf6f5cd081d - - default default] Converting VIF {"id": "0f3261df-8a99-4884-8db8-4ab5250dafc3", "address": "fa:16:3e:33:5a:4c", "network": {"id": "83f8ee1c-bee2-4425-8792-4c822f63072c", "bridge": "br-int", "label": "tempest-device-tagging-net1-1182028938", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.159", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8f938a037b8141cf9408cbf6f5cd081d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0f3261df-8a", "ovs_interfaceid": "0f3261df-8a99-4884-8db8-4ab5250dafc3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 02:17:39 np0005548731 nova_compute[232433]: 2025-12-06 07:17:39.385 232437 DEBUG nova.network.os_vif_util [None req-1b05c8c5-02cf-4837-8e67-324222e26c9b 197a9b0ee1db487d82542eb31e84f33e 8f938a037b8141cf9408cbf6f5cd081d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:33:5a:4c,bridge_name='br-int',has_traffic_filtering=True,id=0f3261df-8a99-4884-8db8-4ab5250dafc3,network=Network(83f8ee1c-bee2-4425-8792-4c822f63072c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap0f3261df-8a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 02:17:39 np0005548731 nova_compute[232433]: 2025-12-06 07:17:39.386 232437 DEBUG nova.virt.libvirt.vif [None req-1b05c8c5-02cf-4837-8e67-324222e26c9b 197a9b0ee1db487d82542eb31e84f33e 8f938a037b8141cf9408cbf6f5cd081d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T07:16:55Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-device-tagging-server-1733639154',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-device-tagging-server-1733639154',id=75,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHdu5SJtQNPQGb6FScAscHwhPOc3vR8R0vyVdAVKvBqqjhGOukzZ8D/fDg4EduNBcT5BiiJbdANtRqZ5sQMLiRX9x5mErRanB2F2dkvm8uC5tUXlBwNqaOj1FVwBesGPnQ==',key_name='tempest-keypair-964256358',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='8f938a037b8141cf9408cbf6f5cd081d',ramdisk_id='',reservation_id='r-5mbntsd9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio'
,image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TaggedBootDevicesTest-128611843',owner_user_name='tempest-TaggedBootDevicesTest-128611843-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:17:03Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='197a9b0ee1db487d82542eb31e84f33e',uuid=067b423b-4ac2-4ca9-9000-44b6fb2af34e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0afb38e3-893f-4379-98a8-4a56e2b84f9d", "address": "fa:16:3e:b0:7f:ee", "network": {"id": "83f8ee1c-bee2-4425-8792-4c822f63072c", "bridge": "br-int", "label": "tempest-device-tagging-net1-1182028938", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.142", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8f938a037b8141cf9408cbf6f5cd081d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0afb38e3-89", "ovs_interfaceid": "0afb38e3-893f-4379-98a8-4a56e2b84f9d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec  6 02:17:39 np0005548731 nova_compute[232433]: 2025-12-06 07:17:39.386 232437 DEBUG nova.network.os_vif_util [None req-1b05c8c5-02cf-4837-8e67-324222e26c9b 197a9b0ee1db487d82542eb31e84f33e 8f938a037b8141cf9408cbf6f5cd081d - - default default] Converting VIF {"id": "0afb38e3-893f-4379-98a8-4a56e2b84f9d", "address": "fa:16:3e:b0:7f:ee", "network": {"id": "83f8ee1c-bee2-4425-8792-4c822f63072c", "bridge": "br-int", "label": "tempest-device-tagging-net1-1182028938", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.142", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8f938a037b8141cf9408cbf6f5cd081d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0afb38e3-89", "ovs_interfaceid": "0afb38e3-893f-4379-98a8-4a56e2b84f9d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 02:17:39 np0005548731 nova_compute[232433]: 2025-12-06 07:17:39.387 232437 DEBUG nova.network.os_vif_util [None req-1b05c8c5-02cf-4837-8e67-324222e26c9b 197a9b0ee1db487d82542eb31e84f33e 8f938a037b8141cf9408cbf6f5cd081d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b0:7f:ee,bridge_name='br-int',has_traffic_filtering=True,id=0afb38e3-893f-4379-98a8-4a56e2b84f9d,network=Network(83f8ee1c-bee2-4425-8792-4c822f63072c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap0afb38e3-89') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 02:17:39 np0005548731 nova_compute[232433]: 2025-12-06 07:17:39.387 232437 DEBUG nova.virt.libvirt.vif [None req-1b05c8c5-02cf-4837-8e67-324222e26c9b 197a9b0ee1db487d82542eb31e84f33e 8f938a037b8141cf9408cbf6f5cd081d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T07:16:55Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-device-tagging-server-1733639154',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-device-tagging-server-1733639154',id=75,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHdu5SJtQNPQGb6FScAscHwhPOc3vR8R0vyVdAVKvBqqjhGOukzZ8D/fDg4EduNBcT5BiiJbdANtRqZ5sQMLiRX9x5mErRanB2F2dkvm8uC5tUXlBwNqaOj1FVwBesGPnQ==',key_name='tempest-keypair-964256358',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='8f938a037b8141cf9408cbf6f5cd081d',ramdisk_id='',reservation_id='r-5mbntsd9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio'
,image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TaggedBootDevicesTest-128611843',owner_user_name='tempest-TaggedBootDevicesTest-128611843-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:17:03Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='197a9b0ee1db487d82542eb31e84f33e',uuid=067b423b-4ac2-4ca9-9000-44b6fb2af34e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e27e5b98-b454-4f5a-90a9-3f7a5c8f1a91", "address": "fa:16:3e:44:6f:c0", "network": {"id": "83f8ee1c-bee2-4425-8792-4c822f63072c", "bridge": "br-int", "label": "tempest-device-tagging-net1-1182028938", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.98", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8f938a037b8141cf9408cbf6f5cd081d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape27e5b98-b4", "ovs_interfaceid": "e27e5b98-b454-4f5a-90a9-3f7a5c8f1a91", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec  6 02:17:39 np0005548731 nova_compute[232433]: 2025-12-06 07:17:39.387 232437 DEBUG nova.network.os_vif_util [None req-1b05c8c5-02cf-4837-8e67-324222e26c9b 197a9b0ee1db487d82542eb31e84f33e 8f938a037b8141cf9408cbf6f5cd081d - - default default] Converting VIF {"id": "e27e5b98-b454-4f5a-90a9-3f7a5c8f1a91", "address": "fa:16:3e:44:6f:c0", "network": {"id": "83f8ee1c-bee2-4425-8792-4c822f63072c", "bridge": "br-int", "label": "tempest-device-tagging-net1-1182028938", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.98", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8f938a037b8141cf9408cbf6f5cd081d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape27e5b98-b4", "ovs_interfaceid": "e27e5b98-b454-4f5a-90a9-3f7a5c8f1a91", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 02:17:39 np0005548731 nova_compute[232433]: 2025-12-06 07:17:39.388 232437 DEBUG nova.network.os_vif_util [None req-1b05c8c5-02cf-4837-8e67-324222e26c9b 197a9b0ee1db487d82542eb31e84f33e 8f938a037b8141cf9408cbf6f5cd081d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:44:6f:c0,bridge_name='br-int',has_traffic_filtering=True,id=e27e5b98-b454-4f5a-90a9-3f7a5c8f1a91,network=Network(83f8ee1c-bee2-4425-8792-4c822f63072c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape27e5b98-b4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 02:17:39 np0005548731 nova_compute[232433]: 2025-12-06 07:17:39.388 232437 DEBUG nova.virt.libvirt.vif [None req-1b05c8c5-02cf-4837-8e67-324222e26c9b 197a9b0ee1db487d82542eb31e84f33e 8f938a037b8141cf9408cbf6f5cd081d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T07:16:55Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-device-tagging-server-1733639154',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-device-tagging-server-1733639154',id=75,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHdu5SJtQNPQGb6FScAscHwhPOc3vR8R0vyVdAVKvBqqjhGOukzZ8D/fDg4EduNBcT5BiiJbdANtRqZ5sQMLiRX9x5mErRanB2F2dkvm8uC5tUXlBwNqaOj1FVwBesGPnQ==',key_name='tempest-keypair-964256358',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='8f938a037b8141cf9408cbf6f5cd081d',ramdisk_id='',reservation_id='r-5mbntsd9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio'
,image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TaggedBootDevicesTest-128611843',owner_user_name='tempest-TaggedBootDevicesTest-128611843-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:17:03Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='197a9b0ee1db487d82542eb31e84f33e',uuid=067b423b-4ac2-4ca9-9000-44b6fb2af34e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "5984d840-e1cd-45dc-87cb-a337e73a753c", "address": "fa:16:3e:0b:78:59", "network": {"id": "83f8ee1c-bee2-4425-8792-4c822f63072c", "bridge": "br-int", "label": "tempest-device-tagging-net1-1182028938", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.181", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8f938a037b8141cf9408cbf6f5cd081d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5984d840-e1", "ovs_interfaceid": "5984d840-e1cd-45dc-87cb-a337e73a753c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec  6 02:17:39 np0005548731 nova_compute[232433]: 2025-12-06 07:17:39.389 232437 DEBUG nova.network.os_vif_util [None req-1b05c8c5-02cf-4837-8e67-324222e26c9b 197a9b0ee1db487d82542eb31e84f33e 8f938a037b8141cf9408cbf6f5cd081d - - default default] Converting VIF {"id": "5984d840-e1cd-45dc-87cb-a337e73a753c", "address": "fa:16:3e:0b:78:59", "network": {"id": "83f8ee1c-bee2-4425-8792-4c822f63072c", "bridge": "br-int", "label": "tempest-device-tagging-net1-1182028938", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.181", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8f938a037b8141cf9408cbf6f5cd081d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5984d840-e1", "ovs_interfaceid": "5984d840-e1cd-45dc-87cb-a337e73a753c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 02:17:39 np0005548731 nova_compute[232433]: 2025-12-06 07:17:39.389 232437 DEBUG nova.network.os_vif_util [None req-1b05c8c5-02cf-4837-8e67-324222e26c9b 197a9b0ee1db487d82542eb31e84f33e 8f938a037b8141cf9408cbf6f5cd081d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:0b:78:59,bridge_name='br-int',has_traffic_filtering=True,id=5984d840-e1cd-45dc-87cb-a337e73a753c,network=Network(83f8ee1c-bee2-4425-8792-4c822f63072c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5984d840-e1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 02:17:39 np0005548731 nova_compute[232433]: 2025-12-06 07:17:39.390 232437 DEBUG nova.virt.libvirt.vif [None req-1b05c8c5-02cf-4837-8e67-324222e26c9b 197a9b0ee1db487d82542eb31e84f33e 8f938a037b8141cf9408cbf6f5cd081d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T07:16:55Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-device-tagging-server-1733639154',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-device-tagging-server-1733639154',id=75,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHdu5SJtQNPQGb6FScAscHwhPOc3vR8R0vyVdAVKvBqqjhGOukzZ8D/fDg4EduNBcT5BiiJbdANtRqZ5sQMLiRX9x5mErRanB2F2dkvm8uC5tUXlBwNqaOj1FVwBesGPnQ==',key_name='tempest-keypair-964256358',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='8f938a037b8141cf9408cbf6f5cd081d',ramdisk_id='',reservation_id='r-5mbntsd9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio'
,image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TaggedBootDevicesTest-128611843',owner_user_name='tempest-TaggedBootDevicesTest-128611843-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:17:03Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='197a9b0ee1db487d82542eb31e84f33e',uuid=067b423b-4ac2-4ca9-9000-44b6fb2af34e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "509f7f18-31bd-42cd-83f5-e9b62a4e200b", "address": "fa:16:3e:3a:d0:96", "network": {"id": "8856153c-fd33-4eca-913c-fbbc6c3bd29c", "bridge": "br-int", "label": "tempest-device-tagging-net2-1411782491", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.2.2.100", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8f938a037b8141cf9408cbf6f5cd081d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap509f7f18-31", "ovs_interfaceid": "509f7f18-31bd-42cd-83f5-e9b62a4e200b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec  6 02:17:39 np0005548731 nova_compute[232433]: 2025-12-06 07:17:39.390 232437 DEBUG nova.network.os_vif_util [None req-1b05c8c5-02cf-4837-8e67-324222e26c9b 197a9b0ee1db487d82542eb31e84f33e 8f938a037b8141cf9408cbf6f5cd081d - - default default] Converting VIF {"id": "509f7f18-31bd-42cd-83f5-e9b62a4e200b", "address": "fa:16:3e:3a:d0:96", "network": {"id": "8856153c-fd33-4eca-913c-fbbc6c3bd29c", "bridge": "br-int", "label": "tempest-device-tagging-net2-1411782491", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.2.2.100", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8f938a037b8141cf9408cbf6f5cd081d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap509f7f18-31", "ovs_interfaceid": "509f7f18-31bd-42cd-83f5-e9b62a4e200b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 02:17:39 np0005548731 nova_compute[232433]: 2025-12-06 07:17:39.391 232437 DEBUG nova.network.os_vif_util [None req-1b05c8c5-02cf-4837-8e67-324222e26c9b 197a9b0ee1db487d82542eb31e84f33e 8f938a037b8141cf9408cbf6f5cd081d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3a:d0:96,bridge_name='br-int',has_traffic_filtering=True,id=509f7f18-31bd-42cd-83f5-e9b62a4e200b,network=Network(8856153c-fd33-4eca-913c-fbbc6c3bd29c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap509f7f18-31') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 02:17:39 np0005548731 nova_compute[232433]: 2025-12-06 07:17:39.392 232437 DEBUG nova.virt.libvirt.vif [None req-1b05c8c5-02cf-4837-8e67-324222e26c9b 197a9b0ee1db487d82542eb31e84f33e 8f938a037b8141cf9408cbf6f5cd081d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T07:16:55Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-device-tagging-server-1733639154',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-device-tagging-server-1733639154',id=75,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHdu5SJtQNPQGb6FScAscHwhPOc3vR8R0vyVdAVKvBqqjhGOukzZ8D/fDg4EduNBcT5BiiJbdANtRqZ5sQMLiRX9x5mErRanB2F2dkvm8uC5tUXlBwNqaOj1FVwBesGPnQ==',key_name='tempest-keypair-964256358',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='8f938a037b8141cf9408cbf6f5cd081d',ramdisk_id='',reservation_id='r-5mbntsd9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio'
,image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TaggedBootDevicesTest-128611843',owner_user_name='tempest-TaggedBootDevicesTest-128611843-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:17:03Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='197a9b0ee1db487d82542eb31e84f33e',uuid=067b423b-4ac2-4ca9-9000-44b6fb2af34e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0355ba71-a598-4e22-a0b7-c5c10b61733e", "address": "fa:16:3e:a4:48:7a", "network": {"id": "8856153c-fd33-4eca-913c-fbbc6c3bd29c", "bridge": "br-int", "label": "tempest-device-tagging-net2-1411782491", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.2.2.200", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8f938a037b8141cf9408cbf6f5cd081d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0355ba71-a5", "ovs_interfaceid": "0355ba71-a598-4e22-a0b7-c5c10b61733e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec  6 02:17:39 np0005548731 nova_compute[232433]: 2025-12-06 07:17:39.392 232437 DEBUG nova.network.os_vif_util [None req-1b05c8c5-02cf-4837-8e67-324222e26c9b 197a9b0ee1db487d82542eb31e84f33e 8f938a037b8141cf9408cbf6f5cd081d - - default default] Converting VIF {"id": "0355ba71-a598-4e22-a0b7-c5c10b61733e", "address": "fa:16:3e:a4:48:7a", "network": {"id": "8856153c-fd33-4eca-913c-fbbc6c3bd29c", "bridge": "br-int", "label": "tempest-device-tagging-net2-1411782491", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.2.2.200", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8f938a037b8141cf9408cbf6f5cd081d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0355ba71-a5", "ovs_interfaceid": "0355ba71-a598-4e22-a0b7-c5c10b61733e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 02:17:39 np0005548731 nova_compute[232433]: 2025-12-06 07:17:39.392 232437 DEBUG nova.network.os_vif_util [None req-1b05c8c5-02cf-4837-8e67-324222e26c9b 197a9b0ee1db487d82542eb31e84f33e 8f938a037b8141cf9408cbf6f5cd081d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a4:48:7a,bridge_name='br-int',has_traffic_filtering=True,id=0355ba71-a598-4e22-a0b7-c5c10b61733e,network=Network(8856153c-fd33-4eca-913c-fbbc6c3bd29c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0355ba71-a5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 02:17:39 np0005548731 nova_compute[232433]: 2025-12-06 07:17:39.394 232437 DEBUG nova.objects.instance [None req-1b05c8c5-02cf-4837-8e67-324222e26c9b 197a9b0ee1db487d82542eb31e84f33e 8f938a037b8141cf9408cbf6f5cd081d - - default default] Lazy-loading 'pci_devices' on Instance uuid 067b423b-4ac2-4ca9-9000-44b6fb2af34e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:17:39 np0005548731 nova_compute[232433]: 2025-12-06 07:17:39.439 232437 DEBUG nova.virt.libvirt.driver [None req-1b05c8c5-02cf-4837-8e67-324222e26c9b 197a9b0ee1db487d82542eb31e84f33e 8f938a037b8141cf9408cbf6f5cd081d - - default default] [instance: 067b423b-4ac2-4ca9-9000-44b6fb2af34e] End _get_guest_xml xml=<domain type="kvm">
Dec  6 02:17:39 np0005548731 nova_compute[232433]:  <uuid>067b423b-4ac2-4ca9-9000-44b6fb2af34e</uuid>
Dec  6 02:17:39 np0005548731 nova_compute[232433]:  <name>instance-0000004b</name>
Dec  6 02:17:39 np0005548731 nova_compute[232433]:  <memory>131072</memory>
Dec  6 02:17:39 np0005548731 nova_compute[232433]:  <vcpu>1</vcpu>
Dec  6 02:17:39 np0005548731 nova_compute[232433]:  <metadata>
Dec  6 02:17:39 np0005548731 nova_compute[232433]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec  6 02:17:39 np0005548731 nova_compute[232433]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec  6 02:17:39 np0005548731 nova_compute[232433]:      <nova:name>tempest-device-tagging-server-1733639154</nova:name>
Dec  6 02:17:39 np0005548731 nova_compute[232433]:      <nova:creationTime>2025-12-06 07:17:38</nova:creationTime>
Dec  6 02:17:39 np0005548731 nova_compute[232433]:      <nova:flavor name="m1.nano">
Dec  6 02:17:39 np0005548731 nova_compute[232433]:        <nova:memory>128</nova:memory>
Dec  6 02:17:39 np0005548731 nova_compute[232433]:        <nova:disk>1</nova:disk>
Dec  6 02:17:39 np0005548731 nova_compute[232433]:        <nova:swap>0</nova:swap>
Dec  6 02:17:39 np0005548731 nova_compute[232433]:        <nova:ephemeral>0</nova:ephemeral>
Dec  6 02:17:39 np0005548731 nova_compute[232433]:        <nova:vcpus>1</nova:vcpus>
Dec  6 02:17:39 np0005548731 nova_compute[232433]:      </nova:flavor>
Dec  6 02:17:39 np0005548731 nova_compute[232433]:      <nova:owner>
Dec  6 02:17:39 np0005548731 nova_compute[232433]:        <nova:user uuid="197a9b0ee1db487d82542eb31e84f33e">tempest-TaggedBootDevicesTest-128611843-project-member</nova:user>
Dec  6 02:17:39 np0005548731 nova_compute[232433]:        <nova:project uuid="8f938a037b8141cf9408cbf6f5cd081d">tempest-TaggedBootDevicesTest-128611843</nova:project>
Dec  6 02:17:39 np0005548731 nova_compute[232433]:      </nova:owner>
Dec  6 02:17:39 np0005548731 nova_compute[232433]:      <nova:root type="image" uuid="6efab05d-c7cf-4770-a5c3-c806a2739063"/>
Dec  6 02:17:39 np0005548731 nova_compute[232433]:      <nova:ports>
Dec  6 02:17:39 np0005548731 nova_compute[232433]:        <nova:port uuid="5408643b-7986-461d-867d-9f6545eeabb3">
Dec  6 02:17:39 np0005548731 nova_compute[232433]:          <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Dec  6 02:17:39 np0005548731 nova_compute[232433]:        </nova:port>
Dec  6 02:17:39 np0005548731 nova_compute[232433]:        <nova:port uuid="0f3261df-8a99-4884-8db8-4ab5250dafc3">
Dec  6 02:17:39 np0005548731 nova_compute[232433]:          <nova:ip type="fixed" address="10.1.1.159" ipVersion="4"/>
Dec  6 02:17:39 np0005548731 nova_compute[232433]:        </nova:port>
Dec  6 02:17:39 np0005548731 nova_compute[232433]:        <nova:port uuid="0afb38e3-893f-4379-98a8-4a56e2b84f9d">
Dec  6 02:17:39 np0005548731 nova_compute[232433]:          <nova:ip type="fixed" address="10.1.1.142" ipVersion="4"/>
Dec  6 02:17:39 np0005548731 nova_compute[232433]:        </nova:port>
Dec  6 02:17:39 np0005548731 nova_compute[232433]:        <nova:port uuid="e27e5b98-b454-4f5a-90a9-3f7a5c8f1a91">
Dec  6 02:17:39 np0005548731 nova_compute[232433]:          <nova:ip type="fixed" address="10.1.1.98" ipVersion="4"/>
Dec  6 02:17:39 np0005548731 nova_compute[232433]:        </nova:port>
Dec  6 02:17:39 np0005548731 nova_compute[232433]:        <nova:port uuid="5984d840-e1cd-45dc-87cb-a337e73a753c">
Dec  6 02:17:39 np0005548731 nova_compute[232433]:          <nova:ip type="fixed" address="10.1.1.181" ipVersion="4"/>
Dec  6 02:17:39 np0005548731 nova_compute[232433]:        </nova:port>
Dec  6 02:17:39 np0005548731 nova_compute[232433]:        <nova:port uuid="509f7f18-31bd-42cd-83f5-e9b62a4e200b">
Dec  6 02:17:39 np0005548731 nova_compute[232433]:          <nova:ip type="fixed" address="10.2.2.100" ipVersion="4"/>
Dec  6 02:17:39 np0005548731 nova_compute[232433]:        </nova:port>
Dec  6 02:17:39 np0005548731 nova_compute[232433]:        <nova:port uuid="0355ba71-a598-4e22-a0b7-c5c10b61733e">
Dec  6 02:17:39 np0005548731 nova_compute[232433]:          <nova:ip type="fixed" address="10.2.2.200" ipVersion="4"/>
Dec  6 02:17:39 np0005548731 nova_compute[232433]:        </nova:port>
Dec  6 02:17:39 np0005548731 nova_compute[232433]:      </nova:ports>
Dec  6 02:17:39 np0005548731 nova_compute[232433]:    </nova:instance>
Dec  6 02:17:39 np0005548731 nova_compute[232433]:  </metadata>
Dec  6 02:17:39 np0005548731 nova_compute[232433]:  <sysinfo type="smbios">
Dec  6 02:17:39 np0005548731 nova_compute[232433]:    <system>
Dec  6 02:17:39 np0005548731 nova_compute[232433]:      <entry name="manufacturer">RDO</entry>
Dec  6 02:17:39 np0005548731 nova_compute[232433]:      <entry name="product">OpenStack Compute</entry>
Dec  6 02:17:39 np0005548731 nova_compute[232433]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec  6 02:17:39 np0005548731 nova_compute[232433]:      <entry name="serial">067b423b-4ac2-4ca9-9000-44b6fb2af34e</entry>
Dec  6 02:17:39 np0005548731 nova_compute[232433]:      <entry name="uuid">067b423b-4ac2-4ca9-9000-44b6fb2af34e</entry>
Dec  6 02:17:39 np0005548731 nova_compute[232433]:      <entry name="family">Virtual Machine</entry>
Dec  6 02:17:39 np0005548731 nova_compute[232433]:    </system>
Dec  6 02:17:39 np0005548731 nova_compute[232433]:  </sysinfo>
Dec  6 02:17:39 np0005548731 nova_compute[232433]:  <os>
Dec  6 02:17:39 np0005548731 nova_compute[232433]:    <type arch="x86_64" machine="q35">hvm</type>
Dec  6 02:17:39 np0005548731 nova_compute[232433]:    <boot dev="hd"/>
Dec  6 02:17:39 np0005548731 nova_compute[232433]:    <smbios mode="sysinfo"/>
Dec  6 02:17:39 np0005548731 nova_compute[232433]:  </os>
Dec  6 02:17:39 np0005548731 nova_compute[232433]:  <features>
Dec  6 02:17:39 np0005548731 nova_compute[232433]:    <acpi/>
Dec  6 02:17:39 np0005548731 nova_compute[232433]:    <apic/>
Dec  6 02:17:39 np0005548731 nova_compute[232433]:    <vmcoreinfo/>
Dec  6 02:17:39 np0005548731 nova_compute[232433]:  </features>
Dec  6 02:17:39 np0005548731 nova_compute[232433]:  <clock offset="utc">
Dec  6 02:17:39 np0005548731 nova_compute[232433]:    <timer name="pit" tickpolicy="delay"/>
Dec  6 02:17:39 np0005548731 nova_compute[232433]:    <timer name="rtc" tickpolicy="catchup"/>
Dec  6 02:17:39 np0005548731 nova_compute[232433]:    <timer name="hpet" present="no"/>
Dec  6 02:17:39 np0005548731 nova_compute[232433]:  </clock>
Dec  6 02:17:39 np0005548731 nova_compute[232433]:  <cpu mode="custom" match="exact">
Dec  6 02:17:39 np0005548731 nova_compute[232433]:    <model>Nehalem</model>
Dec  6 02:17:39 np0005548731 nova_compute[232433]:    <topology sockets="1" cores="1" threads="1"/>
Dec  6 02:17:39 np0005548731 nova_compute[232433]:  </cpu>
Dec  6 02:17:39 np0005548731 nova_compute[232433]:  <devices>
Dec  6 02:17:39 np0005548731 nova_compute[232433]:    <disk type="network" device="cdrom">
Dec  6 02:17:39 np0005548731 nova_compute[232433]:      <driver type="raw" cache="none"/>
Dec  6 02:17:39 np0005548731 nova_compute[232433]:      <source protocol="rbd" name="vms/067b423b-4ac2-4ca9-9000-44b6fb2af34e_disk.config">
Dec  6 02:17:39 np0005548731 nova_compute[232433]:        <host name="192.168.122.100" port="6789"/>
Dec  6 02:17:39 np0005548731 nova_compute[232433]:        <host name="192.168.122.102" port="6789"/>
Dec  6 02:17:39 np0005548731 nova_compute[232433]:        <host name="192.168.122.101" port="6789"/>
Dec  6 02:17:39 np0005548731 nova_compute[232433]:      </source>
Dec  6 02:17:39 np0005548731 nova_compute[232433]:      <auth username="openstack">
Dec  6 02:17:39 np0005548731 nova_compute[232433]:        <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec  6 02:17:39 np0005548731 nova_compute[232433]:      </auth>
Dec  6 02:17:39 np0005548731 nova_compute[232433]:      <target dev="sda" bus="sata"/>
Dec  6 02:17:39 np0005548731 nova_compute[232433]:    </disk>
Dec  6 02:17:39 np0005548731 nova_compute[232433]:    <disk type="network" device="disk">
Dec  6 02:17:39 np0005548731 nova_compute[232433]:      <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Dec  6 02:17:39 np0005548731 nova_compute[232433]:      <source protocol="rbd" name="volumes/volume-0a534f34-78f8-4439-88a1-9db3d9b7638b">
Dec  6 02:17:39 np0005548731 nova_compute[232433]:        <host name="192.168.122.100" port="6789"/>
Dec  6 02:17:39 np0005548731 nova_compute[232433]:        <host name="192.168.122.102" port="6789"/>
Dec  6 02:17:39 np0005548731 nova_compute[232433]:        <host name="192.168.122.101" port="6789"/>
Dec  6 02:17:39 np0005548731 nova_compute[232433]:      </source>
Dec  6 02:17:39 np0005548731 nova_compute[232433]:      <auth username="openstack">
Dec  6 02:17:39 np0005548731 nova_compute[232433]:        <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec  6 02:17:39 np0005548731 nova_compute[232433]:      </auth>
Dec  6 02:17:39 np0005548731 nova_compute[232433]:      <target dev="vda" bus="virtio"/>
Dec  6 02:17:39 np0005548731 nova_compute[232433]:      <serial>0a534f34-78f8-4439-88a1-9db3d9b7638b</serial>
Dec  6 02:17:39 np0005548731 nova_compute[232433]:    </disk>
Dec  6 02:17:39 np0005548731 nova_compute[232433]:    <disk type="network" device="disk">
Dec  6 02:17:39 np0005548731 nova_compute[232433]:      <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Dec  6 02:17:39 np0005548731 nova_compute[232433]:      <source protocol="rbd" name="volumes/volume-4f7ebde6-5ba7-48cd-9d0a-8cec3b7dceb3">
Dec  6 02:17:39 np0005548731 nova_compute[232433]:        <host name="192.168.122.100" port="6789"/>
Dec  6 02:17:39 np0005548731 nova_compute[232433]:        <host name="192.168.122.102" port="6789"/>
Dec  6 02:17:39 np0005548731 nova_compute[232433]:        <host name="192.168.122.101" port="6789"/>
Dec  6 02:17:39 np0005548731 nova_compute[232433]:      </source>
Dec  6 02:17:39 np0005548731 nova_compute[232433]:      <auth username="openstack">
Dec  6 02:17:39 np0005548731 nova_compute[232433]:        <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec  6 02:17:39 np0005548731 nova_compute[232433]:      </auth>
Dec  6 02:17:39 np0005548731 nova_compute[232433]:      <target dev="vdb" bus="virtio"/>
Dec  6 02:17:39 np0005548731 nova_compute[232433]:      <serial>4f7ebde6-5ba7-48cd-9d0a-8cec3b7dceb3</serial>
Dec  6 02:17:39 np0005548731 nova_compute[232433]:    </disk>
Dec  6 02:17:39 np0005548731 nova_compute[232433]:    <disk type="network" device="disk">
Dec  6 02:17:39 np0005548731 nova_compute[232433]:      <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Dec  6 02:17:39 np0005548731 nova_compute[232433]:      <source protocol="rbd" name="volumes/volume-d3c6383c-41f5-49ad-8585-fc49088a4d05">
Dec  6 02:17:39 np0005548731 nova_compute[232433]:        <host name="192.168.122.100" port="6789"/>
Dec  6 02:17:39 np0005548731 nova_compute[232433]:        <host name="192.168.122.102" port="6789"/>
Dec  6 02:17:39 np0005548731 nova_compute[232433]:        <host name="192.168.122.101" port="6789"/>
Dec  6 02:17:39 np0005548731 nova_compute[232433]:      </source>
Dec  6 02:17:39 np0005548731 nova_compute[232433]:      <auth username="openstack">
Dec  6 02:17:39 np0005548731 nova_compute[232433]:        <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec  6 02:17:39 np0005548731 nova_compute[232433]:      </auth>
Dec  6 02:17:39 np0005548731 nova_compute[232433]:      <target dev="vdc" bus="virtio"/>
Dec  6 02:17:39 np0005548731 nova_compute[232433]:      <serial>d3c6383c-41f5-49ad-8585-fc49088a4d05</serial>
Dec  6 02:17:39 np0005548731 nova_compute[232433]:    </disk>
Dec  6 02:17:39 np0005548731 nova_compute[232433]:    <interface type="ethernet">
Dec  6 02:17:39 np0005548731 nova_compute[232433]:      <mac address="fa:16:3e:43:c0:da"/>
Dec  6 02:17:39 np0005548731 nova_compute[232433]:      <model type="virtio"/>
Dec  6 02:17:39 np0005548731 nova_compute[232433]:      <driver name="vhost" rx_queue_size="512"/>
Dec  6 02:17:39 np0005548731 nova_compute[232433]:      <mtu size="1442"/>
Dec  6 02:17:39 np0005548731 nova_compute[232433]:      <target dev="tap5408643b-79"/>
Dec  6 02:17:39 np0005548731 nova_compute[232433]:    </interface>
Dec  6 02:17:39 np0005548731 nova_compute[232433]:    <interface type="ethernet">
Dec  6 02:17:39 np0005548731 nova_compute[232433]:      <mac address="fa:16:3e:33:5a:4c"/>
Dec  6 02:17:39 np0005548731 nova_compute[232433]:      <model type="virtio"/>
Dec  6 02:17:39 np0005548731 nova_compute[232433]:      <driver name="vhost" rx_queue_size="512"/>
Dec  6 02:17:39 np0005548731 nova_compute[232433]:      <mtu size="1442"/>
Dec  6 02:17:39 np0005548731 nova_compute[232433]:      <target dev="tap0f3261df-8a"/>
Dec  6 02:17:39 np0005548731 nova_compute[232433]:    </interface>
Dec  6 02:17:39 np0005548731 nova_compute[232433]:    <interface type="ethernet">
Dec  6 02:17:39 np0005548731 nova_compute[232433]:      <mac address="fa:16:3e:b0:7f:ee"/>
Dec  6 02:17:39 np0005548731 nova_compute[232433]:      <model type="virtio"/>
Dec  6 02:17:39 np0005548731 nova_compute[232433]:      <driver name="vhost" rx_queue_size="512"/>
Dec  6 02:17:39 np0005548731 nova_compute[232433]:      <mtu size="1442"/>
Dec  6 02:17:39 np0005548731 nova_compute[232433]:      <target dev="tap0afb38e3-89"/>
Dec  6 02:17:39 np0005548731 nova_compute[232433]:    </interface>
Dec  6 02:17:39 np0005548731 nova_compute[232433]:    <interface type="ethernet">
Dec  6 02:17:39 np0005548731 nova_compute[232433]:      <mac address="fa:16:3e:44:6f:c0"/>
Dec  6 02:17:39 np0005548731 nova_compute[232433]:      <model type="virtio"/>
Dec  6 02:17:39 np0005548731 nova_compute[232433]:      <driver name="vhost" rx_queue_size="512"/>
Dec  6 02:17:39 np0005548731 nova_compute[232433]:      <mtu size="1442"/>
Dec  6 02:17:39 np0005548731 nova_compute[232433]:      <target dev="tape27e5b98-b4"/>
Dec  6 02:17:39 np0005548731 nova_compute[232433]:    </interface>
Dec  6 02:17:39 np0005548731 nova_compute[232433]:    <interface type="ethernet">
Dec  6 02:17:39 np0005548731 nova_compute[232433]:      <mac address="fa:16:3e:0b:78:59"/>
Dec  6 02:17:39 np0005548731 nova_compute[232433]:      <model type="virtio"/>
Dec  6 02:17:39 np0005548731 nova_compute[232433]:      <driver name="vhost" rx_queue_size="512"/>
Dec  6 02:17:39 np0005548731 nova_compute[232433]:      <mtu size="1442"/>
Dec  6 02:17:39 np0005548731 nova_compute[232433]:      <target dev="tap5984d840-e1"/>
Dec  6 02:17:39 np0005548731 nova_compute[232433]:    </interface>
Dec  6 02:17:39 np0005548731 nova_compute[232433]:    <interface type="ethernet">
Dec  6 02:17:39 np0005548731 nova_compute[232433]:      <mac address="fa:16:3e:3a:d0:96"/>
Dec  6 02:17:39 np0005548731 nova_compute[232433]:      <model type="virtio"/>
Dec  6 02:17:39 np0005548731 nova_compute[232433]:      <driver name="vhost" rx_queue_size="512"/>
Dec  6 02:17:39 np0005548731 nova_compute[232433]:      <mtu size="1442"/>
Dec  6 02:17:39 np0005548731 nova_compute[232433]:      <target dev="tap509f7f18-31"/>
Dec  6 02:17:39 np0005548731 nova_compute[232433]:    </interface>
Dec  6 02:17:39 np0005548731 nova_compute[232433]:    <interface type="ethernet">
Dec  6 02:17:39 np0005548731 nova_compute[232433]:      <mac address="fa:16:3e:a4:48:7a"/>
Dec  6 02:17:39 np0005548731 nova_compute[232433]:      <model type="virtio"/>
Dec  6 02:17:39 np0005548731 nova_compute[232433]:      <driver name="vhost" rx_queue_size="512"/>
Dec  6 02:17:39 np0005548731 nova_compute[232433]:      <mtu size="1442"/>
Dec  6 02:17:39 np0005548731 nova_compute[232433]:      <target dev="tap0355ba71-a5"/>
Dec  6 02:17:39 np0005548731 nova_compute[232433]:    </interface>
Dec  6 02:17:39 np0005548731 nova_compute[232433]:    <serial type="pty">
Dec  6 02:17:39 np0005548731 nova_compute[232433]:      <log file="/var/lib/nova/instances/067b423b-4ac2-4ca9-9000-44b6fb2af34e/console.log" append="off"/>
Dec  6 02:17:39 np0005548731 nova_compute[232433]:    </serial>
Dec  6 02:17:39 np0005548731 nova_compute[232433]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec  6 02:17:39 np0005548731 nova_compute[232433]:    <video>
Dec  6 02:17:39 np0005548731 nova_compute[232433]:      <model type="virtio"/>
Dec  6 02:17:39 np0005548731 nova_compute[232433]:    </video>
Dec  6 02:17:39 np0005548731 nova_compute[232433]:    <input type="tablet" bus="usb"/>
Dec  6 02:17:39 np0005548731 nova_compute[232433]:    <rng model="virtio">
Dec  6 02:17:39 np0005548731 nova_compute[232433]:      <backend model="random">/dev/urandom</backend>
Dec  6 02:17:39 np0005548731 nova_compute[232433]:    </rng>
Dec  6 02:17:39 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root"/>
Dec  6 02:17:39 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:17:39 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:17:39 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:17:39 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:17:39 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:17:39 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:17:39 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:17:39 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:17:39 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:17:39 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:17:39 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:17:39 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:17:39 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:17:39 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:17:39 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:17:39 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:17:39 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:17:39 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:17:39 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:17:39 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:17:39 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:17:39 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:17:39 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:17:39 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:17:39 np0005548731 nova_compute[232433]:    <controller type="usb" index="0"/>
Dec  6 02:17:39 np0005548731 nova_compute[232433]:    <memballoon model="virtio">
Dec  6 02:17:39 np0005548731 nova_compute[232433]:      <stats period="10"/>
Dec  6 02:17:39 np0005548731 nova_compute[232433]:    </memballoon>
Dec  6 02:17:39 np0005548731 nova_compute[232433]:  </devices>
Dec  6 02:17:39 np0005548731 nova_compute[232433]: </domain>
Dec  6 02:17:39 np0005548731 nova_compute[232433]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec  6 02:17:39 np0005548731 nova_compute[232433]: 2025-12-06 07:17:39.440 232437 DEBUG nova.compute.manager [None req-1b05c8c5-02cf-4837-8e67-324222e26c9b 197a9b0ee1db487d82542eb31e84f33e 8f938a037b8141cf9408cbf6f5cd081d - - default default] [instance: 067b423b-4ac2-4ca9-9000-44b6fb2af34e] Preparing to wait for external event network-vif-plugged-5408643b-7986-461d-867d-9f6545eeabb3 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec  6 02:17:39 np0005548731 nova_compute[232433]: 2025-12-06 07:17:39.441 232437 DEBUG oslo_concurrency.lockutils [None req-1b05c8c5-02cf-4837-8e67-324222e26c9b 197a9b0ee1db487d82542eb31e84f33e 8f938a037b8141cf9408cbf6f5cd081d - - default default] Acquiring lock "067b423b-4ac2-4ca9-9000-44b6fb2af34e-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:17:39 np0005548731 nova_compute[232433]: 2025-12-06 07:17:39.441 232437 DEBUG oslo_concurrency.lockutils [None req-1b05c8c5-02cf-4837-8e67-324222e26c9b 197a9b0ee1db487d82542eb31e84f33e 8f938a037b8141cf9408cbf6f5cd081d - - default default] Lock "067b423b-4ac2-4ca9-9000-44b6fb2af34e-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:17:39 np0005548731 nova_compute[232433]: 2025-12-06 07:17:39.442 232437 DEBUG oslo_concurrency.lockutils [None req-1b05c8c5-02cf-4837-8e67-324222e26c9b 197a9b0ee1db487d82542eb31e84f33e 8f938a037b8141cf9408cbf6f5cd081d - - default default] Lock "067b423b-4ac2-4ca9-9000-44b6fb2af34e-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:17:39 np0005548731 nova_compute[232433]: 2025-12-06 07:17:39.442 232437 DEBUG nova.compute.manager [None req-1b05c8c5-02cf-4837-8e67-324222e26c9b 197a9b0ee1db487d82542eb31e84f33e 8f938a037b8141cf9408cbf6f5cd081d - - default default] [instance: 067b423b-4ac2-4ca9-9000-44b6fb2af34e] Preparing to wait for external event network-vif-plugged-0f3261df-8a99-4884-8db8-4ab5250dafc3 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec  6 02:17:39 np0005548731 nova_compute[232433]: 2025-12-06 07:17:39.442 232437 DEBUG oslo_concurrency.lockutils [None req-1b05c8c5-02cf-4837-8e67-324222e26c9b 197a9b0ee1db487d82542eb31e84f33e 8f938a037b8141cf9408cbf6f5cd081d - - default default] Acquiring lock "067b423b-4ac2-4ca9-9000-44b6fb2af34e-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:17:39 np0005548731 nova_compute[232433]: 2025-12-06 07:17:39.442 232437 DEBUG oslo_concurrency.lockutils [None req-1b05c8c5-02cf-4837-8e67-324222e26c9b 197a9b0ee1db487d82542eb31e84f33e 8f938a037b8141cf9408cbf6f5cd081d - - default default] Lock "067b423b-4ac2-4ca9-9000-44b6fb2af34e-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:17:39 np0005548731 nova_compute[232433]: 2025-12-06 07:17:39.442 232437 DEBUG oslo_concurrency.lockutils [None req-1b05c8c5-02cf-4837-8e67-324222e26c9b 197a9b0ee1db487d82542eb31e84f33e 8f938a037b8141cf9408cbf6f5cd081d - - default default] Lock "067b423b-4ac2-4ca9-9000-44b6fb2af34e-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:17:39 np0005548731 nova_compute[232433]: 2025-12-06 07:17:39.443 232437 DEBUG nova.compute.manager [None req-1b05c8c5-02cf-4837-8e67-324222e26c9b 197a9b0ee1db487d82542eb31e84f33e 8f938a037b8141cf9408cbf6f5cd081d - - default default] [instance: 067b423b-4ac2-4ca9-9000-44b6fb2af34e] Preparing to wait for external event network-vif-plugged-0afb38e3-893f-4379-98a8-4a56e2b84f9d prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec  6 02:17:39 np0005548731 nova_compute[232433]: 2025-12-06 07:17:39.443 232437 DEBUG oslo_concurrency.lockutils [None req-1b05c8c5-02cf-4837-8e67-324222e26c9b 197a9b0ee1db487d82542eb31e84f33e 8f938a037b8141cf9408cbf6f5cd081d - - default default] Acquiring lock "067b423b-4ac2-4ca9-9000-44b6fb2af34e-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:17:39 np0005548731 nova_compute[232433]: 2025-12-06 07:17:39.443 232437 DEBUG oslo_concurrency.lockutils [None req-1b05c8c5-02cf-4837-8e67-324222e26c9b 197a9b0ee1db487d82542eb31e84f33e 8f938a037b8141cf9408cbf6f5cd081d - - default default] Lock "067b423b-4ac2-4ca9-9000-44b6fb2af34e-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:17:39 np0005548731 nova_compute[232433]: 2025-12-06 07:17:39.443 232437 DEBUG oslo_concurrency.lockutils [None req-1b05c8c5-02cf-4837-8e67-324222e26c9b 197a9b0ee1db487d82542eb31e84f33e 8f938a037b8141cf9408cbf6f5cd081d - - default default] Lock "067b423b-4ac2-4ca9-9000-44b6fb2af34e-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:17:39 np0005548731 nova_compute[232433]: 2025-12-06 07:17:39.443 232437 DEBUG nova.compute.manager [None req-1b05c8c5-02cf-4837-8e67-324222e26c9b 197a9b0ee1db487d82542eb31e84f33e 8f938a037b8141cf9408cbf6f5cd081d - - default default] [instance: 067b423b-4ac2-4ca9-9000-44b6fb2af34e] Preparing to wait for external event network-vif-plugged-e27e5b98-b454-4f5a-90a9-3f7a5c8f1a91 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec  6 02:17:39 np0005548731 nova_compute[232433]: 2025-12-06 07:17:39.444 232437 DEBUG oslo_concurrency.lockutils [None req-1b05c8c5-02cf-4837-8e67-324222e26c9b 197a9b0ee1db487d82542eb31e84f33e 8f938a037b8141cf9408cbf6f5cd081d - - default default] Acquiring lock "067b423b-4ac2-4ca9-9000-44b6fb2af34e-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:17:39 np0005548731 nova_compute[232433]: 2025-12-06 07:17:39.444 232437 DEBUG oslo_concurrency.lockutils [None req-1b05c8c5-02cf-4837-8e67-324222e26c9b 197a9b0ee1db487d82542eb31e84f33e 8f938a037b8141cf9408cbf6f5cd081d - - default default] Lock "067b423b-4ac2-4ca9-9000-44b6fb2af34e-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:17:39 np0005548731 nova_compute[232433]: 2025-12-06 07:17:39.444 232437 DEBUG oslo_concurrency.lockutils [None req-1b05c8c5-02cf-4837-8e67-324222e26c9b 197a9b0ee1db487d82542eb31e84f33e 8f938a037b8141cf9408cbf6f5cd081d - - default default] Lock "067b423b-4ac2-4ca9-9000-44b6fb2af34e-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:17:39 np0005548731 nova_compute[232433]: 2025-12-06 07:17:39.444 232437 DEBUG nova.compute.manager [None req-1b05c8c5-02cf-4837-8e67-324222e26c9b 197a9b0ee1db487d82542eb31e84f33e 8f938a037b8141cf9408cbf6f5cd081d - - default default] [instance: 067b423b-4ac2-4ca9-9000-44b6fb2af34e] Preparing to wait for external event network-vif-plugged-5984d840-e1cd-45dc-87cb-a337e73a753c prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec  6 02:17:39 np0005548731 nova_compute[232433]: 2025-12-06 07:17:39.445 232437 DEBUG oslo_concurrency.lockutils [None req-1b05c8c5-02cf-4837-8e67-324222e26c9b 197a9b0ee1db487d82542eb31e84f33e 8f938a037b8141cf9408cbf6f5cd081d - - default default] Acquiring lock "067b423b-4ac2-4ca9-9000-44b6fb2af34e-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:17:39 np0005548731 nova_compute[232433]: 2025-12-06 07:17:39.445 232437 DEBUG oslo_concurrency.lockutils [None req-1b05c8c5-02cf-4837-8e67-324222e26c9b 197a9b0ee1db487d82542eb31e84f33e 8f938a037b8141cf9408cbf6f5cd081d - - default default] Lock "067b423b-4ac2-4ca9-9000-44b6fb2af34e-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:17:39 np0005548731 nova_compute[232433]: 2025-12-06 07:17:39.445 232437 DEBUG oslo_concurrency.lockutils [None req-1b05c8c5-02cf-4837-8e67-324222e26c9b 197a9b0ee1db487d82542eb31e84f33e 8f938a037b8141cf9408cbf6f5cd081d - - default default] Lock "067b423b-4ac2-4ca9-9000-44b6fb2af34e-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:17:39 np0005548731 nova_compute[232433]: 2025-12-06 07:17:39.445 232437 DEBUG nova.compute.manager [None req-1b05c8c5-02cf-4837-8e67-324222e26c9b 197a9b0ee1db487d82542eb31e84f33e 8f938a037b8141cf9408cbf6f5cd081d - - default default] [instance: 067b423b-4ac2-4ca9-9000-44b6fb2af34e] Preparing to wait for external event network-vif-plugged-509f7f18-31bd-42cd-83f5-e9b62a4e200b prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec  6 02:17:39 np0005548731 nova_compute[232433]: 2025-12-06 07:17:39.445 232437 DEBUG oslo_concurrency.lockutils [None req-1b05c8c5-02cf-4837-8e67-324222e26c9b 197a9b0ee1db487d82542eb31e84f33e 8f938a037b8141cf9408cbf6f5cd081d - - default default] Acquiring lock "067b423b-4ac2-4ca9-9000-44b6fb2af34e-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:17:39 np0005548731 nova_compute[232433]: 2025-12-06 07:17:39.446 232437 DEBUG oslo_concurrency.lockutils [None req-1b05c8c5-02cf-4837-8e67-324222e26c9b 197a9b0ee1db487d82542eb31e84f33e 8f938a037b8141cf9408cbf6f5cd081d - - default default] Lock "067b423b-4ac2-4ca9-9000-44b6fb2af34e-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:17:39 np0005548731 nova_compute[232433]: 2025-12-06 07:17:39.446 232437 DEBUG oslo_concurrency.lockutils [None req-1b05c8c5-02cf-4837-8e67-324222e26c9b 197a9b0ee1db487d82542eb31e84f33e 8f938a037b8141cf9408cbf6f5cd081d - - default default] Lock "067b423b-4ac2-4ca9-9000-44b6fb2af34e-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:17:39 np0005548731 nova_compute[232433]: 2025-12-06 07:17:39.446 232437 DEBUG nova.compute.manager [None req-1b05c8c5-02cf-4837-8e67-324222e26c9b 197a9b0ee1db487d82542eb31e84f33e 8f938a037b8141cf9408cbf6f5cd081d - - default default] [instance: 067b423b-4ac2-4ca9-9000-44b6fb2af34e] Preparing to wait for external event network-vif-plugged-0355ba71-a598-4e22-a0b7-c5c10b61733e prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec  6 02:17:39 np0005548731 nova_compute[232433]: 2025-12-06 07:17:39.446 232437 DEBUG oslo_concurrency.lockutils [None req-1b05c8c5-02cf-4837-8e67-324222e26c9b 197a9b0ee1db487d82542eb31e84f33e 8f938a037b8141cf9408cbf6f5cd081d - - default default] Acquiring lock "067b423b-4ac2-4ca9-9000-44b6fb2af34e-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:17:39 np0005548731 nova_compute[232433]: 2025-12-06 07:17:39.446 232437 DEBUG oslo_concurrency.lockutils [None req-1b05c8c5-02cf-4837-8e67-324222e26c9b 197a9b0ee1db487d82542eb31e84f33e 8f938a037b8141cf9408cbf6f5cd081d - - default default] Lock "067b423b-4ac2-4ca9-9000-44b6fb2af34e-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:17:39 np0005548731 nova_compute[232433]: 2025-12-06 07:17:39.447 232437 DEBUG oslo_concurrency.lockutils [None req-1b05c8c5-02cf-4837-8e67-324222e26c9b 197a9b0ee1db487d82542eb31e84f33e 8f938a037b8141cf9408cbf6f5cd081d - - default default] Lock "067b423b-4ac2-4ca9-9000-44b6fb2af34e-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:17:39 np0005548731 nova_compute[232433]: 2025-12-06 07:17:39.447 232437 DEBUG nova.virt.libvirt.vif [None req-1b05c8c5-02cf-4837-8e67-324222e26c9b 197a9b0ee1db487d82542eb31e84f33e 8f938a037b8141cf9408cbf6f5cd081d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T07:16:55Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-device-tagging-server-1733639154',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-device-tagging-server-1733639154',id=75,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHdu5SJtQNPQGb6FScAscHwhPOc3vR8R0vyVdAVKvBqqjhGOukzZ8D/fDg4EduNBcT5BiiJbdANtRqZ5sQMLiRX9x5mErRanB2F2dkvm8uC5tUXlBwNqaOj1FVwBesGPnQ==',key_name='tempest-keypair-964256358',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='8f938a037b8141cf9408cbf6f5cd081d',ramdisk_id='',reservation_id='r-5mbntsd9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TaggedBootDevicesTest-128611843',owner_user_name='tempest-TaggedBootDevicesTest-128611843-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:17:03Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='197a9b0ee1db487d82542eb31e84f33e',uuid=067b423b-4ac2-4ca9-9000-44b6fb2af34e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "5408643b-7986-461d-867d-9f6545eeabb3", "address": "fa:16:3e:43:c0:da", "network": {"id": "1867060b-2830-47a2-bf0a-46a10388f745", "bridge": "br-int", "label": "tempest-TaggedBootDevicesTest-1327949345-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8f938a037b8141cf9408cbf6f5cd081d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5408643b-79", "ovs_interfaceid": "5408643b-7986-461d-867d-9f6545eeabb3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec  6 02:17:39 np0005548731 nova_compute[232433]: 2025-12-06 07:17:39.448 232437 DEBUG nova.network.os_vif_util [None req-1b05c8c5-02cf-4837-8e67-324222e26c9b 197a9b0ee1db487d82542eb31e84f33e 8f938a037b8141cf9408cbf6f5cd081d - - default default] Converting VIF {"id": "5408643b-7986-461d-867d-9f6545eeabb3", "address": "fa:16:3e:43:c0:da", "network": {"id": "1867060b-2830-47a2-bf0a-46a10388f745", "bridge": "br-int", "label": "tempest-TaggedBootDevicesTest-1327949345-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8f938a037b8141cf9408cbf6f5cd081d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5408643b-79", "ovs_interfaceid": "5408643b-7986-461d-867d-9f6545eeabb3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 02:17:39 np0005548731 nova_compute[232433]: 2025-12-06 07:17:39.448 232437 DEBUG nova.network.os_vif_util [None req-1b05c8c5-02cf-4837-8e67-324222e26c9b 197a9b0ee1db487d82542eb31e84f33e 8f938a037b8141cf9408cbf6f5cd081d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:43:c0:da,bridge_name='br-int',has_traffic_filtering=True,id=5408643b-7986-461d-867d-9f6545eeabb3,network=Network(1867060b-2830-47a2-bf0a-46a10388f745),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5408643b-79') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 02:17:39 np0005548731 nova_compute[232433]: 2025-12-06 07:17:39.448 232437 DEBUG os_vif [None req-1b05c8c5-02cf-4837-8e67-324222e26c9b 197a9b0ee1db487d82542eb31e84f33e 8f938a037b8141cf9408cbf6f5cd081d - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:43:c0:da,bridge_name='br-int',has_traffic_filtering=True,id=5408643b-7986-461d-867d-9f6545eeabb3,network=Network(1867060b-2830-47a2-bf0a-46a10388f745),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5408643b-79') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec  6 02:17:39 np0005548731 nova_compute[232433]: 2025-12-06 07:17:39.449 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:17:39 np0005548731 nova_compute[232433]: 2025-12-06 07:17:39.449 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:17:39 np0005548731 nova_compute[232433]: 2025-12-06 07:17:39.450 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  6 02:17:39 np0005548731 nova_compute[232433]: 2025-12-06 07:17:39.453 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:17:39 np0005548731 nova_compute[232433]: 2025-12-06 07:17:39.454 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5408643b-79, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:17:39 np0005548731 nova_compute[232433]: 2025-12-06 07:17:39.454 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap5408643b-79, col_values=(('external_ids', {'iface-id': '5408643b-7986-461d-867d-9f6545eeabb3', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:43:c0:da', 'vm-uuid': '067b423b-4ac2-4ca9-9000-44b6fb2af34e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:17:39 np0005548731 nova_compute[232433]: 2025-12-06 07:17:39.455 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:17:39 np0005548731 NetworkManager[49182]: <info>  [1765005459.4566] manager: (tap5408643b-79): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/133)
Dec  6 02:17:39 np0005548731 nova_compute[232433]: 2025-12-06 07:17:39.458 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  6 02:17:39 np0005548731 nova_compute[232433]: 2025-12-06 07:17:39.463 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:17:39 np0005548731 nova_compute[232433]: 2025-12-06 07:17:39.463 232437 INFO os_vif [None req-1b05c8c5-02cf-4837-8e67-324222e26c9b 197a9b0ee1db487d82542eb31e84f33e 8f938a037b8141cf9408cbf6f5cd081d - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:43:c0:da,bridge_name='br-int',has_traffic_filtering=True,id=5408643b-7986-461d-867d-9f6545eeabb3,network=Network(1867060b-2830-47a2-bf0a-46a10388f745),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5408643b-79')#033[00m
Dec  6 02:17:39 np0005548731 nova_compute[232433]: 2025-12-06 07:17:39.464 232437 DEBUG nova.virt.libvirt.vif [None req-1b05c8c5-02cf-4837-8e67-324222e26c9b 197a9b0ee1db487d82542eb31e84f33e 8f938a037b8141cf9408cbf6f5cd081d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T07:16:55Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-device-tagging-server-1733639154',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-device-tagging-server-1733639154',id=75,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHdu5SJtQNPQGb6FScAscHwhPOc3vR8R0vyVdAVKvBqqjhGOukzZ8D/fDg4EduNBcT5BiiJbdANtRqZ5sQMLiRX9x5mErRanB2F2dkvm8uC5tUXlBwNqaOj1FVwBesGPnQ==',key_name='tempest-keypair-964256358',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='8f938a037b8141cf9408cbf6f5cd081d',ramdisk_id='',reservation_id='r-5mbntsd9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TaggedBootDevicesTest-128611843',owner_user_name='tempest-TaggedBootDevicesTest-128611843-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:17:03Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='197a9b0ee1db487d82542eb31e84f33e',uuid=067b423b-4ac2-4ca9-9000-44b6fb2af34e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0f3261df-8a99-4884-8db8-4ab5250dafc3", "address": "fa:16:3e:33:5a:4c", "network": {"id": "83f8ee1c-bee2-4425-8792-4c822f63072c", "bridge": "br-int", "label": "tempest-device-tagging-net1-1182028938", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.159", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8f938a037b8141cf9408cbf6f5cd081d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0f3261df-8a", "ovs_interfaceid": "0f3261df-8a99-4884-8db8-4ab5250dafc3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec  6 02:17:39 np0005548731 nova_compute[232433]: 2025-12-06 07:17:39.464 232437 DEBUG nova.network.os_vif_util [None req-1b05c8c5-02cf-4837-8e67-324222e26c9b 197a9b0ee1db487d82542eb31e84f33e 8f938a037b8141cf9408cbf6f5cd081d - - default default] Converting VIF {"id": "0f3261df-8a99-4884-8db8-4ab5250dafc3", "address": "fa:16:3e:33:5a:4c", "network": {"id": "83f8ee1c-bee2-4425-8792-4c822f63072c", "bridge": "br-int", "label": "tempest-device-tagging-net1-1182028938", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.159", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8f938a037b8141cf9408cbf6f5cd081d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0f3261df-8a", "ovs_interfaceid": "0f3261df-8a99-4884-8db8-4ab5250dafc3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 02:17:39 np0005548731 nova_compute[232433]: 2025-12-06 07:17:39.465 232437 DEBUG nova.network.os_vif_util [None req-1b05c8c5-02cf-4837-8e67-324222e26c9b 197a9b0ee1db487d82542eb31e84f33e 8f938a037b8141cf9408cbf6f5cd081d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:33:5a:4c,bridge_name='br-int',has_traffic_filtering=True,id=0f3261df-8a99-4884-8db8-4ab5250dafc3,network=Network(83f8ee1c-bee2-4425-8792-4c822f63072c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap0f3261df-8a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 02:17:39 np0005548731 nova_compute[232433]: 2025-12-06 07:17:39.465 232437 DEBUG os_vif [None req-1b05c8c5-02cf-4837-8e67-324222e26c9b 197a9b0ee1db487d82542eb31e84f33e 8f938a037b8141cf9408cbf6f5cd081d - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:33:5a:4c,bridge_name='br-int',has_traffic_filtering=True,id=0f3261df-8a99-4884-8db8-4ab5250dafc3,network=Network(83f8ee1c-bee2-4425-8792-4c822f63072c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap0f3261df-8a') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec  6 02:17:39 np0005548731 nova_compute[232433]: 2025-12-06 07:17:39.465 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:17:39 np0005548731 nova_compute[232433]: 2025-12-06 07:17:39.466 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:17:39 np0005548731 nova_compute[232433]: 2025-12-06 07:17:39.466 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  6 02:17:39 np0005548731 nova_compute[232433]: 2025-12-06 07:17:39.468 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:17:39 np0005548731 nova_compute[232433]: 2025-12-06 07:17:39.468 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0f3261df-8a, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:17:39 np0005548731 nova_compute[232433]: 2025-12-06 07:17:39.468 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap0f3261df-8a, col_values=(('external_ids', {'iface-id': '0f3261df-8a99-4884-8db8-4ab5250dafc3', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:33:5a:4c', 'vm-uuid': '067b423b-4ac2-4ca9-9000-44b6fb2af34e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:17:39 np0005548731 nova_compute[232433]: 2025-12-06 07:17:39.469 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:17:39 np0005548731 NetworkManager[49182]: <info>  [1765005459.4704] manager: (tap0f3261df-8a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/134)
Dec  6 02:17:39 np0005548731 nova_compute[232433]: 2025-12-06 07:17:39.471 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  6 02:17:39 np0005548731 nova_compute[232433]: 2025-12-06 07:17:39.476 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:17:39 np0005548731 nova_compute[232433]: 2025-12-06 07:17:39.477 232437 INFO os_vif [None req-1b05c8c5-02cf-4837-8e67-324222e26c9b 197a9b0ee1db487d82542eb31e84f33e 8f938a037b8141cf9408cbf6f5cd081d - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:33:5a:4c,bridge_name='br-int',has_traffic_filtering=True,id=0f3261df-8a99-4884-8db8-4ab5250dafc3,network=Network(83f8ee1c-bee2-4425-8792-4c822f63072c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap0f3261df-8a')#033[00m
Dec  6 02:17:39 np0005548731 nova_compute[232433]: 2025-12-06 07:17:39.477 232437 DEBUG nova.virt.libvirt.vif [None req-1b05c8c5-02cf-4837-8e67-324222e26c9b 197a9b0ee1db487d82542eb31e84f33e 8f938a037b8141cf9408cbf6f5cd081d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T07:16:55Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-device-tagging-server-1733639154',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-device-tagging-server-1733639154',id=75,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHdu5SJtQNPQGb6FScAscHwhPOc3vR8R0vyVdAVKvBqqjhGOukzZ8D/fDg4EduNBcT5BiiJbdANtRqZ5sQMLiRX9x5mErRanB2F2dkvm8uC5tUXlBwNqaOj1FVwBesGPnQ==',key_name='tempest-keypair-964256358',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='8f938a037b8141cf9408cbf6f5cd081d',ramdisk_id='',reservation_id='r-5mbntsd9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TaggedBootDevicesTest-128611843',owner_user_name='tempest-TaggedBootDevicesTest-128611843-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:17:03Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='197a9b0ee1db487d82542eb31e84f33e',uuid=067b423b-4ac2-4ca9-9000-44b6fb2af34e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0afb38e3-893f-4379-98a8-4a56e2b84f9d", "address": "fa:16:3e:b0:7f:ee", "network": {"id": "83f8ee1c-bee2-4425-8792-4c822f63072c", "bridge": "br-int", "label": "tempest-device-tagging-net1-1182028938", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.142", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8f938a037b8141cf9408cbf6f5cd081d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0afb38e3-89", "ovs_interfaceid": "0afb38e3-893f-4379-98a8-4a56e2b84f9d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec  6 02:17:39 np0005548731 nova_compute[232433]: 2025-12-06 07:17:39.478 232437 DEBUG nova.network.os_vif_util [None req-1b05c8c5-02cf-4837-8e67-324222e26c9b 197a9b0ee1db487d82542eb31e84f33e 8f938a037b8141cf9408cbf6f5cd081d - - default default] Converting VIF {"id": "0afb38e3-893f-4379-98a8-4a56e2b84f9d", "address": "fa:16:3e:b0:7f:ee", "network": {"id": "83f8ee1c-bee2-4425-8792-4c822f63072c", "bridge": "br-int", "label": "tempest-device-tagging-net1-1182028938", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.142", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8f938a037b8141cf9408cbf6f5cd081d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0afb38e3-89", "ovs_interfaceid": "0afb38e3-893f-4379-98a8-4a56e2b84f9d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 02:17:39 np0005548731 nova_compute[232433]: 2025-12-06 07:17:39.478 232437 DEBUG nova.network.os_vif_util [None req-1b05c8c5-02cf-4837-8e67-324222e26c9b 197a9b0ee1db487d82542eb31e84f33e 8f938a037b8141cf9408cbf6f5cd081d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b0:7f:ee,bridge_name='br-int',has_traffic_filtering=True,id=0afb38e3-893f-4379-98a8-4a56e2b84f9d,network=Network(83f8ee1c-bee2-4425-8792-4c822f63072c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap0afb38e3-89') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 02:17:39 np0005548731 nova_compute[232433]: 2025-12-06 07:17:39.479 232437 DEBUG os_vif [None req-1b05c8c5-02cf-4837-8e67-324222e26c9b 197a9b0ee1db487d82542eb31e84f33e 8f938a037b8141cf9408cbf6f5cd081d - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b0:7f:ee,bridge_name='br-int',has_traffic_filtering=True,id=0afb38e3-893f-4379-98a8-4a56e2b84f9d,network=Network(83f8ee1c-bee2-4425-8792-4c822f63072c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap0afb38e3-89') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec  6 02:17:39 np0005548731 nova_compute[232433]: 2025-12-06 07:17:39.479 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:17:39 np0005548731 nova_compute[232433]: 2025-12-06 07:17:39.479 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:17:39 np0005548731 nova_compute[232433]: 2025-12-06 07:17:39.480 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  6 02:17:39 np0005548731 nova_compute[232433]: 2025-12-06 07:17:39.481 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:17:39 np0005548731 nova_compute[232433]: 2025-12-06 07:17:39.481 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0afb38e3-89, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:17:39 np0005548731 nova_compute[232433]: 2025-12-06 07:17:39.482 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap0afb38e3-89, col_values=(('external_ids', {'iface-id': '0afb38e3-893f-4379-98a8-4a56e2b84f9d', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:b0:7f:ee', 'vm-uuid': '067b423b-4ac2-4ca9-9000-44b6fb2af34e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:17:39 np0005548731 nova_compute[232433]: 2025-12-06 07:17:39.483 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:17:39 np0005548731 NetworkManager[49182]: <info>  [1765005459.4844] manager: (tap0afb38e3-89): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/135)
Dec  6 02:17:39 np0005548731 nova_compute[232433]: 2025-12-06 07:17:39.486 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  6 02:17:39 np0005548731 nova_compute[232433]: 2025-12-06 07:17:39.491 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:17:39 np0005548731 nova_compute[232433]: 2025-12-06 07:17:39.492 232437 INFO os_vif [None req-1b05c8c5-02cf-4837-8e67-324222e26c9b 197a9b0ee1db487d82542eb31e84f33e 8f938a037b8141cf9408cbf6f5cd081d - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b0:7f:ee,bridge_name='br-int',has_traffic_filtering=True,id=0afb38e3-893f-4379-98a8-4a56e2b84f9d,network=Network(83f8ee1c-bee2-4425-8792-4c822f63072c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap0afb38e3-89')#033[00m
Dec  6 02:17:39 np0005548731 nova_compute[232433]: 2025-12-06 07:17:39.493 232437 DEBUG nova.virt.libvirt.vif [None req-1b05c8c5-02cf-4837-8e67-324222e26c9b 197a9b0ee1db487d82542eb31e84f33e 8f938a037b8141cf9408cbf6f5cd081d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T07:16:55Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-device-tagging-server-1733639154',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-device-tagging-server-1733639154',id=75,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHdu5SJtQNPQGb6FScAscHwhPOc3vR8R0vyVdAVKvBqqjhGOukzZ8D/fDg4EduNBcT5BiiJbdANtRqZ5sQMLiRX9x5mErRanB2F2dkvm8uC5tUXlBwNqaOj1FVwBesGPnQ==',key_name='tempest-keypair-964256358',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='8f938a037b8141cf9408cbf6f5cd081d',ramdisk_id='',reservation_id='r-5mbntsd9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TaggedBootDevicesTest-128611843',owner_user_name='tempest-TaggedBootDevicesTest-128611843-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:17:03Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='197a9b0ee1db487d82542eb31e84f33e',uuid=067b423b-4ac2-4ca9-9000-44b6fb2af34e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e27e5b98-b454-4f5a-90a9-3f7a5c8f1a91", "address": "fa:16:3e:44:6f:c0", "network": {"id": "83f8ee1c-bee2-4425-8792-4c822f63072c", "bridge": "br-int", "label": "tempest-device-tagging-net1-1182028938", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.98", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8f938a037b8141cf9408cbf6f5cd081d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape27e5b98-b4", "ovs_interfaceid": "e27e5b98-b454-4f5a-90a9-3f7a5c8f1a91", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec  6 02:17:39 np0005548731 nova_compute[232433]: 2025-12-06 07:17:39.493 232437 DEBUG nova.network.os_vif_util [None req-1b05c8c5-02cf-4837-8e67-324222e26c9b 197a9b0ee1db487d82542eb31e84f33e 8f938a037b8141cf9408cbf6f5cd081d - - default default] Converting VIF {"id": "e27e5b98-b454-4f5a-90a9-3f7a5c8f1a91", "address": "fa:16:3e:44:6f:c0", "network": {"id": "83f8ee1c-bee2-4425-8792-4c822f63072c", "bridge": "br-int", "label": "tempest-device-tagging-net1-1182028938", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.98", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8f938a037b8141cf9408cbf6f5cd081d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape27e5b98-b4", "ovs_interfaceid": "e27e5b98-b454-4f5a-90a9-3f7a5c8f1a91", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 02:17:39 np0005548731 nova_compute[232433]: 2025-12-06 07:17:39.494 232437 DEBUG nova.network.os_vif_util [None req-1b05c8c5-02cf-4837-8e67-324222e26c9b 197a9b0ee1db487d82542eb31e84f33e 8f938a037b8141cf9408cbf6f5cd081d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:44:6f:c0,bridge_name='br-int',has_traffic_filtering=True,id=e27e5b98-b454-4f5a-90a9-3f7a5c8f1a91,network=Network(83f8ee1c-bee2-4425-8792-4c822f63072c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape27e5b98-b4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 02:17:39 np0005548731 nova_compute[232433]: 2025-12-06 07:17:39.494 232437 DEBUG os_vif [None req-1b05c8c5-02cf-4837-8e67-324222e26c9b 197a9b0ee1db487d82542eb31e84f33e 8f938a037b8141cf9408cbf6f5cd081d - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:44:6f:c0,bridge_name='br-int',has_traffic_filtering=True,id=e27e5b98-b454-4f5a-90a9-3f7a5c8f1a91,network=Network(83f8ee1c-bee2-4425-8792-4c822f63072c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape27e5b98-b4') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec  6 02:17:39 np0005548731 nova_compute[232433]: 2025-12-06 07:17:39.494 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:17:39 np0005548731 nova_compute[232433]: 2025-12-06 07:17:39.495 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:17:39 np0005548731 nova_compute[232433]: 2025-12-06 07:17:39.495 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  6 02:17:39 np0005548731 nova_compute[232433]: 2025-12-06 07:17:39.497 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:17:39 np0005548731 nova_compute[232433]: 2025-12-06 07:17:39.497 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape27e5b98-b4, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:17:39 np0005548731 nova_compute[232433]: 2025-12-06 07:17:39.497 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tape27e5b98-b4, col_values=(('external_ids', {'iface-id': 'e27e5b98-b454-4f5a-90a9-3f7a5c8f1a91', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:44:6f:c0', 'vm-uuid': '067b423b-4ac2-4ca9-9000-44b6fb2af34e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:17:39 np0005548731 nova_compute[232433]: 2025-12-06 07:17:39.498 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:17:39 np0005548731 NetworkManager[49182]: <info>  [1765005459.4994] manager: (tape27e5b98-b4): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/136)
Dec  6 02:17:39 np0005548731 nova_compute[232433]: 2025-12-06 07:17:39.500 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  6 02:17:39 np0005548731 nova_compute[232433]: 2025-12-06 07:17:39.509 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:17:39 np0005548731 nova_compute[232433]: 2025-12-06 07:17:39.510 232437 INFO os_vif [None req-1b05c8c5-02cf-4837-8e67-324222e26c9b 197a9b0ee1db487d82542eb31e84f33e 8f938a037b8141cf9408cbf6f5cd081d - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:44:6f:c0,bridge_name='br-int',has_traffic_filtering=True,id=e27e5b98-b454-4f5a-90a9-3f7a5c8f1a91,network=Network(83f8ee1c-bee2-4425-8792-4c822f63072c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape27e5b98-b4')#033[00m
Dec  6 02:17:39 np0005548731 nova_compute[232433]: 2025-12-06 07:17:39.510 232437 DEBUG nova.virt.libvirt.vif [None req-1b05c8c5-02cf-4837-8e67-324222e26c9b 197a9b0ee1db487d82542eb31e84f33e 8f938a037b8141cf9408cbf6f5cd081d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T07:16:55Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-device-tagging-server-1733639154',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-device-tagging-server-1733639154',id=75,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHdu5SJtQNPQGb6FScAscHwhPOc3vR8R0vyVdAVKvBqqjhGOukzZ8D/fDg4EduNBcT5BiiJbdANtRqZ5sQMLiRX9x5mErRanB2F2dkvm8uC5tUXlBwNqaOj1FVwBesGPnQ==',key_name='tempest-keypair-964256358',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='8f938a037b8141cf9408cbf6f5cd081d',ramdisk_id='',reservation_id='r-5mbntsd9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TaggedBootDevicesTest-128611843',owner_user_name='tempest-TaggedBootDevicesTest-128611843-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:17:03Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='197a9b0ee1db487d82542eb31e84f33e',uuid=067b423b-4ac2-4ca9-9000-44b6fb2af34e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "5984d840-e1cd-45dc-87cb-a337e73a753c", "address": "fa:16:3e:0b:78:59", "network": {"id": "83f8ee1c-bee2-4425-8792-4c822f63072c", "bridge": "br-int", "label": "tempest-device-tagging-net1-1182028938", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.181", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8f938a037b8141cf9408cbf6f5cd081d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5984d840-e1", "ovs_interfaceid": "5984d840-e1cd-45dc-87cb-a337e73a753c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec  6 02:17:39 np0005548731 nova_compute[232433]: 2025-12-06 07:17:39.511 232437 DEBUG nova.network.os_vif_util [None req-1b05c8c5-02cf-4837-8e67-324222e26c9b 197a9b0ee1db487d82542eb31e84f33e 8f938a037b8141cf9408cbf6f5cd081d - - default default] Converting VIF {"id": "5984d840-e1cd-45dc-87cb-a337e73a753c", "address": "fa:16:3e:0b:78:59", "network": {"id": "83f8ee1c-bee2-4425-8792-4c822f63072c", "bridge": "br-int", "label": "tempest-device-tagging-net1-1182028938", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.181", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8f938a037b8141cf9408cbf6f5cd081d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5984d840-e1", "ovs_interfaceid": "5984d840-e1cd-45dc-87cb-a337e73a753c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 02:17:39 np0005548731 nova_compute[232433]: 2025-12-06 07:17:39.511 232437 DEBUG nova.network.os_vif_util [None req-1b05c8c5-02cf-4837-8e67-324222e26c9b 197a9b0ee1db487d82542eb31e84f33e 8f938a037b8141cf9408cbf6f5cd081d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:0b:78:59,bridge_name='br-int',has_traffic_filtering=True,id=5984d840-e1cd-45dc-87cb-a337e73a753c,network=Network(83f8ee1c-bee2-4425-8792-4c822f63072c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5984d840-e1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 02:17:39 np0005548731 nova_compute[232433]: 2025-12-06 07:17:39.512 232437 DEBUG os_vif [None req-1b05c8c5-02cf-4837-8e67-324222e26c9b 197a9b0ee1db487d82542eb31e84f33e 8f938a037b8141cf9408cbf6f5cd081d - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:0b:78:59,bridge_name='br-int',has_traffic_filtering=True,id=5984d840-e1cd-45dc-87cb-a337e73a753c,network=Network(83f8ee1c-bee2-4425-8792-4c822f63072c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5984d840-e1') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec  6 02:17:39 np0005548731 nova_compute[232433]: 2025-12-06 07:17:39.512 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:17:39 np0005548731 nova_compute[232433]: 2025-12-06 07:17:39.512 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:17:39 np0005548731 nova_compute[232433]: 2025-12-06 07:17:39.512 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  6 02:17:39 np0005548731 nova_compute[232433]: 2025-12-06 07:17:39.514 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:17:39 np0005548731 nova_compute[232433]: 2025-12-06 07:17:39.514 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5984d840-e1, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:17:39 np0005548731 nova_compute[232433]: 2025-12-06 07:17:39.515 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap5984d840-e1, col_values=(('external_ids', {'iface-id': '5984d840-e1cd-45dc-87cb-a337e73a753c', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:0b:78:59', 'vm-uuid': '067b423b-4ac2-4ca9-9000-44b6fb2af34e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:17:39 np0005548731 nova_compute[232433]: 2025-12-06 07:17:39.516 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:17:39 np0005548731 NetworkManager[49182]: <info>  [1765005459.5169] manager: (tap5984d840-e1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/137)
Dec  6 02:17:39 np0005548731 nova_compute[232433]: 2025-12-06 07:17:39.518 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  6 02:17:39 np0005548731 nova_compute[232433]: 2025-12-06 07:17:39.529 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:17:39 np0005548731 nova_compute[232433]: 2025-12-06 07:17:39.529 232437 INFO os_vif [None req-1b05c8c5-02cf-4837-8e67-324222e26c9b 197a9b0ee1db487d82542eb31e84f33e 8f938a037b8141cf9408cbf6f5cd081d - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:0b:78:59,bridge_name='br-int',has_traffic_filtering=True,id=5984d840-e1cd-45dc-87cb-a337e73a753c,network=Network(83f8ee1c-bee2-4425-8792-4c822f63072c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5984d840-e1')#033[00m
Dec  6 02:17:39 np0005548731 nova_compute[232433]: 2025-12-06 07:17:39.530 232437 DEBUG nova.virt.libvirt.vif [None req-1b05c8c5-02cf-4837-8e67-324222e26c9b 197a9b0ee1db487d82542eb31e84f33e 8f938a037b8141cf9408cbf6f5cd081d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T07:16:55Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-device-tagging-server-1733639154',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-device-tagging-server-1733639154',id=75,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHdu5SJtQNPQGb6FScAscHwhPOc3vR8R0vyVdAVKvBqqjhGOukzZ8D/fDg4EduNBcT5BiiJbdANtRqZ5sQMLiRX9x5mErRanB2F2dkvm8uC5tUXlBwNqaOj1FVwBesGPnQ==',key_name='tempest-keypair-964256358',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='8f938a037b8141cf9408cbf6f5cd081d',ramdisk_id='',reservation_id='r-5mbntsd9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_mode
l='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TaggedBootDevicesTest-128611843',owner_user_name='tempest-TaggedBootDevicesTest-128611843-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:17:03Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='197a9b0ee1db487d82542eb31e84f33e',uuid=067b423b-4ac2-4ca9-9000-44b6fb2af34e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "509f7f18-31bd-42cd-83f5-e9b62a4e200b", "address": "fa:16:3e:3a:d0:96", "network": {"id": "8856153c-fd33-4eca-913c-fbbc6c3bd29c", "bridge": "br-int", "label": "tempest-device-tagging-net2-1411782491", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.2.2.100", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8f938a037b8141cf9408cbf6f5cd081d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap509f7f18-31", "ovs_interfaceid": "509f7f18-31bd-42cd-83f5-e9b62a4e200b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec  6 02:17:39 np0005548731 nova_compute[232433]: 2025-12-06 07:17:39.530 232437 DEBUG nova.network.os_vif_util [None req-1b05c8c5-02cf-4837-8e67-324222e26c9b 197a9b0ee1db487d82542eb31e84f33e 8f938a037b8141cf9408cbf6f5cd081d - - default default] Converting VIF {"id": "509f7f18-31bd-42cd-83f5-e9b62a4e200b", "address": "fa:16:3e:3a:d0:96", "network": {"id": "8856153c-fd33-4eca-913c-fbbc6c3bd29c", "bridge": "br-int", "label": "tempest-device-tagging-net2-1411782491", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.2.2.100", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8f938a037b8141cf9408cbf6f5cd081d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap509f7f18-31", "ovs_interfaceid": "509f7f18-31bd-42cd-83f5-e9b62a4e200b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 02:17:39 np0005548731 nova_compute[232433]: 2025-12-06 07:17:39.531 232437 DEBUG nova.network.os_vif_util [None req-1b05c8c5-02cf-4837-8e67-324222e26c9b 197a9b0ee1db487d82542eb31e84f33e 8f938a037b8141cf9408cbf6f5cd081d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3a:d0:96,bridge_name='br-int',has_traffic_filtering=True,id=509f7f18-31bd-42cd-83f5-e9b62a4e200b,network=Network(8856153c-fd33-4eca-913c-fbbc6c3bd29c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap509f7f18-31') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 02:17:39 np0005548731 nova_compute[232433]: 2025-12-06 07:17:39.531 232437 DEBUG os_vif [None req-1b05c8c5-02cf-4837-8e67-324222e26c9b 197a9b0ee1db487d82542eb31e84f33e 8f938a037b8141cf9408cbf6f5cd081d - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:3a:d0:96,bridge_name='br-int',has_traffic_filtering=True,id=509f7f18-31bd-42cd-83f5-e9b62a4e200b,network=Network(8856153c-fd33-4eca-913c-fbbc6c3bd29c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap509f7f18-31') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec  6 02:17:39 np0005548731 nova_compute[232433]: 2025-12-06 07:17:39.532 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:17:39 np0005548731 nova_compute[232433]: 2025-12-06 07:17:39.532 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:17:39 np0005548731 nova_compute[232433]: 2025-12-06 07:17:39.532 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  6 02:17:39 np0005548731 nova_compute[232433]: 2025-12-06 07:17:39.533 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:17:39 np0005548731 nova_compute[232433]: 2025-12-06 07:17:39.534 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap509f7f18-31, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:17:39 np0005548731 nova_compute[232433]: 2025-12-06 07:17:39.534 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap509f7f18-31, col_values=(('external_ids', {'iface-id': '509f7f18-31bd-42cd-83f5-e9b62a4e200b', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:3a:d0:96', 'vm-uuid': '067b423b-4ac2-4ca9-9000-44b6fb2af34e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:17:39 np0005548731 nova_compute[232433]: 2025-12-06 07:17:39.535 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:17:39 np0005548731 NetworkManager[49182]: <info>  [1765005459.5362] manager: (tap509f7f18-31): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/138)
Dec  6 02:17:39 np0005548731 nova_compute[232433]: 2025-12-06 07:17:39.537 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  6 02:17:39 np0005548731 nova_compute[232433]: 2025-12-06 07:17:39.549 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:17:39 np0005548731 nova_compute[232433]: 2025-12-06 07:17:39.550 232437 INFO os_vif [None req-1b05c8c5-02cf-4837-8e67-324222e26c9b 197a9b0ee1db487d82542eb31e84f33e 8f938a037b8141cf9408cbf6f5cd081d - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:3a:d0:96,bridge_name='br-int',has_traffic_filtering=True,id=509f7f18-31bd-42cd-83f5-e9b62a4e200b,network=Network(8856153c-fd33-4eca-913c-fbbc6c3bd29c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap509f7f18-31')#033[00m
Dec  6 02:17:39 np0005548731 nova_compute[232433]: 2025-12-06 07:17:39.550 232437 DEBUG nova.virt.libvirt.vif [None req-1b05c8c5-02cf-4837-8e67-324222e26c9b 197a9b0ee1db487d82542eb31e84f33e 8f938a037b8141cf9408cbf6f5cd081d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T07:16:55Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-device-tagging-server-1733639154',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-device-tagging-server-1733639154',id=75,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHdu5SJtQNPQGb6FScAscHwhPOc3vR8R0vyVdAVKvBqqjhGOukzZ8D/fDg4EduNBcT5BiiJbdANtRqZ5sQMLiRX9x5mErRanB2F2dkvm8uC5tUXlBwNqaOj1FVwBesGPnQ==',key_name='tempest-keypair-964256358',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='8f938a037b8141cf9408cbf6f5cd081d',ramdisk_id='',reservation_id='r-5mbntsd9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_mode
l='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TaggedBootDevicesTest-128611843',owner_user_name='tempest-TaggedBootDevicesTest-128611843-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:17:03Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='197a9b0ee1db487d82542eb31e84f33e',uuid=067b423b-4ac2-4ca9-9000-44b6fb2af34e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0355ba71-a598-4e22-a0b7-c5c10b61733e", "address": "fa:16:3e:a4:48:7a", "network": {"id": "8856153c-fd33-4eca-913c-fbbc6c3bd29c", "bridge": "br-int", "label": "tempest-device-tagging-net2-1411782491", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.2.2.200", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8f938a037b8141cf9408cbf6f5cd081d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0355ba71-a5", "ovs_interfaceid": "0355ba71-a598-4e22-a0b7-c5c10b61733e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec  6 02:17:39 np0005548731 nova_compute[232433]: 2025-12-06 07:17:39.551 232437 DEBUG nova.network.os_vif_util [None req-1b05c8c5-02cf-4837-8e67-324222e26c9b 197a9b0ee1db487d82542eb31e84f33e 8f938a037b8141cf9408cbf6f5cd081d - - default default] Converting VIF {"id": "0355ba71-a598-4e22-a0b7-c5c10b61733e", "address": "fa:16:3e:a4:48:7a", "network": {"id": "8856153c-fd33-4eca-913c-fbbc6c3bd29c", "bridge": "br-int", "label": "tempest-device-tagging-net2-1411782491", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.2.2.200", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8f938a037b8141cf9408cbf6f5cd081d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0355ba71-a5", "ovs_interfaceid": "0355ba71-a598-4e22-a0b7-c5c10b61733e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 02:17:39 np0005548731 nova_compute[232433]: 2025-12-06 07:17:39.551 232437 DEBUG nova.network.os_vif_util [None req-1b05c8c5-02cf-4837-8e67-324222e26c9b 197a9b0ee1db487d82542eb31e84f33e 8f938a037b8141cf9408cbf6f5cd081d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a4:48:7a,bridge_name='br-int',has_traffic_filtering=True,id=0355ba71-a598-4e22-a0b7-c5c10b61733e,network=Network(8856153c-fd33-4eca-913c-fbbc6c3bd29c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0355ba71-a5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 02:17:39 np0005548731 nova_compute[232433]: 2025-12-06 07:17:39.552 232437 DEBUG os_vif [None req-1b05c8c5-02cf-4837-8e67-324222e26c9b 197a9b0ee1db487d82542eb31e84f33e 8f938a037b8141cf9408cbf6f5cd081d - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a4:48:7a,bridge_name='br-int',has_traffic_filtering=True,id=0355ba71-a598-4e22-a0b7-c5c10b61733e,network=Network(8856153c-fd33-4eca-913c-fbbc6c3bd29c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0355ba71-a5') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec  6 02:17:39 np0005548731 nova_compute[232433]: 2025-12-06 07:17:39.552 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:17:39 np0005548731 nova_compute[232433]: 2025-12-06 07:17:39.552 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:17:39 np0005548731 nova_compute[232433]: 2025-12-06 07:17:39.552 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  6 02:17:39 np0005548731 nova_compute[232433]: 2025-12-06 07:17:39.554 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:17:39 np0005548731 nova_compute[232433]: 2025-12-06 07:17:39.554 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0355ba71-a5, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:17:39 np0005548731 nova_compute[232433]: 2025-12-06 07:17:39.555 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap0355ba71-a5, col_values=(('external_ids', {'iface-id': '0355ba71-a598-4e22-a0b7-c5c10b61733e', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:a4:48:7a', 'vm-uuid': '067b423b-4ac2-4ca9-9000-44b6fb2af34e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:17:39 np0005548731 nova_compute[232433]: 2025-12-06 07:17:39.556 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:17:39 np0005548731 NetworkManager[49182]: <info>  [1765005459.5571] manager: (tap0355ba71-a5): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/139)
Dec  6 02:17:39 np0005548731 nova_compute[232433]: 2025-12-06 07:17:39.558 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  6 02:17:39 np0005548731 nova_compute[232433]: 2025-12-06 07:17:39.571 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:17:39 np0005548731 nova_compute[232433]: 2025-12-06 07:17:39.572 232437 INFO os_vif [None req-1b05c8c5-02cf-4837-8e67-324222e26c9b 197a9b0ee1db487d82542eb31e84f33e 8f938a037b8141cf9408cbf6f5cd081d - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a4:48:7a,bridge_name='br-int',has_traffic_filtering=True,id=0355ba71-a598-4e22-a0b7-c5c10b61733e,network=Network(8856153c-fd33-4eca-913c-fbbc6c3bd29c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0355ba71-a5')#033[00m
Dec  6 02:17:39 np0005548731 nova_compute[232433]: 2025-12-06 07:17:39.630 232437 DEBUG nova.virt.libvirt.driver [None req-1b05c8c5-02cf-4837-8e67-324222e26c9b 197a9b0ee1db487d82542eb31e84f33e 8f938a037b8141cf9408cbf6f5cd081d - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  6 02:17:39 np0005548731 nova_compute[232433]: 2025-12-06 07:17:39.631 232437 DEBUG nova.virt.libvirt.driver [None req-1b05c8c5-02cf-4837-8e67-324222e26c9b 197a9b0ee1db487d82542eb31e84f33e 8f938a037b8141cf9408cbf6f5cd081d - - default default] No BDM found with device name vdc, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  6 02:17:39 np0005548731 nova_compute[232433]: 2025-12-06 07:17:39.631 232437 DEBUG nova.virt.libvirt.driver [None req-1b05c8c5-02cf-4837-8e67-324222e26c9b 197a9b0ee1db487d82542eb31e84f33e 8f938a037b8141cf9408cbf6f5cd081d - - default default] No VIF found with MAC fa:16:3e:43:c0:da, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec  6 02:17:39 np0005548731 nova_compute[232433]: 2025-12-06 07:17:39.631 232437 DEBUG nova.virt.libvirt.driver [None req-1b05c8c5-02cf-4837-8e67-324222e26c9b 197a9b0ee1db487d82542eb31e84f33e 8f938a037b8141cf9408cbf6f5cd081d - - default default] No VIF found with MAC fa:16:3e:0b:78:59, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec  6 02:17:39 np0005548731 nova_compute[232433]: 2025-12-06 07:17:39.632 232437 INFO nova.virt.libvirt.driver [None req-1b05c8c5-02cf-4837-8e67-324222e26c9b 197a9b0ee1db487d82542eb31e84f33e 8f938a037b8141cf9408cbf6f5cd081d - - default default] [instance: 067b423b-4ac2-4ca9-9000-44b6fb2af34e] Using config drive#033[00m
Dec  6 02:17:39 np0005548731 nova_compute[232433]: 2025-12-06 07:17:39.656 232437 DEBUG nova.storage.rbd_utils [None req-1b05c8c5-02cf-4837-8e67-324222e26c9b 197a9b0ee1db487d82542eb31e84f33e 8f938a037b8141cf9408cbf6f5cd081d - - default default] rbd image 067b423b-4ac2-4ca9-9000-44b6fb2af34e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:17:40 np0005548731 nova_compute[232433]: 2025-12-06 07:17:40.088 232437 INFO nova.virt.libvirt.driver [None req-1b05c8c5-02cf-4837-8e67-324222e26c9b 197a9b0ee1db487d82542eb31e84f33e 8f938a037b8141cf9408cbf6f5cd081d - - default default] [instance: 067b423b-4ac2-4ca9-9000-44b6fb2af34e] Creating config drive at /var/lib/nova/instances/067b423b-4ac2-4ca9-9000-44b6fb2af34e/disk.config#033[00m
Dec  6 02:17:40 np0005548731 nova_compute[232433]: 2025-12-06 07:17:40.096 232437 DEBUG oslo_concurrency.processutils [None req-1b05c8c5-02cf-4837-8e67-324222e26c9b 197a9b0ee1db487d82542eb31e84f33e 8f938a037b8141cf9408cbf6f5cd081d - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/067b423b-4ac2-4ca9-9000-44b6fb2af34e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp7nnz8zg1 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:17:40 np0005548731 nova_compute[232433]: 2025-12-06 07:17:40.227 232437 DEBUG oslo_concurrency.processutils [None req-1b05c8c5-02cf-4837-8e67-324222e26c9b 197a9b0ee1db487d82542eb31e84f33e 8f938a037b8141cf9408cbf6f5cd081d - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/067b423b-4ac2-4ca9-9000-44b6fb2af34e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp7nnz8zg1" returned: 0 in 0.131s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:17:40 np0005548731 nova_compute[232433]: 2025-12-06 07:17:40.256 232437 DEBUG nova.storage.rbd_utils [None req-1b05c8c5-02cf-4837-8e67-324222e26c9b 197a9b0ee1db487d82542eb31e84f33e 8f938a037b8141cf9408cbf6f5cd081d - - default default] rbd image 067b423b-4ac2-4ca9-9000-44b6fb2af34e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:17:40 np0005548731 nova_compute[232433]: 2025-12-06 07:17:40.262 232437 DEBUG oslo_concurrency.processutils [None req-1b05c8c5-02cf-4837-8e67-324222e26c9b 197a9b0ee1db487d82542eb31e84f33e 8f938a037b8141cf9408cbf6f5cd081d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/067b423b-4ac2-4ca9-9000-44b6fb2af34e/disk.config 067b423b-4ac2-4ca9-9000-44b6fb2af34e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:17:40 np0005548731 nova_compute[232433]: 2025-12-06 07:17:40.289 232437 DEBUG nova.network.neutron [req-d928de36-6752-4ebb-9548-1f7c9f7a5a4c req-e4bdbdf1-991f-4768-a697-c04e4d98a6cc 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 067b423b-4ac2-4ca9-9000-44b6fb2af34e] Updated VIF entry in instance network info cache for port 0355ba71-a598-4e22-a0b7-c5c10b61733e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  6 02:17:40 np0005548731 nova_compute[232433]: 2025-12-06 07:17:40.291 232437 DEBUG nova.network.neutron [req-d928de36-6752-4ebb-9548-1f7c9f7a5a4c req-e4bdbdf1-991f-4768-a697-c04e4d98a6cc 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 067b423b-4ac2-4ca9-9000-44b6fb2af34e] Updating instance_info_cache with network_info: [{"id": "5408643b-7986-461d-867d-9f6545eeabb3", "address": "fa:16:3e:43:c0:da", "network": {"id": "1867060b-2830-47a2-bf0a-46a10388f745", "bridge": "br-int", "label": "tempest-TaggedBootDevicesTest-1327949345-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8f938a037b8141cf9408cbf6f5cd081d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5408643b-79", "ovs_interfaceid": "5408643b-7986-461d-867d-9f6545eeabb3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "0f3261df-8a99-4884-8db8-4ab5250dafc3", "address": "fa:16:3e:33:5a:4c", "network": {"id": "83f8ee1c-bee2-4425-8792-4c822f63072c", "bridge": "br-int", "label": "tempest-device-tagging-net1-1182028938", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.159", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8f938a037b8141cf9408cbf6f5cd081d", 
"mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0f3261df-8a", "ovs_interfaceid": "0f3261df-8a99-4884-8db8-4ab5250dafc3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}, {"id": "0afb38e3-893f-4379-98a8-4a56e2b84f9d", "address": "fa:16:3e:b0:7f:ee", "network": {"id": "83f8ee1c-bee2-4425-8792-4c822f63072c", "bridge": "br-int", "label": "tempest-device-tagging-net1-1182028938", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.142", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8f938a037b8141cf9408cbf6f5cd081d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0afb38e3-89", "ovs_interfaceid": "0afb38e3-893f-4379-98a8-4a56e2b84f9d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}, {"id": "e27e5b98-b454-4f5a-90a9-3f7a5c8f1a91", "address": "fa:16:3e:44:6f:c0", "network": {"id": "83f8ee1c-bee2-4425-8792-4c822f63072c", "bridge": "br-int", "label": "tempest-device-tagging-net1-1182028938", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.98", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "8f938a037b8141cf9408cbf6f5cd081d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape27e5b98-b4", "ovs_interfaceid": "e27e5b98-b454-4f5a-90a9-3f7a5c8f1a91", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "5984d840-e1cd-45dc-87cb-a337e73a753c", "address": "fa:16:3e:0b:78:59", "network": {"id": "83f8ee1c-bee2-4425-8792-4c822f63072c", "bridge": "br-int", "label": "tempest-device-tagging-net1-1182028938", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.181", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8f938a037b8141cf9408cbf6f5cd081d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5984d840-e1", "ovs_interfaceid": "5984d840-e1cd-45dc-87cb-a337e73a753c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "509f7f18-31bd-42cd-83f5-e9b62a4e200b", "address": "fa:16:3e:3a:d0:96", "network": {"id": "8856153c-fd33-4eca-913c-fbbc6c3bd29c", "bridge": "br-int", "label": "tempest-device-tagging-net2-1411782491", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.2.2.100", "type": "fixed", "version": 4, "meta": {}, "floating_ips": 
[]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8f938a037b8141cf9408cbf6f5cd081d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap509f7f18-31", "ovs_interfaceid": "509f7f18-31bd-42cd-83f5-e9b62a4e200b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "0355ba71-a598-4e22-a0b7-c5c10b61733e", "address": "fa:16:3e:a4:48:7a", "network": {"id": "8856153c-fd33-4eca-913c-fbbc6c3bd29c", "bridge": "br-int", "label": "tempest-device-tagging-net2-1411782491", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.2.2.200", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8f938a037b8141cf9408cbf6f5cd081d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0355ba71-a5", "ovs_interfaceid": "0355ba71-a598-4e22-a0b7-c5c10b61733e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 02:17:40 np0005548731 nova_compute[232433]: 2025-12-06 07:17:40.315 232437 DEBUG oslo_concurrency.lockutils [req-d928de36-6752-4ebb-9548-1f7c9f7a5a4c req-e4bdbdf1-991f-4768-a697-c04e4d98a6cc 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-067b423b-4ac2-4ca9-9000-44b6fb2af34e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 02:17:40 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:17:40 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:17:40 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:17:40.567 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:17:40 np0005548731 nova_compute[232433]: 2025-12-06 07:17:40.788 232437 DEBUG oslo_concurrency.processutils [None req-1b05c8c5-02cf-4837-8e67-324222e26c9b 197a9b0ee1db487d82542eb31e84f33e 8f938a037b8141cf9408cbf6f5cd081d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/067b423b-4ac2-4ca9-9000-44b6fb2af34e/disk.config 067b423b-4ac2-4ca9-9000-44b6fb2af34e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.527s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:17:40 np0005548731 nova_compute[232433]: 2025-12-06 07:17:40.789 232437 INFO nova.virt.libvirt.driver [None req-1b05c8c5-02cf-4837-8e67-324222e26c9b 197a9b0ee1db487d82542eb31e84f33e 8f938a037b8141cf9408cbf6f5cd081d - - default default] [instance: 067b423b-4ac2-4ca9-9000-44b6fb2af34e] Deleting local config drive /var/lib/nova/instances/067b423b-4ac2-4ca9-9000-44b6fb2af34e/disk.config because it was imported into RBD.#033[00m
Dec  6 02:17:40 np0005548731 kernel: tap5408643b-79: entered promiscuous mode
Dec  6 02:17:40 np0005548731 NetworkManager[49182]: <info>  [1765005460.8422] manager: (tap5408643b-79): new Tun device (/org/freedesktop/NetworkManager/Devices/140)
Dec  6 02:17:40 np0005548731 NetworkManager[49182]: <info>  [1765005460.8593] manager: (tap0f3261df-8a): new Tun device (/org/freedesktop/NetworkManager/Devices/141)
Dec  6 02:17:40 np0005548731 kernel: tap0f3261df-8a: entered promiscuous mode
Dec  6 02:17:40 np0005548731 ovn_controller[133927]: 2025-12-06T07:17:40Z|00236|binding|INFO|Claiming lport 0f3261df-8a99-4884-8db8-4ab5250dafc3 for this chassis.
Dec  6 02:17:40 np0005548731 ovn_controller[133927]: 2025-12-06T07:17:40Z|00237|binding|INFO|0f3261df-8a99-4884-8db8-4ab5250dafc3: Claiming fa:16:3e:33:5a:4c 10.1.1.159
Dec  6 02:17:40 np0005548731 ovn_controller[133927]: 2025-12-06T07:17:40Z|00238|binding|INFO|Claiming lport 5408643b-7986-461d-867d-9f6545eeabb3 for this chassis.
Dec  6 02:17:40 np0005548731 ovn_controller[133927]: 2025-12-06T07:17:40Z|00239|binding|INFO|5408643b-7986-461d-867d-9f6545eeabb3: Claiming fa:16:3e:43:c0:da 10.100.0.5
Dec  6 02:17:40 np0005548731 nova_compute[232433]: 2025-12-06 07:17:40.862 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:17:40 np0005548731 systemd-udevd[263336]: Network interface NamePolicy= disabled on kernel command line.
Dec  6 02:17:40 np0005548731 systemd-udevd[263337]: Network interface NamePolicy= disabled on kernel command line.
Dec  6 02:17:40 np0005548731 systemd-udevd[263342]: Network interface NamePolicy= disabled on kernel command line.
Dec  6 02:17:40 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:17:40.871 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:33:5a:4c 10.1.1.159'], port_security=['fa:16:3e:33:5a:4c 10.1.1.159'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-TaggedBootDevicesTest-1263915628', 'neutron:cidrs': '10.1.1.159/24', 'neutron:device_id': '067b423b-4ac2-4ca9-9000-44b6fb2af34e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-83f8ee1c-bee2-4425-8792-4c822f63072c', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-TaggedBootDevicesTest-1263915628', 'neutron:project_id': '8f938a037b8141cf9408cbf6f5cd081d', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'c82004de-8ebc-40e2-958d-d9900669c6b9', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e94e8fb2-850a-43df-84ef-79dd7fe43cfe, chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], logical_port=0f3261df-8a99-4884-8db8-4ab5250dafc3) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 02:17:40 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:17:40.874 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:43:c0:da 10.100.0.5'], port_security=['fa:16:3e:43:c0:da 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '067b423b-4ac2-4ca9-9000-44b6fb2af34e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1867060b-2830-47a2-bf0a-46a10388f745', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8f938a037b8141cf9408cbf6f5cd081d', 'neutron:revision_number': '2', 'neutron:security_group_ids': '237cb9be-2d8b-4ac0-bb7a-8d0c44abab0c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=cde0a4c1-aca8-4a0b-ab02-e783a68c2106, chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], logical_port=5408643b-7986-461d-867d-9f6545eeabb3) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 02:17:40 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:17:40.876 143965 INFO neutron.agent.ovn.metadata.agent [-] Port 0f3261df-8a99-4884-8db8-4ab5250dafc3 in datapath 83f8ee1c-bee2-4425-8792-4c822f63072c bound to our chassis#033[00m
Dec  6 02:17:40 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:17:40.880 143965 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 83f8ee1c-bee2-4425-8792-4c822f63072c#033[00m
Dec  6 02:17:40 np0005548731 NetworkManager[49182]: <info>  [1765005460.8836] manager: (tap0afb38e3-89): new Tun device (/org/freedesktop/NetworkManager/Devices/142)
Dec  6 02:17:40 np0005548731 NetworkManager[49182]: <info>  [1765005460.8849] device (tap0f3261df-8a): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec  6 02:17:40 np0005548731 NetworkManager[49182]: <info>  [1765005460.8868] device (tap0f3261df-8a): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec  6 02:17:40 np0005548731 NetworkManager[49182]: <info>  [1765005460.8879] device (tap5408643b-79): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec  6 02:17:40 np0005548731 NetworkManager[49182]: <info>  [1765005460.8900] device (tap5408643b-79): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec  6 02:17:40 np0005548731 NetworkManager[49182]: <info>  [1765005460.8966] manager: (tape27e5b98-b4): new Tun device (/org/freedesktop/NetworkManager/Devices/143)
Dec  6 02:17:40 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:17:40.895 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[e4d45961-a294-4210-a5f7-1a43a9064772]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:17:40 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:17:40.896 143965 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap83f8ee1c-b1 in ovnmeta-83f8ee1c-bee2-4425-8792-4c822f63072c namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Dec  6 02:17:40 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:17:40.899 236668 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap83f8ee1c-b0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Dec  6 02:17:40 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:17:40.899 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[fe988a40-93d5-4f87-ae19-dd2858b076a9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:17:40 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:17:40.900 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[12c77035-fa2b-49fa-9d6d-4a351cd1b9dc]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:17:40 np0005548731 NetworkManager[49182]: <info>  [1765005460.9135] manager: (tap5984d840-e1): new Tun device (/org/freedesktop/NetworkManager/Devices/144)
Dec  6 02:17:40 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:17:40.913 144079 DEBUG oslo.privsep.daemon [-] privsep: reply[17328a8c-91eb-4621-af0d-ee5092b37d9b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:17:40 np0005548731 kernel: tap5984d840-e1: entered promiscuous mode
Dec  6 02:17:40 np0005548731 kernel: tape27e5b98-b4: entered promiscuous mode
Dec  6 02:17:40 np0005548731 NetworkManager[49182]: <info>  [1765005460.9188] device (tape27e5b98-b4): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec  6 02:17:40 np0005548731 kernel: tap0afb38e3-89: entered promiscuous mode
Dec  6 02:17:40 np0005548731 NetworkManager[49182]: <info>  [1765005460.9202] device (tap0afb38e3-89): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec  6 02:17:40 np0005548731 NetworkManager[49182]: <info>  [1765005460.9208] device (tape27e5b98-b4): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec  6 02:17:40 np0005548731 NetworkManager[49182]: <info>  [1765005460.9213] device (tap0afb38e3-89): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec  6 02:17:40 np0005548731 NetworkManager[49182]: <info>  [1765005460.9239] device (tap5984d840-e1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec  6 02:17:40 np0005548731 NetworkManager[49182]: <info>  [1765005460.9248] device (tap5984d840-e1): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec  6 02:17:40 np0005548731 NetworkManager[49182]: <info>  [1765005460.9267] manager: (tap509f7f18-31): new Tun device (/org/freedesktop/NetworkManager/Devices/145)
Dec  6 02:17:40 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:17:40.927 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[b872d0a7-321d-4b26-9827-eb419f91425f]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:17:40 np0005548731 ovn_controller[133927]: 2025-12-06T07:17:40Z|00240|binding|INFO|Claiming lport 5984d840-e1cd-45dc-87cb-a337e73a753c for this chassis.
Dec  6 02:17:40 np0005548731 ovn_controller[133927]: 2025-12-06T07:17:40Z|00241|binding|INFO|5984d840-e1cd-45dc-87cb-a337e73a753c: Claiming fa:16:3e:0b:78:59 10.1.1.181
Dec  6 02:17:40 np0005548731 ovn_controller[133927]: 2025-12-06T07:17:40Z|00242|binding|INFO|Claiming lport e27e5b98-b454-4f5a-90a9-3f7a5c8f1a91 for this chassis.
Dec  6 02:17:40 np0005548731 ovn_controller[133927]: 2025-12-06T07:17:40Z|00243|binding|INFO|e27e5b98-b454-4f5a-90a9-3f7a5c8f1a91: Claiming fa:16:3e:44:6f:c0 10.1.1.98
Dec  6 02:17:40 np0005548731 ovn_controller[133927]: 2025-12-06T07:17:40Z|00244|binding|INFO|Claiming lport 0afb38e3-893f-4379-98a8-4a56e2b84f9d for this chassis.
Dec  6 02:17:40 np0005548731 ovn_controller[133927]: 2025-12-06T07:17:40Z|00245|binding|INFO|0afb38e3-893f-4379-98a8-4a56e2b84f9d: Claiming fa:16:3e:b0:7f:ee 10.1.1.142
Dec  6 02:17:40 np0005548731 nova_compute[232433]: 2025-12-06 07:17:40.929 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:17:40 np0005548731 kernel: tap509f7f18-31: entered promiscuous mode
Dec  6 02:17:40 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:17:40.937 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:44:6f:c0 10.1.1.98'], port_security=['fa:16:3e:44:6f:c0 10.1.1.98'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.1.1.98/24', 'neutron:device_id': '067b423b-4ac2-4ca9-9000-44b6fb2af34e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-83f8ee1c-bee2-4425-8792-4c822f63072c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8f938a037b8141cf9408cbf6f5cd081d', 'neutron:revision_number': '2', 'neutron:security_group_ids': '237cb9be-2d8b-4ac0-bb7a-8d0c44abab0c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e94e8fb2-850a-43df-84ef-79dd7fe43cfe, chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], logical_port=e27e5b98-b454-4f5a-90a9-3f7a5c8f1a91) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 02:17:40 np0005548731 ovn_controller[133927]: 2025-12-06T07:17:40Z|00246|binding|INFO|Claiming lport 509f7f18-31bd-42cd-83f5-e9b62a4e200b for this chassis.
Dec  6 02:17:40 np0005548731 ovn_controller[133927]: 2025-12-06T07:17:40Z|00247|binding|INFO|509f7f18-31bd-42cd-83f5-e9b62a4e200b: Claiming fa:16:3e:3a:d0:96 10.2.2.100
Dec  6 02:17:40 np0005548731 NetworkManager[49182]: <info>  [1765005460.9391] manager: (tap0355ba71-a5): new Tun device (/org/freedesktop/NetworkManager/Devices/146)
Dec  6 02:17:40 np0005548731 NetworkManager[49182]: <info>  [1765005460.9404] device (tap509f7f18-31): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec  6 02:17:40 np0005548731 nova_compute[232433]: 2025-12-06 07:17:40.938 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:17:40 np0005548731 NetworkManager[49182]: <info>  [1765005460.9412] device (tap509f7f18-31): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec  6 02:17:40 np0005548731 nova_compute[232433]: 2025-12-06 07:17:40.941 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:17:40 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:17:40.941 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b0:7f:ee 10.1.1.142'], port_security=['fa:16:3e:b0:7f:ee 10.1.1.142'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-TaggedBootDevicesTest-375565817', 'neutron:cidrs': '10.1.1.142/24', 'neutron:device_id': '067b423b-4ac2-4ca9-9000-44b6fb2af34e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-83f8ee1c-bee2-4425-8792-4c822f63072c', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-TaggedBootDevicesTest-375565817', 'neutron:project_id': '8f938a037b8141cf9408cbf6f5cd081d', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'c82004de-8ebc-40e2-958d-d9900669c6b9', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e94e8fb2-850a-43df-84ef-79dd7fe43cfe, chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], logical_port=0afb38e3-893f-4379-98a8-4a56e2b84f9d) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 02:17:40 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:17:40.943 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:0b:78:59 10.1.1.181'], port_security=['fa:16:3e:0b:78:59 10.1.1.181'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.1.1.181/24', 'neutron:device_id': '067b423b-4ac2-4ca9-9000-44b6fb2af34e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-83f8ee1c-bee2-4425-8792-4c822f63072c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8f938a037b8141cf9408cbf6f5cd081d', 'neutron:revision_number': '2', 'neutron:security_group_ids': '237cb9be-2d8b-4ac0-bb7a-8d0c44abab0c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e94e8fb2-850a-43df-84ef-79dd7fe43cfe, chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], logical_port=5984d840-e1cd-45dc-87cb-a337e73a753c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 02:17:40 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:17:40 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:17:40.947 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3a:d0:96 10.2.2.100'], port_security=['fa:16:3e:3a:d0:96 10.2.2.100'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.2.2.100/24', 'neutron:device_id': '067b423b-4ac2-4ca9-9000-44b6fb2af34e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8856153c-fd33-4eca-913c-fbbc6c3bd29c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8f938a037b8141cf9408cbf6f5cd081d', 'neutron:revision_number': '2', 'neutron:security_group_ids': '237cb9be-2d8b-4ac0-bb7a-8d0c44abab0c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=732bca81-ac12-4eee-8f7d-ffbf43eb8c29, chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], logical_port=509f7f18-31bd-42cd-83f5-e9b62a4e200b) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 02:17:40 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:17:40 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:17:40.945 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:17:40 np0005548731 kernel: tap0355ba71-a5: entered promiscuous mode
Dec  6 02:17:40 np0005548731 ovn_controller[133927]: 2025-12-06T07:17:40Z|00248|binding|INFO|Releasing lport d5f15755-ab6a-4ce9-857e-63f6c0e19fd8 from this chassis (sb_readonly=0)
Dec  6 02:17:40 np0005548731 ovn_controller[133927]: 2025-12-06T07:17:40Z|00249|binding|INFO|Claiming lport 0355ba71-a598-4e22-a0b7-c5c10b61733e for this chassis.
Dec  6 02:17:40 np0005548731 ovn_controller[133927]: 2025-12-06T07:17:40Z|00250|binding|INFO|0355ba71-a598-4e22-a0b7-c5c10b61733e: Claiming fa:16:3e:a4:48:7a 10.2.2.200
Dec  6 02:17:40 np0005548731 nova_compute[232433]: 2025-12-06 07:17:40.949 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:17:40 np0005548731 NetworkManager[49182]: <info>  [1765005460.9529] device (tap0355ba71-a5): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec  6 02:17:40 np0005548731 NetworkManager[49182]: <info>  [1765005460.9538] device (tap0355ba71-a5): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec  6 02:17:40 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:17:40.962 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[4afe66ac-cede-460d-aa71-836cf7510655]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:17:40 np0005548731 ovn_controller[133927]: 2025-12-06T07:17:40Z|00251|binding|INFO|Setting lport 0f3261df-8a99-4884-8db8-4ab5250dafc3 ovn-installed in OVS
Dec  6 02:17:40 np0005548731 ovn_controller[133927]: 2025-12-06T07:17:40Z|00252|binding|INFO|Setting lport 5408643b-7986-461d-867d-9f6545eeabb3 ovn-installed in OVS
Dec  6 02:17:40 np0005548731 nova_compute[232433]: 2025-12-06 07:17:40.969 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:17:40 np0005548731 NetworkManager[49182]: <info>  [1765005460.9712] manager: (tap83f8ee1c-b0): new Veth device (/org/freedesktop/NetworkManager/Devices/147)
Dec  6 02:17:40 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:17:40.970 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[9cdc9448-b1c5-4a1e-8d3c-1b9e44237aaa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:17:40 np0005548731 systemd-machined[195355]: New machine qemu-32-instance-0000004b.
Dec  6 02:17:40 np0005548731 ovn_controller[133927]: 2025-12-06T07:17:40Z|00253|binding|INFO|Setting lport 0f3261df-8a99-4884-8db8-4ab5250dafc3 up in Southbound
Dec  6 02:17:40 np0005548731 ovn_controller[133927]: 2025-12-06T07:17:40Z|00254|binding|INFO|Setting lport 5408643b-7986-461d-867d-9f6545eeabb3 up in Southbound
Dec  6 02:17:40 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:17:40.974 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a4:48:7a 10.2.2.200'], port_security=['fa:16:3e:a4:48:7a 10.2.2.200'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.2.2.200/24', 'neutron:device_id': '067b423b-4ac2-4ca9-9000-44b6fb2af34e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8856153c-fd33-4eca-913c-fbbc6c3bd29c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8f938a037b8141cf9408cbf6f5cd081d', 'neutron:revision_number': '2', 'neutron:security_group_ids': '237cb9be-2d8b-4ac0-bb7a-8d0c44abab0c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=732bca81-ac12-4eee-8f7d-ffbf43eb8c29, chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], logical_port=0355ba71-a598-4e22-a0b7-c5c10b61733e) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 02:17:40 np0005548731 systemd[1]: Started Virtual Machine qemu-32-instance-0000004b.
Dec  6 02:17:41 np0005548731 ovn_controller[133927]: 2025-12-06T07:17:40Z|00255|binding|INFO|Setting lport e27e5b98-b454-4f5a-90a9-3f7a5c8f1a91 ovn-installed in OVS
Dec  6 02:17:41 np0005548731 ovn_controller[133927]: 2025-12-06T07:17:41Z|00256|binding|INFO|Setting lport e27e5b98-b454-4f5a-90a9-3f7a5c8f1a91 up in Southbound
Dec  6 02:17:41 np0005548731 ovn_controller[133927]: 2025-12-06T07:17:41Z|00257|binding|INFO|Setting lport 0afb38e3-893f-4379-98a8-4a56e2b84f9d ovn-installed in OVS
Dec  6 02:17:41 np0005548731 ovn_controller[133927]: 2025-12-06T07:17:41Z|00258|binding|INFO|Setting lport 0afb38e3-893f-4379-98a8-4a56e2b84f9d up in Southbound
Dec  6 02:17:41 np0005548731 ovn_controller[133927]: 2025-12-06T07:17:41Z|00259|binding|INFO|Setting lport 5984d840-e1cd-45dc-87cb-a337e73a753c ovn-installed in OVS
Dec  6 02:17:41 np0005548731 ovn_controller[133927]: 2025-12-06T07:17:41Z|00260|binding|INFO|Setting lport 5984d840-e1cd-45dc-87cb-a337e73a753c up in Southbound
Dec  6 02:17:41 np0005548731 nova_compute[232433]: 2025-12-06 07:17:41.001 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:17:41 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:17:41.011 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[835852a3-8039-4785-90f3-21bf1697b4b3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:17:41 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:17:41.013 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[81da490a-cbd3-4a27-9249-640390bf7605]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:17:41 np0005548731 NetworkManager[49182]: <info>  [1765005461.0362] device (tap83f8ee1c-b0): carrier: link connected
Dec  6 02:17:41 np0005548731 ovn_controller[133927]: 2025-12-06T07:17:41Z|00261|binding|INFO|Setting lport 0355ba71-a598-4e22-a0b7-c5c10b61733e ovn-installed in OVS
Dec  6 02:17:41 np0005548731 ovn_controller[133927]: 2025-12-06T07:17:41Z|00262|binding|INFO|Setting lport 0355ba71-a598-4e22-a0b7-c5c10b61733e up in Southbound
Dec  6 02:17:41 np0005548731 ovn_controller[133927]: 2025-12-06T07:17:41Z|00263|binding|INFO|Setting lport 509f7f18-31bd-42cd-83f5-e9b62a4e200b ovn-installed in OVS
Dec  6 02:17:41 np0005548731 ovn_controller[133927]: 2025-12-06T07:17:41Z|00264|binding|INFO|Setting lport 509f7f18-31bd-42cd-83f5-e9b62a4e200b up in Southbound
Dec  6 02:17:41 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:17:41.042 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[9e7939d5-c109-4c73-865e-db6a4ac65769]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:17:41 np0005548731 nova_compute[232433]: 2025-12-06 07:17:41.052 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:17:41 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:17:41.066 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[02fd4119-438f-4aa3-92f6-1b5b22dbdf28]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap83f8ee1c-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:53:d5:41'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 87], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 572304, 'reachable_time': 31290, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 263396, 'error': None, 'target': 'ovnmeta-83f8ee1c-bee2-4425-8792-4c822f63072c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:17:41 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:17:41.079 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[9905a26f-acb2-4564-9b9f-f31769ace03a]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe53:d541'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 572304, 'tstamp': 572304}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 263398, 'error': None, 'target': 'ovnmeta-83f8ee1c-bee2-4425-8792-4c822f63072c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:17:41 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:17:41.095 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[0aa4da31-762d-4bd9-b20e-66c41e30e506]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap83f8ee1c-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:53:d5:41'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 87], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 572304, 'reachable_time': 31290, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 263399, 'error': None, 'target': 'ovnmeta-83f8ee1c-bee2-4425-8792-4c822f63072c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:17:41 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:17:41.129 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[d3083374-4ca9-49fd-878c-8db1bace7cda]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:17:41 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:17:41.189 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[b5d9c57e-4a78-465e-a3d1-8d0542e018ed]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:17:41 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:17:41.191 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap83f8ee1c-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:17:41 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:17:41.191 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  6 02:17:41 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:17:41.192 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap83f8ee1c-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:17:41 np0005548731 nova_compute[232433]: 2025-12-06 07:17:41.194 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:17:41 np0005548731 kernel: tap83f8ee1c-b0: entered promiscuous mode
Dec  6 02:17:41 np0005548731 NetworkManager[49182]: <info>  [1765005461.1950] manager: (tap83f8ee1c-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/148)
Dec  6 02:17:41 np0005548731 nova_compute[232433]: 2025-12-06 07:17:41.197 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:17:41 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:17:41.198 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap83f8ee1c-b0, col_values=(('external_ids', {'iface-id': 'aaa2a250-1a44-4385-bf81-2c758e7e09fb'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:17:41 np0005548731 nova_compute[232433]: 2025-12-06 07:17:41.199 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:17:41 np0005548731 ovn_controller[133927]: 2025-12-06T07:17:41Z|00265|binding|INFO|Releasing lport aaa2a250-1a44-4385-bf81-2c758e7e09fb from this chassis (sb_readonly=0)
Dec  6 02:17:41 np0005548731 nova_compute[232433]: 2025-12-06 07:17:41.213 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:17:41 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:17:41.214 143965 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/83f8ee1c-bee2-4425-8792-4c822f63072c.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/83f8ee1c-bee2-4425-8792-4c822f63072c.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Dec  6 02:17:41 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:17:41.215 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[01c87432-563d-46cd-b661-fc4e50f73e7f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:17:41 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:17:41.216 143965 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec  6 02:17:41 np0005548731 ovn_metadata_agent[143960]: global
Dec  6 02:17:41 np0005548731 ovn_metadata_agent[143960]:    log         /dev/log local0 debug
Dec  6 02:17:41 np0005548731 ovn_metadata_agent[143960]:    log-tag     haproxy-metadata-proxy-83f8ee1c-bee2-4425-8792-4c822f63072c
Dec  6 02:17:41 np0005548731 ovn_metadata_agent[143960]:    user        root
Dec  6 02:17:41 np0005548731 ovn_metadata_agent[143960]:    group       root
Dec  6 02:17:41 np0005548731 ovn_metadata_agent[143960]:    maxconn     1024
Dec  6 02:17:41 np0005548731 ovn_metadata_agent[143960]:    pidfile     /var/lib/neutron/external/pids/83f8ee1c-bee2-4425-8792-4c822f63072c.pid.haproxy
Dec  6 02:17:41 np0005548731 ovn_metadata_agent[143960]:    daemon
Dec  6 02:17:41 np0005548731 ovn_metadata_agent[143960]: 
Dec  6 02:17:41 np0005548731 ovn_metadata_agent[143960]: defaults
Dec  6 02:17:41 np0005548731 ovn_metadata_agent[143960]:    log global
Dec  6 02:17:41 np0005548731 ovn_metadata_agent[143960]:    mode http
Dec  6 02:17:41 np0005548731 ovn_metadata_agent[143960]:    option httplog
Dec  6 02:17:41 np0005548731 ovn_metadata_agent[143960]:    option dontlognull
Dec  6 02:17:41 np0005548731 ovn_metadata_agent[143960]:    option http-server-close
Dec  6 02:17:41 np0005548731 ovn_metadata_agent[143960]:    option forwardfor
Dec  6 02:17:41 np0005548731 ovn_metadata_agent[143960]:    retries                 3
Dec  6 02:17:41 np0005548731 ovn_metadata_agent[143960]:    timeout http-request    30s
Dec  6 02:17:41 np0005548731 ovn_metadata_agent[143960]:    timeout connect         30s
Dec  6 02:17:41 np0005548731 ovn_metadata_agent[143960]:    timeout client          32s
Dec  6 02:17:41 np0005548731 ovn_metadata_agent[143960]:    timeout server          32s
Dec  6 02:17:41 np0005548731 ovn_metadata_agent[143960]:    timeout http-keep-alive 30s
Dec  6 02:17:41 np0005548731 ovn_metadata_agent[143960]: 
Dec  6 02:17:41 np0005548731 ovn_metadata_agent[143960]: 
Dec  6 02:17:41 np0005548731 ovn_metadata_agent[143960]: listen listener
Dec  6 02:17:41 np0005548731 ovn_metadata_agent[143960]:    bind 169.254.169.254:80
Dec  6 02:17:41 np0005548731 ovn_metadata_agent[143960]:    server metadata /var/lib/neutron/metadata_proxy
Dec  6 02:17:41 np0005548731 ovn_metadata_agent[143960]:    http-request add-header X-OVN-Network-ID 83f8ee1c-bee2-4425-8792-4c822f63072c
Dec  6 02:17:41 np0005548731 ovn_metadata_agent[143960]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Dec  6 02:17:41 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:17:41.217 143965 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-83f8ee1c-bee2-4425-8792-4c822f63072c', 'env', 'PROCESS_TAG=haproxy-83f8ee1c-bee2-4425-8792-4c822f63072c', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/83f8ee1c-bee2-4425-8792-4c822f63072c.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Dec  6 02:17:41 np0005548731 podman[263514]: 2025-12-06 07:17:41.593960455 +0000 UTC m=+0.056094576 container create ff47bd6677ae4ca0534768d06167f594fce13ad462ef7405d5b9b1401b9b1299 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-83f8ee1c-bee2-4425-8792-4c822f63072c, org.label-schema.build-date=20251125, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Dec  6 02:17:41 np0005548731 systemd[1]: Started libpod-conmon-ff47bd6677ae4ca0534768d06167f594fce13ad462ef7405d5b9b1401b9b1299.scope.
Dec  6 02:17:41 np0005548731 nova_compute[232433]: 2025-12-06 07:17:41.632 232437 DEBUG nova.virt.driver [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Emitting event <LifecycleEvent: 1765005461.6325889, 067b423b-4ac2-4ca9-9000-44b6fb2af34e => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 02:17:41 np0005548731 nova_compute[232433]: 2025-12-06 07:17:41.633 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 067b423b-4ac2-4ca9-9000-44b6fb2af34e] VM Started (Lifecycle Event)#033[00m
Dec  6 02:17:41 np0005548731 systemd[1]: Started libcrun container.
Dec  6 02:17:41 np0005548731 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b93af762ec9ad47ceb9b136c4c3a97982506755fecfcf07c5759fc50eae26815/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec  6 02:17:41 np0005548731 podman[263514]: 2025-12-06 07:17:41.563422852 +0000 UTC m=+0.025557003 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec  6 02:17:41 np0005548731 podman[263514]: 2025-12-06 07:17:41.660710731 +0000 UTC m=+0.122844892 container init ff47bd6677ae4ca0534768d06167f594fce13ad462ef7405d5b9b1401b9b1299 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-83f8ee1c-bee2-4425-8792-4c822f63072c, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Dec  6 02:17:41 np0005548731 nova_compute[232433]: 2025-12-06 07:17:41.661 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 067b423b-4ac2-4ca9-9000-44b6fb2af34e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:17:41 np0005548731 nova_compute[232433]: 2025-12-06 07:17:41.665 232437 DEBUG nova.virt.driver [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Emitting event <LifecycleEvent: 1765005461.6333485, 067b423b-4ac2-4ca9-9000-44b6fb2af34e => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 02:17:41 np0005548731 podman[263514]: 2025-12-06 07:17:41.666366389 +0000 UTC m=+0.128500540 container start ff47bd6677ae4ca0534768d06167f594fce13ad462ef7405d5b9b1401b9b1299 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-83f8ee1c-bee2-4425-8792-4c822f63072c, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  6 02:17:41 np0005548731 nova_compute[232433]: 2025-12-06 07:17:41.666 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 067b423b-4ac2-4ca9-9000-44b6fb2af34e] VM Paused (Lifecycle Event)#033[00m
Dec  6 02:17:41 np0005548731 nova_compute[232433]: 2025-12-06 07:17:41.689 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 067b423b-4ac2-4ca9-9000-44b6fb2af34e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:17:41 np0005548731 neutron-haproxy-ovnmeta-83f8ee1c-bee2-4425-8792-4c822f63072c[263530]: [NOTICE]   (263534) : New worker (263536) forked
Dec  6 02:17:41 np0005548731 neutron-haproxy-ovnmeta-83f8ee1c-bee2-4425-8792-4c822f63072c[263530]: [NOTICE]   (263534) : Loading success.
Dec  6 02:17:41 np0005548731 nova_compute[232433]: 2025-12-06 07:17:41.694 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 067b423b-4ac2-4ca9-9000-44b6fb2af34e] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  6 02:17:41 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:17:41.721 143965 INFO neutron.agent.ovn.metadata.agent [-] Port 5408643b-7986-461d-867d-9f6545eeabb3 in datapath 1867060b-2830-47a2-bf0a-46a10388f745 unbound from our chassis#033[00m
Dec  6 02:17:41 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:17:41.723 143965 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 1867060b-2830-47a2-bf0a-46a10388f745#033[00m
Dec  6 02:17:41 np0005548731 nova_compute[232433]: 2025-12-06 07:17:41.730 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 067b423b-4ac2-4ca9-9000-44b6fb2af34e] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  6 02:17:41 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:17:41.735 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[9742b327-48b9-479f-935f-7a90646a230c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:17:41 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:17:41.736 143965 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap1867060b-21 in ovnmeta-1867060b-2830-47a2-bf0a-46a10388f745 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Dec  6 02:17:41 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:17:41.738 236668 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap1867060b-20 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Dec  6 02:17:41 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:17:41.738 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[54942e88-1aa1-431e-a4e7-9fc94b42124e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:17:41 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:17:41.739 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[e3d3e45f-f654-4ac0-bb10-5df69b1c6444]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:17:41 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:17:41.749 144079 DEBUG oslo.privsep.daemon [-] privsep: reply[76251296-97d0-4d54-8321-a87056895788]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:17:41 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:17:41.764 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[56d71d3c-0349-4641-862c-5b8c70035de8]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:17:41 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:17:41.793 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[cdfeaa4b-1449-473b-99a7-c5257a2e59ba]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:17:41 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:17:41.798 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[3332ef95-162f-4716-94be-b924f2a1a3ca]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:17:41 np0005548731 NetworkManager[49182]: <info>  [1765005461.7994] manager: (tap1867060b-20): new Veth device (/org/freedesktop/NetworkManager/Devices/149)
Dec  6 02:17:41 np0005548731 systemd-udevd[263385]: Network interface NamePolicy= disabled on kernel command line.
Dec  6 02:17:41 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:17:41.829 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[46ec03b4-15ef-420f-b0af-e4fb5e1f972d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:17:41 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:17:41.833 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[3ed0832a-6338-42cf-9ff5-dc4e7abcb25f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:17:41 np0005548731 NetworkManager[49182]: <info>  [1765005461.8581] device (tap1867060b-20): carrier: link connected
Dec  6 02:17:41 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:17:41.865 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[b42189ca-084b-4b43-ba9a-78057ae3f631]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:17:41 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:17:41.883 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[228bc820-e259-4f77-8f26-49f24fd37d41]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap1867060b-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f9:d7:ab'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 88], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 572386, 'reachable_time': 41204, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 152, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 152, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 263555, 'error': None, 'target': 'ovnmeta-1867060b-2830-47a2-bf0a-46a10388f745', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:17:41 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:17:41.903 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[842e0a8c-a935-4572-ad71-a4bc23838853]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fef9:d7ab'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 572386, 'tstamp': 572386}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 263556, 'error': None, 'target': 'ovnmeta-1867060b-2830-47a2-bf0a-46a10388f745', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:17:41 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:17:41.922 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[90229060-107e-4881-9750-dc578a3d36a8]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap1867060b-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f9:d7:ab'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 88], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 572386, 'reachable_time': 41204, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 152, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 152, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 263557, 'error': None, 'target': 'ovnmeta-1867060b-2830-47a2-bf0a-46a10388f745', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:17:41 np0005548731 nova_compute[232433]: 2025-12-06 07:17:41.928 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:17:41 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:17:41.951 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[52721d37-6a1b-4a2f-bc2e-a44462b30623]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:17:42 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:17:42.017 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[1c0f5a67-0376-4557-b5af-a61016e34a41]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:17:42 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:17:42.019 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1867060b-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:17:42 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:17:42.020 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  6 02:17:42 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:17:42.021 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1867060b-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:17:42 np0005548731 kernel: tap1867060b-20: entered promiscuous mode
Dec  6 02:17:42 np0005548731 NetworkManager[49182]: <info>  [1765005462.0243] manager: (tap1867060b-20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/150)
Dec  6 02:17:42 np0005548731 nova_compute[232433]: 2025-12-06 07:17:42.024 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:17:42 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:17:42.030 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap1867060b-20, col_values=(('external_ids', {'iface-id': 'c99144ca-f47e-4d7c-874e-ccf4efc57815'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:17:42 np0005548731 ovn_controller[133927]: 2025-12-06T07:17:42Z|00266|binding|INFO|Releasing lport c99144ca-f47e-4d7c-874e-ccf4efc57815 from this chassis (sb_readonly=0)
Dec  6 02:17:42 np0005548731 nova_compute[232433]: 2025-12-06 07:17:42.032 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:17:42 np0005548731 nova_compute[232433]: 2025-12-06 07:17:42.033 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:17:42 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:17:42.034 143965 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/1867060b-2830-47a2-bf0a-46a10388f745.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/1867060b-2830-47a2-bf0a-46a10388f745.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Dec  6 02:17:42 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:17:42.036 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[d23978a7-49b0-4710-998a-50236c58753b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:17:42 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:17:42.037 143965 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec  6 02:17:42 np0005548731 ovn_metadata_agent[143960]: global
Dec  6 02:17:42 np0005548731 ovn_metadata_agent[143960]:    log         /dev/log local0 debug
Dec  6 02:17:42 np0005548731 ovn_metadata_agent[143960]:    log-tag     haproxy-metadata-proxy-1867060b-2830-47a2-bf0a-46a10388f745
Dec  6 02:17:42 np0005548731 ovn_metadata_agent[143960]:    user        root
Dec  6 02:17:42 np0005548731 ovn_metadata_agent[143960]:    group       root
Dec  6 02:17:42 np0005548731 ovn_metadata_agent[143960]:    maxconn     1024
Dec  6 02:17:42 np0005548731 ovn_metadata_agent[143960]:    pidfile     /var/lib/neutron/external/pids/1867060b-2830-47a2-bf0a-46a10388f745.pid.haproxy
Dec  6 02:17:42 np0005548731 ovn_metadata_agent[143960]:    daemon
Dec  6 02:17:42 np0005548731 ovn_metadata_agent[143960]: 
Dec  6 02:17:42 np0005548731 ovn_metadata_agent[143960]: defaults
Dec  6 02:17:42 np0005548731 ovn_metadata_agent[143960]:    log global
Dec  6 02:17:42 np0005548731 ovn_metadata_agent[143960]:    mode http
Dec  6 02:17:42 np0005548731 ovn_metadata_agent[143960]:    option httplog
Dec  6 02:17:42 np0005548731 ovn_metadata_agent[143960]:    option dontlognull
Dec  6 02:17:42 np0005548731 ovn_metadata_agent[143960]:    option http-server-close
Dec  6 02:17:42 np0005548731 ovn_metadata_agent[143960]:    option forwardfor
Dec  6 02:17:42 np0005548731 ovn_metadata_agent[143960]:    retries                 3
Dec  6 02:17:42 np0005548731 ovn_metadata_agent[143960]:    timeout http-request    30s
Dec  6 02:17:42 np0005548731 ovn_metadata_agent[143960]:    timeout connect         30s
Dec  6 02:17:42 np0005548731 ovn_metadata_agent[143960]:    timeout client          32s
Dec  6 02:17:42 np0005548731 ovn_metadata_agent[143960]:    timeout server          32s
Dec  6 02:17:42 np0005548731 ovn_metadata_agent[143960]:    timeout http-keep-alive 30s
Dec  6 02:17:42 np0005548731 ovn_metadata_agent[143960]: 
Dec  6 02:17:42 np0005548731 ovn_metadata_agent[143960]: 
Dec  6 02:17:42 np0005548731 ovn_metadata_agent[143960]: listen listener
Dec  6 02:17:42 np0005548731 ovn_metadata_agent[143960]:    bind 169.254.169.254:80
Dec  6 02:17:42 np0005548731 ovn_metadata_agent[143960]:    server metadata /var/lib/neutron/metadata_proxy
Dec  6 02:17:42 np0005548731 ovn_metadata_agent[143960]:    http-request add-header X-OVN-Network-ID 1867060b-2830-47a2-bf0a-46a10388f745
Dec  6 02:17:42 np0005548731 ovn_metadata_agent[143960]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Dec  6 02:17:42 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:17:42.038 143965 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-1867060b-2830-47a2-bf0a-46a10388f745', 'env', 'PROCESS_TAG=haproxy-1867060b-2830-47a2-bf0a-46a10388f745', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/1867060b-2830-47a2-bf0a-46a10388f745.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Dec  6 02:17:42 np0005548731 nova_compute[232433]: 2025-12-06 07:17:42.052 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:17:42 np0005548731 podman[263613]: 2025-12-06 07:17:42.375626879 +0000 UTC m=+0.020055229 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec  6 02:17:42 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:17:42 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 02:17:42 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:17:42.570 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 02:17:42 np0005548731 nova_compute[232433]: 2025-12-06 07:17:42.728 232437 DEBUG nova.compute.manager [req-2e84af21-c117-4420-8b00-b096382b72ab req-7df0e9d1-5638-4ede-b41a-55f651b397f9 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 067b423b-4ac2-4ca9-9000-44b6fb2af34e] Received event network-vif-plugged-0afb38e3-893f-4379-98a8-4a56e2b84f9d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:17:42 np0005548731 nova_compute[232433]: 2025-12-06 07:17:42.729 232437 DEBUG oslo_concurrency.lockutils [req-2e84af21-c117-4420-8b00-b096382b72ab req-7df0e9d1-5638-4ede-b41a-55f651b397f9 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "067b423b-4ac2-4ca9-9000-44b6fb2af34e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:17:42 np0005548731 nova_compute[232433]: 2025-12-06 07:17:42.730 232437 DEBUG oslo_concurrency.lockutils [req-2e84af21-c117-4420-8b00-b096382b72ab req-7df0e9d1-5638-4ede-b41a-55f651b397f9 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "067b423b-4ac2-4ca9-9000-44b6fb2af34e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:17:42 np0005548731 nova_compute[232433]: 2025-12-06 07:17:42.730 232437 DEBUG oslo_concurrency.lockutils [req-2e84af21-c117-4420-8b00-b096382b72ab req-7df0e9d1-5638-4ede-b41a-55f651b397f9 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "067b423b-4ac2-4ca9-9000-44b6fb2af34e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:17:42 np0005548731 nova_compute[232433]: 2025-12-06 07:17:42.730 232437 DEBUG nova.compute.manager [req-2e84af21-c117-4420-8b00-b096382b72ab req-7df0e9d1-5638-4ede-b41a-55f651b397f9 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 067b423b-4ac2-4ca9-9000-44b6fb2af34e] Processing event network-vif-plugged-0afb38e3-893f-4379-98a8-4a56e2b84f9d _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Dec  6 02:17:42 np0005548731 nova_compute[232433]: 2025-12-06 07:17:42.748 232437 DEBUG nova.compute.manager [req-bcae0483-bba8-4a7e-ab27-24aa15cb88e0 req-0e5e60bf-4fe3-449d-831e-4d65612f64f3 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 067b423b-4ac2-4ca9-9000-44b6fb2af34e] Received event network-vif-plugged-0f3261df-8a99-4884-8db8-4ab5250dafc3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:17:42 np0005548731 nova_compute[232433]: 2025-12-06 07:17:42.748 232437 DEBUG oslo_concurrency.lockutils [req-bcae0483-bba8-4a7e-ab27-24aa15cb88e0 req-0e5e60bf-4fe3-449d-831e-4d65612f64f3 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "067b423b-4ac2-4ca9-9000-44b6fb2af34e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:17:42 np0005548731 nova_compute[232433]: 2025-12-06 07:17:42.749 232437 DEBUG oslo_concurrency.lockutils [req-bcae0483-bba8-4a7e-ab27-24aa15cb88e0 req-0e5e60bf-4fe3-449d-831e-4d65612f64f3 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "067b423b-4ac2-4ca9-9000-44b6fb2af34e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:17:42 np0005548731 nova_compute[232433]: 2025-12-06 07:17:42.749 232437 DEBUG oslo_concurrency.lockutils [req-bcae0483-bba8-4a7e-ab27-24aa15cb88e0 req-0e5e60bf-4fe3-449d-831e-4d65612f64f3 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "067b423b-4ac2-4ca9-9000-44b6fb2af34e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:17:42 np0005548731 nova_compute[232433]: 2025-12-06 07:17:42.750 232437 DEBUG nova.compute.manager [req-bcae0483-bba8-4a7e-ab27-24aa15cb88e0 req-0e5e60bf-4fe3-449d-831e-4d65612f64f3 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 067b423b-4ac2-4ca9-9000-44b6fb2af34e] Processing event network-vif-plugged-0f3261df-8a99-4884-8db8-4ab5250dafc3 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Dec  6 02:17:42 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e252 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:17:42 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:17:42 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:17:42 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:17:42.949 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:17:43 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 02:17:43 np0005548731 podman[263613]: 2025-12-06 07:17:43.411748608 +0000 UTC m=+1.056176948 container create 1db47b1cbe1fd964d95d61b843c4e9a2e5536e6abd6996693dec2aabcfb9a7d5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-1867060b-2830-47a2-bf0a-46a10388f745, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  6 02:17:43 np0005548731 systemd[1]: Started libpod-conmon-1db47b1cbe1fd964d95d61b843c4e9a2e5536e6abd6996693dec2aabcfb9a7d5.scope.
Dec  6 02:17:43 np0005548731 systemd[1]: Started libcrun container.
Dec  6 02:17:43 np0005548731 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d909fbe269594c6a5a0b52c4e0d43221b5bfd0ad074ad79ed5c29a32af488c79/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec  6 02:17:43 np0005548731 podman[263613]: 2025-12-06 07:17:43.505851741 +0000 UTC m=+1.150280091 container init 1db47b1cbe1fd964d95d61b843c4e9a2e5536e6abd6996693dec2aabcfb9a7d5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-1867060b-2830-47a2-bf0a-46a10388f745, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3)
Dec  6 02:17:43 np0005548731 podman[263613]: 2025-12-06 07:17:43.51158576 +0000 UTC m=+1.156014090 container start 1db47b1cbe1fd964d95d61b843c4e9a2e5536e6abd6996693dec2aabcfb9a7d5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-1867060b-2830-47a2-bf0a-46a10388f745, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  6 02:17:43 np0005548731 neutron-haproxy-ovnmeta-1867060b-2830-47a2-bf0a-46a10388f745[263703]: [NOTICE]   (263707) : New worker (263709) forked
Dec  6 02:17:43 np0005548731 neutron-haproxy-ovnmeta-1867060b-2830-47a2-bf0a-46a10388f745[263703]: [NOTICE]   (263707) : Loading success.
Dec  6 02:17:43 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:17:43.573 143965 INFO neutron.agent.ovn.metadata.agent [-] Port e27e5b98-b454-4f5a-90a9-3f7a5c8f1a91 in datapath 83f8ee1c-bee2-4425-8792-4c822f63072c unbound from our chassis#033[00m
Dec  6 02:17:43 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:17:43.576 143965 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 83f8ee1c-bee2-4425-8792-4c822f63072c#033[00m
Dec  6 02:17:43 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:17:43.590 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[f8f8591a-9521-421e-ab0d-e4ede797256e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:17:43 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:17:43.617 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[c5c11162-792e-418b-ab0b-e8f3f8f815ad]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:17:43 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:17:43.621 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[ab9c94c7-fc02-497c-adaf-dda9db4e392c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:17:43 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:17:43.650 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[9fe61506-c2f5-4d2f-b434-10468716043d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:17:43 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:17:43.667 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[ed2a9ee5-18a0-4641-9783-59edf28aa6e7]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap83f8ee1c-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:53:d5:41'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 6, 'tx_packets': 5, 'rx_bytes': 532, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 6, 'tx_packets': 5, 'rx_bytes': 532, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 87], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 572304, 'reachable_time': 31290, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 263723, 'error': None, 'target': 'ovnmeta-83f8ee1c-bee2-4425-8792-4c822f63072c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:17:43 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:17:43.683 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[1018da83-b7a9-4d0d-960f-7034f3d2c151]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap83f8ee1c-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 572316, 'tstamp': 572316}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 263724, 'error': None, 'target': 'ovnmeta-83f8ee1c-bee2-4425-8792-4c822f63072c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 24, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.1.1.2'], ['IFA_LOCAL', '10.1.1.2'], ['IFA_BROADCAST', '10.1.1.255'], ['IFA_LABEL', 'tap83f8ee1c-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 572319, 'tstamp': 572319}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 263724, 'error': None, 'target': 'ovnmeta-83f8ee1c-bee2-4425-8792-4c822f63072c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:17:43 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:17:43.685 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap83f8ee1c-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:17:43 np0005548731 nova_compute[232433]: 2025-12-06 07:17:43.687 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:17:43 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:17:43.690 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap83f8ee1c-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:17:43 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:17:43.690 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  6 02:17:43 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:17:43.691 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap83f8ee1c-b0, col_values=(('external_ids', {'iface-id': 'aaa2a250-1a44-4385-bf81-2c758e7e09fb'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:17:43 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:17:43.691 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  6 02:17:43 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:17:43.692 143965 INFO neutron.agent.ovn.metadata.agent [-] Port 0afb38e3-893f-4379-98a8-4a56e2b84f9d in datapath 83f8ee1c-bee2-4425-8792-4c822f63072c unbound from our chassis#033[00m
Dec  6 02:17:43 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:17:43.694 143965 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 83f8ee1c-bee2-4425-8792-4c822f63072c#033[00m
Dec  6 02:17:43 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:17:43.710 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[9806a3e7-d83c-4b9d-a1a1-0a368a392cc7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:17:43 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:17:43.738 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[afef82d9-73a4-454e-b2cf-d1c8d22258e6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:17:43 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:17:43.740 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[a3664917-f7c5-451d-86d1-693217debf7a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:17:43 np0005548731 nova_compute[232433]: 2025-12-06 07:17:43.741 232437 DEBUG nova.compute.manager [req-f4f4d881-11ea-44b0-a17c-b92dee62f60c req-a41e1f3e-97ca-4433-a0f2-3f4585faed5f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 067b423b-4ac2-4ca9-9000-44b6fb2af34e] Received event network-vif-plugged-509f7f18-31bd-42cd-83f5-e9b62a4e200b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:17:43 np0005548731 nova_compute[232433]: 2025-12-06 07:17:43.741 232437 DEBUG oslo_concurrency.lockutils [req-f4f4d881-11ea-44b0-a17c-b92dee62f60c req-a41e1f3e-97ca-4433-a0f2-3f4585faed5f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "067b423b-4ac2-4ca9-9000-44b6fb2af34e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:17:43 np0005548731 nova_compute[232433]: 2025-12-06 07:17:43.742 232437 DEBUG oslo_concurrency.lockutils [req-f4f4d881-11ea-44b0-a17c-b92dee62f60c req-a41e1f3e-97ca-4433-a0f2-3f4585faed5f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "067b423b-4ac2-4ca9-9000-44b6fb2af34e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:17:43 np0005548731 nova_compute[232433]: 2025-12-06 07:17:43.742 232437 DEBUG oslo_concurrency.lockutils [req-f4f4d881-11ea-44b0-a17c-b92dee62f60c req-a41e1f3e-97ca-4433-a0f2-3f4585faed5f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "067b423b-4ac2-4ca9-9000-44b6fb2af34e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:17:43 np0005548731 nova_compute[232433]: 2025-12-06 07:17:43.743 232437 DEBUG nova.compute.manager [req-f4f4d881-11ea-44b0-a17c-b92dee62f60c req-a41e1f3e-97ca-4433-a0f2-3f4585faed5f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 067b423b-4ac2-4ca9-9000-44b6fb2af34e] Processing event network-vif-plugged-509f7f18-31bd-42cd-83f5-e9b62a4e200b _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Dec  6 02:17:43 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:17:43.766 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[f9cc726e-f00f-49f6-b943-fa96eed0de20]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:17:43 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:17:43.784 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[87f029be-b21e-464a-bed2-7ba92c0791a3]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap83f8ee1c-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:53:d5:41'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 6, 'tx_packets': 7, 'rx_bytes': 532, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 6, 'tx_packets': 7, 'rx_bytes': 532, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 87], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 572304, 'reachable_time': 31290, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 263730, 'error': None, 'target': 'ovnmeta-83f8ee1c-bee2-4425-8792-4c822f63072c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:17:43 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:17:43.799 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[682783f6-581e-4c11-abad-98bbf775f7ae]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap83f8ee1c-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 572316, 'tstamp': 572316}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 263731, 'error': None, 'target': 'ovnmeta-83f8ee1c-bee2-4425-8792-4c822f63072c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 24, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.1.1.2'], ['IFA_LOCAL', '10.1.1.2'], ['IFA_BROADCAST', '10.1.1.255'], ['IFA_LABEL', 'tap83f8ee1c-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 572319, 'tstamp': 572319}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 263731, 'error': None, 'target': 'ovnmeta-83f8ee1c-bee2-4425-8792-4c822f63072c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:17:43 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:17:43.801 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap83f8ee1c-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:17:43 np0005548731 nova_compute[232433]: 2025-12-06 07:17:43.802 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:17:43 np0005548731 nova_compute[232433]: 2025-12-06 07:17:43.803 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:17:43 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:17:43.803 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap83f8ee1c-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:17:43 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:17:43.804 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  6 02:17:43 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:17:43.804 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap83f8ee1c-b0, col_values=(('external_ids', {'iface-id': 'aaa2a250-1a44-4385-bf81-2c758e7e09fb'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:17:43 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:17:43.804 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  6 02:17:43 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:17:43.805 143965 INFO neutron.agent.ovn.metadata.agent [-] Port 5984d840-e1cd-45dc-87cb-a337e73a753c in datapath 83f8ee1c-bee2-4425-8792-4c822f63072c unbound from our chassis#033[00m
Dec  6 02:17:43 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:17:43.807 143965 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 83f8ee1c-bee2-4425-8792-4c822f63072c#033[00m
Dec  6 02:17:43 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:17:43.821 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[31cf01b6-3e38-4fc6-a8b9-7a9fad617720]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:17:43 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:17:43.848 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[6690e2a1-43cd-4da5-9d37-6724d1c5d411]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:17:43 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:17:43.851 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[9d718f28-df0e-4b16-bcf3-f0115852d898]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:17:43 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:17:43.875 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[2e2e80a7-5ab6-44b4-82c1-b9a5a26e70f5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:17:43 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:17:43.889 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[08d557ee-776a-4a80-a646-620e827b48af]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap83f8ee1c-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:53:d5:41'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 6, 'tx_packets': 9, 'rx_bytes': 532, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 6, 'tx_packets': 9, 'rx_bytes': 532, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 87], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 572304, 'reachable_time': 31290, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 263737, 'error': None, 'target': 'ovnmeta-83f8ee1c-bee2-4425-8792-4c822f63072c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:17:43 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:17:43.906 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[dc54e062-af05-43c6-b660-fbd290bd6ffb]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap83f8ee1c-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 572316, 'tstamp': 572316}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 263738, 'error': None, 'target': 'ovnmeta-83f8ee1c-bee2-4425-8792-4c822f63072c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 24, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.1.1.2'], ['IFA_LOCAL', '10.1.1.2'], ['IFA_BROADCAST', '10.1.1.255'], ['IFA_LABEL', 'tap83f8ee1c-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 572319, 'tstamp': 572319}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 263738, 'error': None, 'target': 'ovnmeta-83f8ee1c-bee2-4425-8792-4c822f63072c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:17:43 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:17:43.907 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap83f8ee1c-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:17:43 np0005548731 nova_compute[232433]: 2025-12-06 07:17:43.909 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:17:43 np0005548731 nova_compute[232433]: 2025-12-06 07:17:43.910 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:17:43 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:17:43.910 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap83f8ee1c-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:17:43 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:17:43.910 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  6 02:17:43 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:17:43.911 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap83f8ee1c-b0, col_values=(('external_ids', {'iface-id': 'aaa2a250-1a44-4385-bf81-2c758e7e09fb'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:17:43 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:17:43.911 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  6 02:17:43 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:17:43.912 143965 INFO neutron.agent.ovn.metadata.agent [-] Port 509f7f18-31bd-42cd-83f5-e9b62a4e200b in datapath 8856153c-fd33-4eca-913c-fbbc6c3bd29c unbound from our chassis#033[00m
Dec  6 02:17:43 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:17:43.914 143965 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 8856153c-fd33-4eca-913c-fbbc6c3bd29c#033[00m
Dec  6 02:17:43 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:17:43.923 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[32028381-b898-4df1-8f90-79540aec732e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:17:43 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:17:43.923 143965 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap8856153c-f1 in ovnmeta-8856153c-fd33-4eca-913c-fbbc6c3bd29c namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Dec  6 02:17:43 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:17:43.925 236668 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap8856153c-f0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Dec  6 02:17:43 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:17:43.925 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[f24879ae-692a-4bbb-85fa-73e40f6ae928]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:17:43 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:17:43.926 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[5601abb4-daa3-42b1-897c-e6ca48ae1e24]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:17:43 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:17:43.936 144079 DEBUG oslo.privsep.daemon [-] privsep: reply[b9abbf53-eaac-453d-b823-7fa501f47735]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:17:43 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:17:43.949 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[5137590c-f69b-4239-8326-23a96f25566d]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:17:43 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:17:43.984 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[1c3e9ffe-e82c-473a-8f00-3f9ba79f6a72]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:17:43 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:17:43.991 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[6dbcd75c-79bf-4a4d-a018-28e125d65c86]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:17:43 np0005548731 NetworkManager[49182]: <info>  [1765005463.9923] manager: (tap8856153c-f0): new Veth device (/org/freedesktop/NetworkManager/Devices/151)
Dec  6 02:17:44 np0005548731 systemd-udevd[263746]: Network interface NamePolicy= disabled on kernel command line.
Dec  6 02:17:44 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:17:44.024 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[f6aea7f4-4c60-4b87-90e7-7fbba8b29b53]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:17:44 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:17:44.028 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[c9eedcc7-cded-4d3e-bcc3-55280fe1b49d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:17:44 np0005548731 NetworkManager[49182]: <info>  [1765005464.0517] device (tap8856153c-f0): carrier: link connected
Dec  6 02:17:44 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:17:44.057 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[15621f8c-b4d6-4122-a6a6-b1ff18ade6f4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:17:44 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:17:44.080 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[83cf827e-e45a-4be9-8a0c-e4e7afd5744d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8856153c-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c8:c5:7f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 89], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 572605, 'reachable_time': 42493, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 263765, 'error': None, 'target': 'ovnmeta-8856153c-fd33-4eca-913c-fbbc6c3bd29c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:17:44 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:17:44.096 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[6b4c5781-69d9-4d4c-b53c-62d46c2de8d5]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fec8:c57f'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 572605, 'tstamp': 572605}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 263766, 'error': None, 'target': 'ovnmeta-8856153c-fd33-4eca-913c-fbbc6c3bd29c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:17:44 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:17:44.117 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[eb385e11-ac4f-4c26-bf0a-e9a7ca615291]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8856153c-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c8:c5:7f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 89], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 572605, 'reachable_time': 42493, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 148, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 148, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 263767, 'error': None, 'target': 'ovnmeta-8856153c-fd33-4eca-913c-fbbc6c3bd29c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:17:44 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 02:17:44 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:17:44.149 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[d3f24756-b94d-455d-bf71-5a91e4e6e7c4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:17:44 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:17:44.213 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[286e6efe-8dee-4962-a070-087e136433a3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:17:44 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:17:44.214 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8856153c-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:17:44 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:17:44.214 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  6 02:17:44 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:17:44.215 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8856153c-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:17:44 np0005548731 nova_compute[232433]: 2025-12-06 07:17:44.216 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:17:44 np0005548731 kernel: tap8856153c-f0: entered promiscuous mode
Dec  6 02:17:44 np0005548731 NetworkManager[49182]: <info>  [1765005464.2173] manager: (tap8856153c-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/152)
Dec  6 02:17:44 np0005548731 nova_compute[232433]: 2025-12-06 07:17:44.219 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:17:44 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:17:44.221 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap8856153c-f0, col_values=(('external_ids', {'iface-id': 'f620b05f-f6bf-438b-809a-9cd1eea6480a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:17:44 np0005548731 nova_compute[232433]: 2025-12-06 07:17:44.222 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:17:44 np0005548731 ovn_controller[133927]: 2025-12-06T07:17:44Z|00267|binding|INFO|Releasing lport f620b05f-f6bf-438b-809a-9cd1eea6480a from this chassis (sb_readonly=0)
Dec  6 02:17:44 np0005548731 nova_compute[232433]: 2025-12-06 07:17:44.236 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:17:44 np0005548731 nova_compute[232433]: 2025-12-06 07:17:44.237 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:17:44 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:17:44.237 143965 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/8856153c-fd33-4eca-913c-fbbc6c3bd29c.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/8856153c-fd33-4eca-913c-fbbc6c3bd29c.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Dec  6 02:17:44 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:17:44.238 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[f29f6f75-1ea3-46b7-923d-bf90cda47596]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:17:44 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:17:44.238 143965 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec  6 02:17:44 np0005548731 ovn_metadata_agent[143960]: global
Dec  6 02:17:44 np0005548731 ovn_metadata_agent[143960]:    log         /dev/log local0 debug
Dec  6 02:17:44 np0005548731 ovn_metadata_agent[143960]:    log-tag     haproxy-metadata-proxy-8856153c-fd33-4eca-913c-fbbc6c3bd29c
Dec  6 02:17:44 np0005548731 ovn_metadata_agent[143960]:    user        root
Dec  6 02:17:44 np0005548731 ovn_metadata_agent[143960]:    group       root
Dec  6 02:17:44 np0005548731 ovn_metadata_agent[143960]:    maxconn     1024
Dec  6 02:17:44 np0005548731 ovn_metadata_agent[143960]:    pidfile     /var/lib/neutron/external/pids/8856153c-fd33-4eca-913c-fbbc6c3bd29c.pid.haproxy
Dec  6 02:17:44 np0005548731 ovn_metadata_agent[143960]:    daemon
Dec  6 02:17:44 np0005548731 ovn_metadata_agent[143960]: 
Dec  6 02:17:44 np0005548731 ovn_metadata_agent[143960]: defaults
Dec  6 02:17:44 np0005548731 ovn_metadata_agent[143960]:    log global
Dec  6 02:17:44 np0005548731 ovn_metadata_agent[143960]:    mode http
Dec  6 02:17:44 np0005548731 ovn_metadata_agent[143960]:    option httplog
Dec  6 02:17:44 np0005548731 ovn_metadata_agent[143960]:    option dontlognull
Dec  6 02:17:44 np0005548731 ovn_metadata_agent[143960]:    option http-server-close
Dec  6 02:17:44 np0005548731 ovn_metadata_agent[143960]:    option forwardfor
Dec  6 02:17:44 np0005548731 ovn_metadata_agent[143960]:    retries                 3
Dec  6 02:17:44 np0005548731 ovn_metadata_agent[143960]:    timeout http-request    30s
Dec  6 02:17:44 np0005548731 ovn_metadata_agent[143960]:    timeout connect         30s
Dec  6 02:17:44 np0005548731 ovn_metadata_agent[143960]:    timeout client          32s
Dec  6 02:17:44 np0005548731 ovn_metadata_agent[143960]:    timeout server          32s
Dec  6 02:17:44 np0005548731 ovn_metadata_agent[143960]:    timeout http-keep-alive 30s
Dec  6 02:17:44 np0005548731 ovn_metadata_agent[143960]: 
Dec  6 02:17:44 np0005548731 ovn_metadata_agent[143960]: 
Dec  6 02:17:44 np0005548731 ovn_metadata_agent[143960]: listen listener
Dec  6 02:17:44 np0005548731 ovn_metadata_agent[143960]:    bind 169.254.169.254:80
Dec  6 02:17:44 np0005548731 ovn_metadata_agent[143960]:    server metadata /var/lib/neutron/metadata_proxy
Dec  6 02:17:44 np0005548731 ovn_metadata_agent[143960]:    http-request add-header X-OVN-Network-ID 8856153c-fd33-4eca-913c-fbbc6c3bd29c
Dec  6 02:17:44 np0005548731 ovn_metadata_agent[143960]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Dec  6 02:17:44 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:17:44.239 143965 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-8856153c-fd33-4eca-913c-fbbc6c3bd29c', 'env', 'PROCESS_TAG=haproxy-8856153c-fd33-4eca-913c-fbbc6c3bd29c', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/8856153c-fd33-4eca-913c-fbbc6c3bd29c.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Dec  6 02:17:44 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:17:44 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:17:44 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:17:44.572 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:17:44 np0005548731 nova_compute[232433]: 2025-12-06 07:17:44.604 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:17:44 np0005548731 podman[263801]: 2025-12-06 07:17:44.624332195 +0000 UTC m=+0.088477726 container create 13ee545b537ab401986c39bd7ee02219eb0b952114d767e801b4b4b3d6e20606 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8856153c-fd33-4eca-913c-fbbc6c3bd29c, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Dec  6 02:17:44 np0005548731 podman[263801]: 2025-12-06 07:17:44.557478097 +0000 UTC m=+0.021623648 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec  6 02:17:44 np0005548731 systemd[1]: Started libpod-conmon-13ee545b537ab401986c39bd7ee02219eb0b952114d767e801b4b4b3d6e20606.scope.
Dec  6 02:17:44 np0005548731 systemd[1]: Started libcrun container.
Dec  6 02:17:44 np0005548731 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/871924b6cb342e86e814aae8fb0d25c4f6afec51b9e6f508da1e9ffead9c3195/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec  6 02:17:44 np0005548731 podman[263801]: 2025-12-06 07:17:44.700346856 +0000 UTC m=+0.164492387 container init 13ee545b537ab401986c39bd7ee02219eb0b952114d767e801b4b4b3d6e20606 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8856153c-fd33-4eca-913c-fbbc6c3bd29c, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Dec  6 02:17:44 np0005548731 podman[263801]: 2025-12-06 07:17:44.705392959 +0000 UTC m=+0.169538490 container start 13ee545b537ab401986c39bd7ee02219eb0b952114d767e801b4b4b3d6e20606 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8856153c-fd33-4eca-913c-fbbc6c3bd29c, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Dec  6 02:17:44 np0005548731 neutron-haproxy-ovnmeta-8856153c-fd33-4eca-913c-fbbc6c3bd29c[263816]: [NOTICE]   (263820) : New worker (263822) forked
Dec  6 02:17:44 np0005548731 neutron-haproxy-ovnmeta-8856153c-fd33-4eca-913c-fbbc6c3bd29c[263816]: [NOTICE]   (263820) : Loading success.
Dec  6 02:17:44 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:17:44.756 143965 INFO neutron.agent.ovn.metadata.agent [-] Port 0355ba71-a598-4e22-a0b7-c5c10b61733e in datapath 8856153c-fd33-4eca-913c-fbbc6c3bd29c unbound from our chassis#033[00m
Dec  6 02:17:44 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:17:44.758 143965 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 8856153c-fd33-4eca-913c-fbbc6c3bd29c#033[00m
Dec  6 02:17:44 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:17:44.773 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[f973756e-b670-4852-b21c-01fd4f5bd3e1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:17:44 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:17:44.801 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[8ca485ad-3fcf-4e33-9e65-e0ea3454a22e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:17:44 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:17:44.804 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[fa75b4c8-a1b0-4d34-854a-d034f1e3a5b4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:17:44 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:17:44.833 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[21b20103-b1e4-493e-b23b-33d66c57a291]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:17:44 np0005548731 nova_compute[232433]: 2025-12-06 07:17:44.835 232437 DEBUG nova.compute.manager [req-e65c2780-8283-4d8e-b9de-03ee5336d9b8 req-20311482-3aa1-47a5-9a93-cc6424113bcd 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 067b423b-4ac2-4ca9-9000-44b6fb2af34e] Received event network-vif-plugged-0afb38e3-893f-4379-98a8-4a56e2b84f9d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:17:44 np0005548731 nova_compute[232433]: 2025-12-06 07:17:44.835 232437 DEBUG oslo_concurrency.lockutils [req-e65c2780-8283-4d8e-b9de-03ee5336d9b8 req-20311482-3aa1-47a5-9a93-cc6424113bcd 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "067b423b-4ac2-4ca9-9000-44b6fb2af34e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:17:44 np0005548731 nova_compute[232433]: 2025-12-06 07:17:44.836 232437 DEBUG oslo_concurrency.lockutils [req-e65c2780-8283-4d8e-b9de-03ee5336d9b8 req-20311482-3aa1-47a5-9a93-cc6424113bcd 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "067b423b-4ac2-4ca9-9000-44b6fb2af34e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:17:44 np0005548731 nova_compute[232433]: 2025-12-06 07:17:44.836 232437 DEBUG oslo_concurrency.lockutils [req-e65c2780-8283-4d8e-b9de-03ee5336d9b8 req-20311482-3aa1-47a5-9a93-cc6424113bcd 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "067b423b-4ac2-4ca9-9000-44b6fb2af34e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:17:44 np0005548731 nova_compute[232433]: 2025-12-06 07:17:44.836 232437 DEBUG nova.compute.manager [req-e65c2780-8283-4d8e-b9de-03ee5336d9b8 req-20311482-3aa1-47a5-9a93-cc6424113bcd 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 067b423b-4ac2-4ca9-9000-44b6fb2af34e] No event matching network-vif-plugged-0afb38e3-893f-4379-98a8-4a56e2b84f9d in dict_keys([('network-vif-plugged', '5408643b-7986-461d-867d-9f6545eeabb3'), ('network-vif-plugged', 'e27e5b98-b454-4f5a-90a9-3f7a5c8f1a91'), ('network-vif-plugged', '5984d840-e1cd-45dc-87cb-a337e73a753c'), ('network-vif-plugged', '0355ba71-a598-4e22-a0b7-c5c10b61733e')]) pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:325#033[00m
Dec  6 02:17:44 np0005548731 nova_compute[232433]: 2025-12-06 07:17:44.836 232437 WARNING nova.compute.manager [req-e65c2780-8283-4d8e-b9de-03ee5336d9b8 req-20311482-3aa1-47a5-9a93-cc6424113bcd 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 067b423b-4ac2-4ca9-9000-44b6fb2af34e] Received unexpected event network-vif-plugged-0afb38e3-893f-4379-98a8-4a56e2b84f9d for instance with vm_state building and task_state spawning.#033[00m
Dec  6 02:17:44 np0005548731 nova_compute[232433]: 2025-12-06 07:17:44.836 232437 DEBUG nova.compute.manager [req-e65c2780-8283-4d8e-b9de-03ee5336d9b8 req-20311482-3aa1-47a5-9a93-cc6424113bcd 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 067b423b-4ac2-4ca9-9000-44b6fb2af34e] Received event network-vif-plugged-5984d840-e1cd-45dc-87cb-a337e73a753c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:17:44 np0005548731 nova_compute[232433]: 2025-12-06 07:17:44.837 232437 DEBUG oslo_concurrency.lockutils [req-e65c2780-8283-4d8e-b9de-03ee5336d9b8 req-20311482-3aa1-47a5-9a93-cc6424113bcd 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "067b423b-4ac2-4ca9-9000-44b6fb2af34e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:17:44 np0005548731 nova_compute[232433]: 2025-12-06 07:17:44.837 232437 DEBUG oslo_concurrency.lockutils [req-e65c2780-8283-4d8e-b9de-03ee5336d9b8 req-20311482-3aa1-47a5-9a93-cc6424113bcd 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "067b423b-4ac2-4ca9-9000-44b6fb2af34e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:17:44 np0005548731 nova_compute[232433]: 2025-12-06 07:17:44.837 232437 DEBUG oslo_concurrency.lockutils [req-e65c2780-8283-4d8e-b9de-03ee5336d9b8 req-20311482-3aa1-47a5-9a93-cc6424113bcd 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "067b423b-4ac2-4ca9-9000-44b6fb2af34e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:17:44 np0005548731 nova_compute[232433]: 2025-12-06 07:17:44.837 232437 DEBUG nova.compute.manager [req-e65c2780-8283-4d8e-b9de-03ee5336d9b8 req-20311482-3aa1-47a5-9a93-cc6424113bcd 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 067b423b-4ac2-4ca9-9000-44b6fb2af34e] Processing event network-vif-plugged-5984d840-e1cd-45dc-87cb-a337e73a753c _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Dec  6 02:17:44 np0005548731 nova_compute[232433]: 2025-12-06 07:17:44.837 232437 DEBUG nova.compute.manager [req-e65c2780-8283-4d8e-b9de-03ee5336d9b8 req-20311482-3aa1-47a5-9a93-cc6424113bcd 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 067b423b-4ac2-4ca9-9000-44b6fb2af34e] Received event network-vif-plugged-5984d840-e1cd-45dc-87cb-a337e73a753c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:17:44 np0005548731 nova_compute[232433]: 2025-12-06 07:17:44.838 232437 DEBUG oslo_concurrency.lockutils [req-e65c2780-8283-4d8e-b9de-03ee5336d9b8 req-20311482-3aa1-47a5-9a93-cc6424113bcd 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "067b423b-4ac2-4ca9-9000-44b6fb2af34e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:17:44 np0005548731 nova_compute[232433]: 2025-12-06 07:17:44.838 232437 DEBUG oslo_concurrency.lockutils [req-e65c2780-8283-4d8e-b9de-03ee5336d9b8 req-20311482-3aa1-47a5-9a93-cc6424113bcd 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "067b423b-4ac2-4ca9-9000-44b6fb2af34e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:17:44 np0005548731 nova_compute[232433]: 2025-12-06 07:17:44.838 232437 DEBUG oslo_concurrency.lockutils [req-e65c2780-8283-4d8e-b9de-03ee5336d9b8 req-20311482-3aa1-47a5-9a93-cc6424113bcd 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "067b423b-4ac2-4ca9-9000-44b6fb2af34e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:17:44 np0005548731 nova_compute[232433]: 2025-12-06 07:17:44.838 232437 DEBUG nova.compute.manager [req-e65c2780-8283-4d8e-b9de-03ee5336d9b8 req-20311482-3aa1-47a5-9a93-cc6424113bcd 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 067b423b-4ac2-4ca9-9000-44b6fb2af34e] No event matching network-vif-plugged-5984d840-e1cd-45dc-87cb-a337e73a753c in dict_keys([('network-vif-plugged', '5408643b-7986-461d-867d-9f6545eeabb3'), ('network-vif-plugged', 'e27e5b98-b454-4f5a-90a9-3f7a5c8f1a91'), ('network-vif-plugged', '0355ba71-a598-4e22-a0b7-c5c10b61733e')]) pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:325#033[00m
Dec  6 02:17:44 np0005548731 nova_compute[232433]: 2025-12-06 07:17:44.838 232437 WARNING nova.compute.manager [req-e65c2780-8283-4d8e-b9de-03ee5336d9b8 req-20311482-3aa1-47a5-9a93-cc6424113bcd 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 067b423b-4ac2-4ca9-9000-44b6fb2af34e] Received unexpected event network-vif-plugged-5984d840-e1cd-45dc-87cb-a337e73a753c for instance with vm_state building and task_state spawning.#033[00m
Dec  6 02:17:44 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:17:44.851 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[78a7d89e-d4ef-4211-bacc-2085081d0bb3]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8856153c-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c8:c5:7f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 4, 'rx_bytes': 176, 'tx_bytes': 264, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 4, 'rx_bytes': 176, 'tx_bytes': 264, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 89], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 572605, 'reachable_time': 42493, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 148, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 152, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 148, 'outmcastoctets': 152, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 263836, 'error': None, 'target': 'ovnmeta-8856153c-fd33-4eca-913c-fbbc6c3bd29c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:17:44 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:17:44.869 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[1ea90aa4-c1ea-483b-9c86-13af3227f9c1]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap8856153c-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 572618, 'tstamp': 572618}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 263837, 'error': None, 'target': 'ovnmeta-8856153c-fd33-4eca-913c-fbbc6c3bd29c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 24, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.2.2.2'], ['IFA_LOCAL', '10.2.2.2'], ['IFA_BROADCAST', '10.2.2.255'], ['IFA_LABEL', 'tap8856153c-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 572621, 'tstamp': 572621}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 263837, 'error': None, 'target': 'ovnmeta-8856153c-fd33-4eca-913c-fbbc6c3bd29c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:17:44 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:17:44.873 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8856153c-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:17:44 np0005548731 nova_compute[232433]: 2025-12-06 07:17:44.876 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:17:44 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:17:44.877 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8856153c-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:17:44 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:17:44.878 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  6 02:17:44 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:17:44.878 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap8856153c-f0, col_values=(('external_ids', {'iface-id': 'f620b05f-f6bf-438b-809a-9cd1eea6480a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:17:44 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:17:44.879 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  6 02:17:44 np0005548731 nova_compute[232433]: 2025-12-06 07:17:44.931 232437 DEBUG nova.compute.manager [req-b3895c77-5445-490f-8946-b387f58b68d0 req-ecbb1b12-16f3-4c43-b68f-155fb71f2c0d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 067b423b-4ac2-4ca9-9000-44b6fb2af34e] Received event network-vif-plugged-0f3261df-8a99-4884-8db8-4ab5250dafc3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:17:44 np0005548731 nova_compute[232433]: 2025-12-06 07:17:44.932 232437 DEBUG oslo_concurrency.lockutils [req-b3895c77-5445-490f-8946-b387f58b68d0 req-ecbb1b12-16f3-4c43-b68f-155fb71f2c0d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "067b423b-4ac2-4ca9-9000-44b6fb2af34e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:17:44 np0005548731 nova_compute[232433]: 2025-12-06 07:17:44.932 232437 DEBUG oslo_concurrency.lockutils [req-b3895c77-5445-490f-8946-b387f58b68d0 req-ecbb1b12-16f3-4c43-b68f-155fb71f2c0d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "067b423b-4ac2-4ca9-9000-44b6fb2af34e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:17:44 np0005548731 nova_compute[232433]: 2025-12-06 07:17:44.932 232437 DEBUG oslo_concurrency.lockutils [req-b3895c77-5445-490f-8946-b387f58b68d0 req-ecbb1b12-16f3-4c43-b68f-155fb71f2c0d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "067b423b-4ac2-4ca9-9000-44b6fb2af34e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:17:44 np0005548731 nova_compute[232433]: 2025-12-06 07:17:44.933 232437 DEBUG nova.compute.manager [req-b3895c77-5445-490f-8946-b387f58b68d0 req-ecbb1b12-16f3-4c43-b68f-155fb71f2c0d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 067b423b-4ac2-4ca9-9000-44b6fb2af34e] No event matching network-vif-plugged-0f3261df-8a99-4884-8db8-4ab5250dafc3 in dict_keys([('network-vif-plugged', '5408643b-7986-461d-867d-9f6545eeabb3'), ('network-vif-plugged', 'e27e5b98-b454-4f5a-90a9-3f7a5c8f1a91'), ('network-vif-plugged', '0355ba71-a598-4e22-a0b7-c5c10b61733e')]) pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:325#033[00m
Dec  6 02:17:44 np0005548731 nova_compute[232433]: 2025-12-06 07:17:44.933 232437 WARNING nova.compute.manager [req-b3895c77-5445-490f-8946-b387f58b68d0 req-ecbb1b12-16f3-4c43-b68f-155fb71f2c0d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 067b423b-4ac2-4ca9-9000-44b6fb2af34e] Received unexpected event network-vif-plugged-0f3261df-8a99-4884-8db8-4ab5250dafc3 for instance with vm_state building and task_state spawning.#033[00m
Dec  6 02:17:44 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:17:44 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 02:17:44 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:17:44.951 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 02:17:45 np0005548731 nova_compute[232433]: 2025-12-06 07:17:45.831 232437 DEBUG nova.compute.manager [req-38410d89-e3db-4f9f-b945-c3a9c3402a3d req-73f2aec0-b201-40c0-b277-86c3fed27836 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 067b423b-4ac2-4ca9-9000-44b6fb2af34e] Received event network-vif-plugged-5408643b-7986-461d-867d-9f6545eeabb3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:17:45 np0005548731 nova_compute[232433]: 2025-12-06 07:17:45.831 232437 DEBUG oslo_concurrency.lockutils [req-38410d89-e3db-4f9f-b945-c3a9c3402a3d req-73f2aec0-b201-40c0-b277-86c3fed27836 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "067b423b-4ac2-4ca9-9000-44b6fb2af34e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:17:45 np0005548731 nova_compute[232433]: 2025-12-06 07:17:45.831 232437 DEBUG oslo_concurrency.lockutils [req-38410d89-e3db-4f9f-b945-c3a9c3402a3d req-73f2aec0-b201-40c0-b277-86c3fed27836 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "067b423b-4ac2-4ca9-9000-44b6fb2af34e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:17:45 np0005548731 nova_compute[232433]: 2025-12-06 07:17:45.832 232437 DEBUG oslo_concurrency.lockutils [req-38410d89-e3db-4f9f-b945-c3a9c3402a3d req-73f2aec0-b201-40c0-b277-86c3fed27836 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "067b423b-4ac2-4ca9-9000-44b6fb2af34e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:17:45 np0005548731 nova_compute[232433]: 2025-12-06 07:17:45.832 232437 DEBUG nova.compute.manager [req-38410d89-e3db-4f9f-b945-c3a9c3402a3d req-73f2aec0-b201-40c0-b277-86c3fed27836 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 067b423b-4ac2-4ca9-9000-44b6fb2af34e] Processing event network-vif-plugged-5408643b-7986-461d-867d-9f6545eeabb3 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Dec  6 02:17:45 np0005548731 nova_compute[232433]: 2025-12-06 07:17:45.832 232437 DEBUG nova.compute.manager [req-38410d89-e3db-4f9f-b945-c3a9c3402a3d req-73f2aec0-b201-40c0-b277-86c3fed27836 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 067b423b-4ac2-4ca9-9000-44b6fb2af34e] Received event network-vif-plugged-5408643b-7986-461d-867d-9f6545eeabb3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:17:45 np0005548731 nova_compute[232433]: 2025-12-06 07:17:45.832 232437 DEBUG oslo_concurrency.lockutils [req-38410d89-e3db-4f9f-b945-c3a9c3402a3d req-73f2aec0-b201-40c0-b277-86c3fed27836 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "067b423b-4ac2-4ca9-9000-44b6fb2af34e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:17:45 np0005548731 nova_compute[232433]: 2025-12-06 07:17:45.832 232437 DEBUG oslo_concurrency.lockutils [req-38410d89-e3db-4f9f-b945-c3a9c3402a3d req-73f2aec0-b201-40c0-b277-86c3fed27836 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "067b423b-4ac2-4ca9-9000-44b6fb2af34e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:17:45 np0005548731 nova_compute[232433]: 2025-12-06 07:17:45.832 232437 DEBUG oslo_concurrency.lockutils [req-38410d89-e3db-4f9f-b945-c3a9c3402a3d req-73f2aec0-b201-40c0-b277-86c3fed27836 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "067b423b-4ac2-4ca9-9000-44b6fb2af34e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:17:45 np0005548731 nova_compute[232433]: 2025-12-06 07:17:45.833 232437 DEBUG nova.compute.manager [req-38410d89-e3db-4f9f-b945-c3a9c3402a3d req-73f2aec0-b201-40c0-b277-86c3fed27836 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 067b423b-4ac2-4ca9-9000-44b6fb2af34e] No event matching network-vif-plugged-5408643b-7986-461d-867d-9f6545eeabb3 in dict_keys([('network-vif-plugged', 'e27e5b98-b454-4f5a-90a9-3f7a5c8f1a91'), ('network-vif-plugged', '0355ba71-a598-4e22-a0b7-c5c10b61733e')]) pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:325#033[00m
Dec  6 02:17:45 np0005548731 nova_compute[232433]: 2025-12-06 07:17:45.833 232437 WARNING nova.compute.manager [req-38410d89-e3db-4f9f-b945-c3a9c3402a3d req-73f2aec0-b201-40c0-b277-86c3fed27836 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 067b423b-4ac2-4ca9-9000-44b6fb2af34e] Received unexpected event network-vif-plugged-5408643b-7986-461d-867d-9f6545eeabb3 for instance with vm_state building and task_state spawning.#033[00m
Dec  6 02:17:45 np0005548731 nova_compute[232433]: 2025-12-06 07:17:45.833 232437 DEBUG nova.compute.manager [req-38410d89-e3db-4f9f-b945-c3a9c3402a3d req-73f2aec0-b201-40c0-b277-86c3fed27836 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 067b423b-4ac2-4ca9-9000-44b6fb2af34e] Received event network-vif-plugged-e27e5b98-b454-4f5a-90a9-3f7a5c8f1a91 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:17:45 np0005548731 nova_compute[232433]: 2025-12-06 07:17:45.833 232437 DEBUG oslo_concurrency.lockutils [req-38410d89-e3db-4f9f-b945-c3a9c3402a3d req-73f2aec0-b201-40c0-b277-86c3fed27836 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "067b423b-4ac2-4ca9-9000-44b6fb2af34e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:17:45 np0005548731 nova_compute[232433]: 2025-12-06 07:17:45.833 232437 DEBUG oslo_concurrency.lockutils [req-38410d89-e3db-4f9f-b945-c3a9c3402a3d req-73f2aec0-b201-40c0-b277-86c3fed27836 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "067b423b-4ac2-4ca9-9000-44b6fb2af34e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:17:45 np0005548731 nova_compute[232433]: 2025-12-06 07:17:45.833 232437 DEBUG oslo_concurrency.lockutils [req-38410d89-e3db-4f9f-b945-c3a9c3402a3d req-73f2aec0-b201-40c0-b277-86c3fed27836 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "067b423b-4ac2-4ca9-9000-44b6fb2af34e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:17:45 np0005548731 nova_compute[232433]: 2025-12-06 07:17:45.833 232437 DEBUG nova.compute.manager [req-38410d89-e3db-4f9f-b945-c3a9c3402a3d req-73f2aec0-b201-40c0-b277-86c3fed27836 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 067b423b-4ac2-4ca9-9000-44b6fb2af34e] Processing event network-vif-plugged-e27e5b98-b454-4f5a-90a9-3f7a5c8f1a91 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Dec  6 02:17:45 np0005548731 nova_compute[232433]: 2025-12-06 07:17:45.834 232437 DEBUG nova.compute.manager [req-38410d89-e3db-4f9f-b945-c3a9c3402a3d req-73f2aec0-b201-40c0-b277-86c3fed27836 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 067b423b-4ac2-4ca9-9000-44b6fb2af34e] Received event network-vif-plugged-e27e5b98-b454-4f5a-90a9-3f7a5c8f1a91 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:17:45 np0005548731 nova_compute[232433]: 2025-12-06 07:17:45.834 232437 DEBUG oslo_concurrency.lockutils [req-38410d89-e3db-4f9f-b945-c3a9c3402a3d req-73f2aec0-b201-40c0-b277-86c3fed27836 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "067b423b-4ac2-4ca9-9000-44b6fb2af34e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:17:45 np0005548731 nova_compute[232433]: 2025-12-06 07:17:45.834 232437 DEBUG oslo_concurrency.lockutils [req-38410d89-e3db-4f9f-b945-c3a9c3402a3d req-73f2aec0-b201-40c0-b277-86c3fed27836 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "067b423b-4ac2-4ca9-9000-44b6fb2af34e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:17:45 np0005548731 nova_compute[232433]: 2025-12-06 07:17:45.834 232437 DEBUG oslo_concurrency.lockutils [req-38410d89-e3db-4f9f-b945-c3a9c3402a3d req-73f2aec0-b201-40c0-b277-86c3fed27836 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "067b423b-4ac2-4ca9-9000-44b6fb2af34e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:17:45 np0005548731 nova_compute[232433]: 2025-12-06 07:17:45.834 232437 DEBUG nova.compute.manager [req-38410d89-e3db-4f9f-b945-c3a9c3402a3d req-73f2aec0-b201-40c0-b277-86c3fed27836 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 067b423b-4ac2-4ca9-9000-44b6fb2af34e] No event matching network-vif-plugged-e27e5b98-b454-4f5a-90a9-3f7a5c8f1a91 in dict_keys([('network-vif-plugged', '0355ba71-a598-4e22-a0b7-c5c10b61733e')]) pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:325#033[00m
Dec  6 02:17:45 np0005548731 nova_compute[232433]: 2025-12-06 07:17:45.834 232437 WARNING nova.compute.manager [req-38410d89-e3db-4f9f-b945-c3a9c3402a3d req-73f2aec0-b201-40c0-b277-86c3fed27836 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 067b423b-4ac2-4ca9-9000-44b6fb2af34e] Received unexpected event network-vif-plugged-e27e5b98-b454-4f5a-90a9-3f7a5c8f1a91 for instance with vm_state building and task_state spawning.#033[00m
Dec  6 02:17:45 np0005548731 nova_compute[232433]: 2025-12-06 07:17:45.834 232437 DEBUG nova.compute.manager [req-38410d89-e3db-4f9f-b945-c3a9c3402a3d req-73f2aec0-b201-40c0-b277-86c3fed27836 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 067b423b-4ac2-4ca9-9000-44b6fb2af34e] Received event network-vif-plugged-0355ba71-a598-4e22-a0b7-c5c10b61733e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:17:45 np0005548731 nova_compute[232433]: 2025-12-06 07:17:45.835 232437 DEBUG oslo_concurrency.lockutils [req-38410d89-e3db-4f9f-b945-c3a9c3402a3d req-73f2aec0-b201-40c0-b277-86c3fed27836 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "067b423b-4ac2-4ca9-9000-44b6fb2af34e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:17:45 np0005548731 nova_compute[232433]: 2025-12-06 07:17:45.835 232437 DEBUG oslo_concurrency.lockutils [req-38410d89-e3db-4f9f-b945-c3a9c3402a3d req-73f2aec0-b201-40c0-b277-86c3fed27836 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "067b423b-4ac2-4ca9-9000-44b6fb2af34e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:17:45 np0005548731 nova_compute[232433]: 2025-12-06 07:17:45.835 232437 DEBUG oslo_concurrency.lockutils [req-38410d89-e3db-4f9f-b945-c3a9c3402a3d req-73f2aec0-b201-40c0-b277-86c3fed27836 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "067b423b-4ac2-4ca9-9000-44b6fb2af34e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:17:45 np0005548731 nova_compute[232433]: 2025-12-06 07:17:45.835 232437 DEBUG nova.compute.manager [req-38410d89-e3db-4f9f-b945-c3a9c3402a3d req-73f2aec0-b201-40c0-b277-86c3fed27836 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 067b423b-4ac2-4ca9-9000-44b6fb2af34e] Processing event network-vif-plugged-0355ba71-a598-4e22-a0b7-c5c10b61733e _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Dec  6 02:17:45 np0005548731 nova_compute[232433]: 2025-12-06 07:17:45.835 232437 DEBUG nova.compute.manager [req-38410d89-e3db-4f9f-b945-c3a9c3402a3d req-73f2aec0-b201-40c0-b277-86c3fed27836 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 067b423b-4ac2-4ca9-9000-44b6fb2af34e] Received event network-vif-plugged-0355ba71-a598-4e22-a0b7-c5c10b61733e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:17:45 np0005548731 nova_compute[232433]: 2025-12-06 07:17:45.836 232437 DEBUG oslo_concurrency.lockutils [req-38410d89-e3db-4f9f-b945-c3a9c3402a3d req-73f2aec0-b201-40c0-b277-86c3fed27836 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "067b423b-4ac2-4ca9-9000-44b6fb2af34e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:17:45 np0005548731 nova_compute[232433]: 2025-12-06 07:17:45.836 232437 DEBUG oslo_concurrency.lockutils [req-38410d89-e3db-4f9f-b945-c3a9c3402a3d req-73f2aec0-b201-40c0-b277-86c3fed27836 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "067b423b-4ac2-4ca9-9000-44b6fb2af34e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:17:45 np0005548731 nova_compute[232433]: 2025-12-06 07:17:45.836 232437 DEBUG oslo_concurrency.lockutils [req-38410d89-e3db-4f9f-b945-c3a9c3402a3d req-73f2aec0-b201-40c0-b277-86c3fed27836 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "067b423b-4ac2-4ca9-9000-44b6fb2af34e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:17:45 np0005548731 nova_compute[232433]: 2025-12-06 07:17:45.836 232437 DEBUG nova.compute.manager [req-38410d89-e3db-4f9f-b945-c3a9c3402a3d req-73f2aec0-b201-40c0-b277-86c3fed27836 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 067b423b-4ac2-4ca9-9000-44b6fb2af34e] No waiting events found dispatching network-vif-plugged-0355ba71-a598-4e22-a0b7-c5c10b61733e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 02:17:45 np0005548731 nova_compute[232433]: 2025-12-06 07:17:45.836 232437 WARNING nova.compute.manager [req-38410d89-e3db-4f9f-b945-c3a9c3402a3d req-73f2aec0-b201-40c0-b277-86c3fed27836 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 067b423b-4ac2-4ca9-9000-44b6fb2af34e] Received unexpected event network-vif-plugged-0355ba71-a598-4e22-a0b7-c5c10b61733e for instance with vm_state building and task_state spawning.#033[00m
Dec  6 02:17:45 np0005548731 nova_compute[232433]: 2025-12-06 07:17:45.837 232437 DEBUG nova.compute.manager [None req-1b05c8c5-02cf-4837-8e67-324222e26c9b 197a9b0ee1db487d82542eb31e84f33e 8f938a037b8141cf9408cbf6f5cd081d - - default default] [instance: 067b423b-4ac2-4ca9-9000-44b6fb2af34e] Instance event wait completed in 4 seconds for network-vif-plugged,network-vif-plugged,network-vif-plugged,network-vif-plugged,network-vif-plugged,network-vif-plugged,network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec  6 02:17:45 np0005548731 nova_compute[232433]: 2025-12-06 07:17:45.841 232437 DEBUG nova.virt.driver [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Emitting event <LifecycleEvent: 1765005465.8412971, 067b423b-4ac2-4ca9-9000-44b6fb2af34e => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 02:17:45 np0005548731 nova_compute[232433]: 2025-12-06 07:17:45.841 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 067b423b-4ac2-4ca9-9000-44b6fb2af34e] VM Resumed (Lifecycle Event)#033[00m
Dec  6 02:17:45 np0005548731 nova_compute[232433]: 2025-12-06 07:17:45.843 232437 DEBUG nova.virt.libvirt.driver [None req-1b05c8c5-02cf-4837-8e67-324222e26c9b 197a9b0ee1db487d82542eb31e84f33e 8f938a037b8141cf9408cbf6f5cd081d - - default default] [instance: 067b423b-4ac2-4ca9-9000-44b6fb2af34e] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Dec  6 02:17:45 np0005548731 nova_compute[232433]: 2025-12-06 07:17:45.847 232437 INFO nova.virt.libvirt.driver [-] [instance: 067b423b-4ac2-4ca9-9000-44b6fb2af34e] Instance spawned successfully.#033[00m
Dec  6 02:17:45 np0005548731 nova_compute[232433]: 2025-12-06 07:17:45.848 232437 DEBUG nova.virt.libvirt.driver [None req-1b05c8c5-02cf-4837-8e67-324222e26c9b 197a9b0ee1db487d82542eb31e84f33e 8f938a037b8141cf9408cbf6f5cd081d - - default default] [instance: 067b423b-4ac2-4ca9-9000-44b6fb2af34e] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Dec  6 02:17:45 np0005548731 nova_compute[232433]: 2025-12-06 07:17:45.868 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 067b423b-4ac2-4ca9-9000-44b6fb2af34e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:17:45 np0005548731 nova_compute[232433]: 2025-12-06 07:17:45.872 232437 DEBUG nova.virt.libvirt.driver [None req-1b05c8c5-02cf-4837-8e67-324222e26c9b 197a9b0ee1db487d82542eb31e84f33e 8f938a037b8141cf9408cbf6f5cd081d - - default default] [instance: 067b423b-4ac2-4ca9-9000-44b6fb2af34e] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 02:17:45 np0005548731 nova_compute[232433]: 2025-12-06 07:17:45.873 232437 DEBUG nova.virt.libvirt.driver [None req-1b05c8c5-02cf-4837-8e67-324222e26c9b 197a9b0ee1db487d82542eb31e84f33e 8f938a037b8141cf9408cbf6f5cd081d - - default default] [instance: 067b423b-4ac2-4ca9-9000-44b6fb2af34e] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 02:17:45 np0005548731 nova_compute[232433]: 2025-12-06 07:17:45.873 232437 DEBUG nova.virt.libvirt.driver [None req-1b05c8c5-02cf-4837-8e67-324222e26c9b 197a9b0ee1db487d82542eb31e84f33e 8f938a037b8141cf9408cbf6f5cd081d - - default default] [instance: 067b423b-4ac2-4ca9-9000-44b6fb2af34e] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 02:17:45 np0005548731 nova_compute[232433]: 2025-12-06 07:17:45.873 232437 DEBUG nova.virt.libvirt.driver [None req-1b05c8c5-02cf-4837-8e67-324222e26c9b 197a9b0ee1db487d82542eb31e84f33e 8f938a037b8141cf9408cbf6f5cd081d - - default default] [instance: 067b423b-4ac2-4ca9-9000-44b6fb2af34e] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 02:17:45 np0005548731 nova_compute[232433]: 2025-12-06 07:17:45.874 232437 DEBUG nova.virt.libvirt.driver [None req-1b05c8c5-02cf-4837-8e67-324222e26c9b 197a9b0ee1db487d82542eb31e84f33e 8f938a037b8141cf9408cbf6f5cd081d - - default default] [instance: 067b423b-4ac2-4ca9-9000-44b6fb2af34e] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 02:17:45 np0005548731 nova_compute[232433]: 2025-12-06 07:17:45.874 232437 DEBUG nova.virt.libvirt.driver [None req-1b05c8c5-02cf-4837-8e67-324222e26c9b 197a9b0ee1db487d82542eb31e84f33e 8f938a037b8141cf9408cbf6f5cd081d - - default default] [instance: 067b423b-4ac2-4ca9-9000-44b6fb2af34e] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 02:17:45 np0005548731 nova_compute[232433]: 2025-12-06 07:17:45.878 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 067b423b-4ac2-4ca9-9000-44b6fb2af34e] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  6 02:17:45 np0005548731 nova_compute[232433]: 2025-12-06 07:17:45.914 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 067b423b-4ac2-4ca9-9000-44b6fb2af34e] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  6 02:17:45 np0005548731 nova_compute[232433]: 2025-12-06 07:17:45.929 232437 DEBUG nova.compute.manager [req-271539c5-9cb6-4a17-81fb-de3ee67b10e8 req-82927c24-c061-4284-8c38-c653ab5ae070 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 067b423b-4ac2-4ca9-9000-44b6fb2af34e] Received event network-vif-plugged-509f7f18-31bd-42cd-83f5-e9b62a4e200b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:17:45 np0005548731 nova_compute[232433]: 2025-12-06 07:17:45.930 232437 DEBUG oslo_concurrency.lockutils [req-271539c5-9cb6-4a17-81fb-de3ee67b10e8 req-82927c24-c061-4284-8c38-c653ab5ae070 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "067b423b-4ac2-4ca9-9000-44b6fb2af34e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:17:45 np0005548731 nova_compute[232433]: 2025-12-06 07:17:45.930 232437 DEBUG oslo_concurrency.lockutils [req-271539c5-9cb6-4a17-81fb-de3ee67b10e8 req-82927c24-c061-4284-8c38-c653ab5ae070 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "067b423b-4ac2-4ca9-9000-44b6fb2af34e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:17:45 np0005548731 nova_compute[232433]: 2025-12-06 07:17:45.930 232437 DEBUG oslo_concurrency.lockutils [req-271539c5-9cb6-4a17-81fb-de3ee67b10e8 req-82927c24-c061-4284-8c38-c653ab5ae070 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "067b423b-4ac2-4ca9-9000-44b6fb2af34e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:17:45 np0005548731 nova_compute[232433]: 2025-12-06 07:17:45.930 232437 DEBUG nova.compute.manager [req-271539c5-9cb6-4a17-81fb-de3ee67b10e8 req-82927c24-c061-4284-8c38-c653ab5ae070 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 067b423b-4ac2-4ca9-9000-44b6fb2af34e] No waiting events found dispatching network-vif-plugged-509f7f18-31bd-42cd-83f5-e9b62a4e200b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 02:17:45 np0005548731 nova_compute[232433]: 2025-12-06 07:17:45.930 232437 WARNING nova.compute.manager [req-271539c5-9cb6-4a17-81fb-de3ee67b10e8 req-82927c24-c061-4284-8c38-c653ab5ae070 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 067b423b-4ac2-4ca9-9000-44b6fb2af34e] Received unexpected event network-vif-plugged-509f7f18-31bd-42cd-83f5-e9b62a4e200b for instance with vm_state building and task_state spawning.#033[00m
Dec  6 02:17:45 np0005548731 nova_compute[232433]: 2025-12-06 07:17:45.967 232437 INFO nova.compute.manager [None req-1b05c8c5-02cf-4837-8e67-324222e26c9b 197a9b0ee1db487d82542eb31e84f33e 8f938a037b8141cf9408cbf6f5cd081d - - default default] [instance: 067b423b-4ac2-4ca9-9000-44b6fb2af34e] Took 34.29 seconds to spawn the instance on the hypervisor.#033[00m
Dec  6 02:17:45 np0005548731 nova_compute[232433]: 2025-12-06 07:17:45.967 232437 DEBUG nova.compute.manager [None req-1b05c8c5-02cf-4837-8e67-324222e26c9b 197a9b0ee1db487d82542eb31e84f33e 8f938a037b8141cf9408cbf6f5cd081d - - default default] [instance: 067b423b-4ac2-4ca9-9000-44b6fb2af34e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:17:46 np0005548731 nova_compute[232433]: 2025-12-06 07:17:46.039 232437 INFO nova.compute.manager [None req-1b05c8c5-02cf-4837-8e67-324222e26c9b 197a9b0ee1db487d82542eb31e84f33e 8f938a037b8141cf9408cbf6f5cd081d - - default default] [instance: 067b423b-4ac2-4ca9-9000-44b6fb2af34e] Took 43.92 seconds to build instance.#033[00m
Dec  6 02:17:46 np0005548731 nova_compute[232433]: 2025-12-06 07:17:46.061 232437 DEBUG oslo_concurrency.lockutils [None req-1b05c8c5-02cf-4837-8e67-324222e26c9b 197a9b0ee1db487d82542eb31e84f33e 8f938a037b8141cf9408cbf6f5cd081d - - default default] Lock "067b423b-4ac2-4ca9-9000-44b6fb2af34e" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 44.047s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:17:46 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:17:46 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:17:46 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:17:46.574 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:17:46 np0005548731 nova_compute[232433]: 2025-12-06 07:17:46.932 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:17:46 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:17:46 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:17:46 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:17:46.955 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:17:47 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e252 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:17:48 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:17:48 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:17:48 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:17:48.577 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:17:48 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:17:48 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:17:48 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:17:48.958 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:17:49 np0005548731 nova_compute[232433]: 2025-12-06 07:17:49.608 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:17:50 np0005548731 nova_compute[232433]: 2025-12-06 07:17:50.141 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:17:50 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:17:50 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:17:50 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:17:50.579 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:17:50 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:17:50 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:17:50 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:17:50.960 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:17:51 np0005548731 nova_compute[232433]: 2025-12-06 07:17:51.934 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:17:52 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:17:52 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:17:52 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:17:52.581 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:17:52 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e252 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:17:52 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:17:52 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:17:52 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:17:52.962 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:17:53 np0005548731 nova_compute[232433]: 2025-12-06 07:17:53.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:17:53 np0005548731 nova_compute[232433]: 2025-12-06 07:17:53.105 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:17:53 np0005548731 nova_compute[232433]: 2025-12-06 07:17:53.105 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Dec  6 02:17:53 np0005548731 nova_compute[232433]: 2025-12-06 07:17:53.137 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Dec  6 02:17:53 np0005548731 nova_compute[232433]: 2025-12-06 07:17:53.585 232437 DEBUG nova.compute.manager [req-ae7205a2-8039-4bc9-a845-b570f56ced61 req-2016d52e-319e-4c3a-9bbe-433b5da4463b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 067b423b-4ac2-4ca9-9000-44b6fb2af34e] Received event network-changed-5408643b-7986-461d-867d-9f6545eeabb3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:17:53 np0005548731 nova_compute[232433]: 2025-12-06 07:17:53.585 232437 DEBUG nova.compute.manager [req-ae7205a2-8039-4bc9-a845-b570f56ced61 req-2016d52e-319e-4c3a-9bbe-433b5da4463b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 067b423b-4ac2-4ca9-9000-44b6fb2af34e] Refreshing instance network info cache due to event network-changed-5408643b-7986-461d-867d-9f6545eeabb3. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  6 02:17:53 np0005548731 nova_compute[232433]: 2025-12-06 07:17:53.585 232437 DEBUG oslo_concurrency.lockutils [req-ae7205a2-8039-4bc9-a845-b570f56ced61 req-2016d52e-319e-4c3a-9bbe-433b5da4463b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-067b423b-4ac2-4ca9-9000-44b6fb2af34e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 02:17:53 np0005548731 nova_compute[232433]: 2025-12-06 07:17:53.586 232437 DEBUG oslo_concurrency.lockutils [req-ae7205a2-8039-4bc9-a845-b570f56ced61 req-2016d52e-319e-4c3a-9bbe-433b5da4463b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-067b423b-4ac2-4ca9-9000-44b6fb2af34e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 02:17:53 np0005548731 nova_compute[232433]: 2025-12-06 07:17:53.586 232437 DEBUG nova.network.neutron [req-ae7205a2-8039-4bc9-a845-b570f56ced61 req-2016d52e-319e-4c3a-9bbe-433b5da4463b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 067b423b-4ac2-4ca9-9000-44b6fb2af34e] Refreshing network info cache for port 5408643b-7986-461d-867d-9f6545eeabb3 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  6 02:17:54 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:17:54 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 02:17:54 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:17:54.583 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 02:17:54 np0005548731 nova_compute[232433]: 2025-12-06 07:17:54.610 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:17:54 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:17:54 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:17:54 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:17:54.965 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:17:55 np0005548731 nova_compute[232433]: 2025-12-06 07:17:55.138 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:17:55 np0005548731 nova_compute[232433]: 2025-12-06 07:17:55.138 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  6 02:17:55 np0005548731 nova_compute[232433]: 2025-12-06 07:17:55.139 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  6 02:17:55 np0005548731 nova_compute[232433]: 2025-12-06 07:17:55.585 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Acquiring lock "refresh_cache-c8403a0c-2fe6-48fe-91af-ec5aca71e12d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 02:17:55 np0005548731 nova_compute[232433]: 2025-12-06 07:17:55.586 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Acquired lock "refresh_cache-c8403a0c-2fe6-48fe-91af-ec5aca71e12d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 02:17:55 np0005548731 nova_compute[232433]: 2025-12-06 07:17:55.586 232437 DEBUG nova.network.neutron [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] [instance: c8403a0c-2fe6-48fe-91af-ec5aca71e12d] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Dec  6 02:17:55 np0005548731 nova_compute[232433]: 2025-12-06 07:17:55.586 232437 DEBUG nova.objects.instance [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lazy-loading 'info_cache' on Instance uuid c8403a0c-2fe6-48fe-91af-ec5aca71e12d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:17:55 np0005548731 nova_compute[232433]: 2025-12-06 07:17:55.936 232437 DEBUG nova.network.neutron [req-ae7205a2-8039-4bc9-a845-b570f56ced61 req-2016d52e-319e-4c3a-9bbe-433b5da4463b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 067b423b-4ac2-4ca9-9000-44b6fb2af34e] Updated VIF entry in instance network info cache for port 5408643b-7986-461d-867d-9f6545eeabb3. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  6 02:17:55 np0005548731 nova_compute[232433]: 2025-12-06 07:17:55.937 232437 DEBUG nova.network.neutron [req-ae7205a2-8039-4bc9-a845-b570f56ced61 req-2016d52e-319e-4c3a-9bbe-433b5da4463b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 067b423b-4ac2-4ca9-9000-44b6fb2af34e] Updating instance_info_cache with network_info: [{"id": "5408643b-7986-461d-867d-9f6545eeabb3", "address": "fa:16:3e:43:c0:da", "network": {"id": "1867060b-2830-47a2-bf0a-46a10388f745", "bridge": "br-int", "label": "tempest-TaggedBootDevicesTest-1327949345-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.174", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8f938a037b8141cf9408cbf6f5cd081d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5408643b-79", "ovs_interfaceid": "5408643b-7986-461d-867d-9f6545eeabb3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "0f3261df-8a99-4884-8db8-4ab5250dafc3", "address": "fa:16:3e:33:5a:4c", "network": {"id": "83f8ee1c-bee2-4425-8792-4c822f63072c", "bridge": "br-int", "label": "tempest-device-tagging-net1-1182028938", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.159", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], 
"meta": {"injected": false, "tenant_id": "8f938a037b8141cf9408cbf6f5cd081d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0f3261df-8a", "ovs_interfaceid": "0f3261df-8a99-4884-8db8-4ab5250dafc3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}, {"id": "0afb38e3-893f-4379-98a8-4a56e2b84f9d", "address": "fa:16:3e:b0:7f:ee", "network": {"id": "83f8ee1c-bee2-4425-8792-4c822f63072c", "bridge": "br-int", "label": "tempest-device-tagging-net1-1182028938", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.142", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8f938a037b8141cf9408cbf6f5cd081d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0afb38e3-89", "ovs_interfaceid": "0afb38e3-893f-4379-98a8-4a56e2b84f9d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}, {"id": "e27e5b98-b454-4f5a-90a9-3f7a5c8f1a91", "address": "fa:16:3e:44:6f:c0", "network": {"id": "83f8ee1c-bee2-4425-8792-4c822f63072c", "bridge": "br-int", "label": "tempest-device-tagging-net1-1182028938", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.98", "type": "fixed", "version": 4, "meta": {}, 
"floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8f938a037b8141cf9408cbf6f5cd081d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape27e5b98-b4", "ovs_interfaceid": "e27e5b98-b454-4f5a-90a9-3f7a5c8f1a91", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "5984d840-e1cd-45dc-87cb-a337e73a753c", "address": "fa:16:3e:0b:78:59", "network": {"id": "83f8ee1c-bee2-4425-8792-4c822f63072c", "bridge": "br-int", "label": "tempest-device-tagging-net1-1182028938", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.181", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8f938a037b8141cf9408cbf6f5cd081d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5984d840-e1", "ovs_interfaceid": "5984d840-e1cd-45dc-87cb-a337e73a753c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "509f7f18-31bd-42cd-83f5-e9b62a4e200b", "address": "fa:16:3e:3a:d0:96", "network": {"id": "8856153c-fd33-4eca-913c-fbbc6c3bd29c", "bridge": "br-int", "label": "tempest-device-tagging-net2-1411782491", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, 
"ips": [{"address": "10.2.2.100", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8f938a037b8141cf9408cbf6f5cd081d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap509f7f18-31", "ovs_interfaceid": "509f7f18-31bd-42cd-83f5-e9b62a4e200b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "0355ba71-a598-4e22-a0b7-c5c10b61733e", "address": "fa:16:3e:a4:48:7a", "network": {"id": "8856153c-fd33-4eca-913c-fbbc6c3bd29c", "bridge": "br-int", "label": "tempest-device-tagging-net2-1411782491", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.2.2.200", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8f938a037b8141cf9408cbf6f5cd081d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0355ba71-a5", "ovs_interfaceid": "0355ba71-a598-4e22-a0b7-c5c10b61733e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 02:17:55 np0005548731 podman[263845]: 2025-12-06 07:17:55.940684317 +0000 UTC m=+0.094360259 container health_status ae4fd08cdec31d6ae2a6b91ffff7deb6ca5a8375cb52ee4b685cc6f25796824e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Dec  6 02:17:55 np0005548731 podman[263843]: 2025-12-06 07:17:55.941275011 +0000 UTC m=+0.099574115 container health_status 26c2ce9bdb83c6db5ccad7f5b2a7cb4e9354ab0235ad2f942ea42c8feda2142b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, 
config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec  6 02:17:55 np0005548731 podman[263844]: 2025-12-06 07:17:55.962458607 +0000 UTC m=+0.121838858 container health_status a118c3becf56cfbcae208065771ebf10a65162bb7b96389971293e534823fb78 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0)
Dec  6 02:17:55 np0005548731 nova_compute[232433]: 2025-12-06 07:17:55.971 232437 DEBUG oslo_concurrency.lockutils [req-ae7205a2-8039-4bc9-a845-b570f56ced61 req-2016d52e-319e-4c3a-9bbe-433b5da4463b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-067b423b-4ac2-4ca9-9000-44b6fb2af34e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 02:17:56 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:17:56 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:17:56 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:17:56.585 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:17:56 np0005548731 nova_compute[232433]: 2025-12-06 07:17:56.937 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:17:56 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:17:56 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:17:56 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:17:56.967 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:17:57 np0005548731 nova_compute[232433]: 2025-12-06 07:17:57.678 232437 DEBUG nova.network.neutron [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] [instance: c8403a0c-2fe6-48fe-91af-ec5aca71e12d] Updating instance_info_cache with network_info: [{"id": "a599f1a0-5413-4dc9-9ae4-d7ba512d761c", "address": "fa:16:3e:9b:0b:0a", "network": {"id": "4d599401-3772-4e38-8cd2-d774d370af64", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-809610913-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.185", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "929e2be1488d4b80b7ad8946093a6abe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa599f1a0-54", "ovs_interfaceid": "a599f1a0-5413-4dc9-9ae4-d7ba512d761c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 02:17:57 np0005548731 nova_compute[232433]: 2025-12-06 07:17:57.694 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Releasing lock "refresh_cache-c8403a0c-2fe6-48fe-91af-ec5aca71e12d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 02:17:57 np0005548731 nova_compute[232433]: 2025-12-06 07:17:57.694 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] [instance: c8403a0c-2fe6-48fe-91af-ec5aca71e12d] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Dec  6 02:17:57 np0005548731 nova_compute[232433]: 2025-12-06 07:17:57.695 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:17:57 np0005548731 nova_compute[232433]: 2025-12-06 07:17:57.695 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:17:57 np0005548731 nova_compute[232433]: 2025-12-06 07:17:57.695 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  6 02:17:57 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e252 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:17:58 np0005548731 nova_compute[232433]: 2025-12-06 07:17:58.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:17:58 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:17:58 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:17:58 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:17:58.587 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:17:58 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:17:58 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 02:17:58 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:17:58.969 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 02:17:59 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:17:59.194 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=29, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:ec:b3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '32:72:e7:89:e0:7d'}, ipsec=False) old=SB_Global(nb_cfg=28) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 02:17:59 np0005548731 nova_compute[232433]: 2025-12-06 07:17:59.195 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:17:59 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:17:59.196 143965 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec  6 02:17:59 np0005548731 nova_compute[232433]: 2025-12-06 07:17:59.613 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:18:00 np0005548731 nova_compute[232433]: 2025-12-06 07:18:00.152 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:18:00 np0005548731 nova_compute[232433]: 2025-12-06 07:18:00.153 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:18:00 np0005548731 nova_compute[232433]: 2025-12-06 07:18:00.179 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:18:00 np0005548731 nova_compute[232433]: 2025-12-06 07:18:00.179 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:18:00 np0005548731 nova_compute[232433]: 2025-12-06 07:18:00.179 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:18:00 np0005548731 nova_compute[232433]: 2025-12-06 07:18:00.180 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  6 02:18:00 np0005548731 nova_compute[232433]: 2025-12-06 07:18:00.180 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:18:00 np0005548731 ovn_controller[133927]: 2025-12-06T07:18:00Z|00268|binding|INFO|Releasing lport aaa2a250-1a44-4385-bf81-2c758e7e09fb from this chassis (sb_readonly=0)
Dec  6 02:18:00 np0005548731 ovn_controller[133927]: 2025-12-06T07:18:00Z|00269|binding|INFO|Releasing lport c99144ca-f47e-4d7c-874e-ccf4efc57815 from this chassis (sb_readonly=0)
Dec  6 02:18:00 np0005548731 ovn_controller[133927]: 2025-12-06T07:18:00Z|00270|binding|INFO|Releasing lport d5f15755-ab6a-4ce9-857e-63f6c0e19fd8 from this chassis (sb_readonly=0)
Dec  6 02:18:00 np0005548731 ovn_controller[133927]: 2025-12-06T07:18:00Z|00271|binding|INFO|Releasing lport f620b05f-f6bf-438b-809a-9cd1eea6480a from this chassis (sb_readonly=0)
Dec  6 02:18:00 np0005548731 nova_compute[232433]: 2025-12-06 07:18:00.400 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:18:00 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:18:00 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:18:00 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:18:00.589 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:18:00 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 02:18:00 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3472577622' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 02:18:00 np0005548731 nova_compute[232433]: 2025-12-06 07:18:00.612 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.432s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:18:00 np0005548731 nova_compute[232433]: 2025-12-06 07:18:00.783 232437 DEBUG nova.virt.libvirt.driver [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] skipping disk for instance-00000044 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  6 02:18:00 np0005548731 nova_compute[232433]: 2025-12-06 07:18:00.784 232437 DEBUG nova.virt.libvirt.driver [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] skipping disk for instance-00000044 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  6 02:18:00 np0005548731 nova_compute[232433]: 2025-12-06 07:18:00.788 232437 DEBUG nova.virt.libvirt.driver [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] skipping disk for instance-0000004b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  6 02:18:00 np0005548731 nova_compute[232433]: 2025-12-06 07:18:00.788 232437 DEBUG nova.virt.libvirt.driver [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] skipping disk for instance-0000004b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  6 02:18:00 np0005548731 nova_compute[232433]: 2025-12-06 07:18:00.788 232437 DEBUG nova.virt.libvirt.driver [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] skipping disk for instance-0000004b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  6 02:18:00 np0005548731 nova_compute[232433]: 2025-12-06 07:18:00.788 232437 DEBUG nova.virt.libvirt.driver [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] skipping disk for instance-0000004b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  6 02:18:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:18:00.860 143965 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:18:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:18:00.861 143965 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:18:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:18:00.864 143965 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:18:00 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:18:00 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 02:18:00 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:18:00.972 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 02:18:01 np0005548731 nova_compute[232433]: 2025-12-06 07:18:01.009 232437 WARNING nova.virt.libvirt.driver [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  6 02:18:01 np0005548731 nova_compute[232433]: 2025-12-06 07:18:01.010 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4198MB free_disk=20.89682388305664GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, 
"label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  6 02:18:01 np0005548731 nova_compute[232433]: 2025-12-06 07:18:01.011 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:18:01 np0005548731 nova_compute[232433]: 2025-12-06 07:18:01.011 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:18:01 np0005548731 nova_compute[232433]: 2025-12-06 07:18:01.095 232437 INFO nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] [instance: c8403a0c-2fe6-48fe-91af-ec5aca71e12d] Updating resource usage from migration e712029d-e908-4f4e-9c2c-a2742ca0daa7#033[00m
Dec  6 02:18:01 np0005548731 ovn_controller[133927]: 2025-12-06T07:18:01Z|00031|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:33:5a:4c 10.1.1.159
Dec  6 02:18:01 np0005548731 ovn_controller[133927]: 2025-12-06T07:18:01Z|00032|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:3a:d0:96 10.2.2.100
Dec  6 02:18:01 np0005548731 ovn_controller[133927]: 2025-12-06T07:18:01Z|00033|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:33:5a:4c 10.1.1.159
Dec  6 02:18:01 np0005548731 ovn_controller[133927]: 2025-12-06T07:18:01Z|00034|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:3a:d0:96 10.2.2.100
Dec  6 02:18:01 np0005548731 nova_compute[232433]: 2025-12-06 07:18:01.399 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Instance c8403a0c-2fe6-48fe-91af-ec5aca71e12d actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  6 02:18:01 np0005548731 nova_compute[232433]: 2025-12-06 07:18:01.399 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Instance 067b423b-4ac2-4ca9-9000-44b6fb2af34e actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  6 02:18:01 np0005548731 nova_compute[232433]: 2025-12-06 07:18:01.400 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  6 02:18:01 np0005548731 nova_compute[232433]: 2025-12-06 07:18:01.400 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7680MB used_ram=768MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  6 02:18:01 np0005548731 nova_compute[232433]: 2025-12-06 07:18:01.546 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:18:01 np0005548731 ovn_controller[133927]: 2025-12-06T07:18:01Z|00035|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:44:6f:c0 10.1.1.98
Dec  6 02:18:01 np0005548731 ovn_controller[133927]: 2025-12-06T07:18:01Z|00036|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:44:6f:c0 10.1.1.98
Dec  6 02:18:01 np0005548731 ovn_controller[133927]: 2025-12-06T07:18:01Z|00037|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:43:c0:da 10.100.0.5
Dec  6 02:18:01 np0005548731 ovn_controller[133927]: 2025-12-06T07:18:01Z|00038|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:43:c0:da 10.100.0.5
Dec  6 02:18:01 np0005548731 nova_compute[232433]: 2025-12-06 07:18:01.940 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:18:01 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 02:18:01 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/13716958' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 02:18:02 np0005548731 ovn_controller[133927]: 2025-12-06T07:18:02Z|00039|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:b0:7f:ee 10.1.1.142
Dec  6 02:18:02 np0005548731 ovn_controller[133927]: 2025-12-06T07:18:02Z|00040|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:b0:7f:ee 10.1.1.142
Dec  6 02:18:02 np0005548731 nova_compute[232433]: 2025-12-06 07:18:02.009 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:18:02 np0005548731 nova_compute[232433]: 2025-12-06 07:18:02.015 232437 DEBUG nova.compute.provider_tree [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Inventory has not changed in ProviderTree for provider: 6d00757a-082f-486d-ae84-869a2ba2e6e7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  6 02:18:02 np0005548731 nova_compute[232433]: 2025-12-06 07:18:02.044 232437 DEBUG nova.scheduler.client.report [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Inventory has not changed for provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  6 02:18:02 np0005548731 nova_compute[232433]: 2025-12-06 07:18:02.075 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  6 02:18:02 np0005548731 nova_compute[232433]: 2025-12-06 07:18:02.075 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.064s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:18:02 np0005548731 ovn_controller[133927]: 2025-12-06T07:18:02Z|00041|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:a4:48:7a 10.2.2.200
Dec  6 02:18:02 np0005548731 ovn_controller[133927]: 2025-12-06T07:18:02Z|00042|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:a4:48:7a 10.2.2.200
Dec  6 02:18:02 np0005548731 ovn_controller[133927]: 2025-12-06T07:18:02Z|00043|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:0b:78:59 10.1.1.181
Dec  6 02:18:02 np0005548731 ovn_controller[133927]: 2025-12-06T07:18:02Z|00044|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:0b:78:59 10.1.1.181
Dec  6 02:18:02 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:18:02 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:18:02 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:18:02.591 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:18:02 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e252 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:18:02 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:18:02 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:18:02 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:18:02.976 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:18:03 np0005548731 nova_compute[232433]: 2025-12-06 07:18:03.027 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:18:03 np0005548731 nova_compute[232433]: 2025-12-06 07:18:03.105 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:18:04 np0005548731 nova_compute[232433]: 2025-12-06 07:18:04.226 232437 DEBUG oslo_concurrency.lockutils [None req-9d2f27c2-102e-403c-8ac8-9209b27bd84d 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Acquiring lock "refresh_cache-c8403a0c-2fe6-48fe-91af-ec5aca71e12d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 02:18:04 np0005548731 nova_compute[232433]: 2025-12-06 07:18:04.227 232437 DEBUG oslo_concurrency.lockutils [None req-9d2f27c2-102e-403c-8ac8-9209b27bd84d 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Acquired lock "refresh_cache-c8403a0c-2fe6-48fe-91af-ec5aca71e12d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 02:18:04 np0005548731 nova_compute[232433]: 2025-12-06 07:18:04.227 232437 DEBUG nova.network.neutron [None req-9d2f27c2-102e-403c-8ac8-9209b27bd84d 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] [instance: c8403a0c-2fe6-48fe-91af-ec5aca71e12d] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec  6 02:18:04 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:18:04 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.002000047s ======
Dec  6 02:18:04 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:18:04.592 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000047s
Dec  6 02:18:04 np0005548731 nova_compute[232433]: 2025-12-06 07:18:04.669 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:18:04 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:18:04 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:18:04 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:18:04.978 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:18:06 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:18:06.199 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=9f96b960-b4f2-40bd-ae99-08121f5e8b78, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '29'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:18:06 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:18:06 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:18:06 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:18:06.595 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:18:06 np0005548731 nova_compute[232433]: 2025-12-06 07:18:06.950 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:18:06 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:18:06 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:18:06 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:18:06.981 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:18:07 np0005548731 nova_compute[232433]: 2025-12-06 07:18:07.744 232437 DEBUG nova.network.neutron [None req-9d2f27c2-102e-403c-8ac8-9209b27bd84d 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] [instance: c8403a0c-2fe6-48fe-91af-ec5aca71e12d] Updating instance_info_cache with network_info: [{"id": "a599f1a0-5413-4dc9-9ae4-d7ba512d761c", "address": "fa:16:3e:9b:0b:0a", "network": {"id": "4d599401-3772-4e38-8cd2-d774d370af64", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-809610913-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.185", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "929e2be1488d4b80b7ad8946093a6abe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa599f1a0-54", "ovs_interfaceid": "a599f1a0-5413-4dc9-9ae4-d7ba512d761c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 02:18:07 np0005548731 nova_compute[232433]: 2025-12-06 07:18:07.759 232437 DEBUG oslo_concurrency.lockutils [None req-9d2f27c2-102e-403c-8ac8-9209b27bd84d 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Releasing lock "refresh_cache-c8403a0c-2fe6-48fe-91af-ec5aca71e12d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 02:18:07 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e252 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:18:07 np0005548731 nova_compute[232433]: 2025-12-06 07:18:07.887 232437 DEBUG nova.virt.libvirt.driver [None req-9d2f27c2-102e-403c-8ac8-9209b27bd84d 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] [instance: c8403a0c-2fe6-48fe-91af-ec5aca71e12d] Starting migrate_disk_and_power_off migrate_disk_and_power_off /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11511#033[00m
Dec  6 02:18:07 np0005548731 nova_compute[232433]: 2025-12-06 07:18:07.888 232437 DEBUG nova.virt.libvirt.volume.remotefs [None req-9d2f27c2-102e-403c-8ac8-9209b27bd84d 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Creating file /var/lib/nova/instances/c8403a0c-2fe6-48fe-91af-ec5aca71e12d/8bbe7126d8314bb788c1990c9d60285b.tmp on remote host 192.168.122.100 create_file /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/remotefs.py:79#033[00m
Dec  6 02:18:07 np0005548731 nova_compute[232433]: 2025-12-06 07:18:07.888 232437 DEBUG oslo_concurrency.processutils [None req-9d2f27c2-102e-403c-8ac8-9209b27bd84d 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Running cmd (subprocess): ssh -o BatchMode=yes 192.168.122.100 touch /var/lib/nova/instances/c8403a0c-2fe6-48fe-91af-ec5aca71e12d/8bbe7126d8314bb788c1990c9d60285b.tmp execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:18:08 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:18:08 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:18:08 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:18:08.598 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:18:08 np0005548731 nova_compute[232433]: 2025-12-06 07:18:08.654 232437 DEBUG oslo_concurrency.processutils [None req-9d2f27c2-102e-403c-8ac8-9209b27bd84d 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] CMD "ssh -o BatchMode=yes 192.168.122.100 touch /var/lib/nova/instances/c8403a0c-2fe6-48fe-91af-ec5aca71e12d/8bbe7126d8314bb788c1990c9d60285b.tmp" returned: 1 in 0.766s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:18:08 np0005548731 nova_compute[232433]: 2025-12-06 07:18:08.655 232437 DEBUG oslo_concurrency.processutils [None req-9d2f27c2-102e-403c-8ac8-9209b27bd84d 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] 'ssh -o BatchMode=yes 192.168.122.100 touch /var/lib/nova/instances/c8403a0c-2fe6-48fe-91af-ec5aca71e12d/8bbe7126d8314bb788c1990c9d60285b.tmp' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473#033[00m
Dec  6 02:18:08 np0005548731 nova_compute[232433]: 2025-12-06 07:18:08.655 232437 DEBUG nova.virt.libvirt.volume.remotefs [None req-9d2f27c2-102e-403c-8ac8-9209b27bd84d 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Creating directory /var/lib/nova/instances/c8403a0c-2fe6-48fe-91af-ec5aca71e12d on remote host 192.168.122.100 create_dir /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/remotefs.py:91#033[00m
Dec  6 02:18:08 np0005548731 nova_compute[232433]: 2025-12-06 07:18:08.655 232437 DEBUG oslo_concurrency.processutils [None req-9d2f27c2-102e-403c-8ac8-9209b27bd84d 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Running cmd (subprocess): ssh -o BatchMode=yes 192.168.122.100 mkdir -p /var/lib/nova/instances/c8403a0c-2fe6-48fe-91af-ec5aca71e12d execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:18:08 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  6 02:18:08 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3749883810' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  6 02:18:08 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  6 02:18:08 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3749883810' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  6 02:18:08 np0005548731 nova_compute[232433]: 2025-12-06 07:18:08.858 232437 DEBUG oslo_concurrency.processutils [None req-9d2f27c2-102e-403c-8ac8-9209b27bd84d 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] CMD "ssh -o BatchMode=yes 192.168.122.100 mkdir -p /var/lib/nova/instances/c8403a0c-2fe6-48fe-91af-ec5aca71e12d" returned: 0 in 0.203s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:18:08 np0005548731 nova_compute[232433]: 2025-12-06 07:18:08.863 232437 DEBUG nova.virt.libvirt.driver [None req-9d2f27c2-102e-403c-8ac8-9209b27bd84d 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] [instance: c8403a0c-2fe6-48fe-91af-ec5aca71e12d] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Dec  6 02:18:08 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:18:08 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:18:08 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:18:08.984 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:18:09 np0005548731 nova_compute[232433]: 2025-12-06 07:18:09.673 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:18:10 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:18:10 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:18:10 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:18:10.600 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:18:10 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:18:10 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:18:10 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:18:10.986 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:18:11 np0005548731 kernel: tapa599f1a0-54 (unregistering): left promiscuous mode
Dec  6 02:18:11 np0005548731 NetworkManager[49182]: <info>  [1765005491.4099] device (tapa599f1a0-54): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec  6 02:18:11 np0005548731 nova_compute[232433]: 2025-12-06 07:18:11.424 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:18:11 np0005548731 ovn_controller[133927]: 2025-12-06T07:18:11Z|00272|binding|INFO|Releasing lport a599f1a0-5413-4dc9-9ae4-d7ba512d761c from this chassis (sb_readonly=0)
Dec  6 02:18:11 np0005548731 ovn_controller[133927]: 2025-12-06T07:18:11Z|00273|binding|INFO|Setting lport a599f1a0-5413-4dc9-9ae4-d7ba512d761c down in Southbound
Dec  6 02:18:11 np0005548731 ovn_controller[133927]: 2025-12-06T07:18:11Z|00274|binding|INFO|Removing iface tapa599f1a0-54 ovn-installed in OVS
Dec  6 02:18:11 np0005548731 nova_compute[232433]: 2025-12-06 07:18:11.427 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:18:11 np0005548731 nova_compute[232433]: 2025-12-06 07:18:11.440 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:18:11 np0005548731 systemd[1]: machine-qemu\x2d31\x2dinstance\x2d00000044.scope: Deactivated successfully.
Dec  6 02:18:11 np0005548731 systemd[1]: machine-qemu\x2d31\x2dinstance\x2d00000044.scope: Consumed 18.376s CPU time.
Dec  6 02:18:11 np0005548731 systemd-machined[195355]: Machine qemu-31-instance-00000044 terminated.
Dec  6 02:18:11 np0005548731 nova_compute[232433]: 2025-12-06 07:18:11.879 232437 INFO nova.virt.libvirt.driver [None req-9d2f27c2-102e-403c-8ac8-9209b27bd84d 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] [instance: c8403a0c-2fe6-48fe-91af-ec5aca71e12d] Instance shutdown successfully after 3 seconds.#033[00m
Dec  6 02:18:11 np0005548731 nova_compute[232433]: 2025-12-06 07:18:11.884 232437 INFO nova.virt.libvirt.driver [-] [instance: c8403a0c-2fe6-48fe-91af-ec5aca71e12d] Instance destroyed successfully.#033[00m
Dec  6 02:18:11 np0005548731 nova_compute[232433]: 2025-12-06 07:18:11.885 232437 DEBUG nova.virt.libvirt.vif [None req-9d2f27c2-102e-403c-8ac8-9209b27bd84d 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T07:14:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-893709654',display_name='tempest-ServerActionsTestJSON-server-893709654',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-893709654',id=68,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAYy9PI2opG1Yb015LzaQaZHiAr4KsuqNy5RLRivgn9w0frXJzdA9SLIokq/TNHsTv+OZ3SzlEhSSm/zy2gaUVX2tVfQksdYXi87Z2HYYYX2anFBfTxIFgh3j22gU5Usow==',key_name='tempest-keypair-1101896810',keypairs=<?>,launch_index=0,launched_at=2025-12-06T07:14:31Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=MigrationContext,new_flavor=Flavor(2),node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='929e2be1488d4b80b7ad8946093a6abe',ramdisk_id='',reservation_id='r-klri94j0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=ServiceList,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-ServerActionsTestJSON-1877526843',owner_user_name='tempest-ServerActionsTestJSON-1877526843-project-member'},tags=<?>,task_state='resize_migrating',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T07:18:03Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='627c36bb63534e52a4b1d5adf47e6ffd',uuid=c8403a0c-2fe6-48fe-91af-ec5aca71e12d,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a599f1a0-5413-4dc9-9ae4-d7ba512d761c", "address": "fa:16:3e:9b:0b:0a", "network": {"id": "4d599401-3772-4e38-8cd2-d774d370af64", "bridge": "br-int", "label": 
"tempest-ServerActionsTestJSON-809610913-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.185", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-ServerActionsTestJSON-809610913-network", "vif_mac": "fa:16:3e:9b:0b:0a"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "929e2be1488d4b80b7ad8946093a6abe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa599f1a0-54", "ovs_interfaceid": "a599f1a0-5413-4dc9-9ae4-d7ba512d761c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec  6 02:18:11 np0005548731 nova_compute[232433]: 2025-12-06 07:18:11.886 232437 DEBUG nova.network.os_vif_util [None req-9d2f27c2-102e-403c-8ac8-9209b27bd84d 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Converting VIF {"id": "a599f1a0-5413-4dc9-9ae4-d7ba512d761c", "address": "fa:16:3e:9b:0b:0a", "network": {"id": "4d599401-3772-4e38-8cd2-d774d370af64", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-809610913-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.185", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-ServerActionsTestJSON-809610913-network", "vif_mac": "fa:16:3e:9b:0b:0a"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "929e2be1488d4b80b7ad8946093a6abe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa599f1a0-54", "ovs_interfaceid": "a599f1a0-5413-4dc9-9ae4-d7ba512d761c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 02:18:11 np0005548731 nova_compute[232433]: 2025-12-06 07:18:11.887 232437 DEBUG nova.network.os_vif_util [None req-9d2f27c2-102e-403c-8ac8-9209b27bd84d 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:9b:0b:0a,bridge_name='br-int',has_traffic_filtering=True,id=a599f1a0-5413-4dc9-9ae4-d7ba512d761c,network=Network(4d599401-3772-4e38-8cd2-d774d370af64),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa599f1a0-54') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 02:18:11 np0005548731 nova_compute[232433]: 2025-12-06 07:18:11.887 232437 DEBUG os_vif [None req-9d2f27c2-102e-403c-8ac8-9209b27bd84d 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:9b:0b:0a,bridge_name='br-int',has_traffic_filtering=True,id=a599f1a0-5413-4dc9-9ae4-d7ba512d761c,network=Network(4d599401-3772-4e38-8cd2-d774d370af64),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa599f1a0-54') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec  6 02:18:11 np0005548731 nova_compute[232433]: 2025-12-06 07:18:11.889 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:18:11 np0005548731 nova_compute[232433]: 2025-12-06 07:18:11.889 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa599f1a0-54, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:18:11 np0005548731 nova_compute[232433]: 2025-12-06 07:18:11.891 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:18:11 np0005548731 nova_compute[232433]: 2025-12-06 07:18:11.893 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  6 02:18:11 np0005548731 nova_compute[232433]: 2025-12-06 07:18:11.893 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:18:11 np0005548731 nova_compute[232433]: 2025-12-06 07:18:11.896 232437 INFO os_vif [None req-9d2f27c2-102e-403c-8ac8-9209b27bd84d 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:9b:0b:0a,bridge_name='br-int',has_traffic_filtering=True,id=a599f1a0-5413-4dc9-9ae4-d7ba512d761c,network=Network(4d599401-3772-4e38-8cd2-d774d370af64),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa599f1a0-54')#033[00m
Dec  6 02:18:11 np0005548731 nova_compute[232433]: 2025-12-06 07:18:11.901 232437 DEBUG nova.virt.libvirt.driver [None req-9d2f27c2-102e-403c-8ac8-9209b27bd84d 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] skipping disk for instance-00000044 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  6 02:18:11 np0005548731 nova_compute[232433]: 2025-12-06 07:18:11.901 232437 DEBUG nova.virt.libvirt.driver [None req-9d2f27c2-102e-403c-8ac8-9209b27bd84d 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] skipping disk for instance-00000044 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  6 02:18:11 np0005548731 nova_compute[232433]: 2025-12-06 07:18:11.953 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:18:12 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:18:12.006 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9b:0b:0a 10.100.0.6'], port_security=['fa:16:3e:9b:0b:0a 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'c8403a0c-2fe6-48fe-91af-ec5aca71e12d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4d599401-3772-4e38-8cd2-d774d370af64', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '929e2be1488d4b80b7ad8946093a6abe', 'neutron:revision_number': '10', 'neutron:security_group_ids': '310d97ff-0e42-4be5-a68e-20cbdb7be60d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.185', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=222872e8-5260-47b5-883e-369af9b3a47f, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], logical_port=a599f1a0-5413-4dc9-9ae4-d7ba512d761c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 02:18:12 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:18:12.007 143965 INFO neutron.agent.ovn.metadata.agent [-] Port a599f1a0-5413-4dc9-9ae4-d7ba512d761c in datapath 4d599401-3772-4e38-8cd2-d774d370af64 unbound from our chassis#033[00m
Dec  6 02:18:12 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:18:12.009 143965 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 4d599401-3772-4e38-8cd2-d774d370af64, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec  6 02:18:12 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:18:12.011 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[d7a5bfac-b313-4fed-8977-10ea57f68482]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:18:12 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:18:12.012 143965 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-4d599401-3772-4e38-8cd2-d774d370af64 namespace which is not needed anymore#033[00m
Dec  6 02:18:12 np0005548731 neutron-haproxy-ovnmeta-4d599401-3772-4e38-8cd2-d774d370af64[262342]: [NOTICE]   (262346) : haproxy version is 2.8.14-c23fe91
Dec  6 02:18:12 np0005548731 neutron-haproxy-ovnmeta-4d599401-3772-4e38-8cd2-d774d370af64[262342]: [NOTICE]   (262346) : path to executable is /usr/sbin/haproxy
Dec  6 02:18:12 np0005548731 neutron-haproxy-ovnmeta-4d599401-3772-4e38-8cd2-d774d370af64[262342]: [WARNING]  (262346) : Exiting Master process...
Dec  6 02:18:12 np0005548731 neutron-haproxy-ovnmeta-4d599401-3772-4e38-8cd2-d774d370af64[262342]: [ALERT]    (262346) : Current worker (262348) exited with code 143 (Terminated)
Dec  6 02:18:12 np0005548731 neutron-haproxy-ovnmeta-4d599401-3772-4e38-8cd2-d774d370af64[262342]: [WARNING]  (262346) : All workers exited. Exiting... (0)
Dec  6 02:18:12 np0005548731 systemd[1]: libpod-317e1c23308267c2af425d4e07b494cff705a13752ffc5c7271932c07e5b1189.scope: Deactivated successfully.
Dec  6 02:18:12 np0005548731 podman[264046]: 2025-12-06 07:18:12.205811513 +0000 UTC m=+0.060098424 container died 317e1c23308267c2af425d4e07b494cff705a13752ffc5c7271932c07e5b1189 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4d599401-3772-4e38-8cd2-d774d370af64, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team)
Dec  6 02:18:12 np0005548731 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-317e1c23308267c2af425d4e07b494cff705a13752ffc5c7271932c07e5b1189-userdata-shm.mount: Deactivated successfully.
Dec  6 02:18:12 np0005548731 systemd[1]: var-lib-containers-storage-overlay-475c105d3a78b321a23d8ade3162af64a94a2b613eaed7e9692b9a012d35810f-merged.mount: Deactivated successfully.
Dec  6 02:18:12 np0005548731 podman[264046]: 2025-12-06 07:18:12.24305224 +0000 UTC m=+0.097339161 container cleanup 317e1c23308267c2af425d4e07b494cff705a13752ffc5c7271932c07e5b1189 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4d599401-3772-4e38-8cd2-d774d370af64, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Dec  6 02:18:12 np0005548731 systemd[1]: libpod-conmon-317e1c23308267c2af425d4e07b494cff705a13752ffc5c7271932c07e5b1189.scope: Deactivated successfully.
Dec  6 02:18:12 np0005548731 podman[264076]: 2025-12-06 07:18:12.30013599 +0000 UTC m=+0.038744145 container remove 317e1c23308267c2af425d4e07b494cff705a13752ffc5c7271932c07e5b1189 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4d599401-3772-4e38-8cd2-d774d370af64, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS)
Dec  6 02:18:12 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:18:12.305 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[0d1e900b-8846-42ae-97ce-956702e3e4db]: (4, ('Sat Dec  6 07:18:12 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-4d599401-3772-4e38-8cd2-d774d370af64 (317e1c23308267c2af425d4e07b494cff705a13752ffc5c7271932c07e5b1189)\n317e1c23308267c2af425d4e07b494cff705a13752ffc5c7271932c07e5b1189\nSat Dec  6 07:18:12 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-4d599401-3772-4e38-8cd2-d774d370af64 (317e1c23308267c2af425d4e07b494cff705a13752ffc5c7271932c07e5b1189)\n317e1c23308267c2af425d4e07b494cff705a13752ffc5c7271932c07e5b1189\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:18:12 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:18:12.307 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[f864316a-ec77-497c-97b7-34a024033101]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:18:12 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:18:12.308 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4d599401-30, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:18:12 np0005548731 kernel: tap4d599401-30: left promiscuous mode
Dec  6 02:18:12 np0005548731 nova_compute[232433]: 2025-12-06 07:18:12.312 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:18:12 np0005548731 nova_compute[232433]: 2025-12-06 07:18:12.323 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:18:12 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:18:12.326 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[d253c183-b14c-4be6-a44f-0116a913b229]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:18:12 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:18:12.342 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[0cded86f-e21d-4e4e-bba1-59902a4c7ade]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:18:12 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:18:12.343 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[5759fc58-7d8f-4dd2-abd7-4073f8b93ce3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:18:12 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:18:12.370 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[08d95e1f-3c07-45d9-8281-a513ffbcce14]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 562833, 'reachable_time': 28936, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 264094, 'error': None, 'target': 'ovnmeta-4d599401-3772-4e38-8cd2-d774d370af64', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:18:12 np0005548731 systemd[1]: run-netns-ovnmeta\x2d4d599401\x2d3772\x2d4e38\x2d8cd2\x2dd774d370af64.mount: Deactivated successfully.
Dec  6 02:18:12 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:18:12.374 144079 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-4d599401-3772-4e38-8cd2-d774d370af64 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Dec  6 02:18:12 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:18:12.374 144079 DEBUG oslo.privsep.daemon [-] privsep: reply[3a78f24c-e246-42d9-8f53-8ef09d064e7c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:18:12 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:18:12 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:18:12 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:18:12.602 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:18:12 np0005548731 nova_compute[232433]: 2025-12-06 07:18:12.698 232437 DEBUG neutronclient.v2_0.client [None req-9d2f27c2-102e-403c-8ac8-9209b27bd84d 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Error message: {"NeutronError": {"type": "PortBindingNotFound", "message": "Binding for port a599f1a0-5413-4dc9-9ae4-d7ba512d761c for host compute-0.ctlplane.example.com could not be found.", "detail": ""}} _handle_fault_response /usr/lib/python3.9/site-packages/neutronclient/v2_0/client.py:262#033[00m
Dec  6 02:18:12 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e252 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:18:12 np0005548731 nova_compute[232433]: 2025-12-06 07:18:12.851 232437 DEBUG oslo_concurrency.lockutils [None req-9d2f27c2-102e-403c-8ac8-9209b27bd84d 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Acquiring lock "c8403a0c-2fe6-48fe-91af-ec5aca71e12d-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:18:12 np0005548731 nova_compute[232433]: 2025-12-06 07:18:12.852 232437 DEBUG oslo_concurrency.lockutils [None req-9d2f27c2-102e-403c-8ac8-9209b27bd84d 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Lock "c8403a0c-2fe6-48fe-91af-ec5aca71e12d-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:18:12 np0005548731 nova_compute[232433]: 2025-12-06 07:18:12.852 232437 DEBUG oslo_concurrency.lockutils [None req-9d2f27c2-102e-403c-8ac8-9209b27bd84d 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Lock "c8403a0c-2fe6-48fe-91af-ec5aca71e12d-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:18:12 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:18:12 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:18:12 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:18:12.989 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:18:13 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:18:13.014 144074 DEBUG eventlet.wsgi.server [-] (144074) accepted '' server /usr/lib/python3.9/site-packages/eventlet/wsgi.py:1004#033[00m
Dec  6 02:18:13 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:18:13.016 144074 DEBUG neutron.agent.ovn.metadata.server [-] Request: GET /openstack/latest/meta_data.json HTTP/1.0#015
Dec  6 02:18:13 np0005548731 ovn_metadata_agent[143960]: Accept: */*#015
Dec  6 02:18:13 np0005548731 ovn_metadata_agent[143960]: Connection: close#015
Dec  6 02:18:13 np0005548731 ovn_metadata_agent[143960]: Content-Type: text/plain#015
Dec  6 02:18:13 np0005548731 ovn_metadata_agent[143960]: Host: 169.254.169.254#015
Dec  6 02:18:13 np0005548731 ovn_metadata_agent[143960]: User-Agent: curl/7.84.0#015
Dec  6 02:18:13 np0005548731 ovn_metadata_agent[143960]: X-Forwarded-For: 10.100.0.5#015
Dec  6 02:18:13 np0005548731 ovn_metadata_agent[143960]: X-Ovn-Network-Id: 1867060b-2830-47a2-bf0a-46a10388f745 __call__ /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:82#033[00m
Dec  6 02:18:13 np0005548731 nova_compute[232433]: 2025-12-06 07:18:13.058 232437 DEBUG nova.compute.manager [req-9e2c0094-98fb-4b01-b693-20fdbdd70c99 req-f24b0031-9152-4171-b571-4f369575afd2 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: c8403a0c-2fe6-48fe-91af-ec5aca71e12d] Received event network-vif-unplugged-a599f1a0-5413-4dc9-9ae4-d7ba512d761c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:18:13 np0005548731 nova_compute[232433]: 2025-12-06 07:18:13.058 232437 DEBUG oslo_concurrency.lockutils [req-9e2c0094-98fb-4b01-b693-20fdbdd70c99 req-f24b0031-9152-4171-b571-4f369575afd2 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "c8403a0c-2fe6-48fe-91af-ec5aca71e12d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:18:13 np0005548731 nova_compute[232433]: 2025-12-06 07:18:13.058 232437 DEBUG oslo_concurrency.lockutils [req-9e2c0094-98fb-4b01-b693-20fdbdd70c99 req-f24b0031-9152-4171-b571-4f369575afd2 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "c8403a0c-2fe6-48fe-91af-ec5aca71e12d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:18:13 np0005548731 nova_compute[232433]: 2025-12-06 07:18:13.059 232437 DEBUG oslo_concurrency.lockutils [req-9e2c0094-98fb-4b01-b693-20fdbdd70c99 req-f24b0031-9152-4171-b571-4f369575afd2 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "c8403a0c-2fe6-48fe-91af-ec5aca71e12d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:18:13 np0005548731 nova_compute[232433]: 2025-12-06 07:18:13.059 232437 DEBUG nova.compute.manager [req-9e2c0094-98fb-4b01-b693-20fdbdd70c99 req-f24b0031-9152-4171-b571-4f369575afd2 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: c8403a0c-2fe6-48fe-91af-ec5aca71e12d] No waiting events found dispatching network-vif-unplugged-a599f1a0-5413-4dc9-9ae4-d7ba512d761c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 02:18:13 np0005548731 nova_compute[232433]: 2025-12-06 07:18:13.059 232437 WARNING nova.compute.manager [req-9e2c0094-98fb-4b01-b693-20fdbdd70c99 req-f24b0031-9152-4171-b571-4f369575afd2 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: c8403a0c-2fe6-48fe-91af-ec5aca71e12d] Received unexpected event network-vif-unplugged-a599f1a0-5413-4dc9-9ae4-d7ba512d761c for instance with vm_state active and task_state resize_migrated.#033[00m
Dec  6 02:18:14 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:18:14 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:18:14 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:18:14.604 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:18:14 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:18:14.805 144074 DEBUG neutron.agent.ovn.metadata.server [-] <Response [200]> _proxy_request /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:161#033[00m
Dec  6 02:18:14 np0005548731 haproxy-metadata-proxy-1867060b-2830-47a2-bf0a-46a10388f745[263709]: 10.100.0.5:57918 [06/Dec/2025:07:18:13.013] listener listener/metadata 0/0/0/1793/1793 200 2534 - - ---- 1/1/0/0/0 0/0 "GET /openstack/latest/meta_data.json HTTP/1.1"
Dec  6 02:18:14 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:18:14.806 144074 INFO eventlet.wsgi.server [-] 10.100.0.5,<local> "GET /openstack/latest/meta_data.json HTTP/1.1" status: 200  len: 2550 time: 1.7896991#033[00m
Dec  6 02:18:14 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:18:14 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:18:14 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:18:14.993 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:18:15 np0005548731 nova_compute[232433]: 2025-12-06 07:18:15.530 232437 DEBUG nova.compute.manager [req-ee1604d1-3696-40d8-a2b4-711f31ed87f0 req-57e9855c-cdff-4e64-9e5e-4aab2ef9af6d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: c8403a0c-2fe6-48fe-91af-ec5aca71e12d] Received event network-vif-plugged-a599f1a0-5413-4dc9-9ae4-d7ba512d761c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:18:15 np0005548731 nova_compute[232433]: 2025-12-06 07:18:15.530 232437 DEBUG oslo_concurrency.lockutils [req-ee1604d1-3696-40d8-a2b4-711f31ed87f0 req-57e9855c-cdff-4e64-9e5e-4aab2ef9af6d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "c8403a0c-2fe6-48fe-91af-ec5aca71e12d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:18:15 np0005548731 nova_compute[232433]: 2025-12-06 07:18:15.531 232437 DEBUG oslo_concurrency.lockutils [req-ee1604d1-3696-40d8-a2b4-711f31ed87f0 req-57e9855c-cdff-4e64-9e5e-4aab2ef9af6d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "c8403a0c-2fe6-48fe-91af-ec5aca71e12d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:18:15 np0005548731 nova_compute[232433]: 2025-12-06 07:18:15.531 232437 DEBUG oslo_concurrency.lockutils [req-ee1604d1-3696-40d8-a2b4-711f31ed87f0 req-57e9855c-cdff-4e64-9e5e-4aab2ef9af6d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "c8403a0c-2fe6-48fe-91af-ec5aca71e12d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:18:15 np0005548731 nova_compute[232433]: 2025-12-06 07:18:15.531 232437 DEBUG nova.compute.manager [req-ee1604d1-3696-40d8-a2b4-711f31ed87f0 req-57e9855c-cdff-4e64-9e5e-4aab2ef9af6d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: c8403a0c-2fe6-48fe-91af-ec5aca71e12d] No waiting events found dispatching network-vif-plugged-a599f1a0-5413-4dc9-9ae4-d7ba512d761c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 02:18:15 np0005548731 nova_compute[232433]: 2025-12-06 07:18:15.531 232437 WARNING nova.compute.manager [req-ee1604d1-3696-40d8-a2b4-711f31ed87f0 req-57e9855c-cdff-4e64-9e5e-4aab2ef9af6d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: c8403a0c-2fe6-48fe-91af-ec5aca71e12d] Received unexpected event network-vif-plugged-a599f1a0-5413-4dc9-9ae4-d7ba512d761c for instance with vm_state active and task_state resize_migrated.#033[00m
Dec  6 02:18:15 np0005548731 nova_compute[232433]: 2025-12-06 07:18:15.687 232437 DEBUG nova.compute.manager [req-43acaf54-ac81-467e-b039-bb35cf106912 req-ed797b66-e9d3-4278-bd3c-9d0e7452c168 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: c8403a0c-2fe6-48fe-91af-ec5aca71e12d] Received event network-changed-a599f1a0-5413-4dc9-9ae4-d7ba512d761c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:18:15 np0005548731 nova_compute[232433]: 2025-12-06 07:18:15.687 232437 DEBUG nova.compute.manager [req-43acaf54-ac81-467e-b039-bb35cf106912 req-ed797b66-e9d3-4278-bd3c-9d0e7452c168 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: c8403a0c-2fe6-48fe-91af-ec5aca71e12d] Refreshing instance network info cache due to event network-changed-a599f1a0-5413-4dc9-9ae4-d7ba512d761c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  6 02:18:15 np0005548731 nova_compute[232433]: 2025-12-06 07:18:15.688 232437 DEBUG oslo_concurrency.lockutils [req-43acaf54-ac81-467e-b039-bb35cf106912 req-ed797b66-e9d3-4278-bd3c-9d0e7452c168 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-c8403a0c-2fe6-48fe-91af-ec5aca71e12d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 02:18:15 np0005548731 nova_compute[232433]: 2025-12-06 07:18:15.688 232437 DEBUG oslo_concurrency.lockutils [req-43acaf54-ac81-467e-b039-bb35cf106912 req-ed797b66-e9d3-4278-bd3c-9d0e7452c168 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-c8403a0c-2fe6-48fe-91af-ec5aca71e12d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 02:18:15 np0005548731 nova_compute[232433]: 2025-12-06 07:18:15.688 232437 DEBUG nova.network.neutron [req-43acaf54-ac81-467e-b039-bb35cf106912 req-ed797b66-e9d3-4278-bd3c-9d0e7452c168 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: c8403a0c-2fe6-48fe-91af-ec5aca71e12d] Refreshing network info cache for port a599f1a0-5413-4dc9-9ae4-d7ba512d761c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  6 02:18:16 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:18:16 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:18:16 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:18:16.606 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:18:16 np0005548731 nova_compute[232433]: 2025-12-06 07:18:16.892 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:18:16 np0005548731 nova_compute[232433]: 2025-12-06 07:18:16.957 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:18:16 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:18:16 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:18:16 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:18:16.996 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:18:17 np0005548731 nova_compute[232433]: 2025-12-06 07:18:17.679 232437 DEBUG oslo_concurrency.lockutils [None req-e8ca760e-f12e-4680-8874-77b32b866da9 197a9b0ee1db487d82542eb31e84f33e 8f938a037b8141cf9408cbf6f5cd081d - - default default] Acquiring lock "067b423b-4ac2-4ca9-9000-44b6fb2af34e" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:18:17 np0005548731 nova_compute[232433]: 2025-12-06 07:18:17.680 232437 DEBUG oslo_concurrency.lockutils [None req-e8ca760e-f12e-4680-8874-77b32b866da9 197a9b0ee1db487d82542eb31e84f33e 8f938a037b8141cf9408cbf6f5cd081d - - default default] Lock "067b423b-4ac2-4ca9-9000-44b6fb2af34e" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:18:17 np0005548731 nova_compute[232433]: 2025-12-06 07:18:17.680 232437 DEBUG oslo_concurrency.lockutils [None req-e8ca760e-f12e-4680-8874-77b32b866da9 197a9b0ee1db487d82542eb31e84f33e 8f938a037b8141cf9408cbf6f5cd081d - - default default] Acquiring lock "067b423b-4ac2-4ca9-9000-44b6fb2af34e-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:18:17 np0005548731 nova_compute[232433]: 2025-12-06 07:18:17.681 232437 DEBUG oslo_concurrency.lockutils [None req-e8ca760e-f12e-4680-8874-77b32b866da9 197a9b0ee1db487d82542eb31e84f33e 8f938a037b8141cf9408cbf6f5cd081d - - default default] Lock "067b423b-4ac2-4ca9-9000-44b6fb2af34e-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:18:17 np0005548731 nova_compute[232433]: 2025-12-06 07:18:17.681 232437 DEBUG oslo_concurrency.lockutils [None req-e8ca760e-f12e-4680-8874-77b32b866da9 197a9b0ee1db487d82542eb31e84f33e 8f938a037b8141cf9408cbf6f5cd081d - - default default] Lock "067b423b-4ac2-4ca9-9000-44b6fb2af34e-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:18:17 np0005548731 nova_compute[232433]: 2025-12-06 07:18:17.682 232437 INFO nova.compute.manager [None req-e8ca760e-f12e-4680-8874-77b32b866da9 197a9b0ee1db487d82542eb31e84f33e 8f938a037b8141cf9408cbf6f5cd081d - - default default] [instance: 067b423b-4ac2-4ca9-9000-44b6fb2af34e] Terminating instance#033[00m
Dec  6 02:18:17 np0005548731 nova_compute[232433]: 2025-12-06 07:18:17.684 232437 DEBUG nova.compute.manager [None req-e8ca760e-f12e-4680-8874-77b32b866da9 197a9b0ee1db487d82542eb31e84f33e 8f938a037b8141cf9408cbf6f5cd081d - - default default] [instance: 067b423b-4ac2-4ca9-9000-44b6fb2af34e] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec  6 02:18:17 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e252 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:18:17 np0005548731 kernel: tap5408643b-79 (unregistering): left promiscuous mode
Dec  6 02:18:17 np0005548731 NetworkManager[49182]: <info>  [1765005497.7783] device (tap5408643b-79): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec  6 02:18:17 np0005548731 nova_compute[232433]: 2025-12-06 07:18:17.786 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:18:17 np0005548731 ovn_controller[133927]: 2025-12-06T07:18:17Z|00275|binding|INFO|Releasing lport 5408643b-7986-461d-867d-9f6545eeabb3 from this chassis (sb_readonly=0)
Dec  6 02:18:17 np0005548731 ovn_controller[133927]: 2025-12-06T07:18:17Z|00276|binding|INFO|Setting lport 5408643b-7986-461d-867d-9f6545eeabb3 down in Southbound
Dec  6 02:18:17 np0005548731 ovn_controller[133927]: 2025-12-06T07:18:17Z|00277|binding|INFO|Removing iface tap5408643b-79 ovn-installed in OVS
Dec  6 02:18:17 np0005548731 nova_compute[232433]: 2025-12-06 07:18:17.790 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:18:17 np0005548731 nova_compute[232433]: 2025-12-06 07:18:17.803 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:18:17 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:18:17.803 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:43:c0:da 10.100.0.5'], port_security=['fa:16:3e:43:c0:da 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '067b423b-4ac2-4ca9-9000-44b6fb2af34e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1867060b-2830-47a2-bf0a-46a10388f745', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8f938a037b8141cf9408cbf6f5cd081d', 'neutron:revision_number': '4', 'neutron:security_group_ids': '237cb9be-2d8b-4ac0-bb7a-8d0c44abab0c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com', 'neutron:port_fip': '192.168.122.174'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=cde0a4c1-aca8-4a0b-ab02-e783a68c2106, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], logical_port=5408643b-7986-461d-867d-9f6545eeabb3) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 02:18:17 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:18:17.805 143965 INFO neutron.agent.ovn.metadata.agent [-] Port 5408643b-7986-461d-867d-9f6545eeabb3 in datapath 1867060b-2830-47a2-bf0a-46a10388f745 unbound from our chassis#033[00m
Dec  6 02:18:17 np0005548731 kernel: tap0f3261df-8a (unregistering): left promiscuous mode
Dec  6 02:18:17 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:18:17.808 143965 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 1867060b-2830-47a2-bf0a-46a10388f745, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec  6 02:18:17 np0005548731 NetworkManager[49182]: <info>  [1765005497.8097] device (tap0f3261df-8a): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec  6 02:18:17 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:18:17.810 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[79226032-8d69-4faa-8707-f681f81452d0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:18:17 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:18:17.811 143965 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-1867060b-2830-47a2-bf0a-46a10388f745 namespace which is not needed anymore#033[00m
Dec  6 02:18:17 np0005548731 ovn_controller[133927]: 2025-12-06T07:18:17Z|00278|binding|INFO|Releasing lport 0f3261df-8a99-4884-8db8-4ab5250dafc3 from this chassis (sb_readonly=0)
Dec  6 02:18:17 np0005548731 nova_compute[232433]: 2025-12-06 07:18:17.814 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:18:17 np0005548731 ovn_controller[133927]: 2025-12-06T07:18:17Z|00279|binding|INFO|Setting lport 0f3261df-8a99-4884-8db8-4ab5250dafc3 down in Southbound
Dec  6 02:18:17 np0005548731 ovn_controller[133927]: 2025-12-06T07:18:17Z|00280|binding|INFO|Removing iface tap0f3261df-8a ovn-installed in OVS
Dec  6 02:18:17 np0005548731 nova_compute[232433]: 2025-12-06 07:18:17.820 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:18:17 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:18:17.837 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:33:5a:4c 10.1.1.159'], port_security=['fa:16:3e:33:5a:4c 10.1.1.159'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-TaggedBootDevicesTest-1263915628', 'neutron:cidrs': '10.1.1.159/24', 'neutron:device_id': '067b423b-4ac2-4ca9-9000-44b6fb2af34e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-83f8ee1c-bee2-4425-8792-4c822f63072c', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-TaggedBootDevicesTest-1263915628', 'neutron:project_id': '8f938a037b8141cf9408cbf6f5cd081d', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'c82004de-8ebc-40e2-958d-d9900669c6b9', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e94e8fb2-850a-43df-84ef-79dd7fe43cfe, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], logical_port=0f3261df-8a99-4884-8db8-4ab5250dafc3) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 02:18:17 np0005548731 nova_compute[232433]: 2025-12-06 07:18:17.841 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:18:17 np0005548731 kernel: tap0afb38e3-89 (unregistering): left promiscuous mode
Dec  6 02:18:17 np0005548731 NetworkManager[49182]: <info>  [1765005497.8493] device (tap0afb38e3-89): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec  6 02:18:17 np0005548731 nova_compute[232433]: 2025-12-06 07:18:17.863 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:18:17 np0005548731 nova_compute[232433]: 2025-12-06 07:18:17.865 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:18:17 np0005548731 ovn_controller[133927]: 2025-12-06T07:18:17Z|00281|binding|INFO|Releasing lport 0afb38e3-893f-4379-98a8-4a56e2b84f9d from this chassis (sb_readonly=0)
Dec  6 02:18:17 np0005548731 ovn_controller[133927]: 2025-12-06T07:18:17Z|00282|binding|INFO|Setting lport 0afb38e3-893f-4379-98a8-4a56e2b84f9d down in Southbound
Dec  6 02:18:17 np0005548731 ovn_controller[133927]: 2025-12-06T07:18:17Z|00283|binding|INFO|Removing iface tap0afb38e3-89 ovn-installed in OVS
Dec  6 02:18:17 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:18:17.874 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b0:7f:ee 10.1.1.142'], port_security=['fa:16:3e:b0:7f:ee 10.1.1.142'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-TaggedBootDevicesTest-375565817', 'neutron:cidrs': '10.1.1.142/24', 'neutron:device_id': '067b423b-4ac2-4ca9-9000-44b6fb2af34e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-83f8ee1c-bee2-4425-8792-4c822f63072c', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-TaggedBootDevicesTest-375565817', 'neutron:project_id': '8f938a037b8141cf9408cbf6f5cd081d', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'c82004de-8ebc-40e2-958d-d9900669c6b9', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e94e8fb2-850a-43df-84ef-79dd7fe43cfe, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], logical_port=0afb38e3-893f-4379-98a8-4a56e2b84f9d) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 02:18:17 np0005548731 kernel: tape27e5b98-b4 (unregistering): left promiscuous mode
Dec  6 02:18:17 np0005548731 NetworkManager[49182]: <info>  [1765005497.8799] device (tape27e5b98-b4): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec  6 02:18:17 np0005548731 nova_compute[232433]: 2025-12-06 07:18:17.882 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:18:17 np0005548731 ovn_controller[133927]: 2025-12-06T07:18:17Z|00284|binding|INFO|Releasing lport e27e5b98-b454-4f5a-90a9-3f7a5c8f1a91 from this chassis (sb_readonly=0)
Dec  6 02:18:17 np0005548731 ovn_controller[133927]: 2025-12-06T07:18:17Z|00285|binding|INFO|Setting lport e27e5b98-b454-4f5a-90a9-3f7a5c8f1a91 down in Southbound
Dec  6 02:18:17 np0005548731 nova_compute[232433]: 2025-12-06 07:18:17.895 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:18:17 np0005548731 ovn_controller[133927]: 2025-12-06T07:18:17Z|00286|binding|INFO|Removing iface tape27e5b98-b4 ovn-installed in OVS
Dec  6 02:18:17 np0005548731 nova_compute[232433]: 2025-12-06 07:18:17.897 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:18:17 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:18:17.902 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:44:6f:c0 10.1.1.98'], port_security=['fa:16:3e:44:6f:c0 10.1.1.98'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.1.1.98/24', 'neutron:device_id': '067b423b-4ac2-4ca9-9000-44b6fb2af34e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-83f8ee1c-bee2-4425-8792-4c822f63072c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8f938a037b8141cf9408cbf6f5cd081d', 'neutron:revision_number': '4', 'neutron:security_group_ids': '237cb9be-2d8b-4ac0-bb7a-8d0c44abab0c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e94e8fb2-850a-43df-84ef-79dd7fe43cfe, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], logical_port=e27e5b98-b454-4f5a-90a9-3f7a5c8f1a91) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 02:18:17 np0005548731 kernel: tap5984d840-e1 (unregistering): left promiscuous mode
Dec  6 02:18:17 np0005548731 NetworkManager[49182]: <info>  [1765005497.9109] device (tap5984d840-e1): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec  6 02:18:17 np0005548731 nova_compute[232433]: 2025-12-06 07:18:17.913 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:18:17 np0005548731 kernel: tap509f7f18-31 (unregistering): left promiscuous mode
Dec  6 02:18:17 np0005548731 nova_compute[232433]: 2025-12-06 07:18:17.930 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:18:17 np0005548731 ovn_controller[133927]: 2025-12-06T07:18:17Z|00287|binding|INFO|Releasing lport 5984d840-e1cd-45dc-87cb-a337e73a753c from this chassis (sb_readonly=0)
Dec  6 02:18:17 np0005548731 ovn_controller[133927]: 2025-12-06T07:18:17Z|00288|binding|INFO|Setting lport 5984d840-e1cd-45dc-87cb-a337e73a753c down in Southbound
Dec  6 02:18:17 np0005548731 ovn_controller[133927]: 2025-12-06T07:18:17Z|00289|binding|INFO|Removing iface tap5984d840-e1 ovn-installed in OVS
Dec  6 02:18:17 np0005548731 NetworkManager[49182]: <info>  [1765005497.9336] device (tap509f7f18-31): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec  6 02:18:17 np0005548731 nova_compute[232433]: 2025-12-06 07:18:17.934 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:18:17 np0005548731 neutron-haproxy-ovnmeta-1867060b-2830-47a2-bf0a-46a10388f745[263703]: [NOTICE]   (263707) : haproxy version is 2.8.14-c23fe91
Dec  6 02:18:17 np0005548731 neutron-haproxy-ovnmeta-1867060b-2830-47a2-bf0a-46a10388f745[263703]: [NOTICE]   (263707) : path to executable is /usr/sbin/haproxy
Dec  6 02:18:17 np0005548731 neutron-haproxy-ovnmeta-1867060b-2830-47a2-bf0a-46a10388f745[263703]: [WARNING]  (263707) : Exiting Master process...
Dec  6 02:18:17 np0005548731 neutron-haproxy-ovnmeta-1867060b-2830-47a2-bf0a-46a10388f745[263703]: [ALERT]    (263707) : Current worker (263709) exited with code 143 (Terminated)
Dec  6 02:18:17 np0005548731 neutron-haproxy-ovnmeta-1867060b-2830-47a2-bf0a-46a10388f745[263703]: [WARNING]  (263707) : All workers exited. Exiting... (0)
Dec  6 02:18:17 np0005548731 systemd[1]: libpod-1db47b1cbe1fd964d95d61b843c4e9a2e5536e6abd6996693dec2aabcfb9a7d5.scope: Deactivated successfully.
Dec  6 02:18:17 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:18:17.947 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:0b:78:59 10.1.1.181'], port_security=['fa:16:3e:0b:78:59 10.1.1.181'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.1.1.181/24', 'neutron:device_id': '067b423b-4ac2-4ca9-9000-44b6fb2af34e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-83f8ee1c-bee2-4425-8792-4c822f63072c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8f938a037b8141cf9408cbf6f5cd081d', 'neutron:revision_number': '4', 'neutron:security_group_ids': '237cb9be-2d8b-4ac0-bb7a-8d0c44abab0c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e94e8fb2-850a-43df-84ef-79dd7fe43cfe, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], logical_port=5984d840-e1cd-45dc-87cb-a337e73a753c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 02:18:17 np0005548731 kernel: tap0355ba71-a5 (unregistering): left promiscuous mode
Dec  6 02:18:17 np0005548731 podman[264138]: 2025-12-06 07:18:17.95399948 +0000 UTC m=+0.046886862 container died 1db47b1cbe1fd964d95d61b843c4e9a2e5536e6abd6996693dec2aabcfb9a7d5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-1867060b-2830-47a2-bf0a-46a10388f745, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  6 02:18:17 np0005548731 NetworkManager[49182]: <info>  [1765005497.9573] device (tap0355ba71-a5): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec  6 02:18:17 np0005548731 nova_compute[232433]: 2025-12-06 07:18:17.962 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:18:17 np0005548731 ovn_controller[133927]: 2025-12-06T07:18:17Z|00290|binding|INFO|Releasing lport 509f7f18-31bd-42cd-83f5-e9b62a4e200b from this chassis (sb_readonly=0)
Dec  6 02:18:17 np0005548731 ovn_controller[133927]: 2025-12-06T07:18:17Z|00291|binding|INFO|Setting lport 509f7f18-31bd-42cd-83f5-e9b62a4e200b down in Southbound
Dec  6 02:18:17 np0005548731 ovn_controller[133927]: 2025-12-06T07:18:17Z|00292|binding|INFO|Removing iface tap509f7f18-31 ovn-installed in OVS
Dec  6 02:18:17 np0005548731 nova_compute[232433]: 2025-12-06 07:18:17.966 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:18:17 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:18:17.976 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3a:d0:96 10.2.2.100'], port_security=['fa:16:3e:3a:d0:96 10.2.2.100'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.2.2.100/24', 'neutron:device_id': '067b423b-4ac2-4ca9-9000-44b6fb2af34e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8856153c-fd33-4eca-913c-fbbc6c3bd29c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8f938a037b8141cf9408cbf6f5cd081d', 'neutron:revision_number': '4', 'neutron:security_group_ids': '237cb9be-2d8b-4ac0-bb7a-8d0c44abab0c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=732bca81-ac12-4eee-8f7d-ffbf43eb8c29, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], logical_port=509f7f18-31bd-42cd-83f5-e9b62a4e200b) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 02:18:17 np0005548731 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-1db47b1cbe1fd964d95d61b843c4e9a2e5536e6abd6996693dec2aabcfb9a7d5-userdata-shm.mount: Deactivated successfully.
Dec  6 02:18:17 np0005548731 systemd[1]: var-lib-containers-storage-overlay-d909fbe269594c6a5a0b52c4e0d43221b5bfd0ad074ad79ed5c29a32af488c79-merged.mount: Deactivated successfully.
Dec  6 02:18:17 np0005548731 ovn_controller[133927]: 2025-12-06T07:18:17Z|00293|binding|INFO|Releasing lport 0355ba71-a598-4e22-a0b7-c5c10b61733e from this chassis (sb_readonly=0)
Dec  6 02:18:17 np0005548731 nova_compute[232433]: 2025-12-06 07:18:17.997 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:18:17 np0005548731 ovn_controller[133927]: 2025-12-06T07:18:17Z|00294|binding|INFO|Setting lport 0355ba71-a598-4e22-a0b7-c5c10b61733e down in Southbound
Dec  6 02:18:17 np0005548731 ovn_controller[133927]: 2025-12-06T07:18:17Z|00295|binding|INFO|Removing iface tap0355ba71-a5 ovn-installed in OVS
Dec  6 02:18:18 np0005548731 nova_compute[232433]: 2025-12-06 07:18:18.000 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:18:18 np0005548731 nova_compute[232433]: 2025-12-06 07:18:18.001 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:18:18 np0005548731 podman[264138]: 2025-12-06 07:18:18.002929502 +0000 UTC m=+0.095816884 container cleanup 1db47b1cbe1fd964d95d61b843c4e9a2e5536e6abd6996693dec2aabcfb9a7d5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-1867060b-2830-47a2-bf0a-46a10388f745, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  6 02:18:18 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:18:18.005 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a4:48:7a 10.2.2.200'], port_security=['fa:16:3e:a4:48:7a 10.2.2.200'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.2.2.200/24', 'neutron:device_id': '067b423b-4ac2-4ca9-9000-44b6fb2af34e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8856153c-fd33-4eca-913c-fbbc6c3bd29c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8f938a037b8141cf9408cbf6f5cd081d', 'neutron:revision_number': '4', 'neutron:security_group_ids': '237cb9be-2d8b-4ac0-bb7a-8d0c44abab0c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=732bca81-ac12-4eee-8f7d-ffbf43eb8c29, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], logical_port=0355ba71-a598-4e22-a0b7-c5c10b61733e) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 02:18:18 np0005548731 systemd[1]: libpod-conmon-1db47b1cbe1fd964d95d61b843c4e9a2e5536e6abd6996693dec2aabcfb9a7d5.scope: Deactivated successfully.
Dec  6 02:18:18 np0005548731 nova_compute[232433]: 2025-12-06 07:18:18.012 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:18:18 np0005548731 systemd[1]: machine-qemu\x2d32\x2dinstance\x2d0000004b.scope: Deactivated successfully.
Dec  6 02:18:18 np0005548731 systemd[1]: machine-qemu\x2d32\x2dinstance\x2d0000004b.scope: Consumed 17.066s CPU time.
Dec  6 02:18:18 np0005548731 systemd-machined[195355]: Machine qemu-32-instance-0000004b terminated.
Dec  6 02:18:18 np0005548731 podman[264190]: 2025-12-06 07:18:18.063022535 +0000 UTC m=+0.038312494 container remove 1db47b1cbe1fd964d95d61b843c4e9a2e5536e6abd6996693dec2aabcfb9a7d5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-1867060b-2830-47a2-bf0a-46a10388f745, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2)
Dec  6 02:18:18 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:18:18.068 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[6cdd56dc-4aa8-4536-a02a-f411c4643a14]: (4, ('Sat Dec  6 07:18:17 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-1867060b-2830-47a2-bf0a-46a10388f745 (1db47b1cbe1fd964d95d61b843c4e9a2e5536e6abd6996693dec2aabcfb9a7d5)\n1db47b1cbe1fd964d95d61b843c4e9a2e5536e6abd6996693dec2aabcfb9a7d5\nSat Dec  6 07:18:18 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-1867060b-2830-47a2-bf0a-46a10388f745 (1db47b1cbe1fd964d95d61b843c4e9a2e5536e6abd6996693dec2aabcfb9a7d5)\n1db47b1cbe1fd964d95d61b843c4e9a2e5536e6abd6996693dec2aabcfb9a7d5\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:18:18 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:18:18.069 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[468b25c3-fb27-4bc2-bc11-b0a42d3cd238]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:18:18 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:18:18.070 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1867060b-20, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:18:18 np0005548731 nova_compute[232433]: 2025-12-06 07:18:18.072 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:18:18 np0005548731 kernel: tap1867060b-20: left promiscuous mode
Dec  6 02:18:18 np0005548731 nova_compute[232433]: 2025-12-06 07:18:18.102 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:18:18 np0005548731 nova_compute[232433]: 2025-12-06 07:18:18.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:18:18 np0005548731 nova_compute[232433]: 2025-12-06 07:18:18.104 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Dec  6 02:18:18 np0005548731 NetworkManager[49182]: <info>  [1765005498.1068] manager: (tap5408643b-79): new Tun device (/org/freedesktop/NetworkManager/Devices/153)
Dec  6 02:18:18 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:18:18.107 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[a710dc26-8894-4341-8365-1726bb56fcd4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:18:18 np0005548731 NetworkManager[49182]: <info>  [1765005498.1180] manager: (tap0f3261df-8a): new Tun device (/org/freedesktop/NetworkManager/Devices/154)
Dec  6 02:18:18 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:18:18.126 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[62b5efd2-448f-4382-9a81-464133f2e24e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:18:18 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:18:18.129 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[1c4820f6-a5b4-4d9f-b891-3db44344800e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:18:18 np0005548731 NetworkManager[49182]: <info>  [1765005498.1386] manager: (tap0afb38e3-89): new Tun device (/org/freedesktop/NetworkManager/Devices/155)
Dec  6 02:18:18 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:18:18.149 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[2eee197a-3e9a-4341-8e39-2eff38c7b69b]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 572379, 'reachable_time': 35428, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 264236, 'error': None, 'target': 'ovnmeta-1867060b-2830-47a2-bf0a-46a10388f745', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:18:18 np0005548731 systemd[1]: run-netns-ovnmeta\x2d1867060b\x2d2830\x2d47a2\x2dbf0a\x2d46a10388f745.mount: Deactivated successfully.
Dec  6 02:18:18 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:18:18.154 144079 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-1867060b-2830-47a2-bf0a-46a10388f745 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Dec  6 02:18:18 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:18:18.155 144079 DEBUG oslo.privsep.daemon [-] privsep: reply[4bdafbe7-c8c5-463e-bac5-c6b067e905d6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:18:18 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:18:18.156 143965 INFO neutron.agent.ovn.metadata.agent [-] Port 0f3261df-8a99-4884-8db8-4ab5250dafc3 in datapath 83f8ee1c-bee2-4425-8792-4c822f63072c unbound from our chassis#033[00m
Dec  6 02:18:18 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:18:18.158 143965 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 83f8ee1c-bee2-4425-8792-4c822f63072c, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec  6 02:18:18 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:18:18.159 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[0e37ee74-270f-46b7-9dd9-7ff54e4fdf1c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:18:18 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:18:18.159 143965 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-83f8ee1c-bee2-4425-8792-4c822f63072c namespace which is not needed anymore#033[00m
Dec  6 02:18:18 np0005548731 nova_compute[232433]: 2025-12-06 07:18:18.196 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:18:18 np0005548731 nova_compute[232433]: 2025-12-06 07:18:18.222 232437 INFO nova.virt.libvirt.driver [-] [instance: 067b423b-4ac2-4ca9-9000-44b6fb2af34e] Instance destroyed successfully.#033[00m
Dec  6 02:18:18 np0005548731 nova_compute[232433]: 2025-12-06 07:18:18.223 232437 DEBUG nova.objects.instance [None req-e8ca760e-f12e-4680-8874-77b32b866da9 197a9b0ee1db487d82542eb31e84f33e 8f938a037b8141cf9408cbf6f5cd081d - - default default] Lazy-loading 'resources' on Instance uuid 067b423b-4ac2-4ca9-9000-44b6fb2af34e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:18:18 np0005548731 nova_compute[232433]: 2025-12-06 07:18:18.233 232437 DEBUG nova.compute.manager [req-3127c6ff-d191-4102-b918-e2de4adb19df req-5483fddb-587a-4142-9597-c6208706ae7f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 067b423b-4ac2-4ca9-9000-44b6fb2af34e] Received event network-vif-unplugged-5408643b-7986-461d-867d-9f6545eeabb3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:18:18 np0005548731 nova_compute[232433]: 2025-12-06 07:18:18.234 232437 DEBUG oslo_concurrency.lockutils [req-3127c6ff-d191-4102-b918-e2de4adb19df req-5483fddb-587a-4142-9597-c6208706ae7f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "067b423b-4ac2-4ca9-9000-44b6fb2af34e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:18:18 np0005548731 nova_compute[232433]: 2025-12-06 07:18:18.234 232437 DEBUG oslo_concurrency.lockutils [req-3127c6ff-d191-4102-b918-e2de4adb19df req-5483fddb-587a-4142-9597-c6208706ae7f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "067b423b-4ac2-4ca9-9000-44b6fb2af34e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:18:18 np0005548731 nova_compute[232433]: 2025-12-06 07:18:18.234 232437 DEBUG oslo_concurrency.lockutils [req-3127c6ff-d191-4102-b918-e2de4adb19df req-5483fddb-587a-4142-9597-c6208706ae7f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "067b423b-4ac2-4ca9-9000-44b6fb2af34e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:18:18 np0005548731 nova_compute[232433]: 2025-12-06 07:18:18.234 232437 DEBUG nova.compute.manager [req-3127c6ff-d191-4102-b918-e2de4adb19df req-5483fddb-587a-4142-9597-c6208706ae7f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 067b423b-4ac2-4ca9-9000-44b6fb2af34e] No waiting events found dispatching network-vif-unplugged-5408643b-7986-461d-867d-9f6545eeabb3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 02:18:18 np0005548731 nova_compute[232433]: 2025-12-06 07:18:18.235 232437 DEBUG nova.compute.manager [req-3127c6ff-d191-4102-b918-e2de4adb19df req-5483fddb-587a-4142-9597-c6208706ae7f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 067b423b-4ac2-4ca9-9000-44b6fb2af34e] Received event network-vif-unplugged-5408643b-7986-461d-867d-9f6545eeabb3 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Dec  6 02:18:18 np0005548731 nova_compute[232433]: 2025-12-06 07:18:18.236 232437 DEBUG nova.virt.libvirt.vif [None req-e8ca760e-f12e-4680-8874-77b32b866da9 197a9b0ee1db487d82542eb31e84f33e 8f938a037b8141cf9408cbf6f5cd081d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T07:16:55Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-device-tagging-server-1733639154',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-device-tagging-server-1733639154',id=75,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHdu5SJtQNPQGb6FScAscHwhPOc3vR8R0vyVdAVKvBqqjhGOukzZ8D/fDg4EduNBcT5BiiJbdANtRqZ5sQMLiRX9x5mErRanB2F2dkvm8uC5tUXlBwNqaOj1FVwBesGPnQ==',key_name='tempest-keypair-964256358',keypairs=<?>,launch_index=0,launched_at=2025-12-06T07:17:45Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='8f938a037b8141cf9408cbf6f5cd081d',ramdisk_id='',reservation_id='r-5mbntsd9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input
_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TaggedBootDevicesTest-128611843',owner_user_name='tempest-TaggedBootDevicesTest-128611843-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T07:17:46Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='197a9b0ee1db487d82542eb31e84f33e',uuid=067b423b-4ac2-4ca9-9000-44b6fb2af34e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "5408643b-7986-461d-867d-9f6545eeabb3", "address": "fa:16:3e:43:c0:da", "network": {"id": "1867060b-2830-47a2-bf0a-46a10388f745", "bridge": "br-int", "label": "tempest-TaggedBootDevicesTest-1327949345-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.174", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8f938a037b8141cf9408cbf6f5cd081d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5408643b-79", "ovs_interfaceid": "5408643b-7986-461d-867d-9f6545eeabb3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec  6 02:18:18 np0005548731 nova_compute[232433]: 2025-12-06 07:18:18.237 232437 DEBUG nova.network.os_vif_util [None req-e8ca760e-f12e-4680-8874-77b32b866da9 197a9b0ee1db487d82542eb31e84f33e 8f938a037b8141cf9408cbf6f5cd081d - - default default] Converting VIF {"id": "5408643b-7986-461d-867d-9f6545eeabb3", "address": "fa:16:3e:43:c0:da", "network": {"id": "1867060b-2830-47a2-bf0a-46a10388f745", "bridge": "br-int", "label": "tempest-TaggedBootDevicesTest-1327949345-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.174", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8f938a037b8141cf9408cbf6f5cd081d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5408643b-79", "ovs_interfaceid": "5408643b-7986-461d-867d-9f6545eeabb3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 02:18:18 np0005548731 nova_compute[232433]: 2025-12-06 07:18:18.238 232437 DEBUG nova.network.os_vif_util [None req-e8ca760e-f12e-4680-8874-77b32b866da9 197a9b0ee1db487d82542eb31e84f33e 8f938a037b8141cf9408cbf6f5cd081d - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:43:c0:da,bridge_name='br-int',has_traffic_filtering=True,id=5408643b-7986-461d-867d-9f6545eeabb3,network=Network(1867060b-2830-47a2-bf0a-46a10388f745),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5408643b-79') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 02:18:18 np0005548731 nova_compute[232433]: 2025-12-06 07:18:18.238 232437 DEBUG os_vif [None req-e8ca760e-f12e-4680-8874-77b32b866da9 197a9b0ee1db487d82542eb31e84f33e 8f938a037b8141cf9408cbf6f5cd081d - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:43:c0:da,bridge_name='br-int',has_traffic_filtering=True,id=5408643b-7986-461d-867d-9f6545eeabb3,network=Network(1867060b-2830-47a2-bf0a-46a10388f745),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5408643b-79') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec  6 02:18:18 np0005548731 nova_compute[232433]: 2025-12-06 07:18:18.239 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:18:18 np0005548731 nova_compute[232433]: 2025-12-06 07:18:18.239 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5408643b-79, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:18:18 np0005548731 nova_compute[232433]: 2025-12-06 07:18:18.240 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:18:18 np0005548731 nova_compute[232433]: 2025-12-06 07:18:18.242 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  6 02:18:18 np0005548731 nova_compute[232433]: 2025-12-06 07:18:18.256 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:18:18 np0005548731 nova_compute[232433]: 2025-12-06 07:18:18.259 232437 INFO os_vif [None req-e8ca760e-f12e-4680-8874-77b32b866da9 197a9b0ee1db487d82542eb31e84f33e 8f938a037b8141cf9408cbf6f5cd081d - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:43:c0:da,bridge_name='br-int',has_traffic_filtering=True,id=5408643b-7986-461d-867d-9f6545eeabb3,network=Network(1867060b-2830-47a2-bf0a-46a10388f745),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5408643b-79')#033[00m
Dec  6 02:18:18 np0005548731 nova_compute[232433]: 2025-12-06 07:18:18.260 232437 DEBUG nova.virt.libvirt.vif [None req-e8ca760e-f12e-4680-8874-77b32b866da9 197a9b0ee1db487d82542eb31e84f33e 8f938a037b8141cf9408cbf6f5cd081d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T07:16:55Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-device-tagging-server-1733639154',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-device-tagging-server-1733639154',id=75,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHdu5SJtQNPQGb6FScAscHwhPOc3vR8R0vyVdAVKvBqqjhGOukzZ8D/fDg4EduNBcT5BiiJbdANtRqZ5sQMLiRX9x5mErRanB2F2dkvm8uC5tUXlBwNqaOj1FVwBesGPnQ==',key_name='tempest-keypair-964256358',keypairs=<?>,launch_index=0,launched_at=2025-12-06T07:17:45Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='8f938a037b8141cf9408cbf6f5cd081d',ramdisk_id='',reservation_id='r-5mbntsd9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input
_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TaggedBootDevicesTest-128611843',owner_user_name='tempest-TaggedBootDevicesTest-128611843-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T07:17:46Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='197a9b0ee1db487d82542eb31e84f33e',uuid=067b423b-4ac2-4ca9-9000-44b6fb2af34e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "0f3261df-8a99-4884-8db8-4ab5250dafc3", "address": "fa:16:3e:33:5a:4c", "network": {"id": "83f8ee1c-bee2-4425-8792-4c822f63072c", "bridge": "br-int", "label": "tempest-device-tagging-net1-1182028938", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.159", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8f938a037b8141cf9408cbf6f5cd081d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0f3261df-8a", "ovs_interfaceid": "0f3261df-8a99-4884-8db8-4ab5250dafc3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec  6 02:18:18 np0005548731 nova_compute[232433]: 2025-12-06 07:18:18.260 232437 DEBUG nova.network.os_vif_util [None req-e8ca760e-f12e-4680-8874-77b32b866da9 197a9b0ee1db487d82542eb31e84f33e 8f938a037b8141cf9408cbf6f5cd081d - - default default] Converting VIF {"id": "0f3261df-8a99-4884-8db8-4ab5250dafc3", "address": "fa:16:3e:33:5a:4c", "network": {"id": "83f8ee1c-bee2-4425-8792-4c822f63072c", "bridge": "br-int", "label": "tempest-device-tagging-net1-1182028938", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.159", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8f938a037b8141cf9408cbf6f5cd081d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0f3261df-8a", "ovs_interfaceid": "0f3261df-8a99-4884-8db8-4ab5250dafc3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 02:18:18 np0005548731 nova_compute[232433]: 2025-12-06 07:18:18.261 232437 DEBUG nova.network.os_vif_util [None req-e8ca760e-f12e-4680-8874-77b32b866da9 197a9b0ee1db487d82542eb31e84f33e 8f938a037b8141cf9408cbf6f5cd081d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:33:5a:4c,bridge_name='br-int',has_traffic_filtering=True,id=0f3261df-8a99-4884-8db8-4ab5250dafc3,network=Network(83f8ee1c-bee2-4425-8792-4c822f63072c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap0f3261df-8a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 02:18:18 np0005548731 nova_compute[232433]: 2025-12-06 07:18:18.261 232437 DEBUG os_vif [None req-e8ca760e-f12e-4680-8874-77b32b866da9 197a9b0ee1db487d82542eb31e84f33e 8f938a037b8141cf9408cbf6f5cd081d - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:33:5a:4c,bridge_name='br-int',has_traffic_filtering=True,id=0f3261df-8a99-4884-8db8-4ab5250dafc3,network=Network(83f8ee1c-bee2-4425-8792-4c822f63072c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap0f3261df-8a') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec  6 02:18:18 np0005548731 nova_compute[232433]: 2025-12-06 07:18:18.262 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:18:18 np0005548731 nova_compute[232433]: 2025-12-06 07:18:18.262 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0f3261df-8a, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:18:18 np0005548731 nova_compute[232433]: 2025-12-06 07:18:18.263 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:18:18 np0005548731 nova_compute[232433]: 2025-12-06 07:18:18.265 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  6 02:18:18 np0005548731 nova_compute[232433]: 2025-12-06 07:18:18.275 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:18:18 np0005548731 nova_compute[232433]: 2025-12-06 07:18:18.277 232437 INFO os_vif [None req-e8ca760e-f12e-4680-8874-77b32b866da9 197a9b0ee1db487d82542eb31e84f33e 8f938a037b8141cf9408cbf6f5cd081d - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:33:5a:4c,bridge_name='br-int',has_traffic_filtering=True,id=0f3261df-8a99-4884-8db8-4ab5250dafc3,network=Network(83f8ee1c-bee2-4425-8792-4c822f63072c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap0f3261df-8a')#033[00m
Dec  6 02:18:18 np0005548731 nova_compute[232433]: 2025-12-06 07:18:18.278 232437 DEBUG nova.virt.libvirt.vif [None req-e8ca760e-f12e-4680-8874-77b32b866da9 197a9b0ee1db487d82542eb31e84f33e 8f938a037b8141cf9408cbf6f5cd081d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T07:16:55Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-device-tagging-server-1733639154',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-device-tagging-server-1733639154',id=75,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHdu5SJtQNPQGb6FScAscHwhPOc3vR8R0vyVdAVKvBqqjhGOukzZ8D/fDg4EduNBcT5BiiJbdANtRqZ5sQMLiRX9x5mErRanB2F2dkvm8uC5tUXlBwNqaOj1FVwBesGPnQ==',key_name='tempest-keypair-964256358',keypairs=<?>,launch_index=0,launched_at=2025-12-06T07:17:45Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='8f938a037b8141cf9408cbf6f5cd081d',ramdisk_id='',reservation_id='r-5mbntsd9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input
_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TaggedBootDevicesTest-128611843',owner_user_name='tempest-TaggedBootDevicesTest-128611843-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T07:17:46Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='197a9b0ee1db487d82542eb31e84f33e',uuid=067b423b-4ac2-4ca9-9000-44b6fb2af34e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "0afb38e3-893f-4379-98a8-4a56e2b84f9d", "address": "fa:16:3e:b0:7f:ee", "network": {"id": "83f8ee1c-bee2-4425-8792-4c822f63072c", "bridge": "br-int", "label": "tempest-device-tagging-net1-1182028938", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.142", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8f938a037b8141cf9408cbf6f5cd081d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0afb38e3-89", "ovs_interfaceid": "0afb38e3-893f-4379-98a8-4a56e2b84f9d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec  6 02:18:18 np0005548731 nova_compute[232433]: 2025-12-06 07:18:18.278 232437 DEBUG nova.network.os_vif_util [None req-e8ca760e-f12e-4680-8874-77b32b866da9 197a9b0ee1db487d82542eb31e84f33e 8f938a037b8141cf9408cbf6f5cd081d - - default default] Converting VIF {"id": "0afb38e3-893f-4379-98a8-4a56e2b84f9d", "address": "fa:16:3e:b0:7f:ee", "network": {"id": "83f8ee1c-bee2-4425-8792-4c822f63072c", "bridge": "br-int", "label": "tempest-device-tagging-net1-1182028938", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.142", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8f938a037b8141cf9408cbf6f5cd081d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0afb38e3-89", "ovs_interfaceid": "0afb38e3-893f-4379-98a8-4a56e2b84f9d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 02:18:18 np0005548731 nova_compute[232433]: 2025-12-06 07:18:18.279 232437 DEBUG nova.network.os_vif_util [None req-e8ca760e-f12e-4680-8874-77b32b866da9 197a9b0ee1db487d82542eb31e84f33e 8f938a037b8141cf9408cbf6f5cd081d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b0:7f:ee,bridge_name='br-int',has_traffic_filtering=True,id=0afb38e3-893f-4379-98a8-4a56e2b84f9d,network=Network(83f8ee1c-bee2-4425-8792-4c822f63072c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap0afb38e3-89') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 02:18:18 np0005548731 nova_compute[232433]: 2025-12-06 07:18:18.279 232437 DEBUG os_vif [None req-e8ca760e-f12e-4680-8874-77b32b866da9 197a9b0ee1db487d82542eb31e84f33e 8f938a037b8141cf9408cbf6f5cd081d - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b0:7f:ee,bridge_name='br-int',has_traffic_filtering=True,id=0afb38e3-893f-4379-98a8-4a56e2b84f9d,network=Network(83f8ee1c-bee2-4425-8792-4c822f63072c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap0afb38e3-89') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec  6 02:18:18 np0005548731 nova_compute[232433]: 2025-12-06 07:18:18.280 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:18:18 np0005548731 nova_compute[232433]: 2025-12-06 07:18:18.280 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0afb38e3-89, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:18:18 np0005548731 nova_compute[232433]: 2025-12-06 07:18:18.282 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:18:18 np0005548731 nova_compute[232433]: 2025-12-06 07:18:18.283 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  6 02:18:18 np0005548731 nova_compute[232433]: 2025-12-06 07:18:18.293 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:18:18 np0005548731 nova_compute[232433]: 2025-12-06 07:18:18.295 232437 INFO os_vif [None req-e8ca760e-f12e-4680-8874-77b32b866da9 197a9b0ee1db487d82542eb31e84f33e 8f938a037b8141cf9408cbf6f5cd081d - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b0:7f:ee,bridge_name='br-int',has_traffic_filtering=True,id=0afb38e3-893f-4379-98a8-4a56e2b84f9d,network=Network(83f8ee1c-bee2-4425-8792-4c822f63072c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap0afb38e3-89')#033[00m
Dec  6 02:18:18 np0005548731 nova_compute[232433]: 2025-12-06 07:18:18.295 232437 DEBUG nova.virt.libvirt.vif [None req-e8ca760e-f12e-4680-8874-77b32b866da9 197a9b0ee1db487d82542eb31e84f33e 8f938a037b8141cf9408cbf6f5cd081d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T07:16:55Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-device-tagging-server-1733639154',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-device-tagging-server-1733639154',id=75,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHdu5SJtQNPQGb6FScAscHwhPOc3vR8R0vyVdAVKvBqqjhGOukzZ8D/fDg4EduNBcT5BiiJbdANtRqZ5sQMLiRX9x5mErRanB2F2dkvm8uC5tUXlBwNqaOj1FVwBesGPnQ==',key_name='tempest-keypair-964256358',keypairs=<?>,launch_index=0,launched_at=2025-12-06T07:17:45Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='8f938a037b8141cf9408cbf6f5cd081d',ramdisk_id='',reservation_id='r-5mbntsd9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input
_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TaggedBootDevicesTest-128611843',owner_user_name='tempest-TaggedBootDevicesTest-128611843-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T07:17:46Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='197a9b0ee1db487d82542eb31e84f33e',uuid=067b423b-4ac2-4ca9-9000-44b6fb2af34e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "e27e5b98-b454-4f5a-90a9-3f7a5c8f1a91", "address": "fa:16:3e:44:6f:c0", "network": {"id": "83f8ee1c-bee2-4425-8792-4c822f63072c", "bridge": "br-int", "label": "tempest-device-tagging-net1-1182028938", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.98", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8f938a037b8141cf9408cbf6f5cd081d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape27e5b98-b4", "ovs_interfaceid": "e27e5b98-b454-4f5a-90a9-3f7a5c8f1a91", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec  6 02:18:18 np0005548731 nova_compute[232433]: 2025-12-06 07:18:18.296 232437 DEBUG nova.network.os_vif_util [None req-e8ca760e-f12e-4680-8874-77b32b866da9 197a9b0ee1db487d82542eb31e84f33e 8f938a037b8141cf9408cbf6f5cd081d - - default default] Converting VIF {"id": "e27e5b98-b454-4f5a-90a9-3f7a5c8f1a91", "address": "fa:16:3e:44:6f:c0", "network": {"id": "83f8ee1c-bee2-4425-8792-4c822f63072c", "bridge": "br-int", "label": "tempest-device-tagging-net1-1182028938", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.98", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8f938a037b8141cf9408cbf6f5cd081d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape27e5b98-b4", "ovs_interfaceid": "e27e5b98-b454-4f5a-90a9-3f7a5c8f1a91", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 02:18:18 np0005548731 nova_compute[232433]: 2025-12-06 07:18:18.296 232437 DEBUG nova.network.os_vif_util [None req-e8ca760e-f12e-4680-8874-77b32b866da9 197a9b0ee1db487d82542eb31e84f33e 8f938a037b8141cf9408cbf6f5cd081d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:44:6f:c0,bridge_name='br-int',has_traffic_filtering=True,id=e27e5b98-b454-4f5a-90a9-3f7a5c8f1a91,network=Network(83f8ee1c-bee2-4425-8792-4c822f63072c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape27e5b98-b4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 02:18:18 np0005548731 nova_compute[232433]: 2025-12-06 07:18:18.297 232437 DEBUG os_vif [None req-e8ca760e-f12e-4680-8874-77b32b866da9 197a9b0ee1db487d82542eb31e84f33e 8f938a037b8141cf9408cbf6f5cd081d - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:44:6f:c0,bridge_name='br-int',has_traffic_filtering=True,id=e27e5b98-b454-4f5a-90a9-3f7a5c8f1a91,network=Network(83f8ee1c-bee2-4425-8792-4c822f63072c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape27e5b98-b4') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec  6 02:18:18 np0005548731 nova_compute[232433]: 2025-12-06 07:18:18.298 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:18:18 np0005548731 nova_compute[232433]: 2025-12-06 07:18:18.298 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape27e5b98-b4, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:18:18 np0005548731 nova_compute[232433]: 2025-12-06 07:18:18.299 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:18:18 np0005548731 nova_compute[232433]: 2025-12-06 07:18:18.300 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  6 02:18:18 np0005548731 neutron-haproxy-ovnmeta-83f8ee1c-bee2-4425-8792-4c822f63072c[263530]: [NOTICE]   (263534) : haproxy version is 2.8.14-c23fe91
Dec  6 02:18:18 np0005548731 neutron-haproxy-ovnmeta-83f8ee1c-bee2-4425-8792-4c822f63072c[263530]: [NOTICE]   (263534) : path to executable is /usr/sbin/haproxy
Dec  6 02:18:18 np0005548731 neutron-haproxy-ovnmeta-83f8ee1c-bee2-4425-8792-4c822f63072c[263530]: [WARNING]  (263534) : Exiting Master process...
Dec  6 02:18:18 np0005548731 neutron-haproxy-ovnmeta-83f8ee1c-bee2-4425-8792-4c822f63072c[263530]: [WARNING]  (263534) : Exiting Master process...
Dec  6 02:18:18 np0005548731 neutron-haproxy-ovnmeta-83f8ee1c-bee2-4425-8792-4c822f63072c[263530]: [ALERT]    (263534) : Current worker (263536) exited with code 143 (Terminated)
Dec  6 02:18:18 np0005548731 neutron-haproxy-ovnmeta-83f8ee1c-bee2-4425-8792-4c822f63072c[263530]: [WARNING]  (263534) : All workers exited. Exiting... (0)
Dec  6 02:18:18 np0005548731 systemd[1]: libpod-ff47bd6677ae4ca0534768d06167f594fce13ad462ef7405d5b9b1401b9b1299.scope: Deactivated successfully.
Dec  6 02:18:18 np0005548731 nova_compute[232433]: 2025-12-06 07:18:18.309 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:18:18 np0005548731 nova_compute[232433]: 2025-12-06 07:18:18.311 232437 INFO os_vif [None req-e8ca760e-f12e-4680-8874-77b32b866da9 197a9b0ee1db487d82542eb31e84f33e 8f938a037b8141cf9408cbf6f5cd081d - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:44:6f:c0,bridge_name='br-int',has_traffic_filtering=True,id=e27e5b98-b454-4f5a-90a9-3f7a5c8f1a91,network=Network(83f8ee1c-bee2-4425-8792-4c822f63072c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape27e5b98-b4')#033[00m
Dec  6 02:18:18 np0005548731 nova_compute[232433]: 2025-12-06 07:18:18.312 232437 DEBUG nova.virt.libvirt.vif [None req-e8ca760e-f12e-4680-8874-77b32b866da9 197a9b0ee1db487d82542eb31e84f33e 8f938a037b8141cf9408cbf6f5cd081d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T07:16:55Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-device-tagging-server-1733639154',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-device-tagging-server-1733639154',id=75,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHdu5SJtQNPQGb6FScAscHwhPOc3vR8R0vyVdAVKvBqqjhGOukzZ8D/fDg4EduNBcT5BiiJbdANtRqZ5sQMLiRX9x5mErRanB2F2dkvm8uC5tUXlBwNqaOj1FVwBesGPnQ==',key_name='tempest-keypair-964256358',keypairs=<?>,launch_index=0,launched_at=2025-12-06T07:17:45Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='8f938a037b8141cf9408cbf6f5cd081d',ramdisk_id='',reservation_id='r-5mbntsd9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input
_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TaggedBootDevicesTest-128611843',owner_user_name='tempest-TaggedBootDevicesTest-128611843-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T07:17:46Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='197a9b0ee1db487d82542eb31e84f33e',uuid=067b423b-4ac2-4ca9-9000-44b6fb2af34e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "5984d840-e1cd-45dc-87cb-a337e73a753c", "address": "fa:16:3e:0b:78:59", "network": {"id": "83f8ee1c-bee2-4425-8792-4c822f63072c", "bridge": "br-int", "label": "tempest-device-tagging-net1-1182028938", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.181", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8f938a037b8141cf9408cbf6f5cd081d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5984d840-e1", "ovs_interfaceid": "5984d840-e1cd-45dc-87cb-a337e73a753c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec  6 02:18:18 np0005548731 nova_compute[232433]: 2025-12-06 07:18:18.312 232437 DEBUG nova.network.os_vif_util [None req-e8ca760e-f12e-4680-8874-77b32b866da9 197a9b0ee1db487d82542eb31e84f33e 8f938a037b8141cf9408cbf6f5cd081d - - default default] Converting VIF {"id": "5984d840-e1cd-45dc-87cb-a337e73a753c", "address": "fa:16:3e:0b:78:59", "network": {"id": "83f8ee1c-bee2-4425-8792-4c822f63072c", "bridge": "br-int", "label": "tempest-device-tagging-net1-1182028938", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.181", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8f938a037b8141cf9408cbf6f5cd081d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5984d840-e1", "ovs_interfaceid": "5984d840-e1cd-45dc-87cb-a337e73a753c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 02:18:18 np0005548731 nova_compute[232433]: 2025-12-06 07:18:18.312 232437 DEBUG nova.network.os_vif_util [None req-e8ca760e-f12e-4680-8874-77b32b866da9 197a9b0ee1db487d82542eb31e84f33e 8f938a037b8141cf9408cbf6f5cd081d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:0b:78:59,bridge_name='br-int',has_traffic_filtering=True,id=5984d840-e1cd-45dc-87cb-a337e73a753c,network=Network(83f8ee1c-bee2-4425-8792-4c822f63072c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5984d840-e1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 02:18:18 np0005548731 nova_compute[232433]: 2025-12-06 07:18:18.313 232437 DEBUG os_vif [None req-e8ca760e-f12e-4680-8874-77b32b866da9 197a9b0ee1db487d82542eb31e84f33e 8f938a037b8141cf9408cbf6f5cd081d - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:0b:78:59,bridge_name='br-int',has_traffic_filtering=True,id=5984d840-e1cd-45dc-87cb-a337e73a753c,network=Network(83f8ee1c-bee2-4425-8792-4c822f63072c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5984d840-e1') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec  6 02:18:18 np0005548731 nova_compute[232433]: 2025-12-06 07:18:18.314 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:18:18 np0005548731 nova_compute[232433]: 2025-12-06 07:18:18.314 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5984d840-e1, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:18:18 np0005548731 nova_compute[232433]: 2025-12-06 07:18:18.315 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:18:18 np0005548731 podman[264322]: 2025-12-06 07:18:18.316822725 +0000 UTC m=+0.045905019 container died ff47bd6677ae4ca0534768d06167f594fce13ad462ef7405d5b9b1401b9b1299 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-83f8ee1c-bee2-4425-8792-4c822f63072c, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  6 02:18:18 np0005548731 nova_compute[232433]: 2025-12-06 07:18:18.318 232437 DEBUG nova.compute.manager [req-8263ccfc-7cc7-4699-b420-7d2a77511721 req-106a068a-e6d7-4f1a-8b38-93f8a7c37260 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 067b423b-4ac2-4ca9-9000-44b6fb2af34e] Received event network-vif-unplugged-509f7f18-31bd-42cd-83f5-e9b62a4e200b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:18:18 np0005548731 nova_compute[232433]: 2025-12-06 07:18:18.319 232437 DEBUG oslo_concurrency.lockutils [req-8263ccfc-7cc7-4699-b420-7d2a77511721 req-106a068a-e6d7-4f1a-8b38-93f8a7c37260 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "067b423b-4ac2-4ca9-9000-44b6fb2af34e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:18:18 np0005548731 nova_compute[232433]: 2025-12-06 07:18:18.319 232437 DEBUG oslo_concurrency.lockutils [req-8263ccfc-7cc7-4699-b420-7d2a77511721 req-106a068a-e6d7-4f1a-8b38-93f8a7c37260 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "067b423b-4ac2-4ca9-9000-44b6fb2af34e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:18:18 np0005548731 nova_compute[232433]: 2025-12-06 07:18:18.319 232437 DEBUG oslo_concurrency.lockutils [req-8263ccfc-7cc7-4699-b420-7d2a77511721 req-106a068a-e6d7-4f1a-8b38-93f8a7c37260 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "067b423b-4ac2-4ca9-9000-44b6fb2af34e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:18:18 np0005548731 nova_compute[232433]: 2025-12-06 07:18:18.319 232437 DEBUG nova.compute.manager [req-8263ccfc-7cc7-4699-b420-7d2a77511721 req-106a068a-e6d7-4f1a-8b38-93f8a7c37260 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 067b423b-4ac2-4ca9-9000-44b6fb2af34e] No waiting events found dispatching network-vif-unplugged-509f7f18-31bd-42cd-83f5-e9b62a4e200b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 02:18:18 np0005548731 nova_compute[232433]: 2025-12-06 07:18:18.319 232437 DEBUG nova.compute.manager [req-8263ccfc-7cc7-4699-b420-7d2a77511721 req-106a068a-e6d7-4f1a-8b38-93f8a7c37260 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 067b423b-4ac2-4ca9-9000-44b6fb2af34e] Received event network-vif-unplugged-509f7f18-31bd-42cd-83f5-e9b62a4e200b for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Dec  6 02:18:18 np0005548731 nova_compute[232433]: 2025-12-06 07:18:18.320 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  6 02:18:18 np0005548731 nova_compute[232433]: 2025-12-06 07:18:18.322 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:18:18 np0005548731 nova_compute[232433]: 2025-12-06 07:18:18.323 232437 INFO os_vif [None req-e8ca760e-f12e-4680-8874-77b32b866da9 197a9b0ee1db487d82542eb31e84f33e 8f938a037b8141cf9408cbf6f5cd081d - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:0b:78:59,bridge_name='br-int',has_traffic_filtering=True,id=5984d840-e1cd-45dc-87cb-a337e73a753c,network=Network(83f8ee1c-bee2-4425-8792-4c822f63072c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5984d840-e1')#033[00m
Dec  6 02:18:18 np0005548731 nova_compute[232433]: 2025-12-06 07:18:18.324 232437 DEBUG nova.virt.libvirt.vif [None req-e8ca760e-f12e-4680-8874-77b32b866da9 197a9b0ee1db487d82542eb31e84f33e 8f938a037b8141cf9408cbf6f5cd081d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T07:16:55Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-device-tagging-server-1733639154',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-device-tagging-server-1733639154',id=75,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHdu5SJtQNPQGb6FScAscHwhPOc3vR8R0vyVdAVKvBqqjhGOukzZ8D/fDg4EduNBcT5BiiJbdANtRqZ5sQMLiRX9x5mErRanB2F2dkvm8uC5tUXlBwNqaOj1FVwBesGPnQ==',key_name='tempest-keypair-964256358',keypairs=<?>,launch_index=0,launched_at=2025-12-06T07:17:45Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='8f938a037b8141cf9408cbf6f5cd081d',ramdisk_id='',reservation_id='r-5mbntsd9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input
_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TaggedBootDevicesTest-128611843',owner_user_name='tempest-TaggedBootDevicesTest-128611843-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T07:17:46Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='197a9b0ee1db487d82542eb31e84f33e',uuid=067b423b-4ac2-4ca9-9000-44b6fb2af34e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "509f7f18-31bd-42cd-83f5-e9b62a4e200b", "address": "fa:16:3e:3a:d0:96", "network": {"id": "8856153c-fd33-4eca-913c-fbbc6c3bd29c", "bridge": "br-int", "label": "tempest-device-tagging-net2-1411782491", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.2.2.100", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8f938a037b8141cf9408cbf6f5cd081d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap509f7f18-31", "ovs_interfaceid": "509f7f18-31bd-42cd-83f5-e9b62a4e200b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec  6 02:18:18 np0005548731 nova_compute[232433]: 2025-12-06 07:18:18.324 232437 DEBUG nova.network.os_vif_util [None req-e8ca760e-f12e-4680-8874-77b32b866da9 197a9b0ee1db487d82542eb31e84f33e 8f938a037b8141cf9408cbf6f5cd081d - - default default] Converting VIF {"id": "509f7f18-31bd-42cd-83f5-e9b62a4e200b", "address": "fa:16:3e:3a:d0:96", "network": {"id": "8856153c-fd33-4eca-913c-fbbc6c3bd29c", "bridge": "br-int", "label": "tempest-device-tagging-net2-1411782491", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.2.2.100", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8f938a037b8141cf9408cbf6f5cd081d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap509f7f18-31", "ovs_interfaceid": "509f7f18-31bd-42cd-83f5-e9b62a4e200b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 02:18:18 np0005548731 nova_compute[232433]: 2025-12-06 07:18:18.325 232437 DEBUG nova.network.os_vif_util [None req-e8ca760e-f12e-4680-8874-77b32b866da9 197a9b0ee1db487d82542eb31e84f33e 8f938a037b8141cf9408cbf6f5cd081d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3a:d0:96,bridge_name='br-int',has_traffic_filtering=True,id=509f7f18-31bd-42cd-83f5-e9b62a4e200b,network=Network(8856153c-fd33-4eca-913c-fbbc6c3bd29c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap509f7f18-31') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 02:18:18 np0005548731 nova_compute[232433]: 2025-12-06 07:18:18.325 232437 DEBUG os_vif [None req-e8ca760e-f12e-4680-8874-77b32b866da9 197a9b0ee1db487d82542eb31e84f33e 8f938a037b8141cf9408cbf6f5cd081d - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:3a:d0:96,bridge_name='br-int',has_traffic_filtering=True,id=509f7f18-31bd-42cd-83f5-e9b62a4e200b,network=Network(8856153c-fd33-4eca-913c-fbbc6c3bd29c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap509f7f18-31') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec  6 02:18:18 np0005548731 nova_compute[232433]: 2025-12-06 07:18:18.326 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:18:18 np0005548731 nova_compute[232433]: 2025-12-06 07:18:18.326 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap509f7f18-31, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:18:18 np0005548731 nova_compute[232433]: 2025-12-06 07:18:18.327 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:18:18 np0005548731 nova_compute[232433]: 2025-12-06 07:18:18.328 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  6 02:18:18 np0005548731 nova_compute[232433]: 2025-12-06 07:18:18.331 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:18:18 np0005548731 nova_compute[232433]: 2025-12-06 07:18:18.332 232437 INFO os_vif [None req-e8ca760e-f12e-4680-8874-77b32b866da9 197a9b0ee1db487d82542eb31e84f33e 8f938a037b8141cf9408cbf6f5cd081d - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:3a:d0:96,bridge_name='br-int',has_traffic_filtering=True,id=509f7f18-31bd-42cd-83f5-e9b62a4e200b,network=Network(8856153c-fd33-4eca-913c-fbbc6c3bd29c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap509f7f18-31')#033[00m
Dec  6 02:18:18 np0005548731 nova_compute[232433]: 2025-12-06 07:18:18.333 232437 DEBUG nova.virt.libvirt.vif [None req-e8ca760e-f12e-4680-8874-77b32b866da9 197a9b0ee1db487d82542eb31e84f33e 8f938a037b8141cf9408cbf6f5cd081d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T07:16:55Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-device-tagging-server-1733639154',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-device-tagging-server-1733639154',id=75,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHdu5SJtQNPQGb6FScAscHwhPOc3vR8R0vyVdAVKvBqqjhGOukzZ8D/fDg4EduNBcT5BiiJbdANtRqZ5sQMLiRX9x5mErRanB2F2dkvm8uC5tUXlBwNqaOj1FVwBesGPnQ==',key_name='tempest-keypair-964256358',keypairs=<?>,launch_index=0,launched_at=2025-12-06T07:17:45Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='8f938a037b8141cf9408cbf6f5cd081d',ramdisk_id='',reservation_id='r-5mbntsd9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input
_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TaggedBootDevicesTest-128611843',owner_user_name='tempest-TaggedBootDevicesTest-128611843-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T07:17:46Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='197a9b0ee1db487d82542eb31e84f33e',uuid=067b423b-4ac2-4ca9-9000-44b6fb2af34e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "0355ba71-a598-4e22-a0b7-c5c10b61733e", "address": "fa:16:3e:a4:48:7a", "network": {"id": "8856153c-fd33-4eca-913c-fbbc6c3bd29c", "bridge": "br-int", "label": "tempest-device-tagging-net2-1411782491", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.2.2.200", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8f938a037b8141cf9408cbf6f5cd081d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0355ba71-a5", "ovs_interfaceid": "0355ba71-a598-4e22-a0b7-c5c10b61733e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec  6 02:18:18 np0005548731 nova_compute[232433]: 2025-12-06 07:18:18.333 232437 DEBUG nova.network.os_vif_util [None req-e8ca760e-f12e-4680-8874-77b32b866da9 197a9b0ee1db487d82542eb31e84f33e 8f938a037b8141cf9408cbf6f5cd081d - - default default] Converting VIF {"id": "0355ba71-a598-4e22-a0b7-c5c10b61733e", "address": "fa:16:3e:a4:48:7a", "network": {"id": "8856153c-fd33-4eca-913c-fbbc6c3bd29c", "bridge": "br-int", "label": "tempest-device-tagging-net2-1411782491", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.2.2.200", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8f938a037b8141cf9408cbf6f5cd081d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0355ba71-a5", "ovs_interfaceid": "0355ba71-a598-4e22-a0b7-c5c10b61733e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 02:18:18 np0005548731 nova_compute[232433]: 2025-12-06 07:18:18.334 232437 DEBUG nova.network.os_vif_util [None req-e8ca760e-f12e-4680-8874-77b32b866da9 197a9b0ee1db487d82542eb31e84f33e 8f938a037b8141cf9408cbf6f5cd081d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a4:48:7a,bridge_name='br-int',has_traffic_filtering=True,id=0355ba71-a598-4e22-a0b7-c5c10b61733e,network=Network(8856153c-fd33-4eca-913c-fbbc6c3bd29c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0355ba71-a5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 02:18:18 np0005548731 nova_compute[232433]: 2025-12-06 07:18:18.334 232437 DEBUG os_vif [None req-e8ca760e-f12e-4680-8874-77b32b866da9 197a9b0ee1db487d82542eb31e84f33e 8f938a037b8141cf9408cbf6f5cd081d - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a4:48:7a,bridge_name='br-int',has_traffic_filtering=True,id=0355ba71-a598-4e22-a0b7-c5c10b61733e,network=Network(8856153c-fd33-4eca-913c-fbbc6c3bd29c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0355ba71-a5') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec  6 02:18:18 np0005548731 nova_compute[232433]: 2025-12-06 07:18:18.335 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:18:18 np0005548731 nova_compute[232433]: 2025-12-06 07:18:18.335 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0355ba71-a5, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:18:18 np0005548731 nova_compute[232433]: 2025-12-06 07:18:18.336 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:18:18 np0005548731 nova_compute[232433]: 2025-12-06 07:18:18.338 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:18:18 np0005548731 nova_compute[232433]: 2025-12-06 07:18:18.340 232437 INFO os_vif [None req-e8ca760e-f12e-4680-8874-77b32b866da9 197a9b0ee1db487d82542eb31e84f33e 8f938a037b8141cf9408cbf6f5cd081d - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a4:48:7a,bridge_name='br-int',has_traffic_filtering=True,id=0355ba71-a598-4e22-a0b7-c5c10b61733e,network=Network(8856153c-fd33-4eca-913c-fbbc6c3bd29c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0355ba71-a5')#033[00m
Dec  6 02:18:18 np0005548731 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-ff47bd6677ae4ca0534768d06167f594fce13ad462ef7405d5b9b1401b9b1299-userdata-shm.mount: Deactivated successfully.
Dec  6 02:18:18 np0005548731 systemd[1]: var-lib-containers-storage-overlay-b93af762ec9ad47ceb9b136c4c3a97982506755fecfcf07c5759fc50eae26815-merged.mount: Deactivated successfully.
Dec  6 02:18:18 np0005548731 podman[264322]: 2025-12-06 07:18:18.351611892 +0000 UTC m=+0.080694186 container cleanup ff47bd6677ae4ca0534768d06167f594fce13ad462ef7405d5b9b1401b9b1299 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-83f8ee1c-bee2-4425-8792-4c822f63072c, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Dec  6 02:18:18 np0005548731 systemd[1]: libpod-conmon-ff47bd6677ae4ca0534768d06167f594fce13ad462ef7405d5b9b1401b9b1299.scope: Deactivated successfully.
Dec  6 02:18:18 np0005548731 nova_compute[232433]: 2025-12-06 07:18:18.394 232437 DEBUG nova.compute.manager [req-cefd3145-591e-4be6-b07b-9b20422982fe req-a956329f-93c9-4619-af61-10615c20dd1c 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 067b423b-4ac2-4ca9-9000-44b6fb2af34e] Received event network-vif-unplugged-0afb38e3-893f-4379-98a8-4a56e2b84f9d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:18:18 np0005548731 nova_compute[232433]: 2025-12-06 07:18:18.394 232437 DEBUG oslo_concurrency.lockutils [req-cefd3145-591e-4be6-b07b-9b20422982fe req-a956329f-93c9-4619-af61-10615c20dd1c 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "067b423b-4ac2-4ca9-9000-44b6fb2af34e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:18:18 np0005548731 nova_compute[232433]: 2025-12-06 07:18:18.394 232437 DEBUG oslo_concurrency.lockutils [req-cefd3145-591e-4be6-b07b-9b20422982fe req-a956329f-93c9-4619-af61-10615c20dd1c 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "067b423b-4ac2-4ca9-9000-44b6fb2af34e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:18:18 np0005548731 nova_compute[232433]: 2025-12-06 07:18:18.394 232437 DEBUG oslo_concurrency.lockutils [req-cefd3145-591e-4be6-b07b-9b20422982fe req-a956329f-93c9-4619-af61-10615c20dd1c 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "067b423b-4ac2-4ca9-9000-44b6fb2af34e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:18:18 np0005548731 nova_compute[232433]: 2025-12-06 07:18:18.395 232437 DEBUG nova.compute.manager [req-cefd3145-591e-4be6-b07b-9b20422982fe req-a956329f-93c9-4619-af61-10615c20dd1c 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 067b423b-4ac2-4ca9-9000-44b6fb2af34e] No waiting events found dispatching network-vif-unplugged-0afb38e3-893f-4379-98a8-4a56e2b84f9d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 02:18:18 np0005548731 nova_compute[232433]: 2025-12-06 07:18:18.395 232437 DEBUG nova.compute.manager [req-cefd3145-591e-4be6-b07b-9b20422982fe req-a956329f-93c9-4619-af61-10615c20dd1c 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 067b423b-4ac2-4ca9-9000-44b6fb2af34e] Received event network-vif-unplugged-0afb38e3-893f-4379-98a8-4a56e2b84f9d for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Dec  6 02:18:18 np0005548731 podman[264369]: 2025-12-06 07:18:18.404566642 +0000 UTC m=+0.033422195 container remove ff47bd6677ae4ca0534768d06167f594fce13ad462ef7405d5b9b1401b9b1299 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-83f8ee1c-bee2-4425-8792-4c822f63072c, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  6 02:18:18 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:18:18.409 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[1fc57f3d-2f11-48de-b0bb-67e8b8724fc6]: (4, ('Sat Dec  6 07:18:18 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-83f8ee1c-bee2-4425-8792-4c822f63072c (ff47bd6677ae4ca0534768d06167f594fce13ad462ef7405d5b9b1401b9b1299)\nff47bd6677ae4ca0534768d06167f594fce13ad462ef7405d5b9b1401b9b1299\nSat Dec  6 07:18:18 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-83f8ee1c-bee2-4425-8792-4c822f63072c (ff47bd6677ae4ca0534768d06167f594fce13ad462ef7405d5b9b1401b9b1299)\nff47bd6677ae4ca0534768d06167f594fce13ad462ef7405d5b9b1401b9b1299\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:18:18 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:18:18.410 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[abc0d2c4-88cb-4234-868e-de8c93c9e2bd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:18:18 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:18:18.411 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap83f8ee1c-b0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:18:18 np0005548731 kernel: tap83f8ee1c-b0: left promiscuous mode
Dec  6 02:18:18 np0005548731 nova_compute[232433]: 2025-12-06 07:18:18.413 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:18:18 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:18:18.417 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[4b2af7ef-1d0a-4890-b023-8f0d3a6f7de8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:18:18 np0005548731 nova_compute[232433]: 2025-12-06 07:18:18.426 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:18:18 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:18:18.435 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[0c67e611-86c7-453b-8652-07e3a7ad842a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:18:18 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:18:18.436 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[912312d3-ade1-4f54-9483-e3a0e98f7219]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:18:18 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:18:18.456 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[a408ddae-aa03-4dce-bdb1-415b6e1b5ccb]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 572296, 'reachable_time': 40633, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 264395, 'error': None, 'target': 'ovnmeta-83f8ee1c-bee2-4425-8792-4c822f63072c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:18:18 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:18:18.458 144079 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-83f8ee1c-bee2-4425-8792-4c822f63072c deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Dec  6 02:18:18 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:18:18.458 144079 DEBUG oslo.privsep.daemon [-] privsep: reply[cdd4fa7e-d038-4eee-a9e2-238d6c065933]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:18:18 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:18:18.458 143965 INFO neutron.agent.ovn.metadata.agent [-] Port 0afb38e3-893f-4379-98a8-4a56e2b84f9d in datapath 83f8ee1c-bee2-4425-8792-4c822f63072c unbound from our chassis#033[00m
Dec  6 02:18:18 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:18:18.460 143965 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 83f8ee1c-bee2-4425-8792-4c822f63072c, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec  6 02:18:18 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:18:18.460 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[5e7b25c4-10b8-4fed-b8fb-3dc5952a344e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:18:18 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:18:18.461 143965 INFO neutron.agent.ovn.metadata.agent [-] Port e27e5b98-b454-4f5a-90a9-3f7a5c8f1a91 in datapath 83f8ee1c-bee2-4425-8792-4c822f63072c unbound from our chassis#033[00m
Dec  6 02:18:18 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:18:18.462 143965 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 83f8ee1c-bee2-4425-8792-4c822f63072c, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec  6 02:18:18 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:18:18.463 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[795ff57d-fe6a-441e-800a-85df31edb2c6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:18:18 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:18:18.463 143965 INFO neutron.agent.ovn.metadata.agent [-] Port 5984d840-e1cd-45dc-87cb-a337e73a753c in datapath 83f8ee1c-bee2-4425-8792-4c822f63072c unbound from our chassis#033[00m
Dec  6 02:18:18 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:18:18.464 143965 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 83f8ee1c-bee2-4425-8792-4c822f63072c, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec  6 02:18:18 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:18:18.464 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[43797902-3edf-483e-9c75-87b03bf1a020]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:18:18 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:18:18.465 143965 INFO neutron.agent.ovn.metadata.agent [-] Port 509f7f18-31bd-42cd-83f5-e9b62a4e200b in datapath 8856153c-fd33-4eca-913c-fbbc6c3bd29c unbound from our chassis#033[00m
Dec  6 02:18:18 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:18:18.466 143965 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 8856153c-fd33-4eca-913c-fbbc6c3bd29c, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec  6 02:18:18 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:18:18.467 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[7d6ea173-86a5-4ae2-b5ee-bb35c699e40a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:18:18 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:18:18.467 143965 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-8856153c-fd33-4eca-913c-fbbc6c3bd29c namespace which is not needed anymore#033[00m
Dec  6 02:18:18 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:18:18 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:18:18 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:18:18.609 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:18:18 np0005548731 nova_compute[232433]: 2025-12-06 07:18:18.781 232437 DEBUG nova.network.neutron [req-43acaf54-ac81-467e-b039-bb35cf106912 req-ed797b66-e9d3-4278-bd3c-9d0e7452c168 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: c8403a0c-2fe6-48fe-91af-ec5aca71e12d] Updated VIF entry in instance network info cache for port a599f1a0-5413-4dc9-9ae4-d7ba512d761c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  6 02:18:18 np0005548731 nova_compute[232433]: 2025-12-06 07:18:18.782 232437 DEBUG nova.network.neutron [req-43acaf54-ac81-467e-b039-bb35cf106912 req-ed797b66-e9d3-4278-bd3c-9d0e7452c168 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: c8403a0c-2fe6-48fe-91af-ec5aca71e12d] Updating instance_info_cache with network_info: [{"id": "a599f1a0-5413-4dc9-9ae4-d7ba512d761c", "address": "fa:16:3e:9b:0b:0a", "network": {"id": "4d599401-3772-4e38-8cd2-d774d370af64", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-809610913-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.185", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "929e2be1488d4b80b7ad8946093a6abe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa599f1a0-54", "ovs_interfaceid": "a599f1a0-5413-4dc9-9ae4-d7ba512d761c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 02:18:18 np0005548731 nova_compute[232433]: 2025-12-06 07:18:18.818 232437 DEBUG oslo_concurrency.lockutils [req-43acaf54-ac81-467e-b039-bb35cf106912 req-ed797b66-e9d3-4278-bd3c-9d0e7452c168 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-c8403a0c-2fe6-48fe-91af-ec5aca71e12d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 02:18:18 np0005548731 systemd[1]: run-netns-ovnmeta\x2d83f8ee1c\x2dbee2\x2d4425\x2d8792\x2d4c822f63072c.mount: Deactivated successfully.
Dec  6 02:18:19 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:18:19 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:18:19 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:18:18.999 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:18:19 np0005548731 nova_compute[232433]: 2025-12-06 07:18:19.909 232437 DEBUG nova.compute.manager [req-69a50ea4-4200-446b-8689-82039a58cfcd req-1cfe8c0d-a9c3-40d9-ad2a-015a4c554036 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 067b423b-4ac2-4ca9-9000-44b6fb2af34e] Received event network-vif-unplugged-0f3261df-8a99-4884-8db8-4ab5250dafc3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:18:19 np0005548731 nova_compute[232433]: 2025-12-06 07:18:19.909 232437 DEBUG oslo_concurrency.lockutils [req-69a50ea4-4200-446b-8689-82039a58cfcd req-1cfe8c0d-a9c3-40d9-ad2a-015a4c554036 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "067b423b-4ac2-4ca9-9000-44b6fb2af34e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:18:19 np0005548731 nova_compute[232433]: 2025-12-06 07:18:19.909 232437 DEBUG oslo_concurrency.lockutils [req-69a50ea4-4200-446b-8689-82039a58cfcd req-1cfe8c0d-a9c3-40d9-ad2a-015a4c554036 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "067b423b-4ac2-4ca9-9000-44b6fb2af34e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:18:19 np0005548731 nova_compute[232433]: 2025-12-06 07:18:19.910 232437 DEBUG oslo_concurrency.lockutils [req-69a50ea4-4200-446b-8689-82039a58cfcd req-1cfe8c0d-a9c3-40d9-ad2a-015a4c554036 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "067b423b-4ac2-4ca9-9000-44b6fb2af34e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:18:19 np0005548731 nova_compute[232433]: 2025-12-06 07:18:19.910 232437 DEBUG nova.compute.manager [req-69a50ea4-4200-446b-8689-82039a58cfcd req-1cfe8c0d-a9c3-40d9-ad2a-015a4c554036 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 067b423b-4ac2-4ca9-9000-44b6fb2af34e] No waiting events found dispatching network-vif-unplugged-0f3261df-8a99-4884-8db8-4ab5250dafc3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 02:18:19 np0005548731 nova_compute[232433]: 2025-12-06 07:18:19.910 232437 DEBUG nova.compute.manager [req-69a50ea4-4200-446b-8689-82039a58cfcd req-1cfe8c0d-a9c3-40d9-ad2a-015a4c554036 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 067b423b-4ac2-4ca9-9000-44b6fb2af34e] Received event network-vif-unplugged-0f3261df-8a99-4884-8db8-4ab5250dafc3 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Dec  6 02:18:19 np0005548731 nova_compute[232433]: 2025-12-06 07:18:19.910 232437 DEBUG nova.compute.manager [req-69a50ea4-4200-446b-8689-82039a58cfcd req-1cfe8c0d-a9c3-40d9-ad2a-015a4c554036 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 067b423b-4ac2-4ca9-9000-44b6fb2af34e] Received event network-vif-plugged-0f3261df-8a99-4884-8db8-4ab5250dafc3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:18:19 np0005548731 nova_compute[232433]: 2025-12-06 07:18:19.911 232437 DEBUG oslo_concurrency.lockutils [req-69a50ea4-4200-446b-8689-82039a58cfcd req-1cfe8c0d-a9c3-40d9-ad2a-015a4c554036 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "067b423b-4ac2-4ca9-9000-44b6fb2af34e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:18:19 np0005548731 nova_compute[232433]: 2025-12-06 07:18:19.911 232437 DEBUG oslo_concurrency.lockutils [req-69a50ea4-4200-446b-8689-82039a58cfcd req-1cfe8c0d-a9c3-40d9-ad2a-015a4c554036 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "067b423b-4ac2-4ca9-9000-44b6fb2af34e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:18:19 np0005548731 nova_compute[232433]: 2025-12-06 07:18:19.911 232437 DEBUG oslo_concurrency.lockutils [req-69a50ea4-4200-446b-8689-82039a58cfcd req-1cfe8c0d-a9c3-40d9-ad2a-015a4c554036 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "067b423b-4ac2-4ca9-9000-44b6fb2af34e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:18:19 np0005548731 nova_compute[232433]: 2025-12-06 07:18:19.911 232437 DEBUG nova.compute.manager [req-69a50ea4-4200-446b-8689-82039a58cfcd req-1cfe8c0d-a9c3-40d9-ad2a-015a4c554036 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 067b423b-4ac2-4ca9-9000-44b6fb2af34e] No waiting events found dispatching network-vif-plugged-0f3261df-8a99-4884-8db8-4ab5250dafc3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 02:18:19 np0005548731 nova_compute[232433]: 2025-12-06 07:18:19.912 232437 WARNING nova.compute.manager [req-69a50ea4-4200-446b-8689-82039a58cfcd req-1cfe8c0d-a9c3-40d9-ad2a-015a4c554036 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 067b423b-4ac2-4ca9-9000-44b6fb2af34e] Received unexpected event network-vif-plugged-0f3261df-8a99-4884-8db8-4ab5250dafc3 for instance with vm_state active and task_state deleting.#033[00m
Dec  6 02:18:20 np0005548731 neutron-haproxy-ovnmeta-8856153c-fd33-4eca-913c-fbbc6c3bd29c[263816]: [NOTICE]   (263820) : haproxy version is 2.8.14-c23fe91
Dec  6 02:18:20 np0005548731 neutron-haproxy-ovnmeta-8856153c-fd33-4eca-913c-fbbc6c3bd29c[263816]: [NOTICE]   (263820) : path to executable is /usr/sbin/haproxy
Dec  6 02:18:20 np0005548731 neutron-haproxy-ovnmeta-8856153c-fd33-4eca-913c-fbbc6c3bd29c[263816]: [WARNING]  (263820) : Exiting Master process...
Dec  6 02:18:20 np0005548731 neutron-haproxy-ovnmeta-8856153c-fd33-4eca-913c-fbbc6c3bd29c[263816]: [WARNING]  (263820) : Exiting Master process...
Dec  6 02:18:20 np0005548731 neutron-haproxy-ovnmeta-8856153c-fd33-4eca-913c-fbbc6c3bd29c[263816]: [ALERT]    (263820) : Current worker (263822) exited with code 143 (Terminated)
Dec  6 02:18:20 np0005548731 neutron-haproxy-ovnmeta-8856153c-fd33-4eca-913c-fbbc6c3bd29c[263816]: [WARNING]  (263820) : All workers exited. Exiting... (0)
Dec  6 02:18:20 np0005548731 systemd[1]: libpod-13ee545b537ab401986c39bd7ee02219eb0b952114d767e801b4b4b3d6e20606.scope: Deactivated successfully.
Dec  6 02:18:20 np0005548731 podman[264413]: 2025-12-06 07:18:20.140329498 +0000 UTC m=+1.599141451 container died 13ee545b537ab401986c39bd7ee02219eb0b952114d767e801b4b4b3d6e20606 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8856153c-fd33-4eca-913c-fbbc6c3bd29c, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  6 02:18:20 np0005548731 nova_compute[232433]: 2025-12-06 07:18:20.431 232437 DEBUG nova.compute.manager [req-7487d773-b4f3-4e6a-977c-7f288d9f11aa req-1fe3fe08-2690-4e37-a32b-585ceb068bdf 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 067b423b-4ac2-4ca9-9000-44b6fb2af34e] Received event network-vif-plugged-509f7f18-31bd-42cd-83f5-e9b62a4e200b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:18:20 np0005548731 nova_compute[232433]: 2025-12-06 07:18:20.432 232437 DEBUG oslo_concurrency.lockutils [req-7487d773-b4f3-4e6a-977c-7f288d9f11aa req-1fe3fe08-2690-4e37-a32b-585ceb068bdf 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "067b423b-4ac2-4ca9-9000-44b6fb2af34e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:18:20 np0005548731 nova_compute[232433]: 2025-12-06 07:18:20.432 232437 DEBUG oslo_concurrency.lockutils [req-7487d773-b4f3-4e6a-977c-7f288d9f11aa req-1fe3fe08-2690-4e37-a32b-585ceb068bdf 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "067b423b-4ac2-4ca9-9000-44b6fb2af34e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:18:20 np0005548731 nova_compute[232433]: 2025-12-06 07:18:20.432 232437 DEBUG oslo_concurrency.lockutils [req-7487d773-b4f3-4e6a-977c-7f288d9f11aa req-1fe3fe08-2690-4e37-a32b-585ceb068bdf 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "067b423b-4ac2-4ca9-9000-44b6fb2af34e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:18:20 np0005548731 nova_compute[232433]: 2025-12-06 07:18:20.433 232437 DEBUG nova.compute.manager [req-7487d773-b4f3-4e6a-977c-7f288d9f11aa req-1fe3fe08-2690-4e37-a32b-585ceb068bdf 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 067b423b-4ac2-4ca9-9000-44b6fb2af34e] No waiting events found dispatching network-vif-plugged-509f7f18-31bd-42cd-83f5-e9b62a4e200b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 02:18:20 np0005548731 nova_compute[232433]: 2025-12-06 07:18:20.433 232437 WARNING nova.compute.manager [req-7487d773-b4f3-4e6a-977c-7f288d9f11aa req-1fe3fe08-2690-4e37-a32b-585ceb068bdf 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 067b423b-4ac2-4ca9-9000-44b6fb2af34e] Received unexpected event network-vif-plugged-509f7f18-31bd-42cd-83f5-e9b62a4e200b for instance with vm_state active and task_state deleting.#033[00m
Dec  6 02:18:20 np0005548731 nova_compute[232433]: 2025-12-06 07:18:20.455 232437 DEBUG nova.compute.manager [req-eed56f1c-9b59-4c99-a8cf-d84106d6d056 req-f731d92f-64f2-4ae0-9fe9-b6d84533fbab 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 067b423b-4ac2-4ca9-9000-44b6fb2af34e] Received event network-vif-plugged-5408643b-7986-461d-867d-9f6545eeabb3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:18:20 np0005548731 nova_compute[232433]: 2025-12-06 07:18:20.455 232437 DEBUG oslo_concurrency.lockutils [req-eed56f1c-9b59-4c99-a8cf-d84106d6d056 req-f731d92f-64f2-4ae0-9fe9-b6d84533fbab 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "067b423b-4ac2-4ca9-9000-44b6fb2af34e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:18:20 np0005548731 nova_compute[232433]: 2025-12-06 07:18:20.456 232437 DEBUG oslo_concurrency.lockutils [req-eed56f1c-9b59-4c99-a8cf-d84106d6d056 req-f731d92f-64f2-4ae0-9fe9-b6d84533fbab 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "067b423b-4ac2-4ca9-9000-44b6fb2af34e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:18:20 np0005548731 nova_compute[232433]: 2025-12-06 07:18:20.456 232437 DEBUG oslo_concurrency.lockutils [req-eed56f1c-9b59-4c99-a8cf-d84106d6d056 req-f731d92f-64f2-4ae0-9fe9-b6d84533fbab 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "067b423b-4ac2-4ca9-9000-44b6fb2af34e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:18:20 np0005548731 nova_compute[232433]: 2025-12-06 07:18:20.456 232437 DEBUG nova.compute.manager [req-eed56f1c-9b59-4c99-a8cf-d84106d6d056 req-f731d92f-64f2-4ae0-9fe9-b6d84533fbab 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 067b423b-4ac2-4ca9-9000-44b6fb2af34e] No waiting events found dispatching network-vif-plugged-5408643b-7986-461d-867d-9f6545eeabb3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 02:18:20 np0005548731 nova_compute[232433]: 2025-12-06 07:18:20.456 232437 WARNING nova.compute.manager [req-eed56f1c-9b59-4c99-a8cf-d84106d6d056 req-f731d92f-64f2-4ae0-9fe9-b6d84533fbab 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 067b423b-4ac2-4ca9-9000-44b6fb2af34e] Received unexpected event network-vif-plugged-5408643b-7986-461d-867d-9f6545eeabb3 for instance with vm_state active and task_state deleting.#033[00m
Dec  6 02:18:20 np0005548731 nova_compute[232433]: 2025-12-06 07:18:20.456 232437 DEBUG nova.compute.manager [req-eed56f1c-9b59-4c99-a8cf-d84106d6d056 req-f731d92f-64f2-4ae0-9fe9-b6d84533fbab 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 067b423b-4ac2-4ca9-9000-44b6fb2af34e] Received event network-vif-unplugged-e27e5b98-b454-4f5a-90a9-3f7a5c8f1a91 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:18:20 np0005548731 nova_compute[232433]: 2025-12-06 07:18:20.457 232437 DEBUG oslo_concurrency.lockutils [req-eed56f1c-9b59-4c99-a8cf-d84106d6d056 req-f731d92f-64f2-4ae0-9fe9-b6d84533fbab 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "067b423b-4ac2-4ca9-9000-44b6fb2af34e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:18:20 np0005548731 nova_compute[232433]: 2025-12-06 07:18:20.457 232437 DEBUG oslo_concurrency.lockutils [req-eed56f1c-9b59-4c99-a8cf-d84106d6d056 req-f731d92f-64f2-4ae0-9fe9-b6d84533fbab 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "067b423b-4ac2-4ca9-9000-44b6fb2af34e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:18:20 np0005548731 nova_compute[232433]: 2025-12-06 07:18:20.457 232437 DEBUG oslo_concurrency.lockutils [req-eed56f1c-9b59-4c99-a8cf-d84106d6d056 req-f731d92f-64f2-4ae0-9fe9-b6d84533fbab 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "067b423b-4ac2-4ca9-9000-44b6fb2af34e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:18:20 np0005548731 nova_compute[232433]: 2025-12-06 07:18:20.457 232437 DEBUG nova.compute.manager [req-eed56f1c-9b59-4c99-a8cf-d84106d6d056 req-f731d92f-64f2-4ae0-9fe9-b6d84533fbab 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 067b423b-4ac2-4ca9-9000-44b6fb2af34e] No waiting events found dispatching network-vif-unplugged-e27e5b98-b454-4f5a-90a9-3f7a5c8f1a91 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 02:18:20 np0005548731 nova_compute[232433]: 2025-12-06 07:18:20.457 232437 DEBUG nova.compute.manager [req-eed56f1c-9b59-4c99-a8cf-d84106d6d056 req-f731d92f-64f2-4ae0-9fe9-b6d84533fbab 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 067b423b-4ac2-4ca9-9000-44b6fb2af34e] Received event network-vif-unplugged-e27e5b98-b454-4f5a-90a9-3f7a5c8f1a91 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Dec  6 02:18:20 np0005548731 nova_compute[232433]: 2025-12-06 07:18:20.458 232437 DEBUG nova.compute.manager [req-eed56f1c-9b59-4c99-a8cf-d84106d6d056 req-f731d92f-64f2-4ae0-9fe9-b6d84533fbab 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 067b423b-4ac2-4ca9-9000-44b6fb2af34e] Received event network-vif-plugged-e27e5b98-b454-4f5a-90a9-3f7a5c8f1a91 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:18:20 np0005548731 nova_compute[232433]: 2025-12-06 07:18:20.458 232437 DEBUG oslo_concurrency.lockutils [req-eed56f1c-9b59-4c99-a8cf-d84106d6d056 req-f731d92f-64f2-4ae0-9fe9-b6d84533fbab 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "067b423b-4ac2-4ca9-9000-44b6fb2af34e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:18:20 np0005548731 nova_compute[232433]: 2025-12-06 07:18:20.458 232437 DEBUG oslo_concurrency.lockutils [req-eed56f1c-9b59-4c99-a8cf-d84106d6d056 req-f731d92f-64f2-4ae0-9fe9-b6d84533fbab 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "067b423b-4ac2-4ca9-9000-44b6fb2af34e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:18:20 np0005548731 nova_compute[232433]: 2025-12-06 07:18:20.459 232437 DEBUG oslo_concurrency.lockutils [req-eed56f1c-9b59-4c99-a8cf-d84106d6d056 req-f731d92f-64f2-4ae0-9fe9-b6d84533fbab 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "067b423b-4ac2-4ca9-9000-44b6fb2af34e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:18:20 np0005548731 nova_compute[232433]: 2025-12-06 07:18:20.459 232437 DEBUG nova.compute.manager [req-eed56f1c-9b59-4c99-a8cf-d84106d6d056 req-f731d92f-64f2-4ae0-9fe9-b6d84533fbab 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 067b423b-4ac2-4ca9-9000-44b6fb2af34e] No waiting events found dispatching network-vif-plugged-e27e5b98-b454-4f5a-90a9-3f7a5c8f1a91 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 02:18:20 np0005548731 nova_compute[232433]: 2025-12-06 07:18:20.459 232437 WARNING nova.compute.manager [req-eed56f1c-9b59-4c99-a8cf-d84106d6d056 req-f731d92f-64f2-4ae0-9fe9-b6d84533fbab 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 067b423b-4ac2-4ca9-9000-44b6fb2af34e] Received unexpected event network-vif-plugged-e27e5b98-b454-4f5a-90a9-3f7a5c8f1a91 for instance with vm_state active and task_state deleting.#033[00m
Dec  6 02:18:20 np0005548731 nova_compute[232433]: 2025-12-06 07:18:20.460 232437 DEBUG nova.compute.manager [req-eed56f1c-9b59-4c99-a8cf-d84106d6d056 req-f731d92f-64f2-4ae0-9fe9-b6d84533fbab 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 067b423b-4ac2-4ca9-9000-44b6fb2af34e] Received event network-vif-unplugged-0355ba71-a598-4e22-a0b7-c5c10b61733e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:18:20 np0005548731 nova_compute[232433]: 2025-12-06 07:18:20.460 232437 DEBUG oslo_concurrency.lockutils [req-eed56f1c-9b59-4c99-a8cf-d84106d6d056 req-f731d92f-64f2-4ae0-9fe9-b6d84533fbab 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "067b423b-4ac2-4ca9-9000-44b6fb2af34e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:18:20 np0005548731 nova_compute[232433]: 2025-12-06 07:18:20.460 232437 DEBUG oslo_concurrency.lockutils [req-eed56f1c-9b59-4c99-a8cf-d84106d6d056 req-f731d92f-64f2-4ae0-9fe9-b6d84533fbab 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "067b423b-4ac2-4ca9-9000-44b6fb2af34e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:18:20 np0005548731 nova_compute[232433]: 2025-12-06 07:18:20.460 232437 DEBUG oslo_concurrency.lockutils [req-eed56f1c-9b59-4c99-a8cf-d84106d6d056 req-f731d92f-64f2-4ae0-9fe9-b6d84533fbab 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "067b423b-4ac2-4ca9-9000-44b6fb2af34e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:18:20 np0005548731 nova_compute[232433]: 2025-12-06 07:18:20.460 232437 DEBUG nova.compute.manager [req-eed56f1c-9b59-4c99-a8cf-d84106d6d056 req-f731d92f-64f2-4ae0-9fe9-b6d84533fbab 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 067b423b-4ac2-4ca9-9000-44b6fb2af34e] No waiting events found dispatching network-vif-unplugged-0355ba71-a598-4e22-a0b7-c5c10b61733e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 02:18:20 np0005548731 nova_compute[232433]: 2025-12-06 07:18:20.461 232437 DEBUG nova.compute.manager [req-eed56f1c-9b59-4c99-a8cf-d84106d6d056 req-f731d92f-64f2-4ae0-9fe9-b6d84533fbab 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 067b423b-4ac2-4ca9-9000-44b6fb2af34e] Received event network-vif-unplugged-0355ba71-a598-4e22-a0b7-c5c10b61733e for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Dec  6 02:18:20 np0005548731 nova_compute[232433]: 2025-12-06 07:18:20.461 232437 DEBUG nova.compute.manager [req-eed56f1c-9b59-4c99-a8cf-d84106d6d056 req-f731d92f-64f2-4ae0-9fe9-b6d84533fbab 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 067b423b-4ac2-4ca9-9000-44b6fb2af34e] Received event network-vif-plugged-0355ba71-a598-4e22-a0b7-c5c10b61733e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:18:20 np0005548731 nova_compute[232433]: 2025-12-06 07:18:20.461 232437 DEBUG oslo_concurrency.lockutils [req-eed56f1c-9b59-4c99-a8cf-d84106d6d056 req-f731d92f-64f2-4ae0-9fe9-b6d84533fbab 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "067b423b-4ac2-4ca9-9000-44b6fb2af34e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:18:20 np0005548731 nova_compute[232433]: 2025-12-06 07:18:20.461 232437 DEBUG oslo_concurrency.lockutils [req-eed56f1c-9b59-4c99-a8cf-d84106d6d056 req-f731d92f-64f2-4ae0-9fe9-b6d84533fbab 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "067b423b-4ac2-4ca9-9000-44b6fb2af34e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:18:20 np0005548731 nova_compute[232433]: 2025-12-06 07:18:20.461 232437 DEBUG oslo_concurrency.lockutils [req-eed56f1c-9b59-4c99-a8cf-d84106d6d056 req-f731d92f-64f2-4ae0-9fe9-b6d84533fbab 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "067b423b-4ac2-4ca9-9000-44b6fb2af34e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:18:20 np0005548731 nova_compute[232433]: 2025-12-06 07:18:20.462 232437 DEBUG nova.compute.manager [req-eed56f1c-9b59-4c99-a8cf-d84106d6d056 req-f731d92f-64f2-4ae0-9fe9-b6d84533fbab 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 067b423b-4ac2-4ca9-9000-44b6fb2af34e] No waiting events found dispatching network-vif-plugged-0355ba71-a598-4e22-a0b7-c5c10b61733e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 02:18:20 np0005548731 nova_compute[232433]: 2025-12-06 07:18:20.462 232437 WARNING nova.compute.manager [req-eed56f1c-9b59-4c99-a8cf-d84106d6d056 req-f731d92f-64f2-4ae0-9fe9-b6d84533fbab 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 067b423b-4ac2-4ca9-9000-44b6fb2af34e] Received unexpected event network-vif-plugged-0355ba71-a598-4e22-a0b7-c5c10b61733e for instance with vm_state active and task_state deleting.#033[00m
Dec  6 02:18:20 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:18:20 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 02:18:20 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:18:20.611 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 02:18:20 np0005548731 nova_compute[232433]: 2025-12-06 07:18:20.629 232437 DEBUG nova.compute.manager [req-9585bf99-e8bd-43e3-8291-f7ff45deb7d3 req-d42e9e2d-9a96-40a7-89e2-36ee8df073e1 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 067b423b-4ac2-4ca9-9000-44b6fb2af34e] Received event network-vif-plugged-0afb38e3-893f-4379-98a8-4a56e2b84f9d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:18:20 np0005548731 nova_compute[232433]: 2025-12-06 07:18:20.630 232437 DEBUG oslo_concurrency.lockutils [req-9585bf99-e8bd-43e3-8291-f7ff45deb7d3 req-d42e9e2d-9a96-40a7-89e2-36ee8df073e1 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "067b423b-4ac2-4ca9-9000-44b6fb2af34e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:18:20 np0005548731 nova_compute[232433]: 2025-12-06 07:18:20.630 232437 DEBUG oslo_concurrency.lockutils [req-9585bf99-e8bd-43e3-8291-f7ff45deb7d3 req-d42e9e2d-9a96-40a7-89e2-36ee8df073e1 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "067b423b-4ac2-4ca9-9000-44b6fb2af34e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:18:20 np0005548731 nova_compute[232433]: 2025-12-06 07:18:20.631 232437 DEBUG oslo_concurrency.lockutils [req-9585bf99-e8bd-43e3-8291-f7ff45deb7d3 req-d42e9e2d-9a96-40a7-89e2-36ee8df073e1 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "067b423b-4ac2-4ca9-9000-44b6fb2af34e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:18:20 np0005548731 nova_compute[232433]: 2025-12-06 07:18:20.631 232437 DEBUG nova.compute.manager [req-9585bf99-e8bd-43e3-8291-f7ff45deb7d3 req-d42e9e2d-9a96-40a7-89e2-36ee8df073e1 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 067b423b-4ac2-4ca9-9000-44b6fb2af34e] No waiting events found dispatching network-vif-plugged-0afb38e3-893f-4379-98a8-4a56e2b84f9d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 02:18:20 np0005548731 nova_compute[232433]: 2025-12-06 07:18:20.632 232437 WARNING nova.compute.manager [req-9585bf99-e8bd-43e3-8291-f7ff45deb7d3 req-d42e9e2d-9a96-40a7-89e2-36ee8df073e1 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 067b423b-4ac2-4ca9-9000-44b6fb2af34e] Received unexpected event network-vif-plugged-0afb38e3-893f-4379-98a8-4a56e2b84f9d for instance with vm_state active and task_state deleting.#033[00m
Dec  6 02:18:20 np0005548731 nova_compute[232433]: 2025-12-06 07:18:20.632 232437 DEBUG nova.compute.manager [req-9585bf99-e8bd-43e3-8291-f7ff45deb7d3 req-d42e9e2d-9a96-40a7-89e2-36ee8df073e1 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 067b423b-4ac2-4ca9-9000-44b6fb2af34e] Received event network-vif-unplugged-5984d840-e1cd-45dc-87cb-a337e73a753c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:18:20 np0005548731 nova_compute[232433]: 2025-12-06 07:18:20.632 232437 DEBUG oslo_concurrency.lockutils [req-9585bf99-e8bd-43e3-8291-f7ff45deb7d3 req-d42e9e2d-9a96-40a7-89e2-36ee8df073e1 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "067b423b-4ac2-4ca9-9000-44b6fb2af34e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:18:20 np0005548731 nova_compute[232433]: 2025-12-06 07:18:20.633 232437 DEBUG oslo_concurrency.lockutils [req-9585bf99-e8bd-43e3-8291-f7ff45deb7d3 req-d42e9e2d-9a96-40a7-89e2-36ee8df073e1 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "067b423b-4ac2-4ca9-9000-44b6fb2af34e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:18:20 np0005548731 nova_compute[232433]: 2025-12-06 07:18:20.633 232437 DEBUG oslo_concurrency.lockutils [req-9585bf99-e8bd-43e3-8291-f7ff45deb7d3 req-d42e9e2d-9a96-40a7-89e2-36ee8df073e1 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "067b423b-4ac2-4ca9-9000-44b6fb2af34e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:18:20 np0005548731 nova_compute[232433]: 2025-12-06 07:18:20.634 232437 DEBUG nova.compute.manager [req-9585bf99-e8bd-43e3-8291-f7ff45deb7d3 req-d42e9e2d-9a96-40a7-89e2-36ee8df073e1 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 067b423b-4ac2-4ca9-9000-44b6fb2af34e] No waiting events found dispatching network-vif-unplugged-5984d840-e1cd-45dc-87cb-a337e73a753c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 02:18:20 np0005548731 nova_compute[232433]: 2025-12-06 07:18:20.634 232437 DEBUG nova.compute.manager [req-9585bf99-e8bd-43e3-8291-f7ff45deb7d3 req-d42e9e2d-9a96-40a7-89e2-36ee8df073e1 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 067b423b-4ac2-4ca9-9000-44b6fb2af34e] Received event network-vif-unplugged-5984d840-e1cd-45dc-87cb-a337e73a753c for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Dec  6 02:18:20 np0005548731 nova_compute[232433]: 2025-12-06 07:18:20.635 232437 DEBUG nova.compute.manager [req-9585bf99-e8bd-43e3-8291-f7ff45deb7d3 req-d42e9e2d-9a96-40a7-89e2-36ee8df073e1 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 067b423b-4ac2-4ca9-9000-44b6fb2af34e] Received event network-vif-plugged-5984d840-e1cd-45dc-87cb-a337e73a753c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:18:20 np0005548731 nova_compute[232433]: 2025-12-06 07:18:20.635 232437 DEBUG oslo_concurrency.lockutils [req-9585bf99-e8bd-43e3-8291-f7ff45deb7d3 req-d42e9e2d-9a96-40a7-89e2-36ee8df073e1 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "067b423b-4ac2-4ca9-9000-44b6fb2af34e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:18:20 np0005548731 nova_compute[232433]: 2025-12-06 07:18:20.636 232437 DEBUG oslo_concurrency.lockutils [req-9585bf99-e8bd-43e3-8291-f7ff45deb7d3 req-d42e9e2d-9a96-40a7-89e2-36ee8df073e1 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "067b423b-4ac2-4ca9-9000-44b6fb2af34e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:18:20 np0005548731 nova_compute[232433]: 2025-12-06 07:18:20.636 232437 DEBUG oslo_concurrency.lockutils [req-9585bf99-e8bd-43e3-8291-f7ff45deb7d3 req-d42e9e2d-9a96-40a7-89e2-36ee8df073e1 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "067b423b-4ac2-4ca9-9000-44b6fb2af34e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:18:20 np0005548731 nova_compute[232433]: 2025-12-06 07:18:20.637 232437 DEBUG nova.compute.manager [req-9585bf99-e8bd-43e3-8291-f7ff45deb7d3 req-d42e9e2d-9a96-40a7-89e2-36ee8df073e1 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 067b423b-4ac2-4ca9-9000-44b6fb2af34e] No waiting events found dispatching network-vif-plugged-5984d840-e1cd-45dc-87cb-a337e73a753c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 02:18:20 np0005548731 nova_compute[232433]: 2025-12-06 07:18:20.637 232437 WARNING nova.compute.manager [req-9585bf99-e8bd-43e3-8291-f7ff45deb7d3 req-d42e9e2d-9a96-40a7-89e2-36ee8df073e1 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 067b423b-4ac2-4ca9-9000-44b6fb2af34e] Received unexpected event network-vif-plugged-5984d840-e1cd-45dc-87cb-a337e73a753c for instance with vm_state active and task_state deleting.#033[00m
Dec  6 02:18:21 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:18:21 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:18:21 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:18:21.002 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:18:21 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e253 e253: 3 total, 3 up, 3 in
Dec  6 02:18:21 np0005548731 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-13ee545b537ab401986c39bd7ee02219eb0b952114d767e801b4b4b3d6e20606-userdata-shm.mount: Deactivated successfully.
Dec  6 02:18:21 np0005548731 systemd[1]: var-lib-containers-storage-overlay-871924b6cb342e86e814aae8fb0d25c4f6afec51b9e6f508da1e9ffead9c3195-merged.mount: Deactivated successfully.
Dec  6 02:18:21 np0005548731 podman[264413]: 2025-12-06 07:18:21.109125377 +0000 UTC m=+2.567937330 container cleanup 13ee545b537ab401986c39bd7ee02219eb0b952114d767e801b4b4b3d6e20606 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8856153c-fd33-4eca-913c-fbbc6c3bd29c, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec  6 02:18:21 np0005548731 systemd[1]: libpod-conmon-13ee545b537ab401986c39bd7ee02219eb0b952114d767e801b4b4b3d6e20606.scope: Deactivated successfully.
Dec  6 02:18:21 np0005548731 nova_compute[232433]: 2025-12-06 07:18:21.203 232437 INFO nova.virt.libvirt.driver [None req-e8ca760e-f12e-4680-8874-77b32b866da9 197a9b0ee1db487d82542eb31e84f33e 8f938a037b8141cf9408cbf6f5cd081d - - default default] [instance: 067b423b-4ac2-4ca9-9000-44b6fb2af34e] Deleting instance files /var/lib/nova/instances/067b423b-4ac2-4ca9-9000-44b6fb2af34e_del#033[00m
Dec  6 02:18:21 np0005548731 nova_compute[232433]: 2025-12-06 07:18:21.204 232437 INFO nova.virt.libvirt.driver [None req-e8ca760e-f12e-4680-8874-77b32b866da9 197a9b0ee1db487d82542eb31e84f33e 8f938a037b8141cf9408cbf6f5cd081d - - default default] [instance: 067b423b-4ac2-4ca9-9000-44b6fb2af34e] Deletion of /var/lib/nova/instances/067b423b-4ac2-4ca9-9000-44b6fb2af34e_del complete#033[00m
Dec  6 02:18:21 np0005548731 nova_compute[232433]: 2025-12-06 07:18:21.255 232437 INFO nova.compute.manager [None req-e8ca760e-f12e-4680-8874-77b32b866da9 197a9b0ee1db487d82542eb31e84f33e 8f938a037b8141cf9408cbf6f5cd081d - - default default] [instance: 067b423b-4ac2-4ca9-9000-44b6fb2af34e] Took 3.57 seconds to destroy the instance on the hypervisor.#033[00m
Dec  6 02:18:21 np0005548731 nova_compute[232433]: 2025-12-06 07:18:21.255 232437 DEBUG oslo.service.loopingcall [None req-e8ca760e-f12e-4680-8874-77b32b866da9 197a9b0ee1db487d82542eb31e84f33e 8f938a037b8141cf9408cbf6f5cd081d - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Dec  6 02:18:21 np0005548731 nova_compute[232433]: 2025-12-06 07:18:21.256 232437 DEBUG nova.compute.manager [-] [instance: 067b423b-4ac2-4ca9-9000-44b6fb2af34e] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Dec  6 02:18:21 np0005548731 nova_compute[232433]: 2025-12-06 07:18:21.256 232437 DEBUG nova.network.neutron [-] [instance: 067b423b-4ac2-4ca9-9000-44b6fb2af34e] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Dec  6 02:18:21 np0005548731 podman[264444]: 2025-12-06 07:18:21.451127316 +0000 UTC m=+0.320712300 container remove 13ee545b537ab401986c39bd7ee02219eb0b952114d767e801b4b4b3d6e20606 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8856153c-fd33-4eca-913c-fbbc6c3bd29c, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  6 02:18:21 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:18:21.456 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[88486613-6b33-45e1-87a2-b1f092486e61]: (4, ('Sat Dec  6 07:18:18 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-8856153c-fd33-4eca-913c-fbbc6c3bd29c (13ee545b537ab401986c39bd7ee02219eb0b952114d767e801b4b4b3d6e20606)\n13ee545b537ab401986c39bd7ee02219eb0b952114d767e801b4b4b3d6e20606\nSat Dec  6 07:18:21 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-8856153c-fd33-4eca-913c-fbbc6c3bd29c (13ee545b537ab401986c39bd7ee02219eb0b952114d767e801b4b4b3d6e20606)\n13ee545b537ab401986c39bd7ee02219eb0b952114d767e801b4b4b3d6e20606\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:18:21 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:18:21.458 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[258492fc-1997-4a71-9bda-c723d5e98335]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:18:21 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:18:21.460 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8856153c-f0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:18:21 np0005548731 kernel: tap8856153c-f0: left promiscuous mode
Dec  6 02:18:21 np0005548731 nova_compute[232433]: 2025-12-06 07:18:21.463 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:18:21 np0005548731 nova_compute[232433]: 2025-12-06 07:18:21.475 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:18:21 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:18:21.479 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[2c55fd8d-8683-493d-ae05-49a814badd5d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:18:21 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:18:21.502 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[b745d376-7ed7-4dc1-b0e9-5edcb451e415]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:18:21 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:18:21.504 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[80ce9b63-33da-441a-bde4-c600ae65e1c5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:18:21 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:18:21.520 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[743f34c7-acb3-42ff-ab1d-99503d06fd72]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 572598, 'reachable_time': 30742, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 264458, 'error': None, 'target': 'ovnmeta-8856153c-fd33-4eca-913c-fbbc6c3bd29c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:18:21 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:18:21.523 144079 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-8856153c-fd33-4eca-913c-fbbc6c3bd29c deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Dec  6 02:18:21 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:18:21.523 144079 DEBUG oslo.privsep.daemon [-] privsep: reply[428a5209-505b-4d65-a278-d0eb8b663b6f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:18:21 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:18:21.524 143965 INFO neutron.agent.ovn.metadata.agent [-] Port 0355ba71-a598-4e22-a0b7-c5c10b61733e in datapath 8856153c-fd33-4eca-913c-fbbc6c3bd29c unbound from our chassis#033[00m
Dec  6 02:18:21 np0005548731 systemd[1]: run-netns-ovnmeta\x2d8856153c\x2dfd33\x2d4eca\x2d913c\x2dfbbc6c3bd29c.mount: Deactivated successfully.
Dec  6 02:18:21 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:18:21.525 143965 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 8856153c-fd33-4eca-913c-fbbc6c3bd29c, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec  6 02:18:21 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:18:21.526 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[4c905585-e45a-47c6-8242-ebd8f6d1e40b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:18:21 np0005548731 nova_compute[232433]: 2025-12-06 07:18:21.960 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:18:22 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:18:22 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:18:22 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:18:22.614 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:18:22 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:18:23 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:18:23 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:18:23 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:18:23.004 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:18:23 np0005548731 nova_compute[232433]: 2025-12-06 07:18:23.337 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:18:23 np0005548731 nova_compute[232433]: 2025-12-06 07:18:23.609 232437 DEBUG nova.compute.manager [req-2897585d-2448-4d9b-b977-1f72dbb55905 req-ed94691b-b9f7-4b4e-a463-54003da5041a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: c8403a0c-2fe6-48fe-91af-ec5aca71e12d] Received event network-vif-plugged-a599f1a0-5413-4dc9-9ae4-d7ba512d761c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:18:23 np0005548731 nova_compute[232433]: 2025-12-06 07:18:23.609 232437 DEBUG oslo_concurrency.lockutils [req-2897585d-2448-4d9b-b977-1f72dbb55905 req-ed94691b-b9f7-4b4e-a463-54003da5041a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "c8403a0c-2fe6-48fe-91af-ec5aca71e12d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:18:23 np0005548731 nova_compute[232433]: 2025-12-06 07:18:23.609 232437 DEBUG oslo_concurrency.lockutils [req-2897585d-2448-4d9b-b977-1f72dbb55905 req-ed94691b-b9f7-4b4e-a463-54003da5041a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "c8403a0c-2fe6-48fe-91af-ec5aca71e12d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:18:23 np0005548731 nova_compute[232433]: 2025-12-06 07:18:23.610 232437 DEBUG oslo_concurrency.lockutils [req-2897585d-2448-4d9b-b977-1f72dbb55905 req-ed94691b-b9f7-4b4e-a463-54003da5041a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "c8403a0c-2fe6-48fe-91af-ec5aca71e12d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:18:23 np0005548731 nova_compute[232433]: 2025-12-06 07:18:23.610 232437 DEBUG nova.compute.manager [req-2897585d-2448-4d9b-b977-1f72dbb55905 req-ed94691b-b9f7-4b4e-a463-54003da5041a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: c8403a0c-2fe6-48fe-91af-ec5aca71e12d] No waiting events found dispatching network-vif-plugged-a599f1a0-5413-4dc9-9ae4-d7ba512d761c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 02:18:23 np0005548731 nova_compute[232433]: 2025-12-06 07:18:23.610 232437 WARNING nova.compute.manager [req-2897585d-2448-4d9b-b977-1f72dbb55905 req-ed94691b-b9f7-4b4e-a463-54003da5041a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: c8403a0c-2fe6-48fe-91af-ec5aca71e12d] Received unexpected event network-vif-plugged-a599f1a0-5413-4dc9-9ae4-d7ba512d761c for instance with vm_state resized and task_state None.#033[00m
Dec  6 02:18:23 np0005548731 nova_compute[232433]: 2025-12-06 07:18:23.735 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:18:24 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:18:24 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:18:24 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:18:24.617 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:18:25 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:18:25 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 02:18:25 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:18:25.009 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 02:18:25 np0005548731 nova_compute[232433]: 2025-12-06 07:18:25.203 232437 DEBUG oslo_concurrency.lockutils [None req-6d421d1e-6f12-4866-a372-bffb57119d92 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Acquiring lock "c8403a0c-2fe6-48fe-91af-ec5aca71e12d" by "nova.compute.manager.ComputeManager.confirm_resize.<locals>.do_confirm_resize" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:18:25 np0005548731 nova_compute[232433]: 2025-12-06 07:18:25.203 232437 DEBUG oslo_concurrency.lockutils [None req-6d421d1e-6f12-4866-a372-bffb57119d92 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Lock "c8403a0c-2fe6-48fe-91af-ec5aca71e12d" acquired by "nova.compute.manager.ComputeManager.confirm_resize.<locals>.do_confirm_resize" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:18:25 np0005548731 nova_compute[232433]: 2025-12-06 07:18:25.203 232437 DEBUG nova.compute.manager [None req-6d421d1e-6f12-4866-a372-bffb57119d92 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] [instance: c8403a0c-2fe6-48fe-91af-ec5aca71e12d] Going to confirm migration 11 do_confirm_resize /usr/lib/python3.9/site-packages/nova/compute/manager.py:4679#033[00m
Dec  6 02:18:25 np0005548731 nova_compute[232433]: 2025-12-06 07:18:25.756 232437 DEBUG nova.compute.manager [req-bb401adf-f9ba-49f2-843d-7a00ef39d874 req-2ab77e90-7983-4802-8474-ea6f91544cec 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: c8403a0c-2fe6-48fe-91af-ec5aca71e12d] Received event network-vif-plugged-a599f1a0-5413-4dc9-9ae4-d7ba512d761c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:18:25 np0005548731 nova_compute[232433]: 2025-12-06 07:18:25.756 232437 DEBUG oslo_concurrency.lockutils [req-bb401adf-f9ba-49f2-843d-7a00ef39d874 req-2ab77e90-7983-4802-8474-ea6f91544cec 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "c8403a0c-2fe6-48fe-91af-ec5aca71e12d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:18:25 np0005548731 nova_compute[232433]: 2025-12-06 07:18:25.757 232437 DEBUG oslo_concurrency.lockutils [req-bb401adf-f9ba-49f2-843d-7a00ef39d874 req-2ab77e90-7983-4802-8474-ea6f91544cec 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "c8403a0c-2fe6-48fe-91af-ec5aca71e12d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:18:25 np0005548731 nova_compute[232433]: 2025-12-06 07:18:25.757 232437 DEBUG oslo_concurrency.lockutils [req-bb401adf-f9ba-49f2-843d-7a00ef39d874 req-2ab77e90-7983-4802-8474-ea6f91544cec 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "c8403a0c-2fe6-48fe-91af-ec5aca71e12d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:18:25 np0005548731 nova_compute[232433]: 2025-12-06 07:18:25.757 232437 DEBUG nova.compute.manager [req-bb401adf-f9ba-49f2-843d-7a00ef39d874 req-2ab77e90-7983-4802-8474-ea6f91544cec 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: c8403a0c-2fe6-48fe-91af-ec5aca71e12d] No waiting events found dispatching network-vif-plugged-a599f1a0-5413-4dc9-9ae4-d7ba512d761c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 02:18:25 np0005548731 nova_compute[232433]: 2025-12-06 07:18:25.757 232437 WARNING nova.compute.manager [req-bb401adf-f9ba-49f2-843d-7a00ef39d874 req-2ab77e90-7983-4802-8474-ea6f91544cec 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: c8403a0c-2fe6-48fe-91af-ec5aca71e12d] Received unexpected event network-vif-plugged-a599f1a0-5413-4dc9-9ae4-d7ba512d761c for instance with vm_state resized and task_state None.#033[00m
Dec  6 02:18:25 np0005548731 nova_compute[232433]: 2025-12-06 07:18:25.942 232437 DEBUG neutronclient.v2_0.client [None req-6d421d1e-6f12-4866-a372-bffb57119d92 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Error message: {"NeutronError": {"type": "PortBindingNotFound", "message": "Binding for port a599f1a0-5413-4dc9-9ae4-d7ba512d761c for host compute-2.ctlplane.example.com could not be found.", "detail": ""}} _handle_fault_response /usr/lib/python3.9/site-packages/neutronclient/v2_0/client.py:262#033[00m
Dec  6 02:18:25 np0005548731 nova_compute[232433]: 2025-12-06 07:18:25.942 232437 DEBUG oslo_concurrency.lockutils [None req-6d421d1e-6f12-4866-a372-bffb57119d92 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Acquiring lock "refresh_cache-c8403a0c-2fe6-48fe-91af-ec5aca71e12d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 02:18:25 np0005548731 nova_compute[232433]: 2025-12-06 07:18:25.943 232437 DEBUG oslo_concurrency.lockutils [None req-6d421d1e-6f12-4866-a372-bffb57119d92 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Acquired lock "refresh_cache-c8403a0c-2fe6-48fe-91af-ec5aca71e12d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 02:18:25 np0005548731 nova_compute[232433]: 2025-12-06 07:18:25.943 232437 DEBUG nova.network.neutron [None req-6d421d1e-6f12-4866-a372-bffb57119d92 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] [instance: c8403a0c-2fe6-48fe-91af-ec5aca71e12d] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec  6 02:18:25 np0005548731 nova_compute[232433]: 2025-12-06 07:18:25.943 232437 DEBUG nova.objects.instance [None req-6d421d1e-6f12-4866-a372-bffb57119d92 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Lazy-loading 'info_cache' on Instance uuid c8403a0c-2fe6-48fe-91af-ec5aca71e12d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:18:26 np0005548731 nova_compute[232433]: 2025-12-06 07:18:26.066 232437 DEBUG nova.compute.manager [req-280b433c-29d7-499a-8cac-c8a2bd416016 req-c014f58a-0279-4ef4-a807-7fb9e8cdb6ba 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 067b423b-4ac2-4ca9-9000-44b6fb2af34e] Received event network-vif-deleted-e27e5b98-b454-4f5a-90a9-3f7a5c8f1a91 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:18:26 np0005548731 nova_compute[232433]: 2025-12-06 07:18:26.067 232437 INFO nova.compute.manager [req-280b433c-29d7-499a-8cac-c8a2bd416016 req-c014f58a-0279-4ef4-a807-7fb9e8cdb6ba 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 067b423b-4ac2-4ca9-9000-44b6fb2af34e] Neutron deleted interface e27e5b98-b454-4f5a-90a9-3f7a5c8f1a91; detaching it from the instance and deleting it from the info cache#033[00m
Dec  6 02:18:26 np0005548731 nova_compute[232433]: 2025-12-06 07:18:26.067 232437 DEBUG nova.network.neutron [req-280b433c-29d7-499a-8cac-c8a2bd416016 req-c014f58a-0279-4ef4-a807-7fb9e8cdb6ba 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 067b423b-4ac2-4ca9-9000-44b6fb2af34e] Updating instance_info_cache with network_info: [{"id": "5408643b-7986-461d-867d-9f6545eeabb3", "address": "fa:16:3e:43:c0:da", "network": {"id": "1867060b-2830-47a2-bf0a-46a10388f745", "bridge": "br-int", "label": "tempest-TaggedBootDevicesTest-1327949345-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.174", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8f938a037b8141cf9408cbf6f5cd081d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5408643b-79", "ovs_interfaceid": "5408643b-7986-461d-867d-9f6545eeabb3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "0f3261df-8a99-4884-8db8-4ab5250dafc3", "address": "fa:16:3e:33:5a:4c", "network": {"id": "83f8ee1c-bee2-4425-8792-4c822f63072c", "bridge": "br-int", "label": "tempest-device-tagging-net1-1182028938", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.159", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], 
"meta": {"injected": false, "tenant_id": "8f938a037b8141cf9408cbf6f5cd081d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0f3261df-8a", "ovs_interfaceid": "0f3261df-8a99-4884-8db8-4ab5250dafc3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}, {"id": "0afb38e3-893f-4379-98a8-4a56e2b84f9d", "address": "fa:16:3e:b0:7f:ee", "network": {"id": "83f8ee1c-bee2-4425-8792-4c822f63072c", "bridge": "br-int", "label": "tempest-device-tagging-net1-1182028938", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.142", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8f938a037b8141cf9408cbf6f5cd081d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0afb38e3-89", "ovs_interfaceid": "0afb38e3-893f-4379-98a8-4a56e2b84f9d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}, {"id": "5984d840-e1cd-45dc-87cb-a337e73a753c", "address": "fa:16:3e:0b:78:59", "network": {"id": "83f8ee1c-bee2-4425-8792-4c822f63072c", "bridge": "br-int", "label": "tempest-device-tagging-net1-1182028938", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.181", "type": "fixed", "version": 4, "meta": {}, 
"floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8f938a037b8141cf9408cbf6f5cd081d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5984d840-e1", "ovs_interfaceid": "5984d840-e1cd-45dc-87cb-a337e73a753c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "509f7f18-31bd-42cd-83f5-e9b62a4e200b", "address": "fa:16:3e:3a:d0:96", "network": {"id": "8856153c-fd33-4eca-913c-fbbc6c3bd29c", "bridge": "br-int", "label": "tempest-device-tagging-net2-1411782491", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.2.2.100", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8f938a037b8141cf9408cbf6f5cd081d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap509f7f18-31", "ovs_interfaceid": "509f7f18-31bd-42cd-83f5-e9b62a4e200b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "0355ba71-a598-4e22-a0b7-c5c10b61733e", "address": "fa:16:3e:a4:48:7a", "network": {"id": "8856153c-fd33-4eca-913c-fbbc6c3bd29c", "bridge": "br-int", "label": "tempest-device-tagging-net2-1411782491", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, 
"ips": [{"address": "10.2.2.200", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8f938a037b8141cf9408cbf6f5cd081d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0355ba71-a5", "ovs_interfaceid": "0355ba71-a598-4e22-a0b7-c5c10b61733e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 02:18:26 np0005548731 nova_compute[232433]: 2025-12-06 07:18:26.103 232437 DEBUG nova.compute.manager [req-280b433c-29d7-499a-8cac-c8a2bd416016 req-c014f58a-0279-4ef4-a807-7fb9e8cdb6ba 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 067b423b-4ac2-4ca9-9000-44b6fb2af34e] Detach interface failed, port_id=e27e5b98-b454-4f5a-90a9-3f7a5c8f1a91, reason: Instance 067b423b-4ac2-4ca9-9000-44b6fb2af34e could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882#033[00m
Dec  6 02:18:26 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:18:26 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:18:26 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:18:26.619 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:18:26 np0005548731 nova_compute[232433]: 2025-12-06 07:18:26.665 232437 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765005491.6640582, c8403a0c-2fe6-48fe-91af-ec5aca71e12d => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 02:18:26 np0005548731 nova_compute[232433]: 2025-12-06 07:18:26.665 232437 INFO nova.compute.manager [-] [instance: c8403a0c-2fe6-48fe-91af-ec5aca71e12d] VM Stopped (Lifecycle Event)#033[00m
Dec  6 02:18:26 np0005548731 nova_compute[232433]: 2025-12-06 07:18:26.683 232437 DEBUG nova.compute.manager [None req-469683bd-3a54-4886-85e7-dc0fcb04c414 - - - - - -] [instance: c8403a0c-2fe6-48fe-91af-ec5aca71e12d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:18:26 np0005548731 nova_compute[232433]: 2025-12-06 07:18:26.686 232437 DEBUG nova.compute.manager [None req-469683bd-3a54-4886-85e7-dc0fcb04c414 - - - - - -] [instance: c8403a0c-2fe6-48fe-91af-ec5aca71e12d] Synchronizing instance power state after lifecycle event "Stopped"; current vm_state: resized, current task_state: None, current DB power_state: 1, VM power_state: 4 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  6 02:18:26 np0005548731 nova_compute[232433]: 2025-12-06 07:18:26.707 232437 INFO nova.compute.manager [None req-469683bd-3a54-4886-85e7-dc0fcb04c414 - - - - - -] [instance: c8403a0c-2fe6-48fe-91af-ec5aca71e12d] During the sync_power process the instance has moved from host compute-0.ctlplane.example.com to host compute-2.ctlplane.example.com#033[00m
Dec  6 02:18:26 np0005548731 podman[264519]: 2025-12-06 07:18:26.900295632 +0000 UTC m=+0.052059539 container health_status ae4fd08cdec31d6ae2a6b91ffff7deb6ca5a8375cb52ee4b685cc6f25796824e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Dec  6 02:18:26 np0005548731 podman[264517]: 2025-12-06 07:18:26.914989209 +0000 UTC m=+0.071695266 container health_status 26c2ce9bdb83c6db5ccad7f5b2a7cb4e9354ab0235ad2f942ea42c8feda2142b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, 
org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec  6 02:18:26 np0005548731 podman[264518]: 2025-12-06 07:18:26.921394126 +0000 UTC m=+0.075930311 container health_status a118c3becf56cfbcae208065771ebf10a65162bb7b96389971293e534823fb78 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec  6 02:18:26 np0005548731 nova_compute[232433]: 2025-12-06 07:18:26.961 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:18:27 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:18:27 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:18:27 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:18:27.013 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:18:27 np0005548731 nova_compute[232433]: 2025-12-06 07:18:27.678 232437 DEBUG nova.network.neutron [None req-6d421d1e-6f12-4866-a372-bffb57119d92 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] [instance: c8403a0c-2fe6-48fe-91af-ec5aca71e12d] Updating instance_info_cache with network_info: [{"id": "a599f1a0-5413-4dc9-9ae4-d7ba512d761c", "address": "fa:16:3e:9b:0b:0a", "network": {"id": "4d599401-3772-4e38-8cd2-d774d370af64", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-809610913-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.185", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "929e2be1488d4b80b7ad8946093a6abe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa599f1a0-54", "ovs_interfaceid": "a599f1a0-5413-4dc9-9ae4-d7ba512d761c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 02:18:27 np0005548731 nova_compute[232433]: 2025-12-06 07:18:27.702 232437 DEBUG oslo_concurrency.lockutils [None req-6d421d1e-6f12-4866-a372-bffb57119d92 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Releasing lock "refresh_cache-c8403a0c-2fe6-48fe-91af-ec5aca71e12d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 02:18:27 np0005548731 nova_compute[232433]: 2025-12-06 07:18:27.703 232437 DEBUG nova.objects.instance [None req-6d421d1e-6f12-4866-a372-bffb57119d92 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Lazy-loading 'migration_context' on Instance uuid c8403a0c-2fe6-48fe-91af-ec5aca71e12d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:18:27 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:18:27 np0005548731 nova_compute[232433]: 2025-12-06 07:18:27.788 232437 DEBUG nova.storage.rbd_utils [None req-6d421d1e-6f12-4866-a372-bffb57119d92 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] removing snapshot(nova-resize) on rbd image(c8403a0c-2fe6-48fe-91af-ec5aca71e12d_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Dec  6 02:18:28 np0005548731 nova_compute[232433]: 2025-12-06 07:18:28.244 232437 DEBUG nova.compute.manager [req-0839fda5-95e0-4605-8585-aeffbc505911 req-599b141a-46c2-4cb8-9f7d-4cda11dd4191 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 067b423b-4ac2-4ca9-9000-44b6fb2af34e] Received event network-vif-deleted-0355ba71-a598-4e22-a0b7-c5c10b61733e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:18:28 np0005548731 nova_compute[232433]: 2025-12-06 07:18:28.245 232437 INFO nova.compute.manager [req-0839fda5-95e0-4605-8585-aeffbc505911 req-599b141a-46c2-4cb8-9f7d-4cda11dd4191 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 067b423b-4ac2-4ca9-9000-44b6fb2af34e] Neutron deleted interface 0355ba71-a598-4e22-a0b7-c5c10b61733e; detaching it from the instance and deleting it from the info cache#033[00m
Dec  6 02:18:28 np0005548731 nova_compute[232433]: 2025-12-06 07:18:28.245 232437 DEBUG nova.network.neutron [req-0839fda5-95e0-4605-8585-aeffbc505911 req-599b141a-46c2-4cb8-9f7d-4cda11dd4191 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 067b423b-4ac2-4ca9-9000-44b6fb2af34e] Updating instance_info_cache with network_info: [{"id": "5408643b-7986-461d-867d-9f6545eeabb3", "address": "fa:16:3e:43:c0:da", "network": {"id": "1867060b-2830-47a2-bf0a-46a10388f745", "bridge": "br-int", "label": "tempest-TaggedBootDevicesTest-1327949345-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.174", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8f938a037b8141cf9408cbf6f5cd081d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5408643b-79", "ovs_interfaceid": "5408643b-7986-461d-867d-9f6545eeabb3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "0f3261df-8a99-4884-8db8-4ab5250dafc3", "address": "fa:16:3e:33:5a:4c", "network": {"id": "83f8ee1c-bee2-4425-8792-4c822f63072c", "bridge": "br-int", "label": "tempest-device-tagging-net1-1182028938", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.159", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], 
"meta": {"injected": false, "tenant_id": "8f938a037b8141cf9408cbf6f5cd081d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0f3261df-8a", "ovs_interfaceid": "0f3261df-8a99-4884-8db8-4ab5250dafc3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}, {"id": "0afb38e3-893f-4379-98a8-4a56e2b84f9d", "address": "fa:16:3e:b0:7f:ee", "network": {"id": "83f8ee1c-bee2-4425-8792-4c822f63072c", "bridge": "br-int", "label": "tempest-device-tagging-net1-1182028938", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.142", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8f938a037b8141cf9408cbf6f5cd081d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0afb38e3-89", "ovs_interfaceid": "0afb38e3-893f-4379-98a8-4a56e2b84f9d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}, {"id": "5984d840-e1cd-45dc-87cb-a337e73a753c", "address": "fa:16:3e:0b:78:59", "network": {"id": "83f8ee1c-bee2-4425-8792-4c822f63072c", "bridge": "br-int", "label": "tempest-device-tagging-net1-1182028938", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.181", "type": "fixed", "version": 4, "meta": {}, 
"floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8f938a037b8141cf9408cbf6f5cd081d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5984d840-e1", "ovs_interfaceid": "5984d840-e1cd-45dc-87cb-a337e73a753c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "509f7f18-31bd-42cd-83f5-e9b62a4e200b", "address": "fa:16:3e:3a:d0:96", "network": {"id": "8856153c-fd33-4eca-913c-fbbc6c3bd29c", "bridge": "br-int", "label": "tempest-device-tagging-net2-1411782491", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.2.2.100", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8f938a037b8141cf9408cbf6f5cd081d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap509f7f18-31", "ovs_interfaceid": "509f7f18-31bd-42cd-83f5-e9b62a4e200b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 02:18:28 np0005548731 nova_compute[232433]: 2025-12-06 07:18:28.273 232437 DEBUG nova.compute.manager [req-0839fda5-95e0-4605-8585-aeffbc505911 req-599b141a-46c2-4cb8-9f7d-4cda11dd4191 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 067b423b-4ac2-4ca9-9000-44b6fb2af34e] Detach interface failed, port_id=0355ba71-a598-4e22-a0b7-c5c10b61733e, reason: Instance 067b423b-4ac2-4ca9-9000-44b6fb2af34e could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882#033[00m
Dec  6 02:18:28 np0005548731 nova_compute[232433]: 2025-12-06 07:18:28.273 232437 DEBUG nova.compute.manager [req-0839fda5-95e0-4605-8585-aeffbc505911 req-599b141a-46c2-4cb8-9f7d-4cda11dd4191 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 067b423b-4ac2-4ca9-9000-44b6fb2af34e] Received event network-vif-deleted-5984d840-e1cd-45dc-87cb-a337e73a753c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:18:28 np0005548731 nova_compute[232433]: 2025-12-06 07:18:28.273 232437 INFO nova.compute.manager [req-0839fda5-95e0-4605-8585-aeffbc505911 req-599b141a-46c2-4cb8-9f7d-4cda11dd4191 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 067b423b-4ac2-4ca9-9000-44b6fb2af34e] Neutron deleted interface 5984d840-e1cd-45dc-87cb-a337e73a753c; detaching it from the instance and deleting it from the info cache#033[00m
Dec  6 02:18:28 np0005548731 nova_compute[232433]: 2025-12-06 07:18:28.273 232437 DEBUG nova.network.neutron [req-0839fda5-95e0-4605-8585-aeffbc505911 req-599b141a-46c2-4cb8-9f7d-4cda11dd4191 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 067b423b-4ac2-4ca9-9000-44b6fb2af34e] Updating instance_info_cache with network_info: [{"id": "5408643b-7986-461d-867d-9f6545eeabb3", "address": "fa:16:3e:43:c0:da", "network": {"id": "1867060b-2830-47a2-bf0a-46a10388f745", "bridge": "br-int", "label": "tempest-TaggedBootDevicesTest-1327949345-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.174", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8f938a037b8141cf9408cbf6f5cd081d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5408643b-79", "ovs_interfaceid": "5408643b-7986-461d-867d-9f6545eeabb3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "0f3261df-8a99-4884-8db8-4ab5250dafc3", "address": "fa:16:3e:33:5a:4c", "network": {"id": "83f8ee1c-bee2-4425-8792-4c822f63072c", "bridge": "br-int", "label": "tempest-device-tagging-net1-1182028938", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.159", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], 
"meta": {"injected": false, "tenant_id": "8f938a037b8141cf9408cbf6f5cd081d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0f3261df-8a", "ovs_interfaceid": "0f3261df-8a99-4884-8db8-4ab5250dafc3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}, {"id": "0afb38e3-893f-4379-98a8-4a56e2b84f9d", "address": "fa:16:3e:b0:7f:ee", "network": {"id": "83f8ee1c-bee2-4425-8792-4c822f63072c", "bridge": "br-int", "label": "tempest-device-tagging-net1-1182028938", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.142", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8f938a037b8141cf9408cbf6f5cd081d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0afb38e3-89", "ovs_interfaceid": "0afb38e3-893f-4379-98a8-4a56e2b84f9d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}, {"id": "509f7f18-31bd-42cd-83f5-e9b62a4e200b", "address": "fa:16:3e:3a:d0:96", "network": {"id": "8856153c-fd33-4eca-913c-fbbc6c3bd29c", "bridge": "br-int", "label": "tempest-device-tagging-net2-1411782491", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.2.2.100", "type": "fixed", "version": 4, "meta": {}, 
"floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8f938a037b8141cf9408cbf6f5cd081d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap509f7f18-31", "ovs_interfaceid": "509f7f18-31bd-42cd-83f5-e9b62a4e200b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 02:18:28 np0005548731 nova_compute[232433]: 2025-12-06 07:18:28.318 232437 DEBUG nova.compute.manager [req-0839fda5-95e0-4605-8585-aeffbc505911 req-599b141a-46c2-4cb8-9f7d-4cda11dd4191 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 067b423b-4ac2-4ca9-9000-44b6fb2af34e] Detach interface failed, port_id=5984d840-e1cd-45dc-87cb-a337e73a753c, reason: Instance 067b423b-4ac2-4ca9-9000-44b6fb2af34e could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882#033[00m
Dec  6 02:18:28 np0005548731 nova_compute[232433]: 2025-12-06 07:18:28.318 232437 DEBUG nova.compute.manager [req-0839fda5-95e0-4605-8585-aeffbc505911 req-599b141a-46c2-4cb8-9f7d-4cda11dd4191 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 067b423b-4ac2-4ca9-9000-44b6fb2af34e] Received event network-vif-deleted-509f7f18-31bd-42cd-83f5-e9b62a4e200b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:18:28 np0005548731 nova_compute[232433]: 2025-12-06 07:18:28.318 232437 INFO nova.compute.manager [req-0839fda5-95e0-4605-8585-aeffbc505911 req-599b141a-46c2-4cb8-9f7d-4cda11dd4191 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 067b423b-4ac2-4ca9-9000-44b6fb2af34e] Neutron deleted interface 509f7f18-31bd-42cd-83f5-e9b62a4e200b; detaching it from the instance and deleting it from the info cache#033[00m
Dec  6 02:18:28 np0005548731 nova_compute[232433]: 2025-12-06 07:18:28.318 232437 DEBUG nova.network.neutron [req-0839fda5-95e0-4605-8585-aeffbc505911 req-599b141a-46c2-4cb8-9f7d-4cda11dd4191 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 067b423b-4ac2-4ca9-9000-44b6fb2af34e] Updating instance_info_cache with network_info: [{"id": "5408643b-7986-461d-867d-9f6545eeabb3", "address": "fa:16:3e:43:c0:da", "network": {"id": "1867060b-2830-47a2-bf0a-46a10388f745", "bridge": "br-int", "label": "tempest-TaggedBootDevicesTest-1327949345-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.174", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8f938a037b8141cf9408cbf6f5cd081d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5408643b-79", "ovs_interfaceid": "5408643b-7986-461d-867d-9f6545eeabb3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "0f3261df-8a99-4884-8db8-4ab5250dafc3", "address": "fa:16:3e:33:5a:4c", "network": {"id": "83f8ee1c-bee2-4425-8792-4c822f63072c", "bridge": "br-int", "label": "tempest-device-tagging-net1-1182028938", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.159", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], 
"meta": {"injected": false, "tenant_id": "8f938a037b8141cf9408cbf6f5cd081d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0f3261df-8a", "ovs_interfaceid": "0f3261df-8a99-4884-8db8-4ab5250dafc3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}, {"id": "0afb38e3-893f-4379-98a8-4a56e2b84f9d", "address": "fa:16:3e:b0:7f:ee", "network": {"id": "83f8ee1c-bee2-4425-8792-4c822f63072c", "bridge": "br-int", "label": "tempest-device-tagging-net1-1182028938", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.142", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8f938a037b8141cf9408cbf6f5cd081d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0afb38e3-89", "ovs_interfaceid": "0afb38e3-893f-4379-98a8-4a56e2b84f9d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 02:18:28 np0005548731 nova_compute[232433]: 2025-12-06 07:18:28.333 232437 DEBUG nova.network.neutron [-] [instance: 067b423b-4ac2-4ca9-9000-44b6fb2af34e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 02:18:28 np0005548731 nova_compute[232433]: 2025-12-06 07:18:28.338 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:18:28 np0005548731 nova_compute[232433]: 2025-12-06 07:18:28.388 232437 DEBUG nova.compute.manager [req-0839fda5-95e0-4605-8585-aeffbc505911 req-599b141a-46c2-4cb8-9f7d-4cda11dd4191 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 067b423b-4ac2-4ca9-9000-44b6fb2af34e] Detach interface failed, port_id=509f7f18-31bd-42cd-83f5-e9b62a4e200b, reason: Instance 067b423b-4ac2-4ca9-9000-44b6fb2af34e could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882#033[00m
Dec  6 02:18:28 np0005548731 nova_compute[232433]: 2025-12-06 07:18:28.402 232437 INFO nova.compute.manager [-] [instance: 067b423b-4ac2-4ca9-9000-44b6fb2af34e] Took 7.15 seconds to deallocate network for instance.#033[00m
Dec  6 02:18:28 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e254 e254: 3 total, 3 up, 3 in
Dec  6 02:18:28 np0005548731 nova_compute[232433]: 2025-12-06 07:18:28.584 232437 DEBUG nova.virt.libvirt.vif [None req-6d421d1e-6f12-4866-a372-bffb57119d92 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T07:14:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-893709654',display_name='tempest-ServerActionsTestJSON-server-893709654',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-893709654',id=68,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAYy9PI2opG1Yb015LzaQaZHiAr4KsuqNy5RLRivgn9w0frXJzdA9SLIokq/TNHsTv+OZ3SzlEhSSm/zy2gaUVX2tVfQksdYXi87Z2HYYYX2anFBfTxIFgh3j22gU5Usow==',key_name='tempest-keypair-1101896810',keypairs=<?>,launch_index=0,launched_at=2025-12-06T07:18:23Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=MigrationContext,new_flavor=Flavor(2),node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=Flavor(1),os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='929e2be1488d4b80b7ad8946093a6abe',ramdisk_id='',reservation_id='r-klri94j0',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-ServerActionsTestJSON-1877526843',owner_user_name='tempest-ServerActionsTestJSON-1877526843-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T07:18:23Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='627c36bb63534e52a4b1d5adf47e6ffd',uuid=c8403a0c-2fe6-48fe-91af-ec5aca71e12d,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='resized') vif={"id": "a599f1a0-5413-4dc9-9ae4-d7ba512d761c", "address": "fa:16:3e:9b:0b:0a", "network": {"id": "4d599401-3772-4e38-8cd2-d774d370af64", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-809610913-network", "subnets": [{"cidr": 
"10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.185", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "929e2be1488d4b80b7ad8946093a6abe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa599f1a0-54", "ovs_interfaceid": "a599f1a0-5413-4dc9-9ae4-d7ba512d761c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec  6 02:18:28 np0005548731 nova_compute[232433]: 2025-12-06 07:18:28.585 232437 DEBUG nova.network.os_vif_util [None req-6d421d1e-6f12-4866-a372-bffb57119d92 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Converting VIF {"id": "a599f1a0-5413-4dc9-9ae4-d7ba512d761c", "address": "fa:16:3e:9b:0b:0a", "network": {"id": "4d599401-3772-4e38-8cd2-d774d370af64", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-809610913-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.185", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "929e2be1488d4b80b7ad8946093a6abe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa599f1a0-54", "ovs_interfaceid": "a599f1a0-5413-4dc9-9ae4-d7ba512d761c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 02:18:28 np0005548731 nova_compute[232433]: 2025-12-06 07:18:28.586 232437 DEBUG nova.network.os_vif_util [None req-6d421d1e-6f12-4866-a372-bffb57119d92 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:9b:0b:0a,bridge_name='br-int',has_traffic_filtering=True,id=a599f1a0-5413-4dc9-9ae4-d7ba512d761c,network=Network(4d599401-3772-4e38-8cd2-d774d370af64),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa599f1a0-54') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 02:18:28 np0005548731 nova_compute[232433]: 2025-12-06 07:18:28.586 232437 DEBUG os_vif [None req-6d421d1e-6f12-4866-a372-bffb57119d92 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:9b:0b:0a,bridge_name='br-int',has_traffic_filtering=True,id=a599f1a0-5413-4dc9-9ae4-d7ba512d761c,network=Network(4d599401-3772-4e38-8cd2-d774d370af64),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa599f1a0-54') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec  6 02:18:28 np0005548731 nova_compute[232433]: 2025-12-06 07:18:28.588 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:18:28 np0005548731 nova_compute[232433]: 2025-12-06 07:18:28.589 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa599f1a0-54, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:18:28 np0005548731 nova_compute[232433]: 2025-12-06 07:18:28.589 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  6 02:18:28 np0005548731 nova_compute[232433]: 2025-12-06 07:18:28.591 232437 INFO os_vif [None req-6d421d1e-6f12-4866-a372-bffb57119d92 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:9b:0b:0a,bridge_name='br-int',has_traffic_filtering=True,id=a599f1a0-5413-4dc9-9ae4-d7ba512d761c,network=Network(4d599401-3772-4e38-8cd2-d774d370af64),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa599f1a0-54')#033[00m
Dec  6 02:18:28 np0005548731 nova_compute[232433]: 2025-12-06 07:18:28.591 232437 DEBUG oslo_concurrency.lockutils [None req-6d421d1e-6f12-4866-a372-bffb57119d92 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_source" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:18:28 np0005548731 nova_compute[232433]: 2025-12-06 07:18:28.592 232437 DEBUG oslo_concurrency.lockutils [None req-6d421d1e-6f12-4866-a372-bffb57119d92 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_source" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:18:28 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:18:28 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:18:28 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:18:28.622 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:18:28 np0005548731 nova_compute[232433]: 2025-12-06 07:18:28.688 232437 DEBUG oslo_concurrency.processutils [None req-6d421d1e-6f12-4866-a372-bffb57119d92 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:18:29 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:18:29 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 02:18:29 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:18:29.016 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 02:18:29 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 02:18:29 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1035860075' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 02:18:29 np0005548731 nova_compute[232433]: 2025-12-06 07:18:29.139 232437 DEBUG oslo_concurrency.processutils [None req-6d421d1e-6f12-4866-a372-bffb57119d92 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:18:29 np0005548731 nova_compute[232433]: 2025-12-06 07:18:29.146 232437 DEBUG nova.compute.provider_tree [None req-6d421d1e-6f12-4866-a372-bffb57119d92 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Inventory has not changed in ProviderTree for provider: 6d00757a-082f-486d-ae84-869a2ba2e6e7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  6 02:18:29 np0005548731 nova_compute[232433]: 2025-12-06 07:18:29.166 232437 DEBUG nova.scheduler.client.report [None req-6d421d1e-6f12-4866-a372-bffb57119d92 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Inventory has not changed for provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  6 02:18:29 np0005548731 nova_compute[232433]: 2025-12-06 07:18:29.255 232437 DEBUG oslo_concurrency.lockutils [None req-6d421d1e-6f12-4866-a372-bffb57119d92 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_source" :: held 0.663s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:18:29 np0005548731 nova_compute[232433]: 2025-12-06 07:18:29.416 232437 INFO nova.scheduler.client.report [None req-6d421d1e-6f12-4866-a372-bffb57119d92 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Deleted allocation for migration e712029d-e908-4f4e-9c2c-a2742ca0daa7#033[00m
Dec  6 02:18:29 np0005548731 nova_compute[232433]: 2025-12-06 07:18:29.426 232437 INFO nova.compute.manager [None req-e8ca760e-f12e-4680-8874-77b32b866da9 197a9b0ee1db487d82542eb31e84f33e 8f938a037b8141cf9408cbf6f5cd081d - - default default] [instance: 067b423b-4ac2-4ca9-9000-44b6fb2af34e] Took 1.02 seconds to detach 3 volumes for instance.#033[00m
Dec  6 02:18:29 np0005548731 nova_compute[232433]: 2025-12-06 07:18:29.508 232437 DEBUG oslo_concurrency.lockutils [None req-6d421d1e-6f12-4866-a372-bffb57119d92 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Lock "c8403a0c-2fe6-48fe-91af-ec5aca71e12d" "released" by "nova.compute.manager.ComputeManager.confirm_resize.<locals>.do_confirm_resize" :: held 4.305s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:18:29 np0005548731 nova_compute[232433]: 2025-12-06 07:18:29.515 232437 DEBUG oslo_concurrency.lockutils [None req-e8ca760e-f12e-4680-8874-77b32b866da9 197a9b0ee1db487d82542eb31e84f33e 8f938a037b8141cf9408cbf6f5cd081d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:18:29 np0005548731 nova_compute[232433]: 2025-12-06 07:18:29.516 232437 DEBUG oslo_concurrency.lockutils [None req-e8ca760e-f12e-4680-8874-77b32b866da9 197a9b0ee1db487d82542eb31e84f33e 8f938a037b8141cf9408cbf6f5cd081d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:18:29 np0005548731 nova_compute[232433]: 2025-12-06 07:18:29.575 232437 DEBUG oslo_concurrency.processutils [None req-e8ca760e-f12e-4680-8874-77b32b866da9 197a9b0ee1db487d82542eb31e84f33e 8f938a037b8141cf9408cbf6f5cd081d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:18:29 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 02:18:29 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2140050300' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 02:18:30 np0005548731 nova_compute[232433]: 2025-12-06 07:18:30.003 232437 DEBUG oslo_concurrency.processutils [None req-e8ca760e-f12e-4680-8874-77b32b866da9 197a9b0ee1db487d82542eb31e84f33e 8f938a037b8141cf9408cbf6f5cd081d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.428s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:18:30 np0005548731 nova_compute[232433]: 2025-12-06 07:18:30.008 232437 DEBUG nova.compute.provider_tree [None req-e8ca760e-f12e-4680-8874-77b32b866da9 197a9b0ee1db487d82542eb31e84f33e 8f938a037b8141cf9408cbf6f5cd081d - - default default] Inventory has not changed in ProviderTree for provider: 6d00757a-082f-486d-ae84-869a2ba2e6e7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  6 02:18:30 np0005548731 nova_compute[232433]: 2025-12-06 07:18:30.024 232437 DEBUG nova.scheduler.client.report [None req-e8ca760e-f12e-4680-8874-77b32b866da9 197a9b0ee1db487d82542eb31e84f33e 8f938a037b8141cf9408cbf6f5cd081d - - default default] Inventory has not changed for provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  6 02:18:30 np0005548731 nova_compute[232433]: 2025-12-06 07:18:30.052 232437 DEBUG oslo_concurrency.lockutils [None req-e8ca760e-f12e-4680-8874-77b32b866da9 197a9b0ee1db487d82542eb31e84f33e 8f938a037b8141cf9408cbf6f5cd081d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.537s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:18:30 np0005548731 nova_compute[232433]: 2025-12-06 07:18:30.089 232437 INFO nova.scheduler.client.report [None req-e8ca760e-f12e-4680-8874-77b32b866da9 197a9b0ee1db487d82542eb31e84f33e 8f938a037b8141cf9408cbf6f5cd081d - - default default] Deleted allocations for instance 067b423b-4ac2-4ca9-9000-44b6fb2af34e#033[00m
Dec  6 02:18:30 np0005548731 nova_compute[232433]: 2025-12-06 07:18:30.187 232437 DEBUG oslo_concurrency.lockutils [None req-e8ca760e-f12e-4680-8874-77b32b866da9 197a9b0ee1db487d82542eb31e84f33e 8f938a037b8141cf9408cbf6f5cd081d - - default default] Lock "067b423b-4ac2-4ca9-9000-44b6fb2af34e" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 12.507s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:18:30 np0005548731 nova_compute[232433]: 2025-12-06 07:18:30.339 232437 DEBUG nova.compute.manager [req-e1a608c4-eb81-4e2c-8e29-9645f4b19344 req-e5e8ea97-aa33-4b89-9eef-53b349c8420a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 067b423b-4ac2-4ca9-9000-44b6fb2af34e] Received event network-vif-deleted-5408643b-7986-461d-867d-9f6545eeabb3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:18:30 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:18:30 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 02:18:30 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:18:30.624 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 02:18:31 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:18:31 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:18:31 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:18:31.020 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:18:31 np0005548731 nova_compute[232433]: 2025-12-06 07:18:31.962 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:18:32 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:18:32 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:18:32 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:18:32.626 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:18:32 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e254 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:18:33 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:18:33 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:18:33 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:18:33.024 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:18:33 np0005548731 nova_compute[232433]: 2025-12-06 07:18:33.221 232437 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765005498.2200353, 067b423b-4ac2-4ca9-9000-44b6fb2af34e => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 02:18:33 np0005548731 nova_compute[232433]: 2025-12-06 07:18:33.221 232437 INFO nova.compute.manager [-] [instance: 067b423b-4ac2-4ca9-9000-44b6fb2af34e] VM Stopped (Lifecycle Event)#033[00m
Dec  6 02:18:33 np0005548731 nova_compute[232433]: 2025-12-06 07:18:33.244 232437 DEBUG nova.compute.manager [None req-f199a15c-ccab-455d-92c6-01b1d83125c7 - - - - - -] [instance: 067b423b-4ac2-4ca9-9000-44b6fb2af34e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:18:33 np0005548731 nova_compute[232433]: 2025-12-06 07:18:33.339 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:18:34 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:18:34 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:18:34 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:18:34.628 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:18:35 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:18:35 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:18:35 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:18:35.027 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:18:36 np0005548731 nova_compute[232433]: 2025-12-06 07:18:36.082 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:18:36 np0005548731 nova_compute[232433]: 2025-12-06 07:18:36.368 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:18:36 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:18:36 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:18:36 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:18:36.630 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:18:36 np0005548731 nova_compute[232433]: 2025-12-06 07:18:36.963 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:18:37 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:18:37 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:18:37 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:18:37.030 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:18:37 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e255 e255: 3 total, 3 up, 3 in
Dec  6 02:18:37 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e255 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:18:38 np0005548731 nova_compute[232433]: 2025-12-06 07:18:38.341 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:18:38 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:18:38 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:18:38 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:18:38.633 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:18:39 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:18:39 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:18:39 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:18:39.033 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:18:40 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:18:40 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 02:18:40 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:18:40.635 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 02:18:41 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:18:41 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:18:41 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:18:41.036 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:18:41 np0005548731 nova_compute[232433]: 2025-12-06 07:18:41.966 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:18:42 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:18:42 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:18:42 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:18:42.638 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:18:42 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e255 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:18:43 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:18:43 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:18:43 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:18:43.038 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:18:43 np0005548731 nova_compute[232433]: 2025-12-06 07:18:43.343 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:18:44 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:18:44.381 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=30, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:ec:b3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '32:72:e7:89:e0:7d'}, ipsec=False) old=SB_Global(nb_cfg=29) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 02:18:44 np0005548731 nova_compute[232433]: 2025-12-06 07:18:44.382 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:18:44 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:18:44.382 143965 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec  6 02:18:44 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:18:44 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 02:18:44 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:18:44.640 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 02:18:44 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 02:18:44 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 02:18:44 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 02:18:44 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 02:18:45 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:18:45 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:18:45 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:18:45.042 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:18:45 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  6 02:18:45 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3859210869' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  6 02:18:45 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  6 02:18:45 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3859210869' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  6 02:18:46 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  6 02:18:46 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:18:46 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:18:46 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:18:46.641 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:18:46 np0005548731 nova_compute[232433]: 2025-12-06 07:18:46.967 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:18:47 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:18:47 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:18:47 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:18:47.044 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:18:47 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 02:18:47 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  6 02:18:47 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:18:47.385 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=9f96b960-b4f2-40bd-ae99-08121f5e8b78, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '30'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:18:47 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e255 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:18:48 np0005548731 nova_compute[232433]: 2025-12-06 07:18:48.344 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:18:48 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:18:48 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:18:48 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:18:48.643 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:18:49 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:18:49 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:18:49 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:18:49.047 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:18:50 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:18:50 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:18:50 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:18:50.645 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:18:51 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:18:51 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:18:51 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:18:51.049 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:18:51 np0005548731 nova_compute[232433]: 2025-12-06 07:18:51.969 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:18:52 np0005548731 nova_compute[232433]: 2025-12-06 07:18:52.129 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:18:52 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:18:52 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:18:52 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:18:52.647 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:18:52 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e255 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:18:53 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:18:53 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:18:53 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:18:53.052 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:18:53 np0005548731 nova_compute[232433]: 2025-12-06 07:18:53.347 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:18:54 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 02:18:54 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 02:18:54 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:18:54 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:18:54 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:18:54.649 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:18:55 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:18:55 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 02:18:55 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:18:55.054 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 02:18:55 np0005548731 nova_compute[232433]: 2025-12-06 07:18:55.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:18:55 np0005548731 nova_compute[232433]: 2025-12-06 07:18:55.105 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  6 02:18:56 np0005548731 nova_compute[232433]: 2025-12-06 07:18:56.581 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Dec  6 02:18:56 np0005548731 nova_compute[232433]: 2025-12-06 07:18:56.582 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:18:56 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:18:56 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 02:18:56 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:18:56.651 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 02:18:56 np0005548731 nova_compute[232433]: 2025-12-06 07:18:56.970 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:18:57 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:18:57 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 02:18:57 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:18:57.057 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 02:18:57 np0005548731 nova_compute[232433]: 2025-12-06 07:18:57.103 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:18:57 np0005548731 nova_compute[232433]: 2025-12-06 07:18:57.104 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  6 02:18:57 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e255 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:18:57 np0005548731 podman[264907]: 2025-12-06 07:18:57.930690628 +0000 UTC m=+0.079557228 container health_status 26c2ce9bdb83c6db5ccad7f5b2a7cb4e9354ab0235ad2f942ea42c8feda2142b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent)
Dec  6 02:18:57 np0005548731 podman[264909]: 2025-12-06 07:18:57.932100442 +0000 UTC m=+0.074364491 container health_status ae4fd08cdec31d6ae2a6b91ffff7deb6ca5a8375cb52ee4b685cc6f25796824e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=multipathd, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Dec  6 02:18:57 np0005548731 podman[264908]: 2025-12-06 07:18:57.98745822 +0000 UTC m=+0.137810396 container health_status a118c3becf56cfbcae208065771ebf10a65162bb7b96389971293e534823fb78 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_controller, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Dec  6 02:18:58 np0005548731 nova_compute[232433]: 2025-12-06 07:18:58.105 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:18:58 np0005548731 nova_compute[232433]: 2025-12-06 07:18:58.349 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:18:58 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:18:58 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:18:58 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:18:58.653 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:18:59 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:18:59 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:18:59 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:18:59.060 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:19:00 np0005548731 nova_compute[232433]: 2025-12-06 07:19:00.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:19:00 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:19:00 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:19:00 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:19:00.655 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:19:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:19:00.860 143965 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:19:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:19:00.861 143965 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:19:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:19:00.861 143965 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:19:01 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:19:01 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 02:19:01 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:19:01.062 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 02:19:01 np0005548731 nova_compute[232433]: 2025-12-06 07:19:01.105 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:19:01 np0005548731 nova_compute[232433]: 2025-12-06 07:19:01.971 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:19:02 np0005548731 nova_compute[232433]: 2025-12-06 07:19:02.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:19:02 np0005548731 nova_compute[232433]: 2025-12-06 07:19:02.142 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:19:02 np0005548731 nova_compute[232433]: 2025-12-06 07:19:02.143 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:19:02 np0005548731 nova_compute[232433]: 2025-12-06 07:19:02.143 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:19:02 np0005548731 nova_compute[232433]: 2025-12-06 07:19:02.143 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  6 02:19:02 np0005548731 nova_compute[232433]: 2025-12-06 07:19:02.143 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:19:02 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 02:19:02 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2383397060' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 02:19:02 np0005548731 nova_compute[232433]: 2025-12-06 07:19:02.599 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.455s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:19:02 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:19:02 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:19:02 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:19:02.657 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:19:02 np0005548731 nova_compute[232433]: 2025-12-06 07:19:02.767 232437 WARNING nova.virt.libvirt.driver [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  6 02:19:02 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e255 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:19:02 np0005548731 nova_compute[232433]: 2025-12-06 07:19:02.768 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4623MB free_disk=20.967525482177734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  6 02:19:02 np0005548731 nova_compute[232433]: 2025-12-06 07:19:02.768 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:19:02 np0005548731 nova_compute[232433]: 2025-12-06 07:19:02.768 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:19:02 np0005548731 nova_compute[232433]: 2025-12-06 07:19:02.909 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  6 02:19:02 np0005548731 nova_compute[232433]: 2025-12-06 07:19:02.910 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  6 02:19:02 np0005548731 nova_compute[232433]: 2025-12-06 07:19:02.985 232437 DEBUG nova.scheduler.client.report [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Refreshing inventories for resource provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Dec  6 02:19:03 np0005548731 nova_compute[232433]: 2025-12-06 07:19:03.010 232437 DEBUG nova.scheduler.client.report [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Updating ProviderTree inventory for provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Dec  6 02:19:03 np0005548731 nova_compute[232433]: 2025-12-06 07:19:03.011 232437 DEBUG nova.compute.provider_tree [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Updating inventory in ProviderTree for provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Dec  6 02:19:03 np0005548731 nova_compute[232433]: 2025-12-06 07:19:03.050 232437 DEBUG nova.scheduler.client.report [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Refreshing aggregate associations for resource provider 6d00757a-082f-486d-ae84-869a2ba2e6e7, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Dec  6 02:19:03 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:19:03 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:19:03 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:19:03.066 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:19:03 np0005548731 nova_compute[232433]: 2025-12-06 07:19:03.113 232437 DEBUG nova.scheduler.client.report [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Refreshing trait associations for resource provider 6d00757a-082f-486d-ae84-869a2ba2e6e7, traits: COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_STORAGE_BUS_USB,COMPUTE_ACCELERATORS,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_TRUSTED_CERTS,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_SSE,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_SSE2,HW_CPU_X86_SSE41,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_STORAGE_BUS_SATA,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_SECURITY_TPM_1_2,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_VOLUME_EXTEND,COMPUTE_NODE,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_SECURITY_TPM_2_0,COMPUTE_VIOMMU_MODEL_AUTO,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_SSE42,COMPUTE_STORAGE_BUS_FDC,COMPUTE_RESCUE_BFV,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_IMAGE_TYPE_ARI _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Dec  6 02:19:03 np0005548731 nova_compute[232433]: 2025-12-06 07:19:03.170 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:19:03 np0005548731 nova_compute[232433]: 2025-12-06 07:19:03.351 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:19:03 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 02:19:03 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/134940863' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 02:19:03 np0005548731 nova_compute[232433]: 2025-12-06 07:19:03.612 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.442s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:19:03 np0005548731 nova_compute[232433]: 2025-12-06 07:19:03.618 232437 DEBUG nova.compute.provider_tree [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Inventory has not changed in ProviderTree for provider: 6d00757a-082f-486d-ae84-869a2ba2e6e7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  6 02:19:03 np0005548731 nova_compute[232433]: 2025-12-06 07:19:03.644 232437 DEBUG nova.scheduler.client.report [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Inventory has not changed for provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  6 02:19:04 np0005548731 nova_compute[232433]: 2025-12-06 07:19:04.019 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  6 02:19:04 np0005548731 nova_compute[232433]: 2025-12-06 07:19:04.019 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.251s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:19:04 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:19:04 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 02:19:04 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:19:04.659 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 02:19:05 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:19:05 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:19:05 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:19:05.068 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:19:06 np0005548731 nova_compute[232433]: 2025-12-06 07:19:06.020 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:19:06 np0005548731 nova_compute[232433]: 2025-12-06 07:19:06.099 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:19:06 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:19:06 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 02:19:06 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:19:06.663 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 02:19:06 np0005548731 nova_compute[232433]: 2025-12-06 07:19:06.972 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:19:07 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:19:07 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:19:07 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:19:07.072 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:19:07 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e255 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:19:08 np0005548731 nova_compute[232433]: 2025-12-06 07:19:08.354 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:19:08 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:19:08 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:19:08 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:19:08.665 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:19:08 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  6 02:19:08 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2115880549' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  6 02:19:08 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  6 02:19:08 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2115880549' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  6 02:19:09 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:19:09 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:19:09 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:19:09.075 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:19:10 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:19:10 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 02:19:10 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:19:10.667 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 02:19:11 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:19:11 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:19:11 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:19:11.078 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:19:11 np0005548731 nova_compute[232433]: 2025-12-06 07:19:11.973 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:19:12 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:19:12 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:19:12 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:19:12.669 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:19:12 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e255 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:19:13 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:19:13 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:19:13 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:19:13.080 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:19:13 np0005548731 nova_compute[232433]: 2025-12-06 07:19:13.356 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:19:14 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:19:14 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:19:14 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:19:14.671 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:19:15 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:19:15 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:19:15 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:19:15.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:19:16 np0005548731 nova_compute[232433]: 2025-12-06 07:19:16.157 232437 DEBUG oslo_concurrency.lockutils [None req-056dfa96-3193-43da-afb1-a26f5bdf549c 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Acquiring lock "57f8a652-f11e-4342-bae0-13c592a18ad2" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:19:16 np0005548731 nova_compute[232433]: 2025-12-06 07:19:16.158 232437 DEBUG oslo_concurrency.lockutils [None req-056dfa96-3193-43da-afb1-a26f5bdf549c 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Lock "57f8a652-f11e-4342-bae0-13c592a18ad2" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:19:16 np0005548731 nova_compute[232433]: 2025-12-06 07:19:16.179 232437 DEBUG nova.compute.manager [None req-056dfa96-3193-43da-afb1-a26f5bdf549c 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] [instance: 57f8a652-f11e-4342-bae0-13c592a18ad2] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Dec  6 02:19:16 np0005548731 nova_compute[232433]: 2025-12-06 07:19:16.313 232437 DEBUG oslo_concurrency.lockutils [None req-056dfa96-3193-43da-afb1-a26f5bdf549c 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:19:16 np0005548731 nova_compute[232433]: 2025-12-06 07:19:16.314 232437 DEBUG oslo_concurrency.lockutils [None req-056dfa96-3193-43da-afb1-a26f5bdf549c 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:19:16 np0005548731 nova_compute[232433]: 2025-12-06 07:19:16.328 232437 DEBUG nova.virt.hardware [None req-056dfa96-3193-43da-afb1-a26f5bdf549c 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Dec  6 02:19:16 np0005548731 nova_compute[232433]: 2025-12-06 07:19:16.328 232437 INFO nova.compute.claims [None req-056dfa96-3193-43da-afb1-a26f5bdf549c 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] [instance: 57f8a652-f11e-4342-bae0-13c592a18ad2] Claim successful on node compute-2.ctlplane.example.com#033[00m
Dec  6 02:19:16 np0005548731 nova_compute[232433]: 2025-12-06 07:19:16.499 232437 DEBUG oslo_concurrency.processutils [None req-056dfa96-3193-43da-afb1-a26f5bdf549c 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:19:16 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:19:16 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 02:19:16 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:19:16.673 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 02:19:16 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 02:19:16 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3659152555' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 02:19:16 np0005548731 nova_compute[232433]: 2025-12-06 07:19:16.922 232437 DEBUG oslo_concurrency.processutils [None req-056dfa96-3193-43da-afb1-a26f5bdf549c 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.423s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:19:16 np0005548731 nova_compute[232433]: 2025-12-06 07:19:16.928 232437 DEBUG nova.compute.provider_tree [None req-056dfa96-3193-43da-afb1-a26f5bdf549c 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Inventory has not changed in ProviderTree for provider: 6d00757a-082f-486d-ae84-869a2ba2e6e7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  6 02:19:16 np0005548731 nova_compute[232433]: 2025-12-06 07:19:16.951 232437 DEBUG nova.scheduler.client.report [None req-056dfa96-3193-43da-afb1-a26f5bdf549c 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Inventory has not changed for provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  6 02:19:16 np0005548731 nova_compute[232433]: 2025-12-06 07:19:16.975 232437 DEBUG oslo_concurrency.lockutils [None req-056dfa96-3193-43da-afb1-a26f5bdf549c 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.661s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:19:16 np0005548731 nova_compute[232433]: 2025-12-06 07:19:16.975 232437 DEBUG nova.compute.manager [None req-056dfa96-3193-43da-afb1-a26f5bdf549c 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] [instance: 57f8a652-f11e-4342-bae0-13c592a18ad2] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Dec  6 02:19:16 np0005548731 nova_compute[232433]: 2025-12-06 07:19:16.978 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:19:17 np0005548731 nova_compute[232433]: 2025-12-06 07:19:17.035 232437 DEBUG nova.compute.manager [None req-056dfa96-3193-43da-afb1-a26f5bdf549c 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] [instance: 57f8a652-f11e-4342-bae0-13c592a18ad2] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Dec  6 02:19:17 np0005548731 nova_compute[232433]: 2025-12-06 07:19:17.036 232437 DEBUG nova.network.neutron [None req-056dfa96-3193-43da-afb1-a26f5bdf549c 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] [instance: 57f8a652-f11e-4342-bae0-13c592a18ad2] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Dec  6 02:19:17 np0005548731 nova_compute[232433]: 2025-12-06 07:19:17.059 232437 INFO nova.virt.libvirt.driver [None req-056dfa96-3193-43da-afb1-a26f5bdf549c 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] [instance: 57f8a652-f11e-4342-bae0-13c592a18ad2] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Dec  6 02:19:17 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:19:17 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:19:17 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:19:17.087 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:19:17 np0005548731 nova_compute[232433]: 2025-12-06 07:19:17.131 232437 DEBUG nova.compute.manager [None req-056dfa96-3193-43da-afb1-a26f5bdf549c 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] [instance: 57f8a652-f11e-4342-bae0-13c592a18ad2] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Dec  6 02:19:17 np0005548731 nova_compute[232433]: 2025-12-06 07:19:17.268 232437 DEBUG nova.compute.manager [None req-056dfa96-3193-43da-afb1-a26f5bdf549c 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] [instance: 57f8a652-f11e-4342-bae0-13c592a18ad2] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Dec  6 02:19:17 np0005548731 nova_compute[232433]: 2025-12-06 07:19:17.269 232437 DEBUG nova.virt.libvirt.driver [None req-056dfa96-3193-43da-afb1-a26f5bdf549c 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] [instance: 57f8a652-f11e-4342-bae0-13c592a18ad2] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Dec  6 02:19:17 np0005548731 nova_compute[232433]: 2025-12-06 07:19:17.269 232437 INFO nova.virt.libvirt.driver [None req-056dfa96-3193-43da-afb1-a26f5bdf549c 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] [instance: 57f8a652-f11e-4342-bae0-13c592a18ad2] Creating image(s)#033[00m
Dec  6 02:19:17 np0005548731 nova_compute[232433]: 2025-12-06 07:19:17.295 232437 DEBUG nova.storage.rbd_utils [None req-056dfa96-3193-43da-afb1-a26f5bdf549c 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] rbd image 57f8a652-f11e-4342-bae0-13c592a18ad2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:19:17 np0005548731 nova_compute[232433]: 2025-12-06 07:19:17.324 232437 DEBUG nova.storage.rbd_utils [None req-056dfa96-3193-43da-afb1-a26f5bdf549c 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] rbd image 57f8a652-f11e-4342-bae0-13c592a18ad2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:19:17 np0005548731 nova_compute[232433]: 2025-12-06 07:19:17.352 232437 DEBUG nova.storage.rbd_utils [None req-056dfa96-3193-43da-afb1-a26f5bdf549c 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] rbd image 57f8a652-f11e-4342-bae0-13c592a18ad2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:19:17 np0005548731 nova_compute[232433]: 2025-12-06 07:19:17.355 232437 DEBUG oslo_concurrency.processutils [None req-056dfa96-3193-43da-afb1-a26f5bdf549c 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:19:17 np0005548731 nova_compute[232433]: 2025-12-06 07:19:17.418 232437 DEBUG oslo_concurrency.processutils [None req-056dfa96-3193-43da-afb1-a26f5bdf549c 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:19:17 np0005548731 nova_compute[232433]: 2025-12-06 07:19:17.419 232437 DEBUG oslo_concurrency.lockutils [None req-056dfa96-3193-43da-afb1-a26f5bdf549c 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Acquiring lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:19:17 np0005548731 nova_compute[232433]: 2025-12-06 07:19:17.420 232437 DEBUG oslo_concurrency.lockutils [None req-056dfa96-3193-43da-afb1-a26f5bdf549c 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:19:17 np0005548731 nova_compute[232433]: 2025-12-06 07:19:17.420 232437 DEBUG oslo_concurrency.lockutils [None req-056dfa96-3193-43da-afb1-a26f5bdf549c 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:19:17 np0005548731 nova_compute[232433]: 2025-12-06 07:19:17.443 232437 DEBUG nova.storage.rbd_utils [None req-056dfa96-3193-43da-afb1-a26f5bdf549c 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] rbd image 57f8a652-f11e-4342-bae0-13c592a18ad2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:19:17 np0005548731 nova_compute[232433]: 2025-12-06 07:19:17.447 232437 DEBUG oslo_concurrency.processutils [None req-056dfa96-3193-43da-afb1-a26f5bdf549c 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef 57f8a652-f11e-4342-bae0-13c592a18ad2_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:19:17 np0005548731 nova_compute[232433]: 2025-12-06 07:19:17.471 232437 DEBUG nova.policy [None req-056dfa96-3193-43da-afb1-a26f5bdf549c 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '06f5b46553b24b39a1493d96ec4e503e', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '35df5125c2cf4d29a6b975951af14910', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Dec  6 02:19:17 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e255 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:19:18 np0005548731 nova_compute[232433]: 2025-12-06 07:19:18.358 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:19:18 np0005548731 nova_compute[232433]: 2025-12-06 07:19:18.400 232437 DEBUG nova.network.neutron [None req-056dfa96-3193-43da-afb1-a26f5bdf549c 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] [instance: 57f8a652-f11e-4342-bae0-13c592a18ad2] Successfully created port: 945300f6-d49b-4923-bb2f-c73996fd2be2 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Dec  6 02:19:18 np0005548731 nova_compute[232433]: 2025-12-06 07:19:18.403 232437 DEBUG oslo_concurrency.processutils [None req-056dfa96-3193-43da-afb1-a26f5bdf549c 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef 57f8a652-f11e-4342-bae0-13c592a18ad2_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.956s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:19:18 np0005548731 nova_compute[232433]: 2025-12-06 07:19:18.472 232437 DEBUG nova.storage.rbd_utils [None req-056dfa96-3193-43da-afb1-a26f5bdf549c 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] resizing rbd image 57f8a652-f11e-4342-bae0-13c592a18ad2_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Dec  6 02:19:18 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:19:18 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 02:19:18 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:19:18.675 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 02:19:19 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:19:19 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:19:19 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:19:19.089 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:19:19 np0005548731 nova_compute[232433]: 2025-12-06 07:19:19.323 232437 DEBUG nova.objects.instance [None req-056dfa96-3193-43da-afb1-a26f5bdf549c 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Lazy-loading 'migration_context' on Instance uuid 57f8a652-f11e-4342-bae0-13c592a18ad2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:19:19 np0005548731 nova_compute[232433]: 2025-12-06 07:19:19.337 232437 DEBUG nova.virt.libvirt.driver [None req-056dfa96-3193-43da-afb1-a26f5bdf549c 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] [instance: 57f8a652-f11e-4342-bae0-13c592a18ad2] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Dec  6 02:19:19 np0005548731 nova_compute[232433]: 2025-12-06 07:19:19.338 232437 DEBUG nova.virt.libvirt.driver [None req-056dfa96-3193-43da-afb1-a26f5bdf549c 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] [instance: 57f8a652-f11e-4342-bae0-13c592a18ad2] Ensure instance console log exists: /var/lib/nova/instances/57f8a652-f11e-4342-bae0-13c592a18ad2/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec  6 02:19:19 np0005548731 nova_compute[232433]: 2025-12-06 07:19:19.338 232437 DEBUG oslo_concurrency.lockutils [None req-056dfa96-3193-43da-afb1-a26f5bdf549c 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:19:19 np0005548731 nova_compute[232433]: 2025-12-06 07:19:19.338 232437 DEBUG oslo_concurrency.lockutils [None req-056dfa96-3193-43da-afb1-a26f5bdf549c 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:19:19 np0005548731 nova_compute[232433]: 2025-12-06 07:19:19.339 232437 DEBUG oslo_concurrency.lockutils [None req-056dfa96-3193-43da-afb1-a26f5bdf549c 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:19:19 np0005548731 nova_compute[232433]: 2025-12-06 07:19:19.869 232437 DEBUG nova.network.neutron [None req-056dfa96-3193-43da-afb1-a26f5bdf549c 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] [instance: 57f8a652-f11e-4342-bae0-13c592a18ad2] Successfully updated port: 945300f6-d49b-4923-bb2f-c73996fd2be2 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Dec  6 02:19:19 np0005548731 nova_compute[232433]: 2025-12-06 07:19:19.891 232437 DEBUG oslo_concurrency.lockutils [None req-056dfa96-3193-43da-afb1-a26f5bdf549c 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Acquiring lock "refresh_cache-57f8a652-f11e-4342-bae0-13c592a18ad2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 02:19:19 np0005548731 nova_compute[232433]: 2025-12-06 07:19:19.892 232437 DEBUG oslo_concurrency.lockutils [None req-056dfa96-3193-43da-afb1-a26f5bdf549c 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Acquired lock "refresh_cache-57f8a652-f11e-4342-bae0-13c592a18ad2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 02:19:19 np0005548731 nova_compute[232433]: 2025-12-06 07:19:19.892 232437 DEBUG nova.network.neutron [None req-056dfa96-3193-43da-afb1-a26f5bdf549c 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] [instance: 57f8a652-f11e-4342-bae0-13c592a18ad2] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec  6 02:19:20 np0005548731 nova_compute[232433]: 2025-12-06 07:19:20.099 232437 DEBUG nova.compute.manager [req-5cd74b10-fccd-48ac-8ea6-df6bdf7dc9c3 req-0adafe7e-9d43-46f8-a5fa-1473379c924b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 57f8a652-f11e-4342-bae0-13c592a18ad2] Received event network-changed-945300f6-d49b-4923-bb2f-c73996fd2be2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:19:20 np0005548731 nova_compute[232433]: 2025-12-06 07:19:20.099 232437 DEBUG nova.compute.manager [req-5cd74b10-fccd-48ac-8ea6-df6bdf7dc9c3 req-0adafe7e-9d43-46f8-a5fa-1473379c924b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 57f8a652-f11e-4342-bae0-13c592a18ad2] Refreshing instance network info cache due to event network-changed-945300f6-d49b-4923-bb2f-c73996fd2be2. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  6 02:19:20 np0005548731 nova_compute[232433]: 2025-12-06 07:19:20.099 232437 DEBUG oslo_concurrency.lockutils [req-5cd74b10-fccd-48ac-8ea6-df6bdf7dc9c3 req-0adafe7e-9d43-46f8-a5fa-1473379c924b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-57f8a652-f11e-4342-bae0-13c592a18ad2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 02:19:20 np0005548731 nova_compute[232433]: 2025-12-06 07:19:20.159 232437 DEBUG nova.network.neutron [None req-056dfa96-3193-43da-afb1-a26f5bdf549c 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] [instance: 57f8a652-f11e-4342-bae0-13c592a18ad2] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec  6 02:19:20 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:19:20 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 02:19:20 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:19:20.678 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 02:19:21 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:19:21 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:19:21 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:19:21.105 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:19:21 np0005548731 nova_compute[232433]: 2025-12-06 07:19:21.491 232437 DEBUG nova.network.neutron [None req-056dfa96-3193-43da-afb1-a26f5bdf549c 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] [instance: 57f8a652-f11e-4342-bae0-13c592a18ad2] Updating instance_info_cache with network_info: [{"id": "945300f6-d49b-4923-bb2f-c73996fd2be2", "address": "fa:16:3e:94:d0:b6", "network": {"id": "61a21643-77ba-4a09-8184-10dc4bd52b26", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-327155623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "35df5125c2cf4d29a6b975951af14910", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap945300f6-d4", "ovs_interfaceid": "945300f6-d49b-4923-bb2f-c73996fd2be2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 02:19:21 np0005548731 nova_compute[232433]: 2025-12-06 07:19:21.522 232437 DEBUG oslo_concurrency.lockutils [None req-056dfa96-3193-43da-afb1-a26f5bdf549c 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Releasing lock "refresh_cache-57f8a652-f11e-4342-bae0-13c592a18ad2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 02:19:21 np0005548731 nova_compute[232433]: 2025-12-06 07:19:21.523 232437 DEBUG nova.compute.manager [None req-056dfa96-3193-43da-afb1-a26f5bdf549c 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] [instance: 57f8a652-f11e-4342-bae0-13c592a18ad2] Instance network_info: |[{"id": "945300f6-d49b-4923-bb2f-c73996fd2be2", "address": "fa:16:3e:94:d0:b6", "network": {"id": "61a21643-77ba-4a09-8184-10dc4bd52b26", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-327155623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "35df5125c2cf4d29a6b975951af14910", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap945300f6-d4", "ovs_interfaceid": "945300f6-d49b-4923-bb2f-c73996fd2be2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Dec  6 02:19:21 np0005548731 nova_compute[232433]: 2025-12-06 07:19:21.523 232437 DEBUG oslo_concurrency.lockutils [req-5cd74b10-fccd-48ac-8ea6-df6bdf7dc9c3 req-0adafe7e-9d43-46f8-a5fa-1473379c924b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-57f8a652-f11e-4342-bae0-13c592a18ad2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 02:19:21 np0005548731 nova_compute[232433]: 2025-12-06 07:19:21.523 232437 DEBUG nova.network.neutron [req-5cd74b10-fccd-48ac-8ea6-df6bdf7dc9c3 req-0adafe7e-9d43-46f8-a5fa-1473379c924b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 57f8a652-f11e-4342-bae0-13c592a18ad2] Refreshing network info cache for port 945300f6-d49b-4923-bb2f-c73996fd2be2 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  6 02:19:21 np0005548731 nova_compute[232433]: 2025-12-06 07:19:21.526 232437 DEBUG nova.virt.libvirt.driver [None req-056dfa96-3193-43da-afb1-a26f5bdf549c 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] [instance: 57f8a652-f11e-4342-bae0-13c592a18ad2] Start _get_guest_xml network_info=[{"id": "945300f6-d49b-4923-bb2f-c73996fd2be2", "address": "fa:16:3e:94:d0:b6", "network": {"id": "61a21643-77ba-4a09-8184-10dc4bd52b26", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-327155623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "35df5125c2cf4d29a6b975951af14910", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap945300f6-d4", "ovs_interfaceid": "945300f6-d49b-4923-bb2f-c73996fd2be2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'device_type': 'disk', 'encryption_secret_uuid': None, 'disk_bus': 'virtio', 'boot_index': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'encryption_options': None, 'size': 0, 'encrypted': False, 'image_id': '6efab05d-c7cf-4770-a5c3-c806a2739063'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec  6 02:19:21 np0005548731 nova_compute[232433]: 2025-12-06 07:19:21.530 232437 WARNING nova.virt.libvirt.driver [None req-056dfa96-3193-43da-afb1-a26f5bdf549c 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  6 02:19:21 np0005548731 nova_compute[232433]: 2025-12-06 07:19:21.536 232437 DEBUG nova.virt.libvirt.host [None req-056dfa96-3193-43da-afb1-a26f5bdf549c 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec  6 02:19:21 np0005548731 nova_compute[232433]: 2025-12-06 07:19:21.537 232437 DEBUG nova.virt.libvirt.host [None req-056dfa96-3193-43da-afb1-a26f5bdf549c 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec  6 02:19:21 np0005548731 nova_compute[232433]: 2025-12-06 07:19:21.539 232437 DEBUG nova.virt.libvirt.host [None req-056dfa96-3193-43da-afb1-a26f5bdf549c 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec  6 02:19:21 np0005548731 nova_compute[232433]: 2025-12-06 07:19:21.540 232437 DEBUG nova.virt.libvirt.host [None req-056dfa96-3193-43da-afb1-a26f5bdf549c 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec  6 02:19:21 np0005548731 nova_compute[232433]: 2025-12-06 07:19:21.541 232437 DEBUG nova.virt.libvirt.driver [None req-056dfa96-3193-43da-afb1-a26f5bdf549c 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec  6 02:19:21 np0005548731 nova_compute[232433]: 2025-12-06 07:19:21.541 232437 DEBUG nova.virt.hardware [None req-056dfa96-3193-43da-afb1-a26f5bdf549c 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-06T06:56:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='25848a18-11d9-4f11-80b5-5d005675c76d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec  6 02:19:21 np0005548731 nova_compute[232433]: 2025-12-06 07:19:21.542 232437 DEBUG nova.virt.hardware [None req-056dfa96-3193-43da-afb1-a26f5bdf549c 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec  6 02:19:21 np0005548731 nova_compute[232433]: 2025-12-06 07:19:21.542 232437 DEBUG nova.virt.hardware [None req-056dfa96-3193-43da-afb1-a26f5bdf549c 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec  6 02:19:21 np0005548731 nova_compute[232433]: 2025-12-06 07:19:21.542 232437 DEBUG nova.virt.hardware [None req-056dfa96-3193-43da-afb1-a26f5bdf549c 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec  6 02:19:21 np0005548731 nova_compute[232433]: 2025-12-06 07:19:21.542 232437 DEBUG nova.virt.hardware [None req-056dfa96-3193-43da-afb1-a26f5bdf549c 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec  6 02:19:21 np0005548731 nova_compute[232433]: 2025-12-06 07:19:21.543 232437 DEBUG nova.virt.hardware [None req-056dfa96-3193-43da-afb1-a26f5bdf549c 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec  6 02:19:21 np0005548731 nova_compute[232433]: 2025-12-06 07:19:21.543 232437 DEBUG nova.virt.hardware [None req-056dfa96-3193-43da-afb1-a26f5bdf549c 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec  6 02:19:21 np0005548731 nova_compute[232433]: 2025-12-06 07:19:21.543 232437 DEBUG nova.virt.hardware [None req-056dfa96-3193-43da-afb1-a26f5bdf549c 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec  6 02:19:21 np0005548731 nova_compute[232433]: 2025-12-06 07:19:21.543 232437 DEBUG nova.virt.hardware [None req-056dfa96-3193-43da-afb1-a26f5bdf549c 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec  6 02:19:21 np0005548731 nova_compute[232433]: 2025-12-06 07:19:21.543 232437 DEBUG nova.virt.hardware [None req-056dfa96-3193-43da-afb1-a26f5bdf549c 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec  6 02:19:21 np0005548731 nova_compute[232433]: 2025-12-06 07:19:21.544 232437 DEBUG nova.virt.hardware [None req-056dfa96-3193-43da-afb1-a26f5bdf549c 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec  6 02:19:21 np0005548731 nova_compute[232433]: 2025-12-06 07:19:21.547 232437 DEBUG oslo_concurrency.processutils [None req-056dfa96-3193-43da-afb1-a26f5bdf549c 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:19:21 np0005548731 nova_compute[232433]: 2025-12-06 07:19:21.979 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:19:22 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  6 02:19:22 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3193354649' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  6 02:19:22 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:19:22 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:19:22 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:19:22.680 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:19:22 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e255 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:19:23 np0005548731 nova_compute[232433]: 2025-12-06 07:19:23.047 232437 DEBUG oslo_concurrency.processutils [None req-056dfa96-3193-43da-afb1-a26f5bdf549c 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.501s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:19:23 np0005548731 nova_compute[232433]: 2025-12-06 07:19:23.073 232437 DEBUG nova.storage.rbd_utils [None req-056dfa96-3193-43da-afb1-a26f5bdf549c 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] rbd image 57f8a652-f11e-4342-bae0-13c592a18ad2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:19:23 np0005548731 nova_compute[232433]: 2025-12-06 07:19:23.077 232437 DEBUG oslo_concurrency.processutils [None req-056dfa96-3193-43da-afb1-a26f5bdf549c 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:19:23 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:19:23 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 02:19:23 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:19:23.119 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 02:19:23 np0005548731 nova_compute[232433]: 2025-12-06 07:19:23.240 232437 DEBUG nova.network.neutron [req-5cd74b10-fccd-48ac-8ea6-df6bdf7dc9c3 req-0adafe7e-9d43-46f8-a5fa-1473379c924b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 57f8a652-f11e-4342-bae0-13c592a18ad2] Updated VIF entry in instance network info cache for port 945300f6-d49b-4923-bb2f-c73996fd2be2. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  6 02:19:23 np0005548731 nova_compute[232433]: 2025-12-06 07:19:23.241 232437 DEBUG nova.network.neutron [req-5cd74b10-fccd-48ac-8ea6-df6bdf7dc9c3 req-0adafe7e-9d43-46f8-a5fa-1473379c924b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 57f8a652-f11e-4342-bae0-13c592a18ad2] Updating instance_info_cache with network_info: [{"id": "945300f6-d49b-4923-bb2f-c73996fd2be2", "address": "fa:16:3e:94:d0:b6", "network": {"id": "61a21643-77ba-4a09-8184-10dc4bd52b26", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-327155623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "35df5125c2cf4d29a6b975951af14910", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap945300f6-d4", "ovs_interfaceid": "945300f6-d49b-4923-bb2f-c73996fd2be2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 02:19:23 np0005548731 nova_compute[232433]: 2025-12-06 07:19:23.260 232437 DEBUG oslo_concurrency.lockutils [req-5cd74b10-fccd-48ac-8ea6-df6bdf7dc9c3 req-0adafe7e-9d43-46f8-a5fa-1473379c924b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-57f8a652-f11e-4342-bae0-13c592a18ad2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 02:19:23 np0005548731 nova_compute[232433]: 2025-12-06 07:19:23.360 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:19:23 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  6 02:19:23 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3612372411' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  6 02:19:23 np0005548731 nova_compute[232433]: 2025-12-06 07:19:23.512 232437 DEBUG oslo_concurrency.processutils [None req-056dfa96-3193-43da-afb1-a26f5bdf549c 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.435s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:19:23 np0005548731 nova_compute[232433]: 2025-12-06 07:19:23.514 232437 DEBUG nova.virt.libvirt.vif [None req-056dfa96-3193-43da-afb1-a26f5bdf549c 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T07:19:15Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-336213926',display_name='tempest-tempest.common.compute-instance-336213926',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-336213926',id=80,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCr7yYrMfc/vYIBdNKoOdmUaOBP7ItkOZSnl6KnIUpDDyT0eG/8qC7eAR3XEk9oTu2KpOhlwPPAoNOMJMN2jqpIUNlWMRBhDhCC2NIrxJ1iqIveG6g7oihNF2Fx4CQJCwg==',key_name='tempest-keypair-56698529',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='35df5125c2cf4d29a6b975951af14910',ramdisk_id='',reservation_id='r-g3bk7kc7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesTestJSON-2041841766',owner_user_name='tempest-AttachInterfacesTestJSON-2041841766-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:19:17Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='06f5b46553b24b39a1493d96ec4e503e',uuid=57f8a652-f11e-4342-bae0-13c592a18ad2,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "945300f6-d49b-4923-bb2f-c73996fd2be2", "address": "fa:16:3e:94:d0:b6", "network": {"id": "61a21643-77ba-4a09-8184-10dc4bd52b26", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-327155623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "35df5125c2cf4d29a6b975951af14910", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap945300f6-d4", "ovs_interfaceid": "945300f6-d49b-4923-bb2f-c73996fd2be2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec  6 02:19:23 np0005548731 nova_compute[232433]: 2025-12-06 07:19:23.515 232437 DEBUG nova.network.os_vif_util [None req-056dfa96-3193-43da-afb1-a26f5bdf549c 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Converting VIF {"id": "945300f6-d49b-4923-bb2f-c73996fd2be2", "address": "fa:16:3e:94:d0:b6", "network": {"id": "61a21643-77ba-4a09-8184-10dc4bd52b26", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-327155623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "35df5125c2cf4d29a6b975951af14910", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap945300f6-d4", "ovs_interfaceid": "945300f6-d49b-4923-bb2f-c73996fd2be2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 02:19:23 np0005548731 nova_compute[232433]: 2025-12-06 07:19:23.516 232437 DEBUG nova.network.os_vif_util [None req-056dfa96-3193-43da-afb1-a26f5bdf549c 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:94:d0:b6,bridge_name='br-int',has_traffic_filtering=True,id=945300f6-d49b-4923-bb2f-c73996fd2be2,network=Network(61a21643-77ba-4a09-8184-10dc4bd52b26),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap945300f6-d4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 02:19:23 np0005548731 nova_compute[232433]: 2025-12-06 07:19:23.517 232437 DEBUG nova.objects.instance [None req-056dfa96-3193-43da-afb1-a26f5bdf549c 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Lazy-loading 'pci_devices' on Instance uuid 57f8a652-f11e-4342-bae0-13c592a18ad2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:19:23 np0005548731 nova_compute[232433]: 2025-12-06 07:19:23.534 232437 DEBUG nova.virt.libvirt.driver [None req-056dfa96-3193-43da-afb1-a26f5bdf549c 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] [instance: 57f8a652-f11e-4342-bae0-13c592a18ad2] End _get_guest_xml xml=<domain type="kvm">
Dec  6 02:19:23 np0005548731 nova_compute[232433]:  <uuid>57f8a652-f11e-4342-bae0-13c592a18ad2</uuid>
Dec  6 02:19:23 np0005548731 nova_compute[232433]:  <name>instance-00000050</name>
Dec  6 02:19:23 np0005548731 nova_compute[232433]:  <memory>131072</memory>
Dec  6 02:19:23 np0005548731 nova_compute[232433]:  <vcpu>1</vcpu>
Dec  6 02:19:23 np0005548731 nova_compute[232433]:  <metadata>
Dec  6 02:19:23 np0005548731 nova_compute[232433]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec  6 02:19:23 np0005548731 nova_compute[232433]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec  6 02:19:23 np0005548731 nova_compute[232433]:      <nova:name>tempest-tempest.common.compute-instance-336213926</nova:name>
Dec  6 02:19:23 np0005548731 nova_compute[232433]:      <nova:creationTime>2025-12-06 07:19:21</nova:creationTime>
Dec  6 02:19:23 np0005548731 nova_compute[232433]:      <nova:flavor name="m1.nano">
Dec  6 02:19:23 np0005548731 nova_compute[232433]:        <nova:memory>128</nova:memory>
Dec  6 02:19:23 np0005548731 nova_compute[232433]:        <nova:disk>1</nova:disk>
Dec  6 02:19:23 np0005548731 nova_compute[232433]:        <nova:swap>0</nova:swap>
Dec  6 02:19:23 np0005548731 nova_compute[232433]:        <nova:ephemeral>0</nova:ephemeral>
Dec  6 02:19:23 np0005548731 nova_compute[232433]:        <nova:vcpus>1</nova:vcpus>
Dec  6 02:19:23 np0005548731 nova_compute[232433]:      </nova:flavor>
Dec  6 02:19:23 np0005548731 nova_compute[232433]:      <nova:owner>
Dec  6 02:19:23 np0005548731 nova_compute[232433]:        <nova:user uuid="06f5b46553b24b39a1493d96ec4e503e">tempest-AttachInterfacesTestJSON-2041841766-project-member</nova:user>
Dec  6 02:19:23 np0005548731 nova_compute[232433]:        <nova:project uuid="35df5125c2cf4d29a6b975951af14910">tempest-AttachInterfacesTestJSON-2041841766</nova:project>
Dec  6 02:19:23 np0005548731 nova_compute[232433]:      </nova:owner>
Dec  6 02:19:23 np0005548731 nova_compute[232433]:      <nova:root type="image" uuid="6efab05d-c7cf-4770-a5c3-c806a2739063"/>
Dec  6 02:19:23 np0005548731 nova_compute[232433]:      <nova:ports>
Dec  6 02:19:23 np0005548731 nova_compute[232433]:        <nova:port uuid="945300f6-d49b-4923-bb2f-c73996fd2be2">
Dec  6 02:19:23 np0005548731 nova_compute[232433]:          <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Dec  6 02:19:23 np0005548731 nova_compute[232433]:        </nova:port>
Dec  6 02:19:23 np0005548731 nova_compute[232433]:      </nova:ports>
Dec  6 02:19:23 np0005548731 nova_compute[232433]:    </nova:instance>
Dec  6 02:19:23 np0005548731 nova_compute[232433]:  </metadata>
Dec  6 02:19:23 np0005548731 nova_compute[232433]:  <sysinfo type="smbios">
Dec  6 02:19:23 np0005548731 nova_compute[232433]:    <system>
Dec  6 02:19:23 np0005548731 nova_compute[232433]:      <entry name="manufacturer">RDO</entry>
Dec  6 02:19:23 np0005548731 nova_compute[232433]:      <entry name="product">OpenStack Compute</entry>
Dec  6 02:19:23 np0005548731 nova_compute[232433]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec  6 02:19:23 np0005548731 nova_compute[232433]:      <entry name="serial">57f8a652-f11e-4342-bae0-13c592a18ad2</entry>
Dec  6 02:19:23 np0005548731 nova_compute[232433]:      <entry name="uuid">57f8a652-f11e-4342-bae0-13c592a18ad2</entry>
Dec  6 02:19:23 np0005548731 nova_compute[232433]:      <entry name="family">Virtual Machine</entry>
Dec  6 02:19:23 np0005548731 nova_compute[232433]:    </system>
Dec  6 02:19:23 np0005548731 nova_compute[232433]:  </sysinfo>
Dec  6 02:19:23 np0005548731 nova_compute[232433]:  <os>
Dec  6 02:19:23 np0005548731 nova_compute[232433]:    <type arch="x86_64" machine="q35">hvm</type>
Dec  6 02:19:23 np0005548731 nova_compute[232433]:    <boot dev="hd"/>
Dec  6 02:19:23 np0005548731 nova_compute[232433]:    <smbios mode="sysinfo"/>
Dec  6 02:19:23 np0005548731 nova_compute[232433]:  </os>
Dec  6 02:19:23 np0005548731 nova_compute[232433]:  <features>
Dec  6 02:19:23 np0005548731 nova_compute[232433]:    <acpi/>
Dec  6 02:19:23 np0005548731 nova_compute[232433]:    <apic/>
Dec  6 02:19:23 np0005548731 nova_compute[232433]:    <vmcoreinfo/>
Dec  6 02:19:23 np0005548731 nova_compute[232433]:  </features>
Dec  6 02:19:23 np0005548731 nova_compute[232433]:  <clock offset="utc">
Dec  6 02:19:23 np0005548731 nova_compute[232433]:    <timer name="pit" tickpolicy="delay"/>
Dec  6 02:19:23 np0005548731 nova_compute[232433]:    <timer name="rtc" tickpolicy="catchup"/>
Dec  6 02:19:23 np0005548731 nova_compute[232433]:    <timer name="hpet" present="no"/>
Dec  6 02:19:23 np0005548731 nova_compute[232433]:  </clock>
Dec  6 02:19:23 np0005548731 nova_compute[232433]:  <cpu mode="custom" match="exact">
Dec  6 02:19:23 np0005548731 nova_compute[232433]:    <model>Nehalem</model>
Dec  6 02:19:23 np0005548731 nova_compute[232433]:    <topology sockets="1" cores="1" threads="1"/>
Dec  6 02:19:23 np0005548731 nova_compute[232433]:  </cpu>
Dec  6 02:19:23 np0005548731 nova_compute[232433]:  <devices>
Dec  6 02:19:23 np0005548731 nova_compute[232433]:    <disk type="network" device="disk">
Dec  6 02:19:23 np0005548731 nova_compute[232433]:      <driver type="raw" cache="none"/>
Dec  6 02:19:23 np0005548731 nova_compute[232433]:      <source protocol="rbd" name="vms/57f8a652-f11e-4342-bae0-13c592a18ad2_disk">
Dec  6 02:19:23 np0005548731 nova_compute[232433]:        <host name="192.168.122.100" port="6789"/>
Dec  6 02:19:23 np0005548731 nova_compute[232433]:        <host name="192.168.122.102" port="6789"/>
Dec  6 02:19:23 np0005548731 nova_compute[232433]:        <host name="192.168.122.101" port="6789"/>
Dec  6 02:19:23 np0005548731 nova_compute[232433]:      </source>
Dec  6 02:19:23 np0005548731 nova_compute[232433]:      <auth username="openstack">
Dec  6 02:19:23 np0005548731 nova_compute[232433]:        <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec  6 02:19:23 np0005548731 nova_compute[232433]:      </auth>
Dec  6 02:19:23 np0005548731 nova_compute[232433]:      <target dev="vda" bus="virtio"/>
Dec  6 02:19:23 np0005548731 nova_compute[232433]:    </disk>
Dec  6 02:19:23 np0005548731 nova_compute[232433]:    <disk type="network" device="cdrom">
Dec  6 02:19:23 np0005548731 nova_compute[232433]:      <driver type="raw" cache="none"/>
Dec  6 02:19:23 np0005548731 nova_compute[232433]:      <source protocol="rbd" name="vms/57f8a652-f11e-4342-bae0-13c592a18ad2_disk.config">
Dec  6 02:19:23 np0005548731 nova_compute[232433]:        <host name="192.168.122.100" port="6789"/>
Dec  6 02:19:23 np0005548731 nova_compute[232433]:        <host name="192.168.122.102" port="6789"/>
Dec  6 02:19:23 np0005548731 nova_compute[232433]:        <host name="192.168.122.101" port="6789"/>
Dec  6 02:19:23 np0005548731 nova_compute[232433]:      </source>
Dec  6 02:19:23 np0005548731 nova_compute[232433]:      <auth username="openstack">
Dec  6 02:19:23 np0005548731 nova_compute[232433]:        <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec  6 02:19:23 np0005548731 nova_compute[232433]:      </auth>
Dec  6 02:19:23 np0005548731 nova_compute[232433]:      <target dev="sda" bus="sata"/>
Dec  6 02:19:23 np0005548731 nova_compute[232433]:    </disk>
Dec  6 02:19:23 np0005548731 nova_compute[232433]:    <interface type="ethernet">
Dec  6 02:19:23 np0005548731 nova_compute[232433]:      <mac address="fa:16:3e:94:d0:b6"/>
Dec  6 02:19:23 np0005548731 nova_compute[232433]:      <model type="virtio"/>
Dec  6 02:19:23 np0005548731 nova_compute[232433]:      <driver name="vhost" rx_queue_size="512"/>
Dec  6 02:19:23 np0005548731 nova_compute[232433]:      <mtu size="1442"/>
Dec  6 02:19:23 np0005548731 nova_compute[232433]:      <target dev="tap945300f6-d4"/>
Dec  6 02:19:23 np0005548731 nova_compute[232433]:    </interface>
Dec  6 02:19:23 np0005548731 nova_compute[232433]:    <serial type="pty">
Dec  6 02:19:23 np0005548731 nova_compute[232433]:      <log file="/var/lib/nova/instances/57f8a652-f11e-4342-bae0-13c592a18ad2/console.log" append="off"/>
Dec  6 02:19:23 np0005548731 nova_compute[232433]:    </serial>
Dec  6 02:19:23 np0005548731 nova_compute[232433]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec  6 02:19:23 np0005548731 nova_compute[232433]:    <video>
Dec  6 02:19:23 np0005548731 nova_compute[232433]:      <model type="virtio"/>
Dec  6 02:19:23 np0005548731 nova_compute[232433]:    </video>
Dec  6 02:19:23 np0005548731 nova_compute[232433]:    <input type="tablet" bus="usb"/>
Dec  6 02:19:23 np0005548731 nova_compute[232433]:    <rng model="virtio">
Dec  6 02:19:23 np0005548731 nova_compute[232433]:      <backend model="random">/dev/urandom</backend>
Dec  6 02:19:23 np0005548731 nova_compute[232433]:    </rng>
Dec  6 02:19:23 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root"/>
Dec  6 02:19:23 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:19:23 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:19:23 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:19:23 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:19:23 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:19:23 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:19:23 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:19:23 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:19:23 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:19:23 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:19:23 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:19:23 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:19:23 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:19:23 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:19:23 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:19:23 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:19:23 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:19:23 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:19:23 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:19:23 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:19:23 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:19:23 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:19:23 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:19:23 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:19:23 np0005548731 nova_compute[232433]:    <controller type="usb" index="0"/>
Dec  6 02:19:23 np0005548731 nova_compute[232433]:    <memballoon model="virtio">
Dec  6 02:19:23 np0005548731 nova_compute[232433]:      <stats period="10"/>
Dec  6 02:19:23 np0005548731 nova_compute[232433]:    </memballoon>
Dec  6 02:19:23 np0005548731 nova_compute[232433]:  </devices>
Dec  6 02:19:23 np0005548731 nova_compute[232433]: </domain>
Dec  6 02:19:23 np0005548731 nova_compute[232433]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec  6 02:19:23 np0005548731 nova_compute[232433]: 2025-12-06 07:19:23.536 232437 DEBUG nova.compute.manager [None req-056dfa96-3193-43da-afb1-a26f5bdf549c 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] [instance: 57f8a652-f11e-4342-bae0-13c592a18ad2] Preparing to wait for external event network-vif-plugged-945300f6-d49b-4923-bb2f-c73996fd2be2 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec  6 02:19:23 np0005548731 nova_compute[232433]: 2025-12-06 07:19:23.536 232437 DEBUG oslo_concurrency.lockutils [None req-056dfa96-3193-43da-afb1-a26f5bdf549c 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Acquiring lock "57f8a652-f11e-4342-bae0-13c592a18ad2-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:19:23 np0005548731 nova_compute[232433]: 2025-12-06 07:19:23.537 232437 DEBUG oslo_concurrency.lockutils [None req-056dfa96-3193-43da-afb1-a26f5bdf549c 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Lock "57f8a652-f11e-4342-bae0-13c592a18ad2-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:19:23 np0005548731 nova_compute[232433]: 2025-12-06 07:19:23.537 232437 DEBUG oslo_concurrency.lockutils [None req-056dfa96-3193-43da-afb1-a26f5bdf549c 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Lock "57f8a652-f11e-4342-bae0-13c592a18ad2-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:19:23 np0005548731 nova_compute[232433]: 2025-12-06 07:19:23.538 232437 DEBUG nova.virt.libvirt.vif [None req-056dfa96-3193-43da-afb1-a26f5bdf549c 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T07:19:15Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-336213926',display_name='tempest-tempest.common.compute-instance-336213926',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-336213926',id=80,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCr7yYrMfc/vYIBdNKoOdmUaOBP7ItkOZSnl6KnIUpDDyT0eG/8qC7eAR3XEk9oTu2KpOhlwPPAoNOMJMN2jqpIUNlWMRBhDhCC2NIrxJ1iqIveG6g7oihNF2Fx4CQJCwg==',key_name='tempest-keypair-56698529',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='35df5125c2cf4d29a6b975951af14910',ramdisk_id='',reservation_id='r-g3bk7kc7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesTestJSON-2041841766',owner_user_name='tempest-AttachInterfacesTestJSON-2041841766-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:19:17Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='06f5b46553b24b39a1493d96ec4e503e',uuid=57f8a652-f11e-4342-bae0-13c592a18ad2,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "945300f6-d49b-4923-bb2f-c73996fd2be2", "address": "fa:16:3e:94:d0:b6", "network": {"id": "61a21643-77ba-4a09-8184-10dc4bd52b26", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-327155623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "35df5125c2cf4d29a6b975951af14910", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap945300f6-d4", "ovs_interfaceid": "945300f6-d49b-4923-bb2f-c73996fd2be2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec  6 02:19:23 np0005548731 nova_compute[232433]: 2025-12-06 07:19:23.538 232437 DEBUG nova.network.os_vif_util [None req-056dfa96-3193-43da-afb1-a26f5bdf549c 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Converting VIF {"id": "945300f6-d49b-4923-bb2f-c73996fd2be2", "address": "fa:16:3e:94:d0:b6", "network": {"id": "61a21643-77ba-4a09-8184-10dc4bd52b26", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-327155623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "35df5125c2cf4d29a6b975951af14910", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap945300f6-d4", "ovs_interfaceid": "945300f6-d49b-4923-bb2f-c73996fd2be2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 02:19:23 np0005548731 nova_compute[232433]: 2025-12-06 07:19:23.538 232437 DEBUG nova.network.os_vif_util [None req-056dfa96-3193-43da-afb1-a26f5bdf549c 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:94:d0:b6,bridge_name='br-int',has_traffic_filtering=True,id=945300f6-d49b-4923-bb2f-c73996fd2be2,network=Network(61a21643-77ba-4a09-8184-10dc4bd52b26),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap945300f6-d4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 02:19:23 np0005548731 nova_compute[232433]: 2025-12-06 07:19:23.539 232437 DEBUG os_vif [None req-056dfa96-3193-43da-afb1-a26f5bdf549c 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:94:d0:b6,bridge_name='br-int',has_traffic_filtering=True,id=945300f6-d49b-4923-bb2f-c73996fd2be2,network=Network(61a21643-77ba-4a09-8184-10dc4bd52b26),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap945300f6-d4') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec  6 02:19:23 np0005548731 nova_compute[232433]: 2025-12-06 07:19:23.539 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:19:23 np0005548731 nova_compute[232433]: 2025-12-06 07:19:23.539 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:19:23 np0005548731 nova_compute[232433]: 2025-12-06 07:19:23.540 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  6 02:19:23 np0005548731 nova_compute[232433]: 2025-12-06 07:19:23.542 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:19:23 np0005548731 nova_compute[232433]: 2025-12-06 07:19:23.542 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap945300f6-d4, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:19:23 np0005548731 nova_compute[232433]: 2025-12-06 07:19:23.543 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap945300f6-d4, col_values=(('external_ids', {'iface-id': '945300f6-d49b-4923-bb2f-c73996fd2be2', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:94:d0:b6', 'vm-uuid': '57f8a652-f11e-4342-bae0-13c592a18ad2'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:19:23 np0005548731 nova_compute[232433]: 2025-12-06 07:19:23.544 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:19:23 np0005548731 NetworkManager[49182]: <info>  [1765005563.5448] manager: (tap945300f6-d4): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/156)
Dec  6 02:19:23 np0005548731 nova_compute[232433]: 2025-12-06 07:19:23.546 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  6 02:19:23 np0005548731 nova_compute[232433]: 2025-12-06 07:19:23.551 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:19:23 np0005548731 nova_compute[232433]: 2025-12-06 07:19:23.551 232437 INFO os_vif [None req-056dfa96-3193-43da-afb1-a26f5bdf549c 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:94:d0:b6,bridge_name='br-int',has_traffic_filtering=True,id=945300f6-d49b-4923-bb2f-c73996fd2be2,network=Network(61a21643-77ba-4a09-8184-10dc4bd52b26),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap945300f6-d4')#033[00m
Dec  6 02:19:23 np0005548731 nova_compute[232433]: 2025-12-06 07:19:23.604 232437 DEBUG nova.virt.libvirt.driver [None req-056dfa96-3193-43da-afb1-a26f5bdf549c 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  6 02:19:23 np0005548731 nova_compute[232433]: 2025-12-06 07:19:23.604 232437 DEBUG nova.virt.libvirt.driver [None req-056dfa96-3193-43da-afb1-a26f5bdf549c 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  6 02:19:23 np0005548731 nova_compute[232433]: 2025-12-06 07:19:23.605 232437 DEBUG nova.virt.libvirt.driver [None req-056dfa96-3193-43da-afb1-a26f5bdf549c 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] No VIF found with MAC fa:16:3e:94:d0:b6, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec  6 02:19:23 np0005548731 nova_compute[232433]: 2025-12-06 07:19:23.605 232437 INFO nova.virt.libvirt.driver [None req-056dfa96-3193-43da-afb1-a26f5bdf549c 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] [instance: 57f8a652-f11e-4342-bae0-13c592a18ad2] Using config drive#033[00m
Dec  6 02:19:23 np0005548731 nova_compute[232433]: 2025-12-06 07:19:23.628 232437 DEBUG nova.storage.rbd_utils [None req-056dfa96-3193-43da-afb1-a26f5bdf549c 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] rbd image 57f8a652-f11e-4342-bae0-13c592a18ad2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:19:24 np0005548731 nova_compute[232433]: 2025-12-06 07:19:24.109 232437 INFO nova.virt.libvirt.driver [None req-056dfa96-3193-43da-afb1-a26f5bdf549c 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] [instance: 57f8a652-f11e-4342-bae0-13c592a18ad2] Creating config drive at /var/lib/nova/instances/57f8a652-f11e-4342-bae0-13c592a18ad2/disk.config#033[00m
Dec  6 02:19:24 np0005548731 nova_compute[232433]: 2025-12-06 07:19:24.116 232437 DEBUG oslo_concurrency.processutils [None req-056dfa96-3193-43da-afb1-a26f5bdf549c 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/57f8a652-f11e-4342-bae0-13c592a18ad2/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp_q0lvywd execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:19:24 np0005548731 nova_compute[232433]: 2025-12-06 07:19:24.247 232437 DEBUG oslo_concurrency.processutils [None req-056dfa96-3193-43da-afb1-a26f5bdf549c 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/57f8a652-f11e-4342-bae0-13c592a18ad2/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp_q0lvywd" returned: 0 in 0.131s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:19:24 np0005548731 nova_compute[232433]: 2025-12-06 07:19:24.270 232437 DEBUG nova.storage.rbd_utils [None req-056dfa96-3193-43da-afb1-a26f5bdf549c 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] rbd image 57f8a652-f11e-4342-bae0-13c592a18ad2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:19:24 np0005548731 nova_compute[232433]: 2025-12-06 07:19:24.273 232437 DEBUG oslo_concurrency.processutils [None req-056dfa96-3193-43da-afb1-a26f5bdf549c 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/57f8a652-f11e-4342-bae0-13c592a18ad2/disk.config 57f8a652-f11e-4342-bae0-13c592a18ad2_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:19:24 np0005548731 nova_compute[232433]: 2025-12-06 07:19:24.400 232437 DEBUG oslo_concurrency.processutils [None req-056dfa96-3193-43da-afb1-a26f5bdf549c 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/57f8a652-f11e-4342-bae0-13c592a18ad2/disk.config 57f8a652-f11e-4342-bae0-13c592a18ad2_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.126s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:19:24 np0005548731 nova_compute[232433]: 2025-12-06 07:19:24.401 232437 INFO nova.virt.libvirt.driver [None req-056dfa96-3193-43da-afb1-a26f5bdf549c 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] [instance: 57f8a652-f11e-4342-bae0-13c592a18ad2] Deleting local config drive /var/lib/nova/instances/57f8a652-f11e-4342-bae0-13c592a18ad2/disk.config because it was imported into RBD.#033[00m
Dec  6 02:19:24 np0005548731 kernel: tap945300f6-d4: entered promiscuous mode
Dec  6 02:19:24 np0005548731 NetworkManager[49182]: <info>  [1765005564.4428] manager: (tap945300f6-d4): new Tun device (/org/freedesktop/NetworkManager/Devices/157)
Dec  6 02:19:24 np0005548731 ovn_controller[133927]: 2025-12-06T07:19:24Z|00296|binding|INFO|Claiming lport 945300f6-d49b-4923-bb2f-c73996fd2be2 for this chassis.
Dec  6 02:19:24 np0005548731 nova_compute[232433]: 2025-12-06 07:19:24.443 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:19:24 np0005548731 ovn_controller[133927]: 2025-12-06T07:19:24Z|00297|binding|INFO|945300f6-d49b-4923-bb2f-c73996fd2be2: Claiming fa:16:3e:94:d0:b6 10.100.0.6
Dec  6 02:19:24 np0005548731 nova_compute[232433]: 2025-12-06 07:19:24.447 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:19:24 np0005548731 nova_compute[232433]: 2025-12-06 07:19:24.451 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:19:24 np0005548731 nova_compute[232433]: 2025-12-06 07:19:24.455 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:19:24 np0005548731 nova_compute[232433]: 2025-12-06 07:19:24.462 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:19:24 np0005548731 NetworkManager[49182]: <info>  [1765005564.4626] manager: (patch-br-int-to-provnet-9e78c1a1-68f4-477a-abaa-13a98bde06e5): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/158)
Dec  6 02:19:24 np0005548731 NetworkManager[49182]: <info>  [1765005564.4630] manager: (patch-provnet-9e78c1a1-68f4-477a-abaa-13a98bde06e5-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/159)
Dec  6 02:19:24 np0005548731 systemd-udevd[265452]: Network interface NamePolicy= disabled on kernel command line.
Dec  6 02:19:24 np0005548731 systemd-machined[195355]: New machine qemu-33-instance-00000050.
Dec  6 02:19:24 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:19:24.474 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:94:d0:b6 10.100.0.6'], port_security=['fa:16:3e:94:d0:b6 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '57f8a652-f11e-4342-bae0-13c592a18ad2', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-61a21643-77ba-4a09-8184-10dc4bd52b26', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '35df5125c2cf4d29a6b975951af14910', 'neutron:revision_number': '2', 'neutron:security_group_ids': '6207e763-a213-4f4e-8aa9-04781b6722bb', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=85f9937f-1b1f-4430-9972-982ebc33633b, chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], logical_port=945300f6-d49b-4923-bb2f-c73996fd2be2) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 02:19:24 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:19:24.475 143965 INFO neutron.agent.ovn.metadata.agent [-] Port 945300f6-d49b-4923-bb2f-c73996fd2be2 in datapath 61a21643-77ba-4a09-8184-10dc4bd52b26 bound to our chassis#033[00m
Dec  6 02:19:24 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:19:24.477 143965 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 61a21643-77ba-4a09-8184-10dc4bd52b26#033[00m
Dec  6 02:19:24 np0005548731 NetworkManager[49182]: <info>  [1765005564.4831] device (tap945300f6-d4): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec  6 02:19:24 np0005548731 NetworkManager[49182]: <info>  [1765005564.4839] device (tap945300f6-d4): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec  6 02:19:24 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:19:24.487 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[865e24ca-84be-4f19-9fc6-a6984d49fbba]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:19:24 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:19:24.488 143965 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap61a21643-71 in ovnmeta-61a21643-77ba-4a09-8184-10dc4bd52b26 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Dec  6 02:19:24 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:19:24.489 236668 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap61a21643-70 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Dec  6 02:19:24 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:19:24.490 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[54158e52-f7e9-47e6-a5ce-222dfca74b28]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:19:24 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:19:24.490 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[0b2a6971-0194-41b4-84a2-9155d67a6a96]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:19:24 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:19:24.499 144079 DEBUG oslo.privsep.daemon [-] privsep: reply[9532dbd9-d83f-4b94-a791-e80afc26dd17]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:19:24 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:19:24.522 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[71d5503b-5353-4de3-8a1e-71b5413e9150]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:19:24 np0005548731 systemd[1]: Started Virtual Machine qemu-33-instance-00000050.
Dec  6 02:19:24 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:19:24.546 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[a7ad929c-7eee-4ead-bb7f-b1c506ffdccb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:19:24 np0005548731 NetworkManager[49182]: <info>  [1765005564.5563] manager: (tap61a21643-70): new Veth device (/org/freedesktop/NetworkManager/Devices/160)
Dec  6 02:19:24 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:19:24.556 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[23e89e78-d535-4dc9-813d-21b069378f37]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:19:24 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:19:24.586 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[c5d26be6-6d3b-4b18-88f2-11ad71831825]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:19:24 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:19:24.588 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[c598cb6a-499b-4e3b-9682-301104c3cc8a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:19:24 np0005548731 NetworkManager[49182]: <info>  [1765005564.6072] device (tap61a21643-70): carrier: link connected
Dec  6 02:19:24 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:19:24.614 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[bec5b904-d36b-4e0d-a402-70fbb922a509]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:19:24 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:19:24.630 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[ad361c6c-f214-4629-bd7a-61de2485f53a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap61a21643-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:91:67:b1'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 99], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 582661, 'reachable_time': 15929, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 265485, 'error': None, 'target': 'ovnmeta-61a21643-77ba-4a09-8184-10dc4bd52b26', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:19:24 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:19:24.645 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[5fb0c5d1-65ac-4822-923e-2a8c1eb35055]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe91:67b1'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 582661, 'tstamp': 582661}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 265486, 'error': None, 'target': 'ovnmeta-61a21643-77ba-4a09-8184-10dc4bd52b26', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:19:24 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:19:24.662 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[b9368b74-885f-4b40-be9e-ae16acbc403a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap61a21643-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:91:67:b1'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 99], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 582661, 'reachable_time': 15929, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 265487, 'error': None, 'target': 'ovnmeta-61a21643-77ba-4a09-8184-10dc4bd52b26', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:19:24 np0005548731 nova_compute[232433]: 2025-12-06 07:19:24.667 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:19:24 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:19:24 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:19:24 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:19:24.682 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:19:24 np0005548731 nova_compute[232433]: 2025-12-06 07:19:24.691 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:19:24 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:19:24.701 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[99ca298a-b6d4-42ba-92a5-61523fd2a16f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:19:24 np0005548731 ovn_controller[133927]: 2025-12-06T07:19:24Z|00298|binding|INFO|Setting lport 945300f6-d49b-4923-bb2f-c73996fd2be2 ovn-installed in OVS
Dec  6 02:19:24 np0005548731 ovn_controller[133927]: 2025-12-06T07:19:24Z|00299|binding|INFO|Setting lport 945300f6-d49b-4923-bb2f-c73996fd2be2 up in Southbound
Dec  6 02:19:24 np0005548731 nova_compute[232433]: 2025-12-06 07:19:24.702 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:19:24 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:19:24.753 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[43546241-92d1-4772-876a-32ebd90ff1a3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:19:24 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:19:24.755 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap61a21643-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:19:24 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:19:24.755 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  6 02:19:24 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:19:24.755 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap61a21643-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:19:24 np0005548731 nova_compute[232433]: 2025-12-06 07:19:24.757 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:19:24 np0005548731 NetworkManager[49182]: <info>  [1765005564.7578] manager: (tap61a21643-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/161)
Dec  6 02:19:24 np0005548731 kernel: tap61a21643-70: entered promiscuous mode
Dec  6 02:19:24 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:19:24.760 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap61a21643-70, col_values=(('external_ids', {'iface-id': '8e8469cb-4434-4b4c-9dcf-a6a8244c2597'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:19:24 np0005548731 ovn_controller[133927]: 2025-12-06T07:19:24Z|00300|binding|INFO|Releasing lport 8e8469cb-4434-4b4c-9dcf-a6a8244c2597 from this chassis (sb_readonly=0)
Dec  6 02:19:24 np0005548731 nova_compute[232433]: 2025-12-06 07:19:24.761 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:19:24 np0005548731 nova_compute[232433]: 2025-12-06 07:19:24.762 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:19:24 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:19:24.763 143965 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/61a21643-77ba-4a09-8184-10dc4bd52b26.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/61a21643-77ba-4a09-8184-10dc4bd52b26.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Dec  6 02:19:24 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:19:24.763 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[c711c0fd-5208-413e-b2b3-3cc42e8f446e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:19:24 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:19:24.764 143965 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec  6 02:19:24 np0005548731 ovn_metadata_agent[143960]: global
Dec  6 02:19:24 np0005548731 ovn_metadata_agent[143960]:    log         /dev/log local0 debug
Dec  6 02:19:24 np0005548731 ovn_metadata_agent[143960]:    log-tag     haproxy-metadata-proxy-61a21643-77ba-4a09-8184-10dc4bd52b26
Dec  6 02:19:24 np0005548731 ovn_metadata_agent[143960]:    user        root
Dec  6 02:19:24 np0005548731 ovn_metadata_agent[143960]:    group       root
Dec  6 02:19:24 np0005548731 ovn_metadata_agent[143960]:    maxconn     1024
Dec  6 02:19:24 np0005548731 ovn_metadata_agent[143960]:    pidfile     /var/lib/neutron/external/pids/61a21643-77ba-4a09-8184-10dc4bd52b26.pid.haproxy
Dec  6 02:19:24 np0005548731 ovn_metadata_agent[143960]:    daemon
Dec  6 02:19:24 np0005548731 ovn_metadata_agent[143960]: 
Dec  6 02:19:24 np0005548731 ovn_metadata_agent[143960]: defaults
Dec  6 02:19:24 np0005548731 ovn_metadata_agent[143960]:    log global
Dec  6 02:19:24 np0005548731 ovn_metadata_agent[143960]:    mode http
Dec  6 02:19:24 np0005548731 ovn_metadata_agent[143960]:    option httplog
Dec  6 02:19:24 np0005548731 ovn_metadata_agent[143960]:    option dontlognull
Dec  6 02:19:24 np0005548731 ovn_metadata_agent[143960]:    option http-server-close
Dec  6 02:19:24 np0005548731 ovn_metadata_agent[143960]:    option forwardfor
Dec  6 02:19:24 np0005548731 ovn_metadata_agent[143960]:    retries                 3
Dec  6 02:19:24 np0005548731 ovn_metadata_agent[143960]:    timeout http-request    30s
Dec  6 02:19:24 np0005548731 ovn_metadata_agent[143960]:    timeout connect         30s
Dec  6 02:19:24 np0005548731 ovn_metadata_agent[143960]:    timeout client          32s
Dec  6 02:19:24 np0005548731 ovn_metadata_agent[143960]:    timeout server          32s
Dec  6 02:19:24 np0005548731 ovn_metadata_agent[143960]:    timeout http-keep-alive 30s
Dec  6 02:19:24 np0005548731 ovn_metadata_agent[143960]: 
Dec  6 02:19:24 np0005548731 ovn_metadata_agent[143960]: 
Dec  6 02:19:24 np0005548731 ovn_metadata_agent[143960]: listen listener
Dec  6 02:19:24 np0005548731 ovn_metadata_agent[143960]:    bind 169.254.169.254:80
Dec  6 02:19:24 np0005548731 ovn_metadata_agent[143960]:    server metadata /var/lib/neutron/metadata_proxy
Dec  6 02:19:24 np0005548731 ovn_metadata_agent[143960]:    http-request add-header X-OVN-Network-ID 61a21643-77ba-4a09-8184-10dc4bd52b26
Dec  6 02:19:24 np0005548731 ovn_metadata_agent[143960]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Dec  6 02:19:24 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:19:24.765 143965 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-61a21643-77ba-4a09-8184-10dc4bd52b26', 'env', 'PROCESS_TAG=haproxy-61a21643-77ba-4a09-8184-10dc4bd52b26', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/61a21643-77ba-4a09-8184-10dc4bd52b26.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Dec  6 02:19:24 np0005548731 nova_compute[232433]: 2025-12-06 07:19:24.774 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:19:24 np0005548731 ceph-mon[77458]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec  6 02:19:24 np0005548731 ceph-mon[77458]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 3000.0 total, 600.0 interval#012Cumulative writes: 7476 writes, 39K keys, 7476 commit groups, 1.0 writes per commit group, ingest: 0.08 GB, 0.03 MB/s#012Cumulative WAL: 7476 writes, 7476 syncs, 1.00 writes per sync, written: 0.08 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1561 writes, 7352 keys, 1561 commit groups, 1.0 writes per commit group, ingest: 15.55 MB, 0.03 MB/s#012Interval WAL: 1561 writes, 1561 syncs, 1.00 writes per sync, written: 0.02 GB, 0.03 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     21.7      2.13              0.13        20    0.106       0      0       0.0       0.0#012  L6      1/0    9.15 MB   0.0      0.2     0.0      0.2       0.2      0.0       0.0   3.8     64.6     53.3      3.26              0.53        19    0.172    107K    11K       0.0       0.0#012 Sum      1/0    9.15 MB   0.0      0.2     0.0      0.2       0.2      0.1       0.0   4.8     39.1     40.8      5.39              0.66        39    0.138    107K    11K       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   5.3     66.5     66.3      0.55              0.11         6    0.091     21K   2100       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) 
Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.2     0.0      0.2       0.2      0.0       0.0   0.0     64.6     53.3      3.26              0.53        19    0.172    107K    11K       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     21.7      2.12              0.13        19    0.112       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.6      0.00              0.00         1    0.003       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 3000.0 total, 600.0 interval#012Flush(GB): cumulative 0.045, interval 0.007#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.21 GB write, 0.07 MB/s write, 0.21 GB read, 0.07 MB/s read, 5.4 seconds#012Interval compaction: 0.04 GB write, 0.06 MB/s write, 0.04 GB read, 0.06 MB/s read, 0.5 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x5619171151f0#2 capacity: 304.00 MB usage: 23.21 MB table_size: 0 occupancy: 18446744073709551615 collections: 6 last_copies: 0 last_secs: 0.000158 secs_since: 0#012Block cache entry stats(count,size,portion): 
DataBlock(1325,22.39 MB,7.36668%) FilterBlock(39,305.67 KB,0.0981933%) IndexBlock(39,533.58 KB,0.171405%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Dec  6 02:19:24 np0005548731 nova_compute[232433]: 2025-12-06 07:19:24.964 232437 DEBUG nova.compute.manager [req-82b4bb01-5b28-4c22-b86b-093dd4896e6a req-7c514ea7-99c9-4407-8b48-08cc0848f623 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 57f8a652-f11e-4342-bae0-13c592a18ad2] Received event network-vif-plugged-945300f6-d49b-4923-bb2f-c73996fd2be2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:19:24 np0005548731 nova_compute[232433]: 2025-12-06 07:19:24.965 232437 DEBUG oslo_concurrency.lockutils [req-82b4bb01-5b28-4c22-b86b-093dd4896e6a req-7c514ea7-99c9-4407-8b48-08cc0848f623 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "57f8a652-f11e-4342-bae0-13c592a18ad2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:19:24 np0005548731 nova_compute[232433]: 2025-12-06 07:19:24.965 232437 DEBUG oslo_concurrency.lockutils [req-82b4bb01-5b28-4c22-b86b-093dd4896e6a req-7c514ea7-99c9-4407-8b48-08cc0848f623 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "57f8a652-f11e-4342-bae0-13c592a18ad2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:19:24 np0005548731 nova_compute[232433]: 2025-12-06 07:19:24.966 232437 DEBUG oslo_concurrency.lockutils [req-82b4bb01-5b28-4c22-b86b-093dd4896e6a req-7c514ea7-99c9-4407-8b48-08cc0848f623 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "57f8a652-f11e-4342-bae0-13c592a18ad2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:19:24 np0005548731 nova_compute[232433]: 2025-12-06 07:19:24.966 232437 DEBUG nova.compute.manager [req-82b4bb01-5b28-4c22-b86b-093dd4896e6a req-7c514ea7-99c9-4407-8b48-08cc0848f623 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 57f8a652-f11e-4342-bae0-13c592a18ad2] Processing event network-vif-plugged-945300f6-d49b-4923-bb2f-c73996fd2be2 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Dec  6 02:19:25 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:19:25.079 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=31, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:ec:b3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '32:72:e7:89:e0:7d'}, ipsec=False) old=SB_Global(nb_cfg=30) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 02:19:25 np0005548731 nova_compute[232433]: 2025-12-06 07:19:25.080 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:19:25 np0005548731 podman[265537]: 2025-12-06 07:19:25.100785407 +0000 UTC m=+0.045353405 container create 391af68a55ef50bd67fb2228694a2fdc35a763df5c3307671d2236dcb1503973 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-61a21643-77ba-4a09-8184-10dc4bd52b26, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125)
Dec  6 02:19:25 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:19:25 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:19:25 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:19:25.122 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:19:25 np0005548731 systemd[1]: Started libpod-conmon-391af68a55ef50bd67fb2228694a2fdc35a763df5c3307671d2236dcb1503973.scope.
Dec  6 02:19:25 np0005548731 systemd[1]: Started libcrun container.
Dec  6 02:19:25 np0005548731 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/567beccdc315963a6e46d72b2beaef5e73442ad91a15e4ef72933fdb7046e57b/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec  6 02:19:25 np0005548731 podman[265537]: 2025-12-06 07:19:25.167895071 +0000 UTC m=+0.112463099 container init 391af68a55ef50bd67fb2228694a2fdc35a763df5c3307671d2236dcb1503973 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-61a21643-77ba-4a09-8184-10dc4bd52b26, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team)
Dec  6 02:19:25 np0005548731 podman[265537]: 2025-12-06 07:19:25.077348786 +0000 UTC m=+0.021916804 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec  6 02:19:25 np0005548731 podman[265537]: 2025-12-06 07:19:25.173031857 +0000 UTC m=+0.117599855 container start 391af68a55ef50bd67fb2228694a2fdc35a763df5c3307671d2236dcb1503973 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-61a21643-77ba-4a09-8184-10dc4bd52b26, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  6 02:19:25 np0005548731 neutron-haproxy-ovnmeta-61a21643-77ba-4a09-8184-10dc4bd52b26[265552]: [NOTICE]   (265556) : New worker (265558) forked
Dec  6 02:19:25 np0005548731 neutron-haproxy-ovnmeta-61a21643-77ba-4a09-8184-10dc4bd52b26[265552]: [NOTICE]   (265556) : Loading success.
Dec  6 02:19:25 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:19:25.225 143965 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec  6 02:19:25 np0005548731 nova_compute[232433]: 2025-12-06 07:19:25.599 232437 DEBUG nova.virt.driver [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Emitting event <LifecycleEvent: 1765005565.5991154, 57f8a652-f11e-4342-bae0-13c592a18ad2 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 02:19:25 np0005548731 nova_compute[232433]: 2025-12-06 07:19:25.600 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 57f8a652-f11e-4342-bae0-13c592a18ad2] VM Started (Lifecycle Event)#033[00m
Dec  6 02:19:25 np0005548731 nova_compute[232433]: 2025-12-06 07:19:25.603 232437 DEBUG nova.compute.manager [None req-056dfa96-3193-43da-afb1-a26f5bdf549c 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] [instance: 57f8a652-f11e-4342-bae0-13c592a18ad2] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec  6 02:19:25 np0005548731 nova_compute[232433]: 2025-12-06 07:19:25.607 232437 DEBUG nova.virt.libvirt.driver [None req-056dfa96-3193-43da-afb1-a26f5bdf549c 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] [instance: 57f8a652-f11e-4342-bae0-13c592a18ad2] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Dec  6 02:19:25 np0005548731 nova_compute[232433]: 2025-12-06 07:19:25.610 232437 INFO nova.virt.libvirt.driver [-] [instance: 57f8a652-f11e-4342-bae0-13c592a18ad2] Instance spawned successfully.#033[00m
Dec  6 02:19:25 np0005548731 nova_compute[232433]: 2025-12-06 07:19:25.610 232437 DEBUG nova.virt.libvirt.driver [None req-056dfa96-3193-43da-afb1-a26f5bdf549c 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] [instance: 57f8a652-f11e-4342-bae0-13c592a18ad2] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Dec  6 02:19:25 np0005548731 nova_compute[232433]: 2025-12-06 07:19:25.627 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 57f8a652-f11e-4342-bae0-13c592a18ad2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:19:25 np0005548731 nova_compute[232433]: 2025-12-06 07:19:25.630 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 57f8a652-f11e-4342-bae0-13c592a18ad2] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  6 02:19:25 np0005548731 nova_compute[232433]: 2025-12-06 07:19:25.637 232437 DEBUG nova.virt.libvirt.driver [None req-056dfa96-3193-43da-afb1-a26f5bdf549c 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] [instance: 57f8a652-f11e-4342-bae0-13c592a18ad2] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 02:19:25 np0005548731 nova_compute[232433]: 2025-12-06 07:19:25.637 232437 DEBUG nova.virt.libvirt.driver [None req-056dfa96-3193-43da-afb1-a26f5bdf549c 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] [instance: 57f8a652-f11e-4342-bae0-13c592a18ad2] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 02:19:25 np0005548731 nova_compute[232433]: 2025-12-06 07:19:25.638 232437 DEBUG nova.virt.libvirt.driver [None req-056dfa96-3193-43da-afb1-a26f5bdf549c 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] [instance: 57f8a652-f11e-4342-bae0-13c592a18ad2] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 02:19:25 np0005548731 nova_compute[232433]: 2025-12-06 07:19:25.638 232437 DEBUG nova.virt.libvirt.driver [None req-056dfa96-3193-43da-afb1-a26f5bdf549c 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] [instance: 57f8a652-f11e-4342-bae0-13c592a18ad2] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 02:19:25 np0005548731 nova_compute[232433]: 2025-12-06 07:19:25.638 232437 DEBUG nova.virt.libvirt.driver [None req-056dfa96-3193-43da-afb1-a26f5bdf549c 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] [instance: 57f8a652-f11e-4342-bae0-13c592a18ad2] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 02:19:25 np0005548731 nova_compute[232433]: 2025-12-06 07:19:25.639 232437 DEBUG nova.virt.libvirt.driver [None req-056dfa96-3193-43da-afb1-a26f5bdf549c 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] [instance: 57f8a652-f11e-4342-bae0-13c592a18ad2] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 02:19:25 np0005548731 nova_compute[232433]: 2025-12-06 07:19:25.662 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 57f8a652-f11e-4342-bae0-13c592a18ad2] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  6 02:19:25 np0005548731 nova_compute[232433]: 2025-12-06 07:19:25.663 232437 DEBUG nova.virt.driver [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Emitting event <LifecycleEvent: 1765005565.5993423, 57f8a652-f11e-4342-bae0-13c592a18ad2 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 02:19:25 np0005548731 nova_compute[232433]: 2025-12-06 07:19:25.663 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 57f8a652-f11e-4342-bae0-13c592a18ad2] VM Paused (Lifecycle Event)#033[00m
Dec  6 02:19:25 np0005548731 nova_compute[232433]: 2025-12-06 07:19:25.707 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 57f8a652-f11e-4342-bae0-13c592a18ad2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:19:25 np0005548731 nova_compute[232433]: 2025-12-06 07:19:25.712 232437 DEBUG nova.virt.driver [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Emitting event <LifecycleEvent: 1765005565.6063147, 57f8a652-f11e-4342-bae0-13c592a18ad2 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 02:19:25 np0005548731 nova_compute[232433]: 2025-12-06 07:19:25.712 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 57f8a652-f11e-4342-bae0-13c592a18ad2] VM Resumed (Lifecycle Event)#033[00m
Dec  6 02:19:25 np0005548731 nova_compute[232433]: 2025-12-06 07:19:25.724 232437 INFO nova.compute.manager [None req-056dfa96-3193-43da-afb1-a26f5bdf549c 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] [instance: 57f8a652-f11e-4342-bae0-13c592a18ad2] Took 8.46 seconds to spawn the instance on the hypervisor.#033[00m
Dec  6 02:19:25 np0005548731 nova_compute[232433]: 2025-12-06 07:19:25.725 232437 DEBUG nova.compute.manager [None req-056dfa96-3193-43da-afb1-a26f5bdf549c 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] [instance: 57f8a652-f11e-4342-bae0-13c592a18ad2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:19:25 np0005548731 nova_compute[232433]: 2025-12-06 07:19:25.737 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 57f8a652-f11e-4342-bae0-13c592a18ad2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:19:25 np0005548731 nova_compute[232433]: 2025-12-06 07:19:25.742 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 57f8a652-f11e-4342-bae0-13c592a18ad2] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  6 02:19:25 np0005548731 nova_compute[232433]: 2025-12-06 07:19:25.768 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 57f8a652-f11e-4342-bae0-13c592a18ad2] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  6 02:19:25 np0005548731 nova_compute[232433]: 2025-12-06 07:19:25.793 232437 INFO nova.compute.manager [None req-056dfa96-3193-43da-afb1-a26f5bdf549c 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] [instance: 57f8a652-f11e-4342-bae0-13c592a18ad2] Took 9.52 seconds to build instance.#033[00m
Dec  6 02:19:25 np0005548731 nova_compute[232433]: 2025-12-06 07:19:25.812 232437 DEBUG oslo_concurrency.lockutils [None req-056dfa96-3193-43da-afb1-a26f5bdf549c 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Lock "57f8a652-f11e-4342-bae0-13c592a18ad2" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.654s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:19:26 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:19:26 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:19:26 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:19:26.684 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:19:26 np0005548731 nova_compute[232433]: 2025-12-06 07:19:26.980 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:19:27 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:19:27 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:19:27 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:19:27.125 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:19:27 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e255 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:19:27 np0005548731 nova_compute[232433]: 2025-12-06 07:19:27.806 232437 DEBUG nova.compute.manager [req-a5c364ba-9b7d-4139-861c-e6da5db56fba req-3ec093e9-fe99-4cad-ba17-4f1896dffc4e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 57f8a652-f11e-4342-bae0-13c592a18ad2] Received event network-vif-plugged-945300f6-d49b-4923-bb2f-c73996fd2be2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:19:27 np0005548731 nova_compute[232433]: 2025-12-06 07:19:27.807 232437 DEBUG oslo_concurrency.lockutils [req-a5c364ba-9b7d-4139-861c-e6da5db56fba req-3ec093e9-fe99-4cad-ba17-4f1896dffc4e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "57f8a652-f11e-4342-bae0-13c592a18ad2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:19:27 np0005548731 nova_compute[232433]: 2025-12-06 07:19:27.807 232437 DEBUG oslo_concurrency.lockutils [req-a5c364ba-9b7d-4139-861c-e6da5db56fba req-3ec093e9-fe99-4cad-ba17-4f1896dffc4e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "57f8a652-f11e-4342-bae0-13c592a18ad2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:19:27 np0005548731 nova_compute[232433]: 2025-12-06 07:19:27.807 232437 DEBUG oslo_concurrency.lockutils [req-a5c364ba-9b7d-4139-861c-e6da5db56fba req-3ec093e9-fe99-4cad-ba17-4f1896dffc4e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "57f8a652-f11e-4342-bae0-13c592a18ad2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:19:27 np0005548731 nova_compute[232433]: 2025-12-06 07:19:27.807 232437 DEBUG nova.compute.manager [req-a5c364ba-9b7d-4139-861c-e6da5db56fba req-3ec093e9-fe99-4cad-ba17-4f1896dffc4e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 57f8a652-f11e-4342-bae0-13c592a18ad2] No waiting events found dispatching network-vif-plugged-945300f6-d49b-4923-bb2f-c73996fd2be2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 02:19:27 np0005548731 nova_compute[232433]: 2025-12-06 07:19:27.807 232437 WARNING nova.compute.manager [req-a5c364ba-9b7d-4139-861c-e6da5db56fba req-3ec093e9-fe99-4cad-ba17-4f1896dffc4e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 57f8a652-f11e-4342-bae0-13c592a18ad2] Received unexpected event network-vif-plugged-945300f6-d49b-4923-bb2f-c73996fd2be2 for instance with vm_state active and task_state None.#033[00m
Dec  6 02:19:28 np0005548731 nova_compute[232433]: 2025-12-06 07:19:28.545 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:19:28 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:19:28 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:19:28 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:19:28.686 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:19:28 np0005548731 podman[265593]: 2025-12-06 07:19:28.904157876 +0000 UTC m=+0.060012842 container health_status 26c2ce9bdb83c6db5ccad7f5b2a7cb4e9354ab0235ad2f942ea42c8feda2142b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  6 02:19:28 np0005548731 podman[265595]: 2025-12-06 07:19:28.91005031 +0000 UTC m=+0.061440738 container health_status ae4fd08cdec31d6ae2a6b91ffff7deb6ca5a8375cb52ee4b685cc6f25796824e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Dec  6 02:19:28 np0005548731 podman[265594]: 2025-12-06 07:19:28.961223365 +0000 UTC m=+0.114872828 container health_status a118c3becf56cfbcae208065771ebf10a65162bb7b96389971293e534823fb78 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  6 02:19:29 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:19:29 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:19:29 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:19:29.127 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:19:30 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:19:30 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:19:30 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:19:30.687 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:19:30 np0005548731 ceph-mon[77458]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #70. Immutable memtables: 0.
Dec  6 02:19:30 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:19:30.708270) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec  6 02:19:30 np0005548731 ceph-mon[77458]: rocksdb: [db/flush_job.cc:856] [default] [JOB 41] Flushing memtable with next log file: 70
Dec  6 02:19:30 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765005570708309, "job": 41, "event": "flush_started", "num_memtables": 1, "num_entries": 2426, "num_deletes": 252, "total_data_size": 5804580, "memory_usage": 5875264, "flush_reason": "Manual Compaction"}
Dec  6 02:19:30 np0005548731 ceph-mon[77458]: rocksdb: [db/flush_job.cc:885] [default] [JOB 41] Level-0 flush table #71: started
Dec  6 02:19:30 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765005570727059, "cf_name": "default", "job": 41, "event": "table_file_creation", "file_number": 71, "file_size": 3783982, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 37135, "largest_seqno": 39556, "table_properties": {"data_size": 3774226, "index_size": 6122, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2565, "raw_key_size": 20835, "raw_average_key_size": 20, "raw_value_size": 3754504, "raw_average_value_size": 3721, "num_data_blocks": 266, "num_entries": 1009, "num_filter_entries": 1009, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765005353, "oldest_key_time": 1765005353, "file_creation_time": 1765005570, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bc01f9a1-fda8-4008-aee1-1fa5ff4371a1", "db_session_id": "CWBL8WMDM4D79BU86E9T", "orig_file_number": 71, "seqno_to_time_mapping": "N/A"}}
Dec  6 02:19:30 np0005548731 ceph-mon[77458]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 41] Flush lasted 18917 microseconds, and 7449 cpu microseconds.
Dec  6 02:19:30 np0005548731 ceph-mon[77458]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  6 02:19:30 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:19:30.727188) [db/flush_job.cc:967] [default] [JOB 41] Level-0 flush table #71: 3783982 bytes OK
Dec  6 02:19:30 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:19:30.727232) [db/memtable_list.cc:519] [default] Level-0 commit table #71 started
Dec  6 02:19:30 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:19:30.729337) [db/memtable_list.cc:722] [default] Level-0 commit table #71: memtable #1 done
Dec  6 02:19:30 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:19:30.729349) EVENT_LOG_v1 {"time_micros": 1765005570729345, "job": 41, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec  6 02:19:30 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:19:30.729363) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec  6 02:19:30 np0005548731 ceph-mon[77458]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 41] Try to delete WAL files size 5793934, prev total WAL file size 5793934, number of live WAL files 2.
Dec  6 02:19:30 np0005548731 ceph-mon[77458]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000067.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  6 02:19:30 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:19:30.731169) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730033303132' seq:72057594037927935, type:22 .. '7061786F730033323634' seq:0, type:0; will stop at (end)
Dec  6 02:19:30 np0005548731 ceph-mon[77458]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 42] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec  6 02:19:30 np0005548731 ceph-mon[77458]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 41 Base level 0, inputs: [71(3695KB)], [69(9364KB)]
Dec  6 02:19:30 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765005570731247, "job": 42, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [71], "files_L6": [69], "score": -1, "input_data_size": 13373681, "oldest_snapshot_seqno": -1}
Dec  6 02:19:30 np0005548731 ceph-mon[77458]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 42] Generated table #72: 7066 keys, 11345288 bytes, temperature: kUnknown
Dec  6 02:19:30 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765005570817300, "cf_name": "default", "job": 42, "event": "table_file_creation", "file_number": 72, "file_size": 11345288, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11297125, "index_size": 29363, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 17733, "raw_key_size": 181152, "raw_average_key_size": 25, "raw_value_size": 11169625, "raw_average_value_size": 1580, "num_data_blocks": 1169, "num_entries": 7066, "num_filter_entries": 7066, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765002564, "oldest_key_time": 0, "file_creation_time": 1765005570, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bc01f9a1-fda8-4008-aee1-1fa5ff4371a1", "db_session_id": "CWBL8WMDM4D79BU86E9T", "orig_file_number": 72, "seqno_to_time_mapping": "N/A"}}
Dec  6 02:19:30 np0005548731 ceph-mon[77458]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  6 02:19:30 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:19:30.817531) [db/compaction/compaction_job.cc:1663] [default] [JOB 42] Compacted 1@0 + 1@6 files to L6 => 11345288 bytes
Dec  6 02:19:30 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:19:30.818771) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 155.3 rd, 131.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.6, 9.1 +0.0 blob) out(10.8 +0.0 blob), read-write-amplify(6.5) write-amplify(3.0) OK, records in: 7589, records dropped: 523 output_compression: NoCompression
Dec  6 02:19:30 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:19:30.818790) EVENT_LOG_v1 {"time_micros": 1765005570818781, "job": 42, "event": "compaction_finished", "compaction_time_micros": 86121, "compaction_time_cpu_micros": 25873, "output_level": 6, "num_output_files": 1, "total_output_size": 11345288, "num_input_records": 7589, "num_output_records": 7066, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec  6 02:19:30 np0005548731 ceph-mon[77458]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000071.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  6 02:19:30 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765005570819592, "job": 42, "event": "table_file_deletion", "file_number": 71}
Dec  6 02:19:30 np0005548731 ceph-mon[77458]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000069.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  6 02:19:30 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765005570821336, "job": 42, "event": "table_file_deletion", "file_number": 69}
Dec  6 02:19:30 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:19:30.731061) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 02:19:30 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:19:30.821383) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 02:19:30 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:19:30.821389) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 02:19:30 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:19:30.821391) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 02:19:30 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:19:30.821392) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 02:19:30 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:19:30.821394) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 02:19:31 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:19:31 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:19:31 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:19:31.130 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:19:31 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:19:31.228 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=9f96b960-b4f2-40bd-ae99-08121f5e8b78, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '31'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:19:31 np0005548731 nova_compute[232433]: 2025-12-06 07:19:31.983 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:19:32 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:19:32 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 02:19:32 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:19:32.690 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 02:19:32 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e255 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:19:33 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:19:33 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:19:33 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:19:33.133 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:19:33 np0005548731 nova_compute[232433]: 2025-12-06 07:19:33.547 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:19:33 np0005548731 ceph-mon[77458]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #73. Immutable memtables: 0.
Dec  6 02:19:33 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:19:33.932653) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec  6 02:19:33 np0005548731 ceph-mon[77458]: rocksdb: [db/flush_job.cc:856] [default] [JOB 43] Flushing memtable with next log file: 73
Dec  6 02:19:33 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765005573932755, "job": 43, "event": "flush_started", "num_memtables": 1, "num_entries": 293, "num_deletes": 259, "total_data_size": 119952, "memory_usage": 127080, "flush_reason": "Manual Compaction"}
Dec  6 02:19:33 np0005548731 ceph-mon[77458]: rocksdb: [db/flush_job.cc:885] [default] [JOB 43] Level-0 flush table #74: started
Dec  6 02:19:33 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765005573935783, "cf_name": "default", "job": 43, "event": "table_file_creation", "file_number": 74, "file_size": 78856, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 39561, "largest_seqno": 39849, "table_properties": {"data_size": 76933, "index_size": 151, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 709, "raw_key_size": 4841, "raw_average_key_size": 17, "raw_value_size": 73097, "raw_average_value_size": 263, "num_data_blocks": 7, "num_entries": 277, "num_filter_entries": 277, "num_deletions": 259, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765005571, "oldest_key_time": 1765005571, "file_creation_time": 1765005573, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bc01f9a1-fda8-4008-aee1-1fa5ff4371a1", "db_session_id": "CWBL8WMDM4D79BU86E9T", "orig_file_number": 74, "seqno_to_time_mapping": "N/A"}}
Dec  6 02:19:33 np0005548731 ceph-mon[77458]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 43] Flush lasted 3208 microseconds, and 888 cpu microseconds.
Dec  6 02:19:33 np0005548731 ceph-mon[77458]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  6 02:19:33 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:19:33.935862) [db/flush_job.cc:967] [default] [JOB 43] Level-0 flush table #74: 78856 bytes OK
Dec  6 02:19:33 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:19:33.935903) [db/memtable_list.cc:519] [default] Level-0 commit table #74 started
Dec  6 02:19:33 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:19:33.937053) [db/memtable_list.cc:722] [default] Level-0 commit table #74: memtable #1 done
Dec  6 02:19:33 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:19:33.937065) EVENT_LOG_v1 {"time_micros": 1765005573937062, "job": 43, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec  6 02:19:33 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:19:33.937076) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec  6 02:19:33 np0005548731 ceph-mon[77458]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 43] Try to delete WAL files size 117738, prev total WAL file size 117738, number of live WAL files 2.
Dec  6 02:19:33 np0005548731 ceph-mon[77458]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000070.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  6 02:19:33 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:19:33.937583) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0031303038' seq:72057594037927935, type:22 .. '6C6F676D0031323633' seq:0, type:0; will stop at (end)
Dec  6 02:19:33 np0005548731 ceph-mon[77458]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 44] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec  6 02:19:33 np0005548731 ceph-mon[77458]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 43 Base level 0, inputs: [74(77KB)], [72(10MB)]
Dec  6 02:19:33 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765005573937647, "job": 44, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [74], "files_L6": [72], "score": -1, "input_data_size": 11424144, "oldest_snapshot_seqno": -1}
Dec  6 02:19:33 np0005548731 nova_compute[232433]: 2025-12-06 07:19:33.972 232437 DEBUG nova.compute.manager [req-30976b16-5ed0-41a8-a10d-392c7810eb35 req-a294eb37-bb5a-425b-94f8-7dc1230eceba 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 57f8a652-f11e-4342-bae0-13c592a18ad2] Received event network-changed-945300f6-d49b-4923-bb2f-c73996fd2be2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:19:33 np0005548731 nova_compute[232433]: 2025-12-06 07:19:33.972 232437 DEBUG nova.compute.manager [req-30976b16-5ed0-41a8-a10d-392c7810eb35 req-a294eb37-bb5a-425b-94f8-7dc1230eceba 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 57f8a652-f11e-4342-bae0-13c592a18ad2] Refreshing instance network info cache due to event network-changed-945300f6-d49b-4923-bb2f-c73996fd2be2. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  6 02:19:33 np0005548731 nova_compute[232433]: 2025-12-06 07:19:33.972 232437 DEBUG oslo_concurrency.lockutils [req-30976b16-5ed0-41a8-a10d-392c7810eb35 req-a294eb37-bb5a-425b-94f8-7dc1230eceba 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-57f8a652-f11e-4342-bae0-13c592a18ad2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 02:19:33 np0005548731 nova_compute[232433]: 2025-12-06 07:19:33.972 232437 DEBUG oslo_concurrency.lockutils [req-30976b16-5ed0-41a8-a10d-392c7810eb35 req-a294eb37-bb5a-425b-94f8-7dc1230eceba 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-57f8a652-f11e-4342-bae0-13c592a18ad2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 02:19:33 np0005548731 nova_compute[232433]: 2025-12-06 07:19:33.973 232437 DEBUG nova.network.neutron [req-30976b16-5ed0-41a8-a10d-392c7810eb35 req-a294eb37-bb5a-425b-94f8-7dc1230eceba 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 57f8a652-f11e-4342-bae0-13c592a18ad2] Refreshing network info cache for port 945300f6-d49b-4923-bb2f-c73996fd2be2 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  6 02:19:34 np0005548731 ceph-mon[77458]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 44] Generated table #75: 6818 keys, 11286779 bytes, temperature: kUnknown
Dec  6 02:19:34 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765005574029071, "cf_name": "default", "job": 44, "event": "table_file_creation", "file_number": 75, "file_size": 11286779, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11239817, "index_size": 28802, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 17093, "raw_key_size": 176955, "raw_average_key_size": 25, "raw_value_size": 11116087, "raw_average_value_size": 1630, "num_data_blocks": 1143, "num_entries": 6818, "num_filter_entries": 6818, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765002564, "oldest_key_time": 0, "file_creation_time": 1765005573, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bc01f9a1-fda8-4008-aee1-1fa5ff4371a1", "db_session_id": "CWBL8WMDM4D79BU86E9T", "orig_file_number": 75, "seqno_to_time_mapping": "N/A"}}
Dec  6 02:19:34 np0005548731 ceph-mon[77458]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  6 02:19:34 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:19:34.029299) [db/compaction/compaction_job.cc:1663] [default] [JOB 44] Compacted 1@0 + 1@6 files to L6 => 11286779 bytes
Dec  6 02:19:34 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:19:34.031072) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 124.9 rd, 123.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.1, 10.8 +0.0 blob) out(10.8 +0.0 blob), read-write-amplify(288.0) write-amplify(143.1) OK, records in: 7343, records dropped: 525 output_compression: NoCompression
Dec  6 02:19:34 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:19:34.031089) EVENT_LOG_v1 {"time_micros": 1765005574031081, "job": 44, "event": "compaction_finished", "compaction_time_micros": 91495, "compaction_time_cpu_micros": 28514, "output_level": 6, "num_output_files": 1, "total_output_size": 11286779, "num_input_records": 7343, "num_output_records": 6818, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec  6 02:19:34 np0005548731 ceph-mon[77458]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000074.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  6 02:19:34 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765005574031245, "job": 44, "event": "table_file_deletion", "file_number": 74}
Dec  6 02:19:34 np0005548731 ceph-mon[77458]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000072.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  6 02:19:34 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765005574032910, "job": 44, "event": "table_file_deletion", "file_number": 72}
Dec  6 02:19:34 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:19:33.937467) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 02:19:34 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:19:34.032945) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 02:19:34 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:19:34.032948) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 02:19:34 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:19:34.032950) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 02:19:34 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:19:34.032951) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 02:19:34 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:19:34.032952) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 02:19:34 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:19:34 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:19:34 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:19:34.692 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:19:35 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:19:35 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:19:35 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:19:35.136 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:19:35 np0005548731 nova_compute[232433]: 2025-12-06 07:19:35.406 232437 DEBUG nova.network.neutron [req-30976b16-5ed0-41a8-a10d-392c7810eb35 req-a294eb37-bb5a-425b-94f8-7dc1230eceba 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 57f8a652-f11e-4342-bae0-13c592a18ad2] Updated VIF entry in instance network info cache for port 945300f6-d49b-4923-bb2f-c73996fd2be2. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  6 02:19:35 np0005548731 nova_compute[232433]: 2025-12-06 07:19:35.406 232437 DEBUG nova.network.neutron [req-30976b16-5ed0-41a8-a10d-392c7810eb35 req-a294eb37-bb5a-425b-94f8-7dc1230eceba 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 57f8a652-f11e-4342-bae0-13c592a18ad2] Updating instance_info_cache with network_info: [{"id": "945300f6-d49b-4923-bb2f-c73996fd2be2", "address": "fa:16:3e:94:d0:b6", "network": {"id": "61a21643-77ba-4a09-8184-10dc4bd52b26", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-327155623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.249", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "35df5125c2cf4d29a6b975951af14910", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap945300f6-d4", "ovs_interfaceid": "945300f6-d49b-4923-bb2f-c73996fd2be2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 02:19:35 np0005548731 nova_compute[232433]: 2025-12-06 07:19:35.421 232437 DEBUG oslo_concurrency.lockutils [req-30976b16-5ed0-41a8-a10d-392c7810eb35 req-a294eb37-bb5a-425b-94f8-7dc1230eceba 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-57f8a652-f11e-4342-bae0-13c592a18ad2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 02:19:36 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:19:36 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:19:36 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:19:36.694 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:19:36 np0005548731 nova_compute[232433]: 2025-12-06 07:19:36.985 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:19:37 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:19:37 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:19:37 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:19:37.139 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:19:37 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e255 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:19:38 np0005548731 nova_compute[232433]: 2025-12-06 07:19:38.412 232437 DEBUG nova.compute.manager [req-d1a90b65-696c-4251-b1a8-47e50326bfd3 req-59c62610-cbe3-4256-bc89-223234dfc6b8 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 57f8a652-f11e-4342-bae0-13c592a18ad2] Received event network-changed-945300f6-d49b-4923-bb2f-c73996fd2be2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:19:38 np0005548731 nova_compute[232433]: 2025-12-06 07:19:38.412 232437 DEBUG nova.compute.manager [req-d1a90b65-696c-4251-b1a8-47e50326bfd3 req-59c62610-cbe3-4256-bc89-223234dfc6b8 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 57f8a652-f11e-4342-bae0-13c592a18ad2] Refreshing instance network info cache due to event network-changed-945300f6-d49b-4923-bb2f-c73996fd2be2. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  6 02:19:38 np0005548731 nova_compute[232433]: 2025-12-06 07:19:38.413 232437 DEBUG oslo_concurrency.lockutils [req-d1a90b65-696c-4251-b1a8-47e50326bfd3 req-59c62610-cbe3-4256-bc89-223234dfc6b8 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-57f8a652-f11e-4342-bae0-13c592a18ad2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 02:19:38 np0005548731 nova_compute[232433]: 2025-12-06 07:19:38.413 232437 DEBUG oslo_concurrency.lockutils [req-d1a90b65-696c-4251-b1a8-47e50326bfd3 req-59c62610-cbe3-4256-bc89-223234dfc6b8 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-57f8a652-f11e-4342-bae0-13c592a18ad2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 02:19:38 np0005548731 nova_compute[232433]: 2025-12-06 07:19:38.413 232437 DEBUG nova.network.neutron [req-d1a90b65-696c-4251-b1a8-47e50326bfd3 req-59c62610-cbe3-4256-bc89-223234dfc6b8 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 57f8a652-f11e-4342-bae0-13c592a18ad2] Refreshing network info cache for port 945300f6-d49b-4923-bb2f-c73996fd2be2 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  6 02:19:38 np0005548731 nova_compute[232433]: 2025-12-06 07:19:38.549 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:19:38 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:19:38 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:19:38 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:19:38.696 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:19:39 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:19:39 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:19:39 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:19:39.143 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:19:40 np0005548731 nova_compute[232433]: 2025-12-06 07:19:40.508 232437 DEBUG nova.network.neutron [req-d1a90b65-696c-4251-b1a8-47e50326bfd3 req-59c62610-cbe3-4256-bc89-223234dfc6b8 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 57f8a652-f11e-4342-bae0-13c592a18ad2] Updated VIF entry in instance network info cache for port 945300f6-d49b-4923-bb2f-c73996fd2be2. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  6 02:19:40 np0005548731 nova_compute[232433]: 2025-12-06 07:19:40.508 232437 DEBUG nova.network.neutron [req-d1a90b65-696c-4251-b1a8-47e50326bfd3 req-59c62610-cbe3-4256-bc89-223234dfc6b8 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 57f8a652-f11e-4342-bae0-13c592a18ad2] Updating instance_info_cache with network_info: [{"id": "945300f6-d49b-4923-bb2f-c73996fd2be2", "address": "fa:16:3e:94:d0:b6", "network": {"id": "61a21643-77ba-4a09-8184-10dc4bd52b26", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-327155623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "35df5125c2cf4d29a6b975951af14910", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap945300f6-d4", "ovs_interfaceid": "945300f6-d49b-4923-bb2f-c73996fd2be2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 02:19:40 np0005548731 nova_compute[232433]: 2025-12-06 07:19:40.538 232437 DEBUG oslo_concurrency.lockutils [req-d1a90b65-696c-4251-b1a8-47e50326bfd3 req-59c62610-cbe3-4256-bc89-223234dfc6b8 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-57f8a652-f11e-4342-bae0-13c592a18ad2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 02:19:40 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:19:40 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:19:40 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:19:40.698 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:19:41 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:19:41 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:19:41 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:19:41.146 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:19:41 np0005548731 ovn_controller[133927]: 2025-12-06T07:19:41Z|00045|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:94:d0:b6 10.100.0.6
Dec  6 02:19:41 np0005548731 ovn_controller[133927]: 2025-12-06T07:19:41Z|00046|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:94:d0:b6 10.100.0.6
Dec  6 02:19:41 np0005548731 nova_compute[232433]: 2025-12-06 07:19:41.987 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:19:42 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:19:42 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:19:42 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:19:42.700 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:19:42 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e255 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:19:43 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:19:43 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:19:43 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:19:43.150 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:19:43 np0005548731 ovn_controller[133927]: 2025-12-06T07:19:43Z|00301|binding|INFO|Releasing lport 8e8469cb-4434-4b4c-9dcf-a6a8244c2597 from this chassis (sb_readonly=0)
Dec  6 02:19:43 np0005548731 nova_compute[232433]: 2025-12-06 07:19:43.243 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:19:43 np0005548731 nova_compute[232433]: 2025-12-06 07:19:43.552 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:19:44 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:19:44 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 02:19:44 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:19:44.702 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 02:19:45 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:19:45 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:19:45 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:19:45.152 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:19:46 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:19:46 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:19:46 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:19:46.704 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:19:46 np0005548731 nova_compute[232433]: 2025-12-06 07:19:46.989 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:19:47 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:19:47 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:19:47 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:19:47.154 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:19:47 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e255 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:19:48 np0005548731 nova_compute[232433]: 2025-12-06 07:19:48.553 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:19:48 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:19:48 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:19:48 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:19:48.707 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:19:49 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:19:49 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:19:49 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:19:49.159 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:19:50 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:19:50 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:19:50 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:19:50.709 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:19:51 np0005548731 nova_compute[232433]: 2025-12-06 07:19:51.116 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:19:51 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:19:51 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 02:19:51 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:19:51.161 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 02:19:51 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e256 e256: 3 total, 3 up, 3 in
Dec  6 02:19:51 np0005548731 nova_compute[232433]: 2025-12-06 07:19:51.990 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:19:52 np0005548731 nova_compute[232433]: 2025-12-06 07:19:52.115 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:19:52 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:19:52 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:19:52 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:19:52.711 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:19:52 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e256 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:19:53 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:19:53 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:19:53 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:19:53.165 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:19:53 np0005548731 nova_compute[232433]: 2025-12-06 07:19:53.466 232437 DEBUG oslo_concurrency.lockutils [None req-b2c8cb97-cee4-4efd-a67d-fd204f4eb232 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Acquiring lock "interface-57f8a652-f11e-4342-bae0-13c592a18ad2-3036e2e9-ad2c-4f44-96e5-c7dcdea69629" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:19:53 np0005548731 nova_compute[232433]: 2025-12-06 07:19:53.466 232437 DEBUG oslo_concurrency.lockutils [None req-b2c8cb97-cee4-4efd-a67d-fd204f4eb232 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Lock "interface-57f8a652-f11e-4342-bae0-13c592a18ad2-3036e2e9-ad2c-4f44-96e5-c7dcdea69629" acquired by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:19:53 np0005548731 nova_compute[232433]: 2025-12-06 07:19:53.467 232437 DEBUG nova.objects.instance [None req-b2c8cb97-cee4-4efd-a67d-fd204f4eb232 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Lazy-loading 'flavor' on Instance uuid 57f8a652-f11e-4342-bae0-13c592a18ad2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:19:53 np0005548731 nova_compute[232433]: 2025-12-06 07:19:53.555 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:19:53 np0005548731 nova_compute[232433]: 2025-12-06 07:19:53.859 232437 DEBUG nova.compute.manager [req-6ca0215b-efef-49a0-ae6a-b1f2bcf46694 req-f43728b1-fabd-4253-96cd-d2675957a66b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 57f8a652-f11e-4342-bae0-13c592a18ad2] Received event network-changed-945300f6-d49b-4923-bb2f-c73996fd2be2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:19:53 np0005548731 nova_compute[232433]: 2025-12-06 07:19:53.859 232437 DEBUG nova.compute.manager [req-6ca0215b-efef-49a0-ae6a-b1f2bcf46694 req-f43728b1-fabd-4253-96cd-d2675957a66b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 57f8a652-f11e-4342-bae0-13c592a18ad2] Refreshing instance network info cache due to event network-changed-945300f6-d49b-4923-bb2f-c73996fd2be2. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  6 02:19:53 np0005548731 nova_compute[232433]: 2025-12-06 07:19:53.859 232437 DEBUG oslo_concurrency.lockutils [req-6ca0215b-efef-49a0-ae6a-b1f2bcf46694 req-f43728b1-fabd-4253-96cd-d2675957a66b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-57f8a652-f11e-4342-bae0-13c592a18ad2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 02:19:53 np0005548731 nova_compute[232433]: 2025-12-06 07:19:53.860 232437 DEBUG oslo_concurrency.lockutils [req-6ca0215b-efef-49a0-ae6a-b1f2bcf46694 req-f43728b1-fabd-4253-96cd-d2675957a66b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-57f8a652-f11e-4342-bae0-13c592a18ad2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 02:19:53 np0005548731 nova_compute[232433]: 2025-12-06 07:19:53.860 232437 DEBUG nova.network.neutron [req-6ca0215b-efef-49a0-ae6a-b1f2bcf46694 req-f43728b1-fabd-4253-96cd-d2675957a66b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 57f8a652-f11e-4342-bae0-13c592a18ad2] Refreshing network info cache for port 945300f6-d49b-4923-bb2f-c73996fd2be2 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  6 02:19:54 np0005548731 nova_compute[232433]: 2025-12-06 07:19:54.195 232437 DEBUG nova.objects.instance [None req-b2c8cb97-cee4-4efd-a67d-fd204f4eb232 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Lazy-loading 'pci_requests' on Instance uuid 57f8a652-f11e-4342-bae0-13c592a18ad2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:19:54 np0005548731 nova_compute[232433]: 2025-12-06 07:19:54.213 232437 DEBUG nova.network.neutron [None req-b2c8cb97-cee4-4efd-a67d-fd204f4eb232 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] [instance: 57f8a652-f11e-4342-bae0-13c592a18ad2] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Dec  6 02:19:54 np0005548731 nova_compute[232433]: 2025-12-06 07:19:54.446 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:19:54 np0005548731 nova_compute[232433]: 2025-12-06 07:19:54.646 232437 DEBUG nova.policy [None req-b2c8cb97-cee4-4efd-a67d-fd204f4eb232 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '06f5b46553b24b39a1493d96ec4e503e', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '35df5125c2cf4d29a6b975951af14910', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Dec  6 02:19:54 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:19:54 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 02:19:54 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:19:54.713 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 02:19:54 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 02:19:55 np0005548731 nova_compute[232433]: 2025-12-06 07:19:55.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:19:55 np0005548731 nova_compute[232433]: 2025-12-06 07:19:55.104 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  6 02:19:55 np0005548731 nova_compute[232433]: 2025-12-06 07:19:55.104 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  6 02:19:55 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:19:55 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:19:55 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:19:55.168 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:19:55 np0005548731 nova_compute[232433]: 2025-12-06 07:19:55.282 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Acquiring lock "refresh_cache-57f8a652-f11e-4342-bae0-13c592a18ad2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 02:19:55 np0005548731 nova_compute[232433]: 2025-12-06 07:19:55.574 232437 DEBUG nova.network.neutron [req-6ca0215b-efef-49a0-ae6a-b1f2bcf46694 req-f43728b1-fabd-4253-96cd-d2675957a66b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 57f8a652-f11e-4342-bae0-13c592a18ad2] Updated VIF entry in instance network info cache for port 945300f6-d49b-4923-bb2f-c73996fd2be2. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  6 02:19:55 np0005548731 nova_compute[232433]: 2025-12-06 07:19:55.574 232437 DEBUG nova.network.neutron [req-6ca0215b-efef-49a0-ae6a-b1f2bcf46694 req-f43728b1-fabd-4253-96cd-d2675957a66b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 57f8a652-f11e-4342-bae0-13c592a18ad2] Updating instance_info_cache with network_info: [{"id": "945300f6-d49b-4923-bb2f-c73996fd2be2", "address": "fa:16:3e:94:d0:b6", "network": {"id": "61a21643-77ba-4a09-8184-10dc4bd52b26", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-327155623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.249", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "35df5125c2cf4d29a6b975951af14910", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap945300f6-d4", "ovs_interfaceid": "945300f6-d49b-4923-bb2f-c73996fd2be2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 02:19:55 np0005548731 nova_compute[232433]: 2025-12-06 07:19:55.610 232437 DEBUG oslo_concurrency.lockutils [req-6ca0215b-efef-49a0-ae6a-b1f2bcf46694 req-f43728b1-fabd-4253-96cd-d2675957a66b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-57f8a652-f11e-4342-bae0-13c592a18ad2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 02:19:55 np0005548731 nova_compute[232433]: 2025-12-06 07:19:55.611 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Acquired lock "refresh_cache-57f8a652-f11e-4342-bae0-13c592a18ad2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 02:19:55 np0005548731 nova_compute[232433]: 2025-12-06 07:19:55.611 232437 DEBUG nova.network.neutron [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] [instance: 57f8a652-f11e-4342-bae0-13c592a18ad2] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Dec  6 02:19:55 np0005548731 nova_compute[232433]: 2025-12-06 07:19:55.611 232437 DEBUG nova.objects.instance [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lazy-loading 'info_cache' on Instance uuid 57f8a652-f11e-4342-bae0-13c592a18ad2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:19:55 np0005548731 nova_compute[232433]: 2025-12-06 07:19:55.893 232437 DEBUG nova.network.neutron [None req-b2c8cb97-cee4-4efd-a67d-fd204f4eb232 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] [instance: 57f8a652-f11e-4342-bae0-13c592a18ad2] Successfully updated port: 3036e2e9-ad2c-4f44-96e5-c7dcdea69629 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Dec  6 02:19:55 np0005548731 nova_compute[232433]: 2025-12-06 07:19:55.908 232437 DEBUG oslo_concurrency.lockutils [None req-b2c8cb97-cee4-4efd-a67d-fd204f4eb232 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Acquiring lock "refresh_cache-57f8a652-f11e-4342-bae0-13c592a18ad2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 02:19:56 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 02:19:56 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  6 02:19:56 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 02:19:56 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  6 02:19:56 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:19:56 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:19:56 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:19:56.715 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:19:56 np0005548731 nova_compute[232433]: 2025-12-06 07:19:56.994 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:19:57 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:19:57 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:19:57 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:19:57.170 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:19:57 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e256 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:19:58 np0005548731 nova_compute[232433]: 2025-12-06 07:19:58.007 232437 DEBUG nova.compute.manager [req-778c24c2-71b7-4d3d-bb0a-16f4fdfd6f26 req-ee23b868-d615-41fc-9491-112bacdab257 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 57f8a652-f11e-4342-bae0-13c592a18ad2] Received event network-changed-3036e2e9-ad2c-4f44-96e5-c7dcdea69629 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:19:58 np0005548731 nova_compute[232433]: 2025-12-06 07:19:58.007 232437 DEBUG nova.compute.manager [req-778c24c2-71b7-4d3d-bb0a-16f4fdfd6f26 req-ee23b868-d615-41fc-9491-112bacdab257 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 57f8a652-f11e-4342-bae0-13c592a18ad2] Refreshing instance network info cache due to event network-changed-3036e2e9-ad2c-4f44-96e5-c7dcdea69629. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  6 02:19:58 np0005548731 nova_compute[232433]: 2025-12-06 07:19:58.007 232437 DEBUG oslo_concurrency.lockutils [req-778c24c2-71b7-4d3d-bb0a-16f4fdfd6f26 req-ee23b868-d615-41fc-9491-112bacdab257 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-57f8a652-f11e-4342-bae0-13c592a18ad2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 02:19:58 np0005548731 ovn_controller[133927]: 2025-12-06T07:19:58Z|00302|binding|INFO|Releasing lport 8e8469cb-4434-4b4c-9dcf-a6a8244c2597 from this chassis (sb_readonly=0)
Dec  6 02:19:58 np0005548731 nova_compute[232433]: 2025-12-06 07:19:58.270 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:19:58 np0005548731 nova_compute[232433]: 2025-12-06 07:19:58.557 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:19:58 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:19:58 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:19:58 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:19:58.717 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:19:59 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:19:59 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:19:59 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:19:59.173 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:19:59 np0005548731 podman[265854]: 2025-12-06 07:19:59.896603411 +0000 UTC m=+0.052742025 container health_status 26c2ce9bdb83c6db5ccad7f5b2a7cb4e9354ab0235ad2f942ea42c8feda2142b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, 
container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  6 02:19:59 np0005548731 podman[265856]: 2025-12-06 07:19:59.904397401 +0000 UTC m=+0.055677776 container health_status ae4fd08cdec31d6ae2a6b91ffff7deb6ca5a8375cb52ee4b685cc6f25796824e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true)
Dec  6 02:19:59 np0005548731 podman[265855]: 2025-12-06 07:19:59.96309514 +0000 UTC m=+0.116514338 container health_status a118c3becf56cfbcae208065771ebf10a65162bb7b96389971293e534823fb78 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller)
Dec  6 02:20:00 np0005548731 nova_compute[232433]: 2025-12-06 07:20:00.168 232437 DEBUG nova.network.neutron [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] [instance: 57f8a652-f11e-4342-bae0-13c592a18ad2] Updating instance_info_cache with network_info: [{"id": "945300f6-d49b-4923-bb2f-c73996fd2be2", "address": "fa:16:3e:94:d0:b6", "network": {"id": "61a21643-77ba-4a09-8184-10dc4bd52b26", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-327155623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.249", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "35df5125c2cf4d29a6b975951af14910", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap945300f6-d4", "ovs_interfaceid": "945300f6-d49b-4923-bb2f-c73996fd2be2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "3036e2e9-ad2c-4f44-96e5-c7dcdea69629", "address": "fa:16:3e:46:2a:84", "network": {"id": "61a21643-77ba-4a09-8184-10dc4bd52b26", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-327155623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "35df5125c2cf4d29a6b975951af14910", "mtu": 1442, 
"physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3036e2e9-ad", "ovs_interfaceid": "3036e2e9-ad2c-4f44-96e5-c7dcdea69629", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 02:20:00 np0005548731 nova_compute[232433]: 2025-12-06 07:20:00.201 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Releasing lock "refresh_cache-57f8a652-f11e-4342-bae0-13c592a18ad2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 02:20:00 np0005548731 nova_compute[232433]: 2025-12-06 07:20:00.201 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] [instance: 57f8a652-f11e-4342-bae0-13c592a18ad2] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Dec  6 02:20:00 np0005548731 nova_compute[232433]: 2025-12-06 07:20:00.201 232437 DEBUG oslo_concurrency.lockutils [None req-b2c8cb97-cee4-4efd-a67d-fd204f4eb232 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Acquired lock "refresh_cache-57f8a652-f11e-4342-bae0-13c592a18ad2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 02:20:00 np0005548731 nova_compute[232433]: 2025-12-06 07:20:00.202 232437 DEBUG nova.network.neutron [None req-b2c8cb97-cee4-4efd-a67d-fd204f4eb232 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] [instance: 57f8a652-f11e-4342-bae0-13c592a18ad2] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec  6 02:20:00 np0005548731 nova_compute[232433]: 2025-12-06 07:20:00.202 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:20:00 np0005548731 nova_compute[232433]: 2025-12-06 07:20:00.203 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:20:00 np0005548731 nova_compute[232433]: 2025-12-06 07:20:00.203 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:20:00 np0005548731 nova_compute[232433]: 2025-12-06 07:20:00.203 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:20:00 np0005548731 nova_compute[232433]: 2025-12-06 07:20:00.203 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  6 02:20:00 np0005548731 nova_compute[232433]: 2025-12-06 07:20:00.422 232437 WARNING nova.network.neutron [None req-b2c8cb97-cee4-4efd-a67d-fd204f4eb232 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] [instance: 57f8a652-f11e-4342-bae0-13c592a18ad2] 61a21643-77ba-4a09-8184-10dc4bd52b26 already exists in list: networks containing: ['61a21643-77ba-4a09-8184-10dc4bd52b26']. ignoring it#033[00m
Dec  6 02:20:00 np0005548731 nova_compute[232433]: 2025-12-06 07:20:00.422 232437 WARNING nova.network.neutron [None req-b2c8cb97-cee4-4efd-a67d-fd204f4eb232 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] [instance: 57f8a652-f11e-4342-bae0-13c592a18ad2] 61a21643-77ba-4a09-8184-10dc4bd52b26 already exists in list: networks containing: ['61a21643-77ba-4a09-8184-10dc4bd52b26']. ignoring it#033[00m
Dec  6 02:20:00 np0005548731 nova_compute[232433]: 2025-12-06 07:20:00.422 232437 WARNING nova.network.neutron [None req-b2c8cb97-cee4-4efd-a67d-fd204f4eb232 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] [instance: 57f8a652-f11e-4342-bae0-13c592a18ad2] 3036e2e9-ad2c-4f44-96e5-c7dcdea69629 already exists in list: port_ids containing: ['3036e2e9-ad2c-4f44-96e5-c7dcdea69629']. ignoring it#033[00m
Dec  6 02:20:00 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:20:00 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:20:00 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:20:00.719 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:20:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:20:00.862 143965 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:20:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:20:00.862 143965 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:20:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:20:00.862 143965 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:20:01 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:20:01 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:20:01 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:20:01.176 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:20:01 np0005548731 ceph-mon[77458]: overall HEALTH_OK
Dec  6 02:20:01 np0005548731 nova_compute[232433]: 2025-12-06 07:20:01.994 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:20:02 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:20:02 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 02:20:02 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:20:02.721 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 02:20:02 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 02:20:02 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e256 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:20:03 np0005548731 nova_compute[232433]: 2025-12-06 07:20:03.105 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:20:03 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:20:03 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:20:03 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:20:03.180 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:20:03 np0005548731 nova_compute[232433]: 2025-12-06 07:20:03.558 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:20:03 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 02:20:04 np0005548731 nova_compute[232433]: 2025-12-06 07:20:04.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:20:04 np0005548731 nova_compute[232433]: 2025-12-06 07:20:04.132 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:20:04 np0005548731 nova_compute[232433]: 2025-12-06 07:20:04.132 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:20:04 np0005548731 nova_compute[232433]: 2025-12-06 07:20:04.133 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:20:04 np0005548731 nova_compute[232433]: 2025-12-06 07:20:04.133 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  6 02:20:04 np0005548731 nova_compute[232433]: 2025-12-06 07:20:04.133 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:20:04 np0005548731 nova_compute[232433]: 2025-12-06 07:20:04.387 232437 DEBUG nova.network.neutron [None req-b2c8cb97-cee4-4efd-a67d-fd204f4eb232 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] [instance: 57f8a652-f11e-4342-bae0-13c592a18ad2] Updating instance_info_cache with network_info: [{"id": "945300f6-d49b-4923-bb2f-c73996fd2be2", "address": "fa:16:3e:94:d0:b6", "network": {"id": "61a21643-77ba-4a09-8184-10dc4bd52b26", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-327155623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.249", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "35df5125c2cf4d29a6b975951af14910", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap945300f6-d4", "ovs_interfaceid": "945300f6-d49b-4923-bb2f-c73996fd2be2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "3036e2e9-ad2c-4f44-96e5-c7dcdea69629", "address": "fa:16:3e:46:2a:84", "network": {"id": "61a21643-77ba-4a09-8184-10dc4bd52b26", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-327155623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "35df5125c2cf4d29a6b975951af14910", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3036e2e9-ad", "ovs_interfaceid": "3036e2e9-ad2c-4f44-96e5-c7dcdea69629", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 02:20:04 np0005548731 nova_compute[232433]: 2025-12-06 07:20:04.432 232437 DEBUG oslo_concurrency.lockutils [None req-b2c8cb97-cee4-4efd-a67d-fd204f4eb232 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Releasing lock "refresh_cache-57f8a652-f11e-4342-bae0-13c592a18ad2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 02:20:04 np0005548731 nova_compute[232433]: 2025-12-06 07:20:04.433 232437 DEBUG oslo_concurrency.lockutils [req-778c24c2-71b7-4d3d-bb0a-16f4fdfd6f26 req-ee23b868-d615-41fc-9491-112bacdab257 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-57f8a652-f11e-4342-bae0-13c592a18ad2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 02:20:04 np0005548731 nova_compute[232433]: 2025-12-06 07:20:04.433 232437 DEBUG nova.network.neutron [req-778c24c2-71b7-4d3d-bb0a-16f4fdfd6f26 req-ee23b868-d615-41fc-9491-112bacdab257 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 57f8a652-f11e-4342-bae0-13c592a18ad2] Refreshing network info cache for port 3036e2e9-ad2c-4f44-96e5-c7dcdea69629 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  6 02:20:04 np0005548731 nova_compute[232433]: 2025-12-06 07:20:04.437 232437 DEBUG nova.virt.libvirt.vif [None req-b2c8cb97-cee4-4efd-a67d-fd204f4eb232 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T07:19:15Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-336213926',display_name='tempest-tempest.common.compute-instance-336213926',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-336213926',id=80,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCr7yYrMfc/vYIBdNKoOdmUaOBP7ItkOZSnl6KnIUpDDyT0eG/8qC7eAR3XEk9oTu2KpOhlwPPAoNOMJMN2jqpIUNlWMRBhDhCC2NIrxJ1iqIveG6g7oihNF2Fx4CQJCwg==',key_name='tempest-keypair-56698529',keypairs=<?>,launch_index=0,launched_at=2025-12-06T07:19:25Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='35df5125c2cf4d29a6b975951af14910',ramdisk_id='',reservation_id='r-g3bk7kc7',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-2041841766',owner_user_name='tempest-AttachInterfacesTestJSON-2041841766-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T07:19:25Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='06f5b46553b24b39a1493d96ec4e503e',uuid=57f8a652-f11e-4342-bae0-13c592a18ad2,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "3036e2e9-ad2c-4f44-96e5-c7dcdea69629", "address": "fa:16:3e:46:2a:84", "network": {"id": "61a21643-77ba-4a09-8184-10dc4bd52b26", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-327155623-network", "subnets": [{"cidr": "10.100.0.0/28", 
"dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "35df5125c2cf4d29a6b975951af14910", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3036e2e9-ad", "ovs_interfaceid": "3036e2e9-ad2c-4f44-96e5-c7dcdea69629", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec  6 02:20:04 np0005548731 nova_compute[232433]: 2025-12-06 07:20:04.437 232437 DEBUG nova.network.os_vif_util [None req-b2c8cb97-cee4-4efd-a67d-fd204f4eb232 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Converting VIF {"id": "3036e2e9-ad2c-4f44-96e5-c7dcdea69629", "address": "fa:16:3e:46:2a:84", "network": {"id": "61a21643-77ba-4a09-8184-10dc4bd52b26", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-327155623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "35df5125c2cf4d29a6b975951af14910", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3036e2e9-ad", "ovs_interfaceid": "3036e2e9-ad2c-4f44-96e5-c7dcdea69629", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 02:20:04 np0005548731 nova_compute[232433]: 2025-12-06 07:20:04.438 232437 DEBUG nova.network.os_vif_util [None req-b2c8cb97-cee4-4efd-a67d-fd204f4eb232 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:46:2a:84,bridge_name='br-int',has_traffic_filtering=True,id=3036e2e9-ad2c-4f44-96e5-c7dcdea69629,network=Network(61a21643-77ba-4a09-8184-10dc4bd52b26),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap3036e2e9-ad') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 02:20:04 np0005548731 nova_compute[232433]: 2025-12-06 07:20:04.438 232437 DEBUG os_vif [None req-b2c8cb97-cee4-4efd-a67d-fd204f4eb232 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:46:2a:84,bridge_name='br-int',has_traffic_filtering=True,id=3036e2e9-ad2c-4f44-96e5-c7dcdea69629,network=Network(61a21643-77ba-4a09-8184-10dc4bd52b26),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap3036e2e9-ad') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec  6 02:20:04 np0005548731 nova_compute[232433]: 2025-12-06 07:20:04.439 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:20:04 np0005548731 nova_compute[232433]: 2025-12-06 07:20:04.440 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:20:04 np0005548731 nova_compute[232433]: 2025-12-06 07:20:04.440 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  6 02:20:04 np0005548731 nova_compute[232433]: 2025-12-06 07:20:04.443 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:20:04 np0005548731 nova_compute[232433]: 2025-12-06 07:20:04.443 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3036e2e9-ad, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:20:04 np0005548731 nova_compute[232433]: 2025-12-06 07:20:04.443 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap3036e2e9-ad, col_values=(('external_ids', {'iface-id': '3036e2e9-ad2c-4f44-96e5-c7dcdea69629', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:46:2a:84', 'vm-uuid': '57f8a652-f11e-4342-bae0-13c592a18ad2'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:20:04 np0005548731 NetworkManager[49182]: <info>  [1765005604.4588] manager: (tap3036e2e9-ad): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/162)
Dec  6 02:20:04 np0005548731 nova_compute[232433]: 2025-12-06 07:20:04.462 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:20:04 np0005548731 nova_compute[232433]: 2025-12-06 07:20:04.465 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  6 02:20:04 np0005548731 nova_compute[232433]: 2025-12-06 07:20:04.466 232437 INFO os_vif [None req-b2c8cb97-cee4-4efd-a67d-fd204f4eb232 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:46:2a:84,bridge_name='br-int',has_traffic_filtering=True,id=3036e2e9-ad2c-4f44-96e5-c7dcdea69629,network=Network(61a21643-77ba-4a09-8184-10dc4bd52b26),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap3036e2e9-ad')#033[00m
Dec  6 02:20:04 np0005548731 nova_compute[232433]: 2025-12-06 07:20:04.467 232437 DEBUG nova.virt.libvirt.vif [None req-b2c8cb97-cee4-4efd-a67d-fd204f4eb232 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T07:19:15Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-336213926',display_name='tempest-tempest.common.compute-instance-336213926',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-336213926',id=80,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCr7yYrMfc/vYIBdNKoOdmUaOBP7ItkOZSnl6KnIUpDDyT0eG/8qC7eAR3XEk9oTu2KpOhlwPPAoNOMJMN2jqpIUNlWMRBhDhCC2NIrxJ1iqIveG6g7oihNF2Fx4CQJCwg==',key_name='tempest-keypair-56698529',keypairs=<?>,launch_index=0,launched_at=2025-12-06T07:19:25Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='35df5125c2cf4d29a6b975951af14910',ramdisk_id='',reservation_id='r-g3bk7kc7',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-2041841766',owner_user_name='tempest-AttachInterfacesTestJSON-2041841766-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T07:19:25Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='06f5b46553b24b39a1493d96ec4e503e',uuid=57f8a652-f11e-4342-bae0-13c592a18ad2,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "3036e2e9-ad2c-4f44-96e5-c7dcdea69629", "address": "fa:16:3e:46:2a:84", "network": {"id": "61a21643-77ba-4a09-8184-10dc4bd52b26", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-327155623-network", "subnets": [{"cidr": "10.100.0.0/28", 
"dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "35df5125c2cf4d29a6b975951af14910", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3036e2e9-ad", "ovs_interfaceid": "3036e2e9-ad2c-4f44-96e5-c7dcdea69629", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec  6 02:20:04 np0005548731 nova_compute[232433]: 2025-12-06 07:20:04.467 232437 DEBUG nova.network.os_vif_util [None req-b2c8cb97-cee4-4efd-a67d-fd204f4eb232 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Converting VIF {"id": "3036e2e9-ad2c-4f44-96e5-c7dcdea69629", "address": "fa:16:3e:46:2a:84", "network": {"id": "61a21643-77ba-4a09-8184-10dc4bd52b26", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-327155623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "35df5125c2cf4d29a6b975951af14910", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3036e2e9-ad", "ovs_interfaceid": "3036e2e9-ad2c-4f44-96e5-c7dcdea69629", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 02:20:04 np0005548731 nova_compute[232433]: 2025-12-06 07:20:04.468 232437 DEBUG nova.network.os_vif_util [None req-b2c8cb97-cee4-4efd-a67d-fd204f4eb232 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:46:2a:84,bridge_name='br-int',has_traffic_filtering=True,id=3036e2e9-ad2c-4f44-96e5-c7dcdea69629,network=Network(61a21643-77ba-4a09-8184-10dc4bd52b26),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap3036e2e9-ad') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 02:20:04 np0005548731 nova_compute[232433]: 2025-12-06 07:20:04.470 232437 DEBUG nova.virt.libvirt.guest [None req-b2c8cb97-cee4-4efd-a67d-fd204f4eb232 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] attach device xml: <interface type="ethernet">
Dec  6 02:20:04 np0005548731 nova_compute[232433]:  <mac address="fa:16:3e:46:2a:84"/>
Dec  6 02:20:04 np0005548731 nova_compute[232433]:  <model type="virtio"/>
Dec  6 02:20:04 np0005548731 nova_compute[232433]:  <driver name="vhost" rx_queue_size="512"/>
Dec  6 02:20:04 np0005548731 nova_compute[232433]:  <mtu size="1442"/>
Dec  6 02:20:04 np0005548731 nova_compute[232433]:  <target dev="tap3036e2e9-ad"/>
Dec  6 02:20:04 np0005548731 nova_compute[232433]: </interface>
Dec  6 02:20:04 np0005548731 nova_compute[232433]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339#033[00m
Dec  6 02:20:04 np0005548731 kernel: tap3036e2e9-ad: entered promiscuous mode
Dec  6 02:20:04 np0005548731 NetworkManager[49182]: <info>  [1765005604.4829] manager: (tap3036e2e9-ad): new Tun device (/org/freedesktop/NetworkManager/Devices/163)
Dec  6 02:20:04 np0005548731 ovn_controller[133927]: 2025-12-06T07:20:04Z|00303|binding|INFO|Claiming lport 3036e2e9-ad2c-4f44-96e5-c7dcdea69629 for this chassis.
Dec  6 02:20:04 np0005548731 ovn_controller[133927]: 2025-12-06T07:20:04Z|00304|binding|INFO|3036e2e9-ad2c-4f44-96e5-c7dcdea69629: Claiming fa:16:3e:46:2a:84 10.100.0.14
Dec  6 02:20:04 np0005548731 nova_compute[232433]: 2025-12-06 07:20:04.484 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:20:04 np0005548731 ovn_controller[133927]: 2025-12-06T07:20:04Z|00305|binding|INFO|Setting lport 3036e2e9-ad2c-4f44-96e5-c7dcdea69629 ovn-installed in OVS
Dec  6 02:20:04 np0005548731 nova_compute[232433]: 2025-12-06 07:20:04.508 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:20:04 np0005548731 nova_compute[232433]: 2025-12-06 07:20:04.511 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:20:04 np0005548731 systemd-udevd[266042]: Network interface NamePolicy= disabled on kernel command line.
Dec  6 02:20:04 np0005548731 NetworkManager[49182]: <info>  [1765005604.5388] device (tap3036e2e9-ad): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec  6 02:20:04 np0005548731 NetworkManager[49182]: <info>  [1765005604.5400] device (tap3036e2e9-ad): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec  6 02:20:04 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 02:20:04 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/529352192' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 02:20:04 np0005548731 nova_compute[232433]: 2025-12-06 07:20:04.568 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.434s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:20:04 np0005548731 ovn_controller[133927]: 2025-12-06T07:20:04Z|00306|binding|INFO|Setting lport 3036e2e9-ad2c-4f44-96e5-c7dcdea69629 up in Southbound
Dec  6 02:20:04 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:20:04.574 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:46:2a:84 10.100.0.14'], port_security=['fa:16:3e:46:2a:84 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-AttachInterfacesTestJSON-1354203550', 'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '57f8a652-f11e-4342-bae0-13c592a18ad2', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-61a21643-77ba-4a09-8184-10dc4bd52b26', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-AttachInterfacesTestJSON-1354203550', 'neutron:project_id': '35df5125c2cf4d29a6b975951af14910', 'neutron:revision_number': '7', 'neutron:security_group_ids': 'e3084bf1-bc38-47e5-9deb-316970f08514', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=85f9937f-1b1f-4430-9972-982ebc33633b, chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], logical_port=3036e2e9-ad2c-4f44-96e5-c7dcdea69629) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 02:20:04 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:20:04.576 143965 INFO neutron.agent.ovn.metadata.agent [-] Port 3036e2e9-ad2c-4f44-96e5-c7dcdea69629 in datapath 61a21643-77ba-4a09-8184-10dc4bd52b26 bound to our chassis#033[00m
Dec  6 02:20:04 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:20:04.577 143965 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 61a21643-77ba-4a09-8184-10dc4bd52b26#033[00m
Dec  6 02:20:04 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:20:04.593 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[af2d05d4-c1ca-4a04-843d-74b49fb4591c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:20:04 np0005548731 nova_compute[232433]: 2025-12-06 07:20:04.604 232437 DEBUG nova.virt.libvirt.driver [None req-b2c8cb97-cee4-4efd-a67d-fd204f4eb232 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  6 02:20:04 np0005548731 nova_compute[232433]: 2025-12-06 07:20:04.605 232437 DEBUG nova.virt.libvirt.driver [None req-b2c8cb97-cee4-4efd-a67d-fd204f4eb232 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  6 02:20:04 np0005548731 nova_compute[232433]: 2025-12-06 07:20:04.605 232437 DEBUG nova.virt.libvirt.driver [None req-b2c8cb97-cee4-4efd-a67d-fd204f4eb232 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] No VIF found with MAC fa:16:3e:94:d0:b6, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec  6 02:20:04 np0005548731 nova_compute[232433]: 2025-12-06 07:20:04.605 232437 DEBUG nova.virt.libvirt.driver [None req-b2c8cb97-cee4-4efd-a67d-fd204f4eb232 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] No VIF found with MAC fa:16:3e:46:2a:84, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec  6 02:20:04 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:20:04.627 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[4dcc3ce4-68ed-4676-98e1-0bb877cb8864]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:20:04 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:20:04.629 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[8801fb82-a13a-4098-ae65-f76b196bd9c2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:20:04 np0005548731 nova_compute[232433]: 2025-12-06 07:20:04.636 232437 DEBUG nova.virt.libvirt.guest [None req-b2c8cb97-cee4-4efd-a67d-fd204f4eb232 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec  6 02:20:04 np0005548731 nova_compute[232433]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec  6 02:20:04 np0005548731 nova_compute[232433]:  <nova:name>tempest-tempest.common.compute-instance-336213926</nova:name>
Dec  6 02:20:04 np0005548731 nova_compute[232433]:  <nova:creationTime>2025-12-06 07:20:04</nova:creationTime>
Dec  6 02:20:04 np0005548731 nova_compute[232433]:  <nova:flavor name="m1.nano">
Dec  6 02:20:04 np0005548731 nova_compute[232433]:    <nova:memory>128</nova:memory>
Dec  6 02:20:04 np0005548731 nova_compute[232433]:    <nova:disk>1</nova:disk>
Dec  6 02:20:04 np0005548731 nova_compute[232433]:    <nova:swap>0</nova:swap>
Dec  6 02:20:04 np0005548731 nova_compute[232433]:    <nova:ephemeral>0</nova:ephemeral>
Dec  6 02:20:04 np0005548731 nova_compute[232433]:    <nova:vcpus>1</nova:vcpus>
Dec  6 02:20:04 np0005548731 nova_compute[232433]:  </nova:flavor>
Dec  6 02:20:04 np0005548731 nova_compute[232433]:  <nova:owner>
Dec  6 02:20:04 np0005548731 nova_compute[232433]:    <nova:user uuid="06f5b46553b24b39a1493d96ec4e503e">tempest-AttachInterfacesTestJSON-2041841766-project-member</nova:user>
Dec  6 02:20:04 np0005548731 nova_compute[232433]:    <nova:project uuid="35df5125c2cf4d29a6b975951af14910">tempest-AttachInterfacesTestJSON-2041841766</nova:project>
Dec  6 02:20:04 np0005548731 nova_compute[232433]:  </nova:owner>
Dec  6 02:20:04 np0005548731 nova_compute[232433]:  <nova:root type="image" uuid="6efab05d-c7cf-4770-a5c3-c806a2739063"/>
Dec  6 02:20:04 np0005548731 nova_compute[232433]:  <nova:ports>
Dec  6 02:20:04 np0005548731 nova_compute[232433]:    <nova:port uuid="945300f6-d49b-4923-bb2f-c73996fd2be2">
Dec  6 02:20:04 np0005548731 nova_compute[232433]:      <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Dec  6 02:20:04 np0005548731 nova_compute[232433]:    </nova:port>
Dec  6 02:20:04 np0005548731 nova_compute[232433]:    <nova:port uuid="3036e2e9-ad2c-4f44-96e5-c7dcdea69629">
Dec  6 02:20:04 np0005548731 nova_compute[232433]:      <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Dec  6 02:20:04 np0005548731 nova_compute[232433]:    </nova:port>
Dec  6 02:20:04 np0005548731 nova_compute[232433]:  </nova:ports>
Dec  6 02:20:04 np0005548731 nova_compute[232433]: </nova:instance>
Dec  6 02:20:04 np0005548731 nova_compute[232433]: set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359#033[00m
Dec  6 02:20:04 np0005548731 nova_compute[232433]: 2025-12-06 07:20:04.646 232437 DEBUG nova.virt.libvirt.driver [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] skipping disk for instance-00000050 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  6 02:20:04 np0005548731 nova_compute[232433]: 2025-12-06 07:20:04.646 232437 DEBUG nova.virt.libvirt.driver [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] skipping disk for instance-00000050 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  6 02:20:04 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:20:04.657 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[7e7bb71a-d914-4212-b290-d4821d6f9587]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:20:04 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:20:04.673 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[22fc4d26-d598-492e-9135-c9beebd21fed]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap61a21643-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:91:67:b1'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 99], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 582661, 'reachable_time': 18555, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 266053, 'error': None, 'target': 'ovnmeta-61a21643-77ba-4a09-8184-10dc4bd52b26', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:20:04 np0005548731 nova_compute[232433]: 2025-12-06 07:20:04.675 232437 DEBUG oslo_concurrency.lockutils [None req-b2c8cb97-cee4-4efd-a67d-fd204f4eb232 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Lock "interface-57f8a652-f11e-4342-bae0-13c592a18ad2-3036e2e9-ad2c-4f44-96e5-c7dcdea69629" "released" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: held 11.208s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:20:04 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:20:04.688 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[e48f551f-372b-446b-bc35-d3353a5706a0]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap61a21643-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 582673, 'tstamp': 582673}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 266054, 'error': None, 'target': 'ovnmeta-61a21643-77ba-4a09-8184-10dc4bd52b26', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap61a21643-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 582675, 'tstamp': 582675}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 266054, 'error': None, 'target': 'ovnmeta-61a21643-77ba-4a09-8184-10dc4bd52b26', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:20:04 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:20:04.689 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap61a21643-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:20:04 np0005548731 nova_compute[232433]: 2025-12-06 07:20:04.691 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:20:04 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:20:04.692 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap61a21643-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:20:04 np0005548731 nova_compute[232433]: 2025-12-06 07:20:04.692 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:20:04 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:20:04.692 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  6 02:20:04 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:20:04.693 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap61a21643-70, col_values=(('external_ids', {'iface-id': '8e8469cb-4434-4b4c-9dcf-a6a8244c2597'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:20:04 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:20:04.693 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  6 02:20:04 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:20:04 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 02:20:04 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:20:04.724 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 02:20:04 np0005548731 nova_compute[232433]: 2025-12-06 07:20:04.805 232437 WARNING nova.virt.libvirt.driver [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  6 02:20:04 np0005548731 nova_compute[232433]: 2025-12-06 07:20:04.806 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4392MB free_disk=20.85144805908203GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, 
"label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  6 02:20:04 np0005548731 nova_compute[232433]: 2025-12-06 07:20:04.806 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:20:04 np0005548731 nova_compute[232433]: 2025-12-06 07:20:04.806 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:20:04 np0005548731 nova_compute[232433]: 2025-12-06 07:20:04.883 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Instance 57f8a652-f11e-4342-bae0-13c592a18ad2 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  6 02:20:04 np0005548731 nova_compute[232433]: 2025-12-06 07:20:04.883 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  6 02:20:04 np0005548731 nova_compute[232433]: 2025-12-06 07:20:04.883 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  6 02:20:04 np0005548731 nova_compute[232433]: 2025-12-06 07:20:04.909 232437 DEBUG nova.compute.manager [req-cec0c9b4-0f23-45dc-ac12-7a2fb57bd32f req-fb5d5b12-bf40-4549-8250-5f575e905acb 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 57f8a652-f11e-4342-bae0-13c592a18ad2] Received event network-vif-plugged-3036e2e9-ad2c-4f44-96e5-c7dcdea69629 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:20:04 np0005548731 nova_compute[232433]: 2025-12-06 07:20:04.910 232437 DEBUG oslo_concurrency.lockutils [req-cec0c9b4-0f23-45dc-ac12-7a2fb57bd32f req-fb5d5b12-bf40-4549-8250-5f575e905acb 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "57f8a652-f11e-4342-bae0-13c592a18ad2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:20:04 np0005548731 nova_compute[232433]: 2025-12-06 07:20:04.911 232437 DEBUG oslo_concurrency.lockutils [req-cec0c9b4-0f23-45dc-ac12-7a2fb57bd32f req-fb5d5b12-bf40-4549-8250-5f575e905acb 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "57f8a652-f11e-4342-bae0-13c592a18ad2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:20:04 np0005548731 nova_compute[232433]: 2025-12-06 07:20:04.911 232437 DEBUG oslo_concurrency.lockutils [req-cec0c9b4-0f23-45dc-ac12-7a2fb57bd32f req-fb5d5b12-bf40-4549-8250-5f575e905acb 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "57f8a652-f11e-4342-bae0-13c592a18ad2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:20:04 np0005548731 nova_compute[232433]: 2025-12-06 07:20:04.911 232437 DEBUG nova.compute.manager [req-cec0c9b4-0f23-45dc-ac12-7a2fb57bd32f req-fb5d5b12-bf40-4549-8250-5f575e905acb 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 57f8a652-f11e-4342-bae0-13c592a18ad2] No waiting events found dispatching network-vif-plugged-3036e2e9-ad2c-4f44-96e5-c7dcdea69629 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 02:20:04 np0005548731 nova_compute[232433]: 2025-12-06 07:20:04.912 232437 WARNING nova.compute.manager [req-cec0c9b4-0f23-45dc-ac12-7a2fb57bd32f req-fb5d5b12-bf40-4549-8250-5f575e905acb 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 57f8a652-f11e-4342-bae0-13c592a18ad2] Received unexpected event network-vif-plugged-3036e2e9-ad2c-4f44-96e5-c7dcdea69629 for instance with vm_state active and task_state None.#033[00m
Dec  6 02:20:04 np0005548731 nova_compute[232433]: 2025-12-06 07:20:04.932 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:20:05 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:20:05 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:20:05 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:20:05.183 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:20:05 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 02:20:05 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/378322614' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 02:20:05 np0005548731 nova_compute[232433]: 2025-12-06 07:20:05.355 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.423s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:20:05 np0005548731 nova_compute[232433]: 2025-12-06 07:20:05.360 232437 DEBUG nova.compute.provider_tree [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Inventory has not changed in ProviderTree for provider: 6d00757a-082f-486d-ae84-869a2ba2e6e7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  6 02:20:05 np0005548731 nova_compute[232433]: 2025-12-06 07:20:05.377 232437 DEBUG nova.scheduler.client.report [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Inventory has not changed for provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  6 02:20:05 np0005548731 nova_compute[232433]: 2025-12-06 07:20:05.415 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  6 02:20:05 np0005548731 nova_compute[232433]: 2025-12-06 07:20:05.415 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.609s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:20:05 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:20:05.418 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=32, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:ec:b3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '32:72:e7:89:e0:7d'}, ipsec=False) old=SB_Global(nb_cfg=31) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 02:20:05 np0005548731 nova_compute[232433]: 2025-12-06 07:20:05.418 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:20:05 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:20:05.419 143965 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec  6 02:20:05 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:20:05.420 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=9f96b960-b4f2-40bd-ae99-08121f5e8b78, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '32'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:20:05 np0005548731 nova_compute[232433]: 2025-12-06 07:20:05.531 232437 DEBUG oslo_concurrency.lockutils [None req-53983015-c7b3-4a33-ad48-1f2f3fc4d2d9 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Acquiring lock "interface-57f8a652-f11e-4342-bae0-13c592a18ad2-3036e2e9-ad2c-4f44-96e5-c7dcdea69629" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:20:05 np0005548731 nova_compute[232433]: 2025-12-06 07:20:05.531 232437 DEBUG oslo_concurrency.lockutils [None req-53983015-c7b3-4a33-ad48-1f2f3fc4d2d9 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Lock "interface-57f8a652-f11e-4342-bae0-13c592a18ad2-3036e2e9-ad2c-4f44-96e5-c7dcdea69629" acquired by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:20:05 np0005548731 nova_compute[232433]: 2025-12-06 07:20:05.546 232437 DEBUG nova.objects.instance [None req-53983015-c7b3-4a33-ad48-1f2f3fc4d2d9 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Lazy-loading 'flavor' on Instance uuid 57f8a652-f11e-4342-bae0-13c592a18ad2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:20:05 np0005548731 nova_compute[232433]: 2025-12-06 07:20:05.568 232437 DEBUG nova.virt.libvirt.vif [None req-53983015-c7b3-4a33-ad48-1f2f3fc4d2d9 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T07:19:15Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-336213926',display_name='tempest-tempest.common.compute-instance-336213926',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-336213926',id=80,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCr7yYrMfc/vYIBdNKoOdmUaOBP7ItkOZSnl6KnIUpDDyT0eG/8qC7eAR3XEk9oTu2KpOhlwPPAoNOMJMN2jqpIUNlWMRBhDhCC2NIrxJ1iqIveG6g7oihNF2Fx4CQJCwg==',key_name='tempest-keypair-56698529',keypairs=<?>,launch_index=0,launched_at=2025-12-06T07:19:25Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='35df5125c2cf4d29a6b975951af14910',ramdisk_id='',reservation_id='r-g3bk7kc7',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-2041841766',owner_user_name='tempest-AttachInterfacesTestJSON-2041841766-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T07:19:25Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='06f5b46553b24b39a1493d96ec4e503e',uuid=57f8a652-f11e-4342-bae0-13c592a18ad2,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "3036e2e9-ad2c-4f44-96e5-c7dcdea69629", "address": "fa:16:3e:46:2a:84", "network": {"id": "61a21643-77ba-4a09-8184-10dc4bd52b26", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-327155623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "35df5125c2cf4d29a6b975951af14910", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3036e2e9-ad", "ovs_interfaceid": "3036e2e9-ad2c-4f44-96e5-c7dcdea69629", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec  6 02:20:05 np0005548731 nova_compute[232433]: 2025-12-06 07:20:05.569 232437 DEBUG nova.network.os_vif_util [None req-53983015-c7b3-4a33-ad48-1f2f3fc4d2d9 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Converting VIF {"id": "3036e2e9-ad2c-4f44-96e5-c7dcdea69629", "address": "fa:16:3e:46:2a:84", "network": {"id": "61a21643-77ba-4a09-8184-10dc4bd52b26", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-327155623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "35df5125c2cf4d29a6b975951af14910", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3036e2e9-ad", "ovs_interfaceid": "3036e2e9-ad2c-4f44-96e5-c7dcdea69629", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 02:20:05 np0005548731 nova_compute[232433]: 2025-12-06 07:20:05.570 232437 DEBUG nova.network.os_vif_util [None req-53983015-c7b3-4a33-ad48-1f2f3fc4d2d9 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:46:2a:84,bridge_name='br-int',has_traffic_filtering=True,id=3036e2e9-ad2c-4f44-96e5-c7dcdea69629,network=Network(61a21643-77ba-4a09-8184-10dc4bd52b26),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap3036e2e9-ad') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 02:20:05 np0005548731 nova_compute[232433]: 2025-12-06 07:20:05.573 232437 DEBUG nova.virt.libvirt.guest [None req-53983015-c7b3-4a33-ad48-1f2f3fc4d2d9 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:46:2a:84"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap3036e2e9-ad"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Dec  6 02:20:05 np0005548731 nova_compute[232433]: 2025-12-06 07:20:05.577 232437 DEBUG nova.virt.libvirt.guest [None req-53983015-c7b3-4a33-ad48-1f2f3fc4d2d9 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:46:2a:84"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap3036e2e9-ad"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Dec  6 02:20:05 np0005548731 nova_compute[232433]: 2025-12-06 07:20:05.579 232437 DEBUG nova.virt.libvirt.driver [None req-53983015-c7b3-4a33-ad48-1f2f3fc4d2d9 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Attempting to detach device tap3036e2e9-ad from instance 57f8a652-f11e-4342-bae0-13c592a18ad2 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487#033[00m
Dec  6 02:20:05 np0005548731 nova_compute[232433]: 2025-12-06 07:20:05.579 232437 DEBUG nova.virt.libvirt.guest [None req-53983015-c7b3-4a33-ad48-1f2f3fc4d2d9 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] detach device xml: <interface type="ethernet">
Dec  6 02:20:05 np0005548731 nova_compute[232433]:  <mac address="fa:16:3e:46:2a:84"/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:  <model type="virtio"/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:  <driver name="vhost" rx_queue_size="512"/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:  <mtu size="1442"/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:  <target dev="tap3036e2e9-ad"/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]: </interface>
Dec  6 02:20:05 np0005548731 nova_compute[232433]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Dec  6 02:20:05 np0005548731 nova_compute[232433]: 2025-12-06 07:20:05.585 232437 DEBUG nova.virt.libvirt.guest [None req-53983015-c7b3-4a33-ad48-1f2f3fc4d2d9 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:46:2a:84"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap3036e2e9-ad"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Dec  6 02:20:05 np0005548731 nova_compute[232433]: 2025-12-06 07:20:05.588 232437 DEBUG nova.virt.libvirt.guest [None req-53983015-c7b3-4a33-ad48-1f2f3fc4d2d9 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:46:2a:84"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap3036e2e9-ad"/></interface>not found in domain: <domain type='kvm' id='33'>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:  <name>instance-00000050</name>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:  <uuid>57f8a652-f11e-4342-bae0-13c592a18ad2</uuid>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:  <metadata>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec  6 02:20:05 np0005548731 nova_compute[232433]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:  <nova:name>tempest-tempest.common.compute-instance-336213926</nova:name>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:  <nova:creationTime>2025-12-06 07:20:04</nova:creationTime>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:  <nova:flavor name="m1.nano">
Dec  6 02:20:05 np0005548731 nova_compute[232433]:    <nova:memory>128</nova:memory>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:    <nova:disk>1</nova:disk>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:    <nova:swap>0</nova:swap>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:    <nova:ephemeral>0</nova:ephemeral>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:    <nova:vcpus>1</nova:vcpus>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:  </nova:flavor>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:  <nova:owner>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:    <nova:user uuid="06f5b46553b24b39a1493d96ec4e503e">tempest-AttachInterfacesTestJSON-2041841766-project-member</nova:user>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:    <nova:project uuid="35df5125c2cf4d29a6b975951af14910">tempest-AttachInterfacesTestJSON-2041841766</nova:project>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:  </nova:owner>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:  <nova:root type="image" uuid="6efab05d-c7cf-4770-a5c3-c806a2739063"/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:  <nova:ports>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:    <nova:port uuid="945300f6-d49b-4923-bb2f-c73996fd2be2">
Dec  6 02:20:05 np0005548731 nova_compute[232433]:      <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:    </nova:port>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:    <nova:port uuid="3036e2e9-ad2c-4f44-96e5-c7dcdea69629">
Dec  6 02:20:05 np0005548731 nova_compute[232433]:      <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:    </nova:port>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:  </nova:ports>
Dec  6 02:20:05 np0005548731 nova_compute[232433]: </nova:instance>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:  </metadata>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:  <memory unit='KiB'>131072</memory>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:  <currentMemory unit='KiB'>131072</currentMemory>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:  <vcpu placement='static'>1</vcpu>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:  <resource>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:    <partition>/machine</partition>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:  </resource>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:  <sysinfo type='smbios'>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:    <system>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:      <entry name='manufacturer'>RDO</entry>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:      <entry name='product'>OpenStack Compute</entry>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:      <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:      <entry name='serial'>57f8a652-f11e-4342-bae0-13c592a18ad2</entry>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:      <entry name='uuid'>57f8a652-f11e-4342-bae0-13c592a18ad2</entry>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:      <entry name='family'>Virtual Machine</entry>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:    </system>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:  </sysinfo>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:  <os>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:    <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:    <boot dev='hd'/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:    <smbios mode='sysinfo'/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:  </os>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:  <features>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:    <acpi/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:    <apic/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:    <vmcoreinfo state='on'/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:  </features>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:  <cpu mode='custom' match='exact' check='full'>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:    <model fallback='forbid'>Nehalem</model>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:    <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:    <feature policy='require' name='x2apic'/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:    <feature policy='require' name='hypervisor'/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:    <feature policy='require' name='vme'/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:  </cpu>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:  <clock offset='utc'>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:    <timer name='pit' tickpolicy='delay'/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:    <timer name='rtc' tickpolicy='catchup'/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:    <timer name='hpet' present='no'/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:  </clock>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:  <on_poweroff>destroy</on_poweroff>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:  <on_reboot>restart</on_reboot>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:  <on_crash>destroy</on_crash>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:  <devices>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:    <emulator>/usr/libexec/qemu-kvm</emulator>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:    <disk type='network' device='disk'>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:      <driver name='qemu' type='raw' cache='none'/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:      <auth username='openstack'>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:        <secret type='ceph' uuid='40a1bae4-cf76-5610-8dab-c75116dfe0bb'/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:      </auth>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:      <source protocol='rbd' name='vms/57f8a652-f11e-4342-bae0-13c592a18ad2_disk' index='2'>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:        <host name='192.168.122.100' port='6789'/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:        <host name='192.168.122.102' port='6789'/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:        <host name='192.168.122.101' port='6789'/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:      </source>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:      <target dev='vda' bus='virtio'/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:      <alias name='virtio-disk0'/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:    </disk>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:    <disk type='network' device='cdrom'>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:      <driver name='qemu' type='raw' cache='none'/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:      <auth username='openstack'>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:        <secret type='ceph' uuid='40a1bae4-cf76-5610-8dab-c75116dfe0bb'/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:      </auth>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:      <source protocol='rbd' name='vms/57f8a652-f11e-4342-bae0-13c592a18ad2_disk.config' index='1'>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:        <host name='192.168.122.100' port='6789'/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:        <host name='192.168.122.102' port='6789'/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:        <host name='192.168.122.101' port='6789'/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:      </source>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:      <target dev='sda' bus='sata'/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:      <readonly/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:      <alias name='sata0-0-0'/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:    </disk>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:    <controller type='pci' index='0' model='pcie-root'>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:      <alias name='pcie.0'/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:    </controller>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:    <controller type='pci' index='1' model='pcie-root-port'>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:      <model name='pcie-root-port'/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:      <target chassis='1' port='0x10'/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:      <alias name='pci.1'/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:    </controller>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:    <controller type='pci' index='2' model='pcie-root-port'>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:      <model name='pcie-root-port'/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:      <target chassis='2' port='0x11'/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:      <alias name='pci.2'/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:    </controller>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:    <controller type='pci' index='3' model='pcie-root-port'>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:      <model name='pcie-root-port'/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:      <target chassis='3' port='0x12'/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:      <alias name='pci.3'/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:    </controller>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:    <controller type='pci' index='4' model='pcie-root-port'>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:      <model name='pcie-root-port'/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:      <target chassis='4' port='0x13'/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:      <alias name='pci.4'/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:    </controller>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:    <controller type='pci' index='5' model='pcie-root-port'>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:      <model name='pcie-root-port'/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:      <target chassis='5' port='0x14'/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:      <alias name='pci.5'/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:    </controller>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:    <controller type='pci' index='6' model='pcie-root-port'>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:      <model name='pcie-root-port'/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:      <target chassis='6' port='0x15'/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:      <alias name='pci.6'/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:    </controller>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:    <controller type='pci' index='7' model='pcie-root-port'>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:      <model name='pcie-root-port'/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:      <target chassis='7' port='0x16'/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:      <alias name='pci.7'/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:    </controller>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:    <controller type='pci' index='8' model='pcie-root-port'>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:      <model name='pcie-root-port'/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:      <target chassis='8' port='0x17'/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:      <alias name='pci.8'/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:    </controller>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:    <controller type='pci' index='9' model='pcie-root-port'>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:      <model name='pcie-root-port'/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:      <target chassis='9' port='0x18'/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:      <alias name='pci.9'/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:    </controller>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:    <controller type='pci' index='10' model='pcie-root-port'>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:      <model name='pcie-root-port'/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:      <target chassis='10' port='0x19'/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:      <alias name='pci.10'/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:    </controller>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:    <controller type='pci' index='11' model='pcie-root-port'>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:      <model name='pcie-root-port'/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:      <target chassis='11' port='0x1a'/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:      <alias name='pci.11'/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:    </controller>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:    <controller type='pci' index='12' model='pcie-root-port'>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:      <model name='pcie-root-port'/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:      <target chassis='12' port='0x1b'/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:      <alias name='pci.12'/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:    </controller>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:    <controller type='pci' index='13' model='pcie-root-port'>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:      <model name='pcie-root-port'/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:      <target chassis='13' port='0x1c'/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:      <alias name='pci.13'/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:    </controller>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:    <controller type='pci' index='14' model='pcie-root-port'>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:      <model name='pcie-root-port'/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:      <target chassis='14' port='0x1d'/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:      <alias name='pci.14'/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:    </controller>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:    <controller type='pci' index='15' model='pcie-root-port'>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:      <model name='pcie-root-port'/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:      <target chassis='15' port='0x1e'/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:      <alias name='pci.15'/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:    </controller>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:    <controller type='pci' index='16' model='pcie-root-port'>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:      <model name='pcie-root-port'/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:      <target chassis='16' port='0x1f'/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:      <alias name='pci.16'/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:    </controller>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:    <controller type='pci' index='17' model='pcie-root-port'>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:      <model name='pcie-root-port'/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:      <target chassis='17' port='0x20'/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:      <alias name='pci.17'/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:    </controller>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:    <controller type='pci' index='18' model='pcie-root-port'>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:      <model name='pcie-root-port'/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:      <target chassis='18' port='0x21'/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:      <alias name='pci.18'/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:    </controller>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:    <controller type='pci' index='19' model='pcie-root-port'>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:      <model name='pcie-root-port'/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:      <target chassis='19' port='0x22'/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:      <alias name='pci.19'/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:    </controller>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:    <controller type='pci' index='20' model='pcie-root-port'>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:      <model name='pcie-root-port'/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:      <target chassis='20' port='0x23'/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:      <alias name='pci.20'/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:    </controller>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:    <controller type='pci' index='21' model='pcie-root-port'>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:      <model name='pcie-root-port'/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:      <target chassis='21' port='0x24'/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:      <alias name='pci.21'/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:    </controller>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:    <controller type='pci' index='22' model='pcie-root-port'>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:      <model name='pcie-root-port'/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:      <target chassis='22' port='0x25'/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:      <alias name='pci.22'/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:    </controller>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:    <controller type='pci' index='23' model='pcie-root-port'>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:      <model name='pcie-root-port'/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:      <target chassis='23' port='0x26'/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:      <alias name='pci.23'/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:    </controller>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:    <controller type='pci' index='24' model='pcie-root-port'>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:      <model name='pcie-root-port'/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:      <target chassis='24' port='0x27'/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:      <alias name='pci.24'/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:    </controller>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:    <controller type='pci' index='25' model='pcie-root-port'>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:      <model name='pcie-root-port'/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:      <target chassis='25' port='0x28'/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:      <alias name='pci.25'/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:    </controller>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:    <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:      <model name='pcie-pci-bridge'/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:      <alias name='pci.26'/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:    </controller>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:    <controller type='usb' index='0' model='piix3-uhci'>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:      <alias name='usb'/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:      <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:    </controller>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:    <controller type='sata' index='0'>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:      <alias name='ide'/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:    </controller>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:    <interface type='ethernet'>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:      <mac address='fa:16:3e:94:d0:b6'/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:      <target dev='tap945300f6-d4'/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:      <model type='virtio'/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:      <driver name='vhost' rx_queue_size='512'/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:      <mtu size='1442'/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:      <alias name='net0'/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:    </interface>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:    <interface type='ethernet'>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:      <mac address='fa:16:3e:46:2a:84'/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:      <target dev='tap3036e2e9-ad'/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:      <model type='virtio'/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:      <driver name='vhost' rx_queue_size='512'/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:      <mtu size='1442'/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:      <alias name='net1'/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:      <address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:    </interface>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:    <serial type='pty'>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:      <source path='/dev/pts/0'/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:      <log file='/var/lib/nova/instances/57f8a652-f11e-4342-bae0-13c592a18ad2/console.log' append='off'/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:      <target type='isa-serial' port='0'>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:        <model name='isa-serial'/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:      </target>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:      <alias name='serial0'/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:    </serial>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:    <console type='pty' tty='/dev/pts/0'>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:      <source path='/dev/pts/0'/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:      <log file='/var/lib/nova/instances/57f8a652-f11e-4342-bae0-13c592a18ad2/console.log' append='off'/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:      <target type='serial' port='0'/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:      <alias name='serial0'/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:    </console>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:    <input type='tablet' bus='usb'>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:      <alias name='input0'/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:      <address type='usb' bus='0' port='1'/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:    </input>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:    <input type='mouse' bus='ps2'>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:      <alias name='input1'/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:    </input>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:    <input type='keyboard' bus='ps2'>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:      <alias name='input2'/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:    </input>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:    <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:      <listen type='address' address='::0'/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:    </graphics>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:    <audio id='1' type='none'/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:    <video>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:      <model type='virtio' heads='1' primary='yes'/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:      <alias name='video0'/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:    </video>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:    <watchdog model='itco' action='reset'>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:      <alias name='watchdog0'/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:    </watchdog>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:    <memballoon model='virtio'>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:      <stats period='10'/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:      <alias name='balloon0'/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:    </memballoon>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:    <rng model='virtio'>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:      <backend model='random'>/dev/urandom</backend>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:      <alias name='rng0'/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:    </rng>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:  </devices>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:  <seclabel type='dynamic' model='selinux' relabel='yes'>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:    <label>system_u:system_r:svirt_t:s0:c408,c451</label>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:    <imagelabel>system_u:object_r:svirt_image_t:s0:c408,c451</imagelabel>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:  </seclabel>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:  <seclabel type='dynamic' model='dac' relabel='yes'>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:    <label>+107:+107</label>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:    <imagelabel>+107:+107</imagelabel>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:  </seclabel>
Dec  6 02:20:05 np0005548731 nova_compute[232433]: </domain>
Dec  6 02:20:05 np0005548731 nova_compute[232433]: get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Dec  6 02:20:05 np0005548731 nova_compute[232433]: 2025-12-06 07:20:05.589 232437 INFO nova.virt.libvirt.driver [None req-53983015-c7b3-4a33-ad48-1f2f3fc4d2d9 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Successfully detached device tap3036e2e9-ad from instance 57f8a652-f11e-4342-bae0-13c592a18ad2 from the persistent domain config.
Dec  6 02:20:05 np0005548731 nova_compute[232433]: 2025-12-06 07:20:05.590 232437 DEBUG nova.virt.libvirt.driver [None req-53983015-c7b3-4a33-ad48-1f2f3fc4d2d9 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] (1/8): Attempting to detach device tap3036e2e9-ad with device alias net1 from instance 57f8a652-f11e-4342-bae0-13c592a18ad2 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Dec  6 02:20:05 np0005548731 nova_compute[232433]: 2025-12-06 07:20:05.590 232437 DEBUG nova.virt.libvirt.guest [None req-53983015-c7b3-4a33-ad48-1f2f3fc4d2d9 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] detach device xml: <interface type="ethernet">
Dec  6 02:20:05 np0005548731 nova_compute[232433]:  <mac address="fa:16:3e:46:2a:84"/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:  <model type="virtio"/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:  <driver name="vhost" rx_queue_size="512"/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:  <mtu size="1442"/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:  <target dev="tap3036e2e9-ad"/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]: </interface>
Dec  6 02:20:05 np0005548731 nova_compute[232433]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Dec  6 02:20:05 np0005548731 kernel: tap3036e2e9-ad (unregistering): left promiscuous mode
Dec  6 02:20:05 np0005548731 NetworkManager[49182]: <info>  [1765005605.6315] device (tap3036e2e9-ad): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec  6 02:20:05 np0005548731 ovn_controller[133927]: 2025-12-06T07:20:05Z|00307|binding|INFO|Releasing lport 3036e2e9-ad2c-4f44-96e5-c7dcdea69629 from this chassis (sb_readonly=0)
Dec  6 02:20:05 np0005548731 ovn_controller[133927]: 2025-12-06T07:20:05Z|00308|binding|INFO|Setting lport 3036e2e9-ad2c-4f44-96e5-c7dcdea69629 down in Southbound
Dec  6 02:20:05 np0005548731 nova_compute[232433]: 2025-12-06 07:20:05.642 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 02:20:05 np0005548731 ovn_controller[133927]: 2025-12-06T07:20:05Z|00309|binding|INFO|Removing iface tap3036e2e9-ad ovn-installed in OVS
Dec  6 02:20:05 np0005548731 nova_compute[232433]: 2025-12-06 07:20:05.643 232437 DEBUG nova.virt.libvirt.driver [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Received event <DeviceRemovedEvent: 1765005605.6436183, 57f8a652-f11e-4342-bae0-13c592a18ad2 => net1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Dec  6 02:20:05 np0005548731 nova_compute[232433]: 2025-12-06 07:20:05.644 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 02:20:05 np0005548731 nova_compute[232433]: 2025-12-06 07:20:05.645 232437 DEBUG nova.virt.libvirt.driver [None req-53983015-c7b3-4a33-ad48-1f2f3fc4d2d9 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Start waiting for the detach event from libvirt for device tap3036e2e9-ad with device alias net1 for instance 57f8a652-f11e-4342-bae0-13c592a18ad2 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Dec  6 02:20:05 np0005548731 nova_compute[232433]: 2025-12-06 07:20:05.645 232437 DEBUG nova.virt.libvirt.guest [None req-53983015-c7b3-4a33-ad48-1f2f3fc4d2d9 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:46:2a:84"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap3036e2e9-ad"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Dec  6 02:20:05 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:20:05.649 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:46:2a:84 10.100.0.14'], port_security=['fa:16:3e:46:2a:84 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-AttachInterfacesTestJSON-1354203550', 'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '57f8a652-f11e-4342-bae0-13c592a18ad2', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-61a21643-77ba-4a09-8184-10dc4bd52b26', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-AttachInterfacesTestJSON-1354203550', 'neutron:project_id': '35df5125c2cf4d29a6b975951af14910', 'neutron:revision_number': '9', 'neutron:security_group_ids': 'e3084bf1-bc38-47e5-9deb-316970f08514', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=85f9937f-1b1f-4430-9972-982ebc33633b, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], logical_port=3036e2e9-ad2c-4f44-96e5-c7dcdea69629) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec  6 02:20:05 np0005548731 nova_compute[232433]: 2025-12-06 07:20:05.649 232437 DEBUG nova.virt.libvirt.guest [None req-53983015-c7b3-4a33-ad48-1f2f3fc4d2d9 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:46:2a:84"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap3036e2e9-ad"/></interface>not found in domain: <domain type='kvm' id='33'>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:  <name>instance-00000050</name>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:  <uuid>57f8a652-f11e-4342-bae0-13c592a18ad2</uuid>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:  <metadata>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec  6 02:20:05 np0005548731 nova_compute[232433]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:  <nova:name>tempest-tempest.common.compute-instance-336213926</nova:name>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:  <nova:creationTime>2025-12-06 07:20:04</nova:creationTime>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:  <nova:flavor name="m1.nano">
Dec  6 02:20:05 np0005548731 nova_compute[232433]:    <nova:memory>128</nova:memory>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:    <nova:disk>1</nova:disk>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:    <nova:swap>0</nova:swap>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:    <nova:ephemeral>0</nova:ephemeral>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:    <nova:vcpus>1</nova:vcpus>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:  </nova:flavor>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:  <nova:owner>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:    <nova:user uuid="06f5b46553b24b39a1493d96ec4e503e">tempest-AttachInterfacesTestJSON-2041841766-project-member</nova:user>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:    <nova:project uuid="35df5125c2cf4d29a6b975951af14910">tempest-AttachInterfacesTestJSON-2041841766</nova:project>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:  </nova:owner>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:  <nova:root type="image" uuid="6efab05d-c7cf-4770-a5c3-c806a2739063"/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:  <nova:ports>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:    <nova:port uuid="945300f6-d49b-4923-bb2f-c73996fd2be2">
Dec  6 02:20:05 np0005548731 nova_compute[232433]:      <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:    </nova:port>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:    <nova:port uuid="3036e2e9-ad2c-4f44-96e5-c7dcdea69629">
Dec  6 02:20:05 np0005548731 nova_compute[232433]:      <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:    </nova:port>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:  </nova:ports>
Dec  6 02:20:05 np0005548731 nova_compute[232433]: </nova:instance>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:  </metadata>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:  <memory unit='KiB'>131072</memory>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:  <currentMemory unit='KiB'>131072</currentMemory>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:  <vcpu placement='static'>1</vcpu>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:  <resource>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:    <partition>/machine</partition>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:  </resource>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:  <sysinfo type='smbios'>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:    <system>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:      <entry name='manufacturer'>RDO</entry>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:      <entry name='product'>OpenStack Compute</entry>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:      <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:      <entry name='serial'>57f8a652-f11e-4342-bae0-13c592a18ad2</entry>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:      <entry name='uuid'>57f8a652-f11e-4342-bae0-13c592a18ad2</entry>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:      <entry name='family'>Virtual Machine</entry>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:    </system>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:  </sysinfo>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:  <os>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:    <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:    <boot dev='hd'/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:    <smbios mode='sysinfo'/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:  </os>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:  <features>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:    <acpi/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:    <apic/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:    <vmcoreinfo state='on'/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:  </features>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:  <cpu mode='custom' match='exact' check='full'>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:    <model fallback='forbid'>Nehalem</model>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:    <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:    <feature policy='require' name='x2apic'/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:    <feature policy='require' name='hypervisor'/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:    <feature policy='require' name='vme'/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:  </cpu>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:  <clock offset='utc'>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:    <timer name='pit' tickpolicy='delay'/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:    <timer name='rtc' tickpolicy='catchup'/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:    <timer name='hpet' present='no'/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:  </clock>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:  <on_poweroff>destroy</on_poweroff>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:  <on_reboot>restart</on_reboot>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:  <on_crash>destroy</on_crash>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:  <devices>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:    <emulator>/usr/libexec/qemu-kvm</emulator>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:    <disk type='network' device='disk'>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:      <driver name='qemu' type='raw' cache='none'/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:      <auth username='openstack'>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:        <secret type='ceph' uuid='40a1bae4-cf76-5610-8dab-c75116dfe0bb'/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:      </auth>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:      <source protocol='rbd' name='vms/57f8a652-f11e-4342-bae0-13c592a18ad2_disk' index='2'>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:        <host name='192.168.122.100' port='6789'/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:        <host name='192.168.122.102' port='6789'/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:        <host name='192.168.122.101' port='6789'/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:      </source>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:      <target dev='vda' bus='virtio'/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:      <alias name='virtio-disk0'/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:    </disk>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:    <disk type='network' device='cdrom'>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:      <driver name='qemu' type='raw' cache='none'/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:      <auth username='openstack'>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:        <secret type='ceph' uuid='40a1bae4-cf76-5610-8dab-c75116dfe0bb'/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:      </auth>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:      <source protocol='rbd' name='vms/57f8a652-f11e-4342-bae0-13c592a18ad2_disk.config' index='1'>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:        <host name='192.168.122.100' port='6789'/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:        <host name='192.168.122.102' port='6789'/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:        <host name='192.168.122.101' port='6789'/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:      </source>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:      <target dev='sda' bus='sata'/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:      <readonly/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:      <alias name='sata0-0-0'/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:    </disk>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:    <controller type='pci' index='0' model='pcie-root'>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:      <alias name='pcie.0'/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:    </controller>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:    <controller type='pci' index='1' model='pcie-root-port'>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:      <model name='pcie-root-port'/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:      <target chassis='1' port='0x10'/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:      <alias name='pci.1'/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:    </controller>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:    <controller type='pci' index='2' model='pcie-root-port'>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:      <model name='pcie-root-port'/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:      <target chassis='2' port='0x11'/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:      <alias name='pci.2'/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:    </controller>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:    <controller type='pci' index='3' model='pcie-root-port'>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:      <model name='pcie-root-port'/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:      <target chassis='3' port='0x12'/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:      <alias name='pci.3'/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:    </controller>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:    <controller type='pci' index='4' model='pcie-root-port'>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:      <model name='pcie-root-port'/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:      <target chassis='4' port='0x13'/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:      <alias name='pci.4'/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:    </controller>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:    <controller type='pci' index='5' model='pcie-root-port'>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:      <model name='pcie-root-port'/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:      <target chassis='5' port='0x14'/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:      <alias name='pci.5'/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:    </controller>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:    <controller type='pci' index='6' model='pcie-root-port'>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:      <model name='pcie-root-port'/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:      <target chassis='6' port='0x15'/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:      <alias name='pci.6'/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:    </controller>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:    <controller type='pci' index='7' model='pcie-root-port'>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:      <model name='pcie-root-port'/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:      <target chassis='7' port='0x16'/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:      <alias name='pci.7'/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:    </controller>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:    <controller type='pci' index='8' model='pcie-root-port'>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:      <model name='pcie-root-port'/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:      <target chassis='8' port='0x17'/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:      <alias name='pci.8'/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:    </controller>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:    <controller type='pci' index='9' model='pcie-root-port'>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:      <model name='pcie-root-port'/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:      <target chassis='9' port='0x18'/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:      <alias name='pci.9'/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:    </controller>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:    <controller type='pci' index='10' model='pcie-root-port'>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:      <model name='pcie-root-port'/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:      <target chassis='10' port='0x19'/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:      <alias name='pci.10'/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:    </controller>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:    <controller type='pci' index='11' model='pcie-root-port'>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:      <model name='pcie-root-port'/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:      <target chassis='11' port='0x1a'/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:      <alias name='pci.11'/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:    </controller>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:    <controller type='pci' index='12' model='pcie-root-port'>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:      <model name='pcie-root-port'/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:      <target chassis='12' port='0x1b'/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:      <alias name='pci.12'/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:    </controller>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:    <controller type='pci' index='13' model='pcie-root-port'>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:      <model name='pcie-root-port'/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:      <target chassis='13' port='0x1c'/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:      <alias name='pci.13'/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:    </controller>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:    <controller type='pci' index='14' model='pcie-root-port'>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:      <model name='pcie-root-port'/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:      <target chassis='14' port='0x1d'/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:      <alias name='pci.14'/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:    </controller>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:    <controller type='pci' index='15' model='pcie-root-port'>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:      <model name='pcie-root-port'/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:      <target chassis='15' port='0x1e'/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:      <alias name='pci.15'/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:    </controller>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:    <controller type='pci' index='16' model='pcie-root-port'>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:      <model name='pcie-root-port'/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:      <target chassis='16' port='0x1f'/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:      <alias name='pci.16'/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:    </controller>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:    <controller type='pci' index='17' model='pcie-root-port'>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:      <model name='pcie-root-port'/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:      <target chassis='17' port='0x20'/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:      <alias name='pci.17'/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:    </controller>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:    <controller type='pci' index='18' model='pcie-root-port'>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:      <model name='pcie-root-port'/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:      <target chassis='18' port='0x21'/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:      <alias name='pci.18'/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:    </controller>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:    <controller type='pci' index='19' model='pcie-root-port'>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:      <model name='pcie-root-port'/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:      <target chassis='19' port='0x22'/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:      <alias name='pci.19'/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:    </controller>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:    <controller type='pci' index='20' model='pcie-root-port'>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:      <model name='pcie-root-port'/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:      <target chassis='20' port='0x23'/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:      <alias name='pci.20'/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:    </controller>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:    <controller type='pci' index='21' model='pcie-root-port'>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:      <model name='pcie-root-port'/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:      <target chassis='21' port='0x24'/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:      <alias name='pci.21'/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:    </controller>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:    <controller type='pci' index='22' model='pcie-root-port'>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:      <model name='pcie-root-port'/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:      <target chassis='22' port='0x25'/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:      <alias name='pci.22'/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:    </controller>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:    <controller type='pci' index='23' model='pcie-root-port'>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:      <model name='pcie-root-port'/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:      <target chassis='23' port='0x26'/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:      <alias name='pci.23'/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:    </controller>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:    <controller type='pci' index='24' model='pcie-root-port'>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:      <model name='pcie-root-port'/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:      <target chassis='24' port='0x27'/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:      <alias name='pci.24'/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:    </controller>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:    <controller type='pci' index='25' model='pcie-root-port'>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:      <model name='pcie-root-port'/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:      <target chassis='25' port='0x28'/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:      <alias name='pci.25'/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:    </controller>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:    <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:      <model name='pcie-pci-bridge'/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:      <alias name='pci.26'/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:    </controller>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:    <controller type='usb' index='0' model='piix3-uhci'>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:      <alias name='usb'/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:      <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:    </controller>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:    <controller type='sata' index='0'>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:      <alias name='ide'/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:    </controller>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:    <interface type='ethernet'>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:      <mac address='fa:16:3e:94:d0:b6'/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:      <target dev='tap945300f6-d4'/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:      <model type='virtio'/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:      <driver name='vhost' rx_queue_size='512'/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:      <mtu size='1442'/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:      <alias name='net0'/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:    </interface>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:    <serial type='pty'>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:      <source path='/dev/pts/0'/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:      <log file='/var/lib/nova/instances/57f8a652-f11e-4342-bae0-13c592a18ad2/console.log' append='off'/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:      <target type='isa-serial' port='0'>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:        <model name='isa-serial'/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:      </target>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:      <alias name='serial0'/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:    </serial>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:    <console type='pty' tty='/dev/pts/0'>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:      <source path='/dev/pts/0'/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:      <log file='/var/lib/nova/instances/57f8a652-f11e-4342-bae0-13c592a18ad2/console.log' append='off'/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:      <target type='serial' port='0'/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:      <alias name='serial0'/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:    </console>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:    <input type='tablet' bus='usb'>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:      <alias name='input0'/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:      <address type='usb' bus='0' port='1'/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:    </input>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:    <input type='mouse' bus='ps2'>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:      <alias name='input1'/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:    </input>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:    <input type='keyboard' bus='ps2'>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:      <alias name='input2'/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:    </input>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:    <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:      <listen type='address' address='::0'/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:    </graphics>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:    <audio id='1' type='none'/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:    <video>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:      <model type='virtio' heads='1' primary='yes'/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:      <alias name='video0'/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:    </video>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:    <watchdog model='itco' action='reset'>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:      <alias name='watchdog0'/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:    </watchdog>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:    <memballoon model='virtio'>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:      <stats period='10'/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:      <alias name='balloon0'/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:    </memballoon>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:    <rng model='virtio'>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:      <backend model='random'>/dev/urandom</backend>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:      <alias name='rng0'/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:    </rng>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:  </devices>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:  <seclabel type='dynamic' model='selinux' relabel='yes'>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:    <label>system_u:system_r:svirt_t:s0:c408,c451</label>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:    <imagelabel>system_u:object_r:svirt_image_t:s0:c408,c451</imagelabel>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:  </seclabel>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:  <seclabel type='dynamic' model='dac' relabel='yes'>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:    <label>+107:+107</label>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:    <imagelabel>+107:+107</imagelabel>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:  </seclabel>
Dec  6 02:20:05 np0005548731 nova_compute[232433]: </domain>
Dec  6 02:20:05 np0005548731 nova_compute[232433]: get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Dec  6 02:20:05 np0005548731 nova_compute[232433]: 2025-12-06 07:20:05.649 232437 INFO nova.virt.libvirt.driver [None req-53983015-c7b3-4a33-ad48-1f2f3fc4d2d9 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Successfully detached device tap3036e2e9-ad from instance 57f8a652-f11e-4342-bae0-13c592a18ad2 from the live domain config.
Dec  6 02:20:05 np0005548731 nova_compute[232433]: 2025-12-06 07:20:05.650 232437 DEBUG nova.virt.libvirt.vif [None req-53983015-c7b3-4a33-ad48-1f2f3fc4d2d9 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T07:19:15Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-336213926',display_name='tempest-tempest.common.compute-instance-336213926',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-336213926',id=80,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCr7yYrMfc/vYIBdNKoOdmUaOBP7ItkOZSnl6KnIUpDDyT0eG/8qC7eAR3XEk9oTu2KpOhlwPPAoNOMJMN2jqpIUNlWMRBhDhCC2NIrxJ1iqIveG6g7oihNF2Fx4CQJCwg==',key_name='tempest-keypair-56698529',keypairs=<?>,launch_index=0,launched_at=2025-12-06T07:19:25Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='35df5125c2cf4d29a6b975951af14910',ramdisk_id='',reservation_id='r-g3bk7kc7',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-2041841766',owner_user_name='tempest-AttachInterfacesTestJSON-2041841766-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T07:19:25Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='06f5b46553b24b39a1493d96ec4e503e',uuid=57f8a652-f11e-4342-bae0-13c592a18ad2,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "3036e2e9-ad2c-4f44-96e5-c7dcdea69629", "address": "fa:16:3e:46:2a:84", "network": {"id": "61a21643-77ba-4a09-8184-10dc4bd52b26", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-327155623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "35df5125c2cf4d29a6b975951af14910", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3036e2e9-ad", "ovs_interfaceid": "3036e2e9-ad2c-4f44-96e5-c7dcdea69629", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec  6 02:20:05 np0005548731 nova_compute[232433]: 2025-12-06 07:20:05.650 232437 DEBUG nova.network.os_vif_util [None req-53983015-c7b3-4a33-ad48-1f2f3fc4d2d9 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Converting VIF {"id": "3036e2e9-ad2c-4f44-96e5-c7dcdea69629", "address": "fa:16:3e:46:2a:84", "network": {"id": "61a21643-77ba-4a09-8184-10dc4bd52b26", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-327155623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "35df5125c2cf4d29a6b975951af14910", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3036e2e9-ad", "ovs_interfaceid": "3036e2e9-ad2c-4f44-96e5-c7dcdea69629", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 02:20:05 np0005548731 nova_compute[232433]: 2025-12-06 07:20:05.651 232437 DEBUG nova.network.os_vif_util [None req-53983015-c7b3-4a33-ad48-1f2f3fc4d2d9 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:46:2a:84,bridge_name='br-int',has_traffic_filtering=True,id=3036e2e9-ad2c-4f44-96e5-c7dcdea69629,network=Network(61a21643-77ba-4a09-8184-10dc4bd52b26),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap3036e2e9-ad') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 02:20:05 np0005548731 nova_compute[232433]: 2025-12-06 07:20:05.651 232437 DEBUG os_vif [None req-53983015-c7b3-4a33-ad48-1f2f3fc4d2d9 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:46:2a:84,bridge_name='br-int',has_traffic_filtering=True,id=3036e2e9-ad2c-4f44-96e5-c7dcdea69629,network=Network(61a21643-77ba-4a09-8184-10dc4bd52b26),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap3036e2e9-ad') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec  6 02:20:05 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:20:05.650 143965 INFO neutron.agent.ovn.metadata.agent [-] Port 3036e2e9-ad2c-4f44-96e5-c7dcdea69629 in datapath 61a21643-77ba-4a09-8184-10dc4bd52b26 unbound from our chassis#033[00m
Dec  6 02:20:05 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:20:05.651 143965 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 61a21643-77ba-4a09-8184-10dc4bd52b26#033[00m
Dec  6 02:20:05 np0005548731 nova_compute[232433]: 2025-12-06 07:20:05.654 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:20:05 np0005548731 nova_compute[232433]: 2025-12-06 07:20:05.654 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3036e2e9-ad, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:20:05 np0005548731 nova_compute[232433]: 2025-12-06 07:20:05.655 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:20:05 np0005548731 nova_compute[232433]: 2025-12-06 07:20:05.657 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  6 02:20:05 np0005548731 nova_compute[232433]: 2025-12-06 07:20:05.661 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:20:05 np0005548731 nova_compute[232433]: 2025-12-06 07:20:05.663 232437 INFO os_vif [None req-53983015-c7b3-4a33-ad48-1f2f3fc4d2d9 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:46:2a:84,bridge_name='br-int',has_traffic_filtering=True,id=3036e2e9-ad2c-4f44-96e5-c7dcdea69629,network=Network(61a21643-77ba-4a09-8184-10dc4bd52b26),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap3036e2e9-ad')#033[00m
Dec  6 02:20:05 np0005548731 nova_compute[232433]: 2025-12-06 07:20:05.664 232437 DEBUG nova.virt.libvirt.guest [None req-53983015-c7b3-4a33-ad48-1f2f3fc4d2d9 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec  6 02:20:05 np0005548731 nova_compute[232433]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:  <nova:name>tempest-tempest.common.compute-instance-336213926</nova:name>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:  <nova:creationTime>2025-12-06 07:20:05</nova:creationTime>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:  <nova:flavor name="m1.nano">
Dec  6 02:20:05 np0005548731 nova_compute[232433]:    <nova:memory>128</nova:memory>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:    <nova:disk>1</nova:disk>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:    <nova:swap>0</nova:swap>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:    <nova:ephemeral>0</nova:ephemeral>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:    <nova:vcpus>1</nova:vcpus>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:  </nova:flavor>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:  <nova:owner>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:    <nova:user uuid="06f5b46553b24b39a1493d96ec4e503e">tempest-AttachInterfacesTestJSON-2041841766-project-member</nova:user>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:    <nova:project uuid="35df5125c2cf4d29a6b975951af14910">tempest-AttachInterfacesTestJSON-2041841766</nova:project>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:  </nova:owner>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:  <nova:root type="image" uuid="6efab05d-c7cf-4770-a5c3-c806a2739063"/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:  <nova:ports>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:    <nova:port uuid="945300f6-d49b-4923-bb2f-c73996fd2be2">
Dec  6 02:20:05 np0005548731 nova_compute[232433]:      <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:    </nova:port>
Dec  6 02:20:05 np0005548731 nova_compute[232433]:  </nova:ports>
Dec  6 02:20:05 np0005548731 nova_compute[232433]: </nova:instance>
Dec  6 02:20:05 np0005548731 nova_compute[232433]: set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359#033[00m
Dec  6 02:20:05 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:20:05.668 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[ff1faa5a-7dce-470f-8f87-28ddd5fd0a3e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:20:05 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:20:05.696 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[06cc7dd2-69f4-4e63-a7a8-25d69841c585]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:20:05 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:20:05.698 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[667fc14b-0865-447c-866a-83d0aa272ae0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:20:05 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:20:05.724 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[38ac3afd-4b88-4d68-915e-afa9563af01f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:20:05 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:20:05.743 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[2bfcac99-0d95-40af-b799-72c06289ae94]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap61a21643-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:91:67:b1'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 7, 'rx_bytes': 616, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 7, 'rx_bytes': 616, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 99], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 582661, 'reachable_time': 18555, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 266086, 'error': None, 'target': 'ovnmeta-61a21643-77ba-4a09-8184-10dc4bd52b26', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:20:05 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:20:05.757 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[fe2467e1-2c40-4540-8dd9-bcbb967c541c]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap61a21643-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 582673, 'tstamp': 582673}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 266087, 'error': None, 'target': 'ovnmeta-61a21643-77ba-4a09-8184-10dc4bd52b26', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap61a21643-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 582675, 'tstamp': 582675}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 266087, 'error': None, 'target': 'ovnmeta-61a21643-77ba-4a09-8184-10dc4bd52b26', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:20:05 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:20:05.759 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap61a21643-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:20:05 np0005548731 nova_compute[232433]: 2025-12-06 07:20:05.761 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:20:05 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:20:05.761 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap61a21643-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:20:05 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:20:05.761 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  6 02:20:05 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:20:05.762 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap61a21643-70, col_values=(('external_ids', {'iface-id': '8e8469cb-4434-4b4c-9dcf-a6a8244c2597'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:20:05 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:20:05.762 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  6 02:20:06 np0005548731 nova_compute[232433]: 2025-12-06 07:20:06.347 232437 DEBUG nova.network.neutron [req-778c24c2-71b7-4d3d-bb0a-16f4fdfd6f26 req-ee23b868-d615-41fc-9491-112bacdab257 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 57f8a652-f11e-4342-bae0-13c592a18ad2] Updated VIF entry in instance network info cache for port 3036e2e9-ad2c-4f44-96e5-c7dcdea69629. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  6 02:20:06 np0005548731 nova_compute[232433]: 2025-12-06 07:20:06.348 232437 DEBUG nova.network.neutron [req-778c24c2-71b7-4d3d-bb0a-16f4fdfd6f26 req-ee23b868-d615-41fc-9491-112bacdab257 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 57f8a652-f11e-4342-bae0-13c592a18ad2] Updating instance_info_cache with network_info: [{"id": "945300f6-d49b-4923-bb2f-c73996fd2be2", "address": "fa:16:3e:94:d0:b6", "network": {"id": "61a21643-77ba-4a09-8184-10dc4bd52b26", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-327155623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.249", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "35df5125c2cf4d29a6b975951af14910", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap945300f6-d4", "ovs_interfaceid": "945300f6-d49b-4923-bb2f-c73996fd2be2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "3036e2e9-ad2c-4f44-96e5-c7dcdea69629", "address": "fa:16:3e:46:2a:84", "network": {"id": "61a21643-77ba-4a09-8184-10dc4bd52b26", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-327155623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "35df5125c2cf4d29a6b975951af14910", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3036e2e9-ad", "ovs_interfaceid": "3036e2e9-ad2c-4f44-96e5-c7dcdea69629", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 02:20:06 np0005548731 nova_compute[232433]: 2025-12-06 07:20:06.375 232437 DEBUG oslo_concurrency.lockutils [req-778c24c2-71b7-4d3d-bb0a-16f4fdfd6f26 req-ee23b868-d615-41fc-9491-112bacdab257 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-57f8a652-f11e-4342-bae0-13c592a18ad2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 02:20:06 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:20:06 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:20:06 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:20:06.727 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:20:06 np0005548731 nova_compute[232433]: 2025-12-06 07:20:06.997 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:20:07 np0005548731 nova_compute[232433]: 2025-12-06 07:20:07.021 232437 DEBUG nova.compute.manager [req-63463f95-4574-4be6-9067-59795ed8708c req-d27b6a2e-8167-45e8-a6d1-3e248cfb194d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 57f8a652-f11e-4342-bae0-13c592a18ad2] Received event network-vif-plugged-3036e2e9-ad2c-4f44-96e5-c7dcdea69629 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:20:07 np0005548731 nova_compute[232433]: 2025-12-06 07:20:07.021 232437 DEBUG oslo_concurrency.lockutils [req-63463f95-4574-4be6-9067-59795ed8708c req-d27b6a2e-8167-45e8-a6d1-3e248cfb194d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "57f8a652-f11e-4342-bae0-13c592a18ad2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:20:07 np0005548731 nova_compute[232433]: 2025-12-06 07:20:07.021 232437 DEBUG oslo_concurrency.lockutils [req-63463f95-4574-4be6-9067-59795ed8708c req-d27b6a2e-8167-45e8-a6d1-3e248cfb194d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "57f8a652-f11e-4342-bae0-13c592a18ad2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:20:07 np0005548731 nova_compute[232433]: 2025-12-06 07:20:07.021 232437 DEBUG oslo_concurrency.lockutils [req-63463f95-4574-4be6-9067-59795ed8708c req-d27b6a2e-8167-45e8-a6d1-3e248cfb194d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "57f8a652-f11e-4342-bae0-13c592a18ad2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:20:07 np0005548731 nova_compute[232433]: 2025-12-06 07:20:07.022 232437 DEBUG nova.compute.manager [req-63463f95-4574-4be6-9067-59795ed8708c req-d27b6a2e-8167-45e8-a6d1-3e248cfb194d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 57f8a652-f11e-4342-bae0-13c592a18ad2] No waiting events found dispatching network-vif-plugged-3036e2e9-ad2c-4f44-96e5-c7dcdea69629 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 02:20:07 np0005548731 nova_compute[232433]: 2025-12-06 07:20:07.022 232437 WARNING nova.compute.manager [req-63463f95-4574-4be6-9067-59795ed8708c req-d27b6a2e-8167-45e8-a6d1-3e248cfb194d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 57f8a652-f11e-4342-bae0-13c592a18ad2] Received unexpected event network-vif-plugged-3036e2e9-ad2c-4f44-96e5-c7dcdea69629 for instance with vm_state active and task_state None.#033[00m
Dec  6 02:20:07 np0005548731 nova_compute[232433]: 2025-12-06 07:20:07.022 232437 DEBUG nova.compute.manager [req-63463f95-4574-4be6-9067-59795ed8708c req-d27b6a2e-8167-45e8-a6d1-3e248cfb194d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 57f8a652-f11e-4342-bae0-13c592a18ad2] Received event network-vif-unplugged-3036e2e9-ad2c-4f44-96e5-c7dcdea69629 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:20:07 np0005548731 nova_compute[232433]: 2025-12-06 07:20:07.022 232437 DEBUG oslo_concurrency.lockutils [req-63463f95-4574-4be6-9067-59795ed8708c req-d27b6a2e-8167-45e8-a6d1-3e248cfb194d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "57f8a652-f11e-4342-bae0-13c592a18ad2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:20:07 np0005548731 nova_compute[232433]: 2025-12-06 07:20:07.022 232437 DEBUG oslo_concurrency.lockutils [req-63463f95-4574-4be6-9067-59795ed8708c req-d27b6a2e-8167-45e8-a6d1-3e248cfb194d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "57f8a652-f11e-4342-bae0-13c592a18ad2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:20:07 np0005548731 nova_compute[232433]: 2025-12-06 07:20:07.023 232437 DEBUG oslo_concurrency.lockutils [req-63463f95-4574-4be6-9067-59795ed8708c req-d27b6a2e-8167-45e8-a6d1-3e248cfb194d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "57f8a652-f11e-4342-bae0-13c592a18ad2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:20:07 np0005548731 nova_compute[232433]: 2025-12-06 07:20:07.023 232437 DEBUG nova.compute.manager [req-63463f95-4574-4be6-9067-59795ed8708c req-d27b6a2e-8167-45e8-a6d1-3e248cfb194d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 57f8a652-f11e-4342-bae0-13c592a18ad2] No waiting events found dispatching network-vif-unplugged-3036e2e9-ad2c-4f44-96e5-c7dcdea69629 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 02:20:07 np0005548731 nova_compute[232433]: 2025-12-06 07:20:07.023 232437 WARNING nova.compute.manager [req-63463f95-4574-4be6-9067-59795ed8708c req-d27b6a2e-8167-45e8-a6d1-3e248cfb194d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 57f8a652-f11e-4342-bae0-13c592a18ad2] Received unexpected event network-vif-unplugged-3036e2e9-ad2c-4f44-96e5-c7dcdea69629 for instance with vm_state active and task_state None.#033[00m
Dec  6 02:20:07 np0005548731 nova_compute[232433]: 2025-12-06 07:20:07.023 232437 DEBUG nova.compute.manager [req-63463f95-4574-4be6-9067-59795ed8708c req-d27b6a2e-8167-45e8-a6d1-3e248cfb194d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 57f8a652-f11e-4342-bae0-13c592a18ad2] Received event network-vif-plugged-3036e2e9-ad2c-4f44-96e5-c7dcdea69629 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:20:07 np0005548731 nova_compute[232433]: 2025-12-06 07:20:07.023 232437 DEBUG oslo_concurrency.lockutils [req-63463f95-4574-4be6-9067-59795ed8708c req-d27b6a2e-8167-45e8-a6d1-3e248cfb194d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "57f8a652-f11e-4342-bae0-13c592a18ad2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:20:07 np0005548731 nova_compute[232433]: 2025-12-06 07:20:07.023 232437 DEBUG oslo_concurrency.lockutils [req-63463f95-4574-4be6-9067-59795ed8708c req-d27b6a2e-8167-45e8-a6d1-3e248cfb194d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "57f8a652-f11e-4342-bae0-13c592a18ad2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:20:07 np0005548731 nova_compute[232433]: 2025-12-06 07:20:07.024 232437 DEBUG oslo_concurrency.lockutils [req-63463f95-4574-4be6-9067-59795ed8708c req-d27b6a2e-8167-45e8-a6d1-3e248cfb194d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "57f8a652-f11e-4342-bae0-13c592a18ad2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:20:07 np0005548731 nova_compute[232433]: 2025-12-06 07:20:07.024 232437 DEBUG nova.compute.manager [req-63463f95-4574-4be6-9067-59795ed8708c req-d27b6a2e-8167-45e8-a6d1-3e248cfb194d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 57f8a652-f11e-4342-bae0-13c592a18ad2] No waiting events found dispatching network-vif-plugged-3036e2e9-ad2c-4f44-96e5-c7dcdea69629 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 02:20:07 np0005548731 nova_compute[232433]: 2025-12-06 07:20:07.024 232437 WARNING nova.compute.manager [req-63463f95-4574-4be6-9067-59795ed8708c req-d27b6a2e-8167-45e8-a6d1-3e248cfb194d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 57f8a652-f11e-4342-bae0-13c592a18ad2] Received unexpected event network-vif-plugged-3036e2e9-ad2c-4f44-96e5-c7dcdea69629 for instance with vm_state active and task_state None.#033[00m
Dec  6 02:20:07 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:20:07 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:20:07 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:20:07.185 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:20:07 np0005548731 nova_compute[232433]: 2025-12-06 07:20:07.416 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:20:07 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e256 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:20:08 np0005548731 nova_compute[232433]: 2025-12-06 07:20:08.678 232437 DEBUG oslo_concurrency.lockutils [None req-53983015-c7b3-4a33-ad48-1f2f3fc4d2d9 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Acquiring lock "refresh_cache-57f8a652-f11e-4342-bae0-13c592a18ad2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 02:20:08 np0005548731 nova_compute[232433]: 2025-12-06 07:20:08.678 232437 DEBUG oslo_concurrency.lockutils [None req-53983015-c7b3-4a33-ad48-1f2f3fc4d2d9 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Acquired lock "refresh_cache-57f8a652-f11e-4342-bae0-13c592a18ad2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 02:20:08 np0005548731 nova_compute[232433]: 2025-12-06 07:20:08.678 232437 DEBUG nova.network.neutron [None req-53983015-c7b3-4a33-ad48-1f2f3fc4d2d9 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] [instance: 57f8a652-f11e-4342-bae0-13c592a18ad2] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec  6 02:20:08 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  6 02:20:08 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2441678181' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  6 02:20:08 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  6 02:20:08 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2441678181' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  6 02:20:08 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:20:08 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:20:08 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:20:08.729 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:20:09 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:20:09 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 02:20:09 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:20:09.188 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 02:20:09 np0005548731 nova_compute[232433]: 2025-12-06 07:20:09.373 232437 DEBUG oslo_concurrency.lockutils [None req-6e07a5c7-c10d-4efe-acd7-f33a632e327e 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Acquiring lock "57f8a652-f11e-4342-bae0-13c592a18ad2" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:20:09 np0005548731 nova_compute[232433]: 2025-12-06 07:20:09.374 232437 DEBUG oslo_concurrency.lockutils [None req-6e07a5c7-c10d-4efe-acd7-f33a632e327e 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Lock "57f8a652-f11e-4342-bae0-13c592a18ad2" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:20:09 np0005548731 nova_compute[232433]: 2025-12-06 07:20:09.374 232437 DEBUG oslo_concurrency.lockutils [None req-6e07a5c7-c10d-4efe-acd7-f33a632e327e 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Acquiring lock "57f8a652-f11e-4342-bae0-13c592a18ad2-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:20:09 np0005548731 nova_compute[232433]: 2025-12-06 07:20:09.374 232437 DEBUG oslo_concurrency.lockutils [None req-6e07a5c7-c10d-4efe-acd7-f33a632e327e 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Lock "57f8a652-f11e-4342-bae0-13c592a18ad2-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:20:09 np0005548731 nova_compute[232433]: 2025-12-06 07:20:09.375 232437 DEBUG oslo_concurrency.lockutils [None req-6e07a5c7-c10d-4efe-acd7-f33a632e327e 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Lock "57f8a652-f11e-4342-bae0-13c592a18ad2-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:20:09 np0005548731 nova_compute[232433]: 2025-12-06 07:20:09.376 232437 INFO nova.compute.manager [None req-6e07a5c7-c10d-4efe-acd7-f33a632e327e 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] [instance: 57f8a652-f11e-4342-bae0-13c592a18ad2] Terminating instance#033[00m
Dec  6 02:20:09 np0005548731 nova_compute[232433]: 2025-12-06 07:20:09.377 232437 DEBUG nova.compute.manager [None req-6e07a5c7-c10d-4efe-acd7-f33a632e327e 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] [instance: 57f8a652-f11e-4342-bae0-13c592a18ad2] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec  6 02:20:09 np0005548731 kernel: tap945300f6-d4 (unregistering): left promiscuous mode
Dec  6 02:20:09 np0005548731 NetworkManager[49182]: <info>  [1765005609.4584] device (tap945300f6-d4): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec  6 02:20:09 np0005548731 ovn_controller[133927]: 2025-12-06T07:20:09Z|00310|binding|INFO|Releasing lport 945300f6-d49b-4923-bb2f-c73996fd2be2 from this chassis (sb_readonly=0)
Dec  6 02:20:09 np0005548731 ovn_controller[133927]: 2025-12-06T07:20:09Z|00311|binding|INFO|Setting lport 945300f6-d49b-4923-bb2f-c73996fd2be2 down in Southbound
Dec  6 02:20:09 np0005548731 nova_compute[232433]: 2025-12-06 07:20:09.471 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:20:09 np0005548731 ovn_controller[133927]: 2025-12-06T07:20:09Z|00312|binding|INFO|Removing iface tap945300f6-d4 ovn-installed in OVS
Dec  6 02:20:09 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:20:09.478 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:94:d0:b6 10.100.0.6'], port_security=['fa:16:3e:94:d0:b6 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '57f8a652-f11e-4342-bae0-13c592a18ad2', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-61a21643-77ba-4a09-8184-10dc4bd52b26', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '35df5125c2cf4d29a6b975951af14910', 'neutron:revision_number': '4', 'neutron:security_group_ids': '6207e763-a213-4f4e-8aa9-04781b6722bb', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com', 'neutron:port_fip': '192.168.122.249'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=85f9937f-1b1f-4430-9972-982ebc33633b, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], logical_port=945300f6-d49b-4923-bb2f-c73996fd2be2) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 02:20:09 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:20:09.479 143965 INFO neutron.agent.ovn.metadata.agent [-] Port 945300f6-d49b-4923-bb2f-c73996fd2be2 in datapath 61a21643-77ba-4a09-8184-10dc4bd52b26 unbound from our chassis#033[00m
Dec  6 02:20:09 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:20:09.481 143965 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 61a21643-77ba-4a09-8184-10dc4bd52b26, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec  6 02:20:09 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:20:09.482 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[d4f2fc66-6f7d-4850-8442-e5e032b75d23]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:20:09 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:20:09.483 143965 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-61a21643-77ba-4a09-8184-10dc4bd52b26 namespace which is not needed anymore#033[00m
Dec  6 02:20:09 np0005548731 nova_compute[232433]: 2025-12-06 07:20:09.500 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:20:09 np0005548731 systemd[1]: machine-qemu\x2d33\x2dinstance\x2d00000050.scope: Deactivated successfully.
Dec  6 02:20:09 np0005548731 systemd[1]: machine-qemu\x2d33\x2dinstance\x2d00000050.scope: Consumed 14.342s CPU time.
Dec  6 02:20:09 np0005548731 systemd-machined[195355]: Machine qemu-33-instance-00000050 terminated.
Dec  6 02:20:09 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e257 e257: 3 total, 3 up, 3 in
Dec  6 02:20:09 np0005548731 nova_compute[232433]: 2025-12-06 07:20:09.609 232437 INFO nova.virt.libvirt.driver [-] [instance: 57f8a652-f11e-4342-bae0-13c592a18ad2] Instance destroyed successfully.#033[00m
Dec  6 02:20:09 np0005548731 nova_compute[232433]: 2025-12-06 07:20:09.609 232437 DEBUG nova.objects.instance [None req-6e07a5c7-c10d-4efe-acd7-f33a632e327e 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Lazy-loading 'resources' on Instance uuid 57f8a652-f11e-4342-bae0-13c592a18ad2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:20:09 np0005548731 neutron-haproxy-ovnmeta-61a21643-77ba-4a09-8184-10dc4bd52b26[265552]: [NOTICE]   (265556) : haproxy version is 2.8.14-c23fe91
Dec  6 02:20:09 np0005548731 neutron-haproxy-ovnmeta-61a21643-77ba-4a09-8184-10dc4bd52b26[265552]: [NOTICE]   (265556) : path to executable is /usr/sbin/haproxy
Dec  6 02:20:09 np0005548731 neutron-haproxy-ovnmeta-61a21643-77ba-4a09-8184-10dc4bd52b26[265552]: [WARNING]  (265556) : Exiting Master process...
Dec  6 02:20:09 np0005548731 neutron-haproxy-ovnmeta-61a21643-77ba-4a09-8184-10dc4bd52b26[265552]: [ALERT]    (265556) : Current worker (265558) exited with code 143 (Terminated)
Dec  6 02:20:09 np0005548731 neutron-haproxy-ovnmeta-61a21643-77ba-4a09-8184-10dc4bd52b26[265552]: [WARNING]  (265556) : All workers exited. Exiting... (0)
Dec  6 02:20:09 np0005548731 systemd[1]: libpod-391af68a55ef50bd67fb2228694a2fdc35a763df5c3307671d2236dcb1503973.scope: Deactivated successfully.
Dec  6 02:20:09 np0005548731 podman[266114]: 2025-12-06 07:20:09.629612339 +0000 UTC m=+0.051082834 container died 391af68a55ef50bd67fb2228694a2fdc35a763df5c3307671d2236dcb1503973 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-61a21643-77ba-4a09-8184-10dc4bd52b26, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251125)
Dec  6 02:20:09 np0005548731 nova_compute[232433]: 2025-12-06 07:20:09.639 232437 DEBUG nova.virt.libvirt.vif [None req-6e07a5c7-c10d-4efe-acd7-f33a632e327e 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T07:19:15Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-336213926',display_name='tempest-tempest.common.compute-instance-336213926',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-336213926',id=80,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCr7yYrMfc/vYIBdNKoOdmUaOBP7ItkOZSnl6KnIUpDDyT0eG/8qC7eAR3XEk9oTu2KpOhlwPPAoNOMJMN2jqpIUNlWMRBhDhCC2NIrxJ1iqIveG6g7oihNF2Fx4CQJCwg==',key_name='tempest-keypair-56698529',keypairs=<?>,launch_index=0,launched_at=2025-12-06T07:19:25Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='35df5125c2cf4d29a6b975951af14910',ramdisk_id='',reservation_id='r-g3bk7kc7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-2041841766',owner_user_name='tempest-AttachInterfacesTestJSON-2041841766-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T07:19:25Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='06f5b46553b24b39a1493d96ec4e503e',uuid=57f8a652-f11e-4342-bae0-13c592a18ad2,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "945300f6-d49b-4923-bb2f-c73996fd2be2", "address": "fa:16:3e:94:d0:b6", "network": {"id": "61a21643-77ba-4a09-8184-10dc4bd52b26", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-327155623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.249", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "35df5125c2cf4d29a6b975951af14910", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap945300f6-d4", "ovs_interfaceid": "945300f6-d49b-4923-bb2f-c73996fd2be2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec  6 02:20:09 np0005548731 nova_compute[232433]: 2025-12-06 07:20:09.640 232437 DEBUG nova.network.os_vif_util [None req-6e07a5c7-c10d-4efe-acd7-f33a632e327e 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Converting VIF {"id": "945300f6-d49b-4923-bb2f-c73996fd2be2", "address": "fa:16:3e:94:d0:b6", "network": {"id": "61a21643-77ba-4a09-8184-10dc4bd52b26", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-327155623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.249", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "35df5125c2cf4d29a6b975951af14910", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap945300f6-d4", "ovs_interfaceid": "945300f6-d49b-4923-bb2f-c73996fd2be2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 02:20:09 np0005548731 nova_compute[232433]: 2025-12-06 07:20:09.641 232437 DEBUG nova.network.os_vif_util [None req-6e07a5c7-c10d-4efe-acd7-f33a632e327e 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:94:d0:b6,bridge_name='br-int',has_traffic_filtering=True,id=945300f6-d49b-4923-bb2f-c73996fd2be2,network=Network(61a21643-77ba-4a09-8184-10dc4bd52b26),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap945300f6-d4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 02:20:09 np0005548731 nova_compute[232433]: 2025-12-06 07:20:09.642 232437 DEBUG os_vif [None req-6e07a5c7-c10d-4efe-acd7-f33a632e327e 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:94:d0:b6,bridge_name='br-int',has_traffic_filtering=True,id=945300f6-d49b-4923-bb2f-c73996fd2be2,network=Network(61a21643-77ba-4a09-8184-10dc4bd52b26),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap945300f6-d4') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec  6 02:20:09 np0005548731 nova_compute[232433]: 2025-12-06 07:20:09.644 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:20:09 np0005548731 nova_compute[232433]: 2025-12-06 07:20:09.644 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap945300f6-d4, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:20:09 np0005548731 nova_compute[232433]: 2025-12-06 07:20:09.647 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:20:09 np0005548731 nova_compute[232433]: 2025-12-06 07:20:09.650 232437 INFO os_vif [None req-6e07a5c7-c10d-4efe-acd7-f33a632e327e 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:94:d0:b6,bridge_name='br-int',has_traffic_filtering=True,id=945300f6-d49b-4923-bb2f-c73996fd2be2,network=Network(61a21643-77ba-4a09-8184-10dc4bd52b26),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap945300f6-d4')#033[00m
Dec  6 02:20:09 np0005548731 nova_compute[232433]: 2025-12-06 07:20:09.651 232437 DEBUG nova.virt.libvirt.vif [None req-6e07a5c7-c10d-4efe-acd7-f33a632e327e 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T07:19:15Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-336213926',display_name='tempest-tempest.common.compute-instance-336213926',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-336213926',id=80,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCr7yYrMfc/vYIBdNKoOdmUaOBP7ItkOZSnl6KnIUpDDyT0eG/8qC7eAR3XEk9oTu2KpOhlwPPAoNOMJMN2jqpIUNlWMRBhDhCC2NIrxJ1iqIveG6g7oihNF2Fx4CQJCwg==',key_name='tempest-keypair-56698529',keypairs=<?>,launch_index=0,launched_at=2025-12-06T07:19:25Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='35df5125c2cf4d29a6b975951af14910',ramdisk_id='',reservation_id='r-g3bk7kc7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-2041841766',owner_user_name='tempest-AttachInterfacesTestJSON-2041841766-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T07:19:25Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='06f5b46553b24b39a1493d96ec4e503e',uuid=57f8a652-f11e-4342-bae0-13c592a18ad2,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "3036e2e9-ad2c-4f44-96e5-c7dcdea69629", "address": "fa:16:3e:46:2a:84", "network": {"id": "61a21643-77ba-4a09-8184-10dc4bd52b26", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-327155623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "35df5125c2cf4d29a6b975951af14910", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3036e2e9-ad", "ovs_interfaceid": "3036e2e9-ad2c-4f44-96e5-c7dcdea69629", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec  6 02:20:09 np0005548731 nova_compute[232433]: 2025-12-06 07:20:09.651 232437 DEBUG nova.network.os_vif_util [None req-6e07a5c7-c10d-4efe-acd7-f33a632e327e 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Converting VIF {"id": "3036e2e9-ad2c-4f44-96e5-c7dcdea69629", "address": "fa:16:3e:46:2a:84", "network": {"id": "61a21643-77ba-4a09-8184-10dc4bd52b26", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-327155623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "35df5125c2cf4d29a6b975951af14910", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3036e2e9-ad", "ovs_interfaceid": "3036e2e9-ad2c-4f44-96e5-c7dcdea69629", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 02:20:09 np0005548731 nova_compute[232433]: 2025-12-06 07:20:09.652 232437 DEBUG nova.network.os_vif_util [None req-6e07a5c7-c10d-4efe-acd7-f33a632e327e 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:46:2a:84,bridge_name='br-int',has_traffic_filtering=True,id=3036e2e9-ad2c-4f44-96e5-c7dcdea69629,network=Network(61a21643-77ba-4a09-8184-10dc4bd52b26),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap3036e2e9-ad') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 02:20:09 np0005548731 nova_compute[232433]: 2025-12-06 07:20:09.652 232437 DEBUG os_vif [None req-6e07a5c7-c10d-4efe-acd7-f33a632e327e 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:46:2a:84,bridge_name='br-int',has_traffic_filtering=True,id=3036e2e9-ad2c-4f44-96e5-c7dcdea69629,network=Network(61a21643-77ba-4a09-8184-10dc4bd52b26),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap3036e2e9-ad') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec  6 02:20:09 np0005548731 nova_compute[232433]: 2025-12-06 07:20:09.654 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:20:09 np0005548731 nova_compute[232433]: 2025-12-06 07:20:09.654 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3036e2e9-ad, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:20:09 np0005548731 nova_compute[232433]: 2025-12-06 07:20:09.655 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  6 02:20:09 np0005548731 nova_compute[232433]: 2025-12-06 07:20:09.657 232437 INFO os_vif [None req-6e07a5c7-c10d-4efe-acd7-f33a632e327e 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:46:2a:84,bridge_name='br-int',has_traffic_filtering=True,id=3036e2e9-ad2c-4f44-96e5-c7dcdea69629,network=Network(61a21643-77ba-4a09-8184-10dc4bd52b26),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap3036e2e9-ad')#033[00m
Dec  6 02:20:09 np0005548731 systemd[1]: var-lib-containers-storage-overlay-567beccdc315963a6e46d72b2beaef5e73442ad91a15e4ef72933fdb7046e57b-merged.mount: Deactivated successfully.
Dec  6 02:20:09 np0005548731 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-391af68a55ef50bd67fb2228694a2fdc35a763df5c3307671d2236dcb1503973-userdata-shm.mount: Deactivated successfully.
Dec  6 02:20:09 np0005548731 podman[266114]: 2025-12-06 07:20:09.676802068 +0000 UTC m=+0.098272453 container cleanup 391af68a55ef50bd67fb2228694a2fdc35a763df5c3307671d2236dcb1503973 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-61a21643-77ba-4a09-8184-10dc4bd52b26, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  6 02:20:09 np0005548731 systemd[1]: libpod-conmon-391af68a55ef50bd67fb2228694a2fdc35a763df5c3307671d2236dcb1503973.scope: Deactivated successfully.
Dec  6 02:20:09 np0005548731 podman[266169]: 2025-12-06 07:20:09.745648975 +0000 UTC m=+0.047744854 container remove 391af68a55ef50bd67fb2228694a2fdc35a763df5c3307671d2236dcb1503973 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-61a21643-77ba-4a09-8184-10dc4bd52b26, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec  6 02:20:09 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:20:09.751 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[89b49d95-9817-4013-a5e0-a4ee6fabc313]: (4, ('Sat Dec  6 07:20:09 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-61a21643-77ba-4a09-8184-10dc4bd52b26 (391af68a55ef50bd67fb2228694a2fdc35a763df5c3307671d2236dcb1503973)\n391af68a55ef50bd67fb2228694a2fdc35a763df5c3307671d2236dcb1503973\nSat Dec  6 07:20:09 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-61a21643-77ba-4a09-8184-10dc4bd52b26 (391af68a55ef50bd67fb2228694a2fdc35a763df5c3307671d2236dcb1503973)\n391af68a55ef50bd67fb2228694a2fdc35a763df5c3307671d2236dcb1503973\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:20:09 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:20:09.753 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[4dbba7d3-ac4f-470d-8fd7-1d8950595922]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:20:09 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:20:09.754 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap61a21643-70, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:20:09 np0005548731 nova_compute[232433]: 2025-12-06 07:20:09.757 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:20:09 np0005548731 kernel: tap61a21643-70: left promiscuous mode
Dec  6 02:20:09 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:20:09.761 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[9ae13a6d-ecca-4d29-9511-dd40c2942a20]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:20:09 np0005548731 nova_compute[232433]: 2025-12-06 07:20:09.774 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:20:09 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:20:09.783 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[9fcd4d83-3eec-4484-bcf2-13fa9532fd5f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:20:09 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:20:09.784 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[769dd941-91d4-45c9-9ca7-29db93401aa7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:20:09 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:20:09.799 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[5b966632-0ca1-4f03-a960-3a6f5158975b]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 582654, 'reachable_time': 20220, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 266187, 'error': None, 'target': 'ovnmeta-61a21643-77ba-4a09-8184-10dc4bd52b26', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:20:09 np0005548731 systemd[1]: run-netns-ovnmeta\x2d61a21643\x2d77ba\x2d4a09\x2d8184\x2d10dc4bd52b26.mount: Deactivated successfully.
Dec  6 02:20:09 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:20:09.803 144079 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-61a21643-77ba-4a09-8184-10dc4bd52b26 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Dec  6 02:20:09 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:20:09.803 144079 DEBUG oslo.privsep.daemon [-] privsep: reply[01a75bea-4a47-4541-a7ca-c7535a2f7b47]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:20:09 np0005548731 nova_compute[232433]: 2025-12-06 07:20:09.881 232437 DEBUG nova.compute.manager [req-43efaf5d-fef0-411c-9ddd-499e2739bb05 req-d02b9498-0201-4c49-b7e5-d13a2b9ee5ab 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 57f8a652-f11e-4342-bae0-13c592a18ad2] Received event network-vif-unplugged-945300f6-d49b-4923-bb2f-c73996fd2be2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:20:09 np0005548731 nova_compute[232433]: 2025-12-06 07:20:09.883 232437 DEBUG oslo_concurrency.lockutils [req-43efaf5d-fef0-411c-9ddd-499e2739bb05 req-d02b9498-0201-4c49-b7e5-d13a2b9ee5ab 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "57f8a652-f11e-4342-bae0-13c592a18ad2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:20:09 np0005548731 nova_compute[232433]: 2025-12-06 07:20:09.884 232437 DEBUG oslo_concurrency.lockutils [req-43efaf5d-fef0-411c-9ddd-499e2739bb05 req-d02b9498-0201-4c49-b7e5-d13a2b9ee5ab 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "57f8a652-f11e-4342-bae0-13c592a18ad2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:20:09 np0005548731 nova_compute[232433]: 2025-12-06 07:20:09.885 232437 DEBUG oslo_concurrency.lockutils [req-43efaf5d-fef0-411c-9ddd-499e2739bb05 req-d02b9498-0201-4c49-b7e5-d13a2b9ee5ab 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "57f8a652-f11e-4342-bae0-13c592a18ad2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:20:09 np0005548731 nova_compute[232433]: 2025-12-06 07:20:09.885 232437 DEBUG nova.compute.manager [req-43efaf5d-fef0-411c-9ddd-499e2739bb05 req-d02b9498-0201-4c49-b7e5-d13a2b9ee5ab 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 57f8a652-f11e-4342-bae0-13c592a18ad2] No waiting events found dispatching network-vif-unplugged-945300f6-d49b-4923-bb2f-c73996fd2be2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 02:20:09 np0005548731 nova_compute[232433]: 2025-12-06 07:20:09.886 232437 DEBUG nova.compute.manager [req-43efaf5d-fef0-411c-9ddd-499e2739bb05 req-d02b9498-0201-4c49-b7e5-d13a2b9ee5ab 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 57f8a652-f11e-4342-bae0-13c592a18ad2] Received event network-vif-unplugged-945300f6-d49b-4923-bb2f-c73996fd2be2 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Dec  6 02:20:10 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:20:10 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:20:10 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:20:10.731 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:20:10 np0005548731 nova_compute[232433]: 2025-12-06 07:20:10.804 232437 INFO nova.network.neutron [None req-53983015-c7b3-4a33-ad48-1f2f3fc4d2d9 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] [instance: 57f8a652-f11e-4342-bae0-13c592a18ad2] Port 3036e2e9-ad2c-4f44-96e5-c7dcdea69629 from network info_cache is no longer associated with instance in Neutron. Removing from network info_cache.#033[00m
Dec  6 02:20:10 np0005548731 nova_compute[232433]: 2025-12-06 07:20:10.804 232437 DEBUG nova.network.neutron [None req-53983015-c7b3-4a33-ad48-1f2f3fc4d2d9 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] [instance: 57f8a652-f11e-4342-bae0-13c592a18ad2] Updating instance_info_cache with network_info: [{"id": "945300f6-d49b-4923-bb2f-c73996fd2be2", "address": "fa:16:3e:94:d0:b6", "network": {"id": "61a21643-77ba-4a09-8184-10dc4bd52b26", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-327155623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.249", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "35df5125c2cf4d29a6b975951af14910", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap945300f6-d4", "ovs_interfaceid": "945300f6-d49b-4923-bb2f-c73996fd2be2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 02:20:10 np0005548731 nova_compute[232433]: 2025-12-06 07:20:10.841 232437 DEBUG oslo_concurrency.lockutils [None req-53983015-c7b3-4a33-ad48-1f2f3fc4d2d9 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Releasing lock "refresh_cache-57f8a652-f11e-4342-bae0-13c592a18ad2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 02:20:10 np0005548731 nova_compute[232433]: 2025-12-06 07:20:10.873 232437 DEBUG oslo_concurrency.lockutils [None req-53983015-c7b3-4a33-ad48-1f2f3fc4d2d9 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Lock "interface-57f8a652-f11e-4342-bae0-13c592a18ad2-3036e2e9-ad2c-4f44-96e5-c7dcdea69629" "released" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: held 5.341s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:20:11 np0005548731 nova_compute[232433]: 2025-12-06 07:20:11.151 232437 DEBUG oslo_concurrency.lockutils [None req-ef2f1643-6672-4a04-bc43-eaa096e38b8d e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] Acquiring lock "9b6b3b79-95ba-4b05-b308-4cb69b61cab5" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:20:11 np0005548731 nova_compute[232433]: 2025-12-06 07:20:11.151 232437 DEBUG oslo_concurrency.lockutils [None req-ef2f1643-6672-4a04-bc43-eaa096e38b8d e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] Lock "9b6b3b79-95ba-4b05-b308-4cb69b61cab5" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:20:11 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:20:11 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:20:11 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:20:11.191 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:20:11 np0005548731 nova_compute[232433]: 2025-12-06 07:20:11.194 232437 DEBUG nova.compute.manager [None req-ef2f1643-6672-4a04-bc43-eaa096e38b8d e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] [instance: 9b6b3b79-95ba-4b05-b308-4cb69b61cab5] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Dec  6 02:20:11 np0005548731 nova_compute[232433]: 2025-12-06 07:20:11.304 232437 DEBUG oslo_concurrency.lockutils [None req-ef2f1643-6672-4a04-bc43-eaa096e38b8d e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:20:11 np0005548731 nova_compute[232433]: 2025-12-06 07:20:11.305 232437 DEBUG oslo_concurrency.lockutils [None req-ef2f1643-6672-4a04-bc43-eaa096e38b8d e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:20:11 np0005548731 nova_compute[232433]: 2025-12-06 07:20:11.311 232437 DEBUG nova.virt.hardware [None req-ef2f1643-6672-4a04-bc43-eaa096e38b8d e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Dec  6 02:20:11 np0005548731 nova_compute[232433]: 2025-12-06 07:20:11.311 232437 INFO nova.compute.claims [None req-ef2f1643-6672-4a04-bc43-eaa096e38b8d e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] [instance: 9b6b3b79-95ba-4b05-b308-4cb69b61cab5] Claim successful on node compute-2.ctlplane.example.com#033[00m
Dec  6 02:20:11 np0005548731 nova_compute[232433]: 2025-12-06 07:20:11.441 232437 DEBUG oslo_concurrency.processutils [None req-ef2f1643-6672-4a04-bc43-eaa096e38b8d e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:20:11 np0005548731 nova_compute[232433]: 2025-12-06 07:20:11.880 232437 INFO nova.virt.libvirt.driver [None req-6e07a5c7-c10d-4efe-acd7-f33a632e327e 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] [instance: 57f8a652-f11e-4342-bae0-13c592a18ad2] Deleting instance files /var/lib/nova/instances/57f8a652-f11e-4342-bae0-13c592a18ad2_del#033[00m
Dec  6 02:20:11 np0005548731 nova_compute[232433]: 2025-12-06 07:20:11.881 232437 INFO nova.virt.libvirt.driver [None req-6e07a5c7-c10d-4efe-acd7-f33a632e327e 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] [instance: 57f8a652-f11e-4342-bae0-13c592a18ad2] Deletion of /var/lib/nova/instances/57f8a652-f11e-4342-bae0-13c592a18ad2_del complete#033[00m
Dec  6 02:20:11 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 02:20:11 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/100782781' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 02:20:11 np0005548731 nova_compute[232433]: 2025-12-06 07:20:11.912 232437 DEBUG oslo_concurrency.processutils [None req-ef2f1643-6672-4a04-bc43-eaa096e38b8d e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.471s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:20:11 np0005548731 nova_compute[232433]: 2025-12-06 07:20:11.920 232437 DEBUG nova.compute.provider_tree [None req-ef2f1643-6672-4a04-bc43-eaa096e38b8d e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] Inventory has not changed in ProviderTree for provider: 6d00757a-082f-486d-ae84-869a2ba2e6e7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  6 02:20:11 np0005548731 nova_compute[232433]: 2025-12-06 07:20:11.959 232437 DEBUG nova.scheduler.client.report [None req-ef2f1643-6672-4a04-bc43-eaa096e38b8d e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] Inventory has not changed for provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  6 02:20:11 np0005548731 nova_compute[232433]: 2025-12-06 07:20:11.987 232437 INFO nova.compute.manager [None req-6e07a5c7-c10d-4efe-acd7-f33a632e327e 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] [instance: 57f8a652-f11e-4342-bae0-13c592a18ad2] Took 2.61 seconds to destroy the instance on the hypervisor.#033[00m
Dec  6 02:20:11 np0005548731 nova_compute[232433]: 2025-12-06 07:20:11.988 232437 DEBUG oslo.service.loopingcall [None req-6e07a5c7-c10d-4efe-acd7-f33a632e327e 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Dec  6 02:20:11 np0005548731 nova_compute[232433]: 2025-12-06 07:20:11.988 232437 DEBUG nova.compute.manager [-] [instance: 57f8a652-f11e-4342-bae0-13c592a18ad2] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Dec  6 02:20:11 np0005548731 nova_compute[232433]: 2025-12-06 07:20:11.988 232437 DEBUG nova.network.neutron [-] [instance: 57f8a652-f11e-4342-bae0-13c592a18ad2] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Dec  6 02:20:12 np0005548731 nova_compute[232433]: 2025-12-06 07:20:12.000 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:20:12 np0005548731 nova_compute[232433]: 2025-12-06 07:20:12.191 232437 DEBUG oslo_concurrency.lockutils [None req-ef2f1643-6672-4a04-bc43-eaa096e38b8d e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.886s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:20:12 np0005548731 nova_compute[232433]: 2025-12-06 07:20:12.191 232437 DEBUG nova.compute.manager [None req-ef2f1643-6672-4a04-bc43-eaa096e38b8d e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] [instance: 9b6b3b79-95ba-4b05-b308-4cb69b61cab5] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Dec  6 02:20:12 np0005548731 nova_compute[232433]: 2025-12-06 07:20:12.254 232437 DEBUG nova.compute.manager [None req-ef2f1643-6672-4a04-bc43-eaa096e38b8d e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] [instance: 9b6b3b79-95ba-4b05-b308-4cb69b61cab5] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Dec  6 02:20:12 np0005548731 nova_compute[232433]: 2025-12-06 07:20:12.255 232437 DEBUG nova.network.neutron [None req-ef2f1643-6672-4a04-bc43-eaa096e38b8d e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] [instance: 9b6b3b79-95ba-4b05-b308-4cb69b61cab5] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Dec  6 02:20:12 np0005548731 nova_compute[232433]: 2025-12-06 07:20:12.278 232437 INFO nova.virt.libvirt.driver [None req-ef2f1643-6672-4a04-bc43-eaa096e38b8d e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] [instance: 9b6b3b79-95ba-4b05-b308-4cb69b61cab5] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Dec  6 02:20:12 np0005548731 nova_compute[232433]: 2025-12-06 07:20:12.296 232437 DEBUG nova.compute.manager [None req-ef2f1643-6672-4a04-bc43-eaa096e38b8d e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] [instance: 9b6b3b79-95ba-4b05-b308-4cb69b61cab5] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Dec  6 02:20:12 np0005548731 nova_compute[232433]: 2025-12-06 07:20:12.411 232437 DEBUG nova.compute.manager [None req-ef2f1643-6672-4a04-bc43-eaa096e38b8d e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] [instance: 9b6b3b79-95ba-4b05-b308-4cb69b61cab5] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Dec  6 02:20:12 np0005548731 nova_compute[232433]: 2025-12-06 07:20:12.413 232437 DEBUG nova.virt.libvirt.driver [None req-ef2f1643-6672-4a04-bc43-eaa096e38b8d e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] [instance: 9b6b3b79-95ba-4b05-b308-4cb69b61cab5] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Dec  6 02:20:12 np0005548731 nova_compute[232433]: 2025-12-06 07:20:12.413 232437 INFO nova.virt.libvirt.driver [None req-ef2f1643-6672-4a04-bc43-eaa096e38b8d e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] [instance: 9b6b3b79-95ba-4b05-b308-4cb69b61cab5] Creating image(s)#033[00m
Dec  6 02:20:12 np0005548731 nova_compute[232433]: 2025-12-06 07:20:12.437 232437 DEBUG nova.storage.rbd_utils [None req-ef2f1643-6672-4a04-bc43-eaa096e38b8d e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] rbd image 9b6b3b79-95ba-4b05-b308-4cb69b61cab5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:20:12 np0005548731 nova_compute[232433]: 2025-12-06 07:20:12.463 232437 DEBUG nova.storage.rbd_utils [None req-ef2f1643-6672-4a04-bc43-eaa096e38b8d e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] rbd image 9b6b3b79-95ba-4b05-b308-4cb69b61cab5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:20:12 np0005548731 nova_compute[232433]: 2025-12-06 07:20:12.483 232437 DEBUG nova.storage.rbd_utils [None req-ef2f1643-6672-4a04-bc43-eaa096e38b8d e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] rbd image 9b6b3b79-95ba-4b05-b308-4cb69b61cab5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:20:12 np0005548731 nova_compute[232433]: 2025-12-06 07:20:12.487 232437 DEBUG oslo_concurrency.processutils [None req-ef2f1643-6672-4a04-bc43-eaa096e38b8d e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:20:12 np0005548731 nova_compute[232433]: 2025-12-06 07:20:12.545 232437 DEBUG oslo_concurrency.processutils [None req-ef2f1643-6672-4a04-bc43-eaa096e38b8d e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:20:12 np0005548731 nova_compute[232433]: 2025-12-06 07:20:12.546 232437 DEBUG oslo_concurrency.lockutils [None req-ef2f1643-6672-4a04-bc43-eaa096e38b8d e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] Acquiring lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:20:12 np0005548731 nova_compute[232433]: 2025-12-06 07:20:12.546 232437 DEBUG oslo_concurrency.lockutils [None req-ef2f1643-6672-4a04-bc43-eaa096e38b8d e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:20:12 np0005548731 nova_compute[232433]: 2025-12-06 07:20:12.546 232437 DEBUG oslo_concurrency.lockutils [None req-ef2f1643-6672-4a04-bc43-eaa096e38b8d e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:20:12 np0005548731 nova_compute[232433]: 2025-12-06 07:20:12.566 232437 DEBUG nova.storage.rbd_utils [None req-ef2f1643-6672-4a04-bc43-eaa096e38b8d e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] rbd image 9b6b3b79-95ba-4b05-b308-4cb69b61cab5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:20:12 np0005548731 nova_compute[232433]: 2025-12-06 07:20:12.569 232437 DEBUG oslo_concurrency.processutils [None req-ef2f1643-6672-4a04-bc43-eaa096e38b8d e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef 9b6b3b79-95ba-4b05-b308-4cb69b61cab5_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:20:12 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:20:12 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 02:20:12 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:20:12.732 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 02:20:12 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e257 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:20:12 np0005548731 nova_compute[232433]: 2025-12-06 07:20:12.906 232437 DEBUG nova.policy [None req-ef2f1643-6672-4a04-bc43-eaa096e38b8d e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'e3d8ea3dcb5b4ee3b9bd24c62fdd61b2', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'da41861044a74eee9bc96841e54c57cc', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Dec  6 02:20:13 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:20:13 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 02:20:13 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:20:13.193 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 02:20:13 np0005548731 nova_compute[232433]: 2025-12-06 07:20:13.303 232437 DEBUG nova.compute.manager [req-0f6bf70b-8b0d-4b07-b967-4fa1939e9276 req-9f61552d-3ee1-44c5-821d-478e2460490a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 57f8a652-f11e-4342-bae0-13c592a18ad2] Received event network-vif-plugged-945300f6-d49b-4923-bb2f-c73996fd2be2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:20:13 np0005548731 nova_compute[232433]: 2025-12-06 07:20:13.303 232437 DEBUG oslo_concurrency.lockutils [req-0f6bf70b-8b0d-4b07-b967-4fa1939e9276 req-9f61552d-3ee1-44c5-821d-478e2460490a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "57f8a652-f11e-4342-bae0-13c592a18ad2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:20:13 np0005548731 nova_compute[232433]: 2025-12-06 07:20:13.304 232437 DEBUG oslo_concurrency.lockutils [req-0f6bf70b-8b0d-4b07-b967-4fa1939e9276 req-9f61552d-3ee1-44c5-821d-478e2460490a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "57f8a652-f11e-4342-bae0-13c592a18ad2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:20:13 np0005548731 nova_compute[232433]: 2025-12-06 07:20:13.304 232437 DEBUG oslo_concurrency.lockutils [req-0f6bf70b-8b0d-4b07-b967-4fa1939e9276 req-9f61552d-3ee1-44c5-821d-478e2460490a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "57f8a652-f11e-4342-bae0-13c592a18ad2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:20:13 np0005548731 nova_compute[232433]: 2025-12-06 07:20:13.304 232437 DEBUG nova.compute.manager [req-0f6bf70b-8b0d-4b07-b967-4fa1939e9276 req-9f61552d-3ee1-44c5-821d-478e2460490a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 57f8a652-f11e-4342-bae0-13c592a18ad2] No waiting events found dispatching network-vif-plugged-945300f6-d49b-4923-bb2f-c73996fd2be2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 02:20:13 np0005548731 nova_compute[232433]: 2025-12-06 07:20:13.304 232437 WARNING nova.compute.manager [req-0f6bf70b-8b0d-4b07-b967-4fa1939e9276 req-9f61552d-3ee1-44c5-821d-478e2460490a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 57f8a652-f11e-4342-bae0-13c592a18ad2] Received unexpected event network-vif-plugged-945300f6-d49b-4923-bb2f-c73996fd2be2 for instance with vm_state active and task_state deleting.#033[00m
Dec  6 02:20:13 np0005548731 nova_compute[232433]: 2025-12-06 07:20:13.322 232437 DEBUG oslo_concurrency.processutils [None req-ef2f1643-6672-4a04-bc43-eaa096e38b8d e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef 9b6b3b79-95ba-4b05-b308-4cb69b61cab5_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.753s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:20:13 np0005548731 nova_compute[232433]: 2025-12-06 07:20:13.384 232437 DEBUG nova.storage.rbd_utils [None req-ef2f1643-6672-4a04-bc43-eaa096e38b8d e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] resizing rbd image 9b6b3b79-95ba-4b05-b308-4cb69b61cab5_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Dec  6 02:20:13 np0005548731 nova_compute[232433]: 2025-12-06 07:20:13.471 232437 DEBUG nova.objects.instance [None req-ef2f1643-6672-4a04-bc43-eaa096e38b8d e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] Lazy-loading 'migration_context' on Instance uuid 9b6b3b79-95ba-4b05-b308-4cb69b61cab5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:20:13 np0005548731 nova_compute[232433]: 2025-12-06 07:20:13.488 232437 DEBUG nova.virt.libvirt.driver [None req-ef2f1643-6672-4a04-bc43-eaa096e38b8d e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] [instance: 9b6b3b79-95ba-4b05-b308-4cb69b61cab5] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Dec  6 02:20:13 np0005548731 nova_compute[232433]: 2025-12-06 07:20:13.488 232437 DEBUG nova.virt.libvirt.driver [None req-ef2f1643-6672-4a04-bc43-eaa096e38b8d e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] [instance: 9b6b3b79-95ba-4b05-b308-4cb69b61cab5] Ensure instance console log exists: /var/lib/nova/instances/9b6b3b79-95ba-4b05-b308-4cb69b61cab5/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec  6 02:20:13 np0005548731 nova_compute[232433]: 2025-12-06 07:20:13.488 232437 DEBUG oslo_concurrency.lockutils [None req-ef2f1643-6672-4a04-bc43-eaa096e38b8d e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:20:13 np0005548731 nova_compute[232433]: 2025-12-06 07:20:13.489 232437 DEBUG oslo_concurrency.lockutils [None req-ef2f1643-6672-4a04-bc43-eaa096e38b8d e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:20:13 np0005548731 nova_compute[232433]: 2025-12-06 07:20:13.489 232437 DEBUG oslo_concurrency.lockutils [None req-ef2f1643-6672-4a04-bc43-eaa096e38b8d e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:20:14 np0005548731 nova_compute[232433]: 2025-12-06 07:20:14.648 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:20:14 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:20:14 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:20:14 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:20:14.735 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:20:14 np0005548731 nova_compute[232433]: 2025-12-06 07:20:14.742 232437 DEBUG nova.network.neutron [-] [instance: 57f8a652-f11e-4342-bae0-13c592a18ad2] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 02:20:14 np0005548731 nova_compute[232433]: 2025-12-06 07:20:14.759 232437 DEBUG nova.network.neutron [None req-ef2f1643-6672-4a04-bc43-eaa096e38b8d e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] [instance: 9b6b3b79-95ba-4b05-b308-4cb69b61cab5] Successfully created port: ee1e1d41-5735-4e5a-8a77-1785e7a5214a _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Dec  6 02:20:14 np0005548731 nova_compute[232433]: 2025-12-06 07:20:14.786 232437 INFO nova.compute.manager [-] [instance: 57f8a652-f11e-4342-bae0-13c592a18ad2] Took 2.80 seconds to deallocate network for instance.#033[00m
Dec  6 02:20:14 np0005548731 nova_compute[232433]: 2025-12-06 07:20:14.830 232437 DEBUG oslo_concurrency.lockutils [None req-6e07a5c7-c10d-4efe-acd7-f33a632e327e 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:20:14 np0005548731 nova_compute[232433]: 2025-12-06 07:20:14.831 232437 DEBUG oslo_concurrency.lockutils [None req-6e07a5c7-c10d-4efe-acd7-f33a632e327e 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:20:14 np0005548731 nova_compute[232433]: 2025-12-06 07:20:14.903 232437 DEBUG oslo_concurrency.processutils [None req-6e07a5c7-c10d-4efe-acd7-f33a632e327e 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:20:15 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:20:15 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:20:15 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:20:15.196 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:20:15 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 02:20:15 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2102635153' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 02:20:15 np0005548731 nova_compute[232433]: 2025-12-06 07:20:15.316 232437 DEBUG oslo_concurrency.processutils [None req-6e07a5c7-c10d-4efe-acd7-f33a632e327e 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.413s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:20:15 np0005548731 nova_compute[232433]: 2025-12-06 07:20:15.322 232437 DEBUG nova.compute.provider_tree [None req-6e07a5c7-c10d-4efe-acd7-f33a632e327e 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Inventory has not changed in ProviderTree for provider: 6d00757a-082f-486d-ae84-869a2ba2e6e7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  6 02:20:15 np0005548731 nova_compute[232433]: 2025-12-06 07:20:15.346 232437 DEBUG nova.scheduler.client.report [None req-6e07a5c7-c10d-4efe-acd7-f33a632e327e 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Inventory has not changed for provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  6 02:20:15 np0005548731 nova_compute[232433]: 2025-12-06 07:20:15.376 232437 DEBUG oslo_concurrency.lockutils [None req-6e07a5c7-c10d-4efe-acd7-f33a632e327e 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.545s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:20:15 np0005548731 nova_compute[232433]: 2025-12-06 07:20:15.400 232437 INFO nova.scheduler.client.report [None req-6e07a5c7-c10d-4efe-acd7-f33a632e327e 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Deleted allocations for instance 57f8a652-f11e-4342-bae0-13c592a18ad2#033[00m
Dec  6 02:20:15 np0005548731 nova_compute[232433]: 2025-12-06 07:20:15.540 232437 DEBUG oslo_concurrency.lockutils [None req-6e07a5c7-c10d-4efe-acd7-f33a632e327e 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Lock "57f8a652-f11e-4342-bae0-13c592a18ad2" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 6.166s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:20:15 np0005548731 nova_compute[232433]: 2025-12-06 07:20:15.853 232437 DEBUG nova.network.neutron [None req-ef2f1643-6672-4a04-bc43-eaa096e38b8d e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] [instance: 9b6b3b79-95ba-4b05-b308-4cb69b61cab5] Successfully updated port: ee1e1d41-5735-4e5a-8a77-1785e7a5214a _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Dec  6 02:20:15 np0005548731 nova_compute[232433]: 2025-12-06 07:20:15.873 232437 DEBUG oslo_concurrency.lockutils [None req-ef2f1643-6672-4a04-bc43-eaa096e38b8d e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] Acquiring lock "refresh_cache-9b6b3b79-95ba-4b05-b308-4cb69b61cab5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 02:20:15 np0005548731 nova_compute[232433]: 2025-12-06 07:20:15.873 232437 DEBUG oslo_concurrency.lockutils [None req-ef2f1643-6672-4a04-bc43-eaa096e38b8d e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] Acquired lock "refresh_cache-9b6b3b79-95ba-4b05-b308-4cb69b61cab5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 02:20:15 np0005548731 nova_compute[232433]: 2025-12-06 07:20:15.874 232437 DEBUG nova.network.neutron [None req-ef2f1643-6672-4a04-bc43-eaa096e38b8d e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] [instance: 9b6b3b79-95ba-4b05-b308-4cb69b61cab5] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec  6 02:20:16 np0005548731 nova_compute[232433]: 2025-12-06 07:20:16.215 232437 DEBUG nova.network.neutron [None req-ef2f1643-6672-4a04-bc43-eaa096e38b8d e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] [instance: 9b6b3b79-95ba-4b05-b308-4cb69b61cab5] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec  6 02:20:16 np0005548731 nova_compute[232433]: 2025-12-06 07:20:16.567 232437 DEBUG oslo_concurrency.lockutils [None req-fba387fe-5a0b-4442-9569-ddd580fae573 e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] Acquiring lock "2092b945-90e0-4a04-aabb-cc48efe223da" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:20:16 np0005548731 nova_compute[232433]: 2025-12-06 07:20:16.567 232437 DEBUG oslo_concurrency.lockutils [None req-fba387fe-5a0b-4442-9569-ddd580fae573 e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] Lock "2092b945-90e0-4a04-aabb-cc48efe223da" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:20:16 np0005548731 nova_compute[232433]: 2025-12-06 07:20:16.585 232437 DEBUG nova.compute.manager [None req-fba387fe-5a0b-4442-9569-ddd580fae573 e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] [instance: 2092b945-90e0-4a04-aabb-cc48efe223da] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Dec  6 02:20:16 np0005548731 nova_compute[232433]: 2025-12-06 07:20:16.651 232437 DEBUG oslo_concurrency.lockutils [None req-fba387fe-5a0b-4442-9569-ddd580fae573 e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:20:16 np0005548731 nova_compute[232433]: 2025-12-06 07:20:16.651 232437 DEBUG oslo_concurrency.lockutils [None req-fba387fe-5a0b-4442-9569-ddd580fae573 e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:20:16 np0005548731 nova_compute[232433]: 2025-12-06 07:20:16.657 232437 DEBUG nova.virt.hardware [None req-fba387fe-5a0b-4442-9569-ddd580fae573 e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Dec  6 02:20:16 np0005548731 nova_compute[232433]: 2025-12-06 07:20:16.658 232437 INFO nova.compute.claims [None req-fba387fe-5a0b-4442-9569-ddd580fae573 e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] [instance: 2092b945-90e0-4a04-aabb-cc48efe223da] Claim successful on node compute-2.ctlplane.example.com#033[00m
Dec  6 02:20:16 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:20:16 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:20:16 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:20:16.737 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:20:16 np0005548731 nova_compute[232433]: 2025-12-06 07:20:16.817 232437 DEBUG oslo_concurrency.processutils [None req-fba387fe-5a0b-4442-9569-ddd580fae573 e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:20:16 np0005548731 nova_compute[232433]: 2025-12-06 07:20:16.878 232437 DEBUG nova.compute.manager [req-3acae6d2-585d-436a-a4da-983486c009e2 req-630eb704-13b8-48d3-b8ab-1f89e7a1af11 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 57f8a652-f11e-4342-bae0-13c592a18ad2] Received event network-vif-deleted-945300f6-d49b-4923-bb2f-c73996fd2be2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:20:16 np0005548731 nova_compute[232433]: 2025-12-06 07:20:16.879 232437 DEBUG nova.compute.manager [req-3acae6d2-585d-436a-a4da-983486c009e2 req-630eb704-13b8-48d3-b8ab-1f89e7a1af11 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 9b6b3b79-95ba-4b05-b308-4cb69b61cab5] Received event network-changed-ee1e1d41-5735-4e5a-8a77-1785e7a5214a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:20:16 np0005548731 nova_compute[232433]: 2025-12-06 07:20:16.879 232437 DEBUG nova.compute.manager [req-3acae6d2-585d-436a-a4da-983486c009e2 req-630eb704-13b8-48d3-b8ab-1f89e7a1af11 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 9b6b3b79-95ba-4b05-b308-4cb69b61cab5] Refreshing instance network info cache due to event network-changed-ee1e1d41-5735-4e5a-8a77-1785e7a5214a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  6 02:20:16 np0005548731 nova_compute[232433]: 2025-12-06 07:20:16.880 232437 DEBUG oslo_concurrency.lockutils [req-3acae6d2-585d-436a-a4da-983486c009e2 req-630eb704-13b8-48d3-b8ab-1f89e7a1af11 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-9b6b3b79-95ba-4b05-b308-4cb69b61cab5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 02:20:17 np0005548731 nova_compute[232433]: 2025-12-06 07:20:17.002 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:20:17 np0005548731 nova_compute[232433]: 2025-12-06 07:20:17.177 232437 DEBUG nova.network.neutron [None req-ef2f1643-6672-4a04-bc43-eaa096e38b8d e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] [instance: 9b6b3b79-95ba-4b05-b308-4cb69b61cab5] Updating instance_info_cache with network_info: [{"id": "ee1e1d41-5735-4e5a-8a77-1785e7a5214a", "address": "fa:16:3e:3f:d5:79", "network": {"id": "b71f2bd2-56d4-4afa-bd37-2f65a2d061fe", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-1522505220-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da41861044a74eee9bc96841e54c57cc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapee1e1d41-57", "ovs_interfaceid": "ee1e1d41-5735-4e5a-8a77-1785e7a5214a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 02:20:17 np0005548731 nova_compute[232433]: 2025-12-06 07:20:17.198 232437 DEBUG oslo_concurrency.lockutils [None req-ef2f1643-6672-4a04-bc43-eaa096e38b8d e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] Releasing lock "refresh_cache-9b6b3b79-95ba-4b05-b308-4cb69b61cab5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 02:20:17 np0005548731 nova_compute[232433]: 2025-12-06 07:20:17.198 232437 DEBUG nova.compute.manager [None req-ef2f1643-6672-4a04-bc43-eaa096e38b8d e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] [instance: 9b6b3b79-95ba-4b05-b308-4cb69b61cab5] Instance network_info: |[{"id": "ee1e1d41-5735-4e5a-8a77-1785e7a5214a", "address": "fa:16:3e:3f:d5:79", "network": {"id": "b71f2bd2-56d4-4afa-bd37-2f65a2d061fe", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-1522505220-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da41861044a74eee9bc96841e54c57cc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapee1e1d41-57", "ovs_interfaceid": "ee1e1d41-5735-4e5a-8a77-1785e7a5214a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Dec  6 02:20:17 np0005548731 nova_compute[232433]: 2025-12-06 07:20:17.199 232437 DEBUG oslo_concurrency.lockutils [req-3acae6d2-585d-436a-a4da-983486c009e2 req-630eb704-13b8-48d3-b8ab-1f89e7a1af11 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-9b6b3b79-95ba-4b05-b308-4cb69b61cab5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 02:20:17 np0005548731 nova_compute[232433]: 2025-12-06 07:20:17.200 232437 DEBUG nova.network.neutron [req-3acae6d2-585d-436a-a4da-983486c009e2 req-630eb704-13b8-48d3-b8ab-1f89e7a1af11 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 9b6b3b79-95ba-4b05-b308-4cb69b61cab5] Refreshing network info cache for port ee1e1d41-5735-4e5a-8a77-1785e7a5214a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  6 02:20:17 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:20:17 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:20:17 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:20:17.198 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:20:17 np0005548731 nova_compute[232433]: 2025-12-06 07:20:17.205 232437 DEBUG nova.virt.libvirt.driver [None req-ef2f1643-6672-4a04-bc43-eaa096e38b8d e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] [instance: 9b6b3b79-95ba-4b05-b308-4cb69b61cab5] Start _get_guest_xml network_info=[{"id": "ee1e1d41-5735-4e5a-8a77-1785e7a5214a", "address": "fa:16:3e:3f:d5:79", "network": {"id": "b71f2bd2-56d4-4afa-bd37-2f65a2d061fe", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-1522505220-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da41861044a74eee9bc96841e54c57cc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapee1e1d41-57", "ovs_interfaceid": "ee1e1d41-5735-4e5a-8a77-1785e7a5214a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'device_type': 'disk', 'encryption_secret_uuid': None, 'disk_bus': 'virtio', 'boot_index': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'encryption_options': None, 'size': 0, 'encrypted': False, 'image_id': '6efab05d-c7cf-4770-a5c3-c806a2739063'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec  6 02:20:17 np0005548731 nova_compute[232433]: 2025-12-06 07:20:17.211 232437 WARNING nova.virt.libvirt.driver [None req-ef2f1643-6672-4a04-bc43-eaa096e38b8d e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  6 02:20:17 np0005548731 nova_compute[232433]: 2025-12-06 07:20:17.222 232437 DEBUG nova.virt.libvirt.host [None req-ef2f1643-6672-4a04-bc43-eaa096e38b8d e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec  6 02:20:17 np0005548731 nova_compute[232433]: 2025-12-06 07:20:17.223 232437 DEBUG nova.virt.libvirt.host [None req-ef2f1643-6672-4a04-bc43-eaa096e38b8d e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec  6 02:20:17 np0005548731 nova_compute[232433]: 2025-12-06 07:20:17.228 232437 DEBUG nova.virt.libvirt.host [None req-ef2f1643-6672-4a04-bc43-eaa096e38b8d e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec  6 02:20:17 np0005548731 nova_compute[232433]: 2025-12-06 07:20:17.229 232437 DEBUG nova.virt.libvirt.host [None req-ef2f1643-6672-4a04-bc43-eaa096e38b8d e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec  6 02:20:17 np0005548731 nova_compute[232433]: 2025-12-06 07:20:17.232 232437 DEBUG nova.virt.libvirt.driver [None req-ef2f1643-6672-4a04-bc43-eaa096e38b8d e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec  6 02:20:17 np0005548731 nova_compute[232433]: 2025-12-06 07:20:17.232 232437 DEBUG nova.virt.hardware [None req-ef2f1643-6672-4a04-bc43-eaa096e38b8d e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-06T06:56:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='25848a18-11d9-4f11-80b5-5d005675c76d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec  6 02:20:17 np0005548731 nova_compute[232433]: 2025-12-06 07:20:17.234 232437 DEBUG nova.virt.hardware [None req-ef2f1643-6672-4a04-bc43-eaa096e38b8d e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec  6 02:20:17 np0005548731 nova_compute[232433]: 2025-12-06 07:20:17.234 232437 DEBUG nova.virt.hardware [None req-ef2f1643-6672-4a04-bc43-eaa096e38b8d e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec  6 02:20:17 np0005548731 nova_compute[232433]: 2025-12-06 07:20:17.235 232437 DEBUG nova.virt.hardware [None req-ef2f1643-6672-4a04-bc43-eaa096e38b8d e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec  6 02:20:17 np0005548731 nova_compute[232433]: 2025-12-06 07:20:17.235 232437 DEBUG nova.virt.hardware [None req-ef2f1643-6672-4a04-bc43-eaa096e38b8d e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec  6 02:20:17 np0005548731 nova_compute[232433]: 2025-12-06 07:20:17.236 232437 DEBUG nova.virt.hardware [None req-ef2f1643-6672-4a04-bc43-eaa096e38b8d e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec  6 02:20:17 np0005548731 nova_compute[232433]: 2025-12-06 07:20:17.236 232437 DEBUG nova.virt.hardware [None req-ef2f1643-6672-4a04-bc43-eaa096e38b8d e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec  6 02:20:17 np0005548731 nova_compute[232433]: 2025-12-06 07:20:17.237 232437 DEBUG nova.virt.hardware [None req-ef2f1643-6672-4a04-bc43-eaa096e38b8d e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec  6 02:20:17 np0005548731 nova_compute[232433]: 2025-12-06 07:20:17.237 232437 DEBUG nova.virt.hardware [None req-ef2f1643-6672-4a04-bc43-eaa096e38b8d e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec  6 02:20:17 np0005548731 nova_compute[232433]: 2025-12-06 07:20:17.238 232437 DEBUG nova.virt.hardware [None req-ef2f1643-6672-4a04-bc43-eaa096e38b8d e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec  6 02:20:17 np0005548731 nova_compute[232433]: 2025-12-06 07:20:17.238 232437 DEBUG nova.virt.hardware [None req-ef2f1643-6672-4a04-bc43-eaa096e38b8d e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec  6 02:20:17 np0005548731 nova_compute[232433]: 2025-12-06 07:20:17.245 232437 DEBUG oslo_concurrency.processutils [None req-ef2f1643-6672-4a04-bc43-eaa096e38b8d e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:20:17 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 02:20:17 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2191333907' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 02:20:17 np0005548731 nova_compute[232433]: 2025-12-06 07:20:17.278 232437 DEBUG oslo_concurrency.processutils [None req-fba387fe-5a0b-4442-9569-ddd580fae573 e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.461s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:20:17 np0005548731 nova_compute[232433]: 2025-12-06 07:20:17.285 232437 DEBUG nova.compute.provider_tree [None req-fba387fe-5a0b-4442-9569-ddd580fae573 e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] Inventory has not changed in ProviderTree for provider: 6d00757a-082f-486d-ae84-869a2ba2e6e7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  6 02:20:17 np0005548731 nova_compute[232433]: 2025-12-06 07:20:17.300 232437 DEBUG nova.scheduler.client.report [None req-fba387fe-5a0b-4442-9569-ddd580fae573 e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] Inventory has not changed for provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  6 02:20:17 np0005548731 nova_compute[232433]: 2025-12-06 07:20:17.328 232437 DEBUG oslo_concurrency.lockutils [None req-fba387fe-5a0b-4442-9569-ddd580fae573 e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.677s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:20:17 np0005548731 nova_compute[232433]: 2025-12-06 07:20:17.329 232437 DEBUG nova.compute.manager [None req-fba387fe-5a0b-4442-9569-ddd580fae573 e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] [instance: 2092b945-90e0-4a04-aabb-cc48efe223da] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Dec  6 02:20:17 np0005548731 nova_compute[232433]: 2025-12-06 07:20:17.382 232437 DEBUG nova.compute.manager [None req-fba387fe-5a0b-4442-9569-ddd580fae573 e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] [instance: 2092b945-90e0-4a04-aabb-cc48efe223da] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Dec  6 02:20:17 np0005548731 nova_compute[232433]: 2025-12-06 07:20:17.383 232437 DEBUG nova.network.neutron [None req-fba387fe-5a0b-4442-9569-ddd580fae573 e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] [instance: 2092b945-90e0-4a04-aabb-cc48efe223da] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Dec  6 02:20:17 np0005548731 nova_compute[232433]: 2025-12-06 07:20:17.448 232437 INFO nova.virt.libvirt.driver [None req-fba387fe-5a0b-4442-9569-ddd580fae573 e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] [instance: 2092b945-90e0-4a04-aabb-cc48efe223da] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Dec  6 02:20:17 np0005548731 nova_compute[232433]: 2025-12-06 07:20:17.478 232437 DEBUG nova.compute.manager [None req-fba387fe-5a0b-4442-9569-ddd580fae573 e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] [instance: 2092b945-90e0-4a04-aabb-cc48efe223da] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Dec  6 02:20:17 np0005548731 nova_compute[232433]: 2025-12-06 07:20:17.561 232437 DEBUG nova.compute.manager [None req-fba387fe-5a0b-4442-9569-ddd580fae573 e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] [instance: 2092b945-90e0-4a04-aabb-cc48efe223da] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Dec  6 02:20:17 np0005548731 nova_compute[232433]: 2025-12-06 07:20:17.566 232437 DEBUG nova.virt.libvirt.driver [None req-fba387fe-5a0b-4442-9569-ddd580fae573 e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] [instance: 2092b945-90e0-4a04-aabb-cc48efe223da] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Dec  6 02:20:17 np0005548731 nova_compute[232433]: 2025-12-06 07:20:17.567 232437 INFO nova.virt.libvirt.driver [None req-fba387fe-5a0b-4442-9569-ddd580fae573 e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] [instance: 2092b945-90e0-4a04-aabb-cc48efe223da] Creating image(s)#033[00m
Dec  6 02:20:17 np0005548731 nova_compute[232433]: 2025-12-06 07:20:17.592 232437 DEBUG nova.storage.rbd_utils [None req-fba387fe-5a0b-4442-9569-ddd580fae573 e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] rbd image 2092b945-90e0-4a04-aabb-cc48efe223da_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:20:17 np0005548731 nova_compute[232433]: 2025-12-06 07:20:17.620 232437 DEBUG nova.storage.rbd_utils [None req-fba387fe-5a0b-4442-9569-ddd580fae573 e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] rbd image 2092b945-90e0-4a04-aabb-cc48efe223da_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:20:17 np0005548731 nova_compute[232433]: 2025-12-06 07:20:17.646 232437 DEBUG nova.storage.rbd_utils [None req-fba387fe-5a0b-4442-9569-ddd580fae573 e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] rbd image 2092b945-90e0-4a04-aabb-cc48efe223da_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:20:17 np0005548731 nova_compute[232433]: 2025-12-06 07:20:17.649 232437 DEBUG oslo_concurrency.processutils [None req-fba387fe-5a0b-4442-9569-ddd580fae573 e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:20:17 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  6 02:20:17 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/4124821027' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  6 02:20:17 np0005548731 nova_compute[232433]: 2025-12-06 07:20:17.708 232437 DEBUG oslo_concurrency.processutils [None req-fba387fe-5a0b-4442-9569-ddd580fae573 e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:20:17 np0005548731 nova_compute[232433]: 2025-12-06 07:20:17.709 232437 DEBUG oslo_concurrency.lockutils [None req-fba387fe-5a0b-4442-9569-ddd580fae573 e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] Acquiring lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:20:17 np0005548731 nova_compute[232433]: 2025-12-06 07:20:17.710 232437 DEBUG oslo_concurrency.lockutils [None req-fba387fe-5a0b-4442-9569-ddd580fae573 e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:20:17 np0005548731 nova_compute[232433]: 2025-12-06 07:20:17.710 232437 DEBUG oslo_concurrency.lockutils [None req-fba387fe-5a0b-4442-9569-ddd580fae573 e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:20:17 np0005548731 nova_compute[232433]: 2025-12-06 07:20:17.740 232437 DEBUG nova.storage.rbd_utils [None req-fba387fe-5a0b-4442-9569-ddd580fae573 e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] rbd image 2092b945-90e0-4a04-aabb-cc48efe223da_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:20:17 np0005548731 nova_compute[232433]: 2025-12-06 07:20:17.744 232437 DEBUG oslo_concurrency.processutils [None req-fba387fe-5a0b-4442-9569-ddd580fae573 e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef 2092b945-90e0-4a04-aabb-cc48efe223da_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:20:17 np0005548731 nova_compute[232433]: 2025-12-06 07:20:17.770 232437 DEBUG nova.policy [None req-fba387fe-5a0b-4442-9569-ddd580fae573 e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'e3d8ea3dcb5b4ee3b9bd24c62fdd61b2', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'da41861044a74eee9bc96841e54c57cc', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Dec  6 02:20:17 np0005548731 nova_compute[232433]: 2025-12-06 07:20:17.773 232437 DEBUG oslo_concurrency.processutils [None req-ef2f1643-6672-4a04-bc43-eaa096e38b8d e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.528s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:20:17 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e257 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:20:17 np0005548731 nova_compute[232433]: 2025-12-06 07:20:17.810 232437 DEBUG nova.storage.rbd_utils [None req-ef2f1643-6672-4a04-bc43-eaa096e38b8d e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] rbd image 9b6b3b79-95ba-4b05-b308-4cb69b61cab5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:20:17 np0005548731 nova_compute[232433]: 2025-12-06 07:20:17.815 232437 DEBUG oslo_concurrency.processutils [None req-ef2f1643-6672-4a04-bc43-eaa096e38b8d e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:20:18 np0005548731 nova_compute[232433]: 2025-12-06 07:20:18.243 232437 DEBUG oslo_concurrency.processutils [None req-fba387fe-5a0b-4442-9569-ddd580fae573 e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef 2092b945-90e0-4a04-aabb-cc48efe223da_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.499s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:20:18 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  6 02:20:18 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1593184799' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  6 02:20:18 np0005548731 nova_compute[232433]: 2025-12-06 07:20:18.281 232437 DEBUG oslo_concurrency.processutils [None req-ef2f1643-6672-4a04-bc43-eaa096e38b8d e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.467s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:20:18 np0005548731 nova_compute[232433]: 2025-12-06 07:20:18.283 232437 DEBUG nova.virt.libvirt.vif [None req-ef2f1643-6672-4a04-bc43-eaa096e38b8d e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T07:20:09Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ListServerFiltersTestJSON-instance-17034118',display_name='tempest-ListServerFiltersTestJSON-instance-17034118',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-listserverfilterstestjson-instance-17034118',id=83,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='da41861044a74eee9bc96841e54c57cc',ramdisk_id='',reservation_id='r-jvcfgmi9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ListServerFiltersTestJSON-1681699386',owner_user_name='tempest-ListServerFiltersTestJSON-1681699386-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:20:12Z,user_data=None,user_id='e3d8ea3dcb5b4ee3b9bd24c62fdd61b2',uuid=9b6b3b79-95ba-4b05-b308-4cb69b61cab5,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ee1e1d41-5735-4e5a-8a77-1785e7a5214a", "address": "fa:16:3e:3f:d5:79", "network": {"id": "b71f2bd2-56d4-4afa-bd37-2f65a2d061fe", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-1522505220-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da41861044a74eee9bc96841e54c57cc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapee1e1d41-57", "ovs_interfaceid": "ee1e1d41-5735-4e5a-8a77-1785e7a5214a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec  6 02:20:18 np0005548731 nova_compute[232433]: 2025-12-06 07:20:18.283 232437 DEBUG nova.network.os_vif_util [None req-ef2f1643-6672-4a04-bc43-eaa096e38b8d e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] Converting VIF {"id": "ee1e1d41-5735-4e5a-8a77-1785e7a5214a", "address": "fa:16:3e:3f:d5:79", "network": {"id": "b71f2bd2-56d4-4afa-bd37-2f65a2d061fe", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-1522505220-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da41861044a74eee9bc96841e54c57cc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapee1e1d41-57", "ovs_interfaceid": "ee1e1d41-5735-4e5a-8a77-1785e7a5214a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 02:20:18 np0005548731 nova_compute[232433]: 2025-12-06 07:20:18.284 232437 DEBUG nova.network.os_vif_util [None req-ef2f1643-6672-4a04-bc43-eaa096e38b8d e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3f:d5:79,bridge_name='br-int',has_traffic_filtering=True,id=ee1e1d41-5735-4e5a-8a77-1785e7a5214a,network=Network(b71f2bd2-56d4-4afa-bd37-2f65a2d061fe),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapee1e1d41-57') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 02:20:18 np0005548731 nova_compute[232433]: 2025-12-06 07:20:18.285 232437 DEBUG nova.objects.instance [None req-ef2f1643-6672-4a04-bc43-eaa096e38b8d e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] Lazy-loading 'pci_devices' on Instance uuid 9b6b3b79-95ba-4b05-b308-4cb69b61cab5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:20:18 np0005548731 nova_compute[232433]: 2025-12-06 07:20:18.324 232437 DEBUG nova.virt.libvirt.driver [None req-ef2f1643-6672-4a04-bc43-eaa096e38b8d e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] [instance: 9b6b3b79-95ba-4b05-b308-4cb69b61cab5] End _get_guest_xml xml=<domain type="kvm">
Dec  6 02:20:18 np0005548731 nova_compute[232433]:  <uuid>9b6b3b79-95ba-4b05-b308-4cb69b61cab5</uuid>
Dec  6 02:20:18 np0005548731 nova_compute[232433]:  <name>instance-00000053</name>
Dec  6 02:20:18 np0005548731 nova_compute[232433]:  <memory>131072</memory>
Dec  6 02:20:18 np0005548731 nova_compute[232433]:  <vcpu>1</vcpu>
Dec  6 02:20:18 np0005548731 nova_compute[232433]:  <metadata>
Dec  6 02:20:18 np0005548731 nova_compute[232433]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec  6 02:20:18 np0005548731 nova_compute[232433]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec  6 02:20:18 np0005548731 nova_compute[232433]:      <nova:name>tempest-ListServerFiltersTestJSON-instance-17034118</nova:name>
Dec  6 02:20:18 np0005548731 nova_compute[232433]:      <nova:creationTime>2025-12-06 07:20:17</nova:creationTime>
Dec  6 02:20:18 np0005548731 nova_compute[232433]:      <nova:flavor name="m1.nano">
Dec  6 02:20:18 np0005548731 nova_compute[232433]:        <nova:memory>128</nova:memory>
Dec  6 02:20:18 np0005548731 nova_compute[232433]:        <nova:disk>1</nova:disk>
Dec  6 02:20:18 np0005548731 nova_compute[232433]:        <nova:swap>0</nova:swap>
Dec  6 02:20:18 np0005548731 nova_compute[232433]:        <nova:ephemeral>0</nova:ephemeral>
Dec  6 02:20:18 np0005548731 nova_compute[232433]:        <nova:vcpus>1</nova:vcpus>
Dec  6 02:20:18 np0005548731 nova_compute[232433]:      </nova:flavor>
Dec  6 02:20:18 np0005548731 nova_compute[232433]:      <nova:owner>
Dec  6 02:20:18 np0005548731 nova_compute[232433]:        <nova:user uuid="e3d8ea3dcb5b4ee3b9bd24c62fdd61b2">tempest-ListServerFiltersTestJSON-1681699386-project-member</nova:user>
Dec  6 02:20:18 np0005548731 nova_compute[232433]:        <nova:project uuid="da41861044a74eee9bc96841e54c57cc">tempest-ListServerFiltersTestJSON-1681699386</nova:project>
Dec  6 02:20:18 np0005548731 nova_compute[232433]:      </nova:owner>
Dec  6 02:20:18 np0005548731 nova_compute[232433]:      <nova:root type="image" uuid="6efab05d-c7cf-4770-a5c3-c806a2739063"/>
Dec  6 02:20:18 np0005548731 nova_compute[232433]:      <nova:ports>
Dec  6 02:20:18 np0005548731 nova_compute[232433]:        <nova:port uuid="ee1e1d41-5735-4e5a-8a77-1785e7a5214a">
Dec  6 02:20:18 np0005548731 nova_compute[232433]:          <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Dec  6 02:20:18 np0005548731 nova_compute[232433]:        </nova:port>
Dec  6 02:20:18 np0005548731 nova_compute[232433]:      </nova:ports>
Dec  6 02:20:18 np0005548731 nova_compute[232433]:    </nova:instance>
Dec  6 02:20:18 np0005548731 nova_compute[232433]:  </metadata>
Dec  6 02:20:18 np0005548731 nova_compute[232433]:  <sysinfo type="smbios">
Dec  6 02:20:18 np0005548731 nova_compute[232433]:    <system>
Dec  6 02:20:18 np0005548731 nova_compute[232433]:      <entry name="manufacturer">RDO</entry>
Dec  6 02:20:18 np0005548731 nova_compute[232433]:      <entry name="product">OpenStack Compute</entry>
Dec  6 02:20:18 np0005548731 nova_compute[232433]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec  6 02:20:18 np0005548731 nova_compute[232433]:      <entry name="serial">9b6b3b79-95ba-4b05-b308-4cb69b61cab5</entry>
Dec  6 02:20:18 np0005548731 nova_compute[232433]:      <entry name="uuid">9b6b3b79-95ba-4b05-b308-4cb69b61cab5</entry>
Dec  6 02:20:18 np0005548731 nova_compute[232433]:      <entry name="family">Virtual Machine</entry>
Dec  6 02:20:18 np0005548731 nova_compute[232433]:    </system>
Dec  6 02:20:18 np0005548731 nova_compute[232433]:  </sysinfo>
Dec  6 02:20:18 np0005548731 nova_compute[232433]:  <os>
Dec  6 02:20:18 np0005548731 nova_compute[232433]:    <type arch="x86_64" machine="q35">hvm</type>
Dec  6 02:20:18 np0005548731 nova_compute[232433]:    <boot dev="hd"/>
Dec  6 02:20:18 np0005548731 nova_compute[232433]:    <smbios mode="sysinfo"/>
Dec  6 02:20:18 np0005548731 nova_compute[232433]:  </os>
Dec  6 02:20:18 np0005548731 nova_compute[232433]:  <features>
Dec  6 02:20:18 np0005548731 nova_compute[232433]:    <acpi/>
Dec  6 02:20:18 np0005548731 nova_compute[232433]:    <apic/>
Dec  6 02:20:18 np0005548731 nova_compute[232433]:    <vmcoreinfo/>
Dec  6 02:20:18 np0005548731 nova_compute[232433]:  </features>
Dec  6 02:20:18 np0005548731 nova_compute[232433]:  <clock offset="utc">
Dec  6 02:20:18 np0005548731 nova_compute[232433]:    <timer name="pit" tickpolicy="delay"/>
Dec  6 02:20:18 np0005548731 nova_compute[232433]:    <timer name="rtc" tickpolicy="catchup"/>
Dec  6 02:20:18 np0005548731 nova_compute[232433]:    <timer name="hpet" present="no"/>
Dec  6 02:20:18 np0005548731 nova_compute[232433]:  </clock>
Dec  6 02:20:18 np0005548731 nova_compute[232433]:  <cpu mode="custom" match="exact">
Dec  6 02:20:18 np0005548731 nova_compute[232433]:    <model>Nehalem</model>
Dec  6 02:20:18 np0005548731 nova_compute[232433]:    <topology sockets="1" cores="1" threads="1"/>
Dec  6 02:20:18 np0005548731 nova_compute[232433]:  </cpu>
Dec  6 02:20:18 np0005548731 nova_compute[232433]:  <devices>
Dec  6 02:20:18 np0005548731 nova_compute[232433]:    <disk type="network" device="disk">
Dec  6 02:20:18 np0005548731 nova_compute[232433]:      <driver type="raw" cache="none"/>
Dec  6 02:20:18 np0005548731 nova_compute[232433]:      <source protocol="rbd" name="vms/9b6b3b79-95ba-4b05-b308-4cb69b61cab5_disk">
Dec  6 02:20:18 np0005548731 nova_compute[232433]:        <host name="192.168.122.100" port="6789"/>
Dec  6 02:20:18 np0005548731 nova_compute[232433]:        <host name="192.168.122.102" port="6789"/>
Dec  6 02:20:18 np0005548731 nova_compute[232433]:        <host name="192.168.122.101" port="6789"/>
Dec  6 02:20:18 np0005548731 nova_compute[232433]:      </source>
Dec  6 02:20:18 np0005548731 nova_compute[232433]:      <auth username="openstack">
Dec  6 02:20:18 np0005548731 nova_compute[232433]:        <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec  6 02:20:18 np0005548731 nova_compute[232433]:      </auth>
Dec  6 02:20:18 np0005548731 nova_compute[232433]:      <target dev="vda" bus="virtio"/>
Dec  6 02:20:18 np0005548731 nova_compute[232433]:    </disk>
Dec  6 02:20:18 np0005548731 nova_compute[232433]:    <disk type="network" device="cdrom">
Dec  6 02:20:18 np0005548731 nova_compute[232433]:      <driver type="raw" cache="none"/>
Dec  6 02:20:18 np0005548731 nova_compute[232433]:      <source protocol="rbd" name="vms/9b6b3b79-95ba-4b05-b308-4cb69b61cab5_disk.config">
Dec  6 02:20:18 np0005548731 nova_compute[232433]:        <host name="192.168.122.100" port="6789"/>
Dec  6 02:20:18 np0005548731 nova_compute[232433]:        <host name="192.168.122.102" port="6789"/>
Dec  6 02:20:18 np0005548731 nova_compute[232433]:        <host name="192.168.122.101" port="6789"/>
Dec  6 02:20:18 np0005548731 nova_compute[232433]:      </source>
Dec  6 02:20:18 np0005548731 nova_compute[232433]:      <auth username="openstack">
Dec  6 02:20:18 np0005548731 nova_compute[232433]:        <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec  6 02:20:18 np0005548731 nova_compute[232433]:      </auth>
Dec  6 02:20:18 np0005548731 nova_compute[232433]:      <target dev="sda" bus="sata"/>
Dec  6 02:20:18 np0005548731 nova_compute[232433]:    </disk>
Dec  6 02:20:18 np0005548731 nova_compute[232433]:    <interface type="ethernet">
Dec  6 02:20:18 np0005548731 nova_compute[232433]:      <mac address="fa:16:3e:3f:d5:79"/>
Dec  6 02:20:18 np0005548731 nova_compute[232433]:      <model type="virtio"/>
Dec  6 02:20:18 np0005548731 nova_compute[232433]:      <driver name="vhost" rx_queue_size="512"/>
Dec  6 02:20:18 np0005548731 nova_compute[232433]:      <mtu size="1442"/>
Dec  6 02:20:18 np0005548731 nova_compute[232433]:      <target dev="tapee1e1d41-57"/>
Dec  6 02:20:18 np0005548731 nova_compute[232433]:    </interface>
Dec  6 02:20:18 np0005548731 nova_compute[232433]:    <serial type="pty">
Dec  6 02:20:18 np0005548731 nova_compute[232433]:      <log file="/var/lib/nova/instances/9b6b3b79-95ba-4b05-b308-4cb69b61cab5/console.log" append="off"/>
Dec  6 02:20:18 np0005548731 nova_compute[232433]:    </serial>
Dec  6 02:20:18 np0005548731 nova_compute[232433]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec  6 02:20:18 np0005548731 nova_compute[232433]:    <video>
Dec  6 02:20:18 np0005548731 nova_compute[232433]:      <model type="virtio"/>
Dec  6 02:20:18 np0005548731 nova_compute[232433]:    </video>
Dec  6 02:20:18 np0005548731 nova_compute[232433]:    <input type="tablet" bus="usb"/>
Dec  6 02:20:18 np0005548731 nova_compute[232433]:    <rng model="virtio">
Dec  6 02:20:18 np0005548731 nova_compute[232433]:      <backend model="random">/dev/urandom</backend>
Dec  6 02:20:18 np0005548731 nova_compute[232433]:    </rng>
Dec  6 02:20:18 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root"/>
Dec  6 02:20:18 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:20:18 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:20:18 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:20:18 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:20:18 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:20:18 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:20:18 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:20:18 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:20:18 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:20:18 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:20:18 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:20:18 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:20:18 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:20:18 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:20:18 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:20:18 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:20:18 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:20:18 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:20:18 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:20:18 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:20:18 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:20:18 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:20:18 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:20:18 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:20:18 np0005548731 nova_compute[232433]:    <controller type="usb" index="0"/>
Dec  6 02:20:18 np0005548731 nova_compute[232433]:    <memballoon model="virtio">
Dec  6 02:20:18 np0005548731 nova_compute[232433]:      <stats period="10"/>
Dec  6 02:20:18 np0005548731 nova_compute[232433]:    </memballoon>
Dec  6 02:20:18 np0005548731 nova_compute[232433]:  </devices>
Dec  6 02:20:18 np0005548731 nova_compute[232433]: </domain>
Dec  6 02:20:18 np0005548731 nova_compute[232433]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec  6 02:20:18 np0005548731 nova_compute[232433]: 2025-12-06 07:20:18.325 232437 DEBUG nova.compute.manager [None req-ef2f1643-6672-4a04-bc43-eaa096e38b8d e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] [instance: 9b6b3b79-95ba-4b05-b308-4cb69b61cab5] Preparing to wait for external event network-vif-plugged-ee1e1d41-5735-4e5a-8a77-1785e7a5214a prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec  6 02:20:18 np0005548731 nova_compute[232433]: 2025-12-06 07:20:18.325 232437 DEBUG oslo_concurrency.lockutils [None req-ef2f1643-6672-4a04-bc43-eaa096e38b8d e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] Acquiring lock "9b6b3b79-95ba-4b05-b308-4cb69b61cab5-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:20:18 np0005548731 nova_compute[232433]: 2025-12-06 07:20:18.325 232437 DEBUG oslo_concurrency.lockutils [None req-ef2f1643-6672-4a04-bc43-eaa096e38b8d e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] Lock "9b6b3b79-95ba-4b05-b308-4cb69b61cab5-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:20:18 np0005548731 nova_compute[232433]: 2025-12-06 07:20:18.326 232437 DEBUG oslo_concurrency.lockutils [None req-ef2f1643-6672-4a04-bc43-eaa096e38b8d e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] Lock "9b6b3b79-95ba-4b05-b308-4cb69b61cab5-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:20:18 np0005548731 nova_compute[232433]: 2025-12-06 07:20:18.326 232437 DEBUG nova.virt.libvirt.vif [None req-ef2f1643-6672-4a04-bc43-eaa096e38b8d e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T07:20:09Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ListServerFiltersTestJSON-instance-17034118',display_name='tempest-ListServerFiltersTestJSON-instance-17034118',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-listserverfilterstestjson-instance-17034118',id=83,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='da41861044a74eee9bc96841e54c57cc',ramdisk_id='',reservation_id='r-jvcfgmi9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ListServerFiltersTestJSON-1681699386',owner_user_name='tempest-ListServerFiltersTestJSON-1681699386-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:20:12Z,user_data=None,user_id='e3d8ea3dcb5b4ee3b9bd24c62fdd61b2',uuid=9b6b3b79-95ba-4b05-b308-4cb69b61cab5,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ee1e1d41-5735-4e5a-8a77-1785e7a5214a", "address": "fa:16:3e:3f:d5:79", "network": {"id": "b71f2bd2-56d4-4afa-bd37-2f65a2d061fe", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-1522505220-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da41861044a74eee9bc96841e54c57cc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapee1e1d41-57", "ovs_interfaceid": "ee1e1d41-5735-4e5a-8a77-1785e7a5214a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec  6 02:20:18 np0005548731 nova_compute[232433]: 2025-12-06 07:20:18.327 232437 DEBUG nova.network.os_vif_util [None req-ef2f1643-6672-4a04-bc43-eaa096e38b8d e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] Converting VIF {"id": "ee1e1d41-5735-4e5a-8a77-1785e7a5214a", "address": "fa:16:3e:3f:d5:79", "network": {"id": "b71f2bd2-56d4-4afa-bd37-2f65a2d061fe", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-1522505220-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da41861044a74eee9bc96841e54c57cc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapee1e1d41-57", "ovs_interfaceid": "ee1e1d41-5735-4e5a-8a77-1785e7a5214a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 02:20:18 np0005548731 nova_compute[232433]: 2025-12-06 07:20:18.327 232437 DEBUG nova.network.os_vif_util [None req-ef2f1643-6672-4a04-bc43-eaa096e38b8d e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3f:d5:79,bridge_name='br-int',has_traffic_filtering=True,id=ee1e1d41-5735-4e5a-8a77-1785e7a5214a,network=Network(b71f2bd2-56d4-4afa-bd37-2f65a2d061fe),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapee1e1d41-57') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 02:20:18 np0005548731 nova_compute[232433]: 2025-12-06 07:20:18.328 232437 DEBUG os_vif [None req-ef2f1643-6672-4a04-bc43-eaa096e38b8d e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:3f:d5:79,bridge_name='br-int',has_traffic_filtering=True,id=ee1e1d41-5735-4e5a-8a77-1785e7a5214a,network=Network(b71f2bd2-56d4-4afa-bd37-2f65a2d061fe),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapee1e1d41-57') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec  6 02:20:18 np0005548731 nova_compute[232433]: 2025-12-06 07:20:18.329 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:20:18 np0005548731 nova_compute[232433]: 2025-12-06 07:20:18.330 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:20:18 np0005548731 nova_compute[232433]: 2025-12-06 07:20:18.330 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  6 02:20:18 np0005548731 nova_compute[232433]: 2025-12-06 07:20:18.335 232437 DEBUG nova.storage.rbd_utils [None req-fba387fe-5a0b-4442-9569-ddd580fae573 e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] resizing rbd image 2092b945-90e0-4a04-aabb-cc48efe223da_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Dec  6 02:20:18 np0005548731 nova_compute[232433]: 2025-12-06 07:20:18.373 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:20:18 np0005548731 nova_compute[232433]: 2025-12-06 07:20:18.373 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapee1e1d41-57, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:20:18 np0005548731 nova_compute[232433]: 2025-12-06 07:20:18.374 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapee1e1d41-57, col_values=(('external_ids', {'iface-id': 'ee1e1d41-5735-4e5a-8a77-1785e7a5214a', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:3f:d5:79', 'vm-uuid': '9b6b3b79-95ba-4b05-b308-4cb69b61cab5'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:20:18 np0005548731 nova_compute[232433]: 2025-12-06 07:20:18.376 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:20:18 np0005548731 NetworkManager[49182]: <info>  [1765005618.3770] manager: (tapee1e1d41-57): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/164)
Dec  6 02:20:18 np0005548731 nova_compute[232433]: 2025-12-06 07:20:18.379 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  6 02:20:18 np0005548731 nova_compute[232433]: 2025-12-06 07:20:18.381 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:20:18 np0005548731 nova_compute[232433]: 2025-12-06 07:20:18.383 232437 INFO os_vif [None req-ef2f1643-6672-4a04-bc43-eaa096e38b8d e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:3f:d5:79,bridge_name='br-int',has_traffic_filtering=True,id=ee1e1d41-5735-4e5a-8a77-1785e7a5214a,network=Network(b71f2bd2-56d4-4afa-bd37-2f65a2d061fe),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapee1e1d41-57')#033[00m
Dec  6 02:20:18 np0005548731 nova_compute[232433]: 2025-12-06 07:20:18.445 232437 DEBUG nova.objects.instance [None req-fba387fe-5a0b-4442-9569-ddd580fae573 e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] Lazy-loading 'migration_context' on Instance uuid 2092b945-90e0-4a04-aabb-cc48efe223da obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:20:18 np0005548731 nova_compute[232433]: 2025-12-06 07:20:18.462 232437 DEBUG nova.virt.libvirt.driver [None req-fba387fe-5a0b-4442-9569-ddd580fae573 e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] [instance: 2092b945-90e0-4a04-aabb-cc48efe223da] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Dec  6 02:20:18 np0005548731 nova_compute[232433]: 2025-12-06 07:20:18.462 232437 DEBUG nova.virt.libvirt.driver [None req-fba387fe-5a0b-4442-9569-ddd580fae573 e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] [instance: 2092b945-90e0-4a04-aabb-cc48efe223da] Ensure instance console log exists: /var/lib/nova/instances/2092b945-90e0-4a04-aabb-cc48efe223da/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec  6 02:20:18 np0005548731 nova_compute[232433]: 2025-12-06 07:20:18.463 232437 DEBUG oslo_concurrency.lockutils [None req-fba387fe-5a0b-4442-9569-ddd580fae573 e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:20:18 np0005548731 nova_compute[232433]: 2025-12-06 07:20:18.463 232437 DEBUG oslo_concurrency.lockutils [None req-fba387fe-5a0b-4442-9569-ddd580fae573 e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:20:18 np0005548731 nova_compute[232433]: 2025-12-06 07:20:18.463 232437 DEBUG oslo_concurrency.lockutils [None req-fba387fe-5a0b-4442-9569-ddd580fae573 e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:20:18 np0005548731 nova_compute[232433]: 2025-12-06 07:20:18.464 232437 DEBUG nova.virt.libvirt.driver [None req-ef2f1643-6672-4a04-bc43-eaa096e38b8d e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  6 02:20:18 np0005548731 nova_compute[232433]: 2025-12-06 07:20:18.465 232437 DEBUG nova.virt.libvirt.driver [None req-ef2f1643-6672-4a04-bc43-eaa096e38b8d e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  6 02:20:18 np0005548731 nova_compute[232433]: 2025-12-06 07:20:18.465 232437 DEBUG nova.virt.libvirt.driver [None req-ef2f1643-6672-4a04-bc43-eaa096e38b8d e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] No VIF found with MAC fa:16:3e:3f:d5:79, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec  6 02:20:18 np0005548731 nova_compute[232433]: 2025-12-06 07:20:18.465 232437 INFO nova.virt.libvirt.driver [None req-ef2f1643-6672-4a04-bc43-eaa096e38b8d e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] [instance: 9b6b3b79-95ba-4b05-b308-4cb69b61cab5] Using config drive#033[00m
Dec  6 02:20:18 np0005548731 nova_compute[232433]: 2025-12-06 07:20:18.490 232437 DEBUG nova.storage.rbd_utils [None req-ef2f1643-6672-4a04-bc43-eaa096e38b8d e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] rbd image 9b6b3b79-95ba-4b05-b308-4cb69b61cab5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:20:18 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:20:18 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:20:18 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:20:18.739 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:20:19 np0005548731 nova_compute[232433]: 2025-12-06 07:20:19.028 232437 INFO nova.virt.libvirt.driver [None req-ef2f1643-6672-4a04-bc43-eaa096e38b8d e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] [instance: 9b6b3b79-95ba-4b05-b308-4cb69b61cab5] Creating config drive at /var/lib/nova/instances/9b6b3b79-95ba-4b05-b308-4cb69b61cab5/disk.config#033[00m
Dec  6 02:20:19 np0005548731 nova_compute[232433]: 2025-12-06 07:20:19.033 232437 DEBUG oslo_concurrency.processutils [None req-ef2f1643-6672-4a04-bc43-eaa096e38b8d e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/9b6b3b79-95ba-4b05-b308-4cb69b61cab5/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpg0rm5d4s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:20:19 np0005548731 nova_compute[232433]: 2025-12-06 07:20:19.170 232437 DEBUG oslo_concurrency.processutils [None req-ef2f1643-6672-4a04-bc43-eaa096e38b8d e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/9b6b3b79-95ba-4b05-b308-4cb69b61cab5/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpg0rm5d4s" returned: 0 in 0.137s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:20:19 np0005548731 nova_compute[232433]: 2025-12-06 07:20:19.197 232437 DEBUG nova.storage.rbd_utils [None req-ef2f1643-6672-4a04-bc43-eaa096e38b8d e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] rbd image 9b6b3b79-95ba-4b05-b308-4cb69b61cab5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:20:19 np0005548731 nova_compute[232433]: 2025-12-06 07:20:19.201 232437 DEBUG oslo_concurrency.processutils [None req-ef2f1643-6672-4a04-bc43-eaa096e38b8d e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/9b6b3b79-95ba-4b05-b308-4cb69b61cab5/disk.config 9b6b3b79-95ba-4b05-b308-4cb69b61cab5_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:20:19 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:20:19 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 02:20:19 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:20:19.201 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 02:20:19 np0005548731 nova_compute[232433]: 2025-12-06 07:20:19.239 232437 DEBUG nova.network.neutron [None req-fba387fe-5a0b-4442-9569-ddd580fae573 e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] [instance: 2092b945-90e0-4a04-aabb-cc48efe223da] Successfully created port: 05b4782c-e038-4443-a445-b3ff5c13e826 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Dec  6 02:20:19 np0005548731 nova_compute[232433]: 2025-12-06 07:20:19.400 232437 DEBUG nova.network.neutron [req-3acae6d2-585d-436a-a4da-983486c009e2 req-630eb704-13b8-48d3-b8ab-1f89e7a1af11 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 9b6b3b79-95ba-4b05-b308-4cb69b61cab5] Updated VIF entry in instance network info cache for port ee1e1d41-5735-4e5a-8a77-1785e7a5214a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  6 02:20:19 np0005548731 nova_compute[232433]: 2025-12-06 07:20:19.400 232437 DEBUG nova.network.neutron [req-3acae6d2-585d-436a-a4da-983486c009e2 req-630eb704-13b8-48d3-b8ab-1f89e7a1af11 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 9b6b3b79-95ba-4b05-b308-4cb69b61cab5] Updating instance_info_cache with network_info: [{"id": "ee1e1d41-5735-4e5a-8a77-1785e7a5214a", "address": "fa:16:3e:3f:d5:79", "network": {"id": "b71f2bd2-56d4-4afa-bd37-2f65a2d061fe", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-1522505220-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da41861044a74eee9bc96841e54c57cc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapee1e1d41-57", "ovs_interfaceid": "ee1e1d41-5735-4e5a-8a77-1785e7a5214a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 02:20:19 np0005548731 nova_compute[232433]: 2025-12-06 07:20:19.418 232437 DEBUG oslo_concurrency.lockutils [req-3acae6d2-585d-436a-a4da-983486c009e2 req-630eb704-13b8-48d3-b8ab-1f89e7a1af11 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-9b6b3b79-95ba-4b05-b308-4cb69b61cab5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 02:20:19 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e258 e258: 3 total, 3 up, 3 in
Dec  6 02:20:20 np0005548731 nova_compute[232433]: 2025-12-06 07:20:20.228 232437 DEBUG oslo_concurrency.processutils [None req-ef2f1643-6672-4a04-bc43-eaa096e38b8d e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/9b6b3b79-95ba-4b05-b308-4cb69b61cab5/disk.config 9b6b3b79-95ba-4b05-b308-4cb69b61cab5_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.026s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:20:20 np0005548731 nova_compute[232433]: 2025-12-06 07:20:20.228 232437 INFO nova.virt.libvirt.driver [None req-ef2f1643-6672-4a04-bc43-eaa096e38b8d e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] [instance: 9b6b3b79-95ba-4b05-b308-4cb69b61cab5] Deleting local config drive /var/lib/nova/instances/9b6b3b79-95ba-4b05-b308-4cb69b61cab5/disk.config because it was imported into RBD.#033[00m
Dec  6 02:20:20 np0005548731 kernel: tapee1e1d41-57: entered promiscuous mode
Dec  6 02:20:20 np0005548731 NetworkManager[49182]: <info>  [1765005620.2862] manager: (tapee1e1d41-57): new Tun device (/org/freedesktop/NetworkManager/Devices/165)
Dec  6 02:20:20 np0005548731 nova_compute[232433]: 2025-12-06 07:20:20.288 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:20:20 np0005548731 ovn_controller[133927]: 2025-12-06T07:20:20Z|00313|binding|INFO|Claiming lport ee1e1d41-5735-4e5a-8a77-1785e7a5214a for this chassis.
Dec  6 02:20:20 np0005548731 ovn_controller[133927]: 2025-12-06T07:20:20Z|00314|binding|INFO|ee1e1d41-5735-4e5a-8a77-1785e7a5214a: Claiming fa:16:3e:3f:d5:79 10.100.0.4
Dec  6 02:20:20 np0005548731 nova_compute[232433]: 2025-12-06 07:20:20.292 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:20:20 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:20:20.298 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3f:d5:79 10.100.0.4'], port_security=['fa:16:3e:3f:d5:79 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '9b6b3b79-95ba-4b05-b308-4cb69b61cab5', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b71f2bd2-56d4-4afa-bd37-2f65a2d061fe', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'da41861044a74eee9bc96841e54c57cc', 'neutron:revision_number': '2', 'neutron:security_group_ids': '9ef95b64-f92c-4c6f-a9f3-c169d32d1826', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8ca3fb81-8bc9-4918-afd1-69859f260a75, chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], logical_port=ee1e1d41-5735-4e5a-8a77-1785e7a5214a) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 02:20:20 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:20:20.301 143965 INFO neutron.agent.ovn.metadata.agent [-] Port ee1e1d41-5735-4e5a-8a77-1785e7a5214a in datapath b71f2bd2-56d4-4afa-bd37-2f65a2d061fe bound to our chassis#033[00m
Dec  6 02:20:20 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:20:20.304 143965 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network b71f2bd2-56d4-4afa-bd37-2f65a2d061fe#033[00m
Dec  6 02:20:20 np0005548731 systemd-machined[195355]: New machine qemu-34-instance-00000053.
Dec  6 02:20:20 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:20:20.319 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[2e08f266-554f-4d1d-8476-34845b646052]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:20:20 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:20:20.321 143965 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapb71f2bd2-51 in ovnmeta-b71f2bd2-56d4-4afa-bd37-2f65a2d061fe namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Dec  6 02:20:20 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:20:20.323 236668 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapb71f2bd2-50 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Dec  6 02:20:20 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:20:20.323 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[9d2a87b7-1e08-4d57-872d-ec5aecdda03b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:20:20 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:20:20.324 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[8c677dd8-3740-4a8d-a346-e46038b4f007]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:20:20 np0005548731 nova_compute[232433]: 2025-12-06 07:20:20.330 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:20:20 np0005548731 systemd[1]: Started Virtual Machine qemu-34-instance-00000053.
Dec  6 02:20:20 np0005548731 ovn_controller[133927]: 2025-12-06T07:20:20Z|00315|binding|INFO|Setting lport ee1e1d41-5735-4e5a-8a77-1785e7a5214a ovn-installed in OVS
Dec  6 02:20:20 np0005548731 ovn_controller[133927]: 2025-12-06T07:20:20Z|00316|binding|INFO|Setting lport ee1e1d41-5735-4e5a-8a77-1785e7a5214a up in Southbound
Dec  6 02:20:20 np0005548731 nova_compute[232433]: 2025-12-06 07:20:20.337 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:20:20 np0005548731 systemd-udevd[266733]: Network interface NamePolicy= disabled on kernel command line.
Dec  6 02:20:20 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:20:20.340 144079 DEBUG oslo.privsep.daemon [-] privsep: reply[3f99fbe4-1ec9-4d08-8cc2-1cd506699ad3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:20:20 np0005548731 NetworkManager[49182]: <info>  [1765005620.3518] device (tapee1e1d41-57): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec  6 02:20:20 np0005548731 NetworkManager[49182]: <info>  [1765005620.3531] device (tapee1e1d41-57): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec  6 02:20:20 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:20:20.366 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[d963eb96-c68c-4ec3-b183-fbf0f71812fb]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:20:20 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:20:20.393 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[937c5af3-7e49-4c46-8da1-23c5061d73f7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:20:20 np0005548731 NetworkManager[49182]: <info>  [1765005620.3996] manager: (tapb71f2bd2-50): new Veth device (/org/freedesktop/NetworkManager/Devices/166)
Dec  6 02:20:20 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:20:20.400 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[5bf265b9-db68-4ca9-a077-a1a8e7bae290]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:20:20 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:20:20.427 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[587c9942-a22a-4c3f-ba1c-77faed030964]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:20:20 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:20:20.431 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[2f35f146-d488-470d-b02e-aba4b12393aa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:20:20 np0005548731 NetworkManager[49182]: <info>  [1765005620.4536] device (tapb71f2bd2-50): carrier: link connected
Dec  6 02:20:20 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:20:20.460 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[112ca363-559b-464d-9d55-937fce65fdcc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:20:20 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:20:20.477 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[1db18500-cf99-42d6-a302-379ec5caab93]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb71f2bd2-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:81:86:33'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 103], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 588246, 'reachable_time': 30434, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 266764, 'error': None, 'target': 'ovnmeta-b71f2bd2-56d4-4afa-bd37-2f65a2d061fe', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:20:20 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:20:20.494 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[2c60fbcf-0ff6-495e-a493-8a34d0662309]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe81:8633'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 588246, 'tstamp': 588246}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 266765, 'error': None, 'target': 'ovnmeta-b71f2bd2-56d4-4afa-bd37-2f65a2d061fe', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:20:20 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:20:20.512 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[0f3e42f4-f75a-4d7e-8001-cc2f48583b9b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb71f2bd2-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:81:86:33'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 103], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 588246, 'reachable_time': 30434, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 266766, 'error': None, 'target': 'ovnmeta-b71f2bd2-56d4-4afa-bd37-2f65a2d061fe', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:20:20 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:20:20.546 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[cb180694-6e86-426d-8598-783f7561a09c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:20:20 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:20:20.604 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[37fee7eb-3cfc-43ec-ba86-5c056d589af4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:20:20 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:20:20.606 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb71f2bd2-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:20:20 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:20:20.606 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  6 02:20:20 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:20:20.606 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb71f2bd2-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:20:20 np0005548731 nova_compute[232433]: 2025-12-06 07:20:20.608 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:20:20 np0005548731 NetworkManager[49182]: <info>  [1765005620.6089] manager: (tapb71f2bd2-50): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/167)
Dec  6 02:20:20 np0005548731 kernel: tapb71f2bd2-50: entered promiscuous mode
Dec  6 02:20:20 np0005548731 nova_compute[232433]: 2025-12-06 07:20:20.615 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:20:20 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:20:20.616 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapb71f2bd2-50, col_values=(('external_ids', {'iface-id': '4c28345a-5229-44ea-b258-9761247daac8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:20:20 np0005548731 nova_compute[232433]: 2025-12-06 07:20:20.618 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:20:20 np0005548731 ovn_controller[133927]: 2025-12-06T07:20:20Z|00317|binding|INFO|Releasing lport 4c28345a-5229-44ea-b258-9761247daac8 from this chassis (sb_readonly=0)
Dec  6 02:20:20 np0005548731 nova_compute[232433]: 2025-12-06 07:20:20.635 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:20:20 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:20:20.637 143965 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/b71f2bd2-56d4-4afa-bd37-2f65a2d061fe.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/b71f2bd2-56d4-4afa-bd37-2f65a2d061fe.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Dec  6 02:20:20 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:20:20.638 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[87ba87e2-965d-44df-81e2-87ea88ee7690]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:20:20 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:20:20.639 143965 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec  6 02:20:20 np0005548731 ovn_metadata_agent[143960]: global
Dec  6 02:20:20 np0005548731 ovn_metadata_agent[143960]:    log         /dev/log local0 debug
Dec  6 02:20:20 np0005548731 ovn_metadata_agent[143960]:    log-tag     haproxy-metadata-proxy-b71f2bd2-56d4-4afa-bd37-2f65a2d061fe
Dec  6 02:20:20 np0005548731 ovn_metadata_agent[143960]:    user        root
Dec  6 02:20:20 np0005548731 ovn_metadata_agent[143960]:    group       root
Dec  6 02:20:20 np0005548731 ovn_metadata_agent[143960]:    maxconn     1024
Dec  6 02:20:20 np0005548731 ovn_metadata_agent[143960]:    pidfile     /var/lib/neutron/external/pids/b71f2bd2-56d4-4afa-bd37-2f65a2d061fe.pid.haproxy
Dec  6 02:20:20 np0005548731 ovn_metadata_agent[143960]:    daemon
Dec  6 02:20:20 np0005548731 ovn_metadata_agent[143960]: 
Dec  6 02:20:20 np0005548731 ovn_metadata_agent[143960]: defaults
Dec  6 02:20:20 np0005548731 ovn_metadata_agent[143960]:    log global
Dec  6 02:20:20 np0005548731 ovn_metadata_agent[143960]:    mode http
Dec  6 02:20:20 np0005548731 ovn_metadata_agent[143960]:    option httplog
Dec  6 02:20:20 np0005548731 ovn_metadata_agent[143960]:    option dontlognull
Dec  6 02:20:20 np0005548731 ovn_metadata_agent[143960]:    option http-server-close
Dec  6 02:20:20 np0005548731 ovn_metadata_agent[143960]:    option forwardfor
Dec  6 02:20:20 np0005548731 ovn_metadata_agent[143960]:    retries                 3
Dec  6 02:20:20 np0005548731 ovn_metadata_agent[143960]:    timeout http-request    30s
Dec  6 02:20:20 np0005548731 ovn_metadata_agent[143960]:    timeout connect         30s
Dec  6 02:20:20 np0005548731 ovn_metadata_agent[143960]:    timeout client          32s
Dec  6 02:20:20 np0005548731 ovn_metadata_agent[143960]:    timeout server          32s
Dec  6 02:20:20 np0005548731 ovn_metadata_agent[143960]:    timeout http-keep-alive 30s
Dec  6 02:20:20 np0005548731 ovn_metadata_agent[143960]: 
Dec  6 02:20:20 np0005548731 ovn_metadata_agent[143960]: 
Dec  6 02:20:20 np0005548731 ovn_metadata_agent[143960]: listen listener
Dec  6 02:20:20 np0005548731 ovn_metadata_agent[143960]:    bind 169.254.169.254:80
Dec  6 02:20:20 np0005548731 ovn_metadata_agent[143960]:    server metadata /var/lib/neutron/metadata_proxy
Dec  6 02:20:20 np0005548731 ovn_metadata_agent[143960]:    http-request add-header X-OVN-Network-ID b71f2bd2-56d4-4afa-bd37-2f65a2d061fe
Dec  6 02:20:20 np0005548731 ovn_metadata_agent[143960]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Dec  6 02:20:20 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:20:20.640 143965 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-b71f2bd2-56d4-4afa-bd37-2f65a2d061fe', 'env', 'PROCESS_TAG=haproxy-b71f2bd2-56d4-4afa-bd37-2f65a2d061fe', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/b71f2bd2-56d4-4afa-bd37-2f65a2d061fe.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Dec  6 02:20:20 np0005548731 nova_compute[232433]: 2025-12-06 07:20:20.696 232437 DEBUG nova.virt.driver [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Emitting event <LifecycleEvent: 1765005620.6963403, 9b6b3b79-95ba-4b05-b308-4cb69b61cab5 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 02:20:20 np0005548731 nova_compute[232433]: 2025-12-06 07:20:20.697 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 9b6b3b79-95ba-4b05-b308-4cb69b61cab5] VM Started (Lifecycle Event)#033[00m
Dec  6 02:20:20 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:20:20 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:20:20 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:20:20.741 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:20:20 np0005548731 nova_compute[232433]: 2025-12-06 07:20:20.750 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 9b6b3b79-95ba-4b05-b308-4cb69b61cab5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:20:20 np0005548731 nova_compute[232433]: 2025-12-06 07:20:20.756 232437 DEBUG nova.virt.driver [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Emitting event <LifecycleEvent: 1765005620.6973581, 9b6b3b79-95ba-4b05-b308-4cb69b61cab5 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 02:20:20 np0005548731 nova_compute[232433]: 2025-12-06 07:20:20.757 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 9b6b3b79-95ba-4b05-b308-4cb69b61cab5] VM Paused (Lifecycle Event)#033[00m
Dec  6 02:20:20 np0005548731 nova_compute[232433]: 2025-12-06 07:20:20.782 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 9b6b3b79-95ba-4b05-b308-4cb69b61cab5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:20:20 np0005548731 nova_compute[232433]: 2025-12-06 07:20:20.787 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 9b6b3b79-95ba-4b05-b308-4cb69b61cab5] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  6 02:20:20 np0005548731 nova_compute[232433]: 2025-12-06 07:20:20.809 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 9b6b3b79-95ba-4b05-b308-4cb69b61cab5] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  6 02:20:20 np0005548731 nova_compute[232433]: 2025-12-06 07:20:20.980 232437 DEBUG nova.compute.manager [req-0e0fee29-3517-4f25-9211-54d3fec8ad3b req-a5c20971-4337-4f6f-8e10-f5a5a0fde5a3 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 9b6b3b79-95ba-4b05-b308-4cb69b61cab5] Received event network-vif-plugged-ee1e1d41-5735-4e5a-8a77-1785e7a5214a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:20:20 np0005548731 nova_compute[232433]: 2025-12-06 07:20:20.981 232437 DEBUG oslo_concurrency.lockutils [req-0e0fee29-3517-4f25-9211-54d3fec8ad3b req-a5c20971-4337-4f6f-8e10-f5a5a0fde5a3 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "9b6b3b79-95ba-4b05-b308-4cb69b61cab5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:20:20 np0005548731 nova_compute[232433]: 2025-12-06 07:20:20.981 232437 DEBUG oslo_concurrency.lockutils [req-0e0fee29-3517-4f25-9211-54d3fec8ad3b req-a5c20971-4337-4f6f-8e10-f5a5a0fde5a3 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "9b6b3b79-95ba-4b05-b308-4cb69b61cab5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:20:20 np0005548731 nova_compute[232433]: 2025-12-06 07:20:20.981 232437 DEBUG oslo_concurrency.lockutils [req-0e0fee29-3517-4f25-9211-54d3fec8ad3b req-a5c20971-4337-4f6f-8e10-f5a5a0fde5a3 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "9b6b3b79-95ba-4b05-b308-4cb69b61cab5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:20:20 np0005548731 nova_compute[232433]: 2025-12-06 07:20:20.982 232437 DEBUG nova.compute.manager [req-0e0fee29-3517-4f25-9211-54d3fec8ad3b req-a5c20971-4337-4f6f-8e10-f5a5a0fde5a3 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 9b6b3b79-95ba-4b05-b308-4cb69b61cab5] Processing event network-vif-plugged-ee1e1d41-5735-4e5a-8a77-1785e7a5214a _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Dec  6 02:20:20 np0005548731 nova_compute[232433]: 2025-12-06 07:20:20.982 232437 DEBUG nova.compute.manager [None req-ef2f1643-6672-4a04-bc43-eaa096e38b8d e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] [instance: 9b6b3b79-95ba-4b05-b308-4cb69b61cab5] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec  6 02:20:20 np0005548731 nova_compute[232433]: 2025-12-06 07:20:20.985 232437 DEBUG nova.virt.driver [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Emitting event <LifecycleEvent: 1765005620.9851696, 9b6b3b79-95ba-4b05-b308-4cb69b61cab5 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 02:20:20 np0005548731 nova_compute[232433]: 2025-12-06 07:20:20.985 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 9b6b3b79-95ba-4b05-b308-4cb69b61cab5] VM Resumed (Lifecycle Event)#033[00m
Dec  6 02:20:20 np0005548731 nova_compute[232433]: 2025-12-06 07:20:20.986 232437 DEBUG nova.virt.libvirt.driver [None req-ef2f1643-6672-4a04-bc43-eaa096e38b8d e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] [instance: 9b6b3b79-95ba-4b05-b308-4cb69b61cab5] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Dec  6 02:20:20 np0005548731 nova_compute[232433]: 2025-12-06 07:20:20.988 232437 INFO nova.virt.libvirt.driver [-] [instance: 9b6b3b79-95ba-4b05-b308-4cb69b61cab5] Instance spawned successfully.#033[00m
Dec  6 02:20:20 np0005548731 nova_compute[232433]: 2025-12-06 07:20:20.989 232437 DEBUG nova.virt.libvirt.driver [None req-ef2f1643-6672-4a04-bc43-eaa096e38b8d e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] [instance: 9b6b3b79-95ba-4b05-b308-4cb69b61cab5] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Dec  6 02:20:20 np0005548731 podman[266840]: 2025-12-06 07:20:20.996166786 +0000 UTC m=+0.046076592 container create f92cb6a8fc5cd867eb42cc92a8ace0b8c2fc3bd720946fddbae8164be4fcdf68 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b71f2bd2-56d4-4afa-bd37-2f65a2d061fe, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125)
Dec  6 02:20:21 np0005548731 nova_compute[232433]: 2025-12-06 07:20:21.009 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 9b6b3b79-95ba-4b05-b308-4cb69b61cab5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:20:21 np0005548731 nova_compute[232433]: 2025-12-06 07:20:21.015 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 9b6b3b79-95ba-4b05-b308-4cb69b61cab5] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  6 02:20:21 np0005548731 nova_compute[232433]: 2025-12-06 07:20:21.019 232437 DEBUG nova.virt.libvirt.driver [None req-ef2f1643-6672-4a04-bc43-eaa096e38b8d e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] [instance: 9b6b3b79-95ba-4b05-b308-4cb69b61cab5] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 02:20:21 np0005548731 nova_compute[232433]: 2025-12-06 07:20:21.020 232437 DEBUG nova.virt.libvirt.driver [None req-ef2f1643-6672-4a04-bc43-eaa096e38b8d e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] [instance: 9b6b3b79-95ba-4b05-b308-4cb69b61cab5] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 02:20:21 np0005548731 nova_compute[232433]: 2025-12-06 07:20:21.020 232437 DEBUG nova.virt.libvirt.driver [None req-ef2f1643-6672-4a04-bc43-eaa096e38b8d e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] [instance: 9b6b3b79-95ba-4b05-b308-4cb69b61cab5] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 02:20:21 np0005548731 nova_compute[232433]: 2025-12-06 07:20:21.021 232437 DEBUG nova.virt.libvirt.driver [None req-ef2f1643-6672-4a04-bc43-eaa096e38b8d e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] [instance: 9b6b3b79-95ba-4b05-b308-4cb69b61cab5] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 02:20:21 np0005548731 nova_compute[232433]: 2025-12-06 07:20:21.021 232437 DEBUG nova.virt.libvirt.driver [None req-ef2f1643-6672-4a04-bc43-eaa096e38b8d e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] [instance: 9b6b3b79-95ba-4b05-b308-4cb69b61cab5] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 02:20:21 np0005548731 nova_compute[232433]: 2025-12-06 07:20:21.021 232437 DEBUG nova.virt.libvirt.driver [None req-ef2f1643-6672-4a04-bc43-eaa096e38b8d e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] [instance: 9b6b3b79-95ba-4b05-b308-4cb69b61cab5] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 02:20:21 np0005548731 systemd[1]: Started libpod-conmon-f92cb6a8fc5cd867eb42cc92a8ace0b8c2fc3bd720946fddbae8164be4fcdf68.scope.
Dec  6 02:20:21 np0005548731 nova_compute[232433]: 2025-12-06 07:20:21.057 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 9b6b3b79-95ba-4b05-b308-4cb69b61cab5] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  6 02:20:21 np0005548731 systemd[1]: Started libcrun container.
Dec  6 02:20:21 np0005548731 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1244017ade4aa5a970e4c883735ddcca0455ef0724308b5760a114f1ec5aadda/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec  6 02:20:21 np0005548731 podman[266840]: 2025-12-06 07:20:20.969609639 +0000 UTC m=+0.019519465 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec  6 02:20:21 np0005548731 podman[266840]: 2025-12-06 07:20:21.079037225 +0000 UTC m=+0.128947051 container init f92cb6a8fc5cd867eb42cc92a8ace0b8c2fc3bd720946fddbae8164be4fcdf68 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b71f2bd2-56d4-4afa-bd37-2f65a2d061fe, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Dec  6 02:20:21 np0005548731 podman[266840]: 2025-12-06 07:20:21.084560139 +0000 UTC m=+0.134469945 container start f92cb6a8fc5cd867eb42cc92a8ace0b8c2fc3bd720946fddbae8164be4fcdf68 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b71f2bd2-56d4-4afa-bd37-2f65a2d061fe, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125)
Dec  6 02:20:21 np0005548731 nova_compute[232433]: 2025-12-06 07:20:21.093 232437 INFO nova.compute.manager [None req-ef2f1643-6672-4a04-bc43-eaa096e38b8d e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] [instance: 9b6b3b79-95ba-4b05-b308-4cb69b61cab5] Took 8.68 seconds to spawn the instance on the hypervisor.#033[00m
Dec  6 02:20:21 np0005548731 nova_compute[232433]: 2025-12-06 07:20:21.093 232437 DEBUG nova.compute.manager [None req-ef2f1643-6672-4a04-bc43-eaa096e38b8d e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] [instance: 9b6b3b79-95ba-4b05-b308-4cb69b61cab5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:20:21 np0005548731 neutron-haproxy-ovnmeta-b71f2bd2-56d4-4afa-bd37-2f65a2d061fe[266855]: [NOTICE]   (266859) : New worker (266861) forked
Dec  6 02:20:21 np0005548731 neutron-haproxy-ovnmeta-b71f2bd2-56d4-4afa-bd37-2f65a2d061fe[266855]: [NOTICE]   (266859) : Loading success.
Dec  6 02:20:21 np0005548731 nova_compute[232433]: 2025-12-06 07:20:21.168 232437 INFO nova.compute.manager [None req-ef2f1643-6672-4a04-bc43-eaa096e38b8d e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] [instance: 9b6b3b79-95ba-4b05-b308-4cb69b61cab5] Took 9.89 seconds to build instance.#033[00m
Dec  6 02:20:21 np0005548731 nova_compute[232433]: 2025-12-06 07:20:21.185 232437 DEBUG oslo_concurrency.lockutils [None req-ef2f1643-6672-4a04-bc43-eaa096e38b8d e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] Lock "9b6b3b79-95ba-4b05-b308-4cb69b61cab5" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.034s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:20:21 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:20:21 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:20:21 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:20:21.204 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:20:21 np0005548731 nova_compute[232433]: 2025-12-06 07:20:21.250 232437 DEBUG nova.network.neutron [None req-fba387fe-5a0b-4442-9569-ddd580fae573 e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] [instance: 2092b945-90e0-4a04-aabb-cc48efe223da] Successfully updated port: 05b4782c-e038-4443-a445-b3ff5c13e826 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Dec  6 02:20:21 np0005548731 nova_compute[232433]: 2025-12-06 07:20:21.269 232437 DEBUG oslo_concurrency.lockutils [None req-fba387fe-5a0b-4442-9569-ddd580fae573 e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] Acquiring lock "refresh_cache-2092b945-90e0-4a04-aabb-cc48efe223da" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 02:20:21 np0005548731 nova_compute[232433]: 2025-12-06 07:20:21.269 232437 DEBUG oslo_concurrency.lockutils [None req-fba387fe-5a0b-4442-9569-ddd580fae573 e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] Acquired lock "refresh_cache-2092b945-90e0-4a04-aabb-cc48efe223da" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 02:20:21 np0005548731 nova_compute[232433]: 2025-12-06 07:20:21.270 232437 DEBUG nova.network.neutron [None req-fba387fe-5a0b-4442-9569-ddd580fae573 e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] [instance: 2092b945-90e0-4a04-aabb-cc48efe223da] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec  6 02:20:21 np0005548731 nova_compute[232433]: 2025-12-06 07:20:21.348 232437 DEBUG nova.compute.manager [req-c9d875fb-a95a-4548-8920-2e72c0535917 req-841d560f-5725-464a-a8b6-e9ee4e21216c 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 2092b945-90e0-4a04-aabb-cc48efe223da] Received event network-changed-05b4782c-e038-4443-a445-b3ff5c13e826 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:20:21 np0005548731 nova_compute[232433]: 2025-12-06 07:20:21.348 232437 DEBUG nova.compute.manager [req-c9d875fb-a95a-4548-8920-2e72c0535917 req-841d560f-5725-464a-a8b6-e9ee4e21216c 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 2092b945-90e0-4a04-aabb-cc48efe223da] Refreshing instance network info cache due to event network-changed-05b4782c-e038-4443-a445-b3ff5c13e826. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  6 02:20:21 np0005548731 nova_compute[232433]: 2025-12-06 07:20:21.348 232437 DEBUG oslo_concurrency.lockutils [req-c9d875fb-a95a-4548-8920-2e72c0535917 req-841d560f-5725-464a-a8b6-e9ee4e21216c 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-2092b945-90e0-4a04-aabb-cc48efe223da" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 02:20:21 np0005548731 nova_compute[232433]: 2025-12-06 07:20:21.749 232437 DEBUG nova.network.neutron [None req-fba387fe-5a0b-4442-9569-ddd580fae573 e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] [instance: 2092b945-90e0-4a04-aabb-cc48efe223da] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec  6 02:20:22 np0005548731 nova_compute[232433]: 2025-12-06 07:20:22.003 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:20:22 np0005548731 nova_compute[232433]: 2025-12-06 07:20:22.722 232437 DEBUG nova.network.neutron [None req-fba387fe-5a0b-4442-9569-ddd580fae573 e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] [instance: 2092b945-90e0-4a04-aabb-cc48efe223da] Updating instance_info_cache with network_info: [{"id": "05b4782c-e038-4443-a445-b3ff5c13e826", "address": "fa:16:3e:44:86:a3", "network": {"id": "b71f2bd2-56d4-4afa-bd37-2f65a2d061fe", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-1522505220-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da41861044a74eee9bc96841e54c57cc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05b4782c-e0", "ovs_interfaceid": "05b4782c-e038-4443-a445-b3ff5c13e826", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 02:20:22 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:20:22 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:20:22 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:20:22.743 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:20:22 np0005548731 nova_compute[232433]: 2025-12-06 07:20:22.754 232437 DEBUG oslo_concurrency.lockutils [None req-fba387fe-5a0b-4442-9569-ddd580fae573 e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] Releasing lock "refresh_cache-2092b945-90e0-4a04-aabb-cc48efe223da" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 02:20:22 np0005548731 nova_compute[232433]: 2025-12-06 07:20:22.754 232437 DEBUG nova.compute.manager [None req-fba387fe-5a0b-4442-9569-ddd580fae573 e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] [instance: 2092b945-90e0-4a04-aabb-cc48efe223da] Instance network_info: |[{"id": "05b4782c-e038-4443-a445-b3ff5c13e826", "address": "fa:16:3e:44:86:a3", "network": {"id": "b71f2bd2-56d4-4afa-bd37-2f65a2d061fe", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-1522505220-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da41861044a74eee9bc96841e54c57cc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05b4782c-e0", "ovs_interfaceid": "05b4782c-e038-4443-a445-b3ff5c13e826", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Dec  6 02:20:22 np0005548731 nova_compute[232433]: 2025-12-06 07:20:22.755 232437 DEBUG oslo_concurrency.lockutils [req-c9d875fb-a95a-4548-8920-2e72c0535917 req-841d560f-5725-464a-a8b6-e9ee4e21216c 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-2092b945-90e0-4a04-aabb-cc48efe223da" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 02:20:22 np0005548731 nova_compute[232433]: 2025-12-06 07:20:22.755 232437 DEBUG nova.network.neutron [req-c9d875fb-a95a-4548-8920-2e72c0535917 req-841d560f-5725-464a-a8b6-e9ee4e21216c 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 2092b945-90e0-4a04-aabb-cc48efe223da] Refreshing network info cache for port 05b4782c-e038-4443-a445-b3ff5c13e826 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  6 02:20:22 np0005548731 nova_compute[232433]: 2025-12-06 07:20:22.760 232437 DEBUG nova.virt.libvirt.driver [None req-fba387fe-5a0b-4442-9569-ddd580fae573 e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] [instance: 2092b945-90e0-4a04-aabb-cc48efe223da] Start _get_guest_xml network_info=[{"id": "05b4782c-e038-4443-a445-b3ff5c13e826", "address": "fa:16:3e:44:86:a3", "network": {"id": "b71f2bd2-56d4-4afa-bd37-2f65a2d061fe", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-1522505220-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da41861044a74eee9bc96841e54c57cc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05b4782c-e0", "ovs_interfaceid": "05b4782c-e038-4443-a445-b3ff5c13e826", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'device_type': 'disk', 'encryption_secret_uuid': None, 'disk_bus': 'virtio', 'boot_index': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'encryption_options': None, 'size': 0, 'encrypted': False, 'image_id': '6efab05d-c7cf-4770-a5c3-c806a2739063'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec  6 02:20:22 np0005548731 nova_compute[232433]: 2025-12-06 07:20:22.768 232437 WARNING nova.virt.libvirt.driver [None req-fba387fe-5a0b-4442-9569-ddd580fae573 e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  6 02:20:22 np0005548731 nova_compute[232433]: 2025-12-06 07:20:22.773 232437 DEBUG nova.virt.libvirt.host [None req-fba387fe-5a0b-4442-9569-ddd580fae573 e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec  6 02:20:22 np0005548731 nova_compute[232433]: 2025-12-06 07:20:22.775 232437 DEBUG nova.virt.libvirt.host [None req-fba387fe-5a0b-4442-9569-ddd580fae573 e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec  6 02:20:22 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e258 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:20:22 np0005548731 nova_compute[232433]: 2025-12-06 07:20:22.785 232437 DEBUG nova.virt.libvirt.host [None req-fba387fe-5a0b-4442-9569-ddd580fae573 e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec  6 02:20:22 np0005548731 nova_compute[232433]: 2025-12-06 07:20:22.786 232437 DEBUG nova.virt.libvirt.host [None req-fba387fe-5a0b-4442-9569-ddd580fae573 e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec  6 02:20:22 np0005548731 nova_compute[232433]: 2025-12-06 07:20:22.787 232437 DEBUG nova.virt.libvirt.driver [None req-fba387fe-5a0b-4442-9569-ddd580fae573 e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec  6 02:20:22 np0005548731 nova_compute[232433]: 2025-12-06 07:20:22.787 232437 DEBUG nova.virt.hardware [None req-fba387fe-5a0b-4442-9569-ddd580fae573 e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-06T06:56:24Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fb97f55a-36c0-42f2-8156-c1b04eb23dd0',id=2,is_public=True,memory_mb=192,name='m1.micro',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec  6 02:20:22 np0005548731 nova_compute[232433]: 2025-12-06 07:20:22.787 232437 DEBUG nova.virt.hardware [None req-fba387fe-5a0b-4442-9569-ddd580fae573 e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec  6 02:20:22 np0005548731 nova_compute[232433]: 2025-12-06 07:20:22.788 232437 DEBUG nova.virt.hardware [None req-fba387fe-5a0b-4442-9569-ddd580fae573 e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec  6 02:20:22 np0005548731 nova_compute[232433]: 2025-12-06 07:20:22.788 232437 DEBUG nova.virt.hardware [None req-fba387fe-5a0b-4442-9569-ddd580fae573 e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec  6 02:20:22 np0005548731 nova_compute[232433]: 2025-12-06 07:20:22.788 232437 DEBUG nova.virt.hardware [None req-fba387fe-5a0b-4442-9569-ddd580fae573 e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec  6 02:20:22 np0005548731 nova_compute[232433]: 2025-12-06 07:20:22.788 232437 DEBUG nova.virt.hardware [None req-fba387fe-5a0b-4442-9569-ddd580fae573 e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec  6 02:20:22 np0005548731 nova_compute[232433]: 2025-12-06 07:20:22.788 232437 DEBUG nova.virt.hardware [None req-fba387fe-5a0b-4442-9569-ddd580fae573 e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec  6 02:20:22 np0005548731 nova_compute[232433]: 2025-12-06 07:20:22.789 232437 DEBUG nova.virt.hardware [None req-fba387fe-5a0b-4442-9569-ddd580fae573 e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec  6 02:20:22 np0005548731 nova_compute[232433]: 2025-12-06 07:20:22.789 232437 DEBUG nova.virt.hardware [None req-fba387fe-5a0b-4442-9569-ddd580fae573 e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec  6 02:20:22 np0005548731 nova_compute[232433]: 2025-12-06 07:20:22.789 232437 DEBUG nova.virt.hardware [None req-fba387fe-5a0b-4442-9569-ddd580fae573 e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec  6 02:20:22 np0005548731 nova_compute[232433]: 2025-12-06 07:20:22.789 232437 DEBUG nova.virt.hardware [None req-fba387fe-5a0b-4442-9569-ddd580fae573 e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec  6 02:20:22 np0005548731 nova_compute[232433]: 2025-12-06 07:20:22.792 232437 DEBUG oslo_concurrency.processutils [None req-fba387fe-5a0b-4442-9569-ddd580fae573 e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:20:23 np0005548731 nova_compute[232433]: 2025-12-06 07:20:23.107 232437 DEBUG nova.compute.manager [req-bd840872-4bab-4649-b8fa-674b789e3e02 req-81e3d66f-801d-4e85-88ec-c775c32dfbb2 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 9b6b3b79-95ba-4b05-b308-4cb69b61cab5] Received event network-vif-plugged-ee1e1d41-5735-4e5a-8a77-1785e7a5214a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:20:23 np0005548731 nova_compute[232433]: 2025-12-06 07:20:23.108 232437 DEBUG oslo_concurrency.lockutils [req-bd840872-4bab-4649-b8fa-674b789e3e02 req-81e3d66f-801d-4e85-88ec-c775c32dfbb2 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "9b6b3b79-95ba-4b05-b308-4cb69b61cab5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:20:23 np0005548731 nova_compute[232433]: 2025-12-06 07:20:23.108 232437 DEBUG oslo_concurrency.lockutils [req-bd840872-4bab-4649-b8fa-674b789e3e02 req-81e3d66f-801d-4e85-88ec-c775c32dfbb2 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "9b6b3b79-95ba-4b05-b308-4cb69b61cab5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:20:23 np0005548731 nova_compute[232433]: 2025-12-06 07:20:23.109 232437 DEBUG oslo_concurrency.lockutils [req-bd840872-4bab-4649-b8fa-674b789e3e02 req-81e3d66f-801d-4e85-88ec-c775c32dfbb2 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "9b6b3b79-95ba-4b05-b308-4cb69b61cab5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:20:23 np0005548731 nova_compute[232433]: 2025-12-06 07:20:23.109 232437 DEBUG nova.compute.manager [req-bd840872-4bab-4649-b8fa-674b789e3e02 req-81e3d66f-801d-4e85-88ec-c775c32dfbb2 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 9b6b3b79-95ba-4b05-b308-4cb69b61cab5] No waiting events found dispatching network-vif-plugged-ee1e1d41-5735-4e5a-8a77-1785e7a5214a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 02:20:23 np0005548731 nova_compute[232433]: 2025-12-06 07:20:23.109 232437 WARNING nova.compute.manager [req-bd840872-4bab-4649-b8fa-674b789e3e02 req-81e3d66f-801d-4e85-88ec-c775c32dfbb2 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 9b6b3b79-95ba-4b05-b308-4cb69b61cab5] Received unexpected event network-vif-plugged-ee1e1d41-5735-4e5a-8a77-1785e7a5214a for instance with vm_state active and task_state None.#033[00m
Dec  6 02:20:23 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:20:23 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:20:23 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:20:23.207 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:20:23 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  6 02:20:23 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3992363602' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  6 02:20:23 np0005548731 nova_compute[232433]: 2025-12-06 07:20:23.251 232437 DEBUG oslo_concurrency.processutils [None req-fba387fe-5a0b-4442-9569-ddd580fae573 e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:20:23 np0005548731 nova_compute[232433]: 2025-12-06 07:20:23.286 232437 DEBUG nova.storage.rbd_utils [None req-fba387fe-5a0b-4442-9569-ddd580fae573 e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] rbd image 2092b945-90e0-4a04-aabb-cc48efe223da_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:20:23 np0005548731 nova_compute[232433]: 2025-12-06 07:20:23.290 232437 DEBUG oslo_concurrency.processutils [None req-fba387fe-5a0b-4442-9569-ddd580fae573 e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:20:23 np0005548731 nova_compute[232433]: 2025-12-06 07:20:23.376 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:20:23 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  6 02:20:23 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/511668033' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  6 02:20:23 np0005548731 nova_compute[232433]: 2025-12-06 07:20:23.731 232437 DEBUG oslo_concurrency.processutils [None req-fba387fe-5a0b-4442-9569-ddd580fae573 e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.441s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:20:23 np0005548731 nova_compute[232433]: 2025-12-06 07:20:23.733 232437 DEBUG nova.virt.libvirt.vif [None req-fba387fe-5a0b-4442-9569-ddd580fae573 e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T07:20:15Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ListServerFiltersTestJSON-instance-10615280',display_name='tempest-ListServerFiltersTestJSON-instance-10615280',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-listserverfilterstestjson-instance-10615280',id=85,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='da41861044a74eee9bc96841e54c57cc',ramdisk_id='',reservation_id='r-p00aeetk',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ListServerFiltersTestJSON-1681699386',owner_user_name='tempest-ListServerFiltersTestJSON-1681699386-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:20:17Z,user_data=None,user_id='e3d8ea3dcb5b4ee3b9bd24c62fdd61b2',uuid=2092b945-90e0-4a04-aabb-cc48efe223da,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "05b4782c-e038-4443-a445-b3ff5c13e826", "address": "fa:16:3e:44:86:a3", "network": {"id": "b71f2bd2-56d4-4afa-bd37-2f65a2d061fe", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-1522505220-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da41861044a74eee9bc96841e54c57cc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05b4782c-e0", "ovs_interfaceid": "05b4782c-e038-4443-a445-b3ff5c13e826", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec  6 02:20:23 np0005548731 nova_compute[232433]: 2025-12-06 07:20:23.733 232437 DEBUG nova.network.os_vif_util [None req-fba387fe-5a0b-4442-9569-ddd580fae573 e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] Converting VIF {"id": "05b4782c-e038-4443-a445-b3ff5c13e826", "address": "fa:16:3e:44:86:a3", "network": {"id": "b71f2bd2-56d4-4afa-bd37-2f65a2d061fe", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-1522505220-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da41861044a74eee9bc96841e54c57cc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05b4782c-e0", "ovs_interfaceid": "05b4782c-e038-4443-a445-b3ff5c13e826", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 02:20:23 np0005548731 nova_compute[232433]: 2025-12-06 07:20:23.734 232437 DEBUG nova.network.os_vif_util [None req-fba387fe-5a0b-4442-9569-ddd580fae573 e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:44:86:a3,bridge_name='br-int',has_traffic_filtering=True,id=05b4782c-e038-4443-a445-b3ff5c13e826,network=Network(b71f2bd2-56d4-4afa-bd37-2f65a2d061fe),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap05b4782c-e0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 02:20:23 np0005548731 nova_compute[232433]: 2025-12-06 07:20:23.736 232437 DEBUG nova.objects.instance [None req-fba387fe-5a0b-4442-9569-ddd580fae573 e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] Lazy-loading 'pci_devices' on Instance uuid 2092b945-90e0-4a04-aabb-cc48efe223da obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:20:23 np0005548731 nova_compute[232433]: 2025-12-06 07:20:23.759 232437 DEBUG nova.virt.libvirt.driver [None req-fba387fe-5a0b-4442-9569-ddd580fae573 e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] [instance: 2092b945-90e0-4a04-aabb-cc48efe223da] End _get_guest_xml xml=<domain type="kvm">
Dec  6 02:20:23 np0005548731 nova_compute[232433]:  <uuid>2092b945-90e0-4a04-aabb-cc48efe223da</uuid>
Dec  6 02:20:23 np0005548731 nova_compute[232433]:  <name>instance-00000055</name>
Dec  6 02:20:23 np0005548731 nova_compute[232433]:  <memory>196608</memory>
Dec  6 02:20:23 np0005548731 nova_compute[232433]:  <vcpu>1</vcpu>
Dec  6 02:20:23 np0005548731 nova_compute[232433]:  <metadata>
Dec  6 02:20:23 np0005548731 nova_compute[232433]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec  6 02:20:23 np0005548731 nova_compute[232433]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec  6 02:20:23 np0005548731 nova_compute[232433]:      <nova:name>tempest-ListServerFiltersTestJSON-instance-10615280</nova:name>
Dec  6 02:20:23 np0005548731 nova_compute[232433]:      <nova:creationTime>2025-12-06 07:20:22</nova:creationTime>
Dec  6 02:20:23 np0005548731 nova_compute[232433]:      <nova:flavor name="m1.micro">
Dec  6 02:20:23 np0005548731 nova_compute[232433]:        <nova:memory>192</nova:memory>
Dec  6 02:20:23 np0005548731 nova_compute[232433]:        <nova:disk>1</nova:disk>
Dec  6 02:20:23 np0005548731 nova_compute[232433]:        <nova:swap>0</nova:swap>
Dec  6 02:20:23 np0005548731 nova_compute[232433]:        <nova:ephemeral>0</nova:ephemeral>
Dec  6 02:20:23 np0005548731 nova_compute[232433]:        <nova:vcpus>1</nova:vcpus>
Dec  6 02:20:23 np0005548731 nova_compute[232433]:      </nova:flavor>
Dec  6 02:20:23 np0005548731 nova_compute[232433]:      <nova:owner>
Dec  6 02:20:23 np0005548731 nova_compute[232433]:        <nova:user uuid="e3d8ea3dcb5b4ee3b9bd24c62fdd61b2">tempest-ListServerFiltersTestJSON-1681699386-project-member</nova:user>
Dec  6 02:20:23 np0005548731 nova_compute[232433]:        <nova:project uuid="da41861044a74eee9bc96841e54c57cc">tempest-ListServerFiltersTestJSON-1681699386</nova:project>
Dec  6 02:20:23 np0005548731 nova_compute[232433]:      </nova:owner>
Dec  6 02:20:23 np0005548731 nova_compute[232433]:      <nova:root type="image" uuid="6efab05d-c7cf-4770-a5c3-c806a2739063"/>
Dec  6 02:20:23 np0005548731 nova_compute[232433]:      <nova:ports>
Dec  6 02:20:23 np0005548731 nova_compute[232433]:        <nova:port uuid="05b4782c-e038-4443-a445-b3ff5c13e826">
Dec  6 02:20:23 np0005548731 nova_compute[232433]:          <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Dec  6 02:20:23 np0005548731 nova_compute[232433]:        </nova:port>
Dec  6 02:20:23 np0005548731 nova_compute[232433]:      </nova:ports>
Dec  6 02:20:23 np0005548731 nova_compute[232433]:    </nova:instance>
Dec  6 02:20:23 np0005548731 nova_compute[232433]:  </metadata>
Dec  6 02:20:23 np0005548731 nova_compute[232433]:  <sysinfo type="smbios">
Dec  6 02:20:23 np0005548731 nova_compute[232433]:    <system>
Dec  6 02:20:23 np0005548731 nova_compute[232433]:      <entry name="manufacturer">RDO</entry>
Dec  6 02:20:23 np0005548731 nova_compute[232433]:      <entry name="product">OpenStack Compute</entry>
Dec  6 02:20:23 np0005548731 nova_compute[232433]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec  6 02:20:23 np0005548731 nova_compute[232433]:      <entry name="serial">2092b945-90e0-4a04-aabb-cc48efe223da</entry>
Dec  6 02:20:23 np0005548731 nova_compute[232433]:      <entry name="uuid">2092b945-90e0-4a04-aabb-cc48efe223da</entry>
Dec  6 02:20:23 np0005548731 nova_compute[232433]:      <entry name="family">Virtual Machine</entry>
Dec  6 02:20:23 np0005548731 nova_compute[232433]:    </system>
Dec  6 02:20:23 np0005548731 nova_compute[232433]:  </sysinfo>
Dec  6 02:20:23 np0005548731 nova_compute[232433]:  <os>
Dec  6 02:20:23 np0005548731 nova_compute[232433]:    <type arch="x86_64" machine="q35">hvm</type>
Dec  6 02:20:23 np0005548731 nova_compute[232433]:    <boot dev="hd"/>
Dec  6 02:20:23 np0005548731 nova_compute[232433]:    <smbios mode="sysinfo"/>
Dec  6 02:20:23 np0005548731 nova_compute[232433]:  </os>
Dec  6 02:20:23 np0005548731 nova_compute[232433]:  <features>
Dec  6 02:20:23 np0005548731 nova_compute[232433]:    <acpi/>
Dec  6 02:20:23 np0005548731 nova_compute[232433]:    <apic/>
Dec  6 02:20:23 np0005548731 nova_compute[232433]:    <vmcoreinfo/>
Dec  6 02:20:23 np0005548731 nova_compute[232433]:  </features>
Dec  6 02:20:23 np0005548731 nova_compute[232433]:  <clock offset="utc">
Dec  6 02:20:23 np0005548731 nova_compute[232433]:    <timer name="pit" tickpolicy="delay"/>
Dec  6 02:20:23 np0005548731 nova_compute[232433]:    <timer name="rtc" tickpolicy="catchup"/>
Dec  6 02:20:23 np0005548731 nova_compute[232433]:    <timer name="hpet" present="no"/>
Dec  6 02:20:23 np0005548731 nova_compute[232433]:  </clock>
Dec  6 02:20:23 np0005548731 nova_compute[232433]:  <cpu mode="custom" match="exact">
Dec  6 02:20:23 np0005548731 nova_compute[232433]:    <model>Nehalem</model>
Dec  6 02:20:23 np0005548731 nova_compute[232433]:    <topology sockets="1" cores="1" threads="1"/>
Dec  6 02:20:23 np0005548731 nova_compute[232433]:  </cpu>
Dec  6 02:20:23 np0005548731 nova_compute[232433]:  <devices>
Dec  6 02:20:23 np0005548731 nova_compute[232433]:    <disk type="network" device="disk">
Dec  6 02:20:23 np0005548731 nova_compute[232433]:      <driver type="raw" cache="none"/>
Dec  6 02:20:23 np0005548731 nova_compute[232433]:      <source protocol="rbd" name="vms/2092b945-90e0-4a04-aabb-cc48efe223da_disk">
Dec  6 02:20:23 np0005548731 nova_compute[232433]:        <host name="192.168.122.100" port="6789"/>
Dec  6 02:20:23 np0005548731 nova_compute[232433]:        <host name="192.168.122.102" port="6789"/>
Dec  6 02:20:23 np0005548731 nova_compute[232433]:        <host name="192.168.122.101" port="6789"/>
Dec  6 02:20:23 np0005548731 nova_compute[232433]:      </source>
Dec  6 02:20:23 np0005548731 nova_compute[232433]:      <auth username="openstack">
Dec  6 02:20:23 np0005548731 nova_compute[232433]:        <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec  6 02:20:23 np0005548731 nova_compute[232433]:      </auth>
Dec  6 02:20:23 np0005548731 nova_compute[232433]:      <target dev="vda" bus="virtio"/>
Dec  6 02:20:23 np0005548731 nova_compute[232433]:    </disk>
Dec  6 02:20:23 np0005548731 nova_compute[232433]:    <disk type="network" device="cdrom">
Dec  6 02:20:23 np0005548731 nova_compute[232433]:      <driver type="raw" cache="none"/>
Dec  6 02:20:23 np0005548731 nova_compute[232433]:      <source protocol="rbd" name="vms/2092b945-90e0-4a04-aabb-cc48efe223da_disk.config">
Dec  6 02:20:23 np0005548731 nova_compute[232433]:        <host name="192.168.122.100" port="6789"/>
Dec  6 02:20:23 np0005548731 nova_compute[232433]:        <host name="192.168.122.102" port="6789"/>
Dec  6 02:20:23 np0005548731 nova_compute[232433]:        <host name="192.168.122.101" port="6789"/>
Dec  6 02:20:23 np0005548731 nova_compute[232433]:      </source>
Dec  6 02:20:23 np0005548731 nova_compute[232433]:      <auth username="openstack">
Dec  6 02:20:23 np0005548731 nova_compute[232433]:        <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec  6 02:20:23 np0005548731 nova_compute[232433]:      </auth>
Dec  6 02:20:23 np0005548731 nova_compute[232433]:      <target dev="sda" bus="sata"/>
Dec  6 02:20:23 np0005548731 nova_compute[232433]:    </disk>
Dec  6 02:20:23 np0005548731 nova_compute[232433]:    <interface type="ethernet">
Dec  6 02:20:23 np0005548731 nova_compute[232433]:      <mac address="fa:16:3e:44:86:a3"/>
Dec  6 02:20:23 np0005548731 nova_compute[232433]:      <model type="virtio"/>
Dec  6 02:20:23 np0005548731 nova_compute[232433]:      <driver name="vhost" rx_queue_size="512"/>
Dec  6 02:20:23 np0005548731 nova_compute[232433]:      <mtu size="1442"/>
Dec  6 02:20:23 np0005548731 nova_compute[232433]:      <target dev="tap05b4782c-e0"/>
Dec  6 02:20:23 np0005548731 nova_compute[232433]:    </interface>
Dec  6 02:20:23 np0005548731 nova_compute[232433]:    <serial type="pty">
Dec  6 02:20:23 np0005548731 nova_compute[232433]:      <log file="/var/lib/nova/instances/2092b945-90e0-4a04-aabb-cc48efe223da/console.log" append="off"/>
Dec  6 02:20:23 np0005548731 nova_compute[232433]:    </serial>
Dec  6 02:20:23 np0005548731 nova_compute[232433]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec  6 02:20:23 np0005548731 nova_compute[232433]:    <video>
Dec  6 02:20:23 np0005548731 nova_compute[232433]:      <model type="virtio"/>
Dec  6 02:20:23 np0005548731 nova_compute[232433]:    </video>
Dec  6 02:20:23 np0005548731 nova_compute[232433]:    <input type="tablet" bus="usb"/>
Dec  6 02:20:23 np0005548731 nova_compute[232433]:    <rng model="virtio">
Dec  6 02:20:23 np0005548731 nova_compute[232433]:      <backend model="random">/dev/urandom</backend>
Dec  6 02:20:23 np0005548731 nova_compute[232433]:    </rng>
Dec  6 02:20:23 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root"/>
Dec  6 02:20:23 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:20:23 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:20:23 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:20:23 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:20:23 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:20:23 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:20:23 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:20:23 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:20:23 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:20:23 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:20:23 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:20:23 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:20:23 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:20:23 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:20:23 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:20:23 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:20:23 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:20:23 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:20:23 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:20:23 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:20:23 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:20:23 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:20:23 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:20:23 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:20:23 np0005548731 nova_compute[232433]:    <controller type="usb" index="0"/>
Dec  6 02:20:23 np0005548731 nova_compute[232433]:    <memballoon model="virtio">
Dec  6 02:20:23 np0005548731 nova_compute[232433]:      <stats period="10"/>
Dec  6 02:20:23 np0005548731 nova_compute[232433]:    </memballoon>
Dec  6 02:20:23 np0005548731 nova_compute[232433]:  </devices>
Dec  6 02:20:23 np0005548731 nova_compute[232433]: </domain>
Dec  6 02:20:23 np0005548731 nova_compute[232433]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec  6 02:20:23 np0005548731 nova_compute[232433]: 2025-12-06 07:20:23.761 232437 DEBUG nova.compute.manager [None req-fba387fe-5a0b-4442-9569-ddd580fae573 e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] [instance: 2092b945-90e0-4a04-aabb-cc48efe223da] Preparing to wait for external event network-vif-plugged-05b4782c-e038-4443-a445-b3ff5c13e826 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec  6 02:20:23 np0005548731 nova_compute[232433]: 2025-12-06 07:20:23.761 232437 DEBUG oslo_concurrency.lockutils [None req-fba387fe-5a0b-4442-9569-ddd580fae573 e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] Acquiring lock "2092b945-90e0-4a04-aabb-cc48efe223da-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:20:23 np0005548731 nova_compute[232433]: 2025-12-06 07:20:23.761 232437 DEBUG oslo_concurrency.lockutils [None req-fba387fe-5a0b-4442-9569-ddd580fae573 e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] Lock "2092b945-90e0-4a04-aabb-cc48efe223da-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:20:23 np0005548731 nova_compute[232433]: 2025-12-06 07:20:23.761 232437 DEBUG oslo_concurrency.lockutils [None req-fba387fe-5a0b-4442-9569-ddd580fae573 e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] Lock "2092b945-90e0-4a04-aabb-cc48efe223da-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:20:23 np0005548731 nova_compute[232433]: 2025-12-06 07:20:23.762 232437 DEBUG nova.virt.libvirt.vif [None req-fba387fe-5a0b-4442-9569-ddd580fae573 e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T07:20:15Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ListServerFiltersTestJSON-instance-10615280',display_name='tempest-ListServerFiltersTestJSON-instance-10615280',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-listserverfilterstestjson-instance-10615280',id=85,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='da41861044a74eee9bc96841e54c57cc',ramdisk_id='',reservation_id='r-p00aeetk',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ListServerFiltersTestJSON-1681699386',owner_user_name='t
empest-ListServerFiltersTestJSON-1681699386-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:20:17Z,user_data=None,user_id='e3d8ea3dcb5b4ee3b9bd24c62fdd61b2',uuid=2092b945-90e0-4a04-aabb-cc48efe223da,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "05b4782c-e038-4443-a445-b3ff5c13e826", "address": "fa:16:3e:44:86:a3", "network": {"id": "b71f2bd2-56d4-4afa-bd37-2f65a2d061fe", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-1522505220-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da41861044a74eee9bc96841e54c57cc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05b4782c-e0", "ovs_interfaceid": "05b4782c-e038-4443-a445-b3ff5c13e826", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec  6 02:20:23 np0005548731 nova_compute[232433]: 2025-12-06 07:20:23.762 232437 DEBUG nova.network.os_vif_util [None req-fba387fe-5a0b-4442-9569-ddd580fae573 e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] Converting VIF {"id": "05b4782c-e038-4443-a445-b3ff5c13e826", "address": "fa:16:3e:44:86:a3", "network": {"id": "b71f2bd2-56d4-4afa-bd37-2f65a2d061fe", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-1522505220-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da41861044a74eee9bc96841e54c57cc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05b4782c-e0", "ovs_interfaceid": "05b4782c-e038-4443-a445-b3ff5c13e826", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 02:20:23 np0005548731 nova_compute[232433]: 2025-12-06 07:20:23.763 232437 DEBUG nova.network.os_vif_util [None req-fba387fe-5a0b-4442-9569-ddd580fae573 e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:44:86:a3,bridge_name='br-int',has_traffic_filtering=True,id=05b4782c-e038-4443-a445-b3ff5c13e826,network=Network(b71f2bd2-56d4-4afa-bd37-2f65a2d061fe),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap05b4782c-e0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 02:20:23 np0005548731 nova_compute[232433]: 2025-12-06 07:20:23.763 232437 DEBUG os_vif [None req-fba387fe-5a0b-4442-9569-ddd580fae573 e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:44:86:a3,bridge_name='br-int',has_traffic_filtering=True,id=05b4782c-e038-4443-a445-b3ff5c13e826,network=Network(b71f2bd2-56d4-4afa-bd37-2f65a2d061fe),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap05b4782c-e0') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec  6 02:20:23 np0005548731 nova_compute[232433]: 2025-12-06 07:20:23.764 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:20:23 np0005548731 nova_compute[232433]: 2025-12-06 07:20:23.764 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:20:23 np0005548731 nova_compute[232433]: 2025-12-06 07:20:23.765 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  6 02:20:23 np0005548731 nova_compute[232433]: 2025-12-06 07:20:23.767 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:20:23 np0005548731 nova_compute[232433]: 2025-12-06 07:20:23.767 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap05b4782c-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:20:23 np0005548731 nova_compute[232433]: 2025-12-06 07:20:23.768 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap05b4782c-e0, col_values=(('external_ids', {'iface-id': '05b4782c-e038-4443-a445-b3ff5c13e826', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:44:86:a3', 'vm-uuid': '2092b945-90e0-4a04-aabb-cc48efe223da'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:20:23 np0005548731 nova_compute[232433]: 2025-12-06 07:20:23.769 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:20:23 np0005548731 NetworkManager[49182]: <info>  [1765005623.7701] manager: (tap05b4782c-e0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/168)
Dec  6 02:20:23 np0005548731 nova_compute[232433]: 2025-12-06 07:20:23.773 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  6 02:20:23 np0005548731 nova_compute[232433]: 2025-12-06 07:20:23.777 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:20:23 np0005548731 nova_compute[232433]: 2025-12-06 07:20:23.778 232437 INFO os_vif [None req-fba387fe-5a0b-4442-9569-ddd580fae573 e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:44:86:a3,bridge_name='br-int',has_traffic_filtering=True,id=05b4782c-e038-4443-a445-b3ff5c13e826,network=Network(b71f2bd2-56d4-4afa-bd37-2f65a2d061fe),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap05b4782c-e0')#033[00m
Dec  6 02:20:23 np0005548731 nova_compute[232433]: 2025-12-06 07:20:23.835 232437 DEBUG nova.virt.libvirt.driver [None req-fba387fe-5a0b-4442-9569-ddd580fae573 e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  6 02:20:23 np0005548731 nova_compute[232433]: 2025-12-06 07:20:23.835 232437 DEBUG nova.virt.libvirt.driver [None req-fba387fe-5a0b-4442-9569-ddd580fae573 e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  6 02:20:23 np0005548731 nova_compute[232433]: 2025-12-06 07:20:23.836 232437 DEBUG nova.virt.libvirt.driver [None req-fba387fe-5a0b-4442-9569-ddd580fae573 e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] No VIF found with MAC fa:16:3e:44:86:a3, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec  6 02:20:23 np0005548731 nova_compute[232433]: 2025-12-06 07:20:23.836 232437 INFO nova.virt.libvirt.driver [None req-fba387fe-5a0b-4442-9569-ddd580fae573 e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] [instance: 2092b945-90e0-4a04-aabb-cc48efe223da] Using config drive#033[00m
Dec  6 02:20:23 np0005548731 nova_compute[232433]: 2025-12-06 07:20:23.865 232437 DEBUG nova.storage.rbd_utils [None req-fba387fe-5a0b-4442-9569-ddd580fae573 e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] rbd image 2092b945-90e0-4a04-aabb-cc48efe223da_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:20:24 np0005548731 nova_compute[232433]: 2025-12-06 07:20:24.436 232437 INFO nova.virt.libvirt.driver [None req-fba387fe-5a0b-4442-9569-ddd580fae573 e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] [instance: 2092b945-90e0-4a04-aabb-cc48efe223da] Creating config drive at /var/lib/nova/instances/2092b945-90e0-4a04-aabb-cc48efe223da/disk.config#033[00m
Dec  6 02:20:24 np0005548731 nova_compute[232433]: 2025-12-06 07:20:24.446 232437 DEBUG oslo_concurrency.processutils [None req-fba387fe-5a0b-4442-9569-ddd580fae573 e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/2092b945-90e0-4a04-aabb-cc48efe223da/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp89_80_qm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:20:24 np0005548731 nova_compute[232433]: 2025-12-06 07:20:24.597 232437 DEBUG oslo_concurrency.processutils [None req-fba387fe-5a0b-4442-9569-ddd580fae573 e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/2092b945-90e0-4a04-aabb-cc48efe223da/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp89_80_qm" returned: 0 in 0.152s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:20:24 np0005548731 nova_compute[232433]: 2025-12-06 07:20:24.636 232437 DEBUG nova.storage.rbd_utils [None req-fba387fe-5a0b-4442-9569-ddd580fae573 e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] rbd image 2092b945-90e0-4a04-aabb-cc48efe223da_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:20:24 np0005548731 nova_compute[232433]: 2025-12-06 07:20:24.642 232437 DEBUG oslo_concurrency.processutils [None req-fba387fe-5a0b-4442-9569-ddd580fae573 e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/2092b945-90e0-4a04-aabb-cc48efe223da/disk.config 2092b945-90e0-4a04-aabb-cc48efe223da_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:20:24 np0005548731 nova_compute[232433]: 2025-12-06 07:20:24.676 232437 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765005609.6067755, 57f8a652-f11e-4342-bae0-13c592a18ad2 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 02:20:24 np0005548731 nova_compute[232433]: 2025-12-06 07:20:24.678 232437 INFO nova.compute.manager [-] [instance: 57f8a652-f11e-4342-bae0-13c592a18ad2] VM Stopped (Lifecycle Event)#033[00m
Dec  6 02:20:24 np0005548731 nova_compute[232433]: 2025-12-06 07:20:24.707 232437 DEBUG nova.compute.manager [None req-ec688b1b-fc7b-4898-83a5-69cdff9001e9 - - - - - -] [instance: 57f8a652-f11e-4342-bae0-13c592a18ad2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:20:24 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:20:24 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:20:24 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:20:24.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:20:24 np0005548731 nova_compute[232433]: 2025-12-06 07:20:24.818 232437 DEBUG oslo_concurrency.processutils [None req-fba387fe-5a0b-4442-9569-ddd580fae573 e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/2092b945-90e0-4a04-aabb-cc48efe223da/disk.config 2092b945-90e0-4a04-aabb-cc48efe223da_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.176s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:20:24 np0005548731 nova_compute[232433]: 2025-12-06 07:20:24.819 232437 INFO nova.virt.libvirt.driver [None req-fba387fe-5a0b-4442-9569-ddd580fae573 e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] [instance: 2092b945-90e0-4a04-aabb-cc48efe223da] Deleting local config drive /var/lib/nova/instances/2092b945-90e0-4a04-aabb-cc48efe223da/disk.config because it was imported into RBD.#033[00m
Dec  6 02:20:24 np0005548731 kernel: tap05b4782c-e0: entered promiscuous mode
Dec  6 02:20:24 np0005548731 ovn_controller[133927]: 2025-12-06T07:20:24Z|00318|binding|INFO|Claiming lport 05b4782c-e038-4443-a445-b3ff5c13e826 for this chassis.
Dec  6 02:20:24 np0005548731 ovn_controller[133927]: 2025-12-06T07:20:24Z|00319|binding|INFO|05b4782c-e038-4443-a445-b3ff5c13e826: Claiming fa:16:3e:44:86:a3 10.100.0.7
Dec  6 02:20:24 np0005548731 NetworkManager[49182]: <info>  [1765005624.8710] manager: (tap05b4782c-e0): new Tun device (/org/freedesktop/NetworkManager/Devices/169)
Dec  6 02:20:24 np0005548731 nova_compute[232433]: 2025-12-06 07:20:24.874 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:20:24 np0005548731 ovn_controller[133927]: 2025-12-06T07:20:24Z|00320|binding|INFO|Setting lport 05b4782c-e038-4443-a445-b3ff5c13e826 ovn-installed in OVS
Dec  6 02:20:24 np0005548731 nova_compute[232433]: 2025-12-06 07:20:24.887 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:20:24 np0005548731 nova_compute[232433]: 2025-12-06 07:20:24.889 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:20:24 np0005548731 systemd-udevd[267059]: Network interface NamePolicy= disabled on kernel command line.
Dec  6 02:20:24 np0005548731 NetworkManager[49182]: <info>  [1765005624.9092] device (tap05b4782c-e0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec  6 02:20:24 np0005548731 NetworkManager[49182]: <info>  [1765005624.9103] device (tap05b4782c-e0): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec  6 02:20:25 np0005548731 ovn_controller[133927]: 2025-12-06T07:20:25Z|00321|binding|INFO|Setting lport 05b4782c-e038-4443-a445-b3ff5c13e826 up in Southbound
Dec  6 02:20:25 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:20:25.038 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:44:86:a3 10.100.0.7'], port_security=['fa:16:3e:44:86:a3 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '2092b945-90e0-4a04-aabb-cc48efe223da', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b71f2bd2-56d4-4afa-bd37-2f65a2d061fe', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'da41861044a74eee9bc96841e54c57cc', 'neutron:revision_number': '2', 'neutron:security_group_ids': '9ef95b64-f92c-4c6f-a9f3-c169d32d1826', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8ca3fb81-8bc9-4918-afd1-69859f260a75, chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], logical_port=05b4782c-e038-4443-a445-b3ff5c13e826) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 02:20:25 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:20:25.040 143965 INFO neutron.agent.ovn.metadata.agent [-] Port 05b4782c-e038-4443-a445-b3ff5c13e826 in datapath b71f2bd2-56d4-4afa-bd37-2f65a2d061fe bound to our chassis#033[00m
Dec  6 02:20:25 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:20:25.041 143965 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network b71f2bd2-56d4-4afa-bd37-2f65a2d061fe#033[00m
Dec  6 02:20:25 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:20:25.061 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[354151fa-e618-4e5d-be18-a956d6d21824]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:20:25 np0005548731 systemd-machined[195355]: New machine qemu-35-instance-00000055.
Dec  6 02:20:25 np0005548731 systemd[1]: Started Virtual Machine qemu-35-instance-00000055.
Dec  6 02:20:25 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:20:25.099 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[2918a634-45a0-4d67-9d81-7f3f696918ba]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:20:25 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:20:25.103 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[889a72bb-cebb-4eda-8582-8f076180a0b0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:20:25 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:20:25.139 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[b61c278b-8c6b-4cec-a8fd-24d1ade94f49]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:20:25 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:20:25.160 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[2c058a8f-d909-49b0-89ed-6d01626f6a70]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb71f2bd2-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:81:86:33'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 6, 'tx_packets': 5, 'rx_bytes': 532, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 6, 'tx_packets': 5, 'rx_bytes': 532, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 103], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 588246, 'reachable_time': 30434, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 267076, 'error': None, 'target': 'ovnmeta-b71f2bd2-56d4-4afa-bd37-2f65a2d061fe', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:20:25 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:20:25.181 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[c247314c-705a-4f12-b931-93f94d3b9582]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapb71f2bd2-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 588258, 'tstamp': 588258}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 267078, 'error': None, 'target': 'ovnmeta-b71f2bd2-56d4-4afa-bd37-2f65a2d061fe', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapb71f2bd2-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 588260, 'tstamp': 588260}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 267078, 'error': None, 'target': 'ovnmeta-b71f2bd2-56d4-4afa-bd37-2f65a2d061fe', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:20:25 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:20:25.182 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb71f2bd2-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:20:25 np0005548731 nova_compute[232433]: 2025-12-06 07:20:25.184 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:20:25 np0005548731 nova_compute[232433]: 2025-12-06 07:20:25.185 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:20:25 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:20:25.185 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb71f2bd2-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:20:25 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:20:25.185 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  6 02:20:25 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:20:25.186 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapb71f2bd2-50, col_values=(('external_ids', {'iface-id': '4c28345a-5229-44ea-b258-9761247daac8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:20:25 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:20:25.186 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  6 02:20:25 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:20:25 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:20:25 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:20:25.210 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:20:25 np0005548731 nova_compute[232433]: 2025-12-06 07:20:25.218 232437 DEBUG nova.network.neutron [req-c9d875fb-a95a-4548-8920-2e72c0535917 req-841d560f-5725-464a-a8b6-e9ee4e21216c 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 2092b945-90e0-4a04-aabb-cc48efe223da] Updated VIF entry in instance network info cache for port 05b4782c-e038-4443-a445-b3ff5c13e826. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  6 02:20:25 np0005548731 nova_compute[232433]: 2025-12-06 07:20:25.219 232437 DEBUG nova.network.neutron [req-c9d875fb-a95a-4548-8920-2e72c0535917 req-841d560f-5725-464a-a8b6-e9ee4e21216c 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 2092b945-90e0-4a04-aabb-cc48efe223da] Updating instance_info_cache with network_info: [{"id": "05b4782c-e038-4443-a445-b3ff5c13e826", "address": "fa:16:3e:44:86:a3", "network": {"id": "b71f2bd2-56d4-4afa-bd37-2f65a2d061fe", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-1522505220-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da41861044a74eee9bc96841e54c57cc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05b4782c-e0", "ovs_interfaceid": "05b4782c-e038-4443-a445-b3ff5c13e826", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 02:20:25 np0005548731 nova_compute[232433]: 2025-12-06 07:20:25.246 232437 DEBUG oslo_concurrency.lockutils [req-c9d875fb-a95a-4548-8920-2e72c0535917 req-841d560f-5725-464a-a8b6-e9ee4e21216c 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-2092b945-90e0-4a04-aabb-cc48efe223da" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 02:20:25 np0005548731 nova_compute[232433]: 2025-12-06 07:20:25.283 232437 DEBUG nova.compute.manager [req-124cefa6-22e6-4189-ae35-ba222184898a req-a8487d1a-b13e-4dcf-b8eb-138b1edbfddb 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 2092b945-90e0-4a04-aabb-cc48efe223da] Received event network-vif-plugged-05b4782c-e038-4443-a445-b3ff5c13e826 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:20:25 np0005548731 nova_compute[232433]: 2025-12-06 07:20:25.284 232437 DEBUG oslo_concurrency.lockutils [req-124cefa6-22e6-4189-ae35-ba222184898a req-a8487d1a-b13e-4dcf-b8eb-138b1edbfddb 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "2092b945-90e0-4a04-aabb-cc48efe223da-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:20:25 np0005548731 nova_compute[232433]: 2025-12-06 07:20:25.284 232437 DEBUG oslo_concurrency.lockutils [req-124cefa6-22e6-4189-ae35-ba222184898a req-a8487d1a-b13e-4dcf-b8eb-138b1edbfddb 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "2092b945-90e0-4a04-aabb-cc48efe223da-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:20:25 np0005548731 nova_compute[232433]: 2025-12-06 07:20:25.284 232437 DEBUG oslo_concurrency.lockutils [req-124cefa6-22e6-4189-ae35-ba222184898a req-a8487d1a-b13e-4dcf-b8eb-138b1edbfddb 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "2092b945-90e0-4a04-aabb-cc48efe223da-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:20:25 np0005548731 nova_compute[232433]: 2025-12-06 07:20:25.285 232437 DEBUG nova.compute.manager [req-124cefa6-22e6-4189-ae35-ba222184898a req-a8487d1a-b13e-4dcf-b8eb-138b1edbfddb 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 2092b945-90e0-4a04-aabb-cc48efe223da] Processing event network-vif-plugged-05b4782c-e038-4443-a445-b3ff5c13e826 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Dec  6 02:20:26 np0005548731 nova_compute[232433]: 2025-12-06 07:20:26.166 232437 DEBUG nova.compute.manager [None req-fba387fe-5a0b-4442-9569-ddd580fae573 e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] [instance: 2092b945-90e0-4a04-aabb-cc48efe223da] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec  6 02:20:26 np0005548731 nova_compute[232433]: 2025-12-06 07:20:26.167 232437 DEBUG nova.virt.driver [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Emitting event <LifecycleEvent: 1765005626.167069, 2092b945-90e0-4a04-aabb-cc48efe223da => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 02:20:26 np0005548731 nova_compute[232433]: 2025-12-06 07:20:26.167 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 2092b945-90e0-4a04-aabb-cc48efe223da] VM Started (Lifecycle Event)#033[00m
Dec  6 02:20:26 np0005548731 nova_compute[232433]: 2025-12-06 07:20:26.172 232437 DEBUG nova.virt.libvirt.driver [None req-fba387fe-5a0b-4442-9569-ddd580fae573 e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] [instance: 2092b945-90e0-4a04-aabb-cc48efe223da] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Dec  6 02:20:26 np0005548731 nova_compute[232433]: 2025-12-06 07:20:26.176 232437 INFO nova.virt.libvirt.driver [-] [instance: 2092b945-90e0-4a04-aabb-cc48efe223da] Instance spawned successfully.#033[00m
Dec  6 02:20:26 np0005548731 nova_compute[232433]: 2025-12-06 07:20:26.176 232437 DEBUG nova.virt.libvirt.driver [None req-fba387fe-5a0b-4442-9569-ddd580fae573 e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] [instance: 2092b945-90e0-4a04-aabb-cc48efe223da] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Dec  6 02:20:26 np0005548731 nova_compute[232433]: 2025-12-06 07:20:26.198 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 2092b945-90e0-4a04-aabb-cc48efe223da] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:20:26 np0005548731 nova_compute[232433]: 2025-12-06 07:20:26.202 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 2092b945-90e0-4a04-aabb-cc48efe223da] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  6 02:20:26 np0005548731 nova_compute[232433]: 2025-12-06 07:20:26.207 232437 DEBUG nova.virt.libvirt.driver [None req-fba387fe-5a0b-4442-9569-ddd580fae573 e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] [instance: 2092b945-90e0-4a04-aabb-cc48efe223da] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 02:20:26 np0005548731 nova_compute[232433]: 2025-12-06 07:20:26.208 232437 DEBUG nova.virt.libvirt.driver [None req-fba387fe-5a0b-4442-9569-ddd580fae573 e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] [instance: 2092b945-90e0-4a04-aabb-cc48efe223da] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 02:20:26 np0005548731 nova_compute[232433]: 2025-12-06 07:20:26.208 232437 DEBUG nova.virt.libvirt.driver [None req-fba387fe-5a0b-4442-9569-ddd580fae573 e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] [instance: 2092b945-90e0-4a04-aabb-cc48efe223da] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 02:20:26 np0005548731 nova_compute[232433]: 2025-12-06 07:20:26.208 232437 DEBUG nova.virt.libvirt.driver [None req-fba387fe-5a0b-4442-9569-ddd580fae573 e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] [instance: 2092b945-90e0-4a04-aabb-cc48efe223da] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 02:20:26 np0005548731 nova_compute[232433]: 2025-12-06 07:20:26.209 232437 DEBUG nova.virt.libvirt.driver [None req-fba387fe-5a0b-4442-9569-ddd580fae573 e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] [instance: 2092b945-90e0-4a04-aabb-cc48efe223da] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 02:20:26 np0005548731 nova_compute[232433]: 2025-12-06 07:20:26.209 232437 DEBUG nova.virt.libvirt.driver [None req-fba387fe-5a0b-4442-9569-ddd580fae573 e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] [instance: 2092b945-90e0-4a04-aabb-cc48efe223da] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 02:20:26 np0005548731 nova_compute[232433]: 2025-12-06 07:20:26.245 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 2092b945-90e0-4a04-aabb-cc48efe223da] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  6 02:20:26 np0005548731 nova_compute[232433]: 2025-12-06 07:20:26.246 232437 DEBUG nova.virt.driver [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Emitting event <LifecycleEvent: 1765005626.167185, 2092b945-90e0-4a04-aabb-cc48efe223da => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 02:20:26 np0005548731 nova_compute[232433]: 2025-12-06 07:20:26.246 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 2092b945-90e0-4a04-aabb-cc48efe223da] VM Paused (Lifecycle Event)#033[00m
Dec  6 02:20:26 np0005548731 nova_compute[232433]: 2025-12-06 07:20:26.272 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 2092b945-90e0-4a04-aabb-cc48efe223da] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:20:26 np0005548731 nova_compute[232433]: 2025-12-06 07:20:26.275 232437 DEBUG nova.virt.driver [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Emitting event <LifecycleEvent: 1765005626.171455, 2092b945-90e0-4a04-aabb-cc48efe223da => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 02:20:26 np0005548731 nova_compute[232433]: 2025-12-06 07:20:26.275 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 2092b945-90e0-4a04-aabb-cc48efe223da] VM Resumed (Lifecycle Event)#033[00m
Dec  6 02:20:26 np0005548731 nova_compute[232433]: 2025-12-06 07:20:26.309 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 2092b945-90e0-4a04-aabb-cc48efe223da] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:20:26 np0005548731 nova_compute[232433]: 2025-12-06 07:20:26.313 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 2092b945-90e0-4a04-aabb-cc48efe223da] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  6 02:20:26 np0005548731 nova_compute[232433]: 2025-12-06 07:20:26.331 232437 INFO nova.compute.manager [None req-fba387fe-5a0b-4442-9569-ddd580fae573 e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] [instance: 2092b945-90e0-4a04-aabb-cc48efe223da] Took 8.77 seconds to spawn the instance on the hypervisor.#033[00m
Dec  6 02:20:26 np0005548731 nova_compute[232433]: 2025-12-06 07:20:26.331 232437 DEBUG nova.compute.manager [None req-fba387fe-5a0b-4442-9569-ddd580fae573 e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] [instance: 2092b945-90e0-4a04-aabb-cc48efe223da] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:20:26 np0005548731 nova_compute[232433]: 2025-12-06 07:20:26.340 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 2092b945-90e0-4a04-aabb-cc48efe223da] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  6 02:20:26 np0005548731 nova_compute[232433]: 2025-12-06 07:20:26.446 232437 INFO nova.compute.manager [None req-fba387fe-5a0b-4442-9569-ddd580fae573 e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] [instance: 2092b945-90e0-4a04-aabb-cc48efe223da] Took 9.82 seconds to build instance.#033[00m
Dec  6 02:20:26 np0005548731 nova_compute[232433]: 2025-12-06 07:20:26.465 232437 DEBUG oslo_concurrency.lockutils [None req-fba387fe-5a0b-4442-9569-ddd580fae573 e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] Lock "2092b945-90e0-4a04-aabb-cc48efe223da" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.898s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:20:26 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:20:26 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 02:20:26 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:20:26.747 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 02:20:27 np0005548731 nova_compute[232433]: 2025-12-06 07:20:27.005 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:20:27 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:20:27 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:20:27 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:20:27.213 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:20:27 np0005548731 nova_compute[232433]: 2025-12-06 07:20:27.407 232437 DEBUG nova.compute.manager [req-aa583ede-e33f-49fd-91d0-8450f4fd11c4 req-de6ade5f-893e-41dd-b1c9-2f74e16a2310 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 2092b945-90e0-4a04-aabb-cc48efe223da] Received event network-vif-plugged-05b4782c-e038-4443-a445-b3ff5c13e826 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:20:27 np0005548731 nova_compute[232433]: 2025-12-06 07:20:27.408 232437 DEBUG oslo_concurrency.lockutils [req-aa583ede-e33f-49fd-91d0-8450f4fd11c4 req-de6ade5f-893e-41dd-b1c9-2f74e16a2310 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "2092b945-90e0-4a04-aabb-cc48efe223da-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:20:27 np0005548731 nova_compute[232433]: 2025-12-06 07:20:27.408 232437 DEBUG oslo_concurrency.lockutils [req-aa583ede-e33f-49fd-91d0-8450f4fd11c4 req-de6ade5f-893e-41dd-b1c9-2f74e16a2310 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "2092b945-90e0-4a04-aabb-cc48efe223da-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:20:27 np0005548731 nova_compute[232433]: 2025-12-06 07:20:27.408 232437 DEBUG oslo_concurrency.lockutils [req-aa583ede-e33f-49fd-91d0-8450f4fd11c4 req-de6ade5f-893e-41dd-b1c9-2f74e16a2310 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "2092b945-90e0-4a04-aabb-cc48efe223da-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:20:27 np0005548731 nova_compute[232433]: 2025-12-06 07:20:27.409 232437 DEBUG nova.compute.manager [req-aa583ede-e33f-49fd-91d0-8450f4fd11c4 req-de6ade5f-893e-41dd-b1c9-2f74e16a2310 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 2092b945-90e0-4a04-aabb-cc48efe223da] No waiting events found dispatching network-vif-plugged-05b4782c-e038-4443-a445-b3ff5c13e826 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 02:20:27 np0005548731 nova_compute[232433]: 2025-12-06 07:20:27.409 232437 WARNING nova.compute.manager [req-aa583ede-e33f-49fd-91d0-8450f4fd11c4 req-de6ade5f-893e-41dd-b1c9-2f74e16a2310 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 2092b945-90e0-4a04-aabb-cc48efe223da] Received unexpected event network-vif-plugged-05b4782c-e038-4443-a445-b3ff5c13e826 for instance with vm_state active and task_state None.#033[00m
Dec  6 02:20:27 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e258 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:20:28 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:20:28 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:20:28 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:20:28.749 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:20:28 np0005548731 nova_compute[232433]: 2025-12-06 07:20:28.770 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:20:29 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:20:29 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:20:29 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:20:29.217 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:20:30 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:20:30 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:20:30 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:20:30.752 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:20:30 np0005548731 podman[267130]: 2025-12-06 07:20:30.919110008 +0000 UTC m=+0.074948837 container health_status ae4fd08cdec31d6ae2a6b91ffff7deb6ca5a8375cb52ee4b685cc6f25796824e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec  6 02:20:30 np0005548731 podman[267128]: 2025-12-06 07:20:30.93812486 +0000 UTC m=+0.094789589 container health_status 26c2ce9bdb83c6db5ccad7f5b2a7cb4e9354ab0235ad2f942ea42c8feda2142b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec  6 02:20:30 np0005548731 podman[267129]: 2025-12-06 07:20:30.969868603 +0000 UTC m=+0.121340436 container health_status a118c3becf56cfbcae208065771ebf10a65162bb7b96389971293e534823fb78 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=ovn_controller)
Dec  6 02:20:31 np0005548731 ceph-osd[80147]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec  6 02:20:31 np0005548731 ceph-osd[80147]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 3002.4 total, 600.0 interval#012Cumulative writes: 31K writes, 129K keys, 31K commit groups, 1.0 writes per commit group, ingest: 0.13 GB, 0.04 MB/s#012Cumulative WAL: 31K writes, 10K syncs, 2.92 writes per sync, written: 0.13 GB, 0.04 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 9046 writes, 36K keys, 9046 commit groups, 1.0 writes per commit group, ingest: 38.17 MB, 0.06 MB/s#012Interval WAL: 9046 writes, 3412 syncs, 2.65 writes per sync, written: 0.04 GB, 0.06 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec  6 02:20:31 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:20:31 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:20:31 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:20:31.220 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:20:32 np0005548731 nova_compute[232433]: 2025-12-06 07:20:32.007 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:20:32 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:20:32 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:20:32 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:20:32.755 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:20:32 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e258 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:20:33 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:20:33 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:20:33 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:20:33.223 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:20:33 np0005548731 nova_compute[232433]: 2025-12-06 07:20:33.772 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:20:34 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:20:34 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:20:34 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:20:34.757 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:20:35 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:20:35 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 02:20:35 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:20:35.225 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 02:20:35 np0005548731 ovn_controller[133927]: 2025-12-06T07:20:35Z|00322|binding|INFO|Releasing lport 4c28345a-5229-44ea-b258-9761247daac8 from this chassis (sb_readonly=0)
Dec  6 02:20:35 np0005548731 nova_compute[232433]: 2025-12-06 07:20:35.305 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:20:35 np0005548731 ovn_controller[133927]: 2025-12-06T07:20:35Z|00323|binding|INFO|Releasing lport 4c28345a-5229-44ea-b258-9761247daac8 from this chassis (sb_readonly=0)
Dec  6 02:20:35 np0005548731 nova_compute[232433]: 2025-12-06 07:20:35.511 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:20:36 np0005548731 nova_compute[232433]: 2025-12-06 07:20:36.648 232437 DEBUG oslo_concurrency.lockutils [None req-1cf3454e-4244-4eea-ab27-3f0653861e16 e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] Acquiring lock "9b6b3b79-95ba-4b05-b308-4cb69b61cab5" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:20:36 np0005548731 nova_compute[232433]: 2025-12-06 07:20:36.649 232437 DEBUG oslo_concurrency.lockutils [None req-1cf3454e-4244-4eea-ab27-3f0653861e16 e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] Lock "9b6b3b79-95ba-4b05-b308-4cb69b61cab5" acquired by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:20:36 np0005548731 nova_compute[232433]: 2025-12-06 07:20:36.649 232437 DEBUG nova.compute.manager [None req-1cf3454e-4244-4eea-ab27-3f0653861e16 e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] [instance: 9b6b3b79-95ba-4b05-b308-4cb69b61cab5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:20:36 np0005548731 nova_compute[232433]: 2025-12-06 07:20:36.654 232437 DEBUG nova.compute.manager [None req-1cf3454e-4244-4eea-ab27-3f0653861e16 e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] [instance: 9b6b3b79-95ba-4b05-b308-4cb69b61cab5] Stopping instance; current vm_state: active, current task_state: powering-off, current DB power_state: 1, current VM power_state: 1 do_stop_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3338#033[00m
Dec  6 02:20:36 np0005548731 nova_compute[232433]: 2025-12-06 07:20:36.657 232437 DEBUG nova.objects.instance [None req-1cf3454e-4244-4eea-ab27-3f0653861e16 e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] Lazy-loading 'flavor' on Instance uuid 9b6b3b79-95ba-4b05-b308-4cb69b61cab5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:20:36 np0005548731 nova_compute[232433]: 2025-12-06 07:20:36.687 232437 DEBUG nova.virt.libvirt.driver [None req-1cf3454e-4244-4eea-ab27-3f0653861e16 e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] [instance: 9b6b3b79-95ba-4b05-b308-4cb69b61cab5] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Dec  6 02:20:36 np0005548731 ovn_controller[133927]: 2025-12-06T07:20:36Z|00047|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:3f:d5:79 10.100.0.4
Dec  6 02:20:36 np0005548731 ovn_controller[133927]: 2025-12-06T07:20:36Z|00048|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:3f:d5:79 10.100.0.4
Dec  6 02:20:36 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:20:36 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:20:36 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:20:36.760 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:20:37 np0005548731 nova_compute[232433]: 2025-12-06 07:20:37.042 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:20:37 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:20:37 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 02:20:37 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:20:37.228 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 02:20:37 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e258 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:20:38 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:20:38 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:20:38 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:20:38.762 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:20:38 np0005548731 nova_compute[232433]: 2025-12-06 07:20:38.773 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:20:39 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:20:39 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:20:39 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:20:39.231 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:20:39 np0005548731 nova_compute[232433]: 2025-12-06 07:20:39.707 232437 INFO nova.virt.libvirt.driver [None req-1cf3454e-4244-4eea-ab27-3f0653861e16 e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] [instance: 9b6b3b79-95ba-4b05-b308-4cb69b61cab5] Instance shutdown successfully after 3 seconds.#033[00m
Dec  6 02:20:40 np0005548731 kernel: tapee1e1d41-57 (unregistering): left promiscuous mode
Dec  6 02:20:40 np0005548731 NetworkManager[49182]: <info>  [1765005640.5562] device (tapee1e1d41-57): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec  6 02:20:40 np0005548731 ovn_controller[133927]: 2025-12-06T07:20:40Z|00324|binding|INFO|Releasing lport ee1e1d41-5735-4e5a-8a77-1785e7a5214a from this chassis (sb_readonly=0)
Dec  6 02:20:40 np0005548731 ovn_controller[133927]: 2025-12-06T07:20:40Z|00325|binding|INFO|Setting lport ee1e1d41-5735-4e5a-8a77-1785e7a5214a down in Southbound
Dec  6 02:20:40 np0005548731 ovn_controller[133927]: 2025-12-06T07:20:40Z|00326|binding|INFO|Removing iface tapee1e1d41-57 ovn-installed in OVS
Dec  6 02:20:40 np0005548731 nova_compute[232433]: 2025-12-06 07:20:40.566 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:20:40 np0005548731 nova_compute[232433]: 2025-12-06 07:20:40.568 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:20:40 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:20:40.574 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3f:d5:79 10.100.0.4'], port_security=['fa:16:3e:3f:d5:79 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '9b6b3b79-95ba-4b05-b308-4cb69b61cab5', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b71f2bd2-56d4-4afa-bd37-2f65a2d061fe', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'da41861044a74eee9bc96841e54c57cc', 'neutron:revision_number': '4', 'neutron:security_group_ids': '9ef95b64-f92c-4c6f-a9f3-c169d32d1826', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8ca3fb81-8bc9-4918-afd1-69859f260a75, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], logical_port=ee1e1d41-5735-4e5a-8a77-1785e7a5214a) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 02:20:40 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:20:40.576 143965 INFO neutron.agent.ovn.metadata.agent [-] Port ee1e1d41-5735-4e5a-8a77-1785e7a5214a in datapath b71f2bd2-56d4-4afa-bd37-2f65a2d061fe unbound from our chassis#033[00m
Dec  6 02:20:40 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:20:40.578 143965 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network b71f2bd2-56d4-4afa-bd37-2f65a2d061fe#033[00m
Dec  6 02:20:40 np0005548731 nova_compute[232433]: 2025-12-06 07:20:40.581 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:20:40 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:20:40.598 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[d44219bf-e03a-41aa-9fe6-8526fb4fd316]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:20:40 np0005548731 systemd[1]: machine-qemu\x2d34\x2dinstance\x2d00000053.scope: Deactivated successfully.
Dec  6 02:20:40 np0005548731 systemd[1]: machine-qemu\x2d34\x2dinstance\x2d00000053.scope: Consumed 13.831s CPU time.
Dec  6 02:20:40 np0005548731 systemd-machined[195355]: Machine qemu-34-instance-00000053 terminated.
Dec  6 02:20:40 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:20:40.634 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[ada8da66-d4c8-48e7-aca5-3253d17807a5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:20:40 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:20:40.638 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[805e77ec-e8c6-4ac3-9115-75fd5817252f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:20:40 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:20:40.669 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[75d0a183-00f6-4109-96e3-81017a2bc4fb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:20:40 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:20:40.686 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[6a918344-1d7e-485d-abf6-96e5686e98dd]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb71f2bd2-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:81:86:33'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 6, 'tx_packets': 7, 'rx_bytes': 532, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 6, 'tx_packets': 7, 'rx_bytes': 532, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 103], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 588246, 'reachable_time': 30434, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 267205, 'error': None, 'target': 'ovnmeta-b71f2bd2-56d4-4afa-bd37-2f65a2d061fe', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:20:40 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:20:40.702 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[151d1d81-082f-454d-ac9a-dc55eab1cdb8]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapb71f2bd2-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 588258, 'tstamp': 588258}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 267206, 'error': None, 'target': 'ovnmeta-b71f2bd2-56d4-4afa-bd37-2f65a2d061fe', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapb71f2bd2-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 588260, 'tstamp': 588260}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 267206, 'error': None, 'target': 'ovnmeta-b71f2bd2-56d4-4afa-bd37-2f65a2d061fe', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:20:40 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:20:40.703 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb71f2bd2-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:20:40 np0005548731 nova_compute[232433]: 2025-12-06 07:20:40.704 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:20:40 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:20:40.708 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb71f2bd2-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:20:40 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:20:40.709 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  6 02:20:40 np0005548731 nova_compute[232433]: 2025-12-06 07:20:40.708 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:20:40 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:20:40.709 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapb71f2bd2-50, col_values=(('external_ids', {'iface-id': '4c28345a-5229-44ea-b258-9761247daac8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:20:40 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:20:40.709 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  6 02:20:40 np0005548731 nova_compute[232433]: 2025-12-06 07:20:40.724 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:20:40 np0005548731 nova_compute[232433]: 2025-12-06 07:20:40.730 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:20:40 np0005548731 nova_compute[232433]: 2025-12-06 07:20:40.737 232437 INFO nova.virt.libvirt.driver [-] [instance: 9b6b3b79-95ba-4b05-b308-4cb69b61cab5] Instance destroyed successfully.#033[00m
Dec  6 02:20:40 np0005548731 nova_compute[232433]: 2025-12-06 07:20:40.737 232437 DEBUG nova.objects.instance [None req-1cf3454e-4244-4eea-ab27-3f0653861e16 e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] Lazy-loading 'numa_topology' on Instance uuid 9b6b3b79-95ba-4b05-b308-4cb69b61cab5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:20:40 np0005548731 nova_compute[232433]: 2025-12-06 07:20:40.754 232437 DEBUG nova.compute.manager [None req-1cf3454e-4244-4eea-ab27-3f0653861e16 e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] [instance: 9b6b3b79-95ba-4b05-b308-4cb69b61cab5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:20:40 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:20:40 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:20:40 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:20:40.764 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:20:40 np0005548731 nova_compute[232433]: 2025-12-06 07:20:40.821 232437 DEBUG oslo_concurrency.lockutils [None req-1cf3454e-4244-4eea-ab27-3f0653861e16 e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] Lock "9b6b3b79-95ba-4b05-b308-4cb69b61cab5" "released" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: held 4.173s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:20:40 np0005548731 nova_compute[232433]: 2025-12-06 07:20:40.972 232437 DEBUG nova.compute.manager [req-337d51e1-7662-4a53-a397-bddc1de0675c req-1a5710e9-5aee-4ac5-b7ee-521aa0d411d0 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 9b6b3b79-95ba-4b05-b308-4cb69b61cab5] Received event network-vif-unplugged-ee1e1d41-5735-4e5a-8a77-1785e7a5214a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:20:40 np0005548731 nova_compute[232433]: 2025-12-06 07:20:40.973 232437 DEBUG oslo_concurrency.lockutils [req-337d51e1-7662-4a53-a397-bddc1de0675c req-1a5710e9-5aee-4ac5-b7ee-521aa0d411d0 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "9b6b3b79-95ba-4b05-b308-4cb69b61cab5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:20:40 np0005548731 nova_compute[232433]: 2025-12-06 07:20:40.973 232437 DEBUG oslo_concurrency.lockutils [req-337d51e1-7662-4a53-a397-bddc1de0675c req-1a5710e9-5aee-4ac5-b7ee-521aa0d411d0 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "9b6b3b79-95ba-4b05-b308-4cb69b61cab5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:20:40 np0005548731 nova_compute[232433]: 2025-12-06 07:20:40.973 232437 DEBUG oslo_concurrency.lockutils [req-337d51e1-7662-4a53-a397-bddc1de0675c req-1a5710e9-5aee-4ac5-b7ee-521aa0d411d0 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "9b6b3b79-95ba-4b05-b308-4cb69b61cab5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:20:40 np0005548731 nova_compute[232433]: 2025-12-06 07:20:40.973 232437 DEBUG nova.compute.manager [req-337d51e1-7662-4a53-a397-bddc1de0675c req-1a5710e9-5aee-4ac5-b7ee-521aa0d411d0 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 9b6b3b79-95ba-4b05-b308-4cb69b61cab5] No waiting events found dispatching network-vif-unplugged-ee1e1d41-5735-4e5a-8a77-1785e7a5214a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 02:20:40 np0005548731 nova_compute[232433]: 2025-12-06 07:20:40.974 232437 WARNING nova.compute.manager [req-337d51e1-7662-4a53-a397-bddc1de0675c req-1a5710e9-5aee-4ac5-b7ee-521aa0d411d0 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 9b6b3b79-95ba-4b05-b308-4cb69b61cab5] Received unexpected event network-vif-unplugged-ee1e1d41-5735-4e5a-8a77-1785e7a5214a for instance with vm_state stopped and task_state None.#033[00m
Dec  6 02:20:41 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:20:41 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:20:41 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:20:41.234 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:20:42 np0005548731 nova_compute[232433]: 2025-12-06 07:20:42.043 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:20:42 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:20:42 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 02:20:42 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:20:42.766 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 02:20:42 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e258 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:20:43 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:20:43 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:20:43 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:20:43.237 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:20:43 np0005548731 nova_compute[232433]: 2025-12-06 07:20:43.441 232437 DEBUG nova.compute.manager [req-9be1cb0e-b6c1-4216-8dcb-efd4f3f8d15c req-29a41d9c-8ba1-4388-b37e-358575ba6417 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 9b6b3b79-95ba-4b05-b308-4cb69b61cab5] Received event network-vif-plugged-ee1e1d41-5735-4e5a-8a77-1785e7a5214a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:20:43 np0005548731 nova_compute[232433]: 2025-12-06 07:20:43.441 232437 DEBUG oslo_concurrency.lockutils [req-9be1cb0e-b6c1-4216-8dcb-efd4f3f8d15c req-29a41d9c-8ba1-4388-b37e-358575ba6417 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "9b6b3b79-95ba-4b05-b308-4cb69b61cab5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:20:43 np0005548731 nova_compute[232433]: 2025-12-06 07:20:43.442 232437 DEBUG oslo_concurrency.lockutils [req-9be1cb0e-b6c1-4216-8dcb-efd4f3f8d15c req-29a41d9c-8ba1-4388-b37e-358575ba6417 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "9b6b3b79-95ba-4b05-b308-4cb69b61cab5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:20:43 np0005548731 nova_compute[232433]: 2025-12-06 07:20:43.442 232437 DEBUG oslo_concurrency.lockutils [req-9be1cb0e-b6c1-4216-8dcb-efd4f3f8d15c req-29a41d9c-8ba1-4388-b37e-358575ba6417 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "9b6b3b79-95ba-4b05-b308-4cb69b61cab5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:20:43 np0005548731 nova_compute[232433]: 2025-12-06 07:20:43.442 232437 DEBUG nova.compute.manager [req-9be1cb0e-b6c1-4216-8dcb-efd4f3f8d15c req-29a41d9c-8ba1-4388-b37e-358575ba6417 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 9b6b3b79-95ba-4b05-b308-4cb69b61cab5] No waiting events found dispatching network-vif-plugged-ee1e1d41-5735-4e5a-8a77-1785e7a5214a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 02:20:43 np0005548731 nova_compute[232433]: 2025-12-06 07:20:43.442 232437 WARNING nova.compute.manager [req-9be1cb0e-b6c1-4216-8dcb-efd4f3f8d15c req-29a41d9c-8ba1-4388-b37e-358575ba6417 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 9b6b3b79-95ba-4b05-b308-4cb69b61cab5] Received unexpected event network-vif-plugged-ee1e1d41-5735-4e5a-8a77-1785e7a5214a for instance with vm_state stopped and task_state None.#033[00m
Dec  6 02:20:43 np0005548731 nova_compute[232433]: 2025-12-06 07:20:43.699 232437 DEBUG nova.objects.instance [None req-905a8f2b-f022-47a6-88fd-d6bd02e3c739 e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] Lazy-loading 'flavor' on Instance uuid 9b6b3b79-95ba-4b05-b308-4cb69b61cab5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:20:43 np0005548731 nova_compute[232433]: 2025-12-06 07:20:43.722 232437 DEBUG oslo_concurrency.lockutils [None req-905a8f2b-f022-47a6-88fd-d6bd02e3c739 e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] Acquiring lock "refresh_cache-9b6b3b79-95ba-4b05-b308-4cb69b61cab5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 02:20:43 np0005548731 nova_compute[232433]: 2025-12-06 07:20:43.723 232437 DEBUG oslo_concurrency.lockutils [None req-905a8f2b-f022-47a6-88fd-d6bd02e3c739 e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] Acquired lock "refresh_cache-9b6b3b79-95ba-4b05-b308-4cb69b61cab5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 02:20:43 np0005548731 nova_compute[232433]: 2025-12-06 07:20:43.723 232437 DEBUG nova.network.neutron [None req-905a8f2b-f022-47a6-88fd-d6bd02e3c739 e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] [instance: 9b6b3b79-95ba-4b05-b308-4cb69b61cab5] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec  6 02:20:43 np0005548731 nova_compute[232433]: 2025-12-06 07:20:43.723 232437 DEBUG nova.objects.instance [None req-905a8f2b-f022-47a6-88fd-d6bd02e3c739 e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] Lazy-loading 'info_cache' on Instance uuid 9b6b3b79-95ba-4b05-b308-4cb69b61cab5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:20:43 np0005548731 nova_compute[232433]: 2025-12-06 07:20:43.775 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:20:44 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:20:44 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:20:44 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:20:44.768 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:20:45 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:20:45 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:20:45 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:20:45.239 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:20:45 np0005548731 nova_compute[232433]: 2025-12-06 07:20:45.636 232437 DEBUG nova.network.neutron [None req-905a8f2b-f022-47a6-88fd-d6bd02e3c739 e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] [instance: 9b6b3b79-95ba-4b05-b308-4cb69b61cab5] Updating instance_info_cache with network_info: [{"id": "ee1e1d41-5735-4e5a-8a77-1785e7a5214a", "address": "fa:16:3e:3f:d5:79", "network": {"id": "b71f2bd2-56d4-4afa-bd37-2f65a2d061fe", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-1522505220-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da41861044a74eee9bc96841e54c57cc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapee1e1d41-57", "ovs_interfaceid": "ee1e1d41-5735-4e5a-8a77-1785e7a5214a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 02:20:45 np0005548731 nova_compute[232433]: 2025-12-06 07:20:45.653 232437 DEBUG oslo_concurrency.lockutils [None req-905a8f2b-f022-47a6-88fd-d6bd02e3c739 e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] Releasing lock "refresh_cache-9b6b3b79-95ba-4b05-b308-4cb69b61cab5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 02:20:45 np0005548731 nova_compute[232433]: 2025-12-06 07:20:45.689 232437 INFO nova.virt.libvirt.driver [-] [instance: 9b6b3b79-95ba-4b05-b308-4cb69b61cab5] Instance destroyed successfully.#033[00m
Dec  6 02:20:45 np0005548731 nova_compute[232433]: 2025-12-06 07:20:45.690 232437 DEBUG nova.objects.instance [None req-905a8f2b-f022-47a6-88fd-d6bd02e3c739 e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] Lazy-loading 'numa_topology' on Instance uuid 9b6b3b79-95ba-4b05-b308-4cb69b61cab5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:20:45 np0005548731 nova_compute[232433]: 2025-12-06 07:20:45.712 232437 DEBUG nova.objects.instance [None req-905a8f2b-f022-47a6-88fd-d6bd02e3c739 e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] Lazy-loading 'resources' on Instance uuid 9b6b3b79-95ba-4b05-b308-4cb69b61cab5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:20:45 np0005548731 nova_compute[232433]: 2025-12-06 07:20:45.736 232437 DEBUG nova.virt.libvirt.vif [None req-905a8f2b-f022-47a6-88fd-d6bd02e3c739 e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T07:20:09Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ListServerFiltersTestJSON-instance-17034118',display_name='tempest-ListServerFiltersTestJSON-instance-17034118',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-listserverfilterstestjson-instance-17034118',id=83,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-06T07:20:21Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='da41861044a74eee9bc96841e54c57cc',ramdisk_id='',reservation_id='r-jvcfgmi9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_mi
n_disk='1',image_min_ram='0',owner_project_name='tempest-ListServerFiltersTestJSON-1681699386',owner_user_name='tempest-ListServerFiltersTestJSON-1681699386-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T07:20:40Z,user_data=None,user_id='e3d8ea3dcb5b4ee3b9bd24c62fdd61b2',uuid=9b6b3b79-95ba-4b05-b308-4cb69b61cab5,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "ee1e1d41-5735-4e5a-8a77-1785e7a5214a", "address": "fa:16:3e:3f:d5:79", "network": {"id": "b71f2bd2-56d4-4afa-bd37-2f65a2d061fe", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-1522505220-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da41861044a74eee9bc96841e54c57cc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapee1e1d41-57", "ovs_interfaceid": "ee1e1d41-5735-4e5a-8a77-1785e7a5214a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec  6 02:20:45 np0005548731 nova_compute[232433]: 2025-12-06 07:20:45.736 232437 DEBUG nova.network.os_vif_util [None req-905a8f2b-f022-47a6-88fd-d6bd02e3c739 e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] Converting VIF {"id": "ee1e1d41-5735-4e5a-8a77-1785e7a5214a", "address": "fa:16:3e:3f:d5:79", "network": {"id": "b71f2bd2-56d4-4afa-bd37-2f65a2d061fe", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-1522505220-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da41861044a74eee9bc96841e54c57cc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapee1e1d41-57", "ovs_interfaceid": "ee1e1d41-5735-4e5a-8a77-1785e7a5214a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 02:20:45 np0005548731 nova_compute[232433]: 2025-12-06 07:20:45.737 232437 DEBUG nova.network.os_vif_util [None req-905a8f2b-f022-47a6-88fd-d6bd02e3c739 e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3f:d5:79,bridge_name='br-int',has_traffic_filtering=True,id=ee1e1d41-5735-4e5a-8a77-1785e7a5214a,network=Network(b71f2bd2-56d4-4afa-bd37-2f65a2d061fe),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapee1e1d41-57') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 02:20:45 np0005548731 nova_compute[232433]: 2025-12-06 07:20:45.737 232437 DEBUG os_vif [None req-905a8f2b-f022-47a6-88fd-d6bd02e3c739 e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:3f:d5:79,bridge_name='br-int',has_traffic_filtering=True,id=ee1e1d41-5735-4e5a-8a77-1785e7a5214a,network=Network(b71f2bd2-56d4-4afa-bd37-2f65a2d061fe),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapee1e1d41-57') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec  6 02:20:45 np0005548731 nova_compute[232433]: 2025-12-06 07:20:45.738 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:20:45 np0005548731 nova_compute[232433]: 2025-12-06 07:20:45.739 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapee1e1d41-57, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:20:45 np0005548731 nova_compute[232433]: 2025-12-06 07:20:45.740 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:20:45 np0005548731 nova_compute[232433]: 2025-12-06 07:20:45.742 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:20:45 np0005548731 nova_compute[232433]: 2025-12-06 07:20:45.744 232437 INFO os_vif [None req-905a8f2b-f022-47a6-88fd-d6bd02e3c739 e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:3f:d5:79,bridge_name='br-int',has_traffic_filtering=True,id=ee1e1d41-5735-4e5a-8a77-1785e7a5214a,network=Network(b71f2bd2-56d4-4afa-bd37-2f65a2d061fe),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapee1e1d41-57')#033[00m
Dec  6 02:20:45 np0005548731 nova_compute[232433]: 2025-12-06 07:20:45.750 232437 DEBUG nova.virt.libvirt.driver [None req-905a8f2b-f022-47a6-88fd-d6bd02e3c739 e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] [instance: 9b6b3b79-95ba-4b05-b308-4cb69b61cab5] Start _get_guest_xml network_info=[{"id": "ee1e1d41-5735-4e5a-8a77-1785e7a5214a", "address": "fa:16:3e:3f:d5:79", "network": {"id": "b71f2bd2-56d4-4afa-bd37-2f65a2d061fe", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-1522505220-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da41861044a74eee9bc96841e54c57cc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapee1e1d41-57", "ovs_interfaceid": "ee1e1d41-5735-4e5a-8a77-1785e7a5214a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None 
block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'device_type': 'disk', 'encryption_secret_uuid': None, 'disk_bus': 'virtio', 'boot_index': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'encryption_options': None, 'size': 0, 'encrypted': False, 'image_id': '6efab05d-c7cf-4770-a5c3-c806a2739063'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec  6 02:20:45 np0005548731 nova_compute[232433]: 2025-12-06 07:20:45.753 232437 WARNING nova.virt.libvirt.driver [None req-905a8f2b-f022-47a6-88fd-d6bd02e3c739 e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  6 02:20:45 np0005548731 nova_compute[232433]: 2025-12-06 07:20:45.757 232437 DEBUG nova.virt.libvirt.host [None req-905a8f2b-f022-47a6-88fd-d6bd02e3c739 e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec  6 02:20:45 np0005548731 nova_compute[232433]: 2025-12-06 07:20:45.758 232437 DEBUG nova.virt.libvirt.host [None req-905a8f2b-f022-47a6-88fd-d6bd02e3c739 e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec  6 02:20:45 np0005548731 nova_compute[232433]: 2025-12-06 07:20:45.761 232437 DEBUG nova.virt.libvirt.host [None req-905a8f2b-f022-47a6-88fd-d6bd02e3c739 e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec  6 02:20:45 np0005548731 nova_compute[232433]: 2025-12-06 07:20:45.761 232437 DEBUG nova.virt.libvirt.host [None req-905a8f2b-f022-47a6-88fd-d6bd02e3c739 e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec  6 02:20:45 np0005548731 nova_compute[232433]: 2025-12-06 07:20:45.762 232437 DEBUG nova.virt.libvirt.driver [None req-905a8f2b-f022-47a6-88fd-d6bd02e3c739 e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec  6 02:20:45 np0005548731 nova_compute[232433]: 2025-12-06 07:20:45.762 232437 DEBUG nova.virt.hardware [None req-905a8f2b-f022-47a6-88fd-d6bd02e3c739 e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-06T06:56:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='25848a18-11d9-4f11-80b5-5d005675c76d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec  6 02:20:45 np0005548731 nova_compute[232433]: 2025-12-06 07:20:45.763 232437 DEBUG nova.virt.hardware [None req-905a8f2b-f022-47a6-88fd-d6bd02e3c739 e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec  6 02:20:45 np0005548731 nova_compute[232433]: 2025-12-06 07:20:45.763 232437 DEBUG nova.virt.hardware [None req-905a8f2b-f022-47a6-88fd-d6bd02e3c739 e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec  6 02:20:45 np0005548731 nova_compute[232433]: 2025-12-06 07:20:45.763 232437 DEBUG nova.virt.hardware [None req-905a8f2b-f022-47a6-88fd-d6bd02e3c739 e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec  6 02:20:45 np0005548731 nova_compute[232433]: 2025-12-06 07:20:45.763 232437 DEBUG nova.virt.hardware [None req-905a8f2b-f022-47a6-88fd-d6bd02e3c739 e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec  6 02:20:45 np0005548731 nova_compute[232433]: 2025-12-06 07:20:45.763 232437 DEBUG nova.virt.hardware [None req-905a8f2b-f022-47a6-88fd-d6bd02e3c739 e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec  6 02:20:45 np0005548731 nova_compute[232433]: 2025-12-06 07:20:45.763 232437 DEBUG nova.virt.hardware [None req-905a8f2b-f022-47a6-88fd-d6bd02e3c739 e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec  6 02:20:45 np0005548731 nova_compute[232433]: 2025-12-06 07:20:45.763 232437 DEBUG nova.virt.hardware [None req-905a8f2b-f022-47a6-88fd-d6bd02e3c739 e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec  6 02:20:45 np0005548731 nova_compute[232433]: 2025-12-06 07:20:45.764 232437 DEBUG nova.virt.hardware [None req-905a8f2b-f022-47a6-88fd-d6bd02e3c739 e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec  6 02:20:45 np0005548731 nova_compute[232433]: 2025-12-06 07:20:45.764 232437 DEBUG nova.virt.hardware [None req-905a8f2b-f022-47a6-88fd-d6bd02e3c739 e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec  6 02:20:45 np0005548731 nova_compute[232433]: 2025-12-06 07:20:45.764 232437 DEBUG nova.virt.hardware [None req-905a8f2b-f022-47a6-88fd-d6bd02e3c739 e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec  6 02:20:45 np0005548731 nova_compute[232433]: 2025-12-06 07:20:45.764 232437 DEBUG nova.objects.instance [None req-905a8f2b-f022-47a6-88fd-d6bd02e3c739 e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] Lazy-loading 'vcpu_model' on Instance uuid 9b6b3b79-95ba-4b05-b308-4cb69b61cab5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:20:45 np0005548731 nova_compute[232433]: 2025-12-06 07:20:45.778 232437 DEBUG oslo_concurrency.processutils [None req-905a8f2b-f022-47a6-88fd-d6bd02e3c739 e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:20:46 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  6 02:20:46 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2225651671' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  6 02:20:46 np0005548731 nova_compute[232433]: 2025-12-06 07:20:46.249 232437 DEBUG oslo_concurrency.processutils [None req-905a8f2b-f022-47a6-88fd-d6bd02e3c739 e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:20:46 np0005548731 nova_compute[232433]: 2025-12-06 07:20:46.294 232437 DEBUG oslo_concurrency.processutils [None req-905a8f2b-f022-47a6-88fd-d6bd02e3c739 e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:20:46 np0005548731 ovn_controller[133927]: 2025-12-06T07:20:46Z|00049|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:44:86:a3 10.100.0.7
Dec  6 02:20:46 np0005548731 ovn_controller[133927]: 2025-12-06T07:20:46Z|00050|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:44:86:a3 10.100.0.7
Dec  6 02:20:46 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  6 02:20:46 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3154939193' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  6 02:20:46 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:20:46 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 02:20:46 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:20:46.771 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 02:20:46 np0005548731 nova_compute[232433]: 2025-12-06 07:20:46.903 232437 DEBUG oslo_concurrency.processutils [None req-905a8f2b-f022-47a6-88fd-d6bd02e3c739 e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.610s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:20:46 np0005548731 nova_compute[232433]: 2025-12-06 07:20:46.905 232437 DEBUG nova.virt.libvirt.vif [None req-905a8f2b-f022-47a6-88fd-d6bd02e3c739 e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T07:20:09Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ListServerFiltersTestJSON-instance-17034118',display_name='tempest-ListServerFiltersTestJSON-instance-17034118',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-listserverfilterstestjson-instance-17034118',id=83,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-06T07:20:21Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='da41861044a74eee9bc96841e54c57cc',ramdisk_id='',reservation_id='r-jvcfgmi9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ListServerFiltersTestJSON-1681699386',owner_user_name='tempest-ListServerFiltersTestJSON-1681699386-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T07:20:40Z,user_data=None,user_id='e3d8ea3dcb5b4ee3b9bd24c62fdd61b2',uuid=9b6b3b79-95ba-4b05-b308-4cb69b61cab5,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "ee1e1d41-5735-4e5a-8a77-1785e7a5214a", "address": "fa:16:3e:3f:d5:79", "network": {"id": "b71f2bd2-56d4-4afa-bd37-2f65a2d061fe", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-1522505220-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da41861044a74eee9bc96841e54c57cc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapee1e1d41-57", "ovs_interfaceid": "ee1e1d41-5735-4e5a-8a77-1785e7a5214a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec  6 02:20:46 np0005548731 nova_compute[232433]: 2025-12-06 07:20:46.905 232437 DEBUG nova.network.os_vif_util [None req-905a8f2b-f022-47a6-88fd-d6bd02e3c739 e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] Converting VIF {"id": "ee1e1d41-5735-4e5a-8a77-1785e7a5214a", "address": "fa:16:3e:3f:d5:79", "network": {"id": "b71f2bd2-56d4-4afa-bd37-2f65a2d061fe", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-1522505220-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da41861044a74eee9bc96841e54c57cc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapee1e1d41-57", "ovs_interfaceid": "ee1e1d41-5735-4e5a-8a77-1785e7a5214a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 02:20:46 np0005548731 nova_compute[232433]: 2025-12-06 07:20:46.907 232437 DEBUG nova.network.os_vif_util [None req-905a8f2b-f022-47a6-88fd-d6bd02e3c739 e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3f:d5:79,bridge_name='br-int',has_traffic_filtering=True,id=ee1e1d41-5735-4e5a-8a77-1785e7a5214a,network=Network(b71f2bd2-56d4-4afa-bd37-2f65a2d061fe),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapee1e1d41-57') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 02:20:46 np0005548731 nova_compute[232433]: 2025-12-06 07:20:46.908 232437 DEBUG nova.objects.instance [None req-905a8f2b-f022-47a6-88fd-d6bd02e3c739 e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] Lazy-loading 'pci_devices' on Instance uuid 9b6b3b79-95ba-4b05-b308-4cb69b61cab5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:20:46 np0005548731 nova_compute[232433]: 2025-12-06 07:20:46.921 232437 DEBUG nova.virt.libvirt.driver [None req-905a8f2b-f022-47a6-88fd-d6bd02e3c739 e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] [instance: 9b6b3b79-95ba-4b05-b308-4cb69b61cab5] End _get_guest_xml xml=<domain type="kvm">
Dec  6 02:20:46 np0005548731 nova_compute[232433]:  <uuid>9b6b3b79-95ba-4b05-b308-4cb69b61cab5</uuid>
Dec  6 02:20:46 np0005548731 nova_compute[232433]:  <name>instance-00000053</name>
Dec  6 02:20:46 np0005548731 nova_compute[232433]:  <memory>131072</memory>
Dec  6 02:20:46 np0005548731 nova_compute[232433]:  <vcpu>1</vcpu>
Dec  6 02:20:46 np0005548731 nova_compute[232433]:  <metadata>
Dec  6 02:20:46 np0005548731 nova_compute[232433]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec  6 02:20:46 np0005548731 nova_compute[232433]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec  6 02:20:46 np0005548731 nova_compute[232433]:      <nova:name>tempest-ListServerFiltersTestJSON-instance-17034118</nova:name>
Dec  6 02:20:46 np0005548731 nova_compute[232433]:      <nova:creationTime>2025-12-06 07:20:45</nova:creationTime>
Dec  6 02:20:46 np0005548731 nova_compute[232433]:      <nova:flavor name="m1.nano">
Dec  6 02:20:46 np0005548731 nova_compute[232433]:        <nova:memory>128</nova:memory>
Dec  6 02:20:46 np0005548731 nova_compute[232433]:        <nova:disk>1</nova:disk>
Dec  6 02:20:46 np0005548731 nova_compute[232433]:        <nova:swap>0</nova:swap>
Dec  6 02:20:46 np0005548731 nova_compute[232433]:        <nova:ephemeral>0</nova:ephemeral>
Dec  6 02:20:46 np0005548731 nova_compute[232433]:        <nova:vcpus>1</nova:vcpus>
Dec  6 02:20:46 np0005548731 nova_compute[232433]:      </nova:flavor>
Dec  6 02:20:46 np0005548731 nova_compute[232433]:      <nova:owner>
Dec  6 02:20:46 np0005548731 nova_compute[232433]:        <nova:user uuid="e3d8ea3dcb5b4ee3b9bd24c62fdd61b2">tempest-ListServerFiltersTestJSON-1681699386-project-member</nova:user>
Dec  6 02:20:46 np0005548731 nova_compute[232433]:        <nova:project uuid="da41861044a74eee9bc96841e54c57cc">tempest-ListServerFiltersTestJSON-1681699386</nova:project>
Dec  6 02:20:46 np0005548731 nova_compute[232433]:      </nova:owner>
Dec  6 02:20:46 np0005548731 nova_compute[232433]:      <nova:root type="image" uuid="6efab05d-c7cf-4770-a5c3-c806a2739063"/>
Dec  6 02:20:46 np0005548731 nova_compute[232433]:      <nova:ports>
Dec  6 02:20:46 np0005548731 nova_compute[232433]:        <nova:port uuid="ee1e1d41-5735-4e5a-8a77-1785e7a5214a">
Dec  6 02:20:46 np0005548731 nova_compute[232433]:          <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Dec  6 02:20:46 np0005548731 nova_compute[232433]:        </nova:port>
Dec  6 02:20:46 np0005548731 nova_compute[232433]:      </nova:ports>
Dec  6 02:20:46 np0005548731 nova_compute[232433]:    </nova:instance>
Dec  6 02:20:46 np0005548731 nova_compute[232433]:  </metadata>
Dec  6 02:20:46 np0005548731 nova_compute[232433]:  <sysinfo type="smbios">
Dec  6 02:20:46 np0005548731 nova_compute[232433]:    <system>
Dec  6 02:20:46 np0005548731 nova_compute[232433]:      <entry name="manufacturer">RDO</entry>
Dec  6 02:20:46 np0005548731 nova_compute[232433]:      <entry name="product">OpenStack Compute</entry>
Dec  6 02:20:46 np0005548731 nova_compute[232433]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec  6 02:20:46 np0005548731 nova_compute[232433]:      <entry name="serial">9b6b3b79-95ba-4b05-b308-4cb69b61cab5</entry>
Dec  6 02:20:46 np0005548731 nova_compute[232433]:      <entry name="uuid">9b6b3b79-95ba-4b05-b308-4cb69b61cab5</entry>
Dec  6 02:20:46 np0005548731 nova_compute[232433]:      <entry name="family">Virtual Machine</entry>
Dec  6 02:20:46 np0005548731 nova_compute[232433]:    </system>
Dec  6 02:20:46 np0005548731 nova_compute[232433]:  </sysinfo>
Dec  6 02:20:46 np0005548731 nova_compute[232433]:  <os>
Dec  6 02:20:46 np0005548731 nova_compute[232433]:    <type arch="x86_64" machine="q35">hvm</type>
Dec  6 02:20:46 np0005548731 nova_compute[232433]:    <boot dev="hd"/>
Dec  6 02:20:46 np0005548731 nova_compute[232433]:    <smbios mode="sysinfo"/>
Dec  6 02:20:46 np0005548731 nova_compute[232433]:  </os>
Dec  6 02:20:46 np0005548731 nova_compute[232433]:  <features>
Dec  6 02:20:46 np0005548731 nova_compute[232433]:    <acpi/>
Dec  6 02:20:46 np0005548731 nova_compute[232433]:    <apic/>
Dec  6 02:20:46 np0005548731 nova_compute[232433]:    <vmcoreinfo/>
Dec  6 02:20:46 np0005548731 nova_compute[232433]:  </features>
Dec  6 02:20:46 np0005548731 nova_compute[232433]:  <clock offset="utc">
Dec  6 02:20:46 np0005548731 nova_compute[232433]:    <timer name="pit" tickpolicy="delay"/>
Dec  6 02:20:46 np0005548731 nova_compute[232433]:    <timer name="rtc" tickpolicy="catchup"/>
Dec  6 02:20:46 np0005548731 nova_compute[232433]:    <timer name="hpet" present="no"/>
Dec  6 02:20:46 np0005548731 nova_compute[232433]:  </clock>
Dec  6 02:20:46 np0005548731 nova_compute[232433]:  <cpu mode="custom" match="exact">
Dec  6 02:20:46 np0005548731 nova_compute[232433]:    <model>Nehalem</model>
Dec  6 02:20:46 np0005548731 nova_compute[232433]:    <topology sockets="1" cores="1" threads="1"/>
Dec  6 02:20:46 np0005548731 nova_compute[232433]:  </cpu>
Dec  6 02:20:46 np0005548731 nova_compute[232433]:  <devices>
Dec  6 02:20:46 np0005548731 nova_compute[232433]:    <disk type="network" device="disk">
Dec  6 02:20:46 np0005548731 nova_compute[232433]:      <driver type="raw" cache="none"/>
Dec  6 02:20:46 np0005548731 nova_compute[232433]:      <source protocol="rbd" name="vms/9b6b3b79-95ba-4b05-b308-4cb69b61cab5_disk">
Dec  6 02:20:46 np0005548731 nova_compute[232433]:        <host name="192.168.122.100" port="6789"/>
Dec  6 02:20:46 np0005548731 nova_compute[232433]:        <host name="192.168.122.102" port="6789"/>
Dec  6 02:20:46 np0005548731 nova_compute[232433]:        <host name="192.168.122.101" port="6789"/>
Dec  6 02:20:46 np0005548731 nova_compute[232433]:      </source>
Dec  6 02:20:46 np0005548731 nova_compute[232433]:      <auth username="openstack">
Dec  6 02:20:46 np0005548731 nova_compute[232433]:        <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec  6 02:20:46 np0005548731 nova_compute[232433]:      </auth>
Dec  6 02:20:46 np0005548731 nova_compute[232433]:      <target dev="vda" bus="virtio"/>
Dec  6 02:20:46 np0005548731 nova_compute[232433]:    </disk>
Dec  6 02:20:46 np0005548731 nova_compute[232433]:    <disk type="network" device="cdrom">
Dec  6 02:20:46 np0005548731 nova_compute[232433]:      <driver type="raw" cache="none"/>
Dec  6 02:20:46 np0005548731 nova_compute[232433]:      <source protocol="rbd" name="vms/9b6b3b79-95ba-4b05-b308-4cb69b61cab5_disk.config">
Dec  6 02:20:46 np0005548731 nova_compute[232433]:        <host name="192.168.122.100" port="6789"/>
Dec  6 02:20:46 np0005548731 nova_compute[232433]:        <host name="192.168.122.102" port="6789"/>
Dec  6 02:20:46 np0005548731 nova_compute[232433]:        <host name="192.168.122.101" port="6789"/>
Dec  6 02:20:46 np0005548731 nova_compute[232433]:      </source>
Dec  6 02:20:46 np0005548731 nova_compute[232433]:      <auth username="openstack">
Dec  6 02:20:46 np0005548731 nova_compute[232433]:        <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec  6 02:20:46 np0005548731 nova_compute[232433]:      </auth>
Dec  6 02:20:46 np0005548731 nova_compute[232433]:      <target dev="sda" bus="sata"/>
Dec  6 02:20:46 np0005548731 nova_compute[232433]:    </disk>
Dec  6 02:20:46 np0005548731 nova_compute[232433]:    <interface type="ethernet">
Dec  6 02:20:46 np0005548731 nova_compute[232433]:      <mac address="fa:16:3e:3f:d5:79"/>
Dec  6 02:20:46 np0005548731 nova_compute[232433]:      <model type="virtio"/>
Dec  6 02:20:46 np0005548731 nova_compute[232433]:      <driver name="vhost" rx_queue_size="512"/>
Dec  6 02:20:46 np0005548731 nova_compute[232433]:      <mtu size="1442"/>
Dec  6 02:20:46 np0005548731 nova_compute[232433]:      <target dev="tapee1e1d41-57"/>
Dec  6 02:20:46 np0005548731 nova_compute[232433]:    </interface>
Dec  6 02:20:46 np0005548731 nova_compute[232433]:    <serial type="pty">
Dec  6 02:20:46 np0005548731 nova_compute[232433]:      <log file="/var/lib/nova/instances/9b6b3b79-95ba-4b05-b308-4cb69b61cab5/console.log" append="off"/>
Dec  6 02:20:46 np0005548731 nova_compute[232433]:    </serial>
Dec  6 02:20:46 np0005548731 nova_compute[232433]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec  6 02:20:46 np0005548731 nova_compute[232433]:    <video>
Dec  6 02:20:46 np0005548731 nova_compute[232433]:      <model type="virtio"/>
Dec  6 02:20:46 np0005548731 nova_compute[232433]:    </video>
Dec  6 02:20:46 np0005548731 nova_compute[232433]:    <input type="tablet" bus="usb"/>
Dec  6 02:20:46 np0005548731 nova_compute[232433]:    <input type="keyboard" bus="usb"/>
Dec  6 02:20:46 np0005548731 nova_compute[232433]:    <rng model="virtio">
Dec  6 02:20:46 np0005548731 nova_compute[232433]:      <backend model="random">/dev/urandom</backend>
Dec  6 02:20:46 np0005548731 nova_compute[232433]:    </rng>
Dec  6 02:20:46 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root"/>
Dec  6 02:20:46 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:20:46 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:20:46 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:20:46 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:20:46 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:20:46 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:20:46 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:20:46 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:20:46 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:20:46 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:20:46 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:20:46 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:20:46 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:20:46 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:20:46 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:20:46 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:20:46 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:20:46 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:20:46 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:20:46 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:20:46 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:20:46 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:20:46 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:20:46 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:20:46 np0005548731 nova_compute[232433]:    <controller type="usb" index="0"/>
Dec  6 02:20:46 np0005548731 nova_compute[232433]:    <memballoon model="virtio">
Dec  6 02:20:46 np0005548731 nova_compute[232433]:      <stats period="10"/>
Dec  6 02:20:46 np0005548731 nova_compute[232433]:    </memballoon>
Dec  6 02:20:46 np0005548731 nova_compute[232433]:  </devices>
Dec  6 02:20:46 np0005548731 nova_compute[232433]: </domain>
Dec  6 02:20:46 np0005548731 nova_compute[232433]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec  6 02:20:46 np0005548731 nova_compute[232433]: 2025-12-06 07:20:46.923 232437 DEBUG nova.virt.libvirt.driver [None req-905a8f2b-f022-47a6-88fd-d6bd02e3c739 e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] skipping disk for instance-00000053 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  6 02:20:46 np0005548731 nova_compute[232433]: 2025-12-06 07:20:46.923 232437 DEBUG nova.virt.libvirt.driver [None req-905a8f2b-f022-47a6-88fd-d6bd02e3c739 e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] skipping disk for instance-00000053 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  6 02:20:46 np0005548731 nova_compute[232433]: 2025-12-06 07:20:46.924 232437 DEBUG nova.virt.libvirt.vif [None req-905a8f2b-f022-47a6-88fd-d6bd02e3c739 e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T07:20:09Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ListServerFiltersTestJSON-instance-17034118',display_name='tempest-ListServerFiltersTestJSON-instance-17034118',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-listserverfilterstestjson-instance-17034118',id=83,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-06T07:20:21Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=<?>,power_state=4,progress=0,project_id='da41861044a74eee9bc96841e54c57cc',ramdisk_id='',reservation_id='r-jvcfgmi9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ListServerFiltersTestJSON-1681699386',owner_user_name='tempest-ListServerFiltersTestJSON-1681699386-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T07:20:40Z,user_data=None,user_id='e3d8ea3dcb5b4ee3b9bd24c62fdd61b2',uuid=9b6b3b79-95ba-4b05-b308-4cb69b61cab5,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "ee1e1d41-5735-4e5a-8a77-1785e7a5214a", "address": "fa:16:3e:3f:d5:79", "network": {"id": "b71f2bd2-56d4-4afa-bd37-2f65a2d061fe", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-1522505220-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da41861044a74eee9bc96841e54c57cc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapee1e1d41-57", "ovs_interfaceid": "ee1e1d41-5735-4e5a-8a77-1785e7a5214a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec  6 02:20:46 np0005548731 nova_compute[232433]: 2025-12-06 07:20:46.924 232437 DEBUG nova.network.os_vif_util [None req-905a8f2b-f022-47a6-88fd-d6bd02e3c739 e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] Converting VIF {"id": "ee1e1d41-5735-4e5a-8a77-1785e7a5214a", "address": "fa:16:3e:3f:d5:79", "network": {"id": "b71f2bd2-56d4-4afa-bd37-2f65a2d061fe", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-1522505220-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da41861044a74eee9bc96841e54c57cc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapee1e1d41-57", "ovs_interfaceid": "ee1e1d41-5735-4e5a-8a77-1785e7a5214a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 02:20:46 np0005548731 nova_compute[232433]: 2025-12-06 07:20:46.925 232437 DEBUG nova.network.os_vif_util [None req-905a8f2b-f022-47a6-88fd-d6bd02e3c739 e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3f:d5:79,bridge_name='br-int',has_traffic_filtering=True,id=ee1e1d41-5735-4e5a-8a77-1785e7a5214a,network=Network(b71f2bd2-56d4-4afa-bd37-2f65a2d061fe),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapee1e1d41-57') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 02:20:46 np0005548731 nova_compute[232433]: 2025-12-06 07:20:46.925 232437 DEBUG os_vif [None req-905a8f2b-f022-47a6-88fd-d6bd02e3c739 e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:3f:d5:79,bridge_name='br-int',has_traffic_filtering=True,id=ee1e1d41-5735-4e5a-8a77-1785e7a5214a,network=Network(b71f2bd2-56d4-4afa-bd37-2f65a2d061fe),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapee1e1d41-57') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec  6 02:20:46 np0005548731 nova_compute[232433]: 2025-12-06 07:20:46.926 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:20:46 np0005548731 nova_compute[232433]: 2025-12-06 07:20:46.926 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:20:46 np0005548731 nova_compute[232433]: 2025-12-06 07:20:46.927 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  6 02:20:46 np0005548731 nova_compute[232433]: 2025-12-06 07:20:46.929 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:20:46 np0005548731 nova_compute[232433]: 2025-12-06 07:20:46.929 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapee1e1d41-57, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:20:46 np0005548731 nova_compute[232433]: 2025-12-06 07:20:46.930 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapee1e1d41-57, col_values=(('external_ids', {'iface-id': 'ee1e1d41-5735-4e5a-8a77-1785e7a5214a', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:3f:d5:79', 'vm-uuid': '9b6b3b79-95ba-4b05-b308-4cb69b61cab5'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:20:46 np0005548731 nova_compute[232433]: 2025-12-06 07:20:46.931 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:20:46 np0005548731 NetworkManager[49182]: <info>  [1765005646.9326] manager: (tapee1e1d41-57): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/170)
Dec  6 02:20:46 np0005548731 nova_compute[232433]: 2025-12-06 07:20:46.934 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  6 02:20:46 np0005548731 nova_compute[232433]: 2025-12-06 07:20:46.938 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:20:46 np0005548731 nova_compute[232433]: 2025-12-06 07:20:46.939 232437 INFO os_vif [None req-905a8f2b-f022-47a6-88fd-d6bd02e3c739 e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:3f:d5:79,bridge_name='br-int',has_traffic_filtering=True,id=ee1e1d41-5735-4e5a-8a77-1785e7a5214a,network=Network(b71f2bd2-56d4-4afa-bd37-2f65a2d061fe),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapee1e1d41-57')#033[00m
Dec  6 02:20:46 np0005548731 ceph-osd[80147]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #49. Immutable memtables: 6.
Dec  6 02:20:47 np0005548731 kernel: tapee1e1d41-57: entered promiscuous mode
Dec  6 02:20:47 np0005548731 NetworkManager[49182]: <info>  [1765005647.0113] manager: (tapee1e1d41-57): new Tun device (/org/freedesktop/NetworkManager/Devices/171)
Dec  6 02:20:47 np0005548731 nova_compute[232433]: 2025-12-06 07:20:47.011 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:20:47 np0005548731 ovn_controller[133927]: 2025-12-06T07:20:47Z|00327|binding|INFO|Claiming lport ee1e1d41-5735-4e5a-8a77-1785e7a5214a for this chassis.
Dec  6 02:20:47 np0005548731 ovn_controller[133927]: 2025-12-06T07:20:47Z|00328|binding|INFO|ee1e1d41-5735-4e5a-8a77-1785e7a5214a: Claiming fa:16:3e:3f:d5:79 10.100.0.4
Dec  6 02:20:47 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:20:47.021 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3f:d5:79 10.100.0.4'], port_security=['fa:16:3e:3f:d5:79 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '9b6b3b79-95ba-4b05-b308-4cb69b61cab5', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b71f2bd2-56d4-4afa-bd37-2f65a2d061fe', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'da41861044a74eee9bc96841e54c57cc', 'neutron:revision_number': '5', 'neutron:security_group_ids': '9ef95b64-f92c-4c6f-a9f3-c169d32d1826', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8ca3fb81-8bc9-4918-afd1-69859f260a75, chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], logical_port=ee1e1d41-5735-4e5a-8a77-1785e7a5214a) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 02:20:47 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:20:47.023 143965 INFO neutron.agent.ovn.metadata.agent [-] Port ee1e1d41-5735-4e5a-8a77-1785e7a5214a in datapath b71f2bd2-56d4-4afa-bd37-2f65a2d061fe bound to our chassis#033[00m
Dec  6 02:20:47 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:20:47.024 143965 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network b71f2bd2-56d4-4afa-bd37-2f65a2d061fe#033[00m
Dec  6 02:20:47 np0005548731 ovn_controller[133927]: 2025-12-06T07:20:47Z|00329|binding|INFO|Setting lport ee1e1d41-5735-4e5a-8a77-1785e7a5214a ovn-installed in OVS
Dec  6 02:20:47 np0005548731 ovn_controller[133927]: 2025-12-06T07:20:47Z|00330|binding|INFO|Setting lport ee1e1d41-5735-4e5a-8a77-1785e7a5214a up in Southbound
Dec  6 02:20:47 np0005548731 nova_compute[232433]: 2025-12-06 07:20:47.032 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:20:47 np0005548731 nova_compute[232433]: 2025-12-06 07:20:47.033 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:20:47 np0005548731 nova_compute[232433]: 2025-12-06 07:20:47.043 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:20:47 np0005548731 systemd-udevd[267348]: Network interface NamePolicy= disabled on kernel command line.
Dec  6 02:20:47 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:20:47.045 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[fe2f334b-3bd1-4d13-8447-dae251a5f55a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:20:47 np0005548731 systemd-machined[195355]: New machine qemu-36-instance-00000053.
Dec  6 02:20:47 np0005548731 NetworkManager[49182]: <info>  [1765005647.0583] device (tapee1e1d41-57): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec  6 02:20:47 np0005548731 NetworkManager[49182]: <info>  [1765005647.0594] device (tapee1e1d41-57): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec  6 02:20:47 np0005548731 systemd[1]: Started Virtual Machine qemu-36-instance-00000053.
Dec  6 02:20:47 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:20:47.075 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[638206ef-4938-4aa1-86c2-6346765e81de]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:20:47 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:20:47.077 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[7d44453b-937f-43df-a6c3-02f4f244a762]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:20:47 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:20:47.105 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[3341e0b8-5bfd-416d-96ee-cb25baf0c272]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:20:47 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:20:47.120 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[c75c2c17-0ebf-4df7-843d-36742adab2bd]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb71f2bd2-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:81:86:33'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 6, 'tx_packets': 9, 'rx_bytes': 532, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 6, 'tx_packets': 9, 'rx_bytes': 532, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 103], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 588246, 'reachable_time': 30434, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 267358, 'error': None, 'target': 'ovnmeta-b71f2bd2-56d4-4afa-bd37-2f65a2d061fe', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:20:47 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:20:47.134 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[c56cc311-fb02-415a-8422-52ead0176473]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapb71f2bd2-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 588258, 'tstamp': 588258}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 267362, 'error': None, 'target': 'ovnmeta-b71f2bd2-56d4-4afa-bd37-2f65a2d061fe', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapb71f2bd2-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 588260, 'tstamp': 588260}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 267362, 'error': None, 'target': 'ovnmeta-b71f2bd2-56d4-4afa-bd37-2f65a2d061fe', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:20:47 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:20:47.135 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb71f2bd2-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:20:47 np0005548731 nova_compute[232433]: 2025-12-06 07:20:47.181 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:20:47 np0005548731 nova_compute[232433]: 2025-12-06 07:20:47.182 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:20:47 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:20:47.182 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb71f2bd2-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:20:47 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:20:47.182 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  6 02:20:47 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:20:47.183 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapb71f2bd2-50, col_values=(('external_ids', {'iface-id': '4c28345a-5229-44ea-b258-9761247daac8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:20:47 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:20:47.183 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  6 02:20:47 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:20:47 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:20:47 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:20:47.242 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:20:47 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e258 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:20:48 np0005548731 nova_compute[232433]: 2025-12-06 07:20:48.141 232437 DEBUG nova.virt.libvirt.host [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Removed pending event for 9b6b3b79-95ba-4b05-b308-4cb69b61cab5 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Dec  6 02:20:48 np0005548731 nova_compute[232433]: 2025-12-06 07:20:48.142 232437 DEBUG nova.virt.driver [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Emitting event <LifecycleEvent: 1765005648.1408088, 9b6b3b79-95ba-4b05-b308-4cb69b61cab5 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 02:20:48 np0005548731 nova_compute[232433]: 2025-12-06 07:20:48.143 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 9b6b3b79-95ba-4b05-b308-4cb69b61cab5] VM Resumed (Lifecycle Event)#033[00m
Dec  6 02:20:48 np0005548731 nova_compute[232433]: 2025-12-06 07:20:48.145 232437 DEBUG nova.compute.manager [None req-905a8f2b-f022-47a6-88fd-d6bd02e3c739 e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] [instance: 9b6b3b79-95ba-4b05-b308-4cb69b61cab5] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec  6 02:20:48 np0005548731 nova_compute[232433]: 2025-12-06 07:20:48.148 232437 INFO nova.virt.libvirt.driver [-] [instance: 9b6b3b79-95ba-4b05-b308-4cb69b61cab5] Instance rebooted successfully.#033[00m
Dec  6 02:20:48 np0005548731 nova_compute[232433]: 2025-12-06 07:20:48.149 232437 DEBUG nova.compute.manager [None req-905a8f2b-f022-47a6-88fd-d6bd02e3c739 e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] [instance: 9b6b3b79-95ba-4b05-b308-4cb69b61cab5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:20:48 np0005548731 nova_compute[232433]: 2025-12-06 07:20:48.180 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 9b6b3b79-95ba-4b05-b308-4cb69b61cab5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:20:48 np0005548731 nova_compute[232433]: 2025-12-06 07:20:48.184 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 9b6b3b79-95ba-4b05-b308-4cb69b61cab5] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: stopped, current task_state: powering-on, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  6 02:20:48 np0005548731 nova_compute[232433]: 2025-12-06 07:20:48.213 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 9b6b3b79-95ba-4b05-b308-4cb69b61cab5] During sync_power_state the instance has a pending task (powering-on). Skip.#033[00m
Dec  6 02:20:48 np0005548731 nova_compute[232433]: 2025-12-06 07:20:48.215 232437 DEBUG nova.virt.driver [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Emitting event <LifecycleEvent: 1765005648.1413674, 9b6b3b79-95ba-4b05-b308-4cb69b61cab5 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 02:20:48 np0005548731 nova_compute[232433]: 2025-12-06 07:20:48.215 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 9b6b3b79-95ba-4b05-b308-4cb69b61cab5] VM Started (Lifecycle Event)#033[00m
Dec  6 02:20:48 np0005548731 nova_compute[232433]: 2025-12-06 07:20:48.235 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 9b6b3b79-95ba-4b05-b308-4cb69b61cab5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:20:48 np0005548731 nova_compute[232433]: 2025-12-06 07:20:48.238 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 9b6b3b79-95ba-4b05-b308-4cb69b61cab5] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  6 02:20:48 np0005548731 nova_compute[232433]: 2025-12-06 07:20:48.382 232437 DEBUG nova.compute.manager [req-bdf6d2c5-d6d8-4fab-b648-28ca67230ae8 req-9cfdcf7f-f477-4a38-b84f-9994e1198cb1 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 9b6b3b79-95ba-4b05-b308-4cb69b61cab5] Received event network-vif-plugged-ee1e1d41-5735-4e5a-8a77-1785e7a5214a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:20:48 np0005548731 nova_compute[232433]: 2025-12-06 07:20:48.382 232437 DEBUG oslo_concurrency.lockutils [req-bdf6d2c5-d6d8-4fab-b648-28ca67230ae8 req-9cfdcf7f-f477-4a38-b84f-9994e1198cb1 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "9b6b3b79-95ba-4b05-b308-4cb69b61cab5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:20:48 np0005548731 nova_compute[232433]: 2025-12-06 07:20:48.382 232437 DEBUG oslo_concurrency.lockutils [req-bdf6d2c5-d6d8-4fab-b648-28ca67230ae8 req-9cfdcf7f-f477-4a38-b84f-9994e1198cb1 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "9b6b3b79-95ba-4b05-b308-4cb69b61cab5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:20:48 np0005548731 nova_compute[232433]: 2025-12-06 07:20:48.383 232437 DEBUG oslo_concurrency.lockutils [req-bdf6d2c5-d6d8-4fab-b648-28ca67230ae8 req-9cfdcf7f-f477-4a38-b84f-9994e1198cb1 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "9b6b3b79-95ba-4b05-b308-4cb69b61cab5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:20:48 np0005548731 nova_compute[232433]: 2025-12-06 07:20:48.383 232437 DEBUG nova.compute.manager [req-bdf6d2c5-d6d8-4fab-b648-28ca67230ae8 req-9cfdcf7f-f477-4a38-b84f-9994e1198cb1 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 9b6b3b79-95ba-4b05-b308-4cb69b61cab5] No waiting events found dispatching network-vif-plugged-ee1e1d41-5735-4e5a-8a77-1785e7a5214a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 02:20:48 np0005548731 nova_compute[232433]: 2025-12-06 07:20:48.383 232437 WARNING nova.compute.manager [req-bdf6d2c5-d6d8-4fab-b648-28ca67230ae8 req-9cfdcf7f-f477-4a38-b84f-9994e1198cb1 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 9b6b3b79-95ba-4b05-b308-4cb69b61cab5] Received unexpected event network-vif-plugged-ee1e1d41-5735-4e5a-8a77-1785e7a5214a for instance with vm_state active and task_state None.#033[00m
Dec  6 02:20:48 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:20:48 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:20:48 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:20:48.774 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:20:49 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:20:49 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 02:20:49 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:20:49.245 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 02:20:50 np0005548731 nova_compute[232433]: 2025-12-06 07:20:50.481 232437 DEBUG nova.compute.manager [req-8d8db7bd-36e8-4b93-8344-fba3ec704c95 req-a36aba87-06e0-4e84-a78c-dbe926459771 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 9b6b3b79-95ba-4b05-b308-4cb69b61cab5] Received event network-vif-plugged-ee1e1d41-5735-4e5a-8a77-1785e7a5214a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:20:50 np0005548731 nova_compute[232433]: 2025-12-06 07:20:50.481 232437 DEBUG oslo_concurrency.lockutils [req-8d8db7bd-36e8-4b93-8344-fba3ec704c95 req-a36aba87-06e0-4e84-a78c-dbe926459771 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "9b6b3b79-95ba-4b05-b308-4cb69b61cab5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:20:50 np0005548731 nova_compute[232433]: 2025-12-06 07:20:50.482 232437 DEBUG oslo_concurrency.lockutils [req-8d8db7bd-36e8-4b93-8344-fba3ec704c95 req-a36aba87-06e0-4e84-a78c-dbe926459771 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "9b6b3b79-95ba-4b05-b308-4cb69b61cab5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:20:50 np0005548731 nova_compute[232433]: 2025-12-06 07:20:50.482 232437 DEBUG oslo_concurrency.lockutils [req-8d8db7bd-36e8-4b93-8344-fba3ec704c95 req-a36aba87-06e0-4e84-a78c-dbe926459771 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "9b6b3b79-95ba-4b05-b308-4cb69b61cab5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:20:50 np0005548731 nova_compute[232433]: 2025-12-06 07:20:50.482 232437 DEBUG nova.compute.manager [req-8d8db7bd-36e8-4b93-8344-fba3ec704c95 req-a36aba87-06e0-4e84-a78c-dbe926459771 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 9b6b3b79-95ba-4b05-b308-4cb69b61cab5] No waiting events found dispatching network-vif-plugged-ee1e1d41-5735-4e5a-8a77-1785e7a5214a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 02:20:50 np0005548731 nova_compute[232433]: 2025-12-06 07:20:50.482 232437 WARNING nova.compute.manager [req-8d8db7bd-36e8-4b93-8344-fba3ec704c95 req-a36aba87-06e0-4e84-a78c-dbe926459771 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 9b6b3b79-95ba-4b05-b308-4cb69b61cab5] Received unexpected event network-vif-plugged-ee1e1d41-5735-4e5a-8a77-1785e7a5214a for instance with vm_state active and task_state None.#033[00m
Dec  6 02:20:50 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:20:50 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:20:50 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:20:50.776 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:20:51 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:20:51 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:20:51 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:20:51.249 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:20:51 np0005548731 nova_compute[232433]: 2025-12-06 07:20:51.932 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:20:52 np0005548731 nova_compute[232433]: 2025-12-06 07:20:52.046 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:20:52 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:20:52 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:20:52 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:20:52.778 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:20:52 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e258 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:20:53 np0005548731 nova_compute[232433]: 2025-12-06 07:20:53.099 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:20:53 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:20:53 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 02:20:53 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:20:53.251 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 02:20:54 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:20:54.313 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=33, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:ec:b3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '32:72:e7:89:e0:7d'}, ipsec=False) old=SB_Global(nb_cfg=32) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 02:20:54 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:20:54.315 143965 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec  6 02:20:54 np0005548731 nova_compute[232433]: 2025-12-06 07:20:54.314 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:20:54 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:20:54 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:20:54 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:20:54.780 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:20:55 np0005548731 nova_compute[232433]: 2025-12-06 07:20:55.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:20:55 np0005548731 nova_compute[232433]: 2025-12-06 07:20:55.105 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  6 02:20:55 np0005548731 nova_compute[232433]: 2025-12-06 07:20:55.106 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  6 02:20:55 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:20:55 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:20:55 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:20:55.254 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:20:55 np0005548731 nova_compute[232433]: 2025-12-06 07:20:55.369 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Acquiring lock "refresh_cache-9b6b3b79-95ba-4b05-b308-4cb69b61cab5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 02:20:55 np0005548731 nova_compute[232433]: 2025-12-06 07:20:55.370 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Acquired lock "refresh_cache-9b6b3b79-95ba-4b05-b308-4cb69b61cab5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 02:20:55 np0005548731 nova_compute[232433]: 2025-12-06 07:20:55.370 232437 DEBUG nova.network.neutron [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] [instance: 9b6b3b79-95ba-4b05-b308-4cb69b61cab5] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Dec  6 02:20:55 np0005548731 nova_compute[232433]: 2025-12-06 07:20:55.371 232437 DEBUG nova.objects.instance [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lazy-loading 'info_cache' on Instance uuid 9b6b3b79-95ba-4b05-b308-4cb69b61cab5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:20:56 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:20:56 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:20:56 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:20:56.782 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:20:56 np0005548731 nova_compute[232433]: 2025-12-06 07:20:56.936 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:20:57 np0005548731 nova_compute[232433]: 2025-12-06 07:20:57.049 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:20:57 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:20:57 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:20:57 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:20:57.257 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:20:57 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e258 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:20:58 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:20:58 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:20:58 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:20:58.784 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:20:58 np0005548731 nova_compute[232433]: 2025-12-06 07:20:58.800 232437 DEBUG nova.network.neutron [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] [instance: 9b6b3b79-95ba-4b05-b308-4cb69b61cab5] Updating instance_info_cache with network_info: [{"id": "ee1e1d41-5735-4e5a-8a77-1785e7a5214a", "address": "fa:16:3e:3f:d5:79", "network": {"id": "b71f2bd2-56d4-4afa-bd37-2f65a2d061fe", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-1522505220-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da41861044a74eee9bc96841e54c57cc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapee1e1d41-57", "ovs_interfaceid": "ee1e1d41-5735-4e5a-8a77-1785e7a5214a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 02:20:58 np0005548731 nova_compute[232433]: 2025-12-06 07:20:58.828 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Releasing lock "refresh_cache-9b6b3b79-95ba-4b05-b308-4cb69b61cab5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 02:20:58 np0005548731 nova_compute[232433]: 2025-12-06 07:20:58.829 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] [instance: 9b6b3b79-95ba-4b05-b308-4cb69b61cab5] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Dec  6 02:20:58 np0005548731 nova_compute[232433]: 2025-12-06 07:20:58.829 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:20:58 np0005548731 nova_compute[232433]: 2025-12-06 07:20:58.829 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:20:58 np0005548731 nova_compute[232433]: 2025-12-06 07:20:58.829 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  6 02:20:59 np0005548731 nova_compute[232433]: 2025-12-06 07:20:59.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:20:59 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:20:59 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:20:59 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:20:59.259 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:20:59 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:20:59.317 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=9f96b960-b4f2-40bd-ae99-08121f5e8b78, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '33'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:21:00 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:21:00 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:21:00 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:21:00.786 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:21:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:21:00.863 143965 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:21:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:21:00.863 143965 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:21:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:21:00.864 143965 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:21:01 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:21:01 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:21:01 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:21:01.263 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:21:01 np0005548731 ovn_controller[133927]: 2025-12-06T07:21:01Z|00051|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:3f:d5:79 10.100.0.4
Dec  6 02:21:01 np0005548731 ovn_controller[133927]: 2025-12-06T07:21:01Z|00052|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:3f:d5:79 10.100.0.4
Dec  6 02:21:01 np0005548731 nova_compute[232433]: 2025-12-06 07:21:01.541 232437 DEBUG oslo_concurrency.lockutils [None req-6acfc63d-40c0-4bea-9487-95c8e89dcd98 e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] Acquiring lock "2092b945-90e0-4a04-aabb-cc48efe223da" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:21:01 np0005548731 nova_compute[232433]: 2025-12-06 07:21:01.542 232437 DEBUG oslo_concurrency.lockutils [None req-6acfc63d-40c0-4bea-9487-95c8e89dcd98 e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] Lock "2092b945-90e0-4a04-aabb-cc48efe223da" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:21:01 np0005548731 nova_compute[232433]: 2025-12-06 07:21:01.542 232437 DEBUG oslo_concurrency.lockutils [None req-6acfc63d-40c0-4bea-9487-95c8e89dcd98 e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] Acquiring lock "2092b945-90e0-4a04-aabb-cc48efe223da-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:21:01 np0005548731 nova_compute[232433]: 2025-12-06 07:21:01.543 232437 DEBUG oslo_concurrency.lockutils [None req-6acfc63d-40c0-4bea-9487-95c8e89dcd98 e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] Lock "2092b945-90e0-4a04-aabb-cc48efe223da-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:21:01 np0005548731 nova_compute[232433]: 2025-12-06 07:21:01.543 232437 DEBUG oslo_concurrency.lockutils [None req-6acfc63d-40c0-4bea-9487-95c8e89dcd98 e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] Lock "2092b945-90e0-4a04-aabb-cc48efe223da-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:21:01 np0005548731 nova_compute[232433]: 2025-12-06 07:21:01.544 232437 INFO nova.compute.manager [None req-6acfc63d-40c0-4bea-9487-95c8e89dcd98 e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] [instance: 2092b945-90e0-4a04-aabb-cc48efe223da] Terminating instance#033[00m
Dec  6 02:21:01 np0005548731 nova_compute[232433]: 2025-12-06 07:21:01.546 232437 DEBUG nova.compute.manager [None req-6acfc63d-40c0-4bea-9487-95c8e89dcd98 e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] [instance: 2092b945-90e0-4a04-aabb-cc48efe223da] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec  6 02:21:01 np0005548731 nova_compute[232433]: 2025-12-06 07:21:01.939 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:21:01 np0005548731 kernel: tap05b4782c-e0 (unregistering): left promiscuous mode
Dec  6 02:21:01 np0005548731 NetworkManager[49182]: <info>  [1765005661.9555] device (tap05b4782c-e0): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec  6 02:21:01 np0005548731 podman[267413]: 2025-12-06 07:21:01.956727459 +0000 UTC m=+0.103234905 container health_status 26c2ce9bdb83c6db5ccad7f5b2a7cb4e9354ab0235ad2f942ea42c8feda2142b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, 
org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, tcib_managed=true)
Dec  6 02:21:01 np0005548731 nova_compute[232433]: 2025-12-06 07:21:01.964 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:21:01 np0005548731 ovn_controller[133927]: 2025-12-06T07:21:01Z|00331|binding|INFO|Releasing lport 05b4782c-e038-4443-a445-b3ff5c13e826 from this chassis (sb_readonly=0)
Dec  6 02:21:01 np0005548731 ovn_controller[133927]: 2025-12-06T07:21:01Z|00332|binding|INFO|Setting lport 05b4782c-e038-4443-a445-b3ff5c13e826 down in Southbound
Dec  6 02:21:01 np0005548731 ovn_controller[133927]: 2025-12-06T07:21:01Z|00333|binding|INFO|Removing iface tap05b4782c-e0 ovn-installed in OVS
Dec  6 02:21:01 np0005548731 podman[267415]: 2025-12-06 07:21:01.969624743 +0000 UTC m=+0.112579262 container health_status ae4fd08cdec31d6ae2a6b91ffff7deb6ca5a8375cb52ee4b685cc6f25796824e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true, config_id=multipathd, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Dec  6 02:21:01 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:21:01.971 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:44:86:a3 10.100.0.7'], port_security=['fa:16:3e:44:86:a3 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '2092b945-90e0-4a04-aabb-cc48efe223da', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b71f2bd2-56d4-4afa-bd37-2f65a2d061fe', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'da41861044a74eee9bc96841e54c57cc', 'neutron:revision_number': '4', 'neutron:security_group_ids': '9ef95b64-f92c-4c6f-a9f3-c169d32d1826', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8ca3fb81-8bc9-4918-afd1-69859f260a75, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], logical_port=05b4782c-e038-4443-a445-b3ff5c13e826) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 02:21:01 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:21:01.972 143965 INFO neutron.agent.ovn.metadata.agent [-] Port 05b4782c-e038-4443-a445-b3ff5c13e826 in datapath b71f2bd2-56d4-4afa-bd37-2f65a2d061fe unbound from our chassis#033[00m
Dec  6 02:21:01 np0005548731 nova_compute[232433]: 2025-12-06 07:21:01.967 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:21:01 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:21:01.974 143965 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network b71f2bd2-56d4-4afa-bd37-2f65a2d061fe#033[00m
Dec  6 02:21:01 np0005548731 nova_compute[232433]: 2025-12-06 07:21:01.984 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:21:01 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:21:01.996 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[cdf87888-190d-4c1e-a6b3-74ff78273daf]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:21:02 np0005548731 podman[267414]: 2025-12-06 07:21:02.005494297 +0000 UTC m=+0.151293535 container health_status a118c3becf56cfbcae208065771ebf10a65162bb7b96389971293e534823fb78 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, tcib_managed=true, container_name=ovn_controller)
Dec  6 02:21:02 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:21:02.025 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[cbf4a86d-91ee-4fb5-918b-a119bb1de877]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:21:02 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:21:02.028 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[2818b9cc-d817-4efc-8d8f-f2b9ab56550f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:21:02 np0005548731 systemd[1]: machine-qemu\x2d35\x2dinstance\x2d00000055.scope: Deactivated successfully.
Dec  6 02:21:02 np0005548731 systemd[1]: machine-qemu\x2d35\x2dinstance\x2d00000055.scope: Consumed 15.740s CPU time.
Dec  6 02:21:02 np0005548731 systemd-machined[195355]: Machine qemu-35-instance-00000055 terminated.
Dec  6 02:21:02 np0005548731 nova_compute[232433]: 2025-12-06 07:21:02.052 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:21:02 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:21:02.058 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[5fc03ace-8449-4998-ba9a-6cfeaeb8e381]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:21:02 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:21:02.075 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[a82d6113-a26a-4dcd-96a5-ed75adf37daf]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb71f2bd2-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:81:86:33'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 11, 'rx_bytes': 616, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 11, 'rx_bytes': 616, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 103], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 588246, 'reachable_time': 30434, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 267487, 'error': None, 'target': 'ovnmeta-b71f2bd2-56d4-4afa-bd37-2f65a2d061fe', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:21:02 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:21:02.092 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[eb96c61a-789d-433a-b89f-24942e044b95]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapb71f2bd2-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 588258, 'tstamp': 588258}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 267488, 'error': None, 'target': 'ovnmeta-b71f2bd2-56d4-4afa-bd37-2f65a2d061fe', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapb71f2bd2-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 588260, 'tstamp': 588260}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 267488, 'error': None, 'target': 'ovnmeta-b71f2bd2-56d4-4afa-bd37-2f65a2d061fe', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:21:02 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:21:02.093 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb71f2bd2-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:21:02 np0005548731 nova_compute[232433]: 2025-12-06 07:21:02.095 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:21:02 np0005548731 nova_compute[232433]: 2025-12-06 07:21:02.098 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:21:02 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:21:02.099 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb71f2bd2-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:21:02 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:21:02.099 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  6 02:21:02 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:21:02.099 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapb71f2bd2-50, col_values=(('external_ids', {'iface-id': '4c28345a-5229-44ea-b258-9761247daac8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:21:02 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:21:02.100 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  6 02:21:02 np0005548731 nova_compute[232433]: 2025-12-06 07:21:02.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:21:02 np0005548731 nova_compute[232433]: 2025-12-06 07:21:02.180 232437 INFO nova.virt.libvirt.driver [-] [instance: 2092b945-90e0-4a04-aabb-cc48efe223da] Instance destroyed successfully.#033[00m
Dec  6 02:21:02 np0005548731 nova_compute[232433]: 2025-12-06 07:21:02.180 232437 DEBUG nova.objects.instance [None req-6acfc63d-40c0-4bea-9487-95c8e89dcd98 e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] Lazy-loading 'resources' on Instance uuid 2092b945-90e0-4a04-aabb-cc48efe223da obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:21:02 np0005548731 nova_compute[232433]: 2025-12-06 07:21:02.196 232437 DEBUG nova.virt.libvirt.vif [None req-6acfc63d-40c0-4bea-9487-95c8e89dcd98 e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T07:20:15Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ListServerFiltersTestJSON-instance-10615280',display_name='tempest-ListServerFiltersTestJSON-instance-10615280',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-listserverfilterstestjson-instance-10615280',id=85,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-06T07:20:26Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='da41861044a74eee9bc96841e54c57cc',ramdisk_id='',reservation_id='r-p00aeetk',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ListServerFiltersTestJSON-1681699386',owner_user_name='tempest-ListServerFiltersTestJSON-1681699386-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T07:20:26Z,user_data=None,user_id='e3d8ea3dcb5b4ee3b9bd24c62fdd61b2',uuid=2092b945-90e0-4a04-aabb-cc48efe223da,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "05b4782c-e038-4443-a445-b3ff5c13e826", "address": "fa:16:3e:44:86:a3", "network": {"id": "b71f2bd2-56d4-4afa-bd37-2f65a2d061fe", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-1522505220-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da41861044a74eee9bc96841e54c57cc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05b4782c-e0", "ovs_interfaceid": "05b4782c-e038-4443-a445-b3ff5c13e826", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec  6 02:21:02 np0005548731 nova_compute[232433]: 2025-12-06 07:21:02.196 232437 DEBUG nova.network.os_vif_util [None req-6acfc63d-40c0-4bea-9487-95c8e89dcd98 e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] Converting VIF {"id": "05b4782c-e038-4443-a445-b3ff5c13e826", "address": "fa:16:3e:44:86:a3", "network": {"id": "b71f2bd2-56d4-4afa-bd37-2f65a2d061fe", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-1522505220-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da41861044a74eee9bc96841e54c57cc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05b4782c-e0", "ovs_interfaceid": "05b4782c-e038-4443-a445-b3ff5c13e826", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 02:21:02 np0005548731 nova_compute[232433]: 2025-12-06 07:21:02.196 232437 DEBUG nova.network.os_vif_util [None req-6acfc63d-40c0-4bea-9487-95c8e89dcd98 e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:44:86:a3,bridge_name='br-int',has_traffic_filtering=True,id=05b4782c-e038-4443-a445-b3ff5c13e826,network=Network(b71f2bd2-56d4-4afa-bd37-2f65a2d061fe),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap05b4782c-e0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 02:21:02 np0005548731 nova_compute[232433]: 2025-12-06 07:21:02.197 232437 DEBUG os_vif [None req-6acfc63d-40c0-4bea-9487-95c8e89dcd98 e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:44:86:a3,bridge_name='br-int',has_traffic_filtering=True,id=05b4782c-e038-4443-a445-b3ff5c13e826,network=Network(b71f2bd2-56d4-4afa-bd37-2f65a2d061fe),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap05b4782c-e0') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec  6 02:21:02 np0005548731 nova_compute[232433]: 2025-12-06 07:21:02.198 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:21:02 np0005548731 nova_compute[232433]: 2025-12-06 07:21:02.199 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap05b4782c-e0, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:21:02 np0005548731 nova_compute[232433]: 2025-12-06 07:21:02.200 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:21:02 np0005548731 nova_compute[232433]: 2025-12-06 07:21:02.201 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:21:02 np0005548731 nova_compute[232433]: 2025-12-06 07:21:02.203 232437 INFO os_vif [None req-6acfc63d-40c0-4bea-9487-95c8e89dcd98 e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:44:86:a3,bridge_name='br-int',has_traffic_filtering=True,id=05b4782c-e038-4443-a445-b3ff5c13e826,network=Network(b71f2bd2-56d4-4afa-bd37-2f65a2d061fe),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap05b4782c-e0')#033[00m
Dec  6 02:21:02 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:21:02 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:21:02 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:21:02.789 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:21:02 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e258 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:21:03 np0005548731 nova_compute[232433]: 2025-12-06 07:21:03.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:21:03 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:21:03 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 02:21:03 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:21:03.265 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 02:21:04 np0005548731 nova_compute[232433]: 2025-12-06 07:21:04.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:21:04 np0005548731 nova_compute[232433]: 2025-12-06 07:21:04.134 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:21:04 np0005548731 nova_compute[232433]: 2025-12-06 07:21:04.134 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:21:04 np0005548731 nova_compute[232433]: 2025-12-06 07:21:04.134 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:21:04 np0005548731 nova_compute[232433]: 2025-12-06 07:21:04.135 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  6 02:21:04 np0005548731 nova_compute[232433]: 2025-12-06 07:21:04.135 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:21:04 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 02:21:04 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3572484380' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 02:21:04 np0005548731 nova_compute[232433]: 2025-12-06 07:21:04.563 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.428s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:21:04 np0005548731 nova_compute[232433]: 2025-12-06 07:21:04.645 232437 DEBUG nova.virt.libvirt.driver [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] skipping disk for instance-00000053 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  6 02:21:04 np0005548731 nova_compute[232433]: 2025-12-06 07:21:04.646 232437 DEBUG nova.virt.libvirt.driver [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] skipping disk for instance-00000053 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  6 02:21:04 np0005548731 nova_compute[232433]: 2025-12-06 07:21:04.648 232437 DEBUG nova.virt.libvirt.driver [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  6 02:21:04 np0005548731 nova_compute[232433]: 2025-12-06 07:21:04.648 232437 DEBUG nova.virt.libvirt.driver [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  6 02:21:04 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:21:04 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 02:21:04 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:21:04.791 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 02:21:04 np0005548731 nova_compute[232433]: 2025-12-06 07:21:04.802 232437 WARNING nova.virt.libvirt.driver [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  6 02:21:04 np0005548731 nova_compute[232433]: 2025-12-06 07:21:04.803 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4378MB free_disk=20.811664581298828GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  6 02:21:04 np0005548731 nova_compute[232433]: 2025-12-06 07:21:04.803 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:21:04 np0005548731 nova_compute[232433]: 2025-12-06 07:21:04.803 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:21:04 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 02:21:04 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Dec  6 02:21:04 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Dec  6 02:21:05 np0005548731 nova_compute[232433]: 2025-12-06 07:21:05.080 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Instance 9b6b3b79-95ba-4b05-b308-4cb69b61cab5 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  6 02:21:05 np0005548731 nova_compute[232433]: 2025-12-06 07:21:05.081 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Instance 2092b945-90e0-4a04-aabb-cc48efe223da actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 192, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  6 02:21:05 np0005548731 nova_compute[232433]: 2025-12-06 07:21:05.081 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  6 02:21:05 np0005548731 nova_compute[232433]: 2025-12-06 07:21:05.081 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7680MB used_ram=832MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  6 02:21:05 np0005548731 nova_compute[232433]: 2025-12-06 07:21:05.149 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:21:05 np0005548731 nova_compute[232433]: 2025-12-06 07:21:05.189 232437 DEBUG nova.compute.manager [req-dfa1449b-b9e3-41f6-a427-2823fd4720c8 req-f2caf181-8f1d-4bb7-aad7-b0fb926e22d4 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 2092b945-90e0-4a04-aabb-cc48efe223da] Received event network-vif-unplugged-05b4782c-e038-4443-a445-b3ff5c13e826 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:21:05 np0005548731 nova_compute[232433]: 2025-12-06 07:21:05.190 232437 DEBUG oslo_concurrency.lockutils [req-dfa1449b-b9e3-41f6-a427-2823fd4720c8 req-f2caf181-8f1d-4bb7-aad7-b0fb926e22d4 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "2092b945-90e0-4a04-aabb-cc48efe223da-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:21:05 np0005548731 nova_compute[232433]: 2025-12-06 07:21:05.190 232437 DEBUG oslo_concurrency.lockutils [req-dfa1449b-b9e3-41f6-a427-2823fd4720c8 req-f2caf181-8f1d-4bb7-aad7-b0fb926e22d4 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "2092b945-90e0-4a04-aabb-cc48efe223da-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:21:05 np0005548731 nova_compute[232433]: 2025-12-06 07:21:05.190 232437 DEBUG oslo_concurrency.lockutils [req-dfa1449b-b9e3-41f6-a427-2823fd4720c8 req-f2caf181-8f1d-4bb7-aad7-b0fb926e22d4 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "2092b945-90e0-4a04-aabb-cc48efe223da-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:21:05 np0005548731 nova_compute[232433]: 2025-12-06 07:21:05.190 232437 DEBUG nova.compute.manager [req-dfa1449b-b9e3-41f6-a427-2823fd4720c8 req-f2caf181-8f1d-4bb7-aad7-b0fb926e22d4 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 2092b945-90e0-4a04-aabb-cc48efe223da] No waiting events found dispatching network-vif-unplugged-05b4782c-e038-4443-a445-b3ff5c13e826 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 02:21:05 np0005548731 nova_compute[232433]: 2025-12-06 07:21:05.191 232437 DEBUG nova.compute.manager [req-dfa1449b-b9e3-41f6-a427-2823fd4720c8 req-f2caf181-8f1d-4bb7-aad7-b0fb926e22d4 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 2092b945-90e0-4a04-aabb-cc48efe223da] Received event network-vif-unplugged-05b4782c-e038-4443-a445-b3ff5c13e826 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Dec  6 02:21:05 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:21:05 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:21:05 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:21:05.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:21:05 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 02:21:05 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/934338195' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 02:21:05 np0005548731 nova_compute[232433]: 2025-12-06 07:21:05.587 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.438s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:21:05 np0005548731 nova_compute[232433]: 2025-12-06 07:21:05.593 232437 DEBUG nova.compute.provider_tree [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Inventory has not changed in ProviderTree for provider: 6d00757a-082f-486d-ae84-869a2ba2e6e7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  6 02:21:05 np0005548731 nova_compute[232433]: 2025-12-06 07:21:05.606 232437 DEBUG nova.scheduler.client.report [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Inventory has not changed for provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  6 02:21:05 np0005548731 nova_compute[232433]: 2025-12-06 07:21:05.637 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  6 02:21:05 np0005548731 nova_compute[232433]: 2025-12-06 07:21:05.637 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.834s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:21:06 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:21:06 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:21:06 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:21:06.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:21:07 np0005548731 nova_compute[232433]: 2025-12-06 07:21:07.054 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:21:07 np0005548731 nova_compute[232433]: 2025-12-06 07:21:07.200 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:21:07 np0005548731 nova_compute[232433]: 2025-12-06 07:21:07.267 232437 DEBUG nova.compute.manager [req-7875e81c-d21e-4ed5-9b61-840dc132e1ad req-1409cdb7-793f-4d4d-9ec8-216250f4ff59 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 2092b945-90e0-4a04-aabb-cc48efe223da] Received event network-vif-plugged-05b4782c-e038-4443-a445-b3ff5c13e826 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:21:07 np0005548731 nova_compute[232433]: 2025-12-06 07:21:07.268 232437 DEBUG oslo_concurrency.lockutils [req-7875e81c-d21e-4ed5-9b61-840dc132e1ad req-1409cdb7-793f-4d4d-9ec8-216250f4ff59 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "2092b945-90e0-4a04-aabb-cc48efe223da-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:21:07 np0005548731 nova_compute[232433]: 2025-12-06 07:21:07.268 232437 DEBUG oslo_concurrency.lockutils [req-7875e81c-d21e-4ed5-9b61-840dc132e1ad req-1409cdb7-793f-4d4d-9ec8-216250f4ff59 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "2092b945-90e0-4a04-aabb-cc48efe223da-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:21:07 np0005548731 nova_compute[232433]: 2025-12-06 07:21:07.268 232437 DEBUG oslo_concurrency.lockutils [req-7875e81c-d21e-4ed5-9b61-840dc132e1ad req-1409cdb7-793f-4d4d-9ec8-216250f4ff59 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "2092b945-90e0-4a04-aabb-cc48efe223da-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:21:07 np0005548731 nova_compute[232433]: 2025-12-06 07:21:07.269 232437 DEBUG nova.compute.manager [req-7875e81c-d21e-4ed5-9b61-840dc132e1ad req-1409cdb7-793f-4d4d-9ec8-216250f4ff59 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 2092b945-90e0-4a04-aabb-cc48efe223da] No waiting events found dispatching network-vif-plugged-05b4782c-e038-4443-a445-b3ff5c13e826 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 02:21:07 np0005548731 nova_compute[232433]: 2025-12-06 07:21:07.269 232437 WARNING nova.compute.manager [req-7875e81c-d21e-4ed5-9b61-840dc132e1ad req-1409cdb7-793f-4d4d-9ec8-216250f4ff59 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 2092b945-90e0-4a04-aabb-cc48efe223da] Received unexpected event network-vif-plugged-05b4782c-e038-4443-a445-b3ff5c13e826 for instance with vm_state active and task_state deleting.#033[00m
Dec  6 02:21:07 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:21:07 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 02:21:07 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:21:07.271 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 02:21:07 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 02:21:07 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  6 02:21:07 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 02:21:07 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2443488506' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 02:21:07 np0005548731 nova_compute[232433]: 2025-12-06 07:21:07.657 232437 INFO nova.virt.libvirt.driver [None req-6acfc63d-40c0-4bea-9487-95c8e89dcd98 e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] [instance: 2092b945-90e0-4a04-aabb-cc48efe223da] Deleting instance files /var/lib/nova/instances/2092b945-90e0-4a04-aabb-cc48efe223da_del#033[00m
Dec  6 02:21:07 np0005548731 nova_compute[232433]: 2025-12-06 07:21:07.658 232437 INFO nova.virt.libvirt.driver [None req-6acfc63d-40c0-4bea-9487-95c8e89dcd98 e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] [instance: 2092b945-90e0-4a04-aabb-cc48efe223da] Deletion of /var/lib/nova/instances/2092b945-90e0-4a04-aabb-cc48efe223da_del complete#033[00m
Dec  6 02:21:07 np0005548731 nova_compute[232433]: 2025-12-06 07:21:07.712 232437 INFO nova.compute.manager [None req-6acfc63d-40c0-4bea-9487-95c8e89dcd98 e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] [instance: 2092b945-90e0-4a04-aabb-cc48efe223da] Took 6.17 seconds to destroy the instance on the hypervisor.#033[00m
Dec  6 02:21:07 np0005548731 nova_compute[232433]: 2025-12-06 07:21:07.713 232437 DEBUG oslo.service.loopingcall [None req-6acfc63d-40c0-4bea-9487-95c8e89dcd98 e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Dec  6 02:21:07 np0005548731 nova_compute[232433]: 2025-12-06 07:21:07.713 232437 DEBUG nova.compute.manager [-] [instance: 2092b945-90e0-4a04-aabb-cc48efe223da] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Dec  6 02:21:07 np0005548731 nova_compute[232433]: 2025-12-06 07:21:07.713 232437 DEBUG nova.network.neutron [-] [instance: 2092b945-90e0-4a04-aabb-cc48efe223da] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Dec  6 02:21:07 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e258 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:21:08 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 02:21:08 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  6 02:21:08 np0005548731 nova_compute[232433]: 2025-12-06 07:21:08.542 232437 DEBUG nova.network.neutron [-] [instance: 2092b945-90e0-4a04-aabb-cc48efe223da] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 02:21:08 np0005548731 nova_compute[232433]: 2025-12-06 07:21:08.577 232437 INFO nova.compute.manager [-] [instance: 2092b945-90e0-4a04-aabb-cc48efe223da] Took 0.86 seconds to deallocate network for instance.#033[00m
Dec  6 02:21:08 np0005548731 nova_compute[232433]: 2025-12-06 07:21:08.623 232437 DEBUG oslo_concurrency.lockutils [None req-6acfc63d-40c0-4bea-9487-95c8e89dcd98 e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:21:08 np0005548731 nova_compute[232433]: 2025-12-06 07:21:08.624 232437 DEBUG oslo_concurrency.lockutils [None req-6acfc63d-40c0-4bea-9487-95c8e89dcd98 e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:21:08 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  6 02:21:08 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/559791660' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  6 02:21:08 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  6 02:21:08 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/559791660' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  6 02:21:08 np0005548731 nova_compute[232433]: 2025-12-06 07:21:08.774 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:21:08 np0005548731 nova_compute[232433]: 2025-12-06 07:21:08.776 232437 DEBUG nova.compute.manager [req-26255c94-f4a2-4e3c-ba67-5faad267f906 req-e59f147e-b9f4-4564-a7e0-28ff339e43c9 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 2092b945-90e0-4a04-aabb-cc48efe223da] Received event network-vif-deleted-05b4782c-e038-4443-a445-b3ff5c13e826 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:21:08 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:21:08 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:21:08 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:21:08.797 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:21:08 np0005548731 nova_compute[232433]: 2025-12-06 07:21:08.805 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:21:08 np0005548731 nova_compute[232433]: 2025-12-06 07:21:08.876 232437 DEBUG oslo_concurrency.processutils [None req-6acfc63d-40c0-4bea-9487-95c8e89dcd98 e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:21:09 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:21:09 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:21:09 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:21:09.274 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:21:09 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 02:21:09 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2657122733' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 02:21:09 np0005548731 nova_compute[232433]: 2025-12-06 07:21:09.296 232437 DEBUG oslo_concurrency.processutils [None req-6acfc63d-40c0-4bea-9487-95c8e89dcd98 e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.419s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:21:09 np0005548731 nova_compute[232433]: 2025-12-06 07:21:09.302 232437 DEBUG nova.compute.provider_tree [None req-6acfc63d-40c0-4bea-9487-95c8e89dcd98 e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] Inventory has not changed in ProviderTree for provider: 6d00757a-082f-486d-ae84-869a2ba2e6e7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  6 02:21:09 np0005548731 nova_compute[232433]: 2025-12-06 07:21:09.321 232437 DEBUG nova.scheduler.client.report [None req-6acfc63d-40c0-4bea-9487-95c8e89dcd98 e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] Inventory has not changed for provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  6 02:21:09 np0005548731 nova_compute[232433]: 2025-12-06 07:21:09.345 232437 DEBUG oslo_concurrency.lockutils [None req-6acfc63d-40c0-4bea-9487-95c8e89dcd98 e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.722s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:21:09 np0005548731 nova_compute[232433]: 2025-12-06 07:21:09.382 232437 INFO nova.scheduler.client.report [None req-6acfc63d-40c0-4bea-9487-95c8e89dcd98 e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] Deleted allocations for instance 2092b945-90e0-4a04-aabb-cc48efe223da#033[00m
Dec  6 02:21:09 np0005548731 nova_compute[232433]: 2025-12-06 07:21:09.451 232437 DEBUG oslo_concurrency.lockutils [None req-6acfc63d-40c0-4bea-9487-95c8e89dcd98 e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] Lock "2092b945-90e0-4a04-aabb-cc48efe223da" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 7.910s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:21:10 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:21:10 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:21:10 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:21:10.800 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:21:11 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:21:11 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:21:11 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:21:11.277 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:21:12 np0005548731 nova_compute[232433]: 2025-12-06 07:21:12.100 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:21:12 np0005548731 nova_compute[232433]: 2025-12-06 07:21:12.202 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:21:12 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:21:12 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:21:12 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:21:12.802 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:21:12 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e258 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:21:12 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 02:21:12 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 02:21:13 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:21:13 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 02:21:13 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:21:13.281 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 02:21:13 np0005548731 nova_compute[232433]: 2025-12-06 07:21:13.743 232437 DEBUG oslo_concurrency.lockutils [None req-bc563657-b1a8-4ede-a908-8abac66ffd56 e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] Acquiring lock "9b6b3b79-95ba-4b05-b308-4cb69b61cab5" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:21:13 np0005548731 nova_compute[232433]: 2025-12-06 07:21:13.744 232437 DEBUG oslo_concurrency.lockutils [None req-bc563657-b1a8-4ede-a908-8abac66ffd56 e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] Lock "9b6b3b79-95ba-4b05-b308-4cb69b61cab5" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:21:13 np0005548731 nova_compute[232433]: 2025-12-06 07:21:13.744 232437 DEBUG oslo_concurrency.lockutils [None req-bc563657-b1a8-4ede-a908-8abac66ffd56 e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] Acquiring lock "9b6b3b79-95ba-4b05-b308-4cb69b61cab5-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:21:13 np0005548731 nova_compute[232433]: 2025-12-06 07:21:13.744 232437 DEBUG oslo_concurrency.lockutils [None req-bc563657-b1a8-4ede-a908-8abac66ffd56 e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] Lock "9b6b3b79-95ba-4b05-b308-4cb69b61cab5-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:21:13 np0005548731 nova_compute[232433]: 2025-12-06 07:21:13.744 232437 DEBUG oslo_concurrency.lockutils [None req-bc563657-b1a8-4ede-a908-8abac66ffd56 e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] Lock "9b6b3b79-95ba-4b05-b308-4cb69b61cab5-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:21:13 np0005548731 nova_compute[232433]: 2025-12-06 07:21:13.745 232437 INFO nova.compute.manager [None req-bc563657-b1a8-4ede-a908-8abac66ffd56 e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] [instance: 9b6b3b79-95ba-4b05-b308-4cb69b61cab5] Terminating instance#033[00m
Dec  6 02:21:13 np0005548731 nova_compute[232433]: 2025-12-06 07:21:13.746 232437 DEBUG nova.compute.manager [None req-bc563657-b1a8-4ede-a908-8abac66ffd56 e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] [instance: 9b6b3b79-95ba-4b05-b308-4cb69b61cab5] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec  6 02:21:13 np0005548731 kernel: tapee1e1d41-57 (unregistering): left promiscuous mode
Dec  6 02:21:13 np0005548731 NetworkManager[49182]: <info>  [1765005673.8033] device (tapee1e1d41-57): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec  6 02:21:13 np0005548731 ovn_controller[133927]: 2025-12-06T07:21:13Z|00334|binding|INFO|Releasing lport ee1e1d41-5735-4e5a-8a77-1785e7a5214a from this chassis (sb_readonly=0)
Dec  6 02:21:13 np0005548731 nova_compute[232433]: 2025-12-06 07:21:13.813 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:21:13 np0005548731 ovn_controller[133927]: 2025-12-06T07:21:13Z|00335|binding|INFO|Setting lport ee1e1d41-5735-4e5a-8a77-1785e7a5214a down in Southbound
Dec  6 02:21:13 np0005548731 ovn_controller[133927]: 2025-12-06T07:21:13Z|00336|binding|INFO|Removing iface tapee1e1d41-57 ovn-installed in OVS
Dec  6 02:21:13 np0005548731 nova_compute[232433]: 2025-12-06 07:21:13.815 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:21:13 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:21:13.819 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3f:d5:79 10.100.0.4'], port_security=['fa:16:3e:3f:d5:79 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '9b6b3b79-95ba-4b05-b308-4cb69b61cab5', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b71f2bd2-56d4-4afa-bd37-2f65a2d061fe', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'da41861044a74eee9bc96841e54c57cc', 'neutron:revision_number': '6', 'neutron:security_group_ids': '9ef95b64-f92c-4c6f-a9f3-c169d32d1826', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8ca3fb81-8bc9-4918-afd1-69859f260a75, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], logical_port=ee1e1d41-5735-4e5a-8a77-1785e7a5214a) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 02:21:13 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:21:13.821 143965 INFO neutron.agent.ovn.metadata.agent [-] Port ee1e1d41-5735-4e5a-8a77-1785e7a5214a in datapath b71f2bd2-56d4-4afa-bd37-2f65a2d061fe unbound from our chassis#033[00m
Dec  6 02:21:13 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:21:13.822 143965 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network b71f2bd2-56d4-4afa-bd37-2f65a2d061fe, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec  6 02:21:13 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:21:13.823 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[03186a5b-302a-48c5-be70-506c1d045bba]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:21:13 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:21:13.823 143965 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-b71f2bd2-56d4-4afa-bd37-2f65a2d061fe namespace which is not needed anymore#033[00m
Dec  6 02:21:13 np0005548731 nova_compute[232433]: 2025-12-06 07:21:13.835 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:21:13 np0005548731 systemd[1]: machine-qemu\x2d36\x2dinstance\x2d00000053.scope: Deactivated successfully.
Dec  6 02:21:13 np0005548731 systemd[1]: machine-qemu\x2d36\x2dinstance\x2d00000053.scope: Consumed 13.949s CPU time.
Dec  6 02:21:13 np0005548731 systemd-machined[195355]: Machine qemu-36-instance-00000053 terminated.
Dec  6 02:21:13 np0005548731 neutron-haproxy-ovnmeta-b71f2bd2-56d4-4afa-bd37-2f65a2d061fe[266855]: [NOTICE]   (266859) : haproxy version is 2.8.14-c23fe91
Dec  6 02:21:13 np0005548731 neutron-haproxy-ovnmeta-b71f2bd2-56d4-4afa-bd37-2f65a2d061fe[266855]: [NOTICE]   (266859) : path to executable is /usr/sbin/haproxy
Dec  6 02:21:13 np0005548731 neutron-haproxy-ovnmeta-b71f2bd2-56d4-4afa-bd37-2f65a2d061fe[266855]: [WARNING]  (266859) : Exiting Master process...
Dec  6 02:21:13 np0005548731 neutron-haproxy-ovnmeta-b71f2bd2-56d4-4afa-bd37-2f65a2d061fe[266855]: [ALERT]    (266859) : Current worker (266861) exited with code 143 (Terminated)
Dec  6 02:21:13 np0005548731 neutron-haproxy-ovnmeta-b71f2bd2-56d4-4afa-bd37-2f65a2d061fe[266855]: [WARNING]  (266859) : All workers exited. Exiting... (0)
Dec  6 02:21:13 np0005548731 systemd[1]: libpod-f92cb6a8fc5cd867eb42cc92a8ace0b8c2fc3bd720946fddbae8164be4fcdf68.scope: Deactivated successfully.
Dec  6 02:21:13 np0005548731 podman[267971]: 2025-12-06 07:21:13.953615807 +0000 UTC m=+0.041717026 container died f92cb6a8fc5cd867eb42cc92a8ace0b8c2fc3bd720946fddbae8164be4fcdf68 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b71f2bd2-56d4-4afa-bd37-2f65a2d061fe, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  6 02:21:13 np0005548731 nova_compute[232433]: 2025-12-06 07:21:13.977 232437 INFO nova.virt.libvirt.driver [-] [instance: 9b6b3b79-95ba-4b05-b308-4cb69b61cab5] Instance destroyed successfully.#033[00m
Dec  6 02:21:13 np0005548731 nova_compute[232433]: 2025-12-06 07:21:13.977 232437 DEBUG nova.objects.instance [None req-bc563657-b1a8-4ede-a908-8abac66ffd56 e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] Lazy-loading 'resources' on Instance uuid 9b6b3b79-95ba-4b05-b308-4cb69b61cab5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:21:13 np0005548731 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-f92cb6a8fc5cd867eb42cc92a8ace0b8c2fc3bd720946fddbae8164be4fcdf68-userdata-shm.mount: Deactivated successfully.
Dec  6 02:21:13 np0005548731 nova_compute[232433]: 2025-12-06 07:21:13.989 232437 DEBUG nova.virt.libvirt.vif [None req-bc563657-b1a8-4ede-a908-8abac66ffd56 e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T07:20:09Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ListServerFiltersTestJSON-instance-17034118',display_name='tempest-ListServerFiltersTestJSON-instance-17034118',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-listserverfilterstestjson-instance-17034118',id=83,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-06T07:20:21Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='da41861044a74eee9bc96841e54c57cc',ramdisk_id='',reservation_id='r-jvcfgmi9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ListServerFiltersTestJSON-1681699386',owner_user_name='tempest-ListServerFiltersTestJSON-1681699386-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T07:20:48Z,user_data=None,user_id='e3d8ea3dcb5b4ee3b9bd24c62fdd61b2',uuid=9b6b3b79-95ba-4b05-b308-4cb69b61cab5,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "ee1e1d41-5735-4e5a-8a77-1785e7a5214a", "address": "fa:16:3e:3f:d5:79", "network": {"id": "b71f2bd2-56d4-4afa-bd37-2f65a2d061fe", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-1522505220-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da41861044a74eee9bc96841e54c57cc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapee1e1d41-57", "ovs_interfaceid": "ee1e1d41-5735-4e5a-8a77-1785e7a5214a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec  6 02:21:13 np0005548731 nova_compute[232433]: 2025-12-06 07:21:13.990 232437 DEBUG nova.network.os_vif_util [None req-bc563657-b1a8-4ede-a908-8abac66ffd56 e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] Converting VIF {"id": "ee1e1d41-5735-4e5a-8a77-1785e7a5214a", "address": "fa:16:3e:3f:d5:79", "network": {"id": "b71f2bd2-56d4-4afa-bd37-2f65a2d061fe", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-1522505220-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da41861044a74eee9bc96841e54c57cc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapee1e1d41-57", "ovs_interfaceid": "ee1e1d41-5735-4e5a-8a77-1785e7a5214a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 02:21:13 np0005548731 nova_compute[232433]: 2025-12-06 07:21:13.992 232437 DEBUG nova.network.os_vif_util [None req-bc563657-b1a8-4ede-a908-8abac66ffd56 e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:3f:d5:79,bridge_name='br-int',has_traffic_filtering=True,id=ee1e1d41-5735-4e5a-8a77-1785e7a5214a,network=Network(b71f2bd2-56d4-4afa-bd37-2f65a2d061fe),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapee1e1d41-57') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 02:21:13 np0005548731 nova_compute[232433]: 2025-12-06 07:21:13.992 232437 DEBUG os_vif [None req-bc563657-b1a8-4ede-a908-8abac66ffd56 e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:3f:d5:79,bridge_name='br-int',has_traffic_filtering=True,id=ee1e1d41-5735-4e5a-8a77-1785e7a5214a,network=Network(b71f2bd2-56d4-4afa-bd37-2f65a2d061fe),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapee1e1d41-57') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec  6 02:21:13 np0005548731 systemd[1]: var-lib-containers-storage-overlay-1244017ade4aa5a970e4c883735ddcca0455ef0724308b5760a114f1ec5aadda-merged.mount: Deactivated successfully.
Dec  6 02:21:13 np0005548731 nova_compute[232433]: 2025-12-06 07:21:13.994 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:21:13 np0005548731 nova_compute[232433]: 2025-12-06 07:21:13.994 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapee1e1d41-57, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:21:13 np0005548731 nova_compute[232433]: 2025-12-06 07:21:13.996 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:21:13 np0005548731 nova_compute[232433]: 2025-12-06 07:21:13.998 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:21:14 np0005548731 nova_compute[232433]: 2025-12-06 07:21:14.001 232437 INFO os_vif [None req-bc563657-b1a8-4ede-a908-8abac66ffd56 e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:3f:d5:79,bridge_name='br-int',has_traffic_filtering=True,id=ee1e1d41-5735-4e5a-8a77-1785e7a5214a,network=Network(b71f2bd2-56d4-4afa-bd37-2f65a2d061fe),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapee1e1d41-57')#033[00m
Dec  6 02:21:14 np0005548731 podman[267971]: 2025-12-06 07:21:14.005592183 +0000 UTC m=+0.093693402 container cleanup f92cb6a8fc5cd867eb42cc92a8ace0b8c2fc3bd720946fddbae8164be4fcdf68 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b71f2bd2-56d4-4afa-bd37-2f65a2d061fe, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  6 02:21:14 np0005548731 systemd[1]: libpod-conmon-f92cb6a8fc5cd867eb42cc92a8ace0b8c2fc3bd720946fddbae8164be4fcdf68.scope: Deactivated successfully.
Dec  6 02:21:14 np0005548731 podman[268021]: 2025-12-06 07:21:14.068934046 +0000 UTC m=+0.042819294 container remove f92cb6a8fc5cd867eb42cc92a8ace0b8c2fc3bd720946fddbae8164be4fcdf68 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b71f2bd2-56d4-4afa-bd37-2f65a2d061fe, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125)
Dec  6 02:21:14 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:21:14.074 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[78539222-9673-4f54-9bad-95b99881e50a]: (4, ('Sat Dec  6 07:21:13 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-b71f2bd2-56d4-4afa-bd37-2f65a2d061fe (f92cb6a8fc5cd867eb42cc92a8ace0b8c2fc3bd720946fddbae8164be4fcdf68)\nf92cb6a8fc5cd867eb42cc92a8ace0b8c2fc3bd720946fddbae8164be4fcdf68\nSat Dec  6 07:21:14 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-b71f2bd2-56d4-4afa-bd37-2f65a2d061fe (f92cb6a8fc5cd867eb42cc92a8ace0b8c2fc3bd720946fddbae8164be4fcdf68)\nf92cb6a8fc5cd867eb42cc92a8ace0b8c2fc3bd720946fddbae8164be4fcdf68\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:21:14 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:21:14.075 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[11c0c9b7-d7e4-4fdb-98c1-d9ba4443f135]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:21:14 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:21:14.076 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb71f2bd2-50, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:21:14 np0005548731 nova_compute[232433]: 2025-12-06 07:21:14.078 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:21:14 np0005548731 kernel: tapb71f2bd2-50: left promiscuous mode
Dec  6 02:21:14 np0005548731 nova_compute[232433]: 2025-12-06 07:21:14.091 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:21:14 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:21:14.094 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[eb5e3e2a-d410-4052-b6d4-81835a5f31bd]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:21:14 np0005548731 nova_compute[232433]: 2025-12-06 07:21:14.107 232437 DEBUG nova.compute.manager [req-70cfde1f-99e4-41c4-93a6-1eb1526ff54c req-7ceda024-520c-4783-b255-32c5992553ca 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 9b6b3b79-95ba-4b05-b308-4cb69b61cab5] Received event network-vif-unplugged-ee1e1d41-5735-4e5a-8a77-1785e7a5214a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:21:14 np0005548731 nova_compute[232433]: 2025-12-06 07:21:14.108 232437 DEBUG oslo_concurrency.lockutils [req-70cfde1f-99e4-41c4-93a6-1eb1526ff54c req-7ceda024-520c-4783-b255-32c5992553ca 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "9b6b3b79-95ba-4b05-b308-4cb69b61cab5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:21:14 np0005548731 nova_compute[232433]: 2025-12-06 07:21:14.108 232437 DEBUG oslo_concurrency.lockutils [req-70cfde1f-99e4-41c4-93a6-1eb1526ff54c req-7ceda024-520c-4783-b255-32c5992553ca 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "9b6b3b79-95ba-4b05-b308-4cb69b61cab5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:21:14 np0005548731 nova_compute[232433]: 2025-12-06 07:21:14.108 232437 DEBUG oslo_concurrency.lockutils [req-70cfde1f-99e4-41c4-93a6-1eb1526ff54c req-7ceda024-520c-4783-b255-32c5992553ca 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "9b6b3b79-95ba-4b05-b308-4cb69b61cab5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:21:14 np0005548731 nova_compute[232433]: 2025-12-06 07:21:14.108 232437 DEBUG nova.compute.manager [req-70cfde1f-99e4-41c4-93a6-1eb1526ff54c req-7ceda024-520c-4783-b255-32c5992553ca 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 9b6b3b79-95ba-4b05-b308-4cb69b61cab5] No waiting events found dispatching network-vif-unplugged-ee1e1d41-5735-4e5a-8a77-1785e7a5214a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 02:21:14 np0005548731 nova_compute[232433]: 2025-12-06 07:21:14.109 232437 DEBUG nova.compute.manager [req-70cfde1f-99e4-41c4-93a6-1eb1526ff54c req-7ceda024-520c-4783-b255-32c5992553ca 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 9b6b3b79-95ba-4b05-b308-4cb69b61cab5] Received event network-vif-unplugged-ee1e1d41-5735-4e5a-8a77-1785e7a5214a for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Dec  6 02:21:14 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:21:14.113 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[44f38b8a-3e68-4b2e-8bc0-d2da262023c8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:21:14 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:21:14.113 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[83f3107e-e13e-42c5-b6c9-c658f07e196b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:21:14 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:21:14.127 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[55490352-39e9-461a-9f46-c023c946b063]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 588239, 'reachable_time': 26482, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 268042, 'error': None, 'target': 'ovnmeta-b71f2bd2-56d4-4afa-bd37-2f65a2d061fe', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:21:14 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:21:14.130 144079 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-b71f2bd2-56d4-4afa-bd37-2f65a2d061fe deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Dec  6 02:21:14 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:21:14.130 144079 DEBUG oslo.privsep.daemon [-] privsep: reply[fcf143c4-6cb7-4fb3-8787-d632760613ae]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:21:14 np0005548731 systemd[1]: run-netns-ovnmeta\x2db71f2bd2\x2d56d4\x2d4afa\x2dbd37\x2d2f65a2d061fe.mount: Deactivated successfully.
Dec  6 02:21:14 np0005548731 nova_compute[232433]: 2025-12-06 07:21:14.380 232437 INFO nova.virt.libvirt.driver [None req-bc563657-b1a8-4ede-a908-8abac66ffd56 e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] [instance: 9b6b3b79-95ba-4b05-b308-4cb69b61cab5] Deleting instance files /var/lib/nova/instances/9b6b3b79-95ba-4b05-b308-4cb69b61cab5_del#033[00m
Dec  6 02:21:14 np0005548731 nova_compute[232433]: 2025-12-06 07:21:14.381 232437 INFO nova.virt.libvirt.driver [None req-bc563657-b1a8-4ede-a908-8abac66ffd56 e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] [instance: 9b6b3b79-95ba-4b05-b308-4cb69b61cab5] Deletion of /var/lib/nova/instances/9b6b3b79-95ba-4b05-b308-4cb69b61cab5_del complete#033[00m
Dec  6 02:21:14 np0005548731 nova_compute[232433]: 2025-12-06 07:21:14.459 232437 INFO nova.compute.manager [None req-bc563657-b1a8-4ede-a908-8abac66ffd56 e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] [instance: 9b6b3b79-95ba-4b05-b308-4cb69b61cab5] Took 0.71 seconds to destroy the instance on the hypervisor.#033[00m
Dec  6 02:21:14 np0005548731 nova_compute[232433]: 2025-12-06 07:21:14.460 232437 DEBUG oslo.service.loopingcall [None req-bc563657-b1a8-4ede-a908-8abac66ffd56 e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Dec  6 02:21:14 np0005548731 nova_compute[232433]: 2025-12-06 07:21:14.460 232437 DEBUG nova.compute.manager [-] [instance: 9b6b3b79-95ba-4b05-b308-4cb69b61cab5] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Dec  6 02:21:14 np0005548731 nova_compute[232433]: 2025-12-06 07:21:14.461 232437 DEBUG nova.network.neutron [-] [instance: 9b6b3b79-95ba-4b05-b308-4cb69b61cab5] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Dec  6 02:21:14 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:21:14 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:21:14 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:21:14.806 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:21:15 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:21:15 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:21:15 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:21:15.284 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:21:15 np0005548731 nova_compute[232433]: 2025-12-06 07:21:15.396 232437 DEBUG nova.network.neutron [-] [instance: 9b6b3b79-95ba-4b05-b308-4cb69b61cab5] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 02:21:15 np0005548731 nova_compute[232433]: 2025-12-06 07:21:15.420 232437 INFO nova.compute.manager [-] [instance: 9b6b3b79-95ba-4b05-b308-4cb69b61cab5] Took 0.96 seconds to deallocate network for instance.#033[00m
Dec  6 02:21:15 np0005548731 nova_compute[232433]: 2025-12-06 07:21:15.477 232437 DEBUG oslo_concurrency.lockutils [None req-bc563657-b1a8-4ede-a908-8abac66ffd56 e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:21:15 np0005548731 nova_compute[232433]: 2025-12-06 07:21:15.478 232437 DEBUG oslo_concurrency.lockutils [None req-bc563657-b1a8-4ede-a908-8abac66ffd56 e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:21:15 np0005548731 nova_compute[232433]: 2025-12-06 07:21:15.527 232437 DEBUG oslo_concurrency.processutils [None req-bc563657-b1a8-4ede-a908-8abac66ffd56 e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:21:15 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 02:21:15 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3709436296' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 02:21:15 np0005548731 nova_compute[232433]: 2025-12-06 07:21:15.924 232437 DEBUG oslo_concurrency.processutils [None req-bc563657-b1a8-4ede-a908-8abac66ffd56 e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.397s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:21:15 np0005548731 nova_compute[232433]: 2025-12-06 07:21:15.929 232437 DEBUG nova.compute.provider_tree [None req-bc563657-b1a8-4ede-a908-8abac66ffd56 e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] Inventory has not changed in ProviderTree for provider: 6d00757a-082f-486d-ae84-869a2ba2e6e7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  6 02:21:15 np0005548731 nova_compute[232433]: 2025-12-06 07:21:15.946 232437 DEBUG nova.scheduler.client.report [None req-bc563657-b1a8-4ede-a908-8abac66ffd56 e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] Inventory has not changed for provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  6 02:21:15 np0005548731 nova_compute[232433]: 2025-12-06 07:21:15.965 232437 DEBUG oslo_concurrency.lockutils [None req-bc563657-b1a8-4ede-a908-8abac66ffd56 e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.487s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:21:15 np0005548731 nova_compute[232433]: 2025-12-06 07:21:15.990 232437 INFO nova.scheduler.client.report [None req-bc563657-b1a8-4ede-a908-8abac66ffd56 e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] Deleted allocations for instance 9b6b3b79-95ba-4b05-b308-4cb69b61cab5#033[00m
Dec  6 02:21:16 np0005548731 nova_compute[232433]: 2025-12-06 07:21:16.051 232437 DEBUG oslo_concurrency.lockutils [None req-bc563657-b1a8-4ede-a908-8abac66ffd56 e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] Lock "9b6b3b79-95ba-4b05-b308-4cb69b61cab5" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.307s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:21:16 np0005548731 nova_compute[232433]: 2025-12-06 07:21:16.178 232437 DEBUG nova.compute.manager [req-148fccd7-600d-4f04-92b6-c4f7aac082de req-59eb09c3-39b3-4106-b085-d381d4a15fea 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 9b6b3b79-95ba-4b05-b308-4cb69b61cab5] Received event network-vif-plugged-ee1e1d41-5735-4e5a-8a77-1785e7a5214a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:21:16 np0005548731 nova_compute[232433]: 2025-12-06 07:21:16.179 232437 DEBUG oslo_concurrency.lockutils [req-148fccd7-600d-4f04-92b6-c4f7aac082de req-59eb09c3-39b3-4106-b085-d381d4a15fea 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "9b6b3b79-95ba-4b05-b308-4cb69b61cab5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:21:16 np0005548731 nova_compute[232433]: 2025-12-06 07:21:16.179 232437 DEBUG oslo_concurrency.lockutils [req-148fccd7-600d-4f04-92b6-c4f7aac082de req-59eb09c3-39b3-4106-b085-d381d4a15fea 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "9b6b3b79-95ba-4b05-b308-4cb69b61cab5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:21:16 np0005548731 nova_compute[232433]: 2025-12-06 07:21:16.179 232437 DEBUG oslo_concurrency.lockutils [req-148fccd7-600d-4f04-92b6-c4f7aac082de req-59eb09c3-39b3-4106-b085-d381d4a15fea 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "9b6b3b79-95ba-4b05-b308-4cb69b61cab5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:21:16 np0005548731 nova_compute[232433]: 2025-12-06 07:21:16.180 232437 DEBUG nova.compute.manager [req-148fccd7-600d-4f04-92b6-c4f7aac082de req-59eb09c3-39b3-4106-b085-d381d4a15fea 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 9b6b3b79-95ba-4b05-b308-4cb69b61cab5] No waiting events found dispatching network-vif-plugged-ee1e1d41-5735-4e5a-8a77-1785e7a5214a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 02:21:16 np0005548731 nova_compute[232433]: 2025-12-06 07:21:16.180 232437 WARNING nova.compute.manager [req-148fccd7-600d-4f04-92b6-c4f7aac082de req-59eb09c3-39b3-4106-b085-d381d4a15fea 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 9b6b3b79-95ba-4b05-b308-4cb69b61cab5] Received unexpected event network-vif-plugged-ee1e1d41-5735-4e5a-8a77-1785e7a5214a for instance with vm_state deleted and task_state None.#033[00m
Dec  6 02:21:16 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:21:16 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:21:16 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:21:16.808 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:21:17 np0005548731 nova_compute[232433]: 2025-12-06 07:21:17.102 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:21:17 np0005548731 nova_compute[232433]: 2025-12-06 07:21:17.179 232437 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765005662.1786253, 2092b945-90e0-4a04-aabb-cc48efe223da => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 02:21:17 np0005548731 nova_compute[232433]: 2025-12-06 07:21:17.180 232437 INFO nova.compute.manager [-] [instance: 2092b945-90e0-4a04-aabb-cc48efe223da] VM Stopped (Lifecycle Event)#033[00m
Dec  6 02:21:17 np0005548731 nova_compute[232433]: 2025-12-06 07:21:17.208 232437 DEBUG nova.compute.manager [None req-ad453ec3-4902-4698-b2e4-caa55ff033a7 - - - - - -] [instance: 2092b945-90e0-4a04-aabb-cc48efe223da] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:21:17 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:21:17 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 02:21:17 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:21:17.287 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 02:21:17 np0005548731 nova_compute[232433]: 2025-12-06 07:21:17.478 232437 DEBUG nova.compute.manager [req-537f6014-de8d-4e4b-b660-681d4a883063 req-13400ffd-1ee4-4fa5-b224-3c7de3abc99b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 9b6b3b79-95ba-4b05-b308-4cb69b61cab5] Received event network-vif-deleted-ee1e1d41-5735-4e5a-8a77-1785e7a5214a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:21:17 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e258 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:21:17 np0005548731 nova_compute[232433]: 2025-12-06 07:21:17.992 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:21:18 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:21:18 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:21:18 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:21:18.811 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:21:18 np0005548731 nova_compute[232433]: 2025-12-06 07:21:18.997 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:21:19 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:21:19 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:21:19 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:21:19.291 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:21:20 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:21:20 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 02:21:20 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:21:20.814 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 02:21:21 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:21:21 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 02:21:21 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:21:21.293 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 02:21:22 np0005548731 nova_compute[232433]: 2025-12-06 07:21:22.104 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:21:22 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:21:22 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:21:22 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:21:22.817 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:21:22 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e258 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:21:23 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:21:23 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:21:23 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:21:23.296 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:21:24 np0005548731 nova_compute[232433]: 2025-12-06 07:21:24.000 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:21:24 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:21:24 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:21:24 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:21:24.819 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:21:25 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:21:25 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 02:21:25 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:21:25.299 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 02:21:26 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:21:26 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:21:26 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:21:26.821 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:21:27 np0005548731 nova_compute[232433]: 2025-12-06 07:21:27.106 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:21:27 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:21:27 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:21:27 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:21:27.302 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:21:27 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e258 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:21:28 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:21:28 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:21:28 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:21:28.824 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:21:28 np0005548731 nova_compute[232433]: 2025-12-06 07:21:28.976 232437 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765005673.9748583, 9b6b3b79-95ba-4b05-b308-4cb69b61cab5 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 02:21:28 np0005548731 nova_compute[232433]: 2025-12-06 07:21:28.977 232437 INFO nova.compute.manager [-] [instance: 9b6b3b79-95ba-4b05-b308-4cb69b61cab5] VM Stopped (Lifecycle Event)#033[00m
Dec  6 02:21:28 np0005548731 nova_compute[232433]: 2025-12-06 07:21:28.998 232437 DEBUG nova.compute.manager [None req-f116158f-c6ea-40bc-a676-3f89b0c6ca80 - - - - - -] [instance: 9b6b3b79-95ba-4b05-b308-4cb69b61cab5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:21:29 np0005548731 nova_compute[232433]: 2025-12-06 07:21:29.042 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:21:29 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:21:29 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:21:29 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:21:29.306 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:21:30 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:21:30 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:21:30 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:21:30.826 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:21:31 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:21:31 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:21:31 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:21:31.308 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:21:32 np0005548731 nova_compute[232433]: 2025-12-06 07:21:32.109 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:21:32 np0005548731 nova_compute[232433]: 2025-12-06 07:21:32.227 232437 DEBUG oslo_concurrency.lockutils [None req-c08965c3-6c1d-4892-ac3f-c10623cb0853 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] Acquiring lock "84bceb35-5d7d-4199-b98e-3dc437c4a7a3" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:21:32 np0005548731 nova_compute[232433]: 2025-12-06 07:21:32.227 232437 DEBUG oslo_concurrency.lockutils [None req-c08965c3-6c1d-4892-ac3f-c10623cb0853 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] Lock "84bceb35-5d7d-4199-b98e-3dc437c4a7a3" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:21:32 np0005548731 nova_compute[232433]: 2025-12-06 07:21:32.244 232437 DEBUG nova.compute.manager [None req-c08965c3-6c1d-4892-ac3f-c10623cb0853 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] [instance: 84bceb35-5d7d-4199-b98e-3dc437c4a7a3] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Dec  6 02:21:32 np0005548731 nova_compute[232433]: 2025-12-06 07:21:32.335 232437 DEBUG oslo_concurrency.lockutils [None req-c08965c3-6c1d-4892-ac3f-c10623cb0853 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:21:32 np0005548731 nova_compute[232433]: 2025-12-06 07:21:32.336 232437 DEBUG oslo_concurrency.lockutils [None req-c08965c3-6c1d-4892-ac3f-c10623cb0853 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:21:32 np0005548731 nova_compute[232433]: 2025-12-06 07:21:32.359 232437 DEBUG nova.virt.hardware [None req-c08965c3-6c1d-4892-ac3f-c10623cb0853 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Dec  6 02:21:32 np0005548731 nova_compute[232433]: 2025-12-06 07:21:32.359 232437 INFO nova.compute.claims [None req-c08965c3-6c1d-4892-ac3f-c10623cb0853 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] [instance: 84bceb35-5d7d-4199-b98e-3dc437c4a7a3] Claim successful on node compute-2.ctlplane.example.com#033[00m
Dec  6 02:21:32 np0005548731 nova_compute[232433]: 2025-12-06 07:21:32.497 232437 DEBUG oslo_concurrency.processutils [None req-c08965c3-6c1d-4892-ac3f-c10623cb0853 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:21:32 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e258 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:21:32 np0005548731 systemd[1]: Starting dnf makecache...
Dec  6 02:21:32 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:21:32 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:21:32 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:21:32.834 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:21:32 np0005548731 podman[268146]: 2025-12-06 07:21:32.889556384 +0000 UTC m=+0.054099326 container health_status 26c2ce9bdb83c6db5ccad7f5b2a7cb4e9354ab0235ad2f942ea42c8feda2142b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0)
Dec  6 02:21:32 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 02:21:32 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/743799355' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 02:21:32 np0005548731 podman[268148]: 2025-12-06 07:21:32.90099072 +0000 UTC m=+0.060604778 container health_status ae4fd08cdec31d6ae2a6b91ffff7deb6ca5a8375cb52ee4b685cc6f25796824e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Dec  6 02:21:32 np0005548731 nova_compute[232433]: 2025-12-06 07:21:32.922 232437 DEBUG oslo_concurrency.processutils [None req-c08965c3-6c1d-4892-ac3f-c10623cb0853 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.425s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:21:32 np0005548731 podman[268147]: 2025-12-06 07:21:32.926492909 +0000 UTC m=+0.088979129 container health_status a118c3becf56cfbcae208065771ebf10a65162bb7b96389971293e534823fb78 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125)
Dec  6 02:21:32 np0005548731 nova_compute[232433]: 2025-12-06 07:21:32.929 232437 DEBUG nova.compute.provider_tree [None req-c08965c3-6c1d-4892-ac3f-c10623cb0853 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] Inventory has not changed in ProviderTree for provider: 6d00757a-082f-486d-ae84-869a2ba2e6e7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  6 02:21:32 np0005548731 nova_compute[232433]: 2025-12-06 07:21:32.944 232437 DEBUG nova.scheduler.client.report [None req-c08965c3-6c1d-4892-ac3f-c10623cb0853 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] Inventory has not changed for provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  6 02:21:32 np0005548731 nova_compute[232433]: 2025-12-06 07:21:32.971 232437 DEBUG oslo_concurrency.lockutils [None req-c08965c3-6c1d-4892-ac3f-c10623cb0853 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.635s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:21:32 np0005548731 nova_compute[232433]: 2025-12-06 07:21:32.972 232437 DEBUG nova.compute.manager [None req-c08965c3-6c1d-4892-ac3f-c10623cb0853 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] [instance: 84bceb35-5d7d-4199-b98e-3dc437c4a7a3] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Dec  6 02:21:33 np0005548731 nova_compute[232433]: 2025-12-06 07:21:33.026 232437 DEBUG nova.compute.manager [None req-c08965c3-6c1d-4892-ac3f-c10623cb0853 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] [instance: 84bceb35-5d7d-4199-b98e-3dc437c4a7a3] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Dec  6 02:21:33 np0005548731 nova_compute[232433]: 2025-12-06 07:21:33.026 232437 DEBUG nova.network.neutron [None req-c08965c3-6c1d-4892-ac3f-c10623cb0853 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] [instance: 84bceb35-5d7d-4199-b98e-3dc437c4a7a3] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Dec  6 02:21:33 np0005548731 nova_compute[232433]: 2025-12-06 07:21:33.057 232437 INFO nova.virt.libvirt.driver [None req-c08965c3-6c1d-4892-ac3f-c10623cb0853 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] [instance: 84bceb35-5d7d-4199-b98e-3dc437c4a7a3] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Dec  6 02:21:33 np0005548731 dnf[268149]: Metadata cache refreshed recently.
Dec  6 02:21:33 np0005548731 nova_compute[232433]: 2025-12-06 07:21:33.088 232437 DEBUG nova.compute.manager [None req-c08965c3-6c1d-4892-ac3f-c10623cb0853 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] [instance: 84bceb35-5d7d-4199-b98e-3dc437c4a7a3] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Dec  6 02:21:33 np0005548731 systemd[1]: dnf-makecache.service: Deactivated successfully.
Dec  6 02:21:33 np0005548731 systemd[1]: Finished dnf makecache.
Dec  6 02:21:33 np0005548731 nova_compute[232433]: 2025-12-06 07:21:33.276 232437 DEBUG nova.compute.manager [None req-c08965c3-6c1d-4892-ac3f-c10623cb0853 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] [instance: 84bceb35-5d7d-4199-b98e-3dc437c4a7a3] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Dec  6 02:21:33 np0005548731 nova_compute[232433]: 2025-12-06 07:21:33.280 232437 DEBUG nova.virt.libvirt.driver [None req-c08965c3-6c1d-4892-ac3f-c10623cb0853 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] [instance: 84bceb35-5d7d-4199-b98e-3dc437c4a7a3] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Dec  6 02:21:33 np0005548731 nova_compute[232433]: 2025-12-06 07:21:33.281 232437 INFO nova.virt.libvirt.driver [None req-c08965c3-6c1d-4892-ac3f-c10623cb0853 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] [instance: 84bceb35-5d7d-4199-b98e-3dc437c4a7a3] Creating image(s)#033[00m
Dec  6 02:21:33 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:21:33 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:21:33 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:21:33.311 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:21:33 np0005548731 nova_compute[232433]: 2025-12-06 07:21:33.315 232437 DEBUG nova.storage.rbd_utils [None req-c08965c3-6c1d-4892-ac3f-c10623cb0853 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] rbd image 84bceb35-5d7d-4199-b98e-3dc437c4a7a3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:21:33 np0005548731 nova_compute[232433]: 2025-12-06 07:21:33.354 232437 DEBUG nova.storage.rbd_utils [None req-c08965c3-6c1d-4892-ac3f-c10623cb0853 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] rbd image 84bceb35-5d7d-4199-b98e-3dc437c4a7a3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:21:33 np0005548731 nova_compute[232433]: 2025-12-06 07:21:33.390 232437 DEBUG nova.storage.rbd_utils [None req-c08965c3-6c1d-4892-ac3f-c10623cb0853 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] rbd image 84bceb35-5d7d-4199-b98e-3dc437c4a7a3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:21:33 np0005548731 nova_compute[232433]: 2025-12-06 07:21:33.394 232437 DEBUG oslo_concurrency.processutils [None req-c08965c3-6c1d-4892-ac3f-c10623cb0853 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:21:33 np0005548731 nova_compute[232433]: 2025-12-06 07:21:33.474 232437 DEBUG oslo_concurrency.processutils [None req-c08965c3-6c1d-4892-ac3f-c10623cb0853 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json" returned: 0 in 0.079s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:21:33 np0005548731 nova_compute[232433]: 2025-12-06 07:21:33.475 232437 DEBUG oslo_concurrency.lockutils [None req-c08965c3-6c1d-4892-ac3f-c10623cb0853 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] Acquiring lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:21:33 np0005548731 nova_compute[232433]: 2025-12-06 07:21:33.476 232437 DEBUG oslo_concurrency.lockutils [None req-c08965c3-6c1d-4892-ac3f-c10623cb0853 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:21:33 np0005548731 nova_compute[232433]: 2025-12-06 07:21:33.477 232437 DEBUG oslo_concurrency.lockutils [None req-c08965c3-6c1d-4892-ac3f-c10623cb0853 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:21:33 np0005548731 nova_compute[232433]: 2025-12-06 07:21:33.506 232437 DEBUG nova.storage.rbd_utils [None req-c08965c3-6c1d-4892-ac3f-c10623cb0853 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] rbd image 84bceb35-5d7d-4199-b98e-3dc437c4a7a3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:21:33 np0005548731 nova_compute[232433]: 2025-12-06 07:21:33.510 232437 DEBUG oslo_concurrency.processutils [None req-c08965c3-6c1d-4892-ac3f-c10623cb0853 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef 84bceb35-5d7d-4199-b98e-3dc437c4a7a3_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:21:33 np0005548731 nova_compute[232433]: 2025-12-06 07:21:33.539 232437 DEBUG nova.policy [None req-c08965c3-6c1d-4892-ac3f-c10623cb0853 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'a52e2b4388994d8791443483bd42cc33', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'b558585a6aa14470bdad319926a98046', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Dec  6 02:21:33 np0005548731 nova_compute[232433]: 2025-12-06 07:21:33.909 232437 DEBUG oslo_concurrency.processutils [None req-c08965c3-6c1d-4892-ac3f-c10623cb0853 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef 84bceb35-5d7d-4199-b98e-3dc437c4a7a3_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.399s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:21:33 np0005548731 nova_compute[232433]: 2025-12-06 07:21:33.984 232437 DEBUG nova.storage.rbd_utils [None req-c08965c3-6c1d-4892-ac3f-c10623cb0853 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] resizing rbd image 84bceb35-5d7d-4199-b98e-3dc437c4a7a3_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Dec  6 02:21:34 np0005548731 nova_compute[232433]: 2025-12-06 07:21:34.044 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:21:34 np0005548731 nova_compute[232433]: 2025-12-06 07:21:34.157 232437 DEBUG nova.objects.instance [None req-c08965c3-6c1d-4892-ac3f-c10623cb0853 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] Lazy-loading 'migration_context' on Instance uuid 84bceb35-5d7d-4199-b98e-3dc437c4a7a3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:21:34 np0005548731 nova_compute[232433]: 2025-12-06 07:21:34.175 232437 DEBUG nova.virt.libvirt.driver [None req-c08965c3-6c1d-4892-ac3f-c10623cb0853 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] [instance: 84bceb35-5d7d-4199-b98e-3dc437c4a7a3] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Dec  6 02:21:34 np0005548731 nova_compute[232433]: 2025-12-06 07:21:34.176 232437 DEBUG nova.virt.libvirt.driver [None req-c08965c3-6c1d-4892-ac3f-c10623cb0853 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] [instance: 84bceb35-5d7d-4199-b98e-3dc437c4a7a3] Ensure instance console log exists: /var/lib/nova/instances/84bceb35-5d7d-4199-b98e-3dc437c4a7a3/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec  6 02:21:34 np0005548731 nova_compute[232433]: 2025-12-06 07:21:34.176 232437 DEBUG oslo_concurrency.lockutils [None req-c08965c3-6c1d-4892-ac3f-c10623cb0853 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:21:34 np0005548731 nova_compute[232433]: 2025-12-06 07:21:34.176 232437 DEBUG oslo_concurrency.lockutils [None req-c08965c3-6c1d-4892-ac3f-c10623cb0853 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:21:34 np0005548731 nova_compute[232433]: 2025-12-06 07:21:34.177 232437 DEBUG oslo_concurrency.lockutils [None req-c08965c3-6c1d-4892-ac3f-c10623cb0853 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:21:34 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:21:34.238 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=34, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:ec:b3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '32:72:e7:89:e0:7d'}, ipsec=False) old=SB_Global(nb_cfg=33) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 02:21:34 np0005548731 nova_compute[232433]: 2025-12-06 07:21:34.239 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:21:34 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:21:34.239 143965 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec  6 02:21:34 np0005548731 nova_compute[232433]: 2025-12-06 07:21:34.696 232437 DEBUG nova.network.neutron [None req-c08965c3-6c1d-4892-ac3f-c10623cb0853 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] [instance: 84bceb35-5d7d-4199-b98e-3dc437c4a7a3] Successfully created port: 79a92003-0946-49ab-b85a-8fea0cd7c3cb _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Dec  6 02:21:34 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:21:34 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:21:34 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:21:34.836 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:21:35 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:21:35 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000025s ======
Dec  6 02:21:35 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:21:35.314 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Dec  6 02:21:35 np0005548731 nova_compute[232433]: 2025-12-06 07:21:35.780 232437 DEBUG nova.network.neutron [None req-c08965c3-6c1d-4892-ac3f-c10623cb0853 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] [instance: 84bceb35-5d7d-4199-b98e-3dc437c4a7a3] Successfully updated port: 79a92003-0946-49ab-b85a-8fea0cd7c3cb _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Dec  6 02:21:35 np0005548731 nova_compute[232433]: 2025-12-06 07:21:35.801 232437 DEBUG oslo_concurrency.lockutils [None req-c08965c3-6c1d-4892-ac3f-c10623cb0853 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] Acquiring lock "refresh_cache-84bceb35-5d7d-4199-b98e-3dc437c4a7a3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 02:21:35 np0005548731 nova_compute[232433]: 2025-12-06 07:21:35.801 232437 DEBUG oslo_concurrency.lockutils [None req-c08965c3-6c1d-4892-ac3f-c10623cb0853 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] Acquired lock "refresh_cache-84bceb35-5d7d-4199-b98e-3dc437c4a7a3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 02:21:35 np0005548731 nova_compute[232433]: 2025-12-06 07:21:35.801 232437 DEBUG nova.network.neutron [None req-c08965c3-6c1d-4892-ac3f-c10623cb0853 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] [instance: 84bceb35-5d7d-4199-b98e-3dc437c4a7a3] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec  6 02:21:35 np0005548731 nova_compute[232433]: 2025-12-06 07:21:35.980 232437 DEBUG nova.network.neutron [None req-c08965c3-6c1d-4892-ac3f-c10623cb0853 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] [instance: 84bceb35-5d7d-4199-b98e-3dc437c4a7a3] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec  6 02:21:36 np0005548731 nova_compute[232433]: 2025-12-06 07:21:36.818 232437 DEBUG nova.network.neutron [None req-c08965c3-6c1d-4892-ac3f-c10623cb0853 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] [instance: 84bceb35-5d7d-4199-b98e-3dc437c4a7a3] Updating instance_info_cache with network_info: [{"id": "79a92003-0946-49ab-b85a-8fea0cd7c3cb", "address": "fa:16:3e:9e:94:65", "network": {"id": "77f3ccc8-bb54-46a1-b015-cc5b8a445202", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-573376355-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b558585a6aa14470bdad319926a98046", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap79a92003-09", "ovs_interfaceid": "79a92003-0946-49ab-b85a-8fea0cd7c3cb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 02:21:36 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:21:36 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:21:36 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:21:36.838 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:21:36 np0005548731 nova_compute[232433]: 2025-12-06 07:21:36.848 232437 DEBUG oslo_concurrency.lockutils [None req-c08965c3-6c1d-4892-ac3f-c10623cb0853 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] Releasing lock "refresh_cache-84bceb35-5d7d-4199-b98e-3dc437c4a7a3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 02:21:36 np0005548731 nova_compute[232433]: 2025-12-06 07:21:36.849 232437 DEBUG nova.compute.manager [None req-c08965c3-6c1d-4892-ac3f-c10623cb0853 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] [instance: 84bceb35-5d7d-4199-b98e-3dc437c4a7a3] Instance network_info: |[{"id": "79a92003-0946-49ab-b85a-8fea0cd7c3cb", "address": "fa:16:3e:9e:94:65", "network": {"id": "77f3ccc8-bb54-46a1-b015-cc5b8a445202", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-573376355-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b558585a6aa14470bdad319926a98046", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap79a92003-09", "ovs_interfaceid": "79a92003-0946-49ab-b85a-8fea0cd7c3cb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Dec  6 02:21:36 np0005548731 nova_compute[232433]: 2025-12-06 07:21:36.851 232437 DEBUG nova.virt.libvirt.driver [None req-c08965c3-6c1d-4892-ac3f-c10623cb0853 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] [instance: 84bceb35-5d7d-4199-b98e-3dc437c4a7a3] Start _get_guest_xml network_info=[{"id": "79a92003-0946-49ab-b85a-8fea0cd7c3cb", "address": "fa:16:3e:9e:94:65", "network": {"id": "77f3ccc8-bb54-46a1-b015-cc5b8a445202", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-573376355-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b558585a6aa14470bdad319926a98046", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap79a92003-09", "ovs_interfaceid": "79a92003-0946-49ab-b85a-8fea0cd7c3cb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'device_type': 'disk', 'encryption_secret_uuid': None, 'disk_bus': 'virtio', 'boot_index': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'encryption_options': None, 'size': 0, 'encrypted': False, 'image_id': '6efab05d-c7cf-4770-a5c3-c806a2739063'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec  6 02:21:36 np0005548731 nova_compute[232433]: 2025-12-06 07:21:36.855 232437 WARNING nova.virt.libvirt.driver [None req-c08965c3-6c1d-4892-ac3f-c10623cb0853 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  6 02:21:36 np0005548731 nova_compute[232433]: 2025-12-06 07:21:36.864 232437 DEBUG nova.virt.libvirt.host [None req-c08965c3-6c1d-4892-ac3f-c10623cb0853 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec  6 02:21:36 np0005548731 nova_compute[232433]: 2025-12-06 07:21:36.865 232437 DEBUG nova.virt.libvirt.host [None req-c08965c3-6c1d-4892-ac3f-c10623cb0853 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec  6 02:21:36 np0005548731 nova_compute[232433]: 2025-12-06 07:21:36.868 232437 DEBUG nova.virt.libvirt.host [None req-c08965c3-6c1d-4892-ac3f-c10623cb0853 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec  6 02:21:36 np0005548731 nova_compute[232433]: 2025-12-06 07:21:36.869 232437 DEBUG nova.virt.libvirt.host [None req-c08965c3-6c1d-4892-ac3f-c10623cb0853 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec  6 02:21:36 np0005548731 nova_compute[232433]: 2025-12-06 07:21:36.870 232437 DEBUG nova.virt.libvirt.driver [None req-c08965c3-6c1d-4892-ac3f-c10623cb0853 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec  6 02:21:36 np0005548731 nova_compute[232433]: 2025-12-06 07:21:36.870 232437 DEBUG nova.virt.hardware [None req-c08965c3-6c1d-4892-ac3f-c10623cb0853 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-06T06:56:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='25848a18-11d9-4f11-80b5-5d005675c76d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec  6 02:21:36 np0005548731 nova_compute[232433]: 2025-12-06 07:21:36.870 232437 DEBUG nova.virt.hardware [None req-c08965c3-6c1d-4892-ac3f-c10623cb0853 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec  6 02:21:36 np0005548731 nova_compute[232433]: 2025-12-06 07:21:36.870 232437 DEBUG nova.virt.hardware [None req-c08965c3-6c1d-4892-ac3f-c10623cb0853 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec  6 02:21:36 np0005548731 nova_compute[232433]: 2025-12-06 07:21:36.871 232437 DEBUG nova.virt.hardware [None req-c08965c3-6c1d-4892-ac3f-c10623cb0853 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec  6 02:21:36 np0005548731 nova_compute[232433]: 2025-12-06 07:21:36.871 232437 DEBUG nova.virt.hardware [None req-c08965c3-6c1d-4892-ac3f-c10623cb0853 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec  6 02:21:36 np0005548731 nova_compute[232433]: 2025-12-06 07:21:36.871 232437 DEBUG nova.virt.hardware [None req-c08965c3-6c1d-4892-ac3f-c10623cb0853 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec  6 02:21:36 np0005548731 nova_compute[232433]: 2025-12-06 07:21:36.871 232437 DEBUG nova.virt.hardware [None req-c08965c3-6c1d-4892-ac3f-c10623cb0853 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec  6 02:21:36 np0005548731 nova_compute[232433]: 2025-12-06 07:21:36.872 232437 DEBUG nova.virt.hardware [None req-c08965c3-6c1d-4892-ac3f-c10623cb0853 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec  6 02:21:36 np0005548731 nova_compute[232433]: 2025-12-06 07:21:36.872 232437 DEBUG nova.virt.hardware [None req-c08965c3-6c1d-4892-ac3f-c10623cb0853 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec  6 02:21:36 np0005548731 nova_compute[232433]: 2025-12-06 07:21:36.872 232437 DEBUG nova.virt.hardware [None req-c08965c3-6c1d-4892-ac3f-c10623cb0853 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec  6 02:21:36 np0005548731 nova_compute[232433]: 2025-12-06 07:21:36.872 232437 DEBUG nova.virt.hardware [None req-c08965c3-6c1d-4892-ac3f-c10623cb0853 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec  6 02:21:36 np0005548731 nova_compute[232433]: 2025-12-06 07:21:36.875 232437 DEBUG oslo_concurrency.processutils [None req-c08965c3-6c1d-4892-ac3f-c10623cb0853 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:21:37 np0005548731 nova_compute[232433]: 2025-12-06 07:21:37.111 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:21:37 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:21:37.240 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=9f96b960-b4f2-40bd-ae99-08121f5e8b78, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '34'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:21:37 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  6 02:21:37 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1888702857' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  6 02:21:37 np0005548731 nova_compute[232433]: 2025-12-06 07:21:37.303 232437 DEBUG oslo_concurrency.processutils [None req-c08965c3-6c1d-4892-ac3f-c10623cb0853 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.428s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:21:37 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:21:37 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:21:37 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:21:37.316 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:21:37 np0005548731 nova_compute[232433]: 2025-12-06 07:21:37.341 232437 DEBUG nova.storage.rbd_utils [None req-c08965c3-6c1d-4892-ac3f-c10623cb0853 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] rbd image 84bceb35-5d7d-4199-b98e-3dc437c4a7a3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:21:37 np0005548731 nova_compute[232433]: 2025-12-06 07:21:37.348 232437 DEBUG oslo_concurrency.processutils [None req-c08965c3-6c1d-4892-ac3f-c10623cb0853 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:21:37 np0005548731 nova_compute[232433]: 2025-12-06 07:21:37.705 232437 DEBUG nova.compute.manager [req-1de4beb6-2ee0-4abb-89e1-77a3290e66ca req-cf2fac8d-7fd4-45ed-b508-f15a2290383b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 84bceb35-5d7d-4199-b98e-3dc437c4a7a3] Received event network-changed-79a92003-0946-49ab-b85a-8fea0cd7c3cb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:21:37 np0005548731 nova_compute[232433]: 2025-12-06 07:21:37.707 232437 DEBUG nova.compute.manager [req-1de4beb6-2ee0-4abb-89e1-77a3290e66ca req-cf2fac8d-7fd4-45ed-b508-f15a2290383b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 84bceb35-5d7d-4199-b98e-3dc437c4a7a3] Refreshing instance network info cache due to event network-changed-79a92003-0946-49ab-b85a-8fea0cd7c3cb. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  6 02:21:37 np0005548731 nova_compute[232433]: 2025-12-06 07:21:37.708 232437 DEBUG oslo_concurrency.lockutils [req-1de4beb6-2ee0-4abb-89e1-77a3290e66ca req-cf2fac8d-7fd4-45ed-b508-f15a2290383b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-84bceb35-5d7d-4199-b98e-3dc437c4a7a3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 02:21:37 np0005548731 nova_compute[232433]: 2025-12-06 07:21:37.708 232437 DEBUG oslo_concurrency.lockutils [req-1de4beb6-2ee0-4abb-89e1-77a3290e66ca req-cf2fac8d-7fd4-45ed-b508-f15a2290383b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-84bceb35-5d7d-4199-b98e-3dc437c4a7a3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 02:21:37 np0005548731 nova_compute[232433]: 2025-12-06 07:21:37.709 232437 DEBUG nova.network.neutron [req-1de4beb6-2ee0-4abb-89e1-77a3290e66ca req-cf2fac8d-7fd4-45ed-b508-f15a2290383b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 84bceb35-5d7d-4199-b98e-3dc437c4a7a3] Refreshing network info cache for port 79a92003-0946-49ab-b85a-8fea0cd7c3cb _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  6 02:21:37 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  6 02:21:37 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2419729644' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  6 02:21:37 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e258 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:21:38 np0005548731 nova_compute[232433]: 2025-12-06 07:21:38.062 232437 DEBUG oslo_concurrency.processutils [None req-c08965c3-6c1d-4892-ac3f-c10623cb0853 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.714s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:21:38 np0005548731 nova_compute[232433]: 2025-12-06 07:21:38.064 232437 DEBUG nova.virt.libvirt.vif [None req-c08965c3-6c1d-4892-ac3f-c10623cb0853 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T07:21:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ListServersNegativeTestJSON-server-1422412228',display_name='tempest-ListServersNegativeTestJSON-server-1422412228-1',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-listserversnegativetestjson-server-1422412228-1',id=89,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b558585a6aa14470bdad319926a98046',ramdisk_id='',reservation_id='r-46lv2gf4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ListServersNegativeTestJSON-179719916',owner_user_name='
tempest-ListServersNegativeTestJSON-179719916-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:21:33Z,user_data=None,user_id='a52e2b4388994d8791443483bd42cc33',uuid=84bceb35-5d7d-4199-b98e-3dc437c4a7a3,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "79a92003-0946-49ab-b85a-8fea0cd7c3cb", "address": "fa:16:3e:9e:94:65", "network": {"id": "77f3ccc8-bb54-46a1-b015-cc5b8a445202", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-573376355-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b558585a6aa14470bdad319926a98046", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap79a92003-09", "ovs_interfaceid": "79a92003-0946-49ab-b85a-8fea0cd7c3cb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec  6 02:21:38 np0005548731 nova_compute[232433]: 2025-12-06 07:21:38.064 232437 DEBUG nova.network.os_vif_util [None req-c08965c3-6c1d-4892-ac3f-c10623cb0853 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] Converting VIF {"id": "79a92003-0946-49ab-b85a-8fea0cd7c3cb", "address": "fa:16:3e:9e:94:65", "network": {"id": "77f3ccc8-bb54-46a1-b015-cc5b8a445202", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-573376355-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b558585a6aa14470bdad319926a98046", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap79a92003-09", "ovs_interfaceid": "79a92003-0946-49ab-b85a-8fea0cd7c3cb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 02:21:38 np0005548731 nova_compute[232433]: 2025-12-06 07:21:38.065 232437 DEBUG nova.network.os_vif_util [None req-c08965c3-6c1d-4892-ac3f-c10623cb0853 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9e:94:65,bridge_name='br-int',has_traffic_filtering=True,id=79a92003-0946-49ab-b85a-8fea0cd7c3cb,network=Network(77f3ccc8-bb54-46a1-b015-cc5b8a445202),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap79a92003-09') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 02:21:38 np0005548731 nova_compute[232433]: 2025-12-06 07:21:38.066 232437 DEBUG nova.objects.instance [None req-c08965c3-6c1d-4892-ac3f-c10623cb0853 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] Lazy-loading 'pci_devices' on Instance uuid 84bceb35-5d7d-4199-b98e-3dc437c4a7a3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:21:38 np0005548731 nova_compute[232433]: 2025-12-06 07:21:38.096 232437 DEBUG nova.virt.libvirt.driver [None req-c08965c3-6c1d-4892-ac3f-c10623cb0853 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] [instance: 84bceb35-5d7d-4199-b98e-3dc437c4a7a3] End _get_guest_xml xml=<domain type="kvm">
Dec  6 02:21:38 np0005548731 nova_compute[232433]:  <uuid>84bceb35-5d7d-4199-b98e-3dc437c4a7a3</uuid>
Dec  6 02:21:38 np0005548731 nova_compute[232433]:  <name>instance-00000059</name>
Dec  6 02:21:38 np0005548731 nova_compute[232433]:  <memory>131072</memory>
Dec  6 02:21:38 np0005548731 nova_compute[232433]:  <vcpu>1</vcpu>
Dec  6 02:21:38 np0005548731 nova_compute[232433]:  <metadata>
Dec  6 02:21:38 np0005548731 nova_compute[232433]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec  6 02:21:38 np0005548731 nova_compute[232433]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec  6 02:21:38 np0005548731 nova_compute[232433]:      <nova:name>tempest-ListServersNegativeTestJSON-server-1422412228-1</nova:name>
Dec  6 02:21:38 np0005548731 nova_compute[232433]:      <nova:creationTime>2025-12-06 07:21:36</nova:creationTime>
Dec  6 02:21:38 np0005548731 nova_compute[232433]:      <nova:flavor name="m1.nano">
Dec  6 02:21:38 np0005548731 nova_compute[232433]:        <nova:memory>128</nova:memory>
Dec  6 02:21:38 np0005548731 nova_compute[232433]:        <nova:disk>1</nova:disk>
Dec  6 02:21:38 np0005548731 nova_compute[232433]:        <nova:swap>0</nova:swap>
Dec  6 02:21:38 np0005548731 nova_compute[232433]:        <nova:ephemeral>0</nova:ephemeral>
Dec  6 02:21:38 np0005548731 nova_compute[232433]:        <nova:vcpus>1</nova:vcpus>
Dec  6 02:21:38 np0005548731 nova_compute[232433]:      </nova:flavor>
Dec  6 02:21:38 np0005548731 nova_compute[232433]:      <nova:owner>
Dec  6 02:21:38 np0005548731 nova_compute[232433]:        <nova:user uuid="a52e2b4388994d8791443483bd42cc33">tempest-ListServersNegativeTestJSON-179719916-project-member</nova:user>
Dec  6 02:21:38 np0005548731 nova_compute[232433]:        <nova:project uuid="b558585a6aa14470bdad319926a98046">tempest-ListServersNegativeTestJSON-179719916</nova:project>
Dec  6 02:21:38 np0005548731 nova_compute[232433]:      </nova:owner>
Dec  6 02:21:38 np0005548731 nova_compute[232433]:      <nova:root type="image" uuid="6efab05d-c7cf-4770-a5c3-c806a2739063"/>
Dec  6 02:21:38 np0005548731 nova_compute[232433]:      <nova:ports>
Dec  6 02:21:38 np0005548731 nova_compute[232433]:        <nova:port uuid="79a92003-0946-49ab-b85a-8fea0cd7c3cb">
Dec  6 02:21:38 np0005548731 nova_compute[232433]:          <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Dec  6 02:21:38 np0005548731 nova_compute[232433]:        </nova:port>
Dec  6 02:21:38 np0005548731 nova_compute[232433]:      </nova:ports>
Dec  6 02:21:38 np0005548731 nova_compute[232433]:    </nova:instance>
Dec  6 02:21:38 np0005548731 nova_compute[232433]:  </metadata>
Dec  6 02:21:38 np0005548731 nova_compute[232433]:  <sysinfo type="smbios">
Dec  6 02:21:38 np0005548731 nova_compute[232433]:    <system>
Dec  6 02:21:38 np0005548731 nova_compute[232433]:      <entry name="manufacturer">RDO</entry>
Dec  6 02:21:38 np0005548731 nova_compute[232433]:      <entry name="product">OpenStack Compute</entry>
Dec  6 02:21:38 np0005548731 nova_compute[232433]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec  6 02:21:38 np0005548731 nova_compute[232433]:      <entry name="serial">84bceb35-5d7d-4199-b98e-3dc437c4a7a3</entry>
Dec  6 02:21:38 np0005548731 nova_compute[232433]:      <entry name="uuid">84bceb35-5d7d-4199-b98e-3dc437c4a7a3</entry>
Dec  6 02:21:38 np0005548731 nova_compute[232433]:      <entry name="family">Virtual Machine</entry>
Dec  6 02:21:38 np0005548731 nova_compute[232433]:    </system>
Dec  6 02:21:38 np0005548731 nova_compute[232433]:  </sysinfo>
Dec  6 02:21:38 np0005548731 nova_compute[232433]:  <os>
Dec  6 02:21:38 np0005548731 nova_compute[232433]:    <type arch="x86_64" machine="q35">hvm</type>
Dec  6 02:21:38 np0005548731 nova_compute[232433]:    <boot dev="hd"/>
Dec  6 02:21:38 np0005548731 nova_compute[232433]:    <smbios mode="sysinfo"/>
Dec  6 02:21:38 np0005548731 nova_compute[232433]:  </os>
Dec  6 02:21:38 np0005548731 nova_compute[232433]:  <features>
Dec  6 02:21:38 np0005548731 nova_compute[232433]:    <acpi/>
Dec  6 02:21:38 np0005548731 nova_compute[232433]:    <apic/>
Dec  6 02:21:38 np0005548731 nova_compute[232433]:    <vmcoreinfo/>
Dec  6 02:21:38 np0005548731 nova_compute[232433]:  </features>
Dec  6 02:21:38 np0005548731 nova_compute[232433]:  <clock offset="utc">
Dec  6 02:21:38 np0005548731 nova_compute[232433]:    <timer name="pit" tickpolicy="delay"/>
Dec  6 02:21:38 np0005548731 nova_compute[232433]:    <timer name="rtc" tickpolicy="catchup"/>
Dec  6 02:21:38 np0005548731 nova_compute[232433]:    <timer name="hpet" present="no"/>
Dec  6 02:21:38 np0005548731 nova_compute[232433]:  </clock>
Dec  6 02:21:38 np0005548731 nova_compute[232433]:  <cpu mode="custom" match="exact">
Dec  6 02:21:38 np0005548731 nova_compute[232433]:    <model>Nehalem</model>
Dec  6 02:21:38 np0005548731 nova_compute[232433]:    <topology sockets="1" cores="1" threads="1"/>
Dec  6 02:21:38 np0005548731 nova_compute[232433]:  </cpu>
Dec  6 02:21:38 np0005548731 nova_compute[232433]:  <devices>
Dec  6 02:21:38 np0005548731 nova_compute[232433]:    <disk type="network" device="disk">
Dec  6 02:21:38 np0005548731 nova_compute[232433]:      <driver type="raw" cache="none"/>
Dec  6 02:21:38 np0005548731 nova_compute[232433]:      <source protocol="rbd" name="vms/84bceb35-5d7d-4199-b98e-3dc437c4a7a3_disk">
Dec  6 02:21:38 np0005548731 nova_compute[232433]:        <host name="192.168.122.100" port="6789"/>
Dec  6 02:21:38 np0005548731 nova_compute[232433]:        <host name="192.168.122.102" port="6789"/>
Dec  6 02:21:38 np0005548731 nova_compute[232433]:        <host name="192.168.122.101" port="6789"/>
Dec  6 02:21:38 np0005548731 nova_compute[232433]:      </source>
Dec  6 02:21:38 np0005548731 nova_compute[232433]:      <auth username="openstack">
Dec  6 02:21:38 np0005548731 nova_compute[232433]:        <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec  6 02:21:38 np0005548731 nova_compute[232433]:      </auth>
Dec  6 02:21:38 np0005548731 nova_compute[232433]:      <target dev="vda" bus="virtio"/>
Dec  6 02:21:38 np0005548731 nova_compute[232433]:    </disk>
Dec  6 02:21:38 np0005548731 nova_compute[232433]:    <disk type="network" device="cdrom">
Dec  6 02:21:38 np0005548731 nova_compute[232433]:      <driver type="raw" cache="none"/>
Dec  6 02:21:38 np0005548731 nova_compute[232433]:      <source protocol="rbd" name="vms/84bceb35-5d7d-4199-b98e-3dc437c4a7a3_disk.config">
Dec  6 02:21:38 np0005548731 nova_compute[232433]:        <host name="192.168.122.100" port="6789"/>
Dec  6 02:21:38 np0005548731 nova_compute[232433]:        <host name="192.168.122.102" port="6789"/>
Dec  6 02:21:38 np0005548731 nova_compute[232433]:        <host name="192.168.122.101" port="6789"/>
Dec  6 02:21:38 np0005548731 nova_compute[232433]:      </source>
Dec  6 02:21:38 np0005548731 nova_compute[232433]:      <auth username="openstack">
Dec  6 02:21:38 np0005548731 nova_compute[232433]:        <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec  6 02:21:38 np0005548731 nova_compute[232433]:      </auth>
Dec  6 02:21:38 np0005548731 nova_compute[232433]:      <target dev="sda" bus="sata"/>
Dec  6 02:21:38 np0005548731 nova_compute[232433]:    </disk>
Dec  6 02:21:38 np0005548731 nova_compute[232433]:    <interface type="ethernet">
Dec  6 02:21:38 np0005548731 nova_compute[232433]:      <mac address="fa:16:3e:9e:94:65"/>
Dec  6 02:21:38 np0005548731 nova_compute[232433]:      <model type="virtio"/>
Dec  6 02:21:38 np0005548731 nova_compute[232433]:      <driver name="vhost" rx_queue_size="512"/>
Dec  6 02:21:38 np0005548731 nova_compute[232433]:      <mtu size="1442"/>
Dec  6 02:21:38 np0005548731 nova_compute[232433]:      <target dev="tap79a92003-09"/>
Dec  6 02:21:38 np0005548731 nova_compute[232433]:    </interface>
Dec  6 02:21:38 np0005548731 nova_compute[232433]:    <serial type="pty">
Dec  6 02:21:38 np0005548731 nova_compute[232433]:      <log file="/var/lib/nova/instances/84bceb35-5d7d-4199-b98e-3dc437c4a7a3/console.log" append="off"/>
Dec  6 02:21:38 np0005548731 nova_compute[232433]:    </serial>
Dec  6 02:21:38 np0005548731 nova_compute[232433]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec  6 02:21:38 np0005548731 nova_compute[232433]:    <video>
Dec  6 02:21:38 np0005548731 nova_compute[232433]:      <model type="virtio"/>
Dec  6 02:21:38 np0005548731 nova_compute[232433]:    </video>
Dec  6 02:21:38 np0005548731 nova_compute[232433]:    <input type="tablet" bus="usb"/>
Dec  6 02:21:38 np0005548731 nova_compute[232433]:    <rng model="virtio">
Dec  6 02:21:38 np0005548731 nova_compute[232433]:      <backend model="random">/dev/urandom</backend>
Dec  6 02:21:38 np0005548731 nova_compute[232433]:    </rng>
Dec  6 02:21:38 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root"/>
Dec  6 02:21:38 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:21:38 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:21:38 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:21:38 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:21:38 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:21:38 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:21:38 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:21:38 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:21:38 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:21:38 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:21:38 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:21:38 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:21:38 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:21:38 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:21:38 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:21:38 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:21:38 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:21:38 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:21:38 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:21:38 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:21:38 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:21:38 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:21:38 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:21:38 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:21:38 np0005548731 nova_compute[232433]:    <controller type="usb" index="0"/>
Dec  6 02:21:38 np0005548731 nova_compute[232433]:    <memballoon model="virtio">
Dec  6 02:21:38 np0005548731 nova_compute[232433]:      <stats period="10"/>
Dec  6 02:21:38 np0005548731 nova_compute[232433]:    </memballoon>
Dec  6 02:21:38 np0005548731 nova_compute[232433]:  </devices>
Dec  6 02:21:38 np0005548731 nova_compute[232433]: </domain>
Dec  6 02:21:38 np0005548731 nova_compute[232433]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec  6 02:21:38 np0005548731 nova_compute[232433]: 2025-12-06 07:21:38.097 232437 DEBUG nova.compute.manager [None req-c08965c3-6c1d-4892-ac3f-c10623cb0853 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] [instance: 84bceb35-5d7d-4199-b98e-3dc437c4a7a3] Preparing to wait for external event network-vif-plugged-79a92003-0946-49ab-b85a-8fea0cd7c3cb prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec  6 02:21:38 np0005548731 nova_compute[232433]: 2025-12-06 07:21:38.097 232437 DEBUG oslo_concurrency.lockutils [None req-c08965c3-6c1d-4892-ac3f-c10623cb0853 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] Acquiring lock "84bceb35-5d7d-4199-b98e-3dc437c4a7a3-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:21:38 np0005548731 nova_compute[232433]: 2025-12-06 07:21:38.098 232437 DEBUG oslo_concurrency.lockutils [None req-c08965c3-6c1d-4892-ac3f-c10623cb0853 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] Lock "84bceb35-5d7d-4199-b98e-3dc437c4a7a3-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:21:38 np0005548731 nova_compute[232433]: 2025-12-06 07:21:38.098 232437 DEBUG oslo_concurrency.lockutils [None req-c08965c3-6c1d-4892-ac3f-c10623cb0853 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] Lock "84bceb35-5d7d-4199-b98e-3dc437c4a7a3-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:21:38 np0005548731 nova_compute[232433]: 2025-12-06 07:21:38.098 232437 DEBUG nova.virt.libvirt.vif [None req-c08965c3-6c1d-4892-ac3f-c10623cb0853 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T07:21:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ListServersNegativeTestJSON-server-1422412228',display_name='tempest-ListServersNegativeTestJSON-server-1422412228-1',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-listserversnegativetestjson-server-1422412228-1',id=89,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b558585a6aa14470bdad319926a98046',ramdisk_id='',reservation_id='r-46lv2gf4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ListServersNegativeTestJSON-179719916',owner_user_name='tempest-ListServersNegativeTestJSON-179719916-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:21:33Z,user_data=None,user_id='a52e2b4388994d8791443483bd42cc33',uuid=84bceb35-5d7d-4199-b98e-3dc437c4a7a3,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "79a92003-0946-49ab-b85a-8fea0cd7c3cb", "address": "fa:16:3e:9e:94:65", "network": {"id": "77f3ccc8-bb54-46a1-b015-cc5b8a445202", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-573376355-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b558585a6aa14470bdad319926a98046", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap79a92003-09", "ovs_interfaceid": "79a92003-0946-49ab-b85a-8fea0cd7c3cb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec  6 02:21:38 np0005548731 nova_compute[232433]: 2025-12-06 07:21:38.099 232437 DEBUG nova.network.os_vif_util [None req-c08965c3-6c1d-4892-ac3f-c10623cb0853 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] Converting VIF {"id": "79a92003-0946-49ab-b85a-8fea0cd7c3cb", "address": "fa:16:3e:9e:94:65", "network": {"id": "77f3ccc8-bb54-46a1-b015-cc5b8a445202", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-573376355-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b558585a6aa14470bdad319926a98046", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap79a92003-09", "ovs_interfaceid": "79a92003-0946-49ab-b85a-8fea0cd7c3cb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 02:21:38 np0005548731 nova_compute[232433]: 2025-12-06 07:21:38.099 232437 DEBUG nova.network.os_vif_util [None req-c08965c3-6c1d-4892-ac3f-c10623cb0853 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9e:94:65,bridge_name='br-int',has_traffic_filtering=True,id=79a92003-0946-49ab-b85a-8fea0cd7c3cb,network=Network(77f3ccc8-bb54-46a1-b015-cc5b8a445202),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap79a92003-09') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 02:21:38 np0005548731 nova_compute[232433]: 2025-12-06 07:21:38.100 232437 DEBUG os_vif [None req-c08965c3-6c1d-4892-ac3f-c10623cb0853 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:9e:94:65,bridge_name='br-int',has_traffic_filtering=True,id=79a92003-0946-49ab-b85a-8fea0cd7c3cb,network=Network(77f3ccc8-bb54-46a1-b015-cc5b8a445202),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap79a92003-09') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec  6 02:21:38 np0005548731 nova_compute[232433]: 2025-12-06 07:21:38.100 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:21:38 np0005548731 nova_compute[232433]: 2025-12-06 07:21:38.100 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:21:38 np0005548731 nova_compute[232433]: 2025-12-06 07:21:38.101 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  6 02:21:38 np0005548731 nova_compute[232433]: 2025-12-06 07:21:38.103 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:21:38 np0005548731 nova_compute[232433]: 2025-12-06 07:21:38.104 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap79a92003-09, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:21:38 np0005548731 nova_compute[232433]: 2025-12-06 07:21:38.104 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap79a92003-09, col_values=(('external_ids', {'iface-id': '79a92003-0946-49ab-b85a-8fea0cd7c3cb', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:9e:94:65', 'vm-uuid': '84bceb35-5d7d-4199-b98e-3dc437c4a7a3'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:21:38 np0005548731 nova_compute[232433]: 2025-12-06 07:21:38.154 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:21:38 np0005548731 NetworkManager[49182]: <info>  [1765005698.1553] manager: (tap79a92003-09): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/172)
Dec  6 02:21:38 np0005548731 nova_compute[232433]: 2025-12-06 07:21:38.158 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  6 02:21:38 np0005548731 nova_compute[232433]: 2025-12-06 07:21:38.160 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:21:38 np0005548731 nova_compute[232433]: 2025-12-06 07:21:38.160 232437 INFO os_vif [None req-c08965c3-6c1d-4892-ac3f-c10623cb0853 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:9e:94:65,bridge_name='br-int',has_traffic_filtering=True,id=79a92003-0946-49ab-b85a-8fea0cd7c3cb,network=Network(77f3ccc8-bb54-46a1-b015-cc5b8a445202),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap79a92003-09')#033[00m
Dec  6 02:21:38 np0005548731 nova_compute[232433]: 2025-12-06 07:21:38.219 232437 DEBUG nova.virt.libvirt.driver [None req-c08965c3-6c1d-4892-ac3f-c10623cb0853 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  6 02:21:38 np0005548731 nova_compute[232433]: 2025-12-06 07:21:38.219 232437 DEBUG nova.virt.libvirt.driver [None req-c08965c3-6c1d-4892-ac3f-c10623cb0853 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  6 02:21:38 np0005548731 nova_compute[232433]: 2025-12-06 07:21:38.219 232437 DEBUG nova.virt.libvirt.driver [None req-c08965c3-6c1d-4892-ac3f-c10623cb0853 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] No VIF found with MAC fa:16:3e:9e:94:65, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec  6 02:21:38 np0005548731 nova_compute[232433]: 2025-12-06 07:21:38.220 232437 INFO nova.virt.libvirt.driver [None req-c08965c3-6c1d-4892-ac3f-c10623cb0853 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] [instance: 84bceb35-5d7d-4199-b98e-3dc437c4a7a3] Using config drive#033[00m
Dec  6 02:21:38 np0005548731 nova_compute[232433]: 2025-12-06 07:21:38.247 232437 DEBUG nova.storage.rbd_utils [None req-c08965c3-6c1d-4892-ac3f-c10623cb0853 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] rbd image 84bceb35-5d7d-4199-b98e-3dc437c4a7a3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:21:38 np0005548731 nova_compute[232433]: 2025-12-06 07:21:38.604 232437 INFO nova.virt.libvirt.driver [None req-c08965c3-6c1d-4892-ac3f-c10623cb0853 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] [instance: 84bceb35-5d7d-4199-b98e-3dc437c4a7a3] Creating config drive at /var/lib/nova/instances/84bceb35-5d7d-4199-b98e-3dc437c4a7a3/disk.config#033[00m
Dec  6 02:21:38 np0005548731 nova_compute[232433]: 2025-12-06 07:21:38.610 232437 DEBUG oslo_concurrency.processutils [None req-c08965c3-6c1d-4892-ac3f-c10623cb0853 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/84bceb35-5d7d-4199-b98e-3dc437c4a7a3/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpmzjf4uhz execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:21:38 np0005548731 nova_compute[232433]: 2025-12-06 07:21:38.742 232437 DEBUG oslo_concurrency.processutils [None req-c08965c3-6c1d-4892-ac3f-c10623cb0853 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/84bceb35-5d7d-4199-b98e-3dc437c4a7a3/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpmzjf4uhz" returned: 0 in 0.132s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:21:38 np0005548731 nova_compute[232433]: 2025-12-06 07:21:38.780 232437 DEBUG nova.storage.rbd_utils [None req-c08965c3-6c1d-4892-ac3f-c10623cb0853 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] rbd image 84bceb35-5d7d-4199-b98e-3dc437c4a7a3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:21:38 np0005548731 nova_compute[232433]: 2025-12-06 07:21:38.785 232437 DEBUG oslo_concurrency.processutils [None req-c08965c3-6c1d-4892-ac3f-c10623cb0853 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/84bceb35-5d7d-4199-b98e-3dc437c4a7a3/disk.config 84bceb35-5d7d-4199-b98e-3dc437c4a7a3_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:21:38 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:21:38 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:21:38 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:21:38.840 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:21:39 np0005548731 nova_compute[232433]: 2025-12-06 07:21:39.050 232437 DEBUG nova.network.neutron [req-1de4beb6-2ee0-4abb-89e1-77a3290e66ca req-cf2fac8d-7fd4-45ed-b508-f15a2290383b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 84bceb35-5d7d-4199-b98e-3dc437c4a7a3] Updated VIF entry in instance network info cache for port 79a92003-0946-49ab-b85a-8fea0cd7c3cb. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  6 02:21:39 np0005548731 nova_compute[232433]: 2025-12-06 07:21:39.051 232437 DEBUG nova.network.neutron [req-1de4beb6-2ee0-4abb-89e1-77a3290e66ca req-cf2fac8d-7fd4-45ed-b508-f15a2290383b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 84bceb35-5d7d-4199-b98e-3dc437c4a7a3] Updating instance_info_cache with network_info: [{"id": "79a92003-0946-49ab-b85a-8fea0cd7c3cb", "address": "fa:16:3e:9e:94:65", "network": {"id": "77f3ccc8-bb54-46a1-b015-cc5b8a445202", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-573376355-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b558585a6aa14470bdad319926a98046", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap79a92003-09", "ovs_interfaceid": "79a92003-0946-49ab-b85a-8fea0cd7c3cb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 02:21:39 np0005548731 nova_compute[232433]: 2025-12-06 07:21:39.068 232437 DEBUG oslo_concurrency.lockutils [req-1de4beb6-2ee0-4abb-89e1-77a3290e66ca req-cf2fac8d-7fd4-45ed-b508-f15a2290383b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-84bceb35-5d7d-4199-b98e-3dc437c4a7a3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 02:21:39 np0005548731 nova_compute[232433]: 2025-12-06 07:21:39.183 232437 DEBUG oslo_concurrency.processutils [None req-c08965c3-6c1d-4892-ac3f-c10623cb0853 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/84bceb35-5d7d-4199-b98e-3dc437c4a7a3/disk.config 84bceb35-5d7d-4199-b98e-3dc437c4a7a3_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.399s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:21:39 np0005548731 nova_compute[232433]: 2025-12-06 07:21:39.184 232437 INFO nova.virt.libvirt.driver [None req-c08965c3-6c1d-4892-ac3f-c10623cb0853 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] [instance: 84bceb35-5d7d-4199-b98e-3dc437c4a7a3] Deleting local config drive /var/lib/nova/instances/84bceb35-5d7d-4199-b98e-3dc437c4a7a3/disk.config because it was imported into RBD.#033[00m
Dec  6 02:21:39 np0005548731 kernel: tap79a92003-09: entered promiscuous mode
Dec  6 02:21:39 np0005548731 NetworkManager[49182]: <info>  [1765005699.2353] manager: (tap79a92003-09): new Tun device (/org/freedesktop/NetworkManager/Devices/173)
Dec  6 02:21:39 np0005548731 systemd-udevd[268512]: Network interface NamePolicy= disabled on kernel command line.
Dec  6 02:21:39 np0005548731 ovn_controller[133927]: 2025-12-06T07:21:39Z|00337|binding|INFO|Claiming lport 79a92003-0946-49ab-b85a-8fea0cd7c3cb for this chassis.
Dec  6 02:21:39 np0005548731 ovn_controller[133927]: 2025-12-06T07:21:39Z|00338|binding|INFO|79a92003-0946-49ab-b85a-8fea0cd7c3cb: Claiming fa:16:3e:9e:94:65 10.100.0.7
Dec  6 02:21:39 np0005548731 nova_compute[232433]: 2025-12-06 07:21:39.272 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:21:39 np0005548731 nova_compute[232433]: 2025-12-06 07:21:39.279 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:21:39 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:21:39.286 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9e:94:65 10.100.0.7'], port_security=['fa:16:3e:9e:94:65 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '84bceb35-5d7d-4199-b98e-3dc437c4a7a3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-77f3ccc8-bb54-46a1-b015-cc5b8a445202', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b558585a6aa14470bdad319926a98046', 'neutron:revision_number': '2', 'neutron:security_group_ids': '0eb8f52e-2f68-4151-8464-0d3b0eb6798f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3c0e6fde-267f-49e6-86d0-b0c0ced92a7c, chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], logical_port=79a92003-0946-49ab-b85a-8fea0cd7c3cb) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 02:21:39 np0005548731 NetworkManager[49182]: <info>  [1765005699.2885] device (tap79a92003-09): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec  6 02:21:39 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:21:39.288 143965 INFO neutron.agent.ovn.metadata.agent [-] Port 79a92003-0946-49ab-b85a-8fea0cd7c3cb in datapath 77f3ccc8-bb54-46a1-b015-cc5b8a445202 bound to our chassis#033[00m
Dec  6 02:21:39 np0005548731 NetworkManager[49182]: <info>  [1765005699.2892] device (tap79a92003-09): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec  6 02:21:39 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:21:39.291 143965 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 77f3ccc8-bb54-46a1-b015-cc5b8a445202#033[00m
Dec  6 02:21:39 np0005548731 systemd-machined[195355]: New machine qemu-37-instance-00000059.
Dec  6 02:21:39 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:21:39.302 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[69a6253f-049b-4c9f-85e2-5e816f85d77e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:21:39 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:21:39.303 143965 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap77f3ccc8-b1 in ovnmeta-77f3ccc8-bb54-46a1-b015-cc5b8a445202 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Dec  6 02:21:39 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:21:39.304 236668 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap77f3ccc8-b0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Dec  6 02:21:39 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:21:39.305 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[ae8038cc-f3a9-488d-9906-cdd48243d270]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:21:39 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:21:39.305 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[1e866e1f-eee4-4274-88c2-18676a29e22d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:21:39 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:21:39.317 144079 DEBUG oslo.privsep.daemon [-] privsep: reply[0e900e84-378a-4002-bfd9-0f4c3764a3a9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:21:39 np0005548731 nova_compute[232433]: 2025-12-06 07:21:39.317 232437 DEBUG oslo_concurrency.lockutils [None req-fda6bedb-fa77-40ef-9313-0ab64b2b782b c2404fce4d1f48779c97d3cbcb6ae594 9536676f60844b6f802518771f02409f - - default default] Acquiring lock "6dc14838-5602-4f43-a3e3-2374b4f603eb" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:21:39 np0005548731 nova_compute[232433]: 2025-12-06 07:21:39.318 232437 DEBUG oslo_concurrency.lockutils [None req-fda6bedb-fa77-40ef-9313-0ab64b2b782b c2404fce4d1f48779c97d3cbcb6ae594 9536676f60844b6f802518771f02409f - - default default] Lock "6dc14838-5602-4f43-a3e3-2374b4f603eb" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:21:39 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:21:39 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:21:39 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:21:39.319 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:21:39 np0005548731 systemd[1]: Started Virtual Machine qemu-37-instance-00000059.
Dec  6 02:21:39 np0005548731 nova_compute[232433]: 2025-12-06 07:21:39.335 232437 DEBUG nova.compute.manager [None req-fda6bedb-fa77-40ef-9313-0ab64b2b782b c2404fce4d1f48779c97d3cbcb6ae594 9536676f60844b6f802518771f02409f - - default default] [instance: 6dc14838-5602-4f43-a3e3-2374b4f603eb] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Dec  6 02:21:39 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:21:39.341 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[ce9e4b78-21dc-4035-8607-14ed584db0d7]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:21:39 np0005548731 ovn_controller[133927]: 2025-12-06T07:21:39Z|00339|binding|INFO|Setting lport 79a92003-0946-49ab-b85a-8fea0cd7c3cb ovn-installed in OVS
Dec  6 02:21:39 np0005548731 ovn_controller[133927]: 2025-12-06T07:21:39Z|00340|binding|INFO|Setting lport 79a92003-0946-49ab-b85a-8fea0cd7c3cb up in Southbound
Dec  6 02:21:39 np0005548731 nova_compute[232433]: 2025-12-06 07:21:39.349 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:21:39 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:21:39.375 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[dbb43628-e0e7-4f1d-9fcc-9f75cc9a0cb6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:21:39 np0005548731 NetworkManager[49182]: <info>  [1765005699.3820] manager: (tap77f3ccc8-b0): new Veth device (/org/freedesktop/NetworkManager/Devices/174)
Dec  6 02:21:39 np0005548731 systemd-udevd[268516]: Network interface NamePolicy= disabled on kernel command line.
Dec  6 02:21:39 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:21:39.382 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[6cf6d638-b2d7-4c45-b17f-53f2cfd440b4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:21:39 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:21:39.411 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[7bddb821-d95b-4f23-9d98-a7da425cd62f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:21:39 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:21:39.414 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[e3222958-0c69-40b2-825b-38c12aea682d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:21:39 np0005548731 nova_compute[232433]: 2025-12-06 07:21:39.416 232437 DEBUG oslo_concurrency.lockutils [None req-fda6bedb-fa77-40ef-9313-0ab64b2b782b c2404fce4d1f48779c97d3cbcb6ae594 9536676f60844b6f802518771f02409f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:21:39 np0005548731 nova_compute[232433]: 2025-12-06 07:21:39.417 232437 DEBUG oslo_concurrency.lockutils [None req-fda6bedb-fa77-40ef-9313-0ab64b2b782b c2404fce4d1f48779c97d3cbcb6ae594 9536676f60844b6f802518771f02409f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:21:39 np0005548731 nova_compute[232433]: 2025-12-06 07:21:39.423 232437 DEBUG nova.virt.hardware [None req-fda6bedb-fa77-40ef-9313-0ab64b2b782b c2404fce4d1f48779c97d3cbcb6ae594 9536676f60844b6f802518771f02409f - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Dec  6 02:21:39 np0005548731 nova_compute[232433]: 2025-12-06 07:21:39.423 232437 INFO nova.compute.claims [None req-fda6bedb-fa77-40ef-9313-0ab64b2b782b c2404fce4d1f48779c97d3cbcb6ae594 9536676f60844b6f802518771f02409f - - default default] [instance: 6dc14838-5602-4f43-a3e3-2374b4f603eb] Claim successful on node compute-2.ctlplane.example.com#033[00m
Dec  6 02:21:39 np0005548731 NetworkManager[49182]: <info>  [1765005699.4339] device (tap77f3ccc8-b0): carrier: link connected
Dec  6 02:21:39 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:21:39.440 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[0bbe9927-9d66-4892-bc81-6a02392a67fe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:21:39 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:21:39.454 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[d5af9a7e-0fb3-4174-8b8d-15a379091141]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap77f3ccc8-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:34:76:9c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 110], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 596144, 'reachable_time': 36595, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 268548, 'error': None, 'target': 'ovnmeta-77f3ccc8-bb54-46a1-b015-cc5b8a445202', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:21:39 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:21:39.470 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[fab10ac0-decf-4217-8a72-9bde4d1257fe]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe34:769c'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 596144, 'tstamp': 596144}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 268549, 'error': None, 'target': 'ovnmeta-77f3ccc8-bb54-46a1-b015-cc5b8a445202', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:21:39 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:21:39.485 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[2d9f6d94-7410-4bb6-a926-762f95ccc9e8]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap77f3ccc8-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:34:76:9c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 110], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 596144, 'reachable_time': 36595, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 268550, 'error': None, 'target': 'ovnmeta-77f3ccc8-bb54-46a1-b015-cc5b8a445202', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:21:39 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:21:39.511 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[1210bd31-eeb7-447e-858f-af523b5bd80a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:21:39 np0005548731 nova_compute[232433]: 2025-12-06 07:21:39.550 232437 DEBUG oslo_concurrency.processutils [None req-fda6bedb-fa77-40ef-9313-0ab64b2b782b c2404fce4d1f48779c97d3cbcb6ae594 9536676f60844b6f802518771f02409f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:21:39 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:21:39.570 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[bb9d64e4-fa85-459b-8f8a-c40e817ad3ef]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:21:39 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:21:39.572 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap77f3ccc8-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:21:39 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:21:39.572 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  6 02:21:39 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:21:39.572 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap77f3ccc8-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:21:39 np0005548731 NetworkManager[49182]: <info>  [1765005699.5747] manager: (tap77f3ccc8-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/175)
Dec  6 02:21:39 np0005548731 kernel: tap77f3ccc8-b0: entered promiscuous mode
Dec  6 02:21:39 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:21:39.577 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap77f3ccc8-b0, col_values=(('external_ids', {'iface-id': '556411fa-e8d1-4a0d-8496-4416c2200434'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:21:39 np0005548731 ovn_controller[133927]: 2025-12-06T07:21:39Z|00341|binding|INFO|Releasing lport 556411fa-e8d1-4a0d-8496-4416c2200434 from this chassis (sb_readonly=0)
Dec  6 02:21:39 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:21:39.594 143965 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/77f3ccc8-bb54-46a1-b015-cc5b8a445202.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/77f3ccc8-bb54-46a1-b015-cc5b8a445202.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Dec  6 02:21:39 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:21:39.595 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[938b5294-db6a-4e37-b7b4-ee674ad55d70]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:21:39 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:21:39.596 143965 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec  6 02:21:39 np0005548731 ovn_metadata_agent[143960]: global
Dec  6 02:21:39 np0005548731 ovn_metadata_agent[143960]:    log         /dev/log local0 debug
Dec  6 02:21:39 np0005548731 ovn_metadata_agent[143960]:    log-tag     haproxy-metadata-proxy-77f3ccc8-bb54-46a1-b015-cc5b8a445202
Dec  6 02:21:39 np0005548731 ovn_metadata_agent[143960]:    user        root
Dec  6 02:21:39 np0005548731 ovn_metadata_agent[143960]:    group       root
Dec  6 02:21:39 np0005548731 ovn_metadata_agent[143960]:    maxconn     1024
Dec  6 02:21:39 np0005548731 ovn_metadata_agent[143960]:    pidfile     /var/lib/neutron/external/pids/77f3ccc8-bb54-46a1-b015-cc5b8a445202.pid.haproxy
Dec  6 02:21:39 np0005548731 ovn_metadata_agent[143960]:    daemon
Dec  6 02:21:39 np0005548731 ovn_metadata_agent[143960]: 
Dec  6 02:21:39 np0005548731 ovn_metadata_agent[143960]: defaults
Dec  6 02:21:39 np0005548731 ovn_metadata_agent[143960]:    log global
Dec  6 02:21:39 np0005548731 ovn_metadata_agent[143960]:    mode http
Dec  6 02:21:39 np0005548731 ovn_metadata_agent[143960]:    option httplog
Dec  6 02:21:39 np0005548731 ovn_metadata_agent[143960]:    option dontlognull
Dec  6 02:21:39 np0005548731 ovn_metadata_agent[143960]:    option http-server-close
Dec  6 02:21:39 np0005548731 ovn_metadata_agent[143960]:    option forwardfor
Dec  6 02:21:39 np0005548731 ovn_metadata_agent[143960]:    retries                 3
Dec  6 02:21:39 np0005548731 ovn_metadata_agent[143960]:    timeout http-request    30s
Dec  6 02:21:39 np0005548731 ovn_metadata_agent[143960]:    timeout connect         30s
Dec  6 02:21:39 np0005548731 ovn_metadata_agent[143960]:    timeout client          32s
Dec  6 02:21:39 np0005548731 ovn_metadata_agent[143960]:    timeout server          32s
Dec  6 02:21:39 np0005548731 ovn_metadata_agent[143960]:    timeout http-keep-alive 30s
Dec  6 02:21:39 np0005548731 ovn_metadata_agent[143960]: 
Dec  6 02:21:39 np0005548731 ovn_metadata_agent[143960]: 
Dec  6 02:21:39 np0005548731 ovn_metadata_agent[143960]: listen listener
Dec  6 02:21:39 np0005548731 ovn_metadata_agent[143960]:    bind 169.254.169.254:80
Dec  6 02:21:39 np0005548731 ovn_metadata_agent[143960]:    server metadata /var/lib/neutron/metadata_proxy
Dec  6 02:21:39 np0005548731 ovn_metadata_agent[143960]:    http-request add-header X-OVN-Network-ID 77f3ccc8-bb54-46a1-b015-cc5b8a445202
Dec  6 02:21:39 np0005548731 ovn_metadata_agent[143960]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Dec  6 02:21:39 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:21:39.597 143965 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-77f3ccc8-bb54-46a1-b015-cc5b8a445202', 'env', 'PROCESS_TAG=haproxy-77f3ccc8-bb54-46a1-b015-cc5b8a445202', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/77f3ccc8-bb54-46a1-b015-cc5b8a445202.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Dec  6 02:21:39 np0005548731 nova_compute[232433]: 2025-12-06 07:21:39.597 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:21:39 np0005548731 nova_compute[232433]: 2025-12-06 07:21:39.805 232437 DEBUG nova.compute.manager [req-a91ce6f9-947c-4fbb-9376-2e2b0ac13afc req-e9d20e56-b7b7-4daa-8569-174c6391647e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 84bceb35-5d7d-4199-b98e-3dc437c4a7a3] Received event network-vif-plugged-79a92003-0946-49ab-b85a-8fea0cd7c3cb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:21:39 np0005548731 nova_compute[232433]: 2025-12-06 07:21:39.806 232437 DEBUG oslo_concurrency.lockutils [req-a91ce6f9-947c-4fbb-9376-2e2b0ac13afc req-e9d20e56-b7b7-4daa-8569-174c6391647e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "84bceb35-5d7d-4199-b98e-3dc437c4a7a3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:21:39 np0005548731 nova_compute[232433]: 2025-12-06 07:21:39.807 232437 DEBUG oslo_concurrency.lockutils [req-a91ce6f9-947c-4fbb-9376-2e2b0ac13afc req-e9d20e56-b7b7-4daa-8569-174c6391647e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "84bceb35-5d7d-4199-b98e-3dc437c4a7a3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:21:39 np0005548731 nova_compute[232433]: 2025-12-06 07:21:39.807 232437 DEBUG oslo_concurrency.lockutils [req-a91ce6f9-947c-4fbb-9376-2e2b0ac13afc req-e9d20e56-b7b7-4daa-8569-174c6391647e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "84bceb35-5d7d-4199-b98e-3dc437c4a7a3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:21:39 np0005548731 nova_compute[232433]: 2025-12-06 07:21:39.808 232437 DEBUG nova.compute.manager [req-a91ce6f9-947c-4fbb-9376-2e2b0ac13afc req-e9d20e56-b7b7-4daa-8569-174c6391647e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 84bceb35-5d7d-4199-b98e-3dc437c4a7a3] Processing event network-vif-plugged-79a92003-0946-49ab-b85a-8fea0cd7c3cb _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Dec  6 02:21:39 np0005548731 nova_compute[232433]: 2025-12-06 07:21:39.808 232437 DEBUG nova.compute.manager [req-a91ce6f9-947c-4fbb-9376-2e2b0ac13afc req-e9d20e56-b7b7-4daa-8569-174c6391647e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 84bceb35-5d7d-4199-b98e-3dc437c4a7a3] Received event network-vif-plugged-79a92003-0946-49ab-b85a-8fea0cd7c3cb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:21:39 np0005548731 nova_compute[232433]: 2025-12-06 07:21:39.808 232437 DEBUG oslo_concurrency.lockutils [req-a91ce6f9-947c-4fbb-9376-2e2b0ac13afc req-e9d20e56-b7b7-4daa-8569-174c6391647e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "84bceb35-5d7d-4199-b98e-3dc437c4a7a3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:21:39 np0005548731 nova_compute[232433]: 2025-12-06 07:21:39.809 232437 DEBUG oslo_concurrency.lockutils [req-a91ce6f9-947c-4fbb-9376-2e2b0ac13afc req-e9d20e56-b7b7-4daa-8569-174c6391647e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "84bceb35-5d7d-4199-b98e-3dc437c4a7a3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:21:39 np0005548731 nova_compute[232433]: 2025-12-06 07:21:39.809 232437 DEBUG oslo_concurrency.lockutils [req-a91ce6f9-947c-4fbb-9376-2e2b0ac13afc req-e9d20e56-b7b7-4daa-8569-174c6391647e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "84bceb35-5d7d-4199-b98e-3dc437c4a7a3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:21:39 np0005548731 nova_compute[232433]: 2025-12-06 07:21:39.809 232437 DEBUG nova.compute.manager [req-a91ce6f9-947c-4fbb-9376-2e2b0ac13afc req-e9d20e56-b7b7-4daa-8569-174c6391647e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 84bceb35-5d7d-4199-b98e-3dc437c4a7a3] No waiting events found dispatching network-vif-plugged-79a92003-0946-49ab-b85a-8fea0cd7c3cb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 02:21:39 np0005548731 nova_compute[232433]: 2025-12-06 07:21:39.810 232437 WARNING nova.compute.manager [req-a91ce6f9-947c-4fbb-9376-2e2b0ac13afc req-e9d20e56-b7b7-4daa-8569-174c6391647e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 84bceb35-5d7d-4199-b98e-3dc437c4a7a3] Received unexpected event network-vif-plugged-79a92003-0946-49ab-b85a-8fea0cd7c3cb for instance with vm_state building and task_state spawning.#033[00m
Dec  6 02:21:39 np0005548731 podman[268602]: 2025-12-06 07:21:39.932084996 +0000 UTC m=+0.048245440 container create 82ca8fe0efd56e1ef419f51277d0e61b19d2d3f08f3f6ba31ec9e7a31ed8a5ee (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-77f3ccc8-bb54-46a1-b015-cc5b8a445202, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  6 02:21:39 np0005548731 systemd[1]: Started libpod-conmon-82ca8fe0efd56e1ef419f51277d0e61b19d2d3f08f3f6ba31ec9e7a31ed8a5ee.scope.
Dec  6 02:21:39 np0005548731 systemd[1]: Started libcrun container.
Dec  6 02:21:39 np0005548731 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8ea0177ad7655fdc21aa4a2a6d6034a901553adb51dd3343ad63c728ebd73d77/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec  6 02:21:40 np0005548731 podman[268602]: 2025-12-06 07:21:40.002303674 +0000 UTC m=+0.118464138 container init 82ca8fe0efd56e1ef419f51277d0e61b19d2d3f08f3f6ba31ec9e7a31ed8a5ee (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-77f3ccc8-bb54-46a1-b015-cc5b8a445202, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  6 02:21:40 np0005548731 podman[268602]: 2025-12-06 07:21:39.908004702 +0000 UTC m=+0.024165166 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec  6 02:21:40 np0005548731 podman[268602]: 2025-12-06 07:21:40.008108359 +0000 UTC m=+0.124268813 container start 82ca8fe0efd56e1ef419f51277d0e61b19d2d3f08f3f6ba31ec9e7a31ed8a5ee (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-77f3ccc8-bb54-46a1-b015-cc5b8a445202, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_managed=true)
Dec  6 02:21:40 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 02:21:40 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3014881078' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 02:21:40 np0005548731 neutron-haproxy-ovnmeta-77f3ccc8-bb54-46a1-b015-cc5b8a445202[268617]: [NOTICE]   (268622) : New worker (268625) forked
Dec  6 02:21:40 np0005548731 neutron-haproxy-ovnmeta-77f3ccc8-bb54-46a1-b015-cc5b8a445202[268617]: [NOTICE]   (268622) : Loading success.
Dec  6 02:21:40 np0005548731 nova_compute[232433]: 2025-12-06 07:21:40.034 232437 DEBUG oslo_concurrency.processutils [None req-fda6bedb-fa77-40ef-9313-0ab64b2b782b c2404fce4d1f48779c97d3cbcb6ae594 9536676f60844b6f802518771f02409f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.484s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:21:40 np0005548731 nova_compute[232433]: 2025-12-06 07:21:40.039 232437 DEBUG nova.compute.provider_tree [None req-fda6bedb-fa77-40ef-9313-0ab64b2b782b c2404fce4d1f48779c97d3cbcb6ae594 9536676f60844b6f802518771f02409f - - default default] Inventory has not changed in ProviderTree for provider: 6d00757a-082f-486d-ae84-869a2ba2e6e7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  6 02:21:40 np0005548731 nova_compute[232433]: 2025-12-06 07:21:40.058 232437 DEBUG nova.scheduler.client.report [None req-fda6bedb-fa77-40ef-9313-0ab64b2b782b c2404fce4d1f48779c97d3cbcb6ae594 9536676f60844b6f802518771f02409f - - default default] Inventory has not changed for provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  6 02:21:40 np0005548731 nova_compute[232433]: 2025-12-06 07:21:40.091 232437 DEBUG oslo_concurrency.lockutils [None req-fda6bedb-fa77-40ef-9313-0ab64b2b782b c2404fce4d1f48779c97d3cbcb6ae594 9536676f60844b6f802518771f02409f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.674s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:21:40 np0005548731 nova_compute[232433]: 2025-12-06 07:21:40.092 232437 DEBUG nova.compute.manager [None req-fda6bedb-fa77-40ef-9313-0ab64b2b782b c2404fce4d1f48779c97d3cbcb6ae594 9536676f60844b6f802518771f02409f - - default default] [instance: 6dc14838-5602-4f43-a3e3-2374b4f603eb] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Dec  6 02:21:40 np0005548731 nova_compute[232433]: 2025-12-06 07:21:40.141 232437 DEBUG nova.compute.manager [None req-fda6bedb-fa77-40ef-9313-0ab64b2b782b c2404fce4d1f48779c97d3cbcb6ae594 9536676f60844b6f802518771f02409f - - default default] [instance: 6dc14838-5602-4f43-a3e3-2374b4f603eb] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Dec  6 02:21:40 np0005548731 nova_compute[232433]: 2025-12-06 07:21:40.142 232437 DEBUG nova.network.neutron [None req-fda6bedb-fa77-40ef-9313-0ab64b2b782b c2404fce4d1f48779c97d3cbcb6ae594 9536676f60844b6f802518771f02409f - - default default] [instance: 6dc14838-5602-4f43-a3e3-2374b4f603eb] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Dec  6 02:21:40 np0005548731 nova_compute[232433]: 2025-12-06 07:21:40.161 232437 INFO nova.virt.libvirt.driver [None req-fda6bedb-fa77-40ef-9313-0ab64b2b782b c2404fce4d1f48779c97d3cbcb6ae594 9536676f60844b6f802518771f02409f - - default default] [instance: 6dc14838-5602-4f43-a3e3-2374b4f603eb] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Dec  6 02:21:40 np0005548731 nova_compute[232433]: 2025-12-06 07:21:40.178 232437 DEBUG nova.compute.manager [None req-fda6bedb-fa77-40ef-9313-0ab64b2b782b c2404fce4d1f48779c97d3cbcb6ae594 9536676f60844b6f802518771f02409f - - default default] [instance: 6dc14838-5602-4f43-a3e3-2374b4f603eb] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Dec  6 02:21:40 np0005548731 nova_compute[232433]: 2025-12-06 07:21:40.234 232437 DEBUG nova.virt.driver [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Emitting event <LifecycleEvent: 1765005700.2335975, 84bceb35-5d7d-4199-b98e-3dc437c4a7a3 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 02:21:40 np0005548731 nova_compute[232433]: 2025-12-06 07:21:40.234 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 84bceb35-5d7d-4199-b98e-3dc437c4a7a3] VM Started (Lifecycle Event)#033[00m
Dec  6 02:21:40 np0005548731 nova_compute[232433]: 2025-12-06 07:21:40.240 232437 DEBUG nova.compute.manager [None req-c08965c3-6c1d-4892-ac3f-c10623cb0853 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] [instance: 84bceb35-5d7d-4199-b98e-3dc437c4a7a3] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec  6 02:21:40 np0005548731 nova_compute[232433]: 2025-12-06 07:21:40.244 232437 DEBUG nova.virt.libvirt.driver [None req-c08965c3-6c1d-4892-ac3f-c10623cb0853 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] [instance: 84bceb35-5d7d-4199-b98e-3dc437c4a7a3] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Dec  6 02:21:40 np0005548731 nova_compute[232433]: 2025-12-06 07:21:40.246 232437 INFO nova.virt.libvirt.driver [-] [instance: 84bceb35-5d7d-4199-b98e-3dc437c4a7a3] Instance spawned successfully.#033[00m
Dec  6 02:21:40 np0005548731 nova_compute[232433]: 2025-12-06 07:21:40.247 232437 DEBUG nova.virt.libvirt.driver [None req-c08965c3-6c1d-4892-ac3f-c10623cb0853 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] [instance: 84bceb35-5d7d-4199-b98e-3dc437c4a7a3] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Dec  6 02:21:40 np0005548731 nova_compute[232433]: 2025-12-06 07:21:40.269 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 84bceb35-5d7d-4199-b98e-3dc437c4a7a3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:21:40 np0005548731 nova_compute[232433]: 2025-12-06 07:21:40.275 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 84bceb35-5d7d-4199-b98e-3dc437c4a7a3] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  6 02:21:40 np0005548731 nova_compute[232433]: 2025-12-06 07:21:40.277 232437 DEBUG nova.virt.libvirt.driver [None req-c08965c3-6c1d-4892-ac3f-c10623cb0853 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] [instance: 84bceb35-5d7d-4199-b98e-3dc437c4a7a3] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 02:21:40 np0005548731 nova_compute[232433]: 2025-12-06 07:21:40.278 232437 DEBUG nova.virt.libvirt.driver [None req-c08965c3-6c1d-4892-ac3f-c10623cb0853 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] [instance: 84bceb35-5d7d-4199-b98e-3dc437c4a7a3] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 02:21:40 np0005548731 nova_compute[232433]: 2025-12-06 07:21:40.278 232437 DEBUG nova.virt.libvirt.driver [None req-c08965c3-6c1d-4892-ac3f-c10623cb0853 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] [instance: 84bceb35-5d7d-4199-b98e-3dc437c4a7a3] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 02:21:40 np0005548731 nova_compute[232433]: 2025-12-06 07:21:40.279 232437 DEBUG nova.virt.libvirt.driver [None req-c08965c3-6c1d-4892-ac3f-c10623cb0853 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] [instance: 84bceb35-5d7d-4199-b98e-3dc437c4a7a3] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 02:21:40 np0005548731 nova_compute[232433]: 2025-12-06 07:21:40.279 232437 DEBUG nova.virt.libvirt.driver [None req-c08965c3-6c1d-4892-ac3f-c10623cb0853 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] [instance: 84bceb35-5d7d-4199-b98e-3dc437c4a7a3] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 02:21:40 np0005548731 nova_compute[232433]: 2025-12-06 07:21:40.279 232437 DEBUG nova.virt.libvirt.driver [None req-c08965c3-6c1d-4892-ac3f-c10623cb0853 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] [instance: 84bceb35-5d7d-4199-b98e-3dc437c4a7a3] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 02:21:40 np0005548731 nova_compute[232433]: 2025-12-06 07:21:40.305 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 84bceb35-5d7d-4199-b98e-3dc437c4a7a3] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  6 02:21:40 np0005548731 nova_compute[232433]: 2025-12-06 07:21:40.306 232437 DEBUG nova.virt.driver [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Emitting event <LifecycleEvent: 1765005700.2345102, 84bceb35-5d7d-4199-b98e-3dc437c4a7a3 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 02:21:40 np0005548731 nova_compute[232433]: 2025-12-06 07:21:40.306 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 84bceb35-5d7d-4199-b98e-3dc437c4a7a3] VM Paused (Lifecycle Event)#033[00m
Dec  6 02:21:40 np0005548731 nova_compute[232433]: 2025-12-06 07:21:40.308 232437 DEBUG nova.policy [None req-fda6bedb-fa77-40ef-9313-0ab64b2b782b c2404fce4d1f48779c97d3cbcb6ae594 9536676f60844b6f802518771f02409f - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'c2404fce4d1f48779c97d3cbcb6ae594', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '9536676f60844b6f802518771f02409f', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Dec  6 02:21:40 np0005548731 nova_compute[232433]: 2025-12-06 07:21:40.324 232437 DEBUG nova.compute.manager [None req-fda6bedb-fa77-40ef-9313-0ab64b2b782b c2404fce4d1f48779c97d3cbcb6ae594 9536676f60844b6f802518771f02409f - - default default] [instance: 6dc14838-5602-4f43-a3e3-2374b4f603eb] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Dec  6 02:21:40 np0005548731 nova_compute[232433]: 2025-12-06 07:21:40.325 232437 DEBUG nova.virt.libvirt.driver [None req-fda6bedb-fa77-40ef-9313-0ab64b2b782b c2404fce4d1f48779c97d3cbcb6ae594 9536676f60844b6f802518771f02409f - - default default] [instance: 6dc14838-5602-4f43-a3e3-2374b4f603eb] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Dec  6 02:21:40 np0005548731 nova_compute[232433]: 2025-12-06 07:21:40.325 232437 INFO nova.virt.libvirt.driver [None req-fda6bedb-fa77-40ef-9313-0ab64b2b782b c2404fce4d1f48779c97d3cbcb6ae594 9536676f60844b6f802518771f02409f - - default default] [instance: 6dc14838-5602-4f43-a3e3-2374b4f603eb] Creating image(s)#033[00m
Dec  6 02:21:40 np0005548731 nova_compute[232433]: 2025-12-06 07:21:40.347 232437 DEBUG nova.storage.rbd_utils [None req-fda6bedb-fa77-40ef-9313-0ab64b2b782b c2404fce4d1f48779c97d3cbcb6ae594 9536676f60844b6f802518771f02409f - - default default] rbd image 6dc14838-5602-4f43-a3e3-2374b4f603eb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:21:40 np0005548731 nova_compute[232433]: 2025-12-06 07:21:40.372 232437 DEBUG nova.storage.rbd_utils [None req-fda6bedb-fa77-40ef-9313-0ab64b2b782b c2404fce4d1f48779c97d3cbcb6ae594 9536676f60844b6f802518771f02409f - - default default] rbd image 6dc14838-5602-4f43-a3e3-2374b4f603eb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:21:40 np0005548731 nova_compute[232433]: 2025-12-06 07:21:40.395 232437 DEBUG nova.storage.rbd_utils [None req-fda6bedb-fa77-40ef-9313-0ab64b2b782b c2404fce4d1f48779c97d3cbcb6ae594 9536676f60844b6f802518771f02409f - - default default] rbd image 6dc14838-5602-4f43-a3e3-2374b4f603eb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:21:40 np0005548731 nova_compute[232433]: 2025-12-06 07:21:40.399 232437 DEBUG oslo_concurrency.processutils [None req-fda6bedb-fa77-40ef-9313-0ab64b2b782b c2404fce4d1f48779c97d3cbcb6ae594 9536676f60844b6f802518771f02409f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:21:40 np0005548731 nova_compute[232433]: 2025-12-06 07:21:40.423 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 84bceb35-5d7d-4199-b98e-3dc437c4a7a3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:21:40 np0005548731 nova_compute[232433]: 2025-12-06 07:21:40.426 232437 INFO nova.compute.manager [None req-c08965c3-6c1d-4892-ac3f-c10623cb0853 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] [instance: 84bceb35-5d7d-4199-b98e-3dc437c4a7a3] Took 7.15 seconds to spawn the instance on the hypervisor.#033[00m
Dec  6 02:21:40 np0005548731 nova_compute[232433]: 2025-12-06 07:21:40.427 232437 DEBUG nova.compute.manager [None req-c08965c3-6c1d-4892-ac3f-c10623cb0853 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] [instance: 84bceb35-5d7d-4199-b98e-3dc437c4a7a3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:21:40 np0005548731 nova_compute[232433]: 2025-12-06 07:21:40.430 232437 DEBUG nova.virt.driver [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Emitting event <LifecycleEvent: 1765005700.2434576, 84bceb35-5d7d-4199-b98e-3dc437c4a7a3 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 02:21:40 np0005548731 nova_compute[232433]: 2025-12-06 07:21:40.430 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 84bceb35-5d7d-4199-b98e-3dc437c4a7a3] VM Resumed (Lifecycle Event)#033[00m
Dec  6 02:21:40 np0005548731 nova_compute[232433]: 2025-12-06 07:21:40.457 232437 DEBUG oslo_concurrency.processutils [None req-fda6bedb-fa77-40ef-9313-0ab64b2b782b c2404fce4d1f48779c97d3cbcb6ae594 9536676f60844b6f802518771f02409f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:21:40 np0005548731 nova_compute[232433]: 2025-12-06 07:21:40.458 232437 DEBUG oslo_concurrency.lockutils [None req-fda6bedb-fa77-40ef-9313-0ab64b2b782b c2404fce4d1f48779c97d3cbcb6ae594 9536676f60844b6f802518771f02409f - - default default] Acquiring lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:21:40 np0005548731 nova_compute[232433]: 2025-12-06 07:21:40.459 232437 DEBUG oslo_concurrency.lockutils [None req-fda6bedb-fa77-40ef-9313-0ab64b2b782b c2404fce4d1f48779c97d3cbcb6ae594 9536676f60844b6f802518771f02409f - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:21:40 np0005548731 nova_compute[232433]: 2025-12-06 07:21:40.459 232437 DEBUG oslo_concurrency.lockutils [None req-fda6bedb-fa77-40ef-9313-0ab64b2b782b c2404fce4d1f48779c97d3cbcb6ae594 9536676f60844b6f802518771f02409f - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:21:40 np0005548731 nova_compute[232433]: 2025-12-06 07:21:40.482 232437 DEBUG nova.storage.rbd_utils [None req-fda6bedb-fa77-40ef-9313-0ab64b2b782b c2404fce4d1f48779c97d3cbcb6ae594 9536676f60844b6f802518771f02409f - - default default] rbd image 6dc14838-5602-4f43-a3e3-2374b4f603eb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:21:40 np0005548731 nova_compute[232433]: 2025-12-06 07:21:40.485 232437 DEBUG oslo_concurrency.processutils [None req-fda6bedb-fa77-40ef-9313-0ab64b2b782b c2404fce4d1f48779c97d3cbcb6ae594 9536676f60844b6f802518771f02409f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef 6dc14838-5602-4f43-a3e3-2374b4f603eb_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:21:40 np0005548731 nova_compute[232433]: 2025-12-06 07:21:40.513 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 84bceb35-5d7d-4199-b98e-3dc437c4a7a3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:21:40 np0005548731 nova_compute[232433]: 2025-12-06 07:21:40.523 232437 INFO nova.compute.manager [None req-c08965c3-6c1d-4892-ac3f-c10623cb0853 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] [instance: 84bceb35-5d7d-4199-b98e-3dc437c4a7a3] Took 8.22 seconds to build instance.#033[00m
Dec  6 02:21:40 np0005548731 nova_compute[232433]: 2025-12-06 07:21:40.526 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 84bceb35-5d7d-4199-b98e-3dc437c4a7a3] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  6 02:21:40 np0005548731 nova_compute[232433]: 2025-12-06 07:21:40.550 232437 DEBUG oslo_concurrency.lockutils [None req-c08965c3-6c1d-4892-ac3f-c10623cb0853 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] Lock "84bceb35-5d7d-4199-b98e-3dc437c4a7a3" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.323s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:21:40 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:21:40 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:21:40 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:21:40.842 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:21:41 np0005548731 nova_compute[232433]: 2025-12-06 07:21:41.180 232437 DEBUG nova.network.neutron [None req-fda6bedb-fa77-40ef-9313-0ab64b2b782b c2404fce4d1f48779c97d3cbcb6ae594 9536676f60844b6f802518771f02409f - - default default] [instance: 6dc14838-5602-4f43-a3e3-2374b4f603eb] Successfully created port: 8af7a557-97f6-420e-99b0-83eced102922 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Dec  6 02:21:41 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:21:41 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:21:41 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:21:41.323 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:21:41 np0005548731 nova_compute[232433]: 2025-12-06 07:21:41.873 232437 DEBUG oslo_concurrency.processutils [None req-fda6bedb-fa77-40ef-9313-0ab64b2b782b c2404fce4d1f48779c97d3cbcb6ae594 9536676f60844b6f802518771f02409f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef 6dc14838-5602-4f43-a3e3-2374b4f603eb_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.388s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:21:41 np0005548731 nova_compute[232433]: 2025-12-06 07:21:41.941 232437 DEBUG nova.storage.rbd_utils [None req-fda6bedb-fa77-40ef-9313-0ab64b2b782b c2404fce4d1f48779c97d3cbcb6ae594 9536676f60844b6f802518771f02409f - - default default] resizing rbd image 6dc14838-5602-4f43-a3e3-2374b4f603eb_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Dec  6 02:21:42 np0005548731 nova_compute[232433]: 2025-12-06 07:21:42.113 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:21:42 np0005548731 nova_compute[232433]: 2025-12-06 07:21:42.115 232437 DEBUG nova.network.neutron [None req-fda6bedb-fa77-40ef-9313-0ab64b2b782b c2404fce4d1f48779c97d3cbcb6ae594 9536676f60844b6f802518771f02409f - - default default] [instance: 6dc14838-5602-4f43-a3e3-2374b4f603eb] Successfully updated port: 8af7a557-97f6-420e-99b0-83eced102922 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Dec  6 02:21:42 np0005548731 nova_compute[232433]: 2025-12-06 07:21:42.130 232437 DEBUG oslo_concurrency.lockutils [None req-fda6bedb-fa77-40ef-9313-0ab64b2b782b c2404fce4d1f48779c97d3cbcb6ae594 9536676f60844b6f802518771f02409f - - default default] Acquiring lock "refresh_cache-6dc14838-5602-4f43-a3e3-2374b4f603eb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 02:21:42 np0005548731 nova_compute[232433]: 2025-12-06 07:21:42.131 232437 DEBUG oslo_concurrency.lockutils [None req-fda6bedb-fa77-40ef-9313-0ab64b2b782b c2404fce4d1f48779c97d3cbcb6ae594 9536676f60844b6f802518771f02409f - - default default] Acquired lock "refresh_cache-6dc14838-5602-4f43-a3e3-2374b4f603eb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 02:21:42 np0005548731 nova_compute[232433]: 2025-12-06 07:21:42.131 232437 DEBUG nova.network.neutron [None req-fda6bedb-fa77-40ef-9313-0ab64b2b782b c2404fce4d1f48779c97d3cbcb6ae594 9536676f60844b6f802518771f02409f - - default default] [instance: 6dc14838-5602-4f43-a3e3-2374b4f603eb] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec  6 02:21:42 np0005548731 nova_compute[232433]: 2025-12-06 07:21:42.261 232437 DEBUG nova.network.neutron [None req-fda6bedb-fa77-40ef-9313-0ab64b2b782b c2404fce4d1f48779c97d3cbcb6ae594 9536676f60844b6f802518771f02409f - - default default] [instance: 6dc14838-5602-4f43-a3e3-2374b4f603eb] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec  6 02:21:42 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e258 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:21:42 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:21:42 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:21:42 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:21:42.844 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:21:42 np0005548731 nova_compute[232433]: 2025-12-06 07:21:42.993 232437 DEBUG nova.network.neutron [None req-fda6bedb-fa77-40ef-9313-0ab64b2b782b c2404fce4d1f48779c97d3cbcb6ae594 9536676f60844b6f802518771f02409f - - default default] [instance: 6dc14838-5602-4f43-a3e3-2374b4f603eb] Updating instance_info_cache with network_info: [{"id": "8af7a557-97f6-420e-99b0-83eced102922", "address": "fa:16:3e:f8:41:a3", "network": {"id": "8b1d33d1-6678-4c36-a5fc-301790dcb0d0", "bridge": "br-int", "label": "tempest-TaggedAttachmentsTest-2120777627-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9536676f60844b6f802518771f02409f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8af7a557-97", "ovs_interfaceid": "8af7a557-97f6-420e-99b0-83eced102922", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 02:21:43 np0005548731 nova_compute[232433]: 2025-12-06 07:21:43.028 232437 DEBUG oslo_concurrency.lockutils [None req-fda6bedb-fa77-40ef-9313-0ab64b2b782b c2404fce4d1f48779c97d3cbcb6ae594 9536676f60844b6f802518771f02409f - - default default] Releasing lock "refresh_cache-6dc14838-5602-4f43-a3e3-2374b4f603eb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 02:21:43 np0005548731 nova_compute[232433]: 2025-12-06 07:21:43.028 232437 DEBUG nova.compute.manager [None req-fda6bedb-fa77-40ef-9313-0ab64b2b782b c2404fce4d1f48779c97d3cbcb6ae594 9536676f60844b6f802518771f02409f - - default default] [instance: 6dc14838-5602-4f43-a3e3-2374b4f603eb] Instance network_info: |[{"id": "8af7a557-97f6-420e-99b0-83eced102922", "address": "fa:16:3e:f8:41:a3", "network": {"id": "8b1d33d1-6678-4c36-a5fc-301790dcb0d0", "bridge": "br-int", "label": "tempest-TaggedAttachmentsTest-2120777627-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9536676f60844b6f802518771f02409f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8af7a557-97", "ovs_interfaceid": "8af7a557-97f6-420e-99b0-83eced102922", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Dec  6 02:21:43 np0005548731 nova_compute[232433]: 2025-12-06 07:21:43.155 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:21:43 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:21:43 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:21:43 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:21:43.443 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:21:43 np0005548731 nova_compute[232433]: 2025-12-06 07:21:43.517 232437 DEBUG nova.objects.instance [None req-fda6bedb-fa77-40ef-9313-0ab64b2b782b c2404fce4d1f48779c97d3cbcb6ae594 9536676f60844b6f802518771f02409f - - default default] Lazy-loading 'migration_context' on Instance uuid 6dc14838-5602-4f43-a3e3-2374b4f603eb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:21:43 np0005548731 nova_compute[232433]: 2025-12-06 07:21:43.545 232437 DEBUG nova.virt.libvirt.driver [None req-fda6bedb-fa77-40ef-9313-0ab64b2b782b c2404fce4d1f48779c97d3cbcb6ae594 9536676f60844b6f802518771f02409f - - default default] [instance: 6dc14838-5602-4f43-a3e3-2374b4f603eb] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Dec  6 02:21:43 np0005548731 nova_compute[232433]: 2025-12-06 07:21:43.546 232437 DEBUG nova.virt.libvirt.driver [None req-fda6bedb-fa77-40ef-9313-0ab64b2b782b c2404fce4d1f48779c97d3cbcb6ae594 9536676f60844b6f802518771f02409f - - default default] [instance: 6dc14838-5602-4f43-a3e3-2374b4f603eb] Ensure instance console log exists: /var/lib/nova/instances/6dc14838-5602-4f43-a3e3-2374b4f603eb/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec  6 02:21:43 np0005548731 nova_compute[232433]: 2025-12-06 07:21:43.546 232437 DEBUG oslo_concurrency.lockutils [None req-fda6bedb-fa77-40ef-9313-0ab64b2b782b c2404fce4d1f48779c97d3cbcb6ae594 9536676f60844b6f802518771f02409f - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:21:43 np0005548731 nova_compute[232433]: 2025-12-06 07:21:43.547 232437 DEBUG oslo_concurrency.lockutils [None req-fda6bedb-fa77-40ef-9313-0ab64b2b782b c2404fce4d1f48779c97d3cbcb6ae594 9536676f60844b6f802518771f02409f - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:21:43 np0005548731 nova_compute[232433]: 2025-12-06 07:21:43.547 232437 DEBUG oslo_concurrency.lockutils [None req-fda6bedb-fa77-40ef-9313-0ab64b2b782b c2404fce4d1f48779c97d3cbcb6ae594 9536676f60844b6f802518771f02409f - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:21:43 np0005548731 nova_compute[232433]: 2025-12-06 07:21:43.549 232437 DEBUG nova.virt.libvirt.driver [None req-fda6bedb-fa77-40ef-9313-0ab64b2b782b c2404fce4d1f48779c97d3cbcb6ae594 9536676f60844b6f802518771f02409f - - default default] [instance: 6dc14838-5602-4f43-a3e3-2374b4f603eb] Start _get_guest_xml network_info=[{"id": "8af7a557-97f6-420e-99b0-83eced102922", "address": "fa:16:3e:f8:41:a3", "network": {"id": "8b1d33d1-6678-4c36-a5fc-301790dcb0d0", "bridge": "br-int", "label": "tempest-TaggedAttachmentsTest-2120777627-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9536676f60844b6f802518771f02409f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8af7a557-97", "ovs_interfaceid": "8af7a557-97f6-420e-99b0-83eced102922", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'device_type': 'disk', 'encryption_secret_uuid': None, 'disk_bus': 'virtio', 'boot_index': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'encryption_options': None, 'size': 0, 'encrypted': False, 'image_id': '6efab05d-c7cf-4770-a5c3-c806a2739063'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec  6 02:21:43 np0005548731 nova_compute[232433]: 2025-12-06 07:21:43.553 232437 WARNING nova.virt.libvirt.driver [None req-fda6bedb-fa77-40ef-9313-0ab64b2b782b c2404fce4d1f48779c97d3cbcb6ae594 9536676f60844b6f802518771f02409f - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  6 02:21:43 np0005548731 nova_compute[232433]: 2025-12-06 07:21:43.557 232437 DEBUG nova.virt.libvirt.host [None req-fda6bedb-fa77-40ef-9313-0ab64b2b782b c2404fce4d1f48779c97d3cbcb6ae594 9536676f60844b6f802518771f02409f - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec  6 02:21:43 np0005548731 nova_compute[232433]: 2025-12-06 07:21:43.557 232437 DEBUG nova.virt.libvirt.host [None req-fda6bedb-fa77-40ef-9313-0ab64b2b782b c2404fce4d1f48779c97d3cbcb6ae594 9536676f60844b6f802518771f02409f - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec  6 02:21:43 np0005548731 nova_compute[232433]: 2025-12-06 07:21:43.562 232437 DEBUG nova.virt.libvirt.host [None req-fda6bedb-fa77-40ef-9313-0ab64b2b782b c2404fce4d1f48779c97d3cbcb6ae594 9536676f60844b6f802518771f02409f - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec  6 02:21:43 np0005548731 nova_compute[232433]: 2025-12-06 07:21:43.562 232437 DEBUG nova.virt.libvirt.host [None req-fda6bedb-fa77-40ef-9313-0ab64b2b782b c2404fce4d1f48779c97d3cbcb6ae594 9536676f60844b6f802518771f02409f - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec  6 02:21:43 np0005548731 nova_compute[232433]: 2025-12-06 07:21:43.563 232437 DEBUG nova.virt.libvirt.driver [None req-fda6bedb-fa77-40ef-9313-0ab64b2b782b c2404fce4d1f48779c97d3cbcb6ae594 9536676f60844b6f802518771f02409f - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec  6 02:21:43 np0005548731 nova_compute[232433]: 2025-12-06 07:21:43.564 232437 DEBUG nova.virt.hardware [None req-fda6bedb-fa77-40ef-9313-0ab64b2b782b c2404fce4d1f48779c97d3cbcb6ae594 9536676f60844b6f802518771f02409f - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-06T06:56:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='25848a18-11d9-4f11-80b5-5d005675c76d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec  6 02:21:43 np0005548731 nova_compute[232433]: 2025-12-06 07:21:43.564 232437 DEBUG nova.virt.hardware [None req-fda6bedb-fa77-40ef-9313-0ab64b2b782b c2404fce4d1f48779c97d3cbcb6ae594 9536676f60844b6f802518771f02409f - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec  6 02:21:43 np0005548731 nova_compute[232433]: 2025-12-06 07:21:43.565 232437 DEBUG nova.virt.hardware [None req-fda6bedb-fa77-40ef-9313-0ab64b2b782b c2404fce4d1f48779c97d3cbcb6ae594 9536676f60844b6f802518771f02409f - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec  6 02:21:43 np0005548731 nova_compute[232433]: 2025-12-06 07:21:43.565 232437 DEBUG nova.virt.hardware [None req-fda6bedb-fa77-40ef-9313-0ab64b2b782b c2404fce4d1f48779c97d3cbcb6ae594 9536676f60844b6f802518771f02409f - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec  6 02:21:43 np0005548731 nova_compute[232433]: 2025-12-06 07:21:43.565 232437 DEBUG nova.virt.hardware [None req-fda6bedb-fa77-40ef-9313-0ab64b2b782b c2404fce4d1f48779c97d3cbcb6ae594 9536676f60844b6f802518771f02409f - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec  6 02:21:43 np0005548731 nova_compute[232433]: 2025-12-06 07:21:43.566 232437 DEBUG nova.virt.hardware [None req-fda6bedb-fa77-40ef-9313-0ab64b2b782b c2404fce4d1f48779c97d3cbcb6ae594 9536676f60844b6f802518771f02409f - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec  6 02:21:43 np0005548731 nova_compute[232433]: 2025-12-06 07:21:43.566 232437 DEBUG nova.virt.hardware [None req-fda6bedb-fa77-40ef-9313-0ab64b2b782b c2404fce4d1f48779c97d3cbcb6ae594 9536676f60844b6f802518771f02409f - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec  6 02:21:43 np0005548731 nova_compute[232433]: 2025-12-06 07:21:43.567 232437 DEBUG nova.virt.hardware [None req-fda6bedb-fa77-40ef-9313-0ab64b2b782b c2404fce4d1f48779c97d3cbcb6ae594 9536676f60844b6f802518771f02409f - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec  6 02:21:43 np0005548731 nova_compute[232433]: 2025-12-06 07:21:43.567 232437 DEBUG nova.virt.hardware [None req-fda6bedb-fa77-40ef-9313-0ab64b2b782b c2404fce4d1f48779c97d3cbcb6ae594 9536676f60844b6f802518771f02409f - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec  6 02:21:43 np0005548731 nova_compute[232433]: 2025-12-06 07:21:43.567 232437 DEBUG nova.virt.hardware [None req-fda6bedb-fa77-40ef-9313-0ab64b2b782b c2404fce4d1f48779c97d3cbcb6ae594 9536676f60844b6f802518771f02409f - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec  6 02:21:43 np0005548731 nova_compute[232433]: 2025-12-06 07:21:43.568 232437 DEBUG nova.virt.hardware [None req-fda6bedb-fa77-40ef-9313-0ab64b2b782b c2404fce4d1f48779c97d3cbcb6ae594 9536676f60844b6f802518771f02409f - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec  6 02:21:43 np0005548731 nova_compute[232433]: 2025-12-06 07:21:43.571 232437 DEBUG oslo_concurrency.processutils [None req-fda6bedb-fa77-40ef-9313-0ab64b2b782b c2404fce4d1f48779c97d3cbcb6ae594 9536676f60844b6f802518771f02409f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:21:44 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  6 02:21:44 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2109164090' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  6 02:21:44 np0005548731 nova_compute[232433]: 2025-12-06 07:21:44.048 232437 DEBUG oslo_concurrency.processutils [None req-fda6bedb-fa77-40ef-9313-0ab64b2b782b c2404fce4d1f48779c97d3cbcb6ae594 9536676f60844b6f802518771f02409f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.477s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:21:44 np0005548731 nova_compute[232433]: 2025-12-06 07:21:44.075 232437 DEBUG nova.storage.rbd_utils [None req-fda6bedb-fa77-40ef-9313-0ab64b2b782b c2404fce4d1f48779c97d3cbcb6ae594 9536676f60844b6f802518771f02409f - - default default] rbd image 6dc14838-5602-4f43-a3e3-2374b4f603eb_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:21:44 np0005548731 nova_compute[232433]: 2025-12-06 07:21:44.080 232437 DEBUG oslo_concurrency.processutils [None req-fda6bedb-fa77-40ef-9313-0ab64b2b782b c2404fce4d1f48779c97d3cbcb6ae594 9536676f60844b6f802518771f02409f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:21:44 np0005548731 nova_compute[232433]: 2025-12-06 07:21:44.204 232437 DEBUG nova.compute.manager [req-0a6b0e5c-f156-4926-aeca-0b9332fd2238 req-d14349c5-0711-4400-b3a3-9b6453de2270 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 6dc14838-5602-4f43-a3e3-2374b4f603eb] Received event network-changed-8af7a557-97f6-420e-99b0-83eced102922 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:21:44 np0005548731 nova_compute[232433]: 2025-12-06 07:21:44.205 232437 DEBUG nova.compute.manager [req-0a6b0e5c-f156-4926-aeca-0b9332fd2238 req-d14349c5-0711-4400-b3a3-9b6453de2270 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 6dc14838-5602-4f43-a3e3-2374b4f603eb] Refreshing instance network info cache due to event network-changed-8af7a557-97f6-420e-99b0-83eced102922. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  6 02:21:44 np0005548731 nova_compute[232433]: 2025-12-06 07:21:44.205 232437 DEBUG oslo_concurrency.lockutils [req-0a6b0e5c-f156-4926-aeca-0b9332fd2238 req-d14349c5-0711-4400-b3a3-9b6453de2270 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-6dc14838-5602-4f43-a3e3-2374b4f603eb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 02:21:44 np0005548731 nova_compute[232433]: 2025-12-06 07:21:44.206 232437 DEBUG oslo_concurrency.lockutils [req-0a6b0e5c-f156-4926-aeca-0b9332fd2238 req-d14349c5-0711-4400-b3a3-9b6453de2270 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-6dc14838-5602-4f43-a3e3-2374b4f603eb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 02:21:44 np0005548731 nova_compute[232433]: 2025-12-06 07:21:44.206 232437 DEBUG nova.network.neutron [req-0a6b0e5c-f156-4926-aeca-0b9332fd2238 req-d14349c5-0711-4400-b3a3-9b6453de2270 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 6dc14838-5602-4f43-a3e3-2374b4f603eb] Refreshing network info cache for port 8af7a557-97f6-420e-99b0-83eced102922 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  6 02:21:44 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  6 02:21:44 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1530936474' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  6 02:21:44 np0005548731 nova_compute[232433]: 2025-12-06 07:21:44.628 232437 DEBUG oslo_concurrency.processutils [None req-fda6bedb-fa77-40ef-9313-0ab64b2b782b c2404fce4d1f48779c97d3cbcb6ae594 9536676f60844b6f802518771f02409f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.548s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:21:44 np0005548731 nova_compute[232433]: 2025-12-06 07:21:44.630 232437 DEBUG nova.virt.libvirt.vif [None req-fda6bedb-fa77-40ef-9313-0ab64b2b782b c2404fce4d1f48779c97d3cbcb6ae594 9536676f60844b6f802518771f02409f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T07:21:38Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-device-tagging-server-1149920199',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-device-tagging-server-1149920199',id=92,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLwWYRzhpCEWsRkCaBhzHfhpPxKJasHgXvUlHzaDRDZXID15xokDKbMJM14TkjwZa8JWqEfbHVbjprM+jVuVPSPnfeGy8c3QiNdj4kuOXnNZ8rc8TAxyYOMgQOcRiCAPIw==',key_name='tempest-keypair-1877451409',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='9536676f60844b6f802518771f02409f',ramdisk_id='',reservation_id='r-bxlx3of0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virti
o',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TaggedAttachmentsTest-1465336039',owner_user_name='tempest-TaggedAttachmentsTest-1465336039-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:21:40Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='c2404fce4d1f48779c97d3cbcb6ae594',uuid=6dc14838-5602-4f43-a3e3-2374b4f603eb,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "8af7a557-97f6-420e-99b0-83eced102922", "address": "fa:16:3e:f8:41:a3", "network": {"id": "8b1d33d1-6678-4c36-a5fc-301790dcb0d0", "bridge": "br-int", "label": "tempest-TaggedAttachmentsTest-2120777627-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9536676f60844b6f802518771f02409f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8af7a557-97", "ovs_interfaceid": "8af7a557-97f6-420e-99b0-83eced102922", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec  6 02:21:44 np0005548731 nova_compute[232433]: 2025-12-06 07:21:44.631 232437 DEBUG nova.network.os_vif_util [None req-fda6bedb-fa77-40ef-9313-0ab64b2b782b c2404fce4d1f48779c97d3cbcb6ae594 9536676f60844b6f802518771f02409f - - default default] Converting VIF {"id": "8af7a557-97f6-420e-99b0-83eced102922", "address": "fa:16:3e:f8:41:a3", "network": {"id": "8b1d33d1-6678-4c36-a5fc-301790dcb0d0", "bridge": "br-int", "label": "tempest-TaggedAttachmentsTest-2120777627-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9536676f60844b6f802518771f02409f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8af7a557-97", "ovs_interfaceid": "8af7a557-97f6-420e-99b0-83eced102922", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 02:21:44 np0005548731 nova_compute[232433]: 2025-12-06 07:21:44.631 232437 DEBUG nova.network.os_vif_util [None req-fda6bedb-fa77-40ef-9313-0ab64b2b782b c2404fce4d1f48779c97d3cbcb6ae594 9536676f60844b6f802518771f02409f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f8:41:a3,bridge_name='br-int',has_traffic_filtering=True,id=8af7a557-97f6-420e-99b0-83eced102922,network=Network(8b1d33d1-6678-4c36-a5fc-301790dcb0d0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8af7a557-97') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 02:21:44 np0005548731 nova_compute[232433]: 2025-12-06 07:21:44.633 232437 DEBUG nova.objects.instance [None req-fda6bedb-fa77-40ef-9313-0ab64b2b782b c2404fce4d1f48779c97d3cbcb6ae594 9536676f60844b6f802518771f02409f - - default default] Lazy-loading 'pci_devices' on Instance uuid 6dc14838-5602-4f43-a3e3-2374b4f603eb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:21:44 np0005548731 nova_compute[232433]: 2025-12-06 07:21:44.647 232437 DEBUG nova.virt.libvirt.driver [None req-fda6bedb-fa77-40ef-9313-0ab64b2b782b c2404fce4d1f48779c97d3cbcb6ae594 9536676f60844b6f802518771f02409f - - default default] [instance: 6dc14838-5602-4f43-a3e3-2374b4f603eb] End _get_guest_xml xml=<domain type="kvm">
Dec  6 02:21:44 np0005548731 nova_compute[232433]:  <uuid>6dc14838-5602-4f43-a3e3-2374b4f603eb</uuid>
Dec  6 02:21:44 np0005548731 nova_compute[232433]:  <name>instance-0000005c</name>
Dec  6 02:21:44 np0005548731 nova_compute[232433]:  <memory>131072</memory>
Dec  6 02:21:44 np0005548731 nova_compute[232433]:  <vcpu>1</vcpu>
Dec  6 02:21:44 np0005548731 nova_compute[232433]:  <metadata>
Dec  6 02:21:44 np0005548731 nova_compute[232433]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec  6 02:21:44 np0005548731 nova_compute[232433]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec  6 02:21:44 np0005548731 nova_compute[232433]:      <nova:name>tempest-device-tagging-server-1149920199</nova:name>
Dec  6 02:21:44 np0005548731 nova_compute[232433]:      <nova:creationTime>2025-12-06 07:21:43</nova:creationTime>
Dec  6 02:21:44 np0005548731 nova_compute[232433]:      <nova:flavor name="m1.nano">
Dec  6 02:21:44 np0005548731 nova_compute[232433]:        <nova:memory>128</nova:memory>
Dec  6 02:21:44 np0005548731 nova_compute[232433]:        <nova:disk>1</nova:disk>
Dec  6 02:21:44 np0005548731 nova_compute[232433]:        <nova:swap>0</nova:swap>
Dec  6 02:21:44 np0005548731 nova_compute[232433]:        <nova:ephemeral>0</nova:ephemeral>
Dec  6 02:21:44 np0005548731 nova_compute[232433]:        <nova:vcpus>1</nova:vcpus>
Dec  6 02:21:44 np0005548731 nova_compute[232433]:      </nova:flavor>
Dec  6 02:21:44 np0005548731 nova_compute[232433]:      <nova:owner>
Dec  6 02:21:44 np0005548731 nova_compute[232433]:        <nova:user uuid="c2404fce4d1f48779c97d3cbcb6ae594">tempest-TaggedAttachmentsTest-1465336039-project-member</nova:user>
Dec  6 02:21:44 np0005548731 nova_compute[232433]:        <nova:project uuid="9536676f60844b6f802518771f02409f">tempest-TaggedAttachmentsTest-1465336039</nova:project>
Dec  6 02:21:44 np0005548731 nova_compute[232433]:      </nova:owner>
Dec  6 02:21:44 np0005548731 nova_compute[232433]:      <nova:root type="image" uuid="6efab05d-c7cf-4770-a5c3-c806a2739063"/>
Dec  6 02:21:44 np0005548731 nova_compute[232433]:      <nova:ports>
Dec  6 02:21:44 np0005548731 nova_compute[232433]:        <nova:port uuid="8af7a557-97f6-420e-99b0-83eced102922">
Dec  6 02:21:44 np0005548731 nova_compute[232433]:          <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Dec  6 02:21:44 np0005548731 nova_compute[232433]:        </nova:port>
Dec  6 02:21:44 np0005548731 nova_compute[232433]:      </nova:ports>
Dec  6 02:21:44 np0005548731 nova_compute[232433]:    </nova:instance>
Dec  6 02:21:44 np0005548731 nova_compute[232433]:  </metadata>
Dec  6 02:21:44 np0005548731 nova_compute[232433]:  <sysinfo type="smbios">
Dec  6 02:21:44 np0005548731 nova_compute[232433]:    <system>
Dec  6 02:21:44 np0005548731 nova_compute[232433]:      <entry name="manufacturer">RDO</entry>
Dec  6 02:21:44 np0005548731 nova_compute[232433]:      <entry name="product">OpenStack Compute</entry>
Dec  6 02:21:44 np0005548731 nova_compute[232433]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec  6 02:21:44 np0005548731 nova_compute[232433]:      <entry name="serial">6dc14838-5602-4f43-a3e3-2374b4f603eb</entry>
Dec  6 02:21:44 np0005548731 nova_compute[232433]:      <entry name="uuid">6dc14838-5602-4f43-a3e3-2374b4f603eb</entry>
Dec  6 02:21:44 np0005548731 nova_compute[232433]:      <entry name="family">Virtual Machine</entry>
Dec  6 02:21:44 np0005548731 nova_compute[232433]:    </system>
Dec  6 02:21:44 np0005548731 nova_compute[232433]:  </sysinfo>
Dec  6 02:21:44 np0005548731 nova_compute[232433]:  <os>
Dec  6 02:21:44 np0005548731 nova_compute[232433]:    <type arch="x86_64" machine="q35">hvm</type>
Dec  6 02:21:44 np0005548731 nova_compute[232433]:    <boot dev="hd"/>
Dec  6 02:21:44 np0005548731 nova_compute[232433]:    <smbios mode="sysinfo"/>
Dec  6 02:21:44 np0005548731 nova_compute[232433]:  </os>
Dec  6 02:21:44 np0005548731 nova_compute[232433]:  <features>
Dec  6 02:21:44 np0005548731 nova_compute[232433]:    <acpi/>
Dec  6 02:21:44 np0005548731 nova_compute[232433]:    <apic/>
Dec  6 02:21:44 np0005548731 nova_compute[232433]:    <vmcoreinfo/>
Dec  6 02:21:44 np0005548731 nova_compute[232433]:  </features>
Dec  6 02:21:44 np0005548731 nova_compute[232433]:  <clock offset="utc">
Dec  6 02:21:44 np0005548731 nova_compute[232433]:    <timer name="pit" tickpolicy="delay"/>
Dec  6 02:21:44 np0005548731 nova_compute[232433]:    <timer name="rtc" tickpolicy="catchup"/>
Dec  6 02:21:44 np0005548731 nova_compute[232433]:    <timer name="hpet" present="no"/>
Dec  6 02:21:44 np0005548731 nova_compute[232433]:  </clock>
Dec  6 02:21:44 np0005548731 nova_compute[232433]:  <cpu mode="custom" match="exact">
Dec  6 02:21:44 np0005548731 nova_compute[232433]:    <model>Nehalem</model>
Dec  6 02:21:44 np0005548731 nova_compute[232433]:    <topology sockets="1" cores="1" threads="1"/>
Dec  6 02:21:44 np0005548731 nova_compute[232433]:  </cpu>
Dec  6 02:21:44 np0005548731 nova_compute[232433]:  <devices>
Dec  6 02:21:44 np0005548731 nova_compute[232433]:    <disk type="network" device="disk">
Dec  6 02:21:44 np0005548731 nova_compute[232433]:      <driver type="raw" cache="none"/>
Dec  6 02:21:44 np0005548731 nova_compute[232433]:      <source protocol="rbd" name="vms/6dc14838-5602-4f43-a3e3-2374b4f603eb_disk">
Dec  6 02:21:44 np0005548731 nova_compute[232433]:        <host name="192.168.122.100" port="6789"/>
Dec  6 02:21:44 np0005548731 nova_compute[232433]:        <host name="192.168.122.102" port="6789"/>
Dec  6 02:21:44 np0005548731 nova_compute[232433]:        <host name="192.168.122.101" port="6789"/>
Dec  6 02:21:44 np0005548731 nova_compute[232433]:      </source>
Dec  6 02:21:44 np0005548731 nova_compute[232433]:      <auth username="openstack">
Dec  6 02:21:44 np0005548731 nova_compute[232433]:        <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec  6 02:21:44 np0005548731 nova_compute[232433]:      </auth>
Dec  6 02:21:44 np0005548731 nova_compute[232433]:      <target dev="vda" bus="virtio"/>
Dec  6 02:21:44 np0005548731 nova_compute[232433]:    </disk>
Dec  6 02:21:44 np0005548731 nova_compute[232433]:    <disk type="network" device="cdrom">
Dec  6 02:21:44 np0005548731 nova_compute[232433]:      <driver type="raw" cache="none"/>
Dec  6 02:21:44 np0005548731 nova_compute[232433]:      <source protocol="rbd" name="vms/6dc14838-5602-4f43-a3e3-2374b4f603eb_disk.config">
Dec  6 02:21:44 np0005548731 nova_compute[232433]:        <host name="192.168.122.100" port="6789"/>
Dec  6 02:21:44 np0005548731 nova_compute[232433]:        <host name="192.168.122.102" port="6789"/>
Dec  6 02:21:44 np0005548731 nova_compute[232433]:        <host name="192.168.122.101" port="6789"/>
Dec  6 02:21:44 np0005548731 nova_compute[232433]:      </source>
Dec  6 02:21:44 np0005548731 nova_compute[232433]:      <auth username="openstack">
Dec  6 02:21:44 np0005548731 nova_compute[232433]:        <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec  6 02:21:44 np0005548731 nova_compute[232433]:      </auth>
Dec  6 02:21:44 np0005548731 nova_compute[232433]:      <target dev="sda" bus="sata"/>
Dec  6 02:21:44 np0005548731 nova_compute[232433]:    </disk>
Dec  6 02:21:44 np0005548731 nova_compute[232433]:    <interface type="ethernet">
Dec  6 02:21:44 np0005548731 nova_compute[232433]:      <mac address="fa:16:3e:f8:41:a3"/>
Dec  6 02:21:44 np0005548731 nova_compute[232433]:      <model type="virtio"/>
Dec  6 02:21:44 np0005548731 nova_compute[232433]:      <driver name="vhost" rx_queue_size="512"/>
Dec  6 02:21:44 np0005548731 nova_compute[232433]:      <mtu size="1442"/>
Dec  6 02:21:44 np0005548731 nova_compute[232433]:      <target dev="tap8af7a557-97"/>
Dec  6 02:21:44 np0005548731 nova_compute[232433]:    </interface>
Dec  6 02:21:44 np0005548731 nova_compute[232433]:    <serial type="pty">
Dec  6 02:21:44 np0005548731 nova_compute[232433]:      <log file="/var/lib/nova/instances/6dc14838-5602-4f43-a3e3-2374b4f603eb/console.log" append="off"/>
Dec  6 02:21:44 np0005548731 nova_compute[232433]:    </serial>
Dec  6 02:21:44 np0005548731 nova_compute[232433]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec  6 02:21:44 np0005548731 nova_compute[232433]:    <video>
Dec  6 02:21:44 np0005548731 nova_compute[232433]:      <model type="virtio"/>
Dec  6 02:21:44 np0005548731 nova_compute[232433]:    </video>
Dec  6 02:21:44 np0005548731 nova_compute[232433]:    <input type="tablet" bus="usb"/>
Dec  6 02:21:44 np0005548731 nova_compute[232433]:    <rng model="virtio">
Dec  6 02:21:44 np0005548731 nova_compute[232433]:      <backend model="random">/dev/urandom</backend>
Dec  6 02:21:44 np0005548731 nova_compute[232433]:    </rng>
Dec  6 02:21:44 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root"/>
Dec  6 02:21:44 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:21:44 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:21:44 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:21:44 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:21:44 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:21:44 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:21:44 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:21:44 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:21:44 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:21:44 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:21:44 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:21:44 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:21:44 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:21:44 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:21:44 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:21:44 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:21:44 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:21:44 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:21:44 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:21:44 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:21:44 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:21:44 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:21:44 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:21:44 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:21:44 np0005548731 nova_compute[232433]:    <controller type="usb" index="0"/>
Dec  6 02:21:44 np0005548731 nova_compute[232433]:    <memballoon model="virtio">
Dec  6 02:21:44 np0005548731 nova_compute[232433]:      <stats period="10"/>
Dec  6 02:21:44 np0005548731 nova_compute[232433]:    </memballoon>
Dec  6 02:21:44 np0005548731 nova_compute[232433]:  </devices>
Dec  6 02:21:44 np0005548731 nova_compute[232433]: </domain>
Dec  6 02:21:44 np0005548731 nova_compute[232433]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec  6 02:21:44 np0005548731 nova_compute[232433]: 2025-12-06 07:21:44.651 232437 DEBUG nova.compute.manager [None req-fda6bedb-fa77-40ef-9313-0ab64b2b782b c2404fce4d1f48779c97d3cbcb6ae594 9536676f60844b6f802518771f02409f - - default default] [instance: 6dc14838-5602-4f43-a3e3-2374b4f603eb] Preparing to wait for external event network-vif-plugged-8af7a557-97f6-420e-99b0-83eced102922 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec  6 02:21:44 np0005548731 nova_compute[232433]: 2025-12-06 07:21:44.652 232437 DEBUG oslo_concurrency.lockutils [None req-fda6bedb-fa77-40ef-9313-0ab64b2b782b c2404fce4d1f48779c97d3cbcb6ae594 9536676f60844b6f802518771f02409f - - default default] Acquiring lock "6dc14838-5602-4f43-a3e3-2374b4f603eb-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:21:44 np0005548731 nova_compute[232433]: 2025-12-06 07:21:44.652 232437 DEBUG oslo_concurrency.lockutils [None req-fda6bedb-fa77-40ef-9313-0ab64b2b782b c2404fce4d1f48779c97d3cbcb6ae594 9536676f60844b6f802518771f02409f - - default default] Lock "6dc14838-5602-4f43-a3e3-2374b4f603eb-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:21:44 np0005548731 nova_compute[232433]: 2025-12-06 07:21:44.652 232437 DEBUG oslo_concurrency.lockutils [None req-fda6bedb-fa77-40ef-9313-0ab64b2b782b c2404fce4d1f48779c97d3cbcb6ae594 9536676f60844b6f802518771f02409f - - default default] Lock "6dc14838-5602-4f43-a3e3-2374b4f603eb-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:21:44 np0005548731 nova_compute[232433]: 2025-12-06 07:21:44.653 232437 DEBUG nova.virt.libvirt.vif [None req-fda6bedb-fa77-40ef-9313-0ab64b2b782b c2404fce4d1f48779c97d3cbcb6ae594 9536676f60844b6f802518771f02409f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T07:21:38Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-device-tagging-server-1149920199',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-device-tagging-server-1149920199',id=92,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLwWYRzhpCEWsRkCaBhzHfhpPxKJasHgXvUlHzaDRDZXID15xokDKbMJM14TkjwZa8JWqEfbHVbjprM+jVuVPSPnfeGy8c3QiNdj4kuOXnNZ8rc8TAxyYOMgQOcRiCAPIw==',key_name='tempest-keypair-1877451409',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='9536676f60844b6f802518771f02409f',ramdisk_id='',reservation_id='r-bxlx3of0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_mo
del='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TaggedAttachmentsTest-1465336039',owner_user_name='tempest-TaggedAttachmentsTest-1465336039-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:21:40Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='c2404fce4d1f48779c97d3cbcb6ae594',uuid=6dc14838-5602-4f43-a3e3-2374b4f603eb,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "8af7a557-97f6-420e-99b0-83eced102922", "address": "fa:16:3e:f8:41:a3", "network": {"id": "8b1d33d1-6678-4c36-a5fc-301790dcb0d0", "bridge": "br-int", "label": "tempest-TaggedAttachmentsTest-2120777627-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9536676f60844b6f802518771f02409f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8af7a557-97", "ovs_interfaceid": "8af7a557-97f6-420e-99b0-83eced102922", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec  6 02:21:44 np0005548731 nova_compute[232433]: 2025-12-06 07:21:44.653 232437 DEBUG nova.network.os_vif_util [None req-fda6bedb-fa77-40ef-9313-0ab64b2b782b c2404fce4d1f48779c97d3cbcb6ae594 9536676f60844b6f802518771f02409f - - default default] Converting VIF {"id": "8af7a557-97f6-420e-99b0-83eced102922", "address": "fa:16:3e:f8:41:a3", "network": {"id": "8b1d33d1-6678-4c36-a5fc-301790dcb0d0", "bridge": "br-int", "label": "tempest-TaggedAttachmentsTest-2120777627-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9536676f60844b6f802518771f02409f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8af7a557-97", "ovs_interfaceid": "8af7a557-97f6-420e-99b0-83eced102922", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 02:21:44 np0005548731 nova_compute[232433]: 2025-12-06 07:21:44.654 232437 DEBUG nova.network.os_vif_util [None req-fda6bedb-fa77-40ef-9313-0ab64b2b782b c2404fce4d1f48779c97d3cbcb6ae594 9536676f60844b6f802518771f02409f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f8:41:a3,bridge_name='br-int',has_traffic_filtering=True,id=8af7a557-97f6-420e-99b0-83eced102922,network=Network(8b1d33d1-6678-4c36-a5fc-301790dcb0d0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8af7a557-97') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 02:21:44 np0005548731 nova_compute[232433]: 2025-12-06 07:21:44.654 232437 DEBUG os_vif [None req-fda6bedb-fa77-40ef-9313-0ab64b2b782b c2404fce4d1f48779c97d3cbcb6ae594 9536676f60844b6f802518771f02409f - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:f8:41:a3,bridge_name='br-int',has_traffic_filtering=True,id=8af7a557-97f6-420e-99b0-83eced102922,network=Network(8b1d33d1-6678-4c36-a5fc-301790dcb0d0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8af7a557-97') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec  6 02:21:44 np0005548731 nova_compute[232433]: 2025-12-06 07:21:44.655 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:21:44 np0005548731 nova_compute[232433]: 2025-12-06 07:21:44.656 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:21:44 np0005548731 nova_compute[232433]: 2025-12-06 07:21:44.657 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  6 02:21:44 np0005548731 nova_compute[232433]: 2025-12-06 07:21:44.659 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:21:44 np0005548731 nova_compute[232433]: 2025-12-06 07:21:44.659 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8af7a557-97, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:21:44 np0005548731 nova_compute[232433]: 2025-12-06 07:21:44.659 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap8af7a557-97, col_values=(('external_ids', {'iface-id': '8af7a557-97f6-420e-99b0-83eced102922', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:f8:41:a3', 'vm-uuid': '6dc14838-5602-4f43-a3e3-2374b4f603eb'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:21:44 np0005548731 nova_compute[232433]: 2025-12-06 07:21:44.661 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:21:44 np0005548731 NetworkManager[49182]: <info>  [1765005704.6622] manager: (tap8af7a557-97): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/176)
Dec  6 02:21:44 np0005548731 nova_compute[232433]: 2025-12-06 07:21:44.664 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  6 02:21:44 np0005548731 nova_compute[232433]: 2025-12-06 07:21:44.668 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:21:44 np0005548731 nova_compute[232433]: 2025-12-06 07:21:44.669 232437 INFO os_vif [None req-fda6bedb-fa77-40ef-9313-0ab64b2b782b c2404fce4d1f48779c97d3cbcb6ae594 9536676f60844b6f802518771f02409f - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:f8:41:a3,bridge_name='br-int',has_traffic_filtering=True,id=8af7a557-97f6-420e-99b0-83eced102922,network=Network(8b1d33d1-6678-4c36-a5fc-301790dcb0d0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8af7a557-97')#033[00m
Dec  6 02:21:44 np0005548731 nova_compute[232433]: 2025-12-06 07:21:44.729 232437 DEBUG nova.virt.libvirt.driver [None req-fda6bedb-fa77-40ef-9313-0ab64b2b782b c2404fce4d1f48779c97d3cbcb6ae594 9536676f60844b6f802518771f02409f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  6 02:21:44 np0005548731 nova_compute[232433]: 2025-12-06 07:21:44.730 232437 DEBUG nova.virt.libvirt.driver [None req-fda6bedb-fa77-40ef-9313-0ab64b2b782b c2404fce4d1f48779c97d3cbcb6ae594 9536676f60844b6f802518771f02409f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  6 02:21:44 np0005548731 nova_compute[232433]: 2025-12-06 07:21:44.731 232437 DEBUG nova.virt.libvirt.driver [None req-fda6bedb-fa77-40ef-9313-0ab64b2b782b c2404fce4d1f48779c97d3cbcb6ae594 9536676f60844b6f802518771f02409f - - default default] No VIF found with MAC fa:16:3e:f8:41:a3, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec  6 02:21:44 np0005548731 nova_compute[232433]: 2025-12-06 07:21:44.731 232437 INFO nova.virt.libvirt.driver [None req-fda6bedb-fa77-40ef-9313-0ab64b2b782b c2404fce4d1f48779c97d3cbcb6ae594 9536676f60844b6f802518771f02409f - - default default] [instance: 6dc14838-5602-4f43-a3e3-2374b4f603eb] Using config drive#033[00m
Dec  6 02:21:44 np0005548731 nova_compute[232433]: 2025-12-06 07:21:44.772 232437 DEBUG nova.storage.rbd_utils [None req-fda6bedb-fa77-40ef-9313-0ab64b2b782b c2404fce4d1f48779c97d3cbcb6ae594 9536676f60844b6f802518771f02409f - - default default] rbd image 6dc14838-5602-4f43-a3e3-2374b4f603eb_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:21:44 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:21:44 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:21:44 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:21:44.846 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:21:44 np0005548731 nova_compute[232433]: 2025-12-06 07:21:44.905 232437 DEBUG oslo_concurrency.lockutils [None req-9dadc567-c16e-4784-a68f-5e7c198417a2 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] Acquiring lock "84bceb35-5d7d-4199-b98e-3dc437c4a7a3" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:21:44 np0005548731 nova_compute[232433]: 2025-12-06 07:21:44.906 232437 DEBUG oslo_concurrency.lockutils [None req-9dadc567-c16e-4784-a68f-5e7c198417a2 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] Lock "84bceb35-5d7d-4199-b98e-3dc437c4a7a3" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:21:44 np0005548731 nova_compute[232433]: 2025-12-06 07:21:44.906 232437 DEBUG oslo_concurrency.lockutils [None req-9dadc567-c16e-4784-a68f-5e7c198417a2 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] Acquiring lock "84bceb35-5d7d-4199-b98e-3dc437c4a7a3-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:21:44 np0005548731 nova_compute[232433]: 2025-12-06 07:21:44.906 232437 DEBUG oslo_concurrency.lockutils [None req-9dadc567-c16e-4784-a68f-5e7c198417a2 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] Lock "84bceb35-5d7d-4199-b98e-3dc437c4a7a3-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:21:44 np0005548731 nova_compute[232433]: 2025-12-06 07:21:44.907 232437 DEBUG oslo_concurrency.lockutils [None req-9dadc567-c16e-4784-a68f-5e7c198417a2 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] Lock "84bceb35-5d7d-4199-b98e-3dc437c4a7a3-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:21:44 np0005548731 nova_compute[232433]: 2025-12-06 07:21:44.908 232437 INFO nova.compute.manager [None req-9dadc567-c16e-4784-a68f-5e7c198417a2 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] [instance: 84bceb35-5d7d-4199-b98e-3dc437c4a7a3] Terminating instance#033[00m
Dec  6 02:21:44 np0005548731 nova_compute[232433]: 2025-12-06 07:21:44.909 232437 DEBUG nova.compute.manager [None req-9dadc567-c16e-4784-a68f-5e7c198417a2 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] [instance: 84bceb35-5d7d-4199-b98e-3dc437c4a7a3] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec  6 02:21:44 np0005548731 kernel: tap79a92003-09 (unregistering): left promiscuous mode
Dec  6 02:21:44 np0005548731 NetworkManager[49182]: <info>  [1765005704.9487] device (tap79a92003-09): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec  6 02:21:44 np0005548731 nova_compute[232433]: 2025-12-06 07:21:44.952 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:21:44 np0005548731 ovn_controller[133927]: 2025-12-06T07:21:44Z|00342|binding|INFO|Releasing lport 79a92003-0946-49ab-b85a-8fea0cd7c3cb from this chassis (sb_readonly=0)
Dec  6 02:21:44 np0005548731 ovn_controller[133927]: 2025-12-06T07:21:44Z|00343|binding|INFO|Setting lport 79a92003-0946-49ab-b85a-8fea0cd7c3cb down in Southbound
Dec  6 02:21:44 np0005548731 nova_compute[232433]: 2025-12-06 07:21:44.961 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:21:44 np0005548731 ovn_controller[133927]: 2025-12-06T07:21:44Z|00344|binding|INFO|Removing iface tap79a92003-09 ovn-installed in OVS
Dec  6 02:21:44 np0005548731 nova_compute[232433]: 2025-12-06 07:21:44.964 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:21:44 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:21:44.973 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9e:94:65 10.100.0.7'], port_security=['fa:16:3e:9e:94:65 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '84bceb35-5d7d-4199-b98e-3dc437c4a7a3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-77f3ccc8-bb54-46a1-b015-cc5b8a445202', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b558585a6aa14470bdad319926a98046', 'neutron:revision_number': '4', 'neutron:security_group_ids': '0eb8f52e-2f68-4151-8464-0d3b0eb6798f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3c0e6fde-267f-49e6-86d0-b0c0ced92a7c, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], logical_port=79a92003-0946-49ab-b85a-8fea0cd7c3cb) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 02:21:44 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:21:44.975 143965 INFO neutron.agent.ovn.metadata.agent [-] Port 79a92003-0946-49ab-b85a-8fea0cd7c3cb in datapath 77f3ccc8-bb54-46a1-b015-cc5b8a445202 unbound from our chassis#033[00m
Dec  6 02:21:44 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:21:44.977 143965 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 77f3ccc8-bb54-46a1-b015-cc5b8a445202, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec  6 02:21:44 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:21:44.978 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[7c453998-eed8-44d0-af4e-f5e7257f683a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:21:44 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:21:44.979 143965 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-77f3ccc8-bb54-46a1-b015-cc5b8a445202 namespace which is not needed anymore#033[00m
Dec  6 02:21:44 np0005548731 nova_compute[232433]: 2025-12-06 07:21:44.979 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:21:44 np0005548731 systemd[1]: machine-qemu\x2d37\x2dinstance\x2d00000059.scope: Deactivated successfully.
Dec  6 02:21:44 np0005548731 systemd[1]: machine-qemu\x2d37\x2dinstance\x2d00000059.scope: Consumed 5.593s CPU time.
Dec  6 02:21:44 np0005548731 systemd-machined[195355]: Machine qemu-37-instance-00000059 terminated.
Dec  6 02:21:45 np0005548731 neutron-haproxy-ovnmeta-77f3ccc8-bb54-46a1-b015-cc5b8a445202[268617]: [NOTICE]   (268622) : haproxy version is 2.8.14-c23fe91
Dec  6 02:21:45 np0005548731 neutron-haproxy-ovnmeta-77f3ccc8-bb54-46a1-b015-cc5b8a445202[268617]: [NOTICE]   (268622) : path to executable is /usr/sbin/haproxy
Dec  6 02:21:45 np0005548731 neutron-haproxy-ovnmeta-77f3ccc8-bb54-46a1-b015-cc5b8a445202[268617]: [WARNING]  (268622) : Exiting Master process...
Dec  6 02:21:45 np0005548731 neutron-haproxy-ovnmeta-77f3ccc8-bb54-46a1-b015-cc5b8a445202[268617]: [ALERT]    (268622) : Current worker (268625) exited with code 143 (Terminated)
Dec  6 02:21:45 np0005548731 neutron-haproxy-ovnmeta-77f3ccc8-bb54-46a1-b015-cc5b8a445202[268617]: [WARNING]  (268622) : All workers exited. Exiting... (0)
Dec  6 02:21:45 np0005548731 systemd[1]: libpod-82ca8fe0efd56e1ef419f51277d0e61b19d2d3f08f3f6ba31ec9e7a31ed8a5ee.scope: Deactivated successfully.
Dec  6 02:21:45 np0005548731 podman[269002]: 2025-12-06 07:21:45.104716173 +0000 UTC m=+0.044473785 container died 82ca8fe0efd56e1ef419f51277d0e61b19d2d3f08f3f6ba31ec9e7a31ed8a5ee (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-77f3ccc8-bb54-46a1-b015-cc5b8a445202, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  6 02:21:45 np0005548731 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-82ca8fe0efd56e1ef419f51277d0e61b19d2d3f08f3f6ba31ec9e7a31ed8a5ee-userdata-shm.mount: Deactivated successfully.
Dec  6 02:21:45 np0005548731 systemd[1]: var-lib-containers-storage-overlay-8ea0177ad7655fdc21aa4a2a6d6034a901553adb51dd3343ad63c728ebd73d77-merged.mount: Deactivated successfully.
Dec  6 02:21:45 np0005548731 nova_compute[232433]: 2025-12-06 07:21:45.142 232437 INFO nova.virt.libvirt.driver [-] [instance: 84bceb35-5d7d-4199-b98e-3dc437c4a7a3] Instance destroyed successfully.#033[00m
Dec  6 02:21:45 np0005548731 nova_compute[232433]: 2025-12-06 07:21:45.143 232437 DEBUG nova.objects.instance [None req-9dadc567-c16e-4784-a68f-5e7c198417a2 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] Lazy-loading 'resources' on Instance uuid 84bceb35-5d7d-4199-b98e-3dc437c4a7a3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:21:45 np0005548731 podman[269002]: 2025-12-06 07:21:45.146459789 +0000 UTC m=+0.086217401 container cleanup 82ca8fe0efd56e1ef419f51277d0e61b19d2d3f08f3f6ba31ec9e7a31ed8a5ee (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-77f3ccc8-bb54-46a1-b015-cc5b8a445202, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251125)
Dec  6 02:21:45 np0005548731 systemd[1]: libpod-conmon-82ca8fe0efd56e1ef419f51277d0e61b19d2d3f08f3f6ba31ec9e7a31ed8a5ee.scope: Deactivated successfully.
Dec  6 02:21:45 np0005548731 nova_compute[232433]: 2025-12-06 07:21:45.161 232437 DEBUG nova.virt.libvirt.vif [None req-9dadc567-c16e-4784-a68f-5e7c198417a2 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T07:21:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ListServersNegativeTestJSON-server-1422412228',display_name='tempest-ListServersNegativeTestJSON-server-1422412228-1',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-listserversnegativetestjson-server-1422412228-1',id=89,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-06T07:21:40Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='b558585a6aa14470bdad319926a98046',ramdisk_id='',reservation_id='r-46lv2gf4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif
_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ListServersNegativeTestJSON-179719916',owner_user_name='tempest-ListServersNegativeTestJSON-179719916-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T07:21:40Z,user_data=None,user_id='a52e2b4388994d8791443483bd42cc33',uuid=84bceb35-5d7d-4199-b98e-3dc437c4a7a3,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "79a92003-0946-49ab-b85a-8fea0cd7c3cb", "address": "fa:16:3e:9e:94:65", "network": {"id": "77f3ccc8-bb54-46a1-b015-cc5b8a445202", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-573376355-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b558585a6aa14470bdad319926a98046", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap79a92003-09", "ovs_interfaceid": "79a92003-0946-49ab-b85a-8fea0cd7c3cb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec  6 02:21:45 np0005548731 nova_compute[232433]: 2025-12-06 07:21:45.162 232437 DEBUG nova.network.os_vif_util [None req-9dadc567-c16e-4784-a68f-5e7c198417a2 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] Converting VIF {"id": "79a92003-0946-49ab-b85a-8fea0cd7c3cb", "address": "fa:16:3e:9e:94:65", "network": {"id": "77f3ccc8-bb54-46a1-b015-cc5b8a445202", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-573376355-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b558585a6aa14470bdad319926a98046", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap79a92003-09", "ovs_interfaceid": "79a92003-0946-49ab-b85a-8fea0cd7c3cb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 02:21:45 np0005548731 nova_compute[232433]: 2025-12-06 07:21:45.163 232437 DEBUG nova.network.os_vif_util [None req-9dadc567-c16e-4784-a68f-5e7c198417a2 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9e:94:65,bridge_name='br-int',has_traffic_filtering=True,id=79a92003-0946-49ab-b85a-8fea0cd7c3cb,network=Network(77f3ccc8-bb54-46a1-b015-cc5b8a445202),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap79a92003-09') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 02:21:45 np0005548731 nova_compute[232433]: 2025-12-06 07:21:45.163 232437 DEBUG os_vif [None req-9dadc567-c16e-4784-a68f-5e7c198417a2 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:9e:94:65,bridge_name='br-int',has_traffic_filtering=True,id=79a92003-0946-49ab-b85a-8fea0cd7c3cb,network=Network(77f3ccc8-bb54-46a1-b015-cc5b8a445202),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap79a92003-09') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec  6 02:21:45 np0005548731 nova_compute[232433]: 2025-12-06 07:21:45.165 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:21:45 np0005548731 nova_compute[232433]: 2025-12-06 07:21:45.166 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap79a92003-09, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:21:45 np0005548731 nova_compute[232433]: 2025-12-06 07:21:45.168 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:21:45 np0005548731 nova_compute[232433]: 2025-12-06 07:21:45.171 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  6 02:21:45 np0005548731 nova_compute[232433]: 2025-12-06 07:21:45.173 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:21:45 np0005548731 nova_compute[232433]: 2025-12-06 07:21:45.176 232437 INFO os_vif [None req-9dadc567-c16e-4784-a68f-5e7c198417a2 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:9e:94:65,bridge_name='br-int',has_traffic_filtering=True,id=79a92003-0946-49ab-b85a-8fea0cd7c3cb,network=Network(77f3ccc8-bb54-46a1-b015-cc5b8a445202),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap79a92003-09')#033[00m
Dec  6 02:21:45 np0005548731 podman[269043]: 2025-12-06 07:21:45.214926133 +0000 UTC m=+0.044553917 container remove 82ca8fe0efd56e1ef419f51277d0e61b19d2d3f08f3f6ba31ec9e7a31ed8a5ee (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-77f3ccc8-bb54-46a1-b015-cc5b8a445202, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true)
Dec  6 02:21:45 np0005548731 nova_compute[232433]: 2025-12-06 07:21:45.216 232437 INFO nova.virt.libvirt.driver [None req-fda6bedb-fa77-40ef-9313-0ab64b2b782b c2404fce4d1f48779c97d3cbcb6ae594 9536676f60844b6f802518771f02409f - - default default] [instance: 6dc14838-5602-4f43-a3e3-2374b4f603eb] Creating config drive at /var/lib/nova/instances/6dc14838-5602-4f43-a3e3-2374b4f603eb/disk.config#033[00m
Dec  6 02:21:45 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:21:45.221 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[7fd97f11-ebe2-40d9-8535-e85bec875d00]: (4, ('Sat Dec  6 07:21:45 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-77f3ccc8-bb54-46a1-b015-cc5b8a445202 (82ca8fe0efd56e1ef419f51277d0e61b19d2d3f08f3f6ba31ec9e7a31ed8a5ee)\n82ca8fe0efd56e1ef419f51277d0e61b19d2d3f08f3f6ba31ec9e7a31ed8a5ee\nSat Dec  6 07:21:45 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-77f3ccc8-bb54-46a1-b015-cc5b8a445202 (82ca8fe0efd56e1ef419f51277d0e61b19d2d3f08f3f6ba31ec9e7a31ed8a5ee)\n82ca8fe0efd56e1ef419f51277d0e61b19d2d3f08f3f6ba31ec9e7a31ed8a5ee\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:21:45 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:21:45.223 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[bd0f2860-0059-4a60-bbc7-1accd06a5889]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:21:45 np0005548731 nova_compute[232433]: 2025-12-06 07:21:45.223 232437 DEBUG oslo_concurrency.processutils [None req-fda6bedb-fa77-40ef-9313-0ab64b2b782b c2404fce4d1f48779c97d3cbcb6ae594 9536676f60844b6f802518771f02409f - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/6dc14838-5602-4f43-a3e3-2374b4f603eb/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp5yb9c5tu execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:21:45 np0005548731 kernel: tap77f3ccc8-b0: left promiscuous mode
Dec  6 02:21:45 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:21:45.224 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap77f3ccc8-b0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:21:45 np0005548731 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec  6 02:21:45 np0005548731 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec  6 02:21:45 np0005548731 nova_compute[232433]: 2025-12-06 07:21:45.249 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:21:45 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:21:45.250 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[a065b1fd-e9f9-47df-84f0-bd055bbfc9b9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:21:45 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:21:45.262 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[af95482a-ac36-42d6-aa87-4e98f3557610]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:21:45 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:21:45.264 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[a937df49-2d56-498f-b872-3bc0beac771e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:21:45 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:21:45.278 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[530e2944-d595-4b23-9950-03e522a6ccba]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 596137, 'reachable_time': 30925, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 269086, 'error': None, 'target': 'ovnmeta-77f3ccc8-bb54-46a1-b015-cc5b8a445202', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:21:45 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:21:45.281 144079 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-77f3ccc8-bb54-46a1-b015-cc5b8a445202 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Dec  6 02:21:45 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:21:45.281 144079 DEBUG oslo.privsep.daemon [-] privsep: reply[f3365988-a796-457f-bb1e-0901e05c9fe4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:21:45 np0005548731 systemd[1]: run-netns-ovnmeta\x2d77f3ccc8\x2dbb54\x2d46a1\x2db015\x2dcc5b8a445202.mount: Deactivated successfully.
Dec  6 02:21:45 np0005548731 nova_compute[232433]: 2025-12-06 07:21:45.358 232437 DEBUG oslo_concurrency.processutils [None req-fda6bedb-fa77-40ef-9313-0ab64b2b782b c2404fce4d1f48779c97d3cbcb6ae594 9536676f60844b6f802518771f02409f - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/6dc14838-5602-4f43-a3e3-2374b4f603eb/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp5yb9c5tu" returned: 0 in 0.135s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:21:45 np0005548731 nova_compute[232433]: 2025-12-06 07:21:45.387 232437 DEBUG nova.storage.rbd_utils [None req-fda6bedb-fa77-40ef-9313-0ab64b2b782b c2404fce4d1f48779c97d3cbcb6ae594 9536676f60844b6f802518771f02409f - - default default] rbd image 6dc14838-5602-4f43-a3e3-2374b4f603eb_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:21:45 np0005548731 nova_compute[232433]: 2025-12-06 07:21:45.391 232437 DEBUG oslo_concurrency.processutils [None req-fda6bedb-fa77-40ef-9313-0ab64b2b782b c2404fce4d1f48779c97d3cbcb6ae594 9536676f60844b6f802518771f02409f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/6dc14838-5602-4f43-a3e3-2374b4f603eb/disk.config 6dc14838-5602-4f43-a3e3-2374b4f603eb_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:21:45 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:21:45 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:21:45 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:21:45.445 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:21:45 np0005548731 nova_compute[232433]: 2025-12-06 07:21:45.789 232437 DEBUG nova.network.neutron [req-0a6b0e5c-f156-4926-aeca-0b9332fd2238 req-d14349c5-0711-4400-b3a3-9b6453de2270 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 6dc14838-5602-4f43-a3e3-2374b4f603eb] Updated VIF entry in instance network info cache for port 8af7a557-97f6-420e-99b0-83eced102922. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  6 02:21:45 np0005548731 nova_compute[232433]: 2025-12-06 07:21:45.789 232437 DEBUG nova.network.neutron [req-0a6b0e5c-f156-4926-aeca-0b9332fd2238 req-d14349c5-0711-4400-b3a3-9b6453de2270 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 6dc14838-5602-4f43-a3e3-2374b4f603eb] Updating instance_info_cache with network_info: [{"id": "8af7a557-97f6-420e-99b0-83eced102922", "address": "fa:16:3e:f8:41:a3", "network": {"id": "8b1d33d1-6678-4c36-a5fc-301790dcb0d0", "bridge": "br-int", "label": "tempest-TaggedAttachmentsTest-2120777627-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9536676f60844b6f802518771f02409f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8af7a557-97", "ovs_interfaceid": "8af7a557-97f6-420e-99b0-83eced102922", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 02:21:45 np0005548731 nova_compute[232433]: 2025-12-06 07:21:45.811 232437 DEBUG oslo_concurrency.processutils [None req-fda6bedb-fa77-40ef-9313-0ab64b2b782b c2404fce4d1f48779c97d3cbcb6ae594 9536676f60844b6f802518771f02409f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/6dc14838-5602-4f43-a3e3-2374b4f603eb/disk.config 6dc14838-5602-4f43-a3e3-2374b4f603eb_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.420s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:21:45 np0005548731 nova_compute[232433]: 2025-12-06 07:21:45.812 232437 INFO nova.virt.libvirt.driver [None req-fda6bedb-fa77-40ef-9313-0ab64b2b782b c2404fce4d1f48779c97d3cbcb6ae594 9536676f60844b6f802518771f02409f - - default default] [instance: 6dc14838-5602-4f43-a3e3-2374b4f603eb] Deleting local config drive /var/lib/nova/instances/6dc14838-5602-4f43-a3e3-2374b4f603eb/disk.config because it was imported into RBD.#033[00m
Dec  6 02:21:45 np0005548731 nova_compute[232433]: 2025-12-06 07:21:45.815 232437 DEBUG oslo_concurrency.lockutils [req-0a6b0e5c-f156-4926-aeca-0b9332fd2238 req-d14349c5-0711-4400-b3a3-9b6453de2270 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-6dc14838-5602-4f43-a3e3-2374b4f603eb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 02:21:45 np0005548731 kernel: tap8af7a557-97: entered promiscuous mode
Dec  6 02:21:45 np0005548731 systemd-udevd[268981]: Network interface NamePolicy= disabled on kernel command line.
Dec  6 02:21:45 np0005548731 NetworkManager[49182]: <info>  [1765005705.8597] manager: (tap8af7a557-97): new Tun device (/org/freedesktop/NetworkManager/Devices/177)
Dec  6 02:21:45 np0005548731 nova_compute[232433]: 2025-12-06 07:21:45.859 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:21:45 np0005548731 ovn_controller[133927]: 2025-12-06T07:21:45Z|00345|binding|INFO|Claiming lport 8af7a557-97f6-420e-99b0-83eced102922 for this chassis.
Dec  6 02:21:45 np0005548731 ovn_controller[133927]: 2025-12-06T07:21:45Z|00346|binding|INFO|8af7a557-97f6-420e-99b0-83eced102922: Claiming fa:16:3e:f8:41:a3 10.100.0.10
Dec  6 02:21:45 np0005548731 nova_compute[232433]: 2025-12-06 07:21:45.863 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:21:45 np0005548731 NetworkManager[49182]: <info>  [1765005705.8700] device (tap8af7a557-97): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec  6 02:21:45 np0005548731 NetworkManager[49182]: <info>  [1765005705.8710] device (tap8af7a557-97): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec  6 02:21:45 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:21:45.871 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f8:41:a3 10.100.0.10'], port_security=['fa:16:3e:f8:41:a3 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '6dc14838-5602-4f43-a3e3-2374b4f603eb', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8b1d33d1-6678-4c36-a5fc-301790dcb0d0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9536676f60844b6f802518771f02409f', 'neutron:revision_number': '2', 'neutron:security_group_ids': '4e8943e4-be94-466a-916f-cd8e61b4a537', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c9df0203-2150-4578-816a-9c2f6170e755, chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], logical_port=8af7a557-97f6-420e-99b0-83eced102922) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 02:21:45 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:21:45.873 143965 INFO neutron.agent.ovn.metadata.agent [-] Port 8af7a557-97f6-420e-99b0-83eced102922 in datapath 8b1d33d1-6678-4c36-a5fc-301790dcb0d0 bound to our chassis#033[00m
Dec  6 02:21:45 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:21:45.875 143965 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 8b1d33d1-6678-4c36-a5fc-301790dcb0d0#033[00m
Dec  6 02:21:45 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:21:45.887 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[801b2ea3-e34e-4fa7-b140-52c2335ee9d6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:21:45 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:21:45.888 143965 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap8b1d33d1-61 in ovnmeta-8b1d33d1-6678-4c36-a5fc-301790dcb0d0 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Dec  6 02:21:45 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:21:45.891 236668 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap8b1d33d1-60 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Dec  6 02:21:45 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:21:45.892 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[533d57bf-c1e7-42ac-800d-c37204d7466c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:21:45 np0005548731 systemd-machined[195355]: New machine qemu-38-instance-0000005c.
Dec  6 02:21:45 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:21:45.892 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[bc3ec1a3-07fb-40d3-962e-ccff901a8490]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:21:45 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:21:45.904 144079 DEBUG oslo.privsep.daemon [-] privsep: reply[16e7dd49-e9a0-47bf-83ac-8389f94b4eee]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:21:45 np0005548731 systemd[1]: Started Virtual Machine qemu-38-instance-0000005c.
Dec  6 02:21:45 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:21:45.929 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[cef848bc-7c6f-4760-9cad-2614728cdb03]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:21:45 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:21:45.960 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[16c38518-d2fc-40d1-8b38-d093bac30ef2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:21:45 np0005548731 nova_compute[232433]: 2025-12-06 07:21:45.962 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:21:45 np0005548731 NetworkManager[49182]: <info>  [1765005705.9663] manager: (tap8b1d33d1-60): new Veth device (/org/freedesktop/NetworkManager/Devices/178)
Dec  6 02:21:45 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:21:45.965 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[c3d3e0da-c174-43b9-a1c7-527cfa21b0e9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:21:45 np0005548731 ovn_controller[133927]: 2025-12-06T07:21:45Z|00347|binding|INFO|Setting lport 8af7a557-97f6-420e-99b0-83eced102922 ovn-installed in OVS
Dec  6 02:21:45 np0005548731 ovn_controller[133927]: 2025-12-06T07:21:45Z|00348|binding|INFO|Setting lport 8af7a557-97f6-420e-99b0-83eced102922 up in Southbound
Dec  6 02:21:45 np0005548731 nova_compute[232433]: 2025-12-06 07:21:45.972 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:21:45 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:21:45.996 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[85ce908c-660a-4c2e-b4b6-6ce5b9d5b705]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:21:46 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:21:45.999 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[e37bdef5-9664-4e80-adfd-042952ce6ece]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:21:46 np0005548731 NetworkManager[49182]: <info>  [1765005706.0188] device (tap8b1d33d1-60): carrier: link connected
Dec  6 02:21:46 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:21:46.022 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[d7020c88-6dab-4604-b13f-a33bacae51b8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:21:46 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:21:46.038 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[176718af-e36f-4678-9fdc-56aff4a36982]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8b1d33d1-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:42:17:21'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 113], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 596802, 'reachable_time': 16951, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 269168, 'error': None, 'target': 'ovnmeta-8b1d33d1-6678-4c36-a5fc-301790dcb0d0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:21:46 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:21:46.054 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[a8b23b61-4d96-4a5b-9a35-47cfab895170]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe42:1721'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 596802, 'tstamp': 596802}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 269169, 'error': None, 'target': 'ovnmeta-8b1d33d1-6678-4c36-a5fc-301790dcb0d0', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:21:46 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:21:46.072 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[d9f6f36d-4ac4-4a2d-9a47-675453bc0981]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8b1d33d1-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:42:17:21'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 113], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 596802, 'reachable_time': 16951, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 269170, 'error': None, 'target': 'ovnmeta-8b1d33d1-6678-4c36-a5fc-301790dcb0d0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:21:46 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:21:46.102 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[2a375240-c8bd-4f81-941a-3466bc5ad1db]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:21:46 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:21:46.156 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[66363087-166a-4324-851b-6130ceba3101]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:21:46 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:21:46.158 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8b1d33d1-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:21:46 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:21:46.158 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  6 02:21:46 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:21:46.158 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8b1d33d1-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:21:46 np0005548731 NetworkManager[49182]: <info>  [1765005706.1609] manager: (tap8b1d33d1-60): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/179)
Dec  6 02:21:46 np0005548731 nova_compute[232433]: 2025-12-06 07:21:46.160 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:21:46 np0005548731 kernel: tap8b1d33d1-60: entered promiscuous mode
Dec  6 02:21:46 np0005548731 nova_compute[232433]: 2025-12-06 07:21:46.163 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:21:46 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:21:46.164 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap8b1d33d1-60, col_values=(('external_ids', {'iface-id': 'c094330f-7681-4d93-b475-107e1129147c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:21:46 np0005548731 nova_compute[232433]: 2025-12-06 07:21:46.165 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:21:46 np0005548731 ovn_controller[133927]: 2025-12-06T07:21:46Z|00349|binding|INFO|Releasing lport c094330f-7681-4d93-b475-107e1129147c from this chassis (sb_readonly=0)
Dec  6 02:21:46 np0005548731 nova_compute[232433]: 2025-12-06 07:21:46.183 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:21:46 np0005548731 nova_compute[232433]: 2025-12-06 07:21:46.185 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:21:46 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:21:46.185 143965 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/8b1d33d1-6678-4c36-a5fc-301790dcb0d0.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/8b1d33d1-6678-4c36-a5fc-301790dcb0d0.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Dec  6 02:21:46 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:21:46.186 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[4e33bc8c-fe1e-4e37-be4d-43df21b0b953]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:21:46 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:21:46.186 143965 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec  6 02:21:46 np0005548731 ovn_metadata_agent[143960]: global
Dec  6 02:21:46 np0005548731 ovn_metadata_agent[143960]:    log         /dev/log local0 debug
Dec  6 02:21:46 np0005548731 ovn_metadata_agent[143960]:    log-tag     haproxy-metadata-proxy-8b1d33d1-6678-4c36-a5fc-301790dcb0d0
Dec  6 02:21:46 np0005548731 ovn_metadata_agent[143960]:    user        root
Dec  6 02:21:46 np0005548731 ovn_metadata_agent[143960]:    group       root
Dec  6 02:21:46 np0005548731 ovn_metadata_agent[143960]:    maxconn     1024
Dec  6 02:21:46 np0005548731 ovn_metadata_agent[143960]:    pidfile     /var/lib/neutron/external/pids/8b1d33d1-6678-4c36-a5fc-301790dcb0d0.pid.haproxy
Dec  6 02:21:46 np0005548731 ovn_metadata_agent[143960]:    daemon
Dec  6 02:21:46 np0005548731 ovn_metadata_agent[143960]: 
Dec  6 02:21:46 np0005548731 ovn_metadata_agent[143960]: defaults
Dec  6 02:21:46 np0005548731 ovn_metadata_agent[143960]:    log global
Dec  6 02:21:46 np0005548731 ovn_metadata_agent[143960]:    mode http
Dec  6 02:21:46 np0005548731 ovn_metadata_agent[143960]:    option httplog
Dec  6 02:21:46 np0005548731 ovn_metadata_agent[143960]:    option dontlognull
Dec  6 02:21:46 np0005548731 ovn_metadata_agent[143960]:    option http-server-close
Dec  6 02:21:46 np0005548731 ovn_metadata_agent[143960]:    option forwardfor
Dec  6 02:21:46 np0005548731 ovn_metadata_agent[143960]:    retries                 3
Dec  6 02:21:46 np0005548731 ovn_metadata_agent[143960]:    timeout http-request    30s
Dec  6 02:21:46 np0005548731 ovn_metadata_agent[143960]:    timeout connect         30s
Dec  6 02:21:46 np0005548731 ovn_metadata_agent[143960]:    timeout client          32s
Dec  6 02:21:46 np0005548731 ovn_metadata_agent[143960]:    timeout server          32s
Dec  6 02:21:46 np0005548731 ovn_metadata_agent[143960]:    timeout http-keep-alive 30s
Dec  6 02:21:46 np0005548731 ovn_metadata_agent[143960]: 
Dec  6 02:21:46 np0005548731 ovn_metadata_agent[143960]: 
Dec  6 02:21:46 np0005548731 ovn_metadata_agent[143960]: listen listener
Dec  6 02:21:46 np0005548731 ovn_metadata_agent[143960]:    bind 169.254.169.254:80
Dec  6 02:21:46 np0005548731 ovn_metadata_agent[143960]:    server metadata /var/lib/neutron/metadata_proxy
Dec  6 02:21:46 np0005548731 ovn_metadata_agent[143960]:    http-request add-header X-OVN-Network-ID 8b1d33d1-6678-4c36-a5fc-301790dcb0d0
Dec  6 02:21:46 np0005548731 ovn_metadata_agent[143960]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Dec  6 02:21:46 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:21:46.187 143965 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-8b1d33d1-6678-4c36-a5fc-301790dcb0d0', 'env', 'PROCESS_TAG=haproxy-8b1d33d1-6678-4c36-a5fc-301790dcb0d0', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/8b1d33d1-6678-4c36-a5fc-301790dcb0d0.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Dec  6 02:21:46 np0005548731 nova_compute[232433]: 2025-12-06 07:21:46.298 232437 DEBUG nova.compute.manager [req-ea43d9a9-bcc3-4086-9eda-bead5a535067 req-7990fef3-d8b8-4cc4-8c72-2dbecf822100 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 84bceb35-5d7d-4199-b98e-3dc437c4a7a3] Received event network-vif-unplugged-79a92003-0946-49ab-b85a-8fea0cd7c3cb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:21:46 np0005548731 nova_compute[232433]: 2025-12-06 07:21:46.299 232437 DEBUG oslo_concurrency.lockutils [req-ea43d9a9-bcc3-4086-9eda-bead5a535067 req-7990fef3-d8b8-4cc4-8c72-2dbecf822100 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "84bceb35-5d7d-4199-b98e-3dc437c4a7a3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:21:46 np0005548731 nova_compute[232433]: 2025-12-06 07:21:46.299 232437 DEBUG oslo_concurrency.lockutils [req-ea43d9a9-bcc3-4086-9eda-bead5a535067 req-7990fef3-d8b8-4cc4-8c72-2dbecf822100 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "84bceb35-5d7d-4199-b98e-3dc437c4a7a3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:21:46 np0005548731 nova_compute[232433]: 2025-12-06 07:21:46.299 232437 DEBUG oslo_concurrency.lockutils [req-ea43d9a9-bcc3-4086-9eda-bead5a535067 req-7990fef3-d8b8-4cc4-8c72-2dbecf822100 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "84bceb35-5d7d-4199-b98e-3dc437c4a7a3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:21:46 np0005548731 nova_compute[232433]: 2025-12-06 07:21:46.299 232437 DEBUG nova.compute.manager [req-ea43d9a9-bcc3-4086-9eda-bead5a535067 req-7990fef3-d8b8-4cc4-8c72-2dbecf822100 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 84bceb35-5d7d-4199-b98e-3dc437c4a7a3] No waiting events found dispatching network-vif-unplugged-79a92003-0946-49ab-b85a-8fea0cd7c3cb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 02:21:46 np0005548731 nova_compute[232433]: 2025-12-06 07:21:46.300 232437 DEBUG nova.compute.manager [req-ea43d9a9-bcc3-4086-9eda-bead5a535067 req-7990fef3-d8b8-4cc4-8c72-2dbecf822100 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 84bceb35-5d7d-4199-b98e-3dc437c4a7a3] Received event network-vif-unplugged-79a92003-0946-49ab-b85a-8fea0cd7c3cb for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Dec  6 02:21:46 np0005548731 nova_compute[232433]: 2025-12-06 07:21:46.300 232437 DEBUG nova.compute.manager [req-ea43d9a9-bcc3-4086-9eda-bead5a535067 req-7990fef3-d8b8-4cc4-8c72-2dbecf822100 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 84bceb35-5d7d-4199-b98e-3dc437c4a7a3] Received event network-vif-plugged-79a92003-0946-49ab-b85a-8fea0cd7c3cb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:21:46 np0005548731 nova_compute[232433]: 2025-12-06 07:21:46.300 232437 DEBUG oslo_concurrency.lockutils [req-ea43d9a9-bcc3-4086-9eda-bead5a535067 req-7990fef3-d8b8-4cc4-8c72-2dbecf822100 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "84bceb35-5d7d-4199-b98e-3dc437c4a7a3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:21:46 np0005548731 nova_compute[232433]: 2025-12-06 07:21:46.300 232437 DEBUG oslo_concurrency.lockutils [req-ea43d9a9-bcc3-4086-9eda-bead5a535067 req-7990fef3-d8b8-4cc4-8c72-2dbecf822100 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "84bceb35-5d7d-4199-b98e-3dc437c4a7a3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:21:46 np0005548731 nova_compute[232433]: 2025-12-06 07:21:46.300 232437 DEBUG oslo_concurrency.lockutils [req-ea43d9a9-bcc3-4086-9eda-bead5a535067 req-7990fef3-d8b8-4cc4-8c72-2dbecf822100 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "84bceb35-5d7d-4199-b98e-3dc437c4a7a3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:21:46 np0005548731 nova_compute[232433]: 2025-12-06 07:21:46.301 232437 DEBUG nova.compute.manager [req-ea43d9a9-bcc3-4086-9eda-bead5a535067 req-7990fef3-d8b8-4cc4-8c72-2dbecf822100 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 84bceb35-5d7d-4199-b98e-3dc437c4a7a3] No waiting events found dispatching network-vif-plugged-79a92003-0946-49ab-b85a-8fea0cd7c3cb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 02:21:46 np0005548731 nova_compute[232433]: 2025-12-06 07:21:46.301 232437 WARNING nova.compute.manager [req-ea43d9a9-bcc3-4086-9eda-bead5a535067 req-7990fef3-d8b8-4cc4-8c72-2dbecf822100 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 84bceb35-5d7d-4199-b98e-3dc437c4a7a3] Received unexpected event network-vif-plugged-79a92003-0946-49ab-b85a-8fea0cd7c3cb for instance with vm_state active and task_state deleting.#033[00m
Dec  6 02:21:46 np0005548731 podman[269201]: 2025-12-06 07:21:46.530636219 +0000 UTC m=+0.044917436 container create 987ab1772203befc4d7dce4685e6dad15705b3df3d0cce67a76169d2bdaad59b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8b1d33d1-6678-4c36-a5fc-301790dcb0d0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Dec  6 02:21:46 np0005548731 systemd[1]: Started libpod-conmon-987ab1772203befc4d7dce4685e6dad15705b3df3d0cce67a76169d2bdaad59b.scope.
Dec  6 02:21:46 np0005548731 systemd[1]: Started libcrun container.
Dec  6 02:21:46 np0005548731 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dba60727404efbf25cc816bc7c8155dd4afdacd2e9d2d34694a560251fa3f238/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec  6 02:21:46 np0005548731 podman[269201]: 2025-12-06 07:21:46.600735104 +0000 UTC m=+0.115016341 container init 987ab1772203befc4d7dce4685e6dad15705b3df3d0cce67a76169d2bdaad59b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8b1d33d1-6678-4c36-a5fc-301790dcb0d0, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  6 02:21:46 np0005548731 podman[269201]: 2025-12-06 07:21:46.507664674 +0000 UTC m=+0.021945911 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec  6 02:21:46 np0005548731 podman[269201]: 2025-12-06 07:21:46.605302519 +0000 UTC m=+0.119583736 container start 987ab1772203befc4d7dce4685e6dad15705b3df3d0cce67a76169d2bdaad59b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8b1d33d1-6678-4c36-a5fc-301790dcb0d0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Dec  6 02:21:46 np0005548731 neutron-haproxy-ovnmeta-8b1d33d1-6678-4c36-a5fc-301790dcb0d0[269236]: [NOTICE]   (269240) : New worker (269242) forked
Dec  6 02:21:46 np0005548731 neutron-haproxy-ovnmeta-8b1d33d1-6678-4c36-a5fc-301790dcb0d0[269236]: [NOTICE]   (269240) : Loading success.
Dec  6 02:21:46 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:21:46 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:21:46 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:21:46.848 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:21:47 np0005548731 nova_compute[232433]: 2025-12-06 07:21:47.160 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:21:47 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:21:47 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:21:47 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:21:47.448 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:21:47 np0005548731 nova_compute[232433]: 2025-12-06 07:21:47.741 232437 DEBUG nova.virt.driver [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Emitting event <LifecycleEvent: 1765005707.7409368, 6dc14838-5602-4f43-a3e3-2374b4f603eb => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 02:21:47 np0005548731 nova_compute[232433]: 2025-12-06 07:21:47.742 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 6dc14838-5602-4f43-a3e3-2374b4f603eb] VM Started (Lifecycle Event)#033[00m
Dec  6 02:21:47 np0005548731 nova_compute[232433]: 2025-12-06 07:21:47.765 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 6dc14838-5602-4f43-a3e3-2374b4f603eb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:21:47 np0005548731 nova_compute[232433]: 2025-12-06 07:21:47.769 232437 DEBUG nova.virt.driver [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Emitting event <LifecycleEvent: 1765005707.7411144, 6dc14838-5602-4f43-a3e3-2374b4f603eb => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 02:21:47 np0005548731 nova_compute[232433]: 2025-12-06 07:21:47.770 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 6dc14838-5602-4f43-a3e3-2374b4f603eb] VM Paused (Lifecycle Event)#033[00m
Dec  6 02:21:47 np0005548731 nova_compute[232433]: 2025-12-06 07:21:47.787 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 6dc14838-5602-4f43-a3e3-2374b4f603eb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:21:47 np0005548731 nova_compute[232433]: 2025-12-06 07:21:47.791 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 6dc14838-5602-4f43-a3e3-2374b4f603eb] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  6 02:21:47 np0005548731 nova_compute[232433]: 2025-12-06 07:21:47.815 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 6dc14838-5602-4f43-a3e3-2374b4f603eb] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  6 02:21:47 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e258 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:21:48 np0005548731 nova_compute[232433]: 2025-12-06 07:21:48.413 232437 DEBUG nova.compute.manager [req-59005031-5de7-4427-8e40-727cc60cc1b9 req-c86474ee-8b5f-4639-8720-0930da8efbc3 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 6dc14838-5602-4f43-a3e3-2374b4f603eb] Received event network-vif-plugged-8af7a557-97f6-420e-99b0-83eced102922 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:21:48 np0005548731 nova_compute[232433]: 2025-12-06 07:21:48.413 232437 DEBUG oslo_concurrency.lockutils [req-59005031-5de7-4427-8e40-727cc60cc1b9 req-c86474ee-8b5f-4639-8720-0930da8efbc3 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "6dc14838-5602-4f43-a3e3-2374b4f603eb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:21:48 np0005548731 nova_compute[232433]: 2025-12-06 07:21:48.414 232437 DEBUG oslo_concurrency.lockutils [req-59005031-5de7-4427-8e40-727cc60cc1b9 req-c86474ee-8b5f-4639-8720-0930da8efbc3 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "6dc14838-5602-4f43-a3e3-2374b4f603eb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:21:48 np0005548731 nova_compute[232433]: 2025-12-06 07:21:48.414 232437 DEBUG oslo_concurrency.lockutils [req-59005031-5de7-4427-8e40-727cc60cc1b9 req-c86474ee-8b5f-4639-8720-0930da8efbc3 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "6dc14838-5602-4f43-a3e3-2374b4f603eb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:21:48 np0005548731 nova_compute[232433]: 2025-12-06 07:21:48.414 232437 DEBUG nova.compute.manager [req-59005031-5de7-4427-8e40-727cc60cc1b9 req-c86474ee-8b5f-4639-8720-0930da8efbc3 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 6dc14838-5602-4f43-a3e3-2374b4f603eb] Processing event network-vif-plugged-8af7a557-97f6-420e-99b0-83eced102922 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Dec  6 02:21:48 np0005548731 nova_compute[232433]: 2025-12-06 07:21:48.414 232437 DEBUG nova.compute.manager [req-59005031-5de7-4427-8e40-727cc60cc1b9 req-c86474ee-8b5f-4639-8720-0930da8efbc3 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 6dc14838-5602-4f43-a3e3-2374b4f603eb] Received event network-vif-plugged-8af7a557-97f6-420e-99b0-83eced102922 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:21:48 np0005548731 nova_compute[232433]: 2025-12-06 07:21:48.414 232437 DEBUG oslo_concurrency.lockutils [req-59005031-5de7-4427-8e40-727cc60cc1b9 req-c86474ee-8b5f-4639-8720-0930da8efbc3 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "6dc14838-5602-4f43-a3e3-2374b4f603eb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:21:48 np0005548731 nova_compute[232433]: 2025-12-06 07:21:48.415 232437 DEBUG oslo_concurrency.lockutils [req-59005031-5de7-4427-8e40-727cc60cc1b9 req-c86474ee-8b5f-4639-8720-0930da8efbc3 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "6dc14838-5602-4f43-a3e3-2374b4f603eb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:21:48 np0005548731 nova_compute[232433]: 2025-12-06 07:21:48.415 232437 DEBUG oslo_concurrency.lockutils [req-59005031-5de7-4427-8e40-727cc60cc1b9 req-c86474ee-8b5f-4639-8720-0930da8efbc3 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "6dc14838-5602-4f43-a3e3-2374b4f603eb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:21:48 np0005548731 nova_compute[232433]: 2025-12-06 07:21:48.415 232437 DEBUG nova.compute.manager [req-59005031-5de7-4427-8e40-727cc60cc1b9 req-c86474ee-8b5f-4639-8720-0930da8efbc3 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 6dc14838-5602-4f43-a3e3-2374b4f603eb] No waiting events found dispatching network-vif-plugged-8af7a557-97f6-420e-99b0-83eced102922 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 02:21:48 np0005548731 nova_compute[232433]: 2025-12-06 07:21:48.415 232437 WARNING nova.compute.manager [req-59005031-5de7-4427-8e40-727cc60cc1b9 req-c86474ee-8b5f-4639-8720-0930da8efbc3 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 6dc14838-5602-4f43-a3e3-2374b4f603eb] Received unexpected event network-vif-plugged-8af7a557-97f6-420e-99b0-83eced102922 for instance with vm_state building and task_state spawning.#033[00m
Dec  6 02:21:48 np0005548731 nova_compute[232433]: 2025-12-06 07:21:48.416 232437 DEBUG nova.compute.manager [None req-fda6bedb-fa77-40ef-9313-0ab64b2b782b c2404fce4d1f48779c97d3cbcb6ae594 9536676f60844b6f802518771f02409f - - default default] [instance: 6dc14838-5602-4f43-a3e3-2374b4f603eb] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec  6 02:21:48 np0005548731 nova_compute[232433]: 2025-12-06 07:21:48.419 232437 DEBUG nova.virt.driver [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Emitting event <LifecycleEvent: 1765005708.419493, 6dc14838-5602-4f43-a3e3-2374b4f603eb => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 02:21:48 np0005548731 nova_compute[232433]: 2025-12-06 07:21:48.420 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 6dc14838-5602-4f43-a3e3-2374b4f603eb] VM Resumed (Lifecycle Event)#033[00m
Dec  6 02:21:48 np0005548731 nova_compute[232433]: 2025-12-06 07:21:48.421 232437 DEBUG nova.virt.libvirt.driver [None req-fda6bedb-fa77-40ef-9313-0ab64b2b782b c2404fce4d1f48779c97d3cbcb6ae594 9536676f60844b6f802518771f02409f - - default default] [instance: 6dc14838-5602-4f43-a3e3-2374b4f603eb] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Dec  6 02:21:48 np0005548731 nova_compute[232433]: 2025-12-06 07:21:48.425 232437 INFO nova.virt.libvirt.driver [-] [instance: 6dc14838-5602-4f43-a3e3-2374b4f603eb] Instance spawned successfully.#033[00m
Dec  6 02:21:48 np0005548731 nova_compute[232433]: 2025-12-06 07:21:48.425 232437 DEBUG nova.virt.libvirt.driver [None req-fda6bedb-fa77-40ef-9313-0ab64b2b782b c2404fce4d1f48779c97d3cbcb6ae594 9536676f60844b6f802518771f02409f - - default default] [instance: 6dc14838-5602-4f43-a3e3-2374b4f603eb] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Dec  6 02:21:48 np0005548731 nova_compute[232433]: 2025-12-06 07:21:48.441 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 6dc14838-5602-4f43-a3e3-2374b4f603eb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:21:48 np0005548731 nova_compute[232433]: 2025-12-06 07:21:48.446 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 6dc14838-5602-4f43-a3e3-2374b4f603eb] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  6 02:21:48 np0005548731 nova_compute[232433]: 2025-12-06 07:21:48.449 232437 DEBUG nova.virt.libvirt.driver [None req-fda6bedb-fa77-40ef-9313-0ab64b2b782b c2404fce4d1f48779c97d3cbcb6ae594 9536676f60844b6f802518771f02409f - - default default] [instance: 6dc14838-5602-4f43-a3e3-2374b4f603eb] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 02:21:48 np0005548731 nova_compute[232433]: 2025-12-06 07:21:48.449 232437 DEBUG nova.virt.libvirt.driver [None req-fda6bedb-fa77-40ef-9313-0ab64b2b782b c2404fce4d1f48779c97d3cbcb6ae594 9536676f60844b6f802518771f02409f - - default default] [instance: 6dc14838-5602-4f43-a3e3-2374b4f603eb] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 02:21:48 np0005548731 nova_compute[232433]: 2025-12-06 07:21:48.450 232437 DEBUG nova.virt.libvirt.driver [None req-fda6bedb-fa77-40ef-9313-0ab64b2b782b c2404fce4d1f48779c97d3cbcb6ae594 9536676f60844b6f802518771f02409f - - default default] [instance: 6dc14838-5602-4f43-a3e3-2374b4f603eb] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 02:21:48 np0005548731 nova_compute[232433]: 2025-12-06 07:21:48.450 232437 DEBUG nova.virt.libvirt.driver [None req-fda6bedb-fa77-40ef-9313-0ab64b2b782b c2404fce4d1f48779c97d3cbcb6ae594 9536676f60844b6f802518771f02409f - - default default] [instance: 6dc14838-5602-4f43-a3e3-2374b4f603eb] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 02:21:48 np0005548731 nova_compute[232433]: 2025-12-06 07:21:48.450 232437 DEBUG nova.virt.libvirt.driver [None req-fda6bedb-fa77-40ef-9313-0ab64b2b782b c2404fce4d1f48779c97d3cbcb6ae594 9536676f60844b6f802518771f02409f - - default default] [instance: 6dc14838-5602-4f43-a3e3-2374b4f603eb] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 02:21:48 np0005548731 nova_compute[232433]: 2025-12-06 07:21:48.451 232437 DEBUG nova.virt.libvirt.driver [None req-fda6bedb-fa77-40ef-9313-0ab64b2b782b c2404fce4d1f48779c97d3cbcb6ae594 9536676f60844b6f802518771f02409f - - default default] [instance: 6dc14838-5602-4f43-a3e3-2374b4f603eb] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 02:21:48 np0005548731 nova_compute[232433]: 2025-12-06 07:21:48.471 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 6dc14838-5602-4f43-a3e3-2374b4f603eb] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  6 02:21:48 np0005548731 nova_compute[232433]: 2025-12-06 07:21:48.499 232437 INFO nova.compute.manager [None req-fda6bedb-fa77-40ef-9313-0ab64b2b782b c2404fce4d1f48779c97d3cbcb6ae594 9536676f60844b6f802518771f02409f - - default default] [instance: 6dc14838-5602-4f43-a3e3-2374b4f603eb] Took 8.17 seconds to spawn the instance on the hypervisor.#033[00m
Dec  6 02:21:48 np0005548731 nova_compute[232433]: 2025-12-06 07:21:48.500 232437 DEBUG nova.compute.manager [None req-fda6bedb-fa77-40ef-9313-0ab64b2b782b c2404fce4d1f48779c97d3cbcb6ae594 9536676f60844b6f802518771f02409f - - default default] [instance: 6dc14838-5602-4f43-a3e3-2374b4f603eb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:21:48 np0005548731 nova_compute[232433]: 2025-12-06 07:21:48.554 232437 INFO nova.compute.manager [None req-fda6bedb-fa77-40ef-9313-0ab64b2b782b c2404fce4d1f48779c97d3cbcb6ae594 9536676f60844b6f802518771f02409f - - default default] [instance: 6dc14838-5602-4f43-a3e3-2374b4f603eb] Took 9.17 seconds to build instance.#033[00m
Dec  6 02:21:48 np0005548731 nova_compute[232433]: 2025-12-06 07:21:48.572 232437 DEBUG oslo_concurrency.lockutils [None req-fda6bedb-fa77-40ef-9313-0ab64b2b782b c2404fce4d1f48779c97d3cbcb6ae594 9536676f60844b6f802518771f02409f - - default default] Lock "6dc14838-5602-4f43-a3e3-2374b4f603eb" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.255s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:21:48 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:21:48 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:21:48 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:21:48.850 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:21:49 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:21:49 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:21:49 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:21:49.451 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:21:50 np0005548731 nova_compute[232433]: 2025-12-06 07:21:50.043 232437 INFO nova.virt.libvirt.driver [None req-9dadc567-c16e-4784-a68f-5e7c198417a2 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] [instance: 84bceb35-5d7d-4199-b98e-3dc437c4a7a3] Deleting instance files /var/lib/nova/instances/84bceb35-5d7d-4199-b98e-3dc437c4a7a3_del#033[00m
Dec  6 02:21:50 np0005548731 nova_compute[232433]: 2025-12-06 07:21:50.044 232437 INFO nova.virt.libvirt.driver [None req-9dadc567-c16e-4784-a68f-5e7c198417a2 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] [instance: 84bceb35-5d7d-4199-b98e-3dc437c4a7a3] Deletion of /var/lib/nova/instances/84bceb35-5d7d-4199-b98e-3dc437c4a7a3_del complete#033[00m
Dec  6 02:21:50 np0005548731 nova_compute[232433]: 2025-12-06 07:21:50.097 232437 INFO nova.compute.manager [None req-9dadc567-c16e-4784-a68f-5e7c198417a2 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] [instance: 84bceb35-5d7d-4199-b98e-3dc437c4a7a3] Took 5.19 seconds to destroy the instance on the hypervisor.#033[00m
Dec  6 02:21:50 np0005548731 nova_compute[232433]: 2025-12-06 07:21:50.097 232437 DEBUG oslo.service.loopingcall [None req-9dadc567-c16e-4784-a68f-5e7c198417a2 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Dec  6 02:21:50 np0005548731 nova_compute[232433]: 2025-12-06 07:21:50.097 232437 DEBUG nova.compute.manager [-] [instance: 84bceb35-5d7d-4199-b98e-3dc437c4a7a3] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Dec  6 02:21:50 np0005548731 nova_compute[232433]: 2025-12-06 07:21:50.098 232437 DEBUG nova.network.neutron [-] [instance: 84bceb35-5d7d-4199-b98e-3dc437c4a7a3] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Dec  6 02:21:50 np0005548731 nova_compute[232433]: 2025-12-06 07:21:50.170 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:21:50 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:21:50 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:21:50 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:21:50.852 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:21:51 np0005548731 nova_compute[232433]: 2025-12-06 07:21:51.230 232437 DEBUG nova.network.neutron [-] [instance: 84bceb35-5d7d-4199-b98e-3dc437c4a7a3] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 02:21:51 np0005548731 nova_compute[232433]: 2025-12-06 07:21:51.250 232437 INFO nova.compute.manager [-] [instance: 84bceb35-5d7d-4199-b98e-3dc437c4a7a3] Took 1.15 seconds to deallocate network for instance.#033[00m
Dec  6 02:21:51 np0005548731 nova_compute[232433]: 2025-12-06 07:21:51.311 232437 DEBUG oslo_concurrency.lockutils [None req-9dadc567-c16e-4784-a68f-5e7c198417a2 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:21:51 np0005548731 nova_compute[232433]: 2025-12-06 07:21:51.312 232437 DEBUG oslo_concurrency.lockutils [None req-9dadc567-c16e-4784-a68f-5e7c198417a2 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:21:51 np0005548731 NetworkManager[49182]: <info>  [1765005711.3167] manager: (patch-br-int-to-provnet-9e78c1a1-68f4-477a-abaa-13a98bde06e5): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/180)
Dec  6 02:21:51 np0005548731 nova_compute[232433]: 2025-12-06 07:21:51.316 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:21:51 np0005548731 NetworkManager[49182]: <info>  [1765005711.3175] manager: (patch-provnet-9e78c1a1-68f4-477a-abaa-13a98bde06e5-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/181)
Dec  6 02:21:51 np0005548731 nova_compute[232433]: 2025-12-06 07:21:51.320 232437 DEBUG nova.compute.manager [req-1bcd0a7e-ab42-441a-ad26-5dd41495e8a3 req-656125f7-c7e8-4dba-87e2-de1ff434d4ab 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 84bceb35-5d7d-4199-b98e-3dc437c4a7a3] Received event network-vif-deleted-79a92003-0946-49ab-b85a-8fea0cd7c3cb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:21:51 np0005548731 nova_compute[232433]: 2025-12-06 07:21:51.400 232437 DEBUG oslo_concurrency.processutils [None req-9dadc567-c16e-4784-a68f-5e7c198417a2 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:21:51 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:21:51 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:21:51 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:21:51.454 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:21:51 np0005548731 nova_compute[232433]: 2025-12-06 07:21:51.492 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:21:51 np0005548731 ovn_controller[133927]: 2025-12-06T07:21:51Z|00350|binding|INFO|Releasing lport c094330f-7681-4d93-b475-107e1129147c from this chassis (sb_readonly=0)
Dec  6 02:21:51 np0005548731 nova_compute[232433]: 2025-12-06 07:21:51.508 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:21:51 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 02:21:51 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/4182883929' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 02:21:51 np0005548731 nova_compute[232433]: 2025-12-06 07:21:51.836 232437 DEBUG oslo_concurrency.processutils [None req-9dadc567-c16e-4784-a68f-5e7c198417a2 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.437s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:21:51 np0005548731 nova_compute[232433]: 2025-12-06 07:21:51.842 232437 DEBUG nova.compute.provider_tree [None req-9dadc567-c16e-4784-a68f-5e7c198417a2 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] Inventory has not changed in ProviderTree for provider: 6d00757a-082f-486d-ae84-869a2ba2e6e7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  6 02:21:51 np0005548731 nova_compute[232433]: 2025-12-06 07:21:51.858 232437 DEBUG nova.scheduler.client.report [None req-9dadc567-c16e-4784-a68f-5e7c198417a2 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] Inventory has not changed for provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  6 02:21:51 np0005548731 nova_compute[232433]: 2025-12-06 07:21:51.875 232437 DEBUG oslo_concurrency.lockutils [None req-9dadc567-c16e-4784-a68f-5e7c198417a2 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.563s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:21:51 np0005548731 nova_compute[232433]: 2025-12-06 07:21:51.898 232437 INFO nova.scheduler.client.report [None req-9dadc567-c16e-4784-a68f-5e7c198417a2 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] Deleted allocations for instance 84bceb35-5d7d-4199-b98e-3dc437c4a7a3#033[00m
Dec  6 02:21:51 np0005548731 nova_compute[232433]: 2025-12-06 07:21:51.972 232437 DEBUG oslo_concurrency.lockutils [None req-9dadc567-c16e-4784-a68f-5e7c198417a2 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] Lock "84bceb35-5d7d-4199-b98e-3dc437c4a7a3" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 7.066s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:21:52 np0005548731 nova_compute[232433]: 2025-12-06 07:21:52.043 232437 DEBUG nova.compute.manager [req-3f119f40-8371-4920-912d-c663ea3aed27 req-6bb069b9-3ad0-4887-aac6-f9fa7e8377ff 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 6dc14838-5602-4f43-a3e3-2374b4f603eb] Received event network-changed-8af7a557-97f6-420e-99b0-83eced102922 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:21:52 np0005548731 nova_compute[232433]: 2025-12-06 07:21:52.044 232437 DEBUG nova.compute.manager [req-3f119f40-8371-4920-912d-c663ea3aed27 req-6bb069b9-3ad0-4887-aac6-f9fa7e8377ff 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 6dc14838-5602-4f43-a3e3-2374b4f603eb] Refreshing instance network info cache due to event network-changed-8af7a557-97f6-420e-99b0-83eced102922. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  6 02:21:52 np0005548731 nova_compute[232433]: 2025-12-06 07:21:52.045 232437 DEBUG oslo_concurrency.lockutils [req-3f119f40-8371-4920-912d-c663ea3aed27 req-6bb069b9-3ad0-4887-aac6-f9fa7e8377ff 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-6dc14838-5602-4f43-a3e3-2374b4f603eb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 02:21:52 np0005548731 nova_compute[232433]: 2025-12-06 07:21:52.045 232437 DEBUG oslo_concurrency.lockutils [req-3f119f40-8371-4920-912d-c663ea3aed27 req-6bb069b9-3ad0-4887-aac6-f9fa7e8377ff 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-6dc14838-5602-4f43-a3e3-2374b4f603eb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 02:21:52 np0005548731 nova_compute[232433]: 2025-12-06 07:21:52.045 232437 DEBUG nova.network.neutron [req-3f119f40-8371-4920-912d-c663ea3aed27 req-6bb069b9-3ad0-4887-aac6-f9fa7e8377ff 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 6dc14838-5602-4f43-a3e3-2374b4f603eb] Refreshing network info cache for port 8af7a557-97f6-420e-99b0-83eced102922 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  6 02:21:52 np0005548731 nova_compute[232433]: 2025-12-06 07:21:52.162 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:21:52 np0005548731 ovn_controller[133927]: 2025-12-06T07:21:52Z|00351|binding|INFO|Releasing lport c094330f-7681-4d93-b475-107e1129147c from this chassis (sb_readonly=0)
Dec  6 02:21:52 np0005548731 nova_compute[232433]: 2025-12-06 07:21:52.517 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:21:52 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e258 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:21:52 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:21:52 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:21:52 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:21:52.855 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:21:53 np0005548731 nova_compute[232433]: 2025-12-06 07:21:53.311 232437 DEBUG nova.network.neutron [req-3f119f40-8371-4920-912d-c663ea3aed27 req-6bb069b9-3ad0-4887-aac6-f9fa7e8377ff 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 6dc14838-5602-4f43-a3e3-2374b4f603eb] Updated VIF entry in instance network info cache for port 8af7a557-97f6-420e-99b0-83eced102922. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  6 02:21:53 np0005548731 nova_compute[232433]: 2025-12-06 07:21:53.312 232437 DEBUG nova.network.neutron [req-3f119f40-8371-4920-912d-c663ea3aed27 req-6bb069b9-3ad0-4887-aac6-f9fa7e8377ff 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 6dc14838-5602-4f43-a3e3-2374b4f603eb] Updating instance_info_cache with network_info: [{"id": "8af7a557-97f6-420e-99b0-83eced102922", "address": "fa:16:3e:f8:41:a3", "network": {"id": "8b1d33d1-6678-4c36-a5fc-301790dcb0d0", "bridge": "br-int", "label": "tempest-TaggedAttachmentsTest-2120777627-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.182", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9536676f60844b6f802518771f02409f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8af7a557-97", "ovs_interfaceid": "8af7a557-97f6-420e-99b0-83eced102922", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 02:21:53 np0005548731 nova_compute[232433]: 2025-12-06 07:21:53.335 232437 DEBUG oslo_concurrency.lockutils [req-3f119f40-8371-4920-912d-c663ea3aed27 req-6bb069b9-3ad0-4887-aac6-f9fa7e8377ff 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-6dc14838-5602-4f43-a3e3-2374b4f603eb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 02:21:53 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:21:53 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:21:53 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:21:53.456 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:21:54 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:21:54 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000025s ======
Dec  6 02:21:54 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:21:54.858 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Dec  6 02:21:55 np0005548731 nova_compute[232433]: 2025-12-06 07:21:55.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:21:55 np0005548731 nova_compute[232433]: 2025-12-06 07:21:55.105 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:21:55 np0005548731 nova_compute[232433]: 2025-12-06 07:21:55.105 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  6 02:21:55 np0005548731 nova_compute[232433]: 2025-12-06 07:21:55.128 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Dec  6 02:21:55 np0005548731 nova_compute[232433]: 2025-12-06 07:21:55.211 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:21:55 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:21:55 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:21:55 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:21:55.461 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:21:56 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:21:56 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:21:56 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:21:56.861 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:21:57 np0005548731 nova_compute[232433]: 2025-12-06 07:21:57.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:21:57 np0005548731 nova_compute[232433]: 2025-12-06 07:21:57.164 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:21:57 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:21:57 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:21:57 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:21:57.463 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:21:57 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e258 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:21:58 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:21:58 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:21:58 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:21:58.864 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:21:59 np0005548731 nova_compute[232433]: 2025-12-06 07:21:59.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:21:59 np0005548731 nova_compute[232433]: 2025-12-06 07:21:59.105 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  6 02:21:59 np0005548731 nova_compute[232433]: 2025-12-06 07:21:59.333 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:21:59 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:21:59 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:21:59 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:21:59.466 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:22:00 np0005548731 nova_compute[232433]: 2025-12-06 07:22:00.141 232437 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765005705.1388342, 84bceb35-5d7d-4199-b98e-3dc437c4a7a3 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 02:22:00 np0005548731 nova_compute[232433]: 2025-12-06 07:22:00.142 232437 INFO nova.compute.manager [-] [instance: 84bceb35-5d7d-4199-b98e-3dc437c4a7a3] VM Stopped (Lifecycle Event)#033[00m
Dec  6 02:22:00 np0005548731 nova_compute[232433]: 2025-12-06 07:22:00.165 232437 DEBUG nova.compute.manager [None req-d595c8a0-1872-4f84-8d63-83beb99402e9 - - - - - -] [instance: 84bceb35-5d7d-4199-b98e-3dc437c4a7a3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:22:00 np0005548731 nova_compute[232433]: 2025-12-06 07:22:00.214 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:22:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:22:00.863 143965 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:22:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:22:00.864 143965 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:22:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:22:00.865 143965 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:22:00 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:22:00 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:22:00 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:22:00.866 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:22:01 np0005548731 nova_compute[232433]: 2025-12-06 07:22:01.105 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:22:01 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:22:01 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:22:01 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:22:01.469 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:22:02 np0005548731 nova_compute[232433]: 2025-12-06 07:22:02.105 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:22:02 np0005548731 nova_compute[232433]: 2025-12-06 07:22:02.166 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:22:02 np0005548731 nova_compute[232433]: 2025-12-06 07:22:02.639 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:22:02 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e258 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:22:02 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:22:02 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:22:02 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:22:02.868 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:22:03 np0005548731 nova_compute[232433]: 2025-12-06 07:22:03.105 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:22:03 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:22:03 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:22:03 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:22:03.472 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:22:03 np0005548731 podman[269310]: 2025-12-06 07:22:03.906532868 +0000 UTC m=+0.060344112 container health_status ae4fd08cdec31d6ae2a6b91ffff7deb6ca5a8375cb52ee4b685cc6f25796824e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, container_name=multipathd, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Dec  6 02:22:03 np0005548731 podman[269308]: 2025-12-06 07:22:03.906574899 +0000 UTC m=+0.064423395 container health_status 26c2ce9bdb83c6db5ccad7f5b2a7cb4e9354ab0235ad2f942ea42c8feda2142b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3)
Dec  6 02:22:03 np0005548731 podman[269309]: 2025-12-06 07:22:03.961106644 +0000 UTC m=+0.118954860 container health_status a118c3becf56cfbcae208065771ebf10a65162bb7b96389971293e534823fb78 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec  6 02:22:04 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:22:04 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:22:04 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:22:04.870 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:22:05 np0005548731 nova_compute[232433]: 2025-12-06 07:22:05.251 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:22:05 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:22:05 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:22:05 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:22:05.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:22:06 np0005548731 nova_compute[232433]: 2025-12-06 07:22:06.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:22:06 np0005548731 nova_compute[232433]: 2025-12-06 07:22:06.130 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:22:06 np0005548731 nova_compute[232433]: 2025-12-06 07:22:06.131 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:22:06 np0005548731 nova_compute[232433]: 2025-12-06 07:22:06.131 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:22:06 np0005548731 nova_compute[232433]: 2025-12-06 07:22:06.131 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  6 02:22:06 np0005548731 nova_compute[232433]: 2025-12-06 07:22:06.131 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:22:06 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 02:22:06 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/948365744' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 02:22:06 np0005548731 nova_compute[232433]: 2025-12-06 07:22:06.558 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.427s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:22:06 np0005548731 nova_compute[232433]: 2025-12-06 07:22:06.637 232437 DEBUG nova.virt.libvirt.driver [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] skipping disk for instance-0000005c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  6 02:22:06 np0005548731 nova_compute[232433]: 2025-12-06 07:22:06.638 232437 DEBUG nova.virt.libvirt.driver [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] skipping disk for instance-0000005c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  6 02:22:06 np0005548731 nova_compute[232433]: 2025-12-06 07:22:06.794 232437 WARNING nova.virt.libvirt.driver [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  6 02:22:06 np0005548731 nova_compute[232433]: 2025-12-06 07:22:06.795 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4336MB free_disk=20.90306854248047GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  6 02:22:06 np0005548731 nova_compute[232433]: 2025-12-06 07:22:06.796 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:22:06 np0005548731 nova_compute[232433]: 2025-12-06 07:22:06.796 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:22:06 np0005548731 nova_compute[232433]: 2025-12-06 07:22:06.867 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Instance 6dc14838-5602-4f43-a3e3-2374b4f603eb actively managed on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128, 'DISK_GB': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  6 02:22:06 np0005548731 nova_compute[232433]: 2025-12-06 07:22:06.868 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  6 02:22:06 np0005548731 nova_compute[232433]: 2025-12-06 07:22:06.868 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  6 02:22:06 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:22:06 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:22:06 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:22:06.872 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:22:06 np0005548731 nova_compute[232433]: 2025-12-06 07:22:06.917 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:22:07 np0005548731 nova_compute[232433]: 2025-12-06 07:22:07.169 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:22:07 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 02:22:07 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/474933109' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 02:22:07 np0005548731 nova_compute[232433]: 2025-12-06 07:22:07.383 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.467s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:22:07 np0005548731 nova_compute[232433]: 2025-12-06 07:22:07.389 232437 DEBUG nova.compute.provider_tree [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Inventory has not changed in ProviderTree for provider: 6d00757a-082f-486d-ae84-869a2ba2e6e7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  6 02:22:07 np0005548731 nova_compute[232433]: 2025-12-06 07:22:07.424 232437 DEBUG nova.scheduler.client.report [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Inventory has not changed for provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  6 02:22:07 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:22:07 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:22:07 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:22:07.478 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:22:07 np0005548731 nova_compute[232433]: 2025-12-06 07:22:07.544 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  6 02:22:07 np0005548731 nova_compute[232433]: 2025-12-06 07:22:07.545 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.749s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:22:07 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e258 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:22:08 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:22:08 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:22:08 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:22:08.874 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:22:09 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:22:09 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:22:09 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:22:09.481 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:22:09 np0005548731 nova_compute[232433]: 2025-12-06 07:22:09.545 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:22:10 np0005548731 nova_compute[232433]: 2025-12-06 07:22:10.255 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:22:10 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:22:10 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:22:10 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:22:10.877 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:22:11 np0005548731 ovn_controller[133927]: 2025-12-06T07:22:11Z|00053|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:f8:41:a3 10.100.0.10
Dec  6 02:22:11 np0005548731 ovn_controller[133927]: 2025-12-06T07:22:11Z|00054|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:f8:41:a3 10.100.0.10
Dec  6 02:22:11 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:22:11 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:22:11 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:22:11.483 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:22:12 np0005548731 nova_compute[232433]: 2025-12-06 07:22:12.171 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:22:12 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e258 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:22:12 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:22:12 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:22:12 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:22:12.879 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:22:12 np0005548731 podman[269651]: 2025-12-06 07:22:12.919469591 +0000 UTC m=+0.056492866 container exec 541e7276f6038dd900483907d1613bdfcbf99ea3714cf53a920a7189986fab54 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-mon-compute-2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  6 02:22:13 np0005548731 podman[269651]: 2025-12-06 07:22:13.043960837 +0000 UTC m=+0.180984092 container exec_died 541e7276f6038dd900483907d1613bdfcbf99ea3714cf53a920a7189986fab54 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-mon-compute-2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  6 02:22:13 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:22:13 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:22:13 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:22:13.486 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:22:13 np0005548731 podman[269808]: 2025-12-06 07:22:13.605269873 +0000 UTC m=+0.050821403 container exec 42755d5ddab81b86d84b40c9464a8af637e2d196c1a098a61593efb6a9875167 (image=quay.io/ceph/haproxy:2.3, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-haproxy-rgw-default-compute-2-nyemkw)
Dec  6 02:22:13 np0005548731 podman[269808]: 2025-12-06 07:22:13.618875194 +0000 UTC m=+0.064426694 container exec_died 42755d5ddab81b86d84b40c9464a8af637e2d196c1a098a61593efb6a9875167 (image=quay.io/ceph/haproxy:2.3, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-haproxy-rgw-default-compute-2-nyemkw)
Dec  6 02:22:13 np0005548731 podman[269875]: 2025-12-06 07:22:13.815293252 +0000 UTC m=+0.048230408 container exec 60f905acca99db805341691264f7f91f9f4f43c91aa728d21396595dfca7cf39 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-keepalived-rgw-default-compute-2-cossgt, io.openshift.tags=Ceph keepalived, build-date=2023-02-22T09:23:20, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Keepalived on RHEL 9, release=1793, version=2.2.4, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, distribution-scope=public, description=keepalived for Ceph, name=keepalived, io.buildah.version=1.28.2, vcs-type=git, com.redhat.component=keepalived-container, io.openshift.expose-services=, summary=Provides keepalived on RHEL 9 for Ceph., vendor=Red Hat, Inc., architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, maintainer=Guillaume Abrioux <gabrioux@redhat.com>)
Dec  6 02:22:13 np0005548731 podman[269875]: 2025-12-06 07:22:13.824882093 +0000 UTC m=+0.057819219 container exec_died 60f905acca99db805341691264f7f91f9f4f43c91aa728d21396595dfca7cf39 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-keepalived-rgw-default-compute-2-cossgt, com.redhat.component=keepalived-container, build-date=2023-02-22T09:23:20, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, distribution-scope=public, summary=Provides keepalived on RHEL 9 for Ceph., name=keepalived, release=1793, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, vcs-type=git, io.openshift.expose-services=, vendor=Red Hat, Inc., version=2.2.4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.k8s.display-name=Keepalived on RHEL 9, io.openshift.tags=Ceph keepalived, description=keepalived for Ceph, architecture=x86_64, io.buildah.version=1.28.2)
Dec  6 02:22:14 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:22:14 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:22:14 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:22:14.881 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:22:15 np0005548731 nova_compute[232433]: 2025-12-06 07:22:15.306 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:22:15 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:22:15 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:22:15 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:22:15.489 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:22:15 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 02:22:15 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 02:22:15 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  6 02:22:16 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 02:22:16 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  6 02:22:16 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:22:16 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:22:16 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:22:16.883 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:22:17 np0005548731 nova_compute[232433]: 2025-12-06 07:22:17.173 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:22:17 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:22:17 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:22:17 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:22:17.491 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:22:17 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e258 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:22:18 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:22:18.131 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=35, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:ec:b3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '32:72:e7:89:e0:7d'}, ipsec=False) old=SB_Global(nb_cfg=34) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 02:22:18 np0005548731 nova_compute[232433]: 2025-12-06 07:22:18.131 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:22:18 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:22:18.132 143965 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec  6 02:22:18 np0005548731 ovn_controller[133927]: 2025-12-06T07:22:18Z|00352|binding|INFO|Releasing lport c094330f-7681-4d93-b475-107e1129147c from this chassis (sb_readonly=0)
Dec  6 02:22:18 np0005548731 nova_compute[232433]: 2025-12-06 07:22:18.344 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:22:18 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:22:18 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:22:18 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:22:18.885 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:22:19 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:22:19 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:22:19 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:22:19.494 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:22:20 np0005548731 nova_compute[232433]: 2025-12-06 07:22:20.311 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:22:20 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:22:20 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000025s ======
Dec  6 02:22:20 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:22:20.886 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Dec  6 02:22:21 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:22:21 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:22:21 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:22:21.497 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:22:22 np0005548731 nova_compute[232433]: 2025-12-06 07:22:22.174 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:22:22 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 02:22:22 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 02:22:22 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e258 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:22:22 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:22:22 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:22:22 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:22:22.888 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:22:23 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:22:23 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:22:23 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:22:23.500 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:22:24 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:22:24 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:22:24 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:22:24.890 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:22:25 np0005548731 nova_compute[232433]: 2025-12-06 07:22:25.002 232437 DEBUG oslo_concurrency.lockutils [None req-ca38192d-437c-4129-82f1-167156c19f49 c2404fce4d1f48779c97d3cbcb6ae594 9536676f60844b6f802518771f02409f - - default default] Acquiring lock "interface-6dc14838-5602-4f43-a3e3-2374b4f603eb-None" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:22:25 np0005548731 nova_compute[232433]: 2025-12-06 07:22:25.002 232437 DEBUG oslo_concurrency.lockutils [None req-ca38192d-437c-4129-82f1-167156c19f49 c2404fce4d1f48779c97d3cbcb6ae594 9536676f60844b6f802518771f02409f - - default default] Lock "interface-6dc14838-5602-4f43-a3e3-2374b4f603eb-None" acquired by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:22:25 np0005548731 nova_compute[232433]: 2025-12-06 07:22:25.003 232437 DEBUG nova.objects.instance [None req-ca38192d-437c-4129-82f1-167156c19f49 c2404fce4d1f48779c97d3cbcb6ae594 9536676f60844b6f802518771f02409f - - default default] Lazy-loading 'flavor' on Instance uuid 6dc14838-5602-4f43-a3e3-2374b4f603eb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:22:25 np0005548731 nova_compute[232433]: 2025-12-06 07:22:25.314 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:22:25 np0005548731 nova_compute[232433]: 2025-12-06 07:22:25.436 232437 DEBUG nova.objects.instance [None req-ca38192d-437c-4129-82f1-167156c19f49 c2404fce4d1f48779c97d3cbcb6ae594 9536676f60844b6f802518771f02409f - - default default] Lazy-loading 'pci_requests' on Instance uuid 6dc14838-5602-4f43-a3e3-2374b4f603eb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:22:25 np0005548731 nova_compute[232433]: 2025-12-06 07:22:25.453 232437 DEBUG nova.network.neutron [None req-ca38192d-437c-4129-82f1-167156c19f49 c2404fce4d1f48779c97d3cbcb6ae594 9536676f60844b6f802518771f02409f - - default default] [instance: 6dc14838-5602-4f43-a3e3-2374b4f603eb] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Dec  6 02:22:25 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:22:25 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:22:25 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:22:25.503 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:22:25 np0005548731 nova_compute[232433]: 2025-12-06 07:22:25.884 232437 DEBUG nova.policy [None req-ca38192d-437c-4129-82f1-167156c19f49 c2404fce4d1f48779c97d3cbcb6ae594 9536676f60844b6f802518771f02409f - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'c2404fce4d1f48779c97d3cbcb6ae594', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '9536676f60844b6f802518771f02409f', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Dec  6 02:22:26 np0005548731 ovn_controller[133927]: 2025-12-06T07:22:26Z|00353|binding|INFO|Releasing lport c094330f-7681-4d93-b475-107e1129147c from this chassis (sb_readonly=0)
Dec  6 02:22:26 np0005548731 nova_compute[232433]: 2025-12-06 07:22:26.678 232437 DEBUG nova.network.neutron [None req-ca38192d-437c-4129-82f1-167156c19f49 c2404fce4d1f48779c97d3cbcb6ae594 9536676f60844b6f802518771f02409f - - default default] [instance: 6dc14838-5602-4f43-a3e3-2374b4f603eb] Successfully created port: 21d2e524-a6d2-4fa8-86b4-e2787c969566 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Dec  6 02:22:26 np0005548731 nova_compute[232433]: 2025-12-06 07:22:26.728 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:22:26 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:22:26 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:22:26 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:22:26.893 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:22:27 np0005548731 nova_compute[232433]: 2025-12-06 07:22:27.177 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:22:27 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:22:27 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:22:27 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:22:27.505 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:22:27 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e258 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:22:27 np0005548731 nova_compute[232433]: 2025-12-06 07:22:27.960 232437 DEBUG nova.network.neutron [None req-ca38192d-437c-4129-82f1-167156c19f49 c2404fce4d1f48779c97d3cbcb6ae594 9536676f60844b6f802518771f02409f - - default default] [instance: 6dc14838-5602-4f43-a3e3-2374b4f603eb] Successfully updated port: 21d2e524-a6d2-4fa8-86b4-e2787c969566 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Dec  6 02:22:27 np0005548731 nova_compute[232433]: 2025-12-06 07:22:27.979 232437 DEBUG oslo_concurrency.lockutils [None req-ca38192d-437c-4129-82f1-167156c19f49 c2404fce4d1f48779c97d3cbcb6ae594 9536676f60844b6f802518771f02409f - - default default] Acquiring lock "refresh_cache-6dc14838-5602-4f43-a3e3-2374b4f603eb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 02:22:27 np0005548731 nova_compute[232433]: 2025-12-06 07:22:27.980 232437 DEBUG oslo_concurrency.lockutils [None req-ca38192d-437c-4129-82f1-167156c19f49 c2404fce4d1f48779c97d3cbcb6ae594 9536676f60844b6f802518771f02409f - - default default] Acquired lock "refresh_cache-6dc14838-5602-4f43-a3e3-2374b4f603eb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 02:22:27 np0005548731 nova_compute[232433]: 2025-12-06 07:22:27.980 232437 DEBUG nova.network.neutron [None req-ca38192d-437c-4129-82f1-167156c19f49 c2404fce4d1f48779c97d3cbcb6ae594 9536676f60844b6f802518771f02409f - - default default] [instance: 6dc14838-5602-4f43-a3e3-2374b4f603eb] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec  6 02:22:28 np0005548731 nova_compute[232433]: 2025-12-06 07:22:28.055 232437 DEBUG nova.compute.manager [req-7568d74f-4896-4d80-bbbe-aa85d2b2a645 req-67ccedfe-2e4a-4b9b-922f-382d2b846f32 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 6dc14838-5602-4f43-a3e3-2374b4f603eb] Received event network-changed-21d2e524-a6d2-4fa8-86b4-e2787c969566 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:22:28 np0005548731 nova_compute[232433]: 2025-12-06 07:22:28.055 232437 DEBUG nova.compute.manager [req-7568d74f-4896-4d80-bbbe-aa85d2b2a645 req-67ccedfe-2e4a-4b9b-922f-382d2b846f32 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 6dc14838-5602-4f43-a3e3-2374b4f603eb] Refreshing instance network info cache due to event network-changed-21d2e524-a6d2-4fa8-86b4-e2787c969566. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  6 02:22:28 np0005548731 nova_compute[232433]: 2025-12-06 07:22:28.056 232437 DEBUG oslo_concurrency.lockutils [req-7568d74f-4896-4d80-bbbe-aa85d2b2a645 req-67ccedfe-2e4a-4b9b-922f-382d2b846f32 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-6dc14838-5602-4f43-a3e3-2374b4f603eb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 02:22:28 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:22:28.133 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=9f96b960-b4f2-40bd-ae99-08121f5e8b78, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '35'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:22:28 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:22:28 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:22:28 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:22:28.895 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:22:29 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:22:29 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:22:29 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:22:29.510 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:22:30 np0005548731 nova_compute[232433]: 2025-12-06 07:22:30.317 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:22:30 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:22:30 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:22:30 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:22:30.897 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:22:31 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:22:31 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:22:31 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:22:31.514 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:22:31 np0005548731 nova_compute[232433]: 2025-12-06 07:22:31.992 232437 DEBUG nova.network.neutron [None req-ca38192d-437c-4129-82f1-167156c19f49 c2404fce4d1f48779c97d3cbcb6ae594 9536676f60844b6f802518771f02409f - - default default] [instance: 6dc14838-5602-4f43-a3e3-2374b4f603eb] Updating instance_info_cache with network_info: [{"id": "8af7a557-97f6-420e-99b0-83eced102922", "address": "fa:16:3e:f8:41:a3", "network": {"id": "8b1d33d1-6678-4c36-a5fc-301790dcb0d0", "bridge": "br-int", "label": "tempest-TaggedAttachmentsTest-2120777627-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.182", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9536676f60844b6f802518771f02409f", "mtu": null, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8af7a557-97", "ovs_interfaceid": "8af7a557-97f6-420e-99b0-83eced102922", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "21d2e524-a6d2-4fa8-86b4-e2787c969566", "address": "fa:16:3e:c9:af:49", "network": {"id": "79a28eee-40b4-43f3-98dc-313c48f19b7c", "bridge": "br-int", "label": "tempest-tagged-attachments-test-net-654887586", "subnets": [{"cidr": "10.10.10.0/24", "dns": [], "gateway": {"address": "10.10.10.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.10.10.87", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "9536676f60844b6f802518771f02409f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap21d2e524-a6", "ovs_interfaceid": "21d2e524-a6d2-4fa8-86b4-e2787c969566", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 02:22:32 np0005548731 nova_compute[232433]: 2025-12-06 07:22:32.013 232437 DEBUG oslo_concurrency.lockutils [None req-ca38192d-437c-4129-82f1-167156c19f49 c2404fce4d1f48779c97d3cbcb6ae594 9536676f60844b6f802518771f02409f - - default default] Releasing lock "refresh_cache-6dc14838-5602-4f43-a3e3-2374b4f603eb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 02:22:32 np0005548731 nova_compute[232433]: 2025-12-06 07:22:32.014 232437 DEBUG oslo_concurrency.lockutils [req-7568d74f-4896-4d80-bbbe-aa85d2b2a645 req-67ccedfe-2e4a-4b9b-922f-382d2b846f32 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-6dc14838-5602-4f43-a3e3-2374b4f603eb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 02:22:32 np0005548731 nova_compute[232433]: 2025-12-06 07:22:32.014 232437 DEBUG nova.network.neutron [req-7568d74f-4896-4d80-bbbe-aa85d2b2a645 req-67ccedfe-2e4a-4b9b-922f-382d2b846f32 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 6dc14838-5602-4f43-a3e3-2374b4f603eb] Refreshing network info cache for port 21d2e524-a6d2-4fa8-86b4-e2787c969566 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  6 02:22:32 np0005548731 nova_compute[232433]: 2025-12-06 07:22:32.017 232437 DEBUG nova.virt.libvirt.vif [None req-ca38192d-437c-4129-82f1-167156c19f49 c2404fce4d1f48779c97d3cbcb6ae594 9536676f60844b6f802518771f02409f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T07:21:38Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-device-tagging-server-1149920199',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-device-tagging-server-1149920199',id=92,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLwWYRzhpCEWsRkCaBhzHfhpPxKJasHgXvUlHzaDRDZXID15xokDKbMJM14TkjwZa8JWqEfbHVbjprM+jVuVPSPnfeGy8c3QiNdj4kuOXnNZ8rc8TAxyYOMgQOcRiCAPIw==',key_name='tempest-keypair-1877451409',keypairs=<?>,launch_index=0,launched_at=2025-12-06T07:21:48Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='9536676f60844b6f802518771f02409f',ramdisk_id='',reservation_id='r-bxlx3of0',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio'
,image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TaggedAttachmentsTest-1465336039',owner_user_name='tempest-TaggedAttachmentsTest-1465336039-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T07:21:48Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='c2404fce4d1f48779c97d3cbcb6ae594',uuid=6dc14838-5602-4f43-a3e3-2374b4f603eb,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "21d2e524-a6d2-4fa8-86b4-e2787c969566", "address": "fa:16:3e:c9:af:49", "network": {"id": "79a28eee-40b4-43f3-98dc-313c48f19b7c", "bridge": "br-int", "label": "tempest-tagged-attachments-test-net-654887586", "subnets": [{"cidr": "10.10.10.0/24", "dns": [], "gateway": {"address": "10.10.10.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.10.10.87", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9536676f60844b6f802518771f02409f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap21d2e524-a6", "ovs_interfaceid": "21d2e524-a6d2-4fa8-86b4-e2787c969566", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec  6 02:22:32 np0005548731 nova_compute[232433]: 2025-12-06 07:22:32.018 232437 DEBUG nova.network.os_vif_util [None req-ca38192d-437c-4129-82f1-167156c19f49 c2404fce4d1f48779c97d3cbcb6ae594 9536676f60844b6f802518771f02409f - - default default] Converting VIF {"id": "21d2e524-a6d2-4fa8-86b4-e2787c969566", "address": "fa:16:3e:c9:af:49", "network": {"id": "79a28eee-40b4-43f3-98dc-313c48f19b7c", "bridge": "br-int", "label": "tempest-tagged-attachments-test-net-654887586", "subnets": [{"cidr": "10.10.10.0/24", "dns": [], "gateway": {"address": "10.10.10.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.10.10.87", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9536676f60844b6f802518771f02409f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap21d2e524-a6", "ovs_interfaceid": "21d2e524-a6d2-4fa8-86b4-e2787c969566", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 02:22:32 np0005548731 nova_compute[232433]: 2025-12-06 07:22:32.019 232437 DEBUG nova.network.os_vif_util [None req-ca38192d-437c-4129-82f1-167156c19f49 c2404fce4d1f48779c97d3cbcb6ae594 9536676f60844b6f802518771f02409f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c9:af:49,bridge_name='br-int',has_traffic_filtering=True,id=21d2e524-a6d2-4fa8-86b4-e2787c969566,network=Network(79a28eee-40b4-43f3-98dc-313c48f19b7c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap21d2e524-a6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 02:22:32 np0005548731 nova_compute[232433]: 2025-12-06 07:22:32.019 232437 DEBUG os_vif [None req-ca38192d-437c-4129-82f1-167156c19f49 c2404fce4d1f48779c97d3cbcb6ae594 9536676f60844b6f802518771f02409f - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c9:af:49,bridge_name='br-int',has_traffic_filtering=True,id=21d2e524-a6d2-4fa8-86b4-e2787c969566,network=Network(79a28eee-40b4-43f3-98dc-313c48f19b7c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap21d2e524-a6') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec  6 02:22:32 np0005548731 nova_compute[232433]: 2025-12-06 07:22:32.020 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:22:32 np0005548731 nova_compute[232433]: 2025-12-06 07:22:32.020 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:22:32 np0005548731 nova_compute[232433]: 2025-12-06 07:22:32.021 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  6 02:22:32 np0005548731 nova_compute[232433]: 2025-12-06 07:22:32.025 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:22:32 np0005548731 nova_compute[232433]: 2025-12-06 07:22:32.025 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap21d2e524-a6, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:22:32 np0005548731 nova_compute[232433]: 2025-12-06 07:22:32.026 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap21d2e524-a6, col_values=(('external_ids', {'iface-id': '21d2e524-a6d2-4fa8-86b4-e2787c969566', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:c9:af:49', 'vm-uuid': '6dc14838-5602-4f43-a3e3-2374b4f603eb'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:22:32 np0005548731 nova_compute[232433]: 2025-12-06 07:22:32.063 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:22:32 np0005548731 NetworkManager[49182]: <info>  [1765005752.0646] manager: (tap21d2e524-a6): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/182)
Dec  6 02:22:32 np0005548731 nova_compute[232433]: 2025-12-06 07:22:32.065 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  6 02:22:32 np0005548731 nova_compute[232433]: 2025-12-06 07:22:32.070 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:22:32 np0005548731 nova_compute[232433]: 2025-12-06 07:22:32.071 232437 INFO os_vif [None req-ca38192d-437c-4129-82f1-167156c19f49 c2404fce4d1f48779c97d3cbcb6ae594 9536676f60844b6f802518771f02409f - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c9:af:49,bridge_name='br-int',has_traffic_filtering=True,id=21d2e524-a6d2-4fa8-86b4-e2787c969566,network=Network(79a28eee-40b4-43f3-98dc-313c48f19b7c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap21d2e524-a6')#033[00m
Dec  6 02:22:32 np0005548731 nova_compute[232433]: 2025-12-06 07:22:32.072 232437 DEBUG nova.virt.libvirt.vif [None req-ca38192d-437c-4129-82f1-167156c19f49 c2404fce4d1f48779c97d3cbcb6ae594 9536676f60844b6f802518771f02409f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T07:21:38Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-device-tagging-server-1149920199',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-device-tagging-server-1149920199',id=92,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLwWYRzhpCEWsRkCaBhzHfhpPxKJasHgXvUlHzaDRDZXID15xokDKbMJM14TkjwZa8JWqEfbHVbjprM+jVuVPSPnfeGy8c3QiNdj4kuOXnNZ8rc8TAxyYOMgQOcRiCAPIw==',key_name='tempest-keypair-1877451409',keypairs=<?>,launch_index=0,launched_at=2025-12-06T07:21:48Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='9536676f60844b6f802518771f02409f',ramdisk_id='',reservation_id='r-bxlx3of0',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio'
,image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TaggedAttachmentsTest-1465336039',owner_user_name='tempest-TaggedAttachmentsTest-1465336039-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T07:21:48Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='c2404fce4d1f48779c97d3cbcb6ae594',uuid=6dc14838-5602-4f43-a3e3-2374b4f603eb,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "21d2e524-a6d2-4fa8-86b4-e2787c969566", "address": "fa:16:3e:c9:af:49", "network": {"id": "79a28eee-40b4-43f3-98dc-313c48f19b7c", "bridge": "br-int", "label": "tempest-tagged-attachments-test-net-654887586", "subnets": [{"cidr": "10.10.10.0/24", "dns": [], "gateway": {"address": "10.10.10.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.10.10.87", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9536676f60844b6f802518771f02409f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap21d2e524-a6", "ovs_interfaceid": "21d2e524-a6d2-4fa8-86b4-e2787c969566", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec  6 02:22:32 np0005548731 nova_compute[232433]: 2025-12-06 07:22:32.072 232437 DEBUG nova.network.os_vif_util [None req-ca38192d-437c-4129-82f1-167156c19f49 c2404fce4d1f48779c97d3cbcb6ae594 9536676f60844b6f802518771f02409f - - default default] Converting VIF {"id": "21d2e524-a6d2-4fa8-86b4-e2787c969566", "address": "fa:16:3e:c9:af:49", "network": {"id": "79a28eee-40b4-43f3-98dc-313c48f19b7c", "bridge": "br-int", "label": "tempest-tagged-attachments-test-net-654887586", "subnets": [{"cidr": "10.10.10.0/24", "dns": [], "gateway": {"address": "10.10.10.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.10.10.87", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9536676f60844b6f802518771f02409f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap21d2e524-a6", "ovs_interfaceid": "21d2e524-a6d2-4fa8-86b4-e2787c969566", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 02:22:32 np0005548731 nova_compute[232433]: 2025-12-06 07:22:32.073 232437 DEBUG nova.network.os_vif_util [None req-ca38192d-437c-4129-82f1-167156c19f49 c2404fce4d1f48779c97d3cbcb6ae594 9536676f60844b6f802518771f02409f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c9:af:49,bridge_name='br-int',has_traffic_filtering=True,id=21d2e524-a6d2-4fa8-86b4-e2787c969566,network=Network(79a28eee-40b4-43f3-98dc-313c48f19b7c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap21d2e524-a6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 02:22:32 np0005548731 nova_compute[232433]: 2025-12-06 07:22:32.075 232437 DEBUG nova.virt.libvirt.guest [None req-ca38192d-437c-4129-82f1-167156c19f49 c2404fce4d1f48779c97d3cbcb6ae594 9536676f60844b6f802518771f02409f - - default default] attach device xml: <interface type="ethernet">
Dec  6 02:22:32 np0005548731 nova_compute[232433]:  <mac address="fa:16:3e:c9:af:49"/>
Dec  6 02:22:32 np0005548731 nova_compute[232433]:  <model type="virtio"/>
Dec  6 02:22:32 np0005548731 nova_compute[232433]:  <driver name="vhost" rx_queue_size="512"/>
Dec  6 02:22:32 np0005548731 nova_compute[232433]:  <mtu size="1442"/>
Dec  6 02:22:32 np0005548731 nova_compute[232433]:  <target dev="tap21d2e524-a6"/>
Dec  6 02:22:32 np0005548731 nova_compute[232433]: </interface>
Dec  6 02:22:32 np0005548731 nova_compute[232433]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339#033[00m
Dec  6 02:22:32 np0005548731 kernel: tap21d2e524-a6: entered promiscuous mode
Dec  6 02:22:32 np0005548731 NetworkManager[49182]: <info>  [1765005752.0906] manager: (tap21d2e524-a6): new Tun device (/org/freedesktop/NetworkManager/Devices/183)
Dec  6 02:22:32 np0005548731 ovn_controller[133927]: 2025-12-06T07:22:32Z|00354|binding|INFO|Claiming lport 21d2e524-a6d2-4fa8-86b4-e2787c969566 for this chassis.
Dec  6 02:22:32 np0005548731 nova_compute[232433]: 2025-12-06 07:22:32.091 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:22:32 np0005548731 ovn_controller[133927]: 2025-12-06T07:22:32Z|00355|binding|INFO|21d2e524-a6d2-4fa8-86b4-e2787c969566: Claiming fa:16:3e:c9:af:49 10.10.10.87
Dec  6 02:22:32 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:22:32.100 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c9:af:49 10.10.10.87'], port_security=['fa:16:3e:c9:af:49 10.10.10.87'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.10.10.87/24', 'neutron:device_id': '6dc14838-5602-4f43-a3e3-2374b4f603eb', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-79a28eee-40b4-43f3-98dc-313c48f19b7c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9536676f60844b6f802518771f02409f', 'neutron:revision_number': '2', 'neutron:security_group_ids': '4b77d15a-1c0a-449a-8ebe-5566a6edf729', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1ceeabcb-498a-4c7a-a4f1-5db33adf6b22, chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], logical_port=21d2e524-a6d2-4fa8-86b4-e2787c969566) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 02:22:32 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:22:32.102 143965 INFO neutron.agent.ovn.metadata.agent [-] Port 21d2e524-a6d2-4fa8-86b4-e2787c969566 in datapath 79a28eee-40b4-43f3-98dc-313c48f19b7c bound to our chassis#033[00m
Dec  6 02:22:32 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:22:32.104 143965 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 79a28eee-40b4-43f3-98dc-313c48f19b7c#033[00m
Dec  6 02:22:32 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:22:32.119 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[7d9ac843-85a6-49c5-acab-3a38fdbab3f7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:22:32 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:22:32.120 143965 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap79a28eee-41 in ovnmeta-79a28eee-40b4-43f3-98dc-313c48f19b7c namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Dec  6 02:22:32 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:22:32.122 236668 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap79a28eee-40 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Dec  6 02:22:32 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:22:32.122 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[b3b413eb-5a74-4917-b7b0-9c54ffeda1eb]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:22:32 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:22:32.123 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[48f2c100-161a-4105-8069-9f0dfab288a7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:22:32 np0005548731 systemd-udevd[270159]: Network interface NamePolicy= disabled on kernel command line.
Dec  6 02:22:32 np0005548731 NetworkManager[49182]: <info>  [1765005752.1419] device (tap21d2e524-a6): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec  6 02:22:32 np0005548731 NetworkManager[49182]: <info>  [1765005752.1432] device (tap21d2e524-a6): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec  6 02:22:32 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:22:32.144 144079 DEBUG oslo.privsep.daemon [-] privsep: reply[0369555f-1c8f-428a-a2fd-0f765dac5c8c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:22:32 np0005548731 nova_compute[232433]: 2025-12-06 07:22:32.151 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:22:32 np0005548731 ovn_controller[133927]: 2025-12-06T07:22:32Z|00356|binding|INFO|Setting lport 21d2e524-a6d2-4fa8-86b4-e2787c969566 ovn-installed in OVS
Dec  6 02:22:32 np0005548731 ovn_controller[133927]: 2025-12-06T07:22:32Z|00357|binding|INFO|Setting lport 21d2e524-a6d2-4fa8-86b4-e2787c969566 up in Southbound
Dec  6 02:22:32 np0005548731 nova_compute[232433]: 2025-12-06 07:22:32.155 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:22:32 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:22:32.162 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[27e1ba77-ac47-4ba2-b331-3f2ce4fea49f]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:22:32 np0005548731 nova_compute[232433]: 2025-12-06 07:22:32.179 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:22:32 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:22:32.196 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[e826dd18-18c6-4d81-93e0-55f5f86735f0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:22:32 np0005548731 nova_compute[232433]: 2025-12-06 07:22:32.197 232437 DEBUG nova.virt.libvirt.driver [None req-ca38192d-437c-4129-82f1-167156c19f49 c2404fce4d1f48779c97d3cbcb6ae594 9536676f60844b6f802518771f02409f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  6 02:22:32 np0005548731 nova_compute[232433]: 2025-12-06 07:22:32.197 232437 DEBUG nova.virt.libvirt.driver [None req-ca38192d-437c-4129-82f1-167156c19f49 c2404fce4d1f48779c97d3cbcb6ae594 9536676f60844b6f802518771f02409f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  6 02:22:32 np0005548731 nova_compute[232433]: 2025-12-06 07:22:32.197 232437 DEBUG nova.virt.libvirt.driver [None req-ca38192d-437c-4129-82f1-167156c19f49 c2404fce4d1f48779c97d3cbcb6ae594 9536676f60844b6f802518771f02409f - - default default] No VIF found with MAC fa:16:3e:f8:41:a3, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec  6 02:22:32 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:22:32.201 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[bc4b35e1-5027-477c-91a4-ea228e516efb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:22:32 np0005548731 systemd-udevd[270162]: Network interface NamePolicy= disabled on kernel command line.
Dec  6 02:22:32 np0005548731 NetworkManager[49182]: <info>  [1765005752.2021] manager: (tap79a28eee-40): new Veth device (/org/freedesktop/NetworkManager/Devices/184)
Dec  6 02:22:32 np0005548731 nova_compute[232433]: 2025-12-06 07:22:32.229 232437 DEBUG nova.virt.libvirt.guest [None req-ca38192d-437c-4129-82f1-167156c19f49 c2404fce4d1f48779c97d3cbcb6ae594 9536676f60844b6f802518771f02409f - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec  6 02:22:32 np0005548731 nova_compute[232433]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec  6 02:22:32 np0005548731 nova_compute[232433]:  <nova:name>tempest-device-tagging-server-1149920199</nova:name>
Dec  6 02:22:32 np0005548731 nova_compute[232433]:  <nova:creationTime>2025-12-06 07:22:32</nova:creationTime>
Dec  6 02:22:32 np0005548731 nova_compute[232433]:  <nova:flavor name="m1.nano">
Dec  6 02:22:32 np0005548731 nova_compute[232433]:    <nova:memory>128</nova:memory>
Dec  6 02:22:32 np0005548731 nova_compute[232433]:    <nova:disk>1</nova:disk>
Dec  6 02:22:32 np0005548731 nova_compute[232433]:    <nova:swap>0</nova:swap>
Dec  6 02:22:32 np0005548731 nova_compute[232433]:    <nova:ephemeral>0</nova:ephemeral>
Dec  6 02:22:32 np0005548731 nova_compute[232433]:    <nova:vcpus>1</nova:vcpus>
Dec  6 02:22:32 np0005548731 nova_compute[232433]:  </nova:flavor>
Dec  6 02:22:32 np0005548731 nova_compute[232433]:  <nova:owner>
Dec  6 02:22:32 np0005548731 nova_compute[232433]:    <nova:user uuid="c2404fce4d1f48779c97d3cbcb6ae594">tempest-TaggedAttachmentsTest-1465336039-project-member</nova:user>
Dec  6 02:22:32 np0005548731 nova_compute[232433]:    <nova:project uuid="9536676f60844b6f802518771f02409f">tempest-TaggedAttachmentsTest-1465336039</nova:project>
Dec  6 02:22:32 np0005548731 nova_compute[232433]:  </nova:owner>
Dec  6 02:22:32 np0005548731 nova_compute[232433]:  <nova:root type="image" uuid="6efab05d-c7cf-4770-a5c3-c806a2739063"/>
Dec  6 02:22:32 np0005548731 nova_compute[232433]:  <nova:ports>
Dec  6 02:22:32 np0005548731 nova_compute[232433]:    <nova:port uuid="8af7a557-97f6-420e-99b0-83eced102922">
Dec  6 02:22:32 np0005548731 nova_compute[232433]:      <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Dec  6 02:22:32 np0005548731 nova_compute[232433]:    </nova:port>
Dec  6 02:22:32 np0005548731 nova_compute[232433]:    <nova:port uuid="21d2e524-a6d2-4fa8-86b4-e2787c969566">
Dec  6 02:22:32 np0005548731 nova_compute[232433]:      <nova:ip type="fixed" address="10.10.10.87" ipVersion="4"/>
Dec  6 02:22:32 np0005548731 nova_compute[232433]:    </nova:port>
Dec  6 02:22:32 np0005548731 nova_compute[232433]:  </nova:ports>
Dec  6 02:22:32 np0005548731 nova_compute[232433]: </nova:instance>
Dec  6 02:22:32 np0005548731 nova_compute[232433]: set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359#033[00m
Dec  6 02:22:32 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:22:32.237 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[733099e7-db06-4893-b27e-40856fc6b90d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:22:32 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:22:32.239 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[95b3b98b-cfad-4a37-a80f-ad9a9bc528f0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:22:32 np0005548731 nova_compute[232433]: 2025-12-06 07:22:32.255 232437 DEBUG oslo_concurrency.lockutils [None req-ca38192d-437c-4129-82f1-167156c19f49 c2404fce4d1f48779c97d3cbcb6ae594 9536676f60844b6f802518771f02409f - - default default] Lock "interface-6dc14838-5602-4f43-a3e3-2374b4f603eb-None" "released" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: held 7.253s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:22:32 np0005548731 NetworkManager[49182]: <info>  [1765005752.2615] device (tap79a28eee-40): carrier: link connected
Dec  6 02:22:32 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:22:32.268 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[9a6b2866-2fa4-40c4-a264-1fde64ceeffa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:22:32 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:22:32.285 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[facb4c92-9e07-4441-a65b-0a258ef8a32b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap79a28eee-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:18:b9:9d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 115], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 601426, 'reachable_time': 25429, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 270186, 'error': None, 'target': 'ovnmeta-79a28eee-40b4-43f3-98dc-313c48f19b7c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:22:32 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:22:32.298 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[c9a8551a-19c9-45c0-89fe-129b0693d3e4]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe18:b99d'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 601426, 'tstamp': 601426}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 270187, 'error': None, 'target': 'ovnmeta-79a28eee-40b4-43f3-98dc-313c48f19b7c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:22:32 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:22:32.316 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[299a0c71-d604-484b-967b-26681206b3df]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap79a28eee-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:18:b9:9d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 115], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 601426, 'reachable_time': 25429, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 270188, 'error': None, 'target': 'ovnmeta-79a28eee-40b4-43f3-98dc-313c48f19b7c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:22:32 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:22:32.343 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[c4850002-f74f-4e63-bbad-e2c2af754caa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:22:32 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:22:32.402 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[1ed68829-179c-4b13-b4ff-fcca7be46013]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:22:32 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:22:32.403 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap79a28eee-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:22:32 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:22:32.404 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  6 02:22:32 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:22:32.404 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap79a28eee-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:22:32 np0005548731 nova_compute[232433]: 2025-12-06 07:22:32.406 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:22:32 np0005548731 kernel: tap79a28eee-40: entered promiscuous mode
Dec  6 02:22:32 np0005548731 NetworkManager[49182]: <info>  [1765005752.4065] manager: (tap79a28eee-40): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/185)
Dec  6 02:22:32 np0005548731 nova_compute[232433]: 2025-12-06 07:22:32.408 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:22:32 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:22:32.409 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap79a28eee-40, col_values=(('external_ids', {'iface-id': '3957b48f-f389-4611-906b-3d83c79e4cbc'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:22:32 np0005548731 ovn_controller[133927]: 2025-12-06T07:22:32Z|00358|binding|INFO|Releasing lport 3957b48f-f389-4611-906b-3d83c79e4cbc from this chassis (sb_readonly=0)
Dec  6 02:22:32 np0005548731 nova_compute[232433]: 2025-12-06 07:22:32.411 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:22:32 np0005548731 nova_compute[232433]: 2025-12-06 07:22:32.424 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:22:32 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:22:32.425 143965 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/79a28eee-40b4-43f3-98dc-313c48f19b7c.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/79a28eee-40b4-43f3-98dc-313c48f19b7c.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Dec  6 02:22:32 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:22:32.426 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[fb3e7f9f-64d3-49b2-9c56-686e6c23369b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:22:32 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:22:32.427 143965 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec  6 02:22:32 np0005548731 ovn_metadata_agent[143960]: global
Dec  6 02:22:32 np0005548731 ovn_metadata_agent[143960]:    log         /dev/log local0 debug
Dec  6 02:22:32 np0005548731 ovn_metadata_agent[143960]:    log-tag     haproxy-metadata-proxy-79a28eee-40b4-43f3-98dc-313c48f19b7c
Dec  6 02:22:32 np0005548731 ovn_metadata_agent[143960]:    user        root
Dec  6 02:22:32 np0005548731 ovn_metadata_agent[143960]:    group       root
Dec  6 02:22:32 np0005548731 ovn_metadata_agent[143960]:    maxconn     1024
Dec  6 02:22:32 np0005548731 ovn_metadata_agent[143960]:    pidfile     /var/lib/neutron/external/pids/79a28eee-40b4-43f3-98dc-313c48f19b7c.pid.haproxy
Dec  6 02:22:32 np0005548731 ovn_metadata_agent[143960]:    daemon
Dec  6 02:22:32 np0005548731 ovn_metadata_agent[143960]: 
Dec  6 02:22:32 np0005548731 ovn_metadata_agent[143960]: defaults
Dec  6 02:22:32 np0005548731 ovn_metadata_agent[143960]:    log global
Dec  6 02:22:32 np0005548731 ovn_metadata_agent[143960]:    mode http
Dec  6 02:22:32 np0005548731 ovn_metadata_agent[143960]:    option httplog
Dec  6 02:22:32 np0005548731 ovn_metadata_agent[143960]:    option dontlognull
Dec  6 02:22:32 np0005548731 ovn_metadata_agent[143960]:    option http-server-close
Dec  6 02:22:32 np0005548731 ovn_metadata_agent[143960]:    option forwardfor
Dec  6 02:22:32 np0005548731 ovn_metadata_agent[143960]:    retries                 3
Dec  6 02:22:32 np0005548731 ovn_metadata_agent[143960]:    timeout http-request    30s
Dec  6 02:22:32 np0005548731 ovn_metadata_agent[143960]:    timeout connect         30s
Dec  6 02:22:32 np0005548731 ovn_metadata_agent[143960]:    timeout client          32s
Dec  6 02:22:32 np0005548731 ovn_metadata_agent[143960]:    timeout server          32s
Dec  6 02:22:32 np0005548731 ovn_metadata_agent[143960]:    timeout http-keep-alive 30s
Dec  6 02:22:32 np0005548731 ovn_metadata_agent[143960]: 
Dec  6 02:22:32 np0005548731 ovn_metadata_agent[143960]: 
Dec  6 02:22:32 np0005548731 ovn_metadata_agent[143960]: listen listener
Dec  6 02:22:32 np0005548731 ovn_metadata_agent[143960]:    bind 169.254.169.254:80
Dec  6 02:22:32 np0005548731 ovn_metadata_agent[143960]:    server metadata /var/lib/neutron/metadata_proxy
Dec  6 02:22:32 np0005548731 ovn_metadata_agent[143960]:    http-request add-header X-OVN-Network-ID 79a28eee-40b4-43f3-98dc-313c48f19b7c
Dec  6 02:22:32 np0005548731 ovn_metadata_agent[143960]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Dec  6 02:22:32 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:22:32.428 143965 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-79a28eee-40b4-43f3-98dc-313c48f19b7c', 'env', 'PROCESS_TAG=haproxy-79a28eee-40b4-43f3-98dc-313c48f19b7c', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/79a28eee-40b4-43f3-98dc-313c48f19b7c.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Dec  6 02:22:32 np0005548731 podman[270220]: 2025-12-06 07:22:32.800218855 +0000 UTC m=+0.046470156 container create 1334926f81c8acdd02d793a2d87f71e8f4a0a9ca4bce28a3eb44ea8df058ca2b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-79a28eee-40b4-43f3-98dc-313c48f19b7c, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Dec  6 02:22:32 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e258 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:22:32 np0005548731 systemd[1]: Started libpod-conmon-1334926f81c8acdd02d793a2d87f71e8f4a0a9ca4bce28a3eb44ea8df058ca2b.scope.
Dec  6 02:22:32 np0005548731 systemd[1]: Started libcrun container.
Dec  6 02:22:32 np0005548731 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4cd7d28f7341177f8d88372a86ce9e141e11a4964632f0698fb573ad5193c3f9/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec  6 02:22:32 np0005548731 podman[270220]: 2025-12-06 07:22:32.866011022 +0000 UTC m=+0.112262343 container init 1334926f81c8acdd02d793a2d87f71e8f4a0a9ca4bce28a3eb44ea8df058ca2b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-79a28eee-40b4-43f3-98dc-313c48f19b7c, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Dec  6 02:22:32 np0005548731 podman[270220]: 2025-12-06 07:22:32.871058888 +0000 UTC m=+0.117310189 container start 1334926f81c8acdd02d793a2d87f71e8f4a0a9ca4bce28a3eb44ea8df058ca2b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-79a28eee-40b4-43f3-98dc-313c48f19b7c, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125)
Dec  6 02:22:32 np0005548731 podman[270220]: 2025-12-06 07:22:32.776058419 +0000 UTC m=+0.022309750 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec  6 02:22:32 np0005548731 neutron-haproxy-ovnmeta-79a28eee-40b4-43f3-98dc-313c48f19b7c[270235]: [NOTICE]   (270239) : New worker (270241) forked
Dec  6 02:22:32 np0005548731 neutron-haproxy-ovnmeta-79a28eee-40b4-43f3-98dc-313c48f19b7c[270235]: [NOTICE]   (270239) : Loading success.
Dec  6 02:22:32 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:22:32 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:22:32 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:22:32.899 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:22:33 np0005548731 nova_compute[232433]: 2025-12-06 07:22:33.174 232437 DEBUG nova.compute.manager [req-ee4bbdb6-0b09-4149-afc1-5d6ad5a9ea5e req-c19d8aad-f287-48df-bf6a-7f12b8d5cb66 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 6dc14838-5602-4f43-a3e3-2374b4f603eb] Received event network-vif-plugged-21d2e524-a6d2-4fa8-86b4-e2787c969566 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:22:33 np0005548731 nova_compute[232433]: 2025-12-06 07:22:33.174 232437 DEBUG oslo_concurrency.lockutils [req-ee4bbdb6-0b09-4149-afc1-5d6ad5a9ea5e req-c19d8aad-f287-48df-bf6a-7f12b8d5cb66 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "6dc14838-5602-4f43-a3e3-2374b4f603eb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:22:33 np0005548731 nova_compute[232433]: 2025-12-06 07:22:33.175 232437 DEBUG oslo_concurrency.lockutils [req-ee4bbdb6-0b09-4149-afc1-5d6ad5a9ea5e req-c19d8aad-f287-48df-bf6a-7f12b8d5cb66 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "6dc14838-5602-4f43-a3e3-2374b4f603eb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:22:33 np0005548731 nova_compute[232433]: 2025-12-06 07:22:33.175 232437 DEBUG oslo_concurrency.lockutils [req-ee4bbdb6-0b09-4149-afc1-5d6ad5a9ea5e req-c19d8aad-f287-48df-bf6a-7f12b8d5cb66 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "6dc14838-5602-4f43-a3e3-2374b4f603eb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:22:33 np0005548731 nova_compute[232433]: 2025-12-06 07:22:33.176 232437 DEBUG nova.compute.manager [req-ee4bbdb6-0b09-4149-afc1-5d6ad5a9ea5e req-c19d8aad-f287-48df-bf6a-7f12b8d5cb66 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 6dc14838-5602-4f43-a3e3-2374b4f603eb] No waiting events found dispatching network-vif-plugged-21d2e524-a6d2-4fa8-86b4-e2787c969566 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 02:22:33 np0005548731 nova_compute[232433]: 2025-12-06 07:22:33.176 232437 WARNING nova.compute.manager [req-ee4bbdb6-0b09-4149-afc1-5d6ad5a9ea5e req-c19d8aad-f287-48df-bf6a-7f12b8d5cb66 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 6dc14838-5602-4f43-a3e3-2374b4f603eb] Received unexpected event network-vif-plugged-21d2e524-a6d2-4fa8-86b4-e2787c969566 for instance with vm_state active and task_state None.#033[00m
Dec  6 02:22:33 np0005548731 nova_compute[232433]: 2025-12-06 07:22:33.303 232437 DEBUG oslo_concurrency.lockutils [None req-cebb2ef8-5d0d-4c2b-86bb-e64a0ef89010 c2404fce4d1f48779c97d3cbcb6ae594 9536676f60844b6f802518771f02409f - - default default] Acquiring lock "6dc14838-5602-4f43-a3e3-2374b4f603eb" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:22:33 np0005548731 nova_compute[232433]: 2025-12-06 07:22:33.304 232437 DEBUG oslo_concurrency.lockutils [None req-cebb2ef8-5d0d-4c2b-86bb-e64a0ef89010 c2404fce4d1f48779c97d3cbcb6ae594 9536676f60844b6f802518771f02409f - - default default] Lock "6dc14838-5602-4f43-a3e3-2374b4f603eb" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:22:33 np0005548731 nova_compute[232433]: 2025-12-06 07:22:33.319 232437 DEBUG nova.objects.instance [None req-cebb2ef8-5d0d-4c2b-86bb-e64a0ef89010 c2404fce4d1f48779c97d3cbcb6ae594 9536676f60844b6f802518771f02409f - - default default] Lazy-loading 'flavor' on Instance uuid 6dc14838-5602-4f43-a3e3-2374b4f603eb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:22:33 np0005548731 nova_compute[232433]: 2025-12-06 07:22:33.369 232437 DEBUG oslo_concurrency.lockutils [None req-cebb2ef8-5d0d-4c2b-86bb-e64a0ef89010 c2404fce4d1f48779c97d3cbcb6ae594 9536676f60844b6f802518771f02409f - - default default] Lock "6dc14838-5602-4f43-a3e3-2374b4f603eb" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.065s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:22:33 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:22:33 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000025s ======
Dec  6 02:22:33 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:22:33.516 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Dec  6 02:22:33 np0005548731 nova_compute[232433]: 2025-12-06 07:22:33.610 232437 DEBUG oslo_concurrency.lockutils [None req-cebb2ef8-5d0d-4c2b-86bb-e64a0ef89010 c2404fce4d1f48779c97d3cbcb6ae594 9536676f60844b6f802518771f02409f - - default default] Acquiring lock "6dc14838-5602-4f43-a3e3-2374b4f603eb" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:22:33 np0005548731 nova_compute[232433]: 2025-12-06 07:22:33.611 232437 DEBUG oslo_concurrency.lockutils [None req-cebb2ef8-5d0d-4c2b-86bb-e64a0ef89010 c2404fce4d1f48779c97d3cbcb6ae594 9536676f60844b6f802518771f02409f - - default default] Lock "6dc14838-5602-4f43-a3e3-2374b4f603eb" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:22:33 np0005548731 nova_compute[232433]: 2025-12-06 07:22:33.611 232437 INFO nova.compute.manager [None req-cebb2ef8-5d0d-4c2b-86bb-e64a0ef89010 c2404fce4d1f48779c97d3cbcb6ae594 9536676f60844b6f802518771f02409f - - default default] [instance: 6dc14838-5602-4f43-a3e3-2374b4f603eb] Attaching volume b28b2367-60a4-4325-8a01-53a6de96135e to /dev/vdb#033[00m
Dec  6 02:22:33 np0005548731 ovn_controller[133927]: 2025-12-06T07:22:33Z|00055|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:c9:af:49 10.10.10.87
Dec  6 02:22:33 np0005548731 ovn_controller[133927]: 2025-12-06T07:22:33Z|00056|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:c9:af:49 10.10.10.87
Dec  6 02:22:33 np0005548731 nova_compute[232433]: 2025-12-06 07:22:33.802 232437 DEBUG os_brick.utils [None req-cebb2ef8-5d0d-4c2b-86bb-e64a0ef89010 c2404fce4d1f48779c97d3cbcb6ae594 9536676f60844b6f802518771f02409f - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.102', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-2.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Dec  6 02:22:33 np0005548731 nova_compute[232433]: 2025-12-06 07:22:33.805 237736 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:22:33 np0005548731 nova_compute[232433]: 2025-12-06 07:22:33.815 237736 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.010s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:22:33 np0005548731 nova_compute[232433]: 2025-12-06 07:22:33.815 237736 DEBUG oslo.privsep.daemon [-] privsep: reply[496ed656-118d-4126-a75f-e8ff4084189f]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:22:33 np0005548731 nova_compute[232433]: 2025-12-06 07:22:33.816 237736 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:22:33 np0005548731 nova_compute[232433]: 2025-12-06 07:22:33.824 237736 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:22:33 np0005548731 nova_compute[232433]: 2025-12-06 07:22:33.824 237736 DEBUG oslo.privsep.daemon [-] privsep: reply[511963c2-8e76-486d-90dd-c68bf8a153a9]: (4, ('InitiatorName=iqn.1994-05.com.redhat:63778d5959f0', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:22:33 np0005548731 nova_compute[232433]: 2025-12-06 07:22:33.826 237736 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:22:33 np0005548731 nova_compute[232433]: 2025-12-06 07:22:33.834 237736 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:22:33 np0005548731 nova_compute[232433]: 2025-12-06 07:22:33.834 237736 DEBUG oslo.privsep.daemon [-] privsep: reply[6fe5bee0-d6e0-42e8-acb9-4d1d247492d4]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:22:33 np0005548731 nova_compute[232433]: 2025-12-06 07:22:33.835 237736 DEBUG oslo.privsep.daemon [-] privsep: reply[8a74bb16-ddaa-4654-bd71-93a2e74480d5]: (4, 'ec2492e2-e0d1-43d3-b74f-744a8272a0e6') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:22:33 np0005548731 nova_compute[232433]: 2025-12-06 07:22:33.836 232437 DEBUG oslo_concurrency.processutils [None req-cebb2ef8-5d0d-4c2b-86bb-e64a0ef89010 c2404fce4d1f48779c97d3cbcb6ae594 9536676f60844b6f802518771f02409f - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:22:33 np0005548731 nova_compute[232433]: 2025-12-06 07:22:33.867 232437 DEBUG oslo_concurrency.processutils [None req-cebb2ef8-5d0d-4c2b-86bb-e64a0ef89010 c2404fce4d1f48779c97d3cbcb6ae594 9536676f60844b6f802518771f02409f - - default default] CMD "nvme version" returned: 0 in 0.031s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:22:33 np0005548731 nova_compute[232433]: 2025-12-06 07:22:33.868 232437 DEBUG os_brick.initiator.connectors.lightos [None req-cebb2ef8-5d0d-4c2b-86bb-e64a0ef89010 c2404fce4d1f48779c97d3cbcb6ae594 9536676f60844b6f802518771f02409f - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Dec  6 02:22:33 np0005548731 nova_compute[232433]: 2025-12-06 07:22:33.869 232437 DEBUG os_brick.initiator.connectors.lightos [None req-cebb2ef8-5d0d-4c2b-86bb-e64a0ef89010 c2404fce4d1f48779c97d3cbcb6ae594 9536676f60844b6f802518771f02409f - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Dec  6 02:22:33 np0005548731 nova_compute[232433]: 2025-12-06 07:22:33.869 232437 DEBUG os_brick.initiator.connectors.lightos [None req-cebb2ef8-5d0d-4c2b-86bb-e64a0ef89010 c2404fce4d1f48779c97d3cbcb6ae594 9536676f60844b6f802518771f02409f - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:bf3e0a14-a5f8-4123-aa26-e7cad37b879a dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Dec  6 02:22:33 np0005548731 nova_compute[232433]: 2025-12-06 07:22:33.869 232437 DEBUG os_brick.utils [None req-cebb2ef8-5d0d-4c2b-86bb-e64a0ef89010 c2404fce4d1f48779c97d3cbcb6ae594 9536676f60844b6f802518771f02409f - - default default] <== get_connector_properties: return (65ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.102', 'host': 'compute-2.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:63778d5959f0', 'do_local_attach': False, 'nvme_hostid': 'bf3e0a14-a5f8-4123-aa26-e7cad37b879a', 'system uuid': 'ec2492e2-e0d1-43d3-b74f-744a8272a0e6', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:bf3e0a14-a5f8-4123-aa26-e7cad37b879a', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Dec  6 02:22:33 np0005548731 nova_compute[232433]: 2025-12-06 07:22:33.869 232437 DEBUG nova.virt.block_device [None req-cebb2ef8-5d0d-4c2b-86bb-e64a0ef89010 c2404fce4d1f48779c97d3cbcb6ae594 9536676f60844b6f802518771f02409f - - default default] [instance: 6dc14838-5602-4f43-a3e3-2374b4f603eb] Updating existing volume attachment record: 9f2222d7-b216-413d-a5e8-1d5fe76054d6 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Dec  6 02:22:34 np0005548731 nova_compute[232433]: 2025-12-06 07:22:34.212 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:22:34 np0005548731 nova_compute[232433]: 2025-12-06 07:22:34.710 232437 DEBUG nova.network.neutron [req-7568d74f-4896-4d80-bbbe-aa85d2b2a645 req-67ccedfe-2e4a-4b9b-922f-382d2b846f32 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 6dc14838-5602-4f43-a3e3-2374b4f603eb] Updated VIF entry in instance network info cache for port 21d2e524-a6d2-4fa8-86b4-e2787c969566. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  6 02:22:34 np0005548731 nova_compute[232433]: 2025-12-06 07:22:34.710 232437 DEBUG nova.network.neutron [req-7568d74f-4896-4d80-bbbe-aa85d2b2a645 req-67ccedfe-2e4a-4b9b-922f-382d2b846f32 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 6dc14838-5602-4f43-a3e3-2374b4f603eb] Updating instance_info_cache with network_info: [{"id": "8af7a557-97f6-420e-99b0-83eced102922", "address": "fa:16:3e:f8:41:a3", "network": {"id": "8b1d33d1-6678-4c36-a5fc-301790dcb0d0", "bridge": "br-int", "label": "tempest-TaggedAttachmentsTest-2120777627-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.182", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9536676f60844b6f802518771f02409f", "mtu": null, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8af7a557-97", "ovs_interfaceid": "8af7a557-97f6-420e-99b0-83eced102922", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "21d2e524-a6d2-4fa8-86b4-e2787c969566", "address": "fa:16:3e:c9:af:49", "network": {"id": "79a28eee-40b4-43f3-98dc-313c48f19b7c", "bridge": "br-int", "label": "tempest-tagged-attachments-test-net-654887586", "subnets": [{"cidr": "10.10.10.0/24", "dns": [], "gateway": {"address": "10.10.10.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.10.10.87", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9536676f60844b6f802518771f02409f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap21d2e524-a6", "ovs_interfaceid": "21d2e524-a6d2-4fa8-86b4-e2787c969566", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 02:22:34 np0005548731 nova_compute[232433]: 2025-12-06 07:22:34.817 232437 DEBUG oslo_concurrency.lockutils [req-7568d74f-4896-4d80-bbbe-aa85d2b2a645 req-67ccedfe-2e4a-4b9b-922f-382d2b846f32 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-6dc14838-5602-4f43-a3e3-2374b4f603eb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 02:22:34 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:22:34 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:22:34 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:22:34.901 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:22:34 np0005548731 podman[270260]: 2025-12-06 07:22:34.911371089 +0000 UTC m=+0.062330661 container health_status ae4fd08cdec31d6ae2a6b91ffff7deb6ca5a8375cb52ee4b685cc6f25796824e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Dec  6 02:22:34 np0005548731 podman[270258]: 2025-12-06 07:22:34.919836352 +0000 UTC m=+0.070553628 container health_status 26c2ce9bdb83c6db5ccad7f5b2a7cb4e9354ab0235ad2f942ea42c8feda2142b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Dec  6 02:22:34 np0005548731 podman[270259]: 2025-12-06 07:22:34.93532054 +0000 UTC m=+0.089390290 container health_status a118c3becf56cfbcae208065771ebf10a65162bb7b96389971293e534823fb78 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Dec  6 02:22:35 np0005548731 nova_compute[232433]: 2025-12-06 07:22:35.499 232437 DEBUG nova.compute.manager [req-ff3b3c3e-d3b6-4fda-96c7-64c01fc6e301 req-3e0d45fc-81fa-4f81-bafc-0b377ae2a154 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 6dc14838-5602-4f43-a3e3-2374b4f603eb] Received event network-vif-plugged-21d2e524-a6d2-4fa8-86b4-e2787c969566 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:22:35 np0005548731 nova_compute[232433]: 2025-12-06 07:22:35.499 232437 DEBUG oslo_concurrency.lockutils [req-ff3b3c3e-d3b6-4fda-96c7-64c01fc6e301 req-3e0d45fc-81fa-4f81-bafc-0b377ae2a154 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "6dc14838-5602-4f43-a3e3-2374b4f603eb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:22:35 np0005548731 nova_compute[232433]: 2025-12-06 07:22:35.499 232437 DEBUG oslo_concurrency.lockutils [req-ff3b3c3e-d3b6-4fda-96c7-64c01fc6e301 req-3e0d45fc-81fa-4f81-bafc-0b377ae2a154 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "6dc14838-5602-4f43-a3e3-2374b4f603eb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:22:35 np0005548731 nova_compute[232433]: 2025-12-06 07:22:35.500 232437 DEBUG oslo_concurrency.lockutils [req-ff3b3c3e-d3b6-4fda-96c7-64c01fc6e301 req-3e0d45fc-81fa-4f81-bafc-0b377ae2a154 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "6dc14838-5602-4f43-a3e3-2374b4f603eb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:22:35 np0005548731 nova_compute[232433]: 2025-12-06 07:22:35.500 232437 DEBUG nova.compute.manager [req-ff3b3c3e-d3b6-4fda-96c7-64c01fc6e301 req-3e0d45fc-81fa-4f81-bafc-0b377ae2a154 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 6dc14838-5602-4f43-a3e3-2374b4f603eb] No waiting events found dispatching network-vif-plugged-21d2e524-a6d2-4fa8-86b4-e2787c969566 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 02:22:35 np0005548731 nova_compute[232433]: 2025-12-06 07:22:35.500 232437 WARNING nova.compute.manager [req-ff3b3c3e-d3b6-4fda-96c7-64c01fc6e301 req-3e0d45fc-81fa-4f81-bafc-0b377ae2a154 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 6dc14838-5602-4f43-a3e3-2374b4f603eb] Received unexpected event network-vif-plugged-21d2e524-a6d2-4fa8-86b4-e2787c969566 for instance with vm_state active and task_state None.#033[00m
Dec  6 02:22:35 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:22:35 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:22:35 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:22:35.519 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:22:36 np0005548731 nova_compute[232433]: 2025-12-06 07:22:36.111 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:22:36 np0005548731 ceph-mds[82074]: mds.beacon.cephfs.compute-2.tjfgow missed beacon ack from the monitors
Dec  6 02:22:36 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:22:36 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:22:36 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:22:36.904 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:22:36 np0005548731 nova_compute[232433]: 2025-12-06 07:22:36.997 232437 DEBUG nova.objects.instance [None req-cebb2ef8-5d0d-4c2b-86bb-e64a0ef89010 c2404fce4d1f48779c97d3cbcb6ae594 9536676f60844b6f802518771f02409f - - default default] Lazy-loading 'flavor' on Instance uuid 6dc14838-5602-4f43-a3e3-2374b4f603eb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:22:37 np0005548731 nova_compute[232433]: 2025-12-06 07:22:37.020 232437 DEBUG nova.virt.libvirt.driver [None req-cebb2ef8-5d0d-4c2b-86bb-e64a0ef89010 c2404fce4d1f48779c97d3cbcb6ae594 9536676f60844b6f802518771f02409f - - default default] [instance: 6dc14838-5602-4f43-a3e3-2374b4f603eb] Attempting to attach volume b28b2367-60a4-4325-8a01-53a6de96135e with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168#033[00m
Dec  6 02:22:37 np0005548731 nova_compute[232433]: 2025-12-06 07:22:37.025 232437 DEBUG nova.virt.libvirt.guest [None req-cebb2ef8-5d0d-4c2b-86bb-e64a0ef89010 c2404fce4d1f48779c97d3cbcb6ae594 9536676f60844b6f802518771f02409f - - default default] attach device xml: <disk type="network" device="disk">
Dec  6 02:22:37 np0005548731 nova_compute[232433]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Dec  6 02:22:37 np0005548731 nova_compute[232433]:  <source protocol="rbd" name="volumes/volume-b28b2367-60a4-4325-8a01-53a6de96135e">
Dec  6 02:22:37 np0005548731 nova_compute[232433]:    <host name="192.168.122.100" port="6789"/>
Dec  6 02:22:37 np0005548731 nova_compute[232433]:    <host name="192.168.122.102" port="6789"/>
Dec  6 02:22:37 np0005548731 nova_compute[232433]:    <host name="192.168.122.101" port="6789"/>
Dec  6 02:22:37 np0005548731 nova_compute[232433]:  </source>
Dec  6 02:22:37 np0005548731 nova_compute[232433]:  <auth username="openstack">
Dec  6 02:22:37 np0005548731 nova_compute[232433]:    <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec  6 02:22:37 np0005548731 nova_compute[232433]:  </auth>
Dec  6 02:22:37 np0005548731 nova_compute[232433]:  <target dev="vdb" bus="virtio"/>
Dec  6 02:22:37 np0005548731 nova_compute[232433]:  <serial>b28b2367-60a4-4325-8a01-53a6de96135e</serial>
Dec  6 02:22:37 np0005548731 nova_compute[232433]: </disk>
Dec  6 02:22:37 np0005548731 nova_compute[232433]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339#033[00m
Dec  6 02:22:37 np0005548731 nova_compute[232433]: 2025-12-06 07:22:37.064 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:22:37 np0005548731 nova_compute[232433]: 2025-12-06 07:22:37.171 232437 DEBUG nova.virt.libvirt.driver [None req-cebb2ef8-5d0d-4c2b-86bb-e64a0ef89010 c2404fce4d1f48779c97d3cbcb6ae594 9536676f60844b6f802518771f02409f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  6 02:22:37 np0005548731 nova_compute[232433]: 2025-12-06 07:22:37.172 232437 DEBUG nova.virt.libvirt.driver [None req-cebb2ef8-5d0d-4c2b-86bb-e64a0ef89010 c2404fce4d1f48779c97d3cbcb6ae594 9536676f60844b6f802518771f02409f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  6 02:22:37 np0005548731 nova_compute[232433]: 2025-12-06 07:22:37.172 232437 DEBUG nova.virt.libvirt.driver [None req-cebb2ef8-5d0d-4c2b-86bb-e64a0ef89010 c2404fce4d1f48779c97d3cbcb6ae594 9536676f60844b6f802518771f02409f - - default default] No VIF found with MAC fa:16:3e:f8:41:a3, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec  6 02:22:37 np0005548731 nova_compute[232433]: 2025-12-06 07:22:37.182 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:22:37 np0005548731 nova_compute[232433]: 2025-12-06 07:22:37.453 232437 DEBUG oslo_concurrency.lockutils [None req-cebb2ef8-5d0d-4c2b-86bb-e64a0ef89010 c2404fce4d1f48779c97d3cbcb6ae594 9536676f60844b6f802518771f02409f - - default default] Lock "6dc14838-5602-4f43-a3e3-2374b4f603eb" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 3.842s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:22:37 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:22:37 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:22:37 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:22:37.524 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:22:37 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e258 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:22:38 np0005548731 nova_compute[232433]: 2025-12-06 07:22:38.351 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:22:38 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:22:38.792 144074 DEBUG eventlet.wsgi.server [-] (144074) accepted '' server /usr/lib/python3.9/site-packages/eventlet/wsgi.py:1004#033[00m
Dec  6 02:22:38 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:22:38.793 144074 DEBUG neutron.agent.ovn.metadata.server [-] Request: GET /openstack/latest/meta_data.json HTTP/1.0#015
Dec  6 02:22:38 np0005548731 ovn_metadata_agent[143960]: Accept: */*#015
Dec  6 02:22:38 np0005548731 ovn_metadata_agent[143960]: Connection: close#015
Dec  6 02:22:38 np0005548731 ovn_metadata_agent[143960]: Content-Type: text/plain#015
Dec  6 02:22:38 np0005548731 ovn_metadata_agent[143960]: Host: 169.254.169.254#015
Dec  6 02:22:38 np0005548731 ovn_metadata_agent[143960]: User-Agent: curl/7.84.0#015
Dec  6 02:22:38 np0005548731 ovn_metadata_agent[143960]: X-Forwarded-For: 10.100.0.10#015
Dec  6 02:22:38 np0005548731 ovn_metadata_agent[143960]: X-Ovn-Network-Id: 8b1d33d1-6678-4c36-a5fc-301790dcb0d0 __call__ /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:82#033[00m
Dec  6 02:22:38 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:22:38 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:22:38 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:22:38.906 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:22:39 np0005548731 nova_compute[232433]: 2025-12-06 07:22:39.267 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:22:39 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:22:39.289 144074 DEBUG neutron.agent.ovn.metadata.server [-] <Response [200]> _proxy_request /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:161#033[00m
Dec  6 02:22:39 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:22:39.290 144074 INFO eventlet.wsgi.server [-] 10.100.0.10,<local> "GET /openstack/latest/meta_data.json HTTP/1.1" status: 200  len: 1918 time: 0.4970174#033[00m
Dec  6 02:22:39 np0005548731 haproxy-metadata-proxy-8b1d33d1-6678-4c36-a5fc-301790dcb0d0[269242]: 10.100.0.10:55602 [06/Dec/2025:07:22:38.791] listener listener/metadata 0/0/0/498/498 200 1902 - - ---- 1/1/0/0/0 0/0 "GET /openstack/latest/meta_data.json HTTP/1.1"
Dec  6 02:22:39 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:22:39 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:22:39 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:22:39.527 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:22:39 np0005548731 nova_compute[232433]: 2025-12-06 07:22:39.725 232437 DEBUG oslo_concurrency.lockutils [None req-ce5e0228-d861-45dc-aacf-d885457adc63 c2404fce4d1f48779c97d3cbcb6ae594 9536676f60844b6f802518771f02409f - - default default] Acquiring lock "6dc14838-5602-4f43-a3e3-2374b4f603eb" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:22:39 np0005548731 nova_compute[232433]: 2025-12-06 07:22:39.725 232437 DEBUG oslo_concurrency.lockutils [None req-ce5e0228-d861-45dc-aacf-d885457adc63 c2404fce4d1f48779c97d3cbcb6ae594 9536676f60844b6f802518771f02409f - - default default] Lock "6dc14838-5602-4f43-a3e3-2374b4f603eb" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:22:39 np0005548731 nova_compute[232433]: 2025-12-06 07:22:39.760 232437 INFO nova.compute.manager [None req-ce5e0228-d861-45dc-aacf-d885457adc63 c2404fce4d1f48779c97d3cbcb6ae594 9536676f60844b6f802518771f02409f - - default default] [instance: 6dc14838-5602-4f43-a3e3-2374b4f603eb] Detaching volume b28b2367-60a4-4325-8a01-53a6de96135e#033[00m
Dec  6 02:22:39 np0005548731 nova_compute[232433]: 2025-12-06 07:22:39.908 232437 INFO nova.virt.block_device [None req-ce5e0228-d861-45dc-aacf-d885457adc63 c2404fce4d1f48779c97d3cbcb6ae594 9536676f60844b6f802518771f02409f - - default default] [instance: 6dc14838-5602-4f43-a3e3-2374b4f603eb] Attempting to driver detach volume b28b2367-60a4-4325-8a01-53a6de96135e from mountpoint /dev/vdb#033[00m
Dec  6 02:22:39 np0005548731 nova_compute[232433]: 2025-12-06 07:22:39.915 232437 DEBUG nova.virt.libvirt.driver [None req-ce5e0228-d861-45dc-aacf-d885457adc63 c2404fce4d1f48779c97d3cbcb6ae594 9536676f60844b6f802518771f02409f - - default default] Attempting to detach device vdb from instance 6dc14838-5602-4f43-a3e3-2374b4f603eb from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487#033[00m
Dec  6 02:22:39 np0005548731 nova_compute[232433]: 2025-12-06 07:22:39.915 232437 DEBUG nova.virt.libvirt.guest [None req-ce5e0228-d861-45dc-aacf-d885457adc63 c2404fce4d1f48779c97d3cbcb6ae594 9536676f60844b6f802518771f02409f - - default default] detach device xml: <disk type="network" device="disk">
Dec  6 02:22:39 np0005548731 nova_compute[232433]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Dec  6 02:22:39 np0005548731 nova_compute[232433]:  <source protocol="rbd" name="volumes/volume-b28b2367-60a4-4325-8a01-53a6de96135e">
Dec  6 02:22:39 np0005548731 nova_compute[232433]:    <host name="192.168.122.100" port="6789"/>
Dec  6 02:22:39 np0005548731 nova_compute[232433]:    <host name="192.168.122.102" port="6789"/>
Dec  6 02:22:39 np0005548731 nova_compute[232433]:    <host name="192.168.122.101" port="6789"/>
Dec  6 02:22:39 np0005548731 nova_compute[232433]:  </source>
Dec  6 02:22:39 np0005548731 nova_compute[232433]:  <target dev="vdb" bus="virtio"/>
Dec  6 02:22:39 np0005548731 nova_compute[232433]:  <serial>b28b2367-60a4-4325-8a01-53a6de96135e</serial>
Dec  6 02:22:39 np0005548731 nova_compute[232433]:  <address type="pci" domain="0x0000" bus="0x07" slot="0x00" function="0x0"/>
Dec  6 02:22:39 np0005548731 nova_compute[232433]: </disk>
Dec  6 02:22:39 np0005548731 nova_compute[232433]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Dec  6 02:22:39 np0005548731 nova_compute[232433]: 2025-12-06 07:22:39.922 232437 INFO nova.virt.libvirt.driver [None req-ce5e0228-d861-45dc-aacf-d885457adc63 c2404fce4d1f48779c97d3cbcb6ae594 9536676f60844b6f802518771f02409f - - default default] Successfully detached device vdb from instance 6dc14838-5602-4f43-a3e3-2374b4f603eb from the persistent domain config.#033[00m
Dec  6 02:22:39 np0005548731 nova_compute[232433]: 2025-12-06 07:22:39.922 232437 DEBUG nova.virt.libvirt.driver [None req-ce5e0228-d861-45dc-aacf-d885457adc63 c2404fce4d1f48779c97d3cbcb6ae594 9536676f60844b6f802518771f02409f - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance 6dc14838-5602-4f43-a3e3-2374b4f603eb from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523#033[00m
Dec  6 02:22:39 np0005548731 nova_compute[232433]: 2025-12-06 07:22:39.923 232437 DEBUG nova.virt.libvirt.guest [None req-ce5e0228-d861-45dc-aacf-d885457adc63 c2404fce4d1f48779c97d3cbcb6ae594 9536676f60844b6f802518771f02409f - - default default] detach device xml: <disk type="network" device="disk">
Dec  6 02:22:39 np0005548731 nova_compute[232433]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Dec  6 02:22:39 np0005548731 nova_compute[232433]:  <source protocol="rbd" name="volumes/volume-b28b2367-60a4-4325-8a01-53a6de96135e">
Dec  6 02:22:39 np0005548731 nova_compute[232433]:    <host name="192.168.122.100" port="6789"/>
Dec  6 02:22:39 np0005548731 nova_compute[232433]:    <host name="192.168.122.102" port="6789"/>
Dec  6 02:22:39 np0005548731 nova_compute[232433]:    <host name="192.168.122.101" port="6789"/>
Dec  6 02:22:39 np0005548731 nova_compute[232433]:  </source>
Dec  6 02:22:39 np0005548731 nova_compute[232433]:  <target dev="vdb" bus="virtio"/>
Dec  6 02:22:39 np0005548731 nova_compute[232433]:  <serial>b28b2367-60a4-4325-8a01-53a6de96135e</serial>
Dec  6 02:22:39 np0005548731 nova_compute[232433]:  <address type="pci" domain="0x0000" bus="0x07" slot="0x00" function="0x0"/>
Dec  6 02:22:39 np0005548731 nova_compute[232433]: </disk>
Dec  6 02:22:39 np0005548731 nova_compute[232433]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Dec  6 02:22:39 np0005548731 nova_compute[232433]: 2025-12-06 07:22:39.972 232437 DEBUG nova.virt.libvirt.driver [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Received event <DeviceRemovedEvent: 1765005759.9723248, 6dc14838-5602-4f43-a3e3-2374b4f603eb => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370#033[00m
Dec  6 02:22:39 np0005548731 nova_compute[232433]: 2025-12-06 07:22:39.973 232437 DEBUG nova.virt.libvirt.driver [None req-ce5e0228-d861-45dc-aacf-d885457adc63 c2404fce4d1f48779c97d3cbcb6ae594 9536676f60844b6f802518771f02409f - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance 6dc14838-5602-4f43-a3e3-2374b4f603eb _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599#033[00m
Dec  6 02:22:39 np0005548731 nova_compute[232433]: 2025-12-06 07:22:39.975 232437 INFO nova.virt.libvirt.driver [None req-ce5e0228-d861-45dc-aacf-d885457adc63 c2404fce4d1f48779c97d3cbcb6ae594 9536676f60844b6f802518771f02409f - - default default] Successfully detached device vdb from instance 6dc14838-5602-4f43-a3e3-2374b4f603eb from the live domain config.#033[00m
Dec  6 02:22:40 np0005548731 nova_compute[232433]: 2025-12-06 07:22:40.168 232437 DEBUG nova.objects.instance [None req-ce5e0228-d861-45dc-aacf-d885457adc63 c2404fce4d1f48779c97d3cbcb6ae594 9536676f60844b6f802518771f02409f - - default default] Lazy-loading 'flavor' on Instance uuid 6dc14838-5602-4f43-a3e3-2374b4f603eb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:22:40 np0005548731 nova_compute[232433]: 2025-12-06 07:22:40.236 232437 DEBUG oslo_concurrency.lockutils [None req-ce5e0228-d861-45dc-aacf-d885457adc63 c2404fce4d1f48779c97d3cbcb6ae594 9536676f60844b6f802518771f02409f - - default default] Lock "6dc14838-5602-4f43-a3e3-2374b4f603eb" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 0.511s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:22:40 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:22:40 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:22:40 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:22:40.908 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:22:41 np0005548731 nova_compute[232433]: 2025-12-06 07:22:41.057 232437 DEBUG oslo_concurrency.lockutils [None req-4c3a8c32-3d39-45ec-8642-2844328997b4 c2404fce4d1f48779c97d3cbcb6ae594 9536676f60844b6f802518771f02409f - - default default] Acquiring lock "interface-6dc14838-5602-4f43-a3e3-2374b4f603eb-21d2e524-a6d2-4fa8-86b4-e2787c969566" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:22:41 np0005548731 nova_compute[232433]: 2025-12-06 07:22:41.057 232437 DEBUG oslo_concurrency.lockutils [None req-4c3a8c32-3d39-45ec-8642-2844328997b4 c2404fce4d1f48779c97d3cbcb6ae594 9536676f60844b6f802518771f02409f - - default default] Lock "interface-6dc14838-5602-4f43-a3e3-2374b4f603eb-21d2e524-a6d2-4fa8-86b4-e2787c969566" acquired by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:22:41 np0005548731 nova_compute[232433]: 2025-12-06 07:22:41.074 232437 DEBUG nova.objects.instance [None req-4c3a8c32-3d39-45ec-8642-2844328997b4 c2404fce4d1f48779c97d3cbcb6ae594 9536676f60844b6f802518771f02409f - - default default] Lazy-loading 'flavor' on Instance uuid 6dc14838-5602-4f43-a3e3-2374b4f603eb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:22:41 np0005548731 nova_compute[232433]: 2025-12-06 07:22:41.099 232437 DEBUG nova.virt.libvirt.vif [None req-4c3a8c32-3d39-45ec-8642-2844328997b4 c2404fce4d1f48779c97d3cbcb6ae594 9536676f60844b6f802518771f02409f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T07:21:38Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=InstanceDeviceMetadata,disable_terminate=False,display_description=None,display_name='tempest-device-tagging-server-1149920199',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-device-tagging-server-1149920199',id=92,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLwWYRzhpCEWsRkCaBhzHfhpPxKJasHgXvUlHzaDRDZXID15xokDKbMJM14TkjwZa8JWqEfbHVbjprM+jVuVPSPnfeGy8c3QiNdj4kuOXnNZ8rc8TAxyYOMgQOcRiCAPIw==',key_name='tempest-keypair-1877451409',keypairs=<?>,launch_index=0,launched_at=2025-12-06T07:21:48Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='9536676f60844b6f802518771f02409f',ramdisk_id='',reservation_id='r-bxlx3of0',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TaggedAttachmentsTest-1465336039',owner_user_name='tempest-TaggedAttachmentsTest-1465336039-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T07:21:48Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='c2404fce4d1f48779c97d3cbcb6ae594',uuid=6dc14838-5602-4f43-a3e3-2374b4f603eb,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "21d2e524-a6d2-4fa8-86b4-e2787c969566", "address": "fa:16:3e:c9:af:49", "network": {"id": "79a28eee-40b4-43f3-98dc-313c48f19b7c", "bridge": "br-int", "label": "tempest-tagged-attachments-test-net-654887586", "subnets": [{"cidr": "10.10.10.0/24", "dns": [], "gateway": 
{"address": "10.10.10.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.10.10.87", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9536676f60844b6f802518771f02409f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap21d2e524-a6", "ovs_interfaceid": "21d2e524-a6d2-4fa8-86b4-e2787c969566", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec  6 02:22:41 np0005548731 nova_compute[232433]: 2025-12-06 07:22:41.100 232437 DEBUG nova.network.os_vif_util [None req-4c3a8c32-3d39-45ec-8642-2844328997b4 c2404fce4d1f48779c97d3cbcb6ae594 9536676f60844b6f802518771f02409f - - default default] Converting VIF {"id": "21d2e524-a6d2-4fa8-86b4-e2787c969566", "address": "fa:16:3e:c9:af:49", "network": {"id": "79a28eee-40b4-43f3-98dc-313c48f19b7c", "bridge": "br-int", "label": "tempest-tagged-attachments-test-net-654887586", "subnets": [{"cidr": "10.10.10.0/24", "dns": [], "gateway": {"address": "10.10.10.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.10.10.87", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9536676f60844b6f802518771f02409f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap21d2e524-a6", "ovs_interfaceid": "21d2e524-a6d2-4fa8-86b4-e2787c969566", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 02:22:41 np0005548731 nova_compute[232433]: 2025-12-06 07:22:41.101 232437 DEBUG nova.network.os_vif_util [None req-4c3a8c32-3d39-45ec-8642-2844328997b4 c2404fce4d1f48779c97d3cbcb6ae594 9536676f60844b6f802518771f02409f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c9:af:49,bridge_name='br-int',has_traffic_filtering=True,id=21d2e524-a6d2-4fa8-86b4-e2787c969566,network=Network(79a28eee-40b4-43f3-98dc-313c48f19b7c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap21d2e524-a6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 02:22:41 np0005548731 nova_compute[232433]: 2025-12-06 07:22:41.103 232437 DEBUG nova.virt.libvirt.guest [None req-4c3a8c32-3d39-45ec-8642-2844328997b4 c2404fce4d1f48779c97d3cbcb6ae594 9536676f60844b6f802518771f02409f - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:c9:af:49"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap21d2e524-a6"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Dec  6 02:22:41 np0005548731 nova_compute[232433]: 2025-12-06 07:22:41.106 232437 DEBUG nova.virt.libvirt.guest [None req-4c3a8c32-3d39-45ec-8642-2844328997b4 c2404fce4d1f48779c97d3cbcb6ae594 9536676f60844b6f802518771f02409f - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:c9:af:49"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap21d2e524-a6"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Dec  6 02:22:41 np0005548731 nova_compute[232433]: 2025-12-06 07:22:41.108 232437 DEBUG nova.virt.libvirt.driver [None req-4c3a8c32-3d39-45ec-8642-2844328997b4 c2404fce4d1f48779c97d3cbcb6ae594 9536676f60844b6f802518771f02409f - - default default] Attempting to detach device tap21d2e524-a6 from instance 6dc14838-5602-4f43-a3e3-2374b4f603eb from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487#033[00m
Dec  6 02:22:41 np0005548731 nova_compute[232433]: 2025-12-06 07:22:41.109 232437 DEBUG nova.virt.libvirt.guest [None req-4c3a8c32-3d39-45ec-8642-2844328997b4 c2404fce4d1f48779c97d3cbcb6ae594 9536676f60844b6f802518771f02409f - - default default] detach device xml: <interface type="ethernet">
Dec  6 02:22:41 np0005548731 nova_compute[232433]:  <mac address="fa:16:3e:c9:af:49"/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:  <model type="virtio"/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:  <driver name="vhost" rx_queue_size="512"/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:  <mtu size="1442"/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:  <target dev="tap21d2e524-a6"/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]: </interface>
Dec  6 02:22:41 np0005548731 nova_compute[232433]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Dec  6 02:22:41 np0005548731 nova_compute[232433]: 2025-12-06 07:22:41.113 232437 DEBUG nova.virt.libvirt.guest [None req-4c3a8c32-3d39-45ec-8642-2844328997b4 c2404fce4d1f48779c97d3cbcb6ae594 9536676f60844b6f802518771f02409f - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:c9:af:49"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap21d2e524-a6"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Dec  6 02:22:41 np0005548731 nova_compute[232433]: 2025-12-06 07:22:41.116 232437 DEBUG nova.virt.libvirt.guest [None req-4c3a8c32-3d39-45ec-8642-2844328997b4 c2404fce4d1f48779c97d3cbcb6ae594 9536676f60844b6f802518771f02409f - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:c9:af:49"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap21d2e524-a6"/></interface>not found in domain: <domain type='kvm' id='38'>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:  <name>instance-0000005c</name>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:  <uuid>6dc14838-5602-4f43-a3e3-2374b4f603eb</uuid>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:  <metadata>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec  6 02:22:41 np0005548731 nova_compute[232433]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:  <nova:name>tempest-device-tagging-server-1149920199</nova:name>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:  <nova:creationTime>2025-12-06 07:22:32</nova:creationTime>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:  <nova:flavor name="m1.nano">
Dec  6 02:22:41 np0005548731 nova_compute[232433]:    <nova:memory>128</nova:memory>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:    <nova:disk>1</nova:disk>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:    <nova:swap>0</nova:swap>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:    <nova:ephemeral>0</nova:ephemeral>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:    <nova:vcpus>1</nova:vcpus>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:  </nova:flavor>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:  <nova:owner>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:    <nova:user uuid="c2404fce4d1f48779c97d3cbcb6ae594">tempest-TaggedAttachmentsTest-1465336039-project-member</nova:user>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:    <nova:project uuid="9536676f60844b6f802518771f02409f">tempest-TaggedAttachmentsTest-1465336039</nova:project>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:  </nova:owner>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:  <nova:root type="image" uuid="6efab05d-c7cf-4770-a5c3-c806a2739063"/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:  <nova:ports>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:    <nova:port uuid="8af7a557-97f6-420e-99b0-83eced102922">
Dec  6 02:22:41 np0005548731 nova_compute[232433]:      <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:    </nova:port>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:    <nova:port uuid="21d2e524-a6d2-4fa8-86b4-e2787c969566">
Dec  6 02:22:41 np0005548731 nova_compute[232433]:      <nova:ip type="fixed" address="10.10.10.87" ipVersion="4"/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:    </nova:port>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:  </nova:ports>
Dec  6 02:22:41 np0005548731 nova_compute[232433]: </nova:instance>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:  </metadata>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:  <memory unit='KiB'>131072</memory>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:  <currentMemory unit='KiB'>131072</currentMemory>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:  <vcpu placement='static'>1</vcpu>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:  <resource>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:    <partition>/machine</partition>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:  </resource>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:  <sysinfo type='smbios'>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:    <system>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:      <entry name='manufacturer'>RDO</entry>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:      <entry name='product'>OpenStack Compute</entry>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:      <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:      <entry name='serial'>6dc14838-5602-4f43-a3e3-2374b4f603eb</entry>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:      <entry name='uuid'>6dc14838-5602-4f43-a3e3-2374b4f603eb</entry>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:      <entry name='family'>Virtual Machine</entry>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:    </system>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:  </sysinfo>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:  <os>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:    <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:    <boot dev='hd'/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:    <smbios mode='sysinfo'/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:  </os>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:  <features>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:    <acpi/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:    <apic/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:    <vmcoreinfo state='on'/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:  </features>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:  <cpu mode='custom' match='exact' check='full'>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:    <model fallback='forbid'>Nehalem</model>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:    <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:    <feature policy='require' name='x2apic'/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:    <feature policy='require' name='hypervisor'/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:    <feature policy='require' name='vme'/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:  </cpu>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:  <clock offset='utc'>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:    <timer name='pit' tickpolicy='delay'/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:    <timer name='rtc' tickpolicy='catchup'/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:    <timer name='hpet' present='no'/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:  </clock>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:  <on_poweroff>destroy</on_poweroff>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:  <on_reboot>restart</on_reboot>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:  <on_crash>destroy</on_crash>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:  <devices>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:    <emulator>/usr/libexec/qemu-kvm</emulator>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:    <disk type='network' device='disk'>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:      <driver name='qemu' type='raw' cache='none'/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:      <auth username='openstack'>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:        <secret type='ceph' uuid='40a1bae4-cf76-5610-8dab-c75116dfe0bb'/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:      </auth>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:      <source protocol='rbd' name='vms/6dc14838-5602-4f43-a3e3-2374b4f603eb_disk' index='2'>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:        <host name='192.168.122.100' port='6789'/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:        <host name='192.168.122.102' port='6789'/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:        <host name='192.168.122.101' port='6789'/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:      </source>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:      <target dev='vda' bus='virtio'/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:      <alias name='virtio-disk0'/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:    </disk>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:    <disk type='network' device='cdrom'>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:      <driver name='qemu' type='raw' cache='none'/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:      <auth username='openstack'>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:        <secret type='ceph' uuid='40a1bae4-cf76-5610-8dab-c75116dfe0bb'/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:      </auth>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:      <source protocol='rbd' name='vms/6dc14838-5602-4f43-a3e3-2374b4f603eb_disk.config' index='1'>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:        <host name='192.168.122.100' port='6789'/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:        <host name='192.168.122.102' port='6789'/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:        <host name='192.168.122.101' port='6789'/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:      </source>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:      <target dev='sda' bus='sata'/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:      <readonly/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:      <alias name='sata0-0-0'/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:    </disk>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:    <controller type='pci' index='0' model='pcie-root'>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:      <alias name='pcie.0'/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:    </controller>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:    <controller type='pci' index='1' model='pcie-root-port'>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:      <model name='pcie-root-port'/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:      <target chassis='1' port='0x10'/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:      <alias name='pci.1'/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:    </controller>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:    <controller type='pci' index='2' model='pcie-root-port'>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:      <model name='pcie-root-port'/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:      <target chassis='2' port='0x11'/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:      <alias name='pci.2'/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:    </controller>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:    <controller type='pci' index='3' model='pcie-root-port'>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:      <model name='pcie-root-port'/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:      <target chassis='3' port='0x12'/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:      <alias name='pci.3'/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:    </controller>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:    <controller type='pci' index='4' model='pcie-root-port'>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:      <model name='pcie-root-port'/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:      <target chassis='4' port='0x13'/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:      <alias name='pci.4'/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:    </controller>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:    <controller type='pci' index='5' model='pcie-root-port'>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:      <model name='pcie-root-port'/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:      <target chassis='5' port='0x14'/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:      <alias name='pci.5'/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:    </controller>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:    <controller type='pci' index='6' model='pcie-root-port'>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:      <model name='pcie-root-port'/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:      <target chassis='6' port='0x15'/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:      <alias name='pci.6'/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:    </controller>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:    <controller type='pci' index='7' model='pcie-root-port'>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:      <model name='pcie-root-port'/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:      <target chassis='7' port='0x16'/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:      <alias name='pci.7'/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:    </controller>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:    <controller type='pci' index='8' model='pcie-root-port'>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:      <model name='pcie-root-port'/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:      <target chassis='8' port='0x17'/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:      <alias name='pci.8'/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:    </controller>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:    <controller type='pci' index='9' model='pcie-root-port'>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:      <model name='pcie-root-port'/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:      <target chassis='9' port='0x18'/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:      <alias name='pci.9'/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:    </controller>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:    <controller type='pci' index='10' model='pcie-root-port'>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:      <model name='pcie-root-port'/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:      <target chassis='10' port='0x19'/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:      <alias name='pci.10'/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:    </controller>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:    <controller type='pci' index='11' model='pcie-root-port'>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:      <model name='pcie-root-port'/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:      <target chassis='11' port='0x1a'/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:      <alias name='pci.11'/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:    </controller>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:    <controller type='pci' index='12' model='pcie-root-port'>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:      <model name='pcie-root-port'/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:      <target chassis='12' port='0x1b'/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:      <alias name='pci.12'/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:    </controller>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:    <controller type='pci' index='13' model='pcie-root-port'>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:      <model name='pcie-root-port'/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:      <target chassis='13' port='0x1c'/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:      <alias name='pci.13'/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:    </controller>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:    <controller type='pci' index='14' model='pcie-root-port'>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:      <model name='pcie-root-port'/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:      <target chassis='14' port='0x1d'/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:      <alias name='pci.14'/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:    </controller>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:    <controller type='pci' index='15' model='pcie-root-port'>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:      <model name='pcie-root-port'/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:      <target chassis='15' port='0x1e'/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:      <alias name='pci.15'/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:    </controller>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:    <controller type='pci' index='16' model='pcie-root-port'>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:      <model name='pcie-root-port'/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:      <target chassis='16' port='0x1f'/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:      <alias name='pci.16'/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:    </controller>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:    <controller type='pci' index='17' model='pcie-root-port'>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:      <model name='pcie-root-port'/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:      <target chassis='17' port='0x20'/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:      <alias name='pci.17'/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:    </controller>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:    <controller type='pci' index='18' model='pcie-root-port'>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:      <model name='pcie-root-port'/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:      <target chassis='18' port='0x21'/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:      <alias name='pci.18'/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:    </controller>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:    <controller type='pci' index='19' model='pcie-root-port'>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:      <model name='pcie-root-port'/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:      <target chassis='19' port='0x22'/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:      <alias name='pci.19'/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:    </controller>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:    <controller type='pci' index='20' model='pcie-root-port'>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:      <model name='pcie-root-port'/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:      <target chassis='20' port='0x23'/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:      <alias name='pci.20'/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:    </controller>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:    <controller type='pci' index='21' model='pcie-root-port'>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:      <model name='pcie-root-port'/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:      <target chassis='21' port='0x24'/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:      <alias name='pci.21'/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:    </controller>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:    <controller type='pci' index='22' model='pcie-root-port'>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:      <model name='pcie-root-port'/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:      <target chassis='22' port='0x25'/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:      <alias name='pci.22'/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:    </controller>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:    <controller type='pci' index='23' model='pcie-root-port'>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:      <model name='pcie-root-port'/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:      <target chassis='23' port='0x26'/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:      <alias name='pci.23'/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:    </controller>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:    <controller type='pci' index='24' model='pcie-root-port'>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:      <model name='pcie-root-port'/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:      <target chassis='24' port='0x27'/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:      <alias name='pci.24'/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:    </controller>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:    <controller type='pci' index='25' model='pcie-root-port'>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:      <model name='pcie-root-port'/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:      <target chassis='25' port='0x28'/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:      <alias name='pci.25'/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:    </controller>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:    <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:      <model name='pcie-pci-bridge'/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:      <alias name='pci.26'/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:    </controller>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:    <controller type='usb' index='0' model='piix3-uhci'>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:      <alias name='usb'/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:      <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:    </controller>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:    <controller type='sata' index='0'>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:      <alias name='ide'/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:    </controller>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:    <interface type='ethernet'>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:      <mac address='fa:16:3e:f8:41:a3'/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:      <target dev='tap8af7a557-97'/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:      <model type='virtio'/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:      <driver name='vhost' rx_queue_size='512'/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:      <mtu size='1442'/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:      <alias name='net0'/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:    </interface>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:    <interface type='ethernet'>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:      <mac address='fa:16:3e:c9:af:49'/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:      <target dev='tap21d2e524-a6'/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:      <model type='virtio'/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:      <driver name='vhost' rx_queue_size='512'/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:      <mtu size='1442'/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:      <alias name='net1'/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:      <address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:    </interface>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:    <serial type='pty'>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:      <source path='/dev/pts/0'/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:      <log file='/var/lib/nova/instances/6dc14838-5602-4f43-a3e3-2374b4f603eb/console.log' append='off'/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:      <target type='isa-serial' port='0'>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:        <model name='isa-serial'/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:      </target>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:      <alias name='serial0'/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:    </serial>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:    <console type='pty' tty='/dev/pts/0'>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:      <source path='/dev/pts/0'/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:      <log file='/var/lib/nova/instances/6dc14838-5602-4f43-a3e3-2374b4f603eb/console.log' append='off'/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:      <target type='serial' port='0'/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:      <alias name='serial0'/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:    </console>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:    <input type='tablet' bus='usb'>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:      <alias name='input0'/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:      <address type='usb' bus='0' port='1'/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:    </input>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:    <input type='mouse' bus='ps2'>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:      <alias name='input1'/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:    </input>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:    <input type='keyboard' bus='ps2'>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:      <alias name='input2'/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:    </input>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:    <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:      <listen type='address' address='::0'/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:    </graphics>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:    <audio id='1' type='none'/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:    <video>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:      <model type='virtio' heads='1' primary='yes'/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:      <alias name='video0'/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:    </video>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:    <watchdog model='itco' action='reset'>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:      <alias name='watchdog0'/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:    </watchdog>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:    <memballoon model='virtio'>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:      <stats period='10'/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:      <alias name='balloon0'/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:    </memballoon>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:    <rng model='virtio'>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:      <backend model='random'>/dev/urandom</backend>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:      <alias name='rng0'/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:    </rng>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:  </devices>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:  <seclabel type='dynamic' model='selinux' relabel='yes'>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:    <label>system_u:system_r:svirt_t:s0:c276,c555</label>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:    <imagelabel>system_u:object_r:svirt_image_t:s0:c276,c555</imagelabel>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:  </seclabel>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:  <seclabel type='dynamic' model='dac' relabel='yes'>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:    <label>+107:+107</label>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:    <imagelabel>+107:+107</imagelabel>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:  </seclabel>
Dec  6 02:22:41 np0005548731 nova_compute[232433]: </domain>
Dec  6 02:22:41 np0005548731 nova_compute[232433]: get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Dec  6 02:22:41 np0005548731 nova_compute[232433]: 2025-12-06 07:22:41.117 232437 INFO nova.virt.libvirt.driver [None req-4c3a8c32-3d39-45ec-8642-2844328997b4 c2404fce4d1f48779c97d3cbcb6ae594 9536676f60844b6f802518771f02409f - - default default] Successfully detached device tap21d2e524-a6 from instance 6dc14838-5602-4f43-a3e3-2374b4f603eb from the persistent domain config.
Dec  6 02:22:41 np0005548731 nova_compute[232433]: 2025-12-06 07:22:41.118 232437 DEBUG nova.virt.libvirt.driver [None req-4c3a8c32-3d39-45ec-8642-2844328997b4 c2404fce4d1f48779c97d3cbcb6ae594 9536676f60844b6f802518771f02409f - - default default] (1/8): Attempting to detach device tap21d2e524-a6 with device alias net1 from instance 6dc14838-5602-4f43-a3e3-2374b4f603eb from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Dec  6 02:22:41 np0005548731 nova_compute[232433]: 2025-12-06 07:22:41.118 232437 DEBUG nova.virt.libvirt.guest [None req-4c3a8c32-3d39-45ec-8642-2844328997b4 c2404fce4d1f48779c97d3cbcb6ae594 9536676f60844b6f802518771f02409f - - default default] detach device xml: <interface type="ethernet">
Dec  6 02:22:41 np0005548731 nova_compute[232433]:  <mac address="fa:16:3e:c9:af:49"/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:  <model type="virtio"/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:  <driver name="vhost" rx_queue_size="512"/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:  <mtu size="1442"/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:  <target dev="tap21d2e524-a6"/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]: </interface>
Dec  6 02:22:41 np0005548731 nova_compute[232433]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Dec  6 02:22:41 np0005548731 kernel: tap21d2e524-a6 (unregistering): left promiscuous mode
Dec  6 02:22:41 np0005548731 NetworkManager[49182]: <info>  [1765005761.2200] device (tap21d2e524-a6): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec  6 02:22:41 np0005548731 ovn_controller[133927]: 2025-12-06T07:22:41Z|00359|binding|INFO|Releasing lport 21d2e524-a6d2-4fa8-86b4-e2787c969566 from this chassis (sb_readonly=0)
Dec  6 02:22:41 np0005548731 ovn_controller[133927]: 2025-12-06T07:22:41Z|00360|binding|INFO|Setting lport 21d2e524-a6d2-4fa8-86b4-e2787c969566 down in Southbound
Dec  6 02:22:41 np0005548731 ovn_controller[133927]: 2025-12-06T07:22:41Z|00361|binding|INFO|Removing iface tap21d2e524-a6 ovn-installed in OVS
Dec  6 02:22:41 np0005548731 nova_compute[232433]: 2025-12-06 07:22:41.241 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:22:41 np0005548731 nova_compute[232433]: 2025-12-06 07:22:41.243 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:22:41 np0005548731 nova_compute[232433]: 2025-12-06 07:22:41.243 232437 DEBUG nova.virt.libvirt.driver [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Received event <DeviceRemovedEvent: 1765005761.2432032, 6dc14838-5602-4f43-a3e3-2374b4f603eb => net1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370#033[00m
Dec  6 02:22:41 np0005548731 nova_compute[232433]: 2025-12-06 07:22:41.245 232437 DEBUG nova.virt.libvirt.driver [None req-4c3a8c32-3d39-45ec-8642-2844328997b4 c2404fce4d1f48779c97d3cbcb6ae594 9536676f60844b6f802518771f02409f - - default default] Start waiting for the detach event from libvirt for device tap21d2e524-a6 with device alias net1 for instance 6dc14838-5602-4f43-a3e3-2374b4f603eb _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599#033[00m
Dec  6 02:22:41 np0005548731 nova_compute[232433]: 2025-12-06 07:22:41.245 232437 DEBUG nova.virt.libvirt.guest [None req-4c3a8c32-3d39-45ec-8642-2844328997b4 c2404fce4d1f48779c97d3cbcb6ae594 9536676f60844b6f802518771f02409f - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:c9:af:49"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap21d2e524-a6"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Dec  6 02:22:41 np0005548731 nova_compute[232433]: 2025-12-06 07:22:41.248 232437 DEBUG nova.virt.libvirt.guest [None req-4c3a8c32-3d39-45ec-8642-2844328997b4 c2404fce4d1f48779c97d3cbcb6ae594 9536676f60844b6f802518771f02409f - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:c9:af:49"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap21d2e524-a6"/></interface>not found in domain: <domain type='kvm' id='38'>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:  <name>instance-0000005c</name>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:  <uuid>6dc14838-5602-4f43-a3e3-2374b4f603eb</uuid>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:  <metadata>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec  6 02:22:41 np0005548731 nova_compute[232433]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:  <nova:name>tempest-device-tagging-server-1149920199</nova:name>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:  <nova:creationTime>2025-12-06 07:22:32</nova:creationTime>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:  <nova:flavor name="m1.nano">
Dec  6 02:22:41 np0005548731 nova_compute[232433]:    <nova:memory>128</nova:memory>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:    <nova:disk>1</nova:disk>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:    <nova:swap>0</nova:swap>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:    <nova:ephemeral>0</nova:ephemeral>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:    <nova:vcpus>1</nova:vcpus>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:  </nova:flavor>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:  <nova:owner>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:    <nova:user uuid="c2404fce4d1f48779c97d3cbcb6ae594">tempest-TaggedAttachmentsTest-1465336039-project-member</nova:user>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:    <nova:project uuid="9536676f60844b6f802518771f02409f">tempest-TaggedAttachmentsTest-1465336039</nova:project>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:  </nova:owner>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:  <nova:root type="image" uuid="6efab05d-c7cf-4770-a5c3-c806a2739063"/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:  <nova:ports>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:    <nova:port uuid="8af7a557-97f6-420e-99b0-83eced102922">
Dec  6 02:22:41 np0005548731 nova_compute[232433]:      <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:    </nova:port>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:    <nova:port uuid="21d2e524-a6d2-4fa8-86b4-e2787c969566">
Dec  6 02:22:41 np0005548731 nova_compute[232433]:      <nova:ip type="fixed" address="10.10.10.87" ipVersion="4"/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:    </nova:port>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:  </nova:ports>
Dec  6 02:22:41 np0005548731 nova_compute[232433]: </nova:instance>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:  </metadata>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:  <memory unit='KiB'>131072</memory>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:  <currentMemory unit='KiB'>131072</currentMemory>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:  <vcpu placement='static'>1</vcpu>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:  <resource>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:    <partition>/machine</partition>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:  </resource>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:  <sysinfo type='smbios'>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:    <system>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:      <entry name='manufacturer'>RDO</entry>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:      <entry name='product'>OpenStack Compute</entry>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:      <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:      <entry name='serial'>6dc14838-5602-4f43-a3e3-2374b4f603eb</entry>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:      <entry name='uuid'>6dc14838-5602-4f43-a3e3-2374b4f603eb</entry>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:      <entry name='family'>Virtual Machine</entry>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:    </system>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:  </sysinfo>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:  <os>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:    <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:    <boot dev='hd'/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:    <smbios mode='sysinfo'/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:  </os>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:  <features>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:    <acpi/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:    <apic/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:    <vmcoreinfo state='on'/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:  </features>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:  <cpu mode='custom' match='exact' check='full'>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:    <model fallback='forbid'>Nehalem</model>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:    <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:    <feature policy='require' name='x2apic'/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:    <feature policy='require' name='hypervisor'/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:    <feature policy='require' name='vme'/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:  </cpu>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:  <clock offset='utc'>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:    <timer name='pit' tickpolicy='delay'/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:    <timer name='rtc' tickpolicy='catchup'/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:    <timer name='hpet' present='no'/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:  </clock>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:  <on_poweroff>destroy</on_poweroff>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:  <on_reboot>restart</on_reboot>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:  <on_crash>destroy</on_crash>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:  <devices>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:    <emulator>/usr/libexec/qemu-kvm</emulator>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:    <disk type='network' device='disk'>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:      <driver name='qemu' type='raw' cache='none'/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:      <auth username='openstack'>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:        <secret type='ceph' uuid='40a1bae4-cf76-5610-8dab-c75116dfe0bb'/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:      </auth>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:      <source protocol='rbd' name='vms/6dc14838-5602-4f43-a3e3-2374b4f603eb_disk' index='2'>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:        <host name='192.168.122.100' port='6789'/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:        <host name='192.168.122.102' port='6789'/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:        <host name='192.168.122.101' port='6789'/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:      </source>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:      <target dev='vda' bus='virtio'/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:      <alias name='virtio-disk0'/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:    </disk>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:    <disk type='network' device='cdrom'>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:      <driver name='qemu' type='raw' cache='none'/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:      <auth username='openstack'>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:        <secret type='ceph' uuid='40a1bae4-cf76-5610-8dab-c75116dfe0bb'/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:      </auth>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:      <source protocol='rbd' name='vms/6dc14838-5602-4f43-a3e3-2374b4f603eb_disk.config' index='1'>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:        <host name='192.168.122.100' port='6789'/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:        <host name='192.168.122.102' port='6789'/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:        <host name='192.168.122.101' port='6789'/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:      </source>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:      <target dev='sda' bus='sata'/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:      <readonly/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:      <alias name='sata0-0-0'/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:    </disk>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:    <controller type='pci' index='0' model='pcie-root'>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:      <alias name='pcie.0'/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:    </controller>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:    <controller type='pci' index='1' model='pcie-root-port'>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:      <model name='pcie-root-port'/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:      <target chassis='1' port='0x10'/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:      <alias name='pci.1'/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:    </controller>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:    <controller type='pci' index='2' model='pcie-root-port'>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:      <model name='pcie-root-port'/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:      <target chassis='2' port='0x11'/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:      <alias name='pci.2'/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:    </controller>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:    <controller type='pci' index='3' model='pcie-root-port'>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:      <model name='pcie-root-port'/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:      <target chassis='3' port='0x12'/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:      <alias name='pci.3'/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:    </controller>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:    <controller type='pci' index='4' model='pcie-root-port'>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:      <model name='pcie-root-port'/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:      <target chassis='4' port='0x13'/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:      <alias name='pci.4'/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:    </controller>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:    <controller type='pci' index='5' model='pcie-root-port'>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:      <model name='pcie-root-port'/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:      <target chassis='5' port='0x14'/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:      <alias name='pci.5'/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:    </controller>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:    <controller type='pci' index='6' model='pcie-root-port'>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:      <model name='pcie-root-port'/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:      <target chassis='6' port='0x15'/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:      <alias name='pci.6'/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:    </controller>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:    <controller type='pci' index='7' model='pcie-root-port'>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:      <model name='pcie-root-port'/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:      <target chassis='7' port='0x16'/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:      <alias name='pci.7'/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:    </controller>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:    <controller type='pci' index='8' model='pcie-root-port'>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:      <model name='pcie-root-port'/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:      <target chassis='8' port='0x17'/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:      <alias name='pci.8'/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:    </controller>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:    <controller type='pci' index='9' model='pcie-root-port'>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:      <model name='pcie-root-port'/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:      <target chassis='9' port='0x18'/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:      <alias name='pci.9'/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:    </controller>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:    <controller type='pci' index='10' model='pcie-root-port'>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:      <model name='pcie-root-port'/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:      <target chassis='10' port='0x19'/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:      <alias name='pci.10'/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:    </controller>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:    <controller type='pci' index='11' model='pcie-root-port'>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:      <model name='pcie-root-port'/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:      <target chassis='11' port='0x1a'/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:      <alias name='pci.11'/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:    </controller>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:    <controller type='pci' index='12' model='pcie-root-port'>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:      <model name='pcie-root-port'/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:      <target chassis='12' port='0x1b'/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:      <alias name='pci.12'/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:    </controller>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:    <controller type='pci' index='13' model='pcie-root-port'>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:      <model name='pcie-root-port'/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:      <target chassis='13' port='0x1c'/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:      <alias name='pci.13'/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:    </controller>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:    <controller type='pci' index='14' model='pcie-root-port'>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:      <model name='pcie-root-port'/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:      <target chassis='14' port='0x1d'/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:      <alias name='pci.14'/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:    </controller>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:    <controller type='pci' index='15' model='pcie-root-port'>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:      <model name='pcie-root-port'/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:      <target chassis='15' port='0x1e'/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:      <alias name='pci.15'/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:    </controller>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:    <controller type='pci' index='16' model='pcie-root-port'>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:      <model name='pcie-root-port'/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:      <target chassis='16' port='0x1f'/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:      <alias name='pci.16'/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:    </controller>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:    <controller type='pci' index='17' model='pcie-root-port'>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:      <model name='pcie-root-port'/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:      <target chassis='17' port='0x20'/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:      <alias name='pci.17'/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:    </controller>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:    <controller type='pci' index='18' model='pcie-root-port'>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:      <model name='pcie-root-port'/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:      <target chassis='18' port='0x21'/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:      <alias name='pci.18'/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:    </controller>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:    <controller type='pci' index='19' model='pcie-root-port'>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:      <model name='pcie-root-port'/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:      <target chassis='19' port='0x22'/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:      <alias name='pci.19'/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:    </controller>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:    <controller type='pci' index='20' model='pcie-root-port'>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:      <model name='pcie-root-port'/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:      <target chassis='20' port='0x23'/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:      <alias name='pci.20'/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:    </controller>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:    <controller type='pci' index='21' model='pcie-root-port'>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:      <model name='pcie-root-port'/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:      <target chassis='21' port='0x24'/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:      <alias name='pci.21'/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:    </controller>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:    <controller type='pci' index='22' model='pcie-root-port'>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:      <model name='pcie-root-port'/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:      <target chassis='22' port='0x25'/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:      <alias name='pci.22'/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:    </controller>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:    <controller type='pci' index='23' model='pcie-root-port'>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:      <model name='pcie-root-port'/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:      <target chassis='23' port='0x26'/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:      <alias name='pci.23'/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:    </controller>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:    <controller type='pci' index='24' model='pcie-root-port'>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:      <model name='pcie-root-port'/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:      <target chassis='24' port='0x27'/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:      <alias name='pci.24'/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:    </controller>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:    <controller type='pci' index='25' model='pcie-root-port'>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:      <model name='pcie-root-port'/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:      <target chassis='25' port='0x28'/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:      <alias name='pci.25'/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:    </controller>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:    <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:      <model name='pcie-pci-bridge'/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:      <alias name='pci.26'/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:    </controller>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:    <controller type='usb' index='0' model='piix3-uhci'>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:      <alias name='usb'/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:      <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:    </controller>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:    <controller type='sata' index='0'>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:      <alias name='ide'/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:    </controller>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:    <interface type='ethernet'>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:      <mac address='fa:16:3e:f8:41:a3'/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:      <target dev='tap8af7a557-97'/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:      <model type='virtio'/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:      <driver name='vhost' rx_queue_size='512'/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:      <mtu size='1442'/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:      <alias name='net0'/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:    </interface>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:    <serial type='pty'>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:      <source path='/dev/pts/0'/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:      <log file='/var/lib/nova/instances/6dc14838-5602-4f43-a3e3-2374b4f603eb/console.log' append='off'/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:      <target type='isa-serial' port='0'>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:        <model name='isa-serial'/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:      </target>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:      <alias name='serial0'/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:    </serial>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:    <console type='pty' tty='/dev/pts/0'>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:      <source path='/dev/pts/0'/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:      <log file='/var/lib/nova/instances/6dc14838-5602-4f43-a3e3-2374b4f603eb/console.log' append='off'/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:      <target type='serial' port='0'/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:      <alias name='serial0'/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:    </console>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:    <input type='tablet' bus='usb'>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:      <alias name='input0'/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:      <address type='usb' bus='0' port='1'/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:    </input>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:    <input type='mouse' bus='ps2'>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:      <alias name='input1'/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:    </input>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:    <input type='keyboard' bus='ps2'>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:      <alias name='input2'/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:    </input>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:    <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:      <listen type='address' address='::0'/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:    </graphics>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:    <audio id='1' type='none'/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:    <video>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:      <model type='virtio' heads='1' primary='yes'/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:      <alias name='video0'/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:    </video>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:    <watchdog model='itco' action='reset'>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:      <alias name='watchdog0'/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:    </watchdog>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:    <memballoon model='virtio'>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:      <stats period='10'/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:      <alias name='balloon0'/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:    </memballoon>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:    <rng model='virtio'>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:      <backend model='random'>/dev/urandom</backend>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:      <alias name='rng0'/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:    </rng>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:  </devices>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:  <seclabel type='dynamic' model='selinux' relabel='yes'>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:    <label>system_u:system_r:svirt_t:s0:c276,c555</label>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:    <imagelabel>system_u:object_r:svirt_image_t:s0:c276,c555</imagelabel>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:  </seclabel>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:  <seclabel type='dynamic' model='dac' relabel='yes'>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:    <label>+107:+107</label>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:    <imagelabel>+107:+107</imagelabel>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:  </seclabel>
Dec  6 02:22:41 np0005548731 nova_compute[232433]: </domain>
Dec  6 02:22:41 np0005548731 nova_compute[232433]: get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282#033[00m
Dec  6 02:22:41 np0005548731 nova_compute[232433]: 2025-12-06 07:22:41.248 232437 INFO nova.virt.libvirt.driver [None req-4c3a8c32-3d39-45ec-8642-2844328997b4 c2404fce4d1f48779c97d3cbcb6ae594 9536676f60844b6f802518771f02409f - - default default] Successfully detached device tap21d2e524-a6 from instance 6dc14838-5602-4f43-a3e3-2374b4f603eb from the live domain config.#033[00m
Dec  6 02:22:41 np0005548731 nova_compute[232433]: 2025-12-06 07:22:41.249 232437 DEBUG nova.virt.libvirt.vif [None req-4c3a8c32-3d39-45ec-8642-2844328997b4 c2404fce4d1f48779c97d3cbcb6ae594 9536676f60844b6f802518771f02409f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T07:21:38Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=InstanceDeviceMetadata,disable_terminate=False,display_description=None,display_name='tempest-device-tagging-server-1149920199',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-device-tagging-server-1149920199',id=92,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLwWYRzhpCEWsRkCaBhzHfhpPxKJasHgXvUlHzaDRDZXID15xokDKbMJM14TkjwZa8JWqEfbHVbjprM+jVuVPSPnfeGy8c3QiNdj4kuOXnNZ8rc8TAxyYOMgQOcRiCAPIw==',key_name='tempest-keypair-1877451409',keypairs=<?>,launch_index=0,launched_at=2025-12-06T07:21:48Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='9536676f60844b6f802518771f02409f',ramdisk_id='',reservation_id='r-bxlx3of0',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TaggedAttachmentsTest-1465336039',owner_user_name='tempest-TaggedAttachmentsTest-1465336039-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T07:21:48Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='c2404fce4d1f48779c97d3cbcb6ae594',uuid=6dc14838-5602-4f43-a3e3-2374b4f603eb,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "21d2e524-a6d2-4fa8-86b4-e2787c969566", "address": "fa:16:3e:c9:af:49", "network": {"id": "79a28eee-40b4-43f3-98dc-313c48f19b7c", "bridge": "br-int", "label": "tempest-tagged-attachments-test-net-654887586", "subnets": [{"cidr": "10.10.10.0/24", "dns": [], "gateway": 
{"address": "10.10.10.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.10.10.87", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9536676f60844b6f802518771f02409f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap21d2e524-a6", "ovs_interfaceid": "21d2e524-a6d2-4fa8-86b4-e2787c969566", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec  6 02:22:41 np0005548731 nova_compute[232433]: 2025-12-06 07:22:41.249 232437 DEBUG nova.network.os_vif_util [None req-4c3a8c32-3d39-45ec-8642-2844328997b4 c2404fce4d1f48779c97d3cbcb6ae594 9536676f60844b6f802518771f02409f - - default default] Converting VIF {"id": "21d2e524-a6d2-4fa8-86b4-e2787c969566", "address": "fa:16:3e:c9:af:49", "network": {"id": "79a28eee-40b4-43f3-98dc-313c48f19b7c", "bridge": "br-int", "label": "tempest-tagged-attachments-test-net-654887586", "subnets": [{"cidr": "10.10.10.0/24", "dns": [], "gateway": {"address": "10.10.10.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.10.10.87", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9536676f60844b6f802518771f02409f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap21d2e524-a6", "ovs_interfaceid": "21d2e524-a6d2-4fa8-86b4-e2787c969566", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 02:22:41 np0005548731 nova_compute[232433]: 2025-12-06 07:22:41.250 232437 DEBUG nova.network.os_vif_util [None req-4c3a8c32-3d39-45ec-8642-2844328997b4 c2404fce4d1f48779c97d3cbcb6ae594 9536676f60844b6f802518771f02409f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c9:af:49,bridge_name='br-int',has_traffic_filtering=True,id=21d2e524-a6d2-4fa8-86b4-e2787c969566,network=Network(79a28eee-40b4-43f3-98dc-313c48f19b7c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap21d2e524-a6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 02:22:41 np0005548731 nova_compute[232433]: 2025-12-06 07:22:41.250 232437 DEBUG os_vif [None req-4c3a8c32-3d39-45ec-8642-2844328997b4 c2404fce4d1f48779c97d3cbcb6ae594 9536676f60844b6f802518771f02409f - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c9:af:49,bridge_name='br-int',has_traffic_filtering=True,id=21d2e524-a6d2-4fa8-86b4-e2787c969566,network=Network(79a28eee-40b4-43f3-98dc-313c48f19b7c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap21d2e524-a6') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec  6 02:22:41 np0005548731 nova_compute[232433]: 2025-12-06 07:22:41.252 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:22:41 np0005548731 nova_compute[232433]: 2025-12-06 07:22:41.252 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap21d2e524-a6, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:22:41 np0005548731 nova_compute[232433]: 2025-12-06 07:22:41.254 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:22:41 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:22:41.255 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c9:af:49 10.10.10.87'], port_security=['fa:16:3e:c9:af:49 10.10.10.87'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.10.10.87/24', 'neutron:device_id': '6dc14838-5602-4f43-a3e3-2374b4f603eb', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-79a28eee-40b4-43f3-98dc-313c48f19b7c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9536676f60844b6f802518771f02409f', 'neutron:revision_number': '4', 'neutron:security_group_ids': '4b77d15a-1c0a-449a-8ebe-5566a6edf729', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1ceeabcb-498a-4c7a-a4f1-5db33adf6b22, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], logical_port=21d2e524-a6d2-4fa8-86b4-e2787c969566) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 02:22:41 np0005548731 nova_compute[232433]: 2025-12-06 07:22:41.256 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:22:41 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:22:41.257 143965 INFO neutron.agent.ovn.metadata.agent [-] Port 21d2e524-a6d2-4fa8-86b4-e2787c969566 in datapath 79a28eee-40b4-43f3-98dc-313c48f19b7c unbound from our chassis#033[00m
Dec  6 02:22:41 np0005548731 nova_compute[232433]: 2025-12-06 07:22:41.258 232437 INFO os_vif [None req-4c3a8c32-3d39-45ec-8642-2844328997b4 c2404fce4d1f48779c97d3cbcb6ae594 9536676f60844b6f802518771f02409f - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c9:af:49,bridge_name='br-int',has_traffic_filtering=True,id=21d2e524-a6d2-4fa8-86b4-e2787c969566,network=Network(79a28eee-40b4-43f3-98dc-313c48f19b7c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap21d2e524-a6')#033[00m
Dec  6 02:22:41 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:22:41.259 143965 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 79a28eee-40b4-43f3-98dc-313c48f19b7c, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec  6 02:22:41 np0005548731 nova_compute[232433]: 2025-12-06 07:22:41.259 232437 DEBUG nova.virt.libvirt.guest [None req-4c3a8c32-3d39-45ec-8642-2844328997b4 c2404fce4d1f48779c97d3cbcb6ae594 9536676f60844b6f802518771f02409f - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec  6 02:22:41 np0005548731 nova_compute[232433]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:  <nova:name>tempest-device-tagging-server-1149920199</nova:name>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:  <nova:creationTime>2025-12-06 07:22:41</nova:creationTime>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:  <nova:flavor name="m1.nano">
Dec  6 02:22:41 np0005548731 nova_compute[232433]:    <nova:memory>128</nova:memory>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:    <nova:disk>1</nova:disk>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:    <nova:swap>0</nova:swap>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:    <nova:ephemeral>0</nova:ephemeral>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:    <nova:vcpus>1</nova:vcpus>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:  </nova:flavor>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:  <nova:owner>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:    <nova:user uuid="c2404fce4d1f48779c97d3cbcb6ae594">tempest-TaggedAttachmentsTest-1465336039-project-member</nova:user>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:    <nova:project uuid="9536676f60844b6f802518771f02409f">tempest-TaggedAttachmentsTest-1465336039</nova:project>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:  </nova:owner>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:  <nova:root type="image" uuid="6efab05d-c7cf-4770-a5c3-c806a2739063"/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:  <nova:ports>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:    <nova:port uuid="8af7a557-97f6-420e-99b0-83eced102922">
Dec  6 02:22:41 np0005548731 nova_compute[232433]:      <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:    </nova:port>
Dec  6 02:22:41 np0005548731 nova_compute[232433]:  </nova:ports>
Dec  6 02:22:41 np0005548731 nova_compute[232433]: </nova:instance>
Dec  6 02:22:41 np0005548731 nova_compute[232433]: set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359#033[00m
Dec  6 02:22:41 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:22:41.259 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[6912f4de-f0fa-473f-811c-7478fed227a8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:22:41 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:22:41.260 143965 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-79a28eee-40b4-43f3-98dc-313c48f19b7c namespace which is not needed anymore#033[00m
Dec  6 02:22:41 np0005548731 neutron-haproxy-ovnmeta-79a28eee-40b4-43f3-98dc-313c48f19b7c[270235]: [NOTICE]   (270239) : haproxy version is 2.8.14-c23fe91
Dec  6 02:22:41 np0005548731 neutron-haproxy-ovnmeta-79a28eee-40b4-43f3-98dc-313c48f19b7c[270235]: [NOTICE]   (270239) : path to executable is /usr/sbin/haproxy
Dec  6 02:22:41 np0005548731 neutron-haproxy-ovnmeta-79a28eee-40b4-43f3-98dc-313c48f19b7c[270235]: [WARNING]  (270239) : Exiting Master process...
Dec  6 02:22:41 np0005548731 neutron-haproxy-ovnmeta-79a28eee-40b4-43f3-98dc-313c48f19b7c[270235]: [WARNING]  (270239) : Exiting Master process...
Dec  6 02:22:41 np0005548731 neutron-haproxy-ovnmeta-79a28eee-40b4-43f3-98dc-313c48f19b7c[270235]: [ALERT]    (270239) : Current worker (270241) exited with code 143 (Terminated)
Dec  6 02:22:41 np0005548731 neutron-haproxy-ovnmeta-79a28eee-40b4-43f3-98dc-313c48f19b7c[270235]: [WARNING]  (270239) : All workers exited. Exiting... (0)
Dec  6 02:22:41 np0005548731 systemd[1]: libpod-1334926f81c8acdd02d793a2d87f71e8f4a0a9ca4bce28a3eb44ea8df058ca2b.scope: Deactivated successfully.
Dec  6 02:22:41 np0005548731 podman[270369]: 2025-12-06 07:22:41.426045413 +0000 UTC m=+0.065857160 container died 1334926f81c8acdd02d793a2d87f71e8f4a0a9ca4bce28a3eb44ea8df058ca2b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-79a28eee-40b4-43f3-98dc-313c48f19b7c, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  6 02:22:41 np0005548731 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-1334926f81c8acdd02d793a2d87f71e8f4a0a9ca4bce28a3eb44ea8df058ca2b-userdata-shm.mount: Deactivated successfully.
Dec  6 02:22:41 np0005548731 systemd[1]: var-lib-containers-storage-overlay-4cd7d28f7341177f8d88372a86ce9e141e11a4964632f0698fb573ad5193c3f9-merged.mount: Deactivated successfully.
Dec  6 02:22:41 np0005548731 podman[270369]: 2025-12-06 07:22:41.483108232 +0000 UTC m=+0.122919969 container cleanup 1334926f81c8acdd02d793a2d87f71e8f4a0a9ca4bce28a3eb44ea8df058ca2b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-79a28eee-40b4-43f3-98dc-313c48f19b7c, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Dec  6 02:22:41 np0005548731 systemd[1]: libpod-conmon-1334926f81c8acdd02d793a2d87f71e8f4a0a9ca4bce28a3eb44ea8df058ca2b.scope: Deactivated successfully.
Dec  6 02:22:41 np0005548731 nova_compute[232433]: 2025-12-06 07:22:41.489 232437 DEBUG nova.compute.manager [req-8fe9e568-7674-4542-abe1-65cba97a82cb req-aac62512-f9e7-44fe-b8f4-1fa1aad15dae 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 6dc14838-5602-4f43-a3e3-2374b4f603eb] Received event network-vif-unplugged-21d2e524-a6d2-4fa8-86b4-e2787c969566 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:22:41 np0005548731 nova_compute[232433]: 2025-12-06 07:22:41.489 232437 DEBUG oslo_concurrency.lockutils [req-8fe9e568-7674-4542-abe1-65cba97a82cb req-aac62512-f9e7-44fe-b8f4-1fa1aad15dae 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "6dc14838-5602-4f43-a3e3-2374b4f603eb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:22:41 np0005548731 nova_compute[232433]: 2025-12-06 07:22:41.490 232437 DEBUG oslo_concurrency.lockutils [req-8fe9e568-7674-4542-abe1-65cba97a82cb req-aac62512-f9e7-44fe-b8f4-1fa1aad15dae 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "6dc14838-5602-4f43-a3e3-2374b4f603eb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:22:41 np0005548731 nova_compute[232433]: 2025-12-06 07:22:41.490 232437 DEBUG oslo_concurrency.lockutils [req-8fe9e568-7674-4542-abe1-65cba97a82cb req-aac62512-f9e7-44fe-b8f4-1fa1aad15dae 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "6dc14838-5602-4f43-a3e3-2374b4f603eb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:22:41 np0005548731 nova_compute[232433]: 2025-12-06 07:22:41.490 232437 DEBUG nova.compute.manager [req-8fe9e568-7674-4542-abe1-65cba97a82cb req-aac62512-f9e7-44fe-b8f4-1fa1aad15dae 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 6dc14838-5602-4f43-a3e3-2374b4f603eb] No waiting events found dispatching network-vif-unplugged-21d2e524-a6d2-4fa8-86b4-e2787c969566 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 02:22:41 np0005548731 nova_compute[232433]: 2025-12-06 07:22:41.491 232437 WARNING nova.compute.manager [req-8fe9e568-7674-4542-abe1-65cba97a82cb req-aac62512-f9e7-44fe-b8f4-1fa1aad15dae 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 6dc14838-5602-4f43-a3e3-2374b4f603eb] Received unexpected event network-vif-unplugged-21d2e524-a6d2-4fa8-86b4-e2787c969566 for instance with vm_state active and task_state None.#033[00m
Dec  6 02:22:41 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:22:41 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:22:41 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:22:41.530 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:22:41 np0005548731 podman[270398]: 2025-12-06 07:22:41.548809627 +0000 UTC m=+0.044046843 container remove 1334926f81c8acdd02d793a2d87f71e8f4a0a9ca4bce28a3eb44ea8df058ca2b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-79a28eee-40b4-43f3-98dc-313c48f19b7c, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Dec  6 02:22:41 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:22:41.554 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[13bdb842-bc89-48b1-8b76-6da58eddbe17]: (4, ('Sat Dec  6 07:22:41 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-79a28eee-40b4-43f3-98dc-313c48f19b7c (1334926f81c8acdd02d793a2d87f71e8f4a0a9ca4bce28a3eb44ea8df058ca2b)\n1334926f81c8acdd02d793a2d87f71e8f4a0a9ca4bce28a3eb44ea8df058ca2b\nSat Dec  6 07:22:41 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-79a28eee-40b4-43f3-98dc-313c48f19b7c (1334926f81c8acdd02d793a2d87f71e8f4a0a9ca4bce28a3eb44ea8df058ca2b)\n1334926f81c8acdd02d793a2d87f71e8f4a0a9ca4bce28a3eb44ea8df058ca2b\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:22:41 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:22:41.555 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[8cdc18b0-06f0-424c-8fa1-84d2f0ba9d04]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:22:41 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:22:41.556 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap79a28eee-40, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:22:41 np0005548731 nova_compute[232433]: 2025-12-06 07:22:41.558 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:22:41 np0005548731 kernel: tap79a28eee-40: left promiscuous mode
Dec  6 02:22:41 np0005548731 nova_compute[232433]: 2025-12-06 07:22:41.571 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:22:41 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:22:41.573 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[d9d43ade-36d0-4f40-9116-105216286229]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:22:41 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:22:41.589 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[2f356b68-a371-4b96-9d05-f841407acff1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:22:41 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:22:41.591 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[7683b1b9-fbca-4630-8193-2e93b3c42b1d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:22:41 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:22:41.606 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[76fc7109-c7d9-49e4-8864-8f43f1b65b45]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 601419, 'reachable_time': 33635, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 270413, 'error': None, 'target': 'ovnmeta-79a28eee-40b4-43f3-98dc-313c48f19b7c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:22:41 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:22:41.608 144079 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-79a28eee-40b4-43f3-98dc-313c48f19b7c deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Dec  6 02:22:41 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:22:41.609 144079 DEBUG oslo.privsep.daemon [-] privsep: reply[97bbe743-e9cd-42d0-85c3-da0fe052e0a6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:22:41 np0005548731 systemd[1]: run-netns-ovnmeta\x2d79a28eee\x2d40b4\x2d43f3\x2d98dc\x2d313c48f19b7c.mount: Deactivated successfully.
Dec  6 02:22:42 np0005548731 nova_compute[232433]: 2025-12-06 07:22:42.184 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:22:42 np0005548731 nova_compute[232433]: 2025-12-06 07:22:42.197 232437 DEBUG oslo_concurrency.lockutils [None req-4c3a8c32-3d39-45ec-8642-2844328997b4 c2404fce4d1f48779c97d3cbcb6ae594 9536676f60844b6f802518771f02409f - - default default] Acquiring lock "refresh_cache-6dc14838-5602-4f43-a3e3-2374b4f603eb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 02:22:42 np0005548731 nova_compute[232433]: 2025-12-06 07:22:42.197 232437 DEBUG oslo_concurrency.lockutils [None req-4c3a8c32-3d39-45ec-8642-2844328997b4 c2404fce4d1f48779c97d3cbcb6ae594 9536676f60844b6f802518771f02409f - - default default] Acquired lock "refresh_cache-6dc14838-5602-4f43-a3e3-2374b4f603eb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 02:22:42 np0005548731 nova_compute[232433]: 2025-12-06 07:22:42.197 232437 DEBUG nova.network.neutron [None req-4c3a8c32-3d39-45ec-8642-2844328997b4 c2404fce4d1f48779c97d3cbcb6ae594 9536676f60844b6f802518771f02409f - - default default] [instance: 6dc14838-5602-4f43-a3e3-2374b4f603eb] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec  6 02:22:42 np0005548731 nova_compute[232433]: 2025-12-06 07:22:42.243 232437 DEBUG nova.compute.manager [req-d2fa3221-03d9-40ae-a408-ce5315067797 req-3fedbfd8-35f7-4d0e-b642-ef1e7c43d4e2 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 6dc14838-5602-4f43-a3e3-2374b4f603eb] Received event network-vif-deleted-21d2e524-a6d2-4fa8-86b4-e2787c969566 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:22:42 np0005548731 nova_compute[232433]: 2025-12-06 07:22:42.243 232437 INFO nova.compute.manager [req-d2fa3221-03d9-40ae-a408-ce5315067797 req-3fedbfd8-35f7-4d0e-b642-ef1e7c43d4e2 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 6dc14838-5602-4f43-a3e3-2374b4f603eb] Neutron deleted interface 21d2e524-a6d2-4fa8-86b4-e2787c969566; detaching it from the instance and deleting it from the info cache#033[00m
Dec  6 02:22:42 np0005548731 nova_compute[232433]: 2025-12-06 07:22:42.243 232437 DEBUG nova.network.neutron [req-d2fa3221-03d9-40ae-a408-ce5315067797 req-3fedbfd8-35f7-4d0e-b642-ef1e7c43d4e2 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 6dc14838-5602-4f43-a3e3-2374b4f603eb] Updating instance_info_cache with network_info: [{"id": "8af7a557-97f6-420e-99b0-83eced102922", "address": "fa:16:3e:f8:41:a3", "network": {"id": "8b1d33d1-6678-4c36-a5fc-301790dcb0d0", "bridge": "br-int", "label": "tempest-TaggedAttachmentsTest-2120777627-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.182", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9536676f60844b6f802518771f02409f", "mtu": null, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8af7a557-97", "ovs_interfaceid": "8af7a557-97f6-420e-99b0-83eced102922", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 02:22:42 np0005548731 nova_compute[232433]: 2025-12-06 07:22:42.296 232437 DEBUG nova.objects.instance [req-d2fa3221-03d9-40ae-a408-ce5315067797 req-3fedbfd8-35f7-4d0e-b642-ef1e7c43d4e2 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lazy-loading 'system_metadata' on Instance uuid 6dc14838-5602-4f43-a3e3-2374b4f603eb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:22:42 np0005548731 nova_compute[232433]: 2025-12-06 07:22:42.319 232437 DEBUG nova.objects.instance [req-d2fa3221-03d9-40ae-a408-ce5315067797 req-3fedbfd8-35f7-4d0e-b642-ef1e7c43d4e2 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lazy-loading 'flavor' on Instance uuid 6dc14838-5602-4f43-a3e3-2374b4f603eb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:22:42 np0005548731 nova_compute[232433]: 2025-12-06 07:22:42.342 232437 DEBUG nova.virt.libvirt.vif [req-d2fa3221-03d9-40ae-a408-ce5315067797 req-3fedbfd8-35f7-4d0e-b642-ef1e7c43d4e2 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T07:21:38Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-device-tagging-server-1149920199',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-device-tagging-server-1149920199',id=92,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLwWYRzhpCEWsRkCaBhzHfhpPxKJasHgXvUlHzaDRDZXID15xokDKbMJM14TkjwZa8JWqEfbHVbjprM+jVuVPSPnfeGy8c3QiNdj4kuOXnNZ8rc8TAxyYOMgQOcRiCAPIw==',key_name='tempest-keypair-1877451409',keypairs=<?>,launch_index=0,launched_at=2025-12-06T07:21:48Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata=<?>,migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='9536676f60844b6f802518771f02409f',ramdisk_id='',reservation_id='r-bxlx3of0',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TaggedAttachmentsTest-1465336039',owner_user_name='tempest-TaggedAttachmentsTest-1465336039-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T07:21:48Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='c2404fce4d1f48779c97d3cbcb6ae594',uuid=6dc14838-5602-4f43-a3e3-2374b4f603eb,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "21d2e524-a6d2-4fa8-86b4-e2787c969566", "address": "fa:16:3e:c9:af:49", "network": {"id": "79a28eee-40b4-43f3-98dc-313c48f19b7c", "bridge": "br-int", "label": "tempest-tagged-attachments-test-net-654887586", "subnets": [{"cidr": "10.10.10.0/24", "dns": [], "gateway": {"address": 
"10.10.10.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.10.10.87", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9536676f60844b6f802518771f02409f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap21d2e524-a6", "ovs_interfaceid": "21d2e524-a6d2-4fa8-86b4-e2787c969566", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec  6 02:22:42 np0005548731 nova_compute[232433]: 2025-12-06 07:22:42.342 232437 DEBUG nova.network.os_vif_util [req-d2fa3221-03d9-40ae-a408-ce5315067797 req-3fedbfd8-35f7-4d0e-b642-ef1e7c43d4e2 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Converting VIF {"id": "21d2e524-a6d2-4fa8-86b4-e2787c969566", "address": "fa:16:3e:c9:af:49", "network": {"id": "79a28eee-40b4-43f3-98dc-313c48f19b7c", "bridge": "br-int", "label": "tempest-tagged-attachments-test-net-654887586", "subnets": [{"cidr": "10.10.10.0/24", "dns": [], "gateway": {"address": "10.10.10.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.10.10.87", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9536676f60844b6f802518771f02409f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap21d2e524-a6", "ovs_interfaceid": "21d2e524-a6d2-4fa8-86b4-e2787c969566", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 02:22:42 np0005548731 nova_compute[232433]: 2025-12-06 07:22:42.343 232437 DEBUG nova.network.os_vif_util [req-d2fa3221-03d9-40ae-a408-ce5315067797 req-3fedbfd8-35f7-4d0e-b642-ef1e7c43d4e2 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c9:af:49,bridge_name='br-int',has_traffic_filtering=True,id=21d2e524-a6d2-4fa8-86b4-e2787c969566,network=Network(79a28eee-40b4-43f3-98dc-313c48f19b7c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap21d2e524-a6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 02:22:42 np0005548731 nova_compute[232433]: 2025-12-06 07:22:42.347 232437 DEBUG nova.virt.libvirt.guest [req-d2fa3221-03d9-40ae-a408-ce5315067797 req-3fedbfd8-35f7-4d0e-b642-ef1e7c43d4e2 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:c9:af:49"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap21d2e524-a6"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Dec  6 02:22:42 np0005548731 nova_compute[232433]: 2025-12-06 07:22:42.350 232437 DEBUG nova.virt.libvirt.guest [req-d2fa3221-03d9-40ae-a408-ce5315067797 req-3fedbfd8-35f7-4d0e-b642-ef1e7c43d4e2 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:c9:af:49"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap21d2e524-a6"/></interface>not found in domain: <domain type='kvm' id='38'>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:  <name>instance-0000005c</name>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:  <uuid>6dc14838-5602-4f43-a3e3-2374b4f603eb</uuid>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:  <metadata>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec  6 02:22:42 np0005548731 nova_compute[232433]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:  <nova:name>tempest-device-tagging-server-1149920199</nova:name>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:  <nova:creationTime>2025-12-06 07:22:41</nova:creationTime>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:  <nova:flavor name="m1.nano">
Dec  6 02:22:42 np0005548731 nova_compute[232433]:    <nova:memory>128</nova:memory>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:    <nova:disk>1</nova:disk>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:    <nova:swap>0</nova:swap>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:    <nova:ephemeral>0</nova:ephemeral>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:    <nova:vcpus>1</nova:vcpus>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:  </nova:flavor>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:  <nova:owner>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:    <nova:user uuid="c2404fce4d1f48779c97d3cbcb6ae594">tempest-TaggedAttachmentsTest-1465336039-project-member</nova:user>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:    <nova:project uuid="9536676f60844b6f802518771f02409f">tempest-TaggedAttachmentsTest-1465336039</nova:project>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:  </nova:owner>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:  <nova:root type="image" uuid="6efab05d-c7cf-4770-a5c3-c806a2739063"/>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:  <nova:ports>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:    <nova:port uuid="8af7a557-97f6-420e-99b0-83eced102922">
Dec  6 02:22:42 np0005548731 nova_compute[232433]:      <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:    </nova:port>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:  </nova:ports>
Dec  6 02:22:42 np0005548731 nova_compute[232433]: </nova:instance>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:  </metadata>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:  <memory unit='KiB'>131072</memory>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:  <currentMemory unit='KiB'>131072</currentMemory>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:  <vcpu placement='static'>1</vcpu>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:  <resource>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:    <partition>/machine</partition>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:  </resource>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:  <sysinfo type='smbios'>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:    <system>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:      <entry name='manufacturer'>RDO</entry>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:      <entry name='product'>OpenStack Compute</entry>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:      <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:      <entry name='serial'>6dc14838-5602-4f43-a3e3-2374b4f603eb</entry>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:      <entry name='uuid'>6dc14838-5602-4f43-a3e3-2374b4f603eb</entry>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:      <entry name='family'>Virtual Machine</entry>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:    </system>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:  </sysinfo>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:  <os>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:    <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:    <boot dev='hd'/>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:    <smbios mode='sysinfo'/>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:  </os>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:  <features>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:    <acpi/>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:    <apic/>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:    <vmcoreinfo state='on'/>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:  </features>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:  <cpu mode='custom' match='exact' check='full'>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:    <model fallback='forbid'>Nehalem</model>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:    <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:    <feature policy='require' name='x2apic'/>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:    <feature policy='require' name='hypervisor'/>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:    <feature policy='require' name='vme'/>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:  </cpu>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:  <clock offset='utc'>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:    <timer name='pit' tickpolicy='delay'/>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:    <timer name='rtc' tickpolicy='catchup'/>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:    <timer name='hpet' present='no'/>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:  </clock>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:  <on_poweroff>destroy</on_poweroff>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:  <on_reboot>restart</on_reboot>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:  <on_crash>destroy</on_crash>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:  <devices>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:    <emulator>/usr/libexec/qemu-kvm</emulator>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:    <disk type='network' device='disk'>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:      <driver name='qemu' type='raw' cache='none'/>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:      <auth username='openstack'>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:        <secret type='ceph' uuid='40a1bae4-cf76-5610-8dab-c75116dfe0bb'/>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:      </auth>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:      <source protocol='rbd' name='vms/6dc14838-5602-4f43-a3e3-2374b4f603eb_disk' index='2'>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:        <host name='192.168.122.100' port='6789'/>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:        <host name='192.168.122.102' port='6789'/>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:        <host name='192.168.122.101' port='6789'/>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:      </source>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:      <target dev='vda' bus='virtio'/>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:      <alias name='virtio-disk0'/>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:    </disk>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:    <disk type='network' device='cdrom'>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:      <driver name='qemu' type='raw' cache='none'/>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:      <auth username='openstack'>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:        <secret type='ceph' uuid='40a1bae4-cf76-5610-8dab-c75116dfe0bb'/>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:      </auth>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:      <source protocol='rbd' name='vms/6dc14838-5602-4f43-a3e3-2374b4f603eb_disk.config' index='1'>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:        <host name='192.168.122.100' port='6789'/>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:        <host name='192.168.122.102' port='6789'/>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:        <host name='192.168.122.101' port='6789'/>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:      </source>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:      <target dev='sda' bus='sata'/>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:      <readonly/>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:      <alias name='sata0-0-0'/>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:    </disk>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:    <controller type='pci' index='0' model='pcie-root'>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:      <alias name='pcie.0'/>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:    </controller>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:    <controller type='pci' index='1' model='pcie-root-port'>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:      <model name='pcie-root-port'/>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:      <target chassis='1' port='0x10'/>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:      <alias name='pci.1'/>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:    </controller>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:    <controller type='pci' index='2' model='pcie-root-port'>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:      <model name='pcie-root-port'/>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:      <target chassis='2' port='0x11'/>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:      <alias name='pci.2'/>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:    </controller>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:    <controller type='pci' index='3' model='pcie-root-port'>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:      <model name='pcie-root-port'/>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:      <target chassis='3' port='0x12'/>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:      <alias name='pci.3'/>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:    </controller>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:    <controller type='pci' index='4' model='pcie-root-port'>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:      <model name='pcie-root-port'/>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:      <target chassis='4' port='0x13'/>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:      <alias name='pci.4'/>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:    </controller>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:    <controller type='pci' index='5' model='pcie-root-port'>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:      <model name='pcie-root-port'/>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:      <target chassis='5' port='0x14'/>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:      <alias name='pci.5'/>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:    </controller>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:    <controller type='pci' index='6' model='pcie-root-port'>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:      <model name='pcie-root-port'/>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:      <target chassis='6' port='0x15'/>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:      <alias name='pci.6'/>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:    </controller>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:    <controller type='pci' index='7' model='pcie-root-port'>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:      <model name='pcie-root-port'/>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:      <target chassis='7' port='0x16'/>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:      <alias name='pci.7'/>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:    </controller>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:    <controller type='pci' index='8' model='pcie-root-port'>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:      <model name='pcie-root-port'/>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:      <target chassis='8' port='0x17'/>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:      <alias name='pci.8'/>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:    </controller>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:    <controller type='pci' index='9' model='pcie-root-port'>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:      <model name='pcie-root-port'/>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:      <target chassis='9' port='0x18'/>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:      <alias name='pci.9'/>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:    </controller>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:    <controller type='pci' index='10' model='pcie-root-port'>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:      <model name='pcie-root-port'/>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:      <target chassis='10' port='0x19'/>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:      <alias name='pci.10'/>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:    </controller>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:    <controller type='pci' index='11' model='pcie-root-port'>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:      <model name='pcie-root-port'/>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:      <target chassis='11' port='0x1a'/>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:      <alias name='pci.11'/>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:    </controller>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:    <controller type='pci' index='12' model='pcie-root-port'>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:      <model name='pcie-root-port'/>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:      <target chassis='12' port='0x1b'/>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:      <alias name='pci.12'/>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:    </controller>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:    <controller type='pci' index='13' model='pcie-root-port'>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:      <model name='pcie-root-port'/>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:      <target chassis='13' port='0x1c'/>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:      <alias name='pci.13'/>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:    </controller>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:    <controller type='pci' index='14' model='pcie-root-port'>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:      <model name='pcie-root-port'/>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:      <target chassis='14' port='0x1d'/>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:      <alias name='pci.14'/>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:    </controller>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:    <controller type='pci' index='15' model='pcie-root-port'>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:      <model name='pcie-root-port'/>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:      <target chassis='15' port='0x1e'/>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:      <alias name='pci.15'/>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:    </controller>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:    <controller type='pci' index='16' model='pcie-root-port'>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:      <model name='pcie-root-port'/>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:      <target chassis='16' port='0x1f'/>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:      <alias name='pci.16'/>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:    </controller>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:    <controller type='pci' index='17' model='pcie-root-port'>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:      <model name='pcie-root-port'/>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:      <target chassis='17' port='0x20'/>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:      <alias name='pci.17'/>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:    </controller>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:    <controller type='pci' index='18' model='pcie-root-port'>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:      <model name='pcie-root-port'/>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:      <target chassis='18' port='0x21'/>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:      <alias name='pci.18'/>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:    </controller>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:    <controller type='pci' index='19' model='pcie-root-port'>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:      <model name='pcie-root-port'/>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:      <target chassis='19' port='0x22'/>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:      <alias name='pci.19'/>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:    </controller>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:    <controller type='pci' index='20' model='pcie-root-port'>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:      <model name='pcie-root-port'/>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:      <target chassis='20' port='0x23'/>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:      <alias name='pci.20'/>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:    </controller>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:    <controller type='pci' index='21' model='pcie-root-port'>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:      <model name='pcie-root-port'/>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:      <target chassis='21' port='0x24'/>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:      <alias name='pci.21'/>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:    </controller>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:    <controller type='pci' index='22' model='pcie-root-port'>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:      <model name='pcie-root-port'/>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:      <target chassis='22' port='0x25'/>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:      <alias name='pci.22'/>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:    </controller>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:    <controller type='pci' index='23' model='pcie-root-port'>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:      <model name='pcie-root-port'/>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:      <target chassis='23' port='0x26'/>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:      <alias name='pci.23'/>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:    </controller>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:    <controller type='pci' index='24' model='pcie-root-port'>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:      <model name='pcie-root-port'/>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:      <target chassis='24' port='0x27'/>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:      <alias name='pci.24'/>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:    </controller>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:    <controller type='pci' index='25' model='pcie-root-port'>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:      <model name='pcie-root-port'/>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:      <target chassis='25' port='0x28'/>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:      <alias name='pci.25'/>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:    </controller>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:    <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:      <model name='pcie-pci-bridge'/>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:      <alias name='pci.26'/>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:    </controller>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:    <controller type='usb' index='0' model='piix3-uhci'>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:      <alias name='usb'/>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:      <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:    </controller>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:    <controller type='sata' index='0'>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:      <alias name='ide'/>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:    </controller>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:    <interface type='ethernet'>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:      <mac address='fa:16:3e:f8:41:a3'/>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:      <target dev='tap8af7a557-97'/>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:      <model type='virtio'/>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:      <driver name='vhost' rx_queue_size='512'/>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:      <mtu size='1442'/>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:      <alias name='net0'/>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:    </interface>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:    <serial type='pty'>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:      <source path='/dev/pts/0'/>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:      <log file='/var/lib/nova/instances/6dc14838-5602-4f43-a3e3-2374b4f603eb/console.log' append='off'/>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:      <target type='isa-serial' port='0'>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:        <model name='isa-serial'/>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:      </target>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:      <alias name='serial0'/>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:    </serial>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:    <console type='pty' tty='/dev/pts/0'>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:      <source path='/dev/pts/0'/>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:      <log file='/var/lib/nova/instances/6dc14838-5602-4f43-a3e3-2374b4f603eb/console.log' append='off'/>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:      <target type='serial' port='0'/>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:      <alias name='serial0'/>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:    </console>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:    <input type='tablet' bus='usb'>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:      <alias name='input0'/>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:      <address type='usb' bus='0' port='1'/>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:    </input>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:    <input type='mouse' bus='ps2'>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:      <alias name='input1'/>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:    </input>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:    <input type='keyboard' bus='ps2'>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:      <alias name='input2'/>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:    </input>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:    <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:      <listen type='address' address='::0'/>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:    </graphics>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:    <audio id='1' type='none'/>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:    <video>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:      <model type='virtio' heads='1' primary='yes'/>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:      <alias name='video0'/>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:    </video>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:    <watchdog model='itco' action='reset'>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:      <alias name='watchdog0'/>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:    </watchdog>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:    <memballoon model='virtio'>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:      <stats period='10'/>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:      <alias name='balloon0'/>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:    </memballoon>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:    <rng model='virtio'>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:      <backend model='random'>/dev/urandom</backend>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:      <alias name='rng0'/>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:    </rng>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:  </devices>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:  <seclabel type='dynamic' model='selinux' relabel='yes'>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:    <label>system_u:system_r:svirt_t:s0:c276,c555</label>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:    <imagelabel>system_u:object_r:svirt_image_t:s0:c276,c555</imagelabel>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:  </seclabel>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:  <seclabel type='dynamic' model='dac' relabel='yes'>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:    <label>+107:+107</label>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:    <imagelabel>+107:+107</imagelabel>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:  </seclabel>
Dec  6 02:22:42 np0005548731 nova_compute[232433]: </domain>
Dec  6 02:22:42 np0005548731 nova_compute[232433]: get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282#033[00m
Dec  6 02:22:42 np0005548731 nova_compute[232433]: 2025-12-06 07:22:42.351 232437 DEBUG nova.virt.libvirt.guest [req-d2fa3221-03d9-40ae-a408-ce5315067797 req-3fedbfd8-35f7-4d0e-b642-ef1e7c43d4e2 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:c9:af:49"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap21d2e524-a6"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Dec  6 02:22:42 np0005548731 nova_compute[232433]: 2025-12-06 07:22:42.354 232437 DEBUG nova.virt.libvirt.guest [req-d2fa3221-03d9-40ae-a408-ce5315067797 req-3fedbfd8-35f7-4d0e-b642-ef1e7c43d4e2 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:c9:af:49"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap21d2e524-a6"/></interface>not found in domain: <domain type='kvm' id='38'>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:  <name>instance-0000005c</name>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:  <uuid>6dc14838-5602-4f43-a3e3-2374b4f603eb</uuid>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:  <metadata>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec  6 02:22:42 np0005548731 nova_compute[232433]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:  <nova:name>tempest-device-tagging-server-1149920199</nova:name>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:  <nova:creationTime>2025-12-06 07:22:41</nova:creationTime>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:  <nova:flavor name="m1.nano">
Dec  6 02:22:42 np0005548731 nova_compute[232433]:    <nova:memory>128</nova:memory>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:    <nova:disk>1</nova:disk>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:    <nova:swap>0</nova:swap>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:    <nova:ephemeral>0</nova:ephemeral>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:    <nova:vcpus>1</nova:vcpus>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:  </nova:flavor>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:  <nova:owner>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:    <nova:user uuid="c2404fce4d1f48779c97d3cbcb6ae594">tempest-TaggedAttachmentsTest-1465336039-project-member</nova:user>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:    <nova:project uuid="9536676f60844b6f802518771f02409f">tempest-TaggedAttachmentsTest-1465336039</nova:project>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:  </nova:owner>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:  <nova:root type="image" uuid="6efab05d-c7cf-4770-a5c3-c806a2739063"/>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:  <nova:ports>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:    <nova:port uuid="8af7a557-97f6-420e-99b0-83eced102922">
Dec  6 02:22:42 np0005548731 nova_compute[232433]:      <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:    </nova:port>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:  </nova:ports>
Dec  6 02:22:42 np0005548731 nova_compute[232433]: </nova:instance>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:  </metadata>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:  <memory unit='KiB'>131072</memory>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:  <currentMemory unit='KiB'>131072</currentMemory>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:  <vcpu placement='static'>1</vcpu>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:  <resource>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:    <partition>/machine</partition>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:  </resource>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:  <sysinfo type='smbios'>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:    <system>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:      <entry name='manufacturer'>RDO</entry>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:      <entry name='product'>OpenStack Compute</entry>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:      <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:      <entry name='serial'>6dc14838-5602-4f43-a3e3-2374b4f603eb</entry>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:      <entry name='uuid'>6dc14838-5602-4f43-a3e3-2374b4f603eb</entry>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:      <entry name='family'>Virtual Machine</entry>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:    </system>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:  </sysinfo>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:  <os>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:    <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:    <boot dev='hd'/>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:    <smbios mode='sysinfo'/>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:  </os>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:  <features>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:    <acpi/>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:    <apic/>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:    <vmcoreinfo state='on'/>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:  </features>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:  <cpu mode='custom' match='exact' check='full'>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:    <model fallback='forbid'>Nehalem</model>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:    <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:    <feature policy='require' name='x2apic'/>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:    <feature policy='require' name='hypervisor'/>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:    <feature policy='require' name='vme'/>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:  </cpu>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:  <clock offset='utc'>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:    <timer name='pit' tickpolicy='delay'/>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:    <timer name='rtc' tickpolicy='catchup'/>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:    <timer name='hpet' present='no'/>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:  </clock>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:  <on_poweroff>destroy</on_poweroff>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:  <on_reboot>restart</on_reboot>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:  <on_crash>destroy</on_crash>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:  <devices>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:    <emulator>/usr/libexec/qemu-kvm</emulator>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:    <disk type='network' device='disk'>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:      <driver name='qemu' type='raw' cache='none'/>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:      <auth username='openstack'>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:        <secret type='ceph' uuid='40a1bae4-cf76-5610-8dab-c75116dfe0bb'/>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:      </auth>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:      <source protocol='rbd' name='vms/6dc14838-5602-4f43-a3e3-2374b4f603eb_disk' index='2'>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:        <host name='192.168.122.100' port='6789'/>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:        <host name='192.168.122.102' port='6789'/>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:        <host name='192.168.122.101' port='6789'/>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:      </source>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:      <target dev='vda' bus='virtio'/>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:      <alias name='virtio-disk0'/>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:    </disk>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:    <disk type='network' device='cdrom'>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:      <driver name='qemu' type='raw' cache='none'/>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:      <auth username='openstack'>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:        <secret type='ceph' uuid='40a1bae4-cf76-5610-8dab-c75116dfe0bb'/>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:      </auth>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:      <source protocol='rbd' name='vms/6dc14838-5602-4f43-a3e3-2374b4f603eb_disk.config' index='1'>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:        <host name='192.168.122.100' port='6789'/>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:        <host name='192.168.122.102' port='6789'/>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:        <host name='192.168.122.101' port='6789'/>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:      </source>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:      <target dev='sda' bus='sata'/>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:      <readonly/>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:      <alias name='sata0-0-0'/>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:    </disk>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:    <controller type='pci' index='0' model='pcie-root'>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:      <alias name='pcie.0'/>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:    </controller>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:    <controller type='pci' index='1' model='pcie-root-port'>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:      <model name='pcie-root-port'/>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:      <target chassis='1' port='0x10'/>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:      <alias name='pci.1'/>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:    </controller>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:    <controller type='pci' index='2' model='pcie-root-port'>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:      <model name='pcie-root-port'/>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:      <target chassis='2' port='0x11'/>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:      <alias name='pci.2'/>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:    </controller>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:    <controller type='pci' index='3' model='pcie-root-port'>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:      <model name='pcie-root-port'/>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:      <target chassis='3' port='0x12'/>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:      <alias name='pci.3'/>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:    </controller>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:    <controller type='pci' index='4' model='pcie-root-port'>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:      <model name='pcie-root-port'/>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:      <target chassis='4' port='0x13'/>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:      <alias name='pci.4'/>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:    </controller>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:    <controller type='pci' index='5' model='pcie-root-port'>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:      <model name='pcie-root-port'/>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:      <target chassis='5' port='0x14'/>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:      <alias name='pci.5'/>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:    </controller>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:    <controller type='pci' index='6' model='pcie-root-port'>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:      <model name='pcie-root-port'/>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:      <target chassis='6' port='0x15'/>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:      <alias name='pci.6'/>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:    </controller>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:    <controller type='pci' index='7' model='pcie-root-port'>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:      <model name='pcie-root-port'/>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:      <target chassis='7' port='0x16'/>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:      <alias name='pci.7'/>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:    </controller>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:    <controller type='pci' index='8' model='pcie-root-port'>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:      <model name='pcie-root-port'/>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:      <target chassis='8' port='0x17'/>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:      <alias name='pci.8'/>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:    </controller>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:    <controller type='pci' index='9' model='pcie-root-port'>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:      <model name='pcie-root-port'/>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:      <target chassis='9' port='0x18'/>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:      <alias name='pci.9'/>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:    </controller>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:    <controller type='pci' index='10' model='pcie-root-port'>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:      <model name='pcie-root-port'/>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:      <target chassis='10' port='0x19'/>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:      <alias name='pci.10'/>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:    </controller>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:    <controller type='pci' index='11' model='pcie-root-port'>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:      <model name='pcie-root-port'/>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:      <target chassis='11' port='0x1a'/>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:      <alias name='pci.11'/>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:    </controller>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:    <controller type='pci' index='12' model='pcie-root-port'>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:      <model name='pcie-root-port'/>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:      <target chassis='12' port='0x1b'/>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:      <alias name='pci.12'/>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:    </controller>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:    <controller type='pci' index='13' model='pcie-root-port'>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:      <model name='pcie-root-port'/>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:      <target chassis='13' port='0x1c'/>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:      <alias name='pci.13'/>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:    </controller>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:    <controller type='pci' index='14' model='pcie-root-port'>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:      <model name='pcie-root-port'/>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:      <target chassis='14' port='0x1d'/>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:      <alias name='pci.14'/>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:    </controller>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:    <controller type='pci' index='15' model='pcie-root-port'>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:      <model name='pcie-root-port'/>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:      <target chassis='15' port='0x1e'/>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:      <alias name='pci.15'/>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:    </controller>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:    <controller type='pci' index='16' model='pcie-root-port'>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:      <model name='pcie-root-port'/>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:      <target chassis='16' port='0x1f'/>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:      <alias name='pci.16'/>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:    </controller>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:    <controller type='pci' index='17' model='pcie-root-port'>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:      <model name='pcie-root-port'/>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:      <target chassis='17' port='0x20'/>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:      <alias name='pci.17'/>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:    </controller>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:    <controller type='pci' index='18' model='pcie-root-port'>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:      <model name='pcie-root-port'/>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:      <target chassis='18' port='0x21'/>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:      <alias name='pci.18'/>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:    </controller>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:    <controller type='pci' index='19' model='pcie-root-port'>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:      <model name='pcie-root-port'/>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:      <target chassis='19' port='0x22'/>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:      <alias name='pci.19'/>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:    </controller>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:    <controller type='pci' index='20' model='pcie-root-port'>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:      <model name='pcie-root-port'/>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:      <target chassis='20' port='0x23'/>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:      <alias name='pci.20'/>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:    </controller>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:    <controller type='pci' index='21' model='pcie-root-port'>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:      <model name='pcie-root-port'/>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:      <target chassis='21' port='0x24'/>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:      <alias name='pci.21'/>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:    </controller>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:    <controller type='pci' index='22' model='pcie-root-port'>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:      <model name='pcie-root-port'/>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:      <target chassis='22' port='0x25'/>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:      <alias name='pci.22'/>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:    </controller>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:    <controller type='pci' index='23' model='pcie-root-port'>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:      <model name='pcie-root-port'/>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:      <target chassis='23' port='0x26'/>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:      <alias name='pci.23'/>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:    </controller>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:    <controller type='pci' index='24' model='pcie-root-port'>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:      <model name='pcie-root-port'/>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:      <target chassis='24' port='0x27'/>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:      <alias name='pci.24'/>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:    </controller>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:    <controller type='pci' index='25' model='pcie-root-port'>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:      <model name='pcie-root-port'/>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:      <target chassis='25' port='0x28'/>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:      <alias name='pci.25'/>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:    </controller>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:    <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:      <model name='pcie-pci-bridge'/>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:      <alias name='pci.26'/>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:    </controller>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:    <controller type='usb' index='0' model='piix3-uhci'>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:      <alias name='usb'/>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:      <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:    </controller>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:    <controller type='sata' index='0'>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:      <alias name='ide'/>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:    </controller>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:    <interface type='ethernet'>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:      <mac address='fa:16:3e:f8:41:a3'/>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:      <target dev='tap8af7a557-97'/>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:      <model type='virtio'/>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:      <driver name='vhost' rx_queue_size='512'/>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:      <mtu size='1442'/>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:      <alias name='net0'/>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:    </interface>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:    <serial type='pty'>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:      <source path='/dev/pts/0'/>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:      <log file='/var/lib/nova/instances/6dc14838-5602-4f43-a3e3-2374b4f603eb/console.log' append='off'/>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:      <target type='isa-serial' port='0'>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:        <model name='isa-serial'/>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:      </target>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:      <alias name='serial0'/>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:    </serial>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:    <console type='pty' tty='/dev/pts/0'>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:      <source path='/dev/pts/0'/>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:      <log file='/var/lib/nova/instances/6dc14838-5602-4f43-a3e3-2374b4f603eb/console.log' append='off'/>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:      <target type='serial' port='0'/>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:      <alias name='serial0'/>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:    </console>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:    <input type='tablet' bus='usb'>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:      <alias name='input0'/>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:      <address type='usb' bus='0' port='1'/>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:    </input>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:    <input type='mouse' bus='ps2'>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:      <alias name='input1'/>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:    </input>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:    <input type='keyboard' bus='ps2'>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:      <alias name='input2'/>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:    </input>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:    <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:      <listen type='address' address='::0'/>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:    </graphics>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:    <audio id='1' type='none'/>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:    <video>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:      <model type='virtio' heads='1' primary='yes'/>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:      <alias name='video0'/>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:    </video>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:    <watchdog model='itco' action='reset'>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:      <alias name='watchdog0'/>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:    </watchdog>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:    <memballoon model='virtio'>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:      <stats period='10'/>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:      <alias name='balloon0'/>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:    </memballoon>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:    <rng model='virtio'>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:      <backend model='random'>/dev/urandom</backend>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:      <alias name='rng0'/>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:    </rng>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:  </devices>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:  <seclabel type='dynamic' model='selinux' relabel='yes'>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:    <label>system_u:system_r:svirt_t:s0:c276,c555</label>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:    <imagelabel>system_u:object_r:svirt_image_t:s0:c276,c555</imagelabel>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:  </seclabel>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:  <seclabel type='dynamic' model='dac' relabel='yes'>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:    <label>+107:+107</label>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:    <imagelabel>+107:+107</imagelabel>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:  </seclabel>
Dec  6 02:22:42 np0005548731 nova_compute[232433]: </domain>
Dec  6 02:22:42 np0005548731 nova_compute[232433]: get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282#033[00m
Dec  6 02:22:42 np0005548731 nova_compute[232433]: 2025-12-06 07:22:42.355 232437 WARNING nova.virt.libvirt.driver [req-d2fa3221-03d9-40ae-a408-ce5315067797 req-3fedbfd8-35f7-4d0e-b642-ef1e7c43d4e2 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 6dc14838-5602-4f43-a3e3-2374b4f603eb] Detaching interface fa:16:3e:c9:af:49 failed because the device is no longer found on the guest.: nova.exception.DeviceNotFound: Device 'tap21d2e524-a6' not found.#033[00m
Dec  6 02:22:42 np0005548731 nova_compute[232433]: 2025-12-06 07:22:42.356 232437 DEBUG nova.virt.libvirt.vif [req-d2fa3221-03d9-40ae-a408-ce5315067797 req-3fedbfd8-35f7-4d0e-b642-ef1e7c43d4e2 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T07:21:38Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-device-tagging-server-1149920199',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-device-tagging-server-1149920199',id=92,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLwWYRzhpCEWsRkCaBhzHfhpPxKJasHgXvUlHzaDRDZXID15xokDKbMJM14TkjwZa8JWqEfbHVbjprM+jVuVPSPnfeGy8c3QiNdj4kuOXnNZ8rc8TAxyYOMgQOcRiCAPIw==',key_name='tempest-keypair-1877451409',keypairs=<?>,launch_index=0,launched_at=2025-12-06T07:21:48Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata=<?>,migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='9536676f60844b6f802518771f02409f',ramdisk_id='',reservation_id='r-bxlx3of0',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TaggedAttachmentsTest-1465336039',owner_user_name='tempest-TaggedAttachmentsTest-1465336039-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T07:21:48Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='c2404fce4d1f48779c97d3cbcb6ae594',uuid=6dc14838-5602-4f43-a3e3-2374b4f603eb,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "21d2e524-a6d2-4fa8-86b4-e2787c969566", "address": "fa:16:3e:c9:af:49", "network": {"id": "79a28eee-40b4-43f3-98dc-313c48f19b7c", "bridge": "br-int", "label": "tempest-tagged-attachments-test-net-654887586", "subnets": [{"cidr": "10.10.10.0/24", "dns": [], "gateway": {"address": 
"10.10.10.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.10.10.87", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9536676f60844b6f802518771f02409f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap21d2e524-a6", "ovs_interfaceid": "21d2e524-a6d2-4fa8-86b4-e2787c969566", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec  6 02:22:42 np0005548731 nova_compute[232433]: 2025-12-06 07:22:42.356 232437 DEBUG nova.network.os_vif_util [req-d2fa3221-03d9-40ae-a408-ce5315067797 req-3fedbfd8-35f7-4d0e-b642-ef1e7c43d4e2 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Converting VIF {"id": "21d2e524-a6d2-4fa8-86b4-e2787c969566", "address": "fa:16:3e:c9:af:49", "network": {"id": "79a28eee-40b4-43f3-98dc-313c48f19b7c", "bridge": "br-int", "label": "tempest-tagged-attachments-test-net-654887586", "subnets": [{"cidr": "10.10.10.0/24", "dns": [], "gateway": {"address": "10.10.10.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.10.10.87", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9536676f60844b6f802518771f02409f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap21d2e524-a6", "ovs_interfaceid": "21d2e524-a6d2-4fa8-86b4-e2787c969566", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 02:22:42 np0005548731 nova_compute[232433]: 2025-12-06 07:22:42.357 232437 DEBUG nova.network.os_vif_util [req-d2fa3221-03d9-40ae-a408-ce5315067797 req-3fedbfd8-35f7-4d0e-b642-ef1e7c43d4e2 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c9:af:49,bridge_name='br-int',has_traffic_filtering=True,id=21d2e524-a6d2-4fa8-86b4-e2787c969566,network=Network(79a28eee-40b4-43f3-98dc-313c48f19b7c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap21d2e524-a6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 02:22:42 np0005548731 nova_compute[232433]: 2025-12-06 07:22:42.357 232437 DEBUG os_vif [req-d2fa3221-03d9-40ae-a408-ce5315067797 req-3fedbfd8-35f7-4d0e-b642-ef1e7c43d4e2 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c9:af:49,bridge_name='br-int',has_traffic_filtering=True,id=21d2e524-a6d2-4fa8-86b4-e2787c969566,network=Network(79a28eee-40b4-43f3-98dc-313c48f19b7c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap21d2e524-a6') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec  6 02:22:42 np0005548731 nova_compute[232433]: 2025-12-06 07:22:42.359 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:22:42 np0005548731 nova_compute[232433]: 2025-12-06 07:22:42.359 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap21d2e524-a6, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:22:42 np0005548731 nova_compute[232433]: 2025-12-06 07:22:42.359 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  6 02:22:42 np0005548731 nova_compute[232433]: 2025-12-06 07:22:42.362 232437 INFO os_vif [req-d2fa3221-03d9-40ae-a408-ce5315067797 req-3fedbfd8-35f7-4d0e-b642-ef1e7c43d4e2 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c9:af:49,bridge_name='br-int',has_traffic_filtering=True,id=21d2e524-a6d2-4fa8-86b4-e2787c969566,network=Network(79a28eee-40b4-43f3-98dc-313c48f19b7c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap21d2e524-a6')#033[00m
Dec  6 02:22:42 np0005548731 nova_compute[232433]: 2025-12-06 07:22:42.363 232437 DEBUG nova.virt.libvirt.guest [req-d2fa3221-03d9-40ae-a408-ce5315067797 req-3fedbfd8-35f7-4d0e-b642-ef1e7c43d4e2 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec  6 02:22:42 np0005548731 nova_compute[232433]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:  <nova:name>tempest-device-tagging-server-1149920199</nova:name>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:  <nova:creationTime>2025-12-06 07:22:42</nova:creationTime>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:  <nova:flavor name="m1.nano">
Dec  6 02:22:42 np0005548731 nova_compute[232433]:    <nova:memory>128</nova:memory>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:    <nova:disk>1</nova:disk>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:    <nova:swap>0</nova:swap>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:    <nova:ephemeral>0</nova:ephemeral>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:    <nova:vcpus>1</nova:vcpus>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:  </nova:flavor>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:  <nova:owner>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:    <nova:user uuid="c2404fce4d1f48779c97d3cbcb6ae594">tempest-TaggedAttachmentsTest-1465336039-project-member</nova:user>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:    <nova:project uuid="9536676f60844b6f802518771f02409f">tempest-TaggedAttachmentsTest-1465336039</nova:project>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:  </nova:owner>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:  <nova:root type="image" uuid="6efab05d-c7cf-4770-a5c3-c806a2739063"/>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:  <nova:ports>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:    <nova:port uuid="8af7a557-97f6-420e-99b0-83eced102922">
Dec  6 02:22:42 np0005548731 nova_compute[232433]:      <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:    </nova:port>
Dec  6 02:22:42 np0005548731 nova_compute[232433]:  </nova:ports>
Dec  6 02:22:42 np0005548731 nova_compute[232433]: </nova:instance>
Dec  6 02:22:42 np0005548731 nova_compute[232433]: set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359#033[00m
Dec  6 02:22:42 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e258 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:22:42 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:22:42 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:22:42 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:22:42.910 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:22:43 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:22:43 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:22:43 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:22:43.532 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:22:43 np0005548731 nova_compute[232433]: 2025-12-06 07:22:43.539 232437 INFO nova.network.neutron [None req-4c3a8c32-3d39-45ec-8642-2844328997b4 c2404fce4d1f48779c97d3cbcb6ae594 9536676f60844b6f802518771f02409f - - default default] [instance: 6dc14838-5602-4f43-a3e3-2374b4f603eb] Port 21d2e524-a6d2-4fa8-86b4-e2787c969566 from network info_cache is no longer associated with instance in Neutron. Removing from network info_cache.#033[00m
Dec  6 02:22:43 np0005548731 nova_compute[232433]: 2025-12-06 07:22:43.540 232437 DEBUG nova.network.neutron [None req-4c3a8c32-3d39-45ec-8642-2844328997b4 c2404fce4d1f48779c97d3cbcb6ae594 9536676f60844b6f802518771f02409f - - default default] [instance: 6dc14838-5602-4f43-a3e3-2374b4f603eb] Updating instance_info_cache with network_info: [{"id": "8af7a557-97f6-420e-99b0-83eced102922", "address": "fa:16:3e:f8:41:a3", "network": {"id": "8b1d33d1-6678-4c36-a5fc-301790dcb0d0", "bridge": "br-int", "label": "tempest-TaggedAttachmentsTest-2120777627-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.182", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9536676f60844b6f802518771f02409f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8af7a557-97", "ovs_interfaceid": "8af7a557-97f6-420e-99b0-83eced102922", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 02:22:43 np0005548731 nova_compute[232433]: 2025-12-06 07:22:43.563 232437 DEBUG oslo_concurrency.lockutils [None req-4c3a8c32-3d39-45ec-8642-2844328997b4 c2404fce4d1f48779c97d3cbcb6ae594 9536676f60844b6f802518771f02409f - - default default] Releasing lock "refresh_cache-6dc14838-5602-4f43-a3e3-2374b4f603eb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 02:22:43 np0005548731 nova_compute[232433]: 2025-12-06 07:22:43.571 232437 DEBUG nova.compute.manager [req-4e431fec-d3fe-4f1a-82d1-6c8b675ddc6a req-4de65a05-9fcb-42ad-82ee-b596fdcb2a8f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 6dc14838-5602-4f43-a3e3-2374b4f603eb] Received event network-vif-plugged-21d2e524-a6d2-4fa8-86b4-e2787c969566 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:22:43 np0005548731 nova_compute[232433]: 2025-12-06 07:22:43.571 232437 DEBUG oslo_concurrency.lockutils [req-4e431fec-d3fe-4f1a-82d1-6c8b675ddc6a req-4de65a05-9fcb-42ad-82ee-b596fdcb2a8f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "6dc14838-5602-4f43-a3e3-2374b4f603eb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:22:43 np0005548731 nova_compute[232433]: 2025-12-06 07:22:43.571 232437 DEBUG oslo_concurrency.lockutils [req-4e431fec-d3fe-4f1a-82d1-6c8b675ddc6a req-4de65a05-9fcb-42ad-82ee-b596fdcb2a8f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "6dc14838-5602-4f43-a3e3-2374b4f603eb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:22:43 np0005548731 nova_compute[232433]: 2025-12-06 07:22:43.571 232437 DEBUG oslo_concurrency.lockutils [req-4e431fec-d3fe-4f1a-82d1-6c8b675ddc6a req-4de65a05-9fcb-42ad-82ee-b596fdcb2a8f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "6dc14838-5602-4f43-a3e3-2374b4f603eb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:22:43 np0005548731 nova_compute[232433]: 2025-12-06 07:22:43.571 232437 DEBUG nova.compute.manager [req-4e431fec-d3fe-4f1a-82d1-6c8b675ddc6a req-4de65a05-9fcb-42ad-82ee-b596fdcb2a8f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 6dc14838-5602-4f43-a3e3-2374b4f603eb] No waiting events found dispatching network-vif-plugged-21d2e524-a6d2-4fa8-86b4-e2787c969566 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 02:22:43 np0005548731 nova_compute[232433]: 2025-12-06 07:22:43.572 232437 WARNING nova.compute.manager [req-4e431fec-d3fe-4f1a-82d1-6c8b675ddc6a req-4de65a05-9fcb-42ad-82ee-b596fdcb2a8f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 6dc14838-5602-4f43-a3e3-2374b4f603eb] Received unexpected event network-vif-plugged-21d2e524-a6d2-4fa8-86b4-e2787c969566 for instance with vm_state active and task_state None.#033[00m
Dec  6 02:22:43 np0005548731 nova_compute[232433]: 2025-12-06 07:22:43.590 232437 DEBUG oslo_concurrency.lockutils [None req-4c3a8c32-3d39-45ec-8642-2844328997b4 c2404fce4d1f48779c97d3cbcb6ae594 9536676f60844b6f802518771f02409f - - default default] Lock "interface-6dc14838-5602-4f43-a3e3-2374b4f603eb-21d2e524-a6d2-4fa8-86b4-e2787c969566" "released" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: held 2.533s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:22:44 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:22:44 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000025s ======
Dec  6 02:22:44 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:22:44.912 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Dec  6 02:22:45 np0005548731 nova_compute[232433]: 2025-12-06 07:22:45.215 232437 DEBUG oslo_concurrency.lockutils [None req-b76f8e20-aec2-46d5-b347-8226a15f056a c2404fce4d1f48779c97d3cbcb6ae594 9536676f60844b6f802518771f02409f - - default default] Acquiring lock "6dc14838-5602-4f43-a3e3-2374b4f603eb" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:22:45 np0005548731 nova_compute[232433]: 2025-12-06 07:22:45.216 232437 DEBUG oslo_concurrency.lockutils [None req-b76f8e20-aec2-46d5-b347-8226a15f056a c2404fce4d1f48779c97d3cbcb6ae594 9536676f60844b6f802518771f02409f - - default default] Lock "6dc14838-5602-4f43-a3e3-2374b4f603eb" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:22:45 np0005548731 nova_compute[232433]: 2025-12-06 07:22:45.217 232437 DEBUG oslo_concurrency.lockutils [None req-b76f8e20-aec2-46d5-b347-8226a15f056a c2404fce4d1f48779c97d3cbcb6ae594 9536676f60844b6f802518771f02409f - - default default] Acquiring lock "6dc14838-5602-4f43-a3e3-2374b4f603eb-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:22:45 np0005548731 nova_compute[232433]: 2025-12-06 07:22:45.217 232437 DEBUG oslo_concurrency.lockutils [None req-b76f8e20-aec2-46d5-b347-8226a15f056a c2404fce4d1f48779c97d3cbcb6ae594 9536676f60844b6f802518771f02409f - - default default] Lock "6dc14838-5602-4f43-a3e3-2374b4f603eb-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:22:45 np0005548731 nova_compute[232433]: 2025-12-06 07:22:45.217 232437 DEBUG oslo_concurrency.lockutils [None req-b76f8e20-aec2-46d5-b347-8226a15f056a c2404fce4d1f48779c97d3cbcb6ae594 9536676f60844b6f802518771f02409f - - default default] Lock "6dc14838-5602-4f43-a3e3-2374b4f603eb-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:22:45 np0005548731 nova_compute[232433]: 2025-12-06 07:22:45.218 232437 INFO nova.compute.manager [None req-b76f8e20-aec2-46d5-b347-8226a15f056a c2404fce4d1f48779c97d3cbcb6ae594 9536676f60844b6f802518771f02409f - - default default] [instance: 6dc14838-5602-4f43-a3e3-2374b4f603eb] Terminating instance#033[00m
Dec  6 02:22:45 np0005548731 nova_compute[232433]: 2025-12-06 07:22:45.219 232437 DEBUG nova.compute.manager [None req-b76f8e20-aec2-46d5-b347-8226a15f056a c2404fce4d1f48779c97d3cbcb6ae594 9536676f60844b6f802518771f02409f - - default default] [instance: 6dc14838-5602-4f43-a3e3-2374b4f603eb] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec  6 02:22:45 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:22:45 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:22:45 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:22:45.534 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:22:45 np0005548731 kernel: tap8af7a557-97 (unregistering): left promiscuous mode
Dec  6 02:22:45 np0005548731 NetworkManager[49182]: <info>  [1765005765.9790] device (tap8af7a557-97): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec  6 02:22:45 np0005548731 ovn_controller[133927]: 2025-12-06T07:22:45Z|00362|binding|INFO|Releasing lport 8af7a557-97f6-420e-99b0-83eced102922 from this chassis (sb_readonly=0)
Dec  6 02:22:45 np0005548731 ovn_controller[133927]: 2025-12-06T07:22:45Z|00363|binding|INFO|Setting lport 8af7a557-97f6-420e-99b0-83eced102922 down in Southbound
Dec  6 02:22:45 np0005548731 nova_compute[232433]: 2025-12-06 07:22:45.986 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:22:45 np0005548731 ovn_controller[133927]: 2025-12-06T07:22:45Z|00364|binding|INFO|Removing iface tap8af7a557-97 ovn-installed in OVS
Dec  6 02:22:45 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:22:45.992 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f8:41:a3 10.100.0.10'], port_security=['fa:16:3e:f8:41:a3 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '6dc14838-5602-4f43-a3e3-2374b4f603eb', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8b1d33d1-6678-4c36-a5fc-301790dcb0d0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9536676f60844b6f802518771f02409f', 'neutron:revision_number': '4', 'neutron:security_group_ids': '4e8943e4-be94-466a-916f-cd8e61b4a537', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com', 'neutron:port_fip': '192.168.122.182'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c9df0203-2150-4578-816a-9c2f6170e755, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], logical_port=8af7a557-97f6-420e-99b0-83eced102922) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 02:22:45 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:22:45.993 143965 INFO neutron.agent.ovn.metadata.agent [-] Port 8af7a557-97f6-420e-99b0-83eced102922 in datapath 8b1d33d1-6678-4c36-a5fc-301790dcb0d0 unbound from our chassis#033[00m
Dec  6 02:22:45 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:22:45.994 143965 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 8b1d33d1-6678-4c36-a5fc-301790dcb0d0, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec  6 02:22:45 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:22:45.995 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[756f331c-453a-4e57-a3b3-67dbca5908a4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:22:45 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:22:45.996 143965 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-8b1d33d1-6678-4c36-a5fc-301790dcb0d0 namespace which is not needed anymore#033[00m
Dec  6 02:22:46 np0005548731 nova_compute[232433]: 2025-12-06 07:22:46.008 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:22:46 np0005548731 systemd[1]: machine-qemu\x2d38\x2dinstance\x2d0000005c.scope: Deactivated successfully.
Dec  6 02:22:46 np0005548731 systemd[1]: machine-qemu\x2d38\x2dinstance\x2d0000005c.scope: Consumed 16.349s CPU time.
Dec  6 02:22:46 np0005548731 systemd-machined[195355]: Machine qemu-38-instance-0000005c terminated.
Dec  6 02:22:46 np0005548731 neutron-haproxy-ovnmeta-8b1d33d1-6678-4c36-a5fc-301790dcb0d0[269236]: [NOTICE]   (269240) : haproxy version is 2.8.14-c23fe91
Dec  6 02:22:46 np0005548731 neutron-haproxy-ovnmeta-8b1d33d1-6678-4c36-a5fc-301790dcb0d0[269236]: [NOTICE]   (269240) : path to executable is /usr/sbin/haproxy
Dec  6 02:22:46 np0005548731 neutron-haproxy-ovnmeta-8b1d33d1-6678-4c36-a5fc-301790dcb0d0[269236]: [WARNING]  (269240) : Exiting Master process...
Dec  6 02:22:46 np0005548731 neutron-haproxy-ovnmeta-8b1d33d1-6678-4c36-a5fc-301790dcb0d0[269236]: [ALERT]    (269240) : Current worker (269242) exited with code 143 (Terminated)
Dec  6 02:22:46 np0005548731 neutron-haproxy-ovnmeta-8b1d33d1-6678-4c36-a5fc-301790dcb0d0[269236]: [WARNING]  (269240) : All workers exited. Exiting... (0)
Dec  6 02:22:46 np0005548731 systemd[1]: libpod-987ab1772203befc4d7dce4685e6dad15705b3df3d0cce67a76169d2bdaad59b.scope: Deactivated successfully.
Dec  6 02:22:46 np0005548731 conmon[269236]: conmon 987ab1772203befc4d7d <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-987ab1772203befc4d7dce4685e6dad15705b3df3d0cce67a76169d2bdaad59b.scope/container/memory.events
Dec  6 02:22:46 np0005548731 podman[270491]: 2025-12-06 07:22:46.138300502 +0000 UTC m=+0.046328621 container died 987ab1772203befc4d7dce4685e6dad15705b3df3d0cce67a76169d2bdaad59b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8b1d33d1-6678-4c36-a5fc-301790dcb0d0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Dec  6 02:22:46 np0005548731 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-987ab1772203befc4d7dce4685e6dad15705b3df3d0cce67a76169d2bdaad59b-userdata-shm.mount: Deactivated successfully.
Dec  6 02:22:46 np0005548731 systemd[1]: var-lib-containers-storage-overlay-dba60727404efbf25cc816bc7c8155dd4afdacd2e9d2d34694a560251fa3f238-merged.mount: Deactivated successfully.
Dec  6 02:22:46 np0005548731 podman[270491]: 2025-12-06 07:22:46.190160631 +0000 UTC m=+0.098188760 container cleanup 987ab1772203befc4d7dce4685e6dad15705b3df3d0cce67a76169d2bdaad59b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8b1d33d1-6678-4c36-a5fc-301790dcb0d0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Dec  6 02:22:46 np0005548731 systemd[1]: libpod-conmon-987ab1772203befc4d7dce4685e6dad15705b3df3d0cce67a76169d2bdaad59b.scope: Deactivated successfully.
Dec  6 02:22:46 np0005548731 podman[270520]: 2025-12-06 07:22:46.249431675 +0000 UTC m=+0.039617793 container remove 987ab1772203befc4d7dce4685e6dad15705b3df3d0cce67a76169d2bdaad59b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8b1d33d1-6678-4c36-a5fc-301790dcb0d0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  6 02:22:46 np0005548731 nova_compute[232433]: 2025-12-06 07:22:46.254 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:22:46 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:22:46.256 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[914266e0-2266-468a-8505-79e51fa51984]: (4, ('Sat Dec  6 07:22:46 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-8b1d33d1-6678-4c36-a5fc-301790dcb0d0 (987ab1772203befc4d7dce4685e6dad15705b3df3d0cce67a76169d2bdaad59b)\n987ab1772203befc4d7dce4685e6dad15705b3df3d0cce67a76169d2bdaad59b\nSat Dec  6 07:22:46 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-8b1d33d1-6678-4c36-a5fc-301790dcb0d0 (987ab1772203befc4d7dce4685e6dad15705b3df3d0cce67a76169d2bdaad59b)\n987ab1772203befc4d7dce4685e6dad15705b3df3d0cce67a76169d2bdaad59b\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:22:46 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:22:46.258 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[2bc8dd5a-4377-4842-9151-fc0d96a79905]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:22:46 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:22:46.259 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8b1d33d1-60, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:22:46 np0005548731 nova_compute[232433]: 2025-12-06 07:22:46.260 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:22:46 np0005548731 kernel: tap8b1d33d1-60: left promiscuous mode
Dec  6 02:22:46 np0005548731 nova_compute[232433]: 2025-12-06 07:22:46.264 232437 INFO nova.virt.libvirt.driver [-] [instance: 6dc14838-5602-4f43-a3e3-2374b4f603eb] Instance destroyed successfully.#033[00m
Dec  6 02:22:46 np0005548731 nova_compute[232433]: 2025-12-06 07:22:46.264 232437 DEBUG nova.objects.instance [None req-b76f8e20-aec2-46d5-b347-8226a15f056a c2404fce4d1f48779c97d3cbcb6ae594 9536676f60844b6f802518771f02409f - - default default] Lazy-loading 'resources' on Instance uuid 6dc14838-5602-4f43-a3e3-2374b4f603eb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:22:46 np0005548731 nova_compute[232433]: 2025-12-06 07:22:46.277 232437 DEBUG nova.virt.libvirt.vif [None req-b76f8e20-aec2-46d5-b347-8226a15f056a c2404fce4d1f48779c97d3cbcb6ae594 9536676f60844b6f802518771f02409f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T07:21:38Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-device-tagging-server-1149920199',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-device-tagging-server-1149920199',id=92,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLwWYRzhpCEWsRkCaBhzHfhpPxKJasHgXvUlHzaDRDZXID15xokDKbMJM14TkjwZa8JWqEfbHVbjprM+jVuVPSPnfeGy8c3QiNdj4kuOXnNZ8rc8TAxyYOMgQOcRiCAPIw==',key_name='tempest-keypair-1877451409',keypairs=<?>,launch_index=0,launched_at=2025-12-06T07:21:48Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='9536676f60844b6f802518771f02409f',ramdisk_id='',reservation_id='r-bxlx3of0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_inpu
t_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TaggedAttachmentsTest-1465336039',owner_user_name='tempest-TaggedAttachmentsTest-1465336039-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T07:21:48Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='c2404fce4d1f48779c97d3cbcb6ae594',uuid=6dc14838-5602-4f43-a3e3-2374b4f603eb,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "8af7a557-97f6-420e-99b0-83eced102922", "address": "fa:16:3e:f8:41:a3", "network": {"id": "8b1d33d1-6678-4c36-a5fc-301790dcb0d0", "bridge": "br-int", "label": "tempest-TaggedAttachmentsTest-2120777627-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.182", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9536676f60844b6f802518771f02409f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8af7a557-97", "ovs_interfaceid": "8af7a557-97f6-420e-99b0-83eced102922", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec  6 02:22:46 np0005548731 nova_compute[232433]: 2025-12-06 07:22:46.277 232437 DEBUG nova.network.os_vif_util [None req-b76f8e20-aec2-46d5-b347-8226a15f056a c2404fce4d1f48779c97d3cbcb6ae594 9536676f60844b6f802518771f02409f - - default default] Converting VIF {"id": "8af7a557-97f6-420e-99b0-83eced102922", "address": "fa:16:3e:f8:41:a3", "network": {"id": "8b1d33d1-6678-4c36-a5fc-301790dcb0d0", "bridge": "br-int", "label": "tempest-TaggedAttachmentsTest-2120777627-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.182", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9536676f60844b6f802518771f02409f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8af7a557-97", "ovs_interfaceid": "8af7a557-97f6-420e-99b0-83eced102922", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 02:22:46 np0005548731 nova_compute[232433]: 2025-12-06 07:22:46.278 232437 DEBUG nova.network.os_vif_util [None req-b76f8e20-aec2-46d5-b347-8226a15f056a c2404fce4d1f48779c97d3cbcb6ae594 9536676f60844b6f802518771f02409f - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:f8:41:a3,bridge_name='br-int',has_traffic_filtering=True,id=8af7a557-97f6-420e-99b0-83eced102922,network=Network(8b1d33d1-6678-4c36-a5fc-301790dcb0d0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8af7a557-97') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 02:22:46 np0005548731 nova_compute[232433]: 2025-12-06 07:22:46.278 232437 DEBUG os_vif [None req-b76f8e20-aec2-46d5-b347-8226a15f056a c2404fce4d1f48779c97d3cbcb6ae594 9536676f60844b6f802518771f02409f - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:f8:41:a3,bridge_name='br-int',has_traffic_filtering=True,id=8af7a557-97f6-420e-99b0-83eced102922,network=Network(8b1d33d1-6678-4c36-a5fc-301790dcb0d0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8af7a557-97') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec  6 02:22:46 np0005548731 nova_compute[232433]: 2025-12-06 07:22:46.279 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:22:46 np0005548731 nova_compute[232433]: 2025-12-06 07:22:46.280 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8af7a557-97, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:22:46 np0005548731 nova_compute[232433]: 2025-12-06 07:22:46.280 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:22:46 np0005548731 nova_compute[232433]: 2025-12-06 07:22:46.281 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:22:46 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:22:46.282 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[dd91beba-471b-425c-b951-7e960fab6dd0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:22:46 np0005548731 nova_compute[232433]: 2025-12-06 07:22:46.283 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  6 02:22:46 np0005548731 nova_compute[232433]: 2025-12-06 07:22:46.290 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:22:46 np0005548731 nova_compute[232433]: 2025-12-06 07:22:46.293 232437 INFO os_vif [None req-b76f8e20-aec2-46d5-b347-8226a15f056a c2404fce4d1f48779c97d3cbcb6ae594 9536676f60844b6f802518771f02409f - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:f8:41:a3,bridge_name='br-int',has_traffic_filtering=True,id=8af7a557-97f6-420e-99b0-83eced102922,network=Network(8b1d33d1-6678-4c36-a5fc-301790dcb0d0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8af7a557-97')#033[00m
Dec  6 02:22:46 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:22:46.297 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[30fee56d-d75c-4b1a-9107-d8dd4548dde5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:22:46 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:22:46.298 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[4f80bc27-a72d-4c83-9bb5-a0e2aa012a40]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:22:46 np0005548731 nova_compute[232433]: 2025-12-06 07:22:46.310 232437 DEBUG nova.compute.manager [req-5c354b02-8106-48c3-b0ca-42a9cecd433e req-a150a809-a800-4ed4-aa1a-6ba875797807 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 6dc14838-5602-4f43-a3e3-2374b4f603eb] Received event network-vif-unplugged-8af7a557-97f6-420e-99b0-83eced102922 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:22:46 np0005548731 nova_compute[232433]: 2025-12-06 07:22:46.310 232437 DEBUG oslo_concurrency.lockutils [req-5c354b02-8106-48c3-b0ca-42a9cecd433e req-a150a809-a800-4ed4-aa1a-6ba875797807 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "6dc14838-5602-4f43-a3e3-2374b4f603eb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:22:46 np0005548731 nova_compute[232433]: 2025-12-06 07:22:46.311 232437 DEBUG oslo_concurrency.lockutils [req-5c354b02-8106-48c3-b0ca-42a9cecd433e req-a150a809-a800-4ed4-aa1a-6ba875797807 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "6dc14838-5602-4f43-a3e3-2374b4f603eb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:22:46 np0005548731 nova_compute[232433]: 2025-12-06 07:22:46.311 232437 DEBUG oslo_concurrency.lockutils [req-5c354b02-8106-48c3-b0ca-42a9cecd433e req-a150a809-a800-4ed4-aa1a-6ba875797807 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "6dc14838-5602-4f43-a3e3-2374b4f603eb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:22:46 np0005548731 nova_compute[232433]: 2025-12-06 07:22:46.311 232437 DEBUG nova.compute.manager [req-5c354b02-8106-48c3-b0ca-42a9cecd433e req-a150a809-a800-4ed4-aa1a-6ba875797807 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 6dc14838-5602-4f43-a3e3-2374b4f603eb] No waiting events found dispatching network-vif-unplugged-8af7a557-97f6-420e-99b0-83eced102922 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 02:22:46 np0005548731 nova_compute[232433]: 2025-12-06 07:22:46.311 232437 DEBUG nova.compute.manager [req-5c354b02-8106-48c3-b0ca-42a9cecd433e req-a150a809-a800-4ed4-aa1a-6ba875797807 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 6dc14838-5602-4f43-a3e3-2374b4f603eb] Received event network-vif-unplugged-8af7a557-97f6-420e-99b0-83eced102922 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Dec  6 02:22:46 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:22:46.312 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[e22405de-a011-4b92-9460-e4012fb9adb9]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 596796, 'reachable_time': 43897, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 270554, 'error': None, 'target': 'ovnmeta-8b1d33d1-6678-4c36-a5fc-301790dcb0d0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:22:46 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:22:46.314 144079 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-8b1d33d1-6678-4c36-a5fc-301790dcb0d0 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Dec  6 02:22:46 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:22:46.314 144079 DEBUG oslo.privsep.daemon [-] privsep: reply[1127a985-21b0-4484-9b1d-b29051a08c4f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:22:46 np0005548731 systemd[1]: run-netns-ovnmeta\x2d8b1d33d1\x2d6678\x2d4c36\x2da5fc\x2d301790dcb0d0.mount: Deactivated successfully.
Dec  6 02:22:46 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:22:46 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:22:46 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:22:46.914 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:22:47 np0005548731 nova_compute[232433]: 2025-12-06 07:22:47.187 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:22:47 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:22:47 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:22:47 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:22:47.538 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:22:47 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e258 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:22:47 np0005548731 nova_compute[232433]: 2025-12-06 07:22:47.988 232437 INFO nova.virt.libvirt.driver [None req-b76f8e20-aec2-46d5-b347-8226a15f056a c2404fce4d1f48779c97d3cbcb6ae594 9536676f60844b6f802518771f02409f - - default default] [instance: 6dc14838-5602-4f43-a3e3-2374b4f603eb] Deleting instance files /var/lib/nova/instances/6dc14838-5602-4f43-a3e3-2374b4f603eb_del#033[00m
Dec  6 02:22:47 np0005548731 nova_compute[232433]: 2025-12-06 07:22:47.989 232437 INFO nova.virt.libvirt.driver [None req-b76f8e20-aec2-46d5-b347-8226a15f056a c2404fce4d1f48779c97d3cbcb6ae594 9536676f60844b6f802518771f02409f - - default default] [instance: 6dc14838-5602-4f43-a3e3-2374b4f603eb] Deletion of /var/lib/nova/instances/6dc14838-5602-4f43-a3e3-2374b4f603eb_del complete#033[00m
Dec  6 02:22:48 np0005548731 nova_compute[232433]: 2025-12-06 07:22:48.102 232437 INFO nova.compute.manager [None req-b76f8e20-aec2-46d5-b347-8226a15f056a c2404fce4d1f48779c97d3cbcb6ae594 9536676f60844b6f802518771f02409f - - default default] [instance: 6dc14838-5602-4f43-a3e3-2374b4f603eb] Took 2.88 seconds to destroy the instance on the hypervisor.#033[00m
Dec  6 02:22:48 np0005548731 nova_compute[232433]: 2025-12-06 07:22:48.103 232437 DEBUG oslo.service.loopingcall [None req-b76f8e20-aec2-46d5-b347-8226a15f056a c2404fce4d1f48779c97d3cbcb6ae594 9536676f60844b6f802518771f02409f - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Dec  6 02:22:48 np0005548731 nova_compute[232433]: 2025-12-06 07:22:48.103 232437 DEBUG nova.compute.manager [-] [instance: 6dc14838-5602-4f43-a3e3-2374b4f603eb] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Dec  6 02:22:48 np0005548731 nova_compute[232433]: 2025-12-06 07:22:48.104 232437 DEBUG nova.network.neutron [-] [instance: 6dc14838-5602-4f43-a3e3-2374b4f603eb] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Dec  6 02:22:48 np0005548731 nova_compute[232433]: 2025-12-06 07:22:48.416 232437 DEBUG nova.compute.manager [req-86fd8f52-ab52-4598-9b34-598b6bd69bcf req-eee7eea2-4569-4221-b12e-b11bb2c3dfae 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 6dc14838-5602-4f43-a3e3-2374b4f603eb] Received event network-vif-plugged-8af7a557-97f6-420e-99b0-83eced102922 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:22:48 np0005548731 nova_compute[232433]: 2025-12-06 07:22:48.416 232437 DEBUG oslo_concurrency.lockutils [req-86fd8f52-ab52-4598-9b34-598b6bd69bcf req-eee7eea2-4569-4221-b12e-b11bb2c3dfae 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "6dc14838-5602-4f43-a3e3-2374b4f603eb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:22:48 np0005548731 nova_compute[232433]: 2025-12-06 07:22:48.417 232437 DEBUG oslo_concurrency.lockutils [req-86fd8f52-ab52-4598-9b34-598b6bd69bcf req-eee7eea2-4569-4221-b12e-b11bb2c3dfae 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "6dc14838-5602-4f43-a3e3-2374b4f603eb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:22:48 np0005548731 nova_compute[232433]: 2025-12-06 07:22:48.417 232437 DEBUG oslo_concurrency.lockutils [req-86fd8f52-ab52-4598-9b34-598b6bd69bcf req-eee7eea2-4569-4221-b12e-b11bb2c3dfae 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "6dc14838-5602-4f43-a3e3-2374b4f603eb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:22:48 np0005548731 nova_compute[232433]: 2025-12-06 07:22:48.417 232437 DEBUG nova.compute.manager [req-86fd8f52-ab52-4598-9b34-598b6bd69bcf req-eee7eea2-4569-4221-b12e-b11bb2c3dfae 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 6dc14838-5602-4f43-a3e3-2374b4f603eb] No waiting events found dispatching network-vif-plugged-8af7a557-97f6-420e-99b0-83eced102922 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 02:22:48 np0005548731 nova_compute[232433]: 2025-12-06 07:22:48.418 232437 WARNING nova.compute.manager [req-86fd8f52-ab52-4598-9b34-598b6bd69bcf req-eee7eea2-4569-4221-b12e-b11bb2c3dfae 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 6dc14838-5602-4f43-a3e3-2374b4f603eb] Received unexpected event network-vif-plugged-8af7a557-97f6-420e-99b0-83eced102922 for instance with vm_state active and task_state deleting.#033[00m
Dec  6 02:22:48 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:22:48 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000025s ======
Dec  6 02:22:48 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:22:48.916 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Dec  6 02:22:49 np0005548731 nova_compute[232433]: 2025-12-06 07:22:49.128 232437 DEBUG nova.network.neutron [-] [instance: 6dc14838-5602-4f43-a3e3-2374b4f603eb] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 02:22:49 np0005548731 nova_compute[232433]: 2025-12-06 07:22:49.146 232437 INFO nova.compute.manager [-] [instance: 6dc14838-5602-4f43-a3e3-2374b4f603eb] Took 1.04 seconds to deallocate network for instance.#033[00m
Dec  6 02:22:49 np0005548731 nova_compute[232433]: 2025-12-06 07:22:49.193 232437 DEBUG oslo_concurrency.lockutils [None req-b76f8e20-aec2-46d5-b347-8226a15f056a c2404fce4d1f48779c97d3cbcb6ae594 9536676f60844b6f802518771f02409f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:22:49 np0005548731 nova_compute[232433]: 2025-12-06 07:22:49.193 232437 DEBUG oslo_concurrency.lockutils [None req-b76f8e20-aec2-46d5-b347-8226a15f056a c2404fce4d1f48779c97d3cbcb6ae594 9536676f60844b6f802518771f02409f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:22:49 np0005548731 nova_compute[232433]: 2025-12-06 07:22:49.208 232437 DEBUG nova.compute.manager [req-f17d1412-e619-49f0-b298-dff67ec6145c req-f32dc0b7-c0a7-4529-8b05-076a4558fc08 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 6dc14838-5602-4f43-a3e3-2374b4f603eb] Received event network-vif-deleted-8af7a557-97f6-420e-99b0-83eced102922 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:22:49 np0005548731 nova_compute[232433]: 2025-12-06 07:22:49.245 232437 DEBUG oslo_concurrency.processutils [None req-b76f8e20-aec2-46d5-b347-8226a15f056a c2404fce4d1f48779c97d3cbcb6ae594 9536676f60844b6f802518771f02409f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:22:49 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:22:49 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:22:49 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:22:49.541 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:22:49 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 02:22:49 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1015985670' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 02:22:49 np0005548731 nova_compute[232433]: 2025-12-06 07:22:49.696 232437 DEBUG oslo_concurrency.processutils [None req-b76f8e20-aec2-46d5-b347-8226a15f056a c2404fce4d1f48779c97d3cbcb6ae594 9536676f60844b6f802518771f02409f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:22:49 np0005548731 nova_compute[232433]: 2025-12-06 07:22:49.702 232437 DEBUG nova.compute.provider_tree [None req-b76f8e20-aec2-46d5-b347-8226a15f056a c2404fce4d1f48779c97d3cbcb6ae594 9536676f60844b6f802518771f02409f - - default default] Inventory has not changed in ProviderTree for provider: 6d00757a-082f-486d-ae84-869a2ba2e6e7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  6 02:22:49 np0005548731 nova_compute[232433]: 2025-12-06 07:22:49.725 232437 DEBUG nova.scheduler.client.report [None req-b76f8e20-aec2-46d5-b347-8226a15f056a c2404fce4d1f48779c97d3cbcb6ae594 9536676f60844b6f802518771f02409f - - default default] Inventory has not changed for provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  6 02:22:49 np0005548731 nova_compute[232433]: 2025-12-06 07:22:49.752 232437 DEBUG oslo_concurrency.lockutils [None req-b76f8e20-aec2-46d5-b347-8226a15f056a c2404fce4d1f48779c97d3cbcb6ae594 9536676f60844b6f802518771f02409f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.559s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:22:49 np0005548731 nova_compute[232433]: 2025-12-06 07:22:49.791 232437 INFO nova.scheduler.client.report [None req-b76f8e20-aec2-46d5-b347-8226a15f056a c2404fce4d1f48779c97d3cbcb6ae594 9536676f60844b6f802518771f02409f - - default default] Deleted allocations for instance 6dc14838-5602-4f43-a3e3-2374b4f603eb#033[00m
Dec  6 02:22:49 np0005548731 nova_compute[232433]: 2025-12-06 07:22:49.922 232437 DEBUG oslo_concurrency.lockutils [None req-b76f8e20-aec2-46d5-b347-8226a15f056a c2404fce4d1f48779c97d3cbcb6ae594 9536676f60844b6f802518771f02409f - - default default] Lock "6dc14838-5602-4f43-a3e3-2374b4f603eb" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.705s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:22:50 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:22:50 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:22:50 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:22:50.918 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:22:51 np0005548731 nova_compute[232433]: 2025-12-06 07:22:51.282 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:22:51 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:22:51 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000025s ======
Dec  6 02:22:51 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:22:51.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Dec  6 02:22:52 np0005548731 nova_compute[232433]: 2025-12-06 07:22:52.188 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:22:52 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e258 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:22:52 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:22:52 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:22:52 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:22:52.920 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:22:53 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:22:53 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:22:53 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:22:53.547 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:22:53 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e259 e259: 3 total, 3 up, 3 in
Dec  6 02:22:54 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:22:54 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:22:54 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:22:54.922 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:22:55 np0005548731 nova_compute[232433]: 2025-12-06 07:22:55.099 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:22:55 np0005548731 nova_compute[232433]: 2025-12-06 07:22:55.139 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:22:55 np0005548731 nova_compute[232433]: 2025-12-06 07:22:55.474 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:22:55 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:22:55 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:22:55 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:22:55.550 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:22:56 np0005548731 nova_compute[232433]: 2025-12-06 07:22:56.283 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:22:56 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:22:56 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000025s ======
Dec  6 02:22:56 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:22:56.925 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Dec  6 02:22:57 np0005548731 nova_compute[232433]: 2025-12-06 07:22:57.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:22:57 np0005548731 nova_compute[232433]: 2025-12-06 07:22:57.104 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  6 02:22:57 np0005548731 nova_compute[232433]: 2025-12-06 07:22:57.104 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  6 02:22:57 np0005548731 nova_compute[232433]: 2025-12-06 07:22:57.123 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Dec  6 02:22:57 np0005548731 nova_compute[232433]: 2025-12-06 07:22:57.190 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:22:57 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:22:57 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:22:57 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:22:57.553 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:22:57 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e259 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:22:58 np0005548731 nova_compute[232433]: 2025-12-06 07:22:58.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:22:58 np0005548731 nova_compute[232433]: 2025-12-06 07:22:58.295 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:22:58 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:22:58 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:22:58 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:22:58.927 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:22:59 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:22:59 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:22:59 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:22:59.555 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:23:00 np0005548731 nova_compute[232433]: 2025-12-06 07:23:00.107 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:23:00 np0005548731 nova_compute[232433]: 2025-12-06 07:23:00.107 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  6 02:23:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:23:00.864 143965 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:23:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:23:00.865 143965 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:23:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:23:00.865 143965 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:23:00 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:23:00 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:23:00 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:23:00.929 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:23:01 np0005548731 nova_compute[232433]: 2025-12-06 07:23:01.263 232437 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765005766.2619333, 6dc14838-5602-4f43-a3e3-2374b4f603eb => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 02:23:01 np0005548731 nova_compute[232433]: 2025-12-06 07:23:01.263 232437 INFO nova.compute.manager [-] [instance: 6dc14838-5602-4f43-a3e3-2374b4f603eb] VM Stopped (Lifecycle Event)#033[00m
Dec  6 02:23:01 np0005548731 nova_compute[232433]: 2025-12-06 07:23:01.284 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:23:01 np0005548731 nova_compute[232433]: 2025-12-06 07:23:01.291 232437 DEBUG nova.compute.manager [None req-8d447cc2-1676-443a-83f3-56b7fa31dbb0 - - - - - -] [instance: 6dc14838-5602-4f43-a3e3-2374b4f603eb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:23:01 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:23:01 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:23:01 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:23:01.557 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:23:02 np0005548731 nova_compute[232433]: 2025-12-06 07:23:02.105 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:23:02 np0005548731 nova_compute[232433]: 2025-12-06 07:23:02.105 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:23:02 np0005548731 nova_compute[232433]: 2025-12-06 07:23:02.106 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Dec  6 02:23:02 np0005548731 nova_compute[232433]: 2025-12-06 07:23:02.122 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Dec  6 02:23:02 np0005548731 nova_compute[232433]: 2025-12-06 07:23:02.191 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:23:02 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e259 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:23:02 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:23:02 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:23:02 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:23:02.932 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:23:03 np0005548731 nova_compute[232433]: 2025-12-06 07:23:03.122 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:23:03 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:23:03 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:23:03 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:23:03.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:23:03 np0005548731 ceph-mon[77458]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #76. Immutable memtables: 0.
Dec  6 02:23:03 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:23:03.741696) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec  6 02:23:03 np0005548731 ceph-mon[77458]: rocksdb: [db/flush_job.cc:856] [default] [JOB 45] Flushing memtable with next log file: 76
Dec  6 02:23:03 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765005783741752, "job": 45, "event": "flush_started", "num_memtables": 1, "num_entries": 2274, "num_deletes": 255, "total_data_size": 5426411, "memory_usage": 5522368, "flush_reason": "Manual Compaction"}
Dec  6 02:23:03 np0005548731 ceph-mon[77458]: rocksdb: [db/flush_job.cc:885] [default] [JOB 45] Level-0 flush table #77: started
Dec  6 02:23:03 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765005783761182, "cf_name": "default", "job": 45, "event": "table_file_creation", "file_number": 77, "file_size": 2182472, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 39855, "largest_seqno": 42123, "table_properties": {"data_size": 2175312, "index_size": 3782, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2309, "raw_key_size": 19466, "raw_average_key_size": 21, "raw_value_size": 2159328, "raw_average_value_size": 2399, "num_data_blocks": 166, "num_entries": 900, "num_filter_entries": 900, "num_deletions": 255, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765005574, "oldest_key_time": 1765005574, "file_creation_time": 1765005783, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bc01f9a1-fda8-4008-aee1-1fa5ff4371a1", "db_session_id": "CWBL8WMDM4D79BU86E9T", "orig_file_number": 77, "seqno_to_time_mapping": "N/A"}}
Dec  6 02:23:03 np0005548731 ceph-mon[77458]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 45] Flush lasted 19599 microseconds, and 5539 cpu microseconds.
Dec  6 02:23:03 np0005548731 ceph-mon[77458]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  6 02:23:03 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:23:03.761283) [db/flush_job.cc:967] [default] [JOB 45] Level-0 flush table #77: 2182472 bytes OK
Dec  6 02:23:03 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:23:03.761322) [db/memtable_list.cc:519] [default] Level-0 commit table #77 started
Dec  6 02:23:03 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:23:03.763795) [db/memtable_list.cc:722] [default] Level-0 commit table #77: memtable #1 done
Dec  6 02:23:03 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:23:03.763817) EVENT_LOG_v1 {"time_micros": 1765005783763810, "job": 45, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec  6 02:23:03 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:23:03.763840) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec  6 02:23:03 np0005548731 ceph-mon[77458]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 45] Try to delete WAL files size 5416226, prev total WAL file size 5416226, number of live WAL files 2.
Dec  6 02:23:03 np0005548731 ceph-mon[77458]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000073.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  6 02:23:03 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:23:03.766174) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740031323731' seq:72057594037927935, type:22 .. '6D6772737461740031353236' seq:0, type:0; will stop at (end)
Dec  6 02:23:03 np0005548731 ceph-mon[77458]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 46] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec  6 02:23:03 np0005548731 ceph-mon[77458]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 45 Base level 0, inputs: [77(2131KB)], [75(10MB)]
Dec  6 02:23:03 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765005783766272, "job": 46, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [77], "files_L6": [75], "score": -1, "input_data_size": 13469251, "oldest_snapshot_seqno": -1}
Dec  6 02:23:03 np0005548731 ceph-mon[77458]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 46] Generated table #78: 7276 keys, 10897846 bytes, temperature: kUnknown
Dec  6 02:23:03 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765005783853872, "cf_name": "default", "job": 46, "event": "table_file_creation", "file_number": 78, "file_size": 10897846, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10849926, "index_size": 28608, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 18245, "raw_key_size": 187195, "raw_average_key_size": 25, "raw_value_size": 10720458, "raw_average_value_size": 1473, "num_data_blocks": 1140, "num_entries": 7276, "num_filter_entries": 7276, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765002564, "oldest_key_time": 0, "file_creation_time": 1765005783, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bc01f9a1-fda8-4008-aee1-1fa5ff4371a1", "db_session_id": "CWBL8WMDM4D79BU86E9T", "orig_file_number": 78, "seqno_to_time_mapping": "N/A"}}
Dec  6 02:23:03 np0005548731 ceph-mon[77458]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  6 02:23:03 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:23:03.854135) [db/compaction/compaction_job.cc:1663] [default] [JOB 46] Compacted 1@0 + 1@6 files to L6 => 10897846 bytes
Dec  6 02:23:03 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:23:03.855431) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 153.6 rd, 124.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.1, 10.8 +0.0 blob) out(10.4 +0.0 blob), read-write-amplify(11.2) write-amplify(5.0) OK, records in: 7718, records dropped: 442 output_compression: NoCompression
Dec  6 02:23:03 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:23:03.855458) EVENT_LOG_v1 {"time_micros": 1765005783855445, "job": 46, "event": "compaction_finished", "compaction_time_micros": 87662, "compaction_time_cpu_micros": 32378, "output_level": 6, "num_output_files": 1, "total_output_size": 10897846, "num_input_records": 7718, "num_output_records": 7276, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec  6 02:23:03 np0005548731 ceph-mon[77458]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000077.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  6 02:23:03 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765005783856130, "job": 46, "event": "table_file_deletion", "file_number": 77}
Dec  6 02:23:03 np0005548731 ceph-mon[77458]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000075.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  6 02:23:03 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765005783859122, "job": 46, "event": "table_file_deletion", "file_number": 75}
Dec  6 02:23:03 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:23:03.765951) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 02:23:03 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:23:03.859169) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 02:23:03 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:23:03.859173) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 02:23:03 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:23:03.859175) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 02:23:03 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:23:03.859176) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 02:23:03 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:23:03.859178) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 02:23:04 np0005548731 nova_compute[232433]: 2025-12-06 07:23:04.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:23:04 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:23:04 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:23:04 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:23:04.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:23:05 np0005548731 podman[270623]: 2025-12-06 07:23:05.390957556 +0000 UTC m=+0.059619213 container health_status 26c2ce9bdb83c6db5ccad7f5b2a7cb4e9354ab0235ad2f942ea42c8feda2142b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec  6 02:23:05 np0005548731 podman[270624]: 2025-12-06 07:23:05.413640485 +0000 UTC m=+0.080630290 container health_status a118c3becf56cfbcae208065771ebf10a65162bb7b96389971293e534823fb78 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec  6 02:23:05 np0005548731 podman[270625]: 2025-12-06 07:23:05.413685826 +0000 UTC m=+0.070373934 container health_status ae4fd08cdec31d6ae2a6b91ffff7deb6ca5a8375cb52ee4b685cc6f25796824e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Dec  6 02:23:05 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:23:05 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:23:05 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:23:05.564 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:23:06 np0005548731 nova_compute[232433]: 2025-12-06 07:23:06.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:23:06 np0005548731 nova_compute[232433]: 2025-12-06 07:23:06.131 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:23:06 np0005548731 nova_compute[232433]: 2025-12-06 07:23:06.131 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:23:06 np0005548731 nova_compute[232433]: 2025-12-06 07:23:06.131 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:23:06 np0005548731 nova_compute[232433]: 2025-12-06 07:23:06.132 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  6 02:23:06 np0005548731 nova_compute[232433]: 2025-12-06 07:23:06.132 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:23:06 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:23:06.151 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=36, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:ec:b3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '32:72:e7:89:e0:7d'}, ipsec=False) old=SB_Global(nb_cfg=35) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 02:23:06 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:23:06.152 143965 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec  6 02:23:06 np0005548731 nova_compute[232433]: 2025-12-06 07:23:06.155 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:23:06 np0005548731 nova_compute[232433]: 2025-12-06 07:23:06.285 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:23:06 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 02:23:06 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3978424021' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 02:23:06 np0005548731 nova_compute[232433]: 2025-12-06 07:23:06.561 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.429s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:23:06 np0005548731 nova_compute[232433]: 2025-12-06 07:23:06.713 232437 WARNING nova.virt.libvirt.driver [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  6 02:23:06 np0005548731 nova_compute[232433]: 2025-12-06 07:23:06.714 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4510MB free_disk=20.900737762451172GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, 
"label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  6 02:23:06 np0005548731 nova_compute[232433]: 2025-12-06 07:23:06.714 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:23:06 np0005548731 nova_compute[232433]: 2025-12-06 07:23:06.714 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:23:06 np0005548731 nova_compute[232433]: 2025-12-06 07:23:06.853 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  6 02:23:06 np0005548731 nova_compute[232433]: 2025-12-06 07:23:06.854 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  6 02:23:06 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:23:06 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:23:06 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:23:06.935 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:23:06 np0005548731 nova_compute[232433]: 2025-12-06 07:23:06.947 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:23:07 np0005548731 nova_compute[232433]: 2025-12-06 07:23:07.193 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:23:07 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 02:23:07 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2534009470' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 02:23:07 np0005548731 nova_compute[232433]: 2025-12-06 07:23:07.373 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.426s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:23:07 np0005548731 nova_compute[232433]: 2025-12-06 07:23:07.379 232437 DEBUG nova.compute.provider_tree [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Inventory has not changed in ProviderTree for provider: 6d00757a-082f-486d-ae84-869a2ba2e6e7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  6 02:23:07 np0005548731 nova_compute[232433]: 2025-12-06 07:23:07.399 232437 DEBUG nova.scheduler.client.report [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Inventory has not changed for provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  6 02:23:07 np0005548731 nova_compute[232433]: 2025-12-06 07:23:07.446 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  6 02:23:07 np0005548731 nova_compute[232433]: 2025-12-06 07:23:07.447 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.732s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:23:07 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:23:07 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:23:07 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:23:07.566 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:23:07 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e259 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:23:08 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:23:08 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:23:08 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:23:08.938 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:23:09 np0005548731 nova_compute[232433]: 2025-12-06 07:23:09.449 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:23:09 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:23:09 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:23:09 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:23:09.569 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:23:10 np0005548731 nova_compute[232433]: 2025-12-06 07:23:10.100 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:23:10 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:23:10 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:23:10 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:23:10.941 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:23:11 np0005548731 nova_compute[232433]: 2025-12-06 07:23:11.105 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:23:11 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:23:11.154 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=9f96b960-b4f2-40bd-ae99-08121f5e8b78, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '36'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:23:11 np0005548731 nova_compute[232433]: 2025-12-06 07:23:11.286 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:23:11 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:23:11 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000025s ======
Dec  6 02:23:11 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:23:11.571 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Dec  6 02:23:11 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e260 e260: 3 total, 3 up, 3 in
Dec  6 02:23:12 np0005548731 nova_compute[232433]: 2025-12-06 07:23:12.195 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:23:12 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e260 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:23:12 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:23:12 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:23:12 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:23:12.943 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:23:13 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:23:13 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:23:13 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:23:13.574 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:23:13 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e261 e261: 3 total, 3 up, 3 in
Dec  6 02:23:14 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:23:14 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:23:14 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:23:14.945 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:23:15 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:23:15 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:23:15 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:23:15.577 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:23:16 np0005548731 nova_compute[232433]: 2025-12-06 07:23:16.288 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:23:16 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:23:16 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:23:16 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:23:16.948 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:23:17 np0005548731 nova_compute[232433]: 2025-12-06 07:23:17.233 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:23:17 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:23:17 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:23:17 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:23:17.581 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:23:17 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e261 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:23:18 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:23:18 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:23:18 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:23:18.950 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:23:19 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:23:19 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:23:19 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:23:19.584 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:23:20 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:23:20 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:23:20 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:23:20.952 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:23:21 np0005548731 nova_compute[232433]: 2025-12-06 07:23:21.289 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:23:21 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:23:21 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:23:21 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:23:21.587 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:23:22 np0005548731 ceph-mon[77458]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #79. Immutable memtables: 0.
Dec  6 02:23:22 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:23:22.050840) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec  6 02:23:22 np0005548731 ceph-mon[77458]: rocksdb: [db/flush_job.cc:856] [default] [JOB 47] Flushing memtable with next log file: 79
Dec  6 02:23:22 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765005802050871, "job": 47, "event": "flush_started", "num_memtables": 1, "num_entries": 444, "num_deletes": 251, "total_data_size": 574371, "memory_usage": 584176, "flush_reason": "Manual Compaction"}
Dec  6 02:23:22 np0005548731 ceph-mon[77458]: rocksdb: [db/flush_job.cc:885] [default] [JOB 47] Level-0 flush table #80: started
Dec  6 02:23:22 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765005802055427, "cf_name": "default", "job": 47, "event": "table_file_creation", "file_number": 80, "file_size": 379122, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 42128, "largest_seqno": 42567, "table_properties": {"data_size": 376555, "index_size": 667, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 837, "raw_key_size": 6341, "raw_average_key_size": 19, "raw_value_size": 371351, "raw_average_value_size": 1121, "num_data_blocks": 30, "num_entries": 331, "num_filter_entries": 331, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765005784, "oldest_key_time": 1765005784, "file_creation_time": 1765005802, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bc01f9a1-fda8-4008-aee1-1fa5ff4371a1", "db_session_id": "CWBL8WMDM4D79BU86E9T", "orig_file_number": 80, "seqno_to_time_mapping": "N/A"}}
Dec  6 02:23:22 np0005548731 ceph-mon[77458]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 47] Flush lasted 4637 microseconds, and 1669 cpu microseconds.
Dec  6 02:23:22 np0005548731 ceph-mon[77458]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  6 02:23:22 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:23:22.055473) [db/flush_job.cc:967] [default] [JOB 47] Level-0 flush table #80: 379122 bytes OK
Dec  6 02:23:22 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:23:22.055491) [db/memtable_list.cc:519] [default] Level-0 commit table #80 started
Dec  6 02:23:22 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:23:22.057943) [db/memtable_list.cc:722] [default] Level-0 commit table #80: memtable #1 done
Dec  6 02:23:22 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:23:22.057967) EVENT_LOG_v1 {"time_micros": 1765005802057960, "job": 47, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec  6 02:23:22 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:23:22.057985) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec  6 02:23:22 np0005548731 ceph-mon[77458]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 47] Try to delete WAL files size 571576, prev total WAL file size 571576, number of live WAL files 2.
Dec  6 02:23:22 np0005548731 ceph-mon[77458]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000076.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  6 02:23:22 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:23:22.058564) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730033323633' seq:72057594037927935, type:22 .. '7061786F730033353135' seq:0, type:0; will stop at (end)
Dec  6 02:23:22 np0005548731 ceph-mon[77458]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 48] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec  6 02:23:22 np0005548731 ceph-mon[77458]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 47 Base level 0, inputs: [80(370KB)], [78(10MB)]
Dec  6 02:23:22 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765005802058828, "job": 48, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [80], "files_L6": [78], "score": -1, "input_data_size": 11276968, "oldest_snapshot_seqno": -1}
Dec  6 02:23:22 np0005548731 ceph-mon[77458]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 48] Generated table #81: 7092 keys, 9294094 bytes, temperature: kUnknown
Dec  6 02:23:22 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765005802125978, "cf_name": "default", "job": 48, "event": "table_file_creation", "file_number": 81, "file_size": 9294094, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9248947, "index_size": 26283, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 17797, "raw_key_size": 184150, "raw_average_key_size": 25, "raw_value_size": 9124218, "raw_average_value_size": 1286, "num_data_blocks": 1033, "num_entries": 7092, "num_filter_entries": 7092, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765002564, "oldest_key_time": 0, "file_creation_time": 1765005802, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bc01f9a1-fda8-4008-aee1-1fa5ff4371a1", "db_session_id": "CWBL8WMDM4D79BU86E9T", "orig_file_number": 81, "seqno_to_time_mapping": "N/A"}}
Dec  6 02:23:22 np0005548731 ceph-mon[77458]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  6 02:23:22 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:23:22.126371) [db/compaction/compaction_job.cc:1663] [default] [JOB 48] Compacted 1@0 + 1@6 files to L6 => 9294094 bytes
Dec  6 02:23:22 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:23:22.128673) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 167.6 rd, 138.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.4, 10.4 +0.0 blob) out(8.9 +0.0 blob), read-write-amplify(54.3) write-amplify(24.5) OK, records in: 7607, records dropped: 515 output_compression: NoCompression
Dec  6 02:23:22 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:23:22.128690) EVENT_LOG_v1 {"time_micros": 1765005802128682, "job": 48, "event": "compaction_finished", "compaction_time_micros": 67280, "compaction_time_cpu_micros": 21210, "output_level": 6, "num_output_files": 1, "total_output_size": 9294094, "num_input_records": 7607, "num_output_records": 7092, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec  6 02:23:22 np0005548731 ceph-mon[77458]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000080.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  6 02:23:22 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765005802129174, "job": 48, "event": "table_file_deletion", "file_number": 80}
Dec  6 02:23:22 np0005548731 ceph-mon[77458]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000078.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  6 02:23:22 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765005802130922, "job": 48, "event": "table_file_deletion", "file_number": 78}
Dec  6 02:23:22 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:23:22.058345) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 02:23:22 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:23:22.131132) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 02:23:22 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:23:22.131139) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 02:23:22 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:23:22.131141) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 02:23:22 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:23:22.131143) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 02:23:22 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:23:22.131145) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 02:23:22 np0005548731 nova_compute[232433]: 2025-12-06 07:23:22.234 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:23:22 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e261 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:23:22 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:23:22 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:23:22 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:23:22.954 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:23:23 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e262 e262: 3 total, 3 up, 3 in
Dec  6 02:23:23 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  6 02:23:23 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:23:23 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:23:23 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:23:23.590 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:23:24 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 02:23:24 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  6 02:23:24 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e263 e263: 3 total, 3 up, 3 in
Dec  6 02:23:24 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:23:24 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000025s ======
Dec  6 02:23:24 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:23:24.957 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Dec  6 02:23:25 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:23:25 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:23:25 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:23:25.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:23:26 np0005548731 nova_compute[232433]: 2025-12-06 07:23:26.118 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:23:26 np0005548731 nova_compute[232433]: 2025-12-06 07:23:26.118 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Dec  6 02:23:26 np0005548731 nova_compute[232433]: 2025-12-06 07:23:26.290 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:23:26 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:23:26 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000025s ======
Dec  6 02:23:26 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:23:26.959 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Dec  6 02:23:27 np0005548731 nova_compute[232433]: 2025-12-06 07:23:27.274 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:23:27 np0005548731 nova_compute[232433]: 2025-12-06 07:23:27.340 232437 DEBUG oslo_concurrency.lockutils [None req-aa6a685d-9a08-40fc-b5fe-e37c824637e0 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Acquiring lock "f8846499-359a-4e65-9120-56011fbc0f1d" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:23:27 np0005548731 nova_compute[232433]: 2025-12-06 07:23:27.341 232437 DEBUG oslo_concurrency.lockutils [None req-aa6a685d-9a08-40fc-b5fe-e37c824637e0 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Lock "f8846499-359a-4e65-9120-56011fbc0f1d" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:23:27 np0005548731 nova_compute[232433]: 2025-12-06 07:23:27.358 232437 DEBUG nova.compute.manager [None req-aa6a685d-9a08-40fc-b5fe-e37c824637e0 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] [instance: f8846499-359a-4e65-9120-56011fbc0f1d] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Dec  6 02:23:27 np0005548731 nova_compute[232433]: 2025-12-06 07:23:27.507 232437 DEBUG oslo_concurrency.lockutils [None req-aa6a685d-9a08-40fc-b5fe-e37c824637e0 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:23:27 np0005548731 nova_compute[232433]: 2025-12-06 07:23:27.507 232437 DEBUG oslo_concurrency.lockutils [None req-aa6a685d-9a08-40fc-b5fe-e37c824637e0 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:23:27 np0005548731 nova_compute[232433]: 2025-12-06 07:23:27.514 232437 DEBUG nova.virt.hardware [None req-aa6a685d-9a08-40fc-b5fe-e37c824637e0 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Dec  6 02:23:27 np0005548731 nova_compute[232433]: 2025-12-06 07:23:27.514 232437 INFO nova.compute.claims [None req-aa6a685d-9a08-40fc-b5fe-e37c824637e0 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] [instance: f8846499-359a-4e65-9120-56011fbc0f1d] Claim successful on node compute-2.ctlplane.example.com#033[00m
Dec  6 02:23:27 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:23:27 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:23:27 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:23:27.596 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:23:27 np0005548731 nova_compute[232433]: 2025-12-06 07:23:27.619 232437 DEBUG oslo_concurrency.processutils [None req-aa6a685d-9a08-40fc-b5fe-e37c824637e0 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:23:27 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e263 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:23:28 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 02:23:28 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/4237680350' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 02:23:28 np0005548731 nova_compute[232433]: 2025-12-06 07:23:28.088 232437 DEBUG oslo_concurrency.processutils [None req-aa6a685d-9a08-40fc-b5fe-e37c824637e0 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.469s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:23:28 np0005548731 nova_compute[232433]: 2025-12-06 07:23:28.096 232437 DEBUG nova.compute.provider_tree [None req-aa6a685d-9a08-40fc-b5fe-e37c824637e0 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Inventory has not changed in ProviderTree for provider: 6d00757a-082f-486d-ae84-869a2ba2e6e7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  6 02:23:28 np0005548731 nova_compute[232433]: 2025-12-06 07:23:28.118 232437 DEBUG nova.scheduler.client.report [None req-aa6a685d-9a08-40fc-b5fe-e37c824637e0 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Inventory has not changed for provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  6 02:23:28 np0005548731 nova_compute[232433]: 2025-12-06 07:23:28.191 232437 DEBUG oslo_concurrency.lockutils [None req-aa6a685d-9a08-40fc-b5fe-e37c824637e0 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.683s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:23:28 np0005548731 nova_compute[232433]: 2025-12-06 07:23:28.192 232437 DEBUG nova.compute.manager [None req-aa6a685d-9a08-40fc-b5fe-e37c824637e0 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] [instance: f8846499-359a-4e65-9120-56011fbc0f1d] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Dec  6 02:23:28 np0005548731 nova_compute[232433]: 2025-12-06 07:23:28.233 232437 DEBUG nova.compute.manager [None req-aa6a685d-9a08-40fc-b5fe-e37c824637e0 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] [instance: f8846499-359a-4e65-9120-56011fbc0f1d] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Dec  6 02:23:28 np0005548731 nova_compute[232433]: 2025-12-06 07:23:28.234 232437 DEBUG nova.network.neutron [None req-aa6a685d-9a08-40fc-b5fe-e37c824637e0 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] [instance: f8846499-359a-4e65-9120-56011fbc0f1d] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Dec  6 02:23:28 np0005548731 nova_compute[232433]: 2025-12-06 07:23:28.251 232437 INFO nova.virt.libvirt.driver [None req-aa6a685d-9a08-40fc-b5fe-e37c824637e0 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] [instance: f8846499-359a-4e65-9120-56011fbc0f1d] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Dec  6 02:23:28 np0005548731 nova_compute[232433]: 2025-12-06 07:23:28.269 232437 DEBUG nova.compute.manager [None req-aa6a685d-9a08-40fc-b5fe-e37c824637e0 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] [instance: f8846499-359a-4e65-9120-56011fbc0f1d] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Dec  6 02:23:28 np0005548731 nova_compute[232433]: 2025-12-06 07:23:28.370 232437 DEBUG nova.compute.manager [None req-aa6a685d-9a08-40fc-b5fe-e37c824637e0 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] [instance: f8846499-359a-4e65-9120-56011fbc0f1d] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Dec  6 02:23:28 np0005548731 nova_compute[232433]: 2025-12-06 07:23:28.372 232437 DEBUG nova.virt.libvirt.driver [None req-aa6a685d-9a08-40fc-b5fe-e37c824637e0 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] [instance: f8846499-359a-4e65-9120-56011fbc0f1d] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Dec  6 02:23:28 np0005548731 nova_compute[232433]: 2025-12-06 07:23:28.373 232437 INFO nova.virt.libvirt.driver [None req-aa6a685d-9a08-40fc-b5fe-e37c824637e0 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] [instance: f8846499-359a-4e65-9120-56011fbc0f1d] Creating image(s)#033[00m
Dec  6 02:23:28 np0005548731 nova_compute[232433]: 2025-12-06 07:23:28.402 232437 DEBUG nova.storage.rbd_utils [None req-aa6a685d-9a08-40fc-b5fe-e37c824637e0 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] rbd image f8846499-359a-4e65-9120-56011fbc0f1d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:23:28 np0005548731 nova_compute[232433]: 2025-12-06 07:23:28.426 232437 DEBUG nova.storage.rbd_utils [None req-aa6a685d-9a08-40fc-b5fe-e37c824637e0 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] rbd image f8846499-359a-4e65-9120-56011fbc0f1d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:23:28 np0005548731 nova_compute[232433]: 2025-12-06 07:23:28.449 232437 DEBUG nova.storage.rbd_utils [None req-aa6a685d-9a08-40fc-b5fe-e37c824637e0 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] rbd image f8846499-359a-4e65-9120-56011fbc0f1d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:23:28 np0005548731 nova_compute[232433]: 2025-12-06 07:23:28.453 232437 DEBUG oslo_concurrency.processutils [None req-aa6a685d-9a08-40fc-b5fe-e37c824637e0 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:23:28 np0005548731 nova_compute[232433]: 2025-12-06 07:23:28.488 232437 DEBUG nova.policy [None req-aa6a685d-9a08-40fc-b5fe-e37c824637e0 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'd67c136e82ad4001b000848d75eef50d', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '88f5b34244614321a9b6e902eaba0ece', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Dec  6 02:23:28 np0005548731 nova_compute[232433]: 2025-12-06 07:23:28.526 232437 DEBUG oslo_concurrency.processutils [None req-aa6a685d-9a08-40fc-b5fe-e37c824637e0 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json" returned: 0 in 0.073s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:23:28 np0005548731 nova_compute[232433]: 2025-12-06 07:23:28.527 232437 DEBUG oslo_concurrency.lockutils [None req-aa6a685d-9a08-40fc-b5fe-e37c824637e0 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Acquiring lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:23:28 np0005548731 nova_compute[232433]: 2025-12-06 07:23:28.527 232437 DEBUG oslo_concurrency.lockutils [None req-aa6a685d-9a08-40fc-b5fe-e37c824637e0 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:23:28 np0005548731 nova_compute[232433]: 2025-12-06 07:23:28.528 232437 DEBUG oslo_concurrency.lockutils [None req-aa6a685d-9a08-40fc-b5fe-e37c824637e0 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:23:28 np0005548731 nova_compute[232433]: 2025-12-06 07:23:28.554 232437 DEBUG nova.storage.rbd_utils [None req-aa6a685d-9a08-40fc-b5fe-e37c824637e0 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] rbd image f8846499-359a-4e65-9120-56011fbc0f1d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:23:28 np0005548731 nova_compute[232433]: 2025-12-06 07:23:28.558 232437 DEBUG oslo_concurrency.processutils [None req-aa6a685d-9a08-40fc-b5fe-e37c824637e0 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef f8846499-359a-4e65-9120-56011fbc0f1d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:23:28 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:23:28 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:23:28 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:23:28.961 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:23:29 np0005548731 nova_compute[232433]: 2025-12-06 07:23:29.071 232437 DEBUG nova.network.neutron [None req-aa6a685d-9a08-40fc-b5fe-e37c824637e0 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] [instance: f8846499-359a-4e65-9120-56011fbc0f1d] Successfully created port: 5c84e057-7718-465e-89ef-0ad3aa2c82b3 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Dec  6 02:23:29 np0005548731 nova_compute[232433]: 2025-12-06 07:23:29.359 232437 DEBUG oslo_concurrency.processutils [None req-aa6a685d-9a08-40fc-b5fe-e37c824637e0 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef f8846499-359a-4e65-9120-56011fbc0f1d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.801s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:23:29 np0005548731 nova_compute[232433]: 2025-12-06 07:23:29.430 232437 DEBUG nova.storage.rbd_utils [None req-aa6a685d-9a08-40fc-b5fe-e37c824637e0 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] resizing rbd image f8846499-359a-4e65-9120-56011fbc0f1d_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Dec  6 02:23:29 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:23:29 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:23:29 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:23:29.600 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:23:30 np0005548731 nova_compute[232433]: 2025-12-06 07:23:30.064 232437 DEBUG nova.network.neutron [None req-aa6a685d-9a08-40fc-b5fe-e37c824637e0 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] [instance: f8846499-359a-4e65-9120-56011fbc0f1d] Successfully updated port: 5c84e057-7718-465e-89ef-0ad3aa2c82b3 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Dec  6 02:23:30 np0005548731 nova_compute[232433]: 2025-12-06 07:23:30.079 232437 DEBUG oslo_concurrency.lockutils [None req-aa6a685d-9a08-40fc-b5fe-e37c824637e0 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Acquiring lock "refresh_cache-f8846499-359a-4e65-9120-56011fbc0f1d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 02:23:30 np0005548731 nova_compute[232433]: 2025-12-06 07:23:30.079 232437 DEBUG oslo_concurrency.lockutils [None req-aa6a685d-9a08-40fc-b5fe-e37c824637e0 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Acquired lock "refresh_cache-f8846499-359a-4e65-9120-56011fbc0f1d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 02:23:30 np0005548731 nova_compute[232433]: 2025-12-06 07:23:30.079 232437 DEBUG nova.network.neutron [None req-aa6a685d-9a08-40fc-b5fe-e37c824637e0 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] [instance: f8846499-359a-4e65-9120-56011fbc0f1d] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec  6 02:23:30 np0005548731 nova_compute[232433]: 2025-12-06 07:23:30.163 232437 DEBUG nova.compute.manager [req-73e9febf-06ef-44b7-aa86-bf52e4d5f885 req-0ad93acc-35b2-4a7b-82c4-ac04b980e474 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f8846499-359a-4e65-9120-56011fbc0f1d] Received event network-changed-5c84e057-7718-465e-89ef-0ad3aa2c82b3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:23:30 np0005548731 nova_compute[232433]: 2025-12-06 07:23:30.163 232437 DEBUG nova.compute.manager [req-73e9febf-06ef-44b7-aa86-bf52e4d5f885 req-0ad93acc-35b2-4a7b-82c4-ac04b980e474 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f8846499-359a-4e65-9120-56011fbc0f1d] Refreshing instance network info cache due to event network-changed-5c84e057-7718-465e-89ef-0ad3aa2c82b3. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  6 02:23:30 np0005548731 nova_compute[232433]: 2025-12-06 07:23:30.163 232437 DEBUG oslo_concurrency.lockutils [req-73e9febf-06ef-44b7-aa86-bf52e4d5f885 req-0ad93acc-35b2-4a7b-82c4-ac04b980e474 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-f8846499-359a-4e65-9120-56011fbc0f1d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 02:23:30 np0005548731 nova_compute[232433]: 2025-12-06 07:23:30.265 232437 DEBUG nova.network.neutron [None req-aa6a685d-9a08-40fc-b5fe-e37c824637e0 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] [instance: f8846499-359a-4e65-9120-56011fbc0f1d] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec  6 02:23:30 np0005548731 nova_compute[232433]: 2025-12-06 07:23:30.268 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:23:30 np0005548731 nova_compute[232433]: 2025-12-06 07:23:30.304 232437 WARNING nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] While synchronizing instance power states, found 1 instances in the database and 0 instances on the hypervisor.#033[00m
Dec  6 02:23:30 np0005548731 nova_compute[232433]: 2025-12-06 07:23:30.304 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Triggering sync for uuid f8846499-359a-4e65-9120-56011fbc0f1d _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m
Dec  6 02:23:30 np0005548731 nova_compute[232433]: 2025-12-06 07:23:30.305 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Acquiring lock "f8846499-359a-4e65-9120-56011fbc0f1d" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:23:30 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:23:30 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:23:30 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:23:30.964 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:23:31 np0005548731 nova_compute[232433]: 2025-12-06 07:23:31.291 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:23:31 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:23:31 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:23:31 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:23:31.602 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:23:31 np0005548731 nova_compute[232433]: 2025-12-06 07:23:31.978 232437 DEBUG nova.network.neutron [None req-aa6a685d-9a08-40fc-b5fe-e37c824637e0 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] [instance: f8846499-359a-4e65-9120-56011fbc0f1d] Updating instance_info_cache with network_info: [{"id": "5c84e057-7718-465e-89ef-0ad3aa2c82b3", "address": "fa:16:3e:3a:72:e0", "network": {"id": "7c014e4e-a182-4f60-8285-20525bc99e5a", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-602234112-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "88f5b34244614321a9b6e902eaba0ece", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5c84e057-77", "ovs_interfaceid": "5c84e057-7718-465e-89ef-0ad3aa2c82b3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 02:23:32 np0005548731 nova_compute[232433]: 2025-12-06 07:23:32.003 232437 DEBUG oslo_concurrency.lockutils [None req-aa6a685d-9a08-40fc-b5fe-e37c824637e0 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Releasing lock "refresh_cache-f8846499-359a-4e65-9120-56011fbc0f1d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 02:23:32 np0005548731 nova_compute[232433]: 2025-12-06 07:23:32.004 232437 DEBUG nova.compute.manager [None req-aa6a685d-9a08-40fc-b5fe-e37c824637e0 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] [instance: f8846499-359a-4e65-9120-56011fbc0f1d] Instance network_info: |[{"id": "5c84e057-7718-465e-89ef-0ad3aa2c82b3", "address": "fa:16:3e:3a:72:e0", "network": {"id": "7c014e4e-a182-4f60-8285-20525bc99e5a", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-602234112-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "88f5b34244614321a9b6e902eaba0ece", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5c84e057-77", "ovs_interfaceid": "5c84e057-7718-465e-89ef-0ad3aa2c82b3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Dec  6 02:23:32 np0005548731 nova_compute[232433]: 2025-12-06 07:23:32.004 232437 DEBUG oslo_concurrency.lockutils [req-73e9febf-06ef-44b7-aa86-bf52e4d5f885 req-0ad93acc-35b2-4a7b-82c4-ac04b980e474 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-f8846499-359a-4e65-9120-56011fbc0f1d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 02:23:32 np0005548731 nova_compute[232433]: 2025-12-06 07:23:32.005 232437 DEBUG nova.network.neutron [req-73e9febf-06ef-44b7-aa86-bf52e4d5f885 req-0ad93acc-35b2-4a7b-82c4-ac04b980e474 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f8846499-359a-4e65-9120-56011fbc0f1d] Refreshing network info cache for port 5c84e057-7718-465e-89ef-0ad3aa2c82b3 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  6 02:23:32 np0005548731 nova_compute[232433]: 2025-12-06 07:23:32.276 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:23:32 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e263 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:23:32 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:23:32 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:23:32 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:23:32.967 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:23:33 np0005548731 nova_compute[232433]: 2025-12-06 07:23:33.171 232437 DEBUG nova.objects.instance [None req-aa6a685d-9a08-40fc-b5fe-e37c824637e0 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Lazy-loading 'migration_context' on Instance uuid f8846499-359a-4e65-9120-56011fbc0f1d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:23:33 np0005548731 nova_compute[232433]: 2025-12-06 07:23:33.189 232437 DEBUG nova.virt.libvirt.driver [None req-aa6a685d-9a08-40fc-b5fe-e37c824637e0 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] [instance: f8846499-359a-4e65-9120-56011fbc0f1d] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Dec  6 02:23:33 np0005548731 nova_compute[232433]: 2025-12-06 07:23:33.190 232437 DEBUG nova.virt.libvirt.driver [None req-aa6a685d-9a08-40fc-b5fe-e37c824637e0 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] [instance: f8846499-359a-4e65-9120-56011fbc0f1d] Ensure instance console log exists: /var/lib/nova/instances/f8846499-359a-4e65-9120-56011fbc0f1d/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec  6 02:23:33 np0005548731 nova_compute[232433]: 2025-12-06 07:23:33.190 232437 DEBUG oslo_concurrency.lockutils [None req-aa6a685d-9a08-40fc-b5fe-e37c824637e0 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:23:33 np0005548731 nova_compute[232433]: 2025-12-06 07:23:33.190 232437 DEBUG oslo_concurrency.lockutils [None req-aa6a685d-9a08-40fc-b5fe-e37c824637e0 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:23:33 np0005548731 nova_compute[232433]: 2025-12-06 07:23:33.191 232437 DEBUG oslo_concurrency.lockutils [None req-aa6a685d-9a08-40fc-b5fe-e37c824637e0 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:23:33 np0005548731 nova_compute[232433]: 2025-12-06 07:23:33.193 232437 DEBUG nova.virt.libvirt.driver [None req-aa6a685d-9a08-40fc-b5fe-e37c824637e0 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] [instance: f8846499-359a-4e65-9120-56011fbc0f1d] Start _get_guest_xml network_info=[{"id": "5c84e057-7718-465e-89ef-0ad3aa2c82b3", "address": "fa:16:3e:3a:72:e0", "network": {"id": "7c014e4e-a182-4f60-8285-20525bc99e5a", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-602234112-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "88f5b34244614321a9b6e902eaba0ece", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5c84e057-77", "ovs_interfaceid": "5c84e057-7718-465e-89ef-0ad3aa2c82b3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'device_type': 'disk', 'encryption_secret_uuid': None, 'disk_bus': 'virtio', 'boot_index': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'encryption_options': None, 'size': 0, 'encrypted': False, 'image_id': '6efab05d-c7cf-4770-a5c3-c806a2739063'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec  6 02:23:33 np0005548731 nova_compute[232433]: 2025-12-06 07:23:33.197 232437 WARNING nova.virt.libvirt.driver [None req-aa6a685d-9a08-40fc-b5fe-e37c824637e0 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  6 02:23:33 np0005548731 nova_compute[232433]: 2025-12-06 07:23:33.202 232437 DEBUG nova.virt.libvirt.host [None req-aa6a685d-9a08-40fc-b5fe-e37c824637e0 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec  6 02:23:33 np0005548731 nova_compute[232433]: 2025-12-06 07:23:33.202 232437 DEBUG nova.virt.libvirt.host [None req-aa6a685d-9a08-40fc-b5fe-e37c824637e0 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec  6 02:23:33 np0005548731 nova_compute[232433]: 2025-12-06 07:23:33.205 232437 DEBUG nova.virt.libvirt.host [None req-aa6a685d-9a08-40fc-b5fe-e37c824637e0 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec  6 02:23:33 np0005548731 nova_compute[232433]: 2025-12-06 07:23:33.205 232437 DEBUG nova.virt.libvirt.host [None req-aa6a685d-9a08-40fc-b5fe-e37c824637e0 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec  6 02:23:33 np0005548731 nova_compute[232433]: 2025-12-06 07:23:33.207 232437 DEBUG nova.virt.libvirt.driver [None req-aa6a685d-9a08-40fc-b5fe-e37c824637e0 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec  6 02:23:33 np0005548731 nova_compute[232433]: 2025-12-06 07:23:33.207 232437 DEBUG nova.virt.hardware [None req-aa6a685d-9a08-40fc-b5fe-e37c824637e0 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-06T06:56:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='25848a18-11d9-4f11-80b5-5d005675c76d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec  6 02:23:33 np0005548731 nova_compute[232433]: 2025-12-06 07:23:33.207 232437 DEBUG nova.virt.hardware [None req-aa6a685d-9a08-40fc-b5fe-e37c824637e0 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec  6 02:23:33 np0005548731 nova_compute[232433]: 2025-12-06 07:23:33.208 232437 DEBUG nova.virt.hardware [None req-aa6a685d-9a08-40fc-b5fe-e37c824637e0 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec  6 02:23:33 np0005548731 nova_compute[232433]: 2025-12-06 07:23:33.208 232437 DEBUG nova.virt.hardware [None req-aa6a685d-9a08-40fc-b5fe-e37c824637e0 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec  6 02:23:33 np0005548731 nova_compute[232433]: 2025-12-06 07:23:33.208 232437 DEBUG nova.virt.hardware [None req-aa6a685d-9a08-40fc-b5fe-e37c824637e0 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec  6 02:23:33 np0005548731 nova_compute[232433]: 2025-12-06 07:23:33.208 232437 DEBUG nova.virt.hardware [None req-aa6a685d-9a08-40fc-b5fe-e37c824637e0 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec  6 02:23:33 np0005548731 nova_compute[232433]: 2025-12-06 07:23:33.209 232437 DEBUG nova.virt.hardware [None req-aa6a685d-9a08-40fc-b5fe-e37c824637e0 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec  6 02:23:33 np0005548731 nova_compute[232433]: 2025-12-06 07:23:33.209 232437 DEBUG nova.virt.hardware [None req-aa6a685d-9a08-40fc-b5fe-e37c824637e0 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec  6 02:23:33 np0005548731 nova_compute[232433]: 2025-12-06 07:23:33.209 232437 DEBUG nova.virt.hardware [None req-aa6a685d-9a08-40fc-b5fe-e37c824637e0 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec  6 02:23:33 np0005548731 nova_compute[232433]: 2025-12-06 07:23:33.209 232437 DEBUG nova.virt.hardware [None req-aa6a685d-9a08-40fc-b5fe-e37c824637e0 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec  6 02:23:33 np0005548731 nova_compute[232433]: 2025-12-06 07:23:33.210 232437 DEBUG nova.virt.hardware [None req-aa6a685d-9a08-40fc-b5fe-e37c824637e0 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec  6 02:23:33 np0005548731 nova_compute[232433]: 2025-12-06 07:23:33.213 232437 DEBUG oslo_concurrency.processutils [None req-aa6a685d-9a08-40fc-b5fe-e37c824637e0 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:23:33 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:23:33 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:23:33 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:23:33.604 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:23:33 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  6 02:23:33 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/377675331' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  6 02:23:34 np0005548731 nova_compute[232433]: 2025-12-06 07:23:34.243 232437 DEBUG oslo_concurrency.processutils [None req-aa6a685d-9a08-40fc-b5fe-e37c824637e0 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.031s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:23:34 np0005548731 nova_compute[232433]: 2025-12-06 07:23:34.275 232437 DEBUG nova.storage.rbd_utils [None req-aa6a685d-9a08-40fc-b5fe-e37c824637e0 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] rbd image f8846499-359a-4e65-9120-56011fbc0f1d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:23:34 np0005548731 nova_compute[232433]: 2025-12-06 07:23:34.280 232437 DEBUG oslo_concurrency.processutils [None req-aa6a685d-9a08-40fc-b5fe-e37c824637e0 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:23:34 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e264 e264: 3 total, 3 up, 3 in
Dec  6 02:23:34 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  6 02:23:34 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2798063090' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  6 02:23:34 np0005548731 nova_compute[232433]: 2025-12-06 07:23:34.761 232437 DEBUG oslo_concurrency.processutils [None req-aa6a685d-9a08-40fc-b5fe-e37c824637e0 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.481s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:23:34 np0005548731 nova_compute[232433]: 2025-12-06 07:23:34.763 232437 DEBUG nova.virt.libvirt.vif [None req-aa6a685d-9a08-40fc-b5fe-e37c824637e0 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T07:23:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-31720005',display_name='tempest-ServerDiskConfigTestJSON-server-31720005',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-31720005',id=98,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='88f5b34244614321a9b6e902eaba0ece',ramdisk_id='',reservation_id='r-lpgj2mdw',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerDiskConfigTestJSON-749654875',owner_user_name='tempest-ServerDiskConf
igTestJSON-749654875-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:23:28Z,user_data=None,user_id='d67c136e82ad4001b000848d75eef50d',uuid=f8846499-359a-4e65-9120-56011fbc0f1d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "5c84e057-7718-465e-89ef-0ad3aa2c82b3", "address": "fa:16:3e:3a:72:e0", "network": {"id": "7c014e4e-a182-4f60-8285-20525bc99e5a", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-602234112-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "88f5b34244614321a9b6e902eaba0ece", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5c84e057-77", "ovs_interfaceid": "5c84e057-7718-465e-89ef-0ad3aa2c82b3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec  6 02:23:34 np0005548731 nova_compute[232433]: 2025-12-06 07:23:34.763 232437 DEBUG nova.network.os_vif_util [None req-aa6a685d-9a08-40fc-b5fe-e37c824637e0 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Converting VIF {"id": "5c84e057-7718-465e-89ef-0ad3aa2c82b3", "address": "fa:16:3e:3a:72:e0", "network": {"id": "7c014e4e-a182-4f60-8285-20525bc99e5a", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-602234112-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "88f5b34244614321a9b6e902eaba0ece", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5c84e057-77", "ovs_interfaceid": "5c84e057-7718-465e-89ef-0ad3aa2c82b3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 02:23:34 np0005548731 nova_compute[232433]: 2025-12-06 07:23:34.765 232437 DEBUG nova.network.os_vif_util [None req-aa6a685d-9a08-40fc-b5fe-e37c824637e0 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3a:72:e0,bridge_name='br-int',has_traffic_filtering=True,id=5c84e057-7718-465e-89ef-0ad3aa2c82b3,network=Network(7c014e4e-a182-4f60-8285-20525bc99e5a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5c84e057-77') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 02:23:34 np0005548731 nova_compute[232433]: 2025-12-06 07:23:34.766 232437 DEBUG nova.objects.instance [None req-aa6a685d-9a08-40fc-b5fe-e37c824637e0 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Lazy-loading 'pci_devices' on Instance uuid f8846499-359a-4e65-9120-56011fbc0f1d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:23:34 np0005548731 nova_compute[232433]: 2025-12-06 07:23:34.791 232437 DEBUG nova.virt.libvirt.driver [None req-aa6a685d-9a08-40fc-b5fe-e37c824637e0 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] [instance: f8846499-359a-4e65-9120-56011fbc0f1d] End _get_guest_xml xml=<domain type="kvm">
Dec  6 02:23:34 np0005548731 nova_compute[232433]:  <uuid>f8846499-359a-4e65-9120-56011fbc0f1d</uuid>
Dec  6 02:23:34 np0005548731 nova_compute[232433]:  <name>instance-00000062</name>
Dec  6 02:23:34 np0005548731 nova_compute[232433]:  <memory>131072</memory>
Dec  6 02:23:34 np0005548731 nova_compute[232433]:  <vcpu>1</vcpu>
Dec  6 02:23:34 np0005548731 nova_compute[232433]:  <metadata>
Dec  6 02:23:34 np0005548731 nova_compute[232433]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec  6 02:23:34 np0005548731 nova_compute[232433]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec  6 02:23:34 np0005548731 nova_compute[232433]:      <nova:name>tempest-ServerDiskConfigTestJSON-server-31720005</nova:name>
Dec  6 02:23:34 np0005548731 nova_compute[232433]:      <nova:creationTime>2025-12-06 07:23:33</nova:creationTime>
Dec  6 02:23:34 np0005548731 nova_compute[232433]:      <nova:flavor name="m1.nano">
Dec  6 02:23:34 np0005548731 nova_compute[232433]:        <nova:memory>128</nova:memory>
Dec  6 02:23:34 np0005548731 nova_compute[232433]:        <nova:disk>1</nova:disk>
Dec  6 02:23:34 np0005548731 nova_compute[232433]:        <nova:swap>0</nova:swap>
Dec  6 02:23:34 np0005548731 nova_compute[232433]:        <nova:ephemeral>0</nova:ephemeral>
Dec  6 02:23:34 np0005548731 nova_compute[232433]:        <nova:vcpus>1</nova:vcpus>
Dec  6 02:23:34 np0005548731 nova_compute[232433]:      </nova:flavor>
Dec  6 02:23:34 np0005548731 nova_compute[232433]:      <nova:owner>
Dec  6 02:23:34 np0005548731 nova_compute[232433]:        <nova:user uuid="d67c136e82ad4001b000848d75eef50d">tempest-ServerDiskConfigTestJSON-749654875-project-member</nova:user>
Dec  6 02:23:34 np0005548731 nova_compute[232433]:        <nova:project uuid="88f5b34244614321a9b6e902eaba0ece">tempest-ServerDiskConfigTestJSON-749654875</nova:project>
Dec  6 02:23:34 np0005548731 nova_compute[232433]:      </nova:owner>
Dec  6 02:23:34 np0005548731 nova_compute[232433]:      <nova:root type="image" uuid="6efab05d-c7cf-4770-a5c3-c806a2739063"/>
Dec  6 02:23:34 np0005548731 nova_compute[232433]:      <nova:ports>
Dec  6 02:23:34 np0005548731 nova_compute[232433]:        <nova:port uuid="5c84e057-7718-465e-89ef-0ad3aa2c82b3">
Dec  6 02:23:34 np0005548731 nova_compute[232433]:          <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Dec  6 02:23:34 np0005548731 nova_compute[232433]:        </nova:port>
Dec  6 02:23:34 np0005548731 nova_compute[232433]:      </nova:ports>
Dec  6 02:23:34 np0005548731 nova_compute[232433]:    </nova:instance>
Dec  6 02:23:34 np0005548731 nova_compute[232433]:  </metadata>
Dec  6 02:23:34 np0005548731 nova_compute[232433]:  <sysinfo type="smbios">
Dec  6 02:23:34 np0005548731 nova_compute[232433]:    <system>
Dec  6 02:23:34 np0005548731 nova_compute[232433]:      <entry name="manufacturer">RDO</entry>
Dec  6 02:23:34 np0005548731 nova_compute[232433]:      <entry name="product">OpenStack Compute</entry>
Dec  6 02:23:34 np0005548731 nova_compute[232433]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec  6 02:23:34 np0005548731 nova_compute[232433]:      <entry name="serial">f8846499-359a-4e65-9120-56011fbc0f1d</entry>
Dec  6 02:23:34 np0005548731 nova_compute[232433]:      <entry name="uuid">f8846499-359a-4e65-9120-56011fbc0f1d</entry>
Dec  6 02:23:34 np0005548731 nova_compute[232433]:      <entry name="family">Virtual Machine</entry>
Dec  6 02:23:34 np0005548731 nova_compute[232433]:    </system>
Dec  6 02:23:34 np0005548731 nova_compute[232433]:  </sysinfo>
Dec  6 02:23:34 np0005548731 nova_compute[232433]:  <os>
Dec  6 02:23:34 np0005548731 nova_compute[232433]:    <type arch="x86_64" machine="q35">hvm</type>
Dec  6 02:23:34 np0005548731 nova_compute[232433]:    <boot dev="hd"/>
Dec  6 02:23:34 np0005548731 nova_compute[232433]:    <smbios mode="sysinfo"/>
Dec  6 02:23:34 np0005548731 nova_compute[232433]:  </os>
Dec  6 02:23:34 np0005548731 nova_compute[232433]:  <features>
Dec  6 02:23:34 np0005548731 nova_compute[232433]:    <acpi/>
Dec  6 02:23:34 np0005548731 nova_compute[232433]:    <apic/>
Dec  6 02:23:34 np0005548731 nova_compute[232433]:    <vmcoreinfo/>
Dec  6 02:23:34 np0005548731 nova_compute[232433]:  </features>
Dec  6 02:23:34 np0005548731 nova_compute[232433]:  <clock offset="utc">
Dec  6 02:23:34 np0005548731 nova_compute[232433]:    <timer name="pit" tickpolicy="delay"/>
Dec  6 02:23:34 np0005548731 nova_compute[232433]:    <timer name="rtc" tickpolicy="catchup"/>
Dec  6 02:23:34 np0005548731 nova_compute[232433]:    <timer name="hpet" present="no"/>
Dec  6 02:23:34 np0005548731 nova_compute[232433]:  </clock>
Dec  6 02:23:34 np0005548731 nova_compute[232433]:  <cpu mode="custom" match="exact">
Dec  6 02:23:34 np0005548731 nova_compute[232433]:    <model>Nehalem</model>
Dec  6 02:23:34 np0005548731 nova_compute[232433]:    <topology sockets="1" cores="1" threads="1"/>
Dec  6 02:23:34 np0005548731 nova_compute[232433]:  </cpu>
Dec  6 02:23:34 np0005548731 nova_compute[232433]:  <devices>
Dec  6 02:23:34 np0005548731 nova_compute[232433]:    <disk type="network" device="disk">
Dec  6 02:23:34 np0005548731 nova_compute[232433]:      <driver type="raw" cache="none"/>
Dec  6 02:23:34 np0005548731 nova_compute[232433]:      <source protocol="rbd" name="vms/f8846499-359a-4e65-9120-56011fbc0f1d_disk">
Dec  6 02:23:34 np0005548731 nova_compute[232433]:        <host name="192.168.122.100" port="6789"/>
Dec  6 02:23:34 np0005548731 nova_compute[232433]:        <host name="192.168.122.102" port="6789"/>
Dec  6 02:23:34 np0005548731 nova_compute[232433]:        <host name="192.168.122.101" port="6789"/>
Dec  6 02:23:34 np0005548731 nova_compute[232433]:      </source>
Dec  6 02:23:34 np0005548731 nova_compute[232433]:      <auth username="openstack">
Dec  6 02:23:34 np0005548731 nova_compute[232433]:        <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec  6 02:23:34 np0005548731 nova_compute[232433]:      </auth>
Dec  6 02:23:34 np0005548731 nova_compute[232433]:      <target dev="vda" bus="virtio"/>
Dec  6 02:23:34 np0005548731 nova_compute[232433]:    </disk>
Dec  6 02:23:34 np0005548731 nova_compute[232433]:    <disk type="network" device="cdrom">
Dec  6 02:23:34 np0005548731 nova_compute[232433]:      <driver type="raw" cache="none"/>
Dec  6 02:23:34 np0005548731 nova_compute[232433]:      <source protocol="rbd" name="vms/f8846499-359a-4e65-9120-56011fbc0f1d_disk.config">
Dec  6 02:23:34 np0005548731 nova_compute[232433]:        <host name="192.168.122.100" port="6789"/>
Dec  6 02:23:34 np0005548731 nova_compute[232433]:        <host name="192.168.122.102" port="6789"/>
Dec  6 02:23:34 np0005548731 nova_compute[232433]:        <host name="192.168.122.101" port="6789"/>
Dec  6 02:23:34 np0005548731 nova_compute[232433]:      </source>
Dec  6 02:23:34 np0005548731 nova_compute[232433]:      <auth username="openstack">
Dec  6 02:23:34 np0005548731 nova_compute[232433]:        <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec  6 02:23:34 np0005548731 nova_compute[232433]:      </auth>
Dec  6 02:23:34 np0005548731 nova_compute[232433]:      <target dev="sda" bus="sata"/>
Dec  6 02:23:34 np0005548731 nova_compute[232433]:    </disk>
Dec  6 02:23:34 np0005548731 nova_compute[232433]:    <interface type="ethernet">
Dec  6 02:23:34 np0005548731 nova_compute[232433]:      <mac address="fa:16:3e:3a:72:e0"/>
Dec  6 02:23:34 np0005548731 nova_compute[232433]:      <model type="virtio"/>
Dec  6 02:23:34 np0005548731 nova_compute[232433]:      <driver name="vhost" rx_queue_size="512"/>
Dec  6 02:23:34 np0005548731 nova_compute[232433]:      <mtu size="1442"/>
Dec  6 02:23:34 np0005548731 nova_compute[232433]:      <target dev="tap5c84e057-77"/>
Dec  6 02:23:34 np0005548731 nova_compute[232433]:    </interface>
Dec  6 02:23:34 np0005548731 nova_compute[232433]:    <serial type="pty">
Dec  6 02:23:34 np0005548731 nova_compute[232433]:      <log file="/var/lib/nova/instances/f8846499-359a-4e65-9120-56011fbc0f1d/console.log" append="off"/>
Dec  6 02:23:34 np0005548731 nova_compute[232433]:    </serial>
Dec  6 02:23:34 np0005548731 nova_compute[232433]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec  6 02:23:34 np0005548731 nova_compute[232433]:    <video>
Dec  6 02:23:34 np0005548731 nova_compute[232433]:      <model type="virtio"/>
Dec  6 02:23:34 np0005548731 nova_compute[232433]:    </video>
Dec  6 02:23:34 np0005548731 nova_compute[232433]:    <input type="tablet" bus="usb"/>
Dec  6 02:23:34 np0005548731 nova_compute[232433]:    <rng model="virtio">
Dec  6 02:23:34 np0005548731 nova_compute[232433]:      <backend model="random">/dev/urandom</backend>
Dec  6 02:23:34 np0005548731 nova_compute[232433]:    </rng>
Dec  6 02:23:34 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root"/>
Dec  6 02:23:34 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:23:34 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:23:34 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:23:34 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:23:34 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:23:34 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:23:34 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:23:34 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:23:34 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:23:34 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:23:34 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:23:34 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:23:34 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:23:34 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:23:34 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:23:34 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:23:34 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:23:34 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:23:34 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:23:34 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:23:34 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:23:34 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:23:34 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:23:34 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:23:34 np0005548731 nova_compute[232433]:    <controller type="usb" index="0"/>
Dec  6 02:23:34 np0005548731 nova_compute[232433]:    <memballoon model="virtio">
Dec  6 02:23:34 np0005548731 nova_compute[232433]:      <stats period="10"/>
Dec  6 02:23:34 np0005548731 nova_compute[232433]:    </memballoon>
Dec  6 02:23:34 np0005548731 nova_compute[232433]:  </devices>
Dec  6 02:23:34 np0005548731 nova_compute[232433]: </domain>
Dec  6 02:23:34 np0005548731 nova_compute[232433]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec  6 02:23:34 np0005548731 nova_compute[232433]: 2025-12-06 07:23:34.793 232437 DEBUG nova.compute.manager [None req-aa6a685d-9a08-40fc-b5fe-e37c824637e0 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] [instance: f8846499-359a-4e65-9120-56011fbc0f1d] Preparing to wait for external event network-vif-plugged-5c84e057-7718-465e-89ef-0ad3aa2c82b3 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec  6 02:23:34 np0005548731 nova_compute[232433]: 2025-12-06 07:23:34.793 232437 DEBUG oslo_concurrency.lockutils [None req-aa6a685d-9a08-40fc-b5fe-e37c824637e0 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Acquiring lock "f8846499-359a-4e65-9120-56011fbc0f1d-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:23:34 np0005548731 nova_compute[232433]: 2025-12-06 07:23:34.794 232437 DEBUG oslo_concurrency.lockutils [None req-aa6a685d-9a08-40fc-b5fe-e37c824637e0 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Lock "f8846499-359a-4e65-9120-56011fbc0f1d-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:23:34 np0005548731 nova_compute[232433]: 2025-12-06 07:23:34.794 232437 DEBUG oslo_concurrency.lockutils [None req-aa6a685d-9a08-40fc-b5fe-e37c824637e0 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Lock "f8846499-359a-4e65-9120-56011fbc0f1d-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:23:34 np0005548731 nova_compute[232433]: 2025-12-06 07:23:34.795 232437 DEBUG nova.virt.libvirt.vif [None req-aa6a685d-9a08-40fc-b5fe-e37c824637e0 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T07:23:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-31720005',display_name='tempest-ServerDiskConfigTestJSON-server-31720005',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-31720005',id=98,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='88f5b34244614321a9b6e902eaba0ece',ramdisk_id='',reservation_id='r-lpgj2mdw',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerDiskConfigTestJSON-749654875',owner_user_name='tempest-ServerDiskConfigTestJSON-749654875-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:23:28Z,user_data=None,user_id='d67c136e82ad4001b000848d75eef50d',uuid=f8846499-359a-4e65-9120-56011fbc0f1d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "5c84e057-7718-465e-89ef-0ad3aa2c82b3", "address": "fa:16:3e:3a:72:e0", "network": {"id": "7c014e4e-a182-4f60-8285-20525bc99e5a", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-602234112-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "88f5b34244614321a9b6e902eaba0ece", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5c84e057-77", "ovs_interfaceid": "5c84e057-7718-465e-89ef-0ad3aa2c82b3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec  6 02:23:34 np0005548731 nova_compute[232433]: 2025-12-06 07:23:34.795 232437 DEBUG nova.network.os_vif_util [None req-aa6a685d-9a08-40fc-b5fe-e37c824637e0 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Converting VIF {"id": "5c84e057-7718-465e-89ef-0ad3aa2c82b3", "address": "fa:16:3e:3a:72:e0", "network": {"id": "7c014e4e-a182-4f60-8285-20525bc99e5a", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-602234112-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "88f5b34244614321a9b6e902eaba0ece", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5c84e057-77", "ovs_interfaceid": "5c84e057-7718-465e-89ef-0ad3aa2c82b3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 02:23:34 np0005548731 nova_compute[232433]: 2025-12-06 07:23:34.796 232437 DEBUG nova.network.os_vif_util [None req-aa6a685d-9a08-40fc-b5fe-e37c824637e0 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3a:72:e0,bridge_name='br-int',has_traffic_filtering=True,id=5c84e057-7718-465e-89ef-0ad3aa2c82b3,network=Network(7c014e4e-a182-4f60-8285-20525bc99e5a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5c84e057-77') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 02:23:34 np0005548731 nova_compute[232433]: 2025-12-06 07:23:34.796 232437 DEBUG os_vif [None req-aa6a685d-9a08-40fc-b5fe-e37c824637e0 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:3a:72:e0,bridge_name='br-int',has_traffic_filtering=True,id=5c84e057-7718-465e-89ef-0ad3aa2c82b3,network=Network(7c014e4e-a182-4f60-8285-20525bc99e5a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5c84e057-77') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec  6 02:23:34 np0005548731 nova_compute[232433]: 2025-12-06 07:23:34.797 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:23:34 np0005548731 nova_compute[232433]: 2025-12-06 07:23:34.798 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:23:34 np0005548731 nova_compute[232433]: 2025-12-06 07:23:34.798 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  6 02:23:34 np0005548731 nova_compute[232433]: 2025-12-06 07:23:34.801 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:23:34 np0005548731 nova_compute[232433]: 2025-12-06 07:23:34.802 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5c84e057-77, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:23:34 np0005548731 nova_compute[232433]: 2025-12-06 07:23:34.802 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap5c84e057-77, col_values=(('external_ids', {'iface-id': '5c84e057-7718-465e-89ef-0ad3aa2c82b3', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:3a:72:e0', 'vm-uuid': 'f8846499-359a-4e65-9120-56011fbc0f1d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:23:34 np0005548731 NetworkManager[49182]: <info>  [1765005814.8051] manager: (tap5c84e057-77): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/186)
Dec  6 02:23:34 np0005548731 nova_compute[232433]: 2025-12-06 07:23:34.804 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:23:34 np0005548731 nova_compute[232433]: 2025-12-06 07:23:34.806 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  6 02:23:34 np0005548731 nova_compute[232433]: 2025-12-06 07:23:34.810 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:23:34 np0005548731 nova_compute[232433]: 2025-12-06 07:23:34.811 232437 INFO os_vif [None req-aa6a685d-9a08-40fc-b5fe-e37c824637e0 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:3a:72:e0,bridge_name='br-int',has_traffic_filtering=True,id=5c84e057-7718-465e-89ef-0ad3aa2c82b3,network=Network(7c014e4e-a182-4f60-8285-20525bc99e5a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5c84e057-77')#033[00m
Dec  6 02:23:34 np0005548731 nova_compute[232433]: 2025-12-06 07:23:34.852 232437 DEBUG nova.virt.libvirt.driver [None req-aa6a685d-9a08-40fc-b5fe-e37c824637e0 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  6 02:23:34 np0005548731 nova_compute[232433]: 2025-12-06 07:23:34.852 232437 DEBUG nova.virt.libvirt.driver [None req-aa6a685d-9a08-40fc-b5fe-e37c824637e0 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  6 02:23:34 np0005548731 nova_compute[232433]: 2025-12-06 07:23:34.852 232437 DEBUG nova.virt.libvirt.driver [None req-aa6a685d-9a08-40fc-b5fe-e37c824637e0 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] No VIF found with MAC fa:16:3e:3a:72:e0, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec  6 02:23:34 np0005548731 nova_compute[232433]: 2025-12-06 07:23:34.853 232437 INFO nova.virt.libvirt.driver [None req-aa6a685d-9a08-40fc-b5fe-e37c824637e0 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] [instance: f8846499-359a-4e65-9120-56011fbc0f1d] Using config drive#033[00m
Dec  6 02:23:34 np0005548731 nova_compute[232433]: 2025-12-06 07:23:34.877 232437 DEBUG nova.storage.rbd_utils [None req-aa6a685d-9a08-40fc-b5fe-e37c824637e0 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] rbd image f8846499-359a-4e65-9120-56011fbc0f1d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:23:34 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:23:34 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000025s ======
Dec  6 02:23:34 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:23:34.969 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Dec  6 02:23:35 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:23:35 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:23:35 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:23:35.607 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:23:35 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 02:23:35 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 02:23:35 np0005548731 podman[271275]: 2025-12-06 07:23:35.892323959 +0000 UTC m=+0.054396073 container health_status 26c2ce9bdb83c6db5ccad7f5b2a7cb4e9354ab0235ad2f942ea42c8feda2142b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, managed_by=edpm_ansible, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec  6 02:23:35 np0005548731 podman[271277]: 2025-12-06 07:23:35.92232941 +0000 UTC m=+0.079016589 container health_status ae4fd08cdec31d6ae2a6b91ffff7deb6ca5a8375cb52ee4b685cc6f25796824e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Dec  6 02:23:35 np0005548731 podman[271276]: 2025-12-06 07:23:35.941377678 +0000 UTC m=+0.103438452 container health_status a118c3becf56cfbcae208065771ebf10a65162bb7b96389971293e534823fb78 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller)
Dec  6 02:23:35 np0005548731 nova_compute[232433]: 2025-12-06 07:23:35.950 232437 INFO nova.virt.libvirt.driver [None req-aa6a685d-9a08-40fc-b5fe-e37c824637e0 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] [instance: f8846499-359a-4e65-9120-56011fbc0f1d] Creating config drive at /var/lib/nova/instances/f8846499-359a-4e65-9120-56011fbc0f1d/disk.config#033[00m
Dec  6 02:23:35 np0005548731 nova_compute[232433]: 2025-12-06 07:23:35.956 232437 DEBUG oslo_concurrency.processutils [None req-aa6a685d-9a08-40fc-b5fe-e37c824637e0 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/f8846499-359a-4e65-9120-56011fbc0f1d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpus7ofvch execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:23:36 np0005548731 nova_compute[232433]: 2025-12-06 07:23:36.092 232437 DEBUG oslo_concurrency.processutils [None req-aa6a685d-9a08-40fc-b5fe-e37c824637e0 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/f8846499-359a-4e65-9120-56011fbc0f1d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpus7ofvch" returned: 0 in 0.136s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:23:36 np0005548731 nova_compute[232433]: 2025-12-06 07:23:36.136 232437 DEBUG nova.storage.rbd_utils [None req-aa6a685d-9a08-40fc-b5fe-e37c824637e0 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] rbd image f8846499-359a-4e65-9120-56011fbc0f1d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:23:36 np0005548731 nova_compute[232433]: 2025-12-06 07:23:36.139 232437 DEBUG oslo_concurrency.processutils [None req-aa6a685d-9a08-40fc-b5fe-e37c824637e0 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/f8846499-359a-4e65-9120-56011fbc0f1d/disk.config f8846499-359a-4e65-9120-56011fbc0f1d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:23:36 np0005548731 nova_compute[232433]: 2025-12-06 07:23:36.663 232437 DEBUG oslo_concurrency.processutils [None req-aa6a685d-9a08-40fc-b5fe-e37c824637e0 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/f8846499-359a-4e65-9120-56011fbc0f1d/disk.config f8846499-359a-4e65-9120-56011fbc0f1d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.524s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:23:36 np0005548731 nova_compute[232433]: 2025-12-06 07:23:36.664 232437 INFO nova.virt.libvirt.driver [None req-aa6a685d-9a08-40fc-b5fe-e37c824637e0 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] [instance: f8846499-359a-4e65-9120-56011fbc0f1d] Deleting local config drive /var/lib/nova/instances/f8846499-359a-4e65-9120-56011fbc0f1d/disk.config because it was imported into RBD.#033[00m
Dec  6 02:23:36 np0005548731 nova_compute[232433]: 2025-12-06 07:23:36.666 232437 DEBUG nova.network.neutron [req-73e9febf-06ef-44b7-aa86-bf52e4d5f885 req-0ad93acc-35b2-4a7b-82c4-ac04b980e474 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f8846499-359a-4e65-9120-56011fbc0f1d] Updated VIF entry in instance network info cache for port 5c84e057-7718-465e-89ef-0ad3aa2c82b3. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  6 02:23:36 np0005548731 nova_compute[232433]: 2025-12-06 07:23:36.666 232437 DEBUG nova.network.neutron [req-73e9febf-06ef-44b7-aa86-bf52e4d5f885 req-0ad93acc-35b2-4a7b-82c4-ac04b980e474 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f8846499-359a-4e65-9120-56011fbc0f1d] Updating instance_info_cache with network_info: [{"id": "5c84e057-7718-465e-89ef-0ad3aa2c82b3", "address": "fa:16:3e:3a:72:e0", "network": {"id": "7c014e4e-a182-4f60-8285-20525bc99e5a", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-602234112-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "88f5b34244614321a9b6e902eaba0ece", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5c84e057-77", "ovs_interfaceid": "5c84e057-7718-465e-89ef-0ad3aa2c82b3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 02:23:36 np0005548731 nova_compute[232433]: 2025-12-06 07:23:36.684 232437 DEBUG oslo_concurrency.lockutils [req-73e9febf-06ef-44b7-aa86-bf52e4d5f885 req-0ad93acc-35b2-4a7b-82c4-ac04b980e474 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-f8846499-359a-4e65-9120-56011fbc0f1d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 02:23:36 np0005548731 NetworkManager[49182]: <info>  [1765005816.7260] manager: (tap5c84e057-77): new Tun device (/org/freedesktop/NetworkManager/Devices/187)
Dec  6 02:23:36 np0005548731 kernel: tap5c84e057-77: entered promiscuous mode
Dec  6 02:23:36 np0005548731 nova_compute[232433]: 2025-12-06 07:23:36.728 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:23:36 np0005548731 ovn_controller[133927]: 2025-12-06T07:23:36Z|00365|binding|INFO|Claiming lport 5c84e057-7718-465e-89ef-0ad3aa2c82b3 for this chassis.
Dec  6 02:23:36 np0005548731 ovn_controller[133927]: 2025-12-06T07:23:36Z|00366|binding|INFO|5c84e057-7718-465e-89ef-0ad3aa2c82b3: Claiming fa:16:3e:3a:72:e0 10.100.0.3
Dec  6 02:23:36 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:23:36.751 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3a:72:e0 10.100.0.3'], port_security=['fa:16:3e:3a:72:e0 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'f8846499-359a-4e65-9120-56011fbc0f1d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7c014e4e-a182-4f60-8285-20525bc99e5a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '88f5b34244614321a9b6e902eaba0ece', 'neutron:revision_number': '2', 'neutron:security_group_ids': '562c0019-973b-497e-ab29-636b40b9ed6d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7228f8e4-751e-45fe-ae64-cd2ffef9b9bb, chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], logical_port=5c84e057-7718-465e-89ef-0ad3aa2c82b3) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 02:23:36 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:23:36.752 143965 INFO neutron.agent.ovn.metadata.agent [-] Port 5c84e057-7718-465e-89ef-0ad3aa2c82b3 in datapath 7c014e4e-a182-4f60-8285-20525bc99e5a bound to our chassis#033[00m
Dec  6 02:23:36 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:23:36.754 143965 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 7c014e4e-a182-4f60-8285-20525bc99e5a#033[00m
Dec  6 02:23:36 np0005548731 systemd-udevd[271388]: Network interface NamePolicy= disabled on kernel command line.
Dec  6 02:23:36 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:23:36.766 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[cd2e6269-805c-4f98-bbb3-08d3c907e71f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:23:36 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:23:36.768 143965 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap7c014e4e-a1 in ovnmeta-7c014e4e-a182-4f60-8285-20525bc99e5a namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Dec  6 02:23:36 np0005548731 systemd-machined[195355]: New machine qemu-39-instance-00000062.
Dec  6 02:23:36 np0005548731 NetworkManager[49182]: <info>  [1765005816.7696] device (tap5c84e057-77): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec  6 02:23:36 np0005548731 NetworkManager[49182]: <info>  [1765005816.7707] device (tap5c84e057-77): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec  6 02:23:36 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:23:36.770 236668 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap7c014e4e-a0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Dec  6 02:23:36 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:23:36.770 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[7d503456-eed9-436b-a00b-00ef9e69268c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:23:36 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:23:36.771 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[9209c734-e64e-4d44-b4a2-4ace0cf66b42]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:23:36 np0005548731 systemd[1]: Started Virtual Machine qemu-39-instance-00000062.
Dec  6 02:23:36 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:23:36.785 144079 DEBUG oslo.privsep.daemon [-] privsep: reply[7b30728f-ee5c-4778-ae59-05bff3965931]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:23:36 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:23:36.816 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[8f96371a-5070-4097-9efd-8bb5d65b1604]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:23:36 np0005548731 nova_compute[232433]: 2025-12-06 07:23:36.831 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:23:36 np0005548731 ovn_controller[133927]: 2025-12-06T07:23:36Z|00367|binding|INFO|Setting lport 5c84e057-7718-465e-89ef-0ad3aa2c82b3 ovn-installed in OVS
Dec  6 02:23:36 np0005548731 ovn_controller[133927]: 2025-12-06T07:23:36Z|00368|binding|INFO|Setting lport 5c84e057-7718-465e-89ef-0ad3aa2c82b3 up in Southbound
Dec  6 02:23:36 np0005548731 nova_compute[232433]: 2025-12-06 07:23:36.836 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:23:36 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:23:36.845 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[0c2c227d-d761-4377-a099-26d0f47ac728]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:23:36 np0005548731 NetworkManager[49182]: <info>  [1765005816.8522] manager: (tap7c014e4e-a0): new Veth device (/org/freedesktop/NetworkManager/Devices/188)
Dec  6 02:23:36 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:23:36.853 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[5705749c-207e-4e17-bd09-a4088cfd9c06]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:23:36 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:23:36.885 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[d4c60fbc-82dd-4dca-a4c8-a4422eab6148]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:23:36 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:23:36.888 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[86f4891b-1671-44af-b53b-e82afb3e2461]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:23:36 np0005548731 NetworkManager[49182]: <info>  [1765005816.9136] device (tap7c014e4e-a0): carrier: link connected
Dec  6 02:23:36 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:23:36.918 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[bbef198a-0e8c-41d1-937e-2b9974d6e69a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:23:36 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:23:36.933 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[ed2cda16-993b-440d-848d-8560428d8991]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7c014e4e-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:08:14:1c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 118], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 607892, 'reachable_time': 24312, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 271424, 'error': None, 'target': 'ovnmeta-7c014e4e-a182-4f60-8285-20525bc99e5a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:23:36 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:23:36.948 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[540cdeb4-4b31-4049-9fab-be719206e201]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe08:141c'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 607892, 'tstamp': 607892}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 271425, 'error': None, 'target': 'ovnmeta-7c014e4e-a182-4f60-8285-20525bc99e5a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:23:36 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:23:36.962 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[1d95f654-7829-4ece-b44f-b568d13269fd]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7c014e4e-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:08:14:1c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 118], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 607892, 'reachable_time': 24312, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 271426, 'error': None, 'target': 'ovnmeta-7c014e4e-a182-4f60-8285-20525bc99e5a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:23:36 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:23:36 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:23:36 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:23:36.971 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:23:36 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:23:36.990 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[da89fafc-4c69-41ca-9083-a502a719972b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:23:37 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:23:37.040 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[e14d540a-5f46-4a5e-9d87-326a2a20474a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:23:37 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:23:37.042 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7c014e4e-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:23:37 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:23:37.042 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  6 02:23:37 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:23:37.042 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7c014e4e-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:23:37 np0005548731 NetworkManager[49182]: <info>  [1765005817.0447] manager: (tap7c014e4e-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/189)
Dec  6 02:23:37 np0005548731 nova_compute[232433]: 2025-12-06 07:23:37.044 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:23:37 np0005548731 kernel: tap7c014e4e-a0: entered promiscuous mode
Dec  6 02:23:37 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:23:37.048 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap7c014e4e-a0, col_values=(('external_ids', {'iface-id': 'd8dd1a7d-045a-42a3-8829-567c43985ae0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:23:37 np0005548731 ovn_controller[133927]: 2025-12-06T07:23:37Z|00369|binding|INFO|Releasing lport d8dd1a7d-045a-42a3-8829-567c43985ae0 from this chassis (sb_readonly=0)
Dec  6 02:23:37 np0005548731 nova_compute[232433]: 2025-12-06 07:23:37.049 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:23:37 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:23:37.052 143965 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/7c014e4e-a182-4f60-8285-20525bc99e5a.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/7c014e4e-a182-4f60-8285-20525bc99e5a.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Dec  6 02:23:37 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:23:37.052 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[e9f8276f-380c-4825-9f3a-192328361ac7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:23:37 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:23:37.053 143965 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec  6 02:23:37 np0005548731 ovn_metadata_agent[143960]: global
Dec  6 02:23:37 np0005548731 ovn_metadata_agent[143960]:    log         /dev/log local0 debug
Dec  6 02:23:37 np0005548731 ovn_metadata_agent[143960]:    log-tag     haproxy-metadata-proxy-7c014e4e-a182-4f60-8285-20525bc99e5a
Dec  6 02:23:37 np0005548731 ovn_metadata_agent[143960]:    user        root
Dec  6 02:23:37 np0005548731 ovn_metadata_agent[143960]:    group       root
Dec  6 02:23:37 np0005548731 ovn_metadata_agent[143960]:    maxconn     1024
Dec  6 02:23:37 np0005548731 ovn_metadata_agent[143960]:    pidfile     /var/lib/neutron/external/pids/7c014e4e-a182-4f60-8285-20525bc99e5a.pid.haproxy
Dec  6 02:23:37 np0005548731 ovn_metadata_agent[143960]:    daemon
Dec  6 02:23:37 np0005548731 ovn_metadata_agent[143960]: 
Dec  6 02:23:37 np0005548731 ovn_metadata_agent[143960]: defaults
Dec  6 02:23:37 np0005548731 ovn_metadata_agent[143960]:    log global
Dec  6 02:23:37 np0005548731 ovn_metadata_agent[143960]:    mode http
Dec  6 02:23:37 np0005548731 ovn_metadata_agent[143960]:    option httplog
Dec  6 02:23:37 np0005548731 ovn_metadata_agent[143960]:    option dontlognull
Dec  6 02:23:37 np0005548731 ovn_metadata_agent[143960]:    option http-server-close
Dec  6 02:23:37 np0005548731 ovn_metadata_agent[143960]:    option forwardfor
Dec  6 02:23:37 np0005548731 ovn_metadata_agent[143960]:    retries                 3
Dec  6 02:23:37 np0005548731 ovn_metadata_agent[143960]:    timeout http-request    30s
Dec  6 02:23:37 np0005548731 ovn_metadata_agent[143960]:    timeout connect         30s
Dec  6 02:23:37 np0005548731 ovn_metadata_agent[143960]:    timeout client          32s
Dec  6 02:23:37 np0005548731 ovn_metadata_agent[143960]:    timeout server          32s
Dec  6 02:23:37 np0005548731 ovn_metadata_agent[143960]:    timeout http-keep-alive 30s
Dec  6 02:23:37 np0005548731 ovn_metadata_agent[143960]: 
Dec  6 02:23:37 np0005548731 ovn_metadata_agent[143960]: 
Dec  6 02:23:37 np0005548731 ovn_metadata_agent[143960]: listen listener
Dec  6 02:23:37 np0005548731 ovn_metadata_agent[143960]:    bind 169.254.169.254:80
Dec  6 02:23:37 np0005548731 ovn_metadata_agent[143960]:    server metadata /var/lib/neutron/metadata_proxy
Dec  6 02:23:37 np0005548731 ovn_metadata_agent[143960]:    http-request add-header X-OVN-Network-ID 7c014e4e-a182-4f60-8285-20525bc99e5a
Dec  6 02:23:37 np0005548731 ovn_metadata_agent[143960]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Dec  6 02:23:37 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:23:37.054 143965 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-7c014e4e-a182-4f60-8285-20525bc99e5a', 'env', 'PROCESS_TAG=haproxy-7c014e4e-a182-4f60-8285-20525bc99e5a', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/7c014e4e-a182-4f60-8285-20525bc99e5a.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Dec  6 02:23:37 np0005548731 nova_compute[232433]: 2025-12-06 07:23:37.062 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:23:37 np0005548731 nova_compute[232433]: 2025-12-06 07:23:37.114 232437 DEBUG nova.compute.manager [req-64e6d3fd-6c81-4fa4-9844-bfaea61d088b req-aa19320b-d289-463c-b8fc-86ebe9972434 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f8846499-359a-4e65-9120-56011fbc0f1d] Received event network-vif-plugged-5c84e057-7718-465e-89ef-0ad3aa2c82b3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:23:37 np0005548731 nova_compute[232433]: 2025-12-06 07:23:37.114 232437 DEBUG oslo_concurrency.lockutils [req-64e6d3fd-6c81-4fa4-9844-bfaea61d088b req-aa19320b-d289-463c-b8fc-86ebe9972434 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "f8846499-359a-4e65-9120-56011fbc0f1d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:23:37 np0005548731 nova_compute[232433]: 2025-12-06 07:23:37.115 232437 DEBUG oslo_concurrency.lockutils [req-64e6d3fd-6c81-4fa4-9844-bfaea61d088b req-aa19320b-d289-463c-b8fc-86ebe9972434 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "f8846499-359a-4e65-9120-56011fbc0f1d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:23:37 np0005548731 nova_compute[232433]: 2025-12-06 07:23:37.115 232437 DEBUG oslo_concurrency.lockutils [req-64e6d3fd-6c81-4fa4-9844-bfaea61d088b req-aa19320b-d289-463c-b8fc-86ebe9972434 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "f8846499-359a-4e65-9120-56011fbc0f1d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:23:37 np0005548731 nova_compute[232433]: 2025-12-06 07:23:37.115 232437 DEBUG nova.compute.manager [req-64e6d3fd-6c81-4fa4-9844-bfaea61d088b req-aa19320b-d289-463c-b8fc-86ebe9972434 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f8846499-359a-4e65-9120-56011fbc0f1d] Processing event network-vif-plugged-5c84e057-7718-465e-89ef-0ad3aa2c82b3 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Dec  6 02:23:37 np0005548731 nova_compute[232433]: 2025-12-06 07:23:37.199 232437 DEBUG nova.virt.driver [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Emitting event <LifecycleEvent: 1765005817.1994345, f8846499-359a-4e65-9120-56011fbc0f1d => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 02:23:37 np0005548731 nova_compute[232433]: 2025-12-06 07:23:37.200 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: f8846499-359a-4e65-9120-56011fbc0f1d] VM Started (Lifecycle Event)#033[00m
Dec  6 02:23:37 np0005548731 nova_compute[232433]: 2025-12-06 07:23:37.203 232437 DEBUG nova.compute.manager [None req-aa6a685d-9a08-40fc-b5fe-e37c824637e0 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] [instance: f8846499-359a-4e65-9120-56011fbc0f1d] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec  6 02:23:37 np0005548731 nova_compute[232433]: 2025-12-06 07:23:37.208 232437 DEBUG nova.virt.libvirt.driver [None req-aa6a685d-9a08-40fc-b5fe-e37c824637e0 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] [instance: f8846499-359a-4e65-9120-56011fbc0f1d] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Dec  6 02:23:37 np0005548731 nova_compute[232433]: 2025-12-06 07:23:37.212 232437 INFO nova.virt.libvirt.driver [-] [instance: f8846499-359a-4e65-9120-56011fbc0f1d] Instance spawned successfully.#033[00m
Dec  6 02:23:37 np0005548731 nova_compute[232433]: 2025-12-06 07:23:37.213 232437 DEBUG nova.virt.libvirt.driver [None req-aa6a685d-9a08-40fc-b5fe-e37c824637e0 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] [instance: f8846499-359a-4e65-9120-56011fbc0f1d] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Dec  6 02:23:37 np0005548731 nova_compute[232433]: 2025-12-06 07:23:37.247 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: f8846499-359a-4e65-9120-56011fbc0f1d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:23:37 np0005548731 nova_compute[232433]: 2025-12-06 07:23:37.253 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: f8846499-359a-4e65-9120-56011fbc0f1d] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  6 02:23:37 np0005548731 nova_compute[232433]: 2025-12-06 07:23:37.255 232437 DEBUG nova.virt.libvirt.driver [None req-aa6a685d-9a08-40fc-b5fe-e37c824637e0 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] [instance: f8846499-359a-4e65-9120-56011fbc0f1d] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 02:23:37 np0005548731 nova_compute[232433]: 2025-12-06 07:23:37.256 232437 DEBUG nova.virt.libvirt.driver [None req-aa6a685d-9a08-40fc-b5fe-e37c824637e0 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] [instance: f8846499-359a-4e65-9120-56011fbc0f1d] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 02:23:37 np0005548731 nova_compute[232433]: 2025-12-06 07:23:37.256 232437 DEBUG nova.virt.libvirt.driver [None req-aa6a685d-9a08-40fc-b5fe-e37c824637e0 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] [instance: f8846499-359a-4e65-9120-56011fbc0f1d] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 02:23:37 np0005548731 nova_compute[232433]: 2025-12-06 07:23:37.257 232437 DEBUG nova.virt.libvirt.driver [None req-aa6a685d-9a08-40fc-b5fe-e37c824637e0 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] [instance: f8846499-359a-4e65-9120-56011fbc0f1d] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 02:23:37 np0005548731 nova_compute[232433]: 2025-12-06 07:23:37.257 232437 DEBUG nova.virt.libvirt.driver [None req-aa6a685d-9a08-40fc-b5fe-e37c824637e0 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] [instance: f8846499-359a-4e65-9120-56011fbc0f1d] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 02:23:37 np0005548731 nova_compute[232433]: 2025-12-06 07:23:37.258 232437 DEBUG nova.virt.libvirt.driver [None req-aa6a685d-9a08-40fc-b5fe-e37c824637e0 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] [instance: f8846499-359a-4e65-9120-56011fbc0f1d] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 02:23:37 np0005548731 nova_compute[232433]: 2025-12-06 07:23:37.278 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:23:37 np0005548731 nova_compute[232433]: 2025-12-06 07:23:37.305 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: f8846499-359a-4e65-9120-56011fbc0f1d] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  6 02:23:37 np0005548731 nova_compute[232433]: 2025-12-06 07:23:37.306 232437 DEBUG nova.virt.driver [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Emitting event <LifecycleEvent: 1765005817.2004297, f8846499-359a-4e65-9120-56011fbc0f1d => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 02:23:37 np0005548731 nova_compute[232433]: 2025-12-06 07:23:37.306 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: f8846499-359a-4e65-9120-56011fbc0f1d] VM Paused (Lifecycle Event)#033[00m
Dec  6 02:23:37 np0005548731 nova_compute[232433]: 2025-12-06 07:23:37.335 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: f8846499-359a-4e65-9120-56011fbc0f1d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:23:37 np0005548731 nova_compute[232433]: 2025-12-06 07:23:37.339 232437 DEBUG nova.virt.driver [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Emitting event <LifecycleEvent: 1765005817.2070277, f8846499-359a-4e65-9120-56011fbc0f1d => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 02:23:37 np0005548731 nova_compute[232433]: 2025-12-06 07:23:37.339 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: f8846499-359a-4e65-9120-56011fbc0f1d] VM Resumed (Lifecycle Event)#033[00m
Dec  6 02:23:37 np0005548731 nova_compute[232433]: 2025-12-06 07:23:37.358 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: f8846499-359a-4e65-9120-56011fbc0f1d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:23:37 np0005548731 nova_compute[232433]: 2025-12-06 07:23:37.361 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: f8846499-359a-4e65-9120-56011fbc0f1d] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  6 02:23:37 np0005548731 nova_compute[232433]: 2025-12-06 07:23:37.382 232437 INFO nova.compute.manager [None req-aa6a685d-9a08-40fc-b5fe-e37c824637e0 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] [instance: f8846499-359a-4e65-9120-56011fbc0f1d] Took 9.01 seconds to spawn the instance on the hypervisor.#033[00m
Dec  6 02:23:37 np0005548731 nova_compute[232433]: 2025-12-06 07:23:37.382 232437 DEBUG nova.compute.manager [None req-aa6a685d-9a08-40fc-b5fe-e37c824637e0 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] [instance: f8846499-359a-4e65-9120-56011fbc0f1d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:23:37 np0005548731 nova_compute[232433]: 2025-12-06 07:23:37.394 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: f8846499-359a-4e65-9120-56011fbc0f1d] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  6 02:23:37 np0005548731 podman[271500]: 2025-12-06 07:23:37.413174552 +0000 UTC m=+0.048478495 container create f8f9f5ab31b8f5a7f90e9f6265e22d6bdfad841065c1559f0a659dc786d25c57 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7c014e4e-a182-4f60-8285-20525bc99e5a, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0)
Dec  6 02:23:37 np0005548731 systemd[1]: Started libpod-conmon-f8f9f5ab31b8f5a7f90e9f6265e22d6bdfad841065c1559f0a659dc786d25c57.scope.
Dec  6 02:23:37 np0005548731 systemd[1]: Started libcrun container.
Dec  6 02:23:37 np0005548731 nova_compute[232433]: 2025-12-06 07:23:37.481 232437 INFO nova.compute.manager [None req-aa6a685d-9a08-40fc-b5fe-e37c824637e0 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] [instance: f8846499-359a-4e65-9120-56011fbc0f1d] Took 10.06 seconds to build instance.#033[00m
Dec  6 02:23:37 np0005548731 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a6544bbdbc2262a9c128eff7dd7e15741c5250c1f630780ec6f9fdb58ff0a61c/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec  6 02:23:37 np0005548731 podman[271500]: 2025-12-06 07:23:37.388395671 +0000 UTC m=+0.023699634 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec  6 02:23:37 np0005548731 podman[271500]: 2025-12-06 07:23:37.497957656 +0000 UTC m=+0.133261599 container init f8f9f5ab31b8f5a7f90e9f6265e22d6bdfad841065c1559f0a659dc786d25c57 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7c014e4e-a182-4f60-8285-20525bc99e5a, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team)
Dec  6 02:23:37 np0005548731 nova_compute[232433]: 2025-12-06 07:23:37.502 232437 DEBUG oslo_concurrency.lockutils [None req-aa6a685d-9a08-40fc-b5fe-e37c824637e0 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Lock "f8846499-359a-4e65-9120-56011fbc0f1d" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.161s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:23:37 np0005548731 nova_compute[232433]: 2025-12-06 07:23:37.502 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "f8846499-359a-4e65-9120-56011fbc0f1d" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 7.197s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:23:37 np0005548731 nova_compute[232433]: 2025-12-06 07:23:37.502 232437 INFO nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] [instance: f8846499-359a-4e65-9120-56011fbc0f1d] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  6 02:23:37 np0005548731 podman[271500]: 2025-12-06 07:23:37.503507985 +0000 UTC m=+0.138811928 container start f8f9f5ab31b8f5a7f90e9f6265e22d6bdfad841065c1559f0a659dc786d25c57 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7c014e4e-a182-4f60-8285-20525bc99e5a, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3)
Dec  6 02:23:37 np0005548731 nova_compute[232433]: 2025-12-06 07:23:37.503 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "f8846499-359a-4e65-9120-56011fbc0f1d" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:23:37 np0005548731 neutron-haproxy-ovnmeta-7c014e4e-a182-4f60-8285-20525bc99e5a[271515]: [NOTICE]   (271519) : New worker (271521) forked
Dec  6 02:23:37 np0005548731 neutron-haproxy-ovnmeta-7c014e4e-a182-4f60-8285-20525bc99e5a[271515]: [NOTICE]   (271519) : Loading success.
Dec  6 02:23:37 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:23:37 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:23:37 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:23:37.611 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:23:37 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e264 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:23:38 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:23:38 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:23:38 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:23:38.972 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:23:39 np0005548731 nova_compute[232433]: 2025-12-06 07:23:39.330 232437 DEBUG nova.compute.manager [req-ac7ced09-88db-4bea-bce3-83b21db250a6 req-53d68517-ce3e-4a92-a22c-6ec7156f6ef8 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f8846499-359a-4e65-9120-56011fbc0f1d] Received event network-vif-plugged-5c84e057-7718-465e-89ef-0ad3aa2c82b3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:23:39 np0005548731 nova_compute[232433]: 2025-12-06 07:23:39.331 232437 DEBUG oslo_concurrency.lockutils [req-ac7ced09-88db-4bea-bce3-83b21db250a6 req-53d68517-ce3e-4a92-a22c-6ec7156f6ef8 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "f8846499-359a-4e65-9120-56011fbc0f1d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:23:39 np0005548731 nova_compute[232433]: 2025-12-06 07:23:39.331 232437 DEBUG oslo_concurrency.lockutils [req-ac7ced09-88db-4bea-bce3-83b21db250a6 req-53d68517-ce3e-4a92-a22c-6ec7156f6ef8 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "f8846499-359a-4e65-9120-56011fbc0f1d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:23:39 np0005548731 nova_compute[232433]: 2025-12-06 07:23:39.331 232437 DEBUG oslo_concurrency.lockutils [req-ac7ced09-88db-4bea-bce3-83b21db250a6 req-53d68517-ce3e-4a92-a22c-6ec7156f6ef8 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "f8846499-359a-4e65-9120-56011fbc0f1d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:23:39 np0005548731 nova_compute[232433]: 2025-12-06 07:23:39.331 232437 DEBUG nova.compute.manager [req-ac7ced09-88db-4bea-bce3-83b21db250a6 req-53d68517-ce3e-4a92-a22c-6ec7156f6ef8 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f8846499-359a-4e65-9120-56011fbc0f1d] No waiting events found dispatching network-vif-plugged-5c84e057-7718-465e-89ef-0ad3aa2c82b3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 02:23:39 np0005548731 nova_compute[232433]: 2025-12-06 07:23:39.331 232437 WARNING nova.compute.manager [req-ac7ced09-88db-4bea-bce3-83b21db250a6 req-53d68517-ce3e-4a92-a22c-6ec7156f6ef8 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f8846499-359a-4e65-9120-56011fbc0f1d] Received unexpected event network-vif-plugged-5c84e057-7718-465e-89ef-0ad3aa2c82b3 for instance with vm_state active and task_state None.#033[00m
Dec  6 02:23:39 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:23:39 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.002000048s ======
Dec  6 02:23:39 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:23:39.612 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000048s
Dec  6 02:23:39 np0005548731 nova_compute[232433]: 2025-12-06 07:23:39.806 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:23:40 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:23:40 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:23:40 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:23:40.974 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:23:41 np0005548731 nova_compute[232433]: 2025-12-06 07:23:41.249 232437 INFO nova.compute.manager [None req-04a700b2-38f5-4ded-b4c1-6524435e01a2 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] [instance: f8846499-359a-4e65-9120-56011fbc0f1d] Rebuilding instance#033[00m
Dec  6 02:23:41 np0005548731 nova_compute[232433]: 2025-12-06 07:23:41.550 232437 DEBUG nova.objects.instance [None req-04a700b2-38f5-4ded-b4c1-6524435e01a2 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Lazy-loading 'trusted_certs' on Instance uuid f8846499-359a-4e65-9120-56011fbc0f1d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:23:41 np0005548731 nova_compute[232433]: 2025-12-06 07:23:41.566 232437 DEBUG nova.compute.manager [None req-04a700b2-38f5-4ded-b4c1-6524435e01a2 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] [instance: f8846499-359a-4e65-9120-56011fbc0f1d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:23:41 np0005548731 nova_compute[232433]: 2025-12-06 07:23:41.606 232437 DEBUG nova.objects.instance [None req-04a700b2-38f5-4ded-b4c1-6524435e01a2 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Lazy-loading 'pci_requests' on Instance uuid f8846499-359a-4e65-9120-56011fbc0f1d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:23:41 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:23:41 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:23:41 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:23:41.616 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:23:41 np0005548731 nova_compute[232433]: 2025-12-06 07:23:41.617 232437 DEBUG nova.objects.instance [None req-04a700b2-38f5-4ded-b4c1-6524435e01a2 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Lazy-loading 'pci_devices' on Instance uuid f8846499-359a-4e65-9120-56011fbc0f1d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:23:41 np0005548731 nova_compute[232433]: 2025-12-06 07:23:41.634 232437 DEBUG nova.objects.instance [None req-04a700b2-38f5-4ded-b4c1-6524435e01a2 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Lazy-loading 'resources' on Instance uuid f8846499-359a-4e65-9120-56011fbc0f1d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:23:41 np0005548731 nova_compute[232433]: 2025-12-06 07:23:41.647 232437 DEBUG nova.objects.instance [None req-04a700b2-38f5-4ded-b4c1-6524435e01a2 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Lazy-loading 'migration_context' on Instance uuid f8846499-359a-4e65-9120-56011fbc0f1d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:23:41 np0005548731 nova_compute[232433]: 2025-12-06 07:23:41.657 232437 DEBUG nova.objects.instance [None req-04a700b2-38f5-4ded-b4c1-6524435e01a2 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] [instance: f8846499-359a-4e65-9120-56011fbc0f1d] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032#033[00m
Dec  6 02:23:41 np0005548731 nova_compute[232433]: 2025-12-06 07:23:41.660 232437 DEBUG nova.virt.libvirt.driver [None req-04a700b2-38f5-4ded-b4c1-6524435e01a2 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] [instance: f8846499-359a-4e65-9120-56011fbc0f1d] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Dec  6 02:23:42 np0005548731 nova_compute[232433]: 2025-12-06 07:23:42.317 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:23:42 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e264 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:23:42 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:23:42 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:23:42 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:23:42.976 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:23:43 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:23:43 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000025s ======
Dec  6 02:23:43 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:23:43.618 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Dec  6 02:23:44 np0005548731 nova_compute[232433]: 2025-12-06 07:23:44.808 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:23:44 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:23:44 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:23:44 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:23:44.978 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:23:45 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:23:45 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:23:45 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:23:45.621 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:23:46 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:23:46 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:23:46 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:23:46.981 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:23:47 np0005548731 nova_compute[232433]: 2025-12-06 07:23:47.319 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:23:47 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:23:47 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000025s ======
Dec  6 02:23:47 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:23:47.624 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Dec  6 02:23:47 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e264 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:23:48 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:23:48 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:23:48 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:23:48.983 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:23:49 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:23:49 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:23:49 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:23:49.627 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:23:49 np0005548731 nova_compute[232433]: 2025-12-06 07:23:49.811 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:23:50 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:23:50 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:23:50 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:23:50.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:23:51 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:23:51 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:23:51 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:23:51.629 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:23:51 np0005548731 nova_compute[232433]: 2025-12-06 07:23:51.700 232437 DEBUG nova.virt.libvirt.driver [None req-04a700b2-38f5-4ded-b4c1-6524435e01a2 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] [instance: f8846499-359a-4e65-9120-56011fbc0f1d] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101#033[00m
Dec  6 02:23:51 np0005548731 ovn_controller[133927]: 2025-12-06T07:23:51Z|00057|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:3a:72:e0 10.100.0.3
Dec  6 02:23:51 np0005548731 ovn_controller[133927]: 2025-12-06T07:23:51Z|00058|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:3a:72:e0 10.100.0.3
Dec  6 02:23:52 np0005548731 nova_compute[232433]: 2025-12-06 07:23:52.321 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:23:52 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e264 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:23:52 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:23:52 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:23:52 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:23:52.988 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:23:53 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:23:53 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:23:53 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:23:53.632 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:23:54 np0005548731 nova_compute[232433]: 2025-12-06 07:23:54.815 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:23:54 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:23:54 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000025s ======
Dec  6 02:23:54 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:23:54.990 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Dec  6 02:23:55 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:23:55 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:23:55 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:23:55.636 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:23:56 np0005548731 nova_compute[232433]: 2025-12-06 07:23:56.136 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:23:56 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:23:56 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:23:56 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:23:56.993 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:23:57 np0005548731 nova_compute[232433]: 2025-12-06 07:23:57.322 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:23:57 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:23:57 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:23:57 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:23:57.639 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:23:57 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e264 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:23:58 np0005548731 nova_compute[232433]: 2025-12-06 07:23:58.103 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:23:58 np0005548731 nova_compute[232433]: 2025-12-06 07:23:58.104 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  6 02:23:58 np0005548731 nova_compute[232433]: 2025-12-06 07:23:58.105 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  6 02:23:58 np0005548731 nova_compute[232433]: 2025-12-06 07:23:58.134 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Acquiring lock "refresh_cache-f8846499-359a-4e65-9120-56011fbc0f1d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 02:23:58 np0005548731 nova_compute[232433]: 2025-12-06 07:23:58.134 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Acquired lock "refresh_cache-f8846499-359a-4e65-9120-56011fbc0f1d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 02:23:58 np0005548731 nova_compute[232433]: 2025-12-06 07:23:58.134 232437 DEBUG nova.network.neutron [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] [instance: f8846499-359a-4e65-9120-56011fbc0f1d] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Dec  6 02:23:58 np0005548731 nova_compute[232433]: 2025-12-06 07:23:58.135 232437 DEBUG nova.objects.instance [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lazy-loading 'info_cache' on Instance uuid f8846499-359a-4e65-9120-56011fbc0f1d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:23:58 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:23:58 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:23:58 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:23:58.996 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:23:59 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:23:59 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:23:59 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:23:59.644 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:23:59 np0005548731 nova_compute[232433]: 2025-12-06 07:23:59.817 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:24:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:24:00.866 143965 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:24:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:24:00.866 143965 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:24:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:24:00.867 143965 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:24:01 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:24:01 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:24:01 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:24:00.999 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:24:01 np0005548731 nova_compute[232433]: 2025-12-06 07:24:01.180 232437 DEBUG nova.network.neutron [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] [instance: f8846499-359a-4e65-9120-56011fbc0f1d] Updating instance_info_cache with network_info: [{"id": "5c84e057-7718-465e-89ef-0ad3aa2c82b3", "address": "fa:16:3e:3a:72:e0", "network": {"id": "7c014e4e-a182-4f60-8285-20525bc99e5a", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-602234112-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "88f5b34244614321a9b6e902eaba0ece", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5c84e057-77", "ovs_interfaceid": "5c84e057-7718-465e-89ef-0ad3aa2c82b3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 02:24:01 np0005548731 kernel: tap5c84e057-77 (unregistering): left promiscuous mode
Dec  6 02:24:01 np0005548731 NetworkManager[49182]: <info>  [1765005841.2887] device (tap5c84e057-77): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec  6 02:24:01 np0005548731 ovn_controller[133927]: 2025-12-06T07:24:01Z|00370|binding|INFO|Releasing lport 5c84e057-7718-465e-89ef-0ad3aa2c82b3 from this chassis (sb_readonly=0)
Dec  6 02:24:01 np0005548731 ovn_controller[133927]: 2025-12-06T07:24:01Z|00371|binding|INFO|Setting lport 5c84e057-7718-465e-89ef-0ad3aa2c82b3 down in Southbound
Dec  6 02:24:01 np0005548731 ovn_controller[133927]: 2025-12-06T07:24:01Z|00372|binding|INFO|Removing iface tap5c84e057-77 ovn-installed in OVS
Dec  6 02:24:01 np0005548731 nova_compute[232433]: 2025-12-06 07:24:01.298 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:24:01 np0005548731 nova_compute[232433]: 2025-12-06 07:24:01.300 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:24:01 np0005548731 nova_compute[232433]: 2025-12-06 07:24:01.304 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Releasing lock "refresh_cache-f8846499-359a-4e65-9120-56011fbc0f1d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 02:24:01 np0005548731 nova_compute[232433]: 2025-12-06 07:24:01.305 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] [instance: f8846499-359a-4e65-9120-56011fbc0f1d] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Dec  6 02:24:01 np0005548731 nova_compute[232433]: 2025-12-06 07:24:01.305 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:24:01 np0005548731 nova_compute[232433]: 2025-12-06 07:24:01.305 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:24:01 np0005548731 nova_compute[232433]: 2025-12-06 07:24:01.305 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  6 02:24:01 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:24:01.307 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3a:72:e0 10.100.0.3'], port_security=['fa:16:3e:3a:72:e0 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'f8846499-359a-4e65-9120-56011fbc0f1d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7c014e4e-a182-4f60-8285-20525bc99e5a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '88f5b34244614321a9b6e902eaba0ece', 'neutron:revision_number': '4', 'neutron:security_group_ids': '562c0019-973b-497e-ab29-636b40b9ed6d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7228f8e4-751e-45fe-ae64-cd2ffef9b9bb, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], logical_port=5c84e057-7718-465e-89ef-0ad3aa2c82b3) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 02:24:01 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:24:01.309 143965 INFO neutron.agent.ovn.metadata.agent [-] Port 5c84e057-7718-465e-89ef-0ad3aa2c82b3 in datapath 7c014e4e-a182-4f60-8285-20525bc99e5a unbound from our chassis#033[00m
Dec  6 02:24:01 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:24:01.311 143965 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 7c014e4e-a182-4f60-8285-20525bc99e5a, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec  6 02:24:01 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:24:01.313 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[7a016609-b9d1-4c83-8534-90b52addc40e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:24:01 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:24:01.313 143965 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-7c014e4e-a182-4f60-8285-20525bc99e5a namespace which is not needed anymore#033[00m
Dec  6 02:24:01 np0005548731 nova_compute[232433]: 2025-12-06 07:24:01.320 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:24:01 np0005548731 systemd[1]: machine-qemu\x2d39\x2dinstance\x2d00000062.scope: Deactivated successfully.
Dec  6 02:24:01 np0005548731 systemd[1]: machine-qemu\x2d39\x2dinstance\x2d00000062.scope: Consumed 13.739s CPU time.
Dec  6 02:24:01 np0005548731 systemd-machined[195355]: Machine qemu-39-instance-00000062 terminated.
Dec  6 02:24:01 np0005548731 neutron-haproxy-ovnmeta-7c014e4e-a182-4f60-8285-20525bc99e5a[271515]: [NOTICE]   (271519) : haproxy version is 2.8.14-c23fe91
Dec  6 02:24:01 np0005548731 neutron-haproxy-ovnmeta-7c014e4e-a182-4f60-8285-20525bc99e5a[271515]: [NOTICE]   (271519) : path to executable is /usr/sbin/haproxy
Dec  6 02:24:01 np0005548731 neutron-haproxy-ovnmeta-7c014e4e-a182-4f60-8285-20525bc99e5a[271515]: [WARNING]  (271519) : Exiting Master process...
Dec  6 02:24:01 np0005548731 neutron-haproxy-ovnmeta-7c014e4e-a182-4f60-8285-20525bc99e5a[271515]: [ALERT]    (271519) : Current worker (271521) exited with code 143 (Terminated)
Dec  6 02:24:01 np0005548731 neutron-haproxy-ovnmeta-7c014e4e-a182-4f60-8285-20525bc99e5a[271515]: [WARNING]  (271519) : All workers exited. Exiting... (0)
Dec  6 02:24:01 np0005548731 systemd[1]: libpod-f8f9f5ab31b8f5a7f90e9f6265e22d6bdfad841065c1559f0a659dc786d25c57.scope: Deactivated successfully.
Dec  6 02:24:01 np0005548731 podman[271617]: 2025-12-06 07:24:01.453396963 +0000 UTC m=+0.046493885 container died f8f9f5ab31b8f5a7f90e9f6265e22d6bdfad841065c1559f0a659dc786d25c57 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7c014e4e-a182-4f60-8285-20525bc99e5a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125)
Dec  6 02:24:01 np0005548731 systemd[1]: var-lib-containers-storage-overlay-a6544bbdbc2262a9c128eff7dd7e15741c5250c1f630780ec6f9fdb58ff0a61c-merged.mount: Deactivated successfully.
Dec  6 02:24:01 np0005548731 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-f8f9f5ab31b8f5a7f90e9f6265e22d6bdfad841065c1559f0a659dc786d25c57-userdata-shm.mount: Deactivated successfully.
Dec  6 02:24:01 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:24:01 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000025s ======
Dec  6 02:24:01 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:24:01.646 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Dec  6 02:24:01 np0005548731 podman[271617]: 2025-12-06 07:24:01.692403059 +0000 UTC m=+0.285499981 container cleanup f8f9f5ab31b8f5a7f90e9f6265e22d6bdfad841065c1559f0a659dc786d25c57 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7c014e4e-a182-4f60-8285-20525bc99e5a, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Dec  6 02:24:01 np0005548731 systemd[1]: libpod-conmon-f8f9f5ab31b8f5a7f90e9f6265e22d6bdfad841065c1559f0a659dc786d25c57.scope: Deactivated successfully.
Dec  6 02:24:01 np0005548731 nova_compute[232433]: 2025-12-06 07:24:01.744 232437 INFO nova.virt.libvirt.driver [None req-04a700b2-38f5-4ded-b4c1-6524435e01a2 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] [instance: f8846499-359a-4e65-9120-56011fbc0f1d] Instance shutdown successfully after 20 seconds.#033[00m
Dec  6 02:24:01 np0005548731 nova_compute[232433]: 2025-12-06 07:24:01.749 232437 INFO nova.virt.libvirt.driver [-] [instance: f8846499-359a-4e65-9120-56011fbc0f1d] Instance destroyed successfully.#033[00m
Dec  6 02:24:01 np0005548731 nova_compute[232433]: 2025-12-06 07:24:01.753 232437 INFO nova.virt.libvirt.driver [-] [instance: f8846499-359a-4e65-9120-56011fbc0f1d] Instance destroyed successfully.#033[00m
Dec  6 02:24:01 np0005548731 nova_compute[232433]: 2025-12-06 07:24:01.754 232437 DEBUG nova.virt.libvirt.vif [None req-04a700b2-38f5-4ded-b4c1-6524435e01a2 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T07:23:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-31720005',display_name='tempest-ServerDiskConfigTestJSON-server-31720005',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-31720005',id=98,image_ref='412dd61d-1b1e-439f-b7f9-7e7c4e42924c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-06T07:23:37Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='88f5b34244614321a9b6e902eaba0ece',ramdisk_id='',reservation_id='r-lpgj2mdw',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='412dd61d-1b1e-439f-b7f9-7e7c4e42924c',image_container_format='bare',image_disk_format='qcow2',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerDiskConfigTestJSON-749654875',owner_user_name='tempest-ServerDiskConfigTestJSON-749654875-project-member'
},tags=<?>,task_state='rebuilding',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:23:40Z,user_data=None,user_id='d67c136e82ad4001b000848d75eef50d',uuid=f8846499-359a-4e65-9120-56011fbc0f1d,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "5c84e057-7718-465e-89ef-0ad3aa2c82b3", "address": "fa:16:3e:3a:72:e0", "network": {"id": "7c014e4e-a182-4f60-8285-20525bc99e5a", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-602234112-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "88f5b34244614321a9b6e902eaba0ece", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5c84e057-77", "ovs_interfaceid": "5c84e057-7718-465e-89ef-0ad3aa2c82b3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec  6 02:24:01 np0005548731 nova_compute[232433]: 2025-12-06 07:24:01.755 232437 DEBUG nova.network.os_vif_util [None req-04a700b2-38f5-4ded-b4c1-6524435e01a2 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Converting VIF {"id": "5c84e057-7718-465e-89ef-0ad3aa2c82b3", "address": "fa:16:3e:3a:72:e0", "network": {"id": "7c014e4e-a182-4f60-8285-20525bc99e5a", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-602234112-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "88f5b34244614321a9b6e902eaba0ece", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5c84e057-77", "ovs_interfaceid": "5c84e057-7718-465e-89ef-0ad3aa2c82b3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 02:24:01 np0005548731 nova_compute[232433]: 2025-12-06 07:24:01.755 232437 DEBUG nova.network.os_vif_util [None req-04a700b2-38f5-4ded-b4c1-6524435e01a2 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3a:72:e0,bridge_name='br-int',has_traffic_filtering=True,id=5c84e057-7718-465e-89ef-0ad3aa2c82b3,network=Network(7c014e4e-a182-4f60-8285-20525bc99e5a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5c84e057-77') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 02:24:01 np0005548731 nova_compute[232433]: 2025-12-06 07:24:01.756 232437 DEBUG os_vif [None req-04a700b2-38f5-4ded-b4c1-6524435e01a2 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:3a:72:e0,bridge_name='br-int',has_traffic_filtering=True,id=5c84e057-7718-465e-89ef-0ad3aa2c82b3,network=Network(7c014e4e-a182-4f60-8285-20525bc99e5a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5c84e057-77') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec  6 02:24:01 np0005548731 nova_compute[232433]: 2025-12-06 07:24:01.758 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:24:01 np0005548731 nova_compute[232433]: 2025-12-06 07:24:01.759 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5c84e057-77, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:24:01 np0005548731 nova_compute[232433]: 2025-12-06 07:24:01.760 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:24:01 np0005548731 nova_compute[232433]: 2025-12-06 07:24:01.762 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:24:01 np0005548731 nova_compute[232433]: 2025-12-06 07:24:01.765 232437 INFO os_vif [None req-04a700b2-38f5-4ded-b4c1-6524435e01a2 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:3a:72:e0,bridge_name='br-int',has_traffic_filtering=True,id=5c84e057-7718-465e-89ef-0ad3aa2c82b3,network=Network(7c014e4e-a182-4f60-8285-20525bc99e5a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5c84e057-77')#033[00m
Dec  6 02:24:01 np0005548731 podman[271657]: 2025-12-06 07:24:01.777126589 +0000 UTC m=+0.056047824 container remove f8f9f5ab31b8f5a7f90e9f6265e22d6bdfad841065c1559f0a659dc786d25c57 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7c014e4e-a182-4f60-8285-20525bc99e5a, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec  6 02:24:01 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:24:01.783 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[79b337a6-d9ef-4383-a796-b5d0e2f5b4d2]: (4, ('Sat Dec  6 07:24:01 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-7c014e4e-a182-4f60-8285-20525bc99e5a (f8f9f5ab31b8f5a7f90e9f6265e22d6bdfad841065c1559f0a659dc786d25c57)\nf8f9f5ab31b8f5a7f90e9f6265e22d6bdfad841065c1559f0a659dc786d25c57\nSat Dec  6 07:24:01 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-7c014e4e-a182-4f60-8285-20525bc99e5a (f8f9f5ab31b8f5a7f90e9f6265e22d6bdfad841065c1559f0a659dc786d25c57)\nf8f9f5ab31b8f5a7f90e9f6265e22d6bdfad841065c1559f0a659dc786d25c57\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:24:01 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:24:01.786 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[8ffb4234-2c4c-4eb8-b1d5-9a5096aa02b3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:24:01 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:24:01.787 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7c014e4e-a0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:24:01 np0005548731 nova_compute[232433]: 2025-12-06 07:24:01.789 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:24:01 np0005548731 kernel: tap7c014e4e-a0: left promiscuous mode
Dec  6 02:24:01 np0005548731 nova_compute[232433]: 2025-12-06 07:24:01.804 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:24:01 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:24:01.807 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[0cb27864-20e9-45cb-a603-9c90356c2c5e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:24:01 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:24:01.821 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[6513a609-56fd-4df5-89e4-708cf6c52189]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:24:01 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:24:01.823 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[ce7d8d7c-6691-4b88-8573-dea58847aa13]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:24:01 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:24:01.837 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[423aa627-9e3f-4262-a51a-c002ff792e74]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 607884, 'reachable_time': 42191, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 271690, 'error': None, 'target': 'ovnmeta-7c014e4e-a182-4f60-8285-20525bc99e5a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:24:01 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:24:01.841 144079 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-7c014e4e-a182-4f60-8285-20525bc99e5a deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Dec  6 02:24:01 np0005548731 systemd[1]: run-netns-ovnmeta\x2d7c014e4e\x2da182\x2d4f60\x2d8285\x2d20525bc99e5a.mount: Deactivated successfully.
Dec  6 02:24:01 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:24:01.841 144079 DEBUG oslo.privsep.daemon [-] privsep: reply[291f9ebd-4b15-4de1-b092-2e13423ece07]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:24:02 np0005548731 nova_compute[232433]: 2025-12-06 07:24:02.240 232437 DEBUG nova.compute.manager [req-fdf02646-2c28-4113-8cfd-2b71d0dd0b73 req-9bda775c-8485-4016-a07e-4caf5f626260 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f8846499-359a-4e65-9120-56011fbc0f1d] Received event network-vif-unplugged-5c84e057-7718-465e-89ef-0ad3aa2c82b3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:24:02 np0005548731 nova_compute[232433]: 2025-12-06 07:24:02.241 232437 DEBUG oslo_concurrency.lockutils [req-fdf02646-2c28-4113-8cfd-2b71d0dd0b73 req-9bda775c-8485-4016-a07e-4caf5f626260 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "f8846499-359a-4e65-9120-56011fbc0f1d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:24:02 np0005548731 nova_compute[232433]: 2025-12-06 07:24:02.241 232437 DEBUG oslo_concurrency.lockutils [req-fdf02646-2c28-4113-8cfd-2b71d0dd0b73 req-9bda775c-8485-4016-a07e-4caf5f626260 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "f8846499-359a-4e65-9120-56011fbc0f1d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:24:02 np0005548731 nova_compute[232433]: 2025-12-06 07:24:02.241 232437 DEBUG oslo_concurrency.lockutils [req-fdf02646-2c28-4113-8cfd-2b71d0dd0b73 req-9bda775c-8485-4016-a07e-4caf5f626260 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "f8846499-359a-4e65-9120-56011fbc0f1d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:24:02 np0005548731 nova_compute[232433]: 2025-12-06 07:24:02.241 232437 DEBUG nova.compute.manager [req-fdf02646-2c28-4113-8cfd-2b71d0dd0b73 req-9bda775c-8485-4016-a07e-4caf5f626260 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f8846499-359a-4e65-9120-56011fbc0f1d] No waiting events found dispatching network-vif-unplugged-5c84e057-7718-465e-89ef-0ad3aa2c82b3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 02:24:02 np0005548731 nova_compute[232433]: 2025-12-06 07:24:02.242 232437 WARNING nova.compute.manager [req-fdf02646-2c28-4113-8cfd-2b71d0dd0b73 req-9bda775c-8485-4016-a07e-4caf5f626260 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f8846499-359a-4e65-9120-56011fbc0f1d] Received unexpected event network-vif-unplugged-5c84e057-7718-465e-89ef-0ad3aa2c82b3 for instance with vm_state active and task_state rebuilding.#033[00m
Dec  6 02:24:02 np0005548731 nova_compute[232433]: 2025-12-06 07:24:02.324 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:24:02 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e264 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:24:03 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:24:03 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:24:03 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:24:03.002 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:24:03 np0005548731 nova_compute[232433]: 2025-12-06 07:24:03.105 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:24:03 np0005548731 nova_compute[232433]: 2025-12-06 07:24:03.628 232437 INFO nova.virt.libvirt.driver [None req-04a700b2-38f5-4ded-b4c1-6524435e01a2 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] [instance: f8846499-359a-4e65-9120-56011fbc0f1d] Deleting instance files /var/lib/nova/instances/f8846499-359a-4e65-9120-56011fbc0f1d_del#033[00m
Dec  6 02:24:03 np0005548731 nova_compute[232433]: 2025-12-06 07:24:03.629 232437 INFO nova.virt.libvirt.driver [None req-04a700b2-38f5-4ded-b4c1-6524435e01a2 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] [instance: f8846499-359a-4e65-9120-56011fbc0f1d] Deletion of /var/lib/nova/instances/f8846499-359a-4e65-9120-56011fbc0f1d_del complete#033[00m
Dec  6 02:24:03 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:24:03 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:24:03 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:24:03.650 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:24:03 np0005548731 nova_compute[232433]: 2025-12-06 07:24:03.994 232437 DEBUG nova.virt.libvirt.driver [None req-04a700b2-38f5-4ded-b4c1-6524435e01a2 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] [instance: f8846499-359a-4e65-9120-56011fbc0f1d] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Dec  6 02:24:03 np0005548731 nova_compute[232433]: 2025-12-06 07:24:03.995 232437 INFO nova.virt.libvirt.driver [None req-04a700b2-38f5-4ded-b4c1-6524435e01a2 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] [instance: f8846499-359a-4e65-9120-56011fbc0f1d] Creating image(s)#033[00m
Dec  6 02:24:04 np0005548731 nova_compute[232433]: 2025-12-06 07:24:04.015 232437 DEBUG nova.storage.rbd_utils [None req-04a700b2-38f5-4ded-b4c1-6524435e01a2 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] rbd image f8846499-359a-4e65-9120-56011fbc0f1d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:24:04 np0005548731 nova_compute[232433]: 2025-12-06 07:24:04.037 232437 DEBUG nova.storage.rbd_utils [None req-04a700b2-38f5-4ded-b4c1-6524435e01a2 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] rbd image f8846499-359a-4e65-9120-56011fbc0f1d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:24:04 np0005548731 nova_compute[232433]: 2025-12-06 07:24:04.060 232437 DEBUG nova.storage.rbd_utils [None req-04a700b2-38f5-4ded-b4c1-6524435e01a2 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] rbd image f8846499-359a-4e65-9120-56011fbc0f1d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:24:04 np0005548731 nova_compute[232433]: 2025-12-06 07:24:04.064 232437 DEBUG oslo_concurrency.lockutils [None req-04a700b2-38f5-4ded-b4c1-6524435e01a2 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Acquiring lock "40c8d19f192ebe6ef01b2a3ea96d896752dcd737" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:24:04 np0005548731 nova_compute[232433]: 2025-12-06 07:24:04.065 232437 DEBUG oslo_concurrency.lockutils [None req-04a700b2-38f5-4ded-b4c1-6524435e01a2 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Lock "40c8d19f192ebe6ef01b2a3ea96d896752dcd737" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:24:04 np0005548731 nova_compute[232433]: 2025-12-06 07:24:04.103 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:24:04 np0005548731 nova_compute[232433]: 2025-12-06 07:24:04.302 232437 DEBUG nova.virt.libvirt.imagebackend [None req-04a700b2-38f5-4ded-b4c1-6524435e01a2 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Image locations are: [{'url': 'rbd://40a1bae4-cf76-5610-8dab-c75116dfe0bb/images/412dd61d-1b1e-439f-b7f9-7e7c4e42924c/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://40a1bae4-cf76-5610-8dab-c75116dfe0bb/images/412dd61d-1b1e-439f-b7f9-7e7c4e42924c/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085#033[00m
Dec  6 02:24:04 np0005548731 nova_compute[232433]: 2025-12-06 07:24:04.318 232437 DEBUG nova.compute.manager [req-be2a567b-5a0e-4bf5-afbe-417339b4d6d1 req-2ad7dfe5-3202-411d-8a21-62e82f2bbf8b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f8846499-359a-4e65-9120-56011fbc0f1d] Received event network-vif-plugged-5c84e057-7718-465e-89ef-0ad3aa2c82b3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:24:04 np0005548731 nova_compute[232433]: 2025-12-06 07:24:04.319 232437 DEBUG oslo_concurrency.lockutils [req-be2a567b-5a0e-4bf5-afbe-417339b4d6d1 req-2ad7dfe5-3202-411d-8a21-62e82f2bbf8b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "f8846499-359a-4e65-9120-56011fbc0f1d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:24:04 np0005548731 nova_compute[232433]: 2025-12-06 07:24:04.319 232437 DEBUG oslo_concurrency.lockutils [req-be2a567b-5a0e-4bf5-afbe-417339b4d6d1 req-2ad7dfe5-3202-411d-8a21-62e82f2bbf8b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "f8846499-359a-4e65-9120-56011fbc0f1d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:24:04 np0005548731 nova_compute[232433]: 2025-12-06 07:24:04.319 232437 DEBUG oslo_concurrency.lockutils [req-be2a567b-5a0e-4bf5-afbe-417339b4d6d1 req-2ad7dfe5-3202-411d-8a21-62e82f2bbf8b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "f8846499-359a-4e65-9120-56011fbc0f1d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:24:04 np0005548731 nova_compute[232433]: 2025-12-06 07:24:04.319 232437 DEBUG nova.compute.manager [req-be2a567b-5a0e-4bf5-afbe-417339b4d6d1 req-2ad7dfe5-3202-411d-8a21-62e82f2bbf8b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f8846499-359a-4e65-9120-56011fbc0f1d] No waiting events found dispatching network-vif-plugged-5c84e057-7718-465e-89ef-0ad3aa2c82b3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 02:24:04 np0005548731 nova_compute[232433]: 2025-12-06 07:24:04.320 232437 WARNING nova.compute.manager [req-be2a567b-5a0e-4bf5-afbe-417339b4d6d1 req-2ad7dfe5-3202-411d-8a21-62e82f2bbf8b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f8846499-359a-4e65-9120-56011fbc0f1d] Received unexpected event network-vif-plugged-5c84e057-7718-465e-89ef-0ad3aa2c82b3 for instance with vm_state active and task_state rebuild_spawning.#033[00m
Dec  6 02:24:05 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:24:05 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:24:05 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:24:05.004 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:24:05 np0005548731 nova_compute[232433]: 2025-12-06 07:24:05.106 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:24:05 np0005548731 nova_compute[232433]: 2025-12-06 07:24:05.344 232437 DEBUG oslo_concurrency.processutils [None req-04a700b2-38f5-4ded-b4c1-6524435e01a2 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/40c8d19f192ebe6ef01b2a3ea96d896752dcd737.part --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:24:05 np0005548731 nova_compute[232433]: 2025-12-06 07:24:05.411 232437 DEBUG oslo_concurrency.processutils [None req-04a700b2-38f5-4ded-b4c1-6524435e01a2 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/40c8d19f192ebe6ef01b2a3ea96d896752dcd737.part --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:24:05 np0005548731 nova_compute[232433]: 2025-12-06 07:24:05.412 232437 DEBUG nova.virt.images [None req-04a700b2-38f5-4ded-b4c1-6524435e01a2 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] 412dd61d-1b1e-439f-b7f9-7e7c4e42924c was qcow2, converting to raw fetch_to_raw /usr/lib/python3.9/site-packages/nova/virt/images.py:242#033[00m
Dec  6 02:24:05 np0005548731 nova_compute[232433]: 2025-12-06 07:24:05.413 232437 DEBUG nova.privsep.utils [None req-04a700b2-38f5-4ded-b4c1-6524435e01a2 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63#033[00m
Dec  6 02:24:05 np0005548731 nova_compute[232433]: 2025-12-06 07:24:05.414 232437 DEBUG oslo_concurrency.processutils [None req-04a700b2-38f5-4ded-b4c1-6524435e01a2 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Running cmd (subprocess): qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/40c8d19f192ebe6ef01b2a3ea96d896752dcd737.part /var/lib/nova/instances/_base/40c8d19f192ebe6ef01b2a3ea96d896752dcd737.converted execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:24:05 np0005548731 nova_compute[232433]: 2025-12-06 07:24:05.634 232437 DEBUG oslo_concurrency.processutils [None req-04a700b2-38f5-4ded-b4c1-6524435e01a2 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] CMD "qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/40c8d19f192ebe6ef01b2a3ea96d896752dcd737.part /var/lib/nova/instances/_base/40c8d19f192ebe6ef01b2a3ea96d896752dcd737.converted" returned: 0 in 0.220s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:24:05 np0005548731 nova_compute[232433]: 2025-12-06 07:24:05.638 232437 DEBUG oslo_concurrency.processutils [None req-04a700b2-38f5-4ded-b4c1-6524435e01a2 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/40c8d19f192ebe6ef01b2a3ea96d896752dcd737.converted --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:24:05 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:24:05 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:24:05 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:24:05.652 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:24:05 np0005548731 nova_compute[232433]: 2025-12-06 07:24:05.699 232437 DEBUG oslo_concurrency.processutils [None req-04a700b2-38f5-4ded-b4c1-6524435e01a2 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/40c8d19f192ebe6ef01b2a3ea96d896752dcd737.converted --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:24:05 np0005548731 nova_compute[232433]: 2025-12-06 07:24:05.700 232437 DEBUG oslo_concurrency.lockutils [None req-04a700b2-38f5-4ded-b4c1-6524435e01a2 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Lock "40c8d19f192ebe6ef01b2a3ea96d896752dcd737" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 1.636s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:24:05 np0005548731 nova_compute[232433]: 2025-12-06 07:24:05.719 232437 DEBUG nova.storage.rbd_utils [None req-04a700b2-38f5-4ded-b4c1-6524435e01a2 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] rbd image f8846499-359a-4e65-9120-56011fbc0f1d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:24:05 np0005548731 nova_compute[232433]: 2025-12-06 07:24:05.723 232437 DEBUG oslo_concurrency.processutils [None req-04a700b2-38f5-4ded-b4c1-6524435e01a2 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/40c8d19f192ebe6ef01b2a3ea96d896752dcd737 f8846499-359a-4e65-9120-56011fbc0f1d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:24:06 np0005548731 nova_compute[232433]: 2025-12-06 07:24:06.761 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:24:06 np0005548731 podman[271852]: 2025-12-06 07:24:06.921302394 +0000 UTC m=+0.066632339 container health_status ae4fd08cdec31d6ae2a6b91ffff7deb6ca5a8375cb52ee4b685cc6f25796824e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Dec  6 02:24:06 np0005548731 podman[271851]: 2025-12-06 07:24:06.939298585 +0000 UTC m=+0.089329308 container health_status a118c3becf56cfbcae208065771ebf10a65162bb7b96389971293e534823fb78 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, managed_by=edpm_ansible, config_id=ovn_controller)
Dec  6 02:24:06 np0005548731 podman[271850]: 2025-12-06 07:24:06.94347482 +0000 UTC m=+0.094855797 container health_status 26c2ce9bdb83c6db5ccad7f5b2a7cb4e9354ab0235ad2f942ea42c8feda2142b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Dec  6 02:24:07 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:24:07 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:24:07 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:24:07.007 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:24:07 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:24:07.087 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=37, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:ec:b3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '32:72:e7:89:e0:7d'}, ipsec=False) old=SB_Global(nb_cfg=36) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 02:24:07 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:24:07.089 143965 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec  6 02:24:07 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:24:07.090 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=9f96b960-b4f2-40bd-ae99-08121f5e8b78, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '37'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:24:07 np0005548731 nova_compute[232433]: 2025-12-06 07:24:07.117 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:24:07 np0005548731 nova_compute[232433]: 2025-12-06 07:24:07.325 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:24:07 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:24:07 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:24:07 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:24:07.656 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:24:07 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e264 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:24:08 np0005548731 nova_compute[232433]: 2025-12-06 07:24:08.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:24:08 np0005548731 nova_compute[232433]: 2025-12-06 07:24:08.128 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:24:08 np0005548731 nova_compute[232433]: 2025-12-06 07:24:08.128 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:24:08 np0005548731 nova_compute[232433]: 2025-12-06 07:24:08.129 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:24:08 np0005548731 nova_compute[232433]: 2025-12-06 07:24:08.129 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  6 02:24:08 np0005548731 nova_compute[232433]: 2025-12-06 07:24:08.129 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:24:08 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 02:24:08 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/637376290' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 02:24:08 np0005548731 nova_compute[232433]: 2025-12-06 07:24:08.567 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.438s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:24:08 np0005548731 nova_compute[232433]: 2025-12-06 07:24:08.735 232437 WARNING nova.virt.libvirt.driver [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  6 02:24:08 np0005548731 nova_compute[232433]: 2025-12-06 07:24:08.736 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4464MB free_disk=20.872722625732422GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, 
"label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  6 02:24:08 np0005548731 nova_compute[232433]: 2025-12-06 07:24:08.737 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:24:08 np0005548731 nova_compute[232433]: 2025-12-06 07:24:08.737 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:24:09 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:24:09 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:24:09 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:24:09.009 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:24:09 np0005548731 nova_compute[232433]: 2025-12-06 07:24:09.102 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Instance f8846499-359a-4e65-9120-56011fbc0f1d actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  6 02:24:09 np0005548731 nova_compute[232433]: 2025-12-06 07:24:09.102 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  6 02:24:09 np0005548731 nova_compute[232433]: 2025-12-06 07:24:09.103 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  6 02:24:09 np0005548731 nova_compute[232433]: 2025-12-06 07:24:09.123 232437 DEBUG nova.scheduler.client.report [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Refreshing inventories for resource provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Dec  6 02:24:09 np0005548731 nova_compute[232433]: 2025-12-06 07:24:09.152 232437 DEBUG nova.scheduler.client.report [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Updating ProviderTree inventory for provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Dec  6 02:24:09 np0005548731 nova_compute[232433]: 2025-12-06 07:24:09.153 232437 DEBUG nova.compute.provider_tree [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Updating inventory in ProviderTree for provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Dec  6 02:24:09 np0005548731 nova_compute[232433]: 2025-12-06 07:24:09.168 232437 DEBUG nova.scheduler.client.report [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Refreshing aggregate associations for resource provider 6d00757a-082f-486d-ae84-869a2ba2e6e7, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Dec  6 02:24:09 np0005548731 nova_compute[232433]: 2025-12-06 07:24:09.203 232437 DEBUG nova.scheduler.client.report [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Refreshing trait associations for resource provider 6d00757a-082f-486d-ae84-869a2ba2e6e7, traits: COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_STORAGE_BUS_USB,COMPUTE_ACCELERATORS,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_TRUSTED_CERTS,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_SSE,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_SSE2,HW_CPU_X86_SSE41,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_STORAGE_BUS_SATA,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_SECURITY_TPM_1_2,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_VOLUME_EXTEND,COMPUTE_NODE,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_SECURITY_TPM_2_0,COMPUTE_VIOMMU_MODEL_AUTO,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_SSE42,COMPUTE_STORAGE_BUS_FDC,COMPUTE_RESCUE_BFV,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_IMAGE_TYPE_ARI _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Dec  6 02:24:09 np0005548731 nova_compute[232433]: 2025-12-06 07:24:09.256 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:24:09 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:24:09 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:24:09 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:24:09.660 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:24:09 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 02:24:09 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/806788559' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 02:24:09 np0005548731 nova_compute[232433]: 2025-12-06 07:24:09.747 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.491s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:24:09 np0005548731 nova_compute[232433]: 2025-12-06 07:24:09.752 232437 DEBUG nova.compute.provider_tree [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Inventory has not changed in ProviderTree for provider: 6d00757a-082f-486d-ae84-869a2ba2e6e7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  6 02:24:09 np0005548731 nova_compute[232433]: 2025-12-06 07:24:09.852 232437 DEBUG nova.scheduler.client.report [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Inventory has not changed for provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  6 02:24:10 np0005548731 nova_compute[232433]: 2025-12-06 07:24:10.196 232437 DEBUG oslo_concurrency.processutils [None req-04a700b2-38f5-4ded-b4c1-6524435e01a2 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/40c8d19f192ebe6ef01b2a3ea96d896752dcd737 f8846499-359a-4e65-9120-56011fbc0f1d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 4.474s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:24:10 np0005548731 nova_compute[232433]: 2025-12-06 07:24:10.256 232437 DEBUG nova.storage.rbd_utils [None req-04a700b2-38f5-4ded-b4c1-6524435e01a2 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] resizing rbd image f8846499-359a-4e65-9120-56011fbc0f1d_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Dec  6 02:24:10 np0005548731 nova_compute[232433]: 2025-12-06 07:24:10.344 232437 DEBUG nova.virt.libvirt.driver [None req-04a700b2-38f5-4ded-b4c1-6524435e01a2 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] [instance: f8846499-359a-4e65-9120-56011fbc0f1d] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Dec  6 02:24:10 np0005548731 nova_compute[232433]: 2025-12-06 07:24:10.344 232437 DEBUG nova.virt.libvirt.driver [None req-04a700b2-38f5-4ded-b4c1-6524435e01a2 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] [instance: f8846499-359a-4e65-9120-56011fbc0f1d] Ensure instance console log exists: /var/lib/nova/instances/f8846499-359a-4e65-9120-56011fbc0f1d/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec  6 02:24:10 np0005548731 nova_compute[232433]: 2025-12-06 07:24:10.345 232437 DEBUG oslo_concurrency.lockutils [None req-04a700b2-38f5-4ded-b4c1-6524435e01a2 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:24:10 np0005548731 nova_compute[232433]: 2025-12-06 07:24:10.345 232437 DEBUG oslo_concurrency.lockutils [None req-04a700b2-38f5-4ded-b4c1-6524435e01a2 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:24:10 np0005548731 nova_compute[232433]: 2025-12-06 07:24:10.346 232437 DEBUG oslo_concurrency.lockutils [None req-04a700b2-38f5-4ded-b4c1-6524435e01a2 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:24:10 np0005548731 nova_compute[232433]: 2025-12-06 07:24:10.348 232437 DEBUG nova.virt.libvirt.driver [None req-04a700b2-38f5-4ded-b4c1-6524435e01a2 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] [instance: f8846499-359a-4e65-9120-56011fbc0f1d] Start _get_guest_xml network_info=[{"id": "5c84e057-7718-465e-89ef-0ad3aa2c82b3", "address": "fa:16:3e:3a:72:e0", "network": {"id": "7c014e4e-a182-4f60-8285-20525bc99e5a", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-602234112-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "88f5b34244614321a9b6e902eaba0ece", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5c84e057-77", "ovs_interfaceid": "5c84e057-7718-465e-89ef-0ad3aa2c82b3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:38Z,direct_url=<?>,disk_format='qcow2',id=412dd61d-1b1e-439f-b7f9-7e7c4e42924c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:41Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'device_type': 'disk', 'encryption_secret_uuid': None, 'disk_bus': 'virtio', 'boot_index': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'encryption_options': None, 'size': 0, 'encrypted': False, 'image_id': '6efab05d-c7cf-4770-a5c3-c806a2739063'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec  6 02:24:10 np0005548731 nova_compute[232433]: 2025-12-06 07:24:10.351 232437 WARNING nova.virt.libvirt.driver [None req-04a700b2-38f5-4ded-b4c1-6524435e01a2 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.: NotImplementedError#033[00m
Dec  6 02:24:10 np0005548731 nova_compute[232433]: 2025-12-06 07:24:10.355 232437 DEBUG nova.virt.libvirt.host [None req-04a700b2-38f5-4ded-b4c1-6524435e01a2 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec  6 02:24:10 np0005548731 nova_compute[232433]: 2025-12-06 07:24:10.355 232437 DEBUG nova.virt.libvirt.host [None req-04a700b2-38f5-4ded-b4c1-6524435e01a2 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec  6 02:24:10 np0005548731 nova_compute[232433]: 2025-12-06 07:24:10.358 232437 DEBUG nova.virt.libvirt.host [None req-04a700b2-38f5-4ded-b4c1-6524435e01a2 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec  6 02:24:10 np0005548731 nova_compute[232433]: 2025-12-06 07:24:10.359 232437 DEBUG nova.virt.libvirt.host [None req-04a700b2-38f5-4ded-b4c1-6524435e01a2 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec  6 02:24:10 np0005548731 nova_compute[232433]: 2025-12-06 07:24:10.360 232437 DEBUG nova.virt.libvirt.driver [None req-04a700b2-38f5-4ded-b4c1-6524435e01a2 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec  6 02:24:10 np0005548731 nova_compute[232433]: 2025-12-06 07:24:10.360 232437 DEBUG nova.virt.hardware [None req-04a700b2-38f5-4ded-b4c1-6524435e01a2 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-06T06:56:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='25848a18-11d9-4f11-80b5-5d005675c76d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:38Z,direct_url=<?>,disk_format='qcow2',id=412dd61d-1b1e-439f-b7f9-7e7c4e42924c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:41Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec  6 02:24:10 np0005548731 nova_compute[232433]: 2025-12-06 07:24:10.360 232437 DEBUG nova.virt.hardware [None req-04a700b2-38f5-4ded-b4c1-6524435e01a2 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec  6 02:24:10 np0005548731 nova_compute[232433]: 2025-12-06 07:24:10.360 232437 DEBUG nova.virt.hardware [None req-04a700b2-38f5-4ded-b4c1-6524435e01a2 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec  6 02:24:10 np0005548731 nova_compute[232433]: 2025-12-06 07:24:10.361 232437 DEBUG nova.virt.hardware [None req-04a700b2-38f5-4ded-b4c1-6524435e01a2 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec  6 02:24:10 np0005548731 nova_compute[232433]: 2025-12-06 07:24:10.361 232437 DEBUG nova.virt.hardware [None req-04a700b2-38f5-4ded-b4c1-6524435e01a2 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec  6 02:24:10 np0005548731 nova_compute[232433]: 2025-12-06 07:24:10.361 232437 DEBUG nova.virt.hardware [None req-04a700b2-38f5-4ded-b4c1-6524435e01a2 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec  6 02:24:10 np0005548731 nova_compute[232433]: 2025-12-06 07:24:10.361 232437 DEBUG nova.virt.hardware [None req-04a700b2-38f5-4ded-b4c1-6524435e01a2 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec  6 02:24:10 np0005548731 nova_compute[232433]: 2025-12-06 07:24:10.361 232437 DEBUG nova.virt.hardware [None req-04a700b2-38f5-4ded-b4c1-6524435e01a2 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec  6 02:24:10 np0005548731 nova_compute[232433]: 2025-12-06 07:24:10.362 232437 DEBUG nova.virt.hardware [None req-04a700b2-38f5-4ded-b4c1-6524435e01a2 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec  6 02:24:10 np0005548731 nova_compute[232433]: 2025-12-06 07:24:10.362 232437 DEBUG nova.virt.hardware [None req-04a700b2-38f5-4ded-b4c1-6524435e01a2 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec  6 02:24:10 np0005548731 nova_compute[232433]: 2025-12-06 07:24:10.362 232437 DEBUG nova.virt.hardware [None req-04a700b2-38f5-4ded-b4c1-6524435e01a2 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec  6 02:24:10 np0005548731 nova_compute[232433]: 2025-12-06 07:24:10.362 232437 DEBUG nova.objects.instance [None req-04a700b2-38f5-4ded-b4c1-6524435e01a2 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Lazy-loading 'vcpu_model' on Instance uuid f8846499-359a-4e65-9120-56011fbc0f1d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:24:10 np0005548731 nova_compute[232433]: 2025-12-06 07:24:10.586 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  6 02:24:10 np0005548731 nova_compute[232433]: 2025-12-06 07:24:10.587 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.850s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:24:10 np0005548731 nova_compute[232433]: 2025-12-06 07:24:10.617 232437 DEBUG oslo_concurrency.processutils [None req-04a700b2-38f5-4ded-b4c1-6524435e01a2 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:24:11 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  6 02:24:11 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1136767738' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  6 02:24:11 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:24:11 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000025s ======
Dec  6 02:24:11 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:24:11.012 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Dec  6 02:24:11 np0005548731 nova_compute[232433]: 2025-12-06 07:24:11.030 232437 DEBUG oslo_concurrency.processutils [None req-04a700b2-38f5-4ded-b4c1-6524435e01a2 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.413s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:24:11 np0005548731 nova_compute[232433]: 2025-12-06 07:24:11.062 232437 DEBUG nova.storage.rbd_utils [None req-04a700b2-38f5-4ded-b4c1-6524435e01a2 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] rbd image f8846499-359a-4e65-9120-56011fbc0f1d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:24:11 np0005548731 nova_compute[232433]: 2025-12-06 07:24:11.067 232437 DEBUG oslo_concurrency.processutils [None req-04a700b2-38f5-4ded-b4c1-6524435e01a2 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:24:11 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  6 02:24:11 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1705575113' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  6 02:24:11 np0005548731 nova_compute[232433]: 2025-12-06 07:24:11.502 232437 DEBUG oslo_concurrency.processutils [None req-04a700b2-38f5-4ded-b4c1-6524435e01a2 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.435s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:24:11 np0005548731 nova_compute[232433]: 2025-12-06 07:24:11.504 232437 DEBUG nova.virt.libvirt.vif [None req-04a700b2-38f5-4ded-b4c1-6524435e01a2 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-12-06T07:23:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-31720005',display_name='tempest-ServerDiskConfigTestJSON-server-31720005',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-31720005',id=98,image_ref='412dd61d-1b1e-439f-b7f9-7e7c4e42924c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-06T07:23:37Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='88f5b34244614321a9b6e902eaba0ece',ramdisk_id='',reservation_id='r-lpgj2mdw',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='412dd61d-1b1e-439f-b7f9-7e7c4e42924c',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerDiskConfigTestJSON-749654875',owner_user_name='tempest-Serv
erDiskConfigTestJSON-749654875-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:24:03Z,user_data=None,user_id='d67c136e82ad4001b000848d75eef50d',uuid=f8846499-359a-4e65-9120-56011fbc0f1d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "5c84e057-7718-465e-89ef-0ad3aa2c82b3", "address": "fa:16:3e:3a:72:e0", "network": {"id": "7c014e4e-a182-4f60-8285-20525bc99e5a", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-602234112-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "88f5b34244614321a9b6e902eaba0ece", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5c84e057-77", "ovs_interfaceid": "5c84e057-7718-465e-89ef-0ad3aa2c82b3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec  6 02:24:11 np0005548731 nova_compute[232433]: 2025-12-06 07:24:11.505 232437 DEBUG nova.network.os_vif_util [None req-04a700b2-38f5-4ded-b4c1-6524435e01a2 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Converting VIF {"id": "5c84e057-7718-465e-89ef-0ad3aa2c82b3", "address": "fa:16:3e:3a:72:e0", "network": {"id": "7c014e4e-a182-4f60-8285-20525bc99e5a", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-602234112-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "88f5b34244614321a9b6e902eaba0ece", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5c84e057-77", "ovs_interfaceid": "5c84e057-7718-465e-89ef-0ad3aa2c82b3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 02:24:11 np0005548731 nova_compute[232433]: 2025-12-06 07:24:11.506 232437 DEBUG nova.network.os_vif_util [None req-04a700b2-38f5-4ded-b4c1-6524435e01a2 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3a:72:e0,bridge_name='br-int',has_traffic_filtering=True,id=5c84e057-7718-465e-89ef-0ad3aa2c82b3,network=Network(7c014e4e-a182-4f60-8285-20525bc99e5a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5c84e057-77') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 02:24:11 np0005548731 nova_compute[232433]: 2025-12-06 07:24:11.508 232437 DEBUG nova.virt.libvirt.driver [None req-04a700b2-38f5-4ded-b4c1-6524435e01a2 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] [instance: f8846499-359a-4e65-9120-56011fbc0f1d] End _get_guest_xml xml=<domain type="kvm">
Dec  6 02:24:11 np0005548731 nova_compute[232433]:  <uuid>f8846499-359a-4e65-9120-56011fbc0f1d</uuid>
Dec  6 02:24:11 np0005548731 nova_compute[232433]:  <name>instance-00000062</name>
Dec  6 02:24:11 np0005548731 nova_compute[232433]:  <memory>131072</memory>
Dec  6 02:24:11 np0005548731 nova_compute[232433]:  <vcpu>1</vcpu>
Dec  6 02:24:11 np0005548731 nova_compute[232433]:  <metadata>
Dec  6 02:24:11 np0005548731 nova_compute[232433]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec  6 02:24:11 np0005548731 nova_compute[232433]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec  6 02:24:11 np0005548731 nova_compute[232433]:      <nova:name>tempest-ServerDiskConfigTestJSON-server-31720005</nova:name>
Dec  6 02:24:11 np0005548731 nova_compute[232433]:      <nova:creationTime>2025-12-06 07:24:10</nova:creationTime>
Dec  6 02:24:11 np0005548731 nova_compute[232433]:      <nova:flavor name="m1.nano">
Dec  6 02:24:11 np0005548731 nova_compute[232433]:        <nova:memory>128</nova:memory>
Dec  6 02:24:11 np0005548731 nova_compute[232433]:        <nova:disk>1</nova:disk>
Dec  6 02:24:11 np0005548731 nova_compute[232433]:        <nova:swap>0</nova:swap>
Dec  6 02:24:11 np0005548731 nova_compute[232433]:        <nova:ephemeral>0</nova:ephemeral>
Dec  6 02:24:11 np0005548731 nova_compute[232433]:        <nova:vcpus>1</nova:vcpus>
Dec  6 02:24:11 np0005548731 nova_compute[232433]:      </nova:flavor>
Dec  6 02:24:11 np0005548731 nova_compute[232433]:      <nova:owner>
Dec  6 02:24:11 np0005548731 nova_compute[232433]:        <nova:user uuid="d67c136e82ad4001b000848d75eef50d">tempest-ServerDiskConfigTestJSON-749654875-project-member</nova:user>
Dec  6 02:24:11 np0005548731 nova_compute[232433]:        <nova:project uuid="88f5b34244614321a9b6e902eaba0ece">tempest-ServerDiskConfigTestJSON-749654875</nova:project>
Dec  6 02:24:11 np0005548731 nova_compute[232433]:      </nova:owner>
Dec  6 02:24:11 np0005548731 nova_compute[232433]:      <nova:root type="image" uuid="412dd61d-1b1e-439f-b7f9-7e7c4e42924c"/>
Dec  6 02:24:11 np0005548731 nova_compute[232433]:      <nova:ports>
Dec  6 02:24:11 np0005548731 nova_compute[232433]:        <nova:port uuid="5c84e057-7718-465e-89ef-0ad3aa2c82b3">
Dec  6 02:24:11 np0005548731 nova_compute[232433]:          <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Dec  6 02:24:11 np0005548731 nova_compute[232433]:        </nova:port>
Dec  6 02:24:11 np0005548731 nova_compute[232433]:      </nova:ports>
Dec  6 02:24:11 np0005548731 nova_compute[232433]:    </nova:instance>
Dec  6 02:24:11 np0005548731 nova_compute[232433]:  </metadata>
Dec  6 02:24:11 np0005548731 nova_compute[232433]:  <sysinfo type="smbios">
Dec  6 02:24:11 np0005548731 nova_compute[232433]:    <system>
Dec  6 02:24:11 np0005548731 nova_compute[232433]:      <entry name="manufacturer">RDO</entry>
Dec  6 02:24:11 np0005548731 nova_compute[232433]:      <entry name="product">OpenStack Compute</entry>
Dec  6 02:24:11 np0005548731 nova_compute[232433]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec  6 02:24:11 np0005548731 nova_compute[232433]:      <entry name="serial">f8846499-359a-4e65-9120-56011fbc0f1d</entry>
Dec  6 02:24:11 np0005548731 nova_compute[232433]:      <entry name="uuid">f8846499-359a-4e65-9120-56011fbc0f1d</entry>
Dec  6 02:24:11 np0005548731 nova_compute[232433]:      <entry name="family">Virtual Machine</entry>
Dec  6 02:24:11 np0005548731 nova_compute[232433]:    </system>
Dec  6 02:24:11 np0005548731 nova_compute[232433]:  </sysinfo>
Dec  6 02:24:11 np0005548731 nova_compute[232433]:  <os>
Dec  6 02:24:11 np0005548731 nova_compute[232433]:    <type arch="x86_64" machine="q35">hvm</type>
Dec  6 02:24:11 np0005548731 nova_compute[232433]:    <boot dev="hd"/>
Dec  6 02:24:11 np0005548731 nova_compute[232433]:    <smbios mode="sysinfo"/>
Dec  6 02:24:11 np0005548731 nova_compute[232433]:  </os>
Dec  6 02:24:11 np0005548731 nova_compute[232433]:  <features>
Dec  6 02:24:11 np0005548731 nova_compute[232433]:    <acpi/>
Dec  6 02:24:11 np0005548731 nova_compute[232433]:    <apic/>
Dec  6 02:24:11 np0005548731 nova_compute[232433]:    <vmcoreinfo/>
Dec  6 02:24:11 np0005548731 nova_compute[232433]:  </features>
Dec  6 02:24:11 np0005548731 nova_compute[232433]:  <clock offset="utc">
Dec  6 02:24:11 np0005548731 nova_compute[232433]:    <timer name="pit" tickpolicy="delay"/>
Dec  6 02:24:11 np0005548731 nova_compute[232433]:    <timer name="rtc" tickpolicy="catchup"/>
Dec  6 02:24:11 np0005548731 nova_compute[232433]:    <timer name="hpet" present="no"/>
Dec  6 02:24:11 np0005548731 nova_compute[232433]:  </clock>
Dec  6 02:24:11 np0005548731 nova_compute[232433]:  <cpu mode="custom" match="exact">
Dec  6 02:24:11 np0005548731 nova_compute[232433]:    <model>Nehalem</model>
Dec  6 02:24:11 np0005548731 nova_compute[232433]:    <topology sockets="1" cores="1" threads="1"/>
Dec  6 02:24:11 np0005548731 nova_compute[232433]:  </cpu>
Dec  6 02:24:11 np0005548731 nova_compute[232433]:  <devices>
Dec  6 02:24:11 np0005548731 nova_compute[232433]:    <disk type="network" device="disk">
Dec  6 02:24:11 np0005548731 nova_compute[232433]:      <driver type="raw" cache="none"/>
Dec  6 02:24:11 np0005548731 nova_compute[232433]:      <source protocol="rbd" name="vms/f8846499-359a-4e65-9120-56011fbc0f1d_disk">
Dec  6 02:24:11 np0005548731 nova_compute[232433]:        <host name="192.168.122.100" port="6789"/>
Dec  6 02:24:11 np0005548731 nova_compute[232433]:        <host name="192.168.122.102" port="6789"/>
Dec  6 02:24:11 np0005548731 nova_compute[232433]:        <host name="192.168.122.101" port="6789"/>
Dec  6 02:24:11 np0005548731 nova_compute[232433]:      </source>
Dec  6 02:24:11 np0005548731 nova_compute[232433]:      <auth username="openstack">
Dec  6 02:24:11 np0005548731 nova_compute[232433]:        <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec  6 02:24:11 np0005548731 nova_compute[232433]:      </auth>
Dec  6 02:24:11 np0005548731 nova_compute[232433]:      <target dev="vda" bus="virtio"/>
Dec  6 02:24:11 np0005548731 nova_compute[232433]:    </disk>
Dec  6 02:24:11 np0005548731 nova_compute[232433]:    <disk type="network" device="cdrom">
Dec  6 02:24:11 np0005548731 nova_compute[232433]:      <driver type="raw" cache="none"/>
Dec  6 02:24:11 np0005548731 nova_compute[232433]:      <source protocol="rbd" name="vms/f8846499-359a-4e65-9120-56011fbc0f1d_disk.config">
Dec  6 02:24:11 np0005548731 nova_compute[232433]:        <host name="192.168.122.100" port="6789"/>
Dec  6 02:24:11 np0005548731 nova_compute[232433]:        <host name="192.168.122.102" port="6789"/>
Dec  6 02:24:11 np0005548731 nova_compute[232433]:        <host name="192.168.122.101" port="6789"/>
Dec  6 02:24:11 np0005548731 nova_compute[232433]:      </source>
Dec  6 02:24:11 np0005548731 nova_compute[232433]:      <auth username="openstack">
Dec  6 02:24:11 np0005548731 nova_compute[232433]:        <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec  6 02:24:11 np0005548731 nova_compute[232433]:      </auth>
Dec  6 02:24:11 np0005548731 nova_compute[232433]:      <target dev="sda" bus="sata"/>
Dec  6 02:24:11 np0005548731 nova_compute[232433]:    </disk>
Dec  6 02:24:11 np0005548731 nova_compute[232433]:    <interface type="ethernet">
Dec  6 02:24:11 np0005548731 nova_compute[232433]:      <mac address="fa:16:3e:3a:72:e0"/>
Dec  6 02:24:11 np0005548731 nova_compute[232433]:      <model type="virtio"/>
Dec  6 02:24:11 np0005548731 nova_compute[232433]:      <driver name="vhost" rx_queue_size="512"/>
Dec  6 02:24:11 np0005548731 nova_compute[232433]:      <mtu size="1442"/>
Dec  6 02:24:11 np0005548731 nova_compute[232433]:      <target dev="tap5c84e057-77"/>
Dec  6 02:24:11 np0005548731 nova_compute[232433]:    </interface>
Dec  6 02:24:11 np0005548731 nova_compute[232433]:    <serial type="pty">
Dec  6 02:24:11 np0005548731 nova_compute[232433]:      <log file="/var/lib/nova/instances/f8846499-359a-4e65-9120-56011fbc0f1d/console.log" append="off"/>
Dec  6 02:24:11 np0005548731 nova_compute[232433]:    </serial>
Dec  6 02:24:11 np0005548731 nova_compute[232433]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec  6 02:24:11 np0005548731 nova_compute[232433]:    <video>
Dec  6 02:24:11 np0005548731 nova_compute[232433]:      <model type="virtio"/>
Dec  6 02:24:11 np0005548731 nova_compute[232433]:    </video>
Dec  6 02:24:11 np0005548731 nova_compute[232433]:    <input type="tablet" bus="usb"/>
Dec  6 02:24:11 np0005548731 nova_compute[232433]:    <rng model="virtio">
Dec  6 02:24:11 np0005548731 nova_compute[232433]:      <backend model="random">/dev/urandom</backend>
Dec  6 02:24:11 np0005548731 nova_compute[232433]:    </rng>
Dec  6 02:24:11 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root"/>
Dec  6 02:24:11 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:24:11 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:24:11 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:24:11 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:24:11 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:24:11 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:24:11 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:24:11 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:24:11 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:24:11 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:24:11 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:24:11 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:24:11 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:24:11 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:24:11 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:24:11 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:24:11 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:24:11 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:24:11 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:24:11 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:24:11 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:24:11 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:24:11 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:24:11 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:24:11 np0005548731 nova_compute[232433]:    <controller type="usb" index="0"/>
Dec  6 02:24:11 np0005548731 nova_compute[232433]:    <memballoon model="virtio">
Dec  6 02:24:11 np0005548731 nova_compute[232433]:      <stats period="10"/>
Dec  6 02:24:11 np0005548731 nova_compute[232433]:    </memballoon>
Dec  6 02:24:11 np0005548731 nova_compute[232433]:  </devices>
Dec  6 02:24:11 np0005548731 nova_compute[232433]: </domain>
Dec  6 02:24:11 np0005548731 nova_compute[232433]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec  6 02:24:11 np0005548731 nova_compute[232433]: 2025-12-06 07:24:11.510 232437 DEBUG nova.compute.manager [None req-04a700b2-38f5-4ded-b4c1-6524435e01a2 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] [instance: f8846499-359a-4e65-9120-56011fbc0f1d] Preparing to wait for external event network-vif-plugged-5c84e057-7718-465e-89ef-0ad3aa2c82b3 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec  6 02:24:11 np0005548731 nova_compute[232433]: 2025-12-06 07:24:11.510 232437 DEBUG oslo_concurrency.lockutils [None req-04a700b2-38f5-4ded-b4c1-6524435e01a2 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Acquiring lock "f8846499-359a-4e65-9120-56011fbc0f1d-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:24:11 np0005548731 nova_compute[232433]: 2025-12-06 07:24:11.510 232437 DEBUG oslo_concurrency.lockutils [None req-04a700b2-38f5-4ded-b4c1-6524435e01a2 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Lock "f8846499-359a-4e65-9120-56011fbc0f1d-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:24:11 np0005548731 nova_compute[232433]: 2025-12-06 07:24:11.511 232437 DEBUG oslo_concurrency.lockutils [None req-04a700b2-38f5-4ded-b4c1-6524435e01a2 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Lock "f8846499-359a-4e65-9120-56011fbc0f1d-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:24:11 np0005548731 nova_compute[232433]: 2025-12-06 07:24:11.511 232437 DEBUG nova.virt.libvirt.vif [None req-04a700b2-38f5-4ded-b4c1-6524435e01a2 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-12-06T07:23:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-31720005',display_name='tempest-ServerDiskConfigTestJSON-server-31720005',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-31720005',id=98,image_ref='412dd61d-1b1e-439f-b7f9-7e7c4e42924c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-06T07:23:37Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='88f5b34244614321a9b6e902eaba0ece',ramdisk_id='',reservation_id='r-lpgj2mdw',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='412dd61d-1b1e-439f-b7f9-7e7c4e42924c',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerDiskConfigTestJSON-749654875',owner_user_name='tempest-Serv
erDiskConfigTestJSON-749654875-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:24:03Z,user_data=None,user_id='d67c136e82ad4001b000848d75eef50d',uuid=f8846499-359a-4e65-9120-56011fbc0f1d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "5c84e057-7718-465e-89ef-0ad3aa2c82b3", "address": "fa:16:3e:3a:72:e0", "network": {"id": "7c014e4e-a182-4f60-8285-20525bc99e5a", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-602234112-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "88f5b34244614321a9b6e902eaba0ece", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5c84e057-77", "ovs_interfaceid": "5c84e057-7718-465e-89ef-0ad3aa2c82b3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec  6 02:24:11 np0005548731 nova_compute[232433]: 2025-12-06 07:24:11.512 232437 DEBUG nova.network.os_vif_util [None req-04a700b2-38f5-4ded-b4c1-6524435e01a2 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Converting VIF {"id": "5c84e057-7718-465e-89ef-0ad3aa2c82b3", "address": "fa:16:3e:3a:72:e0", "network": {"id": "7c014e4e-a182-4f60-8285-20525bc99e5a", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-602234112-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "88f5b34244614321a9b6e902eaba0ece", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5c84e057-77", "ovs_interfaceid": "5c84e057-7718-465e-89ef-0ad3aa2c82b3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 02:24:11 np0005548731 nova_compute[232433]: 2025-12-06 07:24:11.512 232437 DEBUG nova.network.os_vif_util [None req-04a700b2-38f5-4ded-b4c1-6524435e01a2 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3a:72:e0,bridge_name='br-int',has_traffic_filtering=True,id=5c84e057-7718-465e-89ef-0ad3aa2c82b3,network=Network(7c014e4e-a182-4f60-8285-20525bc99e5a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5c84e057-77') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 02:24:11 np0005548731 nova_compute[232433]: 2025-12-06 07:24:11.513 232437 DEBUG os_vif [None req-04a700b2-38f5-4ded-b4c1-6524435e01a2 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:3a:72:e0,bridge_name='br-int',has_traffic_filtering=True,id=5c84e057-7718-465e-89ef-0ad3aa2c82b3,network=Network(7c014e4e-a182-4f60-8285-20525bc99e5a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5c84e057-77') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec  6 02:24:11 np0005548731 nova_compute[232433]: 2025-12-06 07:24:11.514 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:24:11 np0005548731 nova_compute[232433]: 2025-12-06 07:24:11.514 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:24:11 np0005548731 nova_compute[232433]: 2025-12-06 07:24:11.515 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  6 02:24:11 np0005548731 nova_compute[232433]: 2025-12-06 07:24:11.516 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:24:11 np0005548731 nova_compute[232433]: 2025-12-06 07:24:11.517 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5c84e057-77, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:24:11 np0005548731 nova_compute[232433]: 2025-12-06 07:24:11.517 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap5c84e057-77, col_values=(('external_ids', {'iface-id': '5c84e057-7718-465e-89ef-0ad3aa2c82b3', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:3a:72:e0', 'vm-uuid': 'f8846499-359a-4e65-9120-56011fbc0f1d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:24:11 np0005548731 NetworkManager[49182]: <info>  [1765005851.5719] manager: (tap5c84e057-77): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/190)
Dec  6 02:24:11 np0005548731 nova_compute[232433]: 2025-12-06 07:24:11.571 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:24:11 np0005548731 nova_compute[232433]: 2025-12-06 07:24:11.575 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  6 02:24:11 np0005548731 nova_compute[232433]: 2025-12-06 07:24:11.577 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:24:11 np0005548731 nova_compute[232433]: 2025-12-06 07:24:11.577 232437 INFO os_vif [None req-04a700b2-38f5-4ded-b4c1-6524435e01a2 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:3a:72:e0,bridge_name='br-int',has_traffic_filtering=True,id=5c84e057-7718-465e-89ef-0ad3aa2c82b3,network=Network(7c014e4e-a182-4f60-8285-20525bc99e5a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5c84e057-77')#033[00m
Dec  6 02:24:11 np0005548731 nova_compute[232433]: 2025-12-06 07:24:11.587 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:24:11 np0005548731 nova_compute[232433]: 2025-12-06 07:24:11.630 232437 DEBUG nova.virt.libvirt.driver [None req-04a700b2-38f5-4ded-b4c1-6524435e01a2 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  6 02:24:11 np0005548731 nova_compute[232433]: 2025-12-06 07:24:11.630 232437 DEBUG nova.virt.libvirt.driver [None req-04a700b2-38f5-4ded-b4c1-6524435e01a2 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  6 02:24:11 np0005548731 nova_compute[232433]: 2025-12-06 07:24:11.631 232437 DEBUG nova.virt.libvirt.driver [None req-04a700b2-38f5-4ded-b4c1-6524435e01a2 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] No VIF found with MAC fa:16:3e:3a:72:e0, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec  6 02:24:11 np0005548731 nova_compute[232433]: 2025-12-06 07:24:11.631 232437 INFO nova.virt.libvirt.driver [None req-04a700b2-38f5-4ded-b4c1-6524435e01a2 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] [instance: f8846499-359a-4e65-9120-56011fbc0f1d] Using config drive#033[00m
Dec  6 02:24:11 np0005548731 nova_compute[232433]: 2025-12-06 07:24:11.656 232437 DEBUG nova.storage.rbd_utils [None req-04a700b2-38f5-4ded-b4c1-6524435e01a2 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] rbd image f8846499-359a-4e65-9120-56011fbc0f1d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:24:11 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:24:11 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:24:11 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:24:11.662 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:24:11 np0005548731 nova_compute[232433]: 2025-12-06 07:24:11.682 232437 DEBUG nova.objects.instance [None req-04a700b2-38f5-4ded-b4c1-6524435e01a2 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Lazy-loading 'ec2_ids' on Instance uuid f8846499-359a-4e65-9120-56011fbc0f1d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:24:11 np0005548731 nova_compute[232433]: 2025-12-06 07:24:11.722 232437 DEBUG nova.objects.instance [None req-04a700b2-38f5-4ded-b4c1-6524435e01a2 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Lazy-loading 'keypairs' on Instance uuid f8846499-359a-4e65-9120-56011fbc0f1d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:24:12 np0005548731 nova_compute[232433]: 2025-12-06 07:24:12.327 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:24:12 np0005548731 nova_compute[232433]: 2025-12-06 07:24:12.580 232437 INFO nova.virt.libvirt.driver [None req-04a700b2-38f5-4ded-b4c1-6524435e01a2 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] [instance: f8846499-359a-4e65-9120-56011fbc0f1d] Creating config drive at /var/lib/nova/instances/f8846499-359a-4e65-9120-56011fbc0f1d/disk.config#033[00m
Dec  6 02:24:12 np0005548731 nova_compute[232433]: 2025-12-06 07:24:12.586 232437 DEBUG oslo_concurrency.processutils [None req-04a700b2-38f5-4ded-b4c1-6524435e01a2 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/f8846499-359a-4e65-9120-56011fbc0f1d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpsut8mdgv execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:24:12 np0005548731 nova_compute[232433]: 2025-12-06 07:24:12.715 232437 DEBUG oslo_concurrency.processutils [None req-04a700b2-38f5-4ded-b4c1-6524435e01a2 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/f8846499-359a-4e65-9120-56011fbc0f1d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpsut8mdgv" returned: 0 in 0.129s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:24:12 np0005548731 nova_compute[232433]: 2025-12-06 07:24:12.754 232437 DEBUG nova.storage.rbd_utils [None req-04a700b2-38f5-4ded-b4c1-6524435e01a2 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] rbd image f8846499-359a-4e65-9120-56011fbc0f1d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:24:12 np0005548731 nova_compute[232433]: 2025-12-06 07:24:12.757 232437 DEBUG oslo_concurrency.processutils [None req-04a700b2-38f5-4ded-b4c1-6524435e01a2 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/f8846499-359a-4e65-9120-56011fbc0f1d/disk.config f8846499-359a-4e65-9120-56011fbc0f1d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:24:12 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e264 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:24:13 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:24:13 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:24:13 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:24:13.015 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:24:13 np0005548731 nova_compute[232433]: 2025-12-06 07:24:13.061 232437 DEBUG oslo_concurrency.processutils [None req-04a700b2-38f5-4ded-b4c1-6524435e01a2 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/f8846499-359a-4e65-9120-56011fbc0f1d/disk.config f8846499-359a-4e65-9120-56011fbc0f1d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.303s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:24:13 np0005548731 nova_compute[232433]: 2025-12-06 07:24:13.062 232437 INFO nova.virt.libvirt.driver [None req-04a700b2-38f5-4ded-b4c1-6524435e01a2 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] [instance: f8846499-359a-4e65-9120-56011fbc0f1d] Deleting local config drive /var/lib/nova/instances/f8846499-359a-4e65-9120-56011fbc0f1d/disk.config because it was imported into RBD.#033[00m
Dec  6 02:24:13 np0005548731 kernel: tap5c84e057-77: entered promiscuous mode
Dec  6 02:24:13 np0005548731 NetworkManager[49182]: <info>  [1765005853.1203] manager: (tap5c84e057-77): new Tun device (/org/freedesktop/NetworkManager/Devices/191)
Dec  6 02:24:13 np0005548731 nova_compute[232433]: 2025-12-06 07:24:13.120 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:24:13 np0005548731 ovn_controller[133927]: 2025-12-06T07:24:13Z|00373|binding|INFO|Claiming lport 5c84e057-7718-465e-89ef-0ad3aa2c82b3 for this chassis.
Dec  6 02:24:13 np0005548731 ovn_controller[133927]: 2025-12-06T07:24:13Z|00374|binding|INFO|5c84e057-7718-465e-89ef-0ad3aa2c82b3: Claiming fa:16:3e:3a:72:e0 10.100.0.3
Dec  6 02:24:13 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:24:13.129 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3a:72:e0 10.100.0.3'], port_security=['fa:16:3e:3a:72:e0 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'f8846499-359a-4e65-9120-56011fbc0f1d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7c014e4e-a182-4f60-8285-20525bc99e5a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '88f5b34244614321a9b6e902eaba0ece', 'neutron:revision_number': '5', 'neutron:security_group_ids': '562c0019-973b-497e-ab29-636b40b9ed6d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7228f8e4-751e-45fe-ae64-cd2ffef9b9bb, chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], logical_port=5c84e057-7718-465e-89ef-0ad3aa2c82b3) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 02:24:13 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:24:13.131 143965 INFO neutron.agent.ovn.metadata.agent [-] Port 5c84e057-7718-465e-89ef-0ad3aa2c82b3 in datapath 7c014e4e-a182-4f60-8285-20525bc99e5a bound to our chassis#033[00m
Dec  6 02:24:13 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:24:13.132 143965 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 7c014e4e-a182-4f60-8285-20525bc99e5a#033[00m
Dec  6 02:24:13 np0005548731 ovn_controller[133927]: 2025-12-06T07:24:13Z|00375|binding|INFO|Setting lport 5c84e057-7718-465e-89ef-0ad3aa2c82b3 ovn-installed in OVS
Dec  6 02:24:13 np0005548731 ovn_controller[133927]: 2025-12-06T07:24:13Z|00376|binding|INFO|Setting lport 5c84e057-7718-465e-89ef-0ad3aa2c82b3 up in Southbound
Dec  6 02:24:13 np0005548731 nova_compute[232433]: 2025-12-06 07:24:13.139 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:24:13 np0005548731 nova_compute[232433]: 2025-12-06 07:24:13.143 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:24:13 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:24:13.146 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[5c243917-89c2-49b2-a0dd-1fdee6fabec0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:24:13 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:24:13.146 143965 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap7c014e4e-a1 in ovnmeta-7c014e4e-a182-4f60-8285-20525bc99e5a namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Dec  6 02:24:13 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:24:13.149 236668 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap7c014e4e-a0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Dec  6 02:24:13 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:24:13.149 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[b7547406-f129-48f8-9ff5-6161acfd7e65]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:24:13 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:24:13.150 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[e5b81a56-c953-4450-b222-02a49dcf789e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:24:13 np0005548731 systemd-udevd[272173]: Network interface NamePolicy= disabled on kernel command line.
Dec  6 02:24:13 np0005548731 systemd-machined[195355]: New machine qemu-40-instance-00000062.
Dec  6 02:24:13 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:24:13.161 144079 DEBUG oslo.privsep.daemon [-] privsep: reply[61563f37-21de-449a-a6aa-308db76ea119]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:24:13 np0005548731 NetworkManager[49182]: <info>  [1765005853.1684] device (tap5c84e057-77): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec  6 02:24:13 np0005548731 NetworkManager[49182]: <info>  [1765005853.1702] device (tap5c84e057-77): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec  6 02:24:13 np0005548731 systemd[1]: Started Virtual Machine qemu-40-instance-00000062.
Dec  6 02:24:13 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:24:13.185 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[6f0d3779-df30-4208-94a4-35e89557444e]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:24:13 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:24:13.214 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[d86d0421-a47e-4880-a5d9-2c402b98a378]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:24:13 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:24:13.219 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[e13ee41c-a117-4067-af9e-dafa460f044b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:24:13 np0005548731 NetworkManager[49182]: <info>  [1765005853.2210] manager: (tap7c014e4e-a0): new Veth device (/org/freedesktop/NetworkManager/Devices/192)
Dec  6 02:24:13 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:24:13.248 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[09a764e8-cbc1-43cc-b8da-ebefc9a71fc0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:24:13 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:24:13.251 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[e2df03c6-51ed-42f2-8c8c-6287d6d4f905]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:24:13 np0005548731 NetworkManager[49182]: <info>  [1765005853.2736] device (tap7c014e4e-a0): carrier: link connected
Dec  6 02:24:13 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:24:13.278 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[9654ea27-e780-4e8b-ba86-9fecde8eeaa0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:24:13 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:24:13.294 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[d4f69b8c-ac05-4f88-8241-7d08beb2a2a4]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7c014e4e-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:08:14:1c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 121], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 611528, 'reachable_time': 37246, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 272206, 'error': None, 'target': 'ovnmeta-7c014e4e-a182-4f60-8285-20525bc99e5a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:24:13 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:24:13.309 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[d41b0efa-9371-40be-9a11-378e6dc2b3ba]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe08:141c'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 611528, 'tstamp': 611528}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 272207, 'error': None, 'target': 'ovnmeta-7c014e4e-a182-4f60-8285-20525bc99e5a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:24:13 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:24:13.329 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[22090d54-bd75-41e0-a9c9-a9d08ea06ce2]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7c014e4e-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:08:14:1c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 121], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 611528, 'reachable_time': 37246, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 272208, 'error': None, 'target': 'ovnmeta-7c014e4e-a182-4f60-8285-20525bc99e5a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:24:13 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:24:13.358 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[4ad00be4-1eb4-4b38-8b8c-4be57a97cecf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:24:13 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:24:13.417 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[11b24a93-62be-41b5-ab6d-65a242494486]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:24:13 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:24:13.418 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7c014e4e-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:24:13 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:24:13.419 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  6 02:24:13 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:24:13.419 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7c014e4e-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:24:13 np0005548731 nova_compute[232433]: 2025-12-06 07:24:13.422 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:24:13 np0005548731 NetworkManager[49182]: <info>  [1765005853.4230] manager: (tap7c014e4e-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/193)
Dec  6 02:24:13 np0005548731 kernel: tap7c014e4e-a0: entered promiscuous mode
Dec  6 02:24:13 np0005548731 nova_compute[232433]: 2025-12-06 07:24:13.424 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:24:13 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:24:13.428 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap7c014e4e-a0, col_values=(('external_ids', {'iface-id': 'd8dd1a7d-045a-42a3-8829-567c43985ae0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:24:13 np0005548731 nova_compute[232433]: 2025-12-06 07:24:13.429 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:24:13 np0005548731 ovn_controller[133927]: 2025-12-06T07:24:13Z|00377|binding|INFO|Releasing lport d8dd1a7d-045a-42a3-8829-567c43985ae0 from this chassis (sb_readonly=0)
Dec  6 02:24:13 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:24:13.432 143965 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/7c014e4e-a182-4f60-8285-20525bc99e5a.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/7c014e4e-a182-4f60-8285-20525bc99e5a.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Dec  6 02:24:13 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:24:13.433 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[ad28969f-79a6-44ae-83e2-3c298b22caff]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:24:13 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:24:13.434 143965 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec  6 02:24:13 np0005548731 ovn_metadata_agent[143960]: global
Dec  6 02:24:13 np0005548731 ovn_metadata_agent[143960]:    log         /dev/log local0 debug
Dec  6 02:24:13 np0005548731 ovn_metadata_agent[143960]:    log-tag     haproxy-metadata-proxy-7c014e4e-a182-4f60-8285-20525bc99e5a
Dec  6 02:24:13 np0005548731 ovn_metadata_agent[143960]:    user        root
Dec  6 02:24:13 np0005548731 ovn_metadata_agent[143960]:    group       root
Dec  6 02:24:13 np0005548731 ovn_metadata_agent[143960]:    maxconn     1024
Dec  6 02:24:13 np0005548731 ovn_metadata_agent[143960]:    pidfile     /var/lib/neutron/external/pids/7c014e4e-a182-4f60-8285-20525bc99e5a.pid.haproxy
Dec  6 02:24:13 np0005548731 ovn_metadata_agent[143960]:    daemon
Dec  6 02:24:13 np0005548731 ovn_metadata_agent[143960]: 
Dec  6 02:24:13 np0005548731 ovn_metadata_agent[143960]: defaults
Dec  6 02:24:13 np0005548731 ovn_metadata_agent[143960]:    log global
Dec  6 02:24:13 np0005548731 ovn_metadata_agent[143960]:    mode http
Dec  6 02:24:13 np0005548731 ovn_metadata_agent[143960]:    option httplog
Dec  6 02:24:13 np0005548731 ovn_metadata_agent[143960]:    option dontlognull
Dec  6 02:24:13 np0005548731 ovn_metadata_agent[143960]:    option http-server-close
Dec  6 02:24:13 np0005548731 ovn_metadata_agent[143960]:    option forwardfor
Dec  6 02:24:13 np0005548731 ovn_metadata_agent[143960]:    retries                 3
Dec  6 02:24:13 np0005548731 ovn_metadata_agent[143960]:    timeout http-request    30s
Dec  6 02:24:13 np0005548731 ovn_metadata_agent[143960]:    timeout connect         30s
Dec  6 02:24:13 np0005548731 ovn_metadata_agent[143960]:    timeout client          32s
Dec  6 02:24:13 np0005548731 ovn_metadata_agent[143960]:    timeout server          32s
Dec  6 02:24:13 np0005548731 ovn_metadata_agent[143960]:    timeout http-keep-alive 30s
Dec  6 02:24:13 np0005548731 ovn_metadata_agent[143960]: 
Dec  6 02:24:13 np0005548731 ovn_metadata_agent[143960]: 
Dec  6 02:24:13 np0005548731 ovn_metadata_agent[143960]: listen listener
Dec  6 02:24:13 np0005548731 ovn_metadata_agent[143960]:    bind 169.254.169.254:80
Dec  6 02:24:13 np0005548731 ovn_metadata_agent[143960]:    server metadata /var/lib/neutron/metadata_proxy
Dec  6 02:24:13 np0005548731 ovn_metadata_agent[143960]:    http-request add-header X-OVN-Network-ID 7c014e4e-a182-4f60-8285-20525bc99e5a
Dec  6 02:24:13 np0005548731 ovn_metadata_agent[143960]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Dec  6 02:24:13 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:24:13.437 143965 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-7c014e4e-a182-4f60-8285-20525bc99e5a', 'env', 'PROCESS_TAG=haproxy-7c014e4e-a182-4f60-8285-20525bc99e5a', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/7c014e4e-a182-4f60-8285-20525bc99e5a.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Dec  6 02:24:13 np0005548731 nova_compute[232433]: 2025-12-06 07:24:13.442 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:24:13 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:24:13 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:24:13 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:24:13.665 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:24:13 np0005548731 podman[272240]: 2025-12-06 07:24:13.829895591 +0000 UTC m=+0.043463529 container create 9f663632ed12997de2d607600a82dd76ccb0cb488a03d3bb6fe19f2d4d4ab258 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7c014e4e-a182-4f60-8285-20525bc99e5a, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec  6 02:24:13 np0005548731 systemd[1]: Started libpod-conmon-9f663632ed12997de2d607600a82dd76ccb0cb488a03d3bb6fe19f2d4d4ab258.scope.
Dec  6 02:24:13 np0005548731 systemd[1]: Started libcrun container.
Dec  6 02:24:13 np0005548731 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0374b2559bc235913c6036898008a8e3d8c8c7eaf7f3456a9907470b23e73c58/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec  6 02:24:13 np0005548731 podman[272240]: 2025-12-06 07:24:13.806769693 +0000 UTC m=+0.020337661 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec  6 02:24:13 np0005548731 podman[272240]: 2025-12-06 07:24:13.910865219 +0000 UTC m=+0.124433167 container init 9f663632ed12997de2d607600a82dd76ccb0cb488a03d3bb6fe19f2d4d4ab258 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7c014e4e-a182-4f60-8285-20525bc99e5a, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Dec  6 02:24:13 np0005548731 podman[272240]: 2025-12-06 07:24:13.916228774 +0000 UTC m=+0.129796712 container start 9f663632ed12997de2d607600a82dd76ccb0cb488a03d3bb6fe19f2d4d4ab258 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7c014e4e-a182-4f60-8285-20525bc99e5a, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251125)
Dec  6 02:24:13 np0005548731 neutron-haproxy-ovnmeta-7c014e4e-a182-4f60-8285-20525bc99e5a[272291]: [NOTICE]   (272296) : New worker (272298) forked
Dec  6 02:24:13 np0005548731 neutron-haproxy-ovnmeta-7c014e4e-a182-4f60-8285-20525bc99e5a[272291]: [NOTICE]   (272296) : Loading success.
Dec  6 02:24:14 np0005548731 nova_compute[232433]: 2025-12-06 07:24:14.067 232437 DEBUG nova.virt.libvirt.host [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Removed pending event for f8846499-359a-4e65-9120-56011fbc0f1d due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Dec  6 02:24:14 np0005548731 nova_compute[232433]: 2025-12-06 07:24:14.068 232437 DEBUG nova.virt.driver [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Emitting event <LifecycleEvent: 1765005854.0661876, f8846499-359a-4e65-9120-56011fbc0f1d => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec  6 02:24:14 np0005548731 nova_compute[232433]: 2025-12-06 07:24:14.068 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: f8846499-359a-4e65-9120-56011fbc0f1d] VM Started (Lifecycle Event)
Dec  6 02:24:14 np0005548731 nova_compute[232433]: 2025-12-06 07:24:14.320 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: f8846499-359a-4e65-9120-56011fbc0f1d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec  6 02:24:14 np0005548731 nova_compute[232433]: 2025-12-06 07:24:14.324 232437 DEBUG nova.virt.driver [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Emitting event <LifecycleEvent: 1765005854.0663707, f8846499-359a-4e65-9120-56011fbc0f1d => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec  6 02:24:14 np0005548731 nova_compute[232433]: 2025-12-06 07:24:14.324 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: f8846499-359a-4e65-9120-56011fbc0f1d] VM Paused (Lifecycle Event)
Dec  6 02:24:14 np0005548731 nova_compute[232433]: 2025-12-06 07:24:14.355 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: f8846499-359a-4e65-9120-56011fbc0f1d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec  6 02:24:14 np0005548731 nova_compute[232433]: 2025-12-06 07:24:14.359 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: f8846499-359a-4e65-9120-56011fbc0f1d] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec  6 02:24:14 np0005548731 nova_compute[232433]: 2025-12-06 07:24:14.383 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: f8846499-359a-4e65-9120-56011fbc0f1d] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Dec  6 02:24:15 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:24:15 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:24:15 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:24:15.017 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:24:15 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:24:15 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:24:15 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:24:15.667 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:24:15 np0005548731 nova_compute[232433]: 2025-12-06 07:24:15.853 232437 DEBUG nova.compute.manager [req-8c8d90ff-5d91-44e9-ab58-c13f4b6e747b req-149847e4-61d5-4e1b-ae00-b24094314860 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f8846499-359a-4e65-9120-56011fbc0f1d] Received event network-vif-plugged-5c84e057-7718-465e-89ef-0ad3aa2c82b3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec  6 02:24:15 np0005548731 nova_compute[232433]: 2025-12-06 07:24:15.853 232437 DEBUG oslo_concurrency.lockutils [req-8c8d90ff-5d91-44e9-ab58-c13f4b6e747b req-149847e4-61d5-4e1b-ae00-b24094314860 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "f8846499-359a-4e65-9120-56011fbc0f1d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  6 02:24:15 np0005548731 nova_compute[232433]: 2025-12-06 07:24:15.854 232437 DEBUG oslo_concurrency.lockutils [req-8c8d90ff-5d91-44e9-ab58-c13f4b6e747b req-149847e4-61d5-4e1b-ae00-b24094314860 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "f8846499-359a-4e65-9120-56011fbc0f1d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  6 02:24:15 np0005548731 nova_compute[232433]: 2025-12-06 07:24:15.854 232437 DEBUG oslo_concurrency.lockutils [req-8c8d90ff-5d91-44e9-ab58-c13f4b6e747b req-149847e4-61d5-4e1b-ae00-b24094314860 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "f8846499-359a-4e65-9120-56011fbc0f1d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  6 02:24:15 np0005548731 nova_compute[232433]: 2025-12-06 07:24:15.854 232437 DEBUG nova.compute.manager [req-8c8d90ff-5d91-44e9-ab58-c13f4b6e747b req-149847e4-61d5-4e1b-ae00-b24094314860 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f8846499-359a-4e65-9120-56011fbc0f1d] Processing event network-vif-plugged-5c84e057-7718-465e-89ef-0ad3aa2c82b3 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec  6 02:24:15 np0005548731 nova_compute[232433]: 2025-12-06 07:24:15.854 232437 DEBUG nova.compute.manager [req-8c8d90ff-5d91-44e9-ab58-c13f4b6e747b req-149847e4-61d5-4e1b-ae00-b24094314860 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f8846499-359a-4e65-9120-56011fbc0f1d] Received event network-vif-plugged-5c84e057-7718-465e-89ef-0ad3aa2c82b3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec  6 02:24:15 np0005548731 nova_compute[232433]: 2025-12-06 07:24:15.854 232437 DEBUG oslo_concurrency.lockutils [req-8c8d90ff-5d91-44e9-ab58-c13f4b6e747b req-149847e4-61d5-4e1b-ae00-b24094314860 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "f8846499-359a-4e65-9120-56011fbc0f1d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  6 02:24:15 np0005548731 nova_compute[232433]: 2025-12-06 07:24:15.854 232437 DEBUG oslo_concurrency.lockutils [req-8c8d90ff-5d91-44e9-ab58-c13f4b6e747b req-149847e4-61d5-4e1b-ae00-b24094314860 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "f8846499-359a-4e65-9120-56011fbc0f1d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  6 02:24:15 np0005548731 nova_compute[232433]: 2025-12-06 07:24:15.854 232437 DEBUG oslo_concurrency.lockutils [req-8c8d90ff-5d91-44e9-ab58-c13f4b6e747b req-149847e4-61d5-4e1b-ae00-b24094314860 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "f8846499-359a-4e65-9120-56011fbc0f1d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  6 02:24:15 np0005548731 nova_compute[232433]: 2025-12-06 07:24:15.855 232437 DEBUG nova.compute.manager [req-8c8d90ff-5d91-44e9-ab58-c13f4b6e747b req-149847e4-61d5-4e1b-ae00-b24094314860 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f8846499-359a-4e65-9120-56011fbc0f1d] No waiting events found dispatching network-vif-plugged-5c84e057-7718-465e-89ef-0ad3aa2c82b3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec  6 02:24:15 np0005548731 nova_compute[232433]: 2025-12-06 07:24:15.855 232437 WARNING nova.compute.manager [req-8c8d90ff-5d91-44e9-ab58-c13f4b6e747b req-149847e4-61d5-4e1b-ae00-b24094314860 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f8846499-359a-4e65-9120-56011fbc0f1d] Received unexpected event network-vif-plugged-5c84e057-7718-465e-89ef-0ad3aa2c82b3 for instance with vm_state active and task_state rebuild_spawning.
Dec  6 02:24:15 np0005548731 nova_compute[232433]: 2025-12-06 07:24:15.855 232437 DEBUG nova.compute.manager [None req-04a700b2-38f5-4ded-b4c1-6524435e01a2 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] [instance: f8846499-359a-4e65-9120-56011fbc0f1d] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec  6 02:24:15 np0005548731 nova_compute[232433]: 2025-12-06 07:24:15.859 232437 DEBUG nova.virt.driver [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Emitting event <LifecycleEvent: 1765005855.8590894, f8846499-359a-4e65-9120-56011fbc0f1d => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec  6 02:24:15 np0005548731 nova_compute[232433]: 2025-12-06 07:24:15.859 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: f8846499-359a-4e65-9120-56011fbc0f1d] VM Resumed (Lifecycle Event)
Dec  6 02:24:15 np0005548731 nova_compute[232433]: 2025-12-06 07:24:15.860 232437 DEBUG nova.virt.libvirt.driver [None req-04a700b2-38f5-4ded-b4c1-6524435e01a2 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] [instance: f8846499-359a-4e65-9120-56011fbc0f1d] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec  6 02:24:15 np0005548731 nova_compute[232433]: 2025-12-06 07:24:15.863 232437 INFO nova.virt.libvirt.driver [-] [instance: f8846499-359a-4e65-9120-56011fbc0f1d] Instance spawned successfully.
Dec  6 02:24:15 np0005548731 nova_compute[232433]: 2025-12-06 07:24:15.864 232437 DEBUG nova.virt.libvirt.driver [None req-04a700b2-38f5-4ded-b4c1-6524435e01a2 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] [instance: f8846499-359a-4e65-9120-56011fbc0f1d] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec  6 02:24:15 np0005548731 nova_compute[232433]: 2025-12-06 07:24:15.881 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: f8846499-359a-4e65-9120-56011fbc0f1d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec  6 02:24:15 np0005548731 nova_compute[232433]: 2025-12-06 07:24:15.884 232437 DEBUG nova.virt.libvirt.driver [None req-04a700b2-38f5-4ded-b4c1-6524435e01a2 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] [instance: f8846499-359a-4e65-9120-56011fbc0f1d] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec  6 02:24:15 np0005548731 nova_compute[232433]: 2025-12-06 07:24:15.885 232437 DEBUG nova.virt.libvirt.driver [None req-04a700b2-38f5-4ded-b4c1-6524435e01a2 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] [instance: f8846499-359a-4e65-9120-56011fbc0f1d] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec  6 02:24:15 np0005548731 nova_compute[232433]: 2025-12-06 07:24:15.885 232437 DEBUG nova.virt.libvirt.driver [None req-04a700b2-38f5-4ded-b4c1-6524435e01a2 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] [instance: f8846499-359a-4e65-9120-56011fbc0f1d] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec  6 02:24:15 np0005548731 nova_compute[232433]: 2025-12-06 07:24:15.885 232437 DEBUG nova.virt.libvirt.driver [None req-04a700b2-38f5-4ded-b4c1-6524435e01a2 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] [instance: f8846499-359a-4e65-9120-56011fbc0f1d] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec  6 02:24:15 np0005548731 nova_compute[232433]: 2025-12-06 07:24:15.886 232437 DEBUG nova.virt.libvirt.driver [None req-04a700b2-38f5-4ded-b4c1-6524435e01a2 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] [instance: f8846499-359a-4e65-9120-56011fbc0f1d] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec  6 02:24:15 np0005548731 nova_compute[232433]: 2025-12-06 07:24:15.886 232437 DEBUG nova.virt.libvirt.driver [None req-04a700b2-38f5-4ded-b4c1-6524435e01a2 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] [instance: f8846499-359a-4e65-9120-56011fbc0f1d] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec  6 02:24:15 np0005548731 nova_compute[232433]: 2025-12-06 07:24:15.890 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: f8846499-359a-4e65-9120-56011fbc0f1d] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec  6 02:24:15 np0005548731 nova_compute[232433]: 2025-12-06 07:24:15.944 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: f8846499-359a-4e65-9120-56011fbc0f1d] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Dec  6 02:24:15 np0005548731 nova_compute[232433]: 2025-12-06 07:24:15.956 232437 DEBUG nova.compute.manager [None req-04a700b2-38f5-4ded-b4c1-6524435e01a2 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] [instance: f8846499-359a-4e65-9120-56011fbc0f1d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec  6 02:24:16 np0005548731 nova_compute[232433]: 2025-12-06 07:24:16.016 232437 DEBUG oslo_concurrency.lockutils [None req-04a700b2-38f5-4ded-b4c1-6524435e01a2 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  6 02:24:16 np0005548731 nova_compute[232433]: 2025-12-06 07:24:16.016 232437 DEBUG oslo_concurrency.lockutils [None req-04a700b2-38f5-4ded-b4c1-6524435e01a2 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  6 02:24:16 np0005548731 nova_compute[232433]: 2025-12-06 07:24:16.016 232437 DEBUG nova.objects.instance [None req-04a700b2-38f5-4ded-b4c1-6524435e01a2 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] [instance: f8846499-359a-4e65-9120-56011fbc0f1d] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Dec  6 02:24:16 np0005548731 nova_compute[232433]: 2025-12-06 07:24:16.071 232437 DEBUG oslo_concurrency.lockutils [None req-04a700b2-38f5-4ded-b4c1-6524435e01a2 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: held 0.055s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  6 02:24:16 np0005548731 nova_compute[232433]: 2025-12-06 07:24:16.571 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 02:24:17 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:24:17 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:24:17 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:24:17.019 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:24:17 np0005548731 nova_compute[232433]: 2025-12-06 07:24:17.329 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 02:24:17 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:24:17 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:24:17 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:24:17.670 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:24:17 np0005548731 nova_compute[232433]: 2025-12-06 07:24:17.702 232437 DEBUG oslo_concurrency.lockutils [None req-d7afea4d-92aa-4c12-a027-27b0de9ef605 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Acquiring lock "f8846499-359a-4e65-9120-56011fbc0f1d" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  6 02:24:17 np0005548731 nova_compute[232433]: 2025-12-06 07:24:17.703 232437 DEBUG oslo_concurrency.lockutils [None req-d7afea4d-92aa-4c12-a027-27b0de9ef605 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Lock "f8846499-359a-4e65-9120-56011fbc0f1d" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  6 02:24:17 np0005548731 nova_compute[232433]: 2025-12-06 07:24:17.703 232437 DEBUG oslo_concurrency.lockutils [None req-d7afea4d-92aa-4c12-a027-27b0de9ef605 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Acquiring lock "f8846499-359a-4e65-9120-56011fbc0f1d-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  6 02:24:17 np0005548731 nova_compute[232433]: 2025-12-06 07:24:17.703 232437 DEBUG oslo_concurrency.lockutils [None req-d7afea4d-92aa-4c12-a027-27b0de9ef605 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Lock "f8846499-359a-4e65-9120-56011fbc0f1d-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  6 02:24:17 np0005548731 nova_compute[232433]: 2025-12-06 07:24:17.704 232437 DEBUG oslo_concurrency.lockutils [None req-d7afea4d-92aa-4c12-a027-27b0de9ef605 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Lock "f8846499-359a-4e65-9120-56011fbc0f1d-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  6 02:24:17 np0005548731 nova_compute[232433]: 2025-12-06 07:24:17.705 232437 INFO nova.compute.manager [None req-d7afea4d-92aa-4c12-a027-27b0de9ef605 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] [instance: f8846499-359a-4e65-9120-56011fbc0f1d] Terminating instance
Dec  6 02:24:17 np0005548731 nova_compute[232433]: 2025-12-06 07:24:17.706 232437 DEBUG nova.compute.manager [None req-d7afea4d-92aa-4c12-a027-27b0de9ef605 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] [instance: f8846499-359a-4e65-9120-56011fbc0f1d] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec  6 02:24:17 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e264 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:24:18 np0005548731 kernel: tap5c84e057-77 (unregistering): left promiscuous mode
Dec  6 02:24:18 np0005548731 NetworkManager[49182]: <info>  [1765005858.7594] device (tap5c84e057-77): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec  6 02:24:18 np0005548731 nova_compute[232433]: 2025-12-06 07:24:18.771 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 02:24:18 np0005548731 ovn_controller[133927]: 2025-12-06T07:24:18Z|00378|binding|INFO|Releasing lport 5c84e057-7718-465e-89ef-0ad3aa2c82b3 from this chassis (sb_readonly=0)
Dec  6 02:24:18 np0005548731 ovn_controller[133927]: 2025-12-06T07:24:18Z|00379|binding|INFO|Setting lport 5c84e057-7718-465e-89ef-0ad3aa2c82b3 down in Southbound
Dec  6 02:24:18 np0005548731 ovn_controller[133927]: 2025-12-06T07:24:18Z|00380|binding|INFO|Removing iface tap5c84e057-77 ovn-installed in OVS
Dec  6 02:24:18 np0005548731 nova_compute[232433]: 2025-12-06 07:24:18.794 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 02:24:18 np0005548731 systemd[1]: machine-qemu\x2d40\x2dinstance\x2d00000062.scope: Deactivated successfully.
Dec  6 02:24:18 np0005548731 systemd[1]: machine-qemu\x2d40\x2dinstance\x2d00000062.scope: Consumed 2.718s CPU time.
Dec  6 02:24:18 np0005548731 systemd-machined[195355]: Machine qemu-40-instance-00000062 terminated.
Dec  6 02:24:18 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:24:18.906 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3a:72:e0 10.100.0.3'], port_security=['fa:16:3e:3a:72:e0 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'f8846499-359a-4e65-9120-56011fbc0f1d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7c014e4e-a182-4f60-8285-20525bc99e5a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '88f5b34244614321a9b6e902eaba0ece', 'neutron:revision_number': '6', 'neutron:security_group_ids': '562c0019-973b-497e-ab29-636b40b9ed6d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7228f8e4-751e-45fe-ae64-cd2ffef9b9bb, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], logical_port=5c84e057-7718-465e-89ef-0ad3aa2c82b3) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec  6 02:24:18 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:24:18.907 143965 INFO neutron.agent.ovn.metadata.agent [-] Port 5c84e057-7718-465e-89ef-0ad3aa2c82b3 in datapath 7c014e4e-a182-4f60-8285-20525bc99e5a unbound from our chassis
Dec  6 02:24:18 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:24:18.909 143965 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 7c014e4e-a182-4f60-8285-20525bc99e5a, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec  6 02:24:18 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:24:18.910 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[19e4dd67-5e9c-41aa-8023-ee3cf0c4c8e2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  6 02:24:18 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:24:18.910 143965 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-7c014e4e-a182-4f60-8285-20525bc99e5a namespace which is not needed anymore
Dec  6 02:24:18 np0005548731 nova_compute[232433]: 2025-12-06 07:24:18.925 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 02:24:18 np0005548731 nova_compute[232433]: 2025-12-06 07:24:18.930 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 02:24:18 np0005548731 nova_compute[232433]: 2025-12-06 07:24:18.937 232437 INFO nova.virt.libvirt.driver [-] [instance: f8846499-359a-4e65-9120-56011fbc0f1d] Instance destroyed successfully.
Dec  6 02:24:18 np0005548731 nova_compute[232433]: 2025-12-06 07:24:18.938 232437 DEBUG nova.objects.instance [None req-d7afea4d-92aa-4c12-a027-27b0de9ef605 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Lazy-loading 'resources' on Instance uuid f8846499-359a-4e65-9120-56011fbc0f1d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec  6 02:24:19 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:24:19 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:24:19 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:24:19.021 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:24:19 np0005548731 neutron-haproxy-ovnmeta-7c014e4e-a182-4f60-8285-20525bc99e5a[272291]: [NOTICE]   (272296) : haproxy version is 2.8.14-c23fe91
Dec  6 02:24:19 np0005548731 neutron-haproxy-ovnmeta-7c014e4e-a182-4f60-8285-20525bc99e5a[272291]: [NOTICE]   (272296) : path to executable is /usr/sbin/haproxy
Dec  6 02:24:19 np0005548731 neutron-haproxy-ovnmeta-7c014e4e-a182-4f60-8285-20525bc99e5a[272291]: [WARNING]  (272296) : Exiting Master process...
Dec  6 02:24:19 np0005548731 neutron-haproxy-ovnmeta-7c014e4e-a182-4f60-8285-20525bc99e5a[272291]: [ALERT]    (272296) : Current worker (272298) exited with code 143 (Terminated)
Dec  6 02:24:19 np0005548731 neutron-haproxy-ovnmeta-7c014e4e-a182-4f60-8285-20525bc99e5a[272291]: [WARNING]  (272296) : All workers exited. Exiting... (0)
Dec  6 02:24:19 np0005548731 systemd[1]: libpod-9f663632ed12997de2d607600a82dd76ccb0cb488a03d3bb6fe19f2d4d4ab258.scope: Deactivated successfully.
Dec  6 02:24:19 np0005548731 podman[272349]: 2025-12-06 07:24:19.042216223 +0000 UTC m=+0.043631933 container died 9f663632ed12997de2d607600a82dd76ccb0cb488a03d3bb6fe19f2d4d4ab258 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7c014e4e-a182-4f60-8285-20525bc99e5a, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec  6 02:24:19 np0005548731 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-9f663632ed12997de2d607600a82dd76ccb0cb488a03d3bb6fe19f2d4d4ab258-userdata-shm.mount: Deactivated successfully.
Dec  6 02:24:19 np0005548731 systemd[1]: var-lib-containers-storage-overlay-0374b2559bc235913c6036898008a8e3d8c8c7eaf7f3456a9907470b23e73c58-merged.mount: Deactivated successfully.
Dec  6 02:24:19 np0005548731 podman[272349]: 2025-12-06 07:24:19.079438136 +0000 UTC m=+0.080853836 container cleanup 9f663632ed12997de2d607600a82dd76ccb0cb488a03d3bb6fe19f2d4d4ab258 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7c014e4e-a182-4f60-8285-20525bc99e5a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec  6 02:24:19 np0005548731 systemd[1]: libpod-conmon-9f663632ed12997de2d607600a82dd76ccb0cb488a03d3bb6fe19f2d4d4ab258.scope: Deactivated successfully.
Dec  6 02:24:19 np0005548731 podman[272378]: 2025-12-06 07:24:19.13951037 +0000 UTC m=+0.040000483 container remove 9f663632ed12997de2d607600a82dd76ccb0cb488a03d3bb6fe19f2d4d4ab258 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7c014e4e-a182-4f60-8285-20525bc99e5a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  6 02:24:19 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:24:19.145 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[fa8d2b90-ee3f-4b66-9eed-c3cef34a1c87]: (4, ('Sat Dec  6 07:24:18 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-7c014e4e-a182-4f60-8285-20525bc99e5a (9f663632ed12997de2d607600a82dd76ccb0cb488a03d3bb6fe19f2d4d4ab258)\n9f663632ed12997de2d607600a82dd76ccb0cb488a03d3bb6fe19f2d4d4ab258\nSat Dec  6 07:24:19 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-7c014e4e-a182-4f60-8285-20525bc99e5a (9f663632ed12997de2d607600a82dd76ccb0cb488a03d3bb6fe19f2d4d4ab258)\n9f663632ed12997de2d607600a82dd76ccb0cb488a03d3bb6fe19f2d4d4ab258\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  6 02:24:19 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:24:19.147 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[a18eb477-7e45-477c-9d4f-08ba7e8a9fcd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  6 02:24:19 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:24:19.148 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7c014e4e-a0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec  6 02:24:19 np0005548731 nova_compute[232433]: 2025-12-06 07:24:19.151 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 02:24:19 np0005548731 kernel: tap7c014e4e-a0: left promiscuous mode
Dec  6 02:24:19 np0005548731 nova_compute[232433]: 2025-12-06 07:24:19.167 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 02:24:19 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:24:19.169 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[cadd2173-c85f-48f2-8669-3cbb7e305c89]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  6 02:24:19 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:24:19.182 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[ab477bd0-652d-4c4f-aeb2-68714bab25ea]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  6 02:24:19 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:24:19.182 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[fe688190-bd3e-44e8-89d3-aa3f8f11557d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  6 02:24:19 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:24:19.199 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[579086c2-b631-480d-b1c3-ef2dbf14c5de]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 611521, 'reachable_time': 15808, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 272395, 'error': None, 'target': 'ovnmeta-7c014e4e-a182-4f60-8285-20525bc99e5a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:24:19 np0005548731 systemd[1]: run-netns-ovnmeta\x2d7c014e4e\x2da182\x2d4f60\x2d8285\x2d20525bc99e5a.mount: Deactivated successfully.
Dec  6 02:24:19 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:24:19.202 144079 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-7c014e4e-a182-4f60-8285-20525bc99e5a deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Dec  6 02:24:19 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:24:19.202 144079 DEBUG oslo.privsep.daemon [-] privsep: reply[ad6b1c3b-24a9-43a9-bed1-07697569ce09]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:24:19 np0005548731 nova_compute[232433]: 2025-12-06 07:24:19.507 232437 DEBUG nova.virt.libvirt.vif [None req-d7afea4d-92aa-4c12-a027-27b0de9ef605 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-12-06T07:23:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-31720005',display_name='tempest-ServerDiskConfigTestJSON-server-31720005',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-31720005',id=98,image_ref='412dd61d-1b1e-439f-b7f9-7e7c4e42924c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-06T07:24:15Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='88f5b34244614321a9b6e902eaba0ece',ramdisk_id='',reservation_id='r-lpgj2mdw',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='412dd61d-1b1e-439f-b7f9-7e7c4e42924c',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_m
odel='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerDiskConfigTestJSON-749654875',owner_user_name='tempest-ServerDiskConfigTestJSON-749654875-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T07:24:16Z,user_data=None,user_id='d67c136e82ad4001b000848d75eef50d',uuid=f8846499-359a-4e65-9120-56011fbc0f1d,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "5c84e057-7718-465e-89ef-0ad3aa2c82b3", "address": "fa:16:3e:3a:72:e0", "network": {"id": "7c014e4e-a182-4f60-8285-20525bc99e5a", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-602234112-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "88f5b34244614321a9b6e902eaba0ece", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5c84e057-77", "ovs_interfaceid": "5c84e057-7718-465e-89ef-0ad3aa2c82b3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec  6 02:24:19 np0005548731 nova_compute[232433]: 2025-12-06 07:24:19.508 232437 DEBUG nova.network.os_vif_util [None req-d7afea4d-92aa-4c12-a027-27b0de9ef605 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Converting VIF {"id": "5c84e057-7718-465e-89ef-0ad3aa2c82b3", "address": "fa:16:3e:3a:72:e0", "network": {"id": "7c014e4e-a182-4f60-8285-20525bc99e5a", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-602234112-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "88f5b34244614321a9b6e902eaba0ece", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5c84e057-77", "ovs_interfaceid": "5c84e057-7718-465e-89ef-0ad3aa2c82b3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 02:24:19 np0005548731 nova_compute[232433]: 2025-12-06 07:24:19.509 232437 DEBUG nova.network.os_vif_util [None req-d7afea4d-92aa-4c12-a027-27b0de9ef605 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:3a:72:e0,bridge_name='br-int',has_traffic_filtering=True,id=5c84e057-7718-465e-89ef-0ad3aa2c82b3,network=Network(7c014e4e-a182-4f60-8285-20525bc99e5a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5c84e057-77') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 02:24:19 np0005548731 nova_compute[232433]: 2025-12-06 07:24:19.509 232437 DEBUG os_vif [None req-d7afea4d-92aa-4c12-a027-27b0de9ef605 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:3a:72:e0,bridge_name='br-int',has_traffic_filtering=True,id=5c84e057-7718-465e-89ef-0ad3aa2c82b3,network=Network(7c014e4e-a182-4f60-8285-20525bc99e5a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5c84e057-77') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec  6 02:24:19 np0005548731 nova_compute[232433]: 2025-12-06 07:24:19.510 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:24:19 np0005548731 nova_compute[232433]: 2025-12-06 07:24:19.511 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5c84e057-77, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:24:19 np0005548731 nova_compute[232433]: 2025-12-06 07:24:19.512 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:24:19 np0005548731 nova_compute[232433]: 2025-12-06 07:24:19.513 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:24:19 np0005548731 nova_compute[232433]: 2025-12-06 07:24:19.515 232437 INFO os_vif [None req-d7afea4d-92aa-4c12-a027-27b0de9ef605 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:3a:72:e0,bridge_name='br-int',has_traffic_filtering=True,id=5c84e057-7718-465e-89ef-0ad3aa2c82b3,network=Network(7c014e4e-a182-4f60-8285-20525bc99e5a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5c84e057-77')#033[00m
Dec  6 02:24:19 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:24:19 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:24:19 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:24:19.672 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:24:20 np0005548731 nova_compute[232433]: 2025-12-06 07:24:20.465 232437 DEBUG nova.compute.manager [req-3cc1c162-aa06-4465-8e00-2f9e817e1d9b req-a56b2848-0f06-46cf-98da-c55ef6d7c607 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f8846499-359a-4e65-9120-56011fbc0f1d] Received event network-vif-unplugged-5c84e057-7718-465e-89ef-0ad3aa2c82b3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:24:20 np0005548731 nova_compute[232433]: 2025-12-06 07:24:20.465 232437 DEBUG oslo_concurrency.lockutils [req-3cc1c162-aa06-4465-8e00-2f9e817e1d9b req-a56b2848-0f06-46cf-98da-c55ef6d7c607 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "f8846499-359a-4e65-9120-56011fbc0f1d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:24:20 np0005548731 nova_compute[232433]: 2025-12-06 07:24:20.466 232437 DEBUG oslo_concurrency.lockutils [req-3cc1c162-aa06-4465-8e00-2f9e817e1d9b req-a56b2848-0f06-46cf-98da-c55ef6d7c607 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "f8846499-359a-4e65-9120-56011fbc0f1d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:24:20 np0005548731 nova_compute[232433]: 2025-12-06 07:24:20.466 232437 DEBUG oslo_concurrency.lockutils [req-3cc1c162-aa06-4465-8e00-2f9e817e1d9b req-a56b2848-0f06-46cf-98da-c55ef6d7c607 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "f8846499-359a-4e65-9120-56011fbc0f1d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:24:20 np0005548731 nova_compute[232433]: 2025-12-06 07:24:20.466 232437 DEBUG nova.compute.manager [req-3cc1c162-aa06-4465-8e00-2f9e817e1d9b req-a56b2848-0f06-46cf-98da-c55ef6d7c607 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f8846499-359a-4e65-9120-56011fbc0f1d] No waiting events found dispatching network-vif-unplugged-5c84e057-7718-465e-89ef-0ad3aa2c82b3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 02:24:20 np0005548731 nova_compute[232433]: 2025-12-06 07:24:20.466 232437 DEBUG nova.compute.manager [req-3cc1c162-aa06-4465-8e00-2f9e817e1d9b req-a56b2848-0f06-46cf-98da-c55ef6d7c607 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f8846499-359a-4e65-9120-56011fbc0f1d] Received event network-vif-unplugged-5c84e057-7718-465e-89ef-0ad3aa2c82b3 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Dec  6 02:24:21 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:24:21 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:24:21 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:24:21.022 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:24:21 np0005548731 nova_compute[232433]: 2025-12-06 07:24:21.395 232437 INFO nova.virt.libvirt.driver [None req-d7afea4d-92aa-4c12-a027-27b0de9ef605 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] [instance: f8846499-359a-4e65-9120-56011fbc0f1d] Deleting instance files /var/lib/nova/instances/f8846499-359a-4e65-9120-56011fbc0f1d_del#033[00m
Dec  6 02:24:21 np0005548731 nova_compute[232433]: 2025-12-06 07:24:21.396 232437 INFO nova.virt.libvirt.driver [None req-d7afea4d-92aa-4c12-a027-27b0de9ef605 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] [instance: f8846499-359a-4e65-9120-56011fbc0f1d] Deletion of /var/lib/nova/instances/f8846499-359a-4e65-9120-56011fbc0f1d_del complete#033[00m
Dec  6 02:24:21 np0005548731 nova_compute[232433]: 2025-12-06 07:24:21.455 232437 INFO nova.compute.manager [None req-d7afea4d-92aa-4c12-a027-27b0de9ef605 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] [instance: f8846499-359a-4e65-9120-56011fbc0f1d] Took 3.75 seconds to destroy the instance on the hypervisor.#033[00m
Dec  6 02:24:21 np0005548731 nova_compute[232433]: 2025-12-06 07:24:21.456 232437 DEBUG oslo.service.loopingcall [None req-d7afea4d-92aa-4c12-a027-27b0de9ef605 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Dec  6 02:24:21 np0005548731 nova_compute[232433]: 2025-12-06 07:24:21.456 232437 DEBUG nova.compute.manager [-] [instance: f8846499-359a-4e65-9120-56011fbc0f1d] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Dec  6 02:24:21 np0005548731 nova_compute[232433]: 2025-12-06 07:24:21.456 232437 DEBUG nova.network.neutron [-] [instance: f8846499-359a-4e65-9120-56011fbc0f1d] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Dec  6 02:24:21 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:24:21 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:24:21 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:24:21.674 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:24:22 np0005548731 nova_compute[232433]: 2025-12-06 07:24:22.330 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:24:22 np0005548731 nova_compute[232433]: 2025-12-06 07:24:22.594 232437 DEBUG nova.compute.manager [req-d5d9a254-51eb-4d6f-9b6c-f3d2c8367e5a req-d1396a50-eb85-46de-a1ac-f44c300394c6 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f8846499-359a-4e65-9120-56011fbc0f1d] Received event network-vif-plugged-5c84e057-7718-465e-89ef-0ad3aa2c82b3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:24:22 np0005548731 nova_compute[232433]: 2025-12-06 07:24:22.594 232437 DEBUG oslo_concurrency.lockutils [req-d5d9a254-51eb-4d6f-9b6c-f3d2c8367e5a req-d1396a50-eb85-46de-a1ac-f44c300394c6 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "f8846499-359a-4e65-9120-56011fbc0f1d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:24:22 np0005548731 nova_compute[232433]: 2025-12-06 07:24:22.595 232437 DEBUG oslo_concurrency.lockutils [req-d5d9a254-51eb-4d6f-9b6c-f3d2c8367e5a req-d1396a50-eb85-46de-a1ac-f44c300394c6 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "f8846499-359a-4e65-9120-56011fbc0f1d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:24:22 np0005548731 nova_compute[232433]: 2025-12-06 07:24:22.595 232437 DEBUG oslo_concurrency.lockutils [req-d5d9a254-51eb-4d6f-9b6c-f3d2c8367e5a req-d1396a50-eb85-46de-a1ac-f44c300394c6 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "f8846499-359a-4e65-9120-56011fbc0f1d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:24:22 np0005548731 nova_compute[232433]: 2025-12-06 07:24:22.595 232437 DEBUG nova.compute.manager [req-d5d9a254-51eb-4d6f-9b6c-f3d2c8367e5a req-d1396a50-eb85-46de-a1ac-f44c300394c6 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f8846499-359a-4e65-9120-56011fbc0f1d] No waiting events found dispatching network-vif-plugged-5c84e057-7718-465e-89ef-0ad3aa2c82b3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 02:24:22 np0005548731 nova_compute[232433]: 2025-12-06 07:24:22.595 232437 WARNING nova.compute.manager [req-d5d9a254-51eb-4d6f-9b6c-f3d2c8367e5a req-d1396a50-eb85-46de-a1ac-f44c300394c6 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f8846499-359a-4e65-9120-56011fbc0f1d] Received unexpected event network-vif-plugged-5c84e057-7718-465e-89ef-0ad3aa2c82b3 for instance with vm_state active and task_state deleting.#033[00m
Dec  6 02:24:22 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e264 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:24:22 np0005548731 nova_compute[232433]: 2025-12-06 07:24:22.937 232437 DEBUG nova.network.neutron [-] [instance: f8846499-359a-4e65-9120-56011fbc0f1d] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 02:24:22 np0005548731 nova_compute[232433]: 2025-12-06 07:24:22.957 232437 INFO nova.compute.manager [-] [instance: f8846499-359a-4e65-9120-56011fbc0f1d] Took 1.50 seconds to deallocate network for instance.#033[00m
Dec  6 02:24:23 np0005548731 nova_compute[232433]: 2025-12-06 07:24:23.001 232437 DEBUG oslo_concurrency.lockutils [None req-d7afea4d-92aa-4c12-a027-27b0de9ef605 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:24:23 np0005548731 nova_compute[232433]: 2025-12-06 07:24:23.001 232437 DEBUG oslo_concurrency.lockutils [None req-d7afea4d-92aa-4c12-a027-27b0de9ef605 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:24:23 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:24:23 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:24:23 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:24:23.024 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:24:23 np0005548731 nova_compute[232433]: 2025-12-06 07:24:23.053 232437 DEBUG oslo_concurrency.processutils [None req-d7afea4d-92aa-4c12-a027-27b0de9ef605 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:24:23 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 02:24:23 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1819027873' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 02:24:23 np0005548731 nova_compute[232433]: 2025-12-06 07:24:23.526 232437 DEBUG oslo_concurrency.processutils [None req-d7afea4d-92aa-4c12-a027-27b0de9ef605 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.473s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:24:23 np0005548731 nova_compute[232433]: 2025-12-06 07:24:23.534 232437 DEBUG nova.compute.provider_tree [None req-d7afea4d-92aa-4c12-a027-27b0de9ef605 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Inventory has not changed in ProviderTree for provider: 6d00757a-082f-486d-ae84-869a2ba2e6e7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  6 02:24:23 np0005548731 nova_compute[232433]: 2025-12-06 07:24:23.552 232437 DEBUG nova.scheduler.client.report [None req-d7afea4d-92aa-4c12-a027-27b0de9ef605 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Inventory has not changed for provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  6 02:24:23 np0005548731 nova_compute[232433]: 2025-12-06 07:24:23.580 232437 DEBUG oslo_concurrency.lockutils [None req-d7afea4d-92aa-4c12-a027-27b0de9ef605 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.579s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:24:23 np0005548731 nova_compute[232433]: 2025-12-06 07:24:23.604 232437 INFO nova.scheduler.client.report [None req-d7afea4d-92aa-4c12-a027-27b0de9ef605 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Deleted allocations for instance f8846499-359a-4e65-9120-56011fbc0f1d#033[00m
Dec  6 02:24:23 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:24:23 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:24:23 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:24:23.677 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:24:23 np0005548731 nova_compute[232433]: 2025-12-06 07:24:23.759 232437 DEBUG oslo_concurrency.lockutils [None req-d7afea4d-92aa-4c12-a027-27b0de9ef605 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Lock "f8846499-359a-4e65-9120-56011fbc0f1d" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 6.056s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:24:24 np0005548731 nova_compute[232433]: 2025-12-06 07:24:24.512 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:24:25 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:24:25 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:24:25 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:24:25.027 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:24:25 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:24:25 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:24:25 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:24:25.679 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:24:27 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:24:27 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:24:27 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:24:27.030 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:24:27 np0005548731 nova_compute[232433]: 2025-12-06 07:24:27.332 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:24:27 np0005548731 nova_compute[232433]: 2025-12-06 07:24:27.339 232437 DEBUG nova.compute.manager [req-3b66a7f9-59e9-4d99-b4f9-d9ed708b5630 req-6543b62d-48ce-4288-919c-e8566000f9de 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f8846499-359a-4e65-9120-56011fbc0f1d] Received event network-vif-deleted-5c84e057-7718-465e-89ef-0ad3aa2c82b3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:24:27 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:24:27 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:24:27 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:24:27.682 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:24:27 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e264 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:24:28 np0005548731 nova_compute[232433]: 2025-12-06 07:24:28.417 232437 DEBUG oslo_concurrency.lockutils [None req-fcc9ff1d-188f-484c-949d-facadeb08252 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] Acquiring lock "5ea73f51-d224-41ed-892b-a137d9985e73" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:24:28 np0005548731 nova_compute[232433]: 2025-12-06 07:24:28.418 232437 DEBUG oslo_concurrency.lockutils [None req-fcc9ff1d-188f-484c-949d-facadeb08252 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] Lock "5ea73f51-d224-41ed-892b-a137d9985e73" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:24:28 np0005548731 nova_compute[232433]: 2025-12-06 07:24:28.444 232437 DEBUG nova.compute.manager [None req-fcc9ff1d-188f-484c-949d-facadeb08252 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] [instance: 5ea73f51-d224-41ed-892b-a137d9985e73] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Dec  6 02:24:28 np0005548731 nova_compute[232433]: 2025-12-06 07:24:28.524 232437 DEBUG oslo_concurrency.lockutils [None req-fcc9ff1d-188f-484c-949d-facadeb08252 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:24:28 np0005548731 nova_compute[232433]: 2025-12-06 07:24:28.524 232437 DEBUG oslo_concurrency.lockutils [None req-fcc9ff1d-188f-484c-949d-facadeb08252 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:24:28 np0005548731 nova_compute[232433]: 2025-12-06 07:24:28.530 232437 DEBUG nova.virt.hardware [None req-fcc9ff1d-188f-484c-949d-facadeb08252 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Dec  6 02:24:28 np0005548731 nova_compute[232433]: 2025-12-06 07:24:28.530 232437 INFO nova.compute.claims [None req-fcc9ff1d-188f-484c-949d-facadeb08252 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] [instance: 5ea73f51-d224-41ed-892b-a137d9985e73] Claim successful on node compute-2.ctlplane.example.com#033[00m
Dec  6 02:24:28 np0005548731 nova_compute[232433]: 2025-12-06 07:24:28.654 232437 DEBUG oslo_concurrency.processutils [None req-fcc9ff1d-188f-484c-949d-facadeb08252 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:24:28 np0005548731 nova_compute[232433]: 2025-12-06 07:24:28.815 232437 DEBUG oslo_concurrency.lockutils [None req-fa6bbbda-7b9c-4dae-babc-e050827eef98 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Acquiring lock "a6984fe7-72e6-4e80-b77b-925152e01f3f" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:24:28 np0005548731 nova_compute[232433]: 2025-12-06 07:24:28.816 232437 DEBUG oslo_concurrency.lockutils [None req-fa6bbbda-7b9c-4dae-babc-e050827eef98 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Lock "a6984fe7-72e6-4e80-b77b-925152e01f3f" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:24:28 np0005548731 nova_compute[232433]: 2025-12-06 07:24:28.834 232437 DEBUG nova.compute.manager [None req-fa6bbbda-7b9c-4dae-babc-e050827eef98 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] [instance: a6984fe7-72e6-4e80-b77b-925152e01f3f] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Dec  6 02:24:28 np0005548731 nova_compute[232433]: 2025-12-06 07:24:28.924 232437 DEBUG oslo_concurrency.lockutils [None req-fa6bbbda-7b9c-4dae-babc-e050827eef98 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:24:29 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:24:29 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:24:29 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:24:29.032 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:24:29 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 02:24:29 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2549017799' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 02:24:29 np0005548731 nova_compute[232433]: 2025-12-06 07:24:29.088 232437 DEBUG oslo_concurrency.processutils [None req-fcc9ff1d-188f-484c-949d-facadeb08252 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.435s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:24:29 np0005548731 nova_compute[232433]: 2025-12-06 07:24:29.094 232437 DEBUG nova.compute.provider_tree [None req-fcc9ff1d-188f-484c-949d-facadeb08252 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] Inventory has not changed in ProviderTree for provider: 6d00757a-082f-486d-ae84-869a2ba2e6e7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  6 02:24:29 np0005548731 nova_compute[232433]: 2025-12-06 07:24:29.116 232437 DEBUG nova.scheduler.client.report [None req-fcc9ff1d-188f-484c-949d-facadeb08252 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] Inventory has not changed for provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  6 02:24:29 np0005548731 nova_compute[232433]: 2025-12-06 07:24:29.138 232437 DEBUG oslo_concurrency.lockutils [None req-fcc9ff1d-188f-484c-949d-facadeb08252 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.614s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:24:29 np0005548731 nova_compute[232433]: 2025-12-06 07:24:29.139 232437 DEBUG nova.compute.manager [None req-fcc9ff1d-188f-484c-949d-facadeb08252 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] [instance: 5ea73f51-d224-41ed-892b-a137d9985e73] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Dec  6 02:24:29 np0005548731 nova_compute[232433]: 2025-12-06 07:24:29.141 232437 DEBUG oslo_concurrency.lockutils [None req-fa6bbbda-7b9c-4dae-babc-e050827eef98 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.217s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:24:29 np0005548731 nova_compute[232433]: 2025-12-06 07:24:29.146 232437 DEBUG nova.virt.hardware [None req-fa6bbbda-7b9c-4dae-babc-e050827eef98 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Dec  6 02:24:29 np0005548731 nova_compute[232433]: 2025-12-06 07:24:29.146 232437 INFO nova.compute.claims [None req-fa6bbbda-7b9c-4dae-babc-e050827eef98 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] [instance: a6984fe7-72e6-4e80-b77b-925152e01f3f] Claim successful on node compute-2.ctlplane.example.com#033[00m
Dec  6 02:24:29 np0005548731 nova_compute[232433]: 2025-12-06 07:24:29.194 232437 DEBUG nova.compute.manager [None req-fcc9ff1d-188f-484c-949d-facadeb08252 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] [instance: 5ea73f51-d224-41ed-892b-a137d9985e73] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Dec  6 02:24:29 np0005548731 nova_compute[232433]: 2025-12-06 07:24:29.194 232437 DEBUG nova.network.neutron [None req-fcc9ff1d-188f-484c-949d-facadeb08252 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] [instance: 5ea73f51-d224-41ed-892b-a137d9985e73] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Dec  6 02:24:29 np0005548731 nova_compute[232433]: 2025-12-06 07:24:29.217 232437 INFO nova.virt.libvirt.driver [None req-fcc9ff1d-188f-484c-949d-facadeb08252 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] [instance: 5ea73f51-d224-41ed-892b-a137d9985e73] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Dec  6 02:24:29 np0005548731 nova_compute[232433]: 2025-12-06 07:24:29.240 232437 DEBUG nova.compute.manager [None req-fcc9ff1d-188f-484c-949d-facadeb08252 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] [instance: 5ea73f51-d224-41ed-892b-a137d9985e73] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Dec  6 02:24:29 np0005548731 nova_compute[232433]: 2025-12-06 07:24:29.270 232437 DEBUG oslo_concurrency.processutils [None req-fa6bbbda-7b9c-4dae-babc-e050827eef98 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:24:29 np0005548731 nova_compute[232433]: 2025-12-06 07:24:29.348 232437 DEBUG nova.compute.manager [None req-fcc9ff1d-188f-484c-949d-facadeb08252 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] [instance: 5ea73f51-d224-41ed-892b-a137d9985e73] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Dec  6 02:24:29 np0005548731 nova_compute[232433]: 2025-12-06 07:24:29.350 232437 DEBUG nova.virt.libvirt.driver [None req-fcc9ff1d-188f-484c-949d-facadeb08252 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] [instance: 5ea73f51-d224-41ed-892b-a137d9985e73] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Dec  6 02:24:29 np0005548731 nova_compute[232433]: 2025-12-06 07:24:29.350 232437 INFO nova.virt.libvirt.driver [None req-fcc9ff1d-188f-484c-949d-facadeb08252 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] [instance: 5ea73f51-d224-41ed-892b-a137d9985e73] Creating image(s)#033[00m
Dec  6 02:24:29 np0005548731 nova_compute[232433]: 2025-12-06 07:24:29.376 232437 DEBUG nova.storage.rbd_utils [None req-fcc9ff1d-188f-484c-949d-facadeb08252 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] rbd image 5ea73f51-d224-41ed-892b-a137d9985e73_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:24:29 np0005548731 nova_compute[232433]: 2025-12-06 07:24:29.401 232437 DEBUG nova.storage.rbd_utils [None req-fcc9ff1d-188f-484c-949d-facadeb08252 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] rbd image 5ea73f51-d224-41ed-892b-a137d9985e73_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:24:29 np0005548731 nova_compute[232433]: 2025-12-06 07:24:29.424 232437 DEBUG nova.storage.rbd_utils [None req-fcc9ff1d-188f-484c-949d-facadeb08252 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] rbd image 5ea73f51-d224-41ed-892b-a137d9985e73_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:24:29 np0005548731 nova_compute[232433]: 2025-12-06 07:24:29.428 232437 DEBUG oslo_concurrency.processutils [None req-fcc9ff1d-188f-484c-949d-facadeb08252 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:24:29 np0005548731 nova_compute[232433]: 2025-12-06 07:24:29.506 232437 DEBUG oslo_concurrency.processutils [None req-fcc9ff1d-188f-484c-949d-facadeb08252 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json" returned: 0 in 0.077s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:24:29 np0005548731 nova_compute[232433]: 2025-12-06 07:24:29.507 232437 DEBUG oslo_concurrency.lockutils [None req-fcc9ff1d-188f-484c-949d-facadeb08252 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] Acquiring lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:24:29 np0005548731 nova_compute[232433]: 2025-12-06 07:24:29.508 232437 DEBUG oslo_concurrency.lockutils [None req-fcc9ff1d-188f-484c-949d-facadeb08252 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:24:29 np0005548731 nova_compute[232433]: 2025-12-06 07:24:29.508 232437 DEBUG oslo_concurrency.lockutils [None req-fcc9ff1d-188f-484c-949d-facadeb08252 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:24:29 np0005548731 nova_compute[232433]: 2025-12-06 07:24:29.531 232437 DEBUG nova.storage.rbd_utils [None req-fcc9ff1d-188f-484c-949d-facadeb08252 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] rbd image 5ea73f51-d224-41ed-892b-a137d9985e73_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:24:29 np0005548731 nova_compute[232433]: 2025-12-06 07:24:29.535 232437 DEBUG oslo_concurrency.processutils [None req-fcc9ff1d-188f-484c-949d-facadeb08252 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef 5ea73f51-d224-41ed-892b-a137d9985e73_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:24:29 np0005548731 nova_compute[232433]: 2025-12-06 07:24:29.556 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:24:29 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 02:24:29 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/472744709' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 02:24:29 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:24:29 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000025s ======
Dec  6 02:24:29 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:24:29.685 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Dec  6 02:24:29 np0005548731 nova_compute[232433]: 2025-12-06 07:24:29.701 232437 DEBUG oslo_concurrency.processutils [None req-fa6bbbda-7b9c-4dae-babc-e050827eef98 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.430s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:24:29 np0005548731 nova_compute[232433]: 2025-12-06 07:24:29.706 232437 DEBUG nova.compute.provider_tree [None req-fa6bbbda-7b9c-4dae-babc-e050827eef98 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Inventory has not changed in ProviderTree for provider: 6d00757a-082f-486d-ae84-869a2ba2e6e7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  6 02:24:29 np0005548731 nova_compute[232433]: 2025-12-06 07:24:29.736 232437 DEBUG nova.scheduler.client.report [None req-fa6bbbda-7b9c-4dae-babc-e050827eef98 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Inventory has not changed for provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  6 02:24:29 np0005548731 nova_compute[232433]: 2025-12-06 07:24:29.760 232437 DEBUG oslo_concurrency.lockutils [None req-fa6bbbda-7b9c-4dae-babc-e050827eef98 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.619s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:24:29 np0005548731 nova_compute[232433]: 2025-12-06 07:24:29.761 232437 DEBUG nova.compute.manager [None req-fa6bbbda-7b9c-4dae-babc-e050827eef98 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] [instance: a6984fe7-72e6-4e80-b77b-925152e01f3f] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Dec  6 02:24:29 np0005548731 nova_compute[232433]: 2025-12-06 07:24:29.812 232437 DEBUG nova.compute.manager [None req-fa6bbbda-7b9c-4dae-babc-e050827eef98 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] [instance: a6984fe7-72e6-4e80-b77b-925152e01f3f] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Dec  6 02:24:29 np0005548731 nova_compute[232433]: 2025-12-06 07:24:29.813 232437 DEBUG nova.network.neutron [None req-fa6bbbda-7b9c-4dae-babc-e050827eef98 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] [instance: a6984fe7-72e6-4e80-b77b-925152e01f3f] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Dec  6 02:24:29 np0005548731 nova_compute[232433]: 2025-12-06 07:24:29.837 232437 INFO nova.virt.libvirt.driver [None req-fa6bbbda-7b9c-4dae-babc-e050827eef98 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] [instance: a6984fe7-72e6-4e80-b77b-925152e01f3f] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Dec  6 02:24:29 np0005548731 nova_compute[232433]: 2025-12-06 07:24:29.854 232437 DEBUG nova.compute.manager [None req-fa6bbbda-7b9c-4dae-babc-e050827eef98 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] [instance: a6984fe7-72e6-4e80-b77b-925152e01f3f] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Dec  6 02:24:29 np0005548731 nova_compute[232433]: 2025-12-06 07:24:29.923 232437 DEBUG nova.compute.manager [None req-fa6bbbda-7b9c-4dae-babc-e050827eef98 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] [instance: a6984fe7-72e6-4e80-b77b-925152e01f3f] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Dec  6 02:24:29 np0005548731 nova_compute[232433]: 2025-12-06 07:24:29.924 232437 DEBUG nova.virt.libvirt.driver [None req-fa6bbbda-7b9c-4dae-babc-e050827eef98 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] [instance: a6984fe7-72e6-4e80-b77b-925152e01f3f] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Dec  6 02:24:29 np0005548731 nova_compute[232433]: 2025-12-06 07:24:29.924 232437 INFO nova.virt.libvirt.driver [None req-fa6bbbda-7b9c-4dae-babc-e050827eef98 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] [instance: a6984fe7-72e6-4e80-b77b-925152e01f3f] Creating image(s)#033[00m
Dec  6 02:24:29 np0005548731 nova_compute[232433]: 2025-12-06 07:24:29.948 232437 DEBUG nova.storage.rbd_utils [None req-fa6bbbda-7b9c-4dae-babc-e050827eef98 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] rbd image a6984fe7-72e6-4e80-b77b-925152e01f3f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:24:29 np0005548731 nova_compute[232433]: 2025-12-06 07:24:29.972 232437 DEBUG nova.storage.rbd_utils [None req-fa6bbbda-7b9c-4dae-babc-e050827eef98 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] rbd image a6984fe7-72e6-4e80-b77b-925152e01f3f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:24:29 np0005548731 nova_compute[232433]: 2025-12-06 07:24:29.997 232437 DEBUG nova.storage.rbd_utils [None req-fa6bbbda-7b9c-4dae-babc-e050827eef98 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] rbd image a6984fe7-72e6-4e80-b77b-925152e01f3f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:24:30 np0005548731 nova_compute[232433]: 2025-12-06 07:24:30.000 232437 DEBUG oslo_concurrency.processutils [None req-fa6bbbda-7b9c-4dae-babc-e050827eef98 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:24:30 np0005548731 nova_compute[232433]: 2025-12-06 07:24:30.060 232437 DEBUG oslo_concurrency.processutils [None req-fa6bbbda-7b9c-4dae-babc-e050827eef98 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:24:30 np0005548731 nova_compute[232433]: 2025-12-06 07:24:30.061 232437 DEBUG oslo_concurrency.lockutils [None req-fa6bbbda-7b9c-4dae-babc-e050827eef98 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Acquiring lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:24:30 np0005548731 nova_compute[232433]: 2025-12-06 07:24:30.061 232437 DEBUG oslo_concurrency.lockutils [None req-fa6bbbda-7b9c-4dae-babc-e050827eef98 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:24:30 np0005548731 nova_compute[232433]: 2025-12-06 07:24:30.062 232437 DEBUG oslo_concurrency.lockutils [None req-fa6bbbda-7b9c-4dae-babc-e050827eef98 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:24:30 np0005548731 nova_compute[232433]: 2025-12-06 07:24:30.086 232437 DEBUG nova.storage.rbd_utils [None req-fa6bbbda-7b9c-4dae-babc-e050827eef98 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] rbd image a6984fe7-72e6-4e80-b77b-925152e01f3f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:24:30 np0005548731 nova_compute[232433]: 2025-12-06 07:24:30.089 232437 DEBUG oslo_concurrency.processutils [None req-fa6bbbda-7b9c-4dae-babc-e050827eef98 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef a6984fe7-72e6-4e80-b77b-925152e01f3f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:24:30 np0005548731 nova_compute[232433]: 2025-12-06 07:24:30.115 232437 DEBUG nova.policy [None req-fa6bbbda-7b9c-4dae-babc-e050827eef98 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'd966fefcb38a45219b9cc637c46a3d62', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'c6d2f50c0db54315bfa96a24511dda90', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Dec  6 02:24:30 np0005548731 nova_compute[232433]: 2025-12-06 07:24:30.118 232437 DEBUG nova.policy [None req-fcc9ff1d-188f-484c-949d-facadeb08252 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'bf140baee82f408e8fafa81062146641', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'c5036a5cbf7d44fd809d00942afd76e6', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Dec  6 02:24:31 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:24:31 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:24:31 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:24:31.035 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:24:31 np0005548731 nova_compute[232433]: 2025-12-06 07:24:31.047 232437 DEBUG nova.network.neutron [None req-fcc9ff1d-188f-484c-949d-facadeb08252 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] [instance: 5ea73f51-d224-41ed-892b-a137d9985e73] Successfully created port: e61e57ee-1e76-4d23-ab33-ab23cf69ba70 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Dec  6 02:24:31 np0005548731 nova_compute[232433]: 2025-12-06 07:24:31.129 232437 DEBUG nova.network.neutron [None req-fa6bbbda-7b9c-4dae-babc-e050827eef98 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] [instance: a6984fe7-72e6-4e80-b77b-925152e01f3f] Successfully created port: 02492be5-e171-4dea-a6fe-20c8b6f3a45e _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Dec  6 02:24:31 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:24:31 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:24:31 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:24:31.689 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:24:32 np0005548731 nova_compute[232433]: 2025-12-06 07:24:32.306 232437 DEBUG nova.network.neutron [None req-fcc9ff1d-188f-484c-949d-facadeb08252 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] [instance: 5ea73f51-d224-41ed-892b-a137d9985e73] Successfully updated port: e61e57ee-1e76-4d23-ab33-ab23cf69ba70 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Dec  6 02:24:32 np0005548731 nova_compute[232433]: 2025-12-06 07:24:32.329 232437 DEBUG oslo_concurrency.lockutils [None req-fcc9ff1d-188f-484c-949d-facadeb08252 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] Acquiring lock "refresh_cache-5ea73f51-d224-41ed-892b-a137d9985e73" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 02:24:32 np0005548731 nova_compute[232433]: 2025-12-06 07:24:32.329 232437 DEBUG oslo_concurrency.lockutils [None req-fcc9ff1d-188f-484c-949d-facadeb08252 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] Acquired lock "refresh_cache-5ea73f51-d224-41ed-892b-a137d9985e73" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 02:24:32 np0005548731 nova_compute[232433]: 2025-12-06 07:24:32.329 232437 DEBUG nova.network.neutron [None req-fcc9ff1d-188f-484c-949d-facadeb08252 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] [instance: 5ea73f51-d224-41ed-892b-a137d9985e73] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec  6 02:24:32 np0005548731 nova_compute[232433]: 2025-12-06 07:24:32.333 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:24:32 np0005548731 nova_compute[232433]: 2025-12-06 07:24:32.361 232437 DEBUG oslo_concurrency.processutils [None req-fcc9ff1d-188f-484c-949d-facadeb08252 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef 5ea73f51-d224-41ed-892b-a137d9985e73_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.826s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:24:32 np0005548731 nova_compute[232433]: 2025-12-06 07:24:32.449 232437 DEBUG nova.compute.manager [req-72a3f37c-08a2-40bf-9700-902660df0b89 req-214e780c-c7aa-43ea-8bdb-8d0a64d2f27b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 5ea73f51-d224-41ed-892b-a137d9985e73] Received event network-changed-e61e57ee-1e76-4d23-ab33-ab23cf69ba70 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:24:32 np0005548731 nova_compute[232433]: 2025-12-06 07:24:32.449 232437 DEBUG nova.compute.manager [req-72a3f37c-08a2-40bf-9700-902660df0b89 req-214e780c-c7aa-43ea-8bdb-8d0a64d2f27b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 5ea73f51-d224-41ed-892b-a137d9985e73] Refreshing instance network info cache due to event network-changed-e61e57ee-1e76-4d23-ab33-ab23cf69ba70. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  6 02:24:32 np0005548731 nova_compute[232433]: 2025-12-06 07:24:32.449 232437 DEBUG oslo_concurrency.lockutils [req-72a3f37c-08a2-40bf-9700-902660df0b89 req-214e780c-c7aa-43ea-8bdb-8d0a64d2f27b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-5ea73f51-d224-41ed-892b-a137d9985e73" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 02:24:32 np0005548731 nova_compute[232433]: 2025-12-06 07:24:32.453 232437 DEBUG nova.storage.rbd_utils [None req-fcc9ff1d-188f-484c-949d-facadeb08252 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] resizing rbd image 5ea73f51-d224-41ed-892b-a137d9985e73_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Dec  6 02:24:32 np0005548731 nova_compute[232433]: 2025-12-06 07:24:32.624 232437 DEBUG oslo_concurrency.processutils [None req-fa6bbbda-7b9c-4dae-babc-e050827eef98 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef a6984fe7-72e6-4e80-b77b-925152e01f3f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.535s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:24:32 np0005548731 nova_compute[232433]: 2025-12-06 07:24:32.658 232437 DEBUG nova.objects.instance [None req-fcc9ff1d-188f-484c-949d-facadeb08252 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] Lazy-loading 'migration_context' on Instance uuid 5ea73f51-d224-41ed-892b-a137d9985e73 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:24:32 np0005548731 nova_compute[232433]: 2025-12-06 07:24:32.693 232437 DEBUG nova.virt.libvirt.driver [None req-fcc9ff1d-188f-484c-949d-facadeb08252 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] [instance: 5ea73f51-d224-41ed-892b-a137d9985e73] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Dec  6 02:24:32 np0005548731 nova_compute[232433]: 2025-12-06 07:24:32.694 232437 DEBUG nova.virt.libvirt.driver [None req-fcc9ff1d-188f-484c-949d-facadeb08252 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] [instance: 5ea73f51-d224-41ed-892b-a137d9985e73] Ensure instance console log exists: /var/lib/nova/instances/5ea73f51-d224-41ed-892b-a137d9985e73/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec  6 02:24:32 np0005548731 nova_compute[232433]: 2025-12-06 07:24:32.695 232437 DEBUG oslo_concurrency.lockutils [None req-fcc9ff1d-188f-484c-949d-facadeb08252 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:24:32 np0005548731 nova_compute[232433]: 2025-12-06 07:24:32.695 232437 DEBUG oslo_concurrency.lockutils [None req-fcc9ff1d-188f-484c-949d-facadeb08252 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:24:32 np0005548731 nova_compute[232433]: 2025-12-06 07:24:32.695 232437 DEBUG oslo_concurrency.lockutils [None req-fcc9ff1d-188f-484c-949d-facadeb08252 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:24:32 np0005548731 nova_compute[232433]: 2025-12-06 07:24:32.700 232437 DEBUG nova.storage.rbd_utils [None req-fa6bbbda-7b9c-4dae-babc-e050827eef98 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] resizing rbd image a6984fe7-72e6-4e80-b77b-925152e01f3f_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Dec  6 02:24:32 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e264 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:24:33 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:24:33 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:24:33 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:24:33.037 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:24:33 np0005548731 nova_compute[232433]: 2025-12-06 07:24:33.584 232437 DEBUG nova.network.neutron [None req-fa6bbbda-7b9c-4dae-babc-e050827eef98 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] [instance: a6984fe7-72e6-4e80-b77b-925152e01f3f] Successfully updated port: 02492be5-e171-4dea-a6fe-20c8b6f3a45e _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Dec  6 02:24:33 np0005548731 nova_compute[232433]: 2025-12-06 07:24:33.586 232437 DEBUG nova.network.neutron [None req-fcc9ff1d-188f-484c-949d-facadeb08252 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] [instance: 5ea73f51-d224-41ed-892b-a137d9985e73] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec  6 02:24:33 np0005548731 nova_compute[232433]: 2025-12-06 07:24:33.589 232437 DEBUG nova.compute.manager [req-df1c723c-e19b-4dd6-b9d4-e674286ef33d req-db6b5e5c-5e68-417a-b7e9-7c3b21a3552a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: a6984fe7-72e6-4e80-b77b-925152e01f3f] Received event network-changed-02492be5-e171-4dea-a6fe-20c8b6f3a45e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:24:33 np0005548731 nova_compute[232433]: 2025-12-06 07:24:33.589 232437 DEBUG nova.compute.manager [req-df1c723c-e19b-4dd6-b9d4-e674286ef33d req-db6b5e5c-5e68-417a-b7e9-7c3b21a3552a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: a6984fe7-72e6-4e80-b77b-925152e01f3f] Refreshing instance network info cache due to event network-changed-02492be5-e171-4dea-a6fe-20c8b6f3a45e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  6 02:24:33 np0005548731 nova_compute[232433]: 2025-12-06 07:24:33.589 232437 DEBUG oslo_concurrency.lockutils [req-df1c723c-e19b-4dd6-b9d4-e674286ef33d req-db6b5e5c-5e68-417a-b7e9-7c3b21a3552a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-a6984fe7-72e6-4e80-b77b-925152e01f3f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 02:24:33 np0005548731 nova_compute[232433]: 2025-12-06 07:24:33.590 232437 DEBUG oslo_concurrency.lockutils [req-df1c723c-e19b-4dd6-b9d4-e674286ef33d req-db6b5e5c-5e68-417a-b7e9-7c3b21a3552a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-a6984fe7-72e6-4e80-b77b-925152e01f3f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 02:24:33 np0005548731 nova_compute[232433]: 2025-12-06 07:24:33.590 232437 DEBUG nova.network.neutron [req-df1c723c-e19b-4dd6-b9d4-e674286ef33d req-db6b5e5c-5e68-417a-b7e9-7c3b21a3552a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: a6984fe7-72e6-4e80-b77b-925152e01f3f] Refreshing network info cache for port 02492be5-e171-4dea-a6fe-20c8b6f3a45e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  6 02:24:33 np0005548731 nova_compute[232433]: 2025-12-06 07:24:33.607 232437 DEBUG oslo_concurrency.lockutils [None req-fa6bbbda-7b9c-4dae-babc-e050827eef98 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Acquiring lock "refresh_cache-a6984fe7-72e6-4e80-b77b-925152e01f3f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 02:24:33 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:24:33 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:24:33 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:24:33.692 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:24:33 np0005548731 nova_compute[232433]: 2025-12-06 07:24:33.936 232437 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765005858.9356663, f8846499-359a-4e65-9120-56011fbc0f1d => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 02:24:33 np0005548731 nova_compute[232433]: 2025-12-06 07:24:33.937 232437 INFO nova.compute.manager [-] [instance: f8846499-359a-4e65-9120-56011fbc0f1d] VM Stopped (Lifecycle Event)#033[00m
Dec  6 02:24:33 np0005548731 nova_compute[232433]: 2025-12-06 07:24:33.966 232437 DEBUG nova.network.neutron [req-df1c723c-e19b-4dd6-b9d4-e674286ef33d req-db6b5e5c-5e68-417a-b7e9-7c3b21a3552a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: a6984fe7-72e6-4e80-b77b-925152e01f3f] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec  6 02:24:33 np0005548731 nova_compute[232433]: 2025-12-06 07:24:33.976 232437 DEBUG nova.compute.manager [None req-5fcb525d-00a0-4d97-8e7f-a5277bd88a6a - - - - - -] [instance: f8846499-359a-4e65-9120-56011fbc0f1d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:24:34 np0005548731 nova_compute[232433]: 2025-12-06 07:24:34.337 232437 DEBUG nova.network.neutron [req-df1c723c-e19b-4dd6-b9d4-e674286ef33d req-db6b5e5c-5e68-417a-b7e9-7c3b21a3552a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: a6984fe7-72e6-4e80-b77b-925152e01f3f] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 02:24:34 np0005548731 nova_compute[232433]: 2025-12-06 07:24:34.373 232437 DEBUG oslo_concurrency.lockutils [req-df1c723c-e19b-4dd6-b9d4-e674286ef33d req-db6b5e5c-5e68-417a-b7e9-7c3b21a3552a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-a6984fe7-72e6-4e80-b77b-925152e01f3f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 02:24:34 np0005548731 nova_compute[232433]: 2025-12-06 07:24:34.373 232437 DEBUG oslo_concurrency.lockutils [None req-fa6bbbda-7b9c-4dae-babc-e050827eef98 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Acquired lock "refresh_cache-a6984fe7-72e6-4e80-b77b-925152e01f3f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 02:24:34 np0005548731 nova_compute[232433]: 2025-12-06 07:24:34.374 232437 DEBUG nova.network.neutron [None req-fa6bbbda-7b9c-4dae-babc-e050827eef98 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] [instance: a6984fe7-72e6-4e80-b77b-925152e01f3f] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec  6 02:24:34 np0005548731 nova_compute[232433]: 2025-12-06 07:24:34.559 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:24:35 np0005548731 nova_compute[232433]: 2025-12-06 07:24:35.017 232437 DEBUG nova.network.neutron [None req-fa6bbbda-7b9c-4dae-babc-e050827eef98 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] [instance: a6984fe7-72e6-4e80-b77b-925152e01f3f] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec  6 02:24:35 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:24:35 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000025s ======
Dec  6 02:24:35 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:24:35.039 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Dec  6 02:24:35 np0005548731 nova_compute[232433]: 2025-12-06 07:24:35.245 232437 DEBUG nova.network.neutron [None req-fcc9ff1d-188f-484c-949d-facadeb08252 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] [instance: 5ea73f51-d224-41ed-892b-a137d9985e73] Updating instance_info_cache with network_info: [{"id": "e61e57ee-1e76-4d23-ab33-ab23cf69ba70", "address": "fa:16:3e:d9:ba:09", "network": {"id": "3935518d-ad39-4c36-90ae-f8fe979eafc5", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-32965635-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "c5036a5cbf7d44fd809d00942afd76e6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape61e57ee-1e", "ovs_interfaceid": "e61e57ee-1e76-4d23-ab33-ab23cf69ba70", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 02:24:35 np0005548731 nova_compute[232433]: 2025-12-06 07:24:35.269 232437 DEBUG oslo_concurrency.lockutils [None req-fcc9ff1d-188f-484c-949d-facadeb08252 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] Releasing lock "refresh_cache-5ea73f51-d224-41ed-892b-a137d9985e73" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 02:24:35 np0005548731 nova_compute[232433]: 2025-12-06 07:24:35.270 232437 DEBUG nova.compute.manager [None req-fcc9ff1d-188f-484c-949d-facadeb08252 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] [instance: 5ea73f51-d224-41ed-892b-a137d9985e73] Instance network_info: |[{"id": "e61e57ee-1e76-4d23-ab33-ab23cf69ba70", "address": "fa:16:3e:d9:ba:09", "network": {"id": "3935518d-ad39-4c36-90ae-f8fe979eafc5", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-32965635-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "c5036a5cbf7d44fd809d00942afd76e6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape61e57ee-1e", "ovs_interfaceid": "e61e57ee-1e76-4d23-ab33-ab23cf69ba70", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Dec  6 02:24:35 np0005548731 nova_compute[232433]: 2025-12-06 07:24:35.270 232437 DEBUG oslo_concurrency.lockutils [req-72a3f37c-08a2-40bf-9700-902660df0b89 req-214e780c-c7aa-43ea-8bdb-8d0a64d2f27b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-5ea73f51-d224-41ed-892b-a137d9985e73" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 02:24:35 np0005548731 nova_compute[232433]: 2025-12-06 07:24:35.271 232437 DEBUG nova.network.neutron [req-72a3f37c-08a2-40bf-9700-902660df0b89 req-214e780c-c7aa-43ea-8bdb-8d0a64d2f27b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 5ea73f51-d224-41ed-892b-a137d9985e73] Refreshing network info cache for port e61e57ee-1e76-4d23-ab33-ab23cf69ba70 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  6 02:24:35 np0005548731 nova_compute[232433]: 2025-12-06 07:24:35.276 232437 DEBUG nova.virt.libvirt.driver [None req-fcc9ff1d-188f-484c-949d-facadeb08252 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] [instance: 5ea73f51-d224-41ed-892b-a137d9985e73] Start _get_guest_xml network_info=[{"id": "e61e57ee-1e76-4d23-ab33-ab23cf69ba70", "address": "fa:16:3e:d9:ba:09", "network": {"id": "3935518d-ad39-4c36-90ae-f8fe979eafc5", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-32965635-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "c5036a5cbf7d44fd809d00942afd76e6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape61e57ee-1e", "ovs_interfaceid": "e61e57ee-1e76-4d23-ab33-ab23cf69ba70", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'device_type': 'disk', 'encryption_secret_uuid': None, 'disk_bus': 'virtio', 'boot_index': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'encryption_options': None, 'size': 0, 'encrypted': False, 'image_id': '6efab05d-c7cf-4770-a5c3-c806a2739063'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec  6 02:24:35 np0005548731 nova_compute[232433]: 2025-12-06 07:24:35.281 232437 WARNING nova.virt.libvirt.driver [None req-fcc9ff1d-188f-484c-949d-facadeb08252 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  6 02:24:35 np0005548731 nova_compute[232433]: 2025-12-06 07:24:35.287 232437 DEBUG nova.virt.libvirt.host [None req-fcc9ff1d-188f-484c-949d-facadeb08252 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec  6 02:24:35 np0005548731 nova_compute[232433]: 2025-12-06 07:24:35.287 232437 DEBUG nova.virt.libvirt.host [None req-fcc9ff1d-188f-484c-949d-facadeb08252 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec  6 02:24:35 np0005548731 nova_compute[232433]: 2025-12-06 07:24:35.292 232437 DEBUG nova.virt.libvirt.host [None req-fcc9ff1d-188f-484c-949d-facadeb08252 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec  6 02:24:35 np0005548731 nova_compute[232433]: 2025-12-06 07:24:35.293 232437 DEBUG nova.virt.libvirt.host [None req-fcc9ff1d-188f-484c-949d-facadeb08252 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec  6 02:24:35 np0005548731 nova_compute[232433]: 2025-12-06 07:24:35.294 232437 DEBUG nova.virt.libvirt.driver [None req-fcc9ff1d-188f-484c-949d-facadeb08252 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec  6 02:24:35 np0005548731 nova_compute[232433]: 2025-12-06 07:24:35.294 232437 DEBUG nova.virt.hardware [None req-fcc9ff1d-188f-484c-949d-facadeb08252 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-06T06:56:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='25848a18-11d9-4f11-80b5-5d005675c76d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec  6 02:24:35 np0005548731 nova_compute[232433]: 2025-12-06 07:24:35.294 232437 DEBUG nova.virt.hardware [None req-fcc9ff1d-188f-484c-949d-facadeb08252 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec  6 02:24:35 np0005548731 nova_compute[232433]: 2025-12-06 07:24:35.294 232437 DEBUG nova.virt.hardware [None req-fcc9ff1d-188f-484c-949d-facadeb08252 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec  6 02:24:35 np0005548731 nova_compute[232433]: 2025-12-06 07:24:35.295 232437 DEBUG nova.virt.hardware [None req-fcc9ff1d-188f-484c-949d-facadeb08252 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec  6 02:24:35 np0005548731 nova_compute[232433]: 2025-12-06 07:24:35.295 232437 DEBUG nova.virt.hardware [None req-fcc9ff1d-188f-484c-949d-facadeb08252 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec  6 02:24:35 np0005548731 nova_compute[232433]: 2025-12-06 07:24:35.295 232437 DEBUG nova.virt.hardware [None req-fcc9ff1d-188f-484c-949d-facadeb08252 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec  6 02:24:35 np0005548731 nova_compute[232433]: 2025-12-06 07:24:35.295 232437 DEBUG nova.virt.hardware [None req-fcc9ff1d-188f-484c-949d-facadeb08252 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec  6 02:24:35 np0005548731 nova_compute[232433]: 2025-12-06 07:24:35.295 232437 DEBUG nova.virt.hardware [None req-fcc9ff1d-188f-484c-949d-facadeb08252 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec  6 02:24:35 np0005548731 nova_compute[232433]: 2025-12-06 07:24:35.295 232437 DEBUG nova.virt.hardware [None req-fcc9ff1d-188f-484c-949d-facadeb08252 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec  6 02:24:35 np0005548731 nova_compute[232433]: 2025-12-06 07:24:35.296 232437 DEBUG nova.virt.hardware [None req-fcc9ff1d-188f-484c-949d-facadeb08252 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec  6 02:24:35 np0005548731 nova_compute[232433]: 2025-12-06 07:24:35.296 232437 DEBUG nova.virt.hardware [None req-fcc9ff1d-188f-484c-949d-facadeb08252 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec  6 02:24:35 np0005548731 nova_compute[232433]: 2025-12-06 07:24:35.298 232437 DEBUG oslo_concurrency.processutils [None req-fcc9ff1d-188f-484c-949d-facadeb08252 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:24:35 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:24:35 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:24:35 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:24:35.694 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:24:35 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  6 02:24:35 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/495689201' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  6 02:24:36 np0005548731 nova_compute[232433]: 2025-12-06 07:24:36.019 232437 DEBUG oslo_concurrency.processutils [None req-fcc9ff1d-188f-484c-949d-facadeb08252 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.721s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:24:36 np0005548731 nova_compute[232433]: 2025-12-06 07:24:36.047 232437 DEBUG nova.storage.rbd_utils [None req-fcc9ff1d-188f-484c-949d-facadeb08252 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] rbd image 5ea73f51-d224-41ed-892b-a137d9985e73_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:24:36 np0005548731 nova_compute[232433]: 2025-12-06 07:24:36.051 232437 DEBUG oslo_concurrency.processutils [None req-fcc9ff1d-188f-484c-949d-facadeb08252 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:24:36 np0005548731 nova_compute[232433]: 2025-12-06 07:24:36.129 232437 DEBUG nova.network.neutron [None req-fa6bbbda-7b9c-4dae-babc-e050827eef98 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] [instance: a6984fe7-72e6-4e80-b77b-925152e01f3f] Updating instance_info_cache with network_info: [{"id": "02492be5-e171-4dea-a6fe-20c8b6f3a45e", "address": "fa:16:3e:0a:25:68", "network": {"id": "85cfbf28-7016-4776-8fc2-2eb08a6b8347", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-855821425-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c6d2f50c0db54315bfa96a24511dda90", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap02492be5-e1", "ovs_interfaceid": "02492be5-e171-4dea-a6fe-20c8b6f3a45e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 02:24:36 np0005548731 nova_compute[232433]: 2025-12-06 07:24:36.151 232437 DEBUG oslo_concurrency.lockutils [None req-fa6bbbda-7b9c-4dae-babc-e050827eef98 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Releasing lock "refresh_cache-a6984fe7-72e6-4e80-b77b-925152e01f3f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 02:24:36 np0005548731 nova_compute[232433]: 2025-12-06 07:24:36.152 232437 DEBUG nova.compute.manager [None req-fa6bbbda-7b9c-4dae-babc-e050827eef98 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] [instance: a6984fe7-72e6-4e80-b77b-925152e01f3f] Instance network_info: |[{"id": "02492be5-e171-4dea-a6fe-20c8b6f3a45e", "address": "fa:16:3e:0a:25:68", "network": {"id": "85cfbf28-7016-4776-8fc2-2eb08a6b8347", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-855821425-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c6d2f50c0db54315bfa96a24511dda90", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap02492be5-e1", "ovs_interfaceid": "02492be5-e171-4dea-a6fe-20c8b6f3a45e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Dec  6 02:24:36 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  6 02:24:36 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  6 02:24:36 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2049410736' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  6 02:24:36 np0005548731 nova_compute[232433]: 2025-12-06 07:24:36.499 232437 DEBUG oslo_concurrency.processutils [None req-fcc9ff1d-188f-484c-949d-facadeb08252 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.448s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:24:36 np0005548731 nova_compute[232433]: 2025-12-06 07:24:36.500 232437 DEBUG nova.virt.libvirt.vif [None req-fcc9ff1d-188f-484c-949d-facadeb08252 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T07:24:27Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerRescueTestJSON-server-1971190968',display_name='tempest-ServerRescueTestJSON-server-1971190968',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serverrescuetestjson-server-1971190968',id=103,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c5036a5cbf7d44fd809d00942afd76e6',ramdisk_id='',reservation_id='r-uvl7bc16',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerRescueTestJSON-1682980670',owner_user_name='tempest-ServerRescueTestJSON-1
682980670-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:24:29Z,user_data=None,user_id='bf140baee82f408e8fafa81062146641',uuid=5ea73f51-d224-41ed-892b-a137d9985e73,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e61e57ee-1e76-4d23-ab33-ab23cf69ba70", "address": "fa:16:3e:d9:ba:09", "network": {"id": "3935518d-ad39-4c36-90ae-f8fe979eafc5", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-32965635-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "c5036a5cbf7d44fd809d00942afd76e6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape61e57ee-1e", "ovs_interfaceid": "e61e57ee-1e76-4d23-ab33-ab23cf69ba70", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec  6 02:24:36 np0005548731 nova_compute[232433]: 2025-12-06 07:24:36.500 232437 DEBUG nova.network.os_vif_util [None req-fcc9ff1d-188f-484c-949d-facadeb08252 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] Converting VIF {"id": "e61e57ee-1e76-4d23-ab33-ab23cf69ba70", "address": "fa:16:3e:d9:ba:09", "network": {"id": "3935518d-ad39-4c36-90ae-f8fe979eafc5", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-32965635-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "c5036a5cbf7d44fd809d00942afd76e6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape61e57ee-1e", "ovs_interfaceid": "e61e57ee-1e76-4d23-ab33-ab23cf69ba70", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 02:24:36 np0005548731 nova_compute[232433]: 2025-12-06 07:24:36.501 232437 DEBUG nova.network.os_vif_util [None req-fcc9ff1d-188f-484c-949d-facadeb08252 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d9:ba:09,bridge_name='br-int',has_traffic_filtering=True,id=e61e57ee-1e76-4d23-ab33-ab23cf69ba70,network=Network(3935518d-ad39-4c36-90ae-f8fe979eafc5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape61e57ee-1e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 02:24:36 np0005548731 nova_compute[232433]: 2025-12-06 07:24:36.502 232437 DEBUG nova.objects.instance [None req-fcc9ff1d-188f-484c-949d-facadeb08252 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] Lazy-loading 'pci_devices' on Instance uuid 5ea73f51-d224-41ed-892b-a137d9985e73 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:24:36 np0005548731 nova_compute[232433]: 2025-12-06 07:24:36.520 232437 DEBUG nova.virt.libvirt.driver [None req-fcc9ff1d-188f-484c-949d-facadeb08252 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] [instance: 5ea73f51-d224-41ed-892b-a137d9985e73] End _get_guest_xml xml=<domain type="kvm">
Dec  6 02:24:36 np0005548731 nova_compute[232433]:  <uuid>5ea73f51-d224-41ed-892b-a137d9985e73</uuid>
Dec  6 02:24:36 np0005548731 nova_compute[232433]:  <name>instance-00000067</name>
Dec  6 02:24:36 np0005548731 nova_compute[232433]:  <memory>131072</memory>
Dec  6 02:24:36 np0005548731 nova_compute[232433]:  <vcpu>1</vcpu>
Dec  6 02:24:36 np0005548731 nova_compute[232433]:  <metadata>
Dec  6 02:24:36 np0005548731 nova_compute[232433]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec  6 02:24:36 np0005548731 nova_compute[232433]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec  6 02:24:36 np0005548731 nova_compute[232433]:      <nova:name>tempest-ServerRescueTestJSON-server-1971190968</nova:name>
Dec  6 02:24:36 np0005548731 nova_compute[232433]:      <nova:creationTime>2025-12-06 07:24:35</nova:creationTime>
Dec  6 02:24:36 np0005548731 nova_compute[232433]:      <nova:flavor name="m1.nano">
Dec  6 02:24:36 np0005548731 nova_compute[232433]:        <nova:memory>128</nova:memory>
Dec  6 02:24:36 np0005548731 nova_compute[232433]:        <nova:disk>1</nova:disk>
Dec  6 02:24:36 np0005548731 nova_compute[232433]:        <nova:swap>0</nova:swap>
Dec  6 02:24:36 np0005548731 nova_compute[232433]:        <nova:ephemeral>0</nova:ephemeral>
Dec  6 02:24:36 np0005548731 nova_compute[232433]:        <nova:vcpus>1</nova:vcpus>
Dec  6 02:24:36 np0005548731 nova_compute[232433]:      </nova:flavor>
Dec  6 02:24:36 np0005548731 nova_compute[232433]:      <nova:owner>
Dec  6 02:24:36 np0005548731 nova_compute[232433]:        <nova:user uuid="bf140baee82f408e8fafa81062146641">tempest-ServerRescueTestJSON-1682980670-project-member</nova:user>
Dec  6 02:24:36 np0005548731 nova_compute[232433]:        <nova:project uuid="c5036a5cbf7d44fd809d00942afd76e6">tempest-ServerRescueTestJSON-1682980670</nova:project>
Dec  6 02:24:36 np0005548731 nova_compute[232433]:      </nova:owner>
Dec  6 02:24:36 np0005548731 nova_compute[232433]:      <nova:root type="image" uuid="6efab05d-c7cf-4770-a5c3-c806a2739063"/>
Dec  6 02:24:36 np0005548731 nova_compute[232433]:      <nova:ports>
Dec  6 02:24:36 np0005548731 nova_compute[232433]:        <nova:port uuid="e61e57ee-1e76-4d23-ab33-ab23cf69ba70">
Dec  6 02:24:36 np0005548731 nova_compute[232433]:          <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Dec  6 02:24:36 np0005548731 nova_compute[232433]:        </nova:port>
Dec  6 02:24:36 np0005548731 nova_compute[232433]:      </nova:ports>
Dec  6 02:24:36 np0005548731 nova_compute[232433]:    </nova:instance>
Dec  6 02:24:36 np0005548731 nova_compute[232433]:  </metadata>
Dec  6 02:24:36 np0005548731 nova_compute[232433]:  <sysinfo type="smbios">
Dec  6 02:24:36 np0005548731 nova_compute[232433]:    <system>
Dec  6 02:24:36 np0005548731 nova_compute[232433]:      <entry name="manufacturer">RDO</entry>
Dec  6 02:24:36 np0005548731 nova_compute[232433]:      <entry name="product">OpenStack Compute</entry>
Dec  6 02:24:36 np0005548731 nova_compute[232433]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec  6 02:24:36 np0005548731 nova_compute[232433]:      <entry name="serial">5ea73f51-d224-41ed-892b-a137d9985e73</entry>
Dec  6 02:24:36 np0005548731 nova_compute[232433]:      <entry name="uuid">5ea73f51-d224-41ed-892b-a137d9985e73</entry>
Dec  6 02:24:36 np0005548731 nova_compute[232433]:      <entry name="family">Virtual Machine</entry>
Dec  6 02:24:36 np0005548731 nova_compute[232433]:    </system>
Dec  6 02:24:36 np0005548731 nova_compute[232433]:  </sysinfo>
Dec  6 02:24:36 np0005548731 nova_compute[232433]:  <os>
Dec  6 02:24:36 np0005548731 nova_compute[232433]:    <type arch="x86_64" machine="q35">hvm</type>
Dec  6 02:24:36 np0005548731 nova_compute[232433]:    <boot dev="hd"/>
Dec  6 02:24:36 np0005548731 nova_compute[232433]:    <smbios mode="sysinfo"/>
Dec  6 02:24:36 np0005548731 nova_compute[232433]:  </os>
Dec  6 02:24:36 np0005548731 nova_compute[232433]:  <features>
Dec  6 02:24:36 np0005548731 nova_compute[232433]:    <acpi/>
Dec  6 02:24:36 np0005548731 nova_compute[232433]:    <apic/>
Dec  6 02:24:36 np0005548731 nova_compute[232433]:    <vmcoreinfo/>
Dec  6 02:24:36 np0005548731 nova_compute[232433]:  </features>
Dec  6 02:24:36 np0005548731 nova_compute[232433]:  <clock offset="utc">
Dec  6 02:24:36 np0005548731 nova_compute[232433]:    <timer name="pit" tickpolicy="delay"/>
Dec  6 02:24:36 np0005548731 nova_compute[232433]:    <timer name="rtc" tickpolicy="catchup"/>
Dec  6 02:24:36 np0005548731 nova_compute[232433]:    <timer name="hpet" present="no"/>
Dec  6 02:24:36 np0005548731 nova_compute[232433]:  </clock>
Dec  6 02:24:36 np0005548731 nova_compute[232433]:  <cpu mode="custom" match="exact">
Dec  6 02:24:36 np0005548731 nova_compute[232433]:    <model>Nehalem</model>
Dec  6 02:24:36 np0005548731 nova_compute[232433]:    <topology sockets="1" cores="1" threads="1"/>
Dec  6 02:24:36 np0005548731 nova_compute[232433]:  </cpu>
Dec  6 02:24:36 np0005548731 nova_compute[232433]:  <devices>
Dec  6 02:24:36 np0005548731 nova_compute[232433]:    <disk type="network" device="disk">
Dec  6 02:24:36 np0005548731 nova_compute[232433]:      <driver type="raw" cache="none"/>
Dec  6 02:24:36 np0005548731 nova_compute[232433]:      <source protocol="rbd" name="vms/5ea73f51-d224-41ed-892b-a137d9985e73_disk">
Dec  6 02:24:36 np0005548731 nova_compute[232433]:        <host name="192.168.122.100" port="6789"/>
Dec  6 02:24:36 np0005548731 nova_compute[232433]:        <host name="192.168.122.102" port="6789"/>
Dec  6 02:24:36 np0005548731 nova_compute[232433]:        <host name="192.168.122.101" port="6789"/>
Dec  6 02:24:36 np0005548731 nova_compute[232433]:      </source>
Dec  6 02:24:36 np0005548731 nova_compute[232433]:      <auth username="openstack">
Dec  6 02:24:36 np0005548731 nova_compute[232433]:        <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec  6 02:24:36 np0005548731 nova_compute[232433]:      </auth>
Dec  6 02:24:36 np0005548731 nova_compute[232433]:      <target dev="vda" bus="virtio"/>
Dec  6 02:24:36 np0005548731 nova_compute[232433]:    </disk>
Dec  6 02:24:36 np0005548731 nova_compute[232433]:    <disk type="network" device="cdrom">
Dec  6 02:24:36 np0005548731 nova_compute[232433]:      <driver type="raw" cache="none"/>
Dec  6 02:24:36 np0005548731 nova_compute[232433]:      <source protocol="rbd" name="vms/5ea73f51-d224-41ed-892b-a137d9985e73_disk.config">
Dec  6 02:24:36 np0005548731 nova_compute[232433]:        <host name="192.168.122.100" port="6789"/>
Dec  6 02:24:36 np0005548731 nova_compute[232433]:        <host name="192.168.122.102" port="6789"/>
Dec  6 02:24:36 np0005548731 nova_compute[232433]:        <host name="192.168.122.101" port="6789"/>
Dec  6 02:24:36 np0005548731 nova_compute[232433]:      </source>
Dec  6 02:24:36 np0005548731 nova_compute[232433]:      <auth username="openstack">
Dec  6 02:24:36 np0005548731 nova_compute[232433]:        <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec  6 02:24:36 np0005548731 nova_compute[232433]:      </auth>
Dec  6 02:24:36 np0005548731 nova_compute[232433]:      <target dev="sda" bus="sata"/>
Dec  6 02:24:36 np0005548731 nova_compute[232433]:    </disk>
Dec  6 02:24:36 np0005548731 nova_compute[232433]:    <interface type="ethernet">
Dec  6 02:24:36 np0005548731 nova_compute[232433]:      <mac address="fa:16:3e:d9:ba:09"/>
Dec  6 02:24:36 np0005548731 nova_compute[232433]:      <model type="virtio"/>
Dec  6 02:24:36 np0005548731 nova_compute[232433]:      <driver name="vhost" rx_queue_size="512"/>
Dec  6 02:24:36 np0005548731 nova_compute[232433]:      <mtu size="1442"/>
Dec  6 02:24:36 np0005548731 nova_compute[232433]:      <target dev="tape61e57ee-1e"/>
Dec  6 02:24:36 np0005548731 nova_compute[232433]:    </interface>
Dec  6 02:24:36 np0005548731 nova_compute[232433]:    <serial type="pty">
Dec  6 02:24:36 np0005548731 nova_compute[232433]:      <log file="/var/lib/nova/instances/5ea73f51-d224-41ed-892b-a137d9985e73/console.log" append="off"/>
Dec  6 02:24:36 np0005548731 nova_compute[232433]:    </serial>
Dec  6 02:24:36 np0005548731 nova_compute[232433]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec  6 02:24:36 np0005548731 nova_compute[232433]:    <video>
Dec  6 02:24:36 np0005548731 nova_compute[232433]:      <model type="virtio"/>
Dec  6 02:24:36 np0005548731 nova_compute[232433]:    </video>
Dec  6 02:24:36 np0005548731 nova_compute[232433]:    <input type="tablet" bus="usb"/>
Dec  6 02:24:36 np0005548731 nova_compute[232433]:    <rng model="virtio">
Dec  6 02:24:36 np0005548731 nova_compute[232433]:      <backend model="random">/dev/urandom</backend>
Dec  6 02:24:36 np0005548731 nova_compute[232433]:    </rng>
Dec  6 02:24:36 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root"/>
Dec  6 02:24:36 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:24:36 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:24:36 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:24:36 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:24:36 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:24:36 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:24:36 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:24:36 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:24:36 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:24:36 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:24:36 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:24:36 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:24:36 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:24:36 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:24:36 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:24:36 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:24:36 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:24:36 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:24:36 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:24:36 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:24:36 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:24:36 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:24:36 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:24:36 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:24:36 np0005548731 nova_compute[232433]:    <controller type="usb" index="0"/>
Dec  6 02:24:36 np0005548731 nova_compute[232433]:    <memballoon model="virtio">
Dec  6 02:24:36 np0005548731 nova_compute[232433]:      <stats period="10"/>
Dec  6 02:24:36 np0005548731 nova_compute[232433]:    </memballoon>
Dec  6 02:24:36 np0005548731 nova_compute[232433]:  </devices>
Dec  6 02:24:36 np0005548731 nova_compute[232433]: </domain>
Dec  6 02:24:36 np0005548731 nova_compute[232433]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec  6 02:24:36 np0005548731 nova_compute[232433]: 2025-12-06 07:24:36.522 232437 DEBUG nova.compute.manager [None req-fcc9ff1d-188f-484c-949d-facadeb08252 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] [instance: 5ea73f51-d224-41ed-892b-a137d9985e73] Preparing to wait for external event network-vif-plugged-e61e57ee-1e76-4d23-ab33-ab23cf69ba70 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec  6 02:24:36 np0005548731 nova_compute[232433]: 2025-12-06 07:24:36.522 232437 DEBUG oslo_concurrency.lockutils [None req-fcc9ff1d-188f-484c-949d-facadeb08252 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] Acquiring lock "5ea73f51-d224-41ed-892b-a137d9985e73-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:24:36 np0005548731 nova_compute[232433]: 2025-12-06 07:24:36.522 232437 DEBUG oslo_concurrency.lockutils [None req-fcc9ff1d-188f-484c-949d-facadeb08252 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] Lock "5ea73f51-d224-41ed-892b-a137d9985e73-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:24:36 np0005548731 nova_compute[232433]: 2025-12-06 07:24:36.523 232437 DEBUG oslo_concurrency.lockutils [None req-fcc9ff1d-188f-484c-949d-facadeb08252 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] Lock "5ea73f51-d224-41ed-892b-a137d9985e73-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:24:36 np0005548731 nova_compute[232433]: 2025-12-06 07:24:36.524 232437 DEBUG nova.virt.libvirt.vif [None req-fcc9ff1d-188f-484c-949d-facadeb08252 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T07:24:27Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerRescueTestJSON-server-1971190968',display_name='tempest-ServerRescueTestJSON-server-1971190968',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serverrescuetestjson-server-1971190968',id=103,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c5036a5cbf7d44fd809d00942afd76e6',ramdisk_id='',reservation_id='r-uvl7bc16',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerRescueTestJSON-1682980670',owner_user_name='tempest-ServerRescue
TestJSON-1682980670-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:24:29Z,user_data=None,user_id='bf140baee82f408e8fafa81062146641',uuid=5ea73f51-d224-41ed-892b-a137d9985e73,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e61e57ee-1e76-4d23-ab33-ab23cf69ba70", "address": "fa:16:3e:d9:ba:09", "network": {"id": "3935518d-ad39-4c36-90ae-f8fe979eafc5", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-32965635-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "c5036a5cbf7d44fd809d00942afd76e6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape61e57ee-1e", "ovs_interfaceid": "e61e57ee-1e76-4d23-ab33-ab23cf69ba70", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec  6 02:24:36 np0005548731 nova_compute[232433]: 2025-12-06 07:24:36.524 232437 DEBUG nova.network.os_vif_util [None req-fcc9ff1d-188f-484c-949d-facadeb08252 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] Converting VIF {"id": "e61e57ee-1e76-4d23-ab33-ab23cf69ba70", "address": "fa:16:3e:d9:ba:09", "network": {"id": "3935518d-ad39-4c36-90ae-f8fe979eafc5", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-32965635-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "c5036a5cbf7d44fd809d00942afd76e6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape61e57ee-1e", "ovs_interfaceid": "e61e57ee-1e76-4d23-ab33-ab23cf69ba70", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 02:24:36 np0005548731 nova_compute[232433]: 2025-12-06 07:24:36.525 232437 DEBUG nova.network.os_vif_util [None req-fcc9ff1d-188f-484c-949d-facadeb08252 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d9:ba:09,bridge_name='br-int',has_traffic_filtering=True,id=e61e57ee-1e76-4d23-ab33-ab23cf69ba70,network=Network(3935518d-ad39-4c36-90ae-f8fe979eafc5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape61e57ee-1e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 02:24:36 np0005548731 nova_compute[232433]: 2025-12-06 07:24:36.525 232437 DEBUG os_vif [None req-fcc9ff1d-188f-484c-949d-facadeb08252 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d9:ba:09,bridge_name='br-int',has_traffic_filtering=True,id=e61e57ee-1e76-4d23-ab33-ab23cf69ba70,network=Network(3935518d-ad39-4c36-90ae-f8fe979eafc5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape61e57ee-1e') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec  6 02:24:36 np0005548731 nova_compute[232433]: 2025-12-06 07:24:36.526 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:24:36 np0005548731 nova_compute[232433]: 2025-12-06 07:24:36.526 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:24:36 np0005548731 nova_compute[232433]: 2025-12-06 07:24:36.527 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  6 02:24:36 np0005548731 nova_compute[232433]: 2025-12-06 07:24:36.529 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:24:36 np0005548731 nova_compute[232433]: 2025-12-06 07:24:36.530 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape61e57ee-1e, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:24:36 np0005548731 nova_compute[232433]: 2025-12-06 07:24:36.531 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tape61e57ee-1e, col_values=(('external_ids', {'iface-id': 'e61e57ee-1e76-4d23-ab33-ab23cf69ba70', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:d9:ba:09', 'vm-uuid': '5ea73f51-d224-41ed-892b-a137d9985e73'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:24:36 np0005548731 nova_compute[232433]: 2025-12-06 07:24:36.532 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:24:36 np0005548731 NetworkManager[49182]: <info>  [1765005876.5332] manager: (tape61e57ee-1e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/194)
Dec  6 02:24:36 np0005548731 nova_compute[232433]: 2025-12-06 07:24:36.534 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  6 02:24:36 np0005548731 nova_compute[232433]: 2025-12-06 07:24:36.538 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:24:36 np0005548731 nova_compute[232433]: 2025-12-06 07:24:36.539 232437 INFO os_vif [None req-fcc9ff1d-188f-484c-949d-facadeb08252 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d9:ba:09,bridge_name='br-int',has_traffic_filtering=True,id=e61e57ee-1e76-4d23-ab33-ab23cf69ba70,network=Network(3935518d-ad39-4c36-90ae-f8fe979eafc5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape61e57ee-1e')#033[00m
Dec  6 02:24:36 np0005548731 nova_compute[232433]: 2025-12-06 07:24:36.592 232437 DEBUG nova.virt.libvirt.driver [None req-fcc9ff1d-188f-484c-949d-facadeb08252 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  6 02:24:36 np0005548731 nova_compute[232433]: 2025-12-06 07:24:36.592 232437 DEBUG nova.virt.libvirt.driver [None req-fcc9ff1d-188f-484c-949d-facadeb08252 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  6 02:24:36 np0005548731 nova_compute[232433]: 2025-12-06 07:24:36.593 232437 DEBUG nova.virt.libvirt.driver [None req-fcc9ff1d-188f-484c-949d-facadeb08252 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] No VIF found with MAC fa:16:3e:d9:ba:09, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec  6 02:24:36 np0005548731 nova_compute[232433]: 2025-12-06 07:24:36.593 232437 INFO nova.virt.libvirt.driver [None req-fcc9ff1d-188f-484c-949d-facadeb08252 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] [instance: 5ea73f51-d224-41ed-892b-a137d9985e73] Using config drive#033[00m
Dec  6 02:24:36 np0005548731 nova_compute[232433]: 2025-12-06 07:24:36.615 232437 DEBUG nova.storage.rbd_utils [None req-fcc9ff1d-188f-484c-949d-facadeb08252 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] rbd image 5ea73f51-d224-41ed-892b-a137d9985e73_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:24:37 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:24:37 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:24:37 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:24:37.042 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:24:37 np0005548731 nova_compute[232433]: 2025-12-06 07:24:37.224 232437 INFO nova.virt.libvirt.driver [None req-fcc9ff1d-188f-484c-949d-facadeb08252 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] [instance: 5ea73f51-d224-41ed-892b-a137d9985e73] Creating config drive at /var/lib/nova/instances/5ea73f51-d224-41ed-892b-a137d9985e73/disk.config#033[00m
Dec  6 02:24:37 np0005548731 nova_compute[232433]: 2025-12-06 07:24:37.229 232437 DEBUG oslo_concurrency.processutils [None req-fcc9ff1d-188f-484c-949d-facadeb08252 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/5ea73f51-d224-41ed-892b-a137d9985e73/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpz4_jcd6x execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:24:37 np0005548731 nova_compute[232433]: 2025-12-06 07:24:37.335 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:24:37 np0005548731 nova_compute[232433]: 2025-12-06 07:24:37.408 232437 DEBUG oslo_concurrency.processutils [None req-fcc9ff1d-188f-484c-949d-facadeb08252 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/5ea73f51-d224-41ed-892b-a137d9985e73/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpz4_jcd6x" returned: 0 in 0.179s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:24:37 np0005548731 nova_compute[232433]: 2025-12-06 07:24:37.434 232437 DEBUG nova.storage.rbd_utils [None req-fcc9ff1d-188f-484c-949d-facadeb08252 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] rbd image 5ea73f51-d224-41ed-892b-a137d9985e73_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:24:37 np0005548731 nova_compute[232433]: 2025-12-06 07:24:37.438 232437 DEBUG oslo_concurrency.processutils [None req-fcc9ff1d-188f-484c-949d-facadeb08252 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/5ea73f51-d224-41ed-892b-a137d9985e73/disk.config 5ea73f51-d224-41ed-892b-a137d9985e73_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:24:37 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 02:24:37 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  6 02:24:37 np0005548731 nova_compute[232433]: 2025-12-06 07:24:37.471 232437 DEBUG nova.objects.instance [None req-fa6bbbda-7b9c-4dae-babc-e050827eef98 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Lazy-loading 'migration_context' on Instance uuid a6984fe7-72e6-4e80-b77b-925152e01f3f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:24:37 np0005548731 nova_compute[232433]: 2025-12-06 07:24:37.492 232437 DEBUG nova.virt.libvirt.driver [None req-fa6bbbda-7b9c-4dae-babc-e050827eef98 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] [instance: a6984fe7-72e6-4e80-b77b-925152e01f3f] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Dec  6 02:24:37 np0005548731 nova_compute[232433]: 2025-12-06 07:24:37.493 232437 DEBUG nova.virt.libvirt.driver [None req-fa6bbbda-7b9c-4dae-babc-e050827eef98 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] [instance: a6984fe7-72e6-4e80-b77b-925152e01f3f] Ensure instance console log exists: /var/lib/nova/instances/a6984fe7-72e6-4e80-b77b-925152e01f3f/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec  6 02:24:37 np0005548731 nova_compute[232433]: 2025-12-06 07:24:37.493 232437 DEBUG oslo_concurrency.lockutils [None req-fa6bbbda-7b9c-4dae-babc-e050827eef98 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:24:37 np0005548731 nova_compute[232433]: 2025-12-06 07:24:37.493 232437 DEBUG oslo_concurrency.lockutils [None req-fa6bbbda-7b9c-4dae-babc-e050827eef98 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:24:37 np0005548731 nova_compute[232433]: 2025-12-06 07:24:37.493 232437 DEBUG oslo_concurrency.lockutils [None req-fa6bbbda-7b9c-4dae-babc-e050827eef98 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:24:37 np0005548731 nova_compute[232433]: 2025-12-06 07:24:37.495 232437 DEBUG nova.virt.libvirt.driver [None req-fa6bbbda-7b9c-4dae-babc-e050827eef98 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] [instance: a6984fe7-72e6-4e80-b77b-925152e01f3f] Start _get_guest_xml network_info=[{"id": "02492be5-e171-4dea-a6fe-20c8b6f3a45e", "address": "fa:16:3e:0a:25:68", "network": {"id": "85cfbf28-7016-4776-8fc2-2eb08a6b8347", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-855821425-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c6d2f50c0db54315bfa96a24511dda90", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap02492be5-e1", "ovs_interfaceid": "02492be5-e171-4dea-a6fe-20c8b6f3a45e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'device_type': 'disk', 'encryption_secret_uuid': None, 'disk_bus': 'virtio', 'boot_index': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'encryption_options': None, 'size': 0, 'encrypted': False, 'image_id': '6efab05d-c7cf-4770-a5c3-c806a2739063'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec  6 02:24:37 np0005548731 nova_compute[232433]: 2025-12-06 07:24:37.497 232437 DEBUG nova.network.neutron [req-72a3f37c-08a2-40bf-9700-902660df0b89 req-214e780c-c7aa-43ea-8bdb-8d0a64d2f27b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 5ea73f51-d224-41ed-892b-a137d9985e73] Updated VIF entry in instance network info cache for port e61e57ee-1e76-4d23-ab33-ab23cf69ba70. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  6 02:24:37 np0005548731 nova_compute[232433]: 2025-12-06 07:24:37.497 232437 DEBUG nova.network.neutron [req-72a3f37c-08a2-40bf-9700-902660df0b89 req-214e780c-c7aa-43ea-8bdb-8d0a64d2f27b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 5ea73f51-d224-41ed-892b-a137d9985e73] Updating instance_info_cache with network_info: [{"id": "e61e57ee-1e76-4d23-ab33-ab23cf69ba70", "address": "fa:16:3e:d9:ba:09", "network": {"id": "3935518d-ad39-4c36-90ae-f8fe979eafc5", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-32965635-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "c5036a5cbf7d44fd809d00942afd76e6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape61e57ee-1e", "ovs_interfaceid": "e61e57ee-1e76-4d23-ab33-ab23cf69ba70", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 02:24:37 np0005548731 nova_compute[232433]: 2025-12-06 07:24:37.502 232437 WARNING nova.virt.libvirt.driver [None req-fa6bbbda-7b9c-4dae-babc-e050827eef98 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  6 02:24:37 np0005548731 nova_compute[232433]: 2025-12-06 07:24:37.507 232437 DEBUG nova.virt.libvirt.host [None req-fa6bbbda-7b9c-4dae-babc-e050827eef98 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec  6 02:24:37 np0005548731 nova_compute[232433]: 2025-12-06 07:24:37.508 232437 DEBUG nova.virt.libvirt.host [None req-fa6bbbda-7b9c-4dae-babc-e050827eef98 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec  6 02:24:37 np0005548731 nova_compute[232433]: 2025-12-06 07:24:37.511 232437 DEBUG nova.virt.libvirt.host [None req-fa6bbbda-7b9c-4dae-babc-e050827eef98 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec  6 02:24:37 np0005548731 nova_compute[232433]: 2025-12-06 07:24:37.512 232437 DEBUG nova.virt.libvirt.host [None req-fa6bbbda-7b9c-4dae-babc-e050827eef98 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec  6 02:24:37 np0005548731 nova_compute[232433]: 2025-12-06 07:24:37.513 232437 DEBUG nova.virt.libvirt.driver [None req-fa6bbbda-7b9c-4dae-babc-e050827eef98 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec  6 02:24:37 np0005548731 nova_compute[232433]: 2025-12-06 07:24:37.513 232437 DEBUG nova.virt.hardware [None req-fa6bbbda-7b9c-4dae-babc-e050827eef98 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-06T06:56:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='25848a18-11d9-4f11-80b5-5d005675c76d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec  6 02:24:37 np0005548731 nova_compute[232433]: 2025-12-06 07:24:37.514 232437 DEBUG nova.virt.hardware [None req-fa6bbbda-7b9c-4dae-babc-e050827eef98 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec  6 02:24:37 np0005548731 nova_compute[232433]: 2025-12-06 07:24:37.514 232437 DEBUG nova.virt.hardware [None req-fa6bbbda-7b9c-4dae-babc-e050827eef98 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec  6 02:24:37 np0005548731 nova_compute[232433]: 2025-12-06 07:24:37.514 232437 DEBUG nova.virt.hardware [None req-fa6bbbda-7b9c-4dae-babc-e050827eef98 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec  6 02:24:37 np0005548731 nova_compute[232433]: 2025-12-06 07:24:37.514 232437 DEBUG nova.virt.hardware [None req-fa6bbbda-7b9c-4dae-babc-e050827eef98 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec  6 02:24:37 np0005548731 nova_compute[232433]: 2025-12-06 07:24:37.514 232437 DEBUG nova.virt.hardware [None req-fa6bbbda-7b9c-4dae-babc-e050827eef98 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec  6 02:24:37 np0005548731 nova_compute[232433]: 2025-12-06 07:24:37.514 232437 DEBUG nova.virt.hardware [None req-fa6bbbda-7b9c-4dae-babc-e050827eef98 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec  6 02:24:37 np0005548731 nova_compute[232433]: 2025-12-06 07:24:37.515 232437 DEBUG nova.virt.hardware [None req-fa6bbbda-7b9c-4dae-babc-e050827eef98 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec  6 02:24:37 np0005548731 nova_compute[232433]: 2025-12-06 07:24:37.515 232437 DEBUG nova.virt.hardware [None req-fa6bbbda-7b9c-4dae-babc-e050827eef98 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec  6 02:24:37 np0005548731 nova_compute[232433]: 2025-12-06 07:24:37.515 232437 DEBUG nova.virt.hardware [None req-fa6bbbda-7b9c-4dae-babc-e050827eef98 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec  6 02:24:37 np0005548731 nova_compute[232433]: 2025-12-06 07:24:37.515 232437 DEBUG nova.virt.hardware [None req-fa6bbbda-7b9c-4dae-babc-e050827eef98 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec  6 02:24:37 np0005548731 nova_compute[232433]: 2025-12-06 07:24:37.518 232437 DEBUG oslo_concurrency.processutils [None req-fa6bbbda-7b9c-4dae-babc-e050827eef98 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:24:37 np0005548731 nova_compute[232433]: 2025-12-06 07:24:37.545 232437 DEBUG oslo_concurrency.lockutils [req-72a3f37c-08a2-40bf-9700-902660df0b89 req-214e780c-c7aa-43ea-8bdb-8d0a64d2f27b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-5ea73f51-d224-41ed-892b-a137d9985e73" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 02:24:37 np0005548731 nova_compute[232433]: 2025-12-06 07:24:37.598 232437 DEBUG oslo_concurrency.processutils [None req-fcc9ff1d-188f-484c-949d-facadeb08252 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/5ea73f51-d224-41ed-892b-a137d9985e73/disk.config 5ea73f51-d224-41ed-892b-a137d9985e73_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.160s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:24:37 np0005548731 nova_compute[232433]: 2025-12-06 07:24:37.599 232437 INFO nova.virt.libvirt.driver [None req-fcc9ff1d-188f-484c-949d-facadeb08252 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] [instance: 5ea73f51-d224-41ed-892b-a137d9985e73] Deleting local config drive /var/lib/nova/instances/5ea73f51-d224-41ed-892b-a137d9985e73/disk.config because it was imported into RBD.#033[00m
Dec  6 02:24:37 np0005548731 kernel: tape61e57ee-1e: entered promiscuous mode
Dec  6 02:24:37 np0005548731 NetworkManager[49182]: <info>  [1765005877.6631] manager: (tape61e57ee-1e): new Tun device (/org/freedesktop/NetworkManager/Devices/195)
Dec  6 02:24:37 np0005548731 ovn_controller[133927]: 2025-12-06T07:24:37Z|00381|binding|INFO|Claiming lport e61e57ee-1e76-4d23-ab33-ab23cf69ba70 for this chassis.
Dec  6 02:24:37 np0005548731 ovn_controller[133927]: 2025-12-06T07:24:37Z|00382|binding|INFO|e61e57ee-1e76-4d23-ab33-ab23cf69ba70: Claiming fa:16:3e:d9:ba:09 10.100.0.10
Dec  6 02:24:37 np0005548731 nova_compute[232433]: 2025-12-06 07:24:37.668 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:24:37 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:24:37.674 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d9:ba:09 10.100.0.10'], port_security=['fa:16:3e:d9:ba:09 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '5ea73f51-d224-41ed-892b-a137d9985e73', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3935518d-ad39-4c36-90ae-f8fe979eafc5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c5036a5cbf7d44fd809d00942afd76e6', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'acd03568-07d0-4090-ab6d-0f9189f7ac4a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=cdc71006-3db8-48ee-b5c0-325cb1eaa593, chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], logical_port=e61e57ee-1e76-4d23-ab33-ab23cf69ba70) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 02:24:37 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:24:37.675 143965 INFO neutron.agent.ovn.metadata.agent [-] Port e61e57ee-1e76-4d23-ab33-ab23cf69ba70 in datapath 3935518d-ad39-4c36-90ae-f8fe979eafc5 bound to our chassis#033[00m
Dec  6 02:24:37 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:24:37.676 143965 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 3935518d-ad39-4c36-90ae-f8fe979eafc5 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m
Dec  6 02:24:37 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:24:37.676 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[b72c1b83-c24b-40d5-b41b-cb7e6cbe9c79]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:24:37 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:24:37 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:24:37 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:24:37.699 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:24:37 np0005548731 systemd-machined[195355]: New machine qemu-41-instance-00000067.
Dec  6 02:24:37 np0005548731 systemd[1]: Started Virtual Machine qemu-41-instance-00000067.
Dec  6 02:24:37 np0005548731 nova_compute[232433]: 2025-12-06 07:24:37.739 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:24:37 np0005548731 systemd-udevd[273188]: Network interface NamePolicy= disabled on kernel command line.
Dec  6 02:24:37 np0005548731 ovn_controller[133927]: 2025-12-06T07:24:37Z|00383|binding|INFO|Setting lport e61e57ee-1e76-4d23-ab33-ab23cf69ba70 ovn-installed in OVS
Dec  6 02:24:37 np0005548731 ovn_controller[133927]: 2025-12-06T07:24:37Z|00384|binding|INFO|Setting lport e61e57ee-1e76-4d23-ab33-ab23cf69ba70 up in Southbound
Dec  6 02:24:37 np0005548731 nova_compute[232433]: 2025-12-06 07:24:37.747 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:24:37 np0005548731 NetworkManager[49182]: <info>  [1765005877.7708] device (tape61e57ee-1e): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec  6 02:24:37 np0005548731 NetworkManager[49182]: <info>  [1765005877.7731] device (tape61e57ee-1e): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec  6 02:24:37 np0005548731 podman[273162]: 2025-12-06 07:24:37.790522658 +0000 UTC m=+0.076474205 container health_status ae4fd08cdec31d6ae2a6b91ffff7deb6ca5a8375cb52ee4b685cc6f25796824e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Dec  6 02:24:37 np0005548731 podman[273155]: 2025-12-06 07:24:37.805489943 +0000 UTC m=+0.111863072 container health_status 26c2ce9bdb83c6db5ccad7f5b2a7cb4e9354ab0235ad2f942ea42c8feda2142b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Dec  6 02:24:37 np0005548731 podman[273156]: 2025-12-06 07:24:37.839493614 +0000 UTC m=+0.131859932 container health_status a118c3becf56cfbcae208065771ebf10a65162bb7b96389971293e534823fb78 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  6 02:24:37 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e264 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:24:37 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  6 02:24:37 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/735819537' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  6 02:24:37 np0005548731 nova_compute[232433]: 2025-12-06 07:24:37.954 232437 DEBUG oslo_concurrency.processutils [None req-fa6bbbda-7b9c-4dae-babc-e050827eef98 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.436s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:24:37 np0005548731 nova_compute[232433]: 2025-12-06 07:24:37.977 232437 DEBUG nova.storage.rbd_utils [None req-fa6bbbda-7b9c-4dae-babc-e050827eef98 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] rbd image a6984fe7-72e6-4e80-b77b-925152e01f3f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:24:37 np0005548731 nova_compute[232433]: 2025-12-06 07:24:37.981 232437 DEBUG oslo_concurrency.processutils [None req-fa6bbbda-7b9c-4dae-babc-e050827eef98 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:24:38 np0005548731 nova_compute[232433]: 2025-12-06 07:24:38.294 232437 DEBUG nova.compute.manager [req-4ba8f1b5-1ba7-4331-a953-d608a1da5380 req-060380ce-4b92-4e72-bd4b-e5b97c775fe2 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 5ea73f51-d224-41ed-892b-a137d9985e73] Received event network-vif-plugged-e61e57ee-1e76-4d23-ab33-ab23cf69ba70 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:24:38 np0005548731 nova_compute[232433]: 2025-12-06 07:24:38.294 232437 DEBUG oslo_concurrency.lockutils [req-4ba8f1b5-1ba7-4331-a953-d608a1da5380 req-060380ce-4b92-4e72-bd4b-e5b97c775fe2 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "5ea73f51-d224-41ed-892b-a137d9985e73-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:24:38 np0005548731 nova_compute[232433]: 2025-12-06 07:24:38.295 232437 DEBUG oslo_concurrency.lockutils [req-4ba8f1b5-1ba7-4331-a953-d608a1da5380 req-060380ce-4b92-4e72-bd4b-e5b97c775fe2 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "5ea73f51-d224-41ed-892b-a137d9985e73-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:24:38 np0005548731 nova_compute[232433]: 2025-12-06 07:24:38.295 232437 DEBUG oslo_concurrency.lockutils [req-4ba8f1b5-1ba7-4331-a953-d608a1da5380 req-060380ce-4b92-4e72-bd4b-e5b97c775fe2 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "5ea73f51-d224-41ed-892b-a137d9985e73-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:24:38 np0005548731 nova_compute[232433]: 2025-12-06 07:24:38.295 232437 DEBUG nova.compute.manager [req-4ba8f1b5-1ba7-4331-a953-d608a1da5380 req-060380ce-4b92-4e72-bd4b-e5b97c775fe2 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 5ea73f51-d224-41ed-892b-a137d9985e73] Processing event network-vif-plugged-e61e57ee-1e76-4d23-ab33-ab23cf69ba70 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Dec  6 02:24:38 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  6 02:24:38 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/936231557' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  6 02:24:38 np0005548731 nova_compute[232433]: 2025-12-06 07:24:38.391 232437 DEBUG oslo_concurrency.processutils [None req-fa6bbbda-7b9c-4dae-babc-e050827eef98 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.410s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:24:38 np0005548731 nova_compute[232433]: 2025-12-06 07:24:38.392 232437 DEBUG nova.virt.libvirt.vif [None req-fa6bbbda-7b9c-4dae-babc-e050827eef98 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T07:24:27Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-1094086696',display_name='tempest-DeleteServersTestJSON-server-1094086696',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-1094086696',id=104,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c6d2f50c0db54315bfa96a24511dda90',ramdisk_id='',reservation_id='r-zdeyzdaj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-DeleteServersTestJSON-1764569218',owner_user_name='tempest-DeleteServersTestJ
SON-1764569218-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:24:29Z,user_data=None,user_id='d966fefcb38a45219b9cc637c46a3d62',uuid=a6984fe7-72e6-4e80-b77b-925152e01f3f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "02492be5-e171-4dea-a6fe-20c8b6f3a45e", "address": "fa:16:3e:0a:25:68", "network": {"id": "85cfbf28-7016-4776-8fc2-2eb08a6b8347", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-855821425-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c6d2f50c0db54315bfa96a24511dda90", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap02492be5-e1", "ovs_interfaceid": "02492be5-e171-4dea-a6fe-20c8b6f3a45e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec  6 02:24:38 np0005548731 nova_compute[232433]: 2025-12-06 07:24:38.392 232437 DEBUG nova.network.os_vif_util [None req-fa6bbbda-7b9c-4dae-babc-e050827eef98 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Converting VIF {"id": "02492be5-e171-4dea-a6fe-20c8b6f3a45e", "address": "fa:16:3e:0a:25:68", "network": {"id": "85cfbf28-7016-4776-8fc2-2eb08a6b8347", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-855821425-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c6d2f50c0db54315bfa96a24511dda90", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap02492be5-e1", "ovs_interfaceid": "02492be5-e171-4dea-a6fe-20c8b6f3a45e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 02:24:38 np0005548731 nova_compute[232433]: 2025-12-06 07:24:38.393 232437 DEBUG nova.network.os_vif_util [None req-fa6bbbda-7b9c-4dae-babc-e050827eef98 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:0a:25:68,bridge_name='br-int',has_traffic_filtering=True,id=02492be5-e171-4dea-a6fe-20c8b6f3a45e,network=Network(85cfbf28-7016-4776-8fc2-2eb08a6b8347),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap02492be5-e1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 02:24:38 np0005548731 nova_compute[232433]: 2025-12-06 07:24:38.394 232437 DEBUG nova.objects.instance [None req-fa6bbbda-7b9c-4dae-babc-e050827eef98 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Lazy-loading 'pci_devices' on Instance uuid a6984fe7-72e6-4e80-b77b-925152e01f3f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:24:38 np0005548731 nova_compute[232433]: 2025-12-06 07:24:38.408 232437 DEBUG nova.virt.libvirt.driver [None req-fa6bbbda-7b9c-4dae-babc-e050827eef98 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] [instance: a6984fe7-72e6-4e80-b77b-925152e01f3f] End _get_guest_xml xml=<domain type="kvm">
Dec  6 02:24:38 np0005548731 nova_compute[232433]:  <uuid>a6984fe7-72e6-4e80-b77b-925152e01f3f</uuid>
Dec  6 02:24:38 np0005548731 nova_compute[232433]:  <name>instance-00000068</name>
Dec  6 02:24:38 np0005548731 nova_compute[232433]:  <memory>131072</memory>
Dec  6 02:24:38 np0005548731 nova_compute[232433]:  <vcpu>1</vcpu>
Dec  6 02:24:38 np0005548731 nova_compute[232433]:  <metadata>
Dec  6 02:24:38 np0005548731 nova_compute[232433]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec  6 02:24:38 np0005548731 nova_compute[232433]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec  6 02:24:38 np0005548731 nova_compute[232433]:      <nova:name>tempest-DeleteServersTestJSON-server-1094086696</nova:name>
Dec  6 02:24:38 np0005548731 nova_compute[232433]:      <nova:creationTime>2025-12-06 07:24:37</nova:creationTime>
Dec  6 02:24:38 np0005548731 nova_compute[232433]:      <nova:flavor name="m1.nano">
Dec  6 02:24:38 np0005548731 nova_compute[232433]:        <nova:memory>128</nova:memory>
Dec  6 02:24:38 np0005548731 nova_compute[232433]:        <nova:disk>1</nova:disk>
Dec  6 02:24:38 np0005548731 nova_compute[232433]:        <nova:swap>0</nova:swap>
Dec  6 02:24:38 np0005548731 nova_compute[232433]:        <nova:ephemeral>0</nova:ephemeral>
Dec  6 02:24:38 np0005548731 nova_compute[232433]:        <nova:vcpus>1</nova:vcpus>
Dec  6 02:24:38 np0005548731 nova_compute[232433]:      </nova:flavor>
Dec  6 02:24:38 np0005548731 nova_compute[232433]:      <nova:owner>
Dec  6 02:24:38 np0005548731 nova_compute[232433]:        <nova:user uuid="d966fefcb38a45219b9cc637c46a3d62">tempest-DeleteServersTestJSON-1764569218-project-member</nova:user>
Dec  6 02:24:38 np0005548731 nova_compute[232433]:        <nova:project uuid="c6d2f50c0db54315bfa96a24511dda90">tempest-DeleteServersTestJSON-1764569218</nova:project>
Dec  6 02:24:38 np0005548731 nova_compute[232433]:      </nova:owner>
Dec  6 02:24:38 np0005548731 nova_compute[232433]:      <nova:root type="image" uuid="6efab05d-c7cf-4770-a5c3-c806a2739063"/>
Dec  6 02:24:38 np0005548731 nova_compute[232433]:      <nova:ports>
Dec  6 02:24:38 np0005548731 nova_compute[232433]:        <nova:port uuid="02492be5-e171-4dea-a6fe-20c8b6f3a45e">
Dec  6 02:24:38 np0005548731 nova_compute[232433]:          <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Dec  6 02:24:38 np0005548731 nova_compute[232433]:        </nova:port>
Dec  6 02:24:38 np0005548731 nova_compute[232433]:      </nova:ports>
Dec  6 02:24:38 np0005548731 nova_compute[232433]:    </nova:instance>
Dec  6 02:24:38 np0005548731 nova_compute[232433]:  </metadata>
Dec  6 02:24:38 np0005548731 nova_compute[232433]:  <sysinfo type="smbios">
Dec  6 02:24:38 np0005548731 nova_compute[232433]:    <system>
Dec  6 02:24:38 np0005548731 nova_compute[232433]:      <entry name="manufacturer">RDO</entry>
Dec  6 02:24:38 np0005548731 nova_compute[232433]:      <entry name="product">OpenStack Compute</entry>
Dec  6 02:24:38 np0005548731 nova_compute[232433]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec  6 02:24:38 np0005548731 nova_compute[232433]:      <entry name="serial">a6984fe7-72e6-4e80-b77b-925152e01f3f</entry>
Dec  6 02:24:38 np0005548731 nova_compute[232433]:      <entry name="uuid">a6984fe7-72e6-4e80-b77b-925152e01f3f</entry>
Dec  6 02:24:38 np0005548731 nova_compute[232433]:      <entry name="family">Virtual Machine</entry>
Dec  6 02:24:38 np0005548731 nova_compute[232433]:    </system>
Dec  6 02:24:38 np0005548731 nova_compute[232433]:  </sysinfo>
Dec  6 02:24:38 np0005548731 nova_compute[232433]:  <os>
Dec  6 02:24:38 np0005548731 nova_compute[232433]:    <type arch="x86_64" machine="q35">hvm</type>
Dec  6 02:24:38 np0005548731 nova_compute[232433]:    <boot dev="hd"/>
Dec  6 02:24:38 np0005548731 nova_compute[232433]:    <smbios mode="sysinfo"/>
Dec  6 02:24:38 np0005548731 nova_compute[232433]:  </os>
Dec  6 02:24:38 np0005548731 nova_compute[232433]:  <features>
Dec  6 02:24:38 np0005548731 nova_compute[232433]:    <acpi/>
Dec  6 02:24:38 np0005548731 nova_compute[232433]:    <apic/>
Dec  6 02:24:38 np0005548731 nova_compute[232433]:    <vmcoreinfo/>
Dec  6 02:24:38 np0005548731 nova_compute[232433]:  </features>
Dec  6 02:24:38 np0005548731 nova_compute[232433]:  <clock offset="utc">
Dec  6 02:24:38 np0005548731 nova_compute[232433]:    <timer name="pit" tickpolicy="delay"/>
Dec  6 02:24:38 np0005548731 nova_compute[232433]:    <timer name="rtc" tickpolicy="catchup"/>
Dec  6 02:24:38 np0005548731 nova_compute[232433]:    <timer name="hpet" present="no"/>
Dec  6 02:24:38 np0005548731 nova_compute[232433]:  </clock>
Dec  6 02:24:38 np0005548731 nova_compute[232433]:  <cpu mode="custom" match="exact">
Dec  6 02:24:38 np0005548731 nova_compute[232433]:    <model>Nehalem</model>
Dec  6 02:24:38 np0005548731 nova_compute[232433]:    <topology sockets="1" cores="1" threads="1"/>
Dec  6 02:24:38 np0005548731 nova_compute[232433]:  </cpu>
Dec  6 02:24:38 np0005548731 nova_compute[232433]:  <devices>
Dec  6 02:24:38 np0005548731 nova_compute[232433]:    <disk type="network" device="disk">
Dec  6 02:24:38 np0005548731 nova_compute[232433]:      <driver type="raw" cache="none"/>
Dec  6 02:24:38 np0005548731 nova_compute[232433]:      <source protocol="rbd" name="vms/a6984fe7-72e6-4e80-b77b-925152e01f3f_disk">
Dec  6 02:24:38 np0005548731 nova_compute[232433]:        <host name="192.168.122.100" port="6789"/>
Dec  6 02:24:38 np0005548731 nova_compute[232433]:        <host name="192.168.122.102" port="6789"/>
Dec  6 02:24:38 np0005548731 nova_compute[232433]:        <host name="192.168.122.101" port="6789"/>
Dec  6 02:24:38 np0005548731 nova_compute[232433]:      </source>
Dec  6 02:24:38 np0005548731 nova_compute[232433]:      <auth username="openstack">
Dec  6 02:24:38 np0005548731 nova_compute[232433]:        <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec  6 02:24:38 np0005548731 nova_compute[232433]:      </auth>
Dec  6 02:24:38 np0005548731 nova_compute[232433]:      <target dev="vda" bus="virtio"/>
Dec  6 02:24:38 np0005548731 nova_compute[232433]:    </disk>
Dec  6 02:24:38 np0005548731 nova_compute[232433]:    <disk type="network" device="cdrom">
Dec  6 02:24:38 np0005548731 nova_compute[232433]:      <driver type="raw" cache="none"/>
Dec  6 02:24:38 np0005548731 nova_compute[232433]:      <source protocol="rbd" name="vms/a6984fe7-72e6-4e80-b77b-925152e01f3f_disk.config">
Dec  6 02:24:38 np0005548731 nova_compute[232433]:        <host name="192.168.122.100" port="6789"/>
Dec  6 02:24:38 np0005548731 nova_compute[232433]:        <host name="192.168.122.102" port="6789"/>
Dec  6 02:24:38 np0005548731 nova_compute[232433]:        <host name="192.168.122.101" port="6789"/>
Dec  6 02:24:38 np0005548731 nova_compute[232433]:      </source>
Dec  6 02:24:38 np0005548731 nova_compute[232433]:      <auth username="openstack">
Dec  6 02:24:38 np0005548731 nova_compute[232433]:        <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec  6 02:24:38 np0005548731 nova_compute[232433]:      </auth>
Dec  6 02:24:38 np0005548731 nova_compute[232433]:      <target dev="sda" bus="sata"/>
Dec  6 02:24:38 np0005548731 nova_compute[232433]:    </disk>
Dec  6 02:24:38 np0005548731 nova_compute[232433]:    <interface type="ethernet">
Dec  6 02:24:38 np0005548731 nova_compute[232433]:      <mac address="fa:16:3e:0a:25:68"/>
Dec  6 02:24:38 np0005548731 nova_compute[232433]:      <model type="virtio"/>
Dec  6 02:24:38 np0005548731 nova_compute[232433]:      <driver name="vhost" rx_queue_size="512"/>
Dec  6 02:24:38 np0005548731 nova_compute[232433]:      <mtu size="1442"/>
Dec  6 02:24:38 np0005548731 nova_compute[232433]:      <target dev="tap02492be5-e1"/>
Dec  6 02:24:38 np0005548731 nova_compute[232433]:    </interface>
Dec  6 02:24:38 np0005548731 nova_compute[232433]:    <serial type="pty">
Dec  6 02:24:38 np0005548731 nova_compute[232433]:      <log file="/var/lib/nova/instances/a6984fe7-72e6-4e80-b77b-925152e01f3f/console.log" append="off"/>
Dec  6 02:24:38 np0005548731 nova_compute[232433]:    </serial>
Dec  6 02:24:38 np0005548731 nova_compute[232433]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec  6 02:24:38 np0005548731 nova_compute[232433]:    <video>
Dec  6 02:24:38 np0005548731 nova_compute[232433]:      <model type="virtio"/>
Dec  6 02:24:38 np0005548731 nova_compute[232433]:    </video>
Dec  6 02:24:38 np0005548731 nova_compute[232433]:    <input type="tablet" bus="usb"/>
Dec  6 02:24:38 np0005548731 nova_compute[232433]:    <rng model="virtio">
Dec  6 02:24:38 np0005548731 nova_compute[232433]:      <backend model="random">/dev/urandom</backend>
Dec  6 02:24:38 np0005548731 nova_compute[232433]:    </rng>
Dec  6 02:24:38 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root"/>
Dec  6 02:24:38 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:24:38 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:24:38 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:24:38 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:24:38 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:24:38 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:24:38 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:24:38 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:24:38 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:24:38 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:24:38 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:24:38 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:24:38 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:24:38 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:24:38 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:24:38 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:24:38 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:24:38 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:24:38 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:24:38 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:24:38 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:24:38 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:24:38 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:24:38 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:24:38 np0005548731 nova_compute[232433]:    <controller type="usb" index="0"/>
Dec  6 02:24:38 np0005548731 nova_compute[232433]:    <memballoon model="virtio">
Dec  6 02:24:38 np0005548731 nova_compute[232433]:      <stats period="10"/>
Dec  6 02:24:38 np0005548731 nova_compute[232433]:    </memballoon>
Dec  6 02:24:38 np0005548731 nova_compute[232433]:  </devices>
Dec  6 02:24:38 np0005548731 nova_compute[232433]: </domain>
Dec  6 02:24:38 np0005548731 nova_compute[232433]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec  6 02:24:38 np0005548731 nova_compute[232433]: 2025-12-06 07:24:38.409 232437 DEBUG nova.compute.manager [None req-fa6bbbda-7b9c-4dae-babc-e050827eef98 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] [instance: a6984fe7-72e6-4e80-b77b-925152e01f3f] Preparing to wait for external event network-vif-plugged-02492be5-e171-4dea-a6fe-20c8b6f3a45e prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec  6 02:24:38 np0005548731 nova_compute[232433]: 2025-12-06 07:24:38.409 232437 DEBUG oslo_concurrency.lockutils [None req-fa6bbbda-7b9c-4dae-babc-e050827eef98 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Acquiring lock "a6984fe7-72e6-4e80-b77b-925152e01f3f-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:24:38 np0005548731 nova_compute[232433]: 2025-12-06 07:24:38.409 232437 DEBUG oslo_concurrency.lockutils [None req-fa6bbbda-7b9c-4dae-babc-e050827eef98 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Lock "a6984fe7-72e6-4e80-b77b-925152e01f3f-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:24:38 np0005548731 nova_compute[232433]: 2025-12-06 07:24:38.409 232437 DEBUG oslo_concurrency.lockutils [None req-fa6bbbda-7b9c-4dae-babc-e050827eef98 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Lock "a6984fe7-72e6-4e80-b77b-925152e01f3f-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:24:38 np0005548731 nova_compute[232433]: 2025-12-06 07:24:38.410 232437 DEBUG nova.virt.libvirt.vif [None req-fa6bbbda-7b9c-4dae-babc-e050827eef98 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T07:24:27Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-1094086696',display_name='tempest-DeleteServersTestJSON-server-1094086696',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-1094086696',id=104,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c6d2f50c0db54315bfa96a24511dda90',ramdisk_id='',reservation_id='r-zdeyzdaj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-DeleteServersTestJSON-1764569218',owner_user_name='tempest-DeleteSe
rversTestJSON-1764569218-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:24:29Z,user_data=None,user_id='d966fefcb38a45219b9cc637c46a3d62',uuid=a6984fe7-72e6-4e80-b77b-925152e01f3f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "02492be5-e171-4dea-a6fe-20c8b6f3a45e", "address": "fa:16:3e:0a:25:68", "network": {"id": "85cfbf28-7016-4776-8fc2-2eb08a6b8347", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-855821425-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c6d2f50c0db54315bfa96a24511dda90", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap02492be5-e1", "ovs_interfaceid": "02492be5-e171-4dea-a6fe-20c8b6f3a45e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec  6 02:24:38 np0005548731 nova_compute[232433]: 2025-12-06 07:24:38.410 232437 DEBUG nova.network.os_vif_util [None req-fa6bbbda-7b9c-4dae-babc-e050827eef98 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Converting VIF {"id": "02492be5-e171-4dea-a6fe-20c8b6f3a45e", "address": "fa:16:3e:0a:25:68", "network": {"id": "85cfbf28-7016-4776-8fc2-2eb08a6b8347", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-855821425-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c6d2f50c0db54315bfa96a24511dda90", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap02492be5-e1", "ovs_interfaceid": "02492be5-e171-4dea-a6fe-20c8b6f3a45e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 02:24:38 np0005548731 nova_compute[232433]: 2025-12-06 07:24:38.410 232437 DEBUG nova.network.os_vif_util [None req-fa6bbbda-7b9c-4dae-babc-e050827eef98 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:0a:25:68,bridge_name='br-int',has_traffic_filtering=True,id=02492be5-e171-4dea-a6fe-20c8b6f3a45e,network=Network(85cfbf28-7016-4776-8fc2-2eb08a6b8347),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap02492be5-e1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 02:24:38 np0005548731 nova_compute[232433]: 2025-12-06 07:24:38.411 232437 DEBUG os_vif [None req-fa6bbbda-7b9c-4dae-babc-e050827eef98 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:0a:25:68,bridge_name='br-int',has_traffic_filtering=True,id=02492be5-e171-4dea-a6fe-20c8b6f3a45e,network=Network(85cfbf28-7016-4776-8fc2-2eb08a6b8347),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap02492be5-e1') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec  6 02:24:38 np0005548731 nova_compute[232433]: 2025-12-06 07:24:38.411 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:24:38 np0005548731 nova_compute[232433]: 2025-12-06 07:24:38.412 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:24:38 np0005548731 nova_compute[232433]: 2025-12-06 07:24:38.412 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  6 02:24:38 np0005548731 nova_compute[232433]: 2025-12-06 07:24:38.414 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:24:38 np0005548731 nova_compute[232433]: 2025-12-06 07:24:38.414 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap02492be5-e1, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:24:38 np0005548731 nova_compute[232433]: 2025-12-06 07:24:38.414 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap02492be5-e1, col_values=(('external_ids', {'iface-id': '02492be5-e171-4dea-a6fe-20c8b6f3a45e', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:0a:25:68', 'vm-uuid': 'a6984fe7-72e6-4e80-b77b-925152e01f3f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:24:38 np0005548731 NetworkManager[49182]: <info>  [1765005878.4173] manager: (tap02492be5-e1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/196)
Dec  6 02:24:38 np0005548731 nova_compute[232433]: 2025-12-06 07:24:38.415 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:24:38 np0005548731 nova_compute[232433]: 2025-12-06 07:24:38.421 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  6 02:24:38 np0005548731 nova_compute[232433]: 2025-12-06 07:24:38.423 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:24:38 np0005548731 nova_compute[232433]: 2025-12-06 07:24:38.424 232437 INFO os_vif [None req-fa6bbbda-7b9c-4dae-babc-e050827eef98 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:0a:25:68,bridge_name='br-int',has_traffic_filtering=True,id=02492be5-e171-4dea-a6fe-20c8b6f3a45e,network=Network(85cfbf28-7016-4776-8fc2-2eb08a6b8347),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap02492be5-e1')#033[00m
Dec  6 02:24:38 np0005548731 nova_compute[232433]: 2025-12-06 07:24:38.466 232437 DEBUG nova.virt.libvirt.driver [None req-fa6bbbda-7b9c-4dae-babc-e050827eef98 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  6 02:24:38 np0005548731 nova_compute[232433]: 2025-12-06 07:24:38.466 232437 DEBUG nova.virt.libvirt.driver [None req-fa6bbbda-7b9c-4dae-babc-e050827eef98 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  6 02:24:38 np0005548731 nova_compute[232433]: 2025-12-06 07:24:38.466 232437 DEBUG nova.virt.libvirt.driver [None req-fa6bbbda-7b9c-4dae-babc-e050827eef98 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] No VIF found with MAC fa:16:3e:0a:25:68, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec  6 02:24:38 np0005548731 nova_compute[232433]: 2025-12-06 07:24:38.467 232437 INFO nova.virt.libvirt.driver [None req-fa6bbbda-7b9c-4dae-babc-e050827eef98 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] [instance: a6984fe7-72e6-4e80-b77b-925152e01f3f] Using config drive#033[00m
Dec  6 02:24:38 np0005548731 nova_compute[232433]: 2025-12-06 07:24:38.608 232437 DEBUG nova.storage.rbd_utils [None req-fa6bbbda-7b9c-4dae-babc-e050827eef98 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] rbd image a6984fe7-72e6-4e80-b77b-925152e01f3f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:24:38 np0005548731 nova_compute[232433]: 2025-12-06 07:24:38.811 232437 DEBUG nova.virt.driver [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Emitting event <LifecycleEvent: 1765005878.811361, 5ea73f51-d224-41ed-892b-a137d9985e73 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 02:24:38 np0005548731 nova_compute[232433]: 2025-12-06 07:24:38.812 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 5ea73f51-d224-41ed-892b-a137d9985e73] VM Started (Lifecycle Event)#033[00m
Dec  6 02:24:38 np0005548731 nova_compute[232433]: 2025-12-06 07:24:38.814 232437 DEBUG nova.compute.manager [None req-fcc9ff1d-188f-484c-949d-facadeb08252 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] [instance: 5ea73f51-d224-41ed-892b-a137d9985e73] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec  6 02:24:38 np0005548731 nova_compute[232433]: 2025-12-06 07:24:38.816 232437 DEBUG nova.virt.libvirt.driver [None req-fcc9ff1d-188f-484c-949d-facadeb08252 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] [instance: 5ea73f51-d224-41ed-892b-a137d9985e73] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Dec  6 02:24:38 np0005548731 nova_compute[232433]: 2025-12-06 07:24:38.819 232437 INFO nova.virt.libvirt.driver [-] [instance: 5ea73f51-d224-41ed-892b-a137d9985e73] Instance spawned successfully.#033[00m
Dec  6 02:24:38 np0005548731 nova_compute[232433]: 2025-12-06 07:24:38.819 232437 DEBUG nova.virt.libvirt.driver [None req-fcc9ff1d-188f-484c-949d-facadeb08252 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] [instance: 5ea73f51-d224-41ed-892b-a137d9985e73] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Dec  6 02:24:38 np0005548731 nova_compute[232433]: 2025-12-06 07:24:38.845 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 5ea73f51-d224-41ed-892b-a137d9985e73] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:24:38 np0005548731 nova_compute[232433]: 2025-12-06 07:24:38.850 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 5ea73f51-d224-41ed-892b-a137d9985e73] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  6 02:24:38 np0005548731 nova_compute[232433]: 2025-12-06 07:24:38.853 232437 DEBUG nova.virt.libvirt.driver [None req-fcc9ff1d-188f-484c-949d-facadeb08252 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] [instance: 5ea73f51-d224-41ed-892b-a137d9985e73] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 02:24:38 np0005548731 nova_compute[232433]: 2025-12-06 07:24:38.853 232437 DEBUG nova.virt.libvirt.driver [None req-fcc9ff1d-188f-484c-949d-facadeb08252 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] [instance: 5ea73f51-d224-41ed-892b-a137d9985e73] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 02:24:38 np0005548731 nova_compute[232433]: 2025-12-06 07:24:38.853 232437 DEBUG nova.virt.libvirt.driver [None req-fcc9ff1d-188f-484c-949d-facadeb08252 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] [instance: 5ea73f51-d224-41ed-892b-a137d9985e73] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 02:24:38 np0005548731 nova_compute[232433]: 2025-12-06 07:24:38.853 232437 DEBUG nova.virt.libvirt.driver [None req-fcc9ff1d-188f-484c-949d-facadeb08252 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] [instance: 5ea73f51-d224-41ed-892b-a137d9985e73] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 02:24:38 np0005548731 nova_compute[232433]: 2025-12-06 07:24:38.854 232437 DEBUG nova.virt.libvirt.driver [None req-fcc9ff1d-188f-484c-949d-facadeb08252 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] [instance: 5ea73f51-d224-41ed-892b-a137d9985e73] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 02:24:38 np0005548731 nova_compute[232433]: 2025-12-06 07:24:38.854 232437 DEBUG nova.virt.libvirt.driver [None req-fcc9ff1d-188f-484c-949d-facadeb08252 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] [instance: 5ea73f51-d224-41ed-892b-a137d9985e73] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 02:24:38 np0005548731 nova_compute[232433]: 2025-12-06 07:24:38.876 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 5ea73f51-d224-41ed-892b-a137d9985e73] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  6 02:24:38 np0005548731 nova_compute[232433]: 2025-12-06 07:24:38.877 232437 DEBUG nova.virt.driver [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Emitting event <LifecycleEvent: 1765005878.8133996, 5ea73f51-d224-41ed-892b-a137d9985e73 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 02:24:38 np0005548731 nova_compute[232433]: 2025-12-06 07:24:38.877 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 5ea73f51-d224-41ed-892b-a137d9985e73] VM Paused (Lifecycle Event)#033[00m
Dec  6 02:24:38 np0005548731 nova_compute[232433]: 2025-12-06 07:24:38.910 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 5ea73f51-d224-41ed-892b-a137d9985e73] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:24:38 np0005548731 nova_compute[232433]: 2025-12-06 07:24:38.913 232437 DEBUG nova.virt.driver [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Emitting event <LifecycleEvent: 1765005878.816055, 5ea73f51-d224-41ed-892b-a137d9985e73 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 02:24:38 np0005548731 nova_compute[232433]: 2025-12-06 07:24:38.914 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 5ea73f51-d224-41ed-892b-a137d9985e73] VM Resumed (Lifecycle Event)#033[00m
Dec  6 02:24:38 np0005548731 nova_compute[232433]: 2025-12-06 07:24:38.928 232437 INFO nova.compute.manager [None req-fcc9ff1d-188f-484c-949d-facadeb08252 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] [instance: 5ea73f51-d224-41ed-892b-a137d9985e73] Took 9.58 seconds to spawn the instance on the hypervisor.#033[00m
Dec  6 02:24:38 np0005548731 nova_compute[232433]: 2025-12-06 07:24:38.929 232437 DEBUG nova.compute.manager [None req-fcc9ff1d-188f-484c-949d-facadeb08252 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] [instance: 5ea73f51-d224-41ed-892b-a137d9985e73] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:24:38 np0005548731 nova_compute[232433]: 2025-12-06 07:24:38.936 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 5ea73f51-d224-41ed-892b-a137d9985e73] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:24:38 np0005548731 nova_compute[232433]: 2025-12-06 07:24:38.938 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 5ea73f51-d224-41ed-892b-a137d9985e73] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  6 02:24:38 np0005548731 nova_compute[232433]: 2025-12-06 07:24:38.990 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 5ea73f51-d224-41ed-892b-a137d9985e73] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  6 02:24:39 np0005548731 nova_compute[232433]: 2025-12-06 07:24:39.008 232437 INFO nova.compute.manager [None req-fcc9ff1d-188f-484c-949d-facadeb08252 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] [instance: 5ea73f51-d224-41ed-892b-a137d9985e73] Took 10.51 seconds to build instance.#033[00m
Dec  6 02:24:39 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:24:39 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:24:39 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:24:39.045 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:24:39 np0005548731 nova_compute[232433]: 2025-12-06 07:24:39.060 232437 DEBUG oslo_concurrency.lockutils [None req-fcc9ff1d-188f-484c-949d-facadeb08252 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] Lock "5ea73f51-d224-41ed-892b-a137d9985e73" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.642s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:24:39 np0005548731 nova_compute[232433]: 2025-12-06 07:24:39.235 232437 INFO nova.virt.libvirt.driver [None req-fa6bbbda-7b9c-4dae-babc-e050827eef98 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] [instance: a6984fe7-72e6-4e80-b77b-925152e01f3f] Creating config drive at /var/lib/nova/instances/a6984fe7-72e6-4e80-b77b-925152e01f3f/disk.config#033[00m
Dec  6 02:24:39 np0005548731 nova_compute[232433]: 2025-12-06 07:24:39.240 232437 DEBUG oslo_concurrency.processutils [None req-fa6bbbda-7b9c-4dae-babc-e050827eef98 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/a6984fe7-72e6-4e80-b77b-925152e01f3f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpa970rnlq execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:24:39 np0005548731 nova_compute[232433]: 2025-12-06 07:24:39.371 232437 DEBUG oslo_concurrency.processutils [None req-fa6bbbda-7b9c-4dae-babc-e050827eef98 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/a6984fe7-72e6-4e80-b77b-925152e01f3f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpa970rnlq" returned: 0 in 0.130s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:24:39 np0005548731 nova_compute[232433]: 2025-12-06 07:24:39.414 232437 DEBUG nova.storage.rbd_utils [None req-fa6bbbda-7b9c-4dae-babc-e050827eef98 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] rbd image a6984fe7-72e6-4e80-b77b-925152e01f3f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:24:39 np0005548731 nova_compute[232433]: 2025-12-06 07:24:39.418 232437 DEBUG oslo_concurrency.processutils [None req-fa6bbbda-7b9c-4dae-babc-e050827eef98 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/a6984fe7-72e6-4e80-b77b-925152e01f3f/disk.config a6984fe7-72e6-4e80-b77b-925152e01f3f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:24:39 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:24:39 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:24:39 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:24:39.703 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:24:40 np0005548731 nova_compute[232433]: 2025-12-06 07:24:40.411 232437 DEBUG nova.compute.manager [req-27f549c8-5887-4c26-9007-8d0f3cb2bca7 req-896204bd-d2da-427e-b122-ea78adaf5b48 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 5ea73f51-d224-41ed-892b-a137d9985e73] Received event network-vif-plugged-e61e57ee-1e76-4d23-ab33-ab23cf69ba70 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:24:40 np0005548731 nova_compute[232433]: 2025-12-06 07:24:40.412 232437 DEBUG oslo_concurrency.lockutils [req-27f549c8-5887-4c26-9007-8d0f3cb2bca7 req-896204bd-d2da-427e-b122-ea78adaf5b48 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "5ea73f51-d224-41ed-892b-a137d9985e73-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:24:40 np0005548731 nova_compute[232433]: 2025-12-06 07:24:40.413 232437 DEBUG oslo_concurrency.lockutils [req-27f549c8-5887-4c26-9007-8d0f3cb2bca7 req-896204bd-d2da-427e-b122-ea78adaf5b48 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "5ea73f51-d224-41ed-892b-a137d9985e73-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:24:40 np0005548731 nova_compute[232433]: 2025-12-06 07:24:40.413 232437 DEBUG oslo_concurrency.lockutils [req-27f549c8-5887-4c26-9007-8d0f3cb2bca7 req-896204bd-d2da-427e-b122-ea78adaf5b48 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "5ea73f51-d224-41ed-892b-a137d9985e73-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:24:40 np0005548731 nova_compute[232433]: 2025-12-06 07:24:40.414 232437 DEBUG nova.compute.manager [req-27f549c8-5887-4c26-9007-8d0f3cb2bca7 req-896204bd-d2da-427e-b122-ea78adaf5b48 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 5ea73f51-d224-41ed-892b-a137d9985e73] No waiting events found dispatching network-vif-plugged-e61e57ee-1e76-4d23-ab33-ab23cf69ba70 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 02:24:40 np0005548731 nova_compute[232433]: 2025-12-06 07:24:40.414 232437 WARNING nova.compute.manager [req-27f549c8-5887-4c26-9007-8d0f3cb2bca7 req-896204bd-d2da-427e-b122-ea78adaf5b48 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 5ea73f51-d224-41ed-892b-a137d9985e73] Received unexpected event network-vif-plugged-e61e57ee-1e76-4d23-ab33-ab23cf69ba70 for instance with vm_state active and task_state None.#033[00m
Dec  6 02:24:41 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:24:41 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:24:41 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:24:41.047 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:24:41 np0005548731 nova_compute[232433]: 2025-12-06 07:24:41.366 232437 INFO nova.compute.manager [None req-a50a0eee-6b1d-4713-b29f-72595519baa1 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] [instance: 5ea73f51-d224-41ed-892b-a137d9985e73] Rescuing#033[00m
Dec  6 02:24:41 np0005548731 nova_compute[232433]: 2025-12-06 07:24:41.367 232437 DEBUG oslo_concurrency.lockutils [None req-a50a0eee-6b1d-4713-b29f-72595519baa1 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] Acquiring lock "refresh_cache-5ea73f51-d224-41ed-892b-a137d9985e73" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 02:24:41 np0005548731 nova_compute[232433]: 2025-12-06 07:24:41.367 232437 DEBUG oslo_concurrency.lockutils [None req-a50a0eee-6b1d-4713-b29f-72595519baa1 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] Acquired lock "refresh_cache-5ea73f51-d224-41ed-892b-a137d9985e73" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 02:24:41 np0005548731 nova_compute[232433]: 2025-12-06 07:24:41.367 232437 DEBUG nova.network.neutron [None req-a50a0eee-6b1d-4713-b29f-72595519baa1 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] [instance: 5ea73f51-d224-41ed-892b-a137d9985e73] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec  6 02:24:41 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:24:41 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:24:41 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:24:41.706 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:24:42 np0005548731 nova_compute[232433]: 2025-12-06 07:24:42.337 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:24:42 np0005548731 nova_compute[232433]: 2025-12-06 07:24:42.516 232437 DEBUG oslo_concurrency.processutils [None req-fa6bbbda-7b9c-4dae-babc-e050827eef98 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/a6984fe7-72e6-4e80-b77b-925152e01f3f/disk.config a6984fe7-72e6-4e80-b77b-925152e01f3f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 3.098s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:24:42 np0005548731 nova_compute[232433]: 2025-12-06 07:24:42.516 232437 INFO nova.virt.libvirt.driver [None req-fa6bbbda-7b9c-4dae-babc-e050827eef98 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] [instance: a6984fe7-72e6-4e80-b77b-925152e01f3f] Deleting local config drive /var/lib/nova/instances/a6984fe7-72e6-4e80-b77b-925152e01f3f/disk.config because it was imported into RBD.#033[00m
Dec  6 02:24:42 np0005548731 kernel: tap02492be5-e1: entered promiscuous mode
Dec  6 02:24:42 np0005548731 NetworkManager[49182]: <info>  [1765005882.5707] manager: (tap02492be5-e1): new Tun device (/org/freedesktop/NetworkManager/Devices/197)
Dec  6 02:24:42 np0005548731 ovn_controller[133927]: 2025-12-06T07:24:42Z|00385|binding|INFO|Claiming lport 02492be5-e171-4dea-a6fe-20c8b6f3a45e for this chassis.
Dec  6 02:24:42 np0005548731 ovn_controller[133927]: 2025-12-06T07:24:42Z|00386|binding|INFO|02492be5-e171-4dea-a6fe-20c8b6f3a45e: Claiming fa:16:3e:0a:25:68 10.100.0.6
Dec  6 02:24:42 np0005548731 nova_compute[232433]: 2025-12-06 07:24:42.572 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:24:42 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:24:42.584 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:0a:25:68 10.100.0.6'], port_security=['fa:16:3e:0a:25:68 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'a6984fe7-72e6-4e80-b77b-925152e01f3f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-85cfbf28-7016-4776-8fc2-2eb08a6b8347', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c6d2f50c0db54315bfa96a24511dda90', 'neutron:revision_number': '2', 'neutron:security_group_ids': '859a0bc3-7542-4622-9180-7c67df8e913c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e462675c-3feb-4b24-a87b-c5ebd92a4b8b, chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], logical_port=02492be5-e171-4dea-a6fe-20c8b6f3a45e) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 02:24:42 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:24:42.585 143965 INFO neutron.agent.ovn.metadata.agent [-] Port 02492be5-e171-4dea-a6fe-20c8b6f3a45e in datapath 85cfbf28-7016-4776-8fc2-2eb08a6b8347 bound to our chassis#033[00m
Dec  6 02:24:42 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:24:42.587 143965 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 85cfbf28-7016-4776-8fc2-2eb08a6b8347#033[00m
Dec  6 02:24:42 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:24:42.601 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[0a274ed0-b6de-4d3a-9c42-efa3b9c6e9dc]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:24:42 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:24:42.603 143965 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap85cfbf28-71 in ovnmeta-85cfbf28-7016-4776-8fc2-2eb08a6b8347 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Dec  6 02:24:42 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:24:42.605 236668 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap85cfbf28-70 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Dec  6 02:24:42 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:24:42.605 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[734ae97e-7530-4725-8172-7d7cfa58c9d1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:24:42 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:24:42.606 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[d1a44ae3-312e-4687-aeb9-3333d4aeff6f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:24:42 np0005548731 systemd-udevd[273394]: Network interface NamePolicy= disabled on kernel command line.
Dec  6 02:24:42 np0005548731 NetworkManager[49182]: <info>  [1765005882.6223] device (tap02492be5-e1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec  6 02:24:42 np0005548731 NetworkManager[49182]: <info>  [1765005882.6238] device (tap02492be5-e1): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec  6 02:24:42 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:24:42.625 144079 DEBUG oslo.privsep.daemon [-] privsep: reply[76d69930-fcd4-469f-bf40-3e0c0f7a015c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:24:42 np0005548731 systemd-machined[195355]: New machine qemu-42-instance-00000068.
Dec  6 02:24:42 np0005548731 systemd[1]: Started Virtual Machine qemu-42-instance-00000068.
Dec  6 02:24:42 np0005548731 nova_compute[232433]: 2025-12-06 07:24:42.641 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:24:42 np0005548731 nova_compute[232433]: 2025-12-06 07:24:42.646 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:24:42 np0005548731 ovn_controller[133927]: 2025-12-06T07:24:42Z|00387|binding|INFO|Setting lport 02492be5-e171-4dea-a6fe-20c8b6f3a45e ovn-installed in OVS
Dec  6 02:24:42 np0005548731 ovn_controller[133927]: 2025-12-06T07:24:42Z|00388|binding|INFO|Setting lport 02492be5-e171-4dea-a6fe-20c8b6f3a45e up in Southbound
Dec  6 02:24:42 np0005548731 nova_compute[232433]: 2025-12-06 07:24:42.649 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:24:42 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:24:42.651 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[b919bf3e-0254-42a8-8e9f-aaaa572c5fac]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:24:42 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:24:42.678 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[c39c72c6-e552-4301-a3b4-82a6c030db4e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:24:42 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:24:42.684 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[383a38e6-1b8a-4456-b17d-667b6647421b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:24:42 np0005548731 NetworkManager[49182]: <info>  [1765005882.6910] manager: (tap85cfbf28-70): new Veth device (/org/freedesktop/NetworkManager/Devices/198)
Dec  6 02:24:42 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:24:42.715 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[47503291-9d5f-4a8a-9aa0-7225a343ddcc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:24:42 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:24:42.719 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[1b34cf2a-eeaf-4198-bede-81de0cc8fb04]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:24:42 np0005548731 NetworkManager[49182]: <info>  [1765005882.7417] device (tap85cfbf28-70): carrier: link connected
Dec  6 02:24:42 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:24:42.747 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[e46b5151-71e4-4f29-afd7-801bd23a4b87]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:24:42 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:24:42.763 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[095ae86b-41b8-4465-a943-c222b997146e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap85cfbf28-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:81:07:62'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 125], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 614474, 'reachable_time': 39221, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 273427, 'error': None, 'target': 'ovnmeta-85cfbf28-7016-4776-8fc2-2eb08a6b8347', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:24:42 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:24:42.777 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[57c4b1de-8353-42d4-b306-e3856d4a5e66]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe81:762'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 614474, 'tstamp': 614474}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 273428, 'error': None, 'target': 'ovnmeta-85cfbf28-7016-4776-8fc2-2eb08a6b8347', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:24:42 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:24:42.794 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[b6f18cf9-b24a-4f9b-bfdc-d60a90210175]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap85cfbf28-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:81:07:62'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 125], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 614474, 'reachable_time': 39221, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 273429, 'error': None, 'target': 'ovnmeta-85cfbf28-7016-4776-8fc2-2eb08a6b8347', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:24:42 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:24:42.821 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[5f579917-0fdf-40d3-89e0-febfd759d855]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:24:42 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:24:42.878 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[9c6fd3fb-9ee4-4e61-8ffd-cc32c7a486ac]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:24:42 np0005548731 NetworkManager[49182]: <info>  [1765005882.8827] manager: (tap85cfbf28-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/199)
Dec  6 02:24:42 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:24:42.880 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap85cfbf28-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:24:42 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:24:42.880 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  6 02:24:42 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:24:42.880 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap85cfbf28-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:24:42 np0005548731 kernel: tap85cfbf28-70: entered promiscuous mode
Dec  6 02:24:42 np0005548731 nova_compute[232433]: 2025-12-06 07:24:42.882 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:24:42 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:24:42.884 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap85cfbf28-70, col_values=(('external_ids', {'iface-id': '41b1b168-8e0e-4991-9750-9b31221f4863'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:24:42 np0005548731 ovn_controller[133927]: 2025-12-06T07:24:42Z|00389|binding|INFO|Releasing lport 41b1b168-8e0e-4991-9750-9b31221f4863 from this chassis (sb_readonly=0)
Dec  6 02:24:42 np0005548731 nova_compute[232433]: 2025-12-06 07:24:42.899 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:24:42 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:24:42.900 143965 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/85cfbf28-7016-4776-8fc2-2eb08a6b8347.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/85cfbf28-7016-4776-8fc2-2eb08a6b8347.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Dec  6 02:24:42 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:24:42.901 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[b84555e4-5586-42b0-a532-c9af885fc07e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:24:42 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:24:42.902 143965 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec  6 02:24:42 np0005548731 ovn_metadata_agent[143960]: global
Dec  6 02:24:42 np0005548731 ovn_metadata_agent[143960]:    log         /dev/log local0 debug
Dec  6 02:24:42 np0005548731 ovn_metadata_agent[143960]:    log-tag     haproxy-metadata-proxy-85cfbf28-7016-4776-8fc2-2eb08a6b8347
Dec  6 02:24:42 np0005548731 ovn_metadata_agent[143960]:    user        root
Dec  6 02:24:42 np0005548731 ovn_metadata_agent[143960]:    group       root
Dec  6 02:24:42 np0005548731 ovn_metadata_agent[143960]:    maxconn     1024
Dec  6 02:24:42 np0005548731 ovn_metadata_agent[143960]:    pidfile     /var/lib/neutron/external/pids/85cfbf28-7016-4776-8fc2-2eb08a6b8347.pid.haproxy
Dec  6 02:24:42 np0005548731 ovn_metadata_agent[143960]:    daemon
Dec  6 02:24:42 np0005548731 ovn_metadata_agent[143960]: 
Dec  6 02:24:42 np0005548731 ovn_metadata_agent[143960]: defaults
Dec  6 02:24:42 np0005548731 ovn_metadata_agent[143960]:    log global
Dec  6 02:24:42 np0005548731 ovn_metadata_agent[143960]:    mode http
Dec  6 02:24:42 np0005548731 ovn_metadata_agent[143960]:    option httplog
Dec  6 02:24:42 np0005548731 ovn_metadata_agent[143960]:    option dontlognull
Dec  6 02:24:42 np0005548731 ovn_metadata_agent[143960]:    option http-server-close
Dec  6 02:24:42 np0005548731 ovn_metadata_agent[143960]:    option forwardfor
Dec  6 02:24:42 np0005548731 ovn_metadata_agent[143960]:    retries                 3
Dec  6 02:24:42 np0005548731 ovn_metadata_agent[143960]:    timeout http-request    30s
Dec  6 02:24:42 np0005548731 ovn_metadata_agent[143960]:    timeout connect         30s
Dec  6 02:24:42 np0005548731 ovn_metadata_agent[143960]:    timeout client          32s
Dec  6 02:24:42 np0005548731 ovn_metadata_agent[143960]:    timeout server          32s
Dec  6 02:24:42 np0005548731 ovn_metadata_agent[143960]:    timeout http-keep-alive 30s
Dec  6 02:24:42 np0005548731 ovn_metadata_agent[143960]: 
Dec  6 02:24:42 np0005548731 ovn_metadata_agent[143960]: 
Dec  6 02:24:42 np0005548731 ovn_metadata_agent[143960]: listen listener
Dec  6 02:24:42 np0005548731 ovn_metadata_agent[143960]:    bind 169.254.169.254:80
Dec  6 02:24:42 np0005548731 ovn_metadata_agent[143960]:    server metadata /var/lib/neutron/metadata_proxy
Dec  6 02:24:42 np0005548731 ovn_metadata_agent[143960]:    http-request add-header X-OVN-Network-ID 85cfbf28-7016-4776-8fc2-2eb08a6b8347
Dec  6 02:24:42 np0005548731 ovn_metadata_agent[143960]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Dec  6 02:24:42 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:24:42.902 143965 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-85cfbf28-7016-4776-8fc2-2eb08a6b8347', 'env', 'PROCESS_TAG=haproxy-85cfbf28-7016-4776-8fc2-2eb08a6b8347', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/85cfbf28-7016-4776-8fc2-2eb08a6b8347.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Dec  6 02:24:42 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e264 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:24:43 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:24:43 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:24:43 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:24:43.049 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:24:43 np0005548731 podman[273479]: 2025-12-06 07:24:43.282301408 +0000 UTC m=+0.049345628 container create 58a7541fa4e0172460c4bdcea390bf3894ee5e2aaebd665a1b9470ca575f87c2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-85cfbf28-7016-4776-8fc2-2eb08a6b8347, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  6 02:24:43 np0005548731 systemd[1]: Started libpod-conmon-58a7541fa4e0172460c4bdcea390bf3894ee5e2aaebd665a1b9470ca575f87c2.scope.
Dec  6 02:24:43 np0005548731 podman[273479]: 2025-12-06 07:24:43.26002762 +0000 UTC m=+0.027071860 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec  6 02:24:43 np0005548731 systemd[1]: Started libcrun container.
Dec  6 02:24:43 np0005548731 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c5ccde7d88c1131c5d8d3720b4f531822e4a343f70a0d329529a9e08d7bd46e4/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec  6 02:24:43 np0005548731 podman[273479]: 2025-12-06 07:24:43.37741305 +0000 UTC m=+0.144457270 container init 58a7541fa4e0172460c4bdcea390bf3894ee5e2aaebd665a1b9470ca575f87c2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-85cfbf28-7016-4776-8fc2-2eb08a6b8347, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true)
Dec  6 02:24:43 np0005548731 podman[273479]: 2025-12-06 07:24:43.385966024 +0000 UTC m=+0.153010244 container start 58a7541fa4e0172460c4bdcea390bf3894ee5e2aaebd665a1b9470ca575f87c2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-85cfbf28-7016-4776-8fc2-2eb08a6b8347, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  6 02:24:43 np0005548731 neutron-haproxy-ovnmeta-85cfbf28-7016-4776-8fc2-2eb08a6b8347[273493]: [NOTICE]   (273512) : New worker (273515) forked
Dec  6 02:24:43 np0005548731 neutron-haproxy-ovnmeta-85cfbf28-7016-4776-8fc2-2eb08a6b8347[273493]: [NOTICE]   (273512) : Loading success.
Dec  6 02:24:43 np0005548731 nova_compute[232433]: 2025-12-06 07:24:43.409 232437 DEBUG nova.compute.manager [req-f7fcda78-676f-419a-aa32-b0edadf828bb req-1c3afc87-d3ae-4bd3-b757-11796f94a72d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: a6984fe7-72e6-4e80-b77b-925152e01f3f] Received event network-vif-plugged-02492be5-e171-4dea-a6fe-20c8b6f3a45e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:24:43 np0005548731 nova_compute[232433]: 2025-12-06 07:24:43.410 232437 DEBUG oslo_concurrency.lockutils [req-f7fcda78-676f-419a-aa32-b0edadf828bb req-1c3afc87-d3ae-4bd3-b757-11796f94a72d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "a6984fe7-72e6-4e80-b77b-925152e01f3f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:24:43 np0005548731 nova_compute[232433]: 2025-12-06 07:24:43.410 232437 DEBUG oslo_concurrency.lockutils [req-f7fcda78-676f-419a-aa32-b0edadf828bb req-1c3afc87-d3ae-4bd3-b757-11796f94a72d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "a6984fe7-72e6-4e80-b77b-925152e01f3f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:24:43 np0005548731 nova_compute[232433]: 2025-12-06 07:24:43.411 232437 DEBUG oslo_concurrency.lockutils [req-f7fcda78-676f-419a-aa32-b0edadf828bb req-1c3afc87-d3ae-4bd3-b757-11796f94a72d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "a6984fe7-72e6-4e80-b77b-925152e01f3f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:24:43 np0005548731 nova_compute[232433]: 2025-12-06 07:24:43.412 232437 DEBUG nova.compute.manager [req-f7fcda78-676f-419a-aa32-b0edadf828bb req-1c3afc87-d3ae-4bd3-b757-11796f94a72d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: a6984fe7-72e6-4e80-b77b-925152e01f3f] Processing event network-vif-plugged-02492be5-e171-4dea-a6fe-20c8b6f3a45e _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Dec  6 02:24:43 np0005548731 nova_compute[232433]: 2025-12-06 07:24:43.417 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:24:43 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:24:43 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000025s ======
Dec  6 02:24:43 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:24:43.708 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Dec  6 02:24:43 np0005548731 nova_compute[232433]: 2025-12-06 07:24:43.768 232437 DEBUG nova.compute.manager [None req-fa6bbbda-7b9c-4dae-babc-e050827eef98 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] [instance: a6984fe7-72e6-4e80-b77b-925152e01f3f] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec  6 02:24:43 np0005548731 nova_compute[232433]: 2025-12-06 07:24:43.769 232437 DEBUG nova.virt.driver [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Emitting event <LifecycleEvent: 1765005883.768313, a6984fe7-72e6-4e80-b77b-925152e01f3f => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 02:24:43 np0005548731 nova_compute[232433]: 2025-12-06 07:24:43.770 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: a6984fe7-72e6-4e80-b77b-925152e01f3f] VM Started (Lifecycle Event)#033[00m
Dec  6 02:24:43 np0005548731 nova_compute[232433]: 2025-12-06 07:24:43.772 232437 DEBUG nova.virt.libvirt.driver [None req-fa6bbbda-7b9c-4dae-babc-e050827eef98 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] [instance: a6984fe7-72e6-4e80-b77b-925152e01f3f] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Dec  6 02:24:43 np0005548731 nova_compute[232433]: 2025-12-06 07:24:43.776 232437 INFO nova.virt.libvirt.driver [-] [instance: a6984fe7-72e6-4e80-b77b-925152e01f3f] Instance spawned successfully.#033[00m
Dec  6 02:24:43 np0005548731 nova_compute[232433]: 2025-12-06 07:24:43.776 232437 DEBUG nova.virt.libvirt.driver [None req-fa6bbbda-7b9c-4dae-babc-e050827eef98 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] [instance: a6984fe7-72e6-4e80-b77b-925152e01f3f] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Dec  6 02:24:43 np0005548731 nova_compute[232433]: 2025-12-06 07:24:43.793 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: a6984fe7-72e6-4e80-b77b-925152e01f3f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:24:43 np0005548731 nova_compute[232433]: 2025-12-06 07:24:43.798 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: a6984fe7-72e6-4e80-b77b-925152e01f3f] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  6 02:24:43 np0005548731 nova_compute[232433]: 2025-12-06 07:24:43.803 232437 DEBUG nova.virt.libvirt.driver [None req-fa6bbbda-7b9c-4dae-babc-e050827eef98 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] [instance: a6984fe7-72e6-4e80-b77b-925152e01f3f] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 02:24:43 np0005548731 nova_compute[232433]: 2025-12-06 07:24:43.804 232437 DEBUG nova.virt.libvirt.driver [None req-fa6bbbda-7b9c-4dae-babc-e050827eef98 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] [instance: a6984fe7-72e6-4e80-b77b-925152e01f3f] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 02:24:43 np0005548731 nova_compute[232433]: 2025-12-06 07:24:43.804 232437 DEBUG nova.virt.libvirt.driver [None req-fa6bbbda-7b9c-4dae-babc-e050827eef98 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] [instance: a6984fe7-72e6-4e80-b77b-925152e01f3f] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 02:24:43 np0005548731 nova_compute[232433]: 2025-12-06 07:24:43.805 232437 DEBUG nova.virt.libvirt.driver [None req-fa6bbbda-7b9c-4dae-babc-e050827eef98 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] [instance: a6984fe7-72e6-4e80-b77b-925152e01f3f] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 02:24:43 np0005548731 nova_compute[232433]: 2025-12-06 07:24:43.805 232437 DEBUG nova.virt.libvirt.driver [None req-fa6bbbda-7b9c-4dae-babc-e050827eef98 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] [instance: a6984fe7-72e6-4e80-b77b-925152e01f3f] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 02:24:43 np0005548731 nova_compute[232433]: 2025-12-06 07:24:43.806 232437 DEBUG nova.virt.libvirt.driver [None req-fa6bbbda-7b9c-4dae-babc-e050827eef98 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] [instance: a6984fe7-72e6-4e80-b77b-925152e01f3f] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 02:24:43 np0005548731 nova_compute[232433]: 2025-12-06 07:24:43.859 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: a6984fe7-72e6-4e80-b77b-925152e01f3f] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  6 02:24:43 np0005548731 nova_compute[232433]: 2025-12-06 07:24:43.860 232437 DEBUG nova.virt.driver [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Emitting event <LifecycleEvent: 1765005883.768599, a6984fe7-72e6-4e80-b77b-925152e01f3f => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 02:24:43 np0005548731 nova_compute[232433]: 2025-12-06 07:24:43.860 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: a6984fe7-72e6-4e80-b77b-925152e01f3f] VM Paused (Lifecycle Event)#033[00m
Dec  6 02:24:43 np0005548731 nova_compute[232433]: 2025-12-06 07:24:43.937 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: a6984fe7-72e6-4e80-b77b-925152e01f3f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:24:43 np0005548731 nova_compute[232433]: 2025-12-06 07:24:43.940 232437 DEBUG nova.virt.driver [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Emitting event <LifecycleEvent: 1765005883.771613, a6984fe7-72e6-4e80-b77b-925152e01f3f => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 02:24:43 np0005548731 nova_compute[232433]: 2025-12-06 07:24:43.940 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: a6984fe7-72e6-4e80-b77b-925152e01f3f] VM Resumed (Lifecycle Event)#033[00m
Dec  6 02:24:43 np0005548731 nova_compute[232433]: 2025-12-06 07:24:43.961 232437 DEBUG nova.network.neutron [None req-a50a0eee-6b1d-4713-b29f-72595519baa1 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] [instance: 5ea73f51-d224-41ed-892b-a137d9985e73] Updating instance_info_cache with network_info: [{"id": "e61e57ee-1e76-4d23-ab33-ab23cf69ba70", "address": "fa:16:3e:d9:ba:09", "network": {"id": "3935518d-ad39-4c36-90ae-f8fe979eafc5", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-32965635-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "c5036a5cbf7d44fd809d00942afd76e6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape61e57ee-1e", "ovs_interfaceid": "e61e57ee-1e76-4d23-ab33-ab23cf69ba70", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 02:24:43 np0005548731 nova_compute[232433]: 2025-12-06 07:24:43.965 232437 INFO nova.compute.manager [None req-fa6bbbda-7b9c-4dae-babc-e050827eef98 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] [instance: a6984fe7-72e6-4e80-b77b-925152e01f3f] Took 14.04 seconds to spawn the instance on the hypervisor.#033[00m
Dec  6 02:24:43 np0005548731 nova_compute[232433]: 2025-12-06 07:24:43.965 232437 DEBUG nova.compute.manager [None req-fa6bbbda-7b9c-4dae-babc-e050827eef98 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] [instance: a6984fe7-72e6-4e80-b77b-925152e01f3f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:24:43 np0005548731 nova_compute[232433]: 2025-12-06 07:24:43.970 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: a6984fe7-72e6-4e80-b77b-925152e01f3f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:24:43 np0005548731 nova_compute[232433]: 2025-12-06 07:24:43.972 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: a6984fe7-72e6-4e80-b77b-925152e01f3f] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  6 02:24:44 np0005548731 nova_compute[232433]: 2025-12-06 07:24:44.015 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: a6984fe7-72e6-4e80-b77b-925152e01f3f] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  6 02:24:44 np0005548731 nova_compute[232433]: 2025-12-06 07:24:44.024 232437 DEBUG oslo_concurrency.lockutils [None req-a50a0eee-6b1d-4713-b29f-72595519baa1 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] Releasing lock "refresh_cache-5ea73f51-d224-41ed-892b-a137d9985e73" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 02:24:44 np0005548731 nova_compute[232433]: 2025-12-06 07:24:44.045 232437 INFO nova.compute.manager [None req-fa6bbbda-7b9c-4dae-babc-e050827eef98 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] [instance: a6984fe7-72e6-4e80-b77b-925152e01f3f] Took 15.15 seconds to build instance.#033[00m
Dec  6 02:24:44 np0005548731 nova_compute[232433]: 2025-12-06 07:24:44.067 232437 DEBUG oslo_concurrency.lockutils [None req-fa6bbbda-7b9c-4dae-babc-e050827eef98 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Lock "a6984fe7-72e6-4e80-b77b-925152e01f3f" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 15.250s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:24:44 np0005548731 nova_compute[232433]: 2025-12-06 07:24:44.266 232437 DEBUG nova.virt.libvirt.driver [None req-a50a0eee-6b1d-4713-b29f-72595519baa1 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] [instance: 5ea73f51-d224-41ed-892b-a137d9985e73] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Dec  6 02:24:44 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 02:24:44 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 02:24:45 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:24:45 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:24:45 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:24:45.051 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:24:45 np0005548731 nova_compute[232433]: 2025-12-06 07:24:45.488 232437 DEBUG nova.objects.instance [None req-ae01fe82-2958-47eb-99f5-0337e33caf32 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Lazy-loading 'pci_devices' on Instance uuid a6984fe7-72e6-4e80-b77b-925152e01f3f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:24:45 np0005548731 nova_compute[232433]: 2025-12-06 07:24:45.514 232437 DEBUG nova.virt.driver [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Emitting event <LifecycleEvent: 1765005885.5142896, a6984fe7-72e6-4e80-b77b-925152e01f3f => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 02:24:45 np0005548731 nova_compute[232433]: 2025-12-06 07:24:45.515 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: a6984fe7-72e6-4e80-b77b-925152e01f3f] VM Paused (Lifecycle Event)#033[00m
Dec  6 02:24:45 np0005548731 nova_compute[232433]: 2025-12-06 07:24:45.528 232437 DEBUG nova.compute.manager [req-6df0e5c3-0c18-43ab-9995-7c1a820e7c1f req-2299e550-5f89-4518-95f3-31861e57cd7d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: a6984fe7-72e6-4e80-b77b-925152e01f3f] Received event network-vif-plugged-02492be5-e171-4dea-a6fe-20c8b6f3a45e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:24:45 np0005548731 nova_compute[232433]: 2025-12-06 07:24:45.529 232437 DEBUG oslo_concurrency.lockutils [req-6df0e5c3-0c18-43ab-9995-7c1a820e7c1f req-2299e550-5f89-4518-95f3-31861e57cd7d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "a6984fe7-72e6-4e80-b77b-925152e01f3f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:24:45 np0005548731 nova_compute[232433]: 2025-12-06 07:24:45.530 232437 DEBUG oslo_concurrency.lockutils [req-6df0e5c3-0c18-43ab-9995-7c1a820e7c1f req-2299e550-5f89-4518-95f3-31861e57cd7d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "a6984fe7-72e6-4e80-b77b-925152e01f3f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:24:45 np0005548731 nova_compute[232433]: 2025-12-06 07:24:45.530 232437 DEBUG oslo_concurrency.lockutils [req-6df0e5c3-0c18-43ab-9995-7c1a820e7c1f req-2299e550-5f89-4518-95f3-31861e57cd7d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "a6984fe7-72e6-4e80-b77b-925152e01f3f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:24:45 np0005548731 nova_compute[232433]: 2025-12-06 07:24:45.530 232437 DEBUG nova.compute.manager [req-6df0e5c3-0c18-43ab-9995-7c1a820e7c1f req-2299e550-5f89-4518-95f3-31861e57cd7d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: a6984fe7-72e6-4e80-b77b-925152e01f3f] No waiting events found dispatching network-vif-plugged-02492be5-e171-4dea-a6fe-20c8b6f3a45e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 02:24:45 np0005548731 nova_compute[232433]: 2025-12-06 07:24:45.530 232437 WARNING nova.compute.manager [req-6df0e5c3-0c18-43ab-9995-7c1a820e7c1f req-2299e550-5f89-4518-95f3-31861e57cd7d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: a6984fe7-72e6-4e80-b77b-925152e01f3f] Received unexpected event network-vif-plugged-02492be5-e171-4dea-a6fe-20c8b6f3a45e for instance with vm_state active and task_state suspending.#033[00m
Dec  6 02:24:45 np0005548731 nova_compute[232433]: 2025-12-06 07:24:45.539 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: a6984fe7-72e6-4e80-b77b-925152e01f3f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:24:45 np0005548731 nova_compute[232433]: 2025-12-06 07:24:45.544 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: a6984fe7-72e6-4e80-b77b-925152e01f3f] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: suspending, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  6 02:24:45 np0005548731 nova_compute[232433]: 2025-12-06 07:24:45.572 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: a6984fe7-72e6-4e80-b77b-925152e01f3f] During sync_power_state the instance has a pending task (suspending). Skip.#033[00m
Dec  6 02:24:45 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:24:45 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:24:45 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:24:45.711 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:24:46 np0005548731 kernel: tap02492be5-e1 (unregistering): left promiscuous mode
Dec  6 02:24:46 np0005548731 NetworkManager[49182]: <info>  [1765005886.9414] device (tap02492be5-e1): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec  6 02:24:46 np0005548731 ovn_controller[133927]: 2025-12-06T07:24:46Z|00390|binding|INFO|Releasing lport 02492be5-e171-4dea-a6fe-20c8b6f3a45e from this chassis (sb_readonly=0)
Dec  6 02:24:46 np0005548731 ovn_controller[133927]: 2025-12-06T07:24:46Z|00391|binding|INFO|Setting lport 02492be5-e171-4dea-a6fe-20c8b6f3a45e down in Southbound
Dec  6 02:24:46 np0005548731 ovn_controller[133927]: 2025-12-06T07:24:46Z|00392|binding|INFO|Removing iface tap02492be5-e1 ovn-installed in OVS
Dec  6 02:24:46 np0005548731 nova_compute[232433]: 2025-12-06 07:24:46.948 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:24:46 np0005548731 nova_compute[232433]: 2025-12-06 07:24:46.952 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:24:46 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:24:46.955 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:0a:25:68 10.100.0.6'], port_security=['fa:16:3e:0a:25:68 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'a6984fe7-72e6-4e80-b77b-925152e01f3f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-85cfbf28-7016-4776-8fc2-2eb08a6b8347', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c6d2f50c0db54315bfa96a24511dda90', 'neutron:revision_number': '4', 'neutron:security_group_ids': '859a0bc3-7542-4622-9180-7c67df8e913c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e462675c-3feb-4b24-a87b-c5ebd92a4b8b, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], logical_port=02492be5-e171-4dea-a6fe-20c8b6f3a45e) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 02:24:46 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:24:46.957 143965 INFO neutron.agent.ovn.metadata.agent [-] Port 02492be5-e171-4dea-a6fe-20c8b6f3a45e in datapath 85cfbf28-7016-4776-8fc2-2eb08a6b8347 unbound from our chassis#033[00m
Dec  6 02:24:46 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:24:46.959 143965 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 85cfbf28-7016-4776-8fc2-2eb08a6b8347, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec  6 02:24:46 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:24:46.960 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[5d6d16ac-cdaf-4499-b5b6-42b48564b4da]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:24:46 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:24:46.961 143965 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-85cfbf28-7016-4776-8fc2-2eb08a6b8347 namespace which is not needed anymore#033[00m
Dec  6 02:24:46 np0005548731 nova_compute[232433]: 2025-12-06 07:24:46.973 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:24:46 np0005548731 systemd[1]: machine-qemu\x2d42\x2dinstance\x2d00000068.scope: Deactivated successfully.
Dec  6 02:24:46 np0005548731 systemd[1]: machine-qemu\x2d42\x2dinstance\x2d00000068.scope: Consumed 2.367s CPU time.
Dec  6 02:24:46 np0005548731 systemd-machined[195355]: Machine qemu-42-instance-00000068 terminated.
Dec  6 02:24:47 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:24:47 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000025s ======
Dec  6 02:24:47 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:24:47.053 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Dec  6 02:24:47 np0005548731 neutron-haproxy-ovnmeta-85cfbf28-7016-4776-8fc2-2eb08a6b8347[273493]: [NOTICE]   (273512) : haproxy version is 2.8.14-c23fe91
Dec  6 02:24:47 np0005548731 neutron-haproxy-ovnmeta-85cfbf28-7016-4776-8fc2-2eb08a6b8347[273493]: [NOTICE]   (273512) : path to executable is /usr/sbin/haproxy
Dec  6 02:24:47 np0005548731 neutron-haproxy-ovnmeta-85cfbf28-7016-4776-8fc2-2eb08a6b8347[273493]: [WARNING]  (273512) : Exiting Master process...
Dec  6 02:24:47 np0005548731 neutron-haproxy-ovnmeta-85cfbf28-7016-4776-8fc2-2eb08a6b8347[273493]: [ALERT]    (273512) : Current worker (273515) exited with code 143 (Terminated)
Dec  6 02:24:47 np0005548731 neutron-haproxy-ovnmeta-85cfbf28-7016-4776-8fc2-2eb08a6b8347[273493]: [WARNING]  (273512) : All workers exited. Exiting... (0)
Dec  6 02:24:47 np0005548731 systemd[1]: libpod-58a7541fa4e0172460c4bdcea390bf3894ee5e2aaebd665a1b9470ca575f87c2.scope: Deactivated successfully.
Dec  6 02:24:47 np0005548731 podman[273657]: 2025-12-06 07:24:47.09615277 +0000 UTC m=+0.046103925 container died 58a7541fa4e0172460c4bdcea390bf3894ee5e2aaebd665a1b9470ca575f87c2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-85cfbf28-7016-4776-8fc2-2eb08a6b8347, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Dec  6 02:24:47 np0005548731 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-58a7541fa4e0172460c4bdcea390bf3894ee5e2aaebd665a1b9470ca575f87c2-userdata-shm.mount: Deactivated successfully.
Dec  6 02:24:47 np0005548731 systemd[1]: var-lib-containers-storage-overlay-c5ccde7d88c1131c5d8d3720b4f531822e4a343f70a0d329529a9e08d7bd46e4-merged.mount: Deactivated successfully.
Dec  6 02:24:47 np0005548731 podman[273657]: 2025-12-06 07:24:47.13686731 +0000 UTC m=+0.086818475 container cleanup 58a7541fa4e0172460c4bdcea390bf3894ee5e2aaebd665a1b9470ca575f87c2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-85cfbf28-7016-4776-8fc2-2eb08a6b8347, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Dec  6 02:24:47 np0005548731 systemd[1]: libpod-conmon-58a7541fa4e0172460c4bdcea390bf3894ee5e2aaebd665a1b9470ca575f87c2.scope: Deactivated successfully.
Dec  6 02:24:47 np0005548731 nova_compute[232433]: 2025-12-06 07:24:47.182 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:24:47 np0005548731 nova_compute[232433]: 2025-12-06 07:24:47.191 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:24:47 np0005548731 nova_compute[232433]: 2025-12-06 07:24:47.196 232437 DEBUG nova.compute.manager [None req-ae01fe82-2958-47eb-99f5-0337e33caf32 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] [instance: a6984fe7-72e6-4e80-b77b-925152e01f3f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:24:47 np0005548731 podman[273683]: 2025-12-06 07:24:47.222218007 +0000 UTC m=+0.061067041 container remove 58a7541fa4e0172460c4bdcea390bf3894ee5e2aaebd665a1b9470ca575f87c2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-85cfbf28-7016-4776-8fc2-2eb08a6b8347, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS)
Dec  6 02:24:47 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:24:47.233 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[357b537f-555b-4795-aa54-e31c7da9cf9a]: (4, ('Sat Dec  6 07:24:47 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-85cfbf28-7016-4776-8fc2-2eb08a6b8347 (58a7541fa4e0172460c4bdcea390bf3894ee5e2aaebd665a1b9470ca575f87c2)\n58a7541fa4e0172460c4bdcea390bf3894ee5e2aaebd665a1b9470ca575f87c2\nSat Dec  6 07:24:47 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-85cfbf28-7016-4776-8fc2-2eb08a6b8347 (58a7541fa4e0172460c4bdcea390bf3894ee5e2aaebd665a1b9470ca575f87c2)\n58a7541fa4e0172460c4bdcea390bf3894ee5e2aaebd665a1b9470ca575f87c2\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:24:47 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:24:47.234 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[fe0a428f-1596-424f-8cc3-9edf708e0885]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:24:47 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:24:47.235 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap85cfbf28-70, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:24:47 np0005548731 kernel: tap85cfbf28-70: left promiscuous mode
Dec  6 02:24:47 np0005548731 nova_compute[232433]: 2025-12-06 07:24:47.239 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:24:47 np0005548731 nova_compute[232433]: 2025-12-06 07:24:47.256 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:24:47 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:24:47.259 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[9b4eb24f-6cae-4a4c-91ed-152352d2930b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:24:47 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:24:47.287 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[32bc1f45-425e-4eea-be5e-1811ee42fe0c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:24:47 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:24:47.288 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[81580246-2b54-4358-9fae-46e60c17abe7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:24:47 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:24:47.307 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[7f0b949c-e930-46b4-a160-496e8d71235d]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 614468, 'reachable_time': 16090, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 273707, 'error': None, 'target': 'ovnmeta-85cfbf28-7016-4776-8fc2-2eb08a6b8347', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:24:47 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:24:47.309 144079 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-85cfbf28-7016-4776-8fc2-2eb08a6b8347 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Dec  6 02:24:47 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:24:47.309 144079 DEBUG oslo.privsep.daemon [-] privsep: reply[f1686266-1043-45c3-8a40-d2b914664e3f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:24:47 np0005548731 systemd[1]: run-netns-ovnmeta\x2d85cfbf28\x2d7016\x2d4776\x2d8fc2\x2d2eb08a6b8347.mount: Deactivated successfully.
Dec  6 02:24:47 np0005548731 nova_compute[232433]: 2025-12-06 07:24:47.338 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:24:47 np0005548731 nova_compute[232433]: 2025-12-06 07:24:47.638 232437 DEBUG nova.compute.manager [req-2304d6f3-56de-4aa6-99e7-aeca8585cda4 req-37bf3b82-3def-4edb-94f8-0ba2ddc0dfc4 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: a6984fe7-72e6-4e80-b77b-925152e01f3f] Received event network-vif-unplugged-02492be5-e171-4dea-a6fe-20c8b6f3a45e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:24:47 np0005548731 nova_compute[232433]: 2025-12-06 07:24:47.639 232437 DEBUG oslo_concurrency.lockutils [req-2304d6f3-56de-4aa6-99e7-aeca8585cda4 req-37bf3b82-3def-4edb-94f8-0ba2ddc0dfc4 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "a6984fe7-72e6-4e80-b77b-925152e01f3f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:24:47 np0005548731 nova_compute[232433]: 2025-12-06 07:24:47.640 232437 DEBUG oslo_concurrency.lockutils [req-2304d6f3-56de-4aa6-99e7-aeca8585cda4 req-37bf3b82-3def-4edb-94f8-0ba2ddc0dfc4 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "a6984fe7-72e6-4e80-b77b-925152e01f3f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:24:47 np0005548731 nova_compute[232433]: 2025-12-06 07:24:47.640 232437 DEBUG oslo_concurrency.lockutils [req-2304d6f3-56de-4aa6-99e7-aeca8585cda4 req-37bf3b82-3def-4edb-94f8-0ba2ddc0dfc4 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "a6984fe7-72e6-4e80-b77b-925152e01f3f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:24:47 np0005548731 nova_compute[232433]: 2025-12-06 07:24:47.640 232437 DEBUG nova.compute.manager [req-2304d6f3-56de-4aa6-99e7-aeca8585cda4 req-37bf3b82-3def-4edb-94f8-0ba2ddc0dfc4 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: a6984fe7-72e6-4e80-b77b-925152e01f3f] No waiting events found dispatching network-vif-unplugged-02492be5-e171-4dea-a6fe-20c8b6f3a45e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 02:24:47 np0005548731 nova_compute[232433]: 2025-12-06 07:24:47.641 232437 WARNING nova.compute.manager [req-2304d6f3-56de-4aa6-99e7-aeca8585cda4 req-37bf3b82-3def-4edb-94f8-0ba2ddc0dfc4 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: a6984fe7-72e6-4e80-b77b-925152e01f3f] Received unexpected event network-vif-unplugged-02492be5-e171-4dea-a6fe-20c8b6f3a45e for instance with vm_state suspended and task_state None.#033[00m
Dec  6 02:24:47 np0005548731 nova_compute[232433]: 2025-12-06 07:24:47.641 232437 DEBUG nova.compute.manager [req-2304d6f3-56de-4aa6-99e7-aeca8585cda4 req-37bf3b82-3def-4edb-94f8-0ba2ddc0dfc4 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: a6984fe7-72e6-4e80-b77b-925152e01f3f] Received event network-vif-plugged-02492be5-e171-4dea-a6fe-20c8b6f3a45e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:24:47 np0005548731 nova_compute[232433]: 2025-12-06 07:24:47.641 232437 DEBUG oslo_concurrency.lockutils [req-2304d6f3-56de-4aa6-99e7-aeca8585cda4 req-37bf3b82-3def-4edb-94f8-0ba2ddc0dfc4 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "a6984fe7-72e6-4e80-b77b-925152e01f3f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:24:47 np0005548731 nova_compute[232433]: 2025-12-06 07:24:47.642 232437 DEBUG oslo_concurrency.lockutils [req-2304d6f3-56de-4aa6-99e7-aeca8585cda4 req-37bf3b82-3def-4edb-94f8-0ba2ddc0dfc4 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "a6984fe7-72e6-4e80-b77b-925152e01f3f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:24:47 np0005548731 nova_compute[232433]: 2025-12-06 07:24:47.642 232437 DEBUG oslo_concurrency.lockutils [req-2304d6f3-56de-4aa6-99e7-aeca8585cda4 req-37bf3b82-3def-4edb-94f8-0ba2ddc0dfc4 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "a6984fe7-72e6-4e80-b77b-925152e01f3f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:24:47 np0005548731 nova_compute[232433]: 2025-12-06 07:24:47.642 232437 DEBUG nova.compute.manager [req-2304d6f3-56de-4aa6-99e7-aeca8585cda4 req-37bf3b82-3def-4edb-94f8-0ba2ddc0dfc4 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: a6984fe7-72e6-4e80-b77b-925152e01f3f] No waiting events found dispatching network-vif-plugged-02492be5-e171-4dea-a6fe-20c8b6f3a45e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 02:24:47 np0005548731 nova_compute[232433]: 2025-12-06 07:24:47.643 232437 WARNING nova.compute.manager [req-2304d6f3-56de-4aa6-99e7-aeca8585cda4 req-37bf3b82-3def-4edb-94f8-0ba2ddc0dfc4 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: a6984fe7-72e6-4e80-b77b-925152e01f3f] Received unexpected event network-vif-plugged-02492be5-e171-4dea-a6fe-20c8b6f3a45e for instance with vm_state suspended and task_state None.#033[00m
Dec  6 02:24:47 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:24:47 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:24:47 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:24:47.714 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:24:47 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e264 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:24:48 np0005548731 nova_compute[232433]: 2025-12-06 07:24:48.419 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:24:49 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:24:49 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:24:49 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:24:49.056 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:24:49 np0005548731 nova_compute[232433]: 2025-12-06 07:24:49.618 232437 DEBUG oslo_concurrency.lockutils [None req-3e9ce8aa-b5a9-4e1f-9c67-a875d561589b d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Acquiring lock "a6984fe7-72e6-4e80-b77b-925152e01f3f" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:24:49 np0005548731 nova_compute[232433]: 2025-12-06 07:24:49.619 232437 DEBUG oslo_concurrency.lockutils [None req-3e9ce8aa-b5a9-4e1f-9c67-a875d561589b d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Lock "a6984fe7-72e6-4e80-b77b-925152e01f3f" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:24:49 np0005548731 nova_compute[232433]: 2025-12-06 07:24:49.619 232437 DEBUG oslo_concurrency.lockutils [None req-3e9ce8aa-b5a9-4e1f-9c67-a875d561589b d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Acquiring lock "a6984fe7-72e6-4e80-b77b-925152e01f3f-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:24:49 np0005548731 nova_compute[232433]: 2025-12-06 07:24:49.619 232437 DEBUG oslo_concurrency.lockutils [None req-3e9ce8aa-b5a9-4e1f-9c67-a875d561589b d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Lock "a6984fe7-72e6-4e80-b77b-925152e01f3f-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:24:49 np0005548731 nova_compute[232433]: 2025-12-06 07:24:49.620 232437 DEBUG oslo_concurrency.lockutils [None req-3e9ce8aa-b5a9-4e1f-9c67-a875d561589b d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Lock "a6984fe7-72e6-4e80-b77b-925152e01f3f-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:24:49 np0005548731 nova_compute[232433]: 2025-12-06 07:24:49.621 232437 INFO nova.compute.manager [None req-3e9ce8aa-b5a9-4e1f-9c67-a875d561589b d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] [instance: a6984fe7-72e6-4e80-b77b-925152e01f3f] Terminating instance#033[00m
Dec  6 02:24:49 np0005548731 nova_compute[232433]: 2025-12-06 07:24:49.622 232437 DEBUG nova.compute.manager [None req-3e9ce8aa-b5a9-4e1f-9c67-a875d561589b d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] [instance: a6984fe7-72e6-4e80-b77b-925152e01f3f] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec  6 02:24:49 np0005548731 nova_compute[232433]: 2025-12-06 07:24:49.628 232437 INFO nova.virt.libvirt.driver [-] [instance: a6984fe7-72e6-4e80-b77b-925152e01f3f] Instance destroyed successfully.#033[00m
Dec  6 02:24:49 np0005548731 nova_compute[232433]: 2025-12-06 07:24:49.629 232437 DEBUG nova.objects.instance [None req-3e9ce8aa-b5a9-4e1f-9c67-a875d561589b d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Lazy-loading 'resources' on Instance uuid a6984fe7-72e6-4e80-b77b-925152e01f3f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:24:49 np0005548731 nova_compute[232433]: 2025-12-06 07:24:49.642 232437 DEBUG nova.virt.libvirt.vif [None req-3e9ce8aa-b5a9-4e1f-9c67-a875d561589b d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T07:24:27Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-1094086696',display_name='tempest-DeleteServersTestJSON-server-1094086696',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-1094086696',id=104,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-06T07:24:43Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='c6d2f50c0db54315bfa96a24511dda90',ramdisk_id='',reservation_id='r-zdeyzdaj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image
_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-DeleteServersTestJSON-1764569218',owner_user_name='tempest-DeleteServersTestJSON-1764569218-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T07:24:47Z,user_data=None,user_id='d966fefcb38a45219b9cc637c46a3d62',uuid=a6984fe7-72e6-4e80-b77b-925152e01f3f,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='suspended') vif={"id": "02492be5-e171-4dea-a6fe-20c8b6f3a45e", "address": "fa:16:3e:0a:25:68", "network": {"id": "85cfbf28-7016-4776-8fc2-2eb08a6b8347", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-855821425-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c6d2f50c0db54315bfa96a24511dda90", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap02492be5-e1", "ovs_interfaceid": "02492be5-e171-4dea-a6fe-20c8b6f3a45e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec  6 02:24:49 np0005548731 nova_compute[232433]: 2025-12-06 07:24:49.642 232437 DEBUG nova.network.os_vif_util [None req-3e9ce8aa-b5a9-4e1f-9c67-a875d561589b d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Converting VIF {"id": "02492be5-e171-4dea-a6fe-20c8b6f3a45e", "address": "fa:16:3e:0a:25:68", "network": {"id": "85cfbf28-7016-4776-8fc2-2eb08a6b8347", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-855821425-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c6d2f50c0db54315bfa96a24511dda90", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap02492be5-e1", "ovs_interfaceid": "02492be5-e171-4dea-a6fe-20c8b6f3a45e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 02:24:49 np0005548731 nova_compute[232433]: 2025-12-06 07:24:49.643 232437 DEBUG nova.network.os_vif_util [None req-3e9ce8aa-b5a9-4e1f-9c67-a875d561589b d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:0a:25:68,bridge_name='br-int',has_traffic_filtering=True,id=02492be5-e171-4dea-a6fe-20c8b6f3a45e,network=Network(85cfbf28-7016-4776-8fc2-2eb08a6b8347),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap02492be5-e1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 02:24:49 np0005548731 nova_compute[232433]: 2025-12-06 07:24:49.644 232437 DEBUG os_vif [None req-3e9ce8aa-b5a9-4e1f-9c67-a875d561589b d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:0a:25:68,bridge_name='br-int',has_traffic_filtering=True,id=02492be5-e171-4dea-a6fe-20c8b6f3a45e,network=Network(85cfbf28-7016-4776-8fc2-2eb08a6b8347),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap02492be5-e1') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec  6 02:24:49 np0005548731 nova_compute[232433]: 2025-12-06 07:24:49.645 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:24:49 np0005548731 nova_compute[232433]: 2025-12-06 07:24:49.646 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap02492be5-e1, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:24:49 np0005548731 nova_compute[232433]: 2025-12-06 07:24:49.647 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:24:49 np0005548731 nova_compute[232433]: 2025-12-06 07:24:49.650 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  6 02:24:49 np0005548731 nova_compute[232433]: 2025-12-06 07:24:49.652 232437 INFO os_vif [None req-3e9ce8aa-b5a9-4e1f-9c67-a875d561589b d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:0a:25:68,bridge_name='br-int',has_traffic_filtering=True,id=02492be5-e171-4dea-a6fe-20c8b6f3a45e,network=Network(85cfbf28-7016-4776-8fc2-2eb08a6b8347),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap02492be5-e1')#033[00m
Dec  6 02:24:49 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:24:49 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:24:49 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:24:49.717 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:24:51 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:24:51 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:24:51 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:24:51.058 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:24:51 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:24:51.170 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=38, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:ec:b3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '32:72:e7:89:e0:7d'}, ipsec=False) old=SB_Global(nb_cfg=37) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 02:24:51 np0005548731 nova_compute[232433]: 2025-12-06 07:24:51.170 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:24:51 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:24:51.172 143965 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec  6 02:24:51 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:24:51 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:24:51 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:24:51.721 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:24:52 np0005548731 nova_compute[232433]: 2025-12-06 07:24:52.341 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:24:52 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e264 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:24:53 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:24:53 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:24:53 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:24:53.060 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:24:53 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:24:53.175 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=9f96b960-b4f2-40bd-ae99-08121f5e8b78, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '38'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:24:53 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:24:53 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:24:53 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:24:53.724 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:24:54 np0005548731 nova_compute[232433]: 2025-12-06 07:24:54.312 232437 DEBUG nova.virt.libvirt.driver [None req-a50a0eee-6b1d-4713-b29f-72595519baa1 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] [instance: 5ea73f51-d224-41ed-892b-a137d9985e73] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101#033[00m
Dec  6 02:24:54 np0005548731 nova_compute[232433]: 2025-12-06 07:24:54.647 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:24:55 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:24:55 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:24:55 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:24:55.062 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:24:55 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:24:55 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:24:55 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:24:55.727 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:24:57 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:24:57 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:24:57 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:24:57.065 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:24:57 np0005548731 nova_compute[232433]: 2025-12-06 07:24:57.343 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:24:57 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:24:57 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:24:57 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:24:57.731 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:24:57 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e264 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:24:58 np0005548731 nova_compute[232433]: 2025-12-06 07:24:58.100 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:24:59 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:24:59 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:24:59 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:24:59.068 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:24:59 np0005548731 nova_compute[232433]: 2025-12-06 07:24:59.648 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:24:59 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:24:59 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:24:59 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:24:59.734 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:25:00 np0005548731 nova_compute[232433]: 2025-12-06 07:25:00.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:25:00 np0005548731 nova_compute[232433]: 2025-12-06 07:25:00.105 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  6 02:25:00 np0005548731 nova_compute[232433]: 2025-12-06 07:25:00.105 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  6 02:25:00 np0005548731 nova_compute[232433]: 2025-12-06 07:25:00.128 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] [instance: a6984fe7-72e6-4e80-b77b-925152e01f3f] Skipping network cache update for instance because it is being deleted. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9875#033[00m
Dec  6 02:25:00 np0005548731 nova_compute[232433]: 2025-12-06 07:25:00.128 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Acquiring lock "refresh_cache-5ea73f51-d224-41ed-892b-a137d9985e73" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 02:25:00 np0005548731 nova_compute[232433]: 2025-12-06 07:25:00.129 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Acquired lock "refresh_cache-5ea73f51-d224-41ed-892b-a137d9985e73" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 02:25:00 np0005548731 nova_compute[232433]: 2025-12-06 07:25:00.129 232437 DEBUG nova.network.neutron [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] [instance: 5ea73f51-d224-41ed-892b-a137d9985e73] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Dec  6 02:25:00 np0005548731 nova_compute[232433]: 2025-12-06 07:25:00.129 232437 DEBUG nova.objects.instance [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lazy-loading 'info_cache' on Instance uuid 5ea73f51-d224-41ed-892b-a137d9985e73 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:25:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:25:00.866 143965 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:25:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:25:00.867 143965 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:25:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:25:00.867 143965 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:25:01 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:25:01 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:25:01 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:25:01.070 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:25:01 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:25:01 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:25:01 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:25:01.736 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:25:01 np0005548731 nova_compute[232433]: 2025-12-06 07:25:01.842 232437 DEBUG nova.network.neutron [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] [instance: 5ea73f51-d224-41ed-892b-a137d9985e73] Updating instance_info_cache with network_info: [{"id": "e61e57ee-1e76-4d23-ab33-ab23cf69ba70", "address": "fa:16:3e:d9:ba:09", "network": {"id": "3935518d-ad39-4c36-90ae-f8fe979eafc5", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-32965635-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "c5036a5cbf7d44fd809d00942afd76e6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape61e57ee-1e", "ovs_interfaceid": "e61e57ee-1e76-4d23-ab33-ab23cf69ba70", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 02:25:01 np0005548731 nova_compute[232433]: 2025-12-06 07:25:01.866 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Releasing lock "refresh_cache-5ea73f51-d224-41ed-892b-a137d9985e73" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 02:25:01 np0005548731 nova_compute[232433]: 2025-12-06 07:25:01.866 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] [instance: 5ea73f51-d224-41ed-892b-a137d9985e73] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Dec  6 02:25:01 np0005548731 nova_compute[232433]: 2025-12-06 07:25:01.866 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:25:01 np0005548731 nova_compute[232433]: 2025-12-06 07:25:01.867 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:25:01 np0005548731 nova_compute[232433]: 2025-12-06 07:25:01.867 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  6 02:25:02 np0005548731 nova_compute[232433]: 2025-12-06 07:25:02.196 232437 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765005887.194474, a6984fe7-72e6-4e80-b77b-925152e01f3f => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 02:25:02 np0005548731 nova_compute[232433]: 2025-12-06 07:25:02.197 232437 INFO nova.compute.manager [-] [instance: a6984fe7-72e6-4e80-b77b-925152e01f3f] VM Stopped (Lifecycle Event)#033[00m
Dec  6 02:25:02 np0005548731 nova_compute[232433]: 2025-12-06 07:25:02.236 232437 DEBUG nova.compute.manager [None req-24bb905d-54ef-4fd3-86c4-f064db57742c - - - - - -] [instance: a6984fe7-72e6-4e80-b77b-925152e01f3f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:25:02 np0005548731 nova_compute[232433]: 2025-12-06 07:25:02.239 232437 DEBUG nova.compute.manager [None req-24bb905d-54ef-4fd3-86c4-f064db57742c - - - - - -] [instance: a6984fe7-72e6-4e80-b77b-925152e01f3f] Synchronizing instance power state after lifecycle event "Stopped"; current vm_state: suspended, current task_state: deleting, current DB power_state: 4, VM power_state: 4 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  6 02:25:02 np0005548731 nova_compute[232433]: 2025-12-06 07:25:02.261 232437 INFO nova.compute.manager [None req-24bb905d-54ef-4fd3-86c4-f064db57742c - - - - - -] [instance: a6984fe7-72e6-4e80-b77b-925152e01f3f] During sync_power_state the instance has a pending task (deleting). Skip.#033[00m
Dec  6 02:25:02 np0005548731 nova_compute[232433]: 2025-12-06 07:25:02.344 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:25:02 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e264 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:25:03 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:25:03 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:25:03 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:25:03.071 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:25:03 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:25:03 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:25:03 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:25:03.739 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:25:04 np0005548731 nova_compute[232433]: 2025-12-06 07:25:04.105 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:25:04 np0005548731 nova_compute[232433]: 2025-12-06 07:25:04.652 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:25:05 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:25:05 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:25:05 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:25:05.074 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:25:05 np0005548731 nova_compute[232433]: 2025-12-06 07:25:05.103 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:25:05 np0005548731 nova_compute[232433]: 2025-12-06 07:25:05.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:25:05 np0005548731 nova_compute[232433]: 2025-12-06 07:25:05.355 232437 DEBUG nova.virt.libvirt.driver [None req-a50a0eee-6b1d-4713-b29f-72595519baa1 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] [instance: 5ea73f51-d224-41ed-892b-a137d9985e73] Instance in state 1 after 21 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101#033[00m
Dec  6 02:25:05 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:25:05 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:25:05 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:25:05.740 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:25:05 np0005548731 nova_compute[232433]: 2025-12-06 07:25:05.831 232437 INFO nova.virt.libvirt.driver [None req-3e9ce8aa-b5a9-4e1f-9c67-a875d561589b d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] [instance: a6984fe7-72e6-4e80-b77b-925152e01f3f] Deleting instance files /var/lib/nova/instances/a6984fe7-72e6-4e80-b77b-925152e01f3f_del#033[00m
Dec  6 02:25:05 np0005548731 nova_compute[232433]: 2025-12-06 07:25:05.833 232437 INFO nova.virt.libvirt.driver [None req-3e9ce8aa-b5a9-4e1f-9c67-a875d561589b d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] [instance: a6984fe7-72e6-4e80-b77b-925152e01f3f] Deletion of /var/lib/nova/instances/a6984fe7-72e6-4e80-b77b-925152e01f3f_del complete#033[00m
Dec  6 02:25:05 np0005548731 nova_compute[232433]: 2025-12-06 07:25:05.903 232437 INFO nova.compute.manager [None req-3e9ce8aa-b5a9-4e1f-9c67-a875d561589b d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] [instance: a6984fe7-72e6-4e80-b77b-925152e01f3f] Took 16.28 seconds to destroy the instance on the hypervisor.#033[00m
Dec  6 02:25:05 np0005548731 nova_compute[232433]: 2025-12-06 07:25:05.904 232437 DEBUG oslo.service.loopingcall [None req-3e9ce8aa-b5a9-4e1f-9c67-a875d561589b d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Dec  6 02:25:05 np0005548731 nova_compute[232433]: 2025-12-06 07:25:05.904 232437 DEBUG nova.compute.manager [-] [instance: a6984fe7-72e6-4e80-b77b-925152e01f3f] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Dec  6 02:25:05 np0005548731 nova_compute[232433]: 2025-12-06 07:25:05.904 232437 DEBUG nova.network.neutron [-] [instance: a6984fe7-72e6-4e80-b77b-925152e01f3f] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Dec  6 02:25:07 np0005548731 nova_compute[232433]: 2025-12-06 07:25:07.077 232437 DEBUG nova.network.neutron [-] [instance: a6984fe7-72e6-4e80-b77b-925152e01f3f] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 02:25:07 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:25:07 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000025s ======
Dec  6 02:25:07 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:25:07.076 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Dec  6 02:25:07 np0005548731 nova_compute[232433]: 2025-12-06 07:25:07.095 232437 INFO nova.compute.manager [-] [instance: a6984fe7-72e6-4e80-b77b-925152e01f3f] Took 1.19 seconds to deallocate network for instance.#033[00m
Dec  6 02:25:07 np0005548731 nova_compute[232433]: 2025-12-06 07:25:07.204 232437 DEBUG oslo_concurrency.lockutils [None req-3e9ce8aa-b5a9-4e1f-9c67-a875d561589b d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:25:07 np0005548731 nova_compute[232433]: 2025-12-06 07:25:07.204 232437 DEBUG oslo_concurrency.lockutils [None req-3e9ce8aa-b5a9-4e1f-9c67-a875d561589b d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:25:07 np0005548731 nova_compute[232433]: 2025-12-06 07:25:07.222 232437 DEBUG nova.compute.manager [req-8bd534de-306c-4f84-b4c3-3a4278a01dfe req-a6c3d812-b891-48d0-b7ff-f95a6010526c 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: a6984fe7-72e6-4e80-b77b-925152e01f3f] Received event network-vif-deleted-02492be5-e171-4dea-a6fe-20c8b6f3a45e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:25:07 np0005548731 nova_compute[232433]: 2025-12-06 07:25:07.282 232437 DEBUG oslo_concurrency.processutils [None req-3e9ce8aa-b5a9-4e1f-9c67-a875d561589b d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:25:07 np0005548731 nova_compute[232433]: 2025-12-06 07:25:07.345 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:25:07 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 02:25:07 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1999743962' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 02:25:07 np0005548731 nova_compute[232433]: 2025-12-06 07:25:07.716 232437 DEBUG oslo_concurrency.processutils [None req-3e9ce8aa-b5a9-4e1f-9c67-a875d561589b d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.434s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:25:07 np0005548731 nova_compute[232433]: 2025-12-06 07:25:07.726 232437 DEBUG nova.compute.provider_tree [None req-3e9ce8aa-b5a9-4e1f-9c67-a875d561589b d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Inventory has not changed in ProviderTree for provider: 6d00757a-082f-486d-ae84-869a2ba2e6e7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  6 02:25:07 np0005548731 nova_compute[232433]: 2025-12-06 07:25:07.741 232437 DEBUG nova.scheduler.client.report [None req-3e9ce8aa-b5a9-4e1f-9c67-a875d561589b d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Inventory has not changed for provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  6 02:25:07 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:25:07 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:25:07 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:25:07.743 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:25:07 np0005548731 nova_compute[232433]: 2025-12-06 07:25:07.766 232437 DEBUG oslo_concurrency.lockutils [None req-3e9ce8aa-b5a9-4e1f-9c67-a875d561589b d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.562s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:25:07 np0005548731 nova_compute[232433]: 2025-12-06 07:25:07.798 232437 INFO nova.scheduler.client.report [None req-3e9ce8aa-b5a9-4e1f-9c67-a875d561589b d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Deleted allocations for instance a6984fe7-72e6-4e80-b77b-925152e01f3f#033[00m
Dec  6 02:25:07 np0005548731 nova_compute[232433]: 2025-12-06 07:25:07.877 232437 DEBUG oslo_concurrency.lockutils [None req-3e9ce8aa-b5a9-4e1f-9c67-a875d561589b d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Lock "a6984fe7-72e6-4e80-b77b-925152e01f3f" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 18.258s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:25:07 np0005548731 podman[273811]: 2025-12-06 07:25:07.900954833 +0000 UTC m=+0.055944462 container health_status ae4fd08cdec31d6ae2a6b91ffff7deb6ca5a8375cb52ee4b685cc6f25796824e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team)
Dec  6 02:25:07 np0005548731 podman[273810]: 2025-12-06 07:25:07.909243421 +0000 UTC m=+0.061091761 container health_status 26c2ce9bdb83c6db5ccad7f5b2a7cb4e9354ab0235ad2f942ea42c8feda2142b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec  6 02:25:07 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e264 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:25:07 np0005548731 kernel: tape61e57ee-1e (unregistering): left promiscuous mode
Dec  6 02:25:07 np0005548731 NetworkManager[49182]: <info>  [1765005907.9580] device (tape61e57ee-1e): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec  6 02:25:07 np0005548731 ovn_controller[133927]: 2025-12-06T07:25:07Z|00393|binding|INFO|Releasing lport e61e57ee-1e76-4d23-ab33-ab23cf69ba70 from this chassis (sb_readonly=0)
Dec  6 02:25:07 np0005548731 ovn_controller[133927]: 2025-12-06T07:25:07Z|00394|binding|INFO|Setting lport e61e57ee-1e76-4d23-ab33-ab23cf69ba70 down in Southbound
Dec  6 02:25:07 np0005548731 ovn_controller[133927]: 2025-12-06T07:25:07Z|00395|binding|INFO|Removing iface tape61e57ee-1e ovn-installed in OVS
Dec  6 02:25:07 np0005548731 nova_compute[232433]: 2025-12-06 07:25:07.997 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:25:07 np0005548731 nova_compute[232433]: 2025-12-06 07:25:07.998 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:25:08 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:25:08.000 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d9:ba:09 10.100.0.10'], port_security=['fa:16:3e:d9:ba:09 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '5ea73f51-d224-41ed-892b-a137d9985e73', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3935518d-ad39-4c36-90ae-f8fe979eafc5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c5036a5cbf7d44fd809d00942afd76e6', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'acd03568-07d0-4090-ab6d-0f9189f7ac4a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=cdc71006-3db8-48ee-b5c0-325cb1eaa593, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], logical_port=e61e57ee-1e76-4d23-ab33-ab23cf69ba70) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 02:25:08 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:25:08.001 143965 INFO neutron.agent.ovn.metadata.agent [-] Port e61e57ee-1e76-4d23-ab33-ab23cf69ba70 in datapath 3935518d-ad39-4c36-90ae-f8fe979eafc5 unbound from our chassis#033[00m
Dec  6 02:25:08 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:25:08.002 143965 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 3935518d-ad39-4c36-90ae-f8fe979eafc5 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m
Dec  6 02:25:08 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:25:08.003 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[c4ed4b6b-6f18-4d4f-90e3-ffccf516cac6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:25:08 np0005548731 nova_compute[232433]: 2025-12-06 07:25:08.012 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:25:08 np0005548731 podman[273848]: 2025-12-06 07:25:08.036391174 +0000 UTC m=+0.111330739 container health_status a118c3becf56cfbcae208065771ebf10a65162bb7b96389971293e534823fb78 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, managed_by=edpm_ansible)
Dec  6 02:25:08 np0005548731 systemd[1]: machine-qemu\x2d41\x2dinstance\x2d00000067.scope: Deactivated successfully.
Dec  6 02:25:08 np0005548731 systemd[1]: machine-qemu\x2d41\x2dinstance\x2d00000067.scope: Consumed 14.780s CPU time.
Dec  6 02:25:08 np0005548731 systemd-machined[195355]: Machine qemu-41-instance-00000067 terminated.
Dec  6 02:25:08 np0005548731 nova_compute[232433]: 2025-12-06 07:25:08.103 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:25:08 np0005548731 nova_compute[232433]: 2025-12-06 07:25:08.142 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:25:08 np0005548731 nova_compute[232433]: 2025-12-06 07:25:08.142 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:25:08 np0005548731 nova_compute[232433]: 2025-12-06 07:25:08.142 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:25:08 np0005548731 nova_compute[232433]: 2025-12-06 07:25:08.142 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  6 02:25:08 np0005548731 nova_compute[232433]: 2025-12-06 07:25:08.143 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:25:08 np0005548731 nova_compute[232433]: 2025-12-06 07:25:08.324 232437 DEBUG nova.compute.manager [req-747b590d-63db-4f64-9c80-3dd903ba43db req-feca9c3b-e2ca-422a-8acd-6aae65c54c0e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 5ea73f51-d224-41ed-892b-a137d9985e73] Received event network-vif-unplugged-e61e57ee-1e76-4d23-ab33-ab23cf69ba70 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:25:08 np0005548731 nova_compute[232433]: 2025-12-06 07:25:08.325 232437 DEBUG oslo_concurrency.lockutils [req-747b590d-63db-4f64-9c80-3dd903ba43db req-feca9c3b-e2ca-422a-8acd-6aae65c54c0e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "5ea73f51-d224-41ed-892b-a137d9985e73-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:25:08 np0005548731 nova_compute[232433]: 2025-12-06 07:25:08.326 232437 DEBUG oslo_concurrency.lockutils [req-747b590d-63db-4f64-9c80-3dd903ba43db req-feca9c3b-e2ca-422a-8acd-6aae65c54c0e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "5ea73f51-d224-41ed-892b-a137d9985e73-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:25:08 np0005548731 nova_compute[232433]: 2025-12-06 07:25:08.326 232437 DEBUG oslo_concurrency.lockutils [req-747b590d-63db-4f64-9c80-3dd903ba43db req-feca9c3b-e2ca-422a-8acd-6aae65c54c0e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "5ea73f51-d224-41ed-892b-a137d9985e73-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:25:08 np0005548731 nova_compute[232433]: 2025-12-06 07:25:08.326 232437 DEBUG nova.compute.manager [req-747b590d-63db-4f64-9c80-3dd903ba43db req-feca9c3b-e2ca-422a-8acd-6aae65c54c0e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 5ea73f51-d224-41ed-892b-a137d9985e73] No waiting events found dispatching network-vif-unplugged-e61e57ee-1e76-4d23-ab33-ab23cf69ba70 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 02:25:08 np0005548731 nova_compute[232433]: 2025-12-06 07:25:08.327 232437 WARNING nova.compute.manager [req-747b590d-63db-4f64-9c80-3dd903ba43db req-feca9c3b-e2ca-422a-8acd-6aae65c54c0e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 5ea73f51-d224-41ed-892b-a137d9985e73] Received unexpected event network-vif-unplugged-e61e57ee-1e76-4d23-ab33-ab23cf69ba70 for instance with vm_state active and task_state rescuing.#033[00m
Dec  6 02:25:08 np0005548731 nova_compute[232433]: 2025-12-06 07:25:08.371 232437 INFO nova.virt.libvirt.driver [None req-a50a0eee-6b1d-4713-b29f-72595519baa1 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] [instance: 5ea73f51-d224-41ed-892b-a137d9985e73] Instance shutdown successfully after 24 seconds.#033[00m
Dec  6 02:25:08 np0005548731 nova_compute[232433]: 2025-12-06 07:25:08.376 232437 INFO nova.virt.libvirt.driver [-] [instance: 5ea73f51-d224-41ed-892b-a137d9985e73] Instance destroyed successfully.#033[00m
Dec  6 02:25:08 np0005548731 nova_compute[232433]: 2025-12-06 07:25:08.376 232437 DEBUG nova.objects.instance [None req-a50a0eee-6b1d-4713-b29f-72595519baa1 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] Lazy-loading 'numa_topology' on Instance uuid 5ea73f51-d224-41ed-892b-a137d9985e73 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:25:08 np0005548731 nova_compute[232433]: 2025-12-06 07:25:08.391 232437 INFO nova.virt.libvirt.driver [None req-a50a0eee-6b1d-4713-b29f-72595519baa1 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] [instance: 5ea73f51-d224-41ed-892b-a137d9985e73] Attempting rescue#033[00m
Dec  6 02:25:08 np0005548731 nova_compute[232433]: 2025-12-06 07:25:08.392 232437 DEBUG nova.virt.libvirt.driver [None req-a50a0eee-6b1d-4713-b29f-72595519baa1 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] rescue generated disk_info: {'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'disk.rescue': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config.rescue': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} rescue /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4314#033[00m
Dec  6 02:25:08 np0005548731 nova_compute[232433]: 2025-12-06 07:25:08.397 232437 DEBUG nova.virt.libvirt.driver [None req-a50a0eee-6b1d-4713-b29f-72595519baa1 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] [instance: 5ea73f51-d224-41ed-892b-a137d9985e73] Instance directory exists: not creating _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4719#033[00m
Dec  6 02:25:08 np0005548731 nova_compute[232433]: 2025-12-06 07:25:08.397 232437 INFO nova.virt.libvirt.driver [None req-a50a0eee-6b1d-4713-b29f-72595519baa1 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] [instance: 5ea73f51-d224-41ed-892b-a137d9985e73] Creating image(s)#033[00m
Dec  6 02:25:08 np0005548731 nova_compute[232433]: 2025-12-06 07:25:08.421 232437 DEBUG nova.storage.rbd_utils [None req-a50a0eee-6b1d-4713-b29f-72595519baa1 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] rbd image 5ea73f51-d224-41ed-892b-a137d9985e73_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:25:08 np0005548731 nova_compute[232433]: 2025-12-06 07:25:08.424 232437 DEBUG nova.objects.instance [None req-a50a0eee-6b1d-4713-b29f-72595519baa1 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] Lazy-loading 'trusted_certs' on Instance uuid 5ea73f51-d224-41ed-892b-a137d9985e73 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:25:08 np0005548731 nova_compute[232433]: 2025-12-06 07:25:08.474 232437 DEBUG nova.storage.rbd_utils [None req-a50a0eee-6b1d-4713-b29f-72595519baa1 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] rbd image 5ea73f51-d224-41ed-892b-a137d9985e73_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:25:08 np0005548731 nova_compute[232433]: 2025-12-06 07:25:08.498 232437 DEBUG nova.storage.rbd_utils [None req-a50a0eee-6b1d-4713-b29f-72595519baa1 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] rbd image 5ea73f51-d224-41ed-892b-a137d9985e73_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:25:08 np0005548731 nova_compute[232433]: 2025-12-06 07:25:08.502 232437 DEBUG oslo_concurrency.processutils [None req-a50a0eee-6b1d-4713-b29f-72595519baa1 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:25:08 np0005548731 nova_compute[232433]: 2025-12-06 07:25:08.559 232437 DEBUG oslo_concurrency.processutils [None req-a50a0eee-6b1d-4713-b29f-72595519baa1 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:25:08 np0005548731 nova_compute[232433]: 2025-12-06 07:25:08.561 232437 DEBUG oslo_concurrency.lockutils [None req-a50a0eee-6b1d-4713-b29f-72595519baa1 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] Acquiring lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:25:08 np0005548731 nova_compute[232433]: 2025-12-06 07:25:08.561 232437 DEBUG oslo_concurrency.lockutils [None req-a50a0eee-6b1d-4713-b29f-72595519baa1 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:25:08 np0005548731 nova_compute[232433]: 2025-12-06 07:25:08.562 232437 DEBUG oslo_concurrency.lockutils [None req-a50a0eee-6b1d-4713-b29f-72595519baa1 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:25:08 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 02:25:08 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1966551518' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 02:25:08 np0005548731 nova_compute[232433]: 2025-12-06 07:25:08.586 232437 DEBUG nova.storage.rbd_utils [None req-a50a0eee-6b1d-4713-b29f-72595519baa1 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] rbd image 5ea73f51-d224-41ed-892b-a137d9985e73_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:25:08 np0005548731 nova_compute[232433]: 2025-12-06 07:25:08.592 232437 DEBUG oslo_concurrency.processutils [None req-a50a0eee-6b1d-4713-b29f-72595519baa1 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef 5ea73f51-d224-41ed-892b-a137d9985e73_disk.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:25:08 np0005548731 nova_compute[232433]: 2025-12-06 07:25:08.614 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.472s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:25:08 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  6 02:25:08 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3840847270' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  6 02:25:08 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  6 02:25:08 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3840847270' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  6 02:25:08 np0005548731 nova_compute[232433]: 2025-12-06 07:25:08.690 232437 DEBUG nova.virt.libvirt.driver [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] skipping disk for instance-00000067 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  6 02:25:08 np0005548731 nova_compute[232433]: 2025-12-06 07:25:08.691 232437 DEBUG nova.virt.libvirt.driver [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] skipping disk for instance-00000067 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  6 02:25:08 np0005548731 nova_compute[232433]: 2025-12-06 07:25:08.821 232437 WARNING nova.virt.libvirt.driver [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  6 02:25:08 np0005548731 nova_compute[232433]: 2025-12-06 07:25:08.823 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4500MB free_disk=20.872173309326172GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  6 02:25:08 np0005548731 nova_compute[232433]: 2025-12-06 07:25:08.823 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:25:08 np0005548731 nova_compute[232433]: 2025-12-06 07:25:08.823 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:25:08 np0005548731 nova_compute[232433]: 2025-12-06 07:25:08.929 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Instance 5ea73f51-d224-41ed-892b-a137d9985e73 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  6 02:25:08 np0005548731 nova_compute[232433]: 2025-12-06 07:25:08.930 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  6 02:25:08 np0005548731 nova_compute[232433]: 2025-12-06 07:25:08.930 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  6 02:25:08 np0005548731 nova_compute[232433]: 2025-12-06 07:25:08.973 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:25:09 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:25:09 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:25:09 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:25:09.079 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:25:09 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 02:25:09 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2880801786' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 02:25:09 np0005548731 nova_compute[232433]: 2025-12-06 07:25:09.425 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.452s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:25:09 np0005548731 nova_compute[232433]: 2025-12-06 07:25:09.432 232437 DEBUG nova.compute.provider_tree [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Inventory has not changed in ProviderTree for provider: 6d00757a-082f-486d-ae84-869a2ba2e6e7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  6 02:25:09 np0005548731 nova_compute[232433]: 2025-12-06 07:25:09.447 232437 DEBUG nova.scheduler.client.report [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Inventory has not changed for provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  6 02:25:09 np0005548731 nova_compute[232433]: 2025-12-06 07:25:09.471 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  6 02:25:09 np0005548731 nova_compute[232433]: 2025-12-06 07:25:09.472 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.648s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:25:09 np0005548731 nova_compute[232433]: 2025-12-06 07:25:09.654 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:25:09 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:25:09 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:25:09 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:25:09.746 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:25:10 np0005548731 nova_compute[232433]: 2025-12-06 07:25:10.398 232437 DEBUG oslo_concurrency.processutils [None req-a50a0eee-6b1d-4713-b29f-72595519baa1 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef 5ea73f51-d224-41ed-892b-a137d9985e73_disk.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.806s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:25:10 np0005548731 nova_compute[232433]: 2025-12-06 07:25:10.398 232437 DEBUG nova.objects.instance [None req-a50a0eee-6b1d-4713-b29f-72595519baa1 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] Lazy-loading 'migration_context' on Instance uuid 5ea73f51-d224-41ed-892b-a137d9985e73 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:25:10 np0005548731 nova_compute[232433]: 2025-12-06 07:25:10.525 232437 DEBUG nova.virt.libvirt.driver [None req-a50a0eee-6b1d-4713-b29f-72595519baa1 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] [instance: 5ea73f51-d224-41ed-892b-a137d9985e73] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Dec  6 02:25:10 np0005548731 nova_compute[232433]: 2025-12-06 07:25:10.526 232437 DEBUG nova.virt.libvirt.driver [None req-a50a0eee-6b1d-4713-b29f-72595519baa1 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] [instance: 5ea73f51-d224-41ed-892b-a137d9985e73] Start _get_guest_xml network_info=[{"id": "e61e57ee-1e76-4d23-ab33-ab23cf69ba70", "address": "fa:16:3e:d9:ba:09", "network": {"id": "3935518d-ad39-4c36-90ae-f8fe979eafc5", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-32965635-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerRescueTestJSON-32965635-network", "vif_mac": "fa:16:3e:d9:ba:09"}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "c5036a5cbf7d44fd809d00942afd76e6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape61e57ee-1e", "ovs_interfaceid": "e61e57ee-1e76-4d23-ab33-ab23cf69ba70", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'disk.rescue': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config.rescue': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>) rescue={'image_id': '6efab05d-c7cf-4770-a5c3-c806a2739063', 'kernel_id': '', 'ramdisk_id': ''} block_device_info=None _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec  6 02:25:10 np0005548731 nova_compute[232433]: 2025-12-06 07:25:10.526 232437 DEBUG nova.objects.instance [None req-a50a0eee-6b1d-4713-b29f-72595519baa1 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] Lazy-loading 'resources' on Instance uuid 5ea73f51-d224-41ed-892b-a137d9985e73 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:25:11 np0005548731 nova_compute[232433]: 2025-12-06 07:25:11.047 232437 DEBUG nova.compute.manager [req-abee034e-c9ae-4e1c-b241-8e163200f342 req-55adfa57-d86a-4b7c-a1c1-5685d73f585f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 5ea73f51-d224-41ed-892b-a137d9985e73] Received event network-vif-plugged-e61e57ee-1e76-4d23-ab33-ab23cf69ba70 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:25:11 np0005548731 nova_compute[232433]: 2025-12-06 07:25:11.047 232437 DEBUG oslo_concurrency.lockutils [req-abee034e-c9ae-4e1c-b241-8e163200f342 req-55adfa57-d86a-4b7c-a1c1-5685d73f585f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "5ea73f51-d224-41ed-892b-a137d9985e73-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:25:11 np0005548731 nova_compute[232433]: 2025-12-06 07:25:11.047 232437 DEBUG oslo_concurrency.lockutils [req-abee034e-c9ae-4e1c-b241-8e163200f342 req-55adfa57-d86a-4b7c-a1c1-5685d73f585f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "5ea73f51-d224-41ed-892b-a137d9985e73-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:25:11 np0005548731 nova_compute[232433]: 2025-12-06 07:25:11.048 232437 DEBUG oslo_concurrency.lockutils [req-abee034e-c9ae-4e1c-b241-8e163200f342 req-55adfa57-d86a-4b7c-a1c1-5685d73f585f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "5ea73f51-d224-41ed-892b-a137d9985e73-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:25:11 np0005548731 nova_compute[232433]: 2025-12-06 07:25:11.048 232437 DEBUG nova.compute.manager [req-abee034e-c9ae-4e1c-b241-8e163200f342 req-55adfa57-d86a-4b7c-a1c1-5685d73f585f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 5ea73f51-d224-41ed-892b-a137d9985e73] No waiting events found dispatching network-vif-plugged-e61e57ee-1e76-4d23-ab33-ab23cf69ba70 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 02:25:11 np0005548731 nova_compute[232433]: 2025-12-06 07:25:11.048 232437 WARNING nova.compute.manager [req-abee034e-c9ae-4e1c-b241-8e163200f342 req-55adfa57-d86a-4b7c-a1c1-5685d73f585f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 5ea73f51-d224-41ed-892b-a137d9985e73] Received unexpected event network-vif-plugged-e61e57ee-1e76-4d23-ab33-ab23cf69ba70 for instance with vm_state active and task_state rescuing.#033[00m
Dec  6 02:25:11 np0005548731 nova_compute[232433]: 2025-12-06 07:25:11.060 232437 WARNING nova.virt.libvirt.driver [None req-a50a0eee-6b1d-4713-b29f-72595519baa1 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  6 02:25:11 np0005548731 nova_compute[232433]: 2025-12-06 07:25:11.070 232437 DEBUG nova.virt.libvirt.host [None req-a50a0eee-6b1d-4713-b29f-72595519baa1 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec  6 02:25:11 np0005548731 nova_compute[232433]: 2025-12-06 07:25:11.071 232437 DEBUG nova.virt.libvirt.host [None req-a50a0eee-6b1d-4713-b29f-72595519baa1 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec  6 02:25:11 np0005548731 nova_compute[232433]: 2025-12-06 07:25:11.076 232437 DEBUG nova.virt.libvirt.host [None req-a50a0eee-6b1d-4713-b29f-72595519baa1 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec  6 02:25:11 np0005548731 nova_compute[232433]: 2025-12-06 07:25:11.077 232437 DEBUG nova.virt.libvirt.host [None req-a50a0eee-6b1d-4713-b29f-72595519baa1 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec  6 02:25:11 np0005548731 nova_compute[232433]: 2025-12-06 07:25:11.079 232437 DEBUG nova.virt.libvirt.driver [None req-a50a0eee-6b1d-4713-b29f-72595519baa1 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec  6 02:25:11 np0005548731 nova_compute[232433]: 2025-12-06 07:25:11.079 232437 DEBUG nova.virt.hardware [None req-a50a0eee-6b1d-4713-b29f-72595519baa1 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-06T06:56:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='25848a18-11d9-4f11-80b5-5d005675c76d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec  6 02:25:11 np0005548731 nova_compute[232433]: 2025-12-06 07:25:11.079 232437 DEBUG nova.virt.hardware [None req-a50a0eee-6b1d-4713-b29f-72595519baa1 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec  6 02:25:11 np0005548731 nova_compute[232433]: 2025-12-06 07:25:11.080 232437 DEBUG nova.virt.hardware [None req-a50a0eee-6b1d-4713-b29f-72595519baa1 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec  6 02:25:11 np0005548731 nova_compute[232433]: 2025-12-06 07:25:11.080 232437 DEBUG nova.virt.hardware [None req-a50a0eee-6b1d-4713-b29f-72595519baa1 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec  6 02:25:11 np0005548731 nova_compute[232433]: 2025-12-06 07:25:11.080 232437 DEBUG nova.virt.hardware [None req-a50a0eee-6b1d-4713-b29f-72595519baa1 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec  6 02:25:11 np0005548731 nova_compute[232433]: 2025-12-06 07:25:11.080 232437 DEBUG nova.virt.hardware [None req-a50a0eee-6b1d-4713-b29f-72595519baa1 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec  6 02:25:11 np0005548731 nova_compute[232433]: 2025-12-06 07:25:11.081 232437 DEBUG nova.virt.hardware [None req-a50a0eee-6b1d-4713-b29f-72595519baa1 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec  6 02:25:11 np0005548731 nova_compute[232433]: 2025-12-06 07:25:11.081 232437 DEBUG nova.virt.hardware [None req-a50a0eee-6b1d-4713-b29f-72595519baa1 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec  6 02:25:11 np0005548731 nova_compute[232433]: 2025-12-06 07:25:11.081 232437 DEBUG nova.virt.hardware [None req-a50a0eee-6b1d-4713-b29f-72595519baa1 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec  6 02:25:11 np0005548731 nova_compute[232433]: 2025-12-06 07:25:11.081 232437 DEBUG nova.virt.hardware [None req-a50a0eee-6b1d-4713-b29f-72595519baa1 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec  6 02:25:11 np0005548731 nova_compute[232433]: 2025-12-06 07:25:11.081 232437 DEBUG nova.virt.hardware [None req-a50a0eee-6b1d-4713-b29f-72595519baa1 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec  6 02:25:11 np0005548731 nova_compute[232433]: 2025-12-06 07:25:11.082 232437 DEBUG nova.objects.instance [None req-a50a0eee-6b1d-4713-b29f-72595519baa1 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 5ea73f51-d224-41ed-892b-a137d9985e73 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:25:11 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:25:11 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:25:11 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:25:11.081 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:25:11 np0005548731 nova_compute[232433]: 2025-12-06 07:25:11.103 232437 DEBUG oslo_concurrency.processutils [None req-a50a0eee-6b1d-4713-b29f-72595519baa1 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:25:11 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  6 02:25:11 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3014821163' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  6 02:25:11 np0005548731 nova_compute[232433]: 2025-12-06 07:25:11.644 232437 DEBUG oslo_concurrency.processutils [None req-a50a0eee-6b1d-4713-b29f-72595519baa1 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.541s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:25:11 np0005548731 nova_compute[232433]: 2025-12-06 07:25:11.645 232437 DEBUG oslo_concurrency.processutils [None req-a50a0eee-6b1d-4713-b29f-72595519baa1 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:25:11 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:25:11 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:25:11 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:25:11.749 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:25:12 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  6 02:25:12 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1981577113' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  6 02:25:12 np0005548731 nova_compute[232433]: 2025-12-06 07:25:12.076 232437 DEBUG oslo_concurrency.processutils [None req-a50a0eee-6b1d-4713-b29f-72595519baa1 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.431s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:25:12 np0005548731 nova_compute[232433]: 2025-12-06 07:25:12.077 232437 DEBUG oslo_concurrency.processutils [None req-a50a0eee-6b1d-4713-b29f-72595519baa1 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:25:12 np0005548731 nova_compute[232433]: 2025-12-06 07:25:12.346 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:25:12 np0005548731 nova_compute[232433]: 2025-12-06 07:25:12.473 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:25:12 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  6 02:25:12 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/534500885' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  6 02:25:12 np0005548731 nova_compute[232433]: 2025-12-06 07:25:12.582 232437 DEBUG oslo_concurrency.processutils [None req-a50a0eee-6b1d-4713-b29f-72595519baa1 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.505s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:25:12 np0005548731 nova_compute[232433]: 2025-12-06 07:25:12.584 232437 DEBUG nova.virt.libvirt.vif [None req-a50a0eee-6b1d-4713-b29f-72595519baa1 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T07:24:27Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerRescueTestJSON-server-1971190968',display_name='tempest-ServerRescueTestJSON-server-1971190968',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serverrescuetestjson-server-1971190968',id=103,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-06T07:24:38Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='c5036a5cbf7d44fd809d00942afd76e6',ramdisk_id='',reservation_id='r-uvl7bc16',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerRescueTestJSON-1682980670',owner_user_name='tempest-ServerRescueTestJSON-1682980670-project-member'},tags=<?>,task_state='rescuing',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:24:38Z,user_data=None,user_id='bf140baee82f408e8fafa81062146641',uuid=5ea73f51-d224-41ed-892b-a137d9985e73,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "e61e57ee-1e76-4d23-ab33-ab23cf69ba70", "address": "fa:16:3e:d9:ba:09", "network": {"id": "3935518d-ad39-4c36-90ae-f8fe979eafc5", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-32965635-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerRescueTestJSON-32965635-network", "vif_mac": "fa:16:3e:d9:ba:09"}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "c5036a5cbf7d44fd809d00942afd76e6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape61e57ee-1e", "ovs_interfaceid": "e61e57ee-1e76-4d23-ab33-ab23cf69ba70", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec  6 02:25:12 np0005548731 nova_compute[232433]: 2025-12-06 07:25:12.584 232437 DEBUG nova.network.os_vif_util [None req-a50a0eee-6b1d-4713-b29f-72595519baa1 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] Converting VIF {"id": "e61e57ee-1e76-4d23-ab33-ab23cf69ba70", "address": "fa:16:3e:d9:ba:09", "network": {"id": "3935518d-ad39-4c36-90ae-f8fe979eafc5", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-32965635-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerRescueTestJSON-32965635-network", "vif_mac": "fa:16:3e:d9:ba:09"}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "c5036a5cbf7d44fd809d00942afd76e6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape61e57ee-1e", "ovs_interfaceid": "e61e57ee-1e76-4d23-ab33-ab23cf69ba70", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 02:25:12 np0005548731 nova_compute[232433]: 2025-12-06 07:25:12.585 232437 DEBUG nova.network.os_vif_util [None req-a50a0eee-6b1d-4713-b29f-72595519baa1 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:d9:ba:09,bridge_name='br-int',has_traffic_filtering=True,id=e61e57ee-1e76-4d23-ab33-ab23cf69ba70,network=Network(3935518d-ad39-4c36-90ae-f8fe979eafc5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape61e57ee-1e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 02:25:12 np0005548731 nova_compute[232433]: 2025-12-06 07:25:12.586 232437 DEBUG nova.objects.instance [None req-a50a0eee-6b1d-4713-b29f-72595519baa1 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] Lazy-loading 'pci_devices' on Instance uuid 5ea73f51-d224-41ed-892b-a137d9985e73 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:25:12 np0005548731 nova_compute[232433]: 2025-12-06 07:25:12.630 232437 DEBUG nova.virt.libvirt.driver [None req-a50a0eee-6b1d-4713-b29f-72595519baa1 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] [instance: 5ea73f51-d224-41ed-892b-a137d9985e73] End _get_guest_xml xml=<domain type="kvm">
Dec  6 02:25:12 np0005548731 nova_compute[232433]:  <uuid>5ea73f51-d224-41ed-892b-a137d9985e73</uuid>
Dec  6 02:25:12 np0005548731 nova_compute[232433]:  <name>instance-00000067</name>
Dec  6 02:25:12 np0005548731 nova_compute[232433]:  <memory>131072</memory>
Dec  6 02:25:12 np0005548731 nova_compute[232433]:  <vcpu>1</vcpu>
Dec  6 02:25:12 np0005548731 nova_compute[232433]:  <metadata>
Dec  6 02:25:12 np0005548731 nova_compute[232433]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec  6 02:25:12 np0005548731 nova_compute[232433]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec  6 02:25:12 np0005548731 nova_compute[232433]:      <nova:name>tempest-ServerRescueTestJSON-server-1971190968</nova:name>
Dec  6 02:25:12 np0005548731 nova_compute[232433]:      <nova:creationTime>2025-12-06 07:25:11</nova:creationTime>
Dec  6 02:25:12 np0005548731 nova_compute[232433]:      <nova:flavor name="m1.nano">
Dec  6 02:25:12 np0005548731 nova_compute[232433]:        <nova:memory>128</nova:memory>
Dec  6 02:25:12 np0005548731 nova_compute[232433]:        <nova:disk>1</nova:disk>
Dec  6 02:25:12 np0005548731 nova_compute[232433]:        <nova:swap>0</nova:swap>
Dec  6 02:25:12 np0005548731 nova_compute[232433]:        <nova:ephemeral>0</nova:ephemeral>
Dec  6 02:25:12 np0005548731 nova_compute[232433]:        <nova:vcpus>1</nova:vcpus>
Dec  6 02:25:12 np0005548731 nova_compute[232433]:      </nova:flavor>
Dec  6 02:25:12 np0005548731 nova_compute[232433]:      <nova:owner>
Dec  6 02:25:12 np0005548731 nova_compute[232433]:        <nova:user uuid="bf140baee82f408e8fafa81062146641">tempest-ServerRescueTestJSON-1682980670-project-member</nova:user>
Dec  6 02:25:12 np0005548731 nova_compute[232433]:        <nova:project uuid="c5036a5cbf7d44fd809d00942afd76e6">tempest-ServerRescueTestJSON-1682980670</nova:project>
Dec  6 02:25:12 np0005548731 nova_compute[232433]:      </nova:owner>
Dec  6 02:25:12 np0005548731 nova_compute[232433]:      <nova:root type="image" uuid="6efab05d-c7cf-4770-a5c3-c806a2739063"/>
Dec  6 02:25:12 np0005548731 nova_compute[232433]:      <nova:ports>
Dec  6 02:25:12 np0005548731 nova_compute[232433]:        <nova:port uuid="e61e57ee-1e76-4d23-ab33-ab23cf69ba70">
Dec  6 02:25:12 np0005548731 nova_compute[232433]:          <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Dec  6 02:25:12 np0005548731 nova_compute[232433]:        </nova:port>
Dec  6 02:25:12 np0005548731 nova_compute[232433]:      </nova:ports>
Dec  6 02:25:12 np0005548731 nova_compute[232433]:    </nova:instance>
Dec  6 02:25:12 np0005548731 nova_compute[232433]:  </metadata>
Dec  6 02:25:12 np0005548731 nova_compute[232433]:  <sysinfo type="smbios">
Dec  6 02:25:12 np0005548731 nova_compute[232433]:    <system>
Dec  6 02:25:12 np0005548731 nova_compute[232433]:      <entry name="manufacturer">RDO</entry>
Dec  6 02:25:12 np0005548731 nova_compute[232433]:      <entry name="product">OpenStack Compute</entry>
Dec  6 02:25:12 np0005548731 nova_compute[232433]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec  6 02:25:12 np0005548731 nova_compute[232433]:      <entry name="serial">5ea73f51-d224-41ed-892b-a137d9985e73</entry>
Dec  6 02:25:12 np0005548731 nova_compute[232433]:      <entry name="uuid">5ea73f51-d224-41ed-892b-a137d9985e73</entry>
Dec  6 02:25:12 np0005548731 nova_compute[232433]:      <entry name="family">Virtual Machine</entry>
Dec  6 02:25:12 np0005548731 nova_compute[232433]:    </system>
Dec  6 02:25:12 np0005548731 nova_compute[232433]:  </sysinfo>
Dec  6 02:25:12 np0005548731 nova_compute[232433]:  <os>
Dec  6 02:25:12 np0005548731 nova_compute[232433]:    <type arch="x86_64" machine="q35">hvm</type>
Dec  6 02:25:12 np0005548731 nova_compute[232433]:    <smbios mode="sysinfo"/>
Dec  6 02:25:12 np0005548731 nova_compute[232433]:  </os>
Dec  6 02:25:12 np0005548731 nova_compute[232433]:  <features>
Dec  6 02:25:12 np0005548731 nova_compute[232433]:    <acpi/>
Dec  6 02:25:12 np0005548731 nova_compute[232433]:    <apic/>
Dec  6 02:25:12 np0005548731 nova_compute[232433]:    <vmcoreinfo/>
Dec  6 02:25:12 np0005548731 nova_compute[232433]:  </features>
Dec  6 02:25:12 np0005548731 nova_compute[232433]:  <clock offset="utc">
Dec  6 02:25:12 np0005548731 nova_compute[232433]:    <timer name="pit" tickpolicy="delay"/>
Dec  6 02:25:12 np0005548731 nova_compute[232433]:    <timer name="rtc" tickpolicy="catchup"/>
Dec  6 02:25:12 np0005548731 nova_compute[232433]:    <timer name="hpet" present="no"/>
Dec  6 02:25:12 np0005548731 nova_compute[232433]:  </clock>
Dec  6 02:25:12 np0005548731 nova_compute[232433]:  <cpu mode="custom" match="exact">
Dec  6 02:25:12 np0005548731 nova_compute[232433]:    <model>Nehalem</model>
Dec  6 02:25:12 np0005548731 nova_compute[232433]:    <topology sockets="1" cores="1" threads="1"/>
Dec  6 02:25:12 np0005548731 nova_compute[232433]:  </cpu>
Dec  6 02:25:12 np0005548731 nova_compute[232433]:  <devices>
Dec  6 02:25:12 np0005548731 nova_compute[232433]:    <disk type="network" device="disk">
Dec  6 02:25:12 np0005548731 nova_compute[232433]:      <driver type="raw" cache="none"/>
Dec  6 02:25:12 np0005548731 nova_compute[232433]:      <source protocol="rbd" name="vms/5ea73f51-d224-41ed-892b-a137d9985e73_disk.rescue">
Dec  6 02:25:12 np0005548731 nova_compute[232433]:        <host name="192.168.122.100" port="6789"/>
Dec  6 02:25:12 np0005548731 nova_compute[232433]:        <host name="192.168.122.102" port="6789"/>
Dec  6 02:25:12 np0005548731 nova_compute[232433]:        <host name="192.168.122.101" port="6789"/>
Dec  6 02:25:12 np0005548731 nova_compute[232433]:      </source>
Dec  6 02:25:12 np0005548731 nova_compute[232433]:      <auth username="openstack">
Dec  6 02:25:12 np0005548731 nova_compute[232433]:        <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec  6 02:25:12 np0005548731 nova_compute[232433]:      </auth>
Dec  6 02:25:12 np0005548731 nova_compute[232433]:      <target dev="vda" bus="virtio"/>
Dec  6 02:25:12 np0005548731 nova_compute[232433]:    </disk>
Dec  6 02:25:12 np0005548731 nova_compute[232433]:    <disk type="network" device="disk">
Dec  6 02:25:12 np0005548731 nova_compute[232433]:      <driver type="raw" cache="none"/>
Dec  6 02:25:12 np0005548731 nova_compute[232433]:      <source protocol="rbd" name="vms/5ea73f51-d224-41ed-892b-a137d9985e73_disk">
Dec  6 02:25:12 np0005548731 nova_compute[232433]:        <host name="192.168.122.100" port="6789"/>
Dec  6 02:25:12 np0005548731 nova_compute[232433]:        <host name="192.168.122.102" port="6789"/>
Dec  6 02:25:12 np0005548731 nova_compute[232433]:        <host name="192.168.122.101" port="6789"/>
Dec  6 02:25:12 np0005548731 nova_compute[232433]:      </source>
Dec  6 02:25:12 np0005548731 nova_compute[232433]:      <auth username="openstack">
Dec  6 02:25:12 np0005548731 nova_compute[232433]:        <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec  6 02:25:12 np0005548731 nova_compute[232433]:      </auth>
Dec  6 02:25:12 np0005548731 nova_compute[232433]:      <target dev="vdb" bus="virtio"/>
Dec  6 02:25:12 np0005548731 nova_compute[232433]:    </disk>
Dec  6 02:25:12 np0005548731 nova_compute[232433]:    <disk type="network" device="cdrom">
Dec  6 02:25:12 np0005548731 nova_compute[232433]:      <driver type="raw" cache="none"/>
Dec  6 02:25:12 np0005548731 nova_compute[232433]:      <source protocol="rbd" name="vms/5ea73f51-d224-41ed-892b-a137d9985e73_disk.config.rescue">
Dec  6 02:25:12 np0005548731 nova_compute[232433]:        <host name="192.168.122.100" port="6789"/>
Dec  6 02:25:12 np0005548731 nova_compute[232433]:        <host name="192.168.122.102" port="6789"/>
Dec  6 02:25:12 np0005548731 nova_compute[232433]:        <host name="192.168.122.101" port="6789"/>
Dec  6 02:25:12 np0005548731 nova_compute[232433]:      </source>
Dec  6 02:25:12 np0005548731 nova_compute[232433]:      <auth username="openstack">
Dec  6 02:25:12 np0005548731 nova_compute[232433]:        <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec  6 02:25:12 np0005548731 nova_compute[232433]:      </auth>
Dec  6 02:25:12 np0005548731 nova_compute[232433]:      <target dev="sda" bus="sata"/>
Dec  6 02:25:12 np0005548731 nova_compute[232433]:    </disk>
Dec  6 02:25:12 np0005548731 nova_compute[232433]:    <interface type="ethernet">
Dec  6 02:25:12 np0005548731 nova_compute[232433]:      <mac address="fa:16:3e:d9:ba:09"/>
Dec  6 02:25:12 np0005548731 nova_compute[232433]:      <model type="virtio"/>
Dec  6 02:25:12 np0005548731 nova_compute[232433]:      <driver name="vhost" rx_queue_size="512"/>
Dec  6 02:25:12 np0005548731 nova_compute[232433]:      <mtu size="1442"/>
Dec  6 02:25:12 np0005548731 nova_compute[232433]:      <target dev="tape61e57ee-1e"/>
Dec  6 02:25:12 np0005548731 nova_compute[232433]:    </interface>
Dec  6 02:25:12 np0005548731 nova_compute[232433]:    <serial type="pty">
Dec  6 02:25:12 np0005548731 nova_compute[232433]:      <log file="/var/lib/nova/instances/5ea73f51-d224-41ed-892b-a137d9985e73/console.log" append="off"/>
Dec  6 02:25:12 np0005548731 nova_compute[232433]:    </serial>
Dec  6 02:25:12 np0005548731 nova_compute[232433]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec  6 02:25:12 np0005548731 nova_compute[232433]:    <video>
Dec  6 02:25:12 np0005548731 nova_compute[232433]:      <model type="virtio"/>
Dec  6 02:25:12 np0005548731 nova_compute[232433]:    </video>
Dec  6 02:25:12 np0005548731 nova_compute[232433]:    <input type="tablet" bus="usb"/>
Dec  6 02:25:12 np0005548731 nova_compute[232433]:    <rng model="virtio">
Dec  6 02:25:12 np0005548731 nova_compute[232433]:      <backend model="random">/dev/urandom</backend>
Dec  6 02:25:12 np0005548731 nova_compute[232433]:    </rng>
Dec  6 02:25:12 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root"/>
Dec  6 02:25:12 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:25:12 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:25:12 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:25:12 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:25:12 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:25:12 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:25:12 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:25:12 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:25:12 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:25:12 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:25:12 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:25:12 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:25:12 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:25:12 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:25:12 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:25:12 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:25:12 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:25:12 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:25:12 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:25:12 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:25:12 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:25:12 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:25:12 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:25:12 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:25:12 np0005548731 nova_compute[232433]:    <controller type="usb" index="0"/>
Dec  6 02:25:12 np0005548731 nova_compute[232433]:    <memballoon model="virtio">
Dec  6 02:25:12 np0005548731 nova_compute[232433]:      <stats period="10"/>
Dec  6 02:25:12 np0005548731 nova_compute[232433]:    </memballoon>
Dec  6 02:25:12 np0005548731 nova_compute[232433]:  </devices>
Dec  6 02:25:12 np0005548731 nova_compute[232433]: </domain>
Dec  6 02:25:12 np0005548731 nova_compute[232433]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec  6 02:25:12 np0005548731 nova_compute[232433]: 2025-12-06 07:25:12.637 232437 INFO nova.virt.libvirt.driver [-] [instance: 5ea73f51-d224-41ed-892b-a137d9985e73] Instance destroyed successfully.#033[00m
Dec  6 02:25:12 np0005548731 nova_compute[232433]: 2025-12-06 07:25:12.709 232437 DEBUG nova.virt.libvirt.driver [None req-a50a0eee-6b1d-4713-b29f-72595519baa1 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  6 02:25:12 np0005548731 nova_compute[232433]: 2025-12-06 07:25:12.709 232437 DEBUG nova.virt.libvirt.driver [None req-a50a0eee-6b1d-4713-b29f-72595519baa1 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  6 02:25:12 np0005548731 nova_compute[232433]: 2025-12-06 07:25:12.710 232437 DEBUG nova.virt.libvirt.driver [None req-a50a0eee-6b1d-4713-b29f-72595519baa1 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  6 02:25:12 np0005548731 nova_compute[232433]: 2025-12-06 07:25:12.710 232437 DEBUG nova.virt.libvirt.driver [None req-a50a0eee-6b1d-4713-b29f-72595519baa1 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] No VIF found with MAC fa:16:3e:d9:ba:09, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec  6 02:25:12 np0005548731 nova_compute[232433]: 2025-12-06 07:25:12.710 232437 INFO nova.virt.libvirt.driver [None req-a50a0eee-6b1d-4713-b29f-72595519baa1 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] [instance: 5ea73f51-d224-41ed-892b-a137d9985e73] Using config drive#033[00m
Dec  6 02:25:12 np0005548731 nova_compute[232433]: 2025-12-06 07:25:12.729 232437 DEBUG nova.storage.rbd_utils [None req-a50a0eee-6b1d-4713-b29f-72595519baa1 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] rbd image 5ea73f51-d224-41ed-892b-a137d9985e73_disk.config.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:25:12 np0005548731 nova_compute[232433]: 2025-12-06 07:25:12.755 232437 DEBUG nova.objects.instance [None req-a50a0eee-6b1d-4713-b29f-72595519baa1 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] Lazy-loading 'ec2_ids' on Instance uuid 5ea73f51-d224-41ed-892b-a137d9985e73 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:25:12 np0005548731 nova_compute[232433]: 2025-12-06 07:25:12.780 232437 DEBUG nova.objects.instance [None req-a50a0eee-6b1d-4713-b29f-72595519baa1 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] Lazy-loading 'keypairs' on Instance uuid 5ea73f51-d224-41ed-892b-a137d9985e73 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:25:12 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e264 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:25:13 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:25:13 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:25:13 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:25:13.083 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:25:13 np0005548731 nova_compute[232433]: 2025-12-06 07:25:13.225 232437 INFO nova.virt.libvirt.driver [None req-a50a0eee-6b1d-4713-b29f-72595519baa1 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] [instance: 5ea73f51-d224-41ed-892b-a137d9985e73] Creating config drive at /var/lib/nova/instances/5ea73f51-d224-41ed-892b-a137d9985e73/disk.config.rescue#033[00m
Dec  6 02:25:13 np0005548731 nova_compute[232433]: 2025-12-06 07:25:13.232 232437 DEBUG oslo_concurrency.processutils [None req-a50a0eee-6b1d-4713-b29f-72595519baa1 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/5ea73f51-d224-41ed-892b-a137d9985e73/disk.config.rescue -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpnr8yd_te execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:25:13 np0005548731 nova_compute[232433]: 2025-12-06 07:25:13.364 232437 DEBUG oslo_concurrency.processutils [None req-a50a0eee-6b1d-4713-b29f-72595519baa1 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/5ea73f51-d224-41ed-892b-a137d9985e73/disk.config.rescue -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpnr8yd_te" returned: 0 in 0.132s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:25:13 np0005548731 nova_compute[232433]: 2025-12-06 07:25:13.391 232437 DEBUG nova.storage.rbd_utils [None req-a50a0eee-6b1d-4713-b29f-72595519baa1 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] rbd image 5ea73f51-d224-41ed-892b-a137d9985e73_disk.config.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:25:13 np0005548731 nova_compute[232433]: 2025-12-06 07:25:13.394 232437 DEBUG oslo_concurrency.processutils [None req-a50a0eee-6b1d-4713-b29f-72595519baa1 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/5ea73f51-d224-41ed-892b-a137d9985e73/disk.config.rescue 5ea73f51-d224-41ed-892b-a137d9985e73_disk.config.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:25:13 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:25:13 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:25:13 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:25:13.752 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:25:14 np0005548731 nova_compute[232433]: 2025-12-06 07:25:14.099 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:25:14 np0005548731 nova_compute[232433]: 2025-12-06 07:25:14.657 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:25:15 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:25:15 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:25:15 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:25:15.086 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:25:15 np0005548731 nova_compute[232433]: 2025-12-06 07:25:15.497 232437 DEBUG oslo_concurrency.processutils [None req-a50a0eee-6b1d-4713-b29f-72595519baa1 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/5ea73f51-d224-41ed-892b-a137d9985e73/disk.config.rescue 5ea73f51-d224-41ed-892b-a137d9985e73_disk.config.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.102s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:25:15 np0005548731 nova_compute[232433]: 2025-12-06 07:25:15.498 232437 INFO nova.virt.libvirt.driver [None req-a50a0eee-6b1d-4713-b29f-72595519baa1 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] [instance: 5ea73f51-d224-41ed-892b-a137d9985e73] Deleting local config drive /var/lib/nova/instances/5ea73f51-d224-41ed-892b-a137d9985e73/disk.config.rescue because it was imported into RBD.#033[00m
Dec  6 02:25:15 np0005548731 kernel: tape61e57ee-1e: entered promiscuous mode
Dec  6 02:25:15 np0005548731 NetworkManager[49182]: <info>  [1765005915.5638] manager: (tape61e57ee-1e): new Tun device (/org/freedesktop/NetworkManager/Devices/200)
Dec  6 02:25:15 np0005548731 nova_compute[232433]: 2025-12-06 07:25:15.563 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:25:15 np0005548731 ovn_controller[133927]: 2025-12-06T07:25:15Z|00396|binding|INFO|Claiming lport e61e57ee-1e76-4d23-ab33-ab23cf69ba70 for this chassis.
Dec  6 02:25:15 np0005548731 ovn_controller[133927]: 2025-12-06T07:25:15Z|00397|binding|INFO|e61e57ee-1e76-4d23-ab33-ab23cf69ba70: Claiming fa:16:3e:d9:ba:09 10.100.0.10
Dec  6 02:25:15 np0005548731 ovn_controller[133927]: 2025-12-06T07:25:15Z|00398|binding|INFO|Setting lport e61e57ee-1e76-4d23-ab33-ab23cf69ba70 ovn-installed in OVS
Dec  6 02:25:15 np0005548731 nova_compute[232433]: 2025-12-06 07:25:15.582 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:25:15 np0005548731 nova_compute[232433]: 2025-12-06 07:25:15.586 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:25:15 np0005548731 systemd-udevd[274172]: Network interface NamePolicy= disabled on kernel command line.
Dec  6 02:25:15 np0005548731 ovn_controller[133927]: 2025-12-06T07:25:15Z|00399|binding|INFO|Setting lport e61e57ee-1e76-4d23-ab33-ab23cf69ba70 up in Southbound
Dec  6 02:25:15 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:25:15.590 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d9:ba:09 10.100.0.10'], port_security=['fa:16:3e:d9:ba:09 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '5ea73f51-d224-41ed-892b-a137d9985e73', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3935518d-ad39-4c36-90ae-f8fe979eafc5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c5036a5cbf7d44fd809d00942afd76e6', 'neutron:revision_number': '5', 'neutron:security_group_ids': 'acd03568-07d0-4090-ab6d-0f9189f7ac4a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=cdc71006-3db8-48ee-b5c0-325cb1eaa593, chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], logical_port=e61e57ee-1e76-4d23-ab33-ab23cf69ba70) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 02:25:15 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:25:15.591 143965 INFO neutron.agent.ovn.metadata.agent [-] Port e61e57ee-1e76-4d23-ab33-ab23cf69ba70 in datapath 3935518d-ad39-4c36-90ae-f8fe979eafc5 bound to our chassis#033[00m
Dec  6 02:25:15 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:25:15.592 143965 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 3935518d-ad39-4c36-90ae-f8fe979eafc5 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m
Dec  6 02:25:15 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:25:15.594 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[997a7382-af30-4ef3-ab0e-e3c37650b02e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:25:15 np0005548731 systemd-machined[195355]: New machine qemu-43-instance-00000067.
Dec  6 02:25:15 np0005548731 NetworkManager[49182]: <info>  [1765005915.6100] device (tape61e57ee-1e): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec  6 02:25:15 np0005548731 NetworkManager[49182]: <info>  [1765005915.6118] device (tape61e57ee-1e): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec  6 02:25:15 np0005548731 systemd[1]: Started Virtual Machine qemu-43-instance-00000067.
Dec  6 02:25:15 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:25:15 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:25:15 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:25:15.756 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:25:16 np0005548731 nova_compute[232433]: 2025-12-06 07:25:16.064 232437 DEBUG nova.compute.manager [req-9d28f882-0c16-40c0-82fa-a11e6a09dc30 req-06ecee7c-99b2-4b6e-bc8f-4e7ff068df60 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 5ea73f51-d224-41ed-892b-a137d9985e73] Received event network-vif-plugged-e61e57ee-1e76-4d23-ab33-ab23cf69ba70 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:25:16 np0005548731 nova_compute[232433]: 2025-12-06 07:25:16.065 232437 DEBUG oslo_concurrency.lockutils [req-9d28f882-0c16-40c0-82fa-a11e6a09dc30 req-06ecee7c-99b2-4b6e-bc8f-4e7ff068df60 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "5ea73f51-d224-41ed-892b-a137d9985e73-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:25:16 np0005548731 nova_compute[232433]: 2025-12-06 07:25:16.066 232437 DEBUG oslo_concurrency.lockutils [req-9d28f882-0c16-40c0-82fa-a11e6a09dc30 req-06ecee7c-99b2-4b6e-bc8f-4e7ff068df60 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "5ea73f51-d224-41ed-892b-a137d9985e73-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:25:16 np0005548731 nova_compute[232433]: 2025-12-06 07:25:16.066 232437 DEBUG oslo_concurrency.lockutils [req-9d28f882-0c16-40c0-82fa-a11e6a09dc30 req-06ecee7c-99b2-4b6e-bc8f-4e7ff068df60 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "5ea73f51-d224-41ed-892b-a137d9985e73-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:25:16 np0005548731 nova_compute[232433]: 2025-12-06 07:25:16.067 232437 DEBUG nova.compute.manager [req-9d28f882-0c16-40c0-82fa-a11e6a09dc30 req-06ecee7c-99b2-4b6e-bc8f-4e7ff068df60 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 5ea73f51-d224-41ed-892b-a137d9985e73] No waiting events found dispatching network-vif-plugged-e61e57ee-1e76-4d23-ab33-ab23cf69ba70 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 02:25:16 np0005548731 nova_compute[232433]: 2025-12-06 07:25:16.067 232437 WARNING nova.compute.manager [req-9d28f882-0c16-40c0-82fa-a11e6a09dc30 req-06ecee7c-99b2-4b6e-bc8f-4e7ff068df60 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 5ea73f51-d224-41ed-892b-a137d9985e73] Received unexpected event network-vif-plugged-e61e57ee-1e76-4d23-ab33-ab23cf69ba70 for instance with vm_state active and task_state rescuing.#033[00m
Dec  6 02:25:16 np0005548731 nova_compute[232433]: 2025-12-06 07:25:16.475 232437 DEBUG nova.virt.libvirt.host [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Removed pending event for 5ea73f51-d224-41ed-892b-a137d9985e73 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Dec  6 02:25:16 np0005548731 nova_compute[232433]: 2025-12-06 07:25:16.475 232437 DEBUG nova.virt.driver [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Emitting event <LifecycleEvent: 1765005916.4750423, 5ea73f51-d224-41ed-892b-a137d9985e73 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 02:25:16 np0005548731 nova_compute[232433]: 2025-12-06 07:25:16.476 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 5ea73f51-d224-41ed-892b-a137d9985e73] VM Resumed (Lifecycle Event)#033[00m
Dec  6 02:25:16 np0005548731 nova_compute[232433]: 2025-12-06 07:25:16.479 232437 DEBUG nova.compute.manager [None req-a50a0eee-6b1d-4713-b29f-72595519baa1 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] [instance: 5ea73f51-d224-41ed-892b-a137d9985e73] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:25:16 np0005548731 nova_compute[232433]: 2025-12-06 07:25:16.734 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 5ea73f51-d224-41ed-892b-a137d9985e73] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:25:16 np0005548731 nova_compute[232433]: 2025-12-06 07:25:16.738 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 5ea73f51-d224-41ed-892b-a137d9985e73] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rescuing, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  6 02:25:16 np0005548731 nova_compute[232433]: 2025-12-06 07:25:16.810 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 5ea73f51-d224-41ed-892b-a137d9985e73] During sync_power_state the instance has a pending task (rescuing). Skip.#033[00m
Dec  6 02:25:16 np0005548731 nova_compute[232433]: 2025-12-06 07:25:16.811 232437 DEBUG nova.virt.driver [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Emitting event <LifecycleEvent: 1765005916.4772358, 5ea73f51-d224-41ed-892b-a137d9985e73 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 02:25:16 np0005548731 nova_compute[232433]: 2025-12-06 07:25:16.811 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 5ea73f51-d224-41ed-892b-a137d9985e73] VM Started (Lifecycle Event)#033[00m
Dec  6 02:25:16 np0005548731 nova_compute[232433]: 2025-12-06 07:25:16.899 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 5ea73f51-d224-41ed-892b-a137d9985e73] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:25:16 np0005548731 nova_compute[232433]: 2025-12-06 07:25:16.902 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 5ea73f51-d224-41ed-892b-a137d9985e73] Synchronizing instance power state after lifecycle event "Started"; current vm_state: rescued, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  6 02:25:17 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:25:17 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:25:17 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:25:17.088 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:25:17 np0005548731 nova_compute[232433]: 2025-12-06 07:25:17.347 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:25:17 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:25:17 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:25:17 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:25:17.758 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:25:17 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e264 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:25:19 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:25:19 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:25:19 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:25:19.091 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:25:19 np0005548731 nova_compute[232433]: 2025-12-06 07:25:19.159 232437 DEBUG nova.compute.manager [req-7e3403e2-89e1-49f8-98a9-429ff6cf1a02 req-ca5e55fe-1b8b-4ee2-bac7-bb0bdd9229c4 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 5ea73f51-d224-41ed-892b-a137d9985e73] Received event network-vif-plugged-e61e57ee-1e76-4d23-ab33-ab23cf69ba70 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:25:19 np0005548731 nova_compute[232433]: 2025-12-06 07:25:19.159 232437 DEBUG oslo_concurrency.lockutils [req-7e3403e2-89e1-49f8-98a9-429ff6cf1a02 req-ca5e55fe-1b8b-4ee2-bac7-bb0bdd9229c4 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "5ea73f51-d224-41ed-892b-a137d9985e73-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:25:19 np0005548731 nova_compute[232433]: 2025-12-06 07:25:19.160 232437 DEBUG oslo_concurrency.lockutils [req-7e3403e2-89e1-49f8-98a9-429ff6cf1a02 req-ca5e55fe-1b8b-4ee2-bac7-bb0bdd9229c4 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "5ea73f51-d224-41ed-892b-a137d9985e73-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:25:19 np0005548731 nova_compute[232433]: 2025-12-06 07:25:19.160 232437 DEBUG oslo_concurrency.lockutils [req-7e3403e2-89e1-49f8-98a9-429ff6cf1a02 req-ca5e55fe-1b8b-4ee2-bac7-bb0bdd9229c4 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "5ea73f51-d224-41ed-892b-a137d9985e73-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:25:19 np0005548731 nova_compute[232433]: 2025-12-06 07:25:19.160 232437 DEBUG nova.compute.manager [req-7e3403e2-89e1-49f8-98a9-429ff6cf1a02 req-ca5e55fe-1b8b-4ee2-bac7-bb0bdd9229c4 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 5ea73f51-d224-41ed-892b-a137d9985e73] No waiting events found dispatching network-vif-plugged-e61e57ee-1e76-4d23-ab33-ab23cf69ba70 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 02:25:19 np0005548731 nova_compute[232433]: 2025-12-06 07:25:19.160 232437 WARNING nova.compute.manager [req-7e3403e2-89e1-49f8-98a9-429ff6cf1a02 req-ca5e55fe-1b8b-4ee2-bac7-bb0bdd9229c4 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 5ea73f51-d224-41ed-892b-a137d9985e73] Received unexpected event network-vif-plugged-e61e57ee-1e76-4d23-ab33-ab23cf69ba70 for instance with vm_state rescued and task_state None.#033[00m
Dec  6 02:25:19 np0005548731 nova_compute[232433]: 2025-12-06 07:25:19.660 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:25:19 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:25:19 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:25:19 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:25:19.761 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:25:21 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:25:21 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:25:21 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:25:21.094 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:25:21 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:25:21 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:25:21 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:25:21.765 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:25:22 np0005548731 nova_compute[232433]: 2025-12-06 07:25:22.350 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:25:22 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e264 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:25:23 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:25:23 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:25:23 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:25:23.098 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:25:23 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:25:23 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:25:23 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:25:23.769 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:25:24 np0005548731 nova_compute[232433]: 2025-12-06 07:25:24.663 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:25:24 np0005548731 nova_compute[232433]: 2025-12-06 07:25:24.933 232437 DEBUG oslo_concurrency.lockutils [None req-b8fe46a9-d2c2-4091-a0d3-08081f5eac06 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] Acquiring lock "78b7f315-b870-4184-87db-8078fd170237" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:25:24 np0005548731 nova_compute[232433]: 2025-12-06 07:25:24.934 232437 DEBUG oslo_concurrency.lockutils [None req-b8fe46a9-d2c2-4091-a0d3-08081f5eac06 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] Lock "78b7f315-b870-4184-87db-8078fd170237" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:25:24 np0005548731 nova_compute[232433]: 2025-12-06 07:25:24.948 232437 DEBUG nova.compute.manager [None req-b8fe46a9-d2c2-4091-a0d3-08081f5eac06 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] [instance: 78b7f315-b870-4184-87db-8078fd170237] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Dec  6 02:25:25 np0005548731 nova_compute[232433]: 2025-12-06 07:25:25.027 232437 DEBUG oslo_concurrency.lockutils [None req-b8fe46a9-d2c2-4091-a0d3-08081f5eac06 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:25:25 np0005548731 nova_compute[232433]: 2025-12-06 07:25:25.027 232437 DEBUG oslo_concurrency.lockutils [None req-b8fe46a9-d2c2-4091-a0d3-08081f5eac06 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:25:25 np0005548731 nova_compute[232433]: 2025-12-06 07:25:25.035 232437 DEBUG nova.virt.hardware [None req-b8fe46a9-d2c2-4091-a0d3-08081f5eac06 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Dec  6 02:25:25 np0005548731 nova_compute[232433]: 2025-12-06 07:25:25.035 232437 INFO nova.compute.claims [None req-b8fe46a9-d2c2-4091-a0d3-08081f5eac06 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] [instance: 78b7f315-b870-4184-87db-8078fd170237] Claim successful on node compute-2.ctlplane.example.com#033[00m
Dec  6 02:25:25 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:25:25 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:25:25 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:25:25.101 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:25:25 np0005548731 nova_compute[232433]: 2025-12-06 07:25:25.226 232437 DEBUG oslo_concurrency.processutils [None req-b8fe46a9-d2c2-4091-a0d3-08081f5eac06 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:25:25 np0005548731 nova_compute[232433]: 2025-12-06 07:25:25.264 232437 DEBUG oslo_concurrency.lockutils [None req-c824a71a-795c-421e-aee3-9d1f9ae7b0d4 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Acquiring lock "ea8c0005-4b7a-4697-89ae-91f4bef22e36" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:25:25 np0005548731 nova_compute[232433]: 2025-12-06 07:25:25.265 232437 DEBUG oslo_concurrency.lockutils [None req-c824a71a-795c-421e-aee3-9d1f9ae7b0d4 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Lock "ea8c0005-4b7a-4697-89ae-91f4bef22e36" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:25:25 np0005548731 nova_compute[232433]: 2025-12-06 07:25:25.290 232437 DEBUG nova.compute.manager [None req-c824a71a-795c-421e-aee3-9d1f9ae7b0d4 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] [instance: ea8c0005-4b7a-4697-89ae-91f4bef22e36] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Dec  6 02:25:25 np0005548731 nova_compute[232433]: 2025-12-06 07:25:25.362 232437 DEBUG oslo_concurrency.lockutils [None req-c824a71a-795c-421e-aee3-9d1f9ae7b0d4 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:25:25 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 02:25:25 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1295261342' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 02:25:25 np0005548731 nova_compute[232433]: 2025-12-06 07:25:25.661 232437 DEBUG oslo_concurrency.processutils [None req-b8fe46a9-d2c2-4091-a0d3-08081f5eac06 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.434s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:25:25 np0005548731 nova_compute[232433]: 2025-12-06 07:25:25.666 232437 DEBUG nova.compute.provider_tree [None req-b8fe46a9-d2c2-4091-a0d3-08081f5eac06 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] Inventory has not changed in ProviderTree for provider: 6d00757a-082f-486d-ae84-869a2ba2e6e7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  6 02:25:25 np0005548731 nova_compute[232433]: 2025-12-06 07:25:25.685 232437 DEBUG nova.scheduler.client.report [None req-b8fe46a9-d2c2-4091-a0d3-08081f5eac06 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] Inventory has not changed for provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec  6 02:25:25 np0005548731 nova_compute[232433]: 2025-12-06 07:25:25.709 232437 DEBUG oslo_concurrency.lockutils [None req-b8fe46a9-d2c2-4091-a0d3-08081f5eac06 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.681s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  6 02:25:25 np0005548731 nova_compute[232433]: 2025-12-06 07:25:25.709 232437 DEBUG nova.compute.manager [None req-b8fe46a9-d2c2-4091-a0d3-08081f5eac06 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] [instance: 78b7f315-b870-4184-87db-8078fd170237] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec  6 02:25:25 np0005548731 nova_compute[232433]: 2025-12-06 07:25:25.712 232437 DEBUG oslo_concurrency.lockutils [None req-c824a71a-795c-421e-aee3-9d1f9ae7b0d4 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.350s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  6 02:25:25 np0005548731 nova_compute[232433]: 2025-12-06 07:25:25.723 232437 DEBUG nova.virt.hardware [None req-c824a71a-795c-421e-aee3-9d1f9ae7b0d4 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec  6 02:25:25 np0005548731 nova_compute[232433]: 2025-12-06 07:25:25.723 232437 INFO nova.compute.claims [None req-c824a71a-795c-421e-aee3-9d1f9ae7b0d4 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] [instance: ea8c0005-4b7a-4697-89ae-91f4bef22e36] Claim successful on node compute-2.ctlplane.example.com
Dec  6 02:25:25 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:25:25 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 02:25:25 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:25:25.771 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 02:25:25 np0005548731 nova_compute[232433]: 2025-12-06 07:25:25.806 232437 DEBUG nova.compute.manager [None req-b8fe46a9-d2c2-4091-a0d3-08081f5eac06 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] [instance: 78b7f315-b870-4184-87db-8078fd170237] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec  6 02:25:25 np0005548731 nova_compute[232433]: 2025-12-06 07:25:25.807 232437 DEBUG nova.network.neutron [None req-b8fe46a9-d2c2-4091-a0d3-08081f5eac06 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] [instance: 78b7f315-b870-4184-87db-8078fd170237] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec  6 02:25:25 np0005548731 nova_compute[232433]: 2025-12-06 07:25:25.840 232437 INFO nova.virt.libvirt.driver [None req-b8fe46a9-d2c2-4091-a0d3-08081f5eac06 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] [instance: 78b7f315-b870-4184-87db-8078fd170237] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec  6 02:25:25 np0005548731 nova_compute[232433]: 2025-12-06 07:25:25.865 232437 DEBUG nova.compute.manager [None req-b8fe46a9-d2c2-4091-a0d3-08081f5eac06 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] [instance: 78b7f315-b870-4184-87db-8078fd170237] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec  6 02:25:25 np0005548731 nova_compute[232433]: 2025-12-06 07:25:25.911 232437 DEBUG oslo_concurrency.processutils [None req-c824a71a-795c-421e-aee3-9d1f9ae7b0d4 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  6 02:25:25 np0005548731 nova_compute[232433]: 2025-12-06 07:25:25.991 232437 DEBUG nova.compute.manager [None req-b8fe46a9-d2c2-4091-a0d3-08081f5eac06 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] [instance: 78b7f315-b870-4184-87db-8078fd170237] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec  6 02:25:25 np0005548731 nova_compute[232433]: 2025-12-06 07:25:25.993 232437 DEBUG nova.virt.libvirt.driver [None req-b8fe46a9-d2c2-4091-a0d3-08081f5eac06 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] [instance: 78b7f315-b870-4184-87db-8078fd170237] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec  6 02:25:25 np0005548731 nova_compute[232433]: 2025-12-06 07:25:25.993 232437 INFO nova.virt.libvirt.driver [None req-b8fe46a9-d2c2-4091-a0d3-08081f5eac06 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] [instance: 78b7f315-b870-4184-87db-8078fd170237] Creating image(s)
Dec  6 02:25:26 np0005548731 nova_compute[232433]: 2025-12-06 07:25:26.023 232437 DEBUG nova.storage.rbd_utils [None req-b8fe46a9-d2c2-4091-a0d3-08081f5eac06 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] rbd image 78b7f315-b870-4184-87db-8078fd170237_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec  6 02:25:26 np0005548731 nova_compute[232433]: 2025-12-06 07:25:26.056 232437 DEBUG nova.storage.rbd_utils [None req-b8fe46a9-d2c2-4091-a0d3-08081f5eac06 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] rbd image 78b7f315-b870-4184-87db-8078fd170237_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec  6 02:25:26 np0005548731 nova_compute[232433]: 2025-12-06 07:25:26.092 232437 DEBUG nova.storage.rbd_utils [None req-b8fe46a9-d2c2-4091-a0d3-08081f5eac06 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] rbd image 78b7f315-b870-4184-87db-8078fd170237_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec  6 02:25:26 np0005548731 nova_compute[232433]: 2025-12-06 07:25:26.095 232437 DEBUG oslo_concurrency.processutils [None req-b8fe46a9-d2c2-4091-a0d3-08081f5eac06 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  6 02:25:26 np0005548731 nova_compute[232433]: 2025-12-06 07:25:26.134 232437 DEBUG nova.policy [None req-b8fe46a9-d2c2-4091-a0d3-08081f5eac06 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'bf140baee82f408e8fafa81062146641', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'c5036a5cbf7d44fd809d00942afd76e6', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec  6 02:25:26 np0005548731 nova_compute[232433]: 2025-12-06 07:25:26.154 232437 DEBUG oslo_concurrency.processutils [None req-b8fe46a9-d2c2-4091-a0d3-08081f5eac06 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  6 02:25:26 np0005548731 nova_compute[232433]: 2025-12-06 07:25:26.155 232437 DEBUG oslo_concurrency.lockutils [None req-b8fe46a9-d2c2-4091-a0d3-08081f5eac06 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] Acquiring lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  6 02:25:26 np0005548731 nova_compute[232433]: 2025-12-06 07:25:26.155 232437 DEBUG oslo_concurrency.lockutils [None req-b8fe46a9-d2c2-4091-a0d3-08081f5eac06 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  6 02:25:26 np0005548731 nova_compute[232433]: 2025-12-06 07:25:26.156 232437 DEBUG oslo_concurrency.lockutils [None req-b8fe46a9-d2c2-4091-a0d3-08081f5eac06 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  6 02:25:26 np0005548731 nova_compute[232433]: 2025-12-06 07:25:26.183 232437 DEBUG nova.storage.rbd_utils [None req-b8fe46a9-d2c2-4091-a0d3-08081f5eac06 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] rbd image 78b7f315-b870-4184-87db-8078fd170237_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec  6 02:25:26 np0005548731 nova_compute[232433]: 2025-12-06 07:25:26.188 232437 DEBUG oslo_concurrency.processutils [None req-b8fe46a9-d2c2-4091-a0d3-08081f5eac06 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef 78b7f315-b870-4184-87db-8078fd170237_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  6 02:25:26 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 02:25:26 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2019194234' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 02:25:26 np0005548731 nova_compute[232433]: 2025-12-06 07:25:26.380 232437 DEBUG oslo_concurrency.processutils [None req-c824a71a-795c-421e-aee3-9d1f9ae7b0d4 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.469s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  6 02:25:26 np0005548731 nova_compute[232433]: 2025-12-06 07:25:26.386 232437 DEBUG nova.compute.provider_tree [None req-c824a71a-795c-421e-aee3-9d1f9ae7b0d4 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Inventory has not changed in ProviderTree for provider: 6d00757a-082f-486d-ae84-869a2ba2e6e7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec  6 02:25:26 np0005548731 nova_compute[232433]: 2025-12-06 07:25:26.407 232437 DEBUG nova.scheduler.client.report [None req-c824a71a-795c-421e-aee3-9d1f9ae7b0d4 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Inventory has not changed for provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec  6 02:25:26 np0005548731 nova_compute[232433]: 2025-12-06 07:25:26.438 232437 DEBUG oslo_concurrency.lockutils [None req-c824a71a-795c-421e-aee3-9d1f9ae7b0d4 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.726s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  6 02:25:26 np0005548731 nova_compute[232433]: 2025-12-06 07:25:26.438 232437 DEBUG nova.compute.manager [None req-c824a71a-795c-421e-aee3-9d1f9ae7b0d4 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] [instance: ea8c0005-4b7a-4697-89ae-91f4bef22e36] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec  6 02:25:26 np0005548731 nova_compute[232433]: 2025-12-06 07:25:26.477 232437 DEBUG nova.compute.manager [None req-c824a71a-795c-421e-aee3-9d1f9ae7b0d4 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] [instance: ea8c0005-4b7a-4697-89ae-91f4bef22e36] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec  6 02:25:26 np0005548731 nova_compute[232433]: 2025-12-06 07:25:26.478 232437 DEBUG nova.network.neutron [None req-c824a71a-795c-421e-aee3-9d1f9ae7b0d4 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] [instance: ea8c0005-4b7a-4697-89ae-91f4bef22e36] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec  6 02:25:26 np0005548731 nova_compute[232433]: 2025-12-06 07:25:26.497 232437 INFO nova.virt.libvirt.driver [None req-c824a71a-795c-421e-aee3-9d1f9ae7b0d4 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] [instance: ea8c0005-4b7a-4697-89ae-91f4bef22e36] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec  6 02:25:26 np0005548731 nova_compute[232433]: 2025-12-06 07:25:26.516 232437 DEBUG nova.compute.manager [None req-c824a71a-795c-421e-aee3-9d1f9ae7b0d4 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] [instance: ea8c0005-4b7a-4697-89ae-91f4bef22e36] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec  6 02:25:26 np0005548731 nova_compute[232433]: 2025-12-06 07:25:26.854 232437 DEBUG nova.compute.manager [None req-c824a71a-795c-421e-aee3-9d1f9ae7b0d4 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] [instance: ea8c0005-4b7a-4697-89ae-91f4bef22e36] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec  6 02:25:26 np0005548731 nova_compute[232433]: 2025-12-06 07:25:26.855 232437 DEBUG nova.virt.libvirt.driver [None req-c824a71a-795c-421e-aee3-9d1f9ae7b0d4 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] [instance: ea8c0005-4b7a-4697-89ae-91f4bef22e36] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec  6 02:25:26 np0005548731 nova_compute[232433]: 2025-12-06 07:25:26.855 232437 INFO nova.virt.libvirt.driver [None req-c824a71a-795c-421e-aee3-9d1f9ae7b0d4 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] [instance: ea8c0005-4b7a-4697-89ae-91f4bef22e36] Creating image(s)
Dec  6 02:25:26 np0005548731 nova_compute[232433]: 2025-12-06 07:25:26.881 232437 DEBUG nova.storage.rbd_utils [None req-c824a71a-795c-421e-aee3-9d1f9ae7b0d4 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] rbd image ea8c0005-4b7a-4697-89ae-91f4bef22e36_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec  6 02:25:26 np0005548731 nova_compute[232433]: 2025-12-06 07:25:26.905 232437 DEBUG nova.storage.rbd_utils [None req-c824a71a-795c-421e-aee3-9d1f9ae7b0d4 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] rbd image ea8c0005-4b7a-4697-89ae-91f4bef22e36_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec  6 02:25:26 np0005548731 nova_compute[232433]: 2025-12-06 07:25:26.929 232437 DEBUG nova.storage.rbd_utils [None req-c824a71a-795c-421e-aee3-9d1f9ae7b0d4 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] rbd image ea8c0005-4b7a-4697-89ae-91f4bef22e36_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec  6 02:25:26 np0005548731 nova_compute[232433]: 2025-12-06 07:25:26.932 232437 DEBUG oslo_concurrency.processutils [None req-c824a71a-795c-421e-aee3-9d1f9ae7b0d4 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  6 02:25:26 np0005548731 nova_compute[232433]: 2025-12-06 07:25:26.991 232437 DEBUG oslo_concurrency.processutils [None req-c824a71a-795c-421e-aee3-9d1f9ae7b0d4 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  6 02:25:26 np0005548731 nova_compute[232433]: 2025-12-06 07:25:26.992 232437 DEBUG oslo_concurrency.lockutils [None req-c824a71a-795c-421e-aee3-9d1f9ae7b0d4 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Acquiring lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  6 02:25:26 np0005548731 nova_compute[232433]: 2025-12-06 07:25:26.992 232437 DEBUG oslo_concurrency.lockutils [None req-c824a71a-795c-421e-aee3-9d1f9ae7b0d4 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  6 02:25:26 np0005548731 nova_compute[232433]: 2025-12-06 07:25:26.993 232437 DEBUG oslo_concurrency.lockutils [None req-c824a71a-795c-421e-aee3-9d1f9ae7b0d4 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  6 02:25:27 np0005548731 nova_compute[232433]: 2025-12-06 07:25:27.019 232437 DEBUG nova.storage.rbd_utils [None req-c824a71a-795c-421e-aee3-9d1f9ae7b0d4 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] rbd image ea8c0005-4b7a-4697-89ae-91f4bef22e36_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec  6 02:25:27 np0005548731 nova_compute[232433]: 2025-12-06 07:25:27.023 232437 DEBUG oslo_concurrency.processutils [None req-c824a71a-795c-421e-aee3-9d1f9ae7b0d4 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef ea8c0005-4b7a-4697-89ae-91f4bef22e36_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  6 02:25:27 np0005548731 nova_compute[232433]: 2025-12-06 07:25:27.043 232437 DEBUG oslo_concurrency.processutils [None req-b8fe46a9-d2c2-4091-a0d3-08081f5eac06 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef 78b7f315-b870-4184-87db-8078fd170237_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.856s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  6 02:25:27 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:25:27 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:25:27 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:25:27.104 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:25:27 np0005548731 nova_compute[232433]: 2025-12-06 07:25:27.118 232437 DEBUG nova.policy [None req-c824a71a-795c-421e-aee3-9d1f9ae7b0d4 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'd67c136e82ad4001b000848d75eef50d', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '88f5b34244614321a9b6e902eaba0ece', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec  6 02:25:27 np0005548731 nova_compute[232433]: 2025-12-06 07:25:27.126 232437 DEBUG nova.storage.rbd_utils [None req-b8fe46a9-d2c2-4091-a0d3-08081f5eac06 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] resizing rbd image 78b7f315-b870-4184-87db-8078fd170237_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Dec  6 02:25:27 np0005548731 nova_compute[232433]: 2025-12-06 07:25:27.220 232437 DEBUG nova.objects.instance [None req-b8fe46a9-d2c2-4091-a0d3-08081f5eac06 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] Lazy-loading 'migration_context' on Instance uuid 78b7f315-b870-4184-87db-8078fd170237 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec  6 02:25:27 np0005548731 nova_compute[232433]: 2025-12-06 07:25:27.267 232437 DEBUG nova.virt.libvirt.driver [None req-b8fe46a9-d2c2-4091-a0d3-08081f5eac06 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] [instance: 78b7f315-b870-4184-87db-8078fd170237] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec  6 02:25:27 np0005548731 nova_compute[232433]: 2025-12-06 07:25:27.268 232437 DEBUG nova.virt.libvirt.driver [None req-b8fe46a9-d2c2-4091-a0d3-08081f5eac06 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] [instance: 78b7f315-b870-4184-87db-8078fd170237] Ensure instance console log exists: /var/lib/nova/instances/78b7f315-b870-4184-87db-8078fd170237/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec  6 02:25:27 np0005548731 nova_compute[232433]: 2025-12-06 07:25:27.268 232437 DEBUG oslo_concurrency.lockutils [None req-b8fe46a9-d2c2-4091-a0d3-08081f5eac06 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  6 02:25:27 np0005548731 nova_compute[232433]: 2025-12-06 07:25:27.268 232437 DEBUG oslo_concurrency.lockutils [None req-b8fe46a9-d2c2-4091-a0d3-08081f5eac06 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  6 02:25:27 np0005548731 nova_compute[232433]: 2025-12-06 07:25:27.268 232437 DEBUG oslo_concurrency.lockutils [None req-b8fe46a9-d2c2-4091-a0d3-08081f5eac06 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  6 02:25:27 np0005548731 nova_compute[232433]: 2025-12-06 07:25:27.351 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 02:25:27 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:25:27 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:25:27 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:25:27.774 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:25:27 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e264 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:25:28 np0005548731 nova_compute[232433]: 2025-12-06 07:25:28.048 232437 DEBUG oslo_concurrency.processutils [None req-c824a71a-795c-421e-aee3-9d1f9ae7b0d4 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef ea8c0005-4b7a-4697-89ae-91f4bef22e36_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.026s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  6 02:25:28 np0005548731 ceph-osd[80147]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #50. Immutable memtables: 7.
Dec  6 02:25:28 np0005548731 nova_compute[232433]: 2025-12-06 07:25:28.115 232437 DEBUG nova.storage.rbd_utils [None req-c824a71a-795c-421e-aee3-9d1f9ae7b0d4 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] resizing rbd image ea8c0005-4b7a-4697-89ae-91f4bef22e36_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Dec  6 02:25:28 np0005548731 nova_compute[232433]: 2025-12-06 07:25:28.249 232437 DEBUG nova.network.neutron [None req-b8fe46a9-d2c2-4091-a0d3-08081f5eac06 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] [instance: 78b7f315-b870-4184-87db-8078fd170237] Successfully created port: 59fc2ff6-f1bb-4cda-a487-d011530f0618 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec  6 02:25:28 np0005548731 nova_compute[232433]: 2025-12-06 07:25:28.287 232437 DEBUG nova.objects.instance [None req-c824a71a-795c-421e-aee3-9d1f9ae7b0d4 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Lazy-loading 'migration_context' on Instance uuid ea8c0005-4b7a-4697-89ae-91f4bef22e36 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec  6 02:25:28 np0005548731 nova_compute[232433]: 2025-12-06 07:25:28.312 232437 DEBUG nova.virt.libvirt.driver [None req-c824a71a-795c-421e-aee3-9d1f9ae7b0d4 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] [instance: ea8c0005-4b7a-4697-89ae-91f4bef22e36] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec  6 02:25:28 np0005548731 nova_compute[232433]: 2025-12-06 07:25:28.312 232437 DEBUG nova.virt.libvirt.driver [None req-c824a71a-795c-421e-aee3-9d1f9ae7b0d4 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] [instance: ea8c0005-4b7a-4697-89ae-91f4bef22e36] Ensure instance console log exists: /var/lib/nova/instances/ea8c0005-4b7a-4697-89ae-91f4bef22e36/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec  6 02:25:28 np0005548731 nova_compute[232433]: 2025-12-06 07:25:28.313 232437 DEBUG oslo_concurrency.lockutils [None req-c824a71a-795c-421e-aee3-9d1f9ae7b0d4 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  6 02:25:28 np0005548731 nova_compute[232433]: 2025-12-06 07:25:28.313 232437 DEBUG oslo_concurrency.lockutils [None req-c824a71a-795c-421e-aee3-9d1f9ae7b0d4 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  6 02:25:28 np0005548731 nova_compute[232433]: 2025-12-06 07:25:28.314 232437 DEBUG oslo_concurrency.lockutils [None req-c824a71a-795c-421e-aee3-9d1f9ae7b0d4 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  6 02:25:29 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:25:29 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:25:29 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:25:29.107 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:25:29 np0005548731 nova_compute[232433]: 2025-12-06 07:25:29.187 232437 DEBUG nova.network.neutron [None req-c824a71a-795c-421e-aee3-9d1f9ae7b0d4 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] [instance: ea8c0005-4b7a-4697-89ae-91f4bef22e36] Successfully created port: 0e5e71bc-7098-4091-938e-6299f989917f _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec  6 02:25:29 np0005548731 nova_compute[232433]: 2025-12-06 07:25:29.682 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 02:25:29 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:25:29 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:25:29 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:25:29.776 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:25:30 np0005548731 nova_compute[232433]: 2025-12-06 07:25:30.127 232437 DEBUG nova.network.neutron [None req-c824a71a-795c-421e-aee3-9d1f9ae7b0d4 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] [instance: ea8c0005-4b7a-4697-89ae-91f4bef22e36] Successfully updated port: 0e5e71bc-7098-4091-938e-6299f989917f _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec  6 02:25:30 np0005548731 nova_compute[232433]: 2025-12-06 07:25:30.150 232437 DEBUG nova.network.neutron [None req-b8fe46a9-d2c2-4091-a0d3-08081f5eac06 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] [instance: 78b7f315-b870-4184-87db-8078fd170237] Successfully updated port: 59fc2ff6-f1bb-4cda-a487-d011530f0618 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec  6 02:25:30 np0005548731 nova_compute[232433]: 2025-12-06 07:25:30.165 232437 DEBUG oslo_concurrency.lockutils [None req-c824a71a-795c-421e-aee3-9d1f9ae7b0d4 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Acquiring lock "refresh_cache-ea8c0005-4b7a-4697-89ae-91f4bef22e36" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec  6 02:25:30 np0005548731 nova_compute[232433]: 2025-12-06 07:25:30.165 232437 DEBUG oslo_concurrency.lockutils [None req-c824a71a-795c-421e-aee3-9d1f9ae7b0d4 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Acquired lock "refresh_cache-ea8c0005-4b7a-4697-89ae-91f4bef22e36" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec  6 02:25:30 np0005548731 nova_compute[232433]: 2025-12-06 07:25:30.165 232437 DEBUG nova.network.neutron [None req-c824a71a-795c-421e-aee3-9d1f9ae7b0d4 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] [instance: ea8c0005-4b7a-4697-89ae-91f4bef22e36] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec  6 02:25:30 np0005548731 nova_compute[232433]: 2025-12-06 07:25:30.169 232437 DEBUG oslo_concurrency.lockutils [None req-b8fe46a9-d2c2-4091-a0d3-08081f5eac06 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] Acquiring lock "refresh_cache-78b7f315-b870-4184-87db-8078fd170237" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec  6 02:25:30 np0005548731 nova_compute[232433]: 2025-12-06 07:25:30.169 232437 DEBUG oslo_concurrency.lockutils [None req-b8fe46a9-d2c2-4091-a0d3-08081f5eac06 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] Acquired lock "refresh_cache-78b7f315-b870-4184-87db-8078fd170237" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec  6 02:25:30 np0005548731 nova_compute[232433]: 2025-12-06 07:25:30.169 232437 DEBUG nova.network.neutron [None req-b8fe46a9-d2c2-4091-a0d3-08081f5eac06 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] [instance: 78b7f315-b870-4184-87db-8078fd170237] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec  6 02:25:30 np0005548731 nova_compute[232433]: 2025-12-06 07:25:30.219 232437 DEBUG nova.compute.manager [req-41334835-1759-4bd2-9447-49eb3ad9e263 req-a998d240-a699-49ca-8bb0-604ae7198a7c 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: ea8c0005-4b7a-4697-89ae-91f4bef22e36] Received event network-changed-0e5e71bc-7098-4091-938e-6299f989917f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:25:30 np0005548731 nova_compute[232433]: 2025-12-06 07:25:30.219 232437 DEBUG nova.compute.manager [req-41334835-1759-4bd2-9447-49eb3ad9e263 req-a998d240-a699-49ca-8bb0-604ae7198a7c 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: ea8c0005-4b7a-4697-89ae-91f4bef22e36] Refreshing instance network info cache due to event network-changed-0e5e71bc-7098-4091-938e-6299f989917f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  6 02:25:30 np0005548731 nova_compute[232433]: 2025-12-06 07:25:30.219 232437 DEBUG oslo_concurrency.lockutils [req-41334835-1759-4bd2-9447-49eb3ad9e263 req-a998d240-a699-49ca-8bb0-604ae7198a7c 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-ea8c0005-4b7a-4697-89ae-91f4bef22e36" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 02:25:30 np0005548731 nova_compute[232433]: 2025-12-06 07:25:30.296 232437 DEBUG nova.network.neutron [None req-b8fe46a9-d2c2-4091-a0d3-08081f5eac06 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] [instance: 78b7f315-b870-4184-87db-8078fd170237] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec  6 02:25:30 np0005548731 nova_compute[232433]: 2025-12-06 07:25:30.300 232437 DEBUG nova.network.neutron [None req-c824a71a-795c-421e-aee3-9d1f9ae7b0d4 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] [instance: ea8c0005-4b7a-4697-89ae-91f4bef22e36] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec  6 02:25:30 np0005548731 nova_compute[232433]: 2025-12-06 07:25:30.619 232437 DEBUG nova.compute.manager [req-1bc60b95-59cc-4a64-bab9-7c61a9f3c913 req-b58a4aac-1af1-4ab2-994c-a79dba1bc1e7 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 78b7f315-b870-4184-87db-8078fd170237] Received event network-changed-59fc2ff6-f1bb-4cda-a487-d011530f0618 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:25:30 np0005548731 nova_compute[232433]: 2025-12-06 07:25:30.620 232437 DEBUG nova.compute.manager [req-1bc60b95-59cc-4a64-bab9-7c61a9f3c913 req-b58a4aac-1af1-4ab2-994c-a79dba1bc1e7 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 78b7f315-b870-4184-87db-8078fd170237] Refreshing instance network info cache due to event network-changed-59fc2ff6-f1bb-4cda-a487-d011530f0618. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  6 02:25:30 np0005548731 nova_compute[232433]: 2025-12-06 07:25:30.620 232437 DEBUG oslo_concurrency.lockutils [req-1bc60b95-59cc-4a64-bab9-7c61a9f3c913 req-b58a4aac-1af1-4ab2-994c-a79dba1bc1e7 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-78b7f315-b870-4184-87db-8078fd170237" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 02:25:31 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:25:31 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:25:31 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:25:31.109 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:25:31 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:25:31 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:25:31 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:25:31.779 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:25:32 np0005548731 nova_compute[232433]: 2025-12-06 07:25:32.353 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:25:32 np0005548731 nova_compute[232433]: 2025-12-06 07:25:32.366 232437 DEBUG nova.network.neutron [None req-c824a71a-795c-421e-aee3-9d1f9ae7b0d4 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] [instance: ea8c0005-4b7a-4697-89ae-91f4bef22e36] Updating instance_info_cache with network_info: [{"id": "0e5e71bc-7098-4091-938e-6299f989917f", "address": "fa:16:3e:ec:96:d5", "network": {"id": "7c014e4e-a182-4f60-8285-20525bc99e5a", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-602234112-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "88f5b34244614321a9b6e902eaba0ece", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0e5e71bc-70", "ovs_interfaceid": "0e5e71bc-7098-4091-938e-6299f989917f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 02:25:32 np0005548731 nova_compute[232433]: 2025-12-06 07:25:32.392 232437 DEBUG oslo_concurrency.lockutils [None req-c824a71a-795c-421e-aee3-9d1f9ae7b0d4 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Releasing lock "refresh_cache-ea8c0005-4b7a-4697-89ae-91f4bef22e36" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 02:25:32 np0005548731 nova_compute[232433]: 2025-12-06 07:25:32.392 232437 DEBUG nova.compute.manager [None req-c824a71a-795c-421e-aee3-9d1f9ae7b0d4 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] [instance: ea8c0005-4b7a-4697-89ae-91f4bef22e36] Instance network_info: |[{"id": "0e5e71bc-7098-4091-938e-6299f989917f", "address": "fa:16:3e:ec:96:d5", "network": {"id": "7c014e4e-a182-4f60-8285-20525bc99e5a", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-602234112-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "88f5b34244614321a9b6e902eaba0ece", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0e5e71bc-70", "ovs_interfaceid": "0e5e71bc-7098-4091-938e-6299f989917f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Dec  6 02:25:32 np0005548731 nova_compute[232433]: 2025-12-06 07:25:32.393 232437 DEBUG oslo_concurrency.lockutils [req-41334835-1759-4bd2-9447-49eb3ad9e263 req-a998d240-a699-49ca-8bb0-604ae7198a7c 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-ea8c0005-4b7a-4697-89ae-91f4bef22e36" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 02:25:32 np0005548731 nova_compute[232433]: 2025-12-06 07:25:32.393 232437 DEBUG nova.network.neutron [req-41334835-1759-4bd2-9447-49eb3ad9e263 req-a998d240-a699-49ca-8bb0-604ae7198a7c 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: ea8c0005-4b7a-4697-89ae-91f4bef22e36] Refreshing network info cache for port 0e5e71bc-7098-4091-938e-6299f989917f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  6 02:25:32 np0005548731 nova_compute[232433]: 2025-12-06 07:25:32.397 232437 DEBUG nova.virt.libvirt.driver [None req-c824a71a-795c-421e-aee3-9d1f9ae7b0d4 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] [instance: ea8c0005-4b7a-4697-89ae-91f4bef22e36] Start _get_guest_xml network_info=[{"id": "0e5e71bc-7098-4091-938e-6299f989917f", "address": "fa:16:3e:ec:96:d5", "network": {"id": "7c014e4e-a182-4f60-8285-20525bc99e5a", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-602234112-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "88f5b34244614321a9b6e902eaba0ece", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0e5e71bc-70", "ovs_interfaceid": "0e5e71bc-7098-4091-938e-6299f989917f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'device_type': 'disk', 'encryption_secret_uuid': None, 'disk_bus': 'virtio', 'boot_index': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'encryption_options': None, 'size': 0, 'encrypted': False, 'image_id': '6efab05d-c7cf-4770-a5c3-c806a2739063'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec  6 02:25:32 np0005548731 nova_compute[232433]: 2025-12-06 07:25:32.400 232437 DEBUG nova.network.neutron [None req-b8fe46a9-d2c2-4091-a0d3-08081f5eac06 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] [instance: 78b7f315-b870-4184-87db-8078fd170237] Updating instance_info_cache with network_info: [{"id": "59fc2ff6-f1bb-4cda-a487-d011530f0618", "address": "fa:16:3e:ef:72:dc", "network": {"id": "3935518d-ad39-4c36-90ae-f8fe979eafc5", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-32965635-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.2", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "c5036a5cbf7d44fd809d00942afd76e6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap59fc2ff6-f1", "ovs_interfaceid": "59fc2ff6-f1bb-4cda-a487-d011530f0618", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 02:25:32 np0005548731 nova_compute[232433]: 2025-12-06 07:25:32.403 232437 WARNING nova.virt.libvirt.driver [None req-c824a71a-795c-421e-aee3-9d1f9ae7b0d4 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  6 02:25:32 np0005548731 nova_compute[232433]: 2025-12-06 07:25:32.421 232437 DEBUG nova.virt.libvirt.host [None req-c824a71a-795c-421e-aee3-9d1f9ae7b0d4 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec  6 02:25:32 np0005548731 nova_compute[232433]: 2025-12-06 07:25:32.422 232437 DEBUG nova.virt.libvirt.host [None req-c824a71a-795c-421e-aee3-9d1f9ae7b0d4 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec  6 02:25:32 np0005548731 nova_compute[232433]: 2025-12-06 07:25:32.426 232437 DEBUG nova.virt.libvirt.host [None req-c824a71a-795c-421e-aee3-9d1f9ae7b0d4 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec  6 02:25:32 np0005548731 nova_compute[232433]: 2025-12-06 07:25:32.426 232437 DEBUG nova.virt.libvirt.host [None req-c824a71a-795c-421e-aee3-9d1f9ae7b0d4 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec  6 02:25:32 np0005548731 nova_compute[232433]: 2025-12-06 07:25:32.428 232437 DEBUG nova.virt.libvirt.driver [None req-c824a71a-795c-421e-aee3-9d1f9ae7b0d4 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec  6 02:25:32 np0005548731 nova_compute[232433]: 2025-12-06 07:25:32.428 232437 DEBUG nova.virt.hardware [None req-c824a71a-795c-421e-aee3-9d1f9ae7b0d4 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-06T06:56:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='25848a18-11d9-4f11-80b5-5d005675c76d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec  6 02:25:32 np0005548731 nova_compute[232433]: 2025-12-06 07:25:32.428 232437 DEBUG nova.virt.hardware [None req-c824a71a-795c-421e-aee3-9d1f9ae7b0d4 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec  6 02:25:32 np0005548731 nova_compute[232433]: 2025-12-06 07:25:32.429 232437 DEBUG nova.virt.hardware [None req-c824a71a-795c-421e-aee3-9d1f9ae7b0d4 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec  6 02:25:32 np0005548731 nova_compute[232433]: 2025-12-06 07:25:32.429 232437 DEBUG nova.virt.hardware [None req-c824a71a-795c-421e-aee3-9d1f9ae7b0d4 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec  6 02:25:32 np0005548731 nova_compute[232433]: 2025-12-06 07:25:32.429 232437 DEBUG nova.virt.hardware [None req-c824a71a-795c-421e-aee3-9d1f9ae7b0d4 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec  6 02:25:32 np0005548731 nova_compute[232433]: 2025-12-06 07:25:32.429 232437 DEBUG nova.virt.hardware [None req-c824a71a-795c-421e-aee3-9d1f9ae7b0d4 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec  6 02:25:32 np0005548731 nova_compute[232433]: 2025-12-06 07:25:32.429 232437 DEBUG nova.virt.hardware [None req-c824a71a-795c-421e-aee3-9d1f9ae7b0d4 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec  6 02:25:32 np0005548731 nova_compute[232433]: 2025-12-06 07:25:32.430 232437 DEBUG nova.virt.hardware [None req-c824a71a-795c-421e-aee3-9d1f9ae7b0d4 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec  6 02:25:32 np0005548731 nova_compute[232433]: 2025-12-06 07:25:32.430 232437 DEBUG nova.virt.hardware [None req-c824a71a-795c-421e-aee3-9d1f9ae7b0d4 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec  6 02:25:32 np0005548731 nova_compute[232433]: 2025-12-06 07:25:32.430 232437 DEBUG nova.virt.hardware [None req-c824a71a-795c-421e-aee3-9d1f9ae7b0d4 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec  6 02:25:32 np0005548731 nova_compute[232433]: 2025-12-06 07:25:32.430 232437 DEBUG nova.virt.hardware [None req-c824a71a-795c-421e-aee3-9d1f9ae7b0d4 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec  6 02:25:32 np0005548731 nova_compute[232433]: 2025-12-06 07:25:32.433 232437 DEBUG oslo_concurrency.processutils [None req-c824a71a-795c-421e-aee3-9d1f9ae7b0d4 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:25:32 np0005548731 nova_compute[232433]: 2025-12-06 07:25:32.456 232437 DEBUG oslo_concurrency.lockutils [None req-b8fe46a9-d2c2-4091-a0d3-08081f5eac06 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] Releasing lock "refresh_cache-78b7f315-b870-4184-87db-8078fd170237" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 02:25:32 np0005548731 nova_compute[232433]: 2025-12-06 07:25:32.457 232437 DEBUG nova.compute.manager [None req-b8fe46a9-d2c2-4091-a0d3-08081f5eac06 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] [instance: 78b7f315-b870-4184-87db-8078fd170237] Instance network_info: |[{"id": "59fc2ff6-f1bb-4cda-a487-d011530f0618", "address": "fa:16:3e:ef:72:dc", "network": {"id": "3935518d-ad39-4c36-90ae-f8fe979eafc5", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-32965635-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.2", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "c5036a5cbf7d44fd809d00942afd76e6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap59fc2ff6-f1", "ovs_interfaceid": "59fc2ff6-f1bb-4cda-a487-d011530f0618", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Dec  6 02:25:32 np0005548731 nova_compute[232433]: 2025-12-06 07:25:32.457 232437 DEBUG oslo_concurrency.lockutils [req-1bc60b95-59cc-4a64-bab9-7c61a9f3c913 req-b58a4aac-1af1-4ab2-994c-a79dba1bc1e7 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-78b7f315-b870-4184-87db-8078fd170237" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 02:25:32 np0005548731 nova_compute[232433]: 2025-12-06 07:25:32.458 232437 DEBUG nova.network.neutron [req-1bc60b95-59cc-4a64-bab9-7c61a9f3c913 req-b58a4aac-1af1-4ab2-994c-a79dba1bc1e7 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 78b7f315-b870-4184-87db-8078fd170237] Refreshing network info cache for port 59fc2ff6-f1bb-4cda-a487-d011530f0618 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  6 02:25:32 np0005548731 nova_compute[232433]: 2025-12-06 07:25:32.460 232437 DEBUG nova.virt.libvirt.driver [None req-b8fe46a9-d2c2-4091-a0d3-08081f5eac06 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] [instance: 78b7f315-b870-4184-87db-8078fd170237] Start _get_guest_xml network_info=[{"id": "59fc2ff6-f1bb-4cda-a487-d011530f0618", "address": "fa:16:3e:ef:72:dc", "network": {"id": "3935518d-ad39-4c36-90ae-f8fe979eafc5", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-32965635-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.2", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "c5036a5cbf7d44fd809d00942afd76e6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap59fc2ff6-f1", "ovs_interfaceid": "59fc2ff6-f1bb-4cda-a487-d011530f0618", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'device_type': 'disk', 'encryption_secret_uuid': None, 'disk_bus': 'virtio', 'boot_index': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'encryption_options': None, 'size': 0, 'encrypted': False, 'image_id': '6efab05d-c7cf-4770-a5c3-c806a2739063'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec  6 02:25:32 np0005548731 nova_compute[232433]: 2025-12-06 07:25:32.464 232437 WARNING nova.virt.libvirt.driver [None req-b8fe46a9-d2c2-4091-a0d3-08081f5eac06 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  6 02:25:32 np0005548731 nova_compute[232433]: 2025-12-06 07:25:32.468 232437 DEBUG nova.virt.libvirt.host [None req-b8fe46a9-d2c2-4091-a0d3-08081f5eac06 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec  6 02:25:32 np0005548731 nova_compute[232433]: 2025-12-06 07:25:32.468 232437 DEBUG nova.virt.libvirt.host [None req-b8fe46a9-d2c2-4091-a0d3-08081f5eac06 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec  6 02:25:32 np0005548731 nova_compute[232433]: 2025-12-06 07:25:32.471 232437 DEBUG nova.virt.libvirt.host [None req-b8fe46a9-d2c2-4091-a0d3-08081f5eac06 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec  6 02:25:32 np0005548731 nova_compute[232433]: 2025-12-06 07:25:32.471 232437 DEBUG nova.virt.libvirt.host [None req-b8fe46a9-d2c2-4091-a0d3-08081f5eac06 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec  6 02:25:32 np0005548731 nova_compute[232433]: 2025-12-06 07:25:32.472 232437 DEBUG nova.virt.libvirt.driver [None req-b8fe46a9-d2c2-4091-a0d3-08081f5eac06 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec  6 02:25:32 np0005548731 nova_compute[232433]: 2025-12-06 07:25:32.472 232437 DEBUG nova.virt.hardware [None req-b8fe46a9-d2c2-4091-a0d3-08081f5eac06 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-06T06:56:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='25848a18-11d9-4f11-80b5-5d005675c76d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec  6 02:25:32 np0005548731 nova_compute[232433]: 2025-12-06 07:25:32.473 232437 DEBUG nova.virt.hardware [None req-b8fe46a9-d2c2-4091-a0d3-08081f5eac06 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec  6 02:25:32 np0005548731 nova_compute[232433]: 2025-12-06 07:25:32.473 232437 DEBUG nova.virt.hardware [None req-b8fe46a9-d2c2-4091-a0d3-08081f5eac06 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec  6 02:25:32 np0005548731 nova_compute[232433]: 2025-12-06 07:25:32.473 232437 DEBUG nova.virt.hardware [None req-b8fe46a9-d2c2-4091-a0d3-08081f5eac06 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec  6 02:25:32 np0005548731 nova_compute[232433]: 2025-12-06 07:25:32.473 232437 DEBUG nova.virt.hardware [None req-b8fe46a9-d2c2-4091-a0d3-08081f5eac06 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec  6 02:25:32 np0005548731 nova_compute[232433]: 2025-12-06 07:25:32.473 232437 DEBUG nova.virt.hardware [None req-b8fe46a9-d2c2-4091-a0d3-08081f5eac06 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec  6 02:25:32 np0005548731 nova_compute[232433]: 2025-12-06 07:25:32.474 232437 DEBUG nova.virt.hardware [None req-b8fe46a9-d2c2-4091-a0d3-08081f5eac06 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec  6 02:25:32 np0005548731 nova_compute[232433]: 2025-12-06 07:25:32.474 232437 DEBUG nova.virt.hardware [None req-b8fe46a9-d2c2-4091-a0d3-08081f5eac06 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec  6 02:25:32 np0005548731 nova_compute[232433]: 2025-12-06 07:25:32.474 232437 DEBUG nova.virt.hardware [None req-b8fe46a9-d2c2-4091-a0d3-08081f5eac06 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec  6 02:25:32 np0005548731 nova_compute[232433]: 2025-12-06 07:25:32.474 232437 DEBUG nova.virt.hardware [None req-b8fe46a9-d2c2-4091-a0d3-08081f5eac06 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec  6 02:25:32 np0005548731 nova_compute[232433]: 2025-12-06 07:25:32.475 232437 DEBUG nova.virt.hardware [None req-b8fe46a9-d2c2-4091-a0d3-08081f5eac06 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec  6 02:25:32 np0005548731 nova_compute[232433]: 2025-12-06 07:25:32.477 232437 DEBUG oslo_concurrency.processutils [None req-b8fe46a9-d2c2-4091-a0d3-08081f5eac06 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:25:32 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  6 02:25:32 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/996845718' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  6 02:25:32 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  6 02:25:32 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3612033471' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  6 02:25:32 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e264 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:25:33 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:25:33 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:25:33 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:25:33.112 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:25:33 np0005548731 nova_compute[232433]: 2025-12-06 07:25:33.583 232437 DEBUG oslo_concurrency.processutils [None req-c824a71a-795c-421e-aee3-9d1f9ae7b0d4 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.149s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:25:33 np0005548731 nova_compute[232433]: 2025-12-06 07:25:33.605 232437 DEBUG nova.storage.rbd_utils [None req-c824a71a-795c-421e-aee3-9d1f9ae7b0d4 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] rbd image ea8c0005-4b7a-4697-89ae-91f4bef22e36_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:25:33 np0005548731 nova_compute[232433]: 2025-12-06 07:25:33.609 232437 DEBUG oslo_concurrency.processutils [None req-c824a71a-795c-421e-aee3-9d1f9ae7b0d4 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:25:33 np0005548731 nova_compute[232433]: 2025-12-06 07:25:33.630 232437 DEBUG oslo_concurrency.processutils [None req-b8fe46a9-d2c2-4091-a0d3-08081f5eac06 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.152s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:25:33 np0005548731 nova_compute[232433]: 2025-12-06 07:25:33.658 232437 DEBUG nova.storage.rbd_utils [None req-b8fe46a9-d2c2-4091-a0d3-08081f5eac06 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] rbd image 78b7f315-b870-4184-87db-8078fd170237_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:25:33 np0005548731 nova_compute[232433]: 2025-12-06 07:25:33.663 232437 DEBUG oslo_concurrency.processutils [None req-b8fe46a9-d2c2-4091-a0d3-08081f5eac06 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:25:33 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:25:33 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:25:33 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:25:33.782 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:25:34 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  6 02:25:34 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3723579441' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  6 02:25:34 np0005548731 nova_compute[232433]: 2025-12-06 07:25:34.051 232437 DEBUG oslo_concurrency.processutils [None req-c824a71a-795c-421e-aee3-9d1f9ae7b0d4 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.441s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:25:34 np0005548731 nova_compute[232433]: 2025-12-06 07:25:34.052 232437 DEBUG nova.virt.libvirt.vif [None req-c824a71a-795c-421e-aee3-9d1f9ae7b0d4 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T07:25:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-1037063395',display_name='tempest-ServerDiskConfigTestJSON-server-1037063395',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-1037063395',id=108,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='88f5b34244614321a9b6e902eaba0ece',ramdisk_id='',reservation_id='r-qe8qs13c',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerDiskConfigTestJSON-749654875',owner_user_name='tempest-ServerD
iskConfigTestJSON-749654875-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:25:26Z,user_data=None,user_id='d67c136e82ad4001b000848d75eef50d',uuid=ea8c0005-4b7a-4697-89ae-91f4bef22e36,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0e5e71bc-7098-4091-938e-6299f989917f", "address": "fa:16:3e:ec:96:d5", "network": {"id": "7c014e4e-a182-4f60-8285-20525bc99e5a", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-602234112-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "88f5b34244614321a9b6e902eaba0ece", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0e5e71bc-70", "ovs_interfaceid": "0e5e71bc-7098-4091-938e-6299f989917f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec  6 02:25:34 np0005548731 nova_compute[232433]: 2025-12-06 07:25:34.053 232437 DEBUG nova.network.os_vif_util [None req-c824a71a-795c-421e-aee3-9d1f9ae7b0d4 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Converting VIF {"id": "0e5e71bc-7098-4091-938e-6299f989917f", "address": "fa:16:3e:ec:96:d5", "network": {"id": "7c014e4e-a182-4f60-8285-20525bc99e5a", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-602234112-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "88f5b34244614321a9b6e902eaba0ece", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0e5e71bc-70", "ovs_interfaceid": "0e5e71bc-7098-4091-938e-6299f989917f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 02:25:34 np0005548731 nova_compute[232433]: 2025-12-06 07:25:34.053 232437 DEBUG nova.network.os_vif_util [None req-c824a71a-795c-421e-aee3-9d1f9ae7b0d4 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ec:96:d5,bridge_name='br-int',has_traffic_filtering=True,id=0e5e71bc-7098-4091-938e-6299f989917f,network=Network(7c014e4e-a182-4f60-8285-20525bc99e5a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0e5e71bc-70') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 02:25:34 np0005548731 nova_compute[232433]: 2025-12-06 07:25:34.054 232437 DEBUG nova.objects.instance [None req-c824a71a-795c-421e-aee3-9d1f9ae7b0d4 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Lazy-loading 'pci_devices' on Instance uuid ea8c0005-4b7a-4697-89ae-91f4bef22e36 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:25:34 np0005548731 nova_compute[232433]: 2025-12-06 07:25:34.069 232437 DEBUG nova.virt.libvirt.driver [None req-c824a71a-795c-421e-aee3-9d1f9ae7b0d4 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] [instance: ea8c0005-4b7a-4697-89ae-91f4bef22e36] End _get_guest_xml xml=<domain type="kvm">
Dec  6 02:25:34 np0005548731 nova_compute[232433]:  <uuid>ea8c0005-4b7a-4697-89ae-91f4bef22e36</uuid>
Dec  6 02:25:34 np0005548731 nova_compute[232433]:  <name>instance-0000006c</name>
Dec  6 02:25:34 np0005548731 nova_compute[232433]:  <memory>131072</memory>
Dec  6 02:25:34 np0005548731 nova_compute[232433]:  <vcpu>1</vcpu>
Dec  6 02:25:34 np0005548731 nova_compute[232433]:  <metadata>
Dec  6 02:25:34 np0005548731 nova_compute[232433]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec  6 02:25:34 np0005548731 nova_compute[232433]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec  6 02:25:34 np0005548731 nova_compute[232433]:      <nova:name>tempest-ServerDiskConfigTestJSON-server-1037063395</nova:name>
Dec  6 02:25:34 np0005548731 nova_compute[232433]:      <nova:creationTime>2025-12-06 07:25:32</nova:creationTime>
Dec  6 02:25:34 np0005548731 nova_compute[232433]:      <nova:flavor name="m1.nano">
Dec  6 02:25:34 np0005548731 nova_compute[232433]:        <nova:memory>128</nova:memory>
Dec  6 02:25:34 np0005548731 nova_compute[232433]:        <nova:disk>1</nova:disk>
Dec  6 02:25:34 np0005548731 nova_compute[232433]:        <nova:swap>0</nova:swap>
Dec  6 02:25:34 np0005548731 nova_compute[232433]:        <nova:ephemeral>0</nova:ephemeral>
Dec  6 02:25:34 np0005548731 nova_compute[232433]:        <nova:vcpus>1</nova:vcpus>
Dec  6 02:25:34 np0005548731 nova_compute[232433]:      </nova:flavor>
Dec  6 02:25:34 np0005548731 nova_compute[232433]:      <nova:owner>
Dec  6 02:25:34 np0005548731 nova_compute[232433]:        <nova:user uuid="d67c136e82ad4001b000848d75eef50d">tempest-ServerDiskConfigTestJSON-749654875-project-member</nova:user>
Dec  6 02:25:34 np0005548731 nova_compute[232433]:        <nova:project uuid="88f5b34244614321a9b6e902eaba0ece">tempest-ServerDiskConfigTestJSON-749654875</nova:project>
Dec  6 02:25:34 np0005548731 nova_compute[232433]:      </nova:owner>
Dec  6 02:25:34 np0005548731 nova_compute[232433]:      <nova:root type="image" uuid="6efab05d-c7cf-4770-a5c3-c806a2739063"/>
Dec  6 02:25:34 np0005548731 nova_compute[232433]:      <nova:ports>
Dec  6 02:25:34 np0005548731 nova_compute[232433]:        <nova:port uuid="0e5e71bc-7098-4091-938e-6299f989917f">
Dec  6 02:25:34 np0005548731 nova_compute[232433]:          <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Dec  6 02:25:34 np0005548731 nova_compute[232433]:        </nova:port>
Dec  6 02:25:34 np0005548731 nova_compute[232433]:      </nova:ports>
Dec  6 02:25:34 np0005548731 nova_compute[232433]:    </nova:instance>
Dec  6 02:25:34 np0005548731 nova_compute[232433]:  </metadata>
Dec  6 02:25:34 np0005548731 nova_compute[232433]:  <sysinfo type="smbios">
Dec  6 02:25:34 np0005548731 nova_compute[232433]:    <system>
Dec  6 02:25:34 np0005548731 nova_compute[232433]:      <entry name="manufacturer">RDO</entry>
Dec  6 02:25:34 np0005548731 nova_compute[232433]:      <entry name="product">OpenStack Compute</entry>
Dec  6 02:25:34 np0005548731 nova_compute[232433]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec  6 02:25:34 np0005548731 nova_compute[232433]:      <entry name="serial">ea8c0005-4b7a-4697-89ae-91f4bef22e36</entry>
Dec  6 02:25:34 np0005548731 nova_compute[232433]:      <entry name="uuid">ea8c0005-4b7a-4697-89ae-91f4bef22e36</entry>
Dec  6 02:25:34 np0005548731 nova_compute[232433]:      <entry name="family">Virtual Machine</entry>
Dec  6 02:25:34 np0005548731 nova_compute[232433]:    </system>
Dec  6 02:25:34 np0005548731 nova_compute[232433]:  </sysinfo>
Dec  6 02:25:34 np0005548731 nova_compute[232433]:  <os>
Dec  6 02:25:34 np0005548731 nova_compute[232433]:    <type arch="x86_64" machine="q35">hvm</type>
Dec  6 02:25:34 np0005548731 nova_compute[232433]:    <boot dev="hd"/>
Dec  6 02:25:34 np0005548731 nova_compute[232433]:    <smbios mode="sysinfo"/>
Dec  6 02:25:34 np0005548731 nova_compute[232433]:  </os>
Dec  6 02:25:34 np0005548731 nova_compute[232433]:  <features>
Dec  6 02:25:34 np0005548731 nova_compute[232433]:    <acpi/>
Dec  6 02:25:34 np0005548731 nova_compute[232433]:    <apic/>
Dec  6 02:25:34 np0005548731 nova_compute[232433]:    <vmcoreinfo/>
Dec  6 02:25:34 np0005548731 nova_compute[232433]:  </features>
Dec  6 02:25:34 np0005548731 nova_compute[232433]:  <clock offset="utc">
Dec  6 02:25:34 np0005548731 nova_compute[232433]:    <timer name="pit" tickpolicy="delay"/>
Dec  6 02:25:34 np0005548731 nova_compute[232433]:    <timer name="rtc" tickpolicy="catchup"/>
Dec  6 02:25:34 np0005548731 nova_compute[232433]:    <timer name="hpet" present="no"/>
Dec  6 02:25:34 np0005548731 nova_compute[232433]:  </clock>
Dec  6 02:25:34 np0005548731 nova_compute[232433]:  <cpu mode="custom" match="exact">
Dec  6 02:25:34 np0005548731 nova_compute[232433]:    <model>Nehalem</model>
Dec  6 02:25:34 np0005548731 nova_compute[232433]:    <topology sockets="1" cores="1" threads="1"/>
Dec  6 02:25:34 np0005548731 nova_compute[232433]:  </cpu>
Dec  6 02:25:34 np0005548731 nova_compute[232433]:  <devices>
Dec  6 02:25:34 np0005548731 nova_compute[232433]:    <disk type="network" device="disk">
Dec  6 02:25:34 np0005548731 nova_compute[232433]:      <driver type="raw" cache="none"/>
Dec  6 02:25:34 np0005548731 nova_compute[232433]:      <source protocol="rbd" name="vms/ea8c0005-4b7a-4697-89ae-91f4bef22e36_disk">
Dec  6 02:25:34 np0005548731 nova_compute[232433]:        <host name="192.168.122.100" port="6789"/>
Dec  6 02:25:34 np0005548731 nova_compute[232433]:        <host name="192.168.122.102" port="6789"/>
Dec  6 02:25:34 np0005548731 nova_compute[232433]:        <host name="192.168.122.101" port="6789"/>
Dec  6 02:25:34 np0005548731 nova_compute[232433]:      </source>
Dec  6 02:25:34 np0005548731 nova_compute[232433]:      <auth username="openstack">
Dec  6 02:25:34 np0005548731 nova_compute[232433]:        <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec  6 02:25:34 np0005548731 nova_compute[232433]:      </auth>
Dec  6 02:25:34 np0005548731 nova_compute[232433]:      <target dev="vda" bus="virtio"/>
Dec  6 02:25:34 np0005548731 nova_compute[232433]:    </disk>
Dec  6 02:25:34 np0005548731 nova_compute[232433]:    <disk type="network" device="cdrom">
Dec  6 02:25:34 np0005548731 nova_compute[232433]:      <driver type="raw" cache="none"/>
Dec  6 02:25:34 np0005548731 nova_compute[232433]:      <source protocol="rbd" name="vms/ea8c0005-4b7a-4697-89ae-91f4bef22e36_disk.config">
Dec  6 02:25:34 np0005548731 nova_compute[232433]:        <host name="192.168.122.100" port="6789"/>
Dec  6 02:25:34 np0005548731 nova_compute[232433]:        <host name="192.168.122.102" port="6789"/>
Dec  6 02:25:34 np0005548731 nova_compute[232433]:        <host name="192.168.122.101" port="6789"/>
Dec  6 02:25:34 np0005548731 nova_compute[232433]:      </source>
Dec  6 02:25:34 np0005548731 nova_compute[232433]:      <auth username="openstack">
Dec  6 02:25:34 np0005548731 nova_compute[232433]:        <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec  6 02:25:34 np0005548731 nova_compute[232433]:      </auth>
Dec  6 02:25:34 np0005548731 nova_compute[232433]:      <target dev="sda" bus="sata"/>
Dec  6 02:25:34 np0005548731 nova_compute[232433]:    </disk>
Dec  6 02:25:34 np0005548731 nova_compute[232433]:    <interface type="ethernet">
Dec  6 02:25:34 np0005548731 nova_compute[232433]:      <mac address="fa:16:3e:ec:96:d5"/>
Dec  6 02:25:34 np0005548731 nova_compute[232433]:      <model type="virtio"/>
Dec  6 02:25:34 np0005548731 nova_compute[232433]:      <driver name="vhost" rx_queue_size="512"/>
Dec  6 02:25:34 np0005548731 nova_compute[232433]:      <mtu size="1442"/>
Dec  6 02:25:34 np0005548731 nova_compute[232433]:      <target dev="tap0e5e71bc-70"/>
Dec  6 02:25:34 np0005548731 nova_compute[232433]:    </interface>
Dec  6 02:25:34 np0005548731 nova_compute[232433]:    <serial type="pty">
Dec  6 02:25:34 np0005548731 nova_compute[232433]:      <log file="/var/lib/nova/instances/ea8c0005-4b7a-4697-89ae-91f4bef22e36/console.log" append="off"/>
Dec  6 02:25:34 np0005548731 nova_compute[232433]:    </serial>
Dec  6 02:25:34 np0005548731 nova_compute[232433]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec  6 02:25:34 np0005548731 nova_compute[232433]:    <video>
Dec  6 02:25:34 np0005548731 nova_compute[232433]:      <model type="virtio"/>
Dec  6 02:25:34 np0005548731 nova_compute[232433]:    </video>
Dec  6 02:25:34 np0005548731 nova_compute[232433]:    <input type="tablet" bus="usb"/>
Dec  6 02:25:34 np0005548731 nova_compute[232433]:    <rng model="virtio">
Dec  6 02:25:34 np0005548731 nova_compute[232433]:      <backend model="random">/dev/urandom</backend>
Dec  6 02:25:34 np0005548731 nova_compute[232433]:    </rng>
Dec  6 02:25:34 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root"/>
Dec  6 02:25:34 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:25:34 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:25:34 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:25:34 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:25:34 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:25:34 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:25:34 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:25:34 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:25:34 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:25:34 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:25:34 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:25:34 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:25:34 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:25:34 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:25:34 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:25:34 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:25:34 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:25:34 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:25:34 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:25:34 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:25:34 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:25:34 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:25:34 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:25:34 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:25:34 np0005548731 nova_compute[232433]:    <controller type="usb" index="0"/>
Dec  6 02:25:34 np0005548731 nova_compute[232433]:    <memballoon model="virtio">
Dec  6 02:25:34 np0005548731 nova_compute[232433]:      <stats period="10"/>
Dec  6 02:25:34 np0005548731 nova_compute[232433]:    </memballoon>
Dec  6 02:25:34 np0005548731 nova_compute[232433]:  </devices>
Dec  6 02:25:34 np0005548731 nova_compute[232433]: </domain>
Dec  6 02:25:34 np0005548731 nova_compute[232433]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec  6 02:25:34 np0005548731 nova_compute[232433]: 2025-12-06 07:25:34.069 232437 DEBUG nova.compute.manager [None req-c824a71a-795c-421e-aee3-9d1f9ae7b0d4 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] [instance: ea8c0005-4b7a-4697-89ae-91f4bef22e36] Preparing to wait for external event network-vif-plugged-0e5e71bc-7098-4091-938e-6299f989917f prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec  6 02:25:34 np0005548731 nova_compute[232433]: 2025-12-06 07:25:34.069 232437 DEBUG oslo_concurrency.lockutils [None req-c824a71a-795c-421e-aee3-9d1f9ae7b0d4 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Acquiring lock "ea8c0005-4b7a-4697-89ae-91f4bef22e36-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:25:34 np0005548731 nova_compute[232433]: 2025-12-06 07:25:34.070 232437 DEBUG oslo_concurrency.lockutils [None req-c824a71a-795c-421e-aee3-9d1f9ae7b0d4 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Lock "ea8c0005-4b7a-4697-89ae-91f4bef22e36-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:25:34 np0005548731 nova_compute[232433]: 2025-12-06 07:25:34.070 232437 DEBUG oslo_concurrency.lockutils [None req-c824a71a-795c-421e-aee3-9d1f9ae7b0d4 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Lock "ea8c0005-4b7a-4697-89ae-91f4bef22e36-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:25:34 np0005548731 nova_compute[232433]: 2025-12-06 07:25:34.070 232437 DEBUG nova.virt.libvirt.vif [None req-c824a71a-795c-421e-aee3-9d1f9ae7b0d4 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T07:25:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-1037063395',display_name='tempest-ServerDiskConfigTestJSON-server-1037063395',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-1037063395',id=108,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='88f5b34244614321a9b6e902eaba0ece',ramdisk_id='',reservation_id='r-qe8qs13c',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerDiskConfigTestJSON-749654875',owner_user_name='tempe
st-ServerDiskConfigTestJSON-749654875-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:25:26Z,user_data=None,user_id='d67c136e82ad4001b000848d75eef50d',uuid=ea8c0005-4b7a-4697-89ae-91f4bef22e36,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0e5e71bc-7098-4091-938e-6299f989917f", "address": "fa:16:3e:ec:96:d5", "network": {"id": "7c014e4e-a182-4f60-8285-20525bc99e5a", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-602234112-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "88f5b34244614321a9b6e902eaba0ece", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0e5e71bc-70", "ovs_interfaceid": "0e5e71bc-7098-4091-938e-6299f989917f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec  6 02:25:34 np0005548731 nova_compute[232433]: 2025-12-06 07:25:34.071 232437 DEBUG nova.network.os_vif_util [None req-c824a71a-795c-421e-aee3-9d1f9ae7b0d4 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Converting VIF {"id": "0e5e71bc-7098-4091-938e-6299f989917f", "address": "fa:16:3e:ec:96:d5", "network": {"id": "7c014e4e-a182-4f60-8285-20525bc99e5a", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-602234112-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "88f5b34244614321a9b6e902eaba0ece", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0e5e71bc-70", "ovs_interfaceid": "0e5e71bc-7098-4091-938e-6299f989917f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 02:25:34 np0005548731 nova_compute[232433]: 2025-12-06 07:25:34.071 232437 DEBUG nova.network.os_vif_util [None req-c824a71a-795c-421e-aee3-9d1f9ae7b0d4 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ec:96:d5,bridge_name='br-int',has_traffic_filtering=True,id=0e5e71bc-7098-4091-938e-6299f989917f,network=Network(7c014e4e-a182-4f60-8285-20525bc99e5a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0e5e71bc-70') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 02:25:34 np0005548731 nova_compute[232433]: 2025-12-06 07:25:34.071 232437 DEBUG os_vif [None req-c824a71a-795c-421e-aee3-9d1f9ae7b0d4 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ec:96:d5,bridge_name='br-int',has_traffic_filtering=True,id=0e5e71bc-7098-4091-938e-6299f989917f,network=Network(7c014e4e-a182-4f60-8285-20525bc99e5a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0e5e71bc-70') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec  6 02:25:34 np0005548731 nova_compute[232433]: 2025-12-06 07:25:34.072 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:25:34 np0005548731 nova_compute[232433]: 2025-12-06 07:25:34.072 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:25:34 np0005548731 nova_compute[232433]: 2025-12-06 07:25:34.073 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  6 02:25:34 np0005548731 nova_compute[232433]: 2025-12-06 07:25:34.076 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:25:34 np0005548731 nova_compute[232433]: 2025-12-06 07:25:34.076 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0e5e71bc-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:25:34 np0005548731 nova_compute[232433]: 2025-12-06 07:25:34.076 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap0e5e71bc-70, col_values=(('external_ids', {'iface-id': '0e5e71bc-7098-4091-938e-6299f989917f', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:ec:96:d5', 'vm-uuid': 'ea8c0005-4b7a-4697-89ae-91f4bef22e36'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:25:34 np0005548731 nova_compute[232433]: 2025-12-06 07:25:34.079 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:25:34 np0005548731 NetworkManager[49182]: <info>  [1765005934.0813] manager: (tap0e5e71bc-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/201)
Dec  6 02:25:34 np0005548731 nova_compute[232433]: 2025-12-06 07:25:34.087 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  6 02:25:34 np0005548731 nova_compute[232433]: 2025-12-06 07:25:34.090 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:25:34 np0005548731 nova_compute[232433]: 2025-12-06 07:25:34.091 232437 INFO os_vif [None req-c824a71a-795c-421e-aee3-9d1f9ae7b0d4 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ec:96:d5,bridge_name='br-int',has_traffic_filtering=True,id=0e5e71bc-7098-4091-938e-6299f989917f,network=Network(7c014e4e-a182-4f60-8285-20525bc99e5a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0e5e71bc-70')#033[00m
Dec  6 02:25:34 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  6 02:25:34 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1114235426' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  6 02:25:34 np0005548731 nova_compute[232433]: 2025-12-06 07:25:34.144 232437 DEBUG oslo_concurrency.processutils [None req-b8fe46a9-d2c2-4091-a0d3-08081f5eac06 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.482s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:25:34 np0005548731 nova_compute[232433]: 2025-12-06 07:25:34.145 232437 DEBUG nova.virt.libvirt.vif [None req-b8fe46a9-d2c2-4091-a0d3-08081f5eac06 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T07:25:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerRescueTestJSON-server-1448250982',display_name='tempest-ServerRescueTestJSON-server-1448250982',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serverrescuetestjson-server-1448250982',id=107,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c5036a5cbf7d44fd809d00942afd76e6',ramdisk_id='',reservation_id='r-d804mxte',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerRescueTestJSON-1682980670',owner_user_name='tempest-ServerRescueTestJSON-1682980670-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:25:25Z,user_data=None,user_id='bf140baee82f408e8fafa81062146641',uuid=78b7f315-b870-4184-87db-8078fd170237,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "59fc2ff6-f1bb-4cda-a487-d011530f0618", "address": "fa:16:3e:ef:72:dc", "network": {"id": "3935518d-ad39-4c36-90ae-f8fe979eafc5", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-32965635-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.2", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "c5036a5cbf7d44fd809d00942afd76e6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap59fc2ff6-f1", "ovs_interfaceid": "59fc2ff6-f1bb-4cda-a487-d011530f0618", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec  6 02:25:34 np0005548731 nova_compute[232433]: 2025-12-06 07:25:34.146 232437 DEBUG nova.network.os_vif_util [None req-b8fe46a9-d2c2-4091-a0d3-08081f5eac06 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] Converting VIF {"id": "59fc2ff6-f1bb-4cda-a487-d011530f0618", "address": "fa:16:3e:ef:72:dc", "network": {"id": "3935518d-ad39-4c36-90ae-f8fe979eafc5", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-32965635-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.2", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "c5036a5cbf7d44fd809d00942afd76e6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap59fc2ff6-f1", "ovs_interfaceid": "59fc2ff6-f1bb-4cda-a487-d011530f0618", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 02:25:34 np0005548731 nova_compute[232433]: 2025-12-06 07:25:34.146 232437 DEBUG nova.network.os_vif_util [None req-b8fe46a9-d2c2-4091-a0d3-08081f5eac06 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ef:72:dc,bridge_name='br-int',has_traffic_filtering=True,id=59fc2ff6-f1bb-4cda-a487-d011530f0618,network=Network(3935518d-ad39-4c36-90ae-f8fe979eafc5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap59fc2ff6-f1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 02:25:34 np0005548731 nova_compute[232433]: 2025-12-06 07:25:34.147 232437 DEBUG nova.objects.instance [None req-b8fe46a9-d2c2-4091-a0d3-08081f5eac06 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] Lazy-loading 'pci_devices' on Instance uuid 78b7f315-b870-4184-87db-8078fd170237 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:25:34 np0005548731 nova_compute[232433]: 2025-12-06 07:25:34.152 232437 DEBUG nova.virt.libvirt.driver [None req-c824a71a-795c-421e-aee3-9d1f9ae7b0d4 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  6 02:25:34 np0005548731 nova_compute[232433]: 2025-12-06 07:25:34.152 232437 DEBUG nova.virt.libvirt.driver [None req-c824a71a-795c-421e-aee3-9d1f9ae7b0d4 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  6 02:25:34 np0005548731 nova_compute[232433]: 2025-12-06 07:25:34.152 232437 DEBUG nova.virt.libvirt.driver [None req-c824a71a-795c-421e-aee3-9d1f9ae7b0d4 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] No VIF found with MAC fa:16:3e:ec:96:d5, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec  6 02:25:34 np0005548731 nova_compute[232433]: 2025-12-06 07:25:34.153 232437 INFO nova.virt.libvirt.driver [None req-c824a71a-795c-421e-aee3-9d1f9ae7b0d4 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] [instance: ea8c0005-4b7a-4697-89ae-91f4bef22e36] Using config drive#033[00m
Dec  6 02:25:34 np0005548731 nova_compute[232433]: 2025-12-06 07:25:34.172 232437 DEBUG nova.storage.rbd_utils [None req-c824a71a-795c-421e-aee3-9d1f9ae7b0d4 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] rbd image ea8c0005-4b7a-4697-89ae-91f4bef22e36_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:25:34 np0005548731 nova_compute[232433]: 2025-12-06 07:25:34.177 232437 DEBUG nova.virt.libvirt.driver [None req-b8fe46a9-d2c2-4091-a0d3-08081f5eac06 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] [instance: 78b7f315-b870-4184-87db-8078fd170237] End _get_guest_xml xml=<domain type="kvm">
Dec  6 02:25:34 np0005548731 nova_compute[232433]:  <uuid>78b7f315-b870-4184-87db-8078fd170237</uuid>
Dec  6 02:25:34 np0005548731 nova_compute[232433]:  <name>instance-0000006b</name>
Dec  6 02:25:34 np0005548731 nova_compute[232433]:  <memory>131072</memory>
Dec  6 02:25:34 np0005548731 nova_compute[232433]:  <vcpu>1</vcpu>
Dec  6 02:25:34 np0005548731 nova_compute[232433]:  <metadata>
Dec  6 02:25:34 np0005548731 nova_compute[232433]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec  6 02:25:34 np0005548731 nova_compute[232433]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec  6 02:25:34 np0005548731 nova_compute[232433]:      <nova:name>tempest-ServerRescueTestJSON-server-1448250982</nova:name>
Dec  6 02:25:34 np0005548731 nova_compute[232433]:      <nova:creationTime>2025-12-06 07:25:32</nova:creationTime>
Dec  6 02:25:34 np0005548731 nova_compute[232433]:      <nova:flavor name="m1.nano">
Dec  6 02:25:34 np0005548731 nova_compute[232433]:        <nova:memory>128</nova:memory>
Dec  6 02:25:34 np0005548731 nova_compute[232433]:        <nova:disk>1</nova:disk>
Dec  6 02:25:34 np0005548731 nova_compute[232433]:        <nova:swap>0</nova:swap>
Dec  6 02:25:34 np0005548731 nova_compute[232433]:        <nova:ephemeral>0</nova:ephemeral>
Dec  6 02:25:34 np0005548731 nova_compute[232433]:        <nova:vcpus>1</nova:vcpus>
Dec  6 02:25:34 np0005548731 nova_compute[232433]:      </nova:flavor>
Dec  6 02:25:34 np0005548731 nova_compute[232433]:      <nova:owner>
Dec  6 02:25:34 np0005548731 nova_compute[232433]:        <nova:user uuid="bf140baee82f408e8fafa81062146641">tempest-ServerRescueTestJSON-1682980670-project-member</nova:user>
Dec  6 02:25:34 np0005548731 nova_compute[232433]:        <nova:project uuid="c5036a5cbf7d44fd809d00942afd76e6">tempest-ServerRescueTestJSON-1682980670</nova:project>
Dec  6 02:25:34 np0005548731 nova_compute[232433]:      </nova:owner>
Dec  6 02:25:34 np0005548731 nova_compute[232433]:      <nova:root type="image" uuid="6efab05d-c7cf-4770-a5c3-c806a2739063"/>
Dec  6 02:25:34 np0005548731 nova_compute[232433]:      <nova:ports>
Dec  6 02:25:34 np0005548731 nova_compute[232433]:        <nova:port uuid="59fc2ff6-f1bb-4cda-a487-d011530f0618">
Dec  6 02:25:34 np0005548731 nova_compute[232433]:          <nova:ip type="fixed" address="10.100.0.2" ipVersion="4"/>
Dec  6 02:25:34 np0005548731 nova_compute[232433]:        </nova:port>
Dec  6 02:25:34 np0005548731 nova_compute[232433]:      </nova:ports>
Dec  6 02:25:34 np0005548731 nova_compute[232433]:    </nova:instance>
Dec  6 02:25:34 np0005548731 nova_compute[232433]:  </metadata>
Dec  6 02:25:34 np0005548731 nova_compute[232433]:  <sysinfo type="smbios">
Dec  6 02:25:34 np0005548731 nova_compute[232433]:    <system>
Dec  6 02:25:34 np0005548731 nova_compute[232433]:      <entry name="manufacturer">RDO</entry>
Dec  6 02:25:34 np0005548731 nova_compute[232433]:      <entry name="product">OpenStack Compute</entry>
Dec  6 02:25:34 np0005548731 nova_compute[232433]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec  6 02:25:34 np0005548731 nova_compute[232433]:      <entry name="serial">78b7f315-b870-4184-87db-8078fd170237</entry>
Dec  6 02:25:34 np0005548731 nova_compute[232433]:      <entry name="uuid">78b7f315-b870-4184-87db-8078fd170237</entry>
Dec  6 02:25:34 np0005548731 nova_compute[232433]:      <entry name="family">Virtual Machine</entry>
Dec  6 02:25:34 np0005548731 nova_compute[232433]:    </system>
Dec  6 02:25:34 np0005548731 nova_compute[232433]:  </sysinfo>
Dec  6 02:25:34 np0005548731 nova_compute[232433]:  <os>
Dec  6 02:25:34 np0005548731 nova_compute[232433]:    <type arch="x86_64" machine="q35">hvm</type>
Dec  6 02:25:34 np0005548731 nova_compute[232433]:    <boot dev="hd"/>
Dec  6 02:25:34 np0005548731 nova_compute[232433]:    <smbios mode="sysinfo"/>
Dec  6 02:25:34 np0005548731 nova_compute[232433]:  </os>
Dec  6 02:25:34 np0005548731 nova_compute[232433]:  <features>
Dec  6 02:25:34 np0005548731 nova_compute[232433]:    <acpi/>
Dec  6 02:25:34 np0005548731 nova_compute[232433]:    <apic/>
Dec  6 02:25:34 np0005548731 nova_compute[232433]:    <vmcoreinfo/>
Dec  6 02:25:34 np0005548731 nova_compute[232433]:  </features>
Dec  6 02:25:34 np0005548731 nova_compute[232433]:  <clock offset="utc">
Dec  6 02:25:34 np0005548731 nova_compute[232433]:    <timer name="pit" tickpolicy="delay"/>
Dec  6 02:25:34 np0005548731 nova_compute[232433]:    <timer name="rtc" tickpolicy="catchup"/>
Dec  6 02:25:34 np0005548731 nova_compute[232433]:    <timer name="hpet" present="no"/>
Dec  6 02:25:34 np0005548731 nova_compute[232433]:  </clock>
Dec  6 02:25:34 np0005548731 nova_compute[232433]:  <cpu mode="custom" match="exact">
Dec  6 02:25:34 np0005548731 nova_compute[232433]:    <model>Nehalem</model>
Dec  6 02:25:34 np0005548731 nova_compute[232433]:    <topology sockets="1" cores="1" threads="1"/>
Dec  6 02:25:34 np0005548731 nova_compute[232433]:  </cpu>
Dec  6 02:25:34 np0005548731 nova_compute[232433]:  <devices>
Dec  6 02:25:34 np0005548731 nova_compute[232433]:    <disk type="network" device="disk">
Dec  6 02:25:34 np0005548731 nova_compute[232433]:      <driver type="raw" cache="none"/>
Dec  6 02:25:34 np0005548731 nova_compute[232433]:      <source protocol="rbd" name="vms/78b7f315-b870-4184-87db-8078fd170237_disk">
Dec  6 02:25:34 np0005548731 nova_compute[232433]:        <host name="192.168.122.100" port="6789"/>
Dec  6 02:25:34 np0005548731 nova_compute[232433]:        <host name="192.168.122.102" port="6789"/>
Dec  6 02:25:34 np0005548731 nova_compute[232433]:        <host name="192.168.122.101" port="6789"/>
Dec  6 02:25:34 np0005548731 nova_compute[232433]:      </source>
Dec  6 02:25:34 np0005548731 nova_compute[232433]:      <auth username="openstack">
Dec  6 02:25:34 np0005548731 nova_compute[232433]:        <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec  6 02:25:34 np0005548731 nova_compute[232433]:      </auth>
Dec  6 02:25:34 np0005548731 nova_compute[232433]:      <target dev="vda" bus="virtio"/>
Dec  6 02:25:34 np0005548731 nova_compute[232433]:    </disk>
Dec  6 02:25:34 np0005548731 nova_compute[232433]:    <disk type="network" device="cdrom">
Dec  6 02:25:34 np0005548731 nova_compute[232433]:      <driver type="raw" cache="none"/>
Dec  6 02:25:34 np0005548731 nova_compute[232433]:      <source protocol="rbd" name="vms/78b7f315-b870-4184-87db-8078fd170237_disk.config">
Dec  6 02:25:34 np0005548731 nova_compute[232433]:        <host name="192.168.122.100" port="6789"/>
Dec  6 02:25:34 np0005548731 nova_compute[232433]:        <host name="192.168.122.102" port="6789"/>
Dec  6 02:25:34 np0005548731 nova_compute[232433]:        <host name="192.168.122.101" port="6789"/>
Dec  6 02:25:34 np0005548731 nova_compute[232433]:      </source>
Dec  6 02:25:34 np0005548731 nova_compute[232433]:      <auth username="openstack">
Dec  6 02:25:34 np0005548731 nova_compute[232433]:        <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec  6 02:25:34 np0005548731 nova_compute[232433]:      </auth>
Dec  6 02:25:34 np0005548731 nova_compute[232433]:      <target dev="sda" bus="sata"/>
Dec  6 02:25:34 np0005548731 nova_compute[232433]:    </disk>
Dec  6 02:25:34 np0005548731 nova_compute[232433]:    <interface type="ethernet">
Dec  6 02:25:34 np0005548731 nova_compute[232433]:      <mac address="fa:16:3e:ef:72:dc"/>
Dec  6 02:25:34 np0005548731 nova_compute[232433]:      <model type="virtio"/>
Dec  6 02:25:34 np0005548731 nova_compute[232433]:      <driver name="vhost" rx_queue_size="512"/>
Dec  6 02:25:34 np0005548731 nova_compute[232433]:      <mtu size="1442"/>
Dec  6 02:25:34 np0005548731 nova_compute[232433]:      <target dev="tap59fc2ff6-f1"/>
Dec  6 02:25:34 np0005548731 nova_compute[232433]:    </interface>
Dec  6 02:25:34 np0005548731 nova_compute[232433]:    <serial type="pty">
Dec  6 02:25:34 np0005548731 nova_compute[232433]:      <log file="/var/lib/nova/instances/78b7f315-b870-4184-87db-8078fd170237/console.log" append="off"/>
Dec  6 02:25:34 np0005548731 nova_compute[232433]:    </serial>
Dec  6 02:25:34 np0005548731 nova_compute[232433]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec  6 02:25:34 np0005548731 nova_compute[232433]:    <video>
Dec  6 02:25:34 np0005548731 nova_compute[232433]:      <model type="virtio"/>
Dec  6 02:25:34 np0005548731 nova_compute[232433]:    </video>
Dec  6 02:25:34 np0005548731 nova_compute[232433]:    <input type="tablet" bus="usb"/>
Dec  6 02:25:34 np0005548731 nova_compute[232433]:    <rng model="virtio">
Dec  6 02:25:34 np0005548731 nova_compute[232433]:      <backend model="random">/dev/urandom</backend>
Dec  6 02:25:34 np0005548731 nova_compute[232433]:    </rng>
Dec  6 02:25:34 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root"/>
Dec  6 02:25:34 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:25:34 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:25:34 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:25:34 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:25:34 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:25:34 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:25:34 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:25:34 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:25:34 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:25:34 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:25:34 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:25:34 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:25:34 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:25:34 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:25:34 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:25:34 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:25:34 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:25:34 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:25:34 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:25:34 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:25:34 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:25:34 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:25:34 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:25:34 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:25:34 np0005548731 nova_compute[232433]:    <controller type="usb" index="0"/>
Dec  6 02:25:34 np0005548731 nova_compute[232433]:    <memballoon model="virtio">
Dec  6 02:25:34 np0005548731 nova_compute[232433]:      <stats period="10"/>
Dec  6 02:25:34 np0005548731 nova_compute[232433]:    </memballoon>
Dec  6 02:25:34 np0005548731 nova_compute[232433]:  </devices>
Dec  6 02:25:34 np0005548731 nova_compute[232433]: </domain>
Dec  6 02:25:34 np0005548731 nova_compute[232433]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec  6 02:25:34 np0005548731 nova_compute[232433]: 2025-12-06 07:25:34.177 232437 DEBUG nova.compute.manager [None req-b8fe46a9-d2c2-4091-a0d3-08081f5eac06 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] [instance: 78b7f315-b870-4184-87db-8078fd170237] Preparing to wait for external event network-vif-plugged-59fc2ff6-f1bb-4cda-a487-d011530f0618 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec  6 02:25:34 np0005548731 nova_compute[232433]: 2025-12-06 07:25:34.177 232437 DEBUG oslo_concurrency.lockutils [None req-b8fe46a9-d2c2-4091-a0d3-08081f5eac06 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] Acquiring lock "78b7f315-b870-4184-87db-8078fd170237-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:25:34 np0005548731 nova_compute[232433]: 2025-12-06 07:25:34.178 232437 DEBUG oslo_concurrency.lockutils [None req-b8fe46a9-d2c2-4091-a0d3-08081f5eac06 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] Lock "78b7f315-b870-4184-87db-8078fd170237-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:25:34 np0005548731 nova_compute[232433]: 2025-12-06 07:25:34.178 232437 DEBUG oslo_concurrency.lockutils [None req-b8fe46a9-d2c2-4091-a0d3-08081f5eac06 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] Lock "78b7f315-b870-4184-87db-8078fd170237-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:25:34 np0005548731 nova_compute[232433]: 2025-12-06 07:25:34.178 232437 DEBUG nova.virt.libvirt.vif [None req-b8fe46a9-d2c2-4091-a0d3-08081f5eac06 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T07:25:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerRescueTestJSON-server-1448250982',display_name='tempest-ServerRescueTestJSON-server-1448250982',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serverrescuetestjson-server-1448250982',id=107,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c5036a5cbf7d44fd809d00942afd76e6',ramdisk_id='',reservation_id='r-d804mxte',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerRescueTestJSON-1682980670',owner_user_name='tempest-ServerRescueTestJSON-1682980670-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:25:25Z,user_data=None,user_id='bf140baee82f408e8fafa81062146641',uuid=78b7f315-b870-4184-87db-8078fd170237,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "59fc2ff6-f1bb-4cda-a487-d011530f0618", "address": "fa:16:3e:ef:72:dc", "network": {"id": "3935518d-ad39-4c36-90ae-f8fe979eafc5", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-32965635-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.2", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "c5036a5cbf7d44fd809d00942afd76e6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap59fc2ff6-f1", "ovs_interfaceid": "59fc2ff6-f1bb-4cda-a487-d011530f0618", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec  6 02:25:34 np0005548731 nova_compute[232433]: 2025-12-06 07:25:34.179 232437 DEBUG nova.network.os_vif_util [None req-b8fe46a9-d2c2-4091-a0d3-08081f5eac06 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] Converting VIF {"id": "59fc2ff6-f1bb-4cda-a487-d011530f0618", "address": "fa:16:3e:ef:72:dc", "network": {"id": "3935518d-ad39-4c36-90ae-f8fe979eafc5", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-32965635-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.2", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "c5036a5cbf7d44fd809d00942afd76e6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap59fc2ff6-f1", "ovs_interfaceid": "59fc2ff6-f1bb-4cda-a487-d011530f0618", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 02:25:34 np0005548731 nova_compute[232433]: 2025-12-06 07:25:34.179 232437 DEBUG nova.network.os_vif_util [None req-b8fe46a9-d2c2-4091-a0d3-08081f5eac06 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ef:72:dc,bridge_name='br-int',has_traffic_filtering=True,id=59fc2ff6-f1bb-4cda-a487-d011530f0618,network=Network(3935518d-ad39-4c36-90ae-f8fe979eafc5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap59fc2ff6-f1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 02:25:34 np0005548731 nova_compute[232433]: 2025-12-06 07:25:34.179 232437 DEBUG os_vif [None req-b8fe46a9-d2c2-4091-a0d3-08081f5eac06 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ef:72:dc,bridge_name='br-int',has_traffic_filtering=True,id=59fc2ff6-f1bb-4cda-a487-d011530f0618,network=Network(3935518d-ad39-4c36-90ae-f8fe979eafc5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap59fc2ff6-f1') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec  6 02:25:34 np0005548731 nova_compute[232433]: 2025-12-06 07:25:34.180 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:25:34 np0005548731 nova_compute[232433]: 2025-12-06 07:25:34.180 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:25:34 np0005548731 nova_compute[232433]: 2025-12-06 07:25:34.181 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  6 02:25:34 np0005548731 nova_compute[232433]: 2025-12-06 07:25:34.183 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:25:34 np0005548731 nova_compute[232433]: 2025-12-06 07:25:34.183 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap59fc2ff6-f1, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:25:34 np0005548731 nova_compute[232433]: 2025-12-06 07:25:34.183 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap59fc2ff6-f1, col_values=(('external_ids', {'iface-id': '59fc2ff6-f1bb-4cda-a487-d011530f0618', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:ef:72:dc', 'vm-uuid': '78b7f315-b870-4184-87db-8078fd170237'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:25:34 np0005548731 nova_compute[232433]: 2025-12-06 07:25:34.184 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:25:34 np0005548731 NetworkManager[49182]: <info>  [1765005934.1860] manager: (tap59fc2ff6-f1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/202)
Dec  6 02:25:34 np0005548731 nova_compute[232433]: 2025-12-06 07:25:34.186 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  6 02:25:34 np0005548731 nova_compute[232433]: 2025-12-06 07:25:34.192 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:25:34 np0005548731 nova_compute[232433]: 2025-12-06 07:25:34.193 232437 INFO os_vif [None req-b8fe46a9-d2c2-4091-a0d3-08081f5eac06 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ef:72:dc,bridge_name='br-int',has_traffic_filtering=True,id=59fc2ff6-f1bb-4cda-a487-d011530f0618,network=Network(3935518d-ad39-4c36-90ae-f8fe979eafc5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap59fc2ff6-f1')#033[00m
Dec  6 02:25:34 np0005548731 nova_compute[232433]: 2025-12-06 07:25:34.243 232437 DEBUG nova.virt.libvirt.driver [None req-b8fe46a9-d2c2-4091-a0d3-08081f5eac06 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  6 02:25:34 np0005548731 nova_compute[232433]: 2025-12-06 07:25:34.244 232437 DEBUG nova.virt.libvirt.driver [None req-b8fe46a9-d2c2-4091-a0d3-08081f5eac06 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  6 02:25:34 np0005548731 nova_compute[232433]: 2025-12-06 07:25:34.244 232437 DEBUG nova.virt.libvirt.driver [None req-b8fe46a9-d2c2-4091-a0d3-08081f5eac06 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] No VIF found with MAC fa:16:3e:ef:72:dc, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec  6 02:25:34 np0005548731 nova_compute[232433]: 2025-12-06 07:25:34.245 232437 INFO nova.virt.libvirt.driver [None req-b8fe46a9-d2c2-4091-a0d3-08081f5eac06 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] [instance: 78b7f315-b870-4184-87db-8078fd170237] Using config drive#033[00m
Dec  6 02:25:34 np0005548731 nova_compute[232433]: 2025-12-06 07:25:34.266 232437 DEBUG nova.storage.rbd_utils [None req-b8fe46a9-d2c2-4091-a0d3-08081f5eac06 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] rbd image 78b7f315-b870-4184-87db-8078fd170237_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:25:34 np0005548731 nova_compute[232433]: 2025-12-06 07:25:34.317 232437 DEBUG nova.network.neutron [req-41334835-1759-4bd2-9447-49eb3ad9e263 req-a998d240-a699-49ca-8bb0-604ae7198a7c 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: ea8c0005-4b7a-4697-89ae-91f4bef22e36] Updated VIF entry in instance network info cache for port 0e5e71bc-7098-4091-938e-6299f989917f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  6 02:25:34 np0005548731 nova_compute[232433]: 2025-12-06 07:25:34.317 232437 DEBUG nova.network.neutron [req-41334835-1759-4bd2-9447-49eb3ad9e263 req-a998d240-a699-49ca-8bb0-604ae7198a7c 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: ea8c0005-4b7a-4697-89ae-91f4bef22e36] Updating instance_info_cache with network_info: [{"id": "0e5e71bc-7098-4091-938e-6299f989917f", "address": "fa:16:3e:ec:96:d5", "network": {"id": "7c014e4e-a182-4f60-8285-20525bc99e5a", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-602234112-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "88f5b34244614321a9b6e902eaba0ece", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0e5e71bc-70", "ovs_interfaceid": "0e5e71bc-7098-4091-938e-6299f989917f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 02:25:34 np0005548731 nova_compute[232433]: 2025-12-06 07:25:34.361 232437 DEBUG oslo_concurrency.lockutils [req-41334835-1759-4bd2-9447-49eb3ad9e263 req-a998d240-a699-49ca-8bb0-604ae7198a7c 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-ea8c0005-4b7a-4697-89ae-91f4bef22e36" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 02:25:35 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:25:35 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:25:35 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:25:35.114 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:25:35 np0005548731 nova_compute[232433]: 2025-12-06 07:25:35.120 232437 DEBUG nova.network.neutron [req-1bc60b95-59cc-4a64-bab9-7c61a9f3c913 req-b58a4aac-1af1-4ab2-994c-a79dba1bc1e7 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 78b7f315-b870-4184-87db-8078fd170237] Updated VIF entry in instance network info cache for port 59fc2ff6-f1bb-4cda-a487-d011530f0618. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  6 02:25:35 np0005548731 nova_compute[232433]: 2025-12-06 07:25:35.121 232437 DEBUG nova.network.neutron [req-1bc60b95-59cc-4a64-bab9-7c61a9f3c913 req-b58a4aac-1af1-4ab2-994c-a79dba1bc1e7 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 78b7f315-b870-4184-87db-8078fd170237] Updating instance_info_cache with network_info: [{"id": "59fc2ff6-f1bb-4cda-a487-d011530f0618", "address": "fa:16:3e:ef:72:dc", "network": {"id": "3935518d-ad39-4c36-90ae-f8fe979eafc5", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-32965635-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.2", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "c5036a5cbf7d44fd809d00942afd76e6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap59fc2ff6-f1", "ovs_interfaceid": "59fc2ff6-f1bb-4cda-a487-d011530f0618", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 02:25:35 np0005548731 nova_compute[232433]: 2025-12-06 07:25:35.135 232437 DEBUG oslo_concurrency.lockutils [req-1bc60b95-59cc-4a64-bab9-7c61a9f3c913 req-b58a4aac-1af1-4ab2-994c-a79dba1bc1e7 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-78b7f315-b870-4184-87db-8078fd170237" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 02:25:35 np0005548731 nova_compute[232433]: 2025-12-06 07:25:35.391 232437 INFO nova.virt.libvirt.driver [None req-c824a71a-795c-421e-aee3-9d1f9ae7b0d4 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] [instance: ea8c0005-4b7a-4697-89ae-91f4bef22e36] Creating config drive at /var/lib/nova/instances/ea8c0005-4b7a-4697-89ae-91f4bef22e36/disk.config#033[00m
Dec  6 02:25:35 np0005548731 nova_compute[232433]: 2025-12-06 07:25:35.398 232437 DEBUG oslo_concurrency.processutils [None req-c824a71a-795c-421e-aee3-9d1f9ae7b0d4 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/ea8c0005-4b7a-4697-89ae-91f4bef22e36/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpbwpmxnvf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:25:35 np0005548731 nova_compute[232433]: 2025-12-06 07:25:35.425 232437 INFO nova.virt.libvirt.driver [None req-b8fe46a9-d2c2-4091-a0d3-08081f5eac06 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] [instance: 78b7f315-b870-4184-87db-8078fd170237] Creating config drive at /var/lib/nova/instances/78b7f315-b870-4184-87db-8078fd170237/disk.config#033[00m
Dec  6 02:25:35 np0005548731 nova_compute[232433]: 2025-12-06 07:25:35.431 232437 DEBUG oslo_concurrency.processutils [None req-b8fe46a9-d2c2-4091-a0d3-08081f5eac06 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/78b7f315-b870-4184-87db-8078fd170237/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmps1aakpmj execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:25:35 np0005548731 nova_compute[232433]: 2025-12-06 07:25:35.530 232437 DEBUG oslo_concurrency.processutils [None req-c824a71a-795c-421e-aee3-9d1f9ae7b0d4 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/ea8c0005-4b7a-4697-89ae-91f4bef22e36/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpbwpmxnvf" returned: 0 in 0.132s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:25:35 np0005548731 nova_compute[232433]: 2025-12-06 07:25:35.561 232437 DEBUG nova.storage.rbd_utils [None req-c824a71a-795c-421e-aee3-9d1f9ae7b0d4 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] rbd image ea8c0005-4b7a-4697-89ae-91f4bef22e36_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:25:35 np0005548731 nova_compute[232433]: 2025-12-06 07:25:35.566 232437 DEBUG oslo_concurrency.processutils [None req-c824a71a-795c-421e-aee3-9d1f9ae7b0d4 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/ea8c0005-4b7a-4697-89ae-91f4bef22e36/disk.config ea8c0005-4b7a-4697-89ae-91f4bef22e36_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:25:35 np0005548731 nova_compute[232433]: 2025-12-06 07:25:35.601 232437 DEBUG oslo_concurrency.processutils [None req-b8fe46a9-d2c2-4091-a0d3-08081f5eac06 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/78b7f315-b870-4184-87db-8078fd170237/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmps1aakpmj" returned: 0 in 0.170s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:25:35 np0005548731 nova_compute[232433]: 2025-12-06 07:25:35.632 232437 DEBUG nova.storage.rbd_utils [None req-b8fe46a9-d2c2-4091-a0d3-08081f5eac06 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] rbd image 78b7f315-b870-4184-87db-8078fd170237_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:25:35 np0005548731 nova_compute[232433]: 2025-12-06 07:25:35.636 232437 DEBUG oslo_concurrency.processutils [None req-b8fe46a9-d2c2-4091-a0d3-08081f5eac06 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/78b7f315-b870-4184-87db-8078fd170237/disk.config 78b7f315-b870-4184-87db-8078fd170237_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:25:35 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:25:35 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 02:25:35 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:25:35.785 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 02:25:36 np0005548731 nova_compute[232433]: 2025-12-06 07:25:36.136 232437 DEBUG oslo_concurrency.processutils [None req-c824a71a-795c-421e-aee3-9d1f9ae7b0d4 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/ea8c0005-4b7a-4697-89ae-91f4bef22e36/disk.config ea8c0005-4b7a-4697-89ae-91f4bef22e36_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.570s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:25:36 np0005548731 nova_compute[232433]: 2025-12-06 07:25:36.137 232437 INFO nova.virt.libvirt.driver [None req-c824a71a-795c-421e-aee3-9d1f9ae7b0d4 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] [instance: ea8c0005-4b7a-4697-89ae-91f4bef22e36] Deleting local config drive /var/lib/nova/instances/ea8c0005-4b7a-4697-89ae-91f4bef22e36/disk.config because it was imported into RBD.#033[00m
Dec  6 02:25:36 np0005548731 NetworkManager[49182]: <info>  [1765005936.1913] manager: (tap0e5e71bc-70): new Tun device (/org/freedesktop/NetworkManager/Devices/203)
Dec  6 02:25:36 np0005548731 kernel: tap0e5e71bc-70: entered promiscuous mode
Dec  6 02:25:36 np0005548731 nova_compute[232433]: 2025-12-06 07:25:36.196 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:25:36 np0005548731 ovn_controller[133927]: 2025-12-06T07:25:36Z|00400|binding|INFO|Claiming lport 0e5e71bc-7098-4091-938e-6299f989917f for this chassis.
Dec  6 02:25:36 np0005548731 ovn_controller[133927]: 2025-12-06T07:25:36Z|00401|binding|INFO|0e5e71bc-7098-4091-938e-6299f989917f: Claiming fa:16:3e:ec:96:d5 10.100.0.13
Dec  6 02:25:36 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:25:36.204 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ec:96:d5 10.100.0.13'], port_security=['fa:16:3e:ec:96:d5 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'ea8c0005-4b7a-4697-89ae-91f4bef22e36', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7c014e4e-a182-4f60-8285-20525bc99e5a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '88f5b34244614321a9b6e902eaba0ece', 'neutron:revision_number': '2', 'neutron:security_group_ids': '562c0019-973b-497e-ab29-636b40b9ed6d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7228f8e4-751e-45fe-ae64-cd2ffef9b9bb, chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], logical_port=0e5e71bc-7098-4091-938e-6299f989917f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 02:25:36 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:25:36.206 143965 INFO neutron.agent.ovn.metadata.agent [-] Port 0e5e71bc-7098-4091-938e-6299f989917f in datapath 7c014e4e-a182-4f60-8285-20525bc99e5a bound to our chassis#033[00m
Dec  6 02:25:36 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:25:36.208 143965 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 7c014e4e-a182-4f60-8285-20525bc99e5a#033[00m
Dec  6 02:25:36 np0005548731 ovn_controller[133927]: 2025-12-06T07:25:36Z|00402|binding|INFO|Setting lport 0e5e71bc-7098-4091-938e-6299f989917f ovn-installed in OVS
Dec  6 02:25:36 np0005548731 ovn_controller[133927]: 2025-12-06T07:25:36Z|00403|binding|INFO|Setting lport 0e5e71bc-7098-4091-938e-6299f989917f up in Southbound
Dec  6 02:25:36 np0005548731 systemd-udevd[274936]: Network interface NamePolicy= disabled on kernel command line.
Dec  6 02:25:36 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:25:36.220 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[ee0f9a8d-20b0-4643-b1a4-c8237adf2c27]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:25:36 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:25:36.221 143965 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap7c014e4e-a1 in ovnmeta-7c014e4e-a182-4f60-8285-20525bc99e5a namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Dec  6 02:25:36 np0005548731 nova_compute[232433]: 2025-12-06 07:25:36.222 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:25:36 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:25:36.224 236668 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap7c014e4e-a0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Dec  6 02:25:36 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:25:36.224 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[75b9d96d-864f-4867-9fc1-980c1185b4f6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:25:36 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:25:36.226 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[777a6a55-1a30-4e43-a0bd-74593e92e32c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:25:36 np0005548731 nova_compute[232433]: 2025-12-06 07:25:36.228 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:25:36 np0005548731 systemd-machined[195355]: New machine qemu-44-instance-0000006c.
Dec  6 02:25:36 np0005548731 NetworkManager[49182]: <info>  [1765005936.2362] device (tap0e5e71bc-70): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec  6 02:25:36 np0005548731 NetworkManager[49182]: <info>  [1765005936.2372] device (tap0e5e71bc-70): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec  6 02:25:36 np0005548731 systemd[1]: Started Virtual Machine qemu-44-instance-0000006c.
Dec  6 02:25:36 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:25:36.239 144079 DEBUG oslo.privsep.daemon [-] privsep: reply[14a95d0a-9bbf-473a-bb99-fecfc263369e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:25:36 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:25:36.252 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[b9bf8767-80a0-4443-81fe-8515c8532247]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:25:36 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:25:36.279 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[b16364cb-3a6c-4a96-a27c-2c11baadc61d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:25:36 np0005548731 systemd-udevd[274942]: Network interface NamePolicy= disabled on kernel command line.
Dec  6 02:25:36 np0005548731 NetworkManager[49182]: <info>  [1765005936.2860] manager: (tap7c014e4e-a0): new Veth device (/org/freedesktop/NetworkManager/Devices/204)
Dec  6 02:25:36 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:25:36.285 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[aff364b5-8fce-4663-a0a9-c63dfa921df3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:25:36 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:25:36.316 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[bdb91ac8-ba7e-4d88-86f4-f833adedff8d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:25:36 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:25:36.319 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[09969122-4633-454f-b78c-18036dd01e41]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:25:36 np0005548731 NetworkManager[49182]: <info>  [1765005936.3414] device (tap7c014e4e-a0): carrier: link connected
Dec  6 02:25:36 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:25:36.346 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[8f20b0b5-0013-4789-9f13-b008a2b9e692]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:25:36 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:25:36.368 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[e73af45e-b684-4323-9e17-41b8d26ef927]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7c014e4e-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:08:14:1c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 130], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 619834, 'reachable_time': 26010, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 274974, 'error': None, 'target': 'ovnmeta-7c014e4e-a182-4f60-8285-20525bc99e5a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:25:36 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:25:36.381 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[6309b7be-9e0e-407c-9eae-012d9a88be31]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe08:141c'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 619834, 'tstamp': 619834}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 274975, 'error': None, 'target': 'ovnmeta-7c014e4e-a182-4f60-8285-20525bc99e5a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:25:36 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:25:36.394 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[0a0ed0b3-d076-47e1-81f6-5487fff66883]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7c014e4e-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:08:14:1c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 130], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 619834, 'reachable_time': 26010, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 274976, 'error': None, 'target': 'ovnmeta-7c014e4e-a182-4f60-8285-20525bc99e5a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:25:36 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:25:36.425 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[bce52c39-9f9e-478f-8f8c-59c63655592f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:25:36 np0005548731 nova_compute[232433]: 2025-12-06 07:25:36.450 232437 DEBUG nova.compute.manager [req-f1c60193-bc9b-47b2-9cbc-3ca53419ad13 req-24214b08-ce18-45ea-a957-0b33132fbb92 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: ea8c0005-4b7a-4697-89ae-91f4bef22e36] Received event network-vif-plugged-0e5e71bc-7098-4091-938e-6299f989917f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:25:36 np0005548731 nova_compute[232433]: 2025-12-06 07:25:36.451 232437 DEBUG oslo_concurrency.lockutils [req-f1c60193-bc9b-47b2-9cbc-3ca53419ad13 req-24214b08-ce18-45ea-a957-0b33132fbb92 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "ea8c0005-4b7a-4697-89ae-91f4bef22e36-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:25:36 np0005548731 nova_compute[232433]: 2025-12-06 07:25:36.451 232437 DEBUG oslo_concurrency.lockutils [req-f1c60193-bc9b-47b2-9cbc-3ca53419ad13 req-24214b08-ce18-45ea-a957-0b33132fbb92 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "ea8c0005-4b7a-4697-89ae-91f4bef22e36-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:25:36 np0005548731 nova_compute[232433]: 2025-12-06 07:25:36.451 232437 DEBUG oslo_concurrency.lockutils [req-f1c60193-bc9b-47b2-9cbc-3ca53419ad13 req-24214b08-ce18-45ea-a957-0b33132fbb92 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "ea8c0005-4b7a-4697-89ae-91f4bef22e36-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:25:36 np0005548731 nova_compute[232433]: 2025-12-06 07:25:36.451 232437 DEBUG nova.compute.manager [req-f1c60193-bc9b-47b2-9cbc-3ca53419ad13 req-24214b08-ce18-45ea-a957-0b33132fbb92 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: ea8c0005-4b7a-4697-89ae-91f4bef22e36] Processing event network-vif-plugged-0e5e71bc-7098-4091-938e-6299f989917f _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Dec  6 02:25:36 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:25:36.478 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[02c2d9aa-133a-49b8-9e92-87945540668e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:25:36 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:25:36.479 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7c014e4e-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:25:36 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:25:36.479 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  6 02:25:36 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:25:36.480 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7c014e4e-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:25:36 np0005548731 NetworkManager[49182]: <info>  [1765005936.4819] manager: (tap7c014e4e-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/205)
Dec  6 02:25:36 np0005548731 kernel: tap7c014e4e-a0: entered promiscuous mode
Dec  6 02:25:36 np0005548731 nova_compute[232433]: 2025-12-06 07:25:36.482 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:25:36 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:25:36.486 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap7c014e4e-a0, col_values=(('external_ids', {'iface-id': 'd8dd1a7d-045a-42a3-8829-567c43985ae0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:25:36 np0005548731 ovn_controller[133927]: 2025-12-06T07:25:36Z|00404|binding|INFO|Releasing lport d8dd1a7d-045a-42a3-8829-567c43985ae0 from this chassis (sb_readonly=0)
Dec  6 02:25:36 np0005548731 nova_compute[232433]: 2025-12-06 07:25:36.500 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:25:36 np0005548731 nova_compute[232433]: 2025-12-06 07:25:36.502 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:25:36 np0005548731 nova_compute[232433]: 2025-12-06 07:25:36.505 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:25:36 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:25:36.506 143965 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/7c014e4e-a182-4f60-8285-20525bc99e5a.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/7c014e4e-a182-4f60-8285-20525bc99e5a.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Dec  6 02:25:36 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:25:36.507 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[c90d12dd-bb3d-483e-8685-5809683fb0d2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:25:36 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:25:36.508 143965 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec  6 02:25:36 np0005548731 ovn_metadata_agent[143960]: global
Dec  6 02:25:36 np0005548731 ovn_metadata_agent[143960]:    log         /dev/log local0 debug
Dec  6 02:25:36 np0005548731 ovn_metadata_agent[143960]:    log-tag     haproxy-metadata-proxy-7c014e4e-a182-4f60-8285-20525bc99e5a
Dec  6 02:25:36 np0005548731 ovn_metadata_agent[143960]:    user        root
Dec  6 02:25:36 np0005548731 ovn_metadata_agent[143960]:    group       root
Dec  6 02:25:36 np0005548731 ovn_metadata_agent[143960]:    maxconn     1024
Dec  6 02:25:36 np0005548731 ovn_metadata_agent[143960]:    pidfile     /var/lib/neutron/external/pids/7c014e4e-a182-4f60-8285-20525bc99e5a.pid.haproxy
Dec  6 02:25:36 np0005548731 ovn_metadata_agent[143960]:    daemon
Dec  6 02:25:36 np0005548731 ovn_metadata_agent[143960]: 
Dec  6 02:25:36 np0005548731 ovn_metadata_agent[143960]: defaults
Dec  6 02:25:36 np0005548731 ovn_metadata_agent[143960]:    log global
Dec  6 02:25:36 np0005548731 ovn_metadata_agent[143960]:    mode http
Dec  6 02:25:36 np0005548731 ovn_metadata_agent[143960]:    option httplog
Dec  6 02:25:36 np0005548731 ovn_metadata_agent[143960]:    option dontlognull
Dec  6 02:25:36 np0005548731 ovn_metadata_agent[143960]:    option http-server-close
Dec  6 02:25:36 np0005548731 ovn_metadata_agent[143960]:    option forwardfor
Dec  6 02:25:36 np0005548731 ovn_metadata_agent[143960]:    retries                 3
Dec  6 02:25:36 np0005548731 ovn_metadata_agent[143960]:    timeout http-request    30s
Dec  6 02:25:36 np0005548731 ovn_metadata_agent[143960]:    timeout connect         30s
Dec  6 02:25:36 np0005548731 ovn_metadata_agent[143960]:    timeout client          32s
Dec  6 02:25:36 np0005548731 ovn_metadata_agent[143960]:    timeout server          32s
Dec  6 02:25:36 np0005548731 ovn_metadata_agent[143960]:    timeout http-keep-alive 30s
Dec  6 02:25:36 np0005548731 ovn_metadata_agent[143960]: 
Dec  6 02:25:36 np0005548731 ovn_metadata_agent[143960]: 
Dec  6 02:25:36 np0005548731 ovn_metadata_agent[143960]: listen listener
Dec  6 02:25:36 np0005548731 ovn_metadata_agent[143960]:    bind 169.254.169.254:80
Dec  6 02:25:36 np0005548731 ovn_metadata_agent[143960]:    server metadata /var/lib/neutron/metadata_proxy
Dec  6 02:25:36 np0005548731 ovn_metadata_agent[143960]:    http-request add-header X-OVN-Network-ID 7c014e4e-a182-4f60-8285-20525bc99e5a
Dec  6 02:25:36 np0005548731 ovn_metadata_agent[143960]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Dec  6 02:25:36 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:25:36.508 143965 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-7c014e4e-a182-4f60-8285-20525bc99e5a', 'env', 'PROCESS_TAG=haproxy-7c014e4e-a182-4f60-8285-20525bc99e5a', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/7c014e4e-a182-4f60-8285-20525bc99e5a.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Dec  6 02:25:36 np0005548731 podman[275029]: 2025-12-06 07:25:36.840246519 +0000 UTC m=+0.040732627 container create adeef35555af72856c6649942f9a6b91e3f53c1af9f42c564181c40631d3337c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7c014e4e-a182-4f60-8285-20525bc99e5a, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Dec  6 02:25:36 np0005548731 systemd[1]: Started libpod-conmon-adeef35555af72856c6649942f9a6b91e3f53c1af9f42c564181c40631d3337c.scope.
Dec  6 02:25:36 np0005548731 systemd[1]: Started libcrun container.
Dec  6 02:25:36 np0005548731 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/65c3a21a9aaed257b9eeca0556e06a425115a3500b19063f90695592df203259/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec  6 02:25:36 np0005548731 podman[275029]: 2025-12-06 07:25:36.914968753 +0000 UTC m=+0.115454881 container init adeef35555af72856c6649942f9a6b91e3f53c1af9f42c564181c40631d3337c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7c014e4e-a182-4f60-8285-20525bc99e5a, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Dec  6 02:25:36 np0005548731 podman[275029]: 2025-12-06 07:25:36.818990409 +0000 UTC m=+0.019476547 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec  6 02:25:36 np0005548731 podman[275029]: 2025-12-06 07:25:36.920197481 +0000 UTC m=+0.120683599 container start adeef35555af72856c6649942f9a6b91e3f53c1af9f42c564181c40631d3337c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7c014e4e-a182-4f60-8285-20525bc99e5a, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  6 02:25:36 np0005548731 neutron-haproxy-ovnmeta-7c014e4e-a182-4f60-8285-20525bc99e5a[275043]: [NOTICE]   (275047) : New worker (275049) forked
Dec  6 02:25:36 np0005548731 neutron-haproxy-ovnmeta-7c014e4e-a182-4f60-8285-20525bc99e5a[275043]: [NOTICE]   (275047) : Loading success.
Dec  6 02:25:37 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:25:37 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:25:37 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:25:37.116 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:25:37 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:25:37.117 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=39, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:ec:b3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '32:72:e7:89:e0:7d'}, ipsec=False) old=SB_Global(nb_cfg=38) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 02:25:37 np0005548731 nova_compute[232433]: 2025-12-06 07:25:37.118 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:25:37 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:25:37.119 143965 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec  6 02:25:37 np0005548731 nova_compute[232433]: 2025-12-06 07:25:37.357 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:25:37 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:25:37 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:25:37 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:25:37.788 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:25:37 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e264 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:25:38 np0005548731 nova_compute[232433]: 2025-12-06 07:25:38.113 232437 DEBUG nova.virt.driver [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Emitting event <LifecycleEvent: 1765005938.1129785, ea8c0005-4b7a-4697-89ae-91f4bef22e36 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 02:25:38 np0005548731 nova_compute[232433]: 2025-12-06 07:25:38.114 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: ea8c0005-4b7a-4697-89ae-91f4bef22e36] VM Started (Lifecycle Event)#033[00m
Dec  6 02:25:38 np0005548731 nova_compute[232433]: 2025-12-06 07:25:38.115 232437 DEBUG nova.compute.manager [None req-c824a71a-795c-421e-aee3-9d1f9ae7b0d4 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] [instance: ea8c0005-4b7a-4697-89ae-91f4bef22e36] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec  6 02:25:38 np0005548731 nova_compute[232433]: 2025-12-06 07:25:38.118 232437 DEBUG nova.virt.libvirt.driver [None req-c824a71a-795c-421e-aee3-9d1f9ae7b0d4 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] [instance: ea8c0005-4b7a-4697-89ae-91f4bef22e36] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Dec  6 02:25:38 np0005548731 nova_compute[232433]: 2025-12-06 07:25:38.122 232437 INFO nova.virt.libvirt.driver [-] [instance: ea8c0005-4b7a-4697-89ae-91f4bef22e36] Instance spawned successfully.#033[00m
Dec  6 02:25:38 np0005548731 nova_compute[232433]: 2025-12-06 07:25:38.122 232437 DEBUG nova.virt.libvirt.driver [None req-c824a71a-795c-421e-aee3-9d1f9ae7b0d4 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] [instance: ea8c0005-4b7a-4697-89ae-91f4bef22e36] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Dec  6 02:25:38 np0005548731 nova_compute[232433]: 2025-12-06 07:25:38.136 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: ea8c0005-4b7a-4697-89ae-91f4bef22e36] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:25:38 np0005548731 nova_compute[232433]: 2025-12-06 07:25:38.143 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: ea8c0005-4b7a-4697-89ae-91f4bef22e36] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  6 02:25:38 np0005548731 nova_compute[232433]: 2025-12-06 07:25:38.146 232437 DEBUG nova.virt.libvirt.driver [None req-c824a71a-795c-421e-aee3-9d1f9ae7b0d4 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] [instance: ea8c0005-4b7a-4697-89ae-91f4bef22e36] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 02:25:38 np0005548731 nova_compute[232433]: 2025-12-06 07:25:38.147 232437 DEBUG nova.virt.libvirt.driver [None req-c824a71a-795c-421e-aee3-9d1f9ae7b0d4 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] [instance: ea8c0005-4b7a-4697-89ae-91f4bef22e36] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 02:25:38 np0005548731 nova_compute[232433]: 2025-12-06 07:25:38.148 232437 DEBUG nova.virt.libvirt.driver [None req-c824a71a-795c-421e-aee3-9d1f9ae7b0d4 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] [instance: ea8c0005-4b7a-4697-89ae-91f4bef22e36] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 02:25:38 np0005548731 nova_compute[232433]: 2025-12-06 07:25:38.148 232437 DEBUG nova.virt.libvirt.driver [None req-c824a71a-795c-421e-aee3-9d1f9ae7b0d4 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] [instance: ea8c0005-4b7a-4697-89ae-91f4bef22e36] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 02:25:38 np0005548731 nova_compute[232433]: 2025-12-06 07:25:38.149 232437 DEBUG nova.virt.libvirt.driver [None req-c824a71a-795c-421e-aee3-9d1f9ae7b0d4 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] [instance: ea8c0005-4b7a-4697-89ae-91f4bef22e36] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 02:25:38 np0005548731 nova_compute[232433]: 2025-12-06 07:25:38.149 232437 DEBUG nova.virt.libvirt.driver [None req-c824a71a-795c-421e-aee3-9d1f9ae7b0d4 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] [instance: ea8c0005-4b7a-4697-89ae-91f4bef22e36] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 02:25:38 np0005548731 nova_compute[232433]: 2025-12-06 07:25:38.171 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: ea8c0005-4b7a-4697-89ae-91f4bef22e36] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  6 02:25:38 np0005548731 nova_compute[232433]: 2025-12-06 07:25:38.171 232437 DEBUG nova.virt.driver [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Emitting event <LifecycleEvent: 1765005938.1138542, ea8c0005-4b7a-4697-89ae-91f4bef22e36 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 02:25:38 np0005548731 nova_compute[232433]: 2025-12-06 07:25:38.172 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: ea8c0005-4b7a-4697-89ae-91f4bef22e36] VM Paused (Lifecycle Event)#033[00m
Dec  6 02:25:38 np0005548731 nova_compute[232433]: 2025-12-06 07:25:38.198 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: ea8c0005-4b7a-4697-89ae-91f4bef22e36] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:25:38 np0005548731 nova_compute[232433]: 2025-12-06 07:25:38.201 232437 DEBUG nova.virt.driver [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Emitting event <LifecycleEvent: 1765005938.1179433, ea8c0005-4b7a-4697-89ae-91f4bef22e36 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 02:25:38 np0005548731 nova_compute[232433]: 2025-12-06 07:25:38.201 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: ea8c0005-4b7a-4697-89ae-91f4bef22e36] VM Resumed (Lifecycle Event)#033[00m
Dec  6 02:25:38 np0005548731 nova_compute[232433]: 2025-12-06 07:25:38.207 232437 INFO nova.compute.manager [None req-c824a71a-795c-421e-aee3-9d1f9ae7b0d4 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] [instance: ea8c0005-4b7a-4697-89ae-91f4bef22e36] Took 11.35 seconds to spawn the instance on the hypervisor.#033[00m
Dec  6 02:25:38 np0005548731 nova_compute[232433]: 2025-12-06 07:25:38.207 232437 DEBUG nova.compute.manager [None req-c824a71a-795c-421e-aee3-9d1f9ae7b0d4 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] [instance: ea8c0005-4b7a-4697-89ae-91f4bef22e36] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:25:38 np0005548731 nova_compute[232433]: 2025-12-06 07:25:38.230 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: ea8c0005-4b7a-4697-89ae-91f4bef22e36] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:25:38 np0005548731 nova_compute[232433]: 2025-12-06 07:25:38.234 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: ea8c0005-4b7a-4697-89ae-91f4bef22e36] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  6 02:25:38 np0005548731 nova_compute[232433]: 2025-12-06 07:25:38.259 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: ea8c0005-4b7a-4697-89ae-91f4bef22e36] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  6 02:25:38 np0005548731 nova_compute[232433]: 2025-12-06 07:25:38.270 232437 INFO nova.compute.manager [None req-c824a71a-795c-421e-aee3-9d1f9ae7b0d4 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] [instance: ea8c0005-4b7a-4697-89ae-91f4bef22e36] Took 12.92 seconds to build instance.#033[00m
Dec  6 02:25:38 np0005548731 nova_compute[232433]: 2025-12-06 07:25:38.287 232437 DEBUG oslo_concurrency.lockutils [None req-c824a71a-795c-421e-aee3-9d1f9ae7b0d4 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Lock "ea8c0005-4b7a-4697-89ae-91f4bef22e36" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.023s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:25:38 np0005548731 nova_compute[232433]: 2025-12-06 07:25:38.309 232437 DEBUG oslo_concurrency.processutils [None req-b8fe46a9-d2c2-4091-a0d3-08081f5eac06 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/78b7f315-b870-4184-87db-8078fd170237/disk.config 78b7f315-b870-4184-87db-8078fd170237_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.673s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:25:38 np0005548731 nova_compute[232433]: 2025-12-06 07:25:38.309 232437 INFO nova.virt.libvirt.driver [None req-b8fe46a9-d2c2-4091-a0d3-08081f5eac06 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] [instance: 78b7f315-b870-4184-87db-8078fd170237] Deleting local config drive /var/lib/nova/instances/78b7f315-b870-4184-87db-8078fd170237/disk.config because it was imported into RBD.#033[00m
Dec  6 02:25:38 np0005548731 kernel: tap59fc2ff6-f1: entered promiscuous mode
Dec  6 02:25:38 np0005548731 NetworkManager[49182]: <info>  [1765005938.3603] manager: (tap59fc2ff6-f1): new Tun device (/org/freedesktop/NetworkManager/Devices/206)
Dec  6 02:25:38 np0005548731 ovn_controller[133927]: 2025-12-06T07:25:38Z|00405|binding|INFO|Claiming lport 59fc2ff6-f1bb-4cda-a487-d011530f0618 for this chassis.
Dec  6 02:25:38 np0005548731 ovn_controller[133927]: 2025-12-06T07:25:38Z|00406|binding|INFO|59fc2ff6-f1bb-4cda-a487-d011530f0618: Claiming fa:16:3e:ef:72:dc 10.100.0.2
Dec  6 02:25:38 np0005548731 systemd-udevd[274964]: Network interface NamePolicy= disabled on kernel command line.
Dec  6 02:25:38 np0005548731 nova_compute[232433]: 2025-12-06 07:25:38.361 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:25:38 np0005548731 ovn_controller[133927]: 2025-12-06T07:25:38Z|00407|binding|INFO|Setting lport 59fc2ff6-f1bb-4cda-a487-d011530f0618 ovn-installed in OVS
Dec  6 02:25:38 np0005548731 NetworkManager[49182]: <info>  [1765005938.3876] device (tap59fc2ff6-f1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec  6 02:25:38 np0005548731 NetworkManager[49182]: <info>  [1765005938.3886] device (tap59fc2ff6-f1): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec  6 02:25:38 np0005548731 nova_compute[232433]: 2025-12-06 07:25:38.386 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:25:38 np0005548731 nova_compute[232433]: 2025-12-06 07:25:38.389 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:25:38 np0005548731 systemd-machined[195355]: New machine qemu-45-instance-0000006b.
Dec  6 02:25:38 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:25:38.423 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ef:72:dc 10.100.0.2'], port_security=['fa:16:3e:ef:72:dc 10.100.0.2'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': '78b7f315-b870-4184-87db-8078fd170237', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3935518d-ad39-4c36-90ae-f8fe979eafc5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c5036a5cbf7d44fd809d00942afd76e6', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'acd03568-07d0-4090-ab6d-0f9189f7ac4a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=cdc71006-3db8-48ee-b5c0-325cb1eaa593, chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], logical_port=59fc2ff6-f1bb-4cda-a487-d011530f0618) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 02:25:38 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:25:38.424 143965 INFO neutron.agent.ovn.metadata.agent [-] Port 59fc2ff6-f1bb-4cda-a487-d011530f0618 in datapath 3935518d-ad39-4c36-90ae-f8fe979eafc5 bound to our chassis#033[00m
Dec  6 02:25:38 np0005548731 ovn_controller[133927]: 2025-12-06T07:25:38Z|00408|binding|INFO|Setting lport 59fc2ff6-f1bb-4cda-a487-d011530f0618 up in Southbound
Dec  6 02:25:38 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:25:38.425 143965 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 3935518d-ad39-4c36-90ae-f8fe979eafc5 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m
Dec  6 02:25:38 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:25:38.427 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[1d83ab2d-77d8-4f76-9ef4-9a54721c394e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:25:38 np0005548731 systemd[1]: Started Virtual Machine qemu-45-instance-0000006b.
Dec  6 02:25:38 np0005548731 podman[275090]: 2025-12-06 07:25:38.448443998 +0000 UTC m=+0.102437333 container health_status 26c2ce9bdb83c6db5ccad7f5b2a7cb4e9354ab0235ad2f942ea42c8feda2142b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Dec  6 02:25:38 np0005548731 podman[275095]: 2025-12-06 07:25:38.481328231 +0000 UTC m=+0.134583459 container health_status ae4fd08cdec31d6ae2a6b91ffff7deb6ca5a8375cb52ee4b685cc6f25796824e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, container_name=multipathd, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec  6 02:25:38 np0005548731 podman[275093]: 2025-12-06 07:25:38.495247311 +0000 UTC m=+0.146564131 container health_status a118c3becf56cfbcae208065771ebf10a65162bb7b96389971293e534823fb78 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller)
Dec  6 02:25:38 np0005548731 nova_compute[232433]: 2025-12-06 07:25:38.811 232437 DEBUG nova.virt.driver [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Emitting event <LifecycleEvent: 1765005938.8114824, 78b7f315-b870-4184-87db-8078fd170237 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 02:25:38 np0005548731 nova_compute[232433]: 2025-12-06 07:25:38.812 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 78b7f315-b870-4184-87db-8078fd170237] VM Started (Lifecycle Event)#033[00m
Dec  6 02:25:38 np0005548731 nova_compute[232433]: 2025-12-06 07:25:38.850 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 78b7f315-b870-4184-87db-8078fd170237] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:25:38 np0005548731 nova_compute[232433]: 2025-12-06 07:25:38.854 232437 DEBUG nova.virt.driver [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Emitting event <LifecycleEvent: 1765005938.8117678, 78b7f315-b870-4184-87db-8078fd170237 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 02:25:38 np0005548731 nova_compute[232433]: 2025-12-06 07:25:38.855 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 78b7f315-b870-4184-87db-8078fd170237] VM Paused (Lifecycle Event)#033[00m
Dec  6 02:25:38 np0005548731 nova_compute[232433]: 2025-12-06 07:25:38.885 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 78b7f315-b870-4184-87db-8078fd170237] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:25:38 np0005548731 nova_compute[232433]: 2025-12-06 07:25:38.889 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 78b7f315-b870-4184-87db-8078fd170237] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  6 02:25:38 np0005548731 nova_compute[232433]: 2025-12-06 07:25:38.906 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 78b7f315-b870-4184-87db-8078fd170237] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  6 02:25:39 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:25:39 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:25:39 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:25:39.119 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:25:39 np0005548731 nova_compute[232433]: 2025-12-06 07:25:39.184 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:25:39 np0005548731 nova_compute[232433]: 2025-12-06 07:25:39.400 232437 DEBUG nova.compute.manager [req-cd54751d-447f-4ebe-aa92-9dbcf1fd1125 req-7f6f7e58-e94f-45e7-9ec8-61f261770ebe 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 78b7f315-b870-4184-87db-8078fd170237] Received event network-vif-plugged-59fc2ff6-f1bb-4cda-a487-d011530f0618 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:25:39 np0005548731 nova_compute[232433]: 2025-12-06 07:25:39.400 232437 DEBUG oslo_concurrency.lockutils [req-cd54751d-447f-4ebe-aa92-9dbcf1fd1125 req-7f6f7e58-e94f-45e7-9ec8-61f261770ebe 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "78b7f315-b870-4184-87db-8078fd170237-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:25:39 np0005548731 nova_compute[232433]: 2025-12-06 07:25:39.400 232437 DEBUG oslo_concurrency.lockutils [req-cd54751d-447f-4ebe-aa92-9dbcf1fd1125 req-7f6f7e58-e94f-45e7-9ec8-61f261770ebe 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "78b7f315-b870-4184-87db-8078fd170237-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:25:39 np0005548731 nova_compute[232433]: 2025-12-06 07:25:39.400 232437 DEBUG oslo_concurrency.lockutils [req-cd54751d-447f-4ebe-aa92-9dbcf1fd1125 req-7f6f7e58-e94f-45e7-9ec8-61f261770ebe 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "78b7f315-b870-4184-87db-8078fd170237-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:25:39 np0005548731 nova_compute[232433]: 2025-12-06 07:25:39.401 232437 DEBUG nova.compute.manager [req-cd54751d-447f-4ebe-aa92-9dbcf1fd1125 req-7f6f7e58-e94f-45e7-9ec8-61f261770ebe 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 78b7f315-b870-4184-87db-8078fd170237] Processing event network-vif-plugged-59fc2ff6-f1bb-4cda-a487-d011530f0618 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Dec  6 02:25:39 np0005548731 nova_compute[232433]: 2025-12-06 07:25:39.401 232437 DEBUG nova.compute.manager [None req-b8fe46a9-d2c2-4091-a0d3-08081f5eac06 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] [instance: 78b7f315-b870-4184-87db-8078fd170237] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec  6 02:25:39 np0005548731 nova_compute[232433]: 2025-12-06 07:25:39.404 232437 DEBUG nova.virt.driver [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Emitting event <LifecycleEvent: 1765005939.4041348, 78b7f315-b870-4184-87db-8078fd170237 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 02:25:39 np0005548731 nova_compute[232433]: 2025-12-06 07:25:39.404 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 78b7f315-b870-4184-87db-8078fd170237] VM Resumed (Lifecycle Event)#033[00m
Dec  6 02:25:39 np0005548731 nova_compute[232433]: 2025-12-06 07:25:39.406 232437 DEBUG nova.virt.libvirt.driver [None req-b8fe46a9-d2c2-4091-a0d3-08081f5eac06 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] [instance: 78b7f315-b870-4184-87db-8078fd170237] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Dec  6 02:25:39 np0005548731 nova_compute[232433]: 2025-12-06 07:25:39.409 232437 INFO nova.virt.libvirt.driver [-] [instance: 78b7f315-b870-4184-87db-8078fd170237] Instance spawned successfully.#033[00m
Dec  6 02:25:39 np0005548731 nova_compute[232433]: 2025-12-06 07:25:39.409 232437 DEBUG nova.virt.libvirt.driver [None req-b8fe46a9-d2c2-4091-a0d3-08081f5eac06 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] [instance: 78b7f315-b870-4184-87db-8078fd170237] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Dec  6 02:25:39 np0005548731 nova_compute[232433]: 2025-12-06 07:25:39.420 232437 DEBUG nova.compute.manager [req-cf58b156-7653-4d3f-8686-7be2bb8edb0a req-2feb696a-0063-47a7-8092-8563c4402741 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: ea8c0005-4b7a-4697-89ae-91f4bef22e36] Received event network-vif-plugged-0e5e71bc-7098-4091-938e-6299f989917f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:25:39 np0005548731 nova_compute[232433]: 2025-12-06 07:25:39.420 232437 DEBUG oslo_concurrency.lockutils [req-cf58b156-7653-4d3f-8686-7be2bb8edb0a req-2feb696a-0063-47a7-8092-8563c4402741 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "ea8c0005-4b7a-4697-89ae-91f4bef22e36-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:25:39 np0005548731 nova_compute[232433]: 2025-12-06 07:25:39.421 232437 DEBUG oslo_concurrency.lockutils [req-cf58b156-7653-4d3f-8686-7be2bb8edb0a req-2feb696a-0063-47a7-8092-8563c4402741 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "ea8c0005-4b7a-4697-89ae-91f4bef22e36-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:25:39 np0005548731 nova_compute[232433]: 2025-12-06 07:25:39.421 232437 DEBUG oslo_concurrency.lockutils [req-cf58b156-7653-4d3f-8686-7be2bb8edb0a req-2feb696a-0063-47a7-8092-8563c4402741 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "ea8c0005-4b7a-4697-89ae-91f4bef22e36-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:25:39 np0005548731 nova_compute[232433]: 2025-12-06 07:25:39.421 232437 DEBUG nova.compute.manager [req-cf58b156-7653-4d3f-8686-7be2bb8edb0a req-2feb696a-0063-47a7-8092-8563c4402741 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: ea8c0005-4b7a-4697-89ae-91f4bef22e36] No waiting events found dispatching network-vif-plugged-0e5e71bc-7098-4091-938e-6299f989917f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 02:25:39 np0005548731 nova_compute[232433]: 2025-12-06 07:25:39.421 232437 WARNING nova.compute.manager [req-cf58b156-7653-4d3f-8686-7be2bb8edb0a req-2feb696a-0063-47a7-8092-8563c4402741 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: ea8c0005-4b7a-4697-89ae-91f4bef22e36] Received unexpected event network-vif-plugged-0e5e71bc-7098-4091-938e-6299f989917f for instance with vm_state active and task_state None.#033[00m
Dec  6 02:25:39 np0005548731 nova_compute[232433]: 2025-12-06 07:25:39.430 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 78b7f315-b870-4184-87db-8078fd170237] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:25:39 np0005548731 nova_compute[232433]: 2025-12-06 07:25:39.436 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 78b7f315-b870-4184-87db-8078fd170237] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  6 02:25:39 np0005548731 nova_compute[232433]: 2025-12-06 07:25:39.439 232437 DEBUG nova.virt.libvirt.driver [None req-b8fe46a9-d2c2-4091-a0d3-08081f5eac06 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] [instance: 78b7f315-b870-4184-87db-8078fd170237] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 02:25:39 np0005548731 nova_compute[232433]: 2025-12-06 07:25:39.439 232437 DEBUG nova.virt.libvirt.driver [None req-b8fe46a9-d2c2-4091-a0d3-08081f5eac06 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] [instance: 78b7f315-b870-4184-87db-8078fd170237] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 02:25:39 np0005548731 nova_compute[232433]: 2025-12-06 07:25:39.440 232437 DEBUG nova.virt.libvirt.driver [None req-b8fe46a9-d2c2-4091-a0d3-08081f5eac06 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] [instance: 78b7f315-b870-4184-87db-8078fd170237] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 02:25:39 np0005548731 nova_compute[232433]: 2025-12-06 07:25:39.440 232437 DEBUG nova.virt.libvirt.driver [None req-b8fe46a9-d2c2-4091-a0d3-08081f5eac06 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] [instance: 78b7f315-b870-4184-87db-8078fd170237] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 02:25:39 np0005548731 nova_compute[232433]: 2025-12-06 07:25:39.440 232437 DEBUG nova.virt.libvirt.driver [None req-b8fe46a9-d2c2-4091-a0d3-08081f5eac06 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] [instance: 78b7f315-b870-4184-87db-8078fd170237] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 02:25:39 np0005548731 nova_compute[232433]: 2025-12-06 07:25:39.441 232437 DEBUG nova.virt.libvirt.driver [None req-b8fe46a9-d2c2-4091-a0d3-08081f5eac06 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] [instance: 78b7f315-b870-4184-87db-8078fd170237] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 02:25:39 np0005548731 nova_compute[232433]: 2025-12-06 07:25:39.485 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 78b7f315-b870-4184-87db-8078fd170237] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  6 02:25:39 np0005548731 nova_compute[232433]: 2025-12-06 07:25:39.603 232437 INFO nova.compute.manager [None req-b8fe46a9-d2c2-4091-a0d3-08081f5eac06 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] [instance: 78b7f315-b870-4184-87db-8078fd170237] Took 13.61 seconds to spawn the instance on the hypervisor.#033[00m
Dec  6 02:25:39 np0005548731 nova_compute[232433]: 2025-12-06 07:25:39.604 232437 DEBUG nova.compute.manager [None req-b8fe46a9-d2c2-4091-a0d3-08081f5eac06 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] [instance: 78b7f315-b870-4184-87db-8078fd170237] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:25:39 np0005548731 nova_compute[232433]: 2025-12-06 07:25:39.702 232437 INFO nova.compute.manager [None req-b8fe46a9-d2c2-4091-a0d3-08081f5eac06 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] [instance: 78b7f315-b870-4184-87db-8078fd170237] Took 14.71 seconds to build instance.#033[00m
Dec  6 02:25:39 np0005548731 nova_compute[232433]: 2025-12-06 07:25:39.741 232437 DEBUG oslo_concurrency.lockutils [None req-b8fe46a9-d2c2-4091-a0d3-08081f5eac06 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] Lock "78b7f315-b870-4184-87db-8078fd170237" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 14.808s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:25:39 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:25:39 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:25:39 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:25:39.792 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:25:40 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:25:40.121 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=9f96b960-b4f2-40bd-ae99-08121f5e8b78, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '39'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:25:41 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:25:41 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:25:41 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:25:41.121 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:25:41 np0005548731 nova_compute[232433]: 2025-12-06 07:25:41.388 232437 INFO nova.compute.manager [None req-5aff57d1-d39a-4033-a630-7f8749128f13 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] [instance: 78b7f315-b870-4184-87db-8078fd170237] Rescuing#033[00m
Dec  6 02:25:41 np0005548731 nova_compute[232433]: 2025-12-06 07:25:41.389 232437 DEBUG oslo_concurrency.lockutils [None req-5aff57d1-d39a-4033-a630-7f8749128f13 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] Acquiring lock "refresh_cache-78b7f315-b870-4184-87db-8078fd170237" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 02:25:41 np0005548731 nova_compute[232433]: 2025-12-06 07:25:41.390 232437 DEBUG oslo_concurrency.lockutils [None req-5aff57d1-d39a-4033-a630-7f8749128f13 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] Acquired lock "refresh_cache-78b7f315-b870-4184-87db-8078fd170237" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 02:25:41 np0005548731 nova_compute[232433]: 2025-12-06 07:25:41.390 232437 DEBUG nova.network.neutron [None req-5aff57d1-d39a-4033-a630-7f8749128f13 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] [instance: 78b7f315-b870-4184-87db-8078fd170237] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec  6 02:25:41 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:25:41 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:25:41 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:25:41.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:25:42 np0005548731 nova_compute[232433]: 2025-12-06 07:25:42.265 232437 DEBUG nova.compute.manager [req-dcfa5908-618e-4568-b4d3-f00948d7f075 req-54dd0e89-3668-4609-bcce-be9fa41a2036 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 78b7f315-b870-4184-87db-8078fd170237] Received event network-vif-plugged-59fc2ff6-f1bb-4cda-a487-d011530f0618 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:25:42 np0005548731 nova_compute[232433]: 2025-12-06 07:25:42.265 232437 DEBUG oslo_concurrency.lockutils [req-dcfa5908-618e-4568-b4d3-f00948d7f075 req-54dd0e89-3668-4609-bcce-be9fa41a2036 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "78b7f315-b870-4184-87db-8078fd170237-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:25:42 np0005548731 nova_compute[232433]: 2025-12-06 07:25:42.266 232437 DEBUG oslo_concurrency.lockutils [req-dcfa5908-618e-4568-b4d3-f00948d7f075 req-54dd0e89-3668-4609-bcce-be9fa41a2036 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "78b7f315-b870-4184-87db-8078fd170237-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:25:42 np0005548731 nova_compute[232433]: 2025-12-06 07:25:42.266 232437 DEBUG oslo_concurrency.lockutils [req-dcfa5908-618e-4568-b4d3-f00948d7f075 req-54dd0e89-3668-4609-bcce-be9fa41a2036 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "78b7f315-b870-4184-87db-8078fd170237-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:25:42 np0005548731 nova_compute[232433]: 2025-12-06 07:25:42.266 232437 DEBUG nova.compute.manager [req-dcfa5908-618e-4568-b4d3-f00948d7f075 req-54dd0e89-3668-4609-bcce-be9fa41a2036 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 78b7f315-b870-4184-87db-8078fd170237] No waiting events found dispatching network-vif-plugged-59fc2ff6-f1bb-4cda-a487-d011530f0618 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 02:25:42 np0005548731 nova_compute[232433]: 2025-12-06 07:25:42.266 232437 WARNING nova.compute.manager [req-dcfa5908-618e-4568-b4d3-f00948d7f075 req-54dd0e89-3668-4609-bcce-be9fa41a2036 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 78b7f315-b870-4184-87db-8078fd170237] Received unexpected event network-vif-plugged-59fc2ff6-f1bb-4cda-a487-d011530f0618 for instance with vm_state active and task_state rescuing.#033[00m
Dec  6 02:25:42 np0005548731 nova_compute[232433]: 2025-12-06 07:25:42.359 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:25:42 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e264 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:25:43 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:25:43 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:25:43 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:25:43.124 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:25:43 np0005548731 nova_compute[232433]: 2025-12-06 07:25:43.203 232437 DEBUG nova.network.neutron [None req-5aff57d1-d39a-4033-a630-7f8749128f13 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] [instance: 78b7f315-b870-4184-87db-8078fd170237] Updating instance_info_cache with network_info: [{"id": "59fc2ff6-f1bb-4cda-a487-d011530f0618", "address": "fa:16:3e:ef:72:dc", "network": {"id": "3935518d-ad39-4c36-90ae-f8fe979eafc5", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-32965635-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.2", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "c5036a5cbf7d44fd809d00942afd76e6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap59fc2ff6-f1", "ovs_interfaceid": "59fc2ff6-f1bb-4cda-a487-d011530f0618", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 02:25:43 np0005548731 nova_compute[232433]: 2025-12-06 07:25:43.226 232437 DEBUG oslo_concurrency.lockutils [None req-5aff57d1-d39a-4033-a630-7f8749128f13 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] Releasing lock "refresh_cache-78b7f315-b870-4184-87db-8078fd170237" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 02:25:43 np0005548731 nova_compute[232433]: 2025-12-06 07:25:43.551 232437 DEBUG nova.virt.libvirt.driver [None req-5aff57d1-d39a-4033-a630-7f8749128f13 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] [instance: 78b7f315-b870-4184-87db-8078fd170237] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Dec  6 02:25:43 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:25:43 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:25:43 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:25:43.796 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:25:44 np0005548731 nova_compute[232433]: 2025-12-06 07:25:44.187 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:25:45 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:25:45 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:25:45 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:25:45.126 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:25:45 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:25:45 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:25:45 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:25:45.798 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:25:46 np0005548731 nova_compute[232433]: 2025-12-06 07:25:46.145 232437 DEBUG oslo_concurrency.lockutils [None req-c51e62a6-81ad-4fa7-9b2c-22b33e8be37d d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Acquiring lock "refresh_cache-ea8c0005-4b7a-4697-89ae-91f4bef22e36" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 02:25:46 np0005548731 nova_compute[232433]: 2025-12-06 07:25:46.147 232437 DEBUG oslo_concurrency.lockutils [None req-c51e62a6-81ad-4fa7-9b2c-22b33e8be37d d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Acquired lock "refresh_cache-ea8c0005-4b7a-4697-89ae-91f4bef22e36" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 02:25:46 np0005548731 nova_compute[232433]: 2025-12-06 07:25:46.147 232437 DEBUG nova.network.neutron [None req-c51e62a6-81ad-4fa7-9b2c-22b33e8be37d d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] [instance: ea8c0005-4b7a-4697-89ae-91f4bef22e36] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec  6 02:25:47 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:25:47 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 02:25:47 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:25:47.128 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 02:25:47 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Dec  6 02:25:47 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  6 02:25:47 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 02:25:47 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  6 02:25:47 np0005548731 nova_compute[232433]: 2025-12-06 07:25:47.362 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:25:47 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:25:47 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:25:47 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:25:47.801 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:25:47 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e264 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:25:49 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:25:49 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:25:49 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:25:49.130 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:25:49 np0005548731 nova_compute[232433]: 2025-12-06 07:25:49.234 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:25:49 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:25:49 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:25:49 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:25:49.804 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:25:51 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:25:51 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:25:51 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:25:51.132 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:25:51 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:25:51 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:25:51 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:25:51.806 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:25:52 np0005548731 nova_compute[232433]: 2025-12-06 07:25:52.128 232437 DEBUG nova.network.neutron [None req-c51e62a6-81ad-4fa7-9b2c-22b33e8be37d d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] [instance: ea8c0005-4b7a-4697-89ae-91f4bef22e36] Updating instance_info_cache with network_info: [{"id": "0e5e71bc-7098-4091-938e-6299f989917f", "address": "fa:16:3e:ec:96:d5", "network": {"id": "7c014e4e-a182-4f60-8285-20525bc99e5a", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-602234112-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "88f5b34244614321a9b6e902eaba0ece", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0e5e71bc-70", "ovs_interfaceid": "0e5e71bc-7098-4091-938e-6299f989917f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 02:25:52 np0005548731 nova_compute[232433]: 2025-12-06 07:25:52.265 232437 DEBUG oslo_concurrency.lockutils [None req-c51e62a6-81ad-4fa7-9b2c-22b33e8be37d d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Releasing lock "refresh_cache-ea8c0005-4b7a-4697-89ae-91f4bef22e36" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 02:25:52 np0005548731 nova_compute[232433]: 2025-12-06 07:25:52.363 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:25:52 np0005548731 nova_compute[232433]: 2025-12-06 07:25:52.672 232437 DEBUG nova.virt.libvirt.driver [None req-c51e62a6-81ad-4fa7-9b2c-22b33e8be37d d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] [instance: ea8c0005-4b7a-4697-89ae-91f4bef22e36] Starting migrate_disk_and_power_off migrate_disk_and_power_off /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11511#033[00m
Dec  6 02:25:52 np0005548731 nova_compute[232433]: 2025-12-06 07:25:52.673 232437 DEBUG nova.virt.libvirt.volume.remotefs [None req-c51e62a6-81ad-4fa7-9b2c-22b33e8be37d d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Creating file /var/lib/nova/instances/ea8c0005-4b7a-4697-89ae-91f4bef22e36/de325329bcf541d89c61f21b6a9c1e77.tmp on remote host 192.168.122.100 create_file /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/remotefs.py:79#033[00m
Dec  6 02:25:52 np0005548731 nova_compute[232433]: 2025-12-06 07:25:52.674 232437 DEBUG oslo_concurrency.processutils [None req-c51e62a6-81ad-4fa7-9b2c-22b33e8be37d d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Running cmd (subprocess): ssh -o BatchMode=yes 192.168.122.100 touch /var/lib/nova/instances/ea8c0005-4b7a-4697-89ae-91f4bef22e36/de325329bcf541d89c61f21b6a9c1e77.tmp execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:25:52 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e264 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:25:53 np0005548731 nova_compute[232433]: 2025-12-06 07:25:53.100 232437 DEBUG oslo_concurrency.processutils [None req-c51e62a6-81ad-4fa7-9b2c-22b33e8be37d d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] CMD "ssh -o BatchMode=yes 192.168.122.100 touch /var/lib/nova/instances/ea8c0005-4b7a-4697-89ae-91f4bef22e36/de325329bcf541d89c61f21b6a9c1e77.tmp" returned: 1 in 0.426s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:25:53 np0005548731 nova_compute[232433]: 2025-12-06 07:25:53.101 232437 DEBUG oslo_concurrency.processutils [None req-c51e62a6-81ad-4fa7-9b2c-22b33e8be37d d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] 'ssh -o BatchMode=yes 192.168.122.100 touch /var/lib/nova/instances/ea8c0005-4b7a-4697-89ae-91f4bef22e36/de325329bcf541d89c61f21b6a9c1e77.tmp' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473#033[00m
Dec  6 02:25:53 np0005548731 nova_compute[232433]: 2025-12-06 07:25:53.101 232437 DEBUG nova.virt.libvirt.volume.remotefs [None req-c51e62a6-81ad-4fa7-9b2c-22b33e8be37d d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Creating directory /var/lib/nova/instances/ea8c0005-4b7a-4697-89ae-91f4bef22e36 on remote host 192.168.122.100 create_dir /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/remotefs.py:91#033[00m
Dec  6 02:25:53 np0005548731 nova_compute[232433]: 2025-12-06 07:25:53.101 232437 DEBUG oslo_concurrency.processutils [None req-c51e62a6-81ad-4fa7-9b2c-22b33e8be37d d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Running cmd (subprocess): ssh -o BatchMode=yes 192.168.122.100 mkdir -p /var/lib/nova/instances/ea8c0005-4b7a-4697-89ae-91f4bef22e36 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:25:53 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:25:53 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:25:53 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:25:53.134 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:25:53 np0005548731 nova_compute[232433]: 2025-12-06 07:25:53.327 232437 DEBUG oslo_concurrency.processutils [None req-c51e62a6-81ad-4fa7-9b2c-22b33e8be37d d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] CMD "ssh -o BatchMode=yes 192.168.122.100 mkdir -p /var/lib/nova/instances/ea8c0005-4b7a-4697-89ae-91f4bef22e36" returned: 0 in 0.225s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:25:53 np0005548731 nova_compute[232433]: 2025-12-06 07:25:53.331 232437 DEBUG nova.virt.libvirt.driver [None req-c51e62a6-81ad-4fa7-9b2c-22b33e8be37d d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] [instance: ea8c0005-4b7a-4697-89ae-91f4bef22e36] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Dec  6 02:25:53 np0005548731 nova_compute[232433]: 2025-12-06 07:25:53.589 232437 DEBUG nova.virt.libvirt.driver [None req-5aff57d1-d39a-4033-a630-7f8749128f13 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] [instance: 78b7f315-b870-4184-87db-8078fd170237] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101#033[00m
Dec  6 02:25:53 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:25:53 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:25:53 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:25:53.809 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:25:54 np0005548731 nova_compute[232433]: 2025-12-06 07:25:54.235 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:25:55 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:25:55 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:25:55 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:25:55.136 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:25:55 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:25:55 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:25:55 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:25:55.811 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:25:57 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 02:25:57 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 02:25:57 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:25:57 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:25:57 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:25:57.138 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:25:57 np0005548731 nova_compute[232433]: 2025-12-06 07:25:57.365 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:25:57 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:25:57 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:25:57 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:25:57.813 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:25:57 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e264 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:25:59 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:25:59 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:25:59 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:25:59.140 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:25:59 np0005548731 nova_compute[232433]: 2025-12-06 07:25:59.279 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:25:59 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:25:59 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 02:25:59 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:25:59.816 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 02:26:00 np0005548731 nova_compute[232433]: 2025-12-06 07:26:00.324 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:26:00 np0005548731 ovn_controller[133927]: 2025-12-06T07:26:00Z|00059|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:ec:96:d5 10.100.0.13
Dec  6 02:26:00 np0005548731 ovn_controller[133927]: 2025-12-06T07:26:00Z|00060|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:ec:96:d5 10.100.0.13
Dec  6 02:26:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:26:00.867 143965 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:26:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:26:00.868 143965 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:26:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:26:00.869 143965 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:26:01 np0005548731 nova_compute[232433]: 2025-12-06 07:26:01.103 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:26:01 np0005548731 nova_compute[232433]: 2025-12-06 07:26:01.104 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  6 02:26:01 np0005548731 nova_compute[232433]: 2025-12-06 07:26:01.104 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  6 02:26:01 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:26:01 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:26:01 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:26:01.142 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:26:01 np0005548731 nova_compute[232433]: 2025-12-06 07:26:01.417 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Acquiring lock "refresh_cache-5ea73f51-d224-41ed-892b-a137d9985e73" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 02:26:01 np0005548731 nova_compute[232433]: 2025-12-06 07:26:01.418 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Acquired lock "refresh_cache-5ea73f51-d224-41ed-892b-a137d9985e73" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 02:26:01 np0005548731 nova_compute[232433]: 2025-12-06 07:26:01.418 232437 DEBUG nova.network.neutron [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] [instance: 5ea73f51-d224-41ed-892b-a137d9985e73] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Dec  6 02:26:01 np0005548731 nova_compute[232433]: 2025-12-06 07:26:01.418 232437 DEBUG nova.objects.instance [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lazy-loading 'info_cache' on Instance uuid 5ea73f51-d224-41ed-892b-a137d9985e73 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:26:01 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:26:01 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:26:01 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:26:01.818 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:26:02 np0005548731 nova_compute[232433]: 2025-12-06 07:26:02.366 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:26:02 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e264 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:26:03 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:26:03 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:26:03 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:26:03.145 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:26:03 np0005548731 nova_compute[232433]: 2025-12-06 07:26:03.339 232437 DEBUG nova.network.neutron [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] [instance: 5ea73f51-d224-41ed-892b-a137d9985e73] Updating instance_info_cache with network_info: [{"id": "e61e57ee-1e76-4d23-ab33-ab23cf69ba70", "address": "fa:16:3e:d9:ba:09", "network": {"id": "3935518d-ad39-4c36-90ae-f8fe979eafc5", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-32965635-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "c5036a5cbf7d44fd809d00942afd76e6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape61e57ee-1e", "ovs_interfaceid": "e61e57ee-1e76-4d23-ab33-ab23cf69ba70", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 02:26:03 np0005548731 nova_compute[232433]: 2025-12-06 07:26:03.373 232437 DEBUG nova.virt.libvirt.driver [None req-c51e62a6-81ad-4fa7-9b2c-22b33e8be37d d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] [instance: ea8c0005-4b7a-4697-89ae-91f4bef22e36] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101#033[00m
Dec  6 02:26:03 np0005548731 nova_compute[232433]: 2025-12-06 07:26:03.384 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Releasing lock "refresh_cache-5ea73f51-d224-41ed-892b-a137d9985e73" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 02:26:03 np0005548731 nova_compute[232433]: 2025-12-06 07:26:03.385 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] [instance: 5ea73f51-d224-41ed-892b-a137d9985e73] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Dec  6 02:26:03 np0005548731 nova_compute[232433]: 2025-12-06 07:26:03.386 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:26:03 np0005548731 nova_compute[232433]: 2025-12-06 07:26:03.387 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:26:03 np0005548731 nova_compute[232433]: 2025-12-06 07:26:03.387 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  6 02:26:03 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:26:03 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:26:03 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:26:03.822 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:26:04 np0005548731 nova_compute[232433]: 2025-12-06 07:26:04.280 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:26:04 np0005548731 nova_compute[232433]: 2025-12-06 07:26:04.626 232437 DEBUG nova.virt.libvirt.driver [None req-5aff57d1-d39a-4033-a630-7f8749128f13 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] [instance: 78b7f315-b870-4184-87db-8078fd170237] Instance in state 1 after 21 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101#033[00m
Dec  6 02:26:05 np0005548731 nova_compute[232433]: 2025-12-06 07:26:05.105 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:26:05 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:26:05 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:26:05 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:26:05.147 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:26:05 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:26:05 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:26:05 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:26:05.826 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:26:06 np0005548731 nova_compute[232433]: 2025-12-06 07:26:06.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:26:07 np0005548731 nova_compute[232433]: 2025-12-06 07:26:07.105 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:26:07 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:26:07 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:26:07 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:26:07.149 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:26:07 np0005548731 nova_compute[232433]: 2025-12-06 07:26:07.367 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:26:07 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:26:07 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:26:07 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:26:07.828 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:26:07 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e264 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:26:08 np0005548731 nova_compute[232433]: 2025-12-06 07:26:08.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:26:08 np0005548731 nova_compute[232433]: 2025-12-06 07:26:08.150 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:26:08 np0005548731 nova_compute[232433]: 2025-12-06 07:26:08.150 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:26:08 np0005548731 nova_compute[232433]: 2025-12-06 07:26:08.151 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:26:08 np0005548731 nova_compute[232433]: 2025-12-06 07:26:08.151 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  6 02:26:08 np0005548731 nova_compute[232433]: 2025-12-06 07:26:08.151 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:26:08 np0005548731 kernel: tap59fc2ff6-f1 (unregistering): left promiscuous mode
Dec  6 02:26:08 np0005548731 NetworkManager[49182]: <info>  [1765005968.1628] device (tap59fc2ff6-f1): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec  6 02:26:08 np0005548731 ovn_controller[133927]: 2025-12-06T07:26:08Z|00409|binding|INFO|Releasing lport 59fc2ff6-f1bb-4cda-a487-d011530f0618 from this chassis (sb_readonly=0)
Dec  6 02:26:08 np0005548731 ovn_controller[133927]: 2025-12-06T07:26:08Z|00410|binding|INFO|Setting lport 59fc2ff6-f1bb-4cda-a487-d011530f0618 down in Southbound
Dec  6 02:26:08 np0005548731 nova_compute[232433]: 2025-12-06 07:26:08.192 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:26:08 np0005548731 ovn_controller[133927]: 2025-12-06T07:26:08Z|00411|binding|INFO|Removing iface tap59fc2ff6-f1 ovn-installed in OVS
Dec  6 02:26:08 np0005548731 nova_compute[232433]: 2025-12-06 07:26:08.222 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:26:08 np0005548731 systemd[1]: machine-qemu\x2d45\x2dinstance\x2d0000006b.scope: Deactivated successfully.
Dec  6 02:26:08 np0005548731 systemd[1]: machine-qemu\x2d45\x2dinstance\x2d0000006b.scope: Consumed 14.914s CPU time.
Dec  6 02:26:08 np0005548731 systemd-machined[195355]: Machine qemu-45-instance-0000006b terminated.
Dec  6 02:26:08 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:26:08.279 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ef:72:dc 10.100.0.2'], port_security=['fa:16:3e:ef:72:dc 10.100.0.2'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': '78b7f315-b870-4184-87db-8078fd170237', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3935518d-ad39-4c36-90ae-f8fe979eafc5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c5036a5cbf7d44fd809d00942afd76e6', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'acd03568-07d0-4090-ab6d-0f9189f7ac4a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=cdc71006-3db8-48ee-b5c0-325cb1eaa593, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], logical_port=59fc2ff6-f1bb-4cda-a487-d011530f0618) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 02:26:08 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:26:08.280 143965 INFO neutron.agent.ovn.metadata.agent [-] Port 59fc2ff6-f1bb-4cda-a487-d011530f0618 in datapath 3935518d-ad39-4c36-90ae-f8fe979eafc5 unbound from our chassis#033[00m
Dec  6 02:26:08 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:26:08.281 143965 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 3935518d-ad39-4c36-90ae-f8fe979eafc5 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m
Dec  6 02:26:08 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:26:08.282 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[bfbccf4d-4eff-4cd5-9e9e-2f9fec7bc491]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:26:08 np0005548731 ovn_controller[133927]: 2025-12-06T07:26:08Z|00412|memory_trim|INFO|Detected inactivity (last active 30001 ms ago): trimming memory
Dec  6 02:26:08 np0005548731 nova_compute[232433]: 2025-12-06 07:26:08.392 232437 INFO nova.virt.libvirt.driver [None req-c51e62a6-81ad-4fa7-9b2c-22b33e8be37d d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] [instance: ea8c0005-4b7a-4697-89ae-91f4bef22e36] Instance shutdown successfully after 15 seconds.#033[00m
Dec  6 02:26:08 np0005548731 nova_compute[232433]: 2025-12-06 07:26:08.414 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:26:08 np0005548731 nova_compute[232433]: 2025-12-06 07:26:08.421 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:26:08 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 02:26:08 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2900093873' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 02:26:08 np0005548731 nova_compute[232433]: 2025-12-06 07:26:08.598 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.448s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:26:08 np0005548731 nova_compute[232433]: 2025-12-06 07:26:08.641 232437 INFO nova.virt.libvirt.driver [None req-5aff57d1-d39a-4033-a630-7f8749128f13 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] [instance: 78b7f315-b870-4184-87db-8078fd170237] Instance shutdown successfully after 25 seconds.#033[00m
Dec  6 02:26:08 np0005548731 nova_compute[232433]: 2025-12-06 07:26:08.646 232437 INFO nova.virt.libvirt.driver [-] [instance: 78b7f315-b870-4184-87db-8078fd170237] Instance destroyed successfully.#033[00m
Dec  6 02:26:08 np0005548731 nova_compute[232433]: 2025-12-06 07:26:08.646 232437 DEBUG nova.objects.instance [None req-5aff57d1-d39a-4033-a630-7f8749128f13 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] Lazy-loading 'numa_topology' on Instance uuid 78b7f315-b870-4184-87db-8078fd170237 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:26:08 np0005548731 nova_compute[232433]: 2025-12-06 07:26:08.822 232437 INFO nova.virt.libvirt.driver [None req-5aff57d1-d39a-4033-a630-7f8749128f13 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] [instance: 78b7f315-b870-4184-87db-8078fd170237] Attempting rescue#033[00m
Dec  6 02:26:08 np0005548731 nova_compute[232433]: 2025-12-06 07:26:08.823 232437 DEBUG nova.virt.libvirt.driver [None req-5aff57d1-d39a-4033-a630-7f8749128f13 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] rescue generated disk_info: {'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'disk.rescue': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config.rescue': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} rescue /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4314#033[00m
Dec  6 02:26:08 np0005548731 nova_compute[232433]: 2025-12-06 07:26:08.827 232437 DEBUG nova.virt.libvirt.driver [None req-5aff57d1-d39a-4033-a630-7f8749128f13 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] [instance: 78b7f315-b870-4184-87db-8078fd170237] Instance directory exists: not creating _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4719#033[00m
Dec  6 02:26:08 np0005548731 nova_compute[232433]: 2025-12-06 07:26:08.827 232437 INFO nova.virt.libvirt.driver [None req-5aff57d1-d39a-4033-a630-7f8749128f13 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] [instance: 78b7f315-b870-4184-87db-8078fd170237] Creating image(s)#033[00m
Dec  6 02:26:08 np0005548731 nova_compute[232433]: 2025-12-06 07:26:08.860 232437 DEBUG nova.storage.rbd_utils [None req-5aff57d1-d39a-4033-a630-7f8749128f13 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] rbd image 78b7f315-b870-4184-87db-8078fd170237_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:26:08 np0005548731 nova_compute[232433]: 2025-12-06 07:26:08.865 232437 DEBUG nova.objects.instance [None req-5aff57d1-d39a-4033-a630-7f8749128f13 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] Lazy-loading 'trusted_certs' on Instance uuid 78b7f315-b870-4184-87db-8078fd170237 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:26:08 np0005548731 podman[275551]: 2025-12-06 07:26:08.907487954 +0000 UTC m=+0.054390549 container health_status ae4fd08cdec31d6ae2a6b91ffff7deb6ca5a8375cb52ee4b685cc6f25796824e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Dec  6 02:26:08 np0005548731 podman[275542]: 2025-12-06 07:26:08.908756165 +0000 UTC m=+0.063576393 container health_status 26c2ce9bdb83c6db5ccad7f5b2a7cb4e9354ab0235ad2f942ea42c8feda2142b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  6 02:26:08 np0005548731 nova_compute[232433]: 2025-12-06 07:26:08.923 232437 DEBUG nova.storage.rbd_utils [None req-5aff57d1-d39a-4033-a630-7f8749128f13 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] rbd image 78b7f315-b870-4184-87db-8078fd170237_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:26:08 np0005548731 podman[275550]: 2025-12-06 07:26:08.933227913 +0000 UTC m=+0.085750815 container health_status a118c3becf56cfbcae208065771ebf10a65162bb7b96389971293e534823fb78 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Dec  6 02:26:08 np0005548731 nova_compute[232433]: 2025-12-06 07:26:08.952 232437 DEBUG nova.storage.rbd_utils [None req-5aff57d1-d39a-4033-a630-7f8749128f13 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] rbd image 78b7f315-b870-4184-87db-8078fd170237_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:26:08 np0005548731 nova_compute[232433]: 2025-12-06 07:26:08.955 232437 DEBUG oslo_concurrency.processutils [None req-5aff57d1-d39a-4033-a630-7f8749128f13 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:26:09 np0005548731 nova_compute[232433]: 2025-12-06 07:26:09.018 232437 DEBUG oslo_concurrency.processutils [None req-5aff57d1-d39a-4033-a630-7f8749128f13 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:26:09 np0005548731 nova_compute[232433]: 2025-12-06 07:26:09.019 232437 DEBUG oslo_concurrency.lockutils [None req-5aff57d1-d39a-4033-a630-7f8749128f13 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] Acquiring lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:26:09 np0005548731 nova_compute[232433]: 2025-12-06 07:26:09.020 232437 DEBUG oslo_concurrency.lockutils [None req-5aff57d1-d39a-4033-a630-7f8749128f13 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:26:09 np0005548731 nova_compute[232433]: 2025-12-06 07:26:09.020 232437 DEBUG oslo_concurrency.lockutils [None req-5aff57d1-d39a-4033-a630-7f8749128f13 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:26:09 np0005548731 nova_compute[232433]: 2025-12-06 07:26:09.046 232437 DEBUG nova.storage.rbd_utils [None req-5aff57d1-d39a-4033-a630-7f8749128f13 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] rbd image 78b7f315-b870-4184-87db-8078fd170237_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:26:09 np0005548731 nova_compute[232433]: 2025-12-06 07:26:09.050 232437 DEBUG oslo_concurrency.processutils [None req-5aff57d1-d39a-4033-a630-7f8749128f13 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef 78b7f315-b870-4184-87db-8078fd170237_disk.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:26:09 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:26:09 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:26:09 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:26:09.151 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:26:09 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  6 02:26:09 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3294259920' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  6 02:26:09 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  6 02:26:09 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3294259920' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  6 02:26:09 np0005548731 nova_compute[232433]: 2025-12-06 07:26:09.282 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:26:09 np0005548731 nova_compute[232433]: 2025-12-06 07:26:09.303 232437 DEBUG nova.compute.manager [req-1e1ee7a0-a748-483f-936f-057f42a7fbd0 req-47331272-f960-47fa-bb76-f5c07eb14fd2 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 78b7f315-b870-4184-87db-8078fd170237] Received event network-vif-unplugged-59fc2ff6-f1bb-4cda-a487-d011530f0618 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:26:09 np0005548731 nova_compute[232433]: 2025-12-06 07:26:09.303 232437 DEBUG oslo_concurrency.lockutils [req-1e1ee7a0-a748-483f-936f-057f42a7fbd0 req-47331272-f960-47fa-bb76-f5c07eb14fd2 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "78b7f315-b870-4184-87db-8078fd170237-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:26:09 np0005548731 nova_compute[232433]: 2025-12-06 07:26:09.303 232437 DEBUG oslo_concurrency.lockutils [req-1e1ee7a0-a748-483f-936f-057f42a7fbd0 req-47331272-f960-47fa-bb76-f5c07eb14fd2 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "78b7f315-b870-4184-87db-8078fd170237-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:26:09 np0005548731 nova_compute[232433]: 2025-12-06 07:26:09.304 232437 DEBUG oslo_concurrency.lockutils [req-1e1ee7a0-a748-483f-936f-057f42a7fbd0 req-47331272-f960-47fa-bb76-f5c07eb14fd2 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "78b7f315-b870-4184-87db-8078fd170237-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:26:09 np0005548731 nova_compute[232433]: 2025-12-06 07:26:09.304 232437 DEBUG nova.compute.manager [req-1e1ee7a0-a748-483f-936f-057f42a7fbd0 req-47331272-f960-47fa-bb76-f5c07eb14fd2 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 78b7f315-b870-4184-87db-8078fd170237] No waiting events found dispatching network-vif-unplugged-59fc2ff6-f1bb-4cda-a487-d011530f0618 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 02:26:09 np0005548731 nova_compute[232433]: 2025-12-06 07:26:09.304 232437 WARNING nova.compute.manager [req-1e1ee7a0-a748-483f-936f-057f42a7fbd0 req-47331272-f960-47fa-bb76-f5c07eb14fd2 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 78b7f315-b870-4184-87db-8078fd170237] Received unexpected event network-vif-unplugged-59fc2ff6-f1bb-4cda-a487-d011530f0618 for instance with vm_state active and task_state rescuing.#033[00m
Dec  6 02:26:09 np0005548731 kernel: tap0e5e71bc-70 (unregistering): left promiscuous mode
Dec  6 02:26:09 np0005548731 NetworkManager[49182]: <info>  [1765005969.3761] device (tap0e5e71bc-70): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec  6 02:26:09 np0005548731 ovn_controller[133927]: 2025-12-06T07:26:09Z|00413|binding|INFO|Releasing lport 0e5e71bc-7098-4091-938e-6299f989917f from this chassis (sb_readonly=0)
Dec  6 02:26:09 np0005548731 ovn_controller[133927]: 2025-12-06T07:26:09Z|00414|binding|INFO|Setting lport 0e5e71bc-7098-4091-938e-6299f989917f down in Southbound
Dec  6 02:26:09 np0005548731 ovn_controller[133927]: 2025-12-06T07:26:09Z|00415|binding|INFO|Removing iface tap0e5e71bc-70 ovn-installed in OVS
Dec  6 02:26:09 np0005548731 nova_compute[232433]: 2025-12-06 07:26:09.385 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:26:09 np0005548731 nova_compute[232433]: 2025-12-06 07:26:09.400 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:26:09 np0005548731 systemd[1]: machine-qemu\x2d44\x2dinstance\x2d0000006c.scope: Deactivated successfully.
Dec  6 02:26:09 np0005548731 systemd[1]: machine-qemu\x2d44\x2dinstance\x2d0000006c.scope: Consumed 14.964s CPU time.
Dec  6 02:26:09 np0005548731 systemd-machined[195355]: Machine qemu-44-instance-0000006c terminated.
Dec  6 02:26:09 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:26:09.493 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ec:96:d5 10.100.0.13'], port_security=['fa:16:3e:ec:96:d5 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'ea8c0005-4b7a-4697-89ae-91f4bef22e36', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7c014e4e-a182-4f60-8285-20525bc99e5a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '88f5b34244614321a9b6e902eaba0ece', 'neutron:revision_number': '4', 'neutron:security_group_ids': '562c0019-973b-497e-ab29-636b40b9ed6d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7228f8e4-751e-45fe-ae64-cd2ffef9b9bb, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], logical_port=0e5e71bc-7098-4091-938e-6299f989917f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 02:26:09 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:26:09.495 143965 INFO neutron.agent.ovn.metadata.agent [-] Port 0e5e71bc-7098-4091-938e-6299f989917f in datapath 7c014e4e-a182-4f60-8285-20525bc99e5a unbound from our chassis#033[00m
Dec  6 02:26:09 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:26:09.497 143965 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 7c014e4e-a182-4f60-8285-20525bc99e5a, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec  6 02:26:09 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:26:09.498 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[d2b453bb-3513-4e47-9587-1979c54130ac]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:26:09 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:26:09.498 143965 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-7c014e4e-a182-4f60-8285-20525bc99e5a namespace which is not needed anymore#033[00m
Dec  6 02:26:09 np0005548731 neutron-haproxy-ovnmeta-7c014e4e-a182-4f60-8285-20525bc99e5a[275043]: [NOTICE]   (275047) : haproxy version is 2.8.14-c23fe91
Dec  6 02:26:09 np0005548731 neutron-haproxy-ovnmeta-7c014e4e-a182-4f60-8285-20525bc99e5a[275043]: [NOTICE]   (275047) : path to executable is /usr/sbin/haproxy
Dec  6 02:26:09 np0005548731 neutron-haproxy-ovnmeta-7c014e4e-a182-4f60-8285-20525bc99e5a[275043]: [WARNING]  (275047) : Exiting Master process...
Dec  6 02:26:09 np0005548731 neutron-haproxy-ovnmeta-7c014e4e-a182-4f60-8285-20525bc99e5a[275043]: [WARNING]  (275047) : Exiting Master process...
Dec  6 02:26:09 np0005548731 neutron-haproxy-ovnmeta-7c014e4e-a182-4f60-8285-20525bc99e5a[275043]: [ALERT]    (275047) : Current worker (275049) exited with code 143 (Terminated)
Dec  6 02:26:09 np0005548731 neutron-haproxy-ovnmeta-7c014e4e-a182-4f60-8285-20525bc99e5a[275043]: [WARNING]  (275047) : All workers exited. Exiting... (0)
Dec  6 02:26:09 np0005548731 systemd[1]: libpod-adeef35555af72856c6649942f9a6b91e3f53c1af9f42c564181c40631d3337c.scope: Deactivated successfully.
Dec  6 02:26:09 np0005548731 podman[275721]: 2025-12-06 07:26:09.626045051 +0000 UTC m=+0.046301421 container died adeef35555af72856c6649942f9a6b91e3f53c1af9f42c564181c40631d3337c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7c014e4e-a182-4f60-8285-20525bc99e5a, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125)
Dec  6 02:26:09 np0005548731 nova_compute[232433]: 2025-12-06 07:26:09.626 232437 INFO nova.virt.libvirt.driver [-] [instance: ea8c0005-4b7a-4697-89ae-91f4bef22e36] Instance destroyed successfully.#033[00m
Dec  6 02:26:09 np0005548731 nova_compute[232433]: 2025-12-06 07:26:09.628 232437 DEBUG nova.virt.libvirt.vif [None req-c51e62a6-81ad-4fa7-9b2c-22b33e8be37d d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T07:25:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-1037063395',display_name='tempest-ServerDiskConfigTestJSON-server-1037063395',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-1037063395',id=108,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-06T07:25:38Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=MigrationContext,new_flavor=Flavor(2),node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='88f5b34244614321a9b6e902eaba0ece',ramdisk_id='',reservation_id='r-qe8qs13c',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=ServiceList,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',im
age_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-ServerDiskConfigTestJSON-749654875',owner_user_name='tempest-ServerDiskConfigTestJSON-749654875-project-member'},tags=<?>,task_state='resize_migrating',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T07:25:44Z,user_data=None,user_id='d67c136e82ad4001b000848d75eef50d',uuid=ea8c0005-4b7a-4697-89ae-91f4bef22e36,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "0e5e71bc-7098-4091-938e-6299f989917f", "address": "fa:16:3e:ec:96:d5", "network": {"id": "7c014e4e-a182-4f60-8285-20525bc99e5a", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-602234112-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerDiskConfigTestJSON-602234112-network", "vif_mac": "fa:16:3e:ec:96:d5"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "88f5b34244614321a9b6e902eaba0ece", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0e5e71bc-70", "ovs_interfaceid": "0e5e71bc-7098-4091-938e-6299f989917f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec  6 02:26:09 np0005548731 nova_compute[232433]: 2025-12-06 07:26:09.628 232437 DEBUG nova.network.os_vif_util [None req-c51e62a6-81ad-4fa7-9b2c-22b33e8be37d d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Converting VIF {"id": "0e5e71bc-7098-4091-938e-6299f989917f", "address": "fa:16:3e:ec:96:d5", "network": {"id": "7c014e4e-a182-4f60-8285-20525bc99e5a", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-602234112-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerDiskConfigTestJSON-602234112-network", "vif_mac": "fa:16:3e:ec:96:d5"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "88f5b34244614321a9b6e902eaba0ece", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0e5e71bc-70", "ovs_interfaceid": "0e5e71bc-7098-4091-938e-6299f989917f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 02:26:09 np0005548731 nova_compute[232433]: 2025-12-06 07:26:09.629 232437 DEBUG nova.network.os_vif_util [None req-c51e62a6-81ad-4fa7-9b2c-22b33e8be37d d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:ec:96:d5,bridge_name='br-int',has_traffic_filtering=True,id=0e5e71bc-7098-4091-938e-6299f989917f,network=Network(7c014e4e-a182-4f60-8285-20525bc99e5a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0e5e71bc-70') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 02:26:09 np0005548731 nova_compute[232433]: 2025-12-06 07:26:09.629 232437 DEBUG os_vif [None req-c51e62a6-81ad-4fa7-9b2c-22b33e8be37d d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:ec:96:d5,bridge_name='br-int',has_traffic_filtering=True,id=0e5e71bc-7098-4091-938e-6299f989917f,network=Network(7c014e4e-a182-4f60-8285-20525bc99e5a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0e5e71bc-70') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec  6 02:26:09 np0005548731 nova_compute[232433]: 2025-12-06 07:26:09.631 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:26:09 np0005548731 nova_compute[232433]: 2025-12-06 07:26:09.631 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0e5e71bc-70, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:26:09 np0005548731 nova_compute[232433]: 2025-12-06 07:26:09.632 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:26:09 np0005548731 nova_compute[232433]: 2025-12-06 07:26:09.633 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  6 02:26:09 np0005548731 nova_compute[232433]: 2025-12-06 07:26:09.638 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:26:09 np0005548731 nova_compute[232433]: 2025-12-06 07:26:09.643 232437 INFO os_vif [None req-c51e62a6-81ad-4fa7-9b2c-22b33e8be37d d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:ec:96:d5,bridge_name='br-int',has_traffic_filtering=True,id=0e5e71bc-7098-4091-938e-6299f989917f,network=Network(7c014e4e-a182-4f60-8285-20525bc99e5a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0e5e71bc-70')#033[00m
Dec  6 02:26:09 np0005548731 nova_compute[232433]: 2025-12-06 07:26:09.648 232437 DEBUG nova.virt.libvirt.driver [None req-c51e62a6-81ad-4fa7-9b2c-22b33e8be37d d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] skipping disk for instance-0000006c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  6 02:26:09 np0005548731 nova_compute[232433]: 2025-12-06 07:26:09.648 232437 DEBUG nova.virt.libvirt.driver [None req-c51e62a6-81ad-4fa7-9b2c-22b33e8be37d d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] skipping disk for instance-0000006c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  6 02:26:09 np0005548731 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-adeef35555af72856c6649942f9a6b91e3f53c1af9f42c564181c40631d3337c-userdata-shm.mount: Deactivated successfully.
Dec  6 02:26:09 np0005548731 systemd[1]: var-lib-containers-storage-overlay-65c3a21a9aaed257b9eeca0556e06a425115a3500b19063f90695592df203259-merged.mount: Deactivated successfully.
Dec  6 02:26:09 np0005548731 podman[275721]: 2025-12-06 07:26:09.663787113 +0000 UTC m=+0.084043483 container cleanup adeef35555af72856c6649942f9a6b91e3f53c1af9f42c564181c40631d3337c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7c014e4e-a182-4f60-8285-20525bc99e5a, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec  6 02:26:09 np0005548731 systemd[1]: libpod-conmon-adeef35555af72856c6649942f9a6b91e3f53c1af9f42c564181c40631d3337c.scope: Deactivated successfully.
Dec  6 02:26:09 np0005548731 nova_compute[232433]: 2025-12-06 07:26:09.709 232437 DEBUG nova.virt.libvirt.driver [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] skipping disk for instance-0000006b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  6 02:26:09 np0005548731 nova_compute[232433]: 2025-12-06 07:26:09.710 232437 DEBUG nova.virt.libvirt.driver [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] skipping disk for instance-0000006b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  6 02:26:09 np0005548731 nova_compute[232433]: 2025-12-06 07:26:09.712 232437 DEBUG nova.virt.libvirt.driver [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] skipping disk for instance-00000067 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  6 02:26:09 np0005548731 nova_compute[232433]: 2025-12-06 07:26:09.712 232437 DEBUG nova.virt.libvirt.driver [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] skipping disk for instance-00000067 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  6 02:26:09 np0005548731 nova_compute[232433]: 2025-12-06 07:26:09.713 232437 DEBUG nova.virt.libvirt.driver [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] skipping disk for instance-00000067 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  6 02:26:09 np0005548731 nova_compute[232433]: 2025-12-06 07:26:09.716 232437 DEBUG nova.virt.libvirt.driver [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] skipping disk for instance-0000006c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  6 02:26:09 np0005548731 nova_compute[232433]: 2025-12-06 07:26:09.716 232437 DEBUG nova.virt.libvirt.driver [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] skipping disk for instance-0000006c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  6 02:26:09 np0005548731 podman[275764]: 2025-12-06 07:26:09.735163306 +0000 UTC m=+0.045846521 container remove adeef35555af72856c6649942f9a6b91e3f53c1af9f42c564181c40631d3337c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7c014e4e-a182-4f60-8285-20525bc99e5a, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec  6 02:26:09 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:26:09.743 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[241d060b-24be-4307-b7b2-f6519e0aa1d9]: (4, ('Sat Dec  6 07:26:09 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-7c014e4e-a182-4f60-8285-20525bc99e5a (adeef35555af72856c6649942f9a6b91e3f53c1af9f42c564181c40631d3337c)\nadeef35555af72856c6649942f9a6b91e3f53c1af9f42c564181c40631d3337c\nSat Dec  6 07:26:09 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-7c014e4e-a182-4f60-8285-20525bc99e5a (adeef35555af72856c6649942f9a6b91e3f53c1af9f42c564181c40631d3337c)\nadeef35555af72856c6649942f9a6b91e3f53c1af9f42c564181c40631d3337c\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:26:09 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:26:09.745 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[c2b7cd2d-adc4-4ec0-a857-13d1056727d9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:26:09 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:26:09.747 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7c014e4e-a0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:26:09 np0005548731 nova_compute[232433]: 2025-12-06 07:26:09.749 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:26:09 np0005548731 kernel: tap7c014e4e-a0: left promiscuous mode
Dec  6 02:26:09 np0005548731 nova_compute[232433]: 2025-12-06 07:26:09.766 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:26:09 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:26:09.770 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[192cddc3-ea3c-48ff-ae4a-313b72142bb7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:26:09 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:26:09.785 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[03d91638-e5bc-42f7-9338-2cad1a863a36]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:26:09 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:26:09.786 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[8ad97664-28ce-4efb-bd4a-204d9a0f7fba]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:26:09 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:26:09.801 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[26202453-793e-4938-9bd6-ac503868512f]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 619828, 'reachable_time': 39980, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 275782, 'error': None, 'target': 'ovnmeta-7c014e4e-a182-4f60-8285-20525bc99e5a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:26:09 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:26:09.804 144079 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-7c014e4e-a182-4f60-8285-20525bc99e5a deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Dec  6 02:26:09 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:26:09.804 144079 DEBUG oslo.privsep.daemon [-] privsep: reply[814c1b4d-cfbb-415c-a7ef-8349162e5fae]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:26:09 np0005548731 systemd[1]: run-netns-ovnmeta\x2d7c014e4e\x2da182\x2d4f60\x2d8285\x2d20525bc99e5a.mount: Deactivated successfully.
Dec  6 02:26:09 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:26:09 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:26:09 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:26:09.832 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:26:09 np0005548731 nova_compute[232433]: 2025-12-06 07:26:09.877 232437 WARNING nova.virt.libvirt.driver [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  6 02:26:09 np0005548731 nova_compute[232433]: 2025-12-06 07:26:09.878 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4264MB free_disk=20.740219116210938GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, 
"label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  6 02:26:09 np0005548731 nova_compute[232433]: 2025-12-06 07:26:09.878 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:26:09 np0005548731 nova_compute[232433]: 2025-12-06 07:26:09.878 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:26:09 np0005548731 nova_compute[232433]: 2025-12-06 07:26:09.975 232437 INFO nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] [instance: ea8c0005-4b7a-4697-89ae-91f4bef22e36] Updating resource usage from migration 2799ffd6-1eb2-4617-b3b6-b7669cd5f78c#033[00m
Dec  6 02:26:09 np0005548731 nova_compute[232433]: 2025-12-06 07:26:09.996 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Instance 5ea73f51-d224-41ed-892b-a137d9985e73 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  6 02:26:09 np0005548731 nova_compute[232433]: 2025-12-06 07:26:09.997 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Instance 78b7f315-b870-4184-87db-8078fd170237 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  6 02:26:09 np0005548731 nova_compute[232433]: 2025-12-06 07:26:09.997 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Migration 2799ffd6-1eb2-4617-b3b6-b7669cd5f78c is active on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1640#033[00m
Dec  6 02:26:09 np0005548731 nova_compute[232433]: 2025-12-06 07:26:09.997 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  6 02:26:09 np0005548731 nova_compute[232433]: 2025-12-06 07:26:09.997 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7680MB used_ram=896MB phys_disk=20GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  6 02:26:10 np0005548731 nova_compute[232433]: 2025-12-06 07:26:10.074 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:26:10 np0005548731 nova_compute[232433]: 2025-12-06 07:26:10.109 232437 DEBUG neutronclient.v2_0.client [None req-c51e62a6-81ad-4fa7-9b2c-22b33e8be37d d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Error message: {"NeutronError": {"type": "PortBindingNotFound", "message": "Binding for port 0e5e71bc-7098-4091-938e-6299f989917f for host compute-0.ctlplane.example.com could not be found.", "detail": ""}} _handle_fault_response /usr/lib/python3.9/site-packages/neutronclient/v2_0/client.py:262#033[00m
Dec  6 02:26:10 np0005548731 nova_compute[232433]: 2025-12-06 07:26:10.256 232437 DEBUG oslo_concurrency.lockutils [None req-c51e62a6-81ad-4fa7-9b2c-22b33e8be37d d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Acquiring lock "ea8c0005-4b7a-4697-89ae-91f4bef22e36-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:26:10 np0005548731 nova_compute[232433]: 2025-12-06 07:26:10.256 232437 DEBUG oslo_concurrency.lockutils [None req-c51e62a6-81ad-4fa7-9b2c-22b33e8be37d d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Lock "ea8c0005-4b7a-4697-89ae-91f4bef22e36-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:26:10 np0005548731 nova_compute[232433]: 2025-12-06 07:26:10.256 232437 DEBUG oslo_concurrency.lockutils [None req-c51e62a6-81ad-4fa7-9b2c-22b33e8be37d d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Lock "ea8c0005-4b7a-4697-89ae-91f4bef22e36-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:26:10 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 02:26:10 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/4230824791' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 02:26:10 np0005548731 nova_compute[232433]: 2025-12-06 07:26:10.515 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.441s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:26:10 np0005548731 nova_compute[232433]: 2025-12-06 07:26:10.521 232437 DEBUG nova.compute.provider_tree [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Inventory has not changed in ProviderTree for provider: 6d00757a-082f-486d-ae84-869a2ba2e6e7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  6 02:26:10 np0005548731 nova_compute[232433]: 2025-12-06 07:26:10.658 232437 DEBUG nova.scheduler.client.report [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Inventory has not changed for provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  6 02:26:10 np0005548731 nova_compute[232433]: 2025-12-06 07:26:10.699 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  6 02:26:10 np0005548731 nova_compute[232433]: 2025-12-06 07:26:10.699 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.821s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:26:11 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:26:11 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 02:26:11 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:26:11.153 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 02:26:11 np0005548731 nova_compute[232433]: 2025-12-06 07:26:11.451 232437 DEBUG nova.compute.manager [req-f6ba71ac-3b16-48a9-a4b9-b319833bc141 req-5a09dbca-2c3b-40d9-a1a1-d03a61ed6247 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 78b7f315-b870-4184-87db-8078fd170237] Received event network-vif-plugged-59fc2ff6-f1bb-4cda-a487-d011530f0618 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:26:11 np0005548731 nova_compute[232433]: 2025-12-06 07:26:11.452 232437 DEBUG oslo_concurrency.lockutils [req-f6ba71ac-3b16-48a9-a4b9-b319833bc141 req-5a09dbca-2c3b-40d9-a1a1-d03a61ed6247 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "78b7f315-b870-4184-87db-8078fd170237-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:26:11 np0005548731 nova_compute[232433]: 2025-12-06 07:26:11.452 232437 DEBUG oslo_concurrency.lockutils [req-f6ba71ac-3b16-48a9-a4b9-b319833bc141 req-5a09dbca-2c3b-40d9-a1a1-d03a61ed6247 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "78b7f315-b870-4184-87db-8078fd170237-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:26:11 np0005548731 nova_compute[232433]: 2025-12-06 07:26:11.452 232437 DEBUG oslo_concurrency.lockutils [req-f6ba71ac-3b16-48a9-a4b9-b319833bc141 req-5a09dbca-2c3b-40d9-a1a1-d03a61ed6247 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "78b7f315-b870-4184-87db-8078fd170237-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:26:11 np0005548731 nova_compute[232433]: 2025-12-06 07:26:11.453 232437 DEBUG nova.compute.manager [req-f6ba71ac-3b16-48a9-a4b9-b319833bc141 req-5a09dbca-2c3b-40d9-a1a1-d03a61ed6247 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 78b7f315-b870-4184-87db-8078fd170237] No waiting events found dispatching network-vif-plugged-59fc2ff6-f1bb-4cda-a487-d011530f0618 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 02:26:11 np0005548731 nova_compute[232433]: 2025-12-06 07:26:11.453 232437 WARNING nova.compute.manager [req-f6ba71ac-3b16-48a9-a4b9-b319833bc141 req-5a09dbca-2c3b-40d9-a1a1-d03a61ed6247 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 78b7f315-b870-4184-87db-8078fd170237] Received unexpected event network-vif-plugged-59fc2ff6-f1bb-4cda-a487-d011530f0618 for instance with vm_state active and task_state rescuing.#033[00m
Dec  6 02:26:11 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:26:11 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:26:11 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:26:11.834 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:26:12 np0005548731 nova_compute[232433]: 2025-12-06 07:26:12.262 232437 DEBUG nova.compute.manager [req-ba61c127-e952-4e85-a3a2-2df7823cd9ef req-4aa2d9d8-ff30-4542-b06e-cc21c1222bf0 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: ea8c0005-4b7a-4697-89ae-91f4bef22e36] Received event network-vif-unplugged-0e5e71bc-7098-4091-938e-6299f989917f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:26:12 np0005548731 nova_compute[232433]: 2025-12-06 07:26:12.262 232437 DEBUG oslo_concurrency.lockutils [req-ba61c127-e952-4e85-a3a2-2df7823cd9ef req-4aa2d9d8-ff30-4542-b06e-cc21c1222bf0 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "ea8c0005-4b7a-4697-89ae-91f4bef22e36-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:26:12 np0005548731 nova_compute[232433]: 2025-12-06 07:26:12.263 232437 DEBUG oslo_concurrency.lockutils [req-ba61c127-e952-4e85-a3a2-2df7823cd9ef req-4aa2d9d8-ff30-4542-b06e-cc21c1222bf0 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "ea8c0005-4b7a-4697-89ae-91f4bef22e36-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:26:12 np0005548731 nova_compute[232433]: 2025-12-06 07:26:12.263 232437 DEBUG oslo_concurrency.lockutils [req-ba61c127-e952-4e85-a3a2-2df7823cd9ef req-4aa2d9d8-ff30-4542-b06e-cc21c1222bf0 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "ea8c0005-4b7a-4697-89ae-91f4bef22e36-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:26:12 np0005548731 nova_compute[232433]: 2025-12-06 07:26:12.263 232437 DEBUG nova.compute.manager [req-ba61c127-e952-4e85-a3a2-2df7823cd9ef req-4aa2d9d8-ff30-4542-b06e-cc21c1222bf0 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: ea8c0005-4b7a-4697-89ae-91f4bef22e36] No waiting events found dispatching network-vif-unplugged-0e5e71bc-7098-4091-938e-6299f989917f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 02:26:12 np0005548731 nova_compute[232433]: 2025-12-06 07:26:12.263 232437 WARNING nova.compute.manager [req-ba61c127-e952-4e85-a3a2-2df7823cd9ef req-4aa2d9d8-ff30-4542-b06e-cc21c1222bf0 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: ea8c0005-4b7a-4697-89ae-91f4bef22e36] Received unexpected event network-vif-unplugged-0e5e71bc-7098-4091-938e-6299f989917f for instance with vm_state active and task_state resize_migrated.#033[00m
Dec  6 02:26:12 np0005548731 nova_compute[232433]: 2025-12-06 07:26:12.369 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:26:12 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e264 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:26:13 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:26:13 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 02:26:13 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:26:13.156 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 02:26:13 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:26:13 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:26:13 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:26:13.837 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:26:14 np0005548731 nova_compute[232433]: 2025-12-06 07:26:14.171 232437 DEBUG oslo_concurrency.processutils [None req-5aff57d1-d39a-4033-a630-7f8749128f13 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef 78b7f315-b870-4184-87db-8078fd170237_disk.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 5.121s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:26:14 np0005548731 nova_compute[232433]: 2025-12-06 07:26:14.172 232437 DEBUG nova.objects.instance [None req-5aff57d1-d39a-4033-a630-7f8749128f13 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] Lazy-loading 'migration_context' on Instance uuid 78b7f315-b870-4184-87db-8078fd170237 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:26:14 np0005548731 nova_compute[232433]: 2025-12-06 07:26:14.394 232437 DEBUG nova.virt.libvirt.driver [None req-5aff57d1-d39a-4033-a630-7f8749128f13 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] [instance: 78b7f315-b870-4184-87db-8078fd170237] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Dec  6 02:26:14 np0005548731 nova_compute[232433]: 2025-12-06 07:26:14.394 232437 DEBUG nova.virt.libvirt.driver [None req-5aff57d1-d39a-4033-a630-7f8749128f13 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] [instance: 78b7f315-b870-4184-87db-8078fd170237] Start _get_guest_xml network_info=[{"id": "59fc2ff6-f1bb-4cda-a487-d011530f0618", "address": "fa:16:3e:ef:72:dc", "network": {"id": "3935518d-ad39-4c36-90ae-f8fe979eafc5", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-32965635-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.2", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerRescueTestJSON-32965635-network", "vif_mac": "fa:16:3e:ef:72:dc"}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "c5036a5cbf7d44fd809d00942afd76e6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap59fc2ff6-f1", "ovs_interfaceid": "59fc2ff6-f1bb-4cda-a487-d011530f0618", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'disk.rescue': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config.rescue': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>) rescue={'image_id': '6efab05d-c7cf-4770-a5c3-c806a2739063', 'kernel_id': '', 'ramdisk_id': ''} block_device_info=None _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec  6 02:26:14 np0005548731 nova_compute[232433]: 2025-12-06 07:26:14.395 232437 DEBUG nova.objects.instance [None req-5aff57d1-d39a-4033-a630-7f8749128f13 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] Lazy-loading 'resources' on Instance uuid 78b7f315-b870-4184-87db-8078fd170237 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:26:14 np0005548731 nova_compute[232433]: 2025-12-06 07:26:14.531 232437 WARNING nova.virt.libvirt.driver [None req-5aff57d1-d39a-4033-a630-7f8749128f13 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  6 02:26:14 np0005548731 nova_compute[232433]: 2025-12-06 07:26:14.538 232437 DEBUG nova.virt.libvirt.host [None req-5aff57d1-d39a-4033-a630-7f8749128f13 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec  6 02:26:14 np0005548731 nova_compute[232433]: 2025-12-06 07:26:14.539 232437 DEBUG nova.virt.libvirt.host [None req-5aff57d1-d39a-4033-a630-7f8749128f13 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec  6 02:26:14 np0005548731 nova_compute[232433]: 2025-12-06 07:26:14.541 232437 DEBUG nova.virt.libvirt.host [None req-5aff57d1-d39a-4033-a630-7f8749128f13 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec  6 02:26:14 np0005548731 nova_compute[232433]: 2025-12-06 07:26:14.542 232437 DEBUG nova.virt.libvirt.host [None req-5aff57d1-d39a-4033-a630-7f8749128f13 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec  6 02:26:14 np0005548731 nova_compute[232433]: 2025-12-06 07:26:14.543 232437 DEBUG nova.virt.libvirt.driver [None req-5aff57d1-d39a-4033-a630-7f8749128f13 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec  6 02:26:14 np0005548731 nova_compute[232433]: 2025-12-06 07:26:14.544 232437 DEBUG nova.virt.hardware [None req-5aff57d1-d39a-4033-a630-7f8749128f13 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-06T06:56:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='25848a18-11d9-4f11-80b5-5d005675c76d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec  6 02:26:14 np0005548731 nova_compute[232433]: 2025-12-06 07:26:14.544 232437 DEBUG nova.virt.hardware [None req-5aff57d1-d39a-4033-a630-7f8749128f13 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec  6 02:26:14 np0005548731 nova_compute[232433]: 2025-12-06 07:26:14.544 232437 DEBUG nova.virt.hardware [None req-5aff57d1-d39a-4033-a630-7f8749128f13 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec  6 02:26:14 np0005548731 nova_compute[232433]: 2025-12-06 07:26:14.545 232437 DEBUG nova.virt.hardware [None req-5aff57d1-d39a-4033-a630-7f8749128f13 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec  6 02:26:14 np0005548731 nova_compute[232433]: 2025-12-06 07:26:14.545 232437 DEBUG nova.virt.hardware [None req-5aff57d1-d39a-4033-a630-7f8749128f13 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec  6 02:26:14 np0005548731 nova_compute[232433]: 2025-12-06 07:26:14.545 232437 DEBUG nova.virt.hardware [None req-5aff57d1-d39a-4033-a630-7f8749128f13 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec  6 02:26:14 np0005548731 nova_compute[232433]: 2025-12-06 07:26:14.545 232437 DEBUG nova.virt.hardware [None req-5aff57d1-d39a-4033-a630-7f8749128f13 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec  6 02:26:14 np0005548731 nova_compute[232433]: 2025-12-06 07:26:14.546 232437 DEBUG nova.virt.hardware [None req-5aff57d1-d39a-4033-a630-7f8749128f13 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec  6 02:26:14 np0005548731 nova_compute[232433]: 2025-12-06 07:26:14.546 232437 DEBUG nova.virt.hardware [None req-5aff57d1-d39a-4033-a630-7f8749128f13 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec  6 02:26:14 np0005548731 nova_compute[232433]: 2025-12-06 07:26:14.546 232437 DEBUG nova.virt.hardware [None req-5aff57d1-d39a-4033-a630-7f8749128f13 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec  6 02:26:14 np0005548731 nova_compute[232433]: 2025-12-06 07:26:14.547 232437 DEBUG nova.virt.hardware [None req-5aff57d1-d39a-4033-a630-7f8749128f13 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec  6 02:26:14 np0005548731 nova_compute[232433]: 2025-12-06 07:26:14.547 232437 DEBUG nova.objects.instance [None req-5aff57d1-d39a-4033-a630-7f8749128f13 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 78b7f315-b870-4184-87db-8078fd170237 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:26:14 np0005548731 nova_compute[232433]: 2025-12-06 07:26:14.584 232437 DEBUG nova.compute.manager [req-7eb65bcd-f467-4d01-aaa3-2cce1dc1c049 req-5255d6a4-2642-495b-bf89-47ee47721429 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: ea8c0005-4b7a-4697-89ae-91f4bef22e36] Received event network-changed-0e5e71bc-7098-4091-938e-6299f989917f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:26:14 np0005548731 nova_compute[232433]: 2025-12-06 07:26:14.584 232437 DEBUG nova.compute.manager [req-7eb65bcd-f467-4d01-aaa3-2cce1dc1c049 req-5255d6a4-2642-495b-bf89-47ee47721429 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: ea8c0005-4b7a-4697-89ae-91f4bef22e36] Refreshing instance network info cache due to event network-changed-0e5e71bc-7098-4091-938e-6299f989917f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  6 02:26:14 np0005548731 nova_compute[232433]: 2025-12-06 07:26:14.585 232437 DEBUG oslo_concurrency.lockutils [req-7eb65bcd-f467-4d01-aaa3-2cce1dc1c049 req-5255d6a4-2642-495b-bf89-47ee47721429 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-ea8c0005-4b7a-4697-89ae-91f4bef22e36" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 02:26:14 np0005548731 nova_compute[232433]: 2025-12-06 07:26:14.586 232437 DEBUG oslo_concurrency.lockutils [req-7eb65bcd-f467-4d01-aaa3-2cce1dc1c049 req-5255d6a4-2642-495b-bf89-47ee47721429 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-ea8c0005-4b7a-4697-89ae-91f4bef22e36" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 02:26:14 np0005548731 nova_compute[232433]: 2025-12-06 07:26:14.586 232437 DEBUG nova.network.neutron [req-7eb65bcd-f467-4d01-aaa3-2cce1dc1c049 req-5255d6a4-2642-495b-bf89-47ee47721429 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: ea8c0005-4b7a-4697-89ae-91f4bef22e36] Refreshing network info cache for port 0e5e71bc-7098-4091-938e-6299f989917f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  6 02:26:14 np0005548731 nova_compute[232433]: 2025-12-06 07:26:14.633 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:26:14 np0005548731 nova_compute[232433]: 2025-12-06 07:26:14.646 232437 DEBUG nova.compute.manager [req-32105171-b0e4-4654-8b18-026a18a02586 req-384ac494-9de0-48ea-8b83-59ae13539018 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: ea8c0005-4b7a-4697-89ae-91f4bef22e36] Received event network-vif-plugged-0e5e71bc-7098-4091-938e-6299f989917f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:26:14 np0005548731 nova_compute[232433]: 2025-12-06 07:26:14.646 232437 DEBUG oslo_concurrency.lockutils [req-32105171-b0e4-4654-8b18-026a18a02586 req-384ac494-9de0-48ea-8b83-59ae13539018 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "ea8c0005-4b7a-4697-89ae-91f4bef22e36-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:26:14 np0005548731 nova_compute[232433]: 2025-12-06 07:26:14.647 232437 DEBUG oslo_concurrency.lockutils [req-32105171-b0e4-4654-8b18-026a18a02586 req-384ac494-9de0-48ea-8b83-59ae13539018 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "ea8c0005-4b7a-4697-89ae-91f4bef22e36-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:26:14 np0005548731 nova_compute[232433]: 2025-12-06 07:26:14.647 232437 DEBUG oslo_concurrency.lockutils [req-32105171-b0e4-4654-8b18-026a18a02586 req-384ac494-9de0-48ea-8b83-59ae13539018 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "ea8c0005-4b7a-4697-89ae-91f4bef22e36-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:26:14 np0005548731 nova_compute[232433]: 2025-12-06 07:26:14.647 232437 DEBUG nova.compute.manager [req-32105171-b0e4-4654-8b18-026a18a02586 req-384ac494-9de0-48ea-8b83-59ae13539018 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: ea8c0005-4b7a-4697-89ae-91f4bef22e36] No waiting events found dispatching network-vif-plugged-0e5e71bc-7098-4091-938e-6299f989917f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 02:26:14 np0005548731 nova_compute[232433]: 2025-12-06 07:26:14.647 232437 WARNING nova.compute.manager [req-32105171-b0e4-4654-8b18-026a18a02586 req-384ac494-9de0-48ea-8b83-59ae13539018 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: ea8c0005-4b7a-4697-89ae-91f4bef22e36] Received unexpected event network-vif-plugged-0e5e71bc-7098-4091-938e-6299f989917f for instance with vm_state active and task_state resize_migrated.#033[00m
Dec  6 02:26:14 np0005548731 nova_compute[232433]: 2025-12-06 07:26:14.654 232437 DEBUG oslo_concurrency.processutils [None req-5aff57d1-d39a-4033-a630-7f8749128f13 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:26:14 np0005548731 nova_compute[232433]: 2025-12-06 07:26:14.699 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:26:15 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  6 02:26:15 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2648319042' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  6 02:26:15 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:26:15 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:26:15 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:26:15.159 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:26:15 np0005548731 nova_compute[232433]: 2025-12-06 07:26:15.219 232437 DEBUG oslo_concurrency.processutils [None req-5aff57d1-d39a-4033-a630-7f8749128f13 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.565s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:26:15 np0005548731 nova_compute[232433]: 2025-12-06 07:26:15.221 232437 DEBUG oslo_concurrency.processutils [None req-5aff57d1-d39a-4033-a630-7f8749128f13 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:26:15 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  6 02:26:15 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1647351199' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  6 02:26:15 np0005548731 nova_compute[232433]: 2025-12-06 07:26:15.628 232437 DEBUG oslo_concurrency.processutils [None req-5aff57d1-d39a-4033-a630-7f8749128f13 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.407s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:26:15 np0005548731 nova_compute[232433]: 2025-12-06 07:26:15.631 232437 DEBUG oslo_concurrency.processutils [None req-5aff57d1-d39a-4033-a630-7f8749128f13 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:26:15 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:26:15 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:26:15 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:26:15.841 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:26:16 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  6 02:26:16 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1655734803' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  6 02:26:16 np0005548731 nova_compute[232433]: 2025-12-06 07:26:16.052 232437 DEBUG oslo_concurrency.processutils [None req-5aff57d1-d39a-4033-a630-7f8749128f13 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.421s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:26:16 np0005548731 nova_compute[232433]: 2025-12-06 07:26:16.054 232437 DEBUG nova.virt.libvirt.vif [None req-5aff57d1-d39a-4033-a630-7f8749128f13 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T07:25:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerRescueTestJSON-server-1448250982',display_name='tempest-ServerRescueTestJSON-server-1448250982',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serverrescuetestjson-server-1448250982',id=107,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-06T07:25:39Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='c5036a5cbf7d44fd809d00942afd76e6',ramdisk_id='',reservation_id='r-d804mxte',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_m
in_disk='1',image_min_ram='0',owner_project_name='tempest-ServerRescueTestJSON-1682980670',owner_user_name='tempest-ServerRescueTestJSON-1682980670-project-member'},tags=<?>,task_state='rescuing',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:25:39Z,user_data=None,user_id='bf140baee82f408e8fafa81062146641',uuid=78b7f315-b870-4184-87db-8078fd170237,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "59fc2ff6-f1bb-4cda-a487-d011530f0618", "address": "fa:16:3e:ef:72:dc", "network": {"id": "3935518d-ad39-4c36-90ae-f8fe979eafc5", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-32965635-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.2", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerRescueTestJSON-32965635-network", "vif_mac": "fa:16:3e:ef:72:dc"}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "c5036a5cbf7d44fd809d00942afd76e6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap59fc2ff6-f1", "ovs_interfaceid": "59fc2ff6-f1bb-4cda-a487-d011530f0618", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec  6 02:26:16 np0005548731 nova_compute[232433]: 2025-12-06 07:26:16.055 232437 DEBUG nova.network.os_vif_util [None req-5aff57d1-d39a-4033-a630-7f8749128f13 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] Converting VIF {"id": "59fc2ff6-f1bb-4cda-a487-d011530f0618", "address": "fa:16:3e:ef:72:dc", "network": {"id": "3935518d-ad39-4c36-90ae-f8fe979eafc5", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-32965635-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.2", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerRescueTestJSON-32965635-network", "vif_mac": "fa:16:3e:ef:72:dc"}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "c5036a5cbf7d44fd809d00942afd76e6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap59fc2ff6-f1", "ovs_interfaceid": "59fc2ff6-f1bb-4cda-a487-d011530f0618", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 02:26:16 np0005548731 nova_compute[232433]: 2025-12-06 07:26:16.056 232437 DEBUG nova.network.os_vif_util [None req-5aff57d1-d39a-4033-a630-7f8749128f13 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:ef:72:dc,bridge_name='br-int',has_traffic_filtering=True,id=59fc2ff6-f1bb-4cda-a487-d011530f0618,network=Network(3935518d-ad39-4c36-90ae-f8fe979eafc5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap59fc2ff6-f1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 02:26:16 np0005548731 nova_compute[232433]: 2025-12-06 07:26:16.057 232437 DEBUG nova.objects.instance [None req-5aff57d1-d39a-4033-a630-7f8749128f13 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] Lazy-loading 'pci_devices' on Instance uuid 78b7f315-b870-4184-87db-8078fd170237 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:26:16 np0005548731 nova_compute[232433]: 2025-12-06 07:26:16.098 232437 DEBUG nova.virt.libvirt.driver [None req-5aff57d1-d39a-4033-a630-7f8749128f13 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] [instance: 78b7f315-b870-4184-87db-8078fd170237] End _get_guest_xml xml=<domain type="kvm">
Dec  6 02:26:16 np0005548731 nova_compute[232433]:  <uuid>78b7f315-b870-4184-87db-8078fd170237</uuid>
Dec  6 02:26:16 np0005548731 nova_compute[232433]:  <name>instance-0000006b</name>
Dec  6 02:26:16 np0005548731 nova_compute[232433]:  <memory>131072</memory>
Dec  6 02:26:16 np0005548731 nova_compute[232433]:  <vcpu>1</vcpu>
Dec  6 02:26:16 np0005548731 nova_compute[232433]:  <metadata>
Dec  6 02:26:16 np0005548731 nova_compute[232433]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec  6 02:26:16 np0005548731 nova_compute[232433]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec  6 02:26:16 np0005548731 nova_compute[232433]:      <nova:name>tempest-ServerRescueTestJSON-server-1448250982</nova:name>
Dec  6 02:26:16 np0005548731 nova_compute[232433]:      <nova:creationTime>2025-12-06 07:26:14</nova:creationTime>
Dec  6 02:26:16 np0005548731 nova_compute[232433]:      <nova:flavor name="m1.nano">
Dec  6 02:26:16 np0005548731 nova_compute[232433]:        <nova:memory>128</nova:memory>
Dec  6 02:26:16 np0005548731 nova_compute[232433]:        <nova:disk>1</nova:disk>
Dec  6 02:26:16 np0005548731 nova_compute[232433]:        <nova:swap>0</nova:swap>
Dec  6 02:26:16 np0005548731 nova_compute[232433]:        <nova:ephemeral>0</nova:ephemeral>
Dec  6 02:26:16 np0005548731 nova_compute[232433]:        <nova:vcpus>1</nova:vcpus>
Dec  6 02:26:16 np0005548731 nova_compute[232433]:      </nova:flavor>
Dec  6 02:26:16 np0005548731 nova_compute[232433]:      <nova:owner>
Dec  6 02:26:16 np0005548731 nova_compute[232433]:        <nova:user uuid="bf140baee82f408e8fafa81062146641">tempest-ServerRescueTestJSON-1682980670-project-member</nova:user>
Dec  6 02:26:16 np0005548731 nova_compute[232433]:        <nova:project uuid="c5036a5cbf7d44fd809d00942afd76e6">tempest-ServerRescueTestJSON-1682980670</nova:project>
Dec  6 02:26:16 np0005548731 nova_compute[232433]:      </nova:owner>
Dec  6 02:26:16 np0005548731 nova_compute[232433]:      <nova:root type="image" uuid="6efab05d-c7cf-4770-a5c3-c806a2739063"/>
Dec  6 02:26:16 np0005548731 nova_compute[232433]:      <nova:ports>
Dec  6 02:26:16 np0005548731 nova_compute[232433]:        <nova:port uuid="59fc2ff6-f1bb-4cda-a487-d011530f0618">
Dec  6 02:26:16 np0005548731 nova_compute[232433]:          <nova:ip type="fixed" address="10.100.0.2" ipVersion="4"/>
Dec  6 02:26:16 np0005548731 nova_compute[232433]:        </nova:port>
Dec  6 02:26:16 np0005548731 nova_compute[232433]:      </nova:ports>
Dec  6 02:26:16 np0005548731 nova_compute[232433]:    </nova:instance>
Dec  6 02:26:16 np0005548731 nova_compute[232433]:  </metadata>
Dec  6 02:26:16 np0005548731 nova_compute[232433]:  <sysinfo type="smbios">
Dec  6 02:26:16 np0005548731 nova_compute[232433]:    <system>
Dec  6 02:26:16 np0005548731 nova_compute[232433]:      <entry name="manufacturer">RDO</entry>
Dec  6 02:26:16 np0005548731 nova_compute[232433]:      <entry name="product">OpenStack Compute</entry>
Dec  6 02:26:16 np0005548731 nova_compute[232433]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec  6 02:26:16 np0005548731 nova_compute[232433]:      <entry name="serial">78b7f315-b870-4184-87db-8078fd170237</entry>
Dec  6 02:26:16 np0005548731 nova_compute[232433]:      <entry name="uuid">78b7f315-b870-4184-87db-8078fd170237</entry>
Dec  6 02:26:16 np0005548731 nova_compute[232433]:      <entry name="family">Virtual Machine</entry>
Dec  6 02:26:16 np0005548731 nova_compute[232433]:    </system>
Dec  6 02:26:16 np0005548731 nova_compute[232433]:  </sysinfo>
Dec  6 02:26:16 np0005548731 nova_compute[232433]:  <os>
Dec  6 02:26:16 np0005548731 nova_compute[232433]:    <type arch="x86_64" machine="q35">hvm</type>
Dec  6 02:26:16 np0005548731 nova_compute[232433]:    <smbios mode="sysinfo"/>
Dec  6 02:26:16 np0005548731 nova_compute[232433]:  </os>
Dec  6 02:26:16 np0005548731 nova_compute[232433]:  <features>
Dec  6 02:26:16 np0005548731 nova_compute[232433]:    <acpi/>
Dec  6 02:26:16 np0005548731 nova_compute[232433]:    <apic/>
Dec  6 02:26:16 np0005548731 nova_compute[232433]:    <vmcoreinfo/>
Dec  6 02:26:16 np0005548731 nova_compute[232433]:  </features>
Dec  6 02:26:16 np0005548731 nova_compute[232433]:  <clock offset="utc">
Dec  6 02:26:16 np0005548731 nova_compute[232433]:    <timer name="pit" tickpolicy="delay"/>
Dec  6 02:26:16 np0005548731 nova_compute[232433]:    <timer name="rtc" tickpolicy="catchup"/>
Dec  6 02:26:16 np0005548731 nova_compute[232433]:    <timer name="hpet" present="no"/>
Dec  6 02:26:16 np0005548731 nova_compute[232433]:  </clock>
Dec  6 02:26:16 np0005548731 nova_compute[232433]:  <cpu mode="custom" match="exact">
Dec  6 02:26:16 np0005548731 nova_compute[232433]:    <model>Nehalem</model>
Dec  6 02:26:16 np0005548731 nova_compute[232433]:    <topology sockets="1" cores="1" threads="1"/>
Dec  6 02:26:16 np0005548731 nova_compute[232433]:  </cpu>
Dec  6 02:26:16 np0005548731 nova_compute[232433]:  <devices>
Dec  6 02:26:16 np0005548731 nova_compute[232433]:    <disk type="network" device="disk">
Dec  6 02:26:16 np0005548731 nova_compute[232433]:      <driver type="raw" cache="none"/>
Dec  6 02:26:16 np0005548731 nova_compute[232433]:      <source protocol="rbd" name="vms/78b7f315-b870-4184-87db-8078fd170237_disk.rescue">
Dec  6 02:26:16 np0005548731 nova_compute[232433]:        <host name="192.168.122.100" port="6789"/>
Dec  6 02:26:16 np0005548731 nova_compute[232433]:        <host name="192.168.122.102" port="6789"/>
Dec  6 02:26:16 np0005548731 nova_compute[232433]:        <host name="192.168.122.101" port="6789"/>
Dec  6 02:26:16 np0005548731 nova_compute[232433]:      </source>
Dec  6 02:26:16 np0005548731 nova_compute[232433]:      <auth username="openstack">
Dec  6 02:26:16 np0005548731 nova_compute[232433]:        <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec  6 02:26:16 np0005548731 nova_compute[232433]:      </auth>
Dec  6 02:26:16 np0005548731 nova_compute[232433]:      <target dev="vda" bus="virtio"/>
Dec  6 02:26:16 np0005548731 nova_compute[232433]:    </disk>
Dec  6 02:26:16 np0005548731 nova_compute[232433]:    <disk type="network" device="disk">
Dec  6 02:26:16 np0005548731 nova_compute[232433]:      <driver type="raw" cache="none"/>
Dec  6 02:26:16 np0005548731 nova_compute[232433]:      <source protocol="rbd" name="vms/78b7f315-b870-4184-87db-8078fd170237_disk">
Dec  6 02:26:16 np0005548731 nova_compute[232433]:        <host name="192.168.122.100" port="6789"/>
Dec  6 02:26:16 np0005548731 nova_compute[232433]:        <host name="192.168.122.102" port="6789"/>
Dec  6 02:26:16 np0005548731 nova_compute[232433]:        <host name="192.168.122.101" port="6789"/>
Dec  6 02:26:16 np0005548731 nova_compute[232433]:      </source>
Dec  6 02:26:16 np0005548731 nova_compute[232433]:      <auth username="openstack">
Dec  6 02:26:16 np0005548731 nova_compute[232433]:        <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec  6 02:26:16 np0005548731 nova_compute[232433]:      </auth>
Dec  6 02:26:16 np0005548731 nova_compute[232433]:      <target dev="vdb" bus="virtio"/>
Dec  6 02:26:16 np0005548731 nova_compute[232433]:    </disk>
Dec  6 02:26:16 np0005548731 nova_compute[232433]:    <disk type="network" device="cdrom">
Dec  6 02:26:16 np0005548731 nova_compute[232433]:      <driver type="raw" cache="none"/>
Dec  6 02:26:16 np0005548731 nova_compute[232433]:      <source protocol="rbd" name="vms/78b7f315-b870-4184-87db-8078fd170237_disk.config.rescue">
Dec  6 02:26:16 np0005548731 nova_compute[232433]:        <host name="192.168.122.100" port="6789"/>
Dec  6 02:26:16 np0005548731 nova_compute[232433]:        <host name="192.168.122.102" port="6789"/>
Dec  6 02:26:16 np0005548731 nova_compute[232433]:        <host name="192.168.122.101" port="6789"/>
Dec  6 02:26:16 np0005548731 nova_compute[232433]:      </source>
Dec  6 02:26:16 np0005548731 nova_compute[232433]:      <auth username="openstack">
Dec  6 02:26:16 np0005548731 nova_compute[232433]:        <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec  6 02:26:16 np0005548731 nova_compute[232433]:      </auth>
Dec  6 02:26:16 np0005548731 nova_compute[232433]:      <target dev="sda" bus="sata"/>
Dec  6 02:26:16 np0005548731 nova_compute[232433]:    </disk>
Dec  6 02:26:16 np0005548731 nova_compute[232433]:    <interface type="ethernet">
Dec  6 02:26:16 np0005548731 nova_compute[232433]:      <mac address="fa:16:3e:ef:72:dc"/>
Dec  6 02:26:16 np0005548731 nova_compute[232433]:      <model type="virtio"/>
Dec  6 02:26:16 np0005548731 nova_compute[232433]:      <driver name="vhost" rx_queue_size="512"/>
Dec  6 02:26:16 np0005548731 nova_compute[232433]:      <mtu size="1442"/>
Dec  6 02:26:16 np0005548731 nova_compute[232433]:      <target dev="tap59fc2ff6-f1"/>
Dec  6 02:26:16 np0005548731 nova_compute[232433]:    </interface>
Dec  6 02:26:16 np0005548731 nova_compute[232433]:    <serial type="pty">
Dec  6 02:26:16 np0005548731 nova_compute[232433]:      <log file="/var/lib/nova/instances/78b7f315-b870-4184-87db-8078fd170237/console.log" append="off"/>
Dec  6 02:26:16 np0005548731 nova_compute[232433]:    </serial>
Dec  6 02:26:16 np0005548731 nova_compute[232433]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec  6 02:26:16 np0005548731 nova_compute[232433]:    <video>
Dec  6 02:26:16 np0005548731 nova_compute[232433]:      <model type="virtio"/>
Dec  6 02:26:16 np0005548731 nova_compute[232433]:    </video>
Dec  6 02:26:16 np0005548731 nova_compute[232433]:    <input type="tablet" bus="usb"/>
Dec  6 02:26:16 np0005548731 nova_compute[232433]:    <rng model="virtio">
Dec  6 02:26:16 np0005548731 nova_compute[232433]:      <backend model="random">/dev/urandom</backend>
Dec  6 02:26:16 np0005548731 nova_compute[232433]:    </rng>
Dec  6 02:26:16 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root"/>
Dec  6 02:26:16 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:26:16 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:26:16 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:26:16 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:26:16 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:26:16 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:26:16 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:26:16 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:26:16 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:26:16 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:26:16 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:26:16 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:26:16 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:26:16 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:26:16 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:26:16 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:26:16 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:26:16 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:26:16 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:26:16 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:26:16 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:26:16 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:26:16 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:26:16 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:26:16 np0005548731 nova_compute[232433]:    <controller type="usb" index="0"/>
Dec  6 02:26:16 np0005548731 nova_compute[232433]:    <memballoon model="virtio">
Dec  6 02:26:16 np0005548731 nova_compute[232433]:      <stats period="10"/>
Dec  6 02:26:16 np0005548731 nova_compute[232433]:    </memballoon>
Dec  6 02:26:16 np0005548731 nova_compute[232433]:  </devices>
Dec  6 02:26:16 np0005548731 nova_compute[232433]: </domain>
Dec  6 02:26:16 np0005548731 nova_compute[232433]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec  6 02:26:16 np0005548731 nova_compute[232433]: 2025-12-06 07:26:16.107 232437 INFO nova.virt.libvirt.driver [-] [instance: 78b7f315-b870-4184-87db-8078fd170237] Instance destroyed successfully.#033[00m
Dec  6 02:26:16 np0005548731 nova_compute[232433]: 2025-12-06 07:26:16.170 232437 DEBUG nova.virt.libvirt.driver [None req-5aff57d1-d39a-4033-a630-7f8749128f13 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  6 02:26:16 np0005548731 nova_compute[232433]: 2025-12-06 07:26:16.171 232437 DEBUG nova.virt.libvirt.driver [None req-5aff57d1-d39a-4033-a630-7f8749128f13 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  6 02:26:16 np0005548731 nova_compute[232433]: 2025-12-06 07:26:16.171 232437 DEBUG nova.virt.libvirt.driver [None req-5aff57d1-d39a-4033-a630-7f8749128f13 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  6 02:26:16 np0005548731 nova_compute[232433]: 2025-12-06 07:26:16.171 232437 DEBUG nova.virt.libvirt.driver [None req-5aff57d1-d39a-4033-a630-7f8749128f13 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] No VIF found with MAC fa:16:3e:ef:72:dc, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec  6 02:26:16 np0005548731 nova_compute[232433]: 2025-12-06 07:26:16.171 232437 INFO nova.virt.libvirt.driver [None req-5aff57d1-d39a-4033-a630-7f8749128f13 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] [instance: 78b7f315-b870-4184-87db-8078fd170237] Using config drive#033[00m
Dec  6 02:26:16 np0005548731 nova_compute[232433]: 2025-12-06 07:26:16.196 232437 DEBUG nova.storage.rbd_utils [None req-5aff57d1-d39a-4033-a630-7f8749128f13 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] rbd image 78b7f315-b870-4184-87db-8078fd170237_disk.config.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:26:16 np0005548731 nova_compute[232433]: 2025-12-06 07:26:16.249 232437 DEBUG nova.objects.instance [None req-5aff57d1-d39a-4033-a630-7f8749128f13 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] Lazy-loading 'ec2_ids' on Instance uuid 78b7f315-b870-4184-87db-8078fd170237 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:26:16 np0005548731 nova_compute[232433]: 2025-12-06 07:26:16.298 232437 DEBUG nova.network.neutron [req-7eb65bcd-f467-4d01-aaa3-2cce1dc1c049 req-5255d6a4-2642-495b-bf89-47ee47721429 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: ea8c0005-4b7a-4697-89ae-91f4bef22e36] Updated VIF entry in instance network info cache for port 0e5e71bc-7098-4091-938e-6299f989917f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  6 02:26:16 np0005548731 nova_compute[232433]: 2025-12-06 07:26:16.299 232437 DEBUG nova.network.neutron [req-7eb65bcd-f467-4d01-aaa3-2cce1dc1c049 req-5255d6a4-2642-495b-bf89-47ee47721429 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: ea8c0005-4b7a-4697-89ae-91f4bef22e36] Updating instance_info_cache with network_info: [{"id": "0e5e71bc-7098-4091-938e-6299f989917f", "address": "fa:16:3e:ec:96:d5", "network": {"id": "7c014e4e-a182-4f60-8285-20525bc99e5a", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-602234112-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "88f5b34244614321a9b6e902eaba0ece", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0e5e71bc-70", "ovs_interfaceid": "0e5e71bc-7098-4091-938e-6299f989917f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 02:26:16 np0005548731 nova_compute[232433]: 2025-12-06 07:26:16.313 232437 DEBUG nova.objects.instance [None req-5aff57d1-d39a-4033-a630-7f8749128f13 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] Lazy-loading 'keypairs' on Instance uuid 78b7f315-b870-4184-87db-8078fd170237 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:26:16 np0005548731 nova_compute[232433]: 2025-12-06 07:26:16.380 232437 DEBUG oslo_concurrency.lockutils [req-7eb65bcd-f467-4d01-aaa3-2cce1dc1c049 req-5255d6a4-2642-495b-bf89-47ee47721429 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-ea8c0005-4b7a-4697-89ae-91f4bef22e36" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 02:26:17 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:26:17 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:26:17 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:26:17.161 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:26:17 np0005548731 nova_compute[232433]: 2025-12-06 07:26:17.334 232437 INFO nova.virt.libvirt.driver [None req-5aff57d1-d39a-4033-a630-7f8749128f13 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] [instance: 78b7f315-b870-4184-87db-8078fd170237] Creating config drive at /var/lib/nova/instances/78b7f315-b870-4184-87db-8078fd170237/disk.config.rescue#033[00m
Dec  6 02:26:17 np0005548731 nova_compute[232433]: 2025-12-06 07:26:17.339 232437 DEBUG oslo_concurrency.processutils [None req-5aff57d1-d39a-4033-a630-7f8749128f13 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/78b7f315-b870-4184-87db-8078fd170237/disk.config.rescue -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpwaznj10q execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:26:17 np0005548731 nova_compute[232433]: 2025-12-06 07:26:17.371 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:26:17 np0005548731 nova_compute[232433]: 2025-12-06 07:26:17.471 232437 DEBUG oslo_concurrency.processutils [None req-5aff57d1-d39a-4033-a630-7f8749128f13 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/78b7f315-b870-4184-87db-8078fd170237/disk.config.rescue -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpwaznj10q" returned: 0 in 0.132s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:26:17 np0005548731 nova_compute[232433]: 2025-12-06 07:26:17.497 232437 DEBUG nova.storage.rbd_utils [None req-5aff57d1-d39a-4033-a630-7f8749128f13 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] rbd image 78b7f315-b870-4184-87db-8078fd170237_disk.config.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:26:17 np0005548731 nova_compute[232433]: 2025-12-06 07:26:17.501 232437 DEBUG oslo_concurrency.processutils [None req-5aff57d1-d39a-4033-a630-7f8749128f13 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/78b7f315-b870-4184-87db-8078fd170237/disk.config.rescue 78b7f315-b870-4184-87db-8078fd170237_disk.config.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:26:17 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:26:17 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:26:17 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:26:17.844 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:26:17 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e264 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:26:19 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:26:19 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:26:19 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:26:19.165 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:26:19 np0005548731 nova_compute[232433]: 2025-12-06 07:26:19.635 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:26:19 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:26:19 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:26:19 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:26:19.847 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:26:20 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e265 e265: 3 total, 3 up, 3 in
Dec  6 02:26:21 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:26:21 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:26:21 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:26:21.168 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:26:21 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:26:21 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:26:21 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:26:21.849 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:26:22 np0005548731 nova_compute[232433]: 2025-12-06 07:26:22.407 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:26:22 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:26:23 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:26:23 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:26:23 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:26:23.170 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:26:23 np0005548731 nova_compute[232433]: 2025-12-06 07:26:23.432 232437 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765005968.4300728, 78b7f315-b870-4184-87db-8078fd170237 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 02:26:23 np0005548731 nova_compute[232433]: 2025-12-06 07:26:23.433 232437 INFO nova.compute.manager [-] [instance: 78b7f315-b870-4184-87db-8078fd170237] VM Stopped (Lifecycle Event)#033[00m
Dec  6 02:26:23 np0005548731 nova_compute[232433]: 2025-12-06 07:26:23.468 232437 DEBUG nova.compute.manager [None req-01d9c428-c71c-497f-80c9-8b1fa417e64b - - - - - -] [instance: 78b7f315-b870-4184-87db-8078fd170237] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:26:23 np0005548731 nova_compute[232433]: 2025-12-06 07:26:23.472 232437 DEBUG nova.compute.manager [None req-01d9c428-c71c-497f-80c9-8b1fa417e64b - - - - - -] [instance: 78b7f315-b870-4184-87db-8078fd170237] Synchronizing instance power state after lifecycle event "Stopped"; current vm_state: active, current task_state: rescuing, current DB power_state: 1, VM power_state: 4 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  6 02:26:23 np0005548731 nova_compute[232433]: 2025-12-06 07:26:23.565 232437 INFO nova.compute.manager [None req-01d9c428-c71c-497f-80c9-8b1fa417e64b - - - - - -] [instance: 78b7f315-b870-4184-87db-8078fd170237] During sync_power_state the instance has a pending task (rescuing). Skip.#033[00m
Dec  6 02:26:23 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:26:23 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 02:26:23 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:26:23.852 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 02:26:24 np0005548731 nova_compute[232433]: 2025-12-06 07:26:24.624 232437 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765005969.6223533, ea8c0005-4b7a-4697-89ae-91f4bef22e36 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 02:26:24 np0005548731 nova_compute[232433]: 2025-12-06 07:26:24.624 232437 INFO nova.compute.manager [-] [instance: ea8c0005-4b7a-4697-89ae-91f4bef22e36] VM Stopped (Lifecycle Event)#033[00m
Dec  6 02:26:24 np0005548731 nova_compute[232433]: 2025-12-06 07:26:24.684 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:26:24 np0005548731 nova_compute[232433]: 2025-12-06 07:26:24.932 232437 DEBUG nova.compute.manager [None req-dddad382-72f4-4530-99df-b46e6a0d956c - - - - - -] [instance: ea8c0005-4b7a-4697-89ae-91f4bef22e36] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:26:24 np0005548731 nova_compute[232433]: 2025-12-06 07:26:24.935 232437 DEBUG nova.compute.manager [None req-dddad382-72f4-4530-99df-b46e6a0d956c - - - - - -] [instance: ea8c0005-4b7a-4697-89ae-91f4bef22e36] Synchronizing instance power state after lifecycle event "Stopped"; current vm_state: active, current task_state: resize_finish, current DB power_state: 1, VM power_state: 4 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  6 02:26:24 np0005548731 nova_compute[232433]: 2025-12-06 07:26:24.993 232437 INFO nova.compute.manager [None req-dddad382-72f4-4530-99df-b46e6a0d956c - - - - - -] [instance: ea8c0005-4b7a-4697-89ae-91f4bef22e36] During the sync_power process the instance has moved from host compute-0.ctlplane.example.com to host compute-2.ctlplane.example.com#033[00m
Dec  6 02:26:25 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:26:25 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:26:25 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:26:25.173 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:26:25 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:26:25 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:26:25 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:26:25.855 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:26:26 np0005548731 nova_compute[232433]: 2025-12-06 07:26:26.255 232437 DEBUG oslo_concurrency.processutils [None req-5aff57d1-d39a-4033-a630-7f8749128f13 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/78b7f315-b870-4184-87db-8078fd170237/disk.config.rescue 78b7f315-b870-4184-87db-8078fd170237_disk.config.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 8.754s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:26:26 np0005548731 nova_compute[232433]: 2025-12-06 07:26:26.255 232437 INFO nova.virt.libvirt.driver [None req-5aff57d1-d39a-4033-a630-7f8749128f13 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] [instance: 78b7f315-b870-4184-87db-8078fd170237] Deleting local config drive /var/lib/nova/instances/78b7f315-b870-4184-87db-8078fd170237/disk.config.rescue because it was imported into RBD.#033[00m
Dec  6 02:26:26 np0005548731 kernel: tap59fc2ff6-f1: entered promiscuous mode
Dec  6 02:26:26 np0005548731 NetworkManager[49182]: <info>  [1765005986.3078] manager: (tap59fc2ff6-f1): new Tun device (/org/freedesktop/NetworkManager/Devices/207)
Dec  6 02:26:26 np0005548731 ovn_controller[133927]: 2025-12-06T07:26:26Z|00416|binding|INFO|Claiming lport 59fc2ff6-f1bb-4cda-a487-d011530f0618 for this chassis.
Dec  6 02:26:26 np0005548731 ovn_controller[133927]: 2025-12-06T07:26:26Z|00417|binding|INFO|59fc2ff6-f1bb-4cda-a487-d011530f0618: Claiming fa:16:3e:ef:72:dc 10.100.0.2
Dec  6 02:26:26 np0005548731 nova_compute[232433]: 2025-12-06 07:26:26.309 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:26:26 np0005548731 ovn_controller[133927]: 2025-12-06T07:26:26Z|00418|binding|INFO|Setting lport 59fc2ff6-f1bb-4cda-a487-d011530f0618 ovn-installed in OVS
Dec  6 02:26:26 np0005548731 nova_compute[232433]: 2025-12-06 07:26:26.326 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:26:26 np0005548731 systemd-udevd[275951]: Network interface NamePolicy= disabled on kernel command line.
Dec  6 02:26:26 np0005548731 systemd-machined[195355]: New machine qemu-46-instance-0000006b.
Dec  6 02:26:26 np0005548731 NetworkManager[49182]: <info>  [1765005986.3462] device (tap59fc2ff6-f1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec  6 02:26:26 np0005548731 NetworkManager[49182]: <info>  [1765005986.3469] device (tap59fc2ff6-f1): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec  6 02:26:26 np0005548731 systemd[1]: Started Virtual Machine qemu-46-instance-0000006b.
Dec  6 02:26:26 np0005548731 ovn_controller[133927]: 2025-12-06T07:26:26Z|00419|binding|INFO|Setting lport 59fc2ff6-f1bb-4cda-a487-d011530f0618 up in Southbound
Dec  6 02:26:26 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:26:26.409 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ef:72:dc 10.100.0.2'], port_security=['fa:16:3e:ef:72:dc 10.100.0.2'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': '78b7f315-b870-4184-87db-8078fd170237', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3935518d-ad39-4c36-90ae-f8fe979eafc5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c5036a5cbf7d44fd809d00942afd76e6', 'neutron:revision_number': '5', 'neutron:security_group_ids': 'acd03568-07d0-4090-ab6d-0f9189f7ac4a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=cdc71006-3db8-48ee-b5c0-325cb1eaa593, chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], logical_port=59fc2ff6-f1bb-4cda-a487-d011530f0618) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 02:26:26 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:26:26.411 143965 INFO neutron.agent.ovn.metadata.agent [-] Port 59fc2ff6-f1bb-4cda-a487-d011530f0618 in datapath 3935518d-ad39-4c36-90ae-f8fe979eafc5 bound to our chassis#033[00m
Dec  6 02:26:26 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:26:26.412 143965 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 3935518d-ad39-4c36-90ae-f8fe979eafc5 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m
Dec  6 02:26:26 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:26:26.413 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[52199da1-1e77-43f6-a707-7f8d7705d47b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:26:27 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:26:27 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:26:27 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:26:27.174 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:26:27 np0005548731 nova_compute[232433]: 2025-12-06 07:26:27.410 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:26:27 np0005548731 nova_compute[232433]: 2025-12-06 07:26:27.843 232437 DEBUG nova.virt.driver [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Emitting event <LifecycleEvent: 1765005987.8427973, 78b7f315-b870-4184-87db-8078fd170237 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 02:26:27 np0005548731 nova_compute[232433]: 2025-12-06 07:26:27.843 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 78b7f315-b870-4184-87db-8078fd170237] VM Resumed (Lifecycle Event)#033[00m
Dec  6 02:26:27 np0005548731 nova_compute[232433]: 2025-12-06 07:26:27.847 232437 DEBUG nova.compute.manager [None req-5aff57d1-d39a-4033-a630-7f8749128f13 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] [instance: 78b7f315-b870-4184-87db-8078fd170237] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:26:27 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:26:27 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:26:27 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:26:27.857 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:26:27 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:26:27 np0005548731 nova_compute[232433]: 2025-12-06 07:26:27.931 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 78b7f315-b870-4184-87db-8078fd170237] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:26:27 np0005548731 nova_compute[232433]: 2025-12-06 07:26:27.933 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 78b7f315-b870-4184-87db-8078fd170237] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rescuing, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  6 02:26:27 np0005548731 nova_compute[232433]: 2025-12-06 07:26:27.991 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 78b7f315-b870-4184-87db-8078fd170237] During sync_power_state the instance has a pending task (rescuing). Skip.#033[00m
Dec  6 02:26:27 np0005548731 nova_compute[232433]: 2025-12-06 07:26:27.992 232437 DEBUG nova.virt.driver [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Emitting event <LifecycleEvent: 1765005987.845461, 78b7f315-b870-4184-87db-8078fd170237 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 02:26:27 np0005548731 nova_compute[232433]: 2025-12-06 07:26:27.992 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 78b7f315-b870-4184-87db-8078fd170237] VM Started (Lifecycle Event)#033[00m
Dec  6 02:26:28 np0005548731 nova_compute[232433]: 2025-12-06 07:26:28.025 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 78b7f315-b870-4184-87db-8078fd170237] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:26:28 np0005548731 nova_compute[232433]: 2025-12-06 07:26:28.028 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 78b7f315-b870-4184-87db-8078fd170237] Synchronizing instance power state after lifecycle event "Started"; current vm_state: rescued, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  6 02:26:28 np0005548731 nova_compute[232433]: 2025-12-06 07:26:28.452 232437 DEBUG nova.compute.manager [req-ae7a1c9a-bb67-40f6-ba95-9bfc5071ecbc req-bb76d9b2-f732-4e07-9cfb-4e16e65a7093 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 78b7f315-b870-4184-87db-8078fd170237] Received event network-vif-plugged-59fc2ff6-f1bb-4cda-a487-d011530f0618 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:26:28 np0005548731 nova_compute[232433]: 2025-12-06 07:26:28.453 232437 DEBUG oslo_concurrency.lockutils [req-ae7a1c9a-bb67-40f6-ba95-9bfc5071ecbc req-bb76d9b2-f732-4e07-9cfb-4e16e65a7093 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "78b7f315-b870-4184-87db-8078fd170237-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:26:28 np0005548731 nova_compute[232433]: 2025-12-06 07:26:28.453 232437 DEBUG oslo_concurrency.lockutils [req-ae7a1c9a-bb67-40f6-ba95-9bfc5071ecbc req-bb76d9b2-f732-4e07-9cfb-4e16e65a7093 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "78b7f315-b870-4184-87db-8078fd170237-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:26:28 np0005548731 nova_compute[232433]: 2025-12-06 07:26:28.454 232437 DEBUG oslo_concurrency.lockutils [req-ae7a1c9a-bb67-40f6-ba95-9bfc5071ecbc req-bb76d9b2-f732-4e07-9cfb-4e16e65a7093 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "78b7f315-b870-4184-87db-8078fd170237-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:26:28 np0005548731 nova_compute[232433]: 2025-12-06 07:26:28.454 232437 DEBUG nova.compute.manager [req-ae7a1c9a-bb67-40f6-ba95-9bfc5071ecbc req-bb76d9b2-f732-4e07-9cfb-4e16e65a7093 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 78b7f315-b870-4184-87db-8078fd170237] No waiting events found dispatching network-vif-plugged-59fc2ff6-f1bb-4cda-a487-d011530f0618 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 02:26:28 np0005548731 nova_compute[232433]: 2025-12-06 07:26:28.454 232437 WARNING nova.compute.manager [req-ae7a1c9a-bb67-40f6-ba95-9bfc5071ecbc req-bb76d9b2-f732-4e07-9cfb-4e16e65a7093 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 78b7f315-b870-4184-87db-8078fd170237] Received unexpected event network-vif-plugged-59fc2ff6-f1bb-4cda-a487-d011530f0618 for instance with vm_state rescued and task_state None.#033[00m
Dec  6 02:26:29 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:26:29 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:26:29 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:26:29.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:26:29 np0005548731 nova_compute[232433]: 2025-12-06 07:26:29.687 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:26:29 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:26:29 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 02:26:29 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:26:29.859 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 02:26:30 np0005548731 nova_compute[232433]: 2025-12-06 07:26:30.778 232437 DEBUG nova.compute.manager [req-c312547b-be49-4036-a41e-8baf9a94c147 req-974910e0-b19f-421c-a7ac-d1c5049fba46 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 78b7f315-b870-4184-87db-8078fd170237] Received event network-vif-plugged-59fc2ff6-f1bb-4cda-a487-d011530f0618 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:26:30 np0005548731 nova_compute[232433]: 2025-12-06 07:26:30.779 232437 DEBUG oslo_concurrency.lockutils [req-c312547b-be49-4036-a41e-8baf9a94c147 req-974910e0-b19f-421c-a7ac-d1c5049fba46 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "78b7f315-b870-4184-87db-8078fd170237-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:26:30 np0005548731 nova_compute[232433]: 2025-12-06 07:26:30.779 232437 DEBUG oslo_concurrency.lockutils [req-c312547b-be49-4036-a41e-8baf9a94c147 req-974910e0-b19f-421c-a7ac-d1c5049fba46 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "78b7f315-b870-4184-87db-8078fd170237-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:26:30 np0005548731 nova_compute[232433]: 2025-12-06 07:26:30.779 232437 DEBUG oslo_concurrency.lockutils [req-c312547b-be49-4036-a41e-8baf9a94c147 req-974910e0-b19f-421c-a7ac-d1c5049fba46 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "78b7f315-b870-4184-87db-8078fd170237-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:26:30 np0005548731 nova_compute[232433]: 2025-12-06 07:26:30.780 232437 DEBUG nova.compute.manager [req-c312547b-be49-4036-a41e-8baf9a94c147 req-974910e0-b19f-421c-a7ac-d1c5049fba46 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 78b7f315-b870-4184-87db-8078fd170237] No waiting events found dispatching network-vif-plugged-59fc2ff6-f1bb-4cda-a487-d011530f0618 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 02:26:30 np0005548731 nova_compute[232433]: 2025-12-06 07:26:30.780 232437 WARNING nova.compute.manager [req-c312547b-be49-4036-a41e-8baf9a94c147 req-974910e0-b19f-421c-a7ac-d1c5049fba46 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 78b7f315-b870-4184-87db-8078fd170237] Received unexpected event network-vif-plugged-59fc2ff6-f1bb-4cda-a487-d011530f0618 for instance with vm_state rescued and task_state unrescuing.#033[00m
Dec  6 02:26:30 np0005548731 nova_compute[232433]: 2025-12-06 07:26:30.901 232437 DEBUG nova.compute.manager [req-62b88ed4-dec2-4bfd-8865-d2dbf8149fc0 req-ac337f8d-1cb5-454b-a554-0489855c8f56 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: ea8c0005-4b7a-4697-89ae-91f4bef22e36] Received event network-vif-plugged-0e5e71bc-7098-4091-938e-6299f989917f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:26:30 np0005548731 nova_compute[232433]: 2025-12-06 07:26:30.902 232437 DEBUG oslo_concurrency.lockutils [req-62b88ed4-dec2-4bfd-8865-d2dbf8149fc0 req-ac337f8d-1cb5-454b-a554-0489855c8f56 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "ea8c0005-4b7a-4697-89ae-91f4bef22e36-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:26:30 np0005548731 nova_compute[232433]: 2025-12-06 07:26:30.902 232437 DEBUG oslo_concurrency.lockutils [req-62b88ed4-dec2-4bfd-8865-d2dbf8149fc0 req-ac337f8d-1cb5-454b-a554-0489855c8f56 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "ea8c0005-4b7a-4697-89ae-91f4bef22e36-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:26:30 np0005548731 nova_compute[232433]: 2025-12-06 07:26:30.902 232437 DEBUG oslo_concurrency.lockutils [req-62b88ed4-dec2-4bfd-8865-d2dbf8149fc0 req-ac337f8d-1cb5-454b-a554-0489855c8f56 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "ea8c0005-4b7a-4697-89ae-91f4bef22e36-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:26:30 np0005548731 nova_compute[232433]: 2025-12-06 07:26:30.902 232437 DEBUG nova.compute.manager [req-62b88ed4-dec2-4bfd-8865-d2dbf8149fc0 req-ac337f8d-1cb5-454b-a554-0489855c8f56 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: ea8c0005-4b7a-4697-89ae-91f4bef22e36] No waiting events found dispatching network-vif-plugged-0e5e71bc-7098-4091-938e-6299f989917f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 02:26:30 np0005548731 nova_compute[232433]: 2025-12-06 07:26:30.902 232437 WARNING nova.compute.manager [req-62b88ed4-dec2-4bfd-8865-d2dbf8149fc0 req-ac337f8d-1cb5-454b-a554-0489855c8f56 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: ea8c0005-4b7a-4697-89ae-91f4bef22e36] Received unexpected event network-vif-plugged-0e5e71bc-7098-4091-938e-6299f989917f for instance with vm_state resized and task_state None.#033[00m
Dec  6 02:26:30 np0005548731 nova_compute[232433]: 2025-12-06 07:26:30.902 232437 DEBUG nova.compute.manager [req-62b88ed4-dec2-4bfd-8865-d2dbf8149fc0 req-ac337f8d-1cb5-454b-a554-0489855c8f56 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: ea8c0005-4b7a-4697-89ae-91f4bef22e36] Received event network-vif-plugged-0e5e71bc-7098-4091-938e-6299f989917f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:26:30 np0005548731 nova_compute[232433]: 2025-12-06 07:26:30.903 232437 DEBUG oslo_concurrency.lockutils [req-62b88ed4-dec2-4bfd-8865-d2dbf8149fc0 req-ac337f8d-1cb5-454b-a554-0489855c8f56 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "ea8c0005-4b7a-4697-89ae-91f4bef22e36-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:26:30 np0005548731 nova_compute[232433]: 2025-12-06 07:26:30.903 232437 DEBUG oslo_concurrency.lockutils [req-62b88ed4-dec2-4bfd-8865-d2dbf8149fc0 req-ac337f8d-1cb5-454b-a554-0489855c8f56 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "ea8c0005-4b7a-4697-89ae-91f4bef22e36-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:26:30 np0005548731 nova_compute[232433]: 2025-12-06 07:26:30.903 232437 DEBUG oslo_concurrency.lockutils [req-62b88ed4-dec2-4bfd-8865-d2dbf8149fc0 req-ac337f8d-1cb5-454b-a554-0489855c8f56 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "ea8c0005-4b7a-4697-89ae-91f4bef22e36-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:26:30 np0005548731 nova_compute[232433]: 2025-12-06 07:26:30.903 232437 DEBUG nova.compute.manager [req-62b88ed4-dec2-4bfd-8865-d2dbf8149fc0 req-ac337f8d-1cb5-454b-a554-0489855c8f56 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: ea8c0005-4b7a-4697-89ae-91f4bef22e36] No waiting events found dispatching network-vif-plugged-0e5e71bc-7098-4091-938e-6299f989917f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 02:26:30 np0005548731 nova_compute[232433]: 2025-12-06 07:26:30.903 232437 WARNING nova.compute.manager [req-62b88ed4-dec2-4bfd-8865-d2dbf8149fc0 req-ac337f8d-1cb5-454b-a554-0489855c8f56 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: ea8c0005-4b7a-4697-89ae-91f4bef22e36] Received unexpected event network-vif-plugged-0e5e71bc-7098-4091-938e-6299f989917f for instance with vm_state resized and task_state None.#033[00m
Dec  6 02:26:31 np0005548731 nova_compute[232433]: 2025-12-06 07:26:31.011 232437 INFO nova.compute.manager [None req-de8205ed-14bd-4e42-9d8a-86b5a0f39b82 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] [instance: 78b7f315-b870-4184-87db-8078fd170237] Unrescuing#033[00m
Dec  6 02:26:31 np0005548731 nova_compute[232433]: 2025-12-06 07:26:31.012 232437 DEBUG oslo_concurrency.lockutils [None req-de8205ed-14bd-4e42-9d8a-86b5a0f39b82 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] Acquiring lock "refresh_cache-78b7f315-b870-4184-87db-8078fd170237" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 02:26:31 np0005548731 nova_compute[232433]: 2025-12-06 07:26:31.012 232437 DEBUG oslo_concurrency.lockutils [None req-de8205ed-14bd-4e42-9d8a-86b5a0f39b82 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] Acquired lock "refresh_cache-78b7f315-b870-4184-87db-8078fd170237" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 02:26:31 np0005548731 nova_compute[232433]: 2025-12-06 07:26:31.012 232437 DEBUG nova.network.neutron [None req-de8205ed-14bd-4e42-9d8a-86b5a0f39b82 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] [instance: 78b7f315-b870-4184-87db-8078fd170237] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec  6 02:26:31 np0005548731 nova_compute[232433]: 2025-12-06 07:26:31.164 232437 DEBUG oslo_concurrency.lockutils [None req-d609dd44-fa2e-4e1f-a353-6dbc1bb909d3 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Acquiring lock "ea8c0005-4b7a-4697-89ae-91f4bef22e36" by "nova.compute.manager.ComputeManager.confirm_resize.<locals>.do_confirm_resize" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:26:31 np0005548731 nova_compute[232433]: 2025-12-06 07:26:31.165 232437 DEBUG oslo_concurrency.lockutils [None req-d609dd44-fa2e-4e1f-a353-6dbc1bb909d3 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Lock "ea8c0005-4b7a-4697-89ae-91f4bef22e36" acquired by "nova.compute.manager.ComputeManager.confirm_resize.<locals>.do_confirm_resize" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:26:31 np0005548731 nova_compute[232433]: 2025-12-06 07:26:31.165 232437 DEBUG nova.compute.manager [None req-d609dd44-fa2e-4e1f-a353-6dbc1bb909d3 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] [instance: ea8c0005-4b7a-4697-89ae-91f4bef22e36] Going to confirm migration 14 do_confirm_resize /usr/lib/python3.9/site-packages/nova/compute/manager.py:4679#033[00m
Dec  6 02:26:31 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:26:31 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:26:31 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:26:31.179 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:26:31 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:26:31 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:26:31 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:26:31.862 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:26:32 np0005548731 nova_compute[232433]: 2025-12-06 07:26:32.411 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:26:32 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:26:33 np0005548731 nova_compute[232433]: 2025-12-06 07:26:33.031 232437 DEBUG neutronclient.v2_0.client [None req-d609dd44-fa2e-4e1f-a353-6dbc1bb909d3 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Error message: {"NeutronError": {"type": "PortBindingNotFound", "message": "Binding for port 0e5e71bc-7098-4091-938e-6299f989917f for host compute-2.ctlplane.example.com could not be found.", "detail": ""}} _handle_fault_response /usr/lib/python3.9/site-packages/neutronclient/v2_0/client.py:262#033[00m
Dec  6 02:26:33 np0005548731 nova_compute[232433]: 2025-12-06 07:26:33.032 232437 DEBUG oslo_concurrency.lockutils [None req-d609dd44-fa2e-4e1f-a353-6dbc1bb909d3 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Acquiring lock "refresh_cache-ea8c0005-4b7a-4697-89ae-91f4bef22e36" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 02:26:33 np0005548731 nova_compute[232433]: 2025-12-06 07:26:33.032 232437 DEBUG oslo_concurrency.lockutils [None req-d609dd44-fa2e-4e1f-a353-6dbc1bb909d3 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Acquired lock "refresh_cache-ea8c0005-4b7a-4697-89ae-91f4bef22e36" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 02:26:33 np0005548731 nova_compute[232433]: 2025-12-06 07:26:33.032 232437 DEBUG nova.network.neutron [None req-d609dd44-fa2e-4e1f-a353-6dbc1bb909d3 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] [instance: ea8c0005-4b7a-4697-89ae-91f4bef22e36] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec  6 02:26:33 np0005548731 nova_compute[232433]: 2025-12-06 07:26:33.033 232437 DEBUG nova.objects.instance [None req-d609dd44-fa2e-4e1f-a353-6dbc1bb909d3 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Lazy-loading 'info_cache' on Instance uuid ea8c0005-4b7a-4697-89ae-91f4bef22e36 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:26:33 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:26:33 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:26:33 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:26:33.182 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:26:33 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:26:33 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:26:33 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:26:33.865 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:26:33 np0005548731 nova_compute[232433]: 2025-12-06 07:26:33.967 232437 DEBUG nova.network.neutron [None req-de8205ed-14bd-4e42-9d8a-86b5a0f39b82 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] [instance: 78b7f315-b870-4184-87db-8078fd170237] Updating instance_info_cache with network_info: [{"id": "59fc2ff6-f1bb-4cda-a487-d011530f0618", "address": "fa:16:3e:ef:72:dc", "network": {"id": "3935518d-ad39-4c36-90ae-f8fe979eafc5", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-32965635-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.2", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "c5036a5cbf7d44fd809d00942afd76e6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap59fc2ff6-f1", "ovs_interfaceid": "59fc2ff6-f1bb-4cda-a487-d011530f0618", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 02:26:34 np0005548731 nova_compute[232433]: 2025-12-06 07:26:34.506 232437 DEBUG oslo_concurrency.lockutils [None req-de8205ed-14bd-4e42-9d8a-86b5a0f39b82 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] Releasing lock "refresh_cache-78b7f315-b870-4184-87db-8078fd170237" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 02:26:34 np0005548731 nova_compute[232433]: 2025-12-06 07:26:34.506 232437 DEBUG nova.objects.instance [None req-de8205ed-14bd-4e42-9d8a-86b5a0f39b82 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] Lazy-loading 'flavor' on Instance uuid 78b7f315-b870-4184-87db-8078fd170237 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:26:34 np0005548731 nova_compute[232433]: 2025-12-06 07:26:34.689 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:26:35 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:26:35 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:26:35 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:26:35.183 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:26:35 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:26:35 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:26:35 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:26:35.867 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:26:36 np0005548731 nova_compute[232433]: 2025-12-06 07:26:36.201 232437 DEBUG nova.network.neutron [None req-d609dd44-fa2e-4e1f-a353-6dbc1bb909d3 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] [instance: ea8c0005-4b7a-4697-89ae-91f4bef22e36] Updating instance_info_cache with network_info: [{"id": "0e5e71bc-7098-4091-938e-6299f989917f", "address": "fa:16:3e:ec:96:d5", "network": {"id": "7c014e4e-a182-4f60-8285-20525bc99e5a", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-602234112-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "88f5b34244614321a9b6e902eaba0ece", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0e5e71bc-70", "ovs_interfaceid": "0e5e71bc-7098-4091-938e-6299f989917f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 02:26:36 np0005548731 kernel: tap59fc2ff6-f1 (unregistering): left promiscuous mode
Dec  6 02:26:36 np0005548731 nova_compute[232433]: 2025-12-06 07:26:36.623 232437 DEBUG oslo_concurrency.lockutils [None req-d609dd44-fa2e-4e1f-a353-6dbc1bb909d3 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Releasing lock "refresh_cache-ea8c0005-4b7a-4697-89ae-91f4bef22e36" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 02:26:36 np0005548731 nova_compute[232433]: 2025-12-06 07:26:36.623 232437 DEBUG nova.objects.instance [None req-d609dd44-fa2e-4e1f-a353-6dbc1bb909d3 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Lazy-loading 'migration_context' on Instance uuid ea8c0005-4b7a-4697-89ae-91f4bef22e36 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:26:36 np0005548731 NetworkManager[49182]: <info>  [1765005996.6250] device (tap59fc2ff6-f1): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec  6 02:26:36 np0005548731 ovn_controller[133927]: 2025-12-06T07:26:36Z|00420|binding|INFO|Releasing lport 59fc2ff6-f1bb-4cda-a487-d011530f0618 from this chassis (sb_readonly=0)
Dec  6 02:26:36 np0005548731 ovn_controller[133927]: 2025-12-06T07:26:36Z|00421|binding|INFO|Setting lport 59fc2ff6-f1bb-4cda-a487-d011530f0618 down in Southbound
Dec  6 02:26:36 np0005548731 ovn_controller[133927]: 2025-12-06T07:26:36Z|00422|binding|INFO|Removing iface tap59fc2ff6-f1 ovn-installed in OVS
Dec  6 02:26:36 np0005548731 nova_compute[232433]: 2025-12-06 07:26:36.635 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:26:36 np0005548731 nova_compute[232433]: 2025-12-06 07:26:36.648 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:26:36 np0005548731 systemd[1]: machine-qemu\x2d46\x2dinstance\x2d0000006b.scope: Deactivated successfully.
Dec  6 02:26:36 np0005548731 systemd[1]: machine-qemu\x2d46\x2dinstance\x2d0000006b.scope: Consumed 7.935s CPU time.
Dec  6 02:26:36 np0005548731 systemd-machined[195355]: Machine qemu-46-instance-0000006b terminated.
Dec  6 02:26:36 np0005548731 nova_compute[232433]: 2025-12-06 07:26:36.733 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:26:36 np0005548731 nova_compute[232433]: 2025-12-06 07:26:36.737 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:26:37 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:26:37 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:26:37 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:26:37.185 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:26:37 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:26:37.271 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ef:72:dc 10.100.0.2'], port_security=['fa:16:3e:ef:72:dc 10.100.0.2'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': '78b7f315-b870-4184-87db-8078fd170237', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3935518d-ad39-4c36-90ae-f8fe979eafc5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c5036a5cbf7d44fd809d00942afd76e6', 'neutron:revision_number': '6', 'neutron:security_group_ids': 'acd03568-07d0-4090-ab6d-0f9189f7ac4a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=cdc71006-3db8-48ee-b5c0-325cb1eaa593, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], logical_port=59fc2ff6-f1bb-4cda-a487-d011530f0618) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 02:26:37 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:26:37.272 143965 INFO neutron.agent.ovn.metadata.agent [-] Port 59fc2ff6-f1bb-4cda-a487-d011530f0618 in datapath 3935518d-ad39-4c36-90ae-f8fe979eafc5 unbound from our chassis#033[00m
Dec  6 02:26:37 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:26:37.273 143965 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 3935518d-ad39-4c36-90ae-f8fe979eafc5 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m
Dec  6 02:26:37 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:26:37.274 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[4d65649b-976a-4ede-96f2-e5bc92eda514]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:26:37 np0005548731 nova_compute[232433]: 2025-12-06 07:26:37.540 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:26:37 np0005548731 nova_compute[232433]: 2025-12-06 07:26:37.544 232437 INFO nova.virt.libvirt.driver [-] [instance: 78b7f315-b870-4184-87db-8078fd170237] Instance destroyed successfully.#033[00m
Dec  6 02:26:37 np0005548731 nova_compute[232433]: 2025-12-06 07:26:37.545 232437 DEBUG nova.storage.rbd_utils [None req-d609dd44-fa2e-4e1f-a353-6dbc1bb909d3 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] removing snapshot(nova-resize) on rbd image(ea8c0005-4b7a-4697-89ae-91f4bef22e36_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Dec  6 02:26:37 np0005548731 nova_compute[232433]: 2025-12-06 07:26:37.545 232437 DEBUG nova.objects.instance [None req-de8205ed-14bd-4e42-9d8a-86b5a0f39b82 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] Lazy-loading 'numa_topology' on Instance uuid 78b7f315-b870-4184-87db-8078fd170237 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:26:37 np0005548731 kernel: tap59fc2ff6-f1: entered promiscuous mode
Dec  6 02:26:37 np0005548731 systemd-udevd[276077]: Network interface NamePolicy= disabled on kernel command line.
Dec  6 02:26:37 np0005548731 ovn_controller[133927]: 2025-12-06T07:26:37Z|00423|binding|INFO|Claiming lport 59fc2ff6-f1bb-4cda-a487-d011530f0618 for this chassis.
Dec  6 02:26:37 np0005548731 ovn_controller[133927]: 2025-12-06T07:26:37Z|00424|binding|INFO|59fc2ff6-f1bb-4cda-a487-d011530f0618: Claiming fa:16:3e:ef:72:dc 10.100.0.2
Dec  6 02:26:37 np0005548731 NetworkManager[49182]: <info>  [1765005997.6979] manager: (tap59fc2ff6-f1): new Tun device (/org/freedesktop/NetworkManager/Devices/208)
Dec  6 02:26:37 np0005548731 nova_compute[232433]: 2025-12-06 07:26:37.698 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:26:37 np0005548731 NetworkManager[49182]: <info>  [1765005997.7069] device (tap59fc2ff6-f1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec  6 02:26:37 np0005548731 NetworkManager[49182]: <info>  [1765005997.7076] device (tap59fc2ff6-f1): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec  6 02:26:37 np0005548731 ovn_controller[133927]: 2025-12-06T07:26:37Z|00425|binding|INFO|Setting lport 59fc2ff6-f1bb-4cda-a487-d011530f0618 ovn-installed in OVS
Dec  6 02:26:37 np0005548731 nova_compute[232433]: 2025-12-06 07:26:37.721 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:26:37 np0005548731 nova_compute[232433]: 2025-12-06 07:26:37.725 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:26:37 np0005548731 systemd-machined[195355]: New machine qemu-47-instance-0000006b.
Dec  6 02:26:37 np0005548731 ovn_controller[133927]: 2025-12-06T07:26:37Z|00426|binding|INFO|Setting lport 59fc2ff6-f1bb-4cda-a487-d011530f0618 up in Southbound
Dec  6 02:26:37 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:26:37.740 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ef:72:dc 10.100.0.2'], port_security=['fa:16:3e:ef:72:dc 10.100.0.2'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': '78b7f315-b870-4184-87db-8078fd170237', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3935518d-ad39-4c36-90ae-f8fe979eafc5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c5036a5cbf7d44fd809d00942afd76e6', 'neutron:revision_number': '6', 'neutron:security_group_ids': 'acd03568-07d0-4090-ab6d-0f9189f7ac4a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=cdc71006-3db8-48ee-b5c0-325cb1eaa593, chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], logical_port=59fc2ff6-f1bb-4cda-a487-d011530f0618) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 02:26:37 np0005548731 systemd[1]: Started Virtual Machine qemu-47-instance-0000006b.
Dec  6 02:26:37 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:26:37.742 143965 INFO neutron.agent.ovn.metadata.agent [-] Port 59fc2ff6-f1bb-4cda-a487-d011530f0618 in datapath 3935518d-ad39-4c36-90ae-f8fe979eafc5 bound to our chassis#033[00m
Dec  6 02:26:37 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:26:37.743 143965 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 3935518d-ad39-4c36-90ae-f8fe979eafc5 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m
Dec  6 02:26:37 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:26:37.744 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[5ec4c443-e9a2-49f8-b6ed-50ce0aa86986]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:26:37 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:26:37 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:26:37 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:26:37.869 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:26:37 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:26:38 np0005548731 nova_compute[232433]: 2025-12-06 07:26:38.147 232437 DEBUG nova.compute.manager [req-3f507c63-323e-4fb6-9c53-12ec9a0618ed req-72c9d9c6-596f-4d1d-9845-e4c377203fd0 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 78b7f315-b870-4184-87db-8078fd170237] Received event network-vif-unplugged-59fc2ff6-f1bb-4cda-a487-d011530f0618 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:26:38 np0005548731 nova_compute[232433]: 2025-12-06 07:26:38.148 232437 DEBUG oslo_concurrency.lockutils [req-3f507c63-323e-4fb6-9c53-12ec9a0618ed req-72c9d9c6-596f-4d1d-9845-e4c377203fd0 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "78b7f315-b870-4184-87db-8078fd170237-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:26:38 np0005548731 nova_compute[232433]: 2025-12-06 07:26:38.148 232437 DEBUG oslo_concurrency.lockutils [req-3f507c63-323e-4fb6-9c53-12ec9a0618ed req-72c9d9c6-596f-4d1d-9845-e4c377203fd0 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "78b7f315-b870-4184-87db-8078fd170237-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:26:38 np0005548731 nova_compute[232433]: 2025-12-06 07:26:38.148 232437 DEBUG oslo_concurrency.lockutils [req-3f507c63-323e-4fb6-9c53-12ec9a0618ed req-72c9d9c6-596f-4d1d-9845-e4c377203fd0 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "78b7f315-b870-4184-87db-8078fd170237-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:26:38 np0005548731 nova_compute[232433]: 2025-12-06 07:26:38.148 232437 DEBUG nova.compute.manager [req-3f507c63-323e-4fb6-9c53-12ec9a0618ed req-72c9d9c6-596f-4d1d-9845-e4c377203fd0 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 78b7f315-b870-4184-87db-8078fd170237] No waiting events found dispatching network-vif-unplugged-59fc2ff6-f1bb-4cda-a487-d011530f0618 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 02:26:38 np0005548731 nova_compute[232433]: 2025-12-06 07:26:38.149 232437 WARNING nova.compute.manager [req-3f507c63-323e-4fb6-9c53-12ec9a0618ed req-72c9d9c6-596f-4d1d-9845-e4c377203fd0 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 78b7f315-b870-4184-87db-8078fd170237] Received unexpected event network-vif-unplugged-59fc2ff6-f1bb-4cda-a487-d011530f0618 for instance with vm_state rescued and task_state unrescuing.#033[00m
Dec  6 02:26:38 np0005548731 nova_compute[232433]: 2025-12-06 07:26:38.310 232437 DEBUG nova.virt.libvirt.host [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Removed pending event for 78b7f315-b870-4184-87db-8078fd170237 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Dec  6 02:26:38 np0005548731 nova_compute[232433]: 2025-12-06 07:26:38.311 232437 DEBUG nova.virt.driver [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Emitting event <LifecycleEvent: 1765005998.3102984, 78b7f315-b870-4184-87db-8078fd170237 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 02:26:38 np0005548731 nova_compute[232433]: 2025-12-06 07:26:38.311 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 78b7f315-b870-4184-87db-8078fd170237] VM Resumed (Lifecycle Event)#033[00m
Dec  6 02:26:38 np0005548731 nova_compute[232433]: 2025-12-06 07:26:38.332 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 78b7f315-b870-4184-87db-8078fd170237] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:26:38 np0005548731 nova_compute[232433]: 2025-12-06 07:26:38.339 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 78b7f315-b870-4184-87db-8078fd170237] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: rescued, current task_state: unrescuing, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  6 02:26:38 np0005548731 nova_compute[232433]: 2025-12-06 07:26:38.365 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 78b7f315-b870-4184-87db-8078fd170237] During sync_power_state the instance has a pending task (unrescuing). Skip.#033[00m
Dec  6 02:26:38 np0005548731 nova_compute[232433]: 2025-12-06 07:26:38.366 232437 DEBUG nova.virt.driver [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Emitting event <LifecycleEvent: 1765005998.313166, 78b7f315-b870-4184-87db-8078fd170237 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 02:26:38 np0005548731 nova_compute[232433]: 2025-12-06 07:26:38.366 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 78b7f315-b870-4184-87db-8078fd170237] VM Started (Lifecycle Event)#033[00m
Dec  6 02:26:38 np0005548731 nova_compute[232433]: 2025-12-06 07:26:38.425 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 78b7f315-b870-4184-87db-8078fd170237] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:26:38 np0005548731 nova_compute[232433]: 2025-12-06 07:26:38.429 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 78b7f315-b870-4184-87db-8078fd170237] Synchronizing instance power state after lifecycle event "Started"; current vm_state: rescued, current task_state: unrescuing, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  6 02:26:38 np0005548731 nova_compute[232433]: 2025-12-06 07:26:38.505 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 78b7f315-b870-4184-87db-8078fd170237] During sync_power_state the instance has a pending task (unrescuing). Skip.#033[00m
Dec  6 02:26:38 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:26:38.889 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=40, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:ec:b3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '32:72:e7:89:e0:7d'}, ipsec=False) old=SB_Global(nb_cfg=39) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 02:26:38 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:26:38.890 143965 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec  6 02:26:38 np0005548731 nova_compute[232433]: 2025-12-06 07:26:38.890 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:26:39 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:26:39 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:26:39 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:26:39.187 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:26:39 np0005548731 nova_compute[232433]: 2025-12-06 07:26:39.690 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:26:39 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:26:39 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:26:39 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:26:39.873 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:26:39 np0005548731 podman[276211]: 2025-12-06 07:26:39.915437154 +0000 UTC m=+0.062964359 container health_status ae4fd08cdec31d6ae2a6b91ffff7deb6ca5a8375cb52ee4b685cc6f25796824e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  6 02:26:39 np0005548731 podman[276209]: 2025-12-06 07:26:39.915441284 +0000 UTC m=+0.058918860 container health_status 26c2ce9bdb83c6db5ccad7f5b2a7cb4e9354ab0235ad2f942ea42c8feda2142b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS)
Dec  6 02:26:39 np0005548731 podman[276210]: 2025-12-06 07:26:39.934049168 +0000 UTC m=+0.082433944 container health_status a118c3becf56cfbcae208065771ebf10a65162bb7b96389971293e534823fb78 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125)
Dec  6 02:26:40 np0005548731 nova_compute[232433]: 2025-12-06 07:26:40.262 232437 DEBUG nova.compute.manager [req-600e18c3-17ed-496e-85c9-3786d8a7f4d8 req-b493e5c9-716b-4a46-b292-04b8a7c870db 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 78b7f315-b870-4184-87db-8078fd170237] Received event network-vif-plugged-59fc2ff6-f1bb-4cda-a487-d011530f0618 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:26:40 np0005548731 nova_compute[232433]: 2025-12-06 07:26:40.264 232437 DEBUG oslo_concurrency.lockutils [req-600e18c3-17ed-496e-85c9-3786d8a7f4d8 req-b493e5c9-716b-4a46-b292-04b8a7c870db 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "78b7f315-b870-4184-87db-8078fd170237-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:26:40 np0005548731 nova_compute[232433]: 2025-12-06 07:26:40.264 232437 DEBUG oslo_concurrency.lockutils [req-600e18c3-17ed-496e-85c9-3786d8a7f4d8 req-b493e5c9-716b-4a46-b292-04b8a7c870db 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "78b7f315-b870-4184-87db-8078fd170237-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:26:40 np0005548731 nova_compute[232433]: 2025-12-06 07:26:40.264 232437 DEBUG oslo_concurrency.lockutils [req-600e18c3-17ed-496e-85c9-3786d8a7f4d8 req-b493e5c9-716b-4a46-b292-04b8a7c870db 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "78b7f315-b870-4184-87db-8078fd170237-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:26:40 np0005548731 nova_compute[232433]: 2025-12-06 07:26:40.265 232437 DEBUG nova.compute.manager [req-600e18c3-17ed-496e-85c9-3786d8a7f4d8 req-b493e5c9-716b-4a46-b292-04b8a7c870db 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 78b7f315-b870-4184-87db-8078fd170237] No waiting events found dispatching network-vif-plugged-59fc2ff6-f1bb-4cda-a487-d011530f0618 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 02:26:40 np0005548731 nova_compute[232433]: 2025-12-06 07:26:40.265 232437 WARNING nova.compute.manager [req-600e18c3-17ed-496e-85c9-3786d8a7f4d8 req-b493e5c9-716b-4a46-b292-04b8a7c870db 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 78b7f315-b870-4184-87db-8078fd170237] Received unexpected event network-vif-plugged-59fc2ff6-f1bb-4cda-a487-d011530f0618 for instance with vm_state rescued and task_state unrescuing.#033[00m
Dec  6 02:26:40 np0005548731 nova_compute[232433]: 2025-12-06 07:26:40.265 232437 DEBUG nova.compute.manager [req-600e18c3-17ed-496e-85c9-3786d8a7f4d8 req-b493e5c9-716b-4a46-b292-04b8a7c870db 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 78b7f315-b870-4184-87db-8078fd170237] Received event network-vif-plugged-59fc2ff6-f1bb-4cda-a487-d011530f0618 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:26:40 np0005548731 nova_compute[232433]: 2025-12-06 07:26:40.265 232437 DEBUG oslo_concurrency.lockutils [req-600e18c3-17ed-496e-85c9-3786d8a7f4d8 req-b493e5c9-716b-4a46-b292-04b8a7c870db 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "78b7f315-b870-4184-87db-8078fd170237-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:26:40 np0005548731 nova_compute[232433]: 2025-12-06 07:26:40.265 232437 DEBUG oslo_concurrency.lockutils [req-600e18c3-17ed-496e-85c9-3786d8a7f4d8 req-b493e5c9-716b-4a46-b292-04b8a7c870db 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "78b7f315-b870-4184-87db-8078fd170237-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:26:40 np0005548731 nova_compute[232433]: 2025-12-06 07:26:40.265 232437 DEBUG oslo_concurrency.lockutils [req-600e18c3-17ed-496e-85c9-3786d8a7f4d8 req-b493e5c9-716b-4a46-b292-04b8a7c870db 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "78b7f315-b870-4184-87db-8078fd170237-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:26:40 np0005548731 nova_compute[232433]: 2025-12-06 07:26:40.266 232437 DEBUG nova.compute.manager [req-600e18c3-17ed-496e-85c9-3786d8a7f4d8 req-b493e5c9-716b-4a46-b292-04b8a7c870db 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 78b7f315-b870-4184-87db-8078fd170237] No waiting events found dispatching network-vif-plugged-59fc2ff6-f1bb-4cda-a487-d011530f0618 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 02:26:40 np0005548731 nova_compute[232433]: 2025-12-06 07:26:40.266 232437 WARNING nova.compute.manager [req-600e18c3-17ed-496e-85c9-3786d8a7f4d8 req-b493e5c9-716b-4a46-b292-04b8a7c870db 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 78b7f315-b870-4184-87db-8078fd170237] Received unexpected event network-vif-plugged-59fc2ff6-f1bb-4cda-a487-d011530f0618 for instance with vm_state rescued and task_state unrescuing.#033[00m
Dec  6 02:26:40 np0005548731 nova_compute[232433]: 2025-12-06 07:26:40.266 232437 DEBUG nova.compute.manager [req-600e18c3-17ed-496e-85c9-3786d8a7f4d8 req-b493e5c9-716b-4a46-b292-04b8a7c870db 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 78b7f315-b870-4184-87db-8078fd170237] Received event network-vif-plugged-59fc2ff6-f1bb-4cda-a487-d011530f0618 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:26:40 np0005548731 nova_compute[232433]: 2025-12-06 07:26:40.266 232437 DEBUG oslo_concurrency.lockutils [req-600e18c3-17ed-496e-85c9-3786d8a7f4d8 req-b493e5c9-716b-4a46-b292-04b8a7c870db 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "78b7f315-b870-4184-87db-8078fd170237-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:26:40 np0005548731 nova_compute[232433]: 2025-12-06 07:26:40.266 232437 DEBUG oslo_concurrency.lockutils [req-600e18c3-17ed-496e-85c9-3786d8a7f4d8 req-b493e5c9-716b-4a46-b292-04b8a7c870db 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "78b7f315-b870-4184-87db-8078fd170237-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:26:40 np0005548731 nova_compute[232433]: 2025-12-06 07:26:40.267 232437 DEBUG oslo_concurrency.lockutils [req-600e18c3-17ed-496e-85c9-3786d8a7f4d8 req-b493e5c9-716b-4a46-b292-04b8a7c870db 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "78b7f315-b870-4184-87db-8078fd170237-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:26:40 np0005548731 nova_compute[232433]: 2025-12-06 07:26:40.267 232437 DEBUG nova.compute.manager [req-600e18c3-17ed-496e-85c9-3786d8a7f4d8 req-b493e5c9-716b-4a46-b292-04b8a7c870db 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 78b7f315-b870-4184-87db-8078fd170237] No waiting events found dispatching network-vif-plugged-59fc2ff6-f1bb-4cda-a487-d011530f0618 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 02:26:40 np0005548731 nova_compute[232433]: 2025-12-06 07:26:40.267 232437 WARNING nova.compute.manager [req-600e18c3-17ed-496e-85c9-3786d8a7f4d8 req-b493e5c9-716b-4a46-b292-04b8a7c870db 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 78b7f315-b870-4184-87db-8078fd170237] Received unexpected event network-vif-plugged-59fc2ff6-f1bb-4cda-a487-d011530f0618 for instance with vm_state rescued and task_state unrescuing.#033[00m
Dec  6 02:26:40 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e266 e266: 3 total, 3 up, 3 in
Dec  6 02:26:41 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:26:41 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:26:41 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:26:41.191 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:26:41 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:26:41 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 02:26:41 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:26:41.875 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 02:26:42 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e267 e267: 3 total, 3 up, 3 in
Dec  6 02:26:42 np0005548731 nova_compute[232433]: 2025-12-06 07:26:42.414 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:26:42 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:26:42.891 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=9f96b960-b4f2-40bd-ae99-08121f5e8b78, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '40'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:26:42 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e267 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:26:43 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:26:43 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:26:43 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:26:43.193 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:26:43 np0005548731 nova_compute[232433]: 2025-12-06 07:26:43.486 232437 DEBUG nova.virt.libvirt.vif [None req-d609dd44-fa2e-4e1f-a353-6dbc1bb909d3 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T07:25:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-1037063395',display_name='tempest-ServerDiskConfigTestJSON-server-1037063395',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-1037063395',id=108,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-06T07:26:26Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=MigrationContext,new_flavor=Flavor(2),node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=Flavor(1),os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='88f5b34244614321a9b6e902eaba0ece',ramdisk_id='',reservation_id='r-qe8qs13c',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_mode
l='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-ServerDiskConfigTestJSON-749654875',owner_user_name='tempest-ServerDiskConfigTestJSON-749654875-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T07:26:26Z,user_data=None,user_id='d67c136e82ad4001b000848d75eef50d',uuid=ea8c0005-4b7a-4697-89ae-91f4bef22e36,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='resized') vif={"id": "0e5e71bc-7098-4091-938e-6299f989917f", "address": "fa:16:3e:ec:96:d5", "network": {"id": "7c014e4e-a182-4f60-8285-20525bc99e5a", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-602234112-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "88f5b34244614321a9b6e902eaba0ece", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0e5e71bc-70", "ovs_interfaceid": "0e5e71bc-7098-4091-938e-6299f989917f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec  6 02:26:43 np0005548731 nova_compute[232433]: 2025-12-06 07:26:43.486 232437 DEBUG nova.network.os_vif_util [None req-d609dd44-fa2e-4e1f-a353-6dbc1bb909d3 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Converting VIF {"id": "0e5e71bc-7098-4091-938e-6299f989917f", "address": "fa:16:3e:ec:96:d5", "network": {"id": "7c014e4e-a182-4f60-8285-20525bc99e5a", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-602234112-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "88f5b34244614321a9b6e902eaba0ece", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0e5e71bc-70", "ovs_interfaceid": "0e5e71bc-7098-4091-938e-6299f989917f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 02:26:43 np0005548731 nova_compute[232433]: 2025-12-06 07:26:43.487 232437 DEBUG nova.network.os_vif_util [None req-d609dd44-fa2e-4e1f-a353-6dbc1bb909d3 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:ec:96:d5,bridge_name='br-int',has_traffic_filtering=True,id=0e5e71bc-7098-4091-938e-6299f989917f,network=Network(7c014e4e-a182-4f60-8285-20525bc99e5a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0e5e71bc-70') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 02:26:43 np0005548731 nova_compute[232433]: 2025-12-06 07:26:43.488 232437 DEBUG os_vif [None req-d609dd44-fa2e-4e1f-a353-6dbc1bb909d3 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:ec:96:d5,bridge_name='br-int',has_traffic_filtering=True,id=0e5e71bc-7098-4091-938e-6299f989917f,network=Network(7c014e4e-a182-4f60-8285-20525bc99e5a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0e5e71bc-70') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec  6 02:26:43 np0005548731 nova_compute[232433]: 2025-12-06 07:26:43.489 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:26:43 np0005548731 nova_compute[232433]: 2025-12-06 07:26:43.490 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0e5e71bc-70, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:26:43 np0005548731 nova_compute[232433]: 2025-12-06 07:26:43.490 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  6 02:26:43 np0005548731 nova_compute[232433]: 2025-12-06 07:26:43.492 232437 INFO os_vif [None req-d609dd44-fa2e-4e1f-a353-6dbc1bb909d3 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:ec:96:d5,bridge_name='br-int',has_traffic_filtering=True,id=0e5e71bc-7098-4091-938e-6299f989917f,network=Network(7c014e4e-a182-4f60-8285-20525bc99e5a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0e5e71bc-70')#033[00m
Dec  6 02:26:43 np0005548731 nova_compute[232433]: 2025-12-06 07:26:43.492 232437 DEBUG oslo_concurrency.lockutils [None req-d609dd44-fa2e-4e1f-a353-6dbc1bb909d3 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_source" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:26:43 np0005548731 nova_compute[232433]: 2025-12-06 07:26:43.492 232437 DEBUG oslo_concurrency.lockutils [None req-d609dd44-fa2e-4e1f-a353-6dbc1bb909d3 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_source" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:26:43 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:26:43 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 02:26:43 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:26:43.878 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 02:26:44 np0005548731 nova_compute[232433]: 2025-12-06 07:26:44.641 232437 DEBUG oslo_concurrency.processutils [None req-d609dd44-fa2e-4e1f-a353-6dbc1bb909d3 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:26:44 np0005548731 nova_compute[232433]: 2025-12-06 07:26:44.693 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:26:45 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 02:26:45 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/514090520' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 02:26:45 np0005548731 nova_compute[232433]: 2025-12-06 07:26:45.087 232437 DEBUG oslo_concurrency.processutils [None req-d609dd44-fa2e-4e1f-a353-6dbc1bb909d3 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.446s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:26:45 np0005548731 nova_compute[232433]: 2025-12-06 07:26:45.093 232437 DEBUG nova.compute.provider_tree [None req-d609dd44-fa2e-4e1f-a353-6dbc1bb909d3 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Inventory has not changed in ProviderTree for provider: 6d00757a-082f-486d-ae84-869a2ba2e6e7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  6 02:26:45 np0005548731 nova_compute[232433]: 2025-12-06 07:26:45.144 232437 DEBUG nova.compute.manager [None req-de8205ed-14bd-4e42-9d8a-86b5a0f39b82 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] [instance: 78b7f315-b870-4184-87db-8078fd170237] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:26:45 np0005548731 nova_compute[232433]: 2025-12-06 07:26:45.187 232437 DEBUG nova.scheduler.client.report [None req-d609dd44-fa2e-4e1f-a353-6dbc1bb909d3 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Inventory has not changed for provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  6 02:26:45 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:26:45 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:26:45 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:26:45.197 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:26:45 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:26:45 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:26:45 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:26:45.882 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:26:46 np0005548731 nova_compute[232433]: 2025-12-06 07:26:46.848 232437 DEBUG oslo_concurrency.lockutils [None req-d609dd44-fa2e-4e1f-a353-6dbc1bb909d3 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_source" :: held 3.356s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:26:47 np0005548731 nova_compute[232433]: 2025-12-06 07:26:47.091 232437 INFO nova.scheduler.client.report [None req-d609dd44-fa2e-4e1f-a353-6dbc1bb909d3 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Deleted allocation for migration 2799ffd6-1eb2-4617-b3b6-b7669cd5f78c#033[00m
Dec  6 02:26:47 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:26:47 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:26:47 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:26:47.199 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:26:47 np0005548731 nova_compute[232433]: 2025-12-06 07:26:47.216 232437 DEBUG oslo_concurrency.lockutils [None req-d609dd44-fa2e-4e1f-a353-6dbc1bb909d3 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Lock "ea8c0005-4b7a-4697-89ae-91f4bef22e36" "released" by "nova.compute.manager.ComputeManager.confirm_resize.<locals>.do_confirm_resize" :: held 16.051s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:26:47 np0005548731 nova_compute[232433]: 2025-12-06 07:26:47.416 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:26:47 np0005548731 nova_compute[232433]: 2025-12-06 07:26:47.622 232437 DEBUG oslo_concurrency.lockutils [None req-fbe54018-dff9-40ea-b7c4-90cbaacc2b99 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] Acquiring lock "78b7f315-b870-4184-87db-8078fd170237" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:26:47 np0005548731 nova_compute[232433]: 2025-12-06 07:26:47.623 232437 DEBUG oslo_concurrency.lockutils [None req-fbe54018-dff9-40ea-b7c4-90cbaacc2b99 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] Lock "78b7f315-b870-4184-87db-8078fd170237" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:26:47 np0005548731 nova_compute[232433]: 2025-12-06 07:26:47.624 232437 DEBUG oslo_concurrency.lockutils [None req-fbe54018-dff9-40ea-b7c4-90cbaacc2b99 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] Acquiring lock "78b7f315-b870-4184-87db-8078fd170237-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:26:47 np0005548731 nova_compute[232433]: 2025-12-06 07:26:47.624 232437 DEBUG oslo_concurrency.lockutils [None req-fbe54018-dff9-40ea-b7c4-90cbaacc2b99 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] Lock "78b7f315-b870-4184-87db-8078fd170237-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:26:47 np0005548731 nova_compute[232433]: 2025-12-06 07:26:47.625 232437 DEBUG oslo_concurrency.lockutils [None req-fbe54018-dff9-40ea-b7c4-90cbaacc2b99 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] Lock "78b7f315-b870-4184-87db-8078fd170237-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:26:47 np0005548731 nova_compute[232433]: 2025-12-06 07:26:47.626 232437 INFO nova.compute.manager [None req-fbe54018-dff9-40ea-b7c4-90cbaacc2b99 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] [instance: 78b7f315-b870-4184-87db-8078fd170237] Terminating instance#033[00m
Dec  6 02:26:47 np0005548731 nova_compute[232433]: 2025-12-06 07:26:47.627 232437 DEBUG nova.compute.manager [None req-fbe54018-dff9-40ea-b7c4-90cbaacc2b99 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] [instance: 78b7f315-b870-4184-87db-8078fd170237] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec  6 02:26:47 np0005548731 kernel: tap59fc2ff6-f1 (unregistering): left promiscuous mode
Dec  6 02:26:47 np0005548731 NetworkManager[49182]: <info>  [1765006007.7398] device (tap59fc2ff6-f1): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec  6 02:26:47 np0005548731 ovn_controller[133927]: 2025-12-06T07:26:47Z|00427|binding|INFO|Releasing lport 59fc2ff6-f1bb-4cda-a487-d011530f0618 from this chassis (sb_readonly=0)
Dec  6 02:26:47 np0005548731 ovn_controller[133927]: 2025-12-06T07:26:47Z|00428|binding|INFO|Setting lport 59fc2ff6-f1bb-4cda-a487-d011530f0618 down in Southbound
Dec  6 02:26:47 np0005548731 ovn_controller[133927]: 2025-12-06T07:26:47Z|00429|binding|INFO|Removing iface tap59fc2ff6-f1 ovn-installed in OVS
Dec  6 02:26:47 np0005548731 nova_compute[232433]: 2025-12-06 07:26:47.753 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:26:47 np0005548731 nova_compute[232433]: 2025-12-06 07:26:47.777 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:26:47 np0005548731 systemd[1]: machine-qemu\x2d47\x2dinstance\x2d0000006b.scope: Deactivated successfully.
Dec  6 02:26:47 np0005548731 systemd[1]: machine-qemu\x2d47\x2dinstance\x2d0000006b.scope: Consumed 10.155s CPU time.
Dec  6 02:26:47 np0005548731 systemd-machined[195355]: Machine qemu-47-instance-0000006b terminated.
Dec  6 02:26:47 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:26:47.845 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ef:72:dc 10.100.0.2'], port_security=['fa:16:3e:ef:72:dc 10.100.0.2'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': '78b7f315-b870-4184-87db-8078fd170237', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3935518d-ad39-4c36-90ae-f8fe979eafc5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c5036a5cbf7d44fd809d00942afd76e6', 'neutron:revision_number': '8', 'neutron:security_group_ids': 'acd03568-07d0-4090-ab6d-0f9189f7ac4a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=cdc71006-3db8-48ee-b5c0-325cb1eaa593, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], logical_port=59fc2ff6-f1bb-4cda-a487-d011530f0618) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 02:26:47 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:26:47.847 143965 INFO neutron.agent.ovn.metadata.agent [-] Port 59fc2ff6-f1bb-4cda-a487-d011530f0618 in datapath 3935518d-ad39-4c36-90ae-f8fe979eafc5 unbound from our chassis#033[00m
Dec  6 02:26:47 np0005548731 kernel: tap59fc2ff6-f1: entered promiscuous mode
Dec  6 02:26:47 np0005548731 NetworkManager[49182]: <info>  [1765006007.8473] manager: (tap59fc2ff6-f1): new Tun device (/org/freedesktop/NetworkManager/Devices/209)
Dec  6 02:26:47 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:26:47.848 143965 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 3935518d-ad39-4c36-90ae-f8fe979eafc5 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m
Dec  6 02:26:47 np0005548731 kernel: tap59fc2ff6-f1 (unregistering): left promiscuous mode
Dec  6 02:26:47 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:26:47.849 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[17a93411-8295-4a4d-bdf9-9c79c32aa946]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:26:47 np0005548731 ovn_controller[133927]: 2025-12-06T07:26:47Z|00430|binding|INFO|Claiming lport 59fc2ff6-f1bb-4cda-a487-d011530f0618 for this chassis.
Dec  6 02:26:47 np0005548731 ovn_controller[133927]: 2025-12-06T07:26:47Z|00431|binding|INFO|59fc2ff6-f1bb-4cda-a487-d011530f0618: Claiming fa:16:3e:ef:72:dc 10.100.0.2
Dec  6 02:26:47 np0005548731 nova_compute[232433]: 2025-12-06 07:26:47.852 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:26:47 np0005548731 nova_compute[232433]: 2025-12-06 07:26:47.867 232437 INFO nova.virt.libvirt.driver [-] [instance: 78b7f315-b870-4184-87db-8078fd170237] Instance destroyed successfully.#033[00m
Dec  6 02:26:47 np0005548731 nova_compute[232433]: 2025-12-06 07:26:47.867 232437 DEBUG nova.objects.instance [None req-fbe54018-dff9-40ea-b7c4-90cbaacc2b99 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] Lazy-loading 'resources' on Instance uuid 78b7f315-b870-4184-87db-8078fd170237 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:26:47 np0005548731 ovn_controller[133927]: 2025-12-06T07:26:47Z|00432|binding|INFO|Setting lport 59fc2ff6-f1bb-4cda-a487-d011530f0618 ovn-installed in OVS
Dec  6 02:26:47 np0005548731 ovn_controller[133927]: 2025-12-06T07:26:47Z|00433|if_status|INFO|Not setting lport 59fc2ff6-f1bb-4cda-a487-d011530f0618 down as sb is readonly
Dec  6 02:26:47 np0005548731 nova_compute[232433]: 2025-12-06 07:26:47.871 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:26:47 np0005548731 ovn_controller[133927]: 2025-12-06T07:26:47Z|00434|binding|INFO|Releasing lport 59fc2ff6-f1bb-4cda-a487-d011530f0618 from this chassis (sb_readonly=0)
Dec  6 02:26:47 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:26:47.880 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ef:72:dc 10.100.0.2'], port_security=['fa:16:3e:ef:72:dc 10.100.0.2'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': '78b7f315-b870-4184-87db-8078fd170237', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3935518d-ad39-4c36-90ae-f8fe979eafc5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c5036a5cbf7d44fd809d00942afd76e6', 'neutron:revision_number': '8', 'neutron:security_group_ids': 'acd03568-07d0-4090-ab6d-0f9189f7ac4a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=cdc71006-3db8-48ee-b5c0-325cb1eaa593, chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], logical_port=59fc2ff6-f1bb-4cda-a487-d011530f0618) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 02:26:47 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:26:47.882 143965 INFO neutron.agent.ovn.metadata.agent [-] Port 59fc2ff6-f1bb-4cda-a487-d011530f0618 in datapath 3935518d-ad39-4c36-90ae-f8fe979eafc5 bound to our chassis#033[00m
Dec  6 02:26:47 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:26:47.883 143965 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 3935518d-ad39-4c36-90ae-f8fe979eafc5 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m
Dec  6 02:26:47 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:26:47.883 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[7b5a69a1-9bda-4209-b4e7-ea662af8d09e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:26:47 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:26:47 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:26:47 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:26:47.884 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:26:47 np0005548731 nova_compute[232433]: 2025-12-06 07:26:47.889 232437 DEBUG nova.virt.libvirt.vif [None req-fbe54018-dff9-40ea-b7c4-90cbaacc2b99 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T07:25:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerRescueTestJSON-server-1448250982',display_name='tempest-ServerRescueTestJSON-server-1448250982',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serverrescuetestjson-server-1448250982',id=107,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-06T07:26:27Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='c5036a5cbf7d44fd809d00942afd76e6',ramdisk_id='',reservation_id='r-d804mxte',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerRescueTestJSON-1682980670',owner_user_name='tempest-ServerRescueTestJSON-1682980670-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T07:26:45Z,user_data=None,user_id='bf140baee82f408e8fafa81062146641',uuid=78b7f315-b870-4184-87db-8078fd170237,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "59fc2ff6-f1bb-4cda-a487-d011530f0618", "address": "fa:16:3e:ef:72:dc", "network": {"id": "3935518d-ad39-4c36-90ae-f8fe979eafc5", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-32965635-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.2", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "c5036a5cbf7d44fd809d00942afd76e6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap59fc2ff6-f1", "ovs_interfaceid": "59fc2ff6-f1bb-4cda-a487-d011530f0618", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec  6 02:26:47 np0005548731 nova_compute[232433]: 2025-12-06 07:26:47.889 232437 DEBUG nova.network.os_vif_util [None req-fbe54018-dff9-40ea-b7c4-90cbaacc2b99 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] Converting VIF {"id": "59fc2ff6-f1bb-4cda-a487-d011530f0618", "address": "fa:16:3e:ef:72:dc", "network": {"id": "3935518d-ad39-4c36-90ae-f8fe979eafc5", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-32965635-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.2", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "c5036a5cbf7d44fd809d00942afd76e6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap59fc2ff6-f1", "ovs_interfaceid": "59fc2ff6-f1bb-4cda-a487-d011530f0618", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 02:26:47 np0005548731 nova_compute[232433]: 2025-12-06 07:26:47.890 232437 DEBUG nova.network.os_vif_util [None req-fbe54018-dff9-40ea-b7c4-90cbaacc2b99 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:ef:72:dc,bridge_name='br-int',has_traffic_filtering=True,id=59fc2ff6-f1bb-4cda-a487-d011530f0618,network=Network(3935518d-ad39-4c36-90ae-f8fe979eafc5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap59fc2ff6-f1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 02:26:47 np0005548731 nova_compute[232433]: 2025-12-06 07:26:47.890 232437 DEBUG os_vif [None req-fbe54018-dff9-40ea-b7c4-90cbaacc2b99 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:ef:72:dc,bridge_name='br-int',has_traffic_filtering=True,id=59fc2ff6-f1bb-4cda-a487-d011530f0618,network=Network(3935518d-ad39-4c36-90ae-f8fe979eafc5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap59fc2ff6-f1') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec  6 02:26:47 np0005548731 nova_compute[232433]: 2025-12-06 07:26:47.892 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:26:47 np0005548731 nova_compute[232433]: 2025-12-06 07:26:47.892 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap59fc2ff6-f1, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:26:47 np0005548731 nova_compute[232433]: 2025-12-06 07:26:47.894 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  6 02:26:47 np0005548731 nova_compute[232433]: 2025-12-06 07:26:47.894 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:26:47 np0005548731 nova_compute[232433]: 2025-12-06 07:26:47.897 232437 INFO os_vif [None req-fbe54018-dff9-40ea-b7c4-90cbaacc2b99 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:ef:72:dc,bridge_name='br-int',has_traffic_filtering=True,id=59fc2ff6-f1bb-4cda-a487-d011530f0618,network=Network(3935518d-ad39-4c36-90ae-f8fe979eafc5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap59fc2ff6-f1')#033[00m
Dec  6 02:26:47 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:26:47.905 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ef:72:dc 10.100.0.2'], port_security=['fa:16:3e:ef:72:dc 10.100.0.2'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': '78b7f315-b870-4184-87db-8078fd170237', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3935518d-ad39-4c36-90ae-f8fe979eafc5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c5036a5cbf7d44fd809d00942afd76e6', 'neutron:revision_number': '8', 'neutron:security_group_ids': 'acd03568-07d0-4090-ab6d-0f9189f7ac4a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=cdc71006-3db8-48ee-b5c0-325cb1eaa593, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], logical_port=59fc2ff6-f1bb-4cda-a487-d011530f0618) old=Port_Binding(chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 02:26:47 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:26:47.906 143965 INFO neutron.agent.ovn.metadata.agent [-] Port 59fc2ff6-f1bb-4cda-a487-d011530f0618 in datapath 3935518d-ad39-4c36-90ae-f8fe979eafc5 unbound from our chassis#033[00m
Dec  6 02:26:47 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:26:47.907 143965 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 3935518d-ad39-4c36-90ae-f8fe979eafc5 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m
Dec  6 02:26:47 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:26:47.908 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[e64961a4-8b38-492d-9d4d-0a764e3c3f4d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:26:47 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e267 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:26:48 np0005548731 nova_compute[232433]: 2025-12-06 07:26:48.184 232437 DEBUG nova.compute.manager [req-10fd66f5-cf74-4c9f-94a0-a7ce4f1ab2ef req-06709589-fc1a-4b34-9c32-6c6ef44aaf42 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 78b7f315-b870-4184-87db-8078fd170237] Received event network-vif-unplugged-59fc2ff6-f1bb-4cda-a487-d011530f0618 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:26:48 np0005548731 nova_compute[232433]: 2025-12-06 07:26:48.185 232437 DEBUG oslo_concurrency.lockutils [req-10fd66f5-cf74-4c9f-94a0-a7ce4f1ab2ef req-06709589-fc1a-4b34-9c32-6c6ef44aaf42 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "78b7f315-b870-4184-87db-8078fd170237-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:26:48 np0005548731 nova_compute[232433]: 2025-12-06 07:26:48.185 232437 DEBUG oslo_concurrency.lockutils [req-10fd66f5-cf74-4c9f-94a0-a7ce4f1ab2ef req-06709589-fc1a-4b34-9c32-6c6ef44aaf42 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "78b7f315-b870-4184-87db-8078fd170237-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:26:48 np0005548731 nova_compute[232433]: 2025-12-06 07:26:48.218 232437 DEBUG oslo_concurrency.lockutils [req-10fd66f5-cf74-4c9f-94a0-a7ce4f1ab2ef req-06709589-fc1a-4b34-9c32-6c6ef44aaf42 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "78b7f315-b870-4184-87db-8078fd170237-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.033s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:26:48 np0005548731 nova_compute[232433]: 2025-12-06 07:26:48.219 232437 DEBUG nova.compute.manager [req-10fd66f5-cf74-4c9f-94a0-a7ce4f1ab2ef req-06709589-fc1a-4b34-9c32-6c6ef44aaf42 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 78b7f315-b870-4184-87db-8078fd170237] No waiting events found dispatching network-vif-unplugged-59fc2ff6-f1bb-4cda-a487-d011530f0618 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 02:26:48 np0005548731 nova_compute[232433]: 2025-12-06 07:26:48.219 232437 DEBUG nova.compute.manager [req-10fd66f5-cf74-4c9f-94a0-a7ce4f1ab2ef req-06709589-fc1a-4b34-9c32-6c6ef44aaf42 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 78b7f315-b870-4184-87db-8078fd170237] Received event network-vif-unplugged-59fc2ff6-f1bb-4cda-a487-d011530f0618 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Dec  6 02:26:48 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e268 e268: 3 total, 3 up, 3 in
Dec  6 02:26:49 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:26:49 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:26:49 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:26:49.201 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:26:49 np0005548731 nova_compute[232433]: 2025-12-06 07:26:49.227 232437 INFO nova.virt.libvirt.driver [None req-fbe54018-dff9-40ea-b7c4-90cbaacc2b99 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] [instance: 78b7f315-b870-4184-87db-8078fd170237] Deleting instance files /var/lib/nova/instances/78b7f315-b870-4184-87db-8078fd170237_del#033[00m
Dec  6 02:26:49 np0005548731 nova_compute[232433]: 2025-12-06 07:26:49.228 232437 INFO nova.virt.libvirt.driver [None req-fbe54018-dff9-40ea-b7c4-90cbaacc2b99 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] [instance: 78b7f315-b870-4184-87db-8078fd170237] Deletion of /var/lib/nova/instances/78b7f315-b870-4184-87db-8078fd170237_del complete#033[00m
Dec  6 02:26:49 np0005548731 nova_compute[232433]: 2025-12-06 07:26:49.343 232437 INFO nova.compute.manager [None req-fbe54018-dff9-40ea-b7c4-90cbaacc2b99 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] [instance: 78b7f315-b870-4184-87db-8078fd170237] Took 1.72 seconds to destroy the instance on the hypervisor.#033[00m
Dec  6 02:26:49 np0005548731 nova_compute[232433]: 2025-12-06 07:26:49.344 232437 DEBUG oslo.service.loopingcall [None req-fbe54018-dff9-40ea-b7c4-90cbaacc2b99 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Dec  6 02:26:49 np0005548731 nova_compute[232433]: 2025-12-06 07:26:49.344 232437 DEBUG nova.compute.manager [-] [instance: 78b7f315-b870-4184-87db-8078fd170237] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Dec  6 02:26:49 np0005548731 nova_compute[232433]: 2025-12-06 07:26:49.345 232437 DEBUG nova.network.neutron [-] [instance: 78b7f315-b870-4184-87db-8078fd170237] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Dec  6 02:26:49 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:26:49 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:26:49 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:26:49.888 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:26:50 np0005548731 nova_compute[232433]: 2025-12-06 07:26:50.322 232437 DEBUG nova.compute.manager [req-db20851a-150c-4e7e-a087-07e019cf7f07 req-4b6e6fd8-f1aa-4fad-b13e-a9d17dc8d4cf 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 78b7f315-b870-4184-87db-8078fd170237] Received event network-vif-plugged-59fc2ff6-f1bb-4cda-a487-d011530f0618 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:26:50 np0005548731 nova_compute[232433]: 2025-12-06 07:26:50.322 232437 DEBUG oslo_concurrency.lockutils [req-db20851a-150c-4e7e-a087-07e019cf7f07 req-4b6e6fd8-f1aa-4fad-b13e-a9d17dc8d4cf 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "78b7f315-b870-4184-87db-8078fd170237-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:26:50 np0005548731 nova_compute[232433]: 2025-12-06 07:26:50.323 232437 DEBUG oslo_concurrency.lockutils [req-db20851a-150c-4e7e-a087-07e019cf7f07 req-4b6e6fd8-f1aa-4fad-b13e-a9d17dc8d4cf 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "78b7f315-b870-4184-87db-8078fd170237-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:26:50 np0005548731 nova_compute[232433]: 2025-12-06 07:26:50.323 232437 DEBUG oslo_concurrency.lockutils [req-db20851a-150c-4e7e-a087-07e019cf7f07 req-4b6e6fd8-f1aa-4fad-b13e-a9d17dc8d4cf 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "78b7f315-b870-4184-87db-8078fd170237-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:26:50 np0005548731 nova_compute[232433]: 2025-12-06 07:26:50.323 232437 DEBUG nova.compute.manager [req-db20851a-150c-4e7e-a087-07e019cf7f07 req-4b6e6fd8-f1aa-4fad-b13e-a9d17dc8d4cf 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 78b7f315-b870-4184-87db-8078fd170237] No waiting events found dispatching network-vif-plugged-59fc2ff6-f1bb-4cda-a487-d011530f0618 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 02:26:50 np0005548731 nova_compute[232433]: 2025-12-06 07:26:50.324 232437 WARNING nova.compute.manager [req-db20851a-150c-4e7e-a087-07e019cf7f07 req-4b6e6fd8-f1aa-4fad-b13e-a9d17dc8d4cf 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 78b7f315-b870-4184-87db-8078fd170237] Received unexpected event network-vif-plugged-59fc2ff6-f1bb-4cda-a487-d011530f0618 for instance with vm_state active and task_state deleting.#033[00m
Dec  6 02:26:51 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:26:51 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:26:51 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:26:51.203 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:26:51 np0005548731 nova_compute[232433]: 2025-12-06 07:26:51.674 232437 DEBUG nova.network.neutron [-] [instance: 78b7f315-b870-4184-87db-8078fd170237] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 02:26:51 np0005548731 nova_compute[232433]: 2025-12-06 07:26:51.695 232437 INFO nova.compute.manager [-] [instance: 78b7f315-b870-4184-87db-8078fd170237] Took 2.35 seconds to deallocate network for instance.#033[00m
Dec  6 02:26:51 np0005548731 nova_compute[232433]: 2025-12-06 07:26:51.808 232437 DEBUG nova.compute.manager [req-fce06235-c7c4-4b6e-9629-422273bb7501 req-ffcda295-ffd7-40fc-9567-b39d446fd462 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 78b7f315-b870-4184-87db-8078fd170237] Received event network-vif-deleted-59fc2ff6-f1bb-4cda-a487-d011530f0618 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:26:51 np0005548731 nova_compute[232433]: 2025-12-06 07:26:51.840 232437 DEBUG oslo_concurrency.lockutils [None req-fbe54018-dff9-40ea-b7c4-90cbaacc2b99 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:26:51 np0005548731 nova_compute[232433]: 2025-12-06 07:26:51.841 232437 DEBUG oslo_concurrency.lockutils [None req-fbe54018-dff9-40ea-b7c4-90cbaacc2b99 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:26:51 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:26:51 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 02:26:51 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:26:51.890 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 02:26:51 np0005548731 nova_compute[232433]: 2025-12-06 07:26:51.913 232437 DEBUG oslo_concurrency.processutils [None req-fbe54018-dff9-40ea-b7c4-90cbaacc2b99 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:26:52 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 02:26:52 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3505948962' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 02:26:52 np0005548731 nova_compute[232433]: 2025-12-06 07:26:52.417 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:26:52 np0005548731 nova_compute[232433]: 2025-12-06 07:26:52.421 232437 DEBUG oslo_concurrency.processutils [None req-fbe54018-dff9-40ea-b7c4-90cbaacc2b99 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.508s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:26:52 np0005548731 nova_compute[232433]: 2025-12-06 07:26:52.427 232437 DEBUG nova.compute.provider_tree [None req-fbe54018-dff9-40ea-b7c4-90cbaacc2b99 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] Inventory has not changed in ProviderTree for provider: 6d00757a-082f-486d-ae84-869a2ba2e6e7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  6 02:26:52 np0005548731 nova_compute[232433]: 2025-12-06 07:26:52.480 232437 DEBUG nova.scheduler.client.report [None req-fbe54018-dff9-40ea-b7c4-90cbaacc2b99 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] Inventory has not changed for provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  6 02:26:52 np0005548731 nova_compute[232433]: 2025-12-06 07:26:52.544 232437 DEBUG oslo_concurrency.lockutils [None req-fbe54018-dff9-40ea-b7c4-90cbaacc2b99 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.703s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:26:52 np0005548731 nova_compute[232433]: 2025-12-06 07:26:52.670 232437 INFO nova.scheduler.client.report [None req-fbe54018-dff9-40ea-b7c4-90cbaacc2b99 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] Deleted allocations for instance 78b7f315-b870-4184-87db-8078fd170237#033[00m
Dec  6 02:26:52 np0005548731 nova_compute[232433]: 2025-12-06 07:26:52.793 232437 DEBUG oslo_concurrency.lockutils [None req-fbe54018-dff9-40ea-b7c4-90cbaacc2b99 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] Lock "78b7f315-b870-4184-87db-8078fd170237" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.170s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:26:52 np0005548731 nova_compute[232433]: 2025-12-06 07:26:52.894 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:26:52 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e268 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:26:53 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:26:53 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:26:53 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:26:53.205 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:26:53 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:26:53 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 02:26:53 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:26:53.893 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 02:26:54 np0005548731 ceph-mon[77458]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #82. Immutable memtables: 0.
Dec  6 02:26:54 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:26:54.063784) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec  6 02:26:54 np0005548731 ceph-mon[77458]: rocksdb: [db/flush_job.cc:856] [default] [JOB 49] Flushing memtable with next log file: 82
Dec  6 02:26:54 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765006014063938, "job": 49, "event": "flush_started", "num_memtables": 1, "num_entries": 2340, "num_deletes": 258, "total_data_size": 5596692, "memory_usage": 5678264, "flush_reason": "Manual Compaction"}
Dec  6 02:26:54 np0005548731 ceph-mon[77458]: rocksdb: [db/flush_job.cc:885] [default] [JOB 49] Level-0 flush table #83: started
Dec  6 02:26:54 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765006014089152, "cf_name": "default", "job": 49, "event": "table_file_creation", "file_number": 83, "file_size": 3668690, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 42573, "largest_seqno": 44907, "table_properties": {"data_size": 3659055, "index_size": 6065, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2501, "raw_key_size": 20455, "raw_average_key_size": 20, "raw_value_size": 3639577, "raw_average_value_size": 3665, "num_data_blocks": 264, "num_entries": 993, "num_filter_entries": 993, "num_deletions": 258, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765005802, "oldest_key_time": 1765005802, "file_creation_time": 1765006014, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bc01f9a1-fda8-4008-aee1-1fa5ff4371a1", "db_session_id": "CWBL8WMDM4D79BU86E9T", "orig_file_number": 83, "seqno_to_time_mapping": "N/A"}}
Dec  6 02:26:54 np0005548731 ceph-mon[77458]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 49] Flush lasted 25661 microseconds, and 11386 cpu microseconds.
Dec  6 02:26:54 np0005548731 ceph-mon[77458]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  6 02:26:54 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:26:54.089452) [db/flush_job.cc:967] [default] [JOB 49] Level-0 flush table #83: 3668690 bytes OK
Dec  6 02:26:54 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:26:54.089578) [db/memtable_list.cc:519] [default] Level-0 commit table #83 started
Dec  6 02:26:54 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:26:54.092028) [db/memtable_list.cc:722] [default] Level-0 commit table #83: memtable #1 done
Dec  6 02:26:54 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:26:54.092042) EVENT_LOG_v1 {"time_micros": 1765006014092037, "job": 49, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec  6 02:26:54 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:26:54.092057) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec  6 02:26:54 np0005548731 ceph-mon[77458]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 49] Try to delete WAL files size 5586306, prev total WAL file size 5586306, number of live WAL files 2.
Dec  6 02:26:54 np0005548731 ceph-mon[77458]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000079.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  6 02:26:54 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:26:54.093761) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0031323632' seq:72057594037927935, type:22 .. '6C6F676D0031353133' seq:0, type:0; will stop at (end)
Dec  6 02:26:54 np0005548731 ceph-mon[77458]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 50] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec  6 02:26:54 np0005548731 ceph-mon[77458]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 49 Base level 0, inputs: [83(3582KB)], [81(9076KB)]
Dec  6 02:26:54 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765006014093803, "job": 50, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [83], "files_L6": [81], "score": -1, "input_data_size": 12962784, "oldest_snapshot_seqno": -1}
Dec  6 02:26:54 np0005548731 ceph-mon[77458]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 50] Generated table #84: 7552 keys, 12813877 bytes, temperature: kUnknown
Dec  6 02:26:54 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765006014159481, "cf_name": "default", "job": 50, "event": "table_file_creation", "file_number": 84, "file_size": 12813877, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12762149, "index_size": 31776, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 18885, "raw_key_size": 195070, "raw_average_key_size": 25, "raw_value_size": 12625929, "raw_average_value_size": 1671, "num_data_blocks": 1263, "num_entries": 7552, "num_filter_entries": 7552, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765002564, "oldest_key_time": 0, "file_creation_time": 1765006014, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bc01f9a1-fda8-4008-aee1-1fa5ff4371a1", "db_session_id": "CWBL8WMDM4D79BU86E9T", "orig_file_number": 84, "seqno_to_time_mapping": "N/A"}}
Dec  6 02:26:54 np0005548731 ceph-mon[77458]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  6 02:26:54 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:26:54.159805) [db/compaction/compaction_job.cc:1663] [default] [JOB 50] Compacted 1@0 + 1@6 files to L6 => 12813877 bytes
Dec  6 02:26:54 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:26:54.161139) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 197.0 rd, 194.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.5, 8.9 +0.0 blob) out(12.2 +0.0 blob), read-write-amplify(7.0) write-amplify(3.5) OK, records in: 8085, records dropped: 533 output_compression: NoCompression
Dec  6 02:26:54 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:26:54.161154) EVENT_LOG_v1 {"time_micros": 1765006014161147, "job": 50, "event": "compaction_finished", "compaction_time_micros": 65793, "compaction_time_cpu_micros": 28638, "output_level": 6, "num_output_files": 1, "total_output_size": 12813877, "num_input_records": 8085, "num_output_records": 7552, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec  6 02:26:54 np0005548731 ceph-mon[77458]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000083.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  6 02:26:54 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765006014162088, "job": 50, "event": "table_file_deletion", "file_number": 83}
Dec  6 02:26:54 np0005548731 ceph-mon[77458]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000081.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  6 02:26:54 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765006014164022, "job": 50, "event": "table_file_deletion", "file_number": 81}
Dec  6 02:26:54 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:26:54.093653) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 02:26:54 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:26:54.164080) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 02:26:54 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:26:54.164084) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 02:26:54 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:26:54.164086) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 02:26:54 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:26:54.164087) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 02:26:54 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:26:54.164089) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 02:26:54 np0005548731 nova_compute[232433]: 2025-12-06 07:26:54.358 232437 DEBUG oslo_concurrency.lockutils [None req-6084b658-955d-451f-a992-44b8565a90e5 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] Acquiring lock "5ea73f51-d224-41ed-892b-a137d9985e73" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:26:54 np0005548731 nova_compute[232433]: 2025-12-06 07:26:54.359 232437 DEBUG oslo_concurrency.lockutils [None req-6084b658-955d-451f-a992-44b8565a90e5 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] Lock "5ea73f51-d224-41ed-892b-a137d9985e73" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:26:54 np0005548731 nova_compute[232433]: 2025-12-06 07:26:54.359 232437 DEBUG oslo_concurrency.lockutils [None req-6084b658-955d-451f-a992-44b8565a90e5 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] Acquiring lock "5ea73f51-d224-41ed-892b-a137d9985e73-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:26:54 np0005548731 nova_compute[232433]: 2025-12-06 07:26:54.359 232437 DEBUG oslo_concurrency.lockutils [None req-6084b658-955d-451f-a992-44b8565a90e5 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] Lock "5ea73f51-d224-41ed-892b-a137d9985e73-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:26:54 np0005548731 nova_compute[232433]: 2025-12-06 07:26:54.360 232437 DEBUG oslo_concurrency.lockutils [None req-6084b658-955d-451f-a992-44b8565a90e5 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] Lock "5ea73f51-d224-41ed-892b-a137d9985e73-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:26:54 np0005548731 nova_compute[232433]: 2025-12-06 07:26:54.361 232437 INFO nova.compute.manager [None req-6084b658-955d-451f-a992-44b8565a90e5 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] [instance: 5ea73f51-d224-41ed-892b-a137d9985e73] Terminating instance#033[00m
Dec  6 02:26:54 np0005548731 nova_compute[232433]: 2025-12-06 07:26:54.363 232437 DEBUG nova.compute.manager [None req-6084b658-955d-451f-a992-44b8565a90e5 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] [instance: 5ea73f51-d224-41ed-892b-a137d9985e73] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec  6 02:26:55 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:26:55 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:26:55 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:26:55.209 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:26:55 np0005548731 kernel: tape61e57ee-1e (unregistering): left promiscuous mode
Dec  6 02:26:55 np0005548731 NetworkManager[49182]: <info>  [1765006015.3401] device (tape61e57ee-1e): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec  6 02:26:55 np0005548731 ovn_controller[133927]: 2025-12-06T07:26:55Z|00435|binding|INFO|Releasing lport e61e57ee-1e76-4d23-ab33-ab23cf69ba70 from this chassis (sb_readonly=0)
Dec  6 02:26:55 np0005548731 ovn_controller[133927]: 2025-12-06T07:26:55Z|00436|binding|INFO|Setting lport e61e57ee-1e76-4d23-ab33-ab23cf69ba70 down in Southbound
Dec  6 02:26:55 np0005548731 nova_compute[232433]: 2025-12-06 07:26:55.346 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:26:55 np0005548731 ovn_controller[133927]: 2025-12-06T07:26:55Z|00437|binding|INFO|Removing iface tape61e57ee-1e ovn-installed in OVS
Dec  6 02:26:55 np0005548731 nova_compute[232433]: 2025-12-06 07:26:55.348 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:26:55 np0005548731 nova_compute[232433]: 2025-12-06 07:26:55.362 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:26:55 np0005548731 systemd[1]: machine-qemu\x2d43\x2dinstance\x2d00000067.scope: Deactivated successfully.
Dec  6 02:26:55 np0005548731 systemd[1]: machine-qemu\x2d43\x2dinstance\x2d00000067.scope: Consumed 17.592s CPU time.
Dec  6 02:26:55 np0005548731 systemd-machined[195355]: Machine qemu-43-instance-00000067 terminated.
Dec  6 02:26:55 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:26:55.490 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d9:ba:09 10.100.0.10'], port_security=['fa:16:3e:d9:ba:09 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '5ea73f51-d224-41ed-892b-a137d9985e73', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3935518d-ad39-4c36-90ae-f8fe979eafc5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c5036a5cbf7d44fd809d00942afd76e6', 'neutron:revision_number': '6', 'neutron:security_group_ids': 'acd03568-07d0-4090-ab6d-0f9189f7ac4a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=cdc71006-3db8-48ee-b5c0-325cb1eaa593, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], logical_port=e61e57ee-1e76-4d23-ab33-ab23cf69ba70) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 02:26:55 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:26:55.492 143965 INFO neutron.agent.ovn.metadata.agent [-] Port e61e57ee-1e76-4d23-ab33-ab23cf69ba70 in datapath 3935518d-ad39-4c36-90ae-f8fe979eafc5 unbound from our chassis#033[00m
Dec  6 02:26:55 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:26:55.493 143965 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 3935518d-ad39-4c36-90ae-f8fe979eafc5 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m
Dec  6 02:26:55 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:26:55.494 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[97689490-66a2-4dc7-b6be-eb06f1a786d6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:26:55 np0005548731 nova_compute[232433]: 2025-12-06 07:26:55.581 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:26:55 np0005548731 nova_compute[232433]: 2025-12-06 07:26:55.588 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:26:55 np0005548731 nova_compute[232433]: 2025-12-06 07:26:55.600 232437 INFO nova.virt.libvirt.driver [-] [instance: 5ea73f51-d224-41ed-892b-a137d9985e73] Instance destroyed successfully.#033[00m
Dec  6 02:26:55 np0005548731 nova_compute[232433]: 2025-12-06 07:26:55.601 232437 DEBUG nova.objects.instance [None req-6084b658-955d-451f-a992-44b8565a90e5 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] Lazy-loading 'resources' on Instance uuid 5ea73f51-d224-41ed-892b-a137d9985e73 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:26:55 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:26:55 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:26:55 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:26:55.896 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:26:56 np0005548731 nova_compute[232433]: 2025-12-06 07:26:56.174 232437 DEBUG nova.virt.libvirt.vif [None req-6084b658-955d-451f-a992-44b8565a90e5 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T07:24:27Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerRescueTestJSON-server-1971190968',display_name='tempest-ServerRescueTestJSON-server-1971190968',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serverrescuetestjson-server-1971190968',id=103,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-06T07:25:16Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='c5036a5cbf7d44fd809d00942afd76e6',ramdisk_id='',reservation_id='r-uvl7bc16',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_mi
n_disk='1',image_min_ram='0',owner_project_name='tempest-ServerRescueTestJSON-1682980670',owner_user_name='tempest-ServerRescueTestJSON-1682980670-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T07:25:16Z,user_data=None,user_id='bf140baee82f408e8fafa81062146641',uuid=5ea73f51-d224-41ed-892b-a137d9985e73,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='rescued') vif={"id": "e61e57ee-1e76-4d23-ab33-ab23cf69ba70", "address": "fa:16:3e:d9:ba:09", "network": {"id": "3935518d-ad39-4c36-90ae-f8fe979eafc5", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-32965635-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "c5036a5cbf7d44fd809d00942afd76e6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape61e57ee-1e", "ovs_interfaceid": "e61e57ee-1e76-4d23-ab33-ab23cf69ba70", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec  6 02:26:56 np0005548731 nova_compute[232433]: 2025-12-06 07:26:56.175 232437 DEBUG nova.network.os_vif_util [None req-6084b658-955d-451f-a992-44b8565a90e5 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] Converting VIF {"id": "e61e57ee-1e76-4d23-ab33-ab23cf69ba70", "address": "fa:16:3e:d9:ba:09", "network": {"id": "3935518d-ad39-4c36-90ae-f8fe979eafc5", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-32965635-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "c5036a5cbf7d44fd809d00942afd76e6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape61e57ee-1e", "ovs_interfaceid": "e61e57ee-1e76-4d23-ab33-ab23cf69ba70", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 02:26:56 np0005548731 nova_compute[232433]: 2025-12-06 07:26:56.176 232437 DEBUG nova.network.os_vif_util [None req-6084b658-955d-451f-a992-44b8565a90e5 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:d9:ba:09,bridge_name='br-int',has_traffic_filtering=True,id=e61e57ee-1e76-4d23-ab33-ab23cf69ba70,network=Network(3935518d-ad39-4c36-90ae-f8fe979eafc5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape61e57ee-1e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 02:26:56 np0005548731 nova_compute[232433]: 2025-12-06 07:26:56.176 232437 DEBUG os_vif [None req-6084b658-955d-451f-a992-44b8565a90e5 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:d9:ba:09,bridge_name='br-int',has_traffic_filtering=True,id=e61e57ee-1e76-4d23-ab33-ab23cf69ba70,network=Network(3935518d-ad39-4c36-90ae-f8fe979eafc5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape61e57ee-1e') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec  6 02:26:56 np0005548731 nova_compute[232433]: 2025-12-06 07:26:56.178 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:26:56 np0005548731 nova_compute[232433]: 2025-12-06 07:26:56.178 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape61e57ee-1e, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:26:56 np0005548731 nova_compute[232433]: 2025-12-06 07:26:56.179 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:26:56 np0005548731 nova_compute[232433]: 2025-12-06 07:26:56.181 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:26:56 np0005548731 nova_compute[232433]: 2025-12-06 07:26:56.183 232437 INFO os_vif [None req-6084b658-955d-451f-a992-44b8565a90e5 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:d9:ba:09,bridge_name='br-int',has_traffic_filtering=True,id=e61e57ee-1e76-4d23-ab33-ab23cf69ba70,network=Network(3935518d-ad39-4c36-90ae-f8fe979eafc5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape61e57ee-1e')#033[00m
Dec  6 02:26:57 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:26:57 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:26:57 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:26:57.212 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:26:57 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e269 e269: 3 total, 3 up, 3 in
Dec  6 02:26:57 np0005548731 nova_compute[232433]: 2025-12-06 07:26:57.455 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:26:57 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:26:57 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:26:57 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:26:57.900 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:26:57 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e269 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:26:58 np0005548731 nova_compute[232433]: 2025-12-06 07:26:58.148 232437 DEBUG nova.compute.manager [req-200a86ae-80a1-4592-accb-62ae24cd4456 req-b7bf9b7e-4ebb-4de0-8dcf-c3136a2ae66c 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 5ea73f51-d224-41ed-892b-a137d9985e73] Received event network-vif-unplugged-e61e57ee-1e76-4d23-ab33-ab23cf69ba70 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:26:58 np0005548731 nova_compute[232433]: 2025-12-06 07:26:58.149 232437 DEBUG oslo_concurrency.lockutils [req-200a86ae-80a1-4592-accb-62ae24cd4456 req-b7bf9b7e-4ebb-4de0-8dcf-c3136a2ae66c 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "5ea73f51-d224-41ed-892b-a137d9985e73-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:26:58 np0005548731 nova_compute[232433]: 2025-12-06 07:26:58.149 232437 DEBUG oslo_concurrency.lockutils [req-200a86ae-80a1-4592-accb-62ae24cd4456 req-b7bf9b7e-4ebb-4de0-8dcf-c3136a2ae66c 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "5ea73f51-d224-41ed-892b-a137d9985e73-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:26:58 np0005548731 nova_compute[232433]: 2025-12-06 07:26:58.149 232437 DEBUG oslo_concurrency.lockutils [req-200a86ae-80a1-4592-accb-62ae24cd4456 req-b7bf9b7e-4ebb-4de0-8dcf-c3136a2ae66c 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "5ea73f51-d224-41ed-892b-a137d9985e73-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:26:58 np0005548731 nova_compute[232433]: 2025-12-06 07:26:58.150 232437 DEBUG nova.compute.manager [req-200a86ae-80a1-4592-accb-62ae24cd4456 req-b7bf9b7e-4ebb-4de0-8dcf-c3136a2ae66c 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 5ea73f51-d224-41ed-892b-a137d9985e73] No waiting events found dispatching network-vif-unplugged-e61e57ee-1e76-4d23-ab33-ab23cf69ba70 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 02:26:58 np0005548731 nova_compute[232433]: 2025-12-06 07:26:58.150 232437 DEBUG nova.compute.manager [req-200a86ae-80a1-4592-accb-62ae24cd4456 req-b7bf9b7e-4ebb-4de0-8dcf-c3136a2ae66c 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 5ea73f51-d224-41ed-892b-a137d9985e73] Received event network-vif-unplugged-e61e57ee-1e76-4d23-ab33-ab23cf69ba70 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Dec  6 02:26:58 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 02:26:58 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 02:26:58 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  6 02:26:58 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 02:26:58 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  6 02:26:59 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:26:59 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:26:59 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:26:59.214 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:26:59 np0005548731 nova_compute[232433]: 2025-12-06 07:26:59.628 232437 INFO nova.virt.libvirt.driver [None req-6084b658-955d-451f-a992-44b8565a90e5 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] [instance: 5ea73f51-d224-41ed-892b-a137d9985e73] Deleting instance files /var/lib/nova/instances/5ea73f51-d224-41ed-892b-a137d9985e73_del#033[00m
Dec  6 02:26:59 np0005548731 nova_compute[232433]: 2025-12-06 07:26:59.628 232437 INFO nova.virt.libvirt.driver [None req-6084b658-955d-451f-a992-44b8565a90e5 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] [instance: 5ea73f51-d224-41ed-892b-a137d9985e73] Deletion of /var/lib/nova/instances/5ea73f51-d224-41ed-892b-a137d9985e73_del complete#033[00m
Dec  6 02:26:59 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:26:59 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:26:59 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:26:59.902 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:27:00 np0005548731 nova_compute[232433]: 2025-12-06 07:27:00.352 232437 INFO nova.compute.manager [None req-6084b658-955d-451f-a992-44b8565a90e5 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] [instance: 5ea73f51-d224-41ed-892b-a137d9985e73] Took 5.99 seconds to destroy the instance on the hypervisor.#033[00m
Dec  6 02:27:00 np0005548731 nova_compute[232433]: 2025-12-06 07:27:00.352 232437 DEBUG oslo.service.loopingcall [None req-6084b658-955d-451f-a992-44b8565a90e5 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Dec  6 02:27:00 np0005548731 nova_compute[232433]: 2025-12-06 07:27:00.352 232437 DEBUG nova.compute.manager [-] [instance: 5ea73f51-d224-41ed-892b-a137d9985e73] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Dec  6 02:27:00 np0005548731 nova_compute[232433]: 2025-12-06 07:27:00.353 232437 DEBUG nova.network.neutron [-] [instance: 5ea73f51-d224-41ed-892b-a137d9985e73] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Dec  6 02:27:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:27:00.868 143965 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:27:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:27:00.868 143965 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:27:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:27:00.868 143965 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:27:01 np0005548731 nova_compute[232433]: 2025-12-06 07:27:01.180 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:27:01 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:27:01 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:27:01 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:27:01.216 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:27:01 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:27:01 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:27:01 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:27:01.905 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:27:02 np0005548731 nova_compute[232433]: 2025-12-06 07:27:02.100 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:27:02 np0005548731 nova_compute[232433]: 2025-12-06 07:27:02.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:27:02 np0005548731 nova_compute[232433]: 2025-12-06 07:27:02.456 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:27:02 np0005548731 nova_compute[232433]: 2025-12-06 07:27:02.866 232437 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765006007.86516, 78b7f315-b870-4184-87db-8078fd170237 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 02:27:02 np0005548731 nova_compute[232433]: 2025-12-06 07:27:02.867 232437 INFO nova.compute.manager [-] [instance: 78b7f315-b870-4184-87db-8078fd170237] VM Stopped (Lifecycle Event)#033[00m
Dec  6 02:27:02 np0005548731 ceph-mon[77458]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #85. Immutable memtables: 0.
Dec  6 02:27:02 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:27:02.930347) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec  6 02:27:02 np0005548731 ceph-mon[77458]: rocksdb: [db/flush_job.cc:856] [default] [JOB 51] Flushing memtable with next log file: 85
Dec  6 02:27:02 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765006022930428, "job": 51, "event": "flush_started", "num_memtables": 1, "num_entries": 396, "num_deletes": 252, "total_data_size": 375262, "memory_usage": 383048, "flush_reason": "Manual Compaction"}
Dec  6 02:27:02 np0005548731 ceph-mon[77458]: rocksdb: [db/flush_job.cc:885] [default] [JOB 51] Level-0 flush table #86: started
Dec  6 02:27:02 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765006022934841, "cf_name": "default", "job": 51, "event": "table_file_creation", "file_number": 86, "file_size": 247460, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 44912, "largest_seqno": 45303, "table_properties": {"data_size": 245065, "index_size": 495, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 837, "raw_key_size": 6156, "raw_average_key_size": 19, "raw_value_size": 240144, "raw_average_value_size": 755, "num_data_blocks": 21, "num_entries": 318, "num_filter_entries": 318, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765006014, "oldest_key_time": 1765006014, "file_creation_time": 1765006022, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bc01f9a1-fda8-4008-aee1-1fa5ff4371a1", "db_session_id": "CWBL8WMDM4D79BU86E9T", "orig_file_number": 86, "seqno_to_time_mapping": "N/A"}}
Dec  6 02:27:02 np0005548731 ceph-mon[77458]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 51] Flush lasted 4545 microseconds, and 1590 cpu microseconds.
Dec  6 02:27:02 np0005548731 ceph-mon[77458]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  6 02:27:02 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e269 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:27:02 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:27:02.934899) [db/flush_job.cc:967] [default] [JOB 51] Level-0 flush table #86: 247460 bytes OK
Dec  6 02:27:02 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:27:02.934924) [db/memtable_list.cc:519] [default] Level-0 commit table #86 started
Dec  6 02:27:02 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:27:02.936508) [db/memtable_list.cc:722] [default] Level-0 commit table #86: memtable #1 done
Dec  6 02:27:02 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:27:02.936529) EVENT_LOG_v1 {"time_micros": 1765006022936522, "job": 51, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec  6 02:27:02 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:27:02.936565) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec  6 02:27:02 np0005548731 ceph-mon[77458]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 51] Try to delete WAL files size 372658, prev total WAL file size 372658, number of live WAL files 2.
Dec  6 02:27:02 np0005548731 ceph-mon[77458]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000082.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  6 02:27:02 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:27:02.937235) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730033353134' seq:72057594037927935, type:22 .. '7061786F730033373636' seq:0, type:0; will stop at (end)
Dec  6 02:27:02 np0005548731 ceph-mon[77458]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 52] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec  6 02:27:02 np0005548731 ceph-mon[77458]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 51 Base level 0, inputs: [86(241KB)], [84(12MB)]
Dec  6 02:27:02 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765006022937274, "job": 52, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [86], "files_L6": [84], "score": -1, "input_data_size": 13061337, "oldest_snapshot_seqno": -1}
Dec  6 02:27:03 np0005548731 ceph-mon[77458]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 52] Generated table #87: 7350 keys, 11110163 bytes, temperature: kUnknown
Dec  6 02:27:03 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765006023001976, "cf_name": "default", "job": 52, "event": "table_file_creation", "file_number": 87, "file_size": 11110163, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11061090, "index_size": 29569, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 18437, "raw_key_size": 191586, "raw_average_key_size": 26, "raw_value_size": 10929649, "raw_average_value_size": 1487, "num_data_blocks": 1164, "num_entries": 7350, "num_filter_entries": 7350, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765002564, "oldest_key_time": 0, "file_creation_time": 1765006022, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bc01f9a1-fda8-4008-aee1-1fa5ff4371a1", "db_session_id": "CWBL8WMDM4D79BU86E9T", "orig_file_number": 87, "seqno_to_time_mapping": "N/A"}}
Dec  6 02:27:03 np0005548731 ceph-mon[77458]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  6 02:27:03 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:27:03.002219) [db/compaction/compaction_job.cc:1663] [default] [JOB 52] Compacted 1@0 + 1@6 files to L6 => 11110163 bytes
Dec  6 02:27:03 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:27:03.003705) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 201.6 rd, 171.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.2, 12.2 +0.0 blob) out(10.6 +0.0 blob), read-write-amplify(97.7) write-amplify(44.9) OK, records in: 7870, records dropped: 520 output_compression: NoCompression
Dec  6 02:27:03 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:27:03.003723) EVENT_LOG_v1 {"time_micros": 1765006023003715, "job": 52, "event": "compaction_finished", "compaction_time_micros": 64783, "compaction_time_cpu_micros": 29283, "output_level": 6, "num_output_files": 1, "total_output_size": 11110163, "num_input_records": 7870, "num_output_records": 7350, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec  6 02:27:03 np0005548731 ceph-mon[77458]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000086.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  6 02:27:03 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765006023003897, "job": 52, "event": "table_file_deletion", "file_number": 86}
Dec  6 02:27:03 np0005548731 ceph-mon[77458]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000084.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  6 02:27:03 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765006023006907, "job": 52, "event": "table_file_deletion", "file_number": 84}
Dec  6 02:27:03 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:27:02.937138) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 02:27:03 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:27:03.006996) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 02:27:03 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:27:03.007003) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 02:27:03 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:27:03.007006) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 02:27:03 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:27:03.007009) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 02:27:03 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:27:03.007012) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 02:27:03 np0005548731 nova_compute[232433]: 2025-12-06 07:27:03.055 232437 DEBUG nova.compute.manager [None req-78c76c88-70f3-4be1-be81-a61f6c07280a - - - - - -] [instance: 78b7f315-b870-4184-87db-8078fd170237] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:27:03 np0005548731 nova_compute[232433]: 2025-12-06 07:27:03.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:27:03 np0005548731 nova_compute[232433]: 2025-12-06 07:27:03.104 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  6 02:27:03 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:27:03 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:27:03 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:27:03.219 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:27:03 np0005548731 nova_compute[232433]: 2025-12-06 07:27:03.666 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] [instance: ea8c0005-4b7a-4697-89ae-91f4bef22e36] Skipping network cache update for instance because it has been migrated to another host. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9902#033[00m
Dec  6 02:27:03 np0005548731 nova_compute[232433]: 2025-12-06 07:27:03.666 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Dec  6 02:27:03 np0005548731 nova_compute[232433]: 2025-12-06 07:27:03.781 232437 DEBUG nova.compute.manager [req-f1a5338d-ef81-4491-8b71-d79fba8c6f0d req-c506b16f-0e0b-4886-8353-d4fe7a9c406c 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 5ea73f51-d224-41ed-892b-a137d9985e73] Received event network-vif-plugged-e61e57ee-1e76-4d23-ab33-ab23cf69ba70 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:27:03 np0005548731 nova_compute[232433]: 2025-12-06 07:27:03.781 232437 DEBUG oslo_concurrency.lockutils [req-f1a5338d-ef81-4491-8b71-d79fba8c6f0d req-c506b16f-0e0b-4886-8353-d4fe7a9c406c 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "5ea73f51-d224-41ed-892b-a137d9985e73-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:27:03 np0005548731 nova_compute[232433]: 2025-12-06 07:27:03.782 232437 DEBUG oslo_concurrency.lockutils [req-f1a5338d-ef81-4491-8b71-d79fba8c6f0d req-c506b16f-0e0b-4886-8353-d4fe7a9c406c 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "5ea73f51-d224-41ed-892b-a137d9985e73-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:27:03 np0005548731 nova_compute[232433]: 2025-12-06 07:27:03.782 232437 DEBUG oslo_concurrency.lockutils [req-f1a5338d-ef81-4491-8b71-d79fba8c6f0d req-c506b16f-0e0b-4886-8353-d4fe7a9c406c 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "5ea73f51-d224-41ed-892b-a137d9985e73-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:27:03 np0005548731 nova_compute[232433]: 2025-12-06 07:27:03.782 232437 DEBUG nova.compute.manager [req-f1a5338d-ef81-4491-8b71-d79fba8c6f0d req-c506b16f-0e0b-4886-8353-d4fe7a9c406c 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 5ea73f51-d224-41ed-892b-a137d9985e73] No waiting events found dispatching network-vif-plugged-e61e57ee-1e76-4d23-ab33-ab23cf69ba70 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 02:27:03 np0005548731 nova_compute[232433]: 2025-12-06 07:27:03.782 232437 WARNING nova.compute.manager [req-f1a5338d-ef81-4491-8b71-d79fba8c6f0d req-c506b16f-0e0b-4886-8353-d4fe7a9c406c 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 5ea73f51-d224-41ed-892b-a137d9985e73] Received unexpected event network-vif-plugged-e61e57ee-1e76-4d23-ab33-ab23cf69ba70 for instance with vm_state rescued and task_state deleting.#033[00m
Dec  6 02:27:03 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:27:03 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:27:03 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:27:03.908 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:27:04 np0005548731 nova_compute[232433]: 2025-12-06 07:27:04.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:27:04 np0005548731 nova_compute[232433]: 2025-12-06 07:27:04.104 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  6 02:27:05 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:27:05 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:27:05 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:27:05.221 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:27:05 np0005548731 nova_compute[232433]: 2025-12-06 07:27:05.522 232437 DEBUG nova.network.neutron [-] [instance: 5ea73f51-d224-41ed-892b-a137d9985e73] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 02:27:05 np0005548731 nova_compute[232433]: 2025-12-06 07:27:05.761 232437 INFO nova.compute.manager [-] [instance: 5ea73f51-d224-41ed-892b-a137d9985e73] Took 5.41 seconds to deallocate network for instance.#033[00m
Dec  6 02:27:05 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:27:05 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:27:05 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:27:05.912 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:27:06 np0005548731 nova_compute[232433]: 2025-12-06 07:27:06.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:27:06 np0005548731 nova_compute[232433]: 2025-12-06 07:27:06.184 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:27:06 np0005548731 nova_compute[232433]: 2025-12-06 07:27:06.272 232437 DEBUG nova.compute.manager [req-0d326729-2a89-4fbb-b711-3cfb9c0c73f3 req-bf92bfa8-0f63-44d3-abc1-413f9c876249 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 5ea73f51-d224-41ed-892b-a137d9985e73] Received event network-vif-deleted-e61e57ee-1e76-4d23-ab33-ab23cf69ba70 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:27:06 np0005548731 nova_compute[232433]: 2025-12-06 07:27:06.424 232437 DEBUG oslo_concurrency.lockutils [None req-6084b658-955d-451f-a992-44b8565a90e5 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:27:06 np0005548731 nova_compute[232433]: 2025-12-06 07:27:06.424 232437 DEBUG oslo_concurrency.lockutils [None req-6084b658-955d-451f-a992-44b8565a90e5 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:27:06 np0005548731 nova_compute[232433]: 2025-12-06 07:27:06.534 232437 DEBUG oslo_concurrency.processutils [None req-6084b658-955d-451f-a992-44b8565a90e5 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:27:06 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 02:27:06 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1443338164' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 02:27:06 np0005548731 nova_compute[232433]: 2025-12-06 07:27:06.982 232437 DEBUG oslo_concurrency.processutils [None req-6084b658-955d-451f-a992-44b8565a90e5 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:27:06 np0005548731 nova_compute[232433]: 2025-12-06 07:27:06.989 232437 DEBUG nova.compute.provider_tree [None req-6084b658-955d-451f-a992-44b8565a90e5 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] Inventory has not changed in ProviderTree for provider: 6d00757a-082f-486d-ae84-869a2ba2e6e7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  6 02:27:07 np0005548731 nova_compute[232433]: 2025-12-06 07:27:07.012 232437 DEBUG nova.scheduler.client.report [None req-6084b658-955d-451f-a992-44b8565a90e5 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] Inventory has not changed for provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  6 02:27:07 np0005548731 nova_compute[232433]: 2025-12-06 07:27:07.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:27:07 np0005548731 nova_compute[232433]: 2025-12-06 07:27:07.135 232437 DEBUG oslo_concurrency.lockutils [None req-6084b658-955d-451f-a992-44b8565a90e5 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.711s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:27:07 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:27:07 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:27:07 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:27:07.224 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:27:07 np0005548731 nova_compute[232433]: 2025-12-06 07:27:07.458 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:27:07 np0005548731 nova_compute[232433]: 2025-12-06 07:27:07.613 232437 INFO nova.scheduler.client.report [None req-6084b658-955d-451f-a992-44b8565a90e5 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] Deleted allocations for instance 5ea73f51-d224-41ed-892b-a137d9985e73#033[00m
Dec  6 02:27:07 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:27:07 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:27:07 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:27:07.914 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:27:07 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e269 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:27:08 np0005548731 nova_compute[232433]: 2025-12-06 07:27:08.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:27:08 np0005548731 nova_compute[232433]: 2025-12-06 07:27:08.433 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:27:08 np0005548731 nova_compute[232433]: 2025-12-06 07:27:08.434 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:27:08 np0005548731 nova_compute[232433]: 2025-12-06 07:27:08.434 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:27:08 np0005548731 nova_compute[232433]: 2025-12-06 07:27:08.434 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  6 02:27:08 np0005548731 nova_compute[232433]: 2025-12-06 07:27:08.434 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:27:08 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 02:27:08 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/874349504' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 02:27:08 np0005548731 nova_compute[232433]: 2025-12-06 07:27:08.866 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.432s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:27:09 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e270 e270: 3 total, 3 up, 3 in
Dec  6 02:27:09 np0005548731 nova_compute[232433]: 2025-12-06 07:27:09.026 232437 WARNING nova.virt.libvirt.driver [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  6 02:27:09 np0005548731 nova_compute[232433]: 2025-12-06 07:27:09.027 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4494MB free_disk=20.897064208984375GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, 
"label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  6 02:27:09 np0005548731 nova_compute[232433]: 2025-12-06 07:27:09.027 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:27:09 np0005548731 nova_compute[232433]: 2025-12-06 07:27:09.027 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:27:09 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:27:09 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:27:09 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:27:09.226 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:27:09 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:27:09 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:27:09 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:27:09.916 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:27:10 np0005548731 podman[276704]: 2025-12-06 07:27:10.311938172 +0000 UTC m=+0.070151224 container health_status 26c2ce9bdb83c6db5ccad7f5b2a7cb4e9354ab0235ad2f942ea42c8feda2142b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, 
org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec  6 02:27:10 np0005548731 podman[276706]: 2025-12-06 07:27:10.318440912 +0000 UTC m=+0.074285756 container health_status ae4fd08cdec31d6ae2a6b91ffff7deb6ca5a8375cb52ee4b685cc6f25796824e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Dec  6 02:27:10 np0005548731 podman[276705]: 2025-12-06 07:27:10.342418457 +0000 UTC m=+0.100937186 container health_status a118c3becf56cfbcae208065771ebf10a65162bb7b96389971293e534823fb78 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible)
Dec  6 02:27:10 np0005548731 nova_compute[232433]: 2025-12-06 07:27:10.598 232437 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765006015.5973742, 5ea73f51-d224-41ed-892b-a137d9985e73 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 02:27:10 np0005548731 nova_compute[232433]: 2025-12-06 07:27:10.598 232437 INFO nova.compute.manager [-] [instance: 5ea73f51-d224-41ed-892b-a137d9985e73] VM Stopped (Lifecycle Event)#033[00m
Dec  6 02:27:11 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 02:27:11 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 02:27:11 np0005548731 nova_compute[232433]: 2025-12-06 07:27:11.186 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:27:11 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:27:11 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:27:11 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:27:11.229 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:27:11 np0005548731 nova_compute[232433]: 2025-12-06 07:27:11.667 232437 DEBUG oslo_concurrency.lockutils [None req-6084b658-955d-451f-a992-44b8565a90e5 bf140baee82f408e8fafa81062146641 c5036a5cbf7d44fd809d00942afd76e6 - - default default] Lock "5ea73f51-d224-41ed-892b-a137d9985e73" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 17.309s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:27:11 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:27:11 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:27:11 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:27:11.919 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:27:12 np0005548731 nova_compute[232433]: 2025-12-06 07:27:12.073 232437 DEBUG nova.compute.manager [None req-01550d34-79ef-4e5c-8f98-802b6fdf56e9 - - - - - -] [instance: 5ea73f51-d224-41ed-892b-a137d9985e73] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:27:12 np0005548731 nova_compute[232433]: 2025-12-06 07:27:12.251 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  6 02:27:12 np0005548731 nova_compute[232433]: 2025-12-06 07:27:12.252 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  6 02:27:12 np0005548731 nova_compute[232433]: 2025-12-06 07:27:12.356 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:27:12 np0005548731 nova_compute[232433]: 2025-12-06 07:27:12.459 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:27:12 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 02:27:12 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1591755363' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 02:27:12 np0005548731 nova_compute[232433]: 2025-12-06 07:27:12.778 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.423s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:27:12 np0005548731 nova_compute[232433]: 2025-12-06 07:27:12.785 232437 DEBUG nova.compute.provider_tree [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Inventory has not changed in ProviderTree for provider: 6d00757a-082f-486d-ae84-869a2ba2e6e7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  6 02:27:12 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e270 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:27:13 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:27:13 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:27:13 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:27:13.231 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:27:13 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:27:13 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:27:13 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:27:13.922 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:27:14 np0005548731 nova_compute[232433]: 2025-12-06 07:27:14.190 232437 DEBUG nova.scheduler.client.report [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Inventory has not changed for provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  6 02:27:15 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:27:15 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:27:15 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:27:15.234 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:27:15 np0005548731 nova_compute[232433]: 2025-12-06 07:27:15.252 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  6 02:27:15 np0005548731 nova_compute[232433]: 2025-12-06 07:27:15.253 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 6.225s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:27:15 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:27:15 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:27:15 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:27:15.925 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:27:16 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:27:16.066 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=41, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:ec:b3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '32:72:e7:89:e0:7d'}, ipsec=False) old=SB_Global(nb_cfg=40) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 02:27:16 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:27:16.067 143965 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec  6 02:27:16 np0005548731 nova_compute[232433]: 2025-12-06 07:27:16.067 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:27:16 np0005548731 nova_compute[232433]: 2025-12-06 07:27:16.187 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:27:16 np0005548731 nova_compute[232433]: 2025-12-06 07:27:16.254 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:27:16 np0005548731 nova_compute[232433]: 2025-12-06 07:27:16.965 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:27:16 np0005548731 nova_compute[232433]: 2025-12-06 07:27:16.966 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:27:17 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:27:17 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:27:17 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:27:17.237 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:27:17 np0005548731 nova_compute[232433]: 2025-12-06 07:27:17.461 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:27:17 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:27:17 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:27:17 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:27:17.928 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:27:17 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e270 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:27:19 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:27:19.069 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=9f96b960-b4f2-40bd-ae99-08121f5e8b78, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '41'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:27:19 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:27:19 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:27:19 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:27:19.240 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:27:19 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:27:19 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:27:19 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:27:19.931 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:27:21 np0005548731 nova_compute[232433]: 2025-12-06 07:27:21.190 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:27:21 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:27:21 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:27:21 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:27:21.242 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:27:21 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:27:21 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:27:21 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:27:21.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:27:22 np0005548731 nova_compute[232433]: 2025-12-06 07:27:22.462 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:27:22 np0005548731 nova_compute[232433]: 2025-12-06 07:27:22.469 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:27:22 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e270 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:27:23 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:27:23 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:27:23 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:27:23.243 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:27:23 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:27:23 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:27:23 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:27:23.937 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:27:25 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:27:25 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:27:25 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:27:25.245 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:27:25 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:27:25 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:27:25 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:27:25.940 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:27:26 np0005548731 nova_compute[232433]: 2025-12-06 07:27:26.192 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:27:27 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:27:27 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:27:27 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:27:27.248 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:27:27 np0005548731 nova_compute[232433]: 2025-12-06 07:27:27.465 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:27:27 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e270 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:27:27 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:27:27 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:27:27 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:27:27.944 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:27:29 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:27:29 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:27:29 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:27:29.250 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:27:29 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:27:29 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:27:29 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:27:29.948 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:27:31 np0005548731 nova_compute[232433]: 2025-12-06 07:27:31.232 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:27:31 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:27:31 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:27:31 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:27:31.251 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:27:31 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:27:31 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:27:31 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:27:31.950 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:27:32 np0005548731 nova_compute[232433]: 2025-12-06 07:27:32.468 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:27:32 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e270 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:27:33 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:27:33 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:27:33 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:27:33.254 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:27:33 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:27:33 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 02:27:33 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:27:33.953 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 02:27:35 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:27:35 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:27:35 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:27:35.255 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:27:35 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:27:35 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:27:35 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:27:35.957 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:27:36 np0005548731 nova_compute[232433]: 2025-12-06 07:27:36.234 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:27:37 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:27:37 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:27:37 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:27:37.258 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:27:37 np0005548731 nova_compute[232433]: 2025-12-06 07:27:37.470 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:27:37 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e270 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:27:37 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:27:37 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:27:37 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:27:37.960 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:27:39 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:27:39 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:27:39 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:27:39.260 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:27:39 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:27:39 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:27:39 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:27:39.963 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:27:40 np0005548731 podman[276878]: 2025-12-06 07:27:40.900265484 +0000 UTC m=+0.055672050 container health_status 26c2ce9bdb83c6db5ccad7f5b2a7cb4e9354ab0235ad2f942ea42c8feda2142b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec  6 02:27:40 np0005548731 podman[276880]: 2025-12-06 07:27:40.907512191 +0000 UTC m=+0.057448253 container health_status ae4fd08cdec31d6ae2a6b91ffff7deb6ca5a8375cb52ee4b685cc6f25796824e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Dec  6 02:27:40 np0005548731 podman[276879]: 2025-12-06 07:27:40.95004555 +0000 UTC m=+0.102990106 container health_status a118c3becf56cfbcae208065771ebf10a65162bb7b96389971293e534823fb78 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Dec  6 02:27:41 np0005548731 nova_compute[232433]: 2025-12-06 07:27:41.236 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:27:41 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:27:41 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:27:41 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:27:41.262 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:27:41 np0005548731 nova_compute[232433]: 2025-12-06 07:27:41.601 232437 DEBUG nova.compute.manager [None req-e6c9ed57-aed4-4ef0-98d2-c9bde68f5212 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] [instance: 018b5c4d-2e0a-428b-ac8b-a74236e0d103] Stashing vm_state: active _prep_resize /usr/lib/python3.9/site-packages/nova/compute/manager.py:5560#033[00m
Dec  6 02:27:41 np0005548731 nova_compute[232433]: 2025-12-06 07:27:41.775 232437 DEBUG oslo_concurrency.lockutils [None req-e6c9ed57-aed4-4ef0-98d2-c9bde68f5212 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.resize_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:27:41 np0005548731 nova_compute[232433]: 2025-12-06 07:27:41.776 232437 DEBUG oslo_concurrency.lockutils [None req-e6c9ed57-aed4-4ef0-98d2-c9bde68f5212 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.resize_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:27:41 np0005548731 nova_compute[232433]: 2025-12-06 07:27:41.806 232437 DEBUG nova.objects.instance [None req-e6c9ed57-aed4-4ef0-98d2-c9bde68f5212 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Lazy-loading 'pci_requests' on Instance uuid 018b5c4d-2e0a-428b-ac8b-a74236e0d103 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:27:41 np0005548731 nova_compute[232433]: 2025-12-06 07:27:41.834 232437 DEBUG nova.virt.hardware [None req-e6c9ed57-aed4-4ef0-98d2-c9bde68f5212 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Dec  6 02:27:41 np0005548731 nova_compute[232433]: 2025-12-06 07:27:41.834 232437 INFO nova.compute.claims [None req-e6c9ed57-aed4-4ef0-98d2-c9bde68f5212 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] [instance: 018b5c4d-2e0a-428b-ac8b-a74236e0d103] Claim successful on node compute-2.ctlplane.example.com#033[00m
Dec  6 02:27:41 np0005548731 nova_compute[232433]: 2025-12-06 07:27:41.834 232437 DEBUG nova.objects.instance [None req-e6c9ed57-aed4-4ef0-98d2-c9bde68f5212 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Lazy-loading 'resources' on Instance uuid 018b5c4d-2e0a-428b-ac8b-a74236e0d103 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:27:41 np0005548731 nova_compute[232433]: 2025-12-06 07:27:41.845 232437 DEBUG nova.objects.instance [None req-e6c9ed57-aed4-4ef0-98d2-c9bde68f5212 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Lazy-loading 'pci_devices' on Instance uuid 018b5c4d-2e0a-428b-ac8b-a74236e0d103 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:27:41 np0005548731 nova_compute[232433]: 2025-12-06 07:27:41.897 232437 INFO nova.compute.resource_tracker [None req-e6c9ed57-aed4-4ef0-98d2-c9bde68f5212 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] [instance: 018b5c4d-2e0a-428b-ac8b-a74236e0d103] Updating resource usage from migration e9fd380a-9dfd-4e3f-9f3e-f4a350f6e02f#033[00m
Dec  6 02:27:41 np0005548731 nova_compute[232433]: 2025-12-06 07:27:41.897 232437 DEBUG nova.compute.resource_tracker [None req-e6c9ed57-aed4-4ef0-98d2-c9bde68f5212 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] [instance: 018b5c4d-2e0a-428b-ac8b-a74236e0d103] Starting to track incoming migration e9fd380a-9dfd-4e3f-9f3e-f4a350f6e02f with flavor fb97f55a-36c0-42f2-8156-c1b04eb23dd0 _update_usage_from_migration /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1431#033[00m
Dec  6 02:27:41 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:27:41 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:27:41 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:27:41.966 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:27:41 np0005548731 nova_compute[232433]: 2025-12-06 07:27:41.978 232437 DEBUG oslo_concurrency.processutils [None req-e6c9ed57-aed4-4ef0-98d2-c9bde68f5212 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:27:42 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 02:27:42 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3802733494' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 02:27:42 np0005548731 nova_compute[232433]: 2025-12-06 07:27:42.456 232437 DEBUG oslo_concurrency.processutils [None req-e6c9ed57-aed4-4ef0-98d2-c9bde68f5212 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.478s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:27:42 np0005548731 nova_compute[232433]: 2025-12-06 07:27:42.463 232437 DEBUG nova.compute.provider_tree [None req-e6c9ed57-aed4-4ef0-98d2-c9bde68f5212 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Inventory has not changed in ProviderTree for provider: 6d00757a-082f-486d-ae84-869a2ba2e6e7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  6 02:27:42 np0005548731 nova_compute[232433]: 2025-12-06 07:27:42.472 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:27:42 np0005548731 nova_compute[232433]: 2025-12-06 07:27:42.577 232437 DEBUG nova.scheduler.client.report [None req-e6c9ed57-aed4-4ef0-98d2-c9bde68f5212 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Inventory has not changed for provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  6 02:27:42 np0005548731 nova_compute[232433]: 2025-12-06 07:27:42.628 232437 DEBUG oslo_concurrency.lockutils [None req-e6c9ed57-aed4-4ef0-98d2-c9bde68f5212 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.resize_claim" :: held 0.852s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:27:42 np0005548731 nova_compute[232433]: 2025-12-06 07:27:42.628 232437 INFO nova.compute.manager [None req-e6c9ed57-aed4-4ef0-98d2-c9bde68f5212 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] [instance: 018b5c4d-2e0a-428b-ac8b-a74236e0d103] Migrating#033[00m
Dec  6 02:27:42 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e270 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:27:43 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:27:43 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:27:43 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:27:43.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:27:43 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:27:43 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 02:27:43 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:27:43.969 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 02:27:45 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:27:45 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:27:45 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:27:45.266 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:27:45 np0005548731 systemd-logind[794]: New session 61 of user nova.
Dec  6 02:27:45 np0005548731 systemd[1]: Created slice User Slice of UID 42436.
Dec  6 02:27:45 np0005548731 systemd[1]: Starting User Runtime Directory /run/user/42436...
Dec  6 02:27:45 np0005548731 systemd[1]: Finished User Runtime Directory /run/user/42436.
Dec  6 02:27:45 np0005548731 systemd[1]: Starting User Manager for UID 42436...
Dec  6 02:27:45 np0005548731 systemd[276969]: Queued start job for default target Main User Target.
Dec  6 02:27:45 np0005548731 systemd[276969]: Created slice User Application Slice.
Dec  6 02:27:45 np0005548731 systemd[276969]: Started Mark boot as successful after the user session has run 2 minutes.
Dec  6 02:27:45 np0005548731 systemd[276969]: Started Daily Cleanup of User's Temporary Directories.
Dec  6 02:27:45 np0005548731 systemd[276969]: Reached target Paths.
Dec  6 02:27:45 np0005548731 systemd[276969]: Reached target Timers.
Dec  6 02:27:45 np0005548731 systemd[276969]: Starting D-Bus User Message Bus Socket...
Dec  6 02:27:45 np0005548731 systemd[276969]: Starting Create User's Volatile Files and Directories...
Dec  6 02:27:45 np0005548731 systemd[276969]: Listening on D-Bus User Message Bus Socket.
Dec  6 02:27:45 np0005548731 systemd[276969]: Reached target Sockets.
Dec  6 02:27:45 np0005548731 systemd[276969]: Finished Create User's Volatile Files and Directories.
Dec  6 02:27:45 np0005548731 systemd[276969]: Reached target Basic System.
Dec  6 02:27:45 np0005548731 systemd[276969]: Reached target Main User Target.
Dec  6 02:27:45 np0005548731 systemd[276969]: Startup finished in 141ms.
Dec  6 02:27:45 np0005548731 systemd[1]: Started User Manager for UID 42436.
Dec  6 02:27:45 np0005548731 systemd[1]: Started Session 61 of User nova.
Dec  6 02:27:45 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:27:45 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:27:45 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:27:45.973 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:27:46 np0005548731 systemd[1]: session-61.scope: Deactivated successfully.
Dec  6 02:27:46 np0005548731 systemd-logind[794]: Session 61 logged out. Waiting for processes to exit.
Dec  6 02:27:46 np0005548731 systemd-logind[794]: Removed session 61.
Dec  6 02:27:46 np0005548731 systemd-logind[794]: New session 63 of user nova.
Dec  6 02:27:46 np0005548731 systemd[1]: Started Session 63 of User nova.
Dec  6 02:27:46 np0005548731 nova_compute[232433]: 2025-12-06 07:27:46.239 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:27:46 np0005548731 systemd[1]: session-63.scope: Deactivated successfully.
Dec  6 02:27:46 np0005548731 systemd-logind[794]: Session 63 logged out. Waiting for processes to exit.
Dec  6 02:27:46 np0005548731 systemd-logind[794]: Removed session 63.
Dec  6 02:27:46 np0005548731 nova_compute[232433]: 2025-12-06 07:27:46.650 232437 DEBUG oslo_concurrency.lockutils [None req-8299c763-c9a1-4e47-85e8-964e80da1dd7 f6f0d78959484986bab10b40c84e9406 036c977b26cc46cabc5f599cb6c00e9a - - default default] Acquiring lock "714e9301-631d-44ba-8f91-a1d65209745b" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:27:46 np0005548731 nova_compute[232433]: 2025-12-06 07:27:46.650 232437 DEBUG oslo_concurrency.lockutils [None req-8299c763-c9a1-4e47-85e8-964e80da1dd7 f6f0d78959484986bab10b40c84e9406 036c977b26cc46cabc5f599cb6c00e9a - - default default] Lock "714e9301-631d-44ba-8f91-a1d65209745b" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:27:46 np0005548731 nova_compute[232433]: 2025-12-06 07:27:46.671 232437 DEBUG nova.compute.manager [None req-8299c763-c9a1-4e47-85e8-964e80da1dd7 f6f0d78959484986bab10b40c84e9406 036c977b26cc46cabc5f599cb6c00e9a - - default default] [instance: 714e9301-631d-44ba-8f91-a1d65209745b] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Dec  6 02:27:46 np0005548731 nova_compute[232433]: 2025-12-06 07:27:46.794 232437 DEBUG oslo_concurrency.lockutils [None req-8299c763-c9a1-4e47-85e8-964e80da1dd7 f6f0d78959484986bab10b40c84e9406 036c977b26cc46cabc5f599cb6c00e9a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:27:46 np0005548731 nova_compute[232433]: 2025-12-06 07:27:46.794 232437 DEBUG oslo_concurrency.lockutils [None req-8299c763-c9a1-4e47-85e8-964e80da1dd7 f6f0d78959484986bab10b40c84e9406 036c977b26cc46cabc5f599cb6c00e9a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:27:46 np0005548731 nova_compute[232433]: 2025-12-06 07:27:46.804 232437 DEBUG nova.virt.hardware [None req-8299c763-c9a1-4e47-85e8-964e80da1dd7 f6f0d78959484986bab10b40c84e9406 036c977b26cc46cabc5f599cb6c00e9a - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Dec  6 02:27:46 np0005548731 nova_compute[232433]: 2025-12-06 07:27:46.805 232437 INFO nova.compute.claims [None req-8299c763-c9a1-4e47-85e8-964e80da1dd7 f6f0d78959484986bab10b40c84e9406 036c977b26cc46cabc5f599cb6c00e9a - - default default] [instance: 714e9301-631d-44ba-8f91-a1d65209745b] Claim successful on node compute-2.ctlplane.example.com#033[00m
Dec  6 02:27:47 np0005548731 nova_compute[232433]: 2025-12-06 07:27:47.002 232437 DEBUG oslo_concurrency.processutils [None req-8299c763-c9a1-4e47-85e8-964e80da1dd7 f6f0d78959484986bab10b40c84e9406 036c977b26cc46cabc5f599cb6c00e9a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:27:47 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:27:47 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:27:47 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:27:47.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:27:47 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 02:27:47 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/4280102292' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 02:27:47 np0005548731 nova_compute[232433]: 2025-12-06 07:27:47.461 232437 DEBUG oslo_concurrency.processutils [None req-8299c763-c9a1-4e47-85e8-964e80da1dd7 f6f0d78959484986bab10b40c84e9406 036c977b26cc46cabc5f599cb6c00e9a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.459s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:27:47 np0005548731 nova_compute[232433]: 2025-12-06 07:27:47.469 232437 DEBUG nova.compute.provider_tree [None req-8299c763-c9a1-4e47-85e8-964e80da1dd7 f6f0d78959484986bab10b40c84e9406 036c977b26cc46cabc5f599cb6c00e9a - - default default] Inventory has not changed in ProviderTree for provider: 6d00757a-082f-486d-ae84-869a2ba2e6e7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  6 02:27:47 np0005548731 nova_compute[232433]: 2025-12-06 07:27:47.474 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:27:47 np0005548731 nova_compute[232433]: 2025-12-06 07:27:47.483 232437 DEBUG nova.scheduler.client.report [None req-8299c763-c9a1-4e47-85e8-964e80da1dd7 f6f0d78959484986bab10b40c84e9406 036c977b26cc46cabc5f599cb6c00e9a - - default default] Inventory has not changed for provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  6 02:27:47 np0005548731 nova_compute[232433]: 2025-12-06 07:27:47.522 232437 DEBUG oslo_concurrency.lockutils [None req-8299c763-c9a1-4e47-85e8-964e80da1dd7 f6f0d78959484986bab10b40c84e9406 036c977b26cc46cabc5f599cb6c00e9a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.728s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:27:47 np0005548731 nova_compute[232433]: 2025-12-06 07:27:47.523 232437 DEBUG nova.compute.manager [None req-8299c763-c9a1-4e47-85e8-964e80da1dd7 f6f0d78959484986bab10b40c84e9406 036c977b26cc46cabc5f599cb6c00e9a - - default default] [instance: 714e9301-631d-44ba-8f91-a1d65209745b] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Dec  6 02:27:47 np0005548731 nova_compute[232433]: 2025-12-06 07:27:47.589 232437 DEBUG nova.compute.manager [None req-8299c763-c9a1-4e47-85e8-964e80da1dd7 f6f0d78959484986bab10b40c84e9406 036c977b26cc46cabc5f599cb6c00e9a - - default default] [instance: 714e9301-631d-44ba-8f91-a1d65209745b] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Dec  6 02:27:47 np0005548731 nova_compute[232433]: 2025-12-06 07:27:47.589 232437 DEBUG nova.network.neutron [None req-8299c763-c9a1-4e47-85e8-964e80da1dd7 f6f0d78959484986bab10b40c84e9406 036c977b26cc46cabc5f599cb6c00e9a - - default default] [instance: 714e9301-631d-44ba-8f91-a1d65209745b] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Dec  6 02:27:47 np0005548731 nova_compute[232433]: 2025-12-06 07:27:47.610 232437 INFO nova.virt.libvirt.driver [None req-8299c763-c9a1-4e47-85e8-964e80da1dd7 f6f0d78959484986bab10b40c84e9406 036c977b26cc46cabc5f599cb6c00e9a - - default default] [instance: 714e9301-631d-44ba-8f91-a1d65209745b] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Dec  6 02:27:47 np0005548731 nova_compute[232433]: 2025-12-06 07:27:47.639 232437 DEBUG nova.compute.manager [None req-8299c763-c9a1-4e47-85e8-964e80da1dd7 f6f0d78959484986bab10b40c84e9406 036c977b26cc46cabc5f599cb6c00e9a - - default default] [instance: 714e9301-631d-44ba-8f91-a1d65209745b] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Dec  6 02:27:47 np0005548731 nova_compute[232433]: 2025-12-06 07:27:47.784 232437 DEBUG nova.compute.manager [None req-8299c763-c9a1-4e47-85e8-964e80da1dd7 f6f0d78959484986bab10b40c84e9406 036c977b26cc46cabc5f599cb6c00e9a - - default default] [instance: 714e9301-631d-44ba-8f91-a1d65209745b] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Dec  6 02:27:47 np0005548731 nova_compute[232433]: 2025-12-06 07:27:47.786 232437 DEBUG nova.virt.libvirt.driver [None req-8299c763-c9a1-4e47-85e8-964e80da1dd7 f6f0d78959484986bab10b40c84e9406 036c977b26cc46cabc5f599cb6c00e9a - - default default] [instance: 714e9301-631d-44ba-8f91-a1d65209745b] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Dec  6 02:27:47 np0005548731 nova_compute[232433]: 2025-12-06 07:27:47.787 232437 INFO nova.virt.libvirt.driver [None req-8299c763-c9a1-4e47-85e8-964e80da1dd7 f6f0d78959484986bab10b40c84e9406 036c977b26cc46cabc5f599cb6c00e9a - - default default] [instance: 714e9301-631d-44ba-8f91-a1d65209745b] Creating image(s)#033[00m
Dec  6 02:27:47 np0005548731 nova_compute[232433]: 2025-12-06 07:27:47.810 232437 DEBUG nova.storage.rbd_utils [None req-8299c763-c9a1-4e47-85e8-964e80da1dd7 f6f0d78959484986bab10b40c84e9406 036c977b26cc46cabc5f599cb6c00e9a - - default default] rbd image 714e9301-631d-44ba-8f91-a1d65209745b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:27:47 np0005548731 nova_compute[232433]: 2025-12-06 07:27:47.832 232437 DEBUG nova.storage.rbd_utils [None req-8299c763-c9a1-4e47-85e8-964e80da1dd7 f6f0d78959484986bab10b40c84e9406 036c977b26cc46cabc5f599cb6c00e9a - - default default] rbd image 714e9301-631d-44ba-8f91-a1d65209745b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:27:47 np0005548731 nova_compute[232433]: 2025-12-06 07:27:47.855 232437 DEBUG nova.storage.rbd_utils [None req-8299c763-c9a1-4e47-85e8-964e80da1dd7 f6f0d78959484986bab10b40c84e9406 036c977b26cc46cabc5f599cb6c00e9a - - default default] rbd image 714e9301-631d-44ba-8f91-a1d65209745b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:27:47 np0005548731 nova_compute[232433]: 2025-12-06 07:27:47.858 232437 DEBUG oslo_concurrency.processutils [None req-8299c763-c9a1-4e47-85e8-964e80da1dd7 f6f0d78959484986bab10b40c84e9406 036c977b26cc46cabc5f599cb6c00e9a - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:27:47 np0005548731 nova_compute[232433]: 2025-12-06 07:27:47.908 232437 DEBUG nova.policy [None req-8299c763-c9a1-4e47-85e8-964e80da1dd7 f6f0d78959484986bab10b40c84e9406 036c977b26cc46cabc5f599cb6c00e9a - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'f6f0d78959484986bab10b40c84e9406', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '036c977b26cc46cabc5f599cb6c00e9a', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Dec  6 02:27:47 np0005548731 nova_compute[232433]: 2025-12-06 07:27:47.921 232437 DEBUG oslo_concurrency.processutils [None req-8299c763-c9a1-4e47-85e8-964e80da1dd7 f6f0d78959484986bab10b40c84e9406 036c977b26cc46cabc5f599cb6c00e9a - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:27:47 np0005548731 nova_compute[232433]: 2025-12-06 07:27:47.923 232437 DEBUG oslo_concurrency.lockutils [None req-8299c763-c9a1-4e47-85e8-964e80da1dd7 f6f0d78959484986bab10b40c84e9406 036c977b26cc46cabc5f599cb6c00e9a - - default default] Acquiring lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:27:47 np0005548731 nova_compute[232433]: 2025-12-06 07:27:47.923 232437 DEBUG oslo_concurrency.lockutils [None req-8299c763-c9a1-4e47-85e8-964e80da1dd7 f6f0d78959484986bab10b40c84e9406 036c977b26cc46cabc5f599cb6c00e9a - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:27:47 np0005548731 nova_compute[232433]: 2025-12-06 07:27:47.924 232437 DEBUG oslo_concurrency.lockutils [None req-8299c763-c9a1-4e47-85e8-964e80da1dd7 f6f0d78959484986bab10b40c84e9406 036c977b26cc46cabc5f599cb6c00e9a - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:27:47 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e270 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:27:47 np0005548731 nova_compute[232433]: 2025-12-06 07:27:47.944 232437 DEBUG nova.storage.rbd_utils [None req-8299c763-c9a1-4e47-85e8-964e80da1dd7 f6f0d78959484986bab10b40c84e9406 036c977b26cc46cabc5f599cb6c00e9a - - default default] rbd image 714e9301-631d-44ba-8f91-a1d65209745b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:27:47 np0005548731 nova_compute[232433]: 2025-12-06 07:27:47.947 232437 DEBUG oslo_concurrency.processutils [None req-8299c763-c9a1-4e47-85e8-964e80da1dd7 f6f0d78959484986bab10b40c84e9406 036c977b26cc46cabc5f599cb6c00e9a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef 714e9301-631d-44ba-8f91-a1d65209745b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:27:47 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:27:47 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:27:47 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:27:47.975 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:27:48 np0005548731 nova_compute[232433]: 2025-12-06 07:27:48.941 232437 DEBUG nova.network.neutron [None req-8299c763-c9a1-4e47-85e8-964e80da1dd7 f6f0d78959484986bab10b40c84e9406 036c977b26cc46cabc5f599cb6c00e9a - - default default] [instance: 714e9301-631d-44ba-8f91-a1d65209745b] Successfully created port: 012ae37e-970e-4506-8216-7c5339055cf7 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Dec  6 02:27:49 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:27:49 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:27:49 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:27:49.270 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:27:49 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:27:49 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:27:49 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:27:49.978 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:27:50 np0005548731 nova_compute[232433]: 2025-12-06 07:27:50.111 232437 DEBUG nova.network.neutron [None req-8299c763-c9a1-4e47-85e8-964e80da1dd7 f6f0d78959484986bab10b40c84e9406 036c977b26cc46cabc5f599cb6c00e9a - - default default] [instance: 714e9301-631d-44ba-8f91-a1d65209745b] Successfully updated port: 012ae37e-970e-4506-8216-7c5339055cf7 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Dec  6 02:27:50 np0005548731 nova_compute[232433]: 2025-12-06 07:27:50.128 232437 DEBUG oslo_concurrency.lockutils [None req-8299c763-c9a1-4e47-85e8-964e80da1dd7 f6f0d78959484986bab10b40c84e9406 036c977b26cc46cabc5f599cb6c00e9a - - default default] Acquiring lock "refresh_cache-714e9301-631d-44ba-8f91-a1d65209745b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 02:27:50 np0005548731 nova_compute[232433]: 2025-12-06 07:27:50.129 232437 DEBUG oslo_concurrency.lockutils [None req-8299c763-c9a1-4e47-85e8-964e80da1dd7 f6f0d78959484986bab10b40c84e9406 036c977b26cc46cabc5f599cb6c00e9a - - default default] Acquired lock "refresh_cache-714e9301-631d-44ba-8f91-a1d65209745b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 02:27:50 np0005548731 nova_compute[232433]: 2025-12-06 07:27:50.129 232437 DEBUG nova.network.neutron [None req-8299c763-c9a1-4e47-85e8-964e80da1dd7 f6f0d78959484986bab10b40c84e9406 036c977b26cc46cabc5f599cb6c00e9a - - default default] [instance: 714e9301-631d-44ba-8f91-a1d65209745b] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec  6 02:27:50 np0005548731 nova_compute[232433]: 2025-12-06 07:27:50.224 232437 DEBUG nova.compute.manager [req-da2ae941-fbf7-4b6b-9bc9-467be2be5457 req-a4f5657e-61f9-4ac7-bc89-8c4869e4d15b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 714e9301-631d-44ba-8f91-a1d65209745b] Received event network-changed-012ae37e-970e-4506-8216-7c5339055cf7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:27:50 np0005548731 nova_compute[232433]: 2025-12-06 07:27:50.225 232437 DEBUG nova.compute.manager [req-da2ae941-fbf7-4b6b-9bc9-467be2be5457 req-a4f5657e-61f9-4ac7-bc89-8c4869e4d15b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 714e9301-631d-44ba-8f91-a1d65209745b] Refreshing instance network info cache due to event network-changed-012ae37e-970e-4506-8216-7c5339055cf7. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  6 02:27:50 np0005548731 nova_compute[232433]: 2025-12-06 07:27:50.225 232437 DEBUG oslo_concurrency.lockutils [req-da2ae941-fbf7-4b6b-9bc9-467be2be5457 req-a4f5657e-61f9-4ac7-bc89-8c4869e4d15b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-714e9301-631d-44ba-8f91-a1d65209745b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 02:27:51 np0005548731 nova_compute[232433]: 2025-12-06 07:27:51.242 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:27:51 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:27:51 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:27:51 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:27:51.271 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:27:51 np0005548731 nova_compute[232433]: 2025-12-06 07:27:51.282 232437 DEBUG nova.network.neutron [None req-8299c763-c9a1-4e47-85e8-964e80da1dd7 f6f0d78959484986bab10b40c84e9406 036c977b26cc46cabc5f599cb6c00e9a - - default default] [instance: 714e9301-631d-44ba-8f91-a1d65209745b] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec  6 02:27:51 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:27:51 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:27:51 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:27:51.981 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:27:52 np0005548731 nova_compute[232433]: 2025-12-06 07:27:52.476 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:27:52 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e270 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:27:53 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:27:53 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:27:53 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:27:53.273 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:27:53 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:27:53 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:27:53 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:27:53.984 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:27:55 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:27:55.014 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=42, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:ec:b3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '32:72:e7:89:e0:7d'}, ipsec=False) old=SB_Global(nb_cfg=41) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 02:27:55 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:27:55.015 143965 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec  6 02:27:55 np0005548731 nova_compute[232433]: 2025-12-06 07:27:55.016 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:27:55 np0005548731 nova_compute[232433]: 2025-12-06 07:27:55.216 232437 DEBUG nova.network.neutron [None req-8299c763-c9a1-4e47-85e8-964e80da1dd7 f6f0d78959484986bab10b40c84e9406 036c977b26cc46cabc5f599cb6c00e9a - - default default] [instance: 714e9301-631d-44ba-8f91-a1d65209745b] Updating instance_info_cache with network_info: [{"id": "012ae37e-970e-4506-8216-7c5339055cf7", "address": "fa:16:3e:01:71:4e", "network": {"id": "248c2772-b8d1-4900-9e05-67104b197a82", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-505599830-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "036c977b26cc46cabc5f599cb6c00e9a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap012ae37e-97", "ovs_interfaceid": "012ae37e-970e-4506-8216-7c5339055cf7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 02:27:55 np0005548731 nova_compute[232433]: 2025-12-06 07:27:55.244 232437 DEBUG oslo_concurrency.lockutils [None req-8299c763-c9a1-4e47-85e8-964e80da1dd7 f6f0d78959484986bab10b40c84e9406 036c977b26cc46cabc5f599cb6c00e9a - - default default] Releasing lock "refresh_cache-714e9301-631d-44ba-8f91-a1d65209745b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 02:27:55 np0005548731 nova_compute[232433]: 2025-12-06 07:27:55.244 232437 DEBUG nova.compute.manager [None req-8299c763-c9a1-4e47-85e8-964e80da1dd7 f6f0d78959484986bab10b40c84e9406 036c977b26cc46cabc5f599cb6c00e9a - - default default] [instance: 714e9301-631d-44ba-8f91-a1d65209745b] Instance network_info: |[{"id": "012ae37e-970e-4506-8216-7c5339055cf7", "address": "fa:16:3e:01:71:4e", "network": {"id": "248c2772-b8d1-4900-9e05-67104b197a82", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-505599830-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "036c977b26cc46cabc5f599cb6c00e9a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap012ae37e-97", "ovs_interfaceid": "012ae37e-970e-4506-8216-7c5339055cf7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Dec  6 02:27:55 np0005548731 nova_compute[232433]: 2025-12-06 07:27:55.244 232437 DEBUG oslo_concurrency.lockutils [req-da2ae941-fbf7-4b6b-9bc9-467be2be5457 req-a4f5657e-61f9-4ac7-bc89-8c4869e4d15b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-714e9301-631d-44ba-8f91-a1d65209745b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 02:27:55 np0005548731 nova_compute[232433]: 2025-12-06 07:27:55.245 232437 DEBUG nova.network.neutron [req-da2ae941-fbf7-4b6b-9bc9-467be2be5457 req-a4f5657e-61f9-4ac7-bc89-8c4869e4d15b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 714e9301-631d-44ba-8f91-a1d65209745b] Refreshing network info cache for port 012ae37e-970e-4506-8216-7c5339055cf7 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  6 02:27:55 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:27:55 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:27:55 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:27:55.274 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:27:55 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:27:55 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:27:55 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:27:55.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:27:56 np0005548731 nova_compute[232433]: 2025-12-06 07:27:56.245 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:27:56 np0005548731 systemd[1]: Stopping User Manager for UID 42436...
Dec  6 02:27:56 np0005548731 systemd[276969]: Activating special unit Exit the Session...
Dec  6 02:27:56 np0005548731 systemd[276969]: Stopped target Main User Target.
Dec  6 02:27:56 np0005548731 systemd[276969]: Stopped target Basic System.
Dec  6 02:27:56 np0005548731 systemd[276969]: Stopped target Paths.
Dec  6 02:27:56 np0005548731 systemd[276969]: Stopped target Sockets.
Dec  6 02:27:56 np0005548731 systemd[276969]: Stopped target Timers.
Dec  6 02:27:56 np0005548731 systemd[276969]: Stopped Mark boot as successful after the user session has run 2 minutes.
Dec  6 02:27:56 np0005548731 systemd[276969]: Stopped Daily Cleanup of User's Temporary Directories.
Dec  6 02:27:56 np0005548731 systemd[276969]: Closed D-Bus User Message Bus Socket.
Dec  6 02:27:56 np0005548731 systemd[276969]: Stopped Create User's Volatile Files and Directories.
Dec  6 02:27:56 np0005548731 systemd[276969]: Removed slice User Application Slice.
Dec  6 02:27:56 np0005548731 systemd[276969]: Reached target Shutdown.
Dec  6 02:27:56 np0005548731 systemd[276969]: Finished Exit the Session.
Dec  6 02:27:56 np0005548731 systemd[276969]: Reached target Exit the Session.
Dec  6 02:27:56 np0005548731 systemd[1]: user@42436.service: Deactivated successfully.
Dec  6 02:27:56 np0005548731 systemd[1]: Stopped User Manager for UID 42436.
Dec  6 02:27:56 np0005548731 systemd[1]: Stopping User Runtime Directory /run/user/42436...
Dec  6 02:27:56 np0005548731 systemd[1]: run-user-42436.mount: Deactivated successfully.
Dec  6 02:27:56 np0005548731 systemd[1]: user-runtime-dir@42436.service: Deactivated successfully.
Dec  6 02:27:56 np0005548731 systemd[1]: Stopped User Runtime Directory /run/user/42436.
Dec  6 02:27:56 np0005548731 systemd[1]: Removed slice User Slice of UID 42436.
Dec  6 02:27:57 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:27:57 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 02:27:57 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:27:57.276 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 02:27:57 np0005548731 nova_compute[232433]: 2025-12-06 07:27:57.479 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:27:57 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e270 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:27:57 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:27:57 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:27:57 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:27:57.988 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:27:58 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:27:58.017 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=9f96b960-b4f2-40bd-ae99-08121f5e8b78, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '42'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:27:59 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:27:59 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:27:59 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:27:59.278 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:27:59 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:27:59 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:27:59 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:27:59.992 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:28:00 np0005548731 nova_compute[232433]: 2025-12-06 07:28:00.741 232437 DEBUG nova.network.neutron [req-da2ae941-fbf7-4b6b-9bc9-467be2be5457 req-a4f5657e-61f9-4ac7-bc89-8c4869e4d15b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 714e9301-631d-44ba-8f91-a1d65209745b] Updated VIF entry in instance network info cache for port 012ae37e-970e-4506-8216-7c5339055cf7. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  6 02:28:00 np0005548731 nova_compute[232433]: 2025-12-06 07:28:00.741 232437 DEBUG nova.network.neutron [req-da2ae941-fbf7-4b6b-9bc9-467be2be5457 req-a4f5657e-61f9-4ac7-bc89-8c4869e4d15b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 714e9301-631d-44ba-8f91-a1d65209745b] Updating instance_info_cache with network_info: [{"id": "012ae37e-970e-4506-8216-7c5339055cf7", "address": "fa:16:3e:01:71:4e", "network": {"id": "248c2772-b8d1-4900-9e05-67104b197a82", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-505599830-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "036c977b26cc46cabc5f599cb6c00e9a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap012ae37e-97", "ovs_interfaceid": "012ae37e-970e-4506-8216-7c5339055cf7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 02:28:00 np0005548731 nova_compute[232433]: 2025-12-06 07:28:00.796 232437 DEBUG oslo_concurrency.lockutils [req-da2ae941-fbf7-4b6b-9bc9-467be2be5457 req-a4f5657e-61f9-4ac7-bc89-8c4869e4d15b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-714e9301-631d-44ba-8f91-a1d65209745b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 02:28:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:28:00.868 143965 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:28:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:28:00.869 143965 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:28:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:28:00.869 143965 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:28:01 np0005548731 nova_compute[232433]: 2025-12-06 07:28:01.266 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:28:01 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:28:01 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:28:01 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:28:01.280 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:28:01 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:28:01 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:28:01 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:28:01.994 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:28:02 np0005548731 nova_compute[232433]: 2025-12-06 07:28:02.482 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:28:02 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e270 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:28:03 np0005548731 nova_compute[232433]: 2025-12-06 07:28:03.105 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:28:03 np0005548731 nova_compute[232433]: 2025-12-06 07:28:03.105 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:28:03 np0005548731 nova_compute[232433]: 2025-12-06 07:28:03.105 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  6 02:28:03 np0005548731 nova_compute[232433]: 2025-12-06 07:28:03.105 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  6 02:28:03 np0005548731 nova_compute[232433]: 2025-12-06 07:28:03.123 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] [instance: 714e9301-631d-44ba-8f91-a1d65209745b] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Dec  6 02:28:03 np0005548731 nova_compute[232433]: 2025-12-06 07:28:03.124 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Dec  6 02:28:03 np0005548731 nova_compute[232433]: 2025-12-06 07:28:03.124 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:28:03 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:28:03 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.002000047s ======
Dec  6 02:28:03 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:28:03.280 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000047s
Dec  6 02:28:03 np0005548731 nova_compute[232433]: 2025-12-06 07:28:03.294 232437 DEBUG oslo_concurrency.processutils [None req-8299c763-c9a1-4e47-85e8-964e80da1dd7 f6f0d78959484986bab10b40c84e9406 036c977b26cc46cabc5f599cb6c00e9a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef 714e9301-631d-44ba-8f91-a1d65209745b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 15.347s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:28:03 np0005548731 nova_compute[232433]: 2025-12-06 07:28:03.381 232437 DEBUG nova.storage.rbd_utils [None req-8299c763-c9a1-4e47-85e8-964e80da1dd7 f6f0d78959484986bab10b40c84e9406 036c977b26cc46cabc5f599cb6c00e9a - - default default] resizing rbd image 714e9301-631d-44ba-8f91-a1d65209745b_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Dec  6 02:28:04 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:28:04 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:28:04 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:28:03.998 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:28:05 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:28:05 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:28:05 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:28:05.283 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:28:06 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:28:06 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:28:06 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:28:06.001 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:28:06 np0005548731 nova_compute[232433]: 2025-12-06 07:28:06.265 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:28:06 np0005548731 nova_compute[232433]: 2025-12-06 07:28:06.265 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:28:06 np0005548731 nova_compute[232433]: 2025-12-06 07:28:06.265 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  6 02:28:06 np0005548731 nova_compute[232433]: 2025-12-06 07:28:06.268 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:28:07 np0005548731 nova_compute[232433]: 2025-12-06 07:28:07.247 232437 DEBUG nova.objects.instance [None req-8299c763-c9a1-4e47-85e8-964e80da1dd7 f6f0d78959484986bab10b40c84e9406 036c977b26cc46cabc5f599cb6c00e9a - - default default] Lazy-loading 'migration_context' on Instance uuid 714e9301-631d-44ba-8f91-a1d65209745b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:28:07 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:28:07 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 02:28:07 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:28:07.285 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 02:28:07 np0005548731 nova_compute[232433]: 2025-12-06 07:28:07.485 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:28:07 np0005548731 nova_compute[232433]: 2025-12-06 07:28:07.604 232437 DEBUG nova.virt.libvirt.driver [None req-8299c763-c9a1-4e47-85e8-964e80da1dd7 f6f0d78959484986bab10b40c84e9406 036c977b26cc46cabc5f599cb6c00e9a - - default default] [instance: 714e9301-631d-44ba-8f91-a1d65209745b] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Dec  6 02:28:07 np0005548731 nova_compute[232433]: 2025-12-06 07:28:07.605 232437 DEBUG nova.virt.libvirt.driver [None req-8299c763-c9a1-4e47-85e8-964e80da1dd7 f6f0d78959484986bab10b40c84e9406 036c977b26cc46cabc5f599cb6c00e9a - - default default] [instance: 714e9301-631d-44ba-8f91-a1d65209745b] Ensure instance console log exists: /var/lib/nova/instances/714e9301-631d-44ba-8f91-a1d65209745b/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec  6 02:28:07 np0005548731 nova_compute[232433]: 2025-12-06 07:28:07.605 232437 DEBUG oslo_concurrency.lockutils [None req-8299c763-c9a1-4e47-85e8-964e80da1dd7 f6f0d78959484986bab10b40c84e9406 036c977b26cc46cabc5f599cb6c00e9a - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:28:07 np0005548731 nova_compute[232433]: 2025-12-06 07:28:07.606 232437 DEBUG oslo_concurrency.lockutils [None req-8299c763-c9a1-4e47-85e8-964e80da1dd7 f6f0d78959484986bab10b40c84e9406 036c977b26cc46cabc5f599cb6c00e9a - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:28:07 np0005548731 nova_compute[232433]: 2025-12-06 07:28:07.606 232437 DEBUG oslo_concurrency.lockutils [None req-8299c763-c9a1-4e47-85e8-964e80da1dd7 f6f0d78959484986bab10b40c84e9406 036c977b26cc46cabc5f599cb6c00e9a - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:28:07 np0005548731 nova_compute[232433]: 2025-12-06 07:28:07.608 232437 DEBUG nova.virt.libvirt.driver [None req-8299c763-c9a1-4e47-85e8-964e80da1dd7 f6f0d78959484986bab10b40c84e9406 036c977b26cc46cabc5f599cb6c00e9a - - default default] [instance: 714e9301-631d-44ba-8f91-a1d65209745b] Start _get_guest_xml network_info=[{"id": "012ae37e-970e-4506-8216-7c5339055cf7", "address": "fa:16:3e:01:71:4e", "network": {"id": "248c2772-b8d1-4900-9e05-67104b197a82", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-505599830-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "036c977b26cc46cabc5f599cb6c00e9a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap012ae37e-97", "ovs_interfaceid": "012ae37e-970e-4506-8216-7c5339055cf7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'device_type': 'disk', 'encryption_secret_uuid': None, 'disk_bus': 'virtio', 'boot_index': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'encryption_options': None, 'size': 0, 'encrypted': False, 'image_id': '6efab05d-c7cf-4770-a5c3-c806a2739063'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec  6 02:28:07 np0005548731 nova_compute[232433]: 2025-12-06 07:28:07.612 232437 WARNING nova.virt.libvirt.driver [None req-8299c763-c9a1-4e47-85e8-964e80da1dd7 f6f0d78959484986bab10b40c84e9406 036c977b26cc46cabc5f599cb6c00e9a - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  6 02:28:07 np0005548731 nova_compute[232433]: 2025-12-06 07:28:07.621 232437 DEBUG nova.virt.libvirt.host [None req-8299c763-c9a1-4e47-85e8-964e80da1dd7 f6f0d78959484986bab10b40c84e9406 036c977b26cc46cabc5f599cb6c00e9a - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec  6 02:28:07 np0005548731 nova_compute[232433]: 2025-12-06 07:28:07.622 232437 DEBUG nova.virt.libvirt.host [None req-8299c763-c9a1-4e47-85e8-964e80da1dd7 f6f0d78959484986bab10b40c84e9406 036c977b26cc46cabc5f599cb6c00e9a - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec  6 02:28:07 np0005548731 nova_compute[232433]: 2025-12-06 07:28:07.624 232437 DEBUG nova.virt.libvirt.host [None req-8299c763-c9a1-4e47-85e8-964e80da1dd7 f6f0d78959484986bab10b40c84e9406 036c977b26cc46cabc5f599cb6c00e9a - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec  6 02:28:07 np0005548731 nova_compute[232433]: 2025-12-06 07:28:07.625 232437 DEBUG nova.virt.libvirt.host [None req-8299c763-c9a1-4e47-85e8-964e80da1dd7 f6f0d78959484986bab10b40c84e9406 036c977b26cc46cabc5f599cb6c00e9a - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec  6 02:28:07 np0005548731 nova_compute[232433]: 2025-12-06 07:28:07.626 232437 DEBUG nova.virt.libvirt.driver [None req-8299c763-c9a1-4e47-85e8-964e80da1dd7 f6f0d78959484986bab10b40c84e9406 036c977b26cc46cabc5f599cb6c00e9a - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec  6 02:28:07 np0005548731 nova_compute[232433]: 2025-12-06 07:28:07.627 232437 DEBUG nova.virt.hardware [None req-8299c763-c9a1-4e47-85e8-964e80da1dd7 f6f0d78959484986bab10b40c84e9406 036c977b26cc46cabc5f599cb6c00e9a - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-06T06:56:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='25848a18-11d9-4f11-80b5-5d005675c76d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec  6 02:28:07 np0005548731 nova_compute[232433]: 2025-12-06 07:28:07.627 232437 DEBUG nova.virt.hardware [None req-8299c763-c9a1-4e47-85e8-964e80da1dd7 f6f0d78959484986bab10b40c84e9406 036c977b26cc46cabc5f599cb6c00e9a - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec  6 02:28:07 np0005548731 nova_compute[232433]: 2025-12-06 07:28:07.627 232437 DEBUG nova.virt.hardware [None req-8299c763-c9a1-4e47-85e8-964e80da1dd7 f6f0d78959484986bab10b40c84e9406 036c977b26cc46cabc5f599cb6c00e9a - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec  6 02:28:07 np0005548731 nova_compute[232433]: 2025-12-06 07:28:07.628 232437 DEBUG nova.virt.hardware [None req-8299c763-c9a1-4e47-85e8-964e80da1dd7 f6f0d78959484986bab10b40c84e9406 036c977b26cc46cabc5f599cb6c00e9a - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec  6 02:28:07 np0005548731 nova_compute[232433]: 2025-12-06 07:28:07.628 232437 DEBUG nova.virt.hardware [None req-8299c763-c9a1-4e47-85e8-964e80da1dd7 f6f0d78959484986bab10b40c84e9406 036c977b26cc46cabc5f599cb6c00e9a - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec  6 02:28:07 np0005548731 nova_compute[232433]: 2025-12-06 07:28:07.628 232437 DEBUG nova.virt.hardware [None req-8299c763-c9a1-4e47-85e8-964e80da1dd7 f6f0d78959484986bab10b40c84e9406 036c977b26cc46cabc5f599cb6c00e9a - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec  6 02:28:07 np0005548731 nova_compute[232433]: 2025-12-06 07:28:07.628 232437 DEBUG nova.virt.hardware [None req-8299c763-c9a1-4e47-85e8-964e80da1dd7 f6f0d78959484986bab10b40c84e9406 036c977b26cc46cabc5f599cb6c00e9a - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec  6 02:28:07 np0005548731 nova_compute[232433]: 2025-12-06 07:28:07.629 232437 DEBUG nova.virt.hardware [None req-8299c763-c9a1-4e47-85e8-964e80da1dd7 f6f0d78959484986bab10b40c84e9406 036c977b26cc46cabc5f599cb6c00e9a - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec  6 02:28:07 np0005548731 nova_compute[232433]: 2025-12-06 07:28:07.629 232437 DEBUG nova.virt.hardware [None req-8299c763-c9a1-4e47-85e8-964e80da1dd7 f6f0d78959484986bab10b40c84e9406 036c977b26cc46cabc5f599cb6c00e9a - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec  6 02:28:07 np0005548731 nova_compute[232433]: 2025-12-06 07:28:07.629 232437 DEBUG nova.virt.hardware [None req-8299c763-c9a1-4e47-85e8-964e80da1dd7 f6f0d78959484986bab10b40c84e9406 036c977b26cc46cabc5f599cb6c00e9a - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec  6 02:28:07 np0005548731 nova_compute[232433]: 2025-12-06 07:28:07.629 232437 DEBUG nova.virt.hardware [None req-8299c763-c9a1-4e47-85e8-964e80da1dd7 f6f0d78959484986bab10b40c84e9406 036c977b26cc46cabc5f599cb6c00e9a - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec  6 02:28:07 np0005548731 nova_compute[232433]: 2025-12-06 07:28:07.632 232437 DEBUG oslo_concurrency.processutils [None req-8299c763-c9a1-4e47-85e8-964e80da1dd7 f6f0d78959484986bab10b40c84e9406 036c977b26cc46cabc5f599cb6c00e9a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:28:07 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e270 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:28:08 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:28:08 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:28:08 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:28:08.004 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:28:08 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  6 02:28:08 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1106945750' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  6 02:28:08 np0005548731 nova_compute[232433]: 2025-12-06 07:28:08.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:28:08 np0005548731 nova_compute[232433]: 2025-12-06 07:28:08.105 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:28:08 np0005548731 nova_compute[232433]: 2025-12-06 07:28:08.158 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:28:08 np0005548731 nova_compute[232433]: 2025-12-06 07:28:08.158 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:28:08 np0005548731 nova_compute[232433]: 2025-12-06 07:28:08.159 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:28:08 np0005548731 nova_compute[232433]: 2025-12-06 07:28:08.160 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  6 02:28:08 np0005548731 nova_compute[232433]: 2025-12-06 07:28:08.160 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:28:08 np0005548731 nova_compute[232433]: 2025-12-06 07:28:08.460 232437 DEBUG oslo_concurrency.processutils [None req-8299c763-c9a1-4e47-85e8-964e80da1dd7 f6f0d78959484986bab10b40c84e9406 036c977b26cc46cabc5f599cb6c00e9a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.828s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:28:08 np0005548731 nova_compute[232433]: 2025-12-06 07:28:08.490 232437 DEBUG nova.storage.rbd_utils [None req-8299c763-c9a1-4e47-85e8-964e80da1dd7 f6f0d78959484986bab10b40c84e9406 036c977b26cc46cabc5f599cb6c00e9a - - default default] rbd image 714e9301-631d-44ba-8f91-a1d65209745b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:28:08 np0005548731 nova_compute[232433]: 2025-12-06 07:28:08.494 232437 DEBUG oslo_concurrency.processutils [None req-8299c763-c9a1-4e47-85e8-964e80da1dd7 f6f0d78959484986bab10b40c84e9406 036c977b26cc46cabc5f599cb6c00e9a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:28:08 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 02:28:08 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2551767847' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 02:28:08 np0005548731 nova_compute[232433]: 2025-12-06 07:28:08.623 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:28:08 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  6 02:28:08 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2081102898' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  6 02:28:08 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  6 02:28:08 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2081102898' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  6 02:28:08 np0005548731 nova_compute[232433]: 2025-12-06 07:28:08.767 232437 WARNING nova.virt.libvirt.driver [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  6 02:28:08 np0005548731 nova_compute[232433]: 2025-12-06 07:28:08.769 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4488MB free_disk=20.833885192871094GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, 
"label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  6 02:28:08 np0005548731 nova_compute[232433]: 2025-12-06 07:28:08.769 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:28:08 np0005548731 nova_compute[232433]: 2025-12-06 07:28:08.770 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:28:08 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  6 02:28:08 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1704356668' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  6 02:28:08 np0005548731 nova_compute[232433]: 2025-12-06 07:28:08.953 232437 DEBUG oslo_concurrency.processutils [None req-8299c763-c9a1-4e47-85e8-964e80da1dd7 f6f0d78959484986bab10b40c84e9406 036c977b26cc46cabc5f599cb6c00e9a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.459s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:28:08 np0005548731 nova_compute[232433]: 2025-12-06 07:28:08.955 232437 DEBUG nova.virt.libvirt.vif [None req-8299c763-c9a1-4e47-85e8-964e80da1dd7 f6f0d78959484986bab10b40c84e9406 036c977b26cc46cabc5f599cb6c00e9a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T07:27:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerRescueTestJSONUnderV235-server-965064608',display_name='tempest-ServerRescueTestJSONUnderV235-server-965064608',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serverrescuetestjsonunderv235-server-965064608',id=111,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='036c977b26cc46cabc5f599cb6c00e9a',ramdisk_id='',reservation_id='r-1y4pi8rg',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerRescueTestJSONUnderV235-706658007',owner_user_name='tempest-ServerRescueTestJSONUnderV235-706658007-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:27:47Z,user_data=None,user_id='f6f0d78959484986bab10b40c84e9406',uuid=714e9301-631d-44ba-8f91-a1d65209745b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "012ae37e-970e-4506-8216-7c5339055cf7", "address": "fa:16:3e:01:71:4e", "network": {"id": "248c2772-b8d1-4900-9e05-67104b197a82", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-505599830-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "036c977b26cc46cabc5f599cb6c00e9a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap012ae37e-97", "ovs_interfaceid": "012ae37e-970e-4506-8216-7c5339055cf7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec  6 02:28:08 np0005548731 nova_compute[232433]: 2025-12-06 07:28:08.955 232437 DEBUG nova.network.os_vif_util [None req-8299c763-c9a1-4e47-85e8-964e80da1dd7 f6f0d78959484986bab10b40c84e9406 036c977b26cc46cabc5f599cb6c00e9a - - default default] Converting VIF {"id": "012ae37e-970e-4506-8216-7c5339055cf7", "address": "fa:16:3e:01:71:4e", "network": {"id": "248c2772-b8d1-4900-9e05-67104b197a82", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-505599830-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "036c977b26cc46cabc5f599cb6c00e9a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap012ae37e-97", "ovs_interfaceid": "012ae37e-970e-4506-8216-7c5339055cf7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 02:28:08 np0005548731 nova_compute[232433]: 2025-12-06 07:28:08.956 232437 DEBUG nova.network.os_vif_util [None req-8299c763-c9a1-4e47-85e8-964e80da1dd7 f6f0d78959484986bab10b40c84e9406 036c977b26cc46cabc5f599cb6c00e9a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:01:71:4e,bridge_name='br-int',has_traffic_filtering=True,id=012ae37e-970e-4506-8216-7c5339055cf7,network=Network(248c2772-b8d1-4900-9e05-67104b197a82),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap012ae37e-97') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 02:28:08 np0005548731 nova_compute[232433]: 2025-12-06 07:28:08.958 232437 DEBUG nova.objects.instance [None req-8299c763-c9a1-4e47-85e8-964e80da1dd7 f6f0d78959484986bab10b40c84e9406 036c977b26cc46cabc5f599cb6c00e9a - - default default] Lazy-loading 'pci_devices' on Instance uuid 714e9301-631d-44ba-8f91-a1d65209745b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:28:09 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:28:09 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:28:09 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:28:09.288 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:28:09 np0005548731 nova_compute[232433]: 2025-12-06 07:28:09.824 232437 DEBUG nova.virt.libvirt.driver [None req-8299c763-c9a1-4e47-85e8-964e80da1dd7 f6f0d78959484986bab10b40c84e9406 036c977b26cc46cabc5f599cb6c00e9a - - default default] [instance: 714e9301-631d-44ba-8f91-a1d65209745b] End _get_guest_xml xml=<domain type="kvm">
Dec  6 02:28:09 np0005548731 nova_compute[232433]:  <uuid>714e9301-631d-44ba-8f91-a1d65209745b</uuid>
Dec  6 02:28:09 np0005548731 nova_compute[232433]:  <name>instance-0000006f</name>
Dec  6 02:28:09 np0005548731 nova_compute[232433]:  <memory>131072</memory>
Dec  6 02:28:09 np0005548731 nova_compute[232433]:  <vcpu>1</vcpu>
Dec  6 02:28:09 np0005548731 nova_compute[232433]:  <metadata>
Dec  6 02:28:09 np0005548731 nova_compute[232433]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec  6 02:28:09 np0005548731 nova_compute[232433]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec  6 02:28:09 np0005548731 nova_compute[232433]:      <nova:name>tempest-ServerRescueTestJSONUnderV235-server-965064608</nova:name>
Dec  6 02:28:09 np0005548731 nova_compute[232433]:      <nova:creationTime>2025-12-06 07:28:07</nova:creationTime>
Dec  6 02:28:09 np0005548731 nova_compute[232433]:      <nova:flavor name="m1.nano">
Dec  6 02:28:09 np0005548731 nova_compute[232433]:        <nova:memory>128</nova:memory>
Dec  6 02:28:09 np0005548731 nova_compute[232433]:        <nova:disk>1</nova:disk>
Dec  6 02:28:09 np0005548731 nova_compute[232433]:        <nova:swap>0</nova:swap>
Dec  6 02:28:09 np0005548731 nova_compute[232433]:        <nova:ephemeral>0</nova:ephemeral>
Dec  6 02:28:09 np0005548731 nova_compute[232433]:        <nova:vcpus>1</nova:vcpus>
Dec  6 02:28:09 np0005548731 nova_compute[232433]:      </nova:flavor>
Dec  6 02:28:09 np0005548731 nova_compute[232433]:      <nova:owner>
Dec  6 02:28:09 np0005548731 nova_compute[232433]:        <nova:user uuid="f6f0d78959484986bab10b40c84e9406">tempest-ServerRescueTestJSONUnderV235-706658007-project-member</nova:user>
Dec  6 02:28:09 np0005548731 nova_compute[232433]:        <nova:project uuid="036c977b26cc46cabc5f599cb6c00e9a">tempest-ServerRescueTestJSONUnderV235-706658007</nova:project>
Dec  6 02:28:09 np0005548731 nova_compute[232433]:      </nova:owner>
Dec  6 02:28:09 np0005548731 nova_compute[232433]:      <nova:root type="image" uuid="6efab05d-c7cf-4770-a5c3-c806a2739063"/>
Dec  6 02:28:09 np0005548731 nova_compute[232433]:      <nova:ports>
Dec  6 02:28:09 np0005548731 nova_compute[232433]:        <nova:port uuid="012ae37e-970e-4506-8216-7c5339055cf7">
Dec  6 02:28:09 np0005548731 nova_compute[232433]:          <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Dec  6 02:28:09 np0005548731 nova_compute[232433]:        </nova:port>
Dec  6 02:28:09 np0005548731 nova_compute[232433]:      </nova:ports>
Dec  6 02:28:09 np0005548731 nova_compute[232433]:    </nova:instance>
Dec  6 02:28:09 np0005548731 nova_compute[232433]:  </metadata>
Dec  6 02:28:09 np0005548731 nova_compute[232433]:  <sysinfo type="smbios">
Dec  6 02:28:09 np0005548731 nova_compute[232433]:    <system>
Dec  6 02:28:09 np0005548731 nova_compute[232433]:      <entry name="manufacturer">RDO</entry>
Dec  6 02:28:09 np0005548731 nova_compute[232433]:      <entry name="product">OpenStack Compute</entry>
Dec  6 02:28:09 np0005548731 nova_compute[232433]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec  6 02:28:09 np0005548731 nova_compute[232433]:      <entry name="serial">714e9301-631d-44ba-8f91-a1d65209745b</entry>
Dec  6 02:28:09 np0005548731 nova_compute[232433]:      <entry name="uuid">714e9301-631d-44ba-8f91-a1d65209745b</entry>
Dec  6 02:28:09 np0005548731 nova_compute[232433]:      <entry name="family">Virtual Machine</entry>
Dec  6 02:28:09 np0005548731 nova_compute[232433]:    </system>
Dec  6 02:28:09 np0005548731 nova_compute[232433]:  </sysinfo>
Dec  6 02:28:09 np0005548731 nova_compute[232433]:  <os>
Dec  6 02:28:09 np0005548731 nova_compute[232433]:    <type arch="x86_64" machine="q35">hvm</type>
Dec  6 02:28:09 np0005548731 nova_compute[232433]:    <boot dev="hd"/>
Dec  6 02:28:09 np0005548731 nova_compute[232433]:    <smbios mode="sysinfo"/>
Dec  6 02:28:09 np0005548731 nova_compute[232433]:  </os>
Dec  6 02:28:09 np0005548731 nova_compute[232433]:  <features>
Dec  6 02:28:09 np0005548731 nova_compute[232433]:    <acpi/>
Dec  6 02:28:09 np0005548731 nova_compute[232433]:    <apic/>
Dec  6 02:28:09 np0005548731 nova_compute[232433]:    <vmcoreinfo/>
Dec  6 02:28:09 np0005548731 nova_compute[232433]:  </features>
Dec  6 02:28:09 np0005548731 nova_compute[232433]:  <clock offset="utc">
Dec  6 02:28:09 np0005548731 nova_compute[232433]:    <timer name="pit" tickpolicy="delay"/>
Dec  6 02:28:09 np0005548731 nova_compute[232433]:    <timer name="rtc" tickpolicy="catchup"/>
Dec  6 02:28:09 np0005548731 nova_compute[232433]:    <timer name="hpet" present="no"/>
Dec  6 02:28:09 np0005548731 nova_compute[232433]:  </clock>
Dec  6 02:28:09 np0005548731 nova_compute[232433]:  <cpu mode="custom" match="exact">
Dec  6 02:28:09 np0005548731 nova_compute[232433]:    <model>Nehalem</model>
Dec  6 02:28:09 np0005548731 nova_compute[232433]:    <topology sockets="1" cores="1" threads="1"/>
Dec  6 02:28:09 np0005548731 nova_compute[232433]:  </cpu>
Dec  6 02:28:09 np0005548731 nova_compute[232433]:  <devices>
Dec  6 02:28:09 np0005548731 nova_compute[232433]:    <disk type="network" device="disk">
Dec  6 02:28:09 np0005548731 nova_compute[232433]:      <driver type="raw" cache="none"/>
Dec  6 02:28:09 np0005548731 nova_compute[232433]:      <source protocol="rbd" name="vms/714e9301-631d-44ba-8f91-a1d65209745b_disk">
Dec  6 02:28:09 np0005548731 nova_compute[232433]:        <host name="192.168.122.100" port="6789"/>
Dec  6 02:28:09 np0005548731 nova_compute[232433]:        <host name="192.168.122.102" port="6789"/>
Dec  6 02:28:09 np0005548731 nova_compute[232433]:        <host name="192.168.122.101" port="6789"/>
Dec  6 02:28:09 np0005548731 nova_compute[232433]:      </source>
Dec  6 02:28:09 np0005548731 nova_compute[232433]:      <auth username="openstack">
Dec  6 02:28:09 np0005548731 nova_compute[232433]:        <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec  6 02:28:09 np0005548731 nova_compute[232433]:      </auth>
Dec  6 02:28:09 np0005548731 nova_compute[232433]:      <target dev="vda" bus="virtio"/>
Dec  6 02:28:09 np0005548731 nova_compute[232433]:    </disk>
Dec  6 02:28:09 np0005548731 nova_compute[232433]:    <disk type="network" device="cdrom">
Dec  6 02:28:09 np0005548731 nova_compute[232433]:      <driver type="raw" cache="none"/>
Dec  6 02:28:09 np0005548731 nova_compute[232433]:      <source protocol="rbd" name="vms/714e9301-631d-44ba-8f91-a1d65209745b_disk.config">
Dec  6 02:28:09 np0005548731 nova_compute[232433]:        <host name="192.168.122.100" port="6789"/>
Dec  6 02:28:09 np0005548731 nova_compute[232433]:        <host name="192.168.122.102" port="6789"/>
Dec  6 02:28:09 np0005548731 nova_compute[232433]:        <host name="192.168.122.101" port="6789"/>
Dec  6 02:28:09 np0005548731 nova_compute[232433]:      </source>
Dec  6 02:28:09 np0005548731 nova_compute[232433]:      <auth username="openstack">
Dec  6 02:28:09 np0005548731 nova_compute[232433]:        <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec  6 02:28:09 np0005548731 nova_compute[232433]:      </auth>
Dec  6 02:28:09 np0005548731 nova_compute[232433]:      <target dev="sda" bus="sata"/>
Dec  6 02:28:09 np0005548731 nova_compute[232433]:    </disk>
Dec  6 02:28:09 np0005548731 nova_compute[232433]:    <interface type="ethernet">
Dec  6 02:28:09 np0005548731 nova_compute[232433]:      <mac address="fa:16:3e:01:71:4e"/>
Dec  6 02:28:09 np0005548731 nova_compute[232433]:      <model type="virtio"/>
Dec  6 02:28:09 np0005548731 nova_compute[232433]:      <driver name="vhost" rx_queue_size="512"/>
Dec  6 02:28:09 np0005548731 nova_compute[232433]:      <mtu size="1442"/>
Dec  6 02:28:09 np0005548731 nova_compute[232433]:      <target dev="tap012ae37e-97"/>
Dec  6 02:28:09 np0005548731 nova_compute[232433]:    </interface>
Dec  6 02:28:09 np0005548731 nova_compute[232433]:    <serial type="pty">
Dec  6 02:28:09 np0005548731 nova_compute[232433]:      <log file="/var/lib/nova/instances/714e9301-631d-44ba-8f91-a1d65209745b/console.log" append="off"/>
Dec  6 02:28:09 np0005548731 nova_compute[232433]:    </serial>
Dec  6 02:28:09 np0005548731 nova_compute[232433]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec  6 02:28:09 np0005548731 nova_compute[232433]:    <video>
Dec  6 02:28:09 np0005548731 nova_compute[232433]:      <model type="virtio"/>
Dec  6 02:28:09 np0005548731 nova_compute[232433]:    </video>
Dec  6 02:28:09 np0005548731 nova_compute[232433]:    <input type="tablet" bus="usb"/>
Dec  6 02:28:09 np0005548731 nova_compute[232433]:    <rng model="virtio">
Dec  6 02:28:09 np0005548731 nova_compute[232433]:      <backend model="random">/dev/urandom</backend>
Dec  6 02:28:09 np0005548731 nova_compute[232433]:    </rng>
Dec  6 02:28:09 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root"/>
Dec  6 02:28:09 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:28:09 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:28:09 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:28:09 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:28:09 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:28:09 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:28:09 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:28:09 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:28:09 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:28:09 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:28:09 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:28:09 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:28:09 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:28:09 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:28:09 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:28:09 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:28:09 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:28:09 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:28:09 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:28:09 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:28:09 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:28:09 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:28:09 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:28:09 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:28:09 np0005548731 nova_compute[232433]:    <controller type="usb" index="0"/>
Dec  6 02:28:09 np0005548731 nova_compute[232433]:    <memballoon model="virtio">
Dec  6 02:28:09 np0005548731 nova_compute[232433]:      <stats period="10"/>
Dec  6 02:28:09 np0005548731 nova_compute[232433]:    </memballoon>
Dec  6 02:28:09 np0005548731 nova_compute[232433]:  </devices>
Dec  6 02:28:09 np0005548731 nova_compute[232433]: </domain>
Dec  6 02:28:09 np0005548731 nova_compute[232433]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec  6 02:28:09 np0005548731 nova_compute[232433]: 2025-12-06 07:28:09.826 232437 DEBUG nova.compute.manager [None req-8299c763-c9a1-4e47-85e8-964e80da1dd7 f6f0d78959484986bab10b40c84e9406 036c977b26cc46cabc5f599cb6c00e9a - - default default] [instance: 714e9301-631d-44ba-8f91-a1d65209745b] Preparing to wait for external event network-vif-plugged-012ae37e-970e-4506-8216-7c5339055cf7 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec  6 02:28:09 np0005548731 nova_compute[232433]: 2025-12-06 07:28:09.826 232437 DEBUG oslo_concurrency.lockutils [None req-8299c763-c9a1-4e47-85e8-964e80da1dd7 f6f0d78959484986bab10b40c84e9406 036c977b26cc46cabc5f599cb6c00e9a - - default default] Acquiring lock "714e9301-631d-44ba-8f91-a1d65209745b-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:28:09 np0005548731 nova_compute[232433]: 2025-12-06 07:28:09.826 232437 DEBUG oslo_concurrency.lockutils [None req-8299c763-c9a1-4e47-85e8-964e80da1dd7 f6f0d78959484986bab10b40c84e9406 036c977b26cc46cabc5f599cb6c00e9a - - default default] Lock "714e9301-631d-44ba-8f91-a1d65209745b-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:28:09 np0005548731 nova_compute[232433]: 2025-12-06 07:28:09.827 232437 DEBUG oslo_concurrency.lockutils [None req-8299c763-c9a1-4e47-85e8-964e80da1dd7 f6f0d78959484986bab10b40c84e9406 036c977b26cc46cabc5f599cb6c00e9a - - default default] Lock "714e9301-631d-44ba-8f91-a1d65209745b-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:28:09 np0005548731 nova_compute[232433]: 2025-12-06 07:28:09.827 232437 DEBUG nova.virt.libvirt.vif [None req-8299c763-c9a1-4e47-85e8-964e80da1dd7 f6f0d78959484986bab10b40c84e9406 036c977b26cc46cabc5f599cb6c00e9a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T07:27:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerRescueTestJSONUnderV235-server-965064608',display_name='tempest-ServerRescueTestJSONUnderV235-server-965064608',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serverrescuetestjsonunderv235-server-965064608',id=111,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='036c977b26cc46cabc5f599cb6c00e9a',ramdisk_id='',reservation_id='r-1y4pi8rg',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerRescueTestJSONUnderV235-706658007',owner_user_name='tempest-ServerRescueTestJSONUnderV235-706658007-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:27:47Z,user_data=None,user_id='f6f0d78959484986bab10b40c84e9406',uuid=714e9301-631d-44ba-8f91-a1d65209745b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "012ae37e-970e-4506-8216-7c5339055cf7", "address": "fa:16:3e:01:71:4e", "network": {"id": "248c2772-b8d1-4900-9e05-67104b197a82", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-505599830-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "036c977b26cc46cabc5f599cb6c00e9a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap012ae37e-97", "ovs_interfaceid": "012ae37e-970e-4506-8216-7c5339055cf7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec  6 02:28:09 np0005548731 nova_compute[232433]: 2025-12-06 07:28:09.827 232437 DEBUG nova.network.os_vif_util [None req-8299c763-c9a1-4e47-85e8-964e80da1dd7 f6f0d78959484986bab10b40c84e9406 036c977b26cc46cabc5f599cb6c00e9a - - default default] Converting VIF {"id": "012ae37e-970e-4506-8216-7c5339055cf7", "address": "fa:16:3e:01:71:4e", "network": {"id": "248c2772-b8d1-4900-9e05-67104b197a82", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-505599830-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "036c977b26cc46cabc5f599cb6c00e9a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap012ae37e-97", "ovs_interfaceid": "012ae37e-970e-4506-8216-7c5339055cf7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 02:28:09 np0005548731 nova_compute[232433]: 2025-12-06 07:28:09.828 232437 DEBUG nova.network.os_vif_util [None req-8299c763-c9a1-4e47-85e8-964e80da1dd7 f6f0d78959484986bab10b40c84e9406 036c977b26cc46cabc5f599cb6c00e9a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:01:71:4e,bridge_name='br-int',has_traffic_filtering=True,id=012ae37e-970e-4506-8216-7c5339055cf7,network=Network(248c2772-b8d1-4900-9e05-67104b197a82),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap012ae37e-97') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 02:28:09 np0005548731 nova_compute[232433]: 2025-12-06 07:28:09.828 232437 DEBUG os_vif [None req-8299c763-c9a1-4e47-85e8-964e80da1dd7 f6f0d78959484986bab10b40c84e9406 036c977b26cc46cabc5f599cb6c00e9a - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:01:71:4e,bridge_name='br-int',has_traffic_filtering=True,id=012ae37e-970e-4506-8216-7c5339055cf7,network=Network(248c2772-b8d1-4900-9e05-67104b197a82),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap012ae37e-97') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec  6 02:28:09 np0005548731 nova_compute[232433]: 2025-12-06 07:28:09.829 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:28:09 np0005548731 nova_compute[232433]: 2025-12-06 07:28:09.829 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:28:09 np0005548731 nova_compute[232433]: 2025-12-06 07:28:09.830 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  6 02:28:09 np0005548731 nova_compute[232433]: 2025-12-06 07:28:09.832 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:28:09 np0005548731 nova_compute[232433]: 2025-12-06 07:28:09.832 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap012ae37e-97, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:28:09 np0005548731 nova_compute[232433]: 2025-12-06 07:28:09.833 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap012ae37e-97, col_values=(('external_ids', {'iface-id': '012ae37e-970e-4506-8216-7c5339055cf7', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:01:71:4e', 'vm-uuid': '714e9301-631d-44ba-8f91-a1d65209745b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:28:09 np0005548731 nova_compute[232433]: 2025-12-06 07:28:09.834 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:28:09 np0005548731 NetworkManager[49182]: <info>  [1765006089.8356] manager: (tap012ae37e-97): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/210)
Dec  6 02:28:09 np0005548731 nova_compute[232433]: 2025-12-06 07:28:09.839 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  6 02:28:09 np0005548731 nova_compute[232433]: 2025-12-06 07:28:09.839 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:28:09 np0005548731 nova_compute[232433]: 2025-12-06 07:28:09.840 232437 INFO os_vif [None req-8299c763-c9a1-4e47-85e8-964e80da1dd7 f6f0d78959484986bab10b40c84e9406 036c977b26cc46cabc5f599cb6c00e9a - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:01:71:4e,bridge_name='br-int',has_traffic_filtering=True,id=012ae37e-970e-4506-8216-7c5339055cf7,network=Network(248c2772-b8d1-4900-9e05-67104b197a82),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap012ae37e-97')#033[00m
Dec  6 02:28:10 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:28:10 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:28:10 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:28:10.008 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:28:10 np0005548731 nova_compute[232433]: 2025-12-06 07:28:10.360 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Migration for instance 018b5c4d-2e0a-428b-ac8b-a74236e0d103 refers to another host's instance! _pair_instances_to_migrations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:903#033[00m
Dec  6 02:28:10 np0005548731 nova_compute[232433]: 2025-12-06 07:28:10.401 232437 DEBUG nova.virt.libvirt.driver [None req-8299c763-c9a1-4e47-85e8-964e80da1dd7 f6f0d78959484986bab10b40c84e9406 036c977b26cc46cabc5f599cb6c00e9a - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  6 02:28:10 np0005548731 nova_compute[232433]: 2025-12-06 07:28:10.401 232437 DEBUG nova.virt.libvirt.driver [None req-8299c763-c9a1-4e47-85e8-964e80da1dd7 f6f0d78959484986bab10b40c84e9406 036c977b26cc46cabc5f599cb6c00e9a - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  6 02:28:10 np0005548731 nova_compute[232433]: 2025-12-06 07:28:10.401 232437 DEBUG nova.virt.libvirt.driver [None req-8299c763-c9a1-4e47-85e8-964e80da1dd7 f6f0d78959484986bab10b40c84e9406 036c977b26cc46cabc5f599cb6c00e9a - - default default] No VIF found with MAC fa:16:3e:01:71:4e, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec  6 02:28:10 np0005548731 nova_compute[232433]: 2025-12-06 07:28:10.402 232437 INFO nova.virt.libvirt.driver [None req-8299c763-c9a1-4e47-85e8-964e80da1dd7 f6f0d78959484986bab10b40c84e9406 036c977b26cc46cabc5f599cb6c00e9a - - default default] [instance: 714e9301-631d-44ba-8f91-a1d65209745b] Using config drive#033[00m
Dec  6 02:28:10 np0005548731 nova_compute[232433]: 2025-12-06 07:28:10.438 232437 DEBUG nova.storage.rbd_utils [None req-8299c763-c9a1-4e47-85e8-964e80da1dd7 f6f0d78959484986bab10b40c84e9406 036c977b26cc46cabc5f599cb6c00e9a - - default default] rbd image 714e9301-631d-44ba-8f91-a1d65209745b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:28:10 np0005548731 nova_compute[232433]: 2025-12-06 07:28:10.445 232437 INFO nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] [instance: 018b5c4d-2e0a-428b-ac8b-a74236e0d103] Updating resource usage from migration e9fd380a-9dfd-4e3f-9f3e-f4a350f6e02f#033[00m
Dec  6 02:28:10 np0005548731 nova_compute[232433]: 2025-12-06 07:28:10.445 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] [instance: 018b5c4d-2e0a-428b-ac8b-a74236e0d103] Starting to track incoming migration e9fd380a-9dfd-4e3f-9f3e-f4a350f6e02f with flavor fb97f55a-36c0-42f2-8156-c1b04eb23dd0 _update_usage_from_migration /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1431#033[00m
Dec  6 02:28:10 np0005548731 nova_compute[232433]: 2025-12-06 07:28:10.518 232437 WARNING nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Instance 018b5c4d-2e0a-428b-ac8b-a74236e0d103 has been moved to another host compute-1.ctlplane.example.com(compute-1.ctlplane.example.com). There are allocations remaining against the source host that might need to be removed: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 192, 'VCPU': 1}}.#033[00m
Dec  6 02:28:10 np0005548731 nova_compute[232433]: 2025-12-06 07:28:10.519 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Instance 714e9301-631d-44ba-8f91-a1d65209745b actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  6 02:28:10 np0005548731 nova_compute[232433]: 2025-12-06 07:28:10.519 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  6 02:28:10 np0005548731 nova_compute[232433]: 2025-12-06 07:28:10.519 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7680MB used_ram=832MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  6 02:28:11 np0005548731 nova_compute[232433]: 2025-12-06 07:28:11.001 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:28:11 np0005548731 nova_compute[232433]: 2025-12-06 07:28:11.049 232437 INFO nova.virt.libvirt.driver [None req-8299c763-c9a1-4e47-85e8-964e80da1dd7 f6f0d78959484986bab10b40c84e9406 036c977b26cc46cabc5f599cb6c00e9a - - default default] [instance: 714e9301-631d-44ba-8f91-a1d65209745b] Creating config drive at /var/lib/nova/instances/714e9301-631d-44ba-8f91-a1d65209745b/disk.config#033[00m
Dec  6 02:28:11 np0005548731 nova_compute[232433]: 2025-12-06 07:28:11.055 232437 DEBUG oslo_concurrency.processutils [None req-8299c763-c9a1-4e47-85e8-964e80da1dd7 f6f0d78959484986bab10b40c84e9406 036c977b26cc46cabc5f599cb6c00e9a - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/714e9301-631d-44ba-8f91-a1d65209745b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp4i_dyz8h execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:28:11 np0005548731 nova_compute[232433]: 2025-12-06 07:28:11.184 232437 DEBUG oslo_concurrency.processutils [None req-8299c763-c9a1-4e47-85e8-964e80da1dd7 f6f0d78959484986bab10b40c84e9406 036c977b26cc46cabc5f599cb6c00e9a - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/714e9301-631d-44ba-8f91-a1d65209745b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp4i_dyz8h" returned: 0 in 0.129s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:28:11 np0005548731 nova_compute[232433]: 2025-12-06 07:28:11.217 232437 DEBUG nova.storage.rbd_utils [None req-8299c763-c9a1-4e47-85e8-964e80da1dd7 f6f0d78959484986bab10b40c84e9406 036c977b26cc46cabc5f599cb6c00e9a - - default default] rbd image 714e9301-631d-44ba-8f91-a1d65209745b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:28:11 np0005548731 nova_compute[232433]: 2025-12-06 07:28:11.221 232437 DEBUG oslo_concurrency.processutils [None req-8299c763-c9a1-4e47-85e8-964e80da1dd7 f6f0d78959484986bab10b40c84e9406 036c977b26cc46cabc5f599cb6c00e9a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/714e9301-631d-44ba-8f91-a1d65209745b/disk.config 714e9301-631d-44ba-8f91-a1d65209745b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:28:11 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:28:11 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:28:11 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:28:11.290 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:28:11 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 02:28:11 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3051144945' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 02:28:11 np0005548731 nova_compute[232433]: 2025-12-06 07:28:11.435 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.434s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:28:11 np0005548731 nova_compute[232433]: 2025-12-06 07:28:11.441 232437 DEBUG nova.compute.provider_tree [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Inventory has not changed in ProviderTree for provider: 6d00757a-082f-486d-ae84-869a2ba2e6e7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  6 02:28:11 np0005548731 nova_compute[232433]: 2025-12-06 07:28:11.463 232437 DEBUG nova.scheduler.client.report [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Inventory has not changed for provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  6 02:28:11 np0005548731 nova_compute[232433]: 2025-12-06 07:28:11.492 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  6 02:28:11 np0005548731 nova_compute[232433]: 2025-12-06 07:28:11.492 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.723s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:28:11 np0005548731 nova_compute[232433]: 2025-12-06 07:28:11.493 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:28:11 np0005548731 nova_compute[232433]: 2025-12-06 07:28:11.493 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Dec  6 02:28:11 np0005548731 nova_compute[232433]: 2025-12-06 07:28:11.515 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Dec  6 02:28:11 np0005548731 podman[277589]: 2025-12-06 07:28:11.898134109 +0000 UTC m=+0.061079693 container health_status 26c2ce9bdb83c6db5ccad7f5b2a7cb4e9354ab0235ad2f942ea42c8feda2142b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Dec  6 02:28:11 np0005548731 podman[277591]: 2025-12-06 07:28:11.905329024 +0000 UTC m=+0.064053715 container health_status ae4fd08cdec31d6ae2a6b91ffff7deb6ca5a8375cb52ee4b685cc6f25796824e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Dec  6 02:28:11 np0005548731 podman[277590]: 2025-12-06 07:28:11.930740695 +0000 UTC m=+0.092457169 container health_status a118c3becf56cfbcae208065771ebf10a65162bb7b96389971293e534823fb78 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec  6 02:28:12 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:28:12 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:28:12 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:28:12.012 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:28:12 np0005548731 nova_compute[232433]: 2025-12-06 07:28:12.487 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:28:12 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e270 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:28:13 np0005548731 nova_compute[232433]: 2025-12-06 07:28:13.000 232437 DEBUG oslo_concurrency.processutils [None req-8299c763-c9a1-4e47-85e8-964e80da1dd7 f6f0d78959484986bab10b40c84e9406 036c977b26cc46cabc5f599cb6c00e9a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/714e9301-631d-44ba-8f91-a1d65209745b/disk.config 714e9301-631d-44ba-8f91-a1d65209745b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.779s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:28:13 np0005548731 nova_compute[232433]: 2025-12-06 07:28:13.001 232437 INFO nova.virt.libvirt.driver [None req-8299c763-c9a1-4e47-85e8-964e80da1dd7 f6f0d78959484986bab10b40c84e9406 036c977b26cc46cabc5f599cb6c00e9a - - default default] [instance: 714e9301-631d-44ba-8f91-a1d65209745b] Deleting local config drive /var/lib/nova/instances/714e9301-631d-44ba-8f91-a1d65209745b/disk.config because it was imported into RBD.#033[00m
Dec  6 02:28:13 np0005548731 kernel: tap012ae37e-97: entered promiscuous mode
Dec  6 02:28:13 np0005548731 NetworkManager[49182]: <info>  [1765006093.0542] manager: (tap012ae37e-97): new Tun device (/org/freedesktop/NetworkManager/Devices/211)
Dec  6 02:28:13 np0005548731 nova_compute[232433]: 2025-12-06 07:28:13.052 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:28:13 np0005548731 ovn_controller[133927]: 2025-12-06T07:28:13Z|00438|binding|INFO|Claiming lport 012ae37e-970e-4506-8216-7c5339055cf7 for this chassis.
Dec  6 02:28:13 np0005548731 ovn_controller[133927]: 2025-12-06T07:28:13Z|00439|binding|INFO|012ae37e-970e-4506-8216-7c5339055cf7: Claiming fa:16:3e:01:71:4e 10.100.0.10
Dec  6 02:28:13 np0005548731 nova_compute[232433]: 2025-12-06 07:28:13.057 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:28:13 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:28:13.069 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:01:71:4e 10.100.0.10'], port_security=['fa:16:3e:01:71:4e 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '714e9301-631d-44ba-8f91-a1d65209745b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-248c2772-b8d1-4900-9e05-67104b197a82', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '036c977b26cc46cabc5f599cb6c00e9a', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'e899a491-7b15-4e66-adca-8deec8c4c859', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4eb80078-9659-4d3e-aaa8-ad516bb7d903, chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], logical_port=012ae37e-970e-4506-8216-7c5339055cf7) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 02:28:13 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:28:13.071 143965 INFO neutron.agent.ovn.metadata.agent [-] Port 012ae37e-970e-4506-8216-7c5339055cf7 in datapath 248c2772-b8d1-4900-9e05-67104b197a82 bound to our chassis#033[00m
Dec  6 02:28:13 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:28:13.072 143965 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 248c2772-b8d1-4900-9e05-67104b197a82 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m
Dec  6 02:28:13 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:28:13.073 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[c68c1d71-3c9e-4325-a489-1b86747f36d8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:28:13 np0005548731 systemd-udevd[277669]: Network interface NamePolicy= disabled on kernel command line.
Dec  6 02:28:13 np0005548731 systemd-machined[195355]: New machine qemu-48-instance-0000006f.
Dec  6 02:28:13 np0005548731 NetworkManager[49182]: <info>  [1765006093.0910] device (tap012ae37e-97): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec  6 02:28:13 np0005548731 NetworkManager[49182]: <info>  [1765006093.0918] device (tap012ae37e-97): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec  6 02:28:13 np0005548731 systemd[1]: Started Virtual Machine qemu-48-instance-0000006f.
Dec  6 02:28:13 np0005548731 nova_compute[232433]: 2025-12-06 07:28:13.120 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:28:13 np0005548731 ovn_controller[133927]: 2025-12-06T07:28:13Z|00440|binding|INFO|Setting lport 012ae37e-970e-4506-8216-7c5339055cf7 ovn-installed in OVS
Dec  6 02:28:13 np0005548731 ovn_controller[133927]: 2025-12-06T07:28:13Z|00441|binding|INFO|Setting lport 012ae37e-970e-4506-8216-7c5339055cf7 up in Southbound
Dec  6 02:28:13 np0005548731 nova_compute[232433]: 2025-12-06 07:28:13.130 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:28:13 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:28:13 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:28:13 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:28:13.293 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:28:13 np0005548731 nova_compute[232433]: 2025-12-06 07:28:13.515 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:28:13 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 02:28:13 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 02:28:13 np0005548731 nova_compute[232433]: 2025-12-06 07:28:13.830 232437 DEBUG nova.virt.driver [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Emitting event <LifecycleEvent: 1765006093.8295305, 714e9301-631d-44ba-8f91-a1d65209745b => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 02:28:13 np0005548731 nova_compute[232433]: 2025-12-06 07:28:13.831 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 714e9301-631d-44ba-8f91-a1d65209745b] VM Started (Lifecycle Event)#033[00m
Dec  6 02:28:13 np0005548731 nova_compute[232433]: 2025-12-06 07:28:13.874 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 714e9301-631d-44ba-8f91-a1d65209745b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:28:13 np0005548731 nova_compute[232433]: 2025-12-06 07:28:13.877 232437 DEBUG nova.virt.driver [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Emitting event <LifecycleEvent: 1765006093.832815, 714e9301-631d-44ba-8f91-a1d65209745b => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 02:28:13 np0005548731 nova_compute[232433]: 2025-12-06 07:28:13.877 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 714e9301-631d-44ba-8f91-a1d65209745b] VM Paused (Lifecycle Event)#033[00m
Dec  6 02:28:13 np0005548731 nova_compute[232433]: 2025-12-06 07:28:13.911 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 714e9301-631d-44ba-8f91-a1d65209745b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:28:13 np0005548731 nova_compute[232433]: 2025-12-06 07:28:13.914 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 714e9301-631d-44ba-8f91-a1d65209745b] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  6 02:28:13 np0005548731 nova_compute[232433]: 2025-12-06 07:28:13.964 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 714e9301-631d-44ba-8f91-a1d65209745b] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  6 02:28:13 np0005548731 nova_compute[232433]: 2025-12-06 07:28:13.978 232437 DEBUG nova.compute.manager [req-b1438033-0b93-4eb3-8266-8dbb6c8fed41 req-48053554-da87-4e9b-a257-92c0cf4dcc3f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 714e9301-631d-44ba-8f91-a1d65209745b] Received event network-vif-plugged-012ae37e-970e-4506-8216-7c5339055cf7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:28:13 np0005548731 nova_compute[232433]: 2025-12-06 07:28:13.979 232437 DEBUG oslo_concurrency.lockutils [req-b1438033-0b93-4eb3-8266-8dbb6c8fed41 req-48053554-da87-4e9b-a257-92c0cf4dcc3f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "714e9301-631d-44ba-8f91-a1d65209745b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:28:13 np0005548731 nova_compute[232433]: 2025-12-06 07:28:13.979 232437 DEBUG oslo_concurrency.lockutils [req-b1438033-0b93-4eb3-8266-8dbb6c8fed41 req-48053554-da87-4e9b-a257-92c0cf4dcc3f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "714e9301-631d-44ba-8f91-a1d65209745b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:28:13 np0005548731 nova_compute[232433]: 2025-12-06 07:28:13.980 232437 DEBUG oslo_concurrency.lockutils [req-b1438033-0b93-4eb3-8266-8dbb6c8fed41 req-48053554-da87-4e9b-a257-92c0cf4dcc3f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "714e9301-631d-44ba-8f91-a1d65209745b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:28:13 np0005548731 nova_compute[232433]: 2025-12-06 07:28:13.981 232437 DEBUG nova.compute.manager [req-b1438033-0b93-4eb3-8266-8dbb6c8fed41 req-48053554-da87-4e9b-a257-92c0cf4dcc3f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 714e9301-631d-44ba-8f91-a1d65209745b] Processing event network-vif-plugged-012ae37e-970e-4506-8216-7c5339055cf7 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Dec  6 02:28:13 np0005548731 nova_compute[232433]: 2025-12-06 07:28:13.982 232437 DEBUG nova.compute.manager [None req-8299c763-c9a1-4e47-85e8-964e80da1dd7 f6f0d78959484986bab10b40c84e9406 036c977b26cc46cabc5f599cb6c00e9a - - default default] [instance: 714e9301-631d-44ba-8f91-a1d65209745b] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec  6 02:28:13 np0005548731 nova_compute[232433]: 2025-12-06 07:28:13.986 232437 DEBUG nova.virt.driver [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Emitting event <LifecycleEvent: 1765006093.9860554, 714e9301-631d-44ba-8f91-a1d65209745b => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 02:28:13 np0005548731 nova_compute[232433]: 2025-12-06 07:28:13.986 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 714e9301-631d-44ba-8f91-a1d65209745b] VM Resumed (Lifecycle Event)#033[00m
Dec  6 02:28:13 np0005548731 nova_compute[232433]: 2025-12-06 07:28:13.989 232437 DEBUG nova.virt.libvirt.driver [None req-8299c763-c9a1-4e47-85e8-964e80da1dd7 f6f0d78959484986bab10b40c84e9406 036c977b26cc46cabc5f599cb6c00e9a - - default default] [instance: 714e9301-631d-44ba-8f91-a1d65209745b] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Dec  6 02:28:13 np0005548731 nova_compute[232433]: 2025-12-06 07:28:13.992 232437 INFO nova.virt.libvirt.driver [-] [instance: 714e9301-631d-44ba-8f91-a1d65209745b] Instance spawned successfully.#033[00m
Dec  6 02:28:13 np0005548731 nova_compute[232433]: 2025-12-06 07:28:13.993 232437 DEBUG nova.virt.libvirt.driver [None req-8299c763-c9a1-4e47-85e8-964e80da1dd7 f6f0d78959484986bab10b40c84e9406 036c977b26cc46cabc5f599cb6c00e9a - - default default] [instance: 714e9301-631d-44ba-8f91-a1d65209745b] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Dec  6 02:28:14 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:28:14 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:28:14 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:28:14.016 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:28:14 np0005548731 nova_compute[232433]: 2025-12-06 07:28:14.034 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 714e9301-631d-44ba-8f91-a1d65209745b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec  6 02:28:14 np0005548731 nova_compute[232433]: 2025-12-06 07:28:14.039 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 714e9301-631d-44ba-8f91-a1d65209745b] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec  6 02:28:14 np0005548731 nova_compute[232433]: 2025-12-06 07:28:14.042 232437 DEBUG nova.virt.libvirt.driver [None req-8299c763-c9a1-4e47-85e8-964e80da1dd7 f6f0d78959484986bab10b40c84e9406 036c977b26cc46cabc5f599cb6c00e9a - - default default] [instance: 714e9301-631d-44ba-8f91-a1d65209745b] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec  6 02:28:14 np0005548731 nova_compute[232433]: 2025-12-06 07:28:14.043 232437 DEBUG nova.virt.libvirt.driver [None req-8299c763-c9a1-4e47-85e8-964e80da1dd7 f6f0d78959484986bab10b40c84e9406 036c977b26cc46cabc5f599cb6c00e9a - - default default] [instance: 714e9301-631d-44ba-8f91-a1d65209745b] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec  6 02:28:14 np0005548731 nova_compute[232433]: 2025-12-06 07:28:14.043 232437 DEBUG nova.virt.libvirt.driver [None req-8299c763-c9a1-4e47-85e8-964e80da1dd7 f6f0d78959484986bab10b40c84e9406 036c977b26cc46cabc5f599cb6c00e9a - - default default] [instance: 714e9301-631d-44ba-8f91-a1d65209745b] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec  6 02:28:14 np0005548731 nova_compute[232433]: 2025-12-06 07:28:14.044 232437 DEBUG nova.virt.libvirt.driver [None req-8299c763-c9a1-4e47-85e8-964e80da1dd7 f6f0d78959484986bab10b40c84e9406 036c977b26cc46cabc5f599cb6c00e9a - - default default] [instance: 714e9301-631d-44ba-8f91-a1d65209745b] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec  6 02:28:14 np0005548731 nova_compute[232433]: 2025-12-06 07:28:14.044 232437 DEBUG nova.virt.libvirt.driver [None req-8299c763-c9a1-4e47-85e8-964e80da1dd7 f6f0d78959484986bab10b40c84e9406 036c977b26cc46cabc5f599cb6c00e9a - - default default] [instance: 714e9301-631d-44ba-8f91-a1d65209745b] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec  6 02:28:14 np0005548731 nova_compute[232433]: 2025-12-06 07:28:14.045 232437 DEBUG nova.virt.libvirt.driver [None req-8299c763-c9a1-4e47-85e8-964e80da1dd7 f6f0d78959484986bab10b40c84e9406 036c977b26cc46cabc5f599cb6c00e9a - - default default] [instance: 714e9301-631d-44ba-8f91-a1d65209745b] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec  6 02:28:14 np0005548731 nova_compute[232433]: 2025-12-06 07:28:14.077 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 714e9301-631d-44ba-8f91-a1d65209745b] During sync_power_state the instance has a pending task (spawning). Skip.
Dec  6 02:28:14 np0005548731 nova_compute[232433]: 2025-12-06 07:28:14.211 232437 INFO nova.compute.manager [None req-8299c763-c9a1-4e47-85e8-964e80da1dd7 f6f0d78959484986bab10b40c84e9406 036c977b26cc46cabc5f599cb6c00e9a - - default default] [instance: 714e9301-631d-44ba-8f91-a1d65209745b] Took 26.43 seconds to spawn the instance on the hypervisor.
Dec  6 02:28:14 np0005548731 nova_compute[232433]: 2025-12-06 07:28:14.211 232437 DEBUG nova.compute.manager [None req-8299c763-c9a1-4e47-85e8-964e80da1dd7 f6f0d78959484986bab10b40c84e9406 036c977b26cc46cabc5f599cb6c00e9a - - default default] [instance: 714e9301-631d-44ba-8f91-a1d65209745b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec  6 02:28:14 np0005548731 nova_compute[232433]: 2025-12-06 07:28:14.288 232437 INFO nova.compute.manager [None req-8299c763-c9a1-4e47-85e8-964e80da1dd7 f6f0d78959484986bab10b40c84e9406 036c977b26cc46cabc5f599cb6c00e9a - - default default] [instance: 714e9301-631d-44ba-8f91-a1d65209745b] Took 27.54 seconds to build instance.
Dec  6 02:28:14 np0005548731 nova_compute[232433]: 2025-12-06 07:28:14.343 232437 DEBUG oslo_concurrency.lockutils [None req-8299c763-c9a1-4e47-85e8-964e80da1dd7 f6f0d78959484986bab10b40c84e9406 036c977b26cc46cabc5f599cb6c00e9a - - default default] Lock "714e9301-631d-44ba-8f91-a1d65209745b" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 27.692s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  6 02:28:14 np0005548731 nova_compute[232433]: 2025-12-06 07:28:14.834 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 02:28:15 np0005548731 nova_compute[232433]: 2025-12-06 07:28:15.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  6 02:28:15 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 02:28:15 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 02:28:15 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:28:15 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:28:15 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:28:15.294 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:28:16 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:28:16 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:28:16 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:28:16.019 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:28:16 np0005548731 nova_compute[232433]: 2025-12-06 07:28:16.049 232437 INFO nova.compute.manager [None req-81f128a9-62a2-4499-bcfe-d4b14b063a07 f6f0d78959484986bab10b40c84e9406 036c977b26cc46cabc5f599cb6c00e9a - - default default] [instance: 714e9301-631d-44ba-8f91-a1d65209745b] Rescuing
Dec  6 02:28:16 np0005548731 nova_compute[232433]: 2025-12-06 07:28:16.050 232437 DEBUG oslo_concurrency.lockutils [None req-81f128a9-62a2-4499-bcfe-d4b14b063a07 f6f0d78959484986bab10b40c84e9406 036c977b26cc46cabc5f599cb6c00e9a - - default default] Acquiring lock "refresh_cache-714e9301-631d-44ba-8f91-a1d65209745b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec  6 02:28:16 np0005548731 nova_compute[232433]: 2025-12-06 07:28:16.050 232437 DEBUG oslo_concurrency.lockutils [None req-81f128a9-62a2-4499-bcfe-d4b14b063a07 f6f0d78959484986bab10b40c84e9406 036c977b26cc46cabc5f599cb6c00e9a - - default default] Acquired lock "refresh_cache-714e9301-631d-44ba-8f91-a1d65209745b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec  6 02:28:16 np0005548731 nova_compute[232433]: 2025-12-06 07:28:16.050 232437 DEBUG nova.network.neutron [None req-81f128a9-62a2-4499-bcfe-d4b14b063a07 f6f0d78959484986bab10b40c84e9406 036c977b26cc46cabc5f599cb6c00e9a - - default default] [instance: 714e9301-631d-44ba-8f91-a1d65209745b] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec  6 02:28:16 np0005548731 nova_compute[232433]: 2025-12-06 07:28:16.150 232437 DEBUG nova.compute.manager [req-78005150-c22f-4a6e-9a91-31df5f77a628 req-9fb4c346-9c30-4b38-8887-bc2ab6f65c20 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 714e9301-631d-44ba-8f91-a1d65209745b] Received event network-vif-plugged-012ae37e-970e-4506-8216-7c5339055cf7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec  6 02:28:16 np0005548731 nova_compute[232433]: 2025-12-06 07:28:16.151 232437 DEBUG oslo_concurrency.lockutils [req-78005150-c22f-4a6e-9a91-31df5f77a628 req-9fb4c346-9c30-4b38-8887-bc2ab6f65c20 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "714e9301-631d-44ba-8f91-a1d65209745b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  6 02:28:16 np0005548731 nova_compute[232433]: 2025-12-06 07:28:16.151 232437 DEBUG oslo_concurrency.lockutils [req-78005150-c22f-4a6e-9a91-31df5f77a628 req-9fb4c346-9c30-4b38-8887-bc2ab6f65c20 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "714e9301-631d-44ba-8f91-a1d65209745b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  6 02:28:16 np0005548731 nova_compute[232433]: 2025-12-06 07:28:16.151 232437 DEBUG oslo_concurrency.lockutils [req-78005150-c22f-4a6e-9a91-31df5f77a628 req-9fb4c346-9c30-4b38-8887-bc2ab6f65c20 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "714e9301-631d-44ba-8f91-a1d65209745b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  6 02:28:16 np0005548731 nova_compute[232433]: 2025-12-06 07:28:16.151 232437 DEBUG nova.compute.manager [req-78005150-c22f-4a6e-9a91-31df5f77a628 req-9fb4c346-9c30-4b38-8887-bc2ab6f65c20 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 714e9301-631d-44ba-8f91-a1d65209745b] No waiting events found dispatching network-vif-plugged-012ae37e-970e-4506-8216-7c5339055cf7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec  6 02:28:16 np0005548731 nova_compute[232433]: 2025-12-06 07:28:16.152 232437 WARNING nova.compute.manager [req-78005150-c22f-4a6e-9a91-31df5f77a628 req-9fb4c346-9c30-4b38-8887-bc2ab6f65c20 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 714e9301-631d-44ba-8f91-a1d65209745b] Received unexpected event network-vif-plugged-012ae37e-970e-4506-8216-7c5339055cf7 for instance with vm_state active and task_state rescuing.
Dec  6 02:28:17 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:28:17 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:28:17 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:28:17.297 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:28:17 np0005548731 nova_compute[232433]: 2025-12-06 07:28:17.488 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 02:28:17 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e270 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:28:18 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:28:18 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:28:18 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:28:18.022 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:28:18 np0005548731 nova_compute[232433]: 2025-12-06 07:28:18.427 232437 DEBUG nova.network.neutron [None req-81f128a9-62a2-4499-bcfe-d4b14b063a07 f6f0d78959484986bab10b40c84e9406 036c977b26cc46cabc5f599cb6c00e9a - - default default] [instance: 714e9301-631d-44ba-8f91-a1d65209745b] Updating instance_info_cache with network_info: [{"id": "012ae37e-970e-4506-8216-7c5339055cf7", "address": "fa:16:3e:01:71:4e", "network": {"id": "248c2772-b8d1-4900-9e05-67104b197a82", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-505599830-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "036c977b26cc46cabc5f599cb6c00e9a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap012ae37e-97", "ovs_interfaceid": "012ae37e-970e-4506-8216-7c5339055cf7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec  6 02:28:18 np0005548731 nova_compute[232433]: 2025-12-06 07:28:18.574 232437 DEBUG oslo_concurrency.lockutils [None req-81f128a9-62a2-4499-bcfe-d4b14b063a07 f6f0d78959484986bab10b40c84e9406 036c977b26cc46cabc5f599cb6c00e9a - - default default] Releasing lock "refresh_cache-714e9301-631d-44ba-8f91-a1d65209745b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec  6 02:28:19 np0005548731 nova_compute[232433]: 2025-12-06 07:28:19.126 232437 DEBUG nova.virt.libvirt.driver [None req-81f128a9-62a2-4499-bcfe-d4b14b063a07 f6f0d78959484986bab10b40c84e9406 036c977b26cc46cabc5f599cb6c00e9a - - default default] [instance: 714e9301-631d-44ba-8f91-a1d65209745b] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Dec  6 02:28:19 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:28:19 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:28:19 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:28:19.299 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:28:19 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  6 02:28:19 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 02:28:19 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  6 02:28:19 np0005548731 nova_compute[232433]: 2025-12-06 07:28:19.836 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 02:28:20 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:28:20 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:28:20 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:28:20.025 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:28:21 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:28:21 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:28:21 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:28:21.301 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:28:21 np0005548731 nova_compute[232433]: 2025-12-06 07:28:21.400 232437 INFO nova.network.neutron [None req-e6c9ed57-aed4-4ef0-98d2-c9bde68f5212 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] [instance: 018b5c4d-2e0a-428b-ac8b-a74236e0d103] Updating port cf525aa5-a11a-4218-98dd-328546551976 with attributes {'binding:host_id': 'compute-2.ctlplane.example.com', 'device_owner': 'compute:nova'}
Dec  6 02:28:22 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:28:22 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:28:22 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:28:22.028 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:28:22 np0005548731 nova_compute[232433]: 2025-12-06 07:28:22.489 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 02:28:22 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e270 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:28:23 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:28:23 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:28:23 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:28:23.302 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:28:24 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:28:24 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:28:24 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:28:24.031 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:28:24 np0005548731 nova_compute[232433]: 2025-12-06 07:28:24.566 232437 DEBUG nova.compute.manager [req-f1e3cd3c-6ace-4a4d-9606-868879a2f81e req-a8a782d6-fb6d-4dc6-9693-13a1a6cc8ff6 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 018b5c4d-2e0a-428b-ac8b-a74236e0d103] Received event network-vif-unplugged-cf525aa5-a11a-4218-98dd-328546551976 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec  6 02:28:24 np0005548731 nova_compute[232433]: 2025-12-06 07:28:24.567 232437 DEBUG oslo_concurrency.lockutils [req-f1e3cd3c-6ace-4a4d-9606-868879a2f81e req-a8a782d6-fb6d-4dc6-9693-13a1a6cc8ff6 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "018b5c4d-2e0a-428b-ac8b-a74236e0d103-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  6 02:28:24 np0005548731 nova_compute[232433]: 2025-12-06 07:28:24.567 232437 DEBUG oslo_concurrency.lockutils [req-f1e3cd3c-6ace-4a4d-9606-868879a2f81e req-a8a782d6-fb6d-4dc6-9693-13a1a6cc8ff6 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "018b5c4d-2e0a-428b-ac8b-a74236e0d103-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  6 02:28:24 np0005548731 nova_compute[232433]: 2025-12-06 07:28:24.567 232437 DEBUG oslo_concurrency.lockutils [req-f1e3cd3c-6ace-4a4d-9606-868879a2f81e req-a8a782d6-fb6d-4dc6-9693-13a1a6cc8ff6 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "018b5c4d-2e0a-428b-ac8b-a74236e0d103-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  6 02:28:24 np0005548731 nova_compute[232433]: 2025-12-06 07:28:24.567 232437 DEBUG nova.compute.manager [req-f1e3cd3c-6ace-4a4d-9606-868879a2f81e req-a8a782d6-fb6d-4dc6-9693-13a1a6cc8ff6 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 018b5c4d-2e0a-428b-ac8b-a74236e0d103] No waiting events found dispatching network-vif-unplugged-cf525aa5-a11a-4218-98dd-328546551976 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec  6 02:28:24 np0005548731 nova_compute[232433]: 2025-12-06 07:28:24.568 232437 WARNING nova.compute.manager [req-f1e3cd3c-6ace-4a4d-9606-868879a2f81e req-a8a782d6-fb6d-4dc6-9693-13a1a6cc8ff6 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 018b5c4d-2e0a-428b-ac8b-a74236e0d103] Received unexpected event network-vif-unplugged-cf525aa5-a11a-4218-98dd-328546551976 for instance with vm_state active and task_state resize_migrated.
Dec  6 02:28:24 np0005548731 nova_compute[232433]: 2025-12-06 07:28:24.838 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 02:28:25 np0005548731 nova_compute[232433]: 2025-12-06 07:28:25.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  6 02:28:25 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:28:25 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:28:25 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:28:25.304 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:28:26 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:28:26 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:28:26 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:28:26.034 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:28:26 np0005548731 nova_compute[232433]: 2025-12-06 07:28:26.910 232437 DEBUG nova.compute.manager [req-a9b34cf5-ac39-4ddc-bd19-4f4b4f19512a req-c6c74a55-be15-4dae-ac0c-ce0d801dcd80 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 018b5c4d-2e0a-428b-ac8b-a74236e0d103] Received event network-vif-plugged-cf525aa5-a11a-4218-98dd-328546551976 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec  6 02:28:26 np0005548731 nova_compute[232433]: 2025-12-06 07:28:26.911 232437 DEBUG oslo_concurrency.lockutils [req-a9b34cf5-ac39-4ddc-bd19-4f4b4f19512a req-c6c74a55-be15-4dae-ac0c-ce0d801dcd80 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "018b5c4d-2e0a-428b-ac8b-a74236e0d103-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  6 02:28:26 np0005548731 nova_compute[232433]: 2025-12-06 07:28:26.911 232437 DEBUG oslo_concurrency.lockutils [req-a9b34cf5-ac39-4ddc-bd19-4f4b4f19512a req-c6c74a55-be15-4dae-ac0c-ce0d801dcd80 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "018b5c4d-2e0a-428b-ac8b-a74236e0d103-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  6 02:28:26 np0005548731 nova_compute[232433]: 2025-12-06 07:28:26.911 232437 DEBUG oslo_concurrency.lockutils [req-a9b34cf5-ac39-4ddc-bd19-4f4b4f19512a req-c6c74a55-be15-4dae-ac0c-ce0d801dcd80 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "018b5c4d-2e0a-428b-ac8b-a74236e0d103-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  6 02:28:26 np0005548731 nova_compute[232433]: 2025-12-06 07:28:26.911 232437 DEBUG nova.compute.manager [req-a9b34cf5-ac39-4ddc-bd19-4f4b4f19512a req-c6c74a55-be15-4dae-ac0c-ce0d801dcd80 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 018b5c4d-2e0a-428b-ac8b-a74236e0d103] No waiting events found dispatching network-vif-plugged-cf525aa5-a11a-4218-98dd-328546551976 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec  6 02:28:26 np0005548731 nova_compute[232433]: 2025-12-06 07:28:26.912 232437 WARNING nova.compute.manager [req-a9b34cf5-ac39-4ddc-bd19-4f4b4f19512a req-c6c74a55-be15-4dae-ac0c-ce0d801dcd80 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 018b5c4d-2e0a-428b-ac8b-a74236e0d103] Received unexpected event network-vif-plugged-cf525aa5-a11a-4218-98dd-328546551976 for instance with vm_state active and task_state resize_migrated.
Dec  6 02:28:27 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:28:27 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:28:27 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:28:27.306 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:28:27 np0005548731 nova_compute[232433]: 2025-12-06 07:28:27.466 232437 DEBUG oslo_concurrency.lockutils [None req-e6c9ed57-aed4-4ef0-98d2-c9bde68f5212 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Acquiring lock "refresh_cache-018b5c4d-2e0a-428b-ac8b-a74236e0d103" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec  6 02:28:27 np0005548731 nova_compute[232433]: 2025-12-06 07:28:27.467 232437 DEBUG oslo_concurrency.lockutils [None req-e6c9ed57-aed4-4ef0-98d2-c9bde68f5212 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Acquired lock "refresh_cache-018b5c4d-2e0a-428b-ac8b-a74236e0d103" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec  6 02:28:27 np0005548731 nova_compute[232433]: 2025-12-06 07:28:27.467 232437 DEBUG nova.network.neutron [None req-e6c9ed57-aed4-4ef0-98d2-c9bde68f5212 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] [instance: 018b5c4d-2e0a-428b-ac8b-a74236e0d103] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec  6 02:28:27 np0005548731 nova_compute[232433]: 2025-12-06 07:28:27.491 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 02:28:27 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e270 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:28:28 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:28:28 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 02:28:28 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:28:28.037 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 02:28:29 np0005548731 nova_compute[232433]: 2025-12-06 07:28:29.170 232437 DEBUG nova.virt.libvirt.driver [None req-81f128a9-62a2-4499-bcfe-d4b14b063a07 f6f0d78959484986bab10b40c84e9406 036c977b26cc46cabc5f599cb6c00e9a - - default default] [instance: 714e9301-631d-44ba-8f91-a1d65209745b] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Dec  6 02:28:29 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:28:29 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:28:29 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:28:29.307 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:28:29 np0005548731 nova_compute[232433]: 2025-12-06 07:28:29.839 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 02:28:30 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:28:30 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:28:30 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:28:30.041 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:28:30 np0005548731 nova_compute[232433]: 2025-12-06 07:28:30.433 232437 DEBUG nova.network.neutron [None req-e6c9ed57-aed4-4ef0-98d2-c9bde68f5212 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] [instance: 018b5c4d-2e0a-428b-ac8b-a74236e0d103] Updating instance_info_cache with network_info: [{"id": "cf525aa5-a11a-4218-98dd-328546551976", "address": "fa:16:3e:01:c9:f4", "network": {"id": "7c014e4e-a182-4f60-8285-20525bc99e5a", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-602234112-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "88f5b34244614321a9b6e902eaba0ece", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcf525aa5-a1", "ovs_interfaceid": "cf525aa5-a11a-4218-98dd-328546551976", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec  6 02:28:31 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:28:31 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:28:31 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:28:31.309 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:28:31 np0005548731 nova_compute[232433]: 2025-12-06 07:28:31.506 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:28:31 np0005548731 nova_compute[232433]: 2025-12-06 07:28:31.507 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Dec  6 02:28:31 np0005548731 nova_compute[232433]: 2025-12-06 07:28:31.846 232437 DEBUG nova.compute.manager [req-8d741eba-af64-4086-bdb2-657fac0d6357 req-dad847fe-cdb0-44a6-990b-2308244564b2 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 018b5c4d-2e0a-428b-ac8b-a74236e0d103] Received event network-changed-cf525aa5-a11a-4218-98dd-328546551976 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:28:31 np0005548731 nova_compute[232433]: 2025-12-06 07:28:31.847 232437 DEBUG nova.compute.manager [req-8d741eba-af64-4086-bdb2-657fac0d6357 req-dad847fe-cdb0-44a6-990b-2308244564b2 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 018b5c4d-2e0a-428b-ac8b-a74236e0d103] Refreshing instance network info cache due to event network-changed-cf525aa5-a11a-4218-98dd-328546551976. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  6 02:28:31 np0005548731 nova_compute[232433]: 2025-12-06 07:28:31.847 232437 DEBUG oslo_concurrency.lockutils [req-8d741eba-af64-4086-bdb2-657fac0d6357 req-dad847fe-cdb0-44a6-990b-2308244564b2 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-018b5c4d-2e0a-428b-ac8b-a74236e0d103" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 02:28:31 np0005548731 nova_compute[232433]: 2025-12-06 07:28:31.914 232437 DEBUG oslo_concurrency.lockutils [None req-e6c9ed57-aed4-4ef0-98d2-c9bde68f5212 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Releasing lock "refresh_cache-018b5c4d-2e0a-428b-ac8b-a74236e0d103" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 02:28:31 np0005548731 nova_compute[232433]: 2025-12-06 07:28:31.917 232437 DEBUG oslo_concurrency.lockutils [req-8d741eba-af64-4086-bdb2-657fac0d6357 req-dad847fe-cdb0-44a6-990b-2308244564b2 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-018b5c4d-2e0a-428b-ac8b-a74236e0d103" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 02:28:31 np0005548731 nova_compute[232433]: 2025-12-06 07:28:31.917 232437 DEBUG nova.network.neutron [req-8d741eba-af64-4086-bdb2-657fac0d6357 req-dad847fe-cdb0-44a6-990b-2308244564b2 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 018b5c4d-2e0a-428b-ac8b-a74236e0d103] Refreshing network info cache for port cf525aa5-a11a-4218-98dd-328546551976 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  6 02:28:32 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 02:28:32 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:28:32 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:28:32 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:28:32.043 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:28:32 np0005548731 nova_compute[232433]: 2025-12-06 07:28:32.299 232437 DEBUG nova.virt.libvirt.driver [None req-e6c9ed57-aed4-4ef0-98d2-c9bde68f5212 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] [instance: 018b5c4d-2e0a-428b-ac8b-a74236e0d103] Starting finish_migration finish_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11698#033[00m
Dec  6 02:28:32 np0005548731 nova_compute[232433]: 2025-12-06 07:28:32.301 232437 DEBUG nova.virt.libvirt.driver [None req-e6c9ed57-aed4-4ef0-98d2-c9bde68f5212 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] [instance: 018b5c4d-2e0a-428b-ac8b-a74236e0d103] Instance directory exists: not creating _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4719#033[00m
Dec  6 02:28:32 np0005548731 nova_compute[232433]: 2025-12-06 07:28:32.301 232437 INFO nova.virt.libvirt.driver [None req-e6c9ed57-aed4-4ef0-98d2-c9bde68f5212 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] [instance: 018b5c4d-2e0a-428b-ac8b-a74236e0d103] Creating image(s)#033[00m
Dec  6 02:28:32 np0005548731 nova_compute[232433]: 2025-12-06 07:28:32.337 232437 DEBUG nova.storage.rbd_utils [None req-e6c9ed57-aed4-4ef0-98d2-c9bde68f5212 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] creating snapshot(nova-resize) on rbd image(018b5c4d-2e0a-428b-ac8b-a74236e0d103_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Dec  6 02:28:32 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e270 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:28:33 np0005548731 nova_compute[232433]: 2025-12-06 07:28:33.206 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:28:33 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:28:33 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:28:33 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:28:33.309 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:28:34 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:28:34 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:28:34 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:28:34.046 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:28:34 np0005548731 nova_compute[232433]: 2025-12-06 07:28:34.841 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:28:35 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:28:35 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:28:35 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:28:35.310 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:28:35 np0005548731 nova_compute[232433]: 2025-12-06 07:28:35.888 232437 DEBUG nova.network.neutron [req-8d741eba-af64-4086-bdb2-657fac0d6357 req-dad847fe-cdb0-44a6-990b-2308244564b2 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 018b5c4d-2e0a-428b-ac8b-a74236e0d103] Updated VIF entry in instance network info cache for port cf525aa5-a11a-4218-98dd-328546551976. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  6 02:28:35 np0005548731 nova_compute[232433]: 2025-12-06 07:28:35.889 232437 DEBUG nova.network.neutron [req-8d741eba-af64-4086-bdb2-657fac0d6357 req-dad847fe-cdb0-44a6-990b-2308244564b2 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 018b5c4d-2e0a-428b-ac8b-a74236e0d103] Updating instance_info_cache with network_info: [{"id": "cf525aa5-a11a-4218-98dd-328546551976", "address": "fa:16:3e:01:c9:f4", "network": {"id": "7c014e4e-a182-4f60-8285-20525bc99e5a", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-602234112-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "88f5b34244614321a9b6e902eaba0ece", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcf525aa5-a1", "ovs_interfaceid": "cf525aa5-a11a-4218-98dd-328546551976", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 02:28:35 np0005548731 nova_compute[232433]: 2025-12-06 07:28:35.960 232437 DEBUG oslo_concurrency.lockutils [req-8d741eba-af64-4086-bdb2-657fac0d6357 req-dad847fe-cdb0-44a6-990b-2308244564b2 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-018b5c4d-2e0a-428b-ac8b-a74236e0d103" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 02:28:35 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 02:28:36 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:28:36 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 02:28:36 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:28:36.048 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 02:28:37 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:28:37 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:28:37 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:28:37.312 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:28:37 np0005548731 nova_compute[232433]: 2025-12-06 07:28:37.494 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:28:37 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e270 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:28:38 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:28:38 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:28:38 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:28:38.052 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:28:39 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e271 e271: 3 total, 3 up, 3 in
Dec  6 02:28:39 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:28:39 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:28:39 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:28:39.314 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:28:39 np0005548731 nova_compute[232433]: 2025-12-06 07:28:39.842 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:28:40 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:28:40 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:28:40 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:28:40.055 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:28:40 np0005548731 nova_compute[232433]: 2025-12-06 07:28:40.238 232437 DEBUG nova.virt.libvirt.driver [None req-81f128a9-62a2-4499-bcfe-d4b14b063a07 f6f0d78959484986bab10b40c84e9406 036c977b26cc46cabc5f599cb6c00e9a - - default default] [instance: 714e9301-631d-44ba-8f91-a1d65209745b] Instance in state 1 after 21 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101#033[00m
Dec  6 02:28:41 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:28:41 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:28:41 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:28:41.316 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:28:42 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:28:42 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:28:42 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:28:42.059 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:28:42 np0005548731 nova_compute[232433]: 2025-12-06 07:28:42.496 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:28:42 np0005548731 podman[277873]: 2025-12-06 07:28:42.888913227 +0000 UTC m=+0.046926766 container health_status 26c2ce9bdb83c6db5ccad7f5b2a7cb4e9354ab0235ad2f942ea42c8feda2142b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2)
Dec  6 02:28:42 np0005548731 podman[277875]: 2025-12-06 07:28:42.899921556 +0000 UTC m=+0.053504327 container health_status ae4fd08cdec31d6ae2a6b91ffff7deb6ca5a8375cb52ee4b685cc6f25796824e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd)
Dec  6 02:28:42 np0005548731 podman[277874]: 2025-12-06 07:28:42.926370912 +0000 UTC m=+0.083118351 container health_status a118c3becf56cfbcae208065771ebf10a65162bb7b96389971293e534823fb78 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller)
Dec  6 02:28:42 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e271 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:28:43 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:28:43 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:28:43 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:28:43.318 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:28:43 np0005548731 nova_compute[232433]: 2025-12-06 07:28:43.696 232437 DEBUG nova.objects.instance [None req-e6c9ed57-aed4-4ef0-98d2-c9bde68f5212 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Lazy-loading 'trusted_certs' on Instance uuid 018b5c4d-2e0a-428b-ac8b-a74236e0d103 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:28:44 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:28:44 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:28:44 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:28:44.062 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:28:44 np0005548731 nova_compute[232433]: 2025-12-06 07:28:44.843 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:28:45 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:28:45 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:28:45 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:28:45.320 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:28:46 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:28:46 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:28:46 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:28:46.065 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:28:47 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:28:47 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:28:47 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:28:47.322 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:28:47 np0005548731 nova_compute[232433]: 2025-12-06 07:28:47.390 232437 DEBUG nova.virt.libvirt.driver [None req-e6c9ed57-aed4-4ef0-98d2-c9bde68f5212 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] [instance: 018b5c4d-2e0a-428b-ac8b-a74236e0d103] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859#033[00m
Dec  6 02:28:47 np0005548731 nova_compute[232433]: 2025-12-06 07:28:47.391 232437 DEBUG nova.virt.libvirt.driver [None req-e6c9ed57-aed4-4ef0-98d2-c9bde68f5212 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] [instance: 018b5c4d-2e0a-428b-ac8b-a74236e0d103] Ensure instance console log exists: /var/lib/nova/instances/018b5c4d-2e0a-428b-ac8b-a74236e0d103/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec  6 02:28:47 np0005548731 nova_compute[232433]: 2025-12-06 07:28:47.391 232437 DEBUG oslo_concurrency.lockutils [None req-e6c9ed57-aed4-4ef0-98d2-c9bde68f5212 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:28:47 np0005548731 nova_compute[232433]: 2025-12-06 07:28:47.392 232437 DEBUG oslo_concurrency.lockutils [None req-e6c9ed57-aed4-4ef0-98d2-c9bde68f5212 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:28:47 np0005548731 nova_compute[232433]: 2025-12-06 07:28:47.392 232437 DEBUG oslo_concurrency.lockutils [None req-e6c9ed57-aed4-4ef0-98d2-c9bde68f5212 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:28:47 np0005548731 nova_compute[232433]: 2025-12-06 07:28:47.394 232437 DEBUG nova.virt.libvirt.driver [None req-e6c9ed57-aed4-4ef0-98d2-c9bde68f5212 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] [instance: 018b5c4d-2e0a-428b-ac8b-a74236e0d103] Start _get_guest_xml network_info=[{"id": "cf525aa5-a11a-4218-98dd-328546551976", "address": "fa:16:3e:01:c9:f4", "network": {"id": "7c014e4e-a182-4f60-8285-20525bc99e5a", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-602234112-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerDiskConfigTestJSON-602234112-network", "vif_mac": "fa:16:3e:01:c9:f4"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "88f5b34244614321a9b6e902eaba0ece", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcf525aa5-a1", "ovs_interfaceid": "cf525aa5-a11a-4218-98dd-328546551976", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'device_type': 'disk', 'encryption_secret_uuid': None, 'disk_bus': 'virtio', 'boot_index': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'encryption_options': None, 'size': 0, 'encrypted': False, 'image_id': '6efab05d-c7cf-4770-a5c3-c806a2739063'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec  6 02:28:47 np0005548731 nova_compute[232433]: 2025-12-06 07:28:47.398 232437 WARNING nova.virt.libvirt.driver [None req-e6c9ed57-aed4-4ef0-98d2-c9bde68f5212 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  6 02:28:47 np0005548731 nova_compute[232433]: 2025-12-06 07:28:47.449 232437 DEBUG nova.virt.libvirt.host [None req-e6c9ed57-aed4-4ef0-98d2-c9bde68f5212 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec  6 02:28:47 np0005548731 nova_compute[232433]: 2025-12-06 07:28:47.450 232437 DEBUG nova.virt.libvirt.host [None req-e6c9ed57-aed4-4ef0-98d2-c9bde68f5212 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec  6 02:28:47 np0005548731 nova_compute[232433]: 2025-12-06 07:28:47.463 232437 DEBUG nova.virt.libvirt.host [None req-e6c9ed57-aed4-4ef0-98d2-c9bde68f5212 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec  6 02:28:47 np0005548731 nova_compute[232433]: 2025-12-06 07:28:47.464 232437 DEBUG nova.virt.libvirt.host [None req-e6c9ed57-aed4-4ef0-98d2-c9bde68f5212 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec  6 02:28:47 np0005548731 nova_compute[232433]: 2025-12-06 07:28:47.465 232437 DEBUG nova.virt.libvirt.driver [None req-e6c9ed57-aed4-4ef0-98d2-c9bde68f5212 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec  6 02:28:47 np0005548731 nova_compute[232433]: 2025-12-06 07:28:47.466 232437 DEBUG nova.virt.hardware [None req-e6c9ed57-aed4-4ef0-98d2-c9bde68f5212 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-06T06:56:24Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fb97f55a-36c0-42f2-8156-c1b04eb23dd0',id=2,is_public=True,memory_mb=192,name='m1.micro',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec  6 02:28:47 np0005548731 nova_compute[232433]: 2025-12-06 07:28:47.466 232437 DEBUG nova.virt.hardware [None req-e6c9ed57-aed4-4ef0-98d2-c9bde68f5212 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec  6 02:28:47 np0005548731 nova_compute[232433]: 2025-12-06 07:28:47.466 232437 DEBUG nova.virt.hardware [None req-e6c9ed57-aed4-4ef0-98d2-c9bde68f5212 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec  6 02:28:47 np0005548731 nova_compute[232433]: 2025-12-06 07:28:47.466 232437 DEBUG nova.virt.hardware [None req-e6c9ed57-aed4-4ef0-98d2-c9bde68f5212 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec  6 02:28:47 np0005548731 nova_compute[232433]: 2025-12-06 07:28:47.467 232437 DEBUG nova.virt.hardware [None req-e6c9ed57-aed4-4ef0-98d2-c9bde68f5212 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec  6 02:28:47 np0005548731 nova_compute[232433]: 2025-12-06 07:28:47.467 232437 DEBUG nova.virt.hardware [None req-e6c9ed57-aed4-4ef0-98d2-c9bde68f5212 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec  6 02:28:47 np0005548731 nova_compute[232433]: 2025-12-06 07:28:47.467 232437 DEBUG nova.virt.hardware [None req-e6c9ed57-aed4-4ef0-98d2-c9bde68f5212 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec  6 02:28:47 np0005548731 nova_compute[232433]: 2025-12-06 07:28:47.467 232437 DEBUG nova.virt.hardware [None req-e6c9ed57-aed4-4ef0-98d2-c9bde68f5212 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec  6 02:28:47 np0005548731 nova_compute[232433]: 2025-12-06 07:28:47.468 232437 DEBUG nova.virt.hardware [None req-e6c9ed57-aed4-4ef0-98d2-c9bde68f5212 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec  6 02:28:47 np0005548731 nova_compute[232433]: 2025-12-06 07:28:47.468 232437 DEBUG nova.virt.hardware [None req-e6c9ed57-aed4-4ef0-98d2-c9bde68f5212 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec  6 02:28:47 np0005548731 nova_compute[232433]: 2025-12-06 07:28:47.468 232437 DEBUG nova.virt.hardware [None req-e6c9ed57-aed4-4ef0-98d2-c9bde68f5212 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec  6 02:28:47 np0005548731 nova_compute[232433]: 2025-12-06 07:28:47.468 232437 DEBUG nova.objects.instance [None req-e6c9ed57-aed4-4ef0-98d2-c9bde68f5212 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Lazy-loading 'vcpu_model' on Instance uuid 018b5c4d-2e0a-428b-ac8b-a74236e0d103 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:28:47 np0005548731 nova_compute[232433]: 2025-12-06 07:28:47.498 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:28:47 np0005548731 nova_compute[232433]: 2025-12-06 07:28:47.523 232437 DEBUG oslo_concurrency.processutils [None req-e6c9ed57-aed4-4ef0-98d2-c9bde68f5212 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:28:47 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  6 02:28:47 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2549393210' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  6 02:28:47 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e271 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:28:48 np0005548731 nova_compute[232433]: 2025-12-06 07:28:48.057 232437 DEBUG oslo_concurrency.processutils [None req-e6c9ed57-aed4-4ef0-98d2-c9bde68f5212 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.534s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:28:48 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:28:48 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:28:48 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:28:48.067 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:28:48 np0005548731 nova_compute[232433]: 2025-12-06 07:28:48.103 232437 DEBUG oslo_concurrency.processutils [None req-e6c9ed57-aed4-4ef0-98d2-c9bde68f5212 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:28:48 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  6 02:28:48 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1698605546' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  6 02:28:48 np0005548731 nova_compute[232433]: 2025-12-06 07:28:48.724 232437 DEBUG oslo_concurrency.processutils [None req-e6c9ed57-aed4-4ef0-98d2-c9bde68f5212 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.621s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:28:48 np0005548731 nova_compute[232433]: 2025-12-06 07:28:48.726 232437 DEBUG nova.virt.libvirt.vif [None req-e6c9ed57-aed4-4ef0-98d2-c9bde68f5212 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T07:27:03Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-1401757895',display_name='tempest-ServerDiskConfigTestJSON-server-1401757895',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-1401757895',id=110,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-06T07:27:34Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=MigrationContext,new_flavor=Flavor(2),node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=Flavor(1),os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='88f5b34244614321a9b6e902eaba0ece',ramdisk_id='',reservation_id='r-iyd6zt70',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=ServiceList,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio
',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-ServerDiskConfigTestJSON-749654875',owner_user_name='tempest-ServerDiskConfigTestJSON-749654875-project-member'},tags=<?>,task_state='resize_finish',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:28:19Z,user_data=None,user_id='d67c136e82ad4001b000848d75eef50d',uuid=018b5c4d-2e0a-428b-ac8b-a74236e0d103,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "cf525aa5-a11a-4218-98dd-328546551976", "address": "fa:16:3e:01:c9:f4", "network": {"id": "7c014e4e-a182-4f60-8285-20525bc99e5a", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-602234112-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerDiskConfigTestJSON-602234112-network", "vif_mac": "fa:16:3e:01:c9:f4"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "88f5b34244614321a9b6e902eaba0ece", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcf525aa5-a1", "ovs_interfaceid": "cf525aa5-a11a-4218-98dd-328546551976", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec  6 02:28:48 np0005548731 nova_compute[232433]: 2025-12-06 07:28:48.727 232437 DEBUG nova.network.os_vif_util [None req-e6c9ed57-aed4-4ef0-98d2-c9bde68f5212 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Converting VIF {"id": "cf525aa5-a11a-4218-98dd-328546551976", "address": "fa:16:3e:01:c9:f4", "network": {"id": "7c014e4e-a182-4f60-8285-20525bc99e5a", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-602234112-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerDiskConfigTestJSON-602234112-network", "vif_mac": "fa:16:3e:01:c9:f4"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "88f5b34244614321a9b6e902eaba0ece", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcf525aa5-a1", "ovs_interfaceid": "cf525aa5-a11a-4218-98dd-328546551976", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 02:28:48 np0005548731 nova_compute[232433]: 2025-12-06 07:28:48.727 232437 DEBUG nova.network.os_vif_util [None req-e6c9ed57-aed4-4ef0-98d2-c9bde68f5212 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:01:c9:f4,bridge_name='br-int',has_traffic_filtering=True,id=cf525aa5-a11a-4218-98dd-328546551976,network=Network(7c014e4e-a182-4f60-8285-20525bc99e5a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcf525aa5-a1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 02:28:48 np0005548731 nova_compute[232433]: 2025-12-06 07:28:48.730 232437 DEBUG nova.virt.libvirt.driver [None req-e6c9ed57-aed4-4ef0-98d2-c9bde68f5212 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] [instance: 018b5c4d-2e0a-428b-ac8b-a74236e0d103] End _get_guest_xml xml=<domain type="kvm">
Dec  6 02:28:48 np0005548731 nova_compute[232433]:  <uuid>018b5c4d-2e0a-428b-ac8b-a74236e0d103</uuid>
Dec  6 02:28:48 np0005548731 nova_compute[232433]:  <name>instance-0000006e</name>
Dec  6 02:28:48 np0005548731 nova_compute[232433]:  <memory>196608</memory>
Dec  6 02:28:48 np0005548731 nova_compute[232433]:  <vcpu>1</vcpu>
Dec  6 02:28:48 np0005548731 nova_compute[232433]:  <metadata>
Dec  6 02:28:48 np0005548731 nova_compute[232433]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec  6 02:28:48 np0005548731 nova_compute[232433]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec  6 02:28:48 np0005548731 nova_compute[232433]:      <nova:name>tempest-ServerDiskConfigTestJSON-server-1401757895</nova:name>
Dec  6 02:28:48 np0005548731 nova_compute[232433]:      <nova:creationTime>2025-12-06 07:28:47</nova:creationTime>
Dec  6 02:28:48 np0005548731 nova_compute[232433]:      <nova:flavor name="m1.micro">
Dec  6 02:28:48 np0005548731 nova_compute[232433]:        <nova:memory>192</nova:memory>
Dec  6 02:28:48 np0005548731 nova_compute[232433]:        <nova:disk>1</nova:disk>
Dec  6 02:28:48 np0005548731 nova_compute[232433]:        <nova:swap>0</nova:swap>
Dec  6 02:28:48 np0005548731 nova_compute[232433]:        <nova:ephemeral>0</nova:ephemeral>
Dec  6 02:28:48 np0005548731 nova_compute[232433]:        <nova:vcpus>1</nova:vcpus>
Dec  6 02:28:48 np0005548731 nova_compute[232433]:      </nova:flavor>
Dec  6 02:28:48 np0005548731 nova_compute[232433]:      <nova:owner>
Dec  6 02:28:48 np0005548731 nova_compute[232433]:        <nova:user uuid="d67c136e82ad4001b000848d75eef50d">tempest-ServerDiskConfigTestJSON-749654875-project-member</nova:user>
Dec  6 02:28:48 np0005548731 nova_compute[232433]:        <nova:project uuid="88f5b34244614321a9b6e902eaba0ece">tempest-ServerDiskConfigTestJSON-749654875</nova:project>
Dec  6 02:28:48 np0005548731 nova_compute[232433]:      </nova:owner>
Dec  6 02:28:48 np0005548731 nova_compute[232433]:      <nova:root type="image" uuid="6efab05d-c7cf-4770-a5c3-c806a2739063"/>
Dec  6 02:28:48 np0005548731 nova_compute[232433]:      <nova:ports>
Dec  6 02:28:48 np0005548731 nova_compute[232433]:        <nova:port uuid="cf525aa5-a11a-4218-98dd-328546551976">
Dec  6 02:28:48 np0005548731 nova_compute[232433]:          <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Dec  6 02:28:48 np0005548731 nova_compute[232433]:        </nova:port>
Dec  6 02:28:48 np0005548731 nova_compute[232433]:      </nova:ports>
Dec  6 02:28:48 np0005548731 nova_compute[232433]:    </nova:instance>
Dec  6 02:28:48 np0005548731 nova_compute[232433]:  </metadata>
Dec  6 02:28:48 np0005548731 nova_compute[232433]:  <sysinfo type="smbios">
Dec  6 02:28:48 np0005548731 nova_compute[232433]:    <system>
Dec  6 02:28:48 np0005548731 nova_compute[232433]:      <entry name="manufacturer">RDO</entry>
Dec  6 02:28:48 np0005548731 nova_compute[232433]:      <entry name="product">OpenStack Compute</entry>
Dec  6 02:28:48 np0005548731 nova_compute[232433]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec  6 02:28:48 np0005548731 nova_compute[232433]:      <entry name="serial">018b5c4d-2e0a-428b-ac8b-a74236e0d103</entry>
Dec  6 02:28:48 np0005548731 nova_compute[232433]:      <entry name="uuid">018b5c4d-2e0a-428b-ac8b-a74236e0d103</entry>
Dec  6 02:28:48 np0005548731 nova_compute[232433]:      <entry name="family">Virtual Machine</entry>
Dec  6 02:28:48 np0005548731 nova_compute[232433]:    </system>
Dec  6 02:28:48 np0005548731 nova_compute[232433]:  </sysinfo>
Dec  6 02:28:48 np0005548731 nova_compute[232433]:  <os>
Dec  6 02:28:48 np0005548731 nova_compute[232433]:    <type arch="x86_64" machine="q35">hvm</type>
Dec  6 02:28:48 np0005548731 nova_compute[232433]:    <boot dev="hd"/>
Dec  6 02:28:48 np0005548731 nova_compute[232433]:    <smbios mode="sysinfo"/>
Dec  6 02:28:48 np0005548731 nova_compute[232433]:  </os>
Dec  6 02:28:48 np0005548731 nova_compute[232433]:  <features>
Dec  6 02:28:48 np0005548731 nova_compute[232433]:    <acpi/>
Dec  6 02:28:48 np0005548731 nova_compute[232433]:    <apic/>
Dec  6 02:28:48 np0005548731 nova_compute[232433]:    <vmcoreinfo/>
Dec  6 02:28:48 np0005548731 nova_compute[232433]:  </features>
Dec  6 02:28:48 np0005548731 nova_compute[232433]:  <clock offset="utc">
Dec  6 02:28:48 np0005548731 nova_compute[232433]:    <timer name="pit" tickpolicy="delay"/>
Dec  6 02:28:48 np0005548731 nova_compute[232433]:    <timer name="rtc" tickpolicy="catchup"/>
Dec  6 02:28:48 np0005548731 nova_compute[232433]:    <timer name="hpet" present="no"/>
Dec  6 02:28:48 np0005548731 nova_compute[232433]:  </clock>
Dec  6 02:28:48 np0005548731 nova_compute[232433]:  <cpu mode="custom" match="exact">
Dec  6 02:28:48 np0005548731 nova_compute[232433]:    <model>Nehalem</model>
Dec  6 02:28:48 np0005548731 nova_compute[232433]:    <topology sockets="1" cores="1" threads="1"/>
Dec  6 02:28:48 np0005548731 nova_compute[232433]:  </cpu>
Dec  6 02:28:48 np0005548731 nova_compute[232433]:  <devices>
Dec  6 02:28:48 np0005548731 nova_compute[232433]:    <disk type="network" device="disk">
Dec  6 02:28:48 np0005548731 nova_compute[232433]:      <driver type="raw" cache="none"/>
Dec  6 02:28:48 np0005548731 nova_compute[232433]:      <source protocol="rbd" name="vms/018b5c4d-2e0a-428b-ac8b-a74236e0d103_disk">
Dec  6 02:28:48 np0005548731 nova_compute[232433]:        <host name="192.168.122.100" port="6789"/>
Dec  6 02:28:48 np0005548731 nova_compute[232433]:        <host name="192.168.122.102" port="6789"/>
Dec  6 02:28:48 np0005548731 nova_compute[232433]:        <host name="192.168.122.101" port="6789"/>
Dec  6 02:28:48 np0005548731 nova_compute[232433]:      </source>
Dec  6 02:28:48 np0005548731 nova_compute[232433]:      <auth username="openstack">
Dec  6 02:28:48 np0005548731 nova_compute[232433]:        <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec  6 02:28:48 np0005548731 nova_compute[232433]:      </auth>
Dec  6 02:28:48 np0005548731 nova_compute[232433]:      <target dev="vda" bus="virtio"/>
Dec  6 02:28:48 np0005548731 nova_compute[232433]:    </disk>
Dec  6 02:28:48 np0005548731 nova_compute[232433]:    <disk type="network" device="cdrom">
Dec  6 02:28:48 np0005548731 nova_compute[232433]:      <driver type="raw" cache="none"/>
Dec  6 02:28:48 np0005548731 nova_compute[232433]:      <source protocol="rbd" name="vms/018b5c4d-2e0a-428b-ac8b-a74236e0d103_disk.config">
Dec  6 02:28:48 np0005548731 nova_compute[232433]:        <host name="192.168.122.100" port="6789"/>
Dec  6 02:28:48 np0005548731 nova_compute[232433]:        <host name="192.168.122.102" port="6789"/>
Dec  6 02:28:48 np0005548731 nova_compute[232433]:        <host name="192.168.122.101" port="6789"/>
Dec  6 02:28:48 np0005548731 nova_compute[232433]:      </source>
Dec  6 02:28:48 np0005548731 nova_compute[232433]:      <auth username="openstack">
Dec  6 02:28:48 np0005548731 nova_compute[232433]:        <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec  6 02:28:48 np0005548731 nova_compute[232433]:      </auth>
Dec  6 02:28:48 np0005548731 nova_compute[232433]:      <target dev="sda" bus="sata"/>
Dec  6 02:28:48 np0005548731 nova_compute[232433]:    </disk>
Dec  6 02:28:48 np0005548731 nova_compute[232433]:    <interface type="ethernet">
Dec  6 02:28:48 np0005548731 nova_compute[232433]:      <mac address="fa:16:3e:01:c9:f4"/>
Dec  6 02:28:48 np0005548731 nova_compute[232433]:      <model type="virtio"/>
Dec  6 02:28:48 np0005548731 nova_compute[232433]:      <driver name="vhost" rx_queue_size="512"/>
Dec  6 02:28:48 np0005548731 nova_compute[232433]:      <mtu size="1442"/>
Dec  6 02:28:48 np0005548731 nova_compute[232433]:      <target dev="tapcf525aa5-a1"/>
Dec  6 02:28:48 np0005548731 nova_compute[232433]:    </interface>
Dec  6 02:28:48 np0005548731 nova_compute[232433]:    <serial type="pty">
Dec  6 02:28:48 np0005548731 nova_compute[232433]:      <log file="/var/lib/nova/instances/018b5c4d-2e0a-428b-ac8b-a74236e0d103/console.log" append="off"/>
Dec  6 02:28:48 np0005548731 nova_compute[232433]:    </serial>
Dec  6 02:28:48 np0005548731 nova_compute[232433]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec  6 02:28:48 np0005548731 nova_compute[232433]:    <video>
Dec  6 02:28:48 np0005548731 nova_compute[232433]:      <model type="virtio"/>
Dec  6 02:28:48 np0005548731 nova_compute[232433]:    </video>
Dec  6 02:28:48 np0005548731 nova_compute[232433]:    <input type="tablet" bus="usb"/>
Dec  6 02:28:48 np0005548731 nova_compute[232433]:    <rng model="virtio">
Dec  6 02:28:48 np0005548731 nova_compute[232433]:      <backend model="random">/dev/urandom</backend>
Dec  6 02:28:48 np0005548731 nova_compute[232433]:    </rng>
Dec  6 02:28:48 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root"/>
Dec  6 02:28:48 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:28:48 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:28:48 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:28:48 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:28:48 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:28:48 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:28:48 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:28:48 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:28:48 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:28:48 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:28:48 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:28:48 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:28:48 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:28:48 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:28:48 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:28:48 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:28:48 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:28:48 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:28:48 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:28:48 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:28:48 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:28:48 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:28:48 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:28:48 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:28:48 np0005548731 nova_compute[232433]:    <controller type="usb" index="0"/>
Dec  6 02:28:48 np0005548731 nova_compute[232433]:    <memballoon model="virtio">
Dec  6 02:28:48 np0005548731 nova_compute[232433]:      <stats period="10"/>
Dec  6 02:28:48 np0005548731 nova_compute[232433]:    </memballoon>
Dec  6 02:28:48 np0005548731 nova_compute[232433]:  </devices>
Dec  6 02:28:48 np0005548731 nova_compute[232433]: </domain>
Dec  6 02:28:48 np0005548731 nova_compute[232433]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec  6 02:28:48 np0005548731 nova_compute[232433]: 2025-12-06 07:28:48.732 232437 DEBUG nova.virt.libvirt.vif [None req-e6c9ed57-aed4-4ef0-98d2-c9bde68f5212 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T07:27:03Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-1401757895',display_name='tempest-ServerDiskConfigTestJSON-server-1401757895',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-1401757895',id=110,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-06T07:27:34Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=MigrationContext,new_flavor=Flavor(2),node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=Flavor(1),os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='88f5b34244614321a9b6e902eaba0ece',ramdisk_id='',reservation_id='r-iyd6zt70',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=ServiceList,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio
',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-ServerDiskConfigTestJSON-749654875',owner_user_name='tempest-ServerDiskConfigTestJSON-749654875-project-member'},tags=<?>,task_state='resize_finish',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:28:19Z,user_data=None,user_id='d67c136e82ad4001b000848d75eef50d',uuid=018b5c4d-2e0a-428b-ac8b-a74236e0d103,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "cf525aa5-a11a-4218-98dd-328546551976", "address": "fa:16:3e:01:c9:f4", "network": {"id": "7c014e4e-a182-4f60-8285-20525bc99e5a", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-602234112-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerDiskConfigTestJSON-602234112-network", "vif_mac": "fa:16:3e:01:c9:f4"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "88f5b34244614321a9b6e902eaba0ece", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcf525aa5-a1", "ovs_interfaceid": "cf525aa5-a11a-4218-98dd-328546551976", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec  6 02:28:48 np0005548731 nova_compute[232433]: 2025-12-06 07:28:48.732 232437 DEBUG nova.network.os_vif_util [None req-e6c9ed57-aed4-4ef0-98d2-c9bde68f5212 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Converting VIF {"id": "cf525aa5-a11a-4218-98dd-328546551976", "address": "fa:16:3e:01:c9:f4", "network": {"id": "7c014e4e-a182-4f60-8285-20525bc99e5a", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-602234112-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerDiskConfigTestJSON-602234112-network", "vif_mac": "fa:16:3e:01:c9:f4"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "88f5b34244614321a9b6e902eaba0ece", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcf525aa5-a1", "ovs_interfaceid": "cf525aa5-a11a-4218-98dd-328546551976", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 02:28:48 np0005548731 nova_compute[232433]: 2025-12-06 07:28:48.733 232437 DEBUG nova.network.os_vif_util [None req-e6c9ed57-aed4-4ef0-98d2-c9bde68f5212 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:01:c9:f4,bridge_name='br-int',has_traffic_filtering=True,id=cf525aa5-a11a-4218-98dd-328546551976,network=Network(7c014e4e-a182-4f60-8285-20525bc99e5a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcf525aa5-a1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 02:28:48 np0005548731 nova_compute[232433]: 2025-12-06 07:28:48.733 232437 DEBUG os_vif [None req-e6c9ed57-aed4-4ef0-98d2-c9bde68f5212 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:01:c9:f4,bridge_name='br-int',has_traffic_filtering=True,id=cf525aa5-a11a-4218-98dd-328546551976,network=Network(7c014e4e-a182-4f60-8285-20525bc99e5a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcf525aa5-a1') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec  6 02:28:48 np0005548731 nova_compute[232433]: 2025-12-06 07:28:48.734 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:28:48 np0005548731 nova_compute[232433]: 2025-12-06 07:28:48.734 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:28:48 np0005548731 nova_compute[232433]: 2025-12-06 07:28:48.735 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  6 02:28:48 np0005548731 nova_compute[232433]: 2025-12-06 07:28:48.737 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:28:48 np0005548731 nova_compute[232433]: 2025-12-06 07:28:48.737 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapcf525aa5-a1, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:28:48 np0005548731 nova_compute[232433]: 2025-12-06 07:28:48.738 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapcf525aa5-a1, col_values=(('external_ids', {'iface-id': 'cf525aa5-a11a-4218-98dd-328546551976', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:01:c9:f4', 'vm-uuid': '018b5c4d-2e0a-428b-ac8b-a74236e0d103'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:28:48 np0005548731 NetworkManager[49182]: <info>  [1765006128.7404] manager: (tapcf525aa5-a1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/212)
Dec  6 02:28:48 np0005548731 nova_compute[232433]: 2025-12-06 07:28:48.742 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  6 02:28:48 np0005548731 nova_compute[232433]: 2025-12-06 07:28:48.746 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:28:48 np0005548731 nova_compute[232433]: 2025-12-06 07:28:48.747 232437 INFO os_vif [None req-e6c9ed57-aed4-4ef0-98d2-c9bde68f5212 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:01:c9:f4,bridge_name='br-int',has_traffic_filtering=True,id=cf525aa5-a11a-4218-98dd-328546551976,network=Network(7c014e4e-a182-4f60-8285-20525bc99e5a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcf525aa5-a1')#033[00m
Dec  6 02:28:49 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:28:49 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:28:49 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:28:49.324 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:28:50 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:28:50 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:28:50 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:28:50.070 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:28:51 np0005548731 nova_compute[232433]: 2025-12-06 07:28:51.320 232437 DEBUG nova.virt.libvirt.driver [None req-81f128a9-62a2-4499-bcfe-d4b14b063a07 f6f0d78959484986bab10b40c84e9406 036c977b26cc46cabc5f599cb6c00e9a - - default default] [instance: 714e9301-631d-44ba-8f91-a1d65209745b] Instance in state 1 after 32 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101#033[00m
Dec  6 02:28:51 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:28:51 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:28:51 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:28:51.326 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:28:52 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:28:52 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:28:52 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:28:52.074 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:28:52 np0005548731 nova_compute[232433]: 2025-12-06 07:28:52.501 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:28:52 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e271 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:28:53 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:28:53 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:28:53 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:28:53.329 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:28:53 np0005548731 nova_compute[232433]: 2025-12-06 07:28:53.741 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:28:54 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:28:54 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:28:54 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:28:54.077 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:28:54 np0005548731 nova_compute[232433]: 2025-12-06 07:28:54.615 232437 DEBUG nova.virt.libvirt.driver [None req-e6c9ed57-aed4-4ef0-98d2-c9bde68f5212 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  6 02:28:54 np0005548731 nova_compute[232433]: 2025-12-06 07:28:54.615 232437 DEBUG nova.virt.libvirt.driver [None req-e6c9ed57-aed4-4ef0-98d2-c9bde68f5212 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  6 02:28:54 np0005548731 nova_compute[232433]: 2025-12-06 07:28:54.615 232437 DEBUG nova.virt.libvirt.driver [None req-e6c9ed57-aed4-4ef0-98d2-c9bde68f5212 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] No VIF found with MAC fa:16:3e:01:c9:f4, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec  6 02:28:54 np0005548731 nova_compute[232433]: 2025-12-06 07:28:54.621 232437 INFO nova.virt.libvirt.driver [None req-e6c9ed57-aed4-4ef0-98d2-c9bde68f5212 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] [instance: 018b5c4d-2e0a-428b-ac8b-a74236e0d103] Using config drive#033[00m
Dec  6 02:28:54 np0005548731 kernel: tapcf525aa5-a1: entered promiscuous mode
Dec  6 02:28:54 np0005548731 NetworkManager[49182]: <info>  [1765006134.6937] manager: (tapcf525aa5-a1): new Tun device (/org/freedesktop/NetworkManager/Devices/213)
Dec  6 02:28:54 np0005548731 nova_compute[232433]: 2025-12-06 07:28:54.694 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:28:54 np0005548731 ovn_controller[133927]: 2025-12-06T07:28:54Z|00442|binding|INFO|Claiming lport cf525aa5-a11a-4218-98dd-328546551976 for this chassis.
Dec  6 02:28:54 np0005548731 ovn_controller[133927]: 2025-12-06T07:28:54Z|00443|binding|INFO|cf525aa5-a11a-4218-98dd-328546551976: Claiming fa:16:3e:01:c9:f4 10.100.0.11
Dec  6 02:28:54 np0005548731 systemd-udevd[278125]: Network interface NamePolicy= disabled on kernel command line.
Dec  6 02:28:54 np0005548731 systemd-machined[195355]: New machine qemu-49-instance-0000006e.
Dec  6 02:28:54 np0005548731 NetworkManager[49182]: <info>  [1765006134.7346] device (tapcf525aa5-a1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec  6 02:28:54 np0005548731 NetworkManager[49182]: <info>  [1765006134.7357] device (tapcf525aa5-a1): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec  6 02:28:54 np0005548731 systemd[1]: Started Virtual Machine qemu-49-instance-0000006e.
Dec  6 02:28:54 np0005548731 nova_compute[232433]: 2025-12-06 07:28:54.751 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:28:54 np0005548731 ovn_controller[133927]: 2025-12-06T07:28:54Z|00444|binding|INFO|Setting lport cf525aa5-a11a-4218-98dd-328546551976 ovn-installed in OVS
Dec  6 02:28:54 np0005548731 nova_compute[232433]: 2025-12-06 07:28:54.757 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:28:55 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:28:55 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:28:55 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:28:55.331 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:28:56 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:28:56.059 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:01:c9:f4 10.100.0.11'], port_security=['fa:16:3e:01:c9:f4 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '018b5c4d-2e0a-428b-ac8b-a74236e0d103', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7c014e4e-a182-4f60-8285-20525bc99e5a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '88f5b34244614321a9b6e902eaba0ece', 'neutron:revision_number': '6', 'neutron:security_group_ids': '562c0019-973b-497e-ab29-636b40b9ed6d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7228f8e4-751e-45fe-ae64-cd2ffef9b9bb, chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], logical_port=cf525aa5-a11a-4218-98dd-328546551976) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 02:28:56 np0005548731 ovn_controller[133927]: 2025-12-06T07:28:56Z|00445|binding|INFO|Setting lport cf525aa5-a11a-4218-98dd-328546551976 up in Southbound
Dec  6 02:28:56 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:28:56.060 143965 INFO neutron.agent.ovn.metadata.agent [-] Port cf525aa5-a11a-4218-98dd-328546551976 in datapath 7c014e4e-a182-4f60-8285-20525bc99e5a bound to our chassis#033[00m
Dec  6 02:28:56 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:28:56.062 143965 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 7c014e4e-a182-4f60-8285-20525bc99e5a#033[00m
Dec  6 02:28:56 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:28:56 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:28:56 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:28:56.079 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:28:56 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:28:56.084 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[04b921a1-48f9-4bb6-962f-8260183659cf]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:28:56 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:28:56.085 143965 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap7c014e4e-a1 in ovnmeta-7c014e4e-a182-4f60-8285-20525bc99e5a namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Dec  6 02:28:56 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:28:56.087 236668 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap7c014e4e-a0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Dec  6 02:28:56 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:28:56.087 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[16f570cd-721c-43a3-a129-d9153ed881b9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:28:56 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:28:56.088 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[73318d27-1e12-4b66-9816-7158ed79030d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:28:56 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:28:56.098 144079 DEBUG oslo.privsep.daemon [-] privsep: reply[7513569f-49f8-43b6-99cb-a99d5bef093d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:28:56 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:28:56.112 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[fd2bbd8d-b727-46d6-9770-da7f3afc538c]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:28:56 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:28:56.141 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[849a3dc4-276e-4f21-8881-5aa5f728550c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:28:56 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:28:56.146 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[154e9e61-8e5b-4b20-be20-231dca048404]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:28:56 np0005548731 systemd-udevd[278128]: Network interface NamePolicy= disabled on kernel command line.
Dec  6 02:28:56 np0005548731 NetworkManager[49182]: <info>  [1765006136.1478] manager: (tap7c014e4e-a0): new Veth device (/org/freedesktop/NetworkManager/Devices/214)
Dec  6 02:28:56 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:28:56.197 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[2780d6ab-71a1-4c66-926c-88bef5acd72f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:28:56 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:28:56.200 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[a05d41e6-701e-4e1c-9fe7-02cb78462855]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:28:56 np0005548731 NetworkManager[49182]: <info>  [1765006136.2278] device (tap7c014e4e-a0): carrier: link connected
Dec  6 02:28:56 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:28:56.240 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[10776aa9-1ada-4b19-ad88-38462024135b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:28:56 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:28:56.267 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[1ac3d4b7-6e7e-4712-b562-e0de78887e4f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7c014e4e-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:08:14:1c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 141], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 639823, 'reachable_time': 16814, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 278177, 'error': None, 'target': 'ovnmeta-7c014e4e-a182-4f60-8285-20525bc99e5a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:28:56 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:28:56.289 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[fa773e12-335c-4943-93e1-92003115167b]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe08:141c'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 639823, 'tstamp': 639823}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 278179, 'error': None, 'target': 'ovnmeta-7c014e4e-a182-4f60-8285-20525bc99e5a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:28:56 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:28:56.314 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[0b624027-516d-4cfb-8dee-9ca98af71a65]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7c014e4e-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:08:14:1c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 141], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 639823, 'reachable_time': 16814, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 278180, 'error': None, 'target': 'ovnmeta-7c014e4e-a182-4f60-8285-20525bc99e5a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:28:56 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:28:56.360 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[3db67396-6fde-473b-b290-e877a808db36]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:28:56 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:28:56.441 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[d25e149d-5f80-4ac0-97f3-8fba02668a03]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:28:56 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:28:56.444 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7c014e4e-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:28:56 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:28:56.444 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  6 02:28:56 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:28:56.445 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7c014e4e-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:28:56 np0005548731 NetworkManager[49182]: <info>  [1765006136.4474] manager: (tap7c014e4e-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/215)
Dec  6 02:28:56 np0005548731 kernel: tap7c014e4e-a0: entered promiscuous mode
Dec  6 02:28:56 np0005548731 nova_compute[232433]: 2025-12-06 07:28:56.446 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:28:56 np0005548731 nova_compute[232433]: 2025-12-06 07:28:56.448 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:28:56 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:28:56.451 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap7c014e4e-a0, col_values=(('external_ids', {'iface-id': 'd8dd1a7d-045a-42a3-8829-567c43985ae0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:28:56 np0005548731 nova_compute[232433]: 2025-12-06 07:28:56.451 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:28:56 np0005548731 ovn_controller[133927]: 2025-12-06T07:28:56Z|00446|binding|INFO|Releasing lport d8dd1a7d-045a-42a3-8829-567c43985ae0 from this chassis (sb_readonly=1)
Dec  6 02:28:56 np0005548731 nova_compute[232433]: 2025-12-06 07:28:56.453 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:28:56 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:28:56.453 143965 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/7c014e4e-a182-4f60-8285-20525bc99e5a.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/7c014e4e-a182-4f60-8285-20525bc99e5a.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Dec  6 02:28:56 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:28:56.454 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[7adc45b0-b394-4b5d-a6b3-e1478d2de250]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:28:56 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:28:56.455 143965 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec  6 02:28:56 np0005548731 ovn_metadata_agent[143960]: global
Dec  6 02:28:56 np0005548731 ovn_metadata_agent[143960]:    log         /dev/log local0 debug
Dec  6 02:28:56 np0005548731 ovn_metadata_agent[143960]:    log-tag     haproxy-metadata-proxy-7c014e4e-a182-4f60-8285-20525bc99e5a
Dec  6 02:28:56 np0005548731 ovn_metadata_agent[143960]:    user        root
Dec  6 02:28:56 np0005548731 ovn_metadata_agent[143960]:    group       root
Dec  6 02:28:56 np0005548731 ovn_metadata_agent[143960]:    maxconn     1024
Dec  6 02:28:56 np0005548731 ovn_metadata_agent[143960]:    pidfile     /var/lib/neutron/external/pids/7c014e4e-a182-4f60-8285-20525bc99e5a.pid.haproxy
Dec  6 02:28:56 np0005548731 ovn_metadata_agent[143960]:    daemon
Dec  6 02:28:56 np0005548731 ovn_metadata_agent[143960]: 
Dec  6 02:28:56 np0005548731 ovn_metadata_agent[143960]: defaults
Dec  6 02:28:56 np0005548731 ovn_metadata_agent[143960]:    log global
Dec  6 02:28:56 np0005548731 ovn_metadata_agent[143960]:    mode http
Dec  6 02:28:56 np0005548731 ovn_metadata_agent[143960]:    option httplog
Dec  6 02:28:56 np0005548731 ovn_metadata_agent[143960]:    option dontlognull
Dec  6 02:28:56 np0005548731 ovn_metadata_agent[143960]:    option http-server-close
Dec  6 02:28:56 np0005548731 ovn_metadata_agent[143960]:    option forwardfor
Dec  6 02:28:56 np0005548731 ovn_metadata_agent[143960]:    retries                 3
Dec  6 02:28:56 np0005548731 ovn_metadata_agent[143960]:    timeout http-request    30s
Dec  6 02:28:56 np0005548731 ovn_metadata_agent[143960]:    timeout connect         30s
Dec  6 02:28:56 np0005548731 ovn_metadata_agent[143960]:    timeout client          32s
Dec  6 02:28:56 np0005548731 ovn_metadata_agent[143960]:    timeout server          32s
Dec  6 02:28:56 np0005548731 ovn_metadata_agent[143960]:    timeout http-keep-alive 30s
Dec  6 02:28:56 np0005548731 ovn_metadata_agent[143960]: 
Dec  6 02:28:56 np0005548731 ovn_metadata_agent[143960]: 
Dec  6 02:28:56 np0005548731 ovn_metadata_agent[143960]: listen listener
Dec  6 02:28:56 np0005548731 ovn_metadata_agent[143960]:    bind 169.254.169.254:80
Dec  6 02:28:56 np0005548731 ovn_metadata_agent[143960]:    server metadata /var/lib/neutron/metadata_proxy
Dec  6 02:28:56 np0005548731 ovn_metadata_agent[143960]:    http-request add-header X-OVN-Network-ID 7c014e4e-a182-4f60-8285-20525bc99e5a
Dec  6 02:28:56 np0005548731 ovn_metadata_agent[143960]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Dec  6 02:28:56 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:28:56.455 143965 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-7c014e4e-a182-4f60-8285-20525bc99e5a', 'env', 'PROCESS_TAG=haproxy-7c014e4e-a182-4f60-8285-20525bc99e5a', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/7c014e4e-a182-4f60-8285-20525bc99e5a.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Dec  6 02:28:56 np0005548731 nova_compute[232433]: 2025-12-06 07:28:56.465 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:28:56 np0005548731 podman[278214]: 2025-12-06 07:28:56.850601435 +0000 UTC m=+0.069821856 container create 8fa2e765f0cc6ce54876d6994a5c3de43650ffb555088b5390d3459bf434b3dc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7c014e4e-a182-4f60-8285-20525bc99e5a, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  6 02:28:56 np0005548731 systemd[1]: Started libpod-conmon-8fa2e765f0cc6ce54876d6994a5c3de43650ffb555088b5390d3459bf434b3dc.scope.
Dec  6 02:28:56 np0005548731 podman[278214]: 2025-12-06 07:28:56.819516246 +0000 UTC m=+0.038736717 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec  6 02:28:56 np0005548731 systemd[1]: Started libcrun container.
Dec  6 02:28:56 np0005548731 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/01698db359531da1ae6f379f6616ad9de1b53112362ebcc28bb15ae730dcd193/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec  6 02:28:56 np0005548731 podman[278214]: 2025-12-06 07:28:56.931462069 +0000 UTC m=+0.150682520 container init 8fa2e765f0cc6ce54876d6994a5c3de43650ffb555088b5390d3459bf434b3dc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7c014e4e-a182-4f60-8285-20525bc99e5a, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Dec  6 02:28:56 np0005548731 podman[278214]: 2025-12-06 07:28:56.938242135 +0000 UTC m=+0.157462556 container start 8fa2e765f0cc6ce54876d6994a5c3de43650ffb555088b5390d3459bf434b3dc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7c014e4e-a182-4f60-8285-20525bc99e5a, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  6 02:28:56 np0005548731 neutron-haproxy-ovnmeta-7c014e4e-a182-4f60-8285-20525bc99e5a[278229]: [NOTICE]   (278234) : New worker (278236) forked
Dec  6 02:28:56 np0005548731 neutron-haproxy-ovnmeta-7c014e4e-a182-4f60-8285-20525bc99e5a[278229]: [NOTICE]   (278234) : Loading success.
Dec  6 02:28:57 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:28:57 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:28:57 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:28:57.333 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:28:57 np0005548731 nova_compute[232433]: 2025-12-06 07:28:57.505 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:28:57 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e271 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:28:58 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:28:58 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:28:58 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:28:58.083 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:28:58 np0005548731 nova_compute[232433]: 2025-12-06 07:28:58.742 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:28:59 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:28:59 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:28:59 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:28:59.334 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:29:00 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:29:00 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:29:00 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:29:00.087 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:29:00 np0005548731 ceph-mds[82074]: mds.beacon.cephfs.compute-2.tjfgow missed beacon ack from the monitors
Dec  6 02:29:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:29:00.870 143965 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:29:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:29:00.870 143965 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:29:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:29:00.871 143965 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:29:01 np0005548731 nova_compute[232433]: 2025-12-06 07:29:01.226 232437 DEBUG nova.virt.driver [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Emitting event <LifecycleEvent: 1765006141.2258356, 018b5c4d-2e0a-428b-ac8b-a74236e0d103 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 02:29:01 np0005548731 nova_compute[232433]: 2025-12-06 07:29:01.227 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 018b5c4d-2e0a-428b-ac8b-a74236e0d103] VM Resumed (Lifecycle Event)#033[00m
Dec  6 02:29:01 np0005548731 nova_compute[232433]: 2025-12-06 07:29:01.231 232437 DEBUG nova.compute.manager [None req-e6c9ed57-aed4-4ef0-98d2-c9bde68f5212 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] [instance: 018b5c4d-2e0a-428b-ac8b-a74236e0d103] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec  6 02:29:01 np0005548731 nova_compute[232433]: 2025-12-06 07:29:01.234 232437 INFO nova.virt.libvirt.driver [-] [instance: 018b5c4d-2e0a-428b-ac8b-a74236e0d103] Instance running successfully.#033[00m
Dec  6 02:29:01 np0005548731 virtqemud[232080]: argument unsupported: QEMU guest agent is not configured
Dec  6 02:29:01 np0005548731 nova_compute[232433]: 2025-12-06 07:29:01.237 232437 DEBUG nova.virt.libvirt.guest [None req-e6c9ed57-aed4-4ef0-98d2-c9bde68f5212 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] [instance: 018b5c4d-2e0a-428b-ac8b-a74236e0d103] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200#033[00m
Dec  6 02:29:01 np0005548731 nova_compute[232433]: 2025-12-06 07:29:01.237 232437 DEBUG nova.virt.libvirt.driver [None req-e6c9ed57-aed4-4ef0-98d2-c9bde68f5212 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] [instance: 018b5c4d-2e0a-428b-ac8b-a74236e0d103] finish_migration finished successfully. finish_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11793#033[00m
Dec  6 02:29:01 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:29:01 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:29:01 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:29:01.336 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:29:01 np0005548731 nova_compute[232433]: 2025-12-06 07:29:01.864 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 018b5c4d-2e0a-428b-ac8b-a74236e0d103] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:29:01 np0005548731 nova_compute[232433]: 2025-12-06 07:29:01.869 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 018b5c4d-2e0a-428b-ac8b-a74236e0d103] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: resize_finish, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  6 02:29:02 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:29:02 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:29:02 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:29:02.089 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:29:02 np0005548731 nova_compute[232433]: 2025-12-06 07:29:02.364 232437 DEBUG nova.virt.libvirt.driver [None req-81f128a9-62a2-4499-bcfe-d4b14b063a07 f6f0d78959484986bab10b40c84e9406 036c977b26cc46cabc5f599cb6c00e9a - - default default] [instance: 714e9301-631d-44ba-8f91-a1d65209745b] Instance in state 1 after 43 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101#033[00m
Dec  6 02:29:02 np0005548731 nova_compute[232433]: 2025-12-06 07:29:02.507 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:29:02 np0005548731 nova_compute[232433]: 2025-12-06 07:29:02.652 232437 DEBUG nova.compute.manager [req-732371a7-5131-4525-90ce-86ae61f8c291 req-faeff8e5-3f3e-4b8a-bf19-9e6d63f74da3 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 018b5c4d-2e0a-428b-ac8b-a74236e0d103] Received event network-vif-plugged-cf525aa5-a11a-4218-98dd-328546551976 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:29:02 np0005548731 nova_compute[232433]: 2025-12-06 07:29:02.652 232437 DEBUG oslo_concurrency.lockutils [req-732371a7-5131-4525-90ce-86ae61f8c291 req-faeff8e5-3f3e-4b8a-bf19-9e6d63f74da3 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "018b5c4d-2e0a-428b-ac8b-a74236e0d103-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:29:02 np0005548731 nova_compute[232433]: 2025-12-06 07:29:02.653 232437 DEBUG oslo_concurrency.lockutils [req-732371a7-5131-4525-90ce-86ae61f8c291 req-faeff8e5-3f3e-4b8a-bf19-9e6d63f74da3 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "018b5c4d-2e0a-428b-ac8b-a74236e0d103-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:29:02 np0005548731 nova_compute[232433]: 2025-12-06 07:29:02.653 232437 DEBUG oslo_concurrency.lockutils [req-732371a7-5131-4525-90ce-86ae61f8c291 req-faeff8e5-3f3e-4b8a-bf19-9e6d63f74da3 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "018b5c4d-2e0a-428b-ac8b-a74236e0d103-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:29:02 np0005548731 nova_compute[232433]: 2025-12-06 07:29:02.653 232437 DEBUG nova.compute.manager [req-732371a7-5131-4525-90ce-86ae61f8c291 req-faeff8e5-3f3e-4b8a-bf19-9e6d63f74da3 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 018b5c4d-2e0a-428b-ac8b-a74236e0d103] No waiting events found dispatching network-vif-plugged-cf525aa5-a11a-4218-98dd-328546551976 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 02:29:02 np0005548731 nova_compute[232433]: 2025-12-06 07:29:02.653 232437 WARNING nova.compute.manager [req-732371a7-5131-4525-90ce-86ae61f8c291 req-faeff8e5-3f3e-4b8a-bf19-9e6d63f74da3 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 018b5c4d-2e0a-428b-ac8b-a74236e0d103] Received unexpected event network-vif-plugged-cf525aa5-a11a-4218-98dd-328546551976 for instance with vm_state active and task_state resize_finish.#033[00m
Dec  6 02:29:02 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e271 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:29:03 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:29:03 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:29:03 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:29:03.338 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:29:03 np0005548731 nova_compute[232433]: 2025-12-06 07:29:03.507 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:29:03 np0005548731 nova_compute[232433]: 2025-12-06 07:29:03.619 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 018b5c4d-2e0a-428b-ac8b-a74236e0d103] During sync_power_state the instance has a pending task (resize_finish). Skip.#033[00m
Dec  6 02:29:03 np0005548731 nova_compute[232433]: 2025-12-06 07:29:03.620 232437 DEBUG nova.virt.driver [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Emitting event <LifecycleEvent: 1765006141.230105, 018b5c4d-2e0a-428b-ac8b-a74236e0d103 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 02:29:03 np0005548731 nova_compute[232433]: 2025-12-06 07:29:03.620 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 018b5c4d-2e0a-428b-ac8b-a74236e0d103] VM Started (Lifecycle Event)#033[00m
Dec  6 02:29:03 np0005548731 nova_compute[232433]: 2025-12-06 07:29:03.746 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:29:04 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:29:04 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:29:04 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:29:04.091 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:29:04 np0005548731 nova_compute[232433]: 2025-12-06 07:29:04.099 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:29:04 np0005548731 nova_compute[232433]: 2025-12-06 07:29:04.103 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:29:04 np0005548731 nova_compute[232433]: 2025-12-06 07:29:04.103 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  6 02:29:04 np0005548731 nova_compute[232433]: 2025-12-06 07:29:04.104 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  6 02:29:05 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:29:05 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:29:05 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:29:05.340 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:29:05 np0005548731 nova_compute[232433]: 2025-12-06 07:29:05.383 232437 INFO nova.virt.libvirt.driver [None req-81f128a9-62a2-4499-bcfe-d4b14b063a07 f6f0d78959484986bab10b40c84e9406 036c977b26cc46cabc5f599cb6c00e9a - - default default] [instance: 714e9301-631d-44ba-8f91-a1d65209745b] Instance shutdown successfully after 46 seconds.#033[00m
Dec  6 02:29:06 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:29:06 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:29:06 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:29:06.095 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:29:06 np0005548731 kernel: tap012ae37e-97 (unregistering): left promiscuous mode
Dec  6 02:29:06 np0005548731 NetworkManager[49182]: <info>  [1765006146.9639] device (tap012ae37e-97): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec  6 02:29:06 np0005548731 nova_compute[232433]: 2025-12-06 07:29:06.978 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:29:06 np0005548731 nova_compute[232433]: 2025-12-06 07:29:06.981 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:29:06 np0005548731 ovn_controller[133927]: 2025-12-06T07:29:06Z|00447|binding|INFO|Releasing lport 012ae37e-970e-4506-8216-7c5339055cf7 from this chassis (sb_readonly=0)
Dec  6 02:29:06 np0005548731 ovn_controller[133927]: 2025-12-06T07:29:06Z|00448|binding|INFO|Setting lport 012ae37e-970e-4506-8216-7c5339055cf7 down in Southbound
Dec  6 02:29:06 np0005548731 ovn_controller[133927]: 2025-12-06T07:29:06Z|00449|binding|INFO|Removing iface tap012ae37e-97 ovn-installed in OVS
Dec  6 02:29:06 np0005548731 nova_compute[232433]: 2025-12-06 07:29:06.993 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:29:07 np0005548731 systemd[1]: machine-qemu\x2d48\x2dinstance\x2d0000006f.scope: Deactivated successfully.
Dec  6 02:29:07 np0005548731 systemd[1]: machine-qemu\x2d48\x2dinstance\x2d0000006f.scope: Consumed 15.416s CPU time.
Dec  6 02:29:07 np0005548731 systemd-machined[195355]: Machine qemu-48-instance-0000006f terminated.
Dec  6 02:29:07 np0005548731 nova_compute[232433]: 2025-12-06 07:29:07.088 232437 DEBUG nova.compute.manager [req-f0294cf5-6b62-43ae-8667-ab4e09ddee7b req-52ff4667-e765-4dbb-a4ab-b854232e2e07 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 018b5c4d-2e0a-428b-ac8b-a74236e0d103] Received event network-vif-plugged-cf525aa5-a11a-4218-98dd-328546551976 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:29:07 np0005548731 nova_compute[232433]: 2025-12-06 07:29:07.089 232437 DEBUG oslo_concurrency.lockutils [req-f0294cf5-6b62-43ae-8667-ab4e09ddee7b req-52ff4667-e765-4dbb-a4ab-b854232e2e07 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "018b5c4d-2e0a-428b-ac8b-a74236e0d103-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:29:07 np0005548731 nova_compute[232433]: 2025-12-06 07:29:07.089 232437 DEBUG oslo_concurrency.lockutils [req-f0294cf5-6b62-43ae-8667-ab4e09ddee7b req-52ff4667-e765-4dbb-a4ab-b854232e2e07 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "018b5c4d-2e0a-428b-ac8b-a74236e0d103-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:29:07 np0005548731 nova_compute[232433]: 2025-12-06 07:29:07.089 232437 DEBUG oslo_concurrency.lockutils [req-f0294cf5-6b62-43ae-8667-ab4e09ddee7b req-52ff4667-e765-4dbb-a4ab-b854232e2e07 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "018b5c4d-2e0a-428b-ac8b-a74236e0d103-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:29:07 np0005548731 nova_compute[232433]: 2025-12-06 07:29:07.089 232437 DEBUG nova.compute.manager [req-f0294cf5-6b62-43ae-8667-ab4e09ddee7b req-52ff4667-e765-4dbb-a4ab-b854232e2e07 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 018b5c4d-2e0a-428b-ac8b-a74236e0d103] No waiting events found dispatching network-vif-plugged-cf525aa5-a11a-4218-98dd-328546551976 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 02:29:07 np0005548731 nova_compute[232433]: 2025-12-06 07:29:07.089 232437 WARNING nova.compute.manager [req-f0294cf5-6b62-43ae-8667-ab4e09ddee7b req-52ff4667-e765-4dbb-a4ab-b854232e2e07 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 018b5c4d-2e0a-428b-ac8b-a74236e0d103] Received unexpected event network-vif-plugged-cf525aa5-a11a-4218-98dd-328546551976 for instance with vm_state active and task_state resize_finish.#033[00m
Dec  6 02:29:07 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:29:07.122 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:01:71:4e 10.100.0.10'], port_security=['fa:16:3e:01:71:4e 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '714e9301-631d-44ba-8f91-a1d65209745b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-248c2772-b8d1-4900-9e05-67104b197a82', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '036c977b26cc46cabc5f599cb6c00e9a', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'e899a491-7b15-4e66-adca-8deec8c4c859', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4eb80078-9659-4d3e-aaa8-ad516bb7d903, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], logical_port=012ae37e-970e-4506-8216-7c5339055cf7) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 02:29:07 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:29:07.123 143965 INFO neutron.agent.ovn.metadata.agent [-] Port 012ae37e-970e-4506-8216-7c5339055cf7 in datapath 248c2772-b8d1-4900-9e05-67104b197a82 unbound from our chassis#033[00m
Dec  6 02:29:07 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:29:07.124 143965 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 248c2772-b8d1-4900-9e05-67104b197a82 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m
Dec  6 02:29:07 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:29:07.125 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[d1bb9997-cf82-44da-be65-2aa5ed745320]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:29:07 np0005548731 nova_compute[232433]: 2025-12-06 07:29:07.223 232437 INFO nova.virt.libvirt.driver [-] [instance: 714e9301-631d-44ba-8f91-a1d65209745b] Instance destroyed successfully.#033[00m
Dec  6 02:29:07 np0005548731 nova_compute[232433]: 2025-12-06 07:29:07.224 232437 DEBUG nova.objects.instance [None req-81f128a9-62a2-4499-bcfe-d4b14b063a07 f6f0d78959484986bab10b40c84e9406 036c977b26cc46cabc5f599cb6c00e9a - - default default] Lazy-loading 'numa_topology' on Instance uuid 714e9301-631d-44ba-8f91-a1d65209745b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:29:07 np0005548731 nova_compute[232433]: 2025-12-06 07:29:07.248 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:29:07 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:29:07.247 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=43, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:ec:b3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '32:72:e7:89:e0:7d'}, ipsec=False) old=SB_Global(nb_cfg=42) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 02:29:07 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:29:07.248 143965 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec  6 02:29:07 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:29:07 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:29:07 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:29:07.342 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:29:07 np0005548731 nova_compute[232433]: 2025-12-06 07:29:07.482 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Acquiring lock "refresh_cache-018b5c4d-2e0a-428b-ac8b-a74236e0d103" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec  6 02:29:07 np0005548731 nova_compute[232433]: 2025-12-06 07:29:07.482 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Acquired lock "refresh_cache-018b5c4d-2e0a-428b-ac8b-a74236e0d103" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec  6 02:29:07 np0005548731 nova_compute[232433]: 2025-12-06 07:29:07.483 232437 DEBUG nova.network.neutron [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] [instance: 018b5c4d-2e0a-428b-ac8b-a74236e0d103] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec  6 02:29:07 np0005548731 nova_compute[232433]: 2025-12-06 07:29:07.483 232437 DEBUG nova.objects.instance [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lazy-loading 'info_cache' on Instance uuid 018b5c4d-2e0a-428b-ac8b-a74236e0d103 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec  6 02:29:07 np0005548731 nova_compute[232433]: 2025-12-06 07:29:07.508 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 02:29:07 np0005548731 nova_compute[232433]: 2025-12-06 07:29:07.594 232437 INFO nova.virt.libvirt.driver [None req-81f128a9-62a2-4499-bcfe-d4b14b063a07 f6f0d78959484986bab10b40c84e9406 036c977b26cc46cabc5f599cb6c00e9a - - default default] [instance: 714e9301-631d-44ba-8f91-a1d65209745b] Attempting rescue
Dec  6 02:29:07 np0005548731 nova_compute[232433]: 2025-12-06 07:29:07.594 232437 DEBUG nova.virt.libvirt.driver [None req-81f128a9-62a2-4499-bcfe-d4b14b063a07 f6f0d78959484986bab10b40c84e9406 036c977b26cc46cabc5f599cb6c00e9a - - default default] rescue generated disk_info: {'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'disk.rescue': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config.rescue': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} rescue /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4314
Dec  6 02:29:07 np0005548731 nova_compute[232433]: 2025-12-06 07:29:07.599 232437 DEBUG nova.virt.libvirt.driver [None req-81f128a9-62a2-4499-bcfe-d4b14b063a07 f6f0d78959484986bab10b40c84e9406 036c977b26cc46cabc5f599cb6c00e9a - - default default] [instance: 714e9301-631d-44ba-8f91-a1d65209745b] Instance directory exists: not creating _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4719
Dec  6 02:29:07 np0005548731 nova_compute[232433]: 2025-12-06 07:29:07.600 232437 INFO nova.virt.libvirt.driver [None req-81f128a9-62a2-4499-bcfe-d4b14b063a07 f6f0d78959484986bab10b40c84e9406 036c977b26cc46cabc5f599cb6c00e9a - - default default] [instance: 714e9301-631d-44ba-8f91-a1d65209745b] Creating image(s)
Dec  6 02:29:07 np0005548731 nova_compute[232433]: 2025-12-06 07:29:07.630 232437 DEBUG nova.storage.rbd_utils [None req-81f128a9-62a2-4499-bcfe-d4b14b063a07 f6f0d78959484986bab10b40c84e9406 036c977b26cc46cabc5f599cb6c00e9a - - default default] rbd image 714e9301-631d-44ba-8f91-a1d65209745b_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec  6 02:29:07 np0005548731 nova_compute[232433]: 2025-12-06 07:29:07.634 232437 DEBUG nova.objects.instance [None req-81f128a9-62a2-4499-bcfe-d4b14b063a07 f6f0d78959484986bab10b40c84e9406 036c977b26cc46cabc5f599cb6c00e9a - - default default] Lazy-loading 'trusted_certs' on Instance uuid 714e9301-631d-44ba-8f91-a1d65209745b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec  6 02:29:07 np0005548731 nova_compute[232433]: 2025-12-06 07:29:07.637 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 018b5c4d-2e0a-428b-ac8b-a74236e0d103] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec  6 02:29:07 np0005548731 nova_compute[232433]: 2025-12-06 07:29:07.640 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 018b5c4d-2e0a-428b-ac8b-a74236e0d103] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: resize_finish, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec  6 02:29:07 np0005548731 nova_compute[232433]: 2025-12-06 07:29:07.676 232437 DEBUG nova.storage.rbd_utils [None req-81f128a9-62a2-4499-bcfe-d4b14b063a07 f6f0d78959484986bab10b40c84e9406 036c977b26cc46cabc5f599cb6c00e9a - - default default] rbd image 714e9301-631d-44ba-8f91-a1d65209745b_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec  6 02:29:07 np0005548731 nova_compute[232433]: 2025-12-06 07:29:07.701 232437 DEBUG nova.storage.rbd_utils [None req-81f128a9-62a2-4499-bcfe-d4b14b063a07 f6f0d78959484986bab10b40c84e9406 036c977b26cc46cabc5f599cb6c00e9a - - default default] rbd image 714e9301-631d-44ba-8f91-a1d65209745b_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec  6 02:29:07 np0005548731 nova_compute[232433]: 2025-12-06 07:29:07.705 232437 DEBUG oslo_concurrency.processutils [None req-81f128a9-62a2-4499-bcfe-d4b14b063a07 f6f0d78959484986bab10b40c84e9406 036c977b26cc46cabc5f599cb6c00e9a - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  6 02:29:07 np0005548731 nova_compute[232433]: 2025-12-06 07:29:07.768 232437 DEBUG oslo_concurrency.processutils [None req-81f128a9-62a2-4499-bcfe-d4b14b063a07 f6f0d78959484986bab10b40c84e9406 036c977b26cc46cabc5f599cb6c00e9a - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  6 02:29:07 np0005548731 nova_compute[232433]: 2025-12-06 07:29:07.769 232437 DEBUG oslo_concurrency.lockutils [None req-81f128a9-62a2-4499-bcfe-d4b14b063a07 f6f0d78959484986bab10b40c84e9406 036c977b26cc46cabc5f599cb6c00e9a - - default default] Acquiring lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  6 02:29:07 np0005548731 nova_compute[232433]: 2025-12-06 07:29:07.770 232437 DEBUG oslo_concurrency.lockutils [None req-81f128a9-62a2-4499-bcfe-d4b14b063a07 f6f0d78959484986bab10b40c84e9406 036c977b26cc46cabc5f599cb6c00e9a - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  6 02:29:07 np0005548731 nova_compute[232433]: 2025-12-06 07:29:07.770 232437 DEBUG oslo_concurrency.lockutils [None req-81f128a9-62a2-4499-bcfe-d4b14b063a07 f6f0d78959484986bab10b40c84e9406 036c977b26cc46cabc5f599cb6c00e9a - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  6 02:29:07 np0005548731 nova_compute[232433]: 2025-12-06 07:29:07.796 232437 DEBUG nova.storage.rbd_utils [None req-81f128a9-62a2-4499-bcfe-d4b14b063a07 f6f0d78959484986bab10b40c84e9406 036c977b26cc46cabc5f599cb6c00e9a - - default default] rbd image 714e9301-631d-44ba-8f91-a1d65209745b_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec  6 02:29:07 np0005548731 nova_compute[232433]: 2025-12-06 07:29:07.799 232437 DEBUG oslo_concurrency.processutils [None req-81f128a9-62a2-4499-bcfe-d4b14b063a07 f6f0d78959484986bab10b40c84e9406 036c977b26cc46cabc5f599cb6c00e9a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef 714e9301-631d-44ba-8f91-a1d65209745b_disk.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  6 02:29:07 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e271 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:29:08 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:29:08 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:29:08 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:29:08.097 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:29:08 np0005548731 nova_compute[232433]: 2025-12-06 07:29:08.748 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 02:29:09 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:29:09 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:29:09 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:29:09.344 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:29:10 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:29:10 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 02:29:10 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:29:10.100 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 02:29:10 np0005548731 nova_compute[232433]: 2025-12-06 07:29:10.498 232437 DEBUG nova.compute.manager [req-8d36200f-11d7-4661-a7e4-2870be58ff06 req-f88fe600-d8dc-4f36-beb0-ef26980d9bfc 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 714e9301-631d-44ba-8f91-a1d65209745b] Received event network-vif-unplugged-012ae37e-970e-4506-8216-7c5339055cf7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec  6 02:29:10 np0005548731 nova_compute[232433]: 2025-12-06 07:29:10.499 232437 DEBUG oslo_concurrency.lockutils [req-8d36200f-11d7-4661-a7e4-2870be58ff06 req-f88fe600-d8dc-4f36-beb0-ef26980d9bfc 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "714e9301-631d-44ba-8f91-a1d65209745b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  6 02:29:10 np0005548731 nova_compute[232433]: 2025-12-06 07:29:10.499 232437 DEBUG oslo_concurrency.lockutils [req-8d36200f-11d7-4661-a7e4-2870be58ff06 req-f88fe600-d8dc-4f36-beb0-ef26980d9bfc 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "714e9301-631d-44ba-8f91-a1d65209745b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  6 02:29:10 np0005548731 nova_compute[232433]: 2025-12-06 07:29:10.499 232437 DEBUG oslo_concurrency.lockutils [req-8d36200f-11d7-4661-a7e4-2870be58ff06 req-f88fe600-d8dc-4f36-beb0-ef26980d9bfc 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "714e9301-631d-44ba-8f91-a1d65209745b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  6 02:29:10 np0005548731 nova_compute[232433]: 2025-12-06 07:29:10.499 232437 DEBUG nova.compute.manager [req-8d36200f-11d7-4661-a7e4-2870be58ff06 req-f88fe600-d8dc-4f36-beb0-ef26980d9bfc 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 714e9301-631d-44ba-8f91-a1d65209745b] No waiting events found dispatching network-vif-unplugged-012ae37e-970e-4506-8216-7c5339055cf7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec  6 02:29:10 np0005548731 nova_compute[232433]: 2025-12-06 07:29:10.500 232437 WARNING nova.compute.manager [req-8d36200f-11d7-4661-a7e4-2870be58ff06 req-f88fe600-d8dc-4f36-beb0-ef26980d9bfc 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 714e9301-631d-44ba-8f91-a1d65209745b] Received unexpected event network-vif-unplugged-012ae37e-970e-4506-8216-7c5339055cf7 for instance with vm_state active and task_state rescuing.
Dec  6 02:29:11 np0005548731 nova_compute[232433]: 2025-12-06 07:29:11.318 232437 DEBUG oslo_concurrency.processutils [None req-81f128a9-62a2-4499-bcfe-d4b14b063a07 f6f0d78959484986bab10b40c84e9406 036c977b26cc46cabc5f599cb6c00e9a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef 714e9301-631d-44ba-8f91-a1d65209745b_disk.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 3.518s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  6 02:29:11 np0005548731 nova_compute[232433]: 2025-12-06 07:29:11.318 232437 DEBUG nova.objects.instance [None req-81f128a9-62a2-4499-bcfe-d4b14b063a07 f6f0d78959484986bab10b40c84e9406 036c977b26cc46cabc5f599cb6c00e9a - - default default] Lazy-loading 'migration_context' on Instance uuid 714e9301-631d-44ba-8f91-a1d65209745b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec  6 02:29:11 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:29:11 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:29:11 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:29:11.346 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:29:11 np0005548731 nova_compute[232433]: 2025-12-06 07:29:11.676 232437 DEBUG nova.virt.libvirt.driver [None req-81f128a9-62a2-4499-bcfe-d4b14b063a07 f6f0d78959484986bab10b40c84e9406 036c977b26cc46cabc5f599cb6c00e9a - - default default] [instance: 714e9301-631d-44ba-8f91-a1d65209745b] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec  6 02:29:11 np0005548731 nova_compute[232433]: 2025-12-06 07:29:11.677 232437 DEBUG nova.virt.libvirt.driver [None req-81f128a9-62a2-4499-bcfe-d4b14b063a07 f6f0d78959484986bab10b40c84e9406 036c977b26cc46cabc5f599cb6c00e9a - - default default] [instance: 714e9301-631d-44ba-8f91-a1d65209745b] Start _get_guest_xml network_info=[{"id": "012ae37e-970e-4506-8216-7c5339055cf7", "address": "fa:16:3e:01:71:4e", "network": {"id": "248c2772-b8d1-4900-9e05-67104b197a82", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-505599830-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerRescueTestJSONUnderV235-505599830-network", "vif_mac": "fa:16:3e:01:71:4e"}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "036c977b26cc46cabc5f599cb6c00e9a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap012ae37e-97", "ovs_interfaceid": "012ae37e-970e-4506-8216-7c5339055cf7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'disk.rescue': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config.rescue': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>) rescue={'image_id': '6efab05d-c7cf-4770-a5c3-c806a2739063', 'kernel_id': '', 'ramdisk_id': ''} block_device_info=None _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec  6 02:29:11 np0005548731 nova_compute[232433]: 2025-12-06 07:29:11.677 232437 DEBUG nova.objects.instance [None req-81f128a9-62a2-4499-bcfe-d4b14b063a07 f6f0d78959484986bab10b40c84e9406 036c977b26cc46cabc5f599cb6c00e9a - - default default] Lazy-loading 'resources' on Instance uuid 714e9301-631d-44ba-8f91-a1d65209745b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec  6 02:29:11 np0005548731 nova_compute[232433]: 2025-12-06 07:29:11.708 232437 WARNING nova.virt.libvirt.driver [None req-81f128a9-62a2-4499-bcfe-d4b14b063a07 f6f0d78959484986bab10b40c84e9406 036c977b26cc46cabc5f599cb6c00e9a - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec  6 02:29:11 np0005548731 nova_compute[232433]: 2025-12-06 07:29:11.714 232437 DEBUG nova.virt.libvirt.host [None req-81f128a9-62a2-4499-bcfe-d4b14b063a07 f6f0d78959484986bab10b40c84e9406 036c977b26cc46cabc5f599cb6c00e9a - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec  6 02:29:11 np0005548731 nova_compute[232433]: 2025-12-06 07:29:11.714 232437 DEBUG nova.virt.libvirt.host [None req-81f128a9-62a2-4499-bcfe-d4b14b063a07 f6f0d78959484986bab10b40c84e9406 036c977b26cc46cabc5f599cb6c00e9a - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec  6 02:29:11 np0005548731 nova_compute[232433]: 2025-12-06 07:29:11.717 232437 DEBUG nova.virt.libvirt.host [None req-81f128a9-62a2-4499-bcfe-d4b14b063a07 f6f0d78959484986bab10b40c84e9406 036c977b26cc46cabc5f599cb6c00e9a - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec  6 02:29:11 np0005548731 nova_compute[232433]: 2025-12-06 07:29:11.717 232437 DEBUG nova.virt.libvirt.host [None req-81f128a9-62a2-4499-bcfe-d4b14b063a07 f6f0d78959484986bab10b40c84e9406 036c977b26cc46cabc5f599cb6c00e9a - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec  6 02:29:11 np0005548731 nova_compute[232433]: 2025-12-06 07:29:11.718 232437 DEBUG nova.virt.libvirt.driver [None req-81f128a9-62a2-4499-bcfe-d4b14b063a07 f6f0d78959484986bab10b40c84e9406 036c977b26cc46cabc5f599cb6c00e9a - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec  6 02:29:11 np0005548731 nova_compute[232433]: 2025-12-06 07:29:11.719 232437 DEBUG nova.virt.hardware [None req-81f128a9-62a2-4499-bcfe-d4b14b063a07 f6f0d78959484986bab10b40c84e9406 036c977b26cc46cabc5f599cb6c00e9a - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-06T06:56:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='25848a18-11d9-4f11-80b5-5d005675c76d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec  6 02:29:11 np0005548731 nova_compute[232433]: 2025-12-06 07:29:11.719 232437 DEBUG nova.virt.hardware [None req-81f128a9-62a2-4499-bcfe-d4b14b063a07 f6f0d78959484986bab10b40c84e9406 036c977b26cc46cabc5f599cb6c00e9a - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec  6 02:29:11 np0005548731 nova_compute[232433]: 2025-12-06 07:29:11.719 232437 DEBUG nova.virt.hardware [None req-81f128a9-62a2-4499-bcfe-d4b14b063a07 f6f0d78959484986bab10b40c84e9406 036c977b26cc46cabc5f599cb6c00e9a - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec  6 02:29:11 np0005548731 nova_compute[232433]: 2025-12-06 07:29:11.720 232437 DEBUG nova.virt.hardware [None req-81f128a9-62a2-4499-bcfe-d4b14b063a07 f6f0d78959484986bab10b40c84e9406 036c977b26cc46cabc5f599cb6c00e9a - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec  6 02:29:11 np0005548731 nova_compute[232433]: 2025-12-06 07:29:11.720 232437 DEBUG nova.virt.hardware [None req-81f128a9-62a2-4499-bcfe-d4b14b063a07 f6f0d78959484986bab10b40c84e9406 036c977b26cc46cabc5f599cb6c00e9a - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec  6 02:29:11 np0005548731 nova_compute[232433]: 2025-12-06 07:29:11.720 232437 DEBUG nova.virt.hardware [None req-81f128a9-62a2-4499-bcfe-d4b14b063a07 f6f0d78959484986bab10b40c84e9406 036c977b26cc46cabc5f599cb6c00e9a - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec  6 02:29:11 np0005548731 nova_compute[232433]: 2025-12-06 07:29:11.720 232437 DEBUG nova.virt.hardware [None req-81f128a9-62a2-4499-bcfe-d4b14b063a07 f6f0d78959484986bab10b40c84e9406 036c977b26cc46cabc5f599cb6c00e9a - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec  6 02:29:11 np0005548731 nova_compute[232433]: 2025-12-06 07:29:11.720 232437 DEBUG nova.virt.hardware [None req-81f128a9-62a2-4499-bcfe-d4b14b063a07 f6f0d78959484986bab10b40c84e9406 036c977b26cc46cabc5f599cb6c00e9a - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec  6 02:29:11 np0005548731 nova_compute[232433]: 2025-12-06 07:29:11.721 232437 DEBUG nova.virt.hardware [None req-81f128a9-62a2-4499-bcfe-d4b14b063a07 f6f0d78959484986bab10b40c84e9406 036c977b26cc46cabc5f599cb6c00e9a - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec  6 02:29:11 np0005548731 nova_compute[232433]: 2025-12-06 07:29:11.721 232437 DEBUG nova.virt.hardware [None req-81f128a9-62a2-4499-bcfe-d4b14b063a07 f6f0d78959484986bab10b40c84e9406 036c977b26cc46cabc5f599cb6c00e9a - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec  6 02:29:11 np0005548731 nova_compute[232433]: 2025-12-06 07:29:11.721 232437 DEBUG nova.virt.hardware [None req-81f128a9-62a2-4499-bcfe-d4b14b063a07 f6f0d78959484986bab10b40c84e9406 036c977b26cc46cabc5f599cb6c00e9a - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec  6 02:29:11 np0005548731 nova_compute[232433]: 2025-12-06 07:29:11.721 232437 DEBUG nova.objects.instance [None req-81f128a9-62a2-4499-bcfe-d4b14b063a07 f6f0d78959484986bab10b40c84e9406 036c977b26cc46cabc5f599cb6c00e9a - - default default] Lazy-loading 'vcpu_model' on Instance uuid 714e9301-631d-44ba-8f91-a1d65209745b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec  6 02:29:12 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:29:12 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:29:12 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:29:12.104 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:29:12 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:29:12.250 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=9f96b960-b4f2-40bd-ae99-08121f5e8b78, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '43'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec  6 02:29:12 np0005548731 nova_compute[232433]: 2025-12-06 07:29:12.510 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 02:29:12 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e271 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:29:13 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:29:13 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:29:13 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:29:13.347 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:29:13 np0005548731 nova_compute[232433]: 2025-12-06 07:29:13.751 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 02:29:13 np0005548731 podman[278442]: 2025-12-06 07:29:13.909576646 +0000 UTC m=+0.061090262 container health_status ae4fd08cdec31d6ae2a6b91ffff7deb6ca5a8375cb52ee4b685cc6f25796824e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd)
Dec  6 02:29:13 np0005548731 podman[278440]: 2025-12-06 07:29:13.923718222 +0000 UTC m=+0.078959169 container health_status 26c2ce9bdb83c6db5ccad7f5b2a7cb4e9354ab0235ad2f942ea42c8feda2142b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Dec  6 02:29:13 np0005548731 podman[278441]: 2025-12-06 07:29:13.930057427 +0000 UTC m=+0.079267977 container health_status a118c3becf56cfbcae208065771ebf10a65162bb7b96389971293e534823fb78 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, managed_by=edpm_ansible, config_id=ovn_controller, tcib_managed=true)
Dec  6 02:29:14 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:29:14 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:29:14 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:29:14.107 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:29:15 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:29:15 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:29:15 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:29:15.348 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:29:16 np0005548731 nova_compute[232433]: 2025-12-06 07:29:16.071 232437 DEBUG oslo_concurrency.processutils [None req-81f128a9-62a2-4499-bcfe-d4b14b063a07 f6f0d78959484986bab10b40c84e9406 036c977b26cc46cabc5f599cb6c00e9a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:29:16 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:29:16 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:29:16 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:29:16.110 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:29:16 np0005548731 nova_compute[232433]: 2025-12-06 07:29:16.466 232437 DEBUG nova.compute.manager [req-a9f901bc-a9f2-4948-ba35-7c3969943a68 req-761c3341-135f-4e63-b9f4-fb083fd2c2fd 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 714e9301-631d-44ba-8f91-a1d65209745b] Received event network-vif-plugged-012ae37e-970e-4506-8216-7c5339055cf7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:29:16 np0005548731 nova_compute[232433]: 2025-12-06 07:29:16.466 232437 DEBUG oslo_concurrency.lockutils [req-a9f901bc-a9f2-4948-ba35-7c3969943a68 req-761c3341-135f-4e63-b9f4-fb083fd2c2fd 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "714e9301-631d-44ba-8f91-a1d65209745b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:29:16 np0005548731 nova_compute[232433]: 2025-12-06 07:29:16.467 232437 DEBUG oslo_concurrency.lockutils [req-a9f901bc-a9f2-4948-ba35-7c3969943a68 req-761c3341-135f-4e63-b9f4-fb083fd2c2fd 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "714e9301-631d-44ba-8f91-a1d65209745b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:29:16 np0005548731 nova_compute[232433]: 2025-12-06 07:29:16.467 232437 DEBUG oslo_concurrency.lockutils [req-a9f901bc-a9f2-4948-ba35-7c3969943a68 req-761c3341-135f-4e63-b9f4-fb083fd2c2fd 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "714e9301-631d-44ba-8f91-a1d65209745b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:29:16 np0005548731 nova_compute[232433]: 2025-12-06 07:29:16.467 232437 DEBUG nova.compute.manager [req-a9f901bc-a9f2-4948-ba35-7c3969943a68 req-761c3341-135f-4e63-b9f4-fb083fd2c2fd 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 714e9301-631d-44ba-8f91-a1d65209745b] No waiting events found dispatching network-vif-plugged-012ae37e-970e-4506-8216-7c5339055cf7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 02:29:16 np0005548731 nova_compute[232433]: 2025-12-06 07:29:16.467 232437 WARNING nova.compute.manager [req-a9f901bc-a9f2-4948-ba35-7c3969943a68 req-761c3341-135f-4e63-b9f4-fb083fd2c2fd 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 714e9301-631d-44ba-8f91-a1d65209745b] Received unexpected event network-vif-plugged-012ae37e-970e-4506-8216-7c5339055cf7 for instance with vm_state active and task_state rescuing.#033[00m
Dec  6 02:29:16 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  6 02:29:16 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1424801763' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  6 02:29:16 np0005548731 nova_compute[232433]: 2025-12-06 07:29:16.512 232437 DEBUG oslo_concurrency.processutils [None req-81f128a9-62a2-4499-bcfe-d4b14b063a07 f6f0d78959484986bab10b40c84e9406 036c977b26cc46cabc5f599cb6c00e9a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.440s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:29:16 np0005548731 nova_compute[232433]: 2025-12-06 07:29:16.512 232437 DEBUG oslo_concurrency.processutils [None req-81f128a9-62a2-4499-bcfe-d4b14b063a07 f6f0d78959484986bab10b40c84e9406 036c977b26cc46cabc5f599cb6c00e9a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:29:16 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  6 02:29:16 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1045039534' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  6 02:29:17 np0005548731 nova_compute[232433]: 2025-12-06 07:29:17.116 232437 DEBUG nova.network.neutron [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] [instance: 018b5c4d-2e0a-428b-ac8b-a74236e0d103] Updating instance_info_cache with network_info: [{"id": "cf525aa5-a11a-4218-98dd-328546551976", "address": "fa:16:3e:01:c9:f4", "network": {"id": "7c014e4e-a182-4f60-8285-20525bc99e5a", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-602234112-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "88f5b34244614321a9b6e902eaba0ece", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcf525aa5-a1", "ovs_interfaceid": "cf525aa5-a11a-4218-98dd-328546551976", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 02:29:17 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:29:17 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:29:17 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:29:17.350 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:29:17 np0005548731 nova_compute[232433]: 2025-12-06 07:29:17.511 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:29:17 np0005548731 nova_compute[232433]: 2025-12-06 07:29:17.533 232437 DEBUG oslo_concurrency.processutils [None req-81f128a9-62a2-4499-bcfe-d4b14b063a07 f6f0d78959484986bab10b40c84e9406 036c977b26cc46cabc5f599cb6c00e9a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.020s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:29:17 np0005548731 nova_compute[232433]: 2025-12-06 07:29:17.534 232437 DEBUG oslo_concurrency.processutils [None req-81f128a9-62a2-4499-bcfe-d4b14b063a07 f6f0d78959484986bab10b40c84e9406 036c977b26cc46cabc5f599cb6c00e9a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:29:17 np0005548731 nova_compute[232433]: 2025-12-06 07:29:17.653 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Releasing lock "refresh_cache-018b5c4d-2e0a-428b-ac8b-a74236e0d103" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 02:29:17 np0005548731 nova_compute[232433]: 2025-12-06 07:29:17.654 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] [instance: 018b5c4d-2e0a-428b-ac8b-a74236e0d103] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Dec  6 02:29:17 np0005548731 nova_compute[232433]: 2025-12-06 07:29:17.655 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:29:17 np0005548731 nova_compute[232433]: 2025-12-06 07:29:17.655 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:29:17 np0005548731 nova_compute[232433]: 2025-12-06 07:29:17.656 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:29:17 np0005548731 nova_compute[232433]: 2025-12-06 07:29:17.656 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:29:17 np0005548731 nova_compute[232433]: 2025-12-06 07:29:17.656 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:29:17 np0005548731 nova_compute[232433]: 2025-12-06 07:29:17.656 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  6 02:29:17 np0005548731 nova_compute[232433]: 2025-12-06 07:29:17.656 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:29:17 np0005548731 nova_compute[232433]: 2025-12-06 07:29:17.830 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:29:17 np0005548731 nova_compute[232433]: 2025-12-06 07:29:17.831 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:29:17 np0005548731 nova_compute[232433]: 2025-12-06 07:29:17.831 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:29:17 np0005548731 nova_compute[232433]: 2025-12-06 07:29:17.831 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  6 02:29:17 np0005548731 nova_compute[232433]: 2025-12-06 07:29:17.831 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:29:17 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e271 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:29:18 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:29:18 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:29:18 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:29:18.113 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:29:18 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 02:29:18 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1311121322' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 02:29:18 np0005548731 nova_compute[232433]: 2025-12-06 07:29:18.294 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.462s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:29:18 np0005548731 nova_compute[232433]: 2025-12-06 07:29:18.355 232437 DEBUG oslo_concurrency.processutils [None req-81f128a9-62a2-4499-bcfe-d4b14b063a07 f6f0d78959484986bab10b40c84e9406 036c977b26cc46cabc5f599cb6c00e9a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.820s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:29:18 np0005548731 nova_compute[232433]: 2025-12-06 07:29:18.356 232437 DEBUG nova.virt.libvirt.vif [None req-81f128a9-62a2-4499-bcfe-d4b14b063a07 f6f0d78959484986bab10b40c84e9406 036c977b26cc46cabc5f599cb6c00e9a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T07:27:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerRescueTestJSONUnderV235-server-965064608',display_name='tempest-ServerRescueTestJSONUnderV235-server-965064608',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serverrescuetestjsonunderv235-server-965064608',id=111,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-06T07:28:14Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='036c977b26cc46cabc5f599cb6c00e9a',ramdisk_id='',reservation_id='r-1y4pi8rg',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vi
f_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerRescueTestJSONUnderV235-706658007',owner_user_name='tempest-ServerRescueTestJSONUnderV235-706658007-project-member'},tags=<?>,task_state='rescuing',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:28:14Z,user_data=None,user_id='f6f0d78959484986bab10b40c84e9406',uuid=714e9301-631d-44ba-8f91-a1d65209745b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "012ae37e-970e-4506-8216-7c5339055cf7", "address": "fa:16:3e:01:71:4e", "network": {"id": "248c2772-b8d1-4900-9e05-67104b197a82", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-505599830-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerRescueTestJSONUnderV235-505599830-network", "vif_mac": "fa:16:3e:01:71:4e"}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "036c977b26cc46cabc5f599cb6c00e9a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap012ae37e-97", "ovs_interfaceid": "012ae37e-970e-4506-8216-7c5339055cf7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec  6 02:29:18 np0005548731 nova_compute[232433]: 2025-12-06 07:29:18.357 232437 DEBUG nova.network.os_vif_util [None req-81f128a9-62a2-4499-bcfe-d4b14b063a07 f6f0d78959484986bab10b40c84e9406 036c977b26cc46cabc5f599cb6c00e9a - - default default] Converting VIF {"id": "012ae37e-970e-4506-8216-7c5339055cf7", "address": "fa:16:3e:01:71:4e", "network": {"id": "248c2772-b8d1-4900-9e05-67104b197a82", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-505599830-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerRescueTestJSONUnderV235-505599830-network", "vif_mac": "fa:16:3e:01:71:4e"}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "036c977b26cc46cabc5f599cb6c00e9a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap012ae37e-97", "ovs_interfaceid": "012ae37e-970e-4506-8216-7c5339055cf7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 02:29:18 np0005548731 nova_compute[232433]: 2025-12-06 07:29:18.358 232437 DEBUG nova.network.os_vif_util [None req-81f128a9-62a2-4499-bcfe-d4b14b063a07 f6f0d78959484986bab10b40c84e9406 036c977b26cc46cabc5f599cb6c00e9a - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:01:71:4e,bridge_name='br-int',has_traffic_filtering=True,id=012ae37e-970e-4506-8216-7c5339055cf7,network=Network(248c2772-b8d1-4900-9e05-67104b197a82),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap012ae37e-97') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 02:29:18 np0005548731 nova_compute[232433]: 2025-12-06 07:29:18.359 232437 DEBUG nova.objects.instance [None req-81f128a9-62a2-4499-bcfe-d4b14b063a07 f6f0d78959484986bab10b40c84e9406 036c977b26cc46cabc5f599cb6c00e9a - - default default] Lazy-loading 'pci_devices' on Instance uuid 714e9301-631d-44ba-8f91-a1d65209745b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:29:18 np0005548731 nova_compute[232433]: 2025-12-06 07:29:18.753 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:29:19 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:29:19 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:29:19 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:29:19.352 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:29:20 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:29:20 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:29:20 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:29:20.116 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:29:21 np0005548731 ovn_controller[133927]: 2025-12-06T07:29:21Z|00061|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:01:c9:f4 10.100.0.11
Dec  6 02:29:21 np0005548731 ovn_controller[133927]: 2025-12-06T07:29:21Z|00062|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:01:c9:f4 10.100.0.11
Dec  6 02:29:21 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:29:21 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:29:21 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:29:21.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:29:22 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:29:22 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:29:22 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:29:22.118 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:29:22 np0005548731 nova_compute[232433]: 2025-12-06 07:29:22.513 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:29:22 np0005548731 nova_compute[232433]: 2025-12-06 07:29:22.730 232437 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765006147.2215967, 714e9301-631d-44ba-8f91-a1d65209745b => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 02:29:22 np0005548731 nova_compute[232433]: 2025-12-06 07:29:22.731 232437 INFO nova.compute.manager [-] [instance: 714e9301-631d-44ba-8f91-a1d65209745b] VM Stopped (Lifecycle Event)#033[00m
Dec  6 02:29:22 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e271 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:29:23 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:29:23 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:29:23 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:29:23.357 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:29:23 np0005548731 nova_compute[232433]: 2025-12-06 07:29:23.755 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:29:24 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:29:24 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:29:24 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:29:24.121 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:29:24 np0005548731 ceph-mon[77458]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec  6 02:29:24 np0005548731 ceph-mon[77458]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 3600.0 total, 600.0 interval#012Cumulative writes: 8853 writes, 46K keys, 8853 commit groups, 1.0 writes per commit group, ingest: 0.09 GB, 0.03 MB/s#012Cumulative WAL: 8853 writes, 8853 syncs, 1.00 writes per sync, written: 0.09 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1377 writes, 7070 keys, 1377 commit groups, 1.0 writes per commit group, ingest: 14.36 MB, 0.02 MB/s#012Interval WAL: 1377 writes, 1377 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     25.4      2.20              0.16        26    0.085       0      0       0.0       0.0#012  L6      1/0   10.60 MB   0.0      0.3     0.1      0.2       0.2      0.0       0.0   4.2     75.9     63.8      3.73              0.69        25    0.149    153K    14K       0.0       0.0#012 Sum      1/0   10.60 MB   0.0      0.3     0.1      0.2       0.3      0.1       0.0   5.2     47.7     49.5      5.93              0.85        51    0.116    153K    14K       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   7.5    133.5    136.2      0.54              0.19        12    0.045     46K   3058       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) 
Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.3     0.1      0.2       0.2      0.0       0.0   0.0     75.9     63.8      3.73              0.69        25    0.149    153K    14K       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     25.5      2.20              0.16        25    0.088       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.6      0.00              0.00         1    0.003       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 3600.0 total, 600.0 interval#012Flush(GB): cumulative 0.055, interval 0.010#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.29 GB write, 0.08 MB/s write, 0.28 GB read, 0.08 MB/s read, 5.9 seconds#012Interval compaction: 0.07 GB write, 0.12 MB/s write, 0.07 GB read, 0.12 MB/s read, 0.5 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x5619171151f0#2 capacity: 304.00 MB usage: 31.52 MB table_size: 0 occupancy: 18446744073709551615 collections: 7 last_copies: 0 last_secs: 0.000168 secs_since: 0#012Block cache entry stats(count,size,portion): 
DataBlock(1808,30.38 MB,9.99243%) FilterBlock(51,436.30 KB,0.140155%) IndexBlock(51,737.86 KB,0.237028%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Dec  6 02:29:25 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:29:25 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:29:25 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:29:25.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:29:25 np0005548731 nova_compute[232433]: 2025-12-06 07:29:25.888 232437 DEBUG nova.compute.manager [None req-804e4269-a2ec-462e-844e-3f41e9c7a341 - - - - - -] [instance: 714e9301-631d-44ba-8f91-a1d65209745b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:29:25 np0005548731 nova_compute[232433]: 2025-12-06 07:29:25.890 232437 DEBUG nova.virt.libvirt.driver [None req-81f128a9-62a2-4499-bcfe-d4b14b063a07 f6f0d78959484986bab10b40c84e9406 036c977b26cc46cabc5f599cb6c00e9a - - default default] [instance: 714e9301-631d-44ba-8f91-a1d65209745b] End _get_guest_xml xml=<domain type="kvm">
Dec  6 02:29:25 np0005548731 nova_compute[232433]:  <uuid>714e9301-631d-44ba-8f91-a1d65209745b</uuid>
Dec  6 02:29:25 np0005548731 nova_compute[232433]:  <name>instance-0000006f</name>
Dec  6 02:29:25 np0005548731 nova_compute[232433]:  <memory>131072</memory>
Dec  6 02:29:25 np0005548731 nova_compute[232433]:  <vcpu>1</vcpu>
Dec  6 02:29:25 np0005548731 nova_compute[232433]:  <metadata>
Dec  6 02:29:25 np0005548731 nova_compute[232433]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec  6 02:29:25 np0005548731 nova_compute[232433]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec  6 02:29:25 np0005548731 nova_compute[232433]:      <nova:name>tempest-ServerRescueTestJSONUnderV235-server-965064608</nova:name>
Dec  6 02:29:25 np0005548731 nova_compute[232433]:      <nova:creationTime>2025-12-06 07:29:11</nova:creationTime>
Dec  6 02:29:25 np0005548731 nova_compute[232433]:      <nova:flavor name="m1.nano">
Dec  6 02:29:25 np0005548731 nova_compute[232433]:        <nova:memory>128</nova:memory>
Dec  6 02:29:25 np0005548731 nova_compute[232433]:        <nova:disk>1</nova:disk>
Dec  6 02:29:25 np0005548731 nova_compute[232433]:        <nova:swap>0</nova:swap>
Dec  6 02:29:25 np0005548731 nova_compute[232433]:        <nova:ephemeral>0</nova:ephemeral>
Dec  6 02:29:25 np0005548731 nova_compute[232433]:        <nova:vcpus>1</nova:vcpus>
Dec  6 02:29:25 np0005548731 nova_compute[232433]:      </nova:flavor>
Dec  6 02:29:25 np0005548731 nova_compute[232433]:      <nova:owner>
Dec  6 02:29:25 np0005548731 nova_compute[232433]:        <nova:user uuid="f6f0d78959484986bab10b40c84e9406">tempest-ServerRescueTestJSONUnderV235-706658007-project-member</nova:user>
Dec  6 02:29:25 np0005548731 nova_compute[232433]:        <nova:project uuid="036c977b26cc46cabc5f599cb6c00e9a">tempest-ServerRescueTestJSONUnderV235-706658007</nova:project>
Dec  6 02:29:25 np0005548731 nova_compute[232433]:      </nova:owner>
Dec  6 02:29:25 np0005548731 nova_compute[232433]:      <nova:root type="image" uuid="6efab05d-c7cf-4770-a5c3-c806a2739063"/>
Dec  6 02:29:25 np0005548731 nova_compute[232433]:      <nova:ports>
Dec  6 02:29:25 np0005548731 nova_compute[232433]:        <nova:port uuid="012ae37e-970e-4506-8216-7c5339055cf7">
Dec  6 02:29:25 np0005548731 nova_compute[232433]:          <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Dec  6 02:29:25 np0005548731 nova_compute[232433]:        </nova:port>
Dec  6 02:29:25 np0005548731 nova_compute[232433]:      </nova:ports>
Dec  6 02:29:25 np0005548731 nova_compute[232433]:    </nova:instance>
Dec  6 02:29:25 np0005548731 nova_compute[232433]:  </metadata>
Dec  6 02:29:25 np0005548731 nova_compute[232433]:  <sysinfo type="smbios">
Dec  6 02:29:25 np0005548731 nova_compute[232433]:    <system>
Dec  6 02:29:25 np0005548731 nova_compute[232433]:      <entry name="manufacturer">RDO</entry>
Dec  6 02:29:25 np0005548731 nova_compute[232433]:      <entry name="product">OpenStack Compute</entry>
Dec  6 02:29:25 np0005548731 nova_compute[232433]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec  6 02:29:25 np0005548731 nova_compute[232433]:      <entry name="serial">714e9301-631d-44ba-8f91-a1d65209745b</entry>
Dec  6 02:29:25 np0005548731 nova_compute[232433]:      <entry name="uuid">714e9301-631d-44ba-8f91-a1d65209745b</entry>
Dec  6 02:29:25 np0005548731 nova_compute[232433]:      <entry name="family">Virtual Machine</entry>
Dec  6 02:29:25 np0005548731 nova_compute[232433]:    </system>
Dec  6 02:29:25 np0005548731 nova_compute[232433]:  </sysinfo>
Dec  6 02:29:25 np0005548731 nova_compute[232433]:  <os>
Dec  6 02:29:25 np0005548731 nova_compute[232433]:    <type arch="x86_64" machine="q35">hvm</type>
Dec  6 02:29:25 np0005548731 nova_compute[232433]:    <smbios mode="sysinfo"/>
Dec  6 02:29:25 np0005548731 nova_compute[232433]:  </os>
Dec  6 02:29:25 np0005548731 nova_compute[232433]:  <features>
Dec  6 02:29:25 np0005548731 nova_compute[232433]:    <acpi/>
Dec  6 02:29:25 np0005548731 nova_compute[232433]:    <apic/>
Dec  6 02:29:25 np0005548731 nova_compute[232433]:    <vmcoreinfo/>
Dec  6 02:29:25 np0005548731 nova_compute[232433]:  </features>
Dec  6 02:29:25 np0005548731 nova_compute[232433]:  <clock offset="utc">
Dec  6 02:29:25 np0005548731 nova_compute[232433]:    <timer name="pit" tickpolicy="delay"/>
Dec  6 02:29:25 np0005548731 nova_compute[232433]:    <timer name="rtc" tickpolicy="catchup"/>
Dec  6 02:29:25 np0005548731 nova_compute[232433]:    <timer name="hpet" present="no"/>
Dec  6 02:29:25 np0005548731 nova_compute[232433]:  </clock>
Dec  6 02:29:25 np0005548731 nova_compute[232433]:  <cpu mode="custom" match="exact">
Dec  6 02:29:25 np0005548731 nova_compute[232433]:    <model>Nehalem</model>
Dec  6 02:29:25 np0005548731 nova_compute[232433]:    <topology sockets="1" cores="1" threads="1"/>
Dec  6 02:29:25 np0005548731 nova_compute[232433]:  </cpu>
Dec  6 02:29:25 np0005548731 nova_compute[232433]:  <devices>
Dec  6 02:29:25 np0005548731 nova_compute[232433]:    <disk type="network" device="disk">
Dec  6 02:29:25 np0005548731 nova_compute[232433]:      <driver type="raw" cache="none"/>
Dec  6 02:29:25 np0005548731 nova_compute[232433]:      <source protocol="rbd" name="vms/714e9301-631d-44ba-8f91-a1d65209745b_disk.rescue">
Dec  6 02:29:25 np0005548731 nova_compute[232433]:        <host name="192.168.122.100" port="6789"/>
Dec  6 02:29:25 np0005548731 nova_compute[232433]:        <host name="192.168.122.102" port="6789"/>
Dec  6 02:29:25 np0005548731 nova_compute[232433]:        <host name="192.168.122.101" port="6789"/>
Dec  6 02:29:25 np0005548731 nova_compute[232433]:      </source>
Dec  6 02:29:25 np0005548731 nova_compute[232433]:      <auth username="openstack">
Dec  6 02:29:25 np0005548731 nova_compute[232433]:        <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec  6 02:29:25 np0005548731 nova_compute[232433]:      </auth>
Dec  6 02:29:25 np0005548731 nova_compute[232433]:      <target dev="vda" bus="virtio"/>
Dec  6 02:29:25 np0005548731 nova_compute[232433]:    </disk>
Dec  6 02:29:25 np0005548731 nova_compute[232433]:    <disk type="network" device="disk">
Dec  6 02:29:25 np0005548731 nova_compute[232433]:      <driver type="raw" cache="none"/>
Dec  6 02:29:25 np0005548731 nova_compute[232433]:      <source protocol="rbd" name="vms/714e9301-631d-44ba-8f91-a1d65209745b_disk">
Dec  6 02:29:25 np0005548731 nova_compute[232433]:        <host name="192.168.122.100" port="6789"/>
Dec  6 02:29:25 np0005548731 nova_compute[232433]:        <host name="192.168.122.102" port="6789"/>
Dec  6 02:29:25 np0005548731 nova_compute[232433]:        <host name="192.168.122.101" port="6789"/>
Dec  6 02:29:25 np0005548731 nova_compute[232433]:      </source>
Dec  6 02:29:25 np0005548731 nova_compute[232433]:      <auth username="openstack">
Dec  6 02:29:25 np0005548731 nova_compute[232433]:        <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec  6 02:29:25 np0005548731 nova_compute[232433]:      </auth>
Dec  6 02:29:25 np0005548731 nova_compute[232433]:      <target dev="vdb" bus="virtio"/>
Dec  6 02:29:25 np0005548731 nova_compute[232433]:    </disk>
Dec  6 02:29:25 np0005548731 nova_compute[232433]:    <disk type="network" device="cdrom">
Dec  6 02:29:25 np0005548731 nova_compute[232433]:      <driver type="raw" cache="none"/>
Dec  6 02:29:25 np0005548731 nova_compute[232433]:      <source protocol="rbd" name="vms/714e9301-631d-44ba-8f91-a1d65209745b_disk.config.rescue">
Dec  6 02:29:25 np0005548731 nova_compute[232433]:        <host name="192.168.122.100" port="6789"/>
Dec  6 02:29:25 np0005548731 nova_compute[232433]:        <host name="192.168.122.102" port="6789"/>
Dec  6 02:29:25 np0005548731 nova_compute[232433]:        <host name="192.168.122.101" port="6789"/>
Dec  6 02:29:25 np0005548731 nova_compute[232433]:      </source>
Dec  6 02:29:25 np0005548731 nova_compute[232433]:      <auth username="openstack">
Dec  6 02:29:25 np0005548731 nova_compute[232433]:        <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec  6 02:29:25 np0005548731 nova_compute[232433]:      </auth>
Dec  6 02:29:25 np0005548731 nova_compute[232433]:      <target dev="sda" bus="sata"/>
Dec  6 02:29:25 np0005548731 nova_compute[232433]:    </disk>
Dec  6 02:29:25 np0005548731 nova_compute[232433]:    <interface type="ethernet">
Dec  6 02:29:25 np0005548731 nova_compute[232433]:      <mac address="fa:16:3e:01:71:4e"/>
Dec  6 02:29:25 np0005548731 nova_compute[232433]:      <model type="virtio"/>
Dec  6 02:29:25 np0005548731 nova_compute[232433]:      <driver name="vhost" rx_queue_size="512"/>
Dec  6 02:29:25 np0005548731 nova_compute[232433]:      <mtu size="1442"/>
Dec  6 02:29:25 np0005548731 nova_compute[232433]:      <target dev="tap012ae37e-97"/>
Dec  6 02:29:25 np0005548731 nova_compute[232433]:    </interface>
Dec  6 02:29:25 np0005548731 nova_compute[232433]:    <serial type="pty">
Dec  6 02:29:25 np0005548731 nova_compute[232433]:      <log file="/var/lib/nova/instances/714e9301-631d-44ba-8f91-a1d65209745b/console.log" append="off"/>
Dec  6 02:29:25 np0005548731 nova_compute[232433]:    </serial>
Dec  6 02:29:25 np0005548731 nova_compute[232433]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec  6 02:29:25 np0005548731 nova_compute[232433]:    <video>
Dec  6 02:29:25 np0005548731 nova_compute[232433]:      <model type="virtio"/>
Dec  6 02:29:25 np0005548731 nova_compute[232433]:    </video>
Dec  6 02:29:25 np0005548731 nova_compute[232433]:    <input type="tablet" bus="usb"/>
Dec  6 02:29:25 np0005548731 nova_compute[232433]:    <rng model="virtio">
Dec  6 02:29:25 np0005548731 nova_compute[232433]:      <backend model="random">/dev/urandom</backend>
Dec  6 02:29:25 np0005548731 nova_compute[232433]:    </rng>
Dec  6 02:29:25 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root"/>
Dec  6 02:29:25 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:29:25 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:29:25 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:29:25 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:29:25 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:29:25 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:29:25 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:29:25 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:29:25 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:29:25 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:29:25 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:29:25 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:29:25 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:29:25 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:29:25 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:29:25 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:29:25 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:29:25 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:29:25 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:29:25 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:29:25 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:29:25 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:29:25 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:29:25 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:29:25 np0005548731 nova_compute[232433]:    <controller type="usb" index="0"/>
Dec  6 02:29:25 np0005548731 nova_compute[232433]:    <memballoon model="virtio">
Dec  6 02:29:25 np0005548731 nova_compute[232433]:      <stats period="10"/>
Dec  6 02:29:25 np0005548731 nova_compute[232433]:    </memballoon>
Dec  6 02:29:25 np0005548731 nova_compute[232433]:  </devices>
Dec  6 02:29:25 np0005548731 nova_compute[232433]: </domain>
Dec  6 02:29:25 np0005548731 nova_compute[232433]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec  6 02:29:25 np0005548731 nova_compute[232433]: 2025-12-06 07:29:25.896 232437 DEBUG nova.compute.manager [None req-804e4269-a2ec-462e-844e-3f41e9c7a341 - - - - - -] [instance: 714e9301-631d-44ba-8f91-a1d65209745b] Synchronizing instance power state after lifecycle event "Stopped"; current vm_state: active, current task_state: rescuing, current DB power_state: 1, VM power_state: 4 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec  6 02:29:25 np0005548731 nova_compute[232433]: 2025-12-06 07:29:25.900 232437 INFO nova.virt.libvirt.driver [-] [instance: 714e9301-631d-44ba-8f91-a1d65209745b] Instance destroyed successfully.
Dec  6 02:29:25 np0005548731 nova_compute[232433]: 2025-12-06 07:29:25.983 232437 INFO nova.compute.manager [None req-804e4269-a2ec-462e-844e-3f41e9c7a341 - - - - - -] [instance: 714e9301-631d-44ba-8f91-a1d65209745b] During sync_power_state the instance has a pending task (rescuing). Skip.
Dec  6 02:29:26 np0005548731 nova_compute[232433]: 2025-12-06 07:29:26.024 232437 DEBUG nova.virt.libvirt.driver [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] skipping disk for instance-0000006f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec  6 02:29:26 np0005548731 nova_compute[232433]: 2025-12-06 07:29:26.024 232437 DEBUG nova.virt.libvirt.driver [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] skipping disk for instance-0000006f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec  6 02:29:26 np0005548731 nova_compute[232433]: 2025-12-06 07:29:26.025 232437 DEBUG nova.virt.libvirt.driver [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] skipping disk for instance-0000006f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec  6 02:29:26 np0005548731 nova_compute[232433]: 2025-12-06 07:29:26.027 232437 DEBUG nova.virt.libvirt.driver [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] skipping disk for instance-0000006e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec  6 02:29:26 np0005548731 nova_compute[232433]: 2025-12-06 07:29:26.028 232437 DEBUG nova.virt.libvirt.driver [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] skipping disk for instance-0000006e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec  6 02:29:26 np0005548731 nova_compute[232433]: 2025-12-06 07:29:26.093 232437 DEBUG nova.virt.libvirt.driver [None req-81f128a9-62a2-4499-bcfe-d4b14b063a07 f6f0d78959484986bab10b40c84e9406 036c977b26cc46cabc5f599cb6c00e9a - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec  6 02:29:26 np0005548731 nova_compute[232433]: 2025-12-06 07:29:26.093 232437 DEBUG nova.virt.libvirt.driver [None req-81f128a9-62a2-4499-bcfe-d4b14b063a07 f6f0d78959484986bab10b40c84e9406 036c977b26cc46cabc5f599cb6c00e9a - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec  6 02:29:26 np0005548731 nova_compute[232433]: 2025-12-06 07:29:26.093 232437 DEBUG nova.virt.libvirt.driver [None req-81f128a9-62a2-4499-bcfe-d4b14b063a07 f6f0d78959484986bab10b40c84e9406 036c977b26cc46cabc5f599cb6c00e9a - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec  6 02:29:26 np0005548731 nova_compute[232433]: 2025-12-06 07:29:26.094 232437 DEBUG nova.virt.libvirt.driver [None req-81f128a9-62a2-4499-bcfe-d4b14b063a07 f6f0d78959484986bab10b40c84e9406 036c977b26cc46cabc5f599cb6c00e9a - - default default] No VIF found with MAC fa:16:3e:01:71:4e, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec  6 02:29:26 np0005548731 nova_compute[232433]: 2025-12-06 07:29:26.094 232437 INFO nova.virt.libvirt.driver [None req-81f128a9-62a2-4499-bcfe-d4b14b063a07 f6f0d78959484986bab10b40c84e9406 036c977b26cc46cabc5f599cb6c00e9a - - default default] [instance: 714e9301-631d-44ba-8f91-a1d65209745b] Using config drive
Dec  6 02:29:26 np0005548731 nova_compute[232433]: 2025-12-06 07:29:26.117 232437 DEBUG nova.storage.rbd_utils [None req-81f128a9-62a2-4499-bcfe-d4b14b063a07 f6f0d78959484986bab10b40c84e9406 036c977b26cc46cabc5f599cb6c00e9a - - default default] rbd image 714e9301-631d-44ba-8f91-a1d65209745b_disk.config.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec  6 02:29:26 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:29:26 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:29:26 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:29:26.125 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:29:26 np0005548731 nova_compute[232433]: 2025-12-06 07:29:26.153 232437 DEBUG nova.objects.instance [None req-81f128a9-62a2-4499-bcfe-d4b14b063a07 f6f0d78959484986bab10b40c84e9406 036c977b26cc46cabc5f599cb6c00e9a - - default default] Lazy-loading 'ec2_ids' on Instance uuid 714e9301-631d-44ba-8f91-a1d65209745b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec  6 02:29:26 np0005548731 nova_compute[232433]: 2025-12-06 07:29:26.200 232437 DEBUG nova.objects.instance [None req-81f128a9-62a2-4499-bcfe-d4b14b063a07 f6f0d78959484986bab10b40c84e9406 036c977b26cc46cabc5f599cb6c00e9a - - default default] Lazy-loading 'keypairs' on Instance uuid 714e9301-631d-44ba-8f91-a1d65209745b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec  6 02:29:26 np0005548731 nova_compute[232433]: 2025-12-06 07:29:26.261 232437 WARNING nova.virt.libvirt.driver [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec  6 02:29:26 np0005548731 nova_compute[232433]: 2025-12-06 07:29:26.262 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4200MB free_disk=20.785152435302734GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec  6 02:29:26 np0005548731 nova_compute[232433]: 2025-12-06 07:29:26.262 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  6 02:29:26 np0005548731 nova_compute[232433]: 2025-12-06 07:29:26.263 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  6 02:29:26 np0005548731 nova_compute[232433]: 2025-12-06 07:29:26.352 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Applying migration context for instance 018b5c4d-2e0a-428b-ac8b-a74236e0d103 as it has an incoming, in-progress migration e9fd380a-9dfd-4e3f-9f3e-f4a350f6e02f. Migration status is confirming _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:950
Dec  6 02:29:26 np0005548731 nova_compute[232433]: 2025-12-06 07:29:26.353 232437 INFO nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] [instance: 018b5c4d-2e0a-428b-ac8b-a74236e0d103] Updating resource usage from migration e9fd380a-9dfd-4e3f-9f3e-f4a350f6e02f
Dec  6 02:29:26 np0005548731 nova_compute[232433]: 2025-12-06 07:29:26.387 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Instance 018b5c4d-2e0a-428b-ac8b-a74236e0d103 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 192, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec  6 02:29:26 np0005548731 nova_compute[232433]: 2025-12-06 07:29:26.388 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Instance 714e9301-631d-44ba-8f91-a1d65209745b actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec  6 02:29:26 np0005548731 nova_compute[232433]: 2025-12-06 07:29:26.388 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec  6 02:29:26 np0005548731 nova_compute[232433]: 2025-12-06 07:29:26.388 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7680MB used_ram=832MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec  6 02:29:26 np0005548731 nova_compute[232433]: 2025-12-06 07:29:26.452 232437 DEBUG nova.scheduler.client.report [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Refreshing inventories for resource provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Dec  6 02:29:26 np0005548731 nova_compute[232433]: 2025-12-06 07:29:26.503 232437 DEBUG nova.scheduler.client.report [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Updating ProviderTree inventory for provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Dec  6 02:29:26 np0005548731 nova_compute[232433]: 2025-12-06 07:29:26.504 232437 DEBUG nova.compute.provider_tree [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Updating inventory in ProviderTree for provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Dec  6 02:29:26 np0005548731 nova_compute[232433]: 2025-12-06 07:29:26.533 232437 DEBUG nova.scheduler.client.report [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Refreshing aggregate associations for resource provider 6d00757a-082f-486d-ae84-869a2ba2e6e7, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Dec  6 02:29:26 np0005548731 nova_compute[232433]: 2025-12-06 07:29:26.568 232437 DEBUG nova.scheduler.client.report [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Refreshing trait associations for resource provider 6d00757a-082f-486d-ae84-869a2ba2e6e7, traits: COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_STORAGE_BUS_USB,COMPUTE_ACCELERATORS,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_TRUSTED_CERTS,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_SSE,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_SSE2,HW_CPU_X86_SSE41,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_STORAGE_BUS_SATA,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_SECURITY_TPM_1_2,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_VOLUME_EXTEND,COMPUTE_NODE,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_SECURITY_TPM_2_0,COMPUTE_VIOMMU_MODEL_AUTO,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_SSE42,COMPUTE_STORAGE_BUS_FDC,COMPUTE_RESCUE_BFV,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_IMAGE_TYPE_ARI _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Dec  6 02:29:26 np0005548731 nova_compute[232433]: 2025-12-06 07:29:26.730 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  6 02:29:26 np0005548731 nova_compute[232433]: 2025-12-06 07:29:26.844 232437 INFO nova.virt.libvirt.driver [None req-81f128a9-62a2-4499-bcfe-d4b14b063a07 f6f0d78959484986bab10b40c84e9406 036c977b26cc46cabc5f599cb6c00e9a - - default default] [instance: 714e9301-631d-44ba-8f91-a1d65209745b] Creating config drive at /var/lib/nova/instances/714e9301-631d-44ba-8f91-a1d65209745b/disk.config.rescue
Dec  6 02:29:26 np0005548731 nova_compute[232433]: 2025-12-06 07:29:26.852 232437 DEBUG oslo_concurrency.processutils [None req-81f128a9-62a2-4499-bcfe-d4b14b063a07 f6f0d78959484986bab10b40c84e9406 036c977b26cc46cabc5f599cb6c00e9a - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/714e9301-631d-44ba-8f91-a1d65209745b/disk.config.rescue -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpla6uj26l execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  6 02:29:26 np0005548731 nova_compute[232433]: 2025-12-06 07:29:26.983 232437 DEBUG oslo_concurrency.processutils [None req-81f128a9-62a2-4499-bcfe-d4b14b063a07 f6f0d78959484986bab10b40c84e9406 036c977b26cc46cabc5f599cb6c00e9a - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/714e9301-631d-44ba-8f91-a1d65209745b/disk.config.rescue -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpla6uj26l" returned: 0 in 0.131s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  6 02:29:27 np0005548731 nova_compute[232433]: 2025-12-06 07:29:27.012 232437 DEBUG nova.storage.rbd_utils [None req-81f128a9-62a2-4499-bcfe-d4b14b063a07 f6f0d78959484986bab10b40c84e9406 036c977b26cc46cabc5f599cb6c00e9a - - default default] rbd image 714e9301-631d-44ba-8f91-a1d65209745b_disk.config.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec  6 02:29:27 np0005548731 nova_compute[232433]: 2025-12-06 07:29:27.017 232437 DEBUG oslo_concurrency.processutils [None req-81f128a9-62a2-4499-bcfe-d4b14b063a07 f6f0d78959484986bab10b40c84e9406 036c977b26cc46cabc5f599cb6c00e9a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/714e9301-631d-44ba-8f91-a1d65209745b/disk.config.rescue 714e9301-631d-44ba-8f91-a1d65209745b_disk.config.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  6 02:29:27 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 02:29:27 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1719017473' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 02:29:27 np0005548731 nova_compute[232433]: 2025-12-06 07:29:27.150 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.420s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  6 02:29:27 np0005548731 nova_compute[232433]: 2025-12-06 07:29:27.156 232437 DEBUG nova.compute.provider_tree [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Inventory has not changed in ProviderTree for provider: 6d00757a-082f-486d-ae84-869a2ba2e6e7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec  6 02:29:27 np0005548731 nova_compute[232433]: 2025-12-06 07:29:27.235 232437 DEBUG nova.scheduler.client.report [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Inventory has not changed for provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec  6 02:29:27 np0005548731 nova_compute[232433]: 2025-12-06 07:29:27.279 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec  6 02:29:27 np0005548731 nova_compute[232433]: 2025-12-06 07:29:27.280 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.017s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  6 02:29:27 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:29:27 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:29:27 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:29:27.361 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:29:27 np0005548731 nova_compute[232433]: 2025-12-06 07:29:27.516 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:29:27 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e271 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:29:28 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:29:28 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:29:28 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:29:28.128 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:29:28 np0005548731 nova_compute[232433]: 2025-12-06 07:29:28.757 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:29:29 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:29:29 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:29:29 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:29:29.363 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:29:30 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:29:30 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:29:30 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:29:30.131 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:29:30 np0005548731 nova_compute[232433]: 2025-12-06 07:29:30.325 232437 DEBUG oslo_concurrency.processutils [None req-81f128a9-62a2-4499-bcfe-d4b14b063a07 f6f0d78959484986bab10b40c84e9406 036c977b26cc46cabc5f599cb6c00e9a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/714e9301-631d-44ba-8f91-a1d65209745b/disk.config.rescue 714e9301-631d-44ba-8f91-a1d65209745b_disk.config.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 3.308s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:29:30 np0005548731 nova_compute[232433]: 2025-12-06 07:29:30.325 232437 INFO nova.virt.libvirt.driver [None req-81f128a9-62a2-4499-bcfe-d4b14b063a07 f6f0d78959484986bab10b40c84e9406 036c977b26cc46cabc5f599cb6c00e9a - - default default] [instance: 714e9301-631d-44ba-8f91-a1d65209745b] Deleting local config drive /var/lib/nova/instances/714e9301-631d-44ba-8f91-a1d65209745b/disk.config.rescue because it was imported into RBD.#033[00m
Dec  6 02:29:30 np0005548731 kernel: tap012ae37e-97: entered promiscuous mode
Dec  6 02:29:30 np0005548731 NetworkManager[49182]: <info>  [1765006170.3811] manager: (tap012ae37e-97): new Tun device (/org/freedesktop/NetworkManager/Devices/216)
Dec  6 02:29:30 np0005548731 nova_compute[232433]: 2025-12-06 07:29:30.428 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:29:30 np0005548731 ovn_controller[133927]: 2025-12-06T07:29:30Z|00450|binding|INFO|Claiming lport 012ae37e-970e-4506-8216-7c5339055cf7 for this chassis.
Dec  6 02:29:30 np0005548731 ovn_controller[133927]: 2025-12-06T07:29:30Z|00451|binding|INFO|012ae37e-970e-4506-8216-7c5339055cf7: Claiming fa:16:3e:01:71:4e 10.100.0.10
Dec  6 02:29:30 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:29:30.443 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:01:71:4e 10.100.0.10'], port_security=['fa:16:3e:01:71:4e 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '714e9301-631d-44ba-8f91-a1d65209745b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-248c2772-b8d1-4900-9e05-67104b197a82', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '036c977b26cc46cabc5f599cb6c00e9a', 'neutron:revision_number': '5', 'neutron:security_group_ids': 'e899a491-7b15-4e66-adca-8deec8c4c859', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4eb80078-9659-4d3e-aaa8-ad516bb7d903, chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], logical_port=012ae37e-970e-4506-8216-7c5339055cf7) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 02:29:30 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:29:30.444 143965 INFO neutron.agent.ovn.metadata.agent [-] Port 012ae37e-970e-4506-8216-7c5339055cf7 in datapath 248c2772-b8d1-4900-9e05-67104b197a82 bound to our chassis#033[00m
Dec  6 02:29:30 np0005548731 ovn_controller[133927]: 2025-12-06T07:29:30Z|00452|binding|INFO|Setting lport 012ae37e-970e-4506-8216-7c5339055cf7 ovn-installed in OVS
Dec  6 02:29:30 np0005548731 ovn_controller[133927]: 2025-12-06T07:29:30Z|00453|binding|INFO|Setting lport 012ae37e-970e-4506-8216-7c5339055cf7 up in Southbound
Dec  6 02:29:30 np0005548731 nova_compute[232433]: 2025-12-06 07:29:30.445 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:29:30 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:29:30.445 143965 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 248c2772-b8d1-4900-9e05-67104b197a82 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m
Dec  6 02:29:30 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:29:30.446 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[5aa2d270-b7a7-413b-b569-adac41f33270]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:29:30 np0005548731 nova_compute[232433]: 2025-12-06 07:29:30.447 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:29:30 np0005548731 systemd-udevd[278743]: Network interface NamePolicy= disabled on kernel command line.
Dec  6 02:29:30 np0005548731 systemd-machined[195355]: New machine qemu-50-instance-0000006f.
Dec  6 02:29:30 np0005548731 NetworkManager[49182]: <info>  [1765006170.4663] device (tap012ae37e-97): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec  6 02:29:30 np0005548731 NetworkManager[49182]: <info>  [1765006170.4676] device (tap012ae37e-97): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec  6 02:29:30 np0005548731 systemd[1]: Started Virtual Machine qemu-50-instance-0000006f.
Dec  6 02:29:30 np0005548731 nova_compute[232433]: 2025-12-06 07:29:30.837 232437 DEBUG nova.compute.manager [req-dc9be239-3c1c-4ba0-8bfb-03c7199855f0 req-074debc6-73f8-4699-adf6-a87ca6eaca41 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 714e9301-631d-44ba-8f91-a1d65209745b] Received event network-vif-plugged-012ae37e-970e-4506-8216-7c5339055cf7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:29:30 np0005548731 nova_compute[232433]: 2025-12-06 07:29:30.837 232437 DEBUG oslo_concurrency.lockutils [req-dc9be239-3c1c-4ba0-8bfb-03c7199855f0 req-074debc6-73f8-4699-adf6-a87ca6eaca41 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "714e9301-631d-44ba-8f91-a1d65209745b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:29:30 np0005548731 nova_compute[232433]: 2025-12-06 07:29:30.837 232437 DEBUG oslo_concurrency.lockutils [req-dc9be239-3c1c-4ba0-8bfb-03c7199855f0 req-074debc6-73f8-4699-adf6-a87ca6eaca41 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "714e9301-631d-44ba-8f91-a1d65209745b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:29:30 np0005548731 nova_compute[232433]: 2025-12-06 07:29:30.838 232437 DEBUG oslo_concurrency.lockutils [req-dc9be239-3c1c-4ba0-8bfb-03c7199855f0 req-074debc6-73f8-4699-adf6-a87ca6eaca41 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "714e9301-631d-44ba-8f91-a1d65209745b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:29:30 np0005548731 nova_compute[232433]: 2025-12-06 07:29:30.838 232437 DEBUG nova.compute.manager [req-dc9be239-3c1c-4ba0-8bfb-03c7199855f0 req-074debc6-73f8-4699-adf6-a87ca6eaca41 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 714e9301-631d-44ba-8f91-a1d65209745b] No waiting events found dispatching network-vif-plugged-012ae37e-970e-4506-8216-7c5339055cf7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 02:29:30 np0005548731 nova_compute[232433]: 2025-12-06 07:29:30.838 232437 WARNING nova.compute.manager [req-dc9be239-3c1c-4ba0-8bfb-03c7199855f0 req-074debc6-73f8-4699-adf6-a87ca6eaca41 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 714e9301-631d-44ba-8f91-a1d65209745b] Received unexpected event network-vif-plugged-012ae37e-970e-4506-8216-7c5339055cf7 for instance with vm_state active and task_state rescuing.#033[00m
Dec  6 02:29:31 np0005548731 nova_compute[232433]: 2025-12-06 07:29:31.272 232437 DEBUG nova.virt.driver [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Emitting event <LifecycleEvent: 1765006171.2720473, 714e9301-631d-44ba-8f91-a1d65209745b => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 02:29:31 np0005548731 nova_compute[232433]: 2025-12-06 07:29:31.273 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 714e9301-631d-44ba-8f91-a1d65209745b] VM Resumed (Lifecycle Event)#033[00m
Dec  6 02:29:31 np0005548731 nova_compute[232433]: 2025-12-06 07:29:31.279 232437 DEBUG nova.compute.manager [None req-81f128a9-62a2-4499-bcfe-d4b14b063a07 f6f0d78959484986bab10b40c84e9406 036c977b26cc46cabc5f599cb6c00e9a - - default default] [instance: 714e9301-631d-44ba-8f91-a1d65209745b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:29:31 np0005548731 nova_compute[232433]: 2025-12-06 07:29:31.294 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 714e9301-631d-44ba-8f91-a1d65209745b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:29:31 np0005548731 nova_compute[232433]: 2025-12-06 07:29:31.298 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 714e9301-631d-44ba-8f91-a1d65209745b] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rescuing, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  6 02:29:31 np0005548731 nova_compute[232433]: 2025-12-06 07:29:31.327 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 714e9301-631d-44ba-8f91-a1d65209745b] During sync_power_state the instance has a pending task (rescuing). Skip.#033[00m
Dec  6 02:29:31 np0005548731 nova_compute[232433]: 2025-12-06 07:29:31.328 232437 DEBUG nova.virt.driver [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Emitting event <LifecycleEvent: 1765006171.2733288, 714e9301-631d-44ba-8f91-a1d65209745b => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 02:29:31 np0005548731 nova_compute[232433]: 2025-12-06 07:29:31.328 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 714e9301-631d-44ba-8f91-a1d65209745b] VM Started (Lifecycle Event)#033[00m
Dec  6 02:29:31 np0005548731 nova_compute[232433]: 2025-12-06 07:29:31.357 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 714e9301-631d-44ba-8f91-a1d65209745b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:29:31 np0005548731 nova_compute[232433]: 2025-12-06 07:29:31.362 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 714e9301-631d-44ba-8f91-a1d65209745b] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: rescuing, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  6 02:29:31 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:29:31 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:29:31 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:29:31.364 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:29:31 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e272 e272: 3 total, 3 up, 3 in
Dec  6 02:29:32 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:29:32 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:29:32 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:29:32.135 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:29:32 np0005548731 nova_compute[232433]: 2025-12-06 07:29:32.570 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:29:32 np0005548731 nova_compute[232433]: 2025-12-06 07:29:32.970 232437 DEBUG nova.compute.manager [req-c67b77bc-81ae-42dd-88ab-a838a4248d50 req-bae5441c-97b9-46a6-b078-c9e30d4bfdef 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 714e9301-631d-44ba-8f91-a1d65209745b] Received event network-vif-plugged-012ae37e-970e-4506-8216-7c5339055cf7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:29:32 np0005548731 nova_compute[232433]: 2025-12-06 07:29:32.971 232437 DEBUG oslo_concurrency.lockutils [req-c67b77bc-81ae-42dd-88ab-a838a4248d50 req-bae5441c-97b9-46a6-b078-c9e30d4bfdef 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "714e9301-631d-44ba-8f91-a1d65209745b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:29:32 np0005548731 nova_compute[232433]: 2025-12-06 07:29:32.971 232437 DEBUG oslo_concurrency.lockutils [req-c67b77bc-81ae-42dd-88ab-a838a4248d50 req-bae5441c-97b9-46a6-b078-c9e30d4bfdef 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "714e9301-631d-44ba-8f91-a1d65209745b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:29:32 np0005548731 nova_compute[232433]: 2025-12-06 07:29:32.971 232437 DEBUG oslo_concurrency.lockutils [req-c67b77bc-81ae-42dd-88ab-a838a4248d50 req-bae5441c-97b9-46a6-b078-c9e30d4bfdef 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "714e9301-631d-44ba-8f91-a1d65209745b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:29:32 np0005548731 nova_compute[232433]: 2025-12-06 07:29:32.972 232437 DEBUG nova.compute.manager [req-c67b77bc-81ae-42dd-88ab-a838a4248d50 req-bae5441c-97b9-46a6-b078-c9e30d4bfdef 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 714e9301-631d-44ba-8f91-a1d65209745b] No waiting events found dispatching network-vif-plugged-012ae37e-970e-4506-8216-7c5339055cf7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 02:29:32 np0005548731 nova_compute[232433]: 2025-12-06 07:29:32.972 232437 WARNING nova.compute.manager [req-c67b77bc-81ae-42dd-88ab-a838a4248d50 req-bae5441c-97b9-46a6-b078-c9e30d4bfdef 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 714e9301-631d-44ba-8f91-a1d65209745b] Received unexpected event network-vif-plugged-012ae37e-970e-4506-8216-7c5339055cf7 for instance with vm_state rescued and task_state None.#033[00m
Dec  6 02:29:32 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e272 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:29:33 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 02:29:33 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:29:33 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:29:33 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:29:33.367 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:29:33 np0005548731 nova_compute[232433]: 2025-12-06 07:29:33.759 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:29:34 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:29:34 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:29:34 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:29:34.139 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:29:34 np0005548731 nova_compute[232433]: 2025-12-06 07:29:34.977 232437 DEBUG nova.compute.manager [req-82775bf3-4b9a-482c-8ff5-2f20a7e123b2 req-ef7a72ac-dbbf-4398-a7d6-3d3477020960 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 714e9301-631d-44ba-8f91-a1d65209745b] Received event network-changed-012ae37e-970e-4506-8216-7c5339055cf7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:29:34 np0005548731 nova_compute[232433]: 2025-12-06 07:29:34.977 232437 DEBUG nova.compute.manager [req-82775bf3-4b9a-482c-8ff5-2f20a7e123b2 req-ef7a72ac-dbbf-4398-a7d6-3d3477020960 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 714e9301-631d-44ba-8f91-a1d65209745b] Refreshing instance network info cache due to event network-changed-012ae37e-970e-4506-8216-7c5339055cf7. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  6 02:29:34 np0005548731 nova_compute[232433]: 2025-12-06 07:29:34.977 232437 DEBUG oslo_concurrency.lockutils [req-82775bf3-4b9a-482c-8ff5-2f20a7e123b2 req-ef7a72ac-dbbf-4398-a7d6-3d3477020960 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-714e9301-631d-44ba-8f91-a1d65209745b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 02:29:34 np0005548731 nova_compute[232433]: 2025-12-06 07:29:34.978 232437 DEBUG oslo_concurrency.lockutils [req-82775bf3-4b9a-482c-8ff5-2f20a7e123b2 req-ef7a72ac-dbbf-4398-a7d6-3d3477020960 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-714e9301-631d-44ba-8f91-a1d65209745b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 02:29:34 np0005548731 nova_compute[232433]: 2025-12-06 07:29:34.978 232437 DEBUG nova.network.neutron [req-82775bf3-4b9a-482c-8ff5-2f20a7e123b2 req-ef7a72ac-dbbf-4398-a7d6-3d3477020960 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 714e9301-631d-44ba-8f91-a1d65209745b] Refreshing network info cache for port 012ae37e-970e-4506-8216-7c5339055cf7 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  6 02:29:35 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 02:29:35 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 02:29:35 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:29:35 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:29:35 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:29:35.369 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:29:36 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:29:36 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:29:36 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:29:36.142 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:29:36 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 02:29:36 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 02:29:36 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 02:29:36 np0005548731 nova_compute[232433]: 2025-12-06 07:29:36.873 232437 DEBUG oslo_concurrency.lockutils [None req-3cb62e1f-6002-4135-975e-ee3a0ff14cf7 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Acquiring lock "018b5c4d-2e0a-428b-ac8b-a74236e0d103" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:29:36 np0005548731 nova_compute[232433]: 2025-12-06 07:29:36.873 232437 DEBUG oslo_concurrency.lockutils [None req-3cb62e1f-6002-4135-975e-ee3a0ff14cf7 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Lock "018b5c4d-2e0a-428b-ac8b-a74236e0d103" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:29:36 np0005548731 nova_compute[232433]: 2025-12-06 07:29:36.874 232437 DEBUG oslo_concurrency.lockutils [None req-3cb62e1f-6002-4135-975e-ee3a0ff14cf7 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Acquiring lock "018b5c4d-2e0a-428b-ac8b-a74236e0d103-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:29:36 np0005548731 nova_compute[232433]: 2025-12-06 07:29:36.874 232437 DEBUG oslo_concurrency.lockutils [None req-3cb62e1f-6002-4135-975e-ee3a0ff14cf7 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Lock "018b5c4d-2e0a-428b-ac8b-a74236e0d103-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:29:36 np0005548731 nova_compute[232433]: 2025-12-06 07:29:36.874 232437 DEBUG oslo_concurrency.lockutils [None req-3cb62e1f-6002-4135-975e-ee3a0ff14cf7 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Lock "018b5c4d-2e0a-428b-ac8b-a74236e0d103-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:29:36 np0005548731 nova_compute[232433]: 2025-12-06 07:29:36.875 232437 INFO nova.compute.manager [None req-3cb62e1f-6002-4135-975e-ee3a0ff14cf7 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] [instance: 018b5c4d-2e0a-428b-ac8b-a74236e0d103] Terminating instance#033[00m
Dec  6 02:29:36 np0005548731 nova_compute[232433]: 2025-12-06 07:29:36.876 232437 DEBUG nova.compute.manager [None req-3cb62e1f-6002-4135-975e-ee3a0ff14cf7 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] [instance: 018b5c4d-2e0a-428b-ac8b-a74236e0d103] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec  6 02:29:37 np0005548731 nova_compute[232433]: 2025-12-06 07:29:37.055 232437 DEBUG nova.network.neutron [req-82775bf3-4b9a-482c-8ff5-2f20a7e123b2 req-ef7a72ac-dbbf-4398-a7d6-3d3477020960 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 714e9301-631d-44ba-8f91-a1d65209745b] Updated VIF entry in instance network info cache for port 012ae37e-970e-4506-8216-7c5339055cf7. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  6 02:29:37 np0005548731 nova_compute[232433]: 2025-12-06 07:29:37.056 232437 DEBUG nova.network.neutron [req-82775bf3-4b9a-482c-8ff5-2f20a7e123b2 req-ef7a72ac-dbbf-4398-a7d6-3d3477020960 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 714e9301-631d-44ba-8f91-a1d65209745b] Updating instance_info_cache with network_info: [{"id": "012ae37e-970e-4506-8216-7c5339055cf7", "address": "fa:16:3e:01:71:4e", "network": {"id": "248c2772-b8d1-4900-9e05-67104b197a82", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-505599830-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "036c977b26cc46cabc5f599cb6c00e9a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap012ae37e-97", "ovs_interfaceid": "012ae37e-970e-4506-8216-7c5339055cf7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 02:29:37 np0005548731 nova_compute[232433]: 2025-12-06 07:29:37.074 232437 DEBUG oslo_concurrency.lockutils [req-82775bf3-4b9a-482c-8ff5-2f20a7e123b2 req-ef7a72ac-dbbf-4398-a7d6-3d3477020960 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-714e9301-631d-44ba-8f91-a1d65209745b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 02:29:37 np0005548731 nova_compute[232433]: 2025-12-06 07:29:37.174 232437 DEBUG nova.compute.manager [req-22415089-8890-4473-9f8c-74ffc05d6a3c req-8e0e608a-2f7a-4fb1-900b-469fc85cbbf2 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 714e9301-631d-44ba-8f91-a1d65209745b] Received event network-changed-012ae37e-970e-4506-8216-7c5339055cf7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:29:37 np0005548731 nova_compute[232433]: 2025-12-06 07:29:37.175 232437 DEBUG nova.compute.manager [req-22415089-8890-4473-9f8c-74ffc05d6a3c req-8e0e608a-2f7a-4fb1-900b-469fc85cbbf2 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 714e9301-631d-44ba-8f91-a1d65209745b] Refreshing instance network info cache due to event network-changed-012ae37e-970e-4506-8216-7c5339055cf7. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  6 02:29:37 np0005548731 nova_compute[232433]: 2025-12-06 07:29:37.175 232437 DEBUG oslo_concurrency.lockutils [req-22415089-8890-4473-9f8c-74ffc05d6a3c req-8e0e608a-2f7a-4fb1-900b-469fc85cbbf2 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-714e9301-631d-44ba-8f91-a1d65209745b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 02:29:37 np0005548731 nova_compute[232433]: 2025-12-06 07:29:37.176 232437 DEBUG oslo_concurrency.lockutils [req-22415089-8890-4473-9f8c-74ffc05d6a3c req-8e0e608a-2f7a-4fb1-900b-469fc85cbbf2 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-714e9301-631d-44ba-8f91-a1d65209745b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 02:29:37 np0005548731 nova_compute[232433]: 2025-12-06 07:29:37.176 232437 DEBUG nova.network.neutron [req-22415089-8890-4473-9f8c-74ffc05d6a3c req-8e0e608a-2f7a-4fb1-900b-469fc85cbbf2 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 714e9301-631d-44ba-8f91-a1d65209745b] Refreshing network info cache for port 012ae37e-970e-4506-8216-7c5339055cf7 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  6 02:29:37 np0005548731 kernel: tapcf525aa5-a1 (unregistering): left promiscuous mode
Dec  6 02:29:37 np0005548731 NetworkManager[49182]: <info>  [1765006177.2831] device (tapcf525aa5-a1): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec  6 02:29:37 np0005548731 nova_compute[232433]: 2025-12-06 07:29:37.287 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:29:37 np0005548731 ovn_controller[133927]: 2025-12-06T07:29:37Z|00454|binding|INFO|Releasing lport cf525aa5-a11a-4218-98dd-328546551976 from this chassis (sb_readonly=0)
Dec  6 02:29:37 np0005548731 ovn_controller[133927]: 2025-12-06T07:29:37Z|00455|binding|INFO|Setting lport cf525aa5-a11a-4218-98dd-328546551976 down in Southbound
Dec  6 02:29:37 np0005548731 ovn_controller[133927]: 2025-12-06T07:29:37Z|00456|binding|INFO|Removing iface tapcf525aa5-a1 ovn-installed in OVS
Dec  6 02:29:37 np0005548731 nova_compute[232433]: 2025-12-06 07:29:37.290 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:29:37 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:29:37.302 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:01:c9:f4 10.100.0.11'], port_security=['fa:16:3e:01:c9:f4 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '018b5c4d-2e0a-428b-ac8b-a74236e0d103', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7c014e4e-a182-4f60-8285-20525bc99e5a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '88f5b34244614321a9b6e902eaba0ece', 'neutron:revision_number': '7', 'neutron:security_group_ids': '562c0019-973b-497e-ab29-636b40b9ed6d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7228f8e4-751e-45fe-ae64-cd2ffef9b9bb, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], logical_port=cf525aa5-a11a-4218-98dd-328546551976) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 02:29:37 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:29:37.304 143965 INFO neutron.agent.ovn.metadata.agent [-] Port cf525aa5-a11a-4218-98dd-328546551976 in datapath 7c014e4e-a182-4f60-8285-20525bc99e5a unbound from our chassis#033[00m
Dec  6 02:29:37 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:29:37.306 143965 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 7c014e4e-a182-4f60-8285-20525bc99e5a, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec  6 02:29:37 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:29:37.308 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[df0ccf25-8e57-43d0-9fff-9cb404a6c11e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:29:37 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:29:37.308 143965 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-7c014e4e-a182-4f60-8285-20525bc99e5a namespace which is not needed anymore#033[00m
Dec  6 02:29:37 np0005548731 nova_compute[232433]: 2025-12-06 07:29:37.311 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:29:37 np0005548731 systemd[1]: machine-qemu\x2d49\x2dinstance\x2d0000006e.scope: Deactivated successfully.
Dec  6 02:29:37 np0005548731 systemd[1]: machine-qemu\x2d49\x2dinstance\x2d0000006e.scope: Consumed 15.156s CPU time.
Dec  6 02:29:37 np0005548731 systemd-machined[195355]: Machine qemu-49-instance-0000006e terminated.
Dec  6 02:29:37 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:29:37 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:29:37 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:29:37.371 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:29:37 np0005548731 neutron-haproxy-ovnmeta-7c014e4e-a182-4f60-8285-20525bc99e5a[278229]: [NOTICE]   (278234) : haproxy version is 2.8.14-c23fe91
Dec  6 02:29:37 np0005548731 neutron-haproxy-ovnmeta-7c014e4e-a182-4f60-8285-20525bc99e5a[278229]: [NOTICE]   (278234) : path to executable is /usr/sbin/haproxy
Dec  6 02:29:37 np0005548731 neutron-haproxy-ovnmeta-7c014e4e-a182-4f60-8285-20525bc99e5a[278229]: [WARNING]  (278234) : Exiting Master process...
Dec  6 02:29:37 np0005548731 neutron-haproxy-ovnmeta-7c014e4e-a182-4f60-8285-20525bc99e5a[278229]: [ALERT]    (278234) : Current worker (278236) exited with code 143 (Terminated)
Dec  6 02:29:37 np0005548731 neutron-haproxy-ovnmeta-7c014e4e-a182-4f60-8285-20525bc99e5a[278229]: [WARNING]  (278234) : All workers exited. Exiting... (0)
Dec  6 02:29:37 np0005548731 systemd[1]: libpod-8fa2e765f0cc6ce54876d6994a5c3de43650ffb555088b5390d3459bf434b3dc.scope: Deactivated successfully.
Dec  6 02:29:37 np0005548731 podman[279089]: 2025-12-06 07:29:37.449287263 +0000 UTC m=+0.051030078 container died 8fa2e765f0cc6ce54876d6994a5c3de43650ffb555088b5390d3459bf434b3dc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7c014e4e-a182-4f60-8285-20525bc99e5a, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Dec  6 02:29:37 np0005548731 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-8fa2e765f0cc6ce54876d6994a5c3de43650ffb555088b5390d3459bf434b3dc-userdata-shm.mount: Deactivated successfully.
Dec  6 02:29:37 np0005548731 systemd[1]: var-lib-containers-storage-overlay-01698db359531da1ae6f379f6616ad9de1b53112362ebcc28bb15ae730dcd193-merged.mount: Deactivated successfully.
Dec  6 02:29:37 np0005548731 nova_compute[232433]: 2025-12-06 07:29:37.521 232437 INFO nova.virt.libvirt.driver [-] [instance: 018b5c4d-2e0a-428b-ac8b-a74236e0d103] Instance destroyed successfully.#033[00m
Dec  6 02:29:37 np0005548731 nova_compute[232433]: 2025-12-06 07:29:37.522 232437 DEBUG nova.objects.instance [None req-3cb62e1f-6002-4135-975e-ee3a0ff14cf7 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Lazy-loading 'resources' on Instance uuid 018b5c4d-2e0a-428b-ac8b-a74236e0d103 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:29:37 np0005548731 podman[279089]: 2025-12-06 07:29:37.529663435 +0000 UTC m=+0.131406240 container cleanup 8fa2e765f0cc6ce54876d6994a5c3de43650ffb555088b5390d3459bf434b3dc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7c014e4e-a182-4f60-8285-20525bc99e5a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Dec  6 02:29:37 np0005548731 systemd[1]: libpod-conmon-8fa2e765f0cc6ce54876d6994a5c3de43650ffb555088b5390d3459bf434b3dc.scope: Deactivated successfully.
Dec  6 02:29:37 np0005548731 nova_compute[232433]: 2025-12-06 07:29:37.542 232437 DEBUG nova.virt.libvirt.vif [None req-3cb62e1f-6002-4135-975e-ee3a0ff14cf7 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T07:27:03Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-1401757895',display_name='tempest-ServerDiskConfigTestJSON-server-1401757895',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-1401757895',id=110,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-06T07:29:04Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='88f5b34244614321a9b6e902eaba0ece',ramdisk_id='',reservation_id='r-iyd6zt70',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerDiskConfigTestJSON-749654875',owner_user_name='tempest-ServerDiskConfigTestJSON-749654875-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T07:29:32Z,user_data=None,user_id='d67c136e82ad4001b000848d75eef50d',uuid=018b5c4d-2e0a-428b-ac8b-a74236e0d103,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "cf525aa5-a11a-4218-98dd-328546551976", "address": "fa:16:3e:01:c9:f4", "network": {"id": "7c014e4e-a182-4f60-8285-20525bc99e5a", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-602234112-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "88f5b34244614321a9b6e902eaba0ece", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcf525aa5-a1", "ovs_interfaceid": "cf525aa5-a11a-4218-98dd-328546551976", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec  6 02:29:37 np0005548731 nova_compute[232433]: 2025-12-06 07:29:37.544 232437 DEBUG nova.network.os_vif_util [None req-3cb62e1f-6002-4135-975e-ee3a0ff14cf7 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Converting VIF {"id": "cf525aa5-a11a-4218-98dd-328546551976", "address": "fa:16:3e:01:c9:f4", "network": {"id": "7c014e4e-a182-4f60-8285-20525bc99e5a", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-602234112-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "88f5b34244614321a9b6e902eaba0ece", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcf525aa5-a1", "ovs_interfaceid": "cf525aa5-a11a-4218-98dd-328546551976", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 02:29:37 np0005548731 nova_compute[232433]: 2025-12-06 07:29:37.545 232437 DEBUG nova.network.os_vif_util [None req-3cb62e1f-6002-4135-975e-ee3a0ff14cf7 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:01:c9:f4,bridge_name='br-int',has_traffic_filtering=True,id=cf525aa5-a11a-4218-98dd-328546551976,network=Network(7c014e4e-a182-4f60-8285-20525bc99e5a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcf525aa5-a1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 02:29:37 np0005548731 nova_compute[232433]: 2025-12-06 07:29:37.545 232437 DEBUG os_vif [None req-3cb62e1f-6002-4135-975e-ee3a0ff14cf7 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:01:c9:f4,bridge_name='br-int',has_traffic_filtering=True,id=cf525aa5-a11a-4218-98dd-328546551976,network=Network(7c014e4e-a182-4f60-8285-20525bc99e5a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcf525aa5-a1') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec  6 02:29:37 np0005548731 nova_compute[232433]: 2025-12-06 07:29:37.547 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:29:37 np0005548731 nova_compute[232433]: 2025-12-06 07:29:37.548 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapcf525aa5-a1, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:29:37 np0005548731 nova_compute[232433]: 2025-12-06 07:29:37.549 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:29:37 np0005548731 nova_compute[232433]: 2025-12-06 07:29:37.550 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:29:37 np0005548731 nova_compute[232433]: 2025-12-06 07:29:37.554 232437 INFO os_vif [None req-3cb62e1f-6002-4135-975e-ee3a0ff14cf7 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:01:c9:f4,bridge_name='br-int',has_traffic_filtering=True,id=cf525aa5-a11a-4218-98dd-328546551976,network=Network(7c014e4e-a182-4f60-8285-20525bc99e5a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcf525aa5-a1')#033[00m
Dec  6 02:29:37 np0005548731 nova_compute[232433]: 2025-12-06 07:29:37.573 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:29:37 np0005548731 podman[279131]: 2025-12-06 07:29:37.762958112 +0000 UTC m=+0.208929632 container remove 8fa2e765f0cc6ce54876d6994a5c3de43650ffb555088b5390d3459bf434b3dc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7c014e4e-a182-4f60-8285-20525bc99e5a, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Dec  6 02:29:37 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:29:37.772 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[532b56a4-bc16-4361-83eb-d8cebafa84b0]: (4, ('Sat Dec  6 07:29:37 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-7c014e4e-a182-4f60-8285-20525bc99e5a (8fa2e765f0cc6ce54876d6994a5c3de43650ffb555088b5390d3459bf434b3dc)\n8fa2e765f0cc6ce54876d6994a5c3de43650ffb555088b5390d3459bf434b3dc\nSat Dec  6 07:29:37 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-7c014e4e-a182-4f60-8285-20525bc99e5a (8fa2e765f0cc6ce54876d6994a5c3de43650ffb555088b5390d3459bf434b3dc)\n8fa2e765f0cc6ce54876d6994a5c3de43650ffb555088b5390d3459bf434b3dc\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:29:37 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:29:37.774 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[c2ec352e-45c7-48c1-b335-cfa305ae8e70]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:29:37 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:29:37.775 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7c014e4e-a0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:29:37 np0005548731 nova_compute[232433]: 2025-12-06 07:29:37.777 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:29:37 np0005548731 kernel: tap7c014e4e-a0: left promiscuous mode
Dec  6 02:29:37 np0005548731 nova_compute[232433]: 2025-12-06 07:29:37.790 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:29:37 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:29:37.794 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[dc3ea7c3-e328-4aab-842c-55b3b1eea1ca]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:29:37 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:29:37.810 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[e2c6ff4f-44d0-4753-8e14-36ac5af64bfe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:29:37 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:29:37.811 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[75e2aaf4-2575-42ce-837d-50b9e9ab16bb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:29:37 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:29:37.826 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[0b6e92e8-db7f-49c5-b8c4-435e1008aa52]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 639814, 'reachable_time': 28056, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 279164, 'error': None, 'target': 'ovnmeta-7c014e4e-a182-4f60-8285-20525bc99e5a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:29:37 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:29:37.829 144079 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-7c014e4e-a182-4f60-8285-20525bc99e5a deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Dec  6 02:29:37 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:29:37.830 144079 DEBUG oslo.privsep.daemon [-] privsep: reply[c9bc96c8-201a-4716-82be-be6be13622f2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:29:37 np0005548731 systemd[1]: run-netns-ovnmeta\x2d7c014e4e\x2da182\x2d4f60\x2d8285\x2d20525bc99e5a.mount: Deactivated successfully.
Dec  6 02:29:37 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e272 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:29:38 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:29:38 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:29:38 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:29:38.145 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:29:38 np0005548731 nova_compute[232433]: 2025-12-06 07:29:38.166 232437 DEBUG nova.compute.manager [req-288bd55e-d4e5-4272-b2c1-a943d988d326 req-f3a4fe7b-55d8-41a0-b167-6566a61c6344 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 018b5c4d-2e0a-428b-ac8b-a74236e0d103] Received event network-vif-unplugged-cf525aa5-a11a-4218-98dd-328546551976 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:29:38 np0005548731 nova_compute[232433]: 2025-12-06 07:29:38.167 232437 DEBUG oslo_concurrency.lockutils [req-288bd55e-d4e5-4272-b2c1-a943d988d326 req-f3a4fe7b-55d8-41a0-b167-6566a61c6344 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "018b5c4d-2e0a-428b-ac8b-a74236e0d103-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:29:38 np0005548731 nova_compute[232433]: 2025-12-06 07:29:38.167 232437 DEBUG oslo_concurrency.lockutils [req-288bd55e-d4e5-4272-b2c1-a943d988d326 req-f3a4fe7b-55d8-41a0-b167-6566a61c6344 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "018b5c4d-2e0a-428b-ac8b-a74236e0d103-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:29:38 np0005548731 nova_compute[232433]: 2025-12-06 07:29:38.167 232437 DEBUG oslo_concurrency.lockutils [req-288bd55e-d4e5-4272-b2c1-a943d988d326 req-f3a4fe7b-55d8-41a0-b167-6566a61c6344 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "018b5c4d-2e0a-428b-ac8b-a74236e0d103-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:29:38 np0005548731 nova_compute[232433]: 2025-12-06 07:29:38.168 232437 DEBUG nova.compute.manager [req-288bd55e-d4e5-4272-b2c1-a943d988d326 req-f3a4fe7b-55d8-41a0-b167-6566a61c6344 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 018b5c4d-2e0a-428b-ac8b-a74236e0d103] No waiting events found dispatching network-vif-unplugged-cf525aa5-a11a-4218-98dd-328546551976 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 02:29:38 np0005548731 nova_compute[232433]: 2025-12-06 07:29:38.168 232437 DEBUG nova.compute.manager [req-288bd55e-d4e5-4272-b2c1-a943d988d326 req-f3a4fe7b-55d8-41a0-b167-6566a61c6344 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 018b5c4d-2e0a-428b-ac8b-a74236e0d103] Received event network-vif-unplugged-cf525aa5-a11a-4218-98dd-328546551976 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Dec  6 02:29:38 np0005548731 nova_compute[232433]: 2025-12-06 07:29:38.276 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:29:38 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 02:29:38 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 02:29:39 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:29:39 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:29:39 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:29:39.373 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:29:40 np0005548731 nova_compute[232433]: 2025-12-06 07:29:40.087 232437 DEBUG nova.network.neutron [req-22415089-8890-4473-9f8c-74ffc05d6a3c req-8e0e608a-2f7a-4fb1-900b-469fc85cbbf2 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 714e9301-631d-44ba-8f91-a1d65209745b] Updated VIF entry in instance network info cache for port 012ae37e-970e-4506-8216-7c5339055cf7. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec  6 02:29:40 np0005548731 nova_compute[232433]: 2025-12-06 07:29:40.089 232437 DEBUG nova.network.neutron [req-22415089-8890-4473-9f8c-74ffc05d6a3c req-8e0e608a-2f7a-4fb1-900b-469fc85cbbf2 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 714e9301-631d-44ba-8f91-a1d65209745b] Updating instance_info_cache with network_info: [{"id": "012ae37e-970e-4506-8216-7c5339055cf7", "address": "fa:16:3e:01:71:4e", "network": {"id": "248c2772-b8d1-4900-9e05-67104b197a82", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-505599830-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "036c977b26cc46cabc5f599cb6c00e9a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap012ae37e-97", "ovs_interfaceid": "012ae37e-970e-4506-8216-7c5339055cf7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec  6 02:29:40 np0005548731 nova_compute[232433]: 2025-12-06 07:29:40.111 232437 DEBUG oslo_concurrency.lockutils [req-22415089-8890-4473-9f8c-74ffc05d6a3c req-8e0e608a-2f7a-4fb1-900b-469fc85cbbf2 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-714e9301-631d-44ba-8f91-a1d65209745b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec  6 02:29:40 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:29:40 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:29:40 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:29:40.147 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:29:40 np0005548731 nova_compute[232433]: 2025-12-06 07:29:40.293 232437 DEBUG nova.compute.manager [req-93b44c07-dd76-41f0-b671-3ce82ab6f454 req-01c4cbe7-4b95-45e2-8710-f148370e2360 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 018b5c4d-2e0a-428b-ac8b-a74236e0d103] Received event network-vif-plugged-cf525aa5-a11a-4218-98dd-328546551976 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec  6 02:29:40 np0005548731 nova_compute[232433]: 2025-12-06 07:29:40.294 232437 DEBUG oslo_concurrency.lockutils [req-93b44c07-dd76-41f0-b671-3ce82ab6f454 req-01c4cbe7-4b95-45e2-8710-f148370e2360 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "018b5c4d-2e0a-428b-ac8b-a74236e0d103-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  6 02:29:40 np0005548731 nova_compute[232433]: 2025-12-06 07:29:40.294 232437 DEBUG oslo_concurrency.lockutils [req-93b44c07-dd76-41f0-b671-3ce82ab6f454 req-01c4cbe7-4b95-45e2-8710-f148370e2360 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "018b5c4d-2e0a-428b-ac8b-a74236e0d103-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  6 02:29:40 np0005548731 nova_compute[232433]: 2025-12-06 07:29:40.295 232437 DEBUG oslo_concurrency.lockutils [req-93b44c07-dd76-41f0-b671-3ce82ab6f454 req-01c4cbe7-4b95-45e2-8710-f148370e2360 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "018b5c4d-2e0a-428b-ac8b-a74236e0d103-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  6 02:29:40 np0005548731 nova_compute[232433]: 2025-12-06 07:29:40.295 232437 DEBUG nova.compute.manager [req-93b44c07-dd76-41f0-b671-3ce82ab6f454 req-01c4cbe7-4b95-45e2-8710-f148370e2360 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 018b5c4d-2e0a-428b-ac8b-a74236e0d103] No waiting events found dispatching network-vif-plugged-cf525aa5-a11a-4218-98dd-328546551976 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec  6 02:29:40 np0005548731 nova_compute[232433]: 2025-12-06 07:29:40.295 232437 WARNING nova.compute.manager [req-93b44c07-dd76-41f0-b671-3ce82ab6f454 req-01c4cbe7-4b95-45e2-8710-f148370e2360 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 018b5c4d-2e0a-428b-ac8b-a74236e0d103] Received unexpected event network-vif-plugged-cf525aa5-a11a-4218-98dd-328546551976 for instance with vm_state active and task_state deleting.
Dec  6 02:29:40 np0005548731 nova_compute[232433]: 2025-12-06 07:29:40.355 232437 INFO nova.virt.libvirt.driver [None req-3cb62e1f-6002-4135-975e-ee3a0ff14cf7 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] [instance: 018b5c4d-2e0a-428b-ac8b-a74236e0d103] Deleting instance files /var/lib/nova/instances/018b5c4d-2e0a-428b-ac8b-a74236e0d103_del
Dec  6 02:29:40 np0005548731 nova_compute[232433]: 2025-12-06 07:29:40.357 232437 INFO nova.virt.libvirt.driver [None req-3cb62e1f-6002-4135-975e-ee3a0ff14cf7 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] [instance: 018b5c4d-2e0a-428b-ac8b-a74236e0d103] Deletion of /var/lib/nova/instances/018b5c4d-2e0a-428b-ac8b-a74236e0d103_del complete
Dec  6 02:29:40 np0005548731 nova_compute[232433]: 2025-12-06 07:29:40.443 232437 INFO nova.compute.manager [None req-3cb62e1f-6002-4135-975e-ee3a0ff14cf7 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] [instance: 018b5c4d-2e0a-428b-ac8b-a74236e0d103] Took 3.57 seconds to destroy the instance on the hypervisor.
Dec  6 02:29:40 np0005548731 nova_compute[232433]: 2025-12-06 07:29:40.444 232437 DEBUG oslo.service.loopingcall [None req-3cb62e1f-6002-4135-975e-ee3a0ff14cf7 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec  6 02:29:40 np0005548731 nova_compute[232433]: 2025-12-06 07:29:40.444 232437 DEBUG nova.compute.manager [-] [instance: 018b5c4d-2e0a-428b-ac8b-a74236e0d103] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec  6 02:29:40 np0005548731 nova_compute[232433]: 2025-12-06 07:29:40.444 232437 DEBUG nova.network.neutron [-] [instance: 018b5c4d-2e0a-428b-ac8b-a74236e0d103] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec  6 02:29:41 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e273 e273: 3 total, 3 up, 3 in
Dec  6 02:29:41 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:29:41 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:29:41 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:29:41.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:29:41 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 02:29:41 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 02:29:41 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  6 02:29:41 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 02:29:41 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  6 02:29:41 np0005548731 nova_compute[232433]: 2025-12-06 07:29:41.878 232437 DEBUG nova.network.neutron [-] [instance: 018b5c4d-2e0a-428b-ac8b-a74236e0d103] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec  6 02:29:41 np0005548731 nova_compute[232433]: 2025-12-06 07:29:41.903 232437 INFO nova.compute.manager [-] [instance: 018b5c4d-2e0a-428b-ac8b-a74236e0d103] Took 1.46 seconds to deallocate network for instance.
Dec  6 02:29:41 np0005548731 nova_compute[232433]: 2025-12-06 07:29:41.928 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 02:29:41 np0005548731 NetworkManager[49182]: <info>  [1765006181.9294] manager: (patch-provnet-9e78c1a1-68f4-477a-abaa-13a98bde06e5-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/217)
Dec  6 02:29:41 np0005548731 NetworkManager[49182]: <info>  [1765006181.9308] manager: (patch-br-int-to-provnet-9e78c1a1-68f4-477a-abaa-13a98bde06e5): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/218)
Dec  6 02:29:41 np0005548731 nova_compute[232433]: 2025-12-06 07:29:41.967 232437 DEBUG oslo_concurrency.lockutils [None req-3cb62e1f-6002-4135-975e-ee3a0ff14cf7 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  6 02:29:41 np0005548731 nova_compute[232433]: 2025-12-06 07:29:41.968 232437 DEBUG oslo_concurrency.lockutils [None req-3cb62e1f-6002-4135-975e-ee3a0ff14cf7 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  6 02:29:42 np0005548731 nova_compute[232433]: 2025-12-06 07:29:42.082 232437 DEBUG oslo_concurrency.processutils [None req-3cb62e1f-6002-4135-975e-ee3a0ff14cf7 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  6 02:29:42 np0005548731 nova_compute[232433]: 2025-12-06 07:29:42.114 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 02:29:42 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:29:42 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 02:29:42 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:29:42.150 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 02:29:42 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 02:29:42 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/54487037' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 02:29:42 np0005548731 nova_compute[232433]: 2025-12-06 07:29:42.520 232437 DEBUG oslo_concurrency.processutils [None req-3cb62e1f-6002-4135-975e-ee3a0ff14cf7 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.437s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  6 02:29:42 np0005548731 nova_compute[232433]: 2025-12-06 07:29:42.526 232437 DEBUG nova.compute.provider_tree [None req-3cb62e1f-6002-4135-975e-ee3a0ff14cf7 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Inventory has not changed in ProviderTree for provider: 6d00757a-082f-486d-ae84-869a2ba2e6e7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec  6 02:29:42 np0005548731 nova_compute[232433]: 2025-12-06 07:29:42.549 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 02:29:42 np0005548731 nova_compute[232433]: 2025-12-06 07:29:42.551 232437 DEBUG nova.scheduler.client.report [None req-3cb62e1f-6002-4135-975e-ee3a0ff14cf7 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Inventory has not changed for provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec  6 02:29:42 np0005548731 nova_compute[232433]: 2025-12-06 07:29:42.571 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 02:29:42 np0005548731 nova_compute[232433]: 2025-12-06 07:29:42.603 232437 DEBUG oslo_concurrency.lockutils [None req-3cb62e1f-6002-4135-975e-ee3a0ff14cf7 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.635s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  6 02:29:42 np0005548731 nova_compute[232433]: 2025-12-06 07:29:42.641 232437 INFO nova.scheduler.client.report [None req-3cb62e1f-6002-4135-975e-ee3a0ff14cf7 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Deleted allocations for instance 018b5c4d-2e0a-428b-ac8b-a74236e0d103
Dec  6 02:29:42 np0005548731 nova_compute[232433]: 2025-12-06 07:29:42.721 232437 DEBUG nova.compute.manager [req-93d30ec8-e876-49d5-8088-2df2abacd4f7 req-69a6f1bd-5e89-41fd-9c6d-ef7d982adf20 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 018b5c4d-2e0a-428b-ac8b-a74236e0d103] Received event network-vif-deleted-cf525aa5-a11a-4218-98dd-328546551976 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec  6 02:29:42 np0005548731 nova_compute[232433]: 2025-12-06 07:29:42.777 232437 DEBUG oslo_concurrency.lockutils [None req-3cb62e1f-6002-4135-975e-ee3a0ff14cf7 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Lock "018b5c4d-2e0a-428b-ac8b-a74236e0d103" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.903s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  6 02:29:43 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:29:43 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:29:43 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:29:43 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:29:43.377 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:29:44 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:29:44 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:29:44 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:29:44.154 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:29:44 np0005548731 nova_compute[232433]: 2025-12-06 07:29:44.890 232437 DEBUG nova.compute.manager [req-d1165771-389e-4ad7-abc8-318130b4005e req-35ff2d3a-4796-495a-90fc-cda913390f89 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 714e9301-631d-44ba-8f91-a1d65209745b] Received event network-changed-012ae37e-970e-4506-8216-7c5339055cf7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec  6 02:29:44 np0005548731 nova_compute[232433]: 2025-12-06 07:29:44.891 232437 DEBUG nova.compute.manager [req-d1165771-389e-4ad7-abc8-318130b4005e req-35ff2d3a-4796-495a-90fc-cda913390f89 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 714e9301-631d-44ba-8f91-a1d65209745b] Refreshing instance network info cache due to event network-changed-012ae37e-970e-4506-8216-7c5339055cf7. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec  6 02:29:44 np0005548731 nova_compute[232433]: 2025-12-06 07:29:44.891 232437 DEBUG oslo_concurrency.lockutils [req-d1165771-389e-4ad7-abc8-318130b4005e req-35ff2d3a-4796-495a-90fc-cda913390f89 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-714e9301-631d-44ba-8f91-a1d65209745b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec  6 02:29:44 np0005548731 nova_compute[232433]: 2025-12-06 07:29:44.892 232437 DEBUG oslo_concurrency.lockutils [req-d1165771-389e-4ad7-abc8-318130b4005e req-35ff2d3a-4796-495a-90fc-cda913390f89 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-714e9301-631d-44ba-8f91-a1d65209745b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec  6 02:29:44 np0005548731 nova_compute[232433]: 2025-12-06 07:29:44.892 232437 DEBUG nova.network.neutron [req-d1165771-389e-4ad7-abc8-318130b4005e req-35ff2d3a-4796-495a-90fc-cda913390f89 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 714e9301-631d-44ba-8f91-a1d65209745b] Refreshing network info cache for port 012ae37e-970e-4506-8216-7c5339055cf7 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec  6 02:29:44 np0005548731 podman[279195]: 2025-12-06 07:29:44.924979956 +0000 UTC m=+0.066343731 container health_status ae4fd08cdec31d6ae2a6b91ffff7deb6ca5a8375cb52ee4b685cc6f25796824e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, container_name=multipathd, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec  6 02:29:44 np0005548731 podman[279193]: 2025-12-06 07:29:44.941407307 +0000 UTC m=+0.095485963 container health_status 26c2ce9bdb83c6db5ccad7f5b2a7cb4e9354ab0235ad2f942ea42c8feda2142b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0)
Dec  6 02:29:44 np0005548731 podman[279194]: 2025-12-06 07:29:44.952642191 +0000 UTC m=+0.106133783 container health_status a118c3becf56cfbcae208065771ebf10a65162bb7b96389971293e534823fb78 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  6 02:29:45 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:29:45 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:29:45 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:29:45.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:29:45 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  6 02:29:45 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1839537884' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  6 02:29:46 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:29:46 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:29:46 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:29:46.156 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:29:47 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:29:47 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:29:47 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:29:47.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:29:47 np0005548731 nova_compute[232433]: 2025-12-06 07:29:47.385 232437 DEBUG nova.network.neutron [req-d1165771-389e-4ad7-abc8-318130b4005e req-35ff2d3a-4796-495a-90fc-cda913390f89 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 714e9301-631d-44ba-8f91-a1d65209745b] Updated VIF entry in instance network info cache for port 012ae37e-970e-4506-8216-7c5339055cf7. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec  6 02:29:47 np0005548731 nova_compute[232433]: 2025-12-06 07:29:47.386 232437 DEBUG nova.network.neutron [req-d1165771-389e-4ad7-abc8-318130b4005e req-35ff2d3a-4796-495a-90fc-cda913390f89 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 714e9301-631d-44ba-8f91-a1d65209745b] Updating instance_info_cache with network_info: [{"id": "012ae37e-970e-4506-8216-7c5339055cf7", "address": "fa:16:3e:01:71:4e", "network": {"id": "248c2772-b8d1-4900-9e05-67104b197a82", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-505599830-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.205", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "036c977b26cc46cabc5f599cb6c00e9a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap012ae37e-97", "ovs_interfaceid": "012ae37e-970e-4506-8216-7c5339055cf7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec  6 02:29:47 np0005548731 nova_compute[232433]: 2025-12-06 07:29:47.415 232437 DEBUG oslo_concurrency.lockutils [req-d1165771-389e-4ad7-abc8-318130b4005e req-35ff2d3a-4796-495a-90fc-cda913390f89 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-714e9301-631d-44ba-8f91-a1d65209745b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec  6 02:29:47 np0005548731 nova_compute[232433]: 2025-12-06 07:29:47.550 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 02:29:47 np0005548731 nova_compute[232433]: 2025-12-06 07:29:47.573 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 02:29:48 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:29:48 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:29:48 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:29:48.159 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:29:48 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:29:49 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:29:49 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:29:49 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:29:49.383 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:29:50 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:29:50 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:29:50 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:29:50.163 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:29:51 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:29:51 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 02:29:51 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:29:51.386 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 02:29:52 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:29:52 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:29:52 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:29:52.166 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:29:52 np0005548731 nova_compute[232433]: 2025-12-06 07:29:52.515 232437 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765006177.5145955, 018b5c4d-2e0a-428b-ac8b-a74236e0d103 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec  6 02:29:52 np0005548731 nova_compute[232433]: 2025-12-06 07:29:52.515 232437 INFO nova.compute.manager [-] [instance: 018b5c4d-2e0a-428b-ac8b-a74236e0d103] VM Stopped (Lifecycle Event)
Dec  6 02:29:52 np0005548731 nova_compute[232433]: 2025-12-06 07:29:52.520 232437 DEBUG nova.compute.manager [req-36b7657e-7c8c-4d45-815f-33cee22ec720 req-d849f725-5fe6-47f9-8af2-baac53f2d91e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 714e9301-631d-44ba-8f91-a1d65209745b] Received event network-changed-012ae37e-970e-4506-8216-7c5339055cf7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec  6 02:29:52 np0005548731 nova_compute[232433]: 2025-12-06 07:29:52.521 232437 DEBUG nova.compute.manager [req-36b7657e-7c8c-4d45-815f-33cee22ec720 req-d849f725-5fe6-47f9-8af2-baac53f2d91e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 714e9301-631d-44ba-8f91-a1d65209745b] Refreshing instance network info cache due to event network-changed-012ae37e-970e-4506-8216-7c5339055cf7. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec  6 02:29:52 np0005548731 nova_compute[232433]: 2025-12-06 07:29:52.521 232437 DEBUG oslo_concurrency.lockutils [req-36b7657e-7c8c-4d45-815f-33cee22ec720 req-d849f725-5fe6-47f9-8af2-baac53f2d91e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-714e9301-631d-44ba-8f91-a1d65209745b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec  6 02:29:52 np0005548731 nova_compute[232433]: 2025-12-06 07:29:52.521 232437 DEBUG oslo_concurrency.lockutils [req-36b7657e-7c8c-4d45-815f-33cee22ec720 req-d849f725-5fe6-47f9-8af2-baac53f2d91e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-714e9301-631d-44ba-8f91-a1d65209745b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec  6 02:29:52 np0005548731 nova_compute[232433]: 2025-12-06 07:29:52.521 232437 DEBUG nova.network.neutron [req-36b7657e-7c8c-4d45-815f-33cee22ec720 req-d849f725-5fe6-47f9-8af2-baac53f2d91e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 714e9301-631d-44ba-8f91-a1d65209745b] Refreshing network info cache for port 012ae37e-970e-4506-8216-7c5339055cf7 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec  6 02:29:52 np0005548731 nova_compute[232433]: 2025-12-06 07:29:52.545 232437 DEBUG nova.compute.manager [None req-f5d4a92b-dd9f-4599-bc3d-d9d51d4b5e29 - - - - - -] [instance: 018b5c4d-2e0a-428b-ac8b-a74236e0d103] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec  6 02:29:52 np0005548731 nova_compute[232433]: 2025-12-06 07:29:52.552 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 02:29:52 np0005548731 nova_compute[232433]: 2025-12-06 07:29:52.575 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 02:29:53 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:29:53 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:29:53 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:29:53.388 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:29:53 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:29:54 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:29:54 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:29:54 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:29:54.170 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:29:55 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 02:29:55 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 02:29:55 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:29:55 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:29:55 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:29:55.390 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:29:56 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:29:56 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:29:56 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:29:56.173 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:29:56 np0005548731 nova_compute[232433]: 2025-12-06 07:29:56.659 232437 DEBUG nova.network.neutron [req-36b7657e-7c8c-4d45-815f-33cee22ec720 req-d849f725-5fe6-47f9-8af2-baac53f2d91e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 714e9301-631d-44ba-8f91-a1d65209745b] Updated VIF entry in instance network info cache for port 012ae37e-970e-4506-8216-7c5339055cf7. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  6 02:29:56 np0005548731 nova_compute[232433]: 2025-12-06 07:29:56.659 232437 DEBUG nova.network.neutron [req-36b7657e-7c8c-4d45-815f-33cee22ec720 req-d849f725-5fe6-47f9-8af2-baac53f2d91e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 714e9301-631d-44ba-8f91-a1d65209745b] Updating instance_info_cache with network_info: [{"id": "012ae37e-970e-4506-8216-7c5339055cf7", "address": "fa:16:3e:01:71:4e", "network": {"id": "248c2772-b8d1-4900-9e05-67104b197a82", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-505599830-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "036c977b26cc46cabc5f599cb6c00e9a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap012ae37e-97", "ovs_interfaceid": "012ae37e-970e-4506-8216-7c5339055cf7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 02:29:56 np0005548731 nova_compute[232433]: 2025-12-06 07:29:56.927 232437 DEBUG oslo_concurrency.lockutils [req-36b7657e-7c8c-4d45-815f-33cee22ec720 req-d849f725-5fe6-47f9-8af2-baac53f2d91e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-714e9301-631d-44ba-8f91-a1d65209745b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 02:29:57 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:29:57 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:29:57 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:29:57.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:29:57 np0005548731 nova_compute[232433]: 2025-12-06 07:29:57.554 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:29:57 np0005548731 nova_compute[232433]: 2025-12-06 07:29:57.576 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:29:58 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:29:58 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:29:58 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:29:58.176 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:29:58 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:29:59 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:29:59 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:29:59 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:29:59.393 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:29:59 np0005548731 nova_compute[232433]: 2025-12-06 07:29:59.897 232437 DEBUG oslo_concurrency.lockutils [None req-bc2a11f0-393d-407d-8be8-a59d52fa8d79 f6f0d78959484986bab10b40c84e9406 036c977b26cc46cabc5f599cb6c00e9a - - default default] Acquiring lock "714e9301-631d-44ba-8f91-a1d65209745b" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:29:59 np0005548731 nova_compute[232433]: 2025-12-06 07:29:59.898 232437 DEBUG oslo_concurrency.lockutils [None req-bc2a11f0-393d-407d-8be8-a59d52fa8d79 f6f0d78959484986bab10b40c84e9406 036c977b26cc46cabc5f599cb6c00e9a - - default default] Lock "714e9301-631d-44ba-8f91-a1d65209745b" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:29:59 np0005548731 nova_compute[232433]: 2025-12-06 07:29:59.898 232437 DEBUG oslo_concurrency.lockutils [None req-bc2a11f0-393d-407d-8be8-a59d52fa8d79 f6f0d78959484986bab10b40c84e9406 036c977b26cc46cabc5f599cb6c00e9a - - default default] Acquiring lock "714e9301-631d-44ba-8f91-a1d65209745b-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:29:59 np0005548731 nova_compute[232433]: 2025-12-06 07:29:59.899 232437 DEBUG oslo_concurrency.lockutils [None req-bc2a11f0-393d-407d-8be8-a59d52fa8d79 f6f0d78959484986bab10b40c84e9406 036c977b26cc46cabc5f599cb6c00e9a - - default default] Lock "714e9301-631d-44ba-8f91-a1d65209745b-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:29:59 np0005548731 nova_compute[232433]: 2025-12-06 07:29:59.899 232437 DEBUG oslo_concurrency.lockutils [None req-bc2a11f0-393d-407d-8be8-a59d52fa8d79 f6f0d78959484986bab10b40c84e9406 036c977b26cc46cabc5f599cb6c00e9a - - default default] Lock "714e9301-631d-44ba-8f91-a1d65209745b-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:29:59 np0005548731 nova_compute[232433]: 2025-12-06 07:29:59.900 232437 INFO nova.compute.manager [None req-bc2a11f0-393d-407d-8be8-a59d52fa8d79 f6f0d78959484986bab10b40c84e9406 036c977b26cc46cabc5f599cb6c00e9a - - default default] [instance: 714e9301-631d-44ba-8f91-a1d65209745b] Terminating instance#033[00m
Dec  6 02:29:59 np0005548731 nova_compute[232433]: 2025-12-06 07:29:59.901 232437 DEBUG nova.compute.manager [None req-bc2a11f0-393d-407d-8be8-a59d52fa8d79 f6f0d78959484986bab10b40c84e9406 036c977b26cc46cabc5f599cb6c00e9a - - default default] [instance: 714e9301-631d-44ba-8f91-a1d65209745b] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec  6 02:30:00 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:30:00 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:30:00 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:30:00.180 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:30:00 np0005548731 kernel: tap012ae37e-97 (unregistering): left promiscuous mode
Dec  6 02:30:00 np0005548731 NetworkManager[49182]: <info>  [1765006200.2525] device (tap012ae37e-97): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec  6 02:30:00 np0005548731 nova_compute[232433]: 2025-12-06 07:30:00.259 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:30:00 np0005548731 ovn_controller[133927]: 2025-12-06T07:30:00Z|00457|binding|INFO|Releasing lport 012ae37e-970e-4506-8216-7c5339055cf7 from this chassis (sb_readonly=0)
Dec  6 02:30:00 np0005548731 ovn_controller[133927]: 2025-12-06T07:30:00Z|00458|binding|INFO|Setting lport 012ae37e-970e-4506-8216-7c5339055cf7 down in Southbound
Dec  6 02:30:00 np0005548731 ovn_controller[133927]: 2025-12-06T07:30:00Z|00459|binding|INFO|Removing iface tap012ae37e-97 ovn-installed in OVS
Dec  6 02:30:00 np0005548731 nova_compute[232433]: 2025-12-06 07:30:00.261 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:30:00 np0005548731 nova_compute[232433]: 2025-12-06 07:30:00.275 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:30:00 np0005548731 systemd[1]: machine-qemu\x2d50\x2dinstance\x2d0000006f.scope: Deactivated successfully.
Dec  6 02:30:00 np0005548731 systemd[1]: machine-qemu\x2d50\x2dinstance\x2d0000006f.scope: Consumed 15.377s CPU time.
Dec  6 02:30:00 np0005548731 systemd-machined[195355]: Machine qemu-50-instance-0000006f terminated.
Dec  6 02:30:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:30:00.355 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:01:71:4e 10.100.0.10'], port_security=['fa:16:3e:01:71:4e 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '714e9301-631d-44ba-8f91-a1d65209745b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-248c2772-b8d1-4900-9e05-67104b197a82', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '036c977b26cc46cabc5f599cb6c00e9a', 'neutron:revision_number': '8', 'neutron:security_group_ids': 'e899a491-7b15-4e66-adca-8deec8c4c859', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4eb80078-9659-4d3e-aaa8-ad516bb7d903, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], logical_port=012ae37e-970e-4506-8216-7c5339055cf7) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 02:30:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:30:00.356 143965 INFO neutron.agent.ovn.metadata.agent [-] Port 012ae37e-970e-4506-8216-7c5339055cf7 in datapath 248c2772-b8d1-4900-9e05-67104b197a82 unbound from our chassis#033[00m
Dec  6 02:30:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:30:00.357 143965 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 248c2772-b8d1-4900-9e05-67104b197a82 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m
Dec  6 02:30:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:30:00.358 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[7e336d83-cebc-4560-a50a-cb9c2abf56c7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:30:00 np0005548731 nova_compute[232433]: 2025-12-06 07:30:00.535 232437 INFO nova.virt.libvirt.driver [-] [instance: 714e9301-631d-44ba-8f91-a1d65209745b] Instance destroyed successfully.#033[00m
Dec  6 02:30:00 np0005548731 nova_compute[232433]: 2025-12-06 07:30:00.535 232437 DEBUG nova.objects.instance [None req-bc2a11f0-393d-407d-8be8-a59d52fa8d79 f6f0d78959484986bab10b40c84e9406 036c977b26cc46cabc5f599cb6c00e9a - - default default] Lazy-loading 'resources' on Instance uuid 714e9301-631d-44ba-8f91-a1d65209745b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:30:00 np0005548731 ceph-mon[77458]: overall HEALTH_OK
Dec  6 02:30:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:30:00.871 143965 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:30:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:30:00.872 143965 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:30:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:30:00.872 143965 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:30:00 np0005548731 nova_compute[232433]: 2025-12-06 07:30:00.998 232437 DEBUG nova.virt.libvirt.vif [None req-bc2a11f0-393d-407d-8be8-a59d52fa8d79 f6f0d78959484986bab10b40c84e9406 036c977b26cc46cabc5f599cb6c00e9a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T07:27:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerRescueTestJSONUnderV235-server-965064608',display_name='tempest-ServerRescueTestJSONUnderV235-server-965064608',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serverrescuetestjsonunderv235-server-965064608',id=111,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-06T07:29:31Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='036c977b26cc46cabc5f599cb6c00e9a',ramdisk_id='',reservation_id='r-1y4pi8rg',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif
_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerRescueTestJSONUnderV235-706658007',owner_user_name='tempest-ServerRescueTestJSONUnderV235-706658007-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T07:29:31Z,user_data=None,user_id='f6f0d78959484986bab10b40c84e9406',uuid=714e9301-631d-44ba-8f91-a1d65209745b,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='rescued') vif={"id": "012ae37e-970e-4506-8216-7c5339055cf7", "address": "fa:16:3e:01:71:4e", "network": {"id": "248c2772-b8d1-4900-9e05-67104b197a82", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-505599830-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "036c977b26cc46cabc5f599cb6c00e9a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap012ae37e-97", "ovs_interfaceid": "012ae37e-970e-4506-8216-7c5339055cf7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec  6 02:30:01 np0005548731 nova_compute[232433]: 2025-12-06 07:30:00.998 232437 DEBUG nova.network.os_vif_util [None req-bc2a11f0-393d-407d-8be8-a59d52fa8d79 f6f0d78959484986bab10b40c84e9406 036c977b26cc46cabc5f599cb6c00e9a - - default default] Converting VIF {"id": "012ae37e-970e-4506-8216-7c5339055cf7", "address": "fa:16:3e:01:71:4e", "network": {"id": "248c2772-b8d1-4900-9e05-67104b197a82", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-505599830-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "036c977b26cc46cabc5f599cb6c00e9a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap012ae37e-97", "ovs_interfaceid": "012ae37e-970e-4506-8216-7c5339055cf7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 02:30:01 np0005548731 nova_compute[232433]: 2025-12-06 07:30:00.999 232437 DEBUG nova.network.os_vif_util [None req-bc2a11f0-393d-407d-8be8-a59d52fa8d79 f6f0d78959484986bab10b40c84e9406 036c977b26cc46cabc5f599cb6c00e9a - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:01:71:4e,bridge_name='br-int',has_traffic_filtering=True,id=012ae37e-970e-4506-8216-7c5339055cf7,network=Network(248c2772-b8d1-4900-9e05-67104b197a82),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap012ae37e-97') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 02:30:01 np0005548731 nova_compute[232433]: 2025-12-06 07:30:00.999 232437 DEBUG os_vif [None req-bc2a11f0-393d-407d-8be8-a59d52fa8d79 f6f0d78959484986bab10b40c84e9406 036c977b26cc46cabc5f599cb6c00e9a - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:01:71:4e,bridge_name='br-int',has_traffic_filtering=True,id=012ae37e-970e-4506-8216-7c5339055cf7,network=Network(248c2772-b8d1-4900-9e05-67104b197a82),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap012ae37e-97') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec  6 02:30:01 np0005548731 nova_compute[232433]: 2025-12-06 07:30:01.001 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:30:01 np0005548731 nova_compute[232433]: 2025-12-06 07:30:01.001 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap012ae37e-97, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:30:01 np0005548731 nova_compute[232433]: 2025-12-06 07:30:01.002 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:30:01 np0005548731 nova_compute[232433]: 2025-12-06 07:30:01.004 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  6 02:30:01 np0005548731 nova_compute[232433]: 2025-12-06 07:30:01.007 232437 INFO os_vif [None req-bc2a11f0-393d-407d-8be8-a59d52fa8d79 f6f0d78959484986bab10b40c84e9406 036c977b26cc46cabc5f599cb6c00e9a - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:01:71:4e,bridge_name='br-int',has_traffic_filtering=True,id=012ae37e-970e-4506-8216-7c5339055cf7,network=Network(248c2772-b8d1-4900-9e05-67104b197a82),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap012ae37e-97')#033[00m
Dec  6 02:30:01 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:30:01 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:30:01 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:30:01.395 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:30:02 np0005548731 nova_compute[232433]: 2025-12-06 07:30:02.141 232437 DEBUG nova.compute.manager [req-e759c603-f814-4e16-8fb4-a2dd2a43c336 req-95e60f90-0fc6-4a3d-816d-a87aad968718 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 714e9301-631d-44ba-8f91-a1d65209745b] Received event network-vif-unplugged-012ae37e-970e-4506-8216-7c5339055cf7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:30:02 np0005548731 nova_compute[232433]: 2025-12-06 07:30:02.141 232437 DEBUG oslo_concurrency.lockutils [req-e759c603-f814-4e16-8fb4-a2dd2a43c336 req-95e60f90-0fc6-4a3d-816d-a87aad968718 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "714e9301-631d-44ba-8f91-a1d65209745b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:30:02 np0005548731 nova_compute[232433]: 2025-12-06 07:30:02.141 232437 DEBUG oslo_concurrency.lockutils [req-e759c603-f814-4e16-8fb4-a2dd2a43c336 req-95e60f90-0fc6-4a3d-816d-a87aad968718 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "714e9301-631d-44ba-8f91-a1d65209745b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:30:02 np0005548731 nova_compute[232433]: 2025-12-06 07:30:02.141 232437 DEBUG oslo_concurrency.lockutils [req-e759c603-f814-4e16-8fb4-a2dd2a43c336 req-95e60f90-0fc6-4a3d-816d-a87aad968718 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "714e9301-631d-44ba-8f91-a1d65209745b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:30:02 np0005548731 nova_compute[232433]: 2025-12-06 07:30:02.142 232437 DEBUG nova.compute.manager [req-e759c603-f814-4e16-8fb4-a2dd2a43c336 req-95e60f90-0fc6-4a3d-816d-a87aad968718 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 714e9301-631d-44ba-8f91-a1d65209745b] No waiting events found dispatching network-vif-unplugged-012ae37e-970e-4506-8216-7c5339055cf7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 02:30:02 np0005548731 nova_compute[232433]: 2025-12-06 07:30:02.142 232437 DEBUG nova.compute.manager [req-e759c603-f814-4e16-8fb4-a2dd2a43c336 req-95e60f90-0fc6-4a3d-816d-a87aad968718 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 714e9301-631d-44ba-8f91-a1d65209745b] Received event network-vif-unplugged-012ae37e-970e-4506-8216-7c5339055cf7 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Dec  6 02:30:02 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:30:02 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:30:02 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:30:02.183 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:30:02 np0005548731 nova_compute[232433]: 2025-12-06 07:30:02.578 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:30:02 np0005548731 ceph-mgr[77818]: client.0 ms_handle_reset on v2:192.168.122.100:6800/798720280
Dec  6 02:30:03 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:30:03 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:30:03 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:30:03 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:30:03.397 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:30:04 np0005548731 nova_compute[232433]: 2025-12-06 07:30:04.103 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:30:04 np0005548731 nova_compute[232433]: 2025-12-06 07:30:04.104 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  6 02:30:04 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:30:04 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:30:04 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:30:04.186 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:30:04 np0005548731 nova_compute[232433]: 2025-12-06 07:30:04.215 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] [instance: 714e9301-631d-44ba-8f91-a1d65209745b] Skipping network cache update for instance because it is being deleted. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9907#033[00m
Dec  6 02:30:04 np0005548731 nova_compute[232433]: 2025-12-06 07:30:04.216 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Dec  6 02:30:04 np0005548731 nova_compute[232433]: 2025-12-06 07:30:04.216 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:30:05 np0005548731 nova_compute[232433]: 2025-12-06 07:30:05.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:30:05 np0005548731 nova_compute[232433]: 2025-12-06 07:30:05.104 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  6 02:30:05 np0005548731 nova_compute[232433]: 2025-12-06 07:30:05.172 232437 DEBUG nova.compute.manager [req-4307e32c-c8e5-4f77-989f-88c7686562c0 req-6685fe8d-d10a-482c-b46c-60fe98885adc 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 714e9301-631d-44ba-8f91-a1d65209745b] Received event network-vif-plugged-012ae37e-970e-4506-8216-7c5339055cf7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:30:05 np0005548731 nova_compute[232433]: 2025-12-06 07:30:05.172 232437 DEBUG oslo_concurrency.lockutils [req-4307e32c-c8e5-4f77-989f-88c7686562c0 req-6685fe8d-d10a-482c-b46c-60fe98885adc 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "714e9301-631d-44ba-8f91-a1d65209745b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:30:05 np0005548731 nova_compute[232433]: 2025-12-06 07:30:05.172 232437 DEBUG oslo_concurrency.lockutils [req-4307e32c-c8e5-4f77-989f-88c7686562c0 req-6685fe8d-d10a-482c-b46c-60fe98885adc 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "714e9301-631d-44ba-8f91-a1d65209745b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:30:05 np0005548731 nova_compute[232433]: 2025-12-06 07:30:05.172 232437 DEBUG oslo_concurrency.lockutils [req-4307e32c-c8e5-4f77-989f-88c7686562c0 req-6685fe8d-d10a-482c-b46c-60fe98885adc 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "714e9301-631d-44ba-8f91-a1d65209745b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:30:05 np0005548731 nova_compute[232433]: 2025-12-06 07:30:05.172 232437 DEBUG nova.compute.manager [req-4307e32c-c8e5-4f77-989f-88c7686562c0 req-6685fe8d-d10a-482c-b46c-60fe98885adc 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 714e9301-631d-44ba-8f91-a1d65209745b] No waiting events found dispatching network-vif-plugged-012ae37e-970e-4506-8216-7c5339055cf7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 02:30:05 np0005548731 nova_compute[232433]: 2025-12-06 07:30:05.173 232437 WARNING nova.compute.manager [req-4307e32c-c8e5-4f77-989f-88c7686562c0 req-6685fe8d-d10a-482c-b46c-60fe98885adc 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 714e9301-631d-44ba-8f91-a1d65209745b] Received unexpected event network-vif-plugged-012ae37e-970e-4506-8216-7c5339055cf7 for instance with vm_state rescued and task_state deleting.#033[00m
Dec  6 02:30:05 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:30:05 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:30:05 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:30:05.399 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:30:05 np0005548731 nova_compute[232433]: 2025-12-06 07:30:05.907 232437 INFO nova.virt.libvirt.driver [None req-bc2a11f0-393d-407d-8be8-a59d52fa8d79 f6f0d78959484986bab10b40c84e9406 036c977b26cc46cabc5f599cb6c00e9a - - default default] [instance: 714e9301-631d-44ba-8f91-a1d65209745b] Deleting instance files /var/lib/nova/instances/714e9301-631d-44ba-8f91-a1d65209745b_del#033[00m
Dec  6 02:30:05 np0005548731 nova_compute[232433]: 2025-12-06 07:30:05.908 232437 INFO nova.virt.libvirt.driver [None req-bc2a11f0-393d-407d-8be8-a59d52fa8d79 f6f0d78959484986bab10b40c84e9406 036c977b26cc46cabc5f599cb6c00e9a - - default default] [instance: 714e9301-631d-44ba-8f91-a1d65209745b] Deletion of /var/lib/nova/instances/714e9301-631d-44ba-8f91-a1d65209745b_del complete#033[00m
Dec  6 02:30:06 np0005548731 nova_compute[232433]: 2025-12-06 07:30:06.044 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:30:06 np0005548731 nova_compute[232433]: 2025-12-06 07:30:06.047 232437 INFO nova.compute.manager [None req-bc2a11f0-393d-407d-8be8-a59d52fa8d79 f6f0d78959484986bab10b40c84e9406 036c977b26cc46cabc5f599cb6c00e9a - - default default] [instance: 714e9301-631d-44ba-8f91-a1d65209745b] Took 6.15 seconds to destroy the instance on the hypervisor.#033[00m
Dec  6 02:30:06 np0005548731 nova_compute[232433]: 2025-12-06 07:30:06.048 232437 DEBUG oslo.service.loopingcall [None req-bc2a11f0-393d-407d-8be8-a59d52fa8d79 f6f0d78959484986bab10b40c84e9406 036c977b26cc46cabc5f599cb6c00e9a - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Dec  6 02:30:06 np0005548731 nova_compute[232433]: 2025-12-06 07:30:06.048 232437 DEBUG nova.compute.manager [-] [instance: 714e9301-631d-44ba-8f91-a1d65209745b] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Dec  6 02:30:06 np0005548731 nova_compute[232433]: 2025-12-06 07:30:06.048 232437 DEBUG nova.network.neutron [-] [instance: 714e9301-631d-44ba-8f91-a1d65209745b] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Dec  6 02:30:06 np0005548731 nova_compute[232433]: 2025-12-06 07:30:06.100 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:30:06 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:30:06 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:30:06 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:30:06.189 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:30:06 np0005548731 nova_compute[232433]: 2025-12-06 07:30:06.875 232437 DEBUG nova.network.neutron [-] [instance: 714e9301-631d-44ba-8f91-a1d65209745b] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 02:30:06 np0005548731 nova_compute[232433]: 2025-12-06 07:30:06.892 232437 INFO nova.compute.manager [-] [instance: 714e9301-631d-44ba-8f91-a1d65209745b] Took 0.84 seconds to deallocate network for instance.#033[00m
Dec  6 02:30:06 np0005548731 nova_compute[232433]: 2025-12-06 07:30:06.958 232437 DEBUG oslo_concurrency.lockutils [None req-bc2a11f0-393d-407d-8be8-a59d52fa8d79 f6f0d78959484986bab10b40c84e9406 036c977b26cc46cabc5f599cb6c00e9a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:30:06 np0005548731 nova_compute[232433]: 2025-12-06 07:30:06.959 232437 DEBUG oslo_concurrency.lockutils [None req-bc2a11f0-393d-407d-8be8-a59d52fa8d79 f6f0d78959484986bab10b40c84e9406 036c977b26cc46cabc5f599cb6c00e9a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:30:06 np0005548731 nova_compute[232433]: 2025-12-06 07:30:06.984 232437 DEBUG nova.compute.manager [req-705fc047-0bcf-493b-ae30-1c0a676a8b0e req-58fb9adc-0783-4c91-bccb-979ce8ff79dc 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 714e9301-631d-44ba-8f91-a1d65209745b] Received event network-vif-deleted-012ae37e-970e-4506-8216-7c5339055cf7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:30:07 np0005548731 nova_compute[232433]: 2025-12-06 07:30:07.006 232437 DEBUG oslo_concurrency.processutils [None req-bc2a11f0-393d-407d-8be8-a59d52fa8d79 f6f0d78959484986bab10b40c84e9406 036c977b26cc46cabc5f599cb6c00e9a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:30:07 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:30:07 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:30:07 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:30:07.401 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:30:07 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 02:30:07 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1666525427' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 02:30:07 np0005548731 nova_compute[232433]: 2025-12-06 07:30:07.433 232437 DEBUG oslo_concurrency.processutils [None req-bc2a11f0-393d-407d-8be8-a59d52fa8d79 f6f0d78959484986bab10b40c84e9406 036c977b26cc46cabc5f599cb6c00e9a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.427s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:30:07 np0005548731 nova_compute[232433]: 2025-12-06 07:30:07.439 232437 DEBUG nova.compute.provider_tree [None req-bc2a11f0-393d-407d-8be8-a59d52fa8d79 f6f0d78959484986bab10b40c84e9406 036c977b26cc46cabc5f599cb6c00e9a - - default default] Inventory has not changed in ProviderTree for provider: 6d00757a-082f-486d-ae84-869a2ba2e6e7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  6 02:30:07 np0005548731 nova_compute[232433]: 2025-12-06 07:30:07.475 232437 DEBUG nova.scheduler.client.report [None req-bc2a11f0-393d-407d-8be8-a59d52fa8d79 f6f0d78959484986bab10b40c84e9406 036c977b26cc46cabc5f599cb6c00e9a - - default default] Inventory has not changed for provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  6 02:30:07 np0005548731 nova_compute[232433]: 2025-12-06 07:30:07.512 232437 DEBUG oslo_concurrency.lockutils [None req-bc2a11f0-393d-407d-8be8-a59d52fa8d79 f6f0d78959484986bab10b40c84e9406 036c977b26cc46cabc5f599cb6c00e9a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.554s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:30:07 np0005548731 nova_compute[232433]: 2025-12-06 07:30:07.548 232437 INFO nova.scheduler.client.report [None req-bc2a11f0-393d-407d-8be8-a59d52fa8d79 f6f0d78959484986bab10b40c84e9406 036c977b26cc46cabc5f599cb6c00e9a - - default default] Deleted allocations for instance 714e9301-631d-44ba-8f91-a1d65209745b#033[00m
Dec  6 02:30:07 np0005548731 nova_compute[232433]: 2025-12-06 07:30:07.580 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:30:07 np0005548731 nova_compute[232433]: 2025-12-06 07:30:07.619 232437 DEBUG oslo_concurrency.lockutils [None req-bc2a11f0-393d-407d-8be8-a59d52fa8d79 f6f0d78959484986bab10b40c84e9406 036c977b26cc46cabc5f599cb6c00e9a - - default default] Lock "714e9301-631d-44ba-8f91-a1d65209745b" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 7.721s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:30:08 np0005548731 nova_compute[232433]: 2025-12-06 07:30:08.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:30:08 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:30:08 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:30:08 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:30:08.193 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:30:08 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:30:08 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  6 02:30:08 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/815563372' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  6 02:30:08 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  6 02:30:08 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/815563372' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  6 02:30:08 np0005548731 nova_compute[232433]: 2025-12-06 07:30:08.757 232437 DEBUG oslo_concurrency.lockutils [None req-adb712c5-e0cf-4dc3-8c70-9016209a6f69 8e5d532f4af34acc9e3a7a08819bb089 1a5dd5f9ca5747618b386c87a40ede88 - - default default] Acquiring lock "02b3a680-d1bd-462f-95dc-aa0934c7ceac" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:30:08 np0005548731 nova_compute[232433]: 2025-12-06 07:30:08.758 232437 DEBUG oslo_concurrency.lockutils [None req-adb712c5-e0cf-4dc3-8c70-9016209a6f69 8e5d532f4af34acc9e3a7a08819bb089 1a5dd5f9ca5747618b386c87a40ede88 - - default default] Lock "02b3a680-d1bd-462f-95dc-aa0934c7ceac" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:30:08 np0005548731 nova_compute[232433]: 2025-12-06 07:30:08.779 232437 DEBUG nova.compute.manager [None req-adb712c5-e0cf-4dc3-8c70-9016209a6f69 8e5d532f4af34acc9e3a7a08819bb089 1a5dd5f9ca5747618b386c87a40ede88 - - default default] [instance: 02b3a680-d1bd-462f-95dc-aa0934c7ceac] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Dec  6 02:30:08 np0005548731 nova_compute[232433]: 2025-12-06 07:30:08.912 232437 DEBUG oslo_concurrency.lockutils [None req-adb712c5-e0cf-4dc3-8c70-9016209a6f69 8e5d532f4af34acc9e3a7a08819bb089 1a5dd5f9ca5747618b386c87a40ede88 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:30:08 np0005548731 nova_compute[232433]: 2025-12-06 07:30:08.913 232437 DEBUG oslo_concurrency.lockutils [None req-adb712c5-e0cf-4dc3-8c70-9016209a6f69 8e5d532f4af34acc9e3a7a08819bb089 1a5dd5f9ca5747618b386c87a40ede88 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:30:08 np0005548731 nova_compute[232433]: 2025-12-06 07:30:08.929 232437 DEBUG nova.virt.hardware [None req-adb712c5-e0cf-4dc3-8c70-9016209a6f69 8e5d532f4af34acc9e3a7a08819bb089 1a5dd5f9ca5747618b386c87a40ede88 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Dec  6 02:30:08 np0005548731 nova_compute[232433]: 2025-12-06 07:30:08.930 232437 INFO nova.compute.claims [None req-adb712c5-e0cf-4dc3-8c70-9016209a6f69 8e5d532f4af34acc9e3a7a08819bb089 1a5dd5f9ca5747618b386c87a40ede88 - - default default] [instance: 02b3a680-d1bd-462f-95dc-aa0934c7ceac] Claim successful on node compute-2.ctlplane.example.com#033[00m
Dec  6 02:30:09 np0005548731 nova_compute[232433]: 2025-12-06 07:30:09.050 232437 DEBUG oslo_concurrency.processutils [None req-adb712c5-e0cf-4dc3-8c70-9016209a6f69 8e5d532f4af34acc9e3a7a08819bb089 1a5dd5f9ca5747618b386c87a40ede88 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:30:09 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:30:09 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:30:09 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:30:09.402 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:30:09 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 02:30:09 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/20788799' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 02:30:09 np0005548731 nova_compute[232433]: 2025-12-06 07:30:09.464 232437 DEBUG oslo_concurrency.processutils [None req-adb712c5-e0cf-4dc3-8c70-9016209a6f69 8e5d532f4af34acc9e3a7a08819bb089 1a5dd5f9ca5747618b386c87a40ede88 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.413s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:30:09 np0005548731 nova_compute[232433]: 2025-12-06 07:30:09.469 232437 DEBUG nova.compute.provider_tree [None req-adb712c5-e0cf-4dc3-8c70-9016209a6f69 8e5d532f4af34acc9e3a7a08819bb089 1a5dd5f9ca5747618b386c87a40ede88 - - default default] Inventory has not changed in ProviderTree for provider: 6d00757a-082f-486d-ae84-869a2ba2e6e7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  6 02:30:09 np0005548731 nova_compute[232433]: 2025-12-06 07:30:09.493 232437 DEBUG nova.scheduler.client.report [None req-adb712c5-e0cf-4dc3-8c70-9016209a6f69 8e5d532f4af34acc9e3a7a08819bb089 1a5dd5f9ca5747618b386c87a40ede88 - - default default] Inventory has not changed for provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  6 02:30:09 np0005548731 nova_compute[232433]: 2025-12-06 07:30:09.516 232437 DEBUG oslo_concurrency.lockutils [None req-adb712c5-e0cf-4dc3-8c70-9016209a6f69 8e5d532f4af34acc9e3a7a08819bb089 1a5dd5f9ca5747618b386c87a40ede88 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.603s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:30:09 np0005548731 nova_compute[232433]: 2025-12-06 07:30:09.517 232437 DEBUG nova.compute.manager [None req-adb712c5-e0cf-4dc3-8c70-9016209a6f69 8e5d532f4af34acc9e3a7a08819bb089 1a5dd5f9ca5747618b386c87a40ede88 - - default default] [instance: 02b3a680-d1bd-462f-95dc-aa0934c7ceac] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Dec  6 02:30:09 np0005548731 nova_compute[232433]: 2025-12-06 07:30:09.582 232437 DEBUG nova.compute.manager [None req-adb712c5-e0cf-4dc3-8c70-9016209a6f69 8e5d532f4af34acc9e3a7a08819bb089 1a5dd5f9ca5747618b386c87a40ede88 - - default default] [instance: 02b3a680-d1bd-462f-95dc-aa0934c7ceac] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Dec  6 02:30:09 np0005548731 nova_compute[232433]: 2025-12-06 07:30:09.582 232437 DEBUG nova.network.neutron [None req-adb712c5-e0cf-4dc3-8c70-9016209a6f69 8e5d532f4af34acc9e3a7a08819bb089 1a5dd5f9ca5747618b386c87a40ede88 - - default default] [instance: 02b3a680-d1bd-462f-95dc-aa0934c7ceac] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Dec  6 02:30:09 np0005548731 nova_compute[232433]: 2025-12-06 07:30:09.609 232437 INFO nova.virt.libvirt.driver [None req-adb712c5-e0cf-4dc3-8c70-9016209a6f69 8e5d532f4af34acc9e3a7a08819bb089 1a5dd5f9ca5747618b386c87a40ede88 - - default default] [instance: 02b3a680-d1bd-462f-95dc-aa0934c7ceac] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Dec  6 02:30:09 np0005548731 nova_compute[232433]: 2025-12-06 07:30:09.677 232437 DEBUG nova.compute.manager [None req-adb712c5-e0cf-4dc3-8c70-9016209a6f69 8e5d532f4af34acc9e3a7a08819bb089 1a5dd5f9ca5747618b386c87a40ede88 - - default default] [instance: 02b3a680-d1bd-462f-95dc-aa0934c7ceac] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Dec  6 02:30:09 np0005548731 nova_compute[232433]: 2025-12-06 07:30:09.726 232437 INFO nova.virt.block_device [None req-adb712c5-e0cf-4dc3-8c70-9016209a6f69 8e5d532f4af34acc9e3a7a08819bb089 1a5dd5f9ca5747618b386c87a40ede88 - - default default] [instance: 02b3a680-d1bd-462f-95dc-aa0934c7ceac] Booting with volume 0deea545-3f8b-4364-87f8-f0e1b1b71caf at /dev/vda#033[00m
Dec  6 02:30:09 np0005548731 nova_compute[232433]: 2025-12-06 07:30:09.865 232437 DEBUG os_brick.utils [None req-adb712c5-e0cf-4dc3-8c70-9016209a6f69 8e5d532f4af34acc9e3a7a08819bb089 1a5dd5f9ca5747618b386c87a40ede88 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.102', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-2.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Dec  6 02:30:09 np0005548731 nova_compute[232433]: 2025-12-06 07:30:09.866 237736 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:30:09 np0005548731 nova_compute[232433]: 2025-12-06 07:30:09.877 237736 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.011s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:30:09 np0005548731 nova_compute[232433]: 2025-12-06 07:30:09.877 237736 DEBUG oslo.privsep.daemon [-] privsep: reply[5b623052-00bb-4a96-93ad-64dd32dbf82d]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:30:09 np0005548731 nova_compute[232433]: 2025-12-06 07:30:09.879 237736 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:30:09 np0005548731 nova_compute[232433]: 2025-12-06 07:30:09.887 237736 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:30:09 np0005548731 nova_compute[232433]: 2025-12-06 07:30:09.887 237736 DEBUG oslo.privsep.daemon [-] privsep: reply[42111c5b-3508-4a0e-86cc-7611f5c99182]: (4, ('InitiatorName=iqn.1994-05.com.redhat:63778d5959f0', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:30:09 np0005548731 nova_compute[232433]: 2025-12-06 07:30:09.889 237736 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:30:09 np0005548731 nova_compute[232433]: 2025-12-06 07:30:09.897 237736 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.007s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:30:09 np0005548731 nova_compute[232433]: 2025-12-06 07:30:09.897 237736 DEBUG oslo.privsep.daemon [-] privsep: reply[96663fff-b821-4837-a3a2-2a87066161b3]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:30:09 np0005548731 nova_compute[232433]: 2025-12-06 07:30:09.899 237736 DEBUG oslo.privsep.daemon [-] privsep: reply[dc46ff57-ea8a-4023-a8bf-9ea7d16a036f]: (4, 'ec2492e2-e0d1-43d3-b74f-744a8272a0e6') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:30:09 np0005548731 nova_compute[232433]: 2025-12-06 07:30:09.899 232437 DEBUG oslo_concurrency.processutils [None req-adb712c5-e0cf-4dc3-8c70-9016209a6f69 8e5d532f4af34acc9e3a7a08819bb089 1a5dd5f9ca5747618b386c87a40ede88 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:30:09 np0005548731 nova_compute[232433]: 2025-12-06 07:30:09.922 232437 DEBUG oslo_concurrency.processutils [None req-adb712c5-e0cf-4dc3-8c70-9016209a6f69 8e5d532f4af34acc9e3a7a08819bb089 1a5dd5f9ca5747618b386c87a40ede88 - - default default] CMD "nvme version" returned: 0 in 0.023s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:30:09 np0005548731 nova_compute[232433]: 2025-12-06 07:30:09.924 232437 DEBUG os_brick.initiator.connectors.lightos [None req-adb712c5-e0cf-4dc3-8c70-9016209a6f69 8e5d532f4af34acc9e3a7a08819bb089 1a5dd5f9ca5747618b386c87a40ede88 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Dec  6 02:30:09 np0005548731 nova_compute[232433]: 2025-12-06 07:30:09.925 232437 DEBUG os_brick.initiator.connectors.lightos [None req-adb712c5-e0cf-4dc3-8c70-9016209a6f69 8e5d532f4af34acc9e3a7a08819bb089 1a5dd5f9ca5747618b386c87a40ede88 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Dec  6 02:30:09 np0005548731 nova_compute[232433]: 2025-12-06 07:30:09.925 232437 DEBUG os_brick.initiator.connectors.lightos [None req-adb712c5-e0cf-4dc3-8c70-9016209a6f69 8e5d532f4af34acc9e3a7a08819bb089 1a5dd5f9ca5747618b386c87a40ede88 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:bf3e0a14-a5f8-4123-aa26-e7cad37b879a dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Dec  6 02:30:09 np0005548731 nova_compute[232433]: 2025-12-06 07:30:09.926 232437 DEBUG os_brick.utils [None req-adb712c5-e0cf-4dc3-8c70-9016209a6f69 8e5d532f4af34acc9e3a7a08819bb089 1a5dd5f9ca5747618b386c87a40ede88 - - default default] <== get_connector_properties: return (60ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.102', 'host': 'compute-2.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:63778d5959f0', 'do_local_attach': False, 'nvme_hostid': 'bf3e0a14-a5f8-4123-aa26-e7cad37b879a', 'system uuid': 'ec2492e2-e0d1-43d3-b74f-744a8272a0e6', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:bf3e0a14-a5f8-4123-aa26-e7cad37b879a', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Dec  6 02:30:09 np0005548731 nova_compute[232433]: 2025-12-06 07:30:09.926 232437 DEBUG nova.virt.block_device [None req-adb712c5-e0cf-4dc3-8c70-9016209a6f69 8e5d532f4af34acc9e3a7a08819bb089 1a5dd5f9ca5747618b386c87a40ede88 - - default default] [instance: 02b3a680-d1bd-462f-95dc-aa0934c7ceac] Updating existing volume attachment record: 605fe259-713f-428e-a86f-1b28c17a77b3 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Dec  6 02:30:10 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:30:10 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:30:10 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:30:10.196 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:30:10 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:30:10.854 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=44, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:ec:b3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '32:72:e7:89:e0:7d'}, ipsec=False) old=SB_Global(nb_cfg=43) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 02:30:10 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:30:10.855 143965 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec  6 02:30:10 np0005548731 nova_compute[232433]: 2025-12-06 07:30:10.856 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:30:10 np0005548731 nova_compute[232433]: 2025-12-06 07:30:10.914 232437 DEBUG nova.policy [None req-adb712c5-e0cf-4dc3-8c70-9016209a6f69 8e5d532f4af34acc9e3a7a08819bb089 1a5dd5f9ca5747618b386c87a40ede88 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '8e5d532f4af34acc9e3a7a08819bb089', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '1a5dd5f9ca5747618b386c87a40ede88', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Dec  6 02:30:11 np0005548731 nova_compute[232433]: 2025-12-06 07:30:11.045 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:30:11 np0005548731 nova_compute[232433]: 2025-12-06 07:30:11.103 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:30:11 np0005548731 nova_compute[232433]: 2025-12-06 07:30:11.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:30:11 np0005548731 nova_compute[232433]: 2025-12-06 07:30:11.134 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:30:11 np0005548731 nova_compute[232433]: 2025-12-06 07:30:11.135 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:30:11 np0005548731 nova_compute[232433]: 2025-12-06 07:30:11.135 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:30:11 np0005548731 nova_compute[232433]: 2025-12-06 07:30:11.135 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  6 02:30:11 np0005548731 nova_compute[232433]: 2025-12-06 07:30:11.135 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:30:11 np0005548731 nova_compute[232433]: 2025-12-06 07:30:11.303 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:30:11 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:30:11 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:30:11 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:30:11.405 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:30:11 np0005548731 nova_compute[232433]: 2025-12-06 07:30:11.409 232437 INFO nova.virt.block_device [None req-adb712c5-e0cf-4dc3-8c70-9016209a6f69 8e5d532f4af34acc9e3a7a08819bb089 1a5dd5f9ca5747618b386c87a40ede88 - - default default] [instance: 02b3a680-d1bd-462f-95dc-aa0934c7ceac] Booting with volume 2c5c3509-8d3e-4707-bc90-7053f467e836 at /dev/vdb#033[00m
Dec  6 02:30:11 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 02:30:11 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/878054034' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 02:30:11 np0005548731 nova_compute[232433]: 2025-12-06 07:30:11.570 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.435s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:30:11 np0005548731 nova_compute[232433]: 2025-12-06 07:30:11.602 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:30:11 np0005548731 nova_compute[232433]: 2025-12-06 07:30:11.692 232437 DEBUG os_brick.utils [None req-adb712c5-e0cf-4dc3-8c70-9016209a6f69 8e5d532f4af34acc9e3a7a08819bb089 1a5dd5f9ca5747618b386c87a40ede88 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.102', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-2.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Dec  6 02:30:11 np0005548731 nova_compute[232433]: 2025-12-06 07:30:11.693 237736 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:30:11 np0005548731 nova_compute[232433]: 2025-12-06 07:30:11.701 237736 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:30:11 np0005548731 nova_compute[232433]: 2025-12-06 07:30:11.702 237736 DEBUG oslo.privsep.daemon [-] privsep: reply[7f86412f-08a1-4d4f-acb3-be02b540d605]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:30:11 np0005548731 nova_compute[232433]: 2025-12-06 07:30:11.702 237736 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:30:11 np0005548731 nova_compute[232433]: 2025-12-06 07:30:11.709 237736 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.006s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:30:11 np0005548731 nova_compute[232433]: 2025-12-06 07:30:11.709 237736 DEBUG oslo.privsep.daemon [-] privsep: reply[2574a3db-b0f0-463c-96ee-8d6455a82d18]: (4, ('InitiatorName=iqn.1994-05.com.redhat:63778d5959f0', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:30:11 np0005548731 nova_compute[232433]: 2025-12-06 07:30:11.711 237736 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:30:11 np0005548731 nova_compute[232433]: 2025-12-06 07:30:11.718 237736 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.007s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:30:11 np0005548731 nova_compute[232433]: 2025-12-06 07:30:11.718 237736 DEBUG oslo.privsep.daemon [-] privsep: reply[0752d11a-c085-4fc6-80f3-002f8c34601d]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:30:11 np0005548731 nova_compute[232433]: 2025-12-06 07:30:11.719 237736 DEBUG oslo.privsep.daemon [-] privsep: reply[463d13b2-a226-4688-b926-99c00c126173]: (4, 'ec2492e2-e0d1-43d3-b74f-744a8272a0e6') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:30:11 np0005548731 nova_compute[232433]: 2025-12-06 07:30:11.719 232437 DEBUG oslo_concurrency.processutils [None req-adb712c5-e0cf-4dc3-8c70-9016209a6f69 8e5d532f4af34acc9e3a7a08819bb089 1a5dd5f9ca5747618b386c87a40ede88 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:30:11 np0005548731 nova_compute[232433]: 2025-12-06 07:30:11.740 232437 DEBUG oslo_concurrency.processutils [None req-adb712c5-e0cf-4dc3-8c70-9016209a6f69 8e5d532f4af34acc9e3a7a08819bb089 1a5dd5f9ca5747618b386c87a40ede88 - - default default] CMD "nvme version" returned: 0 in 0.020s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:30:11 np0005548731 nova_compute[232433]: 2025-12-06 07:30:11.743 232437 DEBUG os_brick.initiator.connectors.lightos [None req-adb712c5-e0cf-4dc3-8c70-9016209a6f69 8e5d532f4af34acc9e3a7a08819bb089 1a5dd5f9ca5747618b386c87a40ede88 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Dec  6 02:30:11 np0005548731 nova_compute[232433]: 2025-12-06 07:30:11.743 232437 DEBUG os_brick.initiator.connectors.lightos [None req-adb712c5-e0cf-4dc3-8c70-9016209a6f69 8e5d532f4af34acc9e3a7a08819bb089 1a5dd5f9ca5747618b386c87a40ede88 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Dec  6 02:30:11 np0005548731 nova_compute[232433]: 2025-12-06 07:30:11.744 232437 DEBUG os_brick.initiator.connectors.lightos [None req-adb712c5-e0cf-4dc3-8c70-9016209a6f69 8e5d532f4af34acc9e3a7a08819bb089 1a5dd5f9ca5747618b386c87a40ede88 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:bf3e0a14-a5f8-4123-aa26-e7cad37b879a dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Dec  6 02:30:11 np0005548731 nova_compute[232433]: 2025-12-06 07:30:11.744 232437 DEBUG os_brick.utils [None req-adb712c5-e0cf-4dc3-8c70-9016209a6f69 8e5d532f4af34acc9e3a7a08819bb089 1a5dd5f9ca5747618b386c87a40ede88 - - default default] <== get_connector_properties: return (51ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.102', 'host': 'compute-2.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:63778d5959f0', 'do_local_attach': False, 'nvme_hostid': 'bf3e0a14-a5f8-4123-aa26-e7cad37b879a', 'system uuid': 'ec2492e2-e0d1-43d3-b74f-744a8272a0e6', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:bf3e0a14-a5f8-4123-aa26-e7cad37b879a', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Dec  6 02:30:11 np0005548731 nova_compute[232433]: 2025-12-06 07:30:11.744 232437 DEBUG nova.virt.block_device [None req-adb712c5-e0cf-4dc3-8c70-9016209a6f69 8e5d532f4af34acc9e3a7a08819bb089 1a5dd5f9ca5747618b386c87a40ede88 - - default default] [instance: 02b3a680-d1bd-462f-95dc-aa0934c7ceac] Updating existing volume attachment record: 6b36861a-4ab3-49b3-80b9-e4897e7f322c _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Dec  6 02:30:11 np0005548731 nova_compute[232433]: 2025-12-06 07:30:11.773 232437 WARNING nova.virt.libvirt.driver [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  6 02:30:11 np0005548731 nova_compute[232433]: 2025-12-06 07:30:11.775 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4468MB free_disk=20.863555908203125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  6 02:30:11 np0005548731 nova_compute[232433]: 2025-12-06 07:30:11.775 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:30:11 np0005548731 nova_compute[232433]: 2025-12-06 07:30:11.775 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:30:11 np0005548731 nova_compute[232433]: 2025-12-06 07:30:11.828 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Instance 02b3a680-d1bd-462f-95dc-aa0934c7ceac actively managed on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  6 02:30:11 np0005548731 nova_compute[232433]: 2025-12-06 07:30:11.828 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  6 02:30:11 np0005548731 nova_compute[232433]: 2025-12-06 07:30:11.829 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  6 02:30:11 np0005548731 nova_compute[232433]: 2025-12-06 07:30:11.870 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:30:12 np0005548731 nova_compute[232433]: 2025-12-06 07:30:12.177 232437 DEBUG nova.network.neutron [None req-adb712c5-e0cf-4dc3-8c70-9016209a6f69 8e5d532f4af34acc9e3a7a08819bb089 1a5dd5f9ca5747618b386c87a40ede88 - - default default] [instance: 02b3a680-d1bd-462f-95dc-aa0934c7ceac] Successfully created port: 70cec09c-3d65-4e69-8c35-7d82e75f4add _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Dec  6 02:30:12 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:30:12 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:30:12 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:30:12.200 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:30:12 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 02:30:12 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/87351210' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 02:30:12 np0005548731 nova_compute[232433]: 2025-12-06 07:30:12.303 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.433s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:30:12 np0005548731 nova_compute[232433]: 2025-12-06 07:30:12.308 232437 DEBUG nova.compute.provider_tree [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Inventory has not changed in ProviderTree for provider: 6d00757a-082f-486d-ae84-869a2ba2e6e7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  6 02:30:12 np0005548731 nova_compute[232433]: 2025-12-06 07:30:12.324 232437 DEBUG nova.scheduler.client.report [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Inventory has not changed for provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  6 02:30:12 np0005548731 nova_compute[232433]: 2025-12-06 07:30:12.345 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  6 02:30:12 np0005548731 nova_compute[232433]: 2025-12-06 07:30:12.345 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.570s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:30:12 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  6 02:30:12 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2865460060' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  6 02:30:12 np0005548731 nova_compute[232433]: 2025-12-06 07:30:12.581 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:30:12 np0005548731 nova_compute[232433]: 2025-12-06 07:30:12.795 232437 INFO nova.virt.block_device [None req-adb712c5-e0cf-4dc3-8c70-9016209a6f69 8e5d532f4af34acc9e3a7a08819bb089 1a5dd5f9ca5747618b386c87a40ede88 - - default default] [instance: 02b3a680-d1bd-462f-95dc-aa0934c7ceac] Booting with volume 6db274ff-02a7-4513-aa8b-c44494100a39 at /dev/vdc#033[00m
Dec  6 02:30:13 np0005548731 nova_compute[232433]: 2025-12-06 07:30:13.338 232437 DEBUG nova.network.neutron [None req-adb712c5-e0cf-4dc3-8c70-9016209a6f69 8e5d532f4af34acc9e3a7a08819bb089 1a5dd5f9ca5747618b386c87a40ede88 - - default default] [instance: 02b3a680-d1bd-462f-95dc-aa0934c7ceac] Successfully created port: ee2de0a7-a321-4f41-868f-457ea7275ded _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Dec  6 02:30:13 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:30:13 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:30:13 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:30:13 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:30:13.407 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:30:13 np0005548731 nova_compute[232433]: 2025-12-06 07:30:13.532 232437 DEBUG os_brick.utils [None req-adb712c5-e0cf-4dc3-8c70-9016209a6f69 8e5d532f4af34acc9e3a7a08819bb089 1a5dd5f9ca5747618b386c87a40ede88 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.102', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-2.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Dec  6 02:30:13 np0005548731 nova_compute[232433]: 2025-12-06 07:30:13.533 237736 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:30:13 np0005548731 nova_compute[232433]: 2025-12-06 07:30:13.543 237736 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.010s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:30:13 np0005548731 nova_compute[232433]: 2025-12-06 07:30:13.543 237736 DEBUG oslo.privsep.daemon [-] privsep: reply[a192c2a1-fa50-4a33-aade-3e28a8cd5a69]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:30:13 np0005548731 nova_compute[232433]: 2025-12-06 07:30:13.544 237736 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:30:13 np0005548731 nova_compute[232433]: 2025-12-06 07:30:13.552 237736 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:30:13 np0005548731 nova_compute[232433]: 2025-12-06 07:30:13.553 237736 DEBUG oslo.privsep.daemon [-] privsep: reply[95609f7d-3495-4f2d-ba76-9c5dcae14b40]: (4, ('InitiatorName=iqn.1994-05.com.redhat:63778d5959f0', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:30:13 np0005548731 nova_compute[232433]: 2025-12-06 07:30:13.554 237736 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:30:13 np0005548731 nova_compute[232433]: 2025-12-06 07:30:13.562 237736 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:30:13 np0005548731 nova_compute[232433]: 2025-12-06 07:30:13.563 237736 DEBUG oslo.privsep.daemon [-] privsep: reply[99b83737-edd4-4473-a448-f43963d0620c]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:30:13 np0005548731 nova_compute[232433]: 2025-12-06 07:30:13.564 237736 DEBUG oslo.privsep.daemon [-] privsep: reply[1932b946-7900-4847-ba18-9a5ce6bf0610]: (4, 'ec2492e2-e0d1-43d3-b74f-744a8272a0e6') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:30:13 np0005548731 nova_compute[232433]: 2025-12-06 07:30:13.564 232437 DEBUG oslo_concurrency.processutils [None req-adb712c5-e0cf-4dc3-8c70-9016209a6f69 8e5d532f4af34acc9e3a7a08819bb089 1a5dd5f9ca5747618b386c87a40ede88 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:30:13 np0005548731 nova_compute[232433]: 2025-12-06 07:30:13.587 232437 DEBUG oslo_concurrency.processutils [None req-adb712c5-e0cf-4dc3-8c70-9016209a6f69 8e5d532f4af34acc9e3a7a08819bb089 1a5dd5f9ca5747618b386c87a40ede88 - - default default] CMD "nvme version" returned: 0 in 0.023s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:30:13 np0005548731 nova_compute[232433]: 2025-12-06 07:30:13.589 232437 DEBUG os_brick.initiator.connectors.lightos [None req-adb712c5-e0cf-4dc3-8c70-9016209a6f69 8e5d532f4af34acc9e3a7a08819bb089 1a5dd5f9ca5747618b386c87a40ede88 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Dec  6 02:30:13 np0005548731 nova_compute[232433]: 2025-12-06 07:30:13.590 232437 DEBUG os_brick.initiator.connectors.lightos [None req-adb712c5-e0cf-4dc3-8c70-9016209a6f69 8e5d532f4af34acc9e3a7a08819bb089 1a5dd5f9ca5747618b386c87a40ede88 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Dec  6 02:30:13 np0005548731 nova_compute[232433]: 2025-12-06 07:30:13.590 232437 DEBUG os_brick.initiator.connectors.lightos [None req-adb712c5-e0cf-4dc3-8c70-9016209a6f69 8e5d532f4af34acc9e3a7a08819bb089 1a5dd5f9ca5747618b386c87a40ede88 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:bf3e0a14-a5f8-4123-aa26-e7cad37b879a dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Dec  6 02:30:13 np0005548731 nova_compute[232433]: 2025-12-06 07:30:13.590 232437 DEBUG os_brick.utils [None req-adb712c5-e0cf-4dc3-8c70-9016209a6f69 8e5d532f4af34acc9e3a7a08819bb089 1a5dd5f9ca5747618b386c87a40ede88 - - default default] <== get_connector_properties: return (57ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.102', 'host': 'compute-2.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:63778d5959f0', 'do_local_attach': False, 'nvme_hostid': 'bf3e0a14-a5f8-4123-aa26-e7cad37b879a', 'system uuid': 'ec2492e2-e0d1-43d3-b74f-744a8272a0e6', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:bf3e0a14-a5f8-4123-aa26-e7cad37b879a', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Dec  6 02:30:13 np0005548731 nova_compute[232433]: 2025-12-06 07:30:13.591 232437 DEBUG nova.virt.block_device [None req-adb712c5-e0cf-4dc3-8c70-9016209a6f69 8e5d532f4af34acc9e3a7a08819bb089 1a5dd5f9ca5747618b386c87a40ede88 - - default default] [instance: 02b3a680-d1bd-462f-95dc-aa0934c7ceac] Updating existing volume attachment record: 14554c76-4c07-4f31-8b77-0531073660cd _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Dec  6 02:30:14 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:30:14 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:30:14 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:30:14.203 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:30:14 np0005548731 nova_compute[232433]: 2025-12-06 07:30:14.347 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:30:14 np0005548731 nova_compute[232433]: 2025-12-06 07:30:14.647 232437 DEBUG nova.network.neutron [None req-adb712c5-e0cf-4dc3-8c70-9016209a6f69 8e5d532f4af34acc9e3a7a08819bb089 1a5dd5f9ca5747618b386c87a40ede88 - - default default] [instance: 02b3a680-d1bd-462f-95dc-aa0934c7ceac] Successfully created port: 7aeffb1d-aa58-4c79-9768-bcd0a7dcce43 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec  6 02:30:15 np0005548731 nova_compute[232433]: 2025-12-06 07:30:15.070 232437 DEBUG nova.compute.manager [None req-adb712c5-e0cf-4dc3-8c70-9016209a6f69 8e5d532f4af34acc9e3a7a08819bb089 1a5dd5f9ca5747618b386c87a40ede88 - - default default] [instance: 02b3a680-d1bd-462f-95dc-aa0934c7ceac] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec  6 02:30:15 np0005548731 nova_compute[232433]: 2025-12-06 07:30:15.071 232437 DEBUG nova.virt.libvirt.driver [None req-adb712c5-e0cf-4dc3-8c70-9016209a6f69 8e5d532f4af34acc9e3a7a08819bb089 1a5dd5f9ca5747618b386c87a40ede88 - - default default] [instance: 02b3a680-d1bd-462f-95dc-aa0934c7ceac] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec  6 02:30:15 np0005548731 nova_compute[232433]: 2025-12-06 07:30:15.072 232437 INFO nova.virt.libvirt.driver [None req-adb712c5-e0cf-4dc3-8c70-9016209a6f69 8e5d532f4af34acc9e3a7a08819bb089 1a5dd5f9ca5747618b386c87a40ede88 - - default default] [instance: 02b3a680-d1bd-462f-95dc-aa0934c7ceac] Creating image(s)
Dec  6 02:30:15 np0005548731 nova_compute[232433]: 2025-12-06 07:30:15.072 232437 DEBUG nova.virt.libvirt.driver [None req-adb712c5-e0cf-4dc3-8c70-9016209a6f69 8e5d532f4af34acc9e3a7a08819bb089 1a5dd5f9ca5747618b386c87a40ede88 - - default default] [instance: 02b3a680-d1bd-462f-95dc-aa0934c7ceac] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859
Dec  6 02:30:15 np0005548731 nova_compute[232433]: 2025-12-06 07:30:15.072 232437 DEBUG nova.virt.libvirt.driver [None req-adb712c5-e0cf-4dc3-8c70-9016209a6f69 8e5d532f4af34acc9e3a7a08819bb089 1a5dd5f9ca5747618b386c87a40ede88 - - default default] [instance: 02b3a680-d1bd-462f-95dc-aa0934c7ceac] Ensure instance console log exists: /var/lib/nova/instances/02b3a680-d1bd-462f-95dc-aa0934c7ceac/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec  6 02:30:15 np0005548731 nova_compute[232433]: 2025-12-06 07:30:15.073 232437 DEBUG oslo_concurrency.lockutils [None req-adb712c5-e0cf-4dc3-8c70-9016209a6f69 8e5d532f4af34acc9e3a7a08819bb089 1a5dd5f9ca5747618b386c87a40ede88 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  6 02:30:15 np0005548731 nova_compute[232433]: 2025-12-06 07:30:15.073 232437 DEBUG oslo_concurrency.lockutils [None req-adb712c5-e0cf-4dc3-8c70-9016209a6f69 8e5d532f4af34acc9e3a7a08819bb089 1a5dd5f9ca5747618b386c87a40ede88 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  6 02:30:15 np0005548731 nova_compute[232433]: 2025-12-06 07:30:15.073 232437 DEBUG oslo_concurrency.lockutils [None req-adb712c5-e0cf-4dc3-8c70-9016209a6f69 8e5d532f4af34acc9e3a7a08819bb089 1a5dd5f9ca5747618b386c87a40ede88 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  6 02:30:15 np0005548731 nova_compute[232433]: 2025-12-06 07:30:15.391 232437 DEBUG nova.network.neutron [None req-adb712c5-e0cf-4dc3-8c70-9016209a6f69 8e5d532f4af34acc9e3a7a08819bb089 1a5dd5f9ca5747618b386c87a40ede88 - - default default] [instance: 02b3a680-d1bd-462f-95dc-aa0934c7ceac] Successfully created port: 691d36f1-88c4-4e29-b5a7-059d5b66aee3 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec  6 02:30:15 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:30:15 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:30:15 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:30:15.409 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:30:15 np0005548731 nova_compute[232433]: 2025-12-06 07:30:15.533 232437 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765006200.5317428, 714e9301-631d-44ba-8f91-a1d65209745b => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec  6 02:30:15 np0005548731 nova_compute[232433]: 2025-12-06 07:30:15.533 232437 INFO nova.compute.manager [-] [instance: 714e9301-631d-44ba-8f91-a1d65209745b] VM Stopped (Lifecycle Event)
Dec  6 02:30:15 np0005548731 nova_compute[232433]: 2025-12-06 07:30:15.551 232437 DEBUG nova.compute.manager [None req-e0ea2781-1c0f-4f5d-a3ca-6e9c1aa87910 - - - - - -] [instance: 714e9301-631d-44ba-8f91-a1d65209745b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec  6 02:30:15 np0005548731 podman[279571]: 2025-12-06 07:30:15.89022496 +0000 UTC m=+0.052316197 container health_status 26c2ce9bdb83c6db5ccad7f5b2a7cb4e9354ab0235ad2f942ea42c8feda2142b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Dec  6 02:30:15 np0005548731 podman[279573]: 2025-12-06 07:30:15.897214571 +0000 UTC m=+0.056223723 container health_status ae4fd08cdec31d6ae2a6b91ffff7deb6ca5a8375cb52ee4b685cc6f25796824e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, container_name=multipathd, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Dec  6 02:30:15 np0005548731 podman[279572]: 2025-12-06 07:30:15.946162556 +0000 UTC m=+0.107828933 container health_status a118c3becf56cfbcae208065771ebf10a65162bb7b96389971293e534823fb78 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  6 02:30:16 np0005548731 nova_compute[232433]: 2025-12-06 07:30:16.047 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 02:30:16 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:30:16 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:30:16 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:30:16.208 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:30:16 np0005548731 nova_compute[232433]: 2025-12-06 07:30:16.438 232437 DEBUG nova.network.neutron [None req-adb712c5-e0cf-4dc3-8c70-9016209a6f69 8e5d532f4af34acc9e3a7a08819bb089 1a5dd5f9ca5747618b386c87a40ede88 - - default default] [instance: 02b3a680-d1bd-462f-95dc-aa0934c7ceac] Successfully created port: a935a2a5-757d-49f8-9924-1f5f12e03325 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec  6 02:30:17 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:30:17 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:30:17 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:30:17.412 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:30:17 np0005548731 nova_compute[232433]: 2025-12-06 07:30:17.583 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 02:30:17 np0005548731 nova_compute[232433]: 2025-12-06 07:30:17.829 232437 DEBUG nova.network.neutron [None req-adb712c5-e0cf-4dc3-8c70-9016209a6f69 8e5d532f4af34acc9e3a7a08819bb089 1a5dd5f9ca5747618b386c87a40ede88 - - default default] [instance: 02b3a680-d1bd-462f-95dc-aa0934c7ceac] Successfully updated port: 70cec09c-3d65-4e69-8c35-7d82e75f4add _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec  6 02:30:18 np0005548731 nova_compute[232433]: 2025-12-06 07:30:18.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  6 02:30:18 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:30:18 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:30:18 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:30:18.211 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:30:18 np0005548731 nova_compute[232433]: 2025-12-06 07:30:18.371 232437 DEBUG nova.compute.manager [req-a32b1fc1-6f71-4cd9-b2dd-e6bae507167b req-c516dbb3-fed6-4fae-8939-e638173d3fc3 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 02b3a680-d1bd-462f-95dc-aa0934c7ceac] Received event network-changed-70cec09c-3d65-4e69-8c35-7d82e75f4add external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec  6 02:30:18 np0005548731 nova_compute[232433]: 2025-12-06 07:30:18.371 232437 DEBUG nova.compute.manager [req-a32b1fc1-6f71-4cd9-b2dd-e6bae507167b req-c516dbb3-fed6-4fae-8939-e638173d3fc3 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 02b3a680-d1bd-462f-95dc-aa0934c7ceac] Refreshing instance network info cache due to event network-changed-70cec09c-3d65-4e69-8c35-7d82e75f4add. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec  6 02:30:18 np0005548731 nova_compute[232433]: 2025-12-06 07:30:18.371 232437 DEBUG oslo_concurrency.lockutils [req-a32b1fc1-6f71-4cd9-b2dd-e6bae507167b req-c516dbb3-fed6-4fae-8939-e638173d3fc3 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-02b3a680-d1bd-462f-95dc-aa0934c7ceac" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec  6 02:30:18 np0005548731 nova_compute[232433]: 2025-12-06 07:30:18.371 232437 DEBUG oslo_concurrency.lockutils [req-a32b1fc1-6f71-4cd9-b2dd-e6bae507167b req-c516dbb3-fed6-4fae-8939-e638173d3fc3 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-02b3a680-d1bd-462f-95dc-aa0934c7ceac" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec  6 02:30:18 np0005548731 nova_compute[232433]: 2025-12-06 07:30:18.372 232437 DEBUG nova.network.neutron [req-a32b1fc1-6f71-4cd9-b2dd-e6bae507167b req-c516dbb3-fed6-4fae-8939-e638173d3fc3 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 02b3a680-d1bd-462f-95dc-aa0934c7ceac] Refreshing network info cache for port 70cec09c-3d65-4e69-8c35-7d82e75f4add _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec  6 02:30:18 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:30:18 np0005548731 nova_compute[232433]: 2025-12-06 07:30:18.565 232437 DEBUG nova.network.neutron [req-a32b1fc1-6f71-4cd9-b2dd-e6bae507167b req-c516dbb3-fed6-4fae-8939-e638173d3fc3 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 02b3a680-d1bd-462f-95dc-aa0934c7ceac] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec  6 02:30:18 np0005548731 nova_compute[232433]: 2025-12-06 07:30:18.966 232437 DEBUG nova.network.neutron [req-a32b1fc1-6f71-4cd9-b2dd-e6bae507167b req-c516dbb3-fed6-4fae-8939-e638173d3fc3 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 02b3a680-d1bd-462f-95dc-aa0934c7ceac] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec  6 02:30:18 np0005548731 nova_compute[232433]: 2025-12-06 07:30:18.985 232437 DEBUG oslo_concurrency.lockutils [req-a32b1fc1-6f71-4cd9-b2dd-e6bae507167b req-c516dbb3-fed6-4fae-8939-e638173d3fc3 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-02b3a680-d1bd-462f-95dc-aa0934c7ceac" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec  6 02:30:19 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:30:19 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:30:19 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:30:19.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:30:19 np0005548731 nova_compute[232433]: 2025-12-06 07:30:19.905 232437 DEBUG nova.network.neutron [None req-adb712c5-e0cf-4dc3-8c70-9016209a6f69 8e5d532f4af34acc9e3a7a08819bb089 1a5dd5f9ca5747618b386c87a40ede88 - - default default] [instance: 02b3a680-d1bd-462f-95dc-aa0934c7ceac] Successfully updated port: 99fbd66e-87ac-42e3-8722-34d617ef9bb1 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec  6 02:30:20 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:30:20 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:30:20 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:30:20.215 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:30:20 np0005548731 nova_compute[232433]: 2025-12-06 07:30:20.478 232437 DEBUG nova.compute.manager [req-9f015eb3-09f4-4632-96e5-86df041ff142 req-d15abd06-e88c-489e-895c-e0b93b445459 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 02b3a680-d1bd-462f-95dc-aa0934c7ceac] Received event network-changed-99fbd66e-87ac-42e3-8722-34d617ef9bb1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec  6 02:30:20 np0005548731 nova_compute[232433]: 2025-12-06 07:30:20.479 232437 DEBUG nova.compute.manager [req-9f015eb3-09f4-4632-96e5-86df041ff142 req-d15abd06-e88c-489e-895c-e0b93b445459 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 02b3a680-d1bd-462f-95dc-aa0934c7ceac] Refreshing instance network info cache due to event network-changed-99fbd66e-87ac-42e3-8722-34d617ef9bb1. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec  6 02:30:20 np0005548731 nova_compute[232433]: 2025-12-06 07:30:20.479 232437 DEBUG oslo_concurrency.lockutils [req-9f015eb3-09f4-4632-96e5-86df041ff142 req-d15abd06-e88c-489e-895c-e0b93b445459 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-02b3a680-d1bd-462f-95dc-aa0934c7ceac" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec  6 02:30:20 np0005548731 nova_compute[232433]: 2025-12-06 07:30:20.479 232437 DEBUG oslo_concurrency.lockutils [req-9f015eb3-09f4-4632-96e5-86df041ff142 req-d15abd06-e88c-489e-895c-e0b93b445459 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-02b3a680-d1bd-462f-95dc-aa0934c7ceac" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec  6 02:30:20 np0005548731 nova_compute[232433]: 2025-12-06 07:30:20.480 232437 DEBUG nova.network.neutron [req-9f015eb3-09f4-4632-96e5-86df041ff142 req-d15abd06-e88c-489e-895c-e0b93b445459 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 02b3a680-d1bd-462f-95dc-aa0934c7ceac] Refreshing network info cache for port 99fbd66e-87ac-42e3-8722-34d617ef9bb1 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec  6 02:30:20 np0005548731 nova_compute[232433]: 2025-12-06 07:30:20.763 232437 DEBUG nova.network.neutron [req-9f015eb3-09f4-4632-96e5-86df041ff142 req-d15abd06-e88c-489e-895c-e0b93b445459 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 02b3a680-d1bd-462f-95dc-aa0934c7ceac] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec  6 02:30:20 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:30:20.857 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=9f96b960-b4f2-40bd-ae99-08121f5e8b78, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '44'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec  6 02:30:20 np0005548731 nova_compute[232433]: 2025-12-06 07:30:20.928 232437 DEBUG nova.network.neutron [None req-adb712c5-e0cf-4dc3-8c70-9016209a6f69 8e5d532f4af34acc9e3a7a08819bb089 1a5dd5f9ca5747618b386c87a40ede88 - - default default] [instance: 02b3a680-d1bd-462f-95dc-aa0934c7ceac] Successfully updated port: 7989285b-96a5-43e8-b98e-c0fe1badbf5e _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec  6 02:30:21 np0005548731 nova_compute[232433]: 2025-12-06 07:30:21.050 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 02:30:21 np0005548731 nova_compute[232433]: 2025-12-06 07:30:21.337 232437 DEBUG nova.network.neutron [req-9f015eb3-09f4-4632-96e5-86df041ff142 req-d15abd06-e88c-489e-895c-e0b93b445459 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 02b3a680-d1bd-462f-95dc-aa0934c7ceac] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec  6 02:30:21 np0005548731 nova_compute[232433]: 2025-12-06 07:30:21.353 232437 DEBUG oslo_concurrency.lockutils [req-9f015eb3-09f4-4632-96e5-86df041ff142 req-d15abd06-e88c-489e-895c-e0b93b445459 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-02b3a680-d1bd-462f-95dc-aa0934c7ceac" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec  6 02:30:21 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:30:21 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:30:21 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:30:21.416 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:30:21 np0005548731 nova_compute[232433]: 2025-12-06 07:30:21.760 232437 DEBUG nova.network.neutron [None req-adb712c5-e0cf-4dc3-8c70-9016209a6f69 8e5d532f4af34acc9e3a7a08819bb089 1a5dd5f9ca5747618b386c87a40ede88 - - default default] [instance: 02b3a680-d1bd-462f-95dc-aa0934c7ceac] Successfully updated port: ee2de0a7-a321-4f41-868f-457ea7275ded _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec  6 02:30:22 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:30:22 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:30:22 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:30:22.218 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:30:22 np0005548731 nova_compute[232433]: 2025-12-06 07:30:22.584 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 02:30:22 np0005548731 nova_compute[232433]: 2025-12-06 07:30:22.614 232437 DEBUG nova.compute.manager [req-8f4ea02c-52df-4f35-b99d-cce4bbd51d95 req-d5255183-ce97-4fb4-a049-78f4d0013242 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 02b3a680-d1bd-462f-95dc-aa0934c7ceac] Received event network-changed-7989285b-96a5-43e8-b98e-c0fe1badbf5e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec  6 02:30:22 np0005548731 nova_compute[232433]: 2025-12-06 07:30:22.614 232437 DEBUG nova.compute.manager [req-8f4ea02c-52df-4f35-b99d-cce4bbd51d95 req-d5255183-ce97-4fb4-a049-78f4d0013242 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 02b3a680-d1bd-462f-95dc-aa0934c7ceac] Refreshing instance network info cache due to event network-changed-7989285b-96a5-43e8-b98e-c0fe1badbf5e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec  6 02:30:22 np0005548731 nova_compute[232433]: 2025-12-06 07:30:22.614 232437 DEBUG oslo_concurrency.lockutils [req-8f4ea02c-52df-4f35-b99d-cce4bbd51d95 req-d5255183-ce97-4fb4-a049-78f4d0013242 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-02b3a680-d1bd-462f-95dc-aa0934c7ceac" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec  6 02:30:22 np0005548731 nova_compute[232433]: 2025-12-06 07:30:22.615 232437 DEBUG oslo_concurrency.lockutils [req-8f4ea02c-52df-4f35-b99d-cce4bbd51d95 req-d5255183-ce97-4fb4-a049-78f4d0013242 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-02b3a680-d1bd-462f-95dc-aa0934c7ceac" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec  6 02:30:22 np0005548731 nova_compute[232433]: 2025-12-06 07:30:22.615 232437 DEBUG nova.network.neutron [req-8f4ea02c-52df-4f35-b99d-cce4bbd51d95 req-d5255183-ce97-4fb4-a049-78f4d0013242 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 02b3a680-d1bd-462f-95dc-aa0934c7ceac] Refreshing network info cache for port 7989285b-96a5-43e8-b98e-c0fe1badbf5e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec  6 02:30:22 np0005548731 nova_compute[232433]: 2025-12-06 07:30:22.804 232437 DEBUG nova.network.neutron [req-8f4ea02c-52df-4f35-b99d-cce4bbd51d95 req-d5255183-ce97-4fb4-a049-78f4d0013242 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 02b3a680-d1bd-462f-95dc-aa0934c7ceac] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec  6 02:30:23 np0005548731 nova_compute[232433]: 2025-12-06 07:30:23.177 232437 DEBUG nova.network.neutron [None req-adb712c5-e0cf-4dc3-8c70-9016209a6f69 8e5d532f4af34acc9e3a7a08819bb089 1a5dd5f9ca5747618b386c87a40ede88 - - default default] [instance: 02b3a680-d1bd-462f-95dc-aa0934c7ceac] Successfully updated port: 7aeffb1d-aa58-4c79-9768-bcd0a7dcce43 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec  6 02:30:23 np0005548731 nova_compute[232433]: 2025-12-06 07:30:23.200 232437 DEBUG nova.network.neutron [req-8f4ea02c-52df-4f35-b99d-cce4bbd51d95 req-d5255183-ce97-4fb4-a049-78f4d0013242 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 02b3a680-d1bd-462f-95dc-aa0934c7ceac] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec  6 02:30:23 np0005548731 nova_compute[232433]: 2025-12-06 07:30:23.245 232437 DEBUG oslo_concurrency.lockutils [req-8f4ea02c-52df-4f35-b99d-cce4bbd51d95 req-d5255183-ce97-4fb4-a049-78f4d0013242 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-02b3a680-d1bd-462f-95dc-aa0934c7ceac" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec  6 02:30:23 np0005548731 nova_compute[232433]: 2025-12-06 07:30:23.245 232437 DEBUG nova.compute.manager [req-8f4ea02c-52df-4f35-b99d-cce4bbd51d95 req-d5255183-ce97-4fb4-a049-78f4d0013242 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 02b3a680-d1bd-462f-95dc-aa0934c7ceac] Received event network-changed-ee2de0a7-a321-4f41-868f-457ea7275ded external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec  6 02:30:23 np0005548731 nova_compute[232433]: 2025-12-06 07:30:23.246 232437 DEBUG nova.compute.manager [req-8f4ea02c-52df-4f35-b99d-cce4bbd51d95 req-d5255183-ce97-4fb4-a049-78f4d0013242 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 02b3a680-d1bd-462f-95dc-aa0934c7ceac] Refreshing instance network info cache due to event network-changed-ee2de0a7-a321-4f41-868f-457ea7275ded. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec  6 02:30:23 np0005548731 nova_compute[232433]: 2025-12-06 07:30:23.246 232437 DEBUG oslo_concurrency.lockutils [req-8f4ea02c-52df-4f35-b99d-cce4bbd51d95 req-d5255183-ce97-4fb4-a049-78f4d0013242 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-02b3a680-d1bd-462f-95dc-aa0934c7ceac" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec  6 02:30:23 np0005548731 nova_compute[232433]: 2025-12-06 07:30:23.246 232437 DEBUG oslo_concurrency.lockutils [req-8f4ea02c-52df-4f35-b99d-cce4bbd51d95 req-d5255183-ce97-4fb4-a049-78f4d0013242 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-02b3a680-d1bd-462f-95dc-aa0934c7ceac" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec  6 02:30:23 np0005548731 nova_compute[232433]: 2025-12-06 07:30:23.246 232437 DEBUG nova.network.neutron [req-8f4ea02c-52df-4f35-b99d-cce4bbd51d95 req-d5255183-ce97-4fb4-a049-78f4d0013242 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 02b3a680-d1bd-462f-95dc-aa0934c7ceac] Refreshing network info cache for port ee2de0a7-a321-4f41-868f-457ea7275ded _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec  6 02:30:23 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:30:23 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:30:23 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:30:23 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:30:23.418 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:30:23 np0005548731 nova_compute[232433]: 2025-12-06 07:30:23.520 232437 DEBUG nova.network.neutron [req-8f4ea02c-52df-4f35-b99d-cce4bbd51d95 req-d5255183-ce97-4fb4-a049-78f4d0013242 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 02b3a680-d1bd-462f-95dc-aa0934c7ceac] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec  6 02:30:24 np0005548731 nova_compute[232433]: 2025-12-06 07:30:24.092 232437 DEBUG nova.network.neutron [None req-adb712c5-e0cf-4dc3-8c70-9016209a6f69 8e5d532f4af34acc9e3a7a08819bb089 1a5dd5f9ca5747618b386c87a40ede88 - - default default] [instance: 02b3a680-d1bd-462f-95dc-aa0934c7ceac] Successfully updated port: 691d36f1-88c4-4e29-b5a7-059d5b66aee3 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec  6 02:30:24 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:30:24 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:30:24 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:30:24.221 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:30:24 np0005548731 nova_compute[232433]: 2025-12-06 07:30:24.328 232437 DEBUG nova.network.neutron [req-8f4ea02c-52df-4f35-b99d-cce4bbd51d95 req-d5255183-ce97-4fb4-a049-78f4d0013242 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 02b3a680-d1bd-462f-95dc-aa0934c7ceac] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 02:30:24 np0005548731 nova_compute[232433]: 2025-12-06 07:30:24.347 232437 DEBUG oslo_concurrency.lockutils [req-8f4ea02c-52df-4f35-b99d-cce4bbd51d95 req-d5255183-ce97-4fb4-a049-78f4d0013242 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-02b3a680-d1bd-462f-95dc-aa0934c7ceac" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 02:30:24 np0005548731 nova_compute[232433]: 2025-12-06 07:30:24.739 232437 DEBUG nova.compute.manager [req-0f58004f-54fa-470f-88ce-ce5335995cef req-c812a80e-a35d-4aab-b558-b99f541f09be 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 02b3a680-d1bd-462f-95dc-aa0934c7ceac] Received event network-changed-7aeffb1d-aa58-4c79-9768-bcd0a7dcce43 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:30:24 np0005548731 nova_compute[232433]: 2025-12-06 07:30:24.740 232437 DEBUG nova.compute.manager [req-0f58004f-54fa-470f-88ce-ce5335995cef req-c812a80e-a35d-4aab-b558-b99f541f09be 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 02b3a680-d1bd-462f-95dc-aa0934c7ceac] Refreshing instance network info cache due to event network-changed-7aeffb1d-aa58-4c79-9768-bcd0a7dcce43. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  6 02:30:24 np0005548731 nova_compute[232433]: 2025-12-06 07:30:24.740 232437 DEBUG oslo_concurrency.lockutils [req-0f58004f-54fa-470f-88ce-ce5335995cef req-c812a80e-a35d-4aab-b558-b99f541f09be 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-02b3a680-d1bd-462f-95dc-aa0934c7ceac" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 02:30:24 np0005548731 nova_compute[232433]: 2025-12-06 07:30:24.740 232437 DEBUG oslo_concurrency.lockutils [req-0f58004f-54fa-470f-88ce-ce5335995cef req-c812a80e-a35d-4aab-b558-b99f541f09be 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-02b3a680-d1bd-462f-95dc-aa0934c7ceac" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 02:30:24 np0005548731 nova_compute[232433]: 2025-12-06 07:30:24.741 232437 DEBUG nova.network.neutron [req-0f58004f-54fa-470f-88ce-ce5335995cef req-c812a80e-a35d-4aab-b558-b99f541f09be 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 02b3a680-d1bd-462f-95dc-aa0934c7ceac] Refreshing network info cache for port 7aeffb1d-aa58-4c79-9768-bcd0a7dcce43 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  6 02:30:24 np0005548731 nova_compute[232433]: 2025-12-06 07:30:24.957 232437 DEBUG nova.network.neutron [None req-adb712c5-e0cf-4dc3-8c70-9016209a6f69 8e5d532f4af34acc9e3a7a08819bb089 1a5dd5f9ca5747618b386c87a40ede88 - - default default] [instance: 02b3a680-d1bd-462f-95dc-aa0934c7ceac] Successfully updated port: a935a2a5-757d-49f8-9924-1f5f12e03325 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Dec  6 02:30:24 np0005548731 nova_compute[232433]: 2025-12-06 07:30:24.982 232437 DEBUG oslo_concurrency.lockutils [None req-adb712c5-e0cf-4dc3-8c70-9016209a6f69 8e5d532f4af34acc9e3a7a08819bb089 1a5dd5f9ca5747618b386c87a40ede88 - - default default] Acquiring lock "refresh_cache-02b3a680-d1bd-462f-95dc-aa0934c7ceac" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 02:30:25 np0005548731 nova_compute[232433]: 2025-12-06 07:30:25.094 232437 DEBUG nova.network.neutron [req-0f58004f-54fa-470f-88ce-ce5335995cef req-c812a80e-a35d-4aab-b558-b99f541f09be 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 02b3a680-d1bd-462f-95dc-aa0934c7ceac] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec  6 02:30:25 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:30:25 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:30:25 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:30:25.420 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:30:25 np0005548731 nova_compute[232433]: 2025-12-06 07:30:25.780 232437 DEBUG nova.network.neutron [req-0f58004f-54fa-470f-88ce-ce5335995cef req-c812a80e-a35d-4aab-b558-b99f541f09be 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 02b3a680-d1bd-462f-95dc-aa0934c7ceac] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 02:30:25 np0005548731 nova_compute[232433]: 2025-12-06 07:30:25.812 232437 DEBUG oslo_concurrency.lockutils [req-0f58004f-54fa-470f-88ce-ce5335995cef req-c812a80e-a35d-4aab-b558-b99f541f09be 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-02b3a680-d1bd-462f-95dc-aa0934c7ceac" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 02:30:25 np0005548731 nova_compute[232433]: 2025-12-06 07:30:25.812 232437 DEBUG oslo_concurrency.lockutils [None req-adb712c5-e0cf-4dc3-8c70-9016209a6f69 8e5d532f4af34acc9e3a7a08819bb089 1a5dd5f9ca5747618b386c87a40ede88 - - default default] Acquired lock "refresh_cache-02b3a680-d1bd-462f-95dc-aa0934c7ceac" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 02:30:25 np0005548731 nova_compute[232433]: 2025-12-06 07:30:25.813 232437 DEBUG nova.network.neutron [None req-adb712c5-e0cf-4dc3-8c70-9016209a6f69 8e5d532f4af34acc9e3a7a08819bb089 1a5dd5f9ca5747618b386c87a40ede88 - - default default] [instance: 02b3a680-d1bd-462f-95dc-aa0934c7ceac] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec  6 02:30:26 np0005548731 nova_compute[232433]: 2025-12-06 07:30:26.090 232437 DEBUG nova.compute.manager [req-e9d674a0-457a-44ac-953e-747eb1952b69 req-24f7bfea-baa3-4c8e-a401-fec4d7665b31 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 02b3a680-d1bd-462f-95dc-aa0934c7ceac] Received event network-changed-691d36f1-88c4-4e29-b5a7-059d5b66aee3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:30:26 np0005548731 nova_compute[232433]: 2025-12-06 07:30:26.091 232437 DEBUG nova.compute.manager [req-e9d674a0-457a-44ac-953e-747eb1952b69 req-24f7bfea-baa3-4c8e-a401-fec4d7665b31 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 02b3a680-d1bd-462f-95dc-aa0934c7ceac] Refreshing instance network info cache due to event network-changed-691d36f1-88c4-4e29-b5a7-059d5b66aee3. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  6 02:30:26 np0005548731 nova_compute[232433]: 2025-12-06 07:30:26.091 232437 DEBUG oslo_concurrency.lockutils [req-e9d674a0-457a-44ac-953e-747eb1952b69 req-24f7bfea-baa3-4c8e-a401-fec4d7665b31 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-02b3a680-d1bd-462f-95dc-aa0934c7ceac" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 02:30:26 np0005548731 nova_compute[232433]: 2025-12-06 07:30:26.092 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:30:26 np0005548731 nova_compute[232433]: 2025-12-06 07:30:26.095 232437 DEBUG nova.network.neutron [None req-adb712c5-e0cf-4dc3-8c70-9016209a6f69 8e5d532f4af34acc9e3a7a08819bb089 1a5dd5f9ca5747618b386c87a40ede88 - - default default] [instance: 02b3a680-d1bd-462f-95dc-aa0934c7ceac] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec  6 02:30:26 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:30:26 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:30:26 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:30:26.224 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:30:27 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:30:27 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:30:27 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:30:27.421 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:30:27 np0005548731 nova_compute[232433]: 2025-12-06 07:30:27.587 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:30:28 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:30:28 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:30:28 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:30:28.226 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:30:28 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:30:29 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:30:29 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:30:29 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:30:29.424 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:30:30 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:30:30 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:30:30 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:30:30.230 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:30:31 np0005548731 ceph-osd[80147]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec  6 02:30:31 np0005548731 ceph-osd[80147]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 3602.4 total, 600.0 interval#012Cumulative writes: 40K writes, 168K keys, 40K commit groups, 1.0 writes per commit group, ingest: 0.17 GB, 0.05 MB/s#012Cumulative WAL: 40K writes, 14K syncs, 2.85 writes per sync, written: 0.17 GB, 0.05 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 9515 writes, 38K keys, 9515 commit groups, 1.0 writes per commit group, ingest: 41.59 MB, 0.07 MB/s#012Interval WAL: 9515 writes, 3582 syncs, 2.66 writes per sync, written: 0.04 GB, 0.07 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec  6 02:30:31 np0005548731 nova_compute[232433]: 2025-12-06 07:30:31.095 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:30:31 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:30:31 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:30:31 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:30:31.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:30:32 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:30:32 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:30:32 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:30:32.234 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:30:32 np0005548731 nova_compute[232433]: 2025-12-06 07:30:32.589 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:30:33 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:30:33 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:30:33 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:30:33 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:30:33.428 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:30:34 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:30:34 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:30:34 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:30:34.238 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:30:35 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:30:35 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:30:35 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:30:35.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:30:36 np0005548731 nova_compute[232433]: 2025-12-06 07:30:36.098 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:30:36 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:30:36 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:30:36 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:30:36.242 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:30:37 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:30:37 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:30:37 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:30:37.432 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:30:37 np0005548731 nova_compute[232433]: 2025-12-06 07:30:37.590 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:30:38 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:30:38 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:30:38 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:30:38.245 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:30:38 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:30:39 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:30:39 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:30:39 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:30:39.433 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:30:39 np0005548731 nova_compute[232433]: 2025-12-06 07:30:39.591 232437 DEBUG nova.network.neutron [None req-adb712c5-e0cf-4dc3-8c70-9016209a6f69 8e5d532f4af34acc9e3a7a08819bb089 1a5dd5f9ca5747618b386c87a40ede88 - - default default] [instance: 02b3a680-d1bd-462f-95dc-aa0934c7ceac] Updating instance_info_cache with network_info: [{"id": "70cec09c-3d65-4e69-8c35-7d82e75f4add", "address": "fa:16:3e:89:a4:68", "network": {"id": "15e538c5-1144-486f-bc43-86871ca19e5e", "bridge": "br-int", "label": "tempest-TaggedBootDevicesTest_v242-1912448327-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a5dd5f9ca5747618b386c87a40ede88", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap70cec09c-3d", "ovs_interfaceid": "70cec09c-3d65-4e69-8c35-7d82e75f4add", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "99fbd66e-87ac-42e3-8722-34d617ef9bb1", "address": "fa:16:3e:c3:05:bf", "network": {"id": "95ae8bf7-e7e5-4bbf-a4f2-58f1e2b487a5", "bridge": "br-int", "label": "tempest-device-tagging-net1-103242350", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.137", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a5dd5f9ca5747618b386c87a40ede88", "mtu": 1442, 
"physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap99fbd66e-87", "ovs_interfaceid": "99fbd66e-87ac-42e3-8722-34d617ef9bb1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}, {"id": "7989285b-96a5-43e8-b98e-c0fe1badbf5e", "address": "fa:16:3e:e2:25:4d", "network": {"id": "95ae8bf7-e7e5-4bbf-a4f2-58f1e2b487a5", "bridge": "br-int", "label": "tempest-device-tagging-net1-103242350", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.231", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a5dd5f9ca5747618b386c87a40ede88", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7989285b-96", "ovs_interfaceid": "7989285b-96a5-43e8-b98e-c0fe1badbf5e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}, {"id": "ee2de0a7-a321-4f41-868f-457ea7275ded", "address": "fa:16:3e:62:49:cc", "network": {"id": "95ae8bf7-e7e5-4bbf-a4f2-58f1e2b487a5", "bridge": "br-int", "label": "tempest-device-tagging-net1-103242350", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.249", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "1a5dd5f9ca5747618b386c87a40ede88", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapee2de0a7-a3", "ovs_interfaceid": "ee2de0a7-a321-4f41-868f-457ea7275ded", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "7aeffb1d-aa58-4c79-9768-bcd0a7dcce43", "address": "fa:16:3e:45:a8:03", "network": {"id": "95ae8bf7-e7e5-4bbf-a4f2-58f1e2b487a5", "bridge": "br-int", "label": "tempest-device-tagging-net1-103242350", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.213", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a5dd5f9ca5747618b386c87a40ede88", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7aeffb1d-aa", "ovs_interfaceid": "7aeffb1d-aa58-4c79-9768-bcd0a7dcce43", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "691d36f1-88c4-4e29-b5a7-059d5b66aee3", "address": "fa:16:3e:b2:aa:a0", "network": {"id": "8f455f90-843e-4bc7-94c8-f96455201770", "bridge": "br-int", "label": "tempest-device-tagging-net2-1809570820", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.2.2.100", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": 
[], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a5dd5f9ca5747618b386c87a40ede88", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap691d36f1-88", "ovs_interfaceid": "691d36f1-88c4-4e29-b5a7-059d5b66aee3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "a935a2a5-757d-49f8-9924-1f5f12e03325", "address": "fa:16:3e:c4:17:f5", "network": {"id": "8f455f90-843e-4bc7-94c8-f96455201770", "bridge": "br-int", "label": "tempest-device-tagging-net2-1809570820", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.2.2.200", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a5dd5f9ca5747618b386c87a40ede88", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa935a2a5-75", "ovs_interfaceid": "a935a2a5-757d-49f8-9924-1f5f12e03325", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 02:30:39 np0005548731 nova_compute[232433]: 2025-12-06 07:30:39.627 232437 DEBUG oslo_concurrency.lockutils [None req-adb712c5-e0cf-4dc3-8c70-9016209a6f69 8e5d532f4af34acc9e3a7a08819bb089 1a5dd5f9ca5747618b386c87a40ede88 - - default default] Releasing lock "refresh_cache-02b3a680-d1bd-462f-95dc-aa0934c7ceac" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 02:30:39 np0005548731 nova_compute[232433]: 2025-12-06 07:30:39.627 232437 DEBUG nova.compute.manager [None req-adb712c5-e0cf-4dc3-8c70-9016209a6f69 8e5d532f4af34acc9e3a7a08819bb089 1a5dd5f9ca5747618b386c87a40ede88 - - default default] [instance: 02b3a680-d1bd-462f-95dc-aa0934c7ceac] Instance network_info: |[{"id": "70cec09c-3d65-4e69-8c35-7d82e75f4add", "address": "fa:16:3e:89:a4:68", "network": {"id": "15e538c5-1144-486f-bc43-86871ca19e5e", "bridge": "br-int", "label": "tempest-TaggedBootDevicesTest_v242-1912448327-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a5dd5f9ca5747618b386c87a40ede88", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap70cec09c-3d", "ovs_interfaceid": "70cec09c-3d65-4e69-8c35-7d82e75f4add", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "99fbd66e-87ac-42e3-8722-34d617ef9bb1", "address": "fa:16:3e:c3:05:bf", "network": {"id": "95ae8bf7-e7e5-4bbf-a4f2-58f1e2b487a5", "bridge": "br-int", "label": "tempest-device-tagging-net1-103242350", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.137", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a5dd5f9ca5747618b386c87a40ede88", "mtu": 1442, "physical_network": null, "tunneled": 
true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap99fbd66e-87", "ovs_interfaceid": "99fbd66e-87ac-42e3-8722-34d617ef9bb1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}, {"id": "7989285b-96a5-43e8-b98e-c0fe1badbf5e", "address": "fa:16:3e:e2:25:4d", "network": {"id": "95ae8bf7-e7e5-4bbf-a4f2-58f1e2b487a5", "bridge": "br-int", "label": "tempest-device-tagging-net1-103242350", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.231", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a5dd5f9ca5747618b386c87a40ede88", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7989285b-96", "ovs_interfaceid": "7989285b-96a5-43e8-b98e-c0fe1badbf5e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}, {"id": "ee2de0a7-a321-4f41-868f-457ea7275ded", "address": "fa:16:3e:62:49:cc", "network": {"id": "95ae8bf7-e7e5-4bbf-a4f2-58f1e2b487a5", "bridge": "br-int", "label": "tempest-device-tagging-net1-103242350", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.249", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": 
"1a5dd5f9ca5747618b386c87a40ede88", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapee2de0a7-a3", "ovs_interfaceid": "ee2de0a7-a321-4f41-868f-457ea7275ded", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "7aeffb1d-aa58-4c79-9768-bcd0a7dcce43", "address": "fa:16:3e:45:a8:03", "network": {"id": "95ae8bf7-e7e5-4bbf-a4f2-58f1e2b487a5", "bridge": "br-int", "label": "tempest-device-tagging-net1-103242350", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.213", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a5dd5f9ca5747618b386c87a40ede88", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7aeffb1d-aa", "ovs_interfaceid": "7aeffb1d-aa58-4c79-9768-bcd0a7dcce43", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "691d36f1-88c4-4e29-b5a7-059d5b66aee3", "address": "fa:16:3e:b2:aa:a0", "network": {"id": "8f455f90-843e-4bc7-94c8-f96455201770", "bridge": "br-int", "label": "tempest-device-tagging-net2-1809570820", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.2.2.100", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, 
"meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a5dd5f9ca5747618b386c87a40ede88", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap691d36f1-88", "ovs_interfaceid": "691d36f1-88c4-4e29-b5a7-059d5b66aee3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "a935a2a5-757d-49f8-9924-1f5f12e03325", "address": "fa:16:3e:c4:17:f5", "network": {"id": "8f455f90-843e-4bc7-94c8-f96455201770", "bridge": "br-int", "label": "tempest-device-tagging-net2-1809570820", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.2.2.200", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a5dd5f9ca5747618b386c87a40ede88", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa935a2a5-75", "ovs_interfaceid": "a935a2a5-757d-49f8-9924-1f5f12e03325", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Dec  6 02:30:39 np0005548731 nova_compute[232433]: 2025-12-06 07:30:39.628 232437 DEBUG oslo_concurrency.lockutils [req-e9d674a0-457a-44ac-953e-747eb1952b69 req-24f7bfea-baa3-4c8e-a401-fec4d7665b31 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-02b3a680-d1bd-462f-95dc-aa0934c7ceac" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 02:30:39 np0005548731 nova_compute[232433]: 2025-12-06 07:30:39.628 232437 DEBUG nova.network.neutron [req-e9d674a0-457a-44ac-953e-747eb1952b69 req-24f7bfea-baa3-4c8e-a401-fec4d7665b31 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 02b3a680-d1bd-462f-95dc-aa0934c7ceac] Refreshing network info cache for port 691d36f1-88c4-4e29-b5a7-059d5b66aee3 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  6 02:30:39 np0005548731 nova_compute[232433]: 2025-12-06 07:30:39.635 232437 DEBUG nova.virt.libvirt.driver [None req-adb712c5-e0cf-4dc3-8c70-9016209a6f69 8e5d532f4af34acc9e3a7a08819bb089 1a5dd5f9ca5747618b386c87a40ede88 - - default default] [instance: 02b3a680-d1bd-462f-95dc-aa0934c7ceac] Start _get_guest_xml network_info=[{"id": "70cec09c-3d65-4e69-8c35-7d82e75f4add", "address": "fa:16:3e:89:a4:68", "network": {"id": "15e538c5-1144-486f-bc43-86871ca19e5e", "bridge": "br-int", "label": "tempest-TaggedBootDevicesTest_v242-1912448327-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a5dd5f9ca5747618b386c87a40ede88", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap70cec09c-3d", "ovs_interfaceid": "70cec09c-3d65-4e69-8c35-7d82e75f4add", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "99fbd66e-87ac-42e3-8722-34d617ef9bb1", "address": "fa:16:3e:c3:05:bf", "network": {"id": "95ae8bf7-e7e5-4bbf-a4f2-58f1e2b487a5", "bridge": "br-int", "label": "tempest-device-tagging-net1-103242350", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.137", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a5dd5f9ca5747618b386c87a40ede88", "mtu": 1442, "physical_network": null, 
"tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap99fbd66e-87", "ovs_interfaceid": "99fbd66e-87ac-42e3-8722-34d617ef9bb1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}, {"id": "7989285b-96a5-43e8-b98e-c0fe1badbf5e", "address": "fa:16:3e:e2:25:4d", "network": {"id": "95ae8bf7-e7e5-4bbf-a4f2-58f1e2b487a5", "bridge": "br-int", "label": "tempest-device-tagging-net1-103242350", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.231", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a5dd5f9ca5747618b386c87a40ede88", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7989285b-96", "ovs_interfaceid": "7989285b-96a5-43e8-b98e-c0fe1badbf5e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}, {"id": "ee2de0a7-a321-4f41-868f-457ea7275ded", "address": "fa:16:3e:62:49:cc", "network": {"id": "95ae8bf7-e7e5-4bbf-a4f2-58f1e2b487a5", "bridge": "br-int", "label": "tempest-device-tagging-net1-103242350", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.249", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": 
"1a5dd5f9ca5747618b386c87a40ede88", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapee2de0a7-a3", "ovs_interfaceid": "ee2de0a7-a321-4f41-868f-457ea7275ded", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "7aeffb1d-aa58-4c79-9768-bcd0a7dcce43", "address": "fa:16:3e:45:a8:03", "network": {"id": "95ae8bf7-e7e5-4bbf-a4f2-58f1e2b487a5", "bridge": "br-int", "label": "tempest-device-tagging-net1-103242350", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.213", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a5dd5f9ca5747618b386c87a40ede88", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7aeffb1d-aa", "ovs_interfaceid": "7aeffb1d-aa58-4c79-9768-bcd0a7dcce43", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "691d36f1-88c4-4e29-b5a7-059d5b66aee3", "address": "fa:16:3e:b2:aa:a0", "network": {"id": "8f455f90-843e-4bc7-94c8-f96455201770", "bridge": "br-int", "label": "tempest-device-tagging-net2-1809570820", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.2.2.100", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, 
"meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a5dd5f9ca5747618b386c87a40ede88", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap691d36f1-88", "ovs_interfaceid": "691d36f1-88c4-4e29-b5a7-059d5b66aee3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "a935a2a5-757d-49f8-9924-1f5f12e03325", "address": "fa:16:3e:c4:17:f5", "network": {"id": "8f455f90-843e-4bc7-94c8-f96455201770", "bridge": "br-int", "label": "tempest-device-tagging-net2-1809570820", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.2.2.200", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a5dd5f9ca5747618b386c87a40ede88", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa935a2a5-75", "ovs_interfaceid": "a935a2a5-757d-49f8-9924-1f5f12e03325", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vdb': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk', 'boot_index': '2'}, '/dev/vdc': {'bus': 'virtio', 'dev': 'vdc', 'type': 'disk', 'boot_index': '3'}, 
'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025
Dec  6 02:30:39 np0005548731 nova_compute[232433]: disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'guest_format': None, 'device_type': 'disk', 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-0deea545-3f8b-4364-87f8-f0e1b1b71caf', 'hosts': ['192.168.122.100', '192.168.122.102', '192.168.122.101'], 'ports': ['6789', '6789', '6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '0deea545-3f8b-4364-87f8-f0e1b1b71caf', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': '02b3a680-d1bd-462f-95dc-aa0934c7ceac', 'attached_at': '', 'detached_at': '', 'volume_id': '0deea545-3f8b-4364-87f8-f0e1b1b71caf', 'serial': '0deea545-3f8b-4364-87f8-f0e1b1b71caf'}, 'disk_bus': 'virtio', 'boot_index': 0, 'delete_on_termination': False, 'mount_device': '/dev/vda', 'attachment_id': '605fe259-713f-428e-a86f-1b28c17a77b3', 'volume_type': None}, {'guest_format': None, 'device_type': 'disk', 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-2c5c3509-8d3e-4707-bc90-7053f467e836', 'hosts': ['192.168.122.100', '192.168.122.102', '192.168.122.101'], 'ports': ['6789', '6789', '6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '2c5c3509-8d3e-4707-bc90-7053f467e836', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': '02b3a680-d1bd-462f-95dc-aa0934c7ceac', 'attached_at': '', 'detached_at': '', 
'volume_id': '2c5c3509-8d3e-4707-bc90-7053f467e836', 'serial': '2c5c3509-8d3e-4707-bc90-7053f467e836'}, 'disk_bus': 'virtio', 'boot_index': 1, 'delete_on_termination': False, 'mount_device': '/dev/vdb', 'attachment_id': '6b36861a-4ab3-49b3-80b9-e4897e7f322c', 'volume_type': None}, {'guest_format': None, 'device_type': 'disk', 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-6db274ff-02a7-4513-aa8b-c44494100a39', 'hosts': ['192.168.122.100', '192.168.122.102', '192.168.122.101'], 'ports': ['6789', '6789', '6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '6db274ff-02a7-4513-aa8b-c44494100a39', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': '02b3a680-d1bd-462f-95dc-aa0934c7ceac', 'attached_at': '', 'detached_at': '', 'volume_id': '6db274ff-02a7-4513-aa8b-c44494100a39', 'serial': '6db274ff-02a7-4513-aa8b-c44494100a39'}, 'disk_bus': 'virtio', 'boot_index': 2, 'delete_on_termination': False, 'mount_device': '/dev/vdc', 'attachment_id': '14554c76-4c07-4f31-8b77-0531073660cd', 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec  6 02:30:39 np0005548731 nova_compute[232433]: 2025-12-06 07:30:39.640 232437 WARNING nova.virt.libvirt.driver [None req-adb712c5-e0cf-4dc3-8c70-9016209a6f69 8e5d532f4af34acc9e3a7a08819bb089 1a5dd5f9ca5747618b386c87a40ede88 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  6 02:30:39 np0005548731 nova_compute[232433]: 2025-12-06 07:30:39.649 232437 DEBUG nova.virt.libvirt.host [None req-adb712c5-e0cf-4dc3-8c70-9016209a6f69 8e5d532f4af34acc9e3a7a08819bb089 1a5dd5f9ca5747618b386c87a40ede88 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec  6 02:30:39 np0005548731 nova_compute[232433]: 2025-12-06 07:30:39.650 232437 DEBUG nova.virt.libvirt.host [None req-adb712c5-e0cf-4dc3-8c70-9016209a6f69 8e5d532f4af34acc9e3a7a08819bb089 1a5dd5f9ca5747618b386c87a40ede88 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec  6 02:30:39 np0005548731 nova_compute[232433]: 2025-12-06 07:30:39.653 232437 DEBUG nova.virt.libvirt.host [None req-adb712c5-e0cf-4dc3-8c70-9016209a6f69 8e5d532f4af34acc9e3a7a08819bb089 1a5dd5f9ca5747618b386c87a40ede88 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec  6 02:30:39 np0005548731 nova_compute[232433]: 2025-12-06 07:30:39.654 232437 DEBUG nova.virt.libvirt.host [None req-adb712c5-e0cf-4dc3-8c70-9016209a6f69 8e5d532f4af34acc9e3a7a08819bb089 1a5dd5f9ca5747618b386c87a40ede88 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec  6 02:30:39 np0005548731 nova_compute[232433]: 2025-12-06 07:30:39.655 232437 DEBUG nova.virt.libvirt.driver [None req-adb712c5-e0cf-4dc3-8c70-9016209a6f69 8e5d532f4af34acc9e3a7a08819bb089 1a5dd5f9ca5747618b386c87a40ede88 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec  6 02:30:39 np0005548731 nova_compute[232433]: 2025-12-06 07:30:39.655 232437 DEBUG nova.virt.hardware [None req-adb712c5-e0cf-4dc3-8c70-9016209a6f69 8e5d532f4af34acc9e3a7a08819bb089 1a5dd5f9ca5747618b386c87a40ede88 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-06T06:56:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='25848a18-11d9-4f11-80b5-5d005675c76d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec  6 02:30:39 np0005548731 nova_compute[232433]: 2025-12-06 07:30:39.655 232437 DEBUG nova.virt.hardware [None req-adb712c5-e0cf-4dc3-8c70-9016209a6f69 8e5d532f4af34acc9e3a7a08819bb089 1a5dd5f9ca5747618b386c87a40ede88 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec  6 02:30:39 np0005548731 nova_compute[232433]: 2025-12-06 07:30:39.656 232437 DEBUG nova.virt.hardware [None req-adb712c5-e0cf-4dc3-8c70-9016209a6f69 8e5d532f4af34acc9e3a7a08819bb089 1a5dd5f9ca5747618b386c87a40ede88 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec  6 02:30:39 np0005548731 nova_compute[232433]: 2025-12-06 07:30:39.656 232437 DEBUG nova.virt.hardware [None req-adb712c5-e0cf-4dc3-8c70-9016209a6f69 8e5d532f4af34acc9e3a7a08819bb089 1a5dd5f9ca5747618b386c87a40ede88 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec  6 02:30:39 np0005548731 nova_compute[232433]: 2025-12-06 07:30:39.656 232437 DEBUG nova.virt.hardware [None req-adb712c5-e0cf-4dc3-8c70-9016209a6f69 8e5d532f4af34acc9e3a7a08819bb089 1a5dd5f9ca5747618b386c87a40ede88 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec  6 02:30:39 np0005548731 nova_compute[232433]: 2025-12-06 07:30:39.657 232437 DEBUG nova.virt.hardware [None req-adb712c5-e0cf-4dc3-8c70-9016209a6f69 8e5d532f4af34acc9e3a7a08819bb089 1a5dd5f9ca5747618b386c87a40ede88 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec  6 02:30:39 np0005548731 nova_compute[232433]: 2025-12-06 07:30:39.657 232437 DEBUG nova.virt.hardware [None req-adb712c5-e0cf-4dc3-8c70-9016209a6f69 8e5d532f4af34acc9e3a7a08819bb089 1a5dd5f9ca5747618b386c87a40ede88 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec  6 02:30:39 np0005548731 nova_compute[232433]: 2025-12-06 07:30:39.657 232437 DEBUG nova.virt.hardware [None req-adb712c5-e0cf-4dc3-8c70-9016209a6f69 8e5d532f4af34acc9e3a7a08819bb089 1a5dd5f9ca5747618b386c87a40ede88 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec  6 02:30:39 np0005548731 nova_compute[232433]: 2025-12-06 07:30:39.657 232437 DEBUG nova.virt.hardware [None req-adb712c5-e0cf-4dc3-8c70-9016209a6f69 8e5d532f4af34acc9e3a7a08819bb089 1a5dd5f9ca5747618b386c87a40ede88 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec  6 02:30:39 np0005548731 nova_compute[232433]: 2025-12-06 07:30:39.658 232437 DEBUG nova.virt.hardware [None req-adb712c5-e0cf-4dc3-8c70-9016209a6f69 8e5d532f4af34acc9e3a7a08819bb089 1a5dd5f9ca5747618b386c87a40ede88 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec  6 02:30:39 np0005548731 nova_compute[232433]: 2025-12-06 07:30:39.658 232437 DEBUG nova.virt.hardware [None req-adb712c5-e0cf-4dc3-8c70-9016209a6f69 8e5d532f4af34acc9e3a7a08819bb089 1a5dd5f9ca5747618b386c87a40ede88 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec  6 02:30:39 np0005548731 nova_compute[232433]: 2025-12-06 07:30:39.685 232437 DEBUG nova.storage.rbd_utils [None req-adb712c5-e0cf-4dc3-8c70-9016209a6f69 8e5d532f4af34acc9e3a7a08819bb089 1a5dd5f9ca5747618b386c87a40ede88 - - default default] rbd image 02b3a680-d1bd-462f-95dc-aa0934c7ceac_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:30:39 np0005548731 rsyslogd[1006]: message too long (8192) with configured size 8096, begin of message is: 2025-12-06 07:30:39.635 232437 DEBUG nova.virt.libvirt.driver [None req-adb712c5 [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Dec  6 02:30:39 np0005548731 nova_compute[232433]: 2025-12-06 07:30:39.690 232437 DEBUG oslo_concurrency.processutils [None req-adb712c5-e0cf-4dc3-8c70-9016209a6f69 8e5d532f4af34acc9e3a7a08819bb089 1a5dd5f9ca5747618b386c87a40ede88 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:30:40 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  6 02:30:40 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/428252605' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  6 02:30:40 np0005548731 nova_compute[232433]: 2025-12-06 07:30:40.121 232437 DEBUG oslo_concurrency.processutils [None req-adb712c5-e0cf-4dc3-8c70-9016209a6f69 8e5d532f4af34acc9e3a7a08819bb089 1a5dd5f9ca5747618b386c87a40ede88 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.431s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:30:40 np0005548731 nova_compute[232433]: 2025-12-06 07:30:40.203 232437 DEBUG nova.virt.libvirt.vif [None req-adb712c5-e0cf-4dc3-8c70-9016209a6f69 8e5d532f4af34acc9e3a7a08819bb089 1a5dd5f9ca5747618b386c87a40ede88 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T07:30:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-device-tagging-server-978600578',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-device-tagging-server-978600578',id=114,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBA4Nu6vj+a9h6J+HIG3bY+gVQqZOzYK629F5aAnRyfcs57jY2yCirotfYPKEDEfhO5X4Qoxtk+llC6W6Xx8bCVjzTNv5PNuAwtpZd6jzHApTipRVoLWw741uAxIfd5PHrg==',key_name='tempest-keypair-661966322',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1a5dd5f9ca5747618b386c87a40ede88',ramdisk_id='',reservation_id='r-5f0cqyp7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',
image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TaggedBootDevicesTest_v242-1145519801',owner_user_name='tempest-TaggedBootDevicesTest_v242-1145519801-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:30:09Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='8e5d532f4af34acc9e3a7a08819bb089',uuid=02b3a680-d1bd-462f-95dc-aa0934c7ceac,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "70cec09c-3d65-4e69-8c35-7d82e75f4add", "address": "fa:16:3e:89:a4:68", "network": {"id": "15e538c5-1144-486f-bc43-86871ca19e5e", "bridge": "br-int", "label": "tempest-TaggedBootDevicesTest_v242-1912448327-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a5dd5f9ca5747618b386c87a40ede88", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap70cec09c-3d", "ovs_interfaceid": "70cec09c-3d65-4e69-8c35-7d82e75f4add", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec  6 02:30:40 np0005548731 nova_compute[232433]: 2025-12-06 07:30:40.204 232437 DEBUG nova.network.os_vif_util [None req-adb712c5-e0cf-4dc3-8c70-9016209a6f69 8e5d532f4af34acc9e3a7a08819bb089 1a5dd5f9ca5747618b386c87a40ede88 - - default default] Converting VIF {"id": "70cec09c-3d65-4e69-8c35-7d82e75f4add", "address": "fa:16:3e:89:a4:68", "network": {"id": "15e538c5-1144-486f-bc43-86871ca19e5e", "bridge": "br-int", "label": "tempest-TaggedBootDevicesTest_v242-1912448327-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a5dd5f9ca5747618b386c87a40ede88", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap70cec09c-3d", "ovs_interfaceid": "70cec09c-3d65-4e69-8c35-7d82e75f4add", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 02:30:40 np0005548731 nova_compute[232433]: 2025-12-06 07:30:40.206 232437 DEBUG nova.network.os_vif_util [None req-adb712c5-e0cf-4dc3-8c70-9016209a6f69 8e5d532f4af34acc9e3a7a08819bb089 1a5dd5f9ca5747618b386c87a40ede88 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:89:a4:68,bridge_name='br-int',has_traffic_filtering=True,id=70cec09c-3d65-4e69-8c35-7d82e75f4add,network=Network(15e538c5-1144-486f-bc43-86871ca19e5e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap70cec09c-3d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 02:30:40 np0005548731 nova_compute[232433]: 2025-12-06 07:30:40.207 232437 DEBUG nova.virt.libvirt.vif [None req-adb712c5-e0cf-4dc3-8c70-9016209a6f69 8e5d532f4af34acc9e3a7a08819bb089 1a5dd5f9ca5747618b386c87a40ede88 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T07:30:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-device-tagging-server-978600578',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-device-tagging-server-978600578',id=114,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBA4Nu6vj+a9h6J+HIG3bY+gVQqZOzYK629F5aAnRyfcs57jY2yCirotfYPKEDEfhO5X4Qoxtk+llC6W6Xx8bCVjzTNv5PNuAwtpZd6jzHApTipRVoLWw741uAxIfd5PHrg==',key_name='tempest-keypair-661966322',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1a5dd5f9ca5747618b386c87a40ede88',ramdisk_id='',reservation_id='r-5f0cqyp7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',
image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TaggedBootDevicesTest_v242-1145519801',owner_user_name='tempest-TaggedBootDevicesTest_v242-1145519801-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:30:09Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='8e5d532f4af34acc9e3a7a08819bb089',uuid=02b3a680-d1bd-462f-95dc-aa0934c7ceac,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "99fbd66e-87ac-42e3-8722-34d617ef9bb1", "address": "fa:16:3e:c3:05:bf", "network": {"id": "95ae8bf7-e7e5-4bbf-a4f2-58f1e2b487a5", "bridge": "br-int", "label": "tempest-device-tagging-net1-103242350", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.137", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a5dd5f9ca5747618b386c87a40ede88", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap99fbd66e-87", "ovs_interfaceid": "99fbd66e-87ac-42e3-8722-34d617ef9bb1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec  6 02:30:40 np0005548731 nova_compute[232433]: 2025-12-06 07:30:40.208 232437 DEBUG nova.network.os_vif_util [None req-adb712c5-e0cf-4dc3-8c70-9016209a6f69 8e5d532f4af34acc9e3a7a08819bb089 1a5dd5f9ca5747618b386c87a40ede88 - - default default] Converting VIF {"id": "99fbd66e-87ac-42e3-8722-34d617ef9bb1", "address": "fa:16:3e:c3:05:bf", "network": {"id": "95ae8bf7-e7e5-4bbf-a4f2-58f1e2b487a5", "bridge": "br-int", "label": "tempest-device-tagging-net1-103242350", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.137", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a5dd5f9ca5747618b386c87a40ede88", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap99fbd66e-87", "ovs_interfaceid": "99fbd66e-87ac-42e3-8722-34d617ef9bb1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 02:30:40 np0005548731 nova_compute[232433]: 2025-12-06 07:30:40.208 232437 DEBUG nova.network.os_vif_util [None req-adb712c5-e0cf-4dc3-8c70-9016209a6f69 8e5d532f4af34acc9e3a7a08819bb089 1a5dd5f9ca5747618b386c87a40ede88 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c3:05:bf,bridge_name='br-int',has_traffic_filtering=True,id=99fbd66e-87ac-42e3-8722-34d617ef9bb1,network=Network(95ae8bf7-e7e5-4bbf-a4f2-58f1e2b487a5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap99fbd66e-87') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 02:30:40 np0005548731 nova_compute[232433]: 2025-12-06 07:30:40.209 232437 DEBUG nova.virt.libvirt.vif [None req-adb712c5-e0cf-4dc3-8c70-9016209a6f69 8e5d532f4af34acc9e3a7a08819bb089 1a5dd5f9ca5747618b386c87a40ede88 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T07:30:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-device-tagging-server-978600578',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-device-tagging-server-978600578',id=114,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBA4Nu6vj+a9h6J+HIG3bY+gVQqZOzYK629F5aAnRyfcs57jY2yCirotfYPKEDEfhO5X4Qoxtk+llC6W6Xx8bCVjzTNv5PNuAwtpZd6jzHApTipRVoLWw741uAxIfd5PHrg==',key_name='tempest-keypair-661966322',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1a5dd5f9ca5747618b386c87a40ede88',ramdisk_id='',reservation_id='r-5f0cqyp7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',
image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TaggedBootDevicesTest_v242-1145519801',owner_user_name='tempest-TaggedBootDevicesTest_v242-1145519801-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:30:09Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='8e5d532f4af34acc9e3a7a08819bb089',uuid=02b3a680-d1bd-462f-95dc-aa0934c7ceac,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "7989285b-96a5-43e8-b98e-c0fe1badbf5e", "address": "fa:16:3e:e2:25:4d", "network": {"id": "95ae8bf7-e7e5-4bbf-a4f2-58f1e2b487a5", "bridge": "br-int", "label": "tempest-device-tagging-net1-103242350", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.231", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a5dd5f9ca5747618b386c87a40ede88", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7989285b-96", "ovs_interfaceid": "7989285b-96a5-43e8-b98e-c0fe1badbf5e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec  6 02:30:40 np0005548731 nova_compute[232433]: 2025-12-06 07:30:40.209 232437 DEBUG nova.network.os_vif_util [None req-adb712c5-e0cf-4dc3-8c70-9016209a6f69 8e5d532f4af34acc9e3a7a08819bb089 1a5dd5f9ca5747618b386c87a40ede88 - - default default] Converting VIF {"id": "7989285b-96a5-43e8-b98e-c0fe1badbf5e", "address": "fa:16:3e:e2:25:4d", "network": {"id": "95ae8bf7-e7e5-4bbf-a4f2-58f1e2b487a5", "bridge": "br-int", "label": "tempest-device-tagging-net1-103242350", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.231", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a5dd5f9ca5747618b386c87a40ede88", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7989285b-96", "ovs_interfaceid": "7989285b-96a5-43e8-b98e-c0fe1badbf5e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 02:30:40 np0005548731 nova_compute[232433]: 2025-12-06 07:30:40.209 232437 DEBUG nova.network.os_vif_util [None req-adb712c5-e0cf-4dc3-8c70-9016209a6f69 8e5d532f4af34acc9e3a7a08819bb089 1a5dd5f9ca5747618b386c87a40ede88 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e2:25:4d,bridge_name='br-int',has_traffic_filtering=True,id=7989285b-96a5-43e8-b98e-c0fe1badbf5e,network=Network(95ae8bf7-e7e5-4bbf-a4f2-58f1e2b487a5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap7989285b-96') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 02:30:40 np0005548731 nova_compute[232433]: 2025-12-06 07:30:40.210 232437 DEBUG nova.virt.libvirt.vif [None req-adb712c5-e0cf-4dc3-8c70-9016209a6f69 8e5d532f4af34acc9e3a7a08819bb089 1a5dd5f9ca5747618b386c87a40ede88 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T07:30:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-device-tagging-server-978600578',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-device-tagging-server-978600578',id=114,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBA4Nu6vj+a9h6J+HIG3bY+gVQqZOzYK629F5aAnRyfcs57jY2yCirotfYPKEDEfhO5X4Qoxtk+llC6W6Xx8bCVjzTNv5PNuAwtpZd6jzHApTipRVoLWw741uAxIfd5PHrg==',key_name='tempest-keypair-661966322',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1a5dd5f9ca5747618b386c87a40ede88',ramdisk_id='',reservation_id='r-5f0cqyp7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',
image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TaggedBootDevicesTest_v242-1145519801',owner_user_name='tempest-TaggedBootDevicesTest_v242-1145519801-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:30:09Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='8e5d532f4af34acc9e3a7a08819bb089',uuid=02b3a680-d1bd-462f-95dc-aa0934c7ceac,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ee2de0a7-a321-4f41-868f-457ea7275ded", "address": "fa:16:3e:62:49:cc", "network": {"id": "95ae8bf7-e7e5-4bbf-a4f2-58f1e2b487a5", "bridge": "br-int", "label": "tempest-device-tagging-net1-103242350", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.249", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a5dd5f9ca5747618b386c87a40ede88", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapee2de0a7-a3", "ovs_interfaceid": "ee2de0a7-a321-4f41-868f-457ea7275ded", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec  6 02:30:40 np0005548731 nova_compute[232433]: 2025-12-06 07:30:40.210 232437 DEBUG nova.network.os_vif_util [None req-adb712c5-e0cf-4dc3-8c70-9016209a6f69 8e5d532f4af34acc9e3a7a08819bb089 1a5dd5f9ca5747618b386c87a40ede88 - - default default] Converting VIF {"id": "ee2de0a7-a321-4f41-868f-457ea7275ded", "address": "fa:16:3e:62:49:cc", "network": {"id": "95ae8bf7-e7e5-4bbf-a4f2-58f1e2b487a5", "bridge": "br-int", "label": "tempest-device-tagging-net1-103242350", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.249", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a5dd5f9ca5747618b386c87a40ede88", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapee2de0a7-a3", "ovs_interfaceid": "ee2de0a7-a321-4f41-868f-457ea7275ded", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 02:30:40 np0005548731 nova_compute[232433]: 2025-12-06 07:30:40.211 232437 DEBUG nova.network.os_vif_util [None req-adb712c5-e0cf-4dc3-8c70-9016209a6f69 8e5d532f4af34acc9e3a7a08819bb089 1a5dd5f9ca5747618b386c87a40ede88 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:62:49:cc,bridge_name='br-int',has_traffic_filtering=True,id=ee2de0a7-a321-4f41-868f-457ea7275ded,network=Network(95ae8bf7-e7e5-4bbf-a4f2-58f1e2b487a5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapee2de0a7-a3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 02:30:40 np0005548731 nova_compute[232433]: 2025-12-06 07:30:40.211 232437 DEBUG nova.virt.libvirt.vif [None req-adb712c5-e0cf-4dc3-8c70-9016209a6f69 8e5d532f4af34acc9e3a7a08819bb089 1a5dd5f9ca5747618b386c87a40ede88 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T07:30:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-device-tagging-server-978600578',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-device-tagging-server-978600578',id=114,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBA4Nu6vj+a9h6J+HIG3bY+gVQqZOzYK629F5aAnRyfcs57jY2yCirotfYPKEDEfhO5X4Qoxtk+llC6W6Xx8bCVjzTNv5PNuAwtpZd6jzHApTipRVoLWw741uAxIfd5PHrg==',key_name='tempest-keypair-661966322',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1a5dd5f9ca5747618b386c87a40ede88',ramdisk_id='',reservation_id='r-5f0cqyp7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',
image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TaggedBootDevicesTest_v242-1145519801',owner_user_name='tempest-TaggedBootDevicesTest_v242-1145519801-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:30:09Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='8e5d532f4af34acc9e3a7a08819bb089',uuid=02b3a680-d1bd-462f-95dc-aa0934c7ceac,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "7aeffb1d-aa58-4c79-9768-bcd0a7dcce43", "address": "fa:16:3e:45:a8:03", "network": {"id": "95ae8bf7-e7e5-4bbf-a4f2-58f1e2b487a5", "bridge": "br-int", "label": "tempest-device-tagging-net1-103242350", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.213", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a5dd5f9ca5747618b386c87a40ede88", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7aeffb1d-aa", "ovs_interfaceid": "7aeffb1d-aa58-4c79-9768-bcd0a7dcce43", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec  6 02:30:40 np0005548731 nova_compute[232433]: 2025-12-06 07:30:40.212 232437 DEBUG nova.network.os_vif_util [None req-adb712c5-e0cf-4dc3-8c70-9016209a6f69 8e5d532f4af34acc9e3a7a08819bb089 1a5dd5f9ca5747618b386c87a40ede88 - - default default] Converting VIF {"id": "7aeffb1d-aa58-4c79-9768-bcd0a7dcce43", "address": "fa:16:3e:45:a8:03", "network": {"id": "95ae8bf7-e7e5-4bbf-a4f2-58f1e2b487a5", "bridge": "br-int", "label": "tempest-device-tagging-net1-103242350", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.213", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a5dd5f9ca5747618b386c87a40ede88", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7aeffb1d-aa", "ovs_interfaceid": "7aeffb1d-aa58-4c79-9768-bcd0a7dcce43", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 02:30:40 np0005548731 nova_compute[232433]: 2025-12-06 07:30:40.212 232437 DEBUG nova.network.os_vif_util [None req-adb712c5-e0cf-4dc3-8c70-9016209a6f69 8e5d532f4af34acc9e3a7a08819bb089 1a5dd5f9ca5747618b386c87a40ede88 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:45:a8:03,bridge_name='br-int',has_traffic_filtering=True,id=7aeffb1d-aa58-4c79-9768-bcd0a7dcce43,network=Network(95ae8bf7-e7e5-4bbf-a4f2-58f1e2b487a5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7aeffb1d-aa') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 02:30:40 np0005548731 nova_compute[232433]: 2025-12-06 07:30:40.213 232437 DEBUG nova.virt.libvirt.vif [None req-adb712c5-e0cf-4dc3-8c70-9016209a6f69 8e5d532f4af34acc9e3a7a08819bb089 1a5dd5f9ca5747618b386c87a40ede88 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T07:30:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-device-tagging-server-978600578',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-device-tagging-server-978600578',id=114,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBA4Nu6vj+a9h6J+HIG3bY+gVQqZOzYK629F5aAnRyfcs57jY2yCirotfYPKEDEfhO5X4Qoxtk+llC6W6Xx8bCVjzTNv5PNuAwtpZd6jzHApTipRVoLWw741uAxIfd5PHrg==',key_name='tempest-keypair-661966322',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1a5dd5f9ca5747618b386c87a40ede88',ramdisk_id='',reservation_id='r-5f0cqyp7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',
image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TaggedBootDevicesTest_v242-1145519801',owner_user_name='tempest-TaggedBootDevicesTest_v242-1145519801-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:30:09Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='8e5d532f4af34acc9e3a7a08819bb089',uuid=02b3a680-d1bd-462f-95dc-aa0934c7ceac,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "691d36f1-88c4-4e29-b5a7-059d5b66aee3", "address": "fa:16:3e:b2:aa:a0", "network": {"id": "8f455f90-843e-4bc7-94c8-f96455201770", "bridge": "br-int", "label": "tempest-device-tagging-net2-1809570820", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.2.2.100", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a5dd5f9ca5747618b386c87a40ede88", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap691d36f1-88", "ovs_interfaceid": "691d36f1-88c4-4e29-b5a7-059d5b66aee3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec  6 02:30:40 np0005548731 nova_compute[232433]: 2025-12-06 07:30:40.213 232437 DEBUG nova.network.os_vif_util [None req-adb712c5-e0cf-4dc3-8c70-9016209a6f69 8e5d532f4af34acc9e3a7a08819bb089 1a5dd5f9ca5747618b386c87a40ede88 - - default default] Converting VIF {"id": "691d36f1-88c4-4e29-b5a7-059d5b66aee3", "address": "fa:16:3e:b2:aa:a0", "network": {"id": "8f455f90-843e-4bc7-94c8-f96455201770", "bridge": "br-int", "label": "tempest-device-tagging-net2-1809570820", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.2.2.100", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a5dd5f9ca5747618b386c87a40ede88", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap691d36f1-88", "ovs_interfaceid": "691d36f1-88c4-4e29-b5a7-059d5b66aee3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 02:30:40 np0005548731 nova_compute[232433]: 2025-12-06 07:30:40.213 232437 DEBUG nova.network.os_vif_util [None req-adb712c5-e0cf-4dc3-8c70-9016209a6f69 8e5d532f4af34acc9e3a7a08819bb089 1a5dd5f9ca5747618b386c87a40ede88 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b2:aa:a0,bridge_name='br-int',has_traffic_filtering=True,id=691d36f1-88c4-4e29-b5a7-059d5b66aee3,network=Network(8f455f90-843e-4bc7-94c8-f96455201770),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap691d36f1-88') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 02:30:40 np0005548731 nova_compute[232433]: 2025-12-06 07:30:40.214 232437 DEBUG nova.virt.libvirt.vif [None req-adb712c5-e0cf-4dc3-8c70-9016209a6f69 8e5d532f4af34acc9e3a7a08819bb089 1a5dd5f9ca5747618b386c87a40ede88 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T07:30:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-device-tagging-server-978600578',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-device-tagging-server-978600578',id=114,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBA4Nu6vj+a9h6J+HIG3bY+gVQqZOzYK629F5aAnRyfcs57jY2yCirotfYPKEDEfhO5X4Qoxtk+llC6W6Xx8bCVjzTNv5PNuAwtpZd6jzHApTipRVoLWw741uAxIfd5PHrg==',key_name='tempest-keypair-661966322',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1a5dd5f9ca5747618b386c87a40ede88',ramdisk_id='',reservation_id='r-5f0cqyp7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',
image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TaggedBootDevicesTest_v242-1145519801',owner_user_name='tempest-TaggedBootDevicesTest_v242-1145519801-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:30:09Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='8e5d532f4af34acc9e3a7a08819bb089',uuid=02b3a680-d1bd-462f-95dc-aa0934c7ceac,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a935a2a5-757d-49f8-9924-1f5f12e03325", "address": "fa:16:3e:c4:17:f5", "network": {"id": "8f455f90-843e-4bc7-94c8-f96455201770", "bridge": "br-int", "label": "tempest-device-tagging-net2-1809570820", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.2.2.200", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a5dd5f9ca5747618b386c87a40ede88", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa935a2a5-75", "ovs_interfaceid": "a935a2a5-757d-49f8-9924-1f5f12e03325", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec  6 02:30:40 np0005548731 nova_compute[232433]: 2025-12-06 07:30:40.214 232437 DEBUG nova.network.os_vif_util [None req-adb712c5-e0cf-4dc3-8c70-9016209a6f69 8e5d532f4af34acc9e3a7a08819bb089 1a5dd5f9ca5747618b386c87a40ede88 - - default default] Converting VIF {"id": "a935a2a5-757d-49f8-9924-1f5f12e03325", "address": "fa:16:3e:c4:17:f5", "network": {"id": "8f455f90-843e-4bc7-94c8-f96455201770", "bridge": "br-int", "label": "tempest-device-tagging-net2-1809570820", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.2.2.200", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a5dd5f9ca5747618b386c87a40ede88", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa935a2a5-75", "ovs_interfaceid": "a935a2a5-757d-49f8-9924-1f5f12e03325", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 02:30:40 np0005548731 nova_compute[232433]: 2025-12-06 07:30:40.214 232437 DEBUG nova.network.os_vif_util [None req-adb712c5-e0cf-4dc3-8c70-9016209a6f69 8e5d532f4af34acc9e3a7a08819bb089 1a5dd5f9ca5747618b386c87a40ede88 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c4:17:f5,bridge_name='br-int',has_traffic_filtering=True,id=a935a2a5-757d-49f8-9924-1f5f12e03325,network=Network(8f455f90-843e-4bc7-94c8-f96455201770),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa935a2a5-75') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 02:30:40 np0005548731 nova_compute[232433]: 2025-12-06 07:30:40.215 232437 DEBUG nova.objects.instance [None req-adb712c5-e0cf-4dc3-8c70-9016209a6f69 8e5d532f4af34acc9e3a7a08819bb089 1a5dd5f9ca5747618b386c87a40ede88 - - default default] Lazy-loading 'pci_devices' on Instance uuid 02b3a680-d1bd-462f-95dc-aa0934c7ceac obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:30:40 np0005548731 nova_compute[232433]: 2025-12-06 07:30:40.230 232437 DEBUG nova.virt.libvirt.driver [None req-adb712c5-e0cf-4dc3-8c70-9016209a6f69 8e5d532f4af34acc9e3a7a08819bb089 1a5dd5f9ca5747618b386c87a40ede88 - - default default] [instance: 02b3a680-d1bd-462f-95dc-aa0934c7ceac] End _get_guest_xml xml=<domain type="kvm">
Dec  6 02:30:40 np0005548731 nova_compute[232433]:  <uuid>02b3a680-d1bd-462f-95dc-aa0934c7ceac</uuid>
Dec  6 02:30:40 np0005548731 nova_compute[232433]:  <name>instance-00000072</name>
Dec  6 02:30:40 np0005548731 nova_compute[232433]:  <memory>131072</memory>
Dec  6 02:30:40 np0005548731 nova_compute[232433]:  <vcpu>1</vcpu>
Dec  6 02:30:40 np0005548731 nova_compute[232433]:  <metadata>
Dec  6 02:30:40 np0005548731 nova_compute[232433]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec  6 02:30:40 np0005548731 nova_compute[232433]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec  6 02:30:40 np0005548731 nova_compute[232433]:      <nova:name>tempest-device-tagging-server-978600578</nova:name>
Dec  6 02:30:40 np0005548731 nova_compute[232433]:      <nova:creationTime>2025-12-06 07:30:39</nova:creationTime>
Dec  6 02:30:40 np0005548731 nova_compute[232433]:      <nova:flavor name="m1.nano">
Dec  6 02:30:40 np0005548731 nova_compute[232433]:        <nova:memory>128</nova:memory>
Dec  6 02:30:40 np0005548731 nova_compute[232433]:        <nova:disk>1</nova:disk>
Dec  6 02:30:40 np0005548731 nova_compute[232433]:        <nova:swap>0</nova:swap>
Dec  6 02:30:40 np0005548731 nova_compute[232433]:        <nova:ephemeral>0</nova:ephemeral>
Dec  6 02:30:40 np0005548731 nova_compute[232433]:        <nova:vcpus>1</nova:vcpus>
Dec  6 02:30:40 np0005548731 nova_compute[232433]:      </nova:flavor>
Dec  6 02:30:40 np0005548731 nova_compute[232433]:      <nova:owner>
Dec  6 02:30:40 np0005548731 nova_compute[232433]:        <nova:user uuid="8e5d532f4af34acc9e3a7a08819bb089">tempest-TaggedBootDevicesTest_v242-1145519801-project-member</nova:user>
Dec  6 02:30:40 np0005548731 nova_compute[232433]:        <nova:project uuid="1a5dd5f9ca5747618b386c87a40ede88">tempest-TaggedBootDevicesTest_v242-1145519801</nova:project>
Dec  6 02:30:40 np0005548731 nova_compute[232433]:      </nova:owner>
Dec  6 02:30:40 np0005548731 nova_compute[232433]:      <nova:root type="image" uuid="6efab05d-c7cf-4770-a5c3-c806a2739063"/>
Dec  6 02:30:40 np0005548731 nova_compute[232433]:      <nova:ports>
Dec  6 02:30:40 np0005548731 nova_compute[232433]:        <nova:port uuid="70cec09c-3d65-4e69-8c35-7d82e75f4add">
Dec  6 02:30:40 np0005548731 nova_compute[232433]:          <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Dec  6 02:30:40 np0005548731 nova_compute[232433]:        </nova:port>
Dec  6 02:30:40 np0005548731 nova_compute[232433]:        <nova:port uuid="99fbd66e-87ac-42e3-8722-34d617ef9bb1">
Dec  6 02:30:40 np0005548731 nova_compute[232433]:          <nova:ip type="fixed" address="10.1.1.137" ipVersion="4"/>
Dec  6 02:30:40 np0005548731 nova_compute[232433]:        </nova:port>
Dec  6 02:30:40 np0005548731 nova_compute[232433]:        <nova:port uuid="7989285b-96a5-43e8-b98e-c0fe1badbf5e">
Dec  6 02:30:40 np0005548731 nova_compute[232433]:          <nova:ip type="fixed" address="10.1.1.231" ipVersion="4"/>
Dec  6 02:30:40 np0005548731 nova_compute[232433]:        </nova:port>
Dec  6 02:30:40 np0005548731 nova_compute[232433]:        <nova:port uuid="ee2de0a7-a321-4f41-868f-457ea7275ded">
Dec  6 02:30:40 np0005548731 nova_compute[232433]:          <nova:ip type="fixed" address="10.1.1.249" ipVersion="4"/>
Dec  6 02:30:40 np0005548731 nova_compute[232433]:        </nova:port>
Dec  6 02:30:40 np0005548731 nova_compute[232433]:        <nova:port uuid="7aeffb1d-aa58-4c79-9768-bcd0a7dcce43">
Dec  6 02:30:40 np0005548731 nova_compute[232433]:          <nova:ip type="fixed" address="10.1.1.213" ipVersion="4"/>
Dec  6 02:30:40 np0005548731 nova_compute[232433]:        </nova:port>
Dec  6 02:30:40 np0005548731 nova_compute[232433]:        <nova:port uuid="691d36f1-88c4-4e29-b5a7-059d5b66aee3">
Dec  6 02:30:40 np0005548731 nova_compute[232433]:          <nova:ip type="fixed" address="10.2.2.100" ipVersion="4"/>
Dec  6 02:30:40 np0005548731 nova_compute[232433]:        </nova:port>
Dec  6 02:30:40 np0005548731 nova_compute[232433]:        <nova:port uuid="a935a2a5-757d-49f8-9924-1f5f12e03325">
Dec  6 02:30:40 np0005548731 nova_compute[232433]:          <nova:ip type="fixed" address="10.2.2.200" ipVersion="4"/>
Dec  6 02:30:40 np0005548731 nova_compute[232433]:        </nova:port>
Dec  6 02:30:40 np0005548731 nova_compute[232433]:      </nova:ports>
Dec  6 02:30:40 np0005548731 nova_compute[232433]:    </nova:instance>
Dec  6 02:30:40 np0005548731 nova_compute[232433]:  </metadata>
Dec  6 02:30:40 np0005548731 nova_compute[232433]:  <sysinfo type="smbios">
Dec  6 02:30:40 np0005548731 nova_compute[232433]:    <system>
Dec  6 02:30:40 np0005548731 nova_compute[232433]:      <entry name="manufacturer">RDO</entry>
Dec  6 02:30:40 np0005548731 nova_compute[232433]:      <entry name="product">OpenStack Compute</entry>
Dec  6 02:30:40 np0005548731 nova_compute[232433]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec  6 02:30:40 np0005548731 nova_compute[232433]:      <entry name="serial">02b3a680-d1bd-462f-95dc-aa0934c7ceac</entry>
Dec  6 02:30:40 np0005548731 nova_compute[232433]:      <entry name="uuid">02b3a680-d1bd-462f-95dc-aa0934c7ceac</entry>
Dec  6 02:30:40 np0005548731 nova_compute[232433]:      <entry name="family">Virtual Machine</entry>
Dec  6 02:30:40 np0005548731 nova_compute[232433]:    </system>
Dec  6 02:30:40 np0005548731 nova_compute[232433]:  </sysinfo>
Dec  6 02:30:40 np0005548731 nova_compute[232433]:  <os>
Dec  6 02:30:40 np0005548731 nova_compute[232433]:    <type arch="x86_64" machine="q35">hvm</type>
Dec  6 02:30:40 np0005548731 nova_compute[232433]:    <boot dev="hd"/>
Dec  6 02:30:40 np0005548731 nova_compute[232433]:    <smbios mode="sysinfo"/>
Dec  6 02:30:40 np0005548731 nova_compute[232433]:  </os>
Dec  6 02:30:40 np0005548731 nova_compute[232433]:  <features>
Dec  6 02:30:40 np0005548731 nova_compute[232433]:    <acpi/>
Dec  6 02:30:40 np0005548731 nova_compute[232433]:    <apic/>
Dec  6 02:30:40 np0005548731 nova_compute[232433]:    <vmcoreinfo/>
Dec  6 02:30:40 np0005548731 nova_compute[232433]:  </features>
Dec  6 02:30:40 np0005548731 nova_compute[232433]:  <clock offset="utc">
Dec  6 02:30:40 np0005548731 nova_compute[232433]:    <timer name="pit" tickpolicy="delay"/>
Dec  6 02:30:40 np0005548731 nova_compute[232433]:    <timer name="rtc" tickpolicy="catchup"/>
Dec  6 02:30:40 np0005548731 nova_compute[232433]:    <timer name="hpet" present="no"/>
Dec  6 02:30:40 np0005548731 nova_compute[232433]:  </clock>
Dec  6 02:30:40 np0005548731 nova_compute[232433]:  <cpu mode="custom" match="exact">
Dec  6 02:30:40 np0005548731 nova_compute[232433]:    <model>Nehalem</model>
Dec  6 02:30:40 np0005548731 nova_compute[232433]:    <topology sockets="1" cores="1" threads="1"/>
Dec  6 02:30:40 np0005548731 nova_compute[232433]:  </cpu>
Dec  6 02:30:40 np0005548731 nova_compute[232433]:  <devices>
Dec  6 02:30:40 np0005548731 nova_compute[232433]:    <disk type="network" device="cdrom">
Dec  6 02:30:40 np0005548731 nova_compute[232433]:      <driver type="raw" cache="none"/>
Dec  6 02:30:40 np0005548731 nova_compute[232433]:      <source protocol="rbd" name="vms/02b3a680-d1bd-462f-95dc-aa0934c7ceac_disk.config">
Dec  6 02:30:40 np0005548731 nova_compute[232433]:        <host name="192.168.122.100" port="6789"/>
Dec  6 02:30:40 np0005548731 nova_compute[232433]:        <host name="192.168.122.102" port="6789"/>
Dec  6 02:30:40 np0005548731 nova_compute[232433]:        <host name="192.168.122.101" port="6789"/>
Dec  6 02:30:40 np0005548731 nova_compute[232433]:      </source>
Dec  6 02:30:40 np0005548731 nova_compute[232433]:      <auth username="openstack">
Dec  6 02:30:40 np0005548731 nova_compute[232433]:        <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec  6 02:30:40 np0005548731 nova_compute[232433]:      </auth>
Dec  6 02:30:40 np0005548731 nova_compute[232433]:      <target dev="sda" bus="sata"/>
Dec  6 02:30:40 np0005548731 nova_compute[232433]:    </disk>
Dec  6 02:30:40 np0005548731 nova_compute[232433]:    <disk type="network" device="disk">
Dec  6 02:30:40 np0005548731 nova_compute[232433]:      <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Dec  6 02:30:40 np0005548731 nova_compute[232433]:      <source protocol="rbd" name="volumes/volume-0deea545-3f8b-4364-87f8-f0e1b1b71caf">
Dec  6 02:30:40 np0005548731 nova_compute[232433]:        <host name="192.168.122.100" port="6789"/>
Dec  6 02:30:40 np0005548731 nova_compute[232433]:        <host name="192.168.122.102" port="6789"/>
Dec  6 02:30:40 np0005548731 nova_compute[232433]:        <host name="192.168.122.101" port="6789"/>
Dec  6 02:30:40 np0005548731 nova_compute[232433]:      </source>
Dec  6 02:30:40 np0005548731 nova_compute[232433]:      <auth username="openstack">
Dec  6 02:30:40 np0005548731 nova_compute[232433]:        <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec  6 02:30:40 np0005548731 nova_compute[232433]:      </auth>
Dec  6 02:30:40 np0005548731 nova_compute[232433]:      <target dev="vda" bus="virtio"/>
Dec  6 02:30:40 np0005548731 nova_compute[232433]:      <serial>0deea545-3f8b-4364-87f8-f0e1b1b71caf</serial>
Dec  6 02:30:40 np0005548731 nova_compute[232433]:    </disk>
Dec  6 02:30:40 np0005548731 nova_compute[232433]:    <disk type="network" device="disk">
Dec  6 02:30:40 np0005548731 nova_compute[232433]:      <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Dec  6 02:30:40 np0005548731 nova_compute[232433]:      <source protocol="rbd" name="volumes/volume-2c5c3509-8d3e-4707-bc90-7053f467e836">
Dec  6 02:30:40 np0005548731 nova_compute[232433]:        <host name="192.168.122.100" port="6789"/>
Dec  6 02:30:40 np0005548731 nova_compute[232433]:        <host name="192.168.122.102" port="6789"/>
Dec  6 02:30:40 np0005548731 nova_compute[232433]:        <host name="192.168.122.101" port="6789"/>
Dec  6 02:30:40 np0005548731 nova_compute[232433]:      </source>
Dec  6 02:30:40 np0005548731 nova_compute[232433]:      <auth username="openstack">
Dec  6 02:30:40 np0005548731 nova_compute[232433]:        <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec  6 02:30:40 np0005548731 nova_compute[232433]:      </auth>
Dec  6 02:30:40 np0005548731 nova_compute[232433]:      <target dev="vdb" bus="virtio"/>
Dec  6 02:30:40 np0005548731 nova_compute[232433]:      <serial>2c5c3509-8d3e-4707-bc90-7053f467e836</serial>
Dec  6 02:30:40 np0005548731 nova_compute[232433]:    </disk>
Dec  6 02:30:40 np0005548731 nova_compute[232433]:    <disk type="network" device="disk">
Dec  6 02:30:40 np0005548731 nova_compute[232433]:      <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Dec  6 02:30:40 np0005548731 nova_compute[232433]:      <source protocol="rbd" name="volumes/volume-6db274ff-02a7-4513-aa8b-c44494100a39">
Dec  6 02:30:40 np0005548731 nova_compute[232433]:        <host name="192.168.122.100" port="6789"/>
Dec  6 02:30:40 np0005548731 nova_compute[232433]:        <host name="192.168.122.102" port="6789"/>
Dec  6 02:30:40 np0005548731 nova_compute[232433]:        <host name="192.168.122.101" port="6789"/>
Dec  6 02:30:40 np0005548731 nova_compute[232433]:      </source>
Dec  6 02:30:40 np0005548731 nova_compute[232433]:      <auth username="openstack">
Dec  6 02:30:40 np0005548731 nova_compute[232433]:        <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec  6 02:30:40 np0005548731 nova_compute[232433]:      </auth>
Dec  6 02:30:40 np0005548731 nova_compute[232433]:      <target dev="vdc" bus="virtio"/>
Dec  6 02:30:40 np0005548731 nova_compute[232433]:      <serial>6db274ff-02a7-4513-aa8b-c44494100a39</serial>
Dec  6 02:30:40 np0005548731 nova_compute[232433]:    </disk>
Dec  6 02:30:40 np0005548731 nova_compute[232433]:    <interface type="ethernet">
Dec  6 02:30:40 np0005548731 nova_compute[232433]:      <mac address="fa:16:3e:89:a4:68"/>
Dec  6 02:30:40 np0005548731 nova_compute[232433]:      <model type="virtio"/>
Dec  6 02:30:40 np0005548731 nova_compute[232433]:      <driver name="vhost" rx_queue_size="512"/>
Dec  6 02:30:40 np0005548731 nova_compute[232433]:      <mtu size="1442"/>
Dec  6 02:30:40 np0005548731 nova_compute[232433]:      <target dev="tap70cec09c-3d"/>
Dec  6 02:30:40 np0005548731 nova_compute[232433]:    </interface>
Dec  6 02:30:40 np0005548731 nova_compute[232433]:    <interface type="ethernet">
Dec  6 02:30:40 np0005548731 nova_compute[232433]:      <mac address="fa:16:3e:c3:05:bf"/>
Dec  6 02:30:40 np0005548731 nova_compute[232433]:      <model type="virtio"/>
Dec  6 02:30:40 np0005548731 nova_compute[232433]:      <driver name="vhost" rx_queue_size="512"/>
Dec  6 02:30:40 np0005548731 nova_compute[232433]:      <mtu size="1442"/>
Dec  6 02:30:40 np0005548731 nova_compute[232433]:      <target dev="tap99fbd66e-87"/>
Dec  6 02:30:40 np0005548731 nova_compute[232433]:    </interface>
Dec  6 02:30:40 np0005548731 nova_compute[232433]:    <interface type="ethernet">
Dec  6 02:30:40 np0005548731 nova_compute[232433]:      <mac address="fa:16:3e:e2:25:4d"/>
Dec  6 02:30:40 np0005548731 nova_compute[232433]:      <model type="virtio"/>
Dec  6 02:30:40 np0005548731 nova_compute[232433]:      <driver name="vhost" rx_queue_size="512"/>
Dec  6 02:30:40 np0005548731 nova_compute[232433]:      <mtu size="1442"/>
Dec  6 02:30:40 np0005548731 nova_compute[232433]:      <target dev="tap7989285b-96"/>
Dec  6 02:30:40 np0005548731 nova_compute[232433]:    </interface>
Dec  6 02:30:40 np0005548731 nova_compute[232433]:    <interface type="ethernet">
Dec  6 02:30:40 np0005548731 nova_compute[232433]:      <mac address="fa:16:3e:62:49:cc"/>
Dec  6 02:30:40 np0005548731 nova_compute[232433]:      <model type="virtio"/>
Dec  6 02:30:40 np0005548731 nova_compute[232433]:      <driver name="vhost" rx_queue_size="512"/>
Dec  6 02:30:40 np0005548731 nova_compute[232433]:      <mtu size="1442"/>
Dec  6 02:30:40 np0005548731 nova_compute[232433]:      <target dev="tapee2de0a7-a3"/>
Dec  6 02:30:40 np0005548731 nova_compute[232433]:    </interface>
Dec  6 02:30:40 np0005548731 nova_compute[232433]:    <interface type="ethernet">
Dec  6 02:30:40 np0005548731 nova_compute[232433]:      <mac address="fa:16:3e:45:a8:03"/>
Dec  6 02:30:40 np0005548731 nova_compute[232433]:      <model type="virtio"/>
Dec  6 02:30:40 np0005548731 nova_compute[232433]:      <driver name="vhost" rx_queue_size="512"/>
Dec  6 02:30:40 np0005548731 nova_compute[232433]:      <mtu size="1442"/>
Dec  6 02:30:40 np0005548731 nova_compute[232433]:      <target dev="tap7aeffb1d-aa"/>
Dec  6 02:30:40 np0005548731 nova_compute[232433]:    </interface>
Dec  6 02:30:40 np0005548731 nova_compute[232433]:    <interface type="ethernet">
Dec  6 02:30:40 np0005548731 nova_compute[232433]:      <mac address="fa:16:3e:b2:aa:a0"/>
Dec  6 02:30:40 np0005548731 nova_compute[232433]:      <model type="virtio"/>
Dec  6 02:30:40 np0005548731 nova_compute[232433]:      <driver name="vhost" rx_queue_size="512"/>
Dec  6 02:30:40 np0005548731 nova_compute[232433]:      <mtu size="1442"/>
Dec  6 02:30:40 np0005548731 nova_compute[232433]:      <target dev="tap691d36f1-88"/>
Dec  6 02:30:40 np0005548731 nova_compute[232433]:    </interface>
Dec  6 02:30:40 np0005548731 nova_compute[232433]:    <interface type="ethernet">
Dec  6 02:30:40 np0005548731 nova_compute[232433]:      <mac address="fa:16:3e:c4:17:f5"/>
Dec  6 02:30:40 np0005548731 nova_compute[232433]:      <model type="virtio"/>
Dec  6 02:30:40 np0005548731 nova_compute[232433]:      <driver name="vhost" rx_queue_size="512"/>
Dec  6 02:30:40 np0005548731 nova_compute[232433]:      <mtu size="1442"/>
Dec  6 02:30:40 np0005548731 nova_compute[232433]:      <target dev="tapa935a2a5-75"/>
Dec  6 02:30:40 np0005548731 nova_compute[232433]:    </interface>
Dec  6 02:30:40 np0005548731 nova_compute[232433]:    <serial type="pty">
Dec  6 02:30:40 np0005548731 nova_compute[232433]:      <log file="/var/lib/nova/instances/02b3a680-d1bd-462f-95dc-aa0934c7ceac/console.log" append="off"/>
Dec  6 02:30:40 np0005548731 nova_compute[232433]:    </serial>
Dec  6 02:30:40 np0005548731 nova_compute[232433]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec  6 02:30:40 np0005548731 nova_compute[232433]:    <video>
Dec  6 02:30:40 np0005548731 nova_compute[232433]:      <model type="virtio"/>
Dec  6 02:30:40 np0005548731 nova_compute[232433]:    </video>
Dec  6 02:30:40 np0005548731 nova_compute[232433]:    <input type="tablet" bus="usb"/>
Dec  6 02:30:40 np0005548731 nova_compute[232433]:    <rng model="virtio">
Dec  6 02:30:40 np0005548731 nova_compute[232433]:      <backend model="random">/dev/urandom</backend>
Dec  6 02:30:40 np0005548731 nova_compute[232433]:    </rng>
Dec  6 02:30:40 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root"/>
Dec  6 02:30:40 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:30:40 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:30:40 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:30:40 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:30:40 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:30:40 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:30:40 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:30:40 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:30:40 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:30:40 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:30:40 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:30:40 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:30:40 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:30:40 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:30:40 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:30:40 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:30:40 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:30:40 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:30:40 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:30:40 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:30:40 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:30:40 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:30:40 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:30:40 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:30:40 np0005548731 nova_compute[232433]:    <controller type="usb" index="0"/>
Dec  6 02:30:40 np0005548731 nova_compute[232433]:    <memballoon model="virtio">
Dec  6 02:30:40 np0005548731 nova_compute[232433]:      <stats period="10"/>
Dec  6 02:30:40 np0005548731 nova_compute[232433]:    </memballoon>
Dec  6 02:30:40 np0005548731 nova_compute[232433]:  </devices>
Dec  6 02:30:40 np0005548731 nova_compute[232433]: </domain>
Dec  6 02:30:40 np0005548731 nova_compute[232433]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec  6 02:30:40 np0005548731 nova_compute[232433]: 2025-12-06 07:30:40.232 232437 DEBUG nova.compute.manager [None req-adb712c5-e0cf-4dc3-8c70-9016209a6f69 8e5d532f4af34acc9e3a7a08819bb089 1a5dd5f9ca5747618b386c87a40ede88 - - default default] [instance: 02b3a680-d1bd-462f-95dc-aa0934c7ceac] Preparing to wait for external event network-vif-plugged-70cec09c-3d65-4e69-8c35-7d82e75f4add prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec  6 02:30:40 np0005548731 nova_compute[232433]: 2025-12-06 07:30:40.232 232437 DEBUG oslo_concurrency.lockutils [None req-adb712c5-e0cf-4dc3-8c70-9016209a6f69 8e5d532f4af34acc9e3a7a08819bb089 1a5dd5f9ca5747618b386c87a40ede88 - - default default] Acquiring lock "02b3a680-d1bd-462f-95dc-aa0934c7ceac-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  6 02:30:40 np0005548731 nova_compute[232433]: 2025-12-06 07:30:40.233 232437 DEBUG oslo_concurrency.lockutils [None req-adb712c5-e0cf-4dc3-8c70-9016209a6f69 8e5d532f4af34acc9e3a7a08819bb089 1a5dd5f9ca5747618b386c87a40ede88 - - default default] Lock "02b3a680-d1bd-462f-95dc-aa0934c7ceac-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  6 02:30:40 np0005548731 nova_compute[232433]: 2025-12-06 07:30:40.233 232437 DEBUG oslo_concurrency.lockutils [None req-adb712c5-e0cf-4dc3-8c70-9016209a6f69 8e5d532f4af34acc9e3a7a08819bb089 1a5dd5f9ca5747618b386c87a40ede88 - - default default] Lock "02b3a680-d1bd-462f-95dc-aa0934c7ceac-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  6 02:30:40 np0005548731 nova_compute[232433]: 2025-12-06 07:30:40.233 232437 DEBUG nova.compute.manager [None req-adb712c5-e0cf-4dc3-8c70-9016209a6f69 8e5d532f4af34acc9e3a7a08819bb089 1a5dd5f9ca5747618b386c87a40ede88 - - default default] [instance: 02b3a680-d1bd-462f-95dc-aa0934c7ceac] Preparing to wait for external event network-vif-plugged-99fbd66e-87ac-42e3-8722-34d617ef9bb1 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec  6 02:30:40 np0005548731 nova_compute[232433]: 2025-12-06 07:30:40.233 232437 DEBUG oslo_concurrency.lockutils [None req-adb712c5-e0cf-4dc3-8c70-9016209a6f69 8e5d532f4af34acc9e3a7a08819bb089 1a5dd5f9ca5747618b386c87a40ede88 - - default default] Acquiring lock "02b3a680-d1bd-462f-95dc-aa0934c7ceac-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  6 02:30:40 np0005548731 nova_compute[232433]: 2025-12-06 07:30:40.233 232437 DEBUG oslo_concurrency.lockutils [None req-adb712c5-e0cf-4dc3-8c70-9016209a6f69 8e5d532f4af34acc9e3a7a08819bb089 1a5dd5f9ca5747618b386c87a40ede88 - - default default] Lock "02b3a680-d1bd-462f-95dc-aa0934c7ceac-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  6 02:30:40 np0005548731 nova_compute[232433]: 2025-12-06 07:30:40.233 232437 DEBUG oslo_concurrency.lockutils [None req-adb712c5-e0cf-4dc3-8c70-9016209a6f69 8e5d532f4af34acc9e3a7a08819bb089 1a5dd5f9ca5747618b386c87a40ede88 - - default default] Lock "02b3a680-d1bd-462f-95dc-aa0934c7ceac-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  6 02:30:40 np0005548731 nova_compute[232433]: 2025-12-06 07:30:40.234 232437 DEBUG nova.compute.manager [None req-adb712c5-e0cf-4dc3-8c70-9016209a6f69 8e5d532f4af34acc9e3a7a08819bb089 1a5dd5f9ca5747618b386c87a40ede88 - - default default] [instance: 02b3a680-d1bd-462f-95dc-aa0934c7ceac] Preparing to wait for external event network-vif-plugged-7989285b-96a5-43e8-b98e-c0fe1badbf5e prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec  6 02:30:40 np0005548731 nova_compute[232433]: 2025-12-06 07:30:40.234 232437 DEBUG oslo_concurrency.lockutils [None req-adb712c5-e0cf-4dc3-8c70-9016209a6f69 8e5d532f4af34acc9e3a7a08819bb089 1a5dd5f9ca5747618b386c87a40ede88 - - default default] Acquiring lock "02b3a680-d1bd-462f-95dc-aa0934c7ceac-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  6 02:30:40 np0005548731 nova_compute[232433]: 2025-12-06 07:30:40.234 232437 DEBUG oslo_concurrency.lockutils [None req-adb712c5-e0cf-4dc3-8c70-9016209a6f69 8e5d532f4af34acc9e3a7a08819bb089 1a5dd5f9ca5747618b386c87a40ede88 - - default default] Lock "02b3a680-d1bd-462f-95dc-aa0934c7ceac-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  6 02:30:40 np0005548731 nova_compute[232433]: 2025-12-06 07:30:40.234 232437 DEBUG oslo_concurrency.lockutils [None req-adb712c5-e0cf-4dc3-8c70-9016209a6f69 8e5d532f4af34acc9e3a7a08819bb089 1a5dd5f9ca5747618b386c87a40ede88 - - default default] Lock "02b3a680-d1bd-462f-95dc-aa0934c7ceac-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  6 02:30:40 np0005548731 nova_compute[232433]: 2025-12-06 07:30:40.234 232437 DEBUG nova.compute.manager [None req-adb712c5-e0cf-4dc3-8c70-9016209a6f69 8e5d532f4af34acc9e3a7a08819bb089 1a5dd5f9ca5747618b386c87a40ede88 - - default default] [instance: 02b3a680-d1bd-462f-95dc-aa0934c7ceac] Preparing to wait for external event network-vif-plugged-ee2de0a7-a321-4f41-868f-457ea7275ded prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec  6 02:30:40 np0005548731 nova_compute[232433]: 2025-12-06 07:30:40.235 232437 DEBUG oslo_concurrency.lockutils [None req-adb712c5-e0cf-4dc3-8c70-9016209a6f69 8e5d532f4af34acc9e3a7a08819bb089 1a5dd5f9ca5747618b386c87a40ede88 - - default default] Acquiring lock "02b3a680-d1bd-462f-95dc-aa0934c7ceac-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  6 02:30:40 np0005548731 nova_compute[232433]: 2025-12-06 07:30:40.235 232437 DEBUG oslo_concurrency.lockutils [None req-adb712c5-e0cf-4dc3-8c70-9016209a6f69 8e5d532f4af34acc9e3a7a08819bb089 1a5dd5f9ca5747618b386c87a40ede88 - - default default] Lock "02b3a680-d1bd-462f-95dc-aa0934c7ceac-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  6 02:30:40 np0005548731 nova_compute[232433]: 2025-12-06 07:30:40.235 232437 DEBUG oslo_concurrency.lockutils [None req-adb712c5-e0cf-4dc3-8c70-9016209a6f69 8e5d532f4af34acc9e3a7a08819bb089 1a5dd5f9ca5747618b386c87a40ede88 - - default default] Lock "02b3a680-d1bd-462f-95dc-aa0934c7ceac-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  6 02:30:40 np0005548731 nova_compute[232433]: 2025-12-06 07:30:40.235 232437 DEBUG nova.compute.manager [None req-adb712c5-e0cf-4dc3-8c70-9016209a6f69 8e5d532f4af34acc9e3a7a08819bb089 1a5dd5f9ca5747618b386c87a40ede88 - - default default] [instance: 02b3a680-d1bd-462f-95dc-aa0934c7ceac] Preparing to wait for external event network-vif-plugged-7aeffb1d-aa58-4c79-9768-bcd0a7dcce43 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec  6 02:30:40 np0005548731 nova_compute[232433]: 2025-12-06 07:30:40.235 232437 DEBUG oslo_concurrency.lockutils [None req-adb712c5-e0cf-4dc3-8c70-9016209a6f69 8e5d532f4af34acc9e3a7a08819bb089 1a5dd5f9ca5747618b386c87a40ede88 - - default default] Acquiring lock "02b3a680-d1bd-462f-95dc-aa0934c7ceac-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  6 02:30:40 np0005548731 nova_compute[232433]: 2025-12-06 07:30:40.235 232437 DEBUG oslo_concurrency.lockutils [None req-adb712c5-e0cf-4dc3-8c70-9016209a6f69 8e5d532f4af34acc9e3a7a08819bb089 1a5dd5f9ca5747618b386c87a40ede88 - - default default] Lock "02b3a680-d1bd-462f-95dc-aa0934c7ceac-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  6 02:30:40 np0005548731 nova_compute[232433]: 2025-12-06 07:30:40.236 232437 DEBUG oslo_concurrency.lockutils [None req-adb712c5-e0cf-4dc3-8c70-9016209a6f69 8e5d532f4af34acc9e3a7a08819bb089 1a5dd5f9ca5747618b386c87a40ede88 - - default default] Lock "02b3a680-d1bd-462f-95dc-aa0934c7ceac-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  6 02:30:40 np0005548731 nova_compute[232433]: 2025-12-06 07:30:40.236 232437 DEBUG nova.compute.manager [None req-adb712c5-e0cf-4dc3-8c70-9016209a6f69 8e5d532f4af34acc9e3a7a08819bb089 1a5dd5f9ca5747618b386c87a40ede88 - - default default] [instance: 02b3a680-d1bd-462f-95dc-aa0934c7ceac] Preparing to wait for external event network-vif-plugged-691d36f1-88c4-4e29-b5a7-059d5b66aee3 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec  6 02:30:40 np0005548731 nova_compute[232433]: 2025-12-06 07:30:40.236 232437 DEBUG oslo_concurrency.lockutils [None req-adb712c5-e0cf-4dc3-8c70-9016209a6f69 8e5d532f4af34acc9e3a7a08819bb089 1a5dd5f9ca5747618b386c87a40ede88 - - default default] Acquiring lock "02b3a680-d1bd-462f-95dc-aa0934c7ceac-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  6 02:30:40 np0005548731 nova_compute[232433]: 2025-12-06 07:30:40.236 232437 DEBUG oslo_concurrency.lockutils [None req-adb712c5-e0cf-4dc3-8c70-9016209a6f69 8e5d532f4af34acc9e3a7a08819bb089 1a5dd5f9ca5747618b386c87a40ede88 - - default default] Lock "02b3a680-d1bd-462f-95dc-aa0934c7ceac-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  6 02:30:40 np0005548731 nova_compute[232433]: 2025-12-06 07:30:40.236 232437 DEBUG oslo_concurrency.lockutils [None req-adb712c5-e0cf-4dc3-8c70-9016209a6f69 8e5d532f4af34acc9e3a7a08819bb089 1a5dd5f9ca5747618b386c87a40ede88 - - default default] Lock "02b3a680-d1bd-462f-95dc-aa0934c7ceac-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  6 02:30:40 np0005548731 nova_compute[232433]: 2025-12-06 07:30:40.236 232437 DEBUG nova.compute.manager [None req-adb712c5-e0cf-4dc3-8c70-9016209a6f69 8e5d532f4af34acc9e3a7a08819bb089 1a5dd5f9ca5747618b386c87a40ede88 - - default default] [instance: 02b3a680-d1bd-462f-95dc-aa0934c7ceac] Preparing to wait for external event network-vif-plugged-a935a2a5-757d-49f8-9924-1f5f12e03325 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec  6 02:30:40 np0005548731 nova_compute[232433]: 2025-12-06 07:30:40.236 232437 DEBUG oslo_concurrency.lockutils [None req-adb712c5-e0cf-4dc3-8c70-9016209a6f69 8e5d532f4af34acc9e3a7a08819bb089 1a5dd5f9ca5747618b386c87a40ede88 - - default default] Acquiring lock "02b3a680-d1bd-462f-95dc-aa0934c7ceac-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  6 02:30:40 np0005548731 nova_compute[232433]: 2025-12-06 07:30:40.237 232437 DEBUG oslo_concurrency.lockutils [None req-adb712c5-e0cf-4dc3-8c70-9016209a6f69 8e5d532f4af34acc9e3a7a08819bb089 1a5dd5f9ca5747618b386c87a40ede88 - - default default] Lock "02b3a680-d1bd-462f-95dc-aa0934c7ceac-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  6 02:30:40 np0005548731 nova_compute[232433]: 2025-12-06 07:30:40.237 232437 DEBUG oslo_concurrency.lockutils [None req-adb712c5-e0cf-4dc3-8c70-9016209a6f69 8e5d532f4af34acc9e3a7a08819bb089 1a5dd5f9ca5747618b386c87a40ede88 - - default default] Lock "02b3a680-d1bd-462f-95dc-aa0934c7ceac-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  6 02:30:40 np0005548731 nova_compute[232433]: 2025-12-06 07:30:40.237 232437 DEBUG nova.virt.libvirt.vif [None req-adb712c5-e0cf-4dc3-8c70-9016209a6f69 8e5d532f4af34acc9e3a7a08819bb089 1a5dd5f9ca5747618b386c87a40ede88 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T07:30:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-device-tagging-server-978600578',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-device-tagging-server-978600578',id=114,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBA4Nu6vj+a9h6J+HIG3bY+gVQqZOzYK629F5aAnRyfcs57jY2yCirotfYPKEDEfhO5X4Qoxtk+llC6W6Xx8bCVjzTNv5PNuAwtpZd6jzHApTipRVoLWw741uAxIfd5PHrg==',key_name='tempest-keypair-661966322',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1a5dd5f9ca5747618b386c87a40ede88',ramdisk_id='',reservation_id='r-5f0cqyp7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model
='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TaggedBootDevicesTest_v242-1145519801',owner_user_name='tempest-TaggedBootDevicesTest_v242-1145519801-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:30:09Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='8e5d532f4af34acc9e3a7a08819bb089',uuid=02b3a680-d1bd-462f-95dc-aa0934c7ceac,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "70cec09c-3d65-4e69-8c35-7d82e75f4add", "address": "fa:16:3e:89:a4:68", "network": {"id": "15e538c5-1144-486f-bc43-86871ca19e5e", "bridge": "br-int", "label": "tempest-TaggedBootDevicesTest_v242-1912448327-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a5dd5f9ca5747618b386c87a40ede88", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap70cec09c-3d", "ovs_interfaceid": "70cec09c-3d65-4e69-8c35-7d82e75f4add", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec  6 02:30:40 np0005548731 nova_compute[232433]: 2025-12-06 07:30:40.238 232437 DEBUG nova.network.os_vif_util [None req-adb712c5-e0cf-4dc3-8c70-9016209a6f69 8e5d532f4af34acc9e3a7a08819bb089 1a5dd5f9ca5747618b386c87a40ede88 - - default default] Converting VIF {"id": "70cec09c-3d65-4e69-8c35-7d82e75f4add", "address": "fa:16:3e:89:a4:68", "network": {"id": "15e538c5-1144-486f-bc43-86871ca19e5e", "bridge": "br-int", "label": "tempest-TaggedBootDevicesTest_v242-1912448327-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a5dd5f9ca5747618b386c87a40ede88", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap70cec09c-3d", "ovs_interfaceid": "70cec09c-3d65-4e69-8c35-7d82e75f4add", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 02:30:40 np0005548731 nova_compute[232433]: 2025-12-06 07:30:40.238 232437 DEBUG nova.network.os_vif_util [None req-adb712c5-e0cf-4dc3-8c70-9016209a6f69 8e5d532f4af34acc9e3a7a08819bb089 1a5dd5f9ca5747618b386c87a40ede88 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:89:a4:68,bridge_name='br-int',has_traffic_filtering=True,id=70cec09c-3d65-4e69-8c35-7d82e75f4add,network=Network(15e538c5-1144-486f-bc43-86871ca19e5e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap70cec09c-3d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 02:30:40 np0005548731 nova_compute[232433]: 2025-12-06 07:30:40.239 232437 DEBUG os_vif [None req-adb712c5-e0cf-4dc3-8c70-9016209a6f69 8e5d532f4af34acc9e3a7a08819bb089 1a5dd5f9ca5747618b386c87a40ede88 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:89:a4:68,bridge_name='br-int',has_traffic_filtering=True,id=70cec09c-3d65-4e69-8c35-7d82e75f4add,network=Network(15e538c5-1144-486f-bc43-86871ca19e5e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap70cec09c-3d') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec  6 02:30:40 np0005548731 nova_compute[232433]: 2025-12-06 07:30:40.240 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:30:40 np0005548731 nova_compute[232433]: 2025-12-06 07:30:40.240 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:30:40 np0005548731 nova_compute[232433]: 2025-12-06 07:30:40.241 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  6 02:30:40 np0005548731 nova_compute[232433]: 2025-12-06 07:30:40.243 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:30:40 np0005548731 nova_compute[232433]: 2025-12-06 07:30:40.243 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap70cec09c-3d, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:30:40 np0005548731 nova_compute[232433]: 2025-12-06 07:30:40.244 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap70cec09c-3d, col_values=(('external_ids', {'iface-id': '70cec09c-3d65-4e69-8c35-7d82e75f4add', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:89:a4:68', 'vm-uuid': '02b3a680-d1bd-462f-95dc-aa0934c7ceac'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:30:40 np0005548731 nova_compute[232433]: 2025-12-06 07:30:40.245 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:30:40 np0005548731 NetworkManager[49182]: <info>  [1765006240.2465] manager: (tap70cec09c-3d): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/219)
Dec  6 02:30:40 np0005548731 nova_compute[232433]: 2025-12-06 07:30:40.248 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  6 02:30:40 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:30:40 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:30:40 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:30:40.247 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:30:40 np0005548731 nova_compute[232433]: 2025-12-06 07:30:40.254 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:30:40 np0005548731 nova_compute[232433]: 2025-12-06 07:30:40.255 232437 INFO os_vif [None req-adb712c5-e0cf-4dc3-8c70-9016209a6f69 8e5d532f4af34acc9e3a7a08819bb089 1a5dd5f9ca5747618b386c87a40ede88 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:89:a4:68,bridge_name='br-int',has_traffic_filtering=True,id=70cec09c-3d65-4e69-8c35-7d82e75f4add,network=Network(15e538c5-1144-486f-bc43-86871ca19e5e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap70cec09c-3d')#033[00m
Dec  6 02:30:40 np0005548731 nova_compute[232433]: 2025-12-06 07:30:40.256 232437 DEBUG nova.virt.libvirt.vif [None req-adb712c5-e0cf-4dc3-8c70-9016209a6f69 8e5d532f4af34acc9e3a7a08819bb089 1a5dd5f9ca5747618b386c87a40ede88 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T07:30:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-device-tagging-server-978600578',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-device-tagging-server-978600578',id=114,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBA4Nu6vj+a9h6J+HIG3bY+gVQqZOzYK629F5aAnRyfcs57jY2yCirotfYPKEDEfhO5X4Qoxtk+llC6W6Xx8bCVjzTNv5PNuAwtpZd6jzHApTipRVoLWw741uAxIfd5PHrg==',key_name='tempest-keypair-661966322',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1a5dd5f9ca5747618b386c87a40ede88',ramdisk_id='',reservation_id='r-5f0cqyp7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model
='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TaggedBootDevicesTest_v242-1145519801',owner_user_name='tempest-TaggedBootDevicesTest_v242-1145519801-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:30:09Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='8e5d532f4af34acc9e3a7a08819bb089',uuid=02b3a680-d1bd-462f-95dc-aa0934c7ceac,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "99fbd66e-87ac-42e3-8722-34d617ef9bb1", "address": "fa:16:3e:c3:05:bf", "network": {"id": "95ae8bf7-e7e5-4bbf-a4f2-58f1e2b487a5", "bridge": "br-int", "label": "tempest-device-tagging-net1-103242350", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.137", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a5dd5f9ca5747618b386c87a40ede88", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap99fbd66e-87", "ovs_interfaceid": "99fbd66e-87ac-42e3-8722-34d617ef9bb1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec  6 02:30:40 np0005548731 nova_compute[232433]: 2025-12-06 07:30:40.256 232437 DEBUG nova.network.os_vif_util [None req-adb712c5-e0cf-4dc3-8c70-9016209a6f69 8e5d532f4af34acc9e3a7a08819bb089 1a5dd5f9ca5747618b386c87a40ede88 - - default default] Converting VIF {"id": "99fbd66e-87ac-42e3-8722-34d617ef9bb1", "address": "fa:16:3e:c3:05:bf", "network": {"id": "95ae8bf7-e7e5-4bbf-a4f2-58f1e2b487a5", "bridge": "br-int", "label": "tempest-device-tagging-net1-103242350", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.137", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a5dd5f9ca5747618b386c87a40ede88", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap99fbd66e-87", "ovs_interfaceid": "99fbd66e-87ac-42e3-8722-34d617ef9bb1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 02:30:40 np0005548731 nova_compute[232433]: 2025-12-06 07:30:40.257 232437 DEBUG nova.network.os_vif_util [None req-adb712c5-e0cf-4dc3-8c70-9016209a6f69 8e5d532f4af34acc9e3a7a08819bb089 1a5dd5f9ca5747618b386c87a40ede88 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c3:05:bf,bridge_name='br-int',has_traffic_filtering=True,id=99fbd66e-87ac-42e3-8722-34d617ef9bb1,network=Network(95ae8bf7-e7e5-4bbf-a4f2-58f1e2b487a5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap99fbd66e-87') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 02:30:40 np0005548731 nova_compute[232433]: 2025-12-06 07:30:40.257 232437 DEBUG os_vif [None req-adb712c5-e0cf-4dc3-8c70-9016209a6f69 8e5d532f4af34acc9e3a7a08819bb089 1a5dd5f9ca5747618b386c87a40ede88 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c3:05:bf,bridge_name='br-int',has_traffic_filtering=True,id=99fbd66e-87ac-42e3-8722-34d617ef9bb1,network=Network(95ae8bf7-e7e5-4bbf-a4f2-58f1e2b487a5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap99fbd66e-87') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec  6 02:30:40 np0005548731 nova_compute[232433]: 2025-12-06 07:30:40.258 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:30:40 np0005548731 nova_compute[232433]: 2025-12-06 07:30:40.258 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:30:40 np0005548731 nova_compute[232433]: 2025-12-06 07:30:40.258 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  6 02:30:40 np0005548731 nova_compute[232433]: 2025-12-06 07:30:40.261 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:30:40 np0005548731 nova_compute[232433]: 2025-12-06 07:30:40.261 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap99fbd66e-87, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:30:40 np0005548731 nova_compute[232433]: 2025-12-06 07:30:40.262 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap99fbd66e-87, col_values=(('external_ids', {'iface-id': '99fbd66e-87ac-42e3-8722-34d617ef9bb1', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:c3:05:bf', 'vm-uuid': '02b3a680-d1bd-462f-95dc-aa0934c7ceac'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:30:40 np0005548731 nova_compute[232433]: 2025-12-06 07:30:40.263 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:30:40 np0005548731 NetworkManager[49182]: <info>  [1765006240.2639] manager: (tap99fbd66e-87): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/220)
Dec  6 02:30:40 np0005548731 nova_compute[232433]: 2025-12-06 07:30:40.265 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  6 02:30:40 np0005548731 nova_compute[232433]: 2025-12-06 07:30:40.270 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:30:40 np0005548731 nova_compute[232433]: 2025-12-06 07:30:40.270 232437 INFO os_vif [None req-adb712c5-e0cf-4dc3-8c70-9016209a6f69 8e5d532f4af34acc9e3a7a08819bb089 1a5dd5f9ca5747618b386c87a40ede88 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c3:05:bf,bridge_name='br-int',has_traffic_filtering=True,id=99fbd66e-87ac-42e3-8722-34d617ef9bb1,network=Network(95ae8bf7-e7e5-4bbf-a4f2-58f1e2b487a5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap99fbd66e-87')#033[00m
Dec  6 02:30:40 np0005548731 nova_compute[232433]: 2025-12-06 07:30:40.272 232437 DEBUG nova.virt.libvirt.vif [None req-adb712c5-e0cf-4dc3-8c70-9016209a6f69 8e5d532f4af34acc9e3a7a08819bb089 1a5dd5f9ca5747618b386c87a40ede88 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T07:30:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-device-tagging-server-978600578',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-device-tagging-server-978600578',id=114,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBA4Nu6vj+a9h6J+HIG3bY+gVQqZOzYK629F5aAnRyfcs57jY2yCirotfYPKEDEfhO5X4Qoxtk+llC6W6Xx8bCVjzTNv5PNuAwtpZd6jzHApTipRVoLWw741uAxIfd5PHrg==',key_name='tempest-keypair-661966322',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1a5dd5f9ca5747618b386c87a40ede88',ramdisk_id='',reservation_id='r-5f0cqyp7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model
='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TaggedBootDevicesTest_v242-1145519801',owner_user_name='tempest-TaggedBootDevicesTest_v242-1145519801-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:30:09Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='8e5d532f4af34acc9e3a7a08819bb089',uuid=02b3a680-d1bd-462f-95dc-aa0934c7ceac,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "7989285b-96a5-43e8-b98e-c0fe1badbf5e", "address": "fa:16:3e:e2:25:4d", "network": {"id": "95ae8bf7-e7e5-4bbf-a4f2-58f1e2b487a5", "bridge": "br-int", "label": "tempest-device-tagging-net1-103242350", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.231", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a5dd5f9ca5747618b386c87a40ede88", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7989285b-96", "ovs_interfaceid": "7989285b-96a5-43e8-b98e-c0fe1badbf5e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec  6 02:30:40 np0005548731 nova_compute[232433]: 2025-12-06 07:30:40.272 232437 DEBUG nova.network.os_vif_util [None req-adb712c5-e0cf-4dc3-8c70-9016209a6f69 8e5d532f4af34acc9e3a7a08819bb089 1a5dd5f9ca5747618b386c87a40ede88 - - default default] Converting VIF {"id": "7989285b-96a5-43e8-b98e-c0fe1badbf5e", "address": "fa:16:3e:e2:25:4d", "network": {"id": "95ae8bf7-e7e5-4bbf-a4f2-58f1e2b487a5", "bridge": "br-int", "label": "tempest-device-tagging-net1-103242350", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.231", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a5dd5f9ca5747618b386c87a40ede88", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7989285b-96", "ovs_interfaceid": "7989285b-96a5-43e8-b98e-c0fe1badbf5e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 02:30:40 np0005548731 nova_compute[232433]: 2025-12-06 07:30:40.273 232437 DEBUG nova.network.os_vif_util [None req-adb712c5-e0cf-4dc3-8c70-9016209a6f69 8e5d532f4af34acc9e3a7a08819bb089 1a5dd5f9ca5747618b386c87a40ede88 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e2:25:4d,bridge_name='br-int',has_traffic_filtering=True,id=7989285b-96a5-43e8-b98e-c0fe1badbf5e,network=Network(95ae8bf7-e7e5-4bbf-a4f2-58f1e2b487a5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap7989285b-96') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 02:30:40 np0005548731 nova_compute[232433]: 2025-12-06 07:30:40.273 232437 DEBUG os_vif [None req-adb712c5-e0cf-4dc3-8c70-9016209a6f69 8e5d532f4af34acc9e3a7a08819bb089 1a5dd5f9ca5747618b386c87a40ede88 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e2:25:4d,bridge_name='br-int',has_traffic_filtering=True,id=7989285b-96a5-43e8-b98e-c0fe1badbf5e,network=Network(95ae8bf7-e7e5-4bbf-a4f2-58f1e2b487a5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap7989285b-96') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec  6 02:30:40 np0005548731 nova_compute[232433]: 2025-12-06 07:30:40.273 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:30:40 np0005548731 nova_compute[232433]: 2025-12-06 07:30:40.274 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:30:40 np0005548731 nova_compute[232433]: 2025-12-06 07:30:40.274 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  6 02:30:40 np0005548731 nova_compute[232433]: 2025-12-06 07:30:40.276 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:30:40 np0005548731 nova_compute[232433]: 2025-12-06 07:30:40.277 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7989285b-96, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:30:40 np0005548731 nova_compute[232433]: 2025-12-06 07:30:40.277 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap7989285b-96, col_values=(('external_ids', {'iface-id': '7989285b-96a5-43e8-b98e-c0fe1badbf5e', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:e2:25:4d', 'vm-uuid': '02b3a680-d1bd-462f-95dc-aa0934c7ceac'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:30:40 np0005548731 nova_compute[232433]: 2025-12-06 07:30:40.278 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:30:40 np0005548731 NetworkManager[49182]: <info>  [1765006240.2797] manager: (tap7989285b-96): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/221)
Dec  6 02:30:40 np0005548731 nova_compute[232433]: 2025-12-06 07:30:40.281 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  6 02:30:40 np0005548731 nova_compute[232433]: 2025-12-06 07:30:40.285 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:30:40 np0005548731 nova_compute[232433]: 2025-12-06 07:30:40.286 232437 INFO os_vif [None req-adb712c5-e0cf-4dc3-8c70-9016209a6f69 8e5d532f4af34acc9e3a7a08819bb089 1a5dd5f9ca5747618b386c87a40ede88 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e2:25:4d,bridge_name='br-int',has_traffic_filtering=True,id=7989285b-96a5-43e8-b98e-c0fe1badbf5e,network=Network(95ae8bf7-e7e5-4bbf-a4f2-58f1e2b487a5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap7989285b-96')#033[00m
Dec  6 02:30:40 np0005548731 nova_compute[232433]: 2025-12-06 07:30:40.287 232437 DEBUG nova.virt.libvirt.vif [None req-adb712c5-e0cf-4dc3-8c70-9016209a6f69 8e5d532f4af34acc9e3a7a08819bb089 1a5dd5f9ca5747618b386c87a40ede88 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T07:30:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-device-tagging-server-978600578',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-device-tagging-server-978600578',id=114,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBA4Nu6vj+a9h6J+HIG3bY+gVQqZOzYK629F5aAnRyfcs57jY2yCirotfYPKEDEfhO5X4Qoxtk+llC6W6Xx8bCVjzTNv5PNuAwtpZd6jzHApTipRVoLWw741uAxIfd5PHrg==',key_name='tempest-keypair-661966322',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1a5dd5f9ca5747618b386c87a40ede88',ramdisk_id='',reservation_id='r-5f0cqyp7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model
='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TaggedBootDevicesTest_v242-1145519801',owner_user_name='tempest-TaggedBootDevicesTest_v242-1145519801-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:30:09Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='8e5d532f4af34acc9e3a7a08819bb089',uuid=02b3a680-d1bd-462f-95dc-aa0934c7ceac,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ee2de0a7-a321-4f41-868f-457ea7275ded", "address": "fa:16:3e:62:49:cc", "network": {"id": "95ae8bf7-e7e5-4bbf-a4f2-58f1e2b487a5", "bridge": "br-int", "label": "tempest-device-tagging-net1-103242350", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.249", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a5dd5f9ca5747618b386c87a40ede88", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapee2de0a7-a3", "ovs_interfaceid": "ee2de0a7-a321-4f41-868f-457ea7275ded", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec  6 02:30:40 np0005548731 nova_compute[232433]: 2025-12-06 07:30:40.287 232437 DEBUG nova.network.os_vif_util [None req-adb712c5-e0cf-4dc3-8c70-9016209a6f69 8e5d532f4af34acc9e3a7a08819bb089 1a5dd5f9ca5747618b386c87a40ede88 - - default default] Converting VIF {"id": "ee2de0a7-a321-4f41-868f-457ea7275ded", "address": "fa:16:3e:62:49:cc", "network": {"id": "95ae8bf7-e7e5-4bbf-a4f2-58f1e2b487a5", "bridge": "br-int", "label": "tempest-device-tagging-net1-103242350", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.249", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a5dd5f9ca5747618b386c87a40ede88", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapee2de0a7-a3", "ovs_interfaceid": "ee2de0a7-a321-4f41-868f-457ea7275ded", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 02:30:40 np0005548731 nova_compute[232433]: 2025-12-06 07:30:40.288 232437 DEBUG nova.network.os_vif_util [None req-adb712c5-e0cf-4dc3-8c70-9016209a6f69 8e5d532f4af34acc9e3a7a08819bb089 1a5dd5f9ca5747618b386c87a40ede88 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:62:49:cc,bridge_name='br-int',has_traffic_filtering=True,id=ee2de0a7-a321-4f41-868f-457ea7275ded,network=Network(95ae8bf7-e7e5-4bbf-a4f2-58f1e2b487a5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapee2de0a7-a3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 02:30:40 np0005548731 nova_compute[232433]: 2025-12-06 07:30:40.288 232437 DEBUG os_vif [None req-adb712c5-e0cf-4dc3-8c70-9016209a6f69 8e5d532f4af34acc9e3a7a08819bb089 1a5dd5f9ca5747618b386c87a40ede88 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:62:49:cc,bridge_name='br-int',has_traffic_filtering=True,id=ee2de0a7-a321-4f41-868f-457ea7275ded,network=Network(95ae8bf7-e7e5-4bbf-a4f2-58f1e2b487a5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapee2de0a7-a3') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec  6 02:30:40 np0005548731 nova_compute[232433]: 2025-12-06 07:30:40.288 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:30:40 np0005548731 nova_compute[232433]: 2025-12-06 07:30:40.289 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:30:40 np0005548731 nova_compute[232433]: 2025-12-06 07:30:40.289 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  6 02:30:40 np0005548731 nova_compute[232433]: 2025-12-06 07:30:40.290 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:30:40 np0005548731 nova_compute[232433]: 2025-12-06 07:30:40.291 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapee2de0a7-a3, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:30:40 np0005548731 nova_compute[232433]: 2025-12-06 07:30:40.291 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapee2de0a7-a3, col_values=(('external_ids', {'iface-id': 'ee2de0a7-a321-4f41-868f-457ea7275ded', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:62:49:cc', 'vm-uuid': '02b3a680-d1bd-462f-95dc-aa0934c7ceac'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:30:40 np0005548731 nova_compute[232433]: 2025-12-06 07:30:40.292 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:30:40 np0005548731 NetworkManager[49182]: <info>  [1765006240.2929] manager: (tapee2de0a7-a3): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/222)
Dec  6 02:30:40 np0005548731 nova_compute[232433]: 2025-12-06 07:30:40.294 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  6 02:30:40 np0005548731 nova_compute[232433]: 2025-12-06 07:30:40.302 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:30:40 np0005548731 nova_compute[232433]: 2025-12-06 07:30:40.303 232437 INFO os_vif [None req-adb712c5-e0cf-4dc3-8c70-9016209a6f69 8e5d532f4af34acc9e3a7a08819bb089 1a5dd5f9ca5747618b386c87a40ede88 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:62:49:cc,bridge_name='br-int',has_traffic_filtering=True,id=ee2de0a7-a321-4f41-868f-457ea7275ded,network=Network(95ae8bf7-e7e5-4bbf-a4f2-58f1e2b487a5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapee2de0a7-a3')#033[00m
Dec  6 02:30:40 np0005548731 nova_compute[232433]: 2025-12-06 07:30:40.304 232437 DEBUG nova.virt.libvirt.vif [None req-adb712c5-e0cf-4dc3-8c70-9016209a6f69 8e5d532f4af34acc9e3a7a08819bb089 1a5dd5f9ca5747618b386c87a40ede88 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T07:30:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-device-tagging-server-978600578',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-device-tagging-server-978600578',id=114,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBA4Nu6vj+a9h6J+HIG3bY+gVQqZOzYK629F5aAnRyfcs57jY2yCirotfYPKEDEfhO5X4Qoxtk+llC6W6Xx8bCVjzTNv5PNuAwtpZd6jzHApTipRVoLWw741uAxIfd5PHrg==',key_name='tempest-keypair-661966322',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1a5dd5f9ca5747618b386c87a40ede88',ramdisk_id='',reservation_id='r-5f0cqyp7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model
='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TaggedBootDevicesTest_v242-1145519801',owner_user_name='tempest-TaggedBootDevicesTest_v242-1145519801-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:30:09Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='8e5d532f4af34acc9e3a7a08819bb089',uuid=02b3a680-d1bd-462f-95dc-aa0934c7ceac,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "7aeffb1d-aa58-4c79-9768-bcd0a7dcce43", "address": "fa:16:3e:45:a8:03", "network": {"id": "95ae8bf7-e7e5-4bbf-a4f2-58f1e2b487a5", "bridge": "br-int", "label": "tempest-device-tagging-net1-103242350", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.213", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a5dd5f9ca5747618b386c87a40ede88", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7aeffb1d-aa", "ovs_interfaceid": "7aeffb1d-aa58-4c79-9768-bcd0a7dcce43", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec  6 02:30:40 np0005548731 nova_compute[232433]: 2025-12-06 07:30:40.304 232437 DEBUG nova.network.os_vif_util [None req-adb712c5-e0cf-4dc3-8c70-9016209a6f69 8e5d532f4af34acc9e3a7a08819bb089 1a5dd5f9ca5747618b386c87a40ede88 - - default default] Converting VIF {"id": "7aeffb1d-aa58-4c79-9768-bcd0a7dcce43", "address": "fa:16:3e:45:a8:03", "network": {"id": "95ae8bf7-e7e5-4bbf-a4f2-58f1e2b487a5", "bridge": "br-int", "label": "tempest-device-tagging-net1-103242350", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.213", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a5dd5f9ca5747618b386c87a40ede88", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7aeffb1d-aa", "ovs_interfaceid": "7aeffb1d-aa58-4c79-9768-bcd0a7dcce43", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 02:30:40 np0005548731 nova_compute[232433]: 2025-12-06 07:30:40.305 232437 DEBUG nova.network.os_vif_util [None req-adb712c5-e0cf-4dc3-8c70-9016209a6f69 8e5d532f4af34acc9e3a7a08819bb089 1a5dd5f9ca5747618b386c87a40ede88 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:45:a8:03,bridge_name='br-int',has_traffic_filtering=True,id=7aeffb1d-aa58-4c79-9768-bcd0a7dcce43,network=Network(95ae8bf7-e7e5-4bbf-a4f2-58f1e2b487a5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7aeffb1d-aa') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 02:30:40 np0005548731 nova_compute[232433]: 2025-12-06 07:30:40.305 232437 DEBUG os_vif [None req-adb712c5-e0cf-4dc3-8c70-9016209a6f69 8e5d532f4af34acc9e3a7a08819bb089 1a5dd5f9ca5747618b386c87a40ede88 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:45:a8:03,bridge_name='br-int',has_traffic_filtering=True,id=7aeffb1d-aa58-4c79-9768-bcd0a7dcce43,network=Network(95ae8bf7-e7e5-4bbf-a4f2-58f1e2b487a5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7aeffb1d-aa') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec  6 02:30:40 np0005548731 nova_compute[232433]: 2025-12-06 07:30:40.306 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:30:40 np0005548731 nova_compute[232433]: 2025-12-06 07:30:40.306 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:30:40 np0005548731 nova_compute[232433]: 2025-12-06 07:30:40.306 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  6 02:30:40 np0005548731 nova_compute[232433]: 2025-12-06 07:30:40.308 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:30:40 np0005548731 nova_compute[232433]: 2025-12-06 07:30:40.308 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7aeffb1d-aa, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:30:40 np0005548731 nova_compute[232433]: 2025-12-06 07:30:40.309 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap7aeffb1d-aa, col_values=(('external_ids', {'iface-id': '7aeffb1d-aa58-4c79-9768-bcd0a7dcce43', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:45:a8:03', 'vm-uuid': '02b3a680-d1bd-462f-95dc-aa0934c7ceac'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:30:40 np0005548731 nova_compute[232433]: 2025-12-06 07:30:40.310 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:30:40 np0005548731 NetworkManager[49182]: <info>  [1765006240.3110] manager: (tap7aeffb1d-aa): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/223)
Dec  6 02:30:40 np0005548731 nova_compute[232433]: 2025-12-06 07:30:40.312 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  6 02:30:40 np0005548731 nova_compute[232433]: 2025-12-06 07:30:40.323 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:30:40 np0005548731 nova_compute[232433]: 2025-12-06 07:30:40.324 232437 INFO os_vif [None req-adb712c5-e0cf-4dc3-8c70-9016209a6f69 8e5d532f4af34acc9e3a7a08819bb089 1a5dd5f9ca5747618b386c87a40ede88 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:45:a8:03,bridge_name='br-int',has_traffic_filtering=True,id=7aeffb1d-aa58-4c79-9768-bcd0a7dcce43,network=Network(95ae8bf7-e7e5-4bbf-a4f2-58f1e2b487a5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7aeffb1d-aa')#033[00m
Dec  6 02:30:40 np0005548731 nova_compute[232433]: 2025-12-06 07:30:40.324 232437 DEBUG nova.virt.libvirt.vif [None req-adb712c5-e0cf-4dc3-8c70-9016209a6f69 8e5d532f4af34acc9e3a7a08819bb089 1a5dd5f9ca5747618b386c87a40ede88 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T07:30:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-device-tagging-server-978600578',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-device-tagging-server-978600578',id=114,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBA4Nu6vj+a9h6J+HIG3bY+gVQqZOzYK629F5aAnRyfcs57jY2yCirotfYPKEDEfhO5X4Qoxtk+llC6W6Xx8bCVjzTNv5PNuAwtpZd6jzHApTipRVoLWw741uAxIfd5PHrg==',key_name='tempest-keypair-661966322',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1a5dd5f9ca5747618b386c87a40ede88',ramdisk_id='',reservation_id='r-5f0cqyp7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model
='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TaggedBootDevicesTest_v242-1145519801',owner_user_name='tempest-TaggedBootDevicesTest_v242-1145519801-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:30:09Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='8e5d532f4af34acc9e3a7a08819bb089',uuid=02b3a680-d1bd-462f-95dc-aa0934c7ceac,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "691d36f1-88c4-4e29-b5a7-059d5b66aee3", "address": "fa:16:3e:b2:aa:a0", "network": {"id": "8f455f90-843e-4bc7-94c8-f96455201770", "bridge": "br-int", "label": "tempest-device-tagging-net2-1809570820", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.2.2.100", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a5dd5f9ca5747618b386c87a40ede88", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap691d36f1-88", "ovs_interfaceid": "691d36f1-88c4-4e29-b5a7-059d5b66aee3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec  6 02:30:40 np0005548731 nova_compute[232433]: 2025-12-06 07:30:40.324 232437 DEBUG nova.network.os_vif_util [None req-adb712c5-e0cf-4dc3-8c70-9016209a6f69 8e5d532f4af34acc9e3a7a08819bb089 1a5dd5f9ca5747618b386c87a40ede88 - - default default] Converting VIF {"id": "691d36f1-88c4-4e29-b5a7-059d5b66aee3", "address": "fa:16:3e:b2:aa:a0", "network": {"id": "8f455f90-843e-4bc7-94c8-f96455201770", "bridge": "br-int", "label": "tempest-device-tagging-net2-1809570820", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.2.2.100", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a5dd5f9ca5747618b386c87a40ede88", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap691d36f1-88", "ovs_interfaceid": "691d36f1-88c4-4e29-b5a7-059d5b66aee3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 02:30:40 np0005548731 nova_compute[232433]: 2025-12-06 07:30:40.325 232437 DEBUG nova.network.os_vif_util [None req-adb712c5-e0cf-4dc3-8c70-9016209a6f69 8e5d532f4af34acc9e3a7a08819bb089 1a5dd5f9ca5747618b386c87a40ede88 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b2:aa:a0,bridge_name='br-int',has_traffic_filtering=True,id=691d36f1-88c4-4e29-b5a7-059d5b66aee3,network=Network(8f455f90-843e-4bc7-94c8-f96455201770),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap691d36f1-88') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 02:30:40 np0005548731 nova_compute[232433]: 2025-12-06 07:30:40.325 232437 DEBUG os_vif [None req-adb712c5-e0cf-4dc3-8c70-9016209a6f69 8e5d532f4af34acc9e3a7a08819bb089 1a5dd5f9ca5747618b386c87a40ede88 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b2:aa:a0,bridge_name='br-int',has_traffic_filtering=True,id=691d36f1-88c4-4e29-b5a7-059d5b66aee3,network=Network(8f455f90-843e-4bc7-94c8-f96455201770),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap691d36f1-88') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec  6 02:30:40 np0005548731 nova_compute[232433]: 2025-12-06 07:30:40.326 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:30:40 np0005548731 nova_compute[232433]: 2025-12-06 07:30:40.326 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:30:40 np0005548731 nova_compute[232433]: 2025-12-06 07:30:40.326 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  6 02:30:40 np0005548731 nova_compute[232433]: 2025-12-06 07:30:40.327 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:30:40 np0005548731 nova_compute[232433]: 2025-12-06 07:30:40.327 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap691d36f1-88, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:30:40 np0005548731 nova_compute[232433]: 2025-12-06 07:30:40.328 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap691d36f1-88, col_values=(('external_ids', {'iface-id': '691d36f1-88c4-4e29-b5a7-059d5b66aee3', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:b2:aa:a0', 'vm-uuid': '02b3a680-d1bd-462f-95dc-aa0934c7ceac'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:30:40 np0005548731 nova_compute[232433]: 2025-12-06 07:30:40.329 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:30:40 np0005548731 NetworkManager[49182]: <info>  [1765006240.3296] manager: (tap691d36f1-88): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/224)
Dec  6 02:30:40 np0005548731 nova_compute[232433]: 2025-12-06 07:30:40.331 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  6 02:30:40 np0005548731 nova_compute[232433]: 2025-12-06 07:30:40.344 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:30:40 np0005548731 nova_compute[232433]: 2025-12-06 07:30:40.345 232437 INFO os_vif [None req-adb712c5-e0cf-4dc3-8c70-9016209a6f69 8e5d532f4af34acc9e3a7a08819bb089 1a5dd5f9ca5747618b386c87a40ede88 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b2:aa:a0,bridge_name='br-int',has_traffic_filtering=True,id=691d36f1-88c4-4e29-b5a7-059d5b66aee3,network=Network(8f455f90-843e-4bc7-94c8-f96455201770),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap691d36f1-88')#033[00m
Dec  6 02:30:40 np0005548731 nova_compute[232433]: 2025-12-06 07:30:40.346 232437 DEBUG nova.virt.libvirt.vif [None req-adb712c5-e0cf-4dc3-8c70-9016209a6f69 8e5d532f4af34acc9e3a7a08819bb089 1a5dd5f9ca5747618b386c87a40ede88 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T07:30:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-device-tagging-server-978600578',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-device-tagging-server-978600578',id=114,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBA4Nu6vj+a9h6J+HIG3bY+gVQqZOzYK629F5aAnRyfcs57jY2yCirotfYPKEDEfhO5X4Qoxtk+llC6W6Xx8bCVjzTNv5PNuAwtpZd6jzHApTipRVoLWw741uAxIfd5PHrg==',key_name='tempest-keypair-661966322',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1a5dd5f9ca5747618b386c87a40ede88',ramdisk_id='',reservation_id='r-5f0cqyp7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model
='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TaggedBootDevicesTest_v242-1145519801',owner_user_name='tempest-TaggedBootDevicesTest_v242-1145519801-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:30:09Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='8e5d532f4af34acc9e3a7a08819bb089',uuid=02b3a680-d1bd-462f-95dc-aa0934c7ceac,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a935a2a5-757d-49f8-9924-1f5f12e03325", "address": "fa:16:3e:c4:17:f5", "network": {"id": "8f455f90-843e-4bc7-94c8-f96455201770", "bridge": "br-int", "label": "tempest-device-tagging-net2-1809570820", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.2.2.200", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a5dd5f9ca5747618b386c87a40ede88", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa935a2a5-75", "ovs_interfaceid": "a935a2a5-757d-49f8-9924-1f5f12e03325", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec  6 02:30:40 np0005548731 nova_compute[232433]: 2025-12-06 07:30:40.346 232437 DEBUG nova.network.os_vif_util [None req-adb712c5-e0cf-4dc3-8c70-9016209a6f69 8e5d532f4af34acc9e3a7a08819bb089 1a5dd5f9ca5747618b386c87a40ede88 - - default default] Converting VIF {"id": "a935a2a5-757d-49f8-9924-1f5f12e03325", "address": "fa:16:3e:c4:17:f5", "network": {"id": "8f455f90-843e-4bc7-94c8-f96455201770", "bridge": "br-int", "label": "tempest-device-tagging-net2-1809570820", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.2.2.200", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a5dd5f9ca5747618b386c87a40ede88", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa935a2a5-75", "ovs_interfaceid": "a935a2a5-757d-49f8-9924-1f5f12e03325", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 02:30:40 np0005548731 nova_compute[232433]: 2025-12-06 07:30:40.346 232437 DEBUG nova.network.os_vif_util [None req-adb712c5-e0cf-4dc3-8c70-9016209a6f69 8e5d532f4af34acc9e3a7a08819bb089 1a5dd5f9ca5747618b386c87a40ede88 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c4:17:f5,bridge_name='br-int',has_traffic_filtering=True,id=a935a2a5-757d-49f8-9924-1f5f12e03325,network=Network(8f455f90-843e-4bc7-94c8-f96455201770),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa935a2a5-75') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 02:30:40 np0005548731 nova_compute[232433]: 2025-12-06 07:30:40.347 232437 DEBUG os_vif [None req-adb712c5-e0cf-4dc3-8c70-9016209a6f69 8e5d532f4af34acc9e3a7a08819bb089 1a5dd5f9ca5747618b386c87a40ede88 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c4:17:f5,bridge_name='br-int',has_traffic_filtering=True,id=a935a2a5-757d-49f8-9924-1f5f12e03325,network=Network(8f455f90-843e-4bc7-94c8-f96455201770),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa935a2a5-75') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec  6 02:30:40 np0005548731 nova_compute[232433]: 2025-12-06 07:30:40.347 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:30:40 np0005548731 nova_compute[232433]: 2025-12-06 07:30:40.347 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:30:40 np0005548731 nova_compute[232433]: 2025-12-06 07:30:40.347 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  6 02:30:40 np0005548731 nova_compute[232433]: 2025-12-06 07:30:40.349 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:30:40 np0005548731 nova_compute[232433]: 2025-12-06 07:30:40.349 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa935a2a5-75, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:30:40 np0005548731 nova_compute[232433]: 2025-12-06 07:30:40.349 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapa935a2a5-75, col_values=(('external_ids', {'iface-id': 'a935a2a5-757d-49f8-9924-1f5f12e03325', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:c4:17:f5', 'vm-uuid': '02b3a680-d1bd-462f-95dc-aa0934c7ceac'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:30:40 np0005548731 nova_compute[232433]: 2025-12-06 07:30:40.350 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:30:40 np0005548731 NetworkManager[49182]: <info>  [1765006240.3509] manager: (tapa935a2a5-75): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/225)
Dec  6 02:30:40 np0005548731 nova_compute[232433]: 2025-12-06 07:30:40.352 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  6 02:30:40 np0005548731 nova_compute[232433]: 2025-12-06 07:30:40.369 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:30:40 np0005548731 nova_compute[232433]: 2025-12-06 07:30:40.370 232437 INFO os_vif [None req-adb712c5-e0cf-4dc3-8c70-9016209a6f69 8e5d532f4af34acc9e3a7a08819bb089 1a5dd5f9ca5747618b386c87a40ede88 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c4:17:f5,bridge_name='br-int',has_traffic_filtering=True,id=a935a2a5-757d-49f8-9924-1f5f12e03325,network=Network(8f455f90-843e-4bc7-94c8-f96455201770),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa935a2a5-75')#033[00m
Dec  6 02:30:40 np0005548731 nova_compute[232433]: 2025-12-06 07:30:40.426 232437 DEBUG nova.virt.libvirt.driver [None req-adb712c5-e0cf-4dc3-8c70-9016209a6f69 8e5d532f4af34acc9e3a7a08819bb089 1a5dd5f9ca5747618b386c87a40ede88 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  6 02:30:40 np0005548731 nova_compute[232433]: 2025-12-06 07:30:40.427 232437 DEBUG nova.virt.libvirt.driver [None req-adb712c5-e0cf-4dc3-8c70-9016209a6f69 8e5d532f4af34acc9e3a7a08819bb089 1a5dd5f9ca5747618b386c87a40ede88 - - default default] No BDM found with device name vdc, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  6 02:30:40 np0005548731 nova_compute[232433]: 2025-12-06 07:30:40.427 232437 DEBUG nova.virt.libvirt.driver [None req-adb712c5-e0cf-4dc3-8c70-9016209a6f69 8e5d532f4af34acc9e3a7a08819bb089 1a5dd5f9ca5747618b386c87a40ede88 - - default default] No VIF found with MAC fa:16:3e:89:a4:68, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec  6 02:30:40 np0005548731 nova_compute[232433]: 2025-12-06 07:30:40.427 232437 DEBUG nova.virt.libvirt.driver [None req-adb712c5-e0cf-4dc3-8c70-9016209a6f69 8e5d532f4af34acc9e3a7a08819bb089 1a5dd5f9ca5747618b386c87a40ede88 - - default default] No VIF found with MAC fa:16:3e:45:a8:03, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec  6 02:30:40 np0005548731 nova_compute[232433]: 2025-12-06 07:30:40.428 232437 INFO nova.virt.libvirt.driver [None req-adb712c5-e0cf-4dc3-8c70-9016209a6f69 8e5d532f4af34acc9e3a7a08819bb089 1a5dd5f9ca5747618b386c87a40ede88 - - default default] [instance: 02b3a680-d1bd-462f-95dc-aa0934c7ceac] Using config drive#033[00m
Dec  6 02:30:40 np0005548731 nova_compute[232433]: 2025-12-06 07:30:40.447 232437 DEBUG nova.storage.rbd_utils [None req-adb712c5-e0cf-4dc3-8c70-9016209a6f69 8e5d532f4af34acc9e3a7a08819bb089 1a5dd5f9ca5747618b386c87a40ede88 - - default default] rbd image 02b3a680-d1bd-462f-95dc-aa0934c7ceac_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:30:41 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:30:41 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:30:41 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:30:41.435 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:30:41 np0005548731 nova_compute[232433]: 2025-12-06 07:30:41.495 232437 INFO nova.virt.libvirt.driver [None req-adb712c5-e0cf-4dc3-8c70-9016209a6f69 8e5d532f4af34acc9e3a7a08819bb089 1a5dd5f9ca5747618b386c87a40ede88 - - default default] [instance: 02b3a680-d1bd-462f-95dc-aa0934c7ceac] Creating config drive at /var/lib/nova/instances/02b3a680-d1bd-462f-95dc-aa0934c7ceac/disk.config#033[00m
Dec  6 02:30:41 np0005548731 nova_compute[232433]: 2025-12-06 07:30:41.501 232437 DEBUG oslo_concurrency.processutils [None req-adb712c5-e0cf-4dc3-8c70-9016209a6f69 8e5d532f4af34acc9e3a7a08819bb089 1a5dd5f9ca5747618b386c87a40ede88 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/02b3a680-d1bd-462f-95dc-aa0934c7ceac/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp123tkgaj execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:30:41 np0005548731 nova_compute[232433]: 2025-12-06 07:30:41.633 232437 DEBUG oslo_concurrency.processutils [None req-adb712c5-e0cf-4dc3-8c70-9016209a6f69 8e5d532f4af34acc9e3a7a08819bb089 1a5dd5f9ca5747618b386c87a40ede88 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/02b3a680-d1bd-462f-95dc-aa0934c7ceac/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp123tkgaj" returned: 0 in 0.132s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:30:41 np0005548731 nova_compute[232433]: 2025-12-06 07:30:41.657 232437 DEBUG nova.storage.rbd_utils [None req-adb712c5-e0cf-4dc3-8c70-9016209a6f69 8e5d532f4af34acc9e3a7a08819bb089 1a5dd5f9ca5747618b386c87a40ede88 - - default default] rbd image 02b3a680-d1bd-462f-95dc-aa0934c7ceac_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:30:41 np0005548731 nova_compute[232433]: 2025-12-06 07:30:41.660 232437 DEBUG oslo_concurrency.processutils [None req-adb712c5-e0cf-4dc3-8c70-9016209a6f69 8e5d532f4af34acc9e3a7a08819bb089 1a5dd5f9ca5747618b386c87a40ede88 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/02b3a680-d1bd-462f-95dc-aa0934c7ceac/disk.config 02b3a680-d1bd-462f-95dc-aa0934c7ceac_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:30:42 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:30:42 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:30:42 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:30:42.251 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:30:42 np0005548731 nova_compute[232433]: 2025-12-06 07:30:42.477 232437 DEBUG oslo_concurrency.processutils [None req-adb712c5-e0cf-4dc3-8c70-9016209a6f69 8e5d532f4af34acc9e3a7a08819bb089 1a5dd5f9ca5747618b386c87a40ede88 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/02b3a680-d1bd-462f-95dc-aa0934c7ceac/disk.config 02b3a680-d1bd-462f-95dc-aa0934c7ceac_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.816s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:30:42 np0005548731 nova_compute[232433]: 2025-12-06 07:30:42.477 232437 INFO nova.virt.libvirt.driver [None req-adb712c5-e0cf-4dc3-8c70-9016209a6f69 8e5d532f4af34acc9e3a7a08819bb089 1a5dd5f9ca5747618b386c87a40ede88 - - default default] [instance: 02b3a680-d1bd-462f-95dc-aa0934c7ceac] Deleting local config drive /var/lib/nova/instances/02b3a680-d1bd-462f-95dc-aa0934c7ceac/disk.config because it was imported into RBD.#033[00m
Dec  6 02:30:42 np0005548731 NetworkManager[49182]: <info>  [1765006242.5404] manager: (tap70cec09c-3d): new Tun device (/org/freedesktop/NetworkManager/Devices/226)
Dec  6 02:30:42 np0005548731 kernel: tap70cec09c-3d: entered promiscuous mode
Dec  6 02:30:42 np0005548731 NetworkManager[49182]: <info>  [1765006242.5568] manager: (tap99fbd66e-87): new Tun device (/org/freedesktop/NetworkManager/Devices/227)
Dec  6 02:30:42 np0005548731 ovn_controller[133927]: 2025-12-06T07:30:42Z|00460|binding|INFO|Claiming lport 70cec09c-3d65-4e69-8c35-7d82e75f4add for this chassis.
Dec  6 02:30:42 np0005548731 ovn_controller[133927]: 2025-12-06T07:30:42Z|00461|binding|INFO|70cec09c-3d65-4e69-8c35-7d82e75f4add: Claiming fa:16:3e:89:a4:68 10.100.0.11
Dec  6 02:30:42 np0005548731 nova_compute[232433]: 2025-12-06 07:30:42.560 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:30:42 np0005548731 nova_compute[232433]: 2025-12-06 07:30:42.563 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:30:42 np0005548731 nova_compute[232433]: 2025-12-06 07:30:42.567 232437 DEBUG nova.network.neutron [req-e9d674a0-457a-44ac-953e-747eb1952b69 req-24f7bfea-baa3-4c8e-a401-fec4d7665b31 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 02b3a680-d1bd-462f-95dc-aa0934c7ceac] Updated VIF entry in instance network info cache for port 691d36f1-88c4-4e29-b5a7-059d5b66aee3. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  6 02:30:42 np0005548731 nova_compute[232433]: 2025-12-06 07:30:42.568 232437 DEBUG nova.network.neutron [req-e9d674a0-457a-44ac-953e-747eb1952b69 req-24f7bfea-baa3-4c8e-a401-fec4d7665b31 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 02b3a680-d1bd-462f-95dc-aa0934c7ceac] Updating instance_info_cache with network_info: [{"id": "70cec09c-3d65-4e69-8c35-7d82e75f4add", "address": "fa:16:3e:89:a4:68", "network": {"id": "15e538c5-1144-486f-bc43-86871ca19e5e", "bridge": "br-int", "label": "tempest-TaggedBootDevicesTest_v242-1912448327-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a5dd5f9ca5747618b386c87a40ede88", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap70cec09c-3d", "ovs_interfaceid": "70cec09c-3d65-4e69-8c35-7d82e75f4add", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "99fbd66e-87ac-42e3-8722-34d617ef9bb1", "address": "fa:16:3e:c3:05:bf", "network": {"id": "95ae8bf7-e7e5-4bbf-a4f2-58f1e2b487a5", "bridge": "br-int", "label": "tempest-device-tagging-net1-103242350", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.137", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a5dd5f9ca5747618b386c87a40ede88", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap99fbd66e-87", "ovs_interfaceid": "99fbd66e-87ac-42e3-8722-34d617ef9bb1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}, {"id": "7989285b-96a5-43e8-b98e-c0fe1badbf5e", "address": "fa:16:3e:e2:25:4d", "network": {"id": "95ae8bf7-e7e5-4bbf-a4f2-58f1e2b487a5", "bridge": "br-int", "label": "tempest-device-tagging-net1-103242350", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.231", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a5dd5f9ca5747618b386c87a40ede88", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7989285b-96", "ovs_interfaceid": "7989285b-96a5-43e8-b98e-c0fe1badbf5e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}, {"id": "ee2de0a7-a321-4f41-868f-457ea7275ded", "address": "fa:16:3e:62:49:cc", "network": {"id": "95ae8bf7-e7e5-4bbf-a4f2-58f1e2b487a5", "bridge": "br-int", "label": "tempest-device-tagging-net1-103242350", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.249", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a5dd5f9ca5747618b386c87a40ede88", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapee2de0a7-a3", "ovs_interfaceid": "ee2de0a7-a321-4f41-868f-457ea7275ded", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "7aeffb1d-aa58-4c79-9768-bcd0a7dcce43", "address": "fa:16:3e:45:a8:03", "network": {"id": "95ae8bf7-e7e5-4bbf-a4f2-58f1e2b487a5", "bridge": "br-int", "label": "tempest-device-tagging-net1-103242350", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.213", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a5dd5f9ca5747618b386c87a40ede88", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7aeffb1d-aa", "ovs_interfaceid": "7aeffb1d-aa58-4c79-9768-bcd0a7dcce43", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "691d36f1-88c4-4e29-b5a7-059d5b66aee3", "address": "fa:16:3e:b2:aa:a0", "network": {"id": "8f455f90-843e-4bc7-94c8-f96455201770", "bridge": "br-int", "label": "tempest-device-tagging-net2-1809570820", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.2.2.100", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a5dd5f9ca5747618b386c87a40ede88", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap691d36f1-88", "ovs_interfaceid": "691d36f1-88c4-4e29-b5a7-059d5b66aee3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "a935a2a5-757d-49f8-9924-1f5f12e03325", "address": "fa:16:3e:c4:17:f5", "network": {"id": "8f455f90-843e-4bc7-94c8-f96455201770", "bridge": "br-int", "label": "tempest-device-tagging-net2-1809570820", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.2.2.200", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a5dd5f9ca5747618b386c87a40ede88", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa935a2a5-75", "ovs_interfaceid": "a935a2a5-757d-49f8-9924-1f5f12e03325", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 02:30:42 np0005548731 systemd-udevd[279847]: Network interface NamePolicy= disabled on kernel command line.
Dec  6 02:30:42 np0005548731 systemd-udevd[279846]: Network interface NamePolicy= disabled on kernel command line.
Dec  6 02:30:42 np0005548731 kernel: tap7989285b-96: entered promiscuous mode
Dec  6 02:30:42 np0005548731 kernel: tap99fbd66e-87: entered promiscuous mode
Dec  6 02:30:42 np0005548731 NetworkManager[49182]: <info>  [1765006242.5764] manager: (tap7989285b-96): new Tun device (/org/freedesktop/NetworkManager/Devices/228)
Dec  6 02:30:42 np0005548731 systemd-udevd[279850]: Network interface NamePolicy= disabled on kernel command line.
Dec  6 02:30:42 np0005548731 NetworkManager[49182]: <info>  [1765006242.5868] device (tap70cec09c-3d): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec  6 02:30:42 np0005548731 NetworkManager[49182]: <info>  [1765006242.5882] device (tap99fbd66e-87): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec  6 02:30:42 np0005548731 NetworkManager[49182]: <info>  [1765006242.5889] device (tap70cec09c-3d): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec  6 02:30:42 np0005548731 NetworkManager[49182]: <info>  [1765006242.5893] device (tap99fbd66e-87): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec  6 02:30:42 np0005548731 NetworkManager[49182]: <info>  [1765006242.5915] manager: (tapee2de0a7-a3): new Tun device (/org/freedesktop/NetworkManager/Devices/229)
Dec  6 02:30:42 np0005548731 NetworkManager[49182]: <info>  [1765006242.5924] device (tap7989285b-96): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec  6 02:30:42 np0005548731 NetworkManager[49182]: <info>  [1765006242.5930] device (tap7989285b-96): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec  6 02:30:42 np0005548731 NetworkManager[49182]: <info>  [1765006242.6089] manager: (tap7aeffb1d-aa): new Tun device (/org/freedesktop/NetworkManager/Devices/230)
Dec  6 02:30:42 np0005548731 NetworkManager[49182]: <info>  [1765006242.6263] manager: (tap691d36f1-88): new Tun device (/org/freedesktop/NetworkManager/Devices/231)
Dec  6 02:30:42 np0005548731 NetworkManager[49182]: <info>  [1765006242.6422] manager: (tapa935a2a5-75): new Tun device (/org/freedesktop/NetworkManager/Devices/232)
Dec  6 02:30:42 np0005548731 ovn_controller[133927]: 2025-12-06T07:30:42Z|00462|if_status|INFO|Not updating pb chassis for 7989285b-96a5-43e8-b98e-c0fe1badbf5e now as sb is readonly
Dec  6 02:30:42 np0005548731 nova_compute[232433]: 2025-12-06 07:30:42.644 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:30:42 np0005548731 nova_compute[232433]: 2025-12-06 07:30:42.648 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:30:42 np0005548731 kernel: tapa935a2a5-75: entered promiscuous mode
Dec  6 02:30:42 np0005548731 kernel: tap7aeffb1d-aa: entered promiscuous mode
Dec  6 02:30:42 np0005548731 kernel: tapee2de0a7-a3: entered promiscuous mode
Dec  6 02:30:42 np0005548731 kernel: tap691d36f1-88: entered promiscuous mode
Dec  6 02:30:42 np0005548731 NetworkManager[49182]: <info>  [1765006242.6519] device (tap7aeffb1d-aa): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec  6 02:30:42 np0005548731 NetworkManager[49182]: <info>  [1765006242.6536] device (tapee2de0a7-a3): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec  6 02:30:42 np0005548731 NetworkManager[49182]: <info>  [1765006242.6550] device (tap691d36f1-88): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec  6 02:30:42 np0005548731 nova_compute[232433]: 2025-12-06 07:30:42.654 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:30:42 np0005548731 NetworkManager[49182]: <info>  [1765006242.6570] device (tap7aeffb1d-aa): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec  6 02:30:42 np0005548731 NetworkManager[49182]: <info>  [1765006242.6576] device (tapee2de0a7-a3): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec  6 02:30:42 np0005548731 NetworkManager[49182]: <info>  [1765006242.6588] device (tap691d36f1-88): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec  6 02:30:42 np0005548731 NetworkManager[49182]: <info>  [1765006242.6608] device (tapa935a2a5-75): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec  6 02:30:42 np0005548731 NetworkManager[49182]: <info>  [1765006242.6621] device (tapa935a2a5-75): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec  6 02:30:42 np0005548731 nova_compute[232433]: 2025-12-06 07:30:42.664 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:30:42 np0005548731 nova_compute[232433]: 2025-12-06 07:30:42.669 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:30:42 np0005548731 ovn_controller[133927]: 2025-12-06T07:30:42Z|00463|binding|INFO|Claiming lport 7989285b-96a5-43e8-b98e-c0fe1badbf5e for this chassis.
Dec  6 02:30:42 np0005548731 ovn_controller[133927]: 2025-12-06T07:30:42Z|00464|binding|INFO|7989285b-96a5-43e8-b98e-c0fe1badbf5e: Claiming fa:16:3e:e2:25:4d 10.1.1.231
Dec  6 02:30:42 np0005548731 ovn_controller[133927]: 2025-12-06T07:30:42Z|00465|binding|INFO|Claiming lport a935a2a5-757d-49f8-9924-1f5f12e03325 for this chassis.
Dec  6 02:30:42 np0005548731 ovn_controller[133927]: 2025-12-06T07:30:42Z|00466|binding|INFO|a935a2a5-757d-49f8-9924-1f5f12e03325: Claiming fa:16:3e:c4:17:f5 10.2.2.200
Dec  6 02:30:42 np0005548731 ovn_controller[133927]: 2025-12-06T07:30:42Z|00467|binding|INFO|Claiming lport 99fbd66e-87ac-42e3-8722-34d617ef9bb1 for this chassis.
Dec  6 02:30:42 np0005548731 ovn_controller[133927]: 2025-12-06T07:30:42Z|00468|binding|INFO|99fbd66e-87ac-42e3-8722-34d617ef9bb1: Claiming fa:16:3e:c3:05:bf 10.1.1.137
Dec  6 02:30:42 np0005548731 ovn_controller[133927]: 2025-12-06T07:30:42Z|00469|binding|INFO|Claiming lport ee2de0a7-a321-4f41-868f-457ea7275ded for this chassis.
Dec  6 02:30:42 np0005548731 ovn_controller[133927]: 2025-12-06T07:30:42Z|00470|binding|INFO|ee2de0a7-a321-4f41-868f-457ea7275ded: Claiming fa:16:3e:62:49:cc 10.1.1.249
Dec  6 02:30:42 np0005548731 ovn_controller[133927]: 2025-12-06T07:30:42Z|00471|binding|INFO|Claiming lport 691d36f1-88c4-4e29-b5a7-059d5b66aee3 for this chassis.
Dec  6 02:30:42 np0005548731 ovn_controller[133927]: 2025-12-06T07:30:42Z|00472|binding|INFO|691d36f1-88c4-4e29-b5a7-059d5b66aee3: Claiming fa:16:3e:b2:aa:a0 10.2.2.100
Dec  6 02:30:42 np0005548731 ovn_controller[133927]: 2025-12-06T07:30:42Z|00473|binding|INFO|Claiming lport 7aeffb1d-aa58-4c79-9768-bcd0a7dcce43 for this chassis.
Dec  6 02:30:42 np0005548731 ovn_controller[133927]: 2025-12-06T07:30:42Z|00474|binding|INFO|7aeffb1d-aa58-4c79-9768-bcd0a7dcce43: Claiming fa:16:3e:45:a8:03 10.1.1.213
Dec  6 02:30:42 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:30:42.682 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:89:a4:68 10.100.0.11'], port_security=['fa:16:3e:89:a4:68 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '02b3a680-d1bd-462f-95dc-aa0934c7ceac', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-15e538c5-1144-486f-bc43-86871ca19e5e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1a5dd5f9ca5747618b386c87a40ede88', 'neutron:revision_number': '2', 'neutron:security_group_ids': '676548a0-8cc7-4c39-b257-246676a83552', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4091bb9f-da77-4225-9ab9-5795e8064344, chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], logical_port=70cec09c-3d65-4e69-8c35-7d82e75f4add) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 02:30:42 np0005548731 systemd-machined[195355]: New machine qemu-51-instance-00000072.
Dec  6 02:30:42 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:30:42.683 143965 INFO neutron.agent.ovn.metadata.agent [-] Port 70cec09c-3d65-4e69-8c35-7d82e75f4add in datapath 15e538c5-1144-486f-bc43-86871ca19e5e bound to our chassis#033[00m
Dec  6 02:30:42 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:30:42.685 143965 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 15e538c5-1144-486f-bc43-86871ca19e5e#033[00m
Dec  6 02:30:42 np0005548731 ovn_controller[133927]: 2025-12-06T07:30:42Z|00475|binding|INFO|Setting lport 70cec09c-3d65-4e69-8c35-7d82e75f4add ovn-installed in OVS
Dec  6 02:30:42 np0005548731 ovn_controller[133927]: 2025-12-06T07:30:42Z|00476|binding|INFO|Setting lport 70cec09c-3d65-4e69-8c35-7d82e75f4add up in Southbound
Dec  6 02:30:42 np0005548731 nova_compute[232433]: 2025-12-06 07:30:42.690 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:30:42 np0005548731 nova_compute[232433]: 2025-12-06 07:30:42.698 232437 DEBUG oslo_concurrency.lockutils [req-e9d674a0-457a-44ac-953e-747eb1952b69 req-24f7bfea-baa3-4c8e-a401-fec4d7665b31 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-02b3a680-d1bd-462f-95dc-aa0934c7ceac" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 02:30:42 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:30:42.699 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c4:17:f5 10.2.2.200'], port_security=['fa:16:3e:c4:17:f5 10.2.2.200'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.2.2.200/24', 'neutron:device_id': '02b3a680-d1bd-462f-95dc-aa0934c7ceac', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8f455f90-843e-4bc7-94c8-f96455201770', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1a5dd5f9ca5747618b386c87a40ede88', 'neutron:revision_number': '2', 'neutron:security_group_ids': '676548a0-8cc7-4c39-b257-246676a83552', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f637f42f-be62-4dc5-920d-f9478e4e92e7, chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], logical_port=a935a2a5-757d-49f8-9924-1f5f12e03325) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 02:30:42 np0005548731 nova_compute[232433]: 2025-12-06 07:30:42.699 232437 DEBUG nova.compute.manager [req-e9d674a0-457a-44ac-953e-747eb1952b69 req-24f7bfea-baa3-4c8e-a401-fec4d7665b31 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 02b3a680-d1bd-462f-95dc-aa0934c7ceac] Received event network-changed-a935a2a5-757d-49f8-9924-1f5f12e03325 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:30:42 np0005548731 nova_compute[232433]: 2025-12-06 07:30:42.699 232437 DEBUG nova.compute.manager [req-e9d674a0-457a-44ac-953e-747eb1952b69 req-24f7bfea-baa3-4c8e-a401-fec4d7665b31 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 02b3a680-d1bd-462f-95dc-aa0934c7ceac] Refreshing instance network info cache due to event network-changed-a935a2a5-757d-49f8-9924-1f5f12e03325. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  6 02:30:42 np0005548731 nova_compute[232433]: 2025-12-06 07:30:42.699 232437 DEBUG oslo_concurrency.lockutils [req-e9d674a0-457a-44ac-953e-747eb1952b69 req-24f7bfea-baa3-4c8e-a401-fec4d7665b31 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-02b3a680-d1bd-462f-95dc-aa0934c7ceac" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 02:30:42 np0005548731 nova_compute[232433]: 2025-12-06 07:30:42.700 232437 DEBUG oslo_concurrency.lockutils [req-e9d674a0-457a-44ac-953e-747eb1952b69 req-24f7bfea-baa3-4c8e-a401-fec4d7665b31 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-02b3a680-d1bd-462f-95dc-aa0934c7ceac" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 02:30:42 np0005548731 nova_compute[232433]: 2025-12-06 07:30:42.700 232437 DEBUG nova.network.neutron [req-e9d674a0-457a-44ac-953e-747eb1952b69 req-24f7bfea-baa3-4c8e-a401-fec4d7665b31 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 02b3a680-d1bd-462f-95dc-aa0934c7ceac] Refreshing network info cache for port a935a2a5-757d-49f8-9924-1f5f12e03325 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  6 02:30:42 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:30:42.700 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e2:25:4d 10.1.1.231'], port_security=['fa:16:3e:e2:25:4d 10.1.1.231'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-TaggedBootDevicesTest_v242-438388958', 'neutron:cidrs': '10.1.1.231/24', 'neutron:device_id': '02b3a680-d1bd-462f-95dc-aa0934c7ceac', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-95ae8bf7-e7e5-4bbf-a4f2-58f1e2b487a5', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-TaggedBootDevicesTest_v242-438388958', 'neutron:project_id': '1a5dd5f9ca5747618b386c87a40ede88', 'neutron:revision_number': '2', 'neutron:security_group_ids': '9bb4bf60-5951-4a22-9fe5-bf4066fde3a9', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7516919c-9b05-49db-ac3d-0398155a9aa3, chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], logical_port=7989285b-96a5-43e8-b98e-c0fe1badbf5e) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 02:30:42 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:30:42.701 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:62:49:cc 10.1.1.249'], port_security=['fa:16:3e:62:49:cc 10.1.1.249'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.1.1.249/24', 'neutron:device_id': '02b3a680-d1bd-462f-95dc-aa0934c7ceac', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-95ae8bf7-e7e5-4bbf-a4f2-58f1e2b487a5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1a5dd5f9ca5747618b386c87a40ede88', 'neutron:revision_number': '2', 'neutron:security_group_ids': '676548a0-8cc7-4c39-b257-246676a83552', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7516919c-9b05-49db-ac3d-0398155a9aa3, chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], logical_port=ee2de0a7-a321-4f41-868f-457ea7275ded) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 02:30:42 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:30:42.703 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c3:05:bf 10.1.1.137'], port_security=['fa:16:3e:c3:05:bf 10.1.1.137'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-TaggedBootDevicesTest_v242-767891495', 'neutron:cidrs': '10.1.1.137/24', 'neutron:device_id': '02b3a680-d1bd-462f-95dc-aa0934c7ceac', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-95ae8bf7-e7e5-4bbf-a4f2-58f1e2b487a5', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-TaggedBootDevicesTest_v242-767891495', 'neutron:project_id': '1a5dd5f9ca5747618b386c87a40ede88', 'neutron:revision_number': '2', 'neutron:security_group_ids': '9bb4bf60-5951-4a22-9fe5-bf4066fde3a9', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7516919c-9b05-49db-ac3d-0398155a9aa3, chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], logical_port=99fbd66e-87ac-42e3-8722-34d617ef9bb1) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 02:30:42 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:30:42.704 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:45:a8:03 10.1.1.213'], port_security=['fa:16:3e:45:a8:03 10.1.1.213'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.1.1.213/24', 'neutron:device_id': '02b3a680-d1bd-462f-95dc-aa0934c7ceac', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-95ae8bf7-e7e5-4bbf-a4f2-58f1e2b487a5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1a5dd5f9ca5747618b386c87a40ede88', 'neutron:revision_number': '2', 'neutron:security_group_ids': '676548a0-8cc7-4c39-b257-246676a83552', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7516919c-9b05-49db-ac3d-0398155a9aa3, chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], logical_port=7aeffb1d-aa58-4c79-9768-bcd0a7dcce43) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 02:30:42 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:30:42.706 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b2:aa:a0 10.2.2.100'], port_security=['fa:16:3e:b2:aa:a0 10.2.2.100'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.2.2.100/24', 'neutron:device_id': '02b3a680-d1bd-462f-95dc-aa0934c7ceac', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8f455f90-843e-4bc7-94c8-f96455201770', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1a5dd5f9ca5747618b386c87a40ede88', 'neutron:revision_number': '2', 'neutron:security_group_ids': '676548a0-8cc7-4c39-b257-246676a83552', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f637f42f-be62-4dc5-920d-f9478e4e92e7, chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], logical_port=691d36f1-88c4-4e29-b5a7-059d5b66aee3) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 02:30:42 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:30:42.699 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[be3f7be3-f940-458d-94db-cf273685d848]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:30:42 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:30:42.708 143965 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap15e538c5-11 in ovnmeta-15e538c5-1144-486f-bc43-86871ca19e5e namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Dec  6 02:30:42 np0005548731 systemd[1]: Started Virtual Machine qemu-51-instance-00000072.
Dec  6 02:30:42 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:30:42.712 236668 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap15e538c5-10 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Dec  6 02:30:42 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:30:42.712 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[5882a687-6e45-489a-b207-f3a6d7f81b76]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:30:42 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:30:42.713 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[4c67f01b-e321-497a-876c-922bbec418e3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:30:42 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:30:42.726 144079 DEBUG oslo.privsep.daemon [-] privsep: reply[566941b0-ebb3-4671-a901-6bde4d7d0bac]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:30:42 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:30:42.753 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[0221a24b-c031-48ba-ad2a-306ffef4ed12]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:30:42 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:30:42.782 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[cf890ccc-add9-4d54-8949-5ccb233710e5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:30:42 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:30:42.792 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[cde4f782-23b1-4c6b-93e5-3f409f04d7b2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:30:42 np0005548731 NetworkManager[49182]: <info>  [1765006242.7967] manager: (tap15e538c5-10): new Veth device (/org/freedesktop/NetworkManager/Devices/233)
Dec  6 02:30:42 np0005548731 ovn_controller[133927]: 2025-12-06T07:30:42Z|00477|binding|INFO|Setting lport 7aeffb1d-aa58-4c79-9768-bcd0a7dcce43 ovn-installed in OVS
Dec  6 02:30:42 np0005548731 ovn_controller[133927]: 2025-12-06T07:30:42Z|00478|binding|INFO|Setting lport 7aeffb1d-aa58-4c79-9768-bcd0a7dcce43 up in Southbound
Dec  6 02:30:42 np0005548731 ovn_controller[133927]: 2025-12-06T07:30:42Z|00479|binding|INFO|Setting lport 691d36f1-88c4-4e29-b5a7-059d5b66aee3 ovn-installed in OVS
Dec  6 02:30:42 np0005548731 ovn_controller[133927]: 2025-12-06T07:30:42Z|00480|binding|INFO|Setting lport 691d36f1-88c4-4e29-b5a7-059d5b66aee3 up in Southbound
Dec  6 02:30:42 np0005548731 ovn_controller[133927]: 2025-12-06T07:30:42Z|00481|binding|INFO|Setting lport 7989285b-96a5-43e8-b98e-c0fe1badbf5e ovn-installed in OVS
Dec  6 02:30:42 np0005548731 ovn_controller[133927]: 2025-12-06T07:30:42Z|00482|binding|INFO|Setting lport 7989285b-96a5-43e8-b98e-c0fe1badbf5e up in Southbound
Dec  6 02:30:42 np0005548731 ovn_controller[133927]: 2025-12-06T07:30:42Z|00483|binding|INFO|Setting lport ee2de0a7-a321-4f41-868f-457ea7275ded ovn-installed in OVS
Dec  6 02:30:42 np0005548731 ovn_controller[133927]: 2025-12-06T07:30:42Z|00484|binding|INFO|Setting lport ee2de0a7-a321-4f41-868f-457ea7275ded up in Southbound
Dec  6 02:30:42 np0005548731 ovn_controller[133927]: 2025-12-06T07:30:42Z|00485|binding|INFO|Setting lport a935a2a5-757d-49f8-9924-1f5f12e03325 ovn-installed in OVS
Dec  6 02:30:42 np0005548731 ovn_controller[133927]: 2025-12-06T07:30:42Z|00486|binding|INFO|Setting lport a935a2a5-757d-49f8-9924-1f5f12e03325 up in Southbound
Dec  6 02:30:42 np0005548731 ovn_controller[133927]: 2025-12-06T07:30:42Z|00487|binding|INFO|Setting lport 99fbd66e-87ac-42e3-8722-34d617ef9bb1 ovn-installed in OVS
Dec  6 02:30:42 np0005548731 ovn_controller[133927]: 2025-12-06T07:30:42Z|00488|binding|INFO|Setting lport 99fbd66e-87ac-42e3-8722-34d617ef9bb1 up in Southbound
Dec  6 02:30:42 np0005548731 nova_compute[232433]: 2025-12-06 07:30:42.803 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:30:42 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:30:42.825 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[9f2c8d03-6f31-4e45-beb2-242ad6400701]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:30:42 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:30:42.828 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[6be7e712-72e2-429b-8914-4ccf625ed193]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:30:42 np0005548731 NetworkManager[49182]: <info>  [1765006242.8503] device (tap15e538c5-10): carrier: link connected
Dec  6 02:30:42 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:30:42.854 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[b8bdbd29-3a5c-4e58-98ed-db58bdaf36f6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:30:42 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:30:42.871 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[620e159f-a259-42ae-a5e8-a8a0cb76353c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap15e538c5-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:3f:66:7b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 153], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 650485, 'reachable_time': 31049, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 279905, 'error': None, 'target': 'ovnmeta-15e538c5-1144-486f-bc43-86871ca19e5e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:30:42 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:30:42.888 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[c906b72c-d178-4c9c-91b4-3a5ce37be86d]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe3f:667b'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 650485, 'tstamp': 650485}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 279906, 'error': None, 'target': 'ovnmeta-15e538c5-1144-486f-bc43-86871ca19e5e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:30:42 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:30:42.906 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[ec5e2195-b50c-43ef-804e-442775ced274]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap15e538c5-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:3f:66:7b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 153], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 650485, 'reachable_time': 31049, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 279907, 'error': None, 'target': 'ovnmeta-15e538c5-1144-486f-bc43-86871ca19e5e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:30:42 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:30:42.936 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[d22ec18a-c765-4074-b534-6e56556b80e8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:30:42 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:30:42.988 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[141d48cf-18ee-4c97-bc6c-bf8f71866a03]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:30:42 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:30:42.990 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap15e538c5-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:30:42 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:30:42.990 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  6 02:30:42 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:30:42.991 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap15e538c5-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:30:42 np0005548731 nova_compute[232433]: 2025-12-06 07:30:42.993 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:30:42 np0005548731 kernel: tap15e538c5-10: entered promiscuous mode
Dec  6 02:30:42 np0005548731 NetworkManager[49182]: <info>  [1765006242.9960] manager: (tap15e538c5-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/234)
Dec  6 02:30:42 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:30:42.996 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap15e538c5-10, col_values=(('external_ids', {'iface-id': '18906741-a7df-47da-baeb-390929c7d2aa'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:30:42 np0005548731 nova_compute[232433]: 2025-12-06 07:30:42.997 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:30:42 np0005548731 ovn_controller[133927]: 2025-12-06T07:30:42Z|00489|binding|INFO|Releasing lport 18906741-a7df-47da-baeb-390929c7d2aa from this chassis (sb_readonly=0)
Dec  6 02:30:43 np0005548731 nova_compute[232433]: 2025-12-06 07:30:43.011 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:30:43 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:30:43.012 143965 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/15e538c5-1144-486f-bc43-86871ca19e5e.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/15e538c5-1144-486f-bc43-86871ca19e5e.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Dec  6 02:30:43 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:30:43.013 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[66ec2604-1d93-4168-88f8-9d138b871ea8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:30:43 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:30:43.014 143965 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec  6 02:30:43 np0005548731 ovn_metadata_agent[143960]: global
Dec  6 02:30:43 np0005548731 ovn_metadata_agent[143960]:    log         /dev/log local0 debug
Dec  6 02:30:43 np0005548731 ovn_metadata_agent[143960]:    log-tag     haproxy-metadata-proxy-15e538c5-1144-486f-bc43-86871ca19e5e
Dec  6 02:30:43 np0005548731 ovn_metadata_agent[143960]:    user        root
Dec  6 02:30:43 np0005548731 ovn_metadata_agent[143960]:    group       root
Dec  6 02:30:43 np0005548731 ovn_metadata_agent[143960]:    maxconn     1024
Dec  6 02:30:43 np0005548731 ovn_metadata_agent[143960]:    pidfile     /var/lib/neutron/external/pids/15e538c5-1144-486f-bc43-86871ca19e5e.pid.haproxy
Dec  6 02:30:43 np0005548731 ovn_metadata_agent[143960]:    daemon
Dec  6 02:30:43 np0005548731 ovn_metadata_agent[143960]: 
Dec  6 02:30:43 np0005548731 ovn_metadata_agent[143960]: defaults
Dec  6 02:30:43 np0005548731 ovn_metadata_agent[143960]:    log global
Dec  6 02:30:43 np0005548731 ovn_metadata_agent[143960]:    mode http
Dec  6 02:30:43 np0005548731 ovn_metadata_agent[143960]:    option httplog
Dec  6 02:30:43 np0005548731 ovn_metadata_agent[143960]:    option dontlognull
Dec  6 02:30:43 np0005548731 ovn_metadata_agent[143960]:    option http-server-close
Dec  6 02:30:43 np0005548731 ovn_metadata_agent[143960]:    option forwardfor
Dec  6 02:30:43 np0005548731 ovn_metadata_agent[143960]:    retries                 3
Dec  6 02:30:43 np0005548731 ovn_metadata_agent[143960]:    timeout http-request    30s
Dec  6 02:30:43 np0005548731 ovn_metadata_agent[143960]:    timeout connect         30s
Dec  6 02:30:43 np0005548731 ovn_metadata_agent[143960]:    timeout client          32s
Dec  6 02:30:43 np0005548731 ovn_metadata_agent[143960]:    timeout server          32s
Dec  6 02:30:43 np0005548731 ovn_metadata_agent[143960]:    timeout http-keep-alive 30s
Dec  6 02:30:43 np0005548731 ovn_metadata_agent[143960]: 
Dec  6 02:30:43 np0005548731 ovn_metadata_agent[143960]: 
Dec  6 02:30:43 np0005548731 ovn_metadata_agent[143960]: listen listener
Dec  6 02:30:43 np0005548731 ovn_metadata_agent[143960]:    bind 169.254.169.254:80
Dec  6 02:30:43 np0005548731 ovn_metadata_agent[143960]:    server metadata /var/lib/neutron/metadata_proxy
Dec  6 02:30:43 np0005548731 ovn_metadata_agent[143960]:    http-request add-header X-OVN-Network-ID 15e538c5-1144-486f-bc43-86871ca19e5e
Dec  6 02:30:43 np0005548731 ovn_metadata_agent[143960]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Dec  6 02:30:43 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:30:43.015 143965 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-15e538c5-1144-486f-bc43-86871ca19e5e', 'env', 'PROCESS_TAG=haproxy-15e538c5-1144-486f-bc43-86871ca19e5e', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/15e538c5-1144-486f-bc43-86871ca19e5e.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Dec  6 02:30:43 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:30:43 np0005548731 podman[279975]: 2025-12-06 07:30:43.407570342 +0000 UTC m=+0.049172221 container create 5c903aa46b91f16591a9964e4edb57255c6461e4b40848e7a6c74e71090f7255 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-15e538c5-1144-486f-bc43-86871ca19e5e, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Dec  6 02:30:43 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:30:43 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:30:43 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:30:43.437 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:30:43 np0005548731 systemd[1]: Started libpod-conmon-5c903aa46b91f16591a9964e4edb57255c6461e4b40848e7a6c74e71090f7255.scope.
Dec  6 02:30:43 np0005548731 podman[279975]: 2025-12-06 07:30:43.380736187 +0000 UTC m=+0.022338066 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec  6 02:30:43 np0005548731 systemd[1]: Started libcrun container.
Dec  6 02:30:43 np0005548731 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b1a0a059e5981d7fac3a19a6464f61a2c43cb394b9f87812895a4a36b3a53ccc/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec  6 02:30:43 np0005548731 podman[279975]: 2025-12-06 07:30:43.494192976 +0000 UTC m=+0.135794875 container init 5c903aa46b91f16591a9964e4edb57255c6461e4b40848e7a6c74e71090f7255 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-15e538c5-1144-486f-bc43-86871ca19e5e, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  6 02:30:43 np0005548731 podman[279975]: 2025-12-06 07:30:43.500848009 +0000 UTC m=+0.142449878 container start 5c903aa46b91f16591a9964e4edb57255c6461e4b40848e7a6c74e71090f7255 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-15e538c5-1144-486f-bc43-86871ca19e5e, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  6 02:30:43 np0005548731 neutron-haproxy-ovnmeta-15e538c5-1144-486f-bc43-86871ca19e5e[279990]: [NOTICE]   (279994) : New worker (279996) forked
Dec  6 02:30:43 np0005548731 neutron-haproxy-ovnmeta-15e538c5-1144-486f-bc43-86871ca19e5e[279990]: [NOTICE]   (279994) : Loading success.
Dec  6 02:30:43 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:30:43.562 143965 INFO neutron.agent.ovn.metadata.agent [-] Port a935a2a5-757d-49f8-9924-1f5f12e03325 in datapath 8f455f90-843e-4bc7-94c8-f96455201770 unbound from our chassis#033[00m
Dec  6 02:30:43 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:30:43.564 143965 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 8f455f90-843e-4bc7-94c8-f96455201770#033[00m
Dec  6 02:30:43 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:30:43.575 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[d24f29b7-74fd-42fc-bc44-9967fed2d624]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:30:43 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:30:43.576 143965 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap8f455f90-81 in ovnmeta-8f455f90-843e-4bc7-94c8-f96455201770 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Dec  6 02:30:43 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:30:43.578 236668 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap8f455f90-80 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Dec  6 02:30:43 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:30:43.578 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[c4462ad0-3190-4994-a9a6-a043de33148f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:30:43 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:30:43.579 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[8d05dd7f-77ca-4c28-ac5f-49ab383e6249]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:30:43 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:30:43.590 144079 DEBUG oslo.privsep.daemon [-] privsep: reply[c16f4944-036c-4c55-82d8-127054bb5b07]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:30:43 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:30:43.604 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[4b8afcb1-e1ed-4612-9001-0a5229c6fa77]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:30:43 np0005548731 nova_compute[232433]: 2025-12-06 07:30:43.614 232437 DEBUG nova.compute.manager [req-fada7f7e-337d-46d8-963b-02d063b00fcb req-0027bdc9-7853-4592-89d1-38de7aa1349c 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 02b3a680-d1bd-462f-95dc-aa0934c7ceac] Received event network-vif-plugged-ee2de0a7-a321-4f41-868f-457ea7275ded external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:30:43 np0005548731 nova_compute[232433]: 2025-12-06 07:30:43.615 232437 DEBUG oslo_concurrency.lockutils [req-fada7f7e-337d-46d8-963b-02d063b00fcb req-0027bdc9-7853-4592-89d1-38de7aa1349c 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "02b3a680-d1bd-462f-95dc-aa0934c7ceac-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:30:43 np0005548731 nova_compute[232433]: 2025-12-06 07:30:43.617 232437 DEBUG oslo_concurrency.lockutils [req-fada7f7e-337d-46d8-963b-02d063b00fcb req-0027bdc9-7853-4592-89d1-38de7aa1349c 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "02b3a680-d1bd-462f-95dc-aa0934c7ceac-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:30:43 np0005548731 nova_compute[232433]: 2025-12-06 07:30:43.617 232437 DEBUG oslo_concurrency.lockutils [req-fada7f7e-337d-46d8-963b-02d063b00fcb req-0027bdc9-7853-4592-89d1-38de7aa1349c 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "02b3a680-d1bd-462f-95dc-aa0934c7ceac-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:30:43 np0005548731 nova_compute[232433]: 2025-12-06 07:30:43.617 232437 DEBUG nova.compute.manager [req-fada7f7e-337d-46d8-963b-02d063b00fcb req-0027bdc9-7853-4592-89d1-38de7aa1349c 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 02b3a680-d1bd-462f-95dc-aa0934c7ceac] Processing event network-vif-plugged-ee2de0a7-a321-4f41-868f-457ea7275ded _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Dec  6 02:30:43 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:30:43.632 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[220d7b43-46c4-4548-a8e3-860458e91d42]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:30:43 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:30:43.637 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[b6c31d96-f9d1-43e2-8fd3-d6fdc1950dd0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:30:43 np0005548731 NetworkManager[49182]: <info>  [1765006243.6386] manager: (tap8f455f90-80): new Veth device (/org/freedesktop/NetworkManager/Devices/235)
Dec  6 02:30:43 np0005548731 systemd-udevd[279897]: Network interface NamePolicy= disabled on kernel command line.
Dec  6 02:30:43 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:30:43.671 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[14544c84-9056-4a5a-a2f7-5a40c1dd36b0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:30:43 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:30:43.676 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[a7a354d6-bf68-4f1c-be26-6ecdcc38612d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:30:43 np0005548731 nova_compute[232433]: 2025-12-06 07:30:43.693 232437 DEBUG nova.compute.manager [req-103aef4b-de67-4186-bdf7-24cf82c39791 req-1e85f36b-7b64-4928-bde2-e6ab4eb9812b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 02b3a680-d1bd-462f-95dc-aa0934c7ceac] Received event network-vif-plugged-70cec09c-3d65-4e69-8c35-7d82e75f4add external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:30:43 np0005548731 nova_compute[232433]: 2025-12-06 07:30:43.693 232437 DEBUG oslo_concurrency.lockutils [req-103aef4b-de67-4186-bdf7-24cf82c39791 req-1e85f36b-7b64-4928-bde2-e6ab4eb9812b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "02b3a680-d1bd-462f-95dc-aa0934c7ceac-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:30:43 np0005548731 nova_compute[232433]: 2025-12-06 07:30:43.693 232437 DEBUG oslo_concurrency.lockutils [req-103aef4b-de67-4186-bdf7-24cf82c39791 req-1e85f36b-7b64-4928-bde2-e6ab4eb9812b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "02b3a680-d1bd-462f-95dc-aa0934c7ceac-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:30:43 np0005548731 nova_compute[232433]: 2025-12-06 07:30:43.694 232437 DEBUG oslo_concurrency.lockutils [req-103aef4b-de67-4186-bdf7-24cf82c39791 req-1e85f36b-7b64-4928-bde2-e6ab4eb9812b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "02b3a680-d1bd-462f-95dc-aa0934c7ceac-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:30:43 np0005548731 nova_compute[232433]: 2025-12-06 07:30:43.694 232437 DEBUG nova.compute.manager [req-103aef4b-de67-4186-bdf7-24cf82c39791 req-1e85f36b-7b64-4928-bde2-e6ab4eb9812b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 02b3a680-d1bd-462f-95dc-aa0934c7ceac] Processing event network-vif-plugged-70cec09c-3d65-4e69-8c35-7d82e75f4add _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Dec  6 02:30:43 np0005548731 NetworkManager[49182]: <info>  [1765006243.7019] device (tap8f455f90-80): carrier: link connected
Dec  6 02:30:43 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:30:43.707 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[3498a22d-6169-40e2-adf9-2158db9dec03]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:30:43 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:30:43.723 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[e805457a-4239-4f13-82fa-dff7ca38e765]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8f455f90-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2e:a4:42'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 154], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 650570, 'reachable_time': 19022, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 280015, 'error': None, 'target': 'ovnmeta-8f455f90-843e-4bc7-94c8-f96455201770', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:30:43 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:30:43.737 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[2e4fc255-e2b0-426f-953a-199593c00f88]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe2e:a442'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 650570, 'tstamp': 650570}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 280016, 'error': None, 'target': 'ovnmeta-8f455f90-843e-4bc7-94c8-f96455201770', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:30:43 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:30:43.752 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[002ac42c-b4b5-4d07-857f-d4b3a4be9210]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8f455f90-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2e:a4:42'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 154], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 650570, 'reachable_time': 19022, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 280017, 'error': None, 'target': 'ovnmeta-8f455f90-843e-4bc7-94c8-f96455201770', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:30:43 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:30:43.779 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[40d2ef18-9845-4dff-8b5b-c6bb5a9dcf1e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:30:43 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:30:43.843 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[9896d2c3-23d3-49d4-a362-290d64cb1908]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:30:43 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:30:43.845 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8f455f90-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:30:43 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:30:43.845 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  6 02:30:43 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:30:43.846 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8f455f90-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:30:43 np0005548731 NetworkManager[49182]: <info>  [1765006243.8486] manager: (tap8f455f90-80): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/236)
Dec  6 02:30:43 np0005548731 nova_compute[232433]: 2025-12-06 07:30:43.848 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:30:43 np0005548731 kernel: tap8f455f90-80: entered promiscuous mode
Dec  6 02:30:43 np0005548731 nova_compute[232433]: 2025-12-06 07:30:43.851 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:30:43 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:30:43.852 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap8f455f90-80, col_values=(('external_ids', {'iface-id': '621b0b6d-f0c1-4f16-afad-2a740006082b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:30:43 np0005548731 nova_compute[232433]: 2025-12-06 07:30:43.853 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:30:43 np0005548731 ovn_controller[133927]: 2025-12-06T07:30:43Z|00490|binding|INFO|Releasing lport 621b0b6d-f0c1-4f16-afad-2a740006082b from this chassis (sb_readonly=0)
Dec  6 02:30:43 np0005548731 nova_compute[232433]: 2025-12-06 07:30:43.857 232437 DEBUG nova.compute.manager [req-5e6d55c5-15d8-4d78-8666-fca41fc9ae2a req-88ee2e97-ef43-4295-aac5-ff306112ebfd 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 02b3a680-d1bd-462f-95dc-aa0934c7ceac] Received event network-vif-plugged-a935a2a5-757d-49f8-9924-1f5f12e03325 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:30:43 np0005548731 nova_compute[232433]: 2025-12-06 07:30:43.858 232437 DEBUG oslo_concurrency.lockutils [req-5e6d55c5-15d8-4d78-8666-fca41fc9ae2a req-88ee2e97-ef43-4295-aac5-ff306112ebfd 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "02b3a680-d1bd-462f-95dc-aa0934c7ceac-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:30:43 np0005548731 nova_compute[232433]: 2025-12-06 07:30:43.858 232437 DEBUG oslo_concurrency.lockutils [req-5e6d55c5-15d8-4d78-8666-fca41fc9ae2a req-88ee2e97-ef43-4295-aac5-ff306112ebfd 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "02b3a680-d1bd-462f-95dc-aa0934c7ceac-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:30:43 np0005548731 nova_compute[232433]: 2025-12-06 07:30:43.858 232437 DEBUG oslo_concurrency.lockutils [req-5e6d55c5-15d8-4d78-8666-fca41fc9ae2a req-88ee2e97-ef43-4295-aac5-ff306112ebfd 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "02b3a680-d1bd-462f-95dc-aa0934c7ceac-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:30:43 np0005548731 nova_compute[232433]: 2025-12-06 07:30:43.858 232437 DEBUG nova.compute.manager [req-5e6d55c5-15d8-4d78-8666-fca41fc9ae2a req-88ee2e97-ef43-4295-aac5-ff306112ebfd 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 02b3a680-d1bd-462f-95dc-aa0934c7ceac] Processing event network-vif-plugged-a935a2a5-757d-49f8-9924-1f5f12e03325 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Dec  6 02:30:43 np0005548731 nova_compute[232433]: 2025-12-06 07:30:43.867 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:30:43 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:30:43.868 143965 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/8f455f90-843e-4bc7-94c8-f96455201770.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/8f455f90-843e-4bc7-94c8-f96455201770.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Dec  6 02:30:43 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:30:43.869 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[f9a1bf23-2762-45ed-bd39-065e685fa1cf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:30:43 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:30:43.870 143965 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec  6 02:30:43 np0005548731 ovn_metadata_agent[143960]: global
Dec  6 02:30:43 np0005548731 ovn_metadata_agent[143960]:    log         /dev/log local0 debug
Dec  6 02:30:43 np0005548731 ovn_metadata_agent[143960]:    log-tag     haproxy-metadata-proxy-8f455f90-843e-4bc7-94c8-f96455201770
Dec  6 02:30:43 np0005548731 ovn_metadata_agent[143960]:    user        root
Dec  6 02:30:43 np0005548731 ovn_metadata_agent[143960]:    group       root
Dec  6 02:30:43 np0005548731 ovn_metadata_agent[143960]:    maxconn     1024
Dec  6 02:30:43 np0005548731 ovn_metadata_agent[143960]:    pidfile     /var/lib/neutron/external/pids/8f455f90-843e-4bc7-94c8-f96455201770.pid.haproxy
Dec  6 02:30:43 np0005548731 ovn_metadata_agent[143960]:    daemon
Dec  6 02:30:43 np0005548731 ovn_metadata_agent[143960]: 
Dec  6 02:30:43 np0005548731 ovn_metadata_agent[143960]: defaults
Dec  6 02:30:43 np0005548731 ovn_metadata_agent[143960]:    log global
Dec  6 02:30:43 np0005548731 ovn_metadata_agent[143960]:    mode http
Dec  6 02:30:43 np0005548731 ovn_metadata_agent[143960]:    option httplog
Dec  6 02:30:43 np0005548731 ovn_metadata_agent[143960]:    option dontlognull
Dec  6 02:30:43 np0005548731 ovn_metadata_agent[143960]:    option http-server-close
Dec  6 02:30:43 np0005548731 ovn_metadata_agent[143960]:    option forwardfor
Dec  6 02:30:43 np0005548731 ovn_metadata_agent[143960]:    retries                 3
Dec  6 02:30:43 np0005548731 ovn_metadata_agent[143960]:    timeout http-request    30s
Dec  6 02:30:43 np0005548731 ovn_metadata_agent[143960]:    timeout connect         30s
Dec  6 02:30:43 np0005548731 ovn_metadata_agent[143960]:    timeout client          32s
Dec  6 02:30:43 np0005548731 ovn_metadata_agent[143960]:    timeout server          32s
Dec  6 02:30:43 np0005548731 ovn_metadata_agent[143960]:    timeout http-keep-alive 30s
Dec  6 02:30:43 np0005548731 ovn_metadata_agent[143960]: 
Dec  6 02:30:43 np0005548731 ovn_metadata_agent[143960]: 
Dec  6 02:30:43 np0005548731 ovn_metadata_agent[143960]: listen listener
Dec  6 02:30:43 np0005548731 ovn_metadata_agent[143960]:    bind 169.254.169.254:80
Dec  6 02:30:43 np0005548731 ovn_metadata_agent[143960]:    server metadata /var/lib/neutron/metadata_proxy
Dec  6 02:30:43 np0005548731 ovn_metadata_agent[143960]:    http-request add-header X-OVN-Network-ID 8f455f90-843e-4bc7-94c8-f96455201770
Dec  6 02:30:43 np0005548731 ovn_metadata_agent[143960]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Dec  6 02:30:43 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:30:43.871 143965 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-8f455f90-843e-4bc7-94c8-f96455201770', 'env', 'PROCESS_TAG=haproxy-8f455f90-843e-4bc7-94c8-f96455201770', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/8f455f90-843e-4bc7-94c8-f96455201770.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Dec  6 02:30:44 np0005548731 nova_compute[232433]: 2025-12-06 07:30:44.045 232437 DEBUG nova.virt.driver [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Emitting event <LifecycleEvent: 1765006244.0446987, 02b3a680-d1bd-462f-95dc-aa0934c7ceac => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 02:30:44 np0005548731 nova_compute[232433]: 2025-12-06 07:30:44.045 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 02b3a680-d1bd-462f-95dc-aa0934c7ceac] VM Started (Lifecycle Event)#033[00m
Dec  6 02:30:44 np0005548731 nova_compute[232433]: 2025-12-06 07:30:44.067 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 02b3a680-d1bd-462f-95dc-aa0934c7ceac] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:30:44 np0005548731 nova_compute[232433]: 2025-12-06 07:30:44.071 232437 DEBUG nova.virt.driver [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Emitting event <LifecycleEvent: 1765006244.0448859, 02b3a680-d1bd-462f-95dc-aa0934c7ceac => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 02:30:44 np0005548731 nova_compute[232433]: 2025-12-06 07:30:44.072 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 02b3a680-d1bd-462f-95dc-aa0934c7ceac] VM Paused (Lifecycle Event)#033[00m
Dec  6 02:30:44 np0005548731 nova_compute[232433]: 2025-12-06 07:30:44.093 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 02b3a680-d1bd-462f-95dc-aa0934c7ceac] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:30:44 np0005548731 nova_compute[232433]: 2025-12-06 07:30:44.096 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 02b3a680-d1bd-462f-95dc-aa0934c7ceac] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  6 02:30:44 np0005548731 nova_compute[232433]: 2025-12-06 07:30:44.131 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 02b3a680-d1bd-462f-95dc-aa0934c7ceac] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  6 02:30:44 np0005548731 podman[280096]: 2025-12-06 07:30:44.204371919 +0000 UTC m=+0.045951443 container create 321ae91615c42277cc5c7d6e97fc12f1c9c30472fc356a07e13a84a982a3bbfd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8f455f90-843e-4bc7-94c8-f96455201770, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec  6 02:30:44 np0005548731 systemd[1]: Started libpod-conmon-321ae91615c42277cc5c7d6e97fc12f1c9c30472fc356a07e13a84a982a3bbfd.scope.
Dec  6 02:30:44 np0005548731 systemd[1]: Started libcrun container.
Dec  6 02:30:44 np0005548731 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c581daaef3162c1bf5df5f8c45681aa992379669ae7b173e849aa443791769c/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec  6 02:30:44 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:30:44 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:30:44 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:30:44.254 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:30:44 np0005548731 podman[280096]: 2025-12-06 07:30:44.257560717 +0000 UTC m=+0.099140261 container init 321ae91615c42277cc5c7d6e97fc12f1c9c30472fc356a07e13a84a982a3bbfd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8f455f90-843e-4bc7-94c8-f96455201770, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.schema-version=1.0)
Dec  6 02:30:44 np0005548731 podman[280096]: 2025-12-06 07:30:44.264607449 +0000 UTC m=+0.106186973 container start 321ae91615c42277cc5c7d6e97fc12f1c9c30472fc356a07e13a84a982a3bbfd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8f455f90-843e-4bc7-94c8-f96455201770, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec  6 02:30:44 np0005548731 podman[280096]: 2025-12-06 07:30:44.181366478 +0000 UTC m=+0.022946032 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec  6 02:30:44 np0005548731 neutron-haproxy-ovnmeta-8f455f90-843e-4bc7-94c8-f96455201770[280111]: [NOTICE]   (280116) : New worker (280118) forked
Dec  6 02:30:44 np0005548731 neutron-haproxy-ovnmeta-8f455f90-843e-4bc7-94c8-f96455201770[280111]: [NOTICE]   (280116) : Loading success.
Dec  6 02:30:44 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:30:44.322 143965 INFO neutron.agent.ovn.metadata.agent [-] Port 7989285b-96a5-43e8-b98e-c0fe1badbf5e in datapath 95ae8bf7-e7e5-4bbf-a4f2-58f1e2b487a5 unbound from our chassis#033[00m
Dec  6 02:30:44 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:30:44.324 143965 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 95ae8bf7-e7e5-4bbf-a4f2-58f1e2b487a5#033[00m
Dec  6 02:30:44 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:30:44.333 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[e15dfaa8-c0c5-4c20-a26d-8ec103201628]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:30:44 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:30:44.334 143965 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap95ae8bf7-e1 in ovnmeta-95ae8bf7-e7e5-4bbf-a4f2-58f1e2b487a5 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Dec  6 02:30:44 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:30:44.336 236668 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap95ae8bf7-e0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Dec  6 02:30:44 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:30:44.336 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[abbe1b7e-8a60-4041-8dd3-d6d80ee35ceb]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:30:44 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:30:44.337 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[aa8d2760-f9cf-41b8-9bbb-ee0da9f724b3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:30:44 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:30:44.347 144079 DEBUG oslo.privsep.daemon [-] privsep: reply[3cd4f098-d4a3-4a71-b5be-acf9f152a382]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:30:44 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:30:44.360 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[ab8635ee-4269-4402-8d27-e2ca97132a62]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:30:44 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:30:44.389 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[ce5b73b6-e499-4b2a-95dd-09dc35cb1e37]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:30:44 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:30:44.394 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[6bac9642-4a45-47d1-81ef-36c098c62581]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:30:44 np0005548731 NetworkManager[49182]: <info>  [1765006244.3958] manager: (tap95ae8bf7-e0): new Veth device (/org/freedesktop/NetworkManager/Devices/237)
Dec  6 02:30:44 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:30:44.425 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[7c1f689e-47d0-432d-8600-b6897ff65c46]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:30:44 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:30:44.428 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[3ea818e3-02e2-4aff-b57e-0edcb390ee05]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:30:44 np0005548731 NetworkManager[49182]: <info>  [1765006244.4490] device (tap95ae8bf7-e0): carrier: link connected
Dec  6 02:30:44 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:30:44.454 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[5450960f-d4b8-4f2c-8b95-96f12067bb5a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:30:44 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:30:44.469 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[f2618791-daaf-4fa6-b4f4-c10501dd896f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap95ae8bf7-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ff:3a:78'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 155], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 650645, 'reachable_time': 37702, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 280137, 'error': None, 'target': 'ovnmeta-95ae8bf7-e7e5-4bbf-a4f2-58f1e2b487a5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:30:44 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:30:44.483 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[b5f35d92-1acb-479c-9f09-eed7cbcb7684]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feff:3a78'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 650645, 'tstamp': 650645}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 280138, 'error': None, 'target': 'ovnmeta-95ae8bf7-e7e5-4bbf-a4f2-58f1e2b487a5', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:30:44 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:30:44.498 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[cced63e1-f81e-4061-a608-e38257a35cae]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap95ae8bf7-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ff:3a:78'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 155], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 650645, 'reachable_time': 37702, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 280139, 'error': None, 'target': 'ovnmeta-95ae8bf7-e7e5-4bbf-a4f2-58f1e2b487a5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:30:44 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:30:44.525 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[5981f794-e0e7-409d-9dde-9e23e929cf0d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:30:44 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:30:44.584 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[0f1722b4-55d3-40d9-8ca9-2a1947bc9b8c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:30:44 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:30:44.586 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap95ae8bf7-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:30:44 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:30:44.586 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  6 02:30:44 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:30:44.587 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap95ae8bf7-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:30:44 np0005548731 nova_compute[232433]: 2025-12-06 07:30:44.588 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:30:44 np0005548731 kernel: tap95ae8bf7-e0: entered promiscuous mode
Dec  6 02:30:44 np0005548731 NetworkManager[49182]: <info>  [1765006244.5893] manager: (tap95ae8bf7-e0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/238)
Dec  6 02:30:44 np0005548731 nova_compute[232433]: 2025-12-06 07:30:44.591 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:30:44 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:30:44.591 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap95ae8bf7-e0, col_values=(('external_ids', {'iface-id': 'cd44e1b4-dbd0-4520-ab3c-e03fc0a44edf'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:30:44 np0005548731 nova_compute[232433]: 2025-12-06 07:30:44.592 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:30:44 np0005548731 ovn_controller[133927]: 2025-12-06T07:30:44Z|00491|binding|INFO|Releasing lport cd44e1b4-dbd0-4520-ab3c-e03fc0a44edf from this chassis (sb_readonly=0)
Dec  6 02:30:44 np0005548731 nova_compute[232433]: 2025-12-06 07:30:44.594 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:30:44 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:30:44.594 143965 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/95ae8bf7-e7e5-4bbf-a4f2-58f1e2b487a5.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/95ae8bf7-e7e5-4bbf-a4f2-58f1e2b487a5.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Dec  6 02:30:44 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:30:44.595 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[b4cc5d34-b58b-4162-a6cd-aae1c5363e58]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:30:44 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:30:44.596 143965 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec  6 02:30:44 np0005548731 ovn_metadata_agent[143960]: global
Dec  6 02:30:44 np0005548731 ovn_metadata_agent[143960]:    log         /dev/log local0 debug
Dec  6 02:30:44 np0005548731 ovn_metadata_agent[143960]:    log-tag     haproxy-metadata-proxy-95ae8bf7-e7e5-4bbf-a4f2-58f1e2b487a5
Dec  6 02:30:44 np0005548731 ovn_metadata_agent[143960]:    user        root
Dec  6 02:30:44 np0005548731 ovn_metadata_agent[143960]:    group       root
Dec  6 02:30:44 np0005548731 ovn_metadata_agent[143960]:    maxconn     1024
Dec  6 02:30:44 np0005548731 ovn_metadata_agent[143960]:    pidfile     /var/lib/neutron/external/pids/95ae8bf7-e7e5-4bbf-a4f2-58f1e2b487a5.pid.haproxy
Dec  6 02:30:44 np0005548731 ovn_metadata_agent[143960]:    daemon
Dec  6 02:30:44 np0005548731 ovn_metadata_agent[143960]: 
Dec  6 02:30:44 np0005548731 ovn_metadata_agent[143960]: defaults
Dec  6 02:30:44 np0005548731 ovn_metadata_agent[143960]:    log global
Dec  6 02:30:44 np0005548731 ovn_metadata_agent[143960]:    mode http
Dec  6 02:30:44 np0005548731 ovn_metadata_agent[143960]:    option httplog
Dec  6 02:30:44 np0005548731 ovn_metadata_agent[143960]:    option dontlognull
Dec  6 02:30:44 np0005548731 ovn_metadata_agent[143960]:    option http-server-close
Dec  6 02:30:44 np0005548731 ovn_metadata_agent[143960]:    option forwardfor
Dec  6 02:30:44 np0005548731 ovn_metadata_agent[143960]:    retries                 3
Dec  6 02:30:44 np0005548731 ovn_metadata_agent[143960]:    timeout http-request    30s
Dec  6 02:30:44 np0005548731 ovn_metadata_agent[143960]:    timeout connect         30s
Dec  6 02:30:44 np0005548731 ovn_metadata_agent[143960]:    timeout client          32s
Dec  6 02:30:44 np0005548731 ovn_metadata_agent[143960]:    timeout server          32s
Dec  6 02:30:44 np0005548731 ovn_metadata_agent[143960]:    timeout http-keep-alive 30s
Dec  6 02:30:44 np0005548731 ovn_metadata_agent[143960]: 
Dec  6 02:30:44 np0005548731 ovn_metadata_agent[143960]: 
Dec  6 02:30:44 np0005548731 ovn_metadata_agent[143960]: listen listener
Dec  6 02:30:44 np0005548731 ovn_metadata_agent[143960]:    bind 169.254.169.254:80
Dec  6 02:30:44 np0005548731 ovn_metadata_agent[143960]:    server metadata /var/lib/neutron/metadata_proxy
Dec  6 02:30:44 np0005548731 ovn_metadata_agent[143960]:    http-request add-header X-OVN-Network-ID 95ae8bf7-e7e5-4bbf-a4f2-58f1e2b487a5
Dec  6 02:30:44 np0005548731 ovn_metadata_agent[143960]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Dec  6 02:30:44 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:30:44.596 143965 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-95ae8bf7-e7e5-4bbf-a4f2-58f1e2b487a5', 'env', 'PROCESS_TAG=haproxy-95ae8bf7-e7e5-4bbf-a4f2-58f1e2b487a5', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/95ae8bf7-e7e5-4bbf-a4f2-58f1e2b487a5.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Dec  6 02:30:44 np0005548731 nova_compute[232433]: 2025-12-06 07:30:44.607 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:30:44 np0005548731 podman[280171]: 2025-12-06 07:30:44.934242763 +0000 UTC m=+0.042593501 container create c84c8cddff60c5acc43e37b1b4068bdd9a2149ce96e0addfc8e82b6fb1d19415 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-95ae8bf7-e7e5-4bbf-a4f2-58f1e2b487a5, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec  6 02:30:44 np0005548731 systemd[1]: Started libpod-conmon-c84c8cddff60c5acc43e37b1b4068bdd9a2149ce96e0addfc8e82b6fb1d19415.scope.
Dec  6 02:30:44 np0005548731 systemd[1]: Started libcrun container.
Dec  6 02:30:44 np0005548731 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f056ec2555c7c24796754ecae4c3892c2a14bc1c0d7e6e7abd3efcdca79656a5/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec  6 02:30:44 np0005548731 podman[280171]: 2025-12-06 07:30:44.993522339 +0000 UTC m=+0.101873097 container init c84c8cddff60c5acc43e37b1b4068bdd9a2149ce96e0addfc8e82b6fb1d19415 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-95ae8bf7-e7e5-4bbf-a4f2-58f1e2b487a5, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  6 02:30:45 np0005548731 podman[280171]: 2025-12-06 07:30:45.001039473 +0000 UTC m=+0.109390231 container start c84c8cddff60c5acc43e37b1b4068bdd9a2149ce96e0addfc8e82b6fb1d19415 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-95ae8bf7-e7e5-4bbf-a4f2-58f1e2b487a5, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec  6 02:30:45 np0005548731 podman[280171]: 2025-12-06 07:30:44.913084546 +0000 UTC m=+0.021435314 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec  6 02:30:45 np0005548731 neutron-haproxy-ovnmeta-95ae8bf7-e7e5-4bbf-a4f2-58f1e2b487a5[280186]: [NOTICE]   (280190) : New worker (280192) forked
Dec  6 02:30:45 np0005548731 neutron-haproxy-ovnmeta-95ae8bf7-e7e5-4bbf-a4f2-58f1e2b487a5[280186]: [NOTICE]   (280190) : Loading success.
Dec  6 02:30:45 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:30:45.047 143965 INFO neutron.agent.ovn.metadata.agent [-] Port ee2de0a7-a321-4f41-868f-457ea7275ded in datapath 95ae8bf7-e7e5-4bbf-a4f2-58f1e2b487a5 unbound from our chassis#033[00m
Dec  6 02:30:45 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:30:45.049 143965 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 95ae8bf7-e7e5-4bbf-a4f2-58f1e2b487a5#033[00m
Dec  6 02:30:45 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:30:45.062 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[ce877672-739b-4fe8-8d3a-cc833665743a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:30:45 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:30:45.091 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[5bb7b8c6-0153-4307-aa6f-61bf0699a28f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:30:45 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:30:45.094 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[1db5133f-52f1-48a3-a800-a069b71f2597]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:30:45 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:30:45.123 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[07dfd233-819f-41c4-ada5-eb8dc1b97b97]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:30:45 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:30:45.137 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[93ec92ad-bf84-48c4-9742-e44c0e79d10c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap95ae8bf7-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ff:3a:78'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 5, 'rx_bytes': 180, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 5, 'rx_bytes': 180, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 155], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 650645, 'reachable_time': 37702, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 152, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 152, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 280206, 'error': None, 'target': 'ovnmeta-95ae8bf7-e7e5-4bbf-a4f2-58f1e2b487a5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:30:45 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:30:45.150 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[30962d7a-628b-4105-aed3-918d473227c9]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap95ae8bf7-e1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 650655, 'tstamp': 650655}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 280207, 'error': None, 'target': 'ovnmeta-95ae8bf7-e7e5-4bbf-a4f2-58f1e2b487a5', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 24, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.1.1.2'], ['IFA_LOCAL', '10.1.1.2'], ['IFA_BROADCAST', '10.1.1.255'], ['IFA_LABEL', 'tap95ae8bf7-e1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 650658, 'tstamp': 650658}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 280207, 'error': None, 'target': 'ovnmeta-95ae8bf7-e7e5-4bbf-a4f2-58f1e2b487a5', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:30:45 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:30:45.152 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap95ae8bf7-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:30:45 np0005548731 nova_compute[232433]: 2025-12-06 07:30:45.153 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:30:45 np0005548731 nova_compute[232433]: 2025-12-06 07:30:45.154 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:30:45 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:30:45.154 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap95ae8bf7-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:30:45 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:30:45.155 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  6 02:30:45 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:30:45.155 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap95ae8bf7-e0, col_values=(('external_ids', {'iface-id': 'cd44e1b4-dbd0-4520-ab3c-e03fc0a44edf'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:30:45 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:30:45.155 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  6 02:30:45 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:30:45.156 143965 INFO neutron.agent.ovn.metadata.agent [-] Port 99fbd66e-87ac-42e3-8722-34d617ef9bb1 in datapath 95ae8bf7-e7e5-4bbf-a4f2-58f1e2b487a5 unbound from our chassis#033[00m
Dec  6 02:30:45 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:30:45.158 143965 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 95ae8bf7-e7e5-4bbf-a4f2-58f1e2b487a5#033[00m
Dec  6 02:30:45 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:30:45.172 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[767a5db0-a098-4be2-9852-f51d57aaa939]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:30:45 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:30:45.203 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[ee3944ff-c2e0-4433-b933-6cceb86bf184]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:30:45 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:30:45.206 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[d0e9dffa-4b72-40b3-a72b-1201f6ebb832]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:30:45 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:30:45.235 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[cb90a7c5-79a7-4d67-8827-a67e5a689c46]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:30:45 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:30:45.251 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[d1d93a5d-09a9-4682-9d23-1bdb716126cf]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap95ae8bf7-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ff:3a:78'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 3, 'tx_packets': 7, 'rx_bytes': 266, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 3, 'tx_packets': 7, 'rx_bytes': 266, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 155], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 650645, 'reachable_time': 37702, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 3, 'inoctets': 224, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 3, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 224, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 3, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 280213, 'error': None, 'target': 'ovnmeta-95ae8bf7-e7e5-4bbf-a4f2-58f1e2b487a5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:30:45 np0005548731 nova_compute[232433]: 2025-12-06 07:30:45.259 232437 DEBUG nova.network.neutron [req-e9d674a0-457a-44ac-953e-747eb1952b69 req-24f7bfea-baa3-4c8e-a401-fec4d7665b31 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 02b3a680-d1bd-462f-95dc-aa0934c7ceac] Updated VIF entry in instance network info cache for port a935a2a5-757d-49f8-9924-1f5f12e03325. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  6 02:30:45 np0005548731 nova_compute[232433]: 2025-12-06 07:30:45.260 232437 DEBUG nova.network.neutron [req-e9d674a0-457a-44ac-953e-747eb1952b69 req-24f7bfea-baa3-4c8e-a401-fec4d7665b31 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 02b3a680-d1bd-462f-95dc-aa0934c7ceac] Updating instance_info_cache with network_info: [{"id": "70cec09c-3d65-4e69-8c35-7d82e75f4add", "address": "fa:16:3e:89:a4:68", "network": {"id": "15e538c5-1144-486f-bc43-86871ca19e5e", "bridge": "br-int", "label": "tempest-TaggedBootDevicesTest_v242-1912448327-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a5dd5f9ca5747618b386c87a40ede88", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap70cec09c-3d", "ovs_interfaceid": "70cec09c-3d65-4e69-8c35-7d82e75f4add", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "99fbd66e-87ac-42e3-8722-34d617ef9bb1", "address": "fa:16:3e:c3:05:bf", "network": {"id": "95ae8bf7-e7e5-4bbf-a4f2-58f1e2b487a5", "bridge": "br-int", "label": "tempest-device-tagging-net1-103242350", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.137", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": 
"1a5dd5f9ca5747618b386c87a40ede88", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap99fbd66e-87", "ovs_interfaceid": "99fbd66e-87ac-42e3-8722-34d617ef9bb1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}, {"id": "7989285b-96a5-43e8-b98e-c0fe1badbf5e", "address": "fa:16:3e:e2:25:4d", "network": {"id": "95ae8bf7-e7e5-4bbf-a4f2-58f1e2b487a5", "bridge": "br-int", "label": "tempest-device-tagging-net1-103242350", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.231", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a5dd5f9ca5747618b386c87a40ede88", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7989285b-96", "ovs_interfaceid": "7989285b-96a5-43e8-b98e-c0fe1badbf5e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}, {"id": "ee2de0a7-a321-4f41-868f-457ea7275ded", "address": "fa:16:3e:62:49:cc", "network": {"id": "95ae8bf7-e7e5-4bbf-a4f2-58f1e2b487a5", "bridge": "br-int", "label": "tempest-device-tagging-net1-103242350", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.249", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, 
"meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a5dd5f9ca5747618b386c87a40ede88", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapee2de0a7-a3", "ovs_interfaceid": "ee2de0a7-a321-4f41-868f-457ea7275ded", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "7aeffb1d-aa58-4c79-9768-bcd0a7dcce43", "address": "fa:16:3e:45:a8:03", "network": {"id": "95ae8bf7-e7e5-4bbf-a4f2-58f1e2b487a5", "bridge": "br-int", "label": "tempest-device-tagging-net1-103242350", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.213", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a5dd5f9ca5747618b386c87a40ede88", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7aeffb1d-aa", "ovs_interfaceid": "7aeffb1d-aa58-4c79-9768-bcd0a7dcce43", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "691d36f1-88c4-4e29-b5a7-059d5b66aee3", "address": "fa:16:3e:b2:aa:a0", "network": {"id": "8f455f90-843e-4bc7-94c8-f96455201770", "bridge": "br-int", "label": "tempest-device-tagging-net2-1809570820", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.2.2.100", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a5dd5f9ca5747618b386c87a40ede88", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap691d36f1-88", "ovs_interfaceid": "691d36f1-88c4-4e29-b5a7-059d5b66aee3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "a935a2a5-757d-49f8-9924-1f5f12e03325", "address": "fa:16:3e:c4:17:f5", "network": {"id": "8f455f90-843e-4bc7-94c8-f96455201770", "bridge": "br-int", "label": "tempest-device-tagging-net2-1809570820", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.2.2.200", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a5dd5f9ca5747618b386c87a40ede88", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa935a2a5-75", "ovs_interfaceid": "a935a2a5-757d-49f8-9924-1f5f12e03325", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 02:30:45 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:30:45.267 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[50d3a920-312c-4e28-9c80-76544b740f7a]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap95ae8bf7-e1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 650655, 'tstamp': 650655}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 280214, 'error': None, 'target': 'ovnmeta-95ae8bf7-e7e5-4bbf-a4f2-58f1e2b487a5', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 24, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.1.1.2'], ['IFA_LOCAL', '10.1.1.2'], ['IFA_BROADCAST', '10.1.1.255'], ['IFA_LABEL', 'tap95ae8bf7-e1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 650658, 'tstamp': 650658}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 280214, 'error': None, 'target': 'ovnmeta-95ae8bf7-e7e5-4bbf-a4f2-58f1e2b487a5', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:30:45 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:30:45.269 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap95ae8bf7-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:30:45 np0005548731 nova_compute[232433]: 2025-12-06 07:30:45.270 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:30:45 np0005548731 nova_compute[232433]: 2025-12-06 07:30:45.271 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:30:45 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:30:45.271 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap95ae8bf7-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:30:45 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:30:45.272 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  6 02:30:45 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:30:45.272 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap95ae8bf7-e0, col_values=(('external_ids', {'iface-id': 'cd44e1b4-dbd0-4520-ab3c-e03fc0a44edf'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:30:45 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:30:45.272 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  6 02:30:45 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:30:45.273 143965 INFO neutron.agent.ovn.metadata.agent [-] Port 7aeffb1d-aa58-4c79-9768-bcd0a7dcce43 in datapath 95ae8bf7-e7e5-4bbf-a4f2-58f1e2b487a5 unbound from our chassis#033[00m
Dec  6 02:30:45 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:30:45.274 143965 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 95ae8bf7-e7e5-4bbf-a4f2-58f1e2b487a5#033[00m
Dec  6 02:30:45 np0005548731 nova_compute[232433]: 2025-12-06 07:30:45.279 232437 DEBUG oslo_concurrency.lockutils [req-e9d674a0-457a-44ac-953e-747eb1952b69 req-24f7bfea-baa3-4c8e-a401-fec4d7665b31 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-02b3a680-d1bd-462f-95dc-aa0934c7ceac" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 02:30:45 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:30:45.289 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[67b70ab7-f00e-49df-a011-9c40c09b9b0c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:30:45 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:30:45.320 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[c92cf91d-3e7b-4a43-b3a9-8e398f651867]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:30:45 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:30:45.323 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[e672a433-eb1c-4ac5-85a4-ef5818e68c58]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:30:45 np0005548731 nova_compute[232433]: 2025-12-06 07:30:45.351 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:30:45 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:30:45.352 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[80792729-5066-4d47-a075-e3cd7032e003]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:30:45 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:30:45.369 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[94e41204-be6b-4e68-93a9-64832624fbda]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap95ae8bf7-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ff:3a:78'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 3, 'tx_packets': 9, 'rx_bytes': 266, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 3, 'tx_packets': 9, 'rx_bytes': 266, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 155], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 650645, 'reachable_time': 37702, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 3, 'inoctets': 224, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 3, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 224, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 3, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 280220, 'error': None, 'target': 'ovnmeta-95ae8bf7-e7e5-4bbf-a4f2-58f1e2b487a5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:30:45 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:30:45.385 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[d8c824e7-7b8c-4ebe-8c54-f30503894f8c]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap95ae8bf7-e1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 650655, 'tstamp': 650655}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 280221, 'error': None, 'target': 'ovnmeta-95ae8bf7-e7e5-4bbf-a4f2-58f1e2b487a5', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 24, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.1.1.2'], ['IFA_LOCAL', '10.1.1.2'], ['IFA_BROADCAST', '10.1.1.255'], ['IFA_LABEL', 'tap95ae8bf7-e1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 650658, 'tstamp': 650658}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 280221, 'error': None, 'target': 'ovnmeta-95ae8bf7-e7e5-4bbf-a4f2-58f1e2b487a5', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:30:45 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:30:45.386 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap95ae8bf7-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:30:45 np0005548731 nova_compute[232433]: 2025-12-06 07:30:45.388 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:30:45 np0005548731 nova_compute[232433]: 2025-12-06 07:30:45.389 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:30:45 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:30:45.389 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap95ae8bf7-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:30:45 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:30:45.390 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  6 02:30:45 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:30:45.390 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap95ae8bf7-e0, col_values=(('external_ids', {'iface-id': 'cd44e1b4-dbd0-4520-ab3c-e03fc0a44edf'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:30:45 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:30:45.390 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  6 02:30:45 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:30:45.391 143965 INFO neutron.agent.ovn.metadata.agent [-] Port 691d36f1-88c4-4e29-b5a7-059d5b66aee3 in datapath 8f455f90-843e-4bc7-94c8-f96455201770 unbound from our chassis#033[00m
Dec  6 02:30:45 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:30:45.393 143965 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 8f455f90-843e-4bc7-94c8-f96455201770#033[00m
Dec  6 02:30:45 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:30:45.407 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[7e8e6cce-4f8e-4423-98fe-d6dbd2c73491]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:30:45 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:30:45.436 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[b7487ada-dd8c-4ad3-bb68-482553e69cc2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:30:45 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:30:45.439 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[750de083-1cae-48f5-b813-534a3ffd9ec3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:30:45 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:30:45 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:30:45 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:30:45.439 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:30:45 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:30:45.468 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[15f481f9-0b11-4384-9bf9-065e16f2c37f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:30:45 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:30:45.485 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[5b12c6cf-f4c5-4a29-b82f-1986504df8be]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8f455f90-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2e:a4:42'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 3, 'tx_packets': 5, 'rx_bytes': 266, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 3, 'tx_packets': 5, 'rx_bytes': 266, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 154], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 650570, 'reachable_time': 19022, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 3, 'inoctets': 224, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 3, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 224, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 3, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 280227, 'error': None, 'target': 'ovnmeta-8f455f90-843e-4bc7-94c8-f96455201770', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:30:45 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:30:45.501 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[c7951e99-7e09-41f4-8e7d-9fd936cf0af1]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap8f455f90-81'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 650581, 'tstamp': 650581}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 280228, 'error': None, 'target': 'ovnmeta-8f455f90-843e-4bc7-94c8-f96455201770', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 24, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.2.2.2'], ['IFA_LOCAL', '10.2.2.2'], ['IFA_BROADCAST', '10.2.2.255'], ['IFA_LABEL', 'tap8f455f90-81'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 650584, 'tstamp': 650584}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 280228, 'error': None, 'target': 'ovnmeta-8f455f90-843e-4bc7-94c8-f96455201770', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:30:45 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:30:45.503 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8f455f90-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:30:45 np0005548731 nova_compute[232433]: 2025-12-06 07:30:45.504 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:30:45 np0005548731 nova_compute[232433]: 2025-12-06 07:30:45.505 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:30:45 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:30:45.505 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8f455f90-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:30:45 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:30:45.506 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  6 02:30:45 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:30:45.506 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap8f455f90-80, col_values=(('external_ids', {'iface-id': '621b0b6d-f0c1-4f16-afad-2a740006082b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:30:45 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:30:45.506 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  6 02:30:45 np0005548731 nova_compute[232433]: 2025-12-06 07:30:45.805 232437 DEBUG nova.compute.manager [req-47c8afd4-ac19-4ad8-8687-ddb89aa68d14 req-dba466e0-6e1b-4e57-83f0-a956f4902bff 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 02b3a680-d1bd-462f-95dc-aa0934c7ceac] Received event network-vif-plugged-70cec09c-3d65-4e69-8c35-7d82e75f4add external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:30:45 np0005548731 nova_compute[232433]: 2025-12-06 07:30:45.805 232437 DEBUG oslo_concurrency.lockutils [req-47c8afd4-ac19-4ad8-8687-ddb89aa68d14 req-dba466e0-6e1b-4e57-83f0-a956f4902bff 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "02b3a680-d1bd-462f-95dc-aa0934c7ceac-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:30:45 np0005548731 nova_compute[232433]: 2025-12-06 07:30:45.806 232437 DEBUG oslo_concurrency.lockutils [req-47c8afd4-ac19-4ad8-8687-ddb89aa68d14 req-dba466e0-6e1b-4e57-83f0-a956f4902bff 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "02b3a680-d1bd-462f-95dc-aa0934c7ceac-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:30:45 np0005548731 nova_compute[232433]: 2025-12-06 07:30:45.806 232437 DEBUG oslo_concurrency.lockutils [req-47c8afd4-ac19-4ad8-8687-ddb89aa68d14 req-dba466e0-6e1b-4e57-83f0-a956f4902bff 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "02b3a680-d1bd-462f-95dc-aa0934c7ceac-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:30:45 np0005548731 nova_compute[232433]: 2025-12-06 07:30:45.806 232437 DEBUG nova.compute.manager [req-47c8afd4-ac19-4ad8-8687-ddb89aa68d14 req-dba466e0-6e1b-4e57-83f0-a956f4902bff 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 02b3a680-d1bd-462f-95dc-aa0934c7ceac] No event matching network-vif-plugged-70cec09c-3d65-4e69-8c35-7d82e75f4add in dict_keys([('network-vif-plugged', '99fbd66e-87ac-42e3-8722-34d617ef9bb1'), ('network-vif-plugged', '7989285b-96a5-43e8-b98e-c0fe1badbf5e'), ('network-vif-plugged', '7aeffb1d-aa58-4c79-9768-bcd0a7dcce43'), ('network-vif-plugged', '691d36f1-88c4-4e29-b5a7-059d5b66aee3')]) pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:325#033[00m
Dec  6 02:30:45 np0005548731 nova_compute[232433]: 2025-12-06 07:30:45.806 232437 WARNING nova.compute.manager [req-47c8afd4-ac19-4ad8-8687-ddb89aa68d14 req-dba466e0-6e1b-4e57-83f0-a956f4902bff 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 02b3a680-d1bd-462f-95dc-aa0934c7ceac] Received unexpected event network-vif-plugged-70cec09c-3d65-4e69-8c35-7d82e75f4add for instance with vm_state building and task_state spawning.#033[00m
Dec  6 02:30:45 np0005548731 nova_compute[232433]: 2025-12-06 07:30:45.806 232437 DEBUG nova.compute.manager [req-47c8afd4-ac19-4ad8-8687-ddb89aa68d14 req-dba466e0-6e1b-4e57-83f0-a956f4902bff 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 02b3a680-d1bd-462f-95dc-aa0934c7ceac] Received event network-vif-plugged-691d36f1-88c4-4e29-b5a7-059d5b66aee3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:30:45 np0005548731 nova_compute[232433]: 2025-12-06 07:30:45.807 232437 DEBUG oslo_concurrency.lockutils [req-47c8afd4-ac19-4ad8-8687-ddb89aa68d14 req-dba466e0-6e1b-4e57-83f0-a956f4902bff 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "02b3a680-d1bd-462f-95dc-aa0934c7ceac-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:30:45 np0005548731 nova_compute[232433]: 2025-12-06 07:30:45.807 232437 DEBUG oslo_concurrency.lockutils [req-47c8afd4-ac19-4ad8-8687-ddb89aa68d14 req-dba466e0-6e1b-4e57-83f0-a956f4902bff 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "02b3a680-d1bd-462f-95dc-aa0934c7ceac-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:30:45 np0005548731 nova_compute[232433]: 2025-12-06 07:30:45.807 232437 DEBUG oslo_concurrency.lockutils [req-47c8afd4-ac19-4ad8-8687-ddb89aa68d14 req-dba466e0-6e1b-4e57-83f0-a956f4902bff 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "02b3a680-d1bd-462f-95dc-aa0934c7ceac-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:30:45 np0005548731 nova_compute[232433]: 2025-12-06 07:30:45.807 232437 DEBUG nova.compute.manager [req-47c8afd4-ac19-4ad8-8687-ddb89aa68d14 req-dba466e0-6e1b-4e57-83f0-a956f4902bff 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 02b3a680-d1bd-462f-95dc-aa0934c7ceac] Processing event network-vif-plugged-691d36f1-88c4-4e29-b5a7-059d5b66aee3 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Dec  6 02:30:45 np0005548731 nova_compute[232433]: 2025-12-06 07:30:45.808 232437 DEBUG nova.compute.manager [req-47c8afd4-ac19-4ad8-8687-ddb89aa68d14 req-dba466e0-6e1b-4e57-83f0-a956f4902bff 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 02b3a680-d1bd-462f-95dc-aa0934c7ceac] Received event network-vif-plugged-691d36f1-88c4-4e29-b5a7-059d5b66aee3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:30:45 np0005548731 nova_compute[232433]: 2025-12-06 07:30:45.808 232437 DEBUG oslo_concurrency.lockutils [req-47c8afd4-ac19-4ad8-8687-ddb89aa68d14 req-dba466e0-6e1b-4e57-83f0-a956f4902bff 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "02b3a680-d1bd-462f-95dc-aa0934c7ceac-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:30:45 np0005548731 nova_compute[232433]: 2025-12-06 07:30:45.808 232437 DEBUG oslo_concurrency.lockutils [req-47c8afd4-ac19-4ad8-8687-ddb89aa68d14 req-dba466e0-6e1b-4e57-83f0-a956f4902bff 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "02b3a680-d1bd-462f-95dc-aa0934c7ceac-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:30:45 np0005548731 nova_compute[232433]: 2025-12-06 07:30:45.808 232437 DEBUG oslo_concurrency.lockutils [req-47c8afd4-ac19-4ad8-8687-ddb89aa68d14 req-dba466e0-6e1b-4e57-83f0-a956f4902bff 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "02b3a680-d1bd-462f-95dc-aa0934c7ceac-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:30:45 np0005548731 nova_compute[232433]: 2025-12-06 07:30:45.808 232437 DEBUG nova.compute.manager [req-47c8afd4-ac19-4ad8-8687-ddb89aa68d14 req-dba466e0-6e1b-4e57-83f0-a956f4902bff 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 02b3a680-d1bd-462f-95dc-aa0934c7ceac] No event matching network-vif-plugged-691d36f1-88c4-4e29-b5a7-059d5b66aee3 in dict_keys([('network-vif-plugged', '99fbd66e-87ac-42e3-8722-34d617ef9bb1'), ('network-vif-plugged', '7989285b-96a5-43e8-b98e-c0fe1badbf5e'), ('network-vif-plugged', '7aeffb1d-aa58-4c79-9768-bcd0a7dcce43')]) pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:325#033[00m
Dec  6 02:30:45 np0005548731 nova_compute[232433]: 2025-12-06 07:30:45.809 232437 WARNING nova.compute.manager [req-47c8afd4-ac19-4ad8-8687-ddb89aa68d14 req-dba466e0-6e1b-4e57-83f0-a956f4902bff 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 02b3a680-d1bd-462f-95dc-aa0934c7ceac] Received unexpected event network-vif-plugged-691d36f1-88c4-4e29-b5a7-059d5b66aee3 for instance with vm_state building and task_state spawning.#033[00m
Dec  6 02:30:45 np0005548731 nova_compute[232433]: 2025-12-06 07:30:45.811 232437 DEBUG nova.compute.manager [req-ae7c07de-123c-4706-936c-75d3e8b83dda req-812dd5ea-04af-4308-8282-b87093a70a49 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 02b3a680-d1bd-462f-95dc-aa0934c7ceac] Received event network-vif-plugged-ee2de0a7-a321-4f41-868f-457ea7275ded external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:30:45 np0005548731 nova_compute[232433]: 2025-12-06 07:30:45.812 232437 DEBUG oslo_concurrency.lockutils [req-ae7c07de-123c-4706-936c-75d3e8b83dda req-812dd5ea-04af-4308-8282-b87093a70a49 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "02b3a680-d1bd-462f-95dc-aa0934c7ceac-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:30:45 np0005548731 nova_compute[232433]: 2025-12-06 07:30:45.812 232437 DEBUG oslo_concurrency.lockutils [req-ae7c07de-123c-4706-936c-75d3e8b83dda req-812dd5ea-04af-4308-8282-b87093a70a49 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "02b3a680-d1bd-462f-95dc-aa0934c7ceac-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:30:45 np0005548731 nova_compute[232433]: 2025-12-06 07:30:45.812 232437 DEBUG oslo_concurrency.lockutils [req-ae7c07de-123c-4706-936c-75d3e8b83dda req-812dd5ea-04af-4308-8282-b87093a70a49 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "02b3a680-d1bd-462f-95dc-aa0934c7ceac-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:30:45 np0005548731 nova_compute[232433]: 2025-12-06 07:30:45.812 232437 DEBUG nova.compute.manager [req-ae7c07de-123c-4706-936c-75d3e8b83dda req-812dd5ea-04af-4308-8282-b87093a70a49 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 02b3a680-d1bd-462f-95dc-aa0934c7ceac] No event matching network-vif-plugged-ee2de0a7-a321-4f41-868f-457ea7275ded in dict_keys([('network-vif-plugged', '99fbd66e-87ac-42e3-8722-34d617ef9bb1'), ('network-vif-plugged', '7989285b-96a5-43e8-b98e-c0fe1badbf5e'), ('network-vif-plugged', '7aeffb1d-aa58-4c79-9768-bcd0a7dcce43')]) pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:325#033[00m
Dec  6 02:30:45 np0005548731 nova_compute[232433]: 2025-12-06 07:30:45.812 232437 WARNING nova.compute.manager [req-ae7c07de-123c-4706-936c-75d3e8b83dda req-812dd5ea-04af-4308-8282-b87093a70a49 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 02b3a680-d1bd-462f-95dc-aa0934c7ceac] Received unexpected event network-vif-plugged-ee2de0a7-a321-4f41-868f-457ea7275ded for instance with vm_state building and task_state spawning.#033[00m
Dec  6 02:30:45 np0005548731 nova_compute[232433]: 2025-12-06 07:30:45.813 232437 DEBUG nova.compute.manager [req-ae7c07de-123c-4706-936c-75d3e8b83dda req-812dd5ea-04af-4308-8282-b87093a70a49 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 02b3a680-d1bd-462f-95dc-aa0934c7ceac] Received event network-vif-plugged-7aeffb1d-aa58-4c79-9768-bcd0a7dcce43 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:30:45 np0005548731 nova_compute[232433]: 2025-12-06 07:30:45.813 232437 DEBUG oslo_concurrency.lockutils [req-ae7c07de-123c-4706-936c-75d3e8b83dda req-812dd5ea-04af-4308-8282-b87093a70a49 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "02b3a680-d1bd-462f-95dc-aa0934c7ceac-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:30:45 np0005548731 nova_compute[232433]: 2025-12-06 07:30:45.813 232437 DEBUG oslo_concurrency.lockutils [req-ae7c07de-123c-4706-936c-75d3e8b83dda req-812dd5ea-04af-4308-8282-b87093a70a49 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "02b3a680-d1bd-462f-95dc-aa0934c7ceac-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:30:45 np0005548731 nova_compute[232433]: 2025-12-06 07:30:45.813 232437 DEBUG oslo_concurrency.lockutils [req-ae7c07de-123c-4706-936c-75d3e8b83dda req-812dd5ea-04af-4308-8282-b87093a70a49 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "02b3a680-d1bd-462f-95dc-aa0934c7ceac-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:30:45 np0005548731 nova_compute[232433]: 2025-12-06 07:30:45.813 232437 DEBUG nova.compute.manager [req-ae7c07de-123c-4706-936c-75d3e8b83dda req-812dd5ea-04af-4308-8282-b87093a70a49 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 02b3a680-d1bd-462f-95dc-aa0934c7ceac] Processing event network-vif-plugged-7aeffb1d-aa58-4c79-9768-bcd0a7dcce43 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Dec  6 02:30:45 np0005548731 nova_compute[232433]: 2025-12-06 07:30:45.813 232437 DEBUG nova.compute.manager [req-ae7c07de-123c-4706-936c-75d3e8b83dda req-812dd5ea-04af-4308-8282-b87093a70a49 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 02b3a680-d1bd-462f-95dc-aa0934c7ceac] Received event network-vif-plugged-7aeffb1d-aa58-4c79-9768-bcd0a7dcce43 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:30:45 np0005548731 nova_compute[232433]: 2025-12-06 07:30:45.814 232437 DEBUG oslo_concurrency.lockutils [req-ae7c07de-123c-4706-936c-75d3e8b83dda req-812dd5ea-04af-4308-8282-b87093a70a49 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "02b3a680-d1bd-462f-95dc-aa0934c7ceac-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:30:45 np0005548731 nova_compute[232433]: 2025-12-06 07:30:45.814 232437 DEBUG oslo_concurrency.lockutils [req-ae7c07de-123c-4706-936c-75d3e8b83dda req-812dd5ea-04af-4308-8282-b87093a70a49 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "02b3a680-d1bd-462f-95dc-aa0934c7ceac-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:30:45 np0005548731 nova_compute[232433]: 2025-12-06 07:30:45.814 232437 DEBUG oslo_concurrency.lockutils [req-ae7c07de-123c-4706-936c-75d3e8b83dda req-812dd5ea-04af-4308-8282-b87093a70a49 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "02b3a680-d1bd-462f-95dc-aa0934c7ceac-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:30:45 np0005548731 nova_compute[232433]: 2025-12-06 07:30:45.814 232437 DEBUG nova.compute.manager [req-ae7c07de-123c-4706-936c-75d3e8b83dda req-812dd5ea-04af-4308-8282-b87093a70a49 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 02b3a680-d1bd-462f-95dc-aa0934c7ceac] No event matching network-vif-plugged-7aeffb1d-aa58-4c79-9768-bcd0a7dcce43 in dict_keys([('network-vif-plugged', '99fbd66e-87ac-42e3-8722-34d617ef9bb1'), ('network-vif-plugged', '7989285b-96a5-43e8-b98e-c0fe1badbf5e')]) pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:325#033[00m
Dec  6 02:30:45 np0005548731 nova_compute[232433]: 2025-12-06 07:30:45.814 232437 WARNING nova.compute.manager [req-ae7c07de-123c-4706-936c-75d3e8b83dda req-812dd5ea-04af-4308-8282-b87093a70a49 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 02b3a680-d1bd-462f-95dc-aa0934c7ceac] Received unexpected event network-vif-plugged-7aeffb1d-aa58-4c79-9768-bcd0a7dcce43 for instance with vm_state building and task_state spawning.#033[00m
Dec  6 02:30:45 np0005548731 nova_compute[232433]: 2025-12-06 07:30:45.815 232437 DEBUG nova.compute.manager [req-ae7c07de-123c-4706-936c-75d3e8b83dda req-812dd5ea-04af-4308-8282-b87093a70a49 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 02b3a680-d1bd-462f-95dc-aa0934c7ceac] Received event network-vif-plugged-99fbd66e-87ac-42e3-8722-34d617ef9bb1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:30:45 np0005548731 nova_compute[232433]: 2025-12-06 07:30:45.815 232437 DEBUG oslo_concurrency.lockutils [req-ae7c07de-123c-4706-936c-75d3e8b83dda req-812dd5ea-04af-4308-8282-b87093a70a49 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "02b3a680-d1bd-462f-95dc-aa0934c7ceac-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:30:45 np0005548731 nova_compute[232433]: 2025-12-06 07:30:45.815 232437 DEBUG oslo_concurrency.lockutils [req-ae7c07de-123c-4706-936c-75d3e8b83dda req-812dd5ea-04af-4308-8282-b87093a70a49 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "02b3a680-d1bd-462f-95dc-aa0934c7ceac-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:30:45 np0005548731 nova_compute[232433]: 2025-12-06 07:30:45.815 232437 DEBUG oslo_concurrency.lockutils [req-ae7c07de-123c-4706-936c-75d3e8b83dda req-812dd5ea-04af-4308-8282-b87093a70a49 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "02b3a680-d1bd-462f-95dc-aa0934c7ceac-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:30:45 np0005548731 nova_compute[232433]: 2025-12-06 07:30:45.815 232437 DEBUG nova.compute.manager [req-ae7c07de-123c-4706-936c-75d3e8b83dda req-812dd5ea-04af-4308-8282-b87093a70a49 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 02b3a680-d1bd-462f-95dc-aa0934c7ceac] Processing event network-vif-plugged-99fbd66e-87ac-42e3-8722-34d617ef9bb1 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Dec  6 02:30:45 np0005548731 nova_compute[232433]: 2025-12-06 07:30:45.816 232437 DEBUG nova.compute.manager [req-ae7c07de-123c-4706-936c-75d3e8b83dda req-812dd5ea-04af-4308-8282-b87093a70a49 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 02b3a680-d1bd-462f-95dc-aa0934c7ceac] Received event network-vif-plugged-99fbd66e-87ac-42e3-8722-34d617ef9bb1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:30:45 np0005548731 nova_compute[232433]: 2025-12-06 07:30:45.816 232437 DEBUG oslo_concurrency.lockutils [req-ae7c07de-123c-4706-936c-75d3e8b83dda req-812dd5ea-04af-4308-8282-b87093a70a49 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "02b3a680-d1bd-462f-95dc-aa0934c7ceac-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:30:45 np0005548731 nova_compute[232433]: 2025-12-06 07:30:45.816 232437 DEBUG oslo_concurrency.lockutils [req-ae7c07de-123c-4706-936c-75d3e8b83dda req-812dd5ea-04af-4308-8282-b87093a70a49 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "02b3a680-d1bd-462f-95dc-aa0934c7ceac-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:30:45 np0005548731 nova_compute[232433]: 2025-12-06 07:30:45.816 232437 DEBUG oslo_concurrency.lockutils [req-ae7c07de-123c-4706-936c-75d3e8b83dda req-812dd5ea-04af-4308-8282-b87093a70a49 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "02b3a680-d1bd-462f-95dc-aa0934c7ceac-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:30:45 np0005548731 nova_compute[232433]: 2025-12-06 07:30:45.816 232437 DEBUG nova.compute.manager [req-ae7c07de-123c-4706-936c-75d3e8b83dda req-812dd5ea-04af-4308-8282-b87093a70a49 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 02b3a680-d1bd-462f-95dc-aa0934c7ceac] No event matching network-vif-plugged-99fbd66e-87ac-42e3-8722-34d617ef9bb1 in dict_keys([('network-vif-plugged', '7989285b-96a5-43e8-b98e-c0fe1badbf5e')]) pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:325#033[00m
Dec  6 02:30:45 np0005548731 nova_compute[232433]: 2025-12-06 07:30:45.817 232437 WARNING nova.compute.manager [req-ae7c07de-123c-4706-936c-75d3e8b83dda req-812dd5ea-04af-4308-8282-b87093a70a49 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 02b3a680-d1bd-462f-95dc-aa0934c7ceac] Received unexpected event network-vif-plugged-99fbd66e-87ac-42e3-8722-34d617ef9bb1 for instance with vm_state building and task_state spawning.#033[00m
Dec  6 02:30:45 np0005548731 nova_compute[232433]: 2025-12-06 07:30:45.988 232437 DEBUG nova.compute.manager [req-367da855-e291-4f8d-9dd8-4621e921dabb req-cddd5c79-4748-4e7f-97db-57dfdd445457 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 02b3a680-d1bd-462f-95dc-aa0934c7ceac] Received event network-vif-plugged-a935a2a5-757d-49f8-9924-1f5f12e03325 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:30:45 np0005548731 nova_compute[232433]: 2025-12-06 07:30:45.989 232437 DEBUG oslo_concurrency.lockutils [req-367da855-e291-4f8d-9dd8-4621e921dabb req-cddd5c79-4748-4e7f-97db-57dfdd445457 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "02b3a680-d1bd-462f-95dc-aa0934c7ceac-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:30:45 np0005548731 nova_compute[232433]: 2025-12-06 07:30:45.989 232437 DEBUG oslo_concurrency.lockutils [req-367da855-e291-4f8d-9dd8-4621e921dabb req-cddd5c79-4748-4e7f-97db-57dfdd445457 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "02b3a680-d1bd-462f-95dc-aa0934c7ceac-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:30:45 np0005548731 nova_compute[232433]: 2025-12-06 07:30:45.989 232437 DEBUG oslo_concurrency.lockutils [req-367da855-e291-4f8d-9dd8-4621e921dabb req-cddd5c79-4748-4e7f-97db-57dfdd445457 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "02b3a680-d1bd-462f-95dc-aa0934c7ceac-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:30:45 np0005548731 nova_compute[232433]: 2025-12-06 07:30:45.989 232437 DEBUG nova.compute.manager [req-367da855-e291-4f8d-9dd8-4621e921dabb req-cddd5c79-4748-4e7f-97db-57dfdd445457 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 02b3a680-d1bd-462f-95dc-aa0934c7ceac] No event matching network-vif-plugged-a935a2a5-757d-49f8-9924-1f5f12e03325 in dict_keys([('network-vif-plugged', '7989285b-96a5-43e8-b98e-c0fe1badbf5e')]) pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:325#033[00m
Dec  6 02:30:45 np0005548731 nova_compute[232433]: 2025-12-06 07:30:45.989 232437 WARNING nova.compute.manager [req-367da855-e291-4f8d-9dd8-4621e921dabb req-cddd5c79-4748-4e7f-97db-57dfdd445457 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 02b3a680-d1bd-462f-95dc-aa0934c7ceac] Received unexpected event network-vif-plugged-a935a2a5-757d-49f8-9924-1f5f12e03325 for instance with vm_state building and task_state spawning.#033[00m
Dec  6 02:30:45 np0005548731 nova_compute[232433]: 2025-12-06 07:30:45.990 232437 DEBUG nova.compute.manager [req-367da855-e291-4f8d-9dd8-4621e921dabb req-cddd5c79-4748-4e7f-97db-57dfdd445457 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 02b3a680-d1bd-462f-95dc-aa0934c7ceac] Received event network-vif-plugged-7989285b-96a5-43e8-b98e-c0fe1badbf5e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:30:45 np0005548731 nova_compute[232433]: 2025-12-06 07:30:45.990 232437 DEBUG oslo_concurrency.lockutils [req-367da855-e291-4f8d-9dd8-4621e921dabb req-cddd5c79-4748-4e7f-97db-57dfdd445457 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "02b3a680-d1bd-462f-95dc-aa0934c7ceac-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:30:45 np0005548731 nova_compute[232433]: 2025-12-06 07:30:45.990 232437 DEBUG oslo_concurrency.lockutils [req-367da855-e291-4f8d-9dd8-4621e921dabb req-cddd5c79-4748-4e7f-97db-57dfdd445457 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "02b3a680-d1bd-462f-95dc-aa0934c7ceac-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:30:45 np0005548731 nova_compute[232433]: 2025-12-06 07:30:45.990 232437 DEBUG oslo_concurrency.lockutils [req-367da855-e291-4f8d-9dd8-4621e921dabb req-cddd5c79-4748-4e7f-97db-57dfdd445457 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "02b3a680-d1bd-462f-95dc-aa0934c7ceac-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:30:45 np0005548731 nova_compute[232433]: 2025-12-06 07:30:45.991 232437 DEBUG nova.compute.manager [req-367da855-e291-4f8d-9dd8-4621e921dabb req-cddd5c79-4748-4e7f-97db-57dfdd445457 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 02b3a680-d1bd-462f-95dc-aa0934c7ceac] Processing event network-vif-plugged-7989285b-96a5-43e8-b98e-c0fe1badbf5e _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Dec  6 02:30:45 np0005548731 nova_compute[232433]: 2025-12-06 07:30:45.991 232437 DEBUG nova.compute.manager [req-367da855-e291-4f8d-9dd8-4621e921dabb req-cddd5c79-4748-4e7f-97db-57dfdd445457 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 02b3a680-d1bd-462f-95dc-aa0934c7ceac] Received event network-vif-plugged-7989285b-96a5-43e8-b98e-c0fe1badbf5e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:30:45 np0005548731 nova_compute[232433]: 2025-12-06 07:30:45.991 232437 DEBUG oslo_concurrency.lockutils [req-367da855-e291-4f8d-9dd8-4621e921dabb req-cddd5c79-4748-4e7f-97db-57dfdd445457 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "02b3a680-d1bd-462f-95dc-aa0934c7ceac-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:30:45 np0005548731 nova_compute[232433]: 2025-12-06 07:30:45.991 232437 DEBUG oslo_concurrency.lockutils [req-367da855-e291-4f8d-9dd8-4621e921dabb req-cddd5c79-4748-4e7f-97db-57dfdd445457 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "02b3a680-d1bd-462f-95dc-aa0934c7ceac-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:30:45 np0005548731 nova_compute[232433]: 2025-12-06 07:30:45.992 232437 DEBUG oslo_concurrency.lockutils [req-367da855-e291-4f8d-9dd8-4621e921dabb req-cddd5c79-4748-4e7f-97db-57dfdd445457 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "02b3a680-d1bd-462f-95dc-aa0934c7ceac-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:30:45 np0005548731 nova_compute[232433]: 2025-12-06 07:30:45.992 232437 DEBUG nova.compute.manager [req-367da855-e291-4f8d-9dd8-4621e921dabb req-cddd5c79-4748-4e7f-97db-57dfdd445457 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 02b3a680-d1bd-462f-95dc-aa0934c7ceac] No waiting events found dispatching network-vif-plugged-7989285b-96a5-43e8-b98e-c0fe1badbf5e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 02:30:45 np0005548731 nova_compute[232433]: 2025-12-06 07:30:45.992 232437 WARNING nova.compute.manager [req-367da855-e291-4f8d-9dd8-4621e921dabb req-cddd5c79-4748-4e7f-97db-57dfdd445457 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 02b3a680-d1bd-462f-95dc-aa0934c7ceac] Received unexpected event network-vif-plugged-7989285b-96a5-43e8-b98e-c0fe1badbf5e for instance with vm_state building and task_state spawning.#033[00m
Dec  6 02:30:45 np0005548731 nova_compute[232433]: 2025-12-06 07:30:45.993 232437 DEBUG nova.compute.manager [None req-adb712c5-e0cf-4dc3-8c70-9016209a6f69 8e5d532f4af34acc9e3a7a08819bb089 1a5dd5f9ca5747618b386c87a40ede88 - - default default] [instance: 02b3a680-d1bd-462f-95dc-aa0934c7ceac] Instance event wait completed in 1 seconds for network-vif-plugged,network-vif-plugged,network-vif-plugged,network-vif-plugged,network-vif-plugged,network-vif-plugged,network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec  6 02:30:45 np0005548731 nova_compute[232433]: 2025-12-06 07:30:45.996 232437 DEBUG nova.virt.driver [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Emitting event <LifecycleEvent: 1765006245.9964375, 02b3a680-d1bd-462f-95dc-aa0934c7ceac => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 02:30:45 np0005548731 nova_compute[232433]: 2025-12-06 07:30:45.997 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 02b3a680-d1bd-462f-95dc-aa0934c7ceac] VM Resumed (Lifecycle Event)#033[00m
Dec  6 02:30:45 np0005548731 nova_compute[232433]: 2025-12-06 07:30:45.999 232437 DEBUG nova.virt.libvirt.driver [None req-adb712c5-e0cf-4dc3-8c70-9016209a6f69 8e5d532f4af34acc9e3a7a08819bb089 1a5dd5f9ca5747618b386c87a40ede88 - - default default] [instance: 02b3a680-d1bd-462f-95dc-aa0934c7ceac] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Dec  6 02:30:46 np0005548731 nova_compute[232433]: 2025-12-06 07:30:46.002 232437 INFO nova.virt.libvirt.driver [-] [instance: 02b3a680-d1bd-462f-95dc-aa0934c7ceac] Instance spawned successfully.#033[00m
Dec  6 02:30:46 np0005548731 nova_compute[232433]: 2025-12-06 07:30:46.002 232437 DEBUG nova.virt.libvirt.driver [None req-adb712c5-e0cf-4dc3-8c70-9016209a6f69 8e5d532f4af34acc9e3a7a08819bb089 1a5dd5f9ca5747618b386c87a40ede88 - - default default] [instance: 02b3a680-d1bd-462f-95dc-aa0934c7ceac] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Dec  6 02:30:46 np0005548731 nova_compute[232433]: 2025-12-06 07:30:46.025 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 02b3a680-d1bd-462f-95dc-aa0934c7ceac] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:30:46 np0005548731 nova_compute[232433]: 2025-12-06 07:30:46.033 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 02b3a680-d1bd-462f-95dc-aa0934c7ceac] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  6 02:30:46 np0005548731 nova_compute[232433]: 2025-12-06 07:30:46.036 232437 DEBUG nova.virt.libvirt.driver [None req-adb712c5-e0cf-4dc3-8c70-9016209a6f69 8e5d532f4af34acc9e3a7a08819bb089 1a5dd5f9ca5747618b386c87a40ede88 - - default default] [instance: 02b3a680-d1bd-462f-95dc-aa0934c7ceac] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 02:30:46 np0005548731 nova_compute[232433]: 2025-12-06 07:30:46.036 232437 DEBUG nova.virt.libvirt.driver [None req-adb712c5-e0cf-4dc3-8c70-9016209a6f69 8e5d532f4af34acc9e3a7a08819bb089 1a5dd5f9ca5747618b386c87a40ede88 - - default default] [instance: 02b3a680-d1bd-462f-95dc-aa0934c7ceac] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 02:30:46 np0005548731 nova_compute[232433]: 2025-12-06 07:30:46.036 232437 DEBUG nova.virt.libvirt.driver [None req-adb712c5-e0cf-4dc3-8c70-9016209a6f69 8e5d532f4af34acc9e3a7a08819bb089 1a5dd5f9ca5747618b386c87a40ede88 - - default default] [instance: 02b3a680-d1bd-462f-95dc-aa0934c7ceac] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 02:30:46 np0005548731 nova_compute[232433]: 2025-12-06 07:30:46.037 232437 DEBUG nova.virt.libvirt.driver [None req-adb712c5-e0cf-4dc3-8c70-9016209a6f69 8e5d532f4af34acc9e3a7a08819bb089 1a5dd5f9ca5747618b386c87a40ede88 - - default default] [instance: 02b3a680-d1bd-462f-95dc-aa0934c7ceac] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 02:30:46 np0005548731 nova_compute[232433]: 2025-12-06 07:30:46.037 232437 DEBUG nova.virt.libvirt.driver [None req-adb712c5-e0cf-4dc3-8c70-9016209a6f69 8e5d532f4af34acc9e3a7a08819bb089 1a5dd5f9ca5747618b386c87a40ede88 - - default default] [instance: 02b3a680-d1bd-462f-95dc-aa0934c7ceac] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 02:30:46 np0005548731 nova_compute[232433]: 2025-12-06 07:30:46.037 232437 DEBUG nova.virt.libvirt.driver [None req-adb712c5-e0cf-4dc3-8c70-9016209a6f69 8e5d532f4af34acc9e3a7a08819bb089 1a5dd5f9ca5747618b386c87a40ede88 - - default default] [instance: 02b3a680-d1bd-462f-95dc-aa0934c7ceac] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 02:30:46 np0005548731 nova_compute[232433]: 2025-12-06 07:30:46.070 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 02b3a680-d1bd-462f-95dc-aa0934c7ceac] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  6 02:30:46 np0005548731 nova_compute[232433]: 2025-12-06 07:30:46.106 232437 INFO nova.compute.manager [None req-adb712c5-e0cf-4dc3-8c70-9016209a6f69 8e5d532f4af34acc9e3a7a08819bb089 1a5dd5f9ca5747618b386c87a40ede88 - - default default] [instance: 02b3a680-d1bd-462f-95dc-aa0934c7ceac] Took 31.04 seconds to spawn the instance on the hypervisor.#033[00m
Dec  6 02:30:46 np0005548731 nova_compute[232433]: 2025-12-06 07:30:46.106 232437 DEBUG nova.compute.manager [None req-adb712c5-e0cf-4dc3-8c70-9016209a6f69 8e5d532f4af34acc9e3a7a08819bb089 1a5dd5f9ca5747618b386c87a40ede88 - - default default] [instance: 02b3a680-d1bd-462f-95dc-aa0934c7ceac] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:30:46 np0005548731 nova_compute[232433]: 2025-12-06 07:30:46.163 232437 INFO nova.compute.manager [None req-adb712c5-e0cf-4dc3-8c70-9016209a6f69 8e5d532f4af34acc9e3a7a08819bb089 1a5dd5f9ca5747618b386c87a40ede88 - - default default] [instance: 02b3a680-d1bd-462f-95dc-aa0934c7ceac] Took 37.27 seconds to build instance.#033[00m
Dec  6 02:30:46 np0005548731 nova_compute[232433]: 2025-12-06 07:30:46.177 232437 DEBUG oslo_concurrency.lockutils [None req-adb712c5-e0cf-4dc3-8c70-9016209a6f69 8e5d532f4af34acc9e3a7a08819bb089 1a5dd5f9ca5747618b386c87a40ede88 - - default default] Lock "02b3a680-d1bd-462f-95dc-aa0934c7ceac" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 37.420s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:30:46 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:30:46 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:30:46 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:30:46.257 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:30:46 np0005548731 podman[280230]: 2025-12-06 07:30:46.896012613 +0000 UTC m=+0.056934521 container health_status 26c2ce9bdb83c6db5ccad7f5b2a7cb4e9354ab0235ad2f942ea42c8feda2142b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Dec  6 02:30:46 np0005548731 podman[280232]: 2025-12-06 07:30:46.910434885 +0000 UTC m=+0.061724988 container health_status ae4fd08cdec31d6ae2a6b91ffff7deb6ca5a8375cb52ee4b685cc6f25796824e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  6 02:30:46 np0005548731 podman[280231]: 2025-12-06 07:30:46.930496124 +0000 UTC m=+0.087854234 container health_status a118c3becf56cfbcae208065771ebf10a65162bb7b96389971293e534823fb78 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec  6 02:30:47 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:30:47 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:30:47 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:30:47.441 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:30:47 np0005548731 nova_compute[232433]: 2025-12-06 07:30:47.664 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:30:48 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:30:48 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:30:48 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:30:48.260 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:30:48 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:30:49 np0005548731 nova_compute[232433]: 2025-12-06 07:30:49.146 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:30:49 np0005548731 NetworkManager[49182]: <info>  [1765006249.1479] manager: (patch-provnet-9e78c1a1-68f4-477a-abaa-13a98bde06e5-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/239)
Dec  6 02:30:49 np0005548731 NetworkManager[49182]: <info>  [1765006249.1491] manager: (patch-br-int-to-provnet-9e78c1a1-68f4-477a-abaa-13a98bde06e5): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/240)
Dec  6 02:30:49 np0005548731 nova_compute[232433]: 2025-12-06 07:30:49.271 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:30:49 np0005548731 ovn_controller[133927]: 2025-12-06T07:30:49Z|00492|binding|INFO|Releasing lport 18906741-a7df-47da-baeb-390929c7d2aa from this chassis (sb_readonly=0)
Dec  6 02:30:49 np0005548731 ovn_controller[133927]: 2025-12-06T07:30:49Z|00493|binding|INFO|Releasing lport 621b0b6d-f0c1-4f16-afad-2a740006082b from this chassis (sb_readonly=0)
Dec  6 02:30:49 np0005548731 ovn_controller[133927]: 2025-12-06T07:30:49Z|00494|binding|INFO|Releasing lport cd44e1b4-dbd0-4520-ab3c-e03fc0a44edf from this chassis (sb_readonly=0)
Dec  6 02:30:49 np0005548731 nova_compute[232433]: 2025-12-06 07:30:49.285 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:30:49 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:30:49 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:30:49 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:30:49.444 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:30:49 np0005548731 nova_compute[232433]: 2025-12-06 07:30:49.568 232437 DEBUG nova.compute.manager [req-8b88366f-6068-4f6d-b644-8ab345a20c01 req-e7431ddd-5623-4e32-a43a-9d46345b53d8 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 02b3a680-d1bd-462f-95dc-aa0934c7ceac] Received event network-changed-70cec09c-3d65-4e69-8c35-7d82e75f4add external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:30:49 np0005548731 nova_compute[232433]: 2025-12-06 07:30:49.568 232437 DEBUG nova.compute.manager [req-8b88366f-6068-4f6d-b644-8ab345a20c01 req-e7431ddd-5623-4e32-a43a-9d46345b53d8 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 02b3a680-d1bd-462f-95dc-aa0934c7ceac] Refreshing instance network info cache due to event network-changed-70cec09c-3d65-4e69-8c35-7d82e75f4add. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  6 02:30:49 np0005548731 nova_compute[232433]: 2025-12-06 07:30:49.569 232437 DEBUG oslo_concurrency.lockutils [req-8b88366f-6068-4f6d-b644-8ab345a20c01 req-e7431ddd-5623-4e32-a43a-9d46345b53d8 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-02b3a680-d1bd-462f-95dc-aa0934c7ceac" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 02:30:49 np0005548731 nova_compute[232433]: 2025-12-06 07:30:49.569 232437 DEBUG oslo_concurrency.lockutils [req-8b88366f-6068-4f6d-b644-8ab345a20c01 req-e7431ddd-5623-4e32-a43a-9d46345b53d8 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-02b3a680-d1bd-462f-95dc-aa0934c7ceac" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 02:30:49 np0005548731 nova_compute[232433]: 2025-12-06 07:30:49.569 232437 DEBUG nova.network.neutron [req-8b88366f-6068-4f6d-b644-8ab345a20c01 req-e7431ddd-5623-4e32-a43a-9d46345b53d8 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 02b3a680-d1bd-462f-95dc-aa0934c7ceac] Refreshing network info cache for port 70cec09c-3d65-4e69-8c35-7d82e75f4add _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  6 02:30:50 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:30:50 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:30:50 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:30:50.263 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:30:50 np0005548731 nova_compute[232433]: 2025-12-06 07:30:50.353 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:30:50 np0005548731 nova_compute[232433]: 2025-12-06 07:30:50.692 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:30:51 np0005548731 nova_compute[232433]: 2025-12-06 07:30:51.098 232437 DEBUG nova.network.neutron [req-8b88366f-6068-4f6d-b644-8ab345a20c01 req-e7431ddd-5623-4e32-a43a-9d46345b53d8 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 02b3a680-d1bd-462f-95dc-aa0934c7ceac] Updated VIF entry in instance network info cache for port 70cec09c-3d65-4e69-8c35-7d82e75f4add. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  6 02:30:51 np0005548731 nova_compute[232433]: 2025-12-06 07:30:51.099 232437 DEBUG nova.network.neutron [req-8b88366f-6068-4f6d-b644-8ab345a20c01 req-e7431ddd-5623-4e32-a43a-9d46345b53d8 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 02b3a680-d1bd-462f-95dc-aa0934c7ceac] Updating instance_info_cache with network_info: [{"id": "70cec09c-3d65-4e69-8c35-7d82e75f4add", "address": "fa:16:3e:89:a4:68", "network": {"id": "15e538c5-1144-486f-bc43-86871ca19e5e", "bridge": "br-int", "label": "tempest-TaggedBootDevicesTest_v242-1912448327-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.208", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a5dd5f9ca5747618b386c87a40ede88", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap70cec09c-3d", "ovs_interfaceid": "70cec09c-3d65-4e69-8c35-7d82e75f4add", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "99fbd66e-87ac-42e3-8722-34d617ef9bb1", "address": "fa:16:3e:c3:05:bf", "network": {"id": "95ae8bf7-e7e5-4bbf-a4f2-58f1e2b487a5", "bridge": "br-int", "label": "tempest-device-tagging-net1-103242350", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.137", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": 
true}}], "meta": {"injected": false, "tenant_id": "1a5dd5f9ca5747618b386c87a40ede88", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap99fbd66e-87", "ovs_interfaceid": "99fbd66e-87ac-42e3-8722-34d617ef9bb1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}, {"id": "7989285b-96a5-43e8-b98e-c0fe1badbf5e", "address": "fa:16:3e:e2:25:4d", "network": {"id": "95ae8bf7-e7e5-4bbf-a4f2-58f1e2b487a5", "bridge": "br-int", "label": "tempest-device-tagging-net1-103242350", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.231", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a5dd5f9ca5747618b386c87a40ede88", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7989285b-96", "ovs_interfaceid": "7989285b-96a5-43e8-b98e-c0fe1badbf5e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}, {"id": "ee2de0a7-a321-4f41-868f-457ea7275ded", "address": "fa:16:3e:62:49:cc", "network": {"id": "95ae8bf7-e7e5-4bbf-a4f2-58f1e2b487a5", "bridge": "br-int", "label": "tempest-device-tagging-net1-103242350", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.249", "type": "fixed", "version": 4, "meta": {}, 
"floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a5dd5f9ca5747618b386c87a40ede88", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapee2de0a7-a3", "ovs_interfaceid": "ee2de0a7-a321-4f41-868f-457ea7275ded", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "7aeffb1d-aa58-4c79-9768-bcd0a7dcce43", "address": "fa:16:3e:45:a8:03", "network": {"id": "95ae8bf7-e7e5-4bbf-a4f2-58f1e2b487a5", "bridge": "br-int", "label": "tempest-device-tagging-net1-103242350", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.213", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a5dd5f9ca5747618b386c87a40ede88", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7aeffb1d-aa", "ovs_interfaceid": "7aeffb1d-aa58-4c79-9768-bcd0a7dcce43", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "691d36f1-88c4-4e29-b5a7-059d5b66aee3", "address": "fa:16:3e:b2:aa:a0", "network": {"id": "8f455f90-843e-4bc7-94c8-f96455201770", "bridge": "br-int", "label": "tempest-device-tagging-net2-1809570820", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, 
"ips": [{"address": "10.2.2.100", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a5dd5f9ca5747618b386c87a40ede88", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap691d36f1-88", "ovs_interfaceid": "691d36f1-88c4-4e29-b5a7-059d5b66aee3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "a935a2a5-757d-49f8-9924-1f5f12e03325", "address": "fa:16:3e:c4:17:f5", "network": {"id": "8f455f90-843e-4bc7-94c8-f96455201770", "bridge": "br-int", "label": "tempest-device-tagging-net2-1809570820", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.2.2.200", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a5dd5f9ca5747618b386c87a40ede88", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa935a2a5-75", "ovs_interfaceid": "a935a2a5-757d-49f8-9924-1f5f12e03325", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 02:30:51 np0005548731 nova_compute[232433]: 2025-12-06 07:30:51.120 232437 DEBUG oslo_concurrency.lockutils [req-8b88366f-6068-4f6d-b644-8ab345a20c01 req-e7431ddd-5623-4e32-a43a-9d46345b53d8 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-02b3a680-d1bd-462f-95dc-aa0934c7ceac" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 02:30:51 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:30:51 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:30:51 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:30:51.445 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:30:52 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:30:52 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:30:52 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:30:52.267 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:30:52 np0005548731 nova_compute[232433]: 2025-12-06 07:30:52.666 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:30:53 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:30:53 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:30:53 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 02:30:53 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:30:53.447 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 02:30:53 np0005548731 nova_compute[232433]: 2025-12-06 07:30:53.642 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:30:54 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:30:54 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:30:54 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:30:54.274 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:30:55 np0005548731 nova_compute[232433]: 2025-12-06 07:30:55.355 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:30:55 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:30:55 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:30:55 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:30:55.449 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:30:56 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:30:56 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:30:56 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:30:56.287 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:30:57 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:30:57 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:30:57 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:30:57.451 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:30:57 np0005548731 nova_compute[232433]: 2025-12-06 07:30:57.669 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:30:58 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 02:30:58 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 02:30:58 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  6 02:30:58 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 02:30:58 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  6 02:30:58 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:30:58 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:30:58 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:30:58.291 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:30:58 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:30:59 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:30:59 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:30:59 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:30:59.454 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:31:00 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:31:00 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:31:00 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:31:00.295 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:31:00 np0005548731 nova_compute[232433]: 2025-12-06 07:31:00.358 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:31:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:31:00.873 143965 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:31:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:31:00.873 143965 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:31:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:31:00.874 143965 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:31:01 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:31:01 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:31:01 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:31:01.456 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:31:02 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:31:02 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 02:31:02 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:31:02.298 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 02:31:02 np0005548731 nova_compute[232433]: 2025-12-06 07:31:02.671 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:31:03 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:31:03 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:31:03 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:31:03 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:31:03.458 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:31:04 np0005548731 nova_compute[232433]: 2025-12-06 07:31:04.105 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:31:04 np0005548731 nova_compute[232433]: 2025-12-06 07:31:04.106 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  6 02:31:04 np0005548731 nova_compute[232433]: 2025-12-06 07:31:04.106 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  6 02:31:04 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:31:04 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:31:04 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:31:04.302 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:31:04 np0005548731 nova_compute[232433]: 2025-12-06 07:31:04.305 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Acquiring lock "refresh_cache-02b3a680-d1bd-462f-95dc-aa0934c7ceac" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 02:31:04 np0005548731 nova_compute[232433]: 2025-12-06 07:31:04.305 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Acquired lock "refresh_cache-02b3a680-d1bd-462f-95dc-aa0934c7ceac" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 02:31:04 np0005548731 nova_compute[232433]: 2025-12-06 07:31:04.306 232437 DEBUG nova.network.neutron [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] [instance: 02b3a680-d1bd-462f-95dc-aa0934c7ceac] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Dec  6 02:31:04 np0005548731 nova_compute[232433]: 2025-12-06 07:31:04.306 232437 DEBUG nova.objects.instance [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lazy-loading 'info_cache' on Instance uuid 02b3a680-d1bd-462f-95dc-aa0934c7ceac obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:31:04 np0005548731 ceph-mds[82074]: mds.beacon.cephfs.compute-2.tjfgow missed beacon ack from the monitors
Dec  6 02:31:05 np0005548731 nova_compute[232433]: 2025-12-06 07:31:05.360 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:31:05 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:31:05 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 02:31:05 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:31:05.459 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 02:31:06 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:31:06 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:31:06 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:31:06.304 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:31:07 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:31:07 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:31:07 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:31:07.461 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:31:07 np0005548731 nova_compute[232433]: 2025-12-06 07:31:07.724 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:31:08 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:31:08 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:31:08 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:31:08.306 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:31:08 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:31:08 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  6 02:31:08 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1080974136' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  6 02:31:08 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  6 02:31:08 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1080974136' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  6 02:31:09 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:31:09 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:31:09 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:31:09.463 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:31:10 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:31:10 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:31:10 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:31:10.309 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:31:10 np0005548731 nova_compute[232433]: 2025-12-06 07:31:10.361 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:31:11 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:31:11 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:31:11 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:31:11.466 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:31:12 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:31:12 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:31:12 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:31:12.313 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:31:12 np0005548731 nova_compute[232433]: 2025-12-06 07:31:12.727 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:31:13 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:31:13 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:31:13 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:31:13 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:31:13.468 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:31:14 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:31:14 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:31:14 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:31:14.315 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:31:14 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 02:31:15 np0005548731 nova_compute[232433]: 2025-12-06 07:31:15.363 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:31:15 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:31:15 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:31:15 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:31:15.470 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:31:15 np0005548731 nova_compute[232433]: 2025-12-06 07:31:15.873 232437 DEBUG nova.network.neutron [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] [instance: 02b3a680-d1bd-462f-95dc-aa0934c7ceac] Updating instance_info_cache with network_info: [{"id": "70cec09c-3d65-4e69-8c35-7d82e75f4add", "address": "fa:16:3e:89:a4:68", "network": {"id": "15e538c5-1144-486f-bc43-86871ca19e5e", "bridge": "br-int", "label": "tempest-TaggedBootDevicesTest_v242-1912448327-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.208", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a5dd5f9ca5747618b386c87a40ede88", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap70cec09c-3d", "ovs_interfaceid": "70cec09c-3d65-4e69-8c35-7d82e75f4add", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "99fbd66e-87ac-42e3-8722-34d617ef9bb1", "address": "fa:16:3e:c3:05:bf", "network": {"id": "95ae8bf7-e7e5-4bbf-a4f2-58f1e2b487a5", "bridge": "br-int", "label": "tempest-device-tagging-net1-103242350", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.137", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a5dd5f9ca5747618b386c87a40ede88", "mtu": 1442, 
"physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap99fbd66e-87", "ovs_interfaceid": "99fbd66e-87ac-42e3-8722-34d617ef9bb1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}, {"id": "7989285b-96a5-43e8-b98e-c0fe1badbf5e", "address": "fa:16:3e:e2:25:4d", "network": {"id": "95ae8bf7-e7e5-4bbf-a4f2-58f1e2b487a5", "bridge": "br-int", "label": "tempest-device-tagging-net1-103242350", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.231", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a5dd5f9ca5747618b386c87a40ede88", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7989285b-96", "ovs_interfaceid": "7989285b-96a5-43e8-b98e-c0fe1badbf5e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}, {"id": "ee2de0a7-a321-4f41-868f-457ea7275ded", "address": "fa:16:3e:62:49:cc", "network": {"id": "95ae8bf7-e7e5-4bbf-a4f2-58f1e2b487a5", "bridge": "br-int", "label": "tempest-device-tagging-net1-103242350", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.249", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "1a5dd5f9ca5747618b386c87a40ede88", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapee2de0a7-a3", "ovs_interfaceid": "ee2de0a7-a321-4f41-868f-457ea7275ded", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "7aeffb1d-aa58-4c79-9768-bcd0a7dcce43", "address": "fa:16:3e:45:a8:03", "network": {"id": "95ae8bf7-e7e5-4bbf-a4f2-58f1e2b487a5", "bridge": "br-int", "label": "tempest-device-tagging-net1-103242350", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.213", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a5dd5f9ca5747618b386c87a40ede88", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7aeffb1d-aa", "ovs_interfaceid": "7aeffb1d-aa58-4c79-9768-bcd0a7dcce43", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "691d36f1-88c4-4e29-b5a7-059d5b66aee3", "address": "fa:16:3e:b2:aa:a0", "network": {"id": "8f455f90-843e-4bc7-94c8-f96455201770", "bridge": "br-int", "label": "tempest-device-tagging-net2-1809570820", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.2.2.100", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": 
[], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a5dd5f9ca5747618b386c87a40ede88", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap691d36f1-88", "ovs_interfaceid": "691d36f1-88c4-4e29-b5a7-059d5b66aee3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "a935a2a5-757d-49f8-9924-1f5f12e03325", "address": "fa:16:3e:c4:17:f5", "network": {"id": "8f455f90-843e-4bc7-94c8-f96455201770", "bridge": "br-int", "label": "tempest-device-tagging-net2-1809570820", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.2.2.200", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a5dd5f9ca5747618b386c87a40ede88", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa935a2a5-75", "ovs_interfaceid": "a935a2a5-757d-49f8-9924-1f5f12e03325", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 02:31:16 np0005548731 nova_compute[232433]: 2025-12-06 07:31:16.088 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Releasing lock "refresh_cache-02b3a680-d1bd-462f-95dc-aa0934c7ceac" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 02:31:16 np0005548731 nova_compute[232433]: 2025-12-06 07:31:16.089 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] [instance: 02b3a680-d1bd-462f-95dc-aa0934c7ceac] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Dec  6 02:31:16 np0005548731 nova_compute[232433]: 2025-12-06 07:31:16.089 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:31:16 np0005548731 nova_compute[232433]: 2025-12-06 07:31:16.090 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:31:16 np0005548731 nova_compute[232433]: 2025-12-06 07:31:16.090 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:31:16 np0005548731 nova_compute[232433]: 2025-12-06 07:31:16.091 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:31:16 np0005548731 nova_compute[232433]: 2025-12-06 07:31:16.092 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:31:16 np0005548731 nova_compute[232433]: 2025-12-06 07:31:16.092 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  6 02:31:16 np0005548731 nova_compute[232433]: 2025-12-06 07:31:16.093 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:31:16 np0005548731 nova_compute[232433]: 2025-12-06 07:31:16.127 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:31:16 np0005548731 nova_compute[232433]: 2025-12-06 07:31:16.128 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:31:16 np0005548731 nova_compute[232433]: 2025-12-06 07:31:16.128 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:31:16 np0005548731 nova_compute[232433]: 2025-12-06 07:31:16.128 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  6 02:31:16 np0005548731 nova_compute[232433]: 2025-12-06 07:31:16.129 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:31:16 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:31:16 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:31:16 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:31:16.317 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:31:16 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 02:31:16 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/920108480' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 02:31:16 np0005548731 nova_compute[232433]: 2025-12-06 07:31:16.583 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.454s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:31:16 np0005548731 nova_compute[232433]: 2025-12-06 07:31:16.684 232437 DEBUG nova.virt.libvirt.driver [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] skipping disk for instance-00000072 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  6 02:31:16 np0005548731 nova_compute[232433]: 2025-12-06 07:31:16.685 232437 DEBUG nova.virt.libvirt.driver [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] skipping disk for instance-00000072 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  6 02:31:16 np0005548731 nova_compute[232433]: 2025-12-06 07:31:16.685 232437 DEBUG nova.virt.libvirt.driver [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] skipping disk for instance-00000072 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  6 02:31:16 np0005548731 nova_compute[232433]: 2025-12-06 07:31:16.685 232437 DEBUG nova.virt.libvirt.driver [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] skipping disk for instance-00000072 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  6 02:31:16 np0005548731 nova_compute[232433]: 2025-12-06 07:31:16.941 232437 WARNING nova.virt.libvirt.driver [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  6 02:31:16 np0005548731 nova_compute[232433]: 2025-12-06 07:31:16.942 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4229MB free_disk=20.96718978881836GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, 
"label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  6 02:31:16 np0005548731 nova_compute[232433]: 2025-12-06 07:31:16.943 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:31:16 np0005548731 nova_compute[232433]: 2025-12-06 07:31:16.943 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:31:17 np0005548731 nova_compute[232433]: 2025-12-06 07:31:17.079 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Instance 02b3a680-d1bd-462f-95dc-aa0934c7ceac actively managed on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  6 02:31:17 np0005548731 nova_compute[232433]: 2025-12-06 07:31:17.080 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  6 02:31:17 np0005548731 nova_compute[232433]: 2025-12-06 07:31:17.080 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  6 02:31:17 np0005548731 nova_compute[232433]: 2025-12-06 07:31:17.116 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:31:17 np0005548731 radosgw[81136]: INFO: RGWReshardLock::lock found lock on reshard.0000000005 to be held by another RGW process; skipping for now
Dec  6 02:31:17 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:31:17 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:31:17 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:31:17.472 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:31:17 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 02:31:17 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/759483955' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 02:31:17 np0005548731 nova_compute[232433]: 2025-12-06 07:31:17.602 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.486s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:31:17 np0005548731 nova_compute[232433]: 2025-12-06 07:31:17.607 232437 DEBUG nova.compute.provider_tree [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Inventory has not changed in ProviderTree for provider: 6d00757a-082f-486d-ae84-869a2ba2e6e7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  6 02:31:17 np0005548731 nova_compute[232433]: 2025-12-06 07:31:17.620 232437 DEBUG nova.scheduler.client.report [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Inventory has not changed for provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  6 02:31:17 np0005548731 nova_compute[232433]: 2025-12-06 07:31:17.640 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  6 02:31:17 np0005548731 nova_compute[232433]: 2025-12-06 07:31:17.640 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.697s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:31:17 np0005548731 nova_compute[232433]: 2025-12-06 07:31:17.730 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:31:17 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:31:17.794 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=45, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:ec:b3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '32:72:e7:89:e0:7d'}, ipsec=False) old=SB_Global(nb_cfg=44) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 02:31:17 np0005548731 nova_compute[232433]: 2025-12-06 07:31:17.795 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:31:17 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:31:17.796 143965 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec  6 02:31:17 np0005548731 podman[280635]: 2025-12-06 07:31:17.905978078 +0000 UTC m=+0.065861038 container health_status 26c2ce9bdb83c6db5ccad7f5b2a7cb4e9354ab0235ad2f942ea42c8feda2142b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  6 02:31:17 np0005548731 podman[280637]: 2025-12-06 07:31:17.90726289 +0000 UTC m=+0.062995949 container health_status ae4fd08cdec31d6ae2a6b91ffff7deb6ca5a8375cb52ee4b685cc6f25796824e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Dec  6 02:31:17 np0005548731 podman[280636]: 2025-12-06 07:31:17.971450626 +0000 UTC m=+0.131065009 container health_status a118c3becf56cfbcae208065771ebf10a65162bb7b96389971293e534823fb78 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec  6 02:31:18 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:31:18 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 02:31:18 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:31:18.319 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 02:31:18 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:31:18 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 02:31:19 np0005548731 radosgw[81136]: INFO: RGWReshardLock::lock found lock on reshard.0000000010 to be held by another RGW process; skipping for now
Dec  6 02:31:19 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:31:19 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:31:19 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:31:19.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:31:19 np0005548731 nova_compute[232433]: 2025-12-06 07:31:19.634 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:31:19 np0005548731 nova_compute[232433]: 2025-12-06 07:31:19.635 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:31:19 np0005548731 nova_compute[232433]: 2025-12-06 07:31:19.657 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:31:20 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:31:20 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:31:20 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:31:20.322 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:31:20 np0005548731 nova_compute[232433]: 2025-12-06 07:31:20.365 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:31:21 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:31:21 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 02:31:21 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:31:21.476 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 02:31:22 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:31:22 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:31:22 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:31:22.323 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:31:22 np0005548731 radosgw[81136]: INFO: RGWReshardLock::lock found lock on reshard.0000000013 to be held by another RGW process; skipping for now
Dec  6 02:31:22 np0005548731 nova_compute[232433]: 2025-12-06 07:31:22.732 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:31:23 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:31:23 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:31:23 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:31:23 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:31:23.479 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:31:23 np0005548731 ovn_controller[133927]: 2025-12-06T07:31:23Z|00063|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:b2:aa:a0 10.2.2.100
Dec  6 02:31:23 np0005548731 ovn_controller[133927]: 2025-12-06T07:31:23Z|00064|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:b2:aa:a0 10.2.2.100
Dec  6 02:31:24 np0005548731 ovn_controller[133927]: 2025-12-06T07:31:24Z|00065|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:c3:05:bf 10.1.1.137
Dec  6 02:31:24 np0005548731 ovn_controller[133927]: 2025-12-06T07:31:24Z|00066|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:c3:05:bf 10.1.1.137
Dec  6 02:31:24 np0005548731 ovn_controller[133927]: 2025-12-06T07:31:24Z|00067|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:c4:17:f5 10.2.2.200
Dec  6 02:31:24 np0005548731 ovn_controller[133927]: 2025-12-06T07:31:24Z|00068|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:c4:17:f5 10.2.2.200
Dec  6 02:31:24 np0005548731 ovn_controller[133927]: 2025-12-06T07:31:24Z|00069|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:e2:25:4d 10.1.1.231
Dec  6 02:31:24 np0005548731 ovn_controller[133927]: 2025-12-06T07:31:24Z|00070|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:e2:25:4d 10.1.1.231
Dec  6 02:31:24 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:31:24 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:31:24 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:31:24.325 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:31:24 np0005548731 ovn_controller[133927]: 2025-12-06T07:31:24Z|00071|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:89:a4:68 10.100.0.11
Dec  6 02:31:24 np0005548731 ovn_controller[133927]: 2025-12-06T07:31:24Z|00072|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:89:a4:68 10.100.0.11
Dec  6 02:31:25 np0005548731 nova_compute[232433]: 2025-12-06 07:31:25.415 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:31:25 np0005548731 ovn_controller[133927]: 2025-12-06T07:31:25Z|00073|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:45:a8:03 10.1.1.213
Dec  6 02:31:25 np0005548731 ovn_controller[133927]: 2025-12-06T07:31:25Z|00074|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:45:a8:03 10.1.1.213
Dec  6 02:31:25 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:31:25 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:31:25 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:31:25.481 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:31:25 np0005548731 ovn_controller[133927]: 2025-12-06T07:31:25Z|00075|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:62:49:cc 10.1.1.249
Dec  6 02:31:25 np0005548731 ovn_controller[133927]: 2025-12-06T07:31:25Z|00076|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:62:49:cc 10.1.1.249
Dec  6 02:31:26 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:31:26 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:31:26 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:31:26.326 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:31:27 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:31:27 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:31:27 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:31:27.484 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:31:27 np0005548731 nova_compute[232433]: 2025-12-06 07:31:27.735 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:31:27 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:31:27.797 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=9f96b960-b4f2-40bd-ae99-08121f5e8b78, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '45'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:31:28 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:31:28 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:31:28 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:31:28.329 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:31:28 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:31:28 np0005548731 ovn_controller[133927]: 2025-12-06T07:31:28Z|00495|binding|INFO|Releasing lport 18906741-a7df-47da-baeb-390929c7d2aa from this chassis (sb_readonly=0)
Dec  6 02:31:28 np0005548731 ovn_controller[133927]: 2025-12-06T07:31:28Z|00496|binding|INFO|Releasing lport 621b0b6d-f0c1-4f16-afad-2a740006082b from this chassis (sb_readonly=0)
Dec  6 02:31:28 np0005548731 ovn_controller[133927]: 2025-12-06T07:31:28Z|00497|binding|INFO|Releasing lport cd44e1b4-dbd0-4520-ab3c-e03fc0a44edf from this chassis (sb_readonly=0)
Dec  6 02:31:28 np0005548731 nova_compute[232433]: 2025-12-06 07:31:28.623 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:31:29 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:31:29 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:31:29 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:31:29.486 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:31:30 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:31:30 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:31:30 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:31:30.332 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:31:30 np0005548731 nova_compute[232433]: 2025-12-06 07:31:30.416 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:31:31 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:31:31 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:31:31 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:31:31.488 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:31:32 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:31:32 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:31:32 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:31:32.334 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:31:32 np0005548731 nova_compute[232433]: 2025-12-06 07:31:32.738 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:31:33 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:31:33 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:31:33 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:31:33 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:31:33.491 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:31:34 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:31:34 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:31:34 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:31:34.337 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:31:35 np0005548731 nova_compute[232433]: 2025-12-06 07:31:35.418 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:31:35 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:31:35 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 02:31:35 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:31:35.493 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 02:31:36 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:31:36 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:31:36 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:31:36.338 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:31:37 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:31:37 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:31:37 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:31:37.495 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:31:37 np0005548731 nova_compute[232433]: 2025-12-06 07:31:37.741 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:31:38 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:31:38 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:31:38 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:31:38.341 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:31:38 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:31:39 np0005548731 ceph-mon[77458]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #88. Immutable memtables: 0.
Dec  6 02:31:39 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:31:39.296631) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec  6 02:31:39 np0005548731 ceph-mon[77458]: rocksdb: [db/flush_job.cc:856] [default] [JOB 53] Flushing memtable with next log file: 88
Dec  6 02:31:39 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765006299296938, "job": 53, "event": "flush_started", "num_memtables": 1, "num_entries": 2527, "num_deletes": 253, "total_data_size": 6389861, "memory_usage": 6483872, "flush_reason": "Manual Compaction"}
Dec  6 02:31:39 np0005548731 ceph-mon[77458]: rocksdb: [db/flush_job.cc:885] [default] [JOB 53] Level-0 flush table #89: started
Dec  6 02:31:39 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765006299322974, "cf_name": "default", "job": 53, "event": "table_file_creation", "file_number": 89, "file_size": 4120130, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 45308, "largest_seqno": 47830, "table_properties": {"data_size": 4109543, "index_size": 6761, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2757, "raw_key_size": 22728, "raw_average_key_size": 21, "raw_value_size": 4088292, "raw_average_value_size": 3796, "num_data_blocks": 293, "num_entries": 1077, "num_filter_entries": 1077, "num_deletions": 253, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765006024, "oldest_key_time": 1765006024, "file_creation_time": 1765006299, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bc01f9a1-fda8-4008-aee1-1fa5ff4371a1", "db_session_id": "CWBL8WMDM4D79BU86E9T", "orig_file_number": 89, "seqno_to_time_mapping": "N/A"}}
Dec  6 02:31:39 np0005548731 ceph-mon[77458]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 53] Flush lasted 26158 microseconds, and 10385 cpu microseconds.
Dec  6 02:31:39 np0005548731 ceph-mon[77458]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  6 02:31:39 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:31:39.323024) [db/flush_job.cc:967] [default] [JOB 53] Level-0 flush table #89: 4120130 bytes OK
Dec  6 02:31:39 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:31:39.323042) [db/memtable_list.cc:519] [default] Level-0 commit table #89 started
Dec  6 02:31:39 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:31:39.324912) [db/memtable_list.cc:722] [default] Level-0 commit table #89: memtable #1 done
Dec  6 02:31:39 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:31:39.324926) EVENT_LOG_v1 {"time_micros": 1765006299324921, "job": 53, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec  6 02:31:39 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:31:39.324941) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec  6 02:31:39 np0005548731 ceph-mon[77458]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 53] Try to delete WAL files size 6378663, prev total WAL file size 6378663, number of live WAL files 2.
Dec  6 02:31:39 np0005548731 ceph-mon[77458]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000085.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  6 02:31:39 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:31:39.326265) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730033373635' seq:72057594037927935, type:22 .. '7061786F730034303137' seq:0, type:0; will stop at (end)
Dec  6 02:31:39 np0005548731 ceph-mon[77458]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 54] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec  6 02:31:39 np0005548731 ceph-mon[77458]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 53 Base level 0, inputs: [89(4023KB)], [87(10MB)]
Dec  6 02:31:39 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765006299326308, "job": 54, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [89], "files_L6": [87], "score": -1, "input_data_size": 15230293, "oldest_snapshot_seqno": -1}
Dec  6 02:31:39 np0005548731 ceph-mon[77458]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 54] Generated table #90: 7900 keys, 13150409 bytes, temperature: kUnknown
Dec  6 02:31:39 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765006299406721, "cf_name": "default", "job": 54, "event": "table_file_creation", "file_number": 90, "file_size": 13150409, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13096018, "index_size": 33543, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 19781, "raw_key_size": 204340, "raw_average_key_size": 25, "raw_value_size": 12953583, "raw_average_value_size": 1639, "num_data_blocks": 1327, "num_entries": 7900, "num_filter_entries": 7900, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765002564, "oldest_key_time": 0, "file_creation_time": 1765006299, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bc01f9a1-fda8-4008-aee1-1fa5ff4371a1", "db_session_id": "CWBL8WMDM4D79BU86E9T", "orig_file_number": 90, "seqno_to_time_mapping": "N/A"}}
Dec  6 02:31:39 np0005548731 ceph-mon[77458]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  6 02:31:39 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:31:39.407036) [db/compaction/compaction_job.cc:1663] [default] [JOB 54] Compacted 1@0 + 1@6 files to L6 => 13150409 bytes
Dec  6 02:31:39 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:31:39.409068) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 189.1 rd, 163.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.9, 10.6 +0.0 blob) out(12.5 +0.0 blob), read-write-amplify(6.9) write-amplify(3.2) OK, records in: 8427, records dropped: 527 output_compression: NoCompression
Dec  6 02:31:39 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:31:39.409083) EVENT_LOG_v1 {"time_micros": 1765006299409076, "job": 54, "event": "compaction_finished", "compaction_time_micros": 80546, "compaction_time_cpu_micros": 30372, "output_level": 6, "num_output_files": 1, "total_output_size": 13150409, "num_input_records": 8427, "num_output_records": 7900, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec  6 02:31:39 np0005548731 ceph-mon[77458]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000089.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  6 02:31:39 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765006299409978, "job": 54, "event": "table_file_deletion", "file_number": 89}
Dec  6 02:31:39 np0005548731 ceph-mon[77458]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000087.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  6 02:31:39 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765006299412199, "job": 54, "event": "table_file_deletion", "file_number": 87}
Dec  6 02:31:39 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:31:39.326128) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 02:31:39 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:31:39.412396) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 02:31:39 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:31:39.412403) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 02:31:39 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:31:39.412405) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 02:31:39 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:31:39.412407) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 02:31:39 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:31:39.412409) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 02:31:39 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:31:39 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:31:39 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:31:39.497 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:31:39 np0005548731 ovn_controller[133927]: 2025-12-06T07:31:39Z|00498|binding|INFO|Releasing lport 18906741-a7df-47da-baeb-390929c7d2aa from this chassis (sb_readonly=0)
Dec  6 02:31:39 np0005548731 ovn_controller[133927]: 2025-12-06T07:31:39Z|00499|binding|INFO|Releasing lport 621b0b6d-f0c1-4f16-afad-2a740006082b from this chassis (sb_readonly=0)
Dec  6 02:31:39 np0005548731 ovn_controller[133927]: 2025-12-06T07:31:39Z|00500|binding|INFO|Releasing lport cd44e1b4-dbd0-4520-ab3c-e03fc0a44edf from this chassis (sb_readonly=0)
Dec  6 02:31:39 np0005548731 nova_compute[232433]: 2025-12-06 07:31:39.650 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:31:40 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:31:40 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:31:40 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:31:40.344 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:31:40 np0005548731 nova_compute[232433]: 2025-12-06 07:31:40.420 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:31:41 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:31:41 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:31:41 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:31:41.500 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:31:42 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:31:42 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:31:42 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:31:42.346 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:31:42 np0005548731 nova_compute[232433]: 2025-12-06 07:31:42.745 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:31:43 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:31:43 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:31:43 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:31:43 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:31:43.503 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:31:43 np0005548731 nova_compute[232433]: 2025-12-06 07:31:43.565 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:31:44 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:31:44 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:31:44 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:31:44.348 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:31:45 np0005548731 nova_compute[232433]: 2025-12-06 07:31:45.421 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:31:45 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:31:45 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:31:45 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:31:45.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:31:46 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:31:46 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:31:46 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:31:46.349 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:31:47 np0005548731 nova_compute[232433]: 2025-12-06 07:31:47.403 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:31:47 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:31:47 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:31:47 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:31:47.509 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:31:47 np0005548731 nova_compute[232433]: 2025-12-06 07:31:47.748 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:31:48 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:31:48 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 02:31:48 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:31:48.352 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 02:31:48 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:31:48 np0005548731 podman[280759]: 2025-12-06 07:31:48.891406135 +0000 UTC m=+0.053092187 container health_status 26c2ce9bdb83c6db5ccad7f5b2a7cb4e9354ab0235ad2f942ea42c8feda2142b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, 
managed_by=edpm_ansible, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  6 02:31:48 np0005548731 podman[280761]: 2025-12-06 07:31:48.895680069 +0000 UTC m=+0.049684594 container health_status ae4fd08cdec31d6ae2a6b91ffff7deb6ca5a8375cb52ee4b685cc6f25796824e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_id=multipathd, container_name=multipathd)
Dec  6 02:31:48 np0005548731 podman[280760]: 2025-12-06 07:31:48.926302857 +0000 UTC m=+0.086639616 container health_status a118c3becf56cfbcae208065771ebf10a65162bb7b96389971293e534823fb78 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Dec  6 02:31:49 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:31:49 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:31:49 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:31:49.511 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:31:50 np0005548731 nova_compute[232433]: 2025-12-06 07:31:50.059 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:31:50 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:31:50 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:31:50 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:31:50.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:31:50 np0005548731 nova_compute[232433]: 2025-12-06 07:31:50.423 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:31:51 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:31:51 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:31:51 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:31:51.513 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:31:52 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:31:52 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:31:52 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:31:52.357 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:31:52 np0005548731 nova_compute[232433]: 2025-12-06 07:31:52.750 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:31:53 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:31:53 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:31:53 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:31:53 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:31:53.516 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:31:53 np0005548731 nova_compute[232433]: 2025-12-06 07:31:53.971 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:31:54 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:31:54 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:31:54 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:31:54.360 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:31:54 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:31:54.797 144074 DEBUG eventlet.wsgi.server [-] (144074) accepted '' server /usr/lib/python3.9/site-packages/eventlet/wsgi.py:1004#033[00m
Dec  6 02:31:54 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:31:54.799 144074 DEBUG neutron.agent.ovn.metadata.server [-] Request: GET /openstack/latest/meta_data.json HTTP/1.0#015
Dec  6 02:31:54 np0005548731 ovn_metadata_agent[143960]: Accept: */*#015
Dec  6 02:31:54 np0005548731 ovn_metadata_agent[143960]: Connection: close#015
Dec  6 02:31:54 np0005548731 ovn_metadata_agent[143960]: Content-Type: text/plain#015
Dec  6 02:31:54 np0005548731 ovn_metadata_agent[143960]: Host: 169.254.169.254#015
Dec  6 02:31:54 np0005548731 ovn_metadata_agent[143960]: User-Agent: curl/7.84.0#015
Dec  6 02:31:54 np0005548731 ovn_metadata_agent[143960]: X-Forwarded-For: 10.100.0.11#015
Dec  6 02:31:54 np0005548731 ovn_metadata_agent[143960]: X-Ovn-Network-Id: 15e538c5-1144-486f-bc43-86871ca19e5e __call__ /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:82#033[00m
Dec  6 02:31:55 np0005548731 nova_compute[232433]: 2025-12-06 07:31:55.425 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:31:55 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:31:55 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:31:55 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:31:55.518 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:31:55 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:31:55.895 144074 DEBUG neutron.agent.ovn.metadata.server [-] <Response [200]> _proxy_request /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:161#033[00m
Dec  6 02:31:55 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:31:55.896 144074 INFO eventlet.wsgi.server [-] 10.100.0.11,<local> "GET /openstack/latest/meta_data.json HTTP/1.1" status: 200  len: 2548 time: 1.0970142#033[00m
Dec  6 02:31:55 np0005548731 haproxy-metadata-proxy-15e538c5-1144-486f-bc43-86871ca19e5e[279996]: 10.100.0.11:60058 [06/Dec/2025:07:31:54.796] listener listener/metadata 0/0/0/1099/1099 200 2532 - - ---- 1/1/0/0/0 0/0 "GET /openstack/latest/meta_data.json HTTP/1.1"
Dec  6 02:31:56 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:31:56 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:31:56 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:31:56.361 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:31:56 np0005548731 nova_compute[232433]: 2025-12-06 07:31:56.990 232437 DEBUG oslo_concurrency.lockutils [None req-d5dc85e5-23c1-453a-b8a7-ee96a92496a0 8e5d532f4af34acc9e3a7a08819bb089 1a5dd5f9ca5747618b386c87a40ede88 - - default default] Acquiring lock "02b3a680-d1bd-462f-95dc-aa0934c7ceac" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:31:56 np0005548731 nova_compute[232433]: 2025-12-06 07:31:56.991 232437 DEBUG oslo_concurrency.lockutils [None req-d5dc85e5-23c1-453a-b8a7-ee96a92496a0 8e5d532f4af34acc9e3a7a08819bb089 1a5dd5f9ca5747618b386c87a40ede88 - - default default] Lock "02b3a680-d1bd-462f-95dc-aa0934c7ceac" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:31:56 np0005548731 nova_compute[232433]: 2025-12-06 07:31:56.991 232437 DEBUG oslo_concurrency.lockutils [None req-d5dc85e5-23c1-453a-b8a7-ee96a92496a0 8e5d532f4af34acc9e3a7a08819bb089 1a5dd5f9ca5747618b386c87a40ede88 - - default default] Acquiring lock "02b3a680-d1bd-462f-95dc-aa0934c7ceac-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:31:56 np0005548731 nova_compute[232433]: 2025-12-06 07:31:56.991 232437 DEBUG oslo_concurrency.lockutils [None req-d5dc85e5-23c1-453a-b8a7-ee96a92496a0 8e5d532f4af34acc9e3a7a08819bb089 1a5dd5f9ca5747618b386c87a40ede88 - - default default] Lock "02b3a680-d1bd-462f-95dc-aa0934c7ceac-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:31:56 np0005548731 nova_compute[232433]: 2025-12-06 07:31:56.992 232437 DEBUG oslo_concurrency.lockutils [None req-d5dc85e5-23c1-453a-b8a7-ee96a92496a0 8e5d532f4af34acc9e3a7a08819bb089 1a5dd5f9ca5747618b386c87a40ede88 - - default default] Lock "02b3a680-d1bd-462f-95dc-aa0934c7ceac-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:31:56 np0005548731 nova_compute[232433]: 2025-12-06 07:31:56.993 232437 INFO nova.compute.manager [None req-d5dc85e5-23c1-453a-b8a7-ee96a92496a0 8e5d532f4af34acc9e3a7a08819bb089 1a5dd5f9ca5747618b386c87a40ede88 - - default default] [instance: 02b3a680-d1bd-462f-95dc-aa0934c7ceac] Terminating instance#033[00m
Dec  6 02:31:56 np0005548731 nova_compute[232433]: 2025-12-06 07:31:56.994 232437 DEBUG nova.compute.manager [None req-d5dc85e5-23c1-453a-b8a7-ee96a92496a0 8e5d532f4af34acc9e3a7a08819bb089 1a5dd5f9ca5747618b386c87a40ede88 - - default default] [instance: 02b3a680-d1bd-462f-95dc-aa0934c7ceac] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec  6 02:31:57 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:31:57 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:31:57 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:31:57.520 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:31:57 np0005548731 nova_compute[232433]: 2025-12-06 07:31:57.753 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:31:58 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:31:58 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:31:58 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:31:58.363 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:31:58 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:31:58 np0005548731 kernel: tap70cec09c-3d (unregistering): left promiscuous mode
Dec  6 02:31:58 np0005548731 NetworkManager[49182]: <info>  [1765006318.8177] device (tap70cec09c-3d): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec  6 02:31:58 np0005548731 nova_compute[232433]: 2025-12-06 07:31:58.836 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:31:58 np0005548731 ovn_controller[133927]: 2025-12-06T07:31:58Z|00501|binding|INFO|Releasing lport 70cec09c-3d65-4e69-8c35-7d82e75f4add from this chassis (sb_readonly=0)
Dec  6 02:31:58 np0005548731 ovn_controller[133927]: 2025-12-06T07:31:58Z|00502|binding|INFO|Setting lport 70cec09c-3d65-4e69-8c35-7d82e75f4add down in Southbound
Dec  6 02:31:58 np0005548731 ovn_controller[133927]: 2025-12-06T07:31:58Z|00503|binding|INFO|Removing iface tap70cec09c-3d ovn-installed in OVS
Dec  6 02:31:58 np0005548731 nova_compute[232433]: 2025-12-06 07:31:58.839 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:31:58 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:31:58.848 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:89:a4:68 10.100.0.11'], port_security=['fa:16:3e:89:a4:68 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '02b3a680-d1bd-462f-95dc-aa0934c7ceac', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-15e538c5-1144-486f-bc43-86871ca19e5e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1a5dd5f9ca5747618b386c87a40ede88', 'neutron:revision_number': '4', 'neutron:security_group_ids': '676548a0-8cc7-4c39-b257-246676a83552', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com', 'neutron:port_fip': '192.168.122.208'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4091bb9f-da77-4225-9ab9-5795e8064344, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], logical_port=70cec09c-3d65-4e69-8c35-7d82e75f4add) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 02:31:58 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:31:58.850 143965 INFO neutron.agent.ovn.metadata.agent [-] Port 70cec09c-3d65-4e69-8c35-7d82e75f4add in datapath 15e538c5-1144-486f-bc43-86871ca19e5e unbound from our chassis#033[00m
Dec  6 02:31:58 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:31:58.852 143965 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 15e538c5-1144-486f-bc43-86871ca19e5e, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec  6 02:31:58 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:31:58.854 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[889f28ec-3fb5-4da9-ac9a-2a2707f1a31b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:31:58 np0005548731 nova_compute[232433]: 2025-12-06 07:31:58.855 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:31:58 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:31:58.856 143965 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-15e538c5-1144-486f-bc43-86871ca19e5e namespace which is not needed anymore#033[00m
Dec  6 02:31:58 np0005548731 kernel: tap99fbd66e-87 (unregistering): left promiscuous mode
Dec  6 02:31:58 np0005548731 NetworkManager[49182]: <info>  [1765006318.8705] device (tap99fbd66e-87): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec  6 02:31:58 np0005548731 ovn_controller[133927]: 2025-12-06T07:31:58Z|00504|binding|INFO|Releasing lport 99fbd66e-87ac-42e3-8722-34d617ef9bb1 from this chassis (sb_readonly=0)
Dec  6 02:31:58 np0005548731 ovn_controller[133927]: 2025-12-06T07:31:58Z|00505|binding|INFO|Setting lport 99fbd66e-87ac-42e3-8722-34d617ef9bb1 down in Southbound
Dec  6 02:31:58 np0005548731 nova_compute[232433]: 2025-12-06 07:31:58.880 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:31:58 np0005548731 ovn_controller[133927]: 2025-12-06T07:31:58Z|00506|binding|INFO|Removing iface tap99fbd66e-87 ovn-installed in OVS
Dec  6 02:31:58 np0005548731 nova_compute[232433]: 2025-12-06 07:31:58.883 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:31:58 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:31:58.890 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c3:05:bf 10.1.1.137'], port_security=['fa:16:3e:c3:05:bf 10.1.1.137'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-TaggedBootDevicesTest_v242-767891495', 'neutron:cidrs': '10.1.1.137/24', 'neutron:device_id': '02b3a680-d1bd-462f-95dc-aa0934c7ceac', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-95ae8bf7-e7e5-4bbf-a4f2-58f1e2b487a5', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-TaggedBootDevicesTest_v242-767891495', 'neutron:project_id': '1a5dd5f9ca5747618b386c87a40ede88', 'neutron:revision_number': '4', 'neutron:security_group_ids': '9bb4bf60-5951-4a22-9fe5-bf4066fde3a9', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7516919c-9b05-49db-ac3d-0398155a9aa3, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], logical_port=99fbd66e-87ac-42e3-8722-34d617ef9bb1) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 02:31:58 np0005548731 nova_compute[232433]: 2025-12-06 07:31:58.896 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:31:58 np0005548731 kernel: tap7989285b-96 (unregistering): left promiscuous mode
Dec  6 02:31:58 np0005548731 NetworkManager[49182]: <info>  [1765006318.9173] device (tap7989285b-96): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec  6 02:31:58 np0005548731 ovn_controller[133927]: 2025-12-06T07:31:58Z|00507|binding|INFO|Releasing lport 7989285b-96a5-43e8-b98e-c0fe1badbf5e from this chassis (sb_readonly=0)
Dec  6 02:31:58 np0005548731 ovn_controller[133927]: 2025-12-06T07:31:58Z|00508|binding|INFO|Setting lport 7989285b-96a5-43e8-b98e-c0fe1badbf5e down in Southbound
Dec  6 02:31:58 np0005548731 nova_compute[232433]: 2025-12-06 07:31:58.932 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:31:58 np0005548731 ovn_controller[133927]: 2025-12-06T07:31:58Z|00509|binding|INFO|Removing iface tap7989285b-96 ovn-installed in OVS
Dec  6 02:31:58 np0005548731 nova_compute[232433]: 2025-12-06 07:31:58.934 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:31:58 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:31:58.938 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e2:25:4d 10.1.1.231'], port_security=['fa:16:3e:e2:25:4d 10.1.1.231'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-TaggedBootDevicesTest_v242-438388958', 'neutron:cidrs': '10.1.1.231/24', 'neutron:device_id': '02b3a680-d1bd-462f-95dc-aa0934c7ceac', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-95ae8bf7-e7e5-4bbf-a4f2-58f1e2b487a5', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-TaggedBootDevicesTest_v242-438388958', 'neutron:project_id': '1a5dd5f9ca5747618b386c87a40ede88', 'neutron:revision_number': '4', 'neutron:security_group_ids': '9bb4bf60-5951-4a22-9fe5-bf4066fde3a9', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7516919c-9b05-49db-ac3d-0398155a9aa3, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], logical_port=7989285b-96a5-43e8-b98e-c0fe1badbf5e) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 02:31:58 np0005548731 nova_compute[232433]: 2025-12-06 07:31:58.947 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:31:58 np0005548731 kernel: tapee2de0a7-a3 (unregistering): left promiscuous mode
Dec  6 02:31:58 np0005548731 NetworkManager[49182]: <info>  [1765006318.9576] device (tapee2de0a7-a3): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec  6 02:31:58 np0005548731 kernel: tap7aeffb1d-aa (unregistering): left promiscuous mode
Dec  6 02:31:58 np0005548731 NetworkManager[49182]: <info>  [1765006318.9754] device (tap7aeffb1d-aa): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec  6 02:31:58 np0005548731 ovn_controller[133927]: 2025-12-06T07:31:58Z|00510|binding|INFO|Releasing lport ee2de0a7-a321-4f41-868f-457ea7275ded from this chassis (sb_readonly=0)
Dec  6 02:31:58 np0005548731 ovn_controller[133927]: 2025-12-06T07:31:58Z|00511|binding|INFO|Setting lport ee2de0a7-a321-4f41-868f-457ea7275ded down in Southbound
Dec  6 02:31:58 np0005548731 nova_compute[232433]: 2025-12-06 07:31:58.975 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:31:58 np0005548731 ovn_controller[133927]: 2025-12-06T07:31:58Z|00512|binding|INFO|Removing iface tapee2de0a7-a3 ovn-installed in OVS
Dec  6 02:31:58 np0005548731 nova_compute[232433]: 2025-12-06 07:31:58.978 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:31:58 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:31:58.983 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:62:49:cc 10.1.1.249'], port_security=['fa:16:3e:62:49:cc 10.1.1.249'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.1.1.249/24', 'neutron:device_id': '02b3a680-d1bd-462f-95dc-aa0934c7ceac', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-95ae8bf7-e7e5-4bbf-a4f2-58f1e2b487a5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1a5dd5f9ca5747618b386c87a40ede88', 'neutron:revision_number': '4', 'neutron:security_group_ids': '676548a0-8cc7-4c39-b257-246676a83552', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7516919c-9b05-49db-ac3d-0398155a9aa3, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], logical_port=ee2de0a7-a321-4f41-868f-457ea7275ded) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 02:31:58 np0005548731 neutron-haproxy-ovnmeta-15e538c5-1144-486f-bc43-86871ca19e5e[279990]: [NOTICE]   (279994) : haproxy version is 2.8.14-c23fe91
Dec  6 02:31:58 np0005548731 neutron-haproxy-ovnmeta-15e538c5-1144-486f-bc43-86871ca19e5e[279990]: [NOTICE]   (279994) : path to executable is /usr/sbin/haproxy
Dec  6 02:31:58 np0005548731 neutron-haproxy-ovnmeta-15e538c5-1144-486f-bc43-86871ca19e5e[279990]: [WARNING]  (279994) : Exiting Master process...
Dec  6 02:31:58 np0005548731 neutron-haproxy-ovnmeta-15e538c5-1144-486f-bc43-86871ca19e5e[279990]: [ALERT]    (279994) : Current worker (279996) exited with code 143 (Terminated)
Dec  6 02:31:58 np0005548731 neutron-haproxy-ovnmeta-15e538c5-1144-486f-bc43-86871ca19e5e[279990]: [WARNING]  (279994) : All workers exited. Exiting... (0)
Dec  6 02:31:58 np0005548731 systemd[1]: libpod-5c903aa46b91f16591a9964e4edb57255c6461e4b40848e7a6c74e71090f7255.scope: Deactivated successfully.
Dec  6 02:31:58 np0005548731 kernel: tap691d36f1-88 (unregistering): left promiscuous mode
Dec  6 02:31:59 np0005548731 NetworkManager[49182]: <info>  [1765006319.0004] device (tap691d36f1-88): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec  6 02:31:59 np0005548731 podman[280912]: 2025-12-06 07:31:59.002937063 +0000 UTC m=+0.058454917 container died 5c903aa46b91f16591a9964e4edb57255c6461e4b40848e7a6c74e71090f7255 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-15e538c5-1144-486f-bc43-86871ca19e5e, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  6 02:31:59 np0005548731 ovn_controller[133927]: 2025-12-06T07:31:59Z|00513|binding|INFO|Releasing lport 7aeffb1d-aa58-4c79-9768-bcd0a7dcce43 from this chassis (sb_readonly=0)
Dec  6 02:31:59 np0005548731 ovn_controller[133927]: 2025-12-06T07:31:59Z|00514|binding|INFO|Setting lport 7aeffb1d-aa58-4c79-9768-bcd0a7dcce43 down in Southbound
Dec  6 02:31:59 np0005548731 nova_compute[232433]: 2025-12-06 07:31:59.005 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:31:59 np0005548731 ovn_controller[133927]: 2025-12-06T07:31:59Z|00515|binding|INFO|Removing iface tap7aeffb1d-aa ovn-installed in OVS
Dec  6 02:31:59 np0005548731 nova_compute[232433]: 2025-12-06 07:31:59.008 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:31:59 np0005548731 nova_compute[232433]: 2025-12-06 07:31:59.009 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:31:59 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:31:59.013 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:45:a8:03 10.1.1.213'], port_security=['fa:16:3e:45:a8:03 10.1.1.213'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.1.1.213/24', 'neutron:device_id': '02b3a680-d1bd-462f-95dc-aa0934c7ceac', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-95ae8bf7-e7e5-4bbf-a4f2-58f1e2b487a5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1a5dd5f9ca5747618b386c87a40ede88', 'neutron:revision_number': '4', 'neutron:security_group_ids': '676548a0-8cc7-4c39-b257-246676a83552', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7516919c-9b05-49db-ac3d-0398155a9aa3, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], logical_port=7aeffb1d-aa58-4c79-9768-bcd0a7dcce43) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 02:31:59 np0005548731 kernel: tapa935a2a5-75 (unregistering): left promiscuous mode
Dec  6 02:31:59 np0005548731 NetworkManager[49182]: <info>  [1765006319.0244] device (tapa935a2a5-75): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec  6 02:31:59 np0005548731 nova_compute[232433]: 2025-12-06 07:31:59.036 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:31:59 np0005548731 ovn_controller[133927]: 2025-12-06T07:31:59Z|00516|binding|INFO|Releasing lport 691d36f1-88c4-4e29-b5a7-059d5b66aee3 from this chassis (sb_readonly=0)
Dec  6 02:31:59 np0005548731 ovn_controller[133927]: 2025-12-06T07:31:59Z|00517|binding|INFO|Setting lport 691d36f1-88c4-4e29-b5a7-059d5b66aee3 down in Southbound
Dec  6 02:31:59 np0005548731 ovn_controller[133927]: 2025-12-06T07:31:59Z|00518|binding|INFO|Removing iface tap691d36f1-88 ovn-installed in OVS
Dec  6 02:31:59 np0005548731 nova_compute[232433]: 2025-12-06 07:31:59.038 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:31:59 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:31:59.043 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b2:aa:a0 10.2.2.100'], port_security=['fa:16:3e:b2:aa:a0 10.2.2.100'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.2.2.100/24', 'neutron:device_id': '02b3a680-d1bd-462f-95dc-aa0934c7ceac', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8f455f90-843e-4bc7-94c8-f96455201770', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1a5dd5f9ca5747618b386c87a40ede88', 'neutron:revision_number': '4', 'neutron:security_group_ids': '676548a0-8cc7-4c39-b257-246676a83552', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f637f42f-be62-4dc5-920d-f9478e4e92e7, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], logical_port=691d36f1-88c4-4e29-b5a7-059d5b66aee3) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 02:31:59 np0005548731 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-5c903aa46b91f16591a9964e4edb57255c6461e4b40848e7a6c74e71090f7255-userdata-shm.mount: Deactivated successfully.
Dec  6 02:31:59 np0005548731 systemd[1]: var-lib-containers-storage-overlay-b1a0a059e5981d7fac3a19a6464f61a2c43cb394b9f87812895a4a36b3a53ccc-merged.mount: Deactivated successfully.
Dec  6 02:31:59 np0005548731 ovn_controller[133927]: 2025-12-06T07:31:59Z|00519|binding|INFO|Releasing lport a935a2a5-757d-49f8-9924-1f5f12e03325 from this chassis (sb_readonly=0)
Dec  6 02:31:59 np0005548731 ovn_controller[133927]: 2025-12-06T07:31:59Z|00520|binding|INFO|Setting lport a935a2a5-757d-49f8-9924-1f5f12e03325 down in Southbound
Dec  6 02:31:59 np0005548731 nova_compute[232433]: 2025-12-06 07:31:59.069 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:31:59 np0005548731 podman[280912]: 2025-12-06 07:31:59.069790135 +0000 UTC m=+0.125307989 container cleanup 5c903aa46b91f16591a9964e4edb57255c6461e4b40848e7a6c74e71090f7255 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-15e538c5-1144-486f-bc43-86871ca19e5e, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec  6 02:31:59 np0005548731 ovn_controller[133927]: 2025-12-06T07:31:59Z|00521|binding|INFO|Removing iface tapa935a2a5-75 ovn-installed in OVS
Dec  6 02:31:59 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:31:59.076 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c4:17:f5 10.2.2.200'], port_security=['fa:16:3e:c4:17:f5 10.2.2.200'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.2.2.200/24', 'neutron:device_id': '02b3a680-d1bd-462f-95dc-aa0934c7ceac', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8f455f90-843e-4bc7-94c8-f96455201770', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1a5dd5f9ca5747618b386c87a40ede88', 'neutron:revision_number': '4', 'neutron:security_group_ids': '676548a0-8cc7-4c39-b257-246676a83552', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f637f42f-be62-4dc5-920d-f9478e4e92e7, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], logical_port=a935a2a5-757d-49f8-9924-1f5f12e03325) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 02:31:59 np0005548731 systemd[1]: libpod-conmon-5c903aa46b91f16591a9964e4edb57255c6461e4b40848e7a6c74e71090f7255.scope: Deactivated successfully.
Dec  6 02:31:59 np0005548731 nova_compute[232433]: 2025-12-06 07:31:59.083 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:31:59 np0005548731 systemd[1]: machine-qemu\x2d51\x2dinstance\x2d00000072.scope: Deactivated successfully.
Dec  6 02:31:59 np0005548731 systemd[1]: machine-qemu\x2d51\x2dinstance\x2d00000072.scope: Consumed 19.029s CPU time.
Dec  6 02:31:59 np0005548731 systemd-machined[195355]: Machine qemu-51-instance-00000072 terminated.
Dec  6 02:31:59 np0005548731 podman[280974]: 2025-12-06 07:31:59.139977798 +0000 UTC m=+0.045859490 container remove 5c903aa46b91f16591a9964e4edb57255c6461e4b40848e7a6c74e71090f7255 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-15e538c5-1144-486f-bc43-86871ca19e5e, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec  6 02:31:59 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:31:59.146 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[b9f1f073-ea8a-462e-a070-7c43ccdad59e]: (4, ('Sat Dec  6 07:31:58 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-15e538c5-1144-486f-bc43-86871ca19e5e (5c903aa46b91f16591a9964e4edb57255c6461e4b40848e7a6c74e71090f7255)\n5c903aa46b91f16591a9964e4edb57255c6461e4b40848e7a6c74e71090f7255\nSat Dec  6 07:31:59 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-15e538c5-1144-486f-bc43-86871ca19e5e (5c903aa46b91f16591a9964e4edb57255c6461e4b40848e7a6c74e71090f7255)\n5c903aa46b91f16591a9964e4edb57255c6461e4b40848e7a6c74e71090f7255\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:31:59 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:31:59.148 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[8659f9eb-1535-421d-b913-7bab2ad9f83f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:31:59 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:31:59.149 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap15e538c5-10, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:31:59 np0005548731 nova_compute[232433]: 2025-12-06 07:31:59.161 232437 DEBUG nova.compute.manager [req-8a6d2e79-3517-4879-91ed-3a73c47446ae req-40887b3c-67b7-4195-9fa7-f6c0592dd49d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 02b3a680-d1bd-462f-95dc-aa0934c7ceac] Received event network-vif-unplugged-70cec09c-3d65-4e69-8c35-7d82e75f4add external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:31:59 np0005548731 nova_compute[232433]: 2025-12-06 07:31:59.162 232437 DEBUG oslo_concurrency.lockutils [req-8a6d2e79-3517-4879-91ed-3a73c47446ae req-40887b3c-67b7-4195-9fa7-f6c0592dd49d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "02b3a680-d1bd-462f-95dc-aa0934c7ceac-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:31:59 np0005548731 nova_compute[232433]: 2025-12-06 07:31:59.162 232437 DEBUG oslo_concurrency.lockutils [req-8a6d2e79-3517-4879-91ed-3a73c47446ae req-40887b3c-67b7-4195-9fa7-f6c0592dd49d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "02b3a680-d1bd-462f-95dc-aa0934c7ceac-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:31:59 np0005548731 nova_compute[232433]: 2025-12-06 07:31:59.162 232437 DEBUG oslo_concurrency.lockutils [req-8a6d2e79-3517-4879-91ed-3a73c47446ae req-40887b3c-67b7-4195-9fa7-f6c0592dd49d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "02b3a680-d1bd-462f-95dc-aa0934c7ceac-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:31:59 np0005548731 nova_compute[232433]: 2025-12-06 07:31:59.163 232437 DEBUG nova.compute.manager [req-8a6d2e79-3517-4879-91ed-3a73c47446ae req-40887b3c-67b7-4195-9fa7-f6c0592dd49d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 02b3a680-d1bd-462f-95dc-aa0934c7ceac] No waiting events found dispatching network-vif-unplugged-70cec09c-3d65-4e69-8c35-7d82e75f4add pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 02:31:59 np0005548731 nova_compute[232433]: 2025-12-06 07:31:59.163 232437 DEBUG nova.compute.manager [req-8a6d2e79-3517-4879-91ed-3a73c47446ae req-40887b3c-67b7-4195-9fa7-f6c0592dd49d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 02b3a680-d1bd-462f-95dc-aa0934c7ceac] Received event network-vif-unplugged-70cec09c-3d65-4e69-8c35-7d82e75f4add for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Dec  6 02:31:59 np0005548731 nova_compute[232433]: 2025-12-06 07:31:59.200 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:31:59 np0005548731 kernel: tap15e538c5-10: left promiscuous mode
Dec  6 02:31:59 np0005548731 nova_compute[232433]: 2025-12-06 07:31:59.205 232437 DEBUG nova.compute.manager [req-a52a17a0-2741-4969-9dba-0e8d780b077e req-12442426-cc81-49e4-8035-f4f17ccd483b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 02b3a680-d1bd-462f-95dc-aa0934c7ceac] Received event network-vif-unplugged-7989285b-96a5-43e8-b98e-c0fe1badbf5e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:31:59 np0005548731 nova_compute[232433]: 2025-12-06 07:31:59.206 232437 DEBUG oslo_concurrency.lockutils [req-a52a17a0-2741-4969-9dba-0e8d780b077e req-12442426-cc81-49e4-8035-f4f17ccd483b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "02b3a680-d1bd-462f-95dc-aa0934c7ceac-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:31:59 np0005548731 nova_compute[232433]: 2025-12-06 07:31:59.206 232437 DEBUG oslo_concurrency.lockutils [req-a52a17a0-2741-4969-9dba-0e8d780b077e req-12442426-cc81-49e4-8035-f4f17ccd483b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "02b3a680-d1bd-462f-95dc-aa0934c7ceac-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:31:59 np0005548731 nova_compute[232433]: 2025-12-06 07:31:59.206 232437 DEBUG oslo_concurrency.lockutils [req-a52a17a0-2741-4969-9dba-0e8d780b077e req-12442426-cc81-49e4-8035-f4f17ccd483b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "02b3a680-d1bd-462f-95dc-aa0934c7ceac-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:31:59 np0005548731 nova_compute[232433]: 2025-12-06 07:31:59.206 232437 DEBUG nova.compute.manager [req-a52a17a0-2741-4969-9dba-0e8d780b077e req-12442426-cc81-49e4-8035-f4f17ccd483b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 02b3a680-d1bd-462f-95dc-aa0934c7ceac] No waiting events found dispatching network-vif-unplugged-7989285b-96a5-43e8-b98e-c0fe1badbf5e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 02:31:59 np0005548731 nova_compute[232433]: 2025-12-06 07:31:59.207 232437 DEBUG nova.compute.manager [req-a52a17a0-2741-4969-9dba-0e8d780b077e req-12442426-cc81-49e4-8035-f4f17ccd483b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 02b3a680-d1bd-462f-95dc-aa0934c7ceac] Received event network-vif-unplugged-7989285b-96a5-43e8-b98e-c0fe1badbf5e for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Dec  6 02:31:59 np0005548731 NetworkManager[49182]: <info>  [1765006319.2141] manager: (tap70cec09c-3d): new Tun device (/org/freedesktop/NetworkManager/Devices/241)
Dec  6 02:31:59 np0005548731 NetworkManager[49182]: <info>  [1765006319.2311] manager: (tap99fbd66e-87): new Tun device (/org/freedesktop/NetworkManager/Devices/242)
Dec  6 02:31:59 np0005548731 nova_compute[232433]: 2025-12-06 07:31:59.230 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:31:59 np0005548731 nova_compute[232433]: 2025-12-06 07:31:59.232 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:31:59 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:31:59.233 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[0704c2bf-4246-435c-a609-928bb050e833]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:31:59 np0005548731 NetworkManager[49182]: <info>  [1765006319.2411] manager: (tap7989285b-96): new Tun device (/org/freedesktop/NetworkManager/Devices/243)
Dec  6 02:31:59 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:31:59.251 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[6c2eab7d-0211-4133-af17-d6dbdb6d8215]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:31:59 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:31:59.252 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[8a7804e1-337e-4761-8b9d-f8cb1d8e35bf]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:31:59 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:31:59.270 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[2e104f94-53f4-4366-b063-250c5e919279]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 650478, 'reachable_time': 43639, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 281026, 'error': None, 'target': 'ovnmeta-15e538c5-1144-486f-bc43-86871ca19e5e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:31:59 np0005548731 systemd[1]: run-netns-ovnmeta\x2d15e538c5\x2d1144\x2d486f\x2dbc43\x2d86871ca19e5e.mount: Deactivated successfully.
Dec  6 02:31:59 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:31:59.273 144079 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-15e538c5-1144-486f-bc43-86871ca19e5e deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Dec  6 02:31:59 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:31:59.273 144079 DEBUG oslo.privsep.daemon [-] privsep: reply[bb184435-ec5f-43f2-9847-80f727e78a2e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:31:59 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:31:59.275 143965 INFO neutron.agent.ovn.metadata.agent [-] Port 99fbd66e-87ac-42e3-8722-34d617ef9bb1 in datapath 95ae8bf7-e7e5-4bbf-a4f2-58f1e2b487a5 unbound from our chassis#033[00m
Dec  6 02:31:59 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:31:59.276 143965 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 95ae8bf7-e7e5-4bbf-a4f2-58f1e2b487a5, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec  6 02:31:59 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:31:59.277 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[75916c2e-c539-412d-a945-82e36d168aa1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:31:59 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:31:59.277 143965 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-95ae8bf7-e7e5-4bbf-a4f2-58f1e2b487a5 namespace which is not needed anymore#033[00m
Dec  6 02:31:59 np0005548731 NetworkManager[49182]: <info>  [1765006319.2921] manager: (tap691d36f1-88): new Tun device (/org/freedesktop/NetworkManager/Devices/244)
Dec  6 02:31:59 np0005548731 nova_compute[232433]: 2025-12-06 07:31:59.327 232437 INFO nova.virt.libvirt.driver [-] [instance: 02b3a680-d1bd-462f-95dc-aa0934c7ceac] Instance destroyed successfully.#033[00m
Dec  6 02:31:59 np0005548731 nova_compute[232433]: 2025-12-06 07:31:59.327 232437 DEBUG nova.objects.instance [None req-d5dc85e5-23c1-453a-b8a7-ee96a92496a0 8e5d532f4af34acc9e3a7a08819bb089 1a5dd5f9ca5747618b386c87a40ede88 - - default default] Lazy-loading 'resources' on Instance uuid 02b3a680-d1bd-462f-95dc-aa0934c7ceac obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:31:59 np0005548731 nova_compute[232433]: 2025-12-06 07:31:59.351 232437 DEBUG nova.virt.libvirt.vif [None req-d5dc85e5-23c1-453a-b8a7-ee96a92496a0 8e5d532f4af34acc9e3a7a08819bb089 1a5dd5f9ca5747618b386c87a40ede88 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T07:30:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-device-tagging-server-978600578',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-device-tagging-server-978600578',id=114,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBA4Nu6vj+a9h6J+HIG3bY+gVQqZOzYK629F5aAnRyfcs57jY2yCirotfYPKEDEfhO5X4Qoxtk+llC6W6Xx8bCVjzTNv5PNuAwtpZd6jzHApTipRVoLWw741uAxIfd5PHrg==',key_name='tempest-keypair-661966322',keypairs=<?>,launch_index=0,launched_at=2025-12-06T07:30:46Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='1a5dd5f9ca5747618b386c87a40ede88',ramdisk_id='',reservation_id='r-5f0cqyp7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_
bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TaggedBootDevicesTest_v242-1145519801',owner_user_name='tempest-TaggedBootDevicesTest_v242-1145519801-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T07:30:46Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='8e5d532f4af34acc9e3a7a08819bb089',uuid=02b3a680-d1bd-462f-95dc-aa0934c7ceac,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "70cec09c-3d65-4e69-8c35-7d82e75f4add", "address": "fa:16:3e:89:a4:68", "network": {"id": "15e538c5-1144-486f-bc43-86871ca19e5e", "bridge": "br-int", "label": "tempest-TaggedBootDevicesTest_v242-1912448327-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.208", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a5dd5f9ca5747618b386c87a40ede88", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap70cec09c-3d", "ovs_interfaceid": "70cec09c-3d65-4e69-8c35-7d82e75f4add", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec  6 02:31:59 np0005548731 nova_compute[232433]: 2025-12-06 07:31:59.351 232437 DEBUG nova.network.os_vif_util [None req-d5dc85e5-23c1-453a-b8a7-ee96a92496a0 8e5d532f4af34acc9e3a7a08819bb089 1a5dd5f9ca5747618b386c87a40ede88 - - default default] Converting VIF {"id": "70cec09c-3d65-4e69-8c35-7d82e75f4add", "address": "fa:16:3e:89:a4:68", "network": {"id": "15e538c5-1144-486f-bc43-86871ca19e5e", "bridge": "br-int", "label": "tempest-TaggedBootDevicesTest_v242-1912448327-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.208", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a5dd5f9ca5747618b386c87a40ede88", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap70cec09c-3d", "ovs_interfaceid": "70cec09c-3d65-4e69-8c35-7d82e75f4add", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 02:31:59 np0005548731 nova_compute[232433]: 2025-12-06 07:31:59.352 232437 DEBUG nova.network.os_vif_util [None req-d5dc85e5-23c1-453a-b8a7-ee96a92496a0 8e5d532f4af34acc9e3a7a08819bb089 1a5dd5f9ca5747618b386c87a40ede88 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:89:a4:68,bridge_name='br-int',has_traffic_filtering=True,id=70cec09c-3d65-4e69-8c35-7d82e75f4add,network=Network(15e538c5-1144-486f-bc43-86871ca19e5e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap70cec09c-3d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 02:31:59 np0005548731 nova_compute[232433]: 2025-12-06 07:31:59.352 232437 DEBUG os_vif [None req-d5dc85e5-23c1-453a-b8a7-ee96a92496a0 8e5d532f4af34acc9e3a7a08819bb089 1a5dd5f9ca5747618b386c87a40ede88 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:89:a4:68,bridge_name='br-int',has_traffic_filtering=True,id=70cec09c-3d65-4e69-8c35-7d82e75f4add,network=Network(15e538c5-1144-486f-bc43-86871ca19e5e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap70cec09c-3d') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec  6 02:31:59 np0005548731 nova_compute[232433]: 2025-12-06 07:31:59.354 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:31:59 np0005548731 nova_compute[232433]: 2025-12-06 07:31:59.354 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap70cec09c-3d, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:31:59 np0005548731 nova_compute[232433]: 2025-12-06 07:31:59.356 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:31:59 np0005548731 nova_compute[232433]: 2025-12-06 07:31:59.359 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  6 02:31:59 np0005548731 nova_compute[232433]: 2025-12-06 07:31:59.364 232437 DEBUG nova.compute.manager [req-a0467000-2b8a-4c80-acb1-5ec0f19c50b1 req-8c388cd2-5953-4685-84ac-fe6ae0059949 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 02b3a680-d1bd-462f-95dc-aa0934c7ceac] Received event network-vif-unplugged-99fbd66e-87ac-42e3-8722-34d617ef9bb1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:31:59 np0005548731 nova_compute[232433]: 2025-12-06 07:31:59.365 232437 DEBUG oslo_concurrency.lockutils [req-a0467000-2b8a-4c80-acb1-5ec0f19c50b1 req-8c388cd2-5953-4685-84ac-fe6ae0059949 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "02b3a680-d1bd-462f-95dc-aa0934c7ceac-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:31:59 np0005548731 nova_compute[232433]: 2025-12-06 07:31:59.365 232437 DEBUG oslo_concurrency.lockutils [req-a0467000-2b8a-4c80-acb1-5ec0f19c50b1 req-8c388cd2-5953-4685-84ac-fe6ae0059949 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "02b3a680-d1bd-462f-95dc-aa0934c7ceac-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:31:59 np0005548731 nova_compute[232433]: 2025-12-06 07:31:59.365 232437 DEBUG oslo_concurrency.lockutils [req-a0467000-2b8a-4c80-acb1-5ec0f19c50b1 req-8c388cd2-5953-4685-84ac-fe6ae0059949 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "02b3a680-d1bd-462f-95dc-aa0934c7ceac-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:31:59 np0005548731 nova_compute[232433]: 2025-12-06 07:31:59.365 232437 DEBUG nova.compute.manager [req-a0467000-2b8a-4c80-acb1-5ec0f19c50b1 req-8c388cd2-5953-4685-84ac-fe6ae0059949 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 02b3a680-d1bd-462f-95dc-aa0934c7ceac] No waiting events found dispatching network-vif-unplugged-99fbd66e-87ac-42e3-8722-34d617ef9bb1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 02:31:59 np0005548731 nova_compute[232433]: 2025-12-06 07:31:59.366 232437 DEBUG nova.compute.manager [req-a0467000-2b8a-4c80-acb1-5ec0f19c50b1 req-8c388cd2-5953-4685-84ac-fe6ae0059949 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 02b3a680-d1bd-462f-95dc-aa0934c7ceac] Received event network-vif-unplugged-99fbd66e-87ac-42e3-8722-34d617ef9bb1 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Dec  6 02:31:59 np0005548731 nova_compute[232433]: 2025-12-06 07:31:59.370 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:31:59 np0005548731 nova_compute[232433]: 2025-12-06 07:31:59.373 232437 INFO os_vif [None req-d5dc85e5-23c1-453a-b8a7-ee96a92496a0 8e5d532f4af34acc9e3a7a08819bb089 1a5dd5f9ca5747618b386c87a40ede88 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:89:a4:68,bridge_name='br-int',has_traffic_filtering=True,id=70cec09c-3d65-4e69-8c35-7d82e75f4add,network=Network(15e538c5-1144-486f-bc43-86871ca19e5e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap70cec09c-3d')#033[00m
Dec  6 02:31:59 np0005548731 nova_compute[232433]: 2025-12-06 07:31:59.374 232437 DEBUG nova.virt.libvirt.vif [None req-d5dc85e5-23c1-453a-b8a7-ee96a92496a0 8e5d532f4af34acc9e3a7a08819bb089 1a5dd5f9ca5747618b386c87a40ede88 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T07:30:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-device-tagging-server-978600578',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-device-tagging-server-978600578',id=114,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBA4Nu6vj+a9h6J+HIG3bY+gVQqZOzYK629F5aAnRyfcs57jY2yCirotfYPKEDEfhO5X4Qoxtk+llC6W6Xx8bCVjzTNv5PNuAwtpZd6jzHApTipRVoLWw741uAxIfd5PHrg==',key_name='tempest-keypair-661966322',keypairs=<?>,launch_index=0,launched_at=2025-12-06T07:30:46Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='1a5dd5f9ca5747618b386c87a40ede88',ramdisk_id='',reservation_id='r-5f0cqyp7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_
bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TaggedBootDevicesTest_v242-1145519801',owner_user_name='tempest-TaggedBootDevicesTest_v242-1145519801-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T07:30:46Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='8e5d532f4af34acc9e3a7a08819bb089',uuid=02b3a680-d1bd-462f-95dc-aa0934c7ceac,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "99fbd66e-87ac-42e3-8722-34d617ef9bb1", "address": "fa:16:3e:c3:05:bf", "network": {"id": "95ae8bf7-e7e5-4bbf-a4f2-58f1e2b487a5", "bridge": "br-int", "label": "tempest-device-tagging-net1-103242350", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.137", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a5dd5f9ca5747618b386c87a40ede88", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap99fbd66e-87", "ovs_interfaceid": "99fbd66e-87ac-42e3-8722-34d617ef9bb1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec  6 02:31:59 np0005548731 nova_compute[232433]: 2025-12-06 07:31:59.374 232437 DEBUG nova.network.os_vif_util [None req-d5dc85e5-23c1-453a-b8a7-ee96a92496a0 8e5d532f4af34acc9e3a7a08819bb089 1a5dd5f9ca5747618b386c87a40ede88 - - default default] Converting VIF {"id": "99fbd66e-87ac-42e3-8722-34d617ef9bb1", "address": "fa:16:3e:c3:05:bf", "network": {"id": "95ae8bf7-e7e5-4bbf-a4f2-58f1e2b487a5", "bridge": "br-int", "label": "tempest-device-tagging-net1-103242350", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.137", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a5dd5f9ca5747618b386c87a40ede88", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap99fbd66e-87", "ovs_interfaceid": "99fbd66e-87ac-42e3-8722-34d617ef9bb1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 02:31:59 np0005548731 nova_compute[232433]: 2025-12-06 07:31:59.375 232437 DEBUG nova.network.os_vif_util [None req-d5dc85e5-23c1-453a-b8a7-ee96a92496a0 8e5d532f4af34acc9e3a7a08819bb089 1a5dd5f9ca5747618b386c87a40ede88 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:c3:05:bf,bridge_name='br-int',has_traffic_filtering=True,id=99fbd66e-87ac-42e3-8722-34d617ef9bb1,network=Network(95ae8bf7-e7e5-4bbf-a4f2-58f1e2b487a5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap99fbd66e-87') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 02:31:59 np0005548731 nova_compute[232433]: 2025-12-06 07:31:59.375 232437 DEBUG os_vif [None req-d5dc85e5-23c1-453a-b8a7-ee96a92496a0 8e5d532f4af34acc9e3a7a08819bb089 1a5dd5f9ca5747618b386c87a40ede88 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:c3:05:bf,bridge_name='br-int',has_traffic_filtering=True,id=99fbd66e-87ac-42e3-8722-34d617ef9bb1,network=Network(95ae8bf7-e7e5-4bbf-a4f2-58f1e2b487a5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap99fbd66e-87') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec  6 02:31:59 np0005548731 nova_compute[232433]: 2025-12-06 07:31:59.376 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:31:59 np0005548731 nova_compute[232433]: 2025-12-06 07:31:59.377 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap99fbd66e-87, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:31:59 np0005548731 nova_compute[232433]: 2025-12-06 07:31:59.378 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:31:59 np0005548731 nova_compute[232433]: 2025-12-06 07:31:59.380 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  6 02:31:59 np0005548731 nova_compute[232433]: 2025-12-06 07:31:59.389 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:31:59 np0005548731 nova_compute[232433]: 2025-12-06 07:31:59.391 232437 INFO os_vif [None req-d5dc85e5-23c1-453a-b8a7-ee96a92496a0 8e5d532f4af34acc9e3a7a08819bb089 1a5dd5f9ca5747618b386c87a40ede88 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:c3:05:bf,bridge_name='br-int',has_traffic_filtering=True,id=99fbd66e-87ac-42e3-8722-34d617ef9bb1,network=Network(95ae8bf7-e7e5-4bbf-a4f2-58f1e2b487a5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap99fbd66e-87')#033[00m
Dec  6 02:31:59 np0005548731 nova_compute[232433]: 2025-12-06 07:31:59.392 232437 DEBUG nova.virt.libvirt.vif [None req-d5dc85e5-23c1-453a-b8a7-ee96a92496a0 8e5d532f4af34acc9e3a7a08819bb089 1a5dd5f9ca5747618b386c87a40ede88 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T07:30:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-device-tagging-server-978600578',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-device-tagging-server-978600578',id=114,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBA4Nu6vj+a9h6J+HIG3bY+gVQqZOzYK629F5aAnRyfcs57jY2yCirotfYPKEDEfhO5X4Qoxtk+llC6W6Xx8bCVjzTNv5PNuAwtpZd6jzHApTipRVoLWw741uAxIfd5PHrg==',key_name='tempest-keypair-661966322',keypairs=<?>,launch_index=0,launched_at=2025-12-06T07:30:46Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='1a5dd5f9ca5747618b386c87a40ede88',ramdisk_id='',reservation_id='r-5f0cqyp7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_
bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TaggedBootDevicesTest_v242-1145519801',owner_user_name='tempest-TaggedBootDevicesTest_v242-1145519801-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T07:30:46Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='8e5d532f4af34acc9e3a7a08819bb089',uuid=02b3a680-d1bd-462f-95dc-aa0934c7ceac,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "7989285b-96a5-43e8-b98e-c0fe1badbf5e", "address": "fa:16:3e:e2:25:4d", "network": {"id": "95ae8bf7-e7e5-4bbf-a4f2-58f1e2b487a5", "bridge": "br-int", "label": "tempest-device-tagging-net1-103242350", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.231", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a5dd5f9ca5747618b386c87a40ede88", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7989285b-96", "ovs_interfaceid": "7989285b-96a5-43e8-b98e-c0fe1badbf5e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec  6 02:31:59 np0005548731 nova_compute[232433]: 2025-12-06 07:31:59.392 232437 DEBUG nova.network.os_vif_util [None req-d5dc85e5-23c1-453a-b8a7-ee96a92496a0 8e5d532f4af34acc9e3a7a08819bb089 1a5dd5f9ca5747618b386c87a40ede88 - - default default] Converting VIF {"id": "7989285b-96a5-43e8-b98e-c0fe1badbf5e", "address": "fa:16:3e:e2:25:4d", "network": {"id": "95ae8bf7-e7e5-4bbf-a4f2-58f1e2b487a5", "bridge": "br-int", "label": "tempest-device-tagging-net1-103242350", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.231", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a5dd5f9ca5747618b386c87a40ede88", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7989285b-96", "ovs_interfaceid": "7989285b-96a5-43e8-b98e-c0fe1badbf5e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 02:31:59 np0005548731 nova_compute[232433]: 2025-12-06 07:31:59.393 232437 DEBUG nova.network.os_vif_util [None req-d5dc85e5-23c1-453a-b8a7-ee96a92496a0 8e5d532f4af34acc9e3a7a08819bb089 1a5dd5f9ca5747618b386c87a40ede88 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:e2:25:4d,bridge_name='br-int',has_traffic_filtering=True,id=7989285b-96a5-43e8-b98e-c0fe1badbf5e,network=Network(95ae8bf7-e7e5-4bbf-a4f2-58f1e2b487a5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap7989285b-96') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 02:31:59 np0005548731 neutron-haproxy-ovnmeta-95ae8bf7-e7e5-4bbf-a4f2-58f1e2b487a5[280186]: [NOTICE]   (280190) : haproxy version is 2.8.14-c23fe91
Dec  6 02:31:59 np0005548731 neutron-haproxy-ovnmeta-95ae8bf7-e7e5-4bbf-a4f2-58f1e2b487a5[280186]: [NOTICE]   (280190) : path to executable is /usr/sbin/haproxy
Dec  6 02:31:59 np0005548731 neutron-haproxy-ovnmeta-95ae8bf7-e7e5-4bbf-a4f2-58f1e2b487a5[280186]: [WARNING]  (280190) : Exiting Master process...
Dec  6 02:31:59 np0005548731 neutron-haproxy-ovnmeta-95ae8bf7-e7e5-4bbf-a4f2-58f1e2b487a5[280186]: [WARNING]  (280190) : Exiting Master process...
Dec  6 02:31:59 np0005548731 nova_compute[232433]: 2025-12-06 07:31:59.394 232437 DEBUG os_vif [None req-d5dc85e5-23c1-453a-b8a7-ee96a92496a0 8e5d532f4af34acc9e3a7a08819bb089 1a5dd5f9ca5747618b386c87a40ede88 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:e2:25:4d,bridge_name='br-int',has_traffic_filtering=True,id=7989285b-96a5-43e8-b98e-c0fe1badbf5e,network=Network(95ae8bf7-e7e5-4bbf-a4f2-58f1e2b487a5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap7989285b-96') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec  6 02:31:59 np0005548731 neutron-haproxy-ovnmeta-95ae8bf7-e7e5-4bbf-a4f2-58f1e2b487a5[280186]: [ALERT]    (280190) : Current worker (280192) exited with code 143 (Terminated)
Dec  6 02:31:59 np0005548731 neutron-haproxy-ovnmeta-95ae8bf7-e7e5-4bbf-a4f2-58f1e2b487a5[280186]: [WARNING]  (280190) : All workers exited. Exiting... (0)
Dec  6 02:31:59 np0005548731 nova_compute[232433]: 2025-12-06 07:31:59.396 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:31:59 np0005548731 nova_compute[232433]: 2025-12-06 07:31:59.396 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7989285b-96, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:31:59 np0005548731 systemd[1]: libpod-c84c8cddff60c5acc43e37b1b4068bdd9a2149ce96e0addfc8e82b6fb1d19415.scope: Deactivated successfully.
Dec  6 02:31:59 np0005548731 nova_compute[232433]: 2025-12-06 07:31:59.398 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:31:59 np0005548731 nova_compute[232433]: 2025-12-06 07:31:59.400 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  6 02:31:59 np0005548731 podman[281104]: 2025-12-06 07:31:59.404838212 +0000 UTC m=+0.042427176 container died c84c8cddff60c5acc43e37b1b4068bdd9a2149ce96e0addfc8e82b6fb1d19415 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-95ae8bf7-e7e5-4bbf-a4f2-58f1e2b487a5, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS)
Dec  6 02:31:59 np0005548731 nova_compute[232433]: 2025-12-06 07:31:59.409 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:31:59 np0005548731 nova_compute[232433]: 2025-12-06 07:31:59.411 232437 INFO os_vif [None req-d5dc85e5-23c1-453a-b8a7-ee96a92496a0 8e5d532f4af34acc9e3a7a08819bb089 1a5dd5f9ca5747618b386c87a40ede88 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:e2:25:4d,bridge_name='br-int',has_traffic_filtering=True,id=7989285b-96a5-43e8-b98e-c0fe1badbf5e,network=Network(95ae8bf7-e7e5-4bbf-a4f2-58f1e2b487a5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap7989285b-96')#033[00m
Dec  6 02:31:59 np0005548731 nova_compute[232433]: 2025-12-06 07:31:59.412 232437 DEBUG nova.virt.libvirt.vif [None req-d5dc85e5-23c1-453a-b8a7-ee96a92496a0 8e5d532f4af34acc9e3a7a08819bb089 1a5dd5f9ca5747618b386c87a40ede88 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T07:30:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-device-tagging-server-978600578',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-device-tagging-server-978600578',id=114,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBA4Nu6vj+a9h6J+HIG3bY+gVQqZOzYK629F5aAnRyfcs57jY2yCirotfYPKEDEfhO5X4Qoxtk+llC6W6Xx8bCVjzTNv5PNuAwtpZd6jzHApTipRVoLWw741uAxIfd5PHrg==',key_name='tempest-keypair-661966322',keypairs=<?>,launch_index=0,launched_at=2025-12-06T07:30:46Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='1a5dd5f9ca5747618b386c87a40ede88',ramdisk_id='',reservation_id='r-5f0cqyp7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_
bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TaggedBootDevicesTest_v242-1145519801',owner_user_name='tempest-TaggedBootDevicesTest_v242-1145519801-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T07:30:46Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='8e5d532f4af34acc9e3a7a08819bb089',uuid=02b3a680-d1bd-462f-95dc-aa0934c7ceac,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "ee2de0a7-a321-4f41-868f-457ea7275ded", "address": "fa:16:3e:62:49:cc", "network": {"id": "95ae8bf7-e7e5-4bbf-a4f2-58f1e2b487a5", "bridge": "br-int", "label": "tempest-device-tagging-net1-103242350", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.249", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a5dd5f9ca5747618b386c87a40ede88", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapee2de0a7-a3", "ovs_interfaceid": "ee2de0a7-a321-4f41-868f-457ea7275ded", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec  6 02:31:59 np0005548731 nova_compute[232433]: 2025-12-06 07:31:59.412 232437 DEBUG nova.network.os_vif_util [None req-d5dc85e5-23c1-453a-b8a7-ee96a92496a0 8e5d532f4af34acc9e3a7a08819bb089 1a5dd5f9ca5747618b386c87a40ede88 - - default default] Converting VIF {"id": "ee2de0a7-a321-4f41-868f-457ea7275ded", "address": "fa:16:3e:62:49:cc", "network": {"id": "95ae8bf7-e7e5-4bbf-a4f2-58f1e2b487a5", "bridge": "br-int", "label": "tempest-device-tagging-net1-103242350", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.249", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a5dd5f9ca5747618b386c87a40ede88", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapee2de0a7-a3", "ovs_interfaceid": "ee2de0a7-a321-4f41-868f-457ea7275ded", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 02:31:59 np0005548731 nova_compute[232433]: 2025-12-06 07:31:59.413 232437 DEBUG nova.network.os_vif_util [None req-d5dc85e5-23c1-453a-b8a7-ee96a92496a0 8e5d532f4af34acc9e3a7a08819bb089 1a5dd5f9ca5747618b386c87a40ede88 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:62:49:cc,bridge_name='br-int',has_traffic_filtering=True,id=ee2de0a7-a321-4f41-868f-457ea7275ded,network=Network(95ae8bf7-e7e5-4bbf-a4f2-58f1e2b487a5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapee2de0a7-a3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 02:31:59 np0005548731 nova_compute[232433]: 2025-12-06 07:31:59.414 232437 DEBUG os_vif [None req-d5dc85e5-23c1-453a-b8a7-ee96a92496a0 8e5d532f4af34acc9e3a7a08819bb089 1a5dd5f9ca5747618b386c87a40ede88 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:62:49:cc,bridge_name='br-int',has_traffic_filtering=True,id=ee2de0a7-a321-4f41-868f-457ea7275ded,network=Network(95ae8bf7-e7e5-4bbf-a4f2-58f1e2b487a5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapee2de0a7-a3') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec  6 02:31:59 np0005548731 nova_compute[232433]: 2025-12-06 07:31:59.415 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:31:59 np0005548731 nova_compute[232433]: 2025-12-06 07:31:59.416 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapee2de0a7-a3, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:31:59 np0005548731 nova_compute[232433]: 2025-12-06 07:31:59.416 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:31:59 np0005548731 nova_compute[232433]: 2025-12-06 07:31:59.418 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  6 02:31:59 np0005548731 nova_compute[232433]: 2025-12-06 07:31:59.424 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:31:59 np0005548731 nova_compute[232433]: 2025-12-06 07:31:59.426 232437 INFO os_vif [None req-d5dc85e5-23c1-453a-b8a7-ee96a92496a0 8e5d532f4af34acc9e3a7a08819bb089 1a5dd5f9ca5747618b386c87a40ede88 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:62:49:cc,bridge_name='br-int',has_traffic_filtering=True,id=ee2de0a7-a321-4f41-868f-457ea7275ded,network=Network(95ae8bf7-e7e5-4bbf-a4f2-58f1e2b487a5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapee2de0a7-a3')#033[00m
Dec  6 02:31:59 np0005548731 nova_compute[232433]: 2025-12-06 07:31:59.427 232437 DEBUG nova.virt.libvirt.vif [None req-d5dc85e5-23c1-453a-b8a7-ee96a92496a0 8e5d532f4af34acc9e3a7a08819bb089 1a5dd5f9ca5747618b386c87a40ede88 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T07:30:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-device-tagging-server-978600578',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-device-tagging-server-978600578',id=114,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBA4Nu6vj+a9h6J+HIG3bY+gVQqZOzYK629F5aAnRyfcs57jY2yCirotfYPKEDEfhO5X4Qoxtk+llC6W6Xx8bCVjzTNv5PNuAwtpZd6jzHApTipRVoLWw741uAxIfd5PHrg==',key_name='tempest-keypair-661966322',keypairs=<?>,launch_index=0,launched_at=2025-12-06T07:30:46Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='1a5dd5f9ca5747618b386c87a40ede88',ramdisk_id='',reservation_id='r-5f0cqyp7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_
bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TaggedBootDevicesTest_v242-1145519801',owner_user_name='tempest-TaggedBootDevicesTest_v242-1145519801-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T07:30:46Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='8e5d532f4af34acc9e3a7a08819bb089',uuid=02b3a680-d1bd-462f-95dc-aa0934c7ceac,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "7aeffb1d-aa58-4c79-9768-bcd0a7dcce43", "address": "fa:16:3e:45:a8:03", "network": {"id": "95ae8bf7-e7e5-4bbf-a4f2-58f1e2b487a5", "bridge": "br-int", "label": "tempest-device-tagging-net1-103242350", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.213", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a5dd5f9ca5747618b386c87a40ede88", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7aeffb1d-aa", "ovs_interfaceid": "7aeffb1d-aa58-4c79-9768-bcd0a7dcce43", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec  6 02:31:59 np0005548731 nova_compute[232433]: 2025-12-06 07:31:59.427 232437 DEBUG nova.network.os_vif_util [None req-d5dc85e5-23c1-453a-b8a7-ee96a92496a0 8e5d532f4af34acc9e3a7a08819bb089 1a5dd5f9ca5747618b386c87a40ede88 - - default default] Converting VIF {"id": "7aeffb1d-aa58-4c79-9768-bcd0a7dcce43", "address": "fa:16:3e:45:a8:03", "network": {"id": "95ae8bf7-e7e5-4bbf-a4f2-58f1e2b487a5", "bridge": "br-int", "label": "tempest-device-tagging-net1-103242350", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.213", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a5dd5f9ca5747618b386c87a40ede88", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7aeffb1d-aa", "ovs_interfaceid": "7aeffb1d-aa58-4c79-9768-bcd0a7dcce43", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 02:31:59 np0005548731 nova_compute[232433]: 2025-12-06 07:31:59.428 232437 DEBUG nova.network.os_vif_util [None req-d5dc85e5-23c1-453a-b8a7-ee96a92496a0 8e5d532f4af34acc9e3a7a08819bb089 1a5dd5f9ca5747618b386c87a40ede88 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:45:a8:03,bridge_name='br-int',has_traffic_filtering=True,id=7aeffb1d-aa58-4c79-9768-bcd0a7dcce43,network=Network(95ae8bf7-e7e5-4bbf-a4f2-58f1e2b487a5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7aeffb1d-aa') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 02:31:59 np0005548731 nova_compute[232433]: 2025-12-06 07:31:59.428 232437 DEBUG os_vif [None req-d5dc85e5-23c1-453a-b8a7-ee96a92496a0 8e5d532f4af34acc9e3a7a08819bb089 1a5dd5f9ca5747618b386c87a40ede88 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:45:a8:03,bridge_name='br-int',has_traffic_filtering=True,id=7aeffb1d-aa58-4c79-9768-bcd0a7dcce43,network=Network(95ae8bf7-e7e5-4bbf-a4f2-58f1e2b487a5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7aeffb1d-aa') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec  6 02:31:59 np0005548731 nova_compute[232433]: 2025-12-06 07:31:59.429 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:31:59 np0005548731 nova_compute[232433]: 2025-12-06 07:31:59.430 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7aeffb1d-aa, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:31:59 np0005548731 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-c84c8cddff60c5acc43e37b1b4068bdd9a2149ce96e0addfc8e82b6fb1d19415-userdata-shm.mount: Deactivated successfully.
Dec  6 02:31:59 np0005548731 nova_compute[232433]: 2025-12-06 07:31:59.431 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:31:59 np0005548731 nova_compute[232433]: 2025-12-06 07:31:59.433 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  6 02:31:59 np0005548731 systemd[1]: var-lib-containers-storage-overlay-f056ec2555c7c24796754ecae4c3892c2a14bc1c0d7e6e7abd3efcdca79656a5-merged.mount: Deactivated successfully.
Dec  6 02:31:59 np0005548731 nova_compute[232433]: 2025-12-06 07:31:59.439 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:31:59 np0005548731 nova_compute[232433]: 2025-12-06 07:31:59.441 232437 INFO os_vif [None req-d5dc85e5-23c1-453a-b8a7-ee96a92496a0 8e5d532f4af34acc9e3a7a08819bb089 1a5dd5f9ca5747618b386c87a40ede88 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:45:a8:03,bridge_name='br-int',has_traffic_filtering=True,id=7aeffb1d-aa58-4c79-9768-bcd0a7dcce43,network=Network(95ae8bf7-e7e5-4bbf-a4f2-58f1e2b487a5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7aeffb1d-aa')#033[00m
Dec  6 02:31:59 np0005548731 podman[281104]: 2025-12-06 07:31:59.442244785 +0000 UTC m=+0.079833749 container cleanup c84c8cddff60c5acc43e37b1b4068bdd9a2149ce96e0addfc8e82b6fb1d19415 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-95ae8bf7-e7e5-4bbf-a4f2-58f1e2b487a5, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251125)
Dec  6 02:31:59 np0005548731 nova_compute[232433]: 2025-12-06 07:31:59.442 232437 DEBUG nova.virt.libvirt.vif [None req-d5dc85e5-23c1-453a-b8a7-ee96a92496a0 8e5d532f4af34acc9e3a7a08819bb089 1a5dd5f9ca5747618b386c87a40ede88 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T07:30:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-device-tagging-server-978600578',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-device-tagging-server-978600578',id=114,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBA4Nu6vj+a9h6J+HIG3bY+gVQqZOzYK629F5aAnRyfcs57jY2yCirotfYPKEDEfhO5X4Qoxtk+llC6W6Xx8bCVjzTNv5PNuAwtpZd6jzHApTipRVoLWw741uAxIfd5PHrg==',key_name='tempest-keypair-661966322',keypairs=<?>,launch_index=0,launched_at=2025-12-06T07:30:46Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='1a5dd5f9ca5747618b386c87a40ede88',ramdisk_id='',reservation_id='r-5f0cqyp7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_
bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TaggedBootDevicesTest_v242-1145519801',owner_user_name='tempest-TaggedBootDevicesTest_v242-1145519801-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T07:30:46Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='8e5d532f4af34acc9e3a7a08819bb089',uuid=02b3a680-d1bd-462f-95dc-aa0934c7ceac,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "691d36f1-88c4-4e29-b5a7-059d5b66aee3", "address": "fa:16:3e:b2:aa:a0", "network": {"id": "8f455f90-843e-4bc7-94c8-f96455201770", "bridge": "br-int", "label": "tempest-device-tagging-net2-1809570820", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.2.2.100", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a5dd5f9ca5747618b386c87a40ede88", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap691d36f1-88", "ovs_interfaceid": "691d36f1-88c4-4e29-b5a7-059d5b66aee3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec  6 02:31:59 np0005548731 nova_compute[232433]: 2025-12-06 07:31:59.443 232437 DEBUG nova.network.os_vif_util [None req-d5dc85e5-23c1-453a-b8a7-ee96a92496a0 8e5d532f4af34acc9e3a7a08819bb089 1a5dd5f9ca5747618b386c87a40ede88 - - default default] Converting VIF {"id": "691d36f1-88c4-4e29-b5a7-059d5b66aee3", "address": "fa:16:3e:b2:aa:a0", "network": {"id": "8f455f90-843e-4bc7-94c8-f96455201770", "bridge": "br-int", "label": "tempest-device-tagging-net2-1809570820", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.2.2.100", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a5dd5f9ca5747618b386c87a40ede88", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap691d36f1-88", "ovs_interfaceid": "691d36f1-88c4-4e29-b5a7-059d5b66aee3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 02:31:59 np0005548731 nova_compute[232433]: 2025-12-06 07:31:59.443 232437 DEBUG nova.network.os_vif_util [None req-d5dc85e5-23c1-453a-b8a7-ee96a92496a0 8e5d532f4af34acc9e3a7a08819bb089 1a5dd5f9ca5747618b386c87a40ede88 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:b2:aa:a0,bridge_name='br-int',has_traffic_filtering=True,id=691d36f1-88c4-4e29-b5a7-059d5b66aee3,network=Network(8f455f90-843e-4bc7-94c8-f96455201770),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap691d36f1-88') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 02:31:59 np0005548731 nova_compute[232433]: 2025-12-06 07:31:59.443 232437 DEBUG os_vif [None req-d5dc85e5-23c1-453a-b8a7-ee96a92496a0 8e5d532f4af34acc9e3a7a08819bb089 1a5dd5f9ca5747618b386c87a40ede88 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:b2:aa:a0,bridge_name='br-int',has_traffic_filtering=True,id=691d36f1-88c4-4e29-b5a7-059d5b66aee3,network=Network(8f455f90-843e-4bc7-94c8-f96455201770),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap691d36f1-88') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec  6 02:31:59 np0005548731 nova_compute[232433]: 2025-12-06 07:31:59.445 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:31:59 np0005548731 nova_compute[232433]: 2025-12-06 07:31:59.445 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap691d36f1-88, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:31:59 np0005548731 nova_compute[232433]: 2025-12-06 07:31:59.446 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:31:59 np0005548731 nova_compute[232433]: 2025-12-06 07:31:59.447 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  6 02:31:59 np0005548731 systemd[1]: libpod-conmon-c84c8cddff60c5acc43e37b1b4068bdd9a2149ce96e0addfc8e82b6fb1d19415.scope: Deactivated successfully.
Dec  6 02:31:59 np0005548731 nova_compute[232433]: 2025-12-06 07:31:59.450 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:31:59 np0005548731 nova_compute[232433]: 2025-12-06 07:31:59.451 232437 INFO os_vif [None req-d5dc85e5-23c1-453a-b8a7-ee96a92496a0 8e5d532f4af34acc9e3a7a08819bb089 1a5dd5f9ca5747618b386c87a40ede88 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:b2:aa:a0,bridge_name='br-int',has_traffic_filtering=True,id=691d36f1-88c4-4e29-b5a7-059d5b66aee3,network=Network(8f455f90-843e-4bc7-94c8-f96455201770),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap691d36f1-88')#033[00m
Dec  6 02:31:59 np0005548731 nova_compute[232433]: 2025-12-06 07:31:59.451 232437 DEBUG nova.virt.libvirt.vif [None req-d5dc85e5-23c1-453a-b8a7-ee96a92496a0 8e5d532f4af34acc9e3a7a08819bb089 1a5dd5f9ca5747618b386c87a40ede88 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T07:30:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-device-tagging-server-978600578',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-device-tagging-server-978600578',id=114,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBA4Nu6vj+a9h6J+HIG3bY+gVQqZOzYK629F5aAnRyfcs57jY2yCirotfYPKEDEfhO5X4Qoxtk+llC6W6Xx8bCVjzTNv5PNuAwtpZd6jzHApTipRVoLWw741uAxIfd5PHrg==',key_name='tempest-keypair-661966322',keypairs=<?>,launch_index=0,launched_at=2025-12-06T07:30:46Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='1a5dd5f9ca5747618b386c87a40ede88',ramdisk_id='',reservation_id='r-5f0cqyp7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_
bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TaggedBootDevicesTest_v242-1145519801',owner_user_name='tempest-TaggedBootDevicesTest_v242-1145519801-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T07:30:46Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='8e5d532f4af34acc9e3a7a08819bb089',uuid=02b3a680-d1bd-462f-95dc-aa0934c7ceac,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a935a2a5-757d-49f8-9924-1f5f12e03325", "address": "fa:16:3e:c4:17:f5", "network": {"id": "8f455f90-843e-4bc7-94c8-f96455201770", "bridge": "br-int", "label": "tempest-device-tagging-net2-1809570820", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.2.2.200", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a5dd5f9ca5747618b386c87a40ede88", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa935a2a5-75", "ovs_interfaceid": "a935a2a5-757d-49f8-9924-1f5f12e03325", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec  6 02:31:59 np0005548731 nova_compute[232433]: 2025-12-06 07:31:59.452 232437 DEBUG nova.network.os_vif_util [None req-d5dc85e5-23c1-453a-b8a7-ee96a92496a0 8e5d532f4af34acc9e3a7a08819bb089 1a5dd5f9ca5747618b386c87a40ede88 - - default default] Converting VIF {"id": "a935a2a5-757d-49f8-9924-1f5f12e03325", "address": "fa:16:3e:c4:17:f5", "network": {"id": "8f455f90-843e-4bc7-94c8-f96455201770", "bridge": "br-int", "label": "tempest-device-tagging-net2-1809570820", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.2.2.200", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a5dd5f9ca5747618b386c87a40ede88", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa935a2a5-75", "ovs_interfaceid": "a935a2a5-757d-49f8-9924-1f5f12e03325", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 02:31:59 np0005548731 nova_compute[232433]: 2025-12-06 07:31:59.452 232437 DEBUG nova.network.os_vif_util [None req-d5dc85e5-23c1-453a-b8a7-ee96a92496a0 8e5d532f4af34acc9e3a7a08819bb089 1a5dd5f9ca5747618b386c87a40ede88 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:c4:17:f5,bridge_name='br-int',has_traffic_filtering=True,id=a935a2a5-757d-49f8-9924-1f5f12e03325,network=Network(8f455f90-843e-4bc7-94c8-f96455201770),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa935a2a5-75') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 02:31:59 np0005548731 nova_compute[232433]: 2025-12-06 07:31:59.452 232437 DEBUG os_vif [None req-d5dc85e5-23c1-453a-b8a7-ee96a92496a0 8e5d532f4af34acc9e3a7a08819bb089 1a5dd5f9ca5747618b386c87a40ede88 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:c4:17:f5,bridge_name='br-int',has_traffic_filtering=True,id=a935a2a5-757d-49f8-9924-1f5f12e03325,network=Network(8f455f90-843e-4bc7-94c8-f96455201770),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa935a2a5-75') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec  6 02:31:59 np0005548731 nova_compute[232433]: 2025-12-06 07:31:59.453 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:31:59 np0005548731 nova_compute[232433]: 2025-12-06 07:31:59.453 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa935a2a5-75, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:31:59 np0005548731 nova_compute[232433]: 2025-12-06 07:31:59.454 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:31:59 np0005548731 nova_compute[232433]: 2025-12-06 07:31:59.455 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:31:59 np0005548731 nova_compute[232433]: 2025-12-06 07:31:59.456 232437 INFO os_vif [None req-d5dc85e5-23c1-453a-b8a7-ee96a92496a0 8e5d532f4af34acc9e3a7a08819bb089 1a5dd5f9ca5747618b386c87a40ede88 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:c4:17:f5,bridge_name='br-int',has_traffic_filtering=True,id=a935a2a5-757d-49f8-9924-1f5f12e03325,network=Network(8f455f90-843e-4bc7-94c8-f96455201770),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa935a2a5-75')#033[00m
Dec  6 02:31:59 np0005548731 nova_compute[232433]: 2025-12-06 07:31:59.503 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:31:59 np0005548731 podman[281156]: 2025-12-06 07:31:59.507790525 +0000 UTC m=+0.044273492 container remove c84c8cddff60c5acc43e37b1b4068bdd9a2149ce96e0addfc8e82b6fb1d19415 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-95ae8bf7-e7e5-4bbf-a4f2-58f1e2b487a5, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  6 02:31:59 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:31:59.513 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[5ae4bede-60d6-4e92-acd1-aa704bed8a55]: (4, ('Sat Dec  6 07:31:59 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-95ae8bf7-e7e5-4bbf-a4f2-58f1e2b487a5 (c84c8cddff60c5acc43e37b1b4068bdd9a2149ce96e0addfc8e82b6fb1d19415)\nc84c8cddff60c5acc43e37b1b4068bdd9a2149ce96e0addfc8e82b6fb1d19415\nSat Dec  6 07:31:59 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-95ae8bf7-e7e5-4bbf-a4f2-58f1e2b487a5 (c84c8cddff60c5acc43e37b1b4068bdd9a2149ce96e0addfc8e82b6fb1d19415)\nc84c8cddff60c5acc43e37b1b4068bdd9a2149ce96e0addfc8e82b6fb1d19415\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:31:59 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:31:59.514 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[28092ada-f3b5-416c-b5b7-734ab2f75cb9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:31:59 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:31:59.515 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap95ae8bf7-e0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:31:59 np0005548731 nova_compute[232433]: 2025-12-06 07:31:59.517 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:31:59 np0005548731 kernel: tap95ae8bf7-e0: left promiscuous mode
Dec  6 02:31:59 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:31:59 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:31:59 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:31:59.522 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:31:59 np0005548731 nova_compute[232433]: 2025-12-06 07:31:59.530 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:31:59 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:31:59.534 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[91eaa133-0ce8-48e2-806a-05e56dba65b6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:31:59 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:31:59.563 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[08b22ada-f019-48f9-97af-2b6e4cfd57c5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:31:59 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:31:59.564 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[e5299ade-e5a3-41a7-87e4-cd9b2f2cc603]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:31:59 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:31:59.581 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[19b68d34-64c2-41d9-a094-b555fd439262]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 650638, 'reachable_time': 22050, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 281189, 'error': None, 'target': 'ovnmeta-95ae8bf7-e7e5-4bbf-a4f2-58f1e2b487a5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:31:59 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:31:59.583 144079 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-95ae8bf7-e7e5-4bbf-a4f2-58f1e2b487a5 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Dec  6 02:31:59 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:31:59.583 144079 DEBUG oslo.privsep.daemon [-] privsep: reply[b4d6af73-f0b4-409f-9a86-d3d7c8583b61]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:31:59 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:31:59.584 143965 INFO neutron.agent.ovn.metadata.agent [-] Port 7989285b-96a5-43e8-b98e-c0fe1badbf5e in datapath 95ae8bf7-e7e5-4bbf-a4f2-58f1e2b487a5 unbound from our chassis#033[00m
Dec  6 02:31:59 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:31:59.585 143965 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 95ae8bf7-e7e5-4bbf-a4f2-58f1e2b487a5, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec  6 02:31:59 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:31:59.586 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[424a3a01-35cb-43a6-be65-ee3cd321629a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:31:59 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:31:59.586 143965 INFO neutron.agent.ovn.metadata.agent [-] Port ee2de0a7-a321-4f41-868f-457ea7275ded in datapath 95ae8bf7-e7e5-4bbf-a4f2-58f1e2b487a5 unbound from our chassis#033[00m
Dec  6 02:31:59 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:31:59.587 143965 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 95ae8bf7-e7e5-4bbf-a4f2-58f1e2b487a5, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec  6 02:31:59 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:31:59.588 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[b5d10bb2-2998-4144-a2cd-51760362a6ac]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:31:59 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:31:59.589 143965 INFO neutron.agent.ovn.metadata.agent [-] Port 7aeffb1d-aa58-4c79-9768-bcd0a7dcce43 in datapath 95ae8bf7-e7e5-4bbf-a4f2-58f1e2b487a5 unbound from our chassis#033[00m
Dec  6 02:31:59 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:31:59.590 143965 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 95ae8bf7-e7e5-4bbf-a4f2-58f1e2b487a5, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec  6 02:31:59 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:31:59.590 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[32ee280b-6dec-41b8-9304-a71040c85474]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:31:59 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:31:59.591 143965 INFO neutron.agent.ovn.metadata.agent [-] Port 691d36f1-88c4-4e29-b5a7-059d5b66aee3 in datapath 8f455f90-843e-4bc7-94c8-f96455201770 unbound from our chassis#033[00m
Dec  6 02:31:59 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:31:59.592 143965 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 8f455f90-843e-4bc7-94c8-f96455201770, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec  6 02:31:59 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:31:59.592 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[e988e12a-5445-476a-9dd8-357c24fd2c31]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:31:59 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:31:59.592 143965 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-8f455f90-843e-4bc7-94c8-f96455201770 namespace which is not needed anymore#033[00m
Dec  6 02:31:59 np0005548731 neutron-haproxy-ovnmeta-8f455f90-843e-4bc7-94c8-f96455201770[280111]: [NOTICE]   (280116) : haproxy version is 2.8.14-c23fe91
Dec  6 02:31:59 np0005548731 neutron-haproxy-ovnmeta-8f455f90-843e-4bc7-94c8-f96455201770[280111]: [NOTICE]   (280116) : path to executable is /usr/sbin/haproxy
Dec  6 02:31:59 np0005548731 neutron-haproxy-ovnmeta-8f455f90-843e-4bc7-94c8-f96455201770[280111]: [WARNING]  (280116) : Exiting Master process...
Dec  6 02:31:59 np0005548731 neutron-haproxy-ovnmeta-8f455f90-843e-4bc7-94c8-f96455201770[280111]: [ALERT]    (280116) : Current worker (280118) exited with code 143 (Terminated)
Dec  6 02:31:59 np0005548731 neutron-haproxy-ovnmeta-8f455f90-843e-4bc7-94c8-f96455201770[280111]: [WARNING]  (280116) : All workers exited. Exiting... (0)
Dec  6 02:31:59 np0005548731 systemd[1]: libpod-321ae91615c42277cc5c7d6e97fc12f1c9c30472fc356a07e13a84a982a3bbfd.scope: Deactivated successfully.
Dec  6 02:31:59 np0005548731 podman[281207]: 2025-12-06 07:31:59.738958187 +0000 UTC m=+0.059058582 container died 321ae91615c42277cc5c7d6e97fc12f1c9c30472fc356a07e13a84a982a3bbfd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8f455f90-843e-4bc7-94c8-f96455201770, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  6 02:31:59 np0005548731 podman[281207]: 2025-12-06 07:31:59.766161521 +0000 UTC m=+0.086261906 container cleanup 321ae91615c42277cc5c7d6e97fc12f1c9c30472fc356a07e13a84a982a3bbfd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8f455f90-843e-4bc7-94c8-f96455201770, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Dec  6 02:31:59 np0005548731 systemd[1]: libpod-conmon-321ae91615c42277cc5c7d6e97fc12f1c9c30472fc356a07e13a84a982a3bbfd.scope: Deactivated successfully.
Dec  6 02:31:59 np0005548731 podman[281230]: 2025-12-06 07:31:59.83003891 +0000 UTC m=+0.045245355 container remove 321ae91615c42277cc5c7d6e97fc12f1c9c30472fc356a07e13a84a982a3bbfd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8f455f90-843e-4bc7-94c8-f96455201770, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125)
Dec  6 02:31:59 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:31:59.836 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[5148a4e1-acda-41f1-8fa1-69d5fafe4d21]: (4, ('Sat Dec  6 07:31:59 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-8f455f90-843e-4bc7-94c8-f96455201770 (321ae91615c42277cc5c7d6e97fc12f1c9c30472fc356a07e13a84a982a3bbfd)\n321ae91615c42277cc5c7d6e97fc12f1c9c30472fc356a07e13a84a982a3bbfd\nSat Dec  6 07:31:59 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-8f455f90-843e-4bc7-94c8-f96455201770 (321ae91615c42277cc5c7d6e97fc12f1c9c30472fc356a07e13a84a982a3bbfd)\n321ae91615c42277cc5c7d6e97fc12f1c9c30472fc356a07e13a84a982a3bbfd\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:31:59 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:31:59.838 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[b9f06b88-d6cd-4832-b28b-ee1cadde5acb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:31:59 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:31:59.838 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8f455f90-80, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:31:59 np0005548731 nova_compute[232433]: 2025-12-06 07:31:59.840 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:31:59 np0005548731 kernel: tap8f455f90-80: left promiscuous mode
Dec  6 02:31:59 np0005548731 nova_compute[232433]: 2025-12-06 07:31:59.853 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:31:59 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:31:59.856 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[37303662-354b-4909-aaa6-5f2f5ca0c21e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:31:59 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:31:59.871 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[3cb92f0e-e813-4baf-83d6-98faed816276]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:31:59 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:31:59.873 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[abdb89cb-85f5-4319-87cf-4f53624b3558]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:31:59 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:31:59.886 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[6a5990d7-c358-41df-b60e-65c194287ccd]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 650563, 'reachable_time': 32162, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 281251, 'error': None, 'target': 'ovnmeta-8f455f90-843e-4bc7-94c8-f96455201770', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:31:59 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:31:59.888 144079 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-8f455f90-843e-4bc7-94c8-f96455201770 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Dec  6 02:31:59 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:31:59.888 144079 DEBUG oslo.privsep.daemon [-] privsep: reply[c756647e-a7ea-4fdd-9b29-fa695c7d53c4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:31:59 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:31:59.889 143965 INFO neutron.agent.ovn.metadata.agent [-] Port a935a2a5-757d-49f8-9924-1f5f12e03325 in datapath 8f455f90-843e-4bc7-94c8-f96455201770 unbound from our chassis#033[00m
Dec  6 02:31:59 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:31:59.891 143965 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 8f455f90-843e-4bc7-94c8-f96455201770, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec  6 02:31:59 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:31:59.891 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[320b6f08-8ba3-427c-8dc9-0cc77a1b3bae]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:32:00 np0005548731 ceph-osd[80147]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #51. Immutable memtables: 8.
Dec  6 02:32:00 np0005548731 systemd[1]: run-netns-ovnmeta\x2d95ae8bf7\x2de7e5\x2d4bbf\x2da4f2\x2d58f1e2b487a5.mount: Deactivated successfully.
Dec  6 02:32:00 np0005548731 systemd[1]: var-lib-containers-storage-overlay-0c581daaef3162c1bf5df5f8c45681aa992379669ae7b173e849aa443791769c-merged.mount: Deactivated successfully.
Dec  6 02:32:00 np0005548731 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-321ae91615c42277cc5c7d6e97fc12f1c9c30472fc356a07e13a84a982a3bbfd-userdata-shm.mount: Deactivated successfully.
Dec  6 02:32:00 np0005548731 systemd[1]: run-netns-ovnmeta\x2d8f455f90\x2d843e\x2d4bc7\x2d94c8\x2df96455201770.mount: Deactivated successfully.
Dec  6 02:32:00 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:32:00 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:32:00 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:32:00.366 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:32:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:32:00.873 143965 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:32:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:32:00.873 143965 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:32:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:32:00.874 143965 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:32:01 np0005548731 nova_compute[232433]: 2025-12-06 07:32:01.259 232437 DEBUG nova.compute.manager [req-415ec089-39e8-452b-ac3c-9ecda9f2e5aa req-2373f7c9-8dc2-4362-82ea-ff38047b1439 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 02b3a680-d1bd-462f-95dc-aa0934c7ceac] Received event network-vif-plugged-70cec09c-3d65-4e69-8c35-7d82e75f4add external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:32:01 np0005548731 nova_compute[232433]: 2025-12-06 07:32:01.260 232437 DEBUG oslo_concurrency.lockutils [req-415ec089-39e8-452b-ac3c-9ecda9f2e5aa req-2373f7c9-8dc2-4362-82ea-ff38047b1439 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "02b3a680-d1bd-462f-95dc-aa0934c7ceac-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:32:01 np0005548731 nova_compute[232433]: 2025-12-06 07:32:01.260 232437 DEBUG oslo_concurrency.lockutils [req-415ec089-39e8-452b-ac3c-9ecda9f2e5aa req-2373f7c9-8dc2-4362-82ea-ff38047b1439 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "02b3a680-d1bd-462f-95dc-aa0934c7ceac-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:32:01 np0005548731 nova_compute[232433]: 2025-12-06 07:32:01.260 232437 DEBUG oslo_concurrency.lockutils [req-415ec089-39e8-452b-ac3c-9ecda9f2e5aa req-2373f7c9-8dc2-4362-82ea-ff38047b1439 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "02b3a680-d1bd-462f-95dc-aa0934c7ceac-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:32:01 np0005548731 nova_compute[232433]: 2025-12-06 07:32:01.268 232437 DEBUG nova.compute.manager [req-415ec089-39e8-452b-ac3c-9ecda9f2e5aa req-2373f7c9-8dc2-4362-82ea-ff38047b1439 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 02b3a680-d1bd-462f-95dc-aa0934c7ceac] No waiting events found dispatching network-vif-plugged-70cec09c-3d65-4e69-8c35-7d82e75f4add pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 02:32:01 np0005548731 nova_compute[232433]: 2025-12-06 07:32:01.269 232437 WARNING nova.compute.manager [req-415ec089-39e8-452b-ac3c-9ecda9f2e5aa req-2373f7c9-8dc2-4362-82ea-ff38047b1439 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 02b3a680-d1bd-462f-95dc-aa0934c7ceac] Received unexpected event network-vif-plugged-70cec09c-3d65-4e69-8c35-7d82e75f4add for instance with vm_state active and task_state deleting.#033[00m
Dec  6 02:32:01 np0005548731 nova_compute[232433]: 2025-12-06 07:32:01.269 232437 DEBUG nova.compute.manager [req-415ec089-39e8-452b-ac3c-9ecda9f2e5aa req-2373f7c9-8dc2-4362-82ea-ff38047b1439 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 02b3a680-d1bd-462f-95dc-aa0934c7ceac] Received event network-vif-unplugged-691d36f1-88c4-4e29-b5a7-059d5b66aee3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:32:01 np0005548731 nova_compute[232433]: 2025-12-06 07:32:01.269 232437 DEBUG oslo_concurrency.lockutils [req-415ec089-39e8-452b-ac3c-9ecda9f2e5aa req-2373f7c9-8dc2-4362-82ea-ff38047b1439 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "02b3a680-d1bd-462f-95dc-aa0934c7ceac-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:32:01 np0005548731 nova_compute[232433]: 2025-12-06 07:32:01.270 232437 DEBUG oslo_concurrency.lockutils [req-415ec089-39e8-452b-ac3c-9ecda9f2e5aa req-2373f7c9-8dc2-4362-82ea-ff38047b1439 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "02b3a680-d1bd-462f-95dc-aa0934c7ceac-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:32:01 np0005548731 nova_compute[232433]: 2025-12-06 07:32:01.270 232437 DEBUG oslo_concurrency.lockutils [req-415ec089-39e8-452b-ac3c-9ecda9f2e5aa req-2373f7c9-8dc2-4362-82ea-ff38047b1439 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "02b3a680-d1bd-462f-95dc-aa0934c7ceac-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:32:01 np0005548731 nova_compute[232433]: 2025-12-06 07:32:01.270 232437 DEBUG nova.compute.manager [req-415ec089-39e8-452b-ac3c-9ecda9f2e5aa req-2373f7c9-8dc2-4362-82ea-ff38047b1439 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 02b3a680-d1bd-462f-95dc-aa0934c7ceac] No waiting events found dispatching network-vif-unplugged-691d36f1-88c4-4e29-b5a7-059d5b66aee3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 02:32:01 np0005548731 nova_compute[232433]: 2025-12-06 07:32:01.270 232437 DEBUG nova.compute.manager [req-415ec089-39e8-452b-ac3c-9ecda9f2e5aa req-2373f7c9-8dc2-4362-82ea-ff38047b1439 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 02b3a680-d1bd-462f-95dc-aa0934c7ceac] Received event network-vif-unplugged-691d36f1-88c4-4e29-b5a7-059d5b66aee3 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Dec  6 02:32:01 np0005548731 nova_compute[232433]: 2025-12-06 07:32:01.271 232437 DEBUG nova.compute.manager [req-415ec089-39e8-452b-ac3c-9ecda9f2e5aa req-2373f7c9-8dc2-4362-82ea-ff38047b1439 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 02b3a680-d1bd-462f-95dc-aa0934c7ceac] Received event network-vif-plugged-691d36f1-88c4-4e29-b5a7-059d5b66aee3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:32:01 np0005548731 nova_compute[232433]: 2025-12-06 07:32:01.271 232437 DEBUG oslo_concurrency.lockutils [req-415ec089-39e8-452b-ac3c-9ecda9f2e5aa req-2373f7c9-8dc2-4362-82ea-ff38047b1439 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "02b3a680-d1bd-462f-95dc-aa0934c7ceac-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:32:01 np0005548731 nova_compute[232433]: 2025-12-06 07:32:01.271 232437 DEBUG oslo_concurrency.lockutils [req-415ec089-39e8-452b-ac3c-9ecda9f2e5aa req-2373f7c9-8dc2-4362-82ea-ff38047b1439 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "02b3a680-d1bd-462f-95dc-aa0934c7ceac-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:32:01 np0005548731 nova_compute[232433]: 2025-12-06 07:32:01.272 232437 DEBUG oslo_concurrency.lockutils [req-415ec089-39e8-452b-ac3c-9ecda9f2e5aa req-2373f7c9-8dc2-4362-82ea-ff38047b1439 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "02b3a680-d1bd-462f-95dc-aa0934c7ceac-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:32:01 np0005548731 nova_compute[232433]: 2025-12-06 07:32:01.272 232437 DEBUG nova.compute.manager [req-415ec089-39e8-452b-ac3c-9ecda9f2e5aa req-2373f7c9-8dc2-4362-82ea-ff38047b1439 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 02b3a680-d1bd-462f-95dc-aa0934c7ceac] No waiting events found dispatching network-vif-plugged-691d36f1-88c4-4e29-b5a7-059d5b66aee3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 02:32:01 np0005548731 nova_compute[232433]: 2025-12-06 07:32:01.272 232437 WARNING nova.compute.manager [req-415ec089-39e8-452b-ac3c-9ecda9f2e5aa req-2373f7c9-8dc2-4362-82ea-ff38047b1439 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 02b3a680-d1bd-462f-95dc-aa0934c7ceac] Received unexpected event network-vif-plugged-691d36f1-88c4-4e29-b5a7-059d5b66aee3 for instance with vm_state active and task_state deleting.
Dec  6 02:32:01 np0005548731 nova_compute[232433]: 2025-12-06 07:32:01.318 232437 DEBUG nova.compute.manager [req-ccc99fff-0058-4e9c-8ff9-c5ba97c287b2 req-507d70b6-615a-4f6d-afc3-a7a92f540200 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 02b3a680-d1bd-462f-95dc-aa0934c7ceac] Received event network-vif-plugged-7989285b-96a5-43e8-b98e-c0fe1badbf5e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec  6 02:32:01 np0005548731 nova_compute[232433]: 2025-12-06 07:32:01.318 232437 DEBUG oslo_concurrency.lockutils [req-ccc99fff-0058-4e9c-8ff9-c5ba97c287b2 req-507d70b6-615a-4f6d-afc3-a7a92f540200 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "02b3a680-d1bd-462f-95dc-aa0934c7ceac-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  6 02:32:01 np0005548731 nova_compute[232433]: 2025-12-06 07:32:01.319 232437 DEBUG oslo_concurrency.lockutils [req-ccc99fff-0058-4e9c-8ff9-c5ba97c287b2 req-507d70b6-615a-4f6d-afc3-a7a92f540200 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "02b3a680-d1bd-462f-95dc-aa0934c7ceac-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  6 02:32:01 np0005548731 nova_compute[232433]: 2025-12-06 07:32:01.319 232437 DEBUG oslo_concurrency.lockutils [req-ccc99fff-0058-4e9c-8ff9-c5ba97c287b2 req-507d70b6-615a-4f6d-afc3-a7a92f540200 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "02b3a680-d1bd-462f-95dc-aa0934c7ceac-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  6 02:32:01 np0005548731 nova_compute[232433]: 2025-12-06 07:32:01.319 232437 DEBUG nova.compute.manager [req-ccc99fff-0058-4e9c-8ff9-c5ba97c287b2 req-507d70b6-615a-4f6d-afc3-a7a92f540200 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 02b3a680-d1bd-462f-95dc-aa0934c7ceac] No waiting events found dispatching network-vif-plugged-7989285b-96a5-43e8-b98e-c0fe1badbf5e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec  6 02:32:01 np0005548731 nova_compute[232433]: 2025-12-06 07:32:01.320 232437 WARNING nova.compute.manager [req-ccc99fff-0058-4e9c-8ff9-c5ba97c287b2 req-507d70b6-615a-4f6d-afc3-a7a92f540200 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 02b3a680-d1bd-462f-95dc-aa0934c7ceac] Received unexpected event network-vif-plugged-7989285b-96a5-43e8-b98e-c0fe1badbf5e for instance with vm_state active and task_state deleting.
Dec  6 02:32:01 np0005548731 nova_compute[232433]: 2025-12-06 07:32:01.320 232437 DEBUG nova.compute.manager [req-ccc99fff-0058-4e9c-8ff9-c5ba97c287b2 req-507d70b6-615a-4f6d-afc3-a7a92f540200 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 02b3a680-d1bd-462f-95dc-aa0934c7ceac] Received event network-vif-unplugged-a935a2a5-757d-49f8-9924-1f5f12e03325 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec  6 02:32:01 np0005548731 nova_compute[232433]: 2025-12-06 07:32:01.320 232437 DEBUG oslo_concurrency.lockutils [req-ccc99fff-0058-4e9c-8ff9-c5ba97c287b2 req-507d70b6-615a-4f6d-afc3-a7a92f540200 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "02b3a680-d1bd-462f-95dc-aa0934c7ceac-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  6 02:32:01 np0005548731 nova_compute[232433]: 2025-12-06 07:32:01.321 232437 DEBUG oslo_concurrency.lockutils [req-ccc99fff-0058-4e9c-8ff9-c5ba97c287b2 req-507d70b6-615a-4f6d-afc3-a7a92f540200 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "02b3a680-d1bd-462f-95dc-aa0934c7ceac-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  6 02:32:01 np0005548731 nova_compute[232433]: 2025-12-06 07:32:01.321 232437 DEBUG oslo_concurrency.lockutils [req-ccc99fff-0058-4e9c-8ff9-c5ba97c287b2 req-507d70b6-615a-4f6d-afc3-a7a92f540200 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "02b3a680-d1bd-462f-95dc-aa0934c7ceac-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  6 02:32:01 np0005548731 nova_compute[232433]: 2025-12-06 07:32:01.322 232437 DEBUG nova.compute.manager [req-ccc99fff-0058-4e9c-8ff9-c5ba97c287b2 req-507d70b6-615a-4f6d-afc3-a7a92f540200 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 02b3a680-d1bd-462f-95dc-aa0934c7ceac] No waiting events found dispatching network-vif-unplugged-a935a2a5-757d-49f8-9924-1f5f12e03325 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec  6 02:32:01 np0005548731 nova_compute[232433]: 2025-12-06 07:32:01.322 232437 DEBUG nova.compute.manager [req-ccc99fff-0058-4e9c-8ff9-c5ba97c287b2 req-507d70b6-615a-4f6d-afc3-a7a92f540200 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 02b3a680-d1bd-462f-95dc-aa0934c7ceac] Received event network-vif-unplugged-a935a2a5-757d-49f8-9924-1f5f12e03325 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Dec  6 02:32:01 np0005548731 nova_compute[232433]: 2025-12-06 07:32:01.322 232437 DEBUG nova.compute.manager [req-ccc99fff-0058-4e9c-8ff9-c5ba97c287b2 req-507d70b6-615a-4f6d-afc3-a7a92f540200 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 02b3a680-d1bd-462f-95dc-aa0934c7ceac] Received event network-vif-plugged-a935a2a5-757d-49f8-9924-1f5f12e03325 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec  6 02:32:01 np0005548731 nova_compute[232433]: 2025-12-06 07:32:01.322 232437 DEBUG oslo_concurrency.lockutils [req-ccc99fff-0058-4e9c-8ff9-c5ba97c287b2 req-507d70b6-615a-4f6d-afc3-a7a92f540200 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "02b3a680-d1bd-462f-95dc-aa0934c7ceac-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  6 02:32:01 np0005548731 nova_compute[232433]: 2025-12-06 07:32:01.323 232437 DEBUG oslo_concurrency.lockutils [req-ccc99fff-0058-4e9c-8ff9-c5ba97c287b2 req-507d70b6-615a-4f6d-afc3-a7a92f540200 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "02b3a680-d1bd-462f-95dc-aa0934c7ceac-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  6 02:32:01 np0005548731 nova_compute[232433]: 2025-12-06 07:32:01.323 232437 DEBUG oslo_concurrency.lockutils [req-ccc99fff-0058-4e9c-8ff9-c5ba97c287b2 req-507d70b6-615a-4f6d-afc3-a7a92f540200 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "02b3a680-d1bd-462f-95dc-aa0934c7ceac-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  6 02:32:01 np0005548731 nova_compute[232433]: 2025-12-06 07:32:01.323 232437 DEBUG nova.compute.manager [req-ccc99fff-0058-4e9c-8ff9-c5ba97c287b2 req-507d70b6-615a-4f6d-afc3-a7a92f540200 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 02b3a680-d1bd-462f-95dc-aa0934c7ceac] No waiting events found dispatching network-vif-plugged-a935a2a5-757d-49f8-9924-1f5f12e03325 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec  6 02:32:01 np0005548731 nova_compute[232433]: 2025-12-06 07:32:01.323 232437 WARNING nova.compute.manager [req-ccc99fff-0058-4e9c-8ff9-c5ba97c287b2 req-507d70b6-615a-4f6d-afc3-a7a92f540200 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 02b3a680-d1bd-462f-95dc-aa0934c7ceac] Received unexpected event network-vif-plugged-a935a2a5-757d-49f8-9924-1f5f12e03325 for instance with vm_state active and task_state deleting.
Dec  6 02:32:01 np0005548731 nova_compute[232433]: 2025-12-06 07:32:01.492 232437 DEBUG nova.compute.manager [req-22de6859-bf48-4ced-b284-0aae52494abb req-60c51a17-f6c9-456c-9900-7d71f4243041 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 02b3a680-d1bd-462f-95dc-aa0934c7ceac] Received event network-vif-plugged-99fbd66e-87ac-42e3-8722-34d617ef9bb1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec  6 02:32:01 np0005548731 nova_compute[232433]: 2025-12-06 07:32:01.493 232437 DEBUG oslo_concurrency.lockutils [req-22de6859-bf48-4ced-b284-0aae52494abb req-60c51a17-f6c9-456c-9900-7d71f4243041 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "02b3a680-d1bd-462f-95dc-aa0934c7ceac-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  6 02:32:01 np0005548731 nova_compute[232433]: 2025-12-06 07:32:01.494 232437 DEBUG oslo_concurrency.lockutils [req-22de6859-bf48-4ced-b284-0aae52494abb req-60c51a17-f6c9-456c-9900-7d71f4243041 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "02b3a680-d1bd-462f-95dc-aa0934c7ceac-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  6 02:32:01 np0005548731 nova_compute[232433]: 2025-12-06 07:32:01.494 232437 DEBUG oslo_concurrency.lockutils [req-22de6859-bf48-4ced-b284-0aae52494abb req-60c51a17-f6c9-456c-9900-7d71f4243041 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "02b3a680-d1bd-462f-95dc-aa0934c7ceac-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  6 02:32:01 np0005548731 nova_compute[232433]: 2025-12-06 07:32:01.495 232437 DEBUG nova.compute.manager [req-22de6859-bf48-4ced-b284-0aae52494abb req-60c51a17-f6c9-456c-9900-7d71f4243041 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 02b3a680-d1bd-462f-95dc-aa0934c7ceac] No waiting events found dispatching network-vif-plugged-99fbd66e-87ac-42e3-8722-34d617ef9bb1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec  6 02:32:01 np0005548731 nova_compute[232433]: 2025-12-06 07:32:01.495 232437 WARNING nova.compute.manager [req-22de6859-bf48-4ced-b284-0aae52494abb req-60c51a17-f6c9-456c-9900-7d71f4243041 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 02b3a680-d1bd-462f-95dc-aa0934c7ceac] Received unexpected event network-vif-plugged-99fbd66e-87ac-42e3-8722-34d617ef9bb1 for instance with vm_state active and task_state deleting.
Dec  6 02:32:01 np0005548731 nova_compute[232433]: 2025-12-06 07:32:01.496 232437 DEBUG nova.compute.manager [req-22de6859-bf48-4ced-b284-0aae52494abb req-60c51a17-f6c9-456c-9900-7d71f4243041 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 02b3a680-d1bd-462f-95dc-aa0934c7ceac] Received event network-vif-unplugged-ee2de0a7-a321-4f41-868f-457ea7275ded external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec  6 02:32:01 np0005548731 nova_compute[232433]: 2025-12-06 07:32:01.496 232437 DEBUG oslo_concurrency.lockutils [req-22de6859-bf48-4ced-b284-0aae52494abb req-60c51a17-f6c9-456c-9900-7d71f4243041 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "02b3a680-d1bd-462f-95dc-aa0934c7ceac-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  6 02:32:01 np0005548731 nova_compute[232433]: 2025-12-06 07:32:01.497 232437 DEBUG oslo_concurrency.lockutils [req-22de6859-bf48-4ced-b284-0aae52494abb req-60c51a17-f6c9-456c-9900-7d71f4243041 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "02b3a680-d1bd-462f-95dc-aa0934c7ceac-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  6 02:32:01 np0005548731 nova_compute[232433]: 2025-12-06 07:32:01.497 232437 DEBUG oslo_concurrency.lockutils [req-22de6859-bf48-4ced-b284-0aae52494abb req-60c51a17-f6c9-456c-9900-7d71f4243041 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "02b3a680-d1bd-462f-95dc-aa0934c7ceac-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  6 02:32:01 np0005548731 nova_compute[232433]: 2025-12-06 07:32:01.498 232437 DEBUG nova.compute.manager [req-22de6859-bf48-4ced-b284-0aae52494abb req-60c51a17-f6c9-456c-9900-7d71f4243041 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 02b3a680-d1bd-462f-95dc-aa0934c7ceac] No waiting events found dispatching network-vif-unplugged-ee2de0a7-a321-4f41-868f-457ea7275ded pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec  6 02:32:01 np0005548731 nova_compute[232433]: 2025-12-06 07:32:01.498 232437 DEBUG nova.compute.manager [req-22de6859-bf48-4ced-b284-0aae52494abb req-60c51a17-f6c9-456c-9900-7d71f4243041 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 02b3a680-d1bd-462f-95dc-aa0934c7ceac] Received event network-vif-unplugged-ee2de0a7-a321-4f41-868f-457ea7275ded for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Dec  6 02:32:01 np0005548731 nova_compute[232433]: 2025-12-06 07:32:01.498 232437 DEBUG nova.compute.manager [req-22de6859-bf48-4ced-b284-0aae52494abb req-60c51a17-f6c9-456c-9900-7d71f4243041 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 02b3a680-d1bd-462f-95dc-aa0934c7ceac] Received event network-vif-plugged-ee2de0a7-a321-4f41-868f-457ea7275ded external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec  6 02:32:01 np0005548731 nova_compute[232433]: 2025-12-06 07:32:01.499 232437 DEBUG oslo_concurrency.lockutils [req-22de6859-bf48-4ced-b284-0aae52494abb req-60c51a17-f6c9-456c-9900-7d71f4243041 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "02b3a680-d1bd-462f-95dc-aa0934c7ceac-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  6 02:32:01 np0005548731 nova_compute[232433]: 2025-12-06 07:32:01.499 232437 DEBUG oslo_concurrency.lockutils [req-22de6859-bf48-4ced-b284-0aae52494abb req-60c51a17-f6c9-456c-9900-7d71f4243041 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "02b3a680-d1bd-462f-95dc-aa0934c7ceac-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  6 02:32:01 np0005548731 nova_compute[232433]: 2025-12-06 07:32:01.500 232437 DEBUG oslo_concurrency.lockutils [req-22de6859-bf48-4ced-b284-0aae52494abb req-60c51a17-f6c9-456c-9900-7d71f4243041 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "02b3a680-d1bd-462f-95dc-aa0934c7ceac-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  6 02:32:01 np0005548731 nova_compute[232433]: 2025-12-06 07:32:01.500 232437 DEBUG nova.compute.manager [req-22de6859-bf48-4ced-b284-0aae52494abb req-60c51a17-f6c9-456c-9900-7d71f4243041 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 02b3a680-d1bd-462f-95dc-aa0934c7ceac] No waiting events found dispatching network-vif-plugged-ee2de0a7-a321-4f41-868f-457ea7275ded pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec  6 02:32:01 np0005548731 nova_compute[232433]: 2025-12-06 07:32:01.501 232437 WARNING nova.compute.manager [req-22de6859-bf48-4ced-b284-0aae52494abb req-60c51a17-f6c9-456c-9900-7d71f4243041 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 02b3a680-d1bd-462f-95dc-aa0934c7ceac] Received unexpected event network-vif-plugged-ee2de0a7-a321-4f41-868f-457ea7275ded for instance with vm_state active and task_state deleting.
Dec  6 02:32:01 np0005548731 nova_compute[232433]: 2025-12-06 07:32:01.501 232437 DEBUG nova.compute.manager [req-22de6859-bf48-4ced-b284-0aae52494abb req-60c51a17-f6c9-456c-9900-7d71f4243041 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 02b3a680-d1bd-462f-95dc-aa0934c7ceac] Received event network-vif-unplugged-7aeffb1d-aa58-4c79-9768-bcd0a7dcce43 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec  6 02:32:01 np0005548731 nova_compute[232433]: 2025-12-06 07:32:01.501 232437 DEBUG oslo_concurrency.lockutils [req-22de6859-bf48-4ced-b284-0aae52494abb req-60c51a17-f6c9-456c-9900-7d71f4243041 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "02b3a680-d1bd-462f-95dc-aa0934c7ceac-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  6 02:32:01 np0005548731 nova_compute[232433]: 2025-12-06 07:32:01.502 232437 DEBUG oslo_concurrency.lockutils [req-22de6859-bf48-4ced-b284-0aae52494abb req-60c51a17-f6c9-456c-9900-7d71f4243041 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "02b3a680-d1bd-462f-95dc-aa0934c7ceac-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  6 02:32:01 np0005548731 nova_compute[232433]: 2025-12-06 07:32:01.502 232437 DEBUG oslo_concurrency.lockutils [req-22de6859-bf48-4ced-b284-0aae52494abb req-60c51a17-f6c9-456c-9900-7d71f4243041 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "02b3a680-d1bd-462f-95dc-aa0934c7ceac-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  6 02:32:01 np0005548731 nova_compute[232433]: 2025-12-06 07:32:01.503 232437 DEBUG nova.compute.manager [req-22de6859-bf48-4ced-b284-0aae52494abb req-60c51a17-f6c9-456c-9900-7d71f4243041 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 02b3a680-d1bd-462f-95dc-aa0934c7ceac] No waiting events found dispatching network-vif-unplugged-7aeffb1d-aa58-4c79-9768-bcd0a7dcce43 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec  6 02:32:01 np0005548731 nova_compute[232433]: 2025-12-06 07:32:01.503 232437 DEBUG nova.compute.manager [req-22de6859-bf48-4ced-b284-0aae52494abb req-60c51a17-f6c9-456c-9900-7d71f4243041 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 02b3a680-d1bd-462f-95dc-aa0934c7ceac] Received event network-vif-unplugged-7aeffb1d-aa58-4c79-9768-bcd0a7dcce43 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Dec  6 02:32:01 np0005548731 nova_compute[232433]: 2025-12-06 07:32:01.504 232437 DEBUG nova.compute.manager [req-22de6859-bf48-4ced-b284-0aae52494abb req-60c51a17-f6c9-456c-9900-7d71f4243041 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 02b3a680-d1bd-462f-95dc-aa0934c7ceac] Received event network-vif-plugged-7aeffb1d-aa58-4c79-9768-bcd0a7dcce43 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec  6 02:32:01 np0005548731 nova_compute[232433]: 2025-12-06 07:32:01.504 232437 DEBUG oslo_concurrency.lockutils [req-22de6859-bf48-4ced-b284-0aae52494abb req-60c51a17-f6c9-456c-9900-7d71f4243041 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "02b3a680-d1bd-462f-95dc-aa0934c7ceac-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  6 02:32:01 np0005548731 nova_compute[232433]: 2025-12-06 07:32:01.504 232437 DEBUG oslo_concurrency.lockutils [req-22de6859-bf48-4ced-b284-0aae52494abb req-60c51a17-f6c9-456c-9900-7d71f4243041 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "02b3a680-d1bd-462f-95dc-aa0934c7ceac-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  6 02:32:01 np0005548731 nova_compute[232433]: 2025-12-06 07:32:01.505 232437 DEBUG oslo_concurrency.lockutils [req-22de6859-bf48-4ced-b284-0aae52494abb req-60c51a17-f6c9-456c-9900-7d71f4243041 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "02b3a680-d1bd-462f-95dc-aa0934c7ceac-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  6 02:32:01 np0005548731 nova_compute[232433]: 2025-12-06 07:32:01.505 232437 DEBUG nova.compute.manager [req-22de6859-bf48-4ced-b284-0aae52494abb req-60c51a17-f6c9-456c-9900-7d71f4243041 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 02b3a680-d1bd-462f-95dc-aa0934c7ceac] No waiting events found dispatching network-vif-plugged-7aeffb1d-aa58-4c79-9768-bcd0a7dcce43 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec  6 02:32:01 np0005548731 nova_compute[232433]: 2025-12-06 07:32:01.506 232437 WARNING nova.compute.manager [req-22de6859-bf48-4ced-b284-0aae52494abb req-60c51a17-f6c9-456c-9900-7d71f4243041 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 02b3a680-d1bd-462f-95dc-aa0934c7ceac] Received unexpected event network-vif-plugged-7aeffb1d-aa58-4c79-9768-bcd0a7dcce43 for instance with vm_state active and task_state deleting.
Dec  6 02:32:01 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:32:01 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:32:01 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:32:01.524 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:32:02 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:32:02 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:32:02 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:32:02.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:32:02 np0005548731 nova_compute[232433]: 2025-12-06 07:32:02.755 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 02:32:03 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:32:03 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:32:03 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:32:03 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:32:03.527 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:32:04 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:32:04 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:32:04 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:32:04.369 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:32:04 np0005548731 nova_compute[232433]: 2025-12-06 07:32:04.456 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 02:32:05 np0005548731 nova_compute[232433]: 2025-12-06 07:32:05.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  6 02:32:05 np0005548731 nova_compute[232433]: 2025-12-06 07:32:05.104 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec  6 02:32:05 np0005548731 nova_compute[232433]: 2025-12-06 07:32:05.104 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec  6 02:32:05 np0005548731 nova_compute[232433]: 2025-12-06 07:32:05.123 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] [instance: 02b3a680-d1bd-462f-95dc-aa0934c7ceac] Skipping network cache update for instance because it is being deleted. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9875
Dec  6 02:32:05 np0005548731 nova_compute[232433]: 2025-12-06 07:32:05.124 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec  6 02:32:05 np0005548731 nova_compute[232433]: 2025-12-06 07:32:05.124 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  6 02:32:05 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:32:05 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:32:05 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:32:05.530 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:32:06 np0005548731 nova_compute[232433]: 2025-12-06 07:32:06.119 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  6 02:32:06 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:32:06 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:32:06 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:32:06.373 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:32:06 np0005548731 nova_compute[232433]: 2025-12-06 07:32:06.872 232437 INFO nova.virt.libvirt.driver [None req-d5dc85e5-23c1-453a-b8a7-ee96a92496a0 8e5d532f4af34acc9e3a7a08819bb089 1a5dd5f9ca5747618b386c87a40ede88 - - default default] [instance: 02b3a680-d1bd-462f-95dc-aa0934c7ceac] Deleting instance files /var/lib/nova/instances/02b3a680-d1bd-462f-95dc-aa0934c7ceac_del
Dec  6 02:32:06 np0005548731 nova_compute[232433]: 2025-12-06 07:32:06.873 232437 INFO nova.virt.libvirt.driver [None req-d5dc85e5-23c1-453a-b8a7-ee96a92496a0 8e5d532f4af34acc9e3a7a08819bb089 1a5dd5f9ca5747618b386c87a40ede88 - - default default] [instance: 02b3a680-d1bd-462f-95dc-aa0934c7ceac] Deletion of /var/lib/nova/instances/02b3a680-d1bd-462f-95dc-aa0934c7ceac_del complete
Dec  6 02:32:07 np0005548731 nova_compute[232433]: 2025-12-06 07:32:07.028 232437 INFO nova.compute.manager [None req-d5dc85e5-23c1-453a-b8a7-ee96a92496a0 8e5d532f4af34acc9e3a7a08819bb089 1a5dd5f9ca5747618b386c87a40ede88 - - default default] [instance: 02b3a680-d1bd-462f-95dc-aa0934c7ceac] Took 10.03 seconds to destroy the instance on the hypervisor.
Dec  6 02:32:07 np0005548731 nova_compute[232433]: 2025-12-06 07:32:07.029 232437 DEBUG oslo.service.loopingcall [None req-d5dc85e5-23c1-453a-b8a7-ee96a92496a0 8e5d532f4af34acc9e3a7a08819bb089 1a5dd5f9ca5747618b386c87a40ede88 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec  6 02:32:07 np0005548731 nova_compute[232433]: 2025-12-06 07:32:07.029 232437 DEBUG nova.compute.manager [-] [instance: 02b3a680-d1bd-462f-95dc-aa0934c7ceac] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec  6 02:32:07 np0005548731 nova_compute[232433]: 2025-12-06 07:32:07.030 232437 DEBUG nova.network.neutron [-] [instance: 02b3a680-d1bd-462f-95dc-aa0934c7ceac] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec  6 02:32:07 np0005548731 nova_compute[232433]: 2025-12-06 07:32:07.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  6 02:32:07 np0005548731 nova_compute[232433]: 2025-12-06 07:32:07.104 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec  6 02:32:07 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:32:07 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:32:07 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:32:07.533 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:32:07 np0005548731 nova_compute[232433]: 2025-12-06 07:32:07.757 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 02:32:08 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:32:08 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:32:08 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:32:08.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:32:08 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:32:09 np0005548731 nova_compute[232433]: 2025-12-06 07:32:09.459 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:32:09 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:32:09 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:32:09 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:32:09.535 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:32:09 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  6 02:32:09 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1774354065' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  6 02:32:09 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  6 02:32:09 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1774354065' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  6 02:32:10 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:32:10 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:32:10 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:32:10.376 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:32:11 np0005548731 nova_compute[232433]: 2025-12-06 07:32:11.105 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:32:11 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:32:11 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:32:11 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:32:11.538 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:32:12 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:32:12 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:32:12 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:32:12.378 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:32:12 np0005548731 nova_compute[232433]: 2025-12-06 07:32:12.759 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:32:13 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:32:13 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:32:13 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:32:13 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:32:13.541 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:32:14 np0005548731 nova_compute[232433]: 2025-12-06 07:32:14.105 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:32:14 np0005548731 nova_compute[232433]: 2025-12-06 07:32:14.141 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:32:14 np0005548731 nova_compute[232433]: 2025-12-06 07:32:14.141 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:32:14 np0005548731 nova_compute[232433]: 2025-12-06 07:32:14.141 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:32:14 np0005548731 nova_compute[232433]: 2025-12-06 07:32:14.141 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  6 02:32:14 np0005548731 nova_compute[232433]: 2025-12-06 07:32:14.142 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:32:14 np0005548731 nova_compute[232433]: 2025-12-06 07:32:14.268 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:32:14 np0005548731 nova_compute[232433]: 2025-12-06 07:32:14.326 232437 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765006319.3251548, 02b3a680-d1bd-462f-95dc-aa0934c7ceac => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 02:32:14 np0005548731 nova_compute[232433]: 2025-12-06 07:32:14.326 232437 INFO nova.compute.manager [-] [instance: 02b3a680-d1bd-462f-95dc-aa0934c7ceac] VM Stopped (Lifecycle Event)#033[00m
Dec  6 02:32:14 np0005548731 nova_compute[232433]: 2025-12-06 07:32:14.353 232437 DEBUG nova.compute.manager [None req-26b5ffae-3952-41a6-8e74-a76609a46d7d - - - - - -] [instance: 02b3a680-d1bd-462f-95dc-aa0934c7ceac] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:32:14 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:32:14 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.002000047s ======
Dec  6 02:32:14 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:32:14.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000047s
Dec  6 02:32:14 np0005548731 nova_compute[232433]: 2025-12-06 07:32:14.461 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:32:14 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 02:32:14 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3515848525' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 02:32:14 np0005548731 nova_compute[232433]: 2025-12-06 07:32:14.611 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:32:14 np0005548731 nova_compute[232433]: 2025-12-06 07:32:14.775 232437 WARNING nova.virt.libvirt.driver [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  6 02:32:14 np0005548731 nova_compute[232433]: 2025-12-06 07:32:14.776 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4505MB free_disk=20.94660186767578GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, 
"label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  6 02:32:14 np0005548731 nova_compute[232433]: 2025-12-06 07:32:14.776 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:32:14 np0005548731 nova_compute[232433]: 2025-12-06 07:32:14.776 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:32:14 np0005548731 nova_compute[232433]: 2025-12-06 07:32:14.855 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Instance 02b3a680-d1bd-462f-95dc-aa0934c7ceac actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  6 02:32:14 np0005548731 nova_compute[232433]: 2025-12-06 07:32:14.855 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  6 02:32:14 np0005548731 nova_compute[232433]: 2025-12-06 07:32:14.855 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  6 02:32:14 np0005548731 nova_compute[232433]: 2025-12-06 07:32:14.886 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:32:15 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 02:32:15 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3910440295' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 02:32:15 np0005548731 nova_compute[232433]: 2025-12-06 07:32:15.305 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.419s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:32:15 np0005548731 nova_compute[232433]: 2025-12-06 07:32:15.311 232437 DEBUG nova.compute.provider_tree [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Inventory has not changed in ProviderTree for provider: 6d00757a-082f-486d-ae84-869a2ba2e6e7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  6 02:32:15 np0005548731 nova_compute[232433]: 2025-12-06 07:32:15.328 232437 DEBUG nova.scheduler.client.report [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Inventory has not changed for provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  6 02:32:15 np0005548731 nova_compute[232433]: 2025-12-06 07:32:15.350 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  6 02:32:15 np0005548731 nova_compute[232433]: 2025-12-06 07:32:15.351 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.574s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:32:15 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:32:15 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.003000071s ======
Dec  6 02:32:15 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:32:15.542 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.003000071s
Dec  6 02:32:16 np0005548731 nova_compute[232433]: 2025-12-06 07:32:16.351 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:32:16 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:32:16 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 02:32:16 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:32:16.382 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 02:32:16 np0005548731 nova_compute[232433]: 2025-12-06 07:32:16.918 232437 DEBUG nova.compute.manager [req-7a5ede8c-ac8c-4c66-a10f-ee422e4e7fa9 req-d4229185-3930-40d1-b881-7ed34eeeea9b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 02b3a680-d1bd-462f-95dc-aa0934c7ceac] Received event network-vif-deleted-70cec09c-3d65-4e69-8c35-7d82e75f4add external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:32:16 np0005548731 nova_compute[232433]: 2025-12-06 07:32:16.918 232437 INFO nova.compute.manager [req-7a5ede8c-ac8c-4c66-a10f-ee422e4e7fa9 req-d4229185-3930-40d1-b881-7ed34eeeea9b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 02b3a680-d1bd-462f-95dc-aa0934c7ceac] Neutron deleted interface 70cec09c-3d65-4e69-8c35-7d82e75f4add; detaching it from the instance and deleting it from the info cache#033[00m
Dec  6 02:32:16 np0005548731 nova_compute[232433]: 2025-12-06 07:32:16.918 232437 DEBUG nova.network.neutron [req-7a5ede8c-ac8c-4c66-a10f-ee422e4e7fa9 req-d4229185-3930-40d1-b881-7ed34eeeea9b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 02b3a680-d1bd-462f-95dc-aa0934c7ceac] Updating instance_info_cache with network_info: [{"id": "99fbd66e-87ac-42e3-8722-34d617ef9bb1", "address": "fa:16:3e:c3:05:bf", "network": {"id": "95ae8bf7-e7e5-4bbf-a4f2-58f1e2b487a5", "bridge": "br-int", "label": "tempest-device-tagging-net1-103242350", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.137", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a5dd5f9ca5747618b386c87a40ede88", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap99fbd66e-87", "ovs_interfaceid": "99fbd66e-87ac-42e3-8722-34d617ef9bb1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}, {"id": "7989285b-96a5-43e8-b98e-c0fe1badbf5e", "address": "fa:16:3e:e2:25:4d", "network": {"id": "95ae8bf7-e7e5-4bbf-a4f2-58f1e2b487a5", "bridge": "br-int", "label": "tempest-device-tagging-net1-103242350", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.231", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a5dd5f9ca5747618b386c87a40ede88", "mtu": 1442, 
"physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7989285b-96", "ovs_interfaceid": "7989285b-96a5-43e8-b98e-c0fe1badbf5e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}, {"id": "ee2de0a7-a321-4f41-868f-457ea7275ded", "address": "fa:16:3e:62:49:cc", "network": {"id": "95ae8bf7-e7e5-4bbf-a4f2-58f1e2b487a5", "bridge": "br-int", "label": "tempest-device-tagging-net1-103242350", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.249", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a5dd5f9ca5747618b386c87a40ede88", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapee2de0a7-a3", "ovs_interfaceid": "ee2de0a7-a321-4f41-868f-457ea7275ded", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "7aeffb1d-aa58-4c79-9768-bcd0a7dcce43", "address": "fa:16:3e:45:a8:03", "network": {"id": "95ae8bf7-e7e5-4bbf-a4f2-58f1e2b487a5", "bridge": "br-int", "label": "tempest-device-tagging-net1-103242350", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.213", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "1a5dd5f9ca5747618b386c87a40ede88", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7aeffb1d-aa", "ovs_interfaceid": "7aeffb1d-aa58-4c79-9768-bcd0a7dcce43", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "691d36f1-88c4-4e29-b5a7-059d5b66aee3", "address": "fa:16:3e:b2:aa:a0", "network": {"id": "8f455f90-843e-4bc7-94c8-f96455201770", "bridge": "br-int", "label": "tempest-device-tagging-net2-1809570820", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.2.2.100", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a5dd5f9ca5747618b386c87a40ede88", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap691d36f1-88", "ovs_interfaceid": "691d36f1-88c4-4e29-b5a7-059d5b66aee3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "a935a2a5-757d-49f8-9924-1f5f12e03325", "address": "fa:16:3e:c4:17:f5", "network": {"id": "8f455f90-843e-4bc7-94c8-f96455201770", "bridge": "br-int", "label": "tempest-device-tagging-net2-1809570820", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.2.2.200", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": 
[], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a5dd5f9ca5747618b386c87a40ede88", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa935a2a5-75", "ovs_interfaceid": "a935a2a5-757d-49f8-9924-1f5f12e03325", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 02:32:16 np0005548731 nova_compute[232433]: 2025-12-06 07:32:16.944 232437 DEBUG nova.compute.manager [req-7a5ede8c-ac8c-4c66-a10f-ee422e4e7fa9 req-d4229185-3930-40d1-b881-7ed34eeeea9b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 02b3a680-d1bd-462f-95dc-aa0934c7ceac] Detach interface failed, port_id=70cec09c-3d65-4e69-8c35-7d82e75f4add, reason: Instance 02b3a680-d1bd-462f-95dc-aa0934c7ceac could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882#033[00m
Dec  6 02:32:17 np0005548731 nova_compute[232433]: 2025-12-06 07:32:17.105 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:32:17 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:32:17 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.002000048s ======
Dec  6 02:32:17 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:32:17.547 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000048s
Dec  6 02:32:17 np0005548731 nova_compute[232433]: 2025-12-06 07:32:17.761 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:32:17 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Dec  6 02:32:17 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Dec  6 02:32:17 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 02:32:18 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:32:18 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:32:18 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:32:18.386 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:32:18 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:32:18 np0005548731 podman[281648]: 2025-12-06 07:32:18.646596044 +0000 UTC m=+0.063367677 container exec 541e7276f6038dd900483907d1613bdfcbf99ea3714cf53a920a7189986fab54 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-mon-compute-2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Dec  6 02:32:18 np0005548731 podman[281648]: 2025-12-06 07:32:18.739981954 +0000 UTC m=+0.156753587 container exec_died 541e7276f6038dd900483907d1613bdfcbf99ea3714cf53a920a7189986fab54 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-mon-compute-2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  6 02:32:19 np0005548731 podman[281720]: 2025-12-06 07:32:19.000555323 +0000 UTC m=+0.050683857 container health_status 26c2ce9bdb83c6db5ccad7f5b2a7cb4e9354ab0235ad2f942ea42c8feda2142b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack 
Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Dec  6 02:32:19 np0005548731 podman[281721]: 2025-12-06 07:32:19.003571707 +0000 UTC m=+0.053513777 container health_status ae4fd08cdec31d6ae2a6b91ffff7deb6ca5a8375cb52ee4b685cc6f25796824e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  6 02:32:19 np0005548731 podman[281722]: 2025-12-06 07:32:19.034379099 +0000 UTC m=+0.079172263 container health_status a118c3becf56cfbcae208065771ebf10a65162bb7b96389971293e534823fb78 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec  6 02:32:19 np0005548731 podman[281862]: 2025-12-06 07:32:19.34168886 +0000 UTC m=+0.053662492 container exec 42755d5ddab81b86d84b40c9464a8af637e2d196c1a098a61593efb6a9875167 (image=quay.io/ceph/haproxy:2.3, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-haproxy-rgw-default-compute-2-nyemkw)
Dec  6 02:32:19 np0005548731 podman[281862]: 2025-12-06 07:32:19.352798521 +0000 UTC m=+0.064772133 container exec_died 42755d5ddab81b86d84b40c9464a8af637e2d196c1a098a61593efb6a9875167 (image=quay.io/ceph/haproxy:2.3, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-haproxy-rgw-default-compute-2-nyemkw)
Dec  6 02:32:19 np0005548731 nova_compute[232433]: 2025-12-06 07:32:19.462 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:32:19 np0005548731 podman[281926]: 2025-12-06 07:32:19.536482744 +0000 UTC m=+0.051149760 container exec 60f905acca99db805341691264f7f91f9f4f43c91aa728d21396595dfca7cf39 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-keepalived-rgw-default-compute-2-cossgt, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=keepalived, com.redhat.component=keepalived-container, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, version=2.2.4, io.k8s.display-name=Keepalived on RHEL 9, description=keepalived for Ceph, io.buildah.version=1.28.2, summary=Provides keepalived on RHEL 9 for Ceph., architecture=x86_64, build-date=2023-02-22T09:23:20, release=1793, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.openshift.tags=Ceph keepalived, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vcs-type=git, io.openshift.expose-services=, vendor=Red Hat, Inc.)
Dec  6 02:32:19 np0005548731 podman[281926]: 2025-12-06 07:32:19.549939612 +0000 UTC m=+0.064606608 container exec_died 60f905acca99db805341691264f7f91f9f4f43c91aa728d21396595dfca7cf39 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-keepalived-rgw-default-compute-2-cossgt, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Keepalived on RHEL 9, name=keepalived, architecture=x86_64, com.redhat.component=keepalived-container, io.openshift.tags=Ceph keepalived, io.openshift.expose-services=, vendor=Red Hat, Inc., description=keepalived for Ceph, io.buildah.version=1.28.2, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, distribution-scope=public, version=2.2.4, release=1793, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, build-date=2023-02-22T09:23:20, summary=Provides keepalived on RHEL 9 for Ceph., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vcs-type=git)
Dec  6 02:32:19 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:32:19 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:32:19 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:32:19.551 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:32:19 np0005548731 nova_compute[232433]: 2025-12-06 07:32:19.932 232437 DEBUG nova.compute.manager [req-65cc8386-2da1-426e-bff5-d8ca7dfd91a6 req-7a56d91b-f132-4efa-80de-da669810cdf3 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 02b3a680-d1bd-462f-95dc-aa0934c7ceac] Received event network-vif-deleted-7aeffb1d-aa58-4c79-9768-bcd0a7dcce43 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:32:19 np0005548731 nova_compute[232433]: 2025-12-06 07:32:19.933 232437 INFO nova.compute.manager [req-65cc8386-2da1-426e-bff5-d8ca7dfd91a6 req-7a56d91b-f132-4efa-80de-da669810cdf3 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 02b3a680-d1bd-462f-95dc-aa0934c7ceac] Neutron deleted interface 7aeffb1d-aa58-4c79-9768-bcd0a7dcce43; detaching it from the instance and deleting it from the info cache#033[00m
Dec  6 02:32:19 np0005548731 nova_compute[232433]: 2025-12-06 07:32:19.933 232437 DEBUG nova.network.neutron [req-65cc8386-2da1-426e-bff5-d8ca7dfd91a6 req-7a56d91b-f132-4efa-80de-da669810cdf3 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 02b3a680-d1bd-462f-95dc-aa0934c7ceac] Updating instance_info_cache with network_info: [{"id": "99fbd66e-87ac-42e3-8722-34d617ef9bb1", "address": "fa:16:3e:c3:05:bf", "network": {"id": "95ae8bf7-e7e5-4bbf-a4f2-58f1e2b487a5", "bridge": "br-int", "label": "tempest-device-tagging-net1-103242350", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.137", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a5dd5f9ca5747618b386c87a40ede88", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap99fbd66e-87", "ovs_interfaceid": "99fbd66e-87ac-42e3-8722-34d617ef9bb1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}, {"id": "7989285b-96a5-43e8-b98e-c0fe1badbf5e", "address": "fa:16:3e:e2:25:4d", "network": {"id": "95ae8bf7-e7e5-4bbf-a4f2-58f1e2b487a5", "bridge": "br-int", "label": "tempest-device-tagging-net1-103242350", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.231", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a5dd5f9ca5747618b386c87a40ede88", "mtu": 1442, 
"physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7989285b-96", "ovs_interfaceid": "7989285b-96a5-43e8-b98e-c0fe1badbf5e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}, {"id": "ee2de0a7-a321-4f41-868f-457ea7275ded", "address": "fa:16:3e:62:49:cc", "network": {"id": "95ae8bf7-e7e5-4bbf-a4f2-58f1e2b487a5", "bridge": "br-int", "label": "tempest-device-tagging-net1-103242350", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.249", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a5dd5f9ca5747618b386c87a40ede88", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapee2de0a7-a3", "ovs_interfaceid": "ee2de0a7-a321-4f41-868f-457ea7275ded", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "691d36f1-88c4-4e29-b5a7-059d5b66aee3", "address": "fa:16:3e:b2:aa:a0", "network": {"id": "8f455f90-843e-4bc7-94c8-f96455201770", "bridge": "br-int", "label": "tempest-device-tagging-net2-1809570820", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.2.2.100", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "1a5dd5f9ca5747618b386c87a40ede88", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap691d36f1-88", "ovs_interfaceid": "691d36f1-88c4-4e29-b5a7-059d5b66aee3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "a935a2a5-757d-49f8-9924-1f5f12e03325", "address": "fa:16:3e:c4:17:f5", "network": {"id": "8f455f90-843e-4bc7-94c8-f96455201770", "bridge": "br-int", "label": "tempest-device-tagging-net2-1809570820", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.2.2.200", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a5dd5f9ca5747618b386c87a40ede88", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa935a2a5-75", "ovs_interfaceid": "a935a2a5-757d-49f8-9924-1f5f12e03325", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 02:32:19 np0005548731 nova_compute[232433]: 2025-12-06 07:32:19.987 232437 DEBUG nova.compute.manager [req-65cc8386-2da1-426e-bff5-d8ca7dfd91a6 req-7a56d91b-f132-4efa-80de-da669810cdf3 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 02b3a680-d1bd-462f-95dc-aa0934c7ceac] Detach interface failed, port_id=7aeffb1d-aa58-4c79-9768-bcd0a7dcce43, reason: Instance 02b3a680-d1bd-462f-95dc-aa0934c7ceac could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882#033[00m
Dec  6 02:32:19 np0005548731 nova_compute[232433]: 2025-12-06 07:32:19.987 232437 DEBUG nova.compute.manager [req-65cc8386-2da1-426e-bff5-d8ca7dfd91a6 req-7a56d91b-f132-4efa-80de-da669810cdf3 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 02b3a680-d1bd-462f-95dc-aa0934c7ceac] Received event network-vif-deleted-a935a2a5-757d-49f8-9924-1f5f12e03325 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:32:19 np0005548731 nova_compute[232433]: 2025-12-06 07:32:19.988 232437 INFO nova.compute.manager [req-65cc8386-2da1-426e-bff5-d8ca7dfd91a6 req-7a56d91b-f132-4efa-80de-da669810cdf3 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 02b3a680-d1bd-462f-95dc-aa0934c7ceac] Neutron deleted interface a935a2a5-757d-49f8-9924-1f5f12e03325; detaching it from the instance and deleting it from the info cache#033[00m
Dec  6 02:32:19 np0005548731 nova_compute[232433]: 2025-12-06 07:32:19.988 232437 DEBUG nova.network.neutron [req-65cc8386-2da1-426e-bff5-d8ca7dfd91a6 req-7a56d91b-f132-4efa-80de-da669810cdf3 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 02b3a680-d1bd-462f-95dc-aa0934c7ceac] Updating instance_info_cache with network_info: [{"id": "99fbd66e-87ac-42e3-8722-34d617ef9bb1", "address": "fa:16:3e:c3:05:bf", "network": {"id": "95ae8bf7-e7e5-4bbf-a4f2-58f1e2b487a5", "bridge": "br-int", "label": "tempest-device-tagging-net1-103242350", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.137", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a5dd5f9ca5747618b386c87a40ede88", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap99fbd66e-87", "ovs_interfaceid": "99fbd66e-87ac-42e3-8722-34d617ef9bb1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}, {"id": "7989285b-96a5-43e8-b98e-c0fe1badbf5e", "address": "fa:16:3e:e2:25:4d", "network": {"id": "95ae8bf7-e7e5-4bbf-a4f2-58f1e2b487a5", "bridge": "br-int", "label": "tempest-device-tagging-net1-103242350", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.231", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a5dd5f9ca5747618b386c87a40ede88", "mtu": 1442, 
"physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7989285b-96", "ovs_interfaceid": "7989285b-96a5-43e8-b98e-c0fe1badbf5e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}, {"id": "ee2de0a7-a321-4f41-868f-457ea7275ded", "address": "fa:16:3e:62:49:cc", "network": {"id": "95ae8bf7-e7e5-4bbf-a4f2-58f1e2b487a5", "bridge": "br-int", "label": "tempest-device-tagging-net1-103242350", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.249", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a5dd5f9ca5747618b386c87a40ede88", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapee2de0a7-a3", "ovs_interfaceid": "ee2de0a7-a321-4f41-868f-457ea7275ded", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "691d36f1-88c4-4e29-b5a7-059d5b66aee3", "address": "fa:16:3e:b2:aa:a0", "network": {"id": "8f455f90-843e-4bc7-94c8-f96455201770", "bridge": "br-int", "label": "tempest-device-tagging-net2-1809570820", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.2.2.100", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "1a5dd5f9ca5747618b386c87a40ede88", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap691d36f1-88", "ovs_interfaceid": "691d36f1-88c4-4e29-b5a7-059d5b66aee3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 02:32:20 np0005548731 nova_compute[232433]: 2025-12-06 07:32:20.077 232437 DEBUG nova.compute.manager [req-65cc8386-2da1-426e-bff5-d8ca7dfd91a6 req-7a56d91b-f132-4efa-80de-da669810cdf3 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 02b3a680-d1bd-462f-95dc-aa0934c7ceac] Detach interface failed, port_id=a935a2a5-757d-49f8-9924-1f5f12e03325, reason: Instance 02b3a680-d1bd-462f-95dc-aa0934c7ceac could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882#033[00m
Dec  6 02:32:20 np0005548731 nova_compute[232433]: 2025-12-06 07:32:20.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:32:20 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:32:20 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 02:32:20 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:32:20.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 02:32:21 np0005548731 nova_compute[232433]: 2025-12-06 07:32:21.042 232437 DEBUG nova.network.neutron [-] [instance: 02b3a680-d1bd-462f-95dc-aa0934c7ceac] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 02:32:21 np0005548731 nova_compute[232433]: 2025-12-06 07:32:21.059 232437 INFO nova.compute.manager [-] [instance: 02b3a680-d1bd-462f-95dc-aa0934c7ceac] Took 14.03 seconds to deallocate network for instance.#033[00m
Dec  6 02:32:21 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:32:21 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:32:21 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:32:21.553 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:32:21 np0005548731 nova_compute[232433]: 2025-12-06 07:32:21.688 232437 INFO nova.compute.manager [None req-d5dc85e5-23c1-453a-b8a7-ee96a92496a0 8e5d532f4af34acc9e3a7a08819bb089 1a5dd5f9ca5747618b386c87a40ede88 - - default default] [instance: 02b3a680-d1bd-462f-95dc-aa0934c7ceac] Took 0.63 seconds to detach 3 volumes for instance.#033[00m
Dec  6 02:32:21 np0005548731 nova_compute[232433]: 2025-12-06 07:32:21.748 232437 DEBUG oslo_concurrency.lockutils [None req-d5dc85e5-23c1-453a-b8a7-ee96a92496a0 8e5d532f4af34acc9e3a7a08819bb089 1a5dd5f9ca5747618b386c87a40ede88 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:32:21 np0005548731 nova_compute[232433]: 2025-12-06 07:32:21.749 232437 DEBUG oslo_concurrency.lockutils [None req-d5dc85e5-23c1-453a-b8a7-ee96a92496a0 8e5d532f4af34acc9e3a7a08819bb089 1a5dd5f9ca5747618b386c87a40ede88 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:32:21 np0005548731 nova_compute[232433]: 2025-12-06 07:32:21.825 232437 DEBUG oslo_concurrency.processutils [None req-d5dc85e5-23c1-453a-b8a7-ee96a92496a0 8e5d532f4af34acc9e3a7a08819bb089 1a5dd5f9ca5747618b386c87a40ede88 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:32:22 np0005548731 nova_compute[232433]: 2025-12-06 07:32:22.017 232437 DEBUG nova.compute.manager [req-461beac8-2aff-4056-a820-71e6768c6feb req-87ae2dc0-2dc5-40d8-a8e9-9a75af18318d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 02b3a680-d1bd-462f-95dc-aa0934c7ceac] Received event network-vif-deleted-ee2de0a7-a321-4f41-868f-457ea7275ded external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:32:22 np0005548731 nova_compute[232433]: 2025-12-06 07:32:22.018 232437 DEBUG nova.compute.manager [req-461beac8-2aff-4056-a820-71e6768c6feb req-87ae2dc0-2dc5-40d8-a8e9-9a75af18318d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 02b3a680-d1bd-462f-95dc-aa0934c7ceac] Received event network-vif-deleted-691d36f1-88c4-4e29-b5a7-059d5b66aee3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:32:22 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 02:32:22 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/72302301' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 02:32:22 np0005548731 nova_compute[232433]: 2025-12-06 07:32:22.269 232437 DEBUG oslo_concurrency.processutils [None req-d5dc85e5-23c1-453a-b8a7-ee96a92496a0 8e5d532f4af34acc9e3a7a08819bb089 1a5dd5f9ca5747618b386c87a40ede88 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.444s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:32:22 np0005548731 nova_compute[232433]: 2025-12-06 07:32:22.275 232437 DEBUG nova.compute.provider_tree [None req-d5dc85e5-23c1-453a-b8a7-ee96a92496a0 8e5d532f4af34acc9e3a7a08819bb089 1a5dd5f9ca5747618b386c87a40ede88 - - default default] Inventory has not changed in ProviderTree for provider: 6d00757a-082f-486d-ae84-869a2ba2e6e7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  6 02:32:22 np0005548731 nova_compute[232433]: 2025-12-06 07:32:22.293 232437 DEBUG nova.scheduler.client.report [None req-d5dc85e5-23c1-453a-b8a7-ee96a92496a0 8e5d532f4af34acc9e3a7a08819bb089 1a5dd5f9ca5747618b386c87a40ede88 - - default default] Inventory has not changed for provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  6 02:32:22 np0005548731 nova_compute[232433]: 2025-12-06 07:32:22.318 232437 DEBUG oslo_concurrency.lockutils [None req-d5dc85e5-23c1-453a-b8a7-ee96a92496a0 8e5d532f4af34acc9e3a7a08819bb089 1a5dd5f9ca5747618b386c87a40ede88 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.569s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:32:22 np0005548731 nova_compute[232433]: 2025-12-06 07:32:22.355 232437 INFO nova.scheduler.client.report [None req-d5dc85e5-23c1-453a-b8a7-ee96a92496a0 8e5d532f4af34acc9e3a7a08819bb089 1a5dd5f9ca5747618b386c87a40ede88 - - default default] Deleted allocations for instance 02b3a680-d1bd-462f-95dc-aa0934c7ceac#033[00m
Dec  6 02:32:22 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:32:22 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:32:22 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:32:22.393 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:32:22 np0005548731 nova_compute[232433]: 2025-12-06 07:32:22.489 232437 DEBUG oslo_concurrency.lockutils [None req-d5dc85e5-23c1-453a-b8a7-ee96a92496a0 8e5d532f4af34acc9e3a7a08819bb089 1a5dd5f9ca5747618b386c87a40ede88 - - default default] Lock "02b3a680-d1bd-462f-95dc-aa0934c7ceac" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 25.498s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:32:22 np0005548731 nova_compute[232433]: 2025-12-06 07:32:22.762 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:32:22 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:32:22.781 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=46, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:ec:b3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '32:72:e7:89:e0:7d'}, ipsec=False) old=SB_Global(nb_cfg=45) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 02:32:22 np0005548731 nova_compute[232433]: 2025-12-06 07:32:22.781 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:32:22 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:32:22.782 143965 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec  6 02:32:23 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:32:23 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:32:23 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:32:23 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:32:23.558 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:32:24 np0005548731 ceph-mds[82074]: mds.beacon.cephfs.compute-2.tjfgow missed beacon ack from the monitors
Dec  6 02:32:24 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:32:24 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:32:24 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:32:24.397 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:32:24 np0005548731 nova_compute[232433]: 2025-12-06 07:32:24.466 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:32:25 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:32:25 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 02:32:25 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:32:25.560 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 02:32:26 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:32:26 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:32:26 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:32:26.398 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:32:27 np0005548731 nova_compute[232433]: 2025-12-06 07:32:27.178 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:32:27 np0005548731 nova_compute[232433]: 2025-12-06 07:32:27.460 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:32:27 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:32:27 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:32:27 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:32:27.564 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:32:27 np0005548731 nova_compute[232433]: 2025-12-06 07:32:27.764 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:32:28 np0005548731 ceph-mds[82074]: mds.beacon.cephfs.compute-2.tjfgow missed beacon ack from the monitors
Dec  6 02:32:28 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:32:28 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:32:28 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:32:28.400 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:32:28 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:32:29 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).paxos(paxos updating c 4017..4552) lease_timeout -- calling new election
Dec  6 02:32:29 np0005548731 ceph-mon[77458]: log_channel(cluster) log [INF] : mon.compute-2 calling monitor election
Dec  6 02:32:29 np0005548731 ceph-mon[77458]: paxos.1).electionLogic(52) init, last seen epoch 52
Dec  6 02:32:29 np0005548731 ceph-mon[77458]: mon.compute-2@1(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec  6 02:32:29 np0005548731 nova_compute[232433]: 2025-12-06 07:32:29.471 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:32:29 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:32:29 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:32:29 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:32:29.566 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:32:29 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:32:29.785 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=9f96b960-b4f2-40bd-ae99-08121f5e8b78, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '46'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:32:30 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:32:30 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:32:30 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:32:30.402 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:32:31 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:32:31 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:32:31 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:32:31.570 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:32:32 np0005548731 ceph-mds[82074]: mds.beacon.cephfs.compute-2.tjfgow missed beacon ack from the monitors
Dec  6 02:32:32 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:32:32 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:32:32 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:32:32.404 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:32:32 np0005548731 ceph-mon[77458]: mon.compute-2@1(electing) e3 handle_auth_request failed to assign global_id
Dec  6 02:32:32 np0005548731 ceph-mon[77458]: mon.compute-2@1(electing) e3 handle_auth_request failed to assign global_id
Dec  6 02:32:32 np0005548731 ceph-mon[77458]: mon.compute-2@1(electing) e3 handle_auth_request failed to assign global_id
Dec  6 02:32:32 np0005548731 ceph-mon[77458]: mon.compute-2@1(electing) e3 handle_auth_request failed to assign global_id
Dec  6 02:32:32 np0005548731 nova_compute[232433]: 2025-12-06 07:32:32.766 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:32:32 np0005548731 ceph-mon[77458]: mon.compute-2@1(electing) e3 handle_auth_request failed to assign global_id
Dec  6 02:32:32 np0005548731 ceph-mon[77458]: mon.compute-2@1(electing) e3 handle_auth_request failed to assign global_id
Dec  6 02:32:32 np0005548731 ceph-mon[77458]: mon.compute-2@1(electing) e3 handle_auth_request failed to assign global_id
Dec  6 02:32:33 np0005548731 ceph-mon[77458]: mon.compute-2@1(electing) e3 handle_auth_request failed to assign global_id
Dec  6 02:32:33 np0005548731 ceph-mon[77458]: mon.compute-2@1(electing) e3 handle_auth_request failed to assign global_id
Dec  6 02:32:33 np0005548731 ceph-mon[77458]: mon.compute-2@1(electing) e3 handle_auth_request failed to assign global_id
Dec  6 02:32:33 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:32:33 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:32:33 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:32:33.573 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:32:34 np0005548731 ceph-mon[77458]: mon.compute-2@1(electing) e3 handle_auth_request failed to assign global_id
Dec  6 02:32:34 np0005548731 ceph-mon[77458]: mon.compute-2@1(electing) e3 handle_auth_request failed to assign global_id
Dec  6 02:32:34 np0005548731 ceph-mon[77458]: mon.compute-2@1(electing) e3 handle_auth_request failed to assign global_id
Dec  6 02:32:34 np0005548731 ceph-mon[77458]: paxos.1).electionLogic(53) init, last seen epoch 53, mid-election, bumping
Dec  6 02:32:34 np0005548731 ceph-mon[77458]: mon.compute-2@1(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec  6 02:32:34 np0005548731 ceph-mon[77458]: mon.compute-2@1(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec  6 02:32:34 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:32:34 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:32:34 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:32:34.407 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:32:34 np0005548731 nova_compute[232433]: 2025-12-06 07:32:34.474 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:32:35 np0005548731 nova_compute[232433]: 2025-12-06 07:32:35.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._run_image_cache_manager_pass run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:32:35 np0005548731 nova_compute[232433]: 2025-12-06 07:32:35.105 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Acquiring lock "storage-registry-lock" by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:32:35 np0005548731 nova_compute[232433]: 2025-12-06 07:32:35.105 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "storage-registry-lock" acquired by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:32:35 np0005548731 nova_compute[232433]: 2025-12-06 07:32:35.106 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "storage-registry-lock" "released" by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:32:35 np0005548731 nova_compute[232433]: 2025-12-06 07:32:35.106 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Acquiring lock "storage-registry-lock" by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:32:35 np0005548731 nova_compute[232433]: 2025-12-06 07:32:35.106 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "storage-registry-lock" acquired by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:32:35 np0005548731 nova_compute[232433]: 2025-12-06 07:32:35.106 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "storage-registry-lock" "released" by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:32:35 np0005548731 nova_compute[232433]: 2025-12-06 07:32:35.127 232437 DEBUG nova.virt.libvirt.imagecache [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Adding ephemeral_1_0706d66 into backend ephemeral images _store_ephemeral_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:100#033[00m
Dec  6 02:32:35 np0005548731 nova_compute[232433]: 2025-12-06 07:32:35.140 232437 DEBUG nova.virt.libvirt.imagecache [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Verify base images _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:314#033[00m
Dec  6 02:32:35 np0005548731 nova_compute[232433]: 2025-12-06 07:32:35.140 232437 WARNING nova.virt.libvirt.imagecache [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Unknown base file: /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef#033[00m
Dec  6 02:32:35 np0005548731 nova_compute[232433]: 2025-12-06 07:32:35.140 232437 WARNING nova.virt.libvirt.imagecache [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Unknown base file: /var/lib/nova/instances/_base/3061ff2caf019ed81096e4bbefa75aa61b352e98#033[00m
Dec  6 02:32:35 np0005548731 nova_compute[232433]: 2025-12-06 07:32:35.140 232437 WARNING nova.virt.libvirt.imagecache [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Unknown base file: /var/lib/nova/instances/_base/40c8d19f192ebe6ef01b2a3ea96d896752dcd737#033[00m
Dec  6 02:32:35 np0005548731 nova_compute[232433]: 2025-12-06 07:32:35.140 232437 INFO nova.virt.libvirt.imagecache [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Removable base files: /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef /var/lib/nova/instances/_base/3061ff2caf019ed81096e4bbefa75aa61b352e98 /var/lib/nova/instances/_base/40c8d19f192ebe6ef01b2a3ea96d896752dcd737#033[00m
Dec  6 02:32:35 np0005548731 nova_compute[232433]: 2025-12-06 07:32:35.141 232437 INFO nova.virt.libvirt.imagecache [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Base, swap or ephemeral file too young to remove: /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef#033[00m
Dec  6 02:32:35 np0005548731 nova_compute[232433]: 2025-12-06 07:32:35.141 232437 INFO nova.virt.libvirt.imagecache [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Base, swap or ephemeral file too young to remove: /var/lib/nova/instances/_base/3061ff2caf019ed81096e4bbefa75aa61b352e98#033[00m
Dec  6 02:32:35 np0005548731 nova_compute[232433]: 2025-12-06 07:32:35.141 232437 INFO nova.virt.libvirt.imagecache [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Base, swap or ephemeral file too young to remove: /var/lib/nova/instances/_base/40c8d19f192ebe6ef01b2a3ea96d896752dcd737#033[00m
Dec  6 02:32:35 np0005548731 nova_compute[232433]: 2025-12-06 07:32:35.141 232437 DEBUG nova.virt.libvirt.imagecache [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Verification complete _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:350#033[00m
Dec  6 02:32:35 np0005548731 nova_compute[232433]: 2025-12-06 07:32:35.141 232437 DEBUG nova.virt.libvirt.imagecache [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Verify swap images _age_and_verify_swap_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:299#033[00m
Dec  6 02:32:35 np0005548731 nova_compute[232433]: 2025-12-06 07:32:35.142 232437 DEBUG nova.virt.libvirt.imagecache [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Verify ephemeral images _age_and_verify_ephemeral_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:284#033[00m
Dec  6 02:32:35 np0005548731 nova_compute[232433]: 2025-12-06 07:32:35.142 232437 INFO nova.virt.libvirt.imagecache [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Base, swap or ephemeral file too young to remove: /var/lib/nova/instances/_base/ephemeral_1_0706d66#033[00m
Dec  6 02:32:35 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:32:35 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:32:35 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:32:35.575 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:32:35 np0005548731 ceph-mon[77458]: mon.compute-2@1(electing) e3 handle_auth_request failed to assign global_id
Dec  6 02:32:35 np0005548731 ceph-mon[77458]: mon.compute-2@1(electing) e3 handle_auth_request failed to assign global_id
Dec  6 02:32:35 np0005548731 ceph-mon[77458]: mon.compute-2@1(electing) e3 handle_auth_request failed to assign global_id
Dec  6 02:32:35 np0005548731 ceph-mon[77458]: mon.compute-2@1(electing) e3 handle_auth_request failed to assign global_id
Dec  6 02:32:35 np0005548731 ceph-mon[77458]: mon.compute-2@1(electing) e3 handle_auth_request failed to assign global_id
Dec  6 02:32:35 np0005548731 ceph-mon[77458]: mon.compute-2@1(electing) e3 handle_auth_request failed to assign global_id
Dec  6 02:32:36 np0005548731 ceph-mon[77458]: mon.compute-2@1(electing) e3 handle_auth_request failed to assign global_id
Dec  6 02:32:36 np0005548731 ceph-mon[77458]: mon.compute-2@1(electing) e3 handle_auth_request failed to assign global_id
Dec  6 02:32:36 np0005548731 ceph-mon[77458]: mon.compute-2@1(electing) e3 handle_auth_request failed to assign global_id
Dec  6 02:32:36 np0005548731 ceph-mds[82074]: mds.beacon.cephfs.compute-2.tjfgow missed beacon ack from the monitors
Dec  6 02:32:36 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:32:36 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:32:36 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:32:36.409 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:32:37 np0005548731 ceph-mon[77458]: mon.compute-2@1(electing) e3 handle_auth_request failed to assign global_id
Dec  6 02:32:37 np0005548731 ceph-mon[77458]: mon.compute-2@1(electing) e3 handle_auth_request failed to assign global_id
Dec  6 02:32:37 np0005548731 ceph-mon[77458]: mon.compute-2@1(electing) e3 handle_auth_request failed to assign global_id
Dec  6 02:32:37 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:32:37 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:32:37 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:32:37.578 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:32:37 np0005548731 nova_compute[232433]: 2025-12-06 07:32:37.768 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:32:38 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 02:32:38 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec  6 02:32:38 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:32:38 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:32:38 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:32:38 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:32:38.411 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:32:39 np0005548731 ceph-mon[77458]: mon.compute-0 calling monitor election
Dec  6 02:32:39 np0005548731 ceph-mon[77458]: mon.compute-2 calling monitor election
Dec  6 02:32:39 np0005548731 ceph-mon[77458]: mon.compute-1 calling monitor election
Dec  6 02:32:39 np0005548731 ceph-mon[77458]: mon.compute-0 is new leader, mons compute-0,compute-2,compute-1 in quorum (ranks 0,1,2)
Dec  6 02:32:39 np0005548731 ceph-mon[77458]: overall HEALTH_OK
Dec  6 02:32:39 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 02:32:39 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 02:32:39 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  6 02:32:39 np0005548731 nova_compute[232433]: 2025-12-06 07:32:39.477 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:32:39 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:32:39 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:32:39 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:32:39.581 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:32:40 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:32:40 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:32:40 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:32:40.413 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:32:41 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 02:32:41 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  6 02:32:41 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:32:41 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:32:41 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:32:41.583 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:32:42 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:32:42 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:32:42 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:32:42.417 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:32:42 np0005548731 nova_compute[232433]: 2025-12-06 07:32:42.769 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:32:43 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:32:43 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:32:43 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:32:43 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:32:43.587 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:32:44 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:32:44 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:32:44 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:32:44.418 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:32:44 np0005548731 nova_compute[232433]: 2025-12-06 07:32:44.480 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:32:45 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:32:45 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:32:45 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:32:45.589 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:32:46 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:32:46 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:32:46 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:32:46.420 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:32:47 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:32:47 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:32:47 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:32:47.592 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:32:47 np0005548731 nova_compute[232433]: 2025-12-06 07:32:47.772 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:32:47 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 02:32:47 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 02:32:48 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:32:48 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:32:48 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:32:48 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:32:48.422 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:32:49 np0005548731 nova_compute[232433]: 2025-12-06 07:32:49.483 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:32:49 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:32:49 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:32:49 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:32:49.595 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:32:49 np0005548731 podman[282233]: 2025-12-06 07:32:49.899643446 +0000 UTC m=+0.059442163 container health_status ae4fd08cdec31d6ae2a6b91ffff7deb6ca5a8375cb52ee4b685cc6f25796824e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, io.buildah.version=1.41.3)
Dec  6 02:32:49 np0005548731 podman[282231]: 2025-12-06 07:32:49.900336922 +0000 UTC m=+0.059891813 container health_status 26c2ce9bdb83c6db5ccad7f5b2a7cb4e9354ab0235ad2f942ea42c8feda2142b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Dec  6 02:32:49 np0005548731 podman[282232]: 2025-12-06 07:32:49.932476756 +0000 UTC m=+0.092015987 container health_status a118c3becf56cfbcae208065771ebf10a65162bb7b96389971293e534823fb78 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Dec  6 02:32:50 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:32:50 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:32:50 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:32:50.425 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:32:51 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:32:51 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:32:51 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:32:51.597 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:32:52 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:32:52 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 02:32:52 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:32:52.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 02:32:52 np0005548731 nova_compute[232433]: 2025-12-06 07:32:52.773 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:32:53 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:32:53 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:32:53 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:32:53 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:32:53.600 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:32:54 np0005548731 ceph-mon[77458]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #91. Immutable memtables: 0.
Dec  6 02:32:54 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:32:54.213072) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec  6 02:32:54 np0005548731 ceph-mon[77458]: rocksdb: [db/flush_job.cc:856] [default] [JOB 55] Flushing memtable with next log file: 91
Dec  6 02:32:54 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765006374213099, "job": 55, "event": "flush_started", "num_memtables": 1, "num_entries": 764, "num_deletes": 250, "total_data_size": 1504187, "memory_usage": 1535960, "flush_reason": "Manual Compaction"}
Dec  6 02:32:54 np0005548731 ceph-mon[77458]: rocksdb: [db/flush_job.cc:885] [default] [JOB 55] Level-0 flush table #92: started
Dec  6 02:32:54 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765006374221335, "cf_name": "default", "job": 55, "event": "table_file_creation", "file_number": 92, "file_size": 700523, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 47836, "largest_seqno": 48594, "table_properties": {"data_size": 697117, "index_size": 1186, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1157, "raw_key_size": 9389, "raw_average_key_size": 21, "raw_value_size": 689914, "raw_average_value_size": 1582, "num_data_blocks": 51, "num_entries": 436, "num_filter_entries": 436, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765006300, "oldest_key_time": 1765006300, "file_creation_time": 1765006374, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bc01f9a1-fda8-4008-aee1-1fa5ff4371a1", "db_session_id": "CWBL8WMDM4D79BU86E9T", "orig_file_number": 92, "seqno_to_time_mapping": "N/A"}}
Dec  6 02:32:54 np0005548731 ceph-mon[77458]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 55] Flush lasted 8309 microseconds, and 2851 cpu microseconds.
Dec  6 02:32:54 np0005548731 ceph-mon[77458]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  6 02:32:54 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:32:54.221378) [db/flush_job.cc:967] [default] [JOB 55] Level-0 flush table #92: 700523 bytes OK
Dec  6 02:32:54 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:32:54.221398) [db/memtable_list.cc:519] [default] Level-0 commit table #92 started
Dec  6 02:32:54 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:32:54.223058) [db/memtable_list.cc:722] [default] Level-0 commit table #92: memtable #1 done
Dec  6 02:32:54 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:32:54.223080) EVENT_LOG_v1 {"time_micros": 1765006374223073, "job": 55, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec  6 02:32:54 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:32:54.223098) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec  6 02:32:54 np0005548731 ceph-mon[77458]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 55] Try to delete WAL files size 1500101, prev total WAL file size 1500101, number of live WAL files 2.
Dec  6 02:32:54 np0005548731 ceph-mon[77458]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000088.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  6 02:32:54 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:32:54.224098) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740031353235' seq:72057594037927935, type:22 .. '6D6772737461740031373736' seq:0, type:0; will stop at (end)
Dec  6 02:32:54 np0005548731 ceph-mon[77458]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 56] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec  6 02:32:54 np0005548731 ceph-mon[77458]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 55 Base level 0, inputs: [92(684KB)], [90(12MB)]
Dec  6 02:32:54 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765006374224142, "job": 56, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [92], "files_L6": [90], "score": -1, "input_data_size": 13850932, "oldest_snapshot_seqno": -1}
Dec  6 02:32:54 np0005548731 ceph-mon[77458]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 56] Generated table #93: 7836 keys, 10259761 bytes, temperature: kUnknown
Dec  6 02:32:54 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765006374292800, "cf_name": "default", "job": 56, "event": "table_file_creation", "file_number": 93, "file_size": 10259761, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10209969, "index_size": 29072, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 19653, "raw_key_size": 203463, "raw_average_key_size": 25, "raw_value_size": 10072721, "raw_average_value_size": 1285, "num_data_blocks": 1141, "num_entries": 7836, "num_filter_entries": 7836, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765002564, "oldest_key_time": 0, "file_creation_time": 1765006374, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bc01f9a1-fda8-4008-aee1-1fa5ff4371a1", "db_session_id": "CWBL8WMDM4D79BU86E9T", "orig_file_number": 93, "seqno_to_time_mapping": "N/A"}}
Dec  6 02:32:54 np0005548731 ceph-mon[77458]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  6 02:32:54 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:32:54.293055) [db/compaction/compaction_job.cc:1663] [default] [JOB 56] Compacted 1@0 + 1@6 files to L6 => 10259761 bytes
Dec  6 02:32:54 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:32:54.294670) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 201.5 rd, 149.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.7, 12.5 +0.0 blob) out(9.8 +0.0 blob), read-write-amplify(34.4) write-amplify(14.6) OK, records in: 8336, records dropped: 500 output_compression: NoCompression
Dec  6 02:32:54 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:32:54.294714) EVENT_LOG_v1 {"time_micros": 1765006374294682, "job": 56, "event": "compaction_finished", "compaction_time_micros": 68733, "compaction_time_cpu_micros": 35966, "output_level": 6, "num_output_files": 1, "total_output_size": 10259761, "num_input_records": 8336, "num_output_records": 7836, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec  6 02:32:54 np0005548731 ceph-mon[77458]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000092.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  6 02:32:54 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765006374294954, "job": 56, "event": "table_file_deletion", "file_number": 92}
Dec  6 02:32:54 np0005548731 ceph-mon[77458]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000090.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  6 02:32:54 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765006374298142, "job": 56, "event": "table_file_deletion", "file_number": 90}
Dec  6 02:32:54 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:32:54.224009) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 02:32:54 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:32:54.298179) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 02:32:54 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:32:54.298184) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 02:32:54 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:32:54.298185) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 02:32:54 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:32:54.298187) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 02:32:54 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:32:54.298189) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 02:32:54 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:32:54 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:32:54 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:32:54.428 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:32:54 np0005548731 nova_compute[232433]: 2025-12-06 07:32:54.527 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:32:55 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:32:55 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:32:55 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:32:55.603 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:32:56 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:32:56 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:32:56 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:32:56.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:32:57 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:32:57 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:32:57 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:32:57.605 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:32:57 np0005548731 nova_compute[232433]: 2025-12-06 07:32:57.776 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:32:58 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:32:58 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:32:58 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:32:58 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:32:58.433 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:32:59 np0005548731 nova_compute[232433]: 2025-12-06 07:32:59.530 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:32:59 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:32:59 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:32:59 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:32:59.607 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:33:00 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:33:00 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:33:00 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:33:00.434 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:33:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:33:00.874 143965 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:33:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:33:00.874 143965 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:33:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:33:00.874 143965 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:33:01 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:33:01 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:33:01 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:33:01.610 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:33:02 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:33:02 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 02:33:02 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:33:02.436 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 02:33:02 np0005548731 nova_compute[232433]: 2025-12-06 07:33:02.778 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:33:03 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:33:03 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:33:03 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:33:03 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:33:03.613 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:33:04 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:33:04 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:33:04 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:33:04.439 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:33:04 np0005548731 nova_compute[232433]: 2025-12-06 07:33:04.534 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:33:05 np0005548731 nova_compute[232433]: 2025-12-06 07:33:05.142 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:33:05 np0005548731 nova_compute[232433]: 2025-12-06 07:33:05.142 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  6 02:33:05 np0005548731 nova_compute[232433]: 2025-12-06 07:33:05.142 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  6 02:33:05 np0005548731 nova_compute[232433]: 2025-12-06 07:33:05.161 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Dec  6 02:33:05 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:33:05 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:33:05 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:33:05.617 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:33:06 np0005548731 nova_compute[232433]: 2025-12-06 07:33:06.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:33:06 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:33:06 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:33:06 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:33:06.443 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:33:07 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:33:07 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:33:07 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:33:07.620 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:33:07 np0005548731 nova_compute[232433]: 2025-12-06 07:33:07.779 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:33:08 np0005548731 nova_compute[232433]: 2025-12-06 07:33:08.100 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:33:08 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:33:08 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:33:08 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 02:33:08 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:33:08.444 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 02:33:08 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  6 02:33:08 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2456836380' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  6 02:33:08 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  6 02:33:08 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2456836380' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  6 02:33:09 np0005548731 nova_compute[232433]: 2025-12-06 07:33:09.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:33:09 np0005548731 nova_compute[232433]: 2025-12-06 07:33:09.104 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  6 02:33:09 np0005548731 nova_compute[232433]: 2025-12-06 07:33:09.567 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:33:09 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:33:09 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:33:09 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:33:09.623 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:33:10 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:33:10 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:33:10 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:33:10.446 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:33:11 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  6 02:33:11 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4278511991' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  6 02:33:11 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  6 02:33:11 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4278511991' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  6 02:33:11 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:33:11 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:33:11 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:33:11.625 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:33:12 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:33:12 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:33:12 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:33:12.449 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:33:12 np0005548731 nova_compute[232433]: 2025-12-06 07:33:12.781 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:33:13 np0005548731 nova_compute[232433]: 2025-12-06 07:33:13.105 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:33:13 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:33:13 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:33:13 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 02:33:13 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:33:13.628 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 02:33:14 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:33:14 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:33:14 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:33:14.452 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:33:14 np0005548731 nova_compute[232433]: 2025-12-06 07:33:14.571 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:33:15 np0005548731 nova_compute[232433]: 2025-12-06 07:33:15.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:33:15 np0005548731 nova_compute[232433]: 2025-12-06 07:33:15.132 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:33:15 np0005548731 nova_compute[232433]: 2025-12-06 07:33:15.132 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:33:15 np0005548731 nova_compute[232433]: 2025-12-06 07:33:15.132 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:33:15 np0005548731 nova_compute[232433]: 2025-12-06 07:33:15.133 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  6 02:33:15 np0005548731 nova_compute[232433]: 2025-12-06 07:33:15.133 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:33:15 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 02:33:15 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/527801579' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 02:33:15 np0005548731 nova_compute[232433]: 2025-12-06 07:33:15.594 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.461s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:33:15 np0005548731 nova_compute[232433]: 2025-12-06 07:33:15.627 232437 DEBUG oslo_concurrency.lockutils [None req-12f9acdf-0731-4426-bdd6-b0953ab77850 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Acquiring lock "6a50a40c-3b05-4c0e-aa67-1489e203824e" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:33:15 np0005548731 nova_compute[232433]: 2025-12-06 07:33:15.627 232437 DEBUG oslo_concurrency.lockutils [None req-12f9acdf-0731-4426-bdd6-b0953ab77850 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Lock "6a50a40c-3b05-4c0e-aa67-1489e203824e" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:33:15 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:33:15 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:33:15 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:33:15.630 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:33:15 np0005548731 nova_compute[232433]: 2025-12-06 07:33:15.674 232437 DEBUG nova.compute.manager [None req-12f9acdf-0731-4426-bdd6-b0953ab77850 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] [instance: 6a50a40c-3b05-4c0e-aa67-1489e203824e] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Dec  6 02:33:15 np0005548731 nova_compute[232433]: 2025-12-06 07:33:15.768 232437 WARNING nova.virt.libvirt.driver [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  6 02:33:15 np0005548731 nova_compute[232433]: 2025-12-06 07:33:15.770 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4534MB free_disk=20.942855834960938GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  6 02:33:15 np0005548731 nova_compute[232433]: 2025-12-06 07:33:15.770 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:33:15 np0005548731 nova_compute[232433]: 2025-12-06 07:33:15.770 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:33:15 np0005548731 nova_compute[232433]: 2025-12-06 07:33:15.829 232437 DEBUG oslo_concurrency.lockutils [None req-12f9acdf-0731-4426-bdd6-b0953ab77850 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:33:16 np0005548731 nova_compute[232433]: 2025-12-06 07:33:16.227 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Instance 6a50a40c-3b05-4c0e-aa67-1489e203824e has been scheduled to this compute host, the scheduler has made an allocation against this compute node but the instance has yet to start. Skipping heal of allocation: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1692#033[00m
Dec  6 02:33:16 np0005548731 nova_compute[232433]: 2025-12-06 07:33:16.228 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  6 02:33:16 np0005548731 nova_compute[232433]: 2025-12-06 07:33:16.229 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  6 02:33:16 np0005548731 nova_compute[232433]: 2025-12-06 07:33:16.372 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:33:16 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:33:16 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:33:16 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:33:16.455 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:33:16 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 02:33:16 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2460644784' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 02:33:16 np0005548731 nova_compute[232433]: 2025-12-06 07:33:16.792 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.419s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:33:16 np0005548731 nova_compute[232433]: 2025-12-06 07:33:16.797 232437 DEBUG nova.compute.provider_tree [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Inventory has not changed in ProviderTree for provider: 6d00757a-082f-486d-ae84-869a2ba2e6e7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  6 02:33:16 np0005548731 nova_compute[232433]: 2025-12-06 07:33:16.824 232437 DEBUG nova.scheduler.client.report [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Inventory has not changed for provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  6 02:33:16 np0005548731 nova_compute[232433]: 2025-12-06 07:33:16.863 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  6 02:33:16 np0005548731 nova_compute[232433]: 2025-12-06 07:33:16.864 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.094s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:33:16 np0005548731 nova_compute[232433]: 2025-12-06 07:33:16.865 232437 DEBUG oslo_concurrency.lockutils [None req-12f9acdf-0731-4426-bdd6-b0953ab77850 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 1.036s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:33:16 np0005548731 nova_compute[232433]: 2025-12-06 07:33:16.874 232437 DEBUG nova.virt.hardware [None req-12f9acdf-0731-4426-bdd6-b0953ab77850 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Dec  6 02:33:16 np0005548731 nova_compute[232433]: 2025-12-06 07:33:16.875 232437 INFO nova.compute.claims [None req-12f9acdf-0731-4426-bdd6-b0953ab77850 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] [instance: 6a50a40c-3b05-4c0e-aa67-1489e203824e] Claim successful on node compute-2.ctlplane.example.com#033[00m
Dec  6 02:33:17 np0005548731 nova_compute[232433]: 2025-12-06 07:33:17.050 232437 DEBUG oslo_concurrency.processutils [None req-12f9acdf-0731-4426-bdd6-b0953ab77850 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:33:17 np0005548731 nova_compute[232433]: 2025-12-06 07:33:17.105 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:33:17 np0005548731 nova_compute[232433]: 2025-12-06 07:33:17.131 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:33:17 np0005548731 nova_compute[232433]: 2025-12-06 07:33:17.132 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:33:17 np0005548731 nova_compute[232433]: 2025-12-06 07:33:17.132 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Dec  6 02:33:17 np0005548731 nova_compute[232433]: 2025-12-06 07:33:17.159 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Dec  6 02:33:17 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 02:33:17 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1407645777' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 02:33:17 np0005548731 nova_compute[232433]: 2025-12-06 07:33:17.486 232437 DEBUG oslo_concurrency.processutils [None req-12f9acdf-0731-4426-bdd6-b0953ab77850 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.436s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:33:17 np0005548731 nova_compute[232433]: 2025-12-06 07:33:17.491 232437 DEBUG nova.compute.provider_tree [None req-12f9acdf-0731-4426-bdd6-b0953ab77850 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Inventory has not changed in ProviderTree for provider: 6d00757a-082f-486d-ae84-869a2ba2e6e7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  6 02:33:17 np0005548731 nova_compute[232433]: 2025-12-06 07:33:17.527 232437 DEBUG nova.scheduler.client.report [None req-12f9acdf-0731-4426-bdd6-b0953ab77850 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Inventory has not changed for provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  6 02:33:17 np0005548731 nova_compute[232433]: 2025-12-06 07:33:17.578 232437 DEBUG oslo_concurrency.lockutils [None req-12f9acdf-0731-4426-bdd6-b0953ab77850 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.714s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:33:17 np0005548731 nova_compute[232433]: 2025-12-06 07:33:17.579 232437 DEBUG nova.compute.manager [None req-12f9acdf-0731-4426-bdd6-b0953ab77850 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] [instance: 6a50a40c-3b05-4c0e-aa67-1489e203824e] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Dec  6 02:33:17 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:33:17 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:33:17 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:33:17.632 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:33:17 np0005548731 nova_compute[232433]: 2025-12-06 07:33:17.671 232437 DEBUG nova.compute.manager [None req-12f9acdf-0731-4426-bdd6-b0953ab77850 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] [instance: 6a50a40c-3b05-4c0e-aa67-1489e203824e] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Dec  6 02:33:17 np0005548731 nova_compute[232433]: 2025-12-06 07:33:17.672 232437 DEBUG nova.network.neutron [None req-12f9acdf-0731-4426-bdd6-b0953ab77850 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] [instance: 6a50a40c-3b05-4c0e-aa67-1489e203824e] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Dec  6 02:33:17 np0005548731 nova_compute[232433]: 2025-12-06 07:33:17.697 232437 INFO nova.virt.libvirt.driver [None req-12f9acdf-0731-4426-bdd6-b0953ab77850 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] [instance: 6a50a40c-3b05-4c0e-aa67-1489e203824e] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Dec  6 02:33:17 np0005548731 nova_compute[232433]: 2025-12-06 07:33:17.719 232437 DEBUG nova.compute.manager [None req-12f9acdf-0731-4426-bdd6-b0953ab77850 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] [instance: 6a50a40c-3b05-4c0e-aa67-1489e203824e] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Dec  6 02:33:17 np0005548731 nova_compute[232433]: 2025-12-06 07:33:17.766 232437 INFO nova.virt.block_device [None req-12f9acdf-0731-4426-bdd6-b0953ab77850 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] [instance: 6a50a40c-3b05-4c0e-aa67-1489e203824e] Booting with volume d2147d68-ea53-4b01-b660-c00c430f356d at /dev/vda#033[00m
Dec  6 02:33:17 np0005548731 nova_compute[232433]: 2025-12-06 07:33:17.782 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:33:17 np0005548731 nova_compute[232433]: 2025-12-06 07:33:17.970 232437 DEBUG os_brick.utils [None req-12f9acdf-0731-4426-bdd6-b0953ab77850 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.102', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-2.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Dec  6 02:33:17 np0005548731 nova_compute[232433]: 2025-12-06 07:33:17.972 237736 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:33:17 np0005548731 nova_compute[232433]: 2025-12-06 07:33:17.982 237736 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.010s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:33:17 np0005548731 nova_compute[232433]: 2025-12-06 07:33:17.982 237736 DEBUG oslo.privsep.daemon [-] privsep: reply[a0977483-f176-4e3c-8678-7ae65222b742]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:33:17 np0005548731 nova_compute[232433]: 2025-12-06 07:33:17.984 237736 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:33:17 np0005548731 nova_compute[232433]: 2025-12-06 07:33:17.991 237736 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.007s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:33:17 np0005548731 nova_compute[232433]: 2025-12-06 07:33:17.991 237736 DEBUG oslo.privsep.daemon [-] privsep: reply[c2de54e4-c996-4989-8d6c-89f09dff9091]: (4, ('InitiatorName=iqn.1994-05.com.redhat:63778d5959f0', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:33:17 np0005548731 nova_compute[232433]: 2025-12-06 07:33:17.993 237736 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:33:18 np0005548731 nova_compute[232433]: 2025-12-06 07:33:18.001 237736 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:33:18 np0005548731 nova_compute[232433]: 2025-12-06 07:33:18.001 237736 DEBUG oslo.privsep.daemon [-] privsep: reply[99866c88-37c7-4227-a9eb-4754d12429ae]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:33:18 np0005548731 nova_compute[232433]: 2025-12-06 07:33:18.002 237736 DEBUG oslo.privsep.daemon [-] privsep: reply[f0d7402e-7736-4bfc-8fbc-1d232d596b8a]: (4, 'ec2492e2-e0d1-43d3-b74f-744a8272a0e6') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:33:18 np0005548731 nova_compute[232433]: 2025-12-06 07:33:18.003 232437 DEBUG oslo_concurrency.processutils [None req-12f9acdf-0731-4426-bdd6-b0953ab77850 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:33:18 np0005548731 nova_compute[232433]: 2025-12-06 07:33:18.026 232437 DEBUG oslo_concurrency.processutils [None req-12f9acdf-0731-4426-bdd6-b0953ab77850 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] CMD "nvme version" returned: 0 in 0.023s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:33:18 np0005548731 nova_compute[232433]: 2025-12-06 07:33:18.029 232437 DEBUG os_brick.initiator.connectors.lightos [None req-12f9acdf-0731-4426-bdd6-b0953ab77850 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Dec  6 02:33:18 np0005548731 nova_compute[232433]: 2025-12-06 07:33:18.029 232437 DEBUG os_brick.initiator.connectors.lightos [None req-12f9acdf-0731-4426-bdd6-b0953ab77850 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Dec  6 02:33:18 np0005548731 nova_compute[232433]: 2025-12-06 07:33:18.029 232437 DEBUG os_brick.initiator.connectors.lightos [None req-12f9acdf-0731-4426-bdd6-b0953ab77850 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:bf3e0a14-a5f8-4123-aa26-e7cad37b879a dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Dec  6 02:33:18 np0005548731 nova_compute[232433]: 2025-12-06 07:33:18.030 232437 DEBUG os_brick.utils [None req-12f9acdf-0731-4426-bdd6-b0953ab77850 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] <== get_connector_properties: return (58ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.102', 'host': 'compute-2.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:63778d5959f0', 'do_local_attach': False, 'nvme_hostid': 'bf3e0a14-a5f8-4123-aa26-e7cad37b879a', 'system uuid': 'ec2492e2-e0d1-43d3-b74f-744a8272a0e6', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:bf3e0a14-a5f8-4123-aa26-e7cad37b879a', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Dec  6 02:33:18 np0005548731 nova_compute[232433]: 2025-12-06 07:33:18.030 232437 DEBUG nova.virt.block_device [None req-12f9acdf-0731-4426-bdd6-b0953ab77850 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] [instance: 6a50a40c-3b05-4c0e-aa67-1489e203824e] Updating existing volume attachment record: 177787eb-317a-4b17-abb9-a2eae2295c31 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Dec  6 02:33:18 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:33:18 np0005548731 nova_compute[232433]: 2025-12-06 07:33:18.453 232437 DEBUG nova.policy [None req-12f9acdf-0731-4426-bdd6-b0953ab77850 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '605b5481e0c944048e6a67046c30d693', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '833f4cf9f5a64b2ab94c3bf330353a31', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Dec  6 02:33:18 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:33:18 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:33:18 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:33:18.458 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:33:18 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  6 02:33:18 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/107927179' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  6 02:33:19 np0005548731 nova_compute[232433]: 2025-12-06 07:33:19.132 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:33:19 np0005548731 nova_compute[232433]: 2025-12-06 07:33:19.573 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:33:19 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:33:19 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:33:19 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:33:19.634 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:33:19 np0005548731 nova_compute[232433]: 2025-12-06 07:33:19.985 232437 DEBUG nova.compute.manager [None req-12f9acdf-0731-4426-bdd6-b0953ab77850 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] [instance: 6a50a40c-3b05-4c0e-aa67-1489e203824e] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Dec  6 02:33:19 np0005548731 nova_compute[232433]: 2025-12-06 07:33:19.987 232437 DEBUG nova.virt.libvirt.driver [None req-12f9acdf-0731-4426-bdd6-b0953ab77850 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] [instance: 6a50a40c-3b05-4c0e-aa67-1489e203824e] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Dec  6 02:33:19 np0005548731 nova_compute[232433]: 2025-12-06 07:33:19.987 232437 INFO nova.virt.libvirt.driver [None req-12f9acdf-0731-4426-bdd6-b0953ab77850 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] [instance: 6a50a40c-3b05-4c0e-aa67-1489e203824e] Creating image(s)#033[00m
Dec  6 02:33:19 np0005548731 nova_compute[232433]: 2025-12-06 07:33:19.988 232437 DEBUG nova.virt.libvirt.driver [None req-12f9acdf-0731-4426-bdd6-b0953ab77850 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] [instance: 6a50a40c-3b05-4c0e-aa67-1489e203824e] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859#033[00m
Dec  6 02:33:19 np0005548731 nova_compute[232433]: 2025-12-06 07:33:19.988 232437 DEBUG nova.virt.libvirt.driver [None req-12f9acdf-0731-4426-bdd6-b0953ab77850 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] [instance: 6a50a40c-3b05-4c0e-aa67-1489e203824e] Ensure instance console log exists: /var/lib/nova/instances/6a50a40c-3b05-4c0e-aa67-1489e203824e/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec  6 02:33:19 np0005548731 nova_compute[232433]: 2025-12-06 07:33:19.989 232437 DEBUG oslo_concurrency.lockutils [None req-12f9acdf-0731-4426-bdd6-b0953ab77850 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:33:19 np0005548731 nova_compute[232433]: 2025-12-06 07:33:19.989 232437 DEBUG oslo_concurrency.lockutils [None req-12f9acdf-0731-4426-bdd6-b0953ab77850 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:33:19 np0005548731 nova_compute[232433]: 2025-12-06 07:33:19.989 232437 DEBUG oslo_concurrency.lockutils [None req-12f9acdf-0731-4426-bdd6-b0953ab77850 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:33:20 np0005548731 nova_compute[232433]: 2025-12-06 07:33:20.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:33:20 np0005548731 nova_compute[232433]: 2025-12-06 07:33:20.211 232437 DEBUG nova.network.neutron [None req-12f9acdf-0731-4426-bdd6-b0953ab77850 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] [instance: 6a50a40c-3b05-4c0e-aa67-1489e203824e] Successfully created port: ad544c9a-af55-4a7f-babe-68f6d1b23e25 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Dec  6 02:33:20 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:33:20 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:33:20 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:33:20.460 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:33:20 np0005548731 podman[282482]: 2025-12-06 07:33:20.897496547 +0000 UTC m=+0.057028833 container health_status 26c2ce9bdb83c6db5ccad7f5b2a7cb4e9354ab0235ad2f942ea42c8feda2142b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec  6 02:33:20 np0005548731 podman[282484]: 2025-12-06 07:33:20.898363068 +0000 UTC m=+0.052204555 container health_status ae4fd08cdec31d6ae2a6b91ffff7deb6ca5a8375cb52ee4b685cc6f25796824e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_managed=true, config_id=multipathd, container_name=multipathd, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  6 02:33:20 np0005548731 podman[282483]: 2025-12-06 07:33:20.960522685 +0000 UTC m=+0.117495938 container health_status a118c3becf56cfbcae208065771ebf10a65162bb7b96389971293e534823fb78 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec  6 02:33:21 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:33:21 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:33:21 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:33:21.636 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:33:22 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:33:22 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:33:22 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:33:22.464 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:33:22 np0005548731 nova_compute[232433]: 2025-12-06 07:33:22.784 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:33:22 np0005548731 nova_compute[232433]: 2025-12-06 07:33:22.832 232437 DEBUG nova.network.neutron [None req-12f9acdf-0731-4426-bdd6-b0953ab77850 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] [instance: 6a50a40c-3b05-4c0e-aa67-1489e203824e] Successfully updated port: ad544c9a-af55-4a7f-babe-68f6d1b23e25 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Dec  6 02:33:22 np0005548731 nova_compute[232433]: 2025-12-06 07:33:22.868 232437 DEBUG oslo_concurrency.lockutils [None req-12f9acdf-0731-4426-bdd6-b0953ab77850 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Acquiring lock "refresh_cache-6a50a40c-3b05-4c0e-aa67-1489e203824e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 02:33:22 np0005548731 nova_compute[232433]: 2025-12-06 07:33:22.868 232437 DEBUG oslo_concurrency.lockutils [None req-12f9acdf-0731-4426-bdd6-b0953ab77850 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Acquired lock "refresh_cache-6a50a40c-3b05-4c0e-aa67-1489e203824e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 02:33:22 np0005548731 nova_compute[232433]: 2025-12-06 07:33:22.868 232437 DEBUG nova.network.neutron [None req-12f9acdf-0731-4426-bdd6-b0953ab77850 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] [instance: 6a50a40c-3b05-4c0e-aa67-1489e203824e] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec  6 02:33:23 np0005548731 nova_compute[232433]: 2025-12-06 07:33:23.054 232437 DEBUG nova.compute.manager [req-658d048f-d949-463c-bdbc-50738f8f81ed req-4b01e1ef-c703-4dc6-9591-7f2c2846cc41 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 6a50a40c-3b05-4c0e-aa67-1489e203824e] Received event network-changed-ad544c9a-af55-4a7f-babe-68f6d1b23e25 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:33:23 np0005548731 nova_compute[232433]: 2025-12-06 07:33:23.054 232437 DEBUG nova.compute.manager [req-658d048f-d949-463c-bdbc-50738f8f81ed req-4b01e1ef-c703-4dc6-9591-7f2c2846cc41 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 6a50a40c-3b05-4c0e-aa67-1489e203824e] Refreshing instance network info cache due to event network-changed-ad544c9a-af55-4a7f-babe-68f6d1b23e25. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  6 02:33:23 np0005548731 nova_compute[232433]: 2025-12-06 07:33:23.054 232437 DEBUG oslo_concurrency.lockutils [req-658d048f-d949-463c-bdbc-50738f8f81ed req-4b01e1ef-c703-4dc6-9591-7f2c2846cc41 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-6a50a40c-3b05-4c0e-aa67-1489e203824e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 02:33:23 np0005548731 nova_compute[232433]: 2025-12-06 07:33:23.236 232437 DEBUG nova.network.neutron [None req-12f9acdf-0731-4426-bdd6-b0953ab77850 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] [instance: 6a50a40c-3b05-4c0e-aa67-1489e203824e] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec  6 02:33:23 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:33:23 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:33:23 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 02:33:23 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:33:23.639 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 02:33:24 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:33:24.176 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=47, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:ec:b3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '32:72:e7:89:e0:7d'}, ipsec=False) old=SB_Global(nb_cfg=46) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 02:33:24 np0005548731 nova_compute[232433]: 2025-12-06 07:33:24.177 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:33:24 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:33:24.179 143965 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec  6 02:33:24 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:33:24 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:33:24 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:33:24.466 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:33:24 np0005548731 nova_compute[232433]: 2025-12-06 07:33:24.575 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:33:24 np0005548731 nova_compute[232433]: 2025-12-06 07:33:24.843 232437 DEBUG nova.network.neutron [None req-12f9acdf-0731-4426-bdd6-b0953ab77850 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] [instance: 6a50a40c-3b05-4c0e-aa67-1489e203824e] Updating instance_info_cache with network_info: [{"id": "ad544c9a-af55-4a7f-babe-68f6d1b23e25", "address": "fa:16:3e:1f:fb:86", "network": {"id": "5bb49e8a-b939-4c79-851c-62c634be0272", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-50482264-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "833f4cf9f5a64b2ab94c3bf330353a31", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapad544c9a-af", "ovs_interfaceid": "ad544c9a-af55-4a7f-babe-68f6d1b23e25", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 02:33:24 np0005548731 nova_compute[232433]: 2025-12-06 07:33:24.895 232437 DEBUG oslo_concurrency.lockutils [None req-12f9acdf-0731-4426-bdd6-b0953ab77850 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Releasing lock "refresh_cache-6a50a40c-3b05-4c0e-aa67-1489e203824e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 02:33:24 np0005548731 nova_compute[232433]: 2025-12-06 07:33:24.896 232437 DEBUG nova.compute.manager [None req-12f9acdf-0731-4426-bdd6-b0953ab77850 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] [instance: 6a50a40c-3b05-4c0e-aa67-1489e203824e] Instance network_info: |[{"id": "ad544c9a-af55-4a7f-babe-68f6d1b23e25", "address": "fa:16:3e:1f:fb:86", "network": {"id": "5bb49e8a-b939-4c79-851c-62c634be0272", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-50482264-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "833f4cf9f5a64b2ab94c3bf330353a31", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapad544c9a-af", "ovs_interfaceid": "ad544c9a-af55-4a7f-babe-68f6d1b23e25", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Dec  6 02:33:24 np0005548731 nova_compute[232433]: 2025-12-06 07:33:24.896 232437 DEBUG oslo_concurrency.lockutils [req-658d048f-d949-463c-bdbc-50738f8f81ed req-4b01e1ef-c703-4dc6-9591-7f2c2846cc41 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-6a50a40c-3b05-4c0e-aa67-1489e203824e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 02:33:24 np0005548731 nova_compute[232433]: 2025-12-06 07:33:24.896 232437 DEBUG nova.network.neutron [req-658d048f-d949-463c-bdbc-50738f8f81ed req-4b01e1ef-c703-4dc6-9591-7f2c2846cc41 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 6a50a40c-3b05-4c0e-aa67-1489e203824e] Refreshing network info cache for port ad544c9a-af55-4a7f-babe-68f6d1b23e25 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  6 02:33:24 np0005548731 nova_compute[232433]: 2025-12-06 07:33:24.901 232437 DEBUG nova.virt.libvirt.driver [None req-12f9acdf-0731-4426-bdd6-b0953ab77850 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] [instance: 6a50a40c-3b05-4c0e-aa67-1489e203824e] Start _get_guest_xml network_info=[{"id": "ad544c9a-af55-4a7f-babe-68f6d1b23e25", "address": "fa:16:3e:1f:fb:86", "network": {"id": "5bb49e8a-b939-4c79-851c-62c634be0272", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-50482264-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "833f4cf9f5a64b2ab94c3bf330353a31", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapad544c9a-af", "ovs_interfaceid": "ad544c9a-af55-4a7f-babe-68f6d1b23e25", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'guest_format': None, 'device_type': 'disk', 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-d2147d68-ea53-4b01-b660-c00c430f356d', 'hosts': ['192.168.122.100', '192.168.122.102', '192.168.122.101'], 'ports': ['6789', '6789', '6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': 'd2147d68-ea53-4b01-b660-c00c430f356d', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': '6a50a40c-3b05-4c0e-aa67-1489e203824e', 'attached_at': '', 'detached_at': '', 'volume_id': 'd2147d68-ea53-4b01-b660-c00c430f356d', 'serial': 'd2147d68-ea53-4b01-b660-c00c430f356d', 'multiattach': True}, 'disk_bus': 'virtio', 'boot_index': 0, 'delete_on_termination': False, 'mount_device': '/dev/vda', 'attachment_id': '177787eb-317a-4b17-abb9-a2eae2295c31', 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec  6 02:33:24 np0005548731 nova_compute[232433]: 2025-12-06 07:33:24.905 232437 WARNING nova.virt.libvirt.driver [None req-12f9acdf-0731-4426-bdd6-b0953ab77850 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  6 02:33:24 np0005548731 nova_compute[232433]: 2025-12-06 07:33:24.916 232437 DEBUG nova.virt.libvirt.host [None req-12f9acdf-0731-4426-bdd6-b0953ab77850 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec  6 02:33:24 np0005548731 nova_compute[232433]: 2025-12-06 07:33:24.917 232437 DEBUG nova.virt.libvirt.host [None req-12f9acdf-0731-4426-bdd6-b0953ab77850 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec  6 02:33:24 np0005548731 nova_compute[232433]: 2025-12-06 07:33:24.926 232437 DEBUG nova.virt.libvirt.host [None req-12f9acdf-0731-4426-bdd6-b0953ab77850 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec  6 02:33:24 np0005548731 nova_compute[232433]: 2025-12-06 07:33:24.926 232437 DEBUG nova.virt.libvirt.host [None req-12f9acdf-0731-4426-bdd6-b0953ab77850 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec  6 02:33:24 np0005548731 nova_compute[232433]: 2025-12-06 07:33:24.927 232437 DEBUG nova.virt.libvirt.driver [None req-12f9acdf-0731-4426-bdd6-b0953ab77850 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec  6 02:33:24 np0005548731 nova_compute[232433]: 2025-12-06 07:33:24.927 232437 DEBUG nova.virt.hardware [None req-12f9acdf-0731-4426-bdd6-b0953ab77850 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-06T06:56:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='25848a18-11d9-4f11-80b5-5d005675c76d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec  6 02:33:24 np0005548731 nova_compute[232433]: 2025-12-06 07:33:24.928 232437 DEBUG nova.virt.hardware [None req-12f9acdf-0731-4426-bdd6-b0953ab77850 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec  6 02:33:24 np0005548731 nova_compute[232433]: 2025-12-06 07:33:24.928 232437 DEBUG nova.virt.hardware [None req-12f9acdf-0731-4426-bdd6-b0953ab77850 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec  6 02:33:24 np0005548731 nova_compute[232433]: 2025-12-06 07:33:24.928 232437 DEBUG nova.virt.hardware [None req-12f9acdf-0731-4426-bdd6-b0953ab77850 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec  6 02:33:24 np0005548731 nova_compute[232433]: 2025-12-06 07:33:24.928 232437 DEBUG nova.virt.hardware [None req-12f9acdf-0731-4426-bdd6-b0953ab77850 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec  6 02:33:24 np0005548731 nova_compute[232433]: 2025-12-06 07:33:24.928 232437 DEBUG nova.virt.hardware [None req-12f9acdf-0731-4426-bdd6-b0953ab77850 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec  6 02:33:24 np0005548731 nova_compute[232433]: 2025-12-06 07:33:24.928 232437 DEBUG nova.virt.hardware [None req-12f9acdf-0731-4426-bdd6-b0953ab77850 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec  6 02:33:24 np0005548731 nova_compute[232433]: 2025-12-06 07:33:24.929 232437 DEBUG nova.virt.hardware [None req-12f9acdf-0731-4426-bdd6-b0953ab77850 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec  6 02:33:24 np0005548731 nova_compute[232433]: 2025-12-06 07:33:24.929 232437 DEBUG nova.virt.hardware [None req-12f9acdf-0731-4426-bdd6-b0953ab77850 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec  6 02:33:24 np0005548731 nova_compute[232433]: 2025-12-06 07:33:24.929 232437 DEBUG nova.virt.hardware [None req-12f9acdf-0731-4426-bdd6-b0953ab77850 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec  6 02:33:24 np0005548731 nova_compute[232433]: 2025-12-06 07:33:24.929 232437 DEBUG nova.virt.hardware [None req-12f9acdf-0731-4426-bdd6-b0953ab77850 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec  6 02:33:25 np0005548731 nova_compute[232433]: 2025-12-06 07:33:25.014 232437 DEBUG nova.storage.rbd_utils [None req-12f9acdf-0731-4426-bdd6-b0953ab77850 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] rbd image 6a50a40c-3b05-4c0e-aa67-1489e203824e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:33:25 np0005548731 nova_compute[232433]: 2025-12-06 07:33:25.019 232437 DEBUG oslo_concurrency.processutils [None req-12f9acdf-0731-4426-bdd6-b0953ab77850 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:33:25 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  6 02:33:25 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3403146162' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  6 02:33:25 np0005548731 nova_compute[232433]: 2025-12-06 07:33:25.480 232437 DEBUG oslo_concurrency.processutils [None req-12f9acdf-0731-4426-bdd6-b0953ab77850 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.461s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:33:25 np0005548731 nova_compute[232433]: 2025-12-06 07:33:25.517 232437 DEBUG nova.virt.libvirt.vif [None req-12f9acdf-0731-4426-bdd6-b0953ab77850 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T07:33:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-AttachVolumeMultiAttachTest-server-386684313',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-attachvolumemultiattachtest-server-386684313',id=120,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='833f4cf9f5a64b2ab94c3bf330353a31',ramdisk_id='',reservation_id='r-b7297kax',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-AttachVolumeMultiAttachTest-690984293',owner_user_name='tempest-AttachVolumeMultiAttachTest-690984293-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:33:17Z,user_data=None,user_id='605b5481e0c944048e6a67046c30d693',uuid=6a50a40c-3b05-4c0e-aa67-1489e203824e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ad544c9a-af55-4a7f-babe-68f6d1b23e25", "address": "fa:16:3e:1f:fb:86", "network": {"id": "5bb49e8a-b939-4c79-851c-62c634be0272", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-50482264-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "833f4cf9f5a64b2ab94c3bf330353a31", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapad544c9a-af", "ovs_interfaceid": "ad544c9a-af55-4a7f-babe-68f6d1b23e25", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec  6 02:33:25 np0005548731 nova_compute[232433]: 2025-12-06 07:33:25.517 232437 DEBUG nova.network.os_vif_util [None req-12f9acdf-0731-4426-bdd6-b0953ab77850 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Converting VIF {"id": "ad544c9a-af55-4a7f-babe-68f6d1b23e25", "address": "fa:16:3e:1f:fb:86", "network": {"id": "5bb49e8a-b939-4c79-851c-62c634be0272", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-50482264-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "833f4cf9f5a64b2ab94c3bf330353a31", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapad544c9a-af", "ovs_interfaceid": "ad544c9a-af55-4a7f-babe-68f6d1b23e25", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 02:33:25 np0005548731 nova_compute[232433]: 2025-12-06 07:33:25.518 232437 DEBUG nova.network.os_vif_util [None req-12f9acdf-0731-4426-bdd6-b0953ab77850 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1f:fb:86,bridge_name='br-int',has_traffic_filtering=True,id=ad544c9a-af55-4a7f-babe-68f6d1b23e25,network=Network(5bb49e8a-b939-4c79-851c-62c634be0272),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapad544c9a-af') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 02:33:25 np0005548731 nova_compute[232433]: 2025-12-06 07:33:25.519 232437 DEBUG nova.objects.instance [None req-12f9acdf-0731-4426-bdd6-b0953ab77850 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Lazy-loading 'pci_devices' on Instance uuid 6a50a40c-3b05-4c0e-aa67-1489e203824e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:33:25 np0005548731 nova_compute[232433]: 2025-12-06 07:33:25.536 232437 DEBUG nova.virt.libvirt.driver [None req-12f9acdf-0731-4426-bdd6-b0953ab77850 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] [instance: 6a50a40c-3b05-4c0e-aa67-1489e203824e] End _get_guest_xml xml=<domain type="kvm">
Dec  6 02:33:25 np0005548731 nova_compute[232433]:  <uuid>6a50a40c-3b05-4c0e-aa67-1489e203824e</uuid>
Dec  6 02:33:25 np0005548731 nova_compute[232433]:  <name>instance-00000078</name>
Dec  6 02:33:25 np0005548731 nova_compute[232433]:  <memory>131072</memory>
Dec  6 02:33:25 np0005548731 nova_compute[232433]:  <vcpu>1</vcpu>
Dec  6 02:33:25 np0005548731 nova_compute[232433]:  <metadata>
Dec  6 02:33:25 np0005548731 nova_compute[232433]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec  6 02:33:25 np0005548731 nova_compute[232433]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec  6 02:33:25 np0005548731 nova_compute[232433]:      <nova:name>tempest-AttachVolumeMultiAttachTest-server-386684313</nova:name>
Dec  6 02:33:25 np0005548731 nova_compute[232433]:      <nova:creationTime>2025-12-06 07:33:24</nova:creationTime>
Dec  6 02:33:25 np0005548731 nova_compute[232433]:      <nova:flavor name="m1.nano">
Dec  6 02:33:25 np0005548731 nova_compute[232433]:        <nova:memory>128</nova:memory>
Dec  6 02:33:25 np0005548731 nova_compute[232433]:        <nova:disk>1</nova:disk>
Dec  6 02:33:25 np0005548731 nova_compute[232433]:        <nova:swap>0</nova:swap>
Dec  6 02:33:25 np0005548731 nova_compute[232433]:        <nova:ephemeral>0</nova:ephemeral>
Dec  6 02:33:25 np0005548731 nova_compute[232433]:        <nova:vcpus>1</nova:vcpus>
Dec  6 02:33:25 np0005548731 nova_compute[232433]:      </nova:flavor>
Dec  6 02:33:25 np0005548731 nova_compute[232433]:      <nova:owner>
Dec  6 02:33:25 np0005548731 nova_compute[232433]:        <nova:user uuid="605b5481e0c944048e6a67046c30d693">tempest-AttachVolumeMultiAttachTest-690984293-project-member</nova:user>
Dec  6 02:33:25 np0005548731 nova_compute[232433]:        <nova:project uuid="833f4cf9f5a64b2ab94c3bf330353a31">tempest-AttachVolumeMultiAttachTest-690984293</nova:project>
Dec  6 02:33:25 np0005548731 nova_compute[232433]:      </nova:owner>
Dec  6 02:33:25 np0005548731 nova_compute[232433]:      <nova:ports>
Dec  6 02:33:25 np0005548731 nova_compute[232433]:        <nova:port uuid="ad544c9a-af55-4a7f-babe-68f6d1b23e25">
Dec  6 02:33:25 np0005548731 nova_compute[232433]:          <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Dec  6 02:33:25 np0005548731 nova_compute[232433]:        </nova:port>
Dec  6 02:33:25 np0005548731 nova_compute[232433]:      </nova:ports>
Dec  6 02:33:25 np0005548731 nova_compute[232433]:    </nova:instance>
Dec  6 02:33:25 np0005548731 nova_compute[232433]:  </metadata>
Dec  6 02:33:25 np0005548731 nova_compute[232433]:  <sysinfo type="smbios">
Dec  6 02:33:25 np0005548731 nova_compute[232433]:    <system>
Dec  6 02:33:25 np0005548731 nova_compute[232433]:      <entry name="manufacturer">RDO</entry>
Dec  6 02:33:25 np0005548731 nova_compute[232433]:      <entry name="product">OpenStack Compute</entry>
Dec  6 02:33:25 np0005548731 nova_compute[232433]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec  6 02:33:25 np0005548731 nova_compute[232433]:      <entry name="serial">6a50a40c-3b05-4c0e-aa67-1489e203824e</entry>
Dec  6 02:33:25 np0005548731 nova_compute[232433]:      <entry name="uuid">6a50a40c-3b05-4c0e-aa67-1489e203824e</entry>
Dec  6 02:33:25 np0005548731 nova_compute[232433]:      <entry name="family">Virtual Machine</entry>
Dec  6 02:33:25 np0005548731 nova_compute[232433]:    </system>
Dec  6 02:33:25 np0005548731 nova_compute[232433]:  </sysinfo>
Dec  6 02:33:25 np0005548731 nova_compute[232433]:  <os>
Dec  6 02:33:25 np0005548731 nova_compute[232433]:    <type arch="x86_64" machine="q35">hvm</type>
Dec  6 02:33:25 np0005548731 nova_compute[232433]:    <boot dev="hd"/>
Dec  6 02:33:25 np0005548731 nova_compute[232433]:    <smbios mode="sysinfo"/>
Dec  6 02:33:25 np0005548731 nova_compute[232433]:  </os>
Dec  6 02:33:25 np0005548731 nova_compute[232433]:  <features>
Dec  6 02:33:25 np0005548731 nova_compute[232433]:    <acpi/>
Dec  6 02:33:25 np0005548731 nova_compute[232433]:    <apic/>
Dec  6 02:33:25 np0005548731 nova_compute[232433]:    <vmcoreinfo/>
Dec  6 02:33:25 np0005548731 nova_compute[232433]:  </features>
Dec  6 02:33:25 np0005548731 nova_compute[232433]:  <clock offset="utc">
Dec  6 02:33:25 np0005548731 nova_compute[232433]:    <timer name="pit" tickpolicy="delay"/>
Dec  6 02:33:25 np0005548731 nova_compute[232433]:    <timer name="rtc" tickpolicy="catchup"/>
Dec  6 02:33:25 np0005548731 nova_compute[232433]:    <timer name="hpet" present="no"/>
Dec  6 02:33:25 np0005548731 nova_compute[232433]:  </clock>
Dec  6 02:33:25 np0005548731 nova_compute[232433]:  <cpu mode="custom" match="exact">
Dec  6 02:33:25 np0005548731 nova_compute[232433]:    <model>Nehalem</model>
Dec  6 02:33:25 np0005548731 nova_compute[232433]:    <topology sockets="1" cores="1" threads="1"/>
Dec  6 02:33:25 np0005548731 nova_compute[232433]:  </cpu>
Dec  6 02:33:25 np0005548731 nova_compute[232433]:  <devices>
Dec  6 02:33:25 np0005548731 nova_compute[232433]:    <disk type="network" device="cdrom">
Dec  6 02:33:25 np0005548731 nova_compute[232433]:      <driver type="raw" cache="none"/>
Dec  6 02:33:25 np0005548731 nova_compute[232433]:      <source protocol="rbd" name="vms/6a50a40c-3b05-4c0e-aa67-1489e203824e_disk.config">
Dec  6 02:33:25 np0005548731 nova_compute[232433]:        <host name="192.168.122.100" port="6789"/>
Dec  6 02:33:25 np0005548731 nova_compute[232433]:        <host name="192.168.122.102" port="6789"/>
Dec  6 02:33:25 np0005548731 nova_compute[232433]:        <host name="192.168.122.101" port="6789"/>
Dec  6 02:33:25 np0005548731 nova_compute[232433]:      </source>
Dec  6 02:33:25 np0005548731 nova_compute[232433]:      <auth username="openstack">
Dec  6 02:33:25 np0005548731 nova_compute[232433]:        <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec  6 02:33:25 np0005548731 nova_compute[232433]:      </auth>
Dec  6 02:33:25 np0005548731 nova_compute[232433]:      <target dev="sda" bus="sata"/>
Dec  6 02:33:25 np0005548731 nova_compute[232433]:    </disk>
Dec  6 02:33:25 np0005548731 nova_compute[232433]:    <disk type="network" device="disk">
Dec  6 02:33:25 np0005548731 nova_compute[232433]:      <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Dec  6 02:33:25 np0005548731 nova_compute[232433]:      <source protocol="rbd" name="volumes/volume-d2147d68-ea53-4b01-b660-c00c430f356d">
Dec  6 02:33:25 np0005548731 nova_compute[232433]:        <host name="192.168.122.100" port="6789"/>
Dec  6 02:33:25 np0005548731 nova_compute[232433]:        <host name="192.168.122.102" port="6789"/>
Dec  6 02:33:25 np0005548731 nova_compute[232433]:        <host name="192.168.122.101" port="6789"/>
Dec  6 02:33:25 np0005548731 nova_compute[232433]:      </source>
Dec  6 02:33:25 np0005548731 nova_compute[232433]:      <auth username="openstack">
Dec  6 02:33:25 np0005548731 nova_compute[232433]:        <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec  6 02:33:25 np0005548731 nova_compute[232433]:      </auth>
Dec  6 02:33:25 np0005548731 nova_compute[232433]:      <target dev="vda" bus="virtio"/>
Dec  6 02:33:25 np0005548731 nova_compute[232433]:      <serial>d2147d68-ea53-4b01-b660-c00c430f356d</serial>
Dec  6 02:33:25 np0005548731 nova_compute[232433]:      <shareable/>
Dec  6 02:33:25 np0005548731 nova_compute[232433]:    </disk>
Dec  6 02:33:25 np0005548731 nova_compute[232433]:    <interface type="ethernet">
Dec  6 02:33:25 np0005548731 nova_compute[232433]:      <mac address="fa:16:3e:1f:fb:86"/>
Dec  6 02:33:25 np0005548731 nova_compute[232433]:      <model type="virtio"/>
Dec  6 02:33:25 np0005548731 nova_compute[232433]:      <driver name="vhost" rx_queue_size="512"/>
Dec  6 02:33:25 np0005548731 nova_compute[232433]:      <mtu size="1442"/>
Dec  6 02:33:25 np0005548731 nova_compute[232433]:      <target dev="tapad544c9a-af"/>
Dec  6 02:33:25 np0005548731 nova_compute[232433]:    </interface>
Dec  6 02:33:25 np0005548731 nova_compute[232433]:    <serial type="pty">
Dec  6 02:33:25 np0005548731 nova_compute[232433]:      <log file="/var/lib/nova/instances/6a50a40c-3b05-4c0e-aa67-1489e203824e/console.log" append="off"/>
Dec  6 02:33:25 np0005548731 nova_compute[232433]:    </serial>
Dec  6 02:33:25 np0005548731 nova_compute[232433]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec  6 02:33:25 np0005548731 nova_compute[232433]:    <video>
Dec  6 02:33:25 np0005548731 nova_compute[232433]:      <model type="virtio"/>
Dec  6 02:33:25 np0005548731 nova_compute[232433]:    </video>
Dec  6 02:33:25 np0005548731 nova_compute[232433]:    <input type="tablet" bus="usb"/>
Dec  6 02:33:25 np0005548731 nova_compute[232433]:    <rng model="virtio">
Dec  6 02:33:25 np0005548731 nova_compute[232433]:      <backend model="random">/dev/urandom</backend>
Dec  6 02:33:25 np0005548731 nova_compute[232433]:    </rng>
Dec  6 02:33:25 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root"/>
Dec  6 02:33:25 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:33:25 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:33:25 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:33:25 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:33:25 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:33:25 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:33:25 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:33:25 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:33:25 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:33:25 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:33:25 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:33:25 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:33:25 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:33:25 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:33:25 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:33:25 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:33:25 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:33:25 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:33:25 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:33:25 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:33:25 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:33:25 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:33:25 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:33:25 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:33:25 np0005548731 nova_compute[232433]:    <controller type="usb" index="0"/>
Dec  6 02:33:25 np0005548731 nova_compute[232433]:    <memballoon model="virtio">
Dec  6 02:33:25 np0005548731 nova_compute[232433]:      <stats period="10"/>
Dec  6 02:33:25 np0005548731 nova_compute[232433]:    </memballoon>
Dec  6 02:33:25 np0005548731 nova_compute[232433]:  </devices>
Dec  6 02:33:25 np0005548731 nova_compute[232433]: </domain>
Dec  6 02:33:25 np0005548731 nova_compute[232433]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec  6 02:33:25 np0005548731 nova_compute[232433]: 2025-12-06 07:33:25.537 232437 DEBUG nova.compute.manager [None req-12f9acdf-0731-4426-bdd6-b0953ab77850 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] [instance: 6a50a40c-3b05-4c0e-aa67-1489e203824e] Preparing to wait for external event network-vif-plugged-ad544c9a-af55-4a7f-babe-68f6d1b23e25 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec  6 02:33:25 np0005548731 nova_compute[232433]: 2025-12-06 07:33:25.537 232437 DEBUG oslo_concurrency.lockutils [None req-12f9acdf-0731-4426-bdd6-b0953ab77850 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Acquiring lock "6a50a40c-3b05-4c0e-aa67-1489e203824e-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:33:25 np0005548731 nova_compute[232433]: 2025-12-06 07:33:25.537 232437 DEBUG oslo_concurrency.lockutils [None req-12f9acdf-0731-4426-bdd6-b0953ab77850 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Lock "6a50a40c-3b05-4c0e-aa67-1489e203824e-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:33:25 np0005548731 nova_compute[232433]: 2025-12-06 07:33:25.538 232437 DEBUG oslo_concurrency.lockutils [None req-12f9acdf-0731-4426-bdd6-b0953ab77850 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Lock "6a50a40c-3b05-4c0e-aa67-1489e203824e-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:33:25 np0005548731 nova_compute[232433]: 2025-12-06 07:33:25.538 232437 DEBUG nova.virt.libvirt.vif [None req-12f9acdf-0731-4426-bdd6-b0953ab77850 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T07:33:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-AttachVolumeMultiAttachTest-server-386684313',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-attachvolumemultiattachtest-server-386684313',id=120,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='833f4cf9f5a64b2ab94c3bf330353a31',ramdisk_id='',reservation_id='r-b7297kax',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-AttachVolumeMultiAttachTest-690984293',owner_user_name='tempest-AttachVolumeMultiAttachTest-690984293-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:33:17Z,user_data=None,user_id='605b5481e0c944048e6a67046c30d693',uuid=6a50a40c-3b05-4c0e-aa67-1489e203824e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ad544c9a-af55-4a7f-babe-68f6d1b23e25", "address": "fa:16:3e:1f:fb:86", "network": {"id": "5bb49e8a-b939-4c79-851c-62c634be0272", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-50482264-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "833f4cf9f5a64b2ab94c3bf330353a31", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapad544c9a-af", "ovs_interfaceid": "ad544c9a-af55-4a7f-babe-68f6d1b23e25", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec  6 02:33:25 np0005548731 nova_compute[232433]: 2025-12-06 07:33:25.538 232437 DEBUG nova.network.os_vif_util [None req-12f9acdf-0731-4426-bdd6-b0953ab77850 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Converting VIF {"id": "ad544c9a-af55-4a7f-babe-68f6d1b23e25", "address": "fa:16:3e:1f:fb:86", "network": {"id": "5bb49e8a-b939-4c79-851c-62c634be0272", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-50482264-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "833f4cf9f5a64b2ab94c3bf330353a31", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapad544c9a-af", "ovs_interfaceid": "ad544c9a-af55-4a7f-babe-68f6d1b23e25", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 02:33:25 np0005548731 nova_compute[232433]: 2025-12-06 07:33:25.539 232437 DEBUG nova.network.os_vif_util [None req-12f9acdf-0731-4426-bdd6-b0953ab77850 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1f:fb:86,bridge_name='br-int',has_traffic_filtering=True,id=ad544c9a-af55-4a7f-babe-68f6d1b23e25,network=Network(5bb49e8a-b939-4c79-851c-62c634be0272),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapad544c9a-af') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 02:33:25 np0005548731 nova_compute[232433]: 2025-12-06 07:33:25.539 232437 DEBUG os_vif [None req-12f9acdf-0731-4426-bdd6-b0953ab77850 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:1f:fb:86,bridge_name='br-int',has_traffic_filtering=True,id=ad544c9a-af55-4a7f-babe-68f6d1b23e25,network=Network(5bb49e8a-b939-4c79-851c-62c634be0272),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapad544c9a-af') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec  6 02:33:25 np0005548731 nova_compute[232433]: 2025-12-06 07:33:25.540 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:33:25 np0005548731 nova_compute[232433]: 2025-12-06 07:33:25.540 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:33:25 np0005548731 nova_compute[232433]: 2025-12-06 07:33:25.540 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  6 02:33:25 np0005548731 nova_compute[232433]: 2025-12-06 07:33:25.543 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:33:25 np0005548731 nova_compute[232433]: 2025-12-06 07:33:25.543 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapad544c9a-af, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:33:25 np0005548731 nova_compute[232433]: 2025-12-06 07:33:25.544 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapad544c9a-af, col_values=(('external_ids', {'iface-id': 'ad544c9a-af55-4a7f-babe-68f6d1b23e25', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:1f:fb:86', 'vm-uuid': '6a50a40c-3b05-4c0e-aa67-1489e203824e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:33:25 np0005548731 nova_compute[232433]: 2025-12-06 07:33:25.545 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:33:25 np0005548731 NetworkManager[49182]: <info>  [1765006405.5463] manager: (tapad544c9a-af): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/245)
Dec  6 02:33:25 np0005548731 nova_compute[232433]: 2025-12-06 07:33:25.548 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  6 02:33:25 np0005548731 nova_compute[232433]: 2025-12-06 07:33:25.552 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:33:25 np0005548731 nova_compute[232433]: 2025-12-06 07:33:25.553 232437 INFO os_vif [None req-12f9acdf-0731-4426-bdd6-b0953ab77850 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:1f:fb:86,bridge_name='br-int',has_traffic_filtering=True,id=ad544c9a-af55-4a7f-babe-68f6d1b23e25,network=Network(5bb49e8a-b939-4c79-851c-62c634be0272),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapad544c9a-af')#033[00m
Dec  6 02:33:25 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:33:25 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:33:25 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:33:25.641 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:33:25 np0005548731 nova_compute[232433]: 2025-12-06 07:33:25.697 232437 DEBUG nova.virt.libvirt.driver [None req-12f9acdf-0731-4426-bdd6-b0953ab77850 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  6 02:33:25 np0005548731 nova_compute[232433]: 2025-12-06 07:33:25.698 232437 DEBUG nova.virt.libvirt.driver [None req-12f9acdf-0731-4426-bdd6-b0953ab77850 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  6 02:33:25 np0005548731 nova_compute[232433]: 2025-12-06 07:33:25.698 232437 DEBUG nova.virt.libvirt.driver [None req-12f9acdf-0731-4426-bdd6-b0953ab77850 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] No VIF found with MAC fa:16:3e:1f:fb:86, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec  6 02:33:25 np0005548731 nova_compute[232433]: 2025-12-06 07:33:25.699 232437 INFO nova.virt.libvirt.driver [None req-12f9acdf-0731-4426-bdd6-b0953ab77850 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] [instance: 6a50a40c-3b05-4c0e-aa67-1489e203824e] Using config drive#033[00m
Dec  6 02:33:25 np0005548731 nova_compute[232433]: 2025-12-06 07:33:25.723 232437 DEBUG nova.storage.rbd_utils [None req-12f9acdf-0731-4426-bdd6-b0953ab77850 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] rbd image 6a50a40c-3b05-4c0e-aa67-1489e203824e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:33:26 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:33:26 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:33:26 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:33:26.468 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:33:26 np0005548731 nova_compute[232433]: 2025-12-06 07:33:26.826 232437 INFO nova.virt.libvirt.driver [None req-12f9acdf-0731-4426-bdd6-b0953ab77850 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] [instance: 6a50a40c-3b05-4c0e-aa67-1489e203824e] Creating config drive at /var/lib/nova/instances/6a50a40c-3b05-4c0e-aa67-1489e203824e/disk.config#033[00m
Dec  6 02:33:26 np0005548731 nova_compute[232433]: 2025-12-06 07:33:26.831 232437 DEBUG oslo_concurrency.processutils [None req-12f9acdf-0731-4426-bdd6-b0953ab77850 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/6a50a40c-3b05-4c0e-aa67-1489e203824e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp628g9ho6 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:33:26 np0005548731 nova_compute[232433]: 2025-12-06 07:33:26.963 232437 DEBUG oslo_concurrency.processutils [None req-12f9acdf-0731-4426-bdd6-b0953ab77850 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/6a50a40c-3b05-4c0e-aa67-1489e203824e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp628g9ho6" returned: 0 in 0.132s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:33:26 np0005548731 nova_compute[232433]: 2025-12-06 07:33:26.993 232437 DEBUG nova.storage.rbd_utils [None req-12f9acdf-0731-4426-bdd6-b0953ab77850 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] rbd image 6a50a40c-3b05-4c0e-aa67-1489e203824e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:33:26 np0005548731 nova_compute[232433]: 2025-12-06 07:33:26.997 232437 DEBUG oslo_concurrency.processutils [None req-12f9acdf-0731-4426-bdd6-b0953ab77850 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/6a50a40c-3b05-4c0e-aa67-1489e203824e/disk.config 6a50a40c-3b05-4c0e-aa67-1489e203824e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:33:27 np0005548731 nova_compute[232433]: 2025-12-06 07:33:27.360 232437 DEBUG oslo_concurrency.processutils [None req-12f9acdf-0731-4426-bdd6-b0953ab77850 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/6a50a40c-3b05-4c0e-aa67-1489e203824e/disk.config 6a50a40c-3b05-4c0e-aa67-1489e203824e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.362s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:33:27 np0005548731 nova_compute[232433]: 2025-12-06 07:33:27.361 232437 INFO nova.virt.libvirt.driver [None req-12f9acdf-0731-4426-bdd6-b0953ab77850 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] [instance: 6a50a40c-3b05-4c0e-aa67-1489e203824e] Deleting local config drive /var/lib/nova/instances/6a50a40c-3b05-4c0e-aa67-1489e203824e/disk.config because it was imported into RBD.#033[00m
Dec  6 02:33:27 np0005548731 kernel: tapad544c9a-af: entered promiscuous mode
Dec  6 02:33:27 np0005548731 NetworkManager[49182]: <info>  [1765006407.4138] manager: (tapad544c9a-af): new Tun device (/org/freedesktop/NetworkManager/Devices/246)
Dec  6 02:33:27 np0005548731 ovn_controller[133927]: 2025-12-06T07:33:27Z|00522|binding|INFO|Claiming lport ad544c9a-af55-4a7f-babe-68f6d1b23e25 for this chassis.
Dec  6 02:33:27 np0005548731 ovn_controller[133927]: 2025-12-06T07:33:27Z|00523|binding|INFO|ad544c9a-af55-4a7f-babe-68f6d1b23e25: Claiming fa:16:3e:1f:fb:86 10.100.0.9
Dec  6 02:33:27 np0005548731 nova_compute[232433]: 2025-12-06 07:33:27.414 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:33:27 np0005548731 nova_compute[232433]: 2025-12-06 07:33:27.417 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:33:27 np0005548731 systemd-udevd[282659]: Network interface NamePolicy= disabled on kernel command line.
Dec  6 02:33:27 np0005548731 systemd-machined[195355]: New machine qemu-52-instance-00000078.
Dec  6 02:33:27 np0005548731 NetworkManager[49182]: <info>  [1765006407.4579] device (tapad544c9a-af): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec  6 02:33:27 np0005548731 NetworkManager[49182]: <info>  [1765006407.4589] device (tapad544c9a-af): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec  6 02:33:27 np0005548731 nova_compute[232433]: 2025-12-06 07:33:27.476 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:33:27 np0005548731 systemd[1]: Started Virtual Machine qemu-52-instance-00000078.
Dec  6 02:33:27 np0005548731 ovn_controller[133927]: 2025-12-06T07:33:27Z|00524|binding|INFO|Setting lport ad544c9a-af55-4a7f-babe-68f6d1b23e25 ovn-installed in OVS
Dec  6 02:33:27 np0005548731 nova_compute[232433]: 2025-12-06 07:33:27.484 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:33:27 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:33:27 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:33:27 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:33:27.643 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:33:27 np0005548731 nova_compute[232433]: 2025-12-06 07:33:27.787 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:33:27 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e274 e274: 3 total, 3 up, 3 in
Dec  6 02:33:28 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:33:28.182 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=9f96b960-b4f2-40bd-ae99-08121f5e8b78, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '47'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:33:28 np0005548731 ovn_controller[133927]: 2025-12-06T07:33:28Z|00525|binding|INFO|Setting lport ad544c9a-af55-4a7f-babe-68f6d1b23e25 up in Southbound
Dec  6 02:33:28 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:33:28.363 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1f:fb:86 10.100.0.9'], port_security=['fa:16:3e:1f:fb:86 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '6a50a40c-3b05-4c0e-aa67-1489e203824e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5bb49e8a-b939-4c79-851c-62c634be0272', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '833f4cf9f5a64b2ab94c3bf330353a31', 'neutron:revision_number': '2', 'neutron:security_group_ids': '384b06d0-71ab-455f-9033-7290730c5c8b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=56d5d28a-0d18-4549-b1d7-8420194c6348, chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], logical_port=ad544c9a-af55-4a7f-babe-68f6d1b23e25) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 02:33:28 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:33:28.364 143965 INFO neutron.agent.ovn.metadata.agent [-] Port ad544c9a-af55-4a7f-babe-68f6d1b23e25 in datapath 5bb49e8a-b939-4c79-851c-62c634be0272 bound to our chassis#033[00m
Dec  6 02:33:28 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:33:28.402 143965 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 5bb49e8a-b939-4c79-851c-62c634be0272#033[00m
Dec  6 02:33:28 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:33:28.413 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[703dbaed-916c-423f-a232-8790d3cc5245]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:33:28 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:33:28.414 143965 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap5bb49e8a-b1 in ovnmeta-5bb49e8a-b939-4c79-851c-62c634be0272 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Dec  6 02:33:28 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e274 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:33:28 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:33:28.417 236668 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap5bb49e8a-b0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Dec  6 02:33:28 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:33:28.417 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[76e2e983-1fe4-44e8-92e0-33c6387ef249]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:33:28 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:33:28.418 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[f9b38754-c51b-4184-8c3c-b77b066ad781]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:33:28 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:33:28.428 144079 DEBUG oslo.privsep.daemon [-] privsep: reply[84559849-136c-4a4f-9fd6-1ea26614d445]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:33:28 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:33:28.442 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[dc4aee09-c646-49c0-bff1-e027c758c727]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:33:28 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:33:28 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:33:28 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:33:28.470 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:33:28 np0005548731 nova_compute[232433]: 2025-12-06 07:33:28.476 232437 DEBUG nova.network.neutron [req-658d048f-d949-463c-bdbc-50738f8f81ed req-4b01e1ef-c703-4dc6-9591-7f2c2846cc41 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 6a50a40c-3b05-4c0e-aa67-1489e203824e] Updated VIF entry in instance network info cache for port ad544c9a-af55-4a7f-babe-68f6d1b23e25. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  6 02:33:28 np0005548731 nova_compute[232433]: 2025-12-06 07:33:28.477 232437 DEBUG nova.network.neutron [req-658d048f-d949-463c-bdbc-50738f8f81ed req-4b01e1ef-c703-4dc6-9591-7f2c2846cc41 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 6a50a40c-3b05-4c0e-aa67-1489e203824e] Updating instance_info_cache with network_info: [{"id": "ad544c9a-af55-4a7f-babe-68f6d1b23e25", "address": "fa:16:3e:1f:fb:86", "network": {"id": "5bb49e8a-b939-4c79-851c-62c634be0272", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-50482264-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "833f4cf9f5a64b2ab94c3bf330353a31", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapad544c9a-af", "ovs_interfaceid": "ad544c9a-af55-4a7f-babe-68f6d1b23e25", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 02:33:28 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:33:28.476 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[fb882f65-f4ec-47d8-a0cc-009d783354a0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:33:28 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:33:28.483 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[bb86611d-3c2c-41a7-b4e9-66772a50d011]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:33:28 np0005548731 NetworkManager[49182]: <info>  [1765006408.4847] manager: (tap5bb49e8a-b0): new Veth device (/org/freedesktop/NetworkManager/Devices/247)
Dec  6 02:33:28 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:33:28.518 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[f101d281-190b-4a98-9b20-082819569447]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:33:28 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:33:28.522 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[723a05b4-111c-4ba4-9460-a028d1f58f8d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:33:28 np0005548731 nova_compute[232433]: 2025-12-06 07:33:28.543 232437 DEBUG oslo_concurrency.lockutils [req-658d048f-d949-463c-bdbc-50738f8f81ed req-4b01e1ef-c703-4dc6-9591-7f2c2846cc41 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-6a50a40c-3b05-4c0e-aa67-1489e203824e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 02:33:28 np0005548731 NetworkManager[49182]: <info>  [1765006408.5459] device (tap5bb49e8a-b0): carrier: link connected
Dec  6 02:33:28 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:33:28.551 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[3f4ef0a8-3f75-4079-8d93-2479f6d684c4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:33:28 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:33:28.572 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[b31c5e5e-da81-47b0-aea7-9153eed4c07c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5bb49e8a-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:0b:bf:8b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 164], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 667055, 'reachable_time': 41475, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 282730, 'error': None, 'target': 'ovnmeta-5bb49e8a-b939-4c79-851c-62c634be0272', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:33:28 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:33:28.587 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[e22318bb-d5c1-4f5b-b60e-1d7fa5dc26be]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe0b:bf8b'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 667055, 'tstamp': 667055}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 282731, 'error': None, 'target': 'ovnmeta-5bb49e8a-b939-4c79-851c-62c634be0272', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:33:28 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:33:28.607 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[2f49eb75-aa34-4053-a979-6665e2c2569c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5bb49e8a-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:0b:bf:8b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 164], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 667055, 'reachable_time': 41475, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 282733, 'error': None, 'target': 'ovnmeta-5bb49e8a-b939-4c79-851c-62c634be0272', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:33:28 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:33:28.640 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[d4bba3e8-af3c-4e00-8730-73115bb6621a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:33:28 np0005548731 nova_compute[232433]: 2025-12-06 07:33:28.688 232437 DEBUG nova.virt.driver [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Emitting event <LifecycleEvent: 1765006408.6867776, 6a50a40c-3b05-4c0e-aa67-1489e203824e => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 02:33:28 np0005548731 nova_compute[232433]: 2025-12-06 07:33:28.689 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 6a50a40c-3b05-4c0e-aa67-1489e203824e] VM Started (Lifecycle Event)#033[00m
Dec  6 02:33:28 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:33:28.704 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[9af66033-7272-4180-81e6-efb50e744101]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:33:28 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:33:28.705 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5bb49e8a-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:33:28 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:33:28.705 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  6 02:33:28 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:33:28.706 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5bb49e8a-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:33:28 np0005548731 nova_compute[232433]: 2025-12-06 07:33:28.707 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:33:28 np0005548731 kernel: tap5bb49e8a-b0: entered promiscuous mode
Dec  6 02:33:28 np0005548731 NetworkManager[49182]: <info>  [1765006408.7094] manager: (tap5bb49e8a-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/248)
Dec  6 02:33:28 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:33:28.712 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap5bb49e8a-b0, col_values=(('external_ids', {'iface-id': 'e4d89947-8fab-4c13-b2db-4eed875f77a0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:33:28 np0005548731 ovn_controller[133927]: 2025-12-06T07:33:28Z|00526|binding|INFO|Releasing lport e4d89947-8fab-4c13-b2db-4eed875f77a0 from this chassis (sb_readonly=0)
Dec  6 02:33:28 np0005548731 nova_compute[232433]: 2025-12-06 07:33:28.713 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:33:28 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:33:28.715 143965 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/5bb49e8a-b939-4c79-851c-62c634be0272.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/5bb49e8a-b939-4c79-851c-62c634be0272.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Dec  6 02:33:28 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:33:28.716 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[d5afe24a-0933-496d-ba8e-1490459951ac]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:33:28 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:33:28.717 143965 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec  6 02:33:28 np0005548731 ovn_metadata_agent[143960]: global
Dec  6 02:33:28 np0005548731 ovn_metadata_agent[143960]:    log         /dev/log local0 debug
Dec  6 02:33:28 np0005548731 ovn_metadata_agent[143960]:    log-tag     haproxy-metadata-proxy-5bb49e8a-b939-4c79-851c-62c634be0272
Dec  6 02:33:28 np0005548731 ovn_metadata_agent[143960]:    user        root
Dec  6 02:33:28 np0005548731 ovn_metadata_agent[143960]:    group       root
Dec  6 02:33:28 np0005548731 ovn_metadata_agent[143960]:    maxconn     1024
Dec  6 02:33:28 np0005548731 ovn_metadata_agent[143960]:    pidfile     /var/lib/neutron/external/pids/5bb49e8a-b939-4c79-851c-62c634be0272.pid.haproxy
Dec  6 02:33:28 np0005548731 ovn_metadata_agent[143960]:    daemon
Dec  6 02:33:28 np0005548731 ovn_metadata_agent[143960]: 
Dec  6 02:33:28 np0005548731 ovn_metadata_agent[143960]: defaults
Dec  6 02:33:28 np0005548731 ovn_metadata_agent[143960]:    log global
Dec  6 02:33:28 np0005548731 ovn_metadata_agent[143960]:    mode http
Dec  6 02:33:28 np0005548731 ovn_metadata_agent[143960]:    option httplog
Dec  6 02:33:28 np0005548731 ovn_metadata_agent[143960]:    option dontlognull
Dec  6 02:33:28 np0005548731 ovn_metadata_agent[143960]:    option http-server-close
Dec  6 02:33:28 np0005548731 ovn_metadata_agent[143960]:    option forwardfor
Dec  6 02:33:28 np0005548731 ovn_metadata_agent[143960]:    retries                 3
Dec  6 02:33:28 np0005548731 ovn_metadata_agent[143960]:    timeout http-request    30s
Dec  6 02:33:28 np0005548731 ovn_metadata_agent[143960]:    timeout connect         30s
Dec  6 02:33:28 np0005548731 ovn_metadata_agent[143960]:    timeout client          32s
Dec  6 02:33:28 np0005548731 ovn_metadata_agent[143960]:    timeout server          32s
Dec  6 02:33:28 np0005548731 ovn_metadata_agent[143960]:    timeout http-keep-alive 30s
Dec  6 02:33:28 np0005548731 ovn_metadata_agent[143960]: 
Dec  6 02:33:28 np0005548731 ovn_metadata_agent[143960]: 
Dec  6 02:33:28 np0005548731 ovn_metadata_agent[143960]: listen listener
Dec  6 02:33:28 np0005548731 ovn_metadata_agent[143960]:    bind 169.254.169.254:80
Dec  6 02:33:28 np0005548731 ovn_metadata_agent[143960]:    server metadata /var/lib/neutron/metadata_proxy
Dec  6 02:33:28 np0005548731 ovn_metadata_agent[143960]:    http-request add-header X-OVN-Network-ID 5bb49e8a-b939-4c79-851c-62c634be0272
Dec  6 02:33:28 np0005548731 ovn_metadata_agent[143960]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Dec  6 02:33:28 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:33:28.718 143965 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-5bb49e8a-b939-4c79-851c-62c634be0272', 'env', 'PROCESS_TAG=haproxy-5bb49e8a-b939-4c79-851c-62c634be0272', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/5bb49e8a-b939-4c79-851c-62c634be0272.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Dec  6 02:33:28 np0005548731 nova_compute[232433]: 2025-12-06 07:33:28.722 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 6a50a40c-3b05-4c0e-aa67-1489e203824e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:33:28 np0005548731 nova_compute[232433]: 2025-12-06 07:33:28.769 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:33:28 np0005548731 nova_compute[232433]: 2025-12-06 07:33:28.771 232437 DEBUG nova.virt.driver [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Emitting event <LifecycleEvent: 1765006408.6878922, 6a50a40c-3b05-4c0e-aa67-1489e203824e => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 02:33:28 np0005548731 nova_compute[232433]: 2025-12-06 07:33:28.771 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 6a50a40c-3b05-4c0e-aa67-1489e203824e] VM Paused (Lifecycle Event)#033[00m
Dec  6 02:33:28 np0005548731 nova_compute[232433]: 2025-12-06 07:33:28.794 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 6a50a40c-3b05-4c0e-aa67-1489e203824e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:33:28 np0005548731 nova_compute[232433]: 2025-12-06 07:33:28.799 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 6a50a40c-3b05-4c0e-aa67-1489e203824e] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  6 02:33:28 np0005548731 nova_compute[232433]: 2025-12-06 07:33:28.821 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 6a50a40c-3b05-4c0e-aa67-1489e203824e] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  6 02:33:28 np0005548731 nova_compute[232433]: 2025-12-06 07:33:28.953 232437 DEBUG nova.compute.manager [req-58ab56e6-5400-4d62-9a3e-d7f05e56ea99 req-22f6ad41-3c8c-45c7-9307-36bfc975dfc8 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 6a50a40c-3b05-4c0e-aa67-1489e203824e] Received event network-vif-plugged-ad544c9a-af55-4a7f-babe-68f6d1b23e25 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:33:28 np0005548731 nova_compute[232433]: 2025-12-06 07:33:28.954 232437 DEBUG oslo_concurrency.lockutils [req-58ab56e6-5400-4d62-9a3e-d7f05e56ea99 req-22f6ad41-3c8c-45c7-9307-36bfc975dfc8 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "6a50a40c-3b05-4c0e-aa67-1489e203824e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:33:28 np0005548731 nova_compute[232433]: 2025-12-06 07:33:28.954 232437 DEBUG oslo_concurrency.lockutils [req-58ab56e6-5400-4d62-9a3e-d7f05e56ea99 req-22f6ad41-3c8c-45c7-9307-36bfc975dfc8 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "6a50a40c-3b05-4c0e-aa67-1489e203824e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:33:28 np0005548731 nova_compute[232433]: 2025-12-06 07:33:28.954 232437 DEBUG oslo_concurrency.lockutils [req-58ab56e6-5400-4d62-9a3e-d7f05e56ea99 req-22f6ad41-3c8c-45c7-9307-36bfc975dfc8 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "6a50a40c-3b05-4c0e-aa67-1489e203824e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:33:28 np0005548731 nova_compute[232433]: 2025-12-06 07:33:28.954 232437 DEBUG nova.compute.manager [req-58ab56e6-5400-4d62-9a3e-d7f05e56ea99 req-22f6ad41-3c8c-45c7-9307-36bfc975dfc8 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 6a50a40c-3b05-4c0e-aa67-1489e203824e] Processing event network-vif-plugged-ad544c9a-af55-4a7f-babe-68f6d1b23e25 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Dec  6 02:33:28 np0005548731 nova_compute[232433]: 2025-12-06 07:33:28.955 232437 DEBUG nova.compute.manager [None req-12f9acdf-0731-4426-bdd6-b0953ab77850 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] [instance: 6a50a40c-3b05-4c0e-aa67-1489e203824e] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec  6 02:33:28 np0005548731 nova_compute[232433]: 2025-12-06 07:33:28.958 232437 DEBUG nova.virt.driver [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Emitting event <LifecycleEvent: 1765006408.9580193, 6a50a40c-3b05-4c0e-aa67-1489e203824e => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 02:33:28 np0005548731 nova_compute[232433]: 2025-12-06 07:33:28.958 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 6a50a40c-3b05-4c0e-aa67-1489e203824e] VM Resumed (Lifecycle Event)#033[00m
Dec  6 02:33:28 np0005548731 nova_compute[232433]: 2025-12-06 07:33:28.960 232437 DEBUG nova.virt.libvirt.driver [None req-12f9acdf-0731-4426-bdd6-b0953ab77850 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] [instance: 6a50a40c-3b05-4c0e-aa67-1489e203824e] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Dec  6 02:33:28 np0005548731 nova_compute[232433]: 2025-12-06 07:33:28.963 232437 INFO nova.virt.libvirt.driver [-] [instance: 6a50a40c-3b05-4c0e-aa67-1489e203824e] Instance spawned successfully.#033[00m
Dec  6 02:33:28 np0005548731 nova_compute[232433]: 2025-12-06 07:33:28.963 232437 DEBUG nova.virt.libvirt.driver [None req-12f9acdf-0731-4426-bdd6-b0953ab77850 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] [instance: 6a50a40c-3b05-4c0e-aa67-1489e203824e] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Dec  6 02:33:28 np0005548731 nova_compute[232433]: 2025-12-06 07:33:28.989 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 6a50a40c-3b05-4c0e-aa67-1489e203824e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:33:28 np0005548731 nova_compute[232433]: 2025-12-06 07:33:28.995 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 6a50a40c-3b05-4c0e-aa67-1489e203824e] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  6 02:33:28 np0005548731 nova_compute[232433]: 2025-12-06 07:33:28.999 232437 DEBUG nova.virt.libvirt.driver [None req-12f9acdf-0731-4426-bdd6-b0953ab77850 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] [instance: 6a50a40c-3b05-4c0e-aa67-1489e203824e] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 02:33:29 np0005548731 nova_compute[232433]: 2025-12-06 07:33:29.000 232437 DEBUG nova.virt.libvirt.driver [None req-12f9acdf-0731-4426-bdd6-b0953ab77850 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] [instance: 6a50a40c-3b05-4c0e-aa67-1489e203824e] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 02:33:29 np0005548731 nova_compute[232433]: 2025-12-06 07:33:29.000 232437 DEBUG nova.virt.libvirt.driver [None req-12f9acdf-0731-4426-bdd6-b0953ab77850 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] [instance: 6a50a40c-3b05-4c0e-aa67-1489e203824e] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 02:33:29 np0005548731 nova_compute[232433]: 2025-12-06 07:33:29.001 232437 DEBUG nova.virt.libvirt.driver [None req-12f9acdf-0731-4426-bdd6-b0953ab77850 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] [instance: 6a50a40c-3b05-4c0e-aa67-1489e203824e] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 02:33:29 np0005548731 nova_compute[232433]: 2025-12-06 07:33:29.001 232437 DEBUG nova.virt.libvirt.driver [None req-12f9acdf-0731-4426-bdd6-b0953ab77850 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] [instance: 6a50a40c-3b05-4c0e-aa67-1489e203824e] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 02:33:29 np0005548731 nova_compute[232433]: 2025-12-06 07:33:29.001 232437 DEBUG nova.virt.libvirt.driver [None req-12f9acdf-0731-4426-bdd6-b0953ab77850 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] [instance: 6a50a40c-3b05-4c0e-aa67-1489e203824e] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 02:33:29 np0005548731 nova_compute[232433]: 2025-12-06 07:33:29.028 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 6a50a40c-3b05-4c0e-aa67-1489e203824e] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  6 02:33:29 np0005548731 nova_compute[232433]: 2025-12-06 07:33:29.067 232437 INFO nova.compute.manager [None req-12f9acdf-0731-4426-bdd6-b0953ab77850 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] [instance: 6a50a40c-3b05-4c0e-aa67-1489e203824e] Took 9.08 seconds to spawn the instance on the hypervisor.#033[00m
Dec  6 02:33:29 np0005548731 nova_compute[232433]: 2025-12-06 07:33:29.067 232437 DEBUG nova.compute.manager [None req-12f9acdf-0731-4426-bdd6-b0953ab77850 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] [instance: 6a50a40c-3b05-4c0e-aa67-1489e203824e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:33:29 np0005548731 nova_compute[232433]: 2025-12-06 07:33:29.172 232437 INFO nova.compute.manager [None req-12f9acdf-0731-4426-bdd6-b0953ab77850 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] [instance: 6a50a40c-3b05-4c0e-aa67-1489e203824e] Took 13.39 seconds to build instance.#033[00m
Dec  6 02:33:29 np0005548731 podman[282771]: 2025-12-06 07:33:29.092416147 +0000 UTC m=+0.026951699 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec  6 02:33:29 np0005548731 nova_compute[232433]: 2025-12-06 07:33:29.249 232437 DEBUG oslo_concurrency.lockutils [None req-12f9acdf-0731-4426-bdd6-b0953ab77850 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Lock "6a50a40c-3b05-4c0e-aa67-1489e203824e" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.622s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:33:29 np0005548731 podman[282771]: 2025-12-06 07:33:29.435015748 +0000 UTC m=+0.369551290 container create 20324efdfeb613da0ec6dbc2b5c77ac2a2f44ec1dda8fab0d9a6a795d10320c2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5bb49e8a-b939-4c79-851c-62c634be0272, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec  6 02:33:29 np0005548731 systemd[1]: Started libpod-conmon-20324efdfeb613da0ec6dbc2b5c77ac2a2f44ec1dda8fab0d9a6a795d10320c2.scope.
Dec  6 02:33:29 np0005548731 systemd[1]: Started libcrun container.
Dec  6 02:33:29 np0005548731 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0815dbcf6806f53a04728943ebdcd5c4f82587c43c53d3f9bfa3653102522ef5/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec  6 02:33:29 np0005548731 podman[282771]: 2025-12-06 07:33:29.528986781 +0000 UTC m=+0.463522354 container init 20324efdfeb613da0ec6dbc2b5c77ac2a2f44ec1dda8fab0d9a6a795d10320c2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5bb49e8a-b939-4c79-851c-62c634be0272, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team)
Dec  6 02:33:29 np0005548731 podman[282771]: 2025-12-06 07:33:29.53670569 +0000 UTC m=+0.471241232 container start 20324efdfeb613da0ec6dbc2b5c77ac2a2f44ec1dda8fab0d9a6a795d10320c2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5bb49e8a-b939-4c79-851c-62c634be0272, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec  6 02:33:29 np0005548731 neutron-haproxy-ovnmeta-5bb49e8a-b939-4c79-851c-62c634be0272[282786]: [NOTICE]   (282790) : New worker (282792) forked
Dec  6 02:33:29 np0005548731 neutron-haproxy-ovnmeta-5bb49e8a-b939-4c79-851c-62c634be0272[282786]: [NOTICE]   (282790) : Loading success.
Dec  6 02:33:29 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:33:29 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:33:29 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:33:29.646 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:33:30 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:33:30 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 02:33:30 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:33:30.471 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 02:33:30 np0005548731 nova_compute[232433]: 2025-12-06 07:33:30.546 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:33:31 np0005548731 nova_compute[232433]: 2025-12-06 07:33:31.103 232437 DEBUG nova.compute.manager [req-d65186f7-75dd-4207-93cf-654b25ab7fd6 req-48566ed7-c8ce-4c61-84b4-3b25a4445d1e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 6a50a40c-3b05-4c0e-aa67-1489e203824e] Received event network-vif-plugged-ad544c9a-af55-4a7f-babe-68f6d1b23e25 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:33:31 np0005548731 nova_compute[232433]: 2025-12-06 07:33:31.104 232437 DEBUG oslo_concurrency.lockutils [req-d65186f7-75dd-4207-93cf-654b25ab7fd6 req-48566ed7-c8ce-4c61-84b4-3b25a4445d1e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "6a50a40c-3b05-4c0e-aa67-1489e203824e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:33:31 np0005548731 nova_compute[232433]: 2025-12-06 07:33:31.104 232437 DEBUG oslo_concurrency.lockutils [req-d65186f7-75dd-4207-93cf-654b25ab7fd6 req-48566ed7-c8ce-4c61-84b4-3b25a4445d1e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "6a50a40c-3b05-4c0e-aa67-1489e203824e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:33:31 np0005548731 nova_compute[232433]: 2025-12-06 07:33:31.105 232437 DEBUG oslo_concurrency.lockutils [req-d65186f7-75dd-4207-93cf-654b25ab7fd6 req-48566ed7-c8ce-4c61-84b4-3b25a4445d1e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "6a50a40c-3b05-4c0e-aa67-1489e203824e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:33:31 np0005548731 nova_compute[232433]: 2025-12-06 07:33:31.105 232437 DEBUG nova.compute.manager [req-d65186f7-75dd-4207-93cf-654b25ab7fd6 req-48566ed7-c8ce-4c61-84b4-3b25a4445d1e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 6a50a40c-3b05-4c0e-aa67-1489e203824e] No waiting events found dispatching network-vif-plugged-ad544c9a-af55-4a7f-babe-68f6d1b23e25 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 02:33:31 np0005548731 nova_compute[232433]: 2025-12-06 07:33:31.105 232437 WARNING nova.compute.manager [req-d65186f7-75dd-4207-93cf-654b25ab7fd6 req-48566ed7-c8ce-4c61-84b4-3b25a4445d1e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 6a50a40c-3b05-4c0e-aa67-1489e203824e] Received unexpected event network-vif-plugged-ad544c9a-af55-4a7f-babe-68f6d1b23e25 for instance with vm_state active and task_state None.#033[00m
Dec  6 02:33:31 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:33:31 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 02:33:31 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:33:31.648 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 02:33:32 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:33:32 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:33:32 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:33:32.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:33:32 np0005548731 nova_compute[232433]: 2025-12-06 07:33:32.788 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:33:33 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e274 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:33:33 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:33:33 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:33:33 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:33:33.650 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:33:34 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:33:34 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:33:34 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:33:34.477 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:33:35 np0005548731 nova_compute[232433]: 2025-12-06 07:33:35.549 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:33:35 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:33:35 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:33:35 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:33:35.652 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:33:36 np0005548731 nova_compute[232433]: 2025-12-06 07:33:36.105 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:33:36 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:33:36 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:33:36 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:33:36.480 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:33:37 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:33:37 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:33:37 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:33:37.654 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:33:37 np0005548731 nova_compute[232433]: 2025-12-06 07:33:37.790 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:33:37 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e275 e275: 3 total, 3 up, 3 in
Dec  6 02:33:38 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e275 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:33:38 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:33:38 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:33:38 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:33:38.481 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:33:38 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e276 e276: 3 total, 3 up, 3 in
Dec  6 02:33:39 np0005548731 nova_compute[232433]: 2025-12-06 07:33:39.122 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:33:39 np0005548731 nova_compute[232433]: 2025-12-06 07:33:39.123 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Dec  6 02:33:39 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:33:39 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 02:33:39 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:33:39.656 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 02:33:40 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:33:40 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 02:33:40 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:33:40.483 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 02:33:40 np0005548731 nova_compute[232433]: 2025-12-06 07:33:40.551 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:33:41 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:33:41 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:33:41 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:33:41.659 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:33:42 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:33:42 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:33:42 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:33:42.487 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:33:42 np0005548731 nova_compute[232433]: 2025-12-06 07:33:42.793 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:33:43 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e276 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:33:43 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:33:43 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:33:43 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:33:43.661 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:33:44 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:33:44 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:33:44 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:33:44.488 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:33:45 np0005548731 nova_compute[232433]: 2025-12-06 07:33:45.554 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:33:45 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:33:45 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:33:45 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:33:45.663 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:33:46 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e277 e277: 3 total, 3 up, 3 in
Dec  6 02:33:46 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:33:46 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 02:33:46 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:33:46.491 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 02:33:47 np0005548731 nova_compute[232433]: 2025-12-06 07:33:47.269 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:33:47 np0005548731 nova_compute[232433]: 2025-12-06 07:33:47.315 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Triggering sync for uuid 6a50a40c-3b05-4c0e-aa67-1489e203824e _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m
Dec  6 02:33:47 np0005548731 nova_compute[232433]: 2025-12-06 07:33:47.316 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Acquiring lock "6a50a40c-3b05-4c0e-aa67-1489e203824e" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:33:47 np0005548731 nova_compute[232433]: 2025-12-06 07:33:47.316 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "6a50a40c-3b05-4c0e-aa67-1489e203824e" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:33:47 np0005548731 nova_compute[232433]: 2025-12-06 07:33:47.343 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "6a50a40c-3b05-4c0e-aa67-1489e203824e" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.027s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:33:47 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:33:47 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:33:47 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:33:47.666 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:33:47 np0005548731 nova_compute[232433]: 2025-12-06 07:33:47.794 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:33:48 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e277 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:33:48 np0005548731 podman[283132]: 2025-12-06 07:33:48.441930701 +0000 UTC m=+0.038413578 container create a897712d486f868b162c806a9cb8567b9e490978a2cdf2e4f48b9d36fb71d9e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_mirzakhani, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Dec  6 02:33:48 np0005548731 systemd[1]: Started libpod-conmon-a897712d486f868b162c806a9cb8567b9e490978a2cdf2e4f48b9d36fb71d9e5.scope.
Dec  6 02:33:48 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:33:48 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:33:48 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:33:48.495 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:33:48 np0005548731 systemd[1]: Started libcrun container.
Dec  6 02:33:48 np0005548731 podman[283132]: 2025-12-06 07:33:48.424785583 +0000 UTC m=+0.021268490 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  6 02:33:48 np0005548731 podman[283132]: 2025-12-06 07:33:48.520742266 +0000 UTC m=+0.117225173 container init a897712d486f868b162c806a9cb8567b9e490978a2cdf2e4f48b9d36fb71d9e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_mirzakhani, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  6 02:33:48 np0005548731 podman[283132]: 2025-12-06 07:33:48.529590831 +0000 UTC m=+0.126073708 container start a897712d486f868b162c806a9cb8567b9e490978a2cdf2e4f48b9d36fb71d9e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_mirzakhani, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Dec  6 02:33:48 np0005548731 podman[283132]: 2025-12-06 07:33:48.53282898 +0000 UTC m=+0.129311897 container attach a897712d486f868b162c806a9cb8567b9e490978a2cdf2e4f48b9d36fb71d9e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_mirzakhani, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True)
Dec  6 02:33:48 np0005548731 ecstatic_mirzakhani[283149]: 167 167
Dec  6 02:33:48 np0005548731 podman[283132]: 2025-12-06 07:33:48.535589898 +0000 UTC m=+0.132072785 container died a897712d486f868b162c806a9cb8567b9e490978a2cdf2e4f48b9d36fb71d9e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_mirzakhani, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Dec  6 02:33:48 np0005548731 systemd[1]: libpod-a897712d486f868b162c806a9cb8567b9e490978a2cdf2e4f48b9d36fb71d9e5.scope: Deactivated successfully.
Dec  6 02:33:48 np0005548731 systemd[1]: var-lib-containers-storage-overlay-61348345f8d2da7b2cc07f55b3d8269d9f5a7065e7af4c5cc3210c8fbbcb6fcc-merged.mount: Deactivated successfully.
Dec  6 02:33:48 np0005548731 podman[283132]: 2025-12-06 07:33:48.573181365 +0000 UTC m=+0.169664242 container remove a897712d486f868b162c806a9cb8567b9e490978a2cdf2e4f48b9d36fb71d9e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_mirzakhani, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  6 02:33:48 np0005548731 systemd[1]: libpod-conmon-a897712d486f868b162c806a9cb8567b9e490978a2cdf2e4f48b9d36fb71d9e5.scope: Deactivated successfully.
Dec  6 02:33:48 np0005548731 podman[283173]: 2025-12-06 07:33:48.744300671 +0000 UTC m=+0.040453718 container create a8e0f663c1689b0a1a1a0e4475c2e5b06969900c45affc12a4d6cee47ce19aec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_wilson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Dec  6 02:33:48 np0005548731 systemd[1]: Started libpod-conmon-a8e0f663c1689b0a1a1a0e4475c2e5b06969900c45affc12a4d6cee47ce19aec.scope.
Dec  6 02:33:48 np0005548731 systemd[1]: Started libcrun container.
Dec  6 02:33:48 np0005548731 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/57d196a8263cc31962d73f1bd1b7db4cbcc5ec5c281c22f83a17070d504dd35f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  6 02:33:48 np0005548731 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/57d196a8263cc31962d73f1bd1b7db4cbcc5ec5c281c22f83a17070d504dd35f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  6 02:33:48 np0005548731 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/57d196a8263cc31962d73f1bd1b7db4cbcc5ec5c281c22f83a17070d504dd35f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  6 02:33:48 np0005548731 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/57d196a8263cc31962d73f1bd1b7db4cbcc5ec5c281c22f83a17070d504dd35f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  6 02:33:48 np0005548731 podman[283173]: 2025-12-06 07:33:48.725996625 +0000 UTC m=+0.022149702 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  6 02:33:48 np0005548731 podman[283173]: 2025-12-06 07:33:48.828021795 +0000 UTC m=+0.124174862 container init a8e0f663c1689b0a1a1a0e4475c2e5b06969900c45affc12a4d6cee47ce19aec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_wilson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  6 02:33:48 np0005548731 podman[283173]: 2025-12-06 07:33:48.834235726 +0000 UTC m=+0.130388773 container start a8e0f663c1689b0a1a1a0e4475c2e5b06969900c45affc12a4d6cee47ce19aec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_wilson, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True)
Dec  6 02:33:48 np0005548731 podman[283173]: 2025-12-06 07:33:48.838229294 +0000 UTC m=+0.134382371 container attach a8e0f663c1689b0a1a1a0e4475c2e5b06969900c45affc12a4d6cee47ce19aec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_wilson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  6 02:33:49 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e278 e278: 3 total, 3 up, 3 in
Dec  6 02:33:49 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:33:49 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 02:33:49 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:33:49.668 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 02:33:49 np0005548731 naughty_wilson[283189]: [
Dec  6 02:33:49 np0005548731 naughty_wilson[283189]:    {
Dec  6 02:33:49 np0005548731 naughty_wilson[283189]:        "available": false,
Dec  6 02:33:49 np0005548731 naughty_wilson[283189]:        "ceph_device": false,
Dec  6 02:33:49 np0005548731 naughty_wilson[283189]:        "device_id": "QEMU_DVD-ROM_QM00001",
Dec  6 02:33:49 np0005548731 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec  6 02:33:49 np0005548731 naughty_wilson[283189]:        "lsm_data": {},
Dec  6 02:33:49 np0005548731 naughty_wilson[283189]:        "lvs": [],
Dec  6 02:33:49 np0005548731 naughty_wilson[283189]:        "path": "/dev/sr0",
Dec  6 02:33:49 np0005548731 naughty_wilson[283189]:        "rejected_reasons": [
Dec  6 02:33:49 np0005548731 naughty_wilson[283189]:            "Insufficient space (<5GB)",
Dec  6 02:33:49 np0005548731 naughty_wilson[283189]:            "Has a FileSystem"
Dec  6 02:33:49 np0005548731 naughty_wilson[283189]:        ],
Dec  6 02:33:49 np0005548731 naughty_wilson[283189]:        "sys_api": {
Dec  6 02:33:49 np0005548731 naughty_wilson[283189]:            "actuators": null,
Dec  6 02:33:49 np0005548731 naughty_wilson[283189]:            "device_nodes": "sr0",
Dec  6 02:33:49 np0005548731 naughty_wilson[283189]:            "devname": "sr0",
Dec  6 02:33:49 np0005548731 naughty_wilson[283189]:            "human_readable_size": "482.00 KB",
Dec  6 02:33:49 np0005548731 naughty_wilson[283189]:            "id_bus": "ata",
Dec  6 02:33:49 np0005548731 naughty_wilson[283189]:            "model": "QEMU DVD-ROM",
Dec  6 02:33:49 np0005548731 naughty_wilson[283189]:            "nr_requests": "2",
Dec  6 02:33:49 np0005548731 naughty_wilson[283189]:            "parent": "/dev/sr0",
Dec  6 02:33:49 np0005548731 naughty_wilson[283189]:            "partitions": {},
Dec  6 02:33:49 np0005548731 naughty_wilson[283189]:            "path": "/dev/sr0",
Dec  6 02:33:49 np0005548731 naughty_wilson[283189]:            "removable": "1",
Dec  6 02:33:49 np0005548731 naughty_wilson[283189]:            "rev": "2.5+",
Dec  6 02:33:49 np0005548731 naughty_wilson[283189]:            "ro": "0",
Dec  6 02:33:49 np0005548731 naughty_wilson[283189]:            "rotational": "1",
Dec  6 02:33:49 np0005548731 naughty_wilson[283189]:            "sas_address": "",
Dec  6 02:33:49 np0005548731 naughty_wilson[283189]:            "sas_device_handle": "",
Dec  6 02:33:49 np0005548731 naughty_wilson[283189]:            "scheduler_mode": "mq-deadline",
Dec  6 02:33:49 np0005548731 naughty_wilson[283189]:            "sectors": 0,
Dec  6 02:33:49 np0005548731 naughty_wilson[283189]:            "sectorsize": "2048",
Dec  6 02:33:49 np0005548731 naughty_wilson[283189]:            "size": 493568.0,
Dec  6 02:33:49 np0005548731 naughty_wilson[283189]:            "support_discard": "2048",
Dec  6 02:33:49 np0005548731 naughty_wilson[283189]:            "type": "disk",
Dec  6 02:33:49 np0005548731 naughty_wilson[283189]:            "vendor": "QEMU"
Dec  6 02:33:49 np0005548731 naughty_wilson[283189]:        }
Dec  6 02:33:49 np0005548731 naughty_wilson[283189]:    }
Dec  6 02:33:49 np0005548731 naughty_wilson[283189]: ]
Dec  6 02:33:50 np0005548731 systemd[1]: libpod-a8e0f663c1689b0a1a1a0e4475c2e5b06969900c45affc12a4d6cee47ce19aec.scope: Deactivated successfully.
Dec  6 02:33:50 np0005548731 systemd[1]: libpod-a8e0f663c1689b0a1a1a0e4475c2e5b06969900c45affc12a4d6cee47ce19aec.scope: Consumed 1.183s CPU time.
Dec  6 02:33:50 np0005548731 podman[283173]: 2025-12-06 07:33:50.012041143 +0000 UTC m=+1.308194200 container died a8e0f663c1689b0a1a1a0e4475c2e5b06969900c45affc12a4d6cee47ce19aec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_wilson, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec  6 02:33:50 np0005548731 systemd[1]: var-lib-containers-storage-overlay-57d196a8263cc31962d73f1bd1b7db4cbcc5ec5c281c22f83a17070d504dd35f-merged.mount: Deactivated successfully.
Dec  6 02:33:50 np0005548731 podman[283173]: 2025-12-06 07:33:50.070174151 +0000 UTC m=+1.366327198 container remove a8e0f663c1689b0a1a1a0e4475c2e5b06969900c45affc12a4d6cee47ce19aec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_wilson, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  6 02:33:50 np0005548731 systemd[1]: libpod-conmon-a8e0f663c1689b0a1a1a0e4475c2e5b06969900c45affc12a4d6cee47ce19aec.scope: Deactivated successfully.
Dec  6 02:33:50 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:33:50 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:33:50 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:33:50.498 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:33:50 np0005548731 nova_compute[232433]: 2025-12-06 07:33:50.555 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:33:51 np0005548731 nova_compute[232433]: 2025-12-06 07:33:51.184 232437 DEBUG oslo_concurrency.lockutils [None req-29e6f8d9-8203-432f-9b3f-c995def36d09 bb72a3e638064a9896ab40615cfc7d67 1b09bb22417b499cbf8771188f7ae36f - - default default] Acquiring lock "e3caec0b-7382-4ccf-9d1b-c554a41c2c52" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:33:51 np0005548731 nova_compute[232433]: 2025-12-06 07:33:51.185 232437 DEBUG oslo_concurrency.lockutils [None req-29e6f8d9-8203-432f-9b3f-c995def36d09 bb72a3e638064a9896ab40615cfc7d67 1b09bb22417b499cbf8771188f7ae36f - - default default] Lock "e3caec0b-7382-4ccf-9d1b-c554a41c2c52" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:33:51 np0005548731 nova_compute[232433]: 2025-12-06 07:33:51.199 232437 DEBUG nova.compute.manager [None req-29e6f8d9-8203-432f-9b3f-c995def36d09 bb72a3e638064a9896ab40615cfc7d67 1b09bb22417b499cbf8771188f7ae36f - - default default] [instance: e3caec0b-7382-4ccf-9d1b-c554a41c2c52] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Dec  6 02:33:51 np0005548731 nova_compute[232433]: 2025-12-06 07:33:51.265 232437 DEBUG oslo_concurrency.lockutils [None req-29e6f8d9-8203-432f-9b3f-c995def36d09 bb72a3e638064a9896ab40615cfc7d67 1b09bb22417b499cbf8771188f7ae36f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:33:51 np0005548731 nova_compute[232433]: 2025-12-06 07:33:51.265 232437 DEBUG oslo_concurrency.lockutils [None req-29e6f8d9-8203-432f-9b3f-c995def36d09 bb72a3e638064a9896ab40615cfc7d67 1b09bb22417b499cbf8771188f7ae36f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:33:51 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 02:33:51 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 02:33:51 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  6 02:33:51 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 02:33:51 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  6 02:33:51 np0005548731 nova_compute[232433]: 2025-12-06 07:33:51.273 232437 DEBUG nova.virt.hardware [None req-29e6f8d9-8203-432f-9b3f-c995def36d09 bb72a3e638064a9896ab40615cfc7d67 1b09bb22417b499cbf8771188f7ae36f - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Dec  6 02:33:51 np0005548731 nova_compute[232433]: 2025-12-06 07:33:51.274 232437 INFO nova.compute.claims [None req-29e6f8d9-8203-432f-9b3f-c995def36d09 bb72a3e638064a9896ab40615cfc7d67 1b09bb22417b499cbf8771188f7ae36f - - default default] [instance: e3caec0b-7382-4ccf-9d1b-c554a41c2c52] Claim successful on node compute-2.ctlplane.example.com#033[00m
Dec  6 02:33:51 np0005548731 nova_compute[232433]: 2025-12-06 07:33:51.483 232437 DEBUG oslo_concurrency.processutils [None req-29e6f8d9-8203-432f-9b3f-c995def36d09 bb72a3e638064a9896ab40615cfc7d67 1b09bb22417b499cbf8771188f7ae36f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:33:51 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:33:51 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:33:51 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:33:51.670 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:33:51 np0005548731 podman[284374]: 2025-12-06 07:33:51.899183631 +0000 UTC m=+0.060324693 container health_status ae4fd08cdec31d6ae2a6b91ffff7deb6ca5a8375cb52ee4b685cc6f25796824e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec  6 02:33:51 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 02:33:51 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3573876812' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 02:33:51 np0005548731 podman[284372]: 2025-12-06 07:33:51.918317848 +0000 UTC m=+0.081351096 container health_status 26c2ce9bdb83c6db5ccad7f5b2a7cb4e9354ab0235ad2f942ea42c8feda2142b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_metadata_agent)
Dec  6 02:33:51 np0005548731 podman[284373]: 2025-12-06 07:33:51.925196106 +0000 UTC m=+0.088446529 container health_status a118c3becf56cfbcae208065771ebf10a65162bb7b96389971293e534823fb78 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_controller)
Dec  6 02:33:51 np0005548731 nova_compute[232433]: 2025-12-06 07:33:51.934 232437 DEBUG oslo_concurrency.processutils [None req-29e6f8d9-8203-432f-9b3f-c995def36d09 bb72a3e638064a9896ab40615cfc7d67 1b09bb22417b499cbf8771188f7ae36f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.450s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:33:51 np0005548731 nova_compute[232433]: 2025-12-06 07:33:51.940 232437 DEBUG nova.compute.provider_tree [None req-29e6f8d9-8203-432f-9b3f-c995def36d09 bb72a3e638064a9896ab40615cfc7d67 1b09bb22417b499cbf8771188f7ae36f - - default default] Inventory has not changed in ProviderTree for provider: 6d00757a-082f-486d-ae84-869a2ba2e6e7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  6 02:33:51 np0005548731 nova_compute[232433]: 2025-12-06 07:33:51.961 232437 DEBUG nova.scheduler.client.report [None req-29e6f8d9-8203-432f-9b3f-c995def36d09 bb72a3e638064a9896ab40615cfc7d67 1b09bb22417b499cbf8771188f7ae36f - - default default] Inventory has not changed for provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  6 02:33:51 np0005548731 nova_compute[232433]: 2025-12-06 07:33:51.991 232437 DEBUG oslo_concurrency.lockutils [None req-29e6f8d9-8203-432f-9b3f-c995def36d09 bb72a3e638064a9896ab40615cfc7d67 1b09bb22417b499cbf8771188f7ae36f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.726s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:33:51 np0005548731 nova_compute[232433]: 2025-12-06 07:33:51.992 232437 DEBUG nova.compute.manager [None req-29e6f8d9-8203-432f-9b3f-c995def36d09 bb72a3e638064a9896ab40615cfc7d67 1b09bb22417b499cbf8771188f7ae36f - - default default] [instance: e3caec0b-7382-4ccf-9d1b-c554a41c2c52] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Dec  6 02:33:52 np0005548731 nova_compute[232433]: 2025-12-06 07:33:52.053 232437 DEBUG nova.compute.manager [None req-29e6f8d9-8203-432f-9b3f-c995def36d09 bb72a3e638064a9896ab40615cfc7d67 1b09bb22417b499cbf8771188f7ae36f - - default default] [instance: e3caec0b-7382-4ccf-9d1b-c554a41c2c52] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Dec  6 02:33:52 np0005548731 nova_compute[232433]: 2025-12-06 07:33:52.054 232437 DEBUG nova.network.neutron [None req-29e6f8d9-8203-432f-9b3f-c995def36d09 bb72a3e638064a9896ab40615cfc7d67 1b09bb22417b499cbf8771188f7ae36f - - default default] [instance: e3caec0b-7382-4ccf-9d1b-c554a41c2c52] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Dec  6 02:33:52 np0005548731 nova_compute[232433]: 2025-12-06 07:33:52.070 232437 INFO nova.virt.libvirt.driver [None req-29e6f8d9-8203-432f-9b3f-c995def36d09 bb72a3e638064a9896ab40615cfc7d67 1b09bb22417b499cbf8771188f7ae36f - - default default] [instance: e3caec0b-7382-4ccf-9d1b-c554a41c2c52] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Dec  6 02:33:52 np0005548731 nova_compute[232433]: 2025-12-06 07:33:52.085 232437 DEBUG nova.compute.manager [None req-29e6f8d9-8203-432f-9b3f-c995def36d09 bb72a3e638064a9896ab40615cfc7d67 1b09bb22417b499cbf8771188f7ae36f - - default default] [instance: e3caec0b-7382-4ccf-9d1b-c554a41c2c52] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Dec  6 02:33:52 np0005548731 nova_compute[232433]: 2025-12-06 07:33:52.157 232437 DEBUG nova.compute.manager [None req-29e6f8d9-8203-432f-9b3f-c995def36d09 bb72a3e638064a9896ab40615cfc7d67 1b09bb22417b499cbf8771188f7ae36f - - default default] [instance: e3caec0b-7382-4ccf-9d1b-c554a41c2c52] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Dec  6 02:33:52 np0005548731 nova_compute[232433]: 2025-12-06 07:33:52.158 232437 DEBUG nova.virt.libvirt.driver [None req-29e6f8d9-8203-432f-9b3f-c995def36d09 bb72a3e638064a9896ab40615cfc7d67 1b09bb22417b499cbf8771188f7ae36f - - default default] [instance: e3caec0b-7382-4ccf-9d1b-c554a41c2c52] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Dec  6 02:33:52 np0005548731 nova_compute[232433]: 2025-12-06 07:33:52.158 232437 INFO nova.virt.libvirt.driver [None req-29e6f8d9-8203-432f-9b3f-c995def36d09 bb72a3e638064a9896ab40615cfc7d67 1b09bb22417b499cbf8771188f7ae36f - - default default] [instance: e3caec0b-7382-4ccf-9d1b-c554a41c2c52] Creating image(s)#033[00m
Dec  6 02:33:52 np0005548731 nova_compute[232433]: 2025-12-06 07:33:52.181 232437 DEBUG nova.storage.rbd_utils [None req-29e6f8d9-8203-432f-9b3f-c995def36d09 bb72a3e638064a9896ab40615cfc7d67 1b09bb22417b499cbf8771188f7ae36f - - default default] rbd image e3caec0b-7382-4ccf-9d1b-c554a41c2c52_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:33:52 np0005548731 nova_compute[232433]: 2025-12-06 07:33:52.204 232437 DEBUG nova.storage.rbd_utils [None req-29e6f8d9-8203-432f-9b3f-c995def36d09 bb72a3e638064a9896ab40615cfc7d67 1b09bb22417b499cbf8771188f7ae36f - - default default] rbd image e3caec0b-7382-4ccf-9d1b-c554a41c2c52_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:33:52 np0005548731 nova_compute[232433]: 2025-12-06 07:33:52.225 232437 DEBUG nova.storage.rbd_utils [None req-29e6f8d9-8203-432f-9b3f-c995def36d09 bb72a3e638064a9896ab40615cfc7d67 1b09bb22417b499cbf8771188f7ae36f - - default default] rbd image e3caec0b-7382-4ccf-9d1b-c554a41c2c52_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:33:52 np0005548731 nova_compute[232433]: 2025-12-06 07:33:52.229 232437 DEBUG oslo_concurrency.processutils [None req-29e6f8d9-8203-432f-9b3f-c995def36d09 bb72a3e638064a9896ab40615cfc7d67 1b09bb22417b499cbf8771188f7ae36f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:33:52 np0005548731 nova_compute[232433]: 2025-12-06 07:33:52.290 232437 DEBUG oslo_concurrency.processutils [None req-29e6f8d9-8203-432f-9b3f-c995def36d09 bb72a3e638064a9896ab40615cfc7d67 1b09bb22417b499cbf8771188f7ae36f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:33:52 np0005548731 nova_compute[232433]: 2025-12-06 07:33:52.291 232437 DEBUG oslo_concurrency.lockutils [None req-29e6f8d9-8203-432f-9b3f-c995def36d09 bb72a3e638064a9896ab40615cfc7d67 1b09bb22417b499cbf8771188f7ae36f - - default default] Acquiring lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:33:52 np0005548731 nova_compute[232433]: 2025-12-06 07:33:52.292 232437 DEBUG oslo_concurrency.lockutils [None req-29e6f8d9-8203-432f-9b3f-c995def36d09 bb72a3e638064a9896ab40615cfc7d67 1b09bb22417b499cbf8771188f7ae36f - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:33:52 np0005548731 nova_compute[232433]: 2025-12-06 07:33:52.292 232437 DEBUG oslo_concurrency.lockutils [None req-29e6f8d9-8203-432f-9b3f-c995def36d09 bb72a3e638064a9896ab40615cfc7d67 1b09bb22417b499cbf8771188f7ae36f - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:33:52 np0005548731 nova_compute[232433]: 2025-12-06 07:33:52.314 232437 DEBUG nova.storage.rbd_utils [None req-29e6f8d9-8203-432f-9b3f-c995def36d09 bb72a3e638064a9896ab40615cfc7d67 1b09bb22417b499cbf8771188f7ae36f - - default default] rbd image e3caec0b-7382-4ccf-9d1b-c554a41c2c52_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:33:52 np0005548731 nova_compute[232433]: 2025-12-06 07:33:52.317 232437 DEBUG oslo_concurrency.processutils [None req-29e6f8d9-8203-432f-9b3f-c995def36d09 bb72a3e638064a9896ab40615cfc7d67 1b09bb22417b499cbf8771188f7ae36f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef e3caec0b-7382-4ccf-9d1b-c554a41c2c52_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:33:52 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:33:52 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:33:52 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:33:52.499 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:33:52 np0005548731 nova_compute[232433]: 2025-12-06 07:33:52.556 232437 DEBUG nova.policy [None req-29e6f8d9-8203-432f-9b3f-c995def36d09 bb72a3e638064a9896ab40615cfc7d67 1b09bb22417b499cbf8771188f7ae36f - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'bb72a3e638064a9896ab40615cfc7d67', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '1b09bb22417b499cbf8771188f7ae36f', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Dec  6 02:33:52 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e279 e279: 3 total, 3 up, 3 in
Dec  6 02:33:52 np0005548731 nova_compute[232433]: 2025-12-06 07:33:52.798 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:33:53 np0005548731 nova_compute[232433]: 2025-12-06 07:33:53.068 232437 DEBUG oslo_concurrency.processutils [None req-29e6f8d9-8203-432f-9b3f-c995def36d09 bb72a3e638064a9896ab40615cfc7d67 1b09bb22417b499cbf8771188f7ae36f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef e3caec0b-7382-4ccf-9d1b-c554a41c2c52_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.751s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:33:53 np0005548731 nova_compute[232433]: 2025-12-06 07:33:53.142 232437 DEBUG nova.storage.rbd_utils [None req-29e6f8d9-8203-432f-9b3f-c995def36d09 bb72a3e638064a9896ab40615cfc7d67 1b09bb22417b499cbf8771188f7ae36f - - default default] resizing rbd image e3caec0b-7382-4ccf-9d1b-c554a41c2c52_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Dec  6 02:33:53 np0005548731 nova_compute[232433]: 2025-12-06 07:33:53.272 232437 DEBUG nova.network.neutron [None req-29e6f8d9-8203-432f-9b3f-c995def36d09 bb72a3e638064a9896ab40615cfc7d67 1b09bb22417b499cbf8771188f7ae36f - - default default] [instance: e3caec0b-7382-4ccf-9d1b-c554a41c2c52] Successfully created port: f41e3444-24ee-4425-969b-99c4c3ebbd47 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Dec  6 02:33:53 np0005548731 nova_compute[232433]: 2025-12-06 07:33:53.338 232437 DEBUG nova.objects.instance [None req-29e6f8d9-8203-432f-9b3f-c995def36d09 bb72a3e638064a9896ab40615cfc7d67 1b09bb22417b499cbf8771188f7ae36f - - default default] Lazy-loading 'migration_context' on Instance uuid e3caec0b-7382-4ccf-9d1b-c554a41c2c52 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:33:53 np0005548731 nova_compute[232433]: 2025-12-06 07:33:53.357 232437 DEBUG nova.virt.libvirt.driver [None req-29e6f8d9-8203-432f-9b3f-c995def36d09 bb72a3e638064a9896ab40615cfc7d67 1b09bb22417b499cbf8771188f7ae36f - - default default] [instance: e3caec0b-7382-4ccf-9d1b-c554a41c2c52] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Dec  6 02:33:53 np0005548731 nova_compute[232433]: 2025-12-06 07:33:53.358 232437 DEBUG nova.virt.libvirt.driver [None req-29e6f8d9-8203-432f-9b3f-c995def36d09 bb72a3e638064a9896ab40615cfc7d67 1b09bb22417b499cbf8771188f7ae36f - - default default] [instance: e3caec0b-7382-4ccf-9d1b-c554a41c2c52] Ensure instance console log exists: /var/lib/nova/instances/e3caec0b-7382-4ccf-9d1b-c554a41c2c52/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec  6 02:33:53 np0005548731 nova_compute[232433]: 2025-12-06 07:33:53.358 232437 DEBUG oslo_concurrency.lockutils [None req-29e6f8d9-8203-432f-9b3f-c995def36d09 bb72a3e638064a9896ab40615cfc7d67 1b09bb22417b499cbf8771188f7ae36f - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:33:53 np0005548731 nova_compute[232433]: 2025-12-06 07:33:53.359 232437 DEBUG oslo_concurrency.lockutils [None req-29e6f8d9-8203-432f-9b3f-c995def36d09 bb72a3e638064a9896ab40615cfc7d67 1b09bb22417b499cbf8771188f7ae36f - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:33:53 np0005548731 nova_compute[232433]: 2025-12-06 07:33:53.359 232437 DEBUG oslo_concurrency.lockutils [None req-29e6f8d9-8203-432f-9b3f-c995def36d09 bb72a3e638064a9896ab40615cfc7d67 1b09bb22417b499cbf8771188f7ae36f - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:33:53 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e279 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:33:53 np0005548731 ovn_controller[133927]: 2025-12-06T07:33:53Z|00077|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:1f:fb:86 10.100.0.9
Dec  6 02:33:53 np0005548731 ovn_controller[133927]: 2025-12-06T07:33:53Z|00078|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:1f:fb:86 10.100.0.9
Dec  6 02:33:53 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:33:53 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:33:53 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:33:53.673 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:33:53 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e280 e280: 3 total, 3 up, 3 in
Dec  6 02:33:54 np0005548731 nova_compute[232433]: 2025-12-06 07:33:54.240 232437 DEBUG nova.network.neutron [None req-29e6f8d9-8203-432f-9b3f-c995def36d09 bb72a3e638064a9896ab40615cfc7d67 1b09bb22417b499cbf8771188f7ae36f - - default default] [instance: e3caec0b-7382-4ccf-9d1b-c554a41c2c52] Successfully updated port: f41e3444-24ee-4425-969b-99c4c3ebbd47 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Dec  6 02:33:54 np0005548731 nova_compute[232433]: 2025-12-06 07:33:54.256 232437 DEBUG oslo_concurrency.lockutils [None req-29e6f8d9-8203-432f-9b3f-c995def36d09 bb72a3e638064a9896ab40615cfc7d67 1b09bb22417b499cbf8771188f7ae36f - - default default] Acquiring lock "refresh_cache-e3caec0b-7382-4ccf-9d1b-c554a41c2c52" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 02:33:54 np0005548731 nova_compute[232433]: 2025-12-06 07:33:54.256 232437 DEBUG oslo_concurrency.lockutils [None req-29e6f8d9-8203-432f-9b3f-c995def36d09 bb72a3e638064a9896ab40615cfc7d67 1b09bb22417b499cbf8771188f7ae36f - - default default] Acquired lock "refresh_cache-e3caec0b-7382-4ccf-9d1b-c554a41c2c52" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 02:33:54 np0005548731 nova_compute[232433]: 2025-12-06 07:33:54.256 232437 DEBUG nova.network.neutron [None req-29e6f8d9-8203-432f-9b3f-c995def36d09 bb72a3e638064a9896ab40615cfc7d67 1b09bb22417b499cbf8771188f7ae36f - - default default] [instance: e3caec0b-7382-4ccf-9d1b-c554a41c2c52] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec  6 02:33:54 np0005548731 nova_compute[232433]: 2025-12-06 07:33:54.340 232437 DEBUG nova.compute.manager [req-37fcbdf9-fa9b-4539-8b91-34e47af18d00 req-db64e2ef-2cec-44b5-a793-8573a99874be 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: e3caec0b-7382-4ccf-9d1b-c554a41c2c52] Received event network-changed-f41e3444-24ee-4425-969b-99c4c3ebbd47 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:33:54 np0005548731 nova_compute[232433]: 2025-12-06 07:33:54.340 232437 DEBUG nova.compute.manager [req-37fcbdf9-fa9b-4539-8b91-34e47af18d00 req-db64e2ef-2cec-44b5-a793-8573a99874be 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: e3caec0b-7382-4ccf-9d1b-c554a41c2c52] Refreshing instance network info cache due to event network-changed-f41e3444-24ee-4425-969b-99c4c3ebbd47. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  6 02:33:54 np0005548731 nova_compute[232433]: 2025-12-06 07:33:54.340 232437 DEBUG oslo_concurrency.lockutils [req-37fcbdf9-fa9b-4539-8b91-34e47af18d00 req-db64e2ef-2cec-44b5-a793-8573a99874be 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-e3caec0b-7382-4ccf-9d1b-c554a41c2c52" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 02:33:54 np0005548731 nova_compute[232433]: 2025-12-06 07:33:54.417 232437 DEBUG nova.network.neutron [None req-29e6f8d9-8203-432f-9b3f-c995def36d09 bb72a3e638064a9896ab40615cfc7d67 1b09bb22417b499cbf8771188f7ae36f - - default default] [instance: e3caec0b-7382-4ccf-9d1b-c554a41c2c52] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec  6 02:33:54 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:33:54 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 02:33:54 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:33:54.501 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 02:33:54 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e281 e281: 3 total, 3 up, 3 in
Dec  6 02:33:55 np0005548731 nova_compute[232433]: 2025-12-06 07:33:55.557 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:33:55 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:33:55 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:33:55 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:33:55.676 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:33:56 np0005548731 nova_compute[232433]: 2025-12-06 07:33:56.502 232437 DEBUG nova.network.neutron [None req-29e6f8d9-8203-432f-9b3f-c995def36d09 bb72a3e638064a9896ab40615cfc7d67 1b09bb22417b499cbf8771188f7ae36f - - default default] [instance: e3caec0b-7382-4ccf-9d1b-c554a41c2c52] Updating instance_info_cache with network_info: [{"id": "f41e3444-24ee-4425-969b-99c4c3ebbd47", "address": "fa:16:3e:b0:3e:3e", "network": {"id": "6a04bdf1-ac7f-432e-8ec7-636af678cb2a", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-483634897-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1b09bb22417b499cbf8771188f7ae36f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf41e3444-24", "ovs_interfaceid": "f41e3444-24ee-4425-969b-99c4c3ebbd47", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 02:33:56 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:33:56 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:33:56 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:33:56.503 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:33:56 np0005548731 nova_compute[232433]: 2025-12-06 07:33:56.540 232437 DEBUG oslo_concurrency.lockutils [None req-29e6f8d9-8203-432f-9b3f-c995def36d09 bb72a3e638064a9896ab40615cfc7d67 1b09bb22417b499cbf8771188f7ae36f - - default default] Releasing lock "refresh_cache-e3caec0b-7382-4ccf-9d1b-c554a41c2c52" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 02:33:56 np0005548731 nova_compute[232433]: 2025-12-06 07:33:56.541 232437 DEBUG nova.compute.manager [None req-29e6f8d9-8203-432f-9b3f-c995def36d09 bb72a3e638064a9896ab40615cfc7d67 1b09bb22417b499cbf8771188f7ae36f - - default default] [instance: e3caec0b-7382-4ccf-9d1b-c554a41c2c52] Instance network_info: |[{"id": "f41e3444-24ee-4425-969b-99c4c3ebbd47", "address": "fa:16:3e:b0:3e:3e", "network": {"id": "6a04bdf1-ac7f-432e-8ec7-636af678cb2a", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-483634897-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1b09bb22417b499cbf8771188f7ae36f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf41e3444-24", "ovs_interfaceid": "f41e3444-24ee-4425-969b-99c4c3ebbd47", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Dec  6 02:33:56 np0005548731 nova_compute[232433]: 2025-12-06 07:33:56.541 232437 DEBUG oslo_concurrency.lockutils [req-37fcbdf9-fa9b-4539-8b91-34e47af18d00 req-db64e2ef-2cec-44b5-a793-8573a99874be 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-e3caec0b-7382-4ccf-9d1b-c554a41c2c52" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 02:33:56 np0005548731 nova_compute[232433]: 2025-12-06 07:33:56.541 232437 DEBUG nova.network.neutron [req-37fcbdf9-fa9b-4539-8b91-34e47af18d00 req-db64e2ef-2cec-44b5-a793-8573a99874be 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: e3caec0b-7382-4ccf-9d1b-c554a41c2c52] Refreshing network info cache for port f41e3444-24ee-4425-969b-99c4c3ebbd47 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  6 02:33:56 np0005548731 nova_compute[232433]: 2025-12-06 07:33:56.544 232437 DEBUG nova.virt.libvirt.driver [None req-29e6f8d9-8203-432f-9b3f-c995def36d09 bb72a3e638064a9896ab40615cfc7d67 1b09bb22417b499cbf8771188f7ae36f - - default default] [instance: e3caec0b-7382-4ccf-9d1b-c554a41c2c52] Start _get_guest_xml network_info=[{"id": "f41e3444-24ee-4425-969b-99c4c3ebbd47", "address": "fa:16:3e:b0:3e:3e", "network": {"id": "6a04bdf1-ac7f-432e-8ec7-636af678cb2a", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-483634897-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1b09bb22417b499cbf8771188f7ae36f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf41e3444-24", "ovs_interfaceid": "f41e3444-24ee-4425-969b-99c4c3ebbd47", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'device_type': 'disk', 'encryption_secret_uuid': None, 'disk_bus': 'virtio', 'boot_index': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'encryption_options': None, 'size': 0, 'encrypted': False, 'image_id': '6efab05d-c7cf-4770-a5c3-c806a2739063'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec  6 02:33:56 np0005548731 nova_compute[232433]: 2025-12-06 07:33:56.549 232437 WARNING nova.virt.libvirt.driver [None req-29e6f8d9-8203-432f-9b3f-c995def36d09 bb72a3e638064a9896ab40615cfc7d67 1b09bb22417b499cbf8771188f7ae36f - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  6 02:33:56 np0005548731 nova_compute[232433]: 2025-12-06 07:33:56.558 232437 DEBUG nova.virt.libvirt.host [None req-29e6f8d9-8203-432f-9b3f-c995def36d09 bb72a3e638064a9896ab40615cfc7d67 1b09bb22417b499cbf8771188f7ae36f - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec  6 02:33:56 np0005548731 nova_compute[232433]: 2025-12-06 07:33:56.559 232437 DEBUG nova.virt.libvirt.host [None req-29e6f8d9-8203-432f-9b3f-c995def36d09 bb72a3e638064a9896ab40615cfc7d67 1b09bb22417b499cbf8771188f7ae36f - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec  6 02:33:56 np0005548731 nova_compute[232433]: 2025-12-06 07:33:56.562 232437 DEBUG nova.virt.libvirt.host [None req-29e6f8d9-8203-432f-9b3f-c995def36d09 bb72a3e638064a9896ab40615cfc7d67 1b09bb22417b499cbf8771188f7ae36f - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec  6 02:33:56 np0005548731 nova_compute[232433]: 2025-12-06 07:33:56.563 232437 DEBUG nova.virt.libvirt.host [None req-29e6f8d9-8203-432f-9b3f-c995def36d09 bb72a3e638064a9896ab40615cfc7d67 1b09bb22417b499cbf8771188f7ae36f - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec  6 02:33:56 np0005548731 nova_compute[232433]: 2025-12-06 07:33:56.565 232437 DEBUG nova.virt.libvirt.driver [None req-29e6f8d9-8203-432f-9b3f-c995def36d09 bb72a3e638064a9896ab40615cfc7d67 1b09bb22417b499cbf8771188f7ae36f - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec  6 02:33:56 np0005548731 nova_compute[232433]: 2025-12-06 07:33:56.565 232437 DEBUG nova.virt.hardware [None req-29e6f8d9-8203-432f-9b3f-c995def36d09 bb72a3e638064a9896ab40615cfc7d67 1b09bb22417b499cbf8771188f7ae36f - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-06T06:56:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='25848a18-11d9-4f11-80b5-5d005675c76d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec  6 02:33:56 np0005548731 nova_compute[232433]: 2025-12-06 07:33:56.565 232437 DEBUG nova.virt.hardware [None req-29e6f8d9-8203-432f-9b3f-c995def36d09 bb72a3e638064a9896ab40615cfc7d67 1b09bb22417b499cbf8771188f7ae36f - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec  6 02:33:56 np0005548731 nova_compute[232433]: 2025-12-06 07:33:56.566 232437 DEBUG nova.virt.hardware [None req-29e6f8d9-8203-432f-9b3f-c995def36d09 bb72a3e638064a9896ab40615cfc7d67 1b09bb22417b499cbf8771188f7ae36f - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec  6 02:33:56 np0005548731 nova_compute[232433]: 2025-12-06 07:33:56.566 232437 DEBUG nova.virt.hardware [None req-29e6f8d9-8203-432f-9b3f-c995def36d09 bb72a3e638064a9896ab40615cfc7d67 1b09bb22417b499cbf8771188f7ae36f - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec  6 02:33:56 np0005548731 nova_compute[232433]: 2025-12-06 07:33:56.566 232437 DEBUG nova.virt.hardware [None req-29e6f8d9-8203-432f-9b3f-c995def36d09 bb72a3e638064a9896ab40615cfc7d67 1b09bb22417b499cbf8771188f7ae36f - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec  6 02:33:56 np0005548731 nova_compute[232433]: 2025-12-06 07:33:56.567 232437 DEBUG nova.virt.hardware [None req-29e6f8d9-8203-432f-9b3f-c995def36d09 bb72a3e638064a9896ab40615cfc7d67 1b09bb22417b499cbf8771188f7ae36f - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec  6 02:33:56 np0005548731 nova_compute[232433]: 2025-12-06 07:33:56.567 232437 DEBUG nova.virt.hardware [None req-29e6f8d9-8203-432f-9b3f-c995def36d09 bb72a3e638064a9896ab40615cfc7d67 1b09bb22417b499cbf8771188f7ae36f - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec  6 02:33:56 np0005548731 nova_compute[232433]: 2025-12-06 07:33:56.567 232437 DEBUG nova.virt.hardware [None req-29e6f8d9-8203-432f-9b3f-c995def36d09 bb72a3e638064a9896ab40615cfc7d67 1b09bb22417b499cbf8771188f7ae36f - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec  6 02:33:56 np0005548731 nova_compute[232433]: 2025-12-06 07:33:56.567 232437 DEBUG nova.virt.hardware [None req-29e6f8d9-8203-432f-9b3f-c995def36d09 bb72a3e638064a9896ab40615cfc7d67 1b09bb22417b499cbf8771188f7ae36f - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec  6 02:33:56 np0005548731 nova_compute[232433]: 2025-12-06 07:33:56.568 232437 DEBUG nova.virt.hardware [None req-29e6f8d9-8203-432f-9b3f-c995def36d09 bb72a3e638064a9896ab40615cfc7d67 1b09bb22417b499cbf8771188f7ae36f - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec  6 02:33:56 np0005548731 nova_compute[232433]: 2025-12-06 07:33:56.568 232437 DEBUG nova.virt.hardware [None req-29e6f8d9-8203-432f-9b3f-c995def36d09 bb72a3e638064a9896ab40615cfc7d67 1b09bb22417b499cbf8771188f7ae36f - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec  6 02:33:56 np0005548731 nova_compute[232433]: 2025-12-06 07:33:56.572 232437 DEBUG oslo_concurrency.processutils [None req-29e6f8d9-8203-432f-9b3f-c995def36d09 bb72a3e638064a9896ab40615cfc7d67 1b09bb22417b499cbf8771188f7ae36f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:33:56 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  6 02:33:56 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/789745650' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  6 02:33:57 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:33:57 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:33:57 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:33:57.678 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:33:57 np0005548731 nova_compute[232433]: 2025-12-06 07:33:57.800 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:33:58 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e281 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:33:58 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:33:58 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 02:33:58 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:33:58.505 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 02:33:58 np0005548731 nova_compute[232433]: 2025-12-06 07:33:58.975 232437 DEBUG oslo_concurrency.processutils [None req-29e6f8d9-8203-432f-9b3f-c995def36d09 bb72a3e638064a9896ab40615cfc7d67 1b09bb22417b499cbf8771188f7ae36f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.403s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:33:59 np0005548731 nova_compute[232433]: 2025-12-06 07:33:59.001 232437 DEBUG nova.storage.rbd_utils [None req-29e6f8d9-8203-432f-9b3f-c995def36d09 bb72a3e638064a9896ab40615cfc7d67 1b09bb22417b499cbf8771188f7ae36f - - default default] rbd image e3caec0b-7382-4ccf-9d1b-c554a41c2c52_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:33:59 np0005548731 nova_compute[232433]: 2025-12-06 07:33:59.007 232437 DEBUG oslo_concurrency.processutils [None req-29e6f8d9-8203-432f-9b3f-c995def36d09 bb72a3e638064a9896ab40615cfc7d67 1b09bb22417b499cbf8771188f7ae36f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:33:59 np0005548731 nova_compute[232433]: 2025-12-06 07:33:59.083 232437 DEBUG nova.network.neutron [req-37fcbdf9-fa9b-4539-8b91-34e47af18d00 req-db64e2ef-2cec-44b5-a793-8573a99874be 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: e3caec0b-7382-4ccf-9d1b-c554a41c2c52] Updated VIF entry in instance network info cache for port f41e3444-24ee-4425-969b-99c4c3ebbd47. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  6 02:33:59 np0005548731 nova_compute[232433]: 2025-12-06 07:33:59.084 232437 DEBUG nova.network.neutron [req-37fcbdf9-fa9b-4539-8b91-34e47af18d00 req-db64e2ef-2cec-44b5-a793-8573a99874be 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: e3caec0b-7382-4ccf-9d1b-c554a41c2c52] Updating instance_info_cache with network_info: [{"id": "f41e3444-24ee-4425-969b-99c4c3ebbd47", "address": "fa:16:3e:b0:3e:3e", "network": {"id": "6a04bdf1-ac7f-432e-8ec7-636af678cb2a", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-483634897-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1b09bb22417b499cbf8771188f7ae36f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf41e3444-24", "ovs_interfaceid": "f41e3444-24ee-4425-969b-99c4c3ebbd47", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 02:33:59 np0005548731 nova_compute[232433]: 2025-12-06 07:33:59.102 232437 DEBUG oslo_concurrency.lockutils [req-37fcbdf9-fa9b-4539-8b91-34e47af18d00 req-db64e2ef-2cec-44b5-a793-8573a99874be 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-e3caec0b-7382-4ccf-9d1b-c554a41c2c52" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 02:33:59 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  6 02:33:59 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/494249757' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  6 02:33:59 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:33:59 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:33:59 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:33:59.681 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:33:59 np0005548731 nova_compute[232433]: 2025-12-06 07:33:59.775 232437 DEBUG oslo_concurrency.processutils [None req-29e6f8d9-8203-432f-9b3f-c995def36d09 bb72a3e638064a9896ab40615cfc7d67 1b09bb22417b499cbf8771188f7ae36f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.768s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:33:59 np0005548731 nova_compute[232433]: 2025-12-06 07:33:59.777 232437 DEBUG nova.virt.libvirt.vif [None req-29e6f8d9-8203-432f-9b3f-c995def36d09 bb72a3e638064a9896ab40615cfc7d67 1b09bb22417b499cbf8771188f7ae36f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T07:33:50Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-InstanceActionsTestJSON-server-1654269090',display_name='tempest-InstanceActionsTestJSON-server-1654269090',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-instanceactionstestjson-server-1654269090',id=123,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1b09bb22417b499cbf8771188f7ae36f',ramdisk_id='',reservation_id='r-yjwup1mc',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-InstanceActionsTestJSON-163749737',owner_user_name='tempest-InstanceAct
ionsTestJSON-163749737-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:33:52Z,user_data=None,user_id='bb72a3e638064a9896ab40615cfc7d67',uuid=e3caec0b-7382-4ccf-9d1b-c554a41c2c52,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f41e3444-24ee-4425-969b-99c4c3ebbd47", "address": "fa:16:3e:b0:3e:3e", "network": {"id": "6a04bdf1-ac7f-432e-8ec7-636af678cb2a", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-483634897-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1b09bb22417b499cbf8771188f7ae36f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf41e3444-24", "ovs_interfaceid": "f41e3444-24ee-4425-969b-99c4c3ebbd47", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec  6 02:33:59 np0005548731 nova_compute[232433]: 2025-12-06 07:33:59.777 232437 DEBUG nova.network.os_vif_util [None req-29e6f8d9-8203-432f-9b3f-c995def36d09 bb72a3e638064a9896ab40615cfc7d67 1b09bb22417b499cbf8771188f7ae36f - - default default] Converting VIF {"id": "f41e3444-24ee-4425-969b-99c4c3ebbd47", "address": "fa:16:3e:b0:3e:3e", "network": {"id": "6a04bdf1-ac7f-432e-8ec7-636af678cb2a", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-483634897-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1b09bb22417b499cbf8771188f7ae36f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf41e3444-24", "ovs_interfaceid": "f41e3444-24ee-4425-969b-99c4c3ebbd47", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 02:33:59 np0005548731 nova_compute[232433]: 2025-12-06 07:33:59.778 232437 DEBUG nova.network.os_vif_util [None req-29e6f8d9-8203-432f-9b3f-c995def36d09 bb72a3e638064a9896ab40615cfc7d67 1b09bb22417b499cbf8771188f7ae36f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b0:3e:3e,bridge_name='br-int',has_traffic_filtering=True,id=f41e3444-24ee-4425-969b-99c4c3ebbd47,network=Network(6a04bdf1-ac7f-432e-8ec7-636af678cb2a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf41e3444-24') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 02:33:59 np0005548731 nova_compute[232433]: 2025-12-06 07:33:59.779 232437 DEBUG nova.objects.instance [None req-29e6f8d9-8203-432f-9b3f-c995def36d09 bb72a3e638064a9896ab40615cfc7d67 1b09bb22417b499cbf8771188f7ae36f - - default default] Lazy-loading 'pci_devices' on Instance uuid e3caec0b-7382-4ccf-9d1b-c554a41c2c52 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:33:59 np0005548731 nova_compute[232433]: 2025-12-06 07:33:59.796 232437 DEBUG nova.virt.libvirt.driver [None req-29e6f8d9-8203-432f-9b3f-c995def36d09 bb72a3e638064a9896ab40615cfc7d67 1b09bb22417b499cbf8771188f7ae36f - - default default] [instance: e3caec0b-7382-4ccf-9d1b-c554a41c2c52] End _get_guest_xml xml=<domain type="kvm">
Dec  6 02:33:59 np0005548731 nova_compute[232433]:  <uuid>e3caec0b-7382-4ccf-9d1b-c554a41c2c52</uuid>
Dec  6 02:33:59 np0005548731 nova_compute[232433]:  <name>instance-0000007b</name>
Dec  6 02:33:59 np0005548731 nova_compute[232433]:  <memory>131072</memory>
Dec  6 02:33:59 np0005548731 nova_compute[232433]:  <vcpu>1</vcpu>
Dec  6 02:33:59 np0005548731 nova_compute[232433]:  <metadata>
Dec  6 02:33:59 np0005548731 nova_compute[232433]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec  6 02:33:59 np0005548731 nova_compute[232433]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec  6 02:33:59 np0005548731 nova_compute[232433]:      <nova:name>tempest-InstanceActionsTestJSON-server-1654269090</nova:name>
Dec  6 02:33:59 np0005548731 nova_compute[232433]:      <nova:creationTime>2025-12-06 07:33:56</nova:creationTime>
Dec  6 02:33:59 np0005548731 nova_compute[232433]:      <nova:flavor name="m1.nano">
Dec  6 02:33:59 np0005548731 nova_compute[232433]:        <nova:memory>128</nova:memory>
Dec  6 02:33:59 np0005548731 nova_compute[232433]:        <nova:disk>1</nova:disk>
Dec  6 02:33:59 np0005548731 nova_compute[232433]:        <nova:swap>0</nova:swap>
Dec  6 02:33:59 np0005548731 nova_compute[232433]:        <nova:ephemeral>0</nova:ephemeral>
Dec  6 02:33:59 np0005548731 nova_compute[232433]:        <nova:vcpus>1</nova:vcpus>
Dec  6 02:33:59 np0005548731 nova_compute[232433]:      </nova:flavor>
Dec  6 02:33:59 np0005548731 nova_compute[232433]:      <nova:owner>
Dec  6 02:33:59 np0005548731 nova_compute[232433]:        <nova:user uuid="bb72a3e638064a9896ab40615cfc7d67">tempest-InstanceActionsTestJSON-163749737-project-member</nova:user>
Dec  6 02:33:59 np0005548731 nova_compute[232433]:        <nova:project uuid="1b09bb22417b499cbf8771188f7ae36f">tempest-InstanceActionsTestJSON-163749737</nova:project>
Dec  6 02:33:59 np0005548731 nova_compute[232433]:      </nova:owner>
Dec  6 02:33:59 np0005548731 nova_compute[232433]:      <nova:root type="image" uuid="6efab05d-c7cf-4770-a5c3-c806a2739063"/>
Dec  6 02:33:59 np0005548731 nova_compute[232433]:      <nova:ports>
Dec  6 02:33:59 np0005548731 nova_compute[232433]:        <nova:port uuid="f41e3444-24ee-4425-969b-99c4c3ebbd47">
Dec  6 02:33:59 np0005548731 nova_compute[232433]:          <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Dec  6 02:33:59 np0005548731 nova_compute[232433]:        </nova:port>
Dec  6 02:33:59 np0005548731 nova_compute[232433]:      </nova:ports>
Dec  6 02:33:59 np0005548731 nova_compute[232433]:    </nova:instance>
Dec  6 02:33:59 np0005548731 nova_compute[232433]:  </metadata>
Dec  6 02:33:59 np0005548731 nova_compute[232433]:  <sysinfo type="smbios">
Dec  6 02:33:59 np0005548731 nova_compute[232433]:    <system>
Dec  6 02:33:59 np0005548731 nova_compute[232433]:      <entry name="manufacturer">RDO</entry>
Dec  6 02:33:59 np0005548731 nova_compute[232433]:      <entry name="product">OpenStack Compute</entry>
Dec  6 02:33:59 np0005548731 nova_compute[232433]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec  6 02:33:59 np0005548731 nova_compute[232433]:      <entry name="serial">e3caec0b-7382-4ccf-9d1b-c554a41c2c52</entry>
Dec  6 02:33:59 np0005548731 nova_compute[232433]:      <entry name="uuid">e3caec0b-7382-4ccf-9d1b-c554a41c2c52</entry>
Dec  6 02:33:59 np0005548731 nova_compute[232433]:      <entry name="family">Virtual Machine</entry>
Dec  6 02:33:59 np0005548731 nova_compute[232433]:    </system>
Dec  6 02:33:59 np0005548731 nova_compute[232433]:  </sysinfo>
Dec  6 02:33:59 np0005548731 nova_compute[232433]:  <os>
Dec  6 02:33:59 np0005548731 nova_compute[232433]:    <type arch="x86_64" machine="q35">hvm</type>
Dec  6 02:33:59 np0005548731 nova_compute[232433]:    <boot dev="hd"/>
Dec  6 02:33:59 np0005548731 nova_compute[232433]:    <smbios mode="sysinfo"/>
Dec  6 02:33:59 np0005548731 nova_compute[232433]:  </os>
Dec  6 02:33:59 np0005548731 nova_compute[232433]:  <features>
Dec  6 02:33:59 np0005548731 nova_compute[232433]:    <acpi/>
Dec  6 02:33:59 np0005548731 nova_compute[232433]:    <apic/>
Dec  6 02:33:59 np0005548731 nova_compute[232433]:    <vmcoreinfo/>
Dec  6 02:33:59 np0005548731 nova_compute[232433]:  </features>
Dec  6 02:33:59 np0005548731 nova_compute[232433]:  <clock offset="utc">
Dec  6 02:33:59 np0005548731 nova_compute[232433]:    <timer name="pit" tickpolicy="delay"/>
Dec  6 02:33:59 np0005548731 nova_compute[232433]:    <timer name="rtc" tickpolicy="catchup"/>
Dec  6 02:33:59 np0005548731 nova_compute[232433]:    <timer name="hpet" present="no"/>
Dec  6 02:33:59 np0005548731 nova_compute[232433]:  </clock>
Dec  6 02:33:59 np0005548731 nova_compute[232433]:  <cpu mode="custom" match="exact">
Dec  6 02:33:59 np0005548731 nova_compute[232433]:    <model>Nehalem</model>
Dec  6 02:33:59 np0005548731 nova_compute[232433]:    <topology sockets="1" cores="1" threads="1"/>
Dec  6 02:33:59 np0005548731 nova_compute[232433]:  </cpu>
Dec  6 02:33:59 np0005548731 nova_compute[232433]:  <devices>
Dec  6 02:33:59 np0005548731 nova_compute[232433]:    <disk type="network" device="disk">
Dec  6 02:33:59 np0005548731 nova_compute[232433]:      <driver type="raw" cache="none"/>
Dec  6 02:33:59 np0005548731 nova_compute[232433]:      <source protocol="rbd" name="vms/e3caec0b-7382-4ccf-9d1b-c554a41c2c52_disk">
Dec  6 02:33:59 np0005548731 nova_compute[232433]:        <host name="192.168.122.100" port="6789"/>
Dec  6 02:33:59 np0005548731 nova_compute[232433]:        <host name="192.168.122.102" port="6789"/>
Dec  6 02:33:59 np0005548731 nova_compute[232433]:        <host name="192.168.122.101" port="6789"/>
Dec  6 02:33:59 np0005548731 nova_compute[232433]:      </source>
Dec  6 02:33:59 np0005548731 nova_compute[232433]:      <auth username="openstack">
Dec  6 02:33:59 np0005548731 nova_compute[232433]:        <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec  6 02:33:59 np0005548731 nova_compute[232433]:      </auth>
Dec  6 02:33:59 np0005548731 nova_compute[232433]:      <target dev="vda" bus="virtio"/>
Dec  6 02:33:59 np0005548731 nova_compute[232433]:    </disk>
Dec  6 02:33:59 np0005548731 nova_compute[232433]:    <disk type="network" device="cdrom">
Dec  6 02:33:59 np0005548731 nova_compute[232433]:      <driver type="raw" cache="none"/>
Dec  6 02:33:59 np0005548731 nova_compute[232433]:      <source protocol="rbd" name="vms/e3caec0b-7382-4ccf-9d1b-c554a41c2c52_disk.config">
Dec  6 02:33:59 np0005548731 nova_compute[232433]:        <host name="192.168.122.100" port="6789"/>
Dec  6 02:33:59 np0005548731 nova_compute[232433]:        <host name="192.168.122.102" port="6789"/>
Dec  6 02:33:59 np0005548731 nova_compute[232433]:        <host name="192.168.122.101" port="6789"/>
Dec  6 02:33:59 np0005548731 nova_compute[232433]:      </source>
Dec  6 02:33:59 np0005548731 nova_compute[232433]:      <auth username="openstack">
Dec  6 02:33:59 np0005548731 nova_compute[232433]:        <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec  6 02:33:59 np0005548731 nova_compute[232433]:      </auth>
Dec  6 02:33:59 np0005548731 nova_compute[232433]:      <target dev="sda" bus="sata"/>
Dec  6 02:33:59 np0005548731 nova_compute[232433]:    </disk>
Dec  6 02:33:59 np0005548731 nova_compute[232433]:    <interface type="ethernet">
Dec  6 02:33:59 np0005548731 nova_compute[232433]:      <mac address="fa:16:3e:b0:3e:3e"/>
Dec  6 02:33:59 np0005548731 nova_compute[232433]:      <model type="virtio"/>
Dec  6 02:33:59 np0005548731 nova_compute[232433]:      <driver name="vhost" rx_queue_size="512"/>
Dec  6 02:33:59 np0005548731 nova_compute[232433]:      <mtu size="1442"/>
Dec  6 02:33:59 np0005548731 nova_compute[232433]:      <target dev="tapf41e3444-24"/>
Dec  6 02:33:59 np0005548731 nova_compute[232433]:    </interface>
Dec  6 02:33:59 np0005548731 nova_compute[232433]:    <serial type="pty">
Dec  6 02:33:59 np0005548731 nova_compute[232433]:      <log file="/var/lib/nova/instances/e3caec0b-7382-4ccf-9d1b-c554a41c2c52/console.log" append="off"/>
Dec  6 02:33:59 np0005548731 nova_compute[232433]:    </serial>
Dec  6 02:33:59 np0005548731 nova_compute[232433]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec  6 02:33:59 np0005548731 nova_compute[232433]:    <video>
Dec  6 02:33:59 np0005548731 nova_compute[232433]:      <model type="virtio"/>
Dec  6 02:33:59 np0005548731 nova_compute[232433]:    </video>
Dec  6 02:33:59 np0005548731 nova_compute[232433]:    <input type="tablet" bus="usb"/>
Dec  6 02:33:59 np0005548731 nova_compute[232433]:    <rng model="virtio">
Dec  6 02:33:59 np0005548731 nova_compute[232433]:      <backend model="random">/dev/urandom</backend>
Dec  6 02:33:59 np0005548731 nova_compute[232433]:    </rng>
Dec  6 02:33:59 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root"/>
Dec  6 02:33:59 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:33:59 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:33:59 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:33:59 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:33:59 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:33:59 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:33:59 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:33:59 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:33:59 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:33:59 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:33:59 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:33:59 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:33:59 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:33:59 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:33:59 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:33:59 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:33:59 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:33:59 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:33:59 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:33:59 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:33:59 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:33:59 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:33:59 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:33:59 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:33:59 np0005548731 nova_compute[232433]:    <controller type="usb" index="0"/>
Dec  6 02:33:59 np0005548731 nova_compute[232433]:    <memballoon model="virtio">
Dec  6 02:33:59 np0005548731 nova_compute[232433]:      <stats period="10"/>
Dec  6 02:33:59 np0005548731 nova_compute[232433]:    </memballoon>
Dec  6 02:33:59 np0005548731 nova_compute[232433]:  </devices>
Dec  6 02:33:59 np0005548731 nova_compute[232433]: </domain>
Dec  6 02:33:59 np0005548731 nova_compute[232433]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec  6 02:33:59 np0005548731 nova_compute[232433]: 2025-12-06 07:33:59.798 232437 DEBUG nova.compute.manager [None req-29e6f8d9-8203-432f-9b3f-c995def36d09 bb72a3e638064a9896ab40615cfc7d67 1b09bb22417b499cbf8771188f7ae36f - - default default] [instance: e3caec0b-7382-4ccf-9d1b-c554a41c2c52] Preparing to wait for external event network-vif-plugged-f41e3444-24ee-4425-969b-99c4c3ebbd47 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec  6 02:33:59 np0005548731 nova_compute[232433]: 2025-12-06 07:33:59.798 232437 DEBUG oslo_concurrency.lockutils [None req-29e6f8d9-8203-432f-9b3f-c995def36d09 bb72a3e638064a9896ab40615cfc7d67 1b09bb22417b499cbf8771188f7ae36f - - default default] Acquiring lock "e3caec0b-7382-4ccf-9d1b-c554a41c2c52-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:33:59 np0005548731 nova_compute[232433]: 2025-12-06 07:33:59.798 232437 DEBUG oslo_concurrency.lockutils [None req-29e6f8d9-8203-432f-9b3f-c995def36d09 bb72a3e638064a9896ab40615cfc7d67 1b09bb22417b499cbf8771188f7ae36f - - default default] Lock "e3caec0b-7382-4ccf-9d1b-c554a41c2c52-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:33:59 np0005548731 nova_compute[232433]: 2025-12-06 07:33:59.799 232437 DEBUG oslo_concurrency.lockutils [None req-29e6f8d9-8203-432f-9b3f-c995def36d09 bb72a3e638064a9896ab40615cfc7d67 1b09bb22417b499cbf8771188f7ae36f - - default default] Lock "e3caec0b-7382-4ccf-9d1b-c554a41c2c52-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:33:59 np0005548731 nova_compute[232433]: 2025-12-06 07:33:59.800 232437 DEBUG nova.virt.libvirt.vif [None req-29e6f8d9-8203-432f-9b3f-c995def36d09 bb72a3e638064a9896ab40615cfc7d67 1b09bb22417b499cbf8771188f7ae36f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T07:33:50Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-InstanceActionsTestJSON-server-1654269090',display_name='tempest-InstanceActionsTestJSON-server-1654269090',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-instanceactionstestjson-server-1654269090',id=123,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1b09bb22417b499cbf8771188f7ae36f',ramdisk_id='',reservation_id='r-yjwup1mc',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-InstanceActionsTestJSON-163749737',owner_user_name='tempest-I
nstanceActionsTestJSON-163749737-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:33:52Z,user_data=None,user_id='bb72a3e638064a9896ab40615cfc7d67',uuid=e3caec0b-7382-4ccf-9d1b-c554a41c2c52,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f41e3444-24ee-4425-969b-99c4c3ebbd47", "address": "fa:16:3e:b0:3e:3e", "network": {"id": "6a04bdf1-ac7f-432e-8ec7-636af678cb2a", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-483634897-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1b09bb22417b499cbf8771188f7ae36f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf41e3444-24", "ovs_interfaceid": "f41e3444-24ee-4425-969b-99c4c3ebbd47", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec  6 02:33:59 np0005548731 nova_compute[232433]: 2025-12-06 07:33:59.800 232437 DEBUG nova.network.os_vif_util [None req-29e6f8d9-8203-432f-9b3f-c995def36d09 bb72a3e638064a9896ab40615cfc7d67 1b09bb22417b499cbf8771188f7ae36f - - default default] Converting VIF {"id": "f41e3444-24ee-4425-969b-99c4c3ebbd47", "address": "fa:16:3e:b0:3e:3e", "network": {"id": "6a04bdf1-ac7f-432e-8ec7-636af678cb2a", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-483634897-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1b09bb22417b499cbf8771188f7ae36f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf41e3444-24", "ovs_interfaceid": "f41e3444-24ee-4425-969b-99c4c3ebbd47", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 02:33:59 np0005548731 nova_compute[232433]: 2025-12-06 07:33:59.801 232437 DEBUG nova.network.os_vif_util [None req-29e6f8d9-8203-432f-9b3f-c995def36d09 bb72a3e638064a9896ab40615cfc7d67 1b09bb22417b499cbf8771188f7ae36f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b0:3e:3e,bridge_name='br-int',has_traffic_filtering=True,id=f41e3444-24ee-4425-969b-99c4c3ebbd47,network=Network(6a04bdf1-ac7f-432e-8ec7-636af678cb2a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf41e3444-24') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 02:33:59 np0005548731 nova_compute[232433]: 2025-12-06 07:33:59.801 232437 DEBUG os_vif [None req-29e6f8d9-8203-432f-9b3f-c995def36d09 bb72a3e638064a9896ab40615cfc7d67 1b09bb22417b499cbf8771188f7ae36f - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b0:3e:3e,bridge_name='br-int',has_traffic_filtering=True,id=f41e3444-24ee-4425-969b-99c4c3ebbd47,network=Network(6a04bdf1-ac7f-432e-8ec7-636af678cb2a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf41e3444-24') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec  6 02:33:59 np0005548731 nova_compute[232433]: 2025-12-06 07:33:59.802 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:33:59 np0005548731 nova_compute[232433]: 2025-12-06 07:33:59.802 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:33:59 np0005548731 nova_compute[232433]: 2025-12-06 07:33:59.803 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  6 02:33:59 np0005548731 nova_compute[232433]: 2025-12-06 07:33:59.807 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:33:59 np0005548731 nova_compute[232433]: 2025-12-06 07:33:59.807 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf41e3444-24, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:33:59 np0005548731 nova_compute[232433]: 2025-12-06 07:33:59.808 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapf41e3444-24, col_values=(('external_ids', {'iface-id': 'f41e3444-24ee-4425-969b-99c4c3ebbd47', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:b0:3e:3e', 'vm-uuid': 'e3caec0b-7382-4ccf-9d1b-c554a41c2c52'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:33:59 np0005548731 nova_compute[232433]: 2025-12-06 07:33:59.810 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:33:59 np0005548731 NetworkManager[49182]: <info>  [1765006439.8110] manager: (tapf41e3444-24): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/249)
Dec  6 02:33:59 np0005548731 nova_compute[232433]: 2025-12-06 07:33:59.812 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  6 02:33:59 np0005548731 nova_compute[232433]: 2025-12-06 07:33:59.816 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:33:59 np0005548731 nova_compute[232433]: 2025-12-06 07:33:59.817 232437 INFO os_vif [None req-29e6f8d9-8203-432f-9b3f-c995def36d09 bb72a3e638064a9896ab40615cfc7d67 1b09bb22417b499cbf8771188f7ae36f - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b0:3e:3e,bridge_name='br-int',has_traffic_filtering=True,id=f41e3444-24ee-4425-969b-99c4c3ebbd47,network=Network(6a04bdf1-ac7f-432e-8ec7-636af678cb2a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf41e3444-24')#033[00m
Dec  6 02:33:59 np0005548731 nova_compute[232433]: 2025-12-06 07:33:59.901 232437 DEBUG nova.virt.libvirt.driver [None req-29e6f8d9-8203-432f-9b3f-c995def36d09 bb72a3e638064a9896ab40615cfc7d67 1b09bb22417b499cbf8771188f7ae36f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  6 02:33:59 np0005548731 nova_compute[232433]: 2025-12-06 07:33:59.902 232437 DEBUG nova.virt.libvirt.driver [None req-29e6f8d9-8203-432f-9b3f-c995def36d09 bb72a3e638064a9896ab40615cfc7d67 1b09bb22417b499cbf8771188f7ae36f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  6 02:33:59 np0005548731 nova_compute[232433]: 2025-12-06 07:33:59.902 232437 DEBUG nova.virt.libvirt.driver [None req-29e6f8d9-8203-432f-9b3f-c995def36d09 bb72a3e638064a9896ab40615cfc7d67 1b09bb22417b499cbf8771188f7ae36f - - default default] No VIF found with MAC fa:16:3e:b0:3e:3e, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec  6 02:33:59 np0005548731 nova_compute[232433]: 2025-12-06 07:33:59.902 232437 INFO nova.virt.libvirt.driver [None req-29e6f8d9-8203-432f-9b3f-c995def36d09 bb72a3e638064a9896ab40615cfc7d67 1b09bb22417b499cbf8771188f7ae36f - - default default] [instance: e3caec0b-7382-4ccf-9d1b-c554a41c2c52] Using config drive#033[00m
Dec  6 02:33:59 np0005548731 nova_compute[232433]: 2025-12-06 07:33:59.927 232437 DEBUG nova.storage.rbd_utils [None req-29e6f8d9-8203-432f-9b3f-c995def36d09 bb72a3e638064a9896ab40615cfc7d67 1b09bb22417b499cbf8771188f7ae36f - - default default] rbd image e3caec0b-7382-4ccf-9d1b-c554a41c2c52_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:34:00 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:34:00 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:34:00 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:34:00.508 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:34:00 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 02:34:00 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 02:34:00 np0005548731 nova_compute[232433]: 2025-12-06 07:34:00.728 232437 INFO nova.virt.libvirt.driver [None req-29e6f8d9-8203-432f-9b3f-c995def36d09 bb72a3e638064a9896ab40615cfc7d67 1b09bb22417b499cbf8771188f7ae36f - - default default] [instance: e3caec0b-7382-4ccf-9d1b-c554a41c2c52] Creating config drive at /var/lib/nova/instances/e3caec0b-7382-4ccf-9d1b-c554a41c2c52/disk.config#033[00m
Dec  6 02:34:00 np0005548731 nova_compute[232433]: 2025-12-06 07:34:00.733 232437 DEBUG oslo_concurrency.processutils [None req-29e6f8d9-8203-432f-9b3f-c995def36d09 bb72a3e638064a9896ab40615cfc7d67 1b09bb22417b499cbf8771188f7ae36f - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/e3caec0b-7382-4ccf-9d1b-c554a41c2c52/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpp2ryg8ju execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:34:00 np0005548731 nova_compute[232433]: 2025-12-06 07:34:00.863 232437 DEBUG oslo_concurrency.processutils [None req-29e6f8d9-8203-432f-9b3f-c995def36d09 bb72a3e638064a9896ab40615cfc7d67 1b09bb22417b499cbf8771188f7ae36f - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/e3caec0b-7382-4ccf-9d1b-c554a41c2c52/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpp2ryg8ju" returned: 0 in 0.130s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:34:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:34:00.875 143965 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:34:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:34:00.875 143965 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:34:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:34:00.876 143965 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:34:00 np0005548731 nova_compute[232433]: 2025-12-06 07:34:00.890 232437 DEBUG nova.storage.rbd_utils [None req-29e6f8d9-8203-432f-9b3f-c995def36d09 bb72a3e638064a9896ab40615cfc7d67 1b09bb22417b499cbf8771188f7ae36f - - default default] rbd image e3caec0b-7382-4ccf-9d1b-c554a41c2c52_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:34:00 np0005548731 nova_compute[232433]: 2025-12-06 07:34:00.893 232437 DEBUG oslo_concurrency.processutils [None req-29e6f8d9-8203-432f-9b3f-c995def36d09 bb72a3e638064a9896ab40615cfc7d67 1b09bb22417b499cbf8771188f7ae36f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/e3caec0b-7382-4ccf-9d1b-c554a41c2c52/disk.config e3caec0b-7382-4ccf-9d1b-c554a41c2c52_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:34:01 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:34:01 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:34:01 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:34:01.684 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:34:02 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:34:02 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:34:02 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:34:02.511 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:34:02 np0005548731 nova_compute[232433]: 2025-12-06 07:34:02.802 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:34:02 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:34:02.914 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=48, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:ec:b3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '32:72:e7:89:e0:7d'}, ipsec=False) old=SB_Global(nb_cfg=47) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 02:34:02 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:34:02.915 143965 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec  6 02:34:02 np0005548731 nova_compute[232433]: 2025-12-06 07:34:02.915 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:34:02 np0005548731 nova_compute[232433]: 2025-12-06 07:34:02.931 232437 DEBUG oslo_concurrency.processutils [None req-29e6f8d9-8203-432f-9b3f-c995def36d09 bb72a3e638064a9896ab40615cfc7d67 1b09bb22417b499cbf8771188f7ae36f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/e3caec0b-7382-4ccf-9d1b-c554a41c2c52/disk.config e3caec0b-7382-4ccf-9d1b-c554a41c2c52_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.038s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:34:02 np0005548731 nova_compute[232433]: 2025-12-06 07:34:02.931 232437 INFO nova.virt.libvirt.driver [None req-29e6f8d9-8203-432f-9b3f-c995def36d09 bb72a3e638064a9896ab40615cfc7d67 1b09bb22417b499cbf8771188f7ae36f - - default default] [instance: e3caec0b-7382-4ccf-9d1b-c554a41c2c52] Deleting local config drive /var/lib/nova/instances/e3caec0b-7382-4ccf-9d1b-c554a41c2c52/disk.config because it was imported into RBD.#033[00m
Dec  6 02:34:02 np0005548731 kernel: tapf41e3444-24: entered promiscuous mode
Dec  6 02:34:02 np0005548731 NetworkManager[49182]: <info>  [1765006442.9918] manager: (tapf41e3444-24): new Tun device (/org/freedesktop/NetworkManager/Devices/250)
Dec  6 02:34:02 np0005548731 ovn_controller[133927]: 2025-12-06T07:34:02Z|00527|binding|INFO|Claiming lport f41e3444-24ee-4425-969b-99c4c3ebbd47 for this chassis.
Dec  6 02:34:02 np0005548731 ovn_controller[133927]: 2025-12-06T07:34:02Z|00528|binding|INFO|f41e3444-24ee-4425-969b-99c4c3ebbd47: Claiming fa:16:3e:b0:3e:3e 10.100.0.11
Dec  6 02:34:02 np0005548731 nova_compute[232433]: 2025-12-06 07:34:02.993 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:34:02 np0005548731 nova_compute[232433]: 2025-12-06 07:34:02.997 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:34:03 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:34:03.005 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b0:3e:3e 10.100.0.11'], port_security=['fa:16:3e:b0:3e:3e 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'e3caec0b-7382-4ccf-9d1b-c554a41c2c52', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6a04bdf1-ac7f-432e-8ec7-636af678cb2a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1b09bb22417b499cbf8771188f7ae36f', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'a03d4d58-c3b9-4bf3-9e94-34a5e8c69515', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=231e621a-df54-469a-acfd-b66004fc5e38, chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], logical_port=f41e3444-24ee-4425-969b-99c4c3ebbd47) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 02:34:03 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:34:03.006 143965 INFO neutron.agent.ovn.metadata.agent [-] Port f41e3444-24ee-4425-969b-99c4c3ebbd47 in datapath 6a04bdf1-ac7f-432e-8ec7-636af678cb2a bound to our chassis#033[00m
Dec  6 02:34:03 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:34:03.007 143965 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 6a04bdf1-ac7f-432e-8ec7-636af678cb2a#033[00m
Dec  6 02:34:03 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:34:03.019 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[d5b966a2-cb04-4faa-bd4c-3f48f8115310]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:34:03 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:34:03.020 143965 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap6a04bdf1-a1 in ovnmeta-6a04bdf1-ac7f-432e-8ec7-636af678cb2a namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Dec  6 02:34:03 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:34:03.023 236668 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap6a04bdf1-a0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Dec  6 02:34:03 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:34:03.023 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[75b466f0-6269-4833-8d25-26a03d528530]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:34:03 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:34:03.024 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[4dfba109-aec6-4e28-ac9b-cdff3d4227a3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:34:03 np0005548731 systemd-udevd[284841]: Network interface NamePolicy= disabled on kernel command line.
Dec  6 02:34:03 np0005548731 systemd-machined[195355]: New machine qemu-53-instance-0000007b.
Dec  6 02:34:03 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:34:03.036 144079 DEBUG oslo.privsep.daemon [-] privsep: reply[138b7615-3c2a-4cec-bf96-8ccdd3abedad]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:34:03 np0005548731 systemd[1]: Started Virtual Machine qemu-53-instance-0000007b.
Dec  6 02:34:03 np0005548731 NetworkManager[49182]: <info>  [1765006443.0528] device (tapf41e3444-24): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec  6 02:34:03 np0005548731 NetworkManager[49182]: <info>  [1765006443.0538] device (tapf41e3444-24): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec  6 02:34:03 np0005548731 nova_compute[232433]: 2025-12-06 07:34:03.060 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:34:03 np0005548731 ovn_controller[133927]: 2025-12-06T07:34:03Z|00529|binding|INFO|Setting lport f41e3444-24ee-4425-969b-99c4c3ebbd47 ovn-installed in OVS
Dec  6 02:34:03 np0005548731 ovn_controller[133927]: 2025-12-06T07:34:03Z|00530|binding|INFO|Setting lport f41e3444-24ee-4425-969b-99c4c3ebbd47 up in Southbound
Dec  6 02:34:03 np0005548731 nova_compute[232433]: 2025-12-06 07:34:03.064 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:34:03 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:34:03.064 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[e430684e-f16f-47a7-a250-7f806e740ef2]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:34:03 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:34:03.095 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[5db26586-e344-4093-9944-751f089c6f7e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:34:03 np0005548731 systemd-udevd[284846]: Network interface NamePolicy= disabled on kernel command line.
Dec  6 02:34:03 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:34:03.103 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[5440eed0-218f-4d9d-98fb-1897212d0539]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:34:03 np0005548731 NetworkManager[49182]: <info>  [1765006443.1043] manager: (tap6a04bdf1-a0): new Veth device (/org/freedesktop/NetworkManager/Devices/251)
Dec  6 02:34:03 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:34:03.140 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[68fb54dd-7020-44b7-9bd6-5e1f4e8394f8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:34:03 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:34:03.143 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[fc87b686-5562-4650-bf02-af3a7dfce157]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:34:03 np0005548731 NetworkManager[49182]: <info>  [1765006443.1691] device (tap6a04bdf1-a0): carrier: link connected
Dec  6 02:34:03 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e282 e282: 3 total, 3 up, 3 in
Dec  6 02:34:03 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:34:03.176 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[a427b13f-83bc-46ea-b07a-cfa7d7a4aa77]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:34:03 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:34:03.193 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[81aab063-2416-49bd-9158-dc244bc2830c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6a04bdf1-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b1:e1:b1'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 166], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 670517, 'reachable_time': 17597, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 284874, 'error': None, 'target': 'ovnmeta-6a04bdf1-ac7f-432e-8ec7-636af678cb2a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:34:03 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:34:03.209 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[2406e9c7-2bb1-4477-ba26-1131f65289f4]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feb1:e1b1'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 670517, 'tstamp': 670517}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 284875, 'error': None, 'target': 'ovnmeta-6a04bdf1-ac7f-432e-8ec7-636af678cb2a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:34:03 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:34:03.229 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[4a6dd80c-532a-47ec-b7c4-c1ad6f92d12b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6a04bdf1-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b1:e1:b1'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 166], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 670517, 'reachable_time': 17597, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 284876, 'error': None, 'target': 'ovnmeta-6a04bdf1-ac7f-432e-8ec7-636af678cb2a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  6 02:34:03 np0005548731 nova_compute[232433]: 2025-12-06 07:34:03.259 232437 DEBUG nova.compute.manager [req-6c6cc445-f8ce-41c6-8e44-1dbfe5af8275 req-fe4c8ecd-e9ff-4756-8145-cd972f42da3f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: e3caec0b-7382-4ccf-9d1b-c554a41c2c52] Received event network-vif-plugged-f41e3444-24ee-4425-969b-99c4c3ebbd47 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec  6 02:34:03 np0005548731 nova_compute[232433]: 2025-12-06 07:34:03.259 232437 DEBUG oslo_concurrency.lockutils [req-6c6cc445-f8ce-41c6-8e44-1dbfe5af8275 req-fe4c8ecd-e9ff-4756-8145-cd972f42da3f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "e3caec0b-7382-4ccf-9d1b-c554a41c2c52-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  6 02:34:03 np0005548731 nova_compute[232433]: 2025-12-06 07:34:03.260 232437 DEBUG oslo_concurrency.lockutils [req-6c6cc445-f8ce-41c6-8e44-1dbfe5af8275 req-fe4c8ecd-e9ff-4756-8145-cd972f42da3f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "e3caec0b-7382-4ccf-9d1b-c554a41c2c52-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  6 02:34:03 np0005548731 nova_compute[232433]: 2025-12-06 07:34:03.260 232437 DEBUG oslo_concurrency.lockutils [req-6c6cc445-f8ce-41c6-8e44-1dbfe5af8275 req-fe4c8ecd-e9ff-4756-8145-cd972f42da3f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "e3caec0b-7382-4ccf-9d1b-c554a41c2c52-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  6 02:34:03 np0005548731 nova_compute[232433]: 2025-12-06 07:34:03.260 232437 DEBUG nova.compute.manager [req-6c6cc445-f8ce-41c6-8e44-1dbfe5af8275 req-fe4c8ecd-e9ff-4756-8145-cd972f42da3f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: e3caec0b-7382-4ccf-9d1b-c554a41c2c52] Processing event network-vif-plugged-f41e3444-24ee-4425-969b-99c4c3ebbd47 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec  6 02:34:03 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:34:03.262 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[2100da2d-3314-4b21-aa55-57601ae1ae5c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  6 02:34:03 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:34:03.325 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[e695c088-c87d-40f2-a033-e481ecba057c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  6 02:34:03 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:34:03.327 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6a04bdf1-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec  6 02:34:03 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:34:03.327 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec  6 02:34:03 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:34:03.328 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6a04bdf1-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec  6 02:34:03 np0005548731 kernel: tap6a04bdf1-a0: entered promiscuous mode
Dec  6 02:34:03 np0005548731 NetworkManager[49182]: <info>  [1765006443.3306] manager: (tap6a04bdf1-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/252)
Dec  6 02:34:03 np0005548731 nova_compute[232433]: 2025-12-06 07:34:03.329 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 02:34:03 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:34:03.332 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap6a04bdf1-a0, col_values=(('external_ids', {'iface-id': 'ef965c3b-ed76-424f-a7d9-27ee8bdd00ef'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec  6 02:34:03 np0005548731 ovn_controller[133927]: 2025-12-06T07:34:03Z|00531|binding|INFO|Releasing lport ef965c3b-ed76-424f-a7d9-27ee8bdd00ef from this chassis (sb_readonly=0)
Dec  6 02:34:03 np0005548731 nova_compute[232433]: 2025-12-06 07:34:03.347 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 02:34:03 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:34:03.347 143965 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/6a04bdf1-ac7f-432e-8ec7-636af678cb2a.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/6a04bdf1-ac7f-432e-8ec7-636af678cb2a.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec  6 02:34:03 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:34:03.348 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[1f2aa9bc-4191-4206-accb-6002e9387eb6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  6 02:34:03 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:34:03.349 143965 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec  6 02:34:03 np0005548731 ovn_metadata_agent[143960]: global
Dec  6 02:34:03 np0005548731 ovn_metadata_agent[143960]:    log         /dev/log local0 debug
Dec  6 02:34:03 np0005548731 ovn_metadata_agent[143960]:    log-tag     haproxy-metadata-proxy-6a04bdf1-ac7f-432e-8ec7-636af678cb2a
Dec  6 02:34:03 np0005548731 ovn_metadata_agent[143960]:    user        root
Dec  6 02:34:03 np0005548731 ovn_metadata_agent[143960]:    group       root
Dec  6 02:34:03 np0005548731 ovn_metadata_agent[143960]:    maxconn     1024
Dec  6 02:34:03 np0005548731 ovn_metadata_agent[143960]:    pidfile     /var/lib/neutron/external/pids/6a04bdf1-ac7f-432e-8ec7-636af678cb2a.pid.haproxy
Dec  6 02:34:03 np0005548731 ovn_metadata_agent[143960]:    daemon
Dec  6 02:34:03 np0005548731 ovn_metadata_agent[143960]: 
Dec  6 02:34:03 np0005548731 ovn_metadata_agent[143960]: defaults
Dec  6 02:34:03 np0005548731 ovn_metadata_agent[143960]:    log global
Dec  6 02:34:03 np0005548731 ovn_metadata_agent[143960]:    mode http
Dec  6 02:34:03 np0005548731 ovn_metadata_agent[143960]:    option httplog
Dec  6 02:34:03 np0005548731 ovn_metadata_agent[143960]:    option dontlognull
Dec  6 02:34:03 np0005548731 ovn_metadata_agent[143960]:    option http-server-close
Dec  6 02:34:03 np0005548731 ovn_metadata_agent[143960]:    option forwardfor
Dec  6 02:34:03 np0005548731 ovn_metadata_agent[143960]:    retries                 3
Dec  6 02:34:03 np0005548731 ovn_metadata_agent[143960]:    timeout http-request    30s
Dec  6 02:34:03 np0005548731 ovn_metadata_agent[143960]:    timeout connect         30s
Dec  6 02:34:03 np0005548731 ovn_metadata_agent[143960]:    timeout client          32s
Dec  6 02:34:03 np0005548731 ovn_metadata_agent[143960]:    timeout server          32s
Dec  6 02:34:03 np0005548731 ovn_metadata_agent[143960]:    timeout http-keep-alive 30s
Dec  6 02:34:03 np0005548731 ovn_metadata_agent[143960]: 
Dec  6 02:34:03 np0005548731 ovn_metadata_agent[143960]: 
Dec  6 02:34:03 np0005548731 ovn_metadata_agent[143960]: listen listener
Dec  6 02:34:03 np0005548731 ovn_metadata_agent[143960]:    bind 169.254.169.254:80
Dec  6 02:34:03 np0005548731 ovn_metadata_agent[143960]:    server metadata /var/lib/neutron/metadata_proxy
Dec  6 02:34:03 np0005548731 ovn_metadata_agent[143960]:    http-request add-header X-OVN-Network-ID 6a04bdf1-ac7f-432e-8ec7-636af678cb2a
Dec  6 02:34:03 np0005548731 ovn_metadata_agent[143960]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec  6 02:34:03 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:34:03.351 143965 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-6a04bdf1-ac7f-432e-8ec7-636af678cb2a', 'env', 'PROCESS_TAG=haproxy-6a04bdf1-ac7f-432e-8ec7-636af678cb2a', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/6a04bdf1-ac7f-432e-8ec7-636af678cb2a.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Dec  6 02:34:03 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:34:03 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:34:03 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:34:03 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:34:03.686 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:34:03 np0005548731 podman[284926]: 2025-12-06 07:34:03.710985927 +0000 UTC m=+0.045540873 container create 8cc9bdea11b09aaaed9c55ec166147deb055fd5435900a6e498f5e209bd0ee53 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6a04bdf1-ac7f-432e-8ec7-636af678cb2a, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec  6 02:34:03 np0005548731 systemd[1]: Started libpod-conmon-8cc9bdea11b09aaaed9c55ec166147deb055fd5435900a6e498f5e209bd0ee53.scope.
Dec  6 02:34:03 np0005548731 systemd[1]: Started libcrun container.
Dec  6 02:34:03 np0005548731 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8290ad2ffcf4d27f1a7c75780aa0392288f8c768fc188688d0262ac60449c9bd/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec  6 02:34:03 np0005548731 podman[284926]: 2025-12-06 07:34:03.773346479 +0000 UTC m=+0.107901445 container init 8cc9bdea11b09aaaed9c55ec166147deb055fd5435900a6e498f5e209bd0ee53 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6a04bdf1-ac7f-432e-8ec7-636af678cb2a, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Dec  6 02:34:03 np0005548731 podman[284926]: 2025-12-06 07:34:03.779317594 +0000 UTC m=+0.113872540 container start 8cc9bdea11b09aaaed9c55ec166147deb055fd5435900a6e498f5e209bd0ee53 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6a04bdf1-ac7f-432e-8ec7-636af678cb2a, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Dec  6 02:34:03 np0005548731 podman[284926]: 2025-12-06 07:34:03.687659958 +0000 UTC m=+0.022214934 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec  6 02:34:03 np0005548731 neutron-haproxy-ovnmeta-6a04bdf1-ac7f-432e-8ec7-636af678cb2a[284960]: [NOTICE]   (284968) : New worker (284970) forked
Dec  6 02:34:03 np0005548731 neutron-haproxy-ovnmeta-6a04bdf1-ac7f-432e-8ec7-636af678cb2a[284960]: [NOTICE]   (284968) : Loading success.
Dec  6 02:34:03 np0005548731 nova_compute[232433]: 2025-12-06 07:34:03.841 232437 DEBUG nova.virt.driver [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Emitting event <LifecycleEvent: 1765006443.8408022, e3caec0b-7382-4ccf-9d1b-c554a41c2c52 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec  6 02:34:03 np0005548731 nova_compute[232433]: 2025-12-06 07:34:03.841 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: e3caec0b-7382-4ccf-9d1b-c554a41c2c52] VM Started (Lifecycle Event)
Dec  6 02:34:03 np0005548731 nova_compute[232433]: 2025-12-06 07:34:03.844 232437 DEBUG nova.compute.manager [None req-29e6f8d9-8203-432f-9b3f-c995def36d09 bb72a3e638064a9896ab40615cfc7d67 1b09bb22417b499cbf8771188f7ae36f - - default default] [instance: e3caec0b-7382-4ccf-9d1b-c554a41c2c52] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec  6 02:34:03 np0005548731 nova_compute[232433]: 2025-12-06 07:34:03.847 232437 DEBUG nova.virt.libvirt.driver [None req-29e6f8d9-8203-432f-9b3f-c995def36d09 bb72a3e638064a9896ab40615cfc7d67 1b09bb22417b499cbf8771188f7ae36f - - default default] [instance: e3caec0b-7382-4ccf-9d1b-c554a41c2c52] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec  6 02:34:03 np0005548731 nova_compute[232433]: 2025-12-06 07:34:03.851 232437 INFO nova.virt.libvirt.driver [-] [instance: e3caec0b-7382-4ccf-9d1b-c554a41c2c52] Instance spawned successfully.
Dec  6 02:34:03 np0005548731 nova_compute[232433]: 2025-12-06 07:34:03.851 232437 DEBUG nova.virt.libvirt.driver [None req-29e6f8d9-8203-432f-9b3f-c995def36d09 bb72a3e638064a9896ab40615cfc7d67 1b09bb22417b499cbf8771188f7ae36f - - default default] [instance: e3caec0b-7382-4ccf-9d1b-c554a41c2c52] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec  6 02:34:03 np0005548731 nova_compute[232433]: 2025-12-06 07:34:03.879 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: e3caec0b-7382-4ccf-9d1b-c554a41c2c52] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec  6 02:34:03 np0005548731 nova_compute[232433]: 2025-12-06 07:34:03.885 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: e3caec0b-7382-4ccf-9d1b-c554a41c2c52] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec  6 02:34:03 np0005548731 nova_compute[232433]: 2025-12-06 07:34:03.889 232437 DEBUG nova.virt.libvirt.driver [None req-29e6f8d9-8203-432f-9b3f-c995def36d09 bb72a3e638064a9896ab40615cfc7d67 1b09bb22417b499cbf8771188f7ae36f - - default default] [instance: e3caec0b-7382-4ccf-9d1b-c554a41c2c52] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec  6 02:34:03 np0005548731 nova_compute[232433]: 2025-12-06 07:34:03.890 232437 DEBUG nova.virt.libvirt.driver [None req-29e6f8d9-8203-432f-9b3f-c995def36d09 bb72a3e638064a9896ab40615cfc7d67 1b09bb22417b499cbf8771188f7ae36f - - default default] [instance: e3caec0b-7382-4ccf-9d1b-c554a41c2c52] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec  6 02:34:03 np0005548731 nova_compute[232433]: 2025-12-06 07:34:03.890 232437 DEBUG nova.virt.libvirt.driver [None req-29e6f8d9-8203-432f-9b3f-c995def36d09 bb72a3e638064a9896ab40615cfc7d67 1b09bb22417b499cbf8771188f7ae36f - - default default] [instance: e3caec0b-7382-4ccf-9d1b-c554a41c2c52] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec  6 02:34:03 np0005548731 nova_compute[232433]: 2025-12-06 07:34:03.891 232437 DEBUG nova.virt.libvirt.driver [None req-29e6f8d9-8203-432f-9b3f-c995def36d09 bb72a3e638064a9896ab40615cfc7d67 1b09bb22417b499cbf8771188f7ae36f - - default default] [instance: e3caec0b-7382-4ccf-9d1b-c554a41c2c52] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec  6 02:34:03 np0005548731 nova_compute[232433]: 2025-12-06 07:34:03.891 232437 DEBUG nova.virt.libvirt.driver [None req-29e6f8d9-8203-432f-9b3f-c995def36d09 bb72a3e638064a9896ab40615cfc7d67 1b09bb22417b499cbf8771188f7ae36f - - default default] [instance: e3caec0b-7382-4ccf-9d1b-c554a41c2c52] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec  6 02:34:03 np0005548731 nova_compute[232433]: 2025-12-06 07:34:03.892 232437 DEBUG nova.virt.libvirt.driver [None req-29e6f8d9-8203-432f-9b3f-c995def36d09 bb72a3e638064a9896ab40615cfc7d67 1b09bb22417b499cbf8771188f7ae36f - - default default] [instance: e3caec0b-7382-4ccf-9d1b-c554a41c2c52] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec  6 02:34:03 np0005548731 nova_compute[232433]: 2025-12-06 07:34:03.929 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: e3caec0b-7382-4ccf-9d1b-c554a41c2c52] During sync_power_state the instance has a pending task (spawning). Skip.
Dec  6 02:34:03 np0005548731 nova_compute[232433]: 2025-12-06 07:34:03.929 232437 DEBUG nova.virt.driver [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Emitting event <LifecycleEvent: 1765006443.8434472, e3caec0b-7382-4ccf-9d1b-c554a41c2c52 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec  6 02:34:03 np0005548731 nova_compute[232433]: 2025-12-06 07:34:03.929 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: e3caec0b-7382-4ccf-9d1b-c554a41c2c52] VM Paused (Lifecycle Event)
Dec  6 02:34:03 np0005548731 nova_compute[232433]: 2025-12-06 07:34:03.958 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: e3caec0b-7382-4ccf-9d1b-c554a41c2c52] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec  6 02:34:03 np0005548731 nova_compute[232433]: 2025-12-06 07:34:03.961 232437 DEBUG nova.virt.driver [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Emitting event <LifecycleEvent: 1765006443.8464932, e3caec0b-7382-4ccf-9d1b-c554a41c2c52 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec  6 02:34:03 np0005548731 nova_compute[232433]: 2025-12-06 07:34:03.961 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: e3caec0b-7382-4ccf-9d1b-c554a41c2c52] VM Resumed (Lifecycle Event)
Dec  6 02:34:03 np0005548731 nova_compute[232433]: 2025-12-06 07:34:03.969 232437 INFO nova.compute.manager [None req-29e6f8d9-8203-432f-9b3f-c995def36d09 bb72a3e638064a9896ab40615cfc7d67 1b09bb22417b499cbf8771188f7ae36f - - default default] [instance: e3caec0b-7382-4ccf-9d1b-c554a41c2c52] Took 11.81 seconds to spawn the instance on the hypervisor.
Dec  6 02:34:03 np0005548731 nova_compute[232433]: 2025-12-06 07:34:03.970 232437 DEBUG nova.compute.manager [None req-29e6f8d9-8203-432f-9b3f-c995def36d09 bb72a3e638064a9896ab40615cfc7d67 1b09bb22417b499cbf8771188f7ae36f - - default default] [instance: e3caec0b-7382-4ccf-9d1b-c554a41c2c52] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec  6 02:34:03 np0005548731 nova_compute[232433]: 2025-12-06 07:34:03.979 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: e3caec0b-7382-4ccf-9d1b-c554a41c2c52] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec  6 02:34:03 np0005548731 nova_compute[232433]: 2025-12-06 07:34:03.981 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: e3caec0b-7382-4ccf-9d1b-c554a41c2c52] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec  6 02:34:04 np0005548731 nova_compute[232433]: 2025-12-06 07:34:04.001 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: e3caec0b-7382-4ccf-9d1b-c554a41c2c52] During sync_power_state the instance has a pending task (spawning). Skip.
Dec  6 02:34:04 np0005548731 nova_compute[232433]: 2025-12-06 07:34:04.034 232437 INFO nova.compute.manager [None req-29e6f8d9-8203-432f-9b3f-c995def36d09 bb72a3e638064a9896ab40615cfc7d67 1b09bb22417b499cbf8771188f7ae36f - - default default] [instance: e3caec0b-7382-4ccf-9d1b-c554a41c2c52] Took 12.80 seconds to build instance.
Dec  6 02:34:04 np0005548731 nova_compute[232433]: 2025-12-06 07:34:04.057 232437 DEBUG oslo_concurrency.lockutils [None req-29e6f8d9-8203-432f-9b3f-c995def36d09 bb72a3e638064a9896ab40615cfc7d67 1b09bb22417b499cbf8771188f7ae36f - - default default] Lock "e3caec0b-7382-4ccf-9d1b-c554a41c2c52" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.872s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  6 02:34:04 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e283 e283: 3 total, 3 up, 3 in
Dec  6 02:34:04 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:34:04 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:34:04 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:34:04.512 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:34:04 np0005548731 nova_compute[232433]: 2025-12-06 07:34:04.811 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 02:34:05 np0005548731 nova_compute[232433]: 2025-12-06 07:34:05.392 232437 DEBUG nova.compute.manager [req-e051d0c6-39f6-4873-9aa5-354c6fc03ff9 req-eac50541-d7fa-4155-a5cc-6690de36c2e8 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: e3caec0b-7382-4ccf-9d1b-c554a41c2c52] Received event network-vif-plugged-f41e3444-24ee-4425-969b-99c4c3ebbd47 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec  6 02:34:05 np0005548731 nova_compute[232433]: 2025-12-06 07:34:05.392 232437 DEBUG oslo_concurrency.lockutils [req-e051d0c6-39f6-4873-9aa5-354c6fc03ff9 req-eac50541-d7fa-4155-a5cc-6690de36c2e8 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "e3caec0b-7382-4ccf-9d1b-c554a41c2c52-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  6 02:34:05 np0005548731 nova_compute[232433]: 2025-12-06 07:34:05.393 232437 DEBUG oslo_concurrency.lockutils [req-e051d0c6-39f6-4873-9aa5-354c6fc03ff9 req-eac50541-d7fa-4155-a5cc-6690de36c2e8 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "e3caec0b-7382-4ccf-9d1b-c554a41c2c52-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  6 02:34:05 np0005548731 nova_compute[232433]: 2025-12-06 07:34:05.393 232437 DEBUG oslo_concurrency.lockutils [req-e051d0c6-39f6-4873-9aa5-354c6fc03ff9 req-eac50541-d7fa-4155-a5cc-6690de36c2e8 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "e3caec0b-7382-4ccf-9d1b-c554a41c2c52-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  6 02:34:05 np0005548731 nova_compute[232433]: 2025-12-06 07:34:05.393 232437 DEBUG nova.compute.manager [req-e051d0c6-39f6-4873-9aa5-354c6fc03ff9 req-eac50541-d7fa-4155-a5cc-6690de36c2e8 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: e3caec0b-7382-4ccf-9d1b-c554a41c2c52] No waiting events found dispatching network-vif-plugged-f41e3444-24ee-4425-969b-99c4c3ebbd47 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec  6 02:34:05 np0005548731 nova_compute[232433]: 2025-12-06 07:34:05.393 232437 WARNING nova.compute.manager [req-e051d0c6-39f6-4873-9aa5-354c6fc03ff9 req-eac50541-d7fa-4155-a5cc-6690de36c2e8 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: e3caec0b-7382-4ccf-9d1b-c554a41c2c52] Received unexpected event network-vif-plugged-f41e3444-24ee-4425-969b-99c4c3ebbd47 for instance with vm_state active and task_state None.
Dec  6 02:34:05 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:34:05 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:34:05 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:34:05.688 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:34:06 np0005548731 nova_compute[232433]: 2025-12-06 07:34:06.229 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 02:34:06 np0005548731 NetworkManager[49182]: <info>  [1765006446.2304] manager: (patch-br-int-to-provnet-9e78c1a1-68f4-477a-abaa-13a98bde06e5): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/253)
Dec  6 02:34:06 np0005548731 NetworkManager[49182]: <info>  [1765006446.2309] manager: (patch-provnet-9e78c1a1-68f4-477a-abaa-13a98bde06e5-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/254)
Dec  6 02:34:06 np0005548731 nova_compute[232433]: 2025-12-06 07:34:06.347 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 02:34:06 np0005548731 ovn_controller[133927]: 2025-12-06T07:34:06Z|00532|binding|INFO|Releasing lport e4d89947-8fab-4c13-b2db-4eed875f77a0 from this chassis (sb_readonly=0)
Dec  6 02:34:06 np0005548731 ovn_controller[133927]: 2025-12-06T07:34:06Z|00533|binding|INFO|Releasing lport ef965c3b-ed76-424f-a7d9-27ee8bdd00ef from this chassis (sb_readonly=0)
Dec  6 02:34:06 np0005548731 nova_compute[232433]: 2025-12-06 07:34:06.368 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 02:34:06 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:34:06 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:34:06 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:34:06.514 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:34:06 np0005548731 nova_compute[232433]: 2025-12-06 07:34:06.763 232437 DEBUG oslo_concurrency.lockutils [None req-312ce13a-677d-44fe-b787-bcba8e05dc4e bb72a3e638064a9896ab40615cfc7d67 1b09bb22417b499cbf8771188f7ae36f - - default default] Acquiring lock "e3caec0b-7382-4ccf-9d1b-c554a41c2c52" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  6 02:34:06 np0005548731 nova_compute[232433]: 2025-12-06 07:34:06.764 232437 DEBUG oslo_concurrency.lockutils [None req-312ce13a-677d-44fe-b787-bcba8e05dc4e bb72a3e638064a9896ab40615cfc7d67 1b09bb22417b499cbf8771188f7ae36f - - default default] Lock "e3caec0b-7382-4ccf-9d1b-c554a41c2c52" acquired by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  6 02:34:06 np0005548731 nova_compute[232433]: 2025-12-06 07:34:06.764 232437 INFO nova.compute.manager [None req-312ce13a-677d-44fe-b787-bcba8e05dc4e bb72a3e638064a9896ab40615cfc7d67 1b09bb22417b499cbf8771188f7ae36f - - default default] [instance: e3caec0b-7382-4ccf-9d1b-c554a41c2c52] Rebooting instance
Dec  6 02:34:06 np0005548731 nova_compute[232433]: 2025-12-06 07:34:06.778 232437 DEBUG oslo_concurrency.lockutils [None req-312ce13a-677d-44fe-b787-bcba8e05dc4e bb72a3e638064a9896ab40615cfc7d67 1b09bb22417b499cbf8771188f7ae36f - - default default] Acquiring lock "refresh_cache-e3caec0b-7382-4ccf-9d1b-c554a41c2c52" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec  6 02:34:06 np0005548731 nova_compute[232433]: 2025-12-06 07:34:06.779 232437 DEBUG oslo_concurrency.lockutils [None req-312ce13a-677d-44fe-b787-bcba8e05dc4e bb72a3e638064a9896ab40615cfc7d67 1b09bb22417b499cbf8771188f7ae36f - - default default] Acquired lock "refresh_cache-e3caec0b-7382-4ccf-9d1b-c554a41c2c52" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec  6 02:34:06 np0005548731 nova_compute[232433]: 2025-12-06 07:34:06.779 232437 DEBUG nova.network.neutron [None req-312ce13a-677d-44fe-b787-bcba8e05dc4e bb72a3e638064a9896ab40615cfc7d67 1b09bb22417b499cbf8771188f7ae36f - - default default] [instance: e3caec0b-7382-4ccf-9d1b-c554a41c2c52] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec  6 02:34:07 np0005548731 nova_compute[232433]: 2025-12-06 07:34:07.150 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  6 02:34:07 np0005548731 nova_compute[232433]: 2025-12-06 07:34:07.151 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec  6 02:34:07 np0005548731 nova_compute[232433]: 2025-12-06 07:34:07.151 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec  6 02:34:07 np0005548731 nova_compute[232433]: 2025-12-06 07:34:07.524 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Acquiring lock "refresh_cache-6a50a40c-3b05-4c0e-aa67-1489e203824e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec  6 02:34:07 np0005548731 nova_compute[232433]: 2025-12-06 07:34:07.525 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Acquired lock "refresh_cache-6a50a40c-3b05-4c0e-aa67-1489e203824e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec  6 02:34:07 np0005548731 nova_compute[232433]: 2025-12-06 07:34:07.525 232437 DEBUG nova.network.neutron [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] [instance: 6a50a40c-3b05-4c0e-aa67-1489e203824e] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec  6 02:34:07 np0005548731 nova_compute[232433]: 2025-12-06 07:34:07.525 232437 DEBUG nova.objects.instance [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lazy-loading 'info_cache' on Instance uuid 6a50a40c-3b05-4c0e-aa67-1489e203824e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec  6 02:34:07 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:34:07 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:34:07 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:34:07.691 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:34:07 np0005548731 nova_compute[232433]: 2025-12-06 07:34:07.805 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:34:08 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e283 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:34:08 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:34:08 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:34:08 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:34:08.517 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:34:08 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  6 02:34:08 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4141776775' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  6 02:34:08 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  6 02:34:08 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4141776775' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  6 02:34:09 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e284 e284: 3 total, 3 up, 3 in
Dec  6 02:34:09 np0005548731 nova_compute[232433]: 2025-12-06 07:34:09.519 232437 DEBUG nova.network.neutron [None req-312ce13a-677d-44fe-b787-bcba8e05dc4e bb72a3e638064a9896ab40615cfc7d67 1b09bb22417b499cbf8771188f7ae36f - - default default] [instance: e3caec0b-7382-4ccf-9d1b-c554a41c2c52] Updating instance_info_cache with network_info: [{"id": "f41e3444-24ee-4425-969b-99c4c3ebbd47", "address": "fa:16:3e:b0:3e:3e", "network": {"id": "6a04bdf1-ac7f-432e-8ec7-636af678cb2a", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-483634897-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1b09bb22417b499cbf8771188f7ae36f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf41e3444-24", "ovs_interfaceid": "f41e3444-24ee-4425-969b-99c4c3ebbd47", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 02:34:09 np0005548731 nova_compute[232433]: 2025-12-06 07:34:09.544 232437 DEBUG oslo_concurrency.lockutils [None req-312ce13a-677d-44fe-b787-bcba8e05dc4e bb72a3e638064a9896ab40615cfc7d67 1b09bb22417b499cbf8771188f7ae36f - - default default] Releasing lock "refresh_cache-e3caec0b-7382-4ccf-9d1b-c554a41c2c52" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 02:34:09 np0005548731 nova_compute[232433]: 2025-12-06 07:34:09.546 232437 DEBUG nova.compute.manager [None req-312ce13a-677d-44fe-b787-bcba8e05dc4e bb72a3e638064a9896ab40615cfc7d67 1b09bb22417b499cbf8771188f7ae36f - - default default] [instance: e3caec0b-7382-4ccf-9d1b-c554a41c2c52] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:34:09 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:34:09 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:34:09 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:34:09.692 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:34:09 np0005548731 nova_compute[232433]: 2025-12-06 07:34:09.813 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:34:10 np0005548731 nova_compute[232433]: 2025-12-06 07:34:10.118 232437 DEBUG nova.network.neutron [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] [instance: 6a50a40c-3b05-4c0e-aa67-1489e203824e] Updating instance_info_cache with network_info: [{"id": "ad544c9a-af55-4a7f-babe-68f6d1b23e25", "address": "fa:16:3e:1f:fb:86", "network": {"id": "5bb49e8a-b939-4c79-851c-62c634be0272", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-50482264-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "833f4cf9f5a64b2ab94c3bf330353a31", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapad544c9a-af", "ovs_interfaceid": "ad544c9a-af55-4a7f-babe-68f6d1b23e25", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 02:34:10 np0005548731 nova_compute[232433]: 2025-12-06 07:34:10.145 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Releasing lock "refresh_cache-6a50a40c-3b05-4c0e-aa67-1489e203824e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 02:34:10 np0005548731 nova_compute[232433]: 2025-12-06 07:34:10.145 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] [instance: 6a50a40c-3b05-4c0e-aa67-1489e203824e] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Dec  6 02:34:10 np0005548731 nova_compute[232433]: 2025-12-06 07:34:10.145 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:34:10 np0005548731 nova_compute[232433]: 2025-12-06 07:34:10.146 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:34:10 np0005548731 nova_compute[232433]: 2025-12-06 07:34:10.146 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  6 02:34:10 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:34:10 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:34:10 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:34:10.519 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:34:11 np0005548731 nova_compute[232433]: 2025-12-06 07:34:11.095 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:34:11 np0005548731 kernel: tapf41e3444-24 (unregistering): left promiscuous mode
Dec  6 02:34:11 np0005548731 NetworkManager[49182]: <info>  [1765006451.1429] device (tapf41e3444-24): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec  6 02:34:11 np0005548731 ovn_controller[133927]: 2025-12-06T07:34:11Z|00534|binding|INFO|Releasing lport f41e3444-24ee-4425-969b-99c4c3ebbd47 from this chassis (sb_readonly=0)
Dec  6 02:34:11 np0005548731 ovn_controller[133927]: 2025-12-06T07:34:11Z|00535|binding|INFO|Setting lport f41e3444-24ee-4425-969b-99c4c3ebbd47 down in Southbound
Dec  6 02:34:11 np0005548731 nova_compute[232433]: 2025-12-06 07:34:11.153 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:34:11 np0005548731 ovn_controller[133927]: 2025-12-06T07:34:11Z|00536|binding|INFO|Removing iface tapf41e3444-24 ovn-installed in OVS
Dec  6 02:34:11 np0005548731 nova_compute[232433]: 2025-12-06 07:34:11.155 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:34:11 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:34:11.159 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b0:3e:3e 10.100.0.11'], port_security=['fa:16:3e:b0:3e:3e 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'e3caec0b-7382-4ccf-9d1b-c554a41c2c52', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6a04bdf1-ac7f-432e-8ec7-636af678cb2a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1b09bb22417b499cbf8771188f7ae36f', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'a03d4d58-c3b9-4bf3-9e94-34a5e8c69515', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=231e621a-df54-469a-acfd-b66004fc5e38, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], logical_port=f41e3444-24ee-4425-969b-99c4c3ebbd47) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 02:34:11 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:34:11.160 143965 INFO neutron.agent.ovn.metadata.agent [-] Port f41e3444-24ee-4425-969b-99c4c3ebbd47 in datapath 6a04bdf1-ac7f-432e-8ec7-636af678cb2a unbound from our chassis#033[00m
Dec  6 02:34:11 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:34:11.162 143965 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 6a04bdf1-ac7f-432e-8ec7-636af678cb2a, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec  6 02:34:11 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:34:11.163 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[ba68db14-62ee-48d4-bb3b-042e0fd339b0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:34:11 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:34:11.163 143965 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-6a04bdf1-ac7f-432e-8ec7-636af678cb2a namespace which is not needed anymore#033[00m
Dec  6 02:34:11 np0005548731 nova_compute[232433]: 2025-12-06 07:34:11.169 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:34:11 np0005548731 systemd[1]: machine-qemu\x2d53\x2dinstance\x2d0000007b.scope: Deactivated successfully.
Dec  6 02:34:11 np0005548731 systemd[1]: machine-qemu\x2d53\x2dinstance\x2d0000007b.scope: Consumed 6.695s CPU time.
Dec  6 02:34:11 np0005548731 systemd-machined[195355]: Machine qemu-53-instance-0000007b terminated.
Dec  6 02:34:11 np0005548731 neutron-haproxy-ovnmeta-6a04bdf1-ac7f-432e-8ec7-636af678cb2a[284960]: [NOTICE]   (284968) : haproxy version is 2.8.14-c23fe91
Dec  6 02:34:11 np0005548731 neutron-haproxy-ovnmeta-6a04bdf1-ac7f-432e-8ec7-636af678cb2a[284960]: [NOTICE]   (284968) : path to executable is /usr/sbin/haproxy
Dec  6 02:34:11 np0005548731 neutron-haproxy-ovnmeta-6a04bdf1-ac7f-432e-8ec7-636af678cb2a[284960]: [WARNING]  (284968) : Exiting Master process...
Dec  6 02:34:11 np0005548731 neutron-haproxy-ovnmeta-6a04bdf1-ac7f-432e-8ec7-636af678cb2a[284960]: [ALERT]    (284968) : Current worker (284970) exited with code 143 (Terminated)
Dec  6 02:34:11 np0005548731 neutron-haproxy-ovnmeta-6a04bdf1-ac7f-432e-8ec7-636af678cb2a[284960]: [WARNING]  (284968) : All workers exited. Exiting... (0)
Dec  6 02:34:11 np0005548731 systemd[1]: libpod-8cc9bdea11b09aaaed9c55ec166147deb055fd5435900a6e498f5e209bd0ee53.scope: Deactivated successfully.
Dec  6 02:34:11 np0005548731 podman[285011]: 2025-12-06 07:34:11.307479269 +0000 UTC m=+0.045879091 container died 8cc9bdea11b09aaaed9c55ec166147deb055fd5435900a6e498f5e209bd0ee53 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6a04bdf1-ac7f-432e-8ec7-636af678cb2a, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec  6 02:34:11 np0005548731 systemd[1]: var-lib-containers-storage-overlay-8290ad2ffcf4d27f1a7c75780aa0392288f8c768fc188688d0262ac60449c9bd-merged.mount: Deactivated successfully.
Dec  6 02:34:11 np0005548731 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-8cc9bdea11b09aaaed9c55ec166147deb055fd5435900a6e498f5e209bd0ee53-userdata-shm.mount: Deactivated successfully.
Dec  6 02:34:11 np0005548731 podman[285011]: 2025-12-06 07:34:11.347434605 +0000 UTC m=+0.085834437 container cleanup 8cc9bdea11b09aaaed9c55ec166147deb055fd5435900a6e498f5e209bd0ee53 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6a04bdf1-ac7f-432e-8ec7-636af678cb2a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec  6 02:34:11 np0005548731 systemd[1]: libpod-conmon-8cc9bdea11b09aaaed9c55ec166147deb055fd5435900a6e498f5e209bd0ee53.scope: Deactivated successfully.
Dec  6 02:34:11 np0005548731 nova_compute[232433]: 2025-12-06 07:34:11.378 232437 INFO nova.virt.libvirt.driver [-] [instance: e3caec0b-7382-4ccf-9d1b-c554a41c2c52] Instance destroyed successfully.#033[00m
Dec  6 02:34:11 np0005548731 nova_compute[232433]: 2025-12-06 07:34:11.379 232437 DEBUG nova.objects.instance [None req-312ce13a-677d-44fe-b787-bcba8e05dc4e bb72a3e638064a9896ab40615cfc7d67 1b09bb22417b499cbf8771188f7ae36f - - default default] Lazy-loading 'resources' on Instance uuid e3caec0b-7382-4ccf-9d1b-c554a41c2c52 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:34:11 np0005548731 nova_compute[232433]: 2025-12-06 07:34:11.398 232437 DEBUG nova.virt.libvirt.vif [None req-312ce13a-677d-44fe-b787-bcba8e05dc4e bb72a3e638064a9896ab40615cfc7d67 1b09bb22417b499cbf8771188f7ae36f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T07:33:50Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-InstanceActionsTestJSON-server-1654269090',display_name='tempest-InstanceActionsTestJSON-server-1654269090',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-instanceactionstestjson-server-1654269090',id=123,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-06T07:34:03Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='1b09bb22417b499cbf8771188f7ae36f',ramdisk_id='',reservation_id='r-yjwup1mc',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio'
,image_min_disk='1',image_min_ram='0',owner_project_name='tempest-InstanceActionsTestJSON-163749737',owner_user_name='tempest-InstanceActionsTestJSON-163749737-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T07:34:09Z,user_data=None,user_id='bb72a3e638064a9896ab40615cfc7d67',uuid=e3caec0b-7382-4ccf-9d1b-c554a41c2c52,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "f41e3444-24ee-4425-969b-99c4c3ebbd47", "address": "fa:16:3e:b0:3e:3e", "network": {"id": "6a04bdf1-ac7f-432e-8ec7-636af678cb2a", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-483634897-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1b09bb22417b499cbf8771188f7ae36f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf41e3444-24", "ovs_interfaceid": "f41e3444-24ee-4425-969b-99c4c3ebbd47", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec  6 02:34:11 np0005548731 nova_compute[232433]: 2025-12-06 07:34:11.398 232437 DEBUG nova.network.os_vif_util [None req-312ce13a-677d-44fe-b787-bcba8e05dc4e bb72a3e638064a9896ab40615cfc7d67 1b09bb22417b499cbf8771188f7ae36f - - default default] Converting VIF {"id": "f41e3444-24ee-4425-969b-99c4c3ebbd47", "address": "fa:16:3e:b0:3e:3e", "network": {"id": "6a04bdf1-ac7f-432e-8ec7-636af678cb2a", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-483634897-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1b09bb22417b499cbf8771188f7ae36f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf41e3444-24", "ovs_interfaceid": "f41e3444-24ee-4425-969b-99c4c3ebbd47", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 02:34:11 np0005548731 nova_compute[232433]: 2025-12-06 07:34:11.399 232437 DEBUG nova.network.os_vif_util [None req-312ce13a-677d-44fe-b787-bcba8e05dc4e bb72a3e638064a9896ab40615cfc7d67 1b09bb22417b499cbf8771188f7ae36f - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:b0:3e:3e,bridge_name='br-int',has_traffic_filtering=True,id=f41e3444-24ee-4425-969b-99c4c3ebbd47,network=Network(6a04bdf1-ac7f-432e-8ec7-636af678cb2a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf41e3444-24') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 02:34:11 np0005548731 nova_compute[232433]: 2025-12-06 07:34:11.399 232437 DEBUG os_vif [None req-312ce13a-677d-44fe-b787-bcba8e05dc4e bb72a3e638064a9896ab40615cfc7d67 1b09bb22417b499cbf8771188f7ae36f - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:b0:3e:3e,bridge_name='br-int',has_traffic_filtering=True,id=f41e3444-24ee-4425-969b-99c4c3ebbd47,network=Network(6a04bdf1-ac7f-432e-8ec7-636af678cb2a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf41e3444-24') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec  6 02:34:11 np0005548731 nova_compute[232433]: 2025-12-06 07:34:11.401 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:34:11 np0005548731 nova_compute[232433]: 2025-12-06 07:34:11.402 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf41e3444-24, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:34:11 np0005548731 nova_compute[232433]: 2025-12-06 07:34:11.403 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:34:11 np0005548731 nova_compute[232433]: 2025-12-06 07:34:11.405 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:34:11 np0005548731 nova_compute[232433]: 2025-12-06 07:34:11.407 232437 INFO os_vif [None req-312ce13a-677d-44fe-b787-bcba8e05dc4e bb72a3e638064a9896ab40615cfc7d67 1b09bb22417b499cbf8771188f7ae36f - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:b0:3e:3e,bridge_name='br-int',has_traffic_filtering=True,id=f41e3444-24ee-4425-969b-99c4c3ebbd47,network=Network(6a04bdf1-ac7f-432e-8ec7-636af678cb2a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf41e3444-24')#033[00m
Dec  6 02:34:11 np0005548731 nova_compute[232433]: 2025-12-06 07:34:11.413 232437 DEBUG nova.virt.libvirt.driver [None req-312ce13a-677d-44fe-b787-bcba8e05dc4e bb72a3e638064a9896ab40615cfc7d67 1b09bb22417b499cbf8771188f7ae36f - - default default] [instance: e3caec0b-7382-4ccf-9d1b-c554a41c2c52] Start _get_guest_xml network_info=[{"id": "f41e3444-24ee-4425-969b-99c4c3ebbd47", "address": "fa:16:3e:b0:3e:3e", "network": {"id": "6a04bdf1-ac7f-432e-8ec7-636af678cb2a", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-483634897-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1b09bb22417b499cbf8771188f7ae36f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf41e3444-24", "ovs_interfaceid": "f41e3444-24ee-4425-969b-99c4c3ebbd47", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None 
block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'device_type': 'disk', 'encryption_secret_uuid': None, 'disk_bus': 'virtio', 'boot_index': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'encryption_options': None, 'size': 0, 'encrypted': False, 'image_id': '6efab05d-c7cf-4770-a5c3-c806a2739063'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec  6 02:34:11 np0005548731 nova_compute[232433]: 2025-12-06 07:34:11.417 232437 WARNING nova.virt.libvirt.driver [None req-312ce13a-677d-44fe-b787-bcba8e05dc4e bb72a3e638064a9896ab40615cfc7d67 1b09bb22417b499cbf8771188f7ae36f - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  6 02:34:11 np0005548731 nova_compute[232433]: 2025-12-06 07:34:11.421 232437 DEBUG nova.virt.libvirt.host [None req-312ce13a-677d-44fe-b787-bcba8e05dc4e bb72a3e638064a9896ab40615cfc7d67 1b09bb22417b499cbf8771188f7ae36f - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec  6 02:34:11 np0005548731 nova_compute[232433]: 2025-12-06 07:34:11.422 232437 DEBUG nova.virt.libvirt.host [None req-312ce13a-677d-44fe-b787-bcba8e05dc4e bb72a3e638064a9896ab40615cfc7d67 1b09bb22417b499cbf8771188f7ae36f - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec  6 02:34:11 np0005548731 podman[285046]: 2025-12-06 07:34:11.424292731 +0000 UTC m=+0.052012901 container remove 8cc9bdea11b09aaaed9c55ec166147deb055fd5435900a6e498f5e209bd0ee53 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6a04bdf1-ac7f-432e-8ec7-636af678cb2a, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec  6 02:34:11 np0005548731 nova_compute[232433]: 2025-12-06 07:34:11.425 232437 DEBUG nova.virt.libvirt.host [None req-312ce13a-677d-44fe-b787-bcba8e05dc4e bb72a3e638064a9896ab40615cfc7d67 1b09bb22417b499cbf8771188f7ae36f - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec  6 02:34:11 np0005548731 nova_compute[232433]: 2025-12-06 07:34:11.425 232437 DEBUG nova.virt.libvirt.host [None req-312ce13a-677d-44fe-b787-bcba8e05dc4e bb72a3e638064a9896ab40615cfc7d67 1b09bb22417b499cbf8771188f7ae36f - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec  6 02:34:11 np0005548731 nova_compute[232433]: 2025-12-06 07:34:11.426 232437 DEBUG nova.virt.libvirt.driver [None req-312ce13a-677d-44fe-b787-bcba8e05dc4e bb72a3e638064a9896ab40615cfc7d67 1b09bb22417b499cbf8771188f7ae36f - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec  6 02:34:11 np0005548731 nova_compute[232433]: 2025-12-06 07:34:11.426 232437 DEBUG nova.virt.hardware [None req-312ce13a-677d-44fe-b787-bcba8e05dc4e bb72a3e638064a9896ab40615cfc7d67 1b09bb22417b499cbf8771188f7ae36f - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-06T06:56:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='25848a18-11d9-4f11-80b5-5d005675c76d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec  6 02:34:11 np0005548731 nova_compute[232433]: 2025-12-06 07:34:11.426 232437 DEBUG nova.virt.hardware [None req-312ce13a-677d-44fe-b787-bcba8e05dc4e bb72a3e638064a9896ab40615cfc7d67 1b09bb22417b499cbf8771188f7ae36f - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec  6 02:34:11 np0005548731 nova_compute[232433]: 2025-12-06 07:34:11.427 232437 DEBUG nova.virt.hardware [None req-312ce13a-677d-44fe-b787-bcba8e05dc4e bb72a3e638064a9896ab40615cfc7d67 1b09bb22417b499cbf8771188f7ae36f - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec  6 02:34:11 np0005548731 nova_compute[232433]: 2025-12-06 07:34:11.427 232437 DEBUG nova.virt.hardware [None req-312ce13a-677d-44fe-b787-bcba8e05dc4e bb72a3e638064a9896ab40615cfc7d67 1b09bb22417b499cbf8771188f7ae36f - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec  6 02:34:11 np0005548731 nova_compute[232433]: 2025-12-06 07:34:11.427 232437 DEBUG nova.virt.hardware [None req-312ce13a-677d-44fe-b787-bcba8e05dc4e bb72a3e638064a9896ab40615cfc7d67 1b09bb22417b499cbf8771188f7ae36f - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec  6 02:34:11 np0005548731 nova_compute[232433]: 2025-12-06 07:34:11.427 232437 DEBUG nova.virt.hardware [None req-312ce13a-677d-44fe-b787-bcba8e05dc4e bb72a3e638064a9896ab40615cfc7d67 1b09bb22417b499cbf8771188f7ae36f - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec  6 02:34:11 np0005548731 nova_compute[232433]: 2025-12-06 07:34:11.427 232437 DEBUG nova.virt.hardware [None req-312ce13a-677d-44fe-b787-bcba8e05dc4e bb72a3e638064a9896ab40615cfc7d67 1b09bb22417b499cbf8771188f7ae36f - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec  6 02:34:11 np0005548731 nova_compute[232433]: 2025-12-06 07:34:11.427 232437 DEBUG nova.virt.hardware [None req-312ce13a-677d-44fe-b787-bcba8e05dc4e bb72a3e638064a9896ab40615cfc7d67 1b09bb22417b499cbf8771188f7ae36f - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec  6 02:34:11 np0005548731 nova_compute[232433]: 2025-12-06 07:34:11.428 232437 DEBUG nova.virt.hardware [None req-312ce13a-677d-44fe-b787-bcba8e05dc4e bb72a3e638064a9896ab40615cfc7d67 1b09bb22417b499cbf8771188f7ae36f - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec  6 02:34:11 np0005548731 nova_compute[232433]: 2025-12-06 07:34:11.428 232437 DEBUG nova.virt.hardware [None req-312ce13a-677d-44fe-b787-bcba8e05dc4e bb72a3e638064a9896ab40615cfc7d67 1b09bb22417b499cbf8771188f7ae36f - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec  6 02:34:11 np0005548731 nova_compute[232433]: 2025-12-06 07:34:11.428 232437 DEBUG nova.virt.hardware [None req-312ce13a-677d-44fe-b787-bcba8e05dc4e bb72a3e638064a9896ab40615cfc7d67 1b09bb22417b499cbf8771188f7ae36f - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec  6 02:34:11 np0005548731 nova_compute[232433]: 2025-12-06 07:34:11.428 232437 DEBUG nova.objects.instance [None req-312ce13a-677d-44fe-b787-bcba8e05dc4e bb72a3e638064a9896ab40615cfc7d67 1b09bb22417b499cbf8771188f7ae36f - - default default] Lazy-loading 'vcpu_model' on Instance uuid e3caec0b-7382-4ccf-9d1b-c554a41c2c52 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:34:11 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:34:11.432 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[88f02e40-3695-492d-bb4a-b4ae6e0f05d3]: (4, ('Sat Dec  6 07:34:11 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-6a04bdf1-ac7f-432e-8ec7-636af678cb2a (8cc9bdea11b09aaaed9c55ec166147deb055fd5435900a6e498f5e209bd0ee53)\n8cc9bdea11b09aaaed9c55ec166147deb055fd5435900a6e498f5e209bd0ee53\nSat Dec  6 07:34:11 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-6a04bdf1-ac7f-432e-8ec7-636af678cb2a (8cc9bdea11b09aaaed9c55ec166147deb055fd5435900a6e498f5e209bd0ee53)\n8cc9bdea11b09aaaed9c55ec166147deb055fd5435900a6e498f5e209bd0ee53\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:34:11 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:34:11.433 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[b37c0c8d-d1b8-4e16-8d16-b1d99a37c6cf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:34:11 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:34:11.434 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6a04bdf1-a0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:34:11 np0005548731 nova_compute[232433]: 2025-12-06 07:34:11.436 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:34:11 np0005548731 kernel: tap6a04bdf1-a0: left promiscuous mode
Dec  6 02:34:11 np0005548731 nova_compute[232433]: 2025-12-06 07:34:11.450 232437 DEBUG oslo_concurrency.processutils [None req-312ce13a-677d-44fe-b787-bcba8e05dc4e bb72a3e638064a9896ab40615cfc7d67 1b09bb22417b499cbf8771188f7ae36f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:34:11 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:34:11.454 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[68b0409d-06c1-4284-9f55-10343d5db19b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:34:11 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:34:11.467 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[2e0b6212-b3c4-4985-8a3b-c9dcd2d3d667]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:34:11 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:34:11.468 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[b48a0048-c949-458d-9bec-b818b49368a6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:34:11 np0005548731 nova_compute[232433]: 2025-12-06 07:34:11.473 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:34:11 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:34:11.484 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[f543da32-0806-49f3-9606-12ed98fcc8ed]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 670509, 'reachable_time': 32108, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 285069, 'error': None, 'target': 'ovnmeta-6a04bdf1-ac7f-432e-8ec7-636af678cb2a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:34:11 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:34:11.486 144079 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-6a04bdf1-ac7f-432e-8ec7-636af678cb2a deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Dec  6 02:34:11 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:34:11.486 144079 DEBUG oslo.privsep.daemon [-] privsep: reply[7d9e716e-9b25-458c-85dd-60ea81751610]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:34:11 np0005548731 systemd[1]: run-netns-ovnmeta\x2d6a04bdf1\x2dac7f\x2d432e\x2d8ec7\x2d636af678cb2a.mount: Deactivated successfully.
Dec  6 02:34:11 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:34:11 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:34:11 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:34:11.694 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:34:11 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  6 02:34:11 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1849849855' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  6 02:34:11 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:34:11.917 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=9f96b960-b4f2-40bd-ae99-08121f5e8b78, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '48'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:34:11 np0005548731 nova_compute[232433]: 2025-12-06 07:34:11.981 232437 DEBUG nova.compute.manager [req-d785e06d-c88e-4653-8935-66fd8cf637fc req-ab88643d-1893-4b18-88ab-eb95208f1475 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: e3caec0b-7382-4ccf-9d1b-c554a41c2c52] Received event network-vif-unplugged-f41e3444-24ee-4425-969b-99c4c3ebbd47 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:34:11 np0005548731 nova_compute[232433]: 2025-12-06 07:34:11.982 232437 DEBUG oslo_concurrency.lockutils [req-d785e06d-c88e-4653-8935-66fd8cf637fc req-ab88643d-1893-4b18-88ab-eb95208f1475 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "e3caec0b-7382-4ccf-9d1b-c554a41c2c52-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:34:11 np0005548731 nova_compute[232433]: 2025-12-06 07:34:11.982 232437 DEBUG oslo_concurrency.lockutils [req-d785e06d-c88e-4653-8935-66fd8cf637fc req-ab88643d-1893-4b18-88ab-eb95208f1475 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "e3caec0b-7382-4ccf-9d1b-c554a41c2c52-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:34:11 np0005548731 nova_compute[232433]: 2025-12-06 07:34:11.982 232437 DEBUG oslo_concurrency.lockutils [req-d785e06d-c88e-4653-8935-66fd8cf637fc req-ab88643d-1893-4b18-88ab-eb95208f1475 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "e3caec0b-7382-4ccf-9d1b-c554a41c2c52-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:34:11 np0005548731 nova_compute[232433]: 2025-12-06 07:34:11.982 232437 DEBUG nova.compute.manager [req-d785e06d-c88e-4653-8935-66fd8cf637fc req-ab88643d-1893-4b18-88ab-eb95208f1475 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: e3caec0b-7382-4ccf-9d1b-c554a41c2c52] No waiting events found dispatching network-vif-unplugged-f41e3444-24ee-4425-969b-99c4c3ebbd47 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 02:34:11 np0005548731 nova_compute[232433]: 2025-12-06 07:34:11.983 232437 WARNING nova.compute.manager [req-d785e06d-c88e-4653-8935-66fd8cf637fc req-ab88643d-1893-4b18-88ab-eb95208f1475 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: e3caec0b-7382-4ccf-9d1b-c554a41c2c52] Received unexpected event network-vif-unplugged-f41e3444-24ee-4425-969b-99c4c3ebbd47 for instance with vm_state active and task_state reboot_started_hard.#033[00m
Dec  6 02:34:12 np0005548731 nova_compute[232433]: 2025-12-06 07:34:12.191 232437 DEBUG oslo_concurrency.processutils [None req-312ce13a-677d-44fe-b787-bcba8e05dc4e bb72a3e638064a9896ab40615cfc7d67 1b09bb22417b499cbf8771188f7ae36f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.741s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:34:12 np0005548731 nova_compute[232433]: 2025-12-06 07:34:12.223 232437 DEBUG oslo_concurrency.processutils [None req-312ce13a-677d-44fe-b787-bcba8e05dc4e bb72a3e638064a9896ab40615cfc7d67 1b09bb22417b499cbf8771188f7ae36f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:34:12 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:34:12 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:34:12 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:34:12.520 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:34:12 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  6 02:34:12 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3016613992' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  6 02:34:12 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e285 e285: 3 total, 3 up, 3 in
Dec  6 02:34:12 np0005548731 nova_compute[232433]: 2025-12-06 07:34:12.649 232437 DEBUG oslo_concurrency.processutils [None req-312ce13a-677d-44fe-b787-bcba8e05dc4e bb72a3e638064a9896ab40615cfc7d67 1b09bb22417b499cbf8771188f7ae36f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.426s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:34:12 np0005548731 nova_compute[232433]: 2025-12-06 07:34:12.651 232437 DEBUG nova.virt.libvirt.vif [None req-312ce13a-677d-44fe-b787-bcba8e05dc4e bb72a3e638064a9896ab40615cfc7d67 1b09bb22417b499cbf8771188f7ae36f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T07:33:50Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-InstanceActionsTestJSON-server-1654269090',display_name='tempest-InstanceActionsTestJSON-server-1654269090',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-instanceactionstestjson-server-1654269090',id=123,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-06T07:34:03Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='1b09bb22417b499cbf8771188f7ae36f',ramdisk_id='',reservation_id='r-yjwup1mc',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio'
,image_min_disk='1',image_min_ram='0',owner_project_name='tempest-InstanceActionsTestJSON-163749737',owner_user_name='tempest-InstanceActionsTestJSON-163749737-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T07:34:09Z,user_data=None,user_id='bb72a3e638064a9896ab40615cfc7d67',uuid=e3caec0b-7382-4ccf-9d1b-c554a41c2c52,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "f41e3444-24ee-4425-969b-99c4c3ebbd47", "address": "fa:16:3e:b0:3e:3e", "network": {"id": "6a04bdf1-ac7f-432e-8ec7-636af678cb2a", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-483634897-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1b09bb22417b499cbf8771188f7ae36f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf41e3444-24", "ovs_interfaceid": "f41e3444-24ee-4425-969b-99c4c3ebbd47", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec  6 02:34:12 np0005548731 nova_compute[232433]: 2025-12-06 07:34:12.652 232437 DEBUG nova.network.os_vif_util [None req-312ce13a-677d-44fe-b787-bcba8e05dc4e bb72a3e638064a9896ab40615cfc7d67 1b09bb22417b499cbf8771188f7ae36f - - default default] Converting VIF {"id": "f41e3444-24ee-4425-969b-99c4c3ebbd47", "address": "fa:16:3e:b0:3e:3e", "network": {"id": "6a04bdf1-ac7f-432e-8ec7-636af678cb2a", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-483634897-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1b09bb22417b499cbf8771188f7ae36f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf41e3444-24", "ovs_interfaceid": "f41e3444-24ee-4425-969b-99c4c3ebbd47", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 02:34:12 np0005548731 nova_compute[232433]: 2025-12-06 07:34:12.653 232437 DEBUG nova.network.os_vif_util [None req-312ce13a-677d-44fe-b787-bcba8e05dc4e bb72a3e638064a9896ab40615cfc7d67 1b09bb22417b499cbf8771188f7ae36f - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:b0:3e:3e,bridge_name='br-int',has_traffic_filtering=True,id=f41e3444-24ee-4425-969b-99c4c3ebbd47,network=Network(6a04bdf1-ac7f-432e-8ec7-636af678cb2a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf41e3444-24') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 02:34:12 np0005548731 nova_compute[232433]: 2025-12-06 07:34:12.654 232437 DEBUG nova.objects.instance [None req-312ce13a-677d-44fe-b787-bcba8e05dc4e bb72a3e638064a9896ab40615cfc7d67 1b09bb22417b499cbf8771188f7ae36f - - default default] Lazy-loading 'pci_devices' on Instance uuid e3caec0b-7382-4ccf-9d1b-c554a41c2c52 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:34:12 np0005548731 nova_compute[232433]: 2025-12-06 07:34:12.675 232437 DEBUG nova.virt.libvirt.driver [None req-312ce13a-677d-44fe-b787-bcba8e05dc4e bb72a3e638064a9896ab40615cfc7d67 1b09bb22417b499cbf8771188f7ae36f - - default default] [instance: e3caec0b-7382-4ccf-9d1b-c554a41c2c52] End _get_guest_xml xml=<domain type="kvm">
Dec  6 02:34:12 np0005548731 nova_compute[232433]:  <uuid>e3caec0b-7382-4ccf-9d1b-c554a41c2c52</uuid>
Dec  6 02:34:12 np0005548731 nova_compute[232433]:  <name>instance-0000007b</name>
Dec  6 02:34:12 np0005548731 nova_compute[232433]:  <memory>131072</memory>
Dec  6 02:34:12 np0005548731 nova_compute[232433]:  <vcpu>1</vcpu>
Dec  6 02:34:12 np0005548731 nova_compute[232433]:  <metadata>
Dec  6 02:34:12 np0005548731 nova_compute[232433]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec  6 02:34:12 np0005548731 nova_compute[232433]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec  6 02:34:12 np0005548731 nova_compute[232433]:      <nova:name>tempest-InstanceActionsTestJSON-server-1654269090</nova:name>
Dec  6 02:34:12 np0005548731 nova_compute[232433]:      <nova:creationTime>2025-12-06 07:34:11</nova:creationTime>
Dec  6 02:34:12 np0005548731 nova_compute[232433]:      <nova:flavor name="m1.nano">
Dec  6 02:34:12 np0005548731 nova_compute[232433]:        <nova:memory>128</nova:memory>
Dec  6 02:34:12 np0005548731 nova_compute[232433]:        <nova:disk>1</nova:disk>
Dec  6 02:34:12 np0005548731 nova_compute[232433]:        <nova:swap>0</nova:swap>
Dec  6 02:34:12 np0005548731 nova_compute[232433]:        <nova:ephemeral>0</nova:ephemeral>
Dec  6 02:34:12 np0005548731 nova_compute[232433]:        <nova:vcpus>1</nova:vcpus>
Dec  6 02:34:12 np0005548731 nova_compute[232433]:      </nova:flavor>
Dec  6 02:34:12 np0005548731 nova_compute[232433]:      <nova:owner>
Dec  6 02:34:12 np0005548731 nova_compute[232433]:        <nova:user uuid="bb72a3e638064a9896ab40615cfc7d67">tempest-InstanceActionsTestJSON-163749737-project-member</nova:user>
Dec  6 02:34:12 np0005548731 nova_compute[232433]:        <nova:project uuid="1b09bb22417b499cbf8771188f7ae36f">tempest-InstanceActionsTestJSON-163749737</nova:project>
Dec  6 02:34:12 np0005548731 nova_compute[232433]:      </nova:owner>
Dec  6 02:34:12 np0005548731 nova_compute[232433]:      <nova:root type="image" uuid="6efab05d-c7cf-4770-a5c3-c806a2739063"/>
Dec  6 02:34:12 np0005548731 nova_compute[232433]:      <nova:ports>
Dec  6 02:34:12 np0005548731 nova_compute[232433]:        <nova:port uuid="f41e3444-24ee-4425-969b-99c4c3ebbd47">
Dec  6 02:34:12 np0005548731 nova_compute[232433]:          <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Dec  6 02:34:12 np0005548731 nova_compute[232433]:        </nova:port>
Dec  6 02:34:12 np0005548731 nova_compute[232433]:      </nova:ports>
Dec  6 02:34:12 np0005548731 nova_compute[232433]:    </nova:instance>
Dec  6 02:34:12 np0005548731 nova_compute[232433]:  </metadata>
Dec  6 02:34:12 np0005548731 nova_compute[232433]:  <sysinfo type="smbios">
Dec  6 02:34:12 np0005548731 nova_compute[232433]:    <system>
Dec  6 02:34:12 np0005548731 nova_compute[232433]:      <entry name="manufacturer">RDO</entry>
Dec  6 02:34:12 np0005548731 nova_compute[232433]:      <entry name="product">OpenStack Compute</entry>
Dec  6 02:34:12 np0005548731 nova_compute[232433]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec  6 02:34:12 np0005548731 nova_compute[232433]:      <entry name="serial">e3caec0b-7382-4ccf-9d1b-c554a41c2c52</entry>
Dec  6 02:34:12 np0005548731 nova_compute[232433]:      <entry name="uuid">e3caec0b-7382-4ccf-9d1b-c554a41c2c52</entry>
Dec  6 02:34:12 np0005548731 nova_compute[232433]:      <entry name="family">Virtual Machine</entry>
Dec  6 02:34:12 np0005548731 nova_compute[232433]:    </system>
Dec  6 02:34:12 np0005548731 nova_compute[232433]:  </sysinfo>
Dec  6 02:34:12 np0005548731 nova_compute[232433]:  <os>
Dec  6 02:34:12 np0005548731 nova_compute[232433]:    <type arch="x86_64" machine="q35">hvm</type>
Dec  6 02:34:12 np0005548731 nova_compute[232433]:    <boot dev="hd"/>
Dec  6 02:34:12 np0005548731 nova_compute[232433]:    <smbios mode="sysinfo"/>
Dec  6 02:34:12 np0005548731 nova_compute[232433]:  </os>
Dec  6 02:34:12 np0005548731 nova_compute[232433]:  <features>
Dec  6 02:34:12 np0005548731 nova_compute[232433]:    <acpi/>
Dec  6 02:34:12 np0005548731 nova_compute[232433]:    <apic/>
Dec  6 02:34:12 np0005548731 nova_compute[232433]:    <vmcoreinfo/>
Dec  6 02:34:12 np0005548731 nova_compute[232433]:  </features>
Dec  6 02:34:12 np0005548731 nova_compute[232433]:  <clock offset="utc">
Dec  6 02:34:12 np0005548731 nova_compute[232433]:    <timer name="pit" tickpolicy="delay"/>
Dec  6 02:34:12 np0005548731 nova_compute[232433]:    <timer name="rtc" tickpolicy="catchup"/>
Dec  6 02:34:12 np0005548731 nova_compute[232433]:    <timer name="hpet" present="no"/>
Dec  6 02:34:12 np0005548731 nova_compute[232433]:  </clock>
Dec  6 02:34:12 np0005548731 nova_compute[232433]:  <cpu mode="custom" match="exact">
Dec  6 02:34:12 np0005548731 nova_compute[232433]:    <model>Nehalem</model>
Dec  6 02:34:12 np0005548731 nova_compute[232433]:    <topology sockets="1" cores="1" threads="1"/>
Dec  6 02:34:12 np0005548731 nova_compute[232433]:  </cpu>
Dec  6 02:34:12 np0005548731 nova_compute[232433]:  <devices>
Dec  6 02:34:12 np0005548731 nova_compute[232433]:    <disk type="network" device="disk">
Dec  6 02:34:12 np0005548731 nova_compute[232433]:      <driver type="raw" cache="none"/>
Dec  6 02:34:12 np0005548731 nova_compute[232433]:      <source protocol="rbd" name="vms/e3caec0b-7382-4ccf-9d1b-c554a41c2c52_disk">
Dec  6 02:34:12 np0005548731 nova_compute[232433]:        <host name="192.168.122.100" port="6789"/>
Dec  6 02:34:12 np0005548731 nova_compute[232433]:        <host name="192.168.122.102" port="6789"/>
Dec  6 02:34:12 np0005548731 nova_compute[232433]:        <host name="192.168.122.101" port="6789"/>
Dec  6 02:34:12 np0005548731 nova_compute[232433]:      </source>
Dec  6 02:34:12 np0005548731 nova_compute[232433]:      <auth username="openstack">
Dec  6 02:34:12 np0005548731 nova_compute[232433]:        <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec  6 02:34:12 np0005548731 nova_compute[232433]:      </auth>
Dec  6 02:34:12 np0005548731 nova_compute[232433]:      <target dev="vda" bus="virtio"/>
Dec  6 02:34:12 np0005548731 nova_compute[232433]:    </disk>
Dec  6 02:34:12 np0005548731 nova_compute[232433]:    <disk type="network" device="cdrom">
Dec  6 02:34:12 np0005548731 nova_compute[232433]:      <driver type="raw" cache="none"/>
Dec  6 02:34:12 np0005548731 nova_compute[232433]:      <source protocol="rbd" name="vms/e3caec0b-7382-4ccf-9d1b-c554a41c2c52_disk.config">
Dec  6 02:34:12 np0005548731 nova_compute[232433]:        <host name="192.168.122.100" port="6789"/>
Dec  6 02:34:12 np0005548731 nova_compute[232433]:        <host name="192.168.122.102" port="6789"/>
Dec  6 02:34:12 np0005548731 nova_compute[232433]:        <host name="192.168.122.101" port="6789"/>
Dec  6 02:34:12 np0005548731 nova_compute[232433]:      </source>
Dec  6 02:34:12 np0005548731 nova_compute[232433]:      <auth username="openstack">
Dec  6 02:34:12 np0005548731 nova_compute[232433]:        <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec  6 02:34:12 np0005548731 nova_compute[232433]:      </auth>
Dec  6 02:34:12 np0005548731 nova_compute[232433]:      <target dev="sda" bus="sata"/>
Dec  6 02:34:12 np0005548731 nova_compute[232433]:    </disk>
Dec  6 02:34:12 np0005548731 nova_compute[232433]:    <interface type="ethernet">
Dec  6 02:34:12 np0005548731 nova_compute[232433]:      <mac address="fa:16:3e:b0:3e:3e"/>
Dec  6 02:34:12 np0005548731 nova_compute[232433]:      <model type="virtio"/>
Dec  6 02:34:12 np0005548731 nova_compute[232433]:      <driver name="vhost" rx_queue_size="512"/>
Dec  6 02:34:12 np0005548731 nova_compute[232433]:      <mtu size="1442"/>
Dec  6 02:34:12 np0005548731 nova_compute[232433]:      <target dev="tapf41e3444-24"/>
Dec  6 02:34:12 np0005548731 nova_compute[232433]:    </interface>
Dec  6 02:34:12 np0005548731 nova_compute[232433]:    <serial type="pty">
Dec  6 02:34:12 np0005548731 nova_compute[232433]:      <log file="/var/lib/nova/instances/e3caec0b-7382-4ccf-9d1b-c554a41c2c52/console.log" append="off"/>
Dec  6 02:34:12 np0005548731 nova_compute[232433]:    </serial>
Dec  6 02:34:12 np0005548731 nova_compute[232433]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec  6 02:34:12 np0005548731 nova_compute[232433]:    <video>
Dec  6 02:34:12 np0005548731 nova_compute[232433]:      <model type="virtio"/>
Dec  6 02:34:12 np0005548731 nova_compute[232433]:    </video>
Dec  6 02:34:12 np0005548731 nova_compute[232433]:    <input type="tablet" bus="usb"/>
Dec  6 02:34:12 np0005548731 nova_compute[232433]:    <input type="keyboard" bus="usb"/>
Dec  6 02:34:12 np0005548731 nova_compute[232433]:    <rng model="virtio">
Dec  6 02:34:12 np0005548731 nova_compute[232433]:      <backend model="random">/dev/urandom</backend>
Dec  6 02:34:12 np0005548731 nova_compute[232433]:    </rng>
Dec  6 02:34:12 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root"/>
Dec  6 02:34:12 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:34:12 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:34:12 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:34:12 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:34:12 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:34:12 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:34:12 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:34:12 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:34:12 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:34:12 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:34:12 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:34:12 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:34:12 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:34:12 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:34:12 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:34:12 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:34:12 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:34:12 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:34:12 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:34:12 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:34:12 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:34:12 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:34:12 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:34:12 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:34:12 np0005548731 nova_compute[232433]:    <controller type="usb" index="0"/>
Dec  6 02:34:12 np0005548731 nova_compute[232433]:    <memballoon model="virtio">
Dec  6 02:34:12 np0005548731 nova_compute[232433]:      <stats period="10"/>
Dec  6 02:34:12 np0005548731 nova_compute[232433]:    </memballoon>
Dec  6 02:34:12 np0005548731 nova_compute[232433]:  </devices>
Dec  6 02:34:12 np0005548731 nova_compute[232433]: </domain>
Dec  6 02:34:12 np0005548731 nova_compute[232433]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec  6 02:34:12 np0005548731 nova_compute[232433]: 2025-12-06 07:34:12.676 232437 DEBUG nova.virt.libvirt.driver [None req-312ce13a-677d-44fe-b787-bcba8e05dc4e bb72a3e638064a9896ab40615cfc7d67 1b09bb22417b499cbf8771188f7ae36f - - default default] skipping disk for instance-0000007b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  6 02:34:12 np0005548731 nova_compute[232433]: 2025-12-06 07:34:12.677 232437 DEBUG nova.virt.libvirt.driver [None req-312ce13a-677d-44fe-b787-bcba8e05dc4e bb72a3e638064a9896ab40615cfc7d67 1b09bb22417b499cbf8771188f7ae36f - - default default] skipping disk for instance-0000007b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  6 02:34:12 np0005548731 nova_compute[232433]: 2025-12-06 07:34:12.677 232437 DEBUG nova.virt.libvirt.vif [None req-312ce13a-677d-44fe-b787-bcba8e05dc4e bb72a3e638064a9896ab40615cfc7d67 1b09bb22417b499cbf8771188f7ae36f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T07:33:50Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-InstanceActionsTestJSON-server-1654269090',display_name='tempest-InstanceActionsTestJSON-server-1654269090',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-instanceactionstestjson-server-1654269090',id=123,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-06T07:34:03Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=<?>,power_state=1,progress=0,project_id='1b09bb22417b499cbf8771188f7ae36f',ramdisk_id='',reservation_id='r-yjwup1mc',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_mode
l='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-InstanceActionsTestJSON-163749737',owner_user_name='tempest-InstanceActionsTestJSON-163749737-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T07:34:09Z,user_data=None,user_id='bb72a3e638064a9896ab40615cfc7d67',uuid=e3caec0b-7382-4ccf-9d1b-c554a41c2c52,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "f41e3444-24ee-4425-969b-99c4c3ebbd47", "address": "fa:16:3e:b0:3e:3e", "network": {"id": "6a04bdf1-ac7f-432e-8ec7-636af678cb2a", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-483634897-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1b09bb22417b499cbf8771188f7ae36f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf41e3444-24", "ovs_interfaceid": "f41e3444-24ee-4425-969b-99c4c3ebbd47", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec  6 02:34:12 np0005548731 nova_compute[232433]: 2025-12-06 07:34:12.677 232437 DEBUG nova.network.os_vif_util [None req-312ce13a-677d-44fe-b787-bcba8e05dc4e bb72a3e638064a9896ab40615cfc7d67 1b09bb22417b499cbf8771188f7ae36f - - default default] Converting VIF {"id": "f41e3444-24ee-4425-969b-99c4c3ebbd47", "address": "fa:16:3e:b0:3e:3e", "network": {"id": "6a04bdf1-ac7f-432e-8ec7-636af678cb2a", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-483634897-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1b09bb22417b499cbf8771188f7ae36f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf41e3444-24", "ovs_interfaceid": "f41e3444-24ee-4425-969b-99c4c3ebbd47", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 02:34:12 np0005548731 nova_compute[232433]: 2025-12-06 07:34:12.678 232437 DEBUG nova.network.os_vif_util [None req-312ce13a-677d-44fe-b787-bcba8e05dc4e bb72a3e638064a9896ab40615cfc7d67 1b09bb22417b499cbf8771188f7ae36f - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:b0:3e:3e,bridge_name='br-int',has_traffic_filtering=True,id=f41e3444-24ee-4425-969b-99c4c3ebbd47,network=Network(6a04bdf1-ac7f-432e-8ec7-636af678cb2a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf41e3444-24') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 02:34:12 np0005548731 nova_compute[232433]: 2025-12-06 07:34:12.678 232437 DEBUG os_vif [None req-312ce13a-677d-44fe-b787-bcba8e05dc4e bb72a3e638064a9896ab40615cfc7d67 1b09bb22417b499cbf8771188f7ae36f - - default default] Plugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:b0:3e:3e,bridge_name='br-int',has_traffic_filtering=True,id=f41e3444-24ee-4425-969b-99c4c3ebbd47,network=Network(6a04bdf1-ac7f-432e-8ec7-636af678cb2a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf41e3444-24') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec  6 02:34:12 np0005548731 nova_compute[232433]: 2025-12-06 07:34:12.679 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:34:12 np0005548731 nova_compute[232433]: 2025-12-06 07:34:12.679 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:34:12 np0005548731 nova_compute[232433]: 2025-12-06 07:34:12.680 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  6 02:34:12 np0005548731 nova_compute[232433]: 2025-12-06 07:34:12.683 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:34:12 np0005548731 nova_compute[232433]: 2025-12-06 07:34:12.684 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf41e3444-24, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:34:12 np0005548731 nova_compute[232433]: 2025-12-06 07:34:12.684 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapf41e3444-24, col_values=(('external_ids', {'iface-id': 'f41e3444-24ee-4425-969b-99c4c3ebbd47', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:b0:3e:3e', 'vm-uuid': 'e3caec0b-7382-4ccf-9d1b-c554a41c2c52'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:34:12 np0005548731 nova_compute[232433]: 2025-12-06 07:34:12.686 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:34:12 np0005548731 NetworkManager[49182]: <info>  [1765006452.6872] manager: (tapf41e3444-24): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/255)
Dec  6 02:34:12 np0005548731 nova_compute[232433]: 2025-12-06 07:34:12.688 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  6 02:34:12 np0005548731 nova_compute[232433]: 2025-12-06 07:34:12.691 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:34:12 np0005548731 nova_compute[232433]: 2025-12-06 07:34:12.691 232437 INFO os_vif [None req-312ce13a-677d-44fe-b787-bcba8e05dc4e bb72a3e638064a9896ab40615cfc7d67 1b09bb22417b499cbf8771188f7ae36f - - default default] Successfully plugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:b0:3e:3e,bridge_name='br-int',has_traffic_filtering=True,id=f41e3444-24ee-4425-969b-99c4c3ebbd47,network=Network(6a04bdf1-ac7f-432e-8ec7-636af678cb2a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf41e3444-24')#033[00m
Dec  6 02:34:12 np0005548731 kernel: tapf41e3444-24: entered promiscuous mode
Dec  6 02:34:12 np0005548731 systemd-udevd[284991]: Network interface NamePolicy= disabled on kernel command line.
Dec  6 02:34:12 np0005548731 NetworkManager[49182]: <info>  [1765006452.7535] manager: (tapf41e3444-24): new Tun device (/org/freedesktop/NetworkManager/Devices/256)
Dec  6 02:34:12 np0005548731 ovn_controller[133927]: 2025-12-06T07:34:12Z|00537|binding|INFO|Claiming lport f41e3444-24ee-4425-969b-99c4c3ebbd47 for this chassis.
Dec  6 02:34:12 np0005548731 ovn_controller[133927]: 2025-12-06T07:34:12Z|00538|binding|INFO|f41e3444-24ee-4425-969b-99c4c3ebbd47: Claiming fa:16:3e:b0:3e:3e 10.100.0.11
Dec  6 02:34:12 np0005548731 nova_compute[232433]: 2025-12-06 07:34:12.754 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:34:12 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:34:12.760 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b0:3e:3e 10.100.0.11'], port_security=['fa:16:3e:b0:3e:3e 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'e3caec0b-7382-4ccf-9d1b-c554a41c2c52', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6a04bdf1-ac7f-432e-8ec7-636af678cb2a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1b09bb22417b499cbf8771188f7ae36f', 'neutron:revision_number': '5', 'neutron:security_group_ids': 'a03d4d58-c3b9-4bf3-9e94-34a5e8c69515', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=231e621a-df54-469a-acfd-b66004fc5e38, chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], logical_port=f41e3444-24ee-4425-969b-99c4c3ebbd47) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 02:34:12 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:34:12.761 143965 INFO neutron.agent.ovn.metadata.agent [-] Port f41e3444-24ee-4425-969b-99c4c3ebbd47 in datapath 6a04bdf1-ac7f-432e-8ec7-636af678cb2a bound to our chassis#033[00m
Dec  6 02:34:12 np0005548731 NetworkManager[49182]: <info>  [1765006452.7635] device (tapf41e3444-24): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec  6 02:34:12 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:34:12.763 143965 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 6a04bdf1-ac7f-432e-8ec7-636af678cb2a#033[00m
Dec  6 02:34:12 np0005548731 NetworkManager[49182]: <info>  [1765006452.7649] device (tapf41e3444-24): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec  6 02:34:12 np0005548731 ovn_controller[133927]: 2025-12-06T07:34:12Z|00539|binding|INFO|Setting lport f41e3444-24ee-4425-969b-99c4c3ebbd47 ovn-installed in OVS
Dec  6 02:34:12 np0005548731 ovn_controller[133927]: 2025-12-06T07:34:12Z|00540|binding|INFO|Setting lport f41e3444-24ee-4425-969b-99c4c3ebbd47 up in Southbound
Dec  6 02:34:12 np0005548731 nova_compute[232433]: 2025-12-06 07:34:12.769 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:34:12 np0005548731 nova_compute[232433]: 2025-12-06 07:34:12.771 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:34:12 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:34:12.773 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[6c53ed91-1ab5-479f-ad4a-b1ef3ed2d9a8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:34:12 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:34:12.774 143965 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap6a04bdf1-a1 in ovnmeta-6a04bdf1-ac7f-432e-8ec7-636af678cb2a namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Dec  6 02:34:12 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:34:12.775 236668 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap6a04bdf1-a0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Dec  6 02:34:12 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:34:12.776 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[31925a6a-7551-4551-81d5-32636d4b2815]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:34:12 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:34:12.776 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[b8cd2863-e661-4ac1-9c63-96b6f5285266]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:34:12 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:34:12.787 144079 DEBUG oslo.privsep.daemon [-] privsep: reply[8fcaf0bf-bbf3-497a-9ea0-2e05fa8ad4f9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:34:12 np0005548731 systemd-machined[195355]: New machine qemu-54-instance-0000007b.
Dec  6 02:34:12 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:34:12.798 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[6655f885-8c86-43e4-ab1c-b2a10ce576b6]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:34:12 np0005548731 systemd[1]: Started Virtual Machine qemu-54-instance-0000007b.
Dec  6 02:34:12 np0005548731 nova_compute[232433]: 2025-12-06 07:34:12.807 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:34:12 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:34:12.831 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[4b2cd4c0-4ea8-4f2a-ae30-4a2b3499625e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:34:12 np0005548731 NetworkManager[49182]: <info>  [1765006452.8379] manager: (tap6a04bdf1-a0): new Veth device (/org/freedesktop/NetworkManager/Devices/257)
Dec  6 02:34:12 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:34:12.838 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[1daa984c-811b-4854-84f4-4e0bb22d0d84]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:34:12 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:34:12.869 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[b90427a2-70b1-40e5-a840-84b3f7ab4c72]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:34:12 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:34:12.871 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[0bcbc1c7-c0af-4222-a791-7a23102f70ca]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:34:12 np0005548731 NetworkManager[49182]: <info>  [1765006452.8906] device (tap6a04bdf1-a0): carrier: link connected
Dec  6 02:34:12 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:34:12.897 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[b5a739d4-19d3-4128-b103-1e52374201a7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:34:12 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:34:12.910 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[31427cb6-e91a-45dc-9a60-bfe87ec8b75d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6a04bdf1-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b1:e1:b1'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 169], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 671489, 'reachable_time': 33074, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 285179, 'error': None, 'target': 'ovnmeta-6a04bdf1-ac7f-432e-8ec7-636af678cb2a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:34:12 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:34:12.924 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[9e71f3b2-241e-4596-a4d3-fcfd7de51d51]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feb1:e1b1'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 671489, 'tstamp': 671489}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 285180, 'error': None, 'target': 'ovnmeta-6a04bdf1-ac7f-432e-8ec7-636af678cb2a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:34:12 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:34:12.938 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[0900e464-78d9-4f46-abb7-7ea870a853e8]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6a04bdf1-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b1:e1:b1'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 169], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 671489, 'reachable_time': 33074, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 285181, 'error': None, 'target': 'ovnmeta-6a04bdf1-ac7f-432e-8ec7-636af678cb2a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:34:12 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:34:12.968 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[5d0bf4a1-ae6c-410e-b34c-e1239b4491a9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:34:13 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:34:13.017 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[de852385-81ee-43f3-af6e-822def845ed0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:34:13 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:34:13.018 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6a04bdf1-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:34:13 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:34:13.018 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  6 02:34:13 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:34:13.018 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6a04bdf1-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:34:13 np0005548731 NetworkManager[49182]: <info>  [1765006453.0206] manager: (tap6a04bdf1-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/258)
Dec  6 02:34:13 np0005548731 kernel: tap6a04bdf1-a0: entered promiscuous mode
Dec  6 02:34:13 np0005548731 nova_compute[232433]: 2025-12-06 07:34:13.020 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:34:13 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:34:13.023 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap6a04bdf1-a0, col_values=(('external_ids', {'iface-id': 'ef965c3b-ed76-424f-a7d9-27ee8bdd00ef'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:34:13 np0005548731 nova_compute[232433]: 2025-12-06 07:34:13.024 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:34:13 np0005548731 ovn_controller[133927]: 2025-12-06T07:34:13Z|00541|binding|INFO|Releasing lport ef965c3b-ed76-424f-a7d9-27ee8bdd00ef from this chassis (sb_readonly=0)
Dec  6 02:34:13 np0005548731 nova_compute[232433]: 2025-12-06 07:34:13.040 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:34:13 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:34:13.041 143965 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/6a04bdf1-ac7f-432e-8ec7-636af678cb2a.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/6a04bdf1-ac7f-432e-8ec7-636af678cb2a.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Dec  6 02:34:13 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:34:13.041 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[2acb4faa-8820-4b95-bb08-6d54e4df9a1f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:34:13 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:34:13.042 143965 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec  6 02:34:13 np0005548731 ovn_metadata_agent[143960]: global
Dec  6 02:34:13 np0005548731 ovn_metadata_agent[143960]:    log         /dev/log local0 debug
Dec  6 02:34:13 np0005548731 ovn_metadata_agent[143960]:    log-tag     haproxy-metadata-proxy-6a04bdf1-ac7f-432e-8ec7-636af678cb2a
Dec  6 02:34:13 np0005548731 ovn_metadata_agent[143960]:    user        root
Dec  6 02:34:13 np0005548731 ovn_metadata_agent[143960]:    group       root
Dec  6 02:34:13 np0005548731 ovn_metadata_agent[143960]:    maxconn     1024
Dec  6 02:34:13 np0005548731 ovn_metadata_agent[143960]:    pidfile     /var/lib/neutron/external/pids/6a04bdf1-ac7f-432e-8ec7-636af678cb2a.pid.haproxy
Dec  6 02:34:13 np0005548731 ovn_metadata_agent[143960]:    daemon
Dec  6 02:34:13 np0005548731 ovn_metadata_agent[143960]: 
Dec  6 02:34:13 np0005548731 ovn_metadata_agent[143960]: defaults
Dec  6 02:34:13 np0005548731 ovn_metadata_agent[143960]:    log global
Dec  6 02:34:13 np0005548731 ovn_metadata_agent[143960]:    mode http
Dec  6 02:34:13 np0005548731 ovn_metadata_agent[143960]:    option httplog
Dec  6 02:34:13 np0005548731 ovn_metadata_agent[143960]:    option dontlognull
Dec  6 02:34:13 np0005548731 ovn_metadata_agent[143960]:    option http-server-close
Dec  6 02:34:13 np0005548731 ovn_metadata_agent[143960]:    option forwardfor
Dec  6 02:34:13 np0005548731 ovn_metadata_agent[143960]:    retries                 3
Dec  6 02:34:13 np0005548731 ovn_metadata_agent[143960]:    timeout http-request    30s
Dec  6 02:34:13 np0005548731 ovn_metadata_agent[143960]:    timeout connect         30s
Dec  6 02:34:13 np0005548731 ovn_metadata_agent[143960]:    timeout client          32s
Dec  6 02:34:13 np0005548731 ovn_metadata_agent[143960]:    timeout server          32s
Dec  6 02:34:13 np0005548731 ovn_metadata_agent[143960]:    timeout http-keep-alive 30s
Dec  6 02:34:13 np0005548731 ovn_metadata_agent[143960]: 
Dec  6 02:34:13 np0005548731 ovn_metadata_agent[143960]: 
Dec  6 02:34:13 np0005548731 ovn_metadata_agent[143960]: listen listener
Dec  6 02:34:13 np0005548731 ovn_metadata_agent[143960]:    bind 169.254.169.254:80
Dec  6 02:34:13 np0005548731 ovn_metadata_agent[143960]:    server metadata /var/lib/neutron/metadata_proxy
Dec  6 02:34:13 np0005548731 ovn_metadata_agent[143960]:    http-request add-header X-OVN-Network-ID 6a04bdf1-ac7f-432e-8ec7-636af678cb2a
Dec  6 02:34:13 np0005548731 ovn_metadata_agent[143960]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Dec  6 02:34:13 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:34:13.042 143965 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-6a04bdf1-ac7f-432e-8ec7-636af678cb2a', 'env', 'PROCESS_TAG=haproxy-6a04bdf1-ac7f-432e-8ec7-636af678cb2a', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/6a04bdf1-ac7f-432e-8ec7-636af678cb2a.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Dec  6 02:34:13 np0005548731 nova_compute[232433]: 2025-12-06 07:34:13.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:34:13 np0005548731 podman[285229]: 2025-12-06 07:34:13.36145756 +0000 UTC m=+0.042309894 container create 46db4cef5d88173fe17728a9876cfd85305f7a201ca93659147a6d27565405b5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6a04bdf1-ac7f-432e-8ec7-636af678cb2a, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  6 02:34:13 np0005548731 systemd[1]: Started libpod-conmon-46db4cef5d88173fe17728a9876cfd85305f7a201ca93659147a6d27565405b5.scope.
Dec  6 02:34:13 np0005548731 systemd[1]: Started libcrun container.
Dec  6 02:34:13 np0005548731 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5490f33dfe64fea4487fb5aa7a31646823400242c082333c505383e51d9ebb52/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec  6 02:34:13 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e285 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:34:13 np0005548731 podman[285229]: 2025-12-06 07:34:13.428317232 +0000 UTC m=+0.109169566 container init 46db4cef5d88173fe17728a9876cfd85305f7a201ca93659147a6d27565405b5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6a04bdf1-ac7f-432e-8ec7-636af678cb2a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true)
Dec  6 02:34:13 np0005548731 podman[285229]: 2025-12-06 07:34:13.433488098 +0000 UTC m=+0.114340432 container start 46db4cef5d88173fe17728a9876cfd85305f7a201ca93659147a6d27565405b5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6a04bdf1-ac7f-432e-8ec7-636af678cb2a, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125)
Dec  6 02:34:13 np0005548731 podman[285229]: 2025-12-06 07:34:13.340128049 +0000 UTC m=+0.020980403 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec  6 02:34:13 np0005548731 neutron-haproxy-ovnmeta-6a04bdf1-ac7f-432e-8ec7-636af678cb2a[285247]: [NOTICE]   (285251) : New worker (285253) forked
Dec  6 02:34:13 np0005548731 neutron-haproxy-ovnmeta-6a04bdf1-ac7f-432e-8ec7-636af678cb2a[285247]: [NOTICE]   (285251) : Loading success.
Dec  6 02:34:13 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:34:13 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:34:13 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:34:13.697 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:34:14 np0005548731 nova_compute[232433]: 2025-12-06 07:34:14.063 232437 DEBUG nova.virt.libvirt.host [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Removed pending event for e3caec0b-7382-4ccf-9d1b-c554a41c2c52 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Dec  6 02:34:14 np0005548731 nova_compute[232433]: 2025-12-06 07:34:14.064 232437 DEBUG nova.virt.driver [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Emitting event <LifecycleEvent: 1765006454.0631678, e3caec0b-7382-4ccf-9d1b-c554a41c2c52 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 02:34:14 np0005548731 nova_compute[232433]: 2025-12-06 07:34:14.064 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: e3caec0b-7382-4ccf-9d1b-c554a41c2c52] VM Resumed (Lifecycle Event)#033[00m
Dec  6 02:34:14 np0005548731 nova_compute[232433]: 2025-12-06 07:34:14.066 232437 DEBUG nova.compute.manager [None req-312ce13a-677d-44fe-b787-bcba8e05dc4e bb72a3e638064a9896ab40615cfc7d67 1b09bb22417b499cbf8771188f7ae36f - - default default] [instance: e3caec0b-7382-4ccf-9d1b-c554a41c2c52] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec  6 02:34:14 np0005548731 nova_compute[232433]: 2025-12-06 07:34:14.069 232437 INFO nova.virt.libvirt.driver [-] [instance: e3caec0b-7382-4ccf-9d1b-c554a41c2c52] Instance rebooted successfully.#033[00m
Dec  6 02:34:14 np0005548731 nova_compute[232433]: 2025-12-06 07:34:14.070 232437 DEBUG nova.compute.manager [None req-312ce13a-677d-44fe-b787-bcba8e05dc4e bb72a3e638064a9896ab40615cfc7d67 1b09bb22417b499cbf8771188f7ae36f - - default default] [instance: e3caec0b-7382-4ccf-9d1b-c554a41c2c52] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:34:14 np0005548731 nova_compute[232433]: 2025-12-06 07:34:14.074 232437 DEBUG nova.compute.manager [req-7384f129-1b47-4c19-b1ba-d54806b74f8a req-d30a8a03-ed34-4c62-b427-18578ac2685e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: e3caec0b-7382-4ccf-9d1b-c554a41c2c52] Received event network-vif-plugged-f41e3444-24ee-4425-969b-99c4c3ebbd47 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:34:14 np0005548731 nova_compute[232433]: 2025-12-06 07:34:14.074 232437 DEBUG oslo_concurrency.lockutils [req-7384f129-1b47-4c19-b1ba-d54806b74f8a req-d30a8a03-ed34-4c62-b427-18578ac2685e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "e3caec0b-7382-4ccf-9d1b-c554a41c2c52-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:34:14 np0005548731 nova_compute[232433]: 2025-12-06 07:34:14.075 232437 DEBUG oslo_concurrency.lockutils [req-7384f129-1b47-4c19-b1ba-d54806b74f8a req-d30a8a03-ed34-4c62-b427-18578ac2685e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "e3caec0b-7382-4ccf-9d1b-c554a41c2c52-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:34:14 np0005548731 nova_compute[232433]: 2025-12-06 07:34:14.075 232437 DEBUG oslo_concurrency.lockutils [req-7384f129-1b47-4c19-b1ba-d54806b74f8a req-d30a8a03-ed34-4c62-b427-18578ac2685e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "e3caec0b-7382-4ccf-9d1b-c554a41c2c52-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:34:14 np0005548731 nova_compute[232433]: 2025-12-06 07:34:14.075 232437 DEBUG nova.compute.manager [req-7384f129-1b47-4c19-b1ba-d54806b74f8a req-d30a8a03-ed34-4c62-b427-18578ac2685e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: e3caec0b-7382-4ccf-9d1b-c554a41c2c52] No waiting events found dispatching network-vif-plugged-f41e3444-24ee-4425-969b-99c4c3ebbd47 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 02:34:14 np0005548731 nova_compute[232433]: 2025-12-06 07:34:14.076 232437 WARNING nova.compute.manager [req-7384f129-1b47-4c19-b1ba-d54806b74f8a req-d30a8a03-ed34-4c62-b427-18578ac2685e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: e3caec0b-7382-4ccf-9d1b-c554a41c2c52] Received unexpected event network-vif-plugged-f41e3444-24ee-4425-969b-99c4c3ebbd47 for instance with vm_state active and task_state reboot_started_hard.#033[00m
Dec  6 02:34:14 np0005548731 nova_compute[232433]: 2025-12-06 07:34:14.076 232437 DEBUG nova.compute.manager [req-7384f129-1b47-4c19-b1ba-d54806b74f8a req-d30a8a03-ed34-4c62-b427-18578ac2685e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: e3caec0b-7382-4ccf-9d1b-c554a41c2c52] Received event network-vif-plugged-f41e3444-24ee-4425-969b-99c4c3ebbd47 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:34:14 np0005548731 nova_compute[232433]: 2025-12-06 07:34:14.076 232437 DEBUG oslo_concurrency.lockutils [req-7384f129-1b47-4c19-b1ba-d54806b74f8a req-d30a8a03-ed34-4c62-b427-18578ac2685e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "e3caec0b-7382-4ccf-9d1b-c554a41c2c52-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:34:14 np0005548731 nova_compute[232433]: 2025-12-06 07:34:14.076 232437 DEBUG oslo_concurrency.lockutils [req-7384f129-1b47-4c19-b1ba-d54806b74f8a req-d30a8a03-ed34-4c62-b427-18578ac2685e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "e3caec0b-7382-4ccf-9d1b-c554a41c2c52-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:34:14 np0005548731 nova_compute[232433]: 2025-12-06 07:34:14.077 232437 DEBUG oslo_concurrency.lockutils [req-7384f129-1b47-4c19-b1ba-d54806b74f8a req-d30a8a03-ed34-4c62-b427-18578ac2685e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "e3caec0b-7382-4ccf-9d1b-c554a41c2c52-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:34:14 np0005548731 nova_compute[232433]: 2025-12-06 07:34:14.077 232437 DEBUG nova.compute.manager [req-7384f129-1b47-4c19-b1ba-d54806b74f8a req-d30a8a03-ed34-4c62-b427-18578ac2685e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: e3caec0b-7382-4ccf-9d1b-c554a41c2c52] No waiting events found dispatching network-vif-plugged-f41e3444-24ee-4425-969b-99c4c3ebbd47 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 02:34:14 np0005548731 nova_compute[232433]: 2025-12-06 07:34:14.077 232437 WARNING nova.compute.manager [req-7384f129-1b47-4c19-b1ba-d54806b74f8a req-d30a8a03-ed34-4c62-b427-18578ac2685e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: e3caec0b-7382-4ccf-9d1b-c554a41c2c52] Received unexpected event network-vif-plugged-f41e3444-24ee-4425-969b-99c4c3ebbd47 for instance with vm_state active and task_state reboot_started_hard.#033[00m
Dec  6 02:34:14 np0005548731 nova_compute[232433]: 2025-12-06 07:34:14.077 232437 DEBUG nova.compute.manager [req-7384f129-1b47-4c19-b1ba-d54806b74f8a req-d30a8a03-ed34-4c62-b427-18578ac2685e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: e3caec0b-7382-4ccf-9d1b-c554a41c2c52] Received event network-vif-plugged-f41e3444-24ee-4425-969b-99c4c3ebbd47 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:34:14 np0005548731 nova_compute[232433]: 2025-12-06 07:34:14.077 232437 DEBUG oslo_concurrency.lockutils [req-7384f129-1b47-4c19-b1ba-d54806b74f8a req-d30a8a03-ed34-4c62-b427-18578ac2685e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "e3caec0b-7382-4ccf-9d1b-c554a41c2c52-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:34:14 np0005548731 nova_compute[232433]: 2025-12-06 07:34:14.078 232437 DEBUG oslo_concurrency.lockutils [req-7384f129-1b47-4c19-b1ba-d54806b74f8a req-d30a8a03-ed34-4c62-b427-18578ac2685e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "e3caec0b-7382-4ccf-9d1b-c554a41c2c52-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:34:14 np0005548731 nova_compute[232433]: 2025-12-06 07:34:14.078 232437 DEBUG oslo_concurrency.lockutils [req-7384f129-1b47-4c19-b1ba-d54806b74f8a req-d30a8a03-ed34-4c62-b427-18578ac2685e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "e3caec0b-7382-4ccf-9d1b-c554a41c2c52-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:34:14 np0005548731 nova_compute[232433]: 2025-12-06 07:34:14.078 232437 DEBUG nova.compute.manager [req-7384f129-1b47-4c19-b1ba-d54806b74f8a req-d30a8a03-ed34-4c62-b427-18578ac2685e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: e3caec0b-7382-4ccf-9d1b-c554a41c2c52] No waiting events found dispatching network-vif-plugged-f41e3444-24ee-4425-969b-99c4c3ebbd47 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 02:34:14 np0005548731 nova_compute[232433]: 2025-12-06 07:34:14.078 232437 WARNING nova.compute.manager [req-7384f129-1b47-4c19-b1ba-d54806b74f8a req-d30a8a03-ed34-4c62-b427-18578ac2685e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: e3caec0b-7382-4ccf-9d1b-c554a41c2c52] Received unexpected event network-vif-plugged-f41e3444-24ee-4425-969b-99c4c3ebbd47 for instance with vm_state active and task_state reboot_started_hard.#033[00m
Dec  6 02:34:14 np0005548731 nova_compute[232433]: 2025-12-06 07:34:14.083 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: e3caec0b-7382-4ccf-9d1b-c554a41c2c52] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:34:14 np0005548731 nova_compute[232433]: 2025-12-06 07:34:14.086 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: e3caec0b-7382-4ccf-9d1b-c554a41c2c52] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: reboot_started_hard, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  6 02:34:14 np0005548731 nova_compute[232433]: 2025-12-06 07:34:14.109 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: e3caec0b-7382-4ccf-9d1b-c554a41c2c52] During sync_power_state the instance has a pending task (reboot_started_hard). Skip.#033[00m
Dec  6 02:34:14 np0005548731 nova_compute[232433]: 2025-12-06 07:34:14.110 232437 DEBUG nova.virt.driver [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Emitting event <LifecycleEvent: 1765006454.064135, e3caec0b-7382-4ccf-9d1b-c554a41c2c52 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 02:34:14 np0005548731 nova_compute[232433]: 2025-12-06 07:34:14.110 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: e3caec0b-7382-4ccf-9d1b-c554a41c2c52] VM Started (Lifecycle Event)#033[00m
Dec  6 02:34:14 np0005548731 nova_compute[232433]: 2025-12-06 07:34:14.128 232437 DEBUG oslo_concurrency.lockutils [None req-312ce13a-677d-44fe-b787-bcba8e05dc4e bb72a3e638064a9896ab40615cfc7d67 1b09bb22417b499cbf8771188f7ae36f - - default default] Lock "e3caec0b-7382-4ccf-9d1b-c554a41c2c52" "released" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: held 7.364s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:34:14 np0005548731 nova_compute[232433]: 2025-12-06 07:34:14.130 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: e3caec0b-7382-4ccf-9d1b-c554a41c2c52] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:34:14 np0005548731 nova_compute[232433]: 2025-12-06 07:34:14.132 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: e3caec0b-7382-4ccf-9d1b-c554a41c2c52] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  6 02:34:14 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:34:14 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 02:34:14 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:34:14.521 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 02:34:15 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:34:15 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:34:15 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:34:15.700 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:34:15 np0005548731 nova_compute[232433]: 2025-12-06 07:34:15.949 232437 DEBUG oslo_concurrency.lockutils [None req-89d57a1f-c9c0-4ca1-86f2-0059ac98c3ed bb72a3e638064a9896ab40615cfc7d67 1b09bb22417b499cbf8771188f7ae36f - - default default] Acquiring lock "e3caec0b-7382-4ccf-9d1b-c554a41c2c52" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:34:15 np0005548731 nova_compute[232433]: 2025-12-06 07:34:15.949 232437 DEBUG oslo_concurrency.lockutils [None req-89d57a1f-c9c0-4ca1-86f2-0059ac98c3ed bb72a3e638064a9896ab40615cfc7d67 1b09bb22417b499cbf8771188f7ae36f - - default default] Lock "e3caec0b-7382-4ccf-9d1b-c554a41c2c52" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:34:15 np0005548731 nova_compute[232433]: 2025-12-06 07:34:15.950 232437 DEBUG oslo_concurrency.lockutils [None req-89d57a1f-c9c0-4ca1-86f2-0059ac98c3ed bb72a3e638064a9896ab40615cfc7d67 1b09bb22417b499cbf8771188f7ae36f - - default default] Acquiring lock "e3caec0b-7382-4ccf-9d1b-c554a41c2c52-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:34:15 np0005548731 nova_compute[232433]: 2025-12-06 07:34:15.950 232437 DEBUG oslo_concurrency.lockutils [None req-89d57a1f-c9c0-4ca1-86f2-0059ac98c3ed bb72a3e638064a9896ab40615cfc7d67 1b09bb22417b499cbf8771188f7ae36f - - default default] Lock "e3caec0b-7382-4ccf-9d1b-c554a41c2c52-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:34:15 np0005548731 nova_compute[232433]: 2025-12-06 07:34:15.950 232437 DEBUG oslo_concurrency.lockutils [None req-89d57a1f-c9c0-4ca1-86f2-0059ac98c3ed bb72a3e638064a9896ab40615cfc7d67 1b09bb22417b499cbf8771188f7ae36f - - default default] Lock "e3caec0b-7382-4ccf-9d1b-c554a41c2c52-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:34:15 np0005548731 nova_compute[232433]: 2025-12-06 07:34:15.951 232437 INFO nova.compute.manager [None req-89d57a1f-c9c0-4ca1-86f2-0059ac98c3ed bb72a3e638064a9896ab40615cfc7d67 1b09bb22417b499cbf8771188f7ae36f - - default default] [instance: e3caec0b-7382-4ccf-9d1b-c554a41c2c52] Terminating instance#033[00m
Dec  6 02:34:15 np0005548731 nova_compute[232433]: 2025-12-06 07:34:15.952 232437 DEBUG nova.compute.manager [None req-89d57a1f-c9c0-4ca1-86f2-0059ac98c3ed bb72a3e638064a9896ab40615cfc7d67 1b09bb22417b499cbf8771188f7ae36f - - default default] [instance: e3caec0b-7382-4ccf-9d1b-c554a41c2c52] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec  6 02:34:16 np0005548731 nova_compute[232433]: 2025-12-06 07:34:16.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:34:16 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:34:16 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:34:16 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:34:16.524 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:34:17 np0005548731 nova_compute[232433]: 2025-12-06 07:34:17.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:34:17 np0005548731 nova_compute[232433]: 2025-12-06 07:34:17.131 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:34:17 np0005548731 nova_compute[232433]: 2025-12-06 07:34:17.132 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:34:17 np0005548731 nova_compute[232433]: 2025-12-06 07:34:17.132 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:34:17 np0005548731 nova_compute[232433]: 2025-12-06 07:34:17.132 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  6 02:34:17 np0005548731 nova_compute[232433]: 2025-12-06 07:34:17.132 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:34:17 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 02:34:17 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/420931447' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 02:34:17 np0005548731 nova_compute[232433]: 2025-12-06 07:34:17.569 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.436s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:34:17 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:34:17 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:34:17 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:34:17.703 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:34:17 np0005548731 nova_compute[232433]: 2025-12-06 07:34:17.731 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:34:17 np0005548731 nova_compute[232433]: 2025-12-06 07:34:17.809 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:34:18 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e285 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:34:18 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:34:18 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:34:18 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:34:18.526 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:34:19 np0005548731 kernel: tapf41e3444-24 (unregistering): left promiscuous mode
Dec  6 02:34:19 np0005548731 NetworkManager[49182]: <info>  [1765006459.3047] device (tapf41e3444-24): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec  6 02:34:19 np0005548731 ovn_controller[133927]: 2025-12-06T07:34:19Z|00542|binding|INFO|Releasing lport f41e3444-24ee-4425-969b-99c4c3ebbd47 from this chassis (sb_readonly=0)
Dec  6 02:34:19 np0005548731 ovn_controller[133927]: 2025-12-06T07:34:19Z|00543|binding|INFO|Setting lport f41e3444-24ee-4425-969b-99c4c3ebbd47 down in Southbound
Dec  6 02:34:19 np0005548731 ovn_controller[133927]: 2025-12-06T07:34:19Z|00544|binding|INFO|Removing iface tapf41e3444-24 ovn-installed in OVS
Dec  6 02:34:19 np0005548731 nova_compute[232433]: 2025-12-06 07:34:19.310 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:34:19 np0005548731 nova_compute[232433]: 2025-12-06 07:34:19.311 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:34:19 np0005548731 nova_compute[232433]: 2025-12-06 07:34:19.331 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:34:19 np0005548731 systemd[1]: machine-qemu\x2d54\x2dinstance\x2d0000007b.scope: Deactivated successfully.
Dec  6 02:34:19 np0005548731 systemd[1]: machine-qemu\x2d54\x2dinstance\x2d0000007b.scope: Consumed 2.308s CPU time.
Dec  6 02:34:19 np0005548731 systemd-machined[195355]: Machine qemu-54-instance-0000007b terminated.
Dec  6 02:34:19 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:34:19.419 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b0:3e:3e 10.100.0.11'], port_security=['fa:16:3e:b0:3e:3e 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'e3caec0b-7382-4ccf-9d1b-c554a41c2c52', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6a04bdf1-ac7f-432e-8ec7-636af678cb2a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1b09bb22417b499cbf8771188f7ae36f', 'neutron:revision_number': '6', 'neutron:security_group_ids': 'a03d4d58-c3b9-4bf3-9e94-34a5e8c69515', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=231e621a-df54-469a-acfd-b66004fc5e38, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], logical_port=f41e3444-24ee-4425-969b-99c4c3ebbd47) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 02:34:19 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:34:19.420 143965 INFO neutron.agent.ovn.metadata.agent [-] Port f41e3444-24ee-4425-969b-99c4c3ebbd47 in datapath 6a04bdf1-ac7f-432e-8ec7-636af678cb2a unbound from our chassis#033[00m
Dec  6 02:34:19 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:34:19.423 143965 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 6a04bdf1-ac7f-432e-8ec7-636af678cb2a, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec  6 02:34:19 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:34:19.424 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[ef90028c-80e2-475d-8ac7-83055cfe8a54]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:34:19 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:34:19.424 143965 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-6a04bdf1-ac7f-432e-8ec7-636af678cb2a namespace which is not needed anymore#033[00m
Dec  6 02:34:19 np0005548731 neutron-haproxy-ovnmeta-6a04bdf1-ac7f-432e-8ec7-636af678cb2a[285247]: [NOTICE]   (285251) : haproxy version is 2.8.14-c23fe91
Dec  6 02:34:19 np0005548731 neutron-haproxy-ovnmeta-6a04bdf1-ac7f-432e-8ec7-636af678cb2a[285247]: [NOTICE]   (285251) : path to executable is /usr/sbin/haproxy
Dec  6 02:34:19 np0005548731 neutron-haproxy-ovnmeta-6a04bdf1-ac7f-432e-8ec7-636af678cb2a[285247]: [WARNING]  (285251) : Exiting Master process...
Dec  6 02:34:19 np0005548731 neutron-haproxy-ovnmeta-6a04bdf1-ac7f-432e-8ec7-636af678cb2a[285247]: [WARNING]  (285251) : Exiting Master process...
Dec  6 02:34:19 np0005548731 neutron-haproxy-ovnmeta-6a04bdf1-ac7f-432e-8ec7-636af678cb2a[285247]: [ALERT]    (285251) : Current worker (285253) exited with code 143 (Terminated)
Dec  6 02:34:19 np0005548731 neutron-haproxy-ovnmeta-6a04bdf1-ac7f-432e-8ec7-636af678cb2a[285247]: [WARNING]  (285251) : All workers exited. Exiting... (0)
Dec  6 02:34:19 np0005548731 systemd[1]: libpod-46db4cef5d88173fe17728a9876cfd85305f7a201ca93659147a6d27565405b5.scope: Deactivated successfully.
Dec  6 02:34:19 np0005548731 conmon[285247]: conmon 46db4cef5d88173fe177 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-46db4cef5d88173fe17728a9876cfd85305f7a201ca93659147a6d27565405b5.scope/container/memory.events
Dec  6 02:34:19 np0005548731 podman[285386]: 2025-12-06 07:34:19.552125164 +0000 UTC m=+0.044715753 container died 46db4cef5d88173fe17728a9876cfd85305f7a201ca93659147a6d27565405b5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6a04bdf1-ac7f-432e-8ec7-636af678cb2a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec  6 02:34:19 np0005548731 nova_compute[232433]: 2025-12-06 07:34:19.574 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:34:19 np0005548731 nova_compute[232433]: 2025-12-06 07:34:19.581 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:34:19 np0005548731 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-46db4cef5d88173fe17728a9876cfd85305f7a201ca93659147a6d27565405b5-userdata-shm.mount: Deactivated successfully.
Dec  6 02:34:19 np0005548731 systemd[1]: var-lib-containers-storage-overlay-5490f33dfe64fea4487fb5aa7a31646823400242c082333c505383e51d9ebb52-merged.mount: Deactivated successfully.
Dec  6 02:34:19 np0005548731 nova_compute[232433]: 2025-12-06 07:34:19.594 232437 INFO nova.virt.libvirt.driver [-] [instance: e3caec0b-7382-4ccf-9d1b-c554a41c2c52] Instance destroyed successfully.#033[00m
Dec  6 02:34:19 np0005548731 nova_compute[232433]: 2025-12-06 07:34:19.595 232437 DEBUG nova.objects.instance [None req-89d57a1f-c9c0-4ca1-86f2-0059ac98c3ed bb72a3e638064a9896ab40615cfc7d67 1b09bb22417b499cbf8771188f7ae36f - - default default] Lazy-loading 'resources' on Instance uuid e3caec0b-7382-4ccf-9d1b-c554a41c2c52 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:34:19 np0005548731 podman[285386]: 2025-12-06 07:34:19.600795441 +0000 UTC m=+0.093386030 container cleanup 46db4cef5d88173fe17728a9876cfd85305f7a201ca93659147a6d27565405b5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6a04bdf1-ac7f-432e-8ec7-636af678cb2a, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec  6 02:34:19 np0005548731 systemd[1]: libpod-conmon-46db4cef5d88173fe17728a9876cfd85305f7a201ca93659147a6d27565405b5.scope: Deactivated successfully.
Dec  6 02:34:19 np0005548731 podman[285423]: 2025-12-06 07:34:19.680012445 +0000 UTC m=+0.053736532 container remove 46db4cef5d88173fe17728a9876cfd85305f7a201ca93659147a6d27565405b5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6a04bdf1-ac7f-432e-8ec7-636af678cb2a, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS)
Dec  6 02:34:19 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:34:19.685 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[3b85ceb8-841e-4d87-aa31-ed867096af9e]: (4, ('Sat Dec  6 07:34:19 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-6a04bdf1-ac7f-432e-8ec7-636af678cb2a (46db4cef5d88173fe17728a9876cfd85305f7a201ca93659147a6d27565405b5)\n46db4cef5d88173fe17728a9876cfd85305f7a201ca93659147a6d27565405b5\nSat Dec  6 07:34:19 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-6a04bdf1-ac7f-432e-8ec7-636af678cb2a (46db4cef5d88173fe17728a9876cfd85305f7a201ca93659147a6d27565405b5)\n46db4cef5d88173fe17728a9876cfd85305f7a201ca93659147a6d27565405b5\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:34:19 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:34:19.687 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[e96628bc-d32b-45c5-83f2-0c3246b68698]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:34:19 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:34:19.688 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6a04bdf1-a0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:34:19 np0005548731 nova_compute[232433]: 2025-12-06 07:34:19.690 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:34:19 np0005548731 kernel: tap6a04bdf1-a0: left promiscuous mode
Dec  6 02:34:19 np0005548731 nova_compute[232433]: 2025-12-06 07:34:19.706 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:34:19 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:34:19 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 02:34:19 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:34:19.705 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 02:34:19 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:34:19.709 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[d787ea0c-9716-40e4-8cf2-94da44e38b05]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:34:19 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:34:19.720 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[9d327edc-5f11-41f8-a318-1707eaeff308]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:34:19 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:34:19.721 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[84e6b7a4-dd47-419d-afcc-42aa595ca7af]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:34:19 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:34:19.734 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[f0b061ac-487f-4041-98b3-b8a8b24e4079]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 671483, 'reachable_time': 29590, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 285439, 'error': None, 'target': 'ovnmeta-6a04bdf1-ac7f-432e-8ec7-636af678cb2a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:34:19 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:34:19.736 144079 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-6a04bdf1-ac7f-432e-8ec7-636af678cb2a deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Dec  6 02:34:19 np0005548731 systemd[1]: run-netns-ovnmeta\x2d6a04bdf1\x2dac7f\x2d432e\x2d8ec7\x2d636af678cb2a.mount: Deactivated successfully.
Dec  6 02:34:19 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:34:19.737 144079 DEBUG oslo.privsep.daemon [-] privsep: reply[66141a9d-4382-4ab2-a145-f480cf50ff93]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:34:20 np0005548731 nova_compute[232433]: 2025-12-06 07:34:20.077 232437 DEBUG nova.virt.libvirt.vif [None req-89d57a1f-c9c0-4ca1-86f2-0059ac98c3ed bb72a3e638064a9896ab40615cfc7d67 1b09bb22417b499cbf8771188f7ae36f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T07:33:50Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-InstanceActionsTestJSON-server-1654269090',display_name='tempest-InstanceActionsTestJSON-server-1654269090',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-instanceactionstestjson-server-1654269090',id=123,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-06T07:34:03Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='1b09bb22417b499cbf8771188f7ae36f',ramdisk_id='',reservation_id='r-yjwup1mc',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio'
,image_min_disk='1',image_min_ram='0',owner_project_name='tempest-InstanceActionsTestJSON-163749737',owner_user_name='tempest-InstanceActionsTestJSON-163749737-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T07:34:14Z,user_data=None,user_id='bb72a3e638064a9896ab40615cfc7d67',uuid=e3caec0b-7382-4ccf-9d1b-c554a41c2c52,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "f41e3444-24ee-4425-969b-99c4c3ebbd47", "address": "fa:16:3e:b0:3e:3e", "network": {"id": "6a04bdf1-ac7f-432e-8ec7-636af678cb2a", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-483634897-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1b09bb22417b499cbf8771188f7ae36f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf41e3444-24", "ovs_interfaceid": "f41e3444-24ee-4425-969b-99c4c3ebbd47", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec  6 02:34:20 np0005548731 nova_compute[232433]: 2025-12-06 07:34:20.077 232437 DEBUG nova.network.os_vif_util [None req-89d57a1f-c9c0-4ca1-86f2-0059ac98c3ed bb72a3e638064a9896ab40615cfc7d67 1b09bb22417b499cbf8771188f7ae36f - - default default] Converting VIF {"id": "f41e3444-24ee-4425-969b-99c4c3ebbd47", "address": "fa:16:3e:b0:3e:3e", "network": {"id": "6a04bdf1-ac7f-432e-8ec7-636af678cb2a", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-483634897-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1b09bb22417b499cbf8771188f7ae36f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf41e3444-24", "ovs_interfaceid": "f41e3444-24ee-4425-969b-99c4c3ebbd47", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 02:34:20 np0005548731 nova_compute[232433]: 2025-12-06 07:34:20.078 232437 DEBUG nova.network.os_vif_util [None req-89d57a1f-c9c0-4ca1-86f2-0059ac98c3ed bb72a3e638064a9896ab40615cfc7d67 1b09bb22417b499cbf8771188f7ae36f - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:b0:3e:3e,bridge_name='br-int',has_traffic_filtering=True,id=f41e3444-24ee-4425-969b-99c4c3ebbd47,network=Network(6a04bdf1-ac7f-432e-8ec7-636af678cb2a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf41e3444-24') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 02:34:20 np0005548731 nova_compute[232433]: 2025-12-06 07:34:20.079 232437 DEBUG os_vif [None req-89d57a1f-c9c0-4ca1-86f2-0059ac98c3ed bb72a3e638064a9896ab40615cfc7d67 1b09bb22417b499cbf8771188f7ae36f - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:b0:3e:3e,bridge_name='br-int',has_traffic_filtering=True,id=f41e3444-24ee-4425-969b-99c4c3ebbd47,network=Network(6a04bdf1-ac7f-432e-8ec7-636af678cb2a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf41e3444-24') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec  6 02:34:20 np0005548731 nova_compute[232433]: 2025-12-06 07:34:20.081 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:34:20 np0005548731 nova_compute[232433]: 2025-12-06 07:34:20.081 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf41e3444-24, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:34:20 np0005548731 nova_compute[232433]: 2025-12-06 07:34:20.083 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:34:20 np0005548731 nova_compute[232433]: 2025-12-06 07:34:20.085 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  6 02:34:20 np0005548731 nova_compute[232433]: 2025-12-06 07:34:20.088 232437 INFO os_vif [None req-89d57a1f-c9c0-4ca1-86f2-0059ac98c3ed bb72a3e638064a9896ab40615cfc7d67 1b09bb22417b499cbf8771188f7ae36f - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:b0:3e:3e,bridge_name='br-int',has_traffic_filtering=True,id=f41e3444-24ee-4425-969b-99c4c3ebbd47,network=Network(6a04bdf1-ac7f-432e-8ec7-636af678cb2a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf41e3444-24')#033[00m
Dec  6 02:34:20 np0005548731 nova_compute[232433]: 2025-12-06 07:34:20.128 232437 DEBUG nova.compute.manager [req-c4cf8a88-7ece-4c74-9d5d-f3debca2e516 req-4ed30927-a0ad-400f-9dd1-f61a882d98e0 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: e3caec0b-7382-4ccf-9d1b-c554a41c2c52] Received event network-vif-unplugged-f41e3444-24ee-4425-969b-99c4c3ebbd47 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:34:20 np0005548731 nova_compute[232433]: 2025-12-06 07:34:20.128 232437 DEBUG oslo_concurrency.lockutils [req-c4cf8a88-7ece-4c74-9d5d-f3debca2e516 req-4ed30927-a0ad-400f-9dd1-f61a882d98e0 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "e3caec0b-7382-4ccf-9d1b-c554a41c2c52-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:34:20 np0005548731 nova_compute[232433]: 2025-12-06 07:34:20.128 232437 DEBUG oslo_concurrency.lockutils [req-c4cf8a88-7ece-4c74-9d5d-f3debca2e516 req-4ed30927-a0ad-400f-9dd1-f61a882d98e0 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "e3caec0b-7382-4ccf-9d1b-c554a41c2c52-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:34:20 np0005548731 nova_compute[232433]: 2025-12-06 07:34:20.128 232437 DEBUG oslo_concurrency.lockutils [req-c4cf8a88-7ece-4c74-9d5d-f3debca2e516 req-4ed30927-a0ad-400f-9dd1-f61a882d98e0 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "e3caec0b-7382-4ccf-9d1b-c554a41c2c52-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:34:20 np0005548731 nova_compute[232433]: 2025-12-06 07:34:20.129 232437 DEBUG nova.compute.manager [req-c4cf8a88-7ece-4c74-9d5d-f3debca2e516 req-4ed30927-a0ad-400f-9dd1-f61a882d98e0 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: e3caec0b-7382-4ccf-9d1b-c554a41c2c52] No waiting events found dispatching network-vif-unplugged-f41e3444-24ee-4425-969b-99c4c3ebbd47 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 02:34:20 np0005548731 nova_compute[232433]: 2025-12-06 07:34:20.129 232437 DEBUG nova.compute.manager [req-c4cf8a88-7ece-4c74-9d5d-f3debca2e516 req-4ed30927-a0ad-400f-9dd1-f61a882d98e0 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: e3caec0b-7382-4ccf-9d1b-c554a41c2c52] Received event network-vif-unplugged-f41e3444-24ee-4425-969b-99c4c3ebbd47 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Dec  6 02:34:20 np0005548731 nova_compute[232433]: 2025-12-06 07:34:20.147 232437 DEBUG nova.virt.libvirt.driver [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] skipping disk for instance-0000007b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  6 02:34:20 np0005548731 nova_compute[232433]: 2025-12-06 07:34:20.148 232437 DEBUG nova.virt.libvirt.driver [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] skipping disk for instance-0000007b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  6 02:34:20 np0005548731 nova_compute[232433]: 2025-12-06 07:34:20.152 232437 DEBUG nova.virt.libvirt.driver [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] skipping disk for instance-00000078 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  6 02:34:20 np0005548731 nova_compute[232433]: 2025-12-06 07:34:20.152 232437 DEBUG nova.virt.libvirt.driver [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] skipping disk for instance-00000078 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  6 02:34:20 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e286 e286: 3 total, 3 up, 3 in
Dec  6 02:34:20 np0005548731 nova_compute[232433]: 2025-12-06 07:34:20.337 232437 WARNING nova.virt.libvirt.driver [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  6 02:34:20 np0005548731 nova_compute[232433]: 2025-12-06 07:34:20.338 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4239MB free_disk=20.85525894165039GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, 
"label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  6 02:34:20 np0005548731 nova_compute[232433]: 2025-12-06 07:34:20.339 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:34:20 np0005548731 nova_compute[232433]: 2025-12-06 07:34:20.339 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:34:20 np0005548731 nova_compute[232433]: 2025-12-06 07:34:20.428 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Instance 6a50a40c-3b05-4c0e-aa67-1489e203824e actively managed on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  6 02:34:20 np0005548731 nova_compute[232433]: 2025-12-06 07:34:20.428 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Instance e3caec0b-7382-4ccf-9d1b-c554a41c2c52 actively managed on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128, 'DISK_GB': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  6 02:34:20 np0005548731 nova_compute[232433]: 2025-12-06 07:34:20.429 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  6 02:34:20 np0005548731 nova_compute[232433]: 2025-12-06 07:34:20.429 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7680MB used_ram=768MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  6 02:34:20 np0005548731 nova_compute[232433]: 2025-12-06 07:34:20.474 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:34:20 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:34:20 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:34:20 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:34:20.527 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:34:20 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 02:34:20 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1013349408' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 02:34:20 np0005548731 nova_compute[232433]: 2025-12-06 07:34:20.931 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.457s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:34:20 np0005548731 nova_compute[232433]: 2025-12-06 07:34:20.937 232437 DEBUG nova.compute.provider_tree [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Inventory has not changed in ProviderTree for provider: 6d00757a-082f-486d-ae84-869a2ba2e6e7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  6 02:34:20 np0005548731 nova_compute[232433]: 2025-12-06 07:34:20.949 232437 INFO nova.virt.libvirt.driver [None req-89d57a1f-c9c0-4ca1-86f2-0059ac98c3ed bb72a3e638064a9896ab40615cfc7d67 1b09bb22417b499cbf8771188f7ae36f - - default default] [instance: e3caec0b-7382-4ccf-9d1b-c554a41c2c52] Deleting instance files /var/lib/nova/instances/e3caec0b-7382-4ccf-9d1b-c554a41c2c52_del#033[00m
Dec  6 02:34:20 np0005548731 nova_compute[232433]: 2025-12-06 07:34:20.950 232437 INFO nova.virt.libvirt.driver [None req-89d57a1f-c9c0-4ca1-86f2-0059ac98c3ed bb72a3e638064a9896ab40615cfc7d67 1b09bb22417b499cbf8771188f7ae36f - - default default] [instance: e3caec0b-7382-4ccf-9d1b-c554a41c2c52] Deletion of /var/lib/nova/instances/e3caec0b-7382-4ccf-9d1b-c554a41c2c52_del complete#033[00m
Dec  6 02:34:20 np0005548731 nova_compute[232433]: 2025-12-06 07:34:20.954 232437 DEBUG nova.scheduler.client.report [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Inventory has not changed for provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  6 02:34:20 np0005548731 nova_compute[232433]: 2025-12-06 07:34:20.979 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  6 02:34:20 np0005548731 nova_compute[232433]: 2025-12-06 07:34:20.979 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.640s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:34:21 np0005548731 nova_compute[232433]: 2025-12-06 07:34:21.024 232437 INFO nova.compute.manager [None req-89d57a1f-c9c0-4ca1-86f2-0059ac98c3ed bb72a3e638064a9896ab40615cfc7d67 1b09bb22417b499cbf8771188f7ae36f - - default default] [instance: e3caec0b-7382-4ccf-9d1b-c554a41c2c52] Took 5.07 seconds to destroy the instance on the hypervisor.#033[00m
Dec  6 02:34:21 np0005548731 nova_compute[232433]: 2025-12-06 07:34:21.025 232437 DEBUG oslo.service.loopingcall [None req-89d57a1f-c9c0-4ca1-86f2-0059ac98c3ed bb72a3e638064a9896ab40615cfc7d67 1b09bb22417b499cbf8771188f7ae36f - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Dec  6 02:34:21 np0005548731 nova_compute[232433]: 2025-12-06 07:34:21.025 232437 DEBUG nova.compute.manager [-] [instance: e3caec0b-7382-4ccf-9d1b-c554a41c2c52] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Dec  6 02:34:21 np0005548731 nova_compute[232433]: 2025-12-06 07:34:21.025 232437 DEBUG nova.network.neutron [-] [instance: e3caec0b-7382-4ccf-9d1b-c554a41c2c52] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Dec  6 02:34:21 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e287 e287: 3 total, 3 up, 3 in
Dec  6 02:34:21 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:34:21 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:34:21 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:34:21.707 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:34:22 np0005548731 nova_compute[232433]: 2025-12-06 07:34:22.218 232437 DEBUG nova.compute.manager [req-b9ae8d1f-0e80-4d04-a5da-796f5c44b3ee req-27676871-d71b-478c-bdd6-cf436e8f1e90 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: e3caec0b-7382-4ccf-9d1b-c554a41c2c52] Received event network-vif-plugged-f41e3444-24ee-4425-969b-99c4c3ebbd47 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:34:22 np0005548731 nova_compute[232433]: 2025-12-06 07:34:22.219 232437 DEBUG oslo_concurrency.lockutils [req-b9ae8d1f-0e80-4d04-a5da-796f5c44b3ee req-27676871-d71b-478c-bdd6-cf436e8f1e90 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "e3caec0b-7382-4ccf-9d1b-c554a41c2c52-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:34:22 np0005548731 nova_compute[232433]: 2025-12-06 07:34:22.219 232437 DEBUG oslo_concurrency.lockutils [req-b9ae8d1f-0e80-4d04-a5da-796f5c44b3ee req-27676871-d71b-478c-bdd6-cf436e8f1e90 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "e3caec0b-7382-4ccf-9d1b-c554a41c2c52-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:34:22 np0005548731 nova_compute[232433]: 2025-12-06 07:34:22.219 232437 DEBUG oslo_concurrency.lockutils [req-b9ae8d1f-0e80-4d04-a5da-796f5c44b3ee req-27676871-d71b-478c-bdd6-cf436e8f1e90 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "e3caec0b-7382-4ccf-9d1b-c554a41c2c52-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:34:22 np0005548731 nova_compute[232433]: 2025-12-06 07:34:22.219 232437 DEBUG nova.compute.manager [req-b9ae8d1f-0e80-4d04-a5da-796f5c44b3ee req-27676871-d71b-478c-bdd6-cf436e8f1e90 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: e3caec0b-7382-4ccf-9d1b-c554a41c2c52] No waiting events found dispatching network-vif-plugged-f41e3444-24ee-4425-969b-99c4c3ebbd47 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 02:34:22 np0005548731 nova_compute[232433]: 2025-12-06 07:34:22.220 232437 WARNING nova.compute.manager [req-b9ae8d1f-0e80-4d04-a5da-796f5c44b3ee req-27676871-d71b-478c-bdd6-cf436e8f1e90 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: e3caec0b-7382-4ccf-9d1b-c554a41c2c52] Received unexpected event network-vif-plugged-f41e3444-24ee-4425-969b-99c4c3ebbd47 for instance with vm_state active and task_state deleting.#033[00m
Dec  6 02:34:22 np0005548731 nova_compute[232433]: 2025-12-06 07:34:22.387 232437 DEBUG nova.network.neutron [-] [instance: e3caec0b-7382-4ccf-9d1b-c554a41c2c52] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 02:34:22 np0005548731 nova_compute[232433]: 2025-12-06 07:34:22.416 232437 INFO nova.compute.manager [-] [instance: e3caec0b-7382-4ccf-9d1b-c554a41c2c52] Took 1.39 seconds to deallocate network for instance.#033[00m
Dec  6 02:34:22 np0005548731 nova_compute[232433]: 2025-12-06 07:34:22.482 232437 DEBUG nova.compute.manager [req-5994e10e-d805-4a53-a3a7-f4a2ea967229 req-c45061eb-a5d1-4be2-bbbf-b42c3484256e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: e3caec0b-7382-4ccf-9d1b-c554a41c2c52] Received event network-vif-deleted-f41e3444-24ee-4425-969b-99c4c3ebbd47 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:34:22 np0005548731 nova_compute[232433]: 2025-12-06 07:34:22.484 232437 DEBUG oslo_concurrency.lockutils [None req-89d57a1f-c9c0-4ca1-86f2-0059ac98c3ed bb72a3e638064a9896ab40615cfc7d67 1b09bb22417b499cbf8771188f7ae36f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:34:22 np0005548731 nova_compute[232433]: 2025-12-06 07:34:22.484 232437 DEBUG oslo_concurrency.lockutils [None req-89d57a1f-c9c0-4ca1-86f2-0059ac98c3ed bb72a3e638064a9896ab40615cfc7d67 1b09bb22417b499cbf8771188f7ae36f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:34:22 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:34:22 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:34:22 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:34:22.529 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:34:22 np0005548731 nova_compute[232433]: 2025-12-06 07:34:22.548 232437 DEBUG oslo_concurrency.processutils [None req-89d57a1f-c9c0-4ca1-86f2-0059ac98c3ed bb72a3e638064a9896ab40615cfc7d67 1b09bb22417b499cbf8771188f7ae36f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:34:22 np0005548731 nova_compute[232433]: 2025-12-06 07:34:22.810 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:34:22 np0005548731 podman[285503]: 2025-12-06 07:34:22.887989619 +0000 UTC m=+0.050721808 container health_status 26c2ce9bdb83c6db5ccad7f5b2a7cb4e9354ab0235ad2f942ea42c8feda2142b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Dec  6 02:34:22 np0005548731 podman[285504]: 2025-12-06 07:34:22.919593671 +0000 UTC m=+0.079849920 container health_status a118c3becf56cfbcae208065771ebf10a65162bb7b96389971293e534823fb78 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Dec  6 02:34:22 np0005548731 podman[285505]: 2025-12-06 07:34:22.9244987 +0000 UTC m=+0.080068575 container health_status ae4fd08cdec31d6ae2a6b91ffff7deb6ca5a8375cb52ee4b685cc6f25796824e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=multipathd, managed_by=edpm_ansible, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec  6 02:34:22 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 02:34:22 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3848048345' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 02:34:22 np0005548731 nova_compute[232433]: 2025-12-06 07:34:22.983 232437 DEBUG oslo_concurrency.processutils [None req-89d57a1f-c9c0-4ca1-86f2-0059ac98c3ed bb72a3e638064a9896ab40615cfc7d67 1b09bb22417b499cbf8771188f7ae36f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.435s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:34:22 np0005548731 nova_compute[232433]: 2025-12-06 07:34:22.989 232437 DEBUG nova.compute.provider_tree [None req-89d57a1f-c9c0-4ca1-86f2-0059ac98c3ed bb72a3e638064a9896ab40615cfc7d67 1b09bb22417b499cbf8771188f7ae36f - - default default] Inventory has not changed in ProviderTree for provider: 6d00757a-082f-486d-ae84-869a2ba2e6e7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  6 02:34:23 np0005548731 nova_compute[232433]: 2025-12-06 07:34:23.006 232437 DEBUG nova.scheduler.client.report [None req-89d57a1f-c9c0-4ca1-86f2-0059ac98c3ed bb72a3e638064a9896ab40615cfc7d67 1b09bb22417b499cbf8771188f7ae36f - - default default] Inventory has not changed for provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  6 02:34:23 np0005548731 nova_compute[232433]: 2025-12-06 07:34:23.031 232437 DEBUG oslo_concurrency.lockutils [None req-89d57a1f-c9c0-4ca1-86f2-0059ac98c3ed bb72a3e638064a9896ab40615cfc7d67 1b09bb22417b499cbf8771188f7ae36f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.546s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:34:23 np0005548731 nova_compute[232433]: 2025-12-06 07:34:23.078 232437 INFO nova.scheduler.client.report [None req-89d57a1f-c9c0-4ca1-86f2-0059ac98c3ed bb72a3e638064a9896ab40615cfc7d67 1b09bb22417b499cbf8771188f7ae36f - - default default] Deleted allocations for instance e3caec0b-7382-4ccf-9d1b-c554a41c2c52#033[00m
Dec  6 02:34:23 np0005548731 nova_compute[232433]: 2025-12-06 07:34:23.215 232437 DEBUG oslo_concurrency.lockutils [None req-89d57a1f-c9c0-4ca1-86f2-0059ac98c3ed bb72a3e638064a9896ab40615cfc7d67 1b09bb22417b499cbf8771188f7ae36f - - default default] Lock "e3caec0b-7382-4ccf-9d1b-c554a41c2c52" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 7.265s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:34:23 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e287 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:34:23 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:34:23 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:34:23 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:34:23.709 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:34:23 np0005548731 nova_compute[232433]: 2025-12-06 07:34:23.980 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:34:23 np0005548731 nova_compute[232433]: 2025-12-06 07:34:23.980 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:34:24 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:34:24 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:34:24 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:34:24.532 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:34:25 np0005548731 nova_compute[232433]: 2025-12-06 07:34:25.084 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:34:25 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:34:25 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:34:25 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:34:25.712 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:34:25 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e288 e288: 3 total, 3 up, 3 in
Dec  6 02:34:26 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:34:26 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:34:26 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:34:26.534 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:34:27 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:34:27 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:34:27 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:34:27.715 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:34:27 np0005548731 nova_compute[232433]: 2025-12-06 07:34:27.814 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:34:27 np0005548731 ovn_controller[133927]: 2025-12-06T07:34:27Z|00545|binding|INFO|Releasing lport e4d89947-8fab-4c13-b2db-4eed875f77a0 from this chassis (sb_readonly=0)
Dec  6 02:34:27 np0005548731 nova_compute[232433]: 2025-12-06 07:34:27.914 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:34:28 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:34:28 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:34:28 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:34:28 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:34:28.536 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:34:28 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e289 e289: 3 total, 3 up, 3 in
Dec  6 02:34:29 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e290 e290: 3 total, 3 up, 3 in
Dec  6 02:34:29 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:34:29 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:34:29 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:34:29.718 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:34:30 np0005548731 nova_compute[232433]: 2025-12-06 07:34:30.086 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:34:30 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e291 e291: 3 total, 3 up, 3 in
Dec  6 02:34:30 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:34:30 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:34:30 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:34:30.541 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:34:31 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:34:31 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:34:31 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:34:31.722 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:34:32 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:34:32 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:34:32 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:34:32.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:34:32 np0005548731 nova_compute[232433]: 2025-12-06 07:34:32.816 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:34:33 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e291 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:34:33 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:34:33 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:34:33 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:34:33.725 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:34:34 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:34:34 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 02:34:34 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:34:34.544 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 02:34:34 np0005548731 nova_compute[232433]: 2025-12-06 07:34:34.591 232437 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765006459.5897892, e3caec0b-7382-4ccf-9d1b-c554a41c2c52 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 02:34:34 np0005548731 nova_compute[232433]: 2025-12-06 07:34:34.592 232437 INFO nova.compute.manager [-] [instance: e3caec0b-7382-4ccf-9d1b-c554a41c2c52] VM Stopped (Lifecycle Event)#033[00m
Dec  6 02:34:34 np0005548731 nova_compute[232433]: 2025-12-06 07:34:34.613 232437 DEBUG nova.compute.manager [None req-a3ac49cd-33ce-480d-8523-9e2ff8f553da - - - - - -] [instance: e3caec0b-7382-4ccf-9d1b-c554a41c2c52] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:34:35 np0005548731 nova_compute[232433]: 2025-12-06 07:34:35.088 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:34:35 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:34:35 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:34:35 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:34:35.728 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:34:36 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:34:36 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:34:36 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:34:36.548 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:34:37 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:34:37 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:34:37 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:34:37.730 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:34:37 np0005548731 nova_compute[232433]: 2025-12-06 07:34:37.817 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:34:38 np0005548731 nova_compute[232433]: 2025-12-06 07:34:38.035 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:34:38 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e291 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:34:38 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:34:38 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:34:38 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:34:38.550 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:34:39 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:34:39 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:34:39 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:34:39.732 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:34:39 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e292 e292: 3 total, 3 up, 3 in
Dec  6 02:34:40 np0005548731 nova_compute[232433]: 2025-12-06 07:34:40.089 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:34:40 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:34:40 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:34:40 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:34:40.552 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:34:41 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:34:41 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:34:41 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:34:41.734 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:34:42 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:34:42 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:34:42 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:34:42.555 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:34:42 np0005548731 nova_compute[232433]: 2025-12-06 07:34:42.819 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:34:43 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:34:43 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:34:43 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:34:43 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:34:43.737 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:34:44 np0005548731 nova_compute[232433]: 2025-12-06 07:34:44.262 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:34:44 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:34:44 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:34:44 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:34:44.557 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:34:45 np0005548731 nova_compute[232433]: 2025-12-06 07:34:45.091 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:34:45 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:34:45 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 02:34:45 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:34:45.739 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 02:34:46 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:34:46 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:34:46 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:34:46.559 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:34:47 np0005548731 nova_compute[232433]: 2025-12-06 07:34:47.457 232437 DEBUG oslo_concurrency.lockutils [None req-29f199c3-439f-49ad-a6a7-ceafc9c9ff68 603967295b474367b2a605071d414bbf 2cfa2f2dfd6f4dc090f85672b1f24e68 - - default default] Acquiring lock "3662ed68-91aa-4b97-8b12-5c87f99b02c9" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:34:47 np0005548731 nova_compute[232433]: 2025-12-06 07:34:47.457 232437 DEBUG oslo_concurrency.lockutils [None req-29f199c3-439f-49ad-a6a7-ceafc9c9ff68 603967295b474367b2a605071d414bbf 2cfa2f2dfd6f4dc090f85672b1f24e68 - - default default] Lock "3662ed68-91aa-4b97-8b12-5c87f99b02c9" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:34:47 np0005548731 nova_compute[232433]: 2025-12-06 07:34:47.477 232437 DEBUG nova.compute.manager [None req-29f199c3-439f-49ad-a6a7-ceafc9c9ff68 603967295b474367b2a605071d414bbf 2cfa2f2dfd6f4dc090f85672b1f24e68 - - default default] [instance: 3662ed68-91aa-4b97-8b12-5c87f99b02c9] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Dec  6 02:34:47 np0005548731 nova_compute[232433]: 2025-12-06 07:34:47.564 232437 DEBUG oslo_concurrency.lockutils [None req-29f199c3-439f-49ad-a6a7-ceafc9c9ff68 603967295b474367b2a605071d414bbf 2cfa2f2dfd6f4dc090f85672b1f24e68 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:34:47 np0005548731 nova_compute[232433]: 2025-12-06 07:34:47.565 232437 DEBUG oslo_concurrency.lockutils [None req-29f199c3-439f-49ad-a6a7-ceafc9c9ff68 603967295b474367b2a605071d414bbf 2cfa2f2dfd6f4dc090f85672b1f24e68 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:34:47 np0005548731 nova_compute[232433]: 2025-12-06 07:34:47.572 232437 DEBUG nova.virt.hardware [None req-29f199c3-439f-49ad-a6a7-ceafc9c9ff68 603967295b474367b2a605071d414bbf 2cfa2f2dfd6f4dc090f85672b1f24e68 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Dec  6 02:34:47 np0005548731 nova_compute[232433]: 2025-12-06 07:34:47.572 232437 INFO nova.compute.claims [None req-29f199c3-439f-49ad-a6a7-ceafc9c9ff68 603967295b474367b2a605071d414bbf 2cfa2f2dfd6f4dc090f85672b1f24e68 - - default default] [instance: 3662ed68-91aa-4b97-8b12-5c87f99b02c9] Claim successful on node compute-2.ctlplane.example.com#033[00m
Dec  6 02:34:47 np0005548731 nova_compute[232433]: 2025-12-06 07:34:47.658 232437 DEBUG nova.scheduler.client.report [None req-29f199c3-439f-49ad-a6a7-ceafc9c9ff68 603967295b474367b2a605071d414bbf 2cfa2f2dfd6f4dc090f85672b1f24e68 - - default default] Refreshing inventories for resource provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Dec  6 02:34:47 np0005548731 nova_compute[232433]: 2025-12-06 07:34:47.674 232437 DEBUG nova.scheduler.client.report [None req-29f199c3-439f-49ad-a6a7-ceafc9c9ff68 603967295b474367b2a605071d414bbf 2cfa2f2dfd6f4dc090f85672b1f24e68 - - default default] Updating ProviderTree inventory for provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Dec  6 02:34:47 np0005548731 nova_compute[232433]: 2025-12-06 07:34:47.674 232437 DEBUG nova.compute.provider_tree [None req-29f199c3-439f-49ad-a6a7-ceafc9c9ff68 603967295b474367b2a605071d414bbf 2cfa2f2dfd6f4dc090f85672b1f24e68 - - default default] Updating inventory in ProviderTree for provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Dec  6 02:34:47 np0005548731 nova_compute[232433]: 2025-12-06 07:34:47.692 232437 DEBUG nova.scheduler.client.report [None req-29f199c3-439f-49ad-a6a7-ceafc9c9ff68 603967295b474367b2a605071d414bbf 2cfa2f2dfd6f4dc090f85672b1f24e68 - - default default] Refreshing aggregate associations for resource provider 6d00757a-082f-486d-ae84-869a2ba2e6e7, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Dec  6 02:34:47 np0005548731 nova_compute[232433]: 2025-12-06 07:34:47.713 232437 DEBUG nova.scheduler.client.report [None req-29f199c3-439f-49ad-a6a7-ceafc9c9ff68 603967295b474367b2a605071d414bbf 2cfa2f2dfd6f4dc090f85672b1f24e68 - - default default] Refreshing trait associations for resource provider 6d00757a-082f-486d-ae84-869a2ba2e6e7, traits: COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_STORAGE_BUS_USB,COMPUTE_ACCELERATORS,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_TRUSTED_CERTS,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_SSE,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_SSE2,HW_CPU_X86_SSE41,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_STORAGE_BUS_SATA,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_SECURITY_TPM_1_2,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_VOLUME_EXTEND,COMPUTE_NODE,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_SECURITY_TPM_2_0,COMPUTE_VIOMMU_MODEL_AUTO,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_SSE42,COMPUTE_STORAGE_BUS_FDC,COMPUTE_RESCUE_BFV,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_IMAGE_TYPE_ARI _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Dec  6 02:34:47 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:34:47 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:34:47 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:34:47.743 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:34:47 np0005548731 nova_compute[232433]: 2025-12-06 07:34:47.758 232437 DEBUG oslo_concurrency.processutils [None req-29f199c3-439f-49ad-a6a7-ceafc9c9ff68 603967295b474367b2a605071d414bbf 2cfa2f2dfd6f4dc090f85672b1f24e68 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:34:47 np0005548731 nova_compute[232433]: 2025-12-06 07:34:47.821 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:34:48 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 02:34:48 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2908565525' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 02:34:48 np0005548731 nova_compute[232433]: 2025-12-06 07:34:48.181 232437 DEBUG oslo_concurrency.processutils [None req-29f199c3-439f-49ad-a6a7-ceafc9c9ff68 603967295b474367b2a605071d414bbf 2cfa2f2dfd6f4dc090f85672b1f24e68 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.423s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:34:48 np0005548731 nova_compute[232433]: 2025-12-06 07:34:48.187 232437 DEBUG nova.compute.provider_tree [None req-29f199c3-439f-49ad-a6a7-ceafc9c9ff68 603967295b474367b2a605071d414bbf 2cfa2f2dfd6f4dc090f85672b1f24e68 - - default default] Inventory has not changed in ProviderTree for provider: 6d00757a-082f-486d-ae84-869a2ba2e6e7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  6 02:34:48 np0005548731 nova_compute[232433]: 2025-12-06 07:34:48.202 232437 DEBUG nova.scheduler.client.report [None req-29f199c3-439f-49ad-a6a7-ceafc9c9ff68 603967295b474367b2a605071d414bbf 2cfa2f2dfd6f4dc090f85672b1f24e68 - - default default] Inventory has not changed for provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  6 02:34:48 np0005548731 nova_compute[232433]: 2025-12-06 07:34:48.227 232437 DEBUG oslo_concurrency.lockutils [None req-29f199c3-439f-49ad-a6a7-ceafc9c9ff68 603967295b474367b2a605071d414bbf 2cfa2f2dfd6f4dc090f85672b1f24e68 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.662s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:34:48 np0005548731 nova_compute[232433]: 2025-12-06 07:34:48.228 232437 DEBUG nova.compute.manager [None req-29f199c3-439f-49ad-a6a7-ceafc9c9ff68 603967295b474367b2a605071d414bbf 2cfa2f2dfd6f4dc090f85672b1f24e68 - - default default] [instance: 3662ed68-91aa-4b97-8b12-5c87f99b02c9] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Dec  6 02:34:48 np0005548731 nova_compute[232433]: 2025-12-06 07:34:48.268 232437 DEBUG nova.compute.manager [None req-29f199c3-439f-49ad-a6a7-ceafc9c9ff68 603967295b474367b2a605071d414bbf 2cfa2f2dfd6f4dc090f85672b1f24e68 - - default default] [instance: 3662ed68-91aa-4b97-8b12-5c87f99b02c9] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Dec  6 02:34:48 np0005548731 nova_compute[232433]: 2025-12-06 07:34:48.269 232437 DEBUG nova.network.neutron [None req-29f199c3-439f-49ad-a6a7-ceafc9c9ff68 603967295b474367b2a605071d414bbf 2cfa2f2dfd6f4dc090f85672b1f24e68 - - default default] [instance: 3662ed68-91aa-4b97-8b12-5c87f99b02c9] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Dec  6 02:34:48 np0005548731 nova_compute[232433]: 2025-12-06 07:34:48.291 232437 INFO nova.virt.libvirt.driver [None req-29f199c3-439f-49ad-a6a7-ceafc9c9ff68 603967295b474367b2a605071d414bbf 2cfa2f2dfd6f4dc090f85672b1f24e68 - - default default] [instance: 3662ed68-91aa-4b97-8b12-5c87f99b02c9] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Dec  6 02:34:48 np0005548731 nova_compute[232433]: 2025-12-06 07:34:48.308 232437 DEBUG nova.compute.manager [None req-29f199c3-439f-49ad-a6a7-ceafc9c9ff68 603967295b474367b2a605071d414bbf 2cfa2f2dfd6f4dc090f85672b1f24e68 - - default default] [instance: 3662ed68-91aa-4b97-8b12-5c87f99b02c9] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Dec  6 02:34:48 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:34:48 np0005548731 nova_compute[232433]: 2025-12-06 07:34:48.455 232437 DEBUG nova.compute.manager [None req-29f199c3-439f-49ad-a6a7-ceafc9c9ff68 603967295b474367b2a605071d414bbf 2cfa2f2dfd6f4dc090f85672b1f24e68 - - default default] [instance: 3662ed68-91aa-4b97-8b12-5c87f99b02c9] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Dec  6 02:34:48 np0005548731 nova_compute[232433]: 2025-12-06 07:34:48.456 232437 DEBUG nova.virt.libvirt.driver [None req-29f199c3-439f-49ad-a6a7-ceafc9c9ff68 603967295b474367b2a605071d414bbf 2cfa2f2dfd6f4dc090f85672b1f24e68 - - default default] [instance: 3662ed68-91aa-4b97-8b12-5c87f99b02c9] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Dec  6 02:34:48 np0005548731 nova_compute[232433]: 2025-12-06 07:34:48.457 232437 INFO nova.virt.libvirt.driver [None req-29f199c3-439f-49ad-a6a7-ceafc9c9ff68 603967295b474367b2a605071d414bbf 2cfa2f2dfd6f4dc090f85672b1f24e68 - - default default] [instance: 3662ed68-91aa-4b97-8b12-5c87f99b02c9] Creating image(s)#033[00m
Dec  6 02:34:48 np0005548731 nova_compute[232433]: 2025-12-06 07:34:48.485 232437 DEBUG nova.storage.rbd_utils [None req-29f199c3-439f-49ad-a6a7-ceafc9c9ff68 603967295b474367b2a605071d414bbf 2cfa2f2dfd6f4dc090f85672b1f24e68 - - default default] rbd image 3662ed68-91aa-4b97-8b12-5c87f99b02c9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:34:48 np0005548731 nova_compute[232433]: 2025-12-06 07:34:48.511 232437 DEBUG nova.storage.rbd_utils [None req-29f199c3-439f-49ad-a6a7-ceafc9c9ff68 603967295b474367b2a605071d414bbf 2cfa2f2dfd6f4dc090f85672b1f24e68 - - default default] rbd image 3662ed68-91aa-4b97-8b12-5c87f99b02c9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:34:48 np0005548731 nova_compute[232433]: 2025-12-06 07:34:48.535 232437 DEBUG nova.storage.rbd_utils [None req-29f199c3-439f-49ad-a6a7-ceafc9c9ff68 603967295b474367b2a605071d414bbf 2cfa2f2dfd6f4dc090f85672b1f24e68 - - default default] rbd image 3662ed68-91aa-4b97-8b12-5c87f99b02c9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:34:48 np0005548731 nova_compute[232433]: 2025-12-06 07:34:48.538 232437 DEBUG oslo_concurrency.processutils [None req-29f199c3-439f-49ad-a6a7-ceafc9c9ff68 603967295b474367b2a605071d414bbf 2cfa2f2dfd6f4dc090f85672b1f24e68 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:34:48 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:34:48 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:34:48 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:34:48.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:34:48 np0005548731 nova_compute[232433]: 2025-12-06 07:34:48.565 232437 DEBUG nova.policy [None req-29f199c3-439f-49ad-a6a7-ceafc9c9ff68 603967295b474367b2a605071d414bbf 2cfa2f2dfd6f4dc090f85672b1f24e68 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '603967295b474367b2a605071d414bbf', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '2cfa2f2dfd6f4dc090f85672b1f24e68', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Dec  6 02:34:48 np0005548731 nova_compute[232433]: 2025-12-06 07:34:48.603 232437 DEBUG oslo_concurrency.processutils [None req-29f199c3-439f-49ad-a6a7-ceafc9c9ff68 603967295b474367b2a605071d414bbf 2cfa2f2dfd6f4dc090f85672b1f24e68 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:34:48 np0005548731 nova_compute[232433]: 2025-12-06 07:34:48.604 232437 DEBUG oslo_concurrency.lockutils [None req-29f199c3-439f-49ad-a6a7-ceafc9c9ff68 603967295b474367b2a605071d414bbf 2cfa2f2dfd6f4dc090f85672b1f24e68 - - default default] Acquiring lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:34:48 np0005548731 nova_compute[232433]: 2025-12-06 07:34:48.605 232437 DEBUG oslo_concurrency.lockutils [None req-29f199c3-439f-49ad-a6a7-ceafc9c9ff68 603967295b474367b2a605071d414bbf 2cfa2f2dfd6f4dc090f85672b1f24e68 - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:34:48 np0005548731 nova_compute[232433]: 2025-12-06 07:34:48.605 232437 DEBUG oslo_concurrency.lockutils [None req-29f199c3-439f-49ad-a6a7-ceafc9c9ff68 603967295b474367b2a605071d414bbf 2cfa2f2dfd6f4dc090f85672b1f24e68 - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:34:48 np0005548731 nova_compute[232433]: 2025-12-06 07:34:48.630 232437 DEBUG nova.storage.rbd_utils [None req-29f199c3-439f-49ad-a6a7-ceafc9c9ff68 603967295b474367b2a605071d414bbf 2cfa2f2dfd6f4dc090f85672b1f24e68 - - default default] rbd image 3662ed68-91aa-4b97-8b12-5c87f99b02c9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:34:48 np0005548731 nova_compute[232433]: 2025-12-06 07:34:48.634 232437 DEBUG oslo_concurrency.processutils [None req-29f199c3-439f-49ad-a6a7-ceafc9c9ff68 603967295b474367b2a605071d414bbf 2cfa2f2dfd6f4dc090f85672b1f24e68 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef 3662ed68-91aa-4b97-8b12-5c87f99b02c9_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:34:49 np0005548731 nova_compute[232433]: 2025-12-06 07:34:49.150 232437 DEBUG nova.network.neutron [None req-29f199c3-439f-49ad-a6a7-ceafc9c9ff68 603967295b474367b2a605071d414bbf 2cfa2f2dfd6f4dc090f85672b1f24e68 - - default default] [instance: 3662ed68-91aa-4b97-8b12-5c87f99b02c9] Successfully created port: 057f5adb-642a-4046-a07d-f33f8ab42b98 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Dec  6 02:34:49 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e293 e293: 3 total, 3 up, 3 in
Dec  6 02:34:49 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:34:49 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:34:49 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:34:49.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:34:50 np0005548731 nova_compute[232433]: 2025-12-06 07:34:50.134 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:34:50 np0005548731 nova_compute[232433]: 2025-12-06 07:34:50.299 232437 DEBUG nova.network.neutron [None req-29f199c3-439f-49ad-a6a7-ceafc9c9ff68 603967295b474367b2a605071d414bbf 2cfa2f2dfd6f4dc090f85672b1f24e68 - - default default] [instance: 3662ed68-91aa-4b97-8b12-5c87f99b02c9] Successfully updated port: 057f5adb-642a-4046-a07d-f33f8ab42b98 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Dec  6 02:34:50 np0005548731 nova_compute[232433]: 2025-12-06 07:34:50.336 232437 DEBUG oslo_concurrency.lockutils [None req-29f199c3-439f-49ad-a6a7-ceafc9c9ff68 603967295b474367b2a605071d414bbf 2cfa2f2dfd6f4dc090f85672b1f24e68 - - default default] Acquiring lock "refresh_cache-3662ed68-91aa-4b97-8b12-5c87f99b02c9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 02:34:50 np0005548731 nova_compute[232433]: 2025-12-06 07:34:50.336 232437 DEBUG oslo_concurrency.lockutils [None req-29f199c3-439f-49ad-a6a7-ceafc9c9ff68 603967295b474367b2a605071d414bbf 2cfa2f2dfd6f4dc090f85672b1f24e68 - - default default] Acquired lock "refresh_cache-3662ed68-91aa-4b97-8b12-5c87f99b02c9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 02:34:50 np0005548731 nova_compute[232433]: 2025-12-06 07:34:50.336 232437 DEBUG nova.network.neutron [None req-29f199c3-439f-49ad-a6a7-ceafc9c9ff68 603967295b474367b2a605071d414bbf 2cfa2f2dfd6f4dc090f85672b1f24e68 - - default default] [instance: 3662ed68-91aa-4b97-8b12-5c87f99b02c9] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec  6 02:34:50 np0005548731 nova_compute[232433]: 2025-12-06 07:34:50.398 232437 DEBUG nova.compute.manager [req-5445e4cd-4877-4d52-bbd7-9ece2575adcd req-c5844852-0f03-4de0-b15f-fe9650106589 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 3662ed68-91aa-4b97-8b12-5c87f99b02c9] Received event network-changed-057f5adb-642a-4046-a07d-f33f8ab42b98 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:34:50 np0005548731 nova_compute[232433]: 2025-12-06 07:34:50.398 232437 DEBUG nova.compute.manager [req-5445e4cd-4877-4d52-bbd7-9ece2575adcd req-c5844852-0f03-4de0-b15f-fe9650106589 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 3662ed68-91aa-4b97-8b12-5c87f99b02c9] Refreshing instance network info cache due to event network-changed-057f5adb-642a-4046-a07d-f33f8ab42b98. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  6 02:34:50 np0005548731 nova_compute[232433]: 2025-12-06 07:34:50.399 232437 DEBUG oslo_concurrency.lockutils [req-5445e4cd-4877-4d52-bbd7-9ece2575adcd req-c5844852-0f03-4de0-b15f-fe9650106589 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-3662ed68-91aa-4b97-8b12-5c87f99b02c9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 02:34:50 np0005548731 nova_compute[232433]: 2025-12-06 07:34:50.482 232437 DEBUG nova.network.neutron [None req-29f199c3-439f-49ad-a6a7-ceafc9c9ff68 603967295b474367b2a605071d414bbf 2cfa2f2dfd6f4dc090f85672b1f24e68 - - default default] [instance: 3662ed68-91aa-4b97-8b12-5c87f99b02c9] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec  6 02:34:50 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:34:50 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:34:50 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:34:50.562 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:34:51 np0005548731 nova_compute[232433]: 2025-12-06 07:34:51.319 232437 DEBUG nova.network.neutron [None req-29f199c3-439f-49ad-a6a7-ceafc9c9ff68 603967295b474367b2a605071d414bbf 2cfa2f2dfd6f4dc090f85672b1f24e68 - - default default] [instance: 3662ed68-91aa-4b97-8b12-5c87f99b02c9] Updating instance_info_cache with network_info: [{"id": "057f5adb-642a-4046-a07d-f33f8ab42b98", "address": "fa:16:3e:58:95:72", "network": {"id": "a17f6e7a-bed5-4778-a86b-02d7790910b2", "bridge": "br-int", "label": "tempest-NoVNCConsoleTestJSON-1529778336-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2cfa2f2dfd6f4dc090f85672b1f24e68", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap057f5adb-64", "ovs_interfaceid": "057f5adb-642a-4046-a07d-f33f8ab42b98", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 02:34:51 np0005548731 nova_compute[232433]: 2025-12-06 07:34:51.348 232437 DEBUG oslo_concurrency.lockutils [None req-29f199c3-439f-49ad-a6a7-ceafc9c9ff68 603967295b474367b2a605071d414bbf 2cfa2f2dfd6f4dc090f85672b1f24e68 - - default default] Releasing lock "refresh_cache-3662ed68-91aa-4b97-8b12-5c87f99b02c9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 02:34:51 np0005548731 nova_compute[232433]: 2025-12-06 07:34:51.348 232437 DEBUG nova.compute.manager [None req-29f199c3-439f-49ad-a6a7-ceafc9c9ff68 603967295b474367b2a605071d414bbf 2cfa2f2dfd6f4dc090f85672b1f24e68 - - default default] [instance: 3662ed68-91aa-4b97-8b12-5c87f99b02c9] Instance network_info: |[{"id": "057f5adb-642a-4046-a07d-f33f8ab42b98", "address": "fa:16:3e:58:95:72", "network": {"id": "a17f6e7a-bed5-4778-a86b-02d7790910b2", "bridge": "br-int", "label": "tempest-NoVNCConsoleTestJSON-1529778336-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2cfa2f2dfd6f4dc090f85672b1f24e68", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap057f5adb-64", "ovs_interfaceid": "057f5adb-642a-4046-a07d-f33f8ab42b98", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Dec  6 02:34:51 np0005548731 nova_compute[232433]: 2025-12-06 07:34:51.348 232437 DEBUG oslo_concurrency.lockutils [req-5445e4cd-4877-4d52-bbd7-9ece2575adcd req-c5844852-0f03-4de0-b15f-fe9650106589 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-3662ed68-91aa-4b97-8b12-5c87f99b02c9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 02:34:51 np0005548731 nova_compute[232433]: 2025-12-06 07:34:51.349 232437 DEBUG nova.network.neutron [req-5445e4cd-4877-4d52-bbd7-9ece2575adcd req-c5844852-0f03-4de0-b15f-fe9650106589 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 3662ed68-91aa-4b97-8b12-5c87f99b02c9] Refreshing network info cache for port 057f5adb-642a-4046-a07d-f33f8ab42b98 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  6 02:34:51 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:34:51 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:34:51 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:34:51.747 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:34:52 np0005548731 nova_compute[232433]: 2025-12-06 07:34:52.261 232437 DEBUG oslo_concurrency.processutils [None req-29f199c3-439f-49ad-a6a7-ceafc9c9ff68 603967295b474367b2a605071d414bbf 2cfa2f2dfd6f4dc090f85672b1f24e68 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef 3662ed68-91aa-4b97-8b12-5c87f99b02c9_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 3.628s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:34:52 np0005548731 nova_compute[232433]: 2025-12-06 07:34:52.331 232437 DEBUG nova.storage.rbd_utils [None req-29f199c3-439f-49ad-a6a7-ceafc9c9ff68 603967295b474367b2a605071d414bbf 2cfa2f2dfd6f4dc090f85672b1f24e68 - - default default] resizing rbd image 3662ed68-91aa-4b97-8b12-5c87f99b02c9_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Dec  6 02:34:52 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:34:52 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:34:52 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:34:52.564 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:34:52 np0005548731 nova_compute[232433]: 2025-12-06 07:34:52.938 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:34:53 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e293 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:34:53 np0005548731 nova_compute[232433]: 2025-12-06 07:34:53.574 232437 DEBUG nova.network.neutron [req-5445e4cd-4877-4d52-bbd7-9ece2575adcd req-c5844852-0f03-4de0-b15f-fe9650106589 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 3662ed68-91aa-4b97-8b12-5c87f99b02c9] Updated VIF entry in instance network info cache for port 057f5adb-642a-4046-a07d-f33f8ab42b98. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  6 02:34:53 np0005548731 nova_compute[232433]: 2025-12-06 07:34:53.575 232437 DEBUG nova.network.neutron [req-5445e4cd-4877-4d52-bbd7-9ece2575adcd req-c5844852-0f03-4de0-b15f-fe9650106589 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 3662ed68-91aa-4b97-8b12-5c87f99b02c9] Updating instance_info_cache with network_info: [{"id": "057f5adb-642a-4046-a07d-f33f8ab42b98", "address": "fa:16:3e:58:95:72", "network": {"id": "a17f6e7a-bed5-4778-a86b-02d7790910b2", "bridge": "br-int", "label": "tempest-NoVNCConsoleTestJSON-1529778336-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2cfa2f2dfd6f4dc090f85672b1f24e68", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap057f5adb-64", "ovs_interfaceid": "057f5adb-642a-4046-a07d-f33f8ab42b98", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 02:34:53 np0005548731 nova_compute[232433]: 2025-12-06 07:34:53.598 232437 DEBUG oslo_concurrency.lockutils [req-5445e4cd-4877-4d52-bbd7-9ece2575adcd req-c5844852-0f03-4de0-b15f-fe9650106589 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-3662ed68-91aa-4b97-8b12-5c87f99b02c9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 02:34:53 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:34:53 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:34:53 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:34:53.749 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:34:53 np0005548731 podman[285805]: 2025-12-06 07:34:53.922032466 +0000 UTC m=+0.081684601 container health_status 26c2ce9bdb83c6db5ccad7f5b2a7cb4e9354ab0235ad2f942ea42c8feda2142b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  6 02:34:53 np0005548731 podman[285807]: 2025-12-06 07:34:53.931428735 +0000 UTC m=+0.083845904 container health_status ae4fd08cdec31d6ae2a6b91ffff7deb6ca5a8375cb52ee4b685cc6f25796824e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.license=GPLv2)
Dec  6 02:34:53 np0005548731 podman[285806]: 2025-12-06 07:34:53.954451656 +0000 UTC m=+0.111823236 container health_status a118c3becf56cfbcae208065771ebf10a65162bb7b96389971293e534823fb78 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  6 02:34:54 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:34:54 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:34:54 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:34:54.566 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:34:55 np0005548731 nova_compute[232433]: 2025-12-06 07:34:55.136 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:34:55 np0005548731 nova_compute[232433]: 2025-12-06 07:34:55.236 232437 DEBUG oslo_concurrency.lockutils [None req-29877d5d-e2cb-4da6-9142-8f5d2636bd64 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Acquiring lock "b4be0ef8-945f-47a1-a3a8-5962f1e692e5" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:34:55 np0005548731 nova_compute[232433]: 2025-12-06 07:34:55.236 232437 DEBUG oslo_concurrency.lockutils [None req-29877d5d-e2cb-4da6-9142-8f5d2636bd64 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Lock "b4be0ef8-945f-47a1-a3a8-5962f1e692e5" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:34:55 np0005548731 nova_compute[232433]: 2025-12-06 07:34:55.254 232437 DEBUG nova.compute.manager [None req-29877d5d-e2cb-4da6-9142-8f5d2636bd64 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] [instance: b4be0ef8-945f-47a1-a3a8-5962f1e692e5] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Dec  6 02:34:55 np0005548731 nova_compute[232433]: 2025-12-06 07:34:55.320 232437 DEBUG oslo_concurrency.lockutils [None req-29877d5d-e2cb-4da6-9142-8f5d2636bd64 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:34:55 np0005548731 nova_compute[232433]: 2025-12-06 07:34:55.320 232437 DEBUG oslo_concurrency.lockutils [None req-29877d5d-e2cb-4da6-9142-8f5d2636bd64 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:34:55 np0005548731 nova_compute[232433]: 2025-12-06 07:34:55.326 232437 DEBUG nova.virt.hardware [None req-29877d5d-e2cb-4da6-9142-8f5d2636bd64 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Dec  6 02:34:55 np0005548731 nova_compute[232433]: 2025-12-06 07:34:55.327 232437 INFO nova.compute.claims [None req-29877d5d-e2cb-4da6-9142-8f5d2636bd64 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] [instance: b4be0ef8-945f-47a1-a3a8-5962f1e692e5] Claim successful on node compute-2.ctlplane.example.com#033[00m
Dec  6 02:34:55 np0005548731 nova_compute[232433]: 2025-12-06 07:34:55.377 232437 DEBUG nova.objects.instance [None req-29f199c3-439f-49ad-a6a7-ceafc9c9ff68 603967295b474367b2a605071d414bbf 2cfa2f2dfd6f4dc090f85672b1f24e68 - - default default] Lazy-loading 'migration_context' on Instance uuid 3662ed68-91aa-4b97-8b12-5c87f99b02c9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:34:55 np0005548731 nova_compute[232433]: 2025-12-06 07:34:55.393 232437 DEBUG nova.virt.libvirt.driver [None req-29f199c3-439f-49ad-a6a7-ceafc9c9ff68 603967295b474367b2a605071d414bbf 2cfa2f2dfd6f4dc090f85672b1f24e68 - - default default] [instance: 3662ed68-91aa-4b97-8b12-5c87f99b02c9] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Dec  6 02:34:55 np0005548731 nova_compute[232433]: 2025-12-06 07:34:55.393 232437 DEBUG nova.virt.libvirt.driver [None req-29f199c3-439f-49ad-a6a7-ceafc9c9ff68 603967295b474367b2a605071d414bbf 2cfa2f2dfd6f4dc090f85672b1f24e68 - - default default] [instance: 3662ed68-91aa-4b97-8b12-5c87f99b02c9] Ensure instance console log exists: /var/lib/nova/instances/3662ed68-91aa-4b97-8b12-5c87f99b02c9/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec  6 02:34:55 np0005548731 nova_compute[232433]: 2025-12-06 07:34:55.394 232437 DEBUG oslo_concurrency.lockutils [None req-29f199c3-439f-49ad-a6a7-ceafc9c9ff68 603967295b474367b2a605071d414bbf 2cfa2f2dfd6f4dc090f85672b1f24e68 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:34:55 np0005548731 nova_compute[232433]: 2025-12-06 07:34:55.394 232437 DEBUG oslo_concurrency.lockutils [None req-29f199c3-439f-49ad-a6a7-ceafc9c9ff68 603967295b474367b2a605071d414bbf 2cfa2f2dfd6f4dc090f85672b1f24e68 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:34:55 np0005548731 nova_compute[232433]: 2025-12-06 07:34:55.394 232437 DEBUG oslo_concurrency.lockutils [None req-29f199c3-439f-49ad-a6a7-ceafc9c9ff68 603967295b474367b2a605071d414bbf 2cfa2f2dfd6f4dc090f85672b1f24e68 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:34:55 np0005548731 nova_compute[232433]: 2025-12-06 07:34:55.396 232437 DEBUG nova.virt.libvirt.driver [None req-29f199c3-439f-49ad-a6a7-ceafc9c9ff68 603967295b474367b2a605071d414bbf 2cfa2f2dfd6f4dc090f85672b1f24e68 - - default default] [instance: 3662ed68-91aa-4b97-8b12-5c87f99b02c9] Start _get_guest_xml network_info=[{"id": "057f5adb-642a-4046-a07d-f33f8ab42b98", "address": "fa:16:3e:58:95:72", "network": {"id": "a17f6e7a-bed5-4778-a86b-02d7790910b2", "bridge": "br-int", "label": "tempest-NoVNCConsoleTestJSON-1529778336-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2cfa2f2dfd6f4dc090f85672b1f24e68", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap057f5adb-64", "ovs_interfaceid": "057f5adb-642a-4046-a07d-f33f8ab42b98", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'device_type': 'disk', 'encryption_secret_uuid': None, 'disk_bus': 'virtio', 'boot_index': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'encryption_options': None, 'size': 0, 'encrypted': False, 'image_id': '6efab05d-c7cf-4770-a5c3-c806a2739063'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec  6 02:34:55 np0005548731 nova_compute[232433]: 2025-12-06 07:34:55.400 232437 WARNING nova.virt.libvirt.driver [None req-29f199c3-439f-49ad-a6a7-ceafc9c9ff68 603967295b474367b2a605071d414bbf 2cfa2f2dfd6f4dc090f85672b1f24e68 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  6 02:34:55 np0005548731 nova_compute[232433]: 2025-12-06 07:34:55.405 232437 DEBUG nova.virt.libvirt.host [None req-29f199c3-439f-49ad-a6a7-ceafc9c9ff68 603967295b474367b2a605071d414bbf 2cfa2f2dfd6f4dc090f85672b1f24e68 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec  6 02:34:55 np0005548731 nova_compute[232433]: 2025-12-06 07:34:55.406 232437 DEBUG nova.virt.libvirt.host [None req-29f199c3-439f-49ad-a6a7-ceafc9c9ff68 603967295b474367b2a605071d414bbf 2cfa2f2dfd6f4dc090f85672b1f24e68 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec  6 02:34:55 np0005548731 nova_compute[232433]: 2025-12-06 07:34:55.408 232437 DEBUG nova.virt.libvirt.host [None req-29f199c3-439f-49ad-a6a7-ceafc9c9ff68 603967295b474367b2a605071d414bbf 2cfa2f2dfd6f4dc090f85672b1f24e68 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec  6 02:34:55 np0005548731 nova_compute[232433]: 2025-12-06 07:34:55.409 232437 DEBUG nova.virt.libvirt.host [None req-29f199c3-439f-49ad-a6a7-ceafc9c9ff68 603967295b474367b2a605071d414bbf 2cfa2f2dfd6f4dc090f85672b1f24e68 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec  6 02:34:55 np0005548731 nova_compute[232433]: 2025-12-06 07:34:55.410 232437 DEBUG nova.virt.libvirt.driver [None req-29f199c3-439f-49ad-a6a7-ceafc9c9ff68 603967295b474367b2a605071d414bbf 2cfa2f2dfd6f4dc090f85672b1f24e68 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec  6 02:34:55 np0005548731 nova_compute[232433]: 2025-12-06 07:34:55.410 232437 DEBUG nova.virt.hardware [None req-29f199c3-439f-49ad-a6a7-ceafc9c9ff68 603967295b474367b2a605071d414bbf 2cfa2f2dfd6f4dc090f85672b1f24e68 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-06T06:56:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='25848a18-11d9-4f11-80b5-5d005675c76d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec  6 02:34:55 np0005548731 nova_compute[232433]: 2025-12-06 07:34:55.410 232437 DEBUG nova.virt.hardware [None req-29f199c3-439f-49ad-a6a7-ceafc9c9ff68 603967295b474367b2a605071d414bbf 2cfa2f2dfd6f4dc090f85672b1f24e68 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec  6 02:34:55 np0005548731 nova_compute[232433]: 2025-12-06 07:34:55.410 232437 DEBUG nova.virt.hardware [None req-29f199c3-439f-49ad-a6a7-ceafc9c9ff68 603967295b474367b2a605071d414bbf 2cfa2f2dfd6f4dc090f85672b1f24e68 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec  6 02:34:55 np0005548731 nova_compute[232433]: 2025-12-06 07:34:55.411 232437 DEBUG nova.virt.hardware [None req-29f199c3-439f-49ad-a6a7-ceafc9c9ff68 603967295b474367b2a605071d414bbf 2cfa2f2dfd6f4dc090f85672b1f24e68 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec  6 02:34:55 np0005548731 nova_compute[232433]: 2025-12-06 07:34:55.411 232437 DEBUG nova.virt.hardware [None req-29f199c3-439f-49ad-a6a7-ceafc9c9ff68 603967295b474367b2a605071d414bbf 2cfa2f2dfd6f4dc090f85672b1f24e68 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec  6 02:34:55 np0005548731 nova_compute[232433]: 2025-12-06 07:34:55.411 232437 DEBUG nova.virt.hardware [None req-29f199c3-439f-49ad-a6a7-ceafc9c9ff68 603967295b474367b2a605071d414bbf 2cfa2f2dfd6f4dc090f85672b1f24e68 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec  6 02:34:55 np0005548731 nova_compute[232433]: 2025-12-06 07:34:55.411 232437 DEBUG nova.virt.hardware [None req-29f199c3-439f-49ad-a6a7-ceafc9c9ff68 603967295b474367b2a605071d414bbf 2cfa2f2dfd6f4dc090f85672b1f24e68 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec  6 02:34:55 np0005548731 nova_compute[232433]: 2025-12-06 07:34:55.411 232437 DEBUG nova.virt.hardware [None req-29f199c3-439f-49ad-a6a7-ceafc9c9ff68 603967295b474367b2a605071d414bbf 2cfa2f2dfd6f4dc090f85672b1f24e68 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec  6 02:34:55 np0005548731 nova_compute[232433]: 2025-12-06 07:34:55.412 232437 DEBUG nova.virt.hardware [None req-29f199c3-439f-49ad-a6a7-ceafc9c9ff68 603967295b474367b2a605071d414bbf 2cfa2f2dfd6f4dc090f85672b1f24e68 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec  6 02:34:55 np0005548731 nova_compute[232433]: 2025-12-06 07:34:55.412 232437 DEBUG nova.virt.hardware [None req-29f199c3-439f-49ad-a6a7-ceafc9c9ff68 603967295b474367b2a605071d414bbf 2cfa2f2dfd6f4dc090f85672b1f24e68 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec  6 02:34:55 np0005548731 nova_compute[232433]: 2025-12-06 07:34:55.412 232437 DEBUG nova.virt.hardware [None req-29f199c3-439f-49ad-a6a7-ceafc9c9ff68 603967295b474367b2a605071d414bbf 2cfa2f2dfd6f4dc090f85672b1f24e68 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec  6 02:34:55 np0005548731 nova_compute[232433]: 2025-12-06 07:34:55.415 232437 DEBUG oslo_concurrency.processutils [None req-29f199c3-439f-49ad-a6a7-ceafc9c9ff68 603967295b474367b2a605071d414bbf 2cfa2f2dfd6f4dc090f85672b1f24e68 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:34:55 np0005548731 nova_compute[232433]: 2025-12-06 07:34:55.494 232437 DEBUG oslo_concurrency.processutils [None req-29877d5d-e2cb-4da6-9142-8f5d2636bd64 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:34:55 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:34:55 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:34:55 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:34:55.751 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:34:55 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  6 02:34:55 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3450741167' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  6 02:34:55 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 02:34:55 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3678721108' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 02:34:55 np0005548731 nova_compute[232433]: 2025-12-06 07:34:55.927 232437 DEBUG oslo_concurrency.processutils [None req-29877d5d-e2cb-4da6-9142-8f5d2636bd64 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.433s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:34:55 np0005548731 nova_compute[232433]: 2025-12-06 07:34:55.932 232437 DEBUG nova.compute.provider_tree [None req-29877d5d-e2cb-4da6-9142-8f5d2636bd64 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Inventory has not changed in ProviderTree for provider: 6d00757a-082f-486d-ae84-869a2ba2e6e7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  6 02:34:55 np0005548731 nova_compute[232433]: 2025-12-06 07:34:55.947 232437 DEBUG nova.scheduler.client.report [None req-29877d5d-e2cb-4da6-9142-8f5d2636bd64 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Inventory has not changed for provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  6 02:34:55 np0005548731 nova_compute[232433]: 2025-12-06 07:34:55.966 232437 DEBUG oslo_concurrency.lockutils [None req-29877d5d-e2cb-4da6-9142-8f5d2636bd64 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.646s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:34:55 np0005548731 nova_compute[232433]: 2025-12-06 07:34:55.967 232437 DEBUG nova.compute.manager [None req-29877d5d-e2cb-4da6-9142-8f5d2636bd64 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] [instance: b4be0ef8-945f-47a1-a3a8-5962f1e692e5] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Dec  6 02:34:56 np0005548731 nova_compute[232433]: 2025-12-06 07:34:56.004 232437 DEBUG nova.compute.manager [None req-29877d5d-e2cb-4da6-9142-8f5d2636bd64 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] [instance: b4be0ef8-945f-47a1-a3a8-5962f1e692e5] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Dec  6 02:34:56 np0005548731 nova_compute[232433]: 2025-12-06 07:34:56.004 232437 DEBUG nova.network.neutron [None req-29877d5d-e2cb-4da6-9142-8f5d2636bd64 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] [instance: b4be0ef8-945f-47a1-a3a8-5962f1e692e5] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Dec  6 02:34:56 np0005548731 nova_compute[232433]: 2025-12-06 07:34:56.023 232437 INFO nova.virt.libvirt.driver [None req-29877d5d-e2cb-4da6-9142-8f5d2636bd64 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] [instance: b4be0ef8-945f-47a1-a3a8-5962f1e692e5] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Dec  6 02:34:56 np0005548731 nova_compute[232433]: 2025-12-06 07:34:56.037 232437 DEBUG nova.compute.manager [None req-29877d5d-e2cb-4da6-9142-8f5d2636bd64 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] [instance: b4be0ef8-945f-47a1-a3a8-5962f1e692e5] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Dec  6 02:34:56 np0005548731 nova_compute[232433]: 2025-12-06 07:34:56.154 232437 DEBUG nova.compute.manager [None req-29877d5d-e2cb-4da6-9142-8f5d2636bd64 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] [instance: b4be0ef8-945f-47a1-a3a8-5962f1e692e5] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Dec  6 02:34:56 np0005548731 nova_compute[232433]: 2025-12-06 07:34:56.155 232437 DEBUG nova.virt.libvirt.driver [None req-29877d5d-e2cb-4da6-9142-8f5d2636bd64 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] [instance: b4be0ef8-945f-47a1-a3a8-5962f1e692e5] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Dec  6 02:34:56 np0005548731 nova_compute[232433]: 2025-12-06 07:34:56.156 232437 INFO nova.virt.libvirt.driver [None req-29877d5d-e2cb-4da6-9142-8f5d2636bd64 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] [instance: b4be0ef8-945f-47a1-a3a8-5962f1e692e5] Creating image(s)#033[00m
Dec  6 02:34:56 np0005548731 nova_compute[232433]: 2025-12-06 07:34:56.183 232437 DEBUG nova.storage.rbd_utils [None req-29877d5d-e2cb-4da6-9142-8f5d2636bd64 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] rbd image b4be0ef8-945f-47a1-a3a8-5962f1e692e5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:34:56 np0005548731 nova_compute[232433]: 2025-12-06 07:34:56.216 232437 DEBUG nova.storage.rbd_utils [None req-29877d5d-e2cb-4da6-9142-8f5d2636bd64 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] rbd image b4be0ef8-945f-47a1-a3a8-5962f1e692e5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:34:56 np0005548731 nova_compute[232433]: 2025-12-06 07:34:56.251 232437 DEBUG nova.storage.rbd_utils [None req-29877d5d-e2cb-4da6-9142-8f5d2636bd64 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] rbd image b4be0ef8-945f-47a1-a3a8-5962f1e692e5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:34:56 np0005548731 nova_compute[232433]: 2025-12-06 07:34:56.259 232437 DEBUG oslo_concurrency.processutils [None req-29877d5d-e2cb-4da6-9142-8f5d2636bd64 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:34:56 np0005548731 nova_compute[232433]: 2025-12-06 07:34:56.297 232437 DEBUG nova.policy [None req-29877d5d-e2cb-4da6-9142-8f5d2636bd64 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '605b5481e0c944048e6a67046c30d693', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '833f4cf9f5a64b2ab94c3bf330353a31', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Dec  6 02:34:56 np0005548731 nova_compute[232433]: 2025-12-06 07:34:56.329 232437 DEBUG oslo_concurrency.processutils [None req-29877d5d-e2cb-4da6-9142-8f5d2636bd64 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json" returned: 0 in 0.070s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:34:56 np0005548731 nova_compute[232433]: 2025-12-06 07:34:56.330 232437 DEBUG oslo_concurrency.lockutils [None req-29877d5d-e2cb-4da6-9142-8f5d2636bd64 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Acquiring lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:34:56 np0005548731 nova_compute[232433]: 2025-12-06 07:34:56.331 232437 DEBUG oslo_concurrency.lockutils [None req-29877d5d-e2cb-4da6-9142-8f5d2636bd64 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:34:56 np0005548731 nova_compute[232433]: 2025-12-06 07:34:56.331 232437 DEBUG oslo_concurrency.lockutils [None req-29877d5d-e2cb-4da6-9142-8f5d2636bd64 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:34:56 np0005548731 nova_compute[232433]: 2025-12-06 07:34:56.362 232437 DEBUG nova.storage.rbd_utils [None req-29877d5d-e2cb-4da6-9142-8f5d2636bd64 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] rbd image b4be0ef8-945f-47a1-a3a8-5962f1e692e5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:34:56 np0005548731 nova_compute[232433]: 2025-12-06 07:34:56.367 232437 DEBUG oslo_concurrency.processutils [None req-29877d5d-e2cb-4da6-9142-8f5d2636bd64 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef b4be0ef8-945f-47a1-a3a8-5962f1e692e5_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:34:56 np0005548731 nova_compute[232433]: 2025-12-06 07:34:56.561 232437 DEBUG oslo_concurrency.processutils [None req-29f199c3-439f-49ad-a6a7-ceafc9c9ff68 603967295b474367b2a605071d414bbf 2cfa2f2dfd6f4dc090f85672b1f24e68 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.146s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:34:56 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:34:56 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:34:56 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:34:56.568 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:34:56 np0005548731 nova_compute[232433]: 2025-12-06 07:34:56.605 232437 DEBUG nova.storage.rbd_utils [None req-29f199c3-439f-49ad-a6a7-ceafc9c9ff68 603967295b474367b2a605071d414bbf 2cfa2f2dfd6f4dc090f85672b1f24e68 - - default default] rbd image 3662ed68-91aa-4b97-8b12-5c87f99b02c9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:34:56 np0005548731 nova_compute[232433]: 2025-12-06 07:34:56.613 232437 DEBUG oslo_concurrency.processutils [None req-29f199c3-439f-49ad-a6a7-ceafc9c9ff68 603967295b474367b2a605071d414bbf 2cfa2f2dfd6f4dc090f85672b1f24e68 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:34:57 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  6 02:34:57 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/4270521250' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  6 02:34:57 np0005548731 nova_compute[232433]: 2025-12-06 07:34:57.085 232437 DEBUG oslo_concurrency.processutils [None req-29f199c3-439f-49ad-a6a7-ceafc9c9ff68 603967295b474367b2a605071d414bbf 2cfa2f2dfd6f4dc090f85672b1f24e68 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.472s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:34:57 np0005548731 nova_compute[232433]: 2025-12-06 07:34:57.087 232437 DEBUG nova.virt.libvirt.vif [None req-29f199c3-439f-49ad-a6a7-ceafc9c9ff68 603967295b474367b2a605071d414bbf 2cfa2f2dfd6f4dc090f85672b1f24e68 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T07:34:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-NoVNCConsoleTestJSON-server-1410038847',display_name='tempest-NoVNCConsoleTestJSON-server-1410038847',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-novncconsoletestjson-server-1410038847',id=126,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2cfa2f2dfd6f4dc090f85672b1f24e68',ramdisk_id='',reservation_id='r-7hnen40u',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-NoVNCConsoleTestJSON-726879140',owner_user_name='tempest-NoVNCConsoleTestJSON-72
6879140-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:34:48Z,user_data=None,user_id='603967295b474367b2a605071d414bbf',uuid=3662ed68-91aa-4b97-8b12-5c87f99b02c9,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "057f5adb-642a-4046-a07d-f33f8ab42b98", "address": "fa:16:3e:58:95:72", "network": {"id": "a17f6e7a-bed5-4778-a86b-02d7790910b2", "bridge": "br-int", "label": "tempest-NoVNCConsoleTestJSON-1529778336-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2cfa2f2dfd6f4dc090f85672b1f24e68", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap057f5adb-64", "ovs_interfaceid": "057f5adb-642a-4046-a07d-f33f8ab42b98", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec  6 02:34:57 np0005548731 nova_compute[232433]: 2025-12-06 07:34:57.088 232437 DEBUG nova.network.os_vif_util [None req-29f199c3-439f-49ad-a6a7-ceafc9c9ff68 603967295b474367b2a605071d414bbf 2cfa2f2dfd6f4dc090f85672b1f24e68 - - default default] Converting VIF {"id": "057f5adb-642a-4046-a07d-f33f8ab42b98", "address": "fa:16:3e:58:95:72", "network": {"id": "a17f6e7a-bed5-4778-a86b-02d7790910b2", "bridge": "br-int", "label": "tempest-NoVNCConsoleTestJSON-1529778336-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2cfa2f2dfd6f4dc090f85672b1f24e68", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap057f5adb-64", "ovs_interfaceid": "057f5adb-642a-4046-a07d-f33f8ab42b98", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 02:34:57 np0005548731 nova_compute[232433]: 2025-12-06 07:34:57.089 232437 DEBUG nova.network.os_vif_util [None req-29f199c3-439f-49ad-a6a7-ceafc9c9ff68 603967295b474367b2a605071d414bbf 2cfa2f2dfd6f4dc090f85672b1f24e68 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:58:95:72,bridge_name='br-int',has_traffic_filtering=True,id=057f5adb-642a-4046-a07d-f33f8ab42b98,network=Network(a17f6e7a-bed5-4778-a86b-02d7790910b2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap057f5adb-64') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 02:34:57 np0005548731 nova_compute[232433]: 2025-12-06 07:34:57.090 232437 DEBUG nova.objects.instance [None req-29f199c3-439f-49ad-a6a7-ceafc9c9ff68 603967295b474367b2a605071d414bbf 2cfa2f2dfd6f4dc090f85672b1f24e68 - - default default] Lazy-loading 'pci_devices' on Instance uuid 3662ed68-91aa-4b97-8b12-5c87f99b02c9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:34:57 np0005548731 nova_compute[232433]: 2025-12-06 07:34:57.106 232437 DEBUG nova.virt.libvirt.driver [None req-29f199c3-439f-49ad-a6a7-ceafc9c9ff68 603967295b474367b2a605071d414bbf 2cfa2f2dfd6f4dc090f85672b1f24e68 - - default default] [instance: 3662ed68-91aa-4b97-8b12-5c87f99b02c9] End _get_guest_xml xml=<domain type="kvm">
Dec  6 02:34:57 np0005548731 nova_compute[232433]:  <uuid>3662ed68-91aa-4b97-8b12-5c87f99b02c9</uuid>
Dec  6 02:34:57 np0005548731 nova_compute[232433]:  <name>instance-0000007e</name>
Dec  6 02:34:57 np0005548731 nova_compute[232433]:  <memory>131072</memory>
Dec  6 02:34:57 np0005548731 nova_compute[232433]:  <vcpu>1</vcpu>
Dec  6 02:34:57 np0005548731 nova_compute[232433]:  <metadata>
Dec  6 02:34:57 np0005548731 nova_compute[232433]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec  6 02:34:57 np0005548731 nova_compute[232433]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec  6 02:34:57 np0005548731 nova_compute[232433]:      <nova:name>tempest-NoVNCConsoleTestJSON-server-1410038847</nova:name>
Dec  6 02:34:57 np0005548731 nova_compute[232433]:      <nova:creationTime>2025-12-06 07:34:55</nova:creationTime>
Dec  6 02:34:57 np0005548731 nova_compute[232433]:      <nova:flavor name="m1.nano">
Dec  6 02:34:57 np0005548731 nova_compute[232433]:        <nova:memory>128</nova:memory>
Dec  6 02:34:57 np0005548731 nova_compute[232433]:        <nova:disk>1</nova:disk>
Dec  6 02:34:57 np0005548731 nova_compute[232433]:        <nova:swap>0</nova:swap>
Dec  6 02:34:57 np0005548731 nova_compute[232433]:        <nova:ephemeral>0</nova:ephemeral>
Dec  6 02:34:57 np0005548731 nova_compute[232433]:        <nova:vcpus>1</nova:vcpus>
Dec  6 02:34:57 np0005548731 nova_compute[232433]:      </nova:flavor>
Dec  6 02:34:57 np0005548731 nova_compute[232433]:      <nova:owner>
Dec  6 02:34:57 np0005548731 nova_compute[232433]:        <nova:user uuid="603967295b474367b2a605071d414bbf">tempest-NoVNCConsoleTestJSON-726879140-project-member</nova:user>
Dec  6 02:34:57 np0005548731 nova_compute[232433]:        <nova:project uuid="2cfa2f2dfd6f4dc090f85672b1f24e68">tempest-NoVNCConsoleTestJSON-726879140</nova:project>
Dec  6 02:34:57 np0005548731 nova_compute[232433]:      </nova:owner>
Dec  6 02:34:57 np0005548731 nova_compute[232433]:      <nova:root type="image" uuid="6efab05d-c7cf-4770-a5c3-c806a2739063"/>
Dec  6 02:34:57 np0005548731 nova_compute[232433]:      <nova:ports>
Dec  6 02:34:57 np0005548731 nova_compute[232433]:        <nova:port uuid="057f5adb-642a-4046-a07d-f33f8ab42b98">
Dec  6 02:34:57 np0005548731 nova_compute[232433]:          <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Dec  6 02:34:57 np0005548731 nova_compute[232433]:        </nova:port>
Dec  6 02:34:57 np0005548731 nova_compute[232433]:      </nova:ports>
Dec  6 02:34:57 np0005548731 nova_compute[232433]:    </nova:instance>
Dec  6 02:34:57 np0005548731 nova_compute[232433]:  </metadata>
Dec  6 02:34:57 np0005548731 nova_compute[232433]:  <sysinfo type="smbios">
Dec  6 02:34:57 np0005548731 nova_compute[232433]:    <system>
Dec  6 02:34:57 np0005548731 nova_compute[232433]:      <entry name="manufacturer">RDO</entry>
Dec  6 02:34:57 np0005548731 nova_compute[232433]:      <entry name="product">OpenStack Compute</entry>
Dec  6 02:34:57 np0005548731 nova_compute[232433]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec  6 02:34:57 np0005548731 nova_compute[232433]:      <entry name="serial">3662ed68-91aa-4b97-8b12-5c87f99b02c9</entry>
Dec  6 02:34:57 np0005548731 nova_compute[232433]:      <entry name="uuid">3662ed68-91aa-4b97-8b12-5c87f99b02c9</entry>
Dec  6 02:34:57 np0005548731 nova_compute[232433]:      <entry name="family">Virtual Machine</entry>
Dec  6 02:34:57 np0005548731 nova_compute[232433]:    </system>
Dec  6 02:34:57 np0005548731 nova_compute[232433]:  </sysinfo>
Dec  6 02:34:57 np0005548731 nova_compute[232433]:  <os>
Dec  6 02:34:57 np0005548731 nova_compute[232433]:    <type arch="x86_64" machine="q35">hvm</type>
Dec  6 02:34:57 np0005548731 nova_compute[232433]:    <boot dev="hd"/>
Dec  6 02:34:57 np0005548731 nova_compute[232433]:    <smbios mode="sysinfo"/>
Dec  6 02:34:57 np0005548731 nova_compute[232433]:  </os>
Dec  6 02:34:57 np0005548731 nova_compute[232433]:  <features>
Dec  6 02:34:57 np0005548731 nova_compute[232433]:    <acpi/>
Dec  6 02:34:57 np0005548731 nova_compute[232433]:    <apic/>
Dec  6 02:34:57 np0005548731 nova_compute[232433]:    <vmcoreinfo/>
Dec  6 02:34:57 np0005548731 nova_compute[232433]:  </features>
Dec  6 02:34:57 np0005548731 nova_compute[232433]:  <clock offset="utc">
Dec  6 02:34:57 np0005548731 nova_compute[232433]:    <timer name="pit" tickpolicy="delay"/>
Dec  6 02:34:57 np0005548731 nova_compute[232433]:    <timer name="rtc" tickpolicy="catchup"/>
Dec  6 02:34:57 np0005548731 nova_compute[232433]:    <timer name="hpet" present="no"/>
Dec  6 02:34:57 np0005548731 nova_compute[232433]:  </clock>
Dec  6 02:34:57 np0005548731 nova_compute[232433]:  <cpu mode="custom" match="exact">
Dec  6 02:34:57 np0005548731 nova_compute[232433]:    <model>Nehalem</model>
Dec  6 02:34:57 np0005548731 nova_compute[232433]:    <topology sockets="1" cores="1" threads="1"/>
Dec  6 02:34:57 np0005548731 nova_compute[232433]:  </cpu>
Dec  6 02:34:57 np0005548731 nova_compute[232433]:  <devices>
Dec  6 02:34:57 np0005548731 nova_compute[232433]:    <disk type="network" device="disk">
Dec  6 02:34:57 np0005548731 nova_compute[232433]:      <driver type="raw" cache="none"/>
Dec  6 02:34:57 np0005548731 nova_compute[232433]:      <source protocol="rbd" name="vms/3662ed68-91aa-4b97-8b12-5c87f99b02c9_disk">
Dec  6 02:34:57 np0005548731 nova_compute[232433]:        <host name="192.168.122.100" port="6789"/>
Dec  6 02:34:57 np0005548731 nova_compute[232433]:        <host name="192.168.122.102" port="6789"/>
Dec  6 02:34:57 np0005548731 nova_compute[232433]:        <host name="192.168.122.101" port="6789"/>
Dec  6 02:34:57 np0005548731 nova_compute[232433]:      </source>
Dec  6 02:34:57 np0005548731 nova_compute[232433]:      <auth username="openstack">
Dec  6 02:34:57 np0005548731 nova_compute[232433]:        <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec  6 02:34:57 np0005548731 nova_compute[232433]:      </auth>
Dec  6 02:34:57 np0005548731 nova_compute[232433]:      <target dev="vda" bus="virtio"/>
Dec  6 02:34:57 np0005548731 nova_compute[232433]:    </disk>
Dec  6 02:34:57 np0005548731 nova_compute[232433]:    <disk type="network" device="cdrom">
Dec  6 02:34:57 np0005548731 nova_compute[232433]:      <driver type="raw" cache="none"/>
Dec  6 02:34:57 np0005548731 nova_compute[232433]:      <source protocol="rbd" name="vms/3662ed68-91aa-4b97-8b12-5c87f99b02c9_disk.config">
Dec  6 02:34:57 np0005548731 nova_compute[232433]:        <host name="192.168.122.100" port="6789"/>
Dec  6 02:34:57 np0005548731 nova_compute[232433]:        <host name="192.168.122.102" port="6789"/>
Dec  6 02:34:57 np0005548731 nova_compute[232433]:        <host name="192.168.122.101" port="6789"/>
Dec  6 02:34:57 np0005548731 nova_compute[232433]:      </source>
Dec  6 02:34:57 np0005548731 nova_compute[232433]:      <auth username="openstack">
Dec  6 02:34:57 np0005548731 nova_compute[232433]:        <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec  6 02:34:57 np0005548731 nova_compute[232433]:      </auth>
Dec  6 02:34:57 np0005548731 nova_compute[232433]:      <target dev="sda" bus="sata"/>
Dec  6 02:34:57 np0005548731 nova_compute[232433]:    </disk>
Dec  6 02:34:57 np0005548731 nova_compute[232433]:    <interface type="ethernet">
Dec  6 02:34:57 np0005548731 nova_compute[232433]:      <mac address="fa:16:3e:58:95:72"/>
Dec  6 02:34:57 np0005548731 nova_compute[232433]:      <model type="virtio"/>
Dec  6 02:34:57 np0005548731 nova_compute[232433]:      <driver name="vhost" rx_queue_size="512"/>
Dec  6 02:34:57 np0005548731 nova_compute[232433]:      <mtu size="1442"/>
Dec  6 02:34:57 np0005548731 nova_compute[232433]:      <target dev="tap057f5adb-64"/>
Dec  6 02:34:57 np0005548731 nova_compute[232433]:    </interface>
Dec  6 02:34:57 np0005548731 nova_compute[232433]:    <serial type="pty">
Dec  6 02:34:57 np0005548731 nova_compute[232433]:      <log file="/var/lib/nova/instances/3662ed68-91aa-4b97-8b12-5c87f99b02c9/console.log" append="off"/>
Dec  6 02:34:57 np0005548731 nova_compute[232433]:    </serial>
Dec  6 02:34:57 np0005548731 nova_compute[232433]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec  6 02:34:57 np0005548731 nova_compute[232433]:    <video>
Dec  6 02:34:57 np0005548731 nova_compute[232433]:      <model type="virtio"/>
Dec  6 02:34:57 np0005548731 nova_compute[232433]:    </video>
Dec  6 02:34:57 np0005548731 nova_compute[232433]:    <input type="tablet" bus="usb"/>
Dec  6 02:34:57 np0005548731 nova_compute[232433]:    <rng model="virtio">
Dec  6 02:34:57 np0005548731 nova_compute[232433]:      <backend model="random">/dev/urandom</backend>
Dec  6 02:34:57 np0005548731 nova_compute[232433]:    </rng>
Dec  6 02:34:57 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root"/>
Dec  6 02:34:57 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:34:57 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:34:57 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:34:57 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:34:57 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:34:57 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:34:57 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:34:57 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:34:57 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:34:57 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:34:57 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:34:57 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:34:57 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:34:57 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:34:57 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:34:57 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:34:57 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:34:57 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:34:57 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:34:57 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:34:57 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:34:57 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:34:57 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:34:57 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:34:57 np0005548731 nova_compute[232433]:    <controller type="usb" index="0"/>
Dec  6 02:34:57 np0005548731 nova_compute[232433]:    <memballoon model="virtio">
Dec  6 02:34:57 np0005548731 nova_compute[232433]:      <stats period="10"/>
Dec  6 02:34:57 np0005548731 nova_compute[232433]:    </memballoon>
Dec  6 02:34:57 np0005548731 nova_compute[232433]:  </devices>
Dec  6 02:34:57 np0005548731 nova_compute[232433]: </domain>
Dec  6 02:34:57 np0005548731 nova_compute[232433]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec  6 02:34:57 np0005548731 nova_compute[232433]: 2025-12-06 07:34:57.109 232437 DEBUG nova.compute.manager [None req-29f199c3-439f-49ad-a6a7-ceafc9c9ff68 603967295b474367b2a605071d414bbf 2cfa2f2dfd6f4dc090f85672b1f24e68 - - default default] [instance: 3662ed68-91aa-4b97-8b12-5c87f99b02c9] Preparing to wait for external event network-vif-plugged-057f5adb-642a-4046-a07d-f33f8ab42b98 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec  6 02:34:57 np0005548731 nova_compute[232433]: 2025-12-06 07:34:57.109 232437 DEBUG oslo_concurrency.lockutils [None req-29f199c3-439f-49ad-a6a7-ceafc9c9ff68 603967295b474367b2a605071d414bbf 2cfa2f2dfd6f4dc090f85672b1f24e68 - - default default] Acquiring lock "3662ed68-91aa-4b97-8b12-5c87f99b02c9-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:34:57 np0005548731 nova_compute[232433]: 2025-12-06 07:34:57.109 232437 DEBUG oslo_concurrency.lockutils [None req-29f199c3-439f-49ad-a6a7-ceafc9c9ff68 603967295b474367b2a605071d414bbf 2cfa2f2dfd6f4dc090f85672b1f24e68 - - default default] Lock "3662ed68-91aa-4b97-8b12-5c87f99b02c9-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:34:57 np0005548731 nova_compute[232433]: 2025-12-06 07:34:57.110 232437 DEBUG oslo_concurrency.lockutils [None req-29f199c3-439f-49ad-a6a7-ceafc9c9ff68 603967295b474367b2a605071d414bbf 2cfa2f2dfd6f4dc090f85672b1f24e68 - - default default] Lock "3662ed68-91aa-4b97-8b12-5c87f99b02c9-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:34:57 np0005548731 nova_compute[232433]: 2025-12-06 07:34:57.110 232437 DEBUG nova.virt.libvirt.vif [None req-29f199c3-439f-49ad-a6a7-ceafc9c9ff68 603967295b474367b2a605071d414bbf 2cfa2f2dfd6f4dc090f85672b1f24e68 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T07:34:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-NoVNCConsoleTestJSON-server-1410038847',display_name='tempest-NoVNCConsoleTestJSON-server-1410038847',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-novncconsoletestjson-server-1410038847',id=126,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2cfa2f2dfd6f4dc090f85672b1f24e68',ramdisk_id='',reservation_id='r-7hnen40u',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-NoVNCConsoleTestJSON-726879140',owner_user_name='tempest-NoVNCConsoleTestJSON-726879140-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:34:48Z,user_data=None,user_id='603967295b474367b2a605071d414bbf',uuid=3662ed68-91aa-4b97-8b12-5c87f99b02c9,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "057f5adb-642a-4046-a07d-f33f8ab42b98", "address": "fa:16:3e:58:95:72", "network": {"id": "a17f6e7a-bed5-4778-a86b-02d7790910b2", "bridge": "br-int", "label": "tempest-NoVNCConsoleTestJSON-1529778336-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2cfa2f2dfd6f4dc090f85672b1f24e68", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap057f5adb-64", "ovs_interfaceid": "057f5adb-642a-4046-a07d-f33f8ab42b98", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec  6 02:34:57 np0005548731 nova_compute[232433]: 2025-12-06 07:34:57.111 232437 DEBUG nova.network.os_vif_util [None req-29f199c3-439f-49ad-a6a7-ceafc9c9ff68 603967295b474367b2a605071d414bbf 2cfa2f2dfd6f4dc090f85672b1f24e68 - - default default] Converting VIF {"id": "057f5adb-642a-4046-a07d-f33f8ab42b98", "address": "fa:16:3e:58:95:72", "network": {"id": "a17f6e7a-bed5-4778-a86b-02d7790910b2", "bridge": "br-int", "label": "tempest-NoVNCConsoleTestJSON-1529778336-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2cfa2f2dfd6f4dc090f85672b1f24e68", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap057f5adb-64", "ovs_interfaceid": "057f5adb-642a-4046-a07d-f33f8ab42b98", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 02:34:57 np0005548731 nova_compute[232433]: 2025-12-06 07:34:57.112 232437 DEBUG nova.network.os_vif_util [None req-29f199c3-439f-49ad-a6a7-ceafc9c9ff68 603967295b474367b2a605071d414bbf 2cfa2f2dfd6f4dc090f85672b1f24e68 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:58:95:72,bridge_name='br-int',has_traffic_filtering=True,id=057f5adb-642a-4046-a07d-f33f8ab42b98,network=Network(a17f6e7a-bed5-4778-a86b-02d7790910b2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap057f5adb-64') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 02:34:57 np0005548731 nova_compute[232433]: 2025-12-06 07:34:57.112 232437 DEBUG os_vif [None req-29f199c3-439f-49ad-a6a7-ceafc9c9ff68 603967295b474367b2a605071d414bbf 2cfa2f2dfd6f4dc090f85672b1f24e68 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:58:95:72,bridge_name='br-int',has_traffic_filtering=True,id=057f5adb-642a-4046-a07d-f33f8ab42b98,network=Network(a17f6e7a-bed5-4778-a86b-02d7790910b2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap057f5adb-64') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec  6 02:34:57 np0005548731 nova_compute[232433]: 2025-12-06 07:34:57.113 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:34:57 np0005548731 nova_compute[232433]: 2025-12-06 07:34:57.114 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:34:57 np0005548731 nova_compute[232433]: 2025-12-06 07:34:57.115 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  6 02:34:57 np0005548731 nova_compute[232433]: 2025-12-06 07:34:57.119 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:34:57 np0005548731 nova_compute[232433]: 2025-12-06 07:34:57.120 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap057f5adb-64, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:34:57 np0005548731 nova_compute[232433]: 2025-12-06 07:34:57.121 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap057f5adb-64, col_values=(('external_ids', {'iface-id': '057f5adb-642a-4046-a07d-f33f8ab42b98', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:58:95:72', 'vm-uuid': '3662ed68-91aa-4b97-8b12-5c87f99b02c9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:34:57 np0005548731 nova_compute[232433]: 2025-12-06 07:34:57.123 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:34:57 np0005548731 NetworkManager[49182]: <info>  [1765006497.1239] manager: (tap057f5adb-64): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/259)
Dec  6 02:34:57 np0005548731 nova_compute[232433]: 2025-12-06 07:34:57.125 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  6 02:34:57 np0005548731 nova_compute[232433]: 2025-12-06 07:34:57.130 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:34:57 np0005548731 nova_compute[232433]: 2025-12-06 07:34:57.132 232437 INFO os_vif [None req-29f199c3-439f-49ad-a6a7-ceafc9c9ff68 603967295b474367b2a605071d414bbf 2cfa2f2dfd6f4dc090f85672b1f24e68 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:58:95:72,bridge_name='br-int',has_traffic_filtering=True,id=057f5adb-642a-4046-a07d-f33f8ab42b98,network=Network(a17f6e7a-bed5-4778-a86b-02d7790910b2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap057f5adb-64')#033[00m
Dec  6 02:34:57 np0005548731 nova_compute[232433]: 2025-12-06 07:34:57.182 232437 DEBUG nova.virt.libvirt.driver [None req-29f199c3-439f-49ad-a6a7-ceafc9c9ff68 603967295b474367b2a605071d414bbf 2cfa2f2dfd6f4dc090f85672b1f24e68 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  6 02:34:57 np0005548731 nova_compute[232433]: 2025-12-06 07:34:57.183 232437 DEBUG nova.virt.libvirt.driver [None req-29f199c3-439f-49ad-a6a7-ceafc9c9ff68 603967295b474367b2a605071d414bbf 2cfa2f2dfd6f4dc090f85672b1f24e68 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  6 02:34:57 np0005548731 nova_compute[232433]: 2025-12-06 07:34:57.183 232437 DEBUG nova.virt.libvirt.driver [None req-29f199c3-439f-49ad-a6a7-ceafc9c9ff68 603967295b474367b2a605071d414bbf 2cfa2f2dfd6f4dc090f85672b1f24e68 - - default default] No VIF found with MAC fa:16:3e:58:95:72, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec  6 02:34:57 np0005548731 nova_compute[232433]: 2025-12-06 07:34:57.183 232437 INFO nova.virt.libvirt.driver [None req-29f199c3-439f-49ad-a6a7-ceafc9c9ff68 603967295b474367b2a605071d414bbf 2cfa2f2dfd6f4dc090f85672b1f24e68 - - default default] [instance: 3662ed68-91aa-4b97-8b12-5c87f99b02c9] Using config drive#033[00m
Dec  6 02:34:57 np0005548731 nova_compute[232433]: 2025-12-06 07:34:57.208 232437 DEBUG nova.storage.rbd_utils [None req-29f199c3-439f-49ad-a6a7-ceafc9c9ff68 603967295b474367b2a605071d414bbf 2cfa2f2dfd6f4dc090f85672b1f24e68 - - default default] rbd image 3662ed68-91aa-4b97-8b12-5c87f99b02c9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:34:57 np0005548731 nova_compute[232433]: 2025-12-06 07:34:57.545 232437 DEBUG nova.network.neutron [None req-29877d5d-e2cb-4da6-9142-8f5d2636bd64 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] [instance: b4be0ef8-945f-47a1-a3a8-5962f1e692e5] Successfully created port: 657a8a48-7a74-4b37-a294-df0b2fd9332c _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Dec  6 02:34:57 np0005548731 nova_compute[232433]: 2025-12-06 07:34:57.723 232437 INFO nova.virt.libvirt.driver [None req-29f199c3-439f-49ad-a6a7-ceafc9c9ff68 603967295b474367b2a605071d414bbf 2cfa2f2dfd6f4dc090f85672b1f24e68 - - default default] [instance: 3662ed68-91aa-4b97-8b12-5c87f99b02c9] Creating config drive at /var/lib/nova/instances/3662ed68-91aa-4b97-8b12-5c87f99b02c9/disk.config#033[00m
Dec  6 02:34:57 np0005548731 nova_compute[232433]: 2025-12-06 07:34:57.728 232437 DEBUG oslo_concurrency.processutils [None req-29f199c3-439f-49ad-a6a7-ceafc9c9ff68 603967295b474367b2a605071d414bbf 2cfa2f2dfd6f4dc090f85672b1f24e68 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/3662ed68-91aa-4b97-8b12-5c87f99b02c9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpzgywvuou execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:34:57 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:34:57 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:34:57 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:34:57.753 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:34:57 np0005548731 nova_compute[232433]: 2025-12-06 07:34:57.824 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:34:57 np0005548731 nova_compute[232433]: 2025-12-06 07:34:57.858 232437 DEBUG oslo_concurrency.processutils [None req-29f199c3-439f-49ad-a6a7-ceafc9c9ff68 603967295b474367b2a605071d414bbf 2cfa2f2dfd6f4dc090f85672b1f24e68 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/3662ed68-91aa-4b97-8b12-5c87f99b02c9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpzgywvuou" returned: 0 in 0.130s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:34:57 np0005548731 nova_compute[232433]: 2025-12-06 07:34:57.887 232437 DEBUG nova.storage.rbd_utils [None req-29f199c3-439f-49ad-a6a7-ceafc9c9ff68 603967295b474367b2a605071d414bbf 2cfa2f2dfd6f4dc090f85672b1f24e68 - - default default] rbd image 3662ed68-91aa-4b97-8b12-5c87f99b02c9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:34:57 np0005548731 nova_compute[232433]: 2025-12-06 07:34:57.890 232437 DEBUG oslo_concurrency.processutils [None req-29f199c3-439f-49ad-a6a7-ceafc9c9ff68 603967295b474367b2a605071d414bbf 2cfa2f2dfd6f4dc090f85672b1f24e68 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/3662ed68-91aa-4b97-8b12-5c87f99b02c9/disk.config 3662ed68-91aa-4b97-8b12-5c87f99b02c9_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:34:58 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e293 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:34:58 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:34:58 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:34:58 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:34:58.571 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:34:58 np0005548731 nova_compute[232433]: 2025-12-06 07:34:58.713 232437 DEBUG nova.network.neutron [None req-29877d5d-e2cb-4da6-9142-8f5d2636bd64 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] [instance: b4be0ef8-945f-47a1-a3a8-5962f1e692e5] Successfully updated port: 657a8a48-7a74-4b37-a294-df0b2fd9332c _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Dec  6 02:34:58 np0005548731 nova_compute[232433]: 2025-12-06 07:34:58.793 232437 DEBUG oslo_concurrency.lockutils [None req-29877d5d-e2cb-4da6-9142-8f5d2636bd64 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Acquiring lock "refresh_cache-b4be0ef8-945f-47a1-a3a8-5962f1e692e5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 02:34:58 np0005548731 nova_compute[232433]: 2025-12-06 07:34:58.793 232437 DEBUG oslo_concurrency.lockutils [None req-29877d5d-e2cb-4da6-9142-8f5d2636bd64 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Acquired lock "refresh_cache-b4be0ef8-945f-47a1-a3a8-5962f1e692e5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 02:34:58 np0005548731 nova_compute[232433]: 2025-12-06 07:34:58.793 232437 DEBUG nova.network.neutron [None req-29877d5d-e2cb-4da6-9142-8f5d2636bd64 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] [instance: b4be0ef8-945f-47a1-a3a8-5962f1e692e5] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec  6 02:34:58 np0005548731 nova_compute[232433]: 2025-12-06 07:34:58.866 232437 DEBUG nova.compute.manager [req-3f190a58-0d94-4d43-8200-927ad066eaf8 req-a179f6d1-f29a-41e0-8d10-05f280e40121 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: b4be0ef8-945f-47a1-a3a8-5962f1e692e5] Received event network-changed-657a8a48-7a74-4b37-a294-df0b2fd9332c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:34:58 np0005548731 nova_compute[232433]: 2025-12-06 07:34:58.866 232437 DEBUG nova.compute.manager [req-3f190a58-0d94-4d43-8200-927ad066eaf8 req-a179f6d1-f29a-41e0-8d10-05f280e40121 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: b4be0ef8-945f-47a1-a3a8-5962f1e692e5] Refreshing instance network info cache due to event network-changed-657a8a48-7a74-4b37-a294-df0b2fd9332c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  6 02:34:58 np0005548731 nova_compute[232433]: 2025-12-06 07:34:58.867 232437 DEBUG oslo_concurrency.lockutils [req-3f190a58-0d94-4d43-8200-927ad066eaf8 req-a179f6d1-f29a-41e0-8d10-05f280e40121 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-b4be0ef8-945f-47a1-a3a8-5962f1e692e5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 02:34:58 np0005548731 nova_compute[232433]: 2025-12-06 07:34:58.980 232437 DEBUG nova.network.neutron [None req-29877d5d-e2cb-4da6-9142-8f5d2636bd64 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] [instance: b4be0ef8-945f-47a1-a3a8-5962f1e692e5] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec  6 02:34:59 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:34:59 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:34:59 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:34:59.755 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:34:59 np0005548731 nova_compute[232433]: 2025-12-06 07:34:59.913 232437 DEBUG nova.network.neutron [None req-29877d5d-e2cb-4da6-9142-8f5d2636bd64 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] [instance: b4be0ef8-945f-47a1-a3a8-5962f1e692e5] Updating instance_info_cache with network_info: [{"id": "657a8a48-7a74-4b37-a294-df0b2fd9332c", "address": "fa:16:3e:f4:fe:4a", "network": {"id": "5bb49e8a-b939-4c79-851c-62c634be0272", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-50482264-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "833f4cf9f5a64b2ab94c3bf330353a31", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap657a8a48-7a", "ovs_interfaceid": "657a8a48-7a74-4b37-a294-df0b2fd9332c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 02:34:59 np0005548731 nova_compute[232433]: 2025-12-06 07:34:59.928 232437 DEBUG oslo_concurrency.lockutils [None req-29877d5d-e2cb-4da6-9142-8f5d2636bd64 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Releasing lock "refresh_cache-b4be0ef8-945f-47a1-a3a8-5962f1e692e5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 02:34:59 np0005548731 nova_compute[232433]: 2025-12-06 07:34:59.928 232437 DEBUG nova.compute.manager [None req-29877d5d-e2cb-4da6-9142-8f5d2636bd64 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] [instance: b4be0ef8-945f-47a1-a3a8-5962f1e692e5] Instance network_info: |[{"id": "657a8a48-7a74-4b37-a294-df0b2fd9332c", "address": "fa:16:3e:f4:fe:4a", "network": {"id": "5bb49e8a-b939-4c79-851c-62c634be0272", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-50482264-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "833f4cf9f5a64b2ab94c3bf330353a31", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap657a8a48-7a", "ovs_interfaceid": "657a8a48-7a74-4b37-a294-df0b2fd9332c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Dec  6 02:34:59 np0005548731 nova_compute[232433]: 2025-12-06 07:34:59.928 232437 DEBUG oslo_concurrency.lockutils [req-3f190a58-0d94-4d43-8200-927ad066eaf8 req-a179f6d1-f29a-41e0-8d10-05f280e40121 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-b4be0ef8-945f-47a1-a3a8-5962f1e692e5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 02:34:59 np0005548731 nova_compute[232433]: 2025-12-06 07:34:59.928 232437 DEBUG nova.network.neutron [req-3f190a58-0d94-4d43-8200-927ad066eaf8 req-a179f6d1-f29a-41e0-8d10-05f280e40121 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: b4be0ef8-945f-47a1-a3a8-5962f1e692e5] Refreshing network info cache for port 657a8a48-7a74-4b37-a294-df0b2fd9332c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  6 02:35:00 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:35:00 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:35:00 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:35:00.573 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:35:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:35:00.876 143965 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:35:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:35:00.877 143965 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:35:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:35:00.878 143965 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:35:01 np0005548731 nova_compute[232433]: 2025-12-06 07:35:01.146 232437 DEBUG nova.network.neutron [req-3f190a58-0d94-4d43-8200-927ad066eaf8 req-a179f6d1-f29a-41e0-8d10-05f280e40121 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: b4be0ef8-945f-47a1-a3a8-5962f1e692e5] Updated VIF entry in instance network info cache for port 657a8a48-7a74-4b37-a294-df0b2fd9332c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  6 02:35:01 np0005548731 nova_compute[232433]: 2025-12-06 07:35:01.147 232437 DEBUG nova.network.neutron [req-3f190a58-0d94-4d43-8200-927ad066eaf8 req-a179f6d1-f29a-41e0-8d10-05f280e40121 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: b4be0ef8-945f-47a1-a3a8-5962f1e692e5] Updating instance_info_cache with network_info: [{"id": "657a8a48-7a74-4b37-a294-df0b2fd9332c", "address": "fa:16:3e:f4:fe:4a", "network": {"id": "5bb49e8a-b939-4c79-851c-62c634be0272", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-50482264-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "833f4cf9f5a64b2ab94c3bf330353a31", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap657a8a48-7a", "ovs_interfaceid": "657a8a48-7a74-4b37-a294-df0b2fd9332c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 02:35:01 np0005548731 nova_compute[232433]: 2025-12-06 07:35:01.166 232437 DEBUG oslo_concurrency.lockutils [req-3f190a58-0d94-4d43-8200-927ad066eaf8 req-a179f6d1-f29a-41e0-8d10-05f280e40121 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-b4be0ef8-945f-47a1-a3a8-5962f1e692e5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 02:35:01 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:35:01 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:35:01 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:35:01.758 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:35:02 np0005548731 nova_compute[232433]: 2025-12-06 07:35:02.124 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:35:02 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:35:02 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:35:02 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:35:02.575 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:35:02 np0005548731 nova_compute[232433]: 2025-12-06 07:35:02.827 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:35:03 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e293 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:35:03 np0005548731 nova_compute[232433]: 2025-12-06 07:35:03.587 232437 DEBUG oslo_concurrency.processutils [None req-29877d5d-e2cb-4da6-9142-8f5d2636bd64 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef b4be0ef8-945f-47a1-a3a8-5962f1e692e5_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 7.220s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:35:03 np0005548731 nova_compute[232433]: 2025-12-06 07:35:03.656 232437 DEBUG nova.storage.rbd_utils [None req-29877d5d-e2cb-4da6-9142-8f5d2636bd64 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] resizing rbd image b4be0ef8-945f-47a1-a3a8-5962f1e692e5_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Dec  6 02:35:03 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:35:03 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 02:35:03 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:35:03.760 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 02:35:04 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:35:04 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:35:04 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:35:04.578 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:35:04 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  6 02:35:05 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:35:05 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:35:05 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:35:05.764 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:35:06 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:35:06 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:35:06 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:35:06.580 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:35:07 np0005548731 nova_compute[232433]: 2025-12-06 07:35:07.105 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:35:07 np0005548731 nova_compute[232433]: 2025-12-06 07:35:07.106 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  6 02:35:07 np0005548731 nova_compute[232433]: 2025-12-06 07:35:07.127 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:35:07 np0005548731 nova_compute[232433]: 2025-12-06 07:35:07.225 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Dec  6 02:35:07 np0005548731 nova_compute[232433]: 2025-12-06 07:35:07.225 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:35:07 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:35:07 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:35:07 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:35:07.766 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:35:07 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 02:35:07 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  6 02:35:07 np0005548731 nova_compute[232433]: 2025-12-06 07:35:07.828 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:35:08 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e293 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:35:08 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:35:08 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:35:08 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:35:08.584 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:35:09 np0005548731 nova_compute[232433]: 2025-12-06 07:35:09.221 232437 DEBUG nova.objects.instance [None req-29877d5d-e2cb-4da6-9142-8f5d2636bd64 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Lazy-loading 'migration_context' on Instance uuid b4be0ef8-945f-47a1-a3a8-5962f1e692e5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:35:09 np0005548731 nova_compute[232433]: 2025-12-06 07:35:09.234 232437 DEBUG nova.virt.libvirt.driver [None req-29877d5d-e2cb-4da6-9142-8f5d2636bd64 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] [instance: b4be0ef8-945f-47a1-a3a8-5962f1e692e5] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Dec  6 02:35:09 np0005548731 nova_compute[232433]: 2025-12-06 07:35:09.235 232437 DEBUG nova.virt.libvirt.driver [None req-29877d5d-e2cb-4da6-9142-8f5d2636bd64 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] [instance: b4be0ef8-945f-47a1-a3a8-5962f1e692e5] Ensure instance console log exists: /var/lib/nova/instances/b4be0ef8-945f-47a1-a3a8-5962f1e692e5/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec  6 02:35:09 np0005548731 nova_compute[232433]: 2025-12-06 07:35:09.235 232437 DEBUG oslo_concurrency.lockutils [None req-29877d5d-e2cb-4da6-9142-8f5d2636bd64 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:35:09 np0005548731 nova_compute[232433]: 2025-12-06 07:35:09.235 232437 DEBUG oslo_concurrency.lockutils [None req-29877d5d-e2cb-4da6-9142-8f5d2636bd64 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:35:09 np0005548731 nova_compute[232433]: 2025-12-06 07:35:09.236 232437 DEBUG oslo_concurrency.lockutils [None req-29877d5d-e2cb-4da6-9142-8f5d2636bd64 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:35:09 np0005548731 nova_compute[232433]: 2025-12-06 07:35:09.238 232437 DEBUG nova.virt.libvirt.driver [None req-29877d5d-e2cb-4da6-9142-8f5d2636bd64 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] [instance: b4be0ef8-945f-47a1-a3a8-5962f1e692e5] Start _get_guest_xml network_info=[{"id": "657a8a48-7a74-4b37-a294-df0b2fd9332c", "address": "fa:16:3e:f4:fe:4a", "network": {"id": "5bb49e8a-b939-4c79-851c-62c634be0272", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-50482264-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "833f4cf9f5a64b2ab94c3bf330353a31", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap657a8a48-7a", "ovs_interfaceid": "657a8a48-7a74-4b37-a294-df0b2fd9332c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'device_type': 'disk', 'encryption_secret_uuid': None, 'disk_bus': 'virtio', 'boot_index': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'encryption_options': None, 'size': 0, 'encrypted': False, 'image_id': '6efab05d-c7cf-4770-a5c3-c806a2739063'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec  6 02:35:09 np0005548731 nova_compute[232433]: 2025-12-06 07:35:09.242 232437 WARNING nova.virt.libvirt.driver [None req-29877d5d-e2cb-4da6-9142-8f5d2636bd64 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  6 02:35:09 np0005548731 nova_compute[232433]: 2025-12-06 07:35:09.248 232437 DEBUG nova.virt.libvirt.host [None req-29877d5d-e2cb-4da6-9142-8f5d2636bd64 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec  6 02:35:09 np0005548731 nova_compute[232433]: 2025-12-06 07:35:09.249 232437 DEBUG nova.virt.libvirt.host [None req-29877d5d-e2cb-4da6-9142-8f5d2636bd64 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec  6 02:35:09 np0005548731 nova_compute[232433]: 2025-12-06 07:35:09.254 232437 DEBUG nova.virt.libvirt.host [None req-29877d5d-e2cb-4da6-9142-8f5d2636bd64 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec  6 02:35:09 np0005548731 nova_compute[232433]: 2025-12-06 07:35:09.254 232437 DEBUG nova.virt.libvirt.host [None req-29877d5d-e2cb-4da6-9142-8f5d2636bd64 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec  6 02:35:09 np0005548731 nova_compute[232433]: 2025-12-06 07:35:09.255 232437 DEBUG nova.virt.libvirt.driver [None req-29877d5d-e2cb-4da6-9142-8f5d2636bd64 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec  6 02:35:09 np0005548731 nova_compute[232433]: 2025-12-06 07:35:09.255 232437 DEBUG nova.virt.hardware [None req-29877d5d-e2cb-4da6-9142-8f5d2636bd64 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-06T06:56:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='25848a18-11d9-4f11-80b5-5d005675c76d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec  6 02:35:09 np0005548731 nova_compute[232433]: 2025-12-06 07:35:09.256 232437 DEBUG nova.virt.hardware [None req-29877d5d-e2cb-4da6-9142-8f5d2636bd64 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec  6 02:35:09 np0005548731 nova_compute[232433]: 2025-12-06 07:35:09.256 232437 DEBUG nova.virt.hardware [None req-29877d5d-e2cb-4da6-9142-8f5d2636bd64 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec  6 02:35:09 np0005548731 nova_compute[232433]: 2025-12-06 07:35:09.256 232437 DEBUG nova.virt.hardware [None req-29877d5d-e2cb-4da6-9142-8f5d2636bd64 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec  6 02:35:09 np0005548731 nova_compute[232433]: 2025-12-06 07:35:09.256 232437 DEBUG nova.virt.hardware [None req-29877d5d-e2cb-4da6-9142-8f5d2636bd64 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec  6 02:35:09 np0005548731 nova_compute[232433]: 2025-12-06 07:35:09.256 232437 DEBUG nova.virt.hardware [None req-29877d5d-e2cb-4da6-9142-8f5d2636bd64 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec  6 02:35:09 np0005548731 nova_compute[232433]: 2025-12-06 07:35:09.257 232437 DEBUG nova.virt.hardware [None req-29877d5d-e2cb-4da6-9142-8f5d2636bd64 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec  6 02:35:09 np0005548731 nova_compute[232433]: 2025-12-06 07:35:09.257 232437 DEBUG nova.virt.hardware [None req-29877d5d-e2cb-4da6-9142-8f5d2636bd64 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec  6 02:35:09 np0005548731 nova_compute[232433]: 2025-12-06 07:35:09.257 232437 DEBUG nova.virt.hardware [None req-29877d5d-e2cb-4da6-9142-8f5d2636bd64 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec  6 02:35:09 np0005548731 nova_compute[232433]: 2025-12-06 07:35:09.257 232437 DEBUG nova.virt.hardware [None req-29877d5d-e2cb-4da6-9142-8f5d2636bd64 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec  6 02:35:09 np0005548731 nova_compute[232433]: 2025-12-06 07:35:09.258 232437 DEBUG nova.virt.hardware [None req-29877d5d-e2cb-4da6-9142-8f5d2636bd64 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec  6 02:35:09 np0005548731 nova_compute[232433]: 2025-12-06 07:35:09.260 232437 DEBUG oslo_concurrency.processutils [None req-29877d5d-e2cb-4da6-9142-8f5d2636bd64 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:35:09 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  6 02:35:09 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3238803652' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  6 02:35:09 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:35:09 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:35:09 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:35:09.768 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:35:10 np0005548731 nova_compute[232433]: 2025-12-06 07:35:10.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:35:10 np0005548731 nova_compute[232433]: 2025-12-06 07:35:10.105 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:35:10 np0005548731 nova_compute[232433]: 2025-12-06 07:35:10.105 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  6 02:35:10 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  6 02:35:10 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3009168779' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  6 02:35:10 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  6 02:35:10 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3009168779' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  6 02:35:10 np0005548731 nova_compute[232433]: 2025-12-06 07:35:10.211 232437 DEBUG oslo_concurrency.processutils [None req-29877d5d-e2cb-4da6-9142-8f5d2636bd64 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.951s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:35:10 np0005548731 nova_compute[232433]: 2025-12-06 07:35:10.233 232437 DEBUG nova.storage.rbd_utils [None req-29877d5d-e2cb-4da6-9142-8f5d2636bd64 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] rbd image b4be0ef8-945f-47a1-a3a8-5962f1e692e5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:35:10 np0005548731 nova_compute[232433]: 2025-12-06 07:35:10.236 232437 DEBUG oslo_concurrency.processutils [None req-29877d5d-e2cb-4da6-9142-8f5d2636bd64 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:35:10 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:35:10 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:35:10 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:35:10.587 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:35:10 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  6 02:35:10 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1980977133' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  6 02:35:11 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:35:11 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:35:11 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:35:11.771 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:35:12 np0005548731 nova_compute[232433]: 2025-12-06 07:35:12.130 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:35:12 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:35:12 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:35:12 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:35:12.590 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:35:12 np0005548731 nova_compute[232433]: 2025-12-06 07:35:12.752 232437 DEBUG oslo_concurrency.processutils [None req-29877d5d-e2cb-4da6-9142-8f5d2636bd64 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.516s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:35:12 np0005548731 nova_compute[232433]: 2025-12-06 07:35:12.754 232437 DEBUG nova.virt.libvirt.vif [None req-29877d5d-e2cb-4da6-9142-8f5d2636bd64 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T07:34:53Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='multiattach-server-0',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='multiattach-server-0',id=127,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJvvXUJX6GT0XQikpBKT/pWa9/7+F48wLkdnGcJYsqojmErT+oUc0gEHpXW8ulxQp5/Qun0IejqspzMhFiBiIMspmuXC7WiBfiNlH7z/XH9UjP9DXKhc6lZtmV9q0VvTnQ==',key_name='tempest-keypair-1449411209',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='833f4cf9f5a64b2ab94c3bf330353a31',ramdisk_id='',reservation_id='r-nhxynips',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',net
work_allocated='True',owner_project_name='tempest-AttachVolumeMultiAttachTest-690984293',owner_user_name='tempest-AttachVolumeMultiAttachTest-690984293-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:34:56Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='605b5481e0c944048e6a67046c30d693',uuid=b4be0ef8-945f-47a1-a3a8-5962f1e692e5,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "657a8a48-7a74-4b37-a294-df0b2fd9332c", "address": "fa:16:3e:f4:fe:4a", "network": {"id": "5bb49e8a-b939-4c79-851c-62c634be0272", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-50482264-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "833f4cf9f5a64b2ab94c3bf330353a31", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap657a8a48-7a", "ovs_interfaceid": "657a8a48-7a74-4b37-a294-df0b2fd9332c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec  6 02:35:12 np0005548731 nova_compute[232433]: 2025-12-06 07:35:12.754 232437 DEBUG nova.network.os_vif_util [None req-29877d5d-e2cb-4da6-9142-8f5d2636bd64 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Converting VIF {"id": "657a8a48-7a74-4b37-a294-df0b2fd9332c", "address": "fa:16:3e:f4:fe:4a", "network": {"id": "5bb49e8a-b939-4c79-851c-62c634be0272", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-50482264-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "833f4cf9f5a64b2ab94c3bf330353a31", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap657a8a48-7a", "ovs_interfaceid": "657a8a48-7a74-4b37-a294-df0b2fd9332c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 02:35:12 np0005548731 nova_compute[232433]: 2025-12-06 07:35:12.755 232437 DEBUG nova.network.os_vif_util [None req-29877d5d-e2cb-4da6-9142-8f5d2636bd64 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f4:fe:4a,bridge_name='br-int',has_traffic_filtering=True,id=657a8a48-7a74-4b37-a294-df0b2fd9332c,network=Network(5bb49e8a-b939-4c79-851c-62c634be0272),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap657a8a48-7a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 02:35:12 np0005548731 nova_compute[232433]: 2025-12-06 07:35:12.756 232437 DEBUG nova.objects.instance [None req-29877d5d-e2cb-4da6-9142-8f5d2636bd64 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Lazy-loading 'pci_devices' on Instance uuid b4be0ef8-945f-47a1-a3a8-5962f1e692e5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:35:12 np0005548731 nova_compute[232433]: 2025-12-06 07:35:12.772 232437 DEBUG nova.virt.libvirt.driver [None req-29877d5d-e2cb-4da6-9142-8f5d2636bd64 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] [instance: b4be0ef8-945f-47a1-a3a8-5962f1e692e5] End _get_guest_xml xml=<domain type="kvm">
Dec  6 02:35:12 np0005548731 nova_compute[232433]:  <uuid>b4be0ef8-945f-47a1-a3a8-5962f1e692e5</uuid>
Dec  6 02:35:12 np0005548731 nova_compute[232433]:  <name>instance-0000007f</name>
Dec  6 02:35:12 np0005548731 nova_compute[232433]:  <memory>131072</memory>
Dec  6 02:35:12 np0005548731 nova_compute[232433]:  <vcpu>1</vcpu>
Dec  6 02:35:12 np0005548731 nova_compute[232433]:  <metadata>
Dec  6 02:35:12 np0005548731 nova_compute[232433]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec  6 02:35:12 np0005548731 nova_compute[232433]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec  6 02:35:12 np0005548731 nova_compute[232433]:      <nova:name>multiattach-server-0</nova:name>
Dec  6 02:35:12 np0005548731 nova_compute[232433]:      <nova:creationTime>2025-12-06 07:35:09</nova:creationTime>
Dec  6 02:35:12 np0005548731 nova_compute[232433]:      <nova:flavor name="m1.nano">
Dec  6 02:35:12 np0005548731 nova_compute[232433]:        <nova:memory>128</nova:memory>
Dec  6 02:35:12 np0005548731 nova_compute[232433]:        <nova:disk>1</nova:disk>
Dec  6 02:35:12 np0005548731 nova_compute[232433]:        <nova:swap>0</nova:swap>
Dec  6 02:35:12 np0005548731 nova_compute[232433]:        <nova:ephemeral>0</nova:ephemeral>
Dec  6 02:35:12 np0005548731 nova_compute[232433]:        <nova:vcpus>1</nova:vcpus>
Dec  6 02:35:12 np0005548731 nova_compute[232433]:      </nova:flavor>
Dec  6 02:35:12 np0005548731 nova_compute[232433]:      <nova:owner>
Dec  6 02:35:12 np0005548731 nova_compute[232433]:        <nova:user uuid="605b5481e0c944048e6a67046c30d693">tempest-AttachVolumeMultiAttachTest-690984293-project-member</nova:user>
Dec  6 02:35:12 np0005548731 nova_compute[232433]:        <nova:project uuid="833f4cf9f5a64b2ab94c3bf330353a31">tempest-AttachVolumeMultiAttachTest-690984293</nova:project>
Dec  6 02:35:12 np0005548731 nova_compute[232433]:      </nova:owner>
Dec  6 02:35:12 np0005548731 nova_compute[232433]:      <nova:root type="image" uuid="6efab05d-c7cf-4770-a5c3-c806a2739063"/>
Dec  6 02:35:12 np0005548731 nova_compute[232433]:      <nova:ports>
Dec  6 02:35:12 np0005548731 nova_compute[232433]:        <nova:port uuid="657a8a48-7a74-4b37-a294-df0b2fd9332c">
Dec  6 02:35:12 np0005548731 nova_compute[232433]:          <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Dec  6 02:35:12 np0005548731 nova_compute[232433]:        </nova:port>
Dec  6 02:35:12 np0005548731 nova_compute[232433]:      </nova:ports>
Dec  6 02:35:12 np0005548731 nova_compute[232433]:    </nova:instance>
Dec  6 02:35:12 np0005548731 nova_compute[232433]:  </metadata>
Dec  6 02:35:12 np0005548731 nova_compute[232433]:  <sysinfo type="smbios">
Dec  6 02:35:12 np0005548731 nova_compute[232433]:    <system>
Dec  6 02:35:12 np0005548731 nova_compute[232433]:      <entry name="manufacturer">RDO</entry>
Dec  6 02:35:12 np0005548731 nova_compute[232433]:      <entry name="product">OpenStack Compute</entry>
Dec  6 02:35:12 np0005548731 nova_compute[232433]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec  6 02:35:12 np0005548731 nova_compute[232433]:      <entry name="serial">b4be0ef8-945f-47a1-a3a8-5962f1e692e5</entry>
Dec  6 02:35:12 np0005548731 nova_compute[232433]:      <entry name="uuid">b4be0ef8-945f-47a1-a3a8-5962f1e692e5</entry>
Dec  6 02:35:12 np0005548731 nova_compute[232433]:      <entry name="family">Virtual Machine</entry>
Dec  6 02:35:12 np0005548731 nova_compute[232433]:    </system>
Dec  6 02:35:12 np0005548731 nova_compute[232433]:  </sysinfo>
Dec  6 02:35:12 np0005548731 nova_compute[232433]:  <os>
Dec  6 02:35:12 np0005548731 nova_compute[232433]:    <type arch="x86_64" machine="q35">hvm</type>
Dec  6 02:35:12 np0005548731 nova_compute[232433]:    <boot dev="hd"/>
Dec  6 02:35:12 np0005548731 nova_compute[232433]:    <smbios mode="sysinfo"/>
Dec  6 02:35:12 np0005548731 nova_compute[232433]:  </os>
Dec  6 02:35:12 np0005548731 nova_compute[232433]:  <features>
Dec  6 02:35:12 np0005548731 nova_compute[232433]:    <acpi/>
Dec  6 02:35:12 np0005548731 nova_compute[232433]:    <apic/>
Dec  6 02:35:12 np0005548731 nova_compute[232433]:    <vmcoreinfo/>
Dec  6 02:35:12 np0005548731 nova_compute[232433]:  </features>
Dec  6 02:35:12 np0005548731 nova_compute[232433]:  <clock offset="utc">
Dec  6 02:35:12 np0005548731 nova_compute[232433]:    <timer name="pit" tickpolicy="delay"/>
Dec  6 02:35:12 np0005548731 nova_compute[232433]:    <timer name="rtc" tickpolicy="catchup"/>
Dec  6 02:35:12 np0005548731 nova_compute[232433]:    <timer name="hpet" present="no"/>
Dec  6 02:35:12 np0005548731 nova_compute[232433]:  </clock>
Dec  6 02:35:12 np0005548731 nova_compute[232433]:  <cpu mode="custom" match="exact">
Dec  6 02:35:12 np0005548731 nova_compute[232433]:    <model>Nehalem</model>
Dec  6 02:35:12 np0005548731 nova_compute[232433]:    <topology sockets="1" cores="1" threads="1"/>
Dec  6 02:35:12 np0005548731 nova_compute[232433]:  </cpu>
Dec  6 02:35:12 np0005548731 nova_compute[232433]:  <devices>
Dec  6 02:35:12 np0005548731 nova_compute[232433]:    <disk type="network" device="disk">
Dec  6 02:35:12 np0005548731 nova_compute[232433]:      <driver type="raw" cache="none"/>
Dec  6 02:35:12 np0005548731 nova_compute[232433]:      <source protocol="rbd" name="vms/b4be0ef8-945f-47a1-a3a8-5962f1e692e5_disk">
Dec  6 02:35:12 np0005548731 nova_compute[232433]:        <host name="192.168.122.100" port="6789"/>
Dec  6 02:35:12 np0005548731 nova_compute[232433]:        <host name="192.168.122.102" port="6789"/>
Dec  6 02:35:12 np0005548731 nova_compute[232433]:        <host name="192.168.122.101" port="6789"/>
Dec  6 02:35:12 np0005548731 nova_compute[232433]:      </source>
Dec  6 02:35:12 np0005548731 nova_compute[232433]:      <auth username="openstack">
Dec  6 02:35:12 np0005548731 nova_compute[232433]:        <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec  6 02:35:12 np0005548731 nova_compute[232433]:      </auth>
Dec  6 02:35:12 np0005548731 nova_compute[232433]:      <target dev="vda" bus="virtio"/>
Dec  6 02:35:12 np0005548731 nova_compute[232433]:    </disk>
Dec  6 02:35:12 np0005548731 nova_compute[232433]:    <disk type="network" device="cdrom">
Dec  6 02:35:12 np0005548731 nova_compute[232433]:      <driver type="raw" cache="none"/>
Dec  6 02:35:12 np0005548731 nova_compute[232433]:      <source protocol="rbd" name="vms/b4be0ef8-945f-47a1-a3a8-5962f1e692e5_disk.config">
Dec  6 02:35:12 np0005548731 nova_compute[232433]:        <host name="192.168.122.100" port="6789"/>
Dec  6 02:35:12 np0005548731 nova_compute[232433]:        <host name="192.168.122.102" port="6789"/>
Dec  6 02:35:12 np0005548731 nova_compute[232433]:        <host name="192.168.122.101" port="6789"/>
Dec  6 02:35:12 np0005548731 nova_compute[232433]:      </source>
Dec  6 02:35:12 np0005548731 nova_compute[232433]:      <auth username="openstack">
Dec  6 02:35:12 np0005548731 nova_compute[232433]:        <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec  6 02:35:12 np0005548731 nova_compute[232433]:      </auth>
Dec  6 02:35:12 np0005548731 nova_compute[232433]:      <target dev="sda" bus="sata"/>
Dec  6 02:35:12 np0005548731 nova_compute[232433]:    </disk>
Dec  6 02:35:12 np0005548731 nova_compute[232433]:    <interface type="ethernet">
Dec  6 02:35:12 np0005548731 nova_compute[232433]:      <mac address="fa:16:3e:f4:fe:4a"/>
Dec  6 02:35:12 np0005548731 nova_compute[232433]:      <model type="virtio"/>
Dec  6 02:35:12 np0005548731 nova_compute[232433]:      <driver name="vhost" rx_queue_size="512"/>
Dec  6 02:35:12 np0005548731 nova_compute[232433]:      <mtu size="1442"/>
Dec  6 02:35:12 np0005548731 nova_compute[232433]:      <target dev="tap657a8a48-7a"/>
Dec  6 02:35:12 np0005548731 nova_compute[232433]:    </interface>
Dec  6 02:35:12 np0005548731 nova_compute[232433]:    <serial type="pty">
Dec  6 02:35:12 np0005548731 nova_compute[232433]:      <log file="/var/lib/nova/instances/b4be0ef8-945f-47a1-a3a8-5962f1e692e5/console.log" append="off"/>
Dec  6 02:35:12 np0005548731 nova_compute[232433]:    </serial>
Dec  6 02:35:12 np0005548731 nova_compute[232433]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec  6 02:35:12 np0005548731 nova_compute[232433]:    <video>
Dec  6 02:35:12 np0005548731 nova_compute[232433]:      <model type="virtio"/>
Dec  6 02:35:12 np0005548731 nova_compute[232433]:    </video>
Dec  6 02:35:12 np0005548731 nova_compute[232433]:    <input type="tablet" bus="usb"/>
Dec  6 02:35:12 np0005548731 nova_compute[232433]:    <rng model="virtio">
Dec  6 02:35:12 np0005548731 nova_compute[232433]:      <backend model="random">/dev/urandom</backend>
Dec  6 02:35:12 np0005548731 nova_compute[232433]:    </rng>
Dec  6 02:35:12 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root"/>
Dec  6 02:35:12 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:35:12 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:35:12 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:35:12 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:35:12 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:35:12 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:35:12 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:35:12 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:35:12 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:35:12 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:35:12 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:35:12 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:35:12 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:35:12 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:35:12 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:35:12 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:35:12 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:35:12 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:35:12 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:35:12 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:35:12 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:35:12 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:35:12 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:35:12 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:35:12 np0005548731 nova_compute[232433]:    <controller type="usb" index="0"/>
Dec  6 02:35:12 np0005548731 nova_compute[232433]:    <memballoon model="virtio">
Dec  6 02:35:12 np0005548731 nova_compute[232433]:      <stats period="10"/>
Dec  6 02:35:12 np0005548731 nova_compute[232433]:    </memballoon>
Dec  6 02:35:12 np0005548731 nova_compute[232433]:  </devices>
Dec  6 02:35:12 np0005548731 nova_compute[232433]: </domain>
Dec  6 02:35:12 np0005548731 nova_compute[232433]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec  6 02:35:12 np0005548731 nova_compute[232433]: 2025-12-06 07:35:12.773 232437 DEBUG nova.compute.manager [None req-29877d5d-e2cb-4da6-9142-8f5d2636bd64 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] [instance: b4be0ef8-945f-47a1-a3a8-5962f1e692e5] Preparing to wait for external event network-vif-plugged-657a8a48-7a74-4b37-a294-df0b2fd9332c prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec  6 02:35:12 np0005548731 nova_compute[232433]: 2025-12-06 07:35:12.773 232437 DEBUG oslo_concurrency.lockutils [None req-29877d5d-e2cb-4da6-9142-8f5d2636bd64 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Acquiring lock "b4be0ef8-945f-47a1-a3a8-5962f1e692e5-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:35:12 np0005548731 nova_compute[232433]: 2025-12-06 07:35:12.773 232437 DEBUG oslo_concurrency.lockutils [None req-29877d5d-e2cb-4da6-9142-8f5d2636bd64 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Lock "b4be0ef8-945f-47a1-a3a8-5962f1e692e5-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:35:12 np0005548731 nova_compute[232433]: 2025-12-06 07:35:12.773 232437 DEBUG oslo_concurrency.lockutils [None req-29877d5d-e2cb-4da6-9142-8f5d2636bd64 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Lock "b4be0ef8-945f-47a1-a3a8-5962f1e692e5-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:35:12 np0005548731 nova_compute[232433]: 2025-12-06 07:35:12.774 232437 DEBUG nova.virt.libvirt.vif [None req-29877d5d-e2cb-4da6-9142-8f5d2636bd64 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T07:34:53Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='multiattach-server-0',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='multiattach-server-0',id=127,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJvvXUJX6GT0XQikpBKT/pWa9/7+F48wLkdnGcJYsqojmErT+oUc0gEHpXW8ulxQp5/Qun0IejqspzMhFiBiIMspmuXC7WiBfiNlH7z/XH9UjP9DXKhc6lZtmV9q0VvTnQ==',key_name='tempest-keypair-1449411209',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='833f4cf9f5a64b2ab94c3bf330353a31',ramdisk_id='',reservation_id='r-nhxynips',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_r
am='0',network_allocated='True',owner_project_name='tempest-AttachVolumeMultiAttachTest-690984293',owner_user_name='tempest-AttachVolumeMultiAttachTest-690984293-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:34:56Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='605b5481e0c944048e6a67046c30d693',uuid=b4be0ef8-945f-47a1-a3a8-5962f1e692e5,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "657a8a48-7a74-4b37-a294-df0b2fd9332c", "address": "fa:16:3e:f4:fe:4a", "network": {"id": "5bb49e8a-b939-4c79-851c-62c634be0272", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-50482264-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "833f4cf9f5a64b2ab94c3bf330353a31", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap657a8a48-7a", "ovs_interfaceid": "657a8a48-7a74-4b37-a294-df0b2fd9332c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec  6 02:35:12 np0005548731 nova_compute[232433]: 2025-12-06 07:35:12.774 232437 DEBUG nova.network.os_vif_util [None req-29877d5d-e2cb-4da6-9142-8f5d2636bd64 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Converting VIF {"id": "657a8a48-7a74-4b37-a294-df0b2fd9332c", "address": "fa:16:3e:f4:fe:4a", "network": {"id": "5bb49e8a-b939-4c79-851c-62c634be0272", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-50482264-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "833f4cf9f5a64b2ab94c3bf330353a31", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap657a8a48-7a", "ovs_interfaceid": "657a8a48-7a74-4b37-a294-df0b2fd9332c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 02:35:12 np0005548731 nova_compute[232433]: 2025-12-06 07:35:12.775 232437 DEBUG nova.network.os_vif_util [None req-29877d5d-e2cb-4da6-9142-8f5d2636bd64 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f4:fe:4a,bridge_name='br-int',has_traffic_filtering=True,id=657a8a48-7a74-4b37-a294-df0b2fd9332c,network=Network(5bb49e8a-b939-4c79-851c-62c634be0272),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap657a8a48-7a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 02:35:12 np0005548731 nova_compute[232433]: 2025-12-06 07:35:12.775 232437 DEBUG os_vif [None req-29877d5d-e2cb-4da6-9142-8f5d2636bd64 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:f4:fe:4a,bridge_name='br-int',has_traffic_filtering=True,id=657a8a48-7a74-4b37-a294-df0b2fd9332c,network=Network(5bb49e8a-b939-4c79-851c-62c634be0272),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap657a8a48-7a') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec  6 02:35:12 np0005548731 nova_compute[232433]: 2025-12-06 07:35:12.776 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:35:12 np0005548731 nova_compute[232433]: 2025-12-06 07:35:12.776 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:35:12 np0005548731 nova_compute[232433]: 2025-12-06 07:35:12.777 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  6 02:35:12 np0005548731 nova_compute[232433]: 2025-12-06 07:35:12.779 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:35:12 np0005548731 nova_compute[232433]: 2025-12-06 07:35:12.779 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap657a8a48-7a, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:35:12 np0005548731 nova_compute[232433]: 2025-12-06 07:35:12.779 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap657a8a48-7a, col_values=(('external_ids', {'iface-id': '657a8a48-7a74-4b37-a294-df0b2fd9332c', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:f4:fe:4a', 'vm-uuid': 'b4be0ef8-945f-47a1-a3a8-5962f1e692e5'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:35:12 np0005548731 nova_compute[232433]: 2025-12-06 07:35:12.781 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:35:12 np0005548731 NetworkManager[49182]: <info>  [1765006512.7824] manager: (tap657a8a48-7a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/260)
Dec  6 02:35:12 np0005548731 nova_compute[232433]: 2025-12-06 07:35:12.784 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  6 02:35:12 np0005548731 nova_compute[232433]: 2025-12-06 07:35:12.788 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:35:12 np0005548731 nova_compute[232433]: 2025-12-06 07:35:12.789 232437 INFO os_vif [None req-29877d5d-e2cb-4da6-9142-8f5d2636bd64 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:f4:fe:4a,bridge_name='br-int',has_traffic_filtering=True,id=657a8a48-7a74-4b37-a294-df0b2fd9332c,network=Network(5bb49e8a-b939-4c79-851c-62c634be0272),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap657a8a48-7a')#033[00m
Dec  6 02:35:12 np0005548731 nova_compute[232433]: 2025-12-06 07:35:12.831 232437 DEBUG nova.virt.libvirt.driver [None req-29877d5d-e2cb-4da6-9142-8f5d2636bd64 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  6 02:35:12 np0005548731 nova_compute[232433]: 2025-12-06 07:35:12.831 232437 DEBUG nova.virt.libvirt.driver [None req-29877d5d-e2cb-4da6-9142-8f5d2636bd64 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  6 02:35:12 np0005548731 nova_compute[232433]: 2025-12-06 07:35:12.831 232437 DEBUG nova.virt.libvirt.driver [None req-29877d5d-e2cb-4da6-9142-8f5d2636bd64 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] No VIF found with MAC fa:16:3e:f4:fe:4a, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec  6 02:35:12 np0005548731 nova_compute[232433]: 2025-12-06 07:35:12.832 232437 INFO nova.virt.libvirt.driver [None req-29877d5d-e2cb-4da6-9142-8f5d2636bd64 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] [instance: b4be0ef8-945f-47a1-a3a8-5962f1e692e5] Using config drive#033[00m
Dec  6 02:35:12 np0005548731 nova_compute[232433]: 2025-12-06 07:35:12.854 232437 DEBUG nova.storage.rbd_utils [None req-29877d5d-e2cb-4da6-9142-8f5d2636bd64 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] rbd image b4be0ef8-945f-47a1-a3a8-5962f1e692e5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:35:12 np0005548731 nova_compute[232433]: 2025-12-06 07:35:12.859 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:35:13 np0005548731 nova_compute[232433]: 2025-12-06 07:35:13.209 232437 DEBUG oslo_concurrency.processutils [None req-29f199c3-439f-49ad-a6a7-ceafc9c9ff68 603967295b474367b2a605071d414bbf 2cfa2f2dfd6f4dc090f85672b1f24e68 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/3662ed68-91aa-4b97-8b12-5c87f99b02c9/disk.config 3662ed68-91aa-4b97-8b12-5c87f99b02c9_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 15.318s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:35:13 np0005548731 nova_compute[232433]: 2025-12-06 07:35:13.209 232437 INFO nova.virt.libvirt.driver [None req-29f199c3-439f-49ad-a6a7-ceafc9c9ff68 603967295b474367b2a605071d414bbf 2cfa2f2dfd6f4dc090f85672b1f24e68 - - default default] [instance: 3662ed68-91aa-4b97-8b12-5c87f99b02c9] Deleting local config drive /var/lib/nova/instances/3662ed68-91aa-4b97-8b12-5c87f99b02c9/disk.config because it was imported into RBD.#033[00m
Dec  6 02:35:13 np0005548731 nova_compute[232433]: 2025-12-06 07:35:13.216 232437 INFO nova.virt.libvirt.driver [None req-29877d5d-e2cb-4da6-9142-8f5d2636bd64 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] [instance: b4be0ef8-945f-47a1-a3a8-5962f1e692e5] Creating config drive at /var/lib/nova/instances/b4be0ef8-945f-47a1-a3a8-5962f1e692e5/disk.config#033[00m
Dec  6 02:35:13 np0005548731 nova_compute[232433]: 2025-12-06 07:35:13.224 232437 DEBUG oslo_concurrency.processutils [None req-29877d5d-e2cb-4da6-9142-8f5d2636bd64 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/b4be0ef8-945f-47a1-a3a8-5962f1e692e5/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpl01kgngw execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:35:13 np0005548731 kernel: tap057f5adb-64: entered promiscuous mode
Dec  6 02:35:13 np0005548731 NetworkManager[49182]: <info>  [1765006513.2720] manager: (tap057f5adb-64): new Tun device (/org/freedesktop/NetworkManager/Devices/261)
Dec  6 02:35:13 np0005548731 ovn_controller[133927]: 2025-12-06T07:35:13Z|00546|binding|INFO|Claiming lport 057f5adb-642a-4046-a07d-f33f8ab42b98 for this chassis.
Dec  6 02:35:13 np0005548731 ovn_controller[133927]: 2025-12-06T07:35:13Z|00547|binding|INFO|057f5adb-642a-4046-a07d-f33f8ab42b98: Claiming fa:16:3e:58:95:72 10.100.0.4
Dec  6 02:35:13 np0005548731 nova_compute[232433]: 2025-12-06 07:35:13.275 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:35:13 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:35:13.282 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:58:95:72 10.100.0.4'], port_security=['fa:16:3e:58:95:72 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '3662ed68-91aa-4b97-8b12-5c87f99b02c9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a17f6e7a-bed5-4778-a86b-02d7790910b2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2cfa2f2dfd6f4dc090f85672b1f24e68', 'neutron:revision_number': '2', 'neutron:security_group_ids': '466e6e24-bbe8-42bb-80dd-0bbf2c427dc2', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d6badd1f-e4bd-41d4-8ee4-941164e4fdd1, chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], logical_port=057f5adb-642a-4046-a07d-f33f8ab42b98) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 02:35:13 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:35:13.284 143965 INFO neutron.agent.ovn.metadata.agent [-] Port 057f5adb-642a-4046-a07d-f33f8ab42b98 in datapath a17f6e7a-bed5-4778-a86b-02d7790910b2 bound to our chassis#033[00m
Dec  6 02:35:13 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:35:13.288 143965 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network a17f6e7a-bed5-4778-a86b-02d7790910b2#033[00m
Dec  6 02:35:13 np0005548731 ovn_controller[133927]: 2025-12-06T07:35:13Z|00548|binding|INFO|Setting lport 057f5adb-642a-4046-a07d-f33f8ab42b98 ovn-installed in OVS
Dec  6 02:35:13 np0005548731 ovn_controller[133927]: 2025-12-06T07:35:13Z|00549|binding|INFO|Setting lport 057f5adb-642a-4046-a07d-f33f8ab42b98 up in Southbound
Dec  6 02:35:13 np0005548731 nova_compute[232433]: 2025-12-06 07:35:13.320 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:35:13 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:35:13.322 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[06913847-d142-4173-879f-c2cea78ef409]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:35:13 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:35:13.324 143965 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapa17f6e7a-b1 in ovnmeta-a17f6e7a-bed5-4778-a86b-02d7790910b2 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Dec  6 02:35:13 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:35:13.326 236668 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapa17f6e7a-b0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Dec  6 02:35:13 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:35:13.326 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[73225415-a7f4-49f1-9297-910831f8af40]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:35:13 np0005548731 nova_compute[232433]: 2025-12-06 07:35:13.327 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:35:13 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:35:13.327 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[5a93d7cd-65ff-48f7-bb77-59ee876586c6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:35:13 np0005548731 systemd-udevd[286491]: Network interface NamePolicy= disabled on kernel command line.
Dec  6 02:35:13 np0005548731 systemd-machined[195355]: New machine qemu-55-instance-0000007e.
Dec  6 02:35:13 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:35:13.339 144079 DEBUG oslo.privsep.daemon [-] privsep: reply[5a0faaeb-d3e2-4538-8bc8-47a3de30f64b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:35:13 np0005548731 NetworkManager[49182]: <info>  [1765006513.3417] device (tap057f5adb-64): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec  6 02:35:13 np0005548731 NetworkManager[49182]: <info>  [1765006513.3425] device (tap057f5adb-64): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec  6 02:35:13 np0005548731 systemd[1]: Started Virtual Machine qemu-55-instance-0000007e.
Dec  6 02:35:13 np0005548731 nova_compute[232433]: 2025-12-06 07:35:13.364 232437 DEBUG oslo_concurrency.processutils [None req-29877d5d-e2cb-4da6-9142-8f5d2636bd64 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/b4be0ef8-945f-47a1-a3a8-5962f1e692e5/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpl01kgngw" returned: 0 in 0.140s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:35:13 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:35:13.366 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[dcfc9c09-6e32-4a38-ae0c-13e04ff8fbdb]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:35:13 np0005548731 nova_compute[232433]: 2025-12-06 07:35:13.395 232437 DEBUG nova.storage.rbd_utils [None req-29877d5d-e2cb-4da6-9142-8f5d2636bd64 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] rbd image b4be0ef8-945f-47a1-a3a8-5962f1e692e5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:35:13 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:35:13.397 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[eda94a21-177b-4d39-b9b4-4bd665bed725]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:35:13 np0005548731 nova_compute[232433]: 2025-12-06 07:35:13.399 232437 DEBUG oslo_concurrency.processutils [None req-29877d5d-e2cb-4da6-9142-8f5d2636bd64 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/b4be0ef8-945f-47a1-a3a8-5962f1e692e5/disk.config b4be0ef8-945f-47a1-a3a8-5962f1e692e5_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:35:13 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:35:13.402 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[8752a0aa-a387-483d-80c8-c14342fbf40b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:35:13 np0005548731 NetworkManager[49182]: <info>  [1765006513.4038] manager: (tapa17f6e7a-b0): new Veth device (/org/freedesktop/NetworkManager/Devices/262)
Dec  6 02:35:13 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e293 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:35:13 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:35:13.442 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[ba8009fc-3958-4d9f-b82d-f274a68adebf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:35:13 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:35:13.445 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[da95fbb6-495b-49de-8739-7bc5ec97acc9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:35:13 np0005548731 NetworkManager[49182]: <info>  [1765006513.4694] device (tapa17f6e7a-b0): carrier: link connected
Dec  6 02:35:13 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:35:13.477 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[dad5d354-8b1c-4db6-aeeb-a35bfc215955]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:35:13 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:35:13.497 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[6203486a-efef-4f7e-b47f-cbd3bef4cdce]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa17f6e7a-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:94:0f:ce'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 172], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 677547, 'reachable_time': 20470, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 286573, 'error': None, 'target': 'ovnmeta-a17f6e7a-bed5-4778-a86b-02d7790910b2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:35:13 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:35:13.511 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[5ecb8e94-29fa-4d50-bb94-477e32974aaf]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe94:fce'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 677547, 'tstamp': 677547}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 286584, 'error': None, 'target': 'ovnmeta-a17f6e7a-bed5-4778-a86b-02d7790910b2', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:35:13 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:35:13.527 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[dca00090-431f-455f-b9c9-6806f61a3d36]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa17f6e7a-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:94:0f:ce'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 172], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 677547, 'reachable_time': 20470, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 286586, 'error': None, 'target': 'ovnmeta-a17f6e7a-bed5-4778-a86b-02d7790910b2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:35:13 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:35:13.557 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[cb8cc515-a15c-4db3-b95b-1c69c4d91341]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:35:13 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:35:13.613 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[32e1b775-1dbc-4457-8281-c5282d5998cd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:35:13 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:35:13.615 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa17f6e7a-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:35:13 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:35:13.615 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  6 02:35:13 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:35:13.615 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa17f6e7a-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:35:13 np0005548731 nova_compute[232433]: 2025-12-06 07:35:13.617 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:35:13 np0005548731 NetworkManager[49182]: <info>  [1765006513.6183] manager: (tapa17f6e7a-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/263)
Dec  6 02:35:13 np0005548731 kernel: tapa17f6e7a-b0: entered promiscuous mode
Dec  6 02:35:13 np0005548731 nova_compute[232433]: 2025-12-06 07:35:13.622 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:35:13 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:35:13.623 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapa17f6e7a-b0, col_values=(('external_ids', {'iface-id': '3209e4f2-5fb8-4d49-a4f1-f0f7d2b7d88d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:35:13 np0005548731 nova_compute[232433]: 2025-12-06 07:35:13.624 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:35:13 np0005548731 ovn_controller[133927]: 2025-12-06T07:35:13Z|00550|binding|INFO|Releasing lport 3209e4f2-5fb8-4d49-a4f1-f0f7d2b7d88d from this chassis (sb_readonly=0)
Dec  6 02:35:13 np0005548731 nova_compute[232433]: 2025-12-06 07:35:13.638 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:35:13 np0005548731 nova_compute[232433]: 2025-12-06 07:35:13.643 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:35:13 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:35:13.643 143965 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/a17f6e7a-bed5-4778-a86b-02d7790910b2.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/a17f6e7a-bed5-4778-a86b-02d7790910b2.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Dec  6 02:35:13 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:35:13.644 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[3b174b46-4055-41a9-afd7-84961ccb7091]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:35:13 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:35:13.645 143965 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec  6 02:35:13 np0005548731 ovn_metadata_agent[143960]: global
Dec  6 02:35:13 np0005548731 ovn_metadata_agent[143960]:    log         /dev/log local0 debug
Dec  6 02:35:13 np0005548731 ovn_metadata_agent[143960]:    log-tag     haproxy-metadata-proxy-a17f6e7a-bed5-4778-a86b-02d7790910b2
Dec  6 02:35:13 np0005548731 ovn_metadata_agent[143960]:    user        root
Dec  6 02:35:13 np0005548731 ovn_metadata_agent[143960]:    group       root
Dec  6 02:35:13 np0005548731 ovn_metadata_agent[143960]:    maxconn     1024
Dec  6 02:35:13 np0005548731 ovn_metadata_agent[143960]:    pidfile     /var/lib/neutron/external/pids/a17f6e7a-bed5-4778-a86b-02d7790910b2.pid.haproxy
Dec  6 02:35:13 np0005548731 ovn_metadata_agent[143960]:    daemon
Dec  6 02:35:13 np0005548731 ovn_metadata_agent[143960]: 
Dec  6 02:35:13 np0005548731 ovn_metadata_agent[143960]: defaults
Dec  6 02:35:13 np0005548731 ovn_metadata_agent[143960]:    log global
Dec  6 02:35:13 np0005548731 ovn_metadata_agent[143960]:    mode http
Dec  6 02:35:13 np0005548731 ovn_metadata_agent[143960]:    option httplog
Dec  6 02:35:13 np0005548731 ovn_metadata_agent[143960]:    option dontlognull
Dec  6 02:35:13 np0005548731 ovn_metadata_agent[143960]:    option http-server-close
Dec  6 02:35:13 np0005548731 ovn_metadata_agent[143960]:    option forwardfor
Dec  6 02:35:13 np0005548731 ovn_metadata_agent[143960]:    retries                 3
Dec  6 02:35:13 np0005548731 ovn_metadata_agent[143960]:    timeout http-request    30s
Dec  6 02:35:13 np0005548731 ovn_metadata_agent[143960]:    timeout connect         30s
Dec  6 02:35:13 np0005548731 ovn_metadata_agent[143960]:    timeout client          32s
Dec  6 02:35:13 np0005548731 ovn_metadata_agent[143960]:    timeout server          32s
Dec  6 02:35:13 np0005548731 ovn_metadata_agent[143960]:    timeout http-keep-alive 30s
Dec  6 02:35:13 np0005548731 ovn_metadata_agent[143960]: 
Dec  6 02:35:13 np0005548731 ovn_metadata_agent[143960]: 
Dec  6 02:35:13 np0005548731 ovn_metadata_agent[143960]: listen listener
Dec  6 02:35:13 np0005548731 ovn_metadata_agent[143960]:    bind 169.254.169.254:80
Dec  6 02:35:13 np0005548731 ovn_metadata_agent[143960]:    server metadata /var/lib/neutron/metadata_proxy
Dec  6 02:35:13 np0005548731 ovn_metadata_agent[143960]:    http-request add-header X-OVN-Network-ID a17f6e7a-bed5-4778-a86b-02d7790910b2
Dec  6 02:35:13 np0005548731 ovn_metadata_agent[143960]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Dec  6 02:35:13 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:35:13.645 143965 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-a17f6e7a-bed5-4778-a86b-02d7790910b2', 'env', 'PROCESS_TAG=haproxy-a17f6e7a-bed5-4778-a86b-02d7790910b2', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/a17f6e7a-bed5-4778-a86b-02d7790910b2.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Dec  6 02:35:13 np0005548731 nova_compute[232433]: 2025-12-06 07:35:13.734 232437 DEBUG nova.compute.manager [req-e2d9897b-c380-41fa-8d4a-7ff4d1aba384 req-7e2e7695-a834-41f9-92f4-d96209833695 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 3662ed68-91aa-4b97-8b12-5c87f99b02c9] Received event network-vif-plugged-057f5adb-642a-4046-a07d-f33f8ab42b98 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:35:13 np0005548731 nova_compute[232433]: 2025-12-06 07:35:13.735 232437 DEBUG oslo_concurrency.lockutils [req-e2d9897b-c380-41fa-8d4a-7ff4d1aba384 req-7e2e7695-a834-41f9-92f4-d96209833695 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "3662ed68-91aa-4b97-8b12-5c87f99b02c9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:35:13 np0005548731 nova_compute[232433]: 2025-12-06 07:35:13.735 232437 DEBUG oslo_concurrency.lockutils [req-e2d9897b-c380-41fa-8d4a-7ff4d1aba384 req-7e2e7695-a834-41f9-92f4-d96209833695 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "3662ed68-91aa-4b97-8b12-5c87f99b02c9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:35:13 np0005548731 nova_compute[232433]: 2025-12-06 07:35:13.736 232437 DEBUG oslo_concurrency.lockutils [req-e2d9897b-c380-41fa-8d4a-7ff4d1aba384 req-7e2e7695-a834-41f9-92f4-d96209833695 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "3662ed68-91aa-4b97-8b12-5c87f99b02c9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:35:13 np0005548731 nova_compute[232433]: 2025-12-06 07:35:13.736 232437 DEBUG nova.compute.manager [req-e2d9897b-c380-41fa-8d4a-7ff4d1aba384 req-7e2e7695-a834-41f9-92f4-d96209833695 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 3662ed68-91aa-4b97-8b12-5c87f99b02c9] Processing event network-vif-plugged-057f5adb-642a-4046-a07d-f33f8ab42b98 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Dec  6 02:35:13 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:35:13 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 02:35:13 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:35:13.772 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 02:35:13 np0005548731 nova_compute[232433]: 2025-12-06 07:35:13.961 232437 DEBUG nova.virt.driver [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Emitting event <LifecycleEvent: 1765006513.9609046, 3662ed68-91aa-4b97-8b12-5c87f99b02c9 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 02:35:13 np0005548731 nova_compute[232433]: 2025-12-06 07:35:13.962 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 3662ed68-91aa-4b97-8b12-5c87f99b02c9] VM Started (Lifecycle Event)#033[00m
Dec  6 02:35:13 np0005548731 nova_compute[232433]: 2025-12-06 07:35:13.964 232437 DEBUG nova.compute.manager [None req-29f199c3-439f-49ad-a6a7-ceafc9c9ff68 603967295b474367b2a605071d414bbf 2cfa2f2dfd6f4dc090f85672b1f24e68 - - default default] [instance: 3662ed68-91aa-4b97-8b12-5c87f99b02c9] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec  6 02:35:13 np0005548731 nova_compute[232433]: 2025-12-06 07:35:13.968 232437 DEBUG nova.virt.libvirt.driver [None req-29f199c3-439f-49ad-a6a7-ceafc9c9ff68 603967295b474367b2a605071d414bbf 2cfa2f2dfd6f4dc090f85672b1f24e68 - - default default] [instance: 3662ed68-91aa-4b97-8b12-5c87f99b02c9] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Dec  6 02:35:13 np0005548731 nova_compute[232433]: 2025-12-06 07:35:13.971 232437 INFO nova.virt.libvirt.driver [-] [instance: 3662ed68-91aa-4b97-8b12-5c87f99b02c9] Instance spawned successfully.#033[00m
Dec  6 02:35:13 np0005548731 nova_compute[232433]: 2025-12-06 07:35:13.971 232437 DEBUG nova.virt.libvirt.driver [None req-29f199c3-439f-49ad-a6a7-ceafc9c9ff68 603967295b474367b2a605071d414bbf 2cfa2f2dfd6f4dc090f85672b1f24e68 - - default default] [instance: 3662ed68-91aa-4b97-8b12-5c87f99b02c9] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Dec  6 02:35:13 np0005548731 nova_compute[232433]: 2025-12-06 07:35:13.997 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 3662ed68-91aa-4b97-8b12-5c87f99b02c9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:35:14 np0005548731 nova_compute[232433]: 2025-12-06 07:35:14.004 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 3662ed68-91aa-4b97-8b12-5c87f99b02c9] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  6 02:35:14 np0005548731 nova_compute[232433]: 2025-12-06 07:35:14.008 232437 DEBUG nova.virt.libvirt.driver [None req-29f199c3-439f-49ad-a6a7-ceafc9c9ff68 603967295b474367b2a605071d414bbf 2cfa2f2dfd6f4dc090f85672b1f24e68 - - default default] [instance: 3662ed68-91aa-4b97-8b12-5c87f99b02c9] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 02:35:14 np0005548731 nova_compute[232433]: 2025-12-06 07:35:14.008 232437 DEBUG nova.virt.libvirt.driver [None req-29f199c3-439f-49ad-a6a7-ceafc9c9ff68 603967295b474367b2a605071d414bbf 2cfa2f2dfd6f4dc090f85672b1f24e68 - - default default] [instance: 3662ed68-91aa-4b97-8b12-5c87f99b02c9] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 02:35:14 np0005548731 nova_compute[232433]: 2025-12-06 07:35:14.009 232437 DEBUG nova.virt.libvirt.driver [None req-29f199c3-439f-49ad-a6a7-ceafc9c9ff68 603967295b474367b2a605071d414bbf 2cfa2f2dfd6f4dc090f85672b1f24e68 - - default default] [instance: 3662ed68-91aa-4b97-8b12-5c87f99b02c9] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 02:35:14 np0005548731 nova_compute[232433]: 2025-12-06 07:35:14.009 232437 DEBUG nova.virt.libvirt.driver [None req-29f199c3-439f-49ad-a6a7-ceafc9c9ff68 603967295b474367b2a605071d414bbf 2cfa2f2dfd6f4dc090f85672b1f24e68 - - default default] [instance: 3662ed68-91aa-4b97-8b12-5c87f99b02c9] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 02:35:14 np0005548731 nova_compute[232433]: 2025-12-06 07:35:14.010 232437 DEBUG nova.virt.libvirt.driver [None req-29f199c3-439f-49ad-a6a7-ceafc9c9ff68 603967295b474367b2a605071d414bbf 2cfa2f2dfd6f4dc090f85672b1f24e68 - - default default] [instance: 3662ed68-91aa-4b97-8b12-5c87f99b02c9] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 02:35:14 np0005548731 nova_compute[232433]: 2025-12-06 07:35:14.010 232437 DEBUG nova.virt.libvirt.driver [None req-29f199c3-439f-49ad-a6a7-ceafc9c9ff68 603967295b474367b2a605071d414bbf 2cfa2f2dfd6f4dc090f85672b1f24e68 - - default default] [instance: 3662ed68-91aa-4b97-8b12-5c87f99b02c9] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 02:35:14 np0005548731 podman[286694]: 2025-12-06 07:35:14.023237907 +0000 UTC m=+0.054911259 container create c4939a460e756189a1e8290c6e6f0c5c76d9f3366dc126304f43d86c0ea8731a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a17f6e7a-bed5-4778-a86b-02d7790910b2, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  6 02:35:14 np0005548731 nova_compute[232433]: 2025-12-06 07:35:14.040 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 3662ed68-91aa-4b97-8b12-5c87f99b02c9] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  6 02:35:14 np0005548731 nova_compute[232433]: 2025-12-06 07:35:14.040 232437 DEBUG nova.virt.driver [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Emitting event <LifecycleEvent: 1765006513.961265, 3662ed68-91aa-4b97-8b12-5c87f99b02c9 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 02:35:14 np0005548731 nova_compute[232433]: 2025-12-06 07:35:14.041 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 3662ed68-91aa-4b97-8b12-5c87f99b02c9] VM Paused (Lifecycle Event)#033[00m
Dec  6 02:35:14 np0005548731 systemd[1]: Started libpod-conmon-c4939a460e756189a1e8290c6e6f0c5c76d9f3366dc126304f43d86c0ea8731a.scope.
Dec  6 02:35:14 np0005548731 systemd[1]: Started libcrun container.
Dec  6 02:35:14 np0005548731 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/78fd36e431c683ab5de9d17da3bd0dfbd71376b7f2e1beedf534936ba143c295/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec  6 02:35:14 np0005548731 podman[286694]: 2025-12-06 07:35:13.991796581 +0000 UTC m=+0.023469953 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec  6 02:35:14 np0005548731 podman[286694]: 2025-12-06 07:35:14.095710152 +0000 UTC m=+0.127383524 container init c4939a460e756189a1e8290c6e6f0c5c76d9f3366dc126304f43d86c0ea8731a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a17f6e7a-bed5-4778-a86b-02d7790910b2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec  6 02:35:14 np0005548731 podman[286694]: 2025-12-06 07:35:14.101913473 +0000 UTC m=+0.133586825 container start c4939a460e756189a1e8290c6e6f0c5c76d9f3366dc126304f43d86c0ea8731a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a17f6e7a-bed5-4778-a86b-02d7790910b2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team)
Dec  6 02:35:14 np0005548731 nova_compute[232433]: 2025-12-06 07:35:14.117 232437 DEBUG oslo_concurrency.processutils [None req-29877d5d-e2cb-4da6-9142-8f5d2636bd64 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/b4be0ef8-945f-47a1-a3a8-5962f1e692e5/disk.config b4be0ef8-945f-47a1-a3a8-5962f1e692e5_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.718s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:35:14 np0005548731 nova_compute[232433]: 2025-12-06 07:35:14.118 232437 INFO nova.virt.libvirt.driver [None req-29877d5d-e2cb-4da6-9142-8f5d2636bd64 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] [instance: b4be0ef8-945f-47a1-a3a8-5962f1e692e5] Deleting local config drive /var/lib/nova/instances/b4be0ef8-945f-47a1-a3a8-5962f1e692e5/disk.config because it was imported into RBD.#033[00m
Dec  6 02:35:14 np0005548731 neutron-haproxy-ovnmeta-a17f6e7a-bed5-4778-a86b-02d7790910b2[286709]: [NOTICE]   (286713) : New worker (286715) forked
Dec  6 02:35:14 np0005548731 neutron-haproxy-ovnmeta-a17f6e7a-bed5-4778-a86b-02d7790910b2[286709]: [NOTICE]   (286713) : Loading success.
Dec  6 02:35:14 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 02:35:14 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 02:35:14 np0005548731 kernel: tap657a8a48-7a: entered promiscuous mode
Dec  6 02:35:14 np0005548731 NetworkManager[49182]: <info>  [1765006514.1688] manager: (tap657a8a48-7a): new Tun device (/org/freedesktop/NetworkManager/Devices/264)
Dec  6 02:35:14 np0005548731 systemd-udevd[286525]: Network interface NamePolicy= disabled on kernel command line.
Dec  6 02:35:14 np0005548731 ovn_controller[133927]: 2025-12-06T07:35:14Z|00551|binding|INFO|Claiming lport 657a8a48-7a74-4b37-a294-df0b2fd9332c for this chassis.
Dec  6 02:35:14 np0005548731 ovn_controller[133927]: 2025-12-06T07:35:14Z|00552|binding|INFO|657a8a48-7a74-4b37-a294-df0b2fd9332c: Claiming fa:16:3e:f4:fe:4a 10.100.0.7
Dec  6 02:35:14 np0005548731 nova_compute[232433]: 2025-12-06 07:35:14.173 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:35:14 np0005548731 NetworkManager[49182]: <info>  [1765006514.1854] device (tap657a8a48-7a): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec  6 02:35:14 np0005548731 NetworkManager[49182]: <info>  [1765006514.1863] device (tap657a8a48-7a): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec  6 02:35:14 np0005548731 ovn_controller[133927]: 2025-12-06T07:35:14Z|00553|binding|INFO|Setting lport 657a8a48-7a74-4b37-a294-df0b2fd9332c ovn-installed in OVS
Dec  6 02:35:14 np0005548731 nova_compute[232433]: 2025-12-06 07:35:14.191 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:35:14 np0005548731 nova_compute[232433]: 2025-12-06 07:35:14.197 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:35:14 np0005548731 systemd-machined[195355]: New machine qemu-56-instance-0000007f.
Dec  6 02:35:14 np0005548731 systemd[1]: Started Virtual Machine qemu-56-instance-0000007f.
Dec  6 02:35:14 np0005548731 nova_compute[232433]: 2025-12-06 07:35:14.244 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 3662ed68-91aa-4b97-8b12-5c87f99b02c9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:35:14 np0005548731 nova_compute[232433]: 2025-12-06 07:35:14.250 232437 DEBUG nova.virt.driver [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Emitting event <LifecycleEvent: 1765006513.9669948, 3662ed68-91aa-4b97-8b12-5c87f99b02c9 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 02:35:14 np0005548731 nova_compute[232433]: 2025-12-06 07:35:14.250 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 3662ed68-91aa-4b97-8b12-5c87f99b02c9] VM Resumed (Lifecycle Event)#033[00m
Dec  6 02:35:14 np0005548731 ovn_controller[133927]: 2025-12-06T07:35:14Z|00554|binding|INFO|Setting lport 657a8a48-7a74-4b37-a294-df0b2fd9332c up in Southbound
Dec  6 02:35:14 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:35:14.302 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f4:fe:4a 10.100.0.7'], port_security=['fa:16:3e:f4:fe:4a 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'b4be0ef8-945f-47a1-a3a8-5962f1e692e5', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5bb49e8a-b939-4c79-851c-62c634be0272', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '833f4cf9f5a64b2ab94c3bf330353a31', 'neutron:revision_number': '2', 'neutron:security_group_ids': '44fc0f8f-27e8-483c-be93-2bb490e3ff3b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=56d5d28a-0d18-4549-b1d7-8420194c6348, chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], logical_port=657a8a48-7a74-4b37-a294-df0b2fd9332c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 02:35:14 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:35:14.304 143965 INFO neutron.agent.ovn.metadata.agent [-] Port 657a8a48-7a74-4b37-a294-df0b2fd9332c in datapath 5bb49e8a-b939-4c79-851c-62c634be0272 bound to our chassis#033[00m
Dec  6 02:35:14 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:35:14.305 143965 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 5bb49e8a-b939-4c79-851c-62c634be0272#033[00m
Dec  6 02:35:14 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:35:14.320 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[b1c2df78-dc0c-4d72-8afa-776dd6763148]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:35:14 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:35:14.349 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[232a5bbe-b35b-4068-b77d-5c2c277a2bcb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:35:14 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:35:14.352 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[ab3b6d17-a19c-47cf-8df9-d06fb06295b7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:35:14 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:35:14.380 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[c61aa70d-dac1-4e4a-b4e8-f66a14e87819]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:35:14 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:35:14.397 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[3f6e9e70-5033-4939-91a0-98897bd8ce46]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5bb49e8a-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:0b:bf:8b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 164], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 667055, 'reachable_time': 27555, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 286750, 'error': None, 'target': 'ovnmeta-5bb49e8a-b939-4c79-851c-62c634be0272', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:35:14 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:35:14.419 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[3c30c2e4-463b-410d-a946-f169c98a62d1]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap5bb49e8a-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 667067, 'tstamp': 667067}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 286751, 'error': None, 'target': 'ovnmeta-5bb49e8a-b939-4c79-851c-62c634be0272', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap5bb49e8a-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 667070, 'tstamp': 667070}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 286751, 'error': None, 'target': 'ovnmeta-5bb49e8a-b939-4c79-851c-62c634be0272', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:35:14 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:35:14.422 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5bb49e8a-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:35:14 np0005548731 nova_compute[232433]: 2025-12-06 07:35:14.423 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:35:14 np0005548731 nova_compute[232433]: 2025-12-06 07:35:14.424 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:35:14 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:35:14.425 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5bb49e8a-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:35:14 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:35:14.425 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  6 02:35:14 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:35:14.426 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap5bb49e8a-b0, col_values=(('external_ids', {'iface-id': 'e4d89947-8fab-4c13-b2db-4eed875f77a0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:35:14 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:35:14.426 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  6 02:35:14 np0005548731 nova_compute[232433]: 2025-12-06 07:35:14.436 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 3662ed68-91aa-4b97-8b12-5c87f99b02c9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:35:14 np0005548731 nova_compute[232433]: 2025-12-06 07:35:14.439 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 3662ed68-91aa-4b97-8b12-5c87f99b02c9] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  6 02:35:14 np0005548731 nova_compute[232433]: 2025-12-06 07:35:14.498 232437 INFO nova.compute.manager [None req-29f199c3-439f-49ad-a6a7-ceafc9c9ff68 603967295b474367b2a605071d414bbf 2cfa2f2dfd6f4dc090f85672b1f24e68 - - default default] [instance: 3662ed68-91aa-4b97-8b12-5c87f99b02c9] Took 26.04 seconds to spawn the instance on the hypervisor.#033[00m
Dec  6 02:35:14 np0005548731 nova_compute[232433]: 2025-12-06 07:35:14.499 232437 DEBUG nova.compute.manager [None req-29f199c3-439f-49ad-a6a7-ceafc9c9ff68 603967295b474367b2a605071d414bbf 2cfa2f2dfd6f4dc090f85672b1f24e68 - - default default] [instance: 3662ed68-91aa-4b97-8b12-5c87f99b02c9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:35:14 np0005548731 nova_compute[232433]: 2025-12-06 07:35:14.521 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 3662ed68-91aa-4b97-8b12-5c87f99b02c9] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  6 02:35:14 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:35:14 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:35:14 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:35:14.592 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:35:14 np0005548731 nova_compute[232433]: 2025-12-06 07:35:14.632 232437 INFO nova.compute.manager [None req-29f199c3-439f-49ad-a6a7-ceafc9c9ff68 603967295b474367b2a605071d414bbf 2cfa2f2dfd6f4dc090f85672b1f24e68 - - default default] [instance: 3662ed68-91aa-4b97-8b12-5c87f99b02c9] Took 27.10 seconds to build instance.#033[00m
Dec  6 02:35:14 np0005548731 nova_compute[232433]: 2025-12-06 07:35:14.839 232437 DEBUG oslo_concurrency.lockutils [None req-29f199c3-439f-49ad-a6a7-ceafc9c9ff68 603967295b474367b2a605071d414bbf 2cfa2f2dfd6f4dc090f85672b1f24e68 - - default default] Lock "3662ed68-91aa-4b97-8b12-5c87f99b02c9" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 27.382s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:35:15 np0005548731 nova_compute[232433]: 2025-12-06 07:35:15.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:35:15 np0005548731 nova_compute[232433]: 2025-12-06 07:35:15.206 232437 DEBUG nova.virt.driver [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Emitting event <LifecycleEvent: 1765006515.205861, b4be0ef8-945f-47a1-a3a8-5962f1e692e5 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 02:35:15 np0005548731 nova_compute[232433]: 2025-12-06 07:35:15.207 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: b4be0ef8-945f-47a1-a3a8-5962f1e692e5] VM Started (Lifecycle Event)#033[00m
Dec  6 02:35:15 np0005548731 nova_compute[232433]: 2025-12-06 07:35:15.227 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: b4be0ef8-945f-47a1-a3a8-5962f1e692e5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:35:15 np0005548731 nova_compute[232433]: 2025-12-06 07:35:15.231 232437 DEBUG nova.virt.driver [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Emitting event <LifecycleEvent: 1765006515.2059703, b4be0ef8-945f-47a1-a3a8-5962f1e692e5 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 02:35:15 np0005548731 nova_compute[232433]: 2025-12-06 07:35:15.232 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: b4be0ef8-945f-47a1-a3a8-5962f1e692e5] VM Paused (Lifecycle Event)#033[00m
Dec  6 02:35:15 np0005548731 nova_compute[232433]: 2025-12-06 07:35:15.249 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: b4be0ef8-945f-47a1-a3a8-5962f1e692e5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:35:15 np0005548731 nova_compute[232433]: 2025-12-06 07:35:15.254 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: b4be0ef8-945f-47a1-a3a8-5962f1e692e5] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  6 02:35:15 np0005548731 nova_compute[232433]: 2025-12-06 07:35:15.275 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: b4be0ef8-945f-47a1-a3a8-5962f1e692e5] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  6 02:35:15 np0005548731 nova_compute[232433]: 2025-12-06 07:35:15.363 232437 DEBUG nova.compute.manager [None req-335b470c-48e1-49ab-baea-51fc4d1c660a 603967295b474367b2a605071d414bbf 2cfa2f2dfd6f4dc090f85672b1f24e68 - - default default] [instance: 3662ed68-91aa-4b97-8b12-5c87f99b02c9] Getting vnc console get_vnc_console /usr/lib/python3.9/site-packages/nova/compute/manager.py:7196#033[00m
Dec  6 02:35:15 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:35:15 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 02:35:15 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:35:15.774 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 02:35:15 np0005548731 nova_compute[232433]: 2025-12-06 07:35:15.880 232437 DEBUG nova.compute.manager [req-6c0a8e85-1f9b-480a-8a8d-fa9722e52968 req-398a92a4-2a8d-41d7-b77e-3c327f0a5ef6 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 3662ed68-91aa-4b97-8b12-5c87f99b02c9] Received event network-vif-plugged-057f5adb-642a-4046-a07d-f33f8ab42b98 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:35:15 np0005548731 nova_compute[232433]: 2025-12-06 07:35:15.881 232437 DEBUG oslo_concurrency.lockutils [req-6c0a8e85-1f9b-480a-8a8d-fa9722e52968 req-398a92a4-2a8d-41d7-b77e-3c327f0a5ef6 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "3662ed68-91aa-4b97-8b12-5c87f99b02c9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:35:15 np0005548731 nova_compute[232433]: 2025-12-06 07:35:15.882 232437 DEBUG oslo_concurrency.lockutils [req-6c0a8e85-1f9b-480a-8a8d-fa9722e52968 req-398a92a4-2a8d-41d7-b77e-3c327f0a5ef6 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "3662ed68-91aa-4b97-8b12-5c87f99b02c9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:35:15 np0005548731 nova_compute[232433]: 2025-12-06 07:35:15.882 232437 DEBUG oslo_concurrency.lockutils [req-6c0a8e85-1f9b-480a-8a8d-fa9722e52968 req-398a92a4-2a8d-41d7-b77e-3c327f0a5ef6 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "3662ed68-91aa-4b97-8b12-5c87f99b02c9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:35:15 np0005548731 nova_compute[232433]: 2025-12-06 07:35:15.882 232437 DEBUG nova.compute.manager [req-6c0a8e85-1f9b-480a-8a8d-fa9722e52968 req-398a92a4-2a8d-41d7-b77e-3c327f0a5ef6 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 3662ed68-91aa-4b97-8b12-5c87f99b02c9] No waiting events found dispatching network-vif-plugged-057f5adb-642a-4046-a07d-f33f8ab42b98 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 02:35:15 np0005548731 nova_compute[232433]: 2025-12-06 07:35:15.883 232437 WARNING nova.compute.manager [req-6c0a8e85-1f9b-480a-8a8d-fa9722e52968 req-398a92a4-2a8d-41d7-b77e-3c327f0a5ef6 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 3662ed68-91aa-4b97-8b12-5c87f99b02c9] Received unexpected event network-vif-plugged-057f5adb-642a-4046-a07d-f33f8ab42b98 for instance with vm_state active and task_state None.#033[00m
Dec  6 02:35:15 np0005548731 nova_compute[232433]: 2025-12-06 07:35:15.883 232437 DEBUG nova.compute.manager [req-6c0a8e85-1f9b-480a-8a8d-fa9722e52968 req-398a92a4-2a8d-41d7-b77e-3c327f0a5ef6 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: b4be0ef8-945f-47a1-a3a8-5962f1e692e5] Received event network-vif-plugged-657a8a48-7a74-4b37-a294-df0b2fd9332c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:35:15 np0005548731 nova_compute[232433]: 2025-12-06 07:35:15.883 232437 DEBUG oslo_concurrency.lockutils [req-6c0a8e85-1f9b-480a-8a8d-fa9722e52968 req-398a92a4-2a8d-41d7-b77e-3c327f0a5ef6 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "b4be0ef8-945f-47a1-a3a8-5962f1e692e5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:35:15 np0005548731 nova_compute[232433]: 2025-12-06 07:35:15.884 232437 DEBUG oslo_concurrency.lockutils [req-6c0a8e85-1f9b-480a-8a8d-fa9722e52968 req-398a92a4-2a8d-41d7-b77e-3c327f0a5ef6 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "b4be0ef8-945f-47a1-a3a8-5962f1e692e5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:35:15 np0005548731 nova_compute[232433]: 2025-12-06 07:35:15.884 232437 DEBUG oslo_concurrency.lockutils [req-6c0a8e85-1f9b-480a-8a8d-fa9722e52968 req-398a92a4-2a8d-41d7-b77e-3c327f0a5ef6 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "b4be0ef8-945f-47a1-a3a8-5962f1e692e5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:35:15 np0005548731 nova_compute[232433]: 2025-12-06 07:35:15.885 232437 DEBUG nova.compute.manager [req-6c0a8e85-1f9b-480a-8a8d-fa9722e52968 req-398a92a4-2a8d-41d7-b77e-3c327f0a5ef6 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: b4be0ef8-945f-47a1-a3a8-5962f1e692e5] Processing event network-vif-plugged-657a8a48-7a74-4b37-a294-df0b2fd9332c _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Dec  6 02:35:15 np0005548731 nova_compute[232433]: 2025-12-06 07:35:15.885 232437 DEBUG nova.compute.manager [req-6c0a8e85-1f9b-480a-8a8d-fa9722e52968 req-398a92a4-2a8d-41d7-b77e-3c327f0a5ef6 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: b4be0ef8-945f-47a1-a3a8-5962f1e692e5] Received event network-vif-plugged-657a8a48-7a74-4b37-a294-df0b2fd9332c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:35:15 np0005548731 nova_compute[232433]: 2025-12-06 07:35:15.885 232437 DEBUG oslo_concurrency.lockutils [req-6c0a8e85-1f9b-480a-8a8d-fa9722e52968 req-398a92a4-2a8d-41d7-b77e-3c327f0a5ef6 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "b4be0ef8-945f-47a1-a3a8-5962f1e692e5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:35:15 np0005548731 nova_compute[232433]: 2025-12-06 07:35:15.886 232437 DEBUG oslo_concurrency.lockutils [req-6c0a8e85-1f9b-480a-8a8d-fa9722e52968 req-398a92a4-2a8d-41d7-b77e-3c327f0a5ef6 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "b4be0ef8-945f-47a1-a3a8-5962f1e692e5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:35:15 np0005548731 nova_compute[232433]: 2025-12-06 07:35:15.886 232437 DEBUG oslo_concurrency.lockutils [req-6c0a8e85-1f9b-480a-8a8d-fa9722e52968 req-398a92a4-2a8d-41d7-b77e-3c327f0a5ef6 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "b4be0ef8-945f-47a1-a3a8-5962f1e692e5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:35:15 np0005548731 nova_compute[232433]: 2025-12-06 07:35:15.886 232437 DEBUG nova.compute.manager [req-6c0a8e85-1f9b-480a-8a8d-fa9722e52968 req-398a92a4-2a8d-41d7-b77e-3c327f0a5ef6 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: b4be0ef8-945f-47a1-a3a8-5962f1e692e5] No waiting events found dispatching network-vif-plugged-657a8a48-7a74-4b37-a294-df0b2fd9332c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 02:35:15 np0005548731 nova_compute[232433]: 2025-12-06 07:35:15.886 232437 WARNING nova.compute.manager [req-6c0a8e85-1f9b-480a-8a8d-fa9722e52968 req-398a92a4-2a8d-41d7-b77e-3c327f0a5ef6 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: b4be0ef8-945f-47a1-a3a8-5962f1e692e5] Received unexpected event network-vif-plugged-657a8a48-7a74-4b37-a294-df0b2fd9332c for instance with vm_state building and task_state spawning.#033[00m
Dec  6 02:35:15 np0005548731 nova_compute[232433]: 2025-12-06 07:35:15.888 232437 DEBUG nova.compute.manager [None req-29877d5d-e2cb-4da6-9142-8f5d2636bd64 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] [instance: b4be0ef8-945f-47a1-a3a8-5962f1e692e5] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec  6 02:35:15 np0005548731 nova_compute[232433]: 2025-12-06 07:35:15.892 232437 DEBUG nova.virt.driver [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Emitting event <LifecycleEvent: 1765006515.8914554, b4be0ef8-945f-47a1-a3a8-5962f1e692e5 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 02:35:15 np0005548731 nova_compute[232433]: 2025-12-06 07:35:15.892 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: b4be0ef8-945f-47a1-a3a8-5962f1e692e5] VM Resumed (Lifecycle Event)#033[00m
Dec  6 02:35:15 np0005548731 nova_compute[232433]: 2025-12-06 07:35:15.896 232437 DEBUG nova.virt.libvirt.driver [None req-29877d5d-e2cb-4da6-9142-8f5d2636bd64 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] [instance: b4be0ef8-945f-47a1-a3a8-5962f1e692e5] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Dec  6 02:35:15 np0005548731 nova_compute[232433]: 2025-12-06 07:35:15.908 232437 INFO nova.virt.libvirt.driver [-] [instance: b4be0ef8-945f-47a1-a3a8-5962f1e692e5] Instance spawned successfully.#033[00m
Dec  6 02:35:15 np0005548731 nova_compute[232433]: 2025-12-06 07:35:15.908 232437 DEBUG nova.virt.libvirt.driver [None req-29877d5d-e2cb-4da6-9142-8f5d2636bd64 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] [instance: b4be0ef8-945f-47a1-a3a8-5962f1e692e5] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Dec  6 02:35:15 np0005548731 nova_compute[232433]: 2025-12-06 07:35:15.921 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: b4be0ef8-945f-47a1-a3a8-5962f1e692e5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:35:15 np0005548731 nova_compute[232433]: 2025-12-06 07:35:15.924 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: b4be0ef8-945f-47a1-a3a8-5962f1e692e5] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  6 02:35:15 np0005548731 nova_compute[232433]: 2025-12-06 07:35:15.931 232437 DEBUG nova.virt.libvirt.driver [None req-29877d5d-e2cb-4da6-9142-8f5d2636bd64 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] [instance: b4be0ef8-945f-47a1-a3a8-5962f1e692e5] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 02:35:15 np0005548731 nova_compute[232433]: 2025-12-06 07:35:15.932 232437 DEBUG nova.virt.libvirt.driver [None req-29877d5d-e2cb-4da6-9142-8f5d2636bd64 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] [instance: b4be0ef8-945f-47a1-a3a8-5962f1e692e5] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 02:35:15 np0005548731 nova_compute[232433]: 2025-12-06 07:35:15.932 232437 DEBUG nova.virt.libvirt.driver [None req-29877d5d-e2cb-4da6-9142-8f5d2636bd64 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] [instance: b4be0ef8-945f-47a1-a3a8-5962f1e692e5] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 02:35:15 np0005548731 nova_compute[232433]: 2025-12-06 07:35:15.932 232437 DEBUG nova.virt.libvirt.driver [None req-29877d5d-e2cb-4da6-9142-8f5d2636bd64 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] [instance: b4be0ef8-945f-47a1-a3a8-5962f1e692e5] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 02:35:15 np0005548731 nova_compute[232433]: 2025-12-06 07:35:15.933 232437 DEBUG nova.virt.libvirt.driver [None req-29877d5d-e2cb-4da6-9142-8f5d2636bd64 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] [instance: b4be0ef8-945f-47a1-a3a8-5962f1e692e5] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 02:35:15 np0005548731 nova_compute[232433]: 2025-12-06 07:35:15.933 232437 DEBUG nova.virt.libvirt.driver [None req-29877d5d-e2cb-4da6-9142-8f5d2636bd64 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] [instance: b4be0ef8-945f-47a1-a3a8-5962f1e692e5] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 02:35:15 np0005548731 nova_compute[232433]: 2025-12-06 07:35:15.965 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: b4be0ef8-945f-47a1-a3a8-5962f1e692e5] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  6 02:35:15 np0005548731 nova_compute[232433]: 2025-12-06 07:35:15.997 232437 INFO nova.compute.manager [None req-29877d5d-e2cb-4da6-9142-8f5d2636bd64 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] [instance: b4be0ef8-945f-47a1-a3a8-5962f1e692e5] Took 19.84 seconds to spawn the instance on the hypervisor.#033[00m
Dec  6 02:35:15 np0005548731 nova_compute[232433]: 2025-12-06 07:35:15.997 232437 DEBUG nova.compute.manager [None req-29877d5d-e2cb-4da6-9142-8f5d2636bd64 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] [instance: b4be0ef8-945f-47a1-a3a8-5962f1e692e5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:35:16 np0005548731 nova_compute[232433]: 2025-12-06 07:35:16.061 232437 INFO nova.compute.manager [None req-29877d5d-e2cb-4da6-9142-8f5d2636bd64 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] [instance: b4be0ef8-945f-47a1-a3a8-5962f1e692e5] Took 20.76 seconds to build instance.#033[00m
Dec  6 02:35:16 np0005548731 nova_compute[232433]: 2025-12-06 07:35:16.073 232437 DEBUG nova.compute.manager [None req-33c29c4b-4892-42a1-a798-307a29fd4ab4 603967295b474367b2a605071d414bbf 2cfa2f2dfd6f4dc090f85672b1f24e68 - - default default] [instance: 3662ed68-91aa-4b97-8b12-5c87f99b02c9] Getting vnc console get_vnc_console /usr/lib/python3.9/site-packages/nova/compute/manager.py:7196#033[00m
Dec  6 02:35:16 np0005548731 nova_compute[232433]: 2025-12-06 07:35:16.092 232437 DEBUG oslo_concurrency.lockutils [None req-29877d5d-e2cb-4da6-9142-8f5d2636bd64 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Lock "b4be0ef8-945f-47a1-a3a8-5962f1e692e5" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 20.856s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:35:16 np0005548731 nova_compute[232433]: 2025-12-06 07:35:16.100 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:35:16 np0005548731 nova_compute[232433]: 2025-12-06 07:35:16.122 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:35:16 np0005548731 nova_compute[232433]: 2025-12-06 07:35:16.502 232437 DEBUG oslo_concurrency.lockutils [None req-43867805-c1b0-49e7-91a1-fd3c9fff58dd 603967295b474367b2a605071d414bbf 2cfa2f2dfd6f4dc090f85672b1f24e68 - - default default] Acquiring lock "3662ed68-91aa-4b97-8b12-5c87f99b02c9" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:35:16 np0005548731 nova_compute[232433]: 2025-12-06 07:35:16.503 232437 DEBUG oslo_concurrency.lockutils [None req-43867805-c1b0-49e7-91a1-fd3c9fff58dd 603967295b474367b2a605071d414bbf 2cfa2f2dfd6f4dc090f85672b1f24e68 - - default default] Lock "3662ed68-91aa-4b97-8b12-5c87f99b02c9" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:35:16 np0005548731 nova_compute[232433]: 2025-12-06 07:35:16.503 232437 DEBUG oslo_concurrency.lockutils [None req-43867805-c1b0-49e7-91a1-fd3c9fff58dd 603967295b474367b2a605071d414bbf 2cfa2f2dfd6f4dc090f85672b1f24e68 - - default default] Acquiring lock "3662ed68-91aa-4b97-8b12-5c87f99b02c9-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:35:16 np0005548731 nova_compute[232433]: 2025-12-06 07:35:16.503 232437 DEBUG oslo_concurrency.lockutils [None req-43867805-c1b0-49e7-91a1-fd3c9fff58dd 603967295b474367b2a605071d414bbf 2cfa2f2dfd6f4dc090f85672b1f24e68 - - default default] Lock "3662ed68-91aa-4b97-8b12-5c87f99b02c9-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:35:16 np0005548731 nova_compute[232433]: 2025-12-06 07:35:16.504 232437 DEBUG oslo_concurrency.lockutils [None req-43867805-c1b0-49e7-91a1-fd3c9fff58dd 603967295b474367b2a605071d414bbf 2cfa2f2dfd6f4dc090f85672b1f24e68 - - default default] Lock "3662ed68-91aa-4b97-8b12-5c87f99b02c9-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:35:16 np0005548731 nova_compute[232433]: 2025-12-06 07:35:16.505 232437 INFO nova.compute.manager [None req-43867805-c1b0-49e7-91a1-fd3c9fff58dd 603967295b474367b2a605071d414bbf 2cfa2f2dfd6f4dc090f85672b1f24e68 - - default default] [instance: 3662ed68-91aa-4b97-8b12-5c87f99b02c9] Terminating instance#033[00m
Dec  6 02:35:16 np0005548731 nova_compute[232433]: 2025-12-06 07:35:16.506 232437 DEBUG nova.compute.manager [None req-43867805-c1b0-49e7-91a1-fd3c9fff58dd 603967295b474367b2a605071d414bbf 2cfa2f2dfd6f4dc090f85672b1f24e68 - - default default] [instance: 3662ed68-91aa-4b97-8b12-5c87f99b02c9] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec  6 02:35:16 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:35:16 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:35:16 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:35:16.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:35:16 np0005548731 kernel: tap057f5adb-64 (unregistering): left promiscuous mode
Dec  6 02:35:16 np0005548731 NetworkManager[49182]: <info>  [1765006516.6651] device (tap057f5adb-64): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec  6 02:35:16 np0005548731 ovn_controller[133927]: 2025-12-06T07:35:16Z|00555|binding|INFO|Releasing lport 057f5adb-642a-4046-a07d-f33f8ab42b98 from this chassis (sb_readonly=0)
Dec  6 02:35:16 np0005548731 ovn_controller[133927]: 2025-12-06T07:35:16Z|00556|binding|INFO|Setting lport 057f5adb-642a-4046-a07d-f33f8ab42b98 down in Southbound
Dec  6 02:35:16 np0005548731 ovn_controller[133927]: 2025-12-06T07:35:16Z|00557|binding|INFO|Removing iface tap057f5adb-64 ovn-installed in OVS
Dec  6 02:35:16 np0005548731 nova_compute[232433]: 2025-12-06 07:35:16.671 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:35:16 np0005548731 nova_compute[232433]: 2025-12-06 07:35:16.674 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:35:16 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:35:16.680 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:58:95:72 10.100.0.4'], port_security=['fa:16:3e:58:95:72 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '3662ed68-91aa-4b97-8b12-5c87f99b02c9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a17f6e7a-bed5-4778-a86b-02d7790910b2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2cfa2f2dfd6f4dc090f85672b1f24e68', 'neutron:revision_number': '4', 'neutron:security_group_ids': '466e6e24-bbe8-42bb-80dd-0bbf2c427dc2', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d6badd1f-e4bd-41d4-8ee4-941164e4fdd1, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], logical_port=057f5adb-642a-4046-a07d-f33f8ab42b98) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 02:35:16 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:35:16.681 143965 INFO neutron.agent.ovn.metadata.agent [-] Port 057f5adb-642a-4046-a07d-f33f8ab42b98 in datapath a17f6e7a-bed5-4778-a86b-02d7790910b2 unbound from our chassis#033[00m
Dec  6 02:35:16 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:35:16.684 143965 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network a17f6e7a-bed5-4778-a86b-02d7790910b2, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec  6 02:35:16 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:35:16.685 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[57dfc256-d323-4364-9f5f-7dd76098ead1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:35:16 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:35:16.685 143965 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-a17f6e7a-bed5-4778-a86b-02d7790910b2 namespace which is not needed anymore#033[00m
Dec  6 02:35:16 np0005548731 nova_compute[232433]: 2025-12-06 07:35:16.696 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:35:16 np0005548731 systemd[1]: machine-qemu\x2d55\x2dinstance\x2d0000007e.scope: Deactivated successfully.
Dec  6 02:35:16 np0005548731 systemd[1]: machine-qemu\x2d55\x2dinstance\x2d0000007e.scope: Consumed 3.034s CPU time.
Dec  6 02:35:16 np0005548731 systemd-machined[195355]: Machine qemu-55-instance-0000007e terminated.
Dec  6 02:35:16 np0005548731 nova_compute[232433]: 2025-12-06 07:35:16.739 232437 INFO nova.virt.libvirt.driver [-] [instance: 3662ed68-91aa-4b97-8b12-5c87f99b02c9] Instance destroyed successfully.#033[00m
Dec  6 02:35:16 np0005548731 nova_compute[232433]: 2025-12-06 07:35:16.740 232437 DEBUG nova.objects.instance [None req-43867805-c1b0-49e7-91a1-fd3c9fff58dd 603967295b474367b2a605071d414bbf 2cfa2f2dfd6f4dc090f85672b1f24e68 - - default default] Lazy-loading 'resources' on Instance uuid 3662ed68-91aa-4b97-8b12-5c87f99b02c9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:35:16 np0005548731 nova_compute[232433]: 2025-12-06 07:35:16.764 232437 DEBUG nova.virt.libvirt.vif [None req-43867805-c1b0-49e7-91a1-fd3c9fff58dd 603967295b474367b2a605071d414bbf 2cfa2f2dfd6f4dc090f85672b1f24e68 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T07:34:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-NoVNCConsoleTestJSON-server-1410038847',display_name='tempest-NoVNCConsoleTestJSON-server-1410038847',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-novncconsoletestjson-server-1410038847',id=126,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-06T07:35:14Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='2cfa2f2dfd6f4dc090f85672b1f24e68',ramdisk_id='',reservation_id='r-7hnen40u',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-NoVNCConsoleTestJSON-726879140',owner_user_name='tempest-NoVNCConsoleTestJSON-726879140-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T07:35:14Z,user_data=None,user_id='603967295b474367b2a605071d414bbf',uuid=3662ed68-91aa-4b97-8b12-5c87f99b02c9,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "057f5adb-642a-4046-a07d-f33f8ab42b98", "address": "fa:16:3e:58:95:72", "network": {"id": "a17f6e7a-bed5-4778-a86b-02d7790910b2", "bridge": "br-int", "label": "tempest-NoVNCConsoleTestJSON-1529778336-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2cfa2f2dfd6f4dc090f85672b1f24e68", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap057f5adb-64", "ovs_interfaceid": "057f5adb-642a-4046-a07d-f33f8ab42b98", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec  6 02:35:16 np0005548731 nova_compute[232433]: 2025-12-06 07:35:16.766 232437 DEBUG nova.network.os_vif_util [None req-43867805-c1b0-49e7-91a1-fd3c9fff58dd 603967295b474367b2a605071d414bbf 2cfa2f2dfd6f4dc090f85672b1f24e68 - - default default] Converting VIF {"id": "057f5adb-642a-4046-a07d-f33f8ab42b98", "address": "fa:16:3e:58:95:72", "network": {"id": "a17f6e7a-bed5-4778-a86b-02d7790910b2", "bridge": "br-int", "label": "tempest-NoVNCConsoleTestJSON-1529778336-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2cfa2f2dfd6f4dc090f85672b1f24e68", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap057f5adb-64", "ovs_interfaceid": "057f5adb-642a-4046-a07d-f33f8ab42b98", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 02:35:16 np0005548731 nova_compute[232433]: 2025-12-06 07:35:16.767 232437 DEBUG nova.network.os_vif_util [None req-43867805-c1b0-49e7-91a1-fd3c9fff58dd 603967295b474367b2a605071d414bbf 2cfa2f2dfd6f4dc090f85672b1f24e68 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:58:95:72,bridge_name='br-int',has_traffic_filtering=True,id=057f5adb-642a-4046-a07d-f33f8ab42b98,network=Network(a17f6e7a-bed5-4778-a86b-02d7790910b2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap057f5adb-64') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 02:35:16 np0005548731 nova_compute[232433]: 2025-12-06 07:35:16.768 232437 DEBUG os_vif [None req-43867805-c1b0-49e7-91a1-fd3c9fff58dd 603967295b474367b2a605071d414bbf 2cfa2f2dfd6f4dc090f85672b1f24e68 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:58:95:72,bridge_name='br-int',has_traffic_filtering=True,id=057f5adb-642a-4046-a07d-f33f8ab42b98,network=Network(a17f6e7a-bed5-4778-a86b-02d7790910b2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap057f5adb-64') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec  6 02:35:16 np0005548731 nova_compute[232433]: 2025-12-06 07:35:16.770 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:35:16 np0005548731 nova_compute[232433]: 2025-12-06 07:35:16.771 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap057f5adb-64, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:35:16 np0005548731 nova_compute[232433]: 2025-12-06 07:35:16.800 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:35:16 np0005548731 nova_compute[232433]: 2025-12-06 07:35:16.803 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  6 02:35:16 np0005548731 nova_compute[232433]: 2025-12-06 07:35:16.806 232437 INFO os_vif [None req-43867805-c1b0-49e7-91a1-fd3c9fff58dd 603967295b474367b2a605071d414bbf 2cfa2f2dfd6f4dc090f85672b1f24e68 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:58:95:72,bridge_name='br-int',has_traffic_filtering=True,id=057f5adb-642a-4046-a07d-f33f8ab42b98,network=Network(a17f6e7a-bed5-4778-a86b-02d7790910b2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap057f5adb-64')#033[00m
Dec  6 02:35:16 np0005548731 neutron-haproxy-ovnmeta-a17f6e7a-bed5-4778-a86b-02d7790910b2[286709]: [NOTICE]   (286713) : haproxy version is 2.8.14-c23fe91
Dec  6 02:35:16 np0005548731 neutron-haproxy-ovnmeta-a17f6e7a-bed5-4778-a86b-02d7790910b2[286709]: [NOTICE]   (286713) : path to executable is /usr/sbin/haproxy
Dec  6 02:35:16 np0005548731 neutron-haproxy-ovnmeta-a17f6e7a-bed5-4778-a86b-02d7790910b2[286709]: [WARNING]  (286713) : Exiting Master process...
Dec  6 02:35:16 np0005548731 neutron-haproxy-ovnmeta-a17f6e7a-bed5-4778-a86b-02d7790910b2[286709]: [ALERT]    (286713) : Current worker (286715) exited with code 143 (Terminated)
Dec  6 02:35:16 np0005548731 neutron-haproxy-ovnmeta-a17f6e7a-bed5-4778-a86b-02d7790910b2[286709]: [WARNING]  (286713) : All workers exited. Exiting... (0)
Dec  6 02:35:16 np0005548731 systemd[1]: libpod-c4939a460e756189a1e8290c6e6f0c5c76d9f3366dc126304f43d86c0ea8731a.scope: Deactivated successfully.
Dec  6 02:35:16 np0005548731 podman[286830]: 2025-12-06 07:35:16.819355652 +0000 UTC m=+0.044787632 container died c4939a460e756189a1e8290c6e6f0c5c76d9f3366dc126304f43d86c0ea8731a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a17f6e7a-bed5-4778-a86b-02d7790910b2, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125)
Dec  6 02:35:16 np0005548731 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-c4939a460e756189a1e8290c6e6f0c5c76d9f3366dc126304f43d86c0ea8731a-userdata-shm.mount: Deactivated successfully.
Dec  6 02:35:16 np0005548731 systemd[1]: var-lib-containers-storage-overlay-78fd36e431c683ab5de9d17da3bd0dfbd71376b7f2e1beedf534936ba143c295-merged.mount: Deactivated successfully.
Dec  6 02:35:16 np0005548731 podman[286830]: 2025-12-06 07:35:16.860263978 +0000 UTC m=+0.085695958 container cleanup c4939a460e756189a1e8290c6e6f0c5c76d9f3366dc126304f43d86c0ea8731a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a17f6e7a-bed5-4778-a86b-02d7790910b2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Dec  6 02:35:16 np0005548731 systemd[1]: libpod-conmon-c4939a460e756189a1e8290c6e6f0c5c76d9f3366dc126304f43d86c0ea8731a.scope: Deactivated successfully.
Dec  6 02:35:16 np0005548731 podman[286896]: 2025-12-06 07:35:16.924274058 +0000 UTC m=+0.040719613 container remove c4939a460e756189a1e8290c6e6f0c5c76d9f3366dc126304f43d86c0ea8731a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a17f6e7a-bed5-4778-a86b-02d7790910b2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  6 02:35:16 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:35:16.930 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[f7885904-b89d-4b3e-8507-cdb5d3bce397]: (4, ('Sat Dec  6 07:35:16 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-a17f6e7a-bed5-4778-a86b-02d7790910b2 (c4939a460e756189a1e8290c6e6f0c5c76d9f3366dc126304f43d86c0ea8731a)\nc4939a460e756189a1e8290c6e6f0c5c76d9f3366dc126304f43d86c0ea8731a\nSat Dec  6 07:35:16 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-a17f6e7a-bed5-4778-a86b-02d7790910b2 (c4939a460e756189a1e8290c6e6f0c5c76d9f3366dc126304f43d86c0ea8731a)\nc4939a460e756189a1e8290c6e6f0c5c76d9f3366dc126304f43d86c0ea8731a\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:35:16 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:35:16.932 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[e37435c6-6c3d-4cac-b7b7-98a4754fc517]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:35:16 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:35:16.933 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa17f6e7a-b0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:35:16 np0005548731 nova_compute[232433]: 2025-12-06 07:35:16.935 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:35:16 np0005548731 kernel: tapa17f6e7a-b0: left promiscuous mode
Dec  6 02:35:16 np0005548731 nova_compute[232433]: 2025-12-06 07:35:16.950 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:35:16 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:35:16.954 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[fe0cd86a-e502-436d-b84d-45083fb04420]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:35:16 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:35:16.970 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[498bb2b2-2120-4547-ace5-40576b9729f7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:35:16 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:35:16.972 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[76386d4f-c545-4548-8208-cd9fc7232acc]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:35:16 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:35:16.986 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[32561f1a-98fe-492c-8c92-972e79cb63c3]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 677539, 'reachable_time': 43684, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 286941, 'error': None, 'target': 'ovnmeta-a17f6e7a-bed5-4778-a86b-02d7790910b2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:35:16 np0005548731 systemd[1]: run-netns-ovnmeta\x2da17f6e7a\x2dbed5\x2d4778\x2da86b\x2d02d7790910b2.mount: Deactivated successfully.
Dec  6 02:35:16 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:35:16.990 144079 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-a17f6e7a-bed5-4778-a86b-02d7790910b2 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Dec  6 02:35:16 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:35:16.990 144079 DEBUG oslo.privsep.daemon [-] privsep: reply[0f7181e7-0cb5-4daf-bd6a-6f26df7115d2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:35:17 np0005548731 nova_compute[232433]: 2025-12-06 07:35:17.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:35:17 np0005548731 nova_compute[232433]: 2025-12-06 07:35:17.127 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:35:17 np0005548731 nova_compute[232433]: 2025-12-06 07:35:17.128 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:35:17 np0005548731 nova_compute[232433]: 2025-12-06 07:35:17.128 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:35:17 np0005548731 nova_compute[232433]: 2025-12-06 07:35:17.128 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  6 02:35:17 np0005548731 nova_compute[232433]: 2025-12-06 07:35:17.128 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:35:17 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 02:35:17 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1096218273' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 02:35:17 np0005548731 nova_compute[232433]: 2025-12-06 07:35:17.548 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.419s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:35:17 np0005548731 nova_compute[232433]: 2025-12-06 07:35:17.661 232437 DEBUG nova.virt.libvirt.driver [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] skipping disk for instance-0000007e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  6 02:35:17 np0005548731 nova_compute[232433]: 2025-12-06 07:35:17.661 232437 DEBUG nova.virt.libvirt.driver [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] skipping disk for instance-0000007e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  6 02:35:17 np0005548731 nova_compute[232433]: 2025-12-06 07:35:17.665 232437 DEBUG nova.virt.libvirt.driver [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] skipping disk for instance-0000007f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  6 02:35:17 np0005548731 nova_compute[232433]: 2025-12-06 07:35:17.665 232437 DEBUG nova.virt.libvirt.driver [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] skipping disk for instance-0000007f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  6 02:35:17 np0005548731 nova_compute[232433]: 2025-12-06 07:35:17.668 232437 DEBUG nova.virt.libvirt.driver [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] skipping disk for instance-00000078 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  6 02:35:17 np0005548731 nova_compute[232433]: 2025-12-06 07:35:17.669 232437 DEBUG nova.virt.libvirt.driver [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] skipping disk for instance-00000078 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  6 02:35:17 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:35:17 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:35:17 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:35:17.777 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:35:17 np0005548731 nova_compute[232433]: 2025-12-06 07:35:17.830 232437 WARNING nova.virt.libvirt.driver [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  6 02:35:17 np0005548731 nova_compute[232433]: 2025-12-06 07:35:17.831 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4117MB free_disk=20.743236541748047GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, 
"label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  6 02:35:17 np0005548731 nova_compute[232433]: 2025-12-06 07:35:17.832 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:35:17 np0005548731 nova_compute[232433]: 2025-12-06 07:35:17.833 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:35:17 np0005548731 nova_compute[232433]: 2025-12-06 07:35:17.834 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:35:17 np0005548731 nova_compute[232433]: 2025-12-06 07:35:17.922 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Instance 6a50a40c-3b05-4c0e-aa67-1489e203824e actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  6 02:35:17 np0005548731 nova_compute[232433]: 2025-12-06 07:35:17.922 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Instance 3662ed68-91aa-4b97-8b12-5c87f99b02c9 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  6 02:35:17 np0005548731 nova_compute[232433]: 2025-12-06 07:35:17.923 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Instance b4be0ef8-945f-47a1-a3a8-5962f1e692e5 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  6 02:35:17 np0005548731 nova_compute[232433]: 2025-12-06 07:35:17.923 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  6 02:35:17 np0005548731 nova_compute[232433]: 2025-12-06 07:35:17.923 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7680MB used_ram=896MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  6 02:35:17 np0005548731 nova_compute[232433]: 2025-12-06 07:35:17.976 232437 DEBUG nova.compute.manager [req-b2ab9975-81fb-4353-af7f-ea8d4bb5636f req-322f0f26-43c1-475b-94d6-ff8a6bfee066 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 3662ed68-91aa-4b97-8b12-5c87f99b02c9] Received event network-vif-unplugged-057f5adb-642a-4046-a07d-f33f8ab42b98 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:35:17 np0005548731 nova_compute[232433]: 2025-12-06 07:35:17.976 232437 DEBUG oslo_concurrency.lockutils [req-b2ab9975-81fb-4353-af7f-ea8d4bb5636f req-322f0f26-43c1-475b-94d6-ff8a6bfee066 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "3662ed68-91aa-4b97-8b12-5c87f99b02c9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:35:17 np0005548731 nova_compute[232433]: 2025-12-06 07:35:17.976 232437 DEBUG oslo_concurrency.lockutils [req-b2ab9975-81fb-4353-af7f-ea8d4bb5636f req-322f0f26-43c1-475b-94d6-ff8a6bfee066 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "3662ed68-91aa-4b97-8b12-5c87f99b02c9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:35:17 np0005548731 nova_compute[232433]: 2025-12-06 07:35:17.977 232437 DEBUG oslo_concurrency.lockutils [req-b2ab9975-81fb-4353-af7f-ea8d4bb5636f req-322f0f26-43c1-475b-94d6-ff8a6bfee066 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "3662ed68-91aa-4b97-8b12-5c87f99b02c9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:35:17 np0005548731 nova_compute[232433]: 2025-12-06 07:35:17.977 232437 DEBUG nova.compute.manager [req-b2ab9975-81fb-4353-af7f-ea8d4bb5636f req-322f0f26-43c1-475b-94d6-ff8a6bfee066 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 3662ed68-91aa-4b97-8b12-5c87f99b02c9] No waiting events found dispatching network-vif-unplugged-057f5adb-642a-4046-a07d-f33f8ab42b98 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 02:35:17 np0005548731 nova_compute[232433]: 2025-12-06 07:35:17.978 232437 DEBUG nova.compute.manager [req-b2ab9975-81fb-4353-af7f-ea8d4bb5636f req-322f0f26-43c1-475b-94d6-ff8a6bfee066 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 3662ed68-91aa-4b97-8b12-5c87f99b02c9] Received event network-vif-unplugged-057f5adb-642a-4046-a07d-f33f8ab42b98 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Dec  6 02:35:17 np0005548731 nova_compute[232433]: 2025-12-06 07:35:17.978 232437 DEBUG nova.compute.manager [req-b2ab9975-81fb-4353-af7f-ea8d4bb5636f req-322f0f26-43c1-475b-94d6-ff8a6bfee066 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 3662ed68-91aa-4b97-8b12-5c87f99b02c9] Received event network-vif-plugged-057f5adb-642a-4046-a07d-f33f8ab42b98 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:35:17 np0005548731 nova_compute[232433]: 2025-12-06 07:35:17.978 232437 DEBUG oslo_concurrency.lockutils [req-b2ab9975-81fb-4353-af7f-ea8d4bb5636f req-322f0f26-43c1-475b-94d6-ff8a6bfee066 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "3662ed68-91aa-4b97-8b12-5c87f99b02c9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:35:17 np0005548731 nova_compute[232433]: 2025-12-06 07:35:17.978 232437 DEBUG oslo_concurrency.lockutils [req-b2ab9975-81fb-4353-af7f-ea8d4bb5636f req-322f0f26-43c1-475b-94d6-ff8a6bfee066 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "3662ed68-91aa-4b97-8b12-5c87f99b02c9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:35:17 np0005548731 nova_compute[232433]: 2025-12-06 07:35:17.979 232437 DEBUG oslo_concurrency.lockutils [req-b2ab9975-81fb-4353-af7f-ea8d4bb5636f req-322f0f26-43c1-475b-94d6-ff8a6bfee066 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "3662ed68-91aa-4b97-8b12-5c87f99b02c9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:35:17 np0005548731 nova_compute[232433]: 2025-12-06 07:35:17.979 232437 DEBUG nova.compute.manager [req-b2ab9975-81fb-4353-af7f-ea8d4bb5636f req-322f0f26-43c1-475b-94d6-ff8a6bfee066 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 3662ed68-91aa-4b97-8b12-5c87f99b02c9] No waiting events found dispatching network-vif-plugged-057f5adb-642a-4046-a07d-f33f8ab42b98 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 02:35:17 np0005548731 nova_compute[232433]: 2025-12-06 07:35:17.979 232437 WARNING nova.compute.manager [req-b2ab9975-81fb-4353-af7f-ea8d4bb5636f req-322f0f26-43c1-475b-94d6-ff8a6bfee066 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 3662ed68-91aa-4b97-8b12-5c87f99b02c9] Received unexpected event network-vif-plugged-057f5adb-642a-4046-a07d-f33f8ab42b98 for instance with vm_state active and task_state deleting.#033[00m
Dec  6 02:35:18 np0005548731 nova_compute[232433]: 2025-12-06 07:35:18.017 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:35:18 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 02:35:18 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/661832997' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 02:35:18 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e293 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:35:18 np0005548731 nova_compute[232433]: 2025-12-06 07:35:18.437 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.420s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:35:18 np0005548731 nova_compute[232433]: 2025-12-06 07:35:18.442 232437 DEBUG nova.compute.provider_tree [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Inventory has not changed in ProviderTree for provider: 6d00757a-082f-486d-ae84-869a2ba2e6e7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  6 02:35:18 np0005548731 nova_compute[232433]: 2025-12-06 07:35:18.457 232437 DEBUG nova.scheduler.client.report [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Inventory has not changed for provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  6 02:35:18 np0005548731 nova_compute[232433]: 2025-12-06 07:35:18.486 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  6 02:35:18 np0005548731 nova_compute[232433]: 2025-12-06 07:35:18.486 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.653s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:35:18 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:35:18 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:35:18 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:35:18.596 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:35:19 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:35:19 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 02:35:19 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:35:19.779 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 02:35:20 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:35:20 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:35:20 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:35:20.598 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:35:20 np0005548731 nova_compute[232433]: 2025-12-06 07:35:20.923 232437 DEBUG nova.compute.manager [req-a247af79-a0f5-4730-ba33-f98c865cc02f req-06f9c25a-402d-4afb-8350-60574904b2f4 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: b4be0ef8-945f-47a1-a3a8-5962f1e692e5] Received event network-changed-657a8a48-7a74-4b37-a294-df0b2fd9332c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:35:20 np0005548731 nova_compute[232433]: 2025-12-06 07:35:20.924 232437 DEBUG nova.compute.manager [req-a247af79-a0f5-4730-ba33-f98c865cc02f req-06f9c25a-402d-4afb-8350-60574904b2f4 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: b4be0ef8-945f-47a1-a3a8-5962f1e692e5] Refreshing instance network info cache due to event network-changed-657a8a48-7a74-4b37-a294-df0b2fd9332c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  6 02:35:20 np0005548731 nova_compute[232433]: 2025-12-06 07:35:20.924 232437 DEBUG oslo_concurrency.lockutils [req-a247af79-a0f5-4730-ba33-f98c865cc02f req-06f9c25a-402d-4afb-8350-60574904b2f4 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-b4be0ef8-945f-47a1-a3a8-5962f1e692e5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 02:35:20 np0005548731 nova_compute[232433]: 2025-12-06 07:35:20.924 232437 DEBUG oslo_concurrency.lockutils [req-a247af79-a0f5-4730-ba33-f98c865cc02f req-06f9c25a-402d-4afb-8350-60574904b2f4 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-b4be0ef8-945f-47a1-a3a8-5962f1e692e5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 02:35:20 np0005548731 nova_compute[232433]: 2025-12-06 07:35:20.925 232437 DEBUG nova.network.neutron [req-a247af79-a0f5-4730-ba33-f98c865cc02f req-06f9c25a-402d-4afb-8350-60574904b2f4 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: b4be0ef8-945f-47a1-a3a8-5962f1e692e5] Refreshing network info cache for port 657a8a48-7a74-4b37-a294-df0b2fd9332c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  6 02:35:21 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:35:21 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:35:21 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:35:21.782 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:35:21 np0005548731 nova_compute[232433]: 2025-12-06 07:35:21.801 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:35:22 np0005548731 nova_compute[232433]: 2025-12-06 07:35:22.487 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:35:22 np0005548731 nova_compute[232433]: 2025-12-06 07:35:22.487 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:35:22 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:35:22 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:35:22 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:35:22.601 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:35:22 np0005548731 nova_compute[232433]: 2025-12-06 07:35:22.880 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:35:22 np0005548731 nova_compute[232433]: 2025-12-06 07:35:22.964 232437 DEBUG oslo_concurrency.lockutils [None req-b0817264-8471-4345-9cfb-5d004f2937b6 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Acquiring lock "3d4b97f9-b8c3-4764-87f9-4006968fecd4" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:35:22 np0005548731 nova_compute[232433]: 2025-12-06 07:35:22.964 232437 DEBUG oslo_concurrency.lockutils [None req-b0817264-8471-4345-9cfb-5d004f2937b6 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Lock "3d4b97f9-b8c3-4764-87f9-4006968fecd4" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:35:22 np0005548731 nova_compute[232433]: 2025-12-06 07:35:22.987 232437 DEBUG nova.compute.manager [None req-b0817264-8471-4345-9cfb-5d004f2937b6 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] [instance: 3d4b97f9-b8c3-4764-87f9-4006968fecd4] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Dec  6 02:35:23 np0005548731 nova_compute[232433]: 2025-12-06 07:35:23.058 232437 DEBUG oslo_concurrency.lockutils [None req-b0817264-8471-4345-9cfb-5d004f2937b6 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:35:23 np0005548731 nova_compute[232433]: 2025-12-06 07:35:23.058 232437 DEBUG oslo_concurrency.lockutils [None req-b0817264-8471-4345-9cfb-5d004f2937b6 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:35:23 np0005548731 nova_compute[232433]: 2025-12-06 07:35:23.063 232437 DEBUG nova.virt.hardware [None req-b0817264-8471-4345-9cfb-5d004f2937b6 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Dec  6 02:35:23 np0005548731 nova_compute[232433]: 2025-12-06 07:35:23.064 232437 INFO nova.compute.claims [None req-b0817264-8471-4345-9cfb-5d004f2937b6 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] [instance: 3d4b97f9-b8c3-4764-87f9-4006968fecd4] Claim successful on node compute-2.ctlplane.example.com#033[00m
Dec  6 02:35:23 np0005548731 nova_compute[232433]: 2025-12-06 07:35:23.291 232437 DEBUG oslo_concurrency.processutils [None req-b0817264-8471-4345-9cfb-5d004f2937b6 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:35:23 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e293 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:35:23 np0005548731 nova_compute[232433]: 2025-12-06 07:35:23.588 232437 DEBUG nova.network.neutron [req-a247af79-a0f5-4730-ba33-f98c865cc02f req-06f9c25a-402d-4afb-8350-60574904b2f4 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: b4be0ef8-945f-47a1-a3a8-5962f1e692e5] Updated VIF entry in instance network info cache for port 657a8a48-7a74-4b37-a294-df0b2fd9332c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  6 02:35:23 np0005548731 nova_compute[232433]: 2025-12-06 07:35:23.589 232437 DEBUG nova.network.neutron [req-a247af79-a0f5-4730-ba33-f98c865cc02f req-06f9c25a-402d-4afb-8350-60574904b2f4 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: b4be0ef8-945f-47a1-a3a8-5962f1e692e5] Updating instance_info_cache with network_info: [{"id": "657a8a48-7a74-4b37-a294-df0b2fd9332c", "address": "fa:16:3e:f4:fe:4a", "network": {"id": "5bb49e8a-b939-4c79-851c-62c634be0272", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-50482264-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.187", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "833f4cf9f5a64b2ab94c3bf330353a31", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap657a8a48-7a", "ovs_interfaceid": "657a8a48-7a74-4b37-a294-df0b2fd9332c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 02:35:23 np0005548731 nova_compute[232433]: 2025-12-06 07:35:23.695 232437 DEBUG oslo_concurrency.lockutils [req-a247af79-a0f5-4730-ba33-f98c865cc02f req-06f9c25a-402d-4afb-8350-60574904b2f4 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-b4be0ef8-945f-47a1-a3a8-5962f1e692e5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 02:35:23 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 02:35:23 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3294554558' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 02:35:23 np0005548731 nova_compute[232433]: 2025-12-06 07:35:23.722 232437 DEBUG oslo_concurrency.processutils [None req-b0817264-8471-4345-9cfb-5d004f2937b6 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.431s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:35:23 np0005548731 nova_compute[232433]: 2025-12-06 07:35:23.727 232437 DEBUG nova.compute.provider_tree [None req-b0817264-8471-4345-9cfb-5d004f2937b6 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Inventory has not changed in ProviderTree for provider: 6d00757a-082f-486d-ae84-869a2ba2e6e7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  6 02:35:23 np0005548731 nova_compute[232433]: 2025-12-06 07:35:23.748 232437 DEBUG nova.scheduler.client.report [None req-b0817264-8471-4345-9cfb-5d004f2937b6 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Inventory has not changed for provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  6 02:35:23 np0005548731 nova_compute[232433]: 2025-12-06 07:35:23.768 232437 DEBUG oslo_concurrency.lockutils [None req-b0817264-8471-4345-9cfb-5d004f2937b6 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.709s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:35:23 np0005548731 nova_compute[232433]: 2025-12-06 07:35:23.769 232437 DEBUG nova.compute.manager [None req-b0817264-8471-4345-9cfb-5d004f2937b6 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] [instance: 3d4b97f9-b8c3-4764-87f9-4006968fecd4] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Dec  6 02:35:23 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:35:23 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 02:35:23 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:35:23.784 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 02:35:23 np0005548731 nova_compute[232433]: 2025-12-06 07:35:23.866 232437 DEBUG nova.compute.manager [None req-b0817264-8471-4345-9cfb-5d004f2937b6 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] [instance: 3d4b97f9-b8c3-4764-87f9-4006968fecd4] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Dec  6 02:35:23 np0005548731 nova_compute[232433]: 2025-12-06 07:35:23.866 232437 DEBUG nova.network.neutron [None req-b0817264-8471-4345-9cfb-5d004f2937b6 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] [instance: 3d4b97f9-b8c3-4764-87f9-4006968fecd4] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Dec  6 02:35:23 np0005548731 nova_compute[232433]: 2025-12-06 07:35:23.940 232437 INFO nova.virt.libvirt.driver [None req-b0817264-8471-4345-9cfb-5d004f2937b6 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] [instance: 3d4b97f9-b8c3-4764-87f9-4006968fecd4] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Dec  6 02:35:24 np0005548731 nova_compute[232433]: 2025-12-06 07:35:24.126 232437 DEBUG nova.compute.manager [None req-b0817264-8471-4345-9cfb-5d004f2937b6 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] [instance: 3d4b97f9-b8c3-4764-87f9-4006968fecd4] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Dec  6 02:35:24 np0005548731 nova_compute[232433]: 2025-12-06 07:35:24.407 232437 DEBUG nova.compute.manager [None req-b0817264-8471-4345-9cfb-5d004f2937b6 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] [instance: 3d4b97f9-b8c3-4764-87f9-4006968fecd4] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Dec  6 02:35:24 np0005548731 nova_compute[232433]: 2025-12-06 07:35:24.408 232437 DEBUG nova.virt.libvirt.driver [None req-b0817264-8471-4345-9cfb-5d004f2937b6 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] [instance: 3d4b97f9-b8c3-4764-87f9-4006968fecd4] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Dec  6 02:35:24 np0005548731 nova_compute[232433]: 2025-12-06 07:35:24.409 232437 INFO nova.virt.libvirt.driver [None req-b0817264-8471-4345-9cfb-5d004f2937b6 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] [instance: 3d4b97f9-b8c3-4764-87f9-4006968fecd4] Creating image(s)#033[00m
Dec  6 02:35:24 np0005548731 nova_compute[232433]: 2025-12-06 07:35:24.434 232437 DEBUG nova.storage.rbd_utils [None req-b0817264-8471-4345-9cfb-5d004f2937b6 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] rbd image 3d4b97f9-b8c3-4764-87f9-4006968fecd4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:35:24 np0005548731 nova_compute[232433]: 2025-12-06 07:35:24.461 232437 DEBUG nova.storage.rbd_utils [None req-b0817264-8471-4345-9cfb-5d004f2937b6 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] rbd image 3d4b97f9-b8c3-4764-87f9-4006968fecd4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:35:24 np0005548731 nova_compute[232433]: 2025-12-06 07:35:24.487 232437 DEBUG nova.storage.rbd_utils [None req-b0817264-8471-4345-9cfb-5d004f2937b6 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] rbd image 3d4b97f9-b8c3-4764-87f9-4006968fecd4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:35:24 np0005548731 nova_compute[232433]: 2025-12-06 07:35:24.490 232437 DEBUG oslo_concurrency.processutils [None req-b0817264-8471-4345-9cfb-5d004f2937b6 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:35:24 np0005548731 nova_compute[232433]: 2025-12-06 07:35:24.539 232437 DEBUG nova.policy [None req-b0817264-8471-4345-9cfb-5d004f2937b6 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '605b5481e0c944048e6a67046c30d693', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '833f4cf9f5a64b2ab94c3bf330353a31', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Dec  6 02:35:24 np0005548731 nova_compute[232433]: 2025-12-06 07:35:24.555 232437 DEBUG oslo_concurrency.processutils [None req-b0817264-8471-4345-9cfb-5d004f2937b6 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:35:24 np0005548731 nova_compute[232433]: 2025-12-06 07:35:24.556 232437 DEBUG oslo_concurrency.lockutils [None req-b0817264-8471-4345-9cfb-5d004f2937b6 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Acquiring lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:35:24 np0005548731 nova_compute[232433]: 2025-12-06 07:35:24.557 232437 DEBUG oslo_concurrency.lockutils [None req-b0817264-8471-4345-9cfb-5d004f2937b6 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:35:24 np0005548731 nova_compute[232433]: 2025-12-06 07:35:24.557 232437 DEBUG oslo_concurrency.lockutils [None req-b0817264-8471-4345-9cfb-5d004f2937b6 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:35:24 np0005548731 nova_compute[232433]: 2025-12-06 07:35:24.582 232437 DEBUG nova.storage.rbd_utils [None req-b0817264-8471-4345-9cfb-5d004f2937b6 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] rbd image 3d4b97f9-b8c3-4764-87f9-4006968fecd4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:35:24 np0005548731 nova_compute[232433]: 2025-12-06 07:35:24.585 232437 DEBUG oslo_concurrency.processutils [None req-b0817264-8471-4345-9cfb-5d004f2937b6 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef 3d4b97f9-b8c3-4764-87f9-4006968fecd4_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:35:24 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:35:24 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:35:24 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:35:24.604 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:35:24 np0005548731 podman[287104]: 2025-12-06 07:35:24.905577837 +0000 UTC m=+0.057561043 container health_status 26c2ce9bdb83c6db5ccad7f5b2a7cb4e9354ab0235ad2f942ea42c8feda2142b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Dec  6 02:35:24 np0005548731 podman[287106]: 2025-12-06 07:35:24.91103911 +0000 UTC m=+0.062408392 container health_status ae4fd08cdec31d6ae2a6b91ffff7deb6ca5a8375cb52ee4b685cc6f25796824e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, config_id=multipathd, io.buildah.version=1.41.3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec  6 02:35:24 np0005548731 podman[287105]: 2025-12-06 07:35:24.933524878 +0000 UTC m=+0.086205922 container health_status a118c3becf56cfbcae208065771ebf10a65162bb7b96389971293e534823fb78 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller)
Dec  6 02:35:25 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:35:25.601 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=49, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:ec:b3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '32:72:e7:89:e0:7d'}, ipsec=False) old=SB_Global(nb_cfg=48) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 02:35:25 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:35:25.603 143965 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec  6 02:35:25 np0005548731 nova_compute[232433]: 2025-12-06 07:35:25.603 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:35:25 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:35:25 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:35:25 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:35:25.787 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:35:25 np0005548731 nova_compute[232433]: 2025-12-06 07:35:25.963 232437 DEBUG nova.network.neutron [None req-b0817264-8471-4345-9cfb-5d004f2937b6 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] [instance: 3d4b97f9-b8c3-4764-87f9-4006968fecd4] Successfully created port: bfac9ac2-c1cc-4dc1-830c-44e5990fb8e1 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Dec  6 02:35:26 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:35:26.605 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=9f96b960-b4f2-40bd-ae99-08121f5e8b78, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '49'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:35:26 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:35:26 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:35:26 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:35:26.606 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:35:26 np0005548731 nova_compute[232433]: 2025-12-06 07:35:26.806 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:35:27 np0005548731 nova_compute[232433]: 2025-12-06 07:35:27.657 232437 DEBUG nova.network.neutron [None req-b0817264-8471-4345-9cfb-5d004f2937b6 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] [instance: 3d4b97f9-b8c3-4764-87f9-4006968fecd4] Successfully updated port: bfac9ac2-c1cc-4dc1-830c-44e5990fb8e1 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Dec  6 02:35:27 np0005548731 nova_compute[232433]: 2025-12-06 07:35:27.679 232437 DEBUG oslo_concurrency.lockutils [None req-b0817264-8471-4345-9cfb-5d004f2937b6 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Acquiring lock "refresh_cache-3d4b97f9-b8c3-4764-87f9-4006968fecd4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 02:35:27 np0005548731 nova_compute[232433]: 2025-12-06 07:35:27.680 232437 DEBUG oslo_concurrency.lockutils [None req-b0817264-8471-4345-9cfb-5d004f2937b6 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Acquired lock "refresh_cache-3d4b97f9-b8c3-4764-87f9-4006968fecd4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 02:35:27 np0005548731 nova_compute[232433]: 2025-12-06 07:35:27.680 232437 DEBUG nova.network.neutron [None req-b0817264-8471-4345-9cfb-5d004f2937b6 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] [instance: 3d4b97f9-b8c3-4764-87f9-4006968fecd4] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec  6 02:35:27 np0005548731 nova_compute[232433]: 2025-12-06 07:35:27.759 232437 DEBUG nova.compute.manager [req-de2f08ca-f4c5-40c6-9ae1-4133196b4531 req-552ca465-c02e-4e66-8892-0a3aed7d3eec 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 3d4b97f9-b8c3-4764-87f9-4006968fecd4] Received event network-changed-bfac9ac2-c1cc-4dc1-830c-44e5990fb8e1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:35:27 np0005548731 nova_compute[232433]: 2025-12-06 07:35:27.760 232437 DEBUG nova.compute.manager [req-de2f08ca-f4c5-40c6-9ae1-4133196b4531 req-552ca465-c02e-4e66-8892-0a3aed7d3eec 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 3d4b97f9-b8c3-4764-87f9-4006968fecd4] Refreshing instance network info cache due to event network-changed-bfac9ac2-c1cc-4dc1-830c-44e5990fb8e1. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  6 02:35:27 np0005548731 nova_compute[232433]: 2025-12-06 07:35:27.760 232437 DEBUG oslo_concurrency.lockutils [req-de2f08ca-f4c5-40c6-9ae1-4133196b4531 req-552ca465-c02e-4e66-8892-0a3aed7d3eec 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-3d4b97f9-b8c3-4764-87f9-4006968fecd4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 02:35:27 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:35:27 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:35:27 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:35:27.789 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:35:27 np0005548731 nova_compute[232433]: 2025-12-06 07:35:27.834 232437 DEBUG nova.network.neutron [None req-b0817264-8471-4345-9cfb-5d004f2937b6 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] [instance: 3d4b97f9-b8c3-4764-87f9-4006968fecd4] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec  6 02:35:27 np0005548731 nova_compute[232433]: 2025-12-06 07:35:27.882 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:35:28 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e293 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:35:28 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:35:28 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:35:28 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:35:28.608 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:35:28 np0005548731 nova_compute[232433]: 2025-12-06 07:35:28.777 232437 DEBUG nova.network.neutron [None req-b0817264-8471-4345-9cfb-5d004f2937b6 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] [instance: 3d4b97f9-b8c3-4764-87f9-4006968fecd4] Updating instance_info_cache with network_info: [{"id": "bfac9ac2-c1cc-4dc1-830c-44e5990fb8e1", "address": "fa:16:3e:c3:31:81", "network": {"id": "5bb49e8a-b939-4c79-851c-62c634be0272", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-50482264-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "833f4cf9f5a64b2ab94c3bf330353a31", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbfac9ac2-c1", "ovs_interfaceid": "bfac9ac2-c1cc-4dc1-830c-44e5990fb8e1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 02:35:28 np0005548731 nova_compute[232433]: 2025-12-06 07:35:28.811 232437 DEBUG oslo_concurrency.lockutils [None req-b0817264-8471-4345-9cfb-5d004f2937b6 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Releasing lock "refresh_cache-3d4b97f9-b8c3-4764-87f9-4006968fecd4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 02:35:28 np0005548731 nova_compute[232433]: 2025-12-06 07:35:28.812 232437 DEBUG nova.compute.manager [None req-b0817264-8471-4345-9cfb-5d004f2937b6 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] [instance: 3d4b97f9-b8c3-4764-87f9-4006968fecd4] Instance network_info: |[{"id": "bfac9ac2-c1cc-4dc1-830c-44e5990fb8e1", "address": "fa:16:3e:c3:31:81", "network": {"id": "5bb49e8a-b939-4c79-851c-62c634be0272", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-50482264-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "833f4cf9f5a64b2ab94c3bf330353a31", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbfac9ac2-c1", "ovs_interfaceid": "bfac9ac2-c1cc-4dc1-830c-44e5990fb8e1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Dec  6 02:35:28 np0005548731 nova_compute[232433]: 2025-12-06 07:35:28.813 232437 DEBUG oslo_concurrency.lockutils [req-de2f08ca-f4c5-40c6-9ae1-4133196b4531 req-552ca465-c02e-4e66-8892-0a3aed7d3eec 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-3d4b97f9-b8c3-4764-87f9-4006968fecd4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 02:35:28 np0005548731 nova_compute[232433]: 2025-12-06 07:35:28.813 232437 DEBUG nova.network.neutron [req-de2f08ca-f4c5-40c6-9ae1-4133196b4531 req-552ca465-c02e-4e66-8892-0a3aed7d3eec 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 3d4b97f9-b8c3-4764-87f9-4006968fecd4] Refreshing network info cache for port bfac9ac2-c1cc-4dc1-830c-44e5990fb8e1 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  6 02:35:29 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:35:29 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:35:29 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:35:29.792 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:35:29 np0005548731 nova_compute[232433]: 2025-12-06 07:35:29.878 232437 DEBUG nova.network.neutron [req-de2f08ca-f4c5-40c6-9ae1-4133196b4531 req-552ca465-c02e-4e66-8892-0a3aed7d3eec 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 3d4b97f9-b8c3-4764-87f9-4006968fecd4] Updated VIF entry in instance network info cache for port bfac9ac2-c1cc-4dc1-830c-44e5990fb8e1. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  6 02:35:29 np0005548731 nova_compute[232433]: 2025-12-06 07:35:29.879 232437 DEBUG nova.network.neutron [req-de2f08ca-f4c5-40c6-9ae1-4133196b4531 req-552ca465-c02e-4e66-8892-0a3aed7d3eec 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 3d4b97f9-b8c3-4764-87f9-4006968fecd4] Updating instance_info_cache with network_info: [{"id": "bfac9ac2-c1cc-4dc1-830c-44e5990fb8e1", "address": "fa:16:3e:c3:31:81", "network": {"id": "5bb49e8a-b939-4c79-851c-62c634be0272", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-50482264-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "833f4cf9f5a64b2ab94c3bf330353a31", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbfac9ac2-c1", "ovs_interfaceid": "bfac9ac2-c1cc-4dc1-830c-44e5990fb8e1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 02:35:29 np0005548731 nova_compute[232433]: 2025-12-06 07:35:29.892 232437 DEBUG oslo_concurrency.lockutils [req-de2f08ca-f4c5-40c6-9ae1-4133196b4531 req-552ca465-c02e-4e66-8892-0a3aed7d3eec 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-3d4b97f9-b8c3-4764-87f9-4006968fecd4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 02:35:30 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:35:30 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:35:30 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:35:30.610 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:35:30 np0005548731 nova_compute[232433]: 2025-12-06 07:35:30.883 232437 DEBUG oslo_concurrency.processutils [None req-b0817264-8471-4345-9cfb-5d004f2937b6 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef 3d4b97f9-b8c3-4764-87f9-4006968fecd4_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 6.298s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:35:31 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:35:31 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:35:31 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:35:31.796 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:35:31 np0005548731 nova_compute[232433]: 2025-12-06 07:35:31.937 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:35:31 np0005548731 nova_compute[232433]: 2025-12-06 07:35:31.939 232437 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765006516.7372317, 3662ed68-91aa-4b97-8b12-5c87f99b02c9 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 02:35:31 np0005548731 nova_compute[232433]: 2025-12-06 07:35:31.939 232437 INFO nova.compute.manager [-] [instance: 3662ed68-91aa-4b97-8b12-5c87f99b02c9] VM Stopped (Lifecycle Event)#033[00m
Dec  6 02:35:31 np0005548731 nova_compute[232433]: 2025-12-06 07:35:31.981 232437 DEBUG nova.compute.manager [None req-8e7b4dae-5655-49e4-a520-0fb9abc9ec17 - - - - - -] [instance: 3662ed68-91aa-4b97-8b12-5c87f99b02c9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:35:31 np0005548731 nova_compute[232433]: 2025-12-06 07:35:31.987 232437 DEBUG nova.storage.rbd_utils [None req-b0817264-8471-4345-9cfb-5d004f2937b6 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] resizing rbd image 3d4b97f9-b8c3-4764-87f9-4006968fecd4_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Dec  6 02:35:32 np0005548731 nova_compute[232433]: 2025-12-06 07:35:32.025 232437 DEBUG nova.compute.manager [None req-8e7b4dae-5655-49e4-a520-0fb9abc9ec17 - - - - - -] [instance: 3662ed68-91aa-4b97-8b12-5c87f99b02c9] Synchronizing instance power state after lifecycle event "Stopped"; current vm_state: active, current task_state: deleting, current DB power_state: 1, VM power_state: 4 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  6 02:35:32 np0005548731 nova_compute[232433]: 2025-12-06 07:35:32.055 232437 INFO nova.compute.manager [None req-8e7b4dae-5655-49e4-a520-0fb9abc9ec17 - - - - - -] [instance: 3662ed68-91aa-4b97-8b12-5c87f99b02c9] During sync_power_state the instance has a pending task (deleting). Skip.#033[00m
Dec  6 02:35:32 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:35:32 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:35:32 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:35:32.613 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:35:32 np0005548731 nova_compute[232433]: 2025-12-06 07:35:32.885 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:35:33 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e293 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:35:33 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:35:33 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.002000047s ======
Dec  6 02:35:33 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:35:33.797 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000047s
Dec  6 02:35:34 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:35:34 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:35:34 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:35:34.616 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:35:34 np0005548731 nova_compute[232433]: 2025-12-06 07:35:34.916 232437 DEBUG nova.objects.instance [None req-b0817264-8471-4345-9cfb-5d004f2937b6 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Lazy-loading 'migration_context' on Instance uuid 3d4b97f9-b8c3-4764-87f9-4006968fecd4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:35:35 np0005548731 nova_compute[232433]: 2025-12-06 07:35:35.082 232437 DEBUG nova.virt.libvirt.driver [None req-b0817264-8471-4345-9cfb-5d004f2937b6 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] [instance: 3d4b97f9-b8c3-4764-87f9-4006968fecd4] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Dec  6 02:35:35 np0005548731 nova_compute[232433]: 2025-12-06 07:35:35.083 232437 DEBUG nova.virt.libvirt.driver [None req-b0817264-8471-4345-9cfb-5d004f2937b6 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] [instance: 3d4b97f9-b8c3-4764-87f9-4006968fecd4] Ensure instance console log exists: /var/lib/nova/instances/3d4b97f9-b8c3-4764-87f9-4006968fecd4/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec  6 02:35:35 np0005548731 nova_compute[232433]: 2025-12-06 07:35:35.084 232437 DEBUG oslo_concurrency.lockutils [None req-b0817264-8471-4345-9cfb-5d004f2937b6 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:35:35 np0005548731 nova_compute[232433]: 2025-12-06 07:35:35.085 232437 DEBUG oslo_concurrency.lockutils [None req-b0817264-8471-4345-9cfb-5d004f2937b6 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:35:35 np0005548731 nova_compute[232433]: 2025-12-06 07:35:35.085 232437 DEBUG oslo_concurrency.lockutils [None req-b0817264-8471-4345-9cfb-5d004f2937b6 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:35:35 np0005548731 nova_compute[232433]: 2025-12-06 07:35:35.089 232437 DEBUG nova.virt.libvirt.driver [None req-b0817264-8471-4345-9cfb-5d004f2937b6 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] [instance: 3d4b97f9-b8c3-4764-87f9-4006968fecd4] Start _get_guest_xml network_info=[{"id": "bfac9ac2-c1cc-4dc1-830c-44e5990fb8e1", "address": "fa:16:3e:c3:31:81", "network": {"id": "5bb49e8a-b939-4c79-851c-62c634be0272", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-50482264-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "833f4cf9f5a64b2ab94c3bf330353a31", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbfac9ac2-c1", "ovs_interfaceid": "bfac9ac2-c1cc-4dc1-830c-44e5990fb8e1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'device_type': 'disk', 'encryption_secret_uuid': None, 'disk_bus': 'virtio', 'boot_index': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'encryption_options': None, 'size': 0, 'encrypted': False, 'image_id': '6efab05d-c7cf-4770-a5c3-c806a2739063'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec  6 02:35:35 np0005548731 nova_compute[232433]: 2025-12-06 07:35:35.096 232437 WARNING nova.virt.libvirt.driver [None req-b0817264-8471-4345-9cfb-5d004f2937b6 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  6 02:35:35 np0005548731 nova_compute[232433]: 2025-12-06 07:35:35.106 232437 DEBUG nova.virt.libvirt.host [None req-b0817264-8471-4345-9cfb-5d004f2937b6 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec  6 02:35:35 np0005548731 nova_compute[232433]: 2025-12-06 07:35:35.106 232437 DEBUG nova.virt.libvirt.host [None req-b0817264-8471-4345-9cfb-5d004f2937b6 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec  6 02:35:35 np0005548731 nova_compute[232433]: 2025-12-06 07:35:35.109 232437 DEBUG nova.virt.libvirt.host [None req-b0817264-8471-4345-9cfb-5d004f2937b6 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec  6 02:35:35 np0005548731 nova_compute[232433]: 2025-12-06 07:35:35.110 232437 DEBUG nova.virt.libvirt.host [None req-b0817264-8471-4345-9cfb-5d004f2937b6 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec  6 02:35:35 np0005548731 nova_compute[232433]: 2025-12-06 07:35:35.111 232437 DEBUG nova.virt.libvirt.driver [None req-b0817264-8471-4345-9cfb-5d004f2937b6 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec  6 02:35:35 np0005548731 nova_compute[232433]: 2025-12-06 07:35:35.111 232437 DEBUG nova.virt.hardware [None req-b0817264-8471-4345-9cfb-5d004f2937b6 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-06T06:56:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='25848a18-11d9-4f11-80b5-5d005675c76d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec  6 02:35:35 np0005548731 nova_compute[232433]: 2025-12-06 07:35:35.112 232437 DEBUG nova.virt.hardware [None req-b0817264-8471-4345-9cfb-5d004f2937b6 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec  6 02:35:35 np0005548731 nova_compute[232433]: 2025-12-06 07:35:35.112 232437 DEBUG nova.virt.hardware [None req-b0817264-8471-4345-9cfb-5d004f2937b6 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec  6 02:35:35 np0005548731 nova_compute[232433]: 2025-12-06 07:35:35.112 232437 DEBUG nova.virt.hardware [None req-b0817264-8471-4345-9cfb-5d004f2937b6 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec  6 02:35:35 np0005548731 nova_compute[232433]: 2025-12-06 07:35:35.112 232437 DEBUG nova.virt.hardware [None req-b0817264-8471-4345-9cfb-5d004f2937b6 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec  6 02:35:35 np0005548731 nova_compute[232433]: 2025-12-06 07:35:35.113 232437 DEBUG nova.virt.hardware [None req-b0817264-8471-4345-9cfb-5d004f2937b6 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec  6 02:35:35 np0005548731 nova_compute[232433]: 2025-12-06 07:35:35.113 232437 DEBUG nova.virt.hardware [None req-b0817264-8471-4345-9cfb-5d004f2937b6 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec  6 02:35:35 np0005548731 nova_compute[232433]: 2025-12-06 07:35:35.113 232437 DEBUG nova.virt.hardware [None req-b0817264-8471-4345-9cfb-5d004f2937b6 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec  6 02:35:35 np0005548731 nova_compute[232433]: 2025-12-06 07:35:35.114 232437 DEBUG nova.virt.hardware [None req-b0817264-8471-4345-9cfb-5d004f2937b6 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec  6 02:35:35 np0005548731 nova_compute[232433]: 2025-12-06 07:35:35.114 232437 DEBUG nova.virt.hardware [None req-b0817264-8471-4345-9cfb-5d004f2937b6 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec  6 02:35:35 np0005548731 nova_compute[232433]: 2025-12-06 07:35:35.114 232437 DEBUG nova.virt.hardware [None req-b0817264-8471-4345-9cfb-5d004f2937b6 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec  6 02:35:35 np0005548731 nova_compute[232433]: 2025-12-06 07:35:35.117 232437 DEBUG oslo_concurrency.processutils [None req-b0817264-8471-4345-9cfb-5d004f2937b6 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:35:35 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  6 02:35:35 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/4096406409' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  6 02:35:35 np0005548731 nova_compute[232433]: 2025-12-06 07:35:35.576 232437 DEBUG oslo_concurrency.processutils [None req-b0817264-8471-4345-9cfb-5d004f2937b6 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.459s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:35:35 np0005548731 nova_compute[232433]: 2025-12-06 07:35:35.604 232437 DEBUG nova.storage.rbd_utils [None req-b0817264-8471-4345-9cfb-5d004f2937b6 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] rbd image 3d4b97f9-b8c3-4764-87f9-4006968fecd4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:35:35 np0005548731 nova_compute[232433]: 2025-12-06 07:35:35.609 232437 DEBUG oslo_concurrency.processutils [None req-b0817264-8471-4345-9cfb-5d004f2937b6 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:35:35 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:35:35 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:35:35 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:35:35.801 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:35:36 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  6 02:35:36 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2344886864' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  6 02:35:36 np0005548731 nova_compute[232433]: 2025-12-06 07:35:36.075 232437 DEBUG oslo_concurrency.processutils [None req-b0817264-8471-4345-9cfb-5d004f2937b6 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.466s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:35:36 np0005548731 nova_compute[232433]: 2025-12-06 07:35:36.079 232437 DEBUG nova.virt.libvirt.vif [None req-b0817264-8471-4345-9cfb-5d004f2937b6 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T07:35:21Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='multiattach-server-1',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='multiattach-server-1',id=129,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJvvXUJX6GT0XQikpBKT/pWa9/7+F48wLkdnGcJYsqojmErT+oUc0gEHpXW8ulxQp5/Qun0IejqspzMhFiBiIMspmuXC7WiBfiNlH7z/XH9UjP9DXKhc6lZtmV9q0VvTnQ==',key_name='tempest-keypair-1449411209',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='833f4cf9f5a64b2ab94c3bf330353a31',ramdisk_id='',reservation_id='r-pbz3nfcu',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',net
work_allocated='True',owner_project_name='tempest-AttachVolumeMultiAttachTest-690984293',owner_user_name='tempest-AttachVolumeMultiAttachTest-690984293-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:35:24Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='605b5481e0c944048e6a67046c30d693',uuid=3d4b97f9-b8c3-4764-87f9-4006968fecd4,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "bfac9ac2-c1cc-4dc1-830c-44e5990fb8e1", "address": "fa:16:3e:c3:31:81", "network": {"id": "5bb49e8a-b939-4c79-851c-62c634be0272", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-50482264-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "833f4cf9f5a64b2ab94c3bf330353a31", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbfac9ac2-c1", "ovs_interfaceid": "bfac9ac2-c1cc-4dc1-830c-44e5990fb8e1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec  6 02:35:36 np0005548731 nova_compute[232433]: 2025-12-06 07:35:36.079 232437 DEBUG nova.network.os_vif_util [None req-b0817264-8471-4345-9cfb-5d004f2937b6 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Converting VIF {"id": "bfac9ac2-c1cc-4dc1-830c-44e5990fb8e1", "address": "fa:16:3e:c3:31:81", "network": {"id": "5bb49e8a-b939-4c79-851c-62c634be0272", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-50482264-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "833f4cf9f5a64b2ab94c3bf330353a31", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbfac9ac2-c1", "ovs_interfaceid": "bfac9ac2-c1cc-4dc1-830c-44e5990fb8e1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 02:35:36 np0005548731 nova_compute[232433]: 2025-12-06 07:35:36.080 232437 DEBUG nova.network.os_vif_util [None req-b0817264-8471-4345-9cfb-5d004f2937b6 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c3:31:81,bridge_name='br-int',has_traffic_filtering=True,id=bfac9ac2-c1cc-4dc1-830c-44e5990fb8e1,network=Network(5bb49e8a-b939-4c79-851c-62c634be0272),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbfac9ac2-c1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 02:35:36 np0005548731 nova_compute[232433]: 2025-12-06 07:35:36.082 232437 DEBUG nova.objects.instance [None req-b0817264-8471-4345-9cfb-5d004f2937b6 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Lazy-loading 'pci_devices' on Instance uuid 3d4b97f9-b8c3-4764-87f9-4006968fecd4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:35:36 np0005548731 nova_compute[232433]: 2025-12-06 07:35:36.167 232437 DEBUG nova.virt.libvirt.driver [None req-b0817264-8471-4345-9cfb-5d004f2937b6 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] [instance: 3d4b97f9-b8c3-4764-87f9-4006968fecd4] End _get_guest_xml xml=<domain type="kvm">
Dec  6 02:35:36 np0005548731 nova_compute[232433]:  <uuid>3d4b97f9-b8c3-4764-87f9-4006968fecd4</uuid>
Dec  6 02:35:36 np0005548731 nova_compute[232433]:  <name>instance-00000081</name>
Dec  6 02:35:36 np0005548731 nova_compute[232433]:  <memory>131072</memory>
Dec  6 02:35:36 np0005548731 nova_compute[232433]:  <vcpu>1</vcpu>
Dec  6 02:35:36 np0005548731 nova_compute[232433]:  <metadata>
Dec  6 02:35:36 np0005548731 nova_compute[232433]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec  6 02:35:36 np0005548731 nova_compute[232433]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec  6 02:35:36 np0005548731 nova_compute[232433]:      <nova:name>multiattach-server-1</nova:name>
Dec  6 02:35:36 np0005548731 nova_compute[232433]:      <nova:creationTime>2025-12-06 07:35:35</nova:creationTime>
Dec  6 02:35:36 np0005548731 nova_compute[232433]:      <nova:flavor name="m1.nano">
Dec  6 02:35:36 np0005548731 nova_compute[232433]:        <nova:memory>128</nova:memory>
Dec  6 02:35:36 np0005548731 nova_compute[232433]:        <nova:disk>1</nova:disk>
Dec  6 02:35:36 np0005548731 nova_compute[232433]:        <nova:swap>0</nova:swap>
Dec  6 02:35:36 np0005548731 nova_compute[232433]:        <nova:ephemeral>0</nova:ephemeral>
Dec  6 02:35:36 np0005548731 nova_compute[232433]:        <nova:vcpus>1</nova:vcpus>
Dec  6 02:35:36 np0005548731 nova_compute[232433]:      </nova:flavor>
Dec  6 02:35:36 np0005548731 nova_compute[232433]:      <nova:owner>
Dec  6 02:35:36 np0005548731 nova_compute[232433]:        <nova:user uuid="605b5481e0c944048e6a67046c30d693">tempest-AttachVolumeMultiAttachTest-690984293-project-member</nova:user>
Dec  6 02:35:36 np0005548731 nova_compute[232433]:        <nova:project uuid="833f4cf9f5a64b2ab94c3bf330353a31">tempest-AttachVolumeMultiAttachTest-690984293</nova:project>
Dec  6 02:35:36 np0005548731 nova_compute[232433]:      </nova:owner>
Dec  6 02:35:36 np0005548731 nova_compute[232433]:      <nova:root type="image" uuid="6efab05d-c7cf-4770-a5c3-c806a2739063"/>
Dec  6 02:35:36 np0005548731 nova_compute[232433]:      <nova:ports>
Dec  6 02:35:36 np0005548731 nova_compute[232433]:        <nova:port uuid="bfac9ac2-c1cc-4dc1-830c-44e5990fb8e1">
Dec  6 02:35:36 np0005548731 nova_compute[232433]:          <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Dec  6 02:35:36 np0005548731 nova_compute[232433]:        </nova:port>
Dec  6 02:35:36 np0005548731 nova_compute[232433]:      </nova:ports>
Dec  6 02:35:36 np0005548731 nova_compute[232433]:    </nova:instance>
Dec  6 02:35:36 np0005548731 nova_compute[232433]:  </metadata>
Dec  6 02:35:36 np0005548731 nova_compute[232433]:  <sysinfo type="smbios">
Dec  6 02:35:36 np0005548731 nova_compute[232433]:    <system>
Dec  6 02:35:36 np0005548731 nova_compute[232433]:      <entry name="manufacturer">RDO</entry>
Dec  6 02:35:36 np0005548731 nova_compute[232433]:      <entry name="product">OpenStack Compute</entry>
Dec  6 02:35:36 np0005548731 nova_compute[232433]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec  6 02:35:36 np0005548731 nova_compute[232433]:      <entry name="serial">3d4b97f9-b8c3-4764-87f9-4006968fecd4</entry>
Dec  6 02:35:36 np0005548731 nova_compute[232433]:      <entry name="uuid">3d4b97f9-b8c3-4764-87f9-4006968fecd4</entry>
Dec  6 02:35:36 np0005548731 nova_compute[232433]:      <entry name="family">Virtual Machine</entry>
Dec  6 02:35:36 np0005548731 nova_compute[232433]:    </system>
Dec  6 02:35:36 np0005548731 nova_compute[232433]:  </sysinfo>
Dec  6 02:35:36 np0005548731 nova_compute[232433]:  <os>
Dec  6 02:35:36 np0005548731 nova_compute[232433]:    <type arch="x86_64" machine="q35">hvm</type>
Dec  6 02:35:36 np0005548731 nova_compute[232433]:    <boot dev="hd"/>
Dec  6 02:35:36 np0005548731 nova_compute[232433]:    <smbios mode="sysinfo"/>
Dec  6 02:35:36 np0005548731 nova_compute[232433]:  </os>
Dec  6 02:35:36 np0005548731 nova_compute[232433]:  <features>
Dec  6 02:35:36 np0005548731 nova_compute[232433]:    <acpi/>
Dec  6 02:35:36 np0005548731 nova_compute[232433]:    <apic/>
Dec  6 02:35:36 np0005548731 nova_compute[232433]:    <vmcoreinfo/>
Dec  6 02:35:36 np0005548731 nova_compute[232433]:  </features>
Dec  6 02:35:36 np0005548731 nova_compute[232433]:  <clock offset="utc">
Dec  6 02:35:36 np0005548731 nova_compute[232433]:    <timer name="pit" tickpolicy="delay"/>
Dec  6 02:35:36 np0005548731 nova_compute[232433]:    <timer name="rtc" tickpolicy="catchup"/>
Dec  6 02:35:36 np0005548731 nova_compute[232433]:    <timer name="hpet" present="no"/>
Dec  6 02:35:36 np0005548731 nova_compute[232433]:  </clock>
Dec  6 02:35:36 np0005548731 nova_compute[232433]:  <cpu mode="custom" match="exact">
Dec  6 02:35:36 np0005548731 nova_compute[232433]:    <model>Nehalem</model>
Dec  6 02:35:36 np0005548731 nova_compute[232433]:    <topology sockets="1" cores="1" threads="1"/>
Dec  6 02:35:36 np0005548731 nova_compute[232433]:  </cpu>
Dec  6 02:35:36 np0005548731 nova_compute[232433]:  <devices>
Dec  6 02:35:36 np0005548731 nova_compute[232433]:    <disk type="network" device="disk">
Dec  6 02:35:36 np0005548731 nova_compute[232433]:      <driver type="raw" cache="none"/>
Dec  6 02:35:36 np0005548731 nova_compute[232433]:      <source protocol="rbd" name="vms/3d4b97f9-b8c3-4764-87f9-4006968fecd4_disk">
Dec  6 02:35:36 np0005548731 nova_compute[232433]:        <host name="192.168.122.100" port="6789"/>
Dec  6 02:35:36 np0005548731 nova_compute[232433]:        <host name="192.168.122.102" port="6789"/>
Dec  6 02:35:36 np0005548731 nova_compute[232433]:        <host name="192.168.122.101" port="6789"/>
Dec  6 02:35:36 np0005548731 nova_compute[232433]:      </source>
Dec  6 02:35:36 np0005548731 nova_compute[232433]:      <auth username="openstack">
Dec  6 02:35:36 np0005548731 nova_compute[232433]:        <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec  6 02:35:36 np0005548731 nova_compute[232433]:      </auth>
Dec  6 02:35:36 np0005548731 nova_compute[232433]:      <target dev="vda" bus="virtio"/>
Dec  6 02:35:36 np0005548731 nova_compute[232433]:    </disk>
Dec  6 02:35:36 np0005548731 nova_compute[232433]:    <disk type="network" device="cdrom">
Dec  6 02:35:36 np0005548731 nova_compute[232433]:      <driver type="raw" cache="none"/>
Dec  6 02:35:36 np0005548731 nova_compute[232433]:      <source protocol="rbd" name="vms/3d4b97f9-b8c3-4764-87f9-4006968fecd4_disk.config">
Dec  6 02:35:36 np0005548731 nova_compute[232433]:        <host name="192.168.122.100" port="6789"/>
Dec  6 02:35:36 np0005548731 nova_compute[232433]:        <host name="192.168.122.102" port="6789"/>
Dec  6 02:35:36 np0005548731 nova_compute[232433]:        <host name="192.168.122.101" port="6789"/>
Dec  6 02:35:36 np0005548731 nova_compute[232433]:      </source>
Dec  6 02:35:36 np0005548731 nova_compute[232433]:      <auth username="openstack">
Dec  6 02:35:36 np0005548731 nova_compute[232433]:        <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec  6 02:35:36 np0005548731 nova_compute[232433]:      </auth>
Dec  6 02:35:36 np0005548731 nova_compute[232433]:      <target dev="sda" bus="sata"/>
Dec  6 02:35:36 np0005548731 nova_compute[232433]:    </disk>
Dec  6 02:35:36 np0005548731 nova_compute[232433]:    <interface type="ethernet">
Dec  6 02:35:36 np0005548731 nova_compute[232433]:      <mac address="fa:16:3e:c3:31:81"/>
Dec  6 02:35:36 np0005548731 nova_compute[232433]:      <model type="virtio"/>
Dec  6 02:35:36 np0005548731 nova_compute[232433]:      <driver name="vhost" rx_queue_size="512"/>
Dec  6 02:35:36 np0005548731 nova_compute[232433]:      <mtu size="1442"/>
Dec  6 02:35:36 np0005548731 nova_compute[232433]:      <target dev="tapbfac9ac2-c1"/>
Dec  6 02:35:36 np0005548731 nova_compute[232433]:    </interface>
Dec  6 02:35:36 np0005548731 nova_compute[232433]:    <serial type="pty">
Dec  6 02:35:36 np0005548731 nova_compute[232433]:      <log file="/var/lib/nova/instances/3d4b97f9-b8c3-4764-87f9-4006968fecd4/console.log" append="off"/>
Dec  6 02:35:36 np0005548731 nova_compute[232433]:    </serial>
Dec  6 02:35:36 np0005548731 nova_compute[232433]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec  6 02:35:36 np0005548731 nova_compute[232433]:    <video>
Dec  6 02:35:36 np0005548731 nova_compute[232433]:      <model type="virtio"/>
Dec  6 02:35:36 np0005548731 nova_compute[232433]:    </video>
Dec  6 02:35:36 np0005548731 nova_compute[232433]:    <input type="tablet" bus="usb"/>
Dec  6 02:35:36 np0005548731 nova_compute[232433]:    <rng model="virtio">
Dec  6 02:35:36 np0005548731 nova_compute[232433]:      <backend model="random">/dev/urandom</backend>
Dec  6 02:35:36 np0005548731 nova_compute[232433]:    </rng>
Dec  6 02:35:36 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root"/>
Dec  6 02:35:36 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:35:36 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:35:36 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:35:36 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:35:36 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:35:36 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:35:36 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:35:36 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:35:36 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:35:36 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:35:36 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:35:36 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:35:36 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:35:36 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:35:36 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:35:36 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:35:36 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:35:36 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:35:36 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:35:36 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:35:36 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:35:36 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:35:36 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:35:36 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:35:36 np0005548731 nova_compute[232433]:    <controller type="usb" index="0"/>
Dec  6 02:35:36 np0005548731 nova_compute[232433]:    <memballoon model="virtio">
Dec  6 02:35:36 np0005548731 nova_compute[232433]:      <stats period="10"/>
Dec  6 02:35:36 np0005548731 nova_compute[232433]:    </memballoon>
Dec  6 02:35:36 np0005548731 nova_compute[232433]:  </devices>
Dec  6 02:35:36 np0005548731 nova_compute[232433]: </domain>
Dec  6 02:35:36 np0005548731 nova_compute[232433]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec  6 02:35:36 np0005548731 nova_compute[232433]: 2025-12-06 07:35:36.168 232437 DEBUG nova.compute.manager [None req-b0817264-8471-4345-9cfb-5d004f2937b6 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] [instance: 3d4b97f9-b8c3-4764-87f9-4006968fecd4] Preparing to wait for external event network-vif-plugged-bfac9ac2-c1cc-4dc1-830c-44e5990fb8e1 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec  6 02:35:36 np0005548731 nova_compute[232433]: 2025-12-06 07:35:36.169 232437 DEBUG oslo_concurrency.lockutils [None req-b0817264-8471-4345-9cfb-5d004f2937b6 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Acquiring lock "3d4b97f9-b8c3-4764-87f9-4006968fecd4-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:35:36 np0005548731 nova_compute[232433]: 2025-12-06 07:35:36.169 232437 DEBUG oslo_concurrency.lockutils [None req-b0817264-8471-4345-9cfb-5d004f2937b6 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Lock "3d4b97f9-b8c3-4764-87f9-4006968fecd4-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:35:36 np0005548731 nova_compute[232433]: 2025-12-06 07:35:36.169 232437 DEBUG oslo_concurrency.lockutils [None req-b0817264-8471-4345-9cfb-5d004f2937b6 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Lock "3d4b97f9-b8c3-4764-87f9-4006968fecd4-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:35:36 np0005548731 nova_compute[232433]: 2025-12-06 07:35:36.170 232437 DEBUG nova.virt.libvirt.vif [None req-b0817264-8471-4345-9cfb-5d004f2937b6 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T07:35:21Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='multiattach-server-1',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='multiattach-server-1',id=129,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJvvXUJX6GT0XQikpBKT/pWa9/7+F48wLkdnGcJYsqojmErT+oUc0gEHpXW8ulxQp5/Qun0IejqspzMhFiBiIMspmuXC7WiBfiNlH7z/XH9UjP9DXKhc6lZtmV9q0VvTnQ==',key_name='tempest-keypair-1449411209',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='833f4cf9f5a64b2ab94c3bf330353a31',ramdisk_id='',reservation_id='r-pbz3nfcu',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_r
am='0',network_allocated='True',owner_project_name='tempest-AttachVolumeMultiAttachTest-690984293',owner_user_name='tempest-AttachVolumeMultiAttachTest-690984293-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:35:24Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='605b5481e0c944048e6a67046c30d693',uuid=3d4b97f9-b8c3-4764-87f9-4006968fecd4,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "bfac9ac2-c1cc-4dc1-830c-44e5990fb8e1", "address": "fa:16:3e:c3:31:81", "network": {"id": "5bb49e8a-b939-4c79-851c-62c634be0272", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-50482264-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "833f4cf9f5a64b2ab94c3bf330353a31", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbfac9ac2-c1", "ovs_interfaceid": "bfac9ac2-c1cc-4dc1-830c-44e5990fb8e1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec  6 02:35:36 np0005548731 nova_compute[232433]: 2025-12-06 07:35:36.170 232437 DEBUG nova.network.os_vif_util [None req-b0817264-8471-4345-9cfb-5d004f2937b6 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Converting VIF {"id": "bfac9ac2-c1cc-4dc1-830c-44e5990fb8e1", "address": "fa:16:3e:c3:31:81", "network": {"id": "5bb49e8a-b939-4c79-851c-62c634be0272", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-50482264-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "833f4cf9f5a64b2ab94c3bf330353a31", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbfac9ac2-c1", "ovs_interfaceid": "bfac9ac2-c1cc-4dc1-830c-44e5990fb8e1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 02:35:36 np0005548731 nova_compute[232433]: 2025-12-06 07:35:36.171 232437 DEBUG nova.network.os_vif_util [None req-b0817264-8471-4345-9cfb-5d004f2937b6 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c3:31:81,bridge_name='br-int',has_traffic_filtering=True,id=bfac9ac2-c1cc-4dc1-830c-44e5990fb8e1,network=Network(5bb49e8a-b939-4c79-851c-62c634be0272),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbfac9ac2-c1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 02:35:36 np0005548731 nova_compute[232433]: 2025-12-06 07:35:36.171 232437 DEBUG os_vif [None req-b0817264-8471-4345-9cfb-5d004f2937b6 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c3:31:81,bridge_name='br-int',has_traffic_filtering=True,id=bfac9ac2-c1cc-4dc1-830c-44e5990fb8e1,network=Network(5bb49e8a-b939-4c79-851c-62c634be0272),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbfac9ac2-c1') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec  6 02:35:36 np0005548731 nova_compute[232433]: 2025-12-06 07:35:36.172 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:35:36 np0005548731 nova_compute[232433]: 2025-12-06 07:35:36.172 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:35:36 np0005548731 nova_compute[232433]: 2025-12-06 07:35:36.173 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  6 02:35:36 np0005548731 nova_compute[232433]: 2025-12-06 07:35:36.176 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:35:36 np0005548731 nova_compute[232433]: 2025-12-06 07:35:36.177 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapbfac9ac2-c1, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:35:36 np0005548731 nova_compute[232433]: 2025-12-06 07:35:36.177 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapbfac9ac2-c1, col_values=(('external_ids', {'iface-id': 'bfac9ac2-c1cc-4dc1-830c-44e5990fb8e1', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:c3:31:81', 'vm-uuid': '3d4b97f9-b8c3-4764-87f9-4006968fecd4'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:35:36 np0005548731 nova_compute[232433]: 2025-12-06 07:35:36.179 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:35:36 np0005548731 NetworkManager[49182]: <info>  [1765006536.1804] manager: (tapbfac9ac2-c1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/265)
Dec  6 02:35:36 np0005548731 nova_compute[232433]: 2025-12-06 07:35:36.181 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  6 02:35:36 np0005548731 nova_compute[232433]: 2025-12-06 07:35:36.186 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:35:36 np0005548731 nova_compute[232433]: 2025-12-06 07:35:36.188 232437 INFO os_vif [None req-b0817264-8471-4345-9cfb-5d004f2937b6 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c3:31:81,bridge_name='br-int',has_traffic_filtering=True,id=bfac9ac2-c1cc-4dc1-830c-44e5990fb8e1,network=Network(5bb49e8a-b939-4c79-851c-62c634be0272),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbfac9ac2-c1')#033[00m
Dec  6 02:35:36 np0005548731 nova_compute[232433]: 2025-12-06 07:35:36.476 232437 DEBUG nova.virt.libvirt.driver [None req-b0817264-8471-4345-9cfb-5d004f2937b6 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  6 02:35:36 np0005548731 nova_compute[232433]: 2025-12-06 07:35:36.477 232437 DEBUG nova.virt.libvirt.driver [None req-b0817264-8471-4345-9cfb-5d004f2937b6 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  6 02:35:36 np0005548731 nova_compute[232433]: 2025-12-06 07:35:36.477 232437 DEBUG nova.virt.libvirt.driver [None req-b0817264-8471-4345-9cfb-5d004f2937b6 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] No VIF found with MAC fa:16:3e:c3:31:81, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec  6 02:35:36 np0005548731 nova_compute[232433]: 2025-12-06 07:35:36.477 232437 INFO nova.virt.libvirt.driver [None req-b0817264-8471-4345-9cfb-5d004f2937b6 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] [instance: 3d4b97f9-b8c3-4764-87f9-4006968fecd4] Using config drive#033[00m
Dec  6 02:35:36 np0005548731 nova_compute[232433]: 2025-12-06 07:35:36.505 232437 DEBUG nova.storage.rbd_utils [None req-b0817264-8471-4345-9cfb-5d004f2937b6 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] rbd image 3d4b97f9-b8c3-4764-87f9-4006968fecd4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:35:36 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:35:36 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:35:36 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:35:36.618 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:35:36 np0005548731 nova_compute[232433]: 2025-12-06 07:35:36.855 232437 INFO nova.virt.libvirt.driver [None req-b0817264-8471-4345-9cfb-5d004f2937b6 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] [instance: 3d4b97f9-b8c3-4764-87f9-4006968fecd4] Creating config drive at /var/lib/nova/instances/3d4b97f9-b8c3-4764-87f9-4006968fecd4/disk.config#033[00m
Dec  6 02:35:36 np0005548731 nova_compute[232433]: 2025-12-06 07:35:36.862 232437 DEBUG oslo_concurrency.processutils [None req-b0817264-8471-4345-9cfb-5d004f2937b6 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/3d4b97f9-b8c3-4764-87f9-4006968fecd4/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpzjpbwu8l execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:35:36 np0005548731 ceph-osd[80147]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [P] New memtable created with log file: #52. Immutable memtables: 0.
Dec  6 02:35:37 np0005548731 nova_compute[232433]: 2025-12-06 07:35:37.003 232437 DEBUG oslo_concurrency.processutils [None req-b0817264-8471-4345-9cfb-5d004f2937b6 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/3d4b97f9-b8c3-4764-87f9-4006968fecd4/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpzjpbwu8l" returned: 0 in 0.141s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:35:37 np0005548731 nova_compute[232433]: 2025-12-06 07:35:37.035 232437 DEBUG nova.storage.rbd_utils [None req-b0817264-8471-4345-9cfb-5d004f2937b6 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] rbd image 3d4b97f9-b8c3-4764-87f9-4006968fecd4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:35:37 np0005548731 nova_compute[232433]: 2025-12-06 07:35:37.040 232437 DEBUG oslo_concurrency.processutils [None req-b0817264-8471-4345-9cfb-5d004f2937b6 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/3d4b97f9-b8c3-4764-87f9-4006968fecd4/disk.config 3d4b97f9-b8c3-4764-87f9-4006968fecd4_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:35:37 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:35:37 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:35:37 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:35:37.803 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:35:37 np0005548731 nova_compute[232433]: 2025-12-06 07:35:37.886 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:35:38 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e293 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:35:38 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:35:38 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:35:38 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:35:38.621 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:35:39 np0005548731 ovn_controller[133927]: 2025-12-06T07:35:39Z|00079|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:f4:fe:4a 10.100.0.7
Dec  6 02:35:39 np0005548731 ovn_controller[133927]: 2025-12-06T07:35:39Z|00080|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:f4:fe:4a 10.100.0.7
Dec  6 02:35:39 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:35:39 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:35:39 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:35:39.805 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:35:40 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:35:40 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:35:40 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:35:40.623 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:35:41 np0005548731 nova_compute[232433]: 2025-12-06 07:35:41.181 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:35:41 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:35:41 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:35:41 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:35:41.807 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:35:41 np0005548731 nova_compute[232433]: 2025-12-06 07:35:41.819 232437 DEBUG oslo_concurrency.processutils [None req-b0817264-8471-4345-9cfb-5d004f2937b6 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/3d4b97f9-b8c3-4764-87f9-4006968fecd4/disk.config 3d4b97f9-b8c3-4764-87f9-4006968fecd4_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 4.778s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:35:41 np0005548731 nova_compute[232433]: 2025-12-06 07:35:41.819 232437 INFO nova.virt.libvirt.driver [None req-b0817264-8471-4345-9cfb-5d004f2937b6 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] [instance: 3d4b97f9-b8c3-4764-87f9-4006968fecd4] Deleting local config drive /var/lib/nova/instances/3d4b97f9-b8c3-4764-87f9-4006968fecd4/disk.config because it was imported into RBD.#033[00m
Dec  6 02:35:41 np0005548731 kernel: tapbfac9ac2-c1: entered promiscuous mode
Dec  6 02:35:41 np0005548731 NetworkManager[49182]: <info>  [1765006541.8800] manager: (tapbfac9ac2-c1): new Tun device (/org/freedesktop/NetworkManager/Devices/266)
Dec  6 02:35:41 np0005548731 ovn_controller[133927]: 2025-12-06T07:35:41Z|00558|binding|INFO|Claiming lport bfac9ac2-c1cc-4dc1-830c-44e5990fb8e1 for this chassis.
Dec  6 02:35:41 np0005548731 ovn_controller[133927]: 2025-12-06T07:35:41Z|00559|binding|INFO|bfac9ac2-c1cc-4dc1-830c-44e5990fb8e1: Claiming fa:16:3e:c3:31:81 10.100.0.6
Dec  6 02:35:41 np0005548731 nova_compute[232433]: 2025-12-06 07:35:41.880 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:35:41 np0005548731 ovn_controller[133927]: 2025-12-06T07:35:41Z|00560|binding|INFO|Setting lport bfac9ac2-c1cc-4dc1-830c-44e5990fb8e1 ovn-installed in OVS
Dec  6 02:35:41 np0005548731 systemd-udevd[287435]: Network interface NamePolicy= disabled on kernel command line.
Dec  6 02:35:41 np0005548731 nova_compute[232433]: 2025-12-06 07:35:41.942 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:35:41 np0005548731 NetworkManager[49182]: <info>  [1765006541.9439] device (tapbfac9ac2-c1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec  6 02:35:41 np0005548731 NetworkManager[49182]: <info>  [1765006541.9458] device (tapbfac9ac2-c1): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec  6 02:35:41 np0005548731 systemd-machined[195355]: New machine qemu-57-instance-00000081.
Dec  6 02:35:41 np0005548731 systemd[1]: Started Virtual Machine qemu-57-instance-00000081.
Dec  6 02:35:41 np0005548731 ovn_controller[133927]: 2025-12-06T07:35:41Z|00561|binding|INFO|Setting lport bfac9ac2-c1cc-4dc1-830c-44e5990fb8e1 up in Southbound
Dec  6 02:35:41 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:35:41.980 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c3:31:81 10.100.0.6'], port_security=['fa:16:3e:c3:31:81 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '3d4b97f9-b8c3-4764-87f9-4006968fecd4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5bb49e8a-b939-4c79-851c-62c634be0272', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '833f4cf9f5a64b2ab94c3bf330353a31', 'neutron:revision_number': '2', 'neutron:security_group_ids': '44fc0f8f-27e8-483c-be93-2bb490e3ff3b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=56d5d28a-0d18-4549-b1d7-8420194c6348, chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], tunnel_key=7, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], logical_port=bfac9ac2-c1cc-4dc1-830c-44e5990fb8e1) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 02:35:41 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:35:41.981 143965 INFO neutron.agent.ovn.metadata.agent [-] Port bfac9ac2-c1cc-4dc1-830c-44e5990fb8e1 in datapath 5bb49e8a-b939-4c79-851c-62c634be0272 bound to our chassis#033[00m
Dec  6 02:35:41 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:35:41.983 143965 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 5bb49e8a-b939-4c79-851c-62c634be0272#033[00m
Dec  6 02:35:42 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:35:42.005 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[5ec64245-6ab0-4a36-a89e-3c06022b7549]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:35:42 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:35:42.033 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[568dea47-b715-4e6b-aaad-a8e46faa9cf6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:35:42 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:35:42.037 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[f43c9887-f221-499d-a475-b6ecccdf2ef6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:35:42 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:35:42.063 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[ddd58dc6-1ec6-4467-bcf7-aa5e3d0225ca]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:35:42 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:35:42.081 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[e3b69938-12b2-40a4-994e-c1c5dffee052]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5bb49e8a-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:0b:bf:8b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 7, 'rx_bytes': 616, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 7, 'rx_bytes': 616, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 164], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 667055, 'reachable_time': 27555, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 287450, 'error': None, 'target': 'ovnmeta-5bb49e8a-b939-4c79-851c-62c634be0272', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:35:42 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:35:42.096 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[04323634-3102-4ae9-9b3d-7a59ea54dae2]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap5bb49e8a-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 667067, 'tstamp': 667067}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 287451, 'error': None, 'target': 'ovnmeta-5bb49e8a-b939-4c79-851c-62c634be0272', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap5bb49e8a-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 667070, 'tstamp': 667070}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 287451, 'error': None, 'target': 'ovnmeta-5bb49e8a-b939-4c79-851c-62c634be0272', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:35:42 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:35:42.098 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5bb49e8a-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:35:42 np0005548731 nova_compute[232433]: 2025-12-06 07:35:42.099 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:35:42 np0005548731 nova_compute[232433]: 2025-12-06 07:35:42.100 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:35:42 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:35:42.100 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5bb49e8a-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:35:42 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:35:42.101 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  6 02:35:42 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:35:42.101 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap5bb49e8a-b0, col_values=(('external_ids', {'iface-id': 'e4d89947-8fab-4c13-b2db-4eed875f77a0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:35:42 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:35:42.101 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  6 02:35:42 np0005548731 nova_compute[232433]: 2025-12-06 07:35:42.407 232437 DEBUG nova.virt.driver [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Emitting event <LifecycleEvent: 1765006542.407192, 3d4b97f9-b8c3-4764-87f9-4006968fecd4 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 02:35:42 np0005548731 nova_compute[232433]: 2025-12-06 07:35:42.408 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 3d4b97f9-b8c3-4764-87f9-4006968fecd4] VM Started (Lifecycle Event)#033[00m
Dec  6 02:35:42 np0005548731 nova_compute[232433]: 2025-12-06 07:35:42.517 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 3d4b97f9-b8c3-4764-87f9-4006968fecd4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:35:42 np0005548731 nova_compute[232433]: 2025-12-06 07:35:42.520 232437 DEBUG nova.virt.driver [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Emitting event <LifecycleEvent: 1765006542.4098098, 3d4b97f9-b8c3-4764-87f9-4006968fecd4 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 02:35:42 np0005548731 nova_compute[232433]: 2025-12-06 07:35:42.520 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 3d4b97f9-b8c3-4764-87f9-4006968fecd4] VM Paused (Lifecycle Event)#033[00m
Dec  6 02:35:42 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:35:42 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:35:42 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:35:42.625 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:35:42 np0005548731 nova_compute[232433]: 2025-12-06 07:35:42.643 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 3d4b97f9-b8c3-4764-87f9-4006968fecd4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:35:42 np0005548731 nova_compute[232433]: 2025-12-06 07:35:42.646 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 3d4b97f9-b8c3-4764-87f9-4006968fecd4] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  6 02:35:42 np0005548731 nova_compute[232433]: 2025-12-06 07:35:42.857 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 3d4b97f9-b8c3-4764-87f9-4006968fecd4] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  6 02:35:42 np0005548731 nova_compute[232433]: 2025-12-06 07:35:42.888 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:35:43 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e293 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:35:43 np0005548731 nova_compute[232433]: 2025-12-06 07:35:43.710 232437 DEBUG nova.compute.manager [req-5c2193c4-2168-4b46-8727-50f8b48094c0 req-30ec89e0-53d0-43b0-8147-e6ebac0935ee 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 3d4b97f9-b8c3-4764-87f9-4006968fecd4] Received event network-vif-plugged-bfac9ac2-c1cc-4dc1-830c-44e5990fb8e1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:35:43 np0005548731 nova_compute[232433]: 2025-12-06 07:35:43.711 232437 DEBUG oslo_concurrency.lockutils [req-5c2193c4-2168-4b46-8727-50f8b48094c0 req-30ec89e0-53d0-43b0-8147-e6ebac0935ee 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "3d4b97f9-b8c3-4764-87f9-4006968fecd4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:35:43 np0005548731 nova_compute[232433]: 2025-12-06 07:35:43.712 232437 DEBUG oslo_concurrency.lockutils [req-5c2193c4-2168-4b46-8727-50f8b48094c0 req-30ec89e0-53d0-43b0-8147-e6ebac0935ee 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "3d4b97f9-b8c3-4764-87f9-4006968fecd4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:35:43 np0005548731 nova_compute[232433]: 2025-12-06 07:35:43.713 232437 DEBUG oslo_concurrency.lockutils [req-5c2193c4-2168-4b46-8727-50f8b48094c0 req-30ec89e0-53d0-43b0-8147-e6ebac0935ee 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "3d4b97f9-b8c3-4764-87f9-4006968fecd4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:35:43 np0005548731 nova_compute[232433]: 2025-12-06 07:35:43.713 232437 DEBUG nova.compute.manager [req-5c2193c4-2168-4b46-8727-50f8b48094c0 req-30ec89e0-53d0-43b0-8147-e6ebac0935ee 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 3d4b97f9-b8c3-4764-87f9-4006968fecd4] Processing event network-vif-plugged-bfac9ac2-c1cc-4dc1-830c-44e5990fb8e1 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Dec  6 02:35:43 np0005548731 nova_compute[232433]: 2025-12-06 07:35:43.714 232437 DEBUG nova.compute.manager [None req-b0817264-8471-4345-9cfb-5d004f2937b6 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] [instance: 3d4b97f9-b8c3-4764-87f9-4006968fecd4] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec  6 02:35:43 np0005548731 nova_compute[232433]: 2025-12-06 07:35:43.718 232437 DEBUG nova.virt.driver [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Emitting event <LifecycleEvent: 1765006543.7181408, 3d4b97f9-b8c3-4764-87f9-4006968fecd4 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 02:35:43 np0005548731 nova_compute[232433]: 2025-12-06 07:35:43.718 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 3d4b97f9-b8c3-4764-87f9-4006968fecd4] VM Resumed (Lifecycle Event)#033[00m
Dec  6 02:35:43 np0005548731 nova_compute[232433]: 2025-12-06 07:35:43.722 232437 DEBUG nova.virt.libvirt.driver [None req-b0817264-8471-4345-9cfb-5d004f2937b6 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] [instance: 3d4b97f9-b8c3-4764-87f9-4006968fecd4] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Dec  6 02:35:43 np0005548731 nova_compute[232433]: 2025-12-06 07:35:43.727 232437 INFO nova.virt.libvirt.driver [-] [instance: 3d4b97f9-b8c3-4764-87f9-4006968fecd4] Instance spawned successfully.#033[00m
Dec  6 02:35:43 np0005548731 nova_compute[232433]: 2025-12-06 07:35:43.727 232437 DEBUG nova.virt.libvirt.driver [None req-b0817264-8471-4345-9cfb-5d004f2937b6 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] [instance: 3d4b97f9-b8c3-4764-87f9-4006968fecd4] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Dec  6 02:35:43 np0005548731 nova_compute[232433]: 2025-12-06 07:35:43.764 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 3d4b97f9-b8c3-4764-87f9-4006968fecd4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:35:43 np0005548731 nova_compute[232433]: 2025-12-06 07:35:43.769 232437 DEBUG nova.virt.libvirt.driver [None req-b0817264-8471-4345-9cfb-5d004f2937b6 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] [instance: 3d4b97f9-b8c3-4764-87f9-4006968fecd4] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 02:35:43 np0005548731 nova_compute[232433]: 2025-12-06 07:35:43.770 232437 DEBUG nova.virt.libvirt.driver [None req-b0817264-8471-4345-9cfb-5d004f2937b6 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] [instance: 3d4b97f9-b8c3-4764-87f9-4006968fecd4] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 02:35:43 np0005548731 nova_compute[232433]: 2025-12-06 07:35:43.770 232437 DEBUG nova.virt.libvirt.driver [None req-b0817264-8471-4345-9cfb-5d004f2937b6 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] [instance: 3d4b97f9-b8c3-4764-87f9-4006968fecd4] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 02:35:43 np0005548731 nova_compute[232433]: 2025-12-06 07:35:43.771 232437 DEBUG nova.virt.libvirt.driver [None req-b0817264-8471-4345-9cfb-5d004f2937b6 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] [instance: 3d4b97f9-b8c3-4764-87f9-4006968fecd4] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 02:35:43 np0005548731 nova_compute[232433]: 2025-12-06 07:35:43.771 232437 DEBUG nova.virt.libvirt.driver [None req-b0817264-8471-4345-9cfb-5d004f2937b6 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] [instance: 3d4b97f9-b8c3-4764-87f9-4006968fecd4] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 02:35:43 np0005548731 nova_compute[232433]: 2025-12-06 07:35:43.772 232437 DEBUG nova.virt.libvirt.driver [None req-b0817264-8471-4345-9cfb-5d004f2937b6 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] [instance: 3d4b97f9-b8c3-4764-87f9-4006968fecd4] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 02:35:43 np0005548731 nova_compute[232433]: 2025-12-06 07:35:43.776 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 3d4b97f9-b8c3-4764-87f9-4006968fecd4] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  6 02:35:43 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:35:43 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:35:43 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:35:43.809 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:35:43 np0005548731 nova_compute[232433]: 2025-12-06 07:35:43.922 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 3d4b97f9-b8c3-4764-87f9-4006968fecd4] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  6 02:35:43 np0005548731 nova_compute[232433]: 2025-12-06 07:35:43.923 232437 INFO nova.compute.manager [None req-b0817264-8471-4345-9cfb-5d004f2937b6 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] [instance: 3d4b97f9-b8c3-4764-87f9-4006968fecd4] Took 19.52 seconds to spawn the instance on the hypervisor.#033[00m
Dec  6 02:35:43 np0005548731 nova_compute[232433]: 2025-12-06 07:35:43.924 232437 DEBUG nova.compute.manager [None req-b0817264-8471-4345-9cfb-5d004f2937b6 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] [instance: 3d4b97f9-b8c3-4764-87f9-4006968fecd4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:35:44 np0005548731 nova_compute[232433]: 2025-12-06 07:35:44.044 232437 INFO nova.compute.manager [None req-b0817264-8471-4345-9cfb-5d004f2937b6 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] [instance: 3d4b97f9-b8c3-4764-87f9-4006968fecd4] Took 21.01 seconds to build instance.#033[00m
Dec  6 02:35:44 np0005548731 nova_compute[232433]: 2025-12-06 07:35:44.069 232437 DEBUG oslo_concurrency.lockutils [None req-b0817264-8471-4345-9cfb-5d004f2937b6 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Lock "3d4b97f9-b8c3-4764-87f9-4006968fecd4" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 21.105s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:35:44 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:35:44 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 02:35:44 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:35:44.627 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 02:35:45 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:35:45 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:35:45 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:35:45.812 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:35:45 np0005548731 nova_compute[232433]: 2025-12-06 07:35:45.821 232437 DEBUG nova.compute.manager [req-1f8aed97-198d-4612-9509-7b8e9cc4d792 req-ab93c00c-345a-4ecb-aaa4-d11d456fdaa4 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 3d4b97f9-b8c3-4764-87f9-4006968fecd4] Received event network-vif-plugged-bfac9ac2-c1cc-4dc1-830c-44e5990fb8e1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:35:45 np0005548731 nova_compute[232433]: 2025-12-06 07:35:45.822 232437 DEBUG oslo_concurrency.lockutils [req-1f8aed97-198d-4612-9509-7b8e9cc4d792 req-ab93c00c-345a-4ecb-aaa4-d11d456fdaa4 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "3d4b97f9-b8c3-4764-87f9-4006968fecd4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:35:45 np0005548731 nova_compute[232433]: 2025-12-06 07:35:45.822 232437 DEBUG oslo_concurrency.lockutils [req-1f8aed97-198d-4612-9509-7b8e9cc4d792 req-ab93c00c-345a-4ecb-aaa4-d11d456fdaa4 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "3d4b97f9-b8c3-4764-87f9-4006968fecd4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:35:45 np0005548731 nova_compute[232433]: 2025-12-06 07:35:45.822 232437 DEBUG oslo_concurrency.lockutils [req-1f8aed97-198d-4612-9509-7b8e9cc4d792 req-ab93c00c-345a-4ecb-aaa4-d11d456fdaa4 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "3d4b97f9-b8c3-4764-87f9-4006968fecd4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:35:45 np0005548731 nova_compute[232433]: 2025-12-06 07:35:45.822 232437 DEBUG nova.compute.manager [req-1f8aed97-198d-4612-9509-7b8e9cc4d792 req-ab93c00c-345a-4ecb-aaa4-d11d456fdaa4 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 3d4b97f9-b8c3-4764-87f9-4006968fecd4] No waiting events found dispatching network-vif-plugged-bfac9ac2-c1cc-4dc1-830c-44e5990fb8e1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 02:35:45 np0005548731 nova_compute[232433]: 2025-12-06 07:35:45.822 232437 WARNING nova.compute.manager [req-1f8aed97-198d-4612-9509-7b8e9cc4d792 req-ab93c00c-345a-4ecb-aaa4-d11d456fdaa4 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 3d4b97f9-b8c3-4764-87f9-4006968fecd4] Received unexpected event network-vif-plugged-bfac9ac2-c1cc-4dc1-830c-44e5990fb8e1 for instance with vm_state active and task_state None.#033[00m
Dec  6 02:35:46 np0005548731 nova_compute[232433]: 2025-12-06 07:35:46.183 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:35:46 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:35:46 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:35:46 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:35:46.630 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:35:47 np0005548731 nova_compute[232433]: 2025-12-06 07:35:47.461 232437 DEBUG nova.compute.manager [req-2476435a-5d30-4568-9932-9e38b3112f6b req-fe8e6450-693b-485e-94f8-2a71243837d6 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: b4be0ef8-945f-47a1-a3a8-5962f1e692e5] Received event network-changed-657a8a48-7a74-4b37-a294-df0b2fd9332c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:35:47 np0005548731 nova_compute[232433]: 2025-12-06 07:35:47.461 232437 DEBUG nova.compute.manager [req-2476435a-5d30-4568-9932-9e38b3112f6b req-fe8e6450-693b-485e-94f8-2a71243837d6 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: b4be0ef8-945f-47a1-a3a8-5962f1e692e5] Refreshing instance network info cache due to event network-changed-657a8a48-7a74-4b37-a294-df0b2fd9332c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  6 02:35:47 np0005548731 nova_compute[232433]: 2025-12-06 07:35:47.462 232437 DEBUG oslo_concurrency.lockutils [req-2476435a-5d30-4568-9932-9e38b3112f6b req-fe8e6450-693b-485e-94f8-2a71243837d6 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-b4be0ef8-945f-47a1-a3a8-5962f1e692e5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 02:35:47 np0005548731 nova_compute[232433]: 2025-12-06 07:35:47.462 232437 DEBUG oslo_concurrency.lockutils [req-2476435a-5d30-4568-9932-9e38b3112f6b req-fe8e6450-693b-485e-94f8-2a71243837d6 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-b4be0ef8-945f-47a1-a3a8-5962f1e692e5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 02:35:47 np0005548731 nova_compute[232433]: 2025-12-06 07:35:47.462 232437 DEBUG nova.network.neutron [req-2476435a-5d30-4568-9932-9e38b3112f6b req-fe8e6450-693b-485e-94f8-2a71243837d6 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: b4be0ef8-945f-47a1-a3a8-5962f1e692e5] Refreshing network info cache for port 657a8a48-7a74-4b37-a294-df0b2fd9332c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  6 02:35:47 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:35:47 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:35:47 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:35:47.814 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:35:47 np0005548731 nova_compute[232433]: 2025-12-06 07:35:47.892 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:35:48 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e293 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:35:48 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:35:48 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:35:48 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:35:48.632 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:35:49 np0005548731 nova_compute[232433]: 2025-12-06 07:35:49.519 232437 DEBUG nova.network.neutron [req-2476435a-5d30-4568-9932-9e38b3112f6b req-fe8e6450-693b-485e-94f8-2a71243837d6 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: b4be0ef8-945f-47a1-a3a8-5962f1e692e5] Updated VIF entry in instance network info cache for port 657a8a48-7a74-4b37-a294-df0b2fd9332c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  6 02:35:49 np0005548731 nova_compute[232433]: 2025-12-06 07:35:49.520 232437 DEBUG nova.network.neutron [req-2476435a-5d30-4568-9932-9e38b3112f6b req-fe8e6450-693b-485e-94f8-2a71243837d6 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: b4be0ef8-945f-47a1-a3a8-5962f1e692e5] Updating instance_info_cache with network_info: [{"id": "657a8a48-7a74-4b37-a294-df0b2fd9332c", "address": "fa:16:3e:f4:fe:4a", "network": {"id": "5bb49e8a-b939-4c79-851c-62c634be0272", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-50482264-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "833f4cf9f5a64b2ab94c3bf330353a31", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap657a8a48-7a", "ovs_interfaceid": "657a8a48-7a74-4b37-a294-df0b2fd9332c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 02:35:49 np0005548731 nova_compute[232433]: 2025-12-06 07:35:49.596 232437 DEBUG oslo_concurrency.lockutils [req-2476435a-5d30-4568-9932-9e38b3112f6b req-fe8e6450-693b-485e-94f8-2a71243837d6 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-b4be0ef8-945f-47a1-a3a8-5962f1e692e5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 02:35:49 np0005548731 nova_compute[232433]: 2025-12-06 07:35:49.617 232437 DEBUG nova.compute.manager [req-3eb1dafc-f8f3-4bc3-95c5-94ee675ca26d req-190777af-2aeb-441e-b8b0-7d00ee0cb56e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 3d4b97f9-b8c3-4764-87f9-4006968fecd4] Received event network-changed-bfac9ac2-c1cc-4dc1-830c-44e5990fb8e1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:35:49 np0005548731 nova_compute[232433]: 2025-12-06 07:35:49.617 232437 DEBUG nova.compute.manager [req-3eb1dafc-f8f3-4bc3-95c5-94ee675ca26d req-190777af-2aeb-441e-b8b0-7d00ee0cb56e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 3d4b97f9-b8c3-4764-87f9-4006968fecd4] Refreshing instance network info cache due to event network-changed-bfac9ac2-c1cc-4dc1-830c-44e5990fb8e1. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  6 02:35:49 np0005548731 nova_compute[232433]: 2025-12-06 07:35:49.617 232437 DEBUG oslo_concurrency.lockutils [req-3eb1dafc-f8f3-4bc3-95c5-94ee675ca26d req-190777af-2aeb-441e-b8b0-7d00ee0cb56e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-3d4b97f9-b8c3-4764-87f9-4006968fecd4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 02:35:49 np0005548731 nova_compute[232433]: 2025-12-06 07:35:49.617 232437 DEBUG oslo_concurrency.lockutils [req-3eb1dafc-f8f3-4bc3-95c5-94ee675ca26d req-190777af-2aeb-441e-b8b0-7d00ee0cb56e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-3d4b97f9-b8c3-4764-87f9-4006968fecd4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 02:35:49 np0005548731 nova_compute[232433]: 2025-12-06 07:35:49.618 232437 DEBUG nova.network.neutron [req-3eb1dafc-f8f3-4bc3-95c5-94ee675ca26d req-190777af-2aeb-441e-b8b0-7d00ee0cb56e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 3d4b97f9-b8c3-4764-87f9-4006968fecd4] Refreshing network info cache for port bfac9ac2-c1cc-4dc1-830c-44e5990fb8e1 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  6 02:35:49 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:35:49 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 02:35:49 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:35:49.816 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 02:35:50 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:35:50 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:35:50 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:35:50.634 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:35:51 np0005548731 nova_compute[232433]: 2025-12-06 07:35:51.234 232437 DEBUG nova.network.neutron [req-3eb1dafc-f8f3-4bc3-95c5-94ee675ca26d req-190777af-2aeb-441e-b8b0-7d00ee0cb56e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 3d4b97f9-b8c3-4764-87f9-4006968fecd4] Updated VIF entry in instance network info cache for port bfac9ac2-c1cc-4dc1-830c-44e5990fb8e1. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  6 02:35:51 np0005548731 nova_compute[232433]: 2025-12-06 07:35:51.235 232437 DEBUG nova.network.neutron [req-3eb1dafc-f8f3-4bc3-95c5-94ee675ca26d req-190777af-2aeb-441e-b8b0-7d00ee0cb56e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 3d4b97f9-b8c3-4764-87f9-4006968fecd4] Updating instance_info_cache with network_info: [{"id": "bfac9ac2-c1cc-4dc1-830c-44e5990fb8e1", "address": "fa:16:3e:c3:31:81", "network": {"id": "5bb49e8a-b939-4c79-851c-62c634be0272", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-50482264-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.187", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "833f4cf9f5a64b2ab94c3bf330353a31", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbfac9ac2-c1", "ovs_interfaceid": "bfac9ac2-c1cc-4dc1-830c-44e5990fb8e1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 02:35:51 np0005548731 nova_compute[232433]: 2025-12-06 07:35:51.236 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:35:51 np0005548731 nova_compute[232433]: 2025-12-06 07:35:51.279 232437 DEBUG oslo_concurrency.lockutils [req-3eb1dafc-f8f3-4bc3-95c5-94ee675ca26d req-190777af-2aeb-441e-b8b0-7d00ee0cb56e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-3d4b97f9-b8c3-4764-87f9-4006968fecd4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 02:35:51 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:35:51 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 02:35:51 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:35:51.818 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 02:35:52 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:35:52 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 02:35:52 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:35:52.636 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 02:35:52 np0005548731 nova_compute[232433]: 2025-12-06 07:35:52.894 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:35:53 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e293 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:35:53 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:35:53 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:35:53 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:35:53.822 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:35:54 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e294 e294: 3 total, 3 up, 3 in
Dec  6 02:35:54 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:35:54 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:35:54 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:35:54.640 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:35:55 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:35:55 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:35:55 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:35:55.824 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:35:55 np0005548731 podman[287503]: 2025-12-06 07:35:55.904431159 +0000 UTC m=+0.061951360 container health_status ae4fd08cdec31d6ae2a6b91ffff7deb6ca5a8375cb52ee4b685cc6f25796824e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Dec  6 02:35:55 np0005548731 podman[287502]: 2025-12-06 07:35:55.925676877 +0000 UTC m=+0.083807143 container health_status a118c3becf56cfbcae208065771ebf10a65162bb7b96389971293e534823fb78 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Dec  6 02:35:55 np0005548731 podman[287501]: 2025-12-06 07:35:55.926304002 +0000 UTC m=+0.084377467 container health_status 26c2ce9bdb83c6db5ccad7f5b2a7cb4e9354ab0235ad2f942ea42c8feda2142b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible)
Dec  6 02:35:56 np0005548731 nova_compute[232433]: 2025-12-06 07:35:56.237 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:35:56 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:35:56 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:35:56 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:35:56.643 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:35:57 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:35:57 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:35:57 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:35:57.826 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:35:57 np0005548731 nova_compute[232433]: 2025-12-06 07:35:57.896 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:35:58 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e294 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:35:58 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:35:58 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:35:58 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:35:58.646 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:35:59 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:35:59 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:35:59 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:35:59.828 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:36:00 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:36:00 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:36:00 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:36:00.649 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:36:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:36:00.877 143965 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:36:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:36:00.878 143965 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:36:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:36:00.878 143965 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:36:01 np0005548731 nova_compute[232433]: 2025-12-06 07:36:01.274 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:36:01 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:36:01 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:36:01 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:36:01.830 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:36:02 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:36:02 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:36:02 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:36:02.652 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:36:02 np0005548731 nova_compute[232433]: 2025-12-06 07:36:02.897 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:36:03 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e294 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:36:03 np0005548731 nova_compute[232433]: 2025-12-06 07:36:03.620 232437 INFO nova.virt.libvirt.driver [None req-43867805-c1b0-49e7-91a1-fd3c9fff58dd 603967295b474367b2a605071d414bbf 2cfa2f2dfd6f4dc090f85672b1f24e68 - - default default] [instance: 3662ed68-91aa-4b97-8b12-5c87f99b02c9] Deleting instance files /var/lib/nova/instances/3662ed68-91aa-4b97-8b12-5c87f99b02c9_del#033[00m
Dec  6 02:36:03 np0005548731 nova_compute[232433]: 2025-12-06 07:36:03.621 232437 INFO nova.virt.libvirt.driver [None req-43867805-c1b0-49e7-91a1-fd3c9fff58dd 603967295b474367b2a605071d414bbf 2cfa2f2dfd6f4dc090f85672b1f24e68 - - default default] [instance: 3662ed68-91aa-4b97-8b12-5c87f99b02c9] Deletion of /var/lib/nova/instances/3662ed68-91aa-4b97-8b12-5c87f99b02c9_del complete#033[00m
Dec  6 02:36:03 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:36:03 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:36:03 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:36:03.833 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:36:04 np0005548731 ovn_controller[133927]: 2025-12-06T07:36:04Z|00081|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:c3:31:81 10.100.0.6
Dec  6 02:36:04 np0005548731 ovn_controller[133927]: 2025-12-06T07:36:04Z|00082|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:c3:31:81 10.100.0.6
Dec  6 02:36:04 np0005548731 nova_compute[232433]: 2025-12-06 07:36:04.213 232437 INFO nova.compute.manager [None req-43867805-c1b0-49e7-91a1-fd3c9fff58dd 603967295b474367b2a605071d414bbf 2cfa2f2dfd6f4dc090f85672b1f24e68 - - default default] [instance: 3662ed68-91aa-4b97-8b12-5c87f99b02c9] Took 47.71 seconds to destroy the instance on the hypervisor.#033[00m
Dec  6 02:36:04 np0005548731 nova_compute[232433]: 2025-12-06 07:36:04.214 232437 DEBUG oslo.service.loopingcall [None req-43867805-c1b0-49e7-91a1-fd3c9fff58dd 603967295b474367b2a605071d414bbf 2cfa2f2dfd6f4dc090f85672b1f24e68 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Dec  6 02:36:04 np0005548731 nova_compute[232433]: 2025-12-06 07:36:04.215 232437 DEBUG nova.compute.manager [-] [instance: 3662ed68-91aa-4b97-8b12-5c87f99b02c9] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Dec  6 02:36:04 np0005548731 nova_compute[232433]: 2025-12-06 07:36:04.215 232437 DEBUG nova.network.neutron [-] [instance: 3662ed68-91aa-4b97-8b12-5c87f99b02c9] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Dec  6 02:36:04 np0005548731 nova_compute[232433]: 2025-12-06 07:36:04.402 232437 DEBUG oslo_concurrency.lockutils [None req-21ec0113-4b61-46ae-b8f5-3735b92c9f2b 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Acquiring lock "b4be0ef8-945f-47a1-a3a8-5962f1e692e5" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:36:04 np0005548731 nova_compute[232433]: 2025-12-06 07:36:04.402 232437 DEBUG oslo_concurrency.lockutils [None req-21ec0113-4b61-46ae-b8f5-3735b92c9f2b 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Lock "b4be0ef8-945f-47a1-a3a8-5962f1e692e5" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:36:04 np0005548731 nova_compute[232433]: 2025-12-06 07:36:04.491 232437 DEBUG nova.objects.instance [None req-21ec0113-4b61-46ae-b8f5-3735b92c9f2b 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Lazy-loading 'flavor' on Instance uuid b4be0ef8-945f-47a1-a3a8-5962f1e692e5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:36:04 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:36:04 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:36:04 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:36:04.655 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:36:04 np0005548731 nova_compute[232433]: 2025-12-06 07:36:04.813 232437 DEBUG oslo_concurrency.lockutils [None req-21ec0113-4b61-46ae-b8f5-3735b92c9f2b 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Lock "b4be0ef8-945f-47a1-a3a8-5962f1e692e5" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.411s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:36:05 np0005548731 nova_compute[232433]: 2025-12-06 07:36:05.497 232437 DEBUG nova.network.neutron [-] [instance: 3662ed68-91aa-4b97-8b12-5c87f99b02c9] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 02:36:05 np0005548731 nova_compute[232433]: 2025-12-06 07:36:05.499 232437 DEBUG oslo_concurrency.lockutils [None req-21ec0113-4b61-46ae-b8f5-3735b92c9f2b 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Acquiring lock "b4be0ef8-945f-47a1-a3a8-5962f1e692e5" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:36:05 np0005548731 nova_compute[232433]: 2025-12-06 07:36:05.500 232437 DEBUG oslo_concurrency.lockutils [None req-21ec0113-4b61-46ae-b8f5-3735b92c9f2b 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Lock "b4be0ef8-945f-47a1-a3a8-5962f1e692e5" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:36:05 np0005548731 nova_compute[232433]: 2025-12-06 07:36:05.500 232437 INFO nova.compute.manager [None req-21ec0113-4b61-46ae-b8f5-3735b92c9f2b 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] [instance: b4be0ef8-945f-47a1-a3a8-5962f1e692e5] Attaching volume fb115316-35b8-4af3-9847-c6315737a158 to /dev/vdb#033[00m
Dec  6 02:36:05 np0005548731 nova_compute[232433]: 2025-12-06 07:36:05.733 232437 DEBUG nova.compute.manager [req-6d126d69-78a0-413d-9efb-7e1d4c129b41 req-4ac55e8f-74e5-4768-bdc6-d78ecbf6ab85 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 3662ed68-91aa-4b97-8b12-5c87f99b02c9] Received event network-vif-deleted-057f5adb-642a-4046-a07d-f33f8ab42b98 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:36:05 np0005548731 nova_compute[232433]: 2025-12-06 07:36:05.733 232437 INFO nova.compute.manager [req-6d126d69-78a0-413d-9efb-7e1d4c129b41 req-4ac55e8f-74e5-4768-bdc6-d78ecbf6ab85 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 3662ed68-91aa-4b97-8b12-5c87f99b02c9] Neutron deleted interface 057f5adb-642a-4046-a07d-f33f8ab42b98; detaching it from the instance and deleting it from the info cache#033[00m
Dec  6 02:36:05 np0005548731 nova_compute[232433]: 2025-12-06 07:36:05.734 232437 DEBUG nova.network.neutron [req-6d126d69-78a0-413d-9efb-7e1d4c129b41 req-4ac55e8f-74e5-4768-bdc6-d78ecbf6ab85 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 3662ed68-91aa-4b97-8b12-5c87f99b02c9] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 02:36:05 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:36:05 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:36:05 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:36:05.837 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:36:05 np0005548731 nova_compute[232433]: 2025-12-06 07:36:05.881 232437 DEBUG os_brick.utils [None req-21ec0113-4b61-46ae-b8f5-3735b92c9f2b 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.102', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-2.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Dec  6 02:36:05 np0005548731 nova_compute[232433]: 2025-12-06 07:36:05.882 237736 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:36:05 np0005548731 nova_compute[232433]: 2025-12-06 07:36:05.894 237736 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.012s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:36:05 np0005548731 nova_compute[232433]: 2025-12-06 07:36:05.894 237736 DEBUG oslo.privsep.daemon [-] privsep: reply[360b2c34-c465-403a-a055-160eeff1d290]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:36:05 np0005548731 nova_compute[232433]: 2025-12-06 07:36:05.896 237736 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:36:05 np0005548731 nova_compute[232433]: 2025-12-06 07:36:05.902 237736 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.007s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:36:05 np0005548731 nova_compute[232433]: 2025-12-06 07:36:05.903 237736 DEBUG oslo.privsep.daemon [-] privsep: reply[ac8b7b16-432a-460d-a533-7b8464219cb0]: (4, ('InitiatorName=iqn.1994-05.com.redhat:63778d5959f0', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:36:05 np0005548731 nova_compute[232433]: 2025-12-06 07:36:05.904 237736 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:36:05 np0005548731 nova_compute[232433]: 2025-12-06 07:36:05.913 237736 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:36:05 np0005548731 nova_compute[232433]: 2025-12-06 07:36:05.913 237736 DEBUG oslo.privsep.daemon [-] privsep: reply[c92ef341-3308-4045-83c7-fc2623f0d107]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:36:05 np0005548731 nova_compute[232433]: 2025-12-06 07:36:05.915 237736 DEBUG oslo.privsep.daemon [-] privsep: reply[d9979415-1279-439c-b518-75ecdb4e808e]: (4, 'ec2492e2-e0d1-43d3-b74f-744a8272a0e6') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:36:05 np0005548731 nova_compute[232433]: 2025-12-06 07:36:05.915 232437 DEBUG oslo_concurrency.processutils [None req-21ec0113-4b61-46ae-b8f5-3735b92c9f2b 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:36:05 np0005548731 nova_compute[232433]: 2025-12-06 07:36:05.939 232437 DEBUG oslo_concurrency.processutils [None req-21ec0113-4b61-46ae-b8f5-3735b92c9f2b 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] CMD "nvme version" returned: 0 in 0.024s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:36:05 np0005548731 nova_compute[232433]: 2025-12-06 07:36:05.942 232437 DEBUG os_brick.initiator.connectors.lightos [None req-21ec0113-4b61-46ae-b8f5-3735b92c9f2b 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Dec  6 02:36:05 np0005548731 nova_compute[232433]: 2025-12-06 07:36:05.943 232437 DEBUG os_brick.initiator.connectors.lightos [None req-21ec0113-4b61-46ae-b8f5-3735b92c9f2b 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Dec  6 02:36:05 np0005548731 nova_compute[232433]: 2025-12-06 07:36:05.943 232437 DEBUG os_brick.initiator.connectors.lightos [None req-21ec0113-4b61-46ae-b8f5-3735b92c9f2b 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:bf3e0a14-a5f8-4123-aa26-e7cad37b879a dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Dec  6 02:36:05 np0005548731 nova_compute[232433]: 2025-12-06 07:36:05.943 232437 DEBUG os_brick.utils [None req-21ec0113-4b61-46ae-b8f5-3735b92c9f2b 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] <== get_connector_properties: return (62ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.102', 'host': 'compute-2.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:63778d5959f0', 'do_local_attach': False, 'nvme_hostid': 'bf3e0a14-a5f8-4123-aa26-e7cad37b879a', 'system uuid': 'ec2492e2-e0d1-43d3-b74f-744a8272a0e6', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:bf3e0a14-a5f8-4123-aa26-e7cad37b879a', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Dec  6 02:36:05 np0005548731 nova_compute[232433]: 2025-12-06 07:36:05.944 232437 DEBUG nova.virt.block_device [None req-21ec0113-4b61-46ae-b8f5-3735b92c9f2b 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] [instance: b4be0ef8-945f-47a1-a3a8-5962f1e692e5] Updating existing volume attachment record: 588b948b-da3b-4aaf-bfab-1367bd77ec8a _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Dec  6 02:36:05 np0005548731 nova_compute[232433]: 2025-12-06 07:36:05.977 232437 INFO nova.compute.manager [-] [instance: 3662ed68-91aa-4b97-8b12-5c87f99b02c9] Took 1.76 seconds to deallocate network for instance.#033[00m
Dec  6 02:36:05 np0005548731 nova_compute[232433]: 2025-12-06 07:36:05.991 232437 DEBUG nova.compute.manager [req-6d126d69-78a0-413d-9efb-7e1d4c129b41 req-4ac55e8f-74e5-4768-bdc6-d78ecbf6ab85 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 3662ed68-91aa-4b97-8b12-5c87f99b02c9] Detach interface failed, port_id=057f5adb-642a-4046-a07d-f33f8ab42b98, reason: Instance 3662ed68-91aa-4b97-8b12-5c87f99b02c9 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882#033[00m
Dec  6 02:36:06 np0005548731 nova_compute[232433]: 2025-12-06 07:36:06.067 232437 DEBUG oslo_concurrency.lockutils [None req-43867805-c1b0-49e7-91a1-fd3c9fff58dd 603967295b474367b2a605071d414bbf 2cfa2f2dfd6f4dc090f85672b1f24e68 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:36:06 np0005548731 nova_compute[232433]: 2025-12-06 07:36:06.067 232437 DEBUG oslo_concurrency.lockutils [None req-43867805-c1b0-49e7-91a1-fd3c9fff58dd 603967295b474367b2a605071d414bbf 2cfa2f2dfd6f4dc090f85672b1f24e68 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:36:06 np0005548731 nova_compute[232433]: 2025-12-06 07:36:06.164 232437 DEBUG oslo_concurrency.processutils [None req-43867805-c1b0-49e7-91a1-fd3c9fff58dd 603967295b474367b2a605071d414bbf 2cfa2f2dfd6f4dc090f85672b1f24e68 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:36:06 np0005548731 nova_compute[232433]: 2025-12-06 07:36:06.276 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:36:06 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 02:36:06 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2192358059' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 02:36:06 np0005548731 nova_compute[232433]: 2025-12-06 07:36:06.594 232437 DEBUG oslo_concurrency.processutils [None req-43867805-c1b0-49e7-91a1-fd3c9fff58dd 603967295b474367b2a605071d414bbf 2cfa2f2dfd6f4dc090f85672b1f24e68 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.430s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:36:06 np0005548731 nova_compute[232433]: 2025-12-06 07:36:06.601 232437 DEBUG nova.compute.provider_tree [None req-43867805-c1b0-49e7-91a1-fd3c9fff58dd 603967295b474367b2a605071d414bbf 2cfa2f2dfd6f4dc090f85672b1f24e68 - - default default] Inventory has not changed in ProviderTree for provider: 6d00757a-082f-486d-ae84-869a2ba2e6e7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  6 02:36:06 np0005548731 nova_compute[232433]: 2025-12-06 07:36:06.620 232437 DEBUG nova.scheduler.client.report [None req-43867805-c1b0-49e7-91a1-fd3c9fff58dd 603967295b474367b2a605071d414bbf 2cfa2f2dfd6f4dc090f85672b1f24e68 - - default default] Inventory has not changed for provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  6 02:36:06 np0005548731 nova_compute[232433]: 2025-12-06 07:36:06.653 232437 DEBUG oslo_concurrency.lockutils [None req-43867805-c1b0-49e7-91a1-fd3c9fff58dd 603967295b474367b2a605071d414bbf 2cfa2f2dfd6f4dc090f85672b1f24e68 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.586s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:36:06 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:36:06 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:36:06 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:36:06.658 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:36:06 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  6 02:36:06 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1808890436' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  6 02:36:06 np0005548731 nova_compute[232433]: 2025-12-06 07:36:06.705 232437 INFO nova.scheduler.client.report [None req-43867805-c1b0-49e7-91a1-fd3c9fff58dd 603967295b474367b2a605071d414bbf 2cfa2f2dfd6f4dc090f85672b1f24e68 - - default default] Deleted allocations for instance 3662ed68-91aa-4b97-8b12-5c87f99b02c9#033[00m
Dec  6 02:36:06 np0005548731 nova_compute[232433]: 2025-12-06 07:36:06.804 232437 DEBUG nova.objects.instance [None req-21ec0113-4b61-46ae-b8f5-3735b92c9f2b 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Lazy-loading 'flavor' on Instance uuid b4be0ef8-945f-47a1-a3a8-5962f1e692e5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:36:06 np0005548731 nova_compute[232433]: 2025-12-06 07:36:06.809 232437 DEBUG oslo_concurrency.lockutils [None req-43867805-c1b0-49e7-91a1-fd3c9fff58dd 603967295b474367b2a605071d414bbf 2cfa2f2dfd6f4dc090f85672b1f24e68 - - default default] Lock "3662ed68-91aa-4b97-8b12-5c87f99b02c9" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 50.306s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:36:06 np0005548731 nova_compute[232433]: 2025-12-06 07:36:06.834 232437 DEBUG nova.virt.libvirt.driver [None req-21ec0113-4b61-46ae-b8f5-3735b92c9f2b 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] [instance: b4be0ef8-945f-47a1-a3a8-5962f1e692e5] Attempting to attach volume fb115316-35b8-4af3-9847-c6315737a158 with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168#033[00m
Dec  6 02:36:06 np0005548731 nova_compute[232433]: 2025-12-06 07:36:06.838 232437 DEBUG nova.virt.libvirt.guest [None req-21ec0113-4b61-46ae-b8f5-3735b92c9f2b 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] attach device xml: <disk type="network" device="disk">
Dec  6 02:36:06 np0005548731 nova_compute[232433]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Dec  6 02:36:06 np0005548731 nova_compute[232433]:  <source protocol="rbd" name="volumes/volume-fb115316-35b8-4af3-9847-c6315737a158">
Dec  6 02:36:06 np0005548731 nova_compute[232433]:    <host name="192.168.122.100" port="6789"/>
Dec  6 02:36:06 np0005548731 nova_compute[232433]:    <host name="192.168.122.102" port="6789"/>
Dec  6 02:36:06 np0005548731 nova_compute[232433]:    <host name="192.168.122.101" port="6789"/>
Dec  6 02:36:06 np0005548731 nova_compute[232433]:  </source>
Dec  6 02:36:06 np0005548731 nova_compute[232433]:  <auth username="openstack">
Dec  6 02:36:06 np0005548731 nova_compute[232433]:    <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec  6 02:36:06 np0005548731 nova_compute[232433]:  </auth>
Dec  6 02:36:06 np0005548731 nova_compute[232433]:  <target dev="vdb" bus="virtio"/>
Dec  6 02:36:06 np0005548731 nova_compute[232433]:  <serial>fb115316-35b8-4af3-9847-c6315737a158</serial>
Dec  6 02:36:06 np0005548731 nova_compute[232433]:  <shareable/>
Dec  6 02:36:06 np0005548731 nova_compute[232433]: </disk>
Dec  6 02:36:06 np0005548731 nova_compute[232433]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339#033[00m
Dec  6 02:36:07 np0005548731 nova_compute[232433]: 2025-12-06 07:36:07.521 232437 DEBUG nova.virt.libvirt.driver [None req-21ec0113-4b61-46ae-b8f5-3735b92c9f2b 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  6 02:36:07 np0005548731 nova_compute[232433]: 2025-12-06 07:36:07.521 232437 DEBUG nova.virt.libvirt.driver [None req-21ec0113-4b61-46ae-b8f5-3735b92c9f2b 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  6 02:36:07 np0005548731 nova_compute[232433]: 2025-12-06 07:36:07.521 232437 DEBUG nova.virt.libvirt.driver [None req-21ec0113-4b61-46ae-b8f5-3735b92c9f2b 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  6 02:36:07 np0005548731 nova_compute[232433]: 2025-12-06 07:36:07.522 232437 DEBUG nova.virt.libvirt.driver [None req-21ec0113-4b61-46ae-b8f5-3735b92c9f2b 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] No VIF found with MAC fa:16:3e:f4:fe:4a, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec  6 02:36:07 np0005548731 nova_compute[232433]: 2025-12-06 07:36:07.785 232437 DEBUG oslo_concurrency.lockutils [None req-21ec0113-4b61-46ae-b8f5-3735b92c9f2b 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Lock "b4be0ef8-945f-47a1-a3a8-5962f1e692e5" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 2.285s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:36:07 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:36:07 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 02:36:07 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:36:07.839 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 02:36:07 np0005548731 nova_compute[232433]: 2025-12-06 07:36:07.900 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:36:08 np0005548731 nova_compute[232433]: 2025-12-06 07:36:08.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:36:08 np0005548731 nova_compute[232433]: 2025-12-06 07:36:08.104 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  6 02:36:08 np0005548731 nova_compute[232433]: 2025-12-06 07:36:08.105 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  6 02:36:08 np0005548731 nova_compute[232433]: 2025-12-06 07:36:08.335 232437 DEBUG oslo_concurrency.lockutils [None req-9c9c2f1d-84a0-4817-8684-8523acdb3bd1 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Acquiring lock "3d4b97f9-b8c3-4764-87f9-4006968fecd4" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:36:08 np0005548731 nova_compute[232433]: 2025-12-06 07:36:08.335 232437 DEBUG oslo_concurrency.lockutils [None req-9c9c2f1d-84a0-4817-8684-8523acdb3bd1 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Lock "3d4b97f9-b8c3-4764-87f9-4006968fecd4" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:36:08 np0005548731 nova_compute[232433]: 2025-12-06 07:36:08.337 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Acquiring lock "refresh_cache-6a50a40c-3b05-4c0e-aa67-1489e203824e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 02:36:08 np0005548731 nova_compute[232433]: 2025-12-06 07:36:08.337 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Acquired lock "refresh_cache-6a50a40c-3b05-4c0e-aa67-1489e203824e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 02:36:08 np0005548731 nova_compute[232433]: 2025-12-06 07:36:08.337 232437 DEBUG nova.network.neutron [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] [instance: 6a50a40c-3b05-4c0e-aa67-1489e203824e] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Dec  6 02:36:08 np0005548731 nova_compute[232433]: 2025-12-06 07:36:08.338 232437 DEBUG nova.objects.instance [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lazy-loading 'info_cache' on Instance uuid 6a50a40c-3b05-4c0e-aa67-1489e203824e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:36:08 np0005548731 nova_compute[232433]: 2025-12-06 07:36:08.358 232437 DEBUG nova.objects.instance [None req-9c9c2f1d-84a0-4817-8684-8523acdb3bd1 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Lazy-loading 'flavor' on Instance uuid 3d4b97f9-b8c3-4764-87f9-4006968fecd4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:36:08 np0005548731 nova_compute[232433]: 2025-12-06 07:36:08.401 232437 DEBUG oslo_concurrency.lockutils [None req-9c9c2f1d-84a0-4817-8684-8523acdb3bd1 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Lock "3d4b97f9-b8c3-4764-87f9-4006968fecd4" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.066s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:36:08 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e294 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:36:08 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:36:08 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:36:08 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:36:08.661 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:36:08 np0005548731 nova_compute[232433]: 2025-12-06 07:36:08.713 232437 DEBUG oslo_concurrency.lockutils [None req-9c9c2f1d-84a0-4817-8684-8523acdb3bd1 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Acquiring lock "3d4b97f9-b8c3-4764-87f9-4006968fecd4" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:36:08 np0005548731 nova_compute[232433]: 2025-12-06 07:36:08.713 232437 DEBUG oslo_concurrency.lockutils [None req-9c9c2f1d-84a0-4817-8684-8523acdb3bd1 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Lock "3d4b97f9-b8c3-4764-87f9-4006968fecd4" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:36:08 np0005548731 nova_compute[232433]: 2025-12-06 07:36:08.714 232437 INFO nova.compute.manager [None req-9c9c2f1d-84a0-4817-8684-8523acdb3bd1 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] [instance: 3d4b97f9-b8c3-4764-87f9-4006968fecd4] Attaching volume fb115316-35b8-4af3-9847-c6315737a158 to /dev/vdb#033[00m
Dec  6 02:36:08 np0005548731 nova_compute[232433]: 2025-12-06 07:36:08.887 232437 DEBUG os_brick.utils [None req-9c9c2f1d-84a0-4817-8684-8523acdb3bd1 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.102', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-2.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Dec  6 02:36:08 np0005548731 nova_compute[232433]: 2025-12-06 07:36:08.888 237736 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:36:08 np0005548731 nova_compute[232433]: 2025-12-06 07:36:08.897 237736 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.010s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:36:08 np0005548731 nova_compute[232433]: 2025-12-06 07:36:08.898 237736 DEBUG oslo.privsep.daemon [-] privsep: reply[f2b1f043-0643-42df-b34d-bdc0533c7387]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:36:08 np0005548731 nova_compute[232433]: 2025-12-06 07:36:08.899 237736 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:36:08 np0005548731 nova_compute[232433]: 2025-12-06 07:36:08.906 237736 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.007s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:36:08 np0005548731 nova_compute[232433]: 2025-12-06 07:36:08.907 237736 DEBUG oslo.privsep.daemon [-] privsep: reply[d0313e40-3653-46f8-b3f3-319978b76baf]: (4, ('InitiatorName=iqn.1994-05.com.redhat:63778d5959f0', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:36:08 np0005548731 nova_compute[232433]: 2025-12-06 07:36:08.908 237736 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:36:08 np0005548731 nova_compute[232433]: 2025-12-06 07:36:08.916 237736 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:36:08 np0005548731 nova_compute[232433]: 2025-12-06 07:36:08.916 237736 DEBUG oslo.privsep.daemon [-] privsep: reply[421f66fa-62b1-47ce-a0ac-5275d32c79a6]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:36:08 np0005548731 nova_compute[232433]: 2025-12-06 07:36:08.918 237736 DEBUG oslo.privsep.daemon [-] privsep: reply[20f3f36e-d5d1-451a-b2a7-baf8788c24d7]: (4, 'ec2492e2-e0d1-43d3-b74f-744a8272a0e6') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:36:08 np0005548731 nova_compute[232433]: 2025-12-06 07:36:08.918 232437 DEBUG oslo_concurrency.processutils [None req-9c9c2f1d-84a0-4817-8684-8523acdb3bd1 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:36:08 np0005548731 nova_compute[232433]: 2025-12-06 07:36:08.941 232437 DEBUG oslo_concurrency.processutils [None req-9c9c2f1d-84a0-4817-8684-8523acdb3bd1 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] CMD "nvme version" returned: 0 in 0.023s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:36:08 np0005548731 nova_compute[232433]: 2025-12-06 07:36:08.943 232437 DEBUG os_brick.initiator.connectors.lightos [None req-9c9c2f1d-84a0-4817-8684-8523acdb3bd1 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Dec  6 02:36:08 np0005548731 nova_compute[232433]: 2025-12-06 07:36:08.944 232437 DEBUG os_brick.initiator.connectors.lightos [None req-9c9c2f1d-84a0-4817-8684-8523acdb3bd1 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Dec  6 02:36:08 np0005548731 nova_compute[232433]: 2025-12-06 07:36:08.944 232437 DEBUG os_brick.initiator.connectors.lightos [None req-9c9c2f1d-84a0-4817-8684-8523acdb3bd1 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:bf3e0a14-a5f8-4123-aa26-e7cad37b879a dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Dec  6 02:36:08 np0005548731 nova_compute[232433]: 2025-12-06 07:36:08.944 232437 DEBUG os_brick.utils [None req-9c9c2f1d-84a0-4817-8684-8523acdb3bd1 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] <== get_connector_properties: return (56ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.102', 'host': 'compute-2.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:63778d5959f0', 'do_local_attach': False, 'nvme_hostid': 'bf3e0a14-a5f8-4123-aa26-e7cad37b879a', 'system uuid': 'ec2492e2-e0d1-43d3-b74f-744a8272a0e6', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:bf3e0a14-a5f8-4123-aa26-e7cad37b879a', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Dec  6 02:36:08 np0005548731 nova_compute[232433]: 2025-12-06 07:36:08.944 232437 DEBUG nova.virt.block_device [None req-9c9c2f1d-84a0-4817-8684-8523acdb3bd1 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] [instance: 3d4b97f9-b8c3-4764-87f9-4006968fecd4] Updating existing volume attachment record: b75fc88a-f547-4e16-b4bb-08670c6d6262 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Dec  6 02:36:09 np0005548731 nova_compute[232433]: 2025-12-06 07:36:09.803 232437 DEBUG nova.network.neutron [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] [instance: 6a50a40c-3b05-4c0e-aa67-1489e203824e] Updating instance_info_cache with network_info: [{"id": "ad544c9a-af55-4a7f-babe-68f6d1b23e25", "address": "fa:16:3e:1f:fb:86", "network": {"id": "5bb49e8a-b939-4c79-851c-62c634be0272", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-50482264-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "833f4cf9f5a64b2ab94c3bf330353a31", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapad544c9a-af", "ovs_interfaceid": "ad544c9a-af55-4a7f-babe-68f6d1b23e25", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 02:36:09 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:36:09 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:36:09 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:36:09.842 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:36:09 np0005548731 nova_compute[232433]: 2025-12-06 07:36:09.890 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Releasing lock "refresh_cache-6a50a40c-3b05-4c0e-aa67-1489e203824e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 02:36:09 np0005548731 nova_compute[232433]: 2025-12-06 07:36:09.890 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] [instance: 6a50a40c-3b05-4c0e-aa67-1489e203824e] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Dec  6 02:36:09 np0005548731 nova_compute[232433]: 2025-12-06 07:36:09.891 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:36:10 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:36:10 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:36:10 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:36:10.664 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:36:10 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  6 02:36:10 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3904011872' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  6 02:36:10 np0005548731 nova_compute[232433]: 2025-12-06 07:36:10.883 232437 DEBUG nova.objects.instance [None req-9c9c2f1d-84a0-4817-8684-8523acdb3bd1 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Lazy-loading 'flavor' on Instance uuid 3d4b97f9-b8c3-4764-87f9-4006968fecd4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:36:10 np0005548731 nova_compute[232433]: 2025-12-06 07:36:10.911 232437 DEBUG nova.virt.libvirt.driver [None req-9c9c2f1d-84a0-4817-8684-8523acdb3bd1 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] [instance: 3d4b97f9-b8c3-4764-87f9-4006968fecd4] Attempting to attach volume fb115316-35b8-4af3-9847-c6315737a158 with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168#033[00m
Dec  6 02:36:10 np0005548731 nova_compute[232433]: 2025-12-06 07:36:10.915 232437 DEBUG nova.virt.libvirt.guest [None req-9c9c2f1d-84a0-4817-8684-8523acdb3bd1 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] attach device xml: <disk type="network" device="disk">
Dec  6 02:36:10 np0005548731 nova_compute[232433]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Dec  6 02:36:10 np0005548731 nova_compute[232433]:  <source protocol="rbd" name="volumes/volume-fb115316-35b8-4af3-9847-c6315737a158">
Dec  6 02:36:10 np0005548731 nova_compute[232433]:    <host name="192.168.122.100" port="6789"/>
Dec  6 02:36:10 np0005548731 nova_compute[232433]:    <host name="192.168.122.102" port="6789"/>
Dec  6 02:36:10 np0005548731 nova_compute[232433]:    <host name="192.168.122.101" port="6789"/>
Dec  6 02:36:10 np0005548731 nova_compute[232433]:  </source>
Dec  6 02:36:10 np0005548731 nova_compute[232433]:  <auth username="openstack">
Dec  6 02:36:10 np0005548731 nova_compute[232433]:    <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec  6 02:36:10 np0005548731 nova_compute[232433]:  </auth>
Dec  6 02:36:10 np0005548731 nova_compute[232433]:  <target dev="vdb" bus="virtio"/>
Dec  6 02:36:10 np0005548731 nova_compute[232433]:  <serial>fb115316-35b8-4af3-9847-c6315737a158</serial>
Dec  6 02:36:10 np0005548731 nova_compute[232433]:  <shareable/>
Dec  6 02:36:10 np0005548731 nova_compute[232433]: </disk>
Dec  6 02:36:10 np0005548731 nova_compute[232433]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339#033[00m
Dec  6 02:36:11 np0005548731 nova_compute[232433]: 2025-12-06 07:36:11.017 232437 DEBUG nova.virt.libvirt.driver [None req-9c9c2f1d-84a0-4817-8684-8523acdb3bd1 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  6 02:36:11 np0005548731 nova_compute[232433]: 2025-12-06 07:36:11.018 232437 DEBUG nova.virt.libvirt.driver [None req-9c9c2f1d-84a0-4817-8684-8523acdb3bd1 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  6 02:36:11 np0005548731 nova_compute[232433]: 2025-12-06 07:36:11.020 232437 DEBUG nova.virt.libvirt.driver [None req-9c9c2f1d-84a0-4817-8684-8523acdb3bd1 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  6 02:36:11 np0005548731 nova_compute[232433]: 2025-12-06 07:36:11.021 232437 DEBUG nova.virt.libvirt.driver [None req-9c9c2f1d-84a0-4817-8684-8523acdb3bd1 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] No VIF found with MAC fa:16:3e:c3:31:81, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec  6 02:36:11 np0005548731 nova_compute[232433]: 2025-12-06 07:36:11.224 232437 DEBUG oslo_concurrency.lockutils [None req-9c9c2f1d-84a0-4817-8684-8523acdb3bd1 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Lock "3d4b97f9-b8c3-4764-87f9-4006968fecd4" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 2.510s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:36:11 np0005548731 nova_compute[232433]: 2025-12-06 07:36:11.310 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:36:11 np0005548731 ovn_controller[133927]: 2025-12-06T07:36:11Z|00562|binding|INFO|Releasing lport e4d89947-8fab-4c13-b2db-4eed875f77a0 from this chassis (sb_readonly=0)
Dec  6 02:36:11 np0005548731 ceph-mon[77458]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #94. Immutable memtables: 0.
Dec  6 02:36:11 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:36:11.438891) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec  6 02:36:11 np0005548731 ceph-mon[77458]: rocksdb: [db/flush_job.cc:856] [default] [JOB 57] Flushing memtable with next log file: 94
Dec  6 02:36:11 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765006571438975, "job": 57, "event": "flush_started", "num_memtables": 1, "num_entries": 2134, "num_deletes": 258, "total_data_size": 5002953, "memory_usage": 5066160, "flush_reason": "Manual Compaction"}
Dec  6 02:36:11 np0005548731 ceph-mon[77458]: rocksdb: [db/flush_job.cc:885] [default] [JOB 57] Level-0 flush table #95: started
Dec  6 02:36:11 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e295 e295: 3 total, 3 up, 3 in
Dec  6 02:36:11 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765006571458356, "cf_name": "default", "job": 57, "event": "table_file_creation", "file_number": 95, "file_size": 3284966, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 48599, "largest_seqno": 50728, "table_properties": {"data_size": 3275826, "index_size": 5698, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2373, "raw_key_size": 19961, "raw_average_key_size": 21, "raw_value_size": 3257343, "raw_average_value_size": 3472, "num_data_blocks": 245, "num_entries": 938, "num_filter_entries": 938, "num_deletions": 258, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765006374, "oldest_key_time": 1765006374, "file_creation_time": 1765006571, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bc01f9a1-fda8-4008-aee1-1fa5ff4371a1", "db_session_id": "CWBL8WMDM4D79BU86E9T", "orig_file_number": 95, "seqno_to_time_mapping": "N/A"}}
Dec  6 02:36:11 np0005548731 ceph-mon[77458]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 57] Flush lasted 19656 microseconds, and 8268 cpu microseconds.
Dec  6 02:36:11 np0005548731 ceph-mon[77458]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  6 02:36:11 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:36:11.458526) [db/flush_job.cc:967] [default] [JOB 57] Level-0 flush table #95: 3284966 bytes OK
Dec  6 02:36:11 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:36:11.458582) [db/memtable_list.cc:519] [default] Level-0 commit table #95 started
Dec  6 02:36:11 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:36:11.460642) [db/memtable_list.cc:722] [default] Level-0 commit table #95: memtable #1 done
Dec  6 02:36:11 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:36:11.460663) EVENT_LOG_v1 {"time_micros": 1765006571460656, "job": 57, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec  6 02:36:11 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:36:11.460693) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec  6 02:36:11 np0005548731 ceph-mon[77458]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 57] Try to delete WAL files size 4993253, prev total WAL file size 4993294, number of live WAL files 2.
Dec  6 02:36:11 np0005548731 ceph-mon[77458]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000091.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  6 02:36:11 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:36:11.462466) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730034303136' seq:72057594037927935, type:22 .. '7061786F730034323638' seq:0, type:0; will stop at (end)
Dec  6 02:36:11 np0005548731 ceph-mon[77458]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 58] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec  6 02:36:11 np0005548731 ceph-mon[77458]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 57 Base level 0, inputs: [95(3207KB)], [93(10019KB)]
Dec  6 02:36:11 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765006571462571, "job": 58, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [95], "files_L6": [93], "score": -1, "input_data_size": 13544727, "oldest_snapshot_seqno": -1}
Dec  6 02:36:11 np0005548731 nova_compute[232433]: 2025-12-06 07:36:11.468 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:36:11 np0005548731 ceph-mon[77458]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 58] Generated table #96: 8244 keys, 11524714 bytes, temperature: kUnknown
Dec  6 02:36:11 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765006571559502, "cf_name": "default", "job": 58, "event": "table_file_creation", "file_number": 96, "file_size": 11524714, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11470961, "index_size": 32009, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 20677, "raw_key_size": 213323, "raw_average_key_size": 25, "raw_value_size": 11325290, "raw_average_value_size": 1373, "num_data_blocks": 1258, "num_entries": 8244, "num_filter_entries": 8244, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765002564, "oldest_key_time": 0, "file_creation_time": 1765006571, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bc01f9a1-fda8-4008-aee1-1fa5ff4371a1", "db_session_id": "CWBL8WMDM4D79BU86E9T", "orig_file_number": 96, "seqno_to_time_mapping": "N/A"}}
Dec  6 02:36:11 np0005548731 ceph-mon[77458]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  6 02:36:11 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:36:11.559868) [db/compaction/compaction_job.cc:1663] [default] [JOB 58] Compacted 1@0 + 1@6 files to L6 => 11524714 bytes
Dec  6 02:36:11 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:36:11.561691) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 139.5 rd, 118.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.1, 9.8 +0.0 blob) out(11.0 +0.0 blob), read-write-amplify(7.6) write-amplify(3.5) OK, records in: 8774, records dropped: 530 output_compression: NoCompression
Dec  6 02:36:11 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:36:11.561710) EVENT_LOG_v1 {"time_micros": 1765006571561701, "job": 58, "event": "compaction_finished", "compaction_time_micros": 97109, "compaction_time_cpu_micros": 45255, "output_level": 6, "num_output_files": 1, "total_output_size": 11524714, "num_input_records": 8774, "num_output_records": 8244, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec  6 02:36:11 np0005548731 ceph-mon[77458]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000095.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  6 02:36:11 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765006571562497, "job": 58, "event": "table_file_deletion", "file_number": 95}
Dec  6 02:36:11 np0005548731 ceph-mon[77458]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000093.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  6 02:36:11 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765006571565102, "job": 58, "event": "table_file_deletion", "file_number": 93}
Dec  6 02:36:11 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:36:11.462384) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 02:36:11 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:36:11.565182) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 02:36:11 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:36:11.565191) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 02:36:11 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:36:11.565194) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 02:36:11 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:36:11.565196) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 02:36:11 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:36:11.565198) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 02:36:11 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:36:11 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 02:36:11 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:36:11.844 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 02:36:12 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:36:12.009 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=50, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:ec:b3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '32:72:e7:89:e0:7d'}, ipsec=False) old=SB_Global(nb_cfg=49) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 02:36:12 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:36:12.010 143965 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec  6 02:36:12 np0005548731 nova_compute[232433]: 2025-12-06 07:36:12.010 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:36:12 np0005548731 nova_compute[232433]: 2025-12-06 07:36:12.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:36:12 np0005548731 nova_compute[232433]: 2025-12-06 07:36:12.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:36:12 np0005548731 nova_compute[232433]: 2025-12-06 07:36:12.104 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  6 02:36:12 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:36:12 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:36:12 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:36:12.667 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:36:12 np0005548731 nova_compute[232433]: 2025-12-06 07:36:12.902 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:36:13 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e295 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:36:13 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:36:13 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:36:13 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:36:13.846 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:36:14 np0005548731 ceph-mon[77458]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #97. Immutable memtables: 0.
Dec  6 02:36:14 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:36:14.241607) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec  6 02:36:14 np0005548731 ceph-mon[77458]: rocksdb: [db/flush_job.cc:856] [default] [JOB 59] Flushing memtable with next log file: 97
Dec  6 02:36:14 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765006574241665, "job": 59, "event": "flush_started", "num_memtables": 1, "num_entries": 296, "num_deletes": 261, "total_data_size": 77029, "memory_usage": 83824, "flush_reason": "Manual Compaction"}
Dec  6 02:36:14 np0005548731 ceph-mon[77458]: rocksdb: [db/flush_job.cc:885] [default] [JOB 59] Level-0 flush table #98: started
Dec  6 02:36:14 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765006574243951, "cf_name": "default", "job": 59, "event": "table_file_creation", "file_number": 98, "file_size": 50360, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 50729, "largest_seqno": 51024, "table_properties": {"data_size": 48413, "index_size": 112, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 773, "raw_key_size": 4970, "raw_average_key_size": 17, "raw_value_size": 44515, "raw_average_value_size": 157, "num_data_blocks": 5, "num_entries": 282, "num_filter_entries": 282, "num_deletions": 261, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765006571, "oldest_key_time": 1765006571, "file_creation_time": 1765006574, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bc01f9a1-fda8-4008-aee1-1fa5ff4371a1", "db_session_id": "CWBL8WMDM4D79BU86E9T", "orig_file_number": 98, "seqno_to_time_mapping": "N/A"}}
Dec  6 02:36:14 np0005548731 ceph-mon[77458]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 59] Flush lasted 2372 microseconds, and 794 cpu microseconds.
Dec  6 02:36:14 np0005548731 ceph-mon[77458]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  6 02:36:14 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:36:14.243986) [db/flush_job.cc:967] [default] [JOB 59] Level-0 flush table #98: 50360 bytes OK
Dec  6 02:36:14 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:36:14.244001) [db/memtable_list.cc:519] [default] Level-0 commit table #98 started
Dec  6 02:36:14 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:36:14.244771) [db/memtable_list.cc:722] [default] Level-0 commit table #98: memtable #1 done
Dec  6 02:36:14 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:36:14.244784) EVENT_LOG_v1 {"time_micros": 1765006574244780, "job": 59, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec  6 02:36:14 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:36:14.244800) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec  6 02:36:14 np0005548731 ceph-mon[77458]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 59] Try to delete WAL files size 74816, prev total WAL file size 74816, number of live WAL files 2.
Dec  6 02:36:14 np0005548731 ceph-mon[77458]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000094.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  6 02:36:14 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:36:14.245086) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0031353132' seq:72057594037927935, type:22 .. '6C6F676D0031373639' seq:0, type:0; will stop at (end)
Dec  6 02:36:14 np0005548731 ceph-mon[77458]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 60] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec  6 02:36:14 np0005548731 ceph-mon[77458]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 59 Base level 0, inputs: [98(49KB)], [96(10MB)]
Dec  6 02:36:14 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765006574245114, "job": 60, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [98], "files_L6": [96], "score": -1, "input_data_size": 11575074, "oldest_snapshot_seqno": -1}
Dec  6 02:36:14 np0005548731 ceph-mon[77458]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 60] Generated table #99: 7995 keys, 11428186 bytes, temperature: kUnknown
Dec  6 02:36:14 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765006574314946, "cf_name": "default", "job": 60, "event": "table_file_creation", "file_number": 99, "file_size": 11428186, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11375676, "index_size": 31406, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 20037, "raw_key_size": 209126, "raw_average_key_size": 26, "raw_value_size": 11234048, "raw_average_value_size": 1405, "num_data_blocks": 1228, "num_entries": 7995, "num_filter_entries": 7995, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765002564, "oldest_key_time": 0, "file_creation_time": 1765006574, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bc01f9a1-fda8-4008-aee1-1fa5ff4371a1", "db_session_id": "CWBL8WMDM4D79BU86E9T", "orig_file_number": 99, "seqno_to_time_mapping": "N/A"}}
Dec  6 02:36:14 np0005548731 ceph-mon[77458]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  6 02:36:14 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:36:14.315190) [db/compaction/compaction_job.cc:1663] [default] [JOB 60] Compacted 1@0 + 1@6 files to L6 => 11428186 bytes
Dec  6 02:36:14 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:36:14.316651) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 165.6 rd, 163.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.0, 11.0 +0.0 blob) out(10.9 +0.0 blob), read-write-amplify(456.8) write-amplify(226.9) OK, records in: 8526, records dropped: 531 output_compression: NoCompression
Dec  6 02:36:14 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:36:14.316667) EVENT_LOG_v1 {"time_micros": 1765006574316660, "job": 60, "event": "compaction_finished", "compaction_time_micros": 69908, "compaction_time_cpu_micros": 28187, "output_level": 6, "num_output_files": 1, "total_output_size": 11428186, "num_input_records": 8526, "num_output_records": 7995, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec  6 02:36:14 np0005548731 ceph-mon[77458]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000098.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  6 02:36:14 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765006574316774, "job": 60, "event": "table_file_deletion", "file_number": 98}
Dec  6 02:36:14 np0005548731 ceph-mon[77458]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000096.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  6 02:36:14 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765006574319227, "job": 60, "event": "table_file_deletion", "file_number": 96}
Dec  6 02:36:14 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:36:14.245026) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 02:36:14 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:36:14.319277) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 02:36:14 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:36:14.319282) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 02:36:14 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:36:14.319284) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 02:36:14 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:36:14.319285) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 02:36:14 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:36:14.319287) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 02:36:14 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:36:14 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:36:14 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:36:14.669 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:36:15 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Dec  6 02:36:15 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  6 02:36:15 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 02:36:15 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  6 02:36:15 np0005548731 nova_compute[232433]: 2025-12-06 07:36:15.123 232437 DEBUG nova.compute.manager [None req-f54157de-2bf7-49ae-a3a0-69979514d3f7 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] [instance: b4be0ef8-945f-47a1-a3a8-5962f1e692e5] Stashing vm_state: active _prep_resize /usr/lib/python3.9/site-packages/nova/compute/manager.py:5560#033[00m
Dec  6 02:36:15 np0005548731 nova_compute[232433]: 2025-12-06 07:36:15.222 232437 DEBUG oslo_concurrency.lockutils [None req-f54157de-2bf7-49ae-a3a0-69979514d3f7 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.resize_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:36:15 np0005548731 nova_compute[232433]: 2025-12-06 07:36:15.222 232437 DEBUG oslo_concurrency.lockutils [None req-f54157de-2bf7-49ae-a3a0-69979514d3f7 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.resize_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:36:15 np0005548731 nova_compute[232433]: 2025-12-06 07:36:15.242 232437 DEBUG nova.objects.instance [None req-f54157de-2bf7-49ae-a3a0-69979514d3f7 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Lazy-loading 'pci_requests' on Instance uuid b4be0ef8-945f-47a1-a3a8-5962f1e692e5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:36:15 np0005548731 nova_compute[232433]: 2025-12-06 07:36:15.259 232437 DEBUG nova.virt.hardware [None req-f54157de-2bf7-49ae-a3a0-69979514d3f7 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Dec  6 02:36:15 np0005548731 nova_compute[232433]: 2025-12-06 07:36:15.260 232437 INFO nova.compute.claims [None req-f54157de-2bf7-49ae-a3a0-69979514d3f7 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] [instance: b4be0ef8-945f-47a1-a3a8-5962f1e692e5] Claim successful on node compute-2.ctlplane.example.com#033[00m
Dec  6 02:36:15 np0005548731 nova_compute[232433]: 2025-12-06 07:36:15.260 232437 DEBUG nova.objects.instance [None req-f54157de-2bf7-49ae-a3a0-69979514d3f7 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Lazy-loading 'resources' on Instance uuid b4be0ef8-945f-47a1-a3a8-5962f1e692e5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:36:15 np0005548731 nova_compute[232433]: 2025-12-06 07:36:15.273 232437 DEBUG nova.objects.instance [None req-f54157de-2bf7-49ae-a3a0-69979514d3f7 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Lazy-loading 'pci_devices' on Instance uuid b4be0ef8-945f-47a1-a3a8-5962f1e692e5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:36:15 np0005548731 nova_compute[232433]: 2025-12-06 07:36:15.319 232437 INFO nova.compute.resource_tracker [None req-f54157de-2bf7-49ae-a3a0-69979514d3f7 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] [instance: b4be0ef8-945f-47a1-a3a8-5962f1e692e5] Updating resource usage from migration 46fa4558-b61b-4de6-8c71-66dd0299ecc4#033[00m
Dec  6 02:36:15 np0005548731 nova_compute[232433]: 2025-12-06 07:36:15.415 232437 DEBUG oslo_concurrency.processutils [None req-f54157de-2bf7-49ae-a3a0-69979514d3f7 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:36:15 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:36:15 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 02:36:15 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:36:15.849 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 02:36:15 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 02:36:15 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2477259455' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 02:36:15 np0005548731 nova_compute[232433]: 2025-12-06 07:36:15.878 232437 DEBUG oslo_concurrency.processutils [None req-f54157de-2bf7-49ae-a3a0-69979514d3f7 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:36:15 np0005548731 nova_compute[232433]: 2025-12-06 07:36:15.884 232437 DEBUG nova.compute.provider_tree [None req-f54157de-2bf7-49ae-a3a0-69979514d3f7 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Inventory has not changed in ProviderTree for provider: 6d00757a-082f-486d-ae84-869a2ba2e6e7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  6 02:36:15 np0005548731 nova_compute[232433]: 2025-12-06 07:36:15.904 232437 DEBUG nova.scheduler.client.report [None req-f54157de-2bf7-49ae-a3a0-69979514d3f7 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Inventory has not changed for provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  6 02:36:15 np0005548731 nova_compute[232433]: 2025-12-06 07:36:15.927 232437 DEBUG oslo_concurrency.lockutils [None req-f54157de-2bf7-49ae-a3a0-69979514d3f7 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.resize_claim" :: held 0.705s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:36:15 np0005548731 nova_compute[232433]: 2025-12-06 07:36:15.928 232437 INFO nova.compute.manager [None req-f54157de-2bf7-49ae-a3a0-69979514d3f7 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] [instance: b4be0ef8-945f-47a1-a3a8-5962f1e692e5] Migrating#033[00m
Dec  6 02:36:15 np0005548731 nova_compute[232433]: 2025-12-06 07:36:15.962 232437 DEBUG oslo_concurrency.lockutils [None req-f54157de-2bf7-49ae-a3a0-69979514d3f7 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Acquiring lock "refresh_cache-b4be0ef8-945f-47a1-a3a8-5962f1e692e5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 02:36:15 np0005548731 nova_compute[232433]: 2025-12-06 07:36:15.962 232437 DEBUG oslo_concurrency.lockutils [None req-f54157de-2bf7-49ae-a3a0-69979514d3f7 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Acquired lock "refresh_cache-b4be0ef8-945f-47a1-a3a8-5962f1e692e5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 02:36:15 np0005548731 nova_compute[232433]: 2025-12-06 07:36:15.963 232437 DEBUG nova.network.neutron [None req-f54157de-2bf7-49ae-a3a0-69979514d3f7 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] [instance: b4be0ef8-945f-47a1-a3a8-5962f1e692e5] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec  6 02:36:16 np0005548731 nova_compute[232433]: 2025-12-06 07:36:16.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:36:16 np0005548731 nova_compute[232433]: 2025-12-06 07:36:16.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:36:16 np0005548731 nova_compute[232433]: 2025-12-06 07:36:16.314 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:36:16 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:36:16 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:36:16 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:36:16.671 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:36:17 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:36:17.012 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=9f96b960-b4f2-40bd-ae99-08121f5e8b78, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '50'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:36:17 np0005548731 nova_compute[232433]: 2025-12-06 07:36:17.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:36:17 np0005548731 nova_compute[232433]: 2025-12-06 07:36:17.127 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:36:17 np0005548731 nova_compute[232433]: 2025-12-06 07:36:17.127 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:36:17 np0005548731 nova_compute[232433]: 2025-12-06 07:36:17.128 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:36:17 np0005548731 nova_compute[232433]: 2025-12-06 07:36:17.128 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  6 02:36:17 np0005548731 nova_compute[232433]: 2025-12-06 07:36:17.128 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:36:17 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 02:36:17 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1306820683' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 02:36:17 np0005548731 nova_compute[232433]: 2025-12-06 07:36:17.596 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.468s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:36:17 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:36:17 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:36:17 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:36:17.850 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:36:17 np0005548731 nova_compute[232433]: 2025-12-06 07:36:17.904 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:36:18 np0005548731 nova_compute[232433]: 2025-12-06 07:36:18.350 232437 DEBUG nova.virt.libvirt.driver [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] skipping disk for instance-0000007f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  6 02:36:18 np0005548731 nova_compute[232433]: 2025-12-06 07:36:18.350 232437 DEBUG nova.virt.libvirt.driver [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] skipping disk for instance-0000007f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  6 02:36:18 np0005548731 nova_compute[232433]: 2025-12-06 07:36:18.350 232437 DEBUG nova.virt.libvirt.driver [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] skipping disk for instance-0000007f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  6 02:36:18 np0005548731 nova_compute[232433]: 2025-12-06 07:36:18.354 232437 DEBUG nova.virt.libvirt.driver [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] skipping disk for instance-00000081 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  6 02:36:18 np0005548731 nova_compute[232433]: 2025-12-06 07:36:18.355 232437 DEBUG nova.virt.libvirt.driver [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] skipping disk for instance-00000081 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  6 02:36:18 np0005548731 nova_compute[232433]: 2025-12-06 07:36:18.355 232437 DEBUG nova.virt.libvirt.driver [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] skipping disk for instance-00000081 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  6 02:36:18 np0005548731 nova_compute[232433]: 2025-12-06 07:36:18.358 232437 DEBUG nova.virt.libvirt.driver [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] skipping disk for instance-00000078 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  6 02:36:18 np0005548731 nova_compute[232433]: 2025-12-06 07:36:18.359 232437 DEBUG nova.virt.libvirt.driver [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] skipping disk for instance-00000078 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  6 02:36:18 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e295 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:36:18 np0005548731 nova_compute[232433]: 2025-12-06 07:36:18.446 232437 DEBUG nova.network.neutron [None req-f54157de-2bf7-49ae-a3a0-69979514d3f7 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] [instance: b4be0ef8-945f-47a1-a3a8-5962f1e692e5] Updating instance_info_cache with network_info: [{"id": "657a8a48-7a74-4b37-a294-df0b2fd9332c", "address": "fa:16:3e:f4:fe:4a", "network": {"id": "5bb49e8a-b939-4c79-851c-62c634be0272", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-50482264-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "833f4cf9f5a64b2ab94c3bf330353a31", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap657a8a48-7a", "ovs_interfaceid": "657a8a48-7a74-4b37-a294-df0b2fd9332c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 02:36:18 np0005548731 nova_compute[232433]: 2025-12-06 07:36:18.464 232437 DEBUG oslo_concurrency.lockutils [None req-f54157de-2bf7-49ae-a3a0-69979514d3f7 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Releasing lock "refresh_cache-b4be0ef8-945f-47a1-a3a8-5962f1e692e5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 02:36:18 np0005548731 nova_compute[232433]: 2025-12-06 07:36:18.550 232437 WARNING nova.virt.libvirt.driver [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  6 02:36:18 np0005548731 nova_compute[232433]: 2025-12-06 07:36:18.551 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=3785MB free_disk=20.64830780029297GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, 
"label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  6 02:36:18 np0005548731 nova_compute[232433]: 2025-12-06 07:36:18.551 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:36:18 np0005548731 nova_compute[232433]: 2025-12-06 07:36:18.551 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:36:18 np0005548731 nova_compute[232433]: 2025-12-06 07:36:18.582 232437 DEBUG nova.virt.libvirt.driver [None req-f54157de-2bf7-49ae-a3a0-69979514d3f7 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] [instance: b4be0ef8-945f-47a1-a3a8-5962f1e692e5] Starting migrate_disk_and_power_off migrate_disk_and_power_off /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11511#033[00m
Dec  6 02:36:18 np0005548731 nova_compute[232433]: 2025-12-06 07:36:18.584 232437 DEBUG nova.virt.libvirt.driver [None req-f54157de-2bf7-49ae-a3a0-69979514d3f7 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] [instance: b4be0ef8-945f-47a1-a3a8-5962f1e692e5] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Dec  6 02:36:18 np0005548731 nova_compute[232433]: 2025-12-06 07:36:18.608 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Applying migration context for instance b4be0ef8-945f-47a1-a3a8-5962f1e692e5 as it has an incoming, in-progress migration 46fa4558-b61b-4de6-8c71-66dd0299ecc4. Migration status is migrating _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:950#033[00m
Dec  6 02:36:18 np0005548731 nova_compute[232433]: 2025-12-06 07:36:18.610 232437 INFO nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] [instance: b4be0ef8-945f-47a1-a3a8-5962f1e692e5] Updating resource usage from migration 46fa4558-b61b-4de6-8c71-66dd0299ecc4#033[00m
Dec  6 02:36:18 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:36:18 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:36:18 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:36:18.674 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:36:18 np0005548731 nova_compute[232433]: 2025-12-06 07:36:18.706 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Instance 6a50a40c-3b05-4c0e-aa67-1489e203824e actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  6 02:36:18 np0005548731 nova_compute[232433]: 2025-12-06 07:36:18.706 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Instance 3d4b97f9-b8c3-4764-87f9-4006968fecd4 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  6 02:36:18 np0005548731 nova_compute[232433]: 2025-12-06 07:36:18.706 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Migration 46fa4558-b61b-4de6-8c71-66dd0299ecc4 is active on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1640#033[00m
Dec  6 02:36:18 np0005548731 nova_compute[232433]: 2025-12-06 07:36:18.707 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Instance b4be0ef8-945f-47a1-a3a8-5962f1e692e5 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 192, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  6 02:36:18 np0005548731 nova_compute[232433]: 2025-12-06 07:36:18.707 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 4 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  6 02:36:18 np0005548731 nova_compute[232433]: 2025-12-06 07:36:18.707 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7680MB used_ram=1088MB phys_disk=20GB used_disk=3GB total_vcpus=8 used_vcpus=4 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  6 02:36:18 np0005548731 nova_compute[232433]: 2025-12-06 07:36:18.827 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:36:19 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 02:36:19 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1629657261' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 02:36:19 np0005548731 nova_compute[232433]: 2025-12-06 07:36:19.272 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.444s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:36:19 np0005548731 nova_compute[232433]: 2025-12-06 07:36:19.278 232437 DEBUG nova.compute.provider_tree [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Inventory has not changed in ProviderTree for provider: 6d00757a-082f-486d-ae84-869a2ba2e6e7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  6 02:36:19 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e296 e296: 3 total, 3 up, 3 in
Dec  6 02:36:19 np0005548731 nova_compute[232433]: 2025-12-06 07:36:19.298 232437 DEBUG nova.scheduler.client.report [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Inventory has not changed for provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  6 02:36:19 np0005548731 nova_compute[232433]: 2025-12-06 07:36:19.327 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  6 02:36:19 np0005548731 nova_compute[232433]: 2025-12-06 07:36:19.328 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.777s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:36:19 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:36:19 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:36:19 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:36:19.852 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:36:20 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:36:20 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:36:20 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:36:20.676 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:36:20 np0005548731 nova_compute[232433]: 2025-12-06 07:36:20.969 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:36:21 np0005548731 nova_compute[232433]: 2025-12-06 07:36:21.366 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:36:21 np0005548731 kernel: tap657a8a48-7a (unregistering): left promiscuous mode
Dec  6 02:36:21 np0005548731 NetworkManager[49182]: <info>  [1765006581.4608] device (tap657a8a48-7a): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec  6 02:36:21 np0005548731 ovn_controller[133927]: 2025-12-06T07:36:21Z|00563|binding|INFO|Releasing lport 657a8a48-7a74-4b37-a294-df0b2fd9332c from this chassis (sb_readonly=0)
Dec  6 02:36:21 np0005548731 ovn_controller[133927]: 2025-12-06T07:36:21Z|00564|binding|INFO|Setting lport 657a8a48-7a74-4b37-a294-df0b2fd9332c down in Southbound
Dec  6 02:36:21 np0005548731 ovn_controller[133927]: 2025-12-06T07:36:21Z|00565|binding|INFO|Removing iface tap657a8a48-7a ovn-installed in OVS
Dec  6 02:36:21 np0005548731 nova_compute[232433]: 2025-12-06 07:36:21.484 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:36:21 np0005548731 nova_compute[232433]: 2025-12-06 07:36:21.487 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:36:21 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:36:21.494 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f4:fe:4a 10.100.0.7'], port_security=['fa:16:3e:f4:fe:4a 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'b4be0ef8-945f-47a1-a3a8-5962f1e692e5', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5bb49e8a-b939-4c79-851c-62c634be0272', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '833f4cf9f5a64b2ab94c3bf330353a31', 'neutron:revision_number': '4', 'neutron:security_group_ids': '44fc0f8f-27e8-483c-be93-2bb490e3ff3b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=56d5d28a-0d18-4549-b1d7-8420194c6348, chassis=[], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], logical_port=657a8a48-7a74-4b37-a294-df0b2fd9332c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 02:36:21 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:36:21.495 143965 INFO neutron.agent.ovn.metadata.agent [-] Port 657a8a48-7a74-4b37-a294-df0b2fd9332c in datapath 5bb49e8a-b939-4c79-851c-62c634be0272 unbound from our chassis#033[00m
Dec  6 02:36:21 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:36:21.496 143965 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 5bb49e8a-b939-4c79-851c-62c634be0272#033[00m
Dec  6 02:36:21 np0005548731 nova_compute[232433]: 2025-12-06 07:36:21.505 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:36:21 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:36:21.514 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[05c5267a-80dc-4302-967b-ad0201c7ec9d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:36:21 np0005548731 systemd[1]: machine-qemu\x2d56\x2dinstance\x2d0000007f.scope: Deactivated successfully.
Dec  6 02:36:21 np0005548731 systemd[1]: machine-qemu\x2d56\x2dinstance\x2d0000007f.scope: Consumed 16.262s CPU time.
Dec  6 02:36:21 np0005548731 systemd-machined[195355]: Machine qemu-56-instance-0000007f terminated.
Dec  6 02:36:21 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:36:21.544 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[0ca038ce-997b-48b2-8750-5512101d36d6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:36:21 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:36:21.548 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[9760364a-9107-45c4-a9b6-b0101aab39b1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:36:21 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:36:21.578 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[f9bce3d1-07d7-4eeb-a0ce-a1524cda5606]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:36:21 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:36:21.594 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[e3f6122b-8a6f-4f52-bb2e-c5f84905d6bc]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5bb49e8a-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:0b:bf:8b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 9, 'rx_bytes': 784, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 9, 'rx_bytes': 784, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 164], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 667055, 'reachable_time': 27555, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 288013, 'error': None, 'target': 'ovnmeta-5bb49e8a-b939-4c79-851c-62c634be0272', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:36:21 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:36:21.610 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[1b01ae40-22b7-4204-b345-2276bf6864b7]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap5bb49e8a-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 667067, 'tstamp': 667067}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 288014, 'error': None, 'target': 'ovnmeta-5bb49e8a-b939-4c79-851c-62c634be0272', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap5bb49e8a-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 667070, 'tstamp': 667070}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 288014, 'error': None, 'target': 'ovnmeta-5bb49e8a-b939-4c79-851c-62c634be0272', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:36:21 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:36:21.612 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5bb49e8a-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:36:21 np0005548731 nova_compute[232433]: 2025-12-06 07:36:21.613 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:36:21 np0005548731 nova_compute[232433]: 2025-12-06 07:36:21.617 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:36:21 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:36:21.617 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5bb49e8a-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:36:21 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:36:21.617 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  6 02:36:21 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:36:21.618 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap5bb49e8a-b0, col_values=(('external_ids', {'iface-id': 'e4d89947-8fab-4c13-b2db-4eed875f77a0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:36:21 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:36:21.618 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  6 02:36:21 np0005548731 nova_compute[232433]: 2025-12-06 07:36:21.718 232437 INFO nova.virt.libvirt.driver [None req-f54157de-2bf7-49ae-a3a0-69979514d3f7 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] [instance: b4be0ef8-945f-47a1-a3a8-5962f1e692e5] Instance shutdown successfully after 3 seconds.#033[00m
Dec  6 02:36:21 np0005548731 nova_compute[232433]: 2025-12-06 07:36:21.723 232437 INFO nova.virt.libvirt.driver [-] [instance: b4be0ef8-945f-47a1-a3a8-5962f1e692e5] Instance destroyed successfully.#033[00m
Dec  6 02:36:21 np0005548731 nova_compute[232433]: 2025-12-06 07:36:21.724 232437 DEBUG nova.virt.libvirt.vif [None req-f54157de-2bf7-49ae-a3a0-69979514d3f7 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T07:34:53Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='multiattach-server-0',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='multiattach-server-0',id=127,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJvvXUJX6GT0XQikpBKT/pWa9/7+F48wLkdnGcJYsqojmErT+oUc0gEHpXW8ulxQp5/Qun0IejqspzMhFiBiIMspmuXC7WiBfiNlH7z/XH9UjP9DXKhc6lZtmV9q0VvTnQ==',key_name='tempest-keypair-1449411209',keypairs=<?>,launch_index=0,launched_at=2025-12-06T07:35:16Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=MigrationContext,new_flavor=Flavor(2),node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='833f4cf9f5a64b2ab94c3bf330353a31',ramdisk_id='',reservation_id='r-nhxynips',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=ServiceList,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',
image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-AttachVolumeMultiAttachTest-690984293',owner_user_name='tempest-AttachVolumeMultiAttachTest-690984293-project-member'},tags=<?>,task_state='resize_migrating',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T07:36:15Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='605b5481e0c944048e6a67046c30d693',uuid=b4be0ef8-945f-47a1-a3a8-5962f1e692e5,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "657a8a48-7a74-4b37-a294-df0b2fd9332c", "address": "fa:16:3e:f4:fe:4a", "network": {"id": "5bb49e8a-b939-4c79-851c-62c634be0272", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-50482264-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-AttachVolumeMultiAttachTest-50482264-network", "vif_mac": "fa:16:3e:f4:fe:4a"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "833f4cf9f5a64b2ab94c3bf330353a31", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap657a8a48-7a", "ovs_interfaceid": "657a8a48-7a74-4b37-a294-df0b2fd9332c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec  6 02:36:21 np0005548731 nova_compute[232433]: 2025-12-06 07:36:21.724 232437 DEBUG nova.network.os_vif_util [None req-f54157de-2bf7-49ae-a3a0-69979514d3f7 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Converting VIF {"id": "657a8a48-7a74-4b37-a294-df0b2fd9332c", "address": "fa:16:3e:f4:fe:4a", "network": {"id": "5bb49e8a-b939-4c79-851c-62c634be0272", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-50482264-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-AttachVolumeMultiAttachTest-50482264-network", "vif_mac": "fa:16:3e:f4:fe:4a"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "833f4cf9f5a64b2ab94c3bf330353a31", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap657a8a48-7a", "ovs_interfaceid": "657a8a48-7a74-4b37-a294-df0b2fd9332c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 02:36:21 np0005548731 nova_compute[232433]: 2025-12-06 07:36:21.725 232437 DEBUG nova.network.os_vif_util [None req-f54157de-2bf7-49ae-a3a0-69979514d3f7 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:f4:fe:4a,bridge_name='br-int',has_traffic_filtering=True,id=657a8a48-7a74-4b37-a294-df0b2fd9332c,network=Network(5bb49e8a-b939-4c79-851c-62c634be0272),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap657a8a48-7a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 02:36:21 np0005548731 nova_compute[232433]: 2025-12-06 07:36:21.726 232437 DEBUG os_vif [None req-f54157de-2bf7-49ae-a3a0-69979514d3f7 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:f4:fe:4a,bridge_name='br-int',has_traffic_filtering=True,id=657a8a48-7a74-4b37-a294-df0b2fd9332c,network=Network(5bb49e8a-b939-4c79-851c-62c634be0272),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap657a8a48-7a') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec  6 02:36:21 np0005548731 nova_compute[232433]: 2025-12-06 07:36:21.727 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:36:21 np0005548731 nova_compute[232433]: 2025-12-06 07:36:21.727 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap657a8a48-7a, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:36:21 np0005548731 nova_compute[232433]: 2025-12-06 07:36:21.729 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:36:21 np0005548731 nova_compute[232433]: 2025-12-06 07:36:21.731 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:36:21 np0005548731 nova_compute[232433]: 2025-12-06 07:36:21.733 232437 INFO os_vif [None req-f54157de-2bf7-49ae-a3a0-69979514d3f7 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:f4:fe:4a,bridge_name='br-int',has_traffic_filtering=True,id=657a8a48-7a74-4b37-a294-df0b2fd9332c,network=Network(5bb49e8a-b939-4c79-851c-62c634be0272),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap657a8a48-7a')#033[00m
Dec  6 02:36:21 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:36:21 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:36:21 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:36:21.854 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:36:21 np0005548731 nova_compute[232433]: 2025-12-06 07:36:21.985 232437 INFO nova.virt.libvirt.driver [None req-f54157de-2bf7-49ae-a3a0-69979514d3f7 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Detected multiple connections on this host for volume: fb115316-35b8-4af3-9847-c6315737a158, skipping target disconnect.#033[00m
Dec  6 02:36:21 np0005548731 nova_compute[232433]: 2025-12-06 07:36:21.990 232437 DEBUG nova.virt.libvirt.driver [None req-f54157de-2bf7-49ae-a3a0-69979514d3f7 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] skipping disk for instance-0000007f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  6 02:36:21 np0005548731 nova_compute[232433]: 2025-12-06 07:36:21.990 232437 DEBUG nova.virt.libvirt.driver [None req-f54157de-2bf7-49ae-a3a0-69979514d3f7 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] skipping disk for instance-0000007f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  6 02:36:21 np0005548731 nova_compute[232433]: 2025-12-06 07:36:21.990 232437 DEBUG nova.virt.libvirt.driver [None req-f54157de-2bf7-49ae-a3a0-69979514d3f7 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] skipping disk for instance-0000007f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  6 02:36:22 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 02:36:22 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 02:36:22 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:36:22 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:36:22 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:36:22.680 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:36:22 np0005548731 nova_compute[232433]: 2025-12-06 07:36:22.907 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:36:23 np0005548731 nova_compute[232433]: 2025-12-06 07:36:23.328 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:36:23 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e296 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:36:23 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:36:23 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:36:23 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:36:23.857 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:36:24 np0005548731 nova_compute[232433]: 2025-12-06 07:36:24.061 232437 DEBUG nova.network.neutron [None req-f54157de-2bf7-49ae-a3a0-69979514d3f7 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] [instance: b4be0ef8-945f-47a1-a3a8-5962f1e692e5] Port 657a8a48-7a74-4b37-a294-df0b2fd9332c binding to destination host compute-2.ctlplane.example.com is already ACTIVE migrate_instance_start /usr/lib/python3.9/site-packages/nova/network/neutron.py:3171#033[00m
Dec  6 02:36:24 np0005548731 nova_compute[232433]: 2025-12-06 07:36:24.103 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:36:24 np0005548731 nova_compute[232433]: 2025-12-06 07:36:24.193 232437 DEBUG oslo_concurrency.lockutils [None req-f54157de-2bf7-49ae-a3a0-69979514d3f7 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Acquiring lock "b4be0ef8-945f-47a1-a3a8-5962f1e692e5-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:36:24 np0005548731 nova_compute[232433]: 2025-12-06 07:36:24.193 232437 DEBUG oslo_concurrency.lockutils [None req-f54157de-2bf7-49ae-a3a0-69979514d3f7 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Lock "b4be0ef8-945f-47a1-a3a8-5962f1e692e5-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:36:24 np0005548731 nova_compute[232433]: 2025-12-06 07:36:24.193 232437 DEBUG oslo_concurrency.lockutils [None req-f54157de-2bf7-49ae-a3a0-69979514d3f7 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Lock "b4be0ef8-945f-47a1-a3a8-5962f1e692e5-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:36:24 np0005548731 nova_compute[232433]: 2025-12-06 07:36:24.457 232437 DEBUG oslo_concurrency.lockutils [None req-f54157de-2bf7-49ae-a3a0-69979514d3f7 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Acquiring lock "refresh_cache-b4be0ef8-945f-47a1-a3a8-5962f1e692e5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 02:36:24 np0005548731 nova_compute[232433]: 2025-12-06 07:36:24.457 232437 DEBUG oslo_concurrency.lockutils [None req-f54157de-2bf7-49ae-a3a0-69979514d3f7 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Acquired lock "refresh_cache-b4be0ef8-945f-47a1-a3a8-5962f1e692e5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 02:36:24 np0005548731 nova_compute[232433]: 2025-12-06 07:36:24.457 232437 DEBUG nova.network.neutron [None req-f54157de-2bf7-49ae-a3a0-69979514d3f7 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] [instance: b4be0ef8-945f-47a1-a3a8-5962f1e692e5] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec  6 02:36:24 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:36:24 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:36:24 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:36:24.682 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:36:24 np0005548731 nova_compute[232433]: 2025-12-06 07:36:24.796 232437 DEBUG nova.compute.manager [req-7bca77fc-29be-4de2-b474-811ae18bece2 req-4a1b1a58-25d9-47d6-af8f-ba59f9178331 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: b4be0ef8-945f-47a1-a3a8-5962f1e692e5] Received event network-vif-unplugged-657a8a48-7a74-4b37-a294-df0b2fd9332c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:36:24 np0005548731 nova_compute[232433]: 2025-12-06 07:36:24.797 232437 DEBUG oslo_concurrency.lockutils [req-7bca77fc-29be-4de2-b474-811ae18bece2 req-4a1b1a58-25d9-47d6-af8f-ba59f9178331 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "b4be0ef8-945f-47a1-a3a8-5962f1e692e5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:36:24 np0005548731 nova_compute[232433]: 2025-12-06 07:36:24.798 232437 DEBUG oslo_concurrency.lockutils [req-7bca77fc-29be-4de2-b474-811ae18bece2 req-4a1b1a58-25d9-47d6-af8f-ba59f9178331 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "b4be0ef8-945f-47a1-a3a8-5962f1e692e5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:36:24 np0005548731 nova_compute[232433]: 2025-12-06 07:36:24.798 232437 DEBUG oslo_concurrency.lockutils [req-7bca77fc-29be-4de2-b474-811ae18bece2 req-4a1b1a58-25d9-47d6-af8f-ba59f9178331 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "b4be0ef8-945f-47a1-a3a8-5962f1e692e5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:36:24 np0005548731 nova_compute[232433]: 2025-12-06 07:36:24.798 232437 DEBUG nova.compute.manager [req-7bca77fc-29be-4de2-b474-811ae18bece2 req-4a1b1a58-25d9-47d6-af8f-ba59f9178331 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: b4be0ef8-945f-47a1-a3a8-5962f1e692e5] No waiting events found dispatching network-vif-unplugged-657a8a48-7a74-4b37-a294-df0b2fd9332c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 02:36:24 np0005548731 nova_compute[232433]: 2025-12-06 07:36:24.798 232437 WARNING nova.compute.manager [req-7bca77fc-29be-4de2-b474-811ae18bece2 req-4a1b1a58-25d9-47d6-af8f-ba59f9178331 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: b4be0ef8-945f-47a1-a3a8-5962f1e692e5] Received unexpected event network-vif-unplugged-657a8a48-7a74-4b37-a294-df0b2fd9332c for instance with vm_state active and task_state resize_migrated.#033[00m
Dec  6 02:36:25 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:36:25 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:36:25 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:36:25.859 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:36:25 np0005548731 nova_compute[232433]: 2025-12-06 07:36:25.898 232437 DEBUG nova.network.neutron [None req-f54157de-2bf7-49ae-a3a0-69979514d3f7 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] [instance: b4be0ef8-945f-47a1-a3a8-5962f1e692e5] Updating instance_info_cache with network_info: [{"id": "657a8a48-7a74-4b37-a294-df0b2fd9332c", "address": "fa:16:3e:f4:fe:4a", "network": {"id": "5bb49e8a-b939-4c79-851c-62c634be0272", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-50482264-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "833f4cf9f5a64b2ab94c3bf330353a31", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap657a8a48-7a", "ovs_interfaceid": "657a8a48-7a74-4b37-a294-df0b2fd9332c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 02:36:25 np0005548731 nova_compute[232433]: 2025-12-06 07:36:25.915 232437 DEBUG oslo_concurrency.lockutils [None req-f54157de-2bf7-49ae-a3a0-69979514d3f7 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Releasing lock "refresh_cache-b4be0ef8-945f-47a1-a3a8-5962f1e692e5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 02:36:25 np0005548731 nova_compute[232433]: 2025-12-06 07:36:25.984 232437 DEBUG os_brick.utils [None req-f54157de-2bf7-49ae-a3a0-69979514d3f7 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.102', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-2.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Dec  6 02:36:25 np0005548731 nova_compute[232433]: 2025-12-06 07:36:25.986 237736 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:36:25 np0005548731 nova_compute[232433]: 2025-12-06 07:36:25.996 237736 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.010s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:36:25 np0005548731 nova_compute[232433]: 2025-12-06 07:36:25.996 237736 DEBUG oslo.privsep.daemon [-] privsep: reply[ab105a2b-16d2-448e-82cf-7cc6edb3c177]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:36:25 np0005548731 nova_compute[232433]: 2025-12-06 07:36:25.997 237736 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:36:26 np0005548731 nova_compute[232433]: 2025-12-06 07:36:26.004 237736 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.007s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:36:26 np0005548731 nova_compute[232433]: 2025-12-06 07:36:26.005 237736 DEBUG oslo.privsep.daemon [-] privsep: reply[4c5c56dd-68c8-441f-8c0e-2eb838a0ddc9]: (4, ('InitiatorName=iqn.1994-05.com.redhat:63778d5959f0', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:36:26 np0005548731 nova_compute[232433]: 2025-12-06 07:36:26.006 237736 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:36:26 np0005548731 nova_compute[232433]: 2025-12-06 07:36:26.015 237736 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:36:26 np0005548731 nova_compute[232433]: 2025-12-06 07:36:26.015 237736 DEBUG oslo.privsep.daemon [-] privsep: reply[27ded1eb-c89c-4e9b-bbc3-219e0054c998]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:36:26 np0005548731 nova_compute[232433]: 2025-12-06 07:36:26.017 237736 DEBUG oslo.privsep.daemon [-] privsep: reply[64c5bd03-b5fa-408b-9d38-3e9880ca93c2]: (4, 'ec2492e2-e0d1-43d3-b74f-744a8272a0e6') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:36:26 np0005548731 nova_compute[232433]: 2025-12-06 07:36:26.017 232437 DEBUG oslo_concurrency.processutils [None req-f54157de-2bf7-49ae-a3a0-69979514d3f7 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:36:26 np0005548731 nova_compute[232433]: 2025-12-06 07:36:26.039 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:36:26 np0005548731 nova_compute[232433]: 2025-12-06 07:36:26.043 232437 DEBUG oslo_concurrency.processutils [None req-f54157de-2bf7-49ae-a3a0-69979514d3f7 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] CMD "nvme version" returned: 0 in 0.025s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:36:26 np0005548731 nova_compute[232433]: 2025-12-06 07:36:26.045 232437 DEBUG os_brick.initiator.connectors.lightos [None req-f54157de-2bf7-49ae-a3a0-69979514d3f7 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Dec  6 02:36:26 np0005548731 nova_compute[232433]: 2025-12-06 07:36:26.045 232437 DEBUG os_brick.initiator.connectors.lightos [None req-f54157de-2bf7-49ae-a3a0-69979514d3f7 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Dec  6 02:36:26 np0005548731 nova_compute[232433]: 2025-12-06 07:36:26.045 232437 DEBUG os_brick.initiator.connectors.lightos [None req-f54157de-2bf7-49ae-a3a0-69979514d3f7 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:bf3e0a14-a5f8-4123-aa26-e7cad37b879a dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Dec  6 02:36:26 np0005548731 nova_compute[232433]: 2025-12-06 07:36:26.045 232437 DEBUG os_brick.utils [None req-f54157de-2bf7-49ae-a3a0-69979514d3f7 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] <== get_connector_properties: return (60ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.102', 'host': 'compute-2.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:63778d5959f0', 'do_local_attach': False, 'nvme_hostid': 'bf3e0a14-a5f8-4123-aa26-e7cad37b879a', 'system uuid': 'ec2492e2-e0d1-43d3-b74f-744a8272a0e6', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:bf3e0a14-a5f8-4123-aa26-e7cad37b879a', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Dec  6 02:36:26 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:36:26 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:36:26 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:36:26.684 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:36:26 np0005548731 nova_compute[232433]: 2025-12-06 07:36:26.729 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:36:26 np0005548731 podman[288039]: 2025-12-06 07:36:26.903165272 +0000 UTC m=+0.057023500 container health_status ae4fd08cdec31d6ae2a6b91ffff7deb6ca5a8375cb52ee4b685cc6f25796824e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec  6 02:36:26 np0005548731 podman[288037]: 2025-12-06 07:36:26.921437818 +0000 UTC m=+0.079909199 container health_status 26c2ce9bdb83c6db5ccad7f5b2a7cb4e9354ab0235ad2f942ea42c8feda2142b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2)
Dec  6 02:36:26 np0005548731 nova_compute[232433]: 2025-12-06 07:36:26.940 232437 DEBUG nova.compute.manager [req-1b742d50-7b27-4d73-9fd0-564d0710ac29 req-238abec8-7054-4c0f-b46a-74a88a578a5e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: b4be0ef8-945f-47a1-a3a8-5962f1e692e5] Received event network-vif-plugged-657a8a48-7a74-4b37-a294-df0b2fd9332c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:36:26 np0005548731 nova_compute[232433]: 2025-12-06 07:36:26.940 232437 DEBUG oslo_concurrency.lockutils [req-1b742d50-7b27-4d73-9fd0-564d0710ac29 req-238abec8-7054-4c0f-b46a-74a88a578a5e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "b4be0ef8-945f-47a1-a3a8-5962f1e692e5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:36:26 np0005548731 nova_compute[232433]: 2025-12-06 07:36:26.941 232437 DEBUG oslo_concurrency.lockutils [req-1b742d50-7b27-4d73-9fd0-564d0710ac29 req-238abec8-7054-4c0f-b46a-74a88a578a5e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "b4be0ef8-945f-47a1-a3a8-5962f1e692e5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:36:26 np0005548731 nova_compute[232433]: 2025-12-06 07:36:26.941 232437 DEBUG oslo_concurrency.lockutils [req-1b742d50-7b27-4d73-9fd0-564d0710ac29 req-238abec8-7054-4c0f-b46a-74a88a578a5e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "b4be0ef8-945f-47a1-a3a8-5962f1e692e5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:36:26 np0005548731 nova_compute[232433]: 2025-12-06 07:36:26.941 232437 DEBUG nova.compute.manager [req-1b742d50-7b27-4d73-9fd0-564d0710ac29 req-238abec8-7054-4c0f-b46a-74a88a578a5e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: b4be0ef8-945f-47a1-a3a8-5962f1e692e5] No waiting events found dispatching network-vif-plugged-657a8a48-7a74-4b37-a294-df0b2fd9332c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 02:36:26 np0005548731 nova_compute[232433]: 2025-12-06 07:36:26.942 232437 WARNING nova.compute.manager [req-1b742d50-7b27-4d73-9fd0-564d0710ac29 req-238abec8-7054-4c0f-b46a-74a88a578a5e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: b4be0ef8-945f-47a1-a3a8-5962f1e692e5] Received unexpected event network-vif-plugged-657a8a48-7a74-4b37-a294-df0b2fd9332c for instance with vm_state active and task_state resize_finish.#033[00m
Dec  6 02:36:26 np0005548731 podman[288038]: 2025-12-06 07:36:26.95769045 +0000 UTC m=+0.116061577 container health_status a118c3becf56cfbcae208065771ebf10a65162bb7b96389971293e534823fb78 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Dec  6 02:36:27 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:36:27 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:36:27 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:36:27.861 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:36:27 np0005548731 nova_compute[232433]: 2025-12-06 07:36:27.906 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:36:28 np0005548731 nova_compute[232433]: 2025-12-06 07:36:28.129 232437 DEBUG nova.virt.libvirt.driver [None req-f54157de-2bf7-49ae-a3a0-69979514d3f7 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] [instance: b4be0ef8-945f-47a1-a3a8-5962f1e692e5] Starting finish_migration finish_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11698#033[00m
Dec  6 02:36:28 np0005548731 nova_compute[232433]: 2025-12-06 07:36:28.130 232437 DEBUG nova.virt.libvirt.driver [None req-f54157de-2bf7-49ae-a3a0-69979514d3f7 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] [instance: b4be0ef8-945f-47a1-a3a8-5962f1e692e5] Instance directory exists: not creating _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4719#033[00m
Dec  6 02:36:28 np0005548731 nova_compute[232433]: 2025-12-06 07:36:28.130 232437 INFO nova.virt.libvirt.driver [None req-f54157de-2bf7-49ae-a3a0-69979514d3f7 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] [instance: b4be0ef8-945f-47a1-a3a8-5962f1e692e5] Creating image(s)#033[00m
Dec  6 02:36:28 np0005548731 nova_compute[232433]: 2025-12-06 07:36:28.169 232437 DEBUG nova.storage.rbd_utils [None req-f54157de-2bf7-49ae-a3a0-69979514d3f7 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] creating snapshot(nova-resize) on rbd image(b4be0ef8-945f-47a1-a3a8-5962f1e692e5_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Dec  6 02:36:28 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e296 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:36:28 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:36:28 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:36:28 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:36:28.687 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:36:29 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:36:29 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:36:29 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:36:29.863 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:36:30 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:36:30 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:36:30 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:36:30.690 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:36:31 np0005548731 nova_compute[232433]: 2025-12-06 07:36:31.730 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:36:31 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:36:31 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:36:31 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:36:31.866 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:36:32 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:36:32 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:36:32 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:36:32.692 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:36:32 np0005548731 nova_compute[232433]: 2025-12-06 07:36:32.909 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:36:33 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e296 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:36:33 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:36:33 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:36:33 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:36:33.869 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:36:34 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e297 e297: 3 total, 3 up, 3 in
Dec  6 02:36:34 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:36:34 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 02:36:34 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:36:34.694 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 02:36:35 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:36:35 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:36:35 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:36:35.871 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:36:36 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:36:36 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:36:36 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:36:36.698 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:36:36 np0005548731 nova_compute[232433]: 2025-12-06 07:36:36.719 232437 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765006581.7175586, b4be0ef8-945f-47a1-a3a8-5962f1e692e5 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 02:36:36 np0005548731 nova_compute[232433]: 2025-12-06 07:36:36.719 232437 INFO nova.compute.manager [-] [instance: b4be0ef8-945f-47a1-a3a8-5962f1e692e5] VM Stopped (Lifecycle Event)#033[00m
Dec  6 02:36:36 np0005548731 nova_compute[232433]: 2025-12-06 07:36:36.732 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:36:36 np0005548731 nova_compute[232433]: 2025-12-06 07:36:36.756 232437 DEBUG nova.objects.instance [None req-f54157de-2bf7-49ae-a3a0-69979514d3f7 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Lazy-loading 'trusted_certs' on Instance uuid b4be0ef8-945f-47a1-a3a8-5962f1e692e5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:36:36 np0005548731 nova_compute[232433]: 2025-12-06 07:36:36.827 232437 DEBUG nova.compute.manager [None req-eb510a5b-7f7a-48e1-8019-70a15a8285b0 - - - - - -] [instance: b4be0ef8-945f-47a1-a3a8-5962f1e692e5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:36:36 np0005548731 nova_compute[232433]: 2025-12-06 07:36:36.831 232437 DEBUG nova.compute.manager [None req-eb510a5b-7f7a-48e1-8019-70a15a8285b0 - - - - - -] [instance: b4be0ef8-945f-47a1-a3a8-5962f1e692e5] Synchronizing instance power state after lifecycle event "Stopped"; current vm_state: active, current task_state: resize_finish, current DB power_state: 1, VM power_state: 4 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  6 02:36:36 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e298 e298: 3 total, 3 up, 3 in
Dec  6 02:36:37 np0005548731 nova_compute[232433]: 2025-12-06 07:36:37.246 232437 INFO nova.compute.manager [None req-eb510a5b-7f7a-48e1-8019-70a15a8285b0 - - - - - -] [instance: b4be0ef8-945f-47a1-a3a8-5962f1e692e5] During sync_power_state the instance has a pending task (resize_finish). Skip.#033[00m
Dec  6 02:36:37 np0005548731 nova_compute[232433]: 2025-12-06 07:36:37.376 232437 DEBUG nova.virt.libvirt.driver [None req-f54157de-2bf7-49ae-a3a0-69979514d3f7 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] [instance: b4be0ef8-945f-47a1-a3a8-5962f1e692e5] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859#033[00m
Dec  6 02:36:37 np0005548731 nova_compute[232433]: 2025-12-06 07:36:37.377 232437 DEBUG nova.virt.libvirt.driver [None req-f54157de-2bf7-49ae-a3a0-69979514d3f7 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] [instance: b4be0ef8-945f-47a1-a3a8-5962f1e692e5] Ensure instance console log exists: /var/lib/nova/instances/b4be0ef8-945f-47a1-a3a8-5962f1e692e5/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec  6 02:36:37 np0005548731 nova_compute[232433]: 2025-12-06 07:36:37.377 232437 DEBUG oslo_concurrency.lockutils [None req-f54157de-2bf7-49ae-a3a0-69979514d3f7 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:36:37 np0005548731 nova_compute[232433]: 2025-12-06 07:36:37.378 232437 DEBUG oslo_concurrency.lockutils [None req-f54157de-2bf7-49ae-a3a0-69979514d3f7 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:36:37 np0005548731 nova_compute[232433]: 2025-12-06 07:36:37.378 232437 DEBUG oslo_concurrency.lockutils [None req-f54157de-2bf7-49ae-a3a0-69979514d3f7 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:36:37 np0005548731 nova_compute[232433]: 2025-12-06 07:36:37.380 232437 DEBUG nova.virt.libvirt.driver [None req-f54157de-2bf7-49ae-a3a0-69979514d3f7 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] [instance: b4be0ef8-945f-47a1-a3a8-5962f1e692e5] Start _get_guest_xml network_info=[{"id": "657a8a48-7a74-4b37-a294-df0b2fd9332c", "address": "fa:16:3e:f4:fe:4a", "network": {"id": "5bb49e8a-b939-4c79-851c-62c634be0272", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-50482264-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-AttachVolumeMultiAttachTest-50482264-network", "vif_mac": "fa:16:3e:f4:fe:4a"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "833f4cf9f5a64b2ab94c3bf330353a31", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap657a8a48-7a", "ovs_interfaceid": "657a8a48-7a74-4b37-a294-df0b2fd9332c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vdb': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'device_type': 'disk', 'encryption_secret_uuid': None, 'disk_bus': 'virtio', 'boot_index': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'encryption_options': None, 'size': 0, 'encrypted': False, 'image_id': '6efab05d-c7cf-4770-a5c3-c806a2739063'}], 'ephemerals': [], 'block_device_mapping': [{'guest_format': None, 'device_type': 'disk', 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-fb115316-35b8-4af3-9847-c6315737a158', 'hosts': ['192.168.122.100', '192.168.122.102', '192.168.122.101'], 'ports': ['6789', '6789', '6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': 'fb115316-35b8-4af3-9847-c6315737a158', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'attaching', 'instance': 'b4be0ef8-945f-47a1-a3a8-5962f1e692e5', 'attached_at': '2025-12-06T07:36:27.000000', 'detached_at': '', 'volume_id': 'fb115316-35b8-4af3-9847-c6315737a158', 'multiattach': True, 'serial': 'fb115316-35b8-4af3-9847-c6315737a158'}, 'disk_bus': 'virtio', 'boot_index': None, 'delete_on_termination': False, 'mount_device': '/dev/vdb', 'attachment_id': '6e4fa302-71b2-4e3f-92ba-7d0c0085bcc7', 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec  6 02:36:37 np0005548731 nova_compute[232433]: 2025-12-06 07:36:37.385 232437 WARNING nova.virt.libvirt.driver [None req-f54157de-2bf7-49ae-a3a0-69979514d3f7 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  6 02:36:37 np0005548731 nova_compute[232433]: 2025-12-06 07:36:37.397 232437 DEBUG nova.virt.libvirt.host [None req-f54157de-2bf7-49ae-a3a0-69979514d3f7 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec  6 02:36:37 np0005548731 nova_compute[232433]: 2025-12-06 07:36:37.398 232437 DEBUG nova.virt.libvirt.host [None req-f54157de-2bf7-49ae-a3a0-69979514d3f7 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec  6 02:36:37 np0005548731 nova_compute[232433]: 2025-12-06 07:36:37.403 232437 DEBUG nova.virt.libvirt.host [None req-f54157de-2bf7-49ae-a3a0-69979514d3f7 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec  6 02:36:37 np0005548731 nova_compute[232433]: 2025-12-06 07:36:37.404 232437 DEBUG nova.virt.libvirt.host [None req-f54157de-2bf7-49ae-a3a0-69979514d3f7 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec  6 02:36:37 np0005548731 nova_compute[232433]: 2025-12-06 07:36:37.405 232437 DEBUG nova.virt.libvirt.driver [None req-f54157de-2bf7-49ae-a3a0-69979514d3f7 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec  6 02:36:37 np0005548731 nova_compute[232433]: 2025-12-06 07:36:37.406 232437 DEBUG nova.virt.hardware [None req-f54157de-2bf7-49ae-a3a0-69979514d3f7 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-06T06:56:24Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fb97f55a-36c0-42f2-8156-c1b04eb23dd0',id=2,is_public=True,memory_mb=192,name='m1.micro',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec  6 02:36:37 np0005548731 nova_compute[232433]: 2025-12-06 07:36:37.406 232437 DEBUG nova.virt.hardware [None req-f54157de-2bf7-49ae-a3a0-69979514d3f7 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec  6 02:36:37 np0005548731 nova_compute[232433]: 2025-12-06 07:36:37.406 232437 DEBUG nova.virt.hardware [None req-f54157de-2bf7-49ae-a3a0-69979514d3f7 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec  6 02:36:37 np0005548731 nova_compute[232433]: 2025-12-06 07:36:37.407 232437 DEBUG nova.virt.hardware [None req-f54157de-2bf7-49ae-a3a0-69979514d3f7 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec  6 02:36:37 np0005548731 nova_compute[232433]: 2025-12-06 07:36:37.407 232437 DEBUG nova.virt.hardware [None req-f54157de-2bf7-49ae-a3a0-69979514d3f7 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec  6 02:36:37 np0005548731 nova_compute[232433]: 2025-12-06 07:36:37.407 232437 DEBUG nova.virt.hardware [None req-f54157de-2bf7-49ae-a3a0-69979514d3f7 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec  6 02:36:37 np0005548731 nova_compute[232433]: 2025-12-06 07:36:37.407 232437 DEBUG nova.virt.hardware [None req-f54157de-2bf7-49ae-a3a0-69979514d3f7 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec  6 02:36:37 np0005548731 nova_compute[232433]: 2025-12-06 07:36:37.408 232437 DEBUG nova.virt.hardware [None req-f54157de-2bf7-49ae-a3a0-69979514d3f7 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec  6 02:36:37 np0005548731 nova_compute[232433]: 2025-12-06 07:36:37.408 232437 DEBUG nova.virt.hardware [None req-f54157de-2bf7-49ae-a3a0-69979514d3f7 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec  6 02:36:37 np0005548731 nova_compute[232433]: 2025-12-06 07:36:37.408 232437 DEBUG nova.virt.hardware [None req-f54157de-2bf7-49ae-a3a0-69979514d3f7 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec  6 02:36:37 np0005548731 nova_compute[232433]: 2025-12-06 07:36:37.408 232437 DEBUG nova.virt.hardware [None req-f54157de-2bf7-49ae-a3a0-69979514d3f7 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec  6 02:36:37 np0005548731 nova_compute[232433]: 2025-12-06 07:36:37.409 232437 DEBUG nova.objects.instance [None req-f54157de-2bf7-49ae-a3a0-69979514d3f7 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Lazy-loading 'vcpu_model' on Instance uuid b4be0ef8-945f-47a1-a3a8-5962f1e692e5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:36:37 np0005548731 nova_compute[232433]: 2025-12-06 07:36:37.569 232437 DEBUG oslo_concurrency.processutils [None req-f54157de-2bf7-49ae-a3a0-69979514d3f7 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:36:37 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:36:37 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:36:37 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:36:37.875 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:36:37 np0005548731 nova_compute[232433]: 2025-12-06 07:36:37.912 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:36:38 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  6 02:36:38 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/607710496' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  6 02:36:38 np0005548731 nova_compute[232433]: 2025-12-06 07:36:38.034 232437 DEBUG oslo_concurrency.processutils [None req-f54157de-2bf7-49ae-a3a0-69979514d3f7 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.465s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:36:38 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e298 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:36:38 np0005548731 nova_compute[232433]: 2025-12-06 07:36:38.527 232437 DEBUG oslo_concurrency.processutils [None req-f54157de-2bf7-49ae-a3a0-69979514d3f7 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:36:38 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:36:38 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:36:38 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:36:38.700 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:36:38 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  6 02:36:38 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3620983311' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  6 02:36:38 np0005548731 nova_compute[232433]: 2025-12-06 07:36:38.997 232437 DEBUG oslo_concurrency.processutils [None req-f54157de-2bf7-49ae-a3a0-69979514d3f7 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:36:39 np0005548731 nova_compute[232433]: 2025-12-06 07:36:39.109 232437 DEBUG nova.virt.libvirt.vif [None req-f54157de-2bf7-49ae-a3a0-69979514d3f7 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T07:34:53Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='multiattach-server-0',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-2.ctlplane.example.com',hostname='multiattach-server-0',id=127,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJvvXUJX6GT0XQikpBKT/pWa9/7+F48wLkdnGcJYsqojmErT+oUc0gEHpXW8ulxQp5/Qun0IejqspzMhFiBiIMspmuXC7WiBfiNlH7z/XH9UjP9DXKhc6lZtmV9q0VvTnQ==',key_name='tempest-keypair-1449411209',keypairs=<?>,launch_index=0,launched_at=2025-12-06T07:35:16Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=MigrationContext,new_flavor=Flavor(2),node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=Flavor(1),os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='833f4cf9f5a64b2ab94c3bf330353a31',ramdisk_id='',reservation_id='r-nhxynips',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=ServiceList,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='vir
tio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-AttachVolumeMultiAttachTest-690984293',owner_user_name='tempest-AttachVolumeMultiAttachTest-690984293-project-member'},tags=<?>,task_state='resize_finish',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:36:24Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='605b5481e0c944048e6a67046c30d693',uuid=b4be0ef8-945f-47a1-a3a8-5962f1e692e5,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "657a8a48-7a74-4b37-a294-df0b2fd9332c", "address": "fa:16:3e:f4:fe:4a", "network": {"id": "5bb49e8a-b939-4c79-851c-62c634be0272", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-50482264-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-AttachVolumeMultiAttachTest-50482264-network", "vif_mac": "fa:16:3e:f4:fe:4a"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "833f4cf9f5a64b2ab94c3bf330353a31", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap657a8a48-7a", "ovs_interfaceid": "657a8a48-7a74-4b37-a294-df0b2fd9332c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config 
/usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec  6 02:36:39 np0005548731 nova_compute[232433]: 2025-12-06 07:36:39.110 232437 DEBUG nova.network.os_vif_util [None req-f54157de-2bf7-49ae-a3a0-69979514d3f7 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Converting VIF {"id": "657a8a48-7a74-4b37-a294-df0b2fd9332c", "address": "fa:16:3e:f4:fe:4a", "network": {"id": "5bb49e8a-b939-4c79-851c-62c634be0272", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-50482264-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-AttachVolumeMultiAttachTest-50482264-network", "vif_mac": "fa:16:3e:f4:fe:4a"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "833f4cf9f5a64b2ab94c3bf330353a31", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap657a8a48-7a", "ovs_interfaceid": "657a8a48-7a74-4b37-a294-df0b2fd9332c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 02:36:39 np0005548731 nova_compute[232433]: 2025-12-06 07:36:39.111 232437 DEBUG nova.network.os_vif_util [None req-f54157de-2bf7-49ae-a3a0-69979514d3f7 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:f4:fe:4a,bridge_name='br-int',has_traffic_filtering=True,id=657a8a48-7a74-4b37-a294-df0b2fd9332c,network=Network(5bb49e8a-b939-4c79-851c-62c634be0272),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap657a8a48-7a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 02:36:39 np0005548731 nova_compute[232433]: 2025-12-06 07:36:39.114 232437 DEBUG nova.virt.libvirt.driver [None req-f54157de-2bf7-49ae-a3a0-69979514d3f7 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] [instance: b4be0ef8-945f-47a1-a3a8-5962f1e692e5] End _get_guest_xml xml=<domain type="kvm">
Dec  6 02:36:39 np0005548731 nova_compute[232433]:  <uuid>b4be0ef8-945f-47a1-a3a8-5962f1e692e5</uuid>
Dec  6 02:36:39 np0005548731 nova_compute[232433]:  <name>instance-0000007f</name>
Dec  6 02:36:39 np0005548731 nova_compute[232433]:  <memory>196608</memory>
Dec  6 02:36:39 np0005548731 nova_compute[232433]:  <vcpu>1</vcpu>
Dec  6 02:36:39 np0005548731 nova_compute[232433]:  <metadata>
Dec  6 02:36:39 np0005548731 nova_compute[232433]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec  6 02:36:39 np0005548731 nova_compute[232433]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec  6 02:36:39 np0005548731 nova_compute[232433]:      <nova:name>multiattach-server-0</nova:name>
Dec  6 02:36:39 np0005548731 nova_compute[232433]:      <nova:creationTime>2025-12-06 07:36:37</nova:creationTime>
Dec  6 02:36:39 np0005548731 nova_compute[232433]:      <nova:flavor name="m1.micro">
Dec  6 02:36:39 np0005548731 nova_compute[232433]:        <nova:memory>192</nova:memory>
Dec  6 02:36:39 np0005548731 nova_compute[232433]:        <nova:disk>1</nova:disk>
Dec  6 02:36:39 np0005548731 nova_compute[232433]:        <nova:swap>0</nova:swap>
Dec  6 02:36:39 np0005548731 nova_compute[232433]:        <nova:ephemeral>0</nova:ephemeral>
Dec  6 02:36:39 np0005548731 nova_compute[232433]:        <nova:vcpus>1</nova:vcpus>
Dec  6 02:36:39 np0005548731 nova_compute[232433]:      </nova:flavor>
Dec  6 02:36:39 np0005548731 nova_compute[232433]:      <nova:owner>
Dec  6 02:36:39 np0005548731 nova_compute[232433]:        <nova:user uuid="605b5481e0c944048e6a67046c30d693">tempest-AttachVolumeMultiAttachTest-690984293-project-member</nova:user>
Dec  6 02:36:39 np0005548731 nova_compute[232433]:        <nova:project uuid="833f4cf9f5a64b2ab94c3bf330353a31">tempest-AttachVolumeMultiAttachTest-690984293</nova:project>
Dec  6 02:36:39 np0005548731 nova_compute[232433]:      </nova:owner>
Dec  6 02:36:39 np0005548731 nova_compute[232433]:      <nova:root type="image" uuid="6efab05d-c7cf-4770-a5c3-c806a2739063"/>
Dec  6 02:36:39 np0005548731 nova_compute[232433]:      <nova:ports>
Dec  6 02:36:39 np0005548731 nova_compute[232433]:        <nova:port uuid="657a8a48-7a74-4b37-a294-df0b2fd9332c">
Dec  6 02:36:39 np0005548731 nova_compute[232433]:          <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Dec  6 02:36:39 np0005548731 nova_compute[232433]:        </nova:port>
Dec  6 02:36:39 np0005548731 nova_compute[232433]:      </nova:ports>
Dec  6 02:36:39 np0005548731 nova_compute[232433]:    </nova:instance>
Dec  6 02:36:39 np0005548731 nova_compute[232433]:  </metadata>
Dec  6 02:36:39 np0005548731 nova_compute[232433]:  <sysinfo type="smbios">
Dec  6 02:36:39 np0005548731 nova_compute[232433]:    <system>
Dec  6 02:36:39 np0005548731 nova_compute[232433]:      <entry name="manufacturer">RDO</entry>
Dec  6 02:36:39 np0005548731 nova_compute[232433]:      <entry name="product">OpenStack Compute</entry>
Dec  6 02:36:39 np0005548731 nova_compute[232433]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec  6 02:36:39 np0005548731 nova_compute[232433]:      <entry name="serial">b4be0ef8-945f-47a1-a3a8-5962f1e692e5</entry>
Dec  6 02:36:39 np0005548731 nova_compute[232433]:      <entry name="uuid">b4be0ef8-945f-47a1-a3a8-5962f1e692e5</entry>
Dec  6 02:36:39 np0005548731 nova_compute[232433]:      <entry name="family">Virtual Machine</entry>
Dec  6 02:36:39 np0005548731 nova_compute[232433]:    </system>
Dec  6 02:36:39 np0005548731 nova_compute[232433]:  </sysinfo>
Dec  6 02:36:39 np0005548731 nova_compute[232433]:  <os>
Dec  6 02:36:39 np0005548731 nova_compute[232433]:    <type arch="x86_64" machine="q35">hvm</type>
Dec  6 02:36:39 np0005548731 nova_compute[232433]:    <boot dev="hd"/>
Dec  6 02:36:39 np0005548731 nova_compute[232433]:    <smbios mode="sysinfo"/>
Dec  6 02:36:39 np0005548731 nova_compute[232433]:  </os>
Dec  6 02:36:39 np0005548731 nova_compute[232433]:  <features>
Dec  6 02:36:39 np0005548731 nova_compute[232433]:    <acpi/>
Dec  6 02:36:39 np0005548731 nova_compute[232433]:    <apic/>
Dec  6 02:36:39 np0005548731 nova_compute[232433]:    <vmcoreinfo/>
Dec  6 02:36:39 np0005548731 nova_compute[232433]:  </features>
Dec  6 02:36:39 np0005548731 nova_compute[232433]:  <clock offset="utc">
Dec  6 02:36:39 np0005548731 nova_compute[232433]:    <timer name="pit" tickpolicy="delay"/>
Dec  6 02:36:39 np0005548731 nova_compute[232433]:    <timer name="rtc" tickpolicy="catchup"/>
Dec  6 02:36:39 np0005548731 nova_compute[232433]:    <timer name="hpet" present="no"/>
Dec  6 02:36:39 np0005548731 nova_compute[232433]:  </clock>
Dec  6 02:36:39 np0005548731 nova_compute[232433]:  <cpu mode="custom" match="exact">
Dec  6 02:36:39 np0005548731 nova_compute[232433]:    <model>Nehalem</model>
Dec  6 02:36:39 np0005548731 nova_compute[232433]:    <topology sockets="1" cores="1" threads="1"/>
Dec  6 02:36:39 np0005548731 nova_compute[232433]:  </cpu>
Dec  6 02:36:39 np0005548731 nova_compute[232433]:  <devices>
Dec  6 02:36:39 np0005548731 nova_compute[232433]:    <disk type="network" device="disk">
Dec  6 02:36:39 np0005548731 nova_compute[232433]:      <driver type="raw" cache="none"/>
Dec  6 02:36:39 np0005548731 nova_compute[232433]:      <source protocol="rbd" name="vms/b4be0ef8-945f-47a1-a3a8-5962f1e692e5_disk">
Dec  6 02:36:39 np0005548731 nova_compute[232433]:        <host name="192.168.122.100" port="6789"/>
Dec  6 02:36:39 np0005548731 nova_compute[232433]:        <host name="192.168.122.102" port="6789"/>
Dec  6 02:36:39 np0005548731 nova_compute[232433]:        <host name="192.168.122.101" port="6789"/>
Dec  6 02:36:39 np0005548731 nova_compute[232433]:      </source>
Dec  6 02:36:39 np0005548731 nova_compute[232433]:      <auth username="openstack">
Dec  6 02:36:39 np0005548731 nova_compute[232433]:        <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec  6 02:36:39 np0005548731 nova_compute[232433]:      </auth>
Dec  6 02:36:39 np0005548731 nova_compute[232433]:      <target dev="vda" bus="virtio"/>
Dec  6 02:36:39 np0005548731 nova_compute[232433]:    </disk>
Dec  6 02:36:39 np0005548731 nova_compute[232433]:    <disk type="network" device="cdrom">
Dec  6 02:36:39 np0005548731 nova_compute[232433]:      <driver type="raw" cache="none"/>
Dec  6 02:36:39 np0005548731 nova_compute[232433]:      <source protocol="rbd" name="vms/b4be0ef8-945f-47a1-a3a8-5962f1e692e5_disk.config">
Dec  6 02:36:39 np0005548731 nova_compute[232433]:        <host name="192.168.122.100" port="6789"/>
Dec  6 02:36:39 np0005548731 nova_compute[232433]:        <host name="192.168.122.102" port="6789"/>
Dec  6 02:36:39 np0005548731 nova_compute[232433]:        <host name="192.168.122.101" port="6789"/>
Dec  6 02:36:39 np0005548731 nova_compute[232433]:      </source>
Dec  6 02:36:39 np0005548731 nova_compute[232433]:      <auth username="openstack">
Dec  6 02:36:39 np0005548731 nova_compute[232433]:        <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec  6 02:36:39 np0005548731 nova_compute[232433]:      </auth>
Dec  6 02:36:39 np0005548731 nova_compute[232433]:      <target dev="sda" bus="sata"/>
Dec  6 02:36:39 np0005548731 nova_compute[232433]:    </disk>
Dec  6 02:36:39 np0005548731 nova_compute[232433]:    <disk type="network" device="disk">
Dec  6 02:36:39 np0005548731 nova_compute[232433]:      <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Dec  6 02:36:39 np0005548731 nova_compute[232433]:      <source protocol="rbd" name="volumes/volume-fb115316-35b8-4af3-9847-c6315737a158">
Dec  6 02:36:39 np0005548731 nova_compute[232433]:        <host name="192.168.122.100" port="6789"/>
Dec  6 02:36:39 np0005548731 nova_compute[232433]:        <host name="192.168.122.102" port="6789"/>
Dec  6 02:36:39 np0005548731 nova_compute[232433]:        <host name="192.168.122.101" port="6789"/>
Dec  6 02:36:39 np0005548731 nova_compute[232433]:      </source>
Dec  6 02:36:39 np0005548731 nova_compute[232433]:      <auth username="openstack">
Dec  6 02:36:39 np0005548731 nova_compute[232433]:        <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec  6 02:36:39 np0005548731 nova_compute[232433]:      </auth>
Dec  6 02:36:39 np0005548731 nova_compute[232433]:      <target dev="vdb" bus="virtio"/>
Dec  6 02:36:39 np0005548731 nova_compute[232433]:      <serial>fb115316-35b8-4af3-9847-c6315737a158</serial>
Dec  6 02:36:39 np0005548731 nova_compute[232433]:      <shareable/>
Dec  6 02:36:39 np0005548731 nova_compute[232433]:    </disk>
Dec  6 02:36:39 np0005548731 nova_compute[232433]:    <interface type="ethernet">
Dec  6 02:36:39 np0005548731 nova_compute[232433]:      <mac address="fa:16:3e:f4:fe:4a"/>
Dec  6 02:36:39 np0005548731 nova_compute[232433]:      <model type="virtio"/>
Dec  6 02:36:39 np0005548731 nova_compute[232433]:      <driver name="vhost" rx_queue_size="512"/>
Dec  6 02:36:39 np0005548731 nova_compute[232433]:      <mtu size="1442"/>
Dec  6 02:36:39 np0005548731 nova_compute[232433]:      <target dev="tap657a8a48-7a"/>
Dec  6 02:36:39 np0005548731 nova_compute[232433]:    </interface>
Dec  6 02:36:39 np0005548731 nova_compute[232433]:    <serial type="pty">
Dec  6 02:36:39 np0005548731 nova_compute[232433]:      <log file="/var/lib/nova/instances/b4be0ef8-945f-47a1-a3a8-5962f1e692e5/console.log" append="off"/>
Dec  6 02:36:39 np0005548731 nova_compute[232433]:    </serial>
Dec  6 02:36:39 np0005548731 nova_compute[232433]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec  6 02:36:39 np0005548731 nova_compute[232433]:    <video>
Dec  6 02:36:39 np0005548731 nova_compute[232433]:      <model type="virtio"/>
Dec  6 02:36:39 np0005548731 nova_compute[232433]:    </video>
Dec  6 02:36:39 np0005548731 nova_compute[232433]:    <input type="tablet" bus="usb"/>
Dec  6 02:36:39 np0005548731 nova_compute[232433]:    <rng model="virtio">
Dec  6 02:36:39 np0005548731 nova_compute[232433]:      <backend model="random">/dev/urandom</backend>
Dec  6 02:36:39 np0005548731 nova_compute[232433]:    </rng>
Dec  6 02:36:39 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root"/>
Dec  6 02:36:39 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:36:39 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:36:39 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:36:39 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:36:39 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:36:39 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:36:39 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:36:39 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:36:39 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:36:39 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:36:39 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:36:39 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:36:39 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:36:39 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:36:39 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:36:39 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:36:39 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:36:39 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:36:39 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:36:39 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:36:39 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:36:39 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:36:39 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:36:39 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:36:39 np0005548731 nova_compute[232433]:    <controller type="usb" index="0"/>
Dec  6 02:36:39 np0005548731 nova_compute[232433]:    <memballoon model="virtio">
Dec  6 02:36:39 np0005548731 nova_compute[232433]:      <stats period="10"/>
Dec  6 02:36:39 np0005548731 nova_compute[232433]:    </memballoon>
Dec  6 02:36:39 np0005548731 nova_compute[232433]:  </devices>
Dec  6 02:36:39 np0005548731 nova_compute[232433]: </domain>
Dec  6 02:36:39 np0005548731 nova_compute[232433]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec  6 02:36:39 np0005548731 nova_compute[232433]: 2025-12-06 07:36:39.115 232437 DEBUG nova.virt.libvirt.vif [None req-f54157de-2bf7-49ae-a3a0-69979514d3f7 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T07:34:53Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='multiattach-server-0',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-2.ctlplane.example.com',hostname='multiattach-server-0',id=127,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJvvXUJX6GT0XQikpBKT/pWa9/7+F48wLkdnGcJYsqojmErT+oUc0gEHpXW8ulxQp5/Qun0IejqspzMhFiBiIMspmuXC7WiBfiNlH7z/XH9UjP9DXKhc6lZtmV9q0VvTnQ==',key_name='tempest-keypair-1449411209',keypairs=<?>,launch_index=0,launched_at=2025-12-06T07:35:16Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=MigrationContext,new_flavor=Flavor(2),node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=Flavor(1),os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='833f4cf9f5a64b2ab94c3bf330353a31',ramdisk_id='',reservation_id='r-nhxynips',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=ServiceList,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-AttachVolumeMultiAttachTest-690984293',owner_user_name='tempest-AttachVolumeMultiAttachTest-690984293-project-member'},tags=<?>,task_state='resize_finish',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:36:24Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='605b5481e0c944048e6a67046c30d693',uuid=b4be0ef8-945f-47a1-a3a8-5962f1e692e5,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "657a8a48-7a74-4b37-a294-df0b2fd9332c", "address": "fa:16:3e:f4:fe:4a", "network": {"id": "5bb49e8a-b939-4c79-851c-62c634be0272", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-50482264-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-AttachVolumeMultiAttachTest-50482264-network", "vif_mac": "fa:16:3e:f4:fe:4a"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "833f4cf9f5a64b2ab94c3bf330353a31", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap657a8a48-7a", "ovs_interfaceid": "657a8a48-7a74-4b37-a294-df0b2fd9332c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec  6 02:36:39 np0005548731 nova_compute[232433]: 2025-12-06 07:36:39.118 232437 DEBUG nova.network.os_vif_util [None req-f54157de-2bf7-49ae-a3a0-69979514d3f7 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Converting VIF {"id": "657a8a48-7a74-4b37-a294-df0b2fd9332c", "address": "fa:16:3e:f4:fe:4a", "network": {"id": "5bb49e8a-b939-4c79-851c-62c634be0272", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-50482264-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-AttachVolumeMultiAttachTest-50482264-network", "vif_mac": "fa:16:3e:f4:fe:4a"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "833f4cf9f5a64b2ab94c3bf330353a31", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap657a8a48-7a", "ovs_interfaceid": "657a8a48-7a74-4b37-a294-df0b2fd9332c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 02:36:39 np0005548731 nova_compute[232433]: 2025-12-06 07:36:39.119 232437 DEBUG nova.network.os_vif_util [None req-f54157de-2bf7-49ae-a3a0-69979514d3f7 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:f4:fe:4a,bridge_name='br-int',has_traffic_filtering=True,id=657a8a48-7a74-4b37-a294-df0b2fd9332c,network=Network(5bb49e8a-b939-4c79-851c-62c634be0272),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap657a8a48-7a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 02:36:39 np0005548731 nova_compute[232433]: 2025-12-06 07:36:39.119 232437 DEBUG os_vif [None req-f54157de-2bf7-49ae-a3a0-69979514d3f7 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Plugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:f4:fe:4a,bridge_name='br-int',has_traffic_filtering=True,id=657a8a48-7a74-4b37-a294-df0b2fd9332c,network=Network(5bb49e8a-b939-4c79-851c-62c634be0272),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap657a8a48-7a') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec  6 02:36:39 np0005548731 nova_compute[232433]: 2025-12-06 07:36:39.120 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:36:39 np0005548731 nova_compute[232433]: 2025-12-06 07:36:39.120 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:36:39 np0005548731 nova_compute[232433]: 2025-12-06 07:36:39.120 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  6 02:36:39 np0005548731 nova_compute[232433]: 2025-12-06 07:36:39.123 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:36:39 np0005548731 nova_compute[232433]: 2025-12-06 07:36:39.123 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap657a8a48-7a, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:36:39 np0005548731 nova_compute[232433]: 2025-12-06 07:36:39.124 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap657a8a48-7a, col_values=(('external_ids', {'iface-id': '657a8a48-7a74-4b37-a294-df0b2fd9332c', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:f4:fe:4a', 'vm-uuid': 'b4be0ef8-945f-47a1-a3a8-5962f1e692e5'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:36:39 np0005548731 nova_compute[232433]: 2025-12-06 07:36:39.125 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:36:39 np0005548731 NetworkManager[49182]: <info>  [1765006599.1266] manager: (tap657a8a48-7a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/267)
Dec  6 02:36:39 np0005548731 nova_compute[232433]: 2025-12-06 07:36:39.129 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  6 02:36:39 np0005548731 nova_compute[232433]: 2025-12-06 07:36:39.134 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:36:39 np0005548731 nova_compute[232433]: 2025-12-06 07:36:39.135 232437 INFO os_vif [None req-f54157de-2bf7-49ae-a3a0-69979514d3f7 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Successfully plugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:f4:fe:4a,bridge_name='br-int',has_traffic_filtering=True,id=657a8a48-7a74-4b37-a294-df0b2fd9332c,network=Network(5bb49e8a-b939-4c79-851c-62c634be0272),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap657a8a48-7a')#033[00m
Dec  6 02:36:39 np0005548731 nova_compute[232433]: 2025-12-06 07:36:39.410 232437 DEBUG nova.virt.libvirt.driver [None req-f54157de-2bf7-49ae-a3a0-69979514d3f7 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  6 02:36:39 np0005548731 nova_compute[232433]: 2025-12-06 07:36:39.410 232437 DEBUG nova.virt.libvirt.driver [None req-f54157de-2bf7-49ae-a3a0-69979514d3f7 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  6 02:36:39 np0005548731 nova_compute[232433]: 2025-12-06 07:36:39.411 232437 DEBUG nova.virt.libvirt.driver [None req-f54157de-2bf7-49ae-a3a0-69979514d3f7 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  6 02:36:39 np0005548731 nova_compute[232433]: 2025-12-06 07:36:39.411 232437 DEBUG nova.virt.libvirt.driver [None req-f54157de-2bf7-49ae-a3a0-69979514d3f7 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] No VIF found with MAC fa:16:3e:f4:fe:4a, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec  6 02:36:39 np0005548731 nova_compute[232433]: 2025-12-06 07:36:39.412 232437 INFO nova.virt.libvirt.driver [None req-f54157de-2bf7-49ae-a3a0-69979514d3f7 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] [instance: b4be0ef8-945f-47a1-a3a8-5962f1e692e5] Using config drive#033[00m
Dec  6 02:36:39 np0005548731 kernel: tap657a8a48-7a: entered promiscuous mode
Dec  6 02:36:39 np0005548731 NetworkManager[49182]: <info>  [1765006599.5080] manager: (tap657a8a48-7a): new Tun device (/org/freedesktop/NetworkManager/Devices/268)
Dec  6 02:36:39 np0005548731 nova_compute[232433]: 2025-12-06 07:36:39.510 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:36:39 np0005548731 ovn_controller[133927]: 2025-12-06T07:36:39Z|00566|binding|INFO|Claiming lport 657a8a48-7a74-4b37-a294-df0b2fd9332c for this chassis.
Dec  6 02:36:39 np0005548731 ovn_controller[133927]: 2025-12-06T07:36:39Z|00567|binding|INFO|657a8a48-7a74-4b37-a294-df0b2fd9332c: Claiming fa:16:3e:f4:fe:4a 10.100.0.7
Dec  6 02:36:39 np0005548731 systemd-udevd[288323]: Network interface NamePolicy= disabled on kernel command line.
Dec  6 02:36:39 np0005548731 ovn_controller[133927]: 2025-12-06T07:36:39Z|00568|binding|INFO|Setting lport 657a8a48-7a74-4b37-a294-df0b2fd9332c ovn-installed in OVS
Dec  6 02:36:39 np0005548731 nova_compute[232433]: 2025-12-06 07:36:39.545 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:36:39 np0005548731 nova_compute[232433]: 2025-12-06 07:36:39.549 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:36:39 np0005548731 systemd-machined[195355]: New machine qemu-58-instance-0000007f.
Dec  6 02:36:39 np0005548731 NetworkManager[49182]: <info>  [1765006599.5588] device (tap657a8a48-7a): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec  6 02:36:39 np0005548731 NetworkManager[49182]: <info>  [1765006599.5599] device (tap657a8a48-7a): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec  6 02:36:39 np0005548731 systemd[1]: Started Virtual Machine qemu-58-instance-0000007f.
Dec  6 02:36:39 np0005548731 ovn_controller[133927]: 2025-12-06T07:36:39Z|00569|binding|INFO|Setting lport 657a8a48-7a74-4b37-a294-df0b2fd9332c up in Southbound
Dec  6 02:36:39 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:36:39.610 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f4:fe:4a 10.100.0.7'], port_security=['fa:16:3e:f4:fe:4a 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'b4be0ef8-945f-47a1-a3a8-5962f1e692e5', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5bb49e8a-b939-4c79-851c-62c634be0272', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '833f4cf9f5a64b2ab94c3bf330353a31', 'neutron:revision_number': '5', 'neutron:security_group_ids': '44fc0f8f-27e8-483c-be93-2bb490e3ff3b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=56d5d28a-0d18-4549-b1d7-8420194c6348, chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], logical_port=657a8a48-7a74-4b37-a294-df0b2fd9332c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 02:36:39 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:36:39.611 143965 INFO neutron.agent.ovn.metadata.agent [-] Port 657a8a48-7a74-4b37-a294-df0b2fd9332c in datapath 5bb49e8a-b939-4c79-851c-62c634be0272 bound to our chassis#033[00m
Dec  6 02:36:39 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:36:39.613 143965 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 5bb49e8a-b939-4c79-851c-62c634be0272#033[00m
Dec  6 02:36:39 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:36:39.630 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[401e1d5d-2888-4b92-b8d2-8f31bcb48309]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:36:39 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:36:39.660 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[831c7240-4b20-4d93-92e5-e5693fdc9577]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:36:39 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:36:39.665 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[d3fd4ebd-8939-4bde-ba39-236b080520ec]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:36:39 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:36:39.704 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[e6990aa7-9f2f-46d7-a913-790a51c69774]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:36:39 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:36:39.726 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[3910d16e-66a9-4ed5-8dec-596711091468]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5bb49e8a-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:0b:bf:8b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 11, 'rx_bytes': 784, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 11, 'rx_bytes': 784, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 164], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 667055, 'reachable_time': 27555, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 288339, 'error': None, 'target': 'ovnmeta-5bb49e8a-b939-4c79-851c-62c634be0272', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:36:39 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:36:39.747 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[34509e1e-c122-4ba6-8554-b0c8a5de0f4c]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap5bb49e8a-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 667067, 'tstamp': 667067}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 288340, 'error': None, 'target': 'ovnmeta-5bb49e8a-b939-4c79-851c-62c634be0272', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap5bb49e8a-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 667070, 'tstamp': 667070}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 288340, 'error': None, 'target': 'ovnmeta-5bb49e8a-b939-4c79-851c-62c634be0272', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:36:39 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:36:39.748 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5bb49e8a-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:36:39 np0005548731 nova_compute[232433]: 2025-12-06 07:36:39.750 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:36:39 np0005548731 nova_compute[232433]: 2025-12-06 07:36:39.751 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:36:39 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:36:39.751 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5bb49e8a-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:36:39 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:36:39.751 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  6 02:36:39 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:36:39.752 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap5bb49e8a-b0, col_values=(('external_ids', {'iface-id': 'e4d89947-8fab-4c13-b2db-4eed875f77a0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:36:39 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:36:39.752 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  6 02:36:39 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:36:39 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:36:39 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:36:39.877 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:36:40 np0005548731 nova_compute[232433]: 2025-12-06 07:36:40.038 232437 DEBUG nova.compute.manager [req-9fb8c1db-8b0e-4f00-8f75-d55a080dc677 req-a6da6cd9-b930-498c-b156-8d69ad674deb 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: b4be0ef8-945f-47a1-a3a8-5962f1e692e5] Received event network-vif-plugged-657a8a48-7a74-4b37-a294-df0b2fd9332c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:36:40 np0005548731 nova_compute[232433]: 2025-12-06 07:36:40.038 232437 DEBUG oslo_concurrency.lockutils [req-9fb8c1db-8b0e-4f00-8f75-d55a080dc677 req-a6da6cd9-b930-498c-b156-8d69ad674deb 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "b4be0ef8-945f-47a1-a3a8-5962f1e692e5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:36:40 np0005548731 nova_compute[232433]: 2025-12-06 07:36:40.039 232437 DEBUG oslo_concurrency.lockutils [req-9fb8c1db-8b0e-4f00-8f75-d55a080dc677 req-a6da6cd9-b930-498c-b156-8d69ad674deb 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "b4be0ef8-945f-47a1-a3a8-5962f1e692e5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:36:40 np0005548731 nova_compute[232433]: 2025-12-06 07:36:40.039 232437 DEBUG oslo_concurrency.lockutils [req-9fb8c1db-8b0e-4f00-8f75-d55a080dc677 req-a6da6cd9-b930-498c-b156-8d69ad674deb 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "b4be0ef8-945f-47a1-a3a8-5962f1e692e5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:36:40 np0005548731 nova_compute[232433]: 2025-12-06 07:36:40.039 232437 DEBUG nova.compute.manager [req-9fb8c1db-8b0e-4f00-8f75-d55a080dc677 req-a6da6cd9-b930-498c-b156-8d69ad674deb 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: b4be0ef8-945f-47a1-a3a8-5962f1e692e5] No waiting events found dispatching network-vif-plugged-657a8a48-7a74-4b37-a294-df0b2fd9332c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 02:36:40 np0005548731 nova_compute[232433]: 2025-12-06 07:36:40.039 232437 WARNING nova.compute.manager [req-9fb8c1db-8b0e-4f00-8f75-d55a080dc677 req-a6da6cd9-b930-498c-b156-8d69ad674deb 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: b4be0ef8-945f-47a1-a3a8-5962f1e692e5] Received unexpected event network-vif-plugged-657a8a48-7a74-4b37-a294-df0b2fd9332c for instance with vm_state active and task_state resize_finish.#033[00m
Dec  6 02:36:40 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:36:40 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:36:40 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:36:40.702 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:36:40 np0005548731 nova_compute[232433]: 2025-12-06 07:36:40.990 232437 DEBUG nova.virt.driver [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Emitting event <LifecycleEvent: 1765006600.9901605, b4be0ef8-945f-47a1-a3a8-5962f1e692e5 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 02:36:40 np0005548731 nova_compute[232433]: 2025-12-06 07:36:40.990 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: b4be0ef8-945f-47a1-a3a8-5962f1e692e5] VM Resumed (Lifecycle Event)#033[00m
Dec  6 02:36:40 np0005548731 nova_compute[232433]: 2025-12-06 07:36:40.992 232437 DEBUG nova.compute.manager [None req-f54157de-2bf7-49ae-a3a0-69979514d3f7 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] [instance: b4be0ef8-945f-47a1-a3a8-5962f1e692e5] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec  6 02:36:40 np0005548731 nova_compute[232433]: 2025-12-06 07:36:40.996 232437 INFO nova.virt.libvirt.driver [-] [instance: b4be0ef8-945f-47a1-a3a8-5962f1e692e5] Instance running successfully.#033[00m
Dec  6 02:36:40 np0005548731 virtqemud[232080]: argument unsupported: QEMU guest agent is not configured
Dec  6 02:36:40 np0005548731 nova_compute[232433]: 2025-12-06 07:36:40.999 232437 DEBUG nova.virt.libvirt.guest [None req-f54157de-2bf7-49ae-a3a0-69979514d3f7 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] [instance: b4be0ef8-945f-47a1-a3a8-5962f1e692e5] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200#033[00m
Dec  6 02:36:40 np0005548731 nova_compute[232433]: 2025-12-06 07:36:40.999 232437 DEBUG nova.virt.libvirt.driver [None req-f54157de-2bf7-49ae-a3a0-69979514d3f7 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] [instance: b4be0ef8-945f-47a1-a3a8-5962f1e692e5] finish_migration finished successfully. finish_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11793#033[00m
Dec  6 02:36:41 np0005548731 nova_compute[232433]: 2025-12-06 07:36:41.131 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: b4be0ef8-945f-47a1-a3a8-5962f1e692e5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:36:41 np0005548731 nova_compute[232433]: 2025-12-06 07:36:41.135 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: b4be0ef8-945f-47a1-a3a8-5962f1e692e5] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: resize_finish, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  6 02:36:41 np0005548731 nova_compute[232433]: 2025-12-06 07:36:41.155 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: b4be0ef8-945f-47a1-a3a8-5962f1e692e5] During sync_power_state the instance has a pending task (resize_finish). Skip.#033[00m
Dec  6 02:36:41 np0005548731 nova_compute[232433]: 2025-12-06 07:36:41.155 232437 DEBUG nova.virt.driver [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Emitting event <LifecycleEvent: 1765006600.992596, b4be0ef8-945f-47a1-a3a8-5962f1e692e5 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 02:36:41 np0005548731 nova_compute[232433]: 2025-12-06 07:36:41.155 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: b4be0ef8-945f-47a1-a3a8-5962f1e692e5] VM Started (Lifecycle Event)#033[00m
Dec  6 02:36:41 np0005548731 nova_compute[232433]: 2025-12-06 07:36:41.189 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: b4be0ef8-945f-47a1-a3a8-5962f1e692e5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:36:41 np0005548731 nova_compute[232433]: 2025-12-06 07:36:41.198 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: b4be0ef8-945f-47a1-a3a8-5962f1e692e5] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: resize_finish, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  6 02:36:41 np0005548731 nova_compute[232433]: 2025-12-06 07:36:41.234 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: b4be0ef8-945f-47a1-a3a8-5962f1e692e5] During sync_power_state the instance has a pending task (resize_finish). Skip.#033[00m
Dec  6 02:36:41 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e299 e299: 3 total, 3 up, 3 in
Dec  6 02:36:41 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:36:41 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:36:41 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:36:41.880 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:36:42 np0005548731 nova_compute[232433]: 2025-12-06 07:36:42.232 232437 DEBUG nova.compute.manager [req-09ee38a5-50b0-4acd-be95-bfc7814abf40 req-8a4d2b63-56d8-4fa7-bee7-2f4399a9b72d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: b4be0ef8-945f-47a1-a3a8-5962f1e692e5] Received event network-vif-plugged-657a8a48-7a74-4b37-a294-df0b2fd9332c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:36:42 np0005548731 nova_compute[232433]: 2025-12-06 07:36:42.232 232437 DEBUG oslo_concurrency.lockutils [req-09ee38a5-50b0-4acd-be95-bfc7814abf40 req-8a4d2b63-56d8-4fa7-bee7-2f4399a9b72d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "b4be0ef8-945f-47a1-a3a8-5962f1e692e5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:36:42 np0005548731 nova_compute[232433]: 2025-12-06 07:36:42.234 232437 DEBUG oslo_concurrency.lockutils [req-09ee38a5-50b0-4acd-be95-bfc7814abf40 req-8a4d2b63-56d8-4fa7-bee7-2f4399a9b72d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "b4be0ef8-945f-47a1-a3a8-5962f1e692e5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:36:42 np0005548731 nova_compute[232433]: 2025-12-06 07:36:42.234 232437 DEBUG oslo_concurrency.lockutils [req-09ee38a5-50b0-4acd-be95-bfc7814abf40 req-8a4d2b63-56d8-4fa7-bee7-2f4399a9b72d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "b4be0ef8-945f-47a1-a3a8-5962f1e692e5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:36:42 np0005548731 nova_compute[232433]: 2025-12-06 07:36:42.235 232437 DEBUG nova.compute.manager [req-09ee38a5-50b0-4acd-be95-bfc7814abf40 req-8a4d2b63-56d8-4fa7-bee7-2f4399a9b72d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: b4be0ef8-945f-47a1-a3a8-5962f1e692e5] No waiting events found dispatching network-vif-plugged-657a8a48-7a74-4b37-a294-df0b2fd9332c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 02:36:42 np0005548731 nova_compute[232433]: 2025-12-06 07:36:42.235 232437 WARNING nova.compute.manager [req-09ee38a5-50b0-4acd-be95-bfc7814abf40 req-8a4d2b63-56d8-4fa7-bee7-2f4399a9b72d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: b4be0ef8-945f-47a1-a3a8-5962f1e692e5] Received unexpected event network-vif-plugged-657a8a48-7a74-4b37-a294-df0b2fd9332c for instance with vm_state resized and task_state None.#033[00m
Dec  6 02:36:42 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:36:42 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:36:42 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:36:42.705 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:36:42 np0005548731 nova_compute[232433]: 2025-12-06 07:36:42.914 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:36:43 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:36:43 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:36:43 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:36:43 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:36:43.882 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:36:44 np0005548731 nova_compute[232433]: 2025-12-06 07:36:44.127 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:36:44 np0005548731 nova_compute[232433]: 2025-12-06 07:36:44.679 232437 DEBUG oslo_concurrency.lockutils [None req-a5d28de0-db0b-4471-aa17-4dc03d87f27a 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Acquiring lock "b4be0ef8-945f-47a1-a3a8-5962f1e692e5" by "nova.compute.manager.ComputeManager.confirm_resize.<locals>.do_confirm_resize" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:36:44 np0005548731 nova_compute[232433]: 2025-12-06 07:36:44.680 232437 DEBUG oslo_concurrency.lockutils [None req-a5d28de0-db0b-4471-aa17-4dc03d87f27a 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Lock "b4be0ef8-945f-47a1-a3a8-5962f1e692e5" acquired by "nova.compute.manager.ComputeManager.confirm_resize.<locals>.do_confirm_resize" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:36:44 np0005548731 nova_compute[232433]: 2025-12-06 07:36:44.680 232437 DEBUG nova.compute.manager [None req-a5d28de0-db0b-4471-aa17-4dc03d87f27a 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] [instance: b4be0ef8-945f-47a1-a3a8-5962f1e692e5] Going to confirm migration 18 do_confirm_resize /usr/lib/python3.9/site-packages/nova/compute/manager.py:4679#033[00m
Dec  6 02:36:44 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:36:44 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:36:44 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:36:44.708 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:36:44 np0005548731 nova_compute[232433]: 2025-12-06 07:36:44.959 232437 DEBUG oslo_concurrency.lockutils [None req-a5d28de0-db0b-4471-aa17-4dc03d87f27a 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Acquiring lock "refresh_cache-b4be0ef8-945f-47a1-a3a8-5962f1e692e5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 02:36:44 np0005548731 nova_compute[232433]: 2025-12-06 07:36:44.960 232437 DEBUG oslo_concurrency.lockutils [None req-a5d28de0-db0b-4471-aa17-4dc03d87f27a 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Acquired lock "refresh_cache-b4be0ef8-945f-47a1-a3a8-5962f1e692e5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 02:36:44 np0005548731 nova_compute[232433]: 2025-12-06 07:36:44.960 232437 DEBUG nova.network.neutron [None req-a5d28de0-db0b-4471-aa17-4dc03d87f27a 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] [instance: b4be0ef8-945f-47a1-a3a8-5962f1e692e5] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec  6 02:36:44 np0005548731 nova_compute[232433]: 2025-12-06 07:36:44.961 232437 DEBUG nova.objects.instance [None req-a5d28de0-db0b-4471-aa17-4dc03d87f27a 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Lazy-loading 'info_cache' on Instance uuid b4be0ef8-945f-47a1-a3a8-5962f1e692e5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:36:45 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e300 e300: 3 total, 3 up, 3 in
Dec  6 02:36:45 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:36:45 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:36:45 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:36:45.884 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:36:46 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:36:46 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:36:46 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:36:46.711 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:36:47 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:36:47 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:36:47 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:36:47.886 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:36:47 np0005548731 nova_compute[232433]: 2025-12-06 07:36:47.915 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:36:48 np0005548731 nova_compute[232433]: 2025-12-06 07:36:48.221 232437 DEBUG nova.network.neutron [None req-a5d28de0-db0b-4471-aa17-4dc03d87f27a 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] [instance: b4be0ef8-945f-47a1-a3a8-5962f1e692e5] Updating instance_info_cache with network_info: [{"id": "657a8a48-7a74-4b37-a294-df0b2fd9332c", "address": "fa:16:3e:f4:fe:4a", "network": {"id": "5bb49e8a-b939-4c79-851c-62c634be0272", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-50482264-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "833f4cf9f5a64b2ab94c3bf330353a31", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap657a8a48-7a", "ovs_interfaceid": "657a8a48-7a74-4b37-a294-df0b2fd9332c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 02:36:48 np0005548731 nova_compute[232433]: 2025-12-06 07:36:48.248 232437 DEBUG oslo_concurrency.lockutils [None req-a5d28de0-db0b-4471-aa17-4dc03d87f27a 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Releasing lock "refresh_cache-b4be0ef8-945f-47a1-a3a8-5962f1e692e5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 02:36:48 np0005548731 nova_compute[232433]: 2025-12-06 07:36:48.249 232437 DEBUG nova.objects.instance [None req-a5d28de0-db0b-4471-aa17-4dc03d87f27a 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Lazy-loading 'migration_context' on Instance uuid b4be0ef8-945f-47a1-a3a8-5962f1e692e5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:36:48 np0005548731 nova_compute[232433]: 2025-12-06 07:36:48.361 232437 DEBUG nova.storage.rbd_utils [None req-a5d28de0-db0b-4471-aa17-4dc03d87f27a 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] removing snapshot(nova-resize) on rbd image(b4be0ef8-945f-47a1-a3a8-5962f1e692e5_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Dec  6 02:36:48 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:36:48 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:36:48 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:36:48 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:36:48.714 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:36:49 np0005548731 nova_compute[232433]: 2025-12-06 07:36:49.129 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:36:49 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:36:49 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:36:49 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:36:49.888 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:36:50 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:36:50 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:36:50 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:36:50.716 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:36:51 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:36:51 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:36:51 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:36:51.890 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:36:52 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:36:52 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:36:52 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:36:52.720 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:36:52 np0005548731 nova_compute[232433]: 2025-12-06 07:36:52.967 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:36:53 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e301 e301: 3 total, 3 up, 3 in
Dec  6 02:36:53 np0005548731 nova_compute[232433]: 2025-12-06 07:36:53.320 232437 DEBUG oslo_concurrency.lockutils [None req-a5d28de0-db0b-4471-aa17-4dc03d87f27a 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_source" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:36:53 np0005548731 nova_compute[232433]: 2025-12-06 07:36:53.321 232437 DEBUG oslo_concurrency.lockutils [None req-a5d28de0-db0b-4471-aa17-4dc03d87f27a 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_source" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:36:53 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e301 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:36:53 np0005548731 nova_compute[232433]: 2025-12-06 07:36:53.521 232437 DEBUG oslo_concurrency.processutils [None req-a5d28de0-db0b-4471-aa17-4dc03d87f27a 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:36:53 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:36:53 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:36:53 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:36:53.892 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:36:53 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 02:36:53 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1062886133' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 02:36:54 np0005548731 nova_compute[232433]: 2025-12-06 07:36:54.008 232437 DEBUG oslo_concurrency.processutils [None req-a5d28de0-db0b-4471-aa17-4dc03d87f27a 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.487s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:36:54 np0005548731 nova_compute[232433]: 2025-12-06 07:36:54.015 232437 DEBUG nova.compute.provider_tree [None req-a5d28de0-db0b-4471-aa17-4dc03d87f27a 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Inventory has not changed in ProviderTree for provider: 6d00757a-082f-486d-ae84-869a2ba2e6e7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  6 02:36:54 np0005548731 nova_compute[232433]: 2025-12-06 07:36:54.088 232437 DEBUG nova.scheduler.client.report [None req-a5d28de0-db0b-4471-aa17-4dc03d87f27a 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Inventory has not changed for provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  6 02:36:54 np0005548731 nova_compute[232433]: 2025-12-06 07:36:54.133 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:36:54 np0005548731 nova_compute[232433]: 2025-12-06 07:36:54.212 232437 DEBUG oslo_concurrency.lockutils [None req-a5d28de0-db0b-4471-aa17-4dc03d87f27a 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_source" :: held 0.891s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:36:54 np0005548731 nova_compute[232433]: 2025-12-06 07:36:54.486 232437 INFO nova.scheduler.client.report [None req-a5d28de0-db0b-4471-aa17-4dc03d87f27a 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Deleted allocation for migration 46fa4558-b61b-4de6-8c71-66dd0299ecc4#033[00m
Dec  6 02:36:54 np0005548731 nova_compute[232433]: 2025-12-06 07:36:54.631 232437 DEBUG oslo_concurrency.lockutils [None req-a5d28de0-db0b-4471-aa17-4dc03d87f27a 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Lock "b4be0ef8-945f-47a1-a3a8-5962f1e692e5" "released" by "nova.compute.manager.ComputeManager.confirm_resize.<locals>.do_confirm_resize" :: held 9.952s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:36:54 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:36:54 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:36:54 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:36:54.723 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:36:55 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e302 e302: 3 total, 3 up, 3 in
Dec  6 02:36:55 np0005548731 ovn_controller[133927]: 2025-12-06T07:36:55Z|00083|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:f4:fe:4a 10.100.0.7
Dec  6 02:36:55 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:36:55 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:36:55 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:36:55.895 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:36:56 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:36:56.244 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=51, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:ec:b3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '32:72:e7:89:e0:7d'}, ipsec=False) old=SB_Global(nb_cfg=50) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 02:36:56 np0005548731 nova_compute[232433]: 2025-12-06 07:36:56.245 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:36:56 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:36:56.245 143965 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec  6 02:36:56 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:36:56 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 02:36:56 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:36:56.725 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 02:36:57 np0005548731 podman[288492]: 2025-12-06 07:36:57.663486303 +0000 UTC m=+0.068013269 container health_status 26c2ce9bdb83c6db5ccad7f5b2a7cb4e9354ab0235ad2f942ea42c8feda2142b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent)
Dec  6 02:36:57 np0005548731 podman[288494]: 2025-12-06 07:36:57.668283059 +0000 UTC m=+0.072550068 container health_status ae4fd08cdec31d6ae2a6b91ffff7deb6ca5a8375cb52ee4b685cc6f25796824e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=multipathd, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec  6 02:36:57 np0005548731 podman[288493]: 2025-12-06 07:36:57.69537606 +0000 UTC m=+0.099880915 container health_status a118c3becf56cfbcae208065771ebf10a65162bb7b96389971293e534823fb78 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  6 02:36:57 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:36:57 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:36:57 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:36:57.897 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:36:57 np0005548731 nova_compute[232433]: 2025-12-06 07:36:57.969 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:36:58 np0005548731 nova_compute[232433]: 2025-12-06 07:36:58.159 232437 DEBUG oslo_concurrency.lockutils [None req-04a52976-2ff7-415f-a8f6-fc3b9e0cbc13 7960f8f407754e5d83bee65690a6b772 21cd37bffa864aaebe5c734ba468f466 - - default default] Acquiring lock "ffee41ca-ba3b-4787-8435-b3903ffb29a9" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:36:58 np0005548731 nova_compute[232433]: 2025-12-06 07:36:58.159 232437 DEBUG oslo_concurrency.lockutils [None req-04a52976-2ff7-415f-a8f6-fc3b9e0cbc13 7960f8f407754e5d83bee65690a6b772 21cd37bffa864aaebe5c734ba468f466 - - default default] Lock "ffee41ca-ba3b-4787-8435-b3903ffb29a9" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:36:58 np0005548731 nova_compute[232433]: 2025-12-06 07:36:58.179 232437 DEBUG nova.compute.manager [None req-04a52976-2ff7-415f-a8f6-fc3b9e0cbc13 7960f8f407754e5d83bee65690a6b772 21cd37bffa864aaebe5c734ba468f466 - - default default] [instance: ffee41ca-ba3b-4787-8435-b3903ffb29a9] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Dec  6 02:36:58 np0005548731 nova_compute[232433]: 2025-12-06 07:36:58.274 232437 DEBUG oslo_concurrency.lockutils [None req-04a52976-2ff7-415f-a8f6-fc3b9e0cbc13 7960f8f407754e5d83bee65690a6b772 21cd37bffa864aaebe5c734ba468f466 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:36:58 np0005548731 nova_compute[232433]: 2025-12-06 07:36:58.274 232437 DEBUG oslo_concurrency.lockutils [None req-04a52976-2ff7-415f-a8f6-fc3b9e0cbc13 7960f8f407754e5d83bee65690a6b772 21cd37bffa864aaebe5c734ba468f466 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:36:58 np0005548731 nova_compute[232433]: 2025-12-06 07:36:58.281 232437 DEBUG nova.virt.hardware [None req-04a52976-2ff7-415f-a8f6-fc3b9e0cbc13 7960f8f407754e5d83bee65690a6b772 21cd37bffa864aaebe5c734ba468f466 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Dec  6 02:36:58 np0005548731 nova_compute[232433]: 2025-12-06 07:36:58.282 232437 INFO nova.compute.claims [None req-04a52976-2ff7-415f-a8f6-fc3b9e0cbc13 7960f8f407754e5d83bee65690a6b772 21cd37bffa864aaebe5c734ba468f466 - - default default] [instance: ffee41ca-ba3b-4787-8435-b3903ffb29a9] Claim successful on node compute-2.ctlplane.example.com#033[00m
Dec  6 02:36:58 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e302 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:36:58 np0005548731 nova_compute[232433]: 2025-12-06 07:36:58.500 232437 DEBUG oslo_concurrency.processutils [None req-04a52976-2ff7-415f-a8f6-fc3b9e0cbc13 7960f8f407754e5d83bee65690a6b772 21cd37bffa864aaebe5c734ba468f466 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:36:58 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:36:58 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:36:58 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:36:58.727 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:36:58 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 02:36:58 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/825412902' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 02:36:58 np0005548731 nova_compute[232433]: 2025-12-06 07:36:58.937 232437 DEBUG oslo_concurrency.processutils [None req-04a52976-2ff7-415f-a8f6-fc3b9e0cbc13 7960f8f407754e5d83bee65690a6b772 21cd37bffa864aaebe5c734ba468f466 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.438s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:36:58 np0005548731 nova_compute[232433]: 2025-12-06 07:36:58.944 232437 DEBUG nova.compute.provider_tree [None req-04a52976-2ff7-415f-a8f6-fc3b9e0cbc13 7960f8f407754e5d83bee65690a6b772 21cd37bffa864aaebe5c734ba468f466 - - default default] Inventory has not changed in ProviderTree for provider: 6d00757a-082f-486d-ae84-869a2ba2e6e7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  6 02:36:58 np0005548731 nova_compute[232433]: 2025-12-06 07:36:58.961 232437 DEBUG nova.scheduler.client.report [None req-04a52976-2ff7-415f-a8f6-fc3b9e0cbc13 7960f8f407754e5d83bee65690a6b772 21cd37bffa864aaebe5c734ba468f466 - - default default] Inventory has not changed for provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  6 02:36:58 np0005548731 nova_compute[232433]: 2025-12-06 07:36:58.981 232437 DEBUG oslo_concurrency.lockutils [None req-04a52976-2ff7-415f-a8f6-fc3b9e0cbc13 7960f8f407754e5d83bee65690a6b772 21cd37bffa864aaebe5c734ba468f466 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.706s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:36:58 np0005548731 nova_compute[232433]: 2025-12-06 07:36:58.981 232437 DEBUG nova.compute.manager [None req-04a52976-2ff7-415f-a8f6-fc3b9e0cbc13 7960f8f407754e5d83bee65690a6b772 21cd37bffa864aaebe5c734ba468f466 - - default default] [instance: ffee41ca-ba3b-4787-8435-b3903ffb29a9] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Dec  6 02:36:59 np0005548731 nova_compute[232433]: 2025-12-06 07:36:59.040 232437 DEBUG nova.compute.manager [None req-04a52976-2ff7-415f-a8f6-fc3b9e0cbc13 7960f8f407754e5d83bee65690a6b772 21cd37bffa864aaebe5c734ba468f466 - - default default] [instance: ffee41ca-ba3b-4787-8435-b3903ffb29a9] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Dec  6 02:36:59 np0005548731 nova_compute[232433]: 2025-12-06 07:36:59.041 232437 DEBUG nova.network.neutron [None req-04a52976-2ff7-415f-a8f6-fc3b9e0cbc13 7960f8f407754e5d83bee65690a6b772 21cd37bffa864aaebe5c734ba468f466 - - default default] [instance: ffee41ca-ba3b-4787-8435-b3903ffb29a9] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Dec  6 02:36:59 np0005548731 nova_compute[232433]: 2025-12-06 07:36:59.062 232437 INFO nova.virt.libvirt.driver [None req-04a52976-2ff7-415f-a8f6-fc3b9e0cbc13 7960f8f407754e5d83bee65690a6b772 21cd37bffa864aaebe5c734ba468f466 - - default default] [instance: ffee41ca-ba3b-4787-8435-b3903ffb29a9] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Dec  6 02:36:59 np0005548731 nova_compute[232433]: 2025-12-06 07:36:59.086 232437 DEBUG nova.compute.manager [None req-04a52976-2ff7-415f-a8f6-fc3b9e0cbc13 7960f8f407754e5d83bee65690a6b772 21cd37bffa864aaebe5c734ba468f466 - - default default] [instance: ffee41ca-ba3b-4787-8435-b3903ffb29a9] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Dec  6 02:36:59 np0005548731 nova_compute[232433]: 2025-12-06 07:36:59.138 232437 INFO nova.virt.block_device [None req-04a52976-2ff7-415f-a8f6-fc3b9e0cbc13 7960f8f407754e5d83bee65690a6b772 21cd37bffa864aaebe5c734ba468f466 - - default default] [instance: ffee41ca-ba3b-4787-8435-b3903ffb29a9] Booting with volume a032ab5d-3621-4ab1-82c3-2e8c7d4eeb9c at /dev/vda#033[00m
Dec  6 02:36:59 np0005548731 nova_compute[232433]: 2025-12-06 07:36:59.140 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:36:59 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:36:59.247 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=9f96b960-b4f2-40bd-ae99-08121f5e8b78, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '51'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:36:59 np0005548731 nova_compute[232433]: 2025-12-06 07:36:59.294 232437 DEBUG nova.policy [None req-04a52976-2ff7-415f-a8f6-fc3b9e0cbc13 7960f8f407754e5d83bee65690a6b772 21cd37bffa864aaebe5c734ba468f466 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '7960f8f407754e5d83bee65690a6b772', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '21cd37bffa864aaebe5c734ba468f466', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Dec  6 02:36:59 np0005548731 nova_compute[232433]: 2025-12-06 07:36:59.313 232437 DEBUG os_brick.utils [None req-04a52976-2ff7-415f-a8f6-fc3b9e0cbc13 7960f8f407754e5d83bee65690a6b772 21cd37bffa864aaebe5c734ba468f466 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.102', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-2.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Dec  6 02:36:59 np0005548731 nova_compute[232433]: 2025-12-06 07:36:59.314 237736 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:36:59 np0005548731 nova_compute[232433]: 2025-12-06 07:36:59.325 237736 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.010s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:36:59 np0005548731 nova_compute[232433]: 2025-12-06 07:36:59.325 237736 DEBUG oslo.privsep.daemon [-] privsep: reply[9b00b474-1968-43f1-b585-53570629670e]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:36:59 np0005548731 nova_compute[232433]: 2025-12-06 07:36:59.326 237736 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:36:59 np0005548731 nova_compute[232433]: 2025-12-06 07:36:59.333 237736 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.007s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:36:59 np0005548731 nova_compute[232433]: 2025-12-06 07:36:59.334 237736 DEBUG oslo.privsep.daemon [-] privsep: reply[5de3c5a6-68c0-4138-b4c2-4d00f81640c4]: (4, ('InitiatorName=iqn.1994-05.com.redhat:63778d5959f0', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:36:59 np0005548731 nova_compute[232433]: 2025-12-06 07:36:59.335 237736 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:36:59 np0005548731 nova_compute[232433]: 2025-12-06 07:36:59.341 237736 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.006s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:36:59 np0005548731 nova_compute[232433]: 2025-12-06 07:36:59.341 237736 DEBUG oslo.privsep.daemon [-] privsep: reply[7d4e4ad9-ce5d-462d-8b82-dc678f06c64b]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:36:59 np0005548731 nova_compute[232433]: 2025-12-06 07:36:59.342 237736 DEBUG oslo.privsep.daemon [-] privsep: reply[e3919a74-2791-4aa7-89b3-483b44e28d97]: (4, 'ec2492e2-e0d1-43d3-b74f-744a8272a0e6') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:36:59 np0005548731 nova_compute[232433]: 2025-12-06 07:36:59.343 232437 DEBUG oslo_concurrency.processutils [None req-04a52976-2ff7-415f-a8f6-fc3b9e0cbc13 7960f8f407754e5d83bee65690a6b772 21cd37bffa864aaebe5c734ba468f466 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:36:59 np0005548731 nova_compute[232433]: 2025-12-06 07:36:59.367 232437 DEBUG oslo_concurrency.processutils [None req-04a52976-2ff7-415f-a8f6-fc3b9e0cbc13 7960f8f407754e5d83bee65690a6b772 21cd37bffa864aaebe5c734ba468f466 - - default default] CMD "nvme version" returned: 0 in 0.024s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:36:59 np0005548731 nova_compute[232433]: 2025-12-06 07:36:59.369 232437 DEBUG os_brick.initiator.connectors.lightos [None req-04a52976-2ff7-415f-a8f6-fc3b9e0cbc13 7960f8f407754e5d83bee65690a6b772 21cd37bffa864aaebe5c734ba468f466 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Dec  6 02:36:59 np0005548731 nova_compute[232433]: 2025-12-06 07:36:59.369 232437 DEBUG os_brick.initiator.connectors.lightos [None req-04a52976-2ff7-415f-a8f6-fc3b9e0cbc13 7960f8f407754e5d83bee65690a6b772 21cd37bffa864aaebe5c734ba468f466 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Dec  6 02:36:59 np0005548731 nova_compute[232433]: 2025-12-06 07:36:59.370 232437 DEBUG os_brick.initiator.connectors.lightos [None req-04a52976-2ff7-415f-a8f6-fc3b9e0cbc13 7960f8f407754e5d83bee65690a6b772 21cd37bffa864aaebe5c734ba468f466 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:bf3e0a14-a5f8-4123-aa26-e7cad37b879a dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Dec  6 02:36:59 np0005548731 nova_compute[232433]: 2025-12-06 07:36:59.370 232437 DEBUG os_brick.utils [None req-04a52976-2ff7-415f-a8f6-fc3b9e0cbc13 7960f8f407754e5d83bee65690a6b772 21cd37bffa864aaebe5c734ba468f466 - - default default] <== get_connector_properties: return (56ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.102', 'host': 'compute-2.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:63778d5959f0', 'do_local_attach': False, 'nvme_hostid': 'bf3e0a14-a5f8-4123-aa26-e7cad37b879a', 'system uuid': 'ec2492e2-e0d1-43d3-b74f-744a8272a0e6', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:bf3e0a14-a5f8-4123-aa26-e7cad37b879a', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Dec  6 02:36:59 np0005548731 nova_compute[232433]: 2025-12-06 07:36:59.370 232437 DEBUG nova.virt.block_device [None req-04a52976-2ff7-415f-a8f6-fc3b9e0cbc13 7960f8f407754e5d83bee65690a6b772 21cd37bffa864aaebe5c734ba468f466 - - default default] [instance: ffee41ca-ba3b-4787-8435-b3903ffb29a9] Updating existing volume attachment record: a904617c-462d-4f96-aaee-4174b3538c6e _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Dec  6 02:36:59 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e303 e303: 3 total, 3 up, 3 in
Dec  6 02:36:59 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:36:59 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:36:59 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:36:59.898 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:37:00 np0005548731 nova_compute[232433]: 2025-12-06 07:37:00.376 232437 DEBUG nova.compute.manager [req-3ccc96d7-22eb-437e-bbc1-ef2692ab556f req-f12362ad-f5a3-4cb8-986a-389aec7db900 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 3d4b97f9-b8c3-4764-87f9-4006968fecd4] Received event network-changed-bfac9ac2-c1cc-4dc1-830c-44e5990fb8e1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:37:00 np0005548731 nova_compute[232433]: 2025-12-06 07:37:00.376 232437 DEBUG nova.compute.manager [req-3ccc96d7-22eb-437e-bbc1-ef2692ab556f req-f12362ad-f5a3-4cb8-986a-389aec7db900 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 3d4b97f9-b8c3-4764-87f9-4006968fecd4] Refreshing instance network info cache due to event network-changed-bfac9ac2-c1cc-4dc1-830c-44e5990fb8e1. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  6 02:37:00 np0005548731 nova_compute[232433]: 2025-12-06 07:37:00.377 232437 DEBUG oslo_concurrency.lockutils [req-3ccc96d7-22eb-437e-bbc1-ef2692ab556f req-f12362ad-f5a3-4cb8-986a-389aec7db900 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-3d4b97f9-b8c3-4764-87f9-4006968fecd4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 02:37:00 np0005548731 nova_compute[232433]: 2025-12-06 07:37:00.377 232437 DEBUG oslo_concurrency.lockutils [req-3ccc96d7-22eb-437e-bbc1-ef2692ab556f req-f12362ad-f5a3-4cb8-986a-389aec7db900 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-3d4b97f9-b8c3-4764-87f9-4006968fecd4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 02:37:00 np0005548731 nova_compute[232433]: 2025-12-06 07:37:00.377 232437 DEBUG nova.network.neutron [req-3ccc96d7-22eb-437e-bbc1-ef2692ab556f req-f12362ad-f5a3-4cb8-986a-389aec7db900 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 3d4b97f9-b8c3-4764-87f9-4006968fecd4] Refreshing network info cache for port bfac9ac2-c1cc-4dc1-830c-44e5990fb8e1 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  6 02:37:00 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:37:00 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:37:00 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:37:00.730 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:37:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:37:00.878 143965 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:37:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:37:00.879 143965 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:37:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:37:00.880 143965 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:37:00 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  6 02:37:00 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3009061304' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  6 02:37:01 np0005548731 nova_compute[232433]: 2025-12-06 07:37:01.292 232437 DEBUG nova.compute.manager [None req-04a52976-2ff7-415f-a8f6-fc3b9e0cbc13 7960f8f407754e5d83bee65690a6b772 21cd37bffa864aaebe5c734ba468f466 - - default default] [instance: ffee41ca-ba3b-4787-8435-b3903ffb29a9] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Dec  6 02:37:01 np0005548731 nova_compute[232433]: 2025-12-06 07:37:01.293 232437 DEBUG nova.virt.libvirt.driver [None req-04a52976-2ff7-415f-a8f6-fc3b9e0cbc13 7960f8f407754e5d83bee65690a6b772 21cd37bffa864aaebe5c734ba468f466 - - default default] [instance: ffee41ca-ba3b-4787-8435-b3903ffb29a9] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Dec  6 02:37:01 np0005548731 nova_compute[232433]: 2025-12-06 07:37:01.294 232437 INFO nova.virt.libvirt.driver [None req-04a52976-2ff7-415f-a8f6-fc3b9e0cbc13 7960f8f407754e5d83bee65690a6b772 21cd37bffa864aaebe5c734ba468f466 - - default default] [instance: ffee41ca-ba3b-4787-8435-b3903ffb29a9] Creating image(s)#033[00m
Dec  6 02:37:01 np0005548731 nova_compute[232433]: 2025-12-06 07:37:01.294 232437 DEBUG nova.virt.libvirt.driver [None req-04a52976-2ff7-415f-a8f6-fc3b9e0cbc13 7960f8f407754e5d83bee65690a6b772 21cd37bffa864aaebe5c734ba468f466 - - default default] [instance: ffee41ca-ba3b-4787-8435-b3903ffb29a9] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859#033[00m
Dec  6 02:37:01 np0005548731 nova_compute[232433]: 2025-12-06 07:37:01.294 232437 DEBUG nova.virt.libvirt.driver [None req-04a52976-2ff7-415f-a8f6-fc3b9e0cbc13 7960f8f407754e5d83bee65690a6b772 21cd37bffa864aaebe5c734ba468f466 - - default default] [instance: ffee41ca-ba3b-4787-8435-b3903ffb29a9] Ensure instance console log exists: /var/lib/nova/instances/ffee41ca-ba3b-4787-8435-b3903ffb29a9/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec  6 02:37:01 np0005548731 nova_compute[232433]: 2025-12-06 07:37:01.295 232437 DEBUG oslo_concurrency.lockutils [None req-04a52976-2ff7-415f-a8f6-fc3b9e0cbc13 7960f8f407754e5d83bee65690a6b772 21cd37bffa864aaebe5c734ba468f466 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:37:01 np0005548731 nova_compute[232433]: 2025-12-06 07:37:01.295 232437 DEBUG oslo_concurrency.lockutils [None req-04a52976-2ff7-415f-a8f6-fc3b9e0cbc13 7960f8f407754e5d83bee65690a6b772 21cd37bffa864aaebe5c734ba468f466 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:37:01 np0005548731 nova_compute[232433]: 2025-12-06 07:37:01.295 232437 DEBUG oslo_concurrency.lockutils [None req-04a52976-2ff7-415f-a8f6-fc3b9e0cbc13 7960f8f407754e5d83bee65690a6b772 21cd37bffa864aaebe5c734ba468f466 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:37:01 np0005548731 nova_compute[232433]: 2025-12-06 07:37:01.429 232437 DEBUG nova.network.neutron [None req-04a52976-2ff7-415f-a8f6-fc3b9e0cbc13 7960f8f407754e5d83bee65690a6b772 21cd37bffa864aaebe5c734ba468f466 - - default default] [instance: ffee41ca-ba3b-4787-8435-b3903ffb29a9] Successfully created port: 500715f2-2222-4003-b268-0369959f5e25 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Dec  6 02:37:01 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:37:01 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:37:01 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:37:01.900 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:37:02 np0005548731 nova_compute[232433]: 2025-12-06 07:37:02.656 232437 DEBUG nova.network.neutron [req-3ccc96d7-22eb-437e-bbc1-ef2692ab556f req-f12362ad-f5a3-4cb8-986a-389aec7db900 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 3d4b97f9-b8c3-4764-87f9-4006968fecd4] Updated VIF entry in instance network info cache for port bfac9ac2-c1cc-4dc1-830c-44e5990fb8e1. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  6 02:37:02 np0005548731 nova_compute[232433]: 2025-12-06 07:37:02.657 232437 DEBUG nova.network.neutron [req-3ccc96d7-22eb-437e-bbc1-ef2692ab556f req-f12362ad-f5a3-4cb8-986a-389aec7db900 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 3d4b97f9-b8c3-4764-87f9-4006968fecd4] Updating instance_info_cache with network_info: [{"id": "bfac9ac2-c1cc-4dc1-830c-44e5990fb8e1", "address": "fa:16:3e:c3:31:81", "network": {"id": "5bb49e8a-b939-4c79-851c-62c634be0272", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-50482264-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "833f4cf9f5a64b2ab94c3bf330353a31", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbfac9ac2-c1", "ovs_interfaceid": "bfac9ac2-c1cc-4dc1-830c-44e5990fb8e1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 02:37:02 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:37:02 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 02:37:02 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:37:02.732 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 02:37:02 np0005548731 nova_compute[232433]: 2025-12-06 07:37:02.972 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:37:03 np0005548731 nova_compute[232433]: 2025-12-06 07:37:03.266 232437 DEBUG oslo_concurrency.lockutils [req-3ccc96d7-22eb-437e-bbc1-ef2692ab556f req-f12362ad-f5a3-4cb8-986a-389aec7db900 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-3d4b97f9-b8c3-4764-87f9-4006968fecd4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 02:37:03 np0005548731 nova_compute[232433]: 2025-12-06 07:37:03.266 232437 DEBUG nova.compute.manager [req-3ccc96d7-22eb-437e-bbc1-ef2692ab556f req-f12362ad-f5a3-4cb8-986a-389aec7db900 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: b4be0ef8-945f-47a1-a3a8-5962f1e692e5] Received event network-changed-657a8a48-7a74-4b37-a294-df0b2fd9332c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:37:03 np0005548731 nova_compute[232433]: 2025-12-06 07:37:03.266 232437 DEBUG nova.compute.manager [req-3ccc96d7-22eb-437e-bbc1-ef2692ab556f req-f12362ad-f5a3-4cb8-986a-389aec7db900 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: b4be0ef8-945f-47a1-a3a8-5962f1e692e5] Refreshing instance network info cache due to event network-changed-657a8a48-7a74-4b37-a294-df0b2fd9332c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  6 02:37:03 np0005548731 nova_compute[232433]: 2025-12-06 07:37:03.267 232437 DEBUG oslo_concurrency.lockutils [req-3ccc96d7-22eb-437e-bbc1-ef2692ab556f req-f12362ad-f5a3-4cb8-986a-389aec7db900 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-b4be0ef8-945f-47a1-a3a8-5962f1e692e5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 02:37:03 np0005548731 nova_compute[232433]: 2025-12-06 07:37:03.267 232437 DEBUG oslo_concurrency.lockutils [req-3ccc96d7-22eb-437e-bbc1-ef2692ab556f req-f12362ad-f5a3-4cb8-986a-389aec7db900 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-b4be0ef8-945f-47a1-a3a8-5962f1e692e5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 02:37:03 np0005548731 nova_compute[232433]: 2025-12-06 07:37:03.267 232437 DEBUG nova.network.neutron [req-3ccc96d7-22eb-437e-bbc1-ef2692ab556f req-f12362ad-f5a3-4cb8-986a-389aec7db900 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: b4be0ef8-945f-47a1-a3a8-5962f1e692e5] Refreshing network info cache for port 657a8a48-7a74-4b37-a294-df0b2fd9332c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  6 02:37:03 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:37:03 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:37:03 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:37:03 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:37:03.902 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:37:04 np0005548731 nova_compute[232433]: 2025-12-06 07:37:04.142 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:37:04 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:37:04 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 02:37:04 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:37:04.735 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 02:37:05 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:37:05 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 02:37:05 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:37:05.905 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 02:37:06 np0005548731 nova_compute[232433]: 2025-12-06 07:37:06.109 232437 DEBUG nova.network.neutron [None req-04a52976-2ff7-415f-a8f6-fc3b9e0cbc13 7960f8f407754e5d83bee65690a6b772 21cd37bffa864aaebe5c734ba468f466 - - default default] [instance: ffee41ca-ba3b-4787-8435-b3903ffb29a9] Successfully updated port: 500715f2-2222-4003-b268-0369959f5e25 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Dec  6 02:37:06 np0005548731 nova_compute[232433]: 2025-12-06 07:37:06.292 232437 DEBUG oslo_concurrency.lockutils [None req-04a52976-2ff7-415f-a8f6-fc3b9e0cbc13 7960f8f407754e5d83bee65690a6b772 21cd37bffa864aaebe5c734ba468f466 - - default default] Acquiring lock "refresh_cache-ffee41ca-ba3b-4787-8435-b3903ffb29a9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 02:37:06 np0005548731 nova_compute[232433]: 2025-12-06 07:37:06.292 232437 DEBUG oslo_concurrency.lockutils [None req-04a52976-2ff7-415f-a8f6-fc3b9e0cbc13 7960f8f407754e5d83bee65690a6b772 21cd37bffa864aaebe5c734ba468f466 - - default default] Acquired lock "refresh_cache-ffee41ca-ba3b-4787-8435-b3903ffb29a9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 02:37:06 np0005548731 nova_compute[232433]: 2025-12-06 07:37:06.293 232437 DEBUG nova.network.neutron [None req-04a52976-2ff7-415f-a8f6-fc3b9e0cbc13 7960f8f407754e5d83bee65690a6b772 21cd37bffa864aaebe5c734ba468f466 - - default default] [instance: ffee41ca-ba3b-4787-8435-b3903ffb29a9] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec  6 02:37:06 np0005548731 nova_compute[232433]: 2025-12-06 07:37:06.546 232437 DEBUG nova.compute.manager [req-9bf058ec-60f5-4d00-b524-b689466d1c02 req-4a5e01c8-40b3-4b1e-8419-fb7b8abdab23 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: ffee41ca-ba3b-4787-8435-b3903ffb29a9] Received event network-changed-500715f2-2222-4003-b268-0369959f5e25 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:37:06 np0005548731 nova_compute[232433]: 2025-12-06 07:37:06.546 232437 DEBUG nova.compute.manager [req-9bf058ec-60f5-4d00-b524-b689466d1c02 req-4a5e01c8-40b3-4b1e-8419-fb7b8abdab23 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: ffee41ca-ba3b-4787-8435-b3903ffb29a9] Refreshing instance network info cache due to event network-changed-500715f2-2222-4003-b268-0369959f5e25. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  6 02:37:06 np0005548731 nova_compute[232433]: 2025-12-06 07:37:06.546 232437 DEBUG oslo_concurrency.lockutils [req-9bf058ec-60f5-4d00-b524-b689466d1c02 req-4a5e01c8-40b3-4b1e-8419-fb7b8abdab23 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-ffee41ca-ba3b-4787-8435-b3903ffb29a9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 02:37:06 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:37:06 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:37:06 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:37:06.738 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:37:06 np0005548731 nova_compute[232433]: 2025-12-06 07:37:06.824 232437 DEBUG nova.network.neutron [None req-04a52976-2ff7-415f-a8f6-fc3b9e0cbc13 7960f8f407754e5d83bee65690a6b772 21cd37bffa864aaebe5c734ba468f466 - - default default] [instance: ffee41ca-ba3b-4787-8435-b3903ffb29a9] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec  6 02:37:07 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:37:07 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:37:07 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:37:07.907 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:37:08 np0005548731 nova_compute[232433]: 2025-12-06 07:37:08.009 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:37:08 np0005548731 nova_compute[232433]: 2025-12-06 07:37:08.169 232437 DEBUG nova.network.neutron [req-3ccc96d7-22eb-437e-bbc1-ef2692ab556f req-f12362ad-f5a3-4cb8-986a-389aec7db900 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: b4be0ef8-945f-47a1-a3a8-5962f1e692e5] Updated VIF entry in instance network info cache for port 657a8a48-7a74-4b37-a294-df0b2fd9332c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  6 02:37:08 np0005548731 nova_compute[232433]: 2025-12-06 07:37:08.170 232437 DEBUG nova.network.neutron [req-3ccc96d7-22eb-437e-bbc1-ef2692ab556f req-f12362ad-f5a3-4cb8-986a-389aec7db900 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: b4be0ef8-945f-47a1-a3a8-5962f1e692e5] Updating instance_info_cache with network_info: [{"id": "657a8a48-7a74-4b37-a294-df0b2fd9332c", "address": "fa:16:3e:f4:fe:4a", "network": {"id": "5bb49e8a-b939-4c79-851c-62c634be0272", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-50482264-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.187", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "833f4cf9f5a64b2ab94c3bf330353a31", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap657a8a48-7a", "ovs_interfaceid": "657a8a48-7a74-4b37-a294-df0b2fd9332c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 02:37:08 np0005548731 nova_compute[232433]: 2025-12-06 07:37:08.383 232437 DEBUG oslo_concurrency.lockutils [req-3ccc96d7-22eb-437e-bbc1-ef2692ab556f req-f12362ad-f5a3-4cb8-986a-389aec7db900 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-b4be0ef8-945f-47a1-a3a8-5962f1e692e5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 02:37:08 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:37:08 np0005548731 nova_compute[232433]: 2025-12-06 07:37:08.578 232437 DEBUG nova.network.neutron [None req-04a52976-2ff7-415f-a8f6-fc3b9e0cbc13 7960f8f407754e5d83bee65690a6b772 21cd37bffa864aaebe5c734ba468f466 - - default default] [instance: ffee41ca-ba3b-4787-8435-b3903ffb29a9] Updating instance_info_cache with network_info: [{"id": "500715f2-2222-4003-b268-0369959f5e25", "address": "fa:16:3e:d3:0b:9b", "network": {"id": "e4ed5caf-995c-4d95-a87b-6b8a5de780fa", "bridge": "br-int", "label": "tempest-ServerActionsV293TestJSON-599749084-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "21cd37bffa864aaebe5c734ba468f466", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap500715f2-22", "ovs_interfaceid": "500715f2-2222-4003-b268-0369959f5e25", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 02:37:08 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:37:08 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:37:08 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:37:08.740 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:37:08 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  6 02:37:08 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/225869628' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  6 02:37:08 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  6 02:37:08 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/225869628' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  6 02:37:09 np0005548731 nova_compute[232433]: 2025-12-06 07:37:09.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:37:09 np0005548731 nova_compute[232433]: 2025-12-06 07:37:09.104 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  6 02:37:09 np0005548731 nova_compute[232433]: 2025-12-06 07:37:09.145 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:37:09 np0005548731 nova_compute[232433]: 2025-12-06 07:37:09.447 232437 DEBUG oslo_concurrency.lockutils [None req-04a52976-2ff7-415f-a8f6-fc3b9e0cbc13 7960f8f407754e5d83bee65690a6b772 21cd37bffa864aaebe5c734ba468f466 - - default default] Releasing lock "refresh_cache-ffee41ca-ba3b-4787-8435-b3903ffb29a9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 02:37:09 np0005548731 nova_compute[232433]: 2025-12-06 07:37:09.448 232437 DEBUG nova.compute.manager [None req-04a52976-2ff7-415f-a8f6-fc3b9e0cbc13 7960f8f407754e5d83bee65690a6b772 21cd37bffa864aaebe5c734ba468f466 - - default default] [instance: ffee41ca-ba3b-4787-8435-b3903ffb29a9] Instance network_info: |[{"id": "500715f2-2222-4003-b268-0369959f5e25", "address": "fa:16:3e:d3:0b:9b", "network": {"id": "e4ed5caf-995c-4d95-a87b-6b8a5de780fa", "bridge": "br-int", "label": "tempest-ServerActionsV293TestJSON-599749084-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "21cd37bffa864aaebe5c734ba468f466", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap500715f2-22", "ovs_interfaceid": "500715f2-2222-4003-b268-0369959f5e25", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Dec  6 02:37:09 np0005548731 nova_compute[232433]: 2025-12-06 07:37:09.448 232437 DEBUG oslo_concurrency.lockutils [req-9bf058ec-60f5-4d00-b524-b689466d1c02 req-4a5e01c8-40b3-4b1e-8419-fb7b8abdab23 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-ffee41ca-ba3b-4787-8435-b3903ffb29a9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 02:37:09 np0005548731 nova_compute[232433]: 2025-12-06 07:37:09.449 232437 DEBUG nova.network.neutron [req-9bf058ec-60f5-4d00-b524-b689466d1c02 req-4a5e01c8-40b3-4b1e-8419-fb7b8abdab23 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: ffee41ca-ba3b-4787-8435-b3903ffb29a9] Refreshing network info cache for port 500715f2-2222-4003-b268-0369959f5e25 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  6 02:37:09 np0005548731 nova_compute[232433]: 2025-12-06 07:37:09.451 232437 DEBUG nova.virt.libvirt.driver [None req-04a52976-2ff7-415f-a8f6-fc3b9e0cbc13 7960f8f407754e5d83bee65690a6b772 21cd37bffa864aaebe5c734ba468f466 - - default default] [instance: ffee41ca-ba3b-4787-8435-b3903ffb29a9] Start _get_guest_xml network_info=[{"id": "500715f2-2222-4003-b268-0369959f5e25", "address": "fa:16:3e:d3:0b:9b", "network": {"id": "e4ed5caf-995c-4d95-a87b-6b8a5de780fa", "bridge": "br-int", "label": "tempest-ServerActionsV293TestJSON-599749084-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "21cd37bffa864aaebe5c734ba468f466", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap500715f2-22", "ovs_interfaceid": "500715f2-2222-4003-b268-0369959f5e25", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'guest_format': None, 'device_type': 'disk', 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-a032ab5d-3621-4ab1-82c3-2e8c7d4eeb9c', 'hosts': ['192.168.122.100', '192.168.122.102', '192.168.122.101'], 'ports': ['6789', '6789', '6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': 'a032ab5d-3621-4ab1-82c3-2e8c7d4eeb9c', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': 'ffee41ca-ba3b-4787-8435-b3903ffb29a9', 'attached_at': '', 'detached_at': '', 'volume_id': 'a032ab5d-3621-4ab1-82c3-2e8c7d4eeb9c', 'serial': 'a032ab5d-3621-4ab1-82c3-2e8c7d4eeb9c'}, 'disk_bus': 'virtio', 'boot_index': 0, 'delete_on_termination': True, 'mount_device': '/dev/vda', 'attachment_id': 'a904617c-462d-4f96-aaee-4174b3538c6e', 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec  6 02:37:09 np0005548731 nova_compute[232433]: 2025-12-06 07:37:09.455 232437 WARNING nova.virt.libvirt.driver [None req-04a52976-2ff7-415f-a8f6-fc3b9e0cbc13 7960f8f407754e5d83bee65690a6b772 21cd37bffa864aaebe5c734ba468f466 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  6 02:37:09 np0005548731 nova_compute[232433]: 2025-12-06 07:37:09.463 232437 DEBUG nova.virt.libvirt.host [None req-04a52976-2ff7-415f-a8f6-fc3b9e0cbc13 7960f8f407754e5d83bee65690a6b772 21cd37bffa864aaebe5c734ba468f466 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec  6 02:37:09 np0005548731 nova_compute[232433]: 2025-12-06 07:37:09.464 232437 DEBUG nova.virt.libvirt.host [None req-04a52976-2ff7-415f-a8f6-fc3b9e0cbc13 7960f8f407754e5d83bee65690a6b772 21cd37bffa864aaebe5c734ba468f466 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec  6 02:37:09 np0005548731 nova_compute[232433]: 2025-12-06 07:37:09.472 232437 DEBUG nova.virt.libvirt.host [None req-04a52976-2ff7-415f-a8f6-fc3b9e0cbc13 7960f8f407754e5d83bee65690a6b772 21cd37bffa864aaebe5c734ba468f466 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec  6 02:37:09 np0005548731 nova_compute[232433]: 2025-12-06 07:37:09.473 232437 DEBUG nova.virt.libvirt.host [None req-04a52976-2ff7-415f-a8f6-fc3b9e0cbc13 7960f8f407754e5d83bee65690a6b772 21cd37bffa864aaebe5c734ba468f466 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec  6 02:37:09 np0005548731 nova_compute[232433]: 2025-12-06 07:37:09.474 232437 DEBUG nova.virt.libvirt.driver [None req-04a52976-2ff7-415f-a8f6-fc3b9e0cbc13 7960f8f407754e5d83bee65690a6b772 21cd37bffa864aaebe5c734ba468f466 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec  6 02:37:09 np0005548731 nova_compute[232433]: 2025-12-06 07:37:09.474 232437 DEBUG nova.virt.hardware [None req-04a52976-2ff7-415f-a8f6-fc3b9e0cbc13 7960f8f407754e5d83bee65690a6b772 21cd37bffa864aaebe5c734ba468f466 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-06T06:56:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='25848a18-11d9-4f11-80b5-5d005675c76d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec  6 02:37:09 np0005548731 nova_compute[232433]: 2025-12-06 07:37:09.475 232437 DEBUG nova.virt.hardware [None req-04a52976-2ff7-415f-a8f6-fc3b9e0cbc13 7960f8f407754e5d83bee65690a6b772 21cd37bffa864aaebe5c734ba468f466 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec  6 02:37:09 np0005548731 nova_compute[232433]: 2025-12-06 07:37:09.475 232437 DEBUG nova.virt.hardware [None req-04a52976-2ff7-415f-a8f6-fc3b9e0cbc13 7960f8f407754e5d83bee65690a6b772 21cd37bffa864aaebe5c734ba468f466 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec  6 02:37:09 np0005548731 nova_compute[232433]: 2025-12-06 07:37:09.475 232437 DEBUG nova.virt.hardware [None req-04a52976-2ff7-415f-a8f6-fc3b9e0cbc13 7960f8f407754e5d83bee65690a6b772 21cd37bffa864aaebe5c734ba468f466 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec  6 02:37:09 np0005548731 nova_compute[232433]: 2025-12-06 07:37:09.476 232437 DEBUG nova.virt.hardware [None req-04a52976-2ff7-415f-a8f6-fc3b9e0cbc13 7960f8f407754e5d83bee65690a6b772 21cd37bffa864aaebe5c734ba468f466 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec  6 02:37:09 np0005548731 nova_compute[232433]: 2025-12-06 07:37:09.476 232437 DEBUG nova.virt.hardware [None req-04a52976-2ff7-415f-a8f6-fc3b9e0cbc13 7960f8f407754e5d83bee65690a6b772 21cd37bffa864aaebe5c734ba468f466 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec  6 02:37:09 np0005548731 nova_compute[232433]: 2025-12-06 07:37:09.476 232437 DEBUG nova.virt.hardware [None req-04a52976-2ff7-415f-a8f6-fc3b9e0cbc13 7960f8f407754e5d83bee65690a6b772 21cd37bffa864aaebe5c734ba468f466 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec  6 02:37:09 np0005548731 nova_compute[232433]: 2025-12-06 07:37:09.476 232437 DEBUG nova.virt.hardware [None req-04a52976-2ff7-415f-a8f6-fc3b9e0cbc13 7960f8f407754e5d83bee65690a6b772 21cd37bffa864aaebe5c734ba468f466 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec  6 02:37:09 np0005548731 nova_compute[232433]: 2025-12-06 07:37:09.476 232437 DEBUG nova.virt.hardware [None req-04a52976-2ff7-415f-a8f6-fc3b9e0cbc13 7960f8f407754e5d83bee65690a6b772 21cd37bffa864aaebe5c734ba468f466 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec  6 02:37:09 np0005548731 nova_compute[232433]: 2025-12-06 07:37:09.477 232437 DEBUG nova.virt.hardware [None req-04a52976-2ff7-415f-a8f6-fc3b9e0cbc13 7960f8f407754e5d83bee65690a6b772 21cd37bffa864aaebe5c734ba468f466 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec  6 02:37:09 np0005548731 nova_compute[232433]: 2025-12-06 07:37:09.477 232437 DEBUG nova.virt.hardware [None req-04a52976-2ff7-415f-a8f6-fc3b9e0cbc13 7960f8f407754e5d83bee65690a6b772 21cd37bffa864aaebe5c734ba468f466 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec  6 02:37:09 np0005548731 nova_compute[232433]: 2025-12-06 07:37:09.504 232437 DEBUG nova.storage.rbd_utils [None req-04a52976-2ff7-415f-a8f6-fc3b9e0cbc13 7960f8f407754e5d83bee65690a6b772 21cd37bffa864aaebe5c734ba468f466 - - default default] rbd image ffee41ca-ba3b-4787-8435-b3903ffb29a9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:37:09 np0005548731 nova_compute[232433]: 2025-12-06 07:37:09.508 232437 DEBUG oslo_concurrency.processutils [None req-04a52976-2ff7-415f-a8f6-fc3b9e0cbc13 7960f8f407754e5d83bee65690a6b772 21cd37bffa864aaebe5c734ba468f466 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:37:09 np0005548731 nova_compute[232433]: 2025-12-06 07:37:09.581 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Acquiring lock "refresh_cache-b4be0ef8-945f-47a1-a3a8-5962f1e692e5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 02:37:09 np0005548731 nova_compute[232433]: 2025-12-06 07:37:09.582 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Acquired lock "refresh_cache-b4be0ef8-945f-47a1-a3a8-5962f1e692e5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 02:37:09 np0005548731 nova_compute[232433]: 2025-12-06 07:37:09.582 232437 DEBUG nova.network.neutron [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] [instance: b4be0ef8-945f-47a1-a3a8-5962f1e692e5] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Dec  6 02:37:09 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:37:09 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 02:37:09 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:37:09.909 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 02:37:09 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  6 02:37:09 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1594552602' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  6 02:37:10 np0005548731 nova_compute[232433]: 2025-12-06 07:37:10.031 232437 DEBUG oslo_concurrency.processutils [None req-04a52976-2ff7-415f-a8f6-fc3b9e0cbc13 7960f8f407754e5d83bee65690a6b772 21cd37bffa864aaebe5c734ba468f466 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.524s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:37:10 np0005548731 nova_compute[232433]: 2025-12-06 07:37:10.189 232437 DEBUG nova.virt.libvirt.vif [None req-04a52976-2ff7-415f-a8f6-fc3b9e0cbc13 7960f8f407754e5d83bee65690a6b772 21cd37bffa864aaebe5c734ba468f466 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T07:36:57Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-ServerActionsV293TestJSON-server-1645861904',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serveractionsv293testjson-server-1645861904',id=133,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBC09UQYuvXJm1Rko+1Enhuo3SSbAWagIRaWgjbl1eMQTI/iz1qciOUWudPgubxE+dfDGSMSVZfhJ7I1j6mbbf0xS8CEMgr5ptTVwPCXCiAWETgl5ZTVu74pyI+Gi8rzOng==',key_name='tempest-keypair-2069448021',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='21cd37bffa864aaebe5c734ba468f466',ramdisk_id='',reservation_id='r-xs37ll8f',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-ServerActionsV293TestJSON-1595119527',owner_user_name='tempest-ServerActionsV293TestJSON-1595119527-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:36:59Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='7960f8f407754e5d83bee65690a6b772',uuid=ffee41ca-ba3b-4787-8435-b3903ffb29a9,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "500715f2-2222-4003-b268-0369959f5e25", "address": "fa:16:3e:d3:0b:9b", "network": {"id": "e4ed5caf-995c-4d95-a87b-6b8a5de780fa", "bridge": "br-int", "label": "tempest-ServerActionsV293TestJSON-599749084-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "21cd37bffa864aaebe5c734ba468f466", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap500715f2-22", "ovs_interfaceid": "500715f2-2222-4003-b268-0369959f5e25", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec  6 02:37:10 np0005548731 nova_compute[232433]: 2025-12-06 07:37:10.189 232437 DEBUG nova.network.os_vif_util [None req-04a52976-2ff7-415f-a8f6-fc3b9e0cbc13 7960f8f407754e5d83bee65690a6b772 21cd37bffa864aaebe5c734ba468f466 - - default default] Converting VIF {"id": "500715f2-2222-4003-b268-0369959f5e25", "address": "fa:16:3e:d3:0b:9b", "network": {"id": "e4ed5caf-995c-4d95-a87b-6b8a5de780fa", "bridge": "br-int", "label": "tempest-ServerActionsV293TestJSON-599749084-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "21cd37bffa864aaebe5c734ba468f466", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap500715f2-22", "ovs_interfaceid": "500715f2-2222-4003-b268-0369959f5e25", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 02:37:10 np0005548731 nova_compute[232433]: 2025-12-06 07:37:10.190 232437 DEBUG nova.network.os_vif_util [None req-04a52976-2ff7-415f-a8f6-fc3b9e0cbc13 7960f8f407754e5d83bee65690a6b772 21cd37bffa864aaebe5c734ba468f466 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d3:0b:9b,bridge_name='br-int',has_traffic_filtering=True,id=500715f2-2222-4003-b268-0369959f5e25,network=Network(e4ed5caf-995c-4d95-a87b-6b8a5de780fa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap500715f2-22') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 02:37:10 np0005548731 nova_compute[232433]: 2025-12-06 07:37:10.191 232437 DEBUG nova.objects.instance [None req-04a52976-2ff7-415f-a8f6-fc3b9e0cbc13 7960f8f407754e5d83bee65690a6b772 21cd37bffa864aaebe5c734ba468f466 - - default default] Lazy-loading 'pci_devices' on Instance uuid ffee41ca-ba3b-4787-8435-b3903ffb29a9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:37:10 np0005548731 nova_compute[232433]: 2025-12-06 07:37:10.278 232437 DEBUG nova.virt.libvirt.driver [None req-04a52976-2ff7-415f-a8f6-fc3b9e0cbc13 7960f8f407754e5d83bee65690a6b772 21cd37bffa864aaebe5c734ba468f466 - - default default] [instance: ffee41ca-ba3b-4787-8435-b3903ffb29a9] End _get_guest_xml xml=<domain type="kvm">
Dec  6 02:37:10 np0005548731 nova_compute[232433]:  <uuid>ffee41ca-ba3b-4787-8435-b3903ffb29a9</uuid>
Dec  6 02:37:10 np0005548731 nova_compute[232433]:  <name>instance-00000085</name>
Dec  6 02:37:10 np0005548731 nova_compute[232433]:  <memory>131072</memory>
Dec  6 02:37:10 np0005548731 nova_compute[232433]:  <vcpu>1</vcpu>
Dec  6 02:37:10 np0005548731 nova_compute[232433]:  <metadata>
Dec  6 02:37:10 np0005548731 nova_compute[232433]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec  6 02:37:10 np0005548731 nova_compute[232433]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec  6 02:37:10 np0005548731 nova_compute[232433]:      <nova:name>tempest-ServerActionsV293TestJSON-server-1645861904</nova:name>
Dec  6 02:37:10 np0005548731 nova_compute[232433]:      <nova:creationTime>2025-12-06 07:37:09</nova:creationTime>
Dec  6 02:37:10 np0005548731 nova_compute[232433]:      <nova:flavor name="m1.nano">
Dec  6 02:37:10 np0005548731 nova_compute[232433]:        <nova:memory>128</nova:memory>
Dec  6 02:37:10 np0005548731 nova_compute[232433]:        <nova:disk>1</nova:disk>
Dec  6 02:37:10 np0005548731 nova_compute[232433]:        <nova:swap>0</nova:swap>
Dec  6 02:37:10 np0005548731 nova_compute[232433]:        <nova:ephemeral>0</nova:ephemeral>
Dec  6 02:37:10 np0005548731 nova_compute[232433]:        <nova:vcpus>1</nova:vcpus>
Dec  6 02:37:10 np0005548731 nova_compute[232433]:      </nova:flavor>
Dec  6 02:37:10 np0005548731 nova_compute[232433]:      <nova:owner>
Dec  6 02:37:10 np0005548731 nova_compute[232433]:        <nova:user uuid="7960f8f407754e5d83bee65690a6b772">tempest-ServerActionsV293TestJSON-1595119527-project-member</nova:user>
Dec  6 02:37:10 np0005548731 nova_compute[232433]:        <nova:project uuid="21cd37bffa864aaebe5c734ba468f466">tempest-ServerActionsV293TestJSON-1595119527</nova:project>
Dec  6 02:37:10 np0005548731 nova_compute[232433]:      </nova:owner>
Dec  6 02:37:10 np0005548731 nova_compute[232433]:      <nova:ports>
Dec  6 02:37:10 np0005548731 nova_compute[232433]:        <nova:port uuid="500715f2-2222-4003-b268-0369959f5e25">
Dec  6 02:37:10 np0005548731 nova_compute[232433]:          <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Dec  6 02:37:10 np0005548731 nova_compute[232433]:        </nova:port>
Dec  6 02:37:10 np0005548731 nova_compute[232433]:      </nova:ports>
Dec  6 02:37:10 np0005548731 nova_compute[232433]:    </nova:instance>
Dec  6 02:37:10 np0005548731 nova_compute[232433]:  </metadata>
Dec  6 02:37:10 np0005548731 nova_compute[232433]:  <sysinfo type="smbios">
Dec  6 02:37:10 np0005548731 nova_compute[232433]:    <system>
Dec  6 02:37:10 np0005548731 nova_compute[232433]:      <entry name="manufacturer">RDO</entry>
Dec  6 02:37:10 np0005548731 nova_compute[232433]:      <entry name="product">OpenStack Compute</entry>
Dec  6 02:37:10 np0005548731 nova_compute[232433]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec  6 02:37:10 np0005548731 nova_compute[232433]:      <entry name="serial">ffee41ca-ba3b-4787-8435-b3903ffb29a9</entry>
Dec  6 02:37:10 np0005548731 nova_compute[232433]:      <entry name="uuid">ffee41ca-ba3b-4787-8435-b3903ffb29a9</entry>
Dec  6 02:37:10 np0005548731 nova_compute[232433]:      <entry name="family">Virtual Machine</entry>
Dec  6 02:37:10 np0005548731 nova_compute[232433]:    </system>
Dec  6 02:37:10 np0005548731 nova_compute[232433]:  </sysinfo>
Dec  6 02:37:10 np0005548731 nova_compute[232433]:  <os>
Dec  6 02:37:10 np0005548731 nova_compute[232433]:    <type arch="x86_64" machine="q35">hvm</type>
Dec  6 02:37:10 np0005548731 nova_compute[232433]:    <boot dev="hd"/>
Dec  6 02:37:10 np0005548731 nova_compute[232433]:    <smbios mode="sysinfo"/>
Dec  6 02:37:10 np0005548731 nova_compute[232433]:  </os>
Dec  6 02:37:10 np0005548731 nova_compute[232433]:  <features>
Dec  6 02:37:10 np0005548731 nova_compute[232433]:    <acpi/>
Dec  6 02:37:10 np0005548731 nova_compute[232433]:    <apic/>
Dec  6 02:37:10 np0005548731 nova_compute[232433]:    <vmcoreinfo/>
Dec  6 02:37:10 np0005548731 nova_compute[232433]:  </features>
Dec  6 02:37:10 np0005548731 nova_compute[232433]:  <clock offset="utc">
Dec  6 02:37:10 np0005548731 nova_compute[232433]:    <timer name="pit" tickpolicy="delay"/>
Dec  6 02:37:10 np0005548731 nova_compute[232433]:    <timer name="rtc" tickpolicy="catchup"/>
Dec  6 02:37:10 np0005548731 nova_compute[232433]:    <timer name="hpet" present="no"/>
Dec  6 02:37:10 np0005548731 nova_compute[232433]:  </clock>
Dec  6 02:37:10 np0005548731 nova_compute[232433]:  <cpu mode="custom" match="exact">
Dec  6 02:37:10 np0005548731 nova_compute[232433]:    <model>Nehalem</model>
Dec  6 02:37:10 np0005548731 nova_compute[232433]:    <topology sockets="1" cores="1" threads="1"/>
Dec  6 02:37:10 np0005548731 nova_compute[232433]:  </cpu>
Dec  6 02:37:10 np0005548731 nova_compute[232433]:  <devices>
Dec  6 02:37:10 np0005548731 nova_compute[232433]:    <disk type="network" device="cdrom">
Dec  6 02:37:10 np0005548731 nova_compute[232433]:      <driver type="raw" cache="none"/>
Dec  6 02:37:10 np0005548731 nova_compute[232433]:      <source protocol="rbd" name="vms/ffee41ca-ba3b-4787-8435-b3903ffb29a9_disk.config">
Dec  6 02:37:10 np0005548731 nova_compute[232433]:        <host name="192.168.122.100" port="6789"/>
Dec  6 02:37:10 np0005548731 nova_compute[232433]:        <host name="192.168.122.102" port="6789"/>
Dec  6 02:37:10 np0005548731 nova_compute[232433]:        <host name="192.168.122.101" port="6789"/>
Dec  6 02:37:10 np0005548731 nova_compute[232433]:      </source>
Dec  6 02:37:10 np0005548731 nova_compute[232433]:      <auth username="openstack">
Dec  6 02:37:10 np0005548731 nova_compute[232433]:        <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec  6 02:37:10 np0005548731 nova_compute[232433]:      </auth>
Dec  6 02:37:10 np0005548731 nova_compute[232433]:      <target dev="sda" bus="sata"/>
Dec  6 02:37:10 np0005548731 nova_compute[232433]:    </disk>
Dec  6 02:37:10 np0005548731 nova_compute[232433]:    <disk type="network" device="disk">
Dec  6 02:37:10 np0005548731 nova_compute[232433]:      <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Dec  6 02:37:10 np0005548731 nova_compute[232433]:      <source protocol="rbd" name="volumes/volume-a032ab5d-3621-4ab1-82c3-2e8c7d4eeb9c">
Dec  6 02:37:10 np0005548731 nova_compute[232433]:        <host name="192.168.122.100" port="6789"/>
Dec  6 02:37:10 np0005548731 nova_compute[232433]:        <host name="192.168.122.102" port="6789"/>
Dec  6 02:37:10 np0005548731 nova_compute[232433]:        <host name="192.168.122.101" port="6789"/>
Dec  6 02:37:10 np0005548731 nova_compute[232433]:      </source>
Dec  6 02:37:10 np0005548731 nova_compute[232433]:      <auth username="openstack">
Dec  6 02:37:10 np0005548731 nova_compute[232433]:        <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec  6 02:37:10 np0005548731 nova_compute[232433]:      </auth>
Dec  6 02:37:10 np0005548731 nova_compute[232433]:      <target dev="vda" bus="virtio"/>
Dec  6 02:37:10 np0005548731 nova_compute[232433]:      <serial>a032ab5d-3621-4ab1-82c3-2e8c7d4eeb9c</serial>
Dec  6 02:37:10 np0005548731 nova_compute[232433]:    </disk>
Dec  6 02:37:10 np0005548731 nova_compute[232433]:    <interface type="ethernet">
Dec  6 02:37:10 np0005548731 nova_compute[232433]:      <mac address="fa:16:3e:d3:0b:9b"/>
Dec  6 02:37:10 np0005548731 nova_compute[232433]:      <model type="virtio"/>
Dec  6 02:37:10 np0005548731 nova_compute[232433]:      <driver name="vhost" rx_queue_size="512"/>
Dec  6 02:37:10 np0005548731 nova_compute[232433]:      <mtu size="1442"/>
Dec  6 02:37:10 np0005548731 nova_compute[232433]:      <target dev="tap500715f2-22"/>
Dec  6 02:37:10 np0005548731 nova_compute[232433]:    </interface>
Dec  6 02:37:10 np0005548731 nova_compute[232433]:    <serial type="pty">
Dec  6 02:37:10 np0005548731 nova_compute[232433]:      <log file="/var/lib/nova/instances/ffee41ca-ba3b-4787-8435-b3903ffb29a9/console.log" append="off"/>
Dec  6 02:37:10 np0005548731 nova_compute[232433]:    </serial>
Dec  6 02:37:10 np0005548731 nova_compute[232433]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec  6 02:37:10 np0005548731 nova_compute[232433]:    <video>
Dec  6 02:37:10 np0005548731 nova_compute[232433]:      <model type="virtio"/>
Dec  6 02:37:10 np0005548731 nova_compute[232433]:    </video>
Dec  6 02:37:10 np0005548731 nova_compute[232433]:    <input type="tablet" bus="usb"/>
Dec  6 02:37:10 np0005548731 nova_compute[232433]:    <rng model="virtio">
Dec  6 02:37:10 np0005548731 nova_compute[232433]:      <backend model="random">/dev/urandom</backend>
Dec  6 02:37:10 np0005548731 nova_compute[232433]:    </rng>
Dec  6 02:37:10 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root"/>
Dec  6 02:37:10 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:37:10 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:37:10 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:37:10 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:37:10 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:37:10 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:37:10 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:37:10 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:37:10 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:37:10 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:37:10 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:37:10 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:37:10 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:37:10 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:37:10 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:37:10 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:37:10 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:37:10 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:37:10 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:37:10 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:37:10 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:37:10 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:37:10 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:37:10 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:37:10 np0005548731 nova_compute[232433]:    <controller type="usb" index="0"/>
Dec  6 02:37:10 np0005548731 nova_compute[232433]:    <memballoon model="virtio">
Dec  6 02:37:10 np0005548731 nova_compute[232433]:      <stats period="10"/>
Dec  6 02:37:10 np0005548731 nova_compute[232433]:    </memballoon>
Dec  6 02:37:10 np0005548731 nova_compute[232433]:  </devices>
Dec  6 02:37:10 np0005548731 nova_compute[232433]: </domain>
Dec  6 02:37:10 np0005548731 nova_compute[232433]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec  6 02:37:10 np0005548731 nova_compute[232433]: 2025-12-06 07:37:10.279 232437 DEBUG nova.compute.manager [None req-04a52976-2ff7-415f-a8f6-fc3b9e0cbc13 7960f8f407754e5d83bee65690a6b772 21cd37bffa864aaebe5c734ba468f466 - - default default] [instance: ffee41ca-ba3b-4787-8435-b3903ffb29a9] Preparing to wait for external event network-vif-plugged-500715f2-2222-4003-b268-0369959f5e25 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec  6 02:37:10 np0005548731 nova_compute[232433]: 2025-12-06 07:37:10.279 232437 DEBUG oslo_concurrency.lockutils [None req-04a52976-2ff7-415f-a8f6-fc3b9e0cbc13 7960f8f407754e5d83bee65690a6b772 21cd37bffa864aaebe5c734ba468f466 - - default default] Acquiring lock "ffee41ca-ba3b-4787-8435-b3903ffb29a9-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:37:10 np0005548731 nova_compute[232433]: 2025-12-06 07:37:10.279 232437 DEBUG oslo_concurrency.lockutils [None req-04a52976-2ff7-415f-a8f6-fc3b9e0cbc13 7960f8f407754e5d83bee65690a6b772 21cd37bffa864aaebe5c734ba468f466 - - default default] Lock "ffee41ca-ba3b-4787-8435-b3903ffb29a9-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:37:10 np0005548731 nova_compute[232433]: 2025-12-06 07:37:10.279 232437 DEBUG oslo_concurrency.lockutils [None req-04a52976-2ff7-415f-a8f6-fc3b9e0cbc13 7960f8f407754e5d83bee65690a6b772 21cd37bffa864aaebe5c734ba468f466 - - default default] Lock "ffee41ca-ba3b-4787-8435-b3903ffb29a9-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:37:10 np0005548731 nova_compute[232433]: 2025-12-06 07:37:10.280 232437 DEBUG nova.virt.libvirt.vif [None req-04a52976-2ff7-415f-a8f6-fc3b9e0cbc13 7960f8f407754e5d83bee65690a6b772 21cd37bffa864aaebe5c734ba468f466 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T07:36:57Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-ServerActionsV293TestJSON-server-1645861904',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serveractionsv293testjson-server-1645861904',id=133,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBC09UQYuvXJm1Rko+1Enhuo3SSbAWagIRaWgjbl1eMQTI/iz1qciOUWudPgubxE+dfDGSMSVZfhJ7I1j6mbbf0xS8CEMgr5ptTVwPCXCiAWETgl5ZTVu74pyI+Gi8rzOng==',key_name='tempest-keypair-2069448021',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='21cd37bffa864aaebe5c734ba468f466',ramdisk_id='',reservation_id='r-xs37ll8f',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-ServerActionsV293TestJSON-1595119527',owner_user_name='tempest-ServerActionsV293TestJSON-1595119527-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:36:59Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='7960f8f407754e5d83bee65690a6b772',uuid=ffee41ca-ba3b-4787-8435-b3903ffb29a9,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "500715f2-2222-4003-b268-0369959f5e25", "address": "fa:16:3e:d3:0b:9b", "network": {"id": "e4ed5caf-995c-4d95-a87b-6b8a5de780fa", "bridge": "br-int", "label": "tempest-ServerActionsV293TestJSON-599749084-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "21cd37bffa864aaebe5c734ba468f466", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap500715f2-22", "ovs_interfaceid": "500715f2-2222-4003-b268-0369959f5e25", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec  6 02:37:10 np0005548731 nova_compute[232433]: 2025-12-06 07:37:10.280 232437 DEBUG nova.network.os_vif_util [None req-04a52976-2ff7-415f-a8f6-fc3b9e0cbc13 7960f8f407754e5d83bee65690a6b772 21cd37bffa864aaebe5c734ba468f466 - - default default] Converting VIF {"id": "500715f2-2222-4003-b268-0369959f5e25", "address": "fa:16:3e:d3:0b:9b", "network": {"id": "e4ed5caf-995c-4d95-a87b-6b8a5de780fa", "bridge": "br-int", "label": "tempest-ServerActionsV293TestJSON-599749084-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "21cd37bffa864aaebe5c734ba468f466", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap500715f2-22", "ovs_interfaceid": "500715f2-2222-4003-b268-0369959f5e25", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 02:37:10 np0005548731 nova_compute[232433]: 2025-12-06 07:37:10.281 232437 DEBUG nova.network.os_vif_util [None req-04a52976-2ff7-415f-a8f6-fc3b9e0cbc13 7960f8f407754e5d83bee65690a6b772 21cd37bffa864aaebe5c734ba468f466 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d3:0b:9b,bridge_name='br-int',has_traffic_filtering=True,id=500715f2-2222-4003-b268-0369959f5e25,network=Network(e4ed5caf-995c-4d95-a87b-6b8a5de780fa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap500715f2-22') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 02:37:10 np0005548731 nova_compute[232433]: 2025-12-06 07:37:10.281 232437 DEBUG os_vif [None req-04a52976-2ff7-415f-a8f6-fc3b9e0cbc13 7960f8f407754e5d83bee65690a6b772 21cd37bffa864aaebe5c734ba468f466 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d3:0b:9b,bridge_name='br-int',has_traffic_filtering=True,id=500715f2-2222-4003-b268-0369959f5e25,network=Network(e4ed5caf-995c-4d95-a87b-6b8a5de780fa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap500715f2-22') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec  6 02:37:10 np0005548731 nova_compute[232433]: 2025-12-06 07:37:10.282 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:37:10 np0005548731 nova_compute[232433]: 2025-12-06 07:37:10.282 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:37:10 np0005548731 nova_compute[232433]: 2025-12-06 07:37:10.283 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  6 02:37:10 np0005548731 nova_compute[232433]: 2025-12-06 07:37:10.285 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:37:10 np0005548731 nova_compute[232433]: 2025-12-06 07:37:10.286 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap500715f2-22, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:37:10 np0005548731 nova_compute[232433]: 2025-12-06 07:37:10.286 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap500715f2-22, col_values=(('external_ids', {'iface-id': '500715f2-2222-4003-b268-0369959f5e25', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:d3:0b:9b', 'vm-uuid': 'ffee41ca-ba3b-4787-8435-b3903ffb29a9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:37:10 np0005548731 nova_compute[232433]: 2025-12-06 07:37:10.288 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:37:10 np0005548731 NetworkManager[49182]: <info>  [1765006630.2891] manager: (tap500715f2-22): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/269)
Dec  6 02:37:10 np0005548731 nova_compute[232433]: 2025-12-06 07:37:10.290 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  6 02:37:10 np0005548731 nova_compute[232433]: 2025-12-06 07:37:10.294 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:37:10 np0005548731 nova_compute[232433]: 2025-12-06 07:37:10.295 232437 INFO os_vif [None req-04a52976-2ff7-415f-a8f6-fc3b9e0cbc13 7960f8f407754e5d83bee65690a6b772 21cd37bffa864aaebe5c734ba468f466 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d3:0b:9b,bridge_name='br-int',has_traffic_filtering=True,id=500715f2-2222-4003-b268-0369959f5e25,network=Network(e4ed5caf-995c-4d95-a87b-6b8a5de780fa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap500715f2-22')#033[00m
Dec  6 02:37:10 np0005548731 nova_compute[232433]: 2025-12-06 07:37:10.671 232437 DEBUG nova.virt.libvirt.driver [None req-04a52976-2ff7-415f-a8f6-fc3b9e0cbc13 7960f8f407754e5d83bee65690a6b772 21cd37bffa864aaebe5c734ba468f466 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  6 02:37:10 np0005548731 nova_compute[232433]: 2025-12-06 07:37:10.671 232437 DEBUG nova.virt.libvirt.driver [None req-04a52976-2ff7-415f-a8f6-fc3b9e0cbc13 7960f8f407754e5d83bee65690a6b772 21cd37bffa864aaebe5c734ba468f466 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  6 02:37:10 np0005548731 nova_compute[232433]: 2025-12-06 07:37:10.671 232437 DEBUG nova.virt.libvirt.driver [None req-04a52976-2ff7-415f-a8f6-fc3b9e0cbc13 7960f8f407754e5d83bee65690a6b772 21cd37bffa864aaebe5c734ba468f466 - - default default] No VIF found with MAC fa:16:3e:d3:0b:9b, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec  6 02:37:10 np0005548731 nova_compute[232433]: 2025-12-06 07:37:10.672 232437 INFO nova.virt.libvirt.driver [None req-04a52976-2ff7-415f-a8f6-fc3b9e0cbc13 7960f8f407754e5d83bee65690a6b772 21cd37bffa864aaebe5c734ba468f466 - - default default] [instance: ffee41ca-ba3b-4787-8435-b3903ffb29a9] Using config drive#033[00m
Dec  6 02:37:10 np0005548731 nova_compute[232433]: 2025-12-06 07:37:10.694 232437 DEBUG nova.storage.rbd_utils [None req-04a52976-2ff7-415f-a8f6-fc3b9e0cbc13 7960f8f407754e5d83bee65690a6b772 21cd37bffa864aaebe5c734ba468f466 - - default default] rbd image ffee41ca-ba3b-4787-8435-b3903ffb29a9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:37:10 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:37:10 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:37:10 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:37:10.743 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:37:11 np0005548731 nova_compute[232433]: 2025-12-06 07:37:11.133 232437 DEBUG oslo_concurrency.lockutils [None req-a49e81ba-ab4d-4f15-a6a4-53e1cd348675 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Acquiring lock "refresh_cache-3d4b97f9-b8c3-4764-87f9-4006968fecd4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 02:37:11 np0005548731 nova_compute[232433]: 2025-12-06 07:37:11.134 232437 DEBUG oslo_concurrency.lockutils [None req-a49e81ba-ab4d-4f15-a6a4-53e1cd348675 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Acquired lock "refresh_cache-3d4b97f9-b8c3-4764-87f9-4006968fecd4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 02:37:11 np0005548731 nova_compute[232433]: 2025-12-06 07:37:11.134 232437 DEBUG nova.network.neutron [None req-a49e81ba-ab4d-4f15-a6a4-53e1cd348675 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] [instance: 3d4b97f9-b8c3-4764-87f9-4006968fecd4] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec  6 02:37:11 np0005548731 nova_compute[232433]: 2025-12-06 07:37:11.410 232437 INFO nova.virt.libvirt.driver [None req-04a52976-2ff7-415f-a8f6-fc3b9e0cbc13 7960f8f407754e5d83bee65690a6b772 21cd37bffa864aaebe5c734ba468f466 - - default default] [instance: ffee41ca-ba3b-4787-8435-b3903ffb29a9] Creating config drive at /var/lib/nova/instances/ffee41ca-ba3b-4787-8435-b3903ffb29a9/disk.config#033[00m
Dec  6 02:37:11 np0005548731 nova_compute[232433]: 2025-12-06 07:37:11.417 232437 DEBUG oslo_concurrency.processutils [None req-04a52976-2ff7-415f-a8f6-fc3b9e0cbc13 7960f8f407754e5d83bee65690a6b772 21cd37bffa864aaebe5c734ba468f466 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/ffee41ca-ba3b-4787-8435-b3903ffb29a9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpktaxior8 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:37:11 np0005548731 nova_compute[232433]: 2025-12-06 07:37:11.548 232437 DEBUG oslo_concurrency.processutils [None req-04a52976-2ff7-415f-a8f6-fc3b9e0cbc13 7960f8f407754e5d83bee65690a6b772 21cd37bffa864aaebe5c734ba468f466 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/ffee41ca-ba3b-4787-8435-b3903ffb29a9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpktaxior8" returned: 0 in 0.131s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:37:11 np0005548731 nova_compute[232433]: 2025-12-06 07:37:11.575 232437 DEBUG nova.storage.rbd_utils [None req-04a52976-2ff7-415f-a8f6-fc3b9e0cbc13 7960f8f407754e5d83bee65690a6b772 21cd37bffa864aaebe5c734ba468f466 - - default default] rbd image ffee41ca-ba3b-4787-8435-b3903ffb29a9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:37:11 np0005548731 nova_compute[232433]: 2025-12-06 07:37:11.578 232437 DEBUG oslo_concurrency.processutils [None req-04a52976-2ff7-415f-a8f6-fc3b9e0cbc13 7960f8f407754e5d83bee65690a6b772 21cd37bffa864aaebe5c734ba468f466 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/ffee41ca-ba3b-4787-8435-b3903ffb29a9/disk.config ffee41ca-ba3b-4787-8435-b3903ffb29a9_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:37:11 np0005548731 nova_compute[232433]: 2025-12-06 07:37:11.810 232437 DEBUG nova.network.neutron [req-9bf058ec-60f5-4d00-b524-b689466d1c02 req-4a5e01c8-40b3-4b1e-8419-fb7b8abdab23 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: ffee41ca-ba3b-4787-8435-b3903ffb29a9] Updated VIF entry in instance network info cache for port 500715f2-2222-4003-b268-0369959f5e25. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  6 02:37:11 np0005548731 nova_compute[232433]: 2025-12-06 07:37:11.811 232437 DEBUG nova.network.neutron [req-9bf058ec-60f5-4d00-b524-b689466d1c02 req-4a5e01c8-40b3-4b1e-8419-fb7b8abdab23 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: ffee41ca-ba3b-4787-8435-b3903ffb29a9] Updating instance_info_cache with network_info: [{"id": "500715f2-2222-4003-b268-0369959f5e25", "address": "fa:16:3e:d3:0b:9b", "network": {"id": "e4ed5caf-995c-4d95-a87b-6b8a5de780fa", "bridge": "br-int", "label": "tempest-ServerActionsV293TestJSON-599749084-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "21cd37bffa864aaebe5c734ba468f466", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap500715f2-22", "ovs_interfaceid": "500715f2-2222-4003-b268-0369959f5e25", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 02:37:11 np0005548731 nova_compute[232433]: 2025-12-06 07:37:11.820 232437 DEBUG nova.network.neutron [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] [instance: b4be0ef8-945f-47a1-a3a8-5962f1e692e5] Updating instance_info_cache with network_info: [{"id": "657a8a48-7a74-4b37-a294-df0b2fd9332c", "address": "fa:16:3e:f4:fe:4a", "network": {"id": "5bb49e8a-b939-4c79-851c-62c634be0272", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-50482264-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.187", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "833f4cf9f5a64b2ab94c3bf330353a31", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap657a8a48-7a", "ovs_interfaceid": "657a8a48-7a74-4b37-a294-df0b2fd9332c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 02:37:11 np0005548731 nova_compute[232433]: 2025-12-06 07:37:11.884 232437 DEBUG oslo_concurrency.lockutils [req-9bf058ec-60f5-4d00-b524-b689466d1c02 req-4a5e01c8-40b3-4b1e-8419-fb7b8abdab23 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-ffee41ca-ba3b-4787-8435-b3903ffb29a9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 02:37:11 np0005548731 nova_compute[232433]: 2025-12-06 07:37:11.892 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Releasing lock "refresh_cache-b4be0ef8-945f-47a1-a3a8-5962f1e692e5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 02:37:11 np0005548731 nova_compute[232433]: 2025-12-06 07:37:11.893 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] [instance: b4be0ef8-945f-47a1-a3a8-5962f1e692e5] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Dec  6 02:37:11 np0005548731 nova_compute[232433]: 2025-12-06 07:37:11.893 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:37:11 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:37:11 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 02:37:11 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:37:11.911 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 02:37:12 np0005548731 nova_compute[232433]: 2025-12-06 07:37:12.661 232437 DEBUG nova.network.neutron [None req-a49e81ba-ab4d-4f15-a6a4-53e1cd348675 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] [instance: 3d4b97f9-b8c3-4764-87f9-4006968fecd4] Updating instance_info_cache with network_info: [{"id": "bfac9ac2-c1cc-4dc1-830c-44e5990fb8e1", "address": "fa:16:3e:c3:31:81", "network": {"id": "5bb49e8a-b939-4c79-851c-62c634be0272", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-50482264-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "833f4cf9f5a64b2ab94c3bf330353a31", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbfac9ac2-c1", "ovs_interfaceid": "bfac9ac2-c1cc-4dc1-830c-44e5990fb8e1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 02:37:12 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:37:12 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:37:12 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:37:12.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:37:12 np0005548731 nova_compute[232433]: 2025-12-06 07:37:12.775 232437 DEBUG oslo_concurrency.lockutils [None req-a49e81ba-ab4d-4f15-a6a4-53e1cd348675 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Releasing lock "refresh_cache-3d4b97f9-b8c3-4764-87f9-4006968fecd4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 02:37:12 np0005548731 nova_compute[232433]: 2025-12-06 07:37:12.918 232437 DEBUG oslo_concurrency.processutils [None req-04a52976-2ff7-415f-a8f6-fc3b9e0cbc13 7960f8f407754e5d83bee65690a6b772 21cd37bffa864aaebe5c734ba468f466 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/ffee41ca-ba3b-4787-8435-b3903ffb29a9/disk.config ffee41ca-ba3b-4787-8435-b3903ffb29a9_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.341s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:37:12 np0005548731 nova_compute[232433]: 2025-12-06 07:37:12.919 232437 INFO nova.virt.libvirt.driver [None req-04a52976-2ff7-415f-a8f6-fc3b9e0cbc13 7960f8f407754e5d83bee65690a6b772 21cd37bffa864aaebe5c734ba468f466 - - default default] [instance: ffee41ca-ba3b-4787-8435-b3903ffb29a9] Deleting local config drive /var/lib/nova/instances/ffee41ca-ba3b-4787-8435-b3903ffb29a9/disk.config because it was imported into RBD.#033[00m
Dec  6 02:37:12 np0005548731 kernel: tap500715f2-22: entered promiscuous mode
Dec  6 02:37:12 np0005548731 NetworkManager[49182]: <info>  [1765006632.9714] manager: (tap500715f2-22): new Tun device (/org/freedesktop/NetworkManager/Devices/270)
Dec  6 02:37:12 np0005548731 nova_compute[232433]: 2025-12-06 07:37:12.973 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:37:12 np0005548731 ovn_controller[133927]: 2025-12-06T07:37:12Z|00570|binding|INFO|Claiming lport 500715f2-2222-4003-b268-0369959f5e25 for this chassis.
Dec  6 02:37:12 np0005548731 ovn_controller[133927]: 2025-12-06T07:37:12Z|00571|binding|INFO|500715f2-2222-4003-b268-0369959f5e25: Claiming fa:16:3e:d3:0b:9b 10.100.0.7
Dec  6 02:37:12 np0005548731 systemd-udevd[288730]: Network interface NamePolicy= disabled on kernel command line.
Dec  6 02:37:13 np0005548731 ovn_controller[133927]: 2025-12-06T07:37:13Z|00572|binding|INFO|Setting lport 500715f2-2222-4003-b268-0369959f5e25 ovn-installed in OVS
Dec  6 02:37:13 np0005548731 nova_compute[232433]: 2025-12-06 07:37:13.003 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:37:13 np0005548731 systemd-machined[195355]: New machine qemu-59-instance-00000085.
Dec  6 02:37:13 np0005548731 nova_compute[232433]: 2025-12-06 07:37:13.008 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:37:13 np0005548731 NetworkManager[49182]: <info>  [1765006633.0106] device (tap500715f2-22): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec  6 02:37:13 np0005548731 NetworkManager[49182]: <info>  [1765006633.0125] device (tap500715f2-22): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec  6 02:37:13 np0005548731 nova_compute[232433]: 2025-12-06 07:37:13.012 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:37:13 np0005548731 systemd[1]: Started Virtual Machine qemu-59-instance-00000085.
Dec  6 02:37:13 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:37:13.096 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d3:0b:9b 10.100.0.7'], port_security=['fa:16:3e:d3:0b:9b 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'ffee41ca-ba3b-4787-8435-b3903ffb29a9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e4ed5caf-995c-4d95-a87b-6b8a5de780fa', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '21cd37bffa864aaebe5c734ba468f466', 'neutron:revision_number': '2', 'neutron:security_group_ids': '349efd6b-63d0-41af-8405-3d1f52d82535', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=359f89a6-280c-4f37-8512-64abc4582e65, chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], logical_port=500715f2-2222-4003-b268-0369959f5e25) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 02:37:13 np0005548731 ovn_controller[133927]: 2025-12-06T07:37:13Z|00573|binding|INFO|Setting lport 500715f2-2222-4003-b268-0369959f5e25 up in Southbound
Dec  6 02:37:13 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:37:13.097 143965 INFO neutron.agent.ovn.metadata.agent [-] Port 500715f2-2222-4003-b268-0369959f5e25 in datapath e4ed5caf-995c-4d95-a87b-6b8a5de780fa bound to our chassis#033[00m
Dec  6 02:37:13 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:37:13.099 143965 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network e4ed5caf-995c-4d95-a87b-6b8a5de780fa#033[00m
Dec  6 02:37:13 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:37:13.110 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[8ba3577a-01d4-4069-89d2-44b982dae85a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:37:13 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:37:13.111 143965 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tape4ed5caf-91 in ovnmeta-e4ed5caf-995c-4d95-a87b-6b8a5de780fa namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Dec  6 02:37:13 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:37:13.113 236668 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tape4ed5caf-90 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Dec  6 02:37:13 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:37:13.113 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[dec7f5ca-e06c-4390-a254-3b3deafcf8ce]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:37:13 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:37:13.114 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[76504c83-7816-4007-9e4a-dd5433f18d02]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:37:13 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:37:13.127 144079 DEBUG oslo.privsep.daemon [-] privsep: reply[d241e59f-6b71-4152-84b6-10c5e79eb9c4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:37:13 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:37:13.153 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[a68d0b56-1753-4147-9556-4fa1859444c9]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:37:13 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:37:13.183 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[a9d04803-2b60-41cb-bdcc-b0ed2844913c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:37:13 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:37:13.188 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[93b644df-febd-49a4-acd5-c36fe355fbff]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:37:13 np0005548731 systemd-udevd[288733]: Network interface NamePolicy= disabled on kernel command line.
Dec  6 02:37:13 np0005548731 NetworkManager[49182]: <info>  [1765006633.1896] manager: (tape4ed5caf-90): new Veth device (/org/freedesktop/NetworkManager/Devices/271)
Dec  6 02:37:13 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:37:13.222 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[bc0a3287-79ec-4d84-97e7-adc67770216e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:37:13 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:37:13.226 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[8d3db84f-581b-4eb8-a7db-ffd74b6d0735]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:37:13 np0005548731 NetworkManager[49182]: <info>  [1765006633.2532] device (tape4ed5caf-90): carrier: link connected
Dec  6 02:37:13 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:37:13.259 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[3690bf8d-ced2-4f88-9c10-d833aaf5ee4c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:37:13 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:37:13.276 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[8f1b32a8-3bed-4401-9f19-d8f3ea1b2e36]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape4ed5caf-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:3a:38:65'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 179], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 689526, 'reachable_time': 35356, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 148, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 148, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 288764, 'error': None, 'target': 'ovnmeta-e4ed5caf-995c-4d95-a87b-6b8a5de780fa', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:37:13 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:37:13.297 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[37287aec-d891-4776-becb-2fc025893c3a]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe3a:3865'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 689526, 'tstamp': 689526}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 288765, 'error': None, 'target': 'ovnmeta-e4ed5caf-995c-4d95-a87b-6b8a5de780fa', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:37:13 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:37:13.317 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[39057479-7236-43cc-bb13-ce754dbc226c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape4ed5caf-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:3a:38:65'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 179], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 689526, 'reachable_time': 35356, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 148, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 148, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 288766, 'error': None, 'target': 'ovnmeta-e4ed5caf-995c-4d95-a87b-6b8a5de780fa', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:37:13 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:37:13.348 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[d9ace071-78c6-4af4-99c0-c7f6134c34ae]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:37:13 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:37:13.405 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[ca9a7006-e9b6-4a01-a15a-df2d1d5a382d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:37:13 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:37:13.407 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape4ed5caf-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:37:13 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:37:13.407 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  6 02:37:13 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:37:13.407 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape4ed5caf-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:37:13 np0005548731 NetworkManager[49182]: <info>  [1765006633.4102] manager: (tape4ed5caf-90): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/272)
Dec  6 02:37:13 np0005548731 kernel: tape4ed5caf-90: entered promiscuous mode
Dec  6 02:37:13 np0005548731 nova_compute[232433]: 2025-12-06 07:37:13.409 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:37:13 np0005548731 nova_compute[232433]: 2025-12-06 07:37:13.411 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:37:13 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:37:13.412 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tape4ed5caf-90, col_values=(('external_ids', {'iface-id': 'aa7ac13f-a2c4-4bc9-9feb-e67530c4f4ab'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:37:13 np0005548731 nova_compute[232433]: 2025-12-06 07:37:13.413 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:37:13 np0005548731 ovn_controller[133927]: 2025-12-06T07:37:13Z|00574|binding|INFO|Releasing lport aa7ac13f-a2c4-4bc9-9feb-e67530c4f4ab from this chassis (sb_readonly=1)
Dec  6 02:37:13 np0005548731 nova_compute[232433]: 2025-12-06 07:37:13.414 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:37:13 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:37:13.415 143965 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/e4ed5caf-995c-4d95-a87b-6b8a5de780fa.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/e4ed5caf-995c-4d95-a87b-6b8a5de780fa.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Dec  6 02:37:13 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:37:13.416 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[0eeecdbe-b207-45f3-9154-2e7f141666c0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:37:13 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:37:13.417 143965 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec  6 02:37:13 np0005548731 ovn_metadata_agent[143960]: global
Dec  6 02:37:13 np0005548731 ovn_metadata_agent[143960]:    log         /dev/log local0 debug
Dec  6 02:37:13 np0005548731 ovn_metadata_agent[143960]:    log-tag     haproxy-metadata-proxy-e4ed5caf-995c-4d95-a87b-6b8a5de780fa
Dec  6 02:37:13 np0005548731 ovn_metadata_agent[143960]:    user        root
Dec  6 02:37:13 np0005548731 ovn_metadata_agent[143960]:    group       root
Dec  6 02:37:13 np0005548731 ovn_metadata_agent[143960]:    maxconn     1024
Dec  6 02:37:13 np0005548731 ovn_metadata_agent[143960]:    pidfile     /var/lib/neutron/external/pids/e4ed5caf-995c-4d95-a87b-6b8a5de780fa.pid.haproxy
Dec  6 02:37:13 np0005548731 ovn_metadata_agent[143960]:    daemon
Dec  6 02:37:13 np0005548731 ovn_metadata_agent[143960]: 
Dec  6 02:37:13 np0005548731 ovn_metadata_agent[143960]: defaults
Dec  6 02:37:13 np0005548731 ovn_metadata_agent[143960]:    log global
Dec  6 02:37:13 np0005548731 ovn_metadata_agent[143960]:    mode http
Dec  6 02:37:13 np0005548731 ovn_metadata_agent[143960]:    option httplog
Dec  6 02:37:13 np0005548731 ovn_metadata_agent[143960]:    option dontlognull
Dec  6 02:37:13 np0005548731 ovn_metadata_agent[143960]:    option http-server-close
Dec  6 02:37:13 np0005548731 ovn_metadata_agent[143960]:    option forwardfor
Dec  6 02:37:13 np0005548731 ovn_metadata_agent[143960]:    retries                 3
Dec  6 02:37:13 np0005548731 ovn_metadata_agent[143960]:    timeout http-request    30s
Dec  6 02:37:13 np0005548731 ovn_metadata_agent[143960]:    timeout connect         30s
Dec  6 02:37:13 np0005548731 ovn_metadata_agent[143960]:    timeout client          32s
Dec  6 02:37:13 np0005548731 ovn_metadata_agent[143960]:    timeout server          32s
Dec  6 02:37:13 np0005548731 ovn_metadata_agent[143960]:    timeout http-keep-alive 30s
Dec  6 02:37:13 np0005548731 ovn_metadata_agent[143960]: 
Dec  6 02:37:13 np0005548731 ovn_metadata_agent[143960]: 
Dec  6 02:37:13 np0005548731 ovn_metadata_agent[143960]: listen listener
Dec  6 02:37:13 np0005548731 ovn_metadata_agent[143960]:    bind 169.254.169.254:80
Dec  6 02:37:13 np0005548731 ovn_metadata_agent[143960]:    server metadata /var/lib/neutron/metadata_proxy
Dec  6 02:37:13 np0005548731 ovn_metadata_agent[143960]:    http-request add-header X-OVN-Network-ID e4ed5caf-995c-4d95-a87b-6b8a5de780fa
Dec  6 02:37:13 np0005548731 ovn_metadata_agent[143960]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Dec  6 02:37:13 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:37:13.419 143965 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-e4ed5caf-995c-4d95-a87b-6b8a5de780fa', 'env', 'PROCESS_TAG=haproxy-e4ed5caf-995c-4d95-a87b-6b8a5de780fa', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/e4ed5caf-995c-4d95-a87b-6b8a5de780fa.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Dec  6 02:37:13 np0005548731 nova_compute[232433]: 2025-12-06 07:37:13.429 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:37:13 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:37:13 np0005548731 nova_compute[232433]: 2025-12-06 07:37:13.588 232437 DEBUG nova.virt.libvirt.driver [None req-a49e81ba-ab4d-4f15-a6a4-53e1cd348675 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] [instance: 3d4b97f9-b8c3-4764-87f9-4006968fecd4] Starting migrate_disk_and_power_off migrate_disk_and_power_off /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11511#033[00m
Dec  6 02:37:13 np0005548731 nova_compute[232433]: 2025-12-06 07:37:13.589 232437 DEBUG nova.virt.libvirt.volume.remotefs [None req-a49e81ba-ab4d-4f15-a6a4-53e1cd348675 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Creating file /var/lib/nova/instances/3d4b97f9-b8c3-4764-87f9-4006968fecd4/49c974a9e1ac496a9cbe98d9877a47a9.tmp on remote host 192.168.122.101 create_file /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/remotefs.py:79#033[00m
Dec  6 02:37:13 np0005548731 nova_compute[232433]: 2025-12-06 07:37:13.590 232437 DEBUG oslo_concurrency.processutils [None req-a49e81ba-ab4d-4f15-a6a4-53e1cd348675 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Running cmd (subprocess): ssh -o BatchMode=yes 192.168.122.101 touch /var/lib/nova/instances/3d4b97f9-b8c3-4764-87f9-4006968fecd4/49c974a9e1ac496a9cbe98d9877a47a9.tmp execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:37:13 np0005548731 podman[288799]: 2025-12-06 07:37:13.784825712 +0000 UTC m=+0.047807806 container create ac080054491c7bdf230018b8aebe8e6bcbedf8e6b3ec90ee15e608bc70261133 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e4ed5caf-995c-4d95-a87b-6b8a5de780fa, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Dec  6 02:37:13 np0005548731 systemd[1]: Started libpod-conmon-ac080054491c7bdf230018b8aebe8e6bcbedf8e6b3ec90ee15e608bc70261133.scope.
Dec  6 02:37:13 np0005548731 systemd[1]: Started libcrun container.
Dec  6 02:37:13 np0005548731 podman[288799]: 2025-12-06 07:37:13.76012111 +0000 UTC m=+0.023103224 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec  6 02:37:13 np0005548731 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7915ed9c5be7dafa716ca02205b2f5d45befabedcf9d8688bd3347cfadcb3f54/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec  6 02:37:13 np0005548731 podman[288799]: 2025-12-06 07:37:13.870082339 +0000 UTC m=+0.133064463 container init ac080054491c7bdf230018b8aebe8e6bcbedf8e6b3ec90ee15e608bc70261133 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e4ed5caf-995c-4d95-a87b-6b8a5de780fa, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Dec  6 02:37:13 np0005548731 podman[288799]: 2025-12-06 07:37:13.875167563 +0000 UTC m=+0.138149657 container start ac080054491c7bdf230018b8aebe8e6bcbedf8e6b3ec90ee15e608bc70261133 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e4ed5caf-995c-4d95-a87b-6b8a5de780fa, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec  6 02:37:13 np0005548731 neutron-haproxy-ovnmeta-e4ed5caf-995c-4d95-a87b-6b8a5de780fa[288814]: [NOTICE]   (288818) : New worker (288827) forked
Dec  6 02:37:13 np0005548731 neutron-haproxy-ovnmeta-e4ed5caf-995c-4d95-a87b-6b8a5de780fa[288814]: [NOTICE]   (288818) : Loading success.
Dec  6 02:37:13 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:37:13 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:37:13 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:37:13.913 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:37:14 np0005548731 nova_compute[232433]: 2025-12-06 07:37:14.016 232437 DEBUG oslo_concurrency.processutils [None req-a49e81ba-ab4d-4f15-a6a4-53e1cd348675 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] CMD "ssh -o BatchMode=yes 192.168.122.101 touch /var/lib/nova/instances/3d4b97f9-b8c3-4764-87f9-4006968fecd4/49c974a9e1ac496a9cbe98d9877a47a9.tmp" returned: 1 in 0.427s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:37:14 np0005548731 nova_compute[232433]: 2025-12-06 07:37:14.017 232437 DEBUG oslo_concurrency.processutils [None req-a49e81ba-ab4d-4f15-a6a4-53e1cd348675 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] 'ssh -o BatchMode=yes 192.168.122.101 touch /var/lib/nova/instances/3d4b97f9-b8c3-4764-87f9-4006968fecd4/49c974a9e1ac496a9cbe98d9877a47a9.tmp' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473#033[00m
Dec  6 02:37:14 np0005548731 nova_compute[232433]: 2025-12-06 07:37:14.018 232437 DEBUG nova.virt.libvirt.volume.remotefs [None req-a49e81ba-ab4d-4f15-a6a4-53e1cd348675 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Creating directory /var/lib/nova/instances/3d4b97f9-b8c3-4764-87f9-4006968fecd4 on remote host 192.168.122.101 create_dir /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/remotefs.py:91#033[00m
Dec  6 02:37:14 np0005548731 nova_compute[232433]: 2025-12-06 07:37:14.018 232437 DEBUG oslo_concurrency.processutils [None req-a49e81ba-ab4d-4f15-a6a4-53e1cd348675 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Running cmd (subprocess): ssh -o BatchMode=yes 192.168.122.101 mkdir -p /var/lib/nova/instances/3d4b97f9-b8c3-4764-87f9-4006968fecd4 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:37:14 np0005548731 nova_compute[232433]: 2025-12-06 07:37:14.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:37:14 np0005548731 nova_compute[232433]: 2025-12-06 07:37:14.105 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:37:14 np0005548731 nova_compute[232433]: 2025-12-06 07:37:14.105 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  6 02:37:14 np0005548731 nova_compute[232433]: 2025-12-06 07:37:14.161 232437 DEBUG nova.compute.manager [req-16ef908f-677e-47db-8626-76f0862f66e5 req-897779f8-44d7-4f5e-86f6-8a6b95b40617 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: ffee41ca-ba3b-4787-8435-b3903ffb29a9] Received event network-vif-plugged-500715f2-2222-4003-b268-0369959f5e25 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:37:14 np0005548731 nova_compute[232433]: 2025-12-06 07:37:14.161 232437 DEBUG oslo_concurrency.lockutils [req-16ef908f-677e-47db-8626-76f0862f66e5 req-897779f8-44d7-4f5e-86f6-8a6b95b40617 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "ffee41ca-ba3b-4787-8435-b3903ffb29a9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:37:14 np0005548731 nova_compute[232433]: 2025-12-06 07:37:14.161 232437 DEBUG oslo_concurrency.lockutils [req-16ef908f-677e-47db-8626-76f0862f66e5 req-897779f8-44d7-4f5e-86f6-8a6b95b40617 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "ffee41ca-ba3b-4787-8435-b3903ffb29a9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:37:14 np0005548731 nova_compute[232433]: 2025-12-06 07:37:14.162 232437 DEBUG oslo_concurrency.lockutils [req-16ef908f-677e-47db-8626-76f0862f66e5 req-897779f8-44d7-4f5e-86f6-8a6b95b40617 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "ffee41ca-ba3b-4787-8435-b3903ffb29a9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:37:14 np0005548731 nova_compute[232433]: 2025-12-06 07:37:14.162 232437 DEBUG nova.compute.manager [req-16ef908f-677e-47db-8626-76f0862f66e5 req-897779f8-44d7-4f5e-86f6-8a6b95b40617 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: ffee41ca-ba3b-4787-8435-b3903ffb29a9] Processing event network-vif-plugged-500715f2-2222-4003-b268-0369959f5e25 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Dec  6 02:37:14 np0005548731 nova_compute[232433]: 2025-12-06 07:37:14.192 232437 DEBUG nova.compute.manager [None req-04a52976-2ff7-415f-a8f6-fc3b9e0cbc13 7960f8f407754e5d83bee65690a6b772 21cd37bffa864aaebe5c734ba468f466 - - default default] [instance: ffee41ca-ba3b-4787-8435-b3903ffb29a9] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec  6 02:37:14 np0005548731 nova_compute[232433]: 2025-12-06 07:37:14.193 232437 DEBUG nova.virt.driver [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Emitting event <LifecycleEvent: 1765006634.1920319, ffee41ca-ba3b-4787-8435-b3903ffb29a9 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 02:37:14 np0005548731 nova_compute[232433]: 2025-12-06 07:37:14.193 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: ffee41ca-ba3b-4787-8435-b3903ffb29a9] VM Started (Lifecycle Event)#033[00m
Dec  6 02:37:14 np0005548731 nova_compute[232433]: 2025-12-06 07:37:14.196 232437 DEBUG nova.virt.libvirt.driver [None req-04a52976-2ff7-415f-a8f6-fc3b9e0cbc13 7960f8f407754e5d83bee65690a6b772 21cd37bffa864aaebe5c734ba468f466 - - default default] [instance: ffee41ca-ba3b-4787-8435-b3903ffb29a9] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Dec  6 02:37:14 np0005548731 nova_compute[232433]: 2025-12-06 07:37:14.198 232437 INFO nova.virt.libvirt.driver [-] [instance: ffee41ca-ba3b-4787-8435-b3903ffb29a9] Instance spawned successfully.#033[00m
Dec  6 02:37:14 np0005548731 nova_compute[232433]: 2025-12-06 07:37:14.199 232437 DEBUG nova.virt.libvirt.driver [None req-04a52976-2ff7-415f-a8f6-fc3b9e0cbc13 7960f8f407754e5d83bee65690a6b772 21cd37bffa864aaebe5c734ba468f466 - - default default] [instance: ffee41ca-ba3b-4787-8435-b3903ffb29a9] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Dec  6 02:37:14 np0005548731 nova_compute[232433]: 2025-12-06 07:37:14.217 232437 DEBUG oslo_concurrency.processutils [None req-a49e81ba-ab4d-4f15-a6a4-53e1cd348675 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] CMD "ssh -o BatchMode=yes 192.168.122.101 mkdir -p /var/lib/nova/instances/3d4b97f9-b8c3-4764-87f9-4006968fecd4" returned: 0 in 0.200s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:37:14 np0005548731 nova_compute[232433]: 2025-12-06 07:37:14.222 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: ffee41ca-ba3b-4787-8435-b3903ffb29a9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:37:14 np0005548731 nova_compute[232433]: 2025-12-06 07:37:14.224 232437 DEBUG nova.virt.libvirt.driver [None req-a49e81ba-ab4d-4f15-a6a4-53e1cd348675 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] [instance: 3d4b97f9-b8c3-4764-87f9-4006968fecd4] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Dec  6 02:37:14 np0005548731 nova_compute[232433]: 2025-12-06 07:37:14.227 232437 DEBUG nova.virt.libvirt.driver [None req-04a52976-2ff7-415f-a8f6-fc3b9e0cbc13 7960f8f407754e5d83bee65690a6b772 21cd37bffa864aaebe5c734ba468f466 - - default default] [instance: ffee41ca-ba3b-4787-8435-b3903ffb29a9] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 02:37:14 np0005548731 nova_compute[232433]: 2025-12-06 07:37:14.227 232437 DEBUG nova.virt.libvirt.driver [None req-04a52976-2ff7-415f-a8f6-fc3b9e0cbc13 7960f8f407754e5d83bee65690a6b772 21cd37bffa864aaebe5c734ba468f466 - - default default] [instance: ffee41ca-ba3b-4787-8435-b3903ffb29a9] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 02:37:14 np0005548731 nova_compute[232433]: 2025-12-06 07:37:14.228 232437 DEBUG nova.virt.libvirt.driver [None req-04a52976-2ff7-415f-a8f6-fc3b9e0cbc13 7960f8f407754e5d83bee65690a6b772 21cd37bffa864aaebe5c734ba468f466 - - default default] [instance: ffee41ca-ba3b-4787-8435-b3903ffb29a9] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 02:37:14 np0005548731 nova_compute[232433]: 2025-12-06 07:37:14.228 232437 DEBUG nova.virt.libvirt.driver [None req-04a52976-2ff7-415f-a8f6-fc3b9e0cbc13 7960f8f407754e5d83bee65690a6b772 21cd37bffa864aaebe5c734ba468f466 - - default default] [instance: ffee41ca-ba3b-4787-8435-b3903ffb29a9] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 02:37:14 np0005548731 nova_compute[232433]: 2025-12-06 07:37:14.228 232437 DEBUG nova.virt.libvirt.driver [None req-04a52976-2ff7-415f-a8f6-fc3b9e0cbc13 7960f8f407754e5d83bee65690a6b772 21cd37bffa864aaebe5c734ba468f466 - - default default] [instance: ffee41ca-ba3b-4787-8435-b3903ffb29a9] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 02:37:14 np0005548731 nova_compute[232433]: 2025-12-06 07:37:14.229 232437 DEBUG nova.virt.libvirt.driver [None req-04a52976-2ff7-415f-a8f6-fc3b9e0cbc13 7960f8f407754e5d83bee65690a6b772 21cd37bffa864aaebe5c734ba468f466 - - default default] [instance: ffee41ca-ba3b-4787-8435-b3903ffb29a9] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 02:37:14 np0005548731 nova_compute[232433]: 2025-12-06 07:37:14.235 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: ffee41ca-ba3b-4787-8435-b3903ffb29a9] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  6 02:37:14 np0005548731 nova_compute[232433]: 2025-12-06 07:37:14.271 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: ffee41ca-ba3b-4787-8435-b3903ffb29a9] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  6 02:37:14 np0005548731 nova_compute[232433]: 2025-12-06 07:37:14.272 232437 DEBUG nova.virt.driver [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Emitting event <LifecycleEvent: 1765006634.193192, ffee41ca-ba3b-4787-8435-b3903ffb29a9 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 02:37:14 np0005548731 nova_compute[232433]: 2025-12-06 07:37:14.272 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: ffee41ca-ba3b-4787-8435-b3903ffb29a9] VM Paused (Lifecycle Event)#033[00m
Dec  6 02:37:14 np0005548731 nova_compute[232433]: 2025-12-06 07:37:14.302 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: ffee41ca-ba3b-4787-8435-b3903ffb29a9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:37:14 np0005548731 nova_compute[232433]: 2025-12-06 07:37:14.305 232437 DEBUG nova.virt.driver [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Emitting event <LifecycleEvent: 1765006634.1957123, ffee41ca-ba3b-4787-8435-b3903ffb29a9 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 02:37:14 np0005548731 nova_compute[232433]: 2025-12-06 07:37:14.305 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: ffee41ca-ba3b-4787-8435-b3903ffb29a9] VM Resumed (Lifecycle Event)#033[00m
Dec  6 02:37:14 np0005548731 nova_compute[232433]: 2025-12-06 07:37:14.320 232437 INFO nova.compute.manager [None req-04a52976-2ff7-415f-a8f6-fc3b9e0cbc13 7960f8f407754e5d83bee65690a6b772 21cd37bffa864aaebe5c734ba468f466 - - default default] [instance: ffee41ca-ba3b-4787-8435-b3903ffb29a9] Took 13.03 seconds to spawn the instance on the hypervisor.#033[00m
Dec  6 02:37:14 np0005548731 nova_compute[232433]: 2025-12-06 07:37:14.321 232437 DEBUG nova.compute.manager [None req-04a52976-2ff7-415f-a8f6-fc3b9e0cbc13 7960f8f407754e5d83bee65690a6b772 21cd37bffa864aaebe5c734ba468f466 - - default default] [instance: ffee41ca-ba3b-4787-8435-b3903ffb29a9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:37:14 np0005548731 nova_compute[232433]: 2025-12-06 07:37:14.331 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: ffee41ca-ba3b-4787-8435-b3903ffb29a9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:37:14 np0005548731 nova_compute[232433]: 2025-12-06 07:37:14.333 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: ffee41ca-ba3b-4787-8435-b3903ffb29a9] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  6 02:37:14 np0005548731 nova_compute[232433]: 2025-12-06 07:37:14.367 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: ffee41ca-ba3b-4787-8435-b3903ffb29a9] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  6 02:37:14 np0005548731 nova_compute[232433]: 2025-12-06 07:37:14.395 232437 INFO nova.compute.manager [None req-04a52976-2ff7-415f-a8f6-fc3b9e0cbc13 7960f8f407754e5d83bee65690a6b772 21cd37bffa864aaebe5c734ba468f466 - - default default] [instance: ffee41ca-ba3b-4787-8435-b3903ffb29a9] Took 16.15 seconds to build instance.#033[00m
Dec  6 02:37:14 np0005548731 nova_compute[232433]: 2025-12-06 07:37:14.414 232437 DEBUG oslo_concurrency.lockutils [None req-04a52976-2ff7-415f-a8f6-fc3b9e0cbc13 7960f8f407754e5d83bee65690a6b772 21cd37bffa864aaebe5c734ba468f466 - - default default] Lock "ffee41ca-ba3b-4787-8435-b3903ffb29a9" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 16.255s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:37:14 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:37:14 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:37:14 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:37:14.747 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:37:15 np0005548731 nova_compute[232433]: 2025-12-06 07:37:15.289 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:37:15 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:37:15 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:37:15 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:37:15.916 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:37:16 np0005548731 nova_compute[232433]: 2025-12-06 07:37:16.093 232437 DEBUG nova.compute.manager [req-5dbfbaf6-def9-4a6d-a6c0-29f72d7f2318 req-32701fb1-51c2-46d7-ac8b-3b90e54b5f30 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: ffee41ca-ba3b-4787-8435-b3903ffb29a9] Received event network-changed-500715f2-2222-4003-b268-0369959f5e25 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:37:16 np0005548731 nova_compute[232433]: 2025-12-06 07:37:16.093 232437 DEBUG nova.compute.manager [req-5dbfbaf6-def9-4a6d-a6c0-29f72d7f2318 req-32701fb1-51c2-46d7-ac8b-3b90e54b5f30 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: ffee41ca-ba3b-4787-8435-b3903ffb29a9] Refreshing instance network info cache due to event network-changed-500715f2-2222-4003-b268-0369959f5e25. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  6 02:37:16 np0005548731 nova_compute[232433]: 2025-12-06 07:37:16.094 232437 DEBUG oslo_concurrency.lockutils [req-5dbfbaf6-def9-4a6d-a6c0-29f72d7f2318 req-32701fb1-51c2-46d7-ac8b-3b90e54b5f30 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-ffee41ca-ba3b-4787-8435-b3903ffb29a9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 02:37:16 np0005548731 nova_compute[232433]: 2025-12-06 07:37:16.094 232437 DEBUG oslo_concurrency.lockutils [req-5dbfbaf6-def9-4a6d-a6c0-29f72d7f2318 req-32701fb1-51c2-46d7-ac8b-3b90e54b5f30 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-ffee41ca-ba3b-4787-8435-b3903ffb29a9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 02:37:16 np0005548731 nova_compute[232433]: 2025-12-06 07:37:16.094 232437 DEBUG nova.network.neutron [req-5dbfbaf6-def9-4a6d-a6c0-29f72d7f2318 req-32701fb1-51c2-46d7-ac8b-3b90e54b5f30 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: ffee41ca-ba3b-4787-8435-b3903ffb29a9] Refreshing network info cache for port 500715f2-2222-4003-b268-0369959f5e25 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  6 02:37:16 np0005548731 nova_compute[232433]: 2025-12-06 07:37:16.267 232437 DEBUG nova.compute.manager [req-84149753-36fb-4a7b-a491-5621e6e6a8c3 req-eecb2e20-8c14-4bb1-ac7a-f4f781410c73 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: ffee41ca-ba3b-4787-8435-b3903ffb29a9] Received event network-vif-plugged-500715f2-2222-4003-b268-0369959f5e25 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:37:16 np0005548731 nova_compute[232433]: 2025-12-06 07:37:16.267 232437 DEBUG oslo_concurrency.lockutils [req-84149753-36fb-4a7b-a491-5621e6e6a8c3 req-eecb2e20-8c14-4bb1-ac7a-f4f781410c73 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "ffee41ca-ba3b-4787-8435-b3903ffb29a9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:37:16 np0005548731 nova_compute[232433]: 2025-12-06 07:37:16.268 232437 DEBUG oslo_concurrency.lockutils [req-84149753-36fb-4a7b-a491-5621e6e6a8c3 req-eecb2e20-8c14-4bb1-ac7a-f4f781410c73 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "ffee41ca-ba3b-4787-8435-b3903ffb29a9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:37:16 np0005548731 nova_compute[232433]: 2025-12-06 07:37:16.268 232437 DEBUG oslo_concurrency.lockutils [req-84149753-36fb-4a7b-a491-5621e6e6a8c3 req-eecb2e20-8c14-4bb1-ac7a-f4f781410c73 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "ffee41ca-ba3b-4787-8435-b3903ffb29a9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:37:16 np0005548731 nova_compute[232433]: 2025-12-06 07:37:16.268 232437 DEBUG nova.compute.manager [req-84149753-36fb-4a7b-a491-5621e6e6a8c3 req-eecb2e20-8c14-4bb1-ac7a-f4f781410c73 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: ffee41ca-ba3b-4787-8435-b3903ffb29a9] No waiting events found dispatching network-vif-plugged-500715f2-2222-4003-b268-0369959f5e25 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 02:37:16 np0005548731 nova_compute[232433]: 2025-12-06 07:37:16.269 232437 WARNING nova.compute.manager [req-84149753-36fb-4a7b-a491-5621e6e6a8c3 req-eecb2e20-8c14-4bb1-ac7a-f4f781410c73 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: ffee41ca-ba3b-4787-8435-b3903ffb29a9] Received unexpected event network-vif-plugged-500715f2-2222-4003-b268-0369959f5e25 for instance with vm_state active and task_state None.#033[00m
Dec  6 02:37:16 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:37:16 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:37:16 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:37:16.749 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:37:17 np0005548731 nova_compute[232433]: 2025-12-06 07:37:17.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:37:17 np0005548731 nova_compute[232433]: 2025-12-06 07:37:17.677 232437 DEBUG nova.network.neutron [req-5dbfbaf6-def9-4a6d-a6c0-29f72d7f2318 req-32701fb1-51c2-46d7-ac8b-3b90e54b5f30 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: ffee41ca-ba3b-4787-8435-b3903ffb29a9] Updated VIF entry in instance network info cache for port 500715f2-2222-4003-b268-0369959f5e25. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  6 02:37:17 np0005548731 nova_compute[232433]: 2025-12-06 07:37:17.678 232437 DEBUG nova.network.neutron [req-5dbfbaf6-def9-4a6d-a6c0-29f72d7f2318 req-32701fb1-51c2-46d7-ac8b-3b90e54b5f30 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: ffee41ca-ba3b-4787-8435-b3903ffb29a9] Updating instance_info_cache with network_info: [{"id": "500715f2-2222-4003-b268-0369959f5e25", "address": "fa:16:3e:d3:0b:9b", "network": {"id": "e4ed5caf-995c-4d95-a87b-6b8a5de780fa", "bridge": "br-int", "label": "tempest-ServerActionsV293TestJSON-599749084-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.244", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "21cd37bffa864aaebe5c734ba468f466", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap500715f2-22", "ovs_interfaceid": "500715f2-2222-4003-b268-0369959f5e25", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 02:37:17 np0005548731 nova_compute[232433]: 2025-12-06 07:37:17.695 232437 DEBUG oslo_concurrency.lockutils [req-5dbfbaf6-def9-4a6d-a6c0-29f72d7f2318 req-32701fb1-51c2-46d7-ac8b-3b90e54b5f30 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-ffee41ca-ba3b-4787-8435-b3903ffb29a9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 02:37:17 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:37:17 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:37:17 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:37:17.919 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:37:18 np0005548731 nova_compute[232433]: 2025-12-06 07:37:18.015 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:37:18 np0005548731 nova_compute[232433]: 2025-12-06 07:37:18.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:37:18 np0005548731 nova_compute[232433]: 2025-12-06 07:37:18.251 232437 INFO nova.virt.libvirt.driver [None req-a49e81ba-ab4d-4f15-a6a4-53e1cd348675 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] [instance: 3d4b97f9-b8c3-4764-87f9-4006968fecd4] Instance shutdown successfully after 4 seconds.#033[00m
Dec  6 02:37:18 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:37:18 np0005548731 kernel: tapbfac9ac2-c1 (unregistering): left promiscuous mode
Dec  6 02:37:18 np0005548731 NetworkManager[49182]: <info>  [1765006638.4723] device (tapbfac9ac2-c1): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec  6 02:37:18 np0005548731 ovn_controller[133927]: 2025-12-06T07:37:18Z|00575|binding|INFO|Releasing lport bfac9ac2-c1cc-4dc1-830c-44e5990fb8e1 from this chassis (sb_readonly=0)
Dec  6 02:37:18 np0005548731 ovn_controller[133927]: 2025-12-06T07:37:18Z|00576|binding|INFO|Setting lport bfac9ac2-c1cc-4dc1-830c-44e5990fb8e1 down in Southbound
Dec  6 02:37:18 np0005548731 nova_compute[232433]: 2025-12-06 07:37:18.483 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:37:18 np0005548731 ovn_controller[133927]: 2025-12-06T07:37:18Z|00577|binding|INFO|Removing iface tapbfac9ac2-c1 ovn-installed in OVS
Dec  6 02:37:18 np0005548731 nova_compute[232433]: 2025-12-06 07:37:18.486 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:37:18 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:37:18.496 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c3:31:81 10.100.0.6'], port_security=['fa:16:3e:c3:31:81 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '3d4b97f9-b8c3-4764-87f9-4006968fecd4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5bb49e8a-b939-4c79-851c-62c634be0272', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '833f4cf9f5a64b2ab94c3bf330353a31', 'neutron:revision_number': '4', 'neutron:security_group_ids': '44fc0f8f-27e8-483c-be93-2bb490e3ff3b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=56d5d28a-0d18-4549-b1d7-8420194c6348, chassis=[], tunnel_key=7, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], logical_port=bfac9ac2-c1cc-4dc1-830c-44e5990fb8e1) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 02:37:18 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:37:18.497 143965 INFO neutron.agent.ovn.metadata.agent [-] Port bfac9ac2-c1cc-4dc1-830c-44e5990fb8e1 in datapath 5bb49e8a-b939-4c79-851c-62c634be0272 unbound from our chassis#033[00m
Dec  6 02:37:18 np0005548731 nova_compute[232433]: 2025-12-06 07:37:18.500 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:37:18 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:37:18.500 143965 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 5bb49e8a-b939-4c79-851c-62c634be0272#033[00m
Dec  6 02:37:18 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:37:18.514 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[b33acb8a-d607-4040-ade9-fef5e4880fc8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:37:18 np0005548731 systemd[1]: machine-qemu\x2d57\x2dinstance\x2d00000081.scope: Deactivated successfully.
Dec  6 02:37:18 np0005548731 systemd[1]: machine-qemu\x2d57\x2dinstance\x2d00000081.scope: Consumed 17.401s CPU time.
Dec  6 02:37:18 np0005548731 systemd-machined[195355]: Machine qemu-57-instance-00000081 terminated.
Dec  6 02:37:18 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:37:18.539 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[3bbb155f-5194-4702-aedc-d60a391c055b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:37:18 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:37:18.542 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[362717fd-c165-48b5-9d42-b0eff58a6e8d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:37:18 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:37:18.567 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[f4932a02-e22b-4859-a020-7cebeba26c0f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:37:18 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:37:18.588 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[051675a9-7255-4798-a79e-d51da1bc09f3]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5bb49e8a-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:0b:bf:8b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 14, 'tx_packets': 13, 'rx_bytes': 868, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 14, 'tx_packets': 13, 'rx_bytes': 868, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 164], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 667055, 'reachable_time': 27555, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 288936, 'error': None, 'target': 'ovnmeta-5bb49e8a-b939-4c79-851c-62c634be0272', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:37:18 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:37:18.605 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[89d07c25-c413-4209-8e12-4832f39ed5de]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap5bb49e8a-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 667067, 'tstamp': 667067}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 288937, 'error': None, 'target': 'ovnmeta-5bb49e8a-b939-4c79-851c-62c634be0272', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap5bb49e8a-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 667070, 'tstamp': 667070}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 288937, 'error': None, 'target': 'ovnmeta-5bb49e8a-b939-4c79-851c-62c634be0272', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:37:18 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:37:18.606 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5bb49e8a-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:37:18 np0005548731 nova_compute[232433]: 2025-12-06 07:37:18.607 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:37:18 np0005548731 nova_compute[232433]: 2025-12-06 07:37:18.612 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:37:18 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:37:18.612 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5bb49e8a-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:37:18 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:37:18.612 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  6 02:37:18 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:37:18.613 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap5bb49e8a-b0, col_values=(('external_ids', {'iface-id': 'e4d89947-8fab-4c13-b2db-4eed875f77a0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:37:18 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:37:18.613 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  6 02:37:18 np0005548731 kernel: tapbfac9ac2-c1: entered promiscuous mode
Dec  6 02:37:18 np0005548731 NetworkManager[49182]: <info>  [1765006638.6674] manager: (tapbfac9ac2-c1): new Tun device (/org/freedesktop/NetworkManager/Devices/273)
Dec  6 02:37:18 np0005548731 kernel: tapbfac9ac2-c1 (unregistering): left promiscuous mode
Dec  6 02:37:18 np0005548731 nova_compute[232433]: 2025-12-06 07:37:18.669 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:37:18 np0005548731 ovn_controller[133927]: 2025-12-06T07:37:18Z|00578|binding|INFO|Claiming lport bfac9ac2-c1cc-4dc1-830c-44e5990fb8e1 for this chassis.
Dec  6 02:37:18 np0005548731 ovn_controller[133927]: 2025-12-06T07:37:18Z|00579|binding|INFO|bfac9ac2-c1cc-4dc1-830c-44e5990fb8e1: Claiming fa:16:3e:c3:31:81 10.100.0.6
Dec  6 02:37:18 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:37:18.681 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c3:31:81 10.100.0.6'], port_security=['fa:16:3e:c3:31:81 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '3d4b97f9-b8c3-4764-87f9-4006968fecd4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5bb49e8a-b939-4c79-851c-62c634be0272', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '833f4cf9f5a64b2ab94c3bf330353a31', 'neutron:revision_number': '4', 'neutron:security_group_ids': '44fc0f8f-27e8-483c-be93-2bb490e3ff3b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=56d5d28a-0d18-4549-b1d7-8420194c6348, chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], tunnel_key=7, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], logical_port=bfac9ac2-c1cc-4dc1-830c-44e5990fb8e1) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 02:37:18 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:37:18.683 143965 INFO neutron.agent.ovn.metadata.agent [-] Port bfac9ac2-c1cc-4dc1-830c-44e5990fb8e1 in datapath 5bb49e8a-b939-4c79-851c-62c634be0272 bound to our chassis#033[00m
Dec  6 02:37:18 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:37:18.685 143965 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 5bb49e8a-b939-4c79-851c-62c634be0272#033[00m
Dec  6 02:37:18 np0005548731 nova_compute[232433]: 2025-12-06 07:37:18.695 232437 INFO nova.virt.libvirt.driver [-] [instance: 3d4b97f9-b8c3-4764-87f9-4006968fecd4] Instance destroyed successfully.#033[00m
Dec  6 02:37:18 np0005548731 nova_compute[232433]: 2025-12-06 07:37:18.696 232437 DEBUG nova.virt.libvirt.vif [None req-a49e81ba-ab4d-4f15-a6a4-53e1cd348675 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T07:35:21Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='multiattach-server-1',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='multiattach-server-1',id=129,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJvvXUJX6GT0XQikpBKT/pWa9/7+F48wLkdnGcJYsqojmErT+oUc0gEHpXW8ulxQp5/Qun0IejqspzMhFiBiIMspmuXC7WiBfiNlH7z/XH9UjP9DXKhc6lZtmV9q0VvTnQ==',key_name='tempest-keypair-1449411209',keypairs=<?>,launch_index=0,launched_at=2025-12-06T07:35:43Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=MigrationContext,new_flavor=Flavor(2),node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='833f4cf9f5a64b2ab94c3bf330353a31',ramdisk_id='',reservation_id='r-pbz3nfcu',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=ServiceList,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',
image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-AttachVolumeMultiAttachTest-690984293',owner_user_name='tempest-AttachVolumeMultiAttachTest-690984293-project-member'},tags=<?>,task_state='resize_migrating',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T07:37:09Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='605b5481e0c944048e6a67046c30d693',uuid=3d4b97f9-b8c3-4764-87f9-4006968fecd4,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "bfac9ac2-c1cc-4dc1-830c-44e5990fb8e1", "address": "fa:16:3e:c3:31:81", "network": {"id": "5bb49e8a-b939-4c79-851c-62c634be0272", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-50482264-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-AttachVolumeMultiAttachTest-50482264-network", "vif_mac": "fa:16:3e:c3:31:81"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "833f4cf9f5a64b2ab94c3bf330353a31", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbfac9ac2-c1", "ovs_interfaceid": "bfac9ac2-c1cc-4dc1-830c-44e5990fb8e1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec  6 02:37:18 np0005548731 nova_compute[232433]: 2025-12-06 07:37:18.697 232437 DEBUG nova.network.os_vif_util [None req-a49e81ba-ab4d-4f15-a6a4-53e1cd348675 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Converting VIF {"id": "bfac9ac2-c1cc-4dc1-830c-44e5990fb8e1", "address": "fa:16:3e:c3:31:81", "network": {"id": "5bb49e8a-b939-4c79-851c-62c634be0272", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-50482264-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-AttachVolumeMultiAttachTest-50482264-network", "vif_mac": "fa:16:3e:c3:31:81"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "833f4cf9f5a64b2ab94c3bf330353a31", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbfac9ac2-c1", "ovs_interfaceid": "bfac9ac2-c1cc-4dc1-830c-44e5990fb8e1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 02:37:18 np0005548731 nova_compute[232433]: 2025-12-06 07:37:18.698 232437 DEBUG nova.network.os_vif_util [None req-a49e81ba-ab4d-4f15-a6a4-53e1cd348675 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:c3:31:81,bridge_name='br-int',has_traffic_filtering=True,id=bfac9ac2-c1cc-4dc1-830c-44e5990fb8e1,network=Network(5bb49e8a-b939-4c79-851c-62c634be0272),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbfac9ac2-c1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 02:37:18 np0005548731 nova_compute[232433]: 2025-12-06 07:37:18.698 232437 DEBUG os_vif [None req-a49e81ba-ab4d-4f15-a6a4-53e1cd348675 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:c3:31:81,bridge_name='br-int',has_traffic_filtering=True,id=bfac9ac2-c1cc-4dc1-830c-44e5990fb8e1,network=Network(5bb49e8a-b939-4c79-851c-62c634be0272),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbfac9ac2-c1') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec  6 02:37:18 np0005548731 nova_compute[232433]: 2025-12-06 07:37:18.703 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:37:18 np0005548731 nova_compute[232433]: 2025-12-06 07:37:18.703 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbfac9ac2-c1, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:37:18 np0005548731 nova_compute[232433]: 2025-12-06 07:37:18.706 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:37:18 np0005548731 ovn_controller[133927]: 2025-12-06T07:37:18Z|00580|binding|INFO|Releasing lport bfac9ac2-c1cc-4dc1-830c-44e5990fb8e1 from this chassis (sb_readonly=0)
Dec  6 02:37:18 np0005548731 nova_compute[232433]: 2025-12-06 07:37:18.708 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  6 02:37:18 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:37:18.710 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[f0e68005-afab-4cb3-b572-fdfe8bae2a05]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:37:18 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:37:18.718 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c3:31:81 10.100.0.6'], port_security=['fa:16:3e:c3:31:81 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '3d4b97f9-b8c3-4764-87f9-4006968fecd4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5bb49e8a-b939-4c79-851c-62c634be0272', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '833f4cf9f5a64b2ab94c3bf330353a31', 'neutron:revision_number': '4', 'neutron:security_group_ids': '44fc0f8f-27e8-483c-be93-2bb490e3ff3b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=56d5d28a-0d18-4549-b1d7-8420194c6348, chassis=[], tunnel_key=7, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], logical_port=bfac9ac2-c1cc-4dc1-830c-44e5990fb8e1) old=Port_Binding(chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 02:37:18 np0005548731 nova_compute[232433]: 2025-12-06 07:37:18.737 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:37:18 np0005548731 nova_compute[232433]: 2025-12-06 07:37:18.739 232437 INFO os_vif [None req-a49e81ba-ab4d-4f15-a6a4-53e1cd348675 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:c3:31:81,bridge_name='br-int',has_traffic_filtering=True,id=bfac9ac2-c1cc-4dc1-830c-44e5990fb8e1,network=Network(5bb49e8a-b939-4c79-851c-62c634be0272),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbfac9ac2-c1')#033[00m
Dec  6 02:37:18 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:37:18.747 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[f0f85d2a-06d6-441d-914b-d63173f1907e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:37:18 np0005548731 nova_compute[232433]: 2025-12-06 07:37:18.748 232437 DEBUG nova.compute.manager [req-2bbc746b-1a73-4bd7-a626-6815a7b042b1 req-5d573582-2bf1-462c-9e4a-8762fb8d188e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 3d4b97f9-b8c3-4764-87f9-4006968fecd4] Received event network-vif-unplugged-bfac9ac2-c1cc-4dc1-830c-44e5990fb8e1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:37:18 np0005548731 nova_compute[232433]: 2025-12-06 07:37:18.748 232437 DEBUG oslo_concurrency.lockutils [req-2bbc746b-1a73-4bd7-a626-6815a7b042b1 req-5d573582-2bf1-462c-9e4a-8762fb8d188e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "3d4b97f9-b8c3-4764-87f9-4006968fecd4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:37:18 np0005548731 nova_compute[232433]: 2025-12-06 07:37:18.749 232437 DEBUG oslo_concurrency.lockutils [req-2bbc746b-1a73-4bd7-a626-6815a7b042b1 req-5d573582-2bf1-462c-9e4a-8762fb8d188e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "3d4b97f9-b8c3-4764-87f9-4006968fecd4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:37:18 np0005548731 nova_compute[232433]: 2025-12-06 07:37:18.749 232437 DEBUG oslo_concurrency.lockutils [req-2bbc746b-1a73-4bd7-a626-6815a7b042b1 req-5d573582-2bf1-462c-9e4a-8762fb8d188e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "3d4b97f9-b8c3-4764-87f9-4006968fecd4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:37:18 np0005548731 nova_compute[232433]: 2025-12-06 07:37:18.749 232437 DEBUG nova.compute.manager [req-2bbc746b-1a73-4bd7-a626-6815a7b042b1 req-5d573582-2bf1-462c-9e4a-8762fb8d188e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 3d4b97f9-b8c3-4764-87f9-4006968fecd4] No waiting events found dispatching network-vif-unplugged-bfac9ac2-c1cc-4dc1-830c-44e5990fb8e1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 02:37:18 np0005548731 nova_compute[232433]: 2025-12-06 07:37:18.750 232437 WARNING nova.compute.manager [req-2bbc746b-1a73-4bd7-a626-6815a7b042b1 req-5d573582-2bf1-462c-9e4a-8762fb8d188e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 3d4b97f9-b8c3-4764-87f9-4006968fecd4] Received unexpected event network-vif-unplugged-bfac9ac2-c1cc-4dc1-830c-44e5990fb8e1 for instance with vm_state active and task_state resize_migrating.#033[00m
Dec  6 02:37:18 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:37:18.750 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[d0aed28d-c1d3-45f5-bdde-6babb08c9ce9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:37:18 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:37:18 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:37:18 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:37:18.752 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:37:18 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:37:18.777 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[7e927f63-f4b1-4b8b-b554-f92e74cd3d62]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:37:18 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:37:18.792 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[59c9b3d1-d2bb-429a-a8cb-21e640f291c0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5bb49e8a-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:0b:bf:8b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 14, 'tx_packets': 15, 'rx_bytes': 868, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 14, 'tx_packets': 15, 'rx_bytes': 868, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 164], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 667055, 'reachable_time': 27555, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 288952, 'error': None, 'target': 'ovnmeta-5bb49e8a-b939-4c79-851c-62c634be0272', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:37:18 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:37:18.806 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[c717adb9-758b-47aa-92f2-66b8e7438703]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap5bb49e8a-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 667067, 'tstamp': 667067}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 288953, 'error': None, 'target': 'ovnmeta-5bb49e8a-b939-4c79-851c-62c634be0272', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap5bb49e8a-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 667070, 'tstamp': 667070}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 288953, 'error': None, 'target': 'ovnmeta-5bb49e8a-b939-4c79-851c-62c634be0272', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:37:18 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:37:18.808 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5bb49e8a-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:37:18 np0005548731 nova_compute[232433]: 2025-12-06 07:37:18.809 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:37:18 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:37:18.810 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5bb49e8a-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:37:18 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:37:18.810 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  6 02:37:18 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:37:18.811 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap5bb49e8a-b0, col_values=(('external_ids', {'iface-id': 'e4d89947-8fab-4c13-b2db-4eed875f77a0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:37:18 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:37:18.811 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  6 02:37:18 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:37:18.812 143965 INFO neutron.agent.ovn.metadata.agent [-] Port bfac9ac2-c1cc-4dc1-830c-44e5990fb8e1 in datapath 5bb49e8a-b939-4c79-851c-62c634be0272 unbound from our chassis#033[00m
Dec  6 02:37:18 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:37:18.814 143965 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 5bb49e8a-b939-4c79-851c-62c634be0272#033[00m
Dec  6 02:37:18 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:37:18.829 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[c676cb21-b779-49f1-abdc-31ef454afaaf]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:37:18 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:37:18.857 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[d8eb3895-85ef-4305-92fc-6d7af54c4b2d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:37:18 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:37:18.860 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[c6857d2d-48cf-4c71-b1d3-f65f2eccebea]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:37:18 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:37:18.887 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[50c0a74e-86dd-4d84-9d4b-1b62c57d5cb2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:37:18 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:37:18.903 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[00cb1dad-1e1b-4ee8-b5ec-dc292adbce96]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5bb49e8a-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:0b:bf:8b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 14, 'tx_packets': 17, 'rx_bytes': 868, 'tx_bytes': 858, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 14, 'tx_packets': 17, 'rx_bytes': 868, 'tx_bytes': 858, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 164], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 667055, 'reachable_time': 27555, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 288959, 'error': None, 'target': 'ovnmeta-5bb49e8a-b939-4c79-851c-62c634be0272', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:37:18 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:37:18.918 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[097a8def-d45f-4a23-8eae-e702a4e4fb5c]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap5bb49e8a-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 667067, 'tstamp': 667067}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 288960, 'error': None, 'target': 'ovnmeta-5bb49e8a-b939-4c79-851c-62c634be0272', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap5bb49e8a-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 667070, 'tstamp': 667070}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 288960, 'error': None, 'target': 'ovnmeta-5bb49e8a-b939-4c79-851c-62c634be0272', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:37:18 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:37:18.919 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5bb49e8a-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:37:18 np0005548731 nova_compute[232433]: 2025-12-06 07:37:18.921 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:37:18 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:37:18.922 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5bb49e8a-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:37:18 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:37:18.922 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  6 02:37:18 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:37:18.922 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap5bb49e8a-b0, col_values=(('external_ids', {'iface-id': 'e4d89947-8fab-4c13-b2db-4eed875f77a0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:37:18 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:37:18.923 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  6 02:37:18 np0005548731 nova_compute[232433]: 2025-12-06 07:37:18.963 232437 INFO nova.virt.libvirt.driver [None req-a49e81ba-ab4d-4f15-a6a4-53e1cd348675 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Detected multiple connections on this host for volume: fb115316-35b8-4af3-9847-c6315737a158, skipping target disconnect.#033[00m
Dec  6 02:37:18 np0005548731 nova_compute[232433]: 2025-12-06 07:37:18.967 232437 DEBUG nova.virt.libvirt.driver [None req-a49e81ba-ab4d-4f15-a6a4-53e1cd348675 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] skipping disk for instance-00000081 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  6 02:37:18 np0005548731 nova_compute[232433]: 2025-12-06 07:37:18.967 232437 DEBUG nova.virt.libvirt.driver [None req-a49e81ba-ab4d-4f15-a6a4-53e1cd348675 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] skipping disk for instance-00000081 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  6 02:37:18 np0005548731 nova_compute[232433]: 2025-12-06 07:37:18.968 232437 DEBUG nova.virt.libvirt.driver [None req-a49e81ba-ab4d-4f15-a6a4-53e1cd348675 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] skipping disk for instance-00000081 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  6 02:37:19 np0005548731 nova_compute[232433]: 2025-12-06 07:37:19.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:37:19 np0005548731 nova_compute[232433]: 2025-12-06 07:37:19.133 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:37:19 np0005548731 nova_compute[232433]: 2025-12-06 07:37:19.134 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:37:19 np0005548731 nova_compute[232433]: 2025-12-06 07:37:19.134 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:37:19 np0005548731 nova_compute[232433]: 2025-12-06 07:37:19.135 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  6 02:37:19 np0005548731 nova_compute[232433]: 2025-12-06 07:37:19.135 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:37:19 np0005548731 nova_compute[232433]: 2025-12-06 07:37:19.476 232437 DEBUG neutronclient.v2_0.client [None req-a49e81ba-ab4d-4f15-a6a4-53e1cd348675 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Error message: {"NeutronError": {"type": "PortBindingNotFound", "message": "Binding for port bfac9ac2-c1cc-4dc1-830c-44e5990fb8e1 for host compute-1.ctlplane.example.com could not be found.", "detail": ""}} _handle_fault_response /usr/lib/python3.9/site-packages/neutronclient/v2_0/client.py:262#033[00m
Dec  6 02:37:19 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 02:37:19 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3578164270' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 02:37:19 np0005548731 nova_compute[232433]: 2025-12-06 07:37:19.585 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.450s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:37:19 np0005548731 nova_compute[232433]: 2025-12-06 07:37:19.630 232437 DEBUG oslo_concurrency.lockutils [None req-a49e81ba-ab4d-4f15-a6a4-53e1cd348675 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Acquiring lock "3d4b97f9-b8c3-4764-87f9-4006968fecd4-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:37:19 np0005548731 nova_compute[232433]: 2025-12-06 07:37:19.630 232437 DEBUG oslo_concurrency.lockutils [None req-a49e81ba-ab4d-4f15-a6a4-53e1cd348675 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Lock "3d4b97f9-b8c3-4764-87f9-4006968fecd4-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:37:19 np0005548731 nova_compute[232433]: 2025-12-06 07:37:19.631 232437 DEBUG oslo_concurrency.lockutils [None req-a49e81ba-ab4d-4f15-a6a4-53e1cd348675 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Lock "3d4b97f9-b8c3-4764-87f9-4006968fecd4-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:37:19 np0005548731 nova_compute[232433]: 2025-12-06 07:37:19.698 232437 DEBUG nova.virt.libvirt.driver [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] skipping disk for instance-0000007f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  6 02:37:19 np0005548731 nova_compute[232433]: 2025-12-06 07:37:19.699 232437 DEBUG nova.virt.libvirt.driver [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] skipping disk for instance-0000007f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  6 02:37:19 np0005548731 nova_compute[232433]: 2025-12-06 07:37:19.699 232437 DEBUG nova.virt.libvirt.driver [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] skipping disk for instance-0000007f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  6 02:37:19 np0005548731 nova_compute[232433]: 2025-12-06 07:37:19.703 232437 DEBUG nova.virt.libvirt.driver [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] skipping disk for instance-00000081 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  6 02:37:19 np0005548731 nova_compute[232433]: 2025-12-06 07:37:19.703 232437 DEBUG nova.virt.libvirt.driver [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] skipping disk for instance-00000081 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  6 02:37:19 np0005548731 nova_compute[232433]: 2025-12-06 07:37:19.703 232437 DEBUG nova.virt.libvirt.driver [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] skipping disk for instance-00000081 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  6 02:37:19 np0005548731 nova_compute[232433]: 2025-12-06 07:37:19.706 232437 DEBUG nova.virt.libvirt.driver [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] skipping disk for instance-00000078 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  6 02:37:19 np0005548731 nova_compute[232433]: 2025-12-06 07:37:19.706 232437 DEBUG nova.virt.libvirt.driver [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] skipping disk for instance-00000078 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  6 02:37:19 np0005548731 nova_compute[232433]: 2025-12-06 07:37:19.709 232437 DEBUG nova.virt.libvirt.driver [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] skipping disk for instance-00000085 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  6 02:37:19 np0005548731 nova_compute[232433]: 2025-12-06 07:37:19.709 232437 DEBUG nova.virt.libvirt.driver [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] skipping disk for instance-00000085 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  6 02:37:19 np0005548731 nova_compute[232433]: 2025-12-06 07:37:19.874 232437 WARNING nova.virt.libvirt.driver [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  6 02:37:19 np0005548731 nova_compute[232433]: 2025-12-06 07:37:19.876 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=3808MB free_disk=20.647933959960938GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, 
"label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  6 02:37:19 np0005548731 nova_compute[232433]: 2025-12-06 07:37:19.876 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:37:19 np0005548731 nova_compute[232433]: 2025-12-06 07:37:19.877 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:37:19 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:37:19 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:37:19 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:37:19.921 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:37:19 np0005548731 nova_compute[232433]: 2025-12-06 07:37:19.929 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Migration for instance 3d4b97f9-b8c3-4764-87f9-4006968fecd4 refers to another host's instance! _pair_instances_to_migrations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:903#033[00m
Dec  6 02:37:19 np0005548731 nova_compute[232433]: 2025-12-06 07:37:19.947 232437 INFO nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] [instance: 3d4b97f9-b8c3-4764-87f9-4006968fecd4] Updating resource usage from migration e793cfc3-2f7a-47a4-9e94-7497b4835c62#033[00m
Dec  6 02:37:19 np0005548731 nova_compute[232433]: 2025-12-06 07:37:19.948 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] [instance: 3d4b97f9-b8c3-4764-87f9-4006968fecd4] Starting to track outgoing migration e793cfc3-2f7a-47a4-9e94-7497b4835c62 with flavor 25848a18-11d9-4f11-80b5-5d005675c76d _update_usage_from_migration /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1444#033[00m
Dec  6 02:37:19 np0005548731 nova_compute[232433]: 2025-12-06 07:37:19.978 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Instance 6a50a40c-3b05-4c0e-aa67-1489e203824e actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  6 02:37:19 np0005548731 nova_compute[232433]: 2025-12-06 07:37:19.979 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Instance b4be0ef8-945f-47a1-a3a8-5962f1e692e5 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 192, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  6 02:37:19 np0005548731 nova_compute[232433]: 2025-12-06 07:37:19.979 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Instance ffee41ca-ba3b-4787-8435-b3903ffb29a9 actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  6 02:37:19 np0005548731 nova_compute[232433]: 2025-12-06 07:37:19.980 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Migration e793cfc3-2f7a-47a4-9e94-7497b4835c62 is active on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1640#033[00m
Dec  6 02:37:19 np0005548731 nova_compute[232433]: 2025-12-06 07:37:19.980 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 4 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  6 02:37:19 np0005548731 nova_compute[232433]: 2025-12-06 07:37:19.980 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7680MB used_ram=1088MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=4 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  6 02:37:20 np0005548731 nova_compute[232433]: 2025-12-06 07:37:20.075 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:37:20 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 02:37:20 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2304411446' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 02:37:20 np0005548731 nova_compute[232433]: 2025-12-06 07:37:20.506 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.431s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:37:20 np0005548731 nova_compute[232433]: 2025-12-06 07:37:20.512 232437 DEBUG nova.compute.provider_tree [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Inventory has not changed in ProviderTree for provider: 6d00757a-082f-486d-ae84-869a2ba2e6e7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  6 02:37:20 np0005548731 nova_compute[232433]: 2025-12-06 07:37:20.529 232437 DEBUG nova.scheduler.client.report [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Inventory has not changed for provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  6 02:37:20 np0005548731 nova_compute[232433]: 2025-12-06 07:37:20.559 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  6 02:37:20 np0005548731 nova_compute[232433]: 2025-12-06 07:37:20.560 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.683s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:37:20 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:37:20 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:37:20 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:37:20.754 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:37:20 np0005548731 nova_compute[232433]: 2025-12-06 07:37:20.905 232437 DEBUG nova.compute.manager [req-ede48b2d-b231-485a-aa10-ccf0f741c045 req-c8a45aa8-39f1-46ae-b38e-1bb10d8aba21 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 3d4b97f9-b8c3-4764-87f9-4006968fecd4] Received event network-vif-plugged-bfac9ac2-c1cc-4dc1-830c-44e5990fb8e1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:37:20 np0005548731 nova_compute[232433]: 2025-12-06 07:37:20.906 232437 DEBUG oslo_concurrency.lockutils [req-ede48b2d-b231-485a-aa10-ccf0f741c045 req-c8a45aa8-39f1-46ae-b38e-1bb10d8aba21 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "3d4b97f9-b8c3-4764-87f9-4006968fecd4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:37:20 np0005548731 nova_compute[232433]: 2025-12-06 07:37:20.907 232437 DEBUG oslo_concurrency.lockutils [req-ede48b2d-b231-485a-aa10-ccf0f741c045 req-c8a45aa8-39f1-46ae-b38e-1bb10d8aba21 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "3d4b97f9-b8c3-4764-87f9-4006968fecd4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:37:20 np0005548731 nova_compute[232433]: 2025-12-06 07:37:20.907 232437 DEBUG oslo_concurrency.lockutils [req-ede48b2d-b231-485a-aa10-ccf0f741c045 req-c8a45aa8-39f1-46ae-b38e-1bb10d8aba21 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "3d4b97f9-b8c3-4764-87f9-4006968fecd4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:37:20 np0005548731 nova_compute[232433]: 2025-12-06 07:37:20.907 232437 DEBUG nova.compute.manager [req-ede48b2d-b231-485a-aa10-ccf0f741c045 req-c8a45aa8-39f1-46ae-b38e-1bb10d8aba21 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 3d4b97f9-b8c3-4764-87f9-4006968fecd4] No waiting events found dispatching network-vif-plugged-bfac9ac2-c1cc-4dc1-830c-44e5990fb8e1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 02:37:20 np0005548731 nova_compute[232433]: 2025-12-06 07:37:20.907 232437 WARNING nova.compute.manager [req-ede48b2d-b231-485a-aa10-ccf0f741c045 req-c8a45aa8-39f1-46ae-b38e-1bb10d8aba21 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 3d4b97f9-b8c3-4764-87f9-4006968fecd4] Received unexpected event network-vif-plugged-bfac9ac2-c1cc-4dc1-830c-44e5990fb8e1 for instance with vm_state active and task_state resize_migrated.#033[00m
Dec  6 02:37:21 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:37:21 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:37:21 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:37:21.924 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:37:22 np0005548731 nova_compute[232433]: 2025-12-06 07:37:22.555 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:37:22 np0005548731 nova_compute[232433]: 2025-12-06 07:37:22.582 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:37:22 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:37:22 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:37:22 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:37:22.757 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:37:23 np0005548731 nova_compute[232433]: 2025-12-06 07:37:23.017 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:37:23 np0005548731 nova_compute[232433]: 2025-12-06 07:37:23.037 232437 DEBUG nova.compute.manager [req-3807eb01-6d05-4d1e-a54e-c5a3cba407ac req-195b8230-ece3-4e72-b7d7-6f42a019df84 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 3d4b97f9-b8c3-4764-87f9-4006968fecd4] Received event network-changed-bfac9ac2-c1cc-4dc1-830c-44e5990fb8e1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:37:23 np0005548731 nova_compute[232433]: 2025-12-06 07:37:23.038 232437 DEBUG nova.compute.manager [req-3807eb01-6d05-4d1e-a54e-c5a3cba407ac req-195b8230-ece3-4e72-b7d7-6f42a019df84 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 3d4b97f9-b8c3-4764-87f9-4006968fecd4] Refreshing instance network info cache due to event network-changed-bfac9ac2-c1cc-4dc1-830c-44e5990fb8e1. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  6 02:37:23 np0005548731 nova_compute[232433]: 2025-12-06 07:37:23.038 232437 DEBUG oslo_concurrency.lockutils [req-3807eb01-6d05-4d1e-a54e-c5a3cba407ac req-195b8230-ece3-4e72-b7d7-6f42a019df84 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-3d4b97f9-b8c3-4764-87f9-4006968fecd4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 02:37:23 np0005548731 nova_compute[232433]: 2025-12-06 07:37:23.039 232437 DEBUG oslo_concurrency.lockutils [req-3807eb01-6d05-4d1e-a54e-c5a3cba407ac req-195b8230-ece3-4e72-b7d7-6f42a019df84 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-3d4b97f9-b8c3-4764-87f9-4006968fecd4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 02:37:23 np0005548731 nova_compute[232433]: 2025-12-06 07:37:23.039 232437 DEBUG nova.network.neutron [req-3807eb01-6d05-4d1e-a54e-c5a3cba407ac req-195b8230-ece3-4e72-b7d7-6f42a019df84 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 3d4b97f9-b8c3-4764-87f9-4006968fecd4] Refreshing network info cache for port bfac9ac2-c1cc-4dc1-830c-44e5990fb8e1 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  6 02:37:23 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  6 02:37:23 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 02:37:23 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  6 02:37:23 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:37:23 np0005548731 nova_compute[232433]: 2025-12-06 07:37:23.707 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:37:23 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:37:23 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 02:37:23 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:37:23.928 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 02:37:23 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  6 02:37:23 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/873242481' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  6 02:37:24 np0005548731 nova_compute[232433]: 2025-12-06 07:37:24.692 232437 DEBUG nova.network.neutron [req-3807eb01-6d05-4d1e-a54e-c5a3cba407ac req-195b8230-ece3-4e72-b7d7-6f42a019df84 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 3d4b97f9-b8c3-4764-87f9-4006968fecd4] Updated VIF entry in instance network info cache for port bfac9ac2-c1cc-4dc1-830c-44e5990fb8e1. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  6 02:37:24 np0005548731 nova_compute[232433]: 2025-12-06 07:37:24.693 232437 DEBUG nova.network.neutron [req-3807eb01-6d05-4d1e-a54e-c5a3cba407ac req-195b8230-ece3-4e72-b7d7-6f42a019df84 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 3d4b97f9-b8c3-4764-87f9-4006968fecd4] Updating instance_info_cache with network_info: [{"id": "bfac9ac2-c1cc-4dc1-830c-44e5990fb8e1", "address": "fa:16:3e:c3:31:81", "network": {"id": "5bb49e8a-b939-4c79-851c-62c634be0272", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-50482264-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "833f4cf9f5a64b2ab94c3bf330353a31", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbfac9ac2-c1", "ovs_interfaceid": "bfac9ac2-c1cc-4dc1-830c-44e5990fb8e1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 02:37:24 np0005548731 nova_compute[232433]: 2025-12-06 07:37:24.722 232437 DEBUG oslo_concurrency.lockutils [req-3807eb01-6d05-4d1e-a54e-c5a3cba407ac req-195b8230-ece3-4e72-b7d7-6f42a019df84 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-3d4b97f9-b8c3-4764-87f9-4006968fecd4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 02:37:24 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:37:24 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:37:24 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:37:24.759 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:37:24 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  6 02:37:24 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/671171046' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  6 02:37:25 np0005548731 nova_compute[232433]: 2025-12-06 07:37:25.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:37:25 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:37:25 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:37:25 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:37:25.932 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:37:26 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:37:26 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 02:37:26 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:37:26.762 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 02:37:27 np0005548731 podman[289141]: 2025-12-06 07:37:27.898156451 +0000 UTC m=+0.057627465 container health_status 26c2ce9bdb83c6db5ccad7f5b2a7cb4e9354ab0235ad2f942ea42c8feda2142b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, 
container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Dec  6 02:37:27 np0005548731 podman[289143]: 2025-12-06 07:37:27.909384364 +0000 UTC m=+0.065874225 container health_status ae4fd08cdec31d6ae2a6b91ffff7deb6ca5a8375cb52ee4b685cc6f25796824e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=multipathd, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true)
Dec  6 02:37:27 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:37:27 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:37:27 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:37:27.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:37:27 np0005548731 podman[289142]: 2025-12-06 07:37:27.959443674 +0000 UTC m=+0.116767366 container health_status a118c3becf56cfbcae208065771ebf10a65162bb7b96389971293e534823fb78 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  6 02:37:28 np0005548731 nova_compute[232433]: 2025-12-06 07:37:28.019 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:37:28 np0005548731 ceph-osd[80147]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #53. Immutable memtables: 9.
Dec  6 02:37:28 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:37:28 np0005548731 nova_compute[232433]: 2025-12-06 07:37:28.709 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:37:28 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:37:28 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:37:28 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:37:28.765 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:37:29 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:37:29 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:37:29 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:37:29.936 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:37:30 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:37:30 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:37:30 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:37:30.768 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:37:31 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:37:31 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:37:31 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:37:31.938 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:37:31 np0005548731 ceph-mds[82074]: mds.beacon.cephfs.compute-2.tjfgow missed beacon ack from the monitors
Dec  6 02:37:32 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:37:32 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:37:32 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:37:32.770 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:37:33 np0005548731 nova_compute[232433]: 2025-12-06 07:37:33.021 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:37:33 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:37:33 np0005548731 nova_compute[232433]: 2025-12-06 07:37:33.691 232437 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765006638.6901693, 3d4b97f9-b8c3-4764-87f9-4006968fecd4 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 02:37:33 np0005548731 nova_compute[232433]: 2025-12-06 07:37:33.691 232437 INFO nova.compute.manager [-] [instance: 3d4b97f9-b8c3-4764-87f9-4006968fecd4] VM Stopped (Lifecycle Event)#033[00m
Dec  6 02:37:33 np0005548731 nova_compute[232433]: 2025-12-06 07:37:33.711 232437 DEBUG nova.compute.manager [None req-70b041d5-e060-4413-9b1a-18d84bd6adb6 - - - - - -] [instance: 3d4b97f9-b8c3-4764-87f9-4006968fecd4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:37:33 np0005548731 nova_compute[232433]: 2025-12-06 07:37:33.712 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:37:33 np0005548731 nova_compute[232433]: 2025-12-06 07:37:33.715 232437 DEBUG nova.compute.manager [None req-70b041d5-e060-4413-9b1a-18d84bd6adb6 - - - - - -] [instance: 3d4b97f9-b8c3-4764-87f9-4006968fecd4] Synchronizing instance power state after lifecycle event "Stopped"; current vm_state: active, current task_state: resize_finish, current DB power_state: 1, VM power_state: 4 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  6 02:37:33 np0005548731 nova_compute[232433]: 2025-12-06 07:37:33.735 232437 INFO nova.compute.manager [None req-70b041d5-e060-4413-9b1a-18d84bd6adb6 - - - - - -] [instance: 3d4b97f9-b8c3-4764-87f9-4006968fecd4] During the sync_power process the instance has moved from host compute-1.ctlplane.example.com to host compute-2.ctlplane.example.com#033[00m
Dec  6 02:37:33 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e304 e304: 3 total, 3 up, 3 in
Dec  6 02:37:33 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:37:33 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:37:33 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:37:33.940 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:37:34 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:37:34 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:37:34 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:37:34.773 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:37:34 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 02:37:34 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 02:37:35 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:37:35 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:37:35 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:37:35.942 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:37:36 np0005548731 ovn_controller[133927]: 2025-12-06T07:37:36Z|00084|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:d3:0b:9b 10.100.0.7
Dec  6 02:37:36 np0005548731 ovn_controller[133927]: 2025-12-06T07:37:36Z|00085|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:d3:0b:9b 10.100.0.7
Dec  6 02:37:36 np0005548731 nova_compute[232433]: 2025-12-06 07:37:36.731 232437 DEBUG nova.compute.manager [req-3c04fd27-9e92-4ffd-9360-d648b4956f92 req-423fd49f-994a-4764-9cf5-971b67649254 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 3d4b97f9-b8c3-4764-87f9-4006968fecd4] Received event network-vif-plugged-bfac9ac2-c1cc-4dc1-830c-44e5990fb8e1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:37:36 np0005548731 nova_compute[232433]: 2025-12-06 07:37:36.731 232437 DEBUG oslo_concurrency.lockutils [req-3c04fd27-9e92-4ffd-9360-d648b4956f92 req-423fd49f-994a-4764-9cf5-971b67649254 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "3d4b97f9-b8c3-4764-87f9-4006968fecd4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:37:36 np0005548731 nova_compute[232433]: 2025-12-06 07:37:36.731 232437 DEBUG oslo_concurrency.lockutils [req-3c04fd27-9e92-4ffd-9360-d648b4956f92 req-423fd49f-994a-4764-9cf5-971b67649254 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "3d4b97f9-b8c3-4764-87f9-4006968fecd4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:37:36 np0005548731 nova_compute[232433]: 2025-12-06 07:37:36.732 232437 DEBUG oslo_concurrency.lockutils [req-3c04fd27-9e92-4ffd-9360-d648b4956f92 req-423fd49f-994a-4764-9cf5-971b67649254 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "3d4b97f9-b8c3-4764-87f9-4006968fecd4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:37:36 np0005548731 nova_compute[232433]: 2025-12-06 07:37:36.732 232437 DEBUG nova.compute.manager [req-3c04fd27-9e92-4ffd-9360-d648b4956f92 req-423fd49f-994a-4764-9cf5-971b67649254 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 3d4b97f9-b8c3-4764-87f9-4006968fecd4] No waiting events found dispatching network-vif-plugged-bfac9ac2-c1cc-4dc1-830c-44e5990fb8e1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 02:37:36 np0005548731 nova_compute[232433]: 2025-12-06 07:37:36.732 232437 WARNING nova.compute.manager [req-3c04fd27-9e92-4ffd-9360-d648b4956f92 req-423fd49f-994a-4764-9cf5-971b67649254 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 3d4b97f9-b8c3-4764-87f9-4006968fecd4] Received unexpected event network-vif-plugged-bfac9ac2-c1cc-4dc1-830c-44e5990fb8e1 for instance with vm_state resized and task_state None.#033[00m
Dec  6 02:37:36 np0005548731 nova_compute[232433]: 2025-12-06 07:37:36.732 232437 DEBUG nova.compute.manager [req-3c04fd27-9e92-4ffd-9360-d648b4956f92 req-423fd49f-994a-4764-9cf5-971b67649254 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 3d4b97f9-b8c3-4764-87f9-4006968fecd4] Received event network-vif-plugged-bfac9ac2-c1cc-4dc1-830c-44e5990fb8e1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:37:36 np0005548731 nova_compute[232433]: 2025-12-06 07:37:36.733 232437 DEBUG oslo_concurrency.lockutils [req-3c04fd27-9e92-4ffd-9360-d648b4956f92 req-423fd49f-994a-4764-9cf5-971b67649254 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "3d4b97f9-b8c3-4764-87f9-4006968fecd4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:37:36 np0005548731 nova_compute[232433]: 2025-12-06 07:37:36.733 232437 DEBUG oslo_concurrency.lockutils [req-3c04fd27-9e92-4ffd-9360-d648b4956f92 req-423fd49f-994a-4764-9cf5-971b67649254 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "3d4b97f9-b8c3-4764-87f9-4006968fecd4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:37:36 np0005548731 nova_compute[232433]: 2025-12-06 07:37:36.733 232437 DEBUG oslo_concurrency.lockutils [req-3c04fd27-9e92-4ffd-9360-d648b4956f92 req-423fd49f-994a-4764-9cf5-971b67649254 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "3d4b97f9-b8c3-4764-87f9-4006968fecd4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:37:36 np0005548731 nova_compute[232433]: 2025-12-06 07:37:36.733 232437 DEBUG nova.compute.manager [req-3c04fd27-9e92-4ffd-9360-d648b4956f92 req-423fd49f-994a-4764-9cf5-971b67649254 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 3d4b97f9-b8c3-4764-87f9-4006968fecd4] No waiting events found dispatching network-vif-plugged-bfac9ac2-c1cc-4dc1-830c-44e5990fb8e1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 02:37:36 np0005548731 nova_compute[232433]: 2025-12-06 07:37:36.733 232437 WARNING nova.compute.manager [req-3c04fd27-9e92-4ffd-9360-d648b4956f92 req-423fd49f-994a-4764-9cf5-971b67649254 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 3d4b97f9-b8c3-4764-87f9-4006968fecd4] Received unexpected event network-vif-plugged-bfac9ac2-c1cc-4dc1-830c-44e5990fb8e1 for instance with vm_state resized and task_state None.#033[00m
Dec  6 02:37:36 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:37:36 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:37:36 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:37:36.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:37:37 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:37:37 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:37:37 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:37:37.944 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:37:38 np0005548731 nova_compute[232433]: 2025-12-06 07:37:38.057 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:37:38 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e304 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:37:38 np0005548731 nova_compute[232433]: 2025-12-06 07:37:38.606 232437 DEBUG oslo_concurrency.lockutils [None req-9c4a2ffd-767e-48af-b28c-8164a7a53043 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Acquiring lock "3d4b97f9-b8c3-4764-87f9-4006968fecd4" by "nova.compute.manager.ComputeManager.confirm_resize.<locals>.do_confirm_resize" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:37:38 np0005548731 nova_compute[232433]: 2025-12-06 07:37:38.607 232437 DEBUG oslo_concurrency.lockutils [None req-9c4a2ffd-767e-48af-b28c-8164a7a53043 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Lock "3d4b97f9-b8c3-4764-87f9-4006968fecd4" acquired by "nova.compute.manager.ComputeManager.confirm_resize.<locals>.do_confirm_resize" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:37:38 np0005548731 nova_compute[232433]: 2025-12-06 07:37:38.607 232437 DEBUG nova.compute.manager [None req-9c4a2ffd-767e-48af-b28c-8164a7a53043 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] [instance: 3d4b97f9-b8c3-4764-87f9-4006968fecd4] Going to confirm migration 19 do_confirm_resize /usr/lib/python3.9/site-packages/nova/compute/manager.py:4679#033[00m
Dec  6 02:37:38 np0005548731 nova_compute[232433]: 2025-12-06 07:37:38.715 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:37:38 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:37:38 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:37:38 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:37:38.778 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:37:39 np0005548731 nova_compute[232433]: 2025-12-06 07:37:39.012 232437 DEBUG neutronclient.v2_0.client [None req-9c4a2ffd-767e-48af-b28c-8164a7a53043 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Error message: {"NeutronError": {"type": "PortBindingNotFound", "message": "Binding for port bfac9ac2-c1cc-4dc1-830c-44e5990fb8e1 for host compute-2.ctlplane.example.com could not be found.", "detail": ""}} _handle_fault_response /usr/lib/python3.9/site-packages/neutronclient/v2_0/client.py:262#033[00m
Dec  6 02:37:39 np0005548731 nova_compute[232433]: 2025-12-06 07:37:39.013 232437 DEBUG oslo_concurrency.lockutils [None req-9c4a2ffd-767e-48af-b28c-8164a7a53043 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Acquiring lock "refresh_cache-3d4b97f9-b8c3-4764-87f9-4006968fecd4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 02:37:39 np0005548731 nova_compute[232433]: 2025-12-06 07:37:39.013 232437 DEBUG oslo_concurrency.lockutils [None req-9c4a2ffd-767e-48af-b28c-8164a7a53043 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Acquired lock "refresh_cache-3d4b97f9-b8c3-4764-87f9-4006968fecd4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 02:37:39 np0005548731 nova_compute[232433]: 2025-12-06 07:37:39.013 232437 DEBUG nova.network.neutron [None req-9c4a2ffd-767e-48af-b28c-8164a7a53043 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] [instance: 3d4b97f9-b8c3-4764-87f9-4006968fecd4] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec  6 02:37:39 np0005548731 nova_compute[232433]: 2025-12-06 07:37:39.013 232437 DEBUG nova.objects.instance [None req-9c4a2ffd-767e-48af-b28c-8164a7a53043 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Lazy-loading 'info_cache' on Instance uuid 3d4b97f9-b8c3-4764-87f9-4006968fecd4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:37:39 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:37:39 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:37:39 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:37:39.946 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:37:40 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:37:40 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.002000047s ======
Dec  6 02:37:40 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:37:40.779 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000047s
Dec  6 02:37:41 np0005548731 nova_compute[232433]: 2025-12-06 07:37:41.004 232437 DEBUG nova.network.neutron [None req-9c4a2ffd-767e-48af-b28c-8164a7a53043 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] [instance: 3d4b97f9-b8c3-4764-87f9-4006968fecd4] Updating instance_info_cache with network_info: [{"id": "bfac9ac2-c1cc-4dc1-830c-44e5990fb8e1", "address": "fa:16:3e:c3:31:81", "network": {"id": "5bb49e8a-b939-4c79-851c-62c634be0272", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-50482264-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "833f4cf9f5a64b2ab94c3bf330353a31", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbfac9ac2-c1", "ovs_interfaceid": "bfac9ac2-c1cc-4dc1-830c-44e5990fb8e1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 02:37:41 np0005548731 nova_compute[232433]: 2025-12-06 07:37:41.030 232437 DEBUG oslo_concurrency.lockutils [None req-9c4a2ffd-767e-48af-b28c-8164a7a53043 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Releasing lock "refresh_cache-3d4b97f9-b8c3-4764-87f9-4006968fecd4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 02:37:41 np0005548731 nova_compute[232433]: 2025-12-06 07:37:41.030 232437 DEBUG nova.objects.instance [None req-9c4a2ffd-767e-48af-b28c-8164a7a53043 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Lazy-loading 'migration_context' on Instance uuid 3d4b97f9-b8c3-4764-87f9-4006968fecd4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:37:41 np0005548731 nova_compute[232433]: 2025-12-06 07:37:41.129 232437 DEBUG nova.storage.rbd_utils [None req-9c4a2ffd-767e-48af-b28c-8164a7a53043 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] removing snapshot(nova-resize) on rbd image(3d4b97f9-b8c3-4764-87f9-4006968fecd4_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Dec  6 02:37:41 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:37:41 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 02:37:41 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:37:41.948 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 02:37:42 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e305 e305: 3 total, 3 up, 3 in
Dec  6 02:37:42 np0005548731 nova_compute[232433]: 2025-12-06 07:37:42.092 232437 DEBUG nova.virt.libvirt.vif [None req-9c4a2ffd-767e-48af-b28c-8164a7a53043 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T07:35:21Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='multiattach-server-1',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-1.ctlplane.example.com',hostname='multiattach-server-1',id=129,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJvvXUJX6GT0XQikpBKT/pWa9/7+F48wLkdnGcJYsqojmErT+oUc0gEHpXW8ulxQp5/Qun0IejqspzMhFiBiIMspmuXC7WiBfiNlH7z/XH9UjP9DXKhc6lZtmV9q0VvTnQ==',key_name='tempest-keypair-1449411209',keypairs=<?>,launch_index=0,launched_at=2025-12-06T07:37:36Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=MigrationContext,new_flavor=Flavor(2),node='compute-1.ctlplane.example.com',numa_topology=<?>,old_flavor=Flavor(1),os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='833f4cf9f5a64b2ab94c3bf330353a31',ramdisk_id='',reservation_id='r-pbz3nfcu',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_typ
e='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-AttachVolumeMultiAttachTest-690984293',owner_user_name='tempest-AttachVolumeMultiAttachTest-690984293-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T07:37:36Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='605b5481e0c944048e6a67046c30d693',uuid=3d4b97f9-b8c3-4764-87f9-4006968fecd4,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='resized') vif={"id": "bfac9ac2-c1cc-4dc1-830c-44e5990fb8e1", "address": "fa:16:3e:c3:31:81", "network": {"id": "5bb49e8a-b939-4c79-851c-62c634be0272", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-50482264-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "833f4cf9f5a64b2ab94c3bf330353a31", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbfac9ac2-c1", "ovs_interfaceid": "bfac9ac2-c1cc-4dc1-830c-44e5990fb8e1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec  6 02:37:42 np0005548731 nova_compute[232433]: 2025-12-06 07:37:42.093 232437 DEBUG nova.network.os_vif_util [None req-9c4a2ffd-767e-48af-b28c-8164a7a53043 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Converting VIF {"id": "bfac9ac2-c1cc-4dc1-830c-44e5990fb8e1", "address": "fa:16:3e:c3:31:81", "network": {"id": "5bb49e8a-b939-4c79-851c-62c634be0272", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-50482264-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "833f4cf9f5a64b2ab94c3bf330353a31", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbfac9ac2-c1", "ovs_interfaceid": "bfac9ac2-c1cc-4dc1-830c-44e5990fb8e1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 02:37:42 np0005548731 nova_compute[232433]: 2025-12-06 07:37:42.094 232437 DEBUG nova.network.os_vif_util [None req-9c4a2ffd-767e-48af-b28c-8164a7a53043 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:c3:31:81,bridge_name='br-int',has_traffic_filtering=True,id=bfac9ac2-c1cc-4dc1-830c-44e5990fb8e1,network=Network(5bb49e8a-b939-4c79-851c-62c634be0272),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbfac9ac2-c1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 02:37:42 np0005548731 nova_compute[232433]: 2025-12-06 07:37:42.095 232437 DEBUG os_vif [None req-9c4a2ffd-767e-48af-b28c-8164a7a53043 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:c3:31:81,bridge_name='br-int',has_traffic_filtering=True,id=bfac9ac2-c1cc-4dc1-830c-44e5990fb8e1,network=Network(5bb49e8a-b939-4c79-851c-62c634be0272),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbfac9ac2-c1') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec  6 02:37:42 np0005548731 nova_compute[232433]: 2025-12-06 07:37:42.097 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:37:42 np0005548731 nova_compute[232433]: 2025-12-06 07:37:42.097 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbfac9ac2-c1, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:37:42 np0005548731 nova_compute[232433]: 2025-12-06 07:37:42.098 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  6 02:37:42 np0005548731 nova_compute[232433]: 2025-12-06 07:37:42.101 232437 INFO os_vif [None req-9c4a2ffd-767e-48af-b28c-8164a7a53043 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:c3:31:81,bridge_name='br-int',has_traffic_filtering=True,id=bfac9ac2-c1cc-4dc1-830c-44e5990fb8e1,network=Network(5bb49e8a-b939-4c79-851c-62c634be0272),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbfac9ac2-c1')#033[00m
Dec  6 02:37:42 np0005548731 nova_compute[232433]: 2025-12-06 07:37:42.101 232437 DEBUG oslo_concurrency.lockutils [None req-9c4a2ffd-767e-48af-b28c-8164a7a53043 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_source" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:37:42 np0005548731 nova_compute[232433]: 2025-12-06 07:37:42.101 232437 DEBUG oslo_concurrency.lockutils [None req-9c4a2ffd-767e-48af-b28c-8164a7a53043 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_source" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:37:42 np0005548731 nova_compute[232433]: 2025-12-06 07:37:42.249 232437 DEBUG oslo_concurrency.processutils [None req-9c4a2ffd-767e-48af-b28c-8164a7a53043 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:37:42 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 02:37:42 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3416436855' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 02:37:42 np0005548731 nova_compute[232433]: 2025-12-06 07:37:42.745 232437 DEBUG oslo_concurrency.processutils [None req-9c4a2ffd-767e-48af-b28c-8164a7a53043 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.495s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:37:42 np0005548731 nova_compute[232433]: 2025-12-06 07:37:42.751 232437 DEBUG nova.compute.provider_tree [None req-9c4a2ffd-767e-48af-b28c-8164a7a53043 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Inventory has not changed in ProviderTree for provider: 6d00757a-082f-486d-ae84-869a2ba2e6e7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  6 02:37:42 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:37:42 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:37:42 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:37:42.783 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:37:42 np0005548731 nova_compute[232433]: 2025-12-06 07:37:42.893 232437 DEBUG nova.scheduler.client.report [None req-9c4a2ffd-767e-48af-b28c-8164a7a53043 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Inventory has not changed for provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  6 02:37:42 np0005548731 nova_compute[232433]: 2025-12-06 07:37:42.969 232437 DEBUG oslo_concurrency.lockutils [None req-9c4a2ffd-767e-48af-b28c-8164a7a53043 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_source" :: held 0.868s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:37:43 np0005548731 nova_compute[232433]: 2025-12-06 07:37:43.059 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:37:43 np0005548731 nova_compute[232433]: 2025-12-06 07:37:43.151 232437 INFO nova.scheduler.client.report [None req-9c4a2ffd-767e-48af-b28c-8164a7a53043 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Deleted allocation for migration e793cfc3-2f7a-47a4-9e94-7497b4835c62#033[00m
Dec  6 02:37:43 np0005548731 nova_compute[232433]: 2025-12-06 07:37:43.246 232437 DEBUG oslo_concurrency.lockutils [None req-9c4a2ffd-767e-48af-b28c-8164a7a53043 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Lock "3d4b97f9-b8c3-4764-87f9-4006968fecd4" "released" by "nova.compute.manager.ComputeManager.confirm_resize.<locals>.do_confirm_resize" :: held 4.639s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:37:43 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e305 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:37:43 np0005548731 nova_compute[232433]: 2025-12-06 07:37:43.717 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:37:43 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:37:43 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:37:43 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:37:43.951 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:37:44 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:37:44 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:37:44 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:37:44.785 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:37:45 np0005548731 ovn_controller[133927]: 2025-12-06T07:37:45Z|00581|memory_trim|INFO|Detected inactivity (last active 30002 ms ago): trimming memory
Dec  6 02:37:45 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:37:45 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:37:45 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:37:45.952 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:37:46 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:37:46 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:37:46 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:37:46.788 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:37:47 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:37:47 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:37:47 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:37:47.954 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:37:48 np0005548731 nova_compute[232433]: 2025-12-06 07:37:48.061 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:37:48 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e305 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:37:48 np0005548731 nova_compute[232433]: 2025-12-06 07:37:48.718 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:37:48 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:37:48 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:37:48 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:37:48.791 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:37:49 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:37:49 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:37:49 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:37:49.956 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:37:49 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e306 e306: 3 total, 3 up, 3 in
Dec  6 02:37:50 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:37:50 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:37:50 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:37:50.794 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:37:51 np0005548731 nova_compute[232433]: 2025-12-06 07:37:51.514 232437 INFO nova.compute.manager [None req-9fd702a1-9d3a-4977-9a60-8215569334de 7960f8f407754e5d83bee65690a6b772 21cd37bffa864aaebe5c734ba468f466 - - default default] [instance: ffee41ca-ba3b-4787-8435-b3903ffb29a9] Rebuilding instance#033[00m
Dec  6 02:37:51 np0005548731 nova_compute[232433]: 2025-12-06 07:37:51.952 232437 DEBUG nova.objects.instance [None req-9fd702a1-9d3a-4977-9a60-8215569334de 7960f8f407754e5d83bee65690a6b772 21cd37bffa864aaebe5c734ba468f466 - - default default] Lazy-loading 'trusted_certs' on Instance uuid ffee41ca-ba3b-4787-8435-b3903ffb29a9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:37:51 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:37:51 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:37:51 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:37:51.958 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:37:51 np0005548731 nova_compute[232433]: 2025-12-06 07:37:51.973 232437 DEBUG nova.compute.manager [None req-9fd702a1-9d3a-4977-9a60-8215569334de 7960f8f407754e5d83bee65690a6b772 21cd37bffa864aaebe5c734ba468f466 - - default default] [instance: ffee41ca-ba3b-4787-8435-b3903ffb29a9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:37:52 np0005548731 nova_compute[232433]: 2025-12-06 07:37:52.066 232437 DEBUG nova.objects.instance [None req-9fd702a1-9d3a-4977-9a60-8215569334de 7960f8f407754e5d83bee65690a6b772 21cd37bffa864aaebe5c734ba468f466 - - default default] Lazy-loading 'pci_requests' on Instance uuid ffee41ca-ba3b-4787-8435-b3903ffb29a9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:37:52 np0005548731 nova_compute[232433]: 2025-12-06 07:37:52.139 232437 DEBUG nova.objects.instance [None req-9fd702a1-9d3a-4977-9a60-8215569334de 7960f8f407754e5d83bee65690a6b772 21cd37bffa864aaebe5c734ba468f466 - - default default] Lazy-loading 'pci_devices' on Instance uuid ffee41ca-ba3b-4787-8435-b3903ffb29a9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:37:52 np0005548731 nova_compute[232433]: 2025-12-06 07:37:52.171 232437 DEBUG nova.objects.instance [None req-9fd702a1-9d3a-4977-9a60-8215569334de 7960f8f407754e5d83bee65690a6b772 21cd37bffa864aaebe5c734ba468f466 - - default default] Lazy-loading 'resources' on Instance uuid ffee41ca-ba3b-4787-8435-b3903ffb29a9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:37:52 np0005548731 nova_compute[232433]: 2025-12-06 07:37:52.195 232437 DEBUG nova.objects.instance [None req-9fd702a1-9d3a-4977-9a60-8215569334de 7960f8f407754e5d83bee65690a6b772 21cd37bffa864aaebe5c734ba468f466 - - default default] Lazy-loading 'migration_context' on Instance uuid ffee41ca-ba3b-4787-8435-b3903ffb29a9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:37:52 np0005548731 nova_compute[232433]: 2025-12-06 07:37:52.215 232437 DEBUG nova.objects.instance [None req-9fd702a1-9d3a-4977-9a60-8215569334de 7960f8f407754e5d83bee65690a6b772 21cd37bffa864aaebe5c734ba468f466 - - default default] [instance: ffee41ca-ba3b-4787-8435-b3903ffb29a9] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032#033[00m
Dec  6 02:37:52 np0005548731 nova_compute[232433]: 2025-12-06 07:37:52.220 232437 DEBUG nova.virt.libvirt.driver [None req-9fd702a1-9d3a-4977-9a60-8215569334de 7960f8f407754e5d83bee65690a6b772 21cd37bffa864aaebe5c734ba468f466 - - default default] [instance: ffee41ca-ba3b-4787-8435-b3903ffb29a9] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Dec  6 02:37:52 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:37:52 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:37:52 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:37:52.796 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:37:53 np0005548731 nova_compute[232433]: 2025-12-06 07:37:53.064 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:37:53 np0005548731 nova_compute[232433]: 2025-12-06 07:37:53.290 232437 DEBUG nova.compute.manager [req-063f9cd2-ab66-4ed7-9585-b6f98292a189 req-406ed0cb-f263-456b-ac06-f5f8748d89ea 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: b4be0ef8-945f-47a1-a3a8-5962f1e692e5] Received event network-changed-657a8a48-7a74-4b37-a294-df0b2fd9332c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:37:53 np0005548731 nova_compute[232433]: 2025-12-06 07:37:53.290 232437 DEBUG nova.compute.manager [req-063f9cd2-ab66-4ed7-9585-b6f98292a189 req-406ed0cb-f263-456b-ac06-f5f8748d89ea 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: b4be0ef8-945f-47a1-a3a8-5962f1e692e5] Refreshing instance network info cache due to event network-changed-657a8a48-7a74-4b37-a294-df0b2fd9332c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  6 02:37:53 np0005548731 nova_compute[232433]: 2025-12-06 07:37:53.291 232437 DEBUG oslo_concurrency.lockutils [req-063f9cd2-ab66-4ed7-9585-b6f98292a189 req-406ed0cb-f263-456b-ac06-f5f8748d89ea 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-b4be0ef8-945f-47a1-a3a8-5962f1e692e5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 02:37:53 np0005548731 nova_compute[232433]: 2025-12-06 07:37:53.291 232437 DEBUG oslo_concurrency.lockutils [req-063f9cd2-ab66-4ed7-9585-b6f98292a189 req-406ed0cb-f263-456b-ac06-f5f8748d89ea 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-b4be0ef8-945f-47a1-a3a8-5962f1e692e5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 02:37:53 np0005548731 nova_compute[232433]: 2025-12-06 07:37:53.291 232437 DEBUG nova.network.neutron [req-063f9cd2-ab66-4ed7-9585-b6f98292a189 req-406ed0cb-f263-456b-ac06-f5f8748d89ea 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: b4be0ef8-945f-47a1-a3a8-5962f1e692e5] Refreshing network info cache for port 657a8a48-7a74-4b37-a294-df0b2fd9332c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  6 02:37:53 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:37:53 np0005548731 nova_compute[232433]: 2025-12-06 07:37:53.721 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:37:54 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:37:54 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:37:54 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:37:53.960 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:37:54 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:37:54 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 02:37:54 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:37:54.799 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 02:37:55 np0005548731 nova_compute[232433]: 2025-12-06 07:37:55.238 232437 INFO nova.virt.libvirt.driver [None req-9fd702a1-9d3a-4977-9a60-8215569334de 7960f8f407754e5d83bee65690a6b772 21cd37bffa864aaebe5c734ba468f466 - - default default] [instance: ffee41ca-ba3b-4787-8435-b3903ffb29a9] Instance shutdown successfully after 3 seconds.#033[00m
Dec  6 02:37:55 np0005548731 kernel: tap500715f2-22 (unregistering): left promiscuous mode
Dec  6 02:37:55 np0005548731 NetworkManager[49182]: <info>  [1765006675.4877] device (tap500715f2-22): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec  6 02:37:55 np0005548731 nova_compute[232433]: 2025-12-06 07:37:55.498 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:37:55 np0005548731 ovn_controller[133927]: 2025-12-06T07:37:55Z|00582|binding|INFO|Releasing lport 500715f2-2222-4003-b268-0369959f5e25 from this chassis (sb_readonly=0)
Dec  6 02:37:55 np0005548731 ovn_controller[133927]: 2025-12-06T07:37:55Z|00583|binding|INFO|Setting lport 500715f2-2222-4003-b268-0369959f5e25 down in Southbound
Dec  6 02:37:55 np0005548731 ovn_controller[133927]: 2025-12-06T07:37:55Z|00584|binding|INFO|Removing iface tap500715f2-22 ovn-installed in OVS
Dec  6 02:37:55 np0005548731 nova_compute[232433]: 2025-12-06 07:37:55.503 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:37:55 np0005548731 nova_compute[232433]: 2025-12-06 07:37:55.514 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:37:55 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:37:55.526 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d3:0b:9b 10.100.0.7'], port_security=['fa:16:3e:d3:0b:9b 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'ffee41ca-ba3b-4787-8435-b3903ffb29a9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e4ed5caf-995c-4d95-a87b-6b8a5de780fa', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '21cd37bffa864aaebe5c734ba468f466', 'neutron:revision_number': '4', 'neutron:security_group_ids': '349efd6b-63d0-41af-8405-3d1f52d82535', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com', 'neutron:port_fip': '192.168.122.244'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=359f89a6-280c-4f37-8512-64abc4582e65, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], logical_port=500715f2-2222-4003-b268-0369959f5e25) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 02:37:55 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:37:55.528 143965 INFO neutron.agent.ovn.metadata.agent [-] Port 500715f2-2222-4003-b268-0369959f5e25 in datapath e4ed5caf-995c-4d95-a87b-6b8a5de780fa unbound from our chassis#033[00m
Dec  6 02:37:55 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:37:55.530 143965 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network e4ed5caf-995c-4d95-a87b-6b8a5de780fa, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec  6 02:37:55 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:37:55.532 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[ec3988fa-979a-423d-b873-e0e3ae99a865]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:37:55 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:37:55.532 143965 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-e4ed5caf-995c-4d95-a87b-6b8a5de780fa namespace which is not needed anymore#033[00m
Dec  6 02:37:55 np0005548731 systemd[1]: machine-qemu\x2d59\x2dinstance\x2d00000085.scope: Deactivated successfully.
Dec  6 02:37:55 np0005548731 systemd[1]: machine-qemu\x2d59\x2dinstance\x2d00000085.scope: Consumed 15.851s CPU time.
Dec  6 02:37:55 np0005548731 systemd-machined[195355]: Machine qemu-59-instance-00000085 terminated.
Dec  6 02:37:55 np0005548731 neutron-haproxy-ovnmeta-e4ed5caf-995c-4d95-a87b-6b8a5de780fa[288814]: [NOTICE]   (288818) : haproxy version is 2.8.14-c23fe91
Dec  6 02:37:55 np0005548731 neutron-haproxy-ovnmeta-e4ed5caf-995c-4d95-a87b-6b8a5de780fa[288814]: [NOTICE]   (288818) : path to executable is /usr/sbin/haproxy
Dec  6 02:37:55 np0005548731 neutron-haproxy-ovnmeta-e4ed5caf-995c-4d95-a87b-6b8a5de780fa[288814]: [WARNING]  (288818) : Exiting Master process...
Dec  6 02:37:55 np0005548731 neutron-haproxy-ovnmeta-e4ed5caf-995c-4d95-a87b-6b8a5de780fa[288814]: [ALERT]    (288818) : Current worker (288827) exited with code 143 (Terminated)
Dec  6 02:37:55 np0005548731 neutron-haproxy-ovnmeta-e4ed5caf-995c-4d95-a87b-6b8a5de780fa[288814]: [WARNING]  (288818) : All workers exited. Exiting... (0)
Dec  6 02:37:55 np0005548731 systemd[1]: libpod-ac080054491c7bdf230018b8aebe8e6bcbedf8e6b3ec90ee15e608bc70261133.scope: Deactivated successfully.
Dec  6 02:37:55 np0005548731 podman[289397]: 2025-12-06 07:37:55.673303272 +0000 UTC m=+0.049814144 container died ac080054491c7bdf230018b8aebe8e6bcbedf8e6b3ec90ee15e608bc70261133 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e4ed5caf-995c-4d95-a87b-6b8a5de780fa, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec  6 02:37:55 np0005548731 nova_compute[232433]: 2025-12-06 07:37:55.679 232437 INFO nova.virt.libvirt.driver [-] [instance: ffee41ca-ba3b-4787-8435-b3903ffb29a9] Instance destroyed successfully.#033[00m
Dec  6 02:37:55 np0005548731 nova_compute[232433]: 2025-12-06 07:37:55.684 232437 INFO nova.virt.libvirt.driver [-] [instance: ffee41ca-ba3b-4787-8435-b3903ffb29a9] Instance destroyed successfully.#033[00m
Dec  6 02:37:55 np0005548731 nova_compute[232433]: 2025-12-06 07:37:55.686 232437 DEBUG nova.virt.libvirt.vif [None req-9fd702a1-9d3a-4977-9a60-8215569334de 7960f8f407754e5d83bee65690a6b772 21cd37bffa864aaebe5c734ba468f466 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T07:36:57Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-ServerActionsV293TestJSON-server-1360575984',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serveractionsv293testjson-server-1645861904',id=133,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBC09UQYuvXJm1Rko+1Enhuo3SSbAWagIRaWgjbl1eMQTI/iz1qciOUWudPgubxE+dfDGSMSVZfhJ7I1j6mbbf0xS8CEMgr5ptTVwPCXCiAWETgl5ZTVu74pyI+Gi8rzOng==',key_name='tempest-keypair-2069448021',keypairs=<?>,launch_index=0,launched_at=2025-12-06T07:37:14Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={rebuild='server'},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='21cd37bffa864aaebe5c734ba468f466',ramdisk_id='',reservation_id='r-xs37ll8f',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='412dd61d-1b1e-439f-b7f9-7e7c4e42924c',image_container_format='bare',image_disk_format='qcow2',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsV293TestJSON-1595119527',owner_user_name='tempest-ServerActionsV293TestJSON-1595119527-project-member'},tags=<?>,task_state='rebuilding',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:37:50Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='7960f8f407754e5d83bee65690a6b772',uuid=ffee41ca-ba3b-4787-8435-b3903ffb29a9,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "500715f2-2222-4003-b268-0369959f5e25", "address": "fa:16:3e:d3:0b:9b", "network": {"id": "e4ed5caf-995c-4d95-a87b-6b8a5de780fa", "bridge": "br-int", "label": "tempest-ServerActionsV293TestJSON-599749084-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.244", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "21cd37bffa864aaebe5c734ba468f466", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap500715f2-22", "ovs_interfaceid": "500715f2-2222-4003-b268-0369959f5e25", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec  6 02:37:55 np0005548731 nova_compute[232433]: 2025-12-06 07:37:55.686 232437 DEBUG nova.network.os_vif_util [None req-9fd702a1-9d3a-4977-9a60-8215569334de 7960f8f407754e5d83bee65690a6b772 21cd37bffa864aaebe5c734ba468f466 - - default default] Converting VIF {"id": "500715f2-2222-4003-b268-0369959f5e25", "address": "fa:16:3e:d3:0b:9b", "network": {"id": "e4ed5caf-995c-4d95-a87b-6b8a5de780fa", "bridge": "br-int", "label": "tempest-ServerActionsV293TestJSON-599749084-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.244", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "21cd37bffa864aaebe5c734ba468f466", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap500715f2-22", "ovs_interfaceid": "500715f2-2222-4003-b268-0369959f5e25", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 02:37:55 np0005548731 nova_compute[232433]: 2025-12-06 07:37:55.687 232437 DEBUG nova.network.os_vif_util [None req-9fd702a1-9d3a-4977-9a60-8215569334de 7960f8f407754e5d83bee65690a6b772 21cd37bffa864aaebe5c734ba468f466 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:d3:0b:9b,bridge_name='br-int',has_traffic_filtering=True,id=500715f2-2222-4003-b268-0369959f5e25,network=Network(e4ed5caf-995c-4d95-a87b-6b8a5de780fa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap500715f2-22') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 02:37:55 np0005548731 nova_compute[232433]: 2025-12-06 07:37:55.687 232437 DEBUG os_vif [None req-9fd702a1-9d3a-4977-9a60-8215569334de 7960f8f407754e5d83bee65690a6b772 21cd37bffa864aaebe5c734ba468f466 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:d3:0b:9b,bridge_name='br-int',has_traffic_filtering=True,id=500715f2-2222-4003-b268-0369959f5e25,network=Network(e4ed5caf-995c-4d95-a87b-6b8a5de780fa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap500715f2-22') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec  6 02:37:55 np0005548731 nova_compute[232433]: 2025-12-06 07:37:55.689 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:37:55 np0005548731 nova_compute[232433]: 2025-12-06 07:37:55.690 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap500715f2-22, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:37:55 np0005548731 nova_compute[232433]: 2025-12-06 07:37:55.691 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:37:55 np0005548731 nova_compute[232433]: 2025-12-06 07:37:55.692 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:37:55 np0005548731 nova_compute[232433]: 2025-12-06 07:37:55.694 232437 INFO os_vif [None req-9fd702a1-9d3a-4977-9a60-8215569334de 7960f8f407754e5d83bee65690a6b772 21cd37bffa864aaebe5c734ba468f466 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:d3:0b:9b,bridge_name='br-int',has_traffic_filtering=True,id=500715f2-2222-4003-b268-0369959f5e25,network=Network(e4ed5caf-995c-4d95-a87b-6b8a5de780fa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap500715f2-22')#033[00m
Dec  6 02:37:55 np0005548731 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-ac080054491c7bdf230018b8aebe8e6bcbedf8e6b3ec90ee15e608bc70261133-userdata-shm.mount: Deactivated successfully.
Dec  6 02:37:55 np0005548731 systemd[1]: var-lib-containers-storage-overlay-7915ed9c5be7dafa716ca02205b2f5d45befabedcf9d8688bd3347cfadcb3f54-merged.mount: Deactivated successfully.
Dec  6 02:37:55 np0005548731 podman[289397]: 2025-12-06 07:37:55.734629196 +0000 UTC m=+0.111140078 container cleanup ac080054491c7bdf230018b8aebe8e6bcbedf8e6b3ec90ee15e608bc70261133 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e4ed5caf-995c-4d95-a87b-6b8a5de780fa, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec  6 02:37:55 np0005548731 systemd[1]: libpod-conmon-ac080054491c7bdf230018b8aebe8e6bcbedf8e6b3ec90ee15e608bc70261133.scope: Deactivated successfully.
Dec  6 02:37:55 np0005548731 nova_compute[232433]: 2025-12-06 07:37:55.766 232437 DEBUG nova.network.neutron [req-063f9cd2-ab66-4ed7-9585-b6f98292a189 req-406ed0cb-f263-456b-ac06-f5f8748d89ea 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: b4be0ef8-945f-47a1-a3a8-5962f1e692e5] Updated VIF entry in instance network info cache for port 657a8a48-7a74-4b37-a294-df0b2fd9332c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  6 02:37:55 np0005548731 nova_compute[232433]: 2025-12-06 07:37:55.767 232437 DEBUG nova.network.neutron [req-063f9cd2-ab66-4ed7-9585-b6f98292a189 req-406ed0cb-f263-456b-ac06-f5f8748d89ea 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: b4be0ef8-945f-47a1-a3a8-5962f1e692e5] Updating instance_info_cache with network_info: [{"id": "657a8a48-7a74-4b37-a294-df0b2fd9332c", "address": "fa:16:3e:f4:fe:4a", "network": {"id": "5bb49e8a-b939-4c79-851c-62c634be0272", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-50482264-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "833f4cf9f5a64b2ab94c3bf330353a31", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap657a8a48-7a", "ovs_interfaceid": "657a8a48-7a74-4b37-a294-df0b2fd9332c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 02:37:55 np0005548731 podman[289453]: 2025-12-06 07:37:55.804019097 +0000 UTC m=+0.043696446 container remove ac080054491c7bdf230018b8aebe8e6bcbedf8e6b3ec90ee15e608bc70261133 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e4ed5caf-995c-4d95-a87b-6b8a5de780fa, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec  6 02:37:55 np0005548731 nova_compute[232433]: 2025-12-06 07:37:55.805 232437 DEBUG oslo_concurrency.lockutils [req-063f9cd2-ab66-4ed7-9585-b6f98292a189 req-406ed0cb-f263-456b-ac06-f5f8748d89ea 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-b4be0ef8-945f-47a1-a3a8-5962f1e692e5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 02:37:55 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:37:55.809 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[ddb51d33-0b8d-4b32-b6a6-f29b0f231072]: (4, ('Sat Dec  6 07:37:55 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-e4ed5caf-995c-4d95-a87b-6b8a5de780fa (ac080054491c7bdf230018b8aebe8e6bcbedf8e6b3ec90ee15e608bc70261133)\nac080054491c7bdf230018b8aebe8e6bcbedf8e6b3ec90ee15e608bc70261133\nSat Dec  6 07:37:55 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-e4ed5caf-995c-4d95-a87b-6b8a5de780fa (ac080054491c7bdf230018b8aebe8e6bcbedf8e6b3ec90ee15e608bc70261133)\nac080054491c7bdf230018b8aebe8e6bcbedf8e6b3ec90ee15e608bc70261133\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:37:55 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:37:55.811 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[11660d5a-f16d-4aff-b0c2-638f3ace5948]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:37:55 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:37:55.811 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape4ed5caf-90, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:37:55 np0005548731 nova_compute[232433]: 2025-12-06 07:37:55.813 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:37:55 np0005548731 kernel: tape4ed5caf-90: left promiscuous mode
Dec  6 02:37:55 np0005548731 nova_compute[232433]: 2025-12-06 07:37:55.826 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:37:55 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:37:55.829 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[57150407-d0ce-48f3-922d-1b1c81ccb03f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:37:55 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:37:55.844 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[8b5d0f79-87a8-4b48-9eeb-8022ddecce02]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:37:55 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:37:55.846 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[8a3034c5-fcc9-46d9-a346-081abb6b1d33]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:37:55 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:37:55.860 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[7e43495f-6883-4d3c-beaf-bc04b5e8b540]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 689518, 'reachable_time': 35676, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 289471, 'error': None, 'target': 'ovnmeta-e4ed5caf-995c-4d95-a87b-6b8a5de780fa', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:37:55 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:37:55.863 144079 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-e4ed5caf-995c-4d95-a87b-6b8a5de780fa deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Dec  6 02:37:55 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:37:55.863 144079 DEBUG oslo.privsep.daemon [-] privsep: reply[9cbabf65-8b34-433e-9530-3f76e6bb377c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:37:55 np0005548731 systemd[1]: run-netns-ovnmeta\x2de4ed5caf\x2d995c\x2d4d95\x2da87b\x2d6b8a5de780fa.mount: Deactivated successfully.
Dec  6 02:37:56 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:37:56 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:37:56 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:37:56.021 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:37:56 np0005548731 nova_compute[232433]: 2025-12-06 07:37:56.057 232437 INFO nova.virt.libvirt.driver [None req-9fd702a1-9d3a-4977-9a60-8215569334de 7960f8f407754e5d83bee65690a6b772 21cd37bffa864aaebe5c734ba468f466 - - default default] [instance: ffee41ca-ba3b-4787-8435-b3903ffb29a9] Deleting instance files /var/lib/nova/instances/ffee41ca-ba3b-4787-8435-b3903ffb29a9_del#033[00m
Dec  6 02:37:56 np0005548731 nova_compute[232433]: 2025-12-06 07:37:56.058 232437 INFO nova.virt.libvirt.driver [None req-9fd702a1-9d3a-4977-9a60-8215569334de 7960f8f407754e5d83bee65690a6b772 21cd37bffa864aaebe5c734ba468f466 - - default default] [instance: ffee41ca-ba3b-4787-8435-b3903ffb29a9] Deletion of /var/lib/nova/instances/ffee41ca-ba3b-4787-8435-b3903ffb29a9_del complete#033[00m
Dec  6 02:37:56 np0005548731 nova_compute[232433]: 2025-12-06 07:37:56.295 232437 DEBUG nova.compute.manager [req-943059b3-40a1-4a64-8802-4d1b623d7fc7 req-f8a6c671-caac-461c-9e50-4036101b7ac3 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: ffee41ca-ba3b-4787-8435-b3903ffb29a9] Received event network-vif-unplugged-500715f2-2222-4003-b268-0369959f5e25 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:37:56 np0005548731 nova_compute[232433]: 2025-12-06 07:37:56.295 232437 DEBUG oslo_concurrency.lockutils [req-943059b3-40a1-4a64-8802-4d1b623d7fc7 req-f8a6c671-caac-461c-9e50-4036101b7ac3 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "ffee41ca-ba3b-4787-8435-b3903ffb29a9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:37:56 np0005548731 nova_compute[232433]: 2025-12-06 07:37:56.295 232437 DEBUG oslo_concurrency.lockutils [req-943059b3-40a1-4a64-8802-4d1b623d7fc7 req-f8a6c671-caac-461c-9e50-4036101b7ac3 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "ffee41ca-ba3b-4787-8435-b3903ffb29a9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:37:56 np0005548731 nova_compute[232433]: 2025-12-06 07:37:56.295 232437 DEBUG oslo_concurrency.lockutils [req-943059b3-40a1-4a64-8802-4d1b623d7fc7 req-f8a6c671-caac-461c-9e50-4036101b7ac3 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "ffee41ca-ba3b-4787-8435-b3903ffb29a9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:37:56 np0005548731 nova_compute[232433]: 2025-12-06 07:37:56.295 232437 DEBUG nova.compute.manager [req-943059b3-40a1-4a64-8802-4d1b623d7fc7 req-f8a6c671-caac-461c-9e50-4036101b7ac3 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: ffee41ca-ba3b-4787-8435-b3903ffb29a9] No waiting events found dispatching network-vif-unplugged-500715f2-2222-4003-b268-0369959f5e25 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 02:37:56 np0005548731 nova_compute[232433]: 2025-12-06 07:37:56.296 232437 WARNING nova.compute.manager [req-943059b3-40a1-4a64-8802-4d1b623d7fc7 req-f8a6c671-caac-461c-9e50-4036101b7ac3 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: ffee41ca-ba3b-4787-8435-b3903ffb29a9] Received unexpected event network-vif-unplugged-500715f2-2222-4003-b268-0369959f5e25 for instance with vm_state active and task_state rebuilding.#033[00m
Dec  6 02:37:56 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:37:56 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:37:56 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:37:56.802 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:37:56 np0005548731 nova_compute[232433]: 2025-12-06 07:37:56.826 232437 WARNING nova.virt.libvirt.driver [None req-9fd702a1-9d3a-4977-9a60-8215569334de 7960f8f407754e5d83bee65690a6b772 21cd37bffa864aaebe5c734ba468f466 - - default default] [instance: ffee41ca-ba3b-4787-8435-b3903ffb29a9] During detach_volume, instance disappeared.: nova.exception.InstanceNotFound: Instance ffee41ca-ba3b-4787-8435-b3903ffb29a9 could not be found.#033[00m
Dec  6 02:37:57 np0005548731 nova_compute[232433]: 2025-12-06 07:37:57.491 232437 DEBUG nova.compute.manager [None req-9fd702a1-9d3a-4977-9a60-8215569334de 7960f8f407754e5d83bee65690a6b772 21cd37bffa864aaebe5c734ba468f466 - - default default] [instance: ffee41ca-ba3b-4787-8435-b3903ffb29a9] Preparing to wait for external event volume-reimaged-a032ab5d-3621-4ab1-82c3-2e8c7d4eeb9c prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec  6 02:37:57 np0005548731 nova_compute[232433]: 2025-12-06 07:37:57.492 232437 DEBUG oslo_concurrency.lockutils [None req-9fd702a1-9d3a-4977-9a60-8215569334de 7960f8f407754e5d83bee65690a6b772 21cd37bffa864aaebe5c734ba468f466 - - default default] Acquiring lock "ffee41ca-ba3b-4787-8435-b3903ffb29a9-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:37:57 np0005548731 nova_compute[232433]: 2025-12-06 07:37:57.492 232437 DEBUG oslo_concurrency.lockutils [None req-9fd702a1-9d3a-4977-9a60-8215569334de 7960f8f407754e5d83bee65690a6b772 21cd37bffa864aaebe5c734ba468f466 - - default default] Lock "ffee41ca-ba3b-4787-8435-b3903ffb29a9-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:37:57 np0005548731 nova_compute[232433]: 2025-12-06 07:37:57.492 232437 DEBUG oslo_concurrency.lockutils [None req-9fd702a1-9d3a-4977-9a60-8215569334de 7960f8f407754e5d83bee65690a6b772 21cd37bffa864aaebe5c734ba468f466 - - default default] Lock "ffee41ca-ba3b-4787-8435-b3903ffb29a9-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:37:57 np0005548731 nova_compute[232433]: 2025-12-06 07:37:57.670 232437 DEBUG oslo_concurrency.lockutils [None req-deed003e-9739-4f76-8f31-74df2067d881 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Acquiring lock "b4be0ef8-945f-47a1-a3a8-5962f1e692e5" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:37:57 np0005548731 nova_compute[232433]: 2025-12-06 07:37:57.671 232437 DEBUG oslo_concurrency.lockutils [None req-deed003e-9739-4f76-8f31-74df2067d881 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Lock "b4be0ef8-945f-47a1-a3a8-5962f1e692e5" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:37:57 np0005548731 nova_compute[232433]: 2025-12-06 07:37:57.686 232437 INFO nova.compute.manager [None req-deed003e-9739-4f76-8f31-74df2067d881 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] [instance: b4be0ef8-945f-47a1-a3a8-5962f1e692e5] Detaching volume fb115316-35b8-4af3-9847-c6315737a158#033[00m
Dec  6 02:37:57 np0005548731 nova_compute[232433]: 2025-12-06 07:37:57.836 232437 INFO nova.virt.block_device [None req-deed003e-9739-4f76-8f31-74df2067d881 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] [instance: b4be0ef8-945f-47a1-a3a8-5962f1e692e5] Attempting to driver detach volume fb115316-35b8-4af3-9847-c6315737a158 from mountpoint /dev/vdb#033[00m
Dec  6 02:37:57 np0005548731 nova_compute[232433]: 2025-12-06 07:37:57.846 232437 DEBUG nova.virt.libvirt.driver [None req-deed003e-9739-4f76-8f31-74df2067d881 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Attempting to detach device vdb from instance b4be0ef8-945f-47a1-a3a8-5962f1e692e5 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487#033[00m
Dec  6 02:37:57 np0005548731 nova_compute[232433]: 2025-12-06 07:37:57.847 232437 DEBUG nova.virt.libvirt.guest [None req-deed003e-9739-4f76-8f31-74df2067d881 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] detach device xml: <disk type="network" device="disk">
Dec  6 02:37:57 np0005548731 nova_compute[232433]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Dec  6 02:37:57 np0005548731 nova_compute[232433]:  <source protocol="rbd" name="volumes/volume-fb115316-35b8-4af3-9847-c6315737a158">
Dec  6 02:37:57 np0005548731 nova_compute[232433]:    <host name="192.168.122.100" port="6789"/>
Dec  6 02:37:57 np0005548731 nova_compute[232433]:    <host name="192.168.122.102" port="6789"/>
Dec  6 02:37:57 np0005548731 nova_compute[232433]:    <host name="192.168.122.101" port="6789"/>
Dec  6 02:37:57 np0005548731 nova_compute[232433]:  </source>
Dec  6 02:37:57 np0005548731 nova_compute[232433]:  <target dev="vdb" bus="virtio"/>
Dec  6 02:37:57 np0005548731 nova_compute[232433]:  <serial>fb115316-35b8-4af3-9847-c6315737a158</serial>
Dec  6 02:37:57 np0005548731 nova_compute[232433]:  <shareable/>
Dec  6 02:37:57 np0005548731 nova_compute[232433]:  <address type="pci" domain="0x0000" bus="0x04" slot="0x00" function="0x0"/>
Dec  6 02:37:57 np0005548731 nova_compute[232433]: </disk>
Dec  6 02:37:57 np0005548731 nova_compute[232433]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Dec  6 02:37:57 np0005548731 nova_compute[232433]: 2025-12-06 07:37:57.854 232437 INFO nova.virt.libvirt.driver [None req-deed003e-9739-4f76-8f31-74df2067d881 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Successfully detached device vdb from instance b4be0ef8-945f-47a1-a3a8-5962f1e692e5 from the persistent domain config.#033[00m
Dec  6 02:37:57 np0005548731 nova_compute[232433]: 2025-12-06 07:37:57.855 232437 DEBUG nova.virt.libvirt.driver [None req-deed003e-9739-4f76-8f31-74df2067d881 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance b4be0ef8-945f-47a1-a3a8-5962f1e692e5 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523#033[00m
Dec  6 02:37:57 np0005548731 nova_compute[232433]: 2025-12-06 07:37:57.855 232437 DEBUG nova.virt.libvirt.guest [None req-deed003e-9739-4f76-8f31-74df2067d881 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] detach device xml: <disk type="network" device="disk">
Dec  6 02:37:57 np0005548731 nova_compute[232433]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Dec  6 02:37:57 np0005548731 nova_compute[232433]:  <source protocol="rbd" name="volumes/volume-fb115316-35b8-4af3-9847-c6315737a158">
Dec  6 02:37:57 np0005548731 nova_compute[232433]:    <host name="192.168.122.100" port="6789"/>
Dec  6 02:37:57 np0005548731 nova_compute[232433]:    <host name="192.168.122.102" port="6789"/>
Dec  6 02:37:57 np0005548731 nova_compute[232433]:    <host name="192.168.122.101" port="6789"/>
Dec  6 02:37:57 np0005548731 nova_compute[232433]:  </source>
Dec  6 02:37:57 np0005548731 nova_compute[232433]:  <target dev="vdb" bus="virtio"/>
Dec  6 02:37:57 np0005548731 nova_compute[232433]:  <serial>fb115316-35b8-4af3-9847-c6315737a158</serial>
Dec  6 02:37:57 np0005548731 nova_compute[232433]:  <shareable/>
Dec  6 02:37:57 np0005548731 nova_compute[232433]:  <address type="pci" domain="0x0000" bus="0x04" slot="0x00" function="0x0"/>
Dec  6 02:37:57 np0005548731 nova_compute[232433]: </disk>
Dec  6 02:37:57 np0005548731 nova_compute[232433]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Dec  6 02:37:57 np0005548731 nova_compute[232433]: 2025-12-06 07:37:57.916 232437 DEBUG nova.virt.libvirt.driver [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Received event <DeviceRemovedEvent: 1765006677.9165561, b4be0ef8-945f-47a1-a3a8-5962f1e692e5 => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370#033[00m
Dec  6 02:37:57 np0005548731 nova_compute[232433]: 2025-12-06 07:37:57.918 232437 DEBUG nova.virt.libvirt.driver [None req-deed003e-9739-4f76-8f31-74df2067d881 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance b4be0ef8-945f-47a1-a3a8-5962f1e692e5 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599#033[00m
Dec  6 02:37:57 np0005548731 nova_compute[232433]: 2025-12-06 07:37:57.920 232437 INFO nova.virt.libvirt.driver [None req-deed003e-9739-4f76-8f31-74df2067d881 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Successfully detached device vdb from instance b4be0ef8-945f-47a1-a3a8-5962f1e692e5 from the live domain config.#033[00m
Dec  6 02:37:58 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:37:58 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:37:58 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:37:58.024 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:37:58 np0005548731 nova_compute[232433]: 2025-12-06 07:37:58.066 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:37:58 np0005548731 podman[289501]: 2025-12-06 07:37:58.103879972 +0000 UTC m=+0.056758923 container health_status 26c2ce9bdb83c6db5ccad7f5b2a7cb4e9354ab0235ad2f942ea42c8feda2142b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Dec  6 02:37:58 np0005548731 podman[289503]: 2025-12-06 07:37:58.10581867 +0000 UTC m=+0.056703383 container health_status ae4fd08cdec31d6ae2a6b91ffff7deb6ca5a8375cb52ee4b685cc6f25796824e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_managed=true, config_id=multipathd, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  6 02:37:58 np0005548731 podman[289502]: 2025-12-06 07:37:58.154097416 +0000 UTC m=+0.107116700 container health_status a118c3becf56cfbcae208065771ebf10a65162bb7b96389971293e534823fb78 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true, config_id=ovn_controller)
Dec  6 02:37:58 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:37:58 np0005548731 nova_compute[232433]: 2025-12-06 07:37:58.484 232437 DEBUG nova.compute.manager [req-76ddd44e-97ff-4b1d-92f9-465a6cae9ff5 req-91ccab3b-1ce5-49a6-991c-89ef41817f0b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: ffee41ca-ba3b-4787-8435-b3903ffb29a9] Received event network-vif-plugged-500715f2-2222-4003-b268-0369959f5e25 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:37:58 np0005548731 nova_compute[232433]: 2025-12-06 07:37:58.485 232437 DEBUG oslo_concurrency.lockutils [req-76ddd44e-97ff-4b1d-92f9-465a6cae9ff5 req-91ccab3b-1ce5-49a6-991c-89ef41817f0b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "ffee41ca-ba3b-4787-8435-b3903ffb29a9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:37:58 np0005548731 nova_compute[232433]: 2025-12-06 07:37:58.485 232437 DEBUG oslo_concurrency.lockutils [req-76ddd44e-97ff-4b1d-92f9-465a6cae9ff5 req-91ccab3b-1ce5-49a6-991c-89ef41817f0b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "ffee41ca-ba3b-4787-8435-b3903ffb29a9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:37:58 np0005548731 nova_compute[232433]: 2025-12-06 07:37:58.485 232437 DEBUG oslo_concurrency.lockutils [req-76ddd44e-97ff-4b1d-92f9-465a6cae9ff5 req-91ccab3b-1ce5-49a6-991c-89ef41817f0b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "ffee41ca-ba3b-4787-8435-b3903ffb29a9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:37:58 np0005548731 nova_compute[232433]: 2025-12-06 07:37:58.485 232437 DEBUG nova.compute.manager [req-76ddd44e-97ff-4b1d-92f9-465a6cae9ff5 req-91ccab3b-1ce5-49a6-991c-89ef41817f0b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: ffee41ca-ba3b-4787-8435-b3903ffb29a9] No event matching network-vif-plugged-500715f2-2222-4003-b268-0369959f5e25 in dict_keys([('volume-reimaged', 'a032ab5d-3621-4ab1-82c3-2e8c7d4eeb9c')]) pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:325#033[00m
Dec  6 02:37:58 np0005548731 nova_compute[232433]: 2025-12-06 07:37:58.485 232437 WARNING nova.compute.manager [req-76ddd44e-97ff-4b1d-92f9-465a6cae9ff5 req-91ccab3b-1ce5-49a6-991c-89ef41817f0b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: ffee41ca-ba3b-4787-8435-b3903ffb29a9] Received unexpected event network-vif-plugged-500715f2-2222-4003-b268-0369959f5e25 for instance with vm_state active and task_state rebuilding.#033[00m
Dec  6 02:37:58 np0005548731 nova_compute[232433]: 2025-12-06 07:37:58.559 232437 DEBUG nova.objects.instance [None req-deed003e-9739-4f76-8f31-74df2067d881 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Lazy-loading 'flavor' on Instance uuid b4be0ef8-945f-47a1-a3a8-5962f1e692e5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:37:58 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:37:58 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:37:58 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:37:58.804 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:37:58 np0005548731 nova_compute[232433]: 2025-12-06 07:37:58.854 232437 DEBUG oslo_concurrency.lockutils [None req-deed003e-9739-4f76-8f31-74df2067d881 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Lock "b4be0ef8-945f-47a1-a3a8-5962f1e692e5" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 1.184s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:37:59 np0005548731 nova_compute[232433]: 2025-12-06 07:37:59.304 232437 DEBUG nova.compute.manager [None req-50d60388-e28c-430d-bb2f-1abe5533371b a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] [instance: 2de097e3-8182-48e5-b69d-88acbfb84e66] Stashing vm_state: active _prep_resize /usr/lib/python3.9/site-packages/nova/compute/manager.py:5560#033[00m
Dec  6 02:37:59 np0005548731 nova_compute[232433]: 2025-12-06 07:37:59.532 232437 DEBUG oslo_concurrency.lockutils [None req-50d60388-e28c-430d-bb2f-1abe5533371b a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.resize_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:37:59 np0005548731 nova_compute[232433]: 2025-12-06 07:37:59.532 232437 DEBUG oslo_concurrency.lockutils [None req-50d60388-e28c-430d-bb2f-1abe5533371b a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.resize_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:37:59 np0005548731 nova_compute[232433]: 2025-12-06 07:37:59.760 232437 DEBUG nova.objects.instance [None req-50d60388-e28c-430d-bb2f-1abe5533371b a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Lazy-loading 'pci_requests' on Instance uuid 2de097e3-8182-48e5-b69d-88acbfb84e66 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:37:59 np0005548731 nova_compute[232433]: 2025-12-06 07:37:59.800 232437 DEBUG nova.virt.hardware [None req-50d60388-e28c-430d-bb2f-1abe5533371b a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Dec  6 02:37:59 np0005548731 nova_compute[232433]: 2025-12-06 07:37:59.801 232437 INFO nova.compute.claims [None req-50d60388-e28c-430d-bb2f-1abe5533371b a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] [instance: 2de097e3-8182-48e5-b69d-88acbfb84e66] Claim successful on node compute-2.ctlplane.example.com#033[00m
Dec  6 02:37:59 np0005548731 nova_compute[232433]: 2025-12-06 07:37:59.801 232437 DEBUG nova.objects.instance [None req-50d60388-e28c-430d-bb2f-1abe5533371b a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Lazy-loading 'resources' on Instance uuid 2de097e3-8182-48e5-b69d-88acbfb84e66 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:37:59 np0005548731 nova_compute[232433]: 2025-12-06 07:37:59.849 232437 DEBUG nova.objects.instance [None req-50d60388-e28c-430d-bb2f-1abe5533371b a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Lazy-loading 'pci_devices' on Instance uuid 2de097e3-8182-48e5-b69d-88acbfb84e66 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:38:00 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:38:00 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:38:00 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:38:00.026 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:38:00 np0005548731 nova_compute[232433]: 2025-12-06 07:38:00.519 232437 INFO nova.compute.resource_tracker [None req-50d60388-e28c-430d-bb2f-1abe5533371b a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] [instance: 2de097e3-8182-48e5-b69d-88acbfb84e66] Updating resource usage from migration 47ca4852-1aea-48fa-a209-2395d57c0b69#033[00m
Dec  6 02:38:00 np0005548731 nova_compute[232433]: 2025-12-06 07:38:00.520 232437 DEBUG nova.compute.resource_tracker [None req-50d60388-e28c-430d-bb2f-1abe5533371b a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] [instance: 2de097e3-8182-48e5-b69d-88acbfb84e66] Starting to track incoming migration 47ca4852-1aea-48fa-a209-2395d57c0b69 with flavor fb97f55a-36c0-42f2-8156-c1b04eb23dd0 _update_usage_from_migration /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1431#033[00m
Dec  6 02:38:00 np0005548731 nova_compute[232433]: 2025-12-06 07:38:00.691 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:38:00 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:38:00 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:38:00 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:38:00.807 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:38:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:38:00.879 143965 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:38:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:38:00.879 143965 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:38:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:38:00.880 143965 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:38:00 np0005548731 nova_compute[232433]: 2025-12-06 07:38:00.898 232437 DEBUG oslo_concurrency.processutils [None req-50d60388-e28c-430d-bb2f-1abe5533371b a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:38:01 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 02:38:01 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1196958827' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 02:38:01 np0005548731 nova_compute[232433]: 2025-12-06 07:38:01.307 232437 DEBUG oslo_concurrency.processutils [None req-50d60388-e28c-430d-bb2f-1abe5533371b a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.410s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:38:01 np0005548731 nova_compute[232433]: 2025-12-06 07:38:01.313 232437 DEBUG nova.compute.provider_tree [None req-50d60388-e28c-430d-bb2f-1abe5533371b a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Inventory has not changed in ProviderTree for provider: 6d00757a-082f-486d-ae84-869a2ba2e6e7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  6 02:38:01 np0005548731 nova_compute[232433]: 2025-12-06 07:38:01.366 232437 DEBUG nova.scheduler.client.report [None req-50d60388-e28c-430d-bb2f-1abe5533371b a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Inventory has not changed for provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  6 02:38:01 np0005548731 nova_compute[232433]: 2025-12-06 07:38:01.464 232437 DEBUG oslo_concurrency.lockutils [None req-50d60388-e28c-430d-bb2f-1abe5533371b a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.resize_claim" :: held 1.931s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:38:01 np0005548731 nova_compute[232433]: 2025-12-06 07:38:01.464 232437 INFO nova.compute.manager [None req-50d60388-e28c-430d-bb2f-1abe5533371b a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] [instance: 2de097e3-8182-48e5-b69d-88acbfb84e66] Migrating#033[00m
Dec  6 02:38:02 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:38:02 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:38:02 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:38:02.029 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:38:02 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:38:02 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:38:02 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:38:02.809 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:38:03 np0005548731 nova_compute[232433]: 2025-12-06 07:38:03.070 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:38:03 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:38:04 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:38:04 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 02:38:04 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:38:04.031 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 02:38:04 np0005548731 systemd[1]: Created slice User Slice of UID 42436.
Dec  6 02:38:04 np0005548731 systemd[1]: Starting User Runtime Directory /run/user/42436...
Dec  6 02:38:04 np0005548731 systemd-logind[794]: New session 64 of user nova.
Dec  6 02:38:04 np0005548731 systemd[1]: Finished User Runtime Directory /run/user/42436.
Dec  6 02:38:04 np0005548731 systemd[1]: Starting User Manager for UID 42436...
Dec  6 02:38:04 np0005548731 systemd[289617]: Queued start job for default target Main User Target.
Dec  6 02:38:04 np0005548731 systemd[289617]: Created slice User Application Slice.
Dec  6 02:38:04 np0005548731 systemd[289617]: Started Mark boot as successful after the user session has run 2 minutes.
Dec  6 02:38:04 np0005548731 systemd[289617]: Started Daily Cleanup of User's Temporary Directories.
Dec  6 02:38:04 np0005548731 systemd[289617]: Reached target Paths.
Dec  6 02:38:04 np0005548731 systemd[289617]: Reached target Timers.
Dec  6 02:38:04 np0005548731 systemd[289617]: Starting D-Bus User Message Bus Socket...
Dec  6 02:38:04 np0005548731 systemd[289617]: Starting Create User's Volatile Files and Directories...
Dec  6 02:38:04 np0005548731 systemd[289617]: Finished Create User's Volatile Files and Directories.
Dec  6 02:38:04 np0005548731 systemd[289617]: Listening on D-Bus User Message Bus Socket.
Dec  6 02:38:04 np0005548731 systemd[289617]: Reached target Sockets.
Dec  6 02:38:04 np0005548731 systemd[289617]: Reached target Basic System.
Dec  6 02:38:04 np0005548731 systemd[289617]: Reached target Main User Target.
Dec  6 02:38:04 np0005548731 systemd[289617]: Startup finished in 139ms.
Dec  6 02:38:04 np0005548731 systemd[1]: Started User Manager for UID 42436.
Dec  6 02:38:04 np0005548731 systemd[1]: Started Session 64 of User nova.
Dec  6 02:38:04 np0005548731 systemd[1]: session-64.scope: Deactivated successfully.
Dec  6 02:38:04 np0005548731 systemd-logind[794]: Session 64 logged out. Waiting for processes to exit.
Dec  6 02:38:04 np0005548731 systemd-logind[794]: Removed session 64.
Dec  6 02:38:04 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:38:04 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 02:38:04 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:38:04.811 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 02:38:04 np0005548731 systemd-logind[794]: New session 66 of user nova.
Dec  6 02:38:04 np0005548731 systemd[1]: Started Session 66 of User nova.
Dec  6 02:38:05 np0005548731 systemd[1]: session-66.scope: Deactivated successfully.
Dec  6 02:38:05 np0005548731 systemd-logind[794]: Session 66 logged out. Waiting for processes to exit.
Dec  6 02:38:05 np0005548731 systemd-logind[794]: Removed session 66.
Dec  6 02:38:05 np0005548731 nova_compute[232433]: 2025-12-06 07:38:05.693 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:38:06 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:38:06 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:38:06 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:38:06.033 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:38:06 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:38:06 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:38:06 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:38:06.814 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:38:08 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:38:08 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:38:08 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:38:08.035 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:38:08 np0005548731 nova_compute[232433]: 2025-12-06 07:38:08.071 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:38:08 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:38:08 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:38:08 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:38:08 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:38:08.816 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:38:09 np0005548731 nova_compute[232433]: 2025-12-06 07:38:09.600 232437 DEBUG nova.compute.manager [req-83848317-da69-4a02-80fa-26be22a7926d req-773633dc-177b-42a5-8c9f-b20f7a0ee5c2 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 2de097e3-8182-48e5-b69d-88acbfb84e66] Received event network-vif-unplugged-8690867c-c0a8-4574-b54f-38486691e339 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:38:09 np0005548731 nova_compute[232433]: 2025-12-06 07:38:09.600 232437 DEBUG oslo_concurrency.lockutils [req-83848317-da69-4a02-80fa-26be22a7926d req-773633dc-177b-42a5-8c9f-b20f7a0ee5c2 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "2de097e3-8182-48e5-b69d-88acbfb84e66-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:38:09 np0005548731 nova_compute[232433]: 2025-12-06 07:38:09.601 232437 DEBUG oslo_concurrency.lockutils [req-83848317-da69-4a02-80fa-26be22a7926d req-773633dc-177b-42a5-8c9f-b20f7a0ee5c2 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "2de097e3-8182-48e5-b69d-88acbfb84e66-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:38:09 np0005548731 nova_compute[232433]: 2025-12-06 07:38:09.601 232437 DEBUG oslo_concurrency.lockutils [req-83848317-da69-4a02-80fa-26be22a7926d req-773633dc-177b-42a5-8c9f-b20f7a0ee5c2 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "2de097e3-8182-48e5-b69d-88acbfb84e66-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:38:09 np0005548731 nova_compute[232433]: 2025-12-06 07:38:09.601 232437 DEBUG nova.compute.manager [req-83848317-da69-4a02-80fa-26be22a7926d req-773633dc-177b-42a5-8c9f-b20f7a0ee5c2 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 2de097e3-8182-48e5-b69d-88acbfb84e66] No waiting events found dispatching network-vif-unplugged-8690867c-c0a8-4574-b54f-38486691e339 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 02:38:09 np0005548731 nova_compute[232433]: 2025-12-06 07:38:09.601 232437 WARNING nova.compute.manager [req-83848317-da69-4a02-80fa-26be22a7926d req-773633dc-177b-42a5-8c9f-b20f7a0ee5c2 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 2de097e3-8182-48e5-b69d-88acbfb84e66] Received unexpected event network-vif-unplugged-8690867c-c0a8-4574-b54f-38486691e339 for instance with vm_state active and task_state resize_migrating.#033[00m
Dec  6 02:38:10 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:38:10 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:38:10 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:38:10.036 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:38:10 np0005548731 nova_compute[232433]: 2025-12-06 07:38:10.105 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:38:10 np0005548731 nova_compute[232433]: 2025-12-06 07:38:10.105 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  6 02:38:10 np0005548731 nova_compute[232433]: 2025-12-06 07:38:10.151 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] [instance: 3d4b97f9-b8c3-4764-87f9-4006968fecd4] Skipping network cache update for instance because it has been migrated to another host. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9902#033[00m
Dec  6 02:38:10 np0005548731 nova_compute[232433]: 2025-12-06 07:38:10.152 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Dec  6 02:38:10 np0005548731 nova_compute[232433]: 2025-12-06 07:38:10.679 232437 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765006675.6781595, ffee41ca-ba3b-4787-8435-b3903ffb29a9 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 02:38:10 np0005548731 nova_compute[232433]: 2025-12-06 07:38:10.680 232437 INFO nova.compute.manager [-] [instance: ffee41ca-ba3b-4787-8435-b3903ffb29a9] VM Stopped (Lifecycle Event)#033[00m
Dec  6 02:38:10 np0005548731 nova_compute[232433]: 2025-12-06 07:38:10.695 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:38:10 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:38:10 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:38:10 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:38:10.818 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:38:10 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:38:10.943 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=52, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:ec:b3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '32:72:e7:89:e0:7d'}, ipsec=False) old=SB_Global(nb_cfg=51) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 02:38:10 np0005548731 nova_compute[232433]: 2025-12-06 07:38:10.944 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:38:10 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:38:10.944 143965 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec  6 02:38:10 np0005548731 nova_compute[232433]: 2025-12-06 07:38:10.955 232437 DEBUG nova.compute.manager [None req-48abacdc-5f3e-446e-ba45-281f8b861d9a - - - - - -] [instance: ffee41ca-ba3b-4787-8435-b3903ffb29a9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:38:12 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:38:12 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:38:12 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:38:12.039 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:38:12 np0005548731 nova_compute[232433]: 2025-12-06 07:38:12.103 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:38:12 np0005548731 nova_compute[232433]: 2025-12-06 07:38:12.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:38:12 np0005548731 nova_compute[232433]: 2025-12-06 07:38:12.471 232437 DEBUG nova.compute.manager [req-6d09819a-9f27-4428-9628-f3ab4542321f req-facd5f5f-9ce2-4d5b-8d11-9423a4a0d59b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 2de097e3-8182-48e5-b69d-88acbfb84e66] Received event network-vif-plugged-8690867c-c0a8-4574-b54f-38486691e339 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:38:12 np0005548731 nova_compute[232433]: 2025-12-06 07:38:12.471 232437 DEBUG oslo_concurrency.lockutils [req-6d09819a-9f27-4428-9628-f3ab4542321f req-facd5f5f-9ce2-4d5b-8d11-9423a4a0d59b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "2de097e3-8182-48e5-b69d-88acbfb84e66-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:38:12 np0005548731 nova_compute[232433]: 2025-12-06 07:38:12.472 232437 DEBUG oslo_concurrency.lockutils [req-6d09819a-9f27-4428-9628-f3ab4542321f req-facd5f5f-9ce2-4d5b-8d11-9423a4a0d59b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "2de097e3-8182-48e5-b69d-88acbfb84e66-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:38:12 np0005548731 nova_compute[232433]: 2025-12-06 07:38:12.472 232437 DEBUG oslo_concurrency.lockutils [req-6d09819a-9f27-4428-9628-f3ab4542321f req-facd5f5f-9ce2-4d5b-8d11-9423a4a0d59b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "2de097e3-8182-48e5-b69d-88acbfb84e66-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:38:12 np0005548731 nova_compute[232433]: 2025-12-06 07:38:12.472 232437 DEBUG nova.compute.manager [req-6d09819a-9f27-4428-9628-f3ab4542321f req-facd5f5f-9ce2-4d5b-8d11-9423a4a0d59b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 2de097e3-8182-48e5-b69d-88acbfb84e66] No waiting events found dispatching network-vif-plugged-8690867c-c0a8-4574-b54f-38486691e339 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 02:38:12 np0005548731 nova_compute[232433]: 2025-12-06 07:38:12.472 232437 WARNING nova.compute.manager [req-6d09819a-9f27-4428-9628-f3ab4542321f req-facd5f5f-9ce2-4d5b-8d11-9423a4a0d59b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 2de097e3-8182-48e5-b69d-88acbfb84e66] Received unexpected event network-vif-plugged-8690867c-c0a8-4574-b54f-38486691e339 for instance with vm_state active and task_state resize_migrated.#033[00m
Dec  6 02:38:12 np0005548731 nova_compute[232433]: 2025-12-06 07:38:12.686 232437 INFO nova.network.neutron [None req-50d60388-e28c-430d-bb2f-1abe5533371b a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] [instance: 2de097e3-8182-48e5-b69d-88acbfb84e66] Updating port 8690867c-c0a8-4574-b54f-38486691e339 with attributes {'binding:host_id': 'compute-2.ctlplane.example.com', 'device_owner': 'compute:nova'}#033[00m
Dec  6 02:38:12 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:38:12 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:38:12 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:38:12.820 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:38:13 np0005548731 nova_compute[232433]: 2025-12-06 07:38:13.108 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:38:13 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:38:13 np0005548731 nova_compute[232433]: 2025-12-06 07:38:13.952 232437 DEBUG oslo_concurrency.lockutils [None req-50d60388-e28c-430d-bb2f-1abe5533371b a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Acquiring lock "refresh_cache-2de097e3-8182-48e5-b69d-88acbfb84e66" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 02:38:13 np0005548731 nova_compute[232433]: 2025-12-06 07:38:13.953 232437 DEBUG oslo_concurrency.lockutils [None req-50d60388-e28c-430d-bb2f-1abe5533371b a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Acquired lock "refresh_cache-2de097e3-8182-48e5-b69d-88acbfb84e66" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 02:38:13 np0005548731 nova_compute[232433]: 2025-12-06 07:38:13.953 232437 DEBUG nova.network.neutron [None req-50d60388-e28c-430d-bb2f-1abe5533371b a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] [instance: 2de097e3-8182-48e5-b69d-88acbfb84e66] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec  6 02:38:14 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:38:14 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 02:38:14 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:38:14.041 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 02:38:14 np0005548731 nova_compute[232433]: 2025-12-06 07:38:14.102 232437 DEBUG nova.compute.manager [req-204fdbdd-e574-4979-960e-31b410d8a3c8 req-c465d2d6-bf60-4327-832c-ae16cb6cf512 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 2de097e3-8182-48e5-b69d-88acbfb84e66] Received event network-changed-8690867c-c0a8-4574-b54f-38486691e339 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:38:14 np0005548731 nova_compute[232433]: 2025-12-06 07:38:14.103 232437 DEBUG nova.compute.manager [req-204fdbdd-e574-4979-960e-31b410d8a3c8 req-c465d2d6-bf60-4327-832c-ae16cb6cf512 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 2de097e3-8182-48e5-b69d-88acbfb84e66] Refreshing instance network info cache due to event network-changed-8690867c-c0a8-4574-b54f-38486691e339. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  6 02:38:14 np0005548731 nova_compute[232433]: 2025-12-06 07:38:14.103 232437 DEBUG oslo_concurrency.lockutils [req-204fdbdd-e574-4979-960e-31b410d8a3c8 req-c465d2d6-bf60-4327-832c-ae16cb6cf512 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-2de097e3-8182-48e5-b69d-88acbfb84e66" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 02:38:14 np0005548731 nova_compute[232433]: 2025-12-06 07:38:14.477 232437 DEBUG nova.compute.manager [req-9fd702a1-9d3a-4977-9a60-8215569334de req-51db7bca-bdca-4c8b-8d52-f33ad29d3b5f dc3c916f09ac4f91b8e6a3247de028c1 6363f866b688477296a48bc0eb3789ff - - default default] [instance: ffee41ca-ba3b-4787-8435-b3903ffb29a9] Received event volume-reimaged-a032ab5d-3621-4ab1-82c3-2e8c7d4eeb9c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:38:14 np0005548731 nova_compute[232433]: 2025-12-06 07:38:14.477 232437 DEBUG oslo_concurrency.lockutils [req-9fd702a1-9d3a-4977-9a60-8215569334de req-51db7bca-bdca-4c8b-8d52-f33ad29d3b5f dc3c916f09ac4f91b8e6a3247de028c1 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "ffee41ca-ba3b-4787-8435-b3903ffb29a9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:38:14 np0005548731 nova_compute[232433]: 2025-12-06 07:38:14.477 232437 DEBUG oslo_concurrency.lockutils [req-9fd702a1-9d3a-4977-9a60-8215569334de req-51db7bca-bdca-4c8b-8d52-f33ad29d3b5f dc3c916f09ac4f91b8e6a3247de028c1 6363f866b688477296a48bc0eb3789ff - - default default] Lock "ffee41ca-ba3b-4787-8435-b3903ffb29a9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:38:14 np0005548731 nova_compute[232433]: 2025-12-06 07:38:14.478 232437 DEBUG oslo_concurrency.lockutils [req-9fd702a1-9d3a-4977-9a60-8215569334de req-51db7bca-bdca-4c8b-8d52-f33ad29d3b5f dc3c916f09ac4f91b8e6a3247de028c1 6363f866b688477296a48bc0eb3789ff - - default default] Lock "ffee41ca-ba3b-4787-8435-b3903ffb29a9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:38:14 np0005548731 nova_compute[232433]: 2025-12-06 07:38:14.478 232437 DEBUG nova.compute.manager [req-9fd702a1-9d3a-4977-9a60-8215569334de req-51db7bca-bdca-4c8b-8d52-f33ad29d3b5f dc3c916f09ac4f91b8e6a3247de028c1 6363f866b688477296a48bc0eb3789ff - - default default] [instance: ffee41ca-ba3b-4787-8435-b3903ffb29a9] Processing event volume-reimaged-a032ab5d-3621-4ab1-82c3-2e8c7d4eeb9c _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Dec  6 02:38:14 np0005548731 nova_compute[232433]: 2025-12-06 07:38:14.478 232437 DEBUG nova.compute.manager [None req-9fd702a1-9d3a-4977-9a60-8215569334de 7960f8f407754e5d83bee65690a6b772 21cd37bffa864aaebe5c734ba468f466 - - default default] [instance: ffee41ca-ba3b-4787-8435-b3903ffb29a9] Instance event wait completed in 16 seconds for volume-reimaged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec  6 02:38:14 np0005548731 nova_compute[232433]: 2025-12-06 07:38:14.532 232437 INFO nova.virt.block_device [None req-9fd702a1-9d3a-4977-9a60-8215569334de 7960f8f407754e5d83bee65690a6b772 21cd37bffa864aaebe5c734ba468f466 - - default default] [instance: ffee41ca-ba3b-4787-8435-b3903ffb29a9] Booting with volume a032ab5d-3621-4ab1-82c3-2e8c7d4eeb9c at /dev/vda#033[00m
Dec  6 02:38:14 np0005548731 nova_compute[232433]: 2025-12-06 07:38:14.700 232437 DEBUG os_brick.utils [None req-9fd702a1-9d3a-4977-9a60-8215569334de 7960f8f407754e5d83bee65690a6b772 21cd37bffa864aaebe5c734ba468f466 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.102', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-2.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Dec  6 02:38:14 np0005548731 nova_compute[232433]: 2025-12-06 07:38:14.702 237736 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:38:14 np0005548731 nova_compute[232433]: 2025-12-06 07:38:14.715 237736 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.013s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:38:14 np0005548731 nova_compute[232433]: 2025-12-06 07:38:14.716 237736 DEBUG oslo.privsep.daemon [-] privsep: reply[08f407ff-630a-47c5-bdd0-09f9a79cbf49]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:38:14 np0005548731 nova_compute[232433]: 2025-12-06 07:38:14.717 237736 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:38:14 np0005548731 nova_compute[232433]: 2025-12-06 07:38:14.725 237736 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.007s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:38:14 np0005548731 nova_compute[232433]: 2025-12-06 07:38:14.725 237736 DEBUG oslo.privsep.daemon [-] privsep: reply[afe1ad42-7191-431b-9595-3e2517047151]: (4, ('InitiatorName=iqn.1994-05.com.redhat:63778d5959f0', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:38:14 np0005548731 nova_compute[232433]: 2025-12-06 07:38:14.727 237736 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:38:14 np0005548731 nova_compute[232433]: 2025-12-06 07:38:14.743 237736 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.016s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:38:14 np0005548731 nova_compute[232433]: 2025-12-06 07:38:14.744 237736 DEBUG oslo.privsep.daemon [-] privsep: reply[53f1a2b2-474e-40ee-8814-cdfc70c68c00]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:38:14 np0005548731 nova_compute[232433]: 2025-12-06 07:38:14.745 237736 DEBUG oslo.privsep.daemon [-] privsep: reply[9f634b0a-ed1f-44f7-b005-0313ba88290b]: (4, 'ec2492e2-e0d1-43d3-b74f-744a8272a0e6') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:38:14 np0005548731 nova_compute[232433]: 2025-12-06 07:38:14.746 232437 DEBUG oslo_concurrency.processutils [None req-9fd702a1-9d3a-4977-9a60-8215569334de 7960f8f407754e5d83bee65690a6b772 21cd37bffa864aaebe5c734ba468f466 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:38:14 np0005548731 nova_compute[232433]: 2025-12-06 07:38:14.769 232437 DEBUG oslo_concurrency.processutils [None req-9fd702a1-9d3a-4977-9a60-8215569334de 7960f8f407754e5d83bee65690a6b772 21cd37bffa864aaebe5c734ba468f466 - - default default] CMD "nvme version" returned: 0 in 0.023s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:38:14 np0005548731 nova_compute[232433]: 2025-12-06 07:38:14.771 232437 DEBUG os_brick.initiator.connectors.lightos [None req-9fd702a1-9d3a-4977-9a60-8215569334de 7960f8f407754e5d83bee65690a6b772 21cd37bffa864aaebe5c734ba468f466 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Dec  6 02:38:14 np0005548731 nova_compute[232433]: 2025-12-06 07:38:14.771 232437 DEBUG os_brick.initiator.connectors.lightos [None req-9fd702a1-9d3a-4977-9a60-8215569334de 7960f8f407754e5d83bee65690a6b772 21cd37bffa864aaebe5c734ba468f466 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Dec  6 02:38:14 np0005548731 nova_compute[232433]: 2025-12-06 07:38:14.771 232437 DEBUG os_brick.initiator.connectors.lightos [None req-9fd702a1-9d3a-4977-9a60-8215569334de 7960f8f407754e5d83bee65690a6b772 21cd37bffa864aaebe5c734ba468f466 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:bf3e0a14-a5f8-4123-aa26-e7cad37b879a dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Dec  6 02:38:14 np0005548731 nova_compute[232433]: 2025-12-06 07:38:14.772 232437 DEBUG os_brick.utils [None req-9fd702a1-9d3a-4977-9a60-8215569334de 7960f8f407754e5d83bee65690a6b772 21cd37bffa864aaebe5c734ba468f466 - - default default] <== get_connector_properties: return (70ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.102', 'host': 'compute-2.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:63778d5959f0', 'do_local_attach': False, 'nvme_hostid': 'bf3e0a14-a5f8-4123-aa26-e7cad37b879a', 'system uuid': 'ec2492e2-e0d1-43d3-b74f-744a8272a0e6', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:bf3e0a14-a5f8-4123-aa26-e7cad37b879a', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Dec  6 02:38:14 np0005548731 nova_compute[232433]: 2025-12-06 07:38:14.772 232437 DEBUG nova.virt.block_device [None req-9fd702a1-9d3a-4977-9a60-8215569334de 7960f8f407754e5d83bee65690a6b772 21cd37bffa864aaebe5c734ba468f466 - - default default] [instance: ffee41ca-ba3b-4787-8435-b3903ffb29a9] Updating existing volume attachment record: b1919775-1b27-454b-91f9-61cf71009f3f _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Dec  6 02:38:14 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:38:14 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 02:38:14 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:38:14.823 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 02:38:15 np0005548731 systemd[1]: Stopping User Manager for UID 42436...
Dec  6 02:38:15 np0005548731 systemd[289617]: Activating special unit Exit the Session...
Dec  6 02:38:15 np0005548731 systemd[289617]: Stopped target Main User Target.
Dec  6 02:38:15 np0005548731 systemd[289617]: Stopped target Basic System.
Dec  6 02:38:15 np0005548731 systemd[289617]: Stopped target Paths.
Dec  6 02:38:15 np0005548731 systemd[289617]: Stopped target Sockets.
Dec  6 02:38:15 np0005548731 systemd[289617]: Stopped target Timers.
Dec  6 02:38:15 np0005548731 systemd[289617]: Stopped Mark boot as successful after the user session has run 2 minutes.
Dec  6 02:38:15 np0005548731 systemd[289617]: Stopped Daily Cleanup of User's Temporary Directories.
Dec  6 02:38:15 np0005548731 systemd[289617]: Closed D-Bus User Message Bus Socket.
Dec  6 02:38:15 np0005548731 systemd[289617]: Stopped Create User's Volatile Files and Directories.
Dec  6 02:38:15 np0005548731 systemd[289617]: Removed slice User Application Slice.
Dec  6 02:38:15 np0005548731 systemd[289617]: Reached target Shutdown.
Dec  6 02:38:15 np0005548731 systemd[289617]: Finished Exit the Session.
Dec  6 02:38:15 np0005548731 systemd[289617]: Reached target Exit the Session.
Dec  6 02:38:15 np0005548731 systemd[1]: user@42436.service: Deactivated successfully.
Dec  6 02:38:15 np0005548731 systemd[1]: Stopped User Manager for UID 42436.
Dec  6 02:38:15 np0005548731 systemd[1]: Stopping User Runtime Directory /run/user/42436...
Dec  6 02:38:15 np0005548731 systemd[1]: run-user-42436.mount: Deactivated successfully.
Dec  6 02:38:15 np0005548731 systemd[1]: user-runtime-dir@42436.service: Deactivated successfully.
Dec  6 02:38:15 np0005548731 systemd[1]: Stopped User Runtime Directory /run/user/42436.
Dec  6 02:38:15 np0005548731 systemd[1]: Removed slice User Slice of UID 42436.
Dec  6 02:38:15 np0005548731 nova_compute[232433]: 2025-12-06 07:38:15.698 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:38:15 np0005548731 nova_compute[232433]: 2025-12-06 07:38:15.811 232437 DEBUG nova.network.neutron [None req-50d60388-e28c-430d-bb2f-1abe5533371b a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] [instance: 2de097e3-8182-48e5-b69d-88acbfb84e66] Updating instance_info_cache with network_info: [{"id": "8690867c-c0a8-4574-b54f-38486691e339", "address": "fa:16:3e:4a:1f:3e", "network": {"id": "3beede49-1cbb-425c-b1af-82f43dc57163", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-619240463-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.206", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b10aa03d68eb4d4799d53538521cc364", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8690867c-c0", "ovs_interfaceid": "8690867c-c0a8-4574-b54f-38486691e339", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 02:38:15 np0005548731 nova_compute[232433]: 2025-12-06 07:38:15.841 232437 DEBUG nova.virt.libvirt.driver [None req-9fd702a1-9d3a-4977-9a60-8215569334de 7960f8f407754e5d83bee65690a6b772 21cd37bffa864aaebe5c734ba468f466 - - default default] [instance: ffee41ca-ba3b-4787-8435-b3903ffb29a9] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Dec  6 02:38:15 np0005548731 nova_compute[232433]: 2025-12-06 07:38:15.841 232437 INFO nova.virt.libvirt.driver [None req-9fd702a1-9d3a-4977-9a60-8215569334de 7960f8f407754e5d83bee65690a6b772 21cd37bffa864aaebe5c734ba468f466 - - default default] [instance: ffee41ca-ba3b-4787-8435-b3903ffb29a9] Creating image(s)#033[00m
Dec  6 02:38:15 np0005548731 nova_compute[232433]: 2025-12-06 07:38:15.842 232437 DEBUG nova.virt.libvirt.driver [None req-9fd702a1-9d3a-4977-9a60-8215569334de 7960f8f407754e5d83bee65690a6b772 21cd37bffa864aaebe5c734ba468f466 - - default default] [instance: ffee41ca-ba3b-4787-8435-b3903ffb29a9] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859#033[00m
Dec  6 02:38:15 np0005548731 nova_compute[232433]: 2025-12-06 07:38:15.842 232437 DEBUG nova.virt.libvirt.driver [None req-9fd702a1-9d3a-4977-9a60-8215569334de 7960f8f407754e5d83bee65690a6b772 21cd37bffa864aaebe5c734ba468f466 - - default default] [instance: ffee41ca-ba3b-4787-8435-b3903ffb29a9] Ensure instance console log exists: /var/lib/nova/instances/ffee41ca-ba3b-4787-8435-b3903ffb29a9/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec  6 02:38:15 np0005548731 nova_compute[232433]: 2025-12-06 07:38:15.842 232437 DEBUG oslo_concurrency.lockutils [None req-9fd702a1-9d3a-4977-9a60-8215569334de 7960f8f407754e5d83bee65690a6b772 21cd37bffa864aaebe5c734ba468f466 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:38:15 np0005548731 nova_compute[232433]: 2025-12-06 07:38:15.842 232437 DEBUG oslo_concurrency.lockutils [None req-9fd702a1-9d3a-4977-9a60-8215569334de 7960f8f407754e5d83bee65690a6b772 21cd37bffa864aaebe5c734ba468f466 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:38:15 np0005548731 nova_compute[232433]: 2025-12-06 07:38:15.843 232437 DEBUG oslo_concurrency.lockutils [None req-9fd702a1-9d3a-4977-9a60-8215569334de 7960f8f407754e5d83bee65690a6b772 21cd37bffa864aaebe5c734ba468f466 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:38:15 np0005548731 nova_compute[232433]: 2025-12-06 07:38:15.845 232437 DEBUG nova.virt.libvirt.driver [None req-9fd702a1-9d3a-4977-9a60-8215569334de 7960f8f407754e5d83bee65690a6b772 21cd37bffa864aaebe5c734ba468f466 - - default default] [instance: ffee41ca-ba3b-4787-8435-b3903ffb29a9] Start _get_guest_xml network_info=[{"id": "500715f2-2222-4003-b268-0369959f5e25", "address": "fa:16:3e:d3:0b:9b", "network": {"id": "e4ed5caf-995c-4d95-a87b-6b8a5de780fa", "bridge": "br-int", "label": "tempest-ServerActionsV293TestJSON-599749084-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.244", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "21cd37bffa864aaebe5c734ba468f466", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap500715f2-22", "ovs_interfaceid": "500715f2-2222-4003-b268-0369959f5e25", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:38Z,direct_url=<?>,disk_format='qcow2',id=412dd61d-1b1e-439f-b7f9-7e7c4e42924c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:41Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'guest_format': None, 'device_type': 'disk', 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-a032ab5d-3621-4ab1-82c3-2e8c7d4eeb9c', 'hosts': ['192.168.122.100', '192.168.122.102', '192.168.122.101'], 'ports': ['6789', '6789', '6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': 'a032ab5d-3621-4ab1-82c3-2e8c7d4eeb9c', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': 'ffee41ca-ba3b-4787-8435-b3903ffb29a9', 'attached_at': '', 'detached_at': '', 'volume_id': 'a032ab5d-3621-4ab1-82c3-2e8c7d4eeb9c', 'serial': 'a032ab5d-3621-4ab1-82c3-2e8c7d4eeb9c'}, 'disk_bus': 'virtio', 'boot_index': 0, 'delete_on_termination': True, 'mount_device': '/dev/vda', 'attachment_id': 'b1919775-1b27-454b-91f9-61cf71009f3f', 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec  6 02:38:15 np0005548731 nova_compute[232433]: 2025-12-06 07:38:15.846 232437 DEBUG oslo_concurrency.lockutils [None req-50d60388-e28c-430d-bb2f-1abe5533371b a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Releasing lock "refresh_cache-2de097e3-8182-48e5-b69d-88acbfb84e66" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 02:38:15 np0005548731 nova_compute[232433]: 2025-12-06 07:38:15.849 232437 DEBUG oslo_concurrency.lockutils [req-204fdbdd-e574-4979-960e-31b410d8a3c8 req-c465d2d6-bf60-4327-832c-ae16cb6cf512 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-2de097e3-8182-48e5-b69d-88acbfb84e66" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 02:38:15 np0005548731 nova_compute[232433]: 2025-12-06 07:38:15.849 232437 DEBUG nova.network.neutron [req-204fdbdd-e574-4979-960e-31b410d8a3c8 req-c465d2d6-bf60-4327-832c-ae16cb6cf512 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 2de097e3-8182-48e5-b69d-88acbfb84e66] Refreshing network info cache for port 8690867c-c0a8-4574-b54f-38486691e339 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  6 02:38:15 np0005548731 nova_compute[232433]: 2025-12-06 07:38:15.853 232437 WARNING nova.virt.libvirt.driver [None req-9fd702a1-9d3a-4977-9a60-8215569334de 7960f8f407754e5d83bee65690a6b772 21cd37bffa864aaebe5c734ba468f466 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.: NotImplementedError#033[00m
Dec  6 02:38:15 np0005548731 nova_compute[232433]: 2025-12-06 07:38:15.859 232437 DEBUG nova.virt.libvirt.host [None req-9fd702a1-9d3a-4977-9a60-8215569334de 7960f8f407754e5d83bee65690a6b772 21cd37bffa864aaebe5c734ba468f466 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec  6 02:38:15 np0005548731 nova_compute[232433]: 2025-12-06 07:38:15.859 232437 DEBUG nova.virt.libvirt.host [None req-9fd702a1-9d3a-4977-9a60-8215569334de 7960f8f407754e5d83bee65690a6b772 21cd37bffa864aaebe5c734ba468f466 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec  6 02:38:15 np0005548731 nova_compute[232433]: 2025-12-06 07:38:15.864 232437 DEBUG nova.virt.libvirt.host [None req-9fd702a1-9d3a-4977-9a60-8215569334de 7960f8f407754e5d83bee65690a6b772 21cd37bffa864aaebe5c734ba468f466 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec  6 02:38:15 np0005548731 nova_compute[232433]: 2025-12-06 07:38:15.864 232437 DEBUG nova.virt.libvirt.host [None req-9fd702a1-9d3a-4977-9a60-8215569334de 7960f8f407754e5d83bee65690a6b772 21cd37bffa864aaebe5c734ba468f466 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec  6 02:38:15 np0005548731 nova_compute[232433]: 2025-12-06 07:38:15.865 232437 DEBUG nova.virt.libvirt.driver [None req-9fd702a1-9d3a-4977-9a60-8215569334de 7960f8f407754e5d83bee65690a6b772 21cd37bffa864aaebe5c734ba468f466 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec  6 02:38:15 np0005548731 nova_compute[232433]: 2025-12-06 07:38:15.866 232437 DEBUG nova.virt.hardware [None req-9fd702a1-9d3a-4977-9a60-8215569334de 7960f8f407754e5d83bee65690a6b772 21cd37bffa864aaebe5c734ba468f466 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-06T06:56:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='25848a18-11d9-4f11-80b5-5d005675c76d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:38Z,direct_url=<?>,disk_format='qcow2',id=412dd61d-1b1e-439f-b7f9-7e7c4e42924c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:41Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec  6 02:38:15 np0005548731 nova_compute[232433]: 2025-12-06 07:38:15.866 232437 DEBUG nova.virt.hardware [None req-9fd702a1-9d3a-4977-9a60-8215569334de 7960f8f407754e5d83bee65690a6b772 21cd37bffa864aaebe5c734ba468f466 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec  6 02:38:15 np0005548731 nova_compute[232433]: 2025-12-06 07:38:15.866 232437 DEBUG nova.virt.hardware [None req-9fd702a1-9d3a-4977-9a60-8215569334de 7960f8f407754e5d83bee65690a6b772 21cd37bffa864aaebe5c734ba468f466 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec  6 02:38:15 np0005548731 nova_compute[232433]: 2025-12-06 07:38:15.866 232437 DEBUG nova.virt.hardware [None req-9fd702a1-9d3a-4977-9a60-8215569334de 7960f8f407754e5d83bee65690a6b772 21cd37bffa864aaebe5c734ba468f466 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec  6 02:38:15 np0005548731 nova_compute[232433]: 2025-12-06 07:38:15.866 232437 DEBUG nova.virt.hardware [None req-9fd702a1-9d3a-4977-9a60-8215569334de 7960f8f407754e5d83bee65690a6b772 21cd37bffa864aaebe5c734ba468f466 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec  6 02:38:15 np0005548731 nova_compute[232433]: 2025-12-06 07:38:15.867 232437 DEBUG nova.virt.hardware [None req-9fd702a1-9d3a-4977-9a60-8215569334de 7960f8f407754e5d83bee65690a6b772 21cd37bffa864aaebe5c734ba468f466 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec  6 02:38:15 np0005548731 nova_compute[232433]: 2025-12-06 07:38:15.867 232437 DEBUG nova.virt.hardware [None req-9fd702a1-9d3a-4977-9a60-8215569334de 7960f8f407754e5d83bee65690a6b772 21cd37bffa864aaebe5c734ba468f466 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec  6 02:38:15 np0005548731 nova_compute[232433]: 2025-12-06 07:38:15.867 232437 DEBUG nova.virt.hardware [None req-9fd702a1-9d3a-4977-9a60-8215569334de 7960f8f407754e5d83bee65690a6b772 21cd37bffa864aaebe5c734ba468f466 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec  6 02:38:15 np0005548731 nova_compute[232433]: 2025-12-06 07:38:15.867 232437 DEBUG nova.virt.hardware [None req-9fd702a1-9d3a-4977-9a60-8215569334de 7960f8f407754e5d83bee65690a6b772 21cd37bffa864aaebe5c734ba468f466 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec  6 02:38:15 np0005548731 nova_compute[232433]: 2025-12-06 07:38:15.868 232437 DEBUG nova.virt.hardware [None req-9fd702a1-9d3a-4977-9a60-8215569334de 7960f8f407754e5d83bee65690a6b772 21cd37bffa864aaebe5c734ba468f466 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec  6 02:38:15 np0005548731 nova_compute[232433]: 2025-12-06 07:38:15.868 232437 DEBUG nova.virt.hardware [None req-9fd702a1-9d3a-4977-9a60-8215569334de 7960f8f407754e5d83bee65690a6b772 21cd37bffa864aaebe5c734ba468f466 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec  6 02:38:15 np0005548731 nova_compute[232433]: 2025-12-06 07:38:15.868 232437 DEBUG nova.objects.instance [None req-9fd702a1-9d3a-4977-9a60-8215569334de 7960f8f407754e5d83bee65690a6b772 21cd37bffa864aaebe5c734ba468f466 - - default default] Lazy-loading 'vcpu_model' on Instance uuid ffee41ca-ba3b-4787-8435-b3903ffb29a9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:38:15 np0005548731 nova_compute[232433]: 2025-12-06 07:38:15.907 232437 DEBUG nova.storage.rbd_utils [None req-9fd702a1-9d3a-4977-9a60-8215569334de 7960f8f407754e5d83bee65690a6b772 21cd37bffa864aaebe5c734ba468f466 - - default default] rbd image ffee41ca-ba3b-4787-8435-b3903ffb29a9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:38:15 np0005548731 nova_compute[232433]: 2025-12-06 07:38:15.910 232437 DEBUG oslo_concurrency.processutils [None req-9fd702a1-9d3a-4977-9a60-8215569334de 7960f8f407754e5d83bee65690a6b772 21cd37bffa864aaebe5c734ba468f466 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:38:15 np0005548731 nova_compute[232433]: 2025-12-06 07:38:15.942 232437 DEBUG os_brick.utils [None req-50d60388-e28c-430d-bb2f-1abe5533371b a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.102', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-2.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Dec  6 02:38:15 np0005548731 nova_compute[232433]: 2025-12-06 07:38:15.943 237736 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:38:15 np0005548731 nova_compute[232433]: 2025-12-06 07:38:15.955 237736 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.012s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:38:15 np0005548731 nova_compute[232433]: 2025-12-06 07:38:15.956 237736 DEBUG oslo.privsep.daemon [-] privsep: reply[bf9ae810-4203-4370-aa13-741878d63598]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:38:15 np0005548731 nova_compute[232433]: 2025-12-06 07:38:15.957 237736 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:38:15 np0005548731 nova_compute[232433]: 2025-12-06 07:38:15.965 237736 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:38:15 np0005548731 nova_compute[232433]: 2025-12-06 07:38:15.966 237736 DEBUG oslo.privsep.daemon [-] privsep: reply[be9596ab-e36e-4d91-ba0e-f7fed673af8b]: (4, ('InitiatorName=iqn.1994-05.com.redhat:63778d5959f0', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:38:15 np0005548731 nova_compute[232433]: 2025-12-06 07:38:15.967 237736 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:38:15 np0005548731 nova_compute[232433]: 2025-12-06 07:38:15.975 237736 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:38:15 np0005548731 nova_compute[232433]: 2025-12-06 07:38:15.975 237736 DEBUG oslo.privsep.daemon [-] privsep: reply[95c74bb2-57f5-4a71-9423-b026a07b881a]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:38:15 np0005548731 nova_compute[232433]: 2025-12-06 07:38:15.976 237736 DEBUG oslo.privsep.daemon [-] privsep: reply[2229fade-69df-4572-8a15-32f588fb4ccc]: (4, 'ec2492e2-e0d1-43d3-b74f-744a8272a0e6') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:38:15 np0005548731 nova_compute[232433]: 2025-12-06 07:38:15.977 232437 DEBUG oslo_concurrency.processutils [None req-50d60388-e28c-430d-bb2f-1abe5533371b a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:38:16 np0005548731 nova_compute[232433]: 2025-12-06 07:38:16.007 232437 DEBUG oslo_concurrency.processutils [None req-50d60388-e28c-430d-bb2f-1abe5533371b a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] CMD "nvme version" returned: 0 in 0.030s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:38:16 np0005548731 nova_compute[232433]: 2025-12-06 07:38:16.010 232437 DEBUG os_brick.initiator.connectors.lightos [None req-50d60388-e28c-430d-bb2f-1abe5533371b a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Dec  6 02:38:16 np0005548731 nova_compute[232433]: 2025-12-06 07:38:16.010 232437 DEBUG os_brick.initiator.connectors.lightos [None req-50d60388-e28c-430d-bb2f-1abe5533371b a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Dec  6 02:38:16 np0005548731 nova_compute[232433]: 2025-12-06 07:38:16.011 232437 DEBUG os_brick.initiator.connectors.lightos [None req-50d60388-e28c-430d-bb2f-1abe5533371b a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:bf3e0a14-a5f8-4123-aa26-e7cad37b879a dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Dec  6 02:38:16 np0005548731 nova_compute[232433]: 2025-12-06 07:38:16.011 232437 DEBUG os_brick.utils [None req-50d60388-e28c-430d-bb2f-1abe5533371b a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] <== get_connector_properties: return (67ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.102', 'host': 'compute-2.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:63778d5959f0', 'do_local_attach': False, 'nvme_hostid': 'bf3e0a14-a5f8-4123-aa26-e7cad37b879a', 'system uuid': 'ec2492e2-e0d1-43d3-b74f-744a8272a0e6', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:bf3e0a14-a5f8-4123-aa26-e7cad37b879a', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Dec  6 02:38:16 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:38:16 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:38:16 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:38:16.043 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:38:16 np0005548731 nova_compute[232433]: 2025-12-06 07:38:16.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:38:16 np0005548731 nova_compute[232433]: 2025-12-06 07:38:16.104 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  6 02:38:16 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  6 02:38:16 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1297615286' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  6 02:38:16 np0005548731 nova_compute[232433]: 2025-12-06 07:38:16.795 232437 DEBUG oslo_concurrency.processutils [None req-9fd702a1-9d3a-4977-9a60-8215569334de 7960f8f407754e5d83bee65690a6b772 21cd37bffa864aaebe5c734ba468f466 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.885s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:38:16 np0005548731 nova_compute[232433]: 2025-12-06 07:38:16.820 232437 DEBUG nova.virt.libvirt.vif [None req-9fd702a1-9d3a-4977-9a60-8215569334de 7960f8f407754e5d83bee65690a6b772 21cd37bffa864aaebe5c734ba468f466 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-12-06T07:36:57Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-ServerActionsV293TestJSON-server-1360575984',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serveractionsv293testjson-server-1645861904',id=133,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBC09UQYuvXJm1Rko+1Enhuo3SSbAWagIRaWgjbl1eMQTI/iz1qciOUWudPgubxE+dfDGSMSVZfhJ7I1j6mbbf0xS8CEMgr5ptTVwPCXCiAWETgl5ZTVu74pyI+Gi8rzOng==',key_name='tempest-keypair-2069448021',keypairs=<?>,launch_index=0,launched_at=2025-12-06T07:37:14Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={rebuild='server'},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='21cd37bffa864aaebe5c734ba468f466',ramdisk_id='',reservation_id='r-xs37ll8f',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='1',image_base_image_ref='412dd61d-1b1e-439f-b7f9-7e7c4e42924c',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_t
ype='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsV293TestJSON-1595119527',owner_user_name='tempest-ServerActionsV293TestJSON-1595119527-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:38:14Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='7960f8f407754e5d83bee65690a6b772',uuid=ffee41ca-ba3b-4787-8435-b3903ffb29a9,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "500715f2-2222-4003-b268-0369959f5e25", "address": "fa:16:3e:d3:0b:9b", "network": {"id": "e4ed5caf-995c-4d95-a87b-6b8a5de780fa", "bridge": "br-int", "label": "tempest-ServerActionsV293TestJSON-599749084-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.244", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "21cd37bffa864aaebe5c734ba468f466", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap500715f2-22", "ovs_interfaceid": "500715f2-2222-4003-b268-0369959f5e25", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec  6 02:38:16 np0005548731 nova_compute[232433]: 2025-12-06 07:38:16.821 232437 DEBUG nova.network.os_vif_util [None req-9fd702a1-9d3a-4977-9a60-8215569334de 7960f8f407754e5d83bee65690a6b772 21cd37bffa864aaebe5c734ba468f466 - - default default] Converting VIF {"id": "500715f2-2222-4003-b268-0369959f5e25", "address": "fa:16:3e:d3:0b:9b", "network": {"id": "e4ed5caf-995c-4d95-a87b-6b8a5de780fa", "bridge": "br-int", "label": "tempest-ServerActionsV293TestJSON-599749084-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.244", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "21cd37bffa864aaebe5c734ba468f466", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap500715f2-22", "ovs_interfaceid": "500715f2-2222-4003-b268-0369959f5e25", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 02:38:16 np0005548731 nova_compute[232433]: 2025-12-06 07:38:16.822 232437 DEBUG nova.network.os_vif_util [None req-9fd702a1-9d3a-4977-9a60-8215569334de 7960f8f407754e5d83bee65690a6b772 21cd37bffa864aaebe5c734ba468f466 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:d3:0b:9b,bridge_name='br-int',has_traffic_filtering=True,id=500715f2-2222-4003-b268-0369959f5e25,network=Network(e4ed5caf-995c-4d95-a87b-6b8a5de780fa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap500715f2-22') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 02:38:16 np0005548731 nova_compute[232433]: 2025-12-06 07:38:16.824 232437 DEBUG nova.virt.libvirt.driver [None req-9fd702a1-9d3a-4977-9a60-8215569334de 7960f8f407754e5d83bee65690a6b772 21cd37bffa864aaebe5c734ba468f466 - - default default] [instance: ffee41ca-ba3b-4787-8435-b3903ffb29a9] End _get_guest_xml xml=<domain type="kvm">
Dec  6 02:38:16 np0005548731 nova_compute[232433]:  <uuid>ffee41ca-ba3b-4787-8435-b3903ffb29a9</uuid>
Dec  6 02:38:16 np0005548731 nova_compute[232433]:  <name>instance-00000085</name>
Dec  6 02:38:16 np0005548731 nova_compute[232433]:  <memory>131072</memory>
Dec  6 02:38:16 np0005548731 nova_compute[232433]:  <vcpu>1</vcpu>
Dec  6 02:38:16 np0005548731 nova_compute[232433]:  <metadata>
Dec  6 02:38:16 np0005548731 nova_compute[232433]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec  6 02:38:16 np0005548731 nova_compute[232433]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec  6 02:38:16 np0005548731 nova_compute[232433]:      <nova:name>tempest-ServerActionsV293TestJSON-server-1360575984</nova:name>
Dec  6 02:38:16 np0005548731 nova_compute[232433]:      <nova:creationTime>2025-12-06 07:38:15</nova:creationTime>
Dec  6 02:38:16 np0005548731 nova_compute[232433]:      <nova:flavor name="m1.nano">
Dec  6 02:38:16 np0005548731 nova_compute[232433]:        <nova:memory>128</nova:memory>
Dec  6 02:38:16 np0005548731 nova_compute[232433]:        <nova:disk>1</nova:disk>
Dec  6 02:38:16 np0005548731 nova_compute[232433]:        <nova:swap>0</nova:swap>
Dec  6 02:38:16 np0005548731 nova_compute[232433]:        <nova:ephemeral>0</nova:ephemeral>
Dec  6 02:38:16 np0005548731 nova_compute[232433]:        <nova:vcpus>1</nova:vcpus>
Dec  6 02:38:16 np0005548731 nova_compute[232433]:      </nova:flavor>
Dec  6 02:38:16 np0005548731 nova_compute[232433]:      <nova:owner>
Dec  6 02:38:16 np0005548731 nova_compute[232433]:        <nova:user uuid="7960f8f407754e5d83bee65690a6b772">tempest-ServerActionsV293TestJSON-1595119527-project-member</nova:user>
Dec  6 02:38:16 np0005548731 nova_compute[232433]:        <nova:project uuid="21cd37bffa864aaebe5c734ba468f466">tempest-ServerActionsV293TestJSON-1595119527</nova:project>
Dec  6 02:38:16 np0005548731 nova_compute[232433]:      </nova:owner>
Dec  6 02:38:16 np0005548731 nova_compute[232433]:      <nova:ports>
Dec  6 02:38:16 np0005548731 nova_compute[232433]:        <nova:port uuid="500715f2-2222-4003-b268-0369959f5e25">
Dec  6 02:38:16 np0005548731 nova_compute[232433]:          <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Dec  6 02:38:16 np0005548731 nova_compute[232433]:        </nova:port>
Dec  6 02:38:16 np0005548731 nova_compute[232433]:      </nova:ports>
Dec  6 02:38:16 np0005548731 nova_compute[232433]:    </nova:instance>
Dec  6 02:38:16 np0005548731 nova_compute[232433]:  </metadata>
Dec  6 02:38:16 np0005548731 nova_compute[232433]:  <sysinfo type="smbios">
Dec  6 02:38:16 np0005548731 nova_compute[232433]:    <system>
Dec  6 02:38:16 np0005548731 nova_compute[232433]:      <entry name="manufacturer">RDO</entry>
Dec  6 02:38:16 np0005548731 nova_compute[232433]:      <entry name="product">OpenStack Compute</entry>
Dec  6 02:38:16 np0005548731 nova_compute[232433]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec  6 02:38:16 np0005548731 nova_compute[232433]:      <entry name="serial">ffee41ca-ba3b-4787-8435-b3903ffb29a9</entry>
Dec  6 02:38:16 np0005548731 nova_compute[232433]:      <entry name="uuid">ffee41ca-ba3b-4787-8435-b3903ffb29a9</entry>
Dec  6 02:38:16 np0005548731 nova_compute[232433]:      <entry name="family">Virtual Machine</entry>
Dec  6 02:38:16 np0005548731 nova_compute[232433]:    </system>
Dec  6 02:38:16 np0005548731 nova_compute[232433]:  </sysinfo>
Dec  6 02:38:16 np0005548731 nova_compute[232433]:  <os>
Dec  6 02:38:16 np0005548731 nova_compute[232433]:    <type arch="x86_64" machine="q35">hvm</type>
Dec  6 02:38:16 np0005548731 nova_compute[232433]:    <boot dev="hd"/>
Dec  6 02:38:16 np0005548731 nova_compute[232433]:    <smbios mode="sysinfo"/>
Dec  6 02:38:16 np0005548731 nova_compute[232433]:  </os>
Dec  6 02:38:16 np0005548731 nova_compute[232433]:  <features>
Dec  6 02:38:16 np0005548731 nova_compute[232433]:    <acpi/>
Dec  6 02:38:16 np0005548731 nova_compute[232433]:    <apic/>
Dec  6 02:38:16 np0005548731 nova_compute[232433]:    <vmcoreinfo/>
Dec  6 02:38:16 np0005548731 nova_compute[232433]:  </features>
Dec  6 02:38:16 np0005548731 nova_compute[232433]:  <clock offset="utc">
Dec  6 02:38:16 np0005548731 nova_compute[232433]:    <timer name="pit" tickpolicy="delay"/>
Dec  6 02:38:16 np0005548731 nova_compute[232433]:    <timer name="rtc" tickpolicy="catchup"/>
Dec  6 02:38:16 np0005548731 nova_compute[232433]:    <timer name="hpet" present="no"/>
Dec  6 02:38:16 np0005548731 nova_compute[232433]:  </clock>
Dec  6 02:38:16 np0005548731 nova_compute[232433]:  <cpu mode="custom" match="exact">
Dec  6 02:38:16 np0005548731 nova_compute[232433]:    <model>Nehalem</model>
Dec  6 02:38:16 np0005548731 nova_compute[232433]:    <topology sockets="1" cores="1" threads="1"/>
Dec  6 02:38:16 np0005548731 nova_compute[232433]:  </cpu>
Dec  6 02:38:16 np0005548731 nova_compute[232433]:  <devices>
Dec  6 02:38:16 np0005548731 nova_compute[232433]:    <disk type="network" device="cdrom">
Dec  6 02:38:16 np0005548731 nova_compute[232433]:      <driver type="raw" cache="none"/>
Dec  6 02:38:16 np0005548731 nova_compute[232433]:      <source protocol="rbd" name="vms/ffee41ca-ba3b-4787-8435-b3903ffb29a9_disk.config">
Dec  6 02:38:16 np0005548731 nova_compute[232433]:        <host name="192.168.122.100" port="6789"/>
Dec  6 02:38:16 np0005548731 nova_compute[232433]:        <host name="192.168.122.102" port="6789"/>
Dec  6 02:38:16 np0005548731 nova_compute[232433]:        <host name="192.168.122.101" port="6789"/>
Dec  6 02:38:16 np0005548731 nova_compute[232433]:      </source>
Dec  6 02:38:16 np0005548731 nova_compute[232433]:      <auth username="openstack">
Dec  6 02:38:16 np0005548731 nova_compute[232433]:        <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec  6 02:38:16 np0005548731 nova_compute[232433]:      </auth>
Dec  6 02:38:16 np0005548731 nova_compute[232433]:      <target dev="sda" bus="sata"/>
Dec  6 02:38:16 np0005548731 nova_compute[232433]:    </disk>
Dec  6 02:38:16 np0005548731 nova_compute[232433]:    <disk type="network" device="disk">
Dec  6 02:38:16 np0005548731 nova_compute[232433]:      <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Dec  6 02:38:16 np0005548731 nova_compute[232433]:      <source protocol="rbd" name="volumes/volume-a032ab5d-3621-4ab1-82c3-2e8c7d4eeb9c">
Dec  6 02:38:16 np0005548731 nova_compute[232433]:        <host name="192.168.122.100" port="6789"/>
Dec  6 02:38:16 np0005548731 nova_compute[232433]:        <host name="192.168.122.102" port="6789"/>
Dec  6 02:38:16 np0005548731 nova_compute[232433]:        <host name="192.168.122.101" port="6789"/>
Dec  6 02:38:16 np0005548731 nova_compute[232433]:      </source>
Dec  6 02:38:16 np0005548731 nova_compute[232433]:      <auth username="openstack">
Dec  6 02:38:16 np0005548731 nova_compute[232433]:        <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec  6 02:38:16 np0005548731 nova_compute[232433]:      </auth>
Dec  6 02:38:16 np0005548731 nova_compute[232433]:      <target dev="vda" bus="virtio"/>
Dec  6 02:38:16 np0005548731 nova_compute[232433]:      <serial>a032ab5d-3621-4ab1-82c3-2e8c7d4eeb9c</serial>
Dec  6 02:38:16 np0005548731 nova_compute[232433]:    </disk>
Dec  6 02:38:16 np0005548731 nova_compute[232433]:    <interface type="ethernet">
Dec  6 02:38:16 np0005548731 nova_compute[232433]:      <mac address="fa:16:3e:d3:0b:9b"/>
Dec  6 02:38:16 np0005548731 nova_compute[232433]:      <model type="virtio"/>
Dec  6 02:38:16 np0005548731 nova_compute[232433]:      <driver name="vhost" rx_queue_size="512"/>
Dec  6 02:38:16 np0005548731 nova_compute[232433]:      <mtu size="1442"/>
Dec  6 02:38:16 np0005548731 nova_compute[232433]:      <target dev="tap500715f2-22"/>
Dec  6 02:38:16 np0005548731 nova_compute[232433]:    </interface>
Dec  6 02:38:16 np0005548731 nova_compute[232433]:    <serial type="pty">
Dec  6 02:38:16 np0005548731 nova_compute[232433]:      <log file="/var/lib/nova/instances/ffee41ca-ba3b-4787-8435-b3903ffb29a9/console.log" append="off"/>
Dec  6 02:38:16 np0005548731 nova_compute[232433]:    </serial>
Dec  6 02:38:16 np0005548731 nova_compute[232433]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec  6 02:38:16 np0005548731 nova_compute[232433]:    <video>
Dec  6 02:38:16 np0005548731 nova_compute[232433]:      <model type="virtio"/>
Dec  6 02:38:16 np0005548731 nova_compute[232433]:    </video>
Dec  6 02:38:16 np0005548731 nova_compute[232433]:    <input type="tablet" bus="usb"/>
Dec  6 02:38:16 np0005548731 nova_compute[232433]:    <rng model="virtio">
Dec  6 02:38:16 np0005548731 nova_compute[232433]:      <backend model="random">/dev/urandom</backend>
Dec  6 02:38:16 np0005548731 nova_compute[232433]:    </rng>
Dec  6 02:38:16 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root"/>
Dec  6 02:38:16 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:38:16 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:38:16 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:38:16 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:38:16 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:38:16 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:38:16 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:38:16 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:38:16 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:38:16 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:38:16 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:38:16 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:38:16 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:38:16 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:38:16 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:38:16 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:38:16 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:38:16 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:38:16 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:38:16 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:38:16 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:38:16 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:38:16 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:38:16 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:38:16 np0005548731 nova_compute[232433]:    <controller type="usb" index="0"/>
Dec  6 02:38:16 np0005548731 nova_compute[232433]:    <memballoon model="virtio">
Dec  6 02:38:16 np0005548731 nova_compute[232433]:      <stats period="10"/>
Dec  6 02:38:16 np0005548731 nova_compute[232433]:    </memballoon>
Dec  6 02:38:16 np0005548731 nova_compute[232433]:  </devices>
Dec  6 02:38:16 np0005548731 nova_compute[232433]: </domain>
Dec  6 02:38:16 np0005548731 nova_compute[232433]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec  6 02:38:16 np0005548731 nova_compute[232433]: 2025-12-06 07:38:16.824 232437 DEBUG nova.virt.libvirt.vif [None req-9fd702a1-9d3a-4977-9a60-8215569334de 7960f8f407754e5d83bee65690a6b772 21cd37bffa864aaebe5c734ba468f466 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-12-06T07:36:57Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-ServerActionsV293TestJSON-server-1360575984',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serveractionsv293testjson-server-1645861904',id=133,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBC09UQYuvXJm1Rko+1Enhuo3SSbAWagIRaWgjbl1eMQTI/iz1qciOUWudPgubxE+dfDGSMSVZfhJ7I1j6mbbf0xS8CEMgr5ptTVwPCXCiAWETgl5ZTVu74pyI+Gi8rzOng==',key_name='tempest-keypair-2069448021',keypairs=<?>,launch_index=0,launched_at=2025-12-06T07:37:14Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={rebuild='server'},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='21cd37bffa864aaebe5c734ba468f466',ramdisk_id='',reservation_id='r-xs37ll8f',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='1',image_base_image_ref='412dd61d-1b1e-439f-b7f9-7e7c4e42924c',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_t
ype='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsV293TestJSON-1595119527',owner_user_name='tempest-ServerActionsV293TestJSON-1595119527-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:38:14Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='7960f8f407754e5d83bee65690a6b772',uuid=ffee41ca-ba3b-4787-8435-b3903ffb29a9,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "500715f2-2222-4003-b268-0369959f5e25", "address": "fa:16:3e:d3:0b:9b", "network": {"id": "e4ed5caf-995c-4d95-a87b-6b8a5de780fa", "bridge": "br-int", "label": "tempest-ServerActionsV293TestJSON-599749084-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.244", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "21cd37bffa864aaebe5c734ba468f466", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap500715f2-22", "ovs_interfaceid": "500715f2-2222-4003-b268-0369959f5e25", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec  6 02:38:16 np0005548731 nova_compute[232433]: 2025-12-06 07:38:16.824 232437 DEBUG nova.network.os_vif_util [None req-9fd702a1-9d3a-4977-9a60-8215569334de 7960f8f407754e5d83bee65690a6b772 21cd37bffa864aaebe5c734ba468f466 - - default default] Converting VIF {"id": "500715f2-2222-4003-b268-0369959f5e25", "address": "fa:16:3e:d3:0b:9b", "network": {"id": "e4ed5caf-995c-4d95-a87b-6b8a5de780fa", "bridge": "br-int", "label": "tempest-ServerActionsV293TestJSON-599749084-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.244", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "21cd37bffa864aaebe5c734ba468f466", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap500715f2-22", "ovs_interfaceid": "500715f2-2222-4003-b268-0369959f5e25", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 02:38:16 np0005548731 nova_compute[232433]: 2025-12-06 07:38:16.825 232437 DEBUG nova.network.os_vif_util [None req-9fd702a1-9d3a-4977-9a60-8215569334de 7960f8f407754e5d83bee65690a6b772 21cd37bffa864aaebe5c734ba468f466 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:d3:0b:9b,bridge_name='br-int',has_traffic_filtering=True,id=500715f2-2222-4003-b268-0369959f5e25,network=Network(e4ed5caf-995c-4d95-a87b-6b8a5de780fa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap500715f2-22') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 02:38:16 np0005548731 nova_compute[232433]: 2025-12-06 07:38:16.825 232437 DEBUG os_vif [None req-9fd702a1-9d3a-4977-9a60-8215569334de 7960f8f407754e5d83bee65690a6b772 21cd37bffa864aaebe5c734ba468f466 - - default default] Plugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:d3:0b:9b,bridge_name='br-int',has_traffic_filtering=True,id=500715f2-2222-4003-b268-0369959f5e25,network=Network(e4ed5caf-995c-4d95-a87b-6b8a5de780fa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap500715f2-22') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec  6 02:38:16 np0005548731 nova_compute[232433]: 2025-12-06 07:38:16.826 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:38:16 np0005548731 nova_compute[232433]: 2025-12-06 07:38:16.826 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:38:16 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:38:16 np0005548731 nova_compute[232433]: 2025-12-06 07:38:16.826 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  6 02:38:16 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:38:16 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:38:16.826 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:38:16 np0005548731 nova_compute[232433]: 2025-12-06 07:38:16.830 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:38:16 np0005548731 nova_compute[232433]: 2025-12-06 07:38:16.830 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap500715f2-22, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:38:16 np0005548731 nova_compute[232433]: 2025-12-06 07:38:16.831 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap500715f2-22, col_values=(('external_ids', {'iface-id': '500715f2-2222-4003-b268-0369959f5e25', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:d3:0b:9b', 'vm-uuid': 'ffee41ca-ba3b-4787-8435-b3903ffb29a9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:38:16 np0005548731 nova_compute[232433]: 2025-12-06 07:38:16.832 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:38:16 np0005548731 NetworkManager[49182]: <info>  [1765006696.8331] manager: (tap500715f2-22): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/274)
Dec  6 02:38:16 np0005548731 nova_compute[232433]: 2025-12-06 07:38:16.834 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  6 02:38:16 np0005548731 nova_compute[232433]: 2025-12-06 07:38:16.838 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:38:16 np0005548731 nova_compute[232433]: 2025-12-06 07:38:16.839 232437 INFO os_vif [None req-9fd702a1-9d3a-4977-9a60-8215569334de 7960f8f407754e5d83bee65690a6b772 21cd37bffa864aaebe5c734ba468f466 - - default default] Successfully plugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:d3:0b:9b,bridge_name='br-int',has_traffic_filtering=True,id=500715f2-2222-4003-b268-0369959f5e25,network=Network(e4ed5caf-995c-4d95-a87b-6b8a5de780fa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap500715f2-22')#033[00m
Dec  6 02:38:16 np0005548731 nova_compute[232433]: 2025-12-06 07:38:16.894 232437 DEBUG nova.virt.libvirt.driver [None req-9fd702a1-9d3a-4977-9a60-8215569334de 7960f8f407754e5d83bee65690a6b772 21cd37bffa864aaebe5c734ba468f466 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  6 02:38:16 np0005548731 nova_compute[232433]: 2025-12-06 07:38:16.894 232437 DEBUG nova.virt.libvirt.driver [None req-9fd702a1-9d3a-4977-9a60-8215569334de 7960f8f407754e5d83bee65690a6b772 21cd37bffa864aaebe5c734ba468f466 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  6 02:38:16 np0005548731 nova_compute[232433]: 2025-12-06 07:38:16.894 232437 DEBUG nova.virt.libvirt.driver [None req-9fd702a1-9d3a-4977-9a60-8215569334de 7960f8f407754e5d83bee65690a6b772 21cd37bffa864aaebe5c734ba468f466 - - default default] No VIF found with MAC fa:16:3e:d3:0b:9b, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec  6 02:38:16 np0005548731 nova_compute[232433]: 2025-12-06 07:38:16.895 232437 INFO nova.virt.libvirt.driver [None req-9fd702a1-9d3a-4977-9a60-8215569334de 7960f8f407754e5d83bee65690a6b772 21cd37bffa864aaebe5c734ba468f466 - - default default] [instance: ffee41ca-ba3b-4787-8435-b3903ffb29a9] Using config drive#033[00m
Dec  6 02:38:16 np0005548731 nova_compute[232433]: 2025-12-06 07:38:16.919 232437 DEBUG nova.storage.rbd_utils [None req-9fd702a1-9d3a-4977-9a60-8215569334de 7960f8f407754e5d83bee65690a6b772 21cd37bffa864aaebe5c734ba468f466 - - default default] rbd image ffee41ca-ba3b-4787-8435-b3903ffb29a9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:38:16 np0005548731 nova_compute[232433]: 2025-12-06 07:38:16.938 232437 DEBUG nova.objects.instance [None req-9fd702a1-9d3a-4977-9a60-8215569334de 7960f8f407754e5d83bee65690a6b772 21cd37bffa864aaebe5c734ba468f466 - - default default] Lazy-loading 'ec2_ids' on Instance uuid ffee41ca-ba3b-4787-8435-b3903ffb29a9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:38:16 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:38:16.946 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=9f96b960-b4f2-40bd-ae99-08121f5e8b78, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '52'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:38:16 np0005548731 nova_compute[232433]: 2025-12-06 07:38:16.977 232437 DEBUG nova.objects.instance [None req-9fd702a1-9d3a-4977-9a60-8215569334de 7960f8f407754e5d83bee65690a6b772 21cd37bffa864aaebe5c734ba468f466 - - default default] Lazy-loading 'keypairs' on Instance uuid ffee41ca-ba3b-4787-8435-b3903ffb29a9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:38:17 np0005548731 nova_compute[232433]: 2025-12-06 07:38:17.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:38:17 np0005548731 nova_compute[232433]: 2025-12-06 07:38:17.445 232437 INFO nova.virt.libvirt.driver [None req-9fd702a1-9d3a-4977-9a60-8215569334de 7960f8f407754e5d83bee65690a6b772 21cd37bffa864aaebe5c734ba468f466 - - default default] [instance: ffee41ca-ba3b-4787-8435-b3903ffb29a9] Creating config drive at /var/lib/nova/instances/ffee41ca-ba3b-4787-8435-b3903ffb29a9/disk.config#033[00m
Dec  6 02:38:17 np0005548731 nova_compute[232433]: 2025-12-06 07:38:17.451 232437 DEBUG oslo_concurrency.processutils [None req-9fd702a1-9d3a-4977-9a60-8215569334de 7960f8f407754e5d83bee65690a6b772 21cd37bffa864aaebe5c734ba468f466 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/ffee41ca-ba3b-4787-8435-b3903ffb29a9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpznuh5viy execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:38:17 np0005548731 nova_compute[232433]: 2025-12-06 07:38:17.584 232437 DEBUG oslo_concurrency.processutils [None req-9fd702a1-9d3a-4977-9a60-8215569334de 7960f8f407754e5d83bee65690a6b772 21cd37bffa864aaebe5c734ba468f466 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/ffee41ca-ba3b-4787-8435-b3903ffb29a9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpznuh5viy" returned: 0 in 0.132s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:38:17 np0005548731 nova_compute[232433]: 2025-12-06 07:38:17.611 232437 DEBUG nova.storage.rbd_utils [None req-9fd702a1-9d3a-4977-9a60-8215569334de 7960f8f407754e5d83bee65690a6b772 21cd37bffa864aaebe5c734ba468f466 - - default default] rbd image ffee41ca-ba3b-4787-8435-b3903ffb29a9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:38:17 np0005548731 nova_compute[232433]: 2025-12-06 07:38:17.614 232437 DEBUG oslo_concurrency.processutils [None req-9fd702a1-9d3a-4977-9a60-8215569334de 7960f8f407754e5d83bee65690a6b772 21cd37bffa864aaebe5c734ba468f466 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/ffee41ca-ba3b-4787-8435-b3903ffb29a9/disk.config ffee41ca-ba3b-4787-8435-b3903ffb29a9_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:38:17 np0005548731 nova_compute[232433]: 2025-12-06 07:38:17.763 232437 DEBUG oslo_concurrency.processutils [None req-9fd702a1-9d3a-4977-9a60-8215569334de 7960f8f407754e5d83bee65690a6b772 21cd37bffa864aaebe5c734ba468f466 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/ffee41ca-ba3b-4787-8435-b3903ffb29a9/disk.config ffee41ca-ba3b-4787-8435-b3903ffb29a9_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.148s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:38:17 np0005548731 nova_compute[232433]: 2025-12-06 07:38:17.764 232437 INFO nova.virt.libvirt.driver [None req-9fd702a1-9d3a-4977-9a60-8215569334de 7960f8f407754e5d83bee65690a6b772 21cd37bffa864aaebe5c734ba468f466 - - default default] [instance: ffee41ca-ba3b-4787-8435-b3903ffb29a9] Deleting local config drive /var/lib/nova/instances/ffee41ca-ba3b-4787-8435-b3903ffb29a9/disk.config because it was imported into RBD.#033[00m
Dec  6 02:38:17 np0005548731 nova_compute[232433]: 2025-12-06 07:38:17.768 232437 DEBUG nova.virt.libvirt.driver [None req-50d60388-e28c-430d-bb2f-1abe5533371b a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] [instance: 2de097e3-8182-48e5-b69d-88acbfb84e66] Starting finish_migration finish_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11698#033[00m
Dec  6 02:38:17 np0005548731 nova_compute[232433]: 2025-12-06 07:38:17.770 232437 DEBUG nova.virt.libvirt.driver [None req-50d60388-e28c-430d-bb2f-1abe5533371b a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] [instance: 2de097e3-8182-48e5-b69d-88acbfb84e66] Instance directory exists: not creating _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4719#033[00m
Dec  6 02:38:17 np0005548731 nova_compute[232433]: 2025-12-06 07:38:17.770 232437 INFO nova.virt.libvirt.driver [None req-50d60388-e28c-430d-bb2f-1abe5533371b a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] [instance: 2de097e3-8182-48e5-b69d-88acbfb84e66] Creating image(s)#033[00m
Dec  6 02:38:17 np0005548731 nova_compute[232433]: 2025-12-06 07:38:17.803 232437 DEBUG nova.storage.rbd_utils [None req-50d60388-e28c-430d-bb2f-1abe5533371b a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] creating snapshot(nova-resize) on rbd image(2de097e3-8182-48e5-b69d-88acbfb84e66_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Dec  6 02:38:17 np0005548731 kernel: tap500715f2-22: entered promiscuous mode
Dec  6 02:38:17 np0005548731 NetworkManager[49182]: <info>  [1765006697.8115] manager: (tap500715f2-22): new Tun device (/org/freedesktop/NetworkManager/Devices/275)
Dec  6 02:38:17 np0005548731 ovn_controller[133927]: 2025-12-06T07:38:17Z|00585|binding|INFO|Claiming lport 500715f2-2222-4003-b268-0369959f5e25 for this chassis.
Dec  6 02:38:17 np0005548731 ovn_controller[133927]: 2025-12-06T07:38:17Z|00586|binding|INFO|500715f2-2222-4003-b268-0369959f5e25: Claiming fa:16:3e:d3:0b:9b 10.100.0.7
Dec  6 02:38:17 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:38:17.819 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d3:0b:9b 10.100.0.7'], port_security=['fa:16:3e:d3:0b:9b 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'ffee41ca-ba3b-4787-8435-b3903ffb29a9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e4ed5caf-995c-4d95-a87b-6b8a5de780fa', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '21cd37bffa864aaebe5c734ba468f466', 'neutron:revision_number': '5', 'neutron:security_group_ids': '349efd6b-63d0-41af-8405-3d1f52d82535', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.244'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=359f89a6-280c-4f37-8512-64abc4582e65, chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], logical_port=500715f2-2222-4003-b268-0369959f5e25) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 02:38:17 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:38:17.821 143965 INFO neutron.agent.ovn.metadata.agent [-] Port 500715f2-2222-4003-b268-0369959f5e25 in datapath e4ed5caf-995c-4d95-a87b-6b8a5de780fa bound to our chassis#033[00m
Dec  6 02:38:17 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:38:17.823 143965 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network e4ed5caf-995c-4d95-a87b-6b8a5de780fa#033[00m
Dec  6 02:38:17 np0005548731 ovn_controller[133927]: 2025-12-06T07:38:17Z|00587|binding|INFO|Setting lport 500715f2-2222-4003-b268-0369959f5e25 ovn-installed in OVS
Dec  6 02:38:17 np0005548731 ovn_controller[133927]: 2025-12-06T07:38:17Z|00588|binding|INFO|Setting lport 500715f2-2222-4003-b268-0369959f5e25 up in Southbound
Dec  6 02:38:17 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:38:17.837 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[143d51c3-efbd-43f5-a298-67c2bbc97922]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:38:17 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:38:17.838 143965 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tape4ed5caf-91 in ovnmeta-e4ed5caf-995c-4d95-a87b-6b8a5de780fa namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Dec  6 02:38:17 np0005548731 systemd-udevd[289801]: Network interface NamePolicy= disabled on kernel command line.
Dec  6 02:38:17 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:38:17.842 236668 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tape4ed5caf-90 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Dec  6 02:38:17 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:38:17.842 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[05a60615-9ab2-4a27-9978-dd910afb8c97]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:38:17 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:38:17.844 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[e913d24f-f840-49a6-9baa-a46595757069]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:38:17 np0005548731 systemd-machined[195355]: New machine qemu-60-instance-00000085.
Dec  6 02:38:17 np0005548731 NetworkManager[49182]: <info>  [1765006697.8521] device (tap500715f2-22): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec  6 02:38:17 np0005548731 NetworkManager[49182]: <info>  [1765006697.8529] device (tap500715f2-22): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec  6 02:38:17 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:38:17.855 144079 DEBUG oslo.privsep.daemon [-] privsep: reply[db2a0574-8907-4925-a986-d71e3a9a0386]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:38:17 np0005548731 systemd[1]: Started Virtual Machine qemu-60-instance-00000085.
Dec  6 02:38:17 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:38:17.868 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[0eee3707-f5f3-4c97-86b7-1e20e96b8ca5]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:38:17 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:38:17.902 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[399576da-17b6-4390-8660-e33010bef32b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:38:17 np0005548731 NetworkManager[49182]: <info>  [1765006697.9082] manager: (tape4ed5caf-90): new Veth device (/org/freedesktop/NetworkManager/Devices/276)
Dec  6 02:38:17 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:38:17.907 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[636c7ae8-37f2-4c91-a3c5-05be4f0cd842]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:38:17 np0005548731 nova_compute[232433]: 2025-12-06 07:38:17.916 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:38:17 np0005548731 nova_compute[232433]: 2025-12-06 07:38:17.919 232437 DEBUG nova.network.neutron [req-204fdbdd-e574-4979-960e-31b410d8a3c8 req-c465d2d6-bf60-4327-832c-ae16cb6cf512 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 2de097e3-8182-48e5-b69d-88acbfb84e66] Updated VIF entry in instance network info cache for port 8690867c-c0a8-4574-b54f-38486691e339. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  6 02:38:17 np0005548731 nova_compute[232433]: 2025-12-06 07:38:17.919 232437 DEBUG nova.network.neutron [req-204fdbdd-e574-4979-960e-31b410d8a3c8 req-c465d2d6-bf60-4327-832c-ae16cb6cf512 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 2de097e3-8182-48e5-b69d-88acbfb84e66] Updating instance_info_cache with network_info: [{"id": "8690867c-c0a8-4574-b54f-38486691e339", "address": "fa:16:3e:4a:1f:3e", "network": {"id": "3beede49-1cbb-425c-b1af-82f43dc57163", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-619240463-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.206", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b10aa03d68eb4d4799d53538521cc364", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8690867c-c0", "ovs_interfaceid": "8690867c-c0a8-4574-b54f-38486691e339", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 02:38:17 np0005548731 nova_compute[232433]: 2025-12-06 07:38:17.935 232437 DEBUG oslo_concurrency.lockutils [req-204fdbdd-e574-4979-960e-31b410d8a3c8 req-c465d2d6-bf60-4327-832c-ae16cb6cf512 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-2de097e3-8182-48e5-b69d-88acbfb84e66" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 02:38:17 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:38:17.939 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[9f570eaf-88f0-45fe-8b3e-5c38ffe3bce1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:38:17 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:38:17.942 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[60f8988b-bf13-4aa9-aafd-964a32456ab5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:38:17 np0005548731 NetworkManager[49182]: <info>  [1765006697.9629] device (tape4ed5caf-90): carrier: link connected
Dec  6 02:38:17 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:38:17.967 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[07144bb8-033b-4111-926b-d09804ab1db4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:38:17 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:38:17.983 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[ae965b81-f62b-4d6e-b771-aa2e04410108]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape4ed5caf-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:3a:38:65'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 183], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 695996, 'reachable_time': 18242, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 289844, 'error': None, 'target': 'ovnmeta-e4ed5caf-995c-4d95-a87b-6b8a5de780fa', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:38:17 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:38:17.997 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[29127335-2dd4-4fcb-82f0-4e86971ae303]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe3a:3865'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 695996, 'tstamp': 695996}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 289845, 'error': None, 'target': 'ovnmeta-e4ed5caf-995c-4d95-a87b-6b8a5de780fa', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:38:18 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:38:18.012 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[05924c28-373b-47c9-8f6a-e8089432c7bb]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape4ed5caf-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:3a:38:65'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 183], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 695996, 'reachable_time': 18242, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 289846, 'error': None, 'target': 'ovnmeta-e4ed5caf-995c-4d95-a87b-6b8a5de780fa', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:38:18 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:38:18.040 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[809c60c8-6167-4c64-a2af-6667541c1e38]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:38:18 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:38:18 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:38:18 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:38:18.045 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:38:18 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:38:18.101 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[8775da5b-b949-42a9-ae11-2a64adeb4ad8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:38:18 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:38:18.102 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape4ed5caf-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:38:18 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:38:18.102 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  6 02:38:18 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:38:18.103 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape4ed5caf-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:38:18 np0005548731 nova_compute[232433]: 2025-12-06 07:38:18.102 232437 DEBUG nova.compute.manager [req-75038a67-5f58-40ae-b951-812edf39ab83 req-9d19eefc-ef14-4bdb-9635-64256c380623 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: ffee41ca-ba3b-4787-8435-b3903ffb29a9] Received event network-vif-plugged-500715f2-2222-4003-b268-0369959f5e25 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:38:18 np0005548731 nova_compute[232433]: 2025-12-06 07:38:18.103 232437 DEBUG oslo_concurrency.lockutils [req-75038a67-5f58-40ae-b951-812edf39ab83 req-9d19eefc-ef14-4bdb-9635-64256c380623 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "ffee41ca-ba3b-4787-8435-b3903ffb29a9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:38:18 np0005548731 nova_compute[232433]: 2025-12-06 07:38:18.103 232437 DEBUG oslo_concurrency.lockutils [req-75038a67-5f58-40ae-b951-812edf39ab83 req-9d19eefc-ef14-4bdb-9635-64256c380623 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "ffee41ca-ba3b-4787-8435-b3903ffb29a9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:38:18 np0005548731 nova_compute[232433]: 2025-12-06 07:38:18.103 232437 DEBUG oslo_concurrency.lockutils [req-75038a67-5f58-40ae-b951-812edf39ab83 req-9d19eefc-ef14-4bdb-9635-64256c380623 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "ffee41ca-ba3b-4787-8435-b3903ffb29a9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:38:18 np0005548731 nova_compute[232433]: 2025-12-06 07:38:18.103 232437 DEBUG nova.compute.manager [req-75038a67-5f58-40ae-b951-812edf39ab83 req-9d19eefc-ef14-4bdb-9635-64256c380623 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: ffee41ca-ba3b-4787-8435-b3903ffb29a9] No waiting events found dispatching network-vif-plugged-500715f2-2222-4003-b268-0369959f5e25 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 02:38:18 np0005548731 nova_compute[232433]: 2025-12-06 07:38:18.104 232437 WARNING nova.compute.manager [req-75038a67-5f58-40ae-b951-812edf39ab83 req-9d19eefc-ef14-4bdb-9635-64256c380623 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: ffee41ca-ba3b-4787-8435-b3903ffb29a9] Received unexpected event network-vif-plugged-500715f2-2222-4003-b268-0369959f5e25 for instance with vm_state active and task_state rebuild_spawning.#033[00m
Dec  6 02:38:18 np0005548731 nova_compute[232433]: 2025-12-06 07:38:18.104 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:38:18 np0005548731 NetworkManager[49182]: <info>  [1765006698.1049] manager: (tape4ed5caf-90): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/277)
Dec  6 02:38:18 np0005548731 kernel: tape4ed5caf-90: entered promiscuous mode
Dec  6 02:38:18 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:38:18.106 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tape4ed5caf-90, col_values=(('external_ids', {'iface-id': 'aa7ac13f-a2c4-4bc9-9feb-e67530c4f4ab'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:38:18 np0005548731 nova_compute[232433]: 2025-12-06 07:38:18.107 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:38:18 np0005548731 ovn_controller[133927]: 2025-12-06T07:38:18Z|00589|binding|INFO|Releasing lport aa7ac13f-a2c4-4bc9-9feb-e67530c4f4ab from this chassis (sb_readonly=0)
Dec  6 02:38:18 np0005548731 nova_compute[232433]: 2025-12-06 07:38:18.108 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:38:18 np0005548731 nova_compute[232433]: 2025-12-06 07:38:18.121 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:38:18 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:38:18.122 143965 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/e4ed5caf-995c-4d95-a87b-6b8a5de780fa.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/e4ed5caf-995c-4d95-a87b-6b8a5de780fa.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Dec  6 02:38:18 np0005548731 nova_compute[232433]: 2025-12-06 07:38:18.122 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:38:18 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:38:18.123 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[4e21bf7d-69f6-43b0-b678-21876624a6bf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:38:18 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:38:18.123 143965 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec  6 02:38:18 np0005548731 ovn_metadata_agent[143960]: global
Dec  6 02:38:18 np0005548731 ovn_metadata_agent[143960]:    log         /dev/log local0 debug
Dec  6 02:38:18 np0005548731 ovn_metadata_agent[143960]:    log-tag     haproxy-metadata-proxy-e4ed5caf-995c-4d95-a87b-6b8a5de780fa
Dec  6 02:38:18 np0005548731 ovn_metadata_agent[143960]:    user        root
Dec  6 02:38:18 np0005548731 ovn_metadata_agent[143960]:    group       root
Dec  6 02:38:18 np0005548731 ovn_metadata_agent[143960]:    maxconn     1024
Dec  6 02:38:18 np0005548731 ovn_metadata_agent[143960]:    pidfile     /var/lib/neutron/external/pids/e4ed5caf-995c-4d95-a87b-6b8a5de780fa.pid.haproxy
Dec  6 02:38:18 np0005548731 ovn_metadata_agent[143960]:    daemon
Dec  6 02:38:18 np0005548731 ovn_metadata_agent[143960]: 
Dec  6 02:38:18 np0005548731 ovn_metadata_agent[143960]: defaults
Dec  6 02:38:18 np0005548731 ovn_metadata_agent[143960]:    log global
Dec  6 02:38:18 np0005548731 ovn_metadata_agent[143960]:    mode http
Dec  6 02:38:18 np0005548731 ovn_metadata_agent[143960]:    option httplog
Dec  6 02:38:18 np0005548731 ovn_metadata_agent[143960]:    option dontlognull
Dec  6 02:38:18 np0005548731 ovn_metadata_agent[143960]:    option http-server-close
Dec  6 02:38:18 np0005548731 ovn_metadata_agent[143960]:    option forwardfor
Dec  6 02:38:18 np0005548731 ovn_metadata_agent[143960]:    retries                 3
Dec  6 02:38:18 np0005548731 ovn_metadata_agent[143960]:    timeout http-request    30s
Dec  6 02:38:18 np0005548731 ovn_metadata_agent[143960]:    timeout connect         30s
Dec  6 02:38:18 np0005548731 ovn_metadata_agent[143960]:    timeout client          32s
Dec  6 02:38:18 np0005548731 ovn_metadata_agent[143960]:    timeout server          32s
Dec  6 02:38:18 np0005548731 ovn_metadata_agent[143960]:    timeout http-keep-alive 30s
Dec  6 02:38:18 np0005548731 ovn_metadata_agent[143960]: 
Dec  6 02:38:18 np0005548731 ovn_metadata_agent[143960]: 
Dec  6 02:38:18 np0005548731 ovn_metadata_agent[143960]: listen listener
Dec  6 02:38:18 np0005548731 ovn_metadata_agent[143960]:    bind 169.254.169.254:80
Dec  6 02:38:18 np0005548731 ovn_metadata_agent[143960]:    server metadata /var/lib/neutron/metadata_proxy
Dec  6 02:38:18 np0005548731 ovn_metadata_agent[143960]:    http-request add-header X-OVN-Network-ID e4ed5caf-995c-4d95-a87b-6b8a5de780fa
Dec  6 02:38:18 np0005548731 ovn_metadata_agent[143960]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Dec  6 02:38:18 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:38:18.124 143965 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-e4ed5caf-995c-4d95-a87b-6b8a5de780fa', 'env', 'PROCESS_TAG=haproxy-e4ed5caf-995c-4d95-a87b-6b8a5de780fa', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/e4ed5caf-995c-4d95-a87b-6b8a5de780fa.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Dec  6 02:38:18 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:38:18 np0005548731 podman[289927]: 2025-12-06 07:38:18.487706494 +0000 UTC m=+0.052420509 container create 580c090467fd77bcd4bc2df659e16b00b72ee18a293f3d2baffe338290aa1e85 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e4ed5caf-995c-4d95-a87b-6b8a5de780fa, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS)
Dec  6 02:38:18 np0005548731 systemd[1]: Started libpod-conmon-580c090467fd77bcd4bc2df659e16b00b72ee18a293f3d2baffe338290aa1e85.scope.
Dec  6 02:38:18 np0005548731 podman[289927]: 2025-12-06 07:38:18.46498377 +0000 UTC m=+0.029697815 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec  6 02:38:18 np0005548731 systemd[1]: Started libcrun container.
Dec  6 02:38:18 np0005548731 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/18db066f91339cbba5e23b22028a52934a97ae23ad136d2b812ea5bba68c887c/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec  6 02:38:18 np0005548731 podman[289927]: 2025-12-06 07:38:18.572020248 +0000 UTC m=+0.136734283 container init 580c090467fd77bcd4bc2df659e16b00b72ee18a293f3d2baffe338290aa1e85 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e4ed5caf-995c-4d95-a87b-6b8a5de780fa, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0)
Dec  6 02:38:18 np0005548731 podman[289927]: 2025-12-06 07:38:18.578208838 +0000 UTC m=+0.142922853 container start 580c090467fd77bcd4bc2df659e16b00b72ee18a293f3d2baffe338290aa1e85 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e4ed5caf-995c-4d95-a87b-6b8a5de780fa, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125)
Dec  6 02:38:18 np0005548731 neutron-haproxy-ovnmeta-e4ed5caf-995c-4d95-a87b-6b8a5de780fa[289943]: [NOTICE]   (289947) : New worker (289949) forked
Dec  6 02:38:18 np0005548731 neutron-haproxy-ovnmeta-e4ed5caf-995c-4d95-a87b-6b8a5de780fa[289943]: [NOTICE]   (289947) : Loading success.
Dec  6 02:38:18 np0005548731 nova_compute[232433]: 2025-12-06 07:38:18.817 232437 DEBUG nova.virt.driver [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Emitting event <LifecycleEvent: 1765006698.8170388, ffee41ca-ba3b-4787-8435-b3903ffb29a9 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 02:38:18 np0005548731 nova_compute[232433]: 2025-12-06 07:38:18.818 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: ffee41ca-ba3b-4787-8435-b3903ffb29a9] VM Resumed (Lifecycle Event)#033[00m
Dec  6 02:38:18 np0005548731 nova_compute[232433]: 2025-12-06 07:38:18.820 232437 DEBUG nova.compute.manager [None req-9fd702a1-9d3a-4977-9a60-8215569334de 7960f8f407754e5d83bee65690a6b772 21cd37bffa864aaebe5c734ba468f466 - - default default] [instance: ffee41ca-ba3b-4787-8435-b3903ffb29a9] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec  6 02:38:18 np0005548731 nova_compute[232433]: 2025-12-06 07:38:18.820 232437 DEBUG nova.virt.libvirt.driver [None req-9fd702a1-9d3a-4977-9a60-8215569334de 7960f8f407754e5d83bee65690a6b772 21cd37bffa864aaebe5c734ba468f466 - - default default] [instance: ffee41ca-ba3b-4787-8435-b3903ffb29a9] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Dec  6 02:38:18 np0005548731 nova_compute[232433]: 2025-12-06 07:38:18.823 232437 INFO nova.virt.libvirt.driver [-] [instance: ffee41ca-ba3b-4787-8435-b3903ffb29a9] Instance spawned successfully.#033[00m
Dec  6 02:38:18 np0005548731 nova_compute[232433]: 2025-12-06 07:38:18.823 232437 DEBUG nova.virt.libvirt.driver [None req-9fd702a1-9d3a-4977-9a60-8215569334de 7960f8f407754e5d83bee65690a6b772 21cd37bffa864aaebe5c734ba468f466 - - default default] [instance: ffee41ca-ba3b-4787-8435-b3903ffb29a9] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Dec  6 02:38:18 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:38:18 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:38:18 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:38:18.828 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:38:18 np0005548731 nova_compute[232433]: 2025-12-06 07:38:18.845 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: ffee41ca-ba3b-4787-8435-b3903ffb29a9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:38:18 np0005548731 nova_compute[232433]: 2025-12-06 07:38:18.850 232437 DEBUG nova.virt.libvirt.driver [None req-9fd702a1-9d3a-4977-9a60-8215569334de 7960f8f407754e5d83bee65690a6b772 21cd37bffa864aaebe5c734ba468f466 - - default default] [instance: ffee41ca-ba3b-4787-8435-b3903ffb29a9] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 02:38:18 np0005548731 nova_compute[232433]: 2025-12-06 07:38:18.850 232437 DEBUG nova.virt.libvirt.driver [None req-9fd702a1-9d3a-4977-9a60-8215569334de 7960f8f407754e5d83bee65690a6b772 21cd37bffa864aaebe5c734ba468f466 - - default default] [instance: ffee41ca-ba3b-4787-8435-b3903ffb29a9] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 02:38:18 np0005548731 nova_compute[232433]: 2025-12-06 07:38:18.851 232437 DEBUG nova.virt.libvirt.driver [None req-9fd702a1-9d3a-4977-9a60-8215569334de 7960f8f407754e5d83bee65690a6b772 21cd37bffa864aaebe5c734ba468f466 - - default default] [instance: ffee41ca-ba3b-4787-8435-b3903ffb29a9] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 02:38:18 np0005548731 nova_compute[232433]: 2025-12-06 07:38:18.851 232437 DEBUG nova.virt.libvirt.driver [None req-9fd702a1-9d3a-4977-9a60-8215569334de 7960f8f407754e5d83bee65690a6b772 21cd37bffa864aaebe5c734ba468f466 - - default default] [instance: ffee41ca-ba3b-4787-8435-b3903ffb29a9] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 02:38:18 np0005548731 nova_compute[232433]: 2025-12-06 07:38:18.852 232437 DEBUG nova.virt.libvirt.driver [None req-9fd702a1-9d3a-4977-9a60-8215569334de 7960f8f407754e5d83bee65690a6b772 21cd37bffa864aaebe5c734ba468f466 - - default default] [instance: ffee41ca-ba3b-4787-8435-b3903ffb29a9] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 02:38:18 np0005548731 nova_compute[232433]: 2025-12-06 07:38:18.852 232437 DEBUG nova.virt.libvirt.driver [None req-9fd702a1-9d3a-4977-9a60-8215569334de 7960f8f407754e5d83bee65690a6b772 21cd37bffa864aaebe5c734ba468f466 - - default default] [instance: ffee41ca-ba3b-4787-8435-b3903ffb29a9] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 02:38:18 np0005548731 nova_compute[232433]: 2025-12-06 07:38:18.856 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: ffee41ca-ba3b-4787-8435-b3903ffb29a9] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  6 02:38:18 np0005548731 nova_compute[232433]: 2025-12-06 07:38:18.909 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: ffee41ca-ba3b-4787-8435-b3903ffb29a9] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.#033[00m
Dec  6 02:38:18 np0005548731 nova_compute[232433]: 2025-12-06 07:38:18.909 232437 DEBUG nova.virt.driver [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Emitting event <LifecycleEvent: 1765006698.8174016, ffee41ca-ba3b-4787-8435-b3903ffb29a9 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 02:38:18 np0005548731 nova_compute[232433]: 2025-12-06 07:38:18.909 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: ffee41ca-ba3b-4787-8435-b3903ffb29a9] VM Started (Lifecycle Event)#033[00m
Dec  6 02:38:18 np0005548731 nova_compute[232433]: 2025-12-06 07:38:18.939 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: ffee41ca-ba3b-4787-8435-b3903ffb29a9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:38:18 np0005548731 nova_compute[232433]: 2025-12-06 07:38:18.944 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: ffee41ca-ba3b-4787-8435-b3903ffb29a9] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  6 02:38:18 np0005548731 nova_compute[232433]: 2025-12-06 07:38:18.969 232437 DEBUG nova.compute.manager [None req-9fd702a1-9d3a-4977-9a60-8215569334de 7960f8f407754e5d83bee65690a6b772 21cd37bffa864aaebe5c734ba468f466 - - default default] [instance: ffee41ca-ba3b-4787-8435-b3903ffb29a9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:38:18 np0005548731 nova_compute[232433]: 2025-12-06 07:38:18.971 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: ffee41ca-ba3b-4787-8435-b3903ffb29a9] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.#033[00m
Dec  6 02:38:19 np0005548731 nova_compute[232433]: 2025-12-06 07:38:19.027 232437 DEBUG oslo_concurrency.lockutils [None req-9fd702a1-9d3a-4977-9a60-8215569334de 7960f8f407754e5d83bee65690a6b772 21cd37bffa864aaebe5c734ba468f466 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:38:19 np0005548731 nova_compute[232433]: 2025-12-06 07:38:19.028 232437 DEBUG oslo_concurrency.lockutils [None req-9fd702a1-9d3a-4977-9a60-8215569334de 7960f8f407754e5d83bee65690a6b772 21cd37bffa864aaebe5c734ba468f466 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:38:19 np0005548731 nova_compute[232433]: 2025-12-06 07:38:19.028 232437 DEBUG nova.objects.instance [None req-9fd702a1-9d3a-4977-9a60-8215569334de 7960f8f407754e5d83bee65690a6b772 21cd37bffa864aaebe5c734ba468f466 - - default default] [instance: ffee41ca-ba3b-4787-8435-b3903ffb29a9] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032#033[00m
Dec  6 02:38:19 np0005548731 nova_compute[232433]: 2025-12-06 07:38:19.103 232437 DEBUG oslo_concurrency.lockutils [None req-9fd702a1-9d3a-4977-9a60-8215569334de 7960f8f407754e5d83bee65690a6b772 21cd37bffa864aaebe5c734ba468f466 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: held 0.075s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:38:19 np0005548731 nova_compute[232433]: 2025-12-06 07:38:19.107 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:38:19 np0005548731 nova_compute[232433]: 2025-12-06 07:38:19.108 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:38:19 np0005548731 nova_compute[232433]: 2025-12-06 07:38:19.130 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:38:19 np0005548731 nova_compute[232433]: 2025-12-06 07:38:19.131 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:38:19 np0005548731 nova_compute[232433]: 2025-12-06 07:38:19.131 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:38:19 np0005548731 nova_compute[232433]: 2025-12-06 07:38:19.131 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  6 02:38:19 np0005548731 nova_compute[232433]: 2025-12-06 07:38:19.131 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:38:19 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 02:38:19 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2196679724' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 02:38:19 np0005548731 nova_compute[232433]: 2025-12-06 07:38:19.593 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.461s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:38:19 np0005548731 nova_compute[232433]: 2025-12-06 07:38:19.688 232437 DEBUG nova.virt.libvirt.driver [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] skipping disk for instance-0000007f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  6 02:38:19 np0005548731 nova_compute[232433]: 2025-12-06 07:38:19.688 232437 DEBUG nova.virt.libvirt.driver [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] skipping disk for instance-0000007f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  6 02:38:19 np0005548731 nova_compute[232433]: 2025-12-06 07:38:19.692 232437 DEBUG nova.virt.libvirt.driver [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] skipping disk for instance-00000078 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  6 02:38:19 np0005548731 nova_compute[232433]: 2025-12-06 07:38:19.692 232437 DEBUG nova.virt.libvirt.driver [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] skipping disk for instance-00000078 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  6 02:38:19 np0005548731 nova_compute[232433]: 2025-12-06 07:38:19.695 232437 DEBUG nova.virt.libvirt.driver [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] skipping disk for instance-00000085 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  6 02:38:19 np0005548731 nova_compute[232433]: 2025-12-06 07:38:19.696 232437 DEBUG nova.virt.libvirt.driver [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] skipping disk for instance-00000085 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  6 02:38:19 np0005548731 nova_compute[232433]: 2025-12-06 07:38:19.855 232437 WARNING nova.virt.libvirt.driver [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  6 02:38:19 np0005548731 nova_compute[232433]: 2025-12-06 07:38:19.856 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=3791MB free_disk=20.623065948486328GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  6 02:38:19 np0005548731 nova_compute[232433]: 2025-12-06 07:38:19.856 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:38:19 np0005548731 nova_compute[232433]: 2025-12-06 07:38:19.857 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:38:19 np0005548731 nova_compute[232433]: 2025-12-06 07:38:19.910 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Applying migration context for instance 2de097e3-8182-48e5-b69d-88acbfb84e66 as it has an incoming, in-progress migration 47ca4852-1aea-48fa-a209-2395d57c0b69. Migration status is post-migrating _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:950#033[00m
Dec  6 02:38:19 np0005548731 nova_compute[232433]: 2025-12-06 07:38:19.911 232437 INFO nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] [instance: 2de097e3-8182-48e5-b69d-88acbfb84e66] Updating resource usage from migration 47ca4852-1aea-48fa-a209-2395d57c0b69#033[00m
Dec  6 02:38:19 np0005548731 nova_compute[232433]: 2025-12-06 07:38:19.990 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Instance 6a50a40c-3b05-4c0e-aa67-1489e203824e actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  6 02:38:19 np0005548731 nova_compute[232433]: 2025-12-06 07:38:19.991 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Instance b4be0ef8-945f-47a1-a3a8-5962f1e692e5 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 192, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  6 02:38:19 np0005548731 nova_compute[232433]: 2025-12-06 07:38:19.991 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Instance ffee41ca-ba3b-4787-8435-b3903ffb29a9 actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  6 02:38:19 np0005548731 nova_compute[232433]: 2025-12-06 07:38:19.991 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Instance 2de097e3-8182-48e5-b69d-88acbfb84e66 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 192, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  6 02:38:19 np0005548731 nova_compute[232433]: 2025-12-06 07:38:19.992 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 4 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  6 02:38:19 np0005548731 nova_compute[232433]: 2025-12-06 07:38:19.992 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7680MB used_ram=1152MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=4 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  6 02:38:20 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e307 e307: 3 total, 3 up, 3 in
Dec  6 02:38:20 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:38:20 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:38:20 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:38:20.047 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:38:20 np0005548731 nova_compute[232433]: 2025-12-06 07:38:20.081 232437 DEBUG nova.objects.instance [None req-50d60388-e28c-430d-bb2f-1abe5533371b a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Lazy-loading 'trusted_certs' on Instance uuid 2de097e3-8182-48e5-b69d-88acbfb84e66 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:38:20 np0005548731 nova_compute[232433]: 2025-12-06 07:38:20.088 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:38:20 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 02:38:20 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2244263686' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 02:38:20 np0005548731 nova_compute[232433]: 2025-12-06 07:38:20.679 232437 DEBUG nova.compute.manager [req-0db21f4a-aaa9-489b-b9ae-a20856542b3f req-9b135666-2671-4253-b5a0-7dad43c09091 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: ffee41ca-ba3b-4787-8435-b3903ffb29a9] Received event network-vif-plugged-500715f2-2222-4003-b268-0369959f5e25 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:38:20 np0005548731 nova_compute[232433]: 2025-12-06 07:38:20.680 232437 DEBUG oslo_concurrency.lockutils [req-0db21f4a-aaa9-489b-b9ae-a20856542b3f req-9b135666-2671-4253-b5a0-7dad43c09091 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "ffee41ca-ba3b-4787-8435-b3903ffb29a9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:38:20 np0005548731 nova_compute[232433]: 2025-12-06 07:38:20.680 232437 DEBUG oslo_concurrency.lockutils [req-0db21f4a-aaa9-489b-b9ae-a20856542b3f req-9b135666-2671-4253-b5a0-7dad43c09091 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "ffee41ca-ba3b-4787-8435-b3903ffb29a9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:38:20 np0005548731 nova_compute[232433]: 2025-12-06 07:38:20.681 232437 DEBUG oslo_concurrency.lockutils [req-0db21f4a-aaa9-489b-b9ae-a20856542b3f req-9b135666-2671-4253-b5a0-7dad43c09091 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "ffee41ca-ba3b-4787-8435-b3903ffb29a9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:38:20 np0005548731 nova_compute[232433]: 2025-12-06 07:38:20.681 232437 DEBUG nova.compute.manager [req-0db21f4a-aaa9-489b-b9ae-a20856542b3f req-9b135666-2671-4253-b5a0-7dad43c09091 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: ffee41ca-ba3b-4787-8435-b3903ffb29a9] No waiting events found dispatching network-vif-plugged-500715f2-2222-4003-b268-0369959f5e25 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 02:38:20 np0005548731 nova_compute[232433]: 2025-12-06 07:38:20.681 232437 WARNING nova.compute.manager [req-0db21f4a-aaa9-489b-b9ae-a20856542b3f req-9b135666-2671-4253-b5a0-7dad43c09091 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: ffee41ca-ba3b-4787-8435-b3903ffb29a9] Received unexpected event network-vif-plugged-500715f2-2222-4003-b268-0369959f5e25 for instance with vm_state active and task_state None.#033[00m
Dec  6 02:38:20 np0005548731 nova_compute[232433]: 2025-12-06 07:38:20.682 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.594s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:38:20 np0005548731 nova_compute[232433]: 2025-12-06 07:38:20.725 232437 DEBUG nova.compute.provider_tree [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Inventory has not changed in ProviderTree for provider: 6d00757a-082f-486d-ae84-869a2ba2e6e7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  6 02:38:20 np0005548731 nova_compute[232433]: 2025-12-06 07:38:20.802 232437 DEBUG nova.virt.libvirt.driver [None req-50d60388-e28c-430d-bb2f-1abe5533371b a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] [instance: 2de097e3-8182-48e5-b69d-88acbfb84e66] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859#033[00m
Dec  6 02:38:20 np0005548731 nova_compute[232433]: 2025-12-06 07:38:20.802 232437 DEBUG nova.virt.libvirt.driver [None req-50d60388-e28c-430d-bb2f-1abe5533371b a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] [instance: 2de097e3-8182-48e5-b69d-88acbfb84e66] Ensure instance console log exists: /var/lib/nova/instances/2de097e3-8182-48e5-b69d-88acbfb84e66/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec  6 02:38:20 np0005548731 nova_compute[232433]: 2025-12-06 07:38:20.803 232437 DEBUG oslo_concurrency.lockutils [None req-50d60388-e28c-430d-bb2f-1abe5533371b a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:38:20 np0005548731 nova_compute[232433]: 2025-12-06 07:38:20.803 232437 DEBUG oslo_concurrency.lockutils [None req-50d60388-e28c-430d-bb2f-1abe5533371b a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:38:20 np0005548731 nova_compute[232433]: 2025-12-06 07:38:20.804 232437 DEBUG oslo_concurrency.lockutils [None req-50d60388-e28c-430d-bb2f-1abe5533371b a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:38:20 np0005548731 nova_compute[232433]: 2025-12-06 07:38:20.807 232437 DEBUG nova.virt.libvirt.driver [None req-50d60388-e28c-430d-bb2f-1abe5533371b a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] [instance: 2de097e3-8182-48e5-b69d-88acbfb84e66] Start _get_guest_xml network_info=[{"id": "8690867c-c0a8-4574-b54f-38486691e339", "address": "fa:16:3e:4a:1f:3e", "network": {"id": "3beede49-1cbb-425c-b1af-82f43dc57163", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-619240463-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.206", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-ServerActionsTestOtherB-619240463-network", "vif_mac": "fa:16:3e:4a:1f:3e"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b10aa03d68eb4d4799d53538521cc364", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8690867c-c0", "ovs_interfaceid": "8690867c-c0a8-4574-b54f-38486691e339", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vdb': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'device_type': 'disk', 'encryption_secret_uuid': None, 'disk_bus': 'virtio', 'boot_index': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'encryption_options': None, 'size': 0, 'encrypted': False, 'image_id': '6efab05d-c7cf-4770-a5c3-c806a2739063'}], 'ephemerals': [], 'block_device_mapping': [{'guest_format': None, 'device_type': 'disk', 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-5ac3b430-9e6d-4b6b-b6db-a57a4cebd8e6', 'hosts': ['192.168.122.100', '192.168.122.102', '192.168.122.101'], 'ports': ['6789', '6789', '6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '5ac3b430-9e6d-4b6b-b6db-a57a4cebd8e6', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'attaching', 'instance': '2de097e3-8182-48e5-b69d-88acbfb84e66', 'attached_at': '2025-12-06T07:38:17.000000', 'detached_at': '', 'volume_id': '5ac3b430-9e6d-4b6b-b6db-a57a4cebd8e6', 'serial': '5ac3b430-9e6d-4b6b-b6db-a57a4cebd8e6'}, 'disk_bus': 'virtio', 'boot_index': None, 'delete_on_termination': False, 'mount_device': '/dev/vdb', 'attachment_id': 'd9727c8d-937a-430f-bbb7-b0cec953896c', 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec  6 02:38:20 np0005548731 nova_compute[232433]: 2025-12-06 07:38:20.812 232437 WARNING nova.virt.libvirt.driver [None req-50d60388-e28c-430d-bb2f-1abe5533371b a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  6 02:38:20 np0005548731 nova_compute[232433]: 2025-12-06 07:38:20.825 232437 DEBUG nova.virt.libvirt.host [None req-50d60388-e28c-430d-bb2f-1abe5533371b a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec  6 02:38:20 np0005548731 nova_compute[232433]: 2025-12-06 07:38:20.826 232437 DEBUG nova.virt.libvirt.host [None req-50d60388-e28c-430d-bb2f-1abe5533371b a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec  6 02:38:20 np0005548731 nova_compute[232433]: 2025-12-06 07:38:20.829 232437 DEBUG nova.virt.libvirt.host [None req-50d60388-e28c-430d-bb2f-1abe5533371b a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec  6 02:38:20 np0005548731 nova_compute[232433]: 2025-12-06 07:38:20.830 232437 DEBUG nova.virt.libvirt.host [None req-50d60388-e28c-430d-bb2f-1abe5533371b a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec  6 02:38:20 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:38:20 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:38:20 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:38:20.830 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:38:20 np0005548731 nova_compute[232433]: 2025-12-06 07:38:20.832 232437 DEBUG nova.virt.libvirt.driver [None req-50d60388-e28c-430d-bb2f-1abe5533371b a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec  6 02:38:20 np0005548731 nova_compute[232433]: 2025-12-06 07:38:20.833 232437 DEBUG nova.virt.hardware [None req-50d60388-e28c-430d-bb2f-1abe5533371b a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-06T06:56:24Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fb97f55a-36c0-42f2-8156-c1b04eb23dd0',id=2,is_public=True,memory_mb=192,name='m1.micro',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec  6 02:38:20 np0005548731 nova_compute[232433]: 2025-12-06 07:38:20.834 232437 DEBUG nova.virt.hardware [None req-50d60388-e28c-430d-bb2f-1abe5533371b a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec  6 02:38:20 np0005548731 nova_compute[232433]: 2025-12-06 07:38:20.834 232437 DEBUG nova.virt.hardware [None req-50d60388-e28c-430d-bb2f-1abe5533371b a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec  6 02:38:20 np0005548731 nova_compute[232433]: 2025-12-06 07:38:20.835 232437 DEBUG nova.virt.hardware [None req-50d60388-e28c-430d-bb2f-1abe5533371b a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec  6 02:38:20 np0005548731 nova_compute[232433]: 2025-12-06 07:38:20.835 232437 DEBUG nova.virt.hardware [None req-50d60388-e28c-430d-bb2f-1abe5533371b a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec  6 02:38:20 np0005548731 nova_compute[232433]: 2025-12-06 07:38:20.835 232437 DEBUG nova.virt.hardware [None req-50d60388-e28c-430d-bb2f-1abe5533371b a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec  6 02:38:20 np0005548731 nova_compute[232433]: 2025-12-06 07:38:20.836 232437 DEBUG nova.virt.hardware [None req-50d60388-e28c-430d-bb2f-1abe5533371b a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec  6 02:38:20 np0005548731 nova_compute[232433]: 2025-12-06 07:38:20.836 232437 DEBUG nova.virt.hardware [None req-50d60388-e28c-430d-bb2f-1abe5533371b a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec  6 02:38:20 np0005548731 nova_compute[232433]: 2025-12-06 07:38:20.837 232437 DEBUG nova.virt.hardware [None req-50d60388-e28c-430d-bb2f-1abe5533371b a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec  6 02:38:20 np0005548731 nova_compute[232433]: 2025-12-06 07:38:20.837 232437 DEBUG nova.virt.hardware [None req-50d60388-e28c-430d-bb2f-1abe5533371b a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec  6 02:38:20 np0005548731 nova_compute[232433]: 2025-12-06 07:38:20.838 232437 DEBUG nova.virt.hardware [None req-50d60388-e28c-430d-bb2f-1abe5533371b a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec  6 02:38:20 np0005548731 nova_compute[232433]: 2025-12-06 07:38:20.838 232437 DEBUG nova.objects.instance [None req-50d60388-e28c-430d-bb2f-1abe5533371b a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 2de097e3-8182-48e5-b69d-88acbfb84e66 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:38:20 np0005548731 nova_compute[232433]: 2025-12-06 07:38:20.842 232437 DEBUG nova.scheduler.client.report [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Inventory has not changed for provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  6 02:38:20 np0005548731 nova_compute[232433]: 2025-12-06 07:38:20.859 232437 DEBUG oslo_concurrency.processutils [None req-50d60388-e28c-430d-bb2f-1abe5533371b a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:38:20 np0005548731 nova_compute[232433]: 2025-12-06 07:38:20.893 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  6 02:38:20 np0005548731 nova_compute[232433]: 2025-12-06 07:38:20.894 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.038s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:38:21 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  6 02:38:21 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2972126965' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  6 02:38:21 np0005548731 nova_compute[232433]: 2025-12-06 07:38:21.291 232437 DEBUG oslo_concurrency.processutils [None req-50d60388-e28c-430d-bb2f-1abe5533371b a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.432s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:38:21 np0005548731 nova_compute[232433]: 2025-12-06 07:38:21.324 232437 DEBUG oslo_concurrency.processutils [None req-50d60388-e28c-430d-bb2f-1abe5533371b a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:38:21 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  6 02:38:21 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1646551191' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  6 02:38:21 np0005548731 nova_compute[232433]: 2025-12-06 07:38:21.833 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:38:22 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:38:22 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:38:22 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:38:22.049 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:38:22 np0005548731 ovn_controller[133927]: 2025-12-06T07:38:22Z|00590|memory_trim|INFO|Detected inactivity (last active 30003 ms ago): trimming memory
Dec  6 02:38:22 np0005548731 nova_compute[232433]: 2025-12-06 07:38:22.290 232437 DEBUG oslo_concurrency.processutils [None req-50d60388-e28c-430d-bb2f-1abe5533371b a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.966s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:38:22 np0005548731 nova_compute[232433]: 2025-12-06 07:38:22.321 232437 DEBUG nova.virt.libvirt.vif [None req-50d60388-e28c-430d-bb2f-1abe5533371b a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T07:36:50Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-1439654070',display_name='tempest-ServerActionsTestOtherB-server-1439654070',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-1439654070',id=131,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNN9jQYM4kD1mTnBw0NDX39Zbdx9ux1HYR8eIQywEVZjFzFLOofd0KCZoZVTNe73or3BwcctNg+QkLYSKwQ/ud2tRwFgp+UoYWDz3YSx64mxFih1G20CdOLvEJ79lvWoOg==',key_name='tempest-keypair-1961317761',keypairs=<?>,launch_index=0,launched_at=2025-12-06T07:37:06Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=MigrationContext,new_flavor=Flavor(2),node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=Flavor(1),os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='b10aa03d68eb4d4799d53538521cc364',ramdisk_id='',reservation_id='r-i11mga9l',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=ServiceList,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-ServerActionsTestOtherB-874907570',owner_user_name='tempest-ServerActionsTestOtherB-874907570-project-member'},tags=<?>,task_state='resize_finish',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:38:11Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='a70f6c3c5e2c402bb6fa0e0507e9b6dc',uuid=2de097e3-8182-48e5-b69d-88acbfb84e66,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "8690867c-c0a8-4574-b54f-38486691e339", "address": "fa:16:3e:4a:1f:3e", "network": {"id": "3beede49-1cbb-425c-b1af-82f43dc57163", "bridge": "br-int", "label": 
"tempest-ServerActionsTestOtherB-619240463-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.206", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-ServerActionsTestOtherB-619240463-network", "vif_mac": "fa:16:3e:4a:1f:3e"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b10aa03d68eb4d4799d53538521cc364", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8690867c-c0", "ovs_interfaceid": "8690867c-c0a8-4574-b54f-38486691e339", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec  6 02:38:22 np0005548731 nova_compute[232433]: 2025-12-06 07:38:22.321 232437 DEBUG nova.network.os_vif_util [None req-50d60388-e28c-430d-bb2f-1abe5533371b a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Converting VIF {"id": "8690867c-c0a8-4574-b54f-38486691e339", "address": "fa:16:3e:4a:1f:3e", "network": {"id": "3beede49-1cbb-425c-b1af-82f43dc57163", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-619240463-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.206", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-ServerActionsTestOtherB-619240463-network", "vif_mac": "fa:16:3e:4a:1f:3e"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b10aa03d68eb4d4799d53538521cc364", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8690867c-c0", "ovs_interfaceid": "8690867c-c0a8-4574-b54f-38486691e339", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 02:38:22 np0005548731 nova_compute[232433]: 2025-12-06 07:38:22.322 232437 DEBUG nova.network.os_vif_util [None req-50d60388-e28c-430d-bb2f-1abe5533371b a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:4a:1f:3e,bridge_name='br-int',has_traffic_filtering=True,id=8690867c-c0a8-4574-b54f-38486691e339,network=Network(3beede49-1cbb-425c-b1af-82f43dc57163),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8690867c-c0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 02:38:22 np0005548731 nova_compute[232433]: 2025-12-06 07:38:22.326 232437 DEBUG nova.virt.libvirt.driver [None req-50d60388-e28c-430d-bb2f-1abe5533371b a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] [instance: 2de097e3-8182-48e5-b69d-88acbfb84e66] End _get_guest_xml xml=<domain type="kvm">
Dec  6 02:38:22 np0005548731 nova_compute[232433]:  <uuid>2de097e3-8182-48e5-b69d-88acbfb84e66</uuid>
Dec  6 02:38:22 np0005548731 nova_compute[232433]:  <name>instance-00000083</name>
Dec  6 02:38:22 np0005548731 nova_compute[232433]:  <memory>196608</memory>
Dec  6 02:38:22 np0005548731 nova_compute[232433]:  <vcpu>1</vcpu>
Dec  6 02:38:22 np0005548731 nova_compute[232433]:  <metadata>
Dec  6 02:38:22 np0005548731 nova_compute[232433]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec  6 02:38:22 np0005548731 nova_compute[232433]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec  6 02:38:22 np0005548731 nova_compute[232433]:      <nova:name>tempest-ServerActionsTestOtherB-server-1439654070</nova:name>
Dec  6 02:38:22 np0005548731 nova_compute[232433]:      <nova:creationTime>2025-12-06 07:38:20</nova:creationTime>
Dec  6 02:38:22 np0005548731 nova_compute[232433]:      <nova:flavor name="m1.micro">
Dec  6 02:38:22 np0005548731 nova_compute[232433]:        <nova:memory>192</nova:memory>
Dec  6 02:38:22 np0005548731 nova_compute[232433]:        <nova:disk>1</nova:disk>
Dec  6 02:38:22 np0005548731 nova_compute[232433]:        <nova:swap>0</nova:swap>
Dec  6 02:38:22 np0005548731 nova_compute[232433]:        <nova:ephemeral>0</nova:ephemeral>
Dec  6 02:38:22 np0005548731 nova_compute[232433]:        <nova:vcpus>1</nova:vcpus>
Dec  6 02:38:22 np0005548731 nova_compute[232433]:      </nova:flavor>
Dec  6 02:38:22 np0005548731 nova_compute[232433]:      <nova:owner>
Dec  6 02:38:22 np0005548731 nova_compute[232433]:        <nova:user uuid="a70f6c3c5e2c402bb6fa0e0507e9b6dc">tempest-ServerActionsTestOtherB-874907570-project-member</nova:user>
Dec  6 02:38:22 np0005548731 nova_compute[232433]:        <nova:project uuid="b10aa03d68eb4d4799d53538521cc364">tempest-ServerActionsTestOtherB-874907570</nova:project>
Dec  6 02:38:22 np0005548731 nova_compute[232433]:      </nova:owner>
Dec  6 02:38:22 np0005548731 nova_compute[232433]:      <nova:root type="image" uuid="6efab05d-c7cf-4770-a5c3-c806a2739063"/>
Dec  6 02:38:22 np0005548731 nova_compute[232433]:      <nova:ports>
Dec  6 02:38:22 np0005548731 nova_compute[232433]:        <nova:port uuid="8690867c-c0a8-4574-b54f-38486691e339">
Dec  6 02:38:22 np0005548731 nova_compute[232433]:          <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Dec  6 02:38:22 np0005548731 nova_compute[232433]:        </nova:port>
Dec  6 02:38:22 np0005548731 nova_compute[232433]:      </nova:ports>
Dec  6 02:38:22 np0005548731 nova_compute[232433]:    </nova:instance>
Dec  6 02:38:22 np0005548731 nova_compute[232433]:  </metadata>
Dec  6 02:38:22 np0005548731 nova_compute[232433]:  <sysinfo type="smbios">
Dec  6 02:38:22 np0005548731 nova_compute[232433]:    <system>
Dec  6 02:38:22 np0005548731 nova_compute[232433]:      <entry name="manufacturer">RDO</entry>
Dec  6 02:38:22 np0005548731 nova_compute[232433]:      <entry name="product">OpenStack Compute</entry>
Dec  6 02:38:22 np0005548731 nova_compute[232433]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec  6 02:38:22 np0005548731 nova_compute[232433]:      <entry name="serial">2de097e3-8182-48e5-b69d-88acbfb84e66</entry>
Dec  6 02:38:22 np0005548731 nova_compute[232433]:      <entry name="uuid">2de097e3-8182-48e5-b69d-88acbfb84e66</entry>
Dec  6 02:38:22 np0005548731 nova_compute[232433]:      <entry name="family">Virtual Machine</entry>
Dec  6 02:38:22 np0005548731 nova_compute[232433]:    </system>
Dec  6 02:38:22 np0005548731 nova_compute[232433]:  </sysinfo>
Dec  6 02:38:22 np0005548731 nova_compute[232433]:  <os>
Dec  6 02:38:22 np0005548731 nova_compute[232433]:    <type arch="x86_64" machine="q35">hvm</type>
Dec  6 02:38:22 np0005548731 nova_compute[232433]:    <boot dev="hd"/>
Dec  6 02:38:22 np0005548731 nova_compute[232433]:    <smbios mode="sysinfo"/>
Dec  6 02:38:22 np0005548731 nova_compute[232433]:  </os>
Dec  6 02:38:22 np0005548731 nova_compute[232433]:  <features>
Dec  6 02:38:22 np0005548731 nova_compute[232433]:    <acpi/>
Dec  6 02:38:22 np0005548731 nova_compute[232433]:    <apic/>
Dec  6 02:38:22 np0005548731 nova_compute[232433]:    <vmcoreinfo/>
Dec  6 02:38:22 np0005548731 nova_compute[232433]:  </features>
Dec  6 02:38:22 np0005548731 nova_compute[232433]:  <clock offset="utc">
Dec  6 02:38:22 np0005548731 nova_compute[232433]:    <timer name="pit" tickpolicy="delay"/>
Dec  6 02:38:22 np0005548731 nova_compute[232433]:    <timer name="rtc" tickpolicy="catchup"/>
Dec  6 02:38:22 np0005548731 nova_compute[232433]:    <timer name="hpet" present="no"/>
Dec  6 02:38:22 np0005548731 nova_compute[232433]:  </clock>
Dec  6 02:38:22 np0005548731 nova_compute[232433]:  <cpu mode="custom" match="exact">
Dec  6 02:38:22 np0005548731 nova_compute[232433]:    <model>Nehalem</model>
Dec  6 02:38:22 np0005548731 nova_compute[232433]:    <topology sockets="1" cores="1" threads="1"/>
Dec  6 02:38:22 np0005548731 nova_compute[232433]:  </cpu>
Dec  6 02:38:22 np0005548731 nova_compute[232433]:  <devices>
Dec  6 02:38:22 np0005548731 nova_compute[232433]:    <disk type="network" device="disk">
Dec  6 02:38:22 np0005548731 nova_compute[232433]:      <driver type="raw" cache="none"/>
Dec  6 02:38:22 np0005548731 nova_compute[232433]:      <source protocol="rbd" name="vms/2de097e3-8182-48e5-b69d-88acbfb84e66_disk">
Dec  6 02:38:22 np0005548731 nova_compute[232433]:        <host name="192.168.122.100" port="6789"/>
Dec  6 02:38:22 np0005548731 nova_compute[232433]:        <host name="192.168.122.102" port="6789"/>
Dec  6 02:38:22 np0005548731 nova_compute[232433]:        <host name="192.168.122.101" port="6789"/>
Dec  6 02:38:22 np0005548731 nova_compute[232433]:      </source>
Dec  6 02:38:22 np0005548731 nova_compute[232433]:      <auth username="openstack">
Dec  6 02:38:22 np0005548731 nova_compute[232433]:        <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec  6 02:38:22 np0005548731 nova_compute[232433]:      </auth>
Dec  6 02:38:22 np0005548731 nova_compute[232433]:      <target dev="vda" bus="virtio"/>
Dec  6 02:38:22 np0005548731 nova_compute[232433]:    </disk>
Dec  6 02:38:22 np0005548731 nova_compute[232433]:    <disk type="network" device="cdrom">
Dec  6 02:38:22 np0005548731 nova_compute[232433]:      <driver type="raw" cache="none"/>
Dec  6 02:38:22 np0005548731 nova_compute[232433]:      <source protocol="rbd" name="vms/2de097e3-8182-48e5-b69d-88acbfb84e66_disk.config">
Dec  6 02:38:22 np0005548731 nova_compute[232433]:        <host name="192.168.122.100" port="6789"/>
Dec  6 02:38:22 np0005548731 nova_compute[232433]:        <host name="192.168.122.102" port="6789"/>
Dec  6 02:38:22 np0005548731 nova_compute[232433]:        <host name="192.168.122.101" port="6789"/>
Dec  6 02:38:22 np0005548731 nova_compute[232433]:      </source>
Dec  6 02:38:22 np0005548731 nova_compute[232433]:      <auth username="openstack">
Dec  6 02:38:22 np0005548731 nova_compute[232433]:        <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec  6 02:38:22 np0005548731 nova_compute[232433]:      </auth>
Dec  6 02:38:22 np0005548731 nova_compute[232433]:      <target dev="sda" bus="sata"/>
Dec  6 02:38:22 np0005548731 nova_compute[232433]:    </disk>
Dec  6 02:38:22 np0005548731 nova_compute[232433]:    <disk type="network" device="disk">
Dec  6 02:38:22 np0005548731 nova_compute[232433]:      <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Dec  6 02:38:22 np0005548731 nova_compute[232433]:      <source protocol="rbd" name="volumes/volume-5ac3b430-9e6d-4b6b-b6db-a57a4cebd8e6">
Dec  6 02:38:22 np0005548731 nova_compute[232433]:        <host name="192.168.122.100" port="6789"/>
Dec  6 02:38:22 np0005548731 nova_compute[232433]:        <host name="192.168.122.102" port="6789"/>
Dec  6 02:38:22 np0005548731 nova_compute[232433]:        <host name="192.168.122.101" port="6789"/>
Dec  6 02:38:22 np0005548731 nova_compute[232433]:      </source>
Dec  6 02:38:22 np0005548731 nova_compute[232433]:      <auth username="openstack">
Dec  6 02:38:22 np0005548731 nova_compute[232433]:        <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec  6 02:38:22 np0005548731 nova_compute[232433]:      </auth>
Dec  6 02:38:22 np0005548731 nova_compute[232433]:      <target dev="vdb" bus="virtio"/>
Dec  6 02:38:22 np0005548731 nova_compute[232433]:      <serial>5ac3b430-9e6d-4b6b-b6db-a57a4cebd8e6</serial>
Dec  6 02:38:22 np0005548731 nova_compute[232433]:    </disk>
Dec  6 02:38:22 np0005548731 nova_compute[232433]:    <interface type="ethernet">
Dec  6 02:38:22 np0005548731 nova_compute[232433]:      <mac address="fa:16:3e:4a:1f:3e"/>
Dec  6 02:38:22 np0005548731 nova_compute[232433]:      <model type="virtio"/>
Dec  6 02:38:22 np0005548731 nova_compute[232433]:      <driver name="vhost" rx_queue_size="512"/>
Dec  6 02:38:22 np0005548731 nova_compute[232433]:      <mtu size="1442"/>
Dec  6 02:38:22 np0005548731 nova_compute[232433]:      <target dev="tap8690867c-c0"/>
Dec  6 02:38:22 np0005548731 nova_compute[232433]:    </interface>
Dec  6 02:38:22 np0005548731 nova_compute[232433]:    <serial type="pty">
Dec  6 02:38:22 np0005548731 nova_compute[232433]:      <log file="/var/lib/nova/instances/2de097e3-8182-48e5-b69d-88acbfb84e66/console.log" append="off"/>
Dec  6 02:38:22 np0005548731 nova_compute[232433]:    </serial>
Dec  6 02:38:22 np0005548731 nova_compute[232433]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec  6 02:38:22 np0005548731 nova_compute[232433]:    <video>
Dec  6 02:38:22 np0005548731 nova_compute[232433]:      <model type="virtio"/>
Dec  6 02:38:22 np0005548731 nova_compute[232433]:    </video>
Dec  6 02:38:22 np0005548731 nova_compute[232433]:    <input type="tablet" bus="usb"/>
Dec  6 02:38:22 np0005548731 nova_compute[232433]:    <rng model="virtio">
Dec  6 02:38:22 np0005548731 nova_compute[232433]:      <backend model="random">/dev/urandom</backend>
Dec  6 02:38:22 np0005548731 nova_compute[232433]:    </rng>
Dec  6 02:38:22 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root"/>
Dec  6 02:38:22 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:38:22 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:38:22 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:38:22 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:38:22 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:38:22 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:38:22 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:38:22 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:38:22 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:38:22 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:38:22 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:38:22 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:38:22 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:38:22 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:38:22 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:38:22 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:38:22 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:38:22 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:38:22 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:38:22 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:38:22 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:38:22 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:38:22 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:38:22 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:38:22 np0005548731 nova_compute[232433]:    <controller type="usb" index="0"/>
Dec  6 02:38:22 np0005548731 nova_compute[232433]:    <memballoon model="virtio">
Dec  6 02:38:22 np0005548731 nova_compute[232433]:      <stats period="10"/>
Dec  6 02:38:22 np0005548731 nova_compute[232433]:    </memballoon>
Dec  6 02:38:22 np0005548731 nova_compute[232433]:  </devices>
Dec  6 02:38:22 np0005548731 nova_compute[232433]: </domain>
Dec  6 02:38:22 np0005548731 nova_compute[232433]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec  6 02:38:22 np0005548731 nova_compute[232433]: 2025-12-06 07:38:22.328 232437 DEBUG nova.virt.libvirt.vif [None req-50d60388-e28c-430d-bb2f-1abe5533371b a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T07:36:50Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-1439654070',display_name='tempest-ServerActionsTestOtherB-server-1439654070',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-1439654070',id=131,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNN9jQYM4kD1mTnBw0NDX39Zbdx9ux1HYR8eIQywEVZjFzFLOofd0KCZoZVTNe73or3BwcctNg+QkLYSKwQ/ud2tRwFgp+UoYWDz3YSx64mxFih1G20CdOLvEJ79lvWoOg==',key_name='tempest-keypair-1961317761',keypairs=<?>,launch_index=0,launched_at=2025-12-06T07:37:06Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=MigrationContext,new_flavor=Flavor(2),node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=Flavor(1),os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='b10aa03d68eb4d4799d53538521cc364',ramdisk_id='',reservation_id='r-i11mga9l',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=ServiceList,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-ServerActionsTestOtherB-874907570',owner_user_name='tempest-ServerActionsTestOtherB-874907570-project-member'},tags=<?>,task_state='resize_finish',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:38:11Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='a70f6c3c5e2c402bb6fa0e0507e9b6dc',uuid=2de097e3-8182-48e5-b69d-88acbfb84e66,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "8690867c-c0a8-4574-b54f-38486691e339", "address": "fa:16:3e:4a:1f:3e", "network": {"id": "3beede49-1cbb-425c-b1af-82f43dc57163", "bridge": "br-int", "label": 
"tempest-ServerActionsTestOtherB-619240463-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.206", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-ServerActionsTestOtherB-619240463-network", "vif_mac": "fa:16:3e:4a:1f:3e"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b10aa03d68eb4d4799d53538521cc364", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8690867c-c0", "ovs_interfaceid": "8690867c-c0a8-4574-b54f-38486691e339", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec  6 02:38:22 np0005548731 nova_compute[232433]: 2025-12-06 07:38:22.328 232437 DEBUG nova.network.os_vif_util [None req-50d60388-e28c-430d-bb2f-1abe5533371b a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Converting VIF {"id": "8690867c-c0a8-4574-b54f-38486691e339", "address": "fa:16:3e:4a:1f:3e", "network": {"id": "3beede49-1cbb-425c-b1af-82f43dc57163", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-619240463-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.206", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-ServerActionsTestOtherB-619240463-network", "vif_mac": "fa:16:3e:4a:1f:3e"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b10aa03d68eb4d4799d53538521cc364", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8690867c-c0", "ovs_interfaceid": "8690867c-c0a8-4574-b54f-38486691e339", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 02:38:22 np0005548731 nova_compute[232433]: 2025-12-06 07:38:22.329 232437 DEBUG nova.network.os_vif_util [None req-50d60388-e28c-430d-bb2f-1abe5533371b a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:4a:1f:3e,bridge_name='br-int',has_traffic_filtering=True,id=8690867c-c0a8-4574-b54f-38486691e339,network=Network(3beede49-1cbb-425c-b1af-82f43dc57163),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8690867c-c0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 02:38:22 np0005548731 nova_compute[232433]: 2025-12-06 07:38:22.329 232437 DEBUG os_vif [None req-50d60388-e28c-430d-bb2f-1abe5533371b a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:4a:1f:3e,bridge_name='br-int',has_traffic_filtering=True,id=8690867c-c0a8-4574-b54f-38486691e339,network=Network(3beede49-1cbb-425c-b1af-82f43dc57163),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8690867c-c0') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec  6 02:38:22 np0005548731 nova_compute[232433]: 2025-12-06 07:38:22.330 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:38:22 np0005548731 nova_compute[232433]: 2025-12-06 07:38:22.330 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:38:22 np0005548731 nova_compute[232433]: 2025-12-06 07:38:22.331 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  6 02:38:22 np0005548731 nova_compute[232433]: 2025-12-06 07:38:22.333 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:38:22 np0005548731 nova_compute[232433]: 2025-12-06 07:38:22.334 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8690867c-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:38:22 np0005548731 nova_compute[232433]: 2025-12-06 07:38:22.334 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap8690867c-c0, col_values=(('external_ids', {'iface-id': '8690867c-c0a8-4574-b54f-38486691e339', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:4a:1f:3e', 'vm-uuid': '2de097e3-8182-48e5-b69d-88acbfb84e66'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:38:22 np0005548731 nova_compute[232433]: 2025-12-06 07:38:22.336 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:38:22 np0005548731 NetworkManager[49182]: <info>  [1765006702.3371] manager: (tap8690867c-c0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/278)
Dec  6 02:38:22 np0005548731 nova_compute[232433]: 2025-12-06 07:38:22.338 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec  6 02:38:22 np0005548731 nova_compute[232433]: 2025-12-06 07:38:22.343 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 02:38:22 np0005548731 nova_compute[232433]: 2025-12-06 07:38:22.344 232437 INFO os_vif [None req-50d60388-e28c-430d-bb2f-1abe5533371b a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:4a:1f:3e,bridge_name='br-int',has_traffic_filtering=True,id=8690867c-c0a8-4574-b54f-38486691e339,network=Network(3beede49-1cbb-425c-b1af-82f43dc57163),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8690867c-c0')
Dec  6 02:38:22 np0005548731 nova_compute[232433]: 2025-12-06 07:38:22.399 232437 DEBUG nova.virt.libvirt.driver [None req-50d60388-e28c-430d-bb2f-1abe5533371b a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec  6 02:38:22 np0005548731 nova_compute[232433]: 2025-12-06 07:38:22.399 232437 DEBUG nova.virt.libvirt.driver [None req-50d60388-e28c-430d-bb2f-1abe5533371b a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec  6 02:38:22 np0005548731 nova_compute[232433]: 2025-12-06 07:38:22.400 232437 DEBUG nova.virt.libvirt.driver [None req-50d60388-e28c-430d-bb2f-1abe5533371b a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec  6 02:38:22 np0005548731 nova_compute[232433]: 2025-12-06 07:38:22.400 232437 DEBUG nova.virt.libvirt.driver [None req-50d60388-e28c-430d-bb2f-1abe5533371b a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] No VIF found with MAC fa:16:3e:4a:1f:3e, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec  6 02:38:22 np0005548731 nova_compute[232433]: 2025-12-06 07:38:22.401 232437 INFO nova.virt.libvirt.driver [None req-50d60388-e28c-430d-bb2f-1abe5533371b a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] [instance: 2de097e3-8182-48e5-b69d-88acbfb84e66] Using config drive
Dec  6 02:38:22 np0005548731 NetworkManager[49182]: <info>  [1765006702.4921] manager: (tap8690867c-c0): new Tun device (/org/freedesktop/NetworkManager/Devices/279)
Dec  6 02:38:22 np0005548731 kernel: tap8690867c-c0: entered promiscuous mode
Dec  6 02:38:22 np0005548731 nova_compute[232433]: 2025-12-06 07:38:22.495 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 02:38:22 np0005548731 ovn_controller[133927]: 2025-12-06T07:38:22Z|00591|binding|INFO|Claiming lport 8690867c-c0a8-4574-b54f-38486691e339 for this chassis.
Dec  6 02:38:22 np0005548731 ovn_controller[133927]: 2025-12-06T07:38:22Z|00592|binding|INFO|8690867c-c0a8-4574-b54f-38486691e339: Claiming fa:16:3e:4a:1f:3e 10.100.0.11
Dec  6 02:38:22 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:38:22.510 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:4a:1f:3e 10.100.0.11'], port_security=['fa:16:3e:4a:1f:3e 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '2de097e3-8182-48e5-b69d-88acbfb84e66', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3beede49-1cbb-425c-b1af-82f43dc57163', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b10aa03d68eb4d4799d53538521cc364', 'neutron:revision_number': '6', 'neutron:security_group_ids': 'd7c24a87-3909-4046-b7ee-0c4e77c9cc98', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.206'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f4f51045-db64-4b9b-8a34-a3c617e616e7, chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], logical_port=8690867c-c0a8-4574-b54f-38486691e339) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 02:38:22 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:38:22.512 143965 INFO neutron.agent.ovn.metadata.agent [-] Port 8690867c-c0a8-4574-b54f-38486691e339 in datapath 3beede49-1cbb-425c-b1af-82f43dc57163 bound to our chassis
Dec  6 02:38:22 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:38:22.513 143965 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 3beede49-1cbb-425c-b1af-82f43dc57163
Dec  6 02:38:22 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:38:22.525 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[f392dda3-ddde-47bf-ab1e-616c99795e6a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  6 02:38:22 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:38:22.526 143965 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap3beede49-11 in ovnmeta-3beede49-1cbb-425c-b1af-82f43dc57163 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Dec  6 02:38:22 np0005548731 systemd-udevd[290178]: Network interface NamePolicy= disabled on kernel command line.
Dec  6 02:38:22 np0005548731 systemd-machined[195355]: New machine qemu-61-instance-00000083.
Dec  6 02:38:22 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:38:22.528 236668 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap3beede49-10 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec  6 02:38:22 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:38:22.528 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[1515fe6f-5edf-4bb4-bebd-dc90d27c4459]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  6 02:38:22 np0005548731 ovn_controller[133927]: 2025-12-06T07:38:22Z|00593|binding|INFO|Setting lport 8690867c-c0a8-4574-b54f-38486691e339 ovn-installed in OVS
Dec  6 02:38:22 np0005548731 ovn_controller[133927]: 2025-12-06T07:38:22Z|00594|binding|INFO|Setting lport 8690867c-c0a8-4574-b54f-38486691e339 up in Southbound
Dec  6 02:38:22 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:38:22.532 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[2db12f44-91a9-4423-9cde-70d55603a45c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  6 02:38:22 np0005548731 nova_compute[232433]: 2025-12-06 07:38:22.534 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 02:38:22 np0005548731 NetworkManager[49182]: <info>  [1765006702.5413] device (tap8690867c-c0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec  6 02:38:22 np0005548731 systemd[1]: Started Virtual Machine qemu-61-instance-00000083.
Dec  6 02:38:22 np0005548731 NetworkManager[49182]: <info>  [1765006702.5444] device (tap8690867c-c0): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec  6 02:38:22 np0005548731 nova_compute[232433]: 2025-12-06 07:38:22.545 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 02:38:22 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:38:22.546 144079 DEBUG oslo.privsep.daemon [-] privsep: reply[f60f80be-d56c-410d-b383-8161edbf0802]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  6 02:38:22 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:38:22.561 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[5191f7a1-b192-4ba9-900f-886aae6db92d]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  6 02:38:22 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:38:22.588 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[b92f627c-e44c-4108-b664-4d723cb35dfc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  6 02:38:22 np0005548731 systemd-udevd[290181]: Network interface NamePolicy= disabled on kernel command line.
Dec  6 02:38:22 np0005548731 NetworkManager[49182]: <info>  [1765006702.5976] manager: (tap3beede49-10): new Veth device (/org/freedesktop/NetworkManager/Devices/280)
Dec  6 02:38:22 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:38:22.596 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[efb912bb-323a-40e7-bf27-7173dcc943eb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  6 02:38:22 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:38:22.631 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[45cf9dc4-74eb-4b77-8326-909f1a6abf6e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  6 02:38:22 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:38:22.634 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[98cd5f2d-de7a-456f-83ba-8bd926fdffc2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  6 02:38:22 np0005548731 NetworkManager[49182]: <info>  [1765006702.6592] device (tap3beede49-10): carrier: link connected
Dec  6 02:38:22 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:38:22.665 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[13a16ab6-76d2-4dca-90f0-de71d3d56b59]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  6 02:38:22 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:38:22.683 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[bef1079f-f78b-4936-a2d8-84489636f35b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3beede49-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f4:c7:55'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 185], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 696466, 'reachable_time': 34094, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 290210, 'error': None, 'target': 'ovnmeta-3beede49-1cbb-425c-b1af-82f43dc57163', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:38:22 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:38:22.697 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[3e0d6bac-e65b-4775-b738-abdfd563f881]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fef4:c755'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 696466, 'tstamp': 696466}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 290211, 'error': None, 'target': 'ovnmeta-3beede49-1cbb-425c-b1af-82f43dc57163', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:38:22 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:38:22.716 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[3c1ba193-4b24-47b3-a49b-ae582b8c2eac]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3beede49-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f4:c7:55'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 185], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 696466, 'reachable_time': 34094, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 290212, 'error': None, 'target': 'ovnmeta-3beede49-1cbb-425c-b1af-82f43dc57163', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:38:22 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:38:22.744 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[39b69af9-8d42-491b-99e4-d98936ed3628]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  6 02:38:22 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:38:22.806 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[a729c092-5092-4d4c-be37-4844d9f03397]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  6 02:38:22 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:38:22.807 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3beede49-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec  6 02:38:22 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:38:22.808 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec  6 02:38:22 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:38:22.808 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3beede49-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec  6 02:38:22 np0005548731 nova_compute[232433]: 2025-12-06 07:38:22.810 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 02:38:22 np0005548731 NetworkManager[49182]: <info>  [1765006702.8106] manager: (tap3beede49-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/281)
Dec  6 02:38:22 np0005548731 kernel: tap3beede49-10: entered promiscuous mode
Dec  6 02:38:22 np0005548731 nova_compute[232433]: 2025-12-06 07:38:22.811 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 02:38:22 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:38:22.818 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap3beede49-10, col_values=(('external_ids', {'iface-id': '058fee39-af19-4b00-b556-fb88bc823747'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec  6 02:38:22 np0005548731 nova_compute[232433]: 2025-12-06 07:38:22.819 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 02:38:22 np0005548731 ovn_controller[133927]: 2025-12-06T07:38:22Z|00595|binding|INFO|Releasing lport 058fee39-af19-4b00-b556-fb88bc823747 from this chassis (sb_readonly=0)
Dec  6 02:38:22 np0005548731 nova_compute[232433]: 2025-12-06 07:38:22.819 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 02:38:22 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:38:22.822 143965 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/3beede49-1cbb-425c-b1af-82f43dc57163.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/3beede49-1cbb-425c-b1af-82f43dc57163.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec  6 02:38:22 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:38:22.823 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[257a55f7-d146-4883-a5d5-c8848df492be]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  6 02:38:22 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:38:22.823 143965 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec  6 02:38:22 np0005548731 ovn_metadata_agent[143960]: global
Dec  6 02:38:22 np0005548731 ovn_metadata_agent[143960]:    log         /dev/log local0 debug
Dec  6 02:38:22 np0005548731 ovn_metadata_agent[143960]:    log-tag     haproxy-metadata-proxy-3beede49-1cbb-425c-b1af-82f43dc57163
Dec  6 02:38:22 np0005548731 ovn_metadata_agent[143960]:    user        root
Dec  6 02:38:22 np0005548731 ovn_metadata_agent[143960]:    group       root
Dec  6 02:38:22 np0005548731 ovn_metadata_agent[143960]:    maxconn     1024
Dec  6 02:38:22 np0005548731 ovn_metadata_agent[143960]:    pidfile     /var/lib/neutron/external/pids/3beede49-1cbb-425c-b1af-82f43dc57163.pid.haproxy
Dec  6 02:38:22 np0005548731 ovn_metadata_agent[143960]:    daemon
Dec  6 02:38:22 np0005548731 ovn_metadata_agent[143960]: 
Dec  6 02:38:22 np0005548731 ovn_metadata_agent[143960]: defaults
Dec  6 02:38:22 np0005548731 ovn_metadata_agent[143960]:    log global
Dec  6 02:38:22 np0005548731 ovn_metadata_agent[143960]:    mode http
Dec  6 02:38:22 np0005548731 ovn_metadata_agent[143960]:    option httplog
Dec  6 02:38:22 np0005548731 ovn_metadata_agent[143960]:    option dontlognull
Dec  6 02:38:22 np0005548731 ovn_metadata_agent[143960]:    option http-server-close
Dec  6 02:38:22 np0005548731 ovn_metadata_agent[143960]:    option forwardfor
Dec  6 02:38:22 np0005548731 ovn_metadata_agent[143960]:    retries                 3
Dec  6 02:38:22 np0005548731 ovn_metadata_agent[143960]:    timeout http-request    30s
Dec  6 02:38:22 np0005548731 ovn_metadata_agent[143960]:    timeout connect         30s
Dec  6 02:38:22 np0005548731 ovn_metadata_agent[143960]:    timeout client          32s
Dec  6 02:38:22 np0005548731 ovn_metadata_agent[143960]:    timeout server          32s
Dec  6 02:38:22 np0005548731 ovn_metadata_agent[143960]:    timeout http-keep-alive 30s
Dec  6 02:38:22 np0005548731 ovn_metadata_agent[143960]: 
Dec  6 02:38:22 np0005548731 ovn_metadata_agent[143960]: 
Dec  6 02:38:22 np0005548731 ovn_metadata_agent[143960]: listen listener
Dec  6 02:38:22 np0005548731 ovn_metadata_agent[143960]:    bind 169.254.169.254:80
Dec  6 02:38:22 np0005548731 ovn_metadata_agent[143960]:    server metadata /var/lib/neutron/metadata_proxy
Dec  6 02:38:22 np0005548731 ovn_metadata_agent[143960]:    http-request add-header X-OVN-Network-ID 3beede49-1cbb-425c-b1af-82f43dc57163
Dec  6 02:38:22 np0005548731 ovn_metadata_agent[143960]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec  6 02:38:22 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:38:22.824 143965 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-3beede49-1cbb-425c-b1af-82f43dc57163', 'env', 'PROCESS_TAG=haproxy-3beede49-1cbb-425c-b1af-82f43dc57163', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/3beede49-1cbb-425c-b1af-82f43dc57163.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Dec  6 02:38:22 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:38:22 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:38:22 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:38:22.832 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:38:22 np0005548731 nova_compute[232433]: 2025-12-06 07:38:22.833 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 02:38:22 np0005548731 nova_compute[232433]: 2025-12-06 07:38:22.892 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  6 02:38:22 np0005548731 nova_compute[232433]: 2025-12-06 07:38:22.968 232437 DEBUG nova.compute.manager [req-20424f3a-18bf-45c4-bc80-80fa494626f9 req-1a644287-3900-4b54-84f0-2542bf897650 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 2de097e3-8182-48e5-b69d-88acbfb84e66] Received event network-vif-plugged-8690867c-c0a8-4574-b54f-38486691e339 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec  6 02:38:22 np0005548731 nova_compute[232433]: 2025-12-06 07:38:22.968 232437 DEBUG oslo_concurrency.lockutils [req-20424f3a-18bf-45c4-bc80-80fa494626f9 req-1a644287-3900-4b54-84f0-2542bf897650 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "2de097e3-8182-48e5-b69d-88acbfb84e66-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  6 02:38:22 np0005548731 nova_compute[232433]: 2025-12-06 07:38:22.969 232437 DEBUG oslo_concurrency.lockutils [req-20424f3a-18bf-45c4-bc80-80fa494626f9 req-1a644287-3900-4b54-84f0-2542bf897650 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "2de097e3-8182-48e5-b69d-88acbfb84e66-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  6 02:38:22 np0005548731 nova_compute[232433]: 2025-12-06 07:38:22.969 232437 DEBUG oslo_concurrency.lockutils [req-20424f3a-18bf-45c4-bc80-80fa494626f9 req-1a644287-3900-4b54-84f0-2542bf897650 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "2de097e3-8182-48e5-b69d-88acbfb84e66-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  6 02:38:22 np0005548731 nova_compute[232433]: 2025-12-06 07:38:22.969 232437 DEBUG nova.compute.manager [req-20424f3a-18bf-45c4-bc80-80fa494626f9 req-1a644287-3900-4b54-84f0-2542bf897650 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 2de097e3-8182-48e5-b69d-88acbfb84e66] No waiting events found dispatching network-vif-plugged-8690867c-c0a8-4574-b54f-38486691e339 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec  6 02:38:22 np0005548731 nova_compute[232433]: 2025-12-06 07:38:22.969 232437 WARNING nova.compute.manager [req-20424f3a-18bf-45c4-bc80-80fa494626f9 req-1a644287-3900-4b54-84f0-2542bf897650 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 2de097e3-8182-48e5-b69d-88acbfb84e66] Received unexpected event network-vif-plugged-8690867c-c0a8-4574-b54f-38486691e339 for instance with vm_state active and task_state resize_finish.
Dec  6 02:38:23 np0005548731 nova_compute[232433]: 2025-12-06 07:38:23.122 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 02:38:23 np0005548731 podman[290280]: 2025-12-06 07:38:23.167056186 +0000 UTC m=+0.056945598 container create bbf297eefcb3ce5e910a2dfe07447a3120fa0269da7cd8454e5af8744b645fc7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3beede49-1cbb-425c-b1af-82f43dc57163, tcib_managed=true, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  6 02:38:23 np0005548731 systemd[1]: Started libpod-conmon-bbf297eefcb3ce5e910a2dfe07447a3120fa0269da7cd8454e5af8744b645fc7.scope.
Dec  6 02:38:23 np0005548731 systemd[1]: Started libcrun container.
Dec  6 02:38:23 np0005548731 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f9e9f4b8f6570e22bf94b7bf20b12889022af4fce1136c67b3449e02d6fdc985/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec  6 02:38:23 np0005548731 podman[290280]: 2025-12-06 07:38:23.139054404 +0000 UTC m=+0.028943846 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec  6 02:38:23 np0005548731 podman[290280]: 2025-12-06 07:38:23.246310186 +0000 UTC m=+0.136199608 container init bbf297eefcb3ce5e910a2dfe07447a3120fa0269da7cd8454e5af8744b645fc7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3beede49-1cbb-425c-b1af-82f43dc57163, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Dec  6 02:38:23 np0005548731 podman[290280]: 2025-12-06 07:38:23.252331304 +0000 UTC m=+0.142220716 container start bbf297eefcb3ce5e910a2dfe07447a3120fa0269da7cd8454e5af8744b645fc7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3beede49-1cbb-425c-b1af-82f43dc57163, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, tcib_managed=true)
Dec  6 02:38:23 np0005548731 neutron-haproxy-ovnmeta-3beede49-1cbb-425c-b1af-82f43dc57163[290295]: [NOTICE]   (290299) : New worker (290301) forked
Dec  6 02:38:23 np0005548731 neutron-haproxy-ovnmeta-3beede49-1cbb-425c-b1af-82f43dc57163[290295]: [NOTICE]   (290299) : Loading success.
Dec  6 02:38:23 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:38:24 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:38:24 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:38:24 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:38:24.050 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:38:24 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:38:24 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:38:24 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:38:24.834 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:38:25 np0005548731 nova_compute[232433]: 2025-12-06 07:38:25.096 232437 DEBUG nova.compute.manager [req-66d28d30-10d3-4978-be87-b2a09ae103d2 req-5a90aee5-1a83-49a4-8dda-3c052ed1dc00 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 2de097e3-8182-48e5-b69d-88acbfb84e66] Received event network-vif-plugged-8690867c-c0a8-4574-b54f-38486691e339 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec  6 02:38:25 np0005548731 nova_compute[232433]: 2025-12-06 07:38:25.097 232437 DEBUG oslo_concurrency.lockutils [req-66d28d30-10d3-4978-be87-b2a09ae103d2 req-5a90aee5-1a83-49a4-8dda-3c052ed1dc00 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "2de097e3-8182-48e5-b69d-88acbfb84e66-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  6 02:38:25 np0005548731 nova_compute[232433]: 2025-12-06 07:38:25.098 232437 DEBUG oslo_concurrency.lockutils [req-66d28d30-10d3-4978-be87-b2a09ae103d2 req-5a90aee5-1a83-49a4-8dda-3c052ed1dc00 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "2de097e3-8182-48e5-b69d-88acbfb84e66-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  6 02:38:25 np0005548731 nova_compute[232433]: 2025-12-06 07:38:25.098 232437 DEBUG oslo_concurrency.lockutils [req-66d28d30-10d3-4978-be87-b2a09ae103d2 req-5a90aee5-1a83-49a4-8dda-3c052ed1dc00 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "2de097e3-8182-48e5-b69d-88acbfb84e66-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  6 02:38:25 np0005548731 nova_compute[232433]: 2025-12-06 07:38:25.099 232437 DEBUG nova.compute.manager [req-66d28d30-10d3-4978-be87-b2a09ae103d2 req-5a90aee5-1a83-49a4-8dda-3c052ed1dc00 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 2de097e3-8182-48e5-b69d-88acbfb84e66] No waiting events found dispatching network-vif-plugged-8690867c-c0a8-4574-b54f-38486691e339 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec  6 02:38:25 np0005548731 nova_compute[232433]: 2025-12-06 07:38:25.099 232437 WARNING nova.compute.manager [req-66d28d30-10d3-4978-be87-b2a09ae103d2 req-5a90aee5-1a83-49a4-8dda-3c052ed1dc00 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 2de097e3-8182-48e5-b69d-88acbfb84e66] Received unexpected event network-vif-plugged-8690867c-c0a8-4574-b54f-38486691e339 for instance with vm_state active and task_state resize_finish.
Dec  6 02:38:25 np0005548731 nova_compute[232433]: 2025-12-06 07:38:25.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  6 02:38:26 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:38:26 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:38:26 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:38:26.052 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:38:26 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:38:26 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:38:26 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:38:26.837 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:38:27 np0005548731 nova_compute[232433]: 2025-12-06 07:38:27.337 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 02:38:27 np0005548731 ceph-mds[82074]: mds.beacon.cephfs.compute-2.tjfgow missed beacon ack from the monitors
Dec  6 02:38:28 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:38:28 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:38:28 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:38:28.054 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:38:28 np0005548731 nova_compute[232433]: 2025-12-06 07:38:28.126 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 02:38:28 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:38:28 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:38:28 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:38:28 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:38:28.840 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:38:28 np0005548731 podman[290313]: 2025-12-06 07:38:28.904275068 +0000 UTC m=+0.058321971 container health_status 26c2ce9bdb83c6db5ccad7f5b2a7cb4e9354ab0235ad2f942ea42c8feda2142b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Dec  6 02:38:28 np0005548731 podman[290315]: 2025-12-06 07:38:28.940741766 +0000 UTC m=+0.089913471 container health_status ae4fd08cdec31d6ae2a6b91ffff7deb6ca5a8375cb52ee4b685cc6f25796824e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec  6 02:38:28 np0005548731 podman[290314]: 2025-12-06 07:38:28.984257216 +0000 UTC m=+0.136479555 container health_status a118c3becf56cfbcae208065771ebf10a65162bb7b96389971293e534823fb78 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  6 02:38:30 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:38:30 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:38:30 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:38:30.056 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:38:30 np0005548731 nova_compute[232433]: 2025-12-06 07:38:30.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  6 02:38:30 np0005548731 nova_compute[232433]: 2025-12-06 07:38:30.105 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Dec  6 02:38:30 np0005548731 nova_compute[232433]: 2025-12-06 07:38:30.125 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Dec  6 02:38:30 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:38:30 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:38:30 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:38:30.842 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:38:31 np0005548731 ceph-mds[82074]: mds.beacon.cephfs.compute-2.tjfgow missed beacon ack from the monitors
Dec  6 02:38:32 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:38:32 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:38:32 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:38:32.059 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:38:32 np0005548731 nova_compute[232433]: 2025-12-06 07:38:32.367 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 02:38:32 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).paxos(paxos updating c 4268..4908) lease_timeout -- calling new election
Dec  6 02:38:32 np0005548731 ceph-mon[77458]: log_channel(cluster) log [INF] : mon.compute-2 calling monitor election
Dec  6 02:38:32 np0005548731 ceph-mon[77458]: paxos.1).electionLogic(56) init, last seen epoch 56
Dec  6 02:38:32 np0005548731 ceph-mon[77458]: mon.compute-2@1(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec  6 02:38:32 np0005548731 ceph-mon[77458]: mon.compute-2@1(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec  6 02:38:32 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:38:32 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:38:32 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:38:32.844 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:38:33 np0005548731 nova_compute[232433]: 2025-12-06 07:38:33.127 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 02:38:34 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:38:34 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 02:38:34 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:38:34.061 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 02:38:34 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:38:34 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:38:34 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:38:34.846 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:38:35 np0005548731 ceph-mds[82074]: mds.beacon.cephfs.compute-2.tjfgow missed beacon ack from the monitors
Dec  6 02:38:36 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:38:36 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:38:36 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:38:36.063 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:38:36 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:38:36 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:38:36 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:38:36.849 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:38:37 np0005548731 nova_compute[232433]: 2025-12-06 07:38:37.369 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 02:38:37 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec  6 02:38:38 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:38:38 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:38:38 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:38:38.065 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:38:38 np0005548731 nova_compute[232433]: 2025-12-06 07:38:38.129 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 02:38:38 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:38:38 np0005548731 ceph-mon[77458]: mon.compute-2 calling monitor election
Dec  6 02:38:38 np0005548731 ceph-mon[77458]: mon.compute-0 calling monitor election
Dec  6 02:38:38 np0005548731 ceph-mon[77458]: mon.compute-0 is new leader, mons compute-0,compute-2 in quorum (ranks 0,1)
Dec  6 02:38:38 np0005548731 ceph-mon[77458]: Health check failed: 1/3 mons down, quorum compute-0,compute-2 (MON_DOWN)
Dec  6 02:38:38 np0005548731 ceph-mon[77458]: Health detail: HEALTH_WARN 1/3 mons down, quorum compute-0,compute-2
Dec  6 02:38:38 np0005548731 ceph-mon[77458]: [WRN] MON_DOWN: 1/3 mons down, quorum compute-0,compute-2
Dec  6 02:38:38 np0005548731 ceph-mon[77458]:    mon.compute-1 (rank 2) addr [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] is down (out of quorum)
Dec  6 02:38:38 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 02:38:38 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 02:38:38 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  6 02:38:38 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 02:38:38 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  6 02:38:38 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:38:38 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:38:38 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:38:38.852 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:38:40 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:38:40 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:38:40 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:38:40.068 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:38:40 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:38:40 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:38:40 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:38:40.854 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:38:42 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:38:42 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:38:42 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:38:42.070 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:38:42 np0005548731 nova_compute[232433]: 2025-12-06 07:38:42.372 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 02:38:42 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:38:42 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:38:42 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:38:42.858 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:38:43 np0005548731 nova_compute[232433]: 2025-12-06 07:38:43.131 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 02:38:43 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:38:44 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:38:44 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:38:44 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:38:44.074 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:38:44 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:38:44 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:38:44 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:38:44.860 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:38:45 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 02:38:45 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 02:38:46 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:38:46 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:38:46 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:38:46.076 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:38:46 np0005548731 nova_compute[232433]: 2025-12-06 07:38:46.718 232437 DEBUG nova.virt.driver [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Emitting event <LifecycleEvent: 1765006726.7178328, 2de097e3-8182-48e5-b69d-88acbfb84e66 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec  6 02:38:46 np0005548731 nova_compute[232433]: 2025-12-06 07:38:46.718 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 2de097e3-8182-48e5-b69d-88acbfb84e66] VM Resumed (Lifecycle Event)
Dec  6 02:38:46 np0005548731 nova_compute[232433]: 2025-12-06 07:38:46.721 232437 DEBUG nova.compute.manager [None req-50d60388-e28c-430d-bb2f-1abe5533371b a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] [instance: 2de097e3-8182-48e5-b69d-88acbfb84e66] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec  6 02:38:46 np0005548731 nova_compute[232433]: 2025-12-06 07:38:46.723 232437 INFO nova.virt.libvirt.driver [-] [instance: 2de097e3-8182-48e5-b69d-88acbfb84e66] Instance running successfully.
Dec  6 02:38:46 np0005548731 virtqemud[232080]: argument unsupported: QEMU guest agent is not configured
Dec  6 02:38:46 np0005548731 nova_compute[232433]: 2025-12-06 07:38:46.726 232437 DEBUG nova.virt.libvirt.guest [None req-50d60388-e28c-430d-bb2f-1abe5533371b a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] [instance: 2de097e3-8182-48e5-b69d-88acbfb84e66] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200
Dec  6 02:38:46 np0005548731 nova_compute[232433]: 2025-12-06 07:38:46.726 232437 DEBUG nova.virt.libvirt.driver [None req-50d60388-e28c-430d-bb2f-1abe5533371b a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] [instance: 2de097e3-8182-48e5-b69d-88acbfb84e66] finish_migration finished successfully. finish_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11793
Dec  6 02:38:46 np0005548731 nova_compute[232433]: 2025-12-06 07:38:46.742 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 2de097e3-8182-48e5-b69d-88acbfb84e66] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec  6 02:38:46 np0005548731 nova_compute[232433]: 2025-12-06 07:38:46.747 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 2de097e3-8182-48e5-b69d-88acbfb84e66] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: resize_finish, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec  6 02:38:46 np0005548731 nova_compute[232433]: 2025-12-06 07:38:46.776 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 2de097e3-8182-48e5-b69d-88acbfb84e66] During sync_power_state the instance has a pending task (resize_finish). Skip.
Dec  6 02:38:46 np0005548731 nova_compute[232433]: 2025-12-06 07:38:46.777 232437 DEBUG nova.virt.driver [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Emitting event <LifecycleEvent: 1765006726.7188678, 2de097e3-8182-48e5-b69d-88acbfb84e66 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec  6 02:38:46 np0005548731 nova_compute[232433]: 2025-12-06 07:38:46.777 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 2de097e3-8182-48e5-b69d-88acbfb84e66] VM Started (Lifecycle Event)
Dec  6 02:38:46 np0005548731 nova_compute[232433]: 2025-12-06 07:38:46.796 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 2de097e3-8182-48e5-b69d-88acbfb84e66] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec  6 02:38:46 np0005548731 nova_compute[232433]: 2025-12-06 07:38:46.800 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 2de097e3-8182-48e5-b69d-88acbfb84e66] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: resize_finish, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec  6 02:38:46 np0005548731 nova_compute[232433]: 2025-12-06 07:38:46.820 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 2de097e3-8182-48e5-b69d-88acbfb84e66] During sync_power_state the instance has a pending task (resize_finish). Skip.
Dec  6 02:38:46 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:38:46 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:38:46 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:38:46.863 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:38:47 np0005548731 nova_compute[232433]: 2025-12-06 07:38:47.421 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:38:48 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:38:48 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:38:48 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:38:48.078 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:38:48 np0005548731 nova_compute[232433]: 2025-12-06 07:38:48.133 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:38:48 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:38:48 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:38:48 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:38:48 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:38:48.866 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:38:49 np0005548731 ceph-mon[77458]: log_channel(cluster) log [INF] : mon.compute-2 calling monitor election
Dec  6 02:38:49 np0005548731 ceph-mon[77458]: paxos.1).electionLogic(60) init, last seen epoch 60
Dec  6 02:38:49 np0005548731 ceph-mon[77458]: mon.compute-2@1(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec  6 02:38:49 np0005548731 ceph-mon[77458]: mon.compute-2@1(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec  6 02:38:50 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:38:50 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:38:50 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:38:50.080 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:38:50 np0005548731 nova_compute[232433]: 2025-12-06 07:38:50.105 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:38:50 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec  6 02:38:50 np0005548731 nova_compute[232433]: 2025-12-06 07:38:50.779 232437 DEBUG nova.network.neutron [None req-f66d8321-66e9-4697-a3c7-65e56ce6abba a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] [instance: 2de097e3-8182-48e5-b69d-88acbfb84e66] Port 8690867c-c0a8-4574-b54f-38486691e339 binding to destination host compute-2.ctlplane.example.com is already ACTIVE migrate_instance_start /usr/lib/python3.9/site-packages/nova/network/neutron.py:3171#033[00m
Dec  6 02:38:50 np0005548731 nova_compute[232433]: 2025-12-06 07:38:50.780 232437 DEBUG oslo_concurrency.lockutils [None req-f66d8321-66e9-4697-a3c7-65e56ce6abba a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Acquiring lock "refresh_cache-2de097e3-8182-48e5-b69d-88acbfb84e66" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 02:38:50 np0005548731 nova_compute[232433]: 2025-12-06 07:38:50.780 232437 DEBUG oslo_concurrency.lockutils [None req-f66d8321-66e9-4697-a3c7-65e56ce6abba a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Acquired lock "refresh_cache-2de097e3-8182-48e5-b69d-88acbfb84e66" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 02:38:50 np0005548731 nova_compute[232433]: 2025-12-06 07:38:50.780 232437 DEBUG nova.network.neutron [None req-f66d8321-66e9-4697-a3c7-65e56ce6abba a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] [instance: 2de097e3-8182-48e5-b69d-88acbfb84e66] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec  6 02:38:50 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:38:50 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:38:50 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:38:50.868 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:38:52 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:38:52 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:38:52 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:38:52.082 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:38:52 np0005548731 nova_compute[232433]: 2025-12-06 07:38:52.476 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:38:52 np0005548731 ovn_controller[133927]: 2025-12-06T07:38:52Z|00596|memory_trim|INFO|Detected inactivity (last active 30004 ms ago): trimming memory
Dec  6 02:38:52 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:38:52 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 02:38:52 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:38:52.871 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 02:38:52 np0005548731 ceph-mon[77458]: mon.compute-0 calling monitor election
Dec  6 02:38:52 np0005548731 ceph-mon[77458]: mon.compute-2 calling monitor election
Dec  6 02:38:52 np0005548731 ceph-mon[77458]: mon.compute-0 is new leader, mons compute-0,compute-2,compute-1 in quorum (ranks 0,1,2)
Dec  6 02:38:52 np0005548731 ceph-mon[77458]: Health check cleared: MON_DOWN (was: 1/3 mons down, quorum compute-0,compute-2)
Dec  6 02:38:52 np0005548731 ceph-mon[77458]: Cluster is now healthy
Dec  6 02:38:53 np0005548731 nova_compute[232433]: 2025-12-06 07:38:53.028 232437 DEBUG nova.network.neutron [None req-f66d8321-66e9-4697-a3c7-65e56ce6abba a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] [instance: 2de097e3-8182-48e5-b69d-88acbfb84e66] Updating instance_info_cache with network_info: [{"id": "8690867c-c0a8-4574-b54f-38486691e339", "address": "fa:16:3e:4a:1f:3e", "network": {"id": "3beede49-1cbb-425c-b1af-82f43dc57163", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-619240463-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.206", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b10aa03d68eb4d4799d53538521cc364", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8690867c-c0", "ovs_interfaceid": "8690867c-c0a8-4574-b54f-38486691e339", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 02:38:53 np0005548731 nova_compute[232433]: 2025-12-06 07:38:53.073 232437 DEBUG oslo_concurrency.lockutils [None req-f66d8321-66e9-4697-a3c7-65e56ce6abba a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Releasing lock "refresh_cache-2de097e3-8182-48e5-b69d-88acbfb84e66" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 02:38:53 np0005548731 nova_compute[232433]: 2025-12-06 07:38:53.118 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:38:53 np0005548731 nova_compute[232433]: 2025-12-06 07:38:53.118 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Dec  6 02:38:53 np0005548731 nova_compute[232433]: 2025-12-06 07:38:53.134 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:38:53 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:38:54 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:38:54 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:38:54 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:38:54.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:38:54 np0005548731 kernel: tap8690867c-c0 (unregistering): left promiscuous mode
Dec  6 02:38:54 np0005548731 NetworkManager[49182]: <info>  [1765006734.3860] device (tap8690867c-c0): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec  6 02:38:54 np0005548731 ovn_controller[133927]: 2025-12-06T07:38:54Z|00597|binding|INFO|Releasing lport 8690867c-c0a8-4574-b54f-38486691e339 from this chassis (sb_readonly=0)
Dec  6 02:38:54 np0005548731 ovn_controller[133927]: 2025-12-06T07:38:54Z|00598|binding|INFO|Setting lport 8690867c-c0a8-4574-b54f-38486691e339 down in Southbound
Dec  6 02:38:54 np0005548731 nova_compute[232433]: 2025-12-06 07:38:54.394 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:38:54 np0005548731 ovn_controller[133927]: 2025-12-06T07:38:54Z|00599|binding|INFO|Removing iface tap8690867c-c0 ovn-installed in OVS
Dec  6 02:38:54 np0005548731 nova_compute[232433]: 2025-12-06 07:38:54.396 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:38:54 np0005548731 nova_compute[232433]: 2025-12-06 07:38:54.409 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:38:54 np0005548731 systemd[1]: machine-qemu\x2d61\x2dinstance\x2d00000083.scope: Deactivated successfully.
Dec  6 02:38:54 np0005548731 systemd[1]: machine-qemu\x2d61\x2dinstance\x2d00000083.scope: Consumed 7.018s CPU time.
Dec  6 02:38:54 np0005548731 systemd-machined[195355]: Machine qemu-61-instance-00000083 terminated.
Dec  6 02:38:54 np0005548731 ceph-mon[77458]: mon.compute-1 calling monitor election
Dec  6 02:38:54 np0005548731 ceph-mon[77458]: overall HEALTH_OK
Dec  6 02:38:54 np0005548731 nova_compute[232433]: 2025-12-06 07:38:54.534 232437 INFO nova.virt.libvirt.driver [-] [instance: 2de097e3-8182-48e5-b69d-88acbfb84e66] Instance destroyed successfully.#033[00m
Dec  6 02:38:54 np0005548731 nova_compute[232433]: 2025-12-06 07:38:54.535 232437 DEBUG nova.objects.instance [None req-f66d8321-66e9-4697-a3c7-65e56ce6abba a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Lazy-loading 'resources' on Instance uuid 2de097e3-8182-48e5-b69d-88acbfb84e66 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:38:54 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:38:54 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:38:54 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:38:54.875 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:38:55 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:38:55.278 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:4a:1f:3e 10.100.0.11'], port_security=['fa:16:3e:4a:1f:3e 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '2de097e3-8182-48e5-b69d-88acbfb84e66', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3beede49-1cbb-425c-b1af-82f43dc57163', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b10aa03d68eb4d4799d53538521cc364', 'neutron:revision_number': '8', 'neutron:security_group_ids': 'd7c24a87-3909-4046-b7ee-0c4e77c9cc98', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.206', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f4f51045-db64-4b9b-8a34-a3c617e616e7, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], logical_port=8690867c-c0a8-4574-b54f-38486691e339) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 02:38:55 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:38:55.280 143965 INFO neutron.agent.ovn.metadata.agent [-] Port 8690867c-c0a8-4574-b54f-38486691e339 in datapath 3beede49-1cbb-425c-b1af-82f43dc57163 unbound from our chassis#033[00m
Dec  6 02:38:55 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:38:55.282 143965 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 3beede49-1cbb-425c-b1af-82f43dc57163, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec  6 02:38:55 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:38:55.283 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[9c96b189-6246-4a7f-b834-c2bcf2379011]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:38:55 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:38:55.284 143965 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-3beede49-1cbb-425c-b1af-82f43dc57163 namespace which is not needed anymore#033[00m
Dec  6 02:38:55 np0005548731 nova_compute[232433]: 2025-12-06 07:38:55.295 232437 DEBUG nova.virt.libvirt.vif [None req-f66d8321-66e9-4697-a3c7-65e56ce6abba a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T07:36:50Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-1439654070',display_name='tempest-ServerActionsTestOtherB-server-1439654070',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-1439654070',id=131,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNN9jQYM4kD1mTnBw0NDX39Zbdx9ux1HYR8eIQywEVZjFzFLOofd0KCZoZVTNe73or3BwcctNg+QkLYSKwQ/ud2tRwFgp+UoYWDz3YSx64mxFih1G20CdOLvEJ79lvWoOg==',key_name='tempest-keypair-1961317761',keypairs=<?>,launch_index=0,launched_at=2025-12-06T07:38:46Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=<?>,new_flavor=Flavor(2),node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=Flavor(1),os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='b10aa03d68eb4d4799d53538521cc364',ramdisk_id='',reservation_id='r-i11mga9l',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-ServerActionsTestOtherB-874907570',owner_user_name='tempest-ServerActionsTestOtherB-874907570-project-member'},tags=<?>,task_state='resize_reverting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T07:38:46Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='a70f6c3c5e2c402bb6fa0e0507e9b6dc',uuid=2de097e3-8182-48e5-b69d-88acbfb84e66,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='resized') vif={"id": "8690867c-c0a8-4574-b54f-38486691e339", "address": "fa:16:3e:4a:1f:3e", "network": {"id": "3beede49-1cbb-425c-b1af-82f43dc57163", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-619240463-network", "subnets": 
[{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.206", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b10aa03d68eb4d4799d53538521cc364", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8690867c-c0", "ovs_interfaceid": "8690867c-c0a8-4574-b54f-38486691e339", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec  6 02:38:55 np0005548731 nova_compute[232433]: 2025-12-06 07:38:55.295 232437 DEBUG nova.network.os_vif_util [None req-f66d8321-66e9-4697-a3c7-65e56ce6abba a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Converting VIF {"id": "8690867c-c0a8-4574-b54f-38486691e339", "address": "fa:16:3e:4a:1f:3e", "network": {"id": "3beede49-1cbb-425c-b1af-82f43dc57163", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-619240463-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.206", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b10aa03d68eb4d4799d53538521cc364", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8690867c-c0", "ovs_interfaceid": "8690867c-c0a8-4574-b54f-38486691e339", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 02:38:55 np0005548731 nova_compute[232433]: 2025-12-06 07:38:55.296 232437 DEBUG nova.network.os_vif_util [None req-f66d8321-66e9-4697-a3c7-65e56ce6abba a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:4a:1f:3e,bridge_name='br-int',has_traffic_filtering=True,id=8690867c-c0a8-4574-b54f-38486691e339,network=Network(3beede49-1cbb-425c-b1af-82f43dc57163),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8690867c-c0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 02:38:55 np0005548731 nova_compute[232433]: 2025-12-06 07:38:55.296 232437 DEBUG os_vif [None req-f66d8321-66e9-4697-a3c7-65e56ce6abba a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:4a:1f:3e,bridge_name='br-int',has_traffic_filtering=True,id=8690867c-c0a8-4574-b54f-38486691e339,network=Network(3beede49-1cbb-425c-b1af-82f43dc57163),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8690867c-c0') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec  6 02:38:55 np0005548731 nova_compute[232433]: 2025-12-06 07:38:55.297 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:38:55 np0005548731 nova_compute[232433]: 2025-12-06 07:38:55.297 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8690867c-c0, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:38:55 np0005548731 nova_compute[232433]: 2025-12-06 07:38:55.300 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  6 02:38:55 np0005548731 nova_compute[232433]: 2025-12-06 07:38:55.302 232437 INFO os_vif [None req-f66d8321-66e9-4697-a3c7-65e56ce6abba a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:4a:1f:3e,bridge_name='br-int',has_traffic_filtering=True,id=8690867c-c0a8-4574-b54f-38486691e339,network=Network(3beede49-1cbb-425c-b1af-82f43dc57163),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8690867c-c0')#033[00m
Dec  6 02:38:55 np0005548731 neutron-haproxy-ovnmeta-3beede49-1cbb-425c-b1af-82f43dc57163[290295]: [NOTICE]   (290299) : haproxy version is 2.8.14-c23fe91
Dec  6 02:38:55 np0005548731 neutron-haproxy-ovnmeta-3beede49-1cbb-425c-b1af-82f43dc57163[290295]: [NOTICE]   (290299) : path to executable is /usr/sbin/haproxy
Dec  6 02:38:55 np0005548731 neutron-haproxy-ovnmeta-3beede49-1cbb-425c-b1af-82f43dc57163[290295]: [WARNING]  (290299) : Exiting Master process...
Dec  6 02:38:55 np0005548731 neutron-haproxy-ovnmeta-3beede49-1cbb-425c-b1af-82f43dc57163[290295]: [WARNING]  (290299) : Exiting Master process...
Dec  6 02:38:55 np0005548731 neutron-haproxy-ovnmeta-3beede49-1cbb-425c-b1af-82f43dc57163[290295]: [ALERT]    (290299) : Current worker (290301) exited with code 143 (Terminated)
Dec  6 02:38:55 np0005548731 neutron-haproxy-ovnmeta-3beede49-1cbb-425c-b1af-82f43dc57163[290295]: [WARNING]  (290299) : All workers exited. Exiting... (0)
Dec  6 02:38:55 np0005548731 systemd[1]: libpod-bbf297eefcb3ce5e910a2dfe07447a3120fa0269da7cd8454e5af8744b645fc7.scope: Deactivated successfully.
Dec  6 02:38:55 np0005548731 podman[290680]: 2025-12-06 07:38:55.417037551 +0000 UTC m=+0.042771403 container died bbf297eefcb3ce5e910a2dfe07447a3120fa0269da7cd8454e5af8744b645fc7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3beede49-1cbb-425c-b1af-82f43dc57163, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  6 02:38:55 np0005548731 systemd[1]: var-lib-containers-storage-overlay-f9e9f4b8f6570e22bf94b7bf20b12889022af4fce1136c67b3449e02d6fdc985-merged.mount: Deactivated successfully.
Dec  6 02:38:55 np0005548731 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-bbf297eefcb3ce5e910a2dfe07447a3120fa0269da7cd8454e5af8744b645fc7-userdata-shm.mount: Deactivated successfully.
Dec  6 02:38:55 np0005548731 podman[290680]: 2025-12-06 07:38:55.452900725 +0000 UTC m=+0.078634577 container cleanup bbf297eefcb3ce5e910a2dfe07447a3120fa0269da7cd8454e5af8744b645fc7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3beede49-1cbb-425c-b1af-82f43dc57163, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  6 02:38:55 np0005548731 systemd[1]: libpod-conmon-bbf297eefcb3ce5e910a2dfe07447a3120fa0269da7cd8454e5af8744b645fc7.scope: Deactivated successfully.
Dec  6 02:38:55 np0005548731 podman[290709]: 2025-12-06 07:38:55.512814625 +0000 UTC m=+0.040480648 container remove bbf297eefcb3ce5e910a2dfe07447a3120fa0269da7cd8454e5af8744b645fc7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3beede49-1cbb-425c-b1af-82f43dc57163, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Dec  6 02:38:55 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:38:55.518 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[f330b5de-8ae6-4c02-80bb-4dce3f9e1c08]: (4, ('Sat Dec  6 07:38:55 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-3beede49-1cbb-425c-b1af-82f43dc57163 (bbf297eefcb3ce5e910a2dfe07447a3120fa0269da7cd8454e5af8744b645fc7)\nbbf297eefcb3ce5e910a2dfe07447a3120fa0269da7cd8454e5af8744b645fc7\nSat Dec  6 07:38:55 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-3beede49-1cbb-425c-b1af-82f43dc57163 (bbf297eefcb3ce5e910a2dfe07447a3120fa0269da7cd8454e5af8744b645fc7)\nbbf297eefcb3ce5e910a2dfe07447a3120fa0269da7cd8454e5af8744b645fc7\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:38:55 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:38:55.520 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[9bdfd2e8-76fc-4896-8e5e-b7bbab5896f1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:38:55 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:38:55.521 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3beede49-10, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:38:55 np0005548731 nova_compute[232433]: 2025-12-06 07:38:55.522 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:38:55 np0005548731 kernel: tap3beede49-10: left promiscuous mode
Dec  6 02:38:55 np0005548731 nova_compute[232433]: 2025-12-06 07:38:55.536 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:38:55 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:38:55.539 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[c31942b2-b6c1-4366-a95a-d0785f7127b1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:38:55 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:38:55.556 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[823c7995-713a-4bee-a37f-58c4f4c697b7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:38:55 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:38:55.557 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[3ce0a01b-dbea-4208-a0ed-40f566b1f081]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:38:55 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:38:55.574 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[91f2d38a-10d3-4ec1-b124-c0cc9bb6e8a5]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 696458, 'reachable_time': 30998, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 290725, 'error': None, 'target': 'ovnmeta-3beede49-1cbb-425c-b1af-82f43dc57163', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:38:55 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:38:55.577 144079 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-3beede49-1cbb-425c-b1af-82f43dc57163 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Dec  6 02:38:55 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:38:55.577 144079 DEBUG oslo.privsep.daemon [-] privsep: reply[44ddc0a5-adcb-4e8a-9c01-5bcc397fd7aa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:38:55 np0005548731 systemd[1]: run-netns-ovnmeta\x2d3beede49\x2d1cbb\x2d425c\x2db1af\x2d82f43dc57163.mount: Deactivated successfully.
Dec  6 02:38:56 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:38:56 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 02:38:56 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:38:56.086 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 02:38:56 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:38:56 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:38:56 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:38:56.877 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:38:57 np0005548731 nova_compute[232433]: 2025-12-06 07:38:57.237 232437 DEBUG oslo_concurrency.lockutils [None req-f66d8321-66e9-4697-a3c7-65e56ce6abba a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_dest" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:38:57 np0005548731 nova_compute[232433]: 2025-12-06 07:38:57.237 232437 DEBUG oslo_concurrency.lockutils [None req-f66d8321-66e9-4697-a3c7-65e56ce6abba a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_dest" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:38:57 np0005548731 nova_compute[232433]: 2025-12-06 07:38:57.288 232437 DEBUG nova.objects.instance [None req-f66d8321-66e9-4697-a3c7-65e56ce6abba a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Lazy-loading 'migration_context' on Instance uuid 2de097e3-8182-48e5-b69d-88acbfb84e66 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:38:57 np0005548731 nova_compute[232433]: 2025-12-06 07:38:57.477 232437 DEBUG nova.compute.manager [req-318c534d-e77d-4de2-b068-d42c6ed76f00 req-cb7a7dcf-75e6-4de5-8e9d-53b39a63255f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 2de097e3-8182-48e5-b69d-88acbfb84e66] Received event network-vif-unplugged-8690867c-c0a8-4574-b54f-38486691e339 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:38:57 np0005548731 nova_compute[232433]: 2025-12-06 07:38:57.478 232437 DEBUG oslo_concurrency.lockutils [req-318c534d-e77d-4de2-b068-d42c6ed76f00 req-cb7a7dcf-75e6-4de5-8e9d-53b39a63255f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "2de097e3-8182-48e5-b69d-88acbfb84e66-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:38:57 np0005548731 nova_compute[232433]: 2025-12-06 07:38:57.478 232437 DEBUG oslo_concurrency.lockutils [req-318c534d-e77d-4de2-b068-d42c6ed76f00 req-cb7a7dcf-75e6-4de5-8e9d-53b39a63255f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "2de097e3-8182-48e5-b69d-88acbfb84e66-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:38:57 np0005548731 nova_compute[232433]: 2025-12-06 07:38:57.478 232437 DEBUG oslo_concurrency.lockutils [req-318c534d-e77d-4de2-b068-d42c6ed76f00 req-cb7a7dcf-75e6-4de5-8e9d-53b39a63255f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "2de097e3-8182-48e5-b69d-88acbfb84e66-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:38:57 np0005548731 nova_compute[232433]: 2025-12-06 07:38:57.479 232437 DEBUG nova.compute.manager [req-318c534d-e77d-4de2-b068-d42c6ed76f00 req-cb7a7dcf-75e6-4de5-8e9d-53b39a63255f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 2de097e3-8182-48e5-b69d-88acbfb84e66] No waiting events found dispatching network-vif-unplugged-8690867c-c0a8-4574-b54f-38486691e339 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 02:38:57 np0005548731 nova_compute[232433]: 2025-12-06 07:38:57.479 232437 WARNING nova.compute.manager [req-318c534d-e77d-4de2-b068-d42c6ed76f00 req-cb7a7dcf-75e6-4de5-8e9d-53b39a63255f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 2de097e3-8182-48e5-b69d-88acbfb84e66] Received unexpected event network-vif-unplugged-8690867c-c0a8-4574-b54f-38486691e339 for instance with vm_state resized and task_state resize_reverting.#033[00m
Dec  6 02:38:57 np0005548731 nova_compute[232433]: 2025-12-06 07:38:57.673 232437 DEBUG oslo_concurrency.processutils [None req-f66d8321-66e9-4697-a3c7-65e56ce6abba a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:38:58 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:38:58 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:38:58 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:38:58.088 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:38:58 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 02:38:58 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1863463196' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 02:38:58 np0005548731 nova_compute[232433]: 2025-12-06 07:38:58.136 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:38:58 np0005548731 nova_compute[232433]: 2025-12-06 07:38:58.140 232437 DEBUG oslo_concurrency.processutils [None req-f66d8321-66e9-4697-a3c7-65e56ce6abba a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.467s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:38:58 np0005548731 nova_compute[232433]: 2025-12-06 07:38:58.146 232437 DEBUG nova.compute.provider_tree [None req-f66d8321-66e9-4697-a3c7-65e56ce6abba a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Inventory has not changed in ProviderTree for provider: 6d00757a-082f-486d-ae84-869a2ba2e6e7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  6 02:38:58 np0005548731 ovn_controller[133927]: 2025-12-06T07:38:58Z|00086|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:d3:0b:9b 10.100.0.7
Dec  6 02:38:58 np0005548731 ovn_controller[133927]: 2025-12-06T07:38:58Z|00087|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:d3:0b:9b 10.100.0.7
Dec  6 02:38:58 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:38:58 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:38:58 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:38:58 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:38:58.879 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:38:59 np0005548731 nova_compute[232433]: 2025-12-06 07:38:59.892 232437 DEBUG nova.scheduler.client.report [None req-f66d8321-66e9-4697-a3c7-65e56ce6abba a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Inventory has not changed for provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  6 02:38:59 np0005548731 podman[290800]: 2025-12-06 07:38:59.912653389 +0000 UTC m=+0.067491785 container health_status 26c2ce9bdb83c6db5ccad7f5b2a7cb4e9354ab0235ad2f942ea42c8feda2142b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, 
tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec  6 02:38:59 np0005548731 podman[290802]: 2025-12-06 07:38:59.921364871 +0000 UTC m=+0.072121988 container health_status ae4fd08cdec31d6ae2a6b91ffff7deb6ca5a8375cb52ee4b685cc6f25796824e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Dec  6 02:38:59 np0005548731 nova_compute[232433]: 2025-12-06 07:38:59.986 232437 DEBUG oslo_concurrency.lockutils [None req-f66d8321-66e9-4697-a3c7-65e56ce6abba a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_dest" :: held 2.748s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:39:00 np0005548731 podman[290801]: 2025-12-06 07:39:00.002294623 +0000 UTC m=+0.147450233 container health_status a118c3becf56cfbcae208065771ebf10a65162bb7b96389971293e534823fb78 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Dec  6 02:39:00 np0005548731 nova_compute[232433]: 2025-12-06 07:39:00.027 232437 DEBUG nova.compute.manager [req-22459016-4ac9-496a-8e40-46c13e627dae req-b481a38a-02a9-4f00-b94f-8e24ca387fba 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 2de097e3-8182-48e5-b69d-88acbfb84e66] Received event network-vif-plugged-8690867c-c0a8-4574-b54f-38486691e339 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:39:00 np0005548731 nova_compute[232433]: 2025-12-06 07:39:00.027 232437 DEBUG oslo_concurrency.lockutils [req-22459016-4ac9-496a-8e40-46c13e627dae req-b481a38a-02a9-4f00-b94f-8e24ca387fba 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "2de097e3-8182-48e5-b69d-88acbfb84e66-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:39:00 np0005548731 nova_compute[232433]: 2025-12-06 07:39:00.028 232437 DEBUG oslo_concurrency.lockutils [req-22459016-4ac9-496a-8e40-46c13e627dae req-b481a38a-02a9-4f00-b94f-8e24ca387fba 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "2de097e3-8182-48e5-b69d-88acbfb84e66-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:39:00 np0005548731 nova_compute[232433]: 2025-12-06 07:39:00.028 232437 DEBUG oslo_concurrency.lockutils [req-22459016-4ac9-496a-8e40-46c13e627dae req-b481a38a-02a9-4f00-b94f-8e24ca387fba 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "2de097e3-8182-48e5-b69d-88acbfb84e66-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:39:00 np0005548731 nova_compute[232433]: 2025-12-06 07:39:00.028 232437 DEBUG nova.compute.manager [req-22459016-4ac9-496a-8e40-46c13e627dae req-b481a38a-02a9-4f00-b94f-8e24ca387fba 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 2de097e3-8182-48e5-b69d-88acbfb84e66] No waiting events found dispatching network-vif-plugged-8690867c-c0a8-4574-b54f-38486691e339 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 02:39:00 np0005548731 nova_compute[232433]: 2025-12-06 07:39:00.028 232437 WARNING nova.compute.manager [req-22459016-4ac9-496a-8e40-46c13e627dae req-b481a38a-02a9-4f00-b94f-8e24ca387fba 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 2de097e3-8182-48e5-b69d-88acbfb84e66] Received unexpected event network-vif-plugged-8690867c-c0a8-4574-b54f-38486691e339 for instance with vm_state resized and task_state resize_reverting.#033[00m
Dec  6 02:39:00 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:39:00 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:39:00 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:39:00.090 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:39:00 np0005548731 nova_compute[232433]: 2025-12-06 07:39:00.299 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:39:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:39:00.880 143965 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:39:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:39:00.880 143965 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:39:00 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:39:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:39:00.882 143965 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:39:00 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:39:00 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:39:00.881 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:39:02 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:39:02 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:39:02 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:39:02.092 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:39:02 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:39:02 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:39:02 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:39:02.884 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:39:03 np0005548731 nova_compute[232433]: 2025-12-06 07:39:03.138 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:39:03 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:39:04 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:39:04 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:39:04 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:39:04.094 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:39:04 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:39:04 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:39:04 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:39:04.887 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:39:04 np0005548731 nova_compute[232433]: 2025-12-06 07:39:04.910 232437 DEBUG nova.compute.manager [req-36499926-4c9f-4b02-a46f-4ff6e485ad8e req-2a2142da-b068-4ce9-92f3-d03dcc19d005 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 2de097e3-8182-48e5-b69d-88acbfb84e66] Received event network-changed-8690867c-c0a8-4574-b54f-38486691e339 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:39:04 np0005548731 nova_compute[232433]: 2025-12-06 07:39:04.911 232437 DEBUG nova.compute.manager [req-36499926-4c9f-4b02-a46f-4ff6e485ad8e req-2a2142da-b068-4ce9-92f3-d03dcc19d005 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 2de097e3-8182-48e5-b69d-88acbfb84e66] Refreshing instance network info cache due to event network-changed-8690867c-c0a8-4574-b54f-38486691e339. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  6 02:39:04 np0005548731 nova_compute[232433]: 2025-12-06 07:39:04.911 232437 DEBUG oslo_concurrency.lockutils [req-36499926-4c9f-4b02-a46f-4ff6e485ad8e req-2a2142da-b068-4ce9-92f3-d03dcc19d005 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-2de097e3-8182-48e5-b69d-88acbfb84e66" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 02:39:04 np0005548731 nova_compute[232433]: 2025-12-06 07:39:04.911 232437 DEBUG oslo_concurrency.lockutils [req-36499926-4c9f-4b02-a46f-4ff6e485ad8e req-2a2142da-b068-4ce9-92f3-d03dcc19d005 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-2de097e3-8182-48e5-b69d-88acbfb84e66" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 02:39:04 np0005548731 nova_compute[232433]: 2025-12-06 07:39:04.911 232437 DEBUG nova.network.neutron [req-36499926-4c9f-4b02-a46f-4ff6e485ad8e req-2a2142da-b068-4ce9-92f3-d03dcc19d005 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 2de097e3-8182-48e5-b69d-88acbfb84e66] Refreshing network info cache for port 8690867c-c0a8-4574-b54f-38486691e339 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  6 02:39:05 np0005548731 nova_compute[232433]: 2025-12-06 07:39:05.301 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:39:06 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:39:06 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 02:39:06 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:39:06.095 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 02:39:06 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:39:06 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:39:06 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:39:06.890 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:39:08 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:39:08 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:39:08 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:39:08.097 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:39:08 np0005548731 nova_compute[232433]: 2025-12-06 07:39:08.140 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:39:08 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:39:08 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:39:08 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 02:39:08 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:39:08.892 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 02:39:08 np0005548731 nova_compute[232433]: 2025-12-06 07:39:08.976 232437 DEBUG oslo_concurrency.lockutils [None req-b4986752-70ed-481a-b271-ae862310c404 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Acquiring lock "db714956-5ece-4543-8db8-4634df66962e" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:39:08 np0005548731 nova_compute[232433]: 2025-12-06 07:39:08.976 232437 DEBUG oslo_concurrency.lockutils [None req-b4986752-70ed-481a-b271-ae862310c404 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Lock "db714956-5ece-4543-8db8-4634df66962e" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:39:09 np0005548731 nova_compute[232433]: 2025-12-06 07:39:09.004 232437 DEBUG nova.compute.manager [None req-b4986752-70ed-481a-b271-ae862310c404 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] [instance: db714956-5ece-4543-8db8-4634df66962e] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Dec  6 02:39:09 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  6 02:39:09 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/518087762' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  6 02:39:09 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  6 02:39:09 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/518087762' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  6 02:39:09 np0005548731 nova_compute[232433]: 2025-12-06 07:39:09.146 232437 DEBUG oslo_concurrency.lockutils [None req-b4986752-70ed-481a-b271-ae862310c404 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:39:09 np0005548731 nova_compute[232433]: 2025-12-06 07:39:09.147 232437 DEBUG oslo_concurrency.lockutils [None req-b4986752-70ed-481a-b271-ae862310c404 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:39:09 np0005548731 nova_compute[232433]: 2025-12-06 07:39:09.160 232437 DEBUG nova.virt.hardware [None req-b4986752-70ed-481a-b271-ae862310c404 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Dec  6 02:39:09 np0005548731 nova_compute[232433]: 2025-12-06 07:39:09.160 232437 INFO nova.compute.claims [None req-b4986752-70ed-481a-b271-ae862310c404 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] [instance: db714956-5ece-4543-8db8-4634df66962e] Claim successful on node compute-2.ctlplane.example.com#033[00m
Dec  6 02:39:09 np0005548731 nova_compute[232433]: 2025-12-06 07:39:09.532 232437 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765006734.5304716, 2de097e3-8182-48e5-b69d-88acbfb84e66 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 02:39:09 np0005548731 nova_compute[232433]: 2025-12-06 07:39:09.532 232437 INFO nova.compute.manager [-] [instance: 2de097e3-8182-48e5-b69d-88acbfb84e66] VM Stopped (Lifecycle Event)#033[00m
Dec  6 02:39:10 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:39:10 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:39:10 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:39:10.099 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:39:10 np0005548731 nova_compute[232433]: 2025-12-06 07:39:10.305 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:39:10 np0005548731 nova_compute[232433]: 2025-12-06 07:39:10.790 232437 DEBUG nova.network.neutron [req-36499926-4c9f-4b02-a46f-4ff6e485ad8e req-2a2142da-b068-4ce9-92f3-d03dcc19d005 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 2de097e3-8182-48e5-b69d-88acbfb84e66] Updated VIF entry in instance network info cache for port 8690867c-c0a8-4574-b54f-38486691e339. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  6 02:39:10 np0005548731 nova_compute[232433]: 2025-12-06 07:39:10.791 232437 DEBUG nova.network.neutron [req-36499926-4c9f-4b02-a46f-4ff6e485ad8e req-2a2142da-b068-4ce9-92f3-d03dcc19d005 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 2de097e3-8182-48e5-b69d-88acbfb84e66] Updating instance_info_cache with network_info: [{"id": "8690867c-c0a8-4574-b54f-38486691e339", "address": "fa:16:3e:4a:1f:3e", "network": {"id": "3beede49-1cbb-425c-b1af-82f43dc57163", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-619240463-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.206", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b10aa03d68eb4d4799d53538521cc364", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8690867c-c0", "ovs_interfaceid": "8690867c-c0a8-4574-b54f-38486691e339", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 02:39:10 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:39:10 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:39:10 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:39:10.896 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:39:11 np0005548731 nova_compute[232433]: 2025-12-06 07:39:11.677 232437 DEBUG nova.compute.manager [None req-d654ca99-d4b2-4f0a-901f-b7bf08c0b7d7 - - - - - -] [instance: 2de097e3-8182-48e5-b69d-88acbfb84e66] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:39:11 np0005548731 nova_compute[232433]: 2025-12-06 07:39:11.734 232437 DEBUG oslo_concurrency.lockutils [req-36499926-4c9f-4b02-a46f-4ff6e485ad8e req-2a2142da-b068-4ce9-92f3-d03dcc19d005 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-2de097e3-8182-48e5-b69d-88acbfb84e66" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 02:39:11 np0005548731 nova_compute[232433]: 2025-12-06 07:39:11.930 232437 DEBUG oslo_concurrency.processutils [None req-b4986752-70ed-481a-b271-ae862310c404 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:39:12 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:39:12 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:39:12 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:39:12.101 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:39:12 np0005548731 nova_compute[232433]: 2025-12-06 07:39:12.222 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:39:12 np0005548731 nova_compute[232433]: 2025-12-06 07:39:12.223 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:39:12 np0005548731 nova_compute[232433]: 2025-12-06 07:39:12.223 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  6 02:39:12 np0005548731 nova_compute[232433]: 2025-12-06 07:39:12.223 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  6 02:39:12 np0005548731 nova_compute[232433]: 2025-12-06 07:39:12.276 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] [instance: db714956-5ece-4543-8db8-4634df66962e] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Dec  6 02:39:12 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 02:39:12 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/896338344' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 02:39:12 np0005548731 nova_compute[232433]: 2025-12-06 07:39:12.380 232437 DEBUG oslo_concurrency.processutils [None req-b4986752-70ed-481a-b271-ae862310c404 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.450s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:39:12 np0005548731 nova_compute[232433]: 2025-12-06 07:39:12.386 232437 DEBUG nova.compute.provider_tree [None req-b4986752-70ed-481a-b271-ae862310c404 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Inventory has not changed in ProviderTree for provider: 6d00757a-082f-486d-ae84-869a2ba2e6e7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  6 02:39:12 np0005548731 nova_compute[232433]: 2025-12-06 07:39:12.412 232437 DEBUG nova.scheduler.client.report [None req-b4986752-70ed-481a-b271-ae862310c404 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Inventory has not changed for provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  6 02:39:12 np0005548731 nova_compute[232433]: 2025-12-06 07:39:12.437 232437 DEBUG oslo_concurrency.lockutils [None req-b4986752-70ed-481a-b271-ae862310c404 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 3.290s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:39:12 np0005548731 nova_compute[232433]: 2025-12-06 07:39:12.437 232437 DEBUG nova.compute.manager [None req-b4986752-70ed-481a-b271-ae862310c404 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] [instance: db714956-5ece-4543-8db8-4634df66962e] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Dec  6 02:39:12 np0005548731 nova_compute[232433]: 2025-12-06 07:39:12.596 232437 DEBUG nova.compute.manager [None req-b4986752-70ed-481a-b271-ae862310c404 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] [instance: db714956-5ece-4543-8db8-4634df66962e] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Dec  6 02:39:12 np0005548731 nova_compute[232433]: 2025-12-06 07:39:12.596 232437 DEBUG nova.network.neutron [None req-b4986752-70ed-481a-b271-ae862310c404 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] [instance: db714956-5ece-4543-8db8-4634df66962e] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Dec  6 02:39:12 np0005548731 nova_compute[232433]: 2025-12-06 07:39:12.778 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Acquiring lock "refresh_cache-6a50a40c-3b05-4c0e-aa67-1489e203824e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 02:39:12 np0005548731 nova_compute[232433]: 2025-12-06 07:39:12.778 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Acquired lock "refresh_cache-6a50a40c-3b05-4c0e-aa67-1489e203824e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 02:39:12 np0005548731 nova_compute[232433]: 2025-12-06 07:39:12.778 232437 DEBUG nova.network.neutron [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] [instance: 6a50a40c-3b05-4c0e-aa67-1489e203824e] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Dec  6 02:39:12 np0005548731 nova_compute[232433]: 2025-12-06 07:39:12.778 232437 DEBUG nova.objects.instance [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lazy-loading 'info_cache' on Instance uuid 6a50a40c-3b05-4c0e-aa67-1489e203824e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:39:12 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  6 02:39:12 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/846822878' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  6 02:39:12 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:39:12 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:39:12 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:39:12.898 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:39:12 np0005548731 nova_compute[232433]: 2025-12-06 07:39:12.900 232437 INFO nova.virt.libvirt.driver [None req-b4986752-70ed-481a-b271-ae862310c404 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] [instance: db714956-5ece-4543-8db8-4634df66962e] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Dec  6 02:39:12 np0005548731 nova_compute[232433]: 2025-12-06 07:39:12.953 232437 DEBUG nova.policy [None req-b4986752-70ed-481a-b271-ae862310c404 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '605b5481e0c944048e6a67046c30d693', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '833f4cf9f5a64b2ab94c3bf330353a31', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Dec  6 02:39:12 np0005548731 nova_compute[232433]: 2025-12-06 07:39:12.957 232437 DEBUG nova.compute.manager [None req-b4986752-70ed-481a-b271-ae862310c404 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] [instance: db714956-5ece-4543-8db8-4634df66962e] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Dec  6 02:39:13 np0005548731 nova_compute[232433]: 2025-12-06 07:39:13.039 232437 INFO nova.virt.block_device [None req-b4986752-70ed-481a-b271-ae862310c404 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] [instance: db714956-5ece-4543-8db8-4634df66962e] Booting with volume 67ebc469-c3e0-45da-a1b6-9f2891a4f2a7 at /dev/vda#033[00m
Dec  6 02:39:13 np0005548731 nova_compute[232433]: 2025-12-06 07:39:13.141 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:39:13 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:39:13 np0005548731 nova_compute[232433]: 2025-12-06 07:39:13.657 232437 DEBUG os_brick.utils [None req-b4986752-70ed-481a-b271-ae862310c404 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.102', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-2.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Dec  6 02:39:13 np0005548731 nova_compute[232433]: 2025-12-06 07:39:13.659 237736 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:39:13 np0005548731 nova_compute[232433]: 2025-12-06 07:39:13.670 237736 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.011s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:39:13 np0005548731 nova_compute[232433]: 2025-12-06 07:39:13.670 237736 DEBUG oslo.privsep.daemon [-] privsep: reply[1c231bd5-cd6f-42d4-a291-a2955b61804a]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:39:13 np0005548731 nova_compute[232433]: 2025-12-06 07:39:13.671 237736 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:39:13 np0005548731 nova_compute[232433]: 2025-12-06 07:39:13.679 237736 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:39:13 np0005548731 nova_compute[232433]: 2025-12-06 07:39:13.680 237736 DEBUG oslo.privsep.daemon [-] privsep: reply[2f6185ff-1e9f-487c-85d5-10f33bfca8ba]: (4, ('InitiatorName=iqn.1994-05.com.redhat:63778d5959f0', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:39:13 np0005548731 nova_compute[232433]: 2025-12-06 07:39:13.681 237736 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:39:13 np0005548731 nova_compute[232433]: 2025-12-06 07:39:13.689 237736 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.007s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:39:13 np0005548731 nova_compute[232433]: 2025-12-06 07:39:13.689 237736 DEBUG oslo.privsep.daemon [-] privsep: reply[31f98b89-ed5f-491a-9306-2edfb1497a4a]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:39:13 np0005548731 nova_compute[232433]: 2025-12-06 07:39:13.690 237736 DEBUG oslo.privsep.daemon [-] privsep: reply[3b54b4d1-a62a-4af8-a63e-bb0da0a24f20]: (4, 'ec2492e2-e0d1-43d3-b74f-744a8272a0e6') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:39:13 np0005548731 nova_compute[232433]: 2025-12-06 07:39:13.691 232437 DEBUG oslo_concurrency.processutils [None req-b4986752-70ed-481a-b271-ae862310c404 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:39:13 np0005548731 nova_compute[232433]: 2025-12-06 07:39:13.720 232437 DEBUG oslo_concurrency.processutils [None req-b4986752-70ed-481a-b271-ae862310c404 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] CMD "nvme version" returned: 0 in 0.030s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:39:13 np0005548731 nova_compute[232433]: 2025-12-06 07:39:13.723 232437 DEBUG os_brick.initiator.connectors.lightos [None req-b4986752-70ed-481a-b271-ae862310c404 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Dec  6 02:39:13 np0005548731 nova_compute[232433]: 2025-12-06 07:39:13.724 232437 DEBUG os_brick.initiator.connectors.lightos [None req-b4986752-70ed-481a-b271-ae862310c404 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Dec  6 02:39:13 np0005548731 nova_compute[232433]: 2025-12-06 07:39:13.724 232437 DEBUG os_brick.initiator.connectors.lightos [None req-b4986752-70ed-481a-b271-ae862310c404 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:bf3e0a14-a5f8-4123-aa26-e7cad37b879a dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Dec  6 02:39:13 np0005548731 nova_compute[232433]: 2025-12-06 07:39:13.724 232437 DEBUG os_brick.utils [None req-b4986752-70ed-481a-b271-ae862310c404 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] <== get_connector_properties: return (67ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.102', 'host': 'compute-2.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:63778d5959f0', 'do_local_attach': False, 'nvme_hostid': 'bf3e0a14-a5f8-4123-aa26-e7cad37b879a', 'system uuid': 'ec2492e2-e0d1-43d3-b74f-744a8272a0e6', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:bf3e0a14-a5f8-4123-aa26-e7cad37b879a', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Dec  6 02:39:13 np0005548731 nova_compute[232433]: 2025-12-06 07:39:13.725 232437 DEBUG nova.virt.block_device [None req-b4986752-70ed-481a-b271-ae862310c404 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] [instance: db714956-5ece-4543-8db8-4634df66962e] Updating existing volume attachment record: faf88ffd-3691-4171-8a88-e9ae104d0a9e _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Dec  6 02:39:13 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:39:13.749 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=53, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:ec:b3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '32:72:e7:89:e0:7d'}, ipsec=False) old=SB_Global(nb_cfg=52) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 02:39:13 np0005548731 nova_compute[232433]: 2025-12-06 07:39:13.749 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:39:13 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:39:13.750 143965 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec  6 02:39:14 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:39:14 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:39:14 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:39:14.103 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:39:14 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:39:14 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:39:14 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:39:14.901 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:39:15 np0005548731 nova_compute[232433]: 2025-12-06 07:39:15.307 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:39:16 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:39:16 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:39:16 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:39:16.105 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:39:16 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:39:16.752 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=9f96b960-b4f2-40bd-ae99-08121f5e8b78, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '53'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:39:16 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:39:16 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:39:16 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:39:16.902 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:39:17 np0005548731 nova_compute[232433]: 2025-12-06 07:39:17.214 232437 DEBUG nova.network.neutron [None req-b4986752-70ed-481a-b271-ae862310c404 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] [instance: db714956-5ece-4543-8db8-4634df66962e] Successfully created port: 3c6a154e-800f-4ed4-b1a9-f9e0aefe2f24 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Dec  6 02:39:17 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  6 02:39:17 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4080526882' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  6 02:39:18 np0005548731 nova_compute[232433]: 2025-12-06 07:39:18.016 232437 DEBUG nova.network.neutron [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] [instance: 6a50a40c-3b05-4c0e-aa67-1489e203824e] Updating instance_info_cache with network_info: [{"id": "ad544c9a-af55-4a7f-babe-68f6d1b23e25", "address": "fa:16:3e:1f:fb:86", "network": {"id": "5bb49e8a-b939-4c79-851c-62c634be0272", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-50482264-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "833f4cf9f5a64b2ab94c3bf330353a31", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapad544c9a-af", "ovs_interfaceid": "ad544c9a-af55-4a7f-babe-68f6d1b23e25", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 02:39:18 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:39:18 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:39:18 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:39:18.107 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:39:18 np0005548731 nova_compute[232433]: 2025-12-06 07:39:18.142 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:39:18 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:39:18 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:39:18 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:39:18 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:39:18.905 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:39:19 np0005548731 nova_compute[232433]: 2025-12-06 07:39:19.420 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Releasing lock "refresh_cache-6a50a40c-3b05-4c0e-aa67-1489e203824e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 02:39:19 np0005548731 nova_compute[232433]: 2025-12-06 07:39:19.421 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] [instance: 6a50a40c-3b05-4c0e-aa67-1489e203824e] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Dec  6 02:39:19 np0005548731 nova_compute[232433]: 2025-12-06 07:39:19.421 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:39:19 np0005548731 nova_compute[232433]: 2025-12-06 07:39:19.421 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:39:19 np0005548731 nova_compute[232433]: 2025-12-06 07:39:19.422 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:39:19 np0005548731 nova_compute[232433]: 2025-12-06 07:39:19.422 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:39:19 np0005548731 nova_compute[232433]: 2025-12-06 07:39:19.422 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  6 02:39:19 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e308 e308: 3 total, 3 up, 3 in
Dec  6 02:39:19 np0005548731 nova_compute[232433]: 2025-12-06 07:39:19.902 232437 DEBUG nova.compute.manager [None req-b4986752-70ed-481a-b271-ae862310c404 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] [instance: db714956-5ece-4543-8db8-4634df66962e] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Dec  6 02:39:19 np0005548731 nova_compute[232433]: 2025-12-06 07:39:19.904 232437 DEBUG nova.virt.libvirt.driver [None req-b4986752-70ed-481a-b271-ae862310c404 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] [instance: db714956-5ece-4543-8db8-4634df66962e] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Dec  6 02:39:19 np0005548731 nova_compute[232433]: 2025-12-06 07:39:19.905 232437 INFO nova.virt.libvirt.driver [None req-b4986752-70ed-481a-b271-ae862310c404 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] [instance: db714956-5ece-4543-8db8-4634df66962e] Creating image(s)#033[00m
Dec  6 02:39:19 np0005548731 nova_compute[232433]: 2025-12-06 07:39:19.905 232437 DEBUG nova.virt.libvirt.driver [None req-b4986752-70ed-481a-b271-ae862310c404 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] [instance: db714956-5ece-4543-8db8-4634df66962e] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859#033[00m
Dec  6 02:39:19 np0005548731 nova_compute[232433]: 2025-12-06 07:39:19.905 232437 DEBUG nova.virt.libvirt.driver [None req-b4986752-70ed-481a-b271-ae862310c404 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] [instance: db714956-5ece-4543-8db8-4634df66962e] Ensure instance console log exists: /var/lib/nova/instances/db714956-5ece-4543-8db8-4634df66962e/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec  6 02:39:19 np0005548731 nova_compute[232433]: 2025-12-06 07:39:19.906 232437 DEBUG oslo_concurrency.lockutils [None req-b4986752-70ed-481a-b271-ae862310c404 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:39:19 np0005548731 nova_compute[232433]: 2025-12-06 07:39:19.906 232437 DEBUG oslo_concurrency.lockutils [None req-b4986752-70ed-481a-b271-ae862310c404 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:39:19 np0005548731 nova_compute[232433]: 2025-12-06 07:39:19.907 232437 DEBUG oslo_concurrency.lockutils [None req-b4986752-70ed-481a-b271-ae862310c404 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:39:20 np0005548731 nova_compute[232433]: 2025-12-06 07:39:20.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:39:20 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:39:20 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:39:20 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:39:20.109 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:39:20 np0005548731 nova_compute[232433]: 2025-12-06 07:39:20.127 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:39:20 np0005548731 nova_compute[232433]: 2025-12-06 07:39:20.128 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:39:20 np0005548731 nova_compute[232433]: 2025-12-06 07:39:20.128 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:39:20 np0005548731 nova_compute[232433]: 2025-12-06 07:39:20.128 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  6 02:39:20 np0005548731 nova_compute[232433]: 2025-12-06 07:39:20.128 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:39:20 np0005548731 nova_compute[232433]: 2025-12-06 07:39:20.311 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:39:20 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 02:39:20 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3987226000' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 02:39:20 np0005548731 nova_compute[232433]: 2025-12-06 07:39:20.536 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.407s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:39:20 np0005548731 nova_compute[232433]: 2025-12-06 07:39:20.685 232437 DEBUG nova.virt.libvirt.driver [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] skipping disk for instance-0000007f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  6 02:39:20 np0005548731 nova_compute[232433]: 2025-12-06 07:39:20.685 232437 DEBUG nova.virt.libvirt.driver [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] skipping disk for instance-0000007f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  6 02:39:20 np0005548731 nova_compute[232433]: 2025-12-06 07:39:20.688 232437 DEBUG nova.virt.libvirt.driver [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] skipping disk for instance-00000078 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  6 02:39:20 np0005548731 nova_compute[232433]: 2025-12-06 07:39:20.689 232437 DEBUG nova.virt.libvirt.driver [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] skipping disk for instance-00000078 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  6 02:39:20 np0005548731 nova_compute[232433]: 2025-12-06 07:39:20.692 232437 DEBUG nova.virt.libvirt.driver [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] skipping disk for instance-00000085 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  6 02:39:20 np0005548731 nova_compute[232433]: 2025-12-06 07:39:20.692 232437 DEBUG nova.virt.libvirt.driver [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] skipping disk for instance-00000085 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  6 02:39:20 np0005548731 nova_compute[232433]: 2025-12-06 07:39:20.860 232437 WARNING nova.virt.libvirt.driver [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  6 02:39:20 np0005548731 nova_compute[232433]: 2025-12-06 07:39:20.861 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=3709MB free_disk=20.622718811035156GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  6 02:39:20 np0005548731 nova_compute[232433]: 2025-12-06 07:39:20.861 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:39:20 np0005548731 nova_compute[232433]: 2025-12-06 07:39:20.862 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:39:20 np0005548731 nova_compute[232433]: 2025-12-06 07:39:20.873 232437 DEBUG nova.network.neutron [None req-b4986752-70ed-481a-b271-ae862310c404 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] [instance: db714956-5ece-4543-8db8-4634df66962e] Successfully updated port: 3c6a154e-800f-4ed4-b1a9-f9e0aefe2f24 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Dec  6 02:39:20 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:39:20 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 02:39:20 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:39:20.907 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 02:39:20 np0005548731 nova_compute[232433]: 2025-12-06 07:39:20.910 232437 DEBUG oslo_concurrency.lockutils [None req-b4986752-70ed-481a-b271-ae862310c404 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Acquiring lock "refresh_cache-db714956-5ece-4543-8db8-4634df66962e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 02:39:20 np0005548731 nova_compute[232433]: 2025-12-06 07:39:20.911 232437 DEBUG oslo_concurrency.lockutils [None req-b4986752-70ed-481a-b271-ae862310c404 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Acquired lock "refresh_cache-db714956-5ece-4543-8db8-4634df66962e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 02:39:20 np0005548731 nova_compute[232433]: 2025-12-06 07:39:20.911 232437 DEBUG nova.network.neutron [None req-b4986752-70ed-481a-b271-ae862310c404 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] [instance: db714956-5ece-4543-8db8-4634df66962e] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec  6 02:39:20 np0005548731 nova_compute[232433]: 2025-12-06 07:39:20.977 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Instance 6a50a40c-3b05-4c0e-aa67-1489e203824e actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  6 02:39:20 np0005548731 nova_compute[232433]: 2025-12-06 07:39:20.977 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Instance b4be0ef8-945f-47a1-a3a8-5962f1e692e5 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 192, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  6 02:39:20 np0005548731 nova_compute[232433]: 2025-12-06 07:39:20.977 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Instance ffee41ca-ba3b-4787-8435-b3903ffb29a9 actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  6 02:39:20 np0005548731 nova_compute[232433]: 2025-12-06 07:39:20.978 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Instance db714956-5ece-4543-8db8-4634df66962e actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  6 02:39:20 np0005548731 nova_compute[232433]: 2025-12-06 07:39:20.978 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 4 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  6 02:39:20 np0005548731 nova_compute[232433]: 2025-12-06 07:39:20.978 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7680MB used_ram=1088MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=4 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  6 02:39:21 np0005548731 nova_compute[232433]: 2025-12-06 07:39:21.115 232437 DEBUG nova.compute.manager [req-cd910c6d-55fe-49a6-916a-123d7260f1f5 req-1bc81460-7d1a-431c-a8f0-2e10561a03a9 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: db714956-5ece-4543-8db8-4634df66962e] Received event network-changed-3c6a154e-800f-4ed4-b1a9-f9e0aefe2f24 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:39:21 np0005548731 nova_compute[232433]: 2025-12-06 07:39:21.115 232437 DEBUG nova.compute.manager [req-cd910c6d-55fe-49a6-916a-123d7260f1f5 req-1bc81460-7d1a-431c-a8f0-2e10561a03a9 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: db714956-5ece-4543-8db8-4634df66962e] Refreshing instance network info cache due to event network-changed-3c6a154e-800f-4ed4-b1a9-f9e0aefe2f24. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  6 02:39:21 np0005548731 nova_compute[232433]: 2025-12-06 07:39:21.115 232437 DEBUG oslo_concurrency.lockutils [req-cd910c6d-55fe-49a6-916a-123d7260f1f5 req-1bc81460-7d1a-431c-a8f0-2e10561a03a9 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-db714956-5ece-4543-8db8-4634df66962e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 02:39:21 np0005548731 nova_compute[232433]: 2025-12-06 07:39:21.185 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:39:21 np0005548731 nova_compute[232433]: 2025-12-06 07:39:21.315 232437 DEBUG nova.network.neutron [None req-b4986752-70ed-481a-b271-ae862310c404 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] [instance: db714956-5ece-4543-8db8-4634df66962e] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec  6 02:39:21 np0005548731 nova_compute[232433]: 2025-12-06 07:39:21.333 232437 DEBUG nova.compute.manager [req-ba3207e8-1771-46f3-b558-329dbab6cb88 req-2cf38406-48ba-4969-94d3-747e99a0db72 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 2de097e3-8182-48e5-b69d-88acbfb84e66] Received event network-vif-plugged-8690867c-c0a8-4574-b54f-38486691e339 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:39:21 np0005548731 nova_compute[232433]: 2025-12-06 07:39:21.334 232437 DEBUG oslo_concurrency.lockutils [req-ba3207e8-1771-46f3-b558-329dbab6cb88 req-2cf38406-48ba-4969-94d3-747e99a0db72 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "2de097e3-8182-48e5-b69d-88acbfb84e66-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:39:21 np0005548731 nova_compute[232433]: 2025-12-06 07:39:21.334 232437 DEBUG oslo_concurrency.lockutils [req-ba3207e8-1771-46f3-b558-329dbab6cb88 req-2cf38406-48ba-4969-94d3-747e99a0db72 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "2de097e3-8182-48e5-b69d-88acbfb84e66-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:39:21 np0005548731 nova_compute[232433]: 2025-12-06 07:39:21.334 232437 DEBUG oslo_concurrency.lockutils [req-ba3207e8-1771-46f3-b558-329dbab6cb88 req-2cf38406-48ba-4969-94d3-747e99a0db72 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "2de097e3-8182-48e5-b69d-88acbfb84e66-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:39:21 np0005548731 nova_compute[232433]: 2025-12-06 07:39:21.334 232437 DEBUG nova.compute.manager [req-ba3207e8-1771-46f3-b558-329dbab6cb88 req-2cf38406-48ba-4969-94d3-747e99a0db72 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 2de097e3-8182-48e5-b69d-88acbfb84e66] No waiting events found dispatching network-vif-plugged-8690867c-c0a8-4574-b54f-38486691e339 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 02:39:21 np0005548731 nova_compute[232433]: 2025-12-06 07:39:21.334 232437 WARNING nova.compute.manager [req-ba3207e8-1771-46f3-b558-329dbab6cb88 req-2cf38406-48ba-4969-94d3-747e99a0db72 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 2de097e3-8182-48e5-b69d-88acbfb84e66] Received unexpected event network-vif-plugged-8690867c-c0a8-4574-b54f-38486691e339 for instance with vm_state resized and task_state resize_reverting.#033[00m
Dec  6 02:39:21 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 02:39:21 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3875607001' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 02:39:21 np0005548731 nova_compute[232433]: 2025-12-06 07:39:21.636 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.450s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:39:21 np0005548731 nova_compute[232433]: 2025-12-06 07:39:21.641 232437 DEBUG nova.compute.provider_tree [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Inventory has not changed in ProviderTree for provider: 6d00757a-082f-486d-ae84-869a2ba2e6e7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  6 02:39:21 np0005548731 nova_compute[232433]: 2025-12-06 07:39:21.686 232437 DEBUG nova.scheduler.client.report [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Inventory has not changed for provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  6 02:39:21 np0005548731 nova_compute[232433]: 2025-12-06 07:39:21.728 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  6 02:39:21 np0005548731 nova_compute[232433]: 2025-12-06 07:39:21.729 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.867s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:39:22 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:39:22 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:39:22 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:39:22.111 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:39:22 np0005548731 nova_compute[232433]: 2025-12-06 07:39:22.724 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:39:22 np0005548731 nova_compute[232433]: 2025-12-06 07:39:22.753 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:39:22 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:39:22 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 02:39:22 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:39:22.910 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 02:39:23 np0005548731 nova_compute[232433]: 2025-12-06 07:39:23.146 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:39:23 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e308 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:39:24 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:39:24 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:39:24 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:39:24.113 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:39:24 np0005548731 nova_compute[232433]: 2025-12-06 07:39:24.254 232437 DEBUG nova.network.neutron [None req-b4986752-70ed-481a-b271-ae862310c404 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] [instance: db714956-5ece-4543-8db8-4634df66962e] Updating instance_info_cache with network_info: [{"id": "3c6a154e-800f-4ed4-b1a9-f9e0aefe2f24", "address": "fa:16:3e:33:3f:13", "network": {"id": "5bb49e8a-b939-4c79-851c-62c634be0272", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-50482264-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "833f4cf9f5a64b2ab94c3bf330353a31", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3c6a154e-80", "ovs_interfaceid": "3c6a154e-800f-4ed4-b1a9-f9e0aefe2f24", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 02:39:24 np0005548731 nova_compute[232433]: 2025-12-06 07:39:24.305 232437 DEBUG oslo_concurrency.lockutils [None req-b4986752-70ed-481a-b271-ae862310c404 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Releasing lock "refresh_cache-db714956-5ece-4543-8db8-4634df66962e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 02:39:24 np0005548731 nova_compute[232433]: 2025-12-06 07:39:24.305 232437 DEBUG nova.compute.manager [None req-b4986752-70ed-481a-b271-ae862310c404 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] [instance: db714956-5ece-4543-8db8-4634df66962e] Instance network_info: |[{"id": "3c6a154e-800f-4ed4-b1a9-f9e0aefe2f24", "address": "fa:16:3e:33:3f:13", "network": {"id": "5bb49e8a-b939-4c79-851c-62c634be0272", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-50482264-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "833f4cf9f5a64b2ab94c3bf330353a31", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3c6a154e-80", "ovs_interfaceid": "3c6a154e-800f-4ed4-b1a9-f9e0aefe2f24", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Dec  6 02:39:24 np0005548731 nova_compute[232433]: 2025-12-06 07:39:24.305 232437 DEBUG oslo_concurrency.lockutils [req-cd910c6d-55fe-49a6-916a-123d7260f1f5 req-1bc81460-7d1a-431c-a8f0-2e10561a03a9 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-db714956-5ece-4543-8db8-4634df66962e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 02:39:24 np0005548731 nova_compute[232433]: 2025-12-06 07:39:24.306 232437 DEBUG nova.network.neutron [req-cd910c6d-55fe-49a6-916a-123d7260f1f5 req-1bc81460-7d1a-431c-a8f0-2e10561a03a9 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: db714956-5ece-4543-8db8-4634df66962e] Refreshing network info cache for port 3c6a154e-800f-4ed4-b1a9-f9e0aefe2f24 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  6 02:39:24 np0005548731 nova_compute[232433]: 2025-12-06 07:39:24.309 232437 DEBUG nova.virt.libvirt.driver [None req-b4986752-70ed-481a-b271-ae862310c404 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] [instance: db714956-5ece-4543-8db8-4634df66962e] Start _get_guest_xml network_info=[{"id": "3c6a154e-800f-4ed4-b1a9-f9e0aefe2f24", "address": "fa:16:3e:33:3f:13", "network": {"id": "5bb49e8a-b939-4c79-851c-62c634be0272", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-50482264-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "833f4cf9f5a64b2ab94c3bf330353a31", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3c6a154e-80", "ovs_interfaceid": "3c6a154e-800f-4ed4-b1a9-f9e0aefe2f24", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'guest_format': None, 'device_type': 'disk', 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-67ebc469-c3e0-45da-a1b6-9f2891a4f2a7', 'hosts': ['192.168.122.100', '192.168.122.102', '192.168.122.101'], 'ports': ['6789', '6789', '6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '67ebc469-c3e0-45da-a1b6-9f2891a4f2a7', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': 'db714956-5ece-4543-8db8-4634df66962e', 'attached_at': '', 'detached_at': '', 'volume_id': '67ebc469-c3e0-45da-a1b6-9f2891a4f2a7', 'serial': '67ebc469-c3e0-45da-a1b6-9f2891a4f2a7', 'multiattach': True}, 'disk_bus': 'virtio', 'boot_index': 0, 'delete_on_termination': False, 'mount_device': '/dev/vda', 'attachment_id': 'faf88ffd-3691-4171-8a88-e9ae104d0a9e', 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec  6 02:39:24 np0005548731 nova_compute[232433]: 2025-12-06 07:39:24.312 232437 WARNING nova.virt.libvirt.driver [None req-b4986752-70ed-481a-b271-ae862310c404 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  6 02:39:24 np0005548731 nova_compute[232433]: 2025-12-06 07:39:24.316 232437 DEBUG nova.virt.libvirt.host [None req-b4986752-70ed-481a-b271-ae862310c404 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec  6 02:39:24 np0005548731 nova_compute[232433]: 2025-12-06 07:39:24.316 232437 DEBUG nova.virt.libvirt.host [None req-b4986752-70ed-481a-b271-ae862310c404 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec  6 02:39:24 np0005548731 nova_compute[232433]: 2025-12-06 07:39:24.319 232437 DEBUG nova.virt.libvirt.host [None req-b4986752-70ed-481a-b271-ae862310c404 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec  6 02:39:24 np0005548731 nova_compute[232433]: 2025-12-06 07:39:24.319 232437 DEBUG nova.virt.libvirt.host [None req-b4986752-70ed-481a-b271-ae862310c404 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec  6 02:39:24 np0005548731 nova_compute[232433]: 2025-12-06 07:39:24.320 232437 DEBUG nova.virt.libvirt.driver [None req-b4986752-70ed-481a-b271-ae862310c404 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec  6 02:39:24 np0005548731 nova_compute[232433]: 2025-12-06 07:39:24.321 232437 DEBUG nova.virt.hardware [None req-b4986752-70ed-481a-b271-ae862310c404 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-06T06:56:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='25848a18-11d9-4f11-80b5-5d005675c76d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec  6 02:39:24 np0005548731 nova_compute[232433]: 2025-12-06 07:39:24.321 232437 DEBUG nova.virt.hardware [None req-b4986752-70ed-481a-b271-ae862310c404 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec  6 02:39:24 np0005548731 nova_compute[232433]: 2025-12-06 07:39:24.321 232437 DEBUG nova.virt.hardware [None req-b4986752-70ed-481a-b271-ae862310c404 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec  6 02:39:24 np0005548731 nova_compute[232433]: 2025-12-06 07:39:24.322 232437 DEBUG nova.virt.hardware [None req-b4986752-70ed-481a-b271-ae862310c404 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec  6 02:39:24 np0005548731 nova_compute[232433]: 2025-12-06 07:39:24.322 232437 DEBUG nova.virt.hardware [None req-b4986752-70ed-481a-b271-ae862310c404 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec  6 02:39:24 np0005548731 nova_compute[232433]: 2025-12-06 07:39:24.322 232437 DEBUG nova.virt.hardware [None req-b4986752-70ed-481a-b271-ae862310c404 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec  6 02:39:24 np0005548731 nova_compute[232433]: 2025-12-06 07:39:24.322 232437 DEBUG nova.virt.hardware [None req-b4986752-70ed-481a-b271-ae862310c404 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec  6 02:39:24 np0005548731 nova_compute[232433]: 2025-12-06 07:39:24.322 232437 DEBUG nova.virt.hardware [None req-b4986752-70ed-481a-b271-ae862310c404 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec  6 02:39:24 np0005548731 nova_compute[232433]: 2025-12-06 07:39:24.323 232437 DEBUG nova.virt.hardware [None req-b4986752-70ed-481a-b271-ae862310c404 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec  6 02:39:24 np0005548731 nova_compute[232433]: 2025-12-06 07:39:24.323 232437 DEBUG nova.virt.hardware [None req-b4986752-70ed-481a-b271-ae862310c404 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec  6 02:39:24 np0005548731 nova_compute[232433]: 2025-12-06 07:39:24.323 232437 DEBUG nova.virt.hardware [None req-b4986752-70ed-481a-b271-ae862310c404 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec  6 02:39:24 np0005548731 nova_compute[232433]: 2025-12-06 07:39:24.352 232437 DEBUG nova.storage.rbd_utils [None req-b4986752-70ed-481a-b271-ae862310c404 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] rbd image db714956-5ece-4543-8db8-4634df66962e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:39:24 np0005548731 nova_compute[232433]: 2025-12-06 07:39:24.357 232437 DEBUG oslo_concurrency.processutils [None req-b4986752-70ed-481a-b271-ae862310c404 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:39:24 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  6 02:39:24 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3445992299' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  6 02:39:24 np0005548731 nova_compute[232433]: 2025-12-06 07:39:24.901 232437 DEBUG oslo_concurrency.lockutils [None req-20539e8d-dc7c-4a33-a976-258f208f4cfb 7960f8f407754e5d83bee65690a6b772 21cd37bffa864aaebe5c734ba468f466 - - default default] Acquiring lock "ffee41ca-ba3b-4787-8435-b3903ffb29a9" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:39:24 np0005548731 nova_compute[232433]: 2025-12-06 07:39:24.902 232437 DEBUG oslo_concurrency.lockutils [None req-20539e8d-dc7c-4a33-a976-258f208f4cfb 7960f8f407754e5d83bee65690a6b772 21cd37bffa864aaebe5c734ba468f466 - - default default] Lock "ffee41ca-ba3b-4787-8435-b3903ffb29a9" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:39:24 np0005548731 nova_compute[232433]: 2025-12-06 07:39:24.903 232437 DEBUG oslo_concurrency.lockutils [None req-20539e8d-dc7c-4a33-a976-258f208f4cfb 7960f8f407754e5d83bee65690a6b772 21cd37bffa864aaebe5c734ba468f466 - - default default] Acquiring lock "ffee41ca-ba3b-4787-8435-b3903ffb29a9-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:39:24 np0005548731 nova_compute[232433]: 2025-12-06 07:39:24.903 232437 DEBUG oslo_concurrency.lockutils [None req-20539e8d-dc7c-4a33-a976-258f208f4cfb 7960f8f407754e5d83bee65690a6b772 21cd37bffa864aaebe5c734ba468f466 - - default default] Lock "ffee41ca-ba3b-4787-8435-b3903ffb29a9-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:39:24 np0005548731 nova_compute[232433]: 2025-12-06 07:39:24.903 232437 DEBUG oslo_concurrency.lockutils [None req-20539e8d-dc7c-4a33-a976-258f208f4cfb 7960f8f407754e5d83bee65690a6b772 21cd37bffa864aaebe5c734ba468f466 - - default default] Lock "ffee41ca-ba3b-4787-8435-b3903ffb29a9-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:39:24 np0005548731 nova_compute[232433]: 2025-12-06 07:39:24.904 232437 INFO nova.compute.manager [None req-20539e8d-dc7c-4a33-a976-258f208f4cfb 7960f8f407754e5d83bee65690a6b772 21cd37bffa864aaebe5c734ba468f466 - - default default] [instance: ffee41ca-ba3b-4787-8435-b3903ffb29a9] Terminating instance#033[00m
Dec  6 02:39:24 np0005548731 nova_compute[232433]: 2025-12-06 07:39:24.906 232437 DEBUG nova.compute.manager [None req-20539e8d-dc7c-4a33-a976-258f208f4cfb 7960f8f407754e5d83bee65690a6b772 21cd37bffa864aaebe5c734ba468f466 - - default default] [instance: ffee41ca-ba3b-4787-8435-b3903ffb29a9] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec  6 02:39:24 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:39:24 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:39:24 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:39:24.914 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:39:24 np0005548731 ceph-mon[77458]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec  6 02:39:24 np0005548731 ceph-mon[77458]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 4200.0 total, 600.0 interval#012Cumulative writes: 10K writes, 52K keys, 10K commit groups, 1.0 writes per commit group, ingest: 0.11 GB, 0.03 MB/s#012Cumulative WAL: 10K writes, 10K syncs, 1.00 writes per sync, written: 0.11 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1338 writes, 6490 keys, 1338 commit groups, 1.0 writes per commit group, ingest: 14.08 MB, 0.02 MB/s#012Interval WAL: 1338 writes, 1338 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     28.2      2.26              0.19        30    0.075       0      0       0.0       0.0#012  L6      1/0   10.90 MB   0.0      0.3     0.1      0.3       0.3      0.0       0.0   4.4     82.8     69.7      4.04              0.83        29    0.139    187K    16K       0.0       0.0#012 Sum      1/0   10.90 MB   0.0      0.3     0.1      0.3       0.3      0.1       0.0   5.4     53.1     54.8      6.30              1.02        59    0.107    187K    16K       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.1     0.0      0.0       0.1      0.0       0.0   6.7    138.7    139.5      0.37              0.16         8    0.047     34K   2088       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.3     0.1      0.3       0.3      0.0       0.0   0.0     82.8     69.7      4.04              0.83        29    0.139    187K    16K       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     28.3      2.26              0.19        29    0.078       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.6      0.00              0.00         1    0.003       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 4200.0 total, 600.0 interval#012Flush(GB): cumulative 0.062, interval 0.008#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.34 GB write, 0.08 MB/s write, 0.33 GB read, 0.08 MB/s read, 6.3 seconds#012Interval compaction: 0.05 GB write, 0.09 MB/s write, 0.05 GB read, 0.09 MB/s read, 0.4 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x5619171151f0#2 capacity: 304.00 MB usage: 37.22 MB table_size: 0 occupancy: 18446744073709551615 collections: 8 last_copies: 0 last_secs: 0.000198 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(2128,35.84 MB,11.7879%) FilterBlock(59,527.92 KB,0.169588%) IndexBlock(59,889.84 KB,0.285851%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Dec  6 02:39:25 np0005548731 kernel: tap500715f2-22 (unregistering): left promiscuous mode
Dec  6 02:39:25 np0005548731 NetworkManager[49182]: <info>  [1765006765.2037] device (tap500715f2-22): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec  6 02:39:25 np0005548731 nova_compute[232433]: 2025-12-06 07:39:25.214 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:39:25 np0005548731 ovn_controller[133927]: 2025-12-06T07:39:25Z|00600|binding|INFO|Releasing lport 500715f2-2222-4003-b268-0369959f5e25 from this chassis (sb_readonly=0)
Dec  6 02:39:25 np0005548731 ovn_controller[133927]: 2025-12-06T07:39:25Z|00601|binding|INFO|Setting lport 500715f2-2222-4003-b268-0369959f5e25 down in Southbound
Dec  6 02:39:25 np0005548731 ovn_controller[133927]: 2025-12-06T07:39:25Z|00602|binding|INFO|Removing iface tap500715f2-22 ovn-installed in OVS
Dec  6 02:39:25 np0005548731 nova_compute[232433]: 2025-12-06 07:39:25.217 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:39:25 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:39:25.225 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d3:0b:9b 10.100.0.7'], port_security=['fa:16:3e:d3:0b:9b 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'ffee41ca-ba3b-4787-8435-b3903ffb29a9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e4ed5caf-995c-4d95-a87b-6b8a5de780fa', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '21cd37bffa864aaebe5c734ba468f466', 'neutron:revision_number': '6', 'neutron:security_group_ids': '349efd6b-63d0-41af-8405-3d1f52d82535', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.244', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=359f89a6-280c-4f37-8512-64abc4582e65, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], logical_port=500715f2-2222-4003-b268-0369959f5e25) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 02:39:25 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:39:25.226 143965 INFO neutron.agent.ovn.metadata.agent [-] Port 500715f2-2222-4003-b268-0369959f5e25 in datapath e4ed5caf-995c-4d95-a87b-6b8a5de780fa unbound from our chassis#033[00m
Dec  6 02:39:25 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:39:25.228 143965 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network e4ed5caf-995c-4d95-a87b-6b8a5de780fa, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec  6 02:39:25 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:39:25.230 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[aae4e133-ba21-4d57-b35f-8a5895be76c7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:39:25 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:39:25.231 143965 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-e4ed5caf-995c-4d95-a87b-6b8a5de780fa namespace which is not needed anymore#033[00m
Dec  6 02:39:25 np0005548731 nova_compute[232433]: 2025-12-06 07:39:25.231 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:39:25 np0005548731 systemd[1]: machine-qemu\x2d60\x2dinstance\x2d00000085.scope: Deactivated successfully.
Dec  6 02:39:25 np0005548731 systemd[1]: machine-qemu\x2d60\x2dinstance\x2d00000085.scope: Consumed 16.834s CPU time.
Dec  6 02:39:25 np0005548731 systemd-machined[195355]: Machine qemu-60-instance-00000085 terminated.
Dec  6 02:39:25 np0005548731 nova_compute[232433]: 2025-12-06 07:39:25.313 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:39:25 np0005548731 nova_compute[232433]: 2025-12-06 07:39:25.327 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:39:25 np0005548731 nova_compute[232433]: 2025-12-06 07:39:25.333 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:39:25 np0005548731 nova_compute[232433]: 2025-12-06 07:39:25.340 232437 INFO nova.virt.libvirt.driver [-] [instance: ffee41ca-ba3b-4787-8435-b3903ffb29a9] Instance destroyed successfully.#033[00m
Dec  6 02:39:25 np0005548731 nova_compute[232433]: 2025-12-06 07:39:25.341 232437 DEBUG nova.objects.instance [None req-20539e8d-dc7c-4a33-a976-258f208f4cfb 7960f8f407754e5d83bee65690a6b772 21cd37bffa864aaebe5c734ba468f466 - - default default] Lazy-loading 'resources' on Instance uuid ffee41ca-ba3b-4787-8435-b3903ffb29a9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:39:25 np0005548731 nova_compute[232433]: 2025-12-06 07:39:25.360 232437 DEBUG nova.virt.libvirt.vif [None req-20539e8d-dc7c-4a33-a976-258f208f4cfb 7960f8f407754e5d83bee65690a6b772 21cd37bffa864aaebe5c734ba468f466 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-12-06T07:36:57Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-ServerActionsV293TestJSON-server-1360575984',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serveractionsv293testjson-server-1645861904',id=133,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBC09UQYuvXJm1Rko+1Enhuo3SSbAWagIRaWgjbl1eMQTI/iz1qciOUWudPgubxE+dfDGSMSVZfhJ7I1j6mbbf0xS8CEMgr5ptTVwPCXCiAWETgl5ZTVu74pyI+Gi8rzOng==',key_name='tempest-keypair-2069448021',keypairs=<?>,launch_index=0,launched_at=2025-12-06T07:38:18Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={rebuild='server'},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='21cd37bffa864aaebe5c734ba468f466',ramdisk_id='',reservation_id='r-xs37ll8f',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='1',image_base_image_ref='412dd61d-1b1e-439f-b7f9-7e7c4e42924c',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsV293TestJSON-1595119527',owner_user_name='tempest-ServerActionsV293TestJSON-1595119527-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T07:38:19Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='7960f8f407754e5d83bee65690a6b772',uuid=ffee41ca-ba3b-4787-8435-b3903ffb29a9,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "500715f2-2222-4003-b268-0369959f5e25", "address": "fa:16:3e:d3:0b:9b", "network": {"id": "e4ed5caf-995c-4d95-a87b-6b8a5de780fa", "bridge": "br-int", "label": "tempest-ServerActionsV293TestJSON-599749084-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.244", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "21cd37bffa864aaebe5c734ba468f466", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap500715f2-22", "ovs_interfaceid": "500715f2-2222-4003-b268-0369959f5e25", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec  6 02:39:25 np0005548731 nova_compute[232433]: 2025-12-06 07:39:25.360 232437 DEBUG nova.network.os_vif_util [None req-20539e8d-dc7c-4a33-a976-258f208f4cfb 7960f8f407754e5d83bee65690a6b772 21cd37bffa864aaebe5c734ba468f466 - - default default] Converting VIF {"id": "500715f2-2222-4003-b268-0369959f5e25", "address": "fa:16:3e:d3:0b:9b", "network": {"id": "e4ed5caf-995c-4d95-a87b-6b8a5de780fa", "bridge": "br-int", "label": "tempest-ServerActionsV293TestJSON-599749084-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.244", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "21cd37bffa864aaebe5c734ba468f466", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap500715f2-22", "ovs_interfaceid": "500715f2-2222-4003-b268-0369959f5e25", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 02:39:25 np0005548731 nova_compute[232433]: 2025-12-06 07:39:25.361 232437 DEBUG nova.network.os_vif_util [None req-20539e8d-dc7c-4a33-a976-258f208f4cfb 7960f8f407754e5d83bee65690a6b772 21cd37bffa864aaebe5c734ba468f466 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:d3:0b:9b,bridge_name='br-int',has_traffic_filtering=True,id=500715f2-2222-4003-b268-0369959f5e25,network=Network(e4ed5caf-995c-4d95-a87b-6b8a5de780fa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap500715f2-22') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 02:39:25 np0005548731 nova_compute[232433]: 2025-12-06 07:39:25.362 232437 DEBUG os_vif [None req-20539e8d-dc7c-4a33-a976-258f208f4cfb 7960f8f407754e5d83bee65690a6b772 21cd37bffa864aaebe5c734ba468f466 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:d3:0b:9b,bridge_name='br-int',has_traffic_filtering=True,id=500715f2-2222-4003-b268-0369959f5e25,network=Network(e4ed5caf-995c-4d95-a87b-6b8a5de780fa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap500715f2-22') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec  6 02:39:25 np0005548731 nova_compute[232433]: 2025-12-06 07:39:25.364 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:39:25 np0005548731 neutron-haproxy-ovnmeta-e4ed5caf-995c-4d95-a87b-6b8a5de780fa[289943]: [NOTICE]   (289947) : haproxy version is 2.8.14-c23fe91
Dec  6 02:39:25 np0005548731 neutron-haproxy-ovnmeta-e4ed5caf-995c-4d95-a87b-6b8a5de780fa[289943]: [NOTICE]   (289947) : path to executable is /usr/sbin/haproxy
Dec  6 02:39:25 np0005548731 neutron-haproxy-ovnmeta-e4ed5caf-995c-4d95-a87b-6b8a5de780fa[289943]: [WARNING]  (289947) : Exiting Master process...
Dec  6 02:39:25 np0005548731 neutron-haproxy-ovnmeta-e4ed5caf-995c-4d95-a87b-6b8a5de780fa[289943]: [WARNING]  (289947) : Exiting Master process...
Dec  6 02:39:25 np0005548731 nova_compute[232433]: 2025-12-06 07:39:25.365 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap500715f2-22, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:39:25 np0005548731 neutron-haproxy-ovnmeta-e4ed5caf-995c-4d95-a87b-6b8a5de780fa[289943]: [ALERT]    (289947) : Current worker (289949) exited with code 143 (Terminated)
Dec  6 02:39:25 np0005548731 neutron-haproxy-ovnmeta-e4ed5caf-995c-4d95-a87b-6b8a5de780fa[289943]: [WARNING]  (289947) : All workers exited. Exiting... (0)
Dec  6 02:39:25 np0005548731 nova_compute[232433]: 2025-12-06 07:39:25.366 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:39:25 np0005548731 nova_compute[232433]: 2025-12-06 07:39:25.367 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:39:25 np0005548731 systemd[1]: libpod-580c090467fd77bcd4bc2df659e16b00b72ee18a293f3d2baffe338290aa1e85.scope: Deactivated successfully.
Dec  6 02:39:25 np0005548731 nova_compute[232433]: 2025-12-06 07:39:25.370 232437 INFO os_vif [None req-20539e8d-dc7c-4a33-a976-258f208f4cfb 7960f8f407754e5d83bee65690a6b772 21cd37bffa864aaebe5c734ba468f466 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:d3:0b:9b,bridge_name='br-int',has_traffic_filtering=True,id=500715f2-2222-4003-b268-0369959f5e25,network=Network(e4ed5caf-995c-4d95-a87b-6b8a5de780fa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap500715f2-22')#033[00m
Dec  6 02:39:25 np0005548731 podman[291066]: 2025-12-06 07:39:25.375981461 +0000 UTC m=+0.047107998 container died 580c090467fd77bcd4bc2df659e16b00b72ee18a293f3d2baffe338290aa1e85 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e4ed5caf-995c-4d95-a87b-6b8a5de780fa, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  6 02:39:25 np0005548731 systemd[1]: var-lib-containers-storage-overlay-18db066f91339cbba5e23b22028a52934a97ae23ad136d2b812ea5bba68c887c-merged.mount: Deactivated successfully.
Dec  6 02:39:25 np0005548731 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-580c090467fd77bcd4bc2df659e16b00b72ee18a293f3d2baffe338290aa1e85-userdata-shm.mount: Deactivated successfully.
Dec  6 02:39:25 np0005548731 podman[291066]: 2025-12-06 07:39:25.416821836 +0000 UTC m=+0.087948353 container cleanup 580c090467fd77bcd4bc2df659e16b00b72ee18a293f3d2baffe338290aa1e85 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e4ed5caf-995c-4d95-a87b-6b8a5de780fa, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Dec  6 02:39:25 np0005548731 systemd[1]: libpod-conmon-580c090467fd77bcd4bc2df659e16b00b72ee18a293f3d2baffe338290aa1e85.scope: Deactivated successfully.
Dec  6 02:39:25 np0005548731 nova_compute[232433]: 2025-12-06 07:39:25.457 232437 DEBUG oslo_concurrency.processutils [None req-b4986752-70ed-481a-b271-ae862310c404 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.100s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:39:25 np0005548731 nova_compute[232433]: 2025-12-06 07:39:25.482 232437 DEBUG nova.virt.libvirt.vif [None req-b4986752-70ed-481a-b271-ae862310c404 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T07:39:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-AttachVolumeMultiAttachTest-server-1557370704',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-attachvolumemultiattachtest-server-1557370704',id=134,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='833f4cf9f5a64b2ab94c3bf330353a31',ramdisk_id='',reservation_id='r-fk5h503m',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-AttachVolumeMultiAttachTest-690984293',owner_user_name='tempest-AttachVolumeMultiAttachTest-690984293-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=202
5-12-06T07:39:13Z,user_data=None,user_id='605b5481e0c944048e6a67046c30d693',uuid=db714956-5ece-4543-8db8-4634df66962e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3c6a154e-800f-4ed4-b1a9-f9e0aefe2f24", "address": "fa:16:3e:33:3f:13", "network": {"id": "5bb49e8a-b939-4c79-851c-62c634be0272", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-50482264-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "833f4cf9f5a64b2ab94c3bf330353a31", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3c6a154e-80", "ovs_interfaceid": "3c6a154e-800f-4ed4-b1a9-f9e0aefe2f24", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec  6 02:39:25 np0005548731 nova_compute[232433]: 2025-12-06 07:39:25.482 232437 DEBUG nova.network.os_vif_util [None req-b4986752-70ed-481a-b271-ae862310c404 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Converting VIF {"id": "3c6a154e-800f-4ed4-b1a9-f9e0aefe2f24", "address": "fa:16:3e:33:3f:13", "network": {"id": "5bb49e8a-b939-4c79-851c-62c634be0272", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-50482264-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "833f4cf9f5a64b2ab94c3bf330353a31", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3c6a154e-80", "ovs_interfaceid": "3c6a154e-800f-4ed4-b1a9-f9e0aefe2f24", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 02:39:25 np0005548731 nova_compute[232433]: 2025-12-06 07:39:25.483 232437 DEBUG nova.network.os_vif_util [None req-b4986752-70ed-481a-b271-ae862310c404 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:33:3f:13,bridge_name='br-int',has_traffic_filtering=True,id=3c6a154e-800f-4ed4-b1a9-f9e0aefe2f24,network=Network(5bb49e8a-b939-4c79-851c-62c634be0272),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3c6a154e-80') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 02:39:25 np0005548731 nova_compute[232433]: 2025-12-06 07:39:25.484 232437 DEBUG nova.objects.instance [None req-b4986752-70ed-481a-b271-ae862310c404 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Lazy-loading 'pci_devices' on Instance uuid db714956-5ece-4543-8db8-4634df66962e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:39:25 np0005548731 podman[291125]: 2025-12-06 07:39:25.486831512 +0000 UTC m=+0.051462604 container remove 580c090467fd77bcd4bc2df659e16b00b72ee18a293f3d2baffe338290aa1e85 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e4ed5caf-995c-4d95-a87b-6b8a5de780fa, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec  6 02:39:25 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:39:25.492 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[4d8913f1-04f6-44ea-9ff7-d04c00e76428]: (4, ('Sat Dec  6 07:39:25 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-e4ed5caf-995c-4d95-a87b-6b8a5de780fa (580c090467fd77bcd4bc2df659e16b00b72ee18a293f3d2baffe338290aa1e85)\n580c090467fd77bcd4bc2df659e16b00b72ee18a293f3d2baffe338290aa1e85\nSat Dec  6 07:39:25 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-e4ed5caf-995c-4d95-a87b-6b8a5de780fa (580c090467fd77bcd4bc2df659e16b00b72ee18a293f3d2baffe338290aa1e85)\n580c090467fd77bcd4bc2df659e16b00b72ee18a293f3d2baffe338290aa1e85\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:39:25 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:39:25.494 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[5755f55a-744e-448f-b68d-9c07cb57678b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:39:25 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:39:25.495 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape4ed5caf-90, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:39:25 np0005548731 nova_compute[232433]: 2025-12-06 07:39:25.496 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:39:25 np0005548731 kernel: tape4ed5caf-90: left promiscuous mode
Dec  6 02:39:25 np0005548731 nova_compute[232433]: 2025-12-06 07:39:25.501 232437 DEBUG nova.virt.libvirt.driver [None req-b4986752-70ed-481a-b271-ae862310c404 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] [instance: db714956-5ece-4543-8db8-4634df66962e] End _get_guest_xml xml=<domain type="kvm">
Dec  6 02:39:25 np0005548731 nova_compute[232433]:  <uuid>db714956-5ece-4543-8db8-4634df66962e</uuid>
Dec  6 02:39:25 np0005548731 nova_compute[232433]:  <name>instance-00000086</name>
Dec  6 02:39:25 np0005548731 nova_compute[232433]:  <memory>131072</memory>
Dec  6 02:39:25 np0005548731 nova_compute[232433]:  <vcpu>1</vcpu>
Dec  6 02:39:25 np0005548731 nova_compute[232433]:  <metadata>
Dec  6 02:39:25 np0005548731 nova_compute[232433]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec  6 02:39:25 np0005548731 nova_compute[232433]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec  6 02:39:25 np0005548731 nova_compute[232433]:      <nova:name>tempest-AttachVolumeMultiAttachTest-server-1557370704</nova:name>
Dec  6 02:39:25 np0005548731 nova_compute[232433]:      <nova:creationTime>2025-12-06 07:39:24</nova:creationTime>
Dec  6 02:39:25 np0005548731 nova_compute[232433]:      <nova:flavor name="m1.nano">
Dec  6 02:39:25 np0005548731 nova_compute[232433]:        <nova:memory>128</nova:memory>
Dec  6 02:39:25 np0005548731 nova_compute[232433]:        <nova:disk>1</nova:disk>
Dec  6 02:39:25 np0005548731 nova_compute[232433]:        <nova:swap>0</nova:swap>
Dec  6 02:39:25 np0005548731 nova_compute[232433]:        <nova:ephemeral>0</nova:ephemeral>
Dec  6 02:39:25 np0005548731 nova_compute[232433]:        <nova:vcpus>1</nova:vcpus>
Dec  6 02:39:25 np0005548731 nova_compute[232433]:      </nova:flavor>
Dec  6 02:39:25 np0005548731 nova_compute[232433]:      <nova:owner>
Dec  6 02:39:25 np0005548731 nova_compute[232433]:        <nova:user uuid="605b5481e0c944048e6a67046c30d693">tempest-AttachVolumeMultiAttachTest-690984293-project-member</nova:user>
Dec  6 02:39:25 np0005548731 nova_compute[232433]:        <nova:project uuid="833f4cf9f5a64b2ab94c3bf330353a31">tempest-AttachVolumeMultiAttachTest-690984293</nova:project>
Dec  6 02:39:25 np0005548731 nova_compute[232433]:      </nova:owner>
Dec  6 02:39:25 np0005548731 nova_compute[232433]:      <nova:ports>
Dec  6 02:39:25 np0005548731 nova_compute[232433]:        <nova:port uuid="3c6a154e-800f-4ed4-b1a9-f9e0aefe2f24">
Dec  6 02:39:25 np0005548731 nova_compute[232433]:          <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Dec  6 02:39:25 np0005548731 nova_compute[232433]:        </nova:port>
Dec  6 02:39:25 np0005548731 nova_compute[232433]:      </nova:ports>
Dec  6 02:39:25 np0005548731 nova_compute[232433]:    </nova:instance>
Dec  6 02:39:25 np0005548731 nova_compute[232433]:  </metadata>
Dec  6 02:39:25 np0005548731 nova_compute[232433]:  <sysinfo type="smbios">
Dec  6 02:39:25 np0005548731 nova_compute[232433]:    <system>
Dec  6 02:39:25 np0005548731 nova_compute[232433]:      <entry name="manufacturer">RDO</entry>
Dec  6 02:39:25 np0005548731 nova_compute[232433]:      <entry name="product">OpenStack Compute</entry>
Dec  6 02:39:25 np0005548731 nova_compute[232433]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec  6 02:39:25 np0005548731 nova_compute[232433]:      <entry name="serial">db714956-5ece-4543-8db8-4634df66962e</entry>
Dec  6 02:39:25 np0005548731 nova_compute[232433]:      <entry name="uuid">db714956-5ece-4543-8db8-4634df66962e</entry>
Dec  6 02:39:25 np0005548731 nova_compute[232433]:      <entry name="family">Virtual Machine</entry>
Dec  6 02:39:25 np0005548731 nova_compute[232433]:    </system>
Dec  6 02:39:25 np0005548731 nova_compute[232433]:  </sysinfo>
Dec  6 02:39:25 np0005548731 nova_compute[232433]:  <os>
Dec  6 02:39:25 np0005548731 nova_compute[232433]:    <type arch="x86_64" machine="q35">hvm</type>
Dec  6 02:39:25 np0005548731 nova_compute[232433]:    <boot dev="hd"/>
Dec  6 02:39:25 np0005548731 nova_compute[232433]:    <smbios mode="sysinfo"/>
Dec  6 02:39:25 np0005548731 nova_compute[232433]:  </os>
Dec  6 02:39:25 np0005548731 nova_compute[232433]:  <features>
Dec  6 02:39:25 np0005548731 nova_compute[232433]:    <acpi/>
Dec  6 02:39:25 np0005548731 nova_compute[232433]:    <apic/>
Dec  6 02:39:25 np0005548731 nova_compute[232433]:    <vmcoreinfo/>
Dec  6 02:39:25 np0005548731 nova_compute[232433]:  </features>
Dec  6 02:39:25 np0005548731 nova_compute[232433]:  <clock offset="utc">
Dec  6 02:39:25 np0005548731 nova_compute[232433]:    <timer name="pit" tickpolicy="delay"/>
Dec  6 02:39:25 np0005548731 nova_compute[232433]:    <timer name="rtc" tickpolicy="catchup"/>
Dec  6 02:39:25 np0005548731 nova_compute[232433]:    <timer name="hpet" present="no"/>
Dec  6 02:39:25 np0005548731 nova_compute[232433]:  </clock>
Dec  6 02:39:25 np0005548731 nova_compute[232433]:  <cpu mode="custom" match="exact">
Dec  6 02:39:25 np0005548731 nova_compute[232433]:    <model>Nehalem</model>
Dec  6 02:39:25 np0005548731 nova_compute[232433]:    <topology sockets="1" cores="1" threads="1"/>
Dec  6 02:39:25 np0005548731 nova_compute[232433]:  </cpu>
Dec  6 02:39:25 np0005548731 nova_compute[232433]:  <devices>
Dec  6 02:39:25 np0005548731 nova_compute[232433]:    <disk type="network" device="cdrom">
Dec  6 02:39:25 np0005548731 nova_compute[232433]:      <driver type="raw" cache="none"/>
Dec  6 02:39:25 np0005548731 nova_compute[232433]:      <source protocol="rbd" name="vms/db714956-5ece-4543-8db8-4634df66962e_disk.config">
Dec  6 02:39:25 np0005548731 nova_compute[232433]:        <host name="192.168.122.100" port="6789"/>
Dec  6 02:39:25 np0005548731 nova_compute[232433]:        <host name="192.168.122.102" port="6789"/>
Dec  6 02:39:25 np0005548731 nova_compute[232433]:        <host name="192.168.122.101" port="6789"/>
Dec  6 02:39:25 np0005548731 nova_compute[232433]:      </source>
Dec  6 02:39:25 np0005548731 nova_compute[232433]:      <auth username="openstack">
Dec  6 02:39:25 np0005548731 nova_compute[232433]:        <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec  6 02:39:25 np0005548731 nova_compute[232433]:      </auth>
Dec  6 02:39:25 np0005548731 nova_compute[232433]:      <target dev="sda" bus="sata"/>
Dec  6 02:39:25 np0005548731 nova_compute[232433]:    </disk>
Dec  6 02:39:25 np0005548731 nova_compute[232433]:    <disk type="network" device="disk">
Dec  6 02:39:25 np0005548731 nova_compute[232433]:      <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Dec  6 02:39:25 np0005548731 nova_compute[232433]:      <source protocol="rbd" name="volumes/volume-67ebc469-c3e0-45da-a1b6-9f2891a4f2a7">
Dec  6 02:39:25 np0005548731 nova_compute[232433]:        <host name="192.168.122.100" port="6789"/>
Dec  6 02:39:25 np0005548731 nova_compute[232433]:        <host name="192.168.122.102" port="6789"/>
Dec  6 02:39:25 np0005548731 nova_compute[232433]:        <host name="192.168.122.101" port="6789"/>
Dec  6 02:39:25 np0005548731 nova_compute[232433]:      </source>
Dec  6 02:39:25 np0005548731 nova_compute[232433]:      <auth username="openstack">
Dec  6 02:39:25 np0005548731 nova_compute[232433]:        <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec  6 02:39:25 np0005548731 nova_compute[232433]:      </auth>
Dec  6 02:39:25 np0005548731 nova_compute[232433]:      <target dev="vda" bus="virtio"/>
Dec  6 02:39:25 np0005548731 nova_compute[232433]:      <serial>67ebc469-c3e0-45da-a1b6-9f2891a4f2a7</serial>
Dec  6 02:39:25 np0005548731 nova_compute[232433]:      <shareable/>
Dec  6 02:39:25 np0005548731 nova_compute[232433]:    </disk>
Dec  6 02:39:25 np0005548731 nova_compute[232433]:    <interface type="ethernet">
Dec  6 02:39:25 np0005548731 nova_compute[232433]:      <mac address="fa:16:3e:33:3f:13"/>
Dec  6 02:39:25 np0005548731 nova_compute[232433]:      <model type="virtio"/>
Dec  6 02:39:25 np0005548731 nova_compute[232433]:      <driver name="vhost" rx_queue_size="512"/>
Dec  6 02:39:25 np0005548731 nova_compute[232433]:      <mtu size="1442"/>
Dec  6 02:39:25 np0005548731 nova_compute[232433]:      <target dev="tap3c6a154e-80"/>
Dec  6 02:39:25 np0005548731 nova_compute[232433]:    </interface>
Dec  6 02:39:25 np0005548731 nova_compute[232433]:    <serial type="pty">
Dec  6 02:39:25 np0005548731 nova_compute[232433]:      <log file="/var/lib/nova/instances/db714956-5ece-4543-8db8-4634df66962e/console.log" append="off"/>
Dec  6 02:39:25 np0005548731 nova_compute[232433]:    </serial>
Dec  6 02:39:25 np0005548731 nova_compute[232433]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec  6 02:39:25 np0005548731 nova_compute[232433]:    <video>
Dec  6 02:39:25 np0005548731 nova_compute[232433]:      <model type="virtio"/>
Dec  6 02:39:25 np0005548731 nova_compute[232433]:    </video>
Dec  6 02:39:25 np0005548731 nova_compute[232433]:    <input type="tablet" bus="usb"/>
Dec  6 02:39:25 np0005548731 nova_compute[232433]:    <rng model="virtio">
Dec  6 02:39:25 np0005548731 nova_compute[232433]:      <backend model="random">/dev/urandom</backend>
Dec  6 02:39:25 np0005548731 nova_compute[232433]:    </rng>
Dec  6 02:39:25 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root"/>
Dec  6 02:39:25 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:39:25 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:39:25 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:39:25 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:39:25 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:39:25 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:39:25 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:39:25 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:39:25 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:39:25 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:39:25 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:39:25 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:39:25 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:39:25 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:39:25 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:39:25 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:39:25 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:39:25 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:39:25 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:39:25 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:39:25 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:39:25 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:39:25 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:39:25 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:39:25 np0005548731 nova_compute[232433]:    <controller type="usb" index="0"/>
Dec  6 02:39:25 np0005548731 nova_compute[232433]:    <memballoon model="virtio">
Dec  6 02:39:25 np0005548731 nova_compute[232433]:      <stats period="10"/>
Dec  6 02:39:25 np0005548731 nova_compute[232433]:    </memballoon>
Dec  6 02:39:25 np0005548731 nova_compute[232433]:  </devices>
Dec  6 02:39:25 np0005548731 nova_compute[232433]: </domain>
Dec  6 02:39:25 np0005548731 nova_compute[232433]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec  6 02:39:25 np0005548731 nova_compute[232433]: 2025-12-06 07:39:25.501 232437 DEBUG nova.compute.manager [None req-b4986752-70ed-481a-b271-ae862310c404 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] [instance: db714956-5ece-4543-8db8-4634df66962e] Preparing to wait for external event network-vif-plugged-3c6a154e-800f-4ed4-b1a9-f9e0aefe2f24 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec  6 02:39:25 np0005548731 nova_compute[232433]: 2025-12-06 07:39:25.501 232437 DEBUG oslo_concurrency.lockutils [None req-b4986752-70ed-481a-b271-ae862310c404 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Acquiring lock "db714956-5ece-4543-8db8-4634df66962e-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:39:25 np0005548731 nova_compute[232433]: 2025-12-06 07:39:25.502 232437 DEBUG oslo_concurrency.lockutils [None req-b4986752-70ed-481a-b271-ae862310c404 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Lock "db714956-5ece-4543-8db8-4634df66962e-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:39:25 np0005548731 nova_compute[232433]: 2025-12-06 07:39:25.502 232437 DEBUG oslo_concurrency.lockutils [None req-b4986752-70ed-481a-b271-ae862310c404 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Lock "db714956-5ece-4543-8db8-4634df66962e-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:39:25 np0005548731 nova_compute[232433]: 2025-12-06 07:39:25.502 232437 DEBUG nova.virt.libvirt.vif [None req-b4986752-70ed-481a-b271-ae862310c404 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T07:39:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-AttachVolumeMultiAttachTest-server-1557370704',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-attachvolumemultiattachtest-server-1557370704',id=134,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='833f4cf9f5a64b2ab94c3bf330353a31',ramdisk_id='',reservation_id='r-fk5h503m',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-AttachVolumeMultiAttachTest-690984293',owner_user_name='tempest-AttachVolumeMultiAttachTest-690984293-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,upda
ted_at=2025-12-06T07:39:13Z,user_data=None,user_id='605b5481e0c944048e6a67046c30d693',uuid=db714956-5ece-4543-8db8-4634df66962e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3c6a154e-800f-4ed4-b1a9-f9e0aefe2f24", "address": "fa:16:3e:33:3f:13", "network": {"id": "5bb49e8a-b939-4c79-851c-62c634be0272", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-50482264-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "833f4cf9f5a64b2ab94c3bf330353a31", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3c6a154e-80", "ovs_interfaceid": "3c6a154e-800f-4ed4-b1a9-f9e0aefe2f24", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec  6 02:39:25 np0005548731 nova_compute[232433]: 2025-12-06 07:39:25.503 232437 DEBUG nova.network.os_vif_util [None req-b4986752-70ed-481a-b271-ae862310c404 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Converting VIF {"id": "3c6a154e-800f-4ed4-b1a9-f9e0aefe2f24", "address": "fa:16:3e:33:3f:13", "network": {"id": "5bb49e8a-b939-4c79-851c-62c634be0272", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-50482264-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "833f4cf9f5a64b2ab94c3bf330353a31", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3c6a154e-80", "ovs_interfaceid": "3c6a154e-800f-4ed4-b1a9-f9e0aefe2f24", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 02:39:25 np0005548731 nova_compute[232433]: 2025-12-06 07:39:25.503 232437 DEBUG nova.network.os_vif_util [None req-b4986752-70ed-481a-b271-ae862310c404 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:33:3f:13,bridge_name='br-int',has_traffic_filtering=True,id=3c6a154e-800f-4ed4-b1a9-f9e0aefe2f24,network=Network(5bb49e8a-b939-4c79-851c-62c634be0272),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3c6a154e-80') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 02:39:25 np0005548731 nova_compute[232433]: 2025-12-06 07:39:25.504 232437 DEBUG os_vif [None req-b4986752-70ed-481a-b271-ae862310c404 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:33:3f:13,bridge_name='br-int',has_traffic_filtering=True,id=3c6a154e-800f-4ed4-b1a9-f9e0aefe2f24,network=Network(5bb49e8a-b939-4c79-851c-62c634be0272),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3c6a154e-80') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec  6 02:39:25 np0005548731 nova_compute[232433]: 2025-12-06 07:39:25.504 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:39:25 np0005548731 nova_compute[232433]: 2025-12-06 07:39:25.505 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:39:25 np0005548731 nova_compute[232433]: 2025-12-06 07:39:25.505 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  6 02:39:25 np0005548731 nova_compute[232433]: 2025-12-06 07:39:25.507 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:39:25 np0005548731 nova_compute[232433]: 2025-12-06 07:39:25.508 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3c6a154e-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:39:25 np0005548731 nova_compute[232433]: 2025-12-06 07:39:25.508 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap3c6a154e-80, col_values=(('external_ids', {'iface-id': '3c6a154e-800f-4ed4-b1a9-f9e0aefe2f24', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:33:3f:13', 'vm-uuid': 'db714956-5ece-4543-8db8-4634df66962e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:39:25 np0005548731 nova_compute[232433]: 2025-12-06 07:39:25.510 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:39:25 np0005548731 NetworkManager[49182]: <info>  [1765006765.5115] manager: (tap3c6a154e-80): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/282)
Dec  6 02:39:25 np0005548731 nova_compute[232433]: 2025-12-06 07:39:25.512 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:39:25 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:39:25.514 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[dfea577a-c623-4587-b5e9-81ed92b3d73d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:39:25 np0005548731 nova_compute[232433]: 2025-12-06 07:39:25.515 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:39:25 np0005548731 nova_compute[232433]: 2025-12-06 07:39:25.516 232437 INFO os_vif [None req-b4986752-70ed-481a-b271-ae862310c404 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:33:3f:13,bridge_name='br-int',has_traffic_filtering=True,id=3c6a154e-800f-4ed4-b1a9-f9e0aefe2f24,network=Network(5bb49e8a-b939-4c79-851c-62c634be0272),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3c6a154e-80')#033[00m
Dec  6 02:39:25 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:39:25.527 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[fd15c60f-85c9-461c-94f0-89e74eb4ca74]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:39:25 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:39:25.528 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[92438ee6-546d-458e-ae84-2585462bd1f4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:39:25 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:39:25.546 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[313f9d00-f130-4e2b-bdc2-e2b76cb5d3c6]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 695990, 'reachable_time': 44876, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 291144, 'error': None, 'target': 'ovnmeta-e4ed5caf-995c-4d95-a87b-6b8a5de780fa', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:39:25 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:39:25.548 144079 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-e4ed5caf-995c-4d95-a87b-6b8a5de780fa deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Dec  6 02:39:25 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:39:25.548 144079 DEBUG oslo.privsep.daemon [-] privsep: reply[88ce32aa-b7cc-4f1f-a559-ee9fdf0af12c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:39:25 np0005548731 systemd[1]: run-netns-ovnmeta\x2de4ed5caf\x2d995c\x2d4d95\x2da87b\x2d6b8a5de780fa.mount: Deactivated successfully.
Dec  6 02:39:25 np0005548731 nova_compute[232433]: 2025-12-06 07:39:25.568 232437 DEBUG nova.virt.libvirt.driver [None req-b4986752-70ed-481a-b271-ae862310c404 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  6 02:39:25 np0005548731 nova_compute[232433]: 2025-12-06 07:39:25.568 232437 DEBUG nova.virt.libvirt.driver [None req-b4986752-70ed-481a-b271-ae862310c404 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  6 02:39:25 np0005548731 nova_compute[232433]: 2025-12-06 07:39:25.568 232437 DEBUG nova.virt.libvirt.driver [None req-b4986752-70ed-481a-b271-ae862310c404 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] No VIF found with MAC fa:16:3e:33:3f:13, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec  6 02:39:25 np0005548731 nova_compute[232433]: 2025-12-06 07:39:25.569 232437 INFO nova.virt.libvirt.driver [None req-b4986752-70ed-481a-b271-ae862310c404 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] [instance: db714956-5ece-4543-8db8-4634df66962e] Using config drive#033[00m
Dec  6 02:39:25 np0005548731 nova_compute[232433]: 2025-12-06 07:39:25.594 232437 DEBUG nova.storage.rbd_utils [None req-b4986752-70ed-481a-b271-ae862310c404 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] rbd image db714956-5ece-4543-8db8-4634df66962e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:39:25 np0005548731 nova_compute[232433]: 2025-12-06 07:39:25.623 232437 DEBUG nova.compute.manager [req-0a603a54-c2c3-421f-a127-ff3f83f63bbb req-2306cf1c-2cdd-4f09-a4cb-6274e3a14dbe 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: ffee41ca-ba3b-4787-8435-b3903ffb29a9] Received event network-vif-unplugged-500715f2-2222-4003-b268-0369959f5e25 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:39:25 np0005548731 nova_compute[232433]: 2025-12-06 07:39:25.623 232437 DEBUG oslo_concurrency.lockutils [req-0a603a54-c2c3-421f-a127-ff3f83f63bbb req-2306cf1c-2cdd-4f09-a4cb-6274e3a14dbe 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "ffee41ca-ba3b-4787-8435-b3903ffb29a9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:39:25 np0005548731 nova_compute[232433]: 2025-12-06 07:39:25.623 232437 DEBUG oslo_concurrency.lockutils [req-0a603a54-c2c3-421f-a127-ff3f83f63bbb req-2306cf1c-2cdd-4f09-a4cb-6274e3a14dbe 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "ffee41ca-ba3b-4787-8435-b3903ffb29a9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:39:25 np0005548731 nova_compute[232433]: 2025-12-06 07:39:25.624 232437 DEBUG oslo_concurrency.lockutils [req-0a603a54-c2c3-421f-a127-ff3f83f63bbb req-2306cf1c-2cdd-4f09-a4cb-6274e3a14dbe 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "ffee41ca-ba3b-4787-8435-b3903ffb29a9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:39:25 np0005548731 nova_compute[232433]: 2025-12-06 07:39:25.624 232437 DEBUG nova.compute.manager [req-0a603a54-c2c3-421f-a127-ff3f83f63bbb req-2306cf1c-2cdd-4f09-a4cb-6274e3a14dbe 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: ffee41ca-ba3b-4787-8435-b3903ffb29a9] No waiting events found dispatching network-vif-unplugged-500715f2-2222-4003-b268-0369959f5e25 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 02:39:25 np0005548731 nova_compute[232433]: 2025-12-06 07:39:25.624 232437 DEBUG nova.compute.manager [req-0a603a54-c2c3-421f-a127-ff3f83f63bbb req-2306cf1c-2cdd-4f09-a4cb-6274e3a14dbe 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: ffee41ca-ba3b-4787-8435-b3903ffb29a9] Received event network-vif-unplugged-500715f2-2222-4003-b268-0369959f5e25 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Dec  6 02:39:26 np0005548731 nova_compute[232433]: 2025-12-06 07:39:26.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:39:26 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:39:26 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:39:26 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:39:26.117 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:39:26 np0005548731 nova_compute[232433]: 2025-12-06 07:39:26.205 232437 INFO nova.virt.libvirt.driver [None req-20539e8d-dc7c-4a33-a976-258f208f4cfb 7960f8f407754e5d83bee65690a6b772 21cd37bffa864aaebe5c734ba468f466 - - default default] [instance: ffee41ca-ba3b-4787-8435-b3903ffb29a9] Deleting instance files /var/lib/nova/instances/ffee41ca-ba3b-4787-8435-b3903ffb29a9_del#033[00m
Dec  6 02:39:26 np0005548731 nova_compute[232433]: 2025-12-06 07:39:26.206 232437 INFO nova.virt.libvirt.driver [None req-20539e8d-dc7c-4a33-a976-258f208f4cfb 7960f8f407754e5d83bee65690a6b772 21cd37bffa864aaebe5c734ba468f466 - - default default] [instance: ffee41ca-ba3b-4787-8435-b3903ffb29a9] Deletion of /var/lib/nova/instances/ffee41ca-ba3b-4787-8435-b3903ffb29a9_del complete#033[00m
Dec  6 02:39:26 np0005548731 nova_compute[232433]: 2025-12-06 07:39:26.273 232437 INFO nova.compute.manager [None req-20539e8d-dc7c-4a33-a976-258f208f4cfb 7960f8f407754e5d83bee65690a6b772 21cd37bffa864aaebe5c734ba468f466 - - default default] [instance: ffee41ca-ba3b-4787-8435-b3903ffb29a9] Took 1.37 seconds to destroy the instance on the hypervisor.#033[00m
Dec  6 02:39:26 np0005548731 nova_compute[232433]: 2025-12-06 07:39:26.274 232437 DEBUG oslo.service.loopingcall [None req-20539e8d-dc7c-4a33-a976-258f208f4cfb 7960f8f407754e5d83bee65690a6b772 21cd37bffa864aaebe5c734ba468f466 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Dec  6 02:39:26 np0005548731 nova_compute[232433]: 2025-12-06 07:39:26.274 232437 DEBUG nova.compute.manager [-] [instance: ffee41ca-ba3b-4787-8435-b3903ffb29a9] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Dec  6 02:39:26 np0005548731 nova_compute[232433]: 2025-12-06 07:39:26.275 232437 DEBUG nova.network.neutron [-] [instance: ffee41ca-ba3b-4787-8435-b3903ffb29a9] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Dec  6 02:39:26 np0005548731 nova_compute[232433]: 2025-12-06 07:39:26.693 232437 INFO nova.virt.libvirt.driver [None req-b4986752-70ed-481a-b271-ae862310c404 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] [instance: db714956-5ece-4543-8db8-4634df66962e] Creating config drive at /var/lib/nova/instances/db714956-5ece-4543-8db8-4634df66962e/disk.config#033[00m
Dec  6 02:39:26 np0005548731 nova_compute[232433]: 2025-12-06 07:39:26.699 232437 DEBUG oslo_concurrency.processutils [None req-b4986752-70ed-481a-b271-ae862310c404 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/db714956-5ece-4543-8db8-4634df66962e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpozcrs5w5 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:39:26 np0005548731 nova_compute[232433]: 2025-12-06 07:39:26.837 232437 DEBUG oslo_concurrency.processutils [None req-b4986752-70ed-481a-b271-ae862310c404 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/db714956-5ece-4543-8db8-4634df66962e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpozcrs5w5" returned: 0 in 0.138s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:39:26 np0005548731 nova_compute[232433]: 2025-12-06 07:39:26.872 232437 DEBUG nova.storage.rbd_utils [None req-b4986752-70ed-481a-b271-ae862310c404 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] rbd image db714956-5ece-4543-8db8-4634df66962e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:39:26 np0005548731 nova_compute[232433]: 2025-12-06 07:39:26.876 232437 DEBUG oslo_concurrency.processutils [None req-b4986752-70ed-481a-b271-ae862310c404 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/db714956-5ece-4543-8db8-4634df66962e/disk.config db714956-5ece-4543-8db8-4634df66962e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:39:26 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:39:26 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:39:26 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:39:26.917 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:39:27 np0005548731 nova_compute[232433]: 2025-12-06 07:39:27.552 232437 DEBUG nova.network.neutron [req-cd910c6d-55fe-49a6-916a-123d7260f1f5 req-1bc81460-7d1a-431c-a8f0-2e10561a03a9 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: db714956-5ece-4543-8db8-4634df66962e] Updated VIF entry in instance network info cache for port 3c6a154e-800f-4ed4-b1a9-f9e0aefe2f24. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  6 02:39:27 np0005548731 nova_compute[232433]: 2025-12-06 07:39:27.553 232437 DEBUG nova.network.neutron [req-cd910c6d-55fe-49a6-916a-123d7260f1f5 req-1bc81460-7d1a-431c-a8f0-2e10561a03a9 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: db714956-5ece-4543-8db8-4634df66962e] Updating instance_info_cache with network_info: [{"id": "3c6a154e-800f-4ed4-b1a9-f9e0aefe2f24", "address": "fa:16:3e:33:3f:13", "network": {"id": "5bb49e8a-b939-4c79-851c-62c634be0272", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-50482264-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "833f4cf9f5a64b2ab94c3bf330353a31", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3c6a154e-80", "ovs_interfaceid": "3c6a154e-800f-4ed4-b1a9-f9e0aefe2f24", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 02:39:27 np0005548731 nova_compute[232433]: 2025-12-06 07:39:27.585 232437 DEBUG oslo_concurrency.lockutils [req-cd910c6d-55fe-49a6-916a-123d7260f1f5 req-1bc81460-7d1a-431c-a8f0-2e10561a03a9 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-db714956-5ece-4543-8db8-4634df66962e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 02:39:28 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:39:28 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:39:28 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:39:28.119 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:39:28 np0005548731 nova_compute[232433]: 2025-12-06 07:39:28.149 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:39:28 np0005548731 nova_compute[232433]: 2025-12-06 07:39:28.195 232437 DEBUG oslo_concurrency.processutils [None req-b4986752-70ed-481a-b271-ae862310c404 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/db714956-5ece-4543-8db8-4634df66962e/disk.config db714956-5ece-4543-8db8-4634df66962e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.319s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:39:28 np0005548731 nova_compute[232433]: 2025-12-06 07:39:28.196 232437 INFO nova.virt.libvirt.driver [None req-b4986752-70ed-481a-b271-ae862310c404 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] [instance: db714956-5ece-4543-8db8-4634df66962e] Deleting local config drive /var/lib/nova/instances/db714956-5ece-4543-8db8-4634df66962e/disk.config because it was imported into RBD.#033[00m
Dec  6 02:39:28 np0005548731 kernel: tap3c6a154e-80: entered promiscuous mode
Dec  6 02:39:28 np0005548731 NetworkManager[49182]: <info>  [1765006768.2410] manager: (tap3c6a154e-80): new Tun device (/org/freedesktop/NetworkManager/Devices/283)
Dec  6 02:39:28 np0005548731 nova_compute[232433]: 2025-12-06 07:39:28.241 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:39:28 np0005548731 ovn_controller[133927]: 2025-12-06T07:39:28Z|00603|binding|INFO|Claiming lport 3c6a154e-800f-4ed4-b1a9-f9e0aefe2f24 for this chassis.
Dec  6 02:39:28 np0005548731 ovn_controller[133927]: 2025-12-06T07:39:28Z|00604|binding|INFO|3c6a154e-800f-4ed4-b1a9-f9e0aefe2f24: Claiming fa:16:3e:33:3f:13 10.100.0.10
Dec  6 02:39:28 np0005548731 ovn_controller[133927]: 2025-12-06T07:39:28Z|00605|binding|INFO|Setting lport 3c6a154e-800f-4ed4-b1a9-f9e0aefe2f24 ovn-installed in OVS
Dec  6 02:39:28 np0005548731 nova_compute[232433]: 2025-12-06 07:39:28.261 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:39:28 np0005548731 ovn_controller[133927]: 2025-12-06T07:39:28Z|00606|binding|INFO|Setting lport 3c6a154e-800f-4ed4-b1a9-f9e0aefe2f24 up in Southbound
Dec  6 02:39:28 np0005548731 systemd-udevd[291217]: Network interface NamePolicy= disabled on kernel command line.
Dec  6 02:39:28 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:39:28.266 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:33:3f:13 10.100.0.10'], port_security=['fa:16:3e:33:3f:13 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'db714956-5ece-4543-8db8-4634df66962e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5bb49e8a-b939-4c79-851c-62c634be0272', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '833f4cf9f5a64b2ab94c3bf330353a31', 'neutron:revision_number': '2', 'neutron:security_group_ids': '384b06d0-71ab-455f-9033-7290730c5c8b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=56d5d28a-0d18-4549-b1d7-8420194c6348, chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], tunnel_key=8, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], logical_port=3c6a154e-800f-4ed4-b1a9-f9e0aefe2f24) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 02:39:28 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:39:28.268 143965 INFO neutron.agent.ovn.metadata.agent [-] Port 3c6a154e-800f-4ed4-b1a9-f9e0aefe2f24 in datapath 5bb49e8a-b939-4c79-851c-62c634be0272 bound to our chassis#033[00m
Dec  6 02:39:28 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:39:28.270 143965 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 5bb49e8a-b939-4c79-851c-62c634be0272#033[00m
Dec  6 02:39:28 np0005548731 NetworkManager[49182]: <info>  [1765006768.2784] device (tap3c6a154e-80): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec  6 02:39:28 np0005548731 NetworkManager[49182]: <info>  [1765006768.2791] device (tap3c6a154e-80): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec  6 02:39:28 np0005548731 systemd-machined[195355]: New machine qemu-62-instance-00000086.
Dec  6 02:39:28 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:39:28.285 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[bec96aea-fb77-4aae-af75-9dfbda2d98a0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:39:28 np0005548731 systemd[1]: Started Virtual Machine qemu-62-instance-00000086.
Dec  6 02:39:28 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:39:28.315 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[89eaef26-9e2e-43fa-b176-1a9ae27dec3d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:39:28 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:39:28.317 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[de19557b-ec6d-4957-a9a0-c5c359250791]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:39:28 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:39:28.344 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[fa931feb-7b8d-4837-adc5-cf5f09e6af50]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:39:28 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:39:28.361 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[9705b16a-b9a1-4eba-a13e-25a8f590353d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5bb49e8a-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:0b:bf:8b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 14, 'tx_packets': 19, 'rx_bytes': 868, 'tx_bytes': 942, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 14, 'tx_packets': 19, 'rx_bytes': 868, 'tx_bytes': 942, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 164], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 667055, 'reachable_time': 27555, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 291232, 'error': None, 'target': 'ovnmeta-5bb49e8a-b939-4c79-851c-62c634be0272', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  6 02:39:28 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:39:28.377 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[6416755b-86b3-4e57-9e27-3c9a3255b750]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap5bb49e8a-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 667067, 'tstamp': 667067}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 291234, 'error': None, 'target': 'ovnmeta-5bb49e8a-b939-4c79-851c-62c634be0272', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap5bb49e8a-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 667070, 'tstamp': 667070}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 291234, 'error': None, 'target': 'ovnmeta-5bb49e8a-b939-4c79-851c-62c634be0272', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  6 02:39:28 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:39:28.379 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5bb49e8a-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec  6 02:39:28 np0005548731 nova_compute[232433]: 2025-12-06 07:39:28.380 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 02:39:28 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:39:28.382 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5bb49e8a-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec  6 02:39:28 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:39:28.382 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec  6 02:39:28 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:39:28.382 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap5bb49e8a-b0, col_values=(('external_ids', {'iface-id': 'e4d89947-8fab-4c13-b2db-4eed875f77a0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec  6 02:39:28 np0005548731 nova_compute[232433]: 2025-12-06 07:39:28.383 232437 DEBUG nova.network.neutron [-] [instance: ffee41ca-ba3b-4787-8435-b3903ffb29a9] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec  6 02:39:28 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:39:28.382 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec  6 02:39:28 np0005548731 nova_compute[232433]: 2025-12-06 07:39:28.426 232437 INFO nova.compute.manager [-] [instance: ffee41ca-ba3b-4787-8435-b3903ffb29a9] Took 2.15 seconds to deallocate network for instance.
Dec  6 02:39:28 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e308 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:39:28 np0005548731 nova_compute[232433]: 2025-12-06 07:39:28.597 232437 DEBUG nova.compute.manager [req-55c2663a-0ed2-44bd-9269-846fa729d219 req-273efd3a-f004-438d-a950-c7b467c9e338 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: ffee41ca-ba3b-4787-8435-b3903ffb29a9] Received event network-vif-plugged-500715f2-2222-4003-b268-0369959f5e25 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec  6 02:39:28 np0005548731 nova_compute[232433]: 2025-12-06 07:39:28.597 232437 DEBUG oslo_concurrency.lockutils [req-55c2663a-0ed2-44bd-9269-846fa729d219 req-273efd3a-f004-438d-a950-c7b467c9e338 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "ffee41ca-ba3b-4787-8435-b3903ffb29a9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  6 02:39:28 np0005548731 nova_compute[232433]: 2025-12-06 07:39:28.598 232437 DEBUG oslo_concurrency.lockutils [req-55c2663a-0ed2-44bd-9269-846fa729d219 req-273efd3a-f004-438d-a950-c7b467c9e338 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "ffee41ca-ba3b-4787-8435-b3903ffb29a9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  6 02:39:28 np0005548731 nova_compute[232433]: 2025-12-06 07:39:28.598 232437 DEBUG oslo_concurrency.lockutils [req-55c2663a-0ed2-44bd-9269-846fa729d219 req-273efd3a-f004-438d-a950-c7b467c9e338 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "ffee41ca-ba3b-4787-8435-b3903ffb29a9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  6 02:39:28 np0005548731 nova_compute[232433]: 2025-12-06 07:39:28.598 232437 DEBUG nova.compute.manager [req-55c2663a-0ed2-44bd-9269-846fa729d219 req-273efd3a-f004-438d-a950-c7b467c9e338 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: ffee41ca-ba3b-4787-8435-b3903ffb29a9] No waiting events found dispatching network-vif-plugged-500715f2-2222-4003-b268-0369959f5e25 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec  6 02:39:28 np0005548731 nova_compute[232433]: 2025-12-06 07:39:28.598 232437 WARNING nova.compute.manager [req-55c2663a-0ed2-44bd-9269-846fa729d219 req-273efd3a-f004-438d-a950-c7b467c9e338 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: ffee41ca-ba3b-4787-8435-b3903ffb29a9] Received unexpected event network-vif-plugged-500715f2-2222-4003-b268-0369959f5e25 for instance with vm_state active and task_state deleting.
Dec  6 02:39:28 np0005548731 nova_compute[232433]: 2025-12-06 07:39:28.600 232437 DEBUG nova.compute.manager [req-f4a0f2ca-af28-4e0b-9a48-d5b26673986a req-8a298537-c6c6-4b69-9780-9f09a58d38e9 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: ffee41ca-ba3b-4787-8435-b3903ffb29a9] Received event network-vif-deleted-500715f2-2222-4003-b268-0369959f5e25 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec  6 02:39:28 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:39:28 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:39:28 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:39:28.920 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:39:28 np0005548731 nova_compute[232433]: 2025-12-06 07:39:28.975 232437 INFO nova.compute.manager [None req-20539e8d-dc7c-4a33-a976-258f208f4cfb 7960f8f407754e5d83bee65690a6b772 21cd37bffa864aaebe5c734ba468f466 - - default default] [instance: ffee41ca-ba3b-4787-8435-b3903ffb29a9] Took 0.55 seconds to detach 1 volumes for instance.
Dec  6 02:39:28 np0005548731 nova_compute[232433]: 2025-12-06 07:39:28.976 232437 DEBUG nova.compute.manager [None req-20539e8d-dc7c-4a33-a976-258f208f4cfb 7960f8f407754e5d83bee65690a6b772 21cd37bffa864aaebe5c734ba468f466 - - default default] [instance: ffee41ca-ba3b-4787-8435-b3903ffb29a9] Deleting volume: a032ab5d-3621-4ab1-82c3-2e8c7d4eeb9c _cleanup_volumes /usr/lib/python3.9/site-packages/nova/compute/manager.py:3217
Dec  6 02:39:29 np0005548731 nova_compute[232433]: 2025-12-06 07:39:29.331 232437 DEBUG nova.virt.driver [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Emitting event <LifecycleEvent: 1765006769.3314283, db714956-5ece-4543-8db8-4634df66962e => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec  6 02:39:29 np0005548731 nova_compute[232433]: 2025-12-06 07:39:29.332 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: db714956-5ece-4543-8db8-4634df66962e] VM Started (Lifecycle Event)
Dec  6 02:39:29 np0005548731 nova_compute[232433]: 2025-12-06 07:39:29.354 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: db714956-5ece-4543-8db8-4634df66962e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec  6 02:39:29 np0005548731 nova_compute[232433]: 2025-12-06 07:39:29.357 232437 DEBUG nova.virt.driver [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Emitting event <LifecycleEvent: 1765006769.3323429, db714956-5ece-4543-8db8-4634df66962e => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec  6 02:39:29 np0005548731 nova_compute[232433]: 2025-12-06 07:39:29.358 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: db714956-5ece-4543-8db8-4634df66962e] VM Paused (Lifecycle Event)
Dec  6 02:39:29 np0005548731 nova_compute[232433]: 2025-12-06 07:39:29.362 232437 DEBUG oslo_concurrency.lockutils [None req-20539e8d-dc7c-4a33-a976-258f208f4cfb 7960f8f407754e5d83bee65690a6b772 21cd37bffa864aaebe5c734ba468f466 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  6 02:39:29 np0005548731 nova_compute[232433]: 2025-12-06 07:39:29.362 232437 DEBUG oslo_concurrency.lockutils [None req-20539e8d-dc7c-4a33-a976-258f208f4cfb 7960f8f407754e5d83bee65690a6b772 21cd37bffa864aaebe5c734ba468f466 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  6 02:39:29 np0005548731 nova_compute[232433]: 2025-12-06 07:39:29.375 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: db714956-5ece-4543-8db8-4634df66962e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec  6 02:39:29 np0005548731 nova_compute[232433]: 2025-12-06 07:39:29.379 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: db714956-5ece-4543-8db8-4634df66962e] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec  6 02:39:29 np0005548731 nova_compute[232433]: 2025-12-06 07:39:29.416 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: db714956-5ece-4543-8db8-4634df66962e] During sync_power_state the instance has a pending task (spawning). Skip.
Dec  6 02:39:29 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e309 e309: 3 total, 3 up, 3 in
Dec  6 02:39:29 np0005548731 nova_compute[232433]: 2025-12-06 07:39:29.674 232437 DEBUG oslo_concurrency.processutils [None req-20539e8d-dc7c-4a33-a976-258f208f4cfb 7960f8f407754e5d83bee65690a6b772 21cd37bffa864aaebe5c734ba468f466 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  6 02:39:30 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 02:39:30 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/704595085' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 02:39:30 np0005548731 nova_compute[232433]: 2025-12-06 07:39:30.098 232437 DEBUG oslo_concurrency.processutils [None req-20539e8d-dc7c-4a33-a976-258f208f4cfb 7960f8f407754e5d83bee65690a6b772 21cd37bffa864aaebe5c734ba468f466 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.424s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  6 02:39:30 np0005548731 nova_compute[232433]: 2025-12-06 07:39:30.104 232437 DEBUG nova.compute.provider_tree [None req-20539e8d-dc7c-4a33-a976-258f208f4cfb 7960f8f407754e5d83bee65690a6b772 21cd37bffa864aaebe5c734ba468f466 - - default default] Inventory has not changed in ProviderTree for provider: 6d00757a-082f-486d-ae84-869a2ba2e6e7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec  6 02:39:30 np0005548731 nova_compute[232433]: 2025-12-06 07:39:30.120 232437 DEBUG nova.scheduler.client.report [None req-20539e8d-dc7c-4a33-a976-258f208f4cfb 7960f8f407754e5d83bee65690a6b772 21cd37bffa864aaebe5c734ba468f466 - - default default] Inventory has not changed for provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec  6 02:39:30 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:39:30 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:39:30 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:39:30.121 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:39:30 np0005548731 nova_compute[232433]: 2025-12-06 07:39:30.147 232437 DEBUG oslo_concurrency.lockutils [None req-20539e8d-dc7c-4a33-a976-258f208f4cfb 7960f8f407754e5d83bee65690a6b772 21cd37bffa864aaebe5c734ba468f466 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.785s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  6 02:39:30 np0005548731 nova_compute[232433]: 2025-12-06 07:39:30.201 232437 INFO nova.scheduler.client.report [None req-20539e8d-dc7c-4a33-a976-258f208f4cfb 7960f8f407754e5d83bee65690a6b772 21cd37bffa864aaebe5c734ba468f466 - - default default] Deleted allocations for instance ffee41ca-ba3b-4787-8435-b3903ffb29a9
Dec  6 02:39:30 np0005548731 nova_compute[232433]: 2025-12-06 07:39:30.401 232437 DEBUG oslo_concurrency.lockutils [None req-20539e8d-dc7c-4a33-a976-258f208f4cfb 7960f8f407754e5d83bee65690a6b772 21cd37bffa864aaebe5c734ba468f466 - - default default] Lock "ffee41ca-ba3b-4787-8435-b3903ffb29a9" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.499s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  6 02:39:30 np0005548731 nova_compute[232433]: 2025-12-06 07:39:30.512 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 02:39:30 np0005548731 podman[291300]: 2025-12-06 07:39:30.900513487 +0000 UTC m=+0.058872325 container health_status 26c2ce9bdb83c6db5ccad7f5b2a7cb4e9354ab0235ad2f942ea42c8feda2142b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Dec  6 02:39:30 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:39:30 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:39:30 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:39:30.923 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:39:30 np0005548731 podman[291301]: 2025-12-06 07:39:30.926981332 +0000 UTC m=+0.085522145 container health_status a118c3becf56cfbcae208065771ebf10a65162bb7b96389971293e534823fb78 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Dec  6 02:39:30 np0005548731 podman[291302]: 2025-12-06 07:39:30.932834464 +0000 UTC m=+0.091189352 container health_status ae4fd08cdec31d6ae2a6b91ffff7deb6ca5a8375cb52ee4b685cc6f25796824e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=multipathd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125)
Dec  6 02:39:31 np0005548731 nova_compute[232433]: 2025-12-06 07:39:31.312 232437 DEBUG nova.compute.manager [req-3b602ec7-23e4-4c03-8ee1-cf68253a7749 req-f342b933-c189-4524-b46e-de5c09245bfd 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: db714956-5ece-4543-8db8-4634df66962e] Received event network-vif-plugged-3c6a154e-800f-4ed4-b1a9-f9e0aefe2f24 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec  6 02:39:31 np0005548731 nova_compute[232433]: 2025-12-06 07:39:31.312 232437 DEBUG oslo_concurrency.lockutils [req-3b602ec7-23e4-4c03-8ee1-cf68253a7749 req-f342b933-c189-4524-b46e-de5c09245bfd 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "db714956-5ece-4543-8db8-4634df66962e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  6 02:39:31 np0005548731 nova_compute[232433]: 2025-12-06 07:39:31.312 232437 DEBUG oslo_concurrency.lockutils [req-3b602ec7-23e4-4c03-8ee1-cf68253a7749 req-f342b933-c189-4524-b46e-de5c09245bfd 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "db714956-5ece-4543-8db8-4634df66962e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  6 02:39:31 np0005548731 nova_compute[232433]: 2025-12-06 07:39:31.313 232437 DEBUG oslo_concurrency.lockutils [req-3b602ec7-23e4-4c03-8ee1-cf68253a7749 req-f342b933-c189-4524-b46e-de5c09245bfd 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "db714956-5ece-4543-8db8-4634df66962e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  6 02:39:31 np0005548731 nova_compute[232433]: 2025-12-06 07:39:31.313 232437 DEBUG nova.compute.manager [req-3b602ec7-23e4-4c03-8ee1-cf68253a7749 req-f342b933-c189-4524-b46e-de5c09245bfd 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: db714956-5ece-4543-8db8-4634df66962e] Processing event network-vif-plugged-3c6a154e-800f-4ed4-b1a9-f9e0aefe2f24 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec  6 02:39:31 np0005548731 nova_compute[232433]: 2025-12-06 07:39:31.313 232437 DEBUG nova.compute.manager [req-3b602ec7-23e4-4c03-8ee1-cf68253a7749 req-f342b933-c189-4524-b46e-de5c09245bfd 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: db714956-5ece-4543-8db8-4634df66962e] Received event network-vif-plugged-3c6a154e-800f-4ed4-b1a9-f9e0aefe2f24 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec  6 02:39:31 np0005548731 nova_compute[232433]: 2025-12-06 07:39:31.313 232437 DEBUG oslo_concurrency.lockutils [req-3b602ec7-23e4-4c03-8ee1-cf68253a7749 req-f342b933-c189-4524-b46e-de5c09245bfd 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "db714956-5ece-4543-8db8-4634df66962e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  6 02:39:31 np0005548731 nova_compute[232433]: 2025-12-06 07:39:31.313 232437 DEBUG oslo_concurrency.lockutils [req-3b602ec7-23e4-4c03-8ee1-cf68253a7749 req-f342b933-c189-4524-b46e-de5c09245bfd 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "db714956-5ece-4543-8db8-4634df66962e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  6 02:39:31 np0005548731 nova_compute[232433]: 2025-12-06 07:39:31.314 232437 DEBUG oslo_concurrency.lockutils [req-3b602ec7-23e4-4c03-8ee1-cf68253a7749 req-f342b933-c189-4524-b46e-de5c09245bfd 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "db714956-5ece-4543-8db8-4634df66962e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  6 02:39:31 np0005548731 nova_compute[232433]: 2025-12-06 07:39:31.314 232437 DEBUG nova.compute.manager [req-3b602ec7-23e4-4c03-8ee1-cf68253a7749 req-f342b933-c189-4524-b46e-de5c09245bfd 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: db714956-5ece-4543-8db8-4634df66962e] No waiting events found dispatching network-vif-plugged-3c6a154e-800f-4ed4-b1a9-f9e0aefe2f24 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec  6 02:39:31 np0005548731 nova_compute[232433]: 2025-12-06 07:39:31.314 232437 WARNING nova.compute.manager [req-3b602ec7-23e4-4c03-8ee1-cf68253a7749 req-f342b933-c189-4524-b46e-de5c09245bfd 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: db714956-5ece-4543-8db8-4634df66962e] Received unexpected event network-vif-plugged-3c6a154e-800f-4ed4-b1a9-f9e0aefe2f24 for instance with vm_state building and task_state spawning.
Dec  6 02:39:31 np0005548731 nova_compute[232433]: 2025-12-06 07:39:31.315 232437 DEBUG nova.compute.manager [None req-b4986752-70ed-481a-b271-ae862310c404 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] [instance: db714956-5ece-4543-8db8-4634df66962e] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec  6 02:39:31 np0005548731 nova_compute[232433]: 2025-12-06 07:39:31.318 232437 DEBUG nova.virt.driver [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Emitting event <LifecycleEvent: 1765006771.3181765, db714956-5ece-4543-8db8-4634df66962e => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec  6 02:39:31 np0005548731 nova_compute[232433]: 2025-12-06 07:39:31.318 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: db714956-5ece-4543-8db8-4634df66962e] VM Resumed (Lifecycle Event)
Dec  6 02:39:31 np0005548731 nova_compute[232433]: 2025-12-06 07:39:31.320 232437 DEBUG nova.virt.libvirt.driver [None req-b4986752-70ed-481a-b271-ae862310c404 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] [instance: db714956-5ece-4543-8db8-4634df66962e] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec  6 02:39:31 np0005548731 nova_compute[232433]: 2025-12-06 07:39:31.323 232437 INFO nova.virt.libvirt.driver [-] [instance: db714956-5ece-4543-8db8-4634df66962e] Instance spawned successfully.
Dec  6 02:39:31 np0005548731 nova_compute[232433]: 2025-12-06 07:39:31.323 232437 DEBUG nova.virt.libvirt.driver [None req-b4986752-70ed-481a-b271-ae862310c404 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] [instance: db714956-5ece-4543-8db8-4634df66962e] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec  6 02:39:31 np0005548731 nova_compute[232433]: 2025-12-06 07:39:31.374 232437 DEBUG nova.virt.libvirt.driver [None req-b4986752-70ed-481a-b271-ae862310c404 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] [instance: db714956-5ece-4543-8db8-4634df66962e] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec  6 02:39:31 np0005548731 nova_compute[232433]: 2025-12-06 07:39:31.375 232437 DEBUG nova.virt.libvirt.driver [None req-b4986752-70ed-481a-b271-ae862310c404 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] [instance: db714956-5ece-4543-8db8-4634df66962e] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 02:39:31 np0005548731 nova_compute[232433]: 2025-12-06 07:39:31.375 232437 DEBUG nova.virt.libvirt.driver [None req-b4986752-70ed-481a-b271-ae862310c404 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] [instance: db714956-5ece-4543-8db8-4634df66962e] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 02:39:31 np0005548731 nova_compute[232433]: 2025-12-06 07:39:31.375 232437 DEBUG nova.virt.libvirt.driver [None req-b4986752-70ed-481a-b271-ae862310c404 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] [instance: db714956-5ece-4543-8db8-4634df66962e] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 02:39:31 np0005548731 nova_compute[232433]: 2025-12-06 07:39:31.376 232437 DEBUG nova.virt.libvirt.driver [None req-b4986752-70ed-481a-b271-ae862310c404 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] [instance: db714956-5ece-4543-8db8-4634df66962e] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 02:39:31 np0005548731 nova_compute[232433]: 2025-12-06 07:39:31.376 232437 DEBUG nova.virt.libvirt.driver [None req-b4986752-70ed-481a-b271-ae862310c404 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] [instance: db714956-5ece-4543-8db8-4634df66962e] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 02:39:31 np0005548731 nova_compute[232433]: 2025-12-06 07:39:31.380 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: db714956-5ece-4543-8db8-4634df66962e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:39:31 np0005548731 nova_compute[232433]: 2025-12-06 07:39:31.384 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: db714956-5ece-4543-8db8-4634df66962e] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  6 02:39:31 np0005548731 nova_compute[232433]: 2025-12-06 07:39:31.423 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: db714956-5ece-4543-8db8-4634df66962e] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  6 02:39:31 np0005548731 nova_compute[232433]: 2025-12-06 07:39:31.479 232437 INFO nova.compute.manager [None req-b4986752-70ed-481a-b271-ae862310c404 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] [instance: db714956-5ece-4543-8db8-4634df66962e] Took 11.58 seconds to spawn the instance on the hypervisor.#033[00m
Dec  6 02:39:31 np0005548731 nova_compute[232433]: 2025-12-06 07:39:31.479 232437 DEBUG nova.compute.manager [None req-b4986752-70ed-481a-b271-ae862310c404 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] [instance: db714956-5ece-4543-8db8-4634df66962e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:39:31 np0005548731 nova_compute[232433]: 2025-12-06 07:39:31.546 232437 INFO nova.compute.manager [None req-b4986752-70ed-481a-b271-ae862310c404 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] [instance: db714956-5ece-4543-8db8-4634df66962e] Took 22.47 seconds to build instance.#033[00m
Dec  6 02:39:31 np0005548731 nova_compute[232433]: 2025-12-06 07:39:31.577 232437 DEBUG oslo_concurrency.lockutils [None req-b4986752-70ed-481a-b271-ae862310c404 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Lock "db714956-5ece-4543-8db8-4634df66962e" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 22.600s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:39:32 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:39:32 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:39:32 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:39:32.123 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:39:32 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:39:32 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:39:32 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:39:32.925 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:39:33 np0005548731 nova_compute[232433]: 2025-12-06 07:39:33.151 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:39:33 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:39:34 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:39:34 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:39:34 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:39:34.125 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:39:34 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:39:34 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:39:34 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:39:34.928 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:39:35 np0005548731 nova_compute[232433]: 2025-12-06 07:39:35.515 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:39:36 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:39:36 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:39:36 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:39:36.127 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:39:36 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:39:36 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:39:36 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:39:36.931 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:39:38 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:39:38 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:39:38 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:39:38.129 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:39:38 np0005548731 nova_compute[232433]: 2025-12-06 07:39:38.154 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:39:38 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:39:38 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e310 e310: 3 total, 3 up, 3 in
Dec  6 02:39:38 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:39:38 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:39:38 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:39:38.933 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:39:39 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e311 e311: 3 total, 3 up, 3 in
Dec  6 02:39:40 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:39:40 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:39:40 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:39:40.131 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:39:40 np0005548731 ovn_controller[133927]: 2025-12-06T07:39:40Z|00607|binding|INFO|Releasing lport e4d89947-8fab-4c13-b2db-4eed875f77a0 from this chassis (sb_readonly=0)
Dec  6 02:39:40 np0005548731 nova_compute[232433]: 2025-12-06 07:39:40.285 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:39:40 np0005548731 nova_compute[232433]: 2025-12-06 07:39:40.338 232437 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765006765.3374465, ffee41ca-ba3b-4787-8435-b3903ffb29a9 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 02:39:40 np0005548731 nova_compute[232433]: 2025-12-06 07:39:40.338 232437 INFO nova.compute.manager [-] [instance: ffee41ca-ba3b-4787-8435-b3903ffb29a9] VM Stopped (Lifecycle Event)#033[00m
Dec  6 02:39:40 np0005548731 nova_compute[232433]: 2025-12-06 07:39:40.423 232437 DEBUG nova.compute.manager [None req-dd44040d-d644-4cce-846f-47a6382b0503 - - - - - -] [instance: ffee41ca-ba3b-4787-8435-b3903ffb29a9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:39:40 np0005548731 nova_compute[232433]: 2025-12-06 07:39:40.517 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:39:40 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:39:40 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:39:40 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:39:40.936 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:39:41 np0005548731 nova_compute[232433]: 2025-12-06 07:39:41.843 232437 DEBUG oslo_concurrency.lockutils [None req-206cb68f-638e-49ba-868d-74766edf8f0e 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Acquiring lock "db714956-5ece-4543-8db8-4634df66962e" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:39:41 np0005548731 nova_compute[232433]: 2025-12-06 07:39:41.843 232437 DEBUG oslo_concurrency.lockutils [None req-206cb68f-638e-49ba-868d-74766edf8f0e 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Lock "db714956-5ece-4543-8db8-4634df66962e" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:39:41 np0005548731 nova_compute[232433]: 2025-12-06 07:39:41.843 232437 DEBUG oslo_concurrency.lockutils [None req-206cb68f-638e-49ba-868d-74766edf8f0e 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Acquiring lock "db714956-5ece-4543-8db8-4634df66962e-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:39:41 np0005548731 nova_compute[232433]: 2025-12-06 07:39:41.844 232437 DEBUG oslo_concurrency.lockutils [None req-206cb68f-638e-49ba-868d-74766edf8f0e 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Lock "db714956-5ece-4543-8db8-4634df66962e-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:39:41 np0005548731 nova_compute[232433]: 2025-12-06 07:39:41.844 232437 DEBUG oslo_concurrency.lockutils [None req-206cb68f-638e-49ba-868d-74766edf8f0e 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Lock "db714956-5ece-4543-8db8-4634df66962e-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:39:41 np0005548731 nova_compute[232433]: 2025-12-06 07:39:41.845 232437 INFO nova.compute.manager [None req-206cb68f-638e-49ba-868d-74766edf8f0e 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] [instance: db714956-5ece-4543-8db8-4634df66962e] Terminating instance#033[00m
Dec  6 02:39:41 np0005548731 nova_compute[232433]: 2025-12-06 07:39:41.846 232437 DEBUG nova.compute.manager [None req-206cb68f-638e-49ba-868d-74766edf8f0e 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] [instance: db714956-5ece-4543-8db8-4634df66962e] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec  6 02:39:42 np0005548731 kernel: tap3c6a154e-80 (unregistering): left promiscuous mode
Dec  6 02:39:42 np0005548731 NetworkManager[49182]: <info>  [1765006782.0167] device (tap3c6a154e-80): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec  6 02:39:42 np0005548731 ovn_controller[133927]: 2025-12-06T07:39:42Z|00608|binding|INFO|Releasing lport 3c6a154e-800f-4ed4-b1a9-f9e0aefe2f24 from this chassis (sb_readonly=0)
Dec  6 02:39:42 np0005548731 ovn_controller[133927]: 2025-12-06T07:39:42Z|00609|binding|INFO|Setting lport 3c6a154e-800f-4ed4-b1a9-f9e0aefe2f24 down in Southbound
Dec  6 02:39:42 np0005548731 ovn_controller[133927]: 2025-12-06T07:39:42Z|00610|binding|INFO|Removing iface tap3c6a154e-80 ovn-installed in OVS
Dec  6 02:39:42 np0005548731 nova_compute[232433]: 2025-12-06 07:39:42.026 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:39:42 np0005548731 nova_compute[232433]: 2025-12-06 07:39:42.028 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:39:42 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:39:42.048 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:33:3f:13 10.100.0.10'], port_security=['fa:16:3e:33:3f:13 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'db714956-5ece-4543-8db8-4634df66962e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5bb49e8a-b939-4c79-851c-62c634be0272', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '833f4cf9f5a64b2ab94c3bf330353a31', 'neutron:revision_number': '4', 'neutron:security_group_ids': '384b06d0-71ab-455f-9033-7290730c5c8b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=56d5d28a-0d18-4549-b1d7-8420194c6348, chassis=[], tunnel_key=8, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], logical_port=3c6a154e-800f-4ed4-b1a9-f9e0aefe2f24) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 02:39:42 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:39:42.050 143965 INFO neutron.agent.ovn.metadata.agent [-] Port 3c6a154e-800f-4ed4-b1a9-f9e0aefe2f24 in datapath 5bb49e8a-b939-4c79-851c-62c634be0272 unbound from our chassis#033[00m
Dec  6 02:39:42 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:39:42.052 143965 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 5bb49e8a-b939-4c79-851c-62c634be0272#033[00m
Dec  6 02:39:42 np0005548731 nova_compute[232433]: 2025-12-06 07:39:42.057 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:39:42 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:39:42.067 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[4ff96503-77db-4754-be83-819803bc34c0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:39:42 np0005548731 systemd[1]: machine-qemu\x2d62\x2dinstance\x2d00000086.scope: Deactivated successfully.
Dec  6 02:39:42 np0005548731 systemd[1]: machine-qemu\x2d62\x2dinstance\x2d00000086.scope: Consumed 11.654s CPU time.
Dec  6 02:39:42 np0005548731 systemd-machined[195355]: Machine qemu-62-instance-00000086 terminated.
Dec  6 02:39:42 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:39:42.103 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[344ea0c8-b7f7-4fb0-8fd1-caf5a301ae2e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:39:42 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:39:42.107 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[13acbde8-8419-4917-bd48-a678b39593e8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:39:42 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:39:42 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:39:42 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:39:42.133 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:39:42 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:39:42.137 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[47c2d2f0-d545-4954-b0bd-5732e2165f5d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:39:42 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:39:42.156 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[04605916-7c7d-477b-af43-2726079e2a0a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5bb49e8a-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:0b:bf:8b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 14, 'tx_packets': 21, 'rx_bytes': 868, 'tx_bytes': 1026, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 14, 'tx_packets': 21, 'rx_bytes': 868, 'tx_bytes': 1026, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 164], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 667055, 'reachable_time': 27555, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 291432, 'error': None, 'target': 'ovnmeta-5bb49e8a-b939-4c79-851c-62c634be0272', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:39:42 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:39:42.171 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[0481f314-899f-4a7a-9f9f-b10f629d9116]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap5bb49e8a-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 667067, 'tstamp': 667067}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 291433, 'error': None, 'target': 'ovnmeta-5bb49e8a-b939-4c79-851c-62c634be0272', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap5bb49e8a-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 667070, 'tstamp': 667070}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 291433, 'error': None, 'target': 'ovnmeta-5bb49e8a-b939-4c79-851c-62c634be0272', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:39:42 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:39:42.173 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5bb49e8a-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:39:42 np0005548731 nova_compute[232433]: 2025-12-06 07:39:42.174 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:39:42 np0005548731 nova_compute[232433]: 2025-12-06 07:39:42.178 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:39:42 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:39:42.178 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5bb49e8a-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:39:42 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:39:42.178 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  6 02:39:42 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:39:42.179 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap5bb49e8a-b0, col_values=(('external_ids', {'iface-id': 'e4d89947-8fab-4c13-b2db-4eed875f77a0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:39:42 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:39:42.179 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  6 02:39:42 np0005548731 kernel: tap3c6a154e-80: entered promiscuous mode
Dec  6 02:39:42 np0005548731 kernel: tap3c6a154e-80 (unregistering): left promiscuous mode
Dec  6 02:39:42 np0005548731 NetworkManager[49182]: <info>  [1765006782.2673] manager: (tap3c6a154e-80): new Tun device (/org/freedesktop/NetworkManager/Devices/284)
Dec  6 02:39:42 np0005548731 ovn_controller[133927]: 2025-12-06T07:39:42Z|00611|binding|INFO|Claiming lport 3c6a154e-800f-4ed4-b1a9-f9e0aefe2f24 for this chassis.
Dec  6 02:39:42 np0005548731 ovn_controller[133927]: 2025-12-06T07:39:42Z|00612|binding|INFO|3c6a154e-800f-4ed4-b1a9-f9e0aefe2f24: Claiming fa:16:3e:33:3f:13 10.100.0.10
Dec  6 02:39:42 np0005548731 nova_compute[232433]: 2025-12-06 07:39:42.269 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:39:42 np0005548731 nova_compute[232433]: 2025-12-06 07:39:42.288 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:39:42 np0005548731 ovn_controller[133927]: 2025-12-06T07:39:42Z|00613|if_status|INFO|Dropped 2 log messages in last 774 seconds (most recently, 774 seconds ago) due to excessive rate
Dec  6 02:39:42 np0005548731 ovn_controller[133927]: 2025-12-06T07:39:42Z|00614|if_status|INFO|Not setting lport 3c6a154e-800f-4ed4-b1a9-f9e0aefe2f24 down as sb is readonly
Dec  6 02:39:42 np0005548731 nova_compute[232433]: 2025-12-06 07:39:42.292 232437 INFO nova.virt.libvirt.driver [-] [instance: db714956-5ece-4543-8db8-4634df66962e] Instance destroyed successfully.#033[00m
Dec  6 02:39:42 np0005548731 nova_compute[232433]: 2025-12-06 07:39:42.292 232437 DEBUG nova.objects.instance [None req-206cb68f-638e-49ba-868d-74766edf8f0e 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Lazy-loading 'resources' on Instance uuid db714956-5ece-4543-8db8-4634df66962e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:39:42 np0005548731 ovn_controller[133927]: 2025-12-06T07:39:42Z|00615|binding|INFO|Releasing lport 3c6a154e-800f-4ed4-b1a9-f9e0aefe2f24 from this chassis (sb_readonly=0)
Dec  6 02:39:42 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:39:42.368 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:33:3f:13 10.100.0.10'], port_security=['fa:16:3e:33:3f:13 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'db714956-5ece-4543-8db8-4634df66962e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5bb49e8a-b939-4c79-851c-62c634be0272', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '833f4cf9f5a64b2ab94c3bf330353a31', 'neutron:revision_number': '4', 'neutron:security_group_ids': '384b06d0-71ab-455f-9033-7290730c5c8b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=56d5d28a-0d18-4549-b1d7-8420194c6348, chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], tunnel_key=8, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], logical_port=3c6a154e-800f-4ed4-b1a9-f9e0aefe2f24) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 02:39:42 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:39:42.370 143965 INFO neutron.agent.ovn.metadata.agent [-] Port 3c6a154e-800f-4ed4-b1a9-f9e0aefe2f24 in datapath 5bb49e8a-b939-4c79-851c-62c634be0272 bound to our chassis#033[00m
Dec  6 02:39:42 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:39:42.371 143965 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 5bb49e8a-b939-4c79-851c-62c634be0272#033[00m
Dec  6 02:39:42 np0005548731 nova_compute[232433]: 2025-12-06 07:39:42.381 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:39:42 np0005548731 nova_compute[232433]: 2025-12-06 07:39:42.385 232437 DEBUG nova.virt.libvirt.vif [None req-206cb68f-638e-49ba-868d-74766edf8f0e 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T07:39:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-AttachVolumeMultiAttachTest-server-1557370704',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-attachvolumemultiattachtest-server-1557370704',id=134,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-06T07:39:31Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='833f4cf9f5a64b2ab94c3bf330353a31',ramdisk_id='',reservation_id='r-fk5h503m',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-AttachVolumeMultiAttachTest-690984293',owner_user_name='tempest-Atta
chVolumeMultiAttachTest-690984293-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T07:39:31Z,user_data=None,user_id='605b5481e0c944048e6a67046c30d693',uuid=db714956-5ece-4543-8db8-4634df66962e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "3c6a154e-800f-4ed4-b1a9-f9e0aefe2f24", "address": "fa:16:3e:33:3f:13", "network": {"id": "5bb49e8a-b939-4c79-851c-62c634be0272", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-50482264-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "833f4cf9f5a64b2ab94c3bf330353a31", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3c6a154e-80", "ovs_interfaceid": "3c6a154e-800f-4ed4-b1a9-f9e0aefe2f24", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec  6 02:39:42 np0005548731 nova_compute[232433]: 2025-12-06 07:39:42.385 232437 DEBUG nova.network.os_vif_util [None req-206cb68f-638e-49ba-868d-74766edf8f0e 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Converting VIF {"id": "3c6a154e-800f-4ed4-b1a9-f9e0aefe2f24", "address": "fa:16:3e:33:3f:13", "network": {"id": "5bb49e8a-b939-4c79-851c-62c634be0272", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-50482264-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "833f4cf9f5a64b2ab94c3bf330353a31", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3c6a154e-80", "ovs_interfaceid": "3c6a154e-800f-4ed4-b1a9-f9e0aefe2f24", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 02:39:42 np0005548731 nova_compute[232433]: 2025-12-06 07:39:42.386 232437 DEBUG nova.network.os_vif_util [None req-206cb68f-638e-49ba-868d-74766edf8f0e 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:33:3f:13,bridge_name='br-int',has_traffic_filtering=True,id=3c6a154e-800f-4ed4-b1a9-f9e0aefe2f24,network=Network(5bb49e8a-b939-4c79-851c-62c634be0272),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3c6a154e-80') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 02:39:42 np0005548731 nova_compute[232433]: 2025-12-06 07:39:42.387 232437 DEBUG os_vif [None req-206cb68f-638e-49ba-868d-74766edf8f0e 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:33:3f:13,bridge_name='br-int',has_traffic_filtering=True,id=3c6a154e-800f-4ed4-b1a9-f9e0aefe2f24,network=Network(5bb49e8a-b939-4c79-851c-62c634be0272),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3c6a154e-80') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec  6 02:39:42 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:39:42.388 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[54a5b01d-b7c9-4206-86de-149ed3fda4ce]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:39:42 np0005548731 nova_compute[232433]: 2025-12-06 07:39:42.389 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:39:42 np0005548731 nova_compute[232433]: 2025-12-06 07:39:42.389 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3c6a154e-80, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:39:42 np0005548731 nova_compute[232433]: 2025-12-06 07:39:42.392 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:39:42 np0005548731 nova_compute[232433]: 2025-12-06 07:39:42.394 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  6 02:39:42 np0005548731 nova_compute[232433]: 2025-12-06 07:39:42.396 232437 INFO os_vif [None req-206cb68f-638e-49ba-868d-74766edf8f0e 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:33:3f:13,bridge_name='br-int',has_traffic_filtering=True,id=3c6a154e-800f-4ed4-b1a9-f9e0aefe2f24,network=Network(5bb49e8a-b939-4c79-851c-62c634be0272),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3c6a154e-80')#033[00m
Dec  6 02:39:42 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:39:42.420 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[e752a79e-0d02-46f5-a701-6673c51a6fea]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:39:42 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:39:42.423 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[d402a3f0-50e7-441f-8c06-600f497bc844]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:39:42 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:39:42.459 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[d0980a17-1164-4f3f-bb35-55c177412922]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:39:42 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:39:42.476 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[bfe0da02-5a5a-4751-94ed-99edef781212]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5bb49e8a-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:0b:bf:8b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 14, 'tx_packets': 23, 'rx_bytes': 868, 'tx_bytes': 1110, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 14, 'tx_packets': 23, 'rx_bytes': 868, 'tx_bytes': 1110, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 164], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 667055, 'reachable_time': 27555, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 291445, 'error': None, 'target': 'ovnmeta-5bb49e8a-b939-4c79-851c-62c634be0272', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:39:42 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:39:42.494 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[abfb8bb2-d25d-4bb1-97d4-2494fa86f98d]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap5bb49e8a-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 667067, 'tstamp': 667067}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 291446, 'error': None, 'target': 'ovnmeta-5bb49e8a-b939-4c79-851c-62c634be0272', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap5bb49e8a-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 667070, 'tstamp': 667070}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 291446, 'error': None, 'target': 'ovnmeta-5bb49e8a-b939-4c79-851c-62c634be0272', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:39:42 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:39:42.495 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5bb49e8a-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:39:42 np0005548731 nova_compute[232433]: 2025-12-06 07:39:42.497 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:39:42 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:39:42.498 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5bb49e8a-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:39:42 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:39:42.498 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  6 02:39:42 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:39:42.499 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap5bb49e8a-b0, col_values=(('external_ids', {'iface-id': 'e4d89947-8fab-4c13-b2db-4eed875f77a0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:39:42 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:39:42.499 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  6 02:39:42 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:39:42.585 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:33:3f:13 10.100.0.10'], port_security=['fa:16:3e:33:3f:13 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'db714956-5ece-4543-8db8-4634df66962e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5bb49e8a-b939-4c79-851c-62c634be0272', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '833f4cf9f5a64b2ab94c3bf330353a31', 'neutron:revision_number': '4', 'neutron:security_group_ids': '384b06d0-71ab-455f-9033-7290730c5c8b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=56d5d28a-0d18-4549-b1d7-8420194c6348, chassis=[], tunnel_key=8, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], logical_port=3c6a154e-800f-4ed4-b1a9-f9e0aefe2f24) old=Port_Binding(chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 02:39:42 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:39:42.586 143965 INFO neutron.agent.ovn.metadata.agent [-] Port 3c6a154e-800f-4ed4-b1a9-f9e0aefe2f24 in datapath 5bb49e8a-b939-4c79-851c-62c634be0272 unbound from our chassis#033[00m
Dec  6 02:39:42 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:39:42.587 143965 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 5bb49e8a-b939-4c79-851c-62c634be0272#033[00m
Dec  6 02:39:42 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:39:42.603 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[e9ab98e7-2879-4a87-bdd5-294b87db4c9d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:39:42 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:39:42.631 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[24daee4e-2b28-4387-804f-4d072f984983]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:39:42 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:39:42.635 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[432502fc-48ba-47da-9cd9-c3c0d768bdaf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:39:42 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:39:42.661 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[b8aa7996-0f1d-4e60-9172-b891564edd30]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:39:42 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:39:42.677 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[4cb5fbc1-0927-479b-a5d2-a3aa0f6585ca]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5bb49e8a-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:0b:bf:8b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 14, 'tx_packets': 25, 'rx_bytes': 868, 'tx_bytes': 1194, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 14, 'tx_packets': 25, 'rx_bytes': 868, 'tx_bytes': 1194, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 164], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 667055, 'reachable_time': 27555, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 291452, 'error': None, 'target': 'ovnmeta-5bb49e8a-b939-4c79-851c-62c634be0272', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:39:42 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:39:42.693 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[4f49ad00-3e3f-4b4b-910e-d1b95f1e905e]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap5bb49e8a-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 667067, 'tstamp': 667067}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 291453, 'error': None, 'target': 'ovnmeta-5bb49e8a-b939-4c79-851c-62c634be0272', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap5bb49e8a-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 667070, 'tstamp': 667070}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 291453, 'error': None, 'target': 'ovnmeta-5bb49e8a-b939-4c79-851c-62c634be0272', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:39:42 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:39:42.695 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5bb49e8a-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec  6 02:39:42 np0005548731 nova_compute[232433]: 2025-12-06 07:39:42.696 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 02:39:42 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:39:42.700 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5bb49e8a-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec  6 02:39:42 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:39:42.700 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec  6 02:39:42 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:39:42.701 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap5bb49e8a-b0, col_values=(('external_ids', {'iface-id': 'e4d89947-8fab-4c13-b2db-4eed875f77a0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec  6 02:39:42 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:39:42.701 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec  6 02:39:42 np0005548731 nova_compute[232433]: 2025-12-06 07:39:42.928 232437 INFO nova.virt.libvirt.driver [None req-206cb68f-638e-49ba-868d-74766edf8f0e 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] [instance: db714956-5ece-4543-8db8-4634df66962e] Deleting instance files /var/lib/nova/instances/db714956-5ece-4543-8db8-4634df66962e_del
Dec  6 02:39:42 np0005548731 nova_compute[232433]: 2025-12-06 07:39:42.929 232437 INFO nova.virt.libvirt.driver [None req-206cb68f-638e-49ba-868d-74766edf8f0e 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] [instance: db714956-5ece-4543-8db8-4634df66962e] Deletion of /var/lib/nova/instances/db714956-5ece-4543-8db8-4634df66962e_del complete
Dec  6 02:39:42 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:39:42 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:39:42 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:39:42.939 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:39:43 np0005548731 nova_compute[232433]: 2025-12-06 07:39:43.156 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 02:39:43 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e311 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:39:43 np0005548731 nova_compute[232433]: 2025-12-06 07:39:43.834 232437 DEBUG nova.compute.manager [req-42829fa4-a077-45d9-9db3-e5f9c6feddbb req-96701b10-e929-4dbb-ba3d-d196277fb242 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: db714956-5ece-4543-8db8-4634df66962e] Received event network-vif-unplugged-3c6a154e-800f-4ed4-b1a9-f9e0aefe2f24 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec  6 02:39:43 np0005548731 nova_compute[232433]: 2025-12-06 07:39:43.835 232437 DEBUG oslo_concurrency.lockutils [req-42829fa4-a077-45d9-9db3-e5f9c6feddbb req-96701b10-e929-4dbb-ba3d-d196277fb242 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "db714956-5ece-4543-8db8-4634df66962e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  6 02:39:43 np0005548731 nova_compute[232433]: 2025-12-06 07:39:43.835 232437 DEBUG oslo_concurrency.lockutils [req-42829fa4-a077-45d9-9db3-e5f9c6feddbb req-96701b10-e929-4dbb-ba3d-d196277fb242 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "db714956-5ece-4543-8db8-4634df66962e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  6 02:39:43 np0005548731 nova_compute[232433]: 2025-12-06 07:39:43.835 232437 DEBUG oslo_concurrency.lockutils [req-42829fa4-a077-45d9-9db3-e5f9c6feddbb req-96701b10-e929-4dbb-ba3d-d196277fb242 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "db714956-5ece-4543-8db8-4634df66962e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  6 02:39:43 np0005548731 nova_compute[232433]: 2025-12-06 07:39:43.835 232437 DEBUG nova.compute.manager [req-42829fa4-a077-45d9-9db3-e5f9c6feddbb req-96701b10-e929-4dbb-ba3d-d196277fb242 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: db714956-5ece-4543-8db8-4634df66962e] No waiting events found dispatching network-vif-unplugged-3c6a154e-800f-4ed4-b1a9-f9e0aefe2f24 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec  6 02:39:43 np0005548731 nova_compute[232433]: 2025-12-06 07:39:43.835 232437 DEBUG nova.compute.manager [req-42829fa4-a077-45d9-9db3-e5f9c6feddbb req-96701b10-e929-4dbb-ba3d-d196277fb242 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: db714956-5ece-4543-8db8-4634df66962e] Received event network-vif-unplugged-3c6a154e-800f-4ed4-b1a9-f9e0aefe2f24 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Dec  6 02:39:43 np0005548731 nova_compute[232433]: 2025-12-06 07:39:43.990 232437 INFO nova.compute.manager [None req-206cb68f-638e-49ba-868d-74766edf8f0e 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] [instance: db714956-5ece-4543-8db8-4634df66962e] Took 2.14 seconds to destroy the instance on the hypervisor.
Dec  6 02:39:43 np0005548731 nova_compute[232433]: 2025-12-06 07:39:43.991 232437 DEBUG oslo.service.loopingcall [None req-206cb68f-638e-49ba-868d-74766edf8f0e 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec  6 02:39:43 np0005548731 nova_compute[232433]: 2025-12-06 07:39:43.991 232437 DEBUG nova.compute.manager [-] [instance: db714956-5ece-4543-8db8-4634df66962e] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec  6 02:39:43 np0005548731 nova_compute[232433]: 2025-12-06 07:39:43.992 232437 DEBUG nova.network.neutron [-] [instance: db714956-5ece-4543-8db8-4634df66962e] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec  6 02:39:44 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:39:44 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:39:44 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:39:44.135 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:39:44 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e312 e312: 3 total, 3 up, 3 in
Dec  6 02:39:44 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:39:44 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 02:39:44 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:39:44.941 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 02:39:45 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 02:39:45 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 02:39:46 np0005548731 nova_compute[232433]: 2025-12-06 07:39:46.100 232437 DEBUG nova.compute.manager [req-c6e582d8-bf26-4012-8fe9-6732bf503f91 req-df16a041-ab34-46b9-a9b0-65a4a08f971e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: db714956-5ece-4543-8db8-4634df66962e] Received event network-vif-plugged-3c6a154e-800f-4ed4-b1a9-f9e0aefe2f24 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec  6 02:39:46 np0005548731 nova_compute[232433]: 2025-12-06 07:39:46.101 232437 DEBUG oslo_concurrency.lockutils [req-c6e582d8-bf26-4012-8fe9-6732bf503f91 req-df16a041-ab34-46b9-a9b0-65a4a08f971e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "db714956-5ece-4543-8db8-4634df66962e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  6 02:39:46 np0005548731 nova_compute[232433]: 2025-12-06 07:39:46.101 232437 DEBUG oslo_concurrency.lockutils [req-c6e582d8-bf26-4012-8fe9-6732bf503f91 req-df16a041-ab34-46b9-a9b0-65a4a08f971e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "db714956-5ece-4543-8db8-4634df66962e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  6 02:39:46 np0005548731 nova_compute[232433]: 2025-12-06 07:39:46.101 232437 DEBUG oslo_concurrency.lockutils [req-c6e582d8-bf26-4012-8fe9-6732bf503f91 req-df16a041-ab34-46b9-a9b0-65a4a08f971e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "db714956-5ece-4543-8db8-4634df66962e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  6 02:39:46 np0005548731 nova_compute[232433]: 2025-12-06 07:39:46.101 232437 DEBUG nova.compute.manager [req-c6e582d8-bf26-4012-8fe9-6732bf503f91 req-df16a041-ab34-46b9-a9b0-65a4a08f971e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: db714956-5ece-4543-8db8-4634df66962e] No waiting events found dispatching network-vif-plugged-3c6a154e-800f-4ed4-b1a9-f9e0aefe2f24 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec  6 02:39:46 np0005548731 nova_compute[232433]: 2025-12-06 07:39:46.101 232437 WARNING nova.compute.manager [req-c6e582d8-bf26-4012-8fe9-6732bf503f91 req-df16a041-ab34-46b9-a9b0-65a4a08f971e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: db714956-5ece-4543-8db8-4634df66962e] Received unexpected event network-vif-plugged-3c6a154e-800f-4ed4-b1a9-f9e0aefe2f24 for instance with vm_state active and task_state deleting.
Dec  6 02:39:46 np0005548731 nova_compute[232433]: 2025-12-06 07:39:46.133 232437 DEBUG nova.network.neutron [-] [instance: db714956-5ece-4543-8db8-4634df66962e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec  6 02:39:46 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:39:46 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 02:39:46 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:39:46.137 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 02:39:46 np0005548731 nova_compute[232433]: 2025-12-06 07:39:46.159 232437 INFO nova.compute.manager [-] [instance: db714956-5ece-4543-8db8-4634df66962e] Took 2.17 seconds to deallocate network for instance.
Dec  6 02:39:46 np0005548731 nova_compute[232433]: 2025-12-06 07:39:46.261 232437 DEBUG nova.compute.manager [req-768a5d0b-2ca3-40ba-94f1-7f57b59e1e5d req-d5eb2de5-5aa2-41b2-bf0b-1537a6a48b9a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: db714956-5ece-4543-8db8-4634df66962e] Received event network-vif-deleted-3c6a154e-800f-4ed4-b1a9-f9e0aefe2f24 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec  6 02:39:46 np0005548731 nova_compute[232433]: 2025-12-06 07:39:46.816 232437 INFO nova.compute.manager [None req-206cb68f-638e-49ba-868d-74766edf8f0e 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] [instance: db714956-5ece-4543-8db8-4634df66962e] Took 0.66 seconds to detach 1 volumes for instance.
Dec  6 02:39:46 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:39:46 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:39:46 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:39:46.944 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:39:46 np0005548731 nova_compute[232433]: 2025-12-06 07:39:46.945 232437 DEBUG oslo_concurrency.lockutils [None req-206cb68f-638e-49ba-868d-74766edf8f0e 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  6 02:39:46 np0005548731 nova_compute[232433]: 2025-12-06 07:39:46.946 232437 DEBUG oslo_concurrency.lockutils [None req-206cb68f-638e-49ba-868d-74766edf8f0e 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  6 02:39:47 np0005548731 nova_compute[232433]: 2025-12-06 07:39:47.052 232437 DEBUG oslo_concurrency.processutils [None req-206cb68f-638e-49ba-868d-74766edf8f0e 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  6 02:39:47 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 02:39:47 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 02:39:47 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  6 02:39:47 np0005548731 nova_compute[232433]: 2025-12-06 07:39:47.394 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 02:39:47 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 02:39:47 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/4058252904' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 02:39:47 np0005548731 nova_compute[232433]: 2025-12-06 07:39:47.562 232437 DEBUG oslo_concurrency.processutils [None req-206cb68f-638e-49ba-868d-74766edf8f0e 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.510s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  6 02:39:47 np0005548731 nova_compute[232433]: 2025-12-06 07:39:47.567 232437 DEBUG nova.compute.provider_tree [None req-206cb68f-638e-49ba-868d-74766edf8f0e 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Inventory has not changed in ProviderTree for provider: 6d00757a-082f-486d-ae84-869a2ba2e6e7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec  6 02:39:47 np0005548731 nova_compute[232433]: 2025-12-06 07:39:47.590 232437 DEBUG nova.scheduler.client.report [None req-206cb68f-638e-49ba-868d-74766edf8f0e 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Inventory has not changed for provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec  6 02:39:47 np0005548731 nova_compute[232433]: 2025-12-06 07:39:47.654 232437 DEBUG oslo_concurrency.lockutils [None req-206cb68f-638e-49ba-868d-74766edf8f0e 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.708s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  6 02:39:47 np0005548731 nova_compute[232433]: 2025-12-06 07:39:47.687 232437 INFO nova.scheduler.client.report [None req-206cb68f-638e-49ba-868d-74766edf8f0e 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Deleted allocations for instance db714956-5ece-4543-8db8-4634df66962e
Dec  6 02:39:47 np0005548731 nova_compute[232433]: 2025-12-06 07:39:47.783 232437 DEBUG oslo_concurrency.lockutils [None req-206cb68f-638e-49ba-868d-74766edf8f0e 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Lock "db714956-5ece-4543-8db8-4634df66962e" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.940s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  6 02:39:48 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:39:48 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:39:48 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:39:48.138 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:39:48 np0005548731 nova_compute[232433]: 2025-12-06 07:39:48.158 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 02:39:48 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 02:39:48 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  6 02:39:48 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:39:48 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:39:48 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:39:48 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:39:48.946 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:39:50 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:39:50 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:39:50 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:39:50.140 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:39:50 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e313 e313: 3 total, 3 up, 3 in
Dec  6 02:39:50 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:39:50 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:39:50 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:39:50.948 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:39:51 np0005548731 nova_compute[232433]: 2025-12-06 07:39:51.532 232437 DEBUG oslo_concurrency.lockutils [None req-5d02c810-c18e-4b5c-ac13-89318e4bb6c3 2f3bfc62663b415caedcd269c56412be 17906cc84c6c4f43b1a09758e5552c63 - - default default] Acquiring lock "78c21010-1e12-412a-8a98-2cb6e015f94a" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  6 02:39:51 np0005548731 nova_compute[232433]: 2025-12-06 07:39:51.532 232437 DEBUG oslo_concurrency.lockutils [None req-5d02c810-c18e-4b5c-ac13-89318e4bb6c3 2f3bfc62663b415caedcd269c56412be 17906cc84c6c4f43b1a09758e5552c63 - - default default] Lock "78c21010-1e12-412a-8a98-2cb6e015f94a" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  6 02:39:51 np0005548731 nova_compute[232433]: 2025-12-06 07:39:51.560 232437 DEBUG nova.compute.manager [None req-5d02c810-c18e-4b5c-ac13-89318e4bb6c3 2f3bfc62663b415caedcd269c56412be 17906cc84c6c4f43b1a09758e5552c63 - - default default] [instance: 78c21010-1e12-412a-8a98-2cb6e015f94a] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec  6 02:39:51 np0005548731 nova_compute[232433]: 2025-12-06 07:39:51.750 232437 DEBUG oslo_concurrency.lockutils [None req-5d02c810-c18e-4b5c-ac13-89318e4bb6c3 2f3bfc62663b415caedcd269c56412be 17906cc84c6c4f43b1a09758e5552c63 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  6 02:39:51 np0005548731 nova_compute[232433]: 2025-12-06 07:39:51.751 232437 DEBUG oslo_concurrency.lockutils [None req-5d02c810-c18e-4b5c-ac13-89318e4bb6c3 2f3bfc62663b415caedcd269c56412be 17906cc84c6c4f43b1a09758e5552c63 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  6 02:39:51 np0005548731 nova_compute[232433]: 2025-12-06 07:39:51.758 232437 DEBUG nova.virt.hardware [None req-5d02c810-c18e-4b5c-ac13-89318e4bb6c3 2f3bfc62663b415caedcd269c56412be 17906cc84c6c4f43b1a09758e5552c63 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec  6 02:39:51 np0005548731 nova_compute[232433]: 2025-12-06 07:39:51.758 232437 INFO nova.compute.claims [None req-5d02c810-c18e-4b5c-ac13-89318e4bb6c3 2f3bfc62663b415caedcd269c56412be 17906cc84c6c4f43b1a09758e5552c63 - - default default] [instance: 78c21010-1e12-412a-8a98-2cb6e015f94a] Claim successful on node compute-2.ctlplane.example.com
Dec  6 02:39:51 np0005548731 nova_compute[232433]: 2025-12-06 07:39:51.873 232437 DEBUG nova.scheduler.client.report [None req-5d02c810-c18e-4b5c-ac13-89318e4bb6c3 2f3bfc62663b415caedcd269c56412be 17906cc84c6c4f43b1a09758e5552c63 - - default default] Refreshing inventories for resource provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Dec  6 02:39:51 np0005548731 nova_compute[232433]: 2025-12-06 07:39:51.895 232437 DEBUG nova.scheduler.client.report [None req-5d02c810-c18e-4b5c-ac13-89318e4bb6c3 2f3bfc62663b415caedcd269c56412be 17906cc84c6c4f43b1a09758e5552c63 - - default default] Updating ProviderTree inventory for provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Dec  6 02:39:51 np0005548731 nova_compute[232433]: 2025-12-06 07:39:51.896 232437 DEBUG nova.compute.provider_tree [None req-5d02c810-c18e-4b5c-ac13-89318e4bb6c3 2f3bfc62663b415caedcd269c56412be 17906cc84c6c4f43b1a09758e5552c63 - - default default] Updating inventory in ProviderTree for provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Dec  6 02:39:51 np0005548731 nova_compute[232433]: 2025-12-06 07:39:51.917 232437 DEBUG nova.scheduler.client.report [None req-5d02c810-c18e-4b5c-ac13-89318e4bb6c3 2f3bfc62663b415caedcd269c56412be 17906cc84c6c4f43b1a09758e5552c63 - - default default] Refreshing aggregate associations for resource provider 6d00757a-082f-486d-ae84-869a2ba2e6e7, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Dec  6 02:39:51 np0005548731 nova_compute[232433]: 2025-12-06 07:39:51.950 232437 DEBUG nova.scheduler.client.report [None req-5d02c810-c18e-4b5c-ac13-89318e4bb6c3 2f3bfc62663b415caedcd269c56412be 17906cc84c6c4f43b1a09758e5552c63 - - default default] Refreshing trait associations for resource provider 6d00757a-082f-486d-ae84-869a2ba2e6e7, traits: COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_STORAGE_BUS_USB,COMPUTE_ACCELERATORS,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_TRUSTED_CERTS,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_SSE,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_SSE2,HW_CPU_X86_SSE41,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_STORAGE_BUS_SATA,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_SECURITY_TPM_1_2,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_VOLUME_EXTEND,COMPUTE_NODE,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_SECURITY_TPM_2_0,COMPUTE_VIOMMU_MODEL_AUTO,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_SSE42,COMPUTE_STORAGE_BUS_FDC,COMPUTE_RESCUE_BFV,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_IMAGE_TYPE_ARI _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Dec  6 02:39:52 np0005548731 nova_compute[232433]: 2025-12-06 07:39:52.067 232437 DEBUG oslo_concurrency.processutils [None req-5d02c810-c18e-4b5c-ac13-89318e4bb6c3 2f3bfc62663b415caedcd269c56412be 17906cc84c6c4f43b1a09758e5552c63 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  6 02:39:52 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:39:52 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:39:52 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:39:52.142 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:39:52 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  6 02:39:52 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2420898333' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  6 02:39:52 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  6 02:39:52 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2420898333' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  6 02:39:52 np0005548731 nova_compute[232433]: 2025-12-06 07:39:52.442 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 02:39:52 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e314 e314: 3 total, 3 up, 3 in
Dec  6 02:39:52 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 02:39:52 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3637923805' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 02:39:52 np0005548731 nova_compute[232433]: 2025-12-06 07:39:52.560 232437 DEBUG oslo_concurrency.processutils [None req-5d02c810-c18e-4b5c-ac13-89318e4bb6c3 2f3bfc62663b415caedcd269c56412be 17906cc84c6c4f43b1a09758e5552c63 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.493s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  6 02:39:52 np0005548731 nova_compute[232433]: 2025-12-06 07:39:52.565 232437 DEBUG nova.compute.provider_tree [None req-5d02c810-c18e-4b5c-ac13-89318e4bb6c3 2f3bfc62663b415caedcd269c56412be 17906cc84c6c4f43b1a09758e5552c63 - - default default] Inventory has not changed in ProviderTree for provider: 6d00757a-082f-486d-ae84-869a2ba2e6e7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec  6 02:39:52 np0005548731 nova_compute[232433]: 2025-12-06 07:39:52.591 232437 DEBUG nova.scheduler.client.report [None req-5d02c810-c18e-4b5c-ac13-89318e4bb6c3 2f3bfc62663b415caedcd269c56412be 17906cc84c6c4f43b1a09758e5552c63 - - default default] Inventory has not changed for provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec  6 02:39:52 np0005548731 nova_compute[232433]: 2025-12-06 07:39:52.640 232437 DEBUG oslo_concurrency.lockutils [None req-5d02c810-c18e-4b5c-ac13-89318e4bb6c3 2f3bfc62663b415caedcd269c56412be 17906cc84c6c4f43b1a09758e5552c63 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.889s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  6 02:39:52 np0005548731 nova_compute[232433]: 2025-12-06 07:39:52.641 232437 DEBUG nova.compute.manager [None req-5d02c810-c18e-4b5c-ac13-89318e4bb6c3 2f3bfc62663b415caedcd269c56412be 17906cc84c6c4f43b1a09758e5552c63 - - default default] [instance: 78c21010-1e12-412a-8a98-2cb6e015f94a] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec  6 02:39:52 np0005548731 nova_compute[232433]: 2025-12-06 07:39:52.683 232437 DEBUG nova.compute.manager [None req-5d02c810-c18e-4b5c-ac13-89318e4bb6c3 2f3bfc62663b415caedcd269c56412be 17906cc84c6c4f43b1a09758e5552c63 - - default default] [instance: 78c21010-1e12-412a-8a98-2cb6e015f94a] Not allocating networking since 'none' was specified. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1948
Dec  6 02:39:52 np0005548731 nova_compute[232433]: 2025-12-06 07:39:52.713 232437 INFO nova.virt.libvirt.driver [None req-5d02c810-c18e-4b5c-ac13-89318e4bb6c3 2f3bfc62663b415caedcd269c56412be 17906cc84c6c4f43b1a09758e5552c63 - - default default] [instance: 78c21010-1e12-412a-8a98-2cb6e015f94a] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec  6 02:39:52 np0005548731 nova_compute[232433]: 2025-12-06 07:39:52.743 232437 DEBUG nova.compute.manager [None req-5d02c810-c18e-4b5c-ac13-89318e4bb6c3 2f3bfc62663b415caedcd269c56412be 17906cc84c6c4f43b1a09758e5552c63 - - default default] [instance: 78c21010-1e12-412a-8a98-2cb6e015f94a] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec  6 02:39:52 np0005548731 nova_compute[232433]: 2025-12-06 07:39:52.872 232437 DEBUG nova.compute.manager [None req-5d02c810-c18e-4b5c-ac13-89318e4bb6c3 2f3bfc62663b415caedcd269c56412be 17906cc84c6c4f43b1a09758e5552c63 - - default default] [instance: 78c21010-1e12-412a-8a98-2cb6e015f94a] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec  6 02:39:52 np0005548731 nova_compute[232433]: 2025-12-06 07:39:52.874 232437 DEBUG nova.virt.libvirt.driver [None req-5d02c810-c18e-4b5c-ac13-89318e4bb6c3 2f3bfc62663b415caedcd269c56412be 17906cc84c6c4f43b1a09758e5552c63 - - default default] [instance: 78c21010-1e12-412a-8a98-2cb6e015f94a] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec  6 02:39:52 np0005548731 nova_compute[232433]: 2025-12-06 07:39:52.874 232437 INFO nova.virt.libvirt.driver [None req-5d02c810-c18e-4b5c-ac13-89318e4bb6c3 2f3bfc62663b415caedcd269c56412be 17906cc84c6c4f43b1a09758e5552c63 - - default default] [instance: 78c21010-1e12-412a-8a98-2cb6e015f94a] Creating image(s)
Dec  6 02:39:52 np0005548731 nova_compute[232433]: 2025-12-06 07:39:52.898 232437 DEBUG nova.storage.rbd_utils [None req-5d02c810-c18e-4b5c-ac13-89318e4bb6c3 2f3bfc62663b415caedcd269c56412be 17906cc84c6c4f43b1a09758e5552c63 - - default default] rbd image 78c21010-1e12-412a-8a98-2cb6e015f94a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec  6 02:39:52 np0005548731 nova_compute[232433]: 2025-12-06 07:39:52.926 232437 DEBUG nova.storage.rbd_utils [None req-5d02c810-c18e-4b5c-ac13-89318e4bb6c3 2f3bfc62663b415caedcd269c56412be 17906cc84c6c4f43b1a09758e5552c63 - - default default] rbd image 78c21010-1e12-412a-8a98-2cb6e015f94a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:39:52 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:39:52 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 02:39:52 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:39:52.950 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 02:39:52 np0005548731 nova_compute[232433]: 2025-12-06 07:39:52.954 232437 DEBUG nova.storage.rbd_utils [None req-5d02c810-c18e-4b5c-ac13-89318e4bb6c3 2f3bfc62663b415caedcd269c56412be 17906cc84c6c4f43b1a09758e5552c63 - - default default] rbd image 78c21010-1e12-412a-8a98-2cb6e015f94a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:39:52 np0005548731 nova_compute[232433]: 2025-12-06 07:39:52.959 232437 DEBUG oslo_concurrency.processutils [None req-5d02c810-c18e-4b5c-ac13-89318e4bb6c3 2f3bfc62663b415caedcd269c56412be 17906cc84c6c4f43b1a09758e5552c63 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:39:53 np0005548731 nova_compute[232433]: 2025-12-06 07:39:53.026 232437 DEBUG oslo_concurrency.processutils [None req-5d02c810-c18e-4b5c-ac13-89318e4bb6c3 2f3bfc62663b415caedcd269c56412be 17906cc84c6c4f43b1a09758e5552c63 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:39:53 np0005548731 nova_compute[232433]: 2025-12-06 07:39:53.027 232437 DEBUG oslo_concurrency.lockutils [None req-5d02c810-c18e-4b5c-ac13-89318e4bb6c3 2f3bfc62663b415caedcd269c56412be 17906cc84c6c4f43b1a09758e5552c63 - - default default] Acquiring lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:39:53 np0005548731 nova_compute[232433]: 2025-12-06 07:39:53.028 232437 DEBUG oslo_concurrency.lockutils [None req-5d02c810-c18e-4b5c-ac13-89318e4bb6c3 2f3bfc62663b415caedcd269c56412be 17906cc84c6c4f43b1a09758e5552c63 - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:39:53 np0005548731 nova_compute[232433]: 2025-12-06 07:39:53.029 232437 DEBUG oslo_concurrency.lockutils [None req-5d02c810-c18e-4b5c-ac13-89318e4bb6c3 2f3bfc62663b415caedcd269c56412be 17906cc84c6c4f43b1a09758e5552c63 - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:39:53 np0005548731 nova_compute[232433]: 2025-12-06 07:39:53.053 232437 DEBUG nova.storage.rbd_utils [None req-5d02c810-c18e-4b5c-ac13-89318e4bb6c3 2f3bfc62663b415caedcd269c56412be 17906cc84c6c4f43b1a09758e5552c63 - - default default] rbd image 78c21010-1e12-412a-8a98-2cb6e015f94a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:39:53 np0005548731 nova_compute[232433]: 2025-12-06 07:39:53.058 232437 DEBUG oslo_concurrency.processutils [None req-5d02c810-c18e-4b5c-ac13-89318e4bb6c3 2f3bfc62663b415caedcd269c56412be 17906cc84c6c4f43b1a09758e5552c63 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef 78c21010-1e12-412a-8a98-2cb6e015f94a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:39:53 np0005548731 nova_compute[232433]: 2025-12-06 07:39:53.160 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:39:53 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e314 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:39:53 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 02:39:53 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 02:39:53 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e315 e315: 3 total, 3 up, 3 in
Dec  6 02:39:54 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:39:54 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:39:54 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:39:54.144 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:39:54 np0005548731 nova_compute[232433]: 2025-12-06 07:39:54.561 232437 DEBUG oslo_concurrency.processutils [None req-5d02c810-c18e-4b5c-ac13-89318e4bb6c3 2f3bfc62663b415caedcd269c56412be 17906cc84c6c4f43b1a09758e5552c63 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef 78c21010-1e12-412a-8a98-2cb6e015f94a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.503s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:39:54 np0005548731 nova_compute[232433]: 2025-12-06 07:39:54.630 232437 DEBUG nova.storage.rbd_utils [None req-5d02c810-c18e-4b5c-ac13-89318e4bb6c3 2f3bfc62663b415caedcd269c56412be 17906cc84c6c4f43b1a09758e5552c63 - - default default] resizing rbd image 78c21010-1e12-412a-8a98-2cb6e015f94a_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Dec  6 02:39:54 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:39:54 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:39:54 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:39:54.953 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:39:55 np0005548731 nova_compute[232433]: 2025-12-06 07:39:55.131 232437 DEBUG nova.objects.instance [None req-5d02c810-c18e-4b5c-ac13-89318e4bb6c3 2f3bfc62663b415caedcd269c56412be 17906cc84c6c4f43b1a09758e5552c63 - - default default] Lazy-loading 'migration_context' on Instance uuid 78c21010-1e12-412a-8a98-2cb6e015f94a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:39:55 np0005548731 nova_compute[232433]: 2025-12-06 07:39:55.265 232437 DEBUG nova.virt.libvirt.driver [None req-5d02c810-c18e-4b5c-ac13-89318e4bb6c3 2f3bfc62663b415caedcd269c56412be 17906cc84c6c4f43b1a09758e5552c63 - - default default] [instance: 78c21010-1e12-412a-8a98-2cb6e015f94a] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Dec  6 02:39:55 np0005548731 nova_compute[232433]: 2025-12-06 07:39:55.266 232437 DEBUG nova.virt.libvirt.driver [None req-5d02c810-c18e-4b5c-ac13-89318e4bb6c3 2f3bfc62663b415caedcd269c56412be 17906cc84c6c4f43b1a09758e5552c63 - - default default] [instance: 78c21010-1e12-412a-8a98-2cb6e015f94a] Ensure instance console log exists: /var/lib/nova/instances/78c21010-1e12-412a-8a98-2cb6e015f94a/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec  6 02:39:55 np0005548731 nova_compute[232433]: 2025-12-06 07:39:55.266 232437 DEBUG oslo_concurrency.lockutils [None req-5d02c810-c18e-4b5c-ac13-89318e4bb6c3 2f3bfc62663b415caedcd269c56412be 17906cc84c6c4f43b1a09758e5552c63 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:39:55 np0005548731 nova_compute[232433]: 2025-12-06 07:39:55.266 232437 DEBUG oslo_concurrency.lockutils [None req-5d02c810-c18e-4b5c-ac13-89318e4bb6c3 2f3bfc62663b415caedcd269c56412be 17906cc84c6c4f43b1a09758e5552c63 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:39:55 np0005548731 nova_compute[232433]: 2025-12-06 07:39:55.267 232437 DEBUG oslo_concurrency.lockutils [None req-5d02c810-c18e-4b5c-ac13-89318e4bb6c3 2f3bfc62663b415caedcd269c56412be 17906cc84c6c4f43b1a09758e5552c63 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:39:55 np0005548731 nova_compute[232433]: 2025-12-06 07:39:55.268 232437 DEBUG nova.virt.libvirt.driver [None req-5d02c810-c18e-4b5c-ac13-89318e4bb6c3 2f3bfc62663b415caedcd269c56412be 17906cc84c6c4f43b1a09758e5552c63 - - default default] [instance: 78c21010-1e12-412a-8a98-2cb6e015f94a] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'device_type': 'disk', 'encryption_secret_uuid': None, 'disk_bus': 'virtio', 'boot_index': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'encryption_options': None, 'size': 0, 'encrypted': False, 'image_id': '6efab05d-c7cf-4770-a5c3-c806a2739063'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec  6 02:39:55 np0005548731 nova_compute[232433]: 2025-12-06 07:39:55.273 232437 WARNING nova.virt.libvirt.driver [None req-5d02c810-c18e-4b5c-ac13-89318e4bb6c3 2f3bfc62663b415caedcd269c56412be 17906cc84c6c4f43b1a09758e5552c63 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  6 02:39:55 np0005548731 nova_compute[232433]: 2025-12-06 07:39:55.277 232437 DEBUG nova.virt.libvirt.host [None req-5d02c810-c18e-4b5c-ac13-89318e4bb6c3 2f3bfc62663b415caedcd269c56412be 17906cc84c6c4f43b1a09758e5552c63 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec  6 02:39:55 np0005548731 nova_compute[232433]: 2025-12-06 07:39:55.278 232437 DEBUG nova.virt.libvirt.host [None req-5d02c810-c18e-4b5c-ac13-89318e4bb6c3 2f3bfc62663b415caedcd269c56412be 17906cc84c6c4f43b1a09758e5552c63 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec  6 02:39:55 np0005548731 nova_compute[232433]: 2025-12-06 07:39:55.281 232437 DEBUG nova.virt.libvirt.host [None req-5d02c810-c18e-4b5c-ac13-89318e4bb6c3 2f3bfc62663b415caedcd269c56412be 17906cc84c6c4f43b1a09758e5552c63 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec  6 02:39:55 np0005548731 nova_compute[232433]: 2025-12-06 07:39:55.282 232437 DEBUG nova.virt.libvirt.host [None req-5d02c810-c18e-4b5c-ac13-89318e4bb6c3 2f3bfc62663b415caedcd269c56412be 17906cc84c6c4f43b1a09758e5552c63 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec  6 02:39:55 np0005548731 nova_compute[232433]: 2025-12-06 07:39:55.285 232437 DEBUG nova.virt.libvirt.driver [None req-5d02c810-c18e-4b5c-ac13-89318e4bb6c3 2f3bfc62663b415caedcd269c56412be 17906cc84c6c4f43b1a09758e5552c63 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec  6 02:39:55 np0005548731 nova_compute[232433]: 2025-12-06 07:39:55.285 232437 DEBUG nova.virt.hardware [None req-5d02c810-c18e-4b5c-ac13-89318e4bb6c3 2f3bfc62663b415caedcd269c56412be 17906cc84c6c4f43b1a09758e5552c63 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-06T06:56:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='25848a18-11d9-4f11-80b5-5d005675c76d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec  6 02:39:55 np0005548731 nova_compute[232433]: 2025-12-06 07:39:55.287 232437 DEBUG nova.virt.hardware [None req-5d02c810-c18e-4b5c-ac13-89318e4bb6c3 2f3bfc62663b415caedcd269c56412be 17906cc84c6c4f43b1a09758e5552c63 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec  6 02:39:55 np0005548731 nova_compute[232433]: 2025-12-06 07:39:55.287 232437 DEBUG nova.virt.hardware [None req-5d02c810-c18e-4b5c-ac13-89318e4bb6c3 2f3bfc62663b415caedcd269c56412be 17906cc84c6c4f43b1a09758e5552c63 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec  6 02:39:55 np0005548731 nova_compute[232433]: 2025-12-06 07:39:55.288 232437 DEBUG nova.virt.hardware [None req-5d02c810-c18e-4b5c-ac13-89318e4bb6c3 2f3bfc62663b415caedcd269c56412be 17906cc84c6c4f43b1a09758e5552c63 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec  6 02:39:55 np0005548731 nova_compute[232433]: 2025-12-06 07:39:55.288 232437 DEBUG nova.virt.hardware [None req-5d02c810-c18e-4b5c-ac13-89318e4bb6c3 2f3bfc62663b415caedcd269c56412be 17906cc84c6c4f43b1a09758e5552c63 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec  6 02:39:55 np0005548731 nova_compute[232433]: 2025-12-06 07:39:55.289 232437 DEBUG nova.virt.hardware [None req-5d02c810-c18e-4b5c-ac13-89318e4bb6c3 2f3bfc62663b415caedcd269c56412be 17906cc84c6c4f43b1a09758e5552c63 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec  6 02:39:55 np0005548731 nova_compute[232433]: 2025-12-06 07:39:55.289 232437 DEBUG nova.virt.hardware [None req-5d02c810-c18e-4b5c-ac13-89318e4bb6c3 2f3bfc62663b415caedcd269c56412be 17906cc84c6c4f43b1a09758e5552c63 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec  6 02:39:55 np0005548731 nova_compute[232433]: 2025-12-06 07:39:55.290 232437 DEBUG nova.virt.hardware [None req-5d02c810-c18e-4b5c-ac13-89318e4bb6c3 2f3bfc62663b415caedcd269c56412be 17906cc84c6c4f43b1a09758e5552c63 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec  6 02:39:55 np0005548731 nova_compute[232433]: 2025-12-06 07:39:55.290 232437 DEBUG nova.virt.hardware [None req-5d02c810-c18e-4b5c-ac13-89318e4bb6c3 2f3bfc62663b415caedcd269c56412be 17906cc84c6c4f43b1a09758e5552c63 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec  6 02:39:55 np0005548731 nova_compute[232433]: 2025-12-06 07:39:55.291 232437 DEBUG nova.virt.hardware [None req-5d02c810-c18e-4b5c-ac13-89318e4bb6c3 2f3bfc62663b415caedcd269c56412be 17906cc84c6c4f43b1a09758e5552c63 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec  6 02:39:55 np0005548731 nova_compute[232433]: 2025-12-06 07:39:55.292 232437 DEBUG nova.virt.hardware [None req-5d02c810-c18e-4b5c-ac13-89318e4bb6c3 2f3bfc62663b415caedcd269c56412be 17906cc84c6c4f43b1a09758e5552c63 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec  6 02:39:55 np0005548731 nova_compute[232433]: 2025-12-06 07:39:55.296 232437 DEBUG oslo_concurrency.processutils [None req-5d02c810-c18e-4b5c-ac13-89318e4bb6c3 2f3bfc62663b415caedcd269c56412be 17906cc84c6c4f43b1a09758e5552c63 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:39:55 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  6 02:39:55 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2926417048' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  6 02:39:55 np0005548731 nova_compute[232433]: 2025-12-06 07:39:55.727 232437 DEBUG oslo_concurrency.processutils [None req-5d02c810-c18e-4b5c-ac13-89318e4bb6c3 2f3bfc62663b415caedcd269c56412be 17906cc84c6c4f43b1a09758e5552c63 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.431s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:39:55 np0005548731 nova_compute[232433]: 2025-12-06 07:39:55.752 232437 DEBUG nova.storage.rbd_utils [None req-5d02c810-c18e-4b5c-ac13-89318e4bb6c3 2f3bfc62663b415caedcd269c56412be 17906cc84c6c4f43b1a09758e5552c63 - - default default] rbd image 78c21010-1e12-412a-8a98-2cb6e015f94a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:39:55 np0005548731 nova_compute[232433]: 2025-12-06 07:39:55.755 232437 DEBUG oslo_concurrency.processutils [None req-5d02c810-c18e-4b5c-ac13-89318e4bb6c3 2f3bfc62663b415caedcd269c56412be 17906cc84c6c4f43b1a09758e5552c63 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:39:56 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:39:56 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:39:56 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:39:56.146 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:39:56 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  6 02:39:56 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/4267829070' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  6 02:39:56 np0005548731 nova_compute[232433]: 2025-12-06 07:39:56.177 232437 DEBUG oslo_concurrency.processutils [None req-5d02c810-c18e-4b5c-ac13-89318e4bb6c3 2f3bfc62663b415caedcd269c56412be 17906cc84c6c4f43b1a09758e5552c63 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.422s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:39:56 np0005548731 nova_compute[232433]: 2025-12-06 07:39:56.179 232437 DEBUG nova.objects.instance [None req-5d02c810-c18e-4b5c-ac13-89318e4bb6c3 2f3bfc62663b415caedcd269c56412be 17906cc84c6c4f43b1a09758e5552c63 - - default default] Lazy-loading 'pci_devices' on Instance uuid 78c21010-1e12-412a-8a98-2cb6e015f94a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:39:56 np0005548731 nova_compute[232433]: 2025-12-06 07:39:56.380 232437 DEBUG nova.virt.libvirt.driver [None req-5d02c810-c18e-4b5c-ac13-89318e4bb6c3 2f3bfc62663b415caedcd269c56412be 17906cc84c6c4f43b1a09758e5552c63 - - default default] [instance: 78c21010-1e12-412a-8a98-2cb6e015f94a] End _get_guest_xml xml=<domain type="kvm">
Dec  6 02:39:56 np0005548731 nova_compute[232433]:  <uuid>78c21010-1e12-412a-8a98-2cb6e015f94a</uuid>
Dec  6 02:39:56 np0005548731 nova_compute[232433]:  <name>instance-00000088</name>
Dec  6 02:39:56 np0005548731 nova_compute[232433]:  <memory>131072</memory>
Dec  6 02:39:56 np0005548731 nova_compute[232433]:  <vcpu>1</vcpu>
Dec  6 02:39:56 np0005548731 nova_compute[232433]:  <metadata>
Dec  6 02:39:56 np0005548731 nova_compute[232433]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec  6 02:39:56 np0005548731 nova_compute[232433]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec  6 02:39:56 np0005548731 nova_compute[232433]:      <nova:name>tempest-ServersAaction247Test-server-1892976388</nova:name>
Dec  6 02:39:56 np0005548731 nova_compute[232433]:      <nova:creationTime>2025-12-06 07:39:55</nova:creationTime>
Dec  6 02:39:56 np0005548731 nova_compute[232433]:      <nova:flavor name="m1.nano">
Dec  6 02:39:56 np0005548731 nova_compute[232433]:        <nova:memory>128</nova:memory>
Dec  6 02:39:56 np0005548731 nova_compute[232433]:        <nova:disk>1</nova:disk>
Dec  6 02:39:56 np0005548731 nova_compute[232433]:        <nova:swap>0</nova:swap>
Dec  6 02:39:56 np0005548731 nova_compute[232433]:        <nova:ephemeral>0</nova:ephemeral>
Dec  6 02:39:56 np0005548731 nova_compute[232433]:        <nova:vcpus>1</nova:vcpus>
Dec  6 02:39:56 np0005548731 nova_compute[232433]:      </nova:flavor>
Dec  6 02:39:56 np0005548731 nova_compute[232433]:      <nova:owner>
Dec  6 02:39:56 np0005548731 nova_compute[232433]:        <nova:user uuid="2f3bfc62663b415caedcd269c56412be">tempest-ServersAaction247Test-447964022-project-member</nova:user>
Dec  6 02:39:56 np0005548731 nova_compute[232433]:        <nova:project uuid="17906cc84c6c4f43b1a09758e5552c63">tempest-ServersAaction247Test-447964022</nova:project>
Dec  6 02:39:56 np0005548731 nova_compute[232433]:      </nova:owner>
Dec  6 02:39:56 np0005548731 nova_compute[232433]:      <nova:root type="image" uuid="6efab05d-c7cf-4770-a5c3-c806a2739063"/>
Dec  6 02:39:56 np0005548731 nova_compute[232433]:      <nova:ports/>
Dec  6 02:39:56 np0005548731 nova_compute[232433]:    </nova:instance>
Dec  6 02:39:56 np0005548731 nova_compute[232433]:  </metadata>
Dec  6 02:39:56 np0005548731 nova_compute[232433]:  <sysinfo type="smbios">
Dec  6 02:39:56 np0005548731 nova_compute[232433]:    <system>
Dec  6 02:39:56 np0005548731 nova_compute[232433]:      <entry name="manufacturer">RDO</entry>
Dec  6 02:39:56 np0005548731 nova_compute[232433]:      <entry name="product">OpenStack Compute</entry>
Dec  6 02:39:56 np0005548731 nova_compute[232433]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec  6 02:39:56 np0005548731 nova_compute[232433]:      <entry name="serial">78c21010-1e12-412a-8a98-2cb6e015f94a</entry>
Dec  6 02:39:56 np0005548731 nova_compute[232433]:      <entry name="uuid">78c21010-1e12-412a-8a98-2cb6e015f94a</entry>
Dec  6 02:39:56 np0005548731 nova_compute[232433]:      <entry name="family">Virtual Machine</entry>
Dec  6 02:39:56 np0005548731 nova_compute[232433]:    </system>
Dec  6 02:39:56 np0005548731 nova_compute[232433]:  </sysinfo>
Dec  6 02:39:56 np0005548731 nova_compute[232433]:  <os>
Dec  6 02:39:56 np0005548731 nova_compute[232433]:    <type arch="x86_64" machine="q35">hvm</type>
Dec  6 02:39:56 np0005548731 nova_compute[232433]:    <boot dev="hd"/>
Dec  6 02:39:56 np0005548731 nova_compute[232433]:    <smbios mode="sysinfo"/>
Dec  6 02:39:56 np0005548731 nova_compute[232433]:  </os>
Dec  6 02:39:56 np0005548731 nova_compute[232433]:  <features>
Dec  6 02:39:56 np0005548731 nova_compute[232433]:    <acpi/>
Dec  6 02:39:56 np0005548731 nova_compute[232433]:    <apic/>
Dec  6 02:39:56 np0005548731 nova_compute[232433]:    <vmcoreinfo/>
Dec  6 02:39:56 np0005548731 nova_compute[232433]:  </features>
Dec  6 02:39:56 np0005548731 nova_compute[232433]:  <clock offset="utc">
Dec  6 02:39:56 np0005548731 nova_compute[232433]:    <timer name="pit" tickpolicy="delay"/>
Dec  6 02:39:56 np0005548731 nova_compute[232433]:    <timer name="rtc" tickpolicy="catchup"/>
Dec  6 02:39:56 np0005548731 nova_compute[232433]:    <timer name="hpet" present="no"/>
Dec  6 02:39:56 np0005548731 nova_compute[232433]:  </clock>
Dec  6 02:39:56 np0005548731 nova_compute[232433]:  <cpu mode="custom" match="exact">
Dec  6 02:39:56 np0005548731 nova_compute[232433]:    <model>Nehalem</model>
Dec  6 02:39:56 np0005548731 nova_compute[232433]:    <topology sockets="1" cores="1" threads="1"/>
Dec  6 02:39:56 np0005548731 nova_compute[232433]:  </cpu>
Dec  6 02:39:56 np0005548731 nova_compute[232433]:  <devices>
Dec  6 02:39:56 np0005548731 nova_compute[232433]:    <disk type="network" device="disk">
Dec  6 02:39:56 np0005548731 nova_compute[232433]:      <driver type="raw" cache="none"/>
Dec  6 02:39:56 np0005548731 nova_compute[232433]:      <source protocol="rbd" name="vms/78c21010-1e12-412a-8a98-2cb6e015f94a_disk">
Dec  6 02:39:56 np0005548731 nova_compute[232433]:        <host name="192.168.122.100" port="6789"/>
Dec  6 02:39:56 np0005548731 nova_compute[232433]:        <host name="192.168.122.102" port="6789"/>
Dec  6 02:39:56 np0005548731 nova_compute[232433]:        <host name="192.168.122.101" port="6789"/>
Dec  6 02:39:56 np0005548731 nova_compute[232433]:      </source>
Dec  6 02:39:56 np0005548731 nova_compute[232433]:      <auth username="openstack">
Dec  6 02:39:56 np0005548731 nova_compute[232433]:        <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec  6 02:39:56 np0005548731 nova_compute[232433]:      </auth>
Dec  6 02:39:56 np0005548731 nova_compute[232433]:      <target dev="vda" bus="virtio"/>
Dec  6 02:39:56 np0005548731 nova_compute[232433]:    </disk>
Dec  6 02:39:56 np0005548731 nova_compute[232433]:    <disk type="network" device="cdrom">
Dec  6 02:39:56 np0005548731 nova_compute[232433]:      <driver type="raw" cache="none"/>
Dec  6 02:39:56 np0005548731 nova_compute[232433]:      <source protocol="rbd" name="vms/78c21010-1e12-412a-8a98-2cb6e015f94a_disk.config">
Dec  6 02:39:56 np0005548731 nova_compute[232433]:        <host name="192.168.122.100" port="6789"/>
Dec  6 02:39:56 np0005548731 nova_compute[232433]:        <host name="192.168.122.102" port="6789"/>
Dec  6 02:39:56 np0005548731 nova_compute[232433]:        <host name="192.168.122.101" port="6789"/>
Dec  6 02:39:56 np0005548731 nova_compute[232433]:      </source>
Dec  6 02:39:56 np0005548731 nova_compute[232433]:      <auth username="openstack">
Dec  6 02:39:56 np0005548731 nova_compute[232433]:        <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec  6 02:39:56 np0005548731 nova_compute[232433]:      </auth>
Dec  6 02:39:56 np0005548731 nova_compute[232433]:      <target dev="sda" bus="sata"/>
Dec  6 02:39:56 np0005548731 nova_compute[232433]:    </disk>
Dec  6 02:39:56 np0005548731 nova_compute[232433]:    <serial type="pty">
Dec  6 02:39:56 np0005548731 nova_compute[232433]:      <log file="/var/lib/nova/instances/78c21010-1e12-412a-8a98-2cb6e015f94a/console.log" append="off"/>
Dec  6 02:39:56 np0005548731 nova_compute[232433]:    </serial>
Dec  6 02:39:56 np0005548731 nova_compute[232433]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec  6 02:39:56 np0005548731 nova_compute[232433]:    <video>
Dec  6 02:39:56 np0005548731 nova_compute[232433]:      <model type="virtio"/>
Dec  6 02:39:56 np0005548731 nova_compute[232433]:    </video>
Dec  6 02:39:56 np0005548731 nova_compute[232433]:    <input type="tablet" bus="usb"/>
Dec  6 02:39:56 np0005548731 nova_compute[232433]:    <rng model="virtio">
Dec  6 02:39:56 np0005548731 nova_compute[232433]:      <backend model="random">/dev/urandom</backend>
Dec  6 02:39:56 np0005548731 nova_compute[232433]:    </rng>
Dec  6 02:39:56 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root"/>
Dec  6 02:39:56 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:39:56 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:39:56 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:39:56 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:39:56 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:39:56 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:39:56 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:39:56 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:39:56 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:39:56 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:39:56 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:39:56 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:39:56 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:39:56 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:39:56 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:39:56 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:39:56 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:39:56 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:39:56 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:39:56 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:39:56 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:39:56 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:39:56 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:39:56 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:39:56 np0005548731 nova_compute[232433]:    <controller type="usb" index="0"/>
Dec  6 02:39:56 np0005548731 nova_compute[232433]:    <memballoon model="virtio">
Dec  6 02:39:56 np0005548731 nova_compute[232433]:      <stats period="10"/>
Dec  6 02:39:56 np0005548731 nova_compute[232433]:    </memballoon>
Dec  6 02:39:56 np0005548731 nova_compute[232433]:  </devices>
Dec  6 02:39:56 np0005548731 nova_compute[232433]: </domain>
Dec  6 02:39:56 np0005548731 nova_compute[232433]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec  6 02:39:56 np0005548731 nova_compute[232433]: 2025-12-06 07:39:56.522 232437 DEBUG nova.virt.libvirt.driver [None req-5d02c810-c18e-4b5c-ac13-89318e4bb6c3 2f3bfc62663b415caedcd269c56412be 17906cc84c6c4f43b1a09758e5552c63 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec  6 02:39:56 np0005548731 nova_compute[232433]: 2025-12-06 07:39:56.522 232437 DEBUG nova.virt.libvirt.driver [None req-5d02c810-c18e-4b5c-ac13-89318e4bb6c3 2f3bfc62663b415caedcd269c56412be 17906cc84c6c4f43b1a09758e5552c63 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec  6 02:39:56 np0005548731 nova_compute[232433]: 2025-12-06 07:39:56.523 232437 INFO nova.virt.libvirt.driver [None req-5d02c810-c18e-4b5c-ac13-89318e4bb6c3 2f3bfc62663b415caedcd269c56412be 17906cc84c6c4f43b1a09758e5552c63 - - default default] [instance: 78c21010-1e12-412a-8a98-2cb6e015f94a] Using config drive
Dec  6 02:39:56 np0005548731 nova_compute[232433]: 2025-12-06 07:39:56.549 232437 DEBUG nova.storage.rbd_utils [None req-5d02c810-c18e-4b5c-ac13-89318e4bb6c3 2f3bfc62663b415caedcd269c56412be 17906cc84c6c4f43b1a09758e5552c63 - - default default] rbd image 78c21010-1e12-412a-8a98-2cb6e015f94a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec  6 02:39:56 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:39:56 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 02:39:56 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:39:56.956 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 02:39:57 np0005548731 nova_compute[232433]: 2025-12-06 07:39:57.217 232437 INFO nova.virt.libvirt.driver [None req-5d02c810-c18e-4b5c-ac13-89318e4bb6c3 2f3bfc62663b415caedcd269c56412be 17906cc84c6c4f43b1a09758e5552c63 - - default default] [instance: 78c21010-1e12-412a-8a98-2cb6e015f94a] Creating config drive at /var/lib/nova/instances/78c21010-1e12-412a-8a98-2cb6e015f94a/disk.config
Dec  6 02:39:57 np0005548731 nova_compute[232433]: 2025-12-06 07:39:57.222 232437 DEBUG oslo_concurrency.processutils [None req-5d02c810-c18e-4b5c-ac13-89318e4bb6c3 2f3bfc62663b415caedcd269c56412be 17906cc84c6c4f43b1a09758e5552c63 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/78c21010-1e12-412a-8a98-2cb6e015f94a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp1z02n7ge execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  6 02:39:57 np0005548731 nova_compute[232433]: 2025-12-06 07:39:57.290 232437 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765006782.288496, db714956-5ece-4543-8db8-4634df66962e => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec  6 02:39:57 np0005548731 nova_compute[232433]: 2025-12-06 07:39:57.291 232437 INFO nova.compute.manager [-] [instance: db714956-5ece-4543-8db8-4634df66962e] VM Stopped (Lifecycle Event)
Dec  6 02:39:57 np0005548731 nova_compute[232433]: 2025-12-06 07:39:57.373 232437 DEBUG oslo_concurrency.processutils [None req-5d02c810-c18e-4b5c-ac13-89318e4bb6c3 2f3bfc62663b415caedcd269c56412be 17906cc84c6c4f43b1a09758e5552c63 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/78c21010-1e12-412a-8a98-2cb6e015f94a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp1z02n7ge" returned: 0 in 0.151s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  6 02:39:57 np0005548731 nova_compute[232433]: 2025-12-06 07:39:57.404 232437 DEBUG nova.storage.rbd_utils [None req-5d02c810-c18e-4b5c-ac13-89318e4bb6c3 2f3bfc62663b415caedcd269c56412be 17906cc84c6c4f43b1a09758e5552c63 - - default default] rbd image 78c21010-1e12-412a-8a98-2cb6e015f94a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec  6 02:39:57 np0005548731 nova_compute[232433]: 2025-12-06 07:39:57.408 232437 DEBUG oslo_concurrency.processutils [None req-5d02c810-c18e-4b5c-ac13-89318e4bb6c3 2f3bfc62663b415caedcd269c56412be 17906cc84c6c4f43b1a09758e5552c63 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/78c21010-1e12-412a-8a98-2cb6e015f94a/disk.config 78c21010-1e12-412a-8a98-2cb6e015f94a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  6 02:39:57 np0005548731 nova_compute[232433]: 2025-12-06 07:39:57.437 232437 DEBUG nova.compute.manager [None req-4722300a-8035-4df6-b844-662b901e4ad1 - - - - - -] [instance: db714956-5ece-4543-8db8-4634df66962e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec  6 02:39:57 np0005548731 nova_compute[232433]: 2025-12-06 07:39:57.446 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 02:39:57 np0005548731 nova_compute[232433]: 2025-12-06 07:39:57.554 232437 DEBUG oslo_concurrency.processutils [None req-5d02c810-c18e-4b5c-ac13-89318e4bb6c3 2f3bfc62663b415caedcd269c56412be 17906cc84c6c4f43b1a09758e5552c63 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/78c21010-1e12-412a-8a98-2cb6e015f94a/disk.config 78c21010-1e12-412a-8a98-2cb6e015f94a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.146s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  6 02:39:57 np0005548731 nova_compute[232433]: 2025-12-06 07:39:57.555 232437 INFO nova.virt.libvirt.driver [None req-5d02c810-c18e-4b5c-ac13-89318e4bb6c3 2f3bfc62663b415caedcd269c56412be 17906cc84c6c4f43b1a09758e5552c63 - - default default] [instance: 78c21010-1e12-412a-8a98-2cb6e015f94a] Deleting local config drive /var/lib/nova/instances/78c21010-1e12-412a-8a98-2cb6e015f94a/disk.config because it was imported into RBD.
Dec  6 02:39:57 np0005548731 systemd-machined[195355]: New machine qemu-63-instance-00000088.
Dec  6 02:39:57 np0005548731 systemd[1]: Started Virtual Machine qemu-63-instance-00000088.
Dec  6 02:39:58 np0005548731 nova_compute[232433]: 2025-12-06 07:39:58.026 232437 DEBUG nova.virt.driver [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Emitting event <LifecycleEvent: 1765006798.0260966, 78c21010-1e12-412a-8a98-2cb6e015f94a => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec  6 02:39:58 np0005548731 nova_compute[232433]: 2025-12-06 07:39:58.027 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 78c21010-1e12-412a-8a98-2cb6e015f94a] VM Resumed (Lifecycle Event)
Dec  6 02:39:58 np0005548731 nova_compute[232433]: 2025-12-06 07:39:58.029 232437 DEBUG nova.compute.manager [None req-5d02c810-c18e-4b5c-ac13-89318e4bb6c3 2f3bfc62663b415caedcd269c56412be 17906cc84c6c4f43b1a09758e5552c63 - - default default] [instance: 78c21010-1e12-412a-8a98-2cb6e015f94a] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec  6 02:39:58 np0005548731 nova_compute[232433]: 2025-12-06 07:39:58.030 232437 DEBUG nova.virt.libvirt.driver [None req-5d02c810-c18e-4b5c-ac13-89318e4bb6c3 2f3bfc62663b415caedcd269c56412be 17906cc84c6c4f43b1a09758e5552c63 - - default default] [instance: 78c21010-1e12-412a-8a98-2cb6e015f94a] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec  6 02:39:58 np0005548731 nova_compute[232433]: 2025-12-06 07:39:58.032 232437 INFO nova.virt.libvirt.driver [-] [instance: 78c21010-1e12-412a-8a98-2cb6e015f94a] Instance spawned successfully.
Dec  6 02:39:58 np0005548731 nova_compute[232433]: 2025-12-06 07:39:58.033 232437 DEBUG nova.virt.libvirt.driver [None req-5d02c810-c18e-4b5c-ac13-89318e4bb6c3 2f3bfc62663b415caedcd269c56412be 17906cc84c6c4f43b1a09758e5552c63 - - default default] [instance: 78c21010-1e12-412a-8a98-2cb6e015f94a] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec  6 02:39:58 np0005548731 ceph-mon[77458]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #100. Immutable memtables: 0.
Dec  6 02:39:58 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:39:58.069072) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec  6 02:39:58 np0005548731 ceph-mon[77458]: rocksdb: [db/flush_job.cc:856] [default] [JOB 61] Flushing memtable with next log file: 100
Dec  6 02:39:58 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765006798069104, "job": 61, "event": "flush_started", "num_memtables": 1, "num_entries": 2578, "num_deletes": 258, "total_data_size": 5993165, "memory_usage": 6068544, "flush_reason": "Manual Compaction"}
Dec  6 02:39:58 np0005548731 ceph-mon[77458]: rocksdb: [db/flush_job.cc:885] [default] [JOB 61] Level-0 flush table #101: started
Dec  6 02:39:58 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765006798093652, "cf_name": "default", "job": 61, "event": "table_file_creation", "file_number": 101, "file_size": 3887606, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 51029, "largest_seqno": 53602, "table_properties": {"data_size": 3876935, "index_size": 6845, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2757, "raw_key_size": 23514, "raw_average_key_size": 21, "raw_value_size": 3855123, "raw_average_value_size": 3504, "num_data_blocks": 296, "num_entries": 1100, "num_filter_entries": 1100, "num_deletions": 258, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765006574, "oldest_key_time": 1765006574, "file_creation_time": 1765006798, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bc01f9a1-fda8-4008-aee1-1fa5ff4371a1", "db_session_id": "CWBL8WMDM4D79BU86E9T", "orig_file_number": 101, "seqno_to_time_mapping": "N/A"}}
Dec  6 02:39:58 np0005548731 ceph-mon[77458]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 61] Flush lasted 24793 microseconds, and 9931 cpu microseconds.
Dec  6 02:39:58 np0005548731 ceph-mon[77458]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  6 02:39:58 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:39:58.093858) [db/flush_job.cc:967] [default] [JOB 61] Level-0 flush table #101: 3887606 bytes OK
Dec  6 02:39:58 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:39:58.093914) [db/memtable_list.cc:519] [default] Level-0 commit table #101 started
Dec  6 02:39:58 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:39:58.095791) [db/memtable_list.cc:722] [default] Level-0 commit table #101: memtable #1 done
Dec  6 02:39:58 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:39:58.095809) EVENT_LOG_v1 {"time_micros": 1765006798095803, "job": 61, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec  6 02:39:58 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:39:58.095825) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec  6 02:39:58 np0005548731 ceph-mon[77458]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 61] Try to delete WAL files size 5981653, prev total WAL file size 5981653, number of live WAL files 2.
Dec  6 02:39:58 np0005548731 ceph-mon[77458]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000097.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  6 02:39:58 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:39:58.098337) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730034323637' seq:72057594037927935, type:22 .. '7061786F730034353139' seq:0, type:0; will stop at (end)
Dec  6 02:39:58 np0005548731 ceph-mon[77458]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 62] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec  6 02:39:58 np0005548731 ceph-mon[77458]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 61 Base level 0, inputs: [101(3796KB)], [99(10MB)]
Dec  6 02:39:58 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765006798098447, "job": 62, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [101], "files_L6": [99], "score": -1, "input_data_size": 15315792, "oldest_snapshot_seqno": -1}
Dec  6 02:39:58 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:39:58 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:39:58 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:39:58.148 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:39:58 np0005548731 nova_compute[232433]: 2025-12-06 07:39:58.163 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 78c21010-1e12-412a-8a98-2cb6e015f94a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec  6 02:39:58 np0005548731 nova_compute[232433]: 2025-12-06 07:39:58.163 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 02:39:58 np0005548731 nova_compute[232433]: 2025-12-06 07:39:58.169 232437 DEBUG nova.virt.libvirt.driver [None req-5d02c810-c18e-4b5c-ac13-89318e4bb6c3 2f3bfc62663b415caedcd269c56412be 17906cc84c6c4f43b1a09758e5552c63 - - default default] [instance: 78c21010-1e12-412a-8a98-2cb6e015f94a] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec  6 02:39:58 np0005548731 nova_compute[232433]: 2025-12-06 07:39:58.170 232437 DEBUG nova.virt.libvirt.driver [None req-5d02c810-c18e-4b5c-ac13-89318e4bb6c3 2f3bfc62663b415caedcd269c56412be 17906cc84c6c4f43b1a09758e5552c63 - - default default] [instance: 78c21010-1e12-412a-8a98-2cb6e015f94a] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec  6 02:39:58 np0005548731 nova_compute[232433]: 2025-12-06 07:39:58.170 232437 DEBUG nova.virt.libvirt.driver [None req-5d02c810-c18e-4b5c-ac13-89318e4bb6c3 2f3bfc62663b415caedcd269c56412be 17906cc84c6c4f43b1a09758e5552c63 - - default default] [instance: 78c21010-1e12-412a-8a98-2cb6e015f94a] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec  6 02:39:58 np0005548731 nova_compute[232433]: 2025-12-06 07:39:58.171 232437 DEBUG nova.virt.libvirt.driver [None req-5d02c810-c18e-4b5c-ac13-89318e4bb6c3 2f3bfc62663b415caedcd269c56412be 17906cc84c6c4f43b1a09758e5552c63 - - default default] [instance: 78c21010-1e12-412a-8a98-2cb6e015f94a] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec  6 02:39:58 np0005548731 nova_compute[232433]: 2025-12-06 07:39:58.171 232437 DEBUG nova.virt.libvirt.driver [None req-5d02c810-c18e-4b5c-ac13-89318e4bb6c3 2f3bfc62663b415caedcd269c56412be 17906cc84c6c4f43b1a09758e5552c63 - - default default] [instance: 78c21010-1e12-412a-8a98-2cb6e015f94a] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec  6 02:39:58 np0005548731 nova_compute[232433]: 2025-12-06 07:39:58.172 232437 DEBUG nova.virt.libvirt.driver [None req-5d02c810-c18e-4b5c-ac13-89318e4bb6c3 2f3bfc62663b415caedcd269c56412be 17906cc84c6c4f43b1a09758e5552c63 - - default default] [instance: 78c21010-1e12-412a-8a98-2cb6e015f94a] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec  6 02:39:58 np0005548731 ceph-mon[77458]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 62] Generated table #102: 8556 keys, 13157011 bytes, temperature: kUnknown
Dec  6 02:39:58 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765006798173521, "cf_name": "default", "job": 62, "event": "table_file_creation", "file_number": 102, "file_size": 13157011, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13099129, "index_size": 35370, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 21445, "raw_key_size": 222290, "raw_average_key_size": 25, "raw_value_size": 12946040, "raw_average_value_size": 1513, "num_data_blocks": 1390, "num_entries": 8556, "num_filter_entries": 8556, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765002564, "oldest_key_time": 0, "file_creation_time": 1765006798, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bc01f9a1-fda8-4008-aee1-1fa5ff4371a1", "db_session_id": "CWBL8WMDM4D79BU86E9T", "orig_file_number": 102, "seqno_to_time_mapping": "N/A"}}
Dec  6 02:39:58 np0005548731 ceph-mon[77458]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  6 02:39:58 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:39:58.173795) [db/compaction/compaction_job.cc:1663] [default] [JOB 62] Compacted 1@0 + 1@6 files to L6 => 13157011 bytes
Dec  6 02:39:58 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:39:58.175155) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 203.7 rd, 175.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.7, 10.9 +0.0 blob) out(12.5 +0.0 blob), read-write-amplify(7.3) write-amplify(3.4) OK, records in: 9095, records dropped: 539 output_compression: NoCompression
Dec  6 02:39:58 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:39:58.175177) EVENT_LOG_v1 {"time_micros": 1765006798175167, "job": 62, "event": "compaction_finished", "compaction_time_micros": 75170, "compaction_time_cpu_micros": 35865, "output_level": 6, "num_output_files": 1, "total_output_size": 13157011, "num_input_records": 9095, "num_output_records": 8556, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec  6 02:39:58 np0005548731 ceph-mon[77458]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000101.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  6 02:39:58 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765006798176056, "job": 62, "event": "table_file_deletion", "file_number": 101}
Dec  6 02:39:58 np0005548731 ceph-mon[77458]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000099.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  6 02:39:58 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765006798178160, "job": 62, "event": "table_file_deletion", "file_number": 99}
Dec  6 02:39:58 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:39:58.098240) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 02:39:58 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:39:58.178196) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 02:39:58 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:39:58.178200) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 02:39:58 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:39:58.178202) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 02:39:58 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:39:58.178203) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 02:39:58 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:39:58.178204) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 02:39:58 np0005548731 nova_compute[232433]: 2025-12-06 07:39:58.181 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 78c21010-1e12-412a-8a98-2cb6e015f94a] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec  6 02:39:58 np0005548731 nova_compute[232433]: 2025-12-06 07:39:58.221 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 78c21010-1e12-412a-8a98-2cb6e015f94a] During sync_power_state the instance has a pending task (spawning). Skip.
Dec  6 02:39:58 np0005548731 nova_compute[232433]: 2025-12-06 07:39:58.221 232437 DEBUG nova.virt.driver [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Emitting event <LifecycleEvent: 1765006798.0270276, 78c21010-1e12-412a-8a98-2cb6e015f94a => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec  6 02:39:58 np0005548731 nova_compute[232433]: 2025-12-06 07:39:58.221 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 78c21010-1e12-412a-8a98-2cb6e015f94a] VM Started (Lifecycle Event)
Dec  6 02:39:58 np0005548731 nova_compute[232433]: 2025-12-06 07:39:58.311 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 78c21010-1e12-412a-8a98-2cb6e015f94a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec  6 02:39:58 np0005548731 nova_compute[232433]: 2025-12-06 07:39:58.315 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 78c21010-1e12-412a-8a98-2cb6e015f94a] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec  6 02:39:58 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:39:58 np0005548731 nova_compute[232433]: 2025-12-06 07:39:58.488 232437 INFO nova.compute.manager [None req-5d02c810-c18e-4b5c-ac13-89318e4bb6c3 2f3bfc62663b415caedcd269c56412be 17906cc84c6c4f43b1a09758e5552c63 - - default default] [instance: 78c21010-1e12-412a-8a98-2cb6e015f94a] Took 5.62 seconds to spawn the instance on the hypervisor.
Dec  6 02:39:58 np0005548731 nova_compute[232433]: 2025-12-06 07:39:58.489 232437 DEBUG nova.compute.manager [None req-5d02c810-c18e-4b5c-ac13-89318e4bb6c3 2f3bfc62663b415caedcd269c56412be 17906cc84c6c4f43b1a09758e5552c63 - - default default] [instance: 78c21010-1e12-412a-8a98-2cb6e015f94a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec  6 02:39:58 np0005548731 nova_compute[232433]: 2025-12-06 07:39:58.528 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 78c21010-1e12-412a-8a98-2cb6e015f94a] During sync_power_state the instance has a pending task (spawning). Skip.
Dec  6 02:39:58 np0005548731 nova_compute[232433]: 2025-12-06 07:39:58.663 232437 INFO nova.compute.manager [None req-5d02c810-c18e-4b5c-ac13-89318e4bb6c3 2f3bfc62663b415caedcd269c56412be 17906cc84c6c4f43b1a09758e5552c63 - - default default] [instance: 78c21010-1e12-412a-8a98-2cb6e015f94a] Took 6.95 seconds to build instance.
Dec  6 02:39:58 np0005548731 nova_compute[232433]: 2025-12-06 07:39:58.816 232437 DEBUG oslo_concurrency.lockutils [None req-5d02c810-c18e-4b5c-ac13-89318e4bb6c3 2f3bfc62663b415caedcd269c56412be 17906cc84c6c4f43b1a09758e5552c63 - - default default] Lock "78c21010-1e12-412a-8a98-2cb6e015f94a" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.284s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  6 02:39:58 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:39:58 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:39:58 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:39:58.959 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:39:59 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e316 e316: 3 total, 3 up, 3 in
Dec  6 02:40:00 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:40:00 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:40:00 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:40:00.151 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:40:00 np0005548731 ceph-mon[77458]: overall HEALTH_OK
Dec  6 02:40:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:40:00.880 143965 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:40:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:40:00.881 143965 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:40:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:40:00.882 143965 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:40:00 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:40:00 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 02:40:00 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:40:00.967 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 02:40:01 np0005548731 nova_compute[232433]: 2025-12-06 07:40:01.846 232437 DEBUG nova.compute.manager [None req-995d71d7-1704-4b14-91c3-b45d098e58b8 2f3bfc62663b415caedcd269c56412be 17906cc84c6c4f43b1a09758e5552c63 - - default default] [instance: 78c21010-1e12-412a-8a98-2cb6e015f94a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:40:01 np0005548731 nova_compute[232433]: 2025-12-06 07:40:01.899 232437 INFO nova.compute.manager [None req-995d71d7-1704-4b14-91c3-b45d098e58b8 2f3bfc62663b415caedcd269c56412be 17906cc84c6c4f43b1a09758e5552c63 - - default default] [instance: 78c21010-1e12-412a-8a98-2cb6e015f94a] instance snapshotting#033[00m
Dec  6 02:40:01 np0005548731 nova_compute[232433]: 2025-12-06 07:40:01.901 232437 DEBUG nova.objects.instance [None req-995d71d7-1704-4b14-91c3-b45d098e58b8 2f3bfc62663b415caedcd269c56412be 17906cc84c6c4f43b1a09758e5552c63 - - default default] Lazy-loading 'flavor' on Instance uuid 78c21010-1e12-412a-8a98-2cb6e015f94a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:40:01 np0005548731 podman[292100]: 2025-12-06 07:40:01.902352047 +0000 UTC m=+0.050636285 container health_status 26c2ce9bdb83c6db5ccad7f5b2a7cb4e9354ab0235ad2f942ea42c8feda2142b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, 
tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS)
Dec  6 02:40:01 np0005548731 podman[292101]: 2025-12-06 07:40:01.930461072 +0000 UTC m=+0.078974565 container health_status a118c3becf56cfbcae208065771ebf10a65162bb7b96389971293e534823fb78 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Dec  6 02:40:01 np0005548731 podman[292102]: 2025-12-06 07:40:01.933628718 +0000 UTC m=+0.081534737 container health_status ae4fd08cdec31d6ae2a6b91ffff7deb6ca5a8375cb52ee4b685cc6f25796824e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=multipathd, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Dec  6 02:40:02 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:40:02 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:40:02 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:40:02.153 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:40:02 np0005548731 nova_compute[232433]: 2025-12-06 07:40:02.448 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:40:02 np0005548731 nova_compute[232433]: 2025-12-06 07:40:02.894 232437 DEBUG oslo_concurrency.lockutils [None req-6379b5f4-6e25-4335-ae9b-97d8b24dbe12 2f3bfc62663b415caedcd269c56412be 17906cc84c6c4f43b1a09758e5552c63 - - default default] Acquiring lock "78c21010-1e12-412a-8a98-2cb6e015f94a" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:40:02 np0005548731 nova_compute[232433]: 2025-12-06 07:40:02.894 232437 DEBUG oslo_concurrency.lockutils [None req-6379b5f4-6e25-4335-ae9b-97d8b24dbe12 2f3bfc62663b415caedcd269c56412be 17906cc84c6c4f43b1a09758e5552c63 - - default default] Lock "78c21010-1e12-412a-8a98-2cb6e015f94a" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:40:02 np0005548731 nova_compute[232433]: 2025-12-06 07:40:02.894 232437 DEBUG oslo_concurrency.lockutils [None req-6379b5f4-6e25-4335-ae9b-97d8b24dbe12 2f3bfc62663b415caedcd269c56412be 17906cc84c6c4f43b1a09758e5552c63 - - default default] Acquiring lock "78c21010-1e12-412a-8a98-2cb6e015f94a-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:40:02 np0005548731 nova_compute[232433]: 2025-12-06 07:40:02.894 232437 DEBUG oslo_concurrency.lockutils [None req-6379b5f4-6e25-4335-ae9b-97d8b24dbe12 2f3bfc62663b415caedcd269c56412be 17906cc84c6c4f43b1a09758e5552c63 - - default default] Lock "78c21010-1e12-412a-8a98-2cb6e015f94a-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:40:02 np0005548731 nova_compute[232433]: 2025-12-06 07:40:02.895 232437 DEBUG oslo_concurrency.lockutils [None req-6379b5f4-6e25-4335-ae9b-97d8b24dbe12 2f3bfc62663b415caedcd269c56412be 17906cc84c6c4f43b1a09758e5552c63 - - default default] Lock "78c21010-1e12-412a-8a98-2cb6e015f94a-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:40:02 np0005548731 nova_compute[232433]: 2025-12-06 07:40:02.895 232437 INFO nova.compute.manager [None req-6379b5f4-6e25-4335-ae9b-97d8b24dbe12 2f3bfc62663b415caedcd269c56412be 17906cc84c6c4f43b1a09758e5552c63 - - default default] [instance: 78c21010-1e12-412a-8a98-2cb6e015f94a] Terminating instance#033[00m
Dec  6 02:40:02 np0005548731 nova_compute[232433]: 2025-12-06 07:40:02.896 232437 DEBUG oslo_concurrency.lockutils [None req-6379b5f4-6e25-4335-ae9b-97d8b24dbe12 2f3bfc62663b415caedcd269c56412be 17906cc84c6c4f43b1a09758e5552c63 - - default default] Acquiring lock "refresh_cache-78c21010-1e12-412a-8a98-2cb6e015f94a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 02:40:02 np0005548731 nova_compute[232433]: 2025-12-06 07:40:02.896 232437 DEBUG oslo_concurrency.lockutils [None req-6379b5f4-6e25-4335-ae9b-97d8b24dbe12 2f3bfc62663b415caedcd269c56412be 17906cc84c6c4f43b1a09758e5552c63 - - default default] Acquired lock "refresh_cache-78c21010-1e12-412a-8a98-2cb6e015f94a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 02:40:02 np0005548731 nova_compute[232433]: 2025-12-06 07:40:02.896 232437 DEBUG nova.network.neutron [None req-6379b5f4-6e25-4335-ae9b-97d8b24dbe12 2f3bfc62663b415caedcd269c56412be 17906cc84c6c4f43b1a09758e5552c63 - - default default] [instance: 78c21010-1e12-412a-8a98-2cb6e015f94a] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec  6 02:40:02 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:40:02 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:40:02 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:40:02.970 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:40:03 np0005548731 nova_compute[232433]: 2025-12-06 07:40:03.163 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:40:03 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:40:03 np0005548731 nova_compute[232433]: 2025-12-06 07:40:03.579 232437 INFO nova.virt.libvirt.driver [None req-995d71d7-1704-4b14-91c3-b45d098e58b8 2f3bfc62663b415caedcd269c56412be 17906cc84c6c4f43b1a09758e5552c63 - - default default] [instance: 78c21010-1e12-412a-8a98-2cb6e015f94a] Beginning live snapshot process#033[00m
Dec  6 02:40:03 np0005548731 nova_compute[232433]: 2025-12-06 07:40:03.729 232437 DEBUG nova.compute.manager [None req-995d71d7-1704-4b14-91c3-b45d098e58b8 2f3bfc62663b415caedcd269c56412be 17906cc84c6c4f43b1a09758e5552c63 - - default default] [instance: 78c21010-1e12-412a-8a98-2cb6e015f94a] Instance disappeared during snapshot _snapshot_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:4390#033[00m
Dec  6 02:40:04 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:40:04 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:40:04 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:40:04.156 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:40:04 np0005548731 nova_compute[232433]: 2025-12-06 07:40:04.218 232437 DEBUG nova.network.neutron [None req-6379b5f4-6e25-4335-ae9b-97d8b24dbe12 2f3bfc62663b415caedcd269c56412be 17906cc84c6c4f43b1a09758e5552c63 - - default default] [instance: 78c21010-1e12-412a-8a98-2cb6e015f94a] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec  6 02:40:04 np0005548731 nova_compute[232433]: 2025-12-06 07:40:04.594 232437 DEBUG oslo_concurrency.lockutils [None req-813964a9-5886-4008-8228-81c3aa67f0c5 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Acquiring lock "b4be0ef8-945f-47a1-a3a8-5962f1e692e5" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:40:04 np0005548731 nova_compute[232433]: 2025-12-06 07:40:04.594 232437 DEBUG oslo_concurrency.lockutils [None req-813964a9-5886-4008-8228-81c3aa67f0c5 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Lock "b4be0ef8-945f-47a1-a3a8-5962f1e692e5" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:40:04 np0005548731 nova_compute[232433]: 2025-12-06 07:40:04.595 232437 DEBUG oslo_concurrency.lockutils [None req-813964a9-5886-4008-8228-81c3aa67f0c5 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Acquiring lock "b4be0ef8-945f-47a1-a3a8-5962f1e692e5-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:40:04 np0005548731 nova_compute[232433]: 2025-12-06 07:40:04.595 232437 DEBUG oslo_concurrency.lockutils [None req-813964a9-5886-4008-8228-81c3aa67f0c5 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Lock "b4be0ef8-945f-47a1-a3a8-5962f1e692e5-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:40:04 np0005548731 nova_compute[232433]: 2025-12-06 07:40:04.595 232437 DEBUG oslo_concurrency.lockutils [None req-813964a9-5886-4008-8228-81c3aa67f0c5 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Lock "b4be0ef8-945f-47a1-a3a8-5962f1e692e5-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:40:04 np0005548731 nova_compute[232433]: 2025-12-06 07:40:04.596 232437 INFO nova.compute.manager [None req-813964a9-5886-4008-8228-81c3aa67f0c5 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] [instance: b4be0ef8-945f-47a1-a3a8-5962f1e692e5] Terminating instance#033[00m
Dec  6 02:40:04 np0005548731 nova_compute[232433]: 2025-12-06 07:40:04.597 232437 DEBUG nova.compute.manager [None req-813964a9-5886-4008-8228-81c3aa67f0c5 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] [instance: b4be0ef8-945f-47a1-a3a8-5962f1e692e5] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec  6 02:40:04 np0005548731 kernel: tap657a8a48-7a (unregistering): left promiscuous mode
Dec  6 02:40:04 np0005548731 NetworkManager[49182]: <info>  [1765006804.6602] device (tap657a8a48-7a): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec  6 02:40:04 np0005548731 ovn_controller[133927]: 2025-12-06T07:40:04Z|00616|binding|INFO|Releasing lport 657a8a48-7a74-4b37-a294-df0b2fd9332c from this chassis (sb_readonly=0)
Dec  6 02:40:04 np0005548731 ovn_controller[133927]: 2025-12-06T07:40:04Z|00617|binding|INFO|Setting lport 657a8a48-7a74-4b37-a294-df0b2fd9332c down in Southbound
Dec  6 02:40:04 np0005548731 ovn_controller[133927]: 2025-12-06T07:40:04Z|00618|binding|INFO|Removing iface tap657a8a48-7a ovn-installed in OVS
Dec  6 02:40:04 np0005548731 nova_compute[232433]: 2025-12-06 07:40:04.717 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:40:04 np0005548731 nova_compute[232433]: 2025-12-06 07:40:04.719 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:40:04 np0005548731 nova_compute[232433]: 2025-12-06 07:40:04.744 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:40:04 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:40:04.747 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f4:fe:4a 10.100.0.7'], port_security=['fa:16:3e:f4:fe:4a 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'b4be0ef8-945f-47a1-a3a8-5962f1e692e5', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5bb49e8a-b939-4c79-851c-62c634be0272', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '833f4cf9f5a64b2ab94c3bf330353a31', 'neutron:revision_number': '6', 'neutron:security_group_ids': '44fc0f8f-27e8-483c-be93-2bb490e3ff3b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=56d5d28a-0d18-4549-b1d7-8420194c6348, chassis=[], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], logical_port=657a8a48-7a74-4b37-a294-df0b2fd9332c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 02:40:04 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:40:04.748 143965 INFO neutron.agent.ovn.metadata.agent [-] Port 657a8a48-7a74-4b37-a294-df0b2fd9332c in datapath 5bb49e8a-b939-4c79-851c-62c634be0272 unbound from our chassis#033[00m
Dec  6 02:40:04 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:40:04.749 143965 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 5bb49e8a-b939-4c79-851c-62c634be0272#033[00m
Dec  6 02:40:04 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:40:04.764 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[93ef320a-10f2-4553-8dac-cb41e0645128]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:40:04 np0005548731 systemd[1]: machine-qemu\x2d58\x2dinstance\x2d0000007f.scope: Deactivated successfully.
Dec  6 02:40:04 np0005548731 systemd[1]: machine-qemu\x2d58\x2dinstance\x2d0000007f.scope: Consumed 21.789s CPU time.
Dec  6 02:40:04 np0005548731 systemd-machined[195355]: Machine qemu-58-instance-0000007f terminated.
Dec  6 02:40:04 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:40:04.792 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[e7ef6bc5-be23-4df2-aedf-8d9515213016]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:40:04 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:40:04.795 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[717a5a5a-f212-44c6-b61c-7397a9690f57]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:40:04 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:40:04.823 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[afde3354-012b-4c41-b4ad-138c02f89049]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:40:04 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:40:04.840 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[800be28d-01d7-4476-bc20-0b5686269a5f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5bb49e8a-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:0b:bf:8b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 14, 'tx_packets': 27, 'rx_bytes': 868, 'tx_bytes': 1278, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 14, 'tx_packets': 27, 'rx_bytes': 868, 'tx_bytes': 1278, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 164], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 667055, 'reachable_time': 27555, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 292178, 'error': None, 'target': 'ovnmeta-5bb49e8a-b939-4c79-851c-62c634be0272', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:40:04 np0005548731 nova_compute[232433]: 2025-12-06 07:40:04.845 232437 INFO nova.virt.libvirt.driver [-] [instance: b4be0ef8-945f-47a1-a3a8-5962f1e692e5] Instance destroyed successfully.#033[00m
Dec  6 02:40:04 np0005548731 nova_compute[232433]: 2025-12-06 07:40:04.846 232437 DEBUG nova.objects.instance [None req-813964a9-5886-4008-8228-81c3aa67f0c5 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Lazy-loading 'resources' on Instance uuid b4be0ef8-945f-47a1-a3a8-5962f1e692e5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:40:04 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:40:04.857 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[468500e0-51c7-437a-b188-465e82922b12]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap5bb49e8a-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 667067, 'tstamp': 667067}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 292185, 'error': None, 'target': 'ovnmeta-5bb49e8a-b939-4c79-851c-62c634be0272', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap5bb49e8a-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 667070, 'tstamp': 667070}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 292185, 'error': None, 'target': 'ovnmeta-5bb49e8a-b939-4c79-851c-62c634be0272', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:40:04 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:40:04.858 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5bb49e8a-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:40:04 np0005548731 nova_compute[232433]: 2025-12-06 07:40:04.862 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:40:04 np0005548731 nova_compute[232433]: 2025-12-06 07:40:04.866 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:40:04 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:40:04.866 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5bb49e8a-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:40:04 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:40:04.867 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  6 02:40:04 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:40:04.867 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap5bb49e8a-b0, col_values=(('external_ids', {'iface-id': 'e4d89947-8fab-4c13-b2db-4eed875f77a0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:40:04 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:40:04.867 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  6 02:40:04 np0005548731 nova_compute[232433]: 2025-12-06 07:40:04.894 232437 DEBUG nova.virt.libvirt.vif [None req-813964a9-5886-4008-8228-81c3aa67f0c5 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T07:34:53Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='multiattach-server-0',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-2.ctlplane.example.com',hostname='multiattach-server-0',id=127,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJvvXUJX6GT0XQikpBKT/pWa9/7+F48wLkdnGcJYsqojmErT+oUc0gEHpXW8ulxQp5/Qun0IejqspzMhFiBiIMspmuXC7WiBfiNlH7z/XH9UjP9DXKhc6lZtmV9q0VvTnQ==',key_name='tempest-keypair-1449411209',keypairs=<?>,launch_index=0,launched_at=2025-12-06T07:36:41Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='833f4cf9f5a64b2ab94c3bf330353a31',ramdisk_id='',reservation_id='r-nhxynips',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35'
,image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachVolumeMultiAttachTest-690984293',owner_user_name='tempest-AttachVolumeMultiAttachTest-690984293-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T07:36:54Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='605b5481e0c944048e6a67046c30d693',uuid=b4be0ef8-945f-47a1-a3a8-5962f1e692e5,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "657a8a48-7a74-4b37-a294-df0b2fd9332c", "address": "fa:16:3e:f4:fe:4a", "network": {"id": "5bb49e8a-b939-4c79-851c-62c634be0272", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-50482264-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "833f4cf9f5a64b2ab94c3bf330353a31", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap657a8a48-7a", "ovs_interfaceid": "657a8a48-7a74-4b37-a294-df0b2fd9332c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec  6 02:40:04 np0005548731 nova_compute[232433]: 2025-12-06 07:40:04.895 232437 DEBUG nova.network.os_vif_util [None req-813964a9-5886-4008-8228-81c3aa67f0c5 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Converting VIF {"id": "657a8a48-7a74-4b37-a294-df0b2fd9332c", "address": "fa:16:3e:f4:fe:4a", "network": {"id": "5bb49e8a-b939-4c79-851c-62c634be0272", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-50482264-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "833f4cf9f5a64b2ab94c3bf330353a31", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap657a8a48-7a", "ovs_interfaceid": "657a8a48-7a74-4b37-a294-df0b2fd9332c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 02:40:04 np0005548731 nova_compute[232433]: 2025-12-06 07:40:04.895 232437 DEBUG nova.network.os_vif_util [None req-813964a9-5886-4008-8228-81c3aa67f0c5 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:f4:fe:4a,bridge_name='br-int',has_traffic_filtering=True,id=657a8a48-7a74-4b37-a294-df0b2fd9332c,network=Network(5bb49e8a-b939-4c79-851c-62c634be0272),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap657a8a48-7a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 02:40:04 np0005548731 nova_compute[232433]: 2025-12-06 07:40:04.896 232437 DEBUG os_vif [None req-813964a9-5886-4008-8228-81c3aa67f0c5 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:f4:fe:4a,bridge_name='br-int',has_traffic_filtering=True,id=657a8a48-7a74-4b37-a294-df0b2fd9332c,network=Network(5bb49e8a-b939-4c79-851c-62c634be0272),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap657a8a48-7a') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec  6 02:40:04 np0005548731 nova_compute[232433]: 2025-12-06 07:40:04.899 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:40:04 np0005548731 nova_compute[232433]: 2025-12-06 07:40:04.900 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap657a8a48-7a, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:40:04 np0005548731 nova_compute[232433]: 2025-12-06 07:40:04.902 232437 DEBUG nova.network.neutron [None req-6379b5f4-6e25-4335-ae9b-97d8b24dbe12 2f3bfc62663b415caedcd269c56412be 17906cc84c6c4f43b1a09758e5552c63 - - default default] [instance: 78c21010-1e12-412a-8a98-2cb6e015f94a] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 02:40:04 np0005548731 nova_compute[232433]: 2025-12-06 07:40:04.903 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:40:04 np0005548731 nova_compute[232433]: 2025-12-06 07:40:04.905 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  6 02:40:04 np0005548731 nova_compute[232433]: 2025-12-06 07:40:04.907 232437 INFO os_vif [None req-813964a9-5886-4008-8228-81c3aa67f0c5 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:f4:fe:4a,bridge_name='br-int',has_traffic_filtering=True,id=657a8a48-7a74-4b37-a294-df0b2fd9332c,network=Network(5bb49e8a-b939-4c79-851c-62c634be0272),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap657a8a48-7a')#033[00m
Dec  6 02:40:04 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:40:04 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:40:04 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:40:04.972 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:40:04 np0005548731 nova_compute[232433]: 2025-12-06 07:40:04.991 232437 DEBUG oslo_concurrency.lockutils [None req-6379b5f4-6e25-4335-ae9b-97d8b24dbe12 2f3bfc62663b415caedcd269c56412be 17906cc84c6c4f43b1a09758e5552c63 - - default default] Releasing lock "refresh_cache-78c21010-1e12-412a-8a98-2cb6e015f94a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 02:40:04 np0005548731 nova_compute[232433]: 2025-12-06 07:40:04.993 232437 DEBUG nova.compute.manager [None req-6379b5f4-6e25-4335-ae9b-97d8b24dbe12 2f3bfc62663b415caedcd269c56412be 17906cc84c6c4f43b1a09758e5552c63 - - default default] [instance: 78c21010-1e12-412a-8a98-2cb6e015f94a] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec  6 02:40:05 np0005548731 systemd[1]: machine-qemu\x2d63\x2dinstance\x2d00000088.scope: Deactivated successfully.
Dec  6 02:40:05 np0005548731 systemd[1]: machine-qemu\x2d63\x2dinstance\x2d00000088.scope: Consumed 7.452s CPU time.
Dec  6 02:40:05 np0005548731 systemd-machined[195355]: Machine qemu-63-instance-00000088 terminated.
Dec  6 02:40:05 np0005548731 nova_compute[232433]: 2025-12-06 07:40:05.091 232437 DEBUG nova.compute.manager [None req-995d71d7-1704-4b14-91c3-b45d098e58b8 2f3bfc62663b415caedcd269c56412be 17906cc84c6c4f43b1a09758e5552c63 - - default default] [instance: 78c21010-1e12-412a-8a98-2cb6e015f94a] Found 0 images (rotation: 2) _rotate_backups /usr/lib/python3.9/site-packages/nova/compute/manager.py:4450#033[00m
Dec  6 02:40:05 np0005548731 nova_compute[232433]: 2025-12-06 07:40:05.209 232437 INFO nova.virt.libvirt.driver [-] [instance: 78c21010-1e12-412a-8a98-2cb6e015f94a] Instance destroyed successfully.#033[00m
Dec  6 02:40:05 np0005548731 nova_compute[232433]: 2025-12-06 07:40:05.211 232437 DEBUG nova.objects.instance [None req-6379b5f4-6e25-4335-ae9b-97d8b24dbe12 2f3bfc62663b415caedcd269c56412be 17906cc84c6c4f43b1a09758e5552c63 - - default default] Lazy-loading 'resources' on Instance uuid 78c21010-1e12-412a-8a98-2cb6e015f94a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:40:05 np0005548731 nova_compute[232433]: 2025-12-06 07:40:05.309 232437 INFO nova.virt.libvirt.driver [None req-813964a9-5886-4008-8228-81c3aa67f0c5 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] [instance: b4be0ef8-945f-47a1-a3a8-5962f1e692e5] Deleting instance files /var/lib/nova/instances/b4be0ef8-945f-47a1-a3a8-5962f1e692e5_del#033[00m
Dec  6 02:40:05 np0005548731 nova_compute[232433]: 2025-12-06 07:40:05.310 232437 INFO nova.virt.libvirt.driver [None req-813964a9-5886-4008-8228-81c3aa67f0c5 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] [instance: b4be0ef8-945f-47a1-a3a8-5962f1e692e5] Deletion of /var/lib/nova/instances/b4be0ef8-945f-47a1-a3a8-5962f1e692e5_del complete#033[00m
Dec  6 02:40:05 np0005548731 nova_compute[232433]: 2025-12-06 07:40:05.694 232437 INFO nova.compute.manager [None req-813964a9-5886-4008-8228-81c3aa67f0c5 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] [instance: b4be0ef8-945f-47a1-a3a8-5962f1e692e5] Took 1.10 seconds to destroy the instance on the hypervisor.#033[00m
Dec  6 02:40:05 np0005548731 nova_compute[232433]: 2025-12-06 07:40:05.695 232437 DEBUG oslo.service.loopingcall [None req-813964a9-5886-4008-8228-81c3aa67f0c5 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Dec  6 02:40:05 np0005548731 nova_compute[232433]: 2025-12-06 07:40:05.695 232437 DEBUG nova.compute.manager [-] [instance: b4be0ef8-945f-47a1-a3a8-5962f1e692e5] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Dec  6 02:40:05 np0005548731 nova_compute[232433]: 2025-12-06 07:40:05.695 232437 DEBUG nova.network.neutron [-] [instance: b4be0ef8-945f-47a1-a3a8-5962f1e692e5] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Dec  6 02:40:05 np0005548731 nova_compute[232433]: 2025-12-06 07:40:05.900 232437 INFO nova.virt.libvirt.driver [None req-6379b5f4-6e25-4335-ae9b-97d8b24dbe12 2f3bfc62663b415caedcd269c56412be 17906cc84c6c4f43b1a09758e5552c63 - - default default] [instance: 78c21010-1e12-412a-8a98-2cb6e015f94a] Deleting instance files /var/lib/nova/instances/78c21010-1e12-412a-8a98-2cb6e015f94a_del#033[00m
Dec  6 02:40:05 np0005548731 nova_compute[232433]: 2025-12-06 07:40:05.901 232437 INFO nova.virt.libvirt.driver [None req-6379b5f4-6e25-4335-ae9b-97d8b24dbe12 2f3bfc62663b415caedcd269c56412be 17906cc84c6c4f43b1a09758e5552c63 - - default default] [instance: 78c21010-1e12-412a-8a98-2cb6e015f94a] Deletion of /var/lib/nova/instances/78c21010-1e12-412a-8a98-2cb6e015f94a_del complete#033[00m
Dec  6 02:40:06 np0005548731 nova_compute[232433]: 2025-12-06 07:40:06.038 232437 DEBUG nova.compute.manager [req-233b76d1-1fbd-4650-90a9-7591552b7846 req-4f51e91e-2605-45fb-bb11-4bf833cbf1c2 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: b4be0ef8-945f-47a1-a3a8-5962f1e692e5] Received event network-vif-unplugged-657a8a48-7a74-4b37-a294-df0b2fd9332c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:40:06 np0005548731 nova_compute[232433]: 2025-12-06 07:40:06.039 232437 DEBUG oslo_concurrency.lockutils [req-233b76d1-1fbd-4650-90a9-7591552b7846 req-4f51e91e-2605-45fb-bb11-4bf833cbf1c2 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "b4be0ef8-945f-47a1-a3a8-5962f1e692e5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:40:06 np0005548731 nova_compute[232433]: 2025-12-06 07:40:06.039 232437 DEBUG oslo_concurrency.lockutils [req-233b76d1-1fbd-4650-90a9-7591552b7846 req-4f51e91e-2605-45fb-bb11-4bf833cbf1c2 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "b4be0ef8-945f-47a1-a3a8-5962f1e692e5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:40:06 np0005548731 nova_compute[232433]: 2025-12-06 07:40:06.039 232437 DEBUG oslo_concurrency.lockutils [req-233b76d1-1fbd-4650-90a9-7591552b7846 req-4f51e91e-2605-45fb-bb11-4bf833cbf1c2 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "b4be0ef8-945f-47a1-a3a8-5962f1e692e5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:40:06 np0005548731 nova_compute[232433]: 2025-12-06 07:40:06.039 232437 DEBUG nova.compute.manager [req-233b76d1-1fbd-4650-90a9-7591552b7846 req-4f51e91e-2605-45fb-bb11-4bf833cbf1c2 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: b4be0ef8-945f-47a1-a3a8-5962f1e692e5] No waiting events found dispatching network-vif-unplugged-657a8a48-7a74-4b37-a294-df0b2fd9332c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 02:40:06 np0005548731 nova_compute[232433]: 2025-12-06 07:40:06.040 232437 DEBUG nova.compute.manager [req-233b76d1-1fbd-4650-90a9-7591552b7846 req-4f51e91e-2605-45fb-bb11-4bf833cbf1c2 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: b4be0ef8-945f-47a1-a3a8-5962f1e692e5] Received event network-vif-unplugged-657a8a48-7a74-4b37-a294-df0b2fd9332c for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Dec  6 02:40:06 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:40:06 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 02:40:06 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:40:06.159 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 02:40:06 np0005548731 nova_compute[232433]: 2025-12-06 07:40:06.406 232437 INFO nova.compute.manager [None req-6379b5f4-6e25-4335-ae9b-97d8b24dbe12 2f3bfc62663b415caedcd269c56412be 17906cc84c6c4f43b1a09758e5552c63 - - default default] [instance: 78c21010-1e12-412a-8a98-2cb6e015f94a] Took 1.41 seconds to destroy the instance on the hypervisor.#033[00m
Dec  6 02:40:06 np0005548731 nova_compute[232433]: 2025-12-06 07:40:06.407 232437 DEBUG oslo.service.loopingcall [None req-6379b5f4-6e25-4335-ae9b-97d8b24dbe12 2f3bfc62663b415caedcd269c56412be 17906cc84c6c4f43b1a09758e5552c63 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Dec  6 02:40:06 np0005548731 nova_compute[232433]: 2025-12-06 07:40:06.409 232437 DEBUG nova.compute.manager [-] [instance: 78c21010-1e12-412a-8a98-2cb6e015f94a] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Dec  6 02:40:06 np0005548731 nova_compute[232433]: 2025-12-06 07:40:06.409 232437 DEBUG nova.network.neutron [-] [instance: 78c21010-1e12-412a-8a98-2cb6e015f94a] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Dec  6 02:40:06 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:40:06 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:40:06 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:40:06.974 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:40:08 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:40:08 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:40:08 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:40:08.162 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:40:08 np0005548731 nova_compute[232433]: 2025-12-06 07:40:08.165 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:40:08 np0005548731 nova_compute[232433]: 2025-12-06 07:40:08.413 232437 DEBUG nova.network.neutron [-] [instance: 78c21010-1e12-412a-8a98-2cb6e015f94a] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec  6 02:40:08 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:40:08 np0005548731 nova_compute[232433]: 2025-12-06 07:40:08.777 232437 DEBUG nova.compute.manager [req-de98a8a8-1b3f-4f65-9a4e-490d0e8e13a2 req-45ba3b2c-72e1-448c-ba25-16208555c350 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: b4be0ef8-945f-47a1-a3a8-5962f1e692e5] Received event network-vif-plugged-657a8a48-7a74-4b37-a294-df0b2fd9332c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:40:08 np0005548731 nova_compute[232433]: 2025-12-06 07:40:08.778 232437 DEBUG oslo_concurrency.lockutils [req-de98a8a8-1b3f-4f65-9a4e-490d0e8e13a2 req-45ba3b2c-72e1-448c-ba25-16208555c350 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "b4be0ef8-945f-47a1-a3a8-5962f1e692e5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:40:08 np0005548731 nova_compute[232433]: 2025-12-06 07:40:08.778 232437 DEBUG oslo_concurrency.lockutils [req-de98a8a8-1b3f-4f65-9a4e-490d0e8e13a2 req-45ba3b2c-72e1-448c-ba25-16208555c350 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "b4be0ef8-945f-47a1-a3a8-5962f1e692e5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:40:08 np0005548731 nova_compute[232433]: 2025-12-06 07:40:08.778 232437 DEBUG oslo_concurrency.lockutils [req-de98a8a8-1b3f-4f65-9a4e-490d0e8e13a2 req-45ba3b2c-72e1-448c-ba25-16208555c350 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "b4be0ef8-945f-47a1-a3a8-5962f1e692e5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:40:08 np0005548731 nova_compute[232433]: 2025-12-06 07:40:08.778 232437 DEBUG nova.compute.manager [req-de98a8a8-1b3f-4f65-9a4e-490d0e8e13a2 req-45ba3b2c-72e1-448c-ba25-16208555c350 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: b4be0ef8-945f-47a1-a3a8-5962f1e692e5] No waiting events found dispatching network-vif-plugged-657a8a48-7a74-4b37-a294-df0b2fd9332c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 02:40:08 np0005548731 nova_compute[232433]: 2025-12-06 07:40:08.778 232437 WARNING nova.compute.manager [req-de98a8a8-1b3f-4f65-9a4e-490d0e8e13a2 req-45ba3b2c-72e1-448c-ba25-16208555c350 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: b4be0ef8-945f-47a1-a3a8-5962f1e692e5] Received unexpected event network-vif-plugged-657a8a48-7a74-4b37-a294-df0b2fd9332c for instance with vm_state active and task_state deleting.#033[00m
Dec  6 02:40:08 np0005548731 nova_compute[232433]: 2025-12-06 07:40:08.786 232437 DEBUG nova.network.neutron [-] [instance: 78c21010-1e12-412a-8a98-2cb6e015f94a] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 02:40:08 np0005548731 nova_compute[232433]: 2025-12-06 07:40:08.808 232437 INFO nova.compute.manager [-] [instance: 78c21010-1e12-412a-8a98-2cb6e015f94a] Took 2.40 seconds to deallocate network for instance.#033[00m
Dec  6 02:40:08 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  6 02:40:08 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2376389813' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  6 02:40:08 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  6 02:40:08 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2376389813' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  6 02:40:08 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:40:08 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:40:08 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:40:08.977 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:40:09 np0005548731 nova_compute[232433]: 2025-12-06 07:40:09.358 232437 DEBUG oslo_concurrency.lockutils [None req-6379b5f4-6e25-4335-ae9b-97d8b24dbe12 2f3bfc62663b415caedcd269c56412be 17906cc84c6c4f43b1a09758e5552c63 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:40:09 np0005548731 nova_compute[232433]: 2025-12-06 07:40:09.358 232437 DEBUG oslo_concurrency.lockutils [None req-6379b5f4-6e25-4335-ae9b-97d8b24dbe12 2f3bfc62663b415caedcd269c56412be 17906cc84c6c4f43b1a09758e5552c63 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:40:09 np0005548731 nova_compute[232433]: 2025-12-06 07:40:09.902 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:40:10 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:40:10 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:40:10 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:40:10.164 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:40:10 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:40:10 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:40:10 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:40:10.980 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:40:12 np0005548731 nova_compute[232433]: 2025-12-06 07:40:12.132 232437 DEBUG nova.network.neutron [-] [instance: b4be0ef8-945f-47a1-a3a8-5962f1e692e5] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 02:40:12 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:40:12 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:40:12 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:40:12.166 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:40:12 np0005548731 nova_compute[232433]: 2025-12-06 07:40:12.178 232437 DEBUG oslo_concurrency.processutils [None req-6379b5f4-6e25-4335-ae9b-97d8b24dbe12 2f3bfc62663b415caedcd269c56412be 17906cc84c6c4f43b1a09758e5552c63 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:40:12 np0005548731 nova_compute[232433]: 2025-12-06 07:40:12.446 232437 INFO nova.compute.manager [-] [instance: b4be0ef8-945f-47a1-a3a8-5962f1e692e5] Took 6.75 seconds to deallocate network for instance.#033[00m
Dec  6 02:40:12 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 02:40:12 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/4164932125' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 02:40:12 np0005548731 nova_compute[232433]: 2025-12-06 07:40:12.592 232437 DEBUG oslo_concurrency.processutils [None req-6379b5f4-6e25-4335-ae9b-97d8b24dbe12 2f3bfc62663b415caedcd269c56412be 17906cc84c6c4f43b1a09758e5552c63 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.414s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:40:12 np0005548731 nova_compute[232433]: 2025-12-06 07:40:12.600 232437 DEBUG nova.compute.provider_tree [None req-6379b5f4-6e25-4335-ae9b-97d8b24dbe12 2f3bfc62663b415caedcd269c56412be 17906cc84c6c4f43b1a09758e5552c63 - - default default] Inventory has not changed in ProviderTree for provider: 6d00757a-082f-486d-ae84-869a2ba2e6e7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  6 02:40:12 np0005548731 nova_compute[232433]: 2025-12-06 07:40:12.665 232437 DEBUG nova.compute.manager [req-750d40e4-5328-415e-bd64-7261483d31dc req-877a9ed0-37b3-4352-add6-81e5d2caf99d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: b4be0ef8-945f-47a1-a3a8-5962f1e692e5] Received event network-vif-deleted-657a8a48-7a74-4b37-a294-df0b2fd9332c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:40:12 np0005548731 nova_compute[232433]: 2025-12-06 07:40:12.665 232437 INFO nova.compute.manager [req-750d40e4-5328-415e-bd64-7261483d31dc req-877a9ed0-37b3-4352-add6-81e5d2caf99d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: b4be0ef8-945f-47a1-a3a8-5962f1e692e5] Neutron deleted interface 657a8a48-7a74-4b37-a294-df0b2fd9332c; detaching it from the instance and deleting it from the info cache#033[00m
Dec  6 02:40:12 np0005548731 nova_compute[232433]: 2025-12-06 07:40:12.665 232437 DEBUG nova.network.neutron [req-750d40e4-5328-415e-bd64-7261483d31dc req-877a9ed0-37b3-4352-add6-81e5d2caf99d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: b4be0ef8-945f-47a1-a3a8-5962f1e692e5] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 02:40:12 np0005548731 nova_compute[232433]: 2025-12-06 07:40:12.901 232437 DEBUG nova.scheduler.client.report [None req-6379b5f4-6e25-4335-ae9b-97d8b24dbe12 2f3bfc62663b415caedcd269c56412be 17906cc84c6c4f43b1a09758e5552c63 - - default default] Inventory has not changed for provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  6 02:40:12 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:40:12 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:40:12 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:40:12.982 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:40:12 np0005548731 nova_compute[232433]: 2025-12-06 07:40:12.990 232437 DEBUG nova.compute.manager [req-750d40e4-5328-415e-bd64-7261483d31dc req-877a9ed0-37b3-4352-add6-81e5d2caf99d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: b4be0ef8-945f-47a1-a3a8-5962f1e692e5] Detach interface failed, port_id=657a8a48-7a74-4b37-a294-df0b2fd9332c, reason: Instance b4be0ef8-945f-47a1-a3a8-5962f1e692e5 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882#033[00m
Dec  6 02:40:13 np0005548731 nova_compute[232433]: 2025-12-06 07:40:13.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:40:13 np0005548731 nova_compute[232433]: 2025-12-06 07:40:13.105 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  6 02:40:13 np0005548731 nova_compute[232433]: 2025-12-06 07:40:13.118 232437 DEBUG oslo_concurrency.lockutils [None req-813964a9-5886-4008-8228-81c3aa67f0c5 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:40:13 np0005548731 nova_compute[232433]: 2025-12-06 07:40:13.167 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:40:13 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:40:14 np0005548731 nova_compute[232433]: 2025-12-06 07:40:14.032 232437 DEBUG oslo_concurrency.lockutils [None req-6379b5f4-6e25-4335-ae9b-97d8b24dbe12 2f3bfc62663b415caedcd269c56412be 17906cc84c6c4f43b1a09758e5552c63 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 4.674s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:40:14 np0005548731 nova_compute[232433]: 2025-12-06 07:40:14.035 232437 DEBUG oslo_concurrency.lockutils [None req-813964a9-5886-4008-8228-81c3aa67f0c5 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.917s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:40:14 np0005548731 nova_compute[232433]: 2025-12-06 07:40:14.129 232437 DEBUG oslo_concurrency.processutils [None req-813964a9-5886-4008-8228-81c3aa67f0c5 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:40:14 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:40:14 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:40:14 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:40:14.168 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:40:14 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 02:40:14 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/384442653' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 02:40:14 np0005548731 nova_compute[232433]: 2025-12-06 07:40:14.569 232437 DEBUG oslo_concurrency.processutils [None req-813964a9-5886-4008-8228-81c3aa67f0c5 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.440s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:40:14 np0005548731 nova_compute[232433]: 2025-12-06 07:40:14.576 232437 DEBUG nova.compute.provider_tree [None req-813964a9-5886-4008-8228-81c3aa67f0c5 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Inventory has not changed in ProviderTree for provider: 6d00757a-082f-486d-ae84-869a2ba2e6e7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  6 02:40:14 np0005548731 nova_compute[232433]: 2025-12-06 07:40:14.915 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:40:14 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:40:14 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:40:14 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:40:14.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:40:15 np0005548731 nova_compute[232433]: 2025-12-06 07:40:15.278 232437 INFO nova.scheduler.client.report [None req-6379b5f4-6e25-4335-ae9b-97d8b24dbe12 2f3bfc62663b415caedcd269c56412be 17906cc84c6c4f43b1a09758e5552c63 - - default default] Deleted allocations for instance 78c21010-1e12-412a-8a98-2cb6e015f94a#033[00m
Dec  6 02:40:15 np0005548731 nova_compute[232433]: 2025-12-06 07:40:15.294 232437 DEBUG nova.scheduler.client.report [None req-813964a9-5886-4008-8228-81c3aa67f0c5 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Inventory has not changed for provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  6 02:40:15 np0005548731 nova_compute[232433]: 2025-12-06 07:40:15.338 232437 DEBUG oslo_concurrency.lockutils [None req-813964a9-5886-4008-8228-81c3aa67f0c5 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.303s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:40:15 np0005548731 nova_compute[232433]: 2025-12-06 07:40:15.375 232437 DEBUG oslo_concurrency.lockutils [None req-6379b5f4-6e25-4335-ae9b-97d8b24dbe12 2f3bfc62663b415caedcd269c56412be 17906cc84c6c4f43b1a09758e5552c63 - - default default] Lock "78c21010-1e12-412a-8a98-2cb6e015f94a" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 12.481s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:40:15 np0005548731 nova_compute[232433]: 2025-12-06 07:40:15.378 232437 INFO nova.scheduler.client.report [None req-813964a9-5886-4008-8228-81c3aa67f0c5 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Deleted allocations for instance b4be0ef8-945f-47a1-a3a8-5962f1e692e5#033[00m
Dec  6 02:40:15 np0005548731 nova_compute[232433]: 2025-12-06 07:40:15.583 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Acquiring lock "refresh_cache-b4be0ef8-945f-47a1-a3a8-5962f1e692e5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 02:40:15 np0005548731 nova_compute[232433]: 2025-12-06 07:40:15.583 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Acquired lock "refresh_cache-b4be0ef8-945f-47a1-a3a8-5962f1e692e5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 02:40:15 np0005548731 nova_compute[232433]: 2025-12-06 07:40:15.583 232437 DEBUG nova.network.neutron [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] [instance: b4be0ef8-945f-47a1-a3a8-5962f1e692e5] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Dec  6 02:40:15 np0005548731 ovn_controller[133927]: 2025-12-06T07:40:15Z|00619|binding|INFO|Releasing lport e4d89947-8fab-4c13-b2db-4eed875f77a0 from this chassis (sb_readonly=0)
Dec  6 02:40:15 np0005548731 nova_compute[232433]: 2025-12-06 07:40:15.691 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:40:15 np0005548731 nova_compute[232433]: 2025-12-06 07:40:15.758 232437 DEBUG oslo_concurrency.lockutils [None req-813964a9-5886-4008-8228-81c3aa67f0c5 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Lock "b4be0ef8-945f-47a1-a3a8-5962f1e692e5" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 11.163s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:40:15 np0005548731 ovn_controller[133927]: 2025-12-06T07:40:15Z|00620|binding|INFO|Releasing lport e4d89947-8fab-4c13-b2db-4eed875f77a0 from this chassis (sb_readonly=0)
Dec  6 02:40:15 np0005548731 nova_compute[232433]: 2025-12-06 07:40:15.819 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:40:16 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:40:16 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:40:16 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:40:16.170 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:40:16 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:40:16 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:40:16 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:40:16.988 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:40:17 np0005548731 nova_compute[232433]: 2025-12-06 07:40:17.310 232437 DEBUG nova.network.neutron [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] [instance: b4be0ef8-945f-47a1-a3a8-5962f1e692e5] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec  6 02:40:18 np0005548731 nova_compute[232433]: 2025-12-06 07:40:18.169 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:40:18 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:40:18 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:40:18 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:40:18.172 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:40:18 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:40:18 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:40:18 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 02:40:18 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:40:18.990 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 02:40:19 np0005548731 nova_compute[232433]: 2025-12-06 07:40:19.163 232437 DEBUG nova.network.neutron [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] [instance: b4be0ef8-945f-47a1-a3a8-5962f1e692e5] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 02:40:19 np0005548731 nova_compute[232433]: 2025-12-06 07:40:19.208 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Releasing lock "refresh_cache-b4be0ef8-945f-47a1-a3a8-5962f1e692e5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 02:40:19 np0005548731 nova_compute[232433]: 2025-12-06 07:40:19.209 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] [instance: b4be0ef8-945f-47a1-a3a8-5962f1e692e5] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Dec  6 02:40:19 np0005548731 nova_compute[232433]: 2025-12-06 07:40:19.209 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:40:19 np0005548731 nova_compute[232433]: 2025-12-06 07:40:19.210 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:40:19 np0005548731 nova_compute[232433]: 2025-12-06 07:40:19.210 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  6 02:40:19 np0005548731 nova_compute[232433]: 2025-12-06 07:40:19.844 232437 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765006804.8433282, b4be0ef8-945f-47a1-a3a8-5962f1e692e5 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 02:40:19 np0005548731 nova_compute[232433]: 2025-12-06 07:40:19.845 232437 INFO nova.compute.manager [-] [instance: b4be0ef8-945f-47a1-a3a8-5962f1e692e5] VM Stopped (Lifecycle Event)#033[00m
Dec  6 02:40:19 np0005548731 nova_compute[232433]: 2025-12-06 07:40:19.918 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:40:20 np0005548731 nova_compute[232433]: 2025-12-06 07:40:20.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:40:20 np0005548731 nova_compute[232433]: 2025-12-06 07:40:20.105 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:40:20 np0005548731 nova_compute[232433]: 2025-12-06 07:40:20.105 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:40:20 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:40:20 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:40:20 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:40:20.174 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:40:20 np0005548731 nova_compute[232433]: 2025-12-06 07:40:20.208 232437 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765006805.2074816, 78c21010-1e12-412a-8a98-2cb6e015f94a => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 02:40:20 np0005548731 nova_compute[232433]: 2025-12-06 07:40:20.208 232437 INFO nova.compute.manager [-] [instance: 78c21010-1e12-412a-8a98-2cb6e015f94a] VM Stopped (Lifecycle Event)#033[00m
Dec  6 02:40:20 np0005548731 nova_compute[232433]: 2025-12-06 07:40:20.251 232437 DEBUG nova.compute.manager [None req-2b3a0197-a488-4e72-9e35-a3374ec116cd - - - - - -] [instance: b4be0ef8-945f-47a1-a3a8-5962f1e692e5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:40:20 np0005548731 nova_compute[232433]: 2025-12-06 07:40:20.521 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:40:20 np0005548731 nova_compute[232433]: 2025-12-06 07:40:20.521 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:40:20 np0005548731 nova_compute[232433]: 2025-12-06 07:40:20.521 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:40:20 np0005548731 nova_compute[232433]: 2025-12-06 07:40:20.521 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  6 02:40:20 np0005548731 nova_compute[232433]: 2025-12-06 07:40:20.522 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:40:20 np0005548731 nova_compute[232433]: 2025-12-06 07:40:20.545 232437 DEBUG nova.compute.manager [None req-00a4cb91-e9ea-457c-b3dc-ee2d6ffa4774 - - - - - -] [instance: 78c21010-1e12-412a-8a98-2cb6e015f94a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:40:20 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 02:40:20 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/666352619' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 02:40:20 np0005548731 nova_compute[232433]: 2025-12-06 07:40:20.948 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.426s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:40:20 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:40:20 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:40:20 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:40:20.993 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:40:21 np0005548731 nova_compute[232433]: 2025-12-06 07:40:21.604 232437 DEBUG nova.virt.libvirt.driver [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] skipping disk for instance-00000078 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  6 02:40:21 np0005548731 nova_compute[232433]: 2025-12-06 07:40:21.604 232437 DEBUG nova.virt.libvirt.driver [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] skipping disk for instance-00000078 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  6 02:40:21 np0005548731 nova_compute[232433]: 2025-12-06 07:40:21.747 232437 WARNING nova.virt.libvirt.driver [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  6 02:40:21 np0005548731 nova_compute[232433]: 2025-12-06 07:40:21.748 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4140MB free_disk=20.805877685546875GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  6 02:40:21 np0005548731 nova_compute[232433]: 2025-12-06 07:40:21.749 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:40:21 np0005548731 nova_compute[232433]: 2025-12-06 07:40:21.749 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:40:22 np0005548731 nova_compute[232433]: 2025-12-06 07:40:22.057 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Instance 6a50a40c-3b05-4c0e-aa67-1489e203824e actively managed on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  6 02:40:22 np0005548731 nova_compute[232433]: 2025-12-06 07:40:22.058 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  6 02:40:22 np0005548731 nova_compute[232433]: 2025-12-06 07:40:22.058 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  6 02:40:22 np0005548731 nova_compute[232433]: 2025-12-06 07:40:22.098 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:40:22 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:40:22 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:40:22 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:40:22.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:40:22 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 02:40:22 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3163851333' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 02:40:22 np0005548731 nova_compute[232433]: 2025-12-06 07:40:22.525 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.427s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:40:22 np0005548731 nova_compute[232433]: 2025-12-06 07:40:22.532 232437 DEBUG nova.compute.provider_tree [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Inventory has not changed in ProviderTree for provider: 6d00757a-082f-486d-ae84-869a2ba2e6e7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  6 02:40:22 np0005548731 nova_compute[232433]: 2025-12-06 07:40:22.747 232437 DEBUG nova.scheduler.client.report [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Inventory has not changed for provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  6 02:40:22 np0005548731 nova_compute[232433]: 2025-12-06 07:40:22.778 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  6 02:40:22 np0005548731 nova_compute[232433]: 2025-12-06 07:40:22.779 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.030s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:40:22 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:40:22 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:40:22 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:40:22.993 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:40:23 np0005548731 nova_compute[232433]: 2025-12-06 07:40:23.170 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:40:23 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:40:23 np0005548731 nova_compute[232433]: 2025-12-06 07:40:23.779 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:40:23 np0005548731 nova_compute[232433]: 2025-12-06 07:40:23.779 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:40:24 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:40:24 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 02:40:24 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:40:24.179 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 02:40:24 np0005548731 nova_compute[232433]: 2025-12-06 07:40:24.921 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:40:24 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:40:24 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 02:40:24 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:40:24.996 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 02:40:25 np0005548731 nova_compute[232433]: 2025-12-06 07:40:25.337 232437 DEBUG oslo_concurrency.lockutils [None req-fb76b8b2-c185-4ca3-8902-aca6052b7cdb a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Acquiring lock "2d45e460-a94a-4fcb-881d-de59908d47b3" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:40:25 np0005548731 nova_compute[232433]: 2025-12-06 07:40:25.337 232437 DEBUG oslo_concurrency.lockutils [None req-fb76b8b2-c185-4ca3-8902-aca6052b7cdb a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Lock "2d45e460-a94a-4fcb-881d-de59908d47b3" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:40:25 np0005548731 nova_compute[232433]: 2025-12-06 07:40:25.374 232437 DEBUG nova.compute.manager [None req-fb76b8b2-c185-4ca3-8902-aca6052b7cdb a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] [instance: 2d45e460-a94a-4fcb-881d-de59908d47b3] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Dec  6 02:40:25 np0005548731 nova_compute[232433]: 2025-12-06 07:40:25.523 232437 DEBUG oslo_concurrency.lockutils [None req-fb76b8b2-c185-4ca3-8902-aca6052b7cdb a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:40:25 np0005548731 nova_compute[232433]: 2025-12-06 07:40:25.524 232437 DEBUG oslo_concurrency.lockutils [None req-fb76b8b2-c185-4ca3-8902-aca6052b7cdb a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:40:25 np0005548731 nova_compute[232433]: 2025-12-06 07:40:25.538 232437 DEBUG nova.virt.hardware [None req-fb76b8b2-c185-4ca3-8902-aca6052b7cdb a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Dec  6 02:40:25 np0005548731 nova_compute[232433]: 2025-12-06 07:40:25.538 232437 INFO nova.compute.claims [None req-fb76b8b2-c185-4ca3-8902-aca6052b7cdb a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] [instance: 2d45e460-a94a-4fcb-881d-de59908d47b3] Claim successful on node compute-2.ctlplane.example.com#033[00m
Dec  6 02:40:25 np0005548731 nova_compute[232433]: 2025-12-06 07:40:25.729 232437 DEBUG oslo_concurrency.processutils [None req-fb76b8b2-c185-4ca3-8902-aca6052b7cdb a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:40:26 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 02:40:26 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2930511714' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 02:40:26 np0005548731 nova_compute[232433]: 2025-12-06 07:40:26.139 232437 DEBUG oslo_concurrency.processutils [None req-fb76b8b2-c185-4ca3-8902-aca6052b7cdb a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.411s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:40:26 np0005548731 nova_compute[232433]: 2025-12-06 07:40:26.145 232437 DEBUG nova.compute.provider_tree [None req-fb76b8b2-c185-4ca3-8902-aca6052b7cdb a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Inventory has not changed in ProviderTree for provider: 6d00757a-082f-486d-ae84-869a2ba2e6e7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  6 02:40:26 np0005548731 nova_compute[232433]: 2025-12-06 07:40:26.171 232437 DEBUG nova.scheduler.client.report [None req-fb76b8b2-c185-4ca3-8902-aca6052b7cdb a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Inventory has not changed for provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  6 02:40:26 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:40:26 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:40:26 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:40:26.181 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:40:26 np0005548731 nova_compute[232433]: 2025-12-06 07:40:26.204 232437 DEBUG oslo_concurrency.lockutils [None req-fb76b8b2-c185-4ca3-8902-aca6052b7cdb a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.680s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:40:26 np0005548731 nova_compute[232433]: 2025-12-06 07:40:26.205 232437 DEBUG nova.compute.manager [None req-fb76b8b2-c185-4ca3-8902-aca6052b7cdb a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] [instance: 2d45e460-a94a-4fcb-881d-de59908d47b3] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Dec  6 02:40:26 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:40:26.269 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=54, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:ec:b3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '32:72:e7:89:e0:7d'}, ipsec=False) old=SB_Global(nb_cfg=53) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 02:40:26 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:40:26.270 143965 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec  6 02:40:26 np0005548731 nova_compute[232433]: 2025-12-06 07:40:26.270 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:40:26 np0005548731 nova_compute[232433]: 2025-12-06 07:40:26.314 232437 DEBUG nova.compute.manager [None req-fb76b8b2-c185-4ca3-8902-aca6052b7cdb a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] [instance: 2d45e460-a94a-4fcb-881d-de59908d47b3] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Dec  6 02:40:26 np0005548731 nova_compute[232433]: 2025-12-06 07:40:26.315 232437 DEBUG nova.network.neutron [None req-fb76b8b2-c185-4ca3-8902-aca6052b7cdb a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] [instance: 2d45e460-a94a-4fcb-881d-de59908d47b3] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Dec  6 02:40:26 np0005548731 nova_compute[232433]: 2025-12-06 07:40:26.358 232437 INFO nova.virt.libvirt.driver [None req-fb76b8b2-c185-4ca3-8902-aca6052b7cdb a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] [instance: 2d45e460-a94a-4fcb-881d-de59908d47b3] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Dec  6 02:40:26 np0005548731 nova_compute[232433]: 2025-12-06 07:40:26.386 232437 DEBUG nova.compute.manager [None req-fb76b8b2-c185-4ca3-8902-aca6052b7cdb a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] [instance: 2d45e460-a94a-4fcb-881d-de59908d47b3] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Dec  6 02:40:26 np0005548731 nova_compute[232433]: 2025-12-06 07:40:26.572 232437 DEBUG nova.compute.manager [None req-fb76b8b2-c185-4ca3-8902-aca6052b7cdb a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] [instance: 2d45e460-a94a-4fcb-881d-de59908d47b3] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Dec  6 02:40:26 np0005548731 nova_compute[232433]: 2025-12-06 07:40:26.573 232437 DEBUG nova.virt.libvirt.driver [None req-fb76b8b2-c185-4ca3-8902-aca6052b7cdb a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] [instance: 2d45e460-a94a-4fcb-881d-de59908d47b3] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Dec  6 02:40:26 np0005548731 nova_compute[232433]: 2025-12-06 07:40:26.573 232437 INFO nova.virt.libvirt.driver [None req-fb76b8b2-c185-4ca3-8902-aca6052b7cdb a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] [instance: 2d45e460-a94a-4fcb-881d-de59908d47b3] Creating image(s)#033[00m
Dec  6 02:40:26 np0005548731 nova_compute[232433]: 2025-12-06 07:40:26.597 232437 DEBUG nova.storage.rbd_utils [None req-fb76b8b2-c185-4ca3-8902-aca6052b7cdb a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] rbd image 2d45e460-a94a-4fcb-881d-de59908d47b3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:40:26 np0005548731 nova_compute[232433]: 2025-12-06 07:40:26.624 232437 DEBUG nova.storage.rbd_utils [None req-fb76b8b2-c185-4ca3-8902-aca6052b7cdb a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] rbd image 2d45e460-a94a-4fcb-881d-de59908d47b3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:40:26 np0005548731 nova_compute[232433]: 2025-12-06 07:40:26.653 232437 DEBUG nova.storage.rbd_utils [None req-fb76b8b2-c185-4ca3-8902-aca6052b7cdb a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] rbd image 2d45e460-a94a-4fcb-881d-de59908d47b3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:40:26 np0005548731 nova_compute[232433]: 2025-12-06 07:40:26.657 232437 DEBUG oslo_concurrency.processutils [None req-fb76b8b2-c185-4ca3-8902-aca6052b7cdb a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:40:26 np0005548731 nova_compute[232433]: 2025-12-06 07:40:26.721 232437 DEBUG oslo_concurrency.processutils [None req-fb76b8b2-c185-4ca3-8902-aca6052b7cdb a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:40:26 np0005548731 nova_compute[232433]: 2025-12-06 07:40:26.722 232437 DEBUG oslo_concurrency.lockutils [None req-fb76b8b2-c185-4ca3-8902-aca6052b7cdb a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Acquiring lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:40:26 np0005548731 nova_compute[232433]: 2025-12-06 07:40:26.722 232437 DEBUG oslo_concurrency.lockutils [None req-fb76b8b2-c185-4ca3-8902-aca6052b7cdb a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:40:26 np0005548731 nova_compute[232433]: 2025-12-06 07:40:26.723 232437 DEBUG oslo_concurrency.lockutils [None req-fb76b8b2-c185-4ca3-8902-aca6052b7cdb a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:40:26 np0005548731 nova_compute[232433]: 2025-12-06 07:40:26.749 232437 DEBUG nova.storage.rbd_utils [None req-fb76b8b2-c185-4ca3-8902-aca6052b7cdb a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] rbd image 2d45e460-a94a-4fcb-881d-de59908d47b3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:40:26 np0005548731 nova_compute[232433]: 2025-12-06 07:40:26.753 232437 DEBUG oslo_concurrency.processutils [None req-fb76b8b2-c185-4ca3-8902-aca6052b7cdb a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef 2d45e460-a94a-4fcb-881d-de59908d47b3_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:40:26 np0005548731 nova_compute[232433]: 2025-12-06 07:40:26.943 232437 DEBUG nova.policy [None req-fb76b8b2-c185-4ca3-8902-aca6052b7cdb a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'a70f6c3c5e2c402bb6fa0e0507e9b6dc', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'b10aa03d68eb4d4799d53538521cc364', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Dec  6 02:40:27 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:40:27 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:40:27 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:40:26.999 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:40:27 np0005548731 nova_compute[232433]: 2025-12-06 07:40:27.949 232437 DEBUG nova.network.neutron [None req-fb76b8b2-c185-4ca3-8902-aca6052b7cdb a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] [instance: 2d45e460-a94a-4fcb-881d-de59908d47b3] Successfully created port: 35b6f475-0d58-47c2-811e-67155ce3476e _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Dec  6 02:40:28 np0005548731 nova_compute[232433]: 2025-12-06 07:40:28.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:40:28 np0005548731 nova_compute[232433]: 2025-12-06 07:40:28.171 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:40:28 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:40:28 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:40:28 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:40:28.184 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:40:28 np0005548731 nova_compute[232433]: 2025-12-06 07:40:28.205 232437 DEBUG oslo_concurrency.processutils [None req-fb76b8b2-c185-4ca3-8902-aca6052b7cdb a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef 2d45e460-a94a-4fcb-881d-de59908d47b3_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.453s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:40:28 np0005548731 nova_compute[232433]: 2025-12-06 07:40:28.282 232437 DEBUG nova.storage.rbd_utils [None req-fb76b8b2-c185-4ca3-8902-aca6052b7cdb a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] resizing rbd image 2d45e460-a94a-4fcb-881d-de59908d47b3_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Dec  6 02:40:28 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:40:28 np0005548731 nova_compute[232433]: 2025-12-06 07:40:28.602 232437 DEBUG nova.objects.instance [None req-fb76b8b2-c185-4ca3-8902-aca6052b7cdb a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Lazy-loading 'migration_context' on Instance uuid 2d45e460-a94a-4fcb-881d-de59908d47b3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:40:28 np0005548731 nova_compute[232433]: 2025-12-06 07:40:28.623 232437 DEBUG nova.virt.libvirt.driver [None req-fb76b8b2-c185-4ca3-8902-aca6052b7cdb a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] [instance: 2d45e460-a94a-4fcb-881d-de59908d47b3] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Dec  6 02:40:28 np0005548731 nova_compute[232433]: 2025-12-06 07:40:28.623 232437 DEBUG nova.virt.libvirt.driver [None req-fb76b8b2-c185-4ca3-8902-aca6052b7cdb a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] [instance: 2d45e460-a94a-4fcb-881d-de59908d47b3] Ensure instance console log exists: /var/lib/nova/instances/2d45e460-a94a-4fcb-881d-de59908d47b3/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec  6 02:40:28 np0005548731 nova_compute[232433]: 2025-12-06 07:40:28.624 232437 DEBUG oslo_concurrency.lockutils [None req-fb76b8b2-c185-4ca3-8902-aca6052b7cdb a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:40:28 np0005548731 nova_compute[232433]: 2025-12-06 07:40:28.624 232437 DEBUG oslo_concurrency.lockutils [None req-fb76b8b2-c185-4ca3-8902-aca6052b7cdb a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:40:28 np0005548731 nova_compute[232433]: 2025-12-06 07:40:28.624 232437 DEBUG oslo_concurrency.lockutils [None req-fb76b8b2-c185-4ca3-8902-aca6052b7cdb a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:40:29 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:40:29 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:40:29 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:40:29.001 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:40:29 np0005548731 nova_compute[232433]: 2025-12-06 07:40:29.237 232437 DEBUG nova.network.neutron [None req-fb76b8b2-c185-4ca3-8902-aca6052b7cdb a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] [instance: 2d45e460-a94a-4fcb-881d-de59908d47b3] Successfully updated port: 35b6f475-0d58-47c2-811e-67155ce3476e _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Dec  6 02:40:29 np0005548731 nova_compute[232433]: 2025-12-06 07:40:29.265 232437 DEBUG oslo_concurrency.lockutils [None req-fb76b8b2-c185-4ca3-8902-aca6052b7cdb a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Acquiring lock "refresh_cache-2d45e460-a94a-4fcb-881d-de59908d47b3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 02:40:29 np0005548731 nova_compute[232433]: 2025-12-06 07:40:29.265 232437 DEBUG oslo_concurrency.lockutils [None req-fb76b8b2-c185-4ca3-8902-aca6052b7cdb a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Acquired lock "refresh_cache-2d45e460-a94a-4fcb-881d-de59908d47b3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 02:40:29 np0005548731 nova_compute[232433]: 2025-12-06 07:40:29.266 232437 DEBUG nova.network.neutron [None req-fb76b8b2-c185-4ca3-8902-aca6052b7cdb a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] [instance: 2d45e460-a94a-4fcb-881d-de59908d47b3] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec  6 02:40:29 np0005548731 nova_compute[232433]: 2025-12-06 07:40:29.373 232437 DEBUG nova.compute.manager [req-c9b9bc15-bbf3-46fc-838e-e03d29c02ef6 req-f51e80de-5b3a-45dc-aa2e-afde64a75c6f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 2d45e460-a94a-4fcb-881d-de59908d47b3] Received event network-changed-35b6f475-0d58-47c2-811e-67155ce3476e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:40:29 np0005548731 nova_compute[232433]: 2025-12-06 07:40:29.373 232437 DEBUG nova.compute.manager [req-c9b9bc15-bbf3-46fc-838e-e03d29c02ef6 req-f51e80de-5b3a-45dc-aa2e-afde64a75c6f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 2d45e460-a94a-4fcb-881d-de59908d47b3] Refreshing instance network info cache due to event network-changed-35b6f475-0d58-47c2-811e-67155ce3476e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  6 02:40:29 np0005548731 nova_compute[232433]: 2025-12-06 07:40:29.373 232437 DEBUG oslo_concurrency.lockutils [req-c9b9bc15-bbf3-46fc-838e-e03d29c02ef6 req-f51e80de-5b3a-45dc-aa2e-afde64a75c6f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-2d45e460-a94a-4fcb-881d-de59908d47b3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 02:40:29 np0005548731 nova_compute[232433]: 2025-12-06 07:40:29.491 232437 DEBUG nova.network.neutron [None req-fb76b8b2-c185-4ca3-8902-aca6052b7cdb a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] [instance: 2d45e460-a94a-4fcb-881d-de59908d47b3] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec  6 02:40:29 np0005548731 nova_compute[232433]: 2025-12-06 07:40:29.924 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:40:30 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:40:30 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:40:30 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:40:30.185 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:40:30 np0005548731 nova_compute[232433]: 2025-12-06 07:40:30.898 232437 DEBUG nova.network.neutron [None req-fb76b8b2-c185-4ca3-8902-aca6052b7cdb a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] [instance: 2d45e460-a94a-4fcb-881d-de59908d47b3] Updating instance_info_cache with network_info: [{"id": "35b6f475-0d58-47c2-811e-67155ce3476e", "address": "fa:16:3e:de:b1:03", "network": {"id": "3beede49-1cbb-425c-b1af-82f43dc57163", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-619240463-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b10aa03d68eb4d4799d53538521cc364", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap35b6f475-0d", "ovs_interfaceid": "35b6f475-0d58-47c2-811e-67155ce3476e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 02:40:30 np0005548731 nova_compute[232433]: 2025-12-06 07:40:30.951 232437 DEBUG oslo_concurrency.lockutils [None req-fb76b8b2-c185-4ca3-8902-aca6052b7cdb a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Releasing lock "refresh_cache-2d45e460-a94a-4fcb-881d-de59908d47b3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 02:40:30 np0005548731 nova_compute[232433]: 2025-12-06 07:40:30.951 232437 DEBUG nova.compute.manager [None req-fb76b8b2-c185-4ca3-8902-aca6052b7cdb a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] [instance: 2d45e460-a94a-4fcb-881d-de59908d47b3] Instance network_info: |[{"id": "35b6f475-0d58-47c2-811e-67155ce3476e", "address": "fa:16:3e:de:b1:03", "network": {"id": "3beede49-1cbb-425c-b1af-82f43dc57163", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-619240463-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b10aa03d68eb4d4799d53538521cc364", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap35b6f475-0d", "ovs_interfaceid": "35b6f475-0d58-47c2-811e-67155ce3476e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Dec  6 02:40:30 np0005548731 nova_compute[232433]: 2025-12-06 07:40:30.952 232437 DEBUG oslo_concurrency.lockutils [req-c9b9bc15-bbf3-46fc-838e-e03d29c02ef6 req-f51e80de-5b3a-45dc-aa2e-afde64a75c6f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-2d45e460-a94a-4fcb-881d-de59908d47b3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 02:40:30 np0005548731 nova_compute[232433]: 2025-12-06 07:40:30.952 232437 DEBUG nova.network.neutron [req-c9b9bc15-bbf3-46fc-838e-e03d29c02ef6 req-f51e80de-5b3a-45dc-aa2e-afde64a75c6f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 2d45e460-a94a-4fcb-881d-de59908d47b3] Refreshing network info cache for port 35b6f475-0d58-47c2-811e-67155ce3476e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  6 02:40:30 np0005548731 nova_compute[232433]: 2025-12-06 07:40:30.955 232437 DEBUG nova.virt.libvirt.driver [None req-fb76b8b2-c185-4ca3-8902-aca6052b7cdb a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] [instance: 2d45e460-a94a-4fcb-881d-de59908d47b3] Start _get_guest_xml network_info=[{"id": "35b6f475-0d58-47c2-811e-67155ce3476e", "address": "fa:16:3e:de:b1:03", "network": {"id": "3beede49-1cbb-425c-b1af-82f43dc57163", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-619240463-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b10aa03d68eb4d4799d53538521cc364", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap35b6f475-0d", "ovs_interfaceid": "35b6f475-0d58-47c2-811e-67155ce3476e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'device_type': 'disk', 'encryption_secret_uuid': None, 'disk_bus': 'virtio', 'boot_index': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'encryption_options': None, 'size': 0, 'encrypted': False, 'image_id': '6efab05d-c7cf-4770-a5c3-c806a2739063'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec  6 02:40:30 np0005548731 nova_compute[232433]: 2025-12-06 07:40:30.959 232437 WARNING nova.virt.libvirt.driver [None req-fb76b8b2-c185-4ca3-8902-aca6052b7cdb a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  6 02:40:30 np0005548731 nova_compute[232433]: 2025-12-06 07:40:30.963 232437 DEBUG nova.virt.libvirt.host [None req-fb76b8b2-c185-4ca3-8902-aca6052b7cdb a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec  6 02:40:30 np0005548731 nova_compute[232433]: 2025-12-06 07:40:30.964 232437 DEBUG nova.virt.libvirt.host [None req-fb76b8b2-c185-4ca3-8902-aca6052b7cdb a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec  6 02:40:30 np0005548731 nova_compute[232433]: 2025-12-06 07:40:30.972 232437 DEBUG nova.virt.libvirt.host [None req-fb76b8b2-c185-4ca3-8902-aca6052b7cdb a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec  6 02:40:30 np0005548731 nova_compute[232433]: 2025-12-06 07:40:30.973 232437 DEBUG nova.virt.libvirt.host [None req-fb76b8b2-c185-4ca3-8902-aca6052b7cdb a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec  6 02:40:30 np0005548731 nova_compute[232433]: 2025-12-06 07:40:30.977 232437 DEBUG nova.virt.libvirt.driver [None req-fb76b8b2-c185-4ca3-8902-aca6052b7cdb a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec  6 02:40:30 np0005548731 nova_compute[232433]: 2025-12-06 07:40:30.978 232437 DEBUG nova.virt.hardware [None req-fb76b8b2-c185-4ca3-8902-aca6052b7cdb a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-06T06:56:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='25848a18-11d9-4f11-80b5-5d005675c76d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec  6 02:40:30 np0005548731 nova_compute[232433]: 2025-12-06 07:40:30.978 232437 DEBUG nova.virt.hardware [None req-fb76b8b2-c185-4ca3-8902-aca6052b7cdb a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec  6 02:40:30 np0005548731 nova_compute[232433]: 2025-12-06 07:40:30.978 232437 DEBUG nova.virt.hardware [None req-fb76b8b2-c185-4ca3-8902-aca6052b7cdb a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec  6 02:40:30 np0005548731 nova_compute[232433]: 2025-12-06 07:40:30.979 232437 DEBUG nova.virt.hardware [None req-fb76b8b2-c185-4ca3-8902-aca6052b7cdb a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec  6 02:40:30 np0005548731 nova_compute[232433]: 2025-12-06 07:40:30.979 232437 DEBUG nova.virt.hardware [None req-fb76b8b2-c185-4ca3-8902-aca6052b7cdb a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec  6 02:40:30 np0005548731 nova_compute[232433]: 2025-12-06 07:40:30.979 232437 DEBUG nova.virt.hardware [None req-fb76b8b2-c185-4ca3-8902-aca6052b7cdb a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec  6 02:40:30 np0005548731 nova_compute[232433]: 2025-12-06 07:40:30.980 232437 DEBUG nova.virt.hardware [None req-fb76b8b2-c185-4ca3-8902-aca6052b7cdb a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec  6 02:40:30 np0005548731 nova_compute[232433]: 2025-12-06 07:40:30.980 232437 DEBUG nova.virt.hardware [None req-fb76b8b2-c185-4ca3-8902-aca6052b7cdb a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec  6 02:40:30 np0005548731 nova_compute[232433]: 2025-12-06 07:40:30.980 232437 DEBUG nova.virt.hardware [None req-fb76b8b2-c185-4ca3-8902-aca6052b7cdb a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec  6 02:40:30 np0005548731 nova_compute[232433]: 2025-12-06 07:40:30.980 232437 DEBUG nova.virt.hardware [None req-fb76b8b2-c185-4ca3-8902-aca6052b7cdb a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec  6 02:40:30 np0005548731 nova_compute[232433]: 2025-12-06 07:40:30.981 232437 DEBUG nova.virt.hardware [None req-fb76b8b2-c185-4ca3-8902-aca6052b7cdb a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec  6 02:40:30 np0005548731 nova_compute[232433]: 2025-12-06 07:40:30.984 232437 DEBUG oslo_concurrency.processutils [None req-fb76b8b2-c185-4ca3-8902-aca6052b7cdb a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:40:31 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:40:31 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:40:31 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:40:31.002 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:40:31 np0005548731 ceph-osd[80147]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec  6 02:40:31 np0005548731 ceph-osd[80147]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 4202.4 total, 600.0 interval#012Cumulative writes: 48K writes, 198K keys, 48K commit groups, 1.0 writes per commit group, ingest: 0.20 GB, 0.05 MB/s#012Cumulative WAL: 48K writes, 17K syncs, 2.79 writes per sync, written: 0.20 GB, 0.05 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 7925 writes, 30K keys, 7925 commit groups, 1.0 writes per commit group, ingest: 29.90 MB, 0.05 MB/s#012Interval WAL: 7925 writes, 3137 syncs, 2.53 writes per sync, written: 0.03 GB, 0.05 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec  6 02:40:31 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  6 02:40:31 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/936565583' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  6 02:40:31 np0005548731 nova_compute[232433]: 2025-12-06 07:40:31.604 232437 DEBUG oslo_concurrency.processutils [None req-fb76b8b2-c185-4ca3-8902-aca6052b7cdb a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.620s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:40:31 np0005548731 nova_compute[232433]: 2025-12-06 07:40:31.630 232437 DEBUG nova.storage.rbd_utils [None req-fb76b8b2-c185-4ca3-8902-aca6052b7cdb a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] rbd image 2d45e460-a94a-4fcb-881d-de59908d47b3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:40:31 np0005548731 nova_compute[232433]: 2025-12-06 07:40:31.634 232437 DEBUG oslo_concurrency.processutils [None req-fb76b8b2-c185-4ca3-8902-aca6052b7cdb a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:40:32 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  6 02:40:32 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3844405613' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  6 02:40:32 np0005548731 nova_compute[232433]: 2025-12-06 07:40:32.073 232437 DEBUG oslo_concurrency.processutils [None req-fb76b8b2-c185-4ca3-8902-aca6052b7cdb a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.439s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:40:32 np0005548731 nova_compute[232433]: 2025-12-06 07:40:32.076 232437 DEBUG nova.virt.libvirt.vif [None req-fb76b8b2-c185-4ca3-8902-aca6052b7cdb a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T07:40:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-1918499189',display_name='tempest-ServerActionsTestOtherB-server-1918499189',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-1918499189',id=137,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b10aa03d68eb4d4799d53538521cc364',ramdisk_id='',reservation_id='r-a6grlxca',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherB-874907570',owner_user_name='tempest-ServerActio
nsTestOtherB-874907570-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:40:26Z,user_data=None,user_id='a70f6c3c5e2c402bb6fa0e0507e9b6dc',uuid=2d45e460-a94a-4fcb-881d-de59908d47b3,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "35b6f475-0d58-47c2-811e-67155ce3476e", "address": "fa:16:3e:de:b1:03", "network": {"id": "3beede49-1cbb-425c-b1af-82f43dc57163", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-619240463-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b10aa03d68eb4d4799d53538521cc364", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap35b6f475-0d", "ovs_interfaceid": "35b6f475-0d58-47c2-811e-67155ce3476e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec  6 02:40:32 np0005548731 nova_compute[232433]: 2025-12-06 07:40:32.076 232437 DEBUG nova.network.os_vif_util [None req-fb76b8b2-c185-4ca3-8902-aca6052b7cdb a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Converting VIF {"id": "35b6f475-0d58-47c2-811e-67155ce3476e", "address": "fa:16:3e:de:b1:03", "network": {"id": "3beede49-1cbb-425c-b1af-82f43dc57163", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-619240463-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b10aa03d68eb4d4799d53538521cc364", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap35b6f475-0d", "ovs_interfaceid": "35b6f475-0d58-47c2-811e-67155ce3476e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 02:40:32 np0005548731 nova_compute[232433]: 2025-12-06 07:40:32.077 232437 DEBUG nova.network.os_vif_util [None req-fb76b8b2-c185-4ca3-8902-aca6052b7cdb a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:de:b1:03,bridge_name='br-int',has_traffic_filtering=True,id=35b6f475-0d58-47c2-811e-67155ce3476e,network=Network(3beede49-1cbb-425c-b1af-82f43dc57163),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap35b6f475-0d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 02:40:32 np0005548731 nova_compute[232433]: 2025-12-06 07:40:32.078 232437 DEBUG nova.objects.instance [None req-fb76b8b2-c185-4ca3-8902-aca6052b7cdb a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Lazy-loading 'pci_devices' on Instance uuid 2d45e460-a94a-4fcb-881d-de59908d47b3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:40:32 np0005548731 nova_compute[232433]: 2025-12-06 07:40:32.115 232437 DEBUG nova.virt.libvirt.driver [None req-fb76b8b2-c185-4ca3-8902-aca6052b7cdb a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] [instance: 2d45e460-a94a-4fcb-881d-de59908d47b3] End _get_guest_xml xml=<domain type="kvm">
Dec  6 02:40:32 np0005548731 nova_compute[232433]:  <uuid>2d45e460-a94a-4fcb-881d-de59908d47b3</uuid>
Dec  6 02:40:32 np0005548731 nova_compute[232433]:  <name>instance-00000089</name>
Dec  6 02:40:32 np0005548731 nova_compute[232433]:  <memory>131072</memory>
Dec  6 02:40:32 np0005548731 nova_compute[232433]:  <vcpu>1</vcpu>
Dec  6 02:40:32 np0005548731 nova_compute[232433]:  <metadata>
Dec  6 02:40:32 np0005548731 nova_compute[232433]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec  6 02:40:32 np0005548731 nova_compute[232433]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec  6 02:40:32 np0005548731 nova_compute[232433]:      <nova:name>tempest-ServerActionsTestOtherB-server-1918499189</nova:name>
Dec  6 02:40:32 np0005548731 nova_compute[232433]:      <nova:creationTime>2025-12-06 07:40:30</nova:creationTime>
Dec  6 02:40:32 np0005548731 nova_compute[232433]:      <nova:flavor name="m1.nano">
Dec  6 02:40:32 np0005548731 nova_compute[232433]:        <nova:memory>128</nova:memory>
Dec  6 02:40:32 np0005548731 nova_compute[232433]:        <nova:disk>1</nova:disk>
Dec  6 02:40:32 np0005548731 nova_compute[232433]:        <nova:swap>0</nova:swap>
Dec  6 02:40:32 np0005548731 nova_compute[232433]:        <nova:ephemeral>0</nova:ephemeral>
Dec  6 02:40:32 np0005548731 nova_compute[232433]:        <nova:vcpus>1</nova:vcpus>
Dec  6 02:40:32 np0005548731 nova_compute[232433]:      </nova:flavor>
Dec  6 02:40:32 np0005548731 nova_compute[232433]:      <nova:owner>
Dec  6 02:40:32 np0005548731 nova_compute[232433]:        <nova:user uuid="a70f6c3c5e2c402bb6fa0e0507e9b6dc">tempest-ServerActionsTestOtherB-874907570-project-member</nova:user>
Dec  6 02:40:32 np0005548731 nova_compute[232433]:        <nova:project uuid="b10aa03d68eb4d4799d53538521cc364">tempest-ServerActionsTestOtherB-874907570</nova:project>
Dec  6 02:40:32 np0005548731 nova_compute[232433]:      </nova:owner>
Dec  6 02:40:32 np0005548731 nova_compute[232433]:      <nova:root type="image" uuid="6efab05d-c7cf-4770-a5c3-c806a2739063"/>
Dec  6 02:40:32 np0005548731 nova_compute[232433]:      <nova:ports>
Dec  6 02:40:32 np0005548731 nova_compute[232433]:        <nova:port uuid="35b6f475-0d58-47c2-811e-67155ce3476e">
Dec  6 02:40:32 np0005548731 nova_compute[232433]:          <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Dec  6 02:40:32 np0005548731 nova_compute[232433]:        </nova:port>
Dec  6 02:40:32 np0005548731 nova_compute[232433]:      </nova:ports>
Dec  6 02:40:32 np0005548731 nova_compute[232433]:    </nova:instance>
Dec  6 02:40:32 np0005548731 nova_compute[232433]:  </metadata>
Dec  6 02:40:32 np0005548731 nova_compute[232433]:  <sysinfo type="smbios">
Dec  6 02:40:32 np0005548731 nova_compute[232433]:    <system>
Dec  6 02:40:32 np0005548731 nova_compute[232433]:      <entry name="manufacturer">RDO</entry>
Dec  6 02:40:32 np0005548731 nova_compute[232433]:      <entry name="product">OpenStack Compute</entry>
Dec  6 02:40:32 np0005548731 nova_compute[232433]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec  6 02:40:32 np0005548731 nova_compute[232433]:      <entry name="serial">2d45e460-a94a-4fcb-881d-de59908d47b3</entry>
Dec  6 02:40:32 np0005548731 nova_compute[232433]:      <entry name="uuid">2d45e460-a94a-4fcb-881d-de59908d47b3</entry>
Dec  6 02:40:32 np0005548731 nova_compute[232433]:      <entry name="family">Virtual Machine</entry>
Dec  6 02:40:32 np0005548731 nova_compute[232433]:    </system>
Dec  6 02:40:32 np0005548731 nova_compute[232433]:  </sysinfo>
Dec  6 02:40:32 np0005548731 nova_compute[232433]:  <os>
Dec  6 02:40:32 np0005548731 nova_compute[232433]:    <type arch="x86_64" machine="q35">hvm</type>
Dec  6 02:40:32 np0005548731 nova_compute[232433]:    <boot dev="hd"/>
Dec  6 02:40:32 np0005548731 nova_compute[232433]:    <smbios mode="sysinfo"/>
Dec  6 02:40:32 np0005548731 nova_compute[232433]:  </os>
Dec  6 02:40:32 np0005548731 nova_compute[232433]:  <features>
Dec  6 02:40:32 np0005548731 nova_compute[232433]:    <acpi/>
Dec  6 02:40:32 np0005548731 nova_compute[232433]:    <apic/>
Dec  6 02:40:32 np0005548731 nova_compute[232433]:    <vmcoreinfo/>
Dec  6 02:40:32 np0005548731 nova_compute[232433]:  </features>
Dec  6 02:40:32 np0005548731 nova_compute[232433]:  <clock offset="utc">
Dec  6 02:40:32 np0005548731 nova_compute[232433]:    <timer name="pit" tickpolicy="delay"/>
Dec  6 02:40:32 np0005548731 nova_compute[232433]:    <timer name="rtc" tickpolicy="catchup"/>
Dec  6 02:40:32 np0005548731 nova_compute[232433]:    <timer name="hpet" present="no"/>
Dec  6 02:40:32 np0005548731 nova_compute[232433]:  </clock>
Dec  6 02:40:32 np0005548731 nova_compute[232433]:  <cpu mode="custom" match="exact">
Dec  6 02:40:32 np0005548731 nova_compute[232433]:    <model>Nehalem</model>
Dec  6 02:40:32 np0005548731 nova_compute[232433]:    <topology sockets="1" cores="1" threads="1"/>
Dec  6 02:40:32 np0005548731 nova_compute[232433]:  </cpu>
Dec  6 02:40:32 np0005548731 nova_compute[232433]:  <devices>
Dec  6 02:40:32 np0005548731 nova_compute[232433]:    <disk type="network" device="disk">
Dec  6 02:40:32 np0005548731 nova_compute[232433]:      <driver type="raw" cache="none"/>
Dec  6 02:40:32 np0005548731 nova_compute[232433]:      <source protocol="rbd" name="vms/2d45e460-a94a-4fcb-881d-de59908d47b3_disk">
Dec  6 02:40:32 np0005548731 nova_compute[232433]:        <host name="192.168.122.100" port="6789"/>
Dec  6 02:40:32 np0005548731 nova_compute[232433]:        <host name="192.168.122.102" port="6789"/>
Dec  6 02:40:32 np0005548731 nova_compute[232433]:        <host name="192.168.122.101" port="6789"/>
Dec  6 02:40:32 np0005548731 nova_compute[232433]:      </source>
Dec  6 02:40:32 np0005548731 nova_compute[232433]:      <auth username="openstack">
Dec  6 02:40:32 np0005548731 nova_compute[232433]:        <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec  6 02:40:32 np0005548731 nova_compute[232433]:      </auth>
Dec  6 02:40:32 np0005548731 nova_compute[232433]:      <target dev="vda" bus="virtio"/>
Dec  6 02:40:32 np0005548731 nova_compute[232433]:    </disk>
Dec  6 02:40:32 np0005548731 nova_compute[232433]:    <disk type="network" device="cdrom">
Dec  6 02:40:32 np0005548731 nova_compute[232433]:      <driver type="raw" cache="none"/>
Dec  6 02:40:32 np0005548731 nova_compute[232433]:      <source protocol="rbd" name="vms/2d45e460-a94a-4fcb-881d-de59908d47b3_disk.config">
Dec  6 02:40:32 np0005548731 nova_compute[232433]:        <host name="192.168.122.100" port="6789"/>
Dec  6 02:40:32 np0005548731 nova_compute[232433]:        <host name="192.168.122.102" port="6789"/>
Dec  6 02:40:32 np0005548731 nova_compute[232433]:        <host name="192.168.122.101" port="6789"/>
Dec  6 02:40:32 np0005548731 nova_compute[232433]:      </source>
Dec  6 02:40:32 np0005548731 nova_compute[232433]:      <auth username="openstack">
Dec  6 02:40:32 np0005548731 nova_compute[232433]:        <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec  6 02:40:32 np0005548731 nova_compute[232433]:      </auth>
Dec  6 02:40:32 np0005548731 nova_compute[232433]:      <target dev="sda" bus="sata"/>
Dec  6 02:40:32 np0005548731 nova_compute[232433]:    </disk>
Dec  6 02:40:32 np0005548731 nova_compute[232433]:    <interface type="ethernet">
Dec  6 02:40:32 np0005548731 nova_compute[232433]:      <mac address="fa:16:3e:de:b1:03"/>
Dec  6 02:40:32 np0005548731 nova_compute[232433]:      <model type="virtio"/>
Dec  6 02:40:32 np0005548731 nova_compute[232433]:      <driver name="vhost" rx_queue_size="512"/>
Dec  6 02:40:32 np0005548731 nova_compute[232433]:      <mtu size="1442"/>
Dec  6 02:40:32 np0005548731 nova_compute[232433]:      <target dev="tap35b6f475-0d"/>
Dec  6 02:40:32 np0005548731 nova_compute[232433]:    </interface>
Dec  6 02:40:32 np0005548731 nova_compute[232433]:    <serial type="pty">
Dec  6 02:40:32 np0005548731 nova_compute[232433]:      <log file="/var/lib/nova/instances/2d45e460-a94a-4fcb-881d-de59908d47b3/console.log" append="off"/>
Dec  6 02:40:32 np0005548731 nova_compute[232433]:    </serial>
Dec  6 02:40:32 np0005548731 nova_compute[232433]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec  6 02:40:32 np0005548731 nova_compute[232433]:    <video>
Dec  6 02:40:32 np0005548731 nova_compute[232433]:      <model type="virtio"/>
Dec  6 02:40:32 np0005548731 nova_compute[232433]:    </video>
Dec  6 02:40:32 np0005548731 nova_compute[232433]:    <input type="tablet" bus="usb"/>
Dec  6 02:40:32 np0005548731 nova_compute[232433]:    <rng model="virtio">
Dec  6 02:40:32 np0005548731 nova_compute[232433]:      <backend model="random">/dev/urandom</backend>
Dec  6 02:40:32 np0005548731 nova_compute[232433]:    </rng>
Dec  6 02:40:32 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root"/>
Dec  6 02:40:32 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:40:32 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:40:32 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:40:32 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:40:32 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:40:32 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:40:32 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:40:32 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:40:32 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:40:32 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:40:32 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:40:32 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:40:32 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:40:32 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:40:32 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:40:32 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:40:32 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:40:32 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:40:32 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:40:32 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:40:32 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:40:32 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:40:32 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:40:32 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:40:32 np0005548731 nova_compute[232433]:    <controller type="usb" index="0"/>
Dec  6 02:40:32 np0005548731 nova_compute[232433]:    <memballoon model="virtio">
Dec  6 02:40:32 np0005548731 nova_compute[232433]:      <stats period="10"/>
Dec  6 02:40:32 np0005548731 nova_compute[232433]:    </memballoon>
Dec  6 02:40:32 np0005548731 nova_compute[232433]:  </devices>
Dec  6 02:40:32 np0005548731 nova_compute[232433]: </domain>
Dec  6 02:40:32 np0005548731 nova_compute[232433]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec  6 02:40:32 np0005548731 nova_compute[232433]: 2025-12-06 07:40:32.117 232437 DEBUG nova.compute.manager [None req-fb76b8b2-c185-4ca3-8902-aca6052b7cdb a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] [instance: 2d45e460-a94a-4fcb-881d-de59908d47b3] Preparing to wait for external event network-vif-plugged-35b6f475-0d58-47c2-811e-67155ce3476e prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec  6 02:40:32 np0005548731 nova_compute[232433]: 2025-12-06 07:40:32.117 232437 DEBUG oslo_concurrency.lockutils [None req-fb76b8b2-c185-4ca3-8902-aca6052b7cdb a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Acquiring lock "2d45e460-a94a-4fcb-881d-de59908d47b3-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:40:32 np0005548731 nova_compute[232433]: 2025-12-06 07:40:32.118 232437 DEBUG oslo_concurrency.lockutils [None req-fb76b8b2-c185-4ca3-8902-aca6052b7cdb a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Lock "2d45e460-a94a-4fcb-881d-de59908d47b3-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:40:32 np0005548731 nova_compute[232433]: 2025-12-06 07:40:32.118 232437 DEBUG oslo_concurrency.lockutils [None req-fb76b8b2-c185-4ca3-8902-aca6052b7cdb a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Lock "2d45e460-a94a-4fcb-881d-de59908d47b3-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:40:32 np0005548731 nova_compute[232433]: 2025-12-06 07:40:32.119 232437 DEBUG nova.virt.libvirt.vif [None req-fb76b8b2-c185-4ca3-8902-aca6052b7cdb a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T07:40:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-1918499189',display_name='tempest-ServerActionsTestOtherB-server-1918499189',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-1918499189',id=137,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b10aa03d68eb4d4799d53538521cc364',ramdisk_id='',reservation_id='r-a6grlxca',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherB-874907570',owner_user_name='tempest-S
erverActionsTestOtherB-874907570-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:40:26Z,user_data=None,user_id='a70f6c3c5e2c402bb6fa0e0507e9b6dc',uuid=2d45e460-a94a-4fcb-881d-de59908d47b3,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "35b6f475-0d58-47c2-811e-67155ce3476e", "address": "fa:16:3e:de:b1:03", "network": {"id": "3beede49-1cbb-425c-b1af-82f43dc57163", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-619240463-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b10aa03d68eb4d4799d53538521cc364", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap35b6f475-0d", "ovs_interfaceid": "35b6f475-0d58-47c2-811e-67155ce3476e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec  6 02:40:32 np0005548731 nova_compute[232433]: 2025-12-06 07:40:32.119 232437 DEBUG nova.network.os_vif_util [None req-fb76b8b2-c185-4ca3-8902-aca6052b7cdb a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Converting VIF {"id": "35b6f475-0d58-47c2-811e-67155ce3476e", "address": "fa:16:3e:de:b1:03", "network": {"id": "3beede49-1cbb-425c-b1af-82f43dc57163", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-619240463-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b10aa03d68eb4d4799d53538521cc364", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap35b6f475-0d", "ovs_interfaceid": "35b6f475-0d58-47c2-811e-67155ce3476e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 02:40:32 np0005548731 nova_compute[232433]: 2025-12-06 07:40:32.120 232437 DEBUG nova.network.os_vif_util [None req-fb76b8b2-c185-4ca3-8902-aca6052b7cdb a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:de:b1:03,bridge_name='br-int',has_traffic_filtering=True,id=35b6f475-0d58-47c2-811e-67155ce3476e,network=Network(3beede49-1cbb-425c-b1af-82f43dc57163),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap35b6f475-0d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 02:40:32 np0005548731 nova_compute[232433]: 2025-12-06 07:40:32.120 232437 DEBUG os_vif [None req-fb76b8b2-c185-4ca3-8902-aca6052b7cdb a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:de:b1:03,bridge_name='br-int',has_traffic_filtering=True,id=35b6f475-0d58-47c2-811e-67155ce3476e,network=Network(3beede49-1cbb-425c-b1af-82f43dc57163),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap35b6f475-0d') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec  6 02:40:32 np0005548731 nova_compute[232433]: 2025-12-06 07:40:32.121 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:40:32 np0005548731 nova_compute[232433]: 2025-12-06 07:40:32.122 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:40:32 np0005548731 nova_compute[232433]: 2025-12-06 07:40:32.122 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  6 02:40:32 np0005548731 nova_compute[232433]: 2025-12-06 07:40:32.125 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:40:32 np0005548731 nova_compute[232433]: 2025-12-06 07:40:32.125 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap35b6f475-0d, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:40:32 np0005548731 nova_compute[232433]: 2025-12-06 07:40:32.126 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap35b6f475-0d, col_values=(('external_ids', {'iface-id': '35b6f475-0d58-47c2-811e-67155ce3476e', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:de:b1:03', 'vm-uuid': '2d45e460-a94a-4fcb-881d-de59908d47b3'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:40:32 np0005548731 nova_compute[232433]: 2025-12-06 07:40:32.128 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:40:32 np0005548731 NetworkManager[49182]: <info>  [1765006832.1287] manager: (tap35b6f475-0d): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/285)
Dec  6 02:40:32 np0005548731 nova_compute[232433]: 2025-12-06 07:40:32.131 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  6 02:40:32 np0005548731 nova_compute[232433]: 2025-12-06 07:40:32.132 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:40:32 np0005548731 nova_compute[232433]: 2025-12-06 07:40:32.133 232437 INFO os_vif [None req-fb76b8b2-c185-4ca3-8902-aca6052b7cdb a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:de:b1:03,bridge_name='br-int',has_traffic_filtering=True,id=35b6f475-0d58-47c2-811e-67155ce3476e,network=Network(3beede49-1cbb-425c-b1af-82f43dc57163),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap35b6f475-0d')#033[00m
Dec  6 02:40:32 np0005548731 nova_compute[232433]: 2025-12-06 07:40:32.183 232437 DEBUG nova.virt.libvirt.driver [None req-fb76b8b2-c185-4ca3-8902-aca6052b7cdb a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  6 02:40:32 np0005548731 nova_compute[232433]: 2025-12-06 07:40:32.183 232437 DEBUG nova.virt.libvirt.driver [None req-fb76b8b2-c185-4ca3-8902-aca6052b7cdb a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  6 02:40:32 np0005548731 nova_compute[232433]: 2025-12-06 07:40:32.184 232437 DEBUG nova.virt.libvirt.driver [None req-fb76b8b2-c185-4ca3-8902-aca6052b7cdb a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] No VIF found with MAC fa:16:3e:de:b1:03, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec  6 02:40:32 np0005548731 nova_compute[232433]: 2025-12-06 07:40:32.184 232437 INFO nova.virt.libvirt.driver [None req-fb76b8b2-c185-4ca3-8902-aca6052b7cdb a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] [instance: 2d45e460-a94a-4fcb-881d-de59908d47b3] Using config drive#033[00m
Dec  6 02:40:32 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:40:32 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:40:32 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:40:32.188 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:40:32 np0005548731 nova_compute[232433]: 2025-12-06 07:40:32.217 232437 DEBUG nova.storage.rbd_utils [None req-fb76b8b2-c185-4ca3-8902-aca6052b7cdb a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] rbd image 2d45e460-a94a-4fcb-881d-de59908d47b3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:40:32 np0005548731 nova_compute[232433]: 2025-12-06 07:40:32.871 232437 DEBUG nova.network.neutron [req-c9b9bc15-bbf3-46fc-838e-e03d29c02ef6 req-f51e80de-5b3a-45dc-aa2e-afde64a75c6f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 2d45e460-a94a-4fcb-881d-de59908d47b3] Updated VIF entry in instance network info cache for port 35b6f475-0d58-47c2-811e-67155ce3476e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  6 02:40:32 np0005548731 nova_compute[232433]: 2025-12-06 07:40:32.872 232437 DEBUG nova.network.neutron [req-c9b9bc15-bbf3-46fc-838e-e03d29c02ef6 req-f51e80de-5b3a-45dc-aa2e-afde64a75c6f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 2d45e460-a94a-4fcb-881d-de59908d47b3] Updating instance_info_cache with network_info: [{"id": "35b6f475-0d58-47c2-811e-67155ce3476e", "address": "fa:16:3e:de:b1:03", "network": {"id": "3beede49-1cbb-425c-b1af-82f43dc57163", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-619240463-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b10aa03d68eb4d4799d53538521cc364", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap35b6f475-0d", "ovs_interfaceid": "35b6f475-0d58-47c2-811e-67155ce3476e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 02:40:32 np0005548731 nova_compute[232433]: 2025-12-06 07:40:32.894 232437 DEBUG oslo_concurrency.lockutils [req-c9b9bc15-bbf3-46fc-838e-e03d29c02ef6 req-f51e80de-5b3a-45dc-aa2e-afde64a75c6f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-2d45e460-a94a-4fcb-881d-de59908d47b3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 02:40:32 np0005548731 podman[292653]: 2025-12-06 07:40:32.904428382 +0000 UTC m=+0.059606214 container health_status 26c2ce9bdb83c6db5ccad7f5b2a7cb4e9354ab0235ad2f942ea42c8feda2142b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, 
org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Dec  6 02:40:32 np0005548731 podman[292655]: 2025-12-06 07:40:32.907619019 +0000 UTC m=+0.060013633 container health_status ae4fd08cdec31d6ae2a6b91ffff7deb6ca5a8375cb52ee4b685cc6f25796824e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec  6 02:40:32 np0005548731 podman[292654]: 2025-12-06 07:40:32.928598001 +0000 UTC m=+0.083689961 container health_status a118c3becf56cfbcae208065771ebf10a65162bb7b96389971293e534823fb78 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec  6 02:40:33 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:40:33 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:40:33 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:40:33.007 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:40:33 np0005548731 nova_compute[232433]: 2025-12-06 07:40:33.173 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:40:33 np0005548731 nova_compute[232433]: 2025-12-06 07:40:33.273 232437 INFO nova.virt.libvirt.driver [None req-fb76b8b2-c185-4ca3-8902-aca6052b7cdb a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] [instance: 2d45e460-a94a-4fcb-881d-de59908d47b3] Creating config drive at /var/lib/nova/instances/2d45e460-a94a-4fcb-881d-de59908d47b3/disk.config#033[00m
Dec  6 02:40:33 np0005548731 nova_compute[232433]: 2025-12-06 07:40:33.278 232437 DEBUG oslo_concurrency.processutils [None req-fb76b8b2-c185-4ca3-8902-aca6052b7cdb a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/2d45e460-a94a-4fcb-881d-de59908d47b3/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpy_ngxnso execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:40:33 np0005548731 nova_compute[232433]: 2025-12-06 07:40:33.408 232437 DEBUG oslo_concurrency.processutils [None req-fb76b8b2-c185-4ca3-8902-aca6052b7cdb a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/2d45e460-a94a-4fcb-881d-de59908d47b3/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpy_ngxnso" returned: 0 in 0.129s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:40:33 np0005548731 nova_compute[232433]: 2025-12-06 07:40:33.433 232437 DEBUG nova.storage.rbd_utils [None req-fb76b8b2-c185-4ca3-8902-aca6052b7cdb a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] rbd image 2d45e460-a94a-4fcb-881d-de59908d47b3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:40:33 np0005548731 nova_compute[232433]: 2025-12-06 07:40:33.437 232437 DEBUG oslo_concurrency.processutils [None req-fb76b8b2-c185-4ca3-8902-aca6052b7cdb a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/2d45e460-a94a-4fcb-881d-de59908d47b3/disk.config 2d45e460-a94a-4fcb-881d-de59908d47b3_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:40:33 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:40:33 np0005548731 nova_compute[232433]: 2025-12-06 07:40:33.759 232437 DEBUG oslo_concurrency.processutils [None req-fb76b8b2-c185-4ca3-8902-aca6052b7cdb a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/2d45e460-a94a-4fcb-881d-de59908d47b3/disk.config 2d45e460-a94a-4fcb-881d-de59908d47b3_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.322s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:40:33 np0005548731 nova_compute[232433]: 2025-12-06 07:40:33.761 232437 INFO nova.virt.libvirt.driver [None req-fb76b8b2-c185-4ca3-8902-aca6052b7cdb a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] [instance: 2d45e460-a94a-4fcb-881d-de59908d47b3] Deleting local config drive /var/lib/nova/instances/2d45e460-a94a-4fcb-881d-de59908d47b3/disk.config because it was imported into RBD.#033[00m
Dec  6 02:40:33 np0005548731 kernel: tap35b6f475-0d: entered promiscuous mode
Dec  6 02:40:33 np0005548731 NetworkManager[49182]: <info>  [1765006833.8051] manager: (tap35b6f475-0d): new Tun device (/org/freedesktop/NetworkManager/Devices/286)
Dec  6 02:40:33 np0005548731 systemd-udevd[292768]: Network interface NamePolicy= disabled on kernel command line.
Dec  6 02:40:33 np0005548731 ovn_controller[133927]: 2025-12-06T07:40:33Z|00621|binding|INFO|Claiming lport 35b6f475-0d58-47c2-811e-67155ce3476e for this chassis.
Dec  6 02:40:33 np0005548731 ovn_controller[133927]: 2025-12-06T07:40:33Z|00622|binding|INFO|35b6f475-0d58-47c2-811e-67155ce3476e: Claiming fa:16:3e:de:b1:03 10.100.0.6
Dec  6 02:40:33 np0005548731 nova_compute[232433]: 2025-12-06 07:40:33.852 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:40:33 np0005548731 nova_compute[232433]: 2025-12-06 07:40:33.856 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:40:33 np0005548731 NetworkManager[49182]: <info>  [1765006833.8617] device (tap35b6f475-0d): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec  6 02:40:33 np0005548731 NetworkManager[49182]: <info>  [1765006833.8629] device (tap35b6f475-0d): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec  6 02:40:33 np0005548731 NetworkManager[49182]: <info>  [1765006833.8639] manager: (patch-provnet-9e78c1a1-68f4-477a-abaa-13a98bde06e5-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/287)
Dec  6 02:40:33 np0005548731 NetworkManager[49182]: <info>  [1765006833.8646] manager: (patch-br-int-to-provnet-9e78c1a1-68f4-477a-abaa-13a98bde06e5): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/288)
Dec  6 02:40:33 np0005548731 nova_compute[232433]: 2025-12-06 07:40:33.863 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:40:33 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:40:33.879 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:de:b1:03 10.100.0.6'], port_security=['fa:16:3e:de:b1:03 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '2d45e460-a94a-4fcb-881d-de59908d47b3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3beede49-1cbb-425c-b1af-82f43dc57163', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b10aa03d68eb4d4799d53538521cc364', 'neutron:revision_number': '2', 'neutron:security_group_ids': '9f7f4f14-4f63-443a-af4a-951f8b77b0f7', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f4f51045-db64-4b9b-8a34-a3c617e616e7, chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], logical_port=35b6f475-0d58-47c2-811e-67155ce3476e) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 02:40:33 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:40:33.881 143965 INFO neutron.agent.ovn.metadata.agent [-] Port 35b6f475-0d58-47c2-811e-67155ce3476e in datapath 3beede49-1cbb-425c-b1af-82f43dc57163 bound to our chassis#033[00m
Dec  6 02:40:33 np0005548731 systemd-machined[195355]: New machine qemu-64-instance-00000089.
Dec  6 02:40:33 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:40:33.883 143965 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 3beede49-1cbb-425c-b1af-82f43dc57163#033[00m
Dec  6 02:40:33 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:40:33.894 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[3a8c8615-055e-4038-a49d-288680844ed1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:40:33 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:40:33.895 143965 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap3beede49-11 in ovnmeta-3beede49-1cbb-425c-b1af-82f43dc57163 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Dec  6 02:40:33 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:40:33.897 236668 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap3beede49-10 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Dec  6 02:40:33 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:40:33.897 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[f34de602-1737-4d7c-b148-0d303e3e04b1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:40:33 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:40:33.898 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[a9bf5c42-7b68-4290-b2cc-19776c7989c3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:40:33 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:40:33.908 144079 DEBUG oslo.privsep.daemon [-] privsep: reply[18d647d4-6e9e-4b68-bf2d-515057a62533]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:40:33 np0005548731 systemd[1]: Started Virtual Machine qemu-64-instance-00000089.
Dec  6 02:40:33 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:40:33.931 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[a1a419fc-30e3-4058-9a97-33c8eb0a9afb]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:40:33 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:40:33.965 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[b3fac37b-943b-4887-b9b4-0a169bc655c5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:40:33 np0005548731 NetworkManager[49182]: <info>  [1765006833.9720] manager: (tap3beede49-10): new Veth device (/org/freedesktop/NetworkManager/Devices/289)
Dec  6 02:40:33 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:40:33.973 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[be50e4f8-69aa-4d94-b54a-df8382e61cb0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:40:33 np0005548731 nova_compute[232433]: 2025-12-06 07:40:33.988 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:40:33 np0005548731 ovn_controller[133927]: 2025-12-06T07:40:33Z|00623|binding|INFO|Releasing lport e4d89947-8fab-4c13-b2db-4eed875f77a0 from this chassis (sb_readonly=0)
Dec  6 02:40:34 np0005548731 nova_compute[232433]: 2025-12-06 07:40:34.003 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:40:34 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:40:34.006 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[ce07685e-566c-454f-8e50-f32f83739ef2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:40:34 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:40:34.008 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[7c8abad2-5aeb-47ac-a8ac-1cc8147a13fb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:40:34 np0005548731 ovn_controller[133927]: 2025-12-06T07:40:34Z|00624|binding|INFO|Setting lport 35b6f475-0d58-47c2-811e-67155ce3476e ovn-installed in OVS
Dec  6 02:40:34 np0005548731 ovn_controller[133927]: 2025-12-06T07:40:34Z|00625|binding|INFO|Setting lport 35b6f475-0d58-47c2-811e-67155ce3476e up in Southbound
Dec  6 02:40:34 np0005548731 nova_compute[232433]: 2025-12-06 07:40:34.017 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:40:34 np0005548731 NetworkManager[49182]: <info>  [1765006834.0341] device (tap3beede49-10): carrier: link connected
Dec  6 02:40:34 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:40:34.038 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[1a8432c5-1794-4ec1-85ae-a9a513d758a5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:40:34 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:40:34.055 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[093213b8-d61e-461e-884b-776a44993a57]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3beede49-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f4:c7:55'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 192], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 709604, 'reachable_time': 31784, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 292804, 'error': None, 'target': 'ovnmeta-3beede49-1cbb-425c-b1af-82f43dc57163', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:40:34 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:40:34.071 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[23f6b9ac-347b-41ca-9831-61eabf6905d9]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fef4:c755'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 709604, 'tstamp': 709604}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 292805, 'error': None, 'target': 'ovnmeta-3beede49-1cbb-425c-b1af-82f43dc57163', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:40:34 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:40:34.089 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[e7a369f7-aaa3-4abe-b9eb-a7e9cd12cb55]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3beede49-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f4:c7:55'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 192], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 709604, 'reachable_time': 31784, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 292806, 'error': None, 'target': 'ovnmeta-3beede49-1cbb-425c-b1af-82f43dc57163', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:40:34 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:40:34.121 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[3ebec3e5-bccb-4700-b4bf-df98272e8ceb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:40:34 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:40:34.171 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[2c972363-95fb-44da-914c-372edc8666b9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:40:34 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:40:34.172 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3beede49-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:40:34 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:40:34.173 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  6 02:40:34 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:40:34.173 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3beede49-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:40:34 np0005548731 NetworkManager[49182]: <info>  [1765006834.1755] manager: (tap3beede49-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/290)
Dec  6 02:40:34 np0005548731 kernel: tap3beede49-10: entered promiscuous mode
Dec  6 02:40:34 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:40:34.178 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap3beede49-10, col_values=(('external_ids', {'iface-id': '058fee39-af19-4b00-b556-fb88bc823747'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:40:34 np0005548731 ovn_controller[133927]: 2025-12-06T07:40:34Z|00626|binding|INFO|Releasing lport 058fee39-af19-4b00-b556-fb88bc823747 from this chassis (sb_readonly=0)
Dec  6 02:40:34 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:40:34 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:40:34 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:40:34.190 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:40:34 np0005548731 nova_compute[232433]: 2025-12-06 07:40:34.192 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:40:34 np0005548731 nova_compute[232433]: 2025-12-06 07:40:34.198 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:40:34 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:40:34.198 143965 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/3beede49-1cbb-425c-b1af-82f43dc57163.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/3beede49-1cbb-425c-b1af-82f43dc57163.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Dec  6 02:40:34 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:40:34.200 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[e7fc0656-8fe4-4057-a0f5-f4c136e6d49b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:40:34 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:40:34.200 143965 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec  6 02:40:34 np0005548731 ovn_metadata_agent[143960]: global
Dec  6 02:40:34 np0005548731 ovn_metadata_agent[143960]:    log         /dev/log local0 debug
Dec  6 02:40:34 np0005548731 ovn_metadata_agent[143960]:    log-tag     haproxy-metadata-proxy-3beede49-1cbb-425c-b1af-82f43dc57163
Dec  6 02:40:34 np0005548731 ovn_metadata_agent[143960]:    user        root
Dec  6 02:40:34 np0005548731 ovn_metadata_agent[143960]:    group       root
Dec  6 02:40:34 np0005548731 ovn_metadata_agent[143960]:    maxconn     1024
Dec  6 02:40:34 np0005548731 ovn_metadata_agent[143960]:    pidfile     /var/lib/neutron/external/pids/3beede49-1cbb-425c-b1af-82f43dc57163.pid.haproxy
Dec  6 02:40:34 np0005548731 ovn_metadata_agent[143960]:    daemon
Dec  6 02:40:34 np0005548731 ovn_metadata_agent[143960]: 
Dec  6 02:40:34 np0005548731 ovn_metadata_agent[143960]: defaults
Dec  6 02:40:34 np0005548731 ovn_metadata_agent[143960]:    log global
Dec  6 02:40:34 np0005548731 ovn_metadata_agent[143960]:    mode http
Dec  6 02:40:34 np0005548731 ovn_metadata_agent[143960]:    option httplog
Dec  6 02:40:34 np0005548731 ovn_metadata_agent[143960]:    option dontlognull
Dec  6 02:40:34 np0005548731 ovn_metadata_agent[143960]:    option http-server-close
Dec  6 02:40:34 np0005548731 ovn_metadata_agent[143960]:    option forwardfor
Dec  6 02:40:34 np0005548731 ovn_metadata_agent[143960]:    retries                 3
Dec  6 02:40:34 np0005548731 ovn_metadata_agent[143960]:    timeout http-request    30s
Dec  6 02:40:34 np0005548731 ovn_metadata_agent[143960]:    timeout connect         30s
Dec  6 02:40:34 np0005548731 ovn_metadata_agent[143960]:    timeout client          32s
Dec  6 02:40:34 np0005548731 ovn_metadata_agent[143960]:    timeout server          32s
Dec  6 02:40:34 np0005548731 ovn_metadata_agent[143960]:    timeout http-keep-alive 30s
Dec  6 02:40:34 np0005548731 ovn_metadata_agent[143960]: 
Dec  6 02:40:34 np0005548731 ovn_metadata_agent[143960]: 
Dec  6 02:40:34 np0005548731 ovn_metadata_agent[143960]: listen listener
Dec  6 02:40:34 np0005548731 ovn_metadata_agent[143960]:    bind 169.254.169.254:80
Dec  6 02:40:34 np0005548731 ovn_metadata_agent[143960]:    server metadata /var/lib/neutron/metadata_proxy
Dec  6 02:40:34 np0005548731 ovn_metadata_agent[143960]:    http-request add-header X-OVN-Network-ID 3beede49-1cbb-425c-b1af-82f43dc57163
Dec  6 02:40:34 np0005548731 ovn_metadata_agent[143960]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Dec  6 02:40:34 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:40:34.202 143965 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-3beede49-1cbb-425c-b1af-82f43dc57163', 'env', 'PROCESS_TAG=haproxy-3beede49-1cbb-425c-b1af-82f43dc57163', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/3beede49-1cbb-425c-b1af-82f43dc57163.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Dec  6 02:40:34 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:40:34.273 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=9f96b960-b4f2-40bd-ae99-08121f5e8b78, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '54'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:40:34 np0005548731 podman[292854]: 2025-12-06 07:40:34.593105824 +0000 UTC m=+0.047691823 container create 3009e143e836508bd93b64905d338b1fc01af14fe0a388e73c02c2ccef747f0c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3beede49-1cbb-425c-b1af-82f43dc57163, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Dec  6 02:40:34 np0005548731 systemd[1]: Started libpod-conmon-3009e143e836508bd93b64905d338b1fc01af14fe0a388e73c02c2ccef747f0c.scope.
Dec  6 02:40:34 np0005548731 podman[292854]: 2025-12-06 07:40:34.56790582 +0000 UTC m=+0.022491849 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec  6 02:40:34 np0005548731 systemd[1]: Started libcrun container.
Dec  6 02:40:34 np0005548731 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/14c7b1dafb0c3399b61c6f96d9868f59ef80200af823e5e172c250af5c099606/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec  6 02:40:34 np0005548731 nova_compute[232433]: 2025-12-06 07:40:34.680 232437 DEBUG nova.virt.driver [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Emitting event <LifecycleEvent: 1765006834.6798086, 2d45e460-a94a-4fcb-881d-de59908d47b3 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 02:40:34 np0005548731 nova_compute[232433]: 2025-12-06 07:40:34.681 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 2d45e460-a94a-4fcb-881d-de59908d47b3] VM Started (Lifecycle Event)#033[00m
Dec  6 02:40:34 np0005548731 podman[292854]: 2025-12-06 07:40:34.682456951 +0000 UTC m=+0.137042950 container init 3009e143e836508bd93b64905d338b1fc01af14fe0a388e73c02c2ccef747f0c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3beede49-1cbb-425c-b1af-82f43dc57163, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  6 02:40:34 np0005548731 podman[292854]: 2025-12-06 07:40:34.687277098 +0000 UTC m=+0.141863097 container start 3009e143e836508bd93b64905d338b1fc01af14fe0a388e73c02c2ccef747f0c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3beede49-1cbb-425c-b1af-82f43dc57163, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  6 02:40:34 np0005548731 neutron-haproxy-ovnmeta-3beede49-1cbb-425c-b1af-82f43dc57163[292892]: [NOTICE]   (292897) : New worker (292899) forked
Dec  6 02:40:34 np0005548731 neutron-haproxy-ovnmeta-3beede49-1cbb-425c-b1af-82f43dc57163[292892]: [NOTICE]   (292897) : Loading success.
Dec  6 02:40:34 np0005548731 nova_compute[232433]: 2025-12-06 07:40:34.883 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 2d45e460-a94a-4fcb-881d-de59908d47b3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:40:34 np0005548731 nova_compute[232433]: 2025-12-06 07:40:34.888 232437 DEBUG nova.virt.driver [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Emitting event <LifecycleEvent: 1765006834.6800058, 2d45e460-a94a-4fcb-881d-de59908d47b3 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 02:40:34 np0005548731 nova_compute[232433]: 2025-12-06 07:40:34.889 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 2d45e460-a94a-4fcb-881d-de59908d47b3] VM Paused (Lifecycle Event)#033[00m
Dec  6 02:40:34 np0005548731 nova_compute[232433]: 2025-12-06 07:40:34.909 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 2d45e460-a94a-4fcb-881d-de59908d47b3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:40:34 np0005548731 nova_compute[232433]: 2025-12-06 07:40:34.914 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 2d45e460-a94a-4fcb-881d-de59908d47b3] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  6 02:40:34 np0005548731 nova_compute[232433]: 2025-12-06 07:40:34.935 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 2d45e460-a94a-4fcb-881d-de59908d47b3] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  6 02:40:35 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:40:35 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:40:35 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:40:35.009 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:40:35 np0005548731 nova_compute[232433]: 2025-12-06 07:40:35.890 232437 DEBUG nova.compute.manager [req-6a0cec4f-b4d5-4870-aef1-aeffd7931047 req-b077aeeb-75cb-48ad-ac94-4f6de3a9483c 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 2d45e460-a94a-4fcb-881d-de59908d47b3] Received event network-vif-plugged-35b6f475-0d58-47c2-811e-67155ce3476e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:40:35 np0005548731 nova_compute[232433]: 2025-12-06 07:40:35.891 232437 DEBUG oslo_concurrency.lockutils [req-6a0cec4f-b4d5-4870-aef1-aeffd7931047 req-b077aeeb-75cb-48ad-ac94-4f6de3a9483c 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "2d45e460-a94a-4fcb-881d-de59908d47b3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:40:35 np0005548731 nova_compute[232433]: 2025-12-06 07:40:35.891 232437 DEBUG oslo_concurrency.lockutils [req-6a0cec4f-b4d5-4870-aef1-aeffd7931047 req-b077aeeb-75cb-48ad-ac94-4f6de3a9483c 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "2d45e460-a94a-4fcb-881d-de59908d47b3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:40:35 np0005548731 nova_compute[232433]: 2025-12-06 07:40:35.892 232437 DEBUG oslo_concurrency.lockutils [req-6a0cec4f-b4d5-4870-aef1-aeffd7931047 req-b077aeeb-75cb-48ad-ac94-4f6de3a9483c 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "2d45e460-a94a-4fcb-881d-de59908d47b3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:40:35 np0005548731 nova_compute[232433]: 2025-12-06 07:40:35.892 232437 DEBUG nova.compute.manager [req-6a0cec4f-b4d5-4870-aef1-aeffd7931047 req-b077aeeb-75cb-48ad-ac94-4f6de3a9483c 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 2d45e460-a94a-4fcb-881d-de59908d47b3] Processing event network-vif-plugged-35b6f475-0d58-47c2-811e-67155ce3476e _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Dec  6 02:40:35 np0005548731 nova_compute[232433]: 2025-12-06 07:40:35.893 232437 DEBUG nova.compute.manager [req-6a0cec4f-b4d5-4870-aef1-aeffd7931047 req-b077aeeb-75cb-48ad-ac94-4f6de3a9483c 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 2d45e460-a94a-4fcb-881d-de59908d47b3] Received event network-vif-plugged-35b6f475-0d58-47c2-811e-67155ce3476e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:40:35 np0005548731 nova_compute[232433]: 2025-12-06 07:40:35.893 232437 DEBUG oslo_concurrency.lockutils [req-6a0cec4f-b4d5-4870-aef1-aeffd7931047 req-b077aeeb-75cb-48ad-ac94-4f6de3a9483c 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "2d45e460-a94a-4fcb-881d-de59908d47b3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:40:35 np0005548731 nova_compute[232433]: 2025-12-06 07:40:35.894 232437 DEBUG oslo_concurrency.lockutils [req-6a0cec4f-b4d5-4870-aef1-aeffd7931047 req-b077aeeb-75cb-48ad-ac94-4f6de3a9483c 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "2d45e460-a94a-4fcb-881d-de59908d47b3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:40:35 np0005548731 nova_compute[232433]: 2025-12-06 07:40:35.894 232437 DEBUG oslo_concurrency.lockutils [req-6a0cec4f-b4d5-4870-aef1-aeffd7931047 req-b077aeeb-75cb-48ad-ac94-4f6de3a9483c 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "2d45e460-a94a-4fcb-881d-de59908d47b3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:40:35 np0005548731 nova_compute[232433]: 2025-12-06 07:40:35.894 232437 DEBUG nova.compute.manager [req-6a0cec4f-b4d5-4870-aef1-aeffd7931047 req-b077aeeb-75cb-48ad-ac94-4f6de3a9483c 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 2d45e460-a94a-4fcb-881d-de59908d47b3] No waiting events found dispatching network-vif-plugged-35b6f475-0d58-47c2-811e-67155ce3476e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 02:40:35 np0005548731 nova_compute[232433]: 2025-12-06 07:40:35.895 232437 WARNING nova.compute.manager [req-6a0cec4f-b4d5-4870-aef1-aeffd7931047 req-b077aeeb-75cb-48ad-ac94-4f6de3a9483c 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 2d45e460-a94a-4fcb-881d-de59908d47b3] Received unexpected event network-vif-plugged-35b6f475-0d58-47c2-811e-67155ce3476e for instance with vm_state building and task_state spawning.#033[00m
Dec  6 02:40:35 np0005548731 nova_compute[232433]: 2025-12-06 07:40:35.896 232437 DEBUG nova.compute.manager [None req-fb76b8b2-c185-4ca3-8902-aca6052b7cdb a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] [instance: 2d45e460-a94a-4fcb-881d-de59908d47b3] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec  6 02:40:35 np0005548731 nova_compute[232433]: 2025-12-06 07:40:35.899 232437 DEBUG nova.virt.driver [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Emitting event <LifecycleEvent: 1765006835.899012, 2d45e460-a94a-4fcb-881d-de59908d47b3 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 02:40:35 np0005548731 nova_compute[232433]: 2025-12-06 07:40:35.899 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 2d45e460-a94a-4fcb-881d-de59908d47b3] VM Resumed (Lifecycle Event)#033[00m
Dec  6 02:40:35 np0005548731 nova_compute[232433]: 2025-12-06 07:40:35.901 232437 DEBUG nova.virt.libvirt.driver [None req-fb76b8b2-c185-4ca3-8902-aca6052b7cdb a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] [instance: 2d45e460-a94a-4fcb-881d-de59908d47b3] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Dec  6 02:40:35 np0005548731 nova_compute[232433]: 2025-12-06 07:40:35.905 232437 INFO nova.virt.libvirt.driver [-] [instance: 2d45e460-a94a-4fcb-881d-de59908d47b3] Instance spawned successfully.#033[00m
Dec  6 02:40:35 np0005548731 nova_compute[232433]: 2025-12-06 07:40:35.906 232437 DEBUG nova.virt.libvirt.driver [None req-fb76b8b2-c185-4ca3-8902-aca6052b7cdb a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] [instance: 2d45e460-a94a-4fcb-881d-de59908d47b3] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Dec  6 02:40:35 np0005548731 nova_compute[232433]: 2025-12-06 07:40:35.960 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 2d45e460-a94a-4fcb-881d-de59908d47b3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:40:35 np0005548731 nova_compute[232433]: 2025-12-06 07:40:35.964 232437 DEBUG nova.virt.libvirt.driver [None req-fb76b8b2-c185-4ca3-8902-aca6052b7cdb a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] [instance: 2d45e460-a94a-4fcb-881d-de59908d47b3] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 02:40:35 np0005548731 nova_compute[232433]: 2025-12-06 07:40:35.964 232437 DEBUG nova.virt.libvirt.driver [None req-fb76b8b2-c185-4ca3-8902-aca6052b7cdb a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] [instance: 2d45e460-a94a-4fcb-881d-de59908d47b3] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 02:40:35 np0005548731 nova_compute[232433]: 2025-12-06 07:40:35.965 232437 DEBUG nova.virt.libvirt.driver [None req-fb76b8b2-c185-4ca3-8902-aca6052b7cdb a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] [instance: 2d45e460-a94a-4fcb-881d-de59908d47b3] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 02:40:35 np0005548731 nova_compute[232433]: 2025-12-06 07:40:35.965 232437 DEBUG nova.virt.libvirt.driver [None req-fb76b8b2-c185-4ca3-8902-aca6052b7cdb a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] [instance: 2d45e460-a94a-4fcb-881d-de59908d47b3] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 02:40:35 np0005548731 nova_compute[232433]: 2025-12-06 07:40:35.966 232437 DEBUG nova.virt.libvirt.driver [None req-fb76b8b2-c185-4ca3-8902-aca6052b7cdb a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] [instance: 2d45e460-a94a-4fcb-881d-de59908d47b3] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 02:40:35 np0005548731 nova_compute[232433]: 2025-12-06 07:40:35.966 232437 DEBUG nova.virt.libvirt.driver [None req-fb76b8b2-c185-4ca3-8902-aca6052b7cdb a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] [instance: 2d45e460-a94a-4fcb-881d-de59908d47b3] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 02:40:35 np0005548731 nova_compute[232433]: 2025-12-06 07:40:35.971 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 2d45e460-a94a-4fcb-881d-de59908d47b3] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  6 02:40:36 np0005548731 nova_compute[232433]: 2025-12-06 07:40:36.023 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 2d45e460-a94a-4fcb-881d-de59908d47b3] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  6 02:40:36 np0005548731 nova_compute[232433]: 2025-12-06 07:40:36.085 232437 INFO nova.compute.manager [None req-fb76b8b2-c185-4ca3-8902-aca6052b7cdb a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] [instance: 2d45e460-a94a-4fcb-881d-de59908d47b3] Took 9.51 seconds to spawn the instance on the hypervisor.#033[00m
Dec  6 02:40:36 np0005548731 nova_compute[232433]: 2025-12-06 07:40:36.085 232437 DEBUG nova.compute.manager [None req-fb76b8b2-c185-4ca3-8902-aca6052b7cdb a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] [instance: 2d45e460-a94a-4fcb-881d-de59908d47b3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:40:36 np0005548731 nova_compute[232433]: 2025-12-06 07:40:36.188 232437 INFO nova.compute.manager [None req-fb76b8b2-c185-4ca3-8902-aca6052b7cdb a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] [instance: 2d45e460-a94a-4fcb-881d-de59908d47b3] Took 10.69 seconds to build instance.#033[00m
Dec  6 02:40:36 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:40:36 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 02:40:36 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:40:36.192 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 02:40:36 np0005548731 nova_compute[232433]: 2025-12-06 07:40:36.219 232437 DEBUG oslo_concurrency.lockutils [None req-fb76b8b2-c185-4ca3-8902-aca6052b7cdb a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Lock "2d45e460-a94a-4fcb-881d-de59908d47b3" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.882s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:40:36 np0005548731 nova_compute[232433]: 2025-12-06 07:40:36.734 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:40:37 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:40:37 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:40:37 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:40:37.011 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:40:37 np0005548731 nova_compute[232433]: 2025-12-06 07:40:37.129 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:40:37 np0005548731 nova_compute[232433]: 2025-12-06 07:40:37.982 232437 DEBUG oslo_concurrency.lockutils [None req-3bcb4b10-a1e3-4fc2-84c8-23b97251ff8c 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Acquiring lock "6a50a40c-3b05-4c0e-aa67-1489e203824e" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:40:37 np0005548731 nova_compute[232433]: 2025-12-06 07:40:37.982 232437 DEBUG oslo_concurrency.lockutils [None req-3bcb4b10-a1e3-4fc2-84c8-23b97251ff8c 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Lock "6a50a40c-3b05-4c0e-aa67-1489e203824e" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:40:37 np0005548731 nova_compute[232433]: 2025-12-06 07:40:37.982 232437 DEBUG oslo_concurrency.lockutils [None req-3bcb4b10-a1e3-4fc2-84c8-23b97251ff8c 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Acquiring lock "6a50a40c-3b05-4c0e-aa67-1489e203824e-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:40:37 np0005548731 nova_compute[232433]: 2025-12-06 07:40:37.982 232437 DEBUG oslo_concurrency.lockutils [None req-3bcb4b10-a1e3-4fc2-84c8-23b97251ff8c 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Lock "6a50a40c-3b05-4c0e-aa67-1489e203824e-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:40:37 np0005548731 nova_compute[232433]: 2025-12-06 07:40:37.983 232437 DEBUG oslo_concurrency.lockutils [None req-3bcb4b10-a1e3-4fc2-84c8-23b97251ff8c 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Lock "6a50a40c-3b05-4c0e-aa67-1489e203824e-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:40:37 np0005548731 nova_compute[232433]: 2025-12-06 07:40:37.984 232437 INFO nova.compute.manager [None req-3bcb4b10-a1e3-4fc2-84c8-23b97251ff8c 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] [instance: 6a50a40c-3b05-4c0e-aa67-1489e203824e] Terminating instance#033[00m
Dec  6 02:40:37 np0005548731 nova_compute[232433]: 2025-12-06 07:40:37.984 232437 DEBUG nova.compute.manager [None req-3bcb4b10-a1e3-4fc2-84c8-23b97251ff8c 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] [instance: 6a50a40c-3b05-4c0e-aa67-1489e203824e] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec  6 02:40:38 np0005548731 kernel: tapad544c9a-af (unregistering): left promiscuous mode
Dec  6 02:40:38 np0005548731 NetworkManager[49182]: <info>  [1765006838.0436] device (tapad544c9a-af): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec  6 02:40:38 np0005548731 ovn_controller[133927]: 2025-12-06T07:40:38Z|00627|binding|INFO|Releasing lport ad544c9a-af55-4a7f-babe-68f6d1b23e25 from this chassis (sb_readonly=0)
Dec  6 02:40:38 np0005548731 ovn_controller[133927]: 2025-12-06T07:40:38Z|00628|binding|INFO|Setting lport ad544c9a-af55-4a7f-babe-68f6d1b23e25 down in Southbound
Dec  6 02:40:38 np0005548731 nova_compute[232433]: 2025-12-06 07:40:38.080 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:40:38 np0005548731 ovn_controller[133927]: 2025-12-06T07:40:38Z|00629|binding|INFO|Removing iface tapad544c9a-af ovn-installed in OVS
Dec  6 02:40:38 np0005548731 nova_compute[232433]: 2025-12-06 07:40:38.084 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:40:38 np0005548731 nova_compute[232433]: 2025-12-06 07:40:38.086 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:40:38 np0005548731 nova_compute[232433]: 2025-12-06 07:40:38.100 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:40:38 np0005548731 systemd[1]: machine-qemu\x2d52\x2dinstance\x2d00000078.scope: Deactivated successfully.
Dec  6 02:40:38 np0005548731 systemd[1]: machine-qemu\x2d52\x2dinstance\x2d00000078.scope: Consumed 30.407s CPU time.
Dec  6 02:40:38 np0005548731 systemd-machined[195355]: Machine qemu-52-instance-00000078 terminated.
Dec  6 02:40:38 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:40:38.128 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1f:fb:86 10.100.0.9'], port_security=['fa:16:3e:1f:fb:86 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '6a50a40c-3b05-4c0e-aa67-1489e203824e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5bb49e8a-b939-4c79-851c-62c634be0272', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '833f4cf9f5a64b2ab94c3bf330353a31', 'neutron:revision_number': '4', 'neutron:security_group_ids': '384b06d0-71ab-455f-9033-7290730c5c8b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=56d5d28a-0d18-4549-b1d7-8420194c6348, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], logical_port=ad544c9a-af55-4a7f-babe-68f6d1b23e25) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 02:40:38 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:40:38.130 143965 INFO neutron.agent.ovn.metadata.agent [-] Port ad544c9a-af55-4a7f-babe-68f6d1b23e25 in datapath 5bb49e8a-b939-4c79-851c-62c634be0272 unbound from our chassis#033[00m
Dec  6 02:40:38 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:40:38.131 143965 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 5bb49e8a-b939-4c79-851c-62c634be0272, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec  6 02:40:38 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:40:38.132 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[a24b8554-7559-489e-a6e9-9b32d1333e42]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:40:38 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:40:38.133 143965 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-5bb49e8a-b939-4c79-851c-62c634be0272 namespace which is not needed anymore#033[00m
Dec  6 02:40:38 np0005548731 nova_compute[232433]: 2025-12-06 07:40:38.174 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:40:38 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:40:38 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:40:38 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:40:38.194 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:40:38 np0005548731 kernel: tapad544c9a-af: entered promiscuous mode
Dec  6 02:40:38 np0005548731 NetworkManager[49182]: <info>  [1765006838.2028] manager: (tapad544c9a-af): new Tun device (/org/freedesktop/NetworkManager/Devices/291)
Dec  6 02:40:38 np0005548731 ovn_controller[133927]: 2025-12-06T07:40:38Z|00630|binding|INFO|Claiming lport ad544c9a-af55-4a7f-babe-68f6d1b23e25 for this chassis.
Dec  6 02:40:38 np0005548731 ovn_controller[133927]: 2025-12-06T07:40:38Z|00631|binding|INFO|ad544c9a-af55-4a7f-babe-68f6d1b23e25: Claiming fa:16:3e:1f:fb:86 10.100.0.9
Dec  6 02:40:38 np0005548731 nova_compute[232433]: 2025-12-06 07:40:38.203 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:40:38 np0005548731 kernel: tapad544c9a-af (unregistering): left promiscuous mode
Dec  6 02:40:38 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:40:38.218 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1f:fb:86 10.100.0.9'], port_security=['fa:16:3e:1f:fb:86 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '6a50a40c-3b05-4c0e-aa67-1489e203824e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5bb49e8a-b939-4c79-851c-62c634be0272', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '833f4cf9f5a64b2ab94c3bf330353a31', 'neutron:revision_number': '4', 'neutron:security_group_ids': '384b06d0-71ab-455f-9033-7290730c5c8b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=56d5d28a-0d18-4549-b1d7-8420194c6348, chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], logical_port=ad544c9a-af55-4a7f-babe-68f6d1b23e25) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 02:40:38 np0005548731 nova_compute[232433]: 2025-12-06 07:40:38.226 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:40:38 np0005548731 ovn_controller[133927]: 2025-12-06T07:40:38Z|00632|binding|INFO|Setting lport ad544c9a-af55-4a7f-babe-68f6d1b23e25 ovn-installed in OVS
Dec  6 02:40:38 np0005548731 ovn_controller[133927]: 2025-12-06T07:40:38Z|00633|binding|INFO|Setting lport ad544c9a-af55-4a7f-babe-68f6d1b23e25 up in Southbound
Dec  6 02:40:38 np0005548731 ovn_controller[133927]: 2025-12-06T07:40:38Z|00634|binding|INFO|Releasing lport ad544c9a-af55-4a7f-babe-68f6d1b23e25 from this chassis (sb_readonly=1)
Dec  6 02:40:38 np0005548731 ovn_controller[133927]: 2025-12-06T07:40:38Z|00635|if_status|INFO|Dropped 2 log messages in last 56 seconds (most recently, 56 seconds ago) due to excessive rate
Dec  6 02:40:38 np0005548731 ovn_controller[133927]: 2025-12-06T07:40:38Z|00636|if_status|INFO|Not setting lport ad544c9a-af55-4a7f-babe-68f6d1b23e25 down as sb is readonly
Dec  6 02:40:38 np0005548731 ovn_controller[133927]: 2025-12-06T07:40:38Z|00637|binding|INFO|Removing iface tapad544c9a-af ovn-installed in OVS
Dec  6 02:40:38 np0005548731 ovn_controller[133927]: 2025-12-06T07:40:38Z|00638|binding|INFO|Releasing lport ad544c9a-af55-4a7f-babe-68f6d1b23e25 from this chassis (sb_readonly=0)
Dec  6 02:40:38 np0005548731 ovn_controller[133927]: 2025-12-06T07:40:38Z|00639|binding|INFO|Setting lport ad544c9a-af55-4a7f-babe-68f6d1b23e25 down in Southbound
Dec  6 02:40:38 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:40:38.237 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1f:fb:86 10.100.0.9'], port_security=['fa:16:3e:1f:fb:86 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '6a50a40c-3b05-4c0e-aa67-1489e203824e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5bb49e8a-b939-4c79-851c-62c634be0272', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '833f4cf9f5a64b2ab94c3bf330353a31', 'neutron:revision_number': '4', 'neutron:security_group_ids': '384b06d0-71ab-455f-9033-7290730c5c8b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=56d5d28a-0d18-4549-b1d7-8420194c6348, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], logical_port=ad544c9a-af55-4a7f-babe-68f6d1b23e25) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 02:40:38 np0005548731 nova_compute[232433]: 2025-12-06 07:40:38.240 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:40:38 np0005548731 nova_compute[232433]: 2025-12-06 07:40:38.241 232437 INFO nova.virt.libvirt.driver [-] [instance: 6a50a40c-3b05-4c0e-aa67-1489e203824e] Instance destroyed successfully.#033[00m
Dec  6 02:40:38 np0005548731 nova_compute[232433]: 2025-12-06 07:40:38.242 232437 DEBUG nova.objects.instance [None req-3bcb4b10-a1e3-4fc2-84c8-23b97251ff8c 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Lazy-loading 'resources' on Instance uuid 6a50a40c-3b05-4c0e-aa67-1489e203824e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:40:38 np0005548731 nova_compute[232433]: 2025-12-06 07:40:38.261 232437 DEBUG nova.virt.libvirt.vif [None req-3bcb4b10-a1e3-4fc2-84c8-23b97251ff8c 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T07:33:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-AttachVolumeMultiAttachTest-server-386684313',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-attachvolumemultiattachtest-server-386684313',id=120,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-06T07:33:29Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='833f4cf9f5a64b2ab94c3bf330353a31',ramdisk_id='',reservation_id='r-b7297kax',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-AttachVolumeMultiAttachTest-690984293',owner_user_name='tempest-AttachVolumeMultiAttachTest-690984293-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T07:33:29Z,user_data=None,user_id='605b5481e0c944048e6a67046c30d693',uuid=6a50a40c-3b05-4c0e-aa67-1489e203824e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "ad544c9a-af55-4a7f-babe-68f6d1b23e25", "address": "fa:16:3e:1f:fb:86", "network": {"id": "5bb49e8a-b939-4c79-851c-62c634be0272", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-50482264-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "833f4cf9f5a64b2ab94c3bf330353a31", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapad544c9a-af", "ovs_interfaceid": "ad544c9a-af55-4a7f-babe-68f6d1b23e25", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec  6 02:40:38 np0005548731 nova_compute[232433]: 2025-12-06 07:40:38.261 232437 DEBUG nova.network.os_vif_util [None req-3bcb4b10-a1e3-4fc2-84c8-23b97251ff8c 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Converting VIF {"id": "ad544c9a-af55-4a7f-babe-68f6d1b23e25", "address": "fa:16:3e:1f:fb:86", "network": {"id": "5bb49e8a-b939-4c79-851c-62c634be0272", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-50482264-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "833f4cf9f5a64b2ab94c3bf330353a31", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapad544c9a-af", "ovs_interfaceid": "ad544c9a-af55-4a7f-babe-68f6d1b23e25", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 02:40:38 np0005548731 nova_compute[232433]: 2025-12-06 07:40:38.262 232437 DEBUG nova.network.os_vif_util [None req-3bcb4b10-a1e3-4fc2-84c8-23b97251ff8c 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:1f:fb:86,bridge_name='br-int',has_traffic_filtering=True,id=ad544c9a-af55-4a7f-babe-68f6d1b23e25,network=Network(5bb49e8a-b939-4c79-851c-62c634be0272),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapad544c9a-af') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 02:40:38 np0005548731 nova_compute[232433]: 2025-12-06 07:40:38.263 232437 DEBUG os_vif [None req-3bcb4b10-a1e3-4fc2-84c8-23b97251ff8c 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:1f:fb:86,bridge_name='br-int',has_traffic_filtering=True,id=ad544c9a-af55-4a7f-babe-68f6d1b23e25,network=Network(5bb49e8a-b939-4c79-851c-62c634be0272),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapad544c9a-af') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec  6 02:40:38 np0005548731 nova_compute[232433]: 2025-12-06 07:40:38.265 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:40:38 np0005548731 nova_compute[232433]: 2025-12-06 07:40:38.266 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapad544c9a-af, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:40:38 np0005548731 nova_compute[232433]: 2025-12-06 07:40:38.267 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:40:38 np0005548731 nova_compute[232433]: 2025-12-06 07:40:38.270 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  6 02:40:38 np0005548731 nova_compute[232433]: 2025-12-06 07:40:38.272 232437 INFO os_vif [None req-3bcb4b10-a1e3-4fc2-84c8-23b97251ff8c 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:1f:fb:86,bridge_name='br-int',has_traffic_filtering=True,id=ad544c9a-af55-4a7f-babe-68f6d1b23e25,network=Network(5bb49e8a-b939-4c79-851c-62c634be0272),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapad544c9a-af')#033[00m
Dec  6 02:40:38 np0005548731 neutron-haproxy-ovnmeta-5bb49e8a-b939-4c79-851c-62c634be0272[282786]: [NOTICE]   (282790) : haproxy version is 2.8.14-c23fe91
Dec  6 02:40:38 np0005548731 neutron-haproxy-ovnmeta-5bb49e8a-b939-4c79-851c-62c634be0272[282786]: [NOTICE]   (282790) : path to executable is /usr/sbin/haproxy
Dec  6 02:40:38 np0005548731 neutron-haproxy-ovnmeta-5bb49e8a-b939-4c79-851c-62c634be0272[282786]: [WARNING]  (282790) : Exiting Master process...
Dec  6 02:40:38 np0005548731 neutron-haproxy-ovnmeta-5bb49e8a-b939-4c79-851c-62c634be0272[282786]: [WARNING]  (282790) : Exiting Master process...
Dec  6 02:40:38 np0005548731 neutron-haproxy-ovnmeta-5bb49e8a-b939-4c79-851c-62c634be0272[282786]: [ALERT]    (282790) : Current worker (282792) exited with code 143 (Terminated)
Dec  6 02:40:38 np0005548731 neutron-haproxy-ovnmeta-5bb49e8a-b939-4c79-851c-62c634be0272[282786]: [WARNING]  (282790) : All workers exited. Exiting... (0)
Dec  6 02:40:38 np0005548731 systemd[1]: libpod-20324efdfeb613da0ec6dbc2b5c77ac2a2f44ec1dda8fab0d9a6a795d10320c2.scope: Deactivated successfully.
Dec  6 02:40:38 np0005548731 podman[292935]: 2025-12-06 07:40:38.289167113 +0000 UTC m=+0.052498731 container died 20324efdfeb613da0ec6dbc2b5c77ac2a2f44ec1dda8fab0d9a6a795d10320c2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5bb49e8a-b939-4c79-851c-62c634be0272, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec  6 02:40:38 np0005548731 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-20324efdfeb613da0ec6dbc2b5c77ac2a2f44ec1dda8fab0d9a6a795d10320c2-userdata-shm.mount: Deactivated successfully.
Dec  6 02:40:38 np0005548731 systemd[1]: var-lib-containers-storage-overlay-0815dbcf6806f53a04728943ebdcd5c4f82587c43c53d3f9bfa3653102522ef5-merged.mount: Deactivated successfully.
Dec  6 02:40:38 np0005548731 podman[292935]: 2025-12-06 07:40:38.40849124 +0000 UTC m=+0.171822838 container cleanup 20324efdfeb613da0ec6dbc2b5c77ac2a2f44ec1dda8fab0d9a6a795d10320c2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5bb49e8a-b939-4c79-851c-62c634be0272, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125)
Dec  6 02:40:38 np0005548731 systemd[1]: libpod-conmon-20324efdfeb613da0ec6dbc2b5c77ac2a2f44ec1dda8fab0d9a6a795d10320c2.scope: Deactivated successfully.
Dec  6 02:40:38 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:40:38 np0005548731 podman[292968]: 2025-12-06 07:40:38.527155451 +0000 UTC m=+0.098005659 container remove 20324efdfeb613da0ec6dbc2b5c77ac2a2f44ec1dda8fab0d9a6a795d10320c2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5bb49e8a-b939-4c79-851c-62c634be0272, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true)
Dec  6 02:40:38 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:40:38.535 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[cc0970f3-603c-4aca-8cf4-14e4e5bf7518]: (4, ('Sat Dec  6 07:40:38 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-5bb49e8a-b939-4c79-851c-62c634be0272 (20324efdfeb613da0ec6dbc2b5c77ac2a2f44ec1dda8fab0d9a6a795d10320c2)\n20324efdfeb613da0ec6dbc2b5c77ac2a2f44ec1dda8fab0d9a6a795d10320c2\nSat Dec  6 07:40:38 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-5bb49e8a-b939-4c79-851c-62c634be0272 (20324efdfeb613da0ec6dbc2b5c77ac2a2f44ec1dda8fab0d9a6a795d10320c2)\n20324efdfeb613da0ec6dbc2b5c77ac2a2f44ec1dda8fab0d9a6a795d10320c2\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:40:38 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:40:38.537 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[bf269693-9c87-4643-8575-1c49963c955a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:40:38 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:40:38.538 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5bb49e8a-b0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:40:38 np0005548731 kernel: tap5bb49e8a-b0: left promiscuous mode
Dec  6 02:40:38 np0005548731 nova_compute[232433]: 2025-12-06 07:40:38.590 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:40:38 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:40:38.606 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[cea8649a-542b-46bd-b558-8abdb783df87]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:40:38 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:40:38.620 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[f372c074-4811-400c-b52c-49f5ac4d49b9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:40:38 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:40:38.621 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[ff1b9ac1-b59a-4793-89b7-69afce9062bf]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:40:38 np0005548731 nova_compute[232433]: 2025-12-06 07:40:38.629 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:40:38 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:40:38.637 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[c004e2d1-393c-4530-9ea4-6c9f31e80a5a]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 667047, 'reachable_time': 25298, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 292998, 'error': None, 'target': 'ovnmeta-5bb49e8a-b939-4c79-851c-62c634be0272', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:40:38 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:40:38.640 144079 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-5bb49e8a-b939-4c79-851c-62c634be0272 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Dec  6 02:40:38 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:40:38.640 144079 DEBUG oslo.privsep.daemon [-] privsep: reply[bea9ae4b-ffc4-47d7-84bb-5677318871d0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:40:38 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:40:38.641 143965 INFO neutron.agent.ovn.metadata.agent [-] Port ad544c9a-af55-4a7f-babe-68f6d1b23e25 in datapath 5bb49e8a-b939-4c79-851c-62c634be0272 unbound from our chassis#033[00m
Dec  6 02:40:38 np0005548731 systemd[1]: run-netns-ovnmeta\x2d5bb49e8a\x2db939\x2d4c79\x2d851c\x2d62c634be0272.mount: Deactivated successfully.
Dec  6 02:40:38 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:40:38.642 143965 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 5bb49e8a-b939-4c79-851c-62c634be0272, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec  6 02:40:38 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:40:38.643 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[97a80f74-2850-4abc-b9ac-134d6bce04be]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:40:38 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:40:38.646 143965 INFO neutron.agent.ovn.metadata.agent [-] Port ad544c9a-af55-4a7f-babe-68f6d1b23e25 in datapath 5bb49e8a-b939-4c79-851c-62c634be0272 unbound from our chassis#033[00m
Dec  6 02:40:38 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:40:38.647 143965 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 5bb49e8a-b939-4c79-851c-62c634be0272, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec  6 02:40:38 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:40:38.648 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[f94202bc-af40-428b-bbfe-b4ea7969f64e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:40:38 np0005548731 nova_compute[232433]: 2025-12-06 07:40:38.821 232437 INFO nova.compute.manager [None req-45011976-7ee2-471d-9c60-1118a5ee6731 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] [instance: 2d45e460-a94a-4fcb-881d-de59908d47b3] Pausing#033[00m
Dec  6 02:40:38 np0005548731 nova_compute[232433]: 2025-12-06 07:40:38.822 232437 DEBUG nova.objects.instance [None req-45011976-7ee2-471d-9c60-1118a5ee6731 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Lazy-loading 'flavor' on Instance uuid 2d45e460-a94a-4fcb-881d-de59908d47b3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:40:38 np0005548731 nova_compute[232433]: 2025-12-06 07:40:38.852 232437 DEBUG nova.virt.driver [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Emitting event <LifecycleEvent: 1765006838.852757, 2d45e460-a94a-4fcb-881d-de59908d47b3 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 02:40:38 np0005548731 nova_compute[232433]: 2025-12-06 07:40:38.853 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 2d45e460-a94a-4fcb-881d-de59908d47b3] VM Paused (Lifecycle Event)#033[00m
Dec  6 02:40:38 np0005548731 nova_compute[232433]: 2025-12-06 07:40:38.855 232437 DEBUG nova.compute.manager [None req-45011976-7ee2-471d-9c60-1118a5ee6731 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] [instance: 2d45e460-a94a-4fcb-881d-de59908d47b3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:40:38 np0005548731 nova_compute[232433]: 2025-12-06 07:40:38.885 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 2d45e460-a94a-4fcb-881d-de59908d47b3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:40:38 np0005548731 nova_compute[232433]: 2025-12-06 07:40:38.888 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 2d45e460-a94a-4fcb-881d-de59908d47b3] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: pausing, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  6 02:40:38 np0005548731 nova_compute[232433]: 2025-12-06 07:40:38.911 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 2d45e460-a94a-4fcb-881d-de59908d47b3] During sync_power_state the instance has a pending task (pausing). Skip.#033[00m
Dec  6 02:40:39 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:40:39 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:40:39 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:40:39.013 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:40:39 np0005548731 nova_compute[232433]: 2025-12-06 07:40:39.044 232437 INFO nova.virt.libvirt.driver [None req-3bcb4b10-a1e3-4fc2-84c8-23b97251ff8c 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] [instance: 6a50a40c-3b05-4c0e-aa67-1489e203824e] Deleting instance files /var/lib/nova/instances/6a50a40c-3b05-4c0e-aa67-1489e203824e_del#033[00m
Dec  6 02:40:39 np0005548731 nova_compute[232433]: 2025-12-06 07:40:39.045 232437 INFO nova.virt.libvirt.driver [None req-3bcb4b10-a1e3-4fc2-84c8-23b97251ff8c 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] [instance: 6a50a40c-3b05-4c0e-aa67-1489e203824e] Deletion of /var/lib/nova/instances/6a50a40c-3b05-4c0e-aa67-1489e203824e_del complete#033[00m
Dec  6 02:40:39 np0005548731 nova_compute[232433]: 2025-12-06 07:40:39.112 232437 INFO nova.compute.manager [None req-3bcb4b10-a1e3-4fc2-84c8-23b97251ff8c 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] [instance: 6a50a40c-3b05-4c0e-aa67-1489e203824e] Took 1.13 seconds to destroy the instance on the hypervisor.#033[00m
Dec  6 02:40:39 np0005548731 nova_compute[232433]: 2025-12-06 07:40:39.113 232437 DEBUG oslo.service.loopingcall [None req-3bcb4b10-a1e3-4fc2-84c8-23b97251ff8c 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Dec  6 02:40:39 np0005548731 nova_compute[232433]: 2025-12-06 07:40:39.113 232437 DEBUG nova.compute.manager [-] [instance: 6a50a40c-3b05-4c0e-aa67-1489e203824e] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Dec  6 02:40:39 np0005548731 nova_compute[232433]: 2025-12-06 07:40:39.114 232437 DEBUG nova.network.neutron [-] [instance: 6a50a40c-3b05-4c0e-aa67-1489e203824e] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Dec  6 02:40:40 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:40:40 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:40:40 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:40:40.196 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:40:40 np0005548731 nova_compute[232433]: 2025-12-06 07:40:40.305 232437 DEBUG nova.network.neutron [-] [instance: 6a50a40c-3b05-4c0e-aa67-1489e203824e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 02:40:40 np0005548731 nova_compute[232433]: 2025-12-06 07:40:40.338 232437 INFO nova.compute.manager [-] [instance: 6a50a40c-3b05-4c0e-aa67-1489e203824e] Took 1.22 seconds to deallocate network for instance.#033[00m
Dec  6 02:40:40 np0005548731 nova_compute[232433]: 2025-12-06 07:40:40.653 232437 DEBUG nova.compute.manager [req-66b23b6b-d8af-4229-b3f6-4f59c8da927a req-1c44f48f-b7e5-4be7-9957-c4be6363c8bd 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 6a50a40c-3b05-4c0e-aa67-1489e203824e] Received event network-vif-deleted-ad544c9a-af55-4a7f-babe-68f6d1b23e25 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:40:40 np0005548731 nova_compute[232433]: 2025-12-06 07:40:40.846 232437 INFO nova.compute.manager [None req-3bcb4b10-a1e3-4fc2-84c8-23b97251ff8c 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] [instance: 6a50a40c-3b05-4c0e-aa67-1489e203824e] Took 0.51 seconds to detach 1 volumes for instance.#033[00m
Dec  6 02:40:40 np0005548731 nova_compute[232433]: 2025-12-06 07:40:40.919 232437 DEBUG oslo_concurrency.lockutils [None req-3bcb4b10-a1e3-4fc2-84c8-23b97251ff8c 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:40:40 np0005548731 nova_compute[232433]: 2025-12-06 07:40:40.920 232437 DEBUG oslo_concurrency.lockutils [None req-3bcb4b10-a1e3-4fc2-84c8-23b97251ff8c 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:40:41 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:40:41 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:40:41 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:40:41.015 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:40:41 np0005548731 nova_compute[232433]: 2025-12-06 07:40:41.018 232437 DEBUG oslo_concurrency.processutils [None req-3bcb4b10-a1e3-4fc2-84c8-23b97251ff8c 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:40:41 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 02:40:41 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2878607099' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 02:40:41 np0005548731 nova_compute[232433]: 2025-12-06 07:40:41.459 232437 DEBUG oslo_concurrency.processutils [None req-3bcb4b10-a1e3-4fc2-84c8-23b97251ff8c 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.441s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:40:41 np0005548731 nova_compute[232433]: 2025-12-06 07:40:41.464 232437 DEBUG nova.compute.provider_tree [None req-3bcb4b10-a1e3-4fc2-84c8-23b97251ff8c 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Inventory has not changed in ProviderTree for provider: 6d00757a-082f-486d-ae84-869a2ba2e6e7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  6 02:40:42 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:40:42 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 02:40:42 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:40:42.198 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 02:40:42 np0005548731 nova_compute[232433]: 2025-12-06 07:40:42.397 232437 DEBUG nova.scheduler.client.report [None req-3bcb4b10-a1e3-4fc2-84c8-23b97251ff8c 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Inventory has not changed for provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  6 02:40:43 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:40:43 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:40:43 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:40:43.017 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:40:43 np0005548731 nova_compute[232433]: 2025-12-06 07:40:43.178 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:40:43 np0005548731 nova_compute[232433]: 2025-12-06 07:40:43.254 232437 DEBUG oslo_concurrency.lockutils [None req-3bcb4b10-a1e3-4fc2-84c8-23b97251ff8c 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 2.334s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:40:43 np0005548731 nova_compute[232433]: 2025-12-06 07:40:43.267 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:40:43 np0005548731 nova_compute[232433]: 2025-12-06 07:40:43.321 232437 INFO nova.scheduler.client.report [None req-3bcb4b10-a1e3-4fc2-84c8-23b97251ff8c 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Deleted allocations for instance 6a50a40c-3b05-4c0e-aa67-1489e203824e#033[00m
Dec  6 02:40:43 np0005548731 nova_compute[232433]: 2025-12-06 07:40:43.444 232437 DEBUG oslo_concurrency.lockutils [None req-3bcb4b10-a1e3-4fc2-84c8-23b97251ff8c 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Lock "6a50a40c-3b05-4c0e-aa67-1489e203824e" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.462s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:40:43 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:40:43 np0005548731 nova_compute[232433]: 2025-12-06 07:40:43.764 232437 DEBUG oslo_concurrency.lockutils [None req-a30aa99e-4821-41ed-9eb7-7b5f40ca048b a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Acquiring lock "2d45e460-a94a-4fcb-881d-de59908d47b3" by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:40:43 np0005548731 nova_compute[232433]: 2025-12-06 07:40:43.765 232437 DEBUG oslo_concurrency.lockutils [None req-a30aa99e-4821-41ed-9eb7-7b5f40ca048b a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Lock "2d45e460-a94a-4fcb-881d-de59908d47b3" acquired by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:40:43 np0005548731 nova_compute[232433]: 2025-12-06 07:40:43.765 232437 INFO nova.compute.manager [None req-a30aa99e-4821-41ed-9eb7-7b5f40ca048b a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] [instance: 2d45e460-a94a-4fcb-881d-de59908d47b3] Shelving#033[00m
Dec  6 02:40:43 np0005548731 kernel: tap35b6f475-0d (unregistering): left promiscuous mode
Dec  6 02:40:43 np0005548731 NetworkManager[49182]: <info>  [1765006843.9977] device (tap35b6f475-0d): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec  6 02:40:44 np0005548731 ovn_controller[133927]: 2025-12-06T07:40:44Z|00640|binding|INFO|Releasing lport 35b6f475-0d58-47c2-811e-67155ce3476e from this chassis (sb_readonly=0)
Dec  6 02:40:44 np0005548731 ovn_controller[133927]: 2025-12-06T07:40:44Z|00641|binding|INFO|Setting lport 35b6f475-0d58-47c2-811e-67155ce3476e down in Southbound
Dec  6 02:40:44 np0005548731 ovn_controller[133927]: 2025-12-06T07:40:44Z|00642|binding|INFO|Removing iface tap35b6f475-0d ovn-installed in OVS
Dec  6 02:40:44 np0005548731 nova_compute[232433]: 2025-12-06 07:40:44.014 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:40:44 np0005548731 nova_compute[232433]: 2025-12-06 07:40:44.016 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:40:44 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:40:44.024 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:de:b1:03 10.100.0.6'], port_security=['fa:16:3e:de:b1:03 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '2d45e460-a94a-4fcb-881d-de59908d47b3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3beede49-1cbb-425c-b1af-82f43dc57163', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b10aa03d68eb4d4799d53538521cc364', 'neutron:revision_number': '4', 'neutron:security_group_ids': '9f7f4f14-4f63-443a-af4a-951f8b77b0f7', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f4f51045-db64-4b9b-8a34-a3c617e616e7, chassis=[], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], logical_port=35b6f475-0d58-47c2-811e-67155ce3476e) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 02:40:44 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:40:44.025 143965 INFO neutron.agent.ovn.metadata.agent [-] Port 35b6f475-0d58-47c2-811e-67155ce3476e in datapath 3beede49-1cbb-425c-b1af-82f43dc57163 unbound from our chassis#033[00m
Dec  6 02:40:44 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:40:44.027 143965 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 3beede49-1cbb-425c-b1af-82f43dc57163, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec  6 02:40:44 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:40:44.028 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[7a1f5d7b-a06f-4222-8bb7-c2851aa0edfc]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:40:44 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:40:44.028 143965 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-3beede49-1cbb-425c-b1af-82f43dc57163 namespace which is not needed anymore#033[00m
Dec  6 02:40:44 np0005548731 nova_compute[232433]: 2025-12-06 07:40:44.035 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:40:44 np0005548731 systemd[1]: machine-qemu\x2d64\x2dinstance\x2d00000089.scope: Deactivated successfully.
Dec  6 02:40:44 np0005548731 systemd[1]: machine-qemu\x2d64\x2dinstance\x2d00000089.scope: Consumed 3.803s CPU time.
Dec  6 02:40:44 np0005548731 systemd-machined[195355]: Machine qemu-64-instance-00000089 terminated.
Dec  6 02:40:44 np0005548731 neutron-haproxy-ovnmeta-3beede49-1cbb-425c-b1af-82f43dc57163[292892]: [NOTICE]   (292897) : haproxy version is 2.8.14-c23fe91
Dec  6 02:40:44 np0005548731 neutron-haproxy-ovnmeta-3beede49-1cbb-425c-b1af-82f43dc57163[292892]: [NOTICE]   (292897) : path to executable is /usr/sbin/haproxy
Dec  6 02:40:44 np0005548731 neutron-haproxy-ovnmeta-3beede49-1cbb-425c-b1af-82f43dc57163[292892]: [WARNING]  (292897) : Exiting Master process...
Dec  6 02:40:44 np0005548731 neutron-haproxy-ovnmeta-3beede49-1cbb-425c-b1af-82f43dc57163[292892]: [ALERT]    (292897) : Current worker (292899) exited with code 143 (Terminated)
Dec  6 02:40:44 np0005548731 neutron-haproxy-ovnmeta-3beede49-1cbb-425c-b1af-82f43dc57163[292892]: [WARNING]  (292897) : All workers exited. Exiting... (0)
Dec  6 02:40:44 np0005548731 systemd[1]: libpod-3009e143e836508bd93b64905d338b1fc01af14fe0a388e73c02c2ccef747f0c.scope: Deactivated successfully.
Dec  6 02:40:44 np0005548731 podman[293101]: 2025-12-06 07:40:44.159657299 +0000 UTC m=+0.044897295 container died 3009e143e836508bd93b64905d338b1fc01af14fe0a388e73c02c2ccef747f0c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3beede49-1cbb-425c-b1af-82f43dc57163, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3)
Dec  6 02:40:44 np0005548731 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-3009e143e836508bd93b64905d338b1fc01af14fe0a388e73c02c2ccef747f0c-userdata-shm.mount: Deactivated successfully.
Dec  6 02:40:44 np0005548731 systemd[1]: var-lib-containers-storage-overlay-14c7b1dafb0c3399b61c6f96d9868f59ef80200af823e5e172c250af5c099606-merged.mount: Deactivated successfully.
Dec  6 02:40:44 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:40:44 np0005548731 NetworkManager[49182]: <info>  [1765006844.2043] manager: (tap35b6f475-0d): new Tun device (/org/freedesktop/NetworkManager/Devices/292)
Dec  6 02:40:44 np0005548731 kernel: tap35b6f475-0d: entered promiscuous mode
Dec  6 02:40:44 np0005548731 systemd-udevd[293080]: Network interface NamePolicy= disabled on kernel command line.
Dec  6 02:40:44 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:40:44 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:40:44.200 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:40:44 np0005548731 podman[293101]: 2025-12-06 07:40:44.205518595 +0000 UTC m=+0.090758591 container cleanup 3009e143e836508bd93b64905d338b1fc01af14fe0a388e73c02c2ccef747f0c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3beede49-1cbb-425c-b1af-82f43dc57163, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  6 02:40:44 np0005548731 kernel: tap35b6f475-0d (unregistering): left promiscuous mode
Dec  6 02:40:44 np0005548731 ovn_controller[133927]: 2025-12-06T07:40:44Z|00643|binding|INFO|Claiming lport 35b6f475-0d58-47c2-811e-67155ce3476e for this chassis.
Dec  6 02:40:44 np0005548731 ovn_controller[133927]: 2025-12-06T07:40:44Z|00644|binding|INFO|35b6f475-0d58-47c2-811e-67155ce3476e: Claiming fa:16:3e:de:b1:03 10.100.0.6
Dec  6 02:40:44 np0005548731 nova_compute[232433]: 2025-12-06 07:40:44.206 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:40:44 np0005548731 systemd[1]: libpod-conmon-3009e143e836508bd93b64905d338b1fc01af14fe0a388e73c02c2ccef747f0c.scope: Deactivated successfully.
Dec  6 02:40:44 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:40:44.215 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:de:b1:03 10.100.0.6'], port_security=['fa:16:3e:de:b1:03 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '2d45e460-a94a-4fcb-881d-de59908d47b3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3beede49-1cbb-425c-b1af-82f43dc57163', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b10aa03d68eb4d4799d53538521cc364', 'neutron:revision_number': '4', 'neutron:security_group_ids': '9f7f4f14-4f63-443a-af4a-951f8b77b0f7', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f4f51045-db64-4b9b-8a34-a3c617e616e7, chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], logical_port=35b6f475-0d58-47c2-811e-67155ce3476e) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 02:40:44 np0005548731 nova_compute[232433]: 2025-12-06 07:40:44.228 232437 INFO nova.virt.libvirt.driver [-] [instance: 2d45e460-a94a-4fcb-881d-de59908d47b3] Instance destroyed successfully.#033[00m
Dec  6 02:40:44 np0005548731 nova_compute[232433]: 2025-12-06 07:40:44.229 232437 DEBUG nova.objects.instance [None req-a30aa99e-4821-41ed-9eb7-7b5f40ca048b a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Lazy-loading 'numa_topology' on Instance uuid 2d45e460-a94a-4fcb-881d-de59908d47b3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:40:44 np0005548731 ovn_controller[133927]: 2025-12-06T07:40:44Z|00645|binding|INFO|Setting lport 35b6f475-0d58-47c2-811e-67155ce3476e ovn-installed in OVS
Dec  6 02:40:44 np0005548731 ovn_controller[133927]: 2025-12-06T07:40:44Z|00646|binding|INFO|Setting lport 35b6f475-0d58-47c2-811e-67155ce3476e up in Southbound
Dec  6 02:40:44 np0005548731 nova_compute[232433]: 2025-12-06 07:40:44.231 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:40:44 np0005548731 ovn_controller[133927]: 2025-12-06T07:40:44Z|00647|binding|INFO|Releasing lport 35b6f475-0d58-47c2-811e-67155ce3476e from this chassis (sb_readonly=1)
Dec  6 02:40:44 np0005548731 ovn_controller[133927]: 2025-12-06T07:40:44Z|00648|binding|INFO|Removing iface tap35b6f475-0d ovn-installed in OVS
Dec  6 02:40:44 np0005548731 nova_compute[232433]: 2025-12-06 07:40:44.233 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:40:44 np0005548731 ovn_controller[133927]: 2025-12-06T07:40:44Z|00649|binding|INFO|Releasing lport 35b6f475-0d58-47c2-811e-67155ce3476e from this chassis (sb_readonly=0)
Dec  6 02:40:44 np0005548731 ovn_controller[133927]: 2025-12-06T07:40:44Z|00650|binding|INFO|Setting lport 35b6f475-0d58-47c2-811e-67155ce3476e down in Southbound
Dec  6 02:40:44 np0005548731 nova_compute[232433]: 2025-12-06 07:40:44.245 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:40:44 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:40:44.248 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:de:b1:03 10.100.0.6'], port_security=['fa:16:3e:de:b1:03 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '2d45e460-a94a-4fcb-881d-de59908d47b3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3beede49-1cbb-425c-b1af-82f43dc57163', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b10aa03d68eb4d4799d53538521cc364', 'neutron:revision_number': '4', 'neutron:security_group_ids': '9f7f4f14-4f63-443a-af4a-951f8b77b0f7', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f4f51045-db64-4b9b-8a34-a3c617e616e7, chassis=[], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], logical_port=35b6f475-0d58-47c2-811e-67155ce3476e) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 02:40:44 np0005548731 podman[293134]: 2025-12-06 07:40:44.276606737 +0000 UTC m=+0.048632266 container remove 3009e143e836508bd93b64905d338b1fc01af14fe0a388e73c02c2ccef747f0c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3beede49-1cbb-425c-b1af-82f43dc57163, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec  6 02:40:44 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:40:44.283 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[32375275-6ead-46a0-8858-697baa53eef4]: (4, ('Sat Dec  6 07:40:44 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-3beede49-1cbb-425c-b1af-82f43dc57163 (3009e143e836508bd93b64905d338b1fc01af14fe0a388e73c02c2ccef747f0c)\n3009e143e836508bd93b64905d338b1fc01af14fe0a388e73c02c2ccef747f0c\nSat Dec  6 07:40:44 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-3beede49-1cbb-425c-b1af-82f43dc57163 (3009e143e836508bd93b64905d338b1fc01af14fe0a388e73c02c2ccef747f0c)\n3009e143e836508bd93b64905d338b1fc01af14fe0a388e73c02c2ccef747f0c\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:40:44 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:40:44.284 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[689dc631-5473-42f0-9357-d8a619af19d3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:40:44 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:40:44.285 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3beede49-10, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:40:44 np0005548731 nova_compute[232433]: 2025-12-06 07:40:44.287 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:40:44 np0005548731 kernel: tap3beede49-10: left promiscuous mode
Dec  6 02:40:44 np0005548731 nova_compute[232433]: 2025-12-06 07:40:44.303 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:40:44 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:40:44.305 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[73f9944c-f0a5-4a54-9051-6992ed039466]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:40:44 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:40:44.327 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[1617d34e-08d4-490a-88ed-e7663ef61f7c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:40:44 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:40:44.329 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[811c080c-a764-43f6-8a4b-52523322d3d3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:40:44 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:40:44.343 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[678441df-d7d0-409f-9d2b-dc9cf77784a2]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 709596, 'reachable_time': 31181, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 293155, 'error': None, 'target': 'ovnmeta-3beede49-1cbb-425c-b1af-82f43dc57163', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:40:44 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:40:44.345 144079 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-3beede49-1cbb-425c-b1af-82f43dc57163 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Dec  6 02:40:44 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:40:44.345 144079 DEBUG oslo.privsep.daemon [-] privsep: reply[525b7561-77e3-4ead-9786-684952a12ea2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:40:44 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:40:44.346 143965 INFO neutron.agent.ovn.metadata.agent [-] Port 35b6f475-0d58-47c2-811e-67155ce3476e in datapath 3beede49-1cbb-425c-b1af-82f43dc57163 unbound from our chassis#033[00m
Dec  6 02:40:44 np0005548731 systemd[1]: run-netns-ovnmeta\x2d3beede49\x2d1cbb\x2d425c\x2db1af\x2d82f43dc57163.mount: Deactivated successfully.
Dec  6 02:40:44 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:40:44.347 143965 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 3beede49-1cbb-425c-b1af-82f43dc57163, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec  6 02:40:44 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:40:44.348 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[7251aa00-6b1a-4d03-a8e4-f9d17ebed3dd]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:40:44 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:40:44.348 143965 INFO neutron.agent.ovn.metadata.agent [-] Port 35b6f475-0d58-47c2-811e-67155ce3476e in datapath 3beede49-1cbb-425c-b1af-82f43dc57163 unbound from our chassis#033[00m
Dec  6 02:40:44 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:40:44.349 143965 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 3beede49-1cbb-425c-b1af-82f43dc57163, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec  6 02:40:44 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:40:44.349 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[0111e9b7-8495-4c3a-b725-1d7fe57f207f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:40:44 np0005548731 nova_compute[232433]: 2025-12-06 07:40:44.390 232437 DEBUG nova.compute.manager [req-9b79a21d-8db8-46d8-9073-99f42e87cf1e req-cbf83586-ddd3-4817-9d75-a221ff087cfa 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 2d45e460-a94a-4fcb-881d-de59908d47b3] Received event network-vif-unplugged-35b6f475-0d58-47c2-811e-67155ce3476e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:40:44 np0005548731 nova_compute[232433]: 2025-12-06 07:40:44.390 232437 DEBUG oslo_concurrency.lockutils [req-9b79a21d-8db8-46d8-9073-99f42e87cf1e req-cbf83586-ddd3-4817-9d75-a221ff087cfa 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "2d45e460-a94a-4fcb-881d-de59908d47b3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:40:44 np0005548731 nova_compute[232433]: 2025-12-06 07:40:44.390 232437 DEBUG oslo_concurrency.lockutils [req-9b79a21d-8db8-46d8-9073-99f42e87cf1e req-cbf83586-ddd3-4817-9d75-a221ff087cfa 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "2d45e460-a94a-4fcb-881d-de59908d47b3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:40:44 np0005548731 nova_compute[232433]: 2025-12-06 07:40:44.391 232437 DEBUG oslo_concurrency.lockutils [req-9b79a21d-8db8-46d8-9073-99f42e87cf1e req-cbf83586-ddd3-4817-9d75-a221ff087cfa 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "2d45e460-a94a-4fcb-881d-de59908d47b3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:40:44 np0005548731 nova_compute[232433]: 2025-12-06 07:40:44.391 232437 DEBUG nova.compute.manager [req-9b79a21d-8db8-46d8-9073-99f42e87cf1e req-cbf83586-ddd3-4817-9d75-a221ff087cfa 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 2d45e460-a94a-4fcb-881d-de59908d47b3] No waiting events found dispatching network-vif-unplugged-35b6f475-0d58-47c2-811e-67155ce3476e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 02:40:44 np0005548731 nova_compute[232433]: 2025-12-06 07:40:44.391 232437 WARNING nova.compute.manager [req-9b79a21d-8db8-46d8-9073-99f42e87cf1e req-cbf83586-ddd3-4817-9d75-a221ff087cfa 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 2d45e460-a94a-4fcb-881d-de59908d47b3] Received unexpected event network-vif-unplugged-35b6f475-0d58-47c2-811e-67155ce3476e for instance with vm_state paused and task_state shelving.#033[00m
Dec  6 02:40:44 np0005548731 nova_compute[232433]: 2025-12-06 07:40:44.728 232437 INFO nova.virt.libvirt.driver [None req-a30aa99e-4821-41ed-9eb7-7b5f40ca048b a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] [instance: 2d45e460-a94a-4fcb-881d-de59908d47b3] Beginning cold snapshot process#033[00m
Dec  6 02:40:44 np0005548731 nova_compute[232433]: 2025-12-06 07:40:44.913 232437 DEBUG nova.virt.libvirt.imagebackend [None req-a30aa99e-4821-41ed-9eb7-7b5f40ca048b a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] No parent info for 6efab05d-c7cf-4770-a5c3-c806a2739063; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163#033[00m
Dec  6 02:40:45 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:40:45 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:40:45 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:40:45.020 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:40:45 np0005548731 nova_compute[232433]: 2025-12-06 07:40:45.298 232437 DEBUG nova.storage.rbd_utils [None req-a30aa99e-4821-41ed-9eb7-7b5f40ca048b a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] creating snapshot(0a532e91abfd421bafa23d2771185703) on rbd image(2d45e460-a94a-4fcb-881d-de59908d47b3_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Dec  6 02:40:46 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:40:46 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:40:46 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:40:46.203 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:40:46 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e317 e317: 3 total, 3 up, 3 in
Dec  6 02:40:46 np0005548731 nova_compute[232433]: 2025-12-06 07:40:46.606 232437 DEBUG nova.compute.manager [req-15c4f380-0631-4554-8b70-f96b9e332cf2 req-eae58935-7a31-4cc8-8f96-f8f362ee4d8a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 2d45e460-a94a-4fcb-881d-de59908d47b3] Received event network-vif-plugged-35b6f475-0d58-47c2-811e-67155ce3476e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:40:46 np0005548731 nova_compute[232433]: 2025-12-06 07:40:46.606 232437 DEBUG oslo_concurrency.lockutils [req-15c4f380-0631-4554-8b70-f96b9e332cf2 req-eae58935-7a31-4cc8-8f96-f8f362ee4d8a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "2d45e460-a94a-4fcb-881d-de59908d47b3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:40:46 np0005548731 nova_compute[232433]: 2025-12-06 07:40:46.607 232437 DEBUG oslo_concurrency.lockutils [req-15c4f380-0631-4554-8b70-f96b9e332cf2 req-eae58935-7a31-4cc8-8f96-f8f362ee4d8a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "2d45e460-a94a-4fcb-881d-de59908d47b3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:40:46 np0005548731 nova_compute[232433]: 2025-12-06 07:40:46.607 232437 DEBUG oslo_concurrency.lockutils [req-15c4f380-0631-4554-8b70-f96b9e332cf2 req-eae58935-7a31-4cc8-8f96-f8f362ee4d8a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "2d45e460-a94a-4fcb-881d-de59908d47b3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:40:46 np0005548731 nova_compute[232433]: 2025-12-06 07:40:46.607 232437 DEBUG nova.compute.manager [req-15c4f380-0631-4554-8b70-f96b9e332cf2 req-eae58935-7a31-4cc8-8f96-f8f362ee4d8a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 2d45e460-a94a-4fcb-881d-de59908d47b3] No waiting events found dispatching network-vif-plugged-35b6f475-0d58-47c2-811e-67155ce3476e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 02:40:46 np0005548731 nova_compute[232433]: 2025-12-06 07:40:46.608 232437 WARNING nova.compute.manager [req-15c4f380-0631-4554-8b70-f96b9e332cf2 req-eae58935-7a31-4cc8-8f96-f8f362ee4d8a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 2d45e460-a94a-4fcb-881d-de59908d47b3] Received unexpected event network-vif-plugged-35b6f475-0d58-47c2-811e-67155ce3476e for instance with vm_state paused and task_state shelving_image_uploading.#033[00m
Dec  6 02:40:46 np0005548731 nova_compute[232433]: 2025-12-06 07:40:46.608 232437 DEBUG nova.compute.manager [req-15c4f380-0631-4554-8b70-f96b9e332cf2 req-eae58935-7a31-4cc8-8f96-f8f362ee4d8a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 2d45e460-a94a-4fcb-881d-de59908d47b3] Received event network-vif-plugged-35b6f475-0d58-47c2-811e-67155ce3476e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:40:46 np0005548731 nova_compute[232433]: 2025-12-06 07:40:46.608 232437 DEBUG oslo_concurrency.lockutils [req-15c4f380-0631-4554-8b70-f96b9e332cf2 req-eae58935-7a31-4cc8-8f96-f8f362ee4d8a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "2d45e460-a94a-4fcb-881d-de59908d47b3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:40:46 np0005548731 nova_compute[232433]: 2025-12-06 07:40:46.608 232437 DEBUG oslo_concurrency.lockutils [req-15c4f380-0631-4554-8b70-f96b9e332cf2 req-eae58935-7a31-4cc8-8f96-f8f362ee4d8a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "2d45e460-a94a-4fcb-881d-de59908d47b3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:40:46 np0005548731 nova_compute[232433]: 2025-12-06 07:40:46.609 232437 DEBUG oslo_concurrency.lockutils [req-15c4f380-0631-4554-8b70-f96b9e332cf2 req-eae58935-7a31-4cc8-8f96-f8f362ee4d8a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "2d45e460-a94a-4fcb-881d-de59908d47b3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:40:46 np0005548731 nova_compute[232433]: 2025-12-06 07:40:46.609 232437 DEBUG nova.compute.manager [req-15c4f380-0631-4554-8b70-f96b9e332cf2 req-eae58935-7a31-4cc8-8f96-f8f362ee4d8a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 2d45e460-a94a-4fcb-881d-de59908d47b3] No waiting events found dispatching network-vif-plugged-35b6f475-0d58-47c2-811e-67155ce3476e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 02:40:46 np0005548731 nova_compute[232433]: 2025-12-06 07:40:46.609 232437 WARNING nova.compute.manager [req-15c4f380-0631-4554-8b70-f96b9e332cf2 req-eae58935-7a31-4cc8-8f96-f8f362ee4d8a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 2d45e460-a94a-4fcb-881d-de59908d47b3] Received unexpected event network-vif-plugged-35b6f475-0d58-47c2-811e-67155ce3476e for instance with vm_state paused and task_state shelving_image_uploading.#033[00m
Dec  6 02:40:46 np0005548731 nova_compute[232433]: 2025-12-06 07:40:46.609 232437 DEBUG nova.compute.manager [req-15c4f380-0631-4554-8b70-f96b9e332cf2 req-eae58935-7a31-4cc8-8f96-f8f362ee4d8a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 2d45e460-a94a-4fcb-881d-de59908d47b3] Received event network-vif-plugged-35b6f475-0d58-47c2-811e-67155ce3476e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:40:46 np0005548731 nova_compute[232433]: 2025-12-06 07:40:46.610 232437 DEBUG oslo_concurrency.lockutils [req-15c4f380-0631-4554-8b70-f96b9e332cf2 req-eae58935-7a31-4cc8-8f96-f8f362ee4d8a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "2d45e460-a94a-4fcb-881d-de59908d47b3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:40:46 np0005548731 nova_compute[232433]: 2025-12-06 07:40:46.610 232437 DEBUG oslo_concurrency.lockutils [req-15c4f380-0631-4554-8b70-f96b9e332cf2 req-eae58935-7a31-4cc8-8f96-f8f362ee4d8a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "2d45e460-a94a-4fcb-881d-de59908d47b3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:40:46 np0005548731 nova_compute[232433]: 2025-12-06 07:40:46.610 232437 DEBUG oslo_concurrency.lockutils [req-15c4f380-0631-4554-8b70-f96b9e332cf2 req-eae58935-7a31-4cc8-8f96-f8f362ee4d8a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "2d45e460-a94a-4fcb-881d-de59908d47b3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:40:46 np0005548731 nova_compute[232433]: 2025-12-06 07:40:46.610 232437 DEBUG nova.compute.manager [req-15c4f380-0631-4554-8b70-f96b9e332cf2 req-eae58935-7a31-4cc8-8f96-f8f362ee4d8a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 2d45e460-a94a-4fcb-881d-de59908d47b3] No waiting events found dispatching network-vif-plugged-35b6f475-0d58-47c2-811e-67155ce3476e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 02:40:46 np0005548731 nova_compute[232433]: 2025-12-06 07:40:46.611 232437 WARNING nova.compute.manager [req-15c4f380-0631-4554-8b70-f96b9e332cf2 req-eae58935-7a31-4cc8-8f96-f8f362ee4d8a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 2d45e460-a94a-4fcb-881d-de59908d47b3] Received unexpected event network-vif-plugged-35b6f475-0d58-47c2-811e-67155ce3476e for instance with vm_state paused and task_state shelving_image_uploading.#033[00m
Dec  6 02:40:46 np0005548731 nova_compute[232433]: 2025-12-06 07:40:46.611 232437 DEBUG nova.compute.manager [req-15c4f380-0631-4554-8b70-f96b9e332cf2 req-eae58935-7a31-4cc8-8f96-f8f362ee4d8a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 2d45e460-a94a-4fcb-881d-de59908d47b3] Received event network-vif-unplugged-35b6f475-0d58-47c2-811e-67155ce3476e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:40:46 np0005548731 nova_compute[232433]: 2025-12-06 07:40:46.611 232437 DEBUG oslo_concurrency.lockutils [req-15c4f380-0631-4554-8b70-f96b9e332cf2 req-eae58935-7a31-4cc8-8f96-f8f362ee4d8a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "2d45e460-a94a-4fcb-881d-de59908d47b3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:40:46 np0005548731 nova_compute[232433]: 2025-12-06 07:40:46.611 232437 DEBUG oslo_concurrency.lockutils [req-15c4f380-0631-4554-8b70-f96b9e332cf2 req-eae58935-7a31-4cc8-8f96-f8f362ee4d8a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "2d45e460-a94a-4fcb-881d-de59908d47b3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:40:46 np0005548731 nova_compute[232433]: 2025-12-06 07:40:46.612 232437 DEBUG oslo_concurrency.lockutils [req-15c4f380-0631-4554-8b70-f96b9e332cf2 req-eae58935-7a31-4cc8-8f96-f8f362ee4d8a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "2d45e460-a94a-4fcb-881d-de59908d47b3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:40:46 np0005548731 nova_compute[232433]: 2025-12-06 07:40:46.612 232437 DEBUG nova.compute.manager [req-15c4f380-0631-4554-8b70-f96b9e332cf2 req-eae58935-7a31-4cc8-8f96-f8f362ee4d8a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 2d45e460-a94a-4fcb-881d-de59908d47b3] No waiting events found dispatching network-vif-unplugged-35b6f475-0d58-47c2-811e-67155ce3476e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 02:40:46 np0005548731 nova_compute[232433]: 2025-12-06 07:40:46.612 232437 WARNING nova.compute.manager [req-15c4f380-0631-4554-8b70-f96b9e332cf2 req-eae58935-7a31-4cc8-8f96-f8f362ee4d8a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 2d45e460-a94a-4fcb-881d-de59908d47b3] Received unexpected event network-vif-unplugged-35b6f475-0d58-47c2-811e-67155ce3476e for instance with vm_state paused and task_state shelving_image_uploading.#033[00m
Dec  6 02:40:46 np0005548731 nova_compute[232433]: 2025-12-06 07:40:46.612 232437 DEBUG nova.compute.manager [req-15c4f380-0631-4554-8b70-f96b9e332cf2 req-eae58935-7a31-4cc8-8f96-f8f362ee4d8a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 2d45e460-a94a-4fcb-881d-de59908d47b3] Received event network-vif-plugged-35b6f475-0d58-47c2-811e-67155ce3476e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:40:46 np0005548731 nova_compute[232433]: 2025-12-06 07:40:46.613 232437 DEBUG oslo_concurrency.lockutils [req-15c4f380-0631-4554-8b70-f96b9e332cf2 req-eae58935-7a31-4cc8-8f96-f8f362ee4d8a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "2d45e460-a94a-4fcb-881d-de59908d47b3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:40:46 np0005548731 nova_compute[232433]: 2025-12-06 07:40:46.613 232437 DEBUG oslo_concurrency.lockutils [req-15c4f380-0631-4554-8b70-f96b9e332cf2 req-eae58935-7a31-4cc8-8f96-f8f362ee4d8a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "2d45e460-a94a-4fcb-881d-de59908d47b3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:40:46 np0005548731 nova_compute[232433]: 2025-12-06 07:40:46.613 232437 DEBUG oslo_concurrency.lockutils [req-15c4f380-0631-4554-8b70-f96b9e332cf2 req-eae58935-7a31-4cc8-8f96-f8f362ee4d8a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "2d45e460-a94a-4fcb-881d-de59908d47b3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:40:46 np0005548731 nova_compute[232433]: 2025-12-06 07:40:46.613 232437 DEBUG nova.compute.manager [req-15c4f380-0631-4554-8b70-f96b9e332cf2 req-eae58935-7a31-4cc8-8f96-f8f362ee4d8a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 2d45e460-a94a-4fcb-881d-de59908d47b3] No waiting events found dispatching network-vif-plugged-35b6f475-0d58-47c2-811e-67155ce3476e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 02:40:46 np0005548731 nova_compute[232433]: 2025-12-06 07:40:46.614 232437 WARNING nova.compute.manager [req-15c4f380-0631-4554-8b70-f96b9e332cf2 req-eae58935-7a31-4cc8-8f96-f8f362ee4d8a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 2d45e460-a94a-4fcb-881d-de59908d47b3] Received unexpected event network-vif-plugged-35b6f475-0d58-47c2-811e-67155ce3476e for instance with vm_state paused and task_state shelving_image_uploading.#033[00m
Dec  6 02:40:46 np0005548731 nova_compute[232433]: 2025-12-06 07:40:46.905 232437 DEBUG nova.storage.rbd_utils [None req-a30aa99e-4821-41ed-9eb7-7b5f40ca048b a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] cloning vms/2d45e460-a94a-4fcb-881d-de59908d47b3_disk@0a532e91abfd421bafa23d2771185703 to images/4df1dc2e-92f6-4200-8389-10650a86e843 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Dec  6 02:40:47 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:40:47 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:40:47 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:40:47.022 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:40:47 np0005548731 nova_compute[232433]: 2025-12-06 07:40:47.240 232437 DEBUG nova.storage.rbd_utils [None req-a30aa99e-4821-41ed-9eb7-7b5f40ca048b a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] flattening images/4df1dc2e-92f6-4200-8389-10650a86e843 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m
Dec  6 02:40:47 np0005548731 nova_compute[232433]: 2025-12-06 07:40:47.489 232437 DEBUG nova.storage.rbd_utils [None req-a30aa99e-4821-41ed-9eb7-7b5f40ca048b a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] removing snapshot(0a532e91abfd421bafa23d2771185703) on rbd image(2d45e460-a94a-4fcb-881d-de59908d47b3_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Dec  6 02:40:47 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  6 02:40:47 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1497706244' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  6 02:40:47 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  6 02:40:47 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1497706244' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  6 02:40:48 np0005548731 nova_compute[232433]: 2025-12-06 07:40:48.180 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:40:48 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:40:48 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:40:48 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:40:48.205 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:40:48 np0005548731 nova_compute[232433]: 2025-12-06 07:40:48.268 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:40:48 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e317 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:40:48 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e318 e318: 3 total, 3 up, 3 in
Dec  6 02:40:48 np0005548731 nova_compute[232433]: 2025-12-06 07:40:48.531 232437 DEBUG nova.storage.rbd_utils [None req-a30aa99e-4821-41ed-9eb7-7b5f40ca048b a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] creating snapshot(snap) on rbd image(4df1dc2e-92f6-4200-8389-10650a86e843) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Dec  6 02:40:49 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:40:49 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:40:49 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:40:49.024 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:40:49 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e319 e319: 3 total, 3 up, 3 in
Dec  6 02:40:50 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:40:50 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:40:50 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:40:50.209 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:40:51 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:40:51 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:40:51 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:40:51.027 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:40:51 np0005548731 nova_compute[232433]: 2025-12-06 07:40:51.951 232437 INFO nova.virt.libvirt.driver [None req-a30aa99e-4821-41ed-9eb7-7b5f40ca048b a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] [instance: 2d45e460-a94a-4fcb-881d-de59908d47b3] Snapshot image upload complete#033[00m
Dec  6 02:40:51 np0005548731 nova_compute[232433]: 2025-12-06 07:40:51.952 232437 DEBUG nova.compute.manager [None req-a30aa99e-4821-41ed-9eb7-7b5f40ca048b a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] [instance: 2d45e460-a94a-4fcb-881d-de59908d47b3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:40:52 np0005548731 nova_compute[232433]: 2025-12-06 07:40:52.198 232437 INFO nova.compute.manager [None req-a30aa99e-4821-41ed-9eb7-7b5f40ca048b a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] [instance: 2d45e460-a94a-4fcb-881d-de59908d47b3] Shelve offloading#033[00m
Dec  6 02:40:52 np0005548731 nova_compute[232433]: 2025-12-06 07:40:52.205 232437 INFO nova.virt.libvirt.driver [-] [instance: 2d45e460-a94a-4fcb-881d-de59908d47b3] Instance destroyed successfully.#033[00m
Dec  6 02:40:52 np0005548731 nova_compute[232433]: 2025-12-06 07:40:52.205 232437 DEBUG nova.compute.manager [None req-a30aa99e-4821-41ed-9eb7-7b5f40ca048b a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] [instance: 2d45e460-a94a-4fcb-881d-de59908d47b3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:40:52 np0005548731 nova_compute[232433]: 2025-12-06 07:40:52.208 232437 DEBUG oslo_concurrency.lockutils [None req-a30aa99e-4821-41ed-9eb7-7b5f40ca048b a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Acquiring lock "refresh_cache-2d45e460-a94a-4fcb-881d-de59908d47b3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 02:40:52 np0005548731 nova_compute[232433]: 2025-12-06 07:40:52.208 232437 DEBUG oslo_concurrency.lockutils [None req-a30aa99e-4821-41ed-9eb7-7b5f40ca048b a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Acquired lock "refresh_cache-2d45e460-a94a-4fcb-881d-de59908d47b3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 02:40:52 np0005548731 nova_compute[232433]: 2025-12-06 07:40:52.208 232437 DEBUG nova.network.neutron [None req-a30aa99e-4821-41ed-9eb7-7b5f40ca048b a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] [instance: 2d45e460-a94a-4fcb-881d-de59908d47b3] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec  6 02:40:52 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:40:52 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:40:52 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:40:52.211 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:40:53 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:40:53 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:40:53 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:40:53.029 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:40:53 np0005548731 nova_compute[232433]: 2025-12-06 07:40:53.182 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:40:53 np0005548731 nova_compute[232433]: 2025-12-06 07:40:53.236 232437 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765006838.2356544, 6a50a40c-3b05-4c0e-aa67-1489e203824e => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 02:40:53 np0005548731 nova_compute[232433]: 2025-12-06 07:40:53.237 232437 INFO nova.compute.manager [-] [instance: 6a50a40c-3b05-4c0e-aa67-1489e203824e] VM Stopped (Lifecycle Event)#033[00m
Dec  6 02:40:53 np0005548731 nova_compute[232433]: 2025-12-06 07:40:53.270 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:40:53 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e319 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:40:53 np0005548731 nova_compute[232433]: 2025-12-06 07:40:53.607 232437 DEBUG nova.compute.manager [None req-12eb60e6-3a8b-4751-b3eb-de4033eac760 - - - - - -] [instance: 6a50a40c-3b05-4c0e-aa67-1489e203824e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:40:53 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  6 02:40:53 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 02:40:53 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  6 02:40:54 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:40:54 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:40:54 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:40:54.213 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:40:54 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e320 e320: 3 total, 3 up, 3 in
Dec  6 02:40:54 np0005548731 nova_compute[232433]: 2025-12-06 07:40:54.981 232437 DEBUG nova.network.neutron [None req-a30aa99e-4821-41ed-9eb7-7b5f40ca048b a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] [instance: 2d45e460-a94a-4fcb-881d-de59908d47b3] Updating instance_info_cache with network_info: [{"id": "35b6f475-0d58-47c2-811e-67155ce3476e", "address": "fa:16:3e:de:b1:03", "network": {"id": "3beede49-1cbb-425c-b1af-82f43dc57163", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-619240463-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b10aa03d68eb4d4799d53538521cc364", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap35b6f475-0d", "ovs_interfaceid": "35b6f475-0d58-47c2-811e-67155ce3476e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 02:40:55 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:40:55 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:40:55 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:40:55.031 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:40:55 np0005548731 nova_compute[232433]: 2025-12-06 07:40:55.085 232437 DEBUG oslo_concurrency.lockutils [None req-a30aa99e-4821-41ed-9eb7-7b5f40ca048b a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Releasing lock "refresh_cache-2d45e460-a94a-4fcb-881d-de59908d47b3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 02:40:56 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:40:56 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:40:56 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:40:56.215 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:40:57 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:40:57 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 02:40:57 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:40:57.034 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 02:40:58 np0005548731 nova_compute[232433]: 2025-12-06 07:40:58.181 232437 INFO nova.virt.libvirt.driver [-] [instance: 2d45e460-a94a-4fcb-881d-de59908d47b3] Instance destroyed successfully.#033[00m
Dec  6 02:40:58 np0005548731 nova_compute[232433]: 2025-12-06 07:40:58.181 232437 DEBUG nova.objects.instance [None req-a30aa99e-4821-41ed-9eb7-7b5f40ca048b a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Lazy-loading 'resources' on Instance uuid 2d45e460-a94a-4fcb-881d-de59908d47b3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:40:58 np0005548731 nova_compute[232433]: 2025-12-06 07:40:58.183 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:40:58 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:40:58 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:40:58 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:40:58.217 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:40:58 np0005548731 nova_compute[232433]: 2025-12-06 07:40:58.271 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:40:58 np0005548731 nova_compute[232433]: 2025-12-06 07:40:58.396 232437 DEBUG nova.compute.manager [req-ba926822-7036-49cb-8b01-b6d19229d986 req-510cf767-7c5e-45b5-9d16-04275836b938 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 2d45e460-a94a-4fcb-881d-de59908d47b3] Received event network-changed-35b6f475-0d58-47c2-811e-67155ce3476e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:40:58 np0005548731 nova_compute[232433]: 2025-12-06 07:40:58.396 232437 DEBUG nova.compute.manager [req-ba926822-7036-49cb-8b01-b6d19229d986 req-510cf767-7c5e-45b5-9d16-04275836b938 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 2d45e460-a94a-4fcb-881d-de59908d47b3] Refreshing instance network info cache due to event network-changed-35b6f475-0d58-47c2-811e-67155ce3476e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  6 02:40:58 np0005548731 nova_compute[232433]: 2025-12-06 07:40:58.397 232437 DEBUG oslo_concurrency.lockutils [req-ba926822-7036-49cb-8b01-b6d19229d986 req-510cf767-7c5e-45b5-9d16-04275836b938 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-2d45e460-a94a-4fcb-881d-de59908d47b3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 02:40:58 np0005548731 nova_compute[232433]: 2025-12-06 07:40:58.397 232437 DEBUG oslo_concurrency.lockutils [req-ba926822-7036-49cb-8b01-b6d19229d986 req-510cf767-7c5e-45b5-9d16-04275836b938 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-2d45e460-a94a-4fcb-881d-de59908d47b3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 02:40:58 np0005548731 nova_compute[232433]: 2025-12-06 07:40:58.397 232437 DEBUG nova.network.neutron [req-ba926822-7036-49cb-8b01-b6d19229d986 req-510cf767-7c5e-45b5-9d16-04275836b938 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 2d45e460-a94a-4fcb-881d-de59908d47b3] Refreshing network info cache for port 35b6f475-0d58-47c2-811e-67155ce3476e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  6 02:40:58 np0005548731 nova_compute[232433]: 2025-12-06 07:40:58.411 232437 DEBUG nova.virt.libvirt.vif [None req-a30aa99e-4821-41ed-9eb7-7b5f40ca048b a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T07:40:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-1918499189',display_name='tempest-ServerActionsTestOtherB-server-1918499189',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-1918499189',id=137,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-06T07:40:36Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='b10aa03d68eb4d4799d53538521cc364',ramdisk_id='',reservation_id='r-a6grlxca',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio'
,image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherB-874907570',owner_user_name='tempest-ServerActionsTestOtherB-874907570-project-member',shelved_at='2025-12-06T07:40:51.952347',shelved_host='compute-2.ctlplane.example.com',shelved_image_id='4df1dc2e-92f6-4200-8389-10650a86e843'},tags=<?>,task_state='shelving_offloading',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T07:40:44Z,user_data=None,user_id='a70f6c3c5e2c402bb6fa0e0507e9b6dc',uuid=2d45e460-a94a-4fcb-881d-de59908d47b3,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='shelved') vif={"id": "35b6f475-0d58-47c2-811e-67155ce3476e", "address": "fa:16:3e:de:b1:03", "network": {"id": "3beede49-1cbb-425c-b1af-82f43dc57163", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-619240463-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b10aa03d68eb4d4799d53538521cc364", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap35b6f475-0d", "ovs_interfaceid": "35b6f475-0d58-47c2-811e-67155ce3476e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec  6 02:40:58 np0005548731 nova_compute[232433]: 2025-12-06 07:40:58.411 232437 DEBUG nova.network.os_vif_util [None req-a30aa99e-4821-41ed-9eb7-7b5f40ca048b a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Converting VIF {"id": "35b6f475-0d58-47c2-811e-67155ce3476e", "address": "fa:16:3e:de:b1:03", "network": {"id": "3beede49-1cbb-425c-b1af-82f43dc57163", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-619240463-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b10aa03d68eb4d4799d53538521cc364", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap35b6f475-0d", "ovs_interfaceid": "35b6f475-0d58-47c2-811e-67155ce3476e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 02:40:58 np0005548731 nova_compute[232433]: 2025-12-06 07:40:58.412 232437 DEBUG nova.network.os_vif_util [None req-a30aa99e-4821-41ed-9eb7-7b5f40ca048b a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:de:b1:03,bridge_name='br-int',has_traffic_filtering=True,id=35b6f475-0d58-47c2-811e-67155ce3476e,network=Network(3beede49-1cbb-425c-b1af-82f43dc57163),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap35b6f475-0d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 02:40:58 np0005548731 nova_compute[232433]: 2025-12-06 07:40:58.412 232437 DEBUG os_vif [None req-a30aa99e-4821-41ed-9eb7-7b5f40ca048b a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:de:b1:03,bridge_name='br-int',has_traffic_filtering=True,id=35b6f475-0d58-47c2-811e-67155ce3476e,network=Network(3beede49-1cbb-425c-b1af-82f43dc57163),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap35b6f475-0d') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec  6 02:40:58 np0005548731 nova_compute[232433]: 2025-12-06 07:40:58.414 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:40:58 np0005548731 nova_compute[232433]: 2025-12-06 07:40:58.414 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap35b6f475-0d, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:40:58 np0005548731 nova_compute[232433]: 2025-12-06 07:40:58.415 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:40:58 np0005548731 nova_compute[232433]: 2025-12-06 07:40:58.417 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:40:58 np0005548731 nova_compute[232433]: 2025-12-06 07:40:58.420 232437 INFO os_vif [None req-a30aa99e-4821-41ed-9eb7-7b5f40ca048b a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:de:b1:03,bridge_name='br-int',has_traffic_filtering=True,id=35b6f475-0d58-47c2-811e-67155ce3476e,network=Network(3beede49-1cbb-425c-b1af-82f43dc57163),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap35b6f475-0d')#033[00m
Dec  6 02:40:58 np0005548731 nova_compute[232433]: 2025-12-06 07:40:58.436 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:40:58 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e320 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:40:58 np0005548731 nova_compute[232433]: 2025-12-06 07:40:58.667 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:40:58 np0005548731 nova_compute[232433]: 2025-12-06 07:40:58.879 232437 INFO nova.virt.libvirt.driver [None req-a30aa99e-4821-41ed-9eb7-7b5f40ca048b a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] [instance: 2d45e460-a94a-4fcb-881d-de59908d47b3] Deleting instance files /var/lib/nova/instances/2d45e460-a94a-4fcb-881d-de59908d47b3_del#033[00m
Dec  6 02:40:58 np0005548731 nova_compute[232433]: 2025-12-06 07:40:58.880 232437 INFO nova.virt.libvirt.driver [None req-a30aa99e-4821-41ed-9eb7-7b5f40ca048b a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] [instance: 2d45e460-a94a-4fcb-881d-de59908d47b3] Deletion of /var/lib/nova/instances/2d45e460-a94a-4fcb-881d-de59908d47b3_del complete#033[00m
Dec  6 02:40:59 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:40:59 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:40:59 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:40:59.037 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:40:59 np0005548731 nova_compute[232433]: 2025-12-06 07:40:59.188 232437 INFO nova.scheduler.client.report [None req-a30aa99e-4821-41ed-9eb7-7b5f40ca048b a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Deleted allocations for instance 2d45e460-a94a-4fcb-881d-de59908d47b3#033[00m
Dec  6 02:40:59 np0005548731 nova_compute[232433]: 2025-12-06 07:40:59.228 232437 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765006844.226629, 2d45e460-a94a-4fcb-881d-de59908d47b3 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 02:40:59 np0005548731 nova_compute[232433]: 2025-12-06 07:40:59.228 232437 INFO nova.compute.manager [-] [instance: 2d45e460-a94a-4fcb-881d-de59908d47b3] VM Stopped (Lifecycle Event)#033[00m
Dec  6 02:40:59 np0005548731 nova_compute[232433]: 2025-12-06 07:40:59.256 232437 DEBUG oslo_concurrency.lockutils [None req-a30aa99e-4821-41ed-9eb7-7b5f40ca048b a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:40:59 np0005548731 nova_compute[232433]: 2025-12-06 07:40:59.256 232437 DEBUG oslo_concurrency.lockutils [None req-a30aa99e-4821-41ed-9eb7-7b5f40ca048b a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:40:59 np0005548731 nova_compute[232433]: 2025-12-06 07:40:59.259 232437 DEBUG nova.compute.manager [None req-1612ce45-d3e7-45ea-99a3-c3ad8ee52300 - - - - - -] [instance: 2d45e460-a94a-4fcb-881d-de59908d47b3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:40:59 np0005548731 nova_compute[232433]: 2025-12-06 07:40:59.296 232437 DEBUG oslo_concurrency.processutils [None req-a30aa99e-4821-41ed-9eb7-7b5f40ca048b a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:40:59 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 02:40:59 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1098147259' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 02:40:59 np0005548731 nova_compute[232433]: 2025-12-06 07:40:59.716 232437 DEBUG oslo_concurrency.processutils [None req-a30aa99e-4821-41ed-9eb7-7b5f40ca048b a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.421s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:40:59 np0005548731 nova_compute[232433]: 2025-12-06 07:40:59.722 232437 DEBUG nova.compute.provider_tree [None req-a30aa99e-4821-41ed-9eb7-7b5f40ca048b a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Inventory has not changed in ProviderTree for provider: 6d00757a-082f-486d-ae84-869a2ba2e6e7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  6 02:40:59 np0005548731 nova_compute[232433]: 2025-12-06 07:40:59.750 232437 DEBUG nova.scheduler.client.report [None req-a30aa99e-4821-41ed-9eb7-7b5f40ca048b a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Inventory has not changed for provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  6 02:40:59 np0005548731 nova_compute[232433]: 2025-12-06 07:40:59.774 232437 DEBUG oslo_concurrency.lockutils [None req-a30aa99e-4821-41ed-9eb7-7b5f40ca048b a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.518s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:40:59 np0005548731 nova_compute[232433]: 2025-12-06 07:40:59.840 232437 DEBUG oslo_concurrency.lockutils [None req-a30aa99e-4821-41ed-9eb7-7b5f40ca048b a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Lock "2d45e460-a94a-4fcb-881d-de59908d47b3" "released" by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" :: held 16.075s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:40:59 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 02:40:59 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 02:41:00 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:41:00 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:41:00 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:41:00.220 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:41:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:41:00.881 143965 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:41:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:41:00.881 143965 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:41:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:41:00.882 143965 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:41:01 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:41:01 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 02:41:01 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:41:01.040 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 02:41:01 np0005548731 nova_compute[232433]: 2025-12-06 07:41:01.961 232437 DEBUG nova.network.neutron [req-ba926822-7036-49cb-8b01-b6d19229d986 req-510cf767-7c5e-45b5-9d16-04275836b938 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 2d45e460-a94a-4fcb-881d-de59908d47b3] Updated VIF entry in instance network info cache for port 35b6f475-0d58-47c2-811e-67155ce3476e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  6 02:41:01 np0005548731 nova_compute[232433]: 2025-12-06 07:41:01.962 232437 DEBUG nova.network.neutron [req-ba926822-7036-49cb-8b01-b6d19229d986 req-510cf767-7c5e-45b5-9d16-04275836b938 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 2d45e460-a94a-4fcb-881d-de59908d47b3] Updating instance_info_cache with network_info: [{"id": "35b6f475-0d58-47c2-811e-67155ce3476e", "address": "fa:16:3e:de:b1:03", "network": {"id": "3beede49-1cbb-425c-b1af-82f43dc57163", "bridge": null, "label": "tempest-ServerActionsTestOtherB-619240463-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b10aa03d68eb4d4799d53538521cc364", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "unbound", "details": {}, "devname": "tap35b6f475-0d", "ovs_interfaceid": null, "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 02:41:02 np0005548731 nova_compute[232433]: 2025-12-06 07:41:02.142 232437 DEBUG oslo_concurrency.lockutils [req-ba926822-7036-49cb-8b01-b6d19229d986 req-510cf767-7c5e-45b5-9d16-04275836b938 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-2d45e460-a94a-4fcb-881d-de59908d47b3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 02:41:02 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:41:02 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:41:02 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:41:02.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:41:03 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:41:03 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:41:03 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:41:03.043 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:41:03 np0005548731 nova_compute[232433]: 2025-12-06 07:41:03.187 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:41:03 np0005548731 nova_compute[232433]: 2025-12-06 07:41:03.415 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:41:03 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e320 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:41:03 np0005548731 podman[293578]: 2025-12-06 07:41:03.892511645 +0000 UTC m=+0.053678549 container health_status 26c2ce9bdb83c6db5ccad7f5b2a7cb4e9354ab0235ad2f942ea42c8feda2142b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, managed_by=edpm_ansible, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  6 02:41:03 np0005548731 podman[293580]: 2025-12-06 07:41:03.901395131 +0000 UTC m=+0.059353267 container health_status ae4fd08cdec31d6ae2a6b91ffff7deb6ca5a8375cb52ee4b685cc6f25796824e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec  6 02:41:03 np0005548731 podman[293579]: 2025-12-06 07:41:03.920348803 +0000 UTC m=+0.081533137 container health_status a118c3becf56cfbcae208065771ebf10a65162bb7b96389971293e534823fb78 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251125, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, container_name=ovn_controller)
Dec  6 02:41:04 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:41:04 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:41:04 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:41:04.224 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:41:05 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:41:05 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:41:05 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:41:05.045 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:41:06 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:41:06 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:41:06 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:41:06.226 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:41:07 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:41:07 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:41:07 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:41:07.047 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:41:08 np0005548731 nova_compute[232433]: 2025-12-06 07:41:08.187 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:41:08 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:41:08 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:41:08 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:41:08.228 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:41:08 np0005548731 nova_compute[232433]: 2025-12-06 07:41:08.417 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:41:08 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e320 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:41:08 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  6 02:41:08 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3528680710' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  6 02:41:08 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  6 02:41:08 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3528680710' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  6 02:41:09 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:41:09 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:41:09 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:41:09.049 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:41:10 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:41:10 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:41:10 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:41:10.230 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:41:10 np0005548731 nova_compute[232433]: 2025-12-06 07:41:10.941 232437 DEBUG oslo_concurrency.lockutils [None req-67d35fbc-d782-4ad0-a11d-df78aada2fbd 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Acquiring lock "f55fc2f8-9116-4e95-926a-c40ebfaf68d1" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:41:10 np0005548731 nova_compute[232433]: 2025-12-06 07:41:10.941 232437 DEBUG oslo_concurrency.lockutils [None req-67d35fbc-d782-4ad0-a11d-df78aada2fbd 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Lock "f55fc2f8-9116-4e95-926a-c40ebfaf68d1" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:41:10 np0005548731 nova_compute[232433]: 2025-12-06 07:41:10.992 232437 DEBUG nova.compute.manager [None req-67d35fbc-d782-4ad0-a11d-df78aada2fbd 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] [instance: f55fc2f8-9116-4e95-926a-c40ebfaf68d1] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Dec  6 02:41:11 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:41:11 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:41:11 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:41:11.052 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:41:11 np0005548731 nova_compute[232433]: 2025-12-06 07:41:11.074 232437 DEBUG oslo_concurrency.lockutils [None req-67d35fbc-d782-4ad0-a11d-df78aada2fbd 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:41:11 np0005548731 nova_compute[232433]: 2025-12-06 07:41:11.074 232437 DEBUG oslo_concurrency.lockutils [None req-67d35fbc-d782-4ad0-a11d-df78aada2fbd 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:41:11 np0005548731 nova_compute[232433]: 2025-12-06 07:41:11.081 232437 DEBUG nova.virt.hardware [None req-67d35fbc-d782-4ad0-a11d-df78aada2fbd 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Dec  6 02:41:11 np0005548731 nova_compute[232433]: 2025-12-06 07:41:11.081 232437 INFO nova.compute.claims [None req-67d35fbc-d782-4ad0-a11d-df78aada2fbd 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] [instance: f55fc2f8-9116-4e95-926a-c40ebfaf68d1] Claim successful on node compute-2.ctlplane.example.com#033[00m
Dec  6 02:41:11 np0005548731 nova_compute[232433]: 2025-12-06 07:41:11.200 232437 DEBUG oslo_concurrency.processutils [None req-67d35fbc-d782-4ad0-a11d-df78aada2fbd 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:41:11 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 02:41:11 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3426479770' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 02:41:11 np0005548731 nova_compute[232433]: 2025-12-06 07:41:11.615 232437 DEBUG oslo_concurrency.processutils [None req-67d35fbc-d782-4ad0-a11d-df78aada2fbd 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.415s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:41:11 np0005548731 nova_compute[232433]: 2025-12-06 07:41:11.623 232437 DEBUG nova.compute.provider_tree [None req-67d35fbc-d782-4ad0-a11d-df78aada2fbd 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Inventory has not changed in ProviderTree for provider: 6d00757a-082f-486d-ae84-869a2ba2e6e7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  6 02:41:11 np0005548731 nova_compute[232433]: 2025-12-06 07:41:11.648 232437 DEBUG nova.scheduler.client.report [None req-67d35fbc-d782-4ad0-a11d-df78aada2fbd 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Inventory has not changed for provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  6 02:41:11 np0005548731 nova_compute[232433]: 2025-12-06 07:41:11.681 232437 DEBUG oslo_concurrency.lockutils [None req-67d35fbc-d782-4ad0-a11d-df78aada2fbd 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.606s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:41:11 np0005548731 nova_compute[232433]: 2025-12-06 07:41:11.682 232437 DEBUG nova.compute.manager [None req-67d35fbc-d782-4ad0-a11d-df78aada2fbd 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] [instance: f55fc2f8-9116-4e95-926a-c40ebfaf68d1] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Dec  6 02:41:11 np0005548731 nova_compute[232433]: 2025-12-06 07:41:11.727 232437 DEBUG nova.compute.manager [None req-67d35fbc-d782-4ad0-a11d-df78aada2fbd 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] [instance: f55fc2f8-9116-4e95-926a-c40ebfaf68d1] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Dec  6 02:41:11 np0005548731 nova_compute[232433]: 2025-12-06 07:41:11.728 232437 DEBUG nova.network.neutron [None req-67d35fbc-d782-4ad0-a11d-df78aada2fbd 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] [instance: f55fc2f8-9116-4e95-926a-c40ebfaf68d1] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Dec  6 02:41:11 np0005548731 nova_compute[232433]: 2025-12-06 07:41:11.746 232437 INFO nova.virt.libvirt.driver [None req-67d35fbc-d782-4ad0-a11d-df78aada2fbd 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] [instance: f55fc2f8-9116-4e95-926a-c40ebfaf68d1] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Dec  6 02:41:11 np0005548731 nova_compute[232433]: 2025-12-06 07:41:11.763 232437 DEBUG nova.compute.manager [None req-67d35fbc-d782-4ad0-a11d-df78aada2fbd 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] [instance: f55fc2f8-9116-4e95-926a-c40ebfaf68d1] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Dec  6 02:41:11 np0005548731 nova_compute[232433]: 2025-12-06 07:41:11.945 232437 DEBUG nova.policy [None req-67d35fbc-d782-4ad0-a11d-df78aada2fbd 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '0d8b62a3276f4a8b8349af67b82134c8', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'eff1f6a1654b45079de20eddb830e76d', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Dec  6 02:41:12 np0005548731 nova_compute[232433]: 2025-12-06 07:41:12.199 232437 DEBUG nova.compute.manager [None req-67d35fbc-d782-4ad0-a11d-df78aada2fbd 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] [instance: f55fc2f8-9116-4e95-926a-c40ebfaf68d1] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Dec  6 02:41:12 np0005548731 nova_compute[232433]: 2025-12-06 07:41:12.200 232437 DEBUG nova.virt.libvirt.driver [None req-67d35fbc-d782-4ad0-a11d-df78aada2fbd 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] [instance: f55fc2f8-9116-4e95-926a-c40ebfaf68d1] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Dec  6 02:41:12 np0005548731 nova_compute[232433]: 2025-12-06 07:41:12.201 232437 INFO nova.virt.libvirt.driver [None req-67d35fbc-d782-4ad0-a11d-df78aada2fbd 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] [instance: f55fc2f8-9116-4e95-926a-c40ebfaf68d1] Creating image(s)#033[00m
Dec  6 02:41:12 np0005548731 nova_compute[232433]: 2025-12-06 07:41:12.227 232437 DEBUG nova.storage.rbd_utils [None req-67d35fbc-d782-4ad0-a11d-df78aada2fbd 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] rbd image f55fc2f8-9116-4e95-926a-c40ebfaf68d1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:41:12 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:41:12 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:41:12 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:41:12.232 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:41:12 np0005548731 nova_compute[232433]: 2025-12-06 07:41:12.257 232437 DEBUG nova.storage.rbd_utils [None req-67d35fbc-d782-4ad0-a11d-df78aada2fbd 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] rbd image f55fc2f8-9116-4e95-926a-c40ebfaf68d1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:41:12 np0005548731 nova_compute[232433]: 2025-12-06 07:41:12.283 232437 DEBUG nova.storage.rbd_utils [None req-67d35fbc-d782-4ad0-a11d-df78aada2fbd 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] rbd image f55fc2f8-9116-4e95-926a-c40ebfaf68d1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:41:12 np0005548731 nova_compute[232433]: 2025-12-06 07:41:12.287 232437 DEBUG oslo_concurrency.processutils [None req-67d35fbc-d782-4ad0-a11d-df78aada2fbd 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:41:12 np0005548731 nova_compute[232433]: 2025-12-06 07:41:12.352 232437 DEBUG oslo_concurrency.processutils [None req-67d35fbc-d782-4ad0-a11d-df78aada2fbd 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:41:12 np0005548731 nova_compute[232433]: 2025-12-06 07:41:12.353 232437 DEBUG oslo_concurrency.lockutils [None req-67d35fbc-d782-4ad0-a11d-df78aada2fbd 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Acquiring lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:41:12 np0005548731 nova_compute[232433]: 2025-12-06 07:41:12.354 232437 DEBUG oslo_concurrency.lockutils [None req-67d35fbc-d782-4ad0-a11d-df78aada2fbd 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:41:12 np0005548731 nova_compute[232433]: 2025-12-06 07:41:12.354 232437 DEBUG oslo_concurrency.lockutils [None req-67d35fbc-d782-4ad0-a11d-df78aada2fbd 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:41:12 np0005548731 nova_compute[232433]: 2025-12-06 07:41:12.380 232437 DEBUG nova.storage.rbd_utils [None req-67d35fbc-d782-4ad0-a11d-df78aada2fbd 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] rbd image f55fc2f8-9116-4e95-926a-c40ebfaf68d1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:41:12 np0005548731 nova_compute[232433]: 2025-12-06 07:41:12.384 232437 DEBUG oslo_concurrency.processutils [None req-67d35fbc-d782-4ad0-a11d-df78aada2fbd 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef f55fc2f8-9116-4e95-926a-c40ebfaf68d1_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:41:12 np0005548731 nova_compute[232433]: 2025-12-06 07:41:12.740 232437 DEBUG oslo_concurrency.processutils [None req-67d35fbc-d782-4ad0-a11d-df78aada2fbd 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef f55fc2f8-9116-4e95-926a-c40ebfaf68d1_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.356s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:41:12 np0005548731 nova_compute[232433]: 2025-12-06 07:41:12.813 232437 DEBUG nova.storage.rbd_utils [None req-67d35fbc-d782-4ad0-a11d-df78aada2fbd 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] resizing rbd image f55fc2f8-9116-4e95-926a-c40ebfaf68d1_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Dec  6 02:41:13 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:41:13 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:41:13 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:41:13.053 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:41:13 np0005548731 nova_compute[232433]: 2025-12-06 07:41:13.178 232437 DEBUG nova.objects.instance [None req-67d35fbc-d782-4ad0-a11d-df78aada2fbd 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Lazy-loading 'migration_context' on Instance uuid f55fc2f8-9116-4e95-926a-c40ebfaf68d1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:41:13 np0005548731 nova_compute[232433]: 2025-12-06 07:41:13.188 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:41:13 np0005548731 nova_compute[232433]: 2025-12-06 07:41:13.267 232437 DEBUG nova.virt.libvirt.driver [None req-67d35fbc-d782-4ad0-a11d-df78aada2fbd 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] [instance: f55fc2f8-9116-4e95-926a-c40ebfaf68d1] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Dec  6 02:41:13 np0005548731 nova_compute[232433]: 2025-12-06 07:41:13.268 232437 DEBUG nova.virt.libvirt.driver [None req-67d35fbc-d782-4ad0-a11d-df78aada2fbd 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] [instance: f55fc2f8-9116-4e95-926a-c40ebfaf68d1] Ensure instance console log exists: /var/lib/nova/instances/f55fc2f8-9116-4e95-926a-c40ebfaf68d1/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec  6 02:41:13 np0005548731 nova_compute[232433]: 2025-12-06 07:41:13.269 232437 DEBUG oslo_concurrency.lockutils [None req-67d35fbc-d782-4ad0-a11d-df78aada2fbd 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:41:13 np0005548731 nova_compute[232433]: 2025-12-06 07:41:13.269 232437 DEBUG oslo_concurrency.lockutils [None req-67d35fbc-d782-4ad0-a11d-df78aada2fbd 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:41:13 np0005548731 nova_compute[232433]: 2025-12-06 07:41:13.269 232437 DEBUG oslo_concurrency.lockutils [None req-67d35fbc-d782-4ad0-a11d-df78aada2fbd 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:41:13 np0005548731 nova_compute[232433]: 2025-12-06 07:41:13.419 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:41:13 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e320 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:41:14 np0005548731 nova_compute[232433]: 2025-12-06 07:41:14.099 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:41:14 np0005548731 nova_compute[232433]: 2025-12-06 07:41:14.103 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:41:14 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:41:14 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:41:14 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:41:14.234 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:41:15 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:41:15 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:41:15 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:41:15.056 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:41:15 np0005548731 nova_compute[232433]: 2025-12-06 07:41:15.103 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:41:15 np0005548731 nova_compute[232433]: 2025-12-06 07:41:15.104 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  6 02:41:15 np0005548731 nova_compute[232433]: 2025-12-06 07:41:15.132 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Dec  6 02:41:15 np0005548731 nova_compute[232433]: 2025-12-06 07:41:15.908 232437 DEBUG nova.network.neutron [None req-67d35fbc-d782-4ad0-a11d-df78aada2fbd 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] [instance: f55fc2f8-9116-4e95-926a-c40ebfaf68d1] Successfully created port: b9c116ad-c036-42e3-965d-656439eface1 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Dec  6 02:41:16 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e321 e321: 3 total, 3 up, 3 in
Dec  6 02:41:16 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:41:16 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:41:16 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:41:16.236 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:41:16 np0005548731 nova_compute[232433]: 2025-12-06 07:41:16.745 232437 DEBUG nova.network.neutron [None req-67d35fbc-d782-4ad0-a11d-df78aada2fbd 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] [instance: f55fc2f8-9116-4e95-926a-c40ebfaf68d1] Successfully updated port: b9c116ad-c036-42e3-965d-656439eface1 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Dec  6 02:41:16 np0005548731 nova_compute[232433]: 2025-12-06 07:41:16.766 232437 DEBUG oslo_concurrency.lockutils [None req-67d35fbc-d782-4ad0-a11d-df78aada2fbd 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Acquiring lock "refresh_cache-f55fc2f8-9116-4e95-926a-c40ebfaf68d1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 02:41:16 np0005548731 nova_compute[232433]: 2025-12-06 07:41:16.766 232437 DEBUG oslo_concurrency.lockutils [None req-67d35fbc-d782-4ad0-a11d-df78aada2fbd 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Acquired lock "refresh_cache-f55fc2f8-9116-4e95-926a-c40ebfaf68d1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 02:41:16 np0005548731 nova_compute[232433]: 2025-12-06 07:41:16.766 232437 DEBUG nova.network.neutron [None req-67d35fbc-d782-4ad0-a11d-df78aada2fbd 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] [instance: f55fc2f8-9116-4e95-926a-c40ebfaf68d1] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec  6 02:41:16 np0005548731 nova_compute[232433]: 2025-12-06 07:41:16.889 232437 DEBUG nova.compute.manager [req-10b85f31-4927-452f-8a1e-88f4ce058b17 req-24534834-a86f-4909-a12e-0fbdc44a121c 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f55fc2f8-9116-4e95-926a-c40ebfaf68d1] Received event network-changed-b9c116ad-c036-42e3-965d-656439eface1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:41:16 np0005548731 nova_compute[232433]: 2025-12-06 07:41:16.889 232437 DEBUG nova.compute.manager [req-10b85f31-4927-452f-8a1e-88f4ce058b17 req-24534834-a86f-4909-a12e-0fbdc44a121c 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f55fc2f8-9116-4e95-926a-c40ebfaf68d1] Refreshing instance network info cache due to event network-changed-b9c116ad-c036-42e3-965d-656439eface1. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  6 02:41:16 np0005548731 nova_compute[232433]: 2025-12-06 07:41:16.889 232437 DEBUG oslo_concurrency.lockutils [req-10b85f31-4927-452f-8a1e-88f4ce058b17 req-24534834-a86f-4909-a12e-0fbdc44a121c 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-f55fc2f8-9116-4e95-926a-c40ebfaf68d1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 02:41:16 np0005548731 nova_compute[232433]: 2025-12-06 07:41:16.945 232437 DEBUG nova.network.neutron [None req-67d35fbc-d782-4ad0-a11d-df78aada2fbd 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] [instance: f55fc2f8-9116-4e95-926a-c40ebfaf68d1] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec  6 02:41:17 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e322 e322: 3 total, 3 up, 3 in
Dec  6 02:41:17 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:41:17 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:41:17 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:41:17.058 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:41:17 np0005548731 nova_compute[232433]: 2025-12-06 07:41:17.816 232437 DEBUG nova.network.neutron [None req-67d35fbc-d782-4ad0-a11d-df78aada2fbd 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] [instance: f55fc2f8-9116-4e95-926a-c40ebfaf68d1] Updating instance_info_cache with network_info: [{"id": "b9c116ad-c036-42e3-965d-656439eface1", "address": "fa:16:3e:12:5c:54", "network": {"id": "35a27638-382c-4afb-83b0-edd6d7f4bca8", "bridge": "br-int", "label": "tempest-ServersTestJSON-1603796324-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eff1f6a1654b45079de20eddb830e76d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb9c116ad-c0", "ovs_interfaceid": "b9c116ad-c036-42e3-965d-656439eface1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 02:41:17 np0005548731 nova_compute[232433]: 2025-12-06 07:41:17.833 232437 DEBUG oslo_concurrency.lockutils [None req-67d35fbc-d782-4ad0-a11d-df78aada2fbd 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Releasing lock "refresh_cache-f55fc2f8-9116-4e95-926a-c40ebfaf68d1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 02:41:17 np0005548731 nova_compute[232433]: 2025-12-06 07:41:17.834 232437 DEBUG nova.compute.manager [None req-67d35fbc-d782-4ad0-a11d-df78aada2fbd 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] [instance: f55fc2f8-9116-4e95-926a-c40ebfaf68d1] Instance network_info: |[{"id": "b9c116ad-c036-42e3-965d-656439eface1", "address": "fa:16:3e:12:5c:54", "network": {"id": "35a27638-382c-4afb-83b0-edd6d7f4bca8", "bridge": "br-int", "label": "tempest-ServersTestJSON-1603796324-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eff1f6a1654b45079de20eddb830e76d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb9c116ad-c0", "ovs_interfaceid": "b9c116ad-c036-42e3-965d-656439eface1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Dec  6 02:41:17 np0005548731 nova_compute[232433]: 2025-12-06 07:41:17.834 232437 DEBUG oslo_concurrency.lockutils [req-10b85f31-4927-452f-8a1e-88f4ce058b17 req-24534834-a86f-4909-a12e-0fbdc44a121c 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-f55fc2f8-9116-4e95-926a-c40ebfaf68d1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 02:41:17 np0005548731 nova_compute[232433]: 2025-12-06 07:41:17.834 232437 DEBUG nova.network.neutron [req-10b85f31-4927-452f-8a1e-88f4ce058b17 req-24534834-a86f-4909-a12e-0fbdc44a121c 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f55fc2f8-9116-4e95-926a-c40ebfaf68d1] Refreshing network info cache for port b9c116ad-c036-42e3-965d-656439eface1 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  6 02:41:17 np0005548731 nova_compute[232433]: 2025-12-06 07:41:17.837 232437 DEBUG nova.virt.libvirt.driver [None req-67d35fbc-d782-4ad0-a11d-df78aada2fbd 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] [instance: f55fc2f8-9116-4e95-926a-c40ebfaf68d1] Start _get_guest_xml network_info=[{"id": "b9c116ad-c036-42e3-965d-656439eface1", "address": "fa:16:3e:12:5c:54", "network": {"id": "35a27638-382c-4afb-83b0-edd6d7f4bca8", "bridge": "br-int", "label": "tempest-ServersTestJSON-1603796324-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eff1f6a1654b45079de20eddb830e76d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb9c116ad-c0", "ovs_interfaceid": "b9c116ad-c036-42e3-965d-656439eface1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'device_type': 'disk', 'encryption_secret_uuid': None, 'disk_bus': 'virtio', 'boot_index': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'encryption_options': None, 'size': 0, 'encrypted': False, 'image_id': '6efab05d-c7cf-4770-a5c3-c806a2739063'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec  6 02:41:17 np0005548731 nova_compute[232433]: 2025-12-06 07:41:17.843 232437 WARNING nova.virt.libvirt.driver [None req-67d35fbc-d782-4ad0-a11d-df78aada2fbd 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  6 02:41:17 np0005548731 nova_compute[232433]: 2025-12-06 07:41:17.848 232437 DEBUG nova.virt.libvirt.host [None req-67d35fbc-d782-4ad0-a11d-df78aada2fbd 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec  6 02:41:17 np0005548731 nova_compute[232433]: 2025-12-06 07:41:17.849 232437 DEBUG nova.virt.libvirt.host [None req-67d35fbc-d782-4ad0-a11d-df78aada2fbd 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec  6 02:41:17 np0005548731 nova_compute[232433]: 2025-12-06 07:41:17.853 232437 DEBUG nova.virt.libvirt.host [None req-67d35fbc-d782-4ad0-a11d-df78aada2fbd 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec  6 02:41:17 np0005548731 nova_compute[232433]: 2025-12-06 07:41:17.854 232437 DEBUG nova.virt.libvirt.host [None req-67d35fbc-d782-4ad0-a11d-df78aada2fbd 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec  6 02:41:17 np0005548731 nova_compute[232433]: 2025-12-06 07:41:17.855 232437 DEBUG nova.virt.libvirt.driver [None req-67d35fbc-d782-4ad0-a11d-df78aada2fbd 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec  6 02:41:17 np0005548731 nova_compute[232433]: 2025-12-06 07:41:17.856 232437 DEBUG nova.virt.hardware [None req-67d35fbc-d782-4ad0-a11d-df78aada2fbd 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-06T06:56:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='25848a18-11d9-4f11-80b5-5d005675c76d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec  6 02:41:17 np0005548731 nova_compute[232433]: 2025-12-06 07:41:17.856 232437 DEBUG nova.virt.hardware [None req-67d35fbc-d782-4ad0-a11d-df78aada2fbd 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec  6 02:41:17 np0005548731 nova_compute[232433]: 2025-12-06 07:41:17.856 232437 DEBUG nova.virt.hardware [None req-67d35fbc-d782-4ad0-a11d-df78aada2fbd 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec  6 02:41:17 np0005548731 nova_compute[232433]: 2025-12-06 07:41:17.856 232437 DEBUG nova.virt.hardware [None req-67d35fbc-d782-4ad0-a11d-df78aada2fbd 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec  6 02:41:17 np0005548731 nova_compute[232433]: 2025-12-06 07:41:17.857 232437 DEBUG nova.virt.hardware [None req-67d35fbc-d782-4ad0-a11d-df78aada2fbd 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec  6 02:41:17 np0005548731 nova_compute[232433]: 2025-12-06 07:41:17.857 232437 DEBUG nova.virt.hardware [None req-67d35fbc-d782-4ad0-a11d-df78aada2fbd 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec  6 02:41:17 np0005548731 nova_compute[232433]: 2025-12-06 07:41:17.857 232437 DEBUG nova.virt.hardware [None req-67d35fbc-d782-4ad0-a11d-df78aada2fbd 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec  6 02:41:17 np0005548731 nova_compute[232433]: 2025-12-06 07:41:17.857 232437 DEBUG nova.virt.hardware [None req-67d35fbc-d782-4ad0-a11d-df78aada2fbd 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec  6 02:41:17 np0005548731 nova_compute[232433]: 2025-12-06 07:41:17.857 232437 DEBUG nova.virt.hardware [None req-67d35fbc-d782-4ad0-a11d-df78aada2fbd 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec  6 02:41:17 np0005548731 nova_compute[232433]: 2025-12-06 07:41:17.858 232437 DEBUG nova.virt.hardware [None req-67d35fbc-d782-4ad0-a11d-df78aada2fbd 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec  6 02:41:17 np0005548731 nova_compute[232433]: 2025-12-06 07:41:17.858 232437 DEBUG nova.virt.hardware [None req-67d35fbc-d782-4ad0-a11d-df78aada2fbd 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec  6 02:41:17 np0005548731 nova_compute[232433]: 2025-12-06 07:41:17.861 232437 DEBUG oslo_concurrency.processutils [None req-67d35fbc-d782-4ad0-a11d-df78aada2fbd 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:41:18 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e323 e323: 3 total, 3 up, 3 in
Dec  6 02:41:18 np0005548731 nova_compute[232433]: 2025-12-06 07:41:18.190 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:41:18 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:41:18 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:41:18 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:41:18.239 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:41:18 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  6 02:41:18 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2372975583' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  6 02:41:18 np0005548731 nova_compute[232433]: 2025-12-06 07:41:18.312 232437 DEBUG oslo_concurrency.processutils [None req-67d35fbc-d782-4ad0-a11d-df78aada2fbd 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.452s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:41:18 np0005548731 nova_compute[232433]: 2025-12-06 07:41:18.336 232437 DEBUG nova.storage.rbd_utils [None req-67d35fbc-d782-4ad0-a11d-df78aada2fbd 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] rbd image f55fc2f8-9116-4e95-926a-c40ebfaf68d1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:41:18 np0005548731 nova_compute[232433]: 2025-12-06 07:41:18.340 232437 DEBUG oslo_concurrency.processutils [None req-67d35fbc-d782-4ad0-a11d-df78aada2fbd 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:41:18 np0005548731 nova_compute[232433]: 2025-12-06 07:41:18.421 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:41:18 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e323 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:41:18 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  6 02:41:18 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2685470459' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  6 02:41:18 np0005548731 nova_compute[232433]: 2025-12-06 07:41:18.780 232437 DEBUG oslo_concurrency.processutils [None req-67d35fbc-d782-4ad0-a11d-df78aada2fbd 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.441s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:41:18 np0005548731 nova_compute[232433]: 2025-12-06 07:41:18.783 232437 DEBUG nova.virt.libvirt.vif [None req-67d35fbc-d782-4ad0-a11d-df78aada2fbd 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=2001:2001::3,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T07:41:09Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-611064881',display_name='tempest-ServersTestJSON-server-611064881',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serverstestjson-server-611064881',id=140,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='eff1f6a1654b45079de20eddb830e76d',ramdisk_id='',reservation_id='r-8z2tcffj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-374151197',owner_user_name='tempest-ServersTestJSON-374151197-project-memb
er'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:41:11Z,user_data=None,user_id='0d8b62a3276f4a8b8349af67b82134c8',uuid=f55fc2f8-9116-4e95-926a-c40ebfaf68d1,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b9c116ad-c036-42e3-965d-656439eface1", "address": "fa:16:3e:12:5c:54", "network": {"id": "35a27638-382c-4afb-83b0-edd6d7f4bca8", "bridge": "br-int", "label": "tempest-ServersTestJSON-1603796324-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eff1f6a1654b45079de20eddb830e76d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb9c116ad-c0", "ovs_interfaceid": "b9c116ad-c036-42e3-965d-656439eface1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec  6 02:41:18 np0005548731 nova_compute[232433]: 2025-12-06 07:41:18.784 232437 DEBUG nova.network.os_vif_util [None req-67d35fbc-d782-4ad0-a11d-df78aada2fbd 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Converting VIF {"id": "b9c116ad-c036-42e3-965d-656439eface1", "address": "fa:16:3e:12:5c:54", "network": {"id": "35a27638-382c-4afb-83b0-edd6d7f4bca8", "bridge": "br-int", "label": "tempest-ServersTestJSON-1603796324-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eff1f6a1654b45079de20eddb830e76d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb9c116ad-c0", "ovs_interfaceid": "b9c116ad-c036-42e3-965d-656439eface1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 02:41:18 np0005548731 nova_compute[232433]: 2025-12-06 07:41:18.785 232437 DEBUG nova.network.os_vif_util [None req-67d35fbc-d782-4ad0-a11d-df78aada2fbd 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:12:5c:54,bridge_name='br-int',has_traffic_filtering=True,id=b9c116ad-c036-42e3-965d-656439eface1,network=Network(35a27638-382c-4afb-83b0-edd6d7f4bca8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb9c116ad-c0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 02:41:18 np0005548731 nova_compute[232433]: 2025-12-06 07:41:18.786 232437 DEBUG nova.objects.instance [None req-67d35fbc-d782-4ad0-a11d-df78aada2fbd 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Lazy-loading 'pci_devices' on Instance uuid f55fc2f8-9116-4e95-926a-c40ebfaf68d1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:41:18 np0005548731 nova_compute[232433]: 2025-12-06 07:41:18.802 232437 DEBUG nova.virt.libvirt.driver [None req-67d35fbc-d782-4ad0-a11d-df78aada2fbd 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] [instance: f55fc2f8-9116-4e95-926a-c40ebfaf68d1] End _get_guest_xml xml=<domain type="kvm">
Dec  6 02:41:18 np0005548731 nova_compute[232433]:  <uuid>f55fc2f8-9116-4e95-926a-c40ebfaf68d1</uuid>
Dec  6 02:41:18 np0005548731 nova_compute[232433]:  <name>instance-0000008c</name>
Dec  6 02:41:18 np0005548731 nova_compute[232433]:  <memory>131072</memory>
Dec  6 02:41:18 np0005548731 nova_compute[232433]:  <vcpu>1</vcpu>
Dec  6 02:41:18 np0005548731 nova_compute[232433]:  <metadata>
Dec  6 02:41:18 np0005548731 nova_compute[232433]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec  6 02:41:18 np0005548731 nova_compute[232433]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec  6 02:41:18 np0005548731 nova_compute[232433]:      <nova:name>tempest-ServersTestJSON-server-611064881</nova:name>
Dec  6 02:41:18 np0005548731 nova_compute[232433]:      <nova:creationTime>2025-12-06 07:41:17</nova:creationTime>
Dec  6 02:41:18 np0005548731 nova_compute[232433]:      <nova:flavor name="m1.nano">
Dec  6 02:41:18 np0005548731 nova_compute[232433]:        <nova:memory>128</nova:memory>
Dec  6 02:41:18 np0005548731 nova_compute[232433]:        <nova:disk>1</nova:disk>
Dec  6 02:41:18 np0005548731 nova_compute[232433]:        <nova:swap>0</nova:swap>
Dec  6 02:41:18 np0005548731 nova_compute[232433]:        <nova:ephemeral>0</nova:ephemeral>
Dec  6 02:41:18 np0005548731 nova_compute[232433]:        <nova:vcpus>1</nova:vcpus>
Dec  6 02:41:18 np0005548731 nova_compute[232433]:      </nova:flavor>
Dec  6 02:41:18 np0005548731 nova_compute[232433]:      <nova:owner>
Dec  6 02:41:18 np0005548731 nova_compute[232433]:        <nova:user uuid="0d8b62a3276f4a8b8349af67b82134c8">tempest-ServersTestJSON-374151197-project-member</nova:user>
Dec  6 02:41:18 np0005548731 nova_compute[232433]:        <nova:project uuid="eff1f6a1654b45079de20eddb830e76d">tempest-ServersTestJSON-374151197</nova:project>
Dec  6 02:41:18 np0005548731 nova_compute[232433]:      </nova:owner>
Dec  6 02:41:18 np0005548731 nova_compute[232433]:      <nova:root type="image" uuid="6efab05d-c7cf-4770-a5c3-c806a2739063"/>
Dec  6 02:41:18 np0005548731 nova_compute[232433]:      <nova:ports>
Dec  6 02:41:18 np0005548731 nova_compute[232433]:        <nova:port uuid="b9c116ad-c036-42e3-965d-656439eface1">
Dec  6 02:41:18 np0005548731 nova_compute[232433]:          <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Dec  6 02:41:18 np0005548731 nova_compute[232433]:        </nova:port>
Dec  6 02:41:18 np0005548731 nova_compute[232433]:      </nova:ports>
Dec  6 02:41:18 np0005548731 nova_compute[232433]:    </nova:instance>
Dec  6 02:41:18 np0005548731 nova_compute[232433]:  </metadata>
Dec  6 02:41:18 np0005548731 nova_compute[232433]:  <sysinfo type="smbios">
Dec  6 02:41:18 np0005548731 nova_compute[232433]:    <system>
Dec  6 02:41:18 np0005548731 nova_compute[232433]:      <entry name="manufacturer">RDO</entry>
Dec  6 02:41:18 np0005548731 nova_compute[232433]:      <entry name="product">OpenStack Compute</entry>
Dec  6 02:41:18 np0005548731 nova_compute[232433]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec  6 02:41:18 np0005548731 nova_compute[232433]:      <entry name="serial">f55fc2f8-9116-4e95-926a-c40ebfaf68d1</entry>
Dec  6 02:41:18 np0005548731 nova_compute[232433]:      <entry name="uuid">f55fc2f8-9116-4e95-926a-c40ebfaf68d1</entry>
Dec  6 02:41:18 np0005548731 nova_compute[232433]:      <entry name="family">Virtual Machine</entry>
Dec  6 02:41:18 np0005548731 nova_compute[232433]:    </system>
Dec  6 02:41:18 np0005548731 nova_compute[232433]:  </sysinfo>
Dec  6 02:41:18 np0005548731 nova_compute[232433]:  <os>
Dec  6 02:41:18 np0005548731 nova_compute[232433]:    <type arch="x86_64" machine="q35">hvm</type>
Dec  6 02:41:18 np0005548731 nova_compute[232433]:    <boot dev="hd"/>
Dec  6 02:41:18 np0005548731 nova_compute[232433]:    <smbios mode="sysinfo"/>
Dec  6 02:41:18 np0005548731 nova_compute[232433]:  </os>
Dec  6 02:41:18 np0005548731 nova_compute[232433]:  <features>
Dec  6 02:41:18 np0005548731 nova_compute[232433]:    <acpi/>
Dec  6 02:41:18 np0005548731 nova_compute[232433]:    <apic/>
Dec  6 02:41:18 np0005548731 nova_compute[232433]:    <vmcoreinfo/>
Dec  6 02:41:18 np0005548731 nova_compute[232433]:  </features>
Dec  6 02:41:18 np0005548731 nova_compute[232433]:  <clock offset="utc">
Dec  6 02:41:18 np0005548731 nova_compute[232433]:    <timer name="pit" tickpolicy="delay"/>
Dec  6 02:41:18 np0005548731 nova_compute[232433]:    <timer name="rtc" tickpolicy="catchup"/>
Dec  6 02:41:18 np0005548731 nova_compute[232433]:    <timer name="hpet" present="no"/>
Dec  6 02:41:18 np0005548731 nova_compute[232433]:  </clock>
Dec  6 02:41:18 np0005548731 nova_compute[232433]:  <cpu mode="custom" match="exact">
Dec  6 02:41:18 np0005548731 nova_compute[232433]:    <model>Nehalem</model>
Dec  6 02:41:18 np0005548731 nova_compute[232433]:    <topology sockets="1" cores="1" threads="1"/>
Dec  6 02:41:18 np0005548731 nova_compute[232433]:  </cpu>
Dec  6 02:41:18 np0005548731 nova_compute[232433]:  <devices>
Dec  6 02:41:18 np0005548731 nova_compute[232433]:    <disk type="network" device="disk">
Dec  6 02:41:18 np0005548731 nova_compute[232433]:      <driver type="raw" cache="none"/>
Dec  6 02:41:18 np0005548731 nova_compute[232433]:      <source protocol="rbd" name="vms/f55fc2f8-9116-4e95-926a-c40ebfaf68d1_disk">
Dec  6 02:41:18 np0005548731 nova_compute[232433]:        <host name="192.168.122.100" port="6789"/>
Dec  6 02:41:18 np0005548731 nova_compute[232433]:        <host name="192.168.122.102" port="6789"/>
Dec  6 02:41:18 np0005548731 nova_compute[232433]:        <host name="192.168.122.101" port="6789"/>
Dec  6 02:41:18 np0005548731 nova_compute[232433]:      </source>
Dec  6 02:41:18 np0005548731 nova_compute[232433]:      <auth username="openstack">
Dec  6 02:41:18 np0005548731 nova_compute[232433]:        <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec  6 02:41:18 np0005548731 nova_compute[232433]:      </auth>
Dec  6 02:41:18 np0005548731 nova_compute[232433]:      <target dev="vda" bus="virtio"/>
Dec  6 02:41:18 np0005548731 nova_compute[232433]:    </disk>
Dec  6 02:41:18 np0005548731 nova_compute[232433]:    <disk type="network" device="cdrom">
Dec  6 02:41:18 np0005548731 nova_compute[232433]:      <driver type="raw" cache="none"/>
Dec  6 02:41:18 np0005548731 nova_compute[232433]:      <source protocol="rbd" name="vms/f55fc2f8-9116-4e95-926a-c40ebfaf68d1_disk.config">
Dec  6 02:41:18 np0005548731 nova_compute[232433]:        <host name="192.168.122.100" port="6789"/>
Dec  6 02:41:18 np0005548731 nova_compute[232433]:        <host name="192.168.122.102" port="6789"/>
Dec  6 02:41:18 np0005548731 nova_compute[232433]:        <host name="192.168.122.101" port="6789"/>
Dec  6 02:41:18 np0005548731 nova_compute[232433]:      </source>
Dec  6 02:41:18 np0005548731 nova_compute[232433]:      <auth username="openstack">
Dec  6 02:41:18 np0005548731 nova_compute[232433]:        <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec  6 02:41:18 np0005548731 nova_compute[232433]:      </auth>
Dec  6 02:41:18 np0005548731 nova_compute[232433]:      <target dev="sda" bus="sata"/>
Dec  6 02:41:18 np0005548731 nova_compute[232433]:    </disk>
Dec  6 02:41:18 np0005548731 nova_compute[232433]:    <interface type="ethernet">
Dec  6 02:41:18 np0005548731 nova_compute[232433]:      <mac address="fa:16:3e:12:5c:54"/>
Dec  6 02:41:18 np0005548731 nova_compute[232433]:      <model type="virtio"/>
Dec  6 02:41:18 np0005548731 nova_compute[232433]:      <driver name="vhost" rx_queue_size="512"/>
Dec  6 02:41:18 np0005548731 nova_compute[232433]:      <mtu size="1442"/>
Dec  6 02:41:18 np0005548731 nova_compute[232433]:      <target dev="tapb9c116ad-c0"/>
Dec  6 02:41:18 np0005548731 nova_compute[232433]:    </interface>
Dec  6 02:41:18 np0005548731 nova_compute[232433]:    <serial type="pty">
Dec  6 02:41:18 np0005548731 nova_compute[232433]:      <log file="/var/lib/nova/instances/f55fc2f8-9116-4e95-926a-c40ebfaf68d1/console.log" append="off"/>
Dec  6 02:41:18 np0005548731 nova_compute[232433]:    </serial>
Dec  6 02:41:18 np0005548731 nova_compute[232433]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec  6 02:41:18 np0005548731 nova_compute[232433]:    <video>
Dec  6 02:41:18 np0005548731 nova_compute[232433]:      <model type="virtio"/>
Dec  6 02:41:18 np0005548731 nova_compute[232433]:    </video>
Dec  6 02:41:18 np0005548731 nova_compute[232433]:    <input type="tablet" bus="usb"/>
Dec  6 02:41:18 np0005548731 nova_compute[232433]:    <rng model="virtio">
Dec  6 02:41:18 np0005548731 nova_compute[232433]:      <backend model="random">/dev/urandom</backend>
Dec  6 02:41:18 np0005548731 nova_compute[232433]:    </rng>
Dec  6 02:41:18 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root"/>
Dec  6 02:41:18 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:41:18 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:41:18 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:41:18 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:41:18 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:41:18 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:41:18 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:41:18 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:41:18 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:41:18 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:41:18 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:41:18 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:41:18 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:41:18 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:41:18 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:41:18 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:41:18 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:41:18 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:41:18 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:41:18 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:41:18 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:41:18 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:41:18 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:41:18 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:41:18 np0005548731 nova_compute[232433]:    <controller type="usb" index="0"/>
Dec  6 02:41:18 np0005548731 nova_compute[232433]:    <memballoon model="virtio">
Dec  6 02:41:18 np0005548731 nova_compute[232433]:      <stats period="10"/>
Dec  6 02:41:18 np0005548731 nova_compute[232433]:    </memballoon>
Dec  6 02:41:18 np0005548731 nova_compute[232433]:  </devices>
Dec  6 02:41:18 np0005548731 nova_compute[232433]: </domain>
Dec  6 02:41:18 np0005548731 nova_compute[232433]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec  6 02:41:18 np0005548731 nova_compute[232433]: 2025-12-06 07:41:18.803 232437 DEBUG nova.compute.manager [None req-67d35fbc-d782-4ad0-a11d-df78aada2fbd 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] [instance: f55fc2f8-9116-4e95-926a-c40ebfaf68d1] Preparing to wait for external event network-vif-plugged-b9c116ad-c036-42e3-965d-656439eface1 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec  6 02:41:18 np0005548731 nova_compute[232433]: 2025-12-06 07:41:18.804 232437 DEBUG oslo_concurrency.lockutils [None req-67d35fbc-d782-4ad0-a11d-df78aada2fbd 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Acquiring lock "f55fc2f8-9116-4e95-926a-c40ebfaf68d1-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:41:18 np0005548731 nova_compute[232433]: 2025-12-06 07:41:18.804 232437 DEBUG oslo_concurrency.lockutils [None req-67d35fbc-d782-4ad0-a11d-df78aada2fbd 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Lock "f55fc2f8-9116-4e95-926a-c40ebfaf68d1-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:41:18 np0005548731 nova_compute[232433]: 2025-12-06 07:41:18.805 232437 DEBUG oslo_concurrency.lockutils [None req-67d35fbc-d782-4ad0-a11d-df78aada2fbd 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Lock "f55fc2f8-9116-4e95-926a-c40ebfaf68d1-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:41:18 np0005548731 nova_compute[232433]: 2025-12-06 07:41:18.806 232437 DEBUG nova.virt.libvirt.vif [None req-67d35fbc-d782-4ad0-a11d-df78aada2fbd 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=2001:2001::3,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T07:41:09Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-611064881',display_name='tempest-ServersTestJSON-server-611064881',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serverstestjson-server-611064881',id=140,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='eff1f6a1654b45079de20eddb830e76d',ramdisk_id='',reservation_id='r-8z2tcffj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-374151197',owner_user_name='tempest-ServersTestJSON-374151197-pr
oject-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:41:11Z,user_data=None,user_id='0d8b62a3276f4a8b8349af67b82134c8',uuid=f55fc2f8-9116-4e95-926a-c40ebfaf68d1,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b9c116ad-c036-42e3-965d-656439eface1", "address": "fa:16:3e:12:5c:54", "network": {"id": "35a27638-382c-4afb-83b0-edd6d7f4bca8", "bridge": "br-int", "label": "tempest-ServersTestJSON-1603796324-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eff1f6a1654b45079de20eddb830e76d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb9c116ad-c0", "ovs_interfaceid": "b9c116ad-c036-42e3-965d-656439eface1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec  6 02:41:18 np0005548731 nova_compute[232433]: 2025-12-06 07:41:18.806 232437 DEBUG nova.network.os_vif_util [None req-67d35fbc-d782-4ad0-a11d-df78aada2fbd 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Converting VIF {"id": "b9c116ad-c036-42e3-965d-656439eface1", "address": "fa:16:3e:12:5c:54", "network": {"id": "35a27638-382c-4afb-83b0-edd6d7f4bca8", "bridge": "br-int", "label": "tempest-ServersTestJSON-1603796324-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eff1f6a1654b45079de20eddb830e76d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb9c116ad-c0", "ovs_interfaceid": "b9c116ad-c036-42e3-965d-656439eface1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 02:41:18 np0005548731 nova_compute[232433]: 2025-12-06 07:41:18.807 232437 DEBUG nova.network.os_vif_util [None req-67d35fbc-d782-4ad0-a11d-df78aada2fbd 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:12:5c:54,bridge_name='br-int',has_traffic_filtering=True,id=b9c116ad-c036-42e3-965d-656439eface1,network=Network(35a27638-382c-4afb-83b0-edd6d7f4bca8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb9c116ad-c0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 02:41:18 np0005548731 nova_compute[232433]: 2025-12-06 07:41:18.807 232437 DEBUG os_vif [None req-67d35fbc-d782-4ad0-a11d-df78aada2fbd 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:12:5c:54,bridge_name='br-int',has_traffic_filtering=True,id=b9c116ad-c036-42e3-965d-656439eface1,network=Network(35a27638-382c-4afb-83b0-edd6d7f4bca8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb9c116ad-c0') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec  6 02:41:18 np0005548731 nova_compute[232433]: 2025-12-06 07:41:18.808 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:41:18 np0005548731 nova_compute[232433]: 2025-12-06 07:41:18.808 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:41:18 np0005548731 nova_compute[232433]: 2025-12-06 07:41:18.809 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  6 02:41:18 np0005548731 nova_compute[232433]: 2025-12-06 07:41:18.813 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:41:18 np0005548731 nova_compute[232433]: 2025-12-06 07:41:18.813 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb9c116ad-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:41:18 np0005548731 nova_compute[232433]: 2025-12-06 07:41:18.814 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapb9c116ad-c0, col_values=(('external_ids', {'iface-id': 'b9c116ad-c036-42e3-965d-656439eface1', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:12:5c:54', 'vm-uuid': 'f55fc2f8-9116-4e95-926a-c40ebfaf68d1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:41:18 np0005548731 nova_compute[232433]: 2025-12-06 07:41:18.816 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:41:18 np0005548731 NetworkManager[49182]: <info>  [1765006878.8169] manager: (tapb9c116ad-c0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/293)
Dec  6 02:41:18 np0005548731 nova_compute[232433]: 2025-12-06 07:41:18.818 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  6 02:41:18 np0005548731 nova_compute[232433]: 2025-12-06 07:41:18.822 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:41:18 np0005548731 nova_compute[232433]: 2025-12-06 07:41:18.823 232437 INFO os_vif [None req-67d35fbc-d782-4ad0-a11d-df78aada2fbd 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:12:5c:54,bridge_name='br-int',has_traffic_filtering=True,id=b9c116ad-c036-42e3-965d-656439eface1,network=Network(35a27638-382c-4afb-83b0-edd6d7f4bca8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb9c116ad-c0')#033[00m
Dec  6 02:41:18 np0005548731 nova_compute[232433]: 2025-12-06 07:41:18.890 232437 DEBUG nova.virt.libvirt.driver [None req-67d35fbc-d782-4ad0-a11d-df78aada2fbd 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  6 02:41:18 np0005548731 nova_compute[232433]: 2025-12-06 07:41:18.890 232437 DEBUG nova.virt.libvirt.driver [None req-67d35fbc-d782-4ad0-a11d-df78aada2fbd 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  6 02:41:18 np0005548731 nova_compute[232433]: 2025-12-06 07:41:18.891 232437 DEBUG nova.virt.libvirt.driver [None req-67d35fbc-d782-4ad0-a11d-df78aada2fbd 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] No VIF found with MAC fa:16:3e:12:5c:54, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec  6 02:41:18 np0005548731 nova_compute[232433]: 2025-12-06 07:41:18.891 232437 INFO nova.virt.libvirt.driver [None req-67d35fbc-d782-4ad0-a11d-df78aada2fbd 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] [instance: f55fc2f8-9116-4e95-926a-c40ebfaf68d1] Using config drive#033[00m
Dec  6 02:41:18 np0005548731 nova_compute[232433]: 2025-12-06 07:41:18.910 232437 DEBUG nova.storage.rbd_utils [None req-67d35fbc-d782-4ad0-a11d-df78aada2fbd 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] rbd image f55fc2f8-9116-4e95-926a-c40ebfaf68d1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:41:19 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:41:19 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:41:19 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:41:19.060 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:41:19 np0005548731 nova_compute[232433]: 2025-12-06 07:41:19.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:41:19 np0005548731 nova_compute[232433]: 2025-12-06 07:41:19.105 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  6 02:41:19 np0005548731 nova_compute[232433]: 2025-12-06 07:41:19.516 232437 INFO nova.virt.libvirt.driver [None req-67d35fbc-d782-4ad0-a11d-df78aada2fbd 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] [instance: f55fc2f8-9116-4e95-926a-c40ebfaf68d1] Creating config drive at /var/lib/nova/instances/f55fc2f8-9116-4e95-926a-c40ebfaf68d1/disk.config#033[00m
Dec  6 02:41:19 np0005548731 nova_compute[232433]: 2025-12-06 07:41:19.521 232437 DEBUG oslo_concurrency.processutils [None req-67d35fbc-d782-4ad0-a11d-df78aada2fbd 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/f55fc2f8-9116-4e95-926a-c40ebfaf68d1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpyq_bpu8d execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:41:19 np0005548731 nova_compute[232433]: 2025-12-06 07:41:19.651 232437 DEBUG oslo_concurrency.processutils [None req-67d35fbc-d782-4ad0-a11d-df78aada2fbd 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/f55fc2f8-9116-4e95-926a-c40ebfaf68d1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpyq_bpu8d" returned: 0 in 0.131s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:41:19 np0005548731 nova_compute[232433]: 2025-12-06 07:41:19.674 232437 DEBUG nova.storage.rbd_utils [None req-67d35fbc-d782-4ad0-a11d-df78aada2fbd 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] rbd image f55fc2f8-9116-4e95-926a-c40ebfaf68d1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:41:19 np0005548731 nova_compute[232433]: 2025-12-06 07:41:19.677 232437 DEBUG oslo_concurrency.processutils [None req-67d35fbc-d782-4ad0-a11d-df78aada2fbd 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/f55fc2f8-9116-4e95-926a-c40ebfaf68d1/disk.config f55fc2f8-9116-4e95-926a-c40ebfaf68d1_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:41:19 np0005548731 nova_compute[232433]: 2025-12-06 07:41:19.897 232437 DEBUG nova.network.neutron [req-10b85f31-4927-452f-8a1e-88f4ce058b17 req-24534834-a86f-4909-a12e-0fbdc44a121c 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f55fc2f8-9116-4e95-926a-c40ebfaf68d1] Updated VIF entry in instance network info cache for port b9c116ad-c036-42e3-965d-656439eface1. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  6 02:41:19 np0005548731 nova_compute[232433]: 2025-12-06 07:41:19.898 232437 DEBUG nova.network.neutron [req-10b85f31-4927-452f-8a1e-88f4ce058b17 req-24534834-a86f-4909-a12e-0fbdc44a121c 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f55fc2f8-9116-4e95-926a-c40ebfaf68d1] Updating instance_info_cache with network_info: [{"id": "b9c116ad-c036-42e3-965d-656439eface1", "address": "fa:16:3e:12:5c:54", "network": {"id": "35a27638-382c-4afb-83b0-edd6d7f4bca8", "bridge": "br-int", "label": "tempest-ServersTestJSON-1603796324-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eff1f6a1654b45079de20eddb830e76d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb9c116ad-c0", "ovs_interfaceid": "b9c116ad-c036-42e3-965d-656439eface1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 02:41:19 np0005548731 nova_compute[232433]: 2025-12-06 07:41:19.918 232437 DEBUG oslo_concurrency.lockutils [req-10b85f31-4927-452f-8a1e-88f4ce058b17 req-24534834-a86f-4909-a12e-0fbdc44a121c 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-f55fc2f8-9116-4e95-926a-c40ebfaf68d1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 02:41:20 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:41:20 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:41:20 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:41:20.241 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:41:21 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:41:21 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:41:21 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:41:21.062 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:41:21 np0005548731 nova_compute[232433]: 2025-12-06 07:41:21.082 232437 DEBUG oslo_concurrency.processutils [None req-67d35fbc-d782-4ad0-a11d-df78aada2fbd 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/f55fc2f8-9116-4e95-926a-c40ebfaf68d1/disk.config f55fc2f8-9116-4e95-926a-c40ebfaf68d1_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.405s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:41:21 np0005548731 nova_compute[232433]: 2025-12-06 07:41:21.082 232437 INFO nova.virt.libvirt.driver [None req-67d35fbc-d782-4ad0-a11d-df78aada2fbd 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] [instance: f55fc2f8-9116-4e95-926a-c40ebfaf68d1] Deleting local config drive /var/lib/nova/instances/f55fc2f8-9116-4e95-926a-c40ebfaf68d1/disk.config because it was imported into RBD.#033[00m
Dec  6 02:41:21 np0005548731 kernel: tapb9c116ad-c0: entered promiscuous mode
Dec  6 02:41:21 np0005548731 NetworkManager[49182]: <info>  [1765006881.1432] manager: (tapb9c116ad-c0): new Tun device (/org/freedesktop/NetworkManager/Devices/294)
Dec  6 02:41:21 np0005548731 nova_compute[232433]: 2025-12-06 07:41:21.144 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:41:21 np0005548731 ovn_controller[133927]: 2025-12-06T07:41:21Z|00651|binding|INFO|Claiming lport b9c116ad-c036-42e3-965d-656439eface1 for this chassis.
Dec  6 02:41:21 np0005548731 ovn_controller[133927]: 2025-12-06T07:41:21Z|00652|binding|INFO|b9c116ad-c036-42e3-965d-656439eface1: Claiming fa:16:3e:12:5c:54 10.100.0.10
Dec  6 02:41:21 np0005548731 nova_compute[232433]: 2025-12-06 07:41:21.149 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:41:21 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:41:21.159 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:12:5c:54 10.100.0.10'], port_security=['fa:16:3e:12:5c:54 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'f55fc2f8-9116-4e95-926a-c40ebfaf68d1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-35a27638-382c-4afb-83b0-edd6d7f4bca8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'eff1f6a1654b45079de20eddb830e76d', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'b5b8e710-017e-4606-9067-bf1900949ed3', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=80d3c5d2-eecc-4e72-bceb-41384af759f0, chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], logical_port=b9c116ad-c036-42e3-965d-656439eface1) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 02:41:21 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:41:21.161 143965 INFO neutron.agent.ovn.metadata.agent [-] Port b9c116ad-c036-42e3-965d-656439eface1 in datapath 35a27638-382c-4afb-83b0-edd6d7f4bca8 bound to our chassis#033[00m
Dec  6 02:41:21 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:41:21.163 143965 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 35a27638-382c-4afb-83b0-edd6d7f4bca8#033[00m
Dec  6 02:41:21 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:41:21.179 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[1437d172-d34e-4f94-8709-fd7939f86ec3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:41:21 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:41:21.181 143965 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap35a27638-31 in ovnmeta-35a27638-382c-4afb-83b0-edd6d7f4bca8 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Dec  6 02:41:21 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:41:21.183 236668 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap35a27638-30 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Dec  6 02:41:21 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:41:21.183 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[7e80076f-6ba9-45c4-ad0a-a71c4937cc47]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:41:21 np0005548731 systemd-machined[195355]: New machine qemu-65-instance-0000008c.
Dec  6 02:41:21 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:41:21.184 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[8b86657e-2030-46fd-8041-ef0d56e76fd9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:41:21 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:41:21.197 144079 DEBUG oslo.privsep.daemon [-] privsep: reply[9623c712-1700-424e-a5d3-ed3fe1246a2a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:41:21 np0005548731 systemd[1]: Started Virtual Machine qemu-65-instance-0000008c.
Dec  6 02:41:21 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:41:21.222 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[32133976-9bfb-4020-91f4-817811e73804]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:41:21 np0005548731 systemd-udevd[294030]: Network interface NamePolicy= disabled on kernel command line.
Dec  6 02:41:21 np0005548731 nova_compute[232433]: 2025-12-06 07:41:21.226 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:41:21 np0005548731 ovn_controller[133927]: 2025-12-06T07:41:21Z|00653|binding|INFO|Setting lport b9c116ad-c036-42e3-965d-656439eface1 ovn-installed in OVS
Dec  6 02:41:21 np0005548731 ovn_controller[133927]: 2025-12-06T07:41:21Z|00654|binding|INFO|Setting lport b9c116ad-c036-42e3-965d-656439eface1 up in Southbound
Dec  6 02:41:21 np0005548731 nova_compute[232433]: 2025-12-06 07:41:21.234 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:41:21 np0005548731 NetworkManager[49182]: <info>  [1765006881.2395] device (tapb9c116ad-c0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec  6 02:41:21 np0005548731 NetworkManager[49182]: <info>  [1765006881.2409] device (tapb9c116ad-c0): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec  6 02:41:21 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:41:21.253 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[87854092-519c-4ee2-9777-213bc238ddd5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:41:21 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:41:21.260 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[ddd9f965-6597-4504-8e8f-6f264b539c6b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:41:21 np0005548731 NetworkManager[49182]: <info>  [1765006881.2617] manager: (tap35a27638-30): new Veth device (/org/freedesktop/NetworkManager/Devices/295)
Dec  6 02:41:21 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:41:21.293 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[6a929e1c-5363-42ab-8f22-1dcb3991626a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:41:21 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:41:21.296 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[d157d72f-164b-4279-8e8a-6037562ac632]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:41:21 np0005548731 NetworkManager[49182]: <info>  [1765006881.3214] device (tap35a27638-30): carrier: link connected
Dec  6 02:41:21 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:41:21.327 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[ededb3f6-4f12-438d-a33e-85772493a6dc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:41:21 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:41:21.346 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[18fa261c-3f66-4d67-8ee6-20e164072b17]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap35a27638-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:19:c5:27'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 196], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 714332, 'reachable_time': 32701, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 294060, 'error': None, 'target': 'ovnmeta-35a27638-382c-4afb-83b0-edd6d7f4bca8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:41:21 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:41:21.361 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[2d68a8f5-9be4-484c-91fc-3d3c1c0cba51]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe19:c527'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 714332, 'tstamp': 714332}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 294061, 'error': None, 'target': 'ovnmeta-35a27638-382c-4afb-83b0-edd6d7f4bca8', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:41:21 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:41:21.376 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[9bf62a14-a232-4f7e-add7-b383a2d570e0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap35a27638-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:19:c5:27'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 196], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 714332, 'reachable_time': 32701, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 294062, 'error': None, 'target': 'ovnmeta-35a27638-382c-4afb-83b0-edd6d7f4bca8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:41:21 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:41:21.404 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[116c2186-b634-44bd-8133-98c52de292c9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:41:21 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:41:21.459 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[970d5002-7530-4f26-ae6b-7eccd3104406]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:41:21 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:41:21.461 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap35a27638-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:41:21 np0005548731 nova_compute[232433]: 2025-12-06 07:41:21.460 232437 DEBUG nova.compute.manager [req-d03f4e8e-84b1-448e-aebf-d45d9e235332 req-813776cb-a414-4d64-8e14-41d3285ac065 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f55fc2f8-9116-4e95-926a-c40ebfaf68d1] Received event network-vif-plugged-b9c116ad-c036-42e3-965d-656439eface1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:41:21 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:41:21.461 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  6 02:41:21 np0005548731 nova_compute[232433]: 2025-12-06 07:41:21.461 232437 DEBUG oslo_concurrency.lockutils [req-d03f4e8e-84b1-448e-aebf-d45d9e235332 req-813776cb-a414-4d64-8e14-41d3285ac065 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "f55fc2f8-9116-4e95-926a-c40ebfaf68d1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:41:21 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:41:21.461 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap35a27638-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:41:21 np0005548731 nova_compute[232433]: 2025-12-06 07:41:21.461 232437 DEBUG oslo_concurrency.lockutils [req-d03f4e8e-84b1-448e-aebf-d45d9e235332 req-813776cb-a414-4d64-8e14-41d3285ac065 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "f55fc2f8-9116-4e95-926a-c40ebfaf68d1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:41:21 np0005548731 nova_compute[232433]: 2025-12-06 07:41:21.462 232437 DEBUG oslo_concurrency.lockutils [req-d03f4e8e-84b1-448e-aebf-d45d9e235332 req-813776cb-a414-4d64-8e14-41d3285ac065 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "f55fc2f8-9116-4e95-926a-c40ebfaf68d1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:41:21 np0005548731 nova_compute[232433]: 2025-12-06 07:41:21.462 232437 DEBUG nova.compute.manager [req-d03f4e8e-84b1-448e-aebf-d45d9e235332 req-813776cb-a414-4d64-8e14-41d3285ac065 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f55fc2f8-9116-4e95-926a-c40ebfaf68d1] Processing event network-vif-plugged-b9c116ad-c036-42e3-965d-656439eface1 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Dec  6 02:41:21 np0005548731 nova_compute[232433]: 2025-12-06 07:41:21.463 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:41:21 np0005548731 NetworkManager[49182]: <info>  [1765006881.4639] manager: (tap35a27638-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/296)
Dec  6 02:41:21 np0005548731 kernel: tap35a27638-30: entered promiscuous mode
Dec  6 02:41:21 np0005548731 nova_compute[232433]: 2025-12-06 07:41:21.464 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:41:21 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:41:21.468 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap35a27638-30, col_values=(('external_ids', {'iface-id': '5e371956-96bf-49df-b4a8-97154044dc54'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:41:21 np0005548731 nova_compute[232433]: 2025-12-06 07:41:21.470 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:41:21 np0005548731 ovn_controller[133927]: 2025-12-06T07:41:21Z|00655|binding|INFO|Releasing lport 5e371956-96bf-49df-b4a8-97154044dc54 from this chassis (sb_readonly=0)
Dec  6 02:41:21 np0005548731 nova_compute[232433]: 2025-12-06 07:41:21.470 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:41:21 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:41:21.474 143965 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/35a27638-382c-4afb-83b0-edd6d7f4bca8.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/35a27638-382c-4afb-83b0-edd6d7f4bca8.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Dec  6 02:41:21 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:41:21.475 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[f8caef1a-6903-4d31-9663-b04872323e68]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:41:21 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:41:21.476 143965 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec  6 02:41:21 np0005548731 ovn_metadata_agent[143960]: global
Dec  6 02:41:21 np0005548731 ovn_metadata_agent[143960]:    log         /dev/log local0 debug
Dec  6 02:41:21 np0005548731 ovn_metadata_agent[143960]:    log-tag     haproxy-metadata-proxy-35a27638-382c-4afb-83b0-edd6d7f4bca8
Dec  6 02:41:21 np0005548731 ovn_metadata_agent[143960]:    user        root
Dec  6 02:41:21 np0005548731 ovn_metadata_agent[143960]:    group       root
Dec  6 02:41:21 np0005548731 ovn_metadata_agent[143960]:    maxconn     1024
Dec  6 02:41:21 np0005548731 ovn_metadata_agent[143960]:    pidfile     /var/lib/neutron/external/pids/35a27638-382c-4afb-83b0-edd6d7f4bca8.pid.haproxy
Dec  6 02:41:21 np0005548731 ovn_metadata_agent[143960]:    daemon
Dec  6 02:41:21 np0005548731 ovn_metadata_agent[143960]: 
Dec  6 02:41:21 np0005548731 ovn_metadata_agent[143960]: defaults
Dec  6 02:41:21 np0005548731 ovn_metadata_agent[143960]:    log global
Dec  6 02:41:21 np0005548731 ovn_metadata_agent[143960]:    mode http
Dec  6 02:41:21 np0005548731 ovn_metadata_agent[143960]:    option httplog
Dec  6 02:41:21 np0005548731 ovn_metadata_agent[143960]:    option dontlognull
Dec  6 02:41:21 np0005548731 ovn_metadata_agent[143960]:    option http-server-close
Dec  6 02:41:21 np0005548731 ovn_metadata_agent[143960]:    option forwardfor
Dec  6 02:41:21 np0005548731 ovn_metadata_agent[143960]:    retries                 3
Dec  6 02:41:21 np0005548731 ovn_metadata_agent[143960]:    timeout http-request    30s
Dec  6 02:41:21 np0005548731 ovn_metadata_agent[143960]:    timeout connect         30s
Dec  6 02:41:21 np0005548731 ovn_metadata_agent[143960]:    timeout client          32s
Dec  6 02:41:21 np0005548731 ovn_metadata_agent[143960]:    timeout server          32s
Dec  6 02:41:21 np0005548731 ovn_metadata_agent[143960]:    timeout http-keep-alive 30s
Dec  6 02:41:21 np0005548731 ovn_metadata_agent[143960]: 
Dec  6 02:41:21 np0005548731 ovn_metadata_agent[143960]: 
Dec  6 02:41:21 np0005548731 ovn_metadata_agent[143960]: listen listener
Dec  6 02:41:21 np0005548731 ovn_metadata_agent[143960]:    bind 169.254.169.254:80
Dec  6 02:41:21 np0005548731 ovn_metadata_agent[143960]:    server metadata /var/lib/neutron/metadata_proxy
Dec  6 02:41:21 np0005548731 ovn_metadata_agent[143960]:    http-request add-header X-OVN-Network-ID 35a27638-382c-4afb-83b0-edd6d7f4bca8
Dec  6 02:41:21 np0005548731 ovn_metadata_agent[143960]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Dec  6 02:41:21 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:41:21.477 143965 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-35a27638-382c-4afb-83b0-edd6d7f4bca8', 'env', 'PROCESS_TAG=haproxy-35a27638-382c-4afb-83b0-edd6d7f4bca8', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/35a27638-382c-4afb-83b0-edd6d7f4bca8.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Dec  6 02:41:21 np0005548731 nova_compute[232433]: 2025-12-06 07:41:21.484 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:41:21 np0005548731 nova_compute[232433]: 2025-12-06 07:41:21.598 232437 DEBUG nova.virt.driver [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Emitting event <LifecycleEvent: 1765006881.5978615, f55fc2f8-9116-4e95-926a-c40ebfaf68d1 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 02:41:21 np0005548731 nova_compute[232433]: 2025-12-06 07:41:21.598 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: f55fc2f8-9116-4e95-926a-c40ebfaf68d1] VM Started (Lifecycle Event)#033[00m
Dec  6 02:41:21 np0005548731 nova_compute[232433]: 2025-12-06 07:41:21.600 232437 DEBUG nova.compute.manager [None req-67d35fbc-d782-4ad0-a11d-df78aada2fbd 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] [instance: f55fc2f8-9116-4e95-926a-c40ebfaf68d1] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec  6 02:41:21 np0005548731 nova_compute[232433]: 2025-12-06 07:41:21.603 232437 DEBUG nova.virt.libvirt.driver [None req-67d35fbc-d782-4ad0-a11d-df78aada2fbd 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] [instance: f55fc2f8-9116-4e95-926a-c40ebfaf68d1] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Dec  6 02:41:21 np0005548731 nova_compute[232433]: 2025-12-06 07:41:21.606 232437 INFO nova.virt.libvirt.driver [-] [instance: f55fc2f8-9116-4e95-926a-c40ebfaf68d1] Instance spawned successfully.#033[00m
Dec  6 02:41:21 np0005548731 nova_compute[232433]: 2025-12-06 07:41:21.607 232437 DEBUG nova.virt.libvirt.driver [None req-67d35fbc-d782-4ad0-a11d-df78aada2fbd 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] [instance: f55fc2f8-9116-4e95-926a-c40ebfaf68d1] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Dec  6 02:41:21 np0005548731 nova_compute[232433]: 2025-12-06 07:41:21.652 232437 DEBUG nova.virt.libvirt.driver [None req-67d35fbc-d782-4ad0-a11d-df78aada2fbd 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] [instance: f55fc2f8-9116-4e95-926a-c40ebfaf68d1] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 02:41:21 np0005548731 nova_compute[232433]: 2025-12-06 07:41:21.652 232437 DEBUG nova.virt.libvirt.driver [None req-67d35fbc-d782-4ad0-a11d-df78aada2fbd 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] [instance: f55fc2f8-9116-4e95-926a-c40ebfaf68d1] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 02:41:21 np0005548731 nova_compute[232433]: 2025-12-06 07:41:21.653 232437 DEBUG nova.virt.libvirt.driver [None req-67d35fbc-d782-4ad0-a11d-df78aada2fbd 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] [instance: f55fc2f8-9116-4e95-926a-c40ebfaf68d1] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 02:41:21 np0005548731 nova_compute[232433]: 2025-12-06 07:41:21.653 232437 DEBUG nova.virt.libvirt.driver [None req-67d35fbc-d782-4ad0-a11d-df78aada2fbd 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] [instance: f55fc2f8-9116-4e95-926a-c40ebfaf68d1] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 02:41:21 np0005548731 nova_compute[232433]: 2025-12-06 07:41:21.654 232437 DEBUG nova.virt.libvirt.driver [None req-67d35fbc-d782-4ad0-a11d-df78aada2fbd 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] [instance: f55fc2f8-9116-4e95-926a-c40ebfaf68d1] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 02:41:21 np0005548731 nova_compute[232433]: 2025-12-06 07:41:21.654 232437 DEBUG nova.virt.libvirt.driver [None req-67d35fbc-d782-4ad0-a11d-df78aada2fbd 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] [instance: f55fc2f8-9116-4e95-926a-c40ebfaf68d1] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 02:41:21 np0005548731 nova_compute[232433]: 2025-12-06 07:41:21.658 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: f55fc2f8-9116-4e95-926a-c40ebfaf68d1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:41:21 np0005548731 nova_compute[232433]: 2025-12-06 07:41:21.661 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: f55fc2f8-9116-4e95-926a-c40ebfaf68d1] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  6 02:41:21 np0005548731 nova_compute[232433]: 2025-12-06 07:41:21.727 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: f55fc2f8-9116-4e95-926a-c40ebfaf68d1] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  6 02:41:21 np0005548731 nova_compute[232433]: 2025-12-06 07:41:21.727 232437 DEBUG nova.virt.driver [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Emitting event <LifecycleEvent: 1765006881.5979838, f55fc2f8-9116-4e95-926a-c40ebfaf68d1 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 02:41:21 np0005548731 nova_compute[232433]: 2025-12-06 07:41:21.727 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: f55fc2f8-9116-4e95-926a-c40ebfaf68d1] VM Paused (Lifecycle Event)#033[00m
Dec  6 02:41:21 np0005548731 nova_compute[232433]: 2025-12-06 07:41:21.766 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: f55fc2f8-9116-4e95-926a-c40ebfaf68d1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:41:21 np0005548731 nova_compute[232433]: 2025-12-06 07:41:21.770 232437 DEBUG nova.virt.driver [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Emitting event <LifecycleEvent: 1765006881.6025836, f55fc2f8-9116-4e95-926a-c40ebfaf68d1 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 02:41:21 np0005548731 nova_compute[232433]: 2025-12-06 07:41:21.770 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: f55fc2f8-9116-4e95-926a-c40ebfaf68d1] VM Resumed (Lifecycle Event)#033[00m
Dec  6 02:41:21 np0005548731 nova_compute[232433]: 2025-12-06 07:41:21.782 232437 INFO nova.compute.manager [None req-67d35fbc-d782-4ad0-a11d-df78aada2fbd 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] [instance: f55fc2f8-9116-4e95-926a-c40ebfaf68d1] Took 9.58 seconds to spawn the instance on the hypervisor.#033[00m
Dec  6 02:41:21 np0005548731 nova_compute[232433]: 2025-12-06 07:41:21.782 232437 DEBUG nova.compute.manager [None req-67d35fbc-d782-4ad0-a11d-df78aada2fbd 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] [instance: f55fc2f8-9116-4e95-926a-c40ebfaf68d1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:41:21 np0005548731 nova_compute[232433]: 2025-12-06 07:41:21.797 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: f55fc2f8-9116-4e95-926a-c40ebfaf68d1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:41:21 np0005548731 nova_compute[232433]: 2025-12-06 07:41:21.801 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: f55fc2f8-9116-4e95-926a-c40ebfaf68d1] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  6 02:41:21 np0005548731 nova_compute[232433]: 2025-12-06 07:41:21.827 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: f55fc2f8-9116-4e95-926a-c40ebfaf68d1] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  6 02:41:21 np0005548731 nova_compute[232433]: 2025-12-06 07:41:21.846 232437 INFO nova.compute.manager [None req-67d35fbc-d782-4ad0-a11d-df78aada2fbd 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] [instance: f55fc2f8-9116-4e95-926a-c40ebfaf68d1] Took 10.81 seconds to build instance.#033[00m
Dec  6 02:41:21 np0005548731 podman[294137]: 2025-12-06 07:41:21.854432146 +0000 UTC m=+0.055630476 container create bd40f62f82dbb79a3af9a0b583e560f63acb3215f01956f8f959d24a122888d2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-35a27638-382c-4afb-83b0-edd6d7f4bca8, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  6 02:41:21 np0005548731 nova_compute[232433]: 2025-12-06 07:41:21.862 232437 DEBUG oslo_concurrency.lockutils [None req-67d35fbc-d782-4ad0-a11d-df78aada2fbd 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Lock "f55fc2f8-9116-4e95-926a-c40ebfaf68d1" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.921s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:41:21 np0005548731 systemd[1]: Started libpod-conmon-bd40f62f82dbb79a3af9a0b583e560f63acb3215f01956f8f959d24a122888d2.scope.
Dec  6 02:41:21 np0005548731 podman[294137]: 2025-12-06 07:41:21.823208345 +0000 UTC m=+0.024406675 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec  6 02:41:21 np0005548731 systemd[1]: Started libcrun container.
Dec  6 02:41:21 np0005548731 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c44119b54993efa98be2f157f4f05f98776379bbf24afcb25ce552960c61a376/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec  6 02:41:21 np0005548731 podman[294137]: 2025-12-06 07:41:21.966742722 +0000 UTC m=+0.167941072 container init bd40f62f82dbb79a3af9a0b583e560f63acb3215f01956f8f959d24a122888d2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-35a27638-382c-4afb-83b0-edd6d7f4bca8, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0)
Dec  6 02:41:21 np0005548731 podman[294137]: 2025-12-06 07:41:21.972253247 +0000 UTC m=+0.173451577 container start bd40f62f82dbb79a3af9a0b583e560f63acb3215f01956f8f959d24a122888d2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-35a27638-382c-4afb-83b0-edd6d7f4bca8, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  6 02:41:21 np0005548731 neutron-haproxy-ovnmeta-35a27638-382c-4afb-83b0-edd6d7f4bca8[294152]: [NOTICE]   (294156) : New worker (294158) forked
Dec  6 02:41:21 np0005548731 neutron-haproxy-ovnmeta-35a27638-382c-4afb-83b0-edd6d7f4bca8[294152]: [NOTICE]   (294156) : Loading success.
Dec  6 02:41:22 np0005548731 nova_compute[232433]: 2025-12-06 07:41:22.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:41:22 np0005548731 nova_compute[232433]: 2025-12-06 07:41:22.106 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:41:22 np0005548731 nova_compute[232433]: 2025-12-06 07:41:22.145 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:41:22 np0005548731 nova_compute[232433]: 2025-12-06 07:41:22.145 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:41:22 np0005548731 nova_compute[232433]: 2025-12-06 07:41:22.146 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:41:22 np0005548731 nova_compute[232433]: 2025-12-06 07:41:22.146 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  6 02:41:22 np0005548731 nova_compute[232433]: 2025-12-06 07:41:22.146 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:41:22 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:41:22 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:41:22 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:41:22.243 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:41:22 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 02:41:22 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3397156865' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 02:41:22 np0005548731 nova_compute[232433]: 2025-12-06 07:41:22.643 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.496s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:41:22 np0005548731 nova_compute[232433]: 2025-12-06 07:41:22.710 232437 DEBUG nova.virt.libvirt.driver [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] skipping disk for instance-0000008c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  6 02:41:22 np0005548731 nova_compute[232433]: 2025-12-06 07:41:22.711 232437 DEBUG nova.virt.libvirt.driver [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] skipping disk for instance-0000008c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  6 02:41:22 np0005548731 nova_compute[232433]: 2025-12-06 07:41:22.869 232437 WARNING nova.virt.libvirt.driver [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  6 02:41:22 np0005548731 nova_compute[232433]: 2025-12-06 07:41:22.870 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4274MB free_disk=20.830978393554688GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, 
"label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  6 02:41:22 np0005548731 nova_compute[232433]: 2025-12-06 07:41:22.871 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:41:22 np0005548731 nova_compute[232433]: 2025-12-06 07:41:22.872 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:41:22 np0005548731 nova_compute[232433]: 2025-12-06 07:41:22.969 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Instance f55fc2f8-9116-4e95-926a-c40ebfaf68d1 actively managed on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128, 'DISK_GB': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  6 02:41:22 np0005548731 nova_compute[232433]: 2025-12-06 07:41:22.970 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  6 02:41:22 np0005548731 nova_compute[232433]: 2025-12-06 07:41:22.971 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  6 02:41:23 np0005548731 nova_compute[232433]: 2025-12-06 07:41:23.018 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:41:23 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:41:23 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:41:23 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:41:23.063 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:41:23 np0005548731 nova_compute[232433]: 2025-12-06 07:41:23.193 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:41:23 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 02:41:23 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1794384952' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 02:41:23 np0005548731 nova_compute[232433]: 2025-12-06 07:41:23.453 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.435s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:41:23 np0005548731 nova_compute[232433]: 2025-12-06 07:41:23.458 232437 DEBUG nova.compute.provider_tree [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Inventory has not changed in ProviderTree for provider: 6d00757a-082f-486d-ae84-869a2ba2e6e7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  6 02:41:23 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e323 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:41:23 np0005548731 nova_compute[232433]: 2025-12-06 07:41:23.509 232437 DEBUG nova.scheduler.client.report [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Inventory has not changed for provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  6 02:41:23 np0005548731 nova_compute[232433]: 2025-12-06 07:41:23.728 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  6 02:41:23 np0005548731 nova_compute[232433]: 2025-12-06 07:41:23.728 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.857s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:41:23 np0005548731 nova_compute[232433]: 2025-12-06 07:41:23.806 232437 DEBUG nova.compute.manager [req-03a4da3d-0837-49f5-a2a0-8df7e6189460 req-42802db2-0863-4c0e-aecd-acc0831e13a1 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f55fc2f8-9116-4e95-926a-c40ebfaf68d1] Received event network-vif-plugged-b9c116ad-c036-42e3-965d-656439eface1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:41:23 np0005548731 nova_compute[232433]: 2025-12-06 07:41:23.807 232437 DEBUG oslo_concurrency.lockutils [req-03a4da3d-0837-49f5-a2a0-8df7e6189460 req-42802db2-0863-4c0e-aecd-acc0831e13a1 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "f55fc2f8-9116-4e95-926a-c40ebfaf68d1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:41:23 np0005548731 nova_compute[232433]: 2025-12-06 07:41:23.807 232437 DEBUG oslo_concurrency.lockutils [req-03a4da3d-0837-49f5-a2a0-8df7e6189460 req-42802db2-0863-4c0e-aecd-acc0831e13a1 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "f55fc2f8-9116-4e95-926a-c40ebfaf68d1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:41:23 np0005548731 nova_compute[232433]: 2025-12-06 07:41:23.807 232437 DEBUG oslo_concurrency.lockutils [req-03a4da3d-0837-49f5-a2a0-8df7e6189460 req-42802db2-0863-4c0e-aecd-acc0831e13a1 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "f55fc2f8-9116-4e95-926a-c40ebfaf68d1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:41:23 np0005548731 nova_compute[232433]: 2025-12-06 07:41:23.807 232437 DEBUG nova.compute.manager [req-03a4da3d-0837-49f5-a2a0-8df7e6189460 req-42802db2-0863-4c0e-aecd-acc0831e13a1 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f55fc2f8-9116-4e95-926a-c40ebfaf68d1] No waiting events found dispatching network-vif-plugged-b9c116ad-c036-42e3-965d-656439eface1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 02:41:23 np0005548731 nova_compute[232433]: 2025-12-06 07:41:23.808 232437 WARNING nova.compute.manager [req-03a4da3d-0837-49f5-a2a0-8df7e6189460 req-42802db2-0863-4c0e-aecd-acc0831e13a1 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f55fc2f8-9116-4e95-926a-c40ebfaf68d1] Received unexpected event network-vif-plugged-b9c116ad-c036-42e3-965d-656439eface1 for instance with vm_state active and task_state None.#033[00m
Dec  6 02:41:23 np0005548731 nova_compute[232433]: 2025-12-06 07:41:23.816 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:41:24 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:41:24 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:41:24 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:41:24.245 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:41:24 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e324 e324: 3 total, 3 up, 3 in
Dec  6 02:41:24 np0005548731 nova_compute[232433]: 2025-12-06 07:41:24.630 232437 DEBUG oslo_concurrency.lockutils [None req-361b4254-d95b-4694-968a-58521df8f063 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Acquiring lock "f55fc2f8-9116-4e95-926a-c40ebfaf68d1" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:41:24 np0005548731 nova_compute[232433]: 2025-12-06 07:41:24.630 232437 DEBUG oslo_concurrency.lockutils [None req-361b4254-d95b-4694-968a-58521df8f063 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Lock "f55fc2f8-9116-4e95-926a-c40ebfaf68d1" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:41:24 np0005548731 nova_compute[232433]: 2025-12-06 07:41:24.631 232437 DEBUG oslo_concurrency.lockutils [None req-361b4254-d95b-4694-968a-58521df8f063 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Acquiring lock "f55fc2f8-9116-4e95-926a-c40ebfaf68d1-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:41:24 np0005548731 nova_compute[232433]: 2025-12-06 07:41:24.631 232437 DEBUG oslo_concurrency.lockutils [None req-361b4254-d95b-4694-968a-58521df8f063 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Lock "f55fc2f8-9116-4e95-926a-c40ebfaf68d1-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:41:24 np0005548731 nova_compute[232433]: 2025-12-06 07:41:24.631 232437 DEBUG oslo_concurrency.lockutils [None req-361b4254-d95b-4694-968a-58521df8f063 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Lock "f55fc2f8-9116-4e95-926a-c40ebfaf68d1-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:41:24 np0005548731 nova_compute[232433]: 2025-12-06 07:41:24.633 232437 INFO nova.compute.manager [None req-361b4254-d95b-4694-968a-58521df8f063 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] [instance: f55fc2f8-9116-4e95-926a-c40ebfaf68d1] Terminating instance#033[00m
Dec  6 02:41:24 np0005548731 nova_compute[232433]: 2025-12-06 07:41:24.634 232437 DEBUG nova.compute.manager [None req-361b4254-d95b-4694-968a-58521df8f063 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] [instance: f55fc2f8-9116-4e95-926a-c40ebfaf68d1] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec  6 02:41:24 np0005548731 kernel: tapb9c116ad-c0 (unregistering): left promiscuous mode
Dec  6 02:41:24 np0005548731 NetworkManager[49182]: <info>  [1765006884.6732] device (tapb9c116ad-c0): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec  6 02:41:24 np0005548731 nova_compute[232433]: 2025-12-06 07:41:24.683 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:41:24 np0005548731 ovn_controller[133927]: 2025-12-06T07:41:24Z|00656|binding|INFO|Releasing lport b9c116ad-c036-42e3-965d-656439eface1 from this chassis (sb_readonly=0)
Dec  6 02:41:24 np0005548731 ovn_controller[133927]: 2025-12-06T07:41:24Z|00657|binding|INFO|Setting lport b9c116ad-c036-42e3-965d-656439eface1 down in Southbound
Dec  6 02:41:24 np0005548731 ovn_controller[133927]: 2025-12-06T07:41:24Z|00658|binding|INFO|Removing iface tapb9c116ad-c0 ovn-installed in OVS
Dec  6 02:41:24 np0005548731 nova_compute[232433]: 2025-12-06 07:41:24.686 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:41:24 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:41:24.695 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:12:5c:54 10.100.0.10'], port_security=['fa:16:3e:12:5c:54 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'f55fc2f8-9116-4e95-926a-c40ebfaf68d1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-35a27638-382c-4afb-83b0-edd6d7f4bca8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'eff1f6a1654b45079de20eddb830e76d', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'b5b8e710-017e-4606-9067-bf1900949ed3', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=80d3c5d2-eecc-4e72-bceb-41384af759f0, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], logical_port=b9c116ad-c036-42e3-965d-656439eface1) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 02:41:24 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:41:24.696 143965 INFO neutron.agent.ovn.metadata.agent [-] Port b9c116ad-c036-42e3-965d-656439eface1 in datapath 35a27638-382c-4afb-83b0-edd6d7f4bca8 unbound from our chassis#033[00m
Dec  6 02:41:24 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:41:24.698 143965 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 35a27638-382c-4afb-83b0-edd6d7f4bca8, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec  6 02:41:24 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:41:24.699 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[bdbde347-d169-47c3-bb04-d98dc5b8b72c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:41:24 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:41:24.699 143965 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-35a27638-382c-4afb-83b0-edd6d7f4bca8 namespace which is not needed anymore#033[00m
Dec  6 02:41:24 np0005548731 nova_compute[232433]: 2025-12-06 07:41:24.703 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:41:24 np0005548731 nova_compute[232433]: 2025-12-06 07:41:24.728 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:41:24 np0005548731 systemd[1]: machine-qemu\x2d65\x2dinstance\x2d0000008c.scope: Deactivated successfully.
Dec  6 02:41:24 np0005548731 systemd[1]: machine-qemu\x2d65\x2dinstance\x2d0000008c.scope: Consumed 3.500s CPU time.
Dec  6 02:41:24 np0005548731 systemd-machined[195355]: Machine qemu-65-instance-0000008c terminated.
Dec  6 02:41:24 np0005548731 nova_compute[232433]: 2025-12-06 07:41:24.747 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:41:24 np0005548731 nova_compute[232433]: 2025-12-06 07:41:24.747 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:41:24 np0005548731 neutron-haproxy-ovnmeta-35a27638-382c-4afb-83b0-edd6d7f4bca8[294152]: [NOTICE]   (294156) : haproxy version is 2.8.14-c23fe91
Dec  6 02:41:24 np0005548731 neutron-haproxy-ovnmeta-35a27638-382c-4afb-83b0-edd6d7f4bca8[294152]: [NOTICE]   (294156) : path to executable is /usr/sbin/haproxy
Dec  6 02:41:24 np0005548731 neutron-haproxy-ovnmeta-35a27638-382c-4afb-83b0-edd6d7f4bca8[294152]: [WARNING]  (294156) : Exiting Master process...
Dec  6 02:41:24 np0005548731 neutron-haproxy-ovnmeta-35a27638-382c-4afb-83b0-edd6d7f4bca8[294152]: [WARNING]  (294156) : Exiting Master process...
Dec  6 02:41:24 np0005548731 neutron-haproxy-ovnmeta-35a27638-382c-4afb-83b0-edd6d7f4bca8[294152]: [ALERT]    (294156) : Current worker (294158) exited with code 143 (Terminated)
Dec  6 02:41:24 np0005548731 neutron-haproxy-ovnmeta-35a27638-382c-4afb-83b0-edd6d7f4bca8[294152]: [WARNING]  (294156) : All workers exited. Exiting... (0)
Dec  6 02:41:24 np0005548731 systemd[1]: libpod-bd40f62f82dbb79a3af9a0b583e560f63acb3215f01956f8f959d24a122888d2.scope: Deactivated successfully.
Dec  6 02:41:24 np0005548731 podman[294238]: 2025-12-06 07:41:24.825690186 +0000 UTC m=+0.042173089 container died bd40f62f82dbb79a3af9a0b583e560f63acb3215f01956f8f959d24a122888d2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-35a27638-382c-4afb-83b0-edd6d7f4bca8, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Dec  6 02:41:24 np0005548731 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-bd40f62f82dbb79a3af9a0b583e560f63acb3215f01956f8f959d24a122888d2-userdata-shm.mount: Deactivated successfully.
Dec  6 02:41:24 np0005548731 systemd[1]: var-lib-containers-storage-overlay-c44119b54993efa98be2f157f4f05f98776379bbf24afcb25ce552960c61a376-merged.mount: Deactivated successfully.
Dec  6 02:41:24 np0005548731 podman[294238]: 2025-12-06 07:41:24.859926029 +0000 UTC m=+0.076408932 container cleanup bd40f62f82dbb79a3af9a0b583e560f63acb3215f01956f8f959d24a122888d2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-35a27638-382c-4afb-83b0-edd6d7f4bca8, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Dec  6 02:41:24 np0005548731 nova_compute[232433]: 2025-12-06 07:41:24.867 232437 INFO nova.virt.libvirt.driver [-] [instance: f55fc2f8-9116-4e95-926a-c40ebfaf68d1] Instance destroyed successfully.#033[00m
Dec  6 02:41:24 np0005548731 nova_compute[232433]: 2025-12-06 07:41:24.867 232437 DEBUG nova.objects.instance [None req-361b4254-d95b-4694-968a-58521df8f063 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Lazy-loading 'resources' on Instance uuid f55fc2f8-9116-4e95-926a-c40ebfaf68d1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:41:24 np0005548731 systemd[1]: libpod-conmon-bd40f62f82dbb79a3af9a0b583e560f63acb3215f01956f8f959d24a122888d2.scope: Deactivated successfully.
Dec  6 02:41:24 np0005548731 nova_compute[232433]: 2025-12-06 07:41:24.891 232437 DEBUG nova.virt.libvirt.vif [None req-361b4254-d95b-4694-968a-58521df8f063 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=2001:2001::3,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T07:41:09Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestJSON-server-611064881',display_name='tempest-ServersTestJSON-server-611064881',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serverstestjson-server-611064881',id=140,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-06T07:41:21Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='eff1f6a1654b45079de20eddb830e76d',ramdisk_id='',reservation_id='r-8z2tcffj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1'
,image_min_ram='0',owner_project_name='tempest-ServersTestJSON-374151197',owner_user_name='tempest-ServersTestJSON-374151197-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T07:41:21Z,user_data=None,user_id='0d8b62a3276f4a8b8349af67b82134c8',uuid=f55fc2f8-9116-4e95-926a-c40ebfaf68d1,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "b9c116ad-c036-42e3-965d-656439eface1", "address": "fa:16:3e:12:5c:54", "network": {"id": "35a27638-382c-4afb-83b0-edd6d7f4bca8", "bridge": "br-int", "label": "tempest-ServersTestJSON-1603796324-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eff1f6a1654b45079de20eddb830e76d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb9c116ad-c0", "ovs_interfaceid": "b9c116ad-c036-42e3-965d-656439eface1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec  6 02:41:24 np0005548731 nova_compute[232433]: 2025-12-06 07:41:24.892 232437 DEBUG nova.network.os_vif_util [None req-361b4254-d95b-4694-968a-58521df8f063 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Converting VIF {"id": "b9c116ad-c036-42e3-965d-656439eface1", "address": "fa:16:3e:12:5c:54", "network": {"id": "35a27638-382c-4afb-83b0-edd6d7f4bca8", "bridge": "br-int", "label": "tempest-ServersTestJSON-1603796324-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eff1f6a1654b45079de20eddb830e76d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb9c116ad-c0", "ovs_interfaceid": "b9c116ad-c036-42e3-965d-656439eface1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 02:41:24 np0005548731 nova_compute[232433]: 2025-12-06 07:41:24.893 232437 DEBUG nova.network.os_vif_util [None req-361b4254-d95b-4694-968a-58521df8f063 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:12:5c:54,bridge_name='br-int',has_traffic_filtering=True,id=b9c116ad-c036-42e3-965d-656439eface1,network=Network(35a27638-382c-4afb-83b0-edd6d7f4bca8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb9c116ad-c0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 02:41:24 np0005548731 nova_compute[232433]: 2025-12-06 07:41:24.894 232437 DEBUG os_vif [None req-361b4254-d95b-4694-968a-58521df8f063 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:12:5c:54,bridge_name='br-int',has_traffic_filtering=True,id=b9c116ad-c036-42e3-965d-656439eface1,network=Network(35a27638-382c-4afb-83b0-edd6d7f4bca8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb9c116ad-c0') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec  6 02:41:24 np0005548731 nova_compute[232433]: 2025-12-06 07:41:24.895 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:41:24 np0005548731 nova_compute[232433]: 2025-12-06 07:41:24.896 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb9c116ad-c0, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:41:24 np0005548731 nova_compute[232433]: 2025-12-06 07:41:24.897 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:41:24 np0005548731 nova_compute[232433]: 2025-12-06 07:41:24.898 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:41:24 np0005548731 nova_compute[232433]: 2025-12-06 07:41:24.900 232437 INFO os_vif [None req-361b4254-d95b-4694-968a-58521df8f063 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:12:5c:54,bridge_name='br-int',has_traffic_filtering=True,id=b9c116ad-c036-42e3-965d-656439eface1,network=Network(35a27638-382c-4afb-83b0-edd6d7f4bca8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb9c116ad-c0')#033[00m
Dec  6 02:41:24 np0005548731 podman[294277]: 2025-12-06 07:41:24.921311665 +0000 UTC m=+0.037981546 container remove bd40f62f82dbb79a3af9a0b583e560f63acb3215f01956f8f959d24a122888d2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-35a27638-382c-4afb-83b0-edd6d7f4bca8, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec  6 02:41:24 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:41:24.926 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[8c91bcbe-db2d-4dae-80e4-75a19916a99b]: (4, ('Sat Dec  6 07:41:24 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-35a27638-382c-4afb-83b0-edd6d7f4bca8 (bd40f62f82dbb79a3af9a0b583e560f63acb3215f01956f8f959d24a122888d2)\nbd40f62f82dbb79a3af9a0b583e560f63acb3215f01956f8f959d24a122888d2\nSat Dec  6 07:41:24 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-35a27638-382c-4afb-83b0-edd6d7f4bca8 (bd40f62f82dbb79a3af9a0b583e560f63acb3215f01956f8f959d24a122888d2)\nbd40f62f82dbb79a3af9a0b583e560f63acb3215f01956f8f959d24a122888d2\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:41:24 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:41:24.928 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[c6199066-8689-40de-995f-d591c1ea5581]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:41:24 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:41:24.929 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap35a27638-30, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:41:24 np0005548731 nova_compute[232433]: 2025-12-06 07:41:24.930 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:41:24 np0005548731 kernel: tap35a27638-30: left promiscuous mode
Dec  6 02:41:24 np0005548731 nova_compute[232433]: 2025-12-06 07:41:24.944 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:41:24 np0005548731 nova_compute[232433]: 2025-12-06 07:41:24.945 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:41:24 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:41:24.946 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[e05d9907-d5ed-46de-9b53-1be86191f096]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:41:24 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:41:24.961 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[30121076-98a0-4999-b817-39211fd3c5e7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:41:24 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:41:24.962 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[677d27db-e0f0-4cf3-9049-a0ba1214eab2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:41:24 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:41:24.977 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[22091b0e-d5cb-4fcf-a57e-ae502e347ae3]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 714325, 'reachable_time': 39830, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 294307, 'error': None, 'target': 'ovnmeta-35a27638-382c-4afb-83b0-edd6d7f4bca8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:41:24 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:41:24.980 144079 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-35a27638-382c-4afb-83b0-edd6d7f4bca8 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Dec  6 02:41:24 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:41:24.980 144079 DEBUG oslo.privsep.daemon [-] privsep: reply[da23f17e-b170-4c59-b905-a1bd3cb9b1ed]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:41:24 np0005548731 systemd[1]: run-netns-ovnmeta\x2d35a27638\x2d382c\x2d4afb\x2d83b0\x2dedd6d7f4bca8.mount: Deactivated successfully.
Dec  6 02:41:25 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:41:25 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:41:25 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:41:25.066 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:41:25 np0005548731 nova_compute[232433]: 2025-12-06 07:41:25.914 232437 DEBUG nova.compute.manager [req-dcf5326c-7ae0-41a7-9922-edb5e73f9504 req-2753aaef-2bd2-43e8-a5c9-78b24fd55c5e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f55fc2f8-9116-4e95-926a-c40ebfaf68d1] Received event network-vif-unplugged-b9c116ad-c036-42e3-965d-656439eface1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:41:25 np0005548731 nova_compute[232433]: 2025-12-06 07:41:25.916 232437 DEBUG oslo_concurrency.lockutils [req-dcf5326c-7ae0-41a7-9922-edb5e73f9504 req-2753aaef-2bd2-43e8-a5c9-78b24fd55c5e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "f55fc2f8-9116-4e95-926a-c40ebfaf68d1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:41:25 np0005548731 nova_compute[232433]: 2025-12-06 07:41:25.916 232437 DEBUG oslo_concurrency.lockutils [req-dcf5326c-7ae0-41a7-9922-edb5e73f9504 req-2753aaef-2bd2-43e8-a5c9-78b24fd55c5e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "f55fc2f8-9116-4e95-926a-c40ebfaf68d1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:41:25 np0005548731 nova_compute[232433]: 2025-12-06 07:41:25.916 232437 DEBUG oslo_concurrency.lockutils [req-dcf5326c-7ae0-41a7-9922-edb5e73f9504 req-2753aaef-2bd2-43e8-a5c9-78b24fd55c5e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "f55fc2f8-9116-4e95-926a-c40ebfaf68d1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:41:25 np0005548731 nova_compute[232433]: 2025-12-06 07:41:25.916 232437 DEBUG nova.compute.manager [req-dcf5326c-7ae0-41a7-9922-edb5e73f9504 req-2753aaef-2bd2-43e8-a5c9-78b24fd55c5e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f55fc2f8-9116-4e95-926a-c40ebfaf68d1] No waiting events found dispatching network-vif-unplugged-b9c116ad-c036-42e3-965d-656439eface1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 02:41:25 np0005548731 nova_compute[232433]: 2025-12-06 07:41:25.916 232437 DEBUG nova.compute.manager [req-dcf5326c-7ae0-41a7-9922-edb5e73f9504 req-2753aaef-2bd2-43e8-a5c9-78b24fd55c5e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f55fc2f8-9116-4e95-926a-c40ebfaf68d1] Received event network-vif-unplugged-b9c116ad-c036-42e3-965d-656439eface1 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Dec  6 02:41:25 np0005548731 nova_compute[232433]: 2025-12-06 07:41:25.917 232437 DEBUG nova.compute.manager [req-dcf5326c-7ae0-41a7-9922-edb5e73f9504 req-2753aaef-2bd2-43e8-a5c9-78b24fd55c5e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f55fc2f8-9116-4e95-926a-c40ebfaf68d1] Received event network-vif-plugged-b9c116ad-c036-42e3-965d-656439eface1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:41:25 np0005548731 nova_compute[232433]: 2025-12-06 07:41:25.917 232437 DEBUG oslo_concurrency.lockutils [req-dcf5326c-7ae0-41a7-9922-edb5e73f9504 req-2753aaef-2bd2-43e8-a5c9-78b24fd55c5e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "f55fc2f8-9116-4e95-926a-c40ebfaf68d1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:41:25 np0005548731 nova_compute[232433]: 2025-12-06 07:41:25.917 232437 DEBUG oslo_concurrency.lockutils [req-dcf5326c-7ae0-41a7-9922-edb5e73f9504 req-2753aaef-2bd2-43e8-a5c9-78b24fd55c5e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "f55fc2f8-9116-4e95-926a-c40ebfaf68d1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:41:25 np0005548731 nova_compute[232433]: 2025-12-06 07:41:25.917 232437 DEBUG oslo_concurrency.lockutils [req-dcf5326c-7ae0-41a7-9922-edb5e73f9504 req-2753aaef-2bd2-43e8-a5c9-78b24fd55c5e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "f55fc2f8-9116-4e95-926a-c40ebfaf68d1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:41:25 np0005548731 nova_compute[232433]: 2025-12-06 07:41:25.918 232437 DEBUG nova.compute.manager [req-dcf5326c-7ae0-41a7-9922-edb5e73f9504 req-2753aaef-2bd2-43e8-a5c9-78b24fd55c5e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f55fc2f8-9116-4e95-926a-c40ebfaf68d1] No waiting events found dispatching network-vif-plugged-b9c116ad-c036-42e3-965d-656439eface1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 02:41:25 np0005548731 nova_compute[232433]: 2025-12-06 07:41:25.918 232437 WARNING nova.compute.manager [req-dcf5326c-7ae0-41a7-9922-edb5e73f9504 req-2753aaef-2bd2-43e8-a5c9-78b24fd55c5e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f55fc2f8-9116-4e95-926a-c40ebfaf68d1] Received unexpected event network-vif-plugged-b9c116ad-c036-42e3-965d-656439eface1 for instance with vm_state active and task_state deleting.#033[00m
Dec  6 02:41:26 np0005548731 nova_compute[232433]: 2025-12-06 07:41:26.148 232437 INFO nova.virt.libvirt.driver [None req-361b4254-d95b-4694-968a-58521df8f063 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] [instance: f55fc2f8-9116-4e95-926a-c40ebfaf68d1] Deleting instance files /var/lib/nova/instances/f55fc2f8-9116-4e95-926a-c40ebfaf68d1_del#033[00m
Dec  6 02:41:26 np0005548731 nova_compute[232433]: 2025-12-06 07:41:26.148 232437 INFO nova.virt.libvirt.driver [None req-361b4254-d95b-4694-968a-58521df8f063 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] [instance: f55fc2f8-9116-4e95-926a-c40ebfaf68d1] Deletion of /var/lib/nova/instances/f55fc2f8-9116-4e95-926a-c40ebfaf68d1_del complete#033[00m
Dec  6 02:41:26 np0005548731 nova_compute[232433]: 2025-12-06 07:41:26.205 232437 INFO nova.compute.manager [None req-361b4254-d95b-4694-968a-58521df8f063 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] [instance: f55fc2f8-9116-4e95-926a-c40ebfaf68d1] Took 1.57 seconds to destroy the instance on the hypervisor.#033[00m
Dec  6 02:41:26 np0005548731 nova_compute[232433]: 2025-12-06 07:41:26.206 232437 DEBUG oslo.service.loopingcall [None req-361b4254-d95b-4694-968a-58521df8f063 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Dec  6 02:41:26 np0005548731 nova_compute[232433]: 2025-12-06 07:41:26.206 232437 DEBUG nova.compute.manager [-] [instance: f55fc2f8-9116-4e95-926a-c40ebfaf68d1] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Dec  6 02:41:26 np0005548731 nova_compute[232433]: 2025-12-06 07:41:26.206 232437 DEBUG nova.network.neutron [-] [instance: f55fc2f8-9116-4e95-926a-c40ebfaf68d1] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Dec  6 02:41:26 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:41:26 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:41:26 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:41:26.247 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:41:26 np0005548731 nova_compute[232433]: 2025-12-06 07:41:26.930 232437 DEBUG nova.network.neutron [-] [instance: f55fc2f8-9116-4e95-926a-c40ebfaf68d1] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 02:41:26 np0005548731 nova_compute[232433]: 2025-12-06 07:41:26.955 232437 INFO nova.compute.manager [-] [instance: f55fc2f8-9116-4e95-926a-c40ebfaf68d1] Took 0.75 seconds to deallocate network for instance.#033[00m
Dec  6 02:41:27 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:41:27 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:41:27 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:41:27.068 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:41:27 np0005548731 nova_compute[232433]: 2025-12-06 07:41:27.104 232437 DEBUG oslo_concurrency.lockutils [None req-361b4254-d95b-4694-968a-58521df8f063 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:41:27 np0005548731 nova_compute[232433]: 2025-12-06 07:41:27.105 232437 DEBUG oslo_concurrency.lockutils [None req-361b4254-d95b-4694-968a-58521df8f063 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:41:27 np0005548731 nova_compute[232433]: 2025-12-06 07:41:27.131 232437 DEBUG nova.compute.manager [req-3a054ec9-f372-40a2-88e5-df40ffe04f9b req-8032965b-9bfc-4995-baa8-8046285edf97 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f55fc2f8-9116-4e95-926a-c40ebfaf68d1] Received event network-vif-deleted-b9c116ad-c036-42e3-965d-656439eface1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:41:27 np0005548731 nova_compute[232433]: 2025-12-06 07:41:27.168 232437 DEBUG oslo_concurrency.processutils [None req-361b4254-d95b-4694-968a-58521df8f063 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:41:27 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 02:41:27 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3383122313' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 02:41:27 np0005548731 nova_compute[232433]: 2025-12-06 07:41:27.669 232437 DEBUG oslo_concurrency.processutils [None req-361b4254-d95b-4694-968a-58521df8f063 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.501s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:41:27 np0005548731 nova_compute[232433]: 2025-12-06 07:41:27.677 232437 DEBUG nova.compute.provider_tree [None req-361b4254-d95b-4694-968a-58521df8f063 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Inventory has not changed in ProviderTree for provider: 6d00757a-082f-486d-ae84-869a2ba2e6e7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  6 02:41:27 np0005548731 nova_compute[232433]: 2025-12-06 07:41:27.700 232437 DEBUG nova.scheduler.client.report [None req-361b4254-d95b-4694-968a-58521df8f063 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Inventory has not changed for provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  6 02:41:27 np0005548731 nova_compute[232433]: 2025-12-06 07:41:27.722 232437 DEBUG oslo_concurrency.lockutils [None req-361b4254-d95b-4694-968a-58521df8f063 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.617s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:41:27 np0005548731 nova_compute[232433]: 2025-12-06 07:41:27.752 232437 INFO nova.scheduler.client.report [None req-361b4254-d95b-4694-968a-58521df8f063 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Deleted allocations for instance f55fc2f8-9116-4e95-926a-c40ebfaf68d1#033[00m
Dec  6 02:41:27 np0005548731 nova_compute[232433]: 2025-12-06 07:41:27.809 232437 DEBUG oslo_concurrency.lockutils [None req-361b4254-d95b-4694-968a-58521df8f063 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Lock "f55fc2f8-9116-4e95-926a-c40ebfaf68d1" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.179s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:41:28 np0005548731 nova_compute[232433]: 2025-12-06 07:41:28.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:41:28 np0005548731 nova_compute[232433]: 2025-12-06 07:41:28.195 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:41:28 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:41:28 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:41:28 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:41:28.249 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:41:28 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e324 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:41:29 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:41:29 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:41:29 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:41:29.071 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:41:29 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:41:29.152 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=55, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:ec:b3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '32:72:e7:89:e0:7d'}, ipsec=False) old=SB_Global(nb_cfg=54) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 02:41:29 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:41:29.153 143965 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec  6 02:41:29 np0005548731 nova_compute[232433]: 2025-12-06 07:41:29.183 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:41:29 np0005548731 nova_compute[232433]: 2025-12-06 07:41:29.898 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:41:30 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:41:30.155 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=9f96b960-b4f2-40bd-ae99-08121f5e8b78, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '55'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:41:30 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:41:30 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 02:41:30 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:41:30.251 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 02:41:31 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:41:31 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:41:31 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:41:31.074 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:41:32 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:41:32 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 02:41:32 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:41:32.254 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 02:41:33 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:41:33 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:41:33 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:41:33.078 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:41:33 np0005548731 nova_compute[232433]: 2025-12-06 07:41:33.197 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:41:33 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e324 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:41:33 np0005548731 nova_compute[232433]: 2025-12-06 07:41:33.656 232437 DEBUG oslo_concurrency.lockutils [None req-9f62c375-1be1-4ed8-a235-ae8e0f9176ef 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Acquiring lock "e78be236-3d5f-4951-9149-8c7d9a6f0382" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:41:33 np0005548731 nova_compute[232433]: 2025-12-06 07:41:33.656 232437 DEBUG oslo_concurrency.lockutils [None req-9f62c375-1be1-4ed8-a235-ae8e0f9176ef 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Lock "e78be236-3d5f-4951-9149-8c7d9a6f0382" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:41:33 np0005548731 nova_compute[232433]: 2025-12-06 07:41:33.678 232437 DEBUG nova.compute.manager [None req-9f62c375-1be1-4ed8-a235-ae8e0f9176ef 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] [instance: e78be236-3d5f-4951-9149-8c7d9a6f0382] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Dec  6 02:41:33 np0005548731 nova_compute[232433]: 2025-12-06 07:41:33.740 232437 DEBUG oslo_concurrency.lockutils [None req-a86f2b9e-4ea1-42e5-b973-19942d25fd14 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Acquiring lock "b85968f0-ebd7-48f6-a932-c4e8da09381e" by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:41:33 np0005548731 nova_compute[232433]: 2025-12-06 07:41:33.740 232437 DEBUG oslo_concurrency.lockutils [None req-a86f2b9e-4ea1-42e5-b973-19942d25fd14 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Lock "b85968f0-ebd7-48f6-a932-c4e8da09381e" acquired by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:41:33 np0005548731 nova_compute[232433]: 2025-12-06 07:41:33.741 232437 INFO nova.compute.manager [None req-a86f2b9e-4ea1-42e5-b973-19942d25fd14 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] [instance: b85968f0-ebd7-48f6-a932-c4e8da09381e] Unshelving#033[00m
Dec  6 02:41:33 np0005548731 nova_compute[232433]: 2025-12-06 07:41:33.757 232437 DEBUG oslo_concurrency.lockutils [None req-9f62c375-1be1-4ed8-a235-ae8e0f9176ef 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:41:33 np0005548731 nova_compute[232433]: 2025-12-06 07:41:33.758 232437 DEBUG oslo_concurrency.lockutils [None req-9f62c375-1be1-4ed8-a235-ae8e0f9176ef 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:41:33 np0005548731 nova_compute[232433]: 2025-12-06 07:41:33.766 232437 DEBUG nova.virt.hardware [None req-9f62c375-1be1-4ed8-a235-ae8e0f9176ef 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Dec  6 02:41:33 np0005548731 nova_compute[232433]: 2025-12-06 07:41:33.767 232437 INFO nova.compute.claims [None req-9f62c375-1be1-4ed8-a235-ae8e0f9176ef 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] [instance: e78be236-3d5f-4951-9149-8c7d9a6f0382] Claim successful on node compute-2.ctlplane.example.com#033[00m
Dec  6 02:41:33 np0005548731 nova_compute[232433]: 2025-12-06 07:41:33.838 232437 DEBUG oslo_concurrency.lockutils [None req-a86f2b9e-4ea1-42e5-b973-19942d25fd14 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:41:33 np0005548731 nova_compute[232433]: 2025-12-06 07:41:33.892 232437 DEBUG oslo_concurrency.processutils [None req-9f62c375-1be1-4ed8-a235-ae8e0f9176ef 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:41:34 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:41:34 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:41:34 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:41:34.256 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:41:34 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 02:41:34 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/193829452' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 02:41:34 np0005548731 nova_compute[232433]: 2025-12-06 07:41:34.358 232437 DEBUG oslo_concurrency.processutils [None req-9f62c375-1be1-4ed8-a235-ae8e0f9176ef 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.466s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:41:34 np0005548731 nova_compute[232433]: 2025-12-06 07:41:34.366 232437 DEBUG nova.compute.provider_tree [None req-9f62c375-1be1-4ed8-a235-ae8e0f9176ef 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Inventory has not changed in ProviderTree for provider: 6d00757a-082f-486d-ae84-869a2ba2e6e7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  6 02:41:34 np0005548731 nova_compute[232433]: 2025-12-06 07:41:34.384 232437 DEBUG nova.scheduler.client.report [None req-9f62c375-1be1-4ed8-a235-ae8e0f9176ef 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Inventory has not changed for provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  6 02:41:34 np0005548731 nova_compute[232433]: 2025-12-06 07:41:34.416 232437 DEBUG oslo_concurrency.lockutils [None req-9f62c375-1be1-4ed8-a235-ae8e0f9176ef 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.658s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:41:34 np0005548731 nova_compute[232433]: 2025-12-06 07:41:34.417 232437 DEBUG nova.compute.manager [None req-9f62c375-1be1-4ed8-a235-ae8e0f9176ef 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] [instance: e78be236-3d5f-4951-9149-8c7d9a6f0382] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Dec  6 02:41:34 np0005548731 nova_compute[232433]: 2025-12-06 07:41:34.419 232437 DEBUG oslo_concurrency.lockutils [None req-a86f2b9e-4ea1-42e5-b973-19942d25fd14 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.581s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:41:34 np0005548731 nova_compute[232433]: 2025-12-06 07:41:34.433 232437 DEBUG nova.objects.instance [None req-a86f2b9e-4ea1-42e5-b973-19942d25fd14 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Lazy-loading 'pci_requests' on Instance uuid b85968f0-ebd7-48f6-a932-c4e8da09381e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:41:34 np0005548731 nova_compute[232433]: 2025-12-06 07:41:34.457 232437 DEBUG nova.objects.instance [None req-a86f2b9e-4ea1-42e5-b973-19942d25fd14 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Lazy-loading 'numa_topology' on Instance uuid b85968f0-ebd7-48f6-a932-c4e8da09381e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:41:34 np0005548731 nova_compute[232433]: 2025-12-06 07:41:34.463 232437 DEBUG nova.compute.manager [None req-9f62c375-1be1-4ed8-a235-ae8e0f9176ef 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] [instance: e78be236-3d5f-4951-9149-8c7d9a6f0382] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Dec  6 02:41:34 np0005548731 nova_compute[232433]: 2025-12-06 07:41:34.464 232437 DEBUG nova.network.neutron [None req-9f62c375-1be1-4ed8-a235-ae8e0f9176ef 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] [instance: e78be236-3d5f-4951-9149-8c7d9a6f0382] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Dec  6 02:41:34 np0005548731 nova_compute[232433]: 2025-12-06 07:41:34.469 232437 DEBUG nova.virt.hardware [None req-a86f2b9e-4ea1-42e5-b973-19942d25fd14 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Dec  6 02:41:34 np0005548731 nova_compute[232433]: 2025-12-06 07:41:34.469 232437 INFO nova.compute.claims [None req-a86f2b9e-4ea1-42e5-b973-19942d25fd14 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] [instance: b85968f0-ebd7-48f6-a932-c4e8da09381e] Claim successful on node compute-2.ctlplane.example.com#033[00m
Dec  6 02:41:34 np0005548731 nova_compute[232433]: 2025-12-06 07:41:34.493 232437 INFO nova.virt.libvirt.driver [None req-9f62c375-1be1-4ed8-a235-ae8e0f9176ef 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] [instance: e78be236-3d5f-4951-9149-8c7d9a6f0382] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Dec  6 02:41:34 np0005548731 nova_compute[232433]: 2025-12-06 07:41:34.528 232437 DEBUG nova.compute.manager [None req-9f62c375-1be1-4ed8-a235-ae8e0f9176ef 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] [instance: e78be236-3d5f-4951-9149-8c7d9a6f0382] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Dec  6 02:41:34 np0005548731 nova_compute[232433]: 2025-12-06 07:41:34.699 232437 DEBUG oslo_concurrency.processutils [None req-a86f2b9e-4ea1-42e5-b973-19942d25fd14 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:41:34 np0005548731 nova_compute[232433]: 2025-12-06 07:41:34.736 232437 DEBUG nova.compute.manager [None req-9f62c375-1be1-4ed8-a235-ae8e0f9176ef 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] [instance: e78be236-3d5f-4951-9149-8c7d9a6f0382] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Dec  6 02:41:34 np0005548731 nova_compute[232433]: 2025-12-06 07:41:34.738 232437 DEBUG nova.virt.libvirt.driver [None req-9f62c375-1be1-4ed8-a235-ae8e0f9176ef 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] [instance: e78be236-3d5f-4951-9149-8c7d9a6f0382] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Dec  6 02:41:34 np0005548731 nova_compute[232433]: 2025-12-06 07:41:34.738 232437 INFO nova.virt.libvirt.driver [None req-9f62c375-1be1-4ed8-a235-ae8e0f9176ef 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] [instance: e78be236-3d5f-4951-9149-8c7d9a6f0382] Creating image(s)#033[00m
Dec  6 02:41:34 np0005548731 nova_compute[232433]: 2025-12-06 07:41:34.765 232437 DEBUG nova.storage.rbd_utils [None req-9f62c375-1be1-4ed8-a235-ae8e0f9176ef 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] rbd image e78be236-3d5f-4951-9149-8c7d9a6f0382_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:41:34 np0005548731 nova_compute[232433]: 2025-12-06 07:41:34.798 232437 DEBUG nova.storage.rbd_utils [None req-9f62c375-1be1-4ed8-a235-ae8e0f9176ef 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] rbd image e78be236-3d5f-4951-9149-8c7d9a6f0382_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:41:34 np0005548731 nova_compute[232433]: 2025-12-06 07:41:34.848 232437 DEBUG nova.storage.rbd_utils [None req-9f62c375-1be1-4ed8-a235-ae8e0f9176ef 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] rbd image e78be236-3d5f-4951-9149-8c7d9a6f0382_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:41:34 np0005548731 nova_compute[232433]: 2025-12-06 07:41:34.854 232437 DEBUG oslo_concurrency.processutils [None req-9f62c375-1be1-4ed8-a235-ae8e0f9176ef 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:41:34 np0005548731 nova_compute[232433]: 2025-12-06 07:41:34.879 232437 DEBUG nova.policy [None req-9f62c375-1be1-4ed8-a235-ae8e0f9176ef 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '0d8b62a3276f4a8b8349af67b82134c8', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'eff1f6a1654b45079de20eddb830e76d', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Dec  6 02:41:34 np0005548731 podman[294407]: 2025-12-06 07:41:34.890924957 +0000 UTC m=+0.056682502 container health_status 26c2ce9bdb83c6db5ccad7f5b2a7cb4e9354ab0235ad2f942ea42c8feda2142b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  6 02:41:34 np0005548731 podman[294425]: 2025-12-06 07:41:34.893298754 +0000 UTC m=+0.054330524 container health_status ae4fd08cdec31d6ae2a6b91ffff7deb6ca5a8375cb52ee4b685cc6f25796824e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_id=multipathd, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Dec  6 02:41:34 np0005548731 nova_compute[232433]: 2025-12-06 07:41:34.902 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:41:34 np0005548731 nova_compute[232433]: 2025-12-06 07:41:34.916 232437 DEBUG oslo_concurrency.processutils [None req-9f62c375-1be1-4ed8-a235-ae8e0f9176ef 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:41:34 np0005548731 nova_compute[232433]: 2025-12-06 07:41:34.917 232437 DEBUG oslo_concurrency.lockutils [None req-9f62c375-1be1-4ed8-a235-ae8e0f9176ef 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Acquiring lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:41:34 np0005548731 nova_compute[232433]: 2025-12-06 07:41:34.918 232437 DEBUG oslo_concurrency.lockutils [None req-9f62c375-1be1-4ed8-a235-ae8e0f9176ef 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:41:34 np0005548731 nova_compute[232433]: 2025-12-06 07:41:34.918 232437 DEBUG oslo_concurrency.lockutils [None req-9f62c375-1be1-4ed8-a235-ae8e0f9176ef 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:41:34 np0005548731 podman[294424]: 2025-12-06 07:41:34.92227298 +0000 UTC m=+0.087436790 container health_status a118c3becf56cfbcae208065771ebf10a65162bb7b96389971293e534823fb78 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller)
Dec  6 02:41:34 np0005548731 nova_compute[232433]: 2025-12-06 07:41:34.944 232437 DEBUG nova.storage.rbd_utils [None req-9f62c375-1be1-4ed8-a235-ae8e0f9176ef 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] rbd image e78be236-3d5f-4951-9149-8c7d9a6f0382_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:41:34 np0005548731 nova_compute[232433]: 2025-12-06 07:41:34.947 232437 DEBUG oslo_concurrency.processutils [None req-9f62c375-1be1-4ed8-a235-ae8e0f9176ef 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef e78be236-3d5f-4951-9149-8c7d9a6f0382_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:41:35 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:41:35 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:41:35 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:41:35.080 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:41:35 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 02:41:35 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/257299139' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 02:41:35 np0005548731 nova_compute[232433]: 2025-12-06 07:41:35.148 232437 DEBUG oslo_concurrency.processutils [None req-a86f2b9e-4ea1-42e5-b973-19942d25fd14 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.449s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:41:35 np0005548731 nova_compute[232433]: 2025-12-06 07:41:35.154 232437 DEBUG nova.compute.provider_tree [None req-a86f2b9e-4ea1-42e5-b973-19942d25fd14 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Inventory has not changed in ProviderTree for provider: 6d00757a-082f-486d-ae84-869a2ba2e6e7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  6 02:41:35 np0005548731 nova_compute[232433]: 2025-12-06 07:41:35.169 232437 DEBUG nova.scheduler.client.report [None req-a86f2b9e-4ea1-42e5-b973-19942d25fd14 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Inventory has not changed for provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  6 02:41:35 np0005548731 nova_compute[232433]: 2025-12-06 07:41:35.189 232437 DEBUG oslo_concurrency.lockutils [None req-a86f2b9e-4ea1-42e5-b973-19942d25fd14 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.770s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:41:35 np0005548731 nova_compute[232433]: 2025-12-06 07:41:35.362 232437 INFO nova.network.neutron [None req-a86f2b9e-4ea1-42e5-b973-19942d25fd14 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] [instance: b85968f0-ebd7-48f6-a932-c4e8da09381e] Updating port f1f563a8-9001-419f-858a-0213c5d6607a with attributes {'binding:host_id': 'compute-2.ctlplane.example.com', 'device_owner': 'compute:nova'}#033[00m
Dec  6 02:41:35 np0005548731 nova_compute[232433]: 2025-12-06 07:41:35.790 232437 DEBUG nova.network.neutron [None req-9f62c375-1be1-4ed8-a235-ae8e0f9176ef 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] [instance: e78be236-3d5f-4951-9149-8c7d9a6f0382] Successfully created port: e8c2e4a9-a078-43ea-a03d-81a749cc64f5 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Dec  6 02:41:36 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:41:36 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:41:36 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:41:36.259 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:41:36 np0005548731 nova_compute[232433]: 2025-12-06 07:41:36.384 232437 DEBUG oslo_concurrency.lockutils [None req-a86f2b9e-4ea1-42e5-b973-19942d25fd14 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Acquiring lock "refresh_cache-b85968f0-ebd7-48f6-a932-c4e8da09381e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 02:41:36 np0005548731 nova_compute[232433]: 2025-12-06 07:41:36.384 232437 DEBUG oslo_concurrency.lockutils [None req-a86f2b9e-4ea1-42e5-b973-19942d25fd14 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Acquired lock "refresh_cache-b85968f0-ebd7-48f6-a932-c4e8da09381e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 02:41:36 np0005548731 nova_compute[232433]: 2025-12-06 07:41:36.384 232437 DEBUG nova.network.neutron [None req-a86f2b9e-4ea1-42e5-b973-19942d25fd14 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] [instance: b85968f0-ebd7-48f6-a932-c4e8da09381e] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec  6 02:41:36 np0005548731 nova_compute[232433]: 2025-12-06 07:41:36.564 232437 DEBUG nova.compute.manager [req-730e54b4-a390-4251-9533-65479e3e158e req-906bb405-de8c-4f0d-a5fc-d56fa3041926 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: b85968f0-ebd7-48f6-a932-c4e8da09381e] Received event network-changed-f1f563a8-9001-419f-858a-0213c5d6607a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:41:36 np0005548731 nova_compute[232433]: 2025-12-06 07:41:36.565 232437 DEBUG nova.compute.manager [req-730e54b4-a390-4251-9533-65479e3e158e req-906bb405-de8c-4f0d-a5fc-d56fa3041926 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: b85968f0-ebd7-48f6-a932-c4e8da09381e] Refreshing instance network info cache due to event network-changed-f1f563a8-9001-419f-858a-0213c5d6607a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  6 02:41:36 np0005548731 nova_compute[232433]: 2025-12-06 07:41:36.565 232437 DEBUG oslo_concurrency.lockutils [req-730e54b4-a390-4251-9533-65479e3e158e req-906bb405-de8c-4f0d-a5fc-d56fa3041926 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-b85968f0-ebd7-48f6-a932-c4e8da09381e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 02:41:36 np0005548731 nova_compute[232433]: 2025-12-06 07:41:36.809 232437 DEBUG nova.network.neutron [None req-9f62c375-1be1-4ed8-a235-ae8e0f9176ef 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] [instance: e78be236-3d5f-4951-9149-8c7d9a6f0382] Successfully updated port: e8c2e4a9-a078-43ea-a03d-81a749cc64f5 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Dec  6 02:41:36 np0005548731 nova_compute[232433]: 2025-12-06 07:41:36.823 232437 DEBUG oslo_concurrency.lockutils [None req-9f62c375-1be1-4ed8-a235-ae8e0f9176ef 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Acquiring lock "refresh_cache-e78be236-3d5f-4951-9149-8c7d9a6f0382" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 02:41:36 np0005548731 nova_compute[232433]: 2025-12-06 07:41:36.823 232437 DEBUG oslo_concurrency.lockutils [None req-9f62c375-1be1-4ed8-a235-ae8e0f9176ef 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Acquired lock "refresh_cache-e78be236-3d5f-4951-9149-8c7d9a6f0382" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 02:41:36 np0005548731 nova_compute[232433]: 2025-12-06 07:41:36.823 232437 DEBUG nova.network.neutron [None req-9f62c375-1be1-4ed8-a235-ae8e0f9176ef 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] [instance: e78be236-3d5f-4951-9149-8c7d9a6f0382] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec  6 02:41:36 np0005548731 nova_compute[232433]: 2025-12-06 07:41:36.996 232437 DEBUG nova.compute.manager [req-d3fc322e-45e7-4707-b465-2fbb48cbe3de req-8e0cbce1-f265-4d87-a1f6-176b0c23b5b4 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: e78be236-3d5f-4951-9149-8c7d9a6f0382] Received event network-changed-e8c2e4a9-a078-43ea-a03d-81a749cc64f5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:41:36 np0005548731 nova_compute[232433]: 2025-12-06 07:41:36.997 232437 DEBUG nova.compute.manager [req-d3fc322e-45e7-4707-b465-2fbb48cbe3de req-8e0cbce1-f265-4d87-a1f6-176b0c23b5b4 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: e78be236-3d5f-4951-9149-8c7d9a6f0382] Refreshing instance network info cache due to event network-changed-e8c2e4a9-a078-43ea-a03d-81a749cc64f5. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  6 02:41:36 np0005548731 nova_compute[232433]: 2025-12-06 07:41:36.997 232437 DEBUG oslo_concurrency.lockutils [req-d3fc322e-45e7-4707-b465-2fbb48cbe3de req-8e0cbce1-f265-4d87-a1f6-176b0c23b5b4 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-e78be236-3d5f-4951-9149-8c7d9a6f0382" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 02:41:37 np0005548731 nova_compute[232433]: 2025-12-06 07:41:37.040 232437 DEBUG nova.network.neutron [None req-9f62c375-1be1-4ed8-a235-ae8e0f9176ef 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] [instance: e78be236-3d5f-4951-9149-8c7d9a6f0382] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec  6 02:41:37 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:41:37 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:41:37 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:41:37.083 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:41:38 np0005548731 nova_compute[232433]: 2025-12-06 07:41:38.245 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:41:38 np0005548731 nova_compute[232433]: 2025-12-06 07:41:38.258 232437 DEBUG oslo_concurrency.processutils [None req-9f62c375-1be1-4ed8-a235-ae8e0f9176ef 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef e78be236-3d5f-4951-9149-8c7d9a6f0382_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 3.311s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:41:38 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:41:38 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:41:38 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:41:38.261 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:41:38 np0005548731 nova_compute[232433]: 2025-12-06 07:41:38.331 232437 DEBUG nova.storage.rbd_utils [None req-9f62c375-1be1-4ed8-a235-ae8e0f9176ef 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] resizing rbd image e78be236-3d5f-4951-9149-8c7d9a6f0382_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Dec  6 02:41:38 np0005548731 nova_compute[232433]: 2025-12-06 07:41:38.425 232437 DEBUG nova.objects.instance [None req-9f62c375-1be1-4ed8-a235-ae8e0f9176ef 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Lazy-loading 'migration_context' on Instance uuid e78be236-3d5f-4951-9149-8c7d9a6f0382 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:41:38 np0005548731 nova_compute[232433]: 2025-12-06 07:41:38.444 232437 DEBUG nova.virt.libvirt.driver [None req-9f62c375-1be1-4ed8-a235-ae8e0f9176ef 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] [instance: e78be236-3d5f-4951-9149-8c7d9a6f0382] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Dec  6 02:41:38 np0005548731 nova_compute[232433]: 2025-12-06 07:41:38.445 232437 DEBUG nova.virt.libvirt.driver [None req-9f62c375-1be1-4ed8-a235-ae8e0f9176ef 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] [instance: e78be236-3d5f-4951-9149-8c7d9a6f0382] Ensure instance console log exists: /var/lib/nova/instances/e78be236-3d5f-4951-9149-8c7d9a6f0382/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec  6 02:41:38 np0005548731 nova_compute[232433]: 2025-12-06 07:41:38.445 232437 DEBUG oslo_concurrency.lockutils [None req-9f62c375-1be1-4ed8-a235-ae8e0f9176ef 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:41:38 np0005548731 nova_compute[232433]: 2025-12-06 07:41:38.445 232437 DEBUG oslo_concurrency.lockutils [None req-9f62c375-1be1-4ed8-a235-ae8e0f9176ef 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:41:38 np0005548731 nova_compute[232433]: 2025-12-06 07:41:38.446 232437 DEBUG oslo_concurrency.lockutils [None req-9f62c375-1be1-4ed8-a235-ae8e0f9176ef 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:41:38 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e324 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:41:39 np0005548731 nova_compute[232433]: 2025-12-06 07:41:39.053 232437 DEBUG nova.network.neutron [None req-9f62c375-1be1-4ed8-a235-ae8e0f9176ef 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] [instance: e78be236-3d5f-4951-9149-8c7d9a6f0382] Updating instance_info_cache with network_info: [{"id": "e8c2e4a9-a078-43ea-a03d-81a749cc64f5", "address": "fa:16:3e:1e:af:07", "network": {"id": "35a27638-382c-4afb-83b0-edd6d7f4bca8", "bridge": "br-int", "label": "tempest-ServersTestJSON-1603796324-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eff1f6a1654b45079de20eddb830e76d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape8c2e4a9-a0", "ovs_interfaceid": "e8c2e4a9-a078-43ea-a03d-81a749cc64f5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 02:41:39 np0005548731 nova_compute[232433]: 2025-12-06 07:41:39.073 232437 DEBUG oslo_concurrency.lockutils [None req-9f62c375-1be1-4ed8-a235-ae8e0f9176ef 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Releasing lock "refresh_cache-e78be236-3d5f-4951-9149-8c7d9a6f0382" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 02:41:39 np0005548731 nova_compute[232433]: 2025-12-06 07:41:39.074 232437 DEBUG nova.compute.manager [None req-9f62c375-1be1-4ed8-a235-ae8e0f9176ef 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] [instance: e78be236-3d5f-4951-9149-8c7d9a6f0382] Instance network_info: |[{"id": "e8c2e4a9-a078-43ea-a03d-81a749cc64f5", "address": "fa:16:3e:1e:af:07", "network": {"id": "35a27638-382c-4afb-83b0-edd6d7f4bca8", "bridge": "br-int", "label": "tempest-ServersTestJSON-1603796324-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eff1f6a1654b45079de20eddb830e76d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape8c2e4a9-a0", "ovs_interfaceid": "e8c2e4a9-a078-43ea-a03d-81a749cc64f5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Dec  6 02:41:39 np0005548731 nova_compute[232433]: 2025-12-06 07:41:39.074 232437 DEBUG oslo_concurrency.lockutils [req-d3fc322e-45e7-4707-b465-2fbb48cbe3de req-8e0cbce1-f265-4d87-a1f6-176b0c23b5b4 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-e78be236-3d5f-4951-9149-8c7d9a6f0382" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 02:41:39 np0005548731 nova_compute[232433]: 2025-12-06 07:41:39.074 232437 DEBUG nova.network.neutron [req-d3fc322e-45e7-4707-b465-2fbb48cbe3de req-8e0cbce1-f265-4d87-a1f6-176b0c23b5b4 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: e78be236-3d5f-4951-9149-8c7d9a6f0382] Refreshing network info cache for port e8c2e4a9-a078-43ea-a03d-81a749cc64f5 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  6 02:41:39 np0005548731 nova_compute[232433]: 2025-12-06 07:41:39.077 232437 DEBUG nova.virt.libvirt.driver [None req-9f62c375-1be1-4ed8-a235-ae8e0f9176ef 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] [instance: e78be236-3d5f-4951-9149-8c7d9a6f0382] Start _get_guest_xml network_info=[{"id": "e8c2e4a9-a078-43ea-a03d-81a749cc64f5", "address": "fa:16:3e:1e:af:07", "network": {"id": "35a27638-382c-4afb-83b0-edd6d7f4bca8", "bridge": "br-int", "label": "tempest-ServersTestJSON-1603796324-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eff1f6a1654b45079de20eddb830e76d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape8c2e4a9-a0", "ovs_interfaceid": "e8c2e4a9-a078-43ea-a03d-81a749cc64f5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'device_type': 'disk', 'encryption_secret_uuid': None, 'disk_bus': 'virtio', 'boot_index': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'encryption_options': None, 'size': 0, 'encrypted': False, 'image_id': '6efab05d-c7cf-4770-a5c3-c806a2739063'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec  6 02:41:39 np0005548731 nova_compute[232433]: 2025-12-06 07:41:39.082 232437 WARNING nova.virt.libvirt.driver [None req-9f62c375-1be1-4ed8-a235-ae8e0f9176ef 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  6 02:41:39 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:41:39 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:41:39 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:41:39.086 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:41:39 np0005548731 nova_compute[232433]: 2025-12-06 07:41:39.088 232437 DEBUG nova.virt.libvirt.host [None req-9f62c375-1be1-4ed8-a235-ae8e0f9176ef 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec  6 02:41:39 np0005548731 nova_compute[232433]: 2025-12-06 07:41:39.088 232437 DEBUG nova.virt.libvirt.host [None req-9f62c375-1be1-4ed8-a235-ae8e0f9176ef 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec  6 02:41:39 np0005548731 nova_compute[232433]: 2025-12-06 07:41:39.091 232437 DEBUG nova.virt.libvirt.host [None req-9f62c375-1be1-4ed8-a235-ae8e0f9176ef 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec  6 02:41:39 np0005548731 nova_compute[232433]: 2025-12-06 07:41:39.091 232437 DEBUG nova.virt.libvirt.host [None req-9f62c375-1be1-4ed8-a235-ae8e0f9176ef 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec  6 02:41:39 np0005548731 nova_compute[232433]: 2025-12-06 07:41:39.093 232437 DEBUG nova.virt.libvirt.driver [None req-9f62c375-1be1-4ed8-a235-ae8e0f9176ef 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec  6 02:41:39 np0005548731 nova_compute[232433]: 2025-12-06 07:41:39.093 232437 DEBUG nova.virt.hardware [None req-9f62c375-1be1-4ed8-a235-ae8e0f9176ef 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-06T06:56:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='25848a18-11d9-4f11-80b5-5d005675c76d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec  6 02:41:39 np0005548731 nova_compute[232433]: 2025-12-06 07:41:39.093 232437 DEBUG nova.virt.hardware [None req-9f62c375-1be1-4ed8-a235-ae8e0f9176ef 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec  6 02:41:39 np0005548731 nova_compute[232433]: 2025-12-06 07:41:39.094 232437 DEBUG nova.virt.hardware [None req-9f62c375-1be1-4ed8-a235-ae8e0f9176ef 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec  6 02:41:39 np0005548731 nova_compute[232433]: 2025-12-06 07:41:39.094 232437 DEBUG nova.virt.hardware [None req-9f62c375-1be1-4ed8-a235-ae8e0f9176ef 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec  6 02:41:39 np0005548731 nova_compute[232433]: 2025-12-06 07:41:39.094 232437 DEBUG nova.virt.hardware [None req-9f62c375-1be1-4ed8-a235-ae8e0f9176ef 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec  6 02:41:39 np0005548731 nova_compute[232433]: 2025-12-06 07:41:39.094 232437 DEBUG nova.virt.hardware [None req-9f62c375-1be1-4ed8-a235-ae8e0f9176ef 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec  6 02:41:39 np0005548731 nova_compute[232433]: 2025-12-06 07:41:39.094 232437 DEBUG nova.virt.hardware [None req-9f62c375-1be1-4ed8-a235-ae8e0f9176ef 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec  6 02:41:39 np0005548731 nova_compute[232433]: 2025-12-06 07:41:39.095 232437 DEBUG nova.virt.hardware [None req-9f62c375-1be1-4ed8-a235-ae8e0f9176ef 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec  6 02:41:39 np0005548731 nova_compute[232433]: 2025-12-06 07:41:39.095 232437 DEBUG nova.virt.hardware [None req-9f62c375-1be1-4ed8-a235-ae8e0f9176ef 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec  6 02:41:39 np0005548731 nova_compute[232433]: 2025-12-06 07:41:39.095 232437 DEBUG nova.virt.hardware [None req-9f62c375-1be1-4ed8-a235-ae8e0f9176ef 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec  6 02:41:39 np0005548731 nova_compute[232433]: 2025-12-06 07:41:39.095 232437 DEBUG nova.virt.hardware [None req-9f62c375-1be1-4ed8-a235-ae8e0f9176ef 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec  6 02:41:39 np0005548731 nova_compute[232433]: 2025-12-06 07:41:39.098 232437 DEBUG oslo_concurrency.processutils [None req-9f62c375-1be1-4ed8-a235-ae8e0f9176ef 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:41:39 np0005548731 nova_compute[232433]: 2025-12-06 07:41:39.204 232437 DEBUG nova.network.neutron [None req-a86f2b9e-4ea1-42e5-b973-19942d25fd14 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] [instance: b85968f0-ebd7-48f6-a932-c4e8da09381e] Updating instance_info_cache with network_info: [{"id": "f1f563a8-9001-419f-858a-0213c5d6607a", "address": "fa:16:3e:48:b4:88", "network": {"id": "3beede49-1cbb-425c-b1af-82f43dc57163", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-619240463-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.206", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b10aa03d68eb4d4799d53538521cc364", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf1f563a8-90", "ovs_interfaceid": "f1f563a8-9001-419f-858a-0213c5d6607a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 02:41:39 np0005548731 nova_compute[232433]: 2025-12-06 07:41:39.221 232437 DEBUG oslo_concurrency.lockutils [None req-a86f2b9e-4ea1-42e5-b973-19942d25fd14 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Releasing lock "refresh_cache-b85968f0-ebd7-48f6-a932-c4e8da09381e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 02:41:39 np0005548731 nova_compute[232433]: 2025-12-06 07:41:39.222 232437 DEBUG nova.virt.libvirt.driver [None req-a86f2b9e-4ea1-42e5-b973-19942d25fd14 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] [instance: b85968f0-ebd7-48f6-a932-c4e8da09381e] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Dec  6 02:41:39 np0005548731 nova_compute[232433]: 2025-12-06 07:41:39.223 232437 INFO nova.virt.libvirt.driver [None req-a86f2b9e-4ea1-42e5-b973-19942d25fd14 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] [instance: b85968f0-ebd7-48f6-a932-c4e8da09381e] Creating image(s)#033[00m
Dec  6 02:41:39 np0005548731 nova_compute[232433]: 2025-12-06 07:41:39.247 232437 DEBUG nova.storage.rbd_utils [None req-a86f2b9e-4ea1-42e5-b973-19942d25fd14 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] rbd image b85968f0-ebd7-48f6-a932-c4e8da09381e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:41:39 np0005548731 nova_compute[232433]: 2025-12-06 07:41:39.251 232437 DEBUG nova.objects.instance [None req-a86f2b9e-4ea1-42e5-b973-19942d25fd14 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Lazy-loading 'trusted_certs' on Instance uuid b85968f0-ebd7-48f6-a932-c4e8da09381e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:41:39 np0005548731 nova_compute[232433]: 2025-12-06 07:41:39.252 232437 DEBUG oslo_concurrency.lockutils [req-730e54b4-a390-4251-9533-65479e3e158e req-906bb405-de8c-4f0d-a5fc-d56fa3041926 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-b85968f0-ebd7-48f6-a932-c4e8da09381e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 02:41:39 np0005548731 nova_compute[232433]: 2025-12-06 07:41:39.252 232437 DEBUG nova.network.neutron [req-730e54b4-a390-4251-9533-65479e3e158e req-906bb405-de8c-4f0d-a5fc-d56fa3041926 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: b85968f0-ebd7-48f6-a932-c4e8da09381e] Refreshing network info cache for port f1f563a8-9001-419f-858a-0213c5d6607a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  6 02:41:39 np0005548731 nova_compute[232433]: 2025-12-06 07:41:39.293 232437 DEBUG nova.storage.rbd_utils [None req-a86f2b9e-4ea1-42e5-b973-19942d25fd14 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] rbd image b85968f0-ebd7-48f6-a932-c4e8da09381e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:41:39 np0005548731 nova_compute[232433]: 2025-12-06 07:41:39.323 232437 DEBUG nova.storage.rbd_utils [None req-a86f2b9e-4ea1-42e5-b973-19942d25fd14 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] rbd image b85968f0-ebd7-48f6-a932-c4e8da09381e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:41:39 np0005548731 nova_compute[232433]: 2025-12-06 07:41:39.328 232437 DEBUG oslo_concurrency.lockutils [None req-a86f2b9e-4ea1-42e5-b973-19942d25fd14 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Acquiring lock "f287858eed515d261d3d3b2f6292726090ff4fb7" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:41:39 np0005548731 nova_compute[232433]: 2025-12-06 07:41:39.328 232437 DEBUG oslo_concurrency.lockutils [None req-a86f2b9e-4ea1-42e5-b973-19942d25fd14 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Lock "f287858eed515d261d3d3b2f6292726090ff4fb7" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:41:39 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  6 02:41:39 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2344964929' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  6 02:41:39 np0005548731 nova_compute[232433]: 2025-12-06 07:41:39.527 232437 DEBUG oslo_concurrency.processutils [None req-9f62c375-1be1-4ed8-a235-ae8e0f9176ef 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.430s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:41:39 np0005548731 nova_compute[232433]: 2025-12-06 07:41:39.546 232437 DEBUG nova.storage.rbd_utils [None req-9f62c375-1be1-4ed8-a235-ae8e0f9176ef 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] rbd image e78be236-3d5f-4951-9149-8c7d9a6f0382_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:41:39 np0005548731 nova_compute[232433]: 2025-12-06 07:41:39.550 232437 DEBUG oslo_concurrency.processutils [None req-9f62c375-1be1-4ed8-a235-ae8e0f9176ef 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:41:39 np0005548731 nova_compute[232433]: 2025-12-06 07:41:39.700 232437 DEBUG nova.virt.libvirt.imagebackend [None req-a86f2b9e-4ea1-42e5-b973-19942d25fd14 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Image locations are: [{'url': 'rbd://40a1bae4-cf76-5610-8dab-c75116dfe0bb/images/16647923-36b7-4c61-9de3-fb629f75ca61/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://40a1bae4-cf76-5610-8dab-c75116dfe0bb/images/16647923-36b7-4c61-9de3-fb629f75ca61/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085#033[00m
Dec  6 02:41:39 np0005548731 nova_compute[232433]: 2025-12-06 07:41:39.750 232437 DEBUG nova.virt.libvirt.imagebackend [None req-a86f2b9e-4ea1-42e5-b973-19942d25fd14 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Selected location: {'url': 'rbd://40a1bae4-cf76-5610-8dab-c75116dfe0bb/images/16647923-36b7-4c61-9de3-fb629f75ca61/snap', 'metadata': {'store': 'default_backend'}} clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1094#033[00m
Dec  6 02:41:39 np0005548731 nova_compute[232433]: 2025-12-06 07:41:39.751 232437 DEBUG nova.storage.rbd_utils [None req-a86f2b9e-4ea1-42e5-b973-19942d25fd14 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] cloning images/16647923-36b7-4c61-9de3-fb629f75ca61@snap to None/b85968f0-ebd7-48f6-a932-c4e8da09381e_disk clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Dec  6 02:41:39 np0005548731 nova_compute[232433]: 2025-12-06 07:41:39.865 232437 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765006884.8639095, f55fc2f8-9116-4e95-926a-c40ebfaf68d1 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 02:41:39 np0005548731 nova_compute[232433]: 2025-12-06 07:41:39.866 232437 INFO nova.compute.manager [-] [instance: f55fc2f8-9116-4e95-926a-c40ebfaf68d1] VM Stopped (Lifecycle Event)#033[00m
Dec  6 02:41:39 np0005548731 nova_compute[232433]: 2025-12-06 07:41:39.888 232437 DEBUG nova.compute.manager [None req-ccd6db3b-a5a3-48a5-a1ab-9d888a01567a - - - - - -] [instance: f55fc2f8-9116-4e95-926a-c40ebfaf68d1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:41:39 np0005548731 nova_compute[232433]: 2025-12-06 07:41:39.905 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:41:39 np0005548731 nova_compute[232433]: 2025-12-06 07:41:39.920 232437 DEBUG oslo_concurrency.lockutils [None req-a86f2b9e-4ea1-42e5-b973-19942d25fd14 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Lock "f287858eed515d261d3d3b2f6292726090ff4fb7" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.591s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:41:39 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  6 02:41:39 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/523603762' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  6 02:41:39 np0005548731 nova_compute[232433]: 2025-12-06 07:41:39.993 232437 DEBUG oslo_concurrency.processutils [None req-9f62c375-1be1-4ed8-a235-ae8e0f9176ef 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.443s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:41:39 np0005548731 nova_compute[232433]: 2025-12-06 07:41:39.998 232437 DEBUG nova.virt.libvirt.vif [None req-9f62c375-1be1-4ed8-a235-ae8e0f9176ef 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T07:41:32Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-778223522',display_name='tempest-ServersTestJSON-server-778223522',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serverstestjson-server-778223522',id=142,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOyFGrqZ3O10d+Mtnb3FboSnloI6/6aBO0CbDH2Wv2Z+d4Kw7ojIiEFuDxPkAl/gckwKhEI1ABUm3/fkKFkXqzNfl1EdQBjJSlqKfthYKDWPsLKMgOEm8xbJ7FeKDe5jCg==',key_name='tempest-key-1984291357',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='eff1f6a1654b45079de20eddb830e76d',ramdisk_id='',reservation_id='r-04hbeklq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-374151197',owner_user_name='tempest-ServersTestJSON-374151197-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:41:34Z,user_data=None,user_id='0d8b62a3276f4a8b8349af67b82134c8',uuid=e78be236-3d5f-4951-9149-8c7d9a6f0382,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e8c2e4a9-a078-43ea-a03d-81a749cc64f5", "address": "fa:16:3e:1e:af:07", "network": {"id": "35a27638-382c-4afb-83b0-edd6d7f4bca8", "bridge": "br-int", "label": "tempest-ServersTestJSON-1603796324-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, 
"tenant_id": "eff1f6a1654b45079de20eddb830e76d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape8c2e4a9-a0", "ovs_interfaceid": "e8c2e4a9-a078-43ea-a03d-81a749cc64f5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec  6 02:41:39 np0005548731 nova_compute[232433]: 2025-12-06 07:41:39.999 232437 DEBUG nova.network.os_vif_util [None req-9f62c375-1be1-4ed8-a235-ae8e0f9176ef 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Converting VIF {"id": "e8c2e4a9-a078-43ea-a03d-81a749cc64f5", "address": "fa:16:3e:1e:af:07", "network": {"id": "35a27638-382c-4afb-83b0-edd6d7f4bca8", "bridge": "br-int", "label": "tempest-ServersTestJSON-1603796324-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eff1f6a1654b45079de20eddb830e76d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape8c2e4a9-a0", "ovs_interfaceid": "e8c2e4a9-a078-43ea-a03d-81a749cc64f5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 02:41:40 np0005548731 nova_compute[232433]: 2025-12-06 07:41:40.000 232437 DEBUG nova.network.os_vif_util [None req-9f62c375-1be1-4ed8-a235-ae8e0f9176ef 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1e:af:07,bridge_name='br-int',has_traffic_filtering=True,id=e8c2e4a9-a078-43ea-a03d-81a749cc64f5,network=Network(35a27638-382c-4afb-83b0-edd6d7f4bca8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape8c2e4a9-a0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 02:41:40 np0005548731 nova_compute[232433]: 2025-12-06 07:41:40.001 232437 DEBUG nova.objects.instance [None req-9f62c375-1be1-4ed8-a235-ae8e0f9176ef 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Lazy-loading 'pci_devices' on Instance uuid e78be236-3d5f-4951-9149-8c7d9a6f0382 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:41:40 np0005548731 nova_compute[232433]: 2025-12-06 07:41:40.043 232437 DEBUG nova.virt.libvirt.driver [None req-9f62c375-1be1-4ed8-a235-ae8e0f9176ef 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] [instance: e78be236-3d5f-4951-9149-8c7d9a6f0382] End _get_guest_xml xml=<domain type="kvm">
Dec  6 02:41:40 np0005548731 nova_compute[232433]:  <uuid>e78be236-3d5f-4951-9149-8c7d9a6f0382</uuid>
Dec  6 02:41:40 np0005548731 nova_compute[232433]:  <name>instance-0000008e</name>
Dec  6 02:41:40 np0005548731 nova_compute[232433]:  <memory>131072</memory>
Dec  6 02:41:40 np0005548731 nova_compute[232433]:  <vcpu>1</vcpu>
Dec  6 02:41:40 np0005548731 nova_compute[232433]:  <metadata>
Dec  6 02:41:40 np0005548731 nova_compute[232433]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec  6 02:41:40 np0005548731 nova_compute[232433]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec  6 02:41:40 np0005548731 nova_compute[232433]:      <nova:name>tempest-ServersTestJSON-server-778223522</nova:name>
Dec  6 02:41:40 np0005548731 nova_compute[232433]:      <nova:creationTime>2025-12-06 07:41:39</nova:creationTime>
Dec  6 02:41:40 np0005548731 nova_compute[232433]:      <nova:flavor name="m1.nano">
Dec  6 02:41:40 np0005548731 nova_compute[232433]:        <nova:memory>128</nova:memory>
Dec  6 02:41:40 np0005548731 nova_compute[232433]:        <nova:disk>1</nova:disk>
Dec  6 02:41:40 np0005548731 nova_compute[232433]:        <nova:swap>0</nova:swap>
Dec  6 02:41:40 np0005548731 nova_compute[232433]:        <nova:ephemeral>0</nova:ephemeral>
Dec  6 02:41:40 np0005548731 nova_compute[232433]:        <nova:vcpus>1</nova:vcpus>
Dec  6 02:41:40 np0005548731 nova_compute[232433]:      </nova:flavor>
Dec  6 02:41:40 np0005548731 nova_compute[232433]:      <nova:owner>
Dec  6 02:41:40 np0005548731 nova_compute[232433]:        <nova:user uuid="0d8b62a3276f4a8b8349af67b82134c8">tempest-ServersTestJSON-374151197-project-member</nova:user>
Dec  6 02:41:40 np0005548731 nova_compute[232433]:        <nova:project uuid="eff1f6a1654b45079de20eddb830e76d">tempest-ServersTestJSON-374151197</nova:project>
Dec  6 02:41:40 np0005548731 nova_compute[232433]:      </nova:owner>
Dec  6 02:41:40 np0005548731 nova_compute[232433]:      <nova:root type="image" uuid="6efab05d-c7cf-4770-a5c3-c806a2739063"/>
Dec  6 02:41:40 np0005548731 nova_compute[232433]:      <nova:ports>
Dec  6 02:41:40 np0005548731 nova_compute[232433]:        <nova:port uuid="e8c2e4a9-a078-43ea-a03d-81a749cc64f5">
Dec  6 02:41:40 np0005548731 nova_compute[232433]:          <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Dec  6 02:41:40 np0005548731 nova_compute[232433]:        </nova:port>
Dec  6 02:41:40 np0005548731 nova_compute[232433]:      </nova:ports>
Dec  6 02:41:40 np0005548731 nova_compute[232433]:    </nova:instance>
Dec  6 02:41:40 np0005548731 nova_compute[232433]:  </metadata>
Dec  6 02:41:40 np0005548731 nova_compute[232433]:  <sysinfo type="smbios">
Dec  6 02:41:40 np0005548731 nova_compute[232433]:    <system>
Dec  6 02:41:40 np0005548731 nova_compute[232433]:      <entry name="manufacturer">RDO</entry>
Dec  6 02:41:40 np0005548731 nova_compute[232433]:      <entry name="product">OpenStack Compute</entry>
Dec  6 02:41:40 np0005548731 nova_compute[232433]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec  6 02:41:40 np0005548731 nova_compute[232433]:      <entry name="serial">e78be236-3d5f-4951-9149-8c7d9a6f0382</entry>
Dec  6 02:41:40 np0005548731 nova_compute[232433]:      <entry name="uuid">e78be236-3d5f-4951-9149-8c7d9a6f0382</entry>
Dec  6 02:41:40 np0005548731 nova_compute[232433]:      <entry name="family">Virtual Machine</entry>
Dec  6 02:41:40 np0005548731 nova_compute[232433]:    </system>
Dec  6 02:41:40 np0005548731 nova_compute[232433]:  </sysinfo>
Dec  6 02:41:40 np0005548731 nova_compute[232433]:  <os>
Dec  6 02:41:40 np0005548731 nova_compute[232433]:    <type arch="x86_64" machine="q35">hvm</type>
Dec  6 02:41:40 np0005548731 nova_compute[232433]:    <boot dev="hd"/>
Dec  6 02:41:40 np0005548731 nova_compute[232433]:    <smbios mode="sysinfo"/>
Dec  6 02:41:40 np0005548731 nova_compute[232433]:  </os>
Dec  6 02:41:40 np0005548731 nova_compute[232433]:  <features>
Dec  6 02:41:40 np0005548731 nova_compute[232433]:    <acpi/>
Dec  6 02:41:40 np0005548731 nova_compute[232433]:    <apic/>
Dec  6 02:41:40 np0005548731 nova_compute[232433]:    <vmcoreinfo/>
Dec  6 02:41:40 np0005548731 nova_compute[232433]:  </features>
Dec  6 02:41:40 np0005548731 nova_compute[232433]:  <clock offset="utc">
Dec  6 02:41:40 np0005548731 nova_compute[232433]:    <timer name="pit" tickpolicy="delay"/>
Dec  6 02:41:40 np0005548731 nova_compute[232433]:    <timer name="rtc" tickpolicy="catchup"/>
Dec  6 02:41:40 np0005548731 nova_compute[232433]:    <timer name="hpet" present="no"/>
Dec  6 02:41:40 np0005548731 nova_compute[232433]:  </clock>
Dec  6 02:41:40 np0005548731 nova_compute[232433]:  <cpu mode="custom" match="exact">
Dec  6 02:41:40 np0005548731 nova_compute[232433]:    <model>Nehalem</model>
Dec  6 02:41:40 np0005548731 nova_compute[232433]:    <topology sockets="1" cores="1" threads="1"/>
Dec  6 02:41:40 np0005548731 nova_compute[232433]:  </cpu>
Dec  6 02:41:40 np0005548731 nova_compute[232433]:  <devices>
Dec  6 02:41:40 np0005548731 nova_compute[232433]:    <disk type="network" device="disk">
Dec  6 02:41:40 np0005548731 nova_compute[232433]:      <driver type="raw" cache="none"/>
Dec  6 02:41:40 np0005548731 nova_compute[232433]:      <source protocol="rbd" name="vms/e78be236-3d5f-4951-9149-8c7d9a6f0382_disk">
Dec  6 02:41:40 np0005548731 nova_compute[232433]:        <host name="192.168.122.100" port="6789"/>
Dec  6 02:41:40 np0005548731 nova_compute[232433]:        <host name="192.168.122.102" port="6789"/>
Dec  6 02:41:40 np0005548731 nova_compute[232433]:        <host name="192.168.122.101" port="6789"/>
Dec  6 02:41:40 np0005548731 nova_compute[232433]:      </source>
Dec  6 02:41:40 np0005548731 nova_compute[232433]:      <auth username="openstack">
Dec  6 02:41:40 np0005548731 nova_compute[232433]:        <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec  6 02:41:40 np0005548731 nova_compute[232433]:      </auth>
Dec  6 02:41:40 np0005548731 nova_compute[232433]:      <target dev="vda" bus="virtio"/>
Dec  6 02:41:40 np0005548731 nova_compute[232433]:    </disk>
Dec  6 02:41:40 np0005548731 nova_compute[232433]:    <disk type="network" device="cdrom">
Dec  6 02:41:40 np0005548731 nova_compute[232433]:      <driver type="raw" cache="none"/>
Dec  6 02:41:40 np0005548731 nova_compute[232433]:      <source protocol="rbd" name="vms/e78be236-3d5f-4951-9149-8c7d9a6f0382_disk.config">
Dec  6 02:41:40 np0005548731 nova_compute[232433]:        <host name="192.168.122.100" port="6789"/>
Dec  6 02:41:40 np0005548731 nova_compute[232433]:        <host name="192.168.122.102" port="6789"/>
Dec  6 02:41:40 np0005548731 nova_compute[232433]:        <host name="192.168.122.101" port="6789"/>
Dec  6 02:41:40 np0005548731 nova_compute[232433]:      </source>
Dec  6 02:41:40 np0005548731 nova_compute[232433]:      <auth username="openstack">
Dec  6 02:41:40 np0005548731 nova_compute[232433]:        <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec  6 02:41:40 np0005548731 nova_compute[232433]:      </auth>
Dec  6 02:41:40 np0005548731 nova_compute[232433]:      <target dev="sda" bus="sata"/>
Dec  6 02:41:40 np0005548731 nova_compute[232433]:    </disk>
Dec  6 02:41:40 np0005548731 nova_compute[232433]:    <interface type="ethernet">
Dec  6 02:41:40 np0005548731 nova_compute[232433]:      <mac address="fa:16:3e:1e:af:07"/>
Dec  6 02:41:40 np0005548731 nova_compute[232433]:      <model type="virtio"/>
Dec  6 02:41:40 np0005548731 nova_compute[232433]:      <driver name="vhost" rx_queue_size="512"/>
Dec  6 02:41:40 np0005548731 nova_compute[232433]:      <mtu size="1442"/>
Dec  6 02:41:40 np0005548731 nova_compute[232433]:      <target dev="tape8c2e4a9-a0"/>
Dec  6 02:41:40 np0005548731 nova_compute[232433]:    </interface>
Dec  6 02:41:40 np0005548731 nova_compute[232433]:    <serial type="pty">
Dec  6 02:41:40 np0005548731 nova_compute[232433]:      <log file="/var/lib/nova/instances/e78be236-3d5f-4951-9149-8c7d9a6f0382/console.log" append="off"/>
Dec  6 02:41:40 np0005548731 nova_compute[232433]:    </serial>
Dec  6 02:41:40 np0005548731 nova_compute[232433]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec  6 02:41:40 np0005548731 nova_compute[232433]:    <video>
Dec  6 02:41:40 np0005548731 nova_compute[232433]:      <model type="virtio"/>
Dec  6 02:41:40 np0005548731 nova_compute[232433]:    </video>
Dec  6 02:41:40 np0005548731 nova_compute[232433]:    <input type="tablet" bus="usb"/>
Dec  6 02:41:40 np0005548731 nova_compute[232433]:    <rng model="virtio">
Dec  6 02:41:40 np0005548731 nova_compute[232433]:      <backend model="random">/dev/urandom</backend>
Dec  6 02:41:40 np0005548731 nova_compute[232433]:    </rng>
Dec  6 02:41:40 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root"/>
Dec  6 02:41:40 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:41:40 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:41:40 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:41:40 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:41:40 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:41:40 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:41:40 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:41:40 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:41:40 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:41:40 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:41:40 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:41:40 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:41:40 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:41:40 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:41:40 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:41:40 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:41:40 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:41:40 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:41:40 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:41:40 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:41:40 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:41:40 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:41:40 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:41:40 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:41:40 np0005548731 nova_compute[232433]:    <controller type="usb" index="0"/>
Dec  6 02:41:40 np0005548731 nova_compute[232433]:    <memballoon model="virtio">
Dec  6 02:41:40 np0005548731 nova_compute[232433]:      <stats period="10"/>
Dec  6 02:41:40 np0005548731 nova_compute[232433]:    </memballoon>
Dec  6 02:41:40 np0005548731 nova_compute[232433]:  </devices>
Dec  6 02:41:40 np0005548731 nova_compute[232433]: </domain>
Dec  6 02:41:40 np0005548731 nova_compute[232433]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec  6 02:41:40 np0005548731 nova_compute[232433]: 2025-12-06 07:41:40.045 232437 DEBUG nova.compute.manager [None req-9f62c375-1be1-4ed8-a235-ae8e0f9176ef 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] [instance: e78be236-3d5f-4951-9149-8c7d9a6f0382] Preparing to wait for external event network-vif-plugged-e8c2e4a9-a078-43ea-a03d-81a749cc64f5 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec  6 02:41:40 np0005548731 nova_compute[232433]: 2025-12-06 07:41:40.045 232437 DEBUG oslo_concurrency.lockutils [None req-9f62c375-1be1-4ed8-a235-ae8e0f9176ef 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Acquiring lock "e78be236-3d5f-4951-9149-8c7d9a6f0382-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:41:40 np0005548731 nova_compute[232433]: 2025-12-06 07:41:40.045 232437 DEBUG oslo_concurrency.lockutils [None req-9f62c375-1be1-4ed8-a235-ae8e0f9176ef 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Lock "e78be236-3d5f-4951-9149-8c7d9a6f0382-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:41:40 np0005548731 nova_compute[232433]: 2025-12-06 07:41:40.046 232437 DEBUG oslo_concurrency.lockutils [None req-9f62c375-1be1-4ed8-a235-ae8e0f9176ef 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Lock "e78be236-3d5f-4951-9149-8c7d9a6f0382-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:41:40 np0005548731 nova_compute[232433]: 2025-12-06 07:41:40.046 232437 DEBUG nova.virt.libvirt.vif [None req-9f62c375-1be1-4ed8-a235-ae8e0f9176ef 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T07:41:32Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-778223522',display_name='tempest-ServersTestJSON-server-778223522',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serverstestjson-server-778223522',id=142,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOyFGrqZ3O10d+Mtnb3FboSnloI6/6aBO0CbDH2Wv2Z+d4Kw7ojIiEFuDxPkAl/gckwKhEI1ABUm3/fkKFkXqzNfl1EdQBjJSlqKfthYKDWPsLKMgOEm8xbJ7FeKDe5jCg==',key_name='tempest-key-1984291357',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='eff1f6a1654b45079de20eddb830e76d',ramdisk_id='',reservation_id='r-04hbeklq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-374151197',owner_user_name='tempest-ServersTestJSON-374151197-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:41:34Z,user_data=None,user_id='0d8b62a3276f4a8b8349af67b82134c8',uuid=e78be236-3d5f-4951-9149-8c7d9a6f0382,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e8c2e4a9-a078-43ea-a03d-81a749cc64f5", "address": "fa:16:3e:1e:af:07", "network": {"id": "35a27638-382c-4afb-83b0-edd6d7f4bca8", "bridge": "br-int", "label": "tempest-ServersTestJSON-1603796324-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "eff1f6a1654b45079de20eddb830e76d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape8c2e4a9-a0", "ovs_interfaceid": "e8c2e4a9-a078-43ea-a03d-81a749cc64f5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec  6 02:41:40 np0005548731 nova_compute[232433]: 2025-12-06 07:41:40.047 232437 DEBUG nova.network.os_vif_util [None req-9f62c375-1be1-4ed8-a235-ae8e0f9176ef 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Converting VIF {"id": "e8c2e4a9-a078-43ea-a03d-81a749cc64f5", "address": "fa:16:3e:1e:af:07", "network": {"id": "35a27638-382c-4afb-83b0-edd6d7f4bca8", "bridge": "br-int", "label": "tempest-ServersTestJSON-1603796324-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eff1f6a1654b45079de20eddb830e76d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape8c2e4a9-a0", "ovs_interfaceid": "e8c2e4a9-a078-43ea-a03d-81a749cc64f5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 02:41:40 np0005548731 nova_compute[232433]: 2025-12-06 07:41:40.047 232437 DEBUG nova.network.os_vif_util [None req-9f62c375-1be1-4ed8-a235-ae8e0f9176ef 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1e:af:07,bridge_name='br-int',has_traffic_filtering=True,id=e8c2e4a9-a078-43ea-a03d-81a749cc64f5,network=Network(35a27638-382c-4afb-83b0-edd6d7f4bca8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape8c2e4a9-a0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 02:41:40 np0005548731 nova_compute[232433]: 2025-12-06 07:41:40.048 232437 DEBUG os_vif [None req-9f62c375-1be1-4ed8-a235-ae8e0f9176ef 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:1e:af:07,bridge_name='br-int',has_traffic_filtering=True,id=e8c2e4a9-a078-43ea-a03d-81a749cc64f5,network=Network(35a27638-382c-4afb-83b0-edd6d7f4bca8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape8c2e4a9-a0') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec  6 02:41:40 np0005548731 nova_compute[232433]: 2025-12-06 07:41:40.048 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:41:40 np0005548731 nova_compute[232433]: 2025-12-06 07:41:40.049 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:41:40 np0005548731 nova_compute[232433]: 2025-12-06 07:41:40.049 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  6 02:41:40 np0005548731 nova_compute[232433]: 2025-12-06 07:41:40.055 232437 DEBUG nova.objects.instance [None req-a86f2b9e-4ea1-42e5-b973-19942d25fd14 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Lazy-loading 'migration_context' on Instance uuid b85968f0-ebd7-48f6-a932-c4e8da09381e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:41:40 np0005548731 nova_compute[232433]: 2025-12-06 07:41:40.057 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:41:40 np0005548731 nova_compute[232433]: 2025-12-06 07:41:40.058 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape8c2e4a9-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:41:40 np0005548731 nova_compute[232433]: 2025-12-06 07:41:40.058 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tape8c2e4a9-a0, col_values=(('external_ids', {'iface-id': 'e8c2e4a9-a078-43ea-a03d-81a749cc64f5', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:1e:af:07', 'vm-uuid': 'e78be236-3d5f-4951-9149-8c7d9a6f0382'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:41:40 np0005548731 nova_compute[232433]: 2025-12-06 07:41:40.060 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:41:40 np0005548731 NetworkManager[49182]: <info>  [1765006900.0608] manager: (tape8c2e4a9-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/297)
Dec  6 02:41:40 np0005548731 nova_compute[232433]: 2025-12-06 07:41:40.065 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  6 02:41:40 np0005548731 nova_compute[232433]: 2025-12-06 07:41:40.067 232437 INFO os_vif [None req-9f62c375-1be1-4ed8-a235-ae8e0f9176ef 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:1e:af:07,bridge_name='br-int',has_traffic_filtering=True,id=e8c2e4a9-a078-43ea-a03d-81a749cc64f5,network=Network(35a27638-382c-4afb-83b0-edd6d7f4bca8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape8c2e4a9-a0')#033[00m
Dec  6 02:41:40 np0005548731 nova_compute[232433]: 2025-12-06 07:41:40.112 232437 DEBUG nova.storage.rbd_utils [None req-a86f2b9e-4ea1-42e5-b973-19942d25fd14 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] flattening vms/b85968f0-ebd7-48f6-a932-c4e8da09381e_disk flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m
Dec  6 02:41:40 np0005548731 nova_compute[232433]: 2025-12-06 07:41:40.171 232437 DEBUG nova.virt.libvirt.driver [None req-9f62c375-1be1-4ed8-a235-ae8e0f9176ef 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  6 02:41:40 np0005548731 nova_compute[232433]: 2025-12-06 07:41:40.171 232437 DEBUG nova.virt.libvirt.driver [None req-9f62c375-1be1-4ed8-a235-ae8e0f9176ef 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  6 02:41:40 np0005548731 nova_compute[232433]: 2025-12-06 07:41:40.172 232437 DEBUG nova.virt.libvirt.driver [None req-9f62c375-1be1-4ed8-a235-ae8e0f9176ef 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] No VIF found with MAC fa:16:3e:1e:af:07, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec  6 02:41:40 np0005548731 nova_compute[232433]: 2025-12-06 07:41:40.172 232437 INFO nova.virt.libvirt.driver [None req-9f62c375-1be1-4ed8-a235-ae8e0f9176ef 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] [instance: e78be236-3d5f-4951-9149-8c7d9a6f0382] Using config drive#033[00m
Dec  6 02:41:40 np0005548731 nova_compute[232433]: 2025-12-06 07:41:40.195 232437 DEBUG nova.storage.rbd_utils [None req-9f62c375-1be1-4ed8-a235-ae8e0f9176ef 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] rbd image e78be236-3d5f-4951-9149-8c7d9a6f0382_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:41:40 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:41:40 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:41:40 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:41:40.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:41:40 np0005548731 nova_compute[232433]: 2025-12-06 07:41:40.674 232437 DEBUG nova.virt.libvirt.driver [None req-a86f2b9e-4ea1-42e5-b973-19942d25fd14 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] [instance: b85968f0-ebd7-48f6-a932-c4e8da09381e] Image rbd:vms/b85968f0-ebd7-48f6-a932-c4e8da09381e_disk:id=openstack:conf=/etc/ceph/ceph.conf flattened successfully while unshelving instance. _try_fetch_image_cache /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11007#033[00m
Dec  6 02:41:40 np0005548731 nova_compute[232433]: 2025-12-06 07:41:40.675 232437 DEBUG nova.virt.libvirt.driver [None req-a86f2b9e-4ea1-42e5-b973-19942d25fd14 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] [instance: b85968f0-ebd7-48f6-a932-c4e8da09381e] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Dec  6 02:41:40 np0005548731 nova_compute[232433]: 2025-12-06 07:41:40.675 232437 DEBUG nova.virt.libvirt.driver [None req-a86f2b9e-4ea1-42e5-b973-19942d25fd14 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] [instance: b85968f0-ebd7-48f6-a932-c4e8da09381e] Ensure instance console log exists: /var/lib/nova/instances/b85968f0-ebd7-48f6-a932-c4e8da09381e/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec  6 02:41:40 np0005548731 nova_compute[232433]: 2025-12-06 07:41:40.675 232437 DEBUG oslo_concurrency.lockutils [None req-a86f2b9e-4ea1-42e5-b973-19942d25fd14 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:41:40 np0005548731 nova_compute[232433]: 2025-12-06 07:41:40.676 232437 DEBUG oslo_concurrency.lockutils [None req-a86f2b9e-4ea1-42e5-b973-19942d25fd14 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:41:40 np0005548731 nova_compute[232433]: 2025-12-06 07:41:40.676 232437 DEBUG oslo_concurrency.lockutils [None req-a86f2b9e-4ea1-42e5-b973-19942d25fd14 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:41:40 np0005548731 nova_compute[232433]: 2025-12-06 07:41:40.678 232437 DEBUG nova.virt.libvirt.driver [None req-a86f2b9e-4ea1-42e5-b973-19942d25fd14 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] [instance: b85968f0-ebd7-48f6-a932-c4e8da09381e] Start _get_guest_xml network_info=[{"id": "f1f563a8-9001-419f-858a-0213c5d6607a", "address": "fa:16:3e:48:b4:88", "network": {"id": "3beede49-1cbb-425c-b1af-82f43dc57163", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-619240463-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.206", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b10aa03d68eb4d4799d53538521cc364", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf1f563a8-90", "ovs_interfaceid": "f1f563a8-9001-419f-858a-0213c5d6607a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='',container_format='bare',created_at=2025-12-06T07:41:06Z,direct_url=<?>,disk_format='raw',id=16647923-36b7-4c61-9de3-fb629f75ca61,min_disk=1,min_ram=0,name='tempest-ServerActionsTestOtherB-server-1033932756-shelved',owner='b10aa03d68eb4d4799d53538521cc364',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2025-12-06T07:41:21Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'device_type': 'disk', 'encryption_secret_uuid': None, 'disk_bus': 'virtio', 'boot_index': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'encryption_options': None, 'size': 0, 'encrypted': False, 'image_id': '6efab05d-c7cf-4770-a5c3-c806a2739063'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec  6 02:41:40 np0005548731 nova_compute[232433]: 2025-12-06 07:41:40.681 232437 WARNING nova.virt.libvirt.driver [None req-a86f2b9e-4ea1-42e5-b973-19942d25fd14 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  6 02:41:40 np0005548731 nova_compute[232433]: 2025-12-06 07:41:40.686 232437 DEBUG nova.virt.libvirt.host [None req-a86f2b9e-4ea1-42e5-b973-19942d25fd14 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec  6 02:41:40 np0005548731 nova_compute[232433]: 2025-12-06 07:41:40.686 232437 DEBUG nova.virt.libvirt.host [None req-a86f2b9e-4ea1-42e5-b973-19942d25fd14 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec  6 02:41:40 np0005548731 nova_compute[232433]: 2025-12-06 07:41:40.690 232437 DEBUG nova.virt.libvirt.host [None req-a86f2b9e-4ea1-42e5-b973-19942d25fd14 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec  6 02:41:40 np0005548731 nova_compute[232433]: 2025-12-06 07:41:40.691 232437 DEBUG nova.virt.libvirt.host [None req-a86f2b9e-4ea1-42e5-b973-19942d25fd14 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec  6 02:41:40 np0005548731 nova_compute[232433]: 2025-12-06 07:41:40.692 232437 DEBUG nova.virt.libvirt.driver [None req-a86f2b9e-4ea1-42e5-b973-19942d25fd14 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec  6 02:41:40 np0005548731 nova_compute[232433]: 2025-12-06 07:41:40.692 232437 DEBUG nova.virt.hardware [None req-a86f2b9e-4ea1-42e5-b973-19942d25fd14 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-06T06:56:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='25848a18-11d9-4f11-80b5-5d005675c76d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='',container_format='bare',created_at=2025-12-06T07:41:06Z,direct_url=<?>,disk_format='raw',id=16647923-36b7-4c61-9de3-fb629f75ca61,min_disk=1,min_ram=0,name='tempest-ServerActionsTestOtherB-server-1033932756-shelved',owner='b10aa03d68eb4d4799d53538521cc364',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2025-12-06T07:41:21Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec  6 02:41:40 np0005548731 nova_compute[232433]: 2025-12-06 07:41:40.693 232437 DEBUG nova.virt.hardware [None req-a86f2b9e-4ea1-42e5-b973-19942d25fd14 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec  6 02:41:40 np0005548731 nova_compute[232433]: 2025-12-06 07:41:40.693 232437 DEBUG nova.virt.hardware [None req-a86f2b9e-4ea1-42e5-b973-19942d25fd14 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec  6 02:41:40 np0005548731 nova_compute[232433]: 2025-12-06 07:41:40.693 232437 DEBUG nova.virt.hardware [None req-a86f2b9e-4ea1-42e5-b973-19942d25fd14 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec  6 02:41:40 np0005548731 nova_compute[232433]: 2025-12-06 07:41:40.693 232437 DEBUG nova.virt.hardware [None req-a86f2b9e-4ea1-42e5-b973-19942d25fd14 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec  6 02:41:40 np0005548731 nova_compute[232433]: 2025-12-06 07:41:40.694 232437 DEBUG nova.virt.hardware [None req-a86f2b9e-4ea1-42e5-b973-19942d25fd14 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec  6 02:41:40 np0005548731 nova_compute[232433]: 2025-12-06 07:41:40.694 232437 DEBUG nova.virt.hardware [None req-a86f2b9e-4ea1-42e5-b973-19942d25fd14 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec  6 02:41:40 np0005548731 nova_compute[232433]: 2025-12-06 07:41:40.694 232437 DEBUG nova.virt.hardware [None req-a86f2b9e-4ea1-42e5-b973-19942d25fd14 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec  6 02:41:40 np0005548731 nova_compute[232433]: 2025-12-06 07:41:40.694 232437 DEBUG nova.virt.hardware [None req-a86f2b9e-4ea1-42e5-b973-19942d25fd14 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec  6 02:41:40 np0005548731 nova_compute[232433]: 2025-12-06 07:41:40.695 232437 DEBUG nova.virt.hardware [None req-a86f2b9e-4ea1-42e5-b973-19942d25fd14 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec  6 02:41:40 np0005548731 nova_compute[232433]: 2025-12-06 07:41:40.695 232437 DEBUG nova.virt.hardware [None req-a86f2b9e-4ea1-42e5-b973-19942d25fd14 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec  6 02:41:40 np0005548731 nova_compute[232433]: 2025-12-06 07:41:40.695 232437 DEBUG nova.objects.instance [None req-a86f2b9e-4ea1-42e5-b973-19942d25fd14 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Lazy-loading 'vcpu_model' on Instance uuid b85968f0-ebd7-48f6-a932-c4e8da09381e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:41:40 np0005548731 nova_compute[232433]: 2025-12-06 07:41:40.717 232437 DEBUG oslo_concurrency.processutils [None req-a86f2b9e-4ea1-42e5-b973-19942d25fd14 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:41:40 np0005548731 nova_compute[232433]: 2025-12-06 07:41:40.981 232437 INFO nova.virt.libvirt.driver [None req-9f62c375-1be1-4ed8-a235-ae8e0f9176ef 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] [instance: e78be236-3d5f-4951-9149-8c7d9a6f0382] Creating config drive at /var/lib/nova/instances/e78be236-3d5f-4951-9149-8c7d9a6f0382/disk.config#033[00m
Dec  6 02:41:40 np0005548731 nova_compute[232433]: 2025-12-06 07:41:40.992 232437 DEBUG oslo_concurrency.processutils [None req-9f62c375-1be1-4ed8-a235-ae8e0f9176ef 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/e78be236-3d5f-4951-9149-8c7d9a6f0382/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp2d8n2hx9 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:41:41 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:41:41 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:41:41 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:41:41.089 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:41:41 np0005548731 nova_compute[232433]: 2025-12-06 07:41:41.143 232437 DEBUG oslo_concurrency.processutils [None req-9f62c375-1be1-4ed8-a235-ae8e0f9176ef 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/e78be236-3d5f-4951-9149-8c7d9a6f0382/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp2d8n2hx9" returned: 0 in 0.150s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:41:41 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  6 02:41:41 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/4180631999' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  6 02:41:41 np0005548731 nova_compute[232433]: 2025-12-06 07:41:41.167 232437 DEBUG nova.storage.rbd_utils [None req-9f62c375-1be1-4ed8-a235-ae8e0f9176ef 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] rbd image e78be236-3d5f-4951-9149-8c7d9a6f0382_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:41:41 np0005548731 nova_compute[232433]: 2025-12-06 07:41:41.170 232437 DEBUG oslo_concurrency.processutils [None req-9f62c375-1be1-4ed8-a235-ae8e0f9176ef 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/e78be236-3d5f-4951-9149-8c7d9a6f0382/disk.config e78be236-3d5f-4951-9149-8c7d9a6f0382_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:41:41 np0005548731 nova_compute[232433]: 2025-12-06 07:41:41.193 232437 DEBUG oslo_concurrency.processutils [None req-a86f2b9e-4ea1-42e5-b973-19942d25fd14 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:41:41 np0005548731 nova_compute[232433]: 2025-12-06 07:41:41.214 232437 DEBUG nova.storage.rbd_utils [None req-a86f2b9e-4ea1-42e5-b973-19942d25fd14 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] rbd image b85968f0-ebd7-48f6-a932-c4e8da09381e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:41:41 np0005548731 nova_compute[232433]: 2025-12-06 07:41:41.217 232437 DEBUG oslo_concurrency.processutils [None req-a86f2b9e-4ea1-42e5-b973-19942d25fd14 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:41:41 np0005548731 nova_compute[232433]: 2025-12-06 07:41:41.307 232437 DEBUG oslo_concurrency.processutils [None req-9f62c375-1be1-4ed8-a235-ae8e0f9176ef 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/e78be236-3d5f-4951-9149-8c7d9a6f0382/disk.config e78be236-3d5f-4951-9149-8c7d9a6f0382_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.137s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:41:41 np0005548731 nova_compute[232433]: 2025-12-06 07:41:41.310 232437 INFO nova.virt.libvirt.driver [None req-9f62c375-1be1-4ed8-a235-ae8e0f9176ef 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] [instance: e78be236-3d5f-4951-9149-8c7d9a6f0382] Deleting local config drive /var/lib/nova/instances/e78be236-3d5f-4951-9149-8c7d9a6f0382/disk.config because it was imported into RBD.#033[00m
Dec  6 02:41:41 np0005548731 kernel: tape8c2e4a9-a0: entered promiscuous mode
Dec  6 02:41:41 np0005548731 NetworkManager[49182]: <info>  [1765006901.3694] manager: (tape8c2e4a9-a0): new Tun device (/org/freedesktop/NetworkManager/Devices/298)
Dec  6 02:41:41 np0005548731 nova_compute[232433]: 2025-12-06 07:41:41.369 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:41:41 np0005548731 ovn_controller[133927]: 2025-12-06T07:41:41Z|00659|binding|INFO|Claiming lport e8c2e4a9-a078-43ea-a03d-81a749cc64f5 for this chassis.
Dec  6 02:41:41 np0005548731 ovn_controller[133927]: 2025-12-06T07:41:41Z|00660|binding|INFO|e8c2e4a9-a078-43ea-a03d-81a749cc64f5: Claiming fa:16:3e:1e:af:07 10.100.0.5
Dec  6 02:41:41 np0005548731 ovn_controller[133927]: 2025-12-06T07:41:41Z|00661|binding|INFO|Setting lport e8c2e4a9-a078-43ea-a03d-81a749cc64f5 ovn-installed in OVS
Dec  6 02:41:41 np0005548731 nova_compute[232433]: 2025-12-06 07:41:41.387 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:41:41 np0005548731 nova_compute[232433]: 2025-12-06 07:41:41.390 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:41:41 np0005548731 systemd-udevd[295068]: Network interface NamePolicy= disabled on kernel command line.
Dec  6 02:41:41 np0005548731 systemd-machined[195355]: New machine qemu-66-instance-0000008e.
Dec  6 02:41:41 np0005548731 ovn_controller[133927]: 2025-12-06T07:41:41Z|00662|binding|INFO|Setting lport e8c2e4a9-a078-43ea-a03d-81a749cc64f5 up in Southbound
Dec  6 02:41:41 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:41:41.400 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1e:af:07 10.100.0.5'], port_security=['fa:16:3e:1e:af:07 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'e78be236-3d5f-4951-9149-8c7d9a6f0382', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-35a27638-382c-4afb-83b0-edd6d7f4bca8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'eff1f6a1654b45079de20eddb830e76d', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'b5b8e710-017e-4606-9067-bf1900949ed3', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=80d3c5d2-eecc-4e72-bceb-41384af759f0, chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], logical_port=e8c2e4a9-a078-43ea-a03d-81a749cc64f5) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 02:41:41 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:41:41.402 143965 INFO neutron.agent.ovn.metadata.agent [-] Port e8c2e4a9-a078-43ea-a03d-81a749cc64f5 in datapath 35a27638-382c-4afb-83b0-edd6d7f4bca8 bound to our chassis#033[00m
Dec  6 02:41:41 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:41:41.403 143965 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 35a27638-382c-4afb-83b0-edd6d7f4bca8#033[00m
Dec  6 02:41:41 np0005548731 NetworkManager[49182]: <info>  [1765006901.4109] device (tape8c2e4a9-a0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec  6 02:41:41 np0005548731 NetworkManager[49182]: <info>  [1765006901.4118] device (tape8c2e4a9-a0): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec  6 02:41:41 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:41:41.413 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[355a7606-2e8c-40da-9e90-06e43b0c960a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:41:41 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:41:41.414 143965 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap35a27638-31 in ovnmeta-35a27638-382c-4afb-83b0-edd6d7f4bca8 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Dec  6 02:41:41 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:41:41.416 236668 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap35a27638-30 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Dec  6 02:41:41 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:41:41.416 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[b4a0c1ca-f41d-45b6-ae98-223f3f251ae0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:41:41 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:41:41.417 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[ee262928-b060-4a19-992c-e30549f6e7c4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:41:41 np0005548731 systemd[1]: Started Virtual Machine qemu-66-instance-0000008e.
Dec  6 02:41:41 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:41:41.427 144079 DEBUG oslo.privsep.daemon [-] privsep: reply[6dfe52d0-973e-497a-8d34-0715b6f39108]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:41:41 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:41:41.451 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[b27d3f0d-2f71-443e-a507-f36ba7065e8c]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:41:41 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:41:41.480 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[f8827222-57bc-4a1c-aa72-53518589af39]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:41:41 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:41:41.493 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[2193ec44-d422-4bca-94ea-1e5e25a1f03f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:41:41 np0005548731 NetworkManager[49182]: <info>  [1765006901.4948] manager: (tap35a27638-30): new Veth device (/org/freedesktop/NetworkManager/Devices/299)
Dec  6 02:41:41 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:41:41.536 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[f53cb3db-4763-409e-bc1e-b6efe1d55328]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:41:41 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:41:41.540 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[b97760bb-40a6-4acb-ab35-3090088f7b00]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:41:41 np0005548731 NetworkManager[49182]: <info>  [1765006901.5621] device (tap35a27638-30): carrier: link connected
Dec  6 02:41:41 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:41:41.567 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[78a5b2f7-6ba9-4bf8-b943-0c591b84c941]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:41:41 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:41:41.584 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[3f5b3934-460b-4647-be91-43b5dd446932]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap35a27638-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:19:c5:27'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 199], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 716356, 'reachable_time': 23261, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 295104, 'error': None, 'target': 'ovnmeta-35a27638-382c-4afb-83b0-edd6d7f4bca8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:41:41 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:41:41.600 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[b2fdc4ce-5882-4a5b-ad54-c79b4fe8bf79]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe19:c527'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 716356, 'tstamp': 716356}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 295105, 'error': None, 'target': 'ovnmeta-35a27638-382c-4afb-83b0-edd6d7f4bca8', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:41:41 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:41:41.618 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[808ba946-5f1e-4a38-967d-3b2c106f0413]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap35a27638-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:19:c5:27'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 199], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 716356, 'reachable_time': 23261, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 148, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 148, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 295106, 'error': None, 'target': 'ovnmeta-35a27638-382c-4afb-83b0-edd6d7f4bca8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:41:41 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  6 02:41:41 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3433887939' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  6 02:41:41 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:41:41.651 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[9670e0cc-6286-411b-9d88-e88b069338f1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:41:41 np0005548731 nova_compute[232433]: 2025-12-06 07:41:41.662 232437 DEBUG oslo_concurrency.processutils [None req-a86f2b9e-4ea1-42e5-b973-19942d25fd14 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.444s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:41:41 np0005548731 nova_compute[232433]: 2025-12-06 07:41:41.664 232437 DEBUG nova.virt.libvirt.vif [None req-a86f2b9e-4ea1-42e5-b973-19942d25fd14 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-12-06T07:39:44Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-1033932756',display_name='tempest-ServerActionsTestOtherB-server-1033932756',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-1033932756',id=135,image_ref='16647923-36b7-4c61-9de3-fb629f75ca61',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name='tempest-keypair-1961317761',keypairs=<?>,launch_index=0,launched_at=2025-12-06T07:39:58Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=4,progress=0,project_id='b10aa03d68eb4d4799d53538521cc364',ramdisk_id='',reservation_id='r-kbdg07ib',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio
',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherB-874907570',owner_user_name='tempest-ServerActionsTestOtherB-874907570-project-member',shelved_at='2025-12-06T07:41:21.405475',shelved_host='compute-0.ctlplane.example.com',shelved_image_id='16647923-36b7-4c61-9de3-fb629f75ca61'},tags=<?>,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:41:33Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='a70f6c3c5e2c402bb6fa0e0507e9b6dc',uuid=b85968f0-ebd7-48f6-a932-c4e8da09381e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='shelved_offloaded') vif={"id": "f1f563a8-9001-419f-858a-0213c5d6607a", "address": "fa:16:3e:48:b4:88", "network": {"id": "3beede49-1cbb-425c-b1af-82f43dc57163", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-619240463-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.206", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b10aa03d68eb4d4799d53538521cc364", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf1f563a8-90", "ovs_interfaceid": "f1f563a8-9001-419f-858a-0213c5d6607a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config 
/usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec  6 02:41:41 np0005548731 nova_compute[232433]: 2025-12-06 07:41:41.664 232437 DEBUG nova.network.os_vif_util [None req-a86f2b9e-4ea1-42e5-b973-19942d25fd14 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Converting VIF {"id": "f1f563a8-9001-419f-858a-0213c5d6607a", "address": "fa:16:3e:48:b4:88", "network": {"id": "3beede49-1cbb-425c-b1af-82f43dc57163", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-619240463-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.206", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b10aa03d68eb4d4799d53538521cc364", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf1f563a8-90", "ovs_interfaceid": "f1f563a8-9001-419f-858a-0213c5d6607a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 02:41:41 np0005548731 nova_compute[232433]: 2025-12-06 07:41:41.666 232437 DEBUG nova.network.os_vif_util [None req-a86f2b9e-4ea1-42e5-b973-19942d25fd14 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:48:b4:88,bridge_name='br-int',has_traffic_filtering=True,id=f1f563a8-9001-419f-858a-0213c5d6607a,network=Network(3beede49-1cbb-425c-b1af-82f43dc57163),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf1f563a8-90') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 02:41:41 np0005548731 nova_compute[232433]: 2025-12-06 07:41:41.667 232437 DEBUG nova.objects.instance [None req-a86f2b9e-4ea1-42e5-b973-19942d25fd14 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Lazy-loading 'pci_devices' on Instance uuid b85968f0-ebd7-48f6-a932-c4e8da09381e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:41:41 np0005548731 nova_compute[232433]: 2025-12-06 07:41:41.682 232437 DEBUG nova.virt.libvirt.driver [None req-a86f2b9e-4ea1-42e5-b973-19942d25fd14 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] [instance: b85968f0-ebd7-48f6-a932-c4e8da09381e] End _get_guest_xml xml=<domain type="kvm">
Dec  6 02:41:41 np0005548731 nova_compute[232433]:  <uuid>b85968f0-ebd7-48f6-a932-c4e8da09381e</uuid>
Dec  6 02:41:41 np0005548731 nova_compute[232433]:  <name>instance-00000087</name>
Dec  6 02:41:41 np0005548731 nova_compute[232433]:  <memory>131072</memory>
Dec  6 02:41:41 np0005548731 nova_compute[232433]:  <vcpu>1</vcpu>
Dec  6 02:41:41 np0005548731 nova_compute[232433]:  <metadata>
Dec  6 02:41:41 np0005548731 nova_compute[232433]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec  6 02:41:41 np0005548731 nova_compute[232433]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec  6 02:41:41 np0005548731 nova_compute[232433]:      <nova:name>tempest-ServerActionsTestOtherB-server-1033932756</nova:name>
Dec  6 02:41:41 np0005548731 nova_compute[232433]:      <nova:creationTime>2025-12-06 07:41:40</nova:creationTime>
Dec  6 02:41:41 np0005548731 nova_compute[232433]:      <nova:flavor name="m1.nano">
Dec  6 02:41:41 np0005548731 nova_compute[232433]:        <nova:memory>128</nova:memory>
Dec  6 02:41:41 np0005548731 nova_compute[232433]:        <nova:disk>1</nova:disk>
Dec  6 02:41:41 np0005548731 nova_compute[232433]:        <nova:swap>0</nova:swap>
Dec  6 02:41:41 np0005548731 nova_compute[232433]:        <nova:ephemeral>0</nova:ephemeral>
Dec  6 02:41:41 np0005548731 nova_compute[232433]:        <nova:vcpus>1</nova:vcpus>
Dec  6 02:41:41 np0005548731 nova_compute[232433]:      </nova:flavor>
Dec  6 02:41:41 np0005548731 nova_compute[232433]:      <nova:owner>
Dec  6 02:41:41 np0005548731 nova_compute[232433]:        <nova:user uuid="a70f6c3c5e2c402bb6fa0e0507e9b6dc">tempest-ServerActionsTestOtherB-874907570-project-member</nova:user>
Dec  6 02:41:41 np0005548731 nova_compute[232433]:        <nova:project uuid="b10aa03d68eb4d4799d53538521cc364">tempest-ServerActionsTestOtherB-874907570</nova:project>
Dec  6 02:41:41 np0005548731 nova_compute[232433]:      </nova:owner>
Dec  6 02:41:41 np0005548731 nova_compute[232433]:      <nova:root type="image" uuid="16647923-36b7-4c61-9de3-fb629f75ca61"/>
Dec  6 02:41:41 np0005548731 nova_compute[232433]:      <nova:ports>
Dec  6 02:41:41 np0005548731 nova_compute[232433]:        <nova:port uuid="f1f563a8-9001-419f-858a-0213c5d6607a">
Dec  6 02:41:41 np0005548731 nova_compute[232433]:          <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Dec  6 02:41:41 np0005548731 nova_compute[232433]:        </nova:port>
Dec  6 02:41:41 np0005548731 nova_compute[232433]:      </nova:ports>
Dec  6 02:41:41 np0005548731 nova_compute[232433]:    </nova:instance>
Dec  6 02:41:41 np0005548731 nova_compute[232433]:  </metadata>
Dec  6 02:41:41 np0005548731 nova_compute[232433]:  <sysinfo type="smbios">
Dec  6 02:41:41 np0005548731 nova_compute[232433]:    <system>
Dec  6 02:41:41 np0005548731 nova_compute[232433]:      <entry name="manufacturer">RDO</entry>
Dec  6 02:41:41 np0005548731 nova_compute[232433]:      <entry name="product">OpenStack Compute</entry>
Dec  6 02:41:41 np0005548731 nova_compute[232433]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec  6 02:41:41 np0005548731 nova_compute[232433]:      <entry name="serial">b85968f0-ebd7-48f6-a932-c4e8da09381e</entry>
Dec  6 02:41:41 np0005548731 nova_compute[232433]:      <entry name="uuid">b85968f0-ebd7-48f6-a932-c4e8da09381e</entry>
Dec  6 02:41:41 np0005548731 nova_compute[232433]:      <entry name="family">Virtual Machine</entry>
Dec  6 02:41:41 np0005548731 nova_compute[232433]:    </system>
Dec  6 02:41:41 np0005548731 nova_compute[232433]:  </sysinfo>
Dec  6 02:41:41 np0005548731 nova_compute[232433]:  <os>
Dec  6 02:41:41 np0005548731 nova_compute[232433]:    <type arch="x86_64" machine="q35">hvm</type>
Dec  6 02:41:41 np0005548731 nova_compute[232433]:    <boot dev="hd"/>
Dec  6 02:41:41 np0005548731 nova_compute[232433]:    <smbios mode="sysinfo"/>
Dec  6 02:41:41 np0005548731 nova_compute[232433]:  </os>
Dec  6 02:41:41 np0005548731 nova_compute[232433]:  <features>
Dec  6 02:41:41 np0005548731 nova_compute[232433]:    <acpi/>
Dec  6 02:41:41 np0005548731 nova_compute[232433]:    <apic/>
Dec  6 02:41:41 np0005548731 nova_compute[232433]:    <vmcoreinfo/>
Dec  6 02:41:41 np0005548731 nova_compute[232433]:  </features>
Dec  6 02:41:41 np0005548731 nova_compute[232433]:  <clock offset="utc">
Dec  6 02:41:41 np0005548731 nova_compute[232433]:    <timer name="pit" tickpolicy="delay"/>
Dec  6 02:41:41 np0005548731 nova_compute[232433]:    <timer name="rtc" tickpolicy="catchup"/>
Dec  6 02:41:41 np0005548731 nova_compute[232433]:    <timer name="hpet" present="no"/>
Dec  6 02:41:41 np0005548731 nova_compute[232433]:  </clock>
Dec  6 02:41:41 np0005548731 nova_compute[232433]:  <cpu mode="custom" match="exact">
Dec  6 02:41:41 np0005548731 nova_compute[232433]:    <model>Nehalem</model>
Dec  6 02:41:41 np0005548731 nova_compute[232433]:    <topology sockets="1" cores="1" threads="1"/>
Dec  6 02:41:41 np0005548731 nova_compute[232433]:  </cpu>
Dec  6 02:41:41 np0005548731 nova_compute[232433]:  <devices>
Dec  6 02:41:41 np0005548731 nova_compute[232433]:    <disk type="network" device="disk">
Dec  6 02:41:41 np0005548731 nova_compute[232433]:      <driver type="raw" cache="none"/>
Dec  6 02:41:41 np0005548731 nova_compute[232433]:      <source protocol="rbd" name="vms/b85968f0-ebd7-48f6-a932-c4e8da09381e_disk">
Dec  6 02:41:41 np0005548731 nova_compute[232433]:        <host name="192.168.122.100" port="6789"/>
Dec  6 02:41:41 np0005548731 nova_compute[232433]:        <host name="192.168.122.102" port="6789"/>
Dec  6 02:41:41 np0005548731 nova_compute[232433]:        <host name="192.168.122.101" port="6789"/>
Dec  6 02:41:41 np0005548731 nova_compute[232433]:      </source>
Dec  6 02:41:41 np0005548731 nova_compute[232433]:      <auth username="openstack">
Dec  6 02:41:41 np0005548731 nova_compute[232433]:        <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec  6 02:41:41 np0005548731 nova_compute[232433]:      </auth>
Dec  6 02:41:41 np0005548731 nova_compute[232433]:      <target dev="vda" bus="virtio"/>
Dec  6 02:41:41 np0005548731 nova_compute[232433]:    </disk>
Dec  6 02:41:41 np0005548731 nova_compute[232433]:    <disk type="network" device="cdrom">
Dec  6 02:41:41 np0005548731 nova_compute[232433]:      <driver type="raw" cache="none"/>
Dec  6 02:41:41 np0005548731 nova_compute[232433]:      <source protocol="rbd" name="vms/b85968f0-ebd7-48f6-a932-c4e8da09381e_disk.config">
Dec  6 02:41:41 np0005548731 nova_compute[232433]:        <host name="192.168.122.100" port="6789"/>
Dec  6 02:41:41 np0005548731 nova_compute[232433]:        <host name="192.168.122.102" port="6789"/>
Dec  6 02:41:41 np0005548731 nova_compute[232433]:        <host name="192.168.122.101" port="6789"/>
Dec  6 02:41:41 np0005548731 nova_compute[232433]:      </source>
Dec  6 02:41:41 np0005548731 nova_compute[232433]:      <auth username="openstack">
Dec  6 02:41:41 np0005548731 nova_compute[232433]:        <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec  6 02:41:41 np0005548731 nova_compute[232433]:      </auth>
Dec  6 02:41:41 np0005548731 nova_compute[232433]:      <target dev="sda" bus="sata"/>
Dec  6 02:41:41 np0005548731 nova_compute[232433]:    </disk>
Dec  6 02:41:41 np0005548731 nova_compute[232433]:    <interface type="ethernet">
Dec  6 02:41:41 np0005548731 nova_compute[232433]:      <mac address="fa:16:3e:48:b4:88"/>
Dec  6 02:41:41 np0005548731 nova_compute[232433]:      <model type="virtio"/>
Dec  6 02:41:41 np0005548731 nova_compute[232433]:      <driver name="vhost" rx_queue_size="512"/>
Dec  6 02:41:41 np0005548731 nova_compute[232433]:      <mtu size="1442"/>
Dec  6 02:41:41 np0005548731 nova_compute[232433]:      <target dev="tapf1f563a8-90"/>
Dec  6 02:41:41 np0005548731 nova_compute[232433]:    </interface>
Dec  6 02:41:41 np0005548731 nova_compute[232433]:    <serial type="pty">
Dec  6 02:41:41 np0005548731 nova_compute[232433]:      <log file="/var/lib/nova/instances/b85968f0-ebd7-48f6-a932-c4e8da09381e/console.log" append="off"/>
Dec  6 02:41:41 np0005548731 nova_compute[232433]:    </serial>
Dec  6 02:41:41 np0005548731 nova_compute[232433]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec  6 02:41:41 np0005548731 nova_compute[232433]:    <video>
Dec  6 02:41:41 np0005548731 nova_compute[232433]:      <model type="virtio"/>
Dec  6 02:41:41 np0005548731 nova_compute[232433]:    </video>
Dec  6 02:41:41 np0005548731 nova_compute[232433]:    <input type="tablet" bus="usb"/>
Dec  6 02:41:41 np0005548731 nova_compute[232433]:    <input type="keyboard" bus="usb"/>
Dec  6 02:41:41 np0005548731 nova_compute[232433]:    <rng model="virtio">
Dec  6 02:41:41 np0005548731 nova_compute[232433]:      <backend model="random">/dev/urandom</backend>
Dec  6 02:41:41 np0005548731 nova_compute[232433]:    </rng>
Dec  6 02:41:41 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root"/>
Dec  6 02:41:41 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:41:41 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:41:41 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:41:41 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:41:41 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:41:41 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:41:41 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:41:41 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:41:41 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:41:41 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:41:41 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:41:41 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:41:41 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:41:41 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:41:41 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:41:41 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:41:41 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:41:41 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:41:41 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:41:41 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:41:41 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:41:41 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:41:41 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:41:41 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:41:41 np0005548731 nova_compute[232433]:    <controller type="usb" index="0"/>
Dec  6 02:41:41 np0005548731 nova_compute[232433]:    <memballoon model="virtio">
Dec  6 02:41:41 np0005548731 nova_compute[232433]:      <stats period="10"/>
Dec  6 02:41:41 np0005548731 nova_compute[232433]:    </memballoon>
Dec  6 02:41:41 np0005548731 nova_compute[232433]:  </devices>
Dec  6 02:41:41 np0005548731 nova_compute[232433]: </domain>
Dec  6 02:41:41 np0005548731 nova_compute[232433]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec  6 02:41:41 np0005548731 nova_compute[232433]: 2025-12-06 07:41:41.684 232437 DEBUG nova.compute.manager [None req-a86f2b9e-4ea1-42e5-b973-19942d25fd14 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] [instance: b85968f0-ebd7-48f6-a932-c4e8da09381e] Preparing to wait for external event network-vif-plugged-f1f563a8-9001-419f-858a-0213c5d6607a prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec  6 02:41:41 np0005548731 nova_compute[232433]: 2025-12-06 07:41:41.684 232437 DEBUG oslo_concurrency.lockutils [None req-a86f2b9e-4ea1-42e5-b973-19942d25fd14 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Acquiring lock "b85968f0-ebd7-48f6-a932-c4e8da09381e-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:41:41 np0005548731 nova_compute[232433]: 2025-12-06 07:41:41.684 232437 DEBUG oslo_concurrency.lockutils [None req-a86f2b9e-4ea1-42e5-b973-19942d25fd14 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Lock "b85968f0-ebd7-48f6-a932-c4e8da09381e-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:41:41 np0005548731 nova_compute[232433]: 2025-12-06 07:41:41.684 232437 DEBUG oslo_concurrency.lockutils [None req-a86f2b9e-4ea1-42e5-b973-19942d25fd14 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Lock "b85968f0-ebd7-48f6-a932-c4e8da09381e-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:41:41 np0005548731 nova_compute[232433]: 2025-12-06 07:41:41.685 232437 DEBUG nova.virt.libvirt.vif [None req-a86f2b9e-4ea1-42e5-b973-19942d25fd14 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-12-06T07:39:44Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-1033932756',display_name='tempest-ServerActionsTestOtherB-server-1033932756',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-1033932756',id=135,image_ref='16647923-36b7-4c61-9de3-fb629f75ca61',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name='tempest-keypair-1961317761',keypairs=<?>,launch_index=0,launched_at=2025-12-06T07:39:58Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=4,progress=0,project_id='b10aa03d68eb4d4799d53538521cc364',ramdisk_id='',reservation_id='r-kbdg07ib',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_mod
el='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherB-874907570',owner_user_name='tempest-ServerActionsTestOtherB-874907570-project-member',shelved_at='2025-12-06T07:41:21.405475',shelved_host='compute-0.ctlplane.example.com',shelved_image_id='16647923-36b7-4c61-9de3-fb629f75ca61'},tags=<?>,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:41:33Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='a70f6c3c5e2c402bb6fa0e0507e9b6dc',uuid=b85968f0-ebd7-48f6-a932-c4e8da09381e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='shelved_offloaded') vif={"id": "f1f563a8-9001-419f-858a-0213c5d6607a", "address": "fa:16:3e:48:b4:88", "network": {"id": "3beede49-1cbb-425c-b1af-82f43dc57163", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-619240463-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.206", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b10aa03d68eb4d4799d53538521cc364", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf1f563a8-90", "ovs_interfaceid": "f1f563a8-9001-419f-858a-0213c5d6607a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec  6 02:41:41 np0005548731 nova_compute[232433]: 2025-12-06 07:41:41.686 232437 DEBUG nova.network.os_vif_util [None req-a86f2b9e-4ea1-42e5-b973-19942d25fd14 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Converting VIF {"id": "f1f563a8-9001-419f-858a-0213c5d6607a", "address": "fa:16:3e:48:b4:88", "network": {"id": "3beede49-1cbb-425c-b1af-82f43dc57163", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-619240463-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.206", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b10aa03d68eb4d4799d53538521cc364", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf1f563a8-90", "ovs_interfaceid": "f1f563a8-9001-419f-858a-0213c5d6607a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 02:41:41 np0005548731 nova_compute[232433]: 2025-12-06 07:41:41.686 232437 DEBUG nova.network.os_vif_util [None req-a86f2b9e-4ea1-42e5-b973-19942d25fd14 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:48:b4:88,bridge_name='br-int',has_traffic_filtering=True,id=f1f563a8-9001-419f-858a-0213c5d6607a,network=Network(3beede49-1cbb-425c-b1af-82f43dc57163),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf1f563a8-90') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 02:41:41 np0005548731 nova_compute[232433]: 2025-12-06 07:41:41.687 232437 DEBUG os_vif [None req-a86f2b9e-4ea1-42e5-b973-19942d25fd14 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:48:b4:88,bridge_name='br-int',has_traffic_filtering=True,id=f1f563a8-9001-419f-858a-0213c5d6607a,network=Network(3beede49-1cbb-425c-b1af-82f43dc57163),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf1f563a8-90') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec  6 02:41:41 np0005548731 nova_compute[232433]: 2025-12-06 07:41:41.687 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:41:41 np0005548731 nova_compute[232433]: 2025-12-06 07:41:41.688 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:41:41 np0005548731 nova_compute[232433]: 2025-12-06 07:41:41.688 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  6 02:41:41 np0005548731 nova_compute[232433]: 2025-12-06 07:41:41.690 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:41:41 np0005548731 nova_compute[232433]: 2025-12-06 07:41:41.691 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf1f563a8-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:41:41 np0005548731 nova_compute[232433]: 2025-12-06 07:41:41.691 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapf1f563a8-90, col_values=(('external_ids', {'iface-id': 'f1f563a8-9001-419f-858a-0213c5d6607a', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:48:b4:88', 'vm-uuid': 'b85968f0-ebd7-48f6-a932-c4e8da09381e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:41:41 np0005548731 NetworkManager[49182]: <info>  [1765006901.7063] manager: (tapf1f563a8-90): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/300)
Dec  6 02:41:41 np0005548731 nova_compute[232433]: 2025-12-06 07:41:41.706 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:41:41 np0005548731 nova_compute[232433]: 2025-12-06 07:41:41.708 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  6 02:41:41 np0005548731 nova_compute[232433]: 2025-12-06 07:41:41.710 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:41:41 np0005548731 nova_compute[232433]: 2025-12-06 07:41:41.711 232437 INFO os_vif [None req-a86f2b9e-4ea1-42e5-b973-19942d25fd14 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:48:b4:88,bridge_name='br-int',has_traffic_filtering=True,id=f1f563a8-9001-419f-858a-0213c5d6607a,network=Network(3beede49-1cbb-425c-b1af-82f43dc57163),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf1f563a8-90')#033[00m
Dec  6 02:41:41 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:41:41.714 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[6040a84f-6c4c-4cd9-aa1b-e96b7bf6fd7b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:41:41 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:41:41.715 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap35a27638-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:41:41 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:41:41.716 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  6 02:41:41 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:41:41.716 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap35a27638-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:41:41 np0005548731 nova_compute[232433]: 2025-12-06 07:41:41.718 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:41:41 np0005548731 NetworkManager[49182]: <info>  [1765006901.7187] manager: (tap35a27638-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/301)
Dec  6 02:41:41 np0005548731 kernel: tap35a27638-30: entered promiscuous mode
Dec  6 02:41:41 np0005548731 nova_compute[232433]: 2025-12-06 07:41:41.722 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:41:41 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:41:41.724 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap35a27638-30, col_values=(('external_ids', {'iface-id': '5e371956-96bf-49df-b4a8-97154044dc54'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:41:41 np0005548731 ovn_controller[133927]: 2025-12-06T07:41:41Z|00663|binding|INFO|Releasing lport 5e371956-96bf-49df-b4a8-97154044dc54 from this chassis (sb_readonly=0)
Dec  6 02:41:41 np0005548731 nova_compute[232433]: 2025-12-06 07:41:41.727 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:41:41 np0005548731 nova_compute[232433]: 2025-12-06 07:41:41.741 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:41:41 np0005548731 nova_compute[232433]: 2025-12-06 07:41:41.744 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:41:41 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:41:41.745 143965 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/35a27638-382c-4afb-83b0-edd6d7f4bca8.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/35a27638-382c-4afb-83b0-edd6d7f4bca8.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Dec  6 02:41:41 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:41:41.746 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[c0e7dd7a-f44f-47c6-85e8-508b9d38e4b3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:41:41 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:41:41.747 143965 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec  6 02:41:41 np0005548731 ovn_metadata_agent[143960]: global
Dec  6 02:41:41 np0005548731 ovn_metadata_agent[143960]:    log         /dev/log local0 debug
Dec  6 02:41:41 np0005548731 ovn_metadata_agent[143960]:    log-tag     haproxy-metadata-proxy-35a27638-382c-4afb-83b0-edd6d7f4bca8
Dec  6 02:41:41 np0005548731 ovn_metadata_agent[143960]:    user        root
Dec  6 02:41:41 np0005548731 ovn_metadata_agent[143960]:    group       root
Dec  6 02:41:41 np0005548731 ovn_metadata_agent[143960]:    maxconn     1024
Dec  6 02:41:41 np0005548731 ovn_metadata_agent[143960]:    pidfile     /var/lib/neutron/external/pids/35a27638-382c-4afb-83b0-edd6d7f4bca8.pid.haproxy
Dec  6 02:41:41 np0005548731 ovn_metadata_agent[143960]:    daemon
Dec  6 02:41:41 np0005548731 ovn_metadata_agent[143960]: 
Dec  6 02:41:41 np0005548731 ovn_metadata_agent[143960]: defaults
Dec  6 02:41:41 np0005548731 ovn_metadata_agent[143960]:    log global
Dec  6 02:41:41 np0005548731 ovn_metadata_agent[143960]:    mode http
Dec  6 02:41:41 np0005548731 ovn_metadata_agent[143960]:    option httplog
Dec  6 02:41:41 np0005548731 ovn_metadata_agent[143960]:    option dontlognull
Dec  6 02:41:41 np0005548731 ovn_metadata_agent[143960]:    option http-server-close
Dec  6 02:41:41 np0005548731 ovn_metadata_agent[143960]:    option forwardfor
Dec  6 02:41:41 np0005548731 ovn_metadata_agent[143960]:    retries                 3
Dec  6 02:41:41 np0005548731 ovn_metadata_agent[143960]:    timeout http-request    30s
Dec  6 02:41:41 np0005548731 ovn_metadata_agent[143960]:    timeout connect         30s
Dec  6 02:41:41 np0005548731 ovn_metadata_agent[143960]:    timeout client          32s
Dec  6 02:41:41 np0005548731 ovn_metadata_agent[143960]:    timeout server          32s
Dec  6 02:41:41 np0005548731 ovn_metadata_agent[143960]:    timeout http-keep-alive 30s
Dec  6 02:41:41 np0005548731 ovn_metadata_agent[143960]: 
Dec  6 02:41:41 np0005548731 ovn_metadata_agent[143960]: 
Dec  6 02:41:41 np0005548731 ovn_metadata_agent[143960]: listen listener
Dec  6 02:41:41 np0005548731 ovn_metadata_agent[143960]:    bind 169.254.169.254:80
Dec  6 02:41:41 np0005548731 ovn_metadata_agent[143960]:    server metadata /var/lib/neutron/metadata_proxy
Dec  6 02:41:41 np0005548731 ovn_metadata_agent[143960]:    http-request add-header X-OVN-Network-ID 35a27638-382c-4afb-83b0-edd6d7f4bca8
Dec  6 02:41:41 np0005548731 ovn_metadata_agent[143960]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Dec  6 02:41:41 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:41:41.747 143965 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-35a27638-382c-4afb-83b0-edd6d7f4bca8', 'env', 'PROCESS_TAG=haproxy-35a27638-382c-4afb-83b0-edd6d7f4bca8', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/35a27638-382c-4afb-83b0-edd6d7f4bca8.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Dec  6 02:41:41 np0005548731 nova_compute[232433]: 2025-12-06 07:41:41.777 232437 DEBUG nova.virt.libvirt.driver [None req-a86f2b9e-4ea1-42e5-b973-19942d25fd14 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  6 02:41:41 np0005548731 nova_compute[232433]: 2025-12-06 07:41:41.777 232437 DEBUG nova.virt.libvirt.driver [None req-a86f2b9e-4ea1-42e5-b973-19942d25fd14 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  6 02:41:41 np0005548731 nova_compute[232433]: 2025-12-06 07:41:41.777 232437 DEBUG nova.virt.libvirt.driver [None req-a86f2b9e-4ea1-42e5-b973-19942d25fd14 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] No VIF found with MAC fa:16:3e:48:b4:88, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec  6 02:41:41 np0005548731 nova_compute[232433]: 2025-12-06 07:41:41.778 232437 INFO nova.virt.libvirt.driver [None req-a86f2b9e-4ea1-42e5-b973-19942d25fd14 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] [instance: b85968f0-ebd7-48f6-a932-c4e8da09381e] Using config drive#033[00m
Dec  6 02:41:41 np0005548731 nova_compute[232433]: 2025-12-06 07:41:41.801 232437 DEBUG nova.storage.rbd_utils [None req-a86f2b9e-4ea1-42e5-b973-19942d25fd14 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] rbd image b85968f0-ebd7-48f6-a932-c4e8da09381e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:41:41 np0005548731 nova_compute[232433]: 2025-12-06 07:41:41.811 232437 DEBUG nova.compute.manager [req-337045a2-5138-4400-a6bb-50387ff54a72 req-8ce595d8-c57b-46a5-b53c-2b801b685ed5 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: e78be236-3d5f-4951-9149-8c7d9a6f0382] Received event network-vif-plugged-e8c2e4a9-a078-43ea-a03d-81a749cc64f5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:41:41 np0005548731 nova_compute[232433]: 2025-12-06 07:41:41.811 232437 DEBUG oslo_concurrency.lockutils [req-337045a2-5138-4400-a6bb-50387ff54a72 req-8ce595d8-c57b-46a5-b53c-2b801b685ed5 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "e78be236-3d5f-4951-9149-8c7d9a6f0382-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:41:41 np0005548731 nova_compute[232433]: 2025-12-06 07:41:41.812 232437 DEBUG oslo_concurrency.lockutils [req-337045a2-5138-4400-a6bb-50387ff54a72 req-8ce595d8-c57b-46a5-b53c-2b801b685ed5 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "e78be236-3d5f-4951-9149-8c7d9a6f0382-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:41:41 np0005548731 nova_compute[232433]: 2025-12-06 07:41:41.812 232437 DEBUG oslo_concurrency.lockutils [req-337045a2-5138-4400-a6bb-50387ff54a72 req-8ce595d8-c57b-46a5-b53c-2b801b685ed5 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "e78be236-3d5f-4951-9149-8c7d9a6f0382-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:41:41 np0005548731 nova_compute[232433]: 2025-12-06 07:41:41.812 232437 DEBUG nova.compute.manager [req-337045a2-5138-4400-a6bb-50387ff54a72 req-8ce595d8-c57b-46a5-b53c-2b801b685ed5 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: e78be236-3d5f-4951-9149-8c7d9a6f0382] Processing event network-vif-plugged-e8c2e4a9-a078-43ea-a03d-81a749cc64f5 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Dec  6 02:41:41 np0005548731 nova_compute[232433]: 2025-12-06 07:41:41.821 232437 DEBUG nova.virt.driver [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Emitting event <LifecycleEvent: 1765006901.8215225, e78be236-3d5f-4951-9149-8c7d9a6f0382 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 02:41:41 np0005548731 nova_compute[232433]: 2025-12-06 07:41:41.822 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: e78be236-3d5f-4951-9149-8c7d9a6f0382] VM Started (Lifecycle Event)#033[00m
Dec  6 02:41:41 np0005548731 nova_compute[232433]: 2025-12-06 07:41:41.823 232437 DEBUG nova.compute.manager [None req-9f62c375-1be1-4ed8-a235-ae8e0f9176ef 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] [instance: e78be236-3d5f-4951-9149-8c7d9a6f0382] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec  6 02:41:41 np0005548731 nova_compute[232433]: 2025-12-06 07:41:41.826 232437 DEBUG nova.virt.libvirt.driver [None req-9f62c375-1be1-4ed8-a235-ae8e0f9176ef 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] [instance: e78be236-3d5f-4951-9149-8c7d9a6f0382] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Dec  6 02:41:41 np0005548731 nova_compute[232433]: 2025-12-06 07:41:41.836 232437 INFO nova.virt.libvirt.driver [-] [instance: e78be236-3d5f-4951-9149-8c7d9a6f0382] Instance spawned successfully.#033[00m
Dec  6 02:41:41 np0005548731 nova_compute[232433]: 2025-12-06 07:41:41.837 232437 DEBUG nova.virt.libvirt.driver [None req-9f62c375-1be1-4ed8-a235-ae8e0f9176ef 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] [instance: e78be236-3d5f-4951-9149-8c7d9a6f0382] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Dec  6 02:41:41 np0005548731 nova_compute[232433]: 2025-12-06 07:41:41.841 232437 DEBUG nova.network.neutron [req-d3fc322e-45e7-4707-b465-2fbb48cbe3de req-8e0cbce1-f265-4d87-a1f6-176b0c23b5b4 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: e78be236-3d5f-4951-9149-8c7d9a6f0382] Updated VIF entry in instance network info cache for port e8c2e4a9-a078-43ea-a03d-81a749cc64f5. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  6 02:41:41 np0005548731 nova_compute[232433]: 2025-12-06 07:41:41.842 232437 DEBUG nova.network.neutron [req-d3fc322e-45e7-4707-b465-2fbb48cbe3de req-8e0cbce1-f265-4d87-a1f6-176b0c23b5b4 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: e78be236-3d5f-4951-9149-8c7d9a6f0382] Updating instance_info_cache with network_info: [{"id": "e8c2e4a9-a078-43ea-a03d-81a749cc64f5", "address": "fa:16:3e:1e:af:07", "network": {"id": "35a27638-382c-4afb-83b0-edd6d7f4bca8", "bridge": "br-int", "label": "tempest-ServersTestJSON-1603796324-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eff1f6a1654b45079de20eddb830e76d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape8c2e4a9-a0", "ovs_interfaceid": "e8c2e4a9-a078-43ea-a03d-81a749cc64f5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 02:41:41 np0005548731 nova_compute[232433]: 2025-12-06 07:41:41.866 232437 DEBUG nova.objects.instance [None req-a86f2b9e-4ea1-42e5-b973-19942d25fd14 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Lazy-loading 'ec2_ids' on Instance uuid b85968f0-ebd7-48f6-a932-c4e8da09381e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:41:41 np0005548731 nova_compute[232433]: 2025-12-06 07:41:41.874 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: e78be236-3d5f-4951-9149-8c7d9a6f0382] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:41:41 np0005548731 nova_compute[232433]: 2025-12-06 07:41:41.881 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: e78be236-3d5f-4951-9149-8c7d9a6f0382] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  6 02:41:41 np0005548731 nova_compute[232433]: 2025-12-06 07:41:41.888 232437 DEBUG nova.virt.libvirt.driver [None req-9f62c375-1be1-4ed8-a235-ae8e0f9176ef 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] [instance: e78be236-3d5f-4951-9149-8c7d9a6f0382] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 02:41:41 np0005548731 nova_compute[232433]: 2025-12-06 07:41:41.888 232437 DEBUG nova.virt.libvirt.driver [None req-9f62c375-1be1-4ed8-a235-ae8e0f9176ef 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] [instance: e78be236-3d5f-4951-9149-8c7d9a6f0382] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 02:41:41 np0005548731 nova_compute[232433]: 2025-12-06 07:41:41.889 232437 DEBUG nova.virt.libvirt.driver [None req-9f62c375-1be1-4ed8-a235-ae8e0f9176ef 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] [instance: e78be236-3d5f-4951-9149-8c7d9a6f0382] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 02:41:41 np0005548731 nova_compute[232433]: 2025-12-06 07:41:41.890 232437 DEBUG nova.virt.libvirt.driver [None req-9f62c375-1be1-4ed8-a235-ae8e0f9176ef 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] [instance: e78be236-3d5f-4951-9149-8c7d9a6f0382] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 02:41:41 np0005548731 nova_compute[232433]: 2025-12-06 07:41:41.891 232437 DEBUG nova.virt.libvirt.driver [None req-9f62c375-1be1-4ed8-a235-ae8e0f9176ef 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] [instance: e78be236-3d5f-4951-9149-8c7d9a6f0382] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 02:41:41 np0005548731 nova_compute[232433]: 2025-12-06 07:41:41.891 232437 DEBUG nova.virt.libvirt.driver [None req-9f62c375-1be1-4ed8-a235-ae8e0f9176ef 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] [instance: e78be236-3d5f-4951-9149-8c7d9a6f0382] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 02:41:41 np0005548731 nova_compute[232433]: 2025-12-06 07:41:41.897 232437 DEBUG oslo_concurrency.lockutils [req-d3fc322e-45e7-4707-b465-2fbb48cbe3de req-8e0cbce1-f265-4d87-a1f6-176b0c23b5b4 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-e78be236-3d5f-4951-9149-8c7d9a6f0382" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 02:41:41 np0005548731 nova_compute[232433]: 2025-12-06 07:41:41.917 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: e78be236-3d5f-4951-9149-8c7d9a6f0382] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  6 02:41:41 np0005548731 nova_compute[232433]: 2025-12-06 07:41:41.918 232437 DEBUG nova.virt.driver [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Emitting event <LifecycleEvent: 1765006901.8216507, e78be236-3d5f-4951-9149-8c7d9a6f0382 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 02:41:41 np0005548731 nova_compute[232433]: 2025-12-06 07:41:41.918 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: e78be236-3d5f-4951-9149-8c7d9a6f0382] VM Paused (Lifecycle Event)#033[00m
Dec  6 02:41:41 np0005548731 nova_compute[232433]: 2025-12-06 07:41:41.950 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: e78be236-3d5f-4951-9149-8c7d9a6f0382] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:41:41 np0005548731 nova_compute[232433]: 2025-12-06 07:41:41.953 232437 DEBUG nova.objects.instance [None req-a86f2b9e-4ea1-42e5-b973-19942d25fd14 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Lazy-loading 'keypairs' on Instance uuid b85968f0-ebd7-48f6-a932-c4e8da09381e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:41:41 np0005548731 nova_compute[232433]: 2025-12-06 07:41:41.955 232437 DEBUG nova.virt.driver [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Emitting event <LifecycleEvent: 1765006901.826072, e78be236-3d5f-4951-9149-8c7d9a6f0382 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 02:41:41 np0005548731 nova_compute[232433]: 2025-12-06 07:41:41.956 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: e78be236-3d5f-4951-9149-8c7d9a6f0382] VM Resumed (Lifecycle Event)#033[00m
Dec  6 02:41:41 np0005548731 nova_compute[232433]: 2025-12-06 07:41:41.964 232437 INFO nova.compute.manager [None req-9f62c375-1be1-4ed8-a235-ae8e0f9176ef 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] [instance: e78be236-3d5f-4951-9149-8c7d9a6f0382] Took 7.23 seconds to spawn the instance on the hypervisor.#033[00m
Dec  6 02:41:41 np0005548731 nova_compute[232433]: 2025-12-06 07:41:41.965 232437 DEBUG nova.compute.manager [None req-9f62c375-1be1-4ed8-a235-ae8e0f9176ef 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] [instance: e78be236-3d5f-4951-9149-8c7d9a6f0382] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:41:41 np0005548731 nova_compute[232433]: 2025-12-06 07:41:41.969 232437 DEBUG nova.network.neutron [req-730e54b4-a390-4251-9533-65479e3e158e req-906bb405-de8c-4f0d-a5fc-d56fa3041926 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: b85968f0-ebd7-48f6-a932-c4e8da09381e] Updated VIF entry in instance network info cache for port f1f563a8-9001-419f-858a-0213c5d6607a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  6 02:41:41 np0005548731 nova_compute[232433]: 2025-12-06 07:41:41.970 232437 DEBUG nova.network.neutron [req-730e54b4-a390-4251-9533-65479e3e158e req-906bb405-de8c-4f0d-a5fc-d56fa3041926 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: b85968f0-ebd7-48f6-a932-c4e8da09381e] Updating instance_info_cache with network_info: [{"id": "f1f563a8-9001-419f-858a-0213c5d6607a", "address": "fa:16:3e:48:b4:88", "network": {"id": "3beede49-1cbb-425c-b1af-82f43dc57163", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-619240463-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.206", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b10aa03d68eb4d4799d53538521cc364", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf1f563a8-90", "ovs_interfaceid": "f1f563a8-9001-419f-858a-0213c5d6607a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 02:41:41 np0005548731 nova_compute[232433]: 2025-12-06 07:41:41.987 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: e78be236-3d5f-4951-9149-8c7d9a6f0382] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:41:41 np0005548731 nova_compute[232433]: 2025-12-06 07:41:41.992 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: e78be236-3d5f-4951-9149-8c7d9a6f0382] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  6 02:41:41 np0005548731 nova_compute[232433]: 2025-12-06 07:41:41.997 232437 DEBUG oslo_concurrency.lockutils [req-730e54b4-a390-4251-9533-65479e3e158e req-906bb405-de8c-4f0d-a5fc-d56fa3041926 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-b85968f0-ebd7-48f6-a932-c4e8da09381e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 02:41:42 np0005548731 nova_compute[232433]: 2025-12-06 07:41:42.019 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: e78be236-3d5f-4951-9149-8c7d9a6f0382] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  6 02:41:42 np0005548731 nova_compute[232433]: 2025-12-06 07:41:42.046 232437 INFO nova.compute.manager [None req-9f62c375-1be1-4ed8-a235-ae8e0f9176ef 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] [instance: e78be236-3d5f-4951-9149-8c7d9a6f0382] Took 8.32 seconds to build instance.#033[00m
Dec  6 02:41:42 np0005548731 nova_compute[232433]: 2025-12-06 07:41:42.073 232437 DEBUG oslo_concurrency.lockutils [None req-9f62c375-1be1-4ed8-a235-ae8e0f9176ef 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Lock "e78be236-3d5f-4951-9149-8c7d9a6f0382" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.417s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:41:42 np0005548731 podman[295206]: 2025-12-06 07:41:42.105081317 +0000 UTC m=+0.046801851 container create 3f97c4068fb8357fb60b171694fb62994c5e61fc8d51bf3efacad1cd89525125 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-35a27638-382c-4afb-83b0-edd6d7f4bca8, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2)
Dec  6 02:41:42 np0005548731 systemd[1]: Started libpod-conmon-3f97c4068fb8357fb60b171694fb62994c5e61fc8d51bf3efacad1cd89525125.scope.
Dec  6 02:41:42 np0005548731 systemd[1]: Started libcrun container.
Dec  6 02:41:42 np0005548731 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c405c990628320d23b462de8e54bbd7290872596e9af109d04c3f65cd247e1d/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec  6 02:41:42 np0005548731 podman[295206]: 2025-12-06 07:41:42.079013142 +0000 UTC m=+0.020733696 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec  6 02:41:42 np0005548731 podman[295206]: 2025-12-06 07:41:42.181075808 +0000 UTC m=+0.122796362 container init 3f97c4068fb8357fb60b171694fb62994c5e61fc8d51bf3efacad1cd89525125 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-35a27638-382c-4afb-83b0-edd6d7f4bca8, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125)
Dec  6 02:41:42 np0005548731 podman[295206]: 2025-12-06 07:41:42.189011802 +0000 UTC m=+0.130732336 container start 3f97c4068fb8357fb60b171694fb62994c5e61fc8d51bf3efacad1cd89525125 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-35a27638-382c-4afb-83b0-edd6d7f4bca8, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125)
Dec  6 02:41:42 np0005548731 neutron-haproxy-ovnmeta-35a27638-382c-4afb-83b0-edd6d7f4bca8[295218]: [NOTICE]   (295225) : New worker (295227) forked
Dec  6 02:41:42 np0005548731 neutron-haproxy-ovnmeta-35a27638-382c-4afb-83b0-edd6d7f4bca8[295218]: [NOTICE]   (295225) : Loading success.
Dec  6 02:41:42 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:41:42 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:41:42 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:41:42.266 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:41:42 np0005548731 nova_compute[232433]: 2025-12-06 07:41:42.587 232437 INFO nova.virt.libvirt.driver [None req-a86f2b9e-4ea1-42e5-b973-19942d25fd14 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] [instance: b85968f0-ebd7-48f6-a932-c4e8da09381e] Creating config drive at /var/lib/nova/instances/b85968f0-ebd7-48f6-a932-c4e8da09381e/disk.config#033[00m
Dec  6 02:41:42 np0005548731 nova_compute[232433]: 2025-12-06 07:41:42.593 232437 DEBUG oslo_concurrency.processutils [None req-a86f2b9e-4ea1-42e5-b973-19942d25fd14 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/b85968f0-ebd7-48f6-a932-c4e8da09381e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpxypcn5q8 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:41:42 np0005548731 nova_compute[232433]: 2025-12-06 07:41:42.727 232437 DEBUG oslo_concurrency.processutils [None req-a86f2b9e-4ea1-42e5-b973-19942d25fd14 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/b85968f0-ebd7-48f6-a932-c4e8da09381e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpxypcn5q8" returned: 0 in 0.134s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:41:42 np0005548731 nova_compute[232433]: 2025-12-06 07:41:42.754 232437 DEBUG nova.storage.rbd_utils [None req-a86f2b9e-4ea1-42e5-b973-19942d25fd14 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] rbd image b85968f0-ebd7-48f6-a932-c4e8da09381e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:41:42 np0005548731 nova_compute[232433]: 2025-12-06 07:41:42.757 232437 DEBUG oslo_concurrency.processutils [None req-a86f2b9e-4ea1-42e5-b973-19942d25fd14 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/b85968f0-ebd7-48f6-a932-c4e8da09381e/disk.config b85968f0-ebd7-48f6-a932-c4e8da09381e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:41:43 np0005548731 nova_compute[232433]: 2025-12-06 07:41:43.024 232437 DEBUG oslo_concurrency.processutils [None req-a86f2b9e-4ea1-42e5-b973-19942d25fd14 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/b85968f0-ebd7-48f6-a932-c4e8da09381e/disk.config b85968f0-ebd7-48f6-a932-c4e8da09381e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.267s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:41:43 np0005548731 nova_compute[232433]: 2025-12-06 07:41:43.026 232437 INFO nova.virt.libvirt.driver [None req-a86f2b9e-4ea1-42e5-b973-19942d25fd14 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] [instance: b85968f0-ebd7-48f6-a932-c4e8da09381e] Deleting local config drive /var/lib/nova/instances/b85968f0-ebd7-48f6-a932-c4e8da09381e/disk.config because it was imported into RBD.#033[00m
Dec  6 02:41:43 np0005548731 kernel: tapf1f563a8-90: entered promiscuous mode
Dec  6 02:41:43 np0005548731 NetworkManager[49182]: <info>  [1765006903.0737] manager: (tapf1f563a8-90): new Tun device (/org/freedesktop/NetworkManager/Devices/302)
Dec  6 02:41:43 np0005548731 systemd-udevd[295100]: Network interface NamePolicy= disabled on kernel command line.
Dec  6 02:41:43 np0005548731 ovn_controller[133927]: 2025-12-06T07:41:43Z|00664|binding|INFO|Claiming lport f1f563a8-9001-419f-858a-0213c5d6607a for this chassis.
Dec  6 02:41:43 np0005548731 ovn_controller[133927]: 2025-12-06T07:41:43Z|00665|binding|INFO|f1f563a8-9001-419f-858a-0213c5d6607a: Claiming fa:16:3e:48:b4:88 10.100.0.3
Dec  6 02:41:43 np0005548731 nova_compute[232433]: 2025-12-06 07:41:43.077 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:41:43 np0005548731 nova_compute[232433]: 2025-12-06 07:41:43.085 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:41:43 np0005548731 NetworkManager[49182]: <info>  [1765006903.0877] device (tapf1f563a8-90): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec  6 02:41:43 np0005548731 NetworkManager[49182]: <info>  [1765006903.0892] device (tapf1f563a8-90): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec  6 02:41:43 np0005548731 NetworkManager[49182]: <info>  [1765006903.0918] manager: (patch-provnet-9e78c1a1-68f4-477a-abaa-13a98bde06e5-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/303)
Dec  6 02:41:43 np0005548731 NetworkManager[49182]: <info>  [1765006903.0924] manager: (patch-br-int-to-provnet-9e78c1a1-68f4-477a-abaa-13a98bde06e5): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/304)
Dec  6 02:41:43 np0005548731 nova_compute[232433]: 2025-12-06 07:41:43.090 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:41:43 np0005548731 nova_compute[232433]: 2025-12-06 07:41:43.091 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:41:43 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:41:43 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:41:43 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:41:43.092 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:41:43 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:41:43.097 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:48:b4:88 10.100.0.3'], port_security=['fa:16:3e:48:b4:88 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'b85968f0-ebd7-48f6-a932-c4e8da09381e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3beede49-1cbb-425c-b1af-82f43dc57163', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b10aa03d68eb4d4799d53538521cc364', 'neutron:revision_number': '7', 'neutron:security_group_ids': 'd7c24a87-3909-4046-b7ee-0c4e77c9cc98', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.206'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f4f51045-db64-4b9b-8a34-a3c617e616e7, chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], logical_port=f1f563a8-9001-419f-858a-0213c5d6607a) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 02:41:43 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:41:43.099 143965 INFO neutron.agent.ovn.metadata.agent [-] Port f1f563a8-9001-419f-858a-0213c5d6607a in datapath 3beede49-1cbb-425c-b1af-82f43dc57163 bound to our chassis#033[00m
Dec  6 02:41:43 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:41:43.102 143965 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 3beede49-1cbb-425c-b1af-82f43dc57163#033[00m
Dec  6 02:41:43 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:41:43.113 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[0f5149af-f7b2-4969-9c91-6031c2c1add2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:41:43 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:41:43.114 143965 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap3beede49-11 in ovnmeta-3beede49-1cbb-425c-b1af-82f43dc57163 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Dec  6 02:41:43 np0005548731 systemd-machined[195355]: New machine qemu-67-instance-00000087.
Dec  6 02:41:43 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:41:43.116 236668 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap3beede49-10 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Dec  6 02:41:43 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:41:43.116 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[725c534a-40fa-414a-a7ee-f5205543ca2d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:41:43 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:41:43.121 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[8e3c0c86-5337-43d2-8eb6-bbc57bc9593d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:41:43 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:41:43.132 144079 DEBUG oslo.privsep.daemon [-] privsep: reply[aacfe992-2536-4e33-8324-d39730504801]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:41:43 np0005548731 systemd[1]: Started Virtual Machine qemu-67-instance-00000087.
Dec  6 02:41:43 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:41:43.155 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[07c34d91-27a9-4787-b337-6dcac5ab0167]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:41:43 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:41:43.185 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[e5c8303c-a1cf-4b5c-a511-87a8b4bd81a8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:41:43 np0005548731 NetworkManager[49182]: <info>  [1765006903.1933] manager: (tap3beede49-10): new Veth device (/org/freedesktop/NetworkManager/Devices/305)
Dec  6 02:41:43 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:41:43.194 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[821a16fc-a183-4e13-a39a-30de16e74c74]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:41:43 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:41:43.224 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[26aec3c4-b876-4630-af92-d46cce7ca961]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:41:43 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:41:43.227 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[efb4d501-4d92-4f62-84b5-0a676dbc47a9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:41:43 np0005548731 nova_compute[232433]: 2025-12-06 07:41:43.243 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:41:43 np0005548731 nova_compute[232433]: 2025-12-06 07:41:43.246 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:41:43 np0005548731 ovn_controller[133927]: 2025-12-06T07:41:43Z|00666|binding|INFO|Releasing lport 5e371956-96bf-49df-b4a8-97154044dc54 from this chassis (sb_readonly=0)
Dec  6 02:41:43 np0005548731 NetworkManager[49182]: <info>  [1765006903.2526] device (tap3beede49-10): carrier: link connected
Dec  6 02:41:43 np0005548731 nova_compute[232433]: 2025-12-06 07:41:43.261 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:41:43 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:41:43.266 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[d0c18bf9-c4b1-4f7e-a818-390fae0186f0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:41:43 np0005548731 ovn_controller[133927]: 2025-12-06T07:41:43Z|00667|binding|INFO|Setting lport f1f563a8-9001-419f-858a-0213c5d6607a ovn-installed in OVS
Dec  6 02:41:43 np0005548731 ovn_controller[133927]: 2025-12-06T07:41:43Z|00668|binding|INFO|Setting lport f1f563a8-9001-419f-858a-0213c5d6607a up in Southbound
Dec  6 02:41:43 np0005548731 nova_compute[232433]: 2025-12-06 07:41:43.274 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:41:43 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:41:43.286 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[d3bcb570-f4b8-4d3e-85ae-8963ffac6b93]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3beede49-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f4:c7:55'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 201], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 716525, 'reachable_time': 43077, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 295308, 'error': None, 'target': 'ovnmeta-3beede49-1cbb-425c-b1af-82f43dc57163', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:41:43 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:41:43.307 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[9414f4a2-f24e-4664-a473-d06d168276ae]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fef4:c755'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 716525, 'tstamp': 716525}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 295309, 'error': None, 'target': 'ovnmeta-3beede49-1cbb-425c-b1af-82f43dc57163', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:41:43 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:41:43.325 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[cdd04335-240b-4177-b19e-f229a650db4b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3beede49-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f4:c7:55'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 201], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 716525, 'reachable_time': 43077, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 148, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 148, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 295310, 'error': None, 'target': 'ovnmeta-3beede49-1cbb-425c-b1af-82f43dc57163', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:41:43 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:41:43.351 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[9b9fc9c3-ee3d-4969-a232-3b223435af92]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:41:43 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:41:43.410 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[99eb80e9-0ce8-4979-9629-e8788a067d1b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:41:43 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:41:43.411 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3beede49-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:41:43 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:41:43.411 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  6 02:41:43 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:41:43.412 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3beede49-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:41:43 np0005548731 nova_compute[232433]: 2025-12-06 07:41:43.413 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:41:43 np0005548731 NetworkManager[49182]: <info>  [1765006903.4144] manager: (tap3beede49-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/306)
Dec  6 02:41:43 np0005548731 kernel: tap3beede49-10: entered promiscuous mode
Dec  6 02:41:43 np0005548731 nova_compute[232433]: 2025-12-06 07:41:43.415 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:41:43 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:41:43.416 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap3beede49-10, col_values=(('external_ids', {'iface-id': '058fee39-af19-4b00-b556-fb88bc823747'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:41:43 np0005548731 nova_compute[232433]: 2025-12-06 07:41:43.417 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:41:43 np0005548731 ovn_controller[133927]: 2025-12-06T07:41:43Z|00669|binding|INFO|Releasing lport 058fee39-af19-4b00-b556-fb88bc823747 from this chassis (sb_readonly=0)
Dec  6 02:41:43 np0005548731 nova_compute[232433]: 2025-12-06 07:41:43.433 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:41:43 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:41:43.434 143965 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/3beede49-1cbb-425c-b1af-82f43dc57163.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/3beede49-1cbb-425c-b1af-82f43dc57163.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Dec  6 02:41:43 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:41:43.436 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[45df568a-f695-4459-b263-23d61241d132]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:41:43 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:41:43.436 143965 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec  6 02:41:43 np0005548731 ovn_metadata_agent[143960]: global
Dec  6 02:41:43 np0005548731 ovn_metadata_agent[143960]:    log         /dev/log local0 debug
Dec  6 02:41:43 np0005548731 ovn_metadata_agent[143960]:    log-tag     haproxy-metadata-proxy-3beede49-1cbb-425c-b1af-82f43dc57163
Dec  6 02:41:43 np0005548731 ovn_metadata_agent[143960]:    user        root
Dec  6 02:41:43 np0005548731 ovn_metadata_agent[143960]:    group       root
Dec  6 02:41:43 np0005548731 ovn_metadata_agent[143960]:    maxconn     1024
Dec  6 02:41:43 np0005548731 ovn_metadata_agent[143960]:    pidfile     /var/lib/neutron/external/pids/3beede49-1cbb-425c-b1af-82f43dc57163.pid.haproxy
Dec  6 02:41:43 np0005548731 ovn_metadata_agent[143960]:    daemon
Dec  6 02:41:43 np0005548731 ovn_metadata_agent[143960]: 
Dec  6 02:41:43 np0005548731 ovn_metadata_agent[143960]: defaults
Dec  6 02:41:43 np0005548731 ovn_metadata_agent[143960]:    log global
Dec  6 02:41:43 np0005548731 ovn_metadata_agent[143960]:    mode http
Dec  6 02:41:43 np0005548731 ovn_metadata_agent[143960]:    option httplog
Dec  6 02:41:43 np0005548731 ovn_metadata_agent[143960]:    option dontlognull
Dec  6 02:41:43 np0005548731 ovn_metadata_agent[143960]:    option http-server-close
Dec  6 02:41:43 np0005548731 ovn_metadata_agent[143960]:    option forwardfor
Dec  6 02:41:43 np0005548731 ovn_metadata_agent[143960]:    retries                 3
Dec  6 02:41:43 np0005548731 ovn_metadata_agent[143960]:    timeout http-request    30s
Dec  6 02:41:43 np0005548731 ovn_metadata_agent[143960]:    timeout connect         30s
Dec  6 02:41:43 np0005548731 ovn_metadata_agent[143960]:    timeout client          32s
Dec  6 02:41:43 np0005548731 ovn_metadata_agent[143960]:    timeout server          32s
Dec  6 02:41:43 np0005548731 ovn_metadata_agent[143960]:    timeout http-keep-alive 30s
Dec  6 02:41:43 np0005548731 ovn_metadata_agent[143960]: 
Dec  6 02:41:43 np0005548731 ovn_metadata_agent[143960]: 
Dec  6 02:41:43 np0005548731 ovn_metadata_agent[143960]: listen listener
Dec  6 02:41:43 np0005548731 ovn_metadata_agent[143960]:    bind 169.254.169.254:80
Dec  6 02:41:43 np0005548731 ovn_metadata_agent[143960]:    server metadata /var/lib/neutron/metadata_proxy
Dec  6 02:41:43 np0005548731 ovn_metadata_agent[143960]:    http-request add-header X-OVN-Network-ID 3beede49-1cbb-425c-b1af-82f43dc57163
Dec  6 02:41:43 np0005548731 ovn_metadata_agent[143960]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Dec  6 02:41:43 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:41:43.437 143965 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-3beede49-1cbb-425c-b1af-82f43dc57163', 'env', 'PROCESS_TAG=haproxy-3beede49-1cbb-425c-b1af-82f43dc57163', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/3beede49-1cbb-425c-b1af-82f43dc57163.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Dec  6 02:41:43 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e324 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:41:43 np0005548731 podman[295342]: 2025-12-06 07:41:43.777677607 +0000 UTC m=+0.041576404 container create 56fad5335577b4164649aaf7d7cfb70ee54aaa548d99eb5c96b73dbdcbd73d8e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3beede49-1cbb-425c-b1af-82f43dc57163, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  6 02:41:43 np0005548731 systemd[1]: Started libpod-conmon-56fad5335577b4164649aaf7d7cfb70ee54aaa548d99eb5c96b73dbdcbd73d8e.scope.
Dec  6 02:41:43 np0005548731 systemd[1]: Started libcrun container.
Dec  6 02:41:43 np0005548731 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d75dec381c60fff194b8a68e5981c36de7f9d1cd9211848f432f3f966232558/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec  6 02:41:43 np0005548731 podman[295342]: 2025-12-06 07:41:43.757818393 +0000 UTC m=+0.021717210 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec  6 02:41:43 np0005548731 podman[295342]: 2025-12-06 07:41:43.85823469 +0000 UTC m=+0.122133517 container init 56fad5335577b4164649aaf7d7cfb70ee54aaa548d99eb5c96b73dbdcbd73d8e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3beede49-1cbb-425c-b1af-82f43dc57163, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  6 02:41:43 np0005548731 nova_compute[232433]: 2025-12-06 07:41:43.863 232437 DEBUG nova.compute.manager [req-b9ef5260-bbd9-4ac5-9c1a-47cff5d6b675 req-f6418807-4191-4c4f-982c-3a39379e0c01 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: e78be236-3d5f-4951-9149-8c7d9a6f0382] Received event network-vif-plugged-e8c2e4a9-a078-43ea-a03d-81a749cc64f5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec  6 02:41:43 np0005548731 nova_compute[232433]: 2025-12-06 07:41:43.863 232437 DEBUG oslo_concurrency.lockutils [req-b9ef5260-bbd9-4ac5-9c1a-47cff5d6b675 req-f6418807-4191-4c4f-982c-3a39379e0c01 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "e78be236-3d5f-4951-9149-8c7d9a6f0382-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  6 02:41:43 np0005548731 nova_compute[232433]: 2025-12-06 07:41:43.863 232437 DEBUG oslo_concurrency.lockutils [req-b9ef5260-bbd9-4ac5-9c1a-47cff5d6b675 req-f6418807-4191-4c4f-982c-3a39379e0c01 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "e78be236-3d5f-4951-9149-8c7d9a6f0382-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  6 02:41:43 np0005548731 nova_compute[232433]: 2025-12-06 07:41:43.864 232437 DEBUG oslo_concurrency.lockutils [req-b9ef5260-bbd9-4ac5-9c1a-47cff5d6b675 req-f6418807-4191-4c4f-982c-3a39379e0c01 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "e78be236-3d5f-4951-9149-8c7d9a6f0382-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  6 02:41:43 np0005548731 podman[295342]: 2025-12-06 07:41:43.864439721 +0000 UTC m=+0.128338518 container start 56fad5335577b4164649aaf7d7cfb70ee54aaa548d99eb5c96b73dbdcbd73d8e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3beede49-1cbb-425c-b1af-82f43dc57163, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Dec  6 02:41:43 np0005548731 nova_compute[232433]: 2025-12-06 07:41:43.864 232437 DEBUG nova.compute.manager [req-b9ef5260-bbd9-4ac5-9c1a-47cff5d6b675 req-f6418807-4191-4c4f-982c-3a39379e0c01 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: e78be236-3d5f-4951-9149-8c7d9a6f0382] No waiting events found dispatching network-vif-plugged-e8c2e4a9-a078-43ea-a03d-81a749cc64f5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec  6 02:41:43 np0005548731 nova_compute[232433]: 2025-12-06 07:41:43.864 232437 WARNING nova.compute.manager [req-b9ef5260-bbd9-4ac5-9c1a-47cff5d6b675 req-f6418807-4191-4c4f-982c-3a39379e0c01 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: e78be236-3d5f-4951-9149-8c7d9a6f0382] Received unexpected event network-vif-plugged-e8c2e4a9-a078-43ea-a03d-81a749cc64f5 for instance with vm_state active and task_state None.
Dec  6 02:41:43 np0005548731 neutron-haproxy-ovnmeta-3beede49-1cbb-425c-b1af-82f43dc57163[295357]: [NOTICE]   (295376) : New worker (295382) forked
Dec  6 02:41:43 np0005548731 neutron-haproxy-ovnmeta-3beede49-1cbb-425c-b1af-82f43dc57163[295357]: [NOTICE]   (295376) : Loading success.
Dec  6 02:41:44 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:41:44 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:41:44 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:41:44.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:41:44 np0005548731 nova_compute[232433]: 2025-12-06 07:41:44.949 232437 DEBUG oslo_concurrency.lockutils [None req-50c4f1e1-35a0-47be-8f5f-21713430c5ca 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Acquiring lock "e78be236-3d5f-4951-9149-8c7d9a6f0382" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  6 02:41:44 np0005548731 nova_compute[232433]: 2025-12-06 07:41:44.949 232437 DEBUG oslo_concurrency.lockutils [None req-50c4f1e1-35a0-47be-8f5f-21713430c5ca 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Lock "e78be236-3d5f-4951-9149-8c7d9a6f0382" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  6 02:41:44 np0005548731 nova_compute[232433]: 2025-12-06 07:41:44.950 232437 DEBUG oslo_concurrency.lockutils [None req-50c4f1e1-35a0-47be-8f5f-21713430c5ca 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Acquiring lock "e78be236-3d5f-4951-9149-8c7d9a6f0382-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  6 02:41:44 np0005548731 nova_compute[232433]: 2025-12-06 07:41:44.950 232437 DEBUG oslo_concurrency.lockutils [None req-50c4f1e1-35a0-47be-8f5f-21713430c5ca 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Lock "e78be236-3d5f-4951-9149-8c7d9a6f0382-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  6 02:41:44 np0005548731 nova_compute[232433]: 2025-12-06 07:41:44.950 232437 DEBUG oslo_concurrency.lockutils [None req-50c4f1e1-35a0-47be-8f5f-21713430c5ca 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Lock "e78be236-3d5f-4951-9149-8c7d9a6f0382-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  6 02:41:44 np0005548731 nova_compute[232433]: 2025-12-06 07:41:44.952 232437 INFO nova.compute.manager [None req-50c4f1e1-35a0-47be-8f5f-21713430c5ca 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] [instance: e78be236-3d5f-4951-9149-8c7d9a6f0382] Terminating instance
Dec  6 02:41:44 np0005548731 nova_compute[232433]: 2025-12-06 07:41:44.953 232437 DEBUG nova.compute.manager [None req-50c4f1e1-35a0-47be-8f5f-21713430c5ca 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] [instance: e78be236-3d5f-4951-9149-8c7d9a6f0382] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec  6 02:41:45 np0005548731 kernel: tape8c2e4a9-a0 (unregistering): left promiscuous mode
Dec  6 02:41:45 np0005548731 NetworkManager[49182]: <info>  [1765006905.0063] device (tape8c2e4a9-a0): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec  6 02:41:45 np0005548731 ovn_controller[133927]: 2025-12-06T07:41:45Z|00670|binding|INFO|Releasing lport e8c2e4a9-a078-43ea-a03d-81a749cc64f5 from this chassis (sb_readonly=0)
Dec  6 02:41:45 np0005548731 ovn_controller[133927]: 2025-12-06T07:41:45Z|00671|binding|INFO|Setting lport e8c2e4a9-a078-43ea-a03d-81a749cc64f5 down in Southbound
Dec  6 02:41:45 np0005548731 ovn_controller[133927]: 2025-12-06T07:41:45Z|00672|binding|INFO|Removing iface tape8c2e4a9-a0 ovn-installed in OVS
Dec  6 02:41:45 np0005548731 nova_compute[232433]: 2025-12-06 07:41:45.023 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 02:41:45 np0005548731 nova_compute[232433]: 2025-12-06 07:41:45.024 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 02:41:45 np0005548731 nova_compute[232433]: 2025-12-06 07:41:45.040 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 02:41:45 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:41:45.042 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1e:af:07 10.100.0.5'], port_security=['fa:16:3e:1e:af:07 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'e78be236-3d5f-4951-9149-8c7d9a6f0382', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-35a27638-382c-4afb-83b0-edd6d7f4bca8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'eff1f6a1654b45079de20eddb830e76d', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'b5b8e710-017e-4606-9067-bf1900949ed3', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=80d3c5d2-eecc-4e72-bceb-41384af759f0, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], logical_port=e8c2e4a9-a078-43ea-a03d-81a749cc64f5) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec  6 02:41:45 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:41:45.043 143965 INFO neutron.agent.ovn.metadata.agent [-] Port e8c2e4a9-a078-43ea-a03d-81a749cc64f5 in datapath 35a27638-382c-4afb-83b0-edd6d7f4bca8 unbound from our chassis
Dec  6 02:41:45 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:41:45.045 143965 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 35a27638-382c-4afb-83b0-edd6d7f4bca8, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec  6 02:41:45 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:41:45.046 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[7723a404-e6a0-4235-bfe8-be298115b238]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  6 02:41:45 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:41:45.046 143965 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-35a27638-382c-4afb-83b0-edd6d7f4bca8 namespace which is not needed anymore
Dec  6 02:41:45 np0005548731 systemd[1]: machine-qemu\x2d66\x2dinstance\x2d0000008e.scope: Deactivated successfully.
Dec  6 02:41:45 np0005548731 systemd[1]: machine-qemu\x2d66\x2dinstance\x2d0000008e.scope: Consumed 3.595s CPU time.
Dec  6 02:41:45 np0005548731 systemd-machined[195355]: Machine qemu-66-instance-0000008e terminated.
Dec  6 02:41:45 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:41:45 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:41:45 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:41:45.095 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:41:45 np0005548731 nova_compute[232433]: 2025-12-06 07:41:45.122 232437 DEBUG nova.compute.manager [req-1efc9028-9f47-4ca6-9386-58c05124f7ab req-aef2e726-89f4-4327-ad95-909a0478e08f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: b85968f0-ebd7-48f6-a932-c4e8da09381e] Received event network-vif-plugged-f1f563a8-9001-419f-858a-0213c5d6607a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec  6 02:41:45 np0005548731 nova_compute[232433]: 2025-12-06 07:41:45.123 232437 DEBUG oslo_concurrency.lockutils [req-1efc9028-9f47-4ca6-9386-58c05124f7ab req-aef2e726-89f4-4327-ad95-909a0478e08f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "b85968f0-ebd7-48f6-a932-c4e8da09381e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  6 02:41:45 np0005548731 nova_compute[232433]: 2025-12-06 07:41:45.124 232437 DEBUG oslo_concurrency.lockutils [req-1efc9028-9f47-4ca6-9386-58c05124f7ab req-aef2e726-89f4-4327-ad95-909a0478e08f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "b85968f0-ebd7-48f6-a932-c4e8da09381e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  6 02:41:45 np0005548731 nova_compute[232433]: 2025-12-06 07:41:45.124 232437 DEBUG oslo_concurrency.lockutils [req-1efc9028-9f47-4ca6-9386-58c05124f7ab req-aef2e726-89f4-4327-ad95-909a0478e08f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "b85968f0-ebd7-48f6-a932-c4e8da09381e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  6 02:41:45 np0005548731 nova_compute[232433]: 2025-12-06 07:41:45.124 232437 DEBUG nova.compute.manager [req-1efc9028-9f47-4ca6-9386-58c05124f7ab req-aef2e726-89f4-4327-ad95-909a0478e08f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: b85968f0-ebd7-48f6-a932-c4e8da09381e] Processing event network-vif-plugged-f1f563a8-9001-419f-858a-0213c5d6607a _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec  6 02:41:45 np0005548731 nova_compute[232433]: 2025-12-06 07:41:45.125 232437 DEBUG nova.compute.manager [req-1efc9028-9f47-4ca6-9386-58c05124f7ab req-aef2e726-89f4-4327-ad95-909a0478e08f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: b85968f0-ebd7-48f6-a932-c4e8da09381e] Received event network-vif-plugged-f1f563a8-9001-419f-858a-0213c5d6607a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec  6 02:41:45 np0005548731 nova_compute[232433]: 2025-12-06 07:41:45.125 232437 DEBUG oslo_concurrency.lockutils [req-1efc9028-9f47-4ca6-9386-58c05124f7ab req-aef2e726-89f4-4327-ad95-909a0478e08f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "b85968f0-ebd7-48f6-a932-c4e8da09381e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  6 02:41:45 np0005548731 nova_compute[232433]: 2025-12-06 07:41:45.125 232437 DEBUG oslo_concurrency.lockutils [req-1efc9028-9f47-4ca6-9386-58c05124f7ab req-aef2e726-89f4-4327-ad95-909a0478e08f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "b85968f0-ebd7-48f6-a932-c4e8da09381e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  6 02:41:45 np0005548731 nova_compute[232433]: 2025-12-06 07:41:45.126 232437 DEBUG oslo_concurrency.lockutils [req-1efc9028-9f47-4ca6-9386-58c05124f7ab req-aef2e726-89f4-4327-ad95-909a0478e08f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "b85968f0-ebd7-48f6-a932-c4e8da09381e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  6 02:41:45 np0005548731 nova_compute[232433]: 2025-12-06 07:41:45.126 232437 DEBUG nova.compute.manager [req-1efc9028-9f47-4ca6-9386-58c05124f7ab req-aef2e726-89f4-4327-ad95-909a0478e08f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: b85968f0-ebd7-48f6-a932-c4e8da09381e] No waiting events found dispatching network-vif-plugged-f1f563a8-9001-419f-858a-0213c5d6607a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec  6 02:41:45 np0005548731 nova_compute[232433]: 2025-12-06 07:41:45.126 232437 WARNING nova.compute.manager [req-1efc9028-9f47-4ca6-9386-58c05124f7ab req-aef2e726-89f4-4327-ad95-909a0478e08f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: b85968f0-ebd7-48f6-a932-c4e8da09381e] Received unexpected event network-vif-plugged-f1f563a8-9001-419f-858a-0213c5d6607a for instance with vm_state shelved_offloaded and task_state spawning.
Dec  6 02:41:45 np0005548731 neutron-haproxy-ovnmeta-35a27638-382c-4afb-83b0-edd6d7f4bca8[295218]: [NOTICE]   (295225) : haproxy version is 2.8.14-c23fe91
Dec  6 02:41:45 np0005548731 neutron-haproxy-ovnmeta-35a27638-382c-4afb-83b0-edd6d7f4bca8[295218]: [NOTICE]   (295225) : path to executable is /usr/sbin/haproxy
Dec  6 02:41:45 np0005548731 neutron-haproxy-ovnmeta-35a27638-382c-4afb-83b0-edd6d7f4bca8[295218]: [WARNING]  (295225) : Exiting Master process...
Dec  6 02:41:45 np0005548731 neutron-haproxy-ovnmeta-35a27638-382c-4afb-83b0-edd6d7f4bca8[295218]: [ALERT]    (295225) : Current worker (295227) exited with code 143 (Terminated)
Dec  6 02:41:45 np0005548731 neutron-haproxy-ovnmeta-35a27638-382c-4afb-83b0-edd6d7f4bca8[295218]: [WARNING]  (295225) : All workers exited. Exiting... (0)
Dec  6 02:41:45 np0005548731 systemd[1]: libpod-3f97c4068fb8357fb60b171694fb62994c5e61fc8d51bf3efacad1cd89525125.scope: Deactivated successfully.
Dec  6 02:41:45 np0005548731 NetworkManager[49182]: <info>  [1765006905.1711] manager: (tape8c2e4a9-a0): new Tun device (/org/freedesktop/NetworkManager/Devices/307)
Dec  6 02:41:45 np0005548731 podman[295434]: 2025-12-06 07:41:45.17583045 +0000 UTC m=+0.044822453 container died 3f97c4068fb8357fb60b171694fb62994c5e61fc8d51bf3efacad1cd89525125 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-35a27638-382c-4afb-83b0-edd6d7f4bca8, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  6 02:41:45 np0005548731 nova_compute[232433]: 2025-12-06 07:41:45.188 232437 INFO nova.virt.libvirt.driver [-] [instance: e78be236-3d5f-4951-9149-8c7d9a6f0382] Instance destroyed successfully.
Dec  6 02:41:45 np0005548731 nova_compute[232433]: 2025-12-06 07:41:45.188 232437 DEBUG nova.objects.instance [None req-50c4f1e1-35a0-47be-8f5f-21713430c5ca 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Lazy-loading 'resources' on Instance uuid e78be236-3d5f-4951-9149-8c7d9a6f0382 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec  6 02:41:45 np0005548731 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-3f97c4068fb8357fb60b171694fb62994c5e61fc8d51bf3efacad1cd89525125-userdata-shm.mount: Deactivated successfully.
Dec  6 02:41:45 np0005548731 nova_compute[232433]: 2025-12-06 07:41:45.211 232437 DEBUG nova.virt.driver [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Emitting event <LifecycleEvent: 1765006905.2110999, b85968f0-ebd7-48f6-a932-c4e8da09381e => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec  6 02:41:45 np0005548731 systemd[1]: var-lib-containers-storage-overlay-8c405c990628320d23b462de8e54bbd7290872596e9af109d04c3f65cd247e1d-merged.mount: Deactivated successfully.
Dec  6 02:41:45 np0005548731 nova_compute[232433]: 2025-12-06 07:41:45.212 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: b85968f0-ebd7-48f6-a932-c4e8da09381e] VM Started (Lifecycle Event)
Dec  6 02:41:45 np0005548731 nova_compute[232433]: 2025-12-06 07:41:45.214 232437 DEBUG nova.compute.manager [None req-a86f2b9e-4ea1-42e5-b973-19942d25fd14 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] [instance: b85968f0-ebd7-48f6-a932-c4e8da09381e] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec  6 02:41:45 np0005548731 nova_compute[232433]: 2025-12-06 07:41:45.217 232437 DEBUG nova.virt.libvirt.driver [None req-a86f2b9e-4ea1-42e5-b973-19942d25fd14 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] [instance: b85968f0-ebd7-48f6-a932-c4e8da09381e] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec  6 02:41:45 np0005548731 podman[295434]: 2025-12-06 07:41:45.218866429 +0000 UTC m=+0.087858432 container cleanup 3f97c4068fb8357fb60b171694fb62994c5e61fc8d51bf3efacad1cd89525125 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-35a27638-382c-4afb-83b0-edd6d7f4bca8, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3)
Dec  6 02:41:45 np0005548731 nova_compute[232433]: 2025-12-06 07:41:45.219 232437 DEBUG nova.virt.libvirt.vif [None req-50c4f1e1-35a0-47be-8f5f-21713430c5ca 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T07:41:32Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestJSON-server-778223522',display_name='tempest-ServersTestJSON-server-778223522',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serverstestjson-server-778223522',id=142,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOyFGrqZ3O10d+Mtnb3FboSnloI6/6aBO0CbDH2Wv2Z+d4Kw7ojIiEFuDxPkAl/gckwKhEI1ABUm3/fkKFkXqzNfl1EdQBjJSlqKfthYKDWPsLKMgOEm8xbJ7FeKDe5jCg==',key_name='tempest-key-1984291357',keypairs=<?>,launch_index=0,launched_at=2025-12-06T07:41:41Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='eff1f6a1654b45079de20eddb830e76d',ramdisk_id='',reservation_id='r-04hbeklq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersTestJSON-374151197',owner_user_name='tempest-ServersTestJSON-374151197-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T07:41:42Z,user_data=None,user_id='0d8b62a3276f4a8b8349af67b82134c8',uuid=e78be236-3d5f-4951-9149-8c7d9a6f0382,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "e8c2e4a9-a078-43ea-a03d-81a749cc64f5", "address": "fa:16:3e:1e:af:07", "network": {"id": "35a27638-382c-4afb-83b0-edd6d7f4bca8", "bridge": "br-int", "label": "tempest-ServersTestJSON-1603796324-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": 
{}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eff1f6a1654b45079de20eddb830e76d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape8c2e4a9-a0", "ovs_interfaceid": "e8c2e4a9-a078-43ea-a03d-81a749cc64f5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec  6 02:41:45 np0005548731 nova_compute[232433]: 2025-12-06 07:41:45.219 232437 DEBUG nova.network.os_vif_util [None req-50c4f1e1-35a0-47be-8f5f-21713430c5ca 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Converting VIF {"id": "e8c2e4a9-a078-43ea-a03d-81a749cc64f5", "address": "fa:16:3e:1e:af:07", "network": {"id": "35a27638-382c-4afb-83b0-edd6d7f4bca8", "bridge": "br-int", "label": "tempest-ServersTestJSON-1603796324-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eff1f6a1654b45079de20eddb830e76d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape8c2e4a9-a0", "ovs_interfaceid": "e8c2e4a9-a078-43ea-a03d-81a749cc64f5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec  6 02:41:45 np0005548731 nova_compute[232433]: 2025-12-06 07:41:45.221 232437 DEBUG nova.network.os_vif_util [None req-50c4f1e1-35a0-47be-8f5f-21713430c5ca 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1e:af:07,bridge_name='br-int',has_traffic_filtering=True,id=e8c2e4a9-a078-43ea-a03d-81a749cc64f5,network=Network(35a27638-382c-4afb-83b0-edd6d7f4bca8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape8c2e4a9-a0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec  6 02:41:45 np0005548731 nova_compute[232433]: 2025-12-06 07:41:45.221 232437 DEBUG os_vif [None req-50c4f1e1-35a0-47be-8f5f-21713430c5ca 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:1e:af:07,bridge_name='br-int',has_traffic_filtering=True,id=e8c2e4a9-a078-43ea-a03d-81a749cc64f5,network=Network(35a27638-382c-4afb-83b0-edd6d7f4bca8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape8c2e4a9-a0') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec  6 02:41:45 np0005548731 nova_compute[232433]: 2025-12-06 07:41:45.223 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 02:41:45 np0005548731 nova_compute[232433]: 2025-12-06 07:41:45.224 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape8c2e4a9-a0, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec  6 02:41:45 np0005548731 nova_compute[232433]: 2025-12-06 07:41:45.225 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 02:41:45 np0005548731 nova_compute[232433]: 2025-12-06 07:41:45.227 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 02:41:45 np0005548731 nova_compute[232433]: 2025-12-06 07:41:45.230 232437 INFO os_vif [None req-50c4f1e1-35a0-47be-8f5f-21713430c5ca 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:1e:af:07,bridge_name='br-int',has_traffic_filtering=True,id=e8c2e4a9-a078-43ea-a03d-81a749cc64f5,network=Network(35a27638-382c-4afb-83b0-edd6d7f4bca8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape8c2e4a9-a0')
Dec  6 02:41:45 np0005548731 systemd[1]: libpod-conmon-3f97c4068fb8357fb60b171694fb62994c5e61fc8d51bf3efacad1cd89525125.scope: Deactivated successfully.
Dec  6 02:41:45 np0005548731 nova_compute[232433]: 2025-12-06 07:41:45.255 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: b85968f0-ebd7-48f6-a932-c4e8da09381e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:41:45 np0005548731 nova_compute[232433]: 2025-12-06 07:41:45.256 232437 INFO nova.virt.libvirt.driver [-] [instance: b85968f0-ebd7-48f6-a932-c4e8da09381e] Instance spawned successfully.#033[00m
Dec  6 02:41:45 np0005548731 nova_compute[232433]: 2025-12-06 07:41:45.265 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: b85968f0-ebd7-48f6-a932-c4e8da09381e] Synchronizing instance power state after lifecycle event "Started"; current vm_state: shelved_offloaded, current task_state: spawning, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  6 02:41:45 np0005548731 podman[295480]: 2025-12-06 07:41:45.283731129 +0000 UTC m=+0.039614396 container remove 3f97c4068fb8357fb60b171694fb62994c5e61fc8d51bf3efacad1cd89525125 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-35a27638-382c-4afb-83b0-edd6d7f4bca8, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Dec  6 02:41:45 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:41:45.288 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[f91fb3d5-6734-4c2b-b2d3-c60a7c18178b]: (4, ('Sat Dec  6 07:41:45 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-35a27638-382c-4afb-83b0-edd6d7f4bca8 (3f97c4068fb8357fb60b171694fb62994c5e61fc8d51bf3efacad1cd89525125)\n3f97c4068fb8357fb60b171694fb62994c5e61fc8d51bf3efacad1cd89525125\nSat Dec  6 07:41:45 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-35a27638-382c-4afb-83b0-edd6d7f4bca8 (3f97c4068fb8357fb60b171694fb62994c5e61fc8d51bf3efacad1cd89525125)\n3f97c4068fb8357fb60b171694fb62994c5e61fc8d51bf3efacad1cd89525125\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:41:45 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:41:45.290 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[dedb5b35-45b7-4d4b-bcf3-3237b098666a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:41:45 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:41:45.290 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap35a27638-30, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:41:45 np0005548731 nova_compute[232433]: 2025-12-06 07:41:45.290 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: b85968f0-ebd7-48f6-a932-c4e8da09381e] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  6 02:41:45 np0005548731 nova_compute[232433]: 2025-12-06 07:41:45.291 232437 DEBUG nova.virt.driver [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Emitting event <LifecycleEvent: 1765006905.2120237, b85968f0-ebd7-48f6-a932-c4e8da09381e => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 02:41:45 np0005548731 nova_compute[232433]: 2025-12-06 07:41:45.291 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: b85968f0-ebd7-48f6-a932-c4e8da09381e] VM Paused (Lifecycle Event)#033[00m
Dec  6 02:41:45 np0005548731 nova_compute[232433]: 2025-12-06 07:41:45.292 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:41:45 np0005548731 kernel: tap35a27638-30: left promiscuous mode
Dec  6 02:41:45 np0005548731 nova_compute[232433]: 2025-12-06 07:41:45.294 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:41:45 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:41:45.304 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[8ce02249-c797-4dc0-861a-f988eb075a75]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:41:45 np0005548731 nova_compute[232433]: 2025-12-06 07:41:45.308 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:41:45 np0005548731 nova_compute[232433]: 2025-12-06 07:41:45.313 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: b85968f0-ebd7-48f6-a932-c4e8da09381e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:41:45 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:41:45.317 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[b017b51a-22c7-45a2-8d97-229379cd8002]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:41:45 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:41:45.318 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[2d765452-7186-4894-9153-e63f3211dc96]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:41:45 np0005548731 nova_compute[232433]: 2025-12-06 07:41:45.319 232437 DEBUG nova.virt.driver [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Emitting event <LifecycleEvent: 1765006905.2167635, b85968f0-ebd7-48f6-a932-c4e8da09381e => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 02:41:45 np0005548731 nova_compute[232433]: 2025-12-06 07:41:45.319 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: b85968f0-ebd7-48f6-a932-c4e8da09381e] VM Resumed (Lifecycle Event)#033[00m
Dec  6 02:41:45 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:41:45.339 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[aa47163c-c1e6-46ed-b010-0fdfb02a4c6c]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 716348, 'reachable_time': 24300, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 295513, 'error': None, 'target': 'ovnmeta-35a27638-382c-4afb-83b0-edd6d7f4bca8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:41:45 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:41:45.341 144079 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-35a27638-382c-4afb-83b0-edd6d7f4bca8 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Dec  6 02:41:45 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:41:45.341 144079 DEBUG oslo.privsep.daemon [-] privsep: reply[ace17d95-c141-4e3a-8a1e-f14eedc1c7cd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:41:45 np0005548731 systemd[1]: run-netns-ovnmeta\x2d35a27638\x2d382c\x2d4afb\x2d83b0\x2dedd6d7f4bca8.mount: Deactivated successfully.
Dec  6 02:41:45 np0005548731 nova_compute[232433]: 2025-12-06 07:41:45.347 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: b85968f0-ebd7-48f6-a932-c4e8da09381e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:41:45 np0005548731 nova_compute[232433]: 2025-12-06 07:41:45.350 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: b85968f0-ebd7-48f6-a932-c4e8da09381e] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: shelved_offloaded, current task_state: spawning, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  6 02:41:45 np0005548731 nova_compute[232433]: 2025-12-06 07:41:45.379 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: b85968f0-ebd7-48f6-a932-c4e8da09381e] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  6 02:41:45 np0005548731 nova_compute[232433]: 2025-12-06 07:41:45.980 232437 DEBUG nova.compute.manager [req-cc86a26d-453e-4c7c-a6e3-951d02d637ca req-21e8ebe2-9e6e-46fe-a7d6-7aa873a01047 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: e78be236-3d5f-4951-9149-8c7d9a6f0382] Received event network-vif-unplugged-e8c2e4a9-a078-43ea-a03d-81a749cc64f5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:41:45 np0005548731 nova_compute[232433]: 2025-12-06 07:41:45.981 232437 DEBUG oslo_concurrency.lockutils [req-cc86a26d-453e-4c7c-a6e3-951d02d637ca req-21e8ebe2-9e6e-46fe-a7d6-7aa873a01047 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "e78be236-3d5f-4951-9149-8c7d9a6f0382-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:41:45 np0005548731 nova_compute[232433]: 2025-12-06 07:41:45.981 232437 DEBUG oslo_concurrency.lockutils [req-cc86a26d-453e-4c7c-a6e3-951d02d637ca req-21e8ebe2-9e6e-46fe-a7d6-7aa873a01047 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "e78be236-3d5f-4951-9149-8c7d9a6f0382-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:41:45 np0005548731 nova_compute[232433]: 2025-12-06 07:41:45.981 232437 DEBUG oslo_concurrency.lockutils [req-cc86a26d-453e-4c7c-a6e3-951d02d637ca req-21e8ebe2-9e6e-46fe-a7d6-7aa873a01047 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "e78be236-3d5f-4951-9149-8c7d9a6f0382-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:41:45 np0005548731 nova_compute[232433]: 2025-12-06 07:41:45.982 232437 DEBUG nova.compute.manager [req-cc86a26d-453e-4c7c-a6e3-951d02d637ca req-21e8ebe2-9e6e-46fe-a7d6-7aa873a01047 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: e78be236-3d5f-4951-9149-8c7d9a6f0382] No waiting events found dispatching network-vif-unplugged-e8c2e4a9-a078-43ea-a03d-81a749cc64f5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 02:41:45 np0005548731 nova_compute[232433]: 2025-12-06 07:41:45.982 232437 DEBUG nova.compute.manager [req-cc86a26d-453e-4c7c-a6e3-951d02d637ca req-21e8ebe2-9e6e-46fe-a7d6-7aa873a01047 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: e78be236-3d5f-4951-9149-8c7d9a6f0382] Received event network-vif-unplugged-e8c2e4a9-a078-43ea-a03d-81a749cc64f5 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Dec  6 02:41:46 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:41:46 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:41:46 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:41:46.271 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:41:46 np0005548731 nova_compute[232433]: 2025-12-06 07:41:46.383 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:41:47 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:41:47 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 02:41:47 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:41:47.098 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 02:41:47 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e325 e325: 3 total, 3 up, 3 in
Dec  6 02:41:48 np0005548731 nova_compute[232433]: 2025-12-06 07:41:48.118 232437 DEBUG nova.compute.manager [req-e8cede96-be4f-4e9c-bec0-da681f751e56 req-98f3396d-6679-4349-9598-b2272d355629 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: e78be236-3d5f-4951-9149-8c7d9a6f0382] Received event network-vif-plugged-e8c2e4a9-a078-43ea-a03d-81a749cc64f5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:41:48 np0005548731 nova_compute[232433]: 2025-12-06 07:41:48.119 232437 DEBUG oslo_concurrency.lockutils [req-e8cede96-be4f-4e9c-bec0-da681f751e56 req-98f3396d-6679-4349-9598-b2272d355629 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "e78be236-3d5f-4951-9149-8c7d9a6f0382-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:41:48 np0005548731 nova_compute[232433]: 2025-12-06 07:41:48.119 232437 DEBUG oslo_concurrency.lockutils [req-e8cede96-be4f-4e9c-bec0-da681f751e56 req-98f3396d-6679-4349-9598-b2272d355629 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "e78be236-3d5f-4951-9149-8c7d9a6f0382-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:41:48 np0005548731 nova_compute[232433]: 2025-12-06 07:41:48.119 232437 DEBUG oslo_concurrency.lockutils [req-e8cede96-be4f-4e9c-bec0-da681f751e56 req-98f3396d-6679-4349-9598-b2272d355629 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "e78be236-3d5f-4951-9149-8c7d9a6f0382-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:41:48 np0005548731 nova_compute[232433]: 2025-12-06 07:41:48.120 232437 DEBUG nova.compute.manager [req-e8cede96-be4f-4e9c-bec0-da681f751e56 req-98f3396d-6679-4349-9598-b2272d355629 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: e78be236-3d5f-4951-9149-8c7d9a6f0382] No waiting events found dispatching network-vif-plugged-e8c2e4a9-a078-43ea-a03d-81a749cc64f5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 02:41:48 np0005548731 nova_compute[232433]: 2025-12-06 07:41:48.120 232437 WARNING nova.compute.manager [req-e8cede96-be4f-4e9c-bec0-da681f751e56 req-98f3396d-6679-4349-9598-b2272d355629 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: e78be236-3d5f-4951-9149-8c7d9a6f0382] Received unexpected event network-vif-plugged-e8c2e4a9-a078-43ea-a03d-81a749cc64f5 for instance with vm_state active and task_state deleting.#033[00m
Dec  6 02:41:48 np0005548731 nova_compute[232433]: 2025-12-06 07:41:48.196 232437 INFO nova.virt.libvirt.driver [None req-50c4f1e1-35a0-47be-8f5f-21713430c5ca 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] [instance: e78be236-3d5f-4951-9149-8c7d9a6f0382] Deleting instance files /var/lib/nova/instances/e78be236-3d5f-4951-9149-8c7d9a6f0382_del#033[00m
Dec  6 02:41:48 np0005548731 nova_compute[232433]: 2025-12-06 07:41:48.198 232437 INFO nova.virt.libvirt.driver [None req-50c4f1e1-35a0-47be-8f5f-21713430c5ca 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] [instance: e78be236-3d5f-4951-9149-8c7d9a6f0382] Deletion of /var/lib/nova/instances/e78be236-3d5f-4951-9149-8c7d9a6f0382_del complete#033[00m
Dec  6 02:41:48 np0005548731 nova_compute[232433]: 2025-12-06 07:41:48.249 232437 INFO nova.compute.manager [None req-50c4f1e1-35a0-47be-8f5f-21713430c5ca 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] [instance: e78be236-3d5f-4951-9149-8c7d9a6f0382] Took 3.29 seconds to destroy the instance on the hypervisor.#033[00m
Dec  6 02:41:48 np0005548731 nova_compute[232433]: 2025-12-06 07:41:48.249 232437 DEBUG oslo.service.loopingcall [None req-50c4f1e1-35a0-47be-8f5f-21713430c5ca 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Dec  6 02:41:48 np0005548731 nova_compute[232433]: 2025-12-06 07:41:48.250 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:41:48 np0005548731 nova_compute[232433]: 2025-12-06 07:41:48.251 232437 DEBUG nova.compute.manager [-] [instance: e78be236-3d5f-4951-9149-8c7d9a6f0382] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Dec  6 02:41:48 np0005548731 nova_compute[232433]: 2025-12-06 07:41:48.252 232437 DEBUG nova.network.neutron [-] [instance: e78be236-3d5f-4951-9149-8c7d9a6f0382] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Dec  6 02:41:48 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:41:48 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:41:48 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:41:48.273 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:41:48 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e325 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:41:48 np0005548731 nova_compute[232433]: 2025-12-06 07:41:48.573 232437 DEBUG nova.compute.manager [None req-a86f2b9e-4ea1-42e5-b973-19942d25fd14 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] [instance: b85968f0-ebd7-48f6-a932-c4e8da09381e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:41:48 np0005548731 nova_compute[232433]: 2025-12-06 07:41:48.653 232437 DEBUG oslo_concurrency.lockutils [None req-a86f2b9e-4ea1-42e5-b973-19942d25fd14 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Lock "b85968f0-ebd7-48f6-a932-c4e8da09381e" "released" by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" :: held 14.913s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:41:49 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:41:49 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 02:41:49 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:41:49.101 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 02:41:49 np0005548731 nova_compute[232433]: 2025-12-06 07:41:49.509 232437 DEBUG nova.network.neutron [-] [instance: e78be236-3d5f-4951-9149-8c7d9a6f0382] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 02:41:49 np0005548731 nova_compute[232433]: 2025-12-06 07:41:49.535 232437 INFO nova.compute.manager [-] [instance: e78be236-3d5f-4951-9149-8c7d9a6f0382] Took 1.28 seconds to deallocate network for instance.#033[00m
Dec  6 02:41:49 np0005548731 nova_compute[232433]: 2025-12-06 07:41:49.594 232437 DEBUG oslo_concurrency.lockutils [None req-50c4f1e1-35a0-47be-8f5f-21713430c5ca 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:41:49 np0005548731 nova_compute[232433]: 2025-12-06 07:41:49.595 232437 DEBUG oslo_concurrency.lockutils [None req-50c4f1e1-35a0-47be-8f5f-21713430c5ca 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:41:49 np0005548731 nova_compute[232433]: 2025-12-06 07:41:49.615 232437 DEBUG nova.compute.manager [req-927dc2ec-1b91-4b58-837d-f8993ded1fbf req-dab3fdc5-a3f4-4584-8796-3623b203b810 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: e78be236-3d5f-4951-9149-8c7d9a6f0382] Received event network-vif-deleted-e8c2e4a9-a078-43ea-a03d-81a749cc64f5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:41:49 np0005548731 nova_compute[232433]: 2025-12-06 07:41:49.681 232437 DEBUG oslo_concurrency.processutils [None req-50c4f1e1-35a0-47be-8f5f-21713430c5ca 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:41:50 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 02:41:50 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1112513091' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 02:41:50 np0005548731 nova_compute[232433]: 2025-12-06 07:41:50.165 232437 DEBUG oslo_concurrency.processutils [None req-50c4f1e1-35a0-47be-8f5f-21713430c5ca 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.484s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:41:50 np0005548731 nova_compute[232433]: 2025-12-06 07:41:50.170 232437 DEBUG nova.compute.provider_tree [None req-50c4f1e1-35a0-47be-8f5f-21713430c5ca 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Inventory has not changed in ProviderTree for provider: 6d00757a-082f-486d-ae84-869a2ba2e6e7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  6 02:41:50 np0005548731 nova_compute[232433]: 2025-12-06 07:41:50.190 232437 DEBUG nova.scheduler.client.report [None req-50c4f1e1-35a0-47be-8f5f-21713430c5ca 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Inventory has not changed for provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  6 02:41:50 np0005548731 nova_compute[232433]: 2025-12-06 07:41:50.222 232437 DEBUG oslo_concurrency.lockutils [None req-50c4f1e1-35a0-47be-8f5f-21713430c5ca 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.627s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:41:50 np0005548731 nova_compute[232433]: 2025-12-06 07:41:50.227 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:41:50 np0005548731 nova_compute[232433]: 2025-12-06 07:41:50.267 232437 INFO nova.scheduler.client.report [None req-50c4f1e1-35a0-47be-8f5f-21713430c5ca 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Deleted allocations for instance e78be236-3d5f-4951-9149-8c7d9a6f0382#033[00m
Dec  6 02:41:50 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:41:50 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:41:50 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:41:50.275 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:41:50 np0005548731 nova_compute[232433]: 2025-12-06 07:41:50.349 232437 DEBUG oslo_concurrency.lockutils [None req-50c4f1e1-35a0-47be-8f5f-21713430c5ca 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Lock "e78be236-3d5f-4951-9149-8c7d9a6f0382" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.400s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:41:50 np0005548731 nova_compute[232433]: 2025-12-06 07:41:50.584 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:41:51 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:41:51 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:41:51 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:41:51.104 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:41:52 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:41:52 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:41:52 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:41:52.277 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:41:53 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:41:53 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:41:53 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:41:53.106 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:41:53 np0005548731 nova_compute[232433]: 2025-12-06 07:41:53.250 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:41:53 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e325 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:41:53 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e326 e326: 3 total, 3 up, 3 in
Dec  6 02:41:54 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:41:54 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 02:41:54 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:41:54.279 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 02:41:54 np0005548731 nova_compute[232433]: 2025-12-06 07:41:54.956 232437 DEBUG oslo_concurrency.lockutils [None req-81bd3f10-8b29-42e7-b9d0-f08237827cff 05dfe700f5a141638f3ecb185261858e 765a234f3c484ca28bf25e21cff07bd1 - - default default] Acquiring lock "e0c1dd0c-1286-4b5d-987d-0b82bdb15585" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:41:54 np0005548731 nova_compute[232433]: 2025-12-06 07:41:54.956 232437 DEBUG oslo_concurrency.lockutils [None req-81bd3f10-8b29-42e7-b9d0-f08237827cff 05dfe700f5a141638f3ecb185261858e 765a234f3c484ca28bf25e21cff07bd1 - - default default] Lock "e0c1dd0c-1286-4b5d-987d-0b82bdb15585" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:41:54 np0005548731 nova_compute[232433]: 2025-12-06 07:41:54.981 232437 DEBUG nova.compute.manager [None req-81bd3f10-8b29-42e7-b9d0-f08237827cff 05dfe700f5a141638f3ecb185261858e 765a234f3c484ca28bf25e21cff07bd1 - - default default] [instance: e0c1dd0c-1286-4b5d-987d-0b82bdb15585] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Dec  6 02:41:55 np0005548731 nova_compute[232433]: 2025-12-06 07:41:55.047 232437 DEBUG oslo_concurrency.lockutils [None req-81bd3f10-8b29-42e7-b9d0-f08237827cff 05dfe700f5a141638f3ecb185261858e 765a234f3c484ca28bf25e21cff07bd1 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:41:55 np0005548731 nova_compute[232433]: 2025-12-06 07:41:55.048 232437 DEBUG oslo_concurrency.lockutils [None req-81bd3f10-8b29-42e7-b9d0-f08237827cff 05dfe700f5a141638f3ecb185261858e 765a234f3c484ca28bf25e21cff07bd1 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:41:55 np0005548731 nova_compute[232433]: 2025-12-06 07:41:55.052 232437 DEBUG nova.virt.hardware [None req-81bd3f10-8b29-42e7-b9d0-f08237827cff 05dfe700f5a141638f3ecb185261858e 765a234f3c484ca28bf25e21cff07bd1 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Dec  6 02:41:55 np0005548731 nova_compute[232433]: 2025-12-06 07:41:55.052 232437 INFO nova.compute.claims [None req-81bd3f10-8b29-42e7-b9d0-f08237827cff 05dfe700f5a141638f3ecb185261858e 765a234f3c484ca28bf25e21cff07bd1 - - default default] [instance: e0c1dd0c-1286-4b5d-987d-0b82bdb15585] Claim successful on node compute-2.ctlplane.example.com#033[00m
Dec  6 02:41:55 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:41:55 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:41:55 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:41:55.108 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:41:55 np0005548731 nova_compute[232433]: 2025-12-06 07:41:55.164 232437 DEBUG oslo_concurrency.processutils [None req-81bd3f10-8b29-42e7-b9d0-f08237827cff 05dfe700f5a141638f3ecb185261858e 765a234f3c484ca28bf25e21cff07bd1 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:41:55 np0005548731 nova_compute[232433]: 2025-12-06 07:41:55.231 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:41:55 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 02:41:55 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/26752850' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 02:41:55 np0005548731 nova_compute[232433]: 2025-12-06 07:41:55.588 232437 DEBUG oslo_concurrency.processutils [None req-81bd3f10-8b29-42e7-b9d0-f08237827cff 05dfe700f5a141638f3ecb185261858e 765a234f3c484ca28bf25e21cff07bd1 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.423s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:41:55 np0005548731 nova_compute[232433]: 2025-12-06 07:41:55.593 232437 DEBUG nova.compute.provider_tree [None req-81bd3f10-8b29-42e7-b9d0-f08237827cff 05dfe700f5a141638f3ecb185261858e 765a234f3c484ca28bf25e21cff07bd1 - - default default] Inventory has not changed in ProviderTree for provider: 6d00757a-082f-486d-ae84-869a2ba2e6e7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  6 02:41:55 np0005548731 nova_compute[232433]: 2025-12-06 07:41:55.611 232437 DEBUG nova.scheduler.client.report [None req-81bd3f10-8b29-42e7-b9d0-f08237827cff 05dfe700f5a141638f3ecb185261858e 765a234f3c484ca28bf25e21cff07bd1 - - default default] Inventory has not changed for provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  6 02:41:55 np0005548731 nova_compute[232433]: 2025-12-06 07:41:55.648 232437 DEBUG oslo_concurrency.lockutils [None req-81bd3f10-8b29-42e7-b9d0-f08237827cff 05dfe700f5a141638f3ecb185261858e 765a234f3c484ca28bf25e21cff07bd1 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.600s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:41:55 np0005548731 nova_compute[232433]: 2025-12-06 07:41:55.649 232437 DEBUG nova.compute.manager [None req-81bd3f10-8b29-42e7-b9d0-f08237827cff 05dfe700f5a141638f3ecb185261858e 765a234f3c484ca28bf25e21cff07bd1 - - default default] [instance: e0c1dd0c-1286-4b5d-987d-0b82bdb15585] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Dec  6 02:41:55 np0005548731 nova_compute[232433]: 2025-12-06 07:41:55.732 232437 DEBUG nova.compute.manager [None req-81bd3f10-8b29-42e7-b9d0-f08237827cff 05dfe700f5a141638f3ecb185261858e 765a234f3c484ca28bf25e21cff07bd1 - - default default] [instance: e0c1dd0c-1286-4b5d-987d-0b82bdb15585] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Dec  6 02:41:55 np0005548731 nova_compute[232433]: 2025-12-06 07:41:55.732 232437 DEBUG nova.network.neutron [None req-81bd3f10-8b29-42e7-b9d0-f08237827cff 05dfe700f5a141638f3ecb185261858e 765a234f3c484ca28bf25e21cff07bd1 - - default default] [instance: e0c1dd0c-1286-4b5d-987d-0b82bdb15585] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Dec  6 02:41:55 np0005548731 nova_compute[232433]: 2025-12-06 07:41:55.776 232437 INFO nova.virt.libvirt.driver [None req-81bd3f10-8b29-42e7-b9d0-f08237827cff 05dfe700f5a141638f3ecb185261858e 765a234f3c484ca28bf25e21cff07bd1 - - default default] [instance: e0c1dd0c-1286-4b5d-987d-0b82bdb15585] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Dec  6 02:41:55 np0005548731 nova_compute[232433]: 2025-12-06 07:41:55.812 232437 DEBUG nova.compute.manager [None req-81bd3f10-8b29-42e7-b9d0-f08237827cff 05dfe700f5a141638f3ecb185261858e 765a234f3c484ca28bf25e21cff07bd1 - - default default] [instance: e0c1dd0c-1286-4b5d-987d-0b82bdb15585] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Dec  6 02:41:55 np0005548731 nova_compute[232433]: 2025-12-06 07:41:55.946 232437 DEBUG nova.compute.manager [None req-81bd3f10-8b29-42e7-b9d0-f08237827cff 05dfe700f5a141638f3ecb185261858e 765a234f3c484ca28bf25e21cff07bd1 - - default default] [instance: e0c1dd0c-1286-4b5d-987d-0b82bdb15585] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Dec  6 02:41:55 np0005548731 nova_compute[232433]: 2025-12-06 07:41:55.948 232437 DEBUG nova.virt.libvirt.driver [None req-81bd3f10-8b29-42e7-b9d0-f08237827cff 05dfe700f5a141638f3ecb185261858e 765a234f3c484ca28bf25e21cff07bd1 - - default default] [instance: e0c1dd0c-1286-4b5d-987d-0b82bdb15585] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Dec  6 02:41:55 np0005548731 nova_compute[232433]: 2025-12-06 07:41:55.948 232437 INFO nova.virt.libvirt.driver [None req-81bd3f10-8b29-42e7-b9d0-f08237827cff 05dfe700f5a141638f3ecb185261858e 765a234f3c484ca28bf25e21cff07bd1 - - default default] [instance: e0c1dd0c-1286-4b5d-987d-0b82bdb15585] Creating image(s)#033[00m
Dec  6 02:41:55 np0005548731 nova_compute[232433]: 2025-12-06 07:41:55.971 232437 DEBUG nova.storage.rbd_utils [None req-81bd3f10-8b29-42e7-b9d0-f08237827cff 05dfe700f5a141638f3ecb185261858e 765a234f3c484ca28bf25e21cff07bd1 - - default default] rbd image e0c1dd0c-1286-4b5d-987d-0b82bdb15585_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:41:55 np0005548731 nova_compute[232433]: 2025-12-06 07:41:55.996 232437 DEBUG nova.storage.rbd_utils [None req-81bd3f10-8b29-42e7-b9d0-f08237827cff 05dfe700f5a141638f3ecb185261858e 765a234f3c484ca28bf25e21cff07bd1 - - default default] rbd image e0c1dd0c-1286-4b5d-987d-0b82bdb15585_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:41:56 np0005548731 nova_compute[232433]: 2025-12-06 07:41:56.026 232437 DEBUG nova.storage.rbd_utils [None req-81bd3f10-8b29-42e7-b9d0-f08237827cff 05dfe700f5a141638f3ecb185261858e 765a234f3c484ca28bf25e21cff07bd1 - - default default] rbd image e0c1dd0c-1286-4b5d-987d-0b82bdb15585_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:41:56 np0005548731 nova_compute[232433]: 2025-12-06 07:41:56.030 232437 DEBUG oslo_concurrency.processutils [None req-81bd3f10-8b29-42e7-b9d0-f08237827cff 05dfe700f5a141638f3ecb185261858e 765a234f3c484ca28bf25e21cff07bd1 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:41:56 np0005548731 nova_compute[232433]: 2025-12-06 07:41:56.057 232437 DEBUG nova.policy [None req-81bd3f10-8b29-42e7-b9d0-f08237827cff 05dfe700f5a141638f3ecb185261858e 765a234f3c484ca28bf25e21cff07bd1 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '05dfe700f5a141638f3ecb185261858e', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '765a234f3c484ca28bf25e21cff07bd1', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Dec  6 02:41:56 np0005548731 nova_compute[232433]: 2025-12-06 07:41:56.093 232437 DEBUG oslo_concurrency.processutils [None req-81bd3f10-8b29-42e7-b9d0-f08237827cff 05dfe700f5a141638f3ecb185261858e 765a234f3c484ca28bf25e21cff07bd1 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:41:56 np0005548731 nova_compute[232433]: 2025-12-06 07:41:56.094 232437 DEBUG oslo_concurrency.lockutils [None req-81bd3f10-8b29-42e7-b9d0-f08237827cff 05dfe700f5a141638f3ecb185261858e 765a234f3c484ca28bf25e21cff07bd1 - - default default] Acquiring lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:41:56 np0005548731 nova_compute[232433]: 2025-12-06 07:41:56.095 232437 DEBUG oslo_concurrency.lockutils [None req-81bd3f10-8b29-42e7-b9d0-f08237827cff 05dfe700f5a141638f3ecb185261858e 765a234f3c484ca28bf25e21cff07bd1 - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:41:56 np0005548731 nova_compute[232433]: 2025-12-06 07:41:56.096 232437 DEBUG oslo_concurrency.lockutils [None req-81bd3f10-8b29-42e7-b9d0-f08237827cff 05dfe700f5a141638f3ecb185261858e 765a234f3c484ca28bf25e21cff07bd1 - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:41:56 np0005548731 nova_compute[232433]: 2025-12-06 07:41:56.120 232437 DEBUG nova.storage.rbd_utils [None req-81bd3f10-8b29-42e7-b9d0-f08237827cff 05dfe700f5a141638f3ecb185261858e 765a234f3c484ca28bf25e21cff07bd1 - - default default] rbd image e0c1dd0c-1286-4b5d-987d-0b82bdb15585_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:41:56 np0005548731 nova_compute[232433]: 2025-12-06 07:41:56.124 232437 DEBUG oslo_concurrency.processutils [None req-81bd3f10-8b29-42e7-b9d0-f08237827cff 05dfe700f5a141638f3ecb185261858e 765a234f3c484ca28bf25e21cff07bd1 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef e0c1dd0c-1286-4b5d-987d-0b82bdb15585_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:41:56 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:41:56 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:41:56 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:41:56.280 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:41:56 np0005548731 nova_compute[232433]: 2025-12-06 07:41:56.884 232437 DEBUG nova.network.neutron [None req-81bd3f10-8b29-42e7-b9d0-f08237827cff 05dfe700f5a141638f3ecb185261858e 765a234f3c484ca28bf25e21cff07bd1 - - default default] [instance: e0c1dd0c-1286-4b5d-987d-0b82bdb15585] Successfully created port: 6443973b-c8a9-4d1f-9bf8-bfd076718040 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Dec  6 02:41:57 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:41:57 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:41:57 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:41:57.110 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:41:57 np0005548731 nova_compute[232433]: 2025-12-06 07:41:57.837 232437 DEBUG nova.network.neutron [None req-81bd3f10-8b29-42e7-b9d0-f08237827cff 05dfe700f5a141638f3ecb185261858e 765a234f3c484ca28bf25e21cff07bd1 - - default default] [instance: e0c1dd0c-1286-4b5d-987d-0b82bdb15585] Successfully updated port: 6443973b-c8a9-4d1f-9bf8-bfd076718040 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Dec  6 02:41:57 np0005548731 nova_compute[232433]: 2025-12-06 07:41:57.853 232437 DEBUG oslo_concurrency.lockutils [None req-81bd3f10-8b29-42e7-b9d0-f08237827cff 05dfe700f5a141638f3ecb185261858e 765a234f3c484ca28bf25e21cff07bd1 - - default default] Acquiring lock "refresh_cache-e0c1dd0c-1286-4b5d-987d-0b82bdb15585" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 02:41:57 np0005548731 nova_compute[232433]: 2025-12-06 07:41:57.853 232437 DEBUG oslo_concurrency.lockutils [None req-81bd3f10-8b29-42e7-b9d0-f08237827cff 05dfe700f5a141638f3ecb185261858e 765a234f3c484ca28bf25e21cff07bd1 - - default default] Acquired lock "refresh_cache-e0c1dd0c-1286-4b5d-987d-0b82bdb15585" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 02:41:57 np0005548731 nova_compute[232433]: 2025-12-06 07:41:57.853 232437 DEBUG nova.network.neutron [None req-81bd3f10-8b29-42e7-b9d0-f08237827cff 05dfe700f5a141638f3ecb185261858e 765a234f3c484ca28bf25e21cff07bd1 - - default default] [instance: e0c1dd0c-1286-4b5d-987d-0b82bdb15585] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec  6 02:41:58 np0005548731 nova_compute[232433]: 2025-12-06 07:41:58.252 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:41:58 np0005548731 nova_compute[232433]: 2025-12-06 07:41:58.270 232437 DEBUG nova.network.neutron [None req-81bd3f10-8b29-42e7-b9d0-f08237827cff 05dfe700f5a141638f3ecb185261858e 765a234f3c484ca28bf25e21cff07bd1 - - default default] [instance: e0c1dd0c-1286-4b5d-987d-0b82bdb15585] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec  6 02:41:58 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:41:58 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:41:58 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:41:58.284 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:41:58 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e326 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:41:59 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:41:59 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:41:59 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:41:59.114 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:41:59 np0005548731 nova_compute[232433]: 2025-12-06 07:41:59.169 232437 DEBUG nova.compute.manager [req-f1d6e142-f767-4b9e-bef7-de51989c0f9a req-3c57a23b-ca42-4183-b585-a4a8b5503439 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: e0c1dd0c-1286-4b5d-987d-0b82bdb15585] Received event network-changed-6443973b-c8a9-4d1f-9bf8-bfd076718040 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:41:59 np0005548731 nova_compute[232433]: 2025-12-06 07:41:59.169 232437 DEBUG nova.compute.manager [req-f1d6e142-f767-4b9e-bef7-de51989c0f9a req-3c57a23b-ca42-4183-b585-a4a8b5503439 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: e0c1dd0c-1286-4b5d-987d-0b82bdb15585] Refreshing instance network info cache due to event network-changed-6443973b-c8a9-4d1f-9bf8-bfd076718040. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  6 02:41:59 np0005548731 nova_compute[232433]: 2025-12-06 07:41:59.169 232437 DEBUG oslo_concurrency.lockutils [req-f1d6e142-f767-4b9e-bef7-de51989c0f9a req-3c57a23b-ca42-4183-b585-a4a8b5503439 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-e0c1dd0c-1286-4b5d-987d-0b82bdb15585" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 02:41:59 np0005548731 nova_compute[232433]: 2025-12-06 07:41:59.410 232437 DEBUG nova.network.neutron [None req-81bd3f10-8b29-42e7-b9d0-f08237827cff 05dfe700f5a141638f3ecb185261858e 765a234f3c484ca28bf25e21cff07bd1 - - default default] [instance: e0c1dd0c-1286-4b5d-987d-0b82bdb15585] Updating instance_info_cache with network_info: [{"id": "6443973b-c8a9-4d1f-9bf8-bfd076718040", "address": "fa:16:3e:61:18:97", "network": {"id": "75187f06-26ac-46dc-98f0-52adc57eeaeb", "bridge": "br-int", "label": "tempest-ServerTagsTestJSON-1104472458-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "765a234f3c484ca28bf25e21cff07bd1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6443973b-c8", "ovs_interfaceid": "6443973b-c8a9-4d1f-9bf8-bfd076718040", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 02:41:59 np0005548731 nova_compute[232433]: 2025-12-06 07:41:59.433 232437 DEBUG oslo_concurrency.lockutils [None req-81bd3f10-8b29-42e7-b9d0-f08237827cff 05dfe700f5a141638f3ecb185261858e 765a234f3c484ca28bf25e21cff07bd1 - - default default] Releasing lock "refresh_cache-e0c1dd0c-1286-4b5d-987d-0b82bdb15585" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 02:41:59 np0005548731 nova_compute[232433]: 2025-12-06 07:41:59.434 232437 DEBUG nova.compute.manager [None req-81bd3f10-8b29-42e7-b9d0-f08237827cff 05dfe700f5a141638f3ecb185261858e 765a234f3c484ca28bf25e21cff07bd1 - - default default] [instance: e0c1dd0c-1286-4b5d-987d-0b82bdb15585] Instance network_info: |[{"id": "6443973b-c8a9-4d1f-9bf8-bfd076718040", "address": "fa:16:3e:61:18:97", "network": {"id": "75187f06-26ac-46dc-98f0-52adc57eeaeb", "bridge": "br-int", "label": "tempest-ServerTagsTestJSON-1104472458-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "765a234f3c484ca28bf25e21cff07bd1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6443973b-c8", "ovs_interfaceid": "6443973b-c8a9-4d1f-9bf8-bfd076718040", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Dec  6 02:41:59 np0005548731 nova_compute[232433]: 2025-12-06 07:41:59.434 232437 DEBUG oslo_concurrency.lockutils [req-f1d6e142-f767-4b9e-bef7-de51989c0f9a req-3c57a23b-ca42-4183-b585-a4a8b5503439 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-e0c1dd0c-1286-4b5d-987d-0b82bdb15585" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 02:41:59 np0005548731 nova_compute[232433]: 2025-12-06 07:41:59.435 232437 DEBUG nova.network.neutron [req-f1d6e142-f767-4b9e-bef7-de51989c0f9a req-3c57a23b-ca42-4183-b585-a4a8b5503439 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: e0c1dd0c-1286-4b5d-987d-0b82bdb15585] Refreshing network info cache for port 6443973b-c8a9-4d1f-9bf8-bfd076718040 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  6 02:42:00 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e327 e327: 3 total, 3 up, 3 in
Dec  6 02:42:00 np0005548731 nova_compute[232433]: 2025-12-06 07:42:00.185 232437 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765006905.183041, e78be236-3d5f-4951-9149-8c7d9a6f0382 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 02:42:00 np0005548731 nova_compute[232433]: 2025-12-06 07:42:00.185 232437 INFO nova.compute.manager [-] [instance: e78be236-3d5f-4951-9149-8c7d9a6f0382] VM Stopped (Lifecycle Event)#033[00m
Dec  6 02:42:00 np0005548731 nova_compute[232433]: 2025-12-06 07:42:00.234 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:42:00 np0005548731 nova_compute[232433]: 2025-12-06 07:42:00.265 232437 DEBUG nova.compute.manager [None req-1e49ca02-e720-477e-8114-649e174bfcd2 - - - - - -] [instance: e78be236-3d5f-4951-9149-8c7d9a6f0382] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:42:00 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:42:00 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:42:00 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:42:00.285 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:42:00 np0005548731 nova_compute[232433]: 2025-12-06 07:42:00.702 232437 DEBUG oslo_concurrency.processutils [None req-81bd3f10-8b29-42e7-b9d0-f08237827cff 05dfe700f5a141638f3ecb185261858e 765a234f3c484ca28bf25e21cff07bd1 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef e0c1dd0c-1286-4b5d-987d-0b82bdb15585_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 4.578s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:42:00 np0005548731 nova_compute[232433]: 2025-12-06 07:42:00.777 232437 DEBUG nova.storage.rbd_utils [None req-81bd3f10-8b29-42e7-b9d0-f08237827cff 05dfe700f5a141638f3ecb185261858e 765a234f3c484ca28bf25e21cff07bd1 - - default default] resizing rbd image e0c1dd0c-1286-4b5d-987d-0b82bdb15585_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Dec  6 02:42:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:42:00.882 143965 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:42:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:42:00.882 143965 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:42:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:42:00.883 143965 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:42:01 np0005548731 ovn_controller[133927]: 2025-12-06T07:42:01Z|00088|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:48:b4:88 10.100.0.3
Dec  6 02:42:01 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:42:01 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:42:01 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:42:01.116 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:42:01 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 02:42:01 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 02:42:01 np0005548731 nova_compute[232433]: 2025-12-06 07:42:01.255 232437 DEBUG nova.objects.instance [None req-81bd3f10-8b29-42e7-b9d0-f08237827cff 05dfe700f5a141638f3ecb185261858e 765a234f3c484ca28bf25e21cff07bd1 - - default default] Lazy-loading 'migration_context' on Instance uuid e0c1dd0c-1286-4b5d-987d-0b82bdb15585 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:42:01 np0005548731 nova_compute[232433]: 2025-12-06 07:42:01.271 232437 DEBUG nova.virt.libvirt.driver [None req-81bd3f10-8b29-42e7-b9d0-f08237827cff 05dfe700f5a141638f3ecb185261858e 765a234f3c484ca28bf25e21cff07bd1 - - default default] [instance: e0c1dd0c-1286-4b5d-987d-0b82bdb15585] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Dec  6 02:42:01 np0005548731 nova_compute[232433]: 2025-12-06 07:42:01.272 232437 DEBUG nova.virt.libvirt.driver [None req-81bd3f10-8b29-42e7-b9d0-f08237827cff 05dfe700f5a141638f3ecb185261858e 765a234f3c484ca28bf25e21cff07bd1 - - default default] [instance: e0c1dd0c-1286-4b5d-987d-0b82bdb15585] Ensure instance console log exists: /var/lib/nova/instances/e0c1dd0c-1286-4b5d-987d-0b82bdb15585/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec  6 02:42:01 np0005548731 nova_compute[232433]: 2025-12-06 07:42:01.273 232437 DEBUG oslo_concurrency.lockutils [None req-81bd3f10-8b29-42e7-b9d0-f08237827cff 05dfe700f5a141638f3ecb185261858e 765a234f3c484ca28bf25e21cff07bd1 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:42:01 np0005548731 nova_compute[232433]: 2025-12-06 07:42:01.273 232437 DEBUG oslo_concurrency.lockutils [None req-81bd3f10-8b29-42e7-b9d0-f08237827cff 05dfe700f5a141638f3ecb185261858e 765a234f3c484ca28bf25e21cff07bd1 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:42:01 np0005548731 nova_compute[232433]: 2025-12-06 07:42:01.274 232437 DEBUG oslo_concurrency.lockutils [None req-81bd3f10-8b29-42e7-b9d0-f08237827cff 05dfe700f5a141638f3ecb185261858e 765a234f3c484ca28bf25e21cff07bd1 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:42:01 np0005548731 nova_compute[232433]: 2025-12-06 07:42:01.276 232437 DEBUG nova.virt.libvirt.driver [None req-81bd3f10-8b29-42e7-b9d0-f08237827cff 05dfe700f5a141638f3ecb185261858e 765a234f3c484ca28bf25e21cff07bd1 - - default default] [instance: e0c1dd0c-1286-4b5d-987d-0b82bdb15585] Start _get_guest_xml network_info=[{"id": "6443973b-c8a9-4d1f-9bf8-bfd076718040", "address": "fa:16:3e:61:18:97", "network": {"id": "75187f06-26ac-46dc-98f0-52adc57eeaeb", "bridge": "br-int", "label": "tempest-ServerTagsTestJSON-1104472458-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "765a234f3c484ca28bf25e21cff07bd1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6443973b-c8", "ovs_interfaceid": "6443973b-c8a9-4d1f-9bf8-bfd076718040", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'device_type': 'disk', 'encryption_secret_uuid': None, 'disk_bus': 'virtio', 'boot_index': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'encryption_options': None, 'size': 0, 'encrypted': False, 'image_id': '6efab05d-c7cf-4770-a5c3-c806a2739063'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec  6 02:42:01 np0005548731 nova_compute[232433]: 2025-12-06 07:42:01.281 232437 WARNING nova.virt.libvirt.driver [None req-81bd3f10-8b29-42e7-b9d0-f08237827cff 05dfe700f5a141638f3ecb185261858e 765a234f3c484ca28bf25e21cff07bd1 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  6 02:42:01 np0005548731 nova_compute[232433]: 2025-12-06 07:42:01.287 232437 DEBUG nova.virt.libvirt.host [None req-81bd3f10-8b29-42e7-b9d0-f08237827cff 05dfe700f5a141638f3ecb185261858e 765a234f3c484ca28bf25e21cff07bd1 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec  6 02:42:01 np0005548731 nova_compute[232433]: 2025-12-06 07:42:01.288 232437 DEBUG nova.virt.libvirt.host [None req-81bd3f10-8b29-42e7-b9d0-f08237827cff 05dfe700f5a141638f3ecb185261858e 765a234f3c484ca28bf25e21cff07bd1 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec  6 02:42:01 np0005548731 nova_compute[232433]: 2025-12-06 07:42:01.294 232437 DEBUG nova.virt.libvirt.host [None req-81bd3f10-8b29-42e7-b9d0-f08237827cff 05dfe700f5a141638f3ecb185261858e 765a234f3c484ca28bf25e21cff07bd1 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec  6 02:42:01 np0005548731 nova_compute[232433]: 2025-12-06 07:42:01.295 232437 DEBUG nova.virt.libvirt.host [None req-81bd3f10-8b29-42e7-b9d0-f08237827cff 05dfe700f5a141638f3ecb185261858e 765a234f3c484ca28bf25e21cff07bd1 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec  6 02:42:01 np0005548731 nova_compute[232433]: 2025-12-06 07:42:01.296 232437 DEBUG nova.virt.libvirt.driver [None req-81bd3f10-8b29-42e7-b9d0-f08237827cff 05dfe700f5a141638f3ecb185261858e 765a234f3c484ca28bf25e21cff07bd1 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec  6 02:42:01 np0005548731 nova_compute[232433]: 2025-12-06 07:42:01.296 232437 DEBUG nova.virt.hardware [None req-81bd3f10-8b29-42e7-b9d0-f08237827cff 05dfe700f5a141638f3ecb185261858e 765a234f3c484ca28bf25e21cff07bd1 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-06T06:56:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='25848a18-11d9-4f11-80b5-5d005675c76d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec  6 02:42:01 np0005548731 nova_compute[232433]: 2025-12-06 07:42:01.296 232437 DEBUG nova.virt.hardware [None req-81bd3f10-8b29-42e7-b9d0-f08237827cff 05dfe700f5a141638f3ecb185261858e 765a234f3c484ca28bf25e21cff07bd1 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec  6 02:42:01 np0005548731 nova_compute[232433]: 2025-12-06 07:42:01.297 232437 DEBUG nova.virt.hardware [None req-81bd3f10-8b29-42e7-b9d0-f08237827cff 05dfe700f5a141638f3ecb185261858e 765a234f3c484ca28bf25e21cff07bd1 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec  6 02:42:01 np0005548731 nova_compute[232433]: 2025-12-06 07:42:01.297 232437 DEBUG nova.virt.hardware [None req-81bd3f10-8b29-42e7-b9d0-f08237827cff 05dfe700f5a141638f3ecb185261858e 765a234f3c484ca28bf25e21cff07bd1 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec  6 02:42:01 np0005548731 nova_compute[232433]: 2025-12-06 07:42:01.297 232437 DEBUG nova.virt.hardware [None req-81bd3f10-8b29-42e7-b9d0-f08237827cff 05dfe700f5a141638f3ecb185261858e 765a234f3c484ca28bf25e21cff07bd1 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec  6 02:42:01 np0005548731 nova_compute[232433]: 2025-12-06 07:42:01.297 232437 DEBUG nova.virt.hardware [None req-81bd3f10-8b29-42e7-b9d0-f08237827cff 05dfe700f5a141638f3ecb185261858e 765a234f3c484ca28bf25e21cff07bd1 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec  6 02:42:01 np0005548731 nova_compute[232433]: 2025-12-06 07:42:01.297 232437 DEBUG nova.virt.hardware [None req-81bd3f10-8b29-42e7-b9d0-f08237827cff 05dfe700f5a141638f3ecb185261858e 765a234f3c484ca28bf25e21cff07bd1 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec  6 02:42:01 np0005548731 nova_compute[232433]: 2025-12-06 07:42:01.298 232437 DEBUG nova.virt.hardware [None req-81bd3f10-8b29-42e7-b9d0-f08237827cff 05dfe700f5a141638f3ecb185261858e 765a234f3c484ca28bf25e21cff07bd1 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec  6 02:42:01 np0005548731 nova_compute[232433]: 2025-12-06 07:42:01.298 232437 DEBUG nova.virt.hardware [None req-81bd3f10-8b29-42e7-b9d0-f08237827cff 05dfe700f5a141638f3ecb185261858e 765a234f3c484ca28bf25e21cff07bd1 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec  6 02:42:01 np0005548731 nova_compute[232433]: 2025-12-06 07:42:01.298 232437 DEBUG nova.virt.hardware [None req-81bd3f10-8b29-42e7-b9d0-f08237827cff 05dfe700f5a141638f3ecb185261858e 765a234f3c484ca28bf25e21cff07bd1 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec  6 02:42:01 np0005548731 nova_compute[232433]: 2025-12-06 07:42:01.298 232437 DEBUG nova.virt.hardware [None req-81bd3f10-8b29-42e7-b9d0-f08237827cff 05dfe700f5a141638f3ecb185261858e 765a234f3c484ca28bf25e21cff07bd1 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec  6 02:42:01 np0005548731 nova_compute[232433]: 2025-12-06 07:42:01.301 232437 DEBUG oslo_concurrency.processutils [None req-81bd3f10-8b29-42e7-b9d0-f08237827cff 05dfe700f5a141638f3ecb185261858e 765a234f3c484ca28bf25e21cff07bd1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:42:01 np0005548731 nova_compute[232433]: 2025-12-06 07:42:01.327 232437 DEBUG nova.network.neutron [req-f1d6e142-f767-4b9e-bef7-de51989c0f9a req-3c57a23b-ca42-4183-b585-a4a8b5503439 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: e0c1dd0c-1286-4b5d-987d-0b82bdb15585] Updated VIF entry in instance network info cache for port 6443973b-c8a9-4d1f-9bf8-bfd076718040. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  6 02:42:01 np0005548731 nova_compute[232433]: 2025-12-06 07:42:01.328 232437 DEBUG nova.network.neutron [req-f1d6e142-f767-4b9e-bef7-de51989c0f9a req-3c57a23b-ca42-4183-b585-a4a8b5503439 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: e0c1dd0c-1286-4b5d-987d-0b82bdb15585] Updating instance_info_cache with network_info: [{"id": "6443973b-c8a9-4d1f-9bf8-bfd076718040", "address": "fa:16:3e:61:18:97", "network": {"id": "75187f06-26ac-46dc-98f0-52adc57eeaeb", "bridge": "br-int", "label": "tempest-ServerTagsTestJSON-1104472458-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "765a234f3c484ca28bf25e21cff07bd1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6443973b-c8", "ovs_interfaceid": "6443973b-c8a9-4d1f-9bf8-bfd076718040", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 02:42:01 np0005548731 nova_compute[232433]: 2025-12-06 07:42:01.346 232437 DEBUG oslo_concurrency.lockutils [req-f1d6e142-f767-4b9e-bef7-de51989c0f9a req-3c57a23b-ca42-4183-b585-a4a8b5503439 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-e0c1dd0c-1286-4b5d-987d-0b82bdb15585" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 02:42:01 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  6 02:42:01 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/820102900' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  6 02:42:01 np0005548731 nova_compute[232433]: 2025-12-06 07:42:01.733 232437 DEBUG oslo_concurrency.processutils [None req-81bd3f10-8b29-42e7-b9d0-f08237827cff 05dfe700f5a141638f3ecb185261858e 765a234f3c484ca28bf25e21cff07bd1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.432s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:42:01 np0005548731 nova_compute[232433]: 2025-12-06 07:42:01.759 232437 DEBUG nova.storage.rbd_utils [None req-81bd3f10-8b29-42e7-b9d0-f08237827cff 05dfe700f5a141638f3ecb185261858e 765a234f3c484ca28bf25e21cff07bd1 - - default default] rbd image e0c1dd0c-1286-4b5d-987d-0b82bdb15585_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:42:01 np0005548731 nova_compute[232433]: 2025-12-06 07:42:01.764 232437 DEBUG oslo_concurrency.processutils [None req-81bd3f10-8b29-42e7-b9d0-f08237827cff 05dfe700f5a141638f3ecb185261858e 765a234f3c484ca28bf25e21cff07bd1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:42:02 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  6 02:42:02 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/41133090' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  6 02:42:02 np0005548731 nova_compute[232433]: 2025-12-06 07:42:02.198 232437 DEBUG oslo_concurrency.processutils [None req-81bd3f10-8b29-42e7-b9d0-f08237827cff 05dfe700f5a141638f3ecb185261858e 765a234f3c484ca28bf25e21cff07bd1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.434s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:42:02 np0005548731 nova_compute[232433]: 2025-12-06 07:42:02.200 232437 DEBUG nova.virt.libvirt.vif [None req-81bd3f10-8b29-42e7-b9d0-f08237827cff 05dfe700f5a141638f3ecb185261858e 765a234f3c484ca28bf25e21cff07bd1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T07:41:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-ServerTagsTestJSON-server-890920002',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-servertagstestjson-server-890920002',id=144,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='765a234f3c484ca28bf25e21cff07bd1',ramdisk_id='',reservation_id='r-izgv4dki',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerTagsTestJSON-1006516127',owner_user_name='tempest-ServerTagsTestJSON-1006516127-project-member'},tags=TagList,task_state='sp
awning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:41:55Z,user_data=None,user_id='05dfe700f5a141638f3ecb185261858e',uuid=e0c1dd0c-1286-4b5d-987d-0b82bdb15585,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "6443973b-c8a9-4d1f-9bf8-bfd076718040", "address": "fa:16:3e:61:18:97", "network": {"id": "75187f06-26ac-46dc-98f0-52adc57eeaeb", "bridge": "br-int", "label": "tempest-ServerTagsTestJSON-1104472458-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "765a234f3c484ca28bf25e21cff07bd1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6443973b-c8", "ovs_interfaceid": "6443973b-c8a9-4d1f-9bf8-bfd076718040", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec  6 02:42:02 np0005548731 nova_compute[232433]: 2025-12-06 07:42:02.200 232437 DEBUG nova.network.os_vif_util [None req-81bd3f10-8b29-42e7-b9d0-f08237827cff 05dfe700f5a141638f3ecb185261858e 765a234f3c484ca28bf25e21cff07bd1 - - default default] Converting VIF {"id": "6443973b-c8a9-4d1f-9bf8-bfd076718040", "address": "fa:16:3e:61:18:97", "network": {"id": "75187f06-26ac-46dc-98f0-52adc57eeaeb", "bridge": "br-int", "label": "tempest-ServerTagsTestJSON-1104472458-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "765a234f3c484ca28bf25e21cff07bd1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6443973b-c8", "ovs_interfaceid": "6443973b-c8a9-4d1f-9bf8-bfd076718040", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 02:42:02 np0005548731 nova_compute[232433]: 2025-12-06 07:42:02.201 232437 DEBUG nova.network.os_vif_util [None req-81bd3f10-8b29-42e7-b9d0-f08237827cff 05dfe700f5a141638f3ecb185261858e 765a234f3c484ca28bf25e21cff07bd1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:61:18:97,bridge_name='br-int',has_traffic_filtering=True,id=6443973b-c8a9-4d1f-9bf8-bfd076718040,network=Network(75187f06-26ac-46dc-98f0-52adc57eeaeb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6443973b-c8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 02:42:02 np0005548731 nova_compute[232433]: 2025-12-06 07:42:02.202 232437 DEBUG nova.objects.instance [None req-81bd3f10-8b29-42e7-b9d0-f08237827cff 05dfe700f5a141638f3ecb185261858e 765a234f3c484ca28bf25e21cff07bd1 - - default default] Lazy-loading 'pci_devices' on Instance uuid e0c1dd0c-1286-4b5d-987d-0b82bdb15585 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:42:02 np0005548731 nova_compute[232433]: 2025-12-06 07:42:02.217 232437 DEBUG nova.virt.libvirt.driver [None req-81bd3f10-8b29-42e7-b9d0-f08237827cff 05dfe700f5a141638f3ecb185261858e 765a234f3c484ca28bf25e21cff07bd1 - - default default] [instance: e0c1dd0c-1286-4b5d-987d-0b82bdb15585] End _get_guest_xml xml=<domain type="kvm">
Dec  6 02:42:02 np0005548731 nova_compute[232433]:  <uuid>e0c1dd0c-1286-4b5d-987d-0b82bdb15585</uuid>
Dec  6 02:42:02 np0005548731 nova_compute[232433]:  <name>instance-00000090</name>
Dec  6 02:42:02 np0005548731 nova_compute[232433]:  <memory>131072</memory>
Dec  6 02:42:02 np0005548731 nova_compute[232433]:  <vcpu>1</vcpu>
Dec  6 02:42:02 np0005548731 nova_compute[232433]:  <metadata>
Dec  6 02:42:02 np0005548731 nova_compute[232433]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec  6 02:42:02 np0005548731 nova_compute[232433]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec  6 02:42:02 np0005548731 nova_compute[232433]:      <nova:name>tempest-ServerTagsTestJSON-server-890920002</nova:name>
Dec  6 02:42:02 np0005548731 nova_compute[232433]:      <nova:creationTime>2025-12-06 07:42:01</nova:creationTime>
Dec  6 02:42:02 np0005548731 nova_compute[232433]:      <nova:flavor name="m1.nano">
Dec  6 02:42:02 np0005548731 nova_compute[232433]:        <nova:memory>128</nova:memory>
Dec  6 02:42:02 np0005548731 nova_compute[232433]:        <nova:disk>1</nova:disk>
Dec  6 02:42:02 np0005548731 nova_compute[232433]:        <nova:swap>0</nova:swap>
Dec  6 02:42:02 np0005548731 nova_compute[232433]:        <nova:ephemeral>0</nova:ephemeral>
Dec  6 02:42:02 np0005548731 nova_compute[232433]:        <nova:vcpus>1</nova:vcpus>
Dec  6 02:42:02 np0005548731 nova_compute[232433]:      </nova:flavor>
Dec  6 02:42:02 np0005548731 nova_compute[232433]:      <nova:owner>
Dec  6 02:42:02 np0005548731 nova_compute[232433]:        <nova:user uuid="05dfe700f5a141638f3ecb185261858e">tempest-ServerTagsTestJSON-1006516127-project-member</nova:user>
Dec  6 02:42:02 np0005548731 nova_compute[232433]:        <nova:project uuid="765a234f3c484ca28bf25e21cff07bd1">tempest-ServerTagsTestJSON-1006516127</nova:project>
Dec  6 02:42:02 np0005548731 nova_compute[232433]:      </nova:owner>
Dec  6 02:42:02 np0005548731 nova_compute[232433]:      <nova:root type="image" uuid="6efab05d-c7cf-4770-a5c3-c806a2739063"/>
Dec  6 02:42:02 np0005548731 nova_compute[232433]:      <nova:ports>
Dec  6 02:42:02 np0005548731 nova_compute[232433]:        <nova:port uuid="6443973b-c8a9-4d1f-9bf8-bfd076718040">
Dec  6 02:42:02 np0005548731 nova_compute[232433]:          <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Dec  6 02:42:02 np0005548731 nova_compute[232433]:        </nova:port>
Dec  6 02:42:02 np0005548731 nova_compute[232433]:      </nova:ports>
Dec  6 02:42:02 np0005548731 nova_compute[232433]:    </nova:instance>
Dec  6 02:42:02 np0005548731 nova_compute[232433]:  </metadata>
Dec  6 02:42:02 np0005548731 nova_compute[232433]:  <sysinfo type="smbios">
Dec  6 02:42:02 np0005548731 nova_compute[232433]:    <system>
Dec  6 02:42:02 np0005548731 nova_compute[232433]:      <entry name="manufacturer">RDO</entry>
Dec  6 02:42:02 np0005548731 nova_compute[232433]:      <entry name="product">OpenStack Compute</entry>
Dec  6 02:42:02 np0005548731 nova_compute[232433]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec  6 02:42:02 np0005548731 nova_compute[232433]:      <entry name="serial">e0c1dd0c-1286-4b5d-987d-0b82bdb15585</entry>
Dec  6 02:42:02 np0005548731 nova_compute[232433]:      <entry name="uuid">e0c1dd0c-1286-4b5d-987d-0b82bdb15585</entry>
Dec  6 02:42:02 np0005548731 nova_compute[232433]:      <entry name="family">Virtual Machine</entry>
Dec  6 02:42:02 np0005548731 nova_compute[232433]:    </system>
Dec  6 02:42:02 np0005548731 nova_compute[232433]:  </sysinfo>
Dec  6 02:42:02 np0005548731 nova_compute[232433]:  <os>
Dec  6 02:42:02 np0005548731 nova_compute[232433]:    <type arch="x86_64" machine="q35">hvm</type>
Dec  6 02:42:02 np0005548731 nova_compute[232433]:    <boot dev="hd"/>
Dec  6 02:42:02 np0005548731 nova_compute[232433]:    <smbios mode="sysinfo"/>
Dec  6 02:42:02 np0005548731 nova_compute[232433]:  </os>
Dec  6 02:42:02 np0005548731 nova_compute[232433]:  <features>
Dec  6 02:42:02 np0005548731 nova_compute[232433]:    <acpi/>
Dec  6 02:42:02 np0005548731 nova_compute[232433]:    <apic/>
Dec  6 02:42:02 np0005548731 nova_compute[232433]:    <vmcoreinfo/>
Dec  6 02:42:02 np0005548731 nova_compute[232433]:  </features>
Dec  6 02:42:02 np0005548731 nova_compute[232433]:  <clock offset="utc">
Dec  6 02:42:02 np0005548731 nova_compute[232433]:    <timer name="pit" tickpolicy="delay"/>
Dec  6 02:42:02 np0005548731 nova_compute[232433]:    <timer name="rtc" tickpolicy="catchup"/>
Dec  6 02:42:02 np0005548731 nova_compute[232433]:    <timer name="hpet" present="no"/>
Dec  6 02:42:02 np0005548731 nova_compute[232433]:  </clock>
Dec  6 02:42:02 np0005548731 nova_compute[232433]:  <cpu mode="custom" match="exact">
Dec  6 02:42:02 np0005548731 nova_compute[232433]:    <model>Nehalem</model>
Dec  6 02:42:02 np0005548731 nova_compute[232433]:    <topology sockets="1" cores="1" threads="1"/>
Dec  6 02:42:02 np0005548731 nova_compute[232433]:  </cpu>
Dec  6 02:42:02 np0005548731 nova_compute[232433]:  <devices>
Dec  6 02:42:02 np0005548731 nova_compute[232433]:    <disk type="network" device="disk">
Dec  6 02:42:02 np0005548731 nova_compute[232433]:      <driver type="raw" cache="none"/>
Dec  6 02:42:02 np0005548731 nova_compute[232433]:      <source protocol="rbd" name="vms/e0c1dd0c-1286-4b5d-987d-0b82bdb15585_disk">
Dec  6 02:42:02 np0005548731 nova_compute[232433]:        <host name="192.168.122.100" port="6789"/>
Dec  6 02:42:02 np0005548731 nova_compute[232433]:        <host name="192.168.122.102" port="6789"/>
Dec  6 02:42:02 np0005548731 nova_compute[232433]:        <host name="192.168.122.101" port="6789"/>
Dec  6 02:42:02 np0005548731 nova_compute[232433]:      </source>
Dec  6 02:42:02 np0005548731 nova_compute[232433]:      <auth username="openstack">
Dec  6 02:42:02 np0005548731 nova_compute[232433]:        <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec  6 02:42:02 np0005548731 nova_compute[232433]:      </auth>
Dec  6 02:42:02 np0005548731 nova_compute[232433]:      <target dev="vda" bus="virtio"/>
Dec  6 02:42:02 np0005548731 nova_compute[232433]:    </disk>
Dec  6 02:42:02 np0005548731 nova_compute[232433]:    <disk type="network" device="cdrom">
Dec  6 02:42:02 np0005548731 nova_compute[232433]:      <driver type="raw" cache="none"/>
Dec  6 02:42:02 np0005548731 nova_compute[232433]:      <source protocol="rbd" name="vms/e0c1dd0c-1286-4b5d-987d-0b82bdb15585_disk.config">
Dec  6 02:42:02 np0005548731 nova_compute[232433]:        <host name="192.168.122.100" port="6789"/>
Dec  6 02:42:02 np0005548731 nova_compute[232433]:        <host name="192.168.122.102" port="6789"/>
Dec  6 02:42:02 np0005548731 nova_compute[232433]:        <host name="192.168.122.101" port="6789"/>
Dec  6 02:42:02 np0005548731 nova_compute[232433]:      </source>
Dec  6 02:42:02 np0005548731 nova_compute[232433]:      <auth username="openstack">
Dec  6 02:42:02 np0005548731 nova_compute[232433]:        <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec  6 02:42:02 np0005548731 nova_compute[232433]:      </auth>
Dec  6 02:42:02 np0005548731 nova_compute[232433]:      <target dev="sda" bus="sata"/>
Dec  6 02:42:02 np0005548731 nova_compute[232433]:    </disk>
Dec  6 02:42:02 np0005548731 nova_compute[232433]:    <interface type="ethernet">
Dec  6 02:42:02 np0005548731 nova_compute[232433]:      <mac address="fa:16:3e:61:18:97"/>
Dec  6 02:42:02 np0005548731 nova_compute[232433]:      <model type="virtio"/>
Dec  6 02:42:02 np0005548731 nova_compute[232433]:      <driver name="vhost" rx_queue_size="512"/>
Dec  6 02:42:02 np0005548731 nova_compute[232433]:      <mtu size="1442"/>
Dec  6 02:42:02 np0005548731 nova_compute[232433]:      <target dev="tap6443973b-c8"/>
Dec  6 02:42:02 np0005548731 nova_compute[232433]:    </interface>
Dec  6 02:42:02 np0005548731 nova_compute[232433]:    <serial type="pty">
Dec  6 02:42:02 np0005548731 nova_compute[232433]:      <log file="/var/lib/nova/instances/e0c1dd0c-1286-4b5d-987d-0b82bdb15585/console.log" append="off"/>
Dec  6 02:42:02 np0005548731 nova_compute[232433]:    </serial>
Dec  6 02:42:02 np0005548731 nova_compute[232433]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec  6 02:42:02 np0005548731 nova_compute[232433]:    <video>
Dec  6 02:42:02 np0005548731 nova_compute[232433]:      <model type="virtio"/>
Dec  6 02:42:02 np0005548731 nova_compute[232433]:    </video>
Dec  6 02:42:02 np0005548731 nova_compute[232433]:    <input type="tablet" bus="usb"/>
Dec  6 02:42:02 np0005548731 nova_compute[232433]:    <rng model="virtio">
Dec  6 02:42:02 np0005548731 nova_compute[232433]:      <backend model="random">/dev/urandom</backend>
Dec  6 02:42:02 np0005548731 nova_compute[232433]:    </rng>
Dec  6 02:42:02 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root"/>
Dec  6 02:42:02 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:42:02 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:42:02 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:42:02 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:42:02 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:42:02 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:42:02 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:42:02 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:42:02 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:42:02 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:42:02 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:42:02 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:42:02 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:42:02 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:42:02 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:42:02 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:42:02 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:42:02 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:42:02 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:42:02 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:42:02 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:42:02 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:42:02 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:42:02 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:42:02 np0005548731 nova_compute[232433]:    <controller type="usb" index="0"/>
Dec  6 02:42:02 np0005548731 nova_compute[232433]:    <memballoon model="virtio">
Dec  6 02:42:02 np0005548731 nova_compute[232433]:      <stats period="10"/>
Dec  6 02:42:02 np0005548731 nova_compute[232433]:    </memballoon>
Dec  6 02:42:02 np0005548731 nova_compute[232433]:  </devices>
Dec  6 02:42:02 np0005548731 nova_compute[232433]: </domain>
Dec  6 02:42:02 np0005548731 nova_compute[232433]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec  6 02:42:02 np0005548731 nova_compute[232433]: 2025-12-06 07:42:02.219 232437 DEBUG nova.compute.manager [None req-81bd3f10-8b29-42e7-b9d0-f08237827cff 05dfe700f5a141638f3ecb185261858e 765a234f3c484ca28bf25e21cff07bd1 - - default default] [instance: e0c1dd0c-1286-4b5d-987d-0b82bdb15585] Preparing to wait for external event network-vif-plugged-6443973b-c8a9-4d1f-9bf8-bfd076718040 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec  6 02:42:02 np0005548731 nova_compute[232433]: 2025-12-06 07:42:02.219 232437 DEBUG oslo_concurrency.lockutils [None req-81bd3f10-8b29-42e7-b9d0-f08237827cff 05dfe700f5a141638f3ecb185261858e 765a234f3c484ca28bf25e21cff07bd1 - - default default] Acquiring lock "e0c1dd0c-1286-4b5d-987d-0b82bdb15585-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:42:02 np0005548731 nova_compute[232433]: 2025-12-06 07:42:02.219 232437 DEBUG oslo_concurrency.lockutils [None req-81bd3f10-8b29-42e7-b9d0-f08237827cff 05dfe700f5a141638f3ecb185261858e 765a234f3c484ca28bf25e21cff07bd1 - - default default] Lock "e0c1dd0c-1286-4b5d-987d-0b82bdb15585-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:42:02 np0005548731 nova_compute[232433]: 2025-12-06 07:42:02.220 232437 DEBUG oslo_concurrency.lockutils [None req-81bd3f10-8b29-42e7-b9d0-f08237827cff 05dfe700f5a141638f3ecb185261858e 765a234f3c484ca28bf25e21cff07bd1 - - default default] Lock "e0c1dd0c-1286-4b5d-987d-0b82bdb15585-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:42:02 np0005548731 nova_compute[232433]: 2025-12-06 07:42:02.220 232437 DEBUG nova.virt.libvirt.vif [None req-81bd3f10-8b29-42e7-b9d0-f08237827cff 05dfe700f5a141638f3ecb185261858e 765a234f3c484ca28bf25e21cff07bd1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T07:41:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-ServerTagsTestJSON-server-890920002',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-servertagstestjson-server-890920002',id=144,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='765a234f3c484ca28bf25e21cff07bd1',ramdisk_id='',reservation_id='r-izgv4dki',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerTagsTestJSON-1006516127',owner_user_name='tempest-ServerTagsTestJSON-1006516127-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:41:55Z,user_data=None,user_id='05dfe700f5a141638f3ecb185261858e',uuid=e0c1dd0c-1286-4b5d-987d-0b82bdb15585,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "6443973b-c8a9-4d1f-9bf8-bfd076718040", "address": "fa:16:3e:61:18:97", "network": {"id": "75187f06-26ac-46dc-98f0-52adc57eeaeb", "bridge": "br-int", "label": "tempest-ServerTagsTestJSON-1104472458-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "765a234f3c484ca28bf25e21cff07bd1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6443973b-c8", "ovs_interfaceid": "6443973b-c8a9-4d1f-9bf8-bfd076718040", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec  6 02:42:02 np0005548731 nova_compute[232433]: 2025-12-06 07:42:02.221 232437 DEBUG nova.network.os_vif_util [None req-81bd3f10-8b29-42e7-b9d0-f08237827cff 05dfe700f5a141638f3ecb185261858e 765a234f3c484ca28bf25e21cff07bd1 - - default default] Converting VIF {"id": "6443973b-c8a9-4d1f-9bf8-bfd076718040", "address": "fa:16:3e:61:18:97", "network": {"id": "75187f06-26ac-46dc-98f0-52adc57eeaeb", "bridge": "br-int", "label": "tempest-ServerTagsTestJSON-1104472458-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "765a234f3c484ca28bf25e21cff07bd1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6443973b-c8", "ovs_interfaceid": "6443973b-c8a9-4d1f-9bf8-bfd076718040", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 02:42:02 np0005548731 nova_compute[232433]: 2025-12-06 07:42:02.221 232437 DEBUG nova.network.os_vif_util [None req-81bd3f10-8b29-42e7-b9d0-f08237827cff 05dfe700f5a141638f3ecb185261858e 765a234f3c484ca28bf25e21cff07bd1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:61:18:97,bridge_name='br-int',has_traffic_filtering=True,id=6443973b-c8a9-4d1f-9bf8-bfd076718040,network=Network(75187f06-26ac-46dc-98f0-52adc57eeaeb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6443973b-c8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 02:42:02 np0005548731 nova_compute[232433]: 2025-12-06 07:42:02.222 232437 DEBUG os_vif [None req-81bd3f10-8b29-42e7-b9d0-f08237827cff 05dfe700f5a141638f3ecb185261858e 765a234f3c484ca28bf25e21cff07bd1 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:61:18:97,bridge_name='br-int',has_traffic_filtering=True,id=6443973b-c8a9-4d1f-9bf8-bfd076718040,network=Network(75187f06-26ac-46dc-98f0-52adc57eeaeb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6443973b-c8') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec  6 02:42:02 np0005548731 nova_compute[232433]: 2025-12-06 07:42:02.222 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:42:02 np0005548731 nova_compute[232433]: 2025-12-06 07:42:02.223 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:42:02 np0005548731 nova_compute[232433]: 2025-12-06 07:42:02.223 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  6 02:42:02 np0005548731 nova_compute[232433]: 2025-12-06 07:42:02.226 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:42:02 np0005548731 nova_compute[232433]: 2025-12-06 07:42:02.227 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6443973b-c8, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:42:02 np0005548731 nova_compute[232433]: 2025-12-06 07:42:02.227 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap6443973b-c8, col_values=(('external_ids', {'iface-id': '6443973b-c8a9-4d1f-9bf8-bfd076718040', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:61:18:97', 'vm-uuid': 'e0c1dd0c-1286-4b5d-987d-0b82bdb15585'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:42:02 np0005548731 nova_compute[232433]: 2025-12-06 07:42:02.228 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:42:02 np0005548731 NetworkManager[49182]: <info>  [1765006922.2308] manager: (tap6443973b-c8): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/308)
Dec  6 02:42:02 np0005548731 nova_compute[232433]: 2025-12-06 07:42:02.231 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  6 02:42:02 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  6 02:42:02 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 02:42:02 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  6 02:42:02 np0005548731 nova_compute[232433]: 2025-12-06 07:42:02.234 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:42:02 np0005548731 nova_compute[232433]: 2025-12-06 07:42:02.236 232437 INFO os_vif [None req-81bd3f10-8b29-42e7-b9d0-f08237827cff 05dfe700f5a141638f3ecb185261858e 765a234f3c484ca28bf25e21cff07bd1 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:61:18:97,bridge_name='br-int',has_traffic_filtering=True,id=6443973b-c8a9-4d1f-9bf8-bfd076718040,network=Network(75187f06-26ac-46dc-98f0-52adc57eeaeb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6443973b-c8')#033[00m
Dec  6 02:42:02 np0005548731 nova_compute[232433]: 2025-12-06 07:42:02.287 232437 DEBUG nova.virt.libvirt.driver [None req-81bd3f10-8b29-42e7-b9d0-f08237827cff 05dfe700f5a141638f3ecb185261858e 765a234f3c484ca28bf25e21cff07bd1 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  6 02:42:02 np0005548731 nova_compute[232433]: 2025-12-06 07:42:02.288 232437 DEBUG nova.virt.libvirt.driver [None req-81bd3f10-8b29-42e7-b9d0-f08237827cff 05dfe700f5a141638f3ecb185261858e 765a234f3c484ca28bf25e21cff07bd1 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  6 02:42:02 np0005548731 nova_compute[232433]: 2025-12-06 07:42:02.288 232437 DEBUG nova.virt.libvirt.driver [None req-81bd3f10-8b29-42e7-b9d0-f08237827cff 05dfe700f5a141638f3ecb185261858e 765a234f3c484ca28bf25e21cff07bd1 - - default default] No VIF found with MAC fa:16:3e:61:18:97, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec  6 02:42:02 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:42:02 np0005548731 nova_compute[232433]: 2025-12-06 07:42:02.289 232437 INFO nova.virt.libvirt.driver [None req-81bd3f10-8b29-42e7-b9d0-f08237827cff 05dfe700f5a141638f3ecb185261858e 765a234f3c484ca28bf25e21cff07bd1 - - default default] [instance: e0c1dd0c-1286-4b5d-987d-0b82bdb15585] Using config drive#033[00m
Dec  6 02:42:02 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:42:02 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:42:02.287 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:42:02 np0005548731 nova_compute[232433]: 2025-12-06 07:42:02.311 232437 DEBUG nova.storage.rbd_utils [None req-81bd3f10-8b29-42e7-b9d0-f08237827cff 05dfe700f5a141638f3ecb185261858e 765a234f3c484ca28bf25e21cff07bd1 - - default default] rbd image e0c1dd0c-1286-4b5d-987d-0b82bdb15585_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:42:02 np0005548731 nova_compute[232433]: 2025-12-06 07:42:02.852 232437 INFO nova.virt.libvirt.driver [None req-81bd3f10-8b29-42e7-b9d0-f08237827cff 05dfe700f5a141638f3ecb185261858e 765a234f3c484ca28bf25e21cff07bd1 - - default default] [instance: e0c1dd0c-1286-4b5d-987d-0b82bdb15585] Creating config drive at /var/lib/nova/instances/e0c1dd0c-1286-4b5d-987d-0b82bdb15585/disk.config#033[00m
Dec  6 02:42:02 np0005548731 nova_compute[232433]: 2025-12-06 07:42:02.857 232437 DEBUG oslo_concurrency.processutils [None req-81bd3f10-8b29-42e7-b9d0-f08237827cff 05dfe700f5a141638f3ecb185261858e 765a234f3c484ca28bf25e21cff07bd1 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/e0c1dd0c-1286-4b5d-987d-0b82bdb15585/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpa6y1_ae2 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:42:02 np0005548731 nova_compute[232433]: 2025-12-06 07:42:02.988 232437 DEBUG oslo_concurrency.processutils [None req-81bd3f10-8b29-42e7-b9d0-f08237827cff 05dfe700f5a141638f3ecb185261858e 765a234f3c484ca28bf25e21cff07bd1 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/e0c1dd0c-1286-4b5d-987d-0b82bdb15585/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpa6y1_ae2" returned: 0 in 0.131s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:42:03 np0005548731 nova_compute[232433]: 2025-12-06 07:42:03.013 232437 DEBUG nova.storage.rbd_utils [None req-81bd3f10-8b29-42e7-b9d0-f08237827cff 05dfe700f5a141638f3ecb185261858e 765a234f3c484ca28bf25e21cff07bd1 - - default default] rbd image e0c1dd0c-1286-4b5d-987d-0b82bdb15585_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:42:03 np0005548731 nova_compute[232433]: 2025-12-06 07:42:03.015 232437 DEBUG oslo_concurrency.processutils [None req-81bd3f10-8b29-42e7-b9d0-f08237827cff 05dfe700f5a141638f3ecb185261858e 765a234f3c484ca28bf25e21cff07bd1 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/e0c1dd0c-1286-4b5d-987d-0b82bdb15585/disk.config e0c1dd0c-1286-4b5d-987d-0b82bdb15585_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:42:03 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:42:03 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:42:03 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:42:03.118 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:42:03 np0005548731 nova_compute[232433]: 2025-12-06 07:42:03.254 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:42:03 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e327 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:42:03 np0005548731 nova_compute[232433]: 2025-12-06 07:42:03.589 232437 DEBUG oslo_concurrency.processutils [None req-81bd3f10-8b29-42e7-b9d0-f08237827cff 05dfe700f5a141638f3ecb185261858e 765a234f3c484ca28bf25e21cff07bd1 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/e0c1dd0c-1286-4b5d-987d-0b82bdb15585/disk.config e0c1dd0c-1286-4b5d-987d-0b82bdb15585_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.573s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:42:03 np0005548731 nova_compute[232433]: 2025-12-06 07:42:03.589 232437 INFO nova.virt.libvirt.driver [None req-81bd3f10-8b29-42e7-b9d0-f08237827cff 05dfe700f5a141638f3ecb185261858e 765a234f3c484ca28bf25e21cff07bd1 - - default default] [instance: e0c1dd0c-1286-4b5d-987d-0b82bdb15585] Deleting local config drive /var/lib/nova/instances/e0c1dd0c-1286-4b5d-987d-0b82bdb15585/disk.config because it was imported into RBD.#033[00m
Dec  6 02:42:03 np0005548731 kernel: tap6443973b-c8: entered promiscuous mode
Dec  6 02:42:03 np0005548731 NetworkManager[49182]: <info>  [1765006923.6370] manager: (tap6443973b-c8): new Tun device (/org/freedesktop/NetworkManager/Devices/309)
Dec  6 02:42:03 np0005548731 ovn_controller[133927]: 2025-12-06T07:42:03Z|00673|binding|INFO|Claiming lport 6443973b-c8a9-4d1f-9bf8-bfd076718040 for this chassis.
Dec  6 02:42:03 np0005548731 nova_compute[232433]: 2025-12-06 07:42:03.639 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:42:03 np0005548731 ovn_controller[133927]: 2025-12-06T07:42:03Z|00674|binding|INFO|6443973b-c8a9-4d1f-9bf8-bfd076718040: Claiming fa:16:3e:61:18:97 10.100.0.10
Dec  6 02:42:03 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:42:03.649 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:61:18:97 10.100.0.10'], port_security=['fa:16:3e:61:18:97 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'e0c1dd0c-1286-4b5d-987d-0b82bdb15585', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-75187f06-26ac-46dc-98f0-52adc57eeaeb', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '765a234f3c484ca28bf25e21cff07bd1', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'f80a282f-990c-4de5-9ba6-b6c06541e055', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ef5d64cd-4e09-4ca3-8a3d-5a9d6164c6f6, chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], logical_port=6443973b-c8a9-4d1f-9bf8-bfd076718040) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 02:42:03 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:42:03.650 143965 INFO neutron.agent.ovn.metadata.agent [-] Port 6443973b-c8a9-4d1f-9bf8-bfd076718040 in datapath 75187f06-26ac-46dc-98f0-52adc57eeaeb bound to our chassis#033[00m
Dec  6 02:42:03 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:42:03.652 143965 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 75187f06-26ac-46dc-98f0-52adc57eeaeb#033[00m
Dec  6 02:42:03 np0005548731 ovn_controller[133927]: 2025-12-06T07:42:03Z|00675|binding|INFO|Setting lport 6443973b-c8a9-4d1f-9bf8-bfd076718040 ovn-installed in OVS
Dec  6 02:42:03 np0005548731 ovn_controller[133927]: 2025-12-06T07:42:03Z|00676|binding|INFO|Setting lport 6443973b-c8a9-4d1f-9bf8-bfd076718040 up in Southbound
Dec  6 02:42:03 np0005548731 nova_compute[232433]: 2025-12-06 07:42:03.659 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:42:03 np0005548731 nova_compute[232433]: 2025-12-06 07:42:03.663 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:42:03 np0005548731 systemd-udevd[296049]: Network interface NamePolicy= disabled on kernel command line.
Dec  6 02:42:03 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:42:03.665 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[ef7355da-1f16-4e0e-af30-dacf1f13dadf]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:42:03 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:42:03.665 143965 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap75187f06-21 in ovnmeta-75187f06-26ac-46dc-98f0-52adc57eeaeb namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Dec  6 02:42:03 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:42:03.667 236668 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap75187f06-20 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Dec  6 02:42:03 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:42:03.667 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[eaf0cb85-3763-4cca-9292-61b96963472b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:42:03 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:42:03.668 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[0eef09b4-2d65-40b6-8567-c8eff7aeee19]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:42:03 np0005548731 NetworkManager[49182]: <info>  [1765006923.6792] device (tap6443973b-c8): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec  6 02:42:03 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:42:03.678 144079 DEBUG oslo.privsep.daemon [-] privsep: reply[1a7a2526-5b4e-485f-b44f-9acd5a528509]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:42:03 np0005548731 NetworkManager[49182]: <info>  [1765006923.6798] device (tap6443973b-c8): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec  6 02:42:03 np0005548731 systemd-machined[195355]: New machine qemu-68-instance-00000090.
Dec  6 02:42:03 np0005548731 systemd[1]: Started Virtual Machine qemu-68-instance-00000090.
Dec  6 02:42:03 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:42:03.703 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[38c640bd-5952-4c5b-ac58-780d97b80cd0]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:42:03 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:42:03.730 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[41aaec9b-63b3-4b3d-b63e-2d7cd1c69e5d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:42:03 np0005548731 systemd-udevd[296055]: Network interface NamePolicy= disabled on kernel command line.
Dec  6 02:42:03 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:42:03.734 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[21dfe519-cf3e-47e9-968d-6dac961e1833]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:42:03 np0005548731 NetworkManager[49182]: <info>  [1765006923.7360] manager: (tap75187f06-20): new Veth device (/org/freedesktop/NetworkManager/Devices/310)
Dec  6 02:42:03 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:42:03.767 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[c52073fd-9395-42f4-9bd1-22254cc53e0f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:42:03 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:42:03.769 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[571d43a1-90f9-4fbe-bd8b-fe05b73defe9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:42:03 np0005548731 nova_compute[232433]: 2025-12-06 07:42:03.782 232437 DEBUG oslo_concurrency.lockutils [None req-0e29f838-fd75-4628-8416-c6cd093693a3 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Acquiring lock "b85968f0-ebd7-48f6-a932-c4e8da09381e" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:42:03 np0005548731 nova_compute[232433]: 2025-12-06 07:42:03.782 232437 DEBUG oslo_concurrency.lockutils [None req-0e29f838-fd75-4628-8416-c6cd093693a3 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Lock "b85968f0-ebd7-48f6-a932-c4e8da09381e" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:42:03 np0005548731 nova_compute[232433]: 2025-12-06 07:42:03.783 232437 DEBUG oslo_concurrency.lockutils [None req-0e29f838-fd75-4628-8416-c6cd093693a3 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Acquiring lock "b85968f0-ebd7-48f6-a932-c4e8da09381e-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:42:03 np0005548731 nova_compute[232433]: 2025-12-06 07:42:03.783 232437 DEBUG oslo_concurrency.lockutils [None req-0e29f838-fd75-4628-8416-c6cd093693a3 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Lock "b85968f0-ebd7-48f6-a932-c4e8da09381e-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:42:03 np0005548731 nova_compute[232433]: 2025-12-06 07:42:03.783 232437 DEBUG oslo_concurrency.lockutils [None req-0e29f838-fd75-4628-8416-c6cd093693a3 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Lock "b85968f0-ebd7-48f6-a932-c4e8da09381e-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:42:03 np0005548731 nova_compute[232433]: 2025-12-06 07:42:03.787 232437 INFO nova.compute.manager [None req-0e29f838-fd75-4628-8416-c6cd093693a3 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] [instance: b85968f0-ebd7-48f6-a932-c4e8da09381e] Terminating instance#033[00m
Dec  6 02:42:03 np0005548731 nova_compute[232433]: 2025-12-06 07:42:03.788 232437 DEBUG nova.compute.manager [None req-0e29f838-fd75-4628-8416-c6cd093693a3 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] [instance: b85968f0-ebd7-48f6-a932-c4e8da09381e] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec  6 02:42:03 np0005548731 NetworkManager[49182]: <info>  [1765006923.7921] device (tap75187f06-20): carrier: link connected
Dec  6 02:42:03 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:42:03.797 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[f8e78730-b7a5-4b11-90d8-2159f9365d47]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:42:03 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:42:03.812 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[2d24f76a-ccda-40fd-ac63-4da138525e37]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap75187f06-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1e:d3:40'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 204], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 718579, 'reachable_time': 21068, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 296084, 'error': None, 'target': 'ovnmeta-75187f06-26ac-46dc-98f0-52adc57eeaeb', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:42:03 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:42:03.826 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[15d7d600-aa06-4388-a5b5-7742b6bc2003]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe1e:d340'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 718579, 'tstamp': 718579}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 296085, 'error': None, 'target': 'ovnmeta-75187f06-26ac-46dc-98f0-52adc57eeaeb', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:42:03 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:42:03.842 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[818fdfb4-204c-4f38-8a43-88b8ed6e3d1f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap75187f06-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1e:d3:40'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 204], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 718579, 'reachable_time': 21068, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 296086, 'error': None, 'target': 'ovnmeta-75187f06-26ac-46dc-98f0-52adc57eeaeb', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:42:03 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:42:03.871 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[98e37528-90fd-4259-90ed-ed5033e225cd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:42:03 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:42:03.932 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[fda08c6d-2433-4dea-9306-6c347888ae58]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:42:03 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:42:03.934 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap75187f06-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:42:03 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:42:03.934 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  6 02:42:03 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:42:03.934 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap75187f06-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:42:04 np0005548731 NetworkManager[49182]: <info>  [1765006924.0028] manager: (tap75187f06-20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/311)
Dec  6 02:42:04 np0005548731 kernel: tap75187f06-20: entered promiscuous mode
Dec  6 02:42:04 np0005548731 nova_compute[232433]: 2025-12-06 07:42:04.002 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:42:04 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:42:04.009 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap75187f06-20, col_values=(('external_ids', {'iface-id': '6f71b0d8-197d-4804-abd7-32eef501b9b9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:42:04 np0005548731 nova_compute[232433]: 2025-12-06 07:42:04.011 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:42:04 np0005548731 ovn_controller[133927]: 2025-12-06T07:42:04Z|00677|binding|INFO|Releasing lport 6f71b0d8-197d-4804-abd7-32eef501b9b9 from this chassis (sb_readonly=0)
Dec  6 02:42:04 np0005548731 nova_compute[232433]: 2025-12-06 07:42:04.011 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:42:04 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:42:04.024 143965 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/75187f06-26ac-46dc-98f0-52adc57eeaeb.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/75187f06-26ac-46dc-98f0-52adc57eeaeb.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Dec  6 02:42:04 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:42:04.025 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[03618ea9-235d-4e9f-b701-4946b7273934]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:42:04 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:42:04.026 143965 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec  6 02:42:04 np0005548731 ovn_metadata_agent[143960]: global
Dec  6 02:42:04 np0005548731 ovn_metadata_agent[143960]:    log         /dev/log local0 debug
Dec  6 02:42:04 np0005548731 ovn_metadata_agent[143960]:    log-tag     haproxy-metadata-proxy-75187f06-26ac-46dc-98f0-52adc57eeaeb
Dec  6 02:42:04 np0005548731 ovn_metadata_agent[143960]:    user        root
Dec  6 02:42:04 np0005548731 ovn_metadata_agent[143960]:    group       root
Dec  6 02:42:04 np0005548731 ovn_metadata_agent[143960]:    maxconn     1024
Dec  6 02:42:04 np0005548731 ovn_metadata_agent[143960]:    pidfile     /var/lib/neutron/external/pids/75187f06-26ac-46dc-98f0-52adc57eeaeb.pid.haproxy
Dec  6 02:42:04 np0005548731 ovn_metadata_agent[143960]:    daemon
Dec  6 02:42:04 np0005548731 ovn_metadata_agent[143960]: 
Dec  6 02:42:04 np0005548731 ovn_metadata_agent[143960]: defaults
Dec  6 02:42:04 np0005548731 ovn_metadata_agent[143960]:    log global
Dec  6 02:42:04 np0005548731 ovn_metadata_agent[143960]:    mode http
Dec  6 02:42:04 np0005548731 ovn_metadata_agent[143960]:    option httplog
Dec  6 02:42:04 np0005548731 ovn_metadata_agent[143960]:    option dontlognull
Dec  6 02:42:04 np0005548731 ovn_metadata_agent[143960]:    option http-server-close
Dec  6 02:42:04 np0005548731 ovn_metadata_agent[143960]:    option forwardfor
Dec  6 02:42:04 np0005548731 ovn_metadata_agent[143960]:    retries                 3
Dec  6 02:42:04 np0005548731 ovn_metadata_agent[143960]:    timeout http-request    30s
Dec  6 02:42:04 np0005548731 ovn_metadata_agent[143960]:    timeout connect         30s
Dec  6 02:42:04 np0005548731 ovn_metadata_agent[143960]:    timeout client          32s
Dec  6 02:42:04 np0005548731 ovn_metadata_agent[143960]:    timeout server          32s
Dec  6 02:42:04 np0005548731 ovn_metadata_agent[143960]:    timeout http-keep-alive 30s
Dec  6 02:42:04 np0005548731 ovn_metadata_agent[143960]: 
Dec  6 02:42:04 np0005548731 ovn_metadata_agent[143960]: 
Dec  6 02:42:04 np0005548731 ovn_metadata_agent[143960]: listen listener
Dec  6 02:42:04 np0005548731 ovn_metadata_agent[143960]:    bind 169.254.169.254:80
Dec  6 02:42:04 np0005548731 ovn_metadata_agent[143960]:    server metadata /var/lib/neutron/metadata_proxy
Dec  6 02:42:04 np0005548731 ovn_metadata_agent[143960]:    http-request add-header X-OVN-Network-ID 75187f06-26ac-46dc-98f0-52adc57eeaeb
Dec  6 02:42:04 np0005548731 ovn_metadata_agent[143960]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Dec  6 02:42:04 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:42:04.026 143965 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-75187f06-26ac-46dc-98f0-52adc57eeaeb', 'env', 'PROCESS_TAG=haproxy-75187f06-26ac-46dc-98f0-52adc57eeaeb', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/75187f06-26ac-46dc-98f0-52adc57eeaeb.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Dec  6 02:42:04 np0005548731 nova_compute[232433]: 2025-12-06 07:42:04.030 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:42:04 np0005548731 nova_compute[232433]: 2025-12-06 07:42:04.121 232437 DEBUG nova.compute.manager [req-051d325d-816b-4d3a-8321-fb650dd182aa req-163fe1ce-d662-40c0-84c3-457b5c874f19 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: e0c1dd0c-1286-4b5d-987d-0b82bdb15585] Received event network-vif-plugged-6443973b-c8a9-4d1f-9bf8-bfd076718040 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:42:04 np0005548731 nova_compute[232433]: 2025-12-06 07:42:04.121 232437 DEBUG oslo_concurrency.lockutils [req-051d325d-816b-4d3a-8321-fb650dd182aa req-163fe1ce-d662-40c0-84c3-457b5c874f19 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "e0c1dd0c-1286-4b5d-987d-0b82bdb15585-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:42:04 np0005548731 nova_compute[232433]: 2025-12-06 07:42:04.121 232437 DEBUG oslo_concurrency.lockutils [req-051d325d-816b-4d3a-8321-fb650dd182aa req-163fe1ce-d662-40c0-84c3-457b5c874f19 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "e0c1dd0c-1286-4b5d-987d-0b82bdb15585-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:42:04 np0005548731 nova_compute[232433]: 2025-12-06 07:42:04.122 232437 DEBUG oslo_concurrency.lockutils [req-051d325d-816b-4d3a-8321-fb650dd182aa req-163fe1ce-d662-40c0-84c3-457b5c874f19 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "e0c1dd0c-1286-4b5d-987d-0b82bdb15585-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:42:04 np0005548731 nova_compute[232433]: 2025-12-06 07:42:04.122 232437 DEBUG nova.compute.manager [req-051d325d-816b-4d3a-8321-fb650dd182aa req-163fe1ce-d662-40c0-84c3-457b5c874f19 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: e0c1dd0c-1286-4b5d-987d-0b82bdb15585] Processing event network-vif-plugged-6443973b-c8a9-4d1f-9bf8-bfd076718040 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Dec  6 02:42:04 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:42:04 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:42:04 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:42:04.289 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:42:04 np0005548731 kernel: tapf1f563a8-90 (unregistering): left promiscuous mode
Dec  6 02:42:04 np0005548731 NetworkManager[49182]: <info>  [1765006924.3509] device (tapf1f563a8-90): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec  6 02:42:04 np0005548731 nova_compute[232433]: 2025-12-06 07:42:04.358 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:42:04 np0005548731 ovn_controller[133927]: 2025-12-06T07:42:04Z|00678|binding|INFO|Releasing lport f1f563a8-9001-419f-858a-0213c5d6607a from this chassis (sb_readonly=0)
Dec  6 02:42:04 np0005548731 ovn_controller[133927]: 2025-12-06T07:42:04Z|00679|binding|INFO|Setting lport f1f563a8-9001-419f-858a-0213c5d6607a down in Southbound
Dec  6 02:42:04 np0005548731 ovn_controller[133927]: 2025-12-06T07:42:04Z|00680|binding|INFO|Removing iface tapf1f563a8-90 ovn-installed in OVS
Dec  6 02:42:04 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e328 e328: 3 total, 3 up, 3 in
Dec  6 02:42:04 np0005548731 nova_compute[232433]: 2025-12-06 07:42:04.371 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:42:04 np0005548731 podman[296160]: 2025-12-06 07:42:04.373901152 +0000 UTC m=+0.051601359 container create 4bae804598eace45ff27deb8cf263d54a9cbdbfebc0819d5a501258b38ff3e7f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-75187f06-26ac-46dc-98f0-52adc57eeaeb, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, io.buildah.version=1.41.3)
Dec  6 02:42:04 np0005548731 nova_compute[232433]: 2025-12-06 07:42:04.373 232437 DEBUG nova.virt.driver [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Emitting event <LifecycleEvent: 1765006924.3722346, e0c1dd0c-1286-4b5d-987d-0b82bdb15585 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 02:42:04 np0005548731 nova_compute[232433]: 2025-12-06 07:42:04.374 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: e0c1dd0c-1286-4b5d-987d-0b82bdb15585] VM Started (Lifecycle Event)#033[00m
Dec  6 02:42:04 np0005548731 nova_compute[232433]: 2025-12-06 07:42:04.376 232437 DEBUG nova.compute.manager [None req-81bd3f10-8b29-42e7-b9d0-f08237827cff 05dfe700f5a141638f3ecb185261858e 765a234f3c484ca28bf25e21cff07bd1 - - default default] [instance: e0c1dd0c-1286-4b5d-987d-0b82bdb15585] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec  6 02:42:04 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:42:04.376 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:48:b4:88 10.100.0.3'], port_security=['fa:16:3e:48:b4:88 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'b85968f0-ebd7-48f6-a932-c4e8da09381e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3beede49-1cbb-425c-b1af-82f43dc57163', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b10aa03d68eb4d4799d53538521cc364', 'neutron:revision_number': '9', 'neutron:security_group_ids': 'd7c24a87-3909-4046-b7ee-0c4e77c9cc98', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.206', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f4f51045-db64-4b9b-8a34-a3c617e616e7, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], logical_port=f1f563a8-9001-419f-858a-0213c5d6607a) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 02:42:04 np0005548731 nova_compute[232433]: 2025-12-06 07:42:04.381 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:42:04 np0005548731 nova_compute[232433]: 2025-12-06 07:42:04.384 232437 DEBUG nova.virt.libvirt.driver [None req-81bd3f10-8b29-42e7-b9d0-f08237827cff 05dfe700f5a141638f3ecb185261858e 765a234f3c484ca28bf25e21cff07bd1 - - default default] [instance: e0c1dd0c-1286-4b5d-987d-0b82bdb15585] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Dec  6 02:42:04 np0005548731 nova_compute[232433]: 2025-12-06 07:42:04.387 232437 INFO nova.virt.libvirt.driver [-] [instance: e0c1dd0c-1286-4b5d-987d-0b82bdb15585] Instance spawned successfully.#033[00m
Dec  6 02:42:04 np0005548731 nova_compute[232433]: 2025-12-06 07:42:04.388 232437 DEBUG nova.virt.libvirt.driver [None req-81bd3f10-8b29-42e7-b9d0-f08237827cff 05dfe700f5a141638f3ecb185261858e 765a234f3c484ca28bf25e21cff07bd1 - - default default] [instance: e0c1dd0c-1286-4b5d-987d-0b82bdb15585] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Dec  6 02:42:04 np0005548731 systemd[1]: Started libpod-conmon-4bae804598eace45ff27deb8cf263d54a9cbdbfebc0819d5a501258b38ff3e7f.scope.
Dec  6 02:42:04 np0005548731 systemd[1]: machine-qemu\x2d67\x2dinstance\x2d00000087.scope: Deactivated successfully.
Dec  6 02:42:04 np0005548731 systemd[1]: machine-qemu\x2d67\x2dinstance\x2d00000087.scope: Consumed 15.058s CPU time.
Dec  6 02:42:04 np0005548731 systemd-machined[195355]: Machine qemu-67-instance-00000087 terminated.
Dec  6 02:42:04 np0005548731 nova_compute[232433]: 2025-12-06 07:42:04.420 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: e0c1dd0c-1286-4b5d-987d-0b82bdb15585] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:42:04 np0005548731 systemd[1]: Started libcrun container.
Dec  6 02:42:04 np0005548731 nova_compute[232433]: 2025-12-06 07:42:04.426 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: e0c1dd0c-1286-4b5d-987d-0b82bdb15585] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  6 02:42:04 np0005548731 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/65b1788e46ad867639f1329dee42d409c8ed1c39ea95c07fd10ab9747ab46be1/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec  6 02:42:04 np0005548731 nova_compute[232433]: 2025-12-06 07:42:04.430 232437 DEBUG nova.virt.libvirt.driver [None req-81bd3f10-8b29-42e7-b9d0-f08237827cff 05dfe700f5a141638f3ecb185261858e 765a234f3c484ca28bf25e21cff07bd1 - - default default] [instance: e0c1dd0c-1286-4b5d-987d-0b82bdb15585] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 02:42:04 np0005548731 nova_compute[232433]: 2025-12-06 07:42:04.430 232437 DEBUG nova.virt.libvirt.driver [None req-81bd3f10-8b29-42e7-b9d0-f08237827cff 05dfe700f5a141638f3ecb185261858e 765a234f3c484ca28bf25e21cff07bd1 - - default default] [instance: e0c1dd0c-1286-4b5d-987d-0b82bdb15585] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 02:42:04 np0005548731 nova_compute[232433]: 2025-12-06 07:42:04.431 232437 DEBUG nova.virt.libvirt.driver [None req-81bd3f10-8b29-42e7-b9d0-f08237827cff 05dfe700f5a141638f3ecb185261858e 765a234f3c484ca28bf25e21cff07bd1 - - default default] [instance: e0c1dd0c-1286-4b5d-987d-0b82bdb15585] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 02:42:04 np0005548731 nova_compute[232433]: 2025-12-06 07:42:04.431 232437 DEBUG nova.virt.libvirt.driver [None req-81bd3f10-8b29-42e7-b9d0-f08237827cff 05dfe700f5a141638f3ecb185261858e 765a234f3c484ca28bf25e21cff07bd1 - - default default] [instance: e0c1dd0c-1286-4b5d-987d-0b82bdb15585] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 02:42:04 np0005548731 nova_compute[232433]: 2025-12-06 07:42:04.432 232437 DEBUG nova.virt.libvirt.driver [None req-81bd3f10-8b29-42e7-b9d0-f08237827cff 05dfe700f5a141638f3ecb185261858e 765a234f3c484ca28bf25e21cff07bd1 - - default default] [instance: e0c1dd0c-1286-4b5d-987d-0b82bdb15585] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 02:42:04 np0005548731 nova_compute[232433]: 2025-12-06 07:42:04.432 232437 DEBUG nova.virt.libvirt.driver [None req-81bd3f10-8b29-42e7-b9d0-f08237827cff 05dfe700f5a141638f3ecb185261858e 765a234f3c484ca28bf25e21cff07bd1 - - default default] [instance: e0c1dd0c-1286-4b5d-987d-0b82bdb15585] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 02:42:04 np0005548731 podman[296160]: 2025-12-06 07:42:04.441197201 +0000 UTC m=+0.118897428 container init 4bae804598eace45ff27deb8cf263d54a9cbdbfebc0819d5a501258b38ff3e7f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-75187f06-26ac-46dc-98f0-52adc57eeaeb, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Dec  6 02:42:04 np0005548731 podman[296160]: 2025-12-06 07:42:04.348305327 +0000 UTC m=+0.026005554 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec  6 02:42:04 np0005548731 podman[296160]: 2025-12-06 07:42:04.446280094 +0000 UTC m=+0.123980301 container start 4bae804598eace45ff27deb8cf263d54a9cbdbfebc0819d5a501258b38ff3e7f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-75187f06-26ac-46dc-98f0-52adc57eeaeb, tcib_managed=true, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  6 02:42:04 np0005548731 neutron-haproxy-ovnmeta-75187f06-26ac-46dc-98f0-52adc57eeaeb[296179]: [NOTICE]   (296183) : New worker (296185) forked
Dec  6 02:42:04 np0005548731 neutron-haproxy-ovnmeta-75187f06-26ac-46dc-98f0-52adc57eeaeb[296179]: [NOTICE]   (296183) : Loading success.
Dec  6 02:42:04 np0005548731 nova_compute[232433]: 2025-12-06 07:42:04.474 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: e0c1dd0c-1286-4b5d-987d-0b82bdb15585] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  6 02:42:04 np0005548731 nova_compute[232433]: 2025-12-06 07:42:04.475 232437 DEBUG nova.virt.driver [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Emitting event <LifecycleEvent: 1765006924.3723366, e0c1dd0c-1286-4b5d-987d-0b82bdb15585 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 02:42:04 np0005548731 nova_compute[232433]: 2025-12-06 07:42:04.475 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: e0c1dd0c-1286-4b5d-987d-0b82bdb15585] VM Paused (Lifecycle Event)#033[00m
Dec  6 02:42:04 np0005548731 nova_compute[232433]: 2025-12-06 07:42:04.501 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: e0c1dd0c-1286-4b5d-987d-0b82bdb15585] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:42:04 np0005548731 nova_compute[232433]: 2025-12-06 07:42:04.505 232437 DEBUG nova.virt.driver [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Emitting event <LifecycleEvent: 1765006924.383768, e0c1dd0c-1286-4b5d-987d-0b82bdb15585 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 02:42:04 np0005548731 nova_compute[232433]: 2025-12-06 07:42:04.505 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: e0c1dd0c-1286-4b5d-987d-0b82bdb15585] VM Resumed (Lifecycle Event)#033[00m
Dec  6 02:42:04 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:42:04.509 143965 INFO neutron.agent.ovn.metadata.agent [-] Port f1f563a8-9001-419f-858a-0213c5d6607a in datapath 3beede49-1cbb-425c-b1af-82f43dc57163 unbound from our chassis#033[00m
Dec  6 02:42:04 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:42:04.510 143965 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 3beede49-1cbb-425c-b1af-82f43dc57163, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec  6 02:42:04 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:42:04.511 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[948fdadd-1605-42e7-b67c-6e7de8df4412]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:42:04 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:42:04.512 143965 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-3beede49-1cbb-425c-b1af-82f43dc57163 namespace which is not needed anymore#033[00m
Dec  6 02:42:04 np0005548731 nova_compute[232433]: 2025-12-06 07:42:04.516 232437 INFO nova.compute.manager [None req-81bd3f10-8b29-42e7-b9d0-f08237827cff 05dfe700f5a141638f3ecb185261858e 765a234f3c484ca28bf25e21cff07bd1 - - default default] [instance: e0c1dd0c-1286-4b5d-987d-0b82bdb15585] Took 8.57 seconds to spawn the instance on the hypervisor.#033[00m
Dec  6 02:42:04 np0005548731 nova_compute[232433]: 2025-12-06 07:42:04.517 232437 DEBUG nova.compute.manager [None req-81bd3f10-8b29-42e7-b9d0-f08237827cff 05dfe700f5a141638f3ecb185261858e 765a234f3c484ca28bf25e21cff07bd1 - - default default] [instance: e0c1dd0c-1286-4b5d-987d-0b82bdb15585] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:42:04 np0005548731 nova_compute[232433]: 2025-12-06 07:42:04.546 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: e0c1dd0c-1286-4b5d-987d-0b82bdb15585] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:42:04 np0005548731 nova_compute[232433]: 2025-12-06 07:42:04.550 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: e0c1dd0c-1286-4b5d-987d-0b82bdb15585] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  6 02:42:04 np0005548731 nova_compute[232433]: 2025-12-06 07:42:04.585 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: e0c1dd0c-1286-4b5d-987d-0b82bdb15585] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  6 02:42:04 np0005548731 nova_compute[232433]: 2025-12-06 07:42:04.606 232437 INFO nova.compute.manager [None req-81bd3f10-8b29-42e7-b9d0-f08237827cff 05dfe700f5a141638f3ecb185261858e 765a234f3c484ca28bf25e21cff07bd1 - - default default] [instance: e0c1dd0c-1286-4b5d-987d-0b82bdb15585] Took 9.57 seconds to build instance.#033[00m
Dec  6 02:42:04 np0005548731 nova_compute[232433]: 2025-12-06 07:42:04.611 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:42:04 np0005548731 nova_compute[232433]: 2025-12-06 07:42:04.616 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:42:04 np0005548731 nova_compute[232433]: 2025-12-06 07:42:04.617 232437 INFO nova.virt.libvirt.driver [-] [instance: b85968f0-ebd7-48f6-a932-c4e8da09381e] Instance destroyed successfully.#033[00m
Dec  6 02:42:04 np0005548731 nova_compute[232433]: 2025-12-06 07:42:04.617 232437 DEBUG nova.objects.instance [None req-0e29f838-fd75-4628-8416-c6cd093693a3 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Lazy-loading 'resources' on Instance uuid b85968f0-ebd7-48f6-a932-c4e8da09381e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:42:04 np0005548731 neutron-haproxy-ovnmeta-3beede49-1cbb-425c-b1af-82f43dc57163[295357]: [NOTICE]   (295376) : haproxy version is 2.8.14-c23fe91
Dec  6 02:42:04 np0005548731 neutron-haproxy-ovnmeta-3beede49-1cbb-425c-b1af-82f43dc57163[295357]: [NOTICE]   (295376) : path to executable is /usr/sbin/haproxy
Dec  6 02:42:04 np0005548731 neutron-haproxy-ovnmeta-3beede49-1cbb-425c-b1af-82f43dc57163[295357]: [WARNING]  (295376) : Exiting Master process...
Dec  6 02:42:04 np0005548731 neutron-haproxy-ovnmeta-3beede49-1cbb-425c-b1af-82f43dc57163[295357]: [ALERT]    (295376) : Current worker (295382) exited with code 143 (Terminated)
Dec  6 02:42:04 np0005548731 neutron-haproxy-ovnmeta-3beede49-1cbb-425c-b1af-82f43dc57163[295357]: [WARNING]  (295376) : All workers exited. Exiting... (0)
Dec  6 02:42:04 np0005548731 systemd[1]: libpod-56fad5335577b4164649aaf7d7cfb70ee54aaa548d99eb5c96b73dbdcbd73d8e.scope: Deactivated successfully.
Dec  6 02:42:04 np0005548731 nova_compute[232433]: 2025-12-06 07:42:04.644 232437 DEBUG oslo_concurrency.lockutils [None req-81bd3f10-8b29-42e7-b9d0-f08237827cff 05dfe700f5a141638f3ecb185261858e 765a234f3c484ca28bf25e21cff07bd1 - - default default] Lock "e0c1dd0c-1286-4b5d-987d-0b82bdb15585" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.688s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:42:04 np0005548731 podman[296209]: 2025-12-06 07:42:04.646180835 +0000 UTC m=+0.049336293 container died 56fad5335577b4164649aaf7d7cfb70ee54aaa548d99eb5c96b73dbdcbd73d8e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3beede49-1cbb-425c-b1af-82f43dc57163, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Dec  6 02:42:04 np0005548731 nova_compute[232433]: 2025-12-06 07:42:04.646 232437 DEBUG nova.virt.libvirt.vif [None req-0e29f838-fd75-4628-8416-c6cd093693a3 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-12-06T07:39:44Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-1033932756',display_name='tempest-ServerActionsTestOtherB-server-1033932756',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-1033932756',id=135,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNN9jQYM4kD1mTnBw0NDX39Zbdx9ux1HYR8eIQywEVZjFzFLOofd0KCZoZVTNe73or3BwcctNg+QkLYSKwQ/ud2tRwFgp+UoYWDz3YSx64mxFih1G20CdOLvEJ79lvWoOg==',key_name='tempest-keypair-1961317761',keypairs=<?>,launch_index=0,launched_at=2025-12-06T07:41:48Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='b10aa03d68eb4d4799d53538521cc364',ramdisk_id='',reservation_id='r-kbdg07ib',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherB-874907570',owner_user_name='tempest-ServerActionsTestOtherB-874907570-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T07:41:48Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='a70f6c3c5e2c402bb6fa0e0507e9b6dc',uuid=b85968f0-ebd7-48f6-a932-c4e8da09381e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "f1f563a8-9001-419f-858a-0213c5d6607a", "address": "fa:16:3e:48:b4:88", "network": {"id": "3beede49-1cbb-425c-b1af-82f43dc57163", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-619240463-network", "subnets": [{"cidr": 
"10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.206", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b10aa03d68eb4d4799d53538521cc364", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf1f563a8-90", "ovs_interfaceid": "f1f563a8-9001-419f-858a-0213c5d6607a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec  6 02:42:04 np0005548731 nova_compute[232433]: 2025-12-06 07:42:04.646 232437 DEBUG nova.network.os_vif_util [None req-0e29f838-fd75-4628-8416-c6cd093693a3 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Converting VIF {"id": "f1f563a8-9001-419f-858a-0213c5d6607a", "address": "fa:16:3e:48:b4:88", "network": {"id": "3beede49-1cbb-425c-b1af-82f43dc57163", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-619240463-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.206", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b10aa03d68eb4d4799d53538521cc364", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf1f563a8-90", "ovs_interfaceid": "f1f563a8-9001-419f-858a-0213c5d6607a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 02:42:04 np0005548731 nova_compute[232433]: 2025-12-06 07:42:04.647 232437 DEBUG nova.network.os_vif_util [None req-0e29f838-fd75-4628-8416-c6cd093693a3 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:48:b4:88,bridge_name='br-int',has_traffic_filtering=True,id=f1f563a8-9001-419f-858a-0213c5d6607a,network=Network(3beede49-1cbb-425c-b1af-82f43dc57163),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf1f563a8-90') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 02:42:04 np0005548731 nova_compute[232433]: 2025-12-06 07:42:04.647 232437 DEBUG os_vif [None req-0e29f838-fd75-4628-8416-c6cd093693a3 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:48:b4:88,bridge_name='br-int',has_traffic_filtering=True,id=f1f563a8-9001-419f-858a-0213c5d6607a,network=Network(3beede49-1cbb-425c-b1af-82f43dc57163),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf1f563a8-90') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec  6 02:42:04 np0005548731 nova_compute[232433]: 2025-12-06 07:42:04.650 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:42:04 np0005548731 nova_compute[232433]: 2025-12-06 07:42:04.650 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf1f563a8-90, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:42:04 np0005548731 nova_compute[232433]: 2025-12-06 07:42:04.652 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:42:04 np0005548731 nova_compute[232433]: 2025-12-06 07:42:04.653 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:42:04 np0005548731 nova_compute[232433]: 2025-12-06 07:42:04.655 232437 INFO os_vif [None req-0e29f838-fd75-4628-8416-c6cd093693a3 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:48:b4:88,bridge_name='br-int',has_traffic_filtering=True,id=f1f563a8-9001-419f-858a-0213c5d6607a,network=Network(3beede49-1cbb-425c-b1af-82f43dc57163),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf1f563a8-90')#033[00m
Dec  6 02:42:04 np0005548731 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-56fad5335577b4164649aaf7d7cfb70ee54aaa548d99eb5c96b73dbdcbd73d8e-userdata-shm.mount: Deactivated successfully.
Dec  6 02:42:04 np0005548731 systemd[1]: var-lib-containers-storage-overlay-1d75dec381c60fff194b8a68e5981c36de7f9d1cd9211848f432f3f966232558-merged.mount: Deactivated successfully.
Dec  6 02:42:04 np0005548731 podman[296209]: 2025-12-06 07:42:04.686324403 +0000 UTC m=+0.089479861 container cleanup 56fad5335577b4164649aaf7d7cfb70ee54aaa548d99eb5c96b73dbdcbd73d8e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3beede49-1cbb-425c-b1af-82f43dc57163, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  6 02:42:04 np0005548731 systemd[1]: libpod-conmon-56fad5335577b4164649aaf7d7cfb70ee54aaa548d99eb5c96b73dbdcbd73d8e.scope: Deactivated successfully.
Dec  6 02:42:04 np0005548731 podman[296267]: 2025-12-06 07:42:04.752639568 +0000 UTC m=+0.041584023 container remove 56fad5335577b4164649aaf7d7cfb70ee54aaa548d99eb5c96b73dbdcbd73d8e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3beede49-1cbb-425c-b1af-82f43dc57163, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  6 02:42:04 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:42:04.759 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[f0327a5a-c4f3-496a-ae0d-46ddbebbf8f0]: (4, ('Sat Dec  6 07:42:04 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-3beede49-1cbb-425c-b1af-82f43dc57163 (56fad5335577b4164649aaf7d7cfb70ee54aaa548d99eb5c96b73dbdcbd73d8e)\n56fad5335577b4164649aaf7d7cfb70ee54aaa548d99eb5c96b73dbdcbd73d8e\nSat Dec  6 07:42:04 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-3beede49-1cbb-425c-b1af-82f43dc57163 (56fad5335577b4164649aaf7d7cfb70ee54aaa548d99eb5c96b73dbdcbd73d8e)\n56fad5335577b4164649aaf7d7cfb70ee54aaa548d99eb5c96b73dbdcbd73d8e\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:42:04 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:42:04.762 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[d3548ec5-3c03-4bf8-ab02-6d74fc13866c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:42:04 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:42:04.763 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3beede49-10, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:42:04 np0005548731 nova_compute[232433]: 2025-12-06 07:42:04.765 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:42:04 np0005548731 kernel: tap3beede49-10: left promiscuous mode
Dec  6 02:42:04 np0005548731 nova_compute[232433]: 2025-12-06 07:42:04.779 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:42:04 np0005548731 nova_compute[232433]: 2025-12-06 07:42:04.780 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:42:04 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:42:04.783 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[15226a6c-c163-4026-af0b-adeb1fa84882]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:42:04 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:42:04.804 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[66529862-2c58-46f2-9cd2-a5d95a9fe40e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:42:04 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:42:04.804 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[7326ff28-04b7-4846-a2e4-f917b747a647]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:42:04 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:42:04.818 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[972d604b-0fb1-44e2-a0e7-f9f0ed843b43]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 716518, 'reachable_time': 34126, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 296285, 'error': None, 'target': 'ovnmeta-3beede49-1cbb-425c-b1af-82f43dc57163', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:42:04 np0005548731 systemd[1]: run-netns-ovnmeta\x2d3beede49\x2d1cbb\x2d425c\x2db1af\x2d82f43dc57163.mount: Deactivated successfully.
Dec  6 02:42:04 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:42:04.824 144079 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-3beede49-1cbb-425c-b1af-82f43dc57163 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Dec  6 02:42:04 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:42:04.824 144079 DEBUG oslo.privsep.daemon [-] privsep: reply[e8b28a86-bf55-4105-9f62-c8ee6122659a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:42:05 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:42:05 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:42:05 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:42:05.120 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:42:05 np0005548731 nova_compute[232433]: 2025-12-06 07:42:05.752 232437 DEBUG nova.compute.manager [req-09ff50d3-3121-4a9b-b121-392cb2b604b8 req-9c85e857-fe52-4fe6-a6f7-83e602d194f3 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: b85968f0-ebd7-48f6-a932-c4e8da09381e] Received event network-vif-unplugged-f1f563a8-9001-419f-858a-0213c5d6607a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:42:05 np0005548731 nova_compute[232433]: 2025-12-06 07:42:05.752 232437 DEBUG oslo_concurrency.lockutils [req-09ff50d3-3121-4a9b-b121-392cb2b604b8 req-9c85e857-fe52-4fe6-a6f7-83e602d194f3 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "b85968f0-ebd7-48f6-a932-c4e8da09381e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:42:05 np0005548731 nova_compute[232433]: 2025-12-06 07:42:05.753 232437 DEBUG oslo_concurrency.lockutils [req-09ff50d3-3121-4a9b-b121-392cb2b604b8 req-9c85e857-fe52-4fe6-a6f7-83e602d194f3 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "b85968f0-ebd7-48f6-a932-c4e8da09381e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:42:05 np0005548731 nova_compute[232433]: 2025-12-06 07:42:05.753 232437 DEBUG oslo_concurrency.lockutils [req-09ff50d3-3121-4a9b-b121-392cb2b604b8 req-9c85e857-fe52-4fe6-a6f7-83e602d194f3 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "b85968f0-ebd7-48f6-a932-c4e8da09381e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:42:05 np0005548731 nova_compute[232433]: 2025-12-06 07:42:05.753 232437 DEBUG nova.compute.manager [req-09ff50d3-3121-4a9b-b121-392cb2b604b8 req-9c85e857-fe52-4fe6-a6f7-83e602d194f3 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: b85968f0-ebd7-48f6-a932-c4e8da09381e] No waiting events found dispatching network-vif-unplugged-f1f563a8-9001-419f-858a-0213c5d6607a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 02:42:05 np0005548731 nova_compute[232433]: 2025-12-06 07:42:05.754 232437 DEBUG nova.compute.manager [req-09ff50d3-3121-4a9b-b121-392cb2b604b8 req-9c85e857-fe52-4fe6-a6f7-83e602d194f3 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: b85968f0-ebd7-48f6-a932-c4e8da09381e] Received event network-vif-unplugged-f1f563a8-9001-419f-858a-0213c5d6607a for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Dec  6 02:42:05 np0005548731 nova_compute[232433]: 2025-12-06 07:42:05.754 232437 DEBUG nova.compute.manager [req-09ff50d3-3121-4a9b-b121-392cb2b604b8 req-9c85e857-fe52-4fe6-a6f7-83e602d194f3 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: b85968f0-ebd7-48f6-a932-c4e8da09381e] Received event network-vif-plugged-f1f563a8-9001-419f-858a-0213c5d6607a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:42:05 np0005548731 nova_compute[232433]: 2025-12-06 07:42:05.754 232437 DEBUG oslo_concurrency.lockutils [req-09ff50d3-3121-4a9b-b121-392cb2b604b8 req-9c85e857-fe52-4fe6-a6f7-83e602d194f3 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "b85968f0-ebd7-48f6-a932-c4e8da09381e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:42:05 np0005548731 nova_compute[232433]: 2025-12-06 07:42:05.754 232437 DEBUG oslo_concurrency.lockutils [req-09ff50d3-3121-4a9b-b121-392cb2b604b8 req-9c85e857-fe52-4fe6-a6f7-83e602d194f3 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "b85968f0-ebd7-48f6-a932-c4e8da09381e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:42:05 np0005548731 nova_compute[232433]: 2025-12-06 07:42:05.755 232437 DEBUG oslo_concurrency.lockutils [req-09ff50d3-3121-4a9b-b121-392cb2b604b8 req-9c85e857-fe52-4fe6-a6f7-83e602d194f3 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "b85968f0-ebd7-48f6-a932-c4e8da09381e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:42:05 np0005548731 nova_compute[232433]: 2025-12-06 07:42:05.755 232437 DEBUG nova.compute.manager [req-09ff50d3-3121-4a9b-b121-392cb2b604b8 req-9c85e857-fe52-4fe6-a6f7-83e602d194f3 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: b85968f0-ebd7-48f6-a932-c4e8da09381e] No waiting events found dispatching network-vif-plugged-f1f563a8-9001-419f-858a-0213c5d6607a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 02:42:05 np0005548731 nova_compute[232433]: 2025-12-06 07:42:05.755 232437 WARNING nova.compute.manager [req-09ff50d3-3121-4a9b-b121-392cb2b604b8 req-9c85e857-fe52-4fe6-a6f7-83e602d194f3 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: b85968f0-ebd7-48f6-a932-c4e8da09381e] Received unexpected event network-vif-plugged-f1f563a8-9001-419f-858a-0213c5d6607a for instance with vm_state active and task_state deleting.#033[00m
Dec  6 02:42:05 np0005548731 podman[296286]: 2025-12-06 07:42:05.901380666 +0000 UTC m=+0.058782393 container health_status 26c2ce9bdb83c6db5ccad7f5b2a7cb4e9354ab0235ad2f942ea42c8feda2142b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible)
Dec  6 02:42:05 np0005548731 podman[296288]: 2025-12-06 07:42:05.919673152 +0000 UTC m=+0.075316116 container health_status ae4fd08cdec31d6ae2a6b91ffff7deb6ca5a8375cb52ee4b685cc6f25796824e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=multipathd)
Dec  6 02:42:05 np0005548731 podman[296287]: 2025-12-06 07:42:05.933445517 +0000 UTC m=+0.090876195 container health_status a118c3becf56cfbcae208065771ebf10a65162bb7b96389971293e534823fb78 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true)
Dec  6 02:42:06 np0005548731 nova_compute[232433]: 2025-12-06 07:42:06.202 232437 DEBUG nova.compute.manager [req-eefd372c-8be8-48b5-a505-99cff427f47c req-dfe55a3b-dfdc-4251-b1b0-35d37db8f021 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: e0c1dd0c-1286-4b5d-987d-0b82bdb15585] Received event network-vif-plugged-6443973b-c8a9-4d1f-9bf8-bfd076718040 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:42:06 np0005548731 nova_compute[232433]: 2025-12-06 07:42:06.202 232437 DEBUG oslo_concurrency.lockutils [req-eefd372c-8be8-48b5-a505-99cff427f47c req-dfe55a3b-dfdc-4251-b1b0-35d37db8f021 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "e0c1dd0c-1286-4b5d-987d-0b82bdb15585-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:42:06 np0005548731 nova_compute[232433]: 2025-12-06 07:42:06.202 232437 DEBUG oslo_concurrency.lockutils [req-eefd372c-8be8-48b5-a505-99cff427f47c req-dfe55a3b-dfdc-4251-b1b0-35d37db8f021 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "e0c1dd0c-1286-4b5d-987d-0b82bdb15585-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:42:06 np0005548731 nova_compute[232433]: 2025-12-06 07:42:06.202 232437 DEBUG oslo_concurrency.lockutils [req-eefd372c-8be8-48b5-a505-99cff427f47c req-dfe55a3b-dfdc-4251-b1b0-35d37db8f021 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "e0c1dd0c-1286-4b5d-987d-0b82bdb15585-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:42:06 np0005548731 nova_compute[232433]: 2025-12-06 07:42:06.203 232437 DEBUG nova.compute.manager [req-eefd372c-8be8-48b5-a505-99cff427f47c req-dfe55a3b-dfdc-4251-b1b0-35d37db8f021 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: e0c1dd0c-1286-4b5d-987d-0b82bdb15585] No waiting events found dispatching network-vif-plugged-6443973b-c8a9-4d1f-9bf8-bfd076718040 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 02:42:06 np0005548731 nova_compute[232433]: 2025-12-06 07:42:06.203 232437 WARNING nova.compute.manager [req-eefd372c-8be8-48b5-a505-99cff427f47c req-dfe55a3b-dfdc-4251-b1b0-35d37db8f021 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: e0c1dd0c-1286-4b5d-987d-0b82bdb15585] Received unexpected event network-vif-plugged-6443973b-c8a9-4d1f-9bf8-bfd076718040 for instance with vm_state active and task_state None.#033[00m
Dec  6 02:42:06 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:42:06 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:42:06 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:42:06.291 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:42:07 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:42:07 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:42:07 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:42:07.122 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:42:08 np0005548731 nova_compute[232433]: 2025-12-06 07:42:08.256 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:42:08 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:42:08 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:42:08 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:42:08.293 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:42:08 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e328 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:42:08 np0005548731 nova_compute[232433]: 2025-12-06 07:42:08.732 232437 DEBUG oslo_concurrency.lockutils [None req-680355b1-1e54-4f98-a819-4d6c1e80a143 05dfe700f5a141638f3ecb185261858e 765a234f3c484ca28bf25e21cff07bd1 - - default default] Acquiring lock "e0c1dd0c-1286-4b5d-987d-0b82bdb15585" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:42:08 np0005548731 nova_compute[232433]: 2025-12-06 07:42:08.733 232437 DEBUG oslo_concurrency.lockutils [None req-680355b1-1e54-4f98-a819-4d6c1e80a143 05dfe700f5a141638f3ecb185261858e 765a234f3c484ca28bf25e21cff07bd1 - - default default] Lock "e0c1dd0c-1286-4b5d-987d-0b82bdb15585" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:42:08 np0005548731 nova_compute[232433]: 2025-12-06 07:42:08.733 232437 DEBUG oslo_concurrency.lockutils [None req-680355b1-1e54-4f98-a819-4d6c1e80a143 05dfe700f5a141638f3ecb185261858e 765a234f3c484ca28bf25e21cff07bd1 - - default default] Acquiring lock "e0c1dd0c-1286-4b5d-987d-0b82bdb15585-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:42:08 np0005548731 nova_compute[232433]: 2025-12-06 07:42:08.734 232437 DEBUG oslo_concurrency.lockutils [None req-680355b1-1e54-4f98-a819-4d6c1e80a143 05dfe700f5a141638f3ecb185261858e 765a234f3c484ca28bf25e21cff07bd1 - - default default] Lock "e0c1dd0c-1286-4b5d-987d-0b82bdb15585-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:42:08 np0005548731 nova_compute[232433]: 2025-12-06 07:42:08.735 232437 DEBUG oslo_concurrency.lockutils [None req-680355b1-1e54-4f98-a819-4d6c1e80a143 05dfe700f5a141638f3ecb185261858e 765a234f3c484ca28bf25e21cff07bd1 - - default default] Lock "e0c1dd0c-1286-4b5d-987d-0b82bdb15585-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:42:08 np0005548731 nova_compute[232433]: 2025-12-06 07:42:08.736 232437 INFO nova.compute.manager [None req-680355b1-1e54-4f98-a819-4d6c1e80a143 05dfe700f5a141638f3ecb185261858e 765a234f3c484ca28bf25e21cff07bd1 - - default default] [instance: e0c1dd0c-1286-4b5d-987d-0b82bdb15585] Terminating instance#033[00m
Dec  6 02:42:08 np0005548731 nova_compute[232433]: 2025-12-06 07:42:08.737 232437 DEBUG nova.compute.manager [None req-680355b1-1e54-4f98-a819-4d6c1e80a143 05dfe700f5a141638f3ecb185261858e 765a234f3c484ca28bf25e21cff07bd1 - - default default] [instance: e0c1dd0c-1286-4b5d-987d-0b82bdb15585] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec  6 02:42:08 np0005548731 kernel: tap6443973b-c8 (unregistering): left promiscuous mode
Dec  6 02:42:08 np0005548731 NetworkManager[49182]: <info>  [1765006928.7728] device (tap6443973b-c8): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec  6 02:42:08 np0005548731 ovn_controller[133927]: 2025-12-06T07:42:08Z|00681|binding|INFO|Releasing lport 6443973b-c8a9-4d1f-9bf8-bfd076718040 from this chassis (sb_readonly=0)
Dec  6 02:42:08 np0005548731 ovn_controller[133927]: 2025-12-06T07:42:08Z|00682|binding|INFO|Setting lport 6443973b-c8a9-4d1f-9bf8-bfd076718040 down in Southbound
Dec  6 02:42:08 np0005548731 nova_compute[232433]: 2025-12-06 07:42:08.780 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:42:08 np0005548731 ovn_controller[133927]: 2025-12-06T07:42:08Z|00683|binding|INFO|Removing iface tap6443973b-c8 ovn-installed in OVS
Dec  6 02:42:08 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:42:08.795 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:61:18:97 10.100.0.10'], port_security=['fa:16:3e:61:18:97 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'e0c1dd0c-1286-4b5d-987d-0b82bdb15585', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-75187f06-26ac-46dc-98f0-52adc57eeaeb', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '765a234f3c484ca28bf25e21cff07bd1', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'f80a282f-990c-4de5-9ba6-b6c06541e055', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ef5d64cd-4e09-4ca3-8a3d-5a9d6164c6f6, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], logical_port=6443973b-c8a9-4d1f-9bf8-bfd076718040) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 02:42:08 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:42:08.797 143965 INFO neutron.agent.ovn.metadata.agent [-] Port 6443973b-c8a9-4d1f-9bf8-bfd076718040 in datapath 75187f06-26ac-46dc-98f0-52adc57eeaeb unbound from our chassis#033[00m
Dec  6 02:42:08 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:42:08.799 143965 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 75187f06-26ac-46dc-98f0-52adc57eeaeb, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec  6 02:42:08 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:42:08.800 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[d734b010-3638-4c34-a15e-add3070432fa]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:42:08 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:42:08.800 143965 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-75187f06-26ac-46dc-98f0-52adc57eeaeb namespace which is not needed anymore#033[00m
Dec  6 02:42:08 np0005548731 nova_compute[232433]: 2025-12-06 07:42:08.802 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:42:08 np0005548731 systemd[1]: machine-qemu\x2d68\x2dinstance\x2d00000090.scope: Deactivated successfully.
Dec  6 02:42:08 np0005548731 systemd[1]: machine-qemu\x2d68\x2dinstance\x2d00000090.scope: Consumed 4.987s CPU time.
Dec  6 02:42:08 np0005548731 systemd-machined[195355]: Machine qemu-68-instance-00000090 terminated.
Dec  6 02:42:08 np0005548731 neutron-haproxy-ovnmeta-75187f06-26ac-46dc-98f0-52adc57eeaeb[296179]: [NOTICE]   (296183) : haproxy version is 2.8.14-c23fe91
Dec  6 02:42:08 np0005548731 neutron-haproxy-ovnmeta-75187f06-26ac-46dc-98f0-52adc57eeaeb[296179]: [NOTICE]   (296183) : path to executable is /usr/sbin/haproxy
Dec  6 02:42:08 np0005548731 neutron-haproxy-ovnmeta-75187f06-26ac-46dc-98f0-52adc57eeaeb[296179]: [WARNING]  (296183) : Exiting Master process...
Dec  6 02:42:08 np0005548731 neutron-haproxy-ovnmeta-75187f06-26ac-46dc-98f0-52adc57eeaeb[296179]: [ALERT]    (296183) : Current worker (296185) exited with code 143 (Terminated)
Dec  6 02:42:08 np0005548731 neutron-haproxy-ovnmeta-75187f06-26ac-46dc-98f0-52adc57eeaeb[296179]: [WARNING]  (296183) : All workers exited. Exiting... (0)
Dec  6 02:42:08 np0005548731 systemd[1]: libpod-4bae804598eace45ff27deb8cf263d54a9cbdbfebc0819d5a501258b38ff3e7f.scope: Deactivated successfully.
Dec  6 02:42:08 np0005548731 podman[296372]: 2025-12-06 07:42:08.921824034 +0000 UTC m=+0.041871801 container died 4bae804598eace45ff27deb8cf263d54a9cbdbfebc0819d5a501258b38ff3e7f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-75187f06-26ac-46dc-98f0-52adc57eeaeb, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  6 02:42:08 np0005548731 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-4bae804598eace45ff27deb8cf263d54a9cbdbfebc0819d5a501258b38ff3e7f-userdata-shm.mount: Deactivated successfully.
Dec  6 02:42:08 np0005548731 systemd[1]: var-lib-containers-storage-overlay-65b1788e46ad867639f1329dee42d409c8ed1c39ea95c07fd10ab9747ab46be1-merged.mount: Deactivated successfully.
Dec  6 02:42:08 np0005548731 podman[296372]: 2025-12-06 07:42:08.953767822 +0000 UTC m=+0.073815579 container cleanup 4bae804598eace45ff27deb8cf263d54a9cbdbfebc0819d5a501258b38ff3e7f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-75187f06-26ac-46dc-98f0-52adc57eeaeb, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  6 02:42:08 np0005548731 nova_compute[232433]: 2025-12-06 07:42:08.954 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:42:08 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  6 02:42:08 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3429869309' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  6 02:42:08 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  6 02:42:08 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3429869309' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  6 02:42:08 np0005548731 nova_compute[232433]: 2025-12-06 07:42:08.959 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:42:08 np0005548731 systemd[1]: libpod-conmon-4bae804598eace45ff27deb8cf263d54a9cbdbfebc0819d5a501258b38ff3e7f.scope: Deactivated successfully.
Dec  6 02:42:08 np0005548731 nova_compute[232433]: 2025-12-06 07:42:08.973 232437 INFO nova.virt.libvirt.driver [-] [instance: e0c1dd0c-1286-4b5d-987d-0b82bdb15585] Instance destroyed successfully.#033[00m
Dec  6 02:42:08 np0005548731 nova_compute[232433]: 2025-12-06 07:42:08.973 232437 DEBUG nova.objects.instance [None req-680355b1-1e54-4f98-a819-4d6c1e80a143 05dfe700f5a141638f3ecb185261858e 765a234f3c484ca28bf25e21cff07bd1 - - default default] Lazy-loading 'resources' on Instance uuid e0c1dd0c-1286-4b5d-987d-0b82bdb15585 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:42:08 np0005548731 nova_compute[232433]: 2025-12-06 07:42:08.987 232437 DEBUG nova.virt.libvirt.vif [None req-680355b1-1e54-4f98-a819-4d6c1e80a143 05dfe700f5a141638f3ecb185261858e 765a234f3c484ca28bf25e21cff07bd1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T07:41:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-ServerTagsTestJSON-server-890920002',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-servertagstestjson-server-890920002',id=144,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-06T07:42:04Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='765a234f3c484ca28bf25e21cff07bd1',ramdisk_id='',reservation_id='r-izgv4dki',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerTagsTestJSON-1006516127',owner_user_name='tempest-ServerTagsTestJSON-1006516127-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T07:42:04Z,user_data=None,user_id='05dfe700f5a141638f3ecb185261858e',uuid=e0c1dd0c-1286-4b5d-987d-0b82bdb15585,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "6443973b-c8a9-4d1f-9bf8-bfd076718040", "address": "fa:16:3e:61:18:97", "network": {"id": "75187f06-26ac-46dc-98f0-52adc57eeaeb", "bridge": "br-int", "label": "tempest-ServerTagsTestJSON-1104472458-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "765a234f3c484ca28bf25e21cff07bd1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6443973b-c8", "ovs_interfaceid": "6443973b-c8a9-4d1f-9bf8-bfd076718040", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec  6 02:42:08 np0005548731 nova_compute[232433]: 2025-12-06 07:42:08.988 232437 DEBUG nova.network.os_vif_util [None req-680355b1-1e54-4f98-a819-4d6c1e80a143 05dfe700f5a141638f3ecb185261858e 765a234f3c484ca28bf25e21cff07bd1 - - default default] Converting VIF {"id": "6443973b-c8a9-4d1f-9bf8-bfd076718040", "address": "fa:16:3e:61:18:97", "network": {"id": "75187f06-26ac-46dc-98f0-52adc57eeaeb", "bridge": "br-int", "label": "tempest-ServerTagsTestJSON-1104472458-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "765a234f3c484ca28bf25e21cff07bd1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6443973b-c8", "ovs_interfaceid": "6443973b-c8a9-4d1f-9bf8-bfd076718040", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 02:42:08 np0005548731 nova_compute[232433]: 2025-12-06 07:42:08.989 232437 DEBUG nova.network.os_vif_util [None req-680355b1-1e54-4f98-a819-4d6c1e80a143 05dfe700f5a141638f3ecb185261858e 765a234f3c484ca28bf25e21cff07bd1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:61:18:97,bridge_name='br-int',has_traffic_filtering=True,id=6443973b-c8a9-4d1f-9bf8-bfd076718040,network=Network(75187f06-26ac-46dc-98f0-52adc57eeaeb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6443973b-c8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 02:42:08 np0005548731 nova_compute[232433]: 2025-12-06 07:42:08.989 232437 DEBUG os_vif [None req-680355b1-1e54-4f98-a819-4d6c1e80a143 05dfe700f5a141638f3ecb185261858e 765a234f3c484ca28bf25e21cff07bd1 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:61:18:97,bridge_name='br-int',has_traffic_filtering=True,id=6443973b-c8a9-4d1f-9bf8-bfd076718040,network=Network(75187f06-26ac-46dc-98f0-52adc57eeaeb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6443973b-c8') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec  6 02:42:08 np0005548731 nova_compute[232433]: 2025-12-06 07:42:08.990 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:42:08 np0005548731 nova_compute[232433]: 2025-12-06 07:42:08.990 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6443973b-c8, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:42:08 np0005548731 nova_compute[232433]: 2025-12-06 07:42:08.993 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  6 02:42:08 np0005548731 nova_compute[232433]: 2025-12-06 07:42:08.995 232437 INFO os_vif [None req-680355b1-1e54-4f98-a819-4d6c1e80a143 05dfe700f5a141638f3ecb185261858e 765a234f3c484ca28bf25e21cff07bd1 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:61:18:97,bridge_name='br-int',has_traffic_filtering=True,id=6443973b-c8a9-4d1f-9bf8-bfd076718040,network=Network(75187f06-26ac-46dc-98f0-52adc57eeaeb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6443973b-c8')#033[00m
Dec  6 02:42:09 np0005548731 podman[296411]: 2025-12-06 07:42:09.019057872 +0000 UTC m=+0.042275820 container remove 4bae804598eace45ff27deb8cf263d54a9cbdbfebc0819d5a501258b38ff3e7f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-75187f06-26ac-46dc-98f0-52adc57eeaeb, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec  6 02:42:09 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:42:09.024 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[0df738c5-54e3-44c0-b3db-d1509b6d5b11]: (4, ('Sat Dec  6 07:42:08 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-75187f06-26ac-46dc-98f0-52adc57eeaeb (4bae804598eace45ff27deb8cf263d54a9cbdbfebc0819d5a501258b38ff3e7f)\n4bae804598eace45ff27deb8cf263d54a9cbdbfebc0819d5a501258b38ff3e7f\nSat Dec  6 07:42:08 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-75187f06-26ac-46dc-98f0-52adc57eeaeb (4bae804598eace45ff27deb8cf263d54a9cbdbfebc0819d5a501258b38ff3e7f)\n4bae804598eace45ff27deb8cf263d54a9cbdbfebc0819d5a501258b38ff3e7f\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:42:09 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:42:09.026 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[3ac85f94-166d-4f8b-b8cc-3727e407b52d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:42:09 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:42:09.026 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap75187f06-20, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:42:09 np0005548731 nova_compute[232433]: 2025-12-06 07:42:09.028 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:42:09 np0005548731 kernel: tap75187f06-20: left promiscuous mode
Dec  6 02:42:09 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:42:09.031 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[72018553-f9ab-40a8-9666-3e715ca0c3ef]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:42:09 np0005548731 nova_compute[232433]: 2025-12-06 07:42:09.044 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:42:09 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:42:09.047 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[c0098894-449c-49cc-a7dc-571f4c080bd9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:42:09 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:42:09.048 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[3b5cc919-1448-47c5-b801-307a51332938]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:42:09 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:42:09.062 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[4f318580-b1d4-4f81-a2c7-32c9f6c87cce]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 718573, 'reachable_time': 37006, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 296442, 'error': None, 'target': 'ovnmeta-75187f06-26ac-46dc-98f0-52adc57eeaeb', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:42:09 np0005548731 systemd[1]: run-netns-ovnmeta\x2d75187f06\x2d26ac\x2d46dc\x2d98f0\x2d52adc57eeaeb.mount: Deactivated successfully.
Dec  6 02:42:09 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:42:09.065 144079 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-75187f06-26ac-46dc-98f0-52adc57eeaeb deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Dec  6 02:42:09 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:42:09.065 144079 DEBUG oslo.privsep.daemon [-] privsep: reply[a6918451-8c61-45f5-9fb9-0ea8f235bd00]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:42:09 np0005548731 nova_compute[232433]: 2025-12-06 07:42:09.089 232437 DEBUG nova.compute.manager [req-b27e08ad-323a-4900-a3fa-4f6a19b9b9da req-f1f01318-add4-46ff-9d04-8898d2c10fd1 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: e0c1dd0c-1286-4b5d-987d-0b82bdb15585] Received event network-vif-unplugged-6443973b-c8a9-4d1f-9bf8-bfd076718040 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:42:09 np0005548731 nova_compute[232433]: 2025-12-06 07:42:09.090 232437 DEBUG oslo_concurrency.lockutils [req-b27e08ad-323a-4900-a3fa-4f6a19b9b9da req-f1f01318-add4-46ff-9d04-8898d2c10fd1 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "e0c1dd0c-1286-4b5d-987d-0b82bdb15585-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:42:09 np0005548731 nova_compute[232433]: 2025-12-06 07:42:09.090 232437 DEBUG oslo_concurrency.lockutils [req-b27e08ad-323a-4900-a3fa-4f6a19b9b9da req-f1f01318-add4-46ff-9d04-8898d2c10fd1 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "e0c1dd0c-1286-4b5d-987d-0b82bdb15585-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:42:09 np0005548731 nova_compute[232433]: 2025-12-06 07:42:09.090 232437 DEBUG oslo_concurrency.lockutils [req-b27e08ad-323a-4900-a3fa-4f6a19b9b9da req-f1f01318-add4-46ff-9d04-8898d2c10fd1 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "e0c1dd0c-1286-4b5d-987d-0b82bdb15585-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:42:09 np0005548731 nova_compute[232433]: 2025-12-06 07:42:09.090 232437 DEBUG nova.compute.manager [req-b27e08ad-323a-4900-a3fa-4f6a19b9b9da req-f1f01318-add4-46ff-9d04-8898d2c10fd1 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: e0c1dd0c-1286-4b5d-987d-0b82bdb15585] No waiting events found dispatching network-vif-unplugged-6443973b-c8a9-4d1f-9bf8-bfd076718040 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 02:42:09 np0005548731 nova_compute[232433]: 2025-12-06 07:42:09.091 232437 DEBUG nova.compute.manager [req-b27e08ad-323a-4900-a3fa-4f6a19b9b9da req-f1f01318-add4-46ff-9d04-8898d2c10fd1 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: e0c1dd0c-1286-4b5d-987d-0b82bdb15585] Received event network-vif-unplugged-6443973b-c8a9-4d1f-9bf8-bfd076718040 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Dec  6 02:42:09 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:42:09 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:42:09 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:42:09.124 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:42:10 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:42:10 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 02:42:10 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:42:10.295 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 02:42:10 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 02:42:10 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 02:42:11 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:42:11 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:42:11 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:42:11.126 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:42:11 np0005548731 nova_compute[232433]: 2025-12-06 07:42:11.197 232437 DEBUG nova.compute.manager [req-49d5ec02-ea1c-4b0c-befc-142f7b9eecf9 req-ac64a033-d10e-4260-8cca-8ad0961ae52f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: e0c1dd0c-1286-4b5d-987d-0b82bdb15585] Received event network-vif-plugged-6443973b-c8a9-4d1f-9bf8-bfd076718040 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:42:11 np0005548731 nova_compute[232433]: 2025-12-06 07:42:11.198 232437 DEBUG oslo_concurrency.lockutils [req-49d5ec02-ea1c-4b0c-befc-142f7b9eecf9 req-ac64a033-d10e-4260-8cca-8ad0961ae52f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "e0c1dd0c-1286-4b5d-987d-0b82bdb15585-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:42:11 np0005548731 nova_compute[232433]: 2025-12-06 07:42:11.198 232437 DEBUG oslo_concurrency.lockutils [req-49d5ec02-ea1c-4b0c-befc-142f7b9eecf9 req-ac64a033-d10e-4260-8cca-8ad0961ae52f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "e0c1dd0c-1286-4b5d-987d-0b82bdb15585-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:42:11 np0005548731 nova_compute[232433]: 2025-12-06 07:42:11.198 232437 DEBUG oslo_concurrency.lockutils [req-49d5ec02-ea1c-4b0c-befc-142f7b9eecf9 req-ac64a033-d10e-4260-8cca-8ad0961ae52f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "e0c1dd0c-1286-4b5d-987d-0b82bdb15585-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:42:11 np0005548731 nova_compute[232433]: 2025-12-06 07:42:11.198 232437 DEBUG nova.compute.manager [req-49d5ec02-ea1c-4b0c-befc-142f7b9eecf9 req-ac64a033-d10e-4260-8cca-8ad0961ae52f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: e0c1dd0c-1286-4b5d-987d-0b82bdb15585] No waiting events found dispatching network-vif-plugged-6443973b-c8a9-4d1f-9bf8-bfd076718040 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 02:42:11 np0005548731 nova_compute[232433]: 2025-12-06 07:42:11.198 232437 WARNING nova.compute.manager [req-49d5ec02-ea1c-4b0c-befc-142f7b9eecf9 req-ac64a033-d10e-4260-8cca-8ad0961ae52f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: e0c1dd0c-1286-4b5d-987d-0b82bdb15585] Received unexpected event network-vif-plugged-6443973b-c8a9-4d1f-9bf8-bfd076718040 for instance with vm_state active and task_state deleting.#033[00m
Dec  6 02:42:12 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:42:12 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:42:12 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:42:12.297 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:42:13 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:42:13 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:42:13 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:42:13.128 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:42:13 np0005548731 nova_compute[232433]: 2025-12-06 07:42:13.258 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:42:13 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e328 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:42:13 np0005548731 nova_compute[232433]: 2025-12-06 07:42:13.993 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:42:14 np0005548731 nova_compute[232433]: 2025-12-06 07:42:14.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:42:14 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:42:14 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.007000166s ======
Dec  6 02:42:14 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:42:14.300 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.007000166s
Dec  6 02:42:15 np0005548731 nova_compute[232433]: 2025-12-06 07:42:15.100 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:42:15 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:42:15 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:42:15 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:42:15.131 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:42:16 np0005548731 nova_compute[232433]: 2025-12-06 07:42:16.103 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:42:16 np0005548731 nova_compute[232433]: 2025-12-06 07:42:16.104 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  6 02:42:16 np0005548731 nova_compute[232433]: 2025-12-06 07:42:16.104 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  6 02:42:16 np0005548731 nova_compute[232433]: 2025-12-06 07:42:16.126 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] [instance: e0c1dd0c-1286-4b5d-987d-0b82bdb15585] Skipping network cache update for instance because it is being deleted. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9875#033[00m
Dec  6 02:42:16 np0005548731 nova_compute[232433]: 2025-12-06 07:42:16.126 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] [instance: b85968f0-ebd7-48f6-a932-c4e8da09381e] Skipping network cache update for instance because it is being deleted. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9875#033[00m
Dec  6 02:42:16 np0005548731 nova_compute[232433]: 2025-12-06 07:42:16.126 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Dec  6 02:42:16 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:42:16 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:42:16 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:42:16.309 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:42:17 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:42:17 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:42:17 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:42:17.134 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:42:17 np0005548731 nova_compute[232433]: 2025-12-06 07:42:17.737 232437 INFO nova.virt.libvirt.driver [None req-0e29f838-fd75-4628-8416-c6cd093693a3 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] [instance: b85968f0-ebd7-48f6-a932-c4e8da09381e] Deleting instance files /var/lib/nova/instances/b85968f0-ebd7-48f6-a932-c4e8da09381e_del#033[00m
Dec  6 02:42:17 np0005548731 nova_compute[232433]: 2025-12-06 07:42:17.738 232437 INFO nova.virt.libvirt.driver [None req-0e29f838-fd75-4628-8416-c6cd093693a3 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] [instance: b85968f0-ebd7-48f6-a932-c4e8da09381e] Deletion of /var/lib/nova/instances/b85968f0-ebd7-48f6-a932-c4e8da09381e_del complete#033[00m
Dec  6 02:42:17 np0005548731 nova_compute[232433]: 2025-12-06 07:42:17.786 232437 INFO nova.compute.manager [None req-0e29f838-fd75-4628-8416-c6cd093693a3 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] [instance: b85968f0-ebd7-48f6-a932-c4e8da09381e] Took 14.00 seconds to destroy the instance on the hypervisor.#033[00m
Dec  6 02:42:17 np0005548731 nova_compute[232433]: 2025-12-06 07:42:17.786 232437 DEBUG oslo.service.loopingcall [None req-0e29f838-fd75-4628-8416-c6cd093693a3 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Dec  6 02:42:17 np0005548731 nova_compute[232433]: 2025-12-06 07:42:17.787 232437 DEBUG nova.compute.manager [-] [instance: b85968f0-ebd7-48f6-a932-c4e8da09381e] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Dec  6 02:42:17 np0005548731 nova_compute[232433]: 2025-12-06 07:42:17.787 232437 DEBUG nova.network.neutron [-] [instance: b85968f0-ebd7-48f6-a932-c4e8da09381e] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Dec  6 02:42:18 np0005548731 nova_compute[232433]: 2025-12-06 07:42:18.260 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:42:18 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:42:18 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:42:18 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:42:18.311 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:42:18 np0005548731 nova_compute[232433]: 2025-12-06 07:42:18.333 232437 INFO nova.virt.libvirt.driver [None req-680355b1-1e54-4f98-a819-4d6c1e80a143 05dfe700f5a141638f3ecb185261858e 765a234f3c484ca28bf25e21cff07bd1 - - default default] [instance: e0c1dd0c-1286-4b5d-987d-0b82bdb15585] Deleting instance files /var/lib/nova/instances/e0c1dd0c-1286-4b5d-987d-0b82bdb15585_del#033[00m
Dec  6 02:42:18 np0005548731 nova_compute[232433]: 2025-12-06 07:42:18.333 232437 INFO nova.virt.libvirt.driver [None req-680355b1-1e54-4f98-a819-4d6c1e80a143 05dfe700f5a141638f3ecb185261858e 765a234f3c484ca28bf25e21cff07bd1 - - default default] [instance: e0c1dd0c-1286-4b5d-987d-0b82bdb15585] Deletion of /var/lib/nova/instances/e0c1dd0c-1286-4b5d-987d-0b82bdb15585_del complete#033[00m
Dec  6 02:42:18 np0005548731 nova_compute[232433]: 2025-12-06 07:42:18.411 232437 INFO nova.compute.manager [None req-680355b1-1e54-4f98-a819-4d6c1e80a143 05dfe700f5a141638f3ecb185261858e 765a234f3c484ca28bf25e21cff07bd1 - - default default] [instance: e0c1dd0c-1286-4b5d-987d-0b82bdb15585] Took 9.67 seconds to destroy the instance on the hypervisor.#033[00m
Dec  6 02:42:18 np0005548731 nova_compute[232433]: 2025-12-06 07:42:18.411 232437 DEBUG oslo.service.loopingcall [None req-680355b1-1e54-4f98-a819-4d6c1e80a143 05dfe700f5a141638f3ecb185261858e 765a234f3c484ca28bf25e21cff07bd1 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Dec  6 02:42:18 np0005548731 nova_compute[232433]: 2025-12-06 07:42:18.412 232437 DEBUG nova.compute.manager [-] [instance: e0c1dd0c-1286-4b5d-987d-0b82bdb15585] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Dec  6 02:42:18 np0005548731 nova_compute[232433]: 2025-12-06 07:42:18.412 232437 DEBUG nova.network.neutron [-] [instance: e0c1dd0c-1286-4b5d-987d-0b82bdb15585] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Dec  6 02:42:18 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e328 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:42:19 np0005548731 nova_compute[232433]: 2025-12-06 07:42:19.025 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:42:19 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  6 02:42:19 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4069215907' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  6 02:42:19 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:42:19 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:42:19 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:42:19.136 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:42:19 np0005548731 nova_compute[232433]: 2025-12-06 07:42:19.493 232437 DEBUG nova.network.neutron [-] [instance: b85968f0-ebd7-48f6-a932-c4e8da09381e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 02:42:19 np0005548731 nova_compute[232433]: 2025-12-06 07:42:19.520 232437 INFO nova.compute.manager [-] [instance: b85968f0-ebd7-48f6-a932-c4e8da09381e] Took 1.73 seconds to deallocate network for instance.#033[00m
Dec  6 02:42:19 np0005548731 nova_compute[232433]: 2025-12-06 07:42:19.573 232437 DEBUG oslo_concurrency.lockutils [None req-0e29f838-fd75-4628-8416-c6cd093693a3 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:42:19 np0005548731 nova_compute[232433]: 2025-12-06 07:42:19.574 232437 DEBUG oslo_concurrency.lockutils [None req-0e29f838-fd75-4628-8416-c6cd093693a3 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:42:19 np0005548731 nova_compute[232433]: 2025-12-06 07:42:19.587 232437 DEBUG nova.compute.manager [req-33e46078-1511-4b34-8da6-2f446c927eef req-4c487074-765b-40b6-bdea-ec1c8c7034f9 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: b85968f0-ebd7-48f6-a932-c4e8da09381e] Received event network-vif-deleted-f1f563a8-9001-419f-858a-0213c5d6607a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:42:19 np0005548731 nova_compute[232433]: 2025-12-06 07:42:19.616 232437 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765006924.6149328, b85968f0-ebd7-48f6-a932-c4e8da09381e => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 02:42:19 np0005548731 nova_compute[232433]: 2025-12-06 07:42:19.616 232437 INFO nova.compute.manager [-] [instance: b85968f0-ebd7-48f6-a932-c4e8da09381e] VM Stopped (Lifecycle Event)#033[00m
Dec  6 02:42:19 np0005548731 nova_compute[232433]: 2025-12-06 07:42:19.621 232437 DEBUG nova.network.neutron [-] [instance: e0c1dd0c-1286-4b5d-987d-0b82bdb15585] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 02:42:19 np0005548731 nova_compute[232433]: 2025-12-06 07:42:19.637 232437 DEBUG nova.compute.manager [None req-adbb529e-d249-4733-b140-541533335ecb - - - - - -] [instance: b85968f0-ebd7-48f6-a932-c4e8da09381e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:42:19 np0005548731 nova_compute[232433]: 2025-12-06 07:42:19.643 232437 INFO nova.compute.manager [-] [instance: e0c1dd0c-1286-4b5d-987d-0b82bdb15585] Took 1.23 seconds to deallocate network for instance.#033[00m
Dec  6 02:42:19 np0005548731 nova_compute[232433]: 2025-12-06 07:42:19.656 232437 DEBUG oslo_concurrency.processutils [None req-0e29f838-fd75-4628-8416-c6cd093693a3 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:42:19 np0005548731 nova_compute[232433]: 2025-12-06 07:42:19.683 232437 DEBUG oslo_concurrency.lockutils [None req-680355b1-1e54-4f98-a819-4d6c1e80a143 05dfe700f5a141638f3ecb185261858e 765a234f3c484ca28bf25e21cff07bd1 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:42:20 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 02:42:20 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/733693291' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 02:42:20 np0005548731 nova_compute[232433]: 2025-12-06 07:42:20.092 232437 DEBUG oslo_concurrency.processutils [None req-0e29f838-fd75-4628-8416-c6cd093693a3 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.436s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:42:20 np0005548731 nova_compute[232433]: 2025-12-06 07:42:20.098 232437 DEBUG nova.compute.provider_tree [None req-0e29f838-fd75-4628-8416-c6cd093693a3 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Inventory has not changed in ProviderTree for provider: 6d00757a-082f-486d-ae84-869a2ba2e6e7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  6 02:42:20 np0005548731 nova_compute[232433]: 2025-12-06 07:42:20.103 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:42:20 np0005548731 nova_compute[232433]: 2025-12-06 07:42:20.104 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  6 02:42:20 np0005548731 nova_compute[232433]: 2025-12-06 07:42:20.116 232437 DEBUG nova.scheduler.client.report [None req-0e29f838-fd75-4628-8416-c6cd093693a3 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Inventory has not changed for provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  6 02:42:20 np0005548731 nova_compute[232433]: 2025-12-06 07:42:20.136 232437 DEBUG oslo_concurrency.lockutils [None req-0e29f838-fd75-4628-8416-c6cd093693a3 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.562s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:42:20 np0005548731 nova_compute[232433]: 2025-12-06 07:42:20.139 232437 DEBUG oslo_concurrency.lockutils [None req-680355b1-1e54-4f98-a819-4d6c1e80a143 05dfe700f5a141638f3ecb185261858e 765a234f3c484ca28bf25e21cff07bd1 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.456s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:42:20 np0005548731 nova_compute[232433]: 2025-12-06 07:42:20.158 232437 INFO nova.scheduler.client.report [None req-0e29f838-fd75-4628-8416-c6cd093693a3 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Deleted allocations for instance b85968f0-ebd7-48f6-a932-c4e8da09381e#033[00m
Dec  6 02:42:20 np0005548731 nova_compute[232433]: 2025-12-06 07:42:20.202 232437 DEBUG oslo_concurrency.processutils [None req-680355b1-1e54-4f98-a819-4d6c1e80a143 05dfe700f5a141638f3ecb185261858e 765a234f3c484ca28bf25e21cff07bd1 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:42:20 np0005548731 nova_compute[232433]: 2025-12-06 07:42:20.232 232437 DEBUG oslo_concurrency.lockutils [None req-0e29f838-fd75-4628-8416-c6cd093693a3 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Lock "b85968f0-ebd7-48f6-a932-c4e8da09381e" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 16.450s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:42:20 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:42:20 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:42:20 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:42:20.313 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:42:20 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 02:42:20 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/559352823' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 02:42:20 np0005548731 nova_compute[232433]: 2025-12-06 07:42:20.631 232437 DEBUG oslo_concurrency.processutils [None req-680355b1-1e54-4f98-a819-4d6c1e80a143 05dfe700f5a141638f3ecb185261858e 765a234f3c484ca28bf25e21cff07bd1 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.429s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:42:20 np0005548731 nova_compute[232433]: 2025-12-06 07:42:20.636 232437 DEBUG nova.compute.provider_tree [None req-680355b1-1e54-4f98-a819-4d6c1e80a143 05dfe700f5a141638f3ecb185261858e 765a234f3c484ca28bf25e21cff07bd1 - - default default] Inventory has not changed in ProviderTree for provider: 6d00757a-082f-486d-ae84-869a2ba2e6e7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  6 02:42:20 np0005548731 nova_compute[232433]: 2025-12-06 07:42:20.650 232437 DEBUG nova.scheduler.client.report [None req-680355b1-1e54-4f98-a819-4d6c1e80a143 05dfe700f5a141638f3ecb185261858e 765a234f3c484ca28bf25e21cff07bd1 - - default default] Inventory has not changed for provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  6 02:42:20 np0005548731 nova_compute[232433]: 2025-12-06 07:42:20.671 232437 DEBUG oslo_concurrency.lockutils [None req-680355b1-1e54-4f98-a819-4d6c1e80a143 05dfe700f5a141638f3ecb185261858e 765a234f3c484ca28bf25e21cff07bd1 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.532s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:42:20 np0005548731 nova_compute[232433]: 2025-12-06 07:42:20.708 232437 INFO nova.scheduler.client.report [None req-680355b1-1e54-4f98-a819-4d6c1e80a143 05dfe700f5a141638f3ecb185261858e 765a234f3c484ca28bf25e21cff07bd1 - - default default] Deleted allocations for instance e0c1dd0c-1286-4b5d-987d-0b82bdb15585#033[00m
Dec  6 02:42:20 np0005548731 nova_compute[232433]: 2025-12-06 07:42:20.781 232437 DEBUG oslo_concurrency.lockutils [None req-680355b1-1e54-4f98-a819-4d6c1e80a143 05dfe700f5a141638f3ecb185261858e 765a234f3c484ca28bf25e21cff07bd1 - - default default] Lock "e0c1dd0c-1286-4b5d-987d-0b82bdb15585" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 12.048s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:42:21 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:42:21 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:42:21 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:42:21.138 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:42:21 np0005548731 nova_compute[232433]: 2025-12-06 07:42:21.694 232437 DEBUG nova.compute.manager [req-403bf8af-7347-4734-a0cd-2f0247679899 req-90a31d15-b5b2-4062-bb30-ad92f7e3a7c6 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: e0c1dd0c-1286-4b5d-987d-0b82bdb15585] Received event network-vif-deleted-6443973b-c8a9-4d1f-9bf8-bfd076718040 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:42:22 np0005548731 nova_compute[232433]: 2025-12-06 07:42:22.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:42:22 np0005548731 nova_compute[232433]: 2025-12-06 07:42:22.105 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:42:22 np0005548731 nova_compute[232433]: 2025-12-06 07:42:22.126 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:42:22 np0005548731 nova_compute[232433]: 2025-12-06 07:42:22.126 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:42:22 np0005548731 nova_compute[232433]: 2025-12-06 07:42:22.127 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:42:22 np0005548731 nova_compute[232433]: 2025-12-06 07:42:22.127 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  6 02:42:22 np0005548731 nova_compute[232433]: 2025-12-06 07:42:22.127 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:42:22 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:42:22 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:42:22 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:42:22.315 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:42:22 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 02:42:22 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/743800435' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 02:42:22 np0005548731 nova_compute[232433]: 2025-12-06 07:42:22.570 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.443s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:42:22 np0005548731 nova_compute[232433]: 2025-12-06 07:42:22.720 232437 WARNING nova.virt.libvirt.driver [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  6 02:42:22 np0005548731 nova_compute[232433]: 2025-12-06 07:42:22.721 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4352MB free_disk=20.804115295410156GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  6 02:42:22 np0005548731 nova_compute[232433]: 2025-12-06 07:42:22.722 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:42:22 np0005548731 nova_compute[232433]: 2025-12-06 07:42:22.722 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:42:22 np0005548731 nova_compute[232433]: 2025-12-06 07:42:22.770 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  6 02:42:22 np0005548731 nova_compute[232433]: 2025-12-06 07:42:22.770 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  6 02:42:22 np0005548731 nova_compute[232433]: 2025-12-06 07:42:22.796 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:42:23 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:42:23 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:42:23 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:42:23.141 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:42:23 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 02:42:23 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/946810008' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 02:42:23 np0005548731 nova_compute[232433]: 2025-12-06 07:42:23.223 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.428s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:42:23 np0005548731 nova_compute[232433]: 2025-12-06 07:42:23.230 232437 DEBUG nova.compute.provider_tree [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Inventory has not changed in ProviderTree for provider: 6d00757a-082f-486d-ae84-869a2ba2e6e7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  6 02:42:23 np0005548731 nova_compute[232433]: 2025-12-06 07:42:23.262 232437 DEBUG nova.scheduler.client.report [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Inventory has not changed for provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  6 02:42:23 np0005548731 nova_compute[232433]: 2025-12-06 07:42:23.266 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:42:23 np0005548731 nova_compute[232433]: 2025-12-06 07:42:23.351 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  6 02:42:23 np0005548731 nova_compute[232433]: 2025-12-06 07:42:23.351 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.629s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:42:23 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e328 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:42:23 np0005548731 nova_compute[232433]: 2025-12-06 07:42:23.970 232437 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765006928.9694712, e0c1dd0c-1286-4b5d-987d-0b82bdb15585 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 02:42:23 np0005548731 nova_compute[232433]: 2025-12-06 07:42:23.971 232437 INFO nova.compute.manager [-] [instance: e0c1dd0c-1286-4b5d-987d-0b82bdb15585] VM Stopped (Lifecycle Event)#033[00m
Dec  6 02:42:24 np0005548731 nova_compute[232433]: 2025-12-06 07:42:23.999 232437 DEBUG nova.compute.manager [None req-c8a710b5-4218-444d-ae32-45396c8f509f - - - - - -] [instance: e0c1dd0c-1286-4b5d-987d-0b82bdb15585] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:42:24 np0005548731 nova_compute[232433]: 2025-12-06 07:42:24.028 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:42:24 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:42:24 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:42:24 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:42:24.316 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:42:25 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:42:25 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:42:25 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:42:25.144 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:42:25 np0005548731 nova_compute[232433]: 2025-12-06 07:42:25.353 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:42:25 np0005548731 nova_compute[232433]: 2025-12-06 07:42:25.353 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:42:25 np0005548731 nova_compute[232433]: 2025-12-06 07:42:25.556 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:42:25 np0005548731 nova_compute[232433]: 2025-12-06 07:42:25.794 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:42:26 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:42:26 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:42:26 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:42:26.319 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:42:27 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:42:27 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:42:27 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:42:27.147 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:42:27 np0005548731 nova_compute[232433]: 2025-12-06 07:42:27.473 232437 DEBUG oslo_concurrency.lockutils [None req-0371dc49-6d82-4b8b-a0d2-53f5c252380a 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] Acquiring lock "e5068414-3119-4041-97c2-e4cef3bb779a" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:42:27 np0005548731 nova_compute[232433]: 2025-12-06 07:42:27.474 232437 DEBUG oslo_concurrency.lockutils [None req-0371dc49-6d82-4b8b-a0d2-53f5c252380a 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] Lock "e5068414-3119-4041-97c2-e4cef3bb779a" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:42:27 np0005548731 nova_compute[232433]: 2025-12-06 07:42:27.497 232437 DEBUG nova.compute.manager [None req-0371dc49-6d82-4b8b-a0d2-53f5c252380a 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] [instance: e5068414-3119-4041-97c2-e4cef3bb779a] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Dec  6 02:42:27 np0005548731 nova_compute[232433]: 2025-12-06 07:42:27.580 232437 DEBUG oslo_concurrency.lockutils [None req-0371dc49-6d82-4b8b-a0d2-53f5c252380a 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:42:27 np0005548731 nova_compute[232433]: 2025-12-06 07:42:27.580 232437 DEBUG oslo_concurrency.lockutils [None req-0371dc49-6d82-4b8b-a0d2-53f5c252380a 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:42:27 np0005548731 nova_compute[232433]: 2025-12-06 07:42:27.587 232437 DEBUG nova.virt.hardware [None req-0371dc49-6d82-4b8b-a0d2-53f5c252380a 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Dec  6 02:42:27 np0005548731 nova_compute[232433]: 2025-12-06 07:42:27.587 232437 INFO nova.compute.claims [None req-0371dc49-6d82-4b8b-a0d2-53f5c252380a 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] [instance: e5068414-3119-4041-97c2-e4cef3bb779a] Claim successful on node compute-2.ctlplane.example.com#033[00m
Dec  6 02:42:27 np0005548731 nova_compute[232433]: 2025-12-06 07:42:27.694 232437 DEBUG oslo_concurrency.processutils [None req-0371dc49-6d82-4b8b-a0d2-53f5c252380a 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:42:27 np0005548731 ceph-mds[82074]: mds.beacon.cephfs.compute-2.tjfgow missed beacon ack from the monitors
Dec  6 02:42:28 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 02:42:28 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3930922968' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 02:42:28 np0005548731 nova_compute[232433]: 2025-12-06 07:42:28.122 232437 DEBUG oslo_concurrency.processutils [None req-0371dc49-6d82-4b8b-a0d2-53f5c252380a 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.428s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:42:28 np0005548731 nova_compute[232433]: 2025-12-06 07:42:28.128 232437 DEBUG nova.compute.provider_tree [None req-0371dc49-6d82-4b8b-a0d2-53f5c252380a 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] Inventory has not changed in ProviderTree for provider: 6d00757a-082f-486d-ae84-869a2ba2e6e7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  6 02:42:28 np0005548731 nova_compute[232433]: 2025-12-06 07:42:28.263 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:42:28 np0005548731 nova_compute[232433]: 2025-12-06 07:42:28.306 232437 DEBUG nova.scheduler.client.report [None req-0371dc49-6d82-4b8b-a0d2-53f5c252380a 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] Inventory has not changed for provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  6 02:42:28 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:42:28 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:42:28 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:42:28.321 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:42:28 np0005548731 nova_compute[232433]: 2025-12-06 07:42:28.369 232437 DEBUG oslo_concurrency.lockutils [None req-0371dc49-6d82-4b8b-a0d2-53f5c252380a 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.789s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:42:28 np0005548731 nova_compute[232433]: 2025-12-06 07:42:28.370 232437 DEBUG nova.compute.manager [None req-0371dc49-6d82-4b8b-a0d2-53f5c252380a 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] [instance: e5068414-3119-4041-97c2-e4cef3bb779a] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Dec  6 02:42:28 np0005548731 nova_compute[232433]: 2025-12-06 07:42:28.421 232437 DEBUG nova.compute.manager [None req-0371dc49-6d82-4b8b-a0d2-53f5c252380a 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] [instance: e5068414-3119-4041-97c2-e4cef3bb779a] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Dec  6 02:42:28 np0005548731 nova_compute[232433]: 2025-12-06 07:42:28.421 232437 DEBUG nova.network.neutron [None req-0371dc49-6d82-4b8b-a0d2-53f5c252380a 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] [instance: e5068414-3119-4041-97c2-e4cef3bb779a] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Dec  6 02:42:28 np0005548731 nova_compute[232433]: 2025-12-06 07:42:28.445 232437 INFO nova.virt.libvirt.driver [None req-0371dc49-6d82-4b8b-a0d2-53f5c252380a 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] [instance: e5068414-3119-4041-97c2-e4cef3bb779a] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Dec  6 02:42:28 np0005548731 nova_compute[232433]: 2025-12-06 07:42:28.466 232437 DEBUG nova.compute.manager [None req-0371dc49-6d82-4b8b-a0d2-53f5c252380a 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] [instance: e5068414-3119-4041-97c2-e4cef3bb779a] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Dec  6 02:42:28 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e328 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:42:28 np0005548731 nova_compute[232433]: 2025-12-06 07:42:28.584 232437 DEBUG nova.compute.manager [None req-0371dc49-6d82-4b8b-a0d2-53f5c252380a 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] [instance: e5068414-3119-4041-97c2-e4cef3bb779a] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Dec  6 02:42:28 np0005548731 nova_compute[232433]: 2025-12-06 07:42:28.585 232437 DEBUG nova.virt.libvirt.driver [None req-0371dc49-6d82-4b8b-a0d2-53f5c252380a 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] [instance: e5068414-3119-4041-97c2-e4cef3bb779a] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Dec  6 02:42:28 np0005548731 nova_compute[232433]: 2025-12-06 07:42:28.585 232437 INFO nova.virt.libvirt.driver [None req-0371dc49-6d82-4b8b-a0d2-53f5c252380a 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] [instance: e5068414-3119-4041-97c2-e4cef3bb779a] Creating image(s)#033[00m
Dec  6 02:42:28 np0005548731 nova_compute[232433]: 2025-12-06 07:42:28.612 232437 DEBUG nova.storage.rbd_utils [None req-0371dc49-6d82-4b8b-a0d2-53f5c252380a 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] rbd image e5068414-3119-4041-97c2-e4cef3bb779a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:42:28 np0005548731 nova_compute[232433]: 2025-12-06 07:42:28.639 232437 DEBUG nova.storage.rbd_utils [None req-0371dc49-6d82-4b8b-a0d2-53f5c252380a 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] rbd image e5068414-3119-4041-97c2-e4cef3bb779a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:42:28 np0005548731 nova_compute[232433]: 2025-12-06 07:42:28.664 232437 DEBUG nova.storage.rbd_utils [None req-0371dc49-6d82-4b8b-a0d2-53f5c252380a 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] rbd image e5068414-3119-4041-97c2-e4cef3bb779a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:42:28 np0005548731 nova_compute[232433]: 2025-12-06 07:42:28.668 232437 DEBUG oslo_concurrency.processutils [None req-0371dc49-6d82-4b8b-a0d2-53f5c252380a 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:42:28 np0005548731 nova_compute[232433]: 2025-12-06 07:42:28.693 232437 DEBUG nova.policy [None req-0371dc49-6d82-4b8b-a0d2-53f5c252380a 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '297bc99c242e4fa8aedea4a6367b61c0', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '741dc47f9ced423cbd99fd6f9d32904f', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Dec  6 02:42:28 np0005548731 nova_compute[232433]: 2025-12-06 07:42:28.729 232437 DEBUG oslo_concurrency.processutils [None req-0371dc49-6d82-4b8b-a0d2-53f5c252380a 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:42:28 np0005548731 nova_compute[232433]: 2025-12-06 07:42:28.729 232437 DEBUG oslo_concurrency.lockutils [None req-0371dc49-6d82-4b8b-a0d2-53f5c252380a 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] Acquiring lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:42:28 np0005548731 nova_compute[232433]: 2025-12-06 07:42:28.730 232437 DEBUG oslo_concurrency.lockutils [None req-0371dc49-6d82-4b8b-a0d2-53f5c252380a 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:42:28 np0005548731 nova_compute[232433]: 2025-12-06 07:42:28.730 232437 DEBUG oslo_concurrency.lockutils [None req-0371dc49-6d82-4b8b-a0d2-53f5c252380a 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:42:28 np0005548731 nova_compute[232433]: 2025-12-06 07:42:28.756 232437 DEBUG nova.storage.rbd_utils [None req-0371dc49-6d82-4b8b-a0d2-53f5c252380a 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] rbd image e5068414-3119-4041-97c2-e4cef3bb779a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:42:28 np0005548731 nova_compute[232433]: 2025-12-06 07:42:28.761 232437 DEBUG oslo_concurrency.processutils [None req-0371dc49-6d82-4b8b-a0d2-53f5c252380a 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef e5068414-3119-4041-97c2-e4cef3bb779a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:42:29 np0005548731 nova_compute[232433]: 2025-12-06 07:42:29.030 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:42:29 np0005548731 nova_compute[232433]: 2025-12-06 07:42:29.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:42:29 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:42:29 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:42:29 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:42:29.150 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:42:29 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:42:29.481 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=56, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:ec:b3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '32:72:e7:89:e0:7d'}, ipsec=False) old=SB_Global(nb_cfg=55) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 02:42:29 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:42:29.482 143965 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec  6 02:42:29 np0005548731 nova_compute[232433]: 2025-12-06 07:42:29.524 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:42:30 np0005548731 nova_compute[232433]: 2025-12-06 07:42:30.163 232437 DEBUG nova.network.neutron [None req-0371dc49-6d82-4b8b-a0d2-53f5c252380a 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] [instance: e5068414-3119-4041-97c2-e4cef3bb779a] Successfully created port: 9150edb9-25c0-443e-baec-8ba956bec23f _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Dec  6 02:42:30 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:42:30 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:42:30 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:42:30.323 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:42:30 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:42:30.484 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=9f96b960-b4f2-40bd-ae99-08121f5e8b78, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '56'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:42:31 np0005548731 nova_compute[232433]: 2025-12-06 07:42:31.082 232437 DEBUG nova.network.neutron [None req-0371dc49-6d82-4b8b-a0d2-53f5c252380a 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] [instance: e5068414-3119-4041-97c2-e4cef3bb779a] Successfully updated port: 9150edb9-25c0-443e-baec-8ba956bec23f _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Dec  6 02:42:31 np0005548731 nova_compute[232433]: 2025-12-06 07:42:31.095 232437 DEBUG oslo_concurrency.lockutils [None req-0371dc49-6d82-4b8b-a0d2-53f5c252380a 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] Acquiring lock "refresh_cache-e5068414-3119-4041-97c2-e4cef3bb779a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 02:42:31 np0005548731 nova_compute[232433]: 2025-12-06 07:42:31.095 232437 DEBUG oslo_concurrency.lockutils [None req-0371dc49-6d82-4b8b-a0d2-53f5c252380a 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] Acquired lock "refresh_cache-e5068414-3119-4041-97c2-e4cef3bb779a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 02:42:31 np0005548731 nova_compute[232433]: 2025-12-06 07:42:31.095 232437 DEBUG nova.network.neutron [None req-0371dc49-6d82-4b8b-a0d2-53f5c252380a 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] [instance: e5068414-3119-4041-97c2-e4cef3bb779a] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec  6 02:42:31 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:42:31 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:42:31 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:42:31.153 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:42:31 np0005548731 nova_compute[232433]: 2025-12-06 07:42:31.176 232437 DEBUG nova.compute.manager [req-13d81d79-7347-49eb-b028-96e57f5e6d73 req-359a558b-934b-43d4-85cf-c55640f347c3 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: e5068414-3119-4041-97c2-e4cef3bb779a] Received event network-changed-9150edb9-25c0-443e-baec-8ba956bec23f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:42:31 np0005548731 nova_compute[232433]: 2025-12-06 07:42:31.176 232437 DEBUG nova.compute.manager [req-13d81d79-7347-49eb-b028-96e57f5e6d73 req-359a558b-934b-43d4-85cf-c55640f347c3 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: e5068414-3119-4041-97c2-e4cef3bb779a] Refreshing instance network info cache due to event network-changed-9150edb9-25c0-443e-baec-8ba956bec23f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  6 02:42:31 np0005548731 nova_compute[232433]: 2025-12-06 07:42:31.177 232437 DEBUG oslo_concurrency.lockutils [req-13d81d79-7347-49eb-b028-96e57f5e6d73 req-359a558b-934b-43d4-85cf-c55640f347c3 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-e5068414-3119-4041-97c2-e4cef3bb779a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 02:42:31 np0005548731 nova_compute[232433]: 2025-12-06 07:42:31.243 232437 DEBUG nova.network.neutron [None req-0371dc49-6d82-4b8b-a0d2-53f5c252380a 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] [instance: e5068414-3119-4041-97c2-e4cef3bb779a] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec  6 02:42:32 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:42:32 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:42:32 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:42:32.325 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:42:32 np0005548731 nova_compute[232433]: 2025-12-06 07:42:32.563 232437 DEBUG nova.network.neutron [None req-0371dc49-6d82-4b8b-a0d2-53f5c252380a 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] [instance: e5068414-3119-4041-97c2-e4cef3bb779a] Updating instance_info_cache with network_info: [{"id": "9150edb9-25c0-443e-baec-8ba956bec23f", "address": "fa:16:3e:f5:8d:1f", "network": {"id": "3c5d4817-c3d5-45fc-9890-418e779bacb2", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-1824643193-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "741dc47f9ced423cbd99fd6f9d32904f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9150edb9-25", "ovs_interfaceid": "9150edb9-25c0-443e-baec-8ba956bec23f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 02:42:32 np0005548731 nova_compute[232433]: 2025-12-06 07:42:32.585 232437 DEBUG oslo_concurrency.lockutils [None req-0371dc49-6d82-4b8b-a0d2-53f5c252380a 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] Releasing lock "refresh_cache-e5068414-3119-4041-97c2-e4cef3bb779a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 02:42:32 np0005548731 nova_compute[232433]: 2025-12-06 07:42:32.585 232437 DEBUG nova.compute.manager [None req-0371dc49-6d82-4b8b-a0d2-53f5c252380a 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] [instance: e5068414-3119-4041-97c2-e4cef3bb779a] Instance network_info: |[{"id": "9150edb9-25c0-443e-baec-8ba956bec23f", "address": "fa:16:3e:f5:8d:1f", "network": {"id": "3c5d4817-c3d5-45fc-9890-418e779bacb2", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-1824643193-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "741dc47f9ced423cbd99fd6f9d32904f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9150edb9-25", "ovs_interfaceid": "9150edb9-25c0-443e-baec-8ba956bec23f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Dec  6 02:42:32 np0005548731 nova_compute[232433]: 2025-12-06 07:42:32.586 232437 DEBUG oslo_concurrency.lockutils [req-13d81d79-7347-49eb-b028-96e57f5e6d73 req-359a558b-934b-43d4-85cf-c55640f347c3 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-e5068414-3119-4041-97c2-e4cef3bb779a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 02:42:32 np0005548731 nova_compute[232433]: 2025-12-06 07:42:32.586 232437 DEBUG nova.network.neutron [req-13d81d79-7347-49eb-b028-96e57f5e6d73 req-359a558b-934b-43d4-85cf-c55640f347c3 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: e5068414-3119-4041-97c2-e4cef3bb779a] Refreshing network info cache for port 9150edb9-25c0-443e-baec-8ba956bec23f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  6 02:42:33 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:42:33 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:42:33 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:42:33.156 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:42:33 np0005548731 nova_compute[232433]: 2025-12-06 07:42:33.265 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:42:33 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e328 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:42:34 np0005548731 nova_compute[232433]: 2025-12-06 07:42:34.032 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:42:34 np0005548731 nova_compute[232433]: 2025-12-06 07:42:34.044 232437 DEBUG oslo_concurrency.processutils [None req-0371dc49-6d82-4b8b-a0d2-53f5c252380a 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef e5068414-3119-4041-97c2-e4cef3bb779a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 5.284s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:42:34 np0005548731 nova_compute[232433]: 2025-12-06 07:42:34.132 232437 DEBUG nova.storage.rbd_utils [None req-0371dc49-6d82-4b8b-a0d2-53f5c252380a 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] resizing rbd image e5068414-3119-4041-97c2-e4cef3bb779a_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Dec  6 02:42:34 np0005548731 nova_compute[232433]: 2025-12-06 07:42:34.262 232437 DEBUG nova.objects.instance [None req-0371dc49-6d82-4b8b-a0d2-53f5c252380a 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] Lazy-loading 'migration_context' on Instance uuid e5068414-3119-4041-97c2-e4cef3bb779a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:42:34 np0005548731 nova_compute[232433]: 2025-12-06 07:42:34.278 232437 DEBUG nova.virt.libvirt.driver [None req-0371dc49-6d82-4b8b-a0d2-53f5c252380a 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] [instance: e5068414-3119-4041-97c2-e4cef3bb779a] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Dec  6 02:42:34 np0005548731 nova_compute[232433]: 2025-12-06 07:42:34.279 232437 DEBUG nova.virt.libvirt.driver [None req-0371dc49-6d82-4b8b-a0d2-53f5c252380a 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] [instance: e5068414-3119-4041-97c2-e4cef3bb779a] Ensure instance console log exists: /var/lib/nova/instances/e5068414-3119-4041-97c2-e4cef3bb779a/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec  6 02:42:34 np0005548731 nova_compute[232433]: 2025-12-06 07:42:34.280 232437 DEBUG oslo_concurrency.lockutils [None req-0371dc49-6d82-4b8b-a0d2-53f5c252380a 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:42:34 np0005548731 nova_compute[232433]: 2025-12-06 07:42:34.281 232437 DEBUG oslo_concurrency.lockutils [None req-0371dc49-6d82-4b8b-a0d2-53f5c252380a 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:42:34 np0005548731 nova_compute[232433]: 2025-12-06 07:42:34.281 232437 DEBUG oslo_concurrency.lockutils [None req-0371dc49-6d82-4b8b-a0d2-53f5c252380a 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:42:34 np0005548731 nova_compute[232433]: 2025-12-06 07:42:34.286 232437 DEBUG nova.virt.libvirt.driver [None req-0371dc49-6d82-4b8b-a0d2-53f5c252380a 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] [instance: e5068414-3119-4041-97c2-e4cef3bb779a] Start _get_guest_xml network_info=[{"id": "9150edb9-25c0-443e-baec-8ba956bec23f", "address": "fa:16:3e:f5:8d:1f", "network": {"id": "3c5d4817-c3d5-45fc-9890-418e779bacb2", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-1824643193-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "741dc47f9ced423cbd99fd6f9d32904f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9150edb9-25", "ovs_interfaceid": "9150edb9-25c0-443e-baec-8ba956bec23f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'device_type': 'disk', 'encryption_secret_uuid': None, 'disk_bus': 'virtio', 'boot_index': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'encryption_options': None, 'size': 0, 'encrypted': False, 'image_id': '6efab05d-c7cf-4770-a5c3-c806a2739063'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec  6 02:42:34 np0005548731 nova_compute[232433]: 2025-12-06 07:42:34.299 232437 WARNING nova.virt.libvirt.driver [None req-0371dc49-6d82-4b8b-a0d2-53f5c252380a 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  6 02:42:34 np0005548731 nova_compute[232433]: 2025-12-06 07:42:34.305 232437 DEBUG nova.virt.libvirt.host [None req-0371dc49-6d82-4b8b-a0d2-53f5c252380a 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec  6 02:42:34 np0005548731 nova_compute[232433]: 2025-12-06 07:42:34.307 232437 DEBUG nova.virt.libvirt.host [None req-0371dc49-6d82-4b8b-a0d2-53f5c252380a 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec  6 02:42:34 np0005548731 nova_compute[232433]: 2025-12-06 07:42:34.312 232437 DEBUG nova.virt.libvirt.host [None req-0371dc49-6d82-4b8b-a0d2-53f5c252380a 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec  6 02:42:34 np0005548731 nova_compute[232433]: 2025-12-06 07:42:34.313 232437 DEBUG nova.virt.libvirt.host [None req-0371dc49-6d82-4b8b-a0d2-53f5c252380a 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec  6 02:42:34 np0005548731 nova_compute[232433]: 2025-12-06 07:42:34.315 232437 DEBUG nova.virt.libvirt.driver [None req-0371dc49-6d82-4b8b-a0d2-53f5c252380a 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec  6 02:42:34 np0005548731 nova_compute[232433]: 2025-12-06 07:42:34.315 232437 DEBUG nova.virt.hardware [None req-0371dc49-6d82-4b8b-a0d2-53f5c252380a 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-06T06:56:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='25848a18-11d9-4f11-80b5-5d005675c76d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec  6 02:42:34 np0005548731 nova_compute[232433]: 2025-12-06 07:42:34.317 232437 DEBUG nova.virt.hardware [None req-0371dc49-6d82-4b8b-a0d2-53f5c252380a 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec  6 02:42:34 np0005548731 nova_compute[232433]: 2025-12-06 07:42:34.317 232437 DEBUG nova.virt.hardware [None req-0371dc49-6d82-4b8b-a0d2-53f5c252380a 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec  6 02:42:34 np0005548731 nova_compute[232433]: 2025-12-06 07:42:34.318 232437 DEBUG nova.virt.hardware [None req-0371dc49-6d82-4b8b-a0d2-53f5c252380a 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec  6 02:42:34 np0005548731 nova_compute[232433]: 2025-12-06 07:42:34.318 232437 DEBUG nova.virt.hardware [None req-0371dc49-6d82-4b8b-a0d2-53f5c252380a 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec  6 02:42:34 np0005548731 nova_compute[232433]: 2025-12-06 07:42:34.318 232437 DEBUG nova.virt.hardware [None req-0371dc49-6d82-4b8b-a0d2-53f5c252380a 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec  6 02:42:34 np0005548731 nova_compute[232433]: 2025-12-06 07:42:34.318 232437 DEBUG nova.virt.hardware [None req-0371dc49-6d82-4b8b-a0d2-53f5c252380a 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec  6 02:42:34 np0005548731 nova_compute[232433]: 2025-12-06 07:42:34.319 232437 DEBUG nova.virt.hardware [None req-0371dc49-6d82-4b8b-a0d2-53f5c252380a 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec  6 02:42:34 np0005548731 nova_compute[232433]: 2025-12-06 07:42:34.319 232437 DEBUG nova.virt.hardware [None req-0371dc49-6d82-4b8b-a0d2-53f5c252380a 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec  6 02:42:34 np0005548731 nova_compute[232433]: 2025-12-06 07:42:34.319 232437 DEBUG nova.virt.hardware [None req-0371dc49-6d82-4b8b-a0d2-53f5c252380a 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec  6 02:42:34 np0005548731 nova_compute[232433]: 2025-12-06 07:42:34.319 232437 DEBUG nova.virt.hardware [None req-0371dc49-6d82-4b8b-a0d2-53f5c252380a 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec  6 02:42:34 np0005548731 nova_compute[232433]: 2025-12-06 07:42:34.323 232437 DEBUG oslo_concurrency.processutils [None req-0371dc49-6d82-4b8b-a0d2-53f5c252380a 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  6 02:42:34 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:42:34 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:42:34 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:42:34.327 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:42:34 np0005548731 nova_compute[232433]: 2025-12-06 07:42:34.500 232437 DEBUG nova.network.neutron [req-13d81d79-7347-49eb-b028-96e57f5e6d73 req-359a558b-934b-43d4-85cf-c55640f347c3 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: e5068414-3119-4041-97c2-e4cef3bb779a] Updated VIF entry in instance network info cache for port 9150edb9-25c0-443e-baec-8ba956bec23f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec  6 02:42:34 np0005548731 nova_compute[232433]: 2025-12-06 07:42:34.501 232437 DEBUG nova.network.neutron [req-13d81d79-7347-49eb-b028-96e57f5e6d73 req-359a558b-934b-43d4-85cf-c55640f347c3 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: e5068414-3119-4041-97c2-e4cef3bb779a] Updating instance_info_cache with network_info: [{"id": "9150edb9-25c0-443e-baec-8ba956bec23f", "address": "fa:16:3e:f5:8d:1f", "network": {"id": "3c5d4817-c3d5-45fc-9890-418e779bacb2", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-1824643193-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "741dc47f9ced423cbd99fd6f9d32904f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9150edb9-25", "ovs_interfaceid": "9150edb9-25c0-443e-baec-8ba956bec23f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 02:42:34 np0005548731 nova_compute[232433]: 2025-12-06 07:42:34.520 232437 DEBUG oslo_concurrency.lockutils [req-13d81d79-7347-49eb-b028-96e57f5e6d73 req-359a558b-934b-43d4-85cf-c55640f347c3 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-e5068414-3119-4041-97c2-e4cef3bb779a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec  6 02:42:34 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  6 02:42:34 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3098859861' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  6 02:42:35 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:42:35 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:42:35 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:42:35.159 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:42:35 np0005548731 ceph-mon[77458]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #103. Immutable memtables: 0.
Dec  6 02:42:35 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:42:35.704564) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec  6 02:42:35 np0005548731 ceph-mon[77458]: rocksdb: [db/flush_job.cc:856] [default] [JOB 63] Flushing memtable with next log file: 103
Dec  6 02:42:35 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765006955704619, "job": 63, "event": "flush_started", "num_memtables": 1, "num_entries": 1911, "num_deletes": 258, "total_data_size": 4109549, "memory_usage": 4166888, "flush_reason": "Manual Compaction"}
Dec  6 02:42:35 np0005548731 ceph-mon[77458]: rocksdb: [db/flush_job.cc:885] [default] [JOB 63] Level-0 flush table #104: started
Dec  6 02:42:35 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765006955716059, "cf_name": "default", "job": 63, "event": "table_file_creation", "file_number": 104, "file_size": 1715795, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 53607, "largest_seqno": 55513, "table_properties": {"data_size": 1709538, "index_size": 3203, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1989, "raw_key_size": 17153, "raw_average_key_size": 21, "raw_value_size": 1695498, "raw_average_value_size": 2151, "num_data_blocks": 140, "num_entries": 788, "num_filter_entries": 788, "num_deletions": 258, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765006799, "oldest_key_time": 1765006799, "file_creation_time": 1765006955, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bc01f9a1-fda8-4008-aee1-1fa5ff4371a1", "db_session_id": "CWBL8WMDM4D79BU86E9T", "orig_file_number": 104, "seqno_to_time_mapping": "N/A"}}
Dec  6 02:42:35 np0005548731 ceph-mon[77458]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 63] Flush lasted 11548 microseconds, and 6222 cpu microseconds.
Dec  6 02:42:35 np0005548731 ceph-mon[77458]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  6 02:42:35 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:42:35.716112) [db/flush_job.cc:967] [default] [JOB 63] Level-0 flush table #104: 1715795 bytes OK
Dec  6 02:42:35 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:42:35.716130) [db/memtable_list.cc:519] [default] Level-0 commit table #104 started
Dec  6 02:42:35 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:42:35.718312) [db/memtable_list.cc:722] [default] Level-0 commit table #104: memtable #1 done
Dec  6 02:42:35 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:42:35.718332) EVENT_LOG_v1 {"time_micros": 1765006955718327, "job": 63, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec  6 02:42:35 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:42:35.718349) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec  6 02:42:35 np0005548731 ceph-mon[77458]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 63] Try to delete WAL files size 4100834, prev total WAL file size 4100834, number of live WAL files 2.
Dec  6 02:42:35 np0005548731 ceph-mon[77458]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000100.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  6 02:42:35 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:42:35.719334) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740031373735' seq:72057594037927935, type:22 .. '6D6772737461740032303237' seq:0, type:0; will stop at (end)
Dec  6 02:42:35 np0005548731 ceph-mon[77458]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 64] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec  6 02:42:35 np0005548731 ceph-mon[77458]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 63 Base level 0, inputs: [104(1675KB)], [102(12MB)]
Dec  6 02:42:35 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765006955719372, "job": 64, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [104], "files_L6": [102], "score": -1, "input_data_size": 14872806, "oldest_snapshot_seqno": -1}
Dec  6 02:42:35 np0005548731 nova_compute[232433]: 2025-12-06 07:42:35.738 232437 DEBUG oslo_concurrency.processutils [None req-0371dc49-6d82-4b8b-a0d2-53f5c252380a 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.415s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  6 02:42:35 np0005548731 nova_compute[232433]: 2025-12-06 07:42:35.765 232437 DEBUG nova.storage.rbd_utils [None req-0371dc49-6d82-4b8b-a0d2-53f5c252380a 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] rbd image e5068414-3119-4041-97c2-e4cef3bb779a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec  6 02:42:35 np0005548731 nova_compute[232433]: 2025-12-06 07:42:35.770 232437 DEBUG oslo_concurrency.processutils [None req-0371dc49-6d82-4b8b-a0d2-53f5c252380a 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  6 02:42:35 np0005548731 ceph-mon[77458]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 64] Generated table #105: 8874 keys, 11975025 bytes, temperature: kUnknown
Dec  6 02:42:35 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765006955785635, "cf_name": "default", "job": 64, "event": "table_file_creation", "file_number": 105, "file_size": 11975025, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11917601, "index_size": 34144, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 22213, "raw_key_size": 229428, "raw_average_key_size": 25, "raw_value_size": 11761495, "raw_average_value_size": 1325, "num_data_blocks": 1340, "num_entries": 8874, "num_filter_entries": 8874, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765002564, "oldest_key_time": 0, "file_creation_time": 1765006955, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bc01f9a1-fda8-4008-aee1-1fa5ff4371a1", "db_session_id": "CWBL8WMDM4D79BU86E9T", "orig_file_number": 105, "seqno_to_time_mapping": "N/A"}}
Dec  6 02:42:35 np0005548731 ceph-mon[77458]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  6 02:42:35 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:42:35.786077) [db/compaction/compaction_job.cc:1663] [default] [JOB 64] Compacted 1@0 + 1@6 files to L6 => 11975025 bytes
Dec  6 02:42:35 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:42:35.787183) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 224.0 rd, 180.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.6, 12.5 +0.0 blob) out(11.4 +0.0 blob), read-write-amplify(15.6) write-amplify(7.0) OK, records in: 9344, records dropped: 470 output_compression: NoCompression
Dec  6 02:42:35 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:42:35.787205) EVENT_LOG_v1 {"time_micros": 1765006955787194, "job": 64, "event": "compaction_finished", "compaction_time_micros": 66397, "compaction_time_cpu_micros": 29225, "output_level": 6, "num_output_files": 1, "total_output_size": 11975025, "num_input_records": 9344, "num_output_records": 8874, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec  6 02:42:35 np0005548731 ceph-mon[77458]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000104.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  6 02:42:35 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765006955787795, "job": 64, "event": "table_file_deletion", "file_number": 104}
Dec  6 02:42:35 np0005548731 ceph-mon[77458]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000102.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  6 02:42:35 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765006955790121, "job": 64, "event": "table_file_deletion", "file_number": 102}
Dec  6 02:42:35 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:42:35.719252) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 02:42:35 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:42:35.790165) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 02:42:35 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:42:35.790172) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 02:42:35 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:42:35.790174) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 02:42:35 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:42:35.790175) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 02:42:35 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:42:35.790177) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 02:42:36 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  6 02:42:36 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1153323595' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  6 02:42:36 np0005548731 nova_compute[232433]: 2025-12-06 07:42:36.210 232437 DEBUG oslo_concurrency.processutils [None req-0371dc49-6d82-4b8b-a0d2-53f5c252380a 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.440s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  6 02:42:36 np0005548731 nova_compute[232433]: 2025-12-06 07:42:36.212 232437 DEBUG nova.virt.libvirt.vif [None req-0371dc49-6d82-4b8b-a0d2-53f5c252380a 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T07:42:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachVolumeNegativeTest-server-8475590',display_name='tempest-AttachVolumeNegativeTest-server-8475590',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-attachvolumenegativetest-server-8475590',id=146,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDOAw6woOATDIm/DWqTIph4LibbWTtXMwMrs+K9JWtnpxynvb7JtS3OPLqjHfzzc8U4N4dhPecjsu4dz194SejMj6fp49R968XBi14ndk4PM/WEWME049321djHvZx3nDw==',key_name='tempest-keypair-1954418097',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='741dc47f9ced423cbd99fd6f9d32904f',ramdisk_id='',reservation_id='r-w4vw8hdt',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachVolumeNegativeTest-2080911030',owner_user_name='tempest-AttachVolumeNegativeTest-2080911030-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:42:28Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='297bc99c242e4fa8aedea4a6367b61c0',uuid=e5068414-3119-4041-97c2-e4cef3bb779a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "9150edb9-25c0-443e-baec-8ba956bec23f", "address": "fa:16:3e:f5:8d:1f", "network": {"id": "3c5d4817-c3d5-45fc-9890-418e779bacb2", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-1824643193-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "741dc47f9ced423cbd99fd6f9d32904f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9150edb9-25", "ovs_interfaceid": "9150edb9-25c0-443e-baec-8ba956bec23f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec  6 02:42:36 np0005548731 nova_compute[232433]: 2025-12-06 07:42:36.213 232437 DEBUG nova.network.os_vif_util [None req-0371dc49-6d82-4b8b-a0d2-53f5c252380a 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] Converting VIF {"id": "9150edb9-25c0-443e-baec-8ba956bec23f", "address": "fa:16:3e:f5:8d:1f", "network": {"id": "3c5d4817-c3d5-45fc-9890-418e779bacb2", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-1824643193-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "741dc47f9ced423cbd99fd6f9d32904f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9150edb9-25", "ovs_interfaceid": "9150edb9-25c0-443e-baec-8ba956bec23f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 02:42:36 np0005548731 nova_compute[232433]: 2025-12-06 07:42:36.214 232437 DEBUG nova.network.os_vif_util [None req-0371dc49-6d82-4b8b-a0d2-53f5c252380a 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f5:8d:1f,bridge_name='br-int',has_traffic_filtering=True,id=9150edb9-25c0-443e-baec-8ba956bec23f,network=Network(3c5d4817-c3d5-45fc-9890-418e779bacb2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9150edb9-25') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec  6 02:42:36 np0005548731 nova_compute[232433]: 2025-12-06 07:42:36.215 232437 DEBUG nova.objects.instance [None req-0371dc49-6d82-4b8b-a0d2-53f5c252380a 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] Lazy-loading 'pci_devices' on Instance uuid e5068414-3119-4041-97c2-e4cef3bb779a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec  6 02:42:36 np0005548731 nova_compute[232433]: 2025-12-06 07:42:36.229 232437 DEBUG nova.virt.libvirt.driver [None req-0371dc49-6d82-4b8b-a0d2-53f5c252380a 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] [instance: e5068414-3119-4041-97c2-e4cef3bb779a] End _get_guest_xml xml=<domain type="kvm">
Dec  6 02:42:36 np0005548731 nova_compute[232433]:  <uuid>e5068414-3119-4041-97c2-e4cef3bb779a</uuid>
Dec  6 02:42:36 np0005548731 nova_compute[232433]:  <name>instance-00000092</name>
Dec  6 02:42:36 np0005548731 nova_compute[232433]:  <memory>131072</memory>
Dec  6 02:42:36 np0005548731 nova_compute[232433]:  <vcpu>1</vcpu>
Dec  6 02:42:36 np0005548731 nova_compute[232433]:  <metadata>
Dec  6 02:42:36 np0005548731 nova_compute[232433]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec  6 02:42:36 np0005548731 nova_compute[232433]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec  6 02:42:36 np0005548731 nova_compute[232433]:      <nova:name>tempest-AttachVolumeNegativeTest-server-8475590</nova:name>
Dec  6 02:42:36 np0005548731 nova_compute[232433]:      <nova:creationTime>2025-12-06 07:42:34</nova:creationTime>
Dec  6 02:42:36 np0005548731 nova_compute[232433]:      <nova:flavor name="m1.nano">
Dec  6 02:42:36 np0005548731 nova_compute[232433]:        <nova:memory>128</nova:memory>
Dec  6 02:42:36 np0005548731 nova_compute[232433]:        <nova:disk>1</nova:disk>
Dec  6 02:42:36 np0005548731 nova_compute[232433]:        <nova:swap>0</nova:swap>
Dec  6 02:42:36 np0005548731 nova_compute[232433]:        <nova:ephemeral>0</nova:ephemeral>
Dec  6 02:42:36 np0005548731 nova_compute[232433]:        <nova:vcpus>1</nova:vcpus>
Dec  6 02:42:36 np0005548731 nova_compute[232433]:      </nova:flavor>
Dec  6 02:42:36 np0005548731 nova_compute[232433]:      <nova:owner>
Dec  6 02:42:36 np0005548731 nova_compute[232433]:        <nova:user uuid="297bc99c242e4fa8aedea4a6367b61c0">tempest-AttachVolumeNegativeTest-2080911030-project-member</nova:user>
Dec  6 02:42:36 np0005548731 nova_compute[232433]:        <nova:project uuid="741dc47f9ced423cbd99fd6f9d32904f">tempest-AttachVolumeNegativeTest-2080911030</nova:project>
Dec  6 02:42:36 np0005548731 nova_compute[232433]:      </nova:owner>
Dec  6 02:42:36 np0005548731 nova_compute[232433]:      <nova:root type="image" uuid="6efab05d-c7cf-4770-a5c3-c806a2739063"/>
Dec  6 02:42:36 np0005548731 nova_compute[232433]:      <nova:ports>
Dec  6 02:42:36 np0005548731 nova_compute[232433]:        <nova:port uuid="9150edb9-25c0-443e-baec-8ba956bec23f">
Dec  6 02:42:36 np0005548731 nova_compute[232433]:          <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Dec  6 02:42:36 np0005548731 nova_compute[232433]:        </nova:port>
Dec  6 02:42:36 np0005548731 nova_compute[232433]:      </nova:ports>
Dec  6 02:42:36 np0005548731 nova_compute[232433]:    </nova:instance>
Dec  6 02:42:36 np0005548731 nova_compute[232433]:  </metadata>
Dec  6 02:42:36 np0005548731 nova_compute[232433]:  <sysinfo type="smbios">
Dec  6 02:42:36 np0005548731 nova_compute[232433]:    <system>
Dec  6 02:42:36 np0005548731 nova_compute[232433]:      <entry name="manufacturer">RDO</entry>
Dec  6 02:42:36 np0005548731 nova_compute[232433]:      <entry name="product">OpenStack Compute</entry>
Dec  6 02:42:36 np0005548731 nova_compute[232433]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec  6 02:42:36 np0005548731 nova_compute[232433]:      <entry name="serial">e5068414-3119-4041-97c2-e4cef3bb779a</entry>
Dec  6 02:42:36 np0005548731 nova_compute[232433]:      <entry name="uuid">e5068414-3119-4041-97c2-e4cef3bb779a</entry>
Dec  6 02:42:36 np0005548731 nova_compute[232433]:      <entry name="family">Virtual Machine</entry>
Dec  6 02:42:36 np0005548731 nova_compute[232433]:    </system>
Dec  6 02:42:36 np0005548731 nova_compute[232433]:  </sysinfo>
Dec  6 02:42:36 np0005548731 nova_compute[232433]:  <os>
Dec  6 02:42:36 np0005548731 nova_compute[232433]:    <type arch="x86_64" machine="q35">hvm</type>
Dec  6 02:42:36 np0005548731 nova_compute[232433]:    <boot dev="hd"/>
Dec  6 02:42:36 np0005548731 nova_compute[232433]:    <smbios mode="sysinfo"/>
Dec  6 02:42:36 np0005548731 nova_compute[232433]:  </os>
Dec  6 02:42:36 np0005548731 nova_compute[232433]:  <features>
Dec  6 02:42:36 np0005548731 nova_compute[232433]:    <acpi/>
Dec  6 02:42:36 np0005548731 nova_compute[232433]:    <apic/>
Dec  6 02:42:36 np0005548731 nova_compute[232433]:    <vmcoreinfo/>
Dec  6 02:42:36 np0005548731 nova_compute[232433]:  </features>
Dec  6 02:42:36 np0005548731 nova_compute[232433]:  <clock offset="utc">
Dec  6 02:42:36 np0005548731 nova_compute[232433]:    <timer name="pit" tickpolicy="delay"/>
Dec  6 02:42:36 np0005548731 nova_compute[232433]:    <timer name="rtc" tickpolicy="catchup"/>
Dec  6 02:42:36 np0005548731 nova_compute[232433]:    <timer name="hpet" present="no"/>
Dec  6 02:42:36 np0005548731 nova_compute[232433]:  </clock>
Dec  6 02:42:36 np0005548731 nova_compute[232433]:  <cpu mode="custom" match="exact">
Dec  6 02:42:36 np0005548731 nova_compute[232433]:    <model>Nehalem</model>
Dec  6 02:42:36 np0005548731 nova_compute[232433]:    <topology sockets="1" cores="1" threads="1"/>
Dec  6 02:42:36 np0005548731 nova_compute[232433]:  </cpu>
Dec  6 02:42:36 np0005548731 nova_compute[232433]:  <devices>
Dec  6 02:42:36 np0005548731 nova_compute[232433]:    <disk type="network" device="disk">
Dec  6 02:42:36 np0005548731 nova_compute[232433]:      <driver type="raw" cache="none"/>
Dec  6 02:42:36 np0005548731 nova_compute[232433]:      <source protocol="rbd" name="vms/e5068414-3119-4041-97c2-e4cef3bb779a_disk">
Dec  6 02:42:36 np0005548731 nova_compute[232433]:        <host name="192.168.122.100" port="6789"/>
Dec  6 02:42:36 np0005548731 nova_compute[232433]:        <host name="192.168.122.102" port="6789"/>
Dec  6 02:42:36 np0005548731 nova_compute[232433]:        <host name="192.168.122.101" port="6789"/>
Dec  6 02:42:36 np0005548731 nova_compute[232433]:      </source>
Dec  6 02:42:36 np0005548731 nova_compute[232433]:      <auth username="openstack">
Dec  6 02:42:36 np0005548731 nova_compute[232433]:        <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec  6 02:42:36 np0005548731 nova_compute[232433]:      </auth>
Dec  6 02:42:36 np0005548731 nova_compute[232433]:      <target dev="vda" bus="virtio"/>
Dec  6 02:42:36 np0005548731 nova_compute[232433]:    </disk>
Dec  6 02:42:36 np0005548731 nova_compute[232433]:    <disk type="network" device="cdrom">
Dec  6 02:42:36 np0005548731 nova_compute[232433]:      <driver type="raw" cache="none"/>
Dec  6 02:42:36 np0005548731 nova_compute[232433]:      <source protocol="rbd" name="vms/e5068414-3119-4041-97c2-e4cef3bb779a_disk.config">
Dec  6 02:42:36 np0005548731 nova_compute[232433]:        <host name="192.168.122.100" port="6789"/>
Dec  6 02:42:36 np0005548731 nova_compute[232433]:        <host name="192.168.122.102" port="6789"/>
Dec  6 02:42:36 np0005548731 nova_compute[232433]:        <host name="192.168.122.101" port="6789"/>
Dec  6 02:42:36 np0005548731 nova_compute[232433]:      </source>
Dec  6 02:42:36 np0005548731 nova_compute[232433]:      <auth username="openstack">
Dec  6 02:42:36 np0005548731 nova_compute[232433]:        <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec  6 02:42:36 np0005548731 nova_compute[232433]:      </auth>
Dec  6 02:42:36 np0005548731 nova_compute[232433]:      <target dev="sda" bus="sata"/>
Dec  6 02:42:36 np0005548731 nova_compute[232433]:    </disk>
Dec  6 02:42:36 np0005548731 nova_compute[232433]:    <interface type="ethernet">
Dec  6 02:42:36 np0005548731 nova_compute[232433]:      <mac address="fa:16:3e:f5:8d:1f"/>
Dec  6 02:42:36 np0005548731 nova_compute[232433]:      <model type="virtio"/>
Dec  6 02:42:36 np0005548731 nova_compute[232433]:      <driver name="vhost" rx_queue_size="512"/>
Dec  6 02:42:36 np0005548731 nova_compute[232433]:      <mtu size="1442"/>
Dec  6 02:42:36 np0005548731 nova_compute[232433]:      <target dev="tap9150edb9-25"/>
Dec  6 02:42:36 np0005548731 nova_compute[232433]:    </interface>
Dec  6 02:42:36 np0005548731 nova_compute[232433]:    <serial type="pty">
Dec  6 02:42:36 np0005548731 nova_compute[232433]:      <log file="/var/lib/nova/instances/e5068414-3119-4041-97c2-e4cef3bb779a/console.log" append="off"/>
Dec  6 02:42:36 np0005548731 nova_compute[232433]:    </serial>
Dec  6 02:42:36 np0005548731 nova_compute[232433]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec  6 02:42:36 np0005548731 nova_compute[232433]:    <video>
Dec  6 02:42:36 np0005548731 nova_compute[232433]:      <model type="virtio"/>
Dec  6 02:42:36 np0005548731 nova_compute[232433]:    </video>
Dec  6 02:42:36 np0005548731 nova_compute[232433]:    <input type="tablet" bus="usb"/>
Dec  6 02:42:36 np0005548731 nova_compute[232433]:    <rng model="virtio">
Dec  6 02:42:36 np0005548731 nova_compute[232433]:      <backend model="random">/dev/urandom</backend>
Dec  6 02:42:36 np0005548731 nova_compute[232433]:    </rng>
Dec  6 02:42:36 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root"/>
Dec  6 02:42:36 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:42:36 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:42:36 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:42:36 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:42:36 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:42:36 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:42:36 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:42:36 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:42:36 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:42:36 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:42:36 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:42:36 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:42:36 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:42:36 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:42:36 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:42:36 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:42:36 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:42:36 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:42:36 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:42:36 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:42:36 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:42:36 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:42:36 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:42:36 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:42:36 np0005548731 nova_compute[232433]:    <controller type="usb" index="0"/>
Dec  6 02:42:36 np0005548731 nova_compute[232433]:    <memballoon model="virtio">
Dec  6 02:42:36 np0005548731 nova_compute[232433]:      <stats period="10"/>
Dec  6 02:42:36 np0005548731 nova_compute[232433]:    </memballoon>
Dec  6 02:42:36 np0005548731 nova_compute[232433]:  </devices>
Dec  6 02:42:36 np0005548731 nova_compute[232433]: </domain>
Dec  6 02:42:36 np0005548731 nova_compute[232433]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec  6 02:42:36 np0005548731 nova_compute[232433]: 2025-12-06 07:42:36.230 232437 DEBUG nova.compute.manager [None req-0371dc49-6d82-4b8b-a0d2-53f5c252380a 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] [instance: e5068414-3119-4041-97c2-e4cef3bb779a] Preparing to wait for external event network-vif-plugged-9150edb9-25c0-443e-baec-8ba956bec23f prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec  6 02:42:36 np0005548731 nova_compute[232433]: 2025-12-06 07:42:36.230 232437 DEBUG oslo_concurrency.lockutils [None req-0371dc49-6d82-4b8b-a0d2-53f5c252380a 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] Acquiring lock "e5068414-3119-4041-97c2-e4cef3bb779a-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:42:36 np0005548731 nova_compute[232433]: 2025-12-06 07:42:36.230 232437 DEBUG oslo_concurrency.lockutils [None req-0371dc49-6d82-4b8b-a0d2-53f5c252380a 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] Lock "e5068414-3119-4041-97c2-e4cef3bb779a-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:42:36 np0005548731 nova_compute[232433]: 2025-12-06 07:42:36.230 232437 DEBUG oslo_concurrency.lockutils [None req-0371dc49-6d82-4b8b-a0d2-53f5c252380a 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] Lock "e5068414-3119-4041-97c2-e4cef3bb779a-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:42:36 np0005548731 nova_compute[232433]: 2025-12-06 07:42:36.231 232437 DEBUG nova.virt.libvirt.vif [None req-0371dc49-6d82-4b8b-a0d2-53f5c252380a 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T07:42:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachVolumeNegativeTest-server-8475590',display_name='tempest-AttachVolumeNegativeTest-server-8475590',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-attachvolumenegativetest-server-8475590',id=146,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDOAw6woOATDIm/DWqTIph4LibbWTtXMwMrs+K9JWtnpxynvb7JtS3OPLqjHfzzc8U4N4dhPecjsu4dz194SejMj6fp49R968XBi14ndk4PM/WEWME049321djHvZx3nDw==',key_name='tempest-keypair-1954418097',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='741dc47f9ced423cbd99fd6f9d32904f',ramdisk_id='',reservation_id='r-w4vw8hdt',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachVolumeNegativeTest-2080911030',owner_user_name='tempest-AttachVolumeNegativeTest-2080911030-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:42:28Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='297bc99c242e4fa8aedea4a6367b61c0',uuid=e5068414-3119-4041-97c2-e4cef3bb779a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "9150edb9-25c0-443e-baec-8ba956bec23f", "address": "fa:16:3e:f5:8d:1f", "network": {"id": "3c5d4817-c3d5-45fc-9890-418e779bacb2", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-1824643193-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "741dc47f9ced423cbd99fd6f9d32904f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9150edb9-25", "ovs_interfaceid": "9150edb9-25c0-443e-baec-8ba956bec23f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec  6 02:42:36 np0005548731 nova_compute[232433]: 2025-12-06 07:42:36.231 232437 DEBUG nova.network.os_vif_util [None req-0371dc49-6d82-4b8b-a0d2-53f5c252380a 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] Converting VIF {"id": "9150edb9-25c0-443e-baec-8ba956bec23f", "address": "fa:16:3e:f5:8d:1f", "network": {"id": "3c5d4817-c3d5-45fc-9890-418e779bacb2", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-1824643193-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "741dc47f9ced423cbd99fd6f9d32904f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9150edb9-25", "ovs_interfaceid": "9150edb9-25c0-443e-baec-8ba956bec23f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 02:42:36 np0005548731 nova_compute[232433]: 2025-12-06 07:42:36.232 232437 DEBUG nova.network.os_vif_util [None req-0371dc49-6d82-4b8b-a0d2-53f5c252380a 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f5:8d:1f,bridge_name='br-int',has_traffic_filtering=True,id=9150edb9-25c0-443e-baec-8ba956bec23f,network=Network(3c5d4817-c3d5-45fc-9890-418e779bacb2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9150edb9-25') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 02:42:36 np0005548731 nova_compute[232433]: 2025-12-06 07:42:36.232 232437 DEBUG os_vif [None req-0371dc49-6d82-4b8b-a0d2-53f5c252380a 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:f5:8d:1f,bridge_name='br-int',has_traffic_filtering=True,id=9150edb9-25c0-443e-baec-8ba956bec23f,network=Network(3c5d4817-c3d5-45fc-9890-418e779bacb2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9150edb9-25') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec  6 02:42:36 np0005548731 nova_compute[232433]: 2025-12-06 07:42:36.233 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:42:36 np0005548731 nova_compute[232433]: 2025-12-06 07:42:36.233 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:42:36 np0005548731 nova_compute[232433]: 2025-12-06 07:42:36.234 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  6 02:42:36 np0005548731 nova_compute[232433]: 2025-12-06 07:42:36.238 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:42:36 np0005548731 nova_compute[232433]: 2025-12-06 07:42:36.238 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9150edb9-25, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:42:36 np0005548731 nova_compute[232433]: 2025-12-06 07:42:36.238 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap9150edb9-25, col_values=(('external_ids', {'iface-id': '9150edb9-25c0-443e-baec-8ba956bec23f', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:f5:8d:1f', 'vm-uuid': 'e5068414-3119-4041-97c2-e4cef3bb779a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:42:36 np0005548731 nova_compute[232433]: 2025-12-06 07:42:36.240 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:42:36 np0005548731 NetworkManager[49182]: <info>  [1765006956.2409] manager: (tap9150edb9-25): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/312)
Dec  6 02:42:36 np0005548731 nova_compute[232433]: 2025-12-06 07:42:36.241 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  6 02:42:36 np0005548731 nova_compute[232433]: 2025-12-06 07:42:36.246 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:42:36 np0005548731 nova_compute[232433]: 2025-12-06 07:42:36.247 232437 INFO os_vif [None req-0371dc49-6d82-4b8b-a0d2-53f5c252380a 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:f5:8d:1f,bridge_name='br-int',has_traffic_filtering=True,id=9150edb9-25c0-443e-baec-8ba956bec23f,network=Network(3c5d4817-c3d5-45fc-9890-418e779bacb2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9150edb9-25')#033[00m
Dec  6 02:42:36 np0005548731 nova_compute[232433]: 2025-12-06 07:42:36.305 232437 DEBUG nova.virt.libvirt.driver [None req-0371dc49-6d82-4b8b-a0d2-53f5c252380a 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  6 02:42:36 np0005548731 nova_compute[232433]: 2025-12-06 07:42:36.305 232437 DEBUG nova.virt.libvirt.driver [None req-0371dc49-6d82-4b8b-a0d2-53f5c252380a 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  6 02:42:36 np0005548731 nova_compute[232433]: 2025-12-06 07:42:36.305 232437 DEBUG nova.virt.libvirt.driver [None req-0371dc49-6d82-4b8b-a0d2-53f5c252380a 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] No VIF found with MAC fa:16:3e:f5:8d:1f, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec  6 02:42:36 np0005548731 nova_compute[232433]: 2025-12-06 07:42:36.306 232437 INFO nova.virt.libvirt.driver [None req-0371dc49-6d82-4b8b-a0d2-53f5c252380a 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] [instance: e5068414-3119-4041-97c2-e4cef3bb779a] Using config drive#033[00m
Dec  6 02:42:36 np0005548731 nova_compute[232433]: 2025-12-06 07:42:36.328 232437 DEBUG nova.storage.rbd_utils [None req-0371dc49-6d82-4b8b-a0d2-53f5c252380a 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] rbd image e5068414-3119-4041-97c2-e4cef3bb779a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:42:36 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:42:36 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:42:36 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:42:36.328 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:42:36 np0005548731 nova_compute[232433]: 2025-12-06 07:42:36.637 232437 INFO nova.virt.libvirt.driver [None req-0371dc49-6d82-4b8b-a0d2-53f5c252380a 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] [instance: e5068414-3119-4041-97c2-e4cef3bb779a] Creating config drive at /var/lib/nova/instances/e5068414-3119-4041-97c2-e4cef3bb779a/disk.config#033[00m
Dec  6 02:42:36 np0005548731 nova_compute[232433]: 2025-12-06 07:42:36.649 232437 DEBUG oslo_concurrency.processutils [None req-0371dc49-6d82-4b8b-a0d2-53f5c252380a 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/e5068414-3119-4041-97c2-e4cef3bb779a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp6jsp_34d execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:42:36 np0005548731 nova_compute[232433]: 2025-12-06 07:42:36.810 232437 DEBUG oslo_concurrency.processutils [None req-0371dc49-6d82-4b8b-a0d2-53f5c252380a 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/e5068414-3119-4041-97c2-e4cef3bb779a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp6jsp_34d" returned: 0 in 0.160s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:42:36 np0005548731 nova_compute[232433]: 2025-12-06 07:42:36.843 232437 DEBUG nova.storage.rbd_utils [None req-0371dc49-6d82-4b8b-a0d2-53f5c252380a 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] rbd image e5068414-3119-4041-97c2-e4cef3bb779a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:42:36 np0005548731 nova_compute[232433]: 2025-12-06 07:42:36.850 232437 DEBUG oslo_concurrency.processutils [None req-0371dc49-6d82-4b8b-a0d2-53f5c252380a 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/e5068414-3119-4041-97c2-e4cef3bb779a/disk.config e5068414-3119-4041-97c2-e4cef3bb779a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:42:36 np0005548731 podman[296926]: 2025-12-06 07:42:36.90877152 +0000 UTC m=+0.061161851 container health_status 26c2ce9bdb83c6db5ccad7f5b2a7cb4e9354ab0235ad2f942ea42c8feda2142b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0)
Dec  6 02:42:36 np0005548731 podman[296943]: 2025-12-06 07:42:36.923906377 +0000 UTC m=+0.065714471 container health_status ae4fd08cdec31d6ae2a6b91ffff7deb6ca5a8375cb52ee4b685cc6f25796824e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=multipathd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Dec  6 02:42:36 np0005548731 podman[296942]: 2025-12-06 07:42:36.945294309 +0000 UTC m=+0.097830414 container health_status a118c3becf56cfbcae208065771ebf10a65162bb7b96389971293e534823fb78 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Dec  6 02:42:37 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:42:37 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:42:37 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:42:37.161 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:42:38 np0005548731 nova_compute[232433]: 2025-12-06 07:42:38.267 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:42:38 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:42:38 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:42:38 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:42:38.329 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:42:38 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e328 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:42:39 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:42:39 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:42:39 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:42:39.164 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:42:40 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:42:40 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:42:40 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:42:40.331 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:42:41 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:42:41 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:42:41 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:42:41.171 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:42:41 np0005548731 nova_compute[232433]: 2025-12-06 07:42:41.241 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:42:41 np0005548731 nova_compute[232433]: 2025-12-06 07:42:41.559 232437 DEBUG oslo_concurrency.processutils [None req-0371dc49-6d82-4b8b-a0d2-53f5c252380a 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/e5068414-3119-4041-97c2-e4cef3bb779a/disk.config e5068414-3119-4041-97c2-e4cef3bb779a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 4.709s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:42:41 np0005548731 nova_compute[232433]: 2025-12-06 07:42:41.560 232437 INFO nova.virt.libvirt.driver [None req-0371dc49-6d82-4b8b-a0d2-53f5c252380a 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] [instance: e5068414-3119-4041-97c2-e4cef3bb779a] Deleting local config drive /var/lib/nova/instances/e5068414-3119-4041-97c2-e4cef3bb779a/disk.config because it was imported into RBD.#033[00m
Dec  6 02:42:41 np0005548731 kernel: tap9150edb9-25: entered promiscuous mode
Dec  6 02:42:41 np0005548731 NetworkManager[49182]: <info>  [1765006961.6410] manager: (tap9150edb9-25): new Tun device (/org/freedesktop/NetworkManager/Devices/313)
Dec  6 02:42:41 np0005548731 ovn_controller[133927]: 2025-12-06T07:42:41Z|00684|binding|INFO|Claiming lport 9150edb9-25c0-443e-baec-8ba956bec23f for this chassis.
Dec  6 02:42:41 np0005548731 ovn_controller[133927]: 2025-12-06T07:42:41Z|00685|binding|INFO|9150edb9-25c0-443e-baec-8ba956bec23f: Claiming fa:16:3e:f5:8d:1f 10.100.0.14
Dec  6 02:42:41 np0005548731 nova_compute[232433]: 2025-12-06 07:42:41.735 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:42:41 np0005548731 nova_compute[232433]: 2025-12-06 07:42:41.736 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:42:41 np0005548731 systemd-udevd[297084]: Network interface NamePolicy= disabled on kernel command line.
Dec  6 02:42:41 np0005548731 nova_compute[232433]: 2025-12-06 07:42:41.741 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:42:41 np0005548731 NetworkManager[49182]: <info>  [1765006961.7479] manager: (patch-br-int-to-provnet-9e78c1a1-68f4-477a-abaa-13a98bde06e5): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/314)
Dec  6 02:42:41 np0005548731 nova_compute[232433]: 2025-12-06 07:42:41.746 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:42:41 np0005548731 NetworkManager[49182]: <info>  [1765006961.7492] manager: (patch-provnet-9e78c1a1-68f4-477a-abaa-13a98bde06e5-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/315)
Dec  6 02:42:41 np0005548731 NetworkManager[49182]: <info>  [1765006961.7525] device (tap9150edb9-25): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec  6 02:42:41 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:42:41.751 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f5:8d:1f 10.100.0.14'], port_security=['fa:16:3e:f5:8d:1f 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'e5068414-3119-4041-97c2-e4cef3bb779a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3c5d4817-c3d5-45fc-9890-418e779bacb2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '741dc47f9ced423cbd99fd6f9d32904f', 'neutron:revision_number': '2', 'neutron:security_group_ids': '50882c7d-6ee4-44a6-917b-83c2c44b35c7', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a93df87f-d2df-4d3a-b692-98bba32f2fe1, chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], logical_port=9150edb9-25c0-443e-baec-8ba956bec23f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 02:42:41 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:42:41.752 143965 INFO neutron.agent.ovn.metadata.agent [-] Port 9150edb9-25c0-443e-baec-8ba956bec23f in datapath 3c5d4817-c3d5-45fc-9890-418e779bacb2 bound to our chassis#033[00m
Dec  6 02:42:41 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:42:41.753 143965 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 3c5d4817-c3d5-45fc-9890-418e779bacb2#033[00m
Dec  6 02:42:41 np0005548731 NetworkManager[49182]: <info>  [1765006961.7550] device (tap9150edb9-25): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec  6 02:42:41 np0005548731 systemd-machined[195355]: New machine qemu-69-instance-00000092.
Dec  6 02:42:41 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:42:41.767 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[11dcaede-6d1e-4b4d-bac7-e8d0e5022334]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:42:41 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:42:41.768 143965 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap3c5d4817-c1 in ovnmeta-3c5d4817-c3d5-45fc-9890-418e779bacb2 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Dec  6 02:42:41 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:42:41.770 236668 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap3c5d4817-c0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Dec  6 02:42:41 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:42:41.770 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[da3f0a0a-c0dc-439d-b97e-3967d8f66aac]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:42:41 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:42:41.771 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[5a4a728a-b59f-499f-8a56-1fb2df0c7cc6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:42:41 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:42:41.785 144079 DEBUG oslo.privsep.daemon [-] privsep: reply[dcc02d9f-f927-4fe1-a5c6-5796adb1b8e2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:42:41 np0005548731 systemd[1]: Started Virtual Machine qemu-69-instance-00000092.
Dec  6 02:42:41 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:42:41.810 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[5fe95a3b-3ff8-4334-b103-2ccb94663808]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:42:41 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:42:41.839 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[3b80709e-7573-41bc-9629-aabf2823adac]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:42:41 np0005548731 systemd-udevd[297088]: Network interface NamePolicy= disabled on kernel command line.
Dec  6 02:42:41 np0005548731 NetworkManager[49182]: <info>  [1765006961.8462] manager: (tap3c5d4817-c0): new Veth device (/org/freedesktop/NetworkManager/Devices/316)
Dec  6 02:42:41 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:42:41.845 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[11043e34-d65d-4360-ae67-54a8e37eb55b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:42:41 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:42:41.878 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[330564e2-f57b-4a4f-884b-9a44850cb7f0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:42:41 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:42:41.880 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[8ef70a8f-e8a6-41ed-bde3-b7083fc1c7c1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:42:41 np0005548731 NetworkManager[49182]: <info>  [1765006961.9022] device (tap3c5d4817-c0): carrier: link connected
Dec  6 02:42:41 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:42:41.908 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[3cfec91a-b1a5-49a2-857c-e9d38e809e3b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:42:41 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:42:41.925 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[06b7e59f-5d59-4bd7-af44-70fc70d9fda4]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3c5d4817-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:36:51:7c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 208], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 722390, 'reachable_time': 16061, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 297120, 'error': None, 'target': 'ovnmeta-3c5d4817-c3d5-45fc-9890-418e779bacb2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:42:41 np0005548731 nova_compute[232433]: 2025-12-06 07:42:41.938 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:42:41 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:42:41.945 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[0c67c968-538e-409e-94da-ddb9461c5166]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe36:517c'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 722390, 'tstamp': 722390}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 297121, 'error': None, 'target': 'ovnmeta-3c5d4817-c3d5-45fc-9890-418e779bacb2', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:42:41 np0005548731 nova_compute[232433]: 2025-12-06 07:42:41.958 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:42:41 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:42:41.960 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[f4ccdaf1-aed4-4096-a9a0-3de71ace9152]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3c5d4817-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:36:51:7c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 208], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 722390, 'reachable_time': 16061, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 297122, 'error': None, 'target': 'ovnmeta-3c5d4817-c3d5-45fc-9890-418e779bacb2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:42:41 np0005548731 ovn_controller[133927]: 2025-12-06T07:42:41Z|00686|binding|INFO|Setting lport 9150edb9-25c0-443e-baec-8ba956bec23f ovn-installed in OVS
Dec  6 02:42:41 np0005548731 ovn_controller[133927]: 2025-12-06T07:42:41Z|00687|binding|INFO|Setting lport 9150edb9-25c0-443e-baec-8ba956bec23f up in Southbound
Dec  6 02:42:41 np0005548731 nova_compute[232433]: 2025-12-06 07:42:41.971 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:42:41 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:42:41.989 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[ade2794a-8f9b-40c8-a035-84b6855a1e2b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:42:42 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:42:42.038 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[03587d74-35b3-4415-852d-fb36745b77c0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:42:42 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:42:42.040 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3c5d4817-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:42:42 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:42:42.041 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  6 02:42:42 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:42:42.042 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3c5d4817-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:42:42 np0005548731 NetworkManager[49182]: <info>  [1765006962.0453] manager: (tap3c5d4817-c0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/317)
Dec  6 02:42:42 np0005548731 kernel: tap3c5d4817-c0: entered promiscuous mode
Dec  6 02:42:42 np0005548731 nova_compute[232433]: 2025-12-06 07:42:42.044 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:42:42 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:42:42.050 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap3c5d4817-c0, col_values=(('external_ids', {'iface-id': 'dc336d05-182d-42ac-ab5e-a73bf30a0662'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:42:42 np0005548731 nova_compute[232433]: 2025-12-06 07:42:42.052 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:42:42 np0005548731 ovn_controller[133927]: 2025-12-06T07:42:42Z|00688|binding|INFO|Releasing lport dc336d05-182d-42ac-ab5e-a73bf30a0662 from this chassis (sb_readonly=0)
Dec  6 02:42:42 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:42:42.056 143965 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/3c5d4817-c3d5-45fc-9890-418e779bacb2.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/3c5d4817-c3d5-45fc-9890-418e779bacb2.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Dec  6 02:42:42 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:42:42.057 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[8b157751-bf2a-4d3c-aae3-d7fb6af00d56]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:42:42 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:42:42.058 143965 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec  6 02:42:42 np0005548731 ovn_metadata_agent[143960]: global
Dec  6 02:42:42 np0005548731 ovn_metadata_agent[143960]:    log         /dev/log local0 debug
Dec  6 02:42:42 np0005548731 ovn_metadata_agent[143960]:    log-tag     haproxy-metadata-proxy-3c5d4817-c3d5-45fc-9890-418e779bacb2
Dec  6 02:42:42 np0005548731 ovn_metadata_agent[143960]:    user        root
Dec  6 02:42:42 np0005548731 ovn_metadata_agent[143960]:    group       root
Dec  6 02:42:42 np0005548731 ovn_metadata_agent[143960]:    maxconn     1024
Dec  6 02:42:42 np0005548731 ovn_metadata_agent[143960]:    pidfile     /var/lib/neutron/external/pids/3c5d4817-c3d5-45fc-9890-418e779bacb2.pid.haproxy
Dec  6 02:42:42 np0005548731 ovn_metadata_agent[143960]:    daemon
Dec  6 02:42:42 np0005548731 ovn_metadata_agent[143960]: 
Dec  6 02:42:42 np0005548731 ovn_metadata_agent[143960]: defaults
Dec  6 02:42:42 np0005548731 ovn_metadata_agent[143960]:    log global
Dec  6 02:42:42 np0005548731 ovn_metadata_agent[143960]:    mode http
Dec  6 02:42:42 np0005548731 ovn_metadata_agent[143960]:    option httplog
Dec  6 02:42:42 np0005548731 ovn_metadata_agent[143960]:    option dontlognull
Dec  6 02:42:42 np0005548731 ovn_metadata_agent[143960]:    option http-server-close
Dec  6 02:42:42 np0005548731 ovn_metadata_agent[143960]:    option forwardfor
Dec  6 02:42:42 np0005548731 ovn_metadata_agent[143960]:    retries                 3
Dec  6 02:42:42 np0005548731 ovn_metadata_agent[143960]:    timeout http-request    30s
Dec  6 02:42:42 np0005548731 ovn_metadata_agent[143960]:    timeout connect         30s
Dec  6 02:42:42 np0005548731 ovn_metadata_agent[143960]:    timeout client          32s
Dec  6 02:42:42 np0005548731 ovn_metadata_agent[143960]:    timeout server          32s
Dec  6 02:42:42 np0005548731 ovn_metadata_agent[143960]:    timeout http-keep-alive 30s
Dec  6 02:42:42 np0005548731 ovn_metadata_agent[143960]: 
Dec  6 02:42:42 np0005548731 ovn_metadata_agent[143960]: 
Dec  6 02:42:42 np0005548731 ovn_metadata_agent[143960]: listen listener
Dec  6 02:42:42 np0005548731 ovn_metadata_agent[143960]:    bind 169.254.169.254:80
Dec  6 02:42:42 np0005548731 ovn_metadata_agent[143960]:    server metadata /var/lib/neutron/metadata_proxy
Dec  6 02:42:42 np0005548731 ovn_metadata_agent[143960]:    http-request add-header X-OVN-Network-ID 3c5d4817-c3d5-45fc-9890-418e779bacb2
Dec  6 02:42:42 np0005548731 ovn_metadata_agent[143960]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Dec  6 02:42:42 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:42:42.060 143965 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-3c5d4817-c3d5-45fc-9890-418e779bacb2', 'env', 'PROCESS_TAG=haproxy-3c5d4817-c3d5-45fc-9890-418e779bacb2', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/3c5d4817-c3d5-45fc-9890-418e779bacb2.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Dec  6 02:42:42 np0005548731 nova_compute[232433]: 2025-12-06 07:42:42.066 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:42:42 np0005548731 nova_compute[232433]: 2025-12-06 07:42:42.211 232437 DEBUG nova.compute.manager [req-d9632c60-f61e-4686-aa3c-dc37bc331257 req-1ad41fad-a51b-472a-affd-7eedce161b63 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: e5068414-3119-4041-97c2-e4cef3bb779a] Received event network-vif-plugged-9150edb9-25c0-443e-baec-8ba956bec23f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:42:42 np0005548731 nova_compute[232433]: 2025-12-06 07:42:42.211 232437 DEBUG oslo_concurrency.lockutils [req-d9632c60-f61e-4686-aa3c-dc37bc331257 req-1ad41fad-a51b-472a-affd-7eedce161b63 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "e5068414-3119-4041-97c2-e4cef3bb779a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:42:42 np0005548731 nova_compute[232433]: 2025-12-06 07:42:42.211 232437 DEBUG oslo_concurrency.lockutils [req-d9632c60-f61e-4686-aa3c-dc37bc331257 req-1ad41fad-a51b-472a-affd-7eedce161b63 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "e5068414-3119-4041-97c2-e4cef3bb779a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:42:42 np0005548731 nova_compute[232433]: 2025-12-06 07:42:42.211 232437 DEBUG oslo_concurrency.lockutils [req-d9632c60-f61e-4686-aa3c-dc37bc331257 req-1ad41fad-a51b-472a-affd-7eedce161b63 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "e5068414-3119-4041-97c2-e4cef3bb779a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:42:42 np0005548731 nova_compute[232433]: 2025-12-06 07:42:42.211 232437 DEBUG nova.compute.manager [req-d9632c60-f61e-4686-aa3c-dc37bc331257 req-1ad41fad-a51b-472a-affd-7eedce161b63 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: e5068414-3119-4041-97c2-e4cef3bb779a] Processing event network-vif-plugged-9150edb9-25c0-443e-baec-8ba956bec23f _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Dec  6 02:42:42 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:42:42 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:42:42 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:42:42.333 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:42:42 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  6 02:42:42 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/35790465' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  6 02:42:42 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  6 02:42:42 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/35790465' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  6 02:42:42 np0005548731 nova_compute[232433]: 2025-12-06 07:42:42.555 232437 DEBUG nova.virt.driver [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Emitting event <LifecycleEvent: 1765006962.5549316, e5068414-3119-4041-97c2-e4cef3bb779a => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 02:42:42 np0005548731 nova_compute[232433]: 2025-12-06 07:42:42.556 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: e5068414-3119-4041-97c2-e4cef3bb779a] VM Started (Lifecycle Event)#033[00m
Dec  6 02:42:42 np0005548731 nova_compute[232433]: 2025-12-06 07:42:42.559 232437 DEBUG nova.compute.manager [None req-0371dc49-6d82-4b8b-a0d2-53f5c252380a 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] [instance: e5068414-3119-4041-97c2-e4cef3bb779a] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec  6 02:42:42 np0005548731 nova_compute[232433]: 2025-12-06 07:42:42.565 232437 DEBUG nova.virt.libvirt.driver [None req-0371dc49-6d82-4b8b-a0d2-53f5c252380a 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] [instance: e5068414-3119-4041-97c2-e4cef3bb779a] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Dec  6 02:42:42 np0005548731 nova_compute[232433]: 2025-12-06 07:42:42.569 232437 INFO nova.virt.libvirt.driver [-] [instance: e5068414-3119-4041-97c2-e4cef3bb779a] Instance spawned successfully.#033[00m
Dec  6 02:42:42 np0005548731 nova_compute[232433]: 2025-12-06 07:42:42.569 232437 DEBUG nova.virt.libvirt.driver [None req-0371dc49-6d82-4b8b-a0d2-53f5c252380a 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] [instance: e5068414-3119-4041-97c2-e4cef3bb779a] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Dec  6 02:42:42 np0005548731 nova_compute[232433]: 2025-12-06 07:42:42.590 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: e5068414-3119-4041-97c2-e4cef3bb779a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:42:42 np0005548731 nova_compute[232433]: 2025-12-06 07:42:42.598 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: e5068414-3119-4041-97c2-e4cef3bb779a] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  6 02:42:42 np0005548731 nova_compute[232433]: 2025-12-06 07:42:42.601 232437 DEBUG nova.virt.libvirt.driver [None req-0371dc49-6d82-4b8b-a0d2-53f5c252380a 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] [instance: e5068414-3119-4041-97c2-e4cef3bb779a] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 02:42:42 np0005548731 nova_compute[232433]: 2025-12-06 07:42:42.602 232437 DEBUG nova.virt.libvirt.driver [None req-0371dc49-6d82-4b8b-a0d2-53f5c252380a 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] [instance: e5068414-3119-4041-97c2-e4cef3bb779a] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 02:42:42 np0005548731 nova_compute[232433]: 2025-12-06 07:42:42.602 232437 DEBUG nova.virt.libvirt.driver [None req-0371dc49-6d82-4b8b-a0d2-53f5c252380a 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] [instance: e5068414-3119-4041-97c2-e4cef3bb779a] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 02:42:42 np0005548731 nova_compute[232433]: 2025-12-06 07:42:42.602 232437 DEBUG nova.virt.libvirt.driver [None req-0371dc49-6d82-4b8b-a0d2-53f5c252380a 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] [instance: e5068414-3119-4041-97c2-e4cef3bb779a] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 02:42:42 np0005548731 nova_compute[232433]: 2025-12-06 07:42:42.603 232437 DEBUG nova.virt.libvirt.driver [None req-0371dc49-6d82-4b8b-a0d2-53f5c252380a 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] [instance: e5068414-3119-4041-97c2-e4cef3bb779a] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 02:42:42 np0005548731 nova_compute[232433]: 2025-12-06 07:42:42.603 232437 DEBUG nova.virt.libvirt.driver [None req-0371dc49-6d82-4b8b-a0d2-53f5c252380a 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] [instance: e5068414-3119-4041-97c2-e4cef3bb779a] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 02:42:42 np0005548731 podman[297197]: 2025-12-06 07:42:42.624247998 +0000 UTC m=+0.046672169 container create 29da0ea242486d426a7880cc0b88d08df1439ab7c7ec0b98a8b5629eacc3773e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3c5d4817-c3d5-45fc-9890-418e779bacb2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec  6 02:42:42 np0005548731 nova_compute[232433]: 2025-12-06 07:42:42.634 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: e5068414-3119-4041-97c2-e4cef3bb779a] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  6 02:42:42 np0005548731 nova_compute[232433]: 2025-12-06 07:42:42.635 232437 DEBUG nova.virt.driver [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Emitting event <LifecycleEvent: 1765006962.555034, e5068414-3119-4041-97c2-e4cef3bb779a => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 02:42:42 np0005548731 nova_compute[232433]: 2025-12-06 07:42:42.635 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: e5068414-3119-4041-97c2-e4cef3bb779a] VM Paused (Lifecycle Event)#033[00m
Dec  6 02:42:42 np0005548731 systemd[1]: Started libpod-conmon-29da0ea242486d426a7880cc0b88d08df1439ab7c7ec0b98a8b5629eacc3773e.scope.
Dec  6 02:42:42 np0005548731 nova_compute[232433]: 2025-12-06 07:42:42.680 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: e5068414-3119-4041-97c2-e4cef3bb779a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:42:42 np0005548731 nova_compute[232433]: 2025-12-06 07:42:42.685 232437 DEBUG nova.virt.driver [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Emitting event <LifecycleEvent: 1765006962.5632417, e5068414-3119-4041-97c2-e4cef3bb779a => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 02:42:42 np0005548731 nova_compute[232433]: 2025-12-06 07:42:42.685 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: e5068414-3119-4041-97c2-e4cef3bb779a] VM Resumed (Lifecycle Event)#033[00m
Dec  6 02:42:42 np0005548731 systemd[1]: Started libcrun container.
Dec  6 02:42:42 np0005548731 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/66c664a058df5ac54a67b23d6929894f9aaa92601294d8f6cce9a620741acc3a/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec  6 02:42:42 np0005548731 podman[297197]: 2025-12-06 07:42:42.598319626 +0000 UTC m=+0.020743817 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec  6 02:42:42 np0005548731 podman[297197]: 2025-12-06 07:42:42.705533668 +0000 UTC m=+0.127957859 container init 29da0ea242486d426a7880cc0b88d08df1439ab7c7ec0b98a8b5629eacc3773e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3c5d4817-c3d5-45fc-9890-418e779bacb2, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS)
Dec  6 02:42:42 np0005548731 nova_compute[232433]: 2025-12-06 07:42:42.710 232437 INFO nova.compute.manager [None req-0371dc49-6d82-4b8b-a0d2-53f5c252380a 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] [instance: e5068414-3119-4041-97c2-e4cef3bb779a] Took 14.13 seconds to spawn the instance on the hypervisor.#033[00m
Dec  6 02:42:42 np0005548731 nova_compute[232433]: 2025-12-06 07:42:42.710 232437 DEBUG nova.compute.manager [None req-0371dc49-6d82-4b8b-a0d2-53f5c252380a 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] [instance: e5068414-3119-4041-97c2-e4cef3bb779a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:42:42 np0005548731 podman[297197]: 2025-12-06 07:42:42.714307612 +0000 UTC m=+0.136731783 container start 29da0ea242486d426a7880cc0b88d08df1439ab7c7ec0b98a8b5629eacc3773e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3c5d4817-c3d5-45fc-9890-418e779bacb2, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125)
Dec  6 02:42:42 np0005548731 nova_compute[232433]: 2025-12-06 07:42:42.723 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: e5068414-3119-4041-97c2-e4cef3bb779a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:42:42 np0005548731 nova_compute[232433]: 2025-12-06 07:42:42.726 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: e5068414-3119-4041-97c2-e4cef3bb779a] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  6 02:42:42 np0005548731 neutron-haproxy-ovnmeta-3c5d4817-c3d5-45fc-9890-418e779bacb2[297213]: [NOTICE]   (297217) : New worker (297219) forked
Dec  6 02:42:42 np0005548731 neutron-haproxy-ovnmeta-3c5d4817-c3d5-45fc-9890-418e779bacb2[297213]: [NOTICE]   (297217) : Loading success.
Dec  6 02:42:42 np0005548731 nova_compute[232433]: 2025-12-06 07:42:42.766 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: e5068414-3119-4041-97c2-e4cef3bb779a] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  6 02:42:42 np0005548731 nova_compute[232433]: 2025-12-06 07:42:42.780 232437 INFO nova.compute.manager [None req-0371dc49-6d82-4b8b-a0d2-53f5c252380a 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] [instance: e5068414-3119-4041-97c2-e4cef3bb779a] Took 15.23 seconds to build instance.#033[00m
Dec  6 02:42:42 np0005548731 nova_compute[232433]: 2025-12-06 07:42:42.798 232437 DEBUG oslo_concurrency.lockutils [None req-0371dc49-6d82-4b8b-a0d2-53f5c252380a 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] Lock "e5068414-3119-4041-97c2-e4cef3bb779a" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 15.324s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:42:43 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:42:43 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:42:43 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:42:43.173 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:42:43 np0005548731 nova_compute[232433]: 2025-12-06 07:42:43.270 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:42:43 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e328 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:42:44 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:42:44 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:42:44 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:42:44.334 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:42:44 np0005548731 nova_compute[232433]: 2025-12-06 07:42:44.637 232437 DEBUG nova.compute.manager [req-c8d35e66-6e6a-4ca4-882d-102779f22b7d req-6cbb92f7-45cc-4d8e-84ac-82b08a53e343 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: e5068414-3119-4041-97c2-e4cef3bb779a] Received event network-vif-plugged-9150edb9-25c0-443e-baec-8ba956bec23f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:42:44 np0005548731 nova_compute[232433]: 2025-12-06 07:42:44.637 232437 DEBUG oslo_concurrency.lockutils [req-c8d35e66-6e6a-4ca4-882d-102779f22b7d req-6cbb92f7-45cc-4d8e-84ac-82b08a53e343 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "e5068414-3119-4041-97c2-e4cef3bb779a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:42:44 np0005548731 nova_compute[232433]: 2025-12-06 07:42:44.637 232437 DEBUG oslo_concurrency.lockutils [req-c8d35e66-6e6a-4ca4-882d-102779f22b7d req-6cbb92f7-45cc-4d8e-84ac-82b08a53e343 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "e5068414-3119-4041-97c2-e4cef3bb779a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:42:44 np0005548731 nova_compute[232433]: 2025-12-06 07:42:44.638 232437 DEBUG oslo_concurrency.lockutils [req-c8d35e66-6e6a-4ca4-882d-102779f22b7d req-6cbb92f7-45cc-4d8e-84ac-82b08a53e343 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "e5068414-3119-4041-97c2-e4cef3bb779a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:42:44 np0005548731 nova_compute[232433]: 2025-12-06 07:42:44.638 232437 DEBUG nova.compute.manager [req-c8d35e66-6e6a-4ca4-882d-102779f22b7d req-6cbb92f7-45cc-4d8e-84ac-82b08a53e343 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: e5068414-3119-4041-97c2-e4cef3bb779a] No waiting events found dispatching network-vif-plugged-9150edb9-25c0-443e-baec-8ba956bec23f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 02:42:44 np0005548731 nova_compute[232433]: 2025-12-06 07:42:44.638 232437 WARNING nova.compute.manager [req-c8d35e66-6e6a-4ca4-882d-102779f22b7d req-6cbb92f7-45cc-4d8e-84ac-82b08a53e343 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: e5068414-3119-4041-97c2-e4cef3bb779a] Received unexpected event network-vif-plugged-9150edb9-25c0-443e-baec-8ba956bec23f for instance with vm_state active and task_state None.#033[00m
Dec  6 02:42:45 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:42:45 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:42:45 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:42:45.175 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:42:45 np0005548731 nova_compute[232433]: 2025-12-06 07:42:45.977 232437 DEBUG nova.compute.manager [req-042bf1a7-09fa-4ae9-9a8f-806731a06858 req-da6aaa35-e4fb-4305-8040-9c1bcbe94e50 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: e5068414-3119-4041-97c2-e4cef3bb779a] Received event network-changed-9150edb9-25c0-443e-baec-8ba956bec23f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:42:45 np0005548731 nova_compute[232433]: 2025-12-06 07:42:45.977 232437 DEBUG nova.compute.manager [req-042bf1a7-09fa-4ae9-9a8f-806731a06858 req-da6aaa35-e4fb-4305-8040-9c1bcbe94e50 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: e5068414-3119-4041-97c2-e4cef3bb779a] Refreshing instance network info cache due to event network-changed-9150edb9-25c0-443e-baec-8ba956bec23f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  6 02:42:45 np0005548731 nova_compute[232433]: 2025-12-06 07:42:45.977 232437 DEBUG oslo_concurrency.lockutils [req-042bf1a7-09fa-4ae9-9a8f-806731a06858 req-da6aaa35-e4fb-4305-8040-9c1bcbe94e50 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-e5068414-3119-4041-97c2-e4cef3bb779a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 02:42:45 np0005548731 nova_compute[232433]: 2025-12-06 07:42:45.977 232437 DEBUG oslo_concurrency.lockutils [req-042bf1a7-09fa-4ae9-9a8f-806731a06858 req-da6aaa35-e4fb-4305-8040-9c1bcbe94e50 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-e5068414-3119-4041-97c2-e4cef3bb779a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 02:42:45 np0005548731 nova_compute[232433]: 2025-12-06 07:42:45.978 232437 DEBUG nova.network.neutron [req-042bf1a7-09fa-4ae9-9a8f-806731a06858 req-da6aaa35-e4fb-4305-8040-9c1bcbe94e50 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: e5068414-3119-4041-97c2-e4cef3bb779a] Refreshing network info cache for port 9150edb9-25c0-443e-baec-8ba956bec23f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  6 02:42:46 np0005548731 nova_compute[232433]: 2025-12-06 07:42:46.244 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:42:46 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:42:46 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:42:46 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:42:46.335 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:42:47 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:42:47 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:42:47 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:42:47.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:42:48 np0005548731 nova_compute[232433]: 2025-12-06 07:42:48.136 232437 DEBUG nova.network.neutron [req-042bf1a7-09fa-4ae9-9a8f-806731a06858 req-da6aaa35-e4fb-4305-8040-9c1bcbe94e50 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: e5068414-3119-4041-97c2-e4cef3bb779a] Updated VIF entry in instance network info cache for port 9150edb9-25c0-443e-baec-8ba956bec23f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  6 02:42:48 np0005548731 nova_compute[232433]: 2025-12-06 07:42:48.136 232437 DEBUG nova.network.neutron [req-042bf1a7-09fa-4ae9-9a8f-806731a06858 req-da6aaa35-e4fb-4305-8040-9c1bcbe94e50 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: e5068414-3119-4041-97c2-e4cef3bb779a] Updating instance_info_cache with network_info: [{"id": "9150edb9-25c0-443e-baec-8ba956bec23f", "address": "fa:16:3e:f5:8d:1f", "network": {"id": "3c5d4817-c3d5-45fc-9890-418e779bacb2", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-1824643193-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.179", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "741dc47f9ced423cbd99fd6f9d32904f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9150edb9-25", "ovs_interfaceid": "9150edb9-25c0-443e-baec-8ba956bec23f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 02:42:48 np0005548731 nova_compute[232433]: 2025-12-06 07:42:48.191 232437 DEBUG oslo_concurrency.lockutils [req-042bf1a7-09fa-4ae9-9a8f-806731a06858 req-da6aaa35-e4fb-4305-8040-9c1bcbe94e50 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-e5068414-3119-4041-97c2-e4cef3bb779a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 02:42:48 np0005548731 nova_compute[232433]: 2025-12-06 07:42:48.272 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:42:48 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:42:48 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:42:48 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:42:48.337 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:42:48 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e328 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:42:49 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:42:49 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:42:49 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:42:49.180 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:42:50 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:42:50 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:42:50 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:42:50.339 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:42:51 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:42:51 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:42:51 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:42:51.183 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:42:51 np0005548731 nova_compute[232433]: 2025-12-06 07:42:51.288 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:42:52 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:42:52 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:42:52 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:42:52.341 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:42:53 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:42:53 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 02:42:53 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:42:53.186 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 02:42:53 np0005548731 nova_compute[232433]: 2025-12-06 07:42:53.274 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:42:53 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e328 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:42:54 np0005548731 ovn_controller[133927]: 2025-12-06T07:42:54Z|00689|binding|INFO|Releasing lport dc336d05-182d-42ac-ab5e-a73bf30a0662 from this chassis (sb_readonly=0)
Dec  6 02:42:54 np0005548731 nova_compute[232433]: 2025-12-06 07:42:54.194 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:42:54 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:42:54 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:42:54 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:42:54.343 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:42:55 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:42:55 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:42:55 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:42:55.189 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:42:56 np0005548731 nova_compute[232433]: 2025-12-06 07:42:56.333 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:42:56 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:42:56 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:42:56 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:42:56.346 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:42:56 np0005548731 ovn_controller[133927]: 2025-12-06T07:42:56Z|00089|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:f5:8d:1f 10.100.0.14
Dec  6 02:42:56 np0005548731 ovn_controller[133927]: 2025-12-06T07:42:56Z|00090|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:f5:8d:1f 10.100.0.14
Dec  6 02:42:57 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:42:57 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:42:57 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:42:57.193 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:42:58 np0005548731 nova_compute[232433]: 2025-12-06 07:42:58.277 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:42:58 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:42:58 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:42:58 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:42:58.349 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:42:58 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e328 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:42:59 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:42:59 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:42:59 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:42:59.195 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:43:00 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:43:00 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 02:43:00 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:43:00.351 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 02:43:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:43:00.883 143965 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:43:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:43:00.884 143965 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:43:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:43:00.884 143965 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:43:01 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:43:01 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:43:01 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:43:01.197 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:43:01 np0005548731 nova_compute[232433]: 2025-12-06 07:43:01.335 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:43:02 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:43:02 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:43:02 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:43:02.353 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:43:03 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:43:03 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:43:03 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:43:03.200 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:43:03 np0005548731 nova_compute[232433]: 2025-12-06 07:43:03.279 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:43:03 np0005548731 nova_compute[232433]: 2025-12-06 07:43:03.365 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:43:03 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e328 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:43:04 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:43:04 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:43:04 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:43:04.355 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:43:05 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:43:05 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:43:05 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:43:05.202 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:43:06 np0005548731 nova_compute[232433]: 2025-12-06 07:43:06.338 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:43:06 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:43:06 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:43:06 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:43:06.357 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:43:06 np0005548731 nova_compute[232433]: 2025-12-06 07:43:06.609 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:43:07 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:43:07 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:43:07 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:43:07.205 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:43:07 np0005548731 podman[297290]: 2025-12-06 07:43:07.898572069 +0000 UTC m=+0.061274915 container health_status 26c2ce9bdb83c6db5ccad7f5b2a7cb4e9354ab0235ad2f942ea42c8feda2142b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  6 02:43:07 np0005548731 podman[297292]: 2025-12-06 07:43:07.913372021 +0000 UTC m=+0.070886320 container health_status ae4fd08cdec31d6ae2a6b91ffff7deb6ca5a8375cb52ee4b685cc6f25796824e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.build-date=20251125, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Dec  6 02:43:07 np0005548731 podman[297291]: 2025-12-06 07:43:07.929748869 +0000 UTC m=+0.086620093 container health_status a118c3becf56cfbcae208065771ebf10a65162bb7b96389971293e534823fb78 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.build-date=20251125)
Dec  6 02:43:08 np0005548731 nova_compute[232433]: 2025-12-06 07:43:08.282 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:43:08 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:43:08 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:43:08 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:43:08.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:43:08 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e328 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:43:08 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  6 02:43:08 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3363148672' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  6 02:43:08 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  6 02:43:08 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3363148672' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  6 02:43:09 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:43:09 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 02:43:09 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:43:09.208 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 02:43:10 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:43:10 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 02:43:10 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:43:10.360 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 02:43:11 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 02:43:11 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 02:43:11 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Dec  6 02:43:11 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Dec  6 02:43:11 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:43:11 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:43:11 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:43:11.211 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:43:11 np0005548731 nova_compute[232433]: 2025-12-06 07:43:11.339 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:43:11 np0005548731 podman[297643]: 2025-12-06 07:43:11.390914374 +0000 UTC m=+0.066665126 container exec 541e7276f6038dd900483907d1613bdfcbf99ea3714cf53a920a7189986fab54 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-mon-compute-2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Dec  6 02:43:11 np0005548731 podman[297643]: 2025-12-06 07:43:11.489871807 +0000 UTC m=+0.165622559 container exec_died 541e7276f6038dd900483907d1613bdfcbf99ea3714cf53a920a7189986fab54 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-mon-compute-2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Dec  6 02:43:12 np0005548731 podman[297796]: 2025-12-06 07:43:12.061917904 +0000 UTC m=+0.054691655 container exec 42755d5ddab81b86d84b40c9464a8af637e2d196c1a098a61593efb6a9875167 (image=quay.io/ceph/haproxy:2.3, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-haproxy-rgw-default-compute-2-nyemkw)
Dec  6 02:43:12 np0005548731 podman[297817]: 2025-12-06 07:43:12.259727277 +0000 UTC m=+0.182134391 container exec_died 42755d5ddab81b86d84b40c9464a8af637e2d196c1a098a61593efb6a9875167 (image=quay.io/ceph/haproxy:2.3, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-haproxy-rgw-default-compute-2-nyemkw)
Dec  6 02:43:12 np0005548731 podman[297796]: 2025-12-06 07:43:12.265350175 +0000 UTC m=+0.258123906 container exec_died 42755d5ddab81b86d84b40c9464a8af637e2d196c1a098a61593efb6a9875167 (image=quay.io/ceph/haproxy:2.3, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-haproxy-rgw-default-compute-2-nyemkw)
Dec  6 02:43:12 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:43:12 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:43:12 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:43:12.363 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:43:12 np0005548731 podman[297861]: 2025-12-06 07:43:12.454321942 +0000 UTC m=+0.049858627 container exec 60f905acca99db805341691264f7f91f9f4f43c91aa728d21396595dfca7cf39 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-keepalived-rgw-default-compute-2-cossgt, summary=Provides keepalived on RHEL 9 for Ceph., vcs-type=git, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.28.2, name=keepalived, version=2.2.4, description=keepalived for Ceph, com.redhat.component=keepalived-container, distribution-scope=public, io.openshift.expose-services=, io.openshift.tags=Ceph keepalived, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, build-date=2023-02-22T09:23:20, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1793, architecture=x86_64, io.k8s.display-name=Keepalived on RHEL 9)
Dec  6 02:43:12 np0005548731 podman[297861]: 2025-12-06 07:43:12.466838597 +0000 UTC m=+0.062375262 container exec_died 60f905acca99db805341691264f7f91f9f4f43c91aa728d21396595dfca7cf39 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-keepalived-rgw-default-compute-2-cossgt, vendor=Red Hat, Inc., build-date=2023-02-22T09:23:20, release=1793, name=keepalived, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides keepalived on RHEL 9 for Ceph., vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, com.redhat.component=keepalived-container, distribution-scope=public, vcs-type=git, version=2.2.4, io.buildah.version=1.28.2, io.openshift.tags=Ceph keepalived, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, architecture=x86_64, description=keepalived for Ceph, io.k8s.display-name=Keepalived on RHEL 9, io.openshift.expose-services=)
Dec  6 02:43:13 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:43:13 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:43:13 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:43:13.213 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:43:13 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 02:43:13 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 02:43:13 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  6 02:43:13 np0005548731 nova_compute[232433]: 2025-12-06 07:43:13.286 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:43:13 np0005548731 nova_compute[232433]: 2025-12-06 07:43:13.374 232437 DEBUG oslo_concurrency.lockutils [None req-16071c71-f9d3-45f4-a64d-de26d1bb2ee1 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Acquiring lock "4c6cadbd-1c16-490c-aa8c-049b1595ca1b" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:43:13 np0005548731 nova_compute[232433]: 2025-12-06 07:43:13.374 232437 DEBUG oslo_concurrency.lockutils [None req-16071c71-f9d3-45f4-a64d-de26d1bb2ee1 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Lock "4c6cadbd-1c16-490c-aa8c-049b1595ca1b" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:43:13 np0005548731 nova_compute[232433]: 2025-12-06 07:43:13.393 232437 DEBUG nova.compute.manager [None req-16071c71-f9d3-45f4-a64d-de26d1bb2ee1 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] [instance: 4c6cadbd-1c16-490c-aa8c-049b1595ca1b] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Dec  6 02:43:13 np0005548731 nova_compute[232433]: 2025-12-06 07:43:13.473 232437 DEBUG oslo_concurrency.lockutils [None req-16071c71-f9d3-45f4-a64d-de26d1bb2ee1 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:43:13 np0005548731 nova_compute[232433]: 2025-12-06 07:43:13.473 232437 DEBUG oslo_concurrency.lockutils [None req-16071c71-f9d3-45f4-a64d-de26d1bb2ee1 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:43:13 np0005548731 nova_compute[232433]: 2025-12-06 07:43:13.480 232437 DEBUG nova.virt.hardware [None req-16071c71-f9d3-45f4-a64d-de26d1bb2ee1 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Dec  6 02:43:13 np0005548731 nova_compute[232433]: 2025-12-06 07:43:13.480 232437 INFO nova.compute.claims [None req-16071c71-f9d3-45f4-a64d-de26d1bb2ee1 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] [instance: 4c6cadbd-1c16-490c-aa8c-049b1595ca1b] Claim successful on node compute-2.ctlplane.example.com#033[00m
Dec  6 02:43:13 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e328 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:43:13 np0005548731 nova_compute[232433]: 2025-12-06 07:43:13.623 232437 DEBUG oslo_concurrency.processutils [None req-16071c71-f9d3-45f4-a64d-de26d1bb2ee1 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:43:14 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 02:43:14 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3595080009' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 02:43:14 np0005548731 nova_compute[232433]: 2025-12-06 07:43:14.079 232437 DEBUG oslo_concurrency.processutils [None req-16071c71-f9d3-45f4-a64d-de26d1bb2ee1 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.456s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:43:14 np0005548731 nova_compute[232433]: 2025-12-06 07:43:14.086 232437 DEBUG nova.compute.provider_tree [None req-16071c71-f9d3-45f4-a64d-de26d1bb2ee1 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Inventory has not changed in ProviderTree for provider: 6d00757a-082f-486d-ae84-869a2ba2e6e7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  6 02:43:14 np0005548731 nova_compute[232433]: 2025-12-06 07:43:14.107 232437 DEBUG nova.scheduler.client.report [None req-16071c71-f9d3-45f4-a64d-de26d1bb2ee1 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Inventory has not changed for provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  6 02:43:14 np0005548731 nova_compute[232433]: 2025-12-06 07:43:14.139 232437 DEBUG oslo_concurrency.lockutils [None req-16071c71-f9d3-45f4-a64d-de26d1bb2ee1 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.665s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:43:14 np0005548731 nova_compute[232433]: 2025-12-06 07:43:14.140 232437 DEBUG nova.compute.manager [None req-16071c71-f9d3-45f4-a64d-de26d1bb2ee1 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] [instance: 4c6cadbd-1c16-490c-aa8c-049b1595ca1b] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Dec  6 02:43:14 np0005548731 nova_compute[232433]: 2025-12-06 07:43:14.203 232437 DEBUG nova.compute.manager [None req-16071c71-f9d3-45f4-a64d-de26d1bb2ee1 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] [instance: 4c6cadbd-1c16-490c-aa8c-049b1595ca1b] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Dec  6 02:43:14 np0005548731 nova_compute[232433]: 2025-12-06 07:43:14.203 232437 DEBUG nova.network.neutron [None req-16071c71-f9d3-45f4-a64d-de26d1bb2ee1 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] [instance: 4c6cadbd-1c16-490c-aa8c-049b1595ca1b] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Dec  6 02:43:14 np0005548731 nova_compute[232433]: 2025-12-06 07:43:14.226 232437 INFO nova.virt.libvirt.driver [None req-16071c71-f9d3-45f4-a64d-de26d1bb2ee1 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] [instance: 4c6cadbd-1c16-490c-aa8c-049b1595ca1b] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Dec  6 02:43:14 np0005548731 nova_compute[232433]: 2025-12-06 07:43:14.245 232437 DEBUG nova.compute.manager [None req-16071c71-f9d3-45f4-a64d-de26d1bb2ee1 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] [instance: 4c6cadbd-1c16-490c-aa8c-049b1595ca1b] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Dec  6 02:43:14 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 02:43:14 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  6 02:43:14 np0005548731 nova_compute[232433]: 2025-12-06 07:43:14.340 232437 DEBUG nova.compute.manager [None req-16071c71-f9d3-45f4-a64d-de26d1bb2ee1 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] [instance: 4c6cadbd-1c16-490c-aa8c-049b1595ca1b] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Dec  6 02:43:14 np0005548731 nova_compute[232433]: 2025-12-06 07:43:14.341 232437 DEBUG nova.virt.libvirt.driver [None req-16071c71-f9d3-45f4-a64d-de26d1bb2ee1 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] [instance: 4c6cadbd-1c16-490c-aa8c-049b1595ca1b] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Dec  6 02:43:14 np0005548731 nova_compute[232433]: 2025-12-06 07:43:14.342 232437 INFO nova.virt.libvirt.driver [None req-16071c71-f9d3-45f4-a64d-de26d1bb2ee1 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] [instance: 4c6cadbd-1c16-490c-aa8c-049b1595ca1b] Creating image(s)#033[00m
Dec  6 02:43:14 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:43:14 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:43:14 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:43:14.365 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:43:14 np0005548731 nova_compute[232433]: 2025-12-06 07:43:14.370 232437 DEBUG nova.storage.rbd_utils [None req-16071c71-f9d3-45f4-a64d-de26d1bb2ee1 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] rbd image 4c6cadbd-1c16-490c-aa8c-049b1595ca1b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:43:14 np0005548731 nova_compute[232433]: 2025-12-06 07:43:14.397 232437 DEBUG nova.storage.rbd_utils [None req-16071c71-f9d3-45f4-a64d-de26d1bb2ee1 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] rbd image 4c6cadbd-1c16-490c-aa8c-049b1595ca1b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:43:14 np0005548731 nova_compute[232433]: 2025-12-06 07:43:14.426 232437 DEBUG nova.storage.rbd_utils [None req-16071c71-f9d3-45f4-a64d-de26d1bb2ee1 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] rbd image 4c6cadbd-1c16-490c-aa8c-049b1595ca1b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:43:14 np0005548731 nova_compute[232433]: 2025-12-06 07:43:14.431 232437 DEBUG oslo_concurrency.processutils [None req-16071c71-f9d3-45f4-a64d-de26d1bb2ee1 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:43:14 np0005548731 nova_compute[232433]: 2025-12-06 07:43:14.462 232437 DEBUG nova.policy [None req-16071c71-f9d3-45f4-a64d-de26d1bb2ee1 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '0d8b62a3276f4a8b8349af67b82134c8', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'eff1f6a1654b45079de20eddb830e76d', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Dec  6 02:43:14 np0005548731 nova_compute[232433]: 2025-12-06 07:43:14.506 232437 DEBUG oslo_concurrency.processutils [None req-16071c71-f9d3-45f4-a64d-de26d1bb2ee1 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json" returned: 0 in 0.075s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:43:14 np0005548731 nova_compute[232433]: 2025-12-06 07:43:14.507 232437 DEBUG oslo_concurrency.lockutils [None req-16071c71-f9d3-45f4-a64d-de26d1bb2ee1 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Acquiring lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:43:14 np0005548731 nova_compute[232433]: 2025-12-06 07:43:14.507 232437 DEBUG oslo_concurrency.lockutils [None req-16071c71-f9d3-45f4-a64d-de26d1bb2ee1 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:43:14 np0005548731 nova_compute[232433]: 2025-12-06 07:43:14.507 232437 DEBUG oslo_concurrency.lockutils [None req-16071c71-f9d3-45f4-a64d-de26d1bb2ee1 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:43:14 np0005548731 nova_compute[232433]: 2025-12-06 07:43:14.533 232437 DEBUG nova.storage.rbd_utils [None req-16071c71-f9d3-45f4-a64d-de26d1bb2ee1 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] rbd image 4c6cadbd-1c16-490c-aa8c-049b1595ca1b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:43:14 np0005548731 nova_compute[232433]: 2025-12-06 07:43:14.536 232437 DEBUG oslo_concurrency.processutils [None req-16071c71-f9d3-45f4-a64d-de26d1bb2ee1 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef 4c6cadbd-1c16-490c-aa8c-049b1595ca1b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:43:15 np0005548731 nova_compute[232433]: 2025-12-06 07:43:15.099 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:43:15 np0005548731 nova_compute[232433]: 2025-12-06 07:43:15.103 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:43:15 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:43:15 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.002000048s ======
Dec  6 02:43:15 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:43:15.215 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000048s
Dec  6 02:43:15 np0005548731 nova_compute[232433]: 2025-12-06 07:43:15.225 232437 DEBUG oslo_concurrency.processutils [None req-16071c71-f9d3-45f4-a64d-de26d1bb2ee1 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef 4c6cadbd-1c16-490c-aa8c-049b1595ca1b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.689s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:43:15 np0005548731 nova_compute[232433]: 2025-12-06 07:43:15.324 232437 DEBUG nova.storage.rbd_utils [None req-16071c71-f9d3-45f4-a64d-de26d1bb2ee1 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] resizing rbd image 4c6cadbd-1c16-490c-aa8c-049b1595ca1b_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Dec  6 02:43:15 np0005548731 nova_compute[232433]: 2025-12-06 07:43:15.569 232437 DEBUG nova.network.neutron [None req-16071c71-f9d3-45f4-a64d-de26d1bb2ee1 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] [instance: 4c6cadbd-1c16-490c-aa8c-049b1595ca1b] Successfully created port: 59bb701f-9125-43d2-9023-6af0a1f18fea _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Dec  6 02:43:15 np0005548731 nova_compute[232433]: 2025-12-06 07:43:15.771 232437 DEBUG nova.objects.instance [None req-16071c71-f9d3-45f4-a64d-de26d1bb2ee1 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Lazy-loading 'migration_context' on Instance uuid 4c6cadbd-1c16-490c-aa8c-049b1595ca1b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:43:15 np0005548731 nova_compute[232433]: 2025-12-06 07:43:15.967 232437 DEBUG nova.virt.libvirt.driver [None req-16071c71-f9d3-45f4-a64d-de26d1bb2ee1 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] [instance: 4c6cadbd-1c16-490c-aa8c-049b1595ca1b] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Dec  6 02:43:15 np0005548731 nova_compute[232433]: 2025-12-06 07:43:15.967 232437 DEBUG nova.virt.libvirt.driver [None req-16071c71-f9d3-45f4-a64d-de26d1bb2ee1 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] [instance: 4c6cadbd-1c16-490c-aa8c-049b1595ca1b] Ensure instance console log exists: /var/lib/nova/instances/4c6cadbd-1c16-490c-aa8c-049b1595ca1b/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec  6 02:43:15 np0005548731 nova_compute[232433]: 2025-12-06 07:43:15.967 232437 DEBUG oslo_concurrency.lockutils [None req-16071c71-f9d3-45f4-a64d-de26d1bb2ee1 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:43:15 np0005548731 nova_compute[232433]: 2025-12-06 07:43:15.968 232437 DEBUG oslo_concurrency.lockutils [None req-16071c71-f9d3-45f4-a64d-de26d1bb2ee1 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:43:15 np0005548731 nova_compute[232433]: 2025-12-06 07:43:15.968 232437 DEBUG oslo_concurrency.lockutils [None req-16071c71-f9d3-45f4-a64d-de26d1bb2ee1 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:43:16 np0005548731 nova_compute[232433]: 2025-12-06 07:43:16.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:43:16 np0005548731 nova_compute[232433]: 2025-12-06 07:43:16.104 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  6 02:43:16 np0005548731 nova_compute[232433]: 2025-12-06 07:43:16.105 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  6 02:43:16 np0005548731 nova_compute[232433]: 2025-12-06 07:43:16.218 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] [instance: 4c6cadbd-1c16-490c-aa8c-049b1595ca1b] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Dec  6 02:43:16 np0005548731 nova_compute[232433]: 2025-12-06 07:43:16.342 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:43:16 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:43:16 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:43:16 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:43:16.367 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:43:16 np0005548731 nova_compute[232433]: 2025-12-06 07:43:16.439 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Acquiring lock "refresh_cache-e5068414-3119-4041-97c2-e4cef3bb779a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 02:43:16 np0005548731 nova_compute[232433]: 2025-12-06 07:43:16.439 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Acquired lock "refresh_cache-e5068414-3119-4041-97c2-e4cef3bb779a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 02:43:16 np0005548731 nova_compute[232433]: 2025-12-06 07:43:16.439 232437 DEBUG nova.network.neutron [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] [instance: e5068414-3119-4041-97c2-e4cef3bb779a] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Dec  6 02:43:16 np0005548731 nova_compute[232433]: 2025-12-06 07:43:16.440 232437 DEBUG nova.objects.instance [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lazy-loading 'info_cache' on Instance uuid e5068414-3119-4041-97c2-e4cef3bb779a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:43:17 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:43:17 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:43:17 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:43:17.221 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:43:17 np0005548731 nova_compute[232433]: 2025-12-06 07:43:17.573 232437 DEBUG nova.network.neutron [None req-16071c71-f9d3-45f4-a64d-de26d1bb2ee1 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] [instance: 4c6cadbd-1c16-490c-aa8c-049b1595ca1b] Successfully updated port: 59bb701f-9125-43d2-9023-6af0a1f18fea _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Dec  6 02:43:17 np0005548731 nova_compute[232433]: 2025-12-06 07:43:17.825 232437 DEBUG oslo_concurrency.lockutils [None req-16071c71-f9d3-45f4-a64d-de26d1bb2ee1 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Acquiring lock "refresh_cache-4c6cadbd-1c16-490c-aa8c-049b1595ca1b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 02:43:17 np0005548731 nova_compute[232433]: 2025-12-06 07:43:17.825 232437 DEBUG oslo_concurrency.lockutils [None req-16071c71-f9d3-45f4-a64d-de26d1bb2ee1 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Acquired lock "refresh_cache-4c6cadbd-1c16-490c-aa8c-049b1595ca1b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 02:43:17 np0005548731 nova_compute[232433]: 2025-12-06 07:43:17.826 232437 DEBUG nova.network.neutron [None req-16071c71-f9d3-45f4-a64d-de26d1bb2ee1 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] [instance: 4c6cadbd-1c16-490c-aa8c-049b1595ca1b] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec  6 02:43:17 np0005548731 nova_compute[232433]: 2025-12-06 07:43:17.922 232437 DEBUG nova.compute.manager [req-7f32edc7-f38a-4e1f-a1ad-2494d22db066 req-c02716a1-209f-4bad-8639-ac3d5cee77e2 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 4c6cadbd-1c16-490c-aa8c-049b1595ca1b] Received event network-changed-59bb701f-9125-43d2-9023-6af0a1f18fea external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:43:17 np0005548731 nova_compute[232433]: 2025-12-06 07:43:17.925 232437 DEBUG nova.compute.manager [req-7f32edc7-f38a-4e1f-a1ad-2494d22db066 req-c02716a1-209f-4bad-8639-ac3d5cee77e2 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 4c6cadbd-1c16-490c-aa8c-049b1595ca1b] Refreshing instance network info cache due to event network-changed-59bb701f-9125-43d2-9023-6af0a1f18fea. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  6 02:43:17 np0005548731 nova_compute[232433]: 2025-12-06 07:43:17.925 232437 DEBUG oslo_concurrency.lockutils [req-7f32edc7-f38a-4e1f-a1ad-2494d22db066 req-c02716a1-209f-4bad-8639-ac3d5cee77e2 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-4c6cadbd-1c16-490c-aa8c-049b1595ca1b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 02:43:18 np0005548731 nova_compute[232433]: 2025-12-06 07:43:18.284 232437 DEBUG nova.network.neutron [None req-16071c71-f9d3-45f4-a64d-de26d1bb2ee1 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] [instance: 4c6cadbd-1c16-490c-aa8c-049b1595ca1b] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec  6 02:43:18 np0005548731 nova_compute[232433]: 2025-12-06 07:43:18.288 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:43:18 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:43:18 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:43:18 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:43:18.369 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:43:18 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e328 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:43:18 np0005548731 nova_compute[232433]: 2025-12-06 07:43:18.537 232437 DEBUG oslo_concurrency.lockutils [None req-89fded64-0805-47d9-884c-cec8e8a186e4 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] Acquiring lock "e5068414-3119-4041-97c2-e4cef3bb779a" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:43:18 np0005548731 nova_compute[232433]: 2025-12-06 07:43:18.537 232437 DEBUG oslo_concurrency.lockutils [None req-89fded64-0805-47d9-884c-cec8e8a186e4 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] Lock "e5068414-3119-4041-97c2-e4cef3bb779a" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:43:18 np0005548731 nova_compute[232433]: 2025-12-06 07:43:18.538 232437 DEBUG oslo_concurrency.lockutils [None req-89fded64-0805-47d9-884c-cec8e8a186e4 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] Acquiring lock "e5068414-3119-4041-97c2-e4cef3bb779a-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:43:18 np0005548731 nova_compute[232433]: 2025-12-06 07:43:18.538 232437 DEBUG oslo_concurrency.lockutils [None req-89fded64-0805-47d9-884c-cec8e8a186e4 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] Lock "e5068414-3119-4041-97c2-e4cef3bb779a-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:43:18 np0005548731 nova_compute[232433]: 2025-12-06 07:43:18.539 232437 DEBUG oslo_concurrency.lockutils [None req-89fded64-0805-47d9-884c-cec8e8a186e4 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] Lock "e5068414-3119-4041-97c2-e4cef3bb779a-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:43:18 np0005548731 nova_compute[232433]: 2025-12-06 07:43:18.540 232437 INFO nova.compute.manager [None req-89fded64-0805-47d9-884c-cec8e8a186e4 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] [instance: e5068414-3119-4041-97c2-e4cef3bb779a] Terminating instance#033[00m
Dec  6 02:43:18 np0005548731 nova_compute[232433]: 2025-12-06 07:43:18.541 232437 DEBUG nova.compute.manager [None req-89fded64-0805-47d9-884c-cec8e8a186e4 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] [instance: e5068414-3119-4041-97c2-e4cef3bb779a] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec  6 02:43:18 np0005548731 nova_compute[232433]: 2025-12-06 07:43:18.543 232437 DEBUG nova.network.neutron [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] [instance: e5068414-3119-4041-97c2-e4cef3bb779a] Updating instance_info_cache with network_info: [{"id": "9150edb9-25c0-443e-baec-8ba956bec23f", "address": "fa:16:3e:f5:8d:1f", "network": {"id": "3c5d4817-c3d5-45fc-9890-418e779bacb2", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-1824643193-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.179", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "741dc47f9ced423cbd99fd6f9d32904f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9150edb9-25", "ovs_interfaceid": "9150edb9-25c0-443e-baec-8ba956bec23f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 02:43:18 np0005548731 nova_compute[232433]: 2025-12-06 07:43:18.565 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Releasing lock "refresh_cache-e5068414-3119-4041-97c2-e4cef3bb779a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 02:43:18 np0005548731 nova_compute[232433]: 2025-12-06 07:43:18.566 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] [instance: e5068414-3119-4041-97c2-e4cef3bb779a] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Dec  6 02:43:19 np0005548731 kernel: tap9150edb9-25 (unregistering): left promiscuous mode
Dec  6 02:43:19 np0005548731 NetworkManager[49182]: <info>  [1765006999.0060] device (tap9150edb9-25): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec  6 02:43:19 np0005548731 ovn_controller[133927]: 2025-12-06T07:43:19Z|00690|binding|INFO|Releasing lport 9150edb9-25c0-443e-baec-8ba956bec23f from this chassis (sb_readonly=0)
Dec  6 02:43:19 np0005548731 ovn_controller[133927]: 2025-12-06T07:43:19Z|00691|binding|INFO|Setting lport 9150edb9-25c0-443e-baec-8ba956bec23f down in Southbound
Dec  6 02:43:19 np0005548731 ovn_controller[133927]: 2025-12-06T07:43:19Z|00692|binding|INFO|Removing iface tap9150edb9-25 ovn-installed in OVS
Dec  6 02:43:19 np0005548731 nova_compute[232433]: 2025-12-06 07:43:19.057 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:43:19 np0005548731 nova_compute[232433]: 2025-12-06 07:43:19.059 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:43:19 np0005548731 nova_compute[232433]: 2025-12-06 07:43:19.072 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:43:19 np0005548731 systemd[1]: machine-qemu\x2d69\x2dinstance\x2d00000092.scope: Deactivated successfully.
Dec  6 02:43:19 np0005548731 systemd[1]: machine-qemu\x2d69\x2dinstance\x2d00000092.scope: Consumed 14.190s CPU time.
Dec  6 02:43:19 np0005548731 systemd-machined[195355]: Machine qemu-69-instance-00000092 terminated.
Dec  6 02:43:19 np0005548731 nova_compute[232433]: 2025-12-06 07:43:19.161 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:43:19 np0005548731 nova_compute[232433]: 2025-12-06 07:43:19.165 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:43:19 np0005548731 nova_compute[232433]: 2025-12-06 07:43:19.175 232437 INFO nova.virt.libvirt.driver [-] [instance: e5068414-3119-4041-97c2-e4cef3bb779a] Instance destroyed successfully.#033[00m
Dec  6 02:43:19 np0005548731 nova_compute[232433]: 2025-12-06 07:43:19.176 232437 DEBUG nova.objects.instance [None req-89fded64-0805-47d9-884c-cec8e8a186e4 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] Lazy-loading 'resources' on Instance uuid e5068414-3119-4041-97c2-e4cef3bb779a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:43:19 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:43:19 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:43:19 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:43:19.223 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:43:19 np0005548731 nova_compute[232433]: 2025-12-06 07:43:19.275 232437 DEBUG nova.network.neutron [None req-16071c71-f9d3-45f4-a64d-de26d1bb2ee1 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] [instance: 4c6cadbd-1c16-490c-aa8c-049b1595ca1b] Updating instance_info_cache with network_info: [{"id": "59bb701f-9125-43d2-9023-6af0a1f18fea", "address": "fa:16:3e:e0:ff:4a", "network": {"id": "35a27638-382c-4afb-83b0-edd6d7f4bca8", "bridge": "br-int", "label": "tempest-ServersTestJSON-1603796324-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eff1f6a1654b45079de20eddb830e76d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap59bb701f-91", "ovs_interfaceid": "59bb701f-9125-43d2-9023-6af0a1f18fea", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 02:43:19 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:43:19.429 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f5:8d:1f 10.100.0.14'], port_security=['fa:16:3e:f5:8d:1f 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'e5068414-3119-4041-97c2-e4cef3bb779a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3c5d4817-c3d5-45fc-9890-418e779bacb2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '741dc47f9ced423cbd99fd6f9d32904f', 'neutron:revision_number': '4', 'neutron:security_group_ids': '50882c7d-6ee4-44a6-917b-83c2c44b35c7', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com', 'neutron:port_fip': '192.168.122.179'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a93df87f-d2df-4d3a-b692-98bba32f2fe1, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], logical_port=9150edb9-25c0-443e-baec-8ba956bec23f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 02:43:19 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:43:19.430 143965 INFO neutron.agent.ovn.metadata.agent [-] Port 9150edb9-25c0-443e-baec-8ba956bec23f in datapath 3c5d4817-c3d5-45fc-9890-418e779bacb2 unbound from our chassis#033[00m
Dec  6 02:43:19 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:43:19.431 143965 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 3c5d4817-c3d5-45fc-9890-418e779bacb2, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec  6 02:43:19 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:43:19.433 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[404c0c50-eaec-4f49-b9dc-d762932141a0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:43:19 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:43:19.433 143965 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-3c5d4817-c3d5-45fc-9890-418e779bacb2 namespace which is not needed anymore#033[00m
Dec  6 02:43:19 np0005548731 nova_compute[232433]: 2025-12-06 07:43:19.485 232437 DEBUG nova.virt.libvirt.vif [None req-89fded64-0805-47d9-884c-cec8e8a186e4 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T07:42:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachVolumeNegativeTest-server-8475590',display_name='tempest-AttachVolumeNegativeTest-server-8475590',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-attachvolumenegativetest-server-8475590',id=146,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDOAw6woOATDIm/DWqTIph4LibbWTtXMwMrs+K9JWtnpxynvb7JtS3OPLqjHfzzc8U4N4dhPecjsu4dz194SejMj6fp49R968XBi14ndk4PM/WEWME049321djHvZx3nDw==',key_name='tempest-keypair-1954418097',keypairs=<?>,launch_index=0,launched_at=2025-12-06T07:42:42Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='741dc47f9ced423cbd99fd6f9d32904f',ramdisk_id='',reservation_id='r-w4vw8hdt',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachVolumeNegativeTest-2080911030',owner_user_name='tempest-AttachVolumeNegativeTest-2080911030-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T07:42:42Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='297bc99c242e4fa8aedea4a6367b61c0',uuid=e5068414-3119-4041-97c2-e4cef3bb779a,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "9150edb9-25c0-443e-baec-8ba956bec23f", "address": "fa:16:3e:f5:8d:1f", "network": {"id": "3c5d4817-c3d5-45fc-9890-418e779bacb2", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-1824643193-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.179", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "741dc47f9ced423cbd99fd6f9d32904f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9150edb9-25", "ovs_interfaceid": "9150edb9-25c0-443e-baec-8ba956bec23f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec  6 02:43:19 np0005548731 nova_compute[232433]: 2025-12-06 07:43:19.486 232437 DEBUG nova.network.os_vif_util [None req-89fded64-0805-47d9-884c-cec8e8a186e4 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] Converting VIF {"id": "9150edb9-25c0-443e-baec-8ba956bec23f", "address": "fa:16:3e:f5:8d:1f", "network": {"id": "3c5d4817-c3d5-45fc-9890-418e779bacb2", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-1824643193-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.179", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "741dc47f9ced423cbd99fd6f9d32904f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9150edb9-25", "ovs_interfaceid": "9150edb9-25c0-443e-baec-8ba956bec23f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 02:43:19 np0005548731 nova_compute[232433]: 2025-12-06 07:43:19.487 232437 DEBUG nova.network.os_vif_util [None req-89fded64-0805-47d9-884c-cec8e8a186e4 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:f5:8d:1f,bridge_name='br-int',has_traffic_filtering=True,id=9150edb9-25c0-443e-baec-8ba956bec23f,network=Network(3c5d4817-c3d5-45fc-9890-418e779bacb2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9150edb9-25') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 02:43:19 np0005548731 nova_compute[232433]: 2025-12-06 07:43:19.487 232437 DEBUG os_vif [None req-89fded64-0805-47d9-884c-cec8e8a186e4 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:f5:8d:1f,bridge_name='br-int',has_traffic_filtering=True,id=9150edb9-25c0-443e-baec-8ba956bec23f,network=Network(3c5d4817-c3d5-45fc-9890-418e779bacb2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9150edb9-25') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec  6 02:43:19 np0005548731 nova_compute[232433]: 2025-12-06 07:43:19.490 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:43:19 np0005548731 nova_compute[232433]: 2025-12-06 07:43:19.490 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9150edb9-25, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:43:19 np0005548731 nova_compute[232433]: 2025-12-06 07:43:19.492 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:43:19 np0005548731 nova_compute[232433]: 2025-12-06 07:43:19.493 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:43:19 np0005548731 nova_compute[232433]: 2025-12-06 07:43:19.497 232437 INFO os_vif [None req-89fded64-0805-47d9-884c-cec8e8a186e4 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:f5:8d:1f,bridge_name='br-int',has_traffic_filtering=True,id=9150edb9-25c0-443e-baec-8ba956bec23f,network=Network(3c5d4817-c3d5-45fc-9890-418e779bacb2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9150edb9-25')#033[00m
Dec  6 02:43:19 np0005548731 neutron-haproxy-ovnmeta-3c5d4817-c3d5-45fc-9890-418e779bacb2[297213]: [NOTICE]   (297217) : haproxy version is 2.8.14-c23fe91
Dec  6 02:43:19 np0005548731 neutron-haproxy-ovnmeta-3c5d4817-c3d5-45fc-9890-418e779bacb2[297213]: [NOTICE]   (297217) : path to executable is /usr/sbin/haproxy
Dec  6 02:43:19 np0005548731 neutron-haproxy-ovnmeta-3c5d4817-c3d5-45fc-9890-418e779bacb2[297213]: [WARNING]  (297217) : Exiting Master process...
Dec  6 02:43:19 np0005548731 neutron-haproxy-ovnmeta-3c5d4817-c3d5-45fc-9890-418e779bacb2[297213]: [ALERT]    (297217) : Current worker (297219) exited with code 143 (Terminated)
Dec  6 02:43:19 np0005548731 neutron-haproxy-ovnmeta-3c5d4817-c3d5-45fc-9890-418e779bacb2[297213]: [WARNING]  (297217) : All workers exited. Exiting... (0)
Dec  6 02:43:19 np0005548731 systemd[1]: libpod-29da0ea242486d426a7880cc0b88d08df1439ab7c7ec0b98a8b5629eacc3773e.scope: Deactivated successfully.
Dec  6 02:43:19 np0005548731 podman[298250]: 2025-12-06 07:43:19.567426606 +0000 UTC m=+0.049901638 container died 29da0ea242486d426a7880cc0b88d08df1439ab7c7ec0b98a8b5629eacc3773e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3c5d4817-c3d5-45fc-9890-418e779bacb2, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec  6 02:43:19 np0005548731 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-29da0ea242486d426a7880cc0b88d08df1439ab7c7ec0b98a8b5629eacc3773e-userdata-shm.mount: Deactivated successfully.
Dec  6 02:43:19 np0005548731 systemd[1]: var-lib-containers-storage-overlay-66c664a058df5ac54a67b23d6929894f9aaa92601294d8f6cce9a620741acc3a-merged.mount: Deactivated successfully.
Dec  6 02:43:19 np0005548731 podman[298250]: 2025-12-06 07:43:19.607815211 +0000 UTC m=+0.090290243 container cleanup 29da0ea242486d426a7880cc0b88d08df1439ab7c7ec0b98a8b5629eacc3773e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3c5d4817-c3d5-45fc-9890-418e779bacb2, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec  6 02:43:19 np0005548731 systemd[1]: libpod-conmon-29da0ea242486d426a7880cc0b88d08df1439ab7c7ec0b98a8b5629eacc3773e.scope: Deactivated successfully.
Dec  6 02:43:19 np0005548731 podman[298296]: 2025-12-06 07:43:19.670453408 +0000 UTC m=+0.043391509 container remove 29da0ea242486d426a7880cc0b88d08df1439ab7c7ec0b98a8b5629eacc3773e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3c5d4817-c3d5-45fc-9890-418e779bacb2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_managed=true)
Dec  6 02:43:19 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:43:19.676 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[b890f980-9c71-403f-ab0d-ba6c6ec6a078]: (4, ('Sat Dec  6 07:43:19 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-3c5d4817-c3d5-45fc-9890-418e779bacb2 (29da0ea242486d426a7880cc0b88d08df1439ab7c7ec0b98a8b5629eacc3773e)\n29da0ea242486d426a7880cc0b88d08df1439ab7c7ec0b98a8b5629eacc3773e\nSat Dec  6 07:43:19 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-3c5d4817-c3d5-45fc-9890-418e779bacb2 (29da0ea242486d426a7880cc0b88d08df1439ab7c7ec0b98a8b5629eacc3773e)\n29da0ea242486d426a7880cc0b88d08df1439ab7c7ec0b98a8b5629eacc3773e\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:43:19 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:43:19.677 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[385a07f5-8bc5-4393-8ef5-0d70a004e7c2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:43:19 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:43:19.679 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3c5d4817-c0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:43:19 np0005548731 kernel: tap3c5d4817-c0: left promiscuous mode
Dec  6 02:43:19 np0005548731 nova_compute[232433]: 2025-12-06 07:43:19.682 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:43:19 np0005548731 nova_compute[232433]: 2025-12-06 07:43:19.696 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:43:19 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:43:19.699 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[d77878e6-3870-45ba-b1af-a02adc366c20]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:43:19 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:43:19.711 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[9973eda4-ff0e-48aa-bbcf-75ca25d12316]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:43:19 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:43:19.712 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[5e855fea-e8a6-4f90-b169-e1c7478943bc]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:43:19 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:43:19.732 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[45104bf8-16c7-4241-b23a-0dac3415597e]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 722384, 'reachable_time': 17861, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 298312, 'error': None, 'target': 'ovnmeta-3c5d4817-c3d5-45fc-9890-418e779bacb2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:43:19 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:43:19.735 144079 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-3c5d4817-c3d5-45fc-9890-418e779bacb2 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Dec  6 02:43:19 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:43:19.736 144079 DEBUG oslo.privsep.daemon [-] privsep: reply[a03d7f56-767a-4786-b3ae-08cddf8bd7da]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:43:19 np0005548731 systemd[1]: run-netns-ovnmeta\x2d3c5d4817\x2dc3d5\x2d45fc\x2d9890\x2d418e779bacb2.mount: Deactivated successfully.
Dec  6 02:43:19 np0005548731 nova_compute[232433]: 2025-12-06 07:43:19.887 232437 INFO nova.virt.libvirt.driver [None req-89fded64-0805-47d9-884c-cec8e8a186e4 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] [instance: e5068414-3119-4041-97c2-e4cef3bb779a] Deleting instance files /var/lib/nova/instances/e5068414-3119-4041-97c2-e4cef3bb779a_del#033[00m
Dec  6 02:43:19 np0005548731 nova_compute[232433]: 2025-12-06 07:43:19.889 232437 INFO nova.virt.libvirt.driver [None req-89fded64-0805-47d9-884c-cec8e8a186e4 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] [instance: e5068414-3119-4041-97c2-e4cef3bb779a] Deletion of /var/lib/nova/instances/e5068414-3119-4041-97c2-e4cef3bb779a_del complete#033[00m
Dec  6 02:43:19 np0005548731 nova_compute[232433]: 2025-12-06 07:43:19.895 232437 DEBUG oslo_concurrency.lockutils [None req-16071c71-f9d3-45f4-a64d-de26d1bb2ee1 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Releasing lock "refresh_cache-4c6cadbd-1c16-490c-aa8c-049b1595ca1b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 02:43:19 np0005548731 nova_compute[232433]: 2025-12-06 07:43:19.895 232437 DEBUG nova.compute.manager [None req-16071c71-f9d3-45f4-a64d-de26d1bb2ee1 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] [instance: 4c6cadbd-1c16-490c-aa8c-049b1595ca1b] Instance network_info: |[{"id": "59bb701f-9125-43d2-9023-6af0a1f18fea", "address": "fa:16:3e:e0:ff:4a", "network": {"id": "35a27638-382c-4afb-83b0-edd6d7f4bca8", "bridge": "br-int", "label": "tempest-ServersTestJSON-1603796324-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eff1f6a1654b45079de20eddb830e76d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap59bb701f-91", "ovs_interfaceid": "59bb701f-9125-43d2-9023-6af0a1f18fea", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Dec  6 02:43:19 np0005548731 nova_compute[232433]: 2025-12-06 07:43:19.895 232437 DEBUG oslo_concurrency.lockutils [req-7f32edc7-f38a-4e1f-a1ad-2494d22db066 req-c02716a1-209f-4bad-8639-ac3d5cee77e2 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-4c6cadbd-1c16-490c-aa8c-049b1595ca1b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 02:43:19 np0005548731 nova_compute[232433]: 2025-12-06 07:43:19.895 232437 DEBUG nova.network.neutron [req-7f32edc7-f38a-4e1f-a1ad-2494d22db066 req-c02716a1-209f-4bad-8639-ac3d5cee77e2 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 4c6cadbd-1c16-490c-aa8c-049b1595ca1b] Refreshing network info cache for port 59bb701f-9125-43d2-9023-6af0a1f18fea _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  6 02:43:19 np0005548731 nova_compute[232433]: 2025-12-06 07:43:19.898 232437 DEBUG nova.virt.libvirt.driver [None req-16071c71-f9d3-45f4-a64d-de26d1bb2ee1 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] [instance: 4c6cadbd-1c16-490c-aa8c-049b1595ca1b] Start _get_guest_xml network_info=[{"id": "59bb701f-9125-43d2-9023-6af0a1f18fea", "address": "fa:16:3e:e0:ff:4a", "network": {"id": "35a27638-382c-4afb-83b0-edd6d7f4bca8", "bridge": "br-int", "label": "tempest-ServersTestJSON-1603796324-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eff1f6a1654b45079de20eddb830e76d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap59bb701f-91", "ovs_interfaceid": "59bb701f-9125-43d2-9023-6af0a1f18fea", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'device_type': 'disk', 'encryption_secret_uuid': None, 'disk_bus': 'virtio', 'boot_index': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'encryption_options': None, 'size': 0, 'encrypted': False, 'image_id': '6efab05d-c7cf-4770-a5c3-c806a2739063'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec  6 02:43:19 np0005548731 nova_compute[232433]: 2025-12-06 07:43:19.902 232437 WARNING nova.virt.libvirt.driver [None req-16071c71-f9d3-45f4-a64d-de26d1bb2ee1 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  6 02:43:19 np0005548731 nova_compute[232433]: 2025-12-06 07:43:19.906 232437 DEBUG nova.virt.libvirt.host [None req-16071c71-f9d3-45f4-a64d-de26d1bb2ee1 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec  6 02:43:19 np0005548731 nova_compute[232433]: 2025-12-06 07:43:19.907 232437 DEBUG nova.virt.libvirt.host [None req-16071c71-f9d3-45f4-a64d-de26d1bb2ee1 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec  6 02:43:19 np0005548731 nova_compute[232433]: 2025-12-06 07:43:19.909 232437 DEBUG nova.virt.libvirt.host [None req-16071c71-f9d3-45f4-a64d-de26d1bb2ee1 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec  6 02:43:19 np0005548731 nova_compute[232433]: 2025-12-06 07:43:19.910 232437 DEBUG nova.virt.libvirt.host [None req-16071c71-f9d3-45f4-a64d-de26d1bb2ee1 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec  6 02:43:19 np0005548731 nova_compute[232433]: 2025-12-06 07:43:19.911 232437 DEBUG nova.virt.libvirt.driver [None req-16071c71-f9d3-45f4-a64d-de26d1bb2ee1 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec  6 02:43:19 np0005548731 nova_compute[232433]: 2025-12-06 07:43:19.911 232437 DEBUG nova.virt.hardware [None req-16071c71-f9d3-45f4-a64d-de26d1bb2ee1 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-06T06:56:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='25848a18-11d9-4f11-80b5-5d005675c76d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec  6 02:43:19 np0005548731 nova_compute[232433]: 2025-12-06 07:43:19.912 232437 DEBUG nova.virt.hardware [None req-16071c71-f9d3-45f4-a64d-de26d1bb2ee1 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec  6 02:43:19 np0005548731 nova_compute[232433]: 2025-12-06 07:43:19.912 232437 DEBUG nova.virt.hardware [None req-16071c71-f9d3-45f4-a64d-de26d1bb2ee1 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec  6 02:43:19 np0005548731 nova_compute[232433]: 2025-12-06 07:43:19.912 232437 DEBUG nova.virt.hardware [None req-16071c71-f9d3-45f4-a64d-de26d1bb2ee1 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec  6 02:43:19 np0005548731 nova_compute[232433]: 2025-12-06 07:43:19.913 232437 DEBUG nova.virt.hardware [None req-16071c71-f9d3-45f4-a64d-de26d1bb2ee1 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec  6 02:43:19 np0005548731 nova_compute[232433]: 2025-12-06 07:43:19.913 232437 DEBUG nova.virt.hardware [None req-16071c71-f9d3-45f4-a64d-de26d1bb2ee1 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec  6 02:43:19 np0005548731 nova_compute[232433]: 2025-12-06 07:43:19.913 232437 DEBUG nova.virt.hardware [None req-16071c71-f9d3-45f4-a64d-de26d1bb2ee1 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec  6 02:43:19 np0005548731 nova_compute[232433]: 2025-12-06 07:43:19.914 232437 DEBUG nova.virt.hardware [None req-16071c71-f9d3-45f4-a64d-de26d1bb2ee1 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec  6 02:43:19 np0005548731 nova_compute[232433]: 2025-12-06 07:43:19.914 232437 DEBUG nova.virt.hardware [None req-16071c71-f9d3-45f4-a64d-de26d1bb2ee1 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec  6 02:43:19 np0005548731 nova_compute[232433]: 2025-12-06 07:43:19.914 232437 DEBUG nova.virt.hardware [None req-16071c71-f9d3-45f4-a64d-de26d1bb2ee1 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec  6 02:43:19 np0005548731 nova_compute[232433]: 2025-12-06 07:43:19.915 232437 DEBUG nova.virt.hardware [None req-16071c71-f9d3-45f4-a64d-de26d1bb2ee1 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec  6 02:43:19 np0005548731 nova_compute[232433]: 2025-12-06 07:43:19.917 232437 DEBUG oslo_concurrency.processutils [None req-16071c71-f9d3-45f4-a64d-de26d1bb2ee1 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:43:20 np0005548731 nova_compute[232433]: 2025-12-06 07:43:20.103 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:43:20 np0005548731 nova_compute[232433]: 2025-12-06 07:43:20.105 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  6 02:43:20 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  6 02:43:20 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/4055792314' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  6 02:43:20 np0005548731 nova_compute[232433]: 2025-12-06 07:43:20.353 232437 DEBUG oslo_concurrency.processutils [None req-16071c71-f9d3-45f4-a64d-de26d1bb2ee1 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.436s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:43:20 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:43:20 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:43:20 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:43:20.370 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:43:20 np0005548731 nova_compute[232433]: 2025-12-06 07:43:20.377 232437 DEBUG nova.storage.rbd_utils [None req-16071c71-f9d3-45f4-a64d-de26d1bb2ee1 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] rbd image 4c6cadbd-1c16-490c-aa8c-049b1595ca1b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:43:20 np0005548731 nova_compute[232433]: 2025-12-06 07:43:20.380 232437 DEBUG oslo_concurrency.processutils [None req-16071c71-f9d3-45f4-a64d-de26d1bb2ee1 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:43:20 np0005548731 nova_compute[232433]: 2025-12-06 07:43:20.408 232437 INFO nova.compute.manager [None req-89fded64-0805-47d9-884c-cec8e8a186e4 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] [instance: e5068414-3119-4041-97c2-e4cef3bb779a] Took 1.87 seconds to destroy the instance on the hypervisor.#033[00m
Dec  6 02:43:20 np0005548731 nova_compute[232433]: 2025-12-06 07:43:20.409 232437 DEBUG oslo.service.loopingcall [None req-89fded64-0805-47d9-884c-cec8e8a186e4 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Dec  6 02:43:20 np0005548731 nova_compute[232433]: 2025-12-06 07:43:20.409 232437 DEBUG nova.compute.manager [-] [instance: e5068414-3119-4041-97c2-e4cef3bb779a] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Dec  6 02:43:20 np0005548731 nova_compute[232433]: 2025-12-06 07:43:20.409 232437 DEBUG nova.network.neutron [-] [instance: e5068414-3119-4041-97c2-e4cef3bb779a] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Dec  6 02:43:20 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 02:43:20 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 02:43:20 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  6 02:43:20 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1147296475' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  6 02:43:20 np0005548731 nova_compute[232433]: 2025-12-06 07:43:20.796 232437 DEBUG oslo_concurrency.processutils [None req-16071c71-f9d3-45f4-a64d-de26d1bb2ee1 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.415s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:43:20 np0005548731 nova_compute[232433]: 2025-12-06 07:43:20.797 232437 DEBUG nova.virt.libvirt.vif [None req-16071c71-f9d3-45f4-a64d-de26d1bb2ee1 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T07:43:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-616994635',display_name='tempest-ServersTestJSON-server-616994635',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serverstestjson-server-616994635',id=151,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='eff1f6a1654b45079de20eddb830e76d',ramdisk_id='',reservation_id='r-261ay309',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-374151197',owner_user_name='tempest-ServersTestJSON-374151197-project-member'},tag
s=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:43:14Z,user_data=None,user_id='0d8b62a3276f4a8b8349af67b82134c8',uuid=4c6cadbd-1c16-490c-aa8c-049b1595ca1b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "59bb701f-9125-43d2-9023-6af0a1f18fea", "address": "fa:16:3e:e0:ff:4a", "network": {"id": "35a27638-382c-4afb-83b0-edd6d7f4bca8", "bridge": "br-int", "label": "tempest-ServersTestJSON-1603796324-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eff1f6a1654b45079de20eddb830e76d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap59bb701f-91", "ovs_interfaceid": "59bb701f-9125-43d2-9023-6af0a1f18fea", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec  6 02:43:20 np0005548731 nova_compute[232433]: 2025-12-06 07:43:20.797 232437 DEBUG nova.network.os_vif_util [None req-16071c71-f9d3-45f4-a64d-de26d1bb2ee1 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Converting VIF {"id": "59bb701f-9125-43d2-9023-6af0a1f18fea", "address": "fa:16:3e:e0:ff:4a", "network": {"id": "35a27638-382c-4afb-83b0-edd6d7f4bca8", "bridge": "br-int", "label": "tempest-ServersTestJSON-1603796324-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eff1f6a1654b45079de20eddb830e76d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap59bb701f-91", "ovs_interfaceid": "59bb701f-9125-43d2-9023-6af0a1f18fea", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 02:43:20 np0005548731 nova_compute[232433]: 2025-12-06 07:43:20.798 232437 DEBUG nova.network.os_vif_util [None req-16071c71-f9d3-45f4-a64d-de26d1bb2ee1 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e0:ff:4a,bridge_name='br-int',has_traffic_filtering=True,id=59bb701f-9125-43d2-9023-6af0a1f18fea,network=Network(35a27638-382c-4afb-83b0-edd6d7f4bca8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap59bb701f-91') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 02:43:20 np0005548731 nova_compute[232433]: 2025-12-06 07:43:20.799 232437 DEBUG nova.objects.instance [None req-16071c71-f9d3-45f4-a64d-de26d1bb2ee1 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Lazy-loading 'pci_devices' on Instance uuid 4c6cadbd-1c16-490c-aa8c-049b1595ca1b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:43:20 np0005548731 nova_compute[232433]: 2025-12-06 07:43:20.828 232437 DEBUG nova.virt.libvirt.driver [None req-16071c71-f9d3-45f4-a64d-de26d1bb2ee1 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] [instance: 4c6cadbd-1c16-490c-aa8c-049b1595ca1b] End _get_guest_xml xml=<domain type="kvm">
Dec  6 02:43:20 np0005548731 nova_compute[232433]:  <uuid>4c6cadbd-1c16-490c-aa8c-049b1595ca1b</uuid>
Dec  6 02:43:20 np0005548731 nova_compute[232433]:  <name>instance-00000097</name>
Dec  6 02:43:20 np0005548731 nova_compute[232433]:  <memory>131072</memory>
Dec  6 02:43:20 np0005548731 nova_compute[232433]:  <vcpu>1</vcpu>
Dec  6 02:43:20 np0005548731 nova_compute[232433]:  <metadata>
Dec  6 02:43:20 np0005548731 nova_compute[232433]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec  6 02:43:20 np0005548731 nova_compute[232433]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec  6 02:43:20 np0005548731 nova_compute[232433]:      <nova:name>tempest-ServersTestJSON-server-616994635</nova:name>
Dec  6 02:43:20 np0005548731 nova_compute[232433]:      <nova:creationTime>2025-12-06 07:43:19</nova:creationTime>
Dec  6 02:43:20 np0005548731 nova_compute[232433]:      <nova:flavor name="m1.nano">
Dec  6 02:43:20 np0005548731 nova_compute[232433]:        <nova:memory>128</nova:memory>
Dec  6 02:43:20 np0005548731 nova_compute[232433]:        <nova:disk>1</nova:disk>
Dec  6 02:43:20 np0005548731 nova_compute[232433]:        <nova:swap>0</nova:swap>
Dec  6 02:43:20 np0005548731 nova_compute[232433]:        <nova:ephemeral>0</nova:ephemeral>
Dec  6 02:43:20 np0005548731 nova_compute[232433]:        <nova:vcpus>1</nova:vcpus>
Dec  6 02:43:20 np0005548731 nova_compute[232433]:      </nova:flavor>
Dec  6 02:43:20 np0005548731 nova_compute[232433]:      <nova:owner>
Dec  6 02:43:20 np0005548731 nova_compute[232433]:        <nova:user uuid="0d8b62a3276f4a8b8349af67b82134c8">tempest-ServersTestJSON-374151197-project-member</nova:user>
Dec  6 02:43:20 np0005548731 nova_compute[232433]:        <nova:project uuid="eff1f6a1654b45079de20eddb830e76d">tempest-ServersTestJSON-374151197</nova:project>
Dec  6 02:43:20 np0005548731 nova_compute[232433]:      </nova:owner>
Dec  6 02:43:20 np0005548731 nova_compute[232433]:      <nova:root type="image" uuid="6efab05d-c7cf-4770-a5c3-c806a2739063"/>
Dec  6 02:43:20 np0005548731 nova_compute[232433]:      <nova:ports>
Dec  6 02:43:20 np0005548731 nova_compute[232433]:        <nova:port uuid="59bb701f-9125-43d2-9023-6af0a1f18fea">
Dec  6 02:43:20 np0005548731 nova_compute[232433]:          <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Dec  6 02:43:20 np0005548731 nova_compute[232433]:        </nova:port>
Dec  6 02:43:20 np0005548731 nova_compute[232433]:      </nova:ports>
Dec  6 02:43:20 np0005548731 nova_compute[232433]:    </nova:instance>
Dec  6 02:43:20 np0005548731 nova_compute[232433]:  </metadata>
Dec  6 02:43:20 np0005548731 nova_compute[232433]:  <sysinfo type="smbios">
Dec  6 02:43:20 np0005548731 nova_compute[232433]:    <system>
Dec  6 02:43:20 np0005548731 nova_compute[232433]:      <entry name="manufacturer">RDO</entry>
Dec  6 02:43:20 np0005548731 nova_compute[232433]:      <entry name="product">OpenStack Compute</entry>
Dec  6 02:43:20 np0005548731 nova_compute[232433]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec  6 02:43:20 np0005548731 nova_compute[232433]:      <entry name="serial">4c6cadbd-1c16-490c-aa8c-049b1595ca1b</entry>
Dec  6 02:43:20 np0005548731 nova_compute[232433]:      <entry name="uuid">4c6cadbd-1c16-490c-aa8c-049b1595ca1b</entry>
Dec  6 02:43:20 np0005548731 nova_compute[232433]:      <entry name="family">Virtual Machine</entry>
Dec  6 02:43:20 np0005548731 nova_compute[232433]:    </system>
Dec  6 02:43:20 np0005548731 nova_compute[232433]:  </sysinfo>
Dec  6 02:43:20 np0005548731 nova_compute[232433]:  <os>
Dec  6 02:43:20 np0005548731 nova_compute[232433]:    <type arch="x86_64" machine="q35">hvm</type>
Dec  6 02:43:20 np0005548731 nova_compute[232433]:    <boot dev="hd"/>
Dec  6 02:43:20 np0005548731 nova_compute[232433]:    <smbios mode="sysinfo"/>
Dec  6 02:43:20 np0005548731 nova_compute[232433]:  </os>
Dec  6 02:43:20 np0005548731 nova_compute[232433]:  <features>
Dec  6 02:43:20 np0005548731 nova_compute[232433]:    <acpi/>
Dec  6 02:43:20 np0005548731 nova_compute[232433]:    <apic/>
Dec  6 02:43:20 np0005548731 nova_compute[232433]:    <vmcoreinfo/>
Dec  6 02:43:20 np0005548731 nova_compute[232433]:  </features>
Dec  6 02:43:20 np0005548731 nova_compute[232433]:  <clock offset="utc">
Dec  6 02:43:20 np0005548731 nova_compute[232433]:    <timer name="pit" tickpolicy="delay"/>
Dec  6 02:43:20 np0005548731 nova_compute[232433]:    <timer name="rtc" tickpolicy="catchup"/>
Dec  6 02:43:20 np0005548731 nova_compute[232433]:    <timer name="hpet" present="no"/>
Dec  6 02:43:20 np0005548731 nova_compute[232433]:  </clock>
Dec  6 02:43:20 np0005548731 nova_compute[232433]:  <cpu mode="custom" match="exact">
Dec  6 02:43:20 np0005548731 nova_compute[232433]:    <model>Nehalem</model>
Dec  6 02:43:20 np0005548731 nova_compute[232433]:    <topology sockets="1" cores="1" threads="1"/>
Dec  6 02:43:20 np0005548731 nova_compute[232433]:  </cpu>
Dec  6 02:43:20 np0005548731 nova_compute[232433]:  <devices>
Dec  6 02:43:20 np0005548731 nova_compute[232433]:    <disk type="network" device="disk">
Dec  6 02:43:20 np0005548731 nova_compute[232433]:      <driver type="raw" cache="none"/>
Dec  6 02:43:20 np0005548731 nova_compute[232433]:      <source protocol="rbd" name="vms/4c6cadbd-1c16-490c-aa8c-049b1595ca1b_disk">
Dec  6 02:43:20 np0005548731 nova_compute[232433]:        <host name="192.168.122.100" port="6789"/>
Dec  6 02:43:20 np0005548731 nova_compute[232433]:        <host name="192.168.122.102" port="6789"/>
Dec  6 02:43:20 np0005548731 nova_compute[232433]:        <host name="192.168.122.101" port="6789"/>
Dec  6 02:43:20 np0005548731 nova_compute[232433]:      </source>
Dec  6 02:43:20 np0005548731 nova_compute[232433]:      <auth username="openstack">
Dec  6 02:43:20 np0005548731 nova_compute[232433]:        <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec  6 02:43:20 np0005548731 nova_compute[232433]:      </auth>
Dec  6 02:43:20 np0005548731 nova_compute[232433]:      <target dev="vda" bus="virtio"/>
Dec  6 02:43:20 np0005548731 nova_compute[232433]:    </disk>
Dec  6 02:43:20 np0005548731 nova_compute[232433]:    <disk type="network" device="cdrom">
Dec  6 02:43:20 np0005548731 nova_compute[232433]:      <driver type="raw" cache="none"/>
Dec  6 02:43:20 np0005548731 nova_compute[232433]:      <source protocol="rbd" name="vms/4c6cadbd-1c16-490c-aa8c-049b1595ca1b_disk.config">
Dec  6 02:43:20 np0005548731 nova_compute[232433]:        <host name="192.168.122.100" port="6789"/>
Dec  6 02:43:20 np0005548731 nova_compute[232433]:        <host name="192.168.122.102" port="6789"/>
Dec  6 02:43:20 np0005548731 nova_compute[232433]:        <host name="192.168.122.101" port="6789"/>
Dec  6 02:43:20 np0005548731 nova_compute[232433]:      </source>
Dec  6 02:43:20 np0005548731 nova_compute[232433]:      <auth username="openstack">
Dec  6 02:43:20 np0005548731 nova_compute[232433]:        <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec  6 02:43:20 np0005548731 nova_compute[232433]:      </auth>
Dec  6 02:43:20 np0005548731 nova_compute[232433]:      <target dev="sda" bus="sata"/>
Dec  6 02:43:20 np0005548731 nova_compute[232433]:    </disk>
Dec  6 02:43:20 np0005548731 nova_compute[232433]:    <interface type="ethernet">
Dec  6 02:43:20 np0005548731 nova_compute[232433]:      <mac address="fa:16:3e:e0:ff:4a"/>
Dec  6 02:43:20 np0005548731 nova_compute[232433]:      <model type="virtio"/>
Dec  6 02:43:20 np0005548731 nova_compute[232433]:      <driver name="vhost" rx_queue_size="512"/>
Dec  6 02:43:20 np0005548731 nova_compute[232433]:      <mtu size="1442"/>
Dec  6 02:43:20 np0005548731 nova_compute[232433]:      <target dev="tap59bb701f-91"/>
Dec  6 02:43:20 np0005548731 nova_compute[232433]:    </interface>
Dec  6 02:43:20 np0005548731 nova_compute[232433]:    <serial type="pty">
Dec  6 02:43:20 np0005548731 nova_compute[232433]:      <log file="/var/lib/nova/instances/4c6cadbd-1c16-490c-aa8c-049b1595ca1b/console.log" append="off"/>
Dec  6 02:43:20 np0005548731 nova_compute[232433]:    </serial>
Dec  6 02:43:20 np0005548731 nova_compute[232433]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec  6 02:43:20 np0005548731 nova_compute[232433]:    <video>
Dec  6 02:43:20 np0005548731 nova_compute[232433]:      <model type="virtio"/>
Dec  6 02:43:20 np0005548731 nova_compute[232433]:    </video>
Dec  6 02:43:20 np0005548731 nova_compute[232433]:    <input type="tablet" bus="usb"/>
Dec  6 02:43:20 np0005548731 nova_compute[232433]:    <rng model="virtio">
Dec  6 02:43:20 np0005548731 nova_compute[232433]:      <backend model="random">/dev/urandom</backend>
Dec  6 02:43:20 np0005548731 nova_compute[232433]:    </rng>
Dec  6 02:43:20 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root"/>
Dec  6 02:43:20 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:43:20 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:43:20 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:43:20 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:43:20 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:43:20 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:43:20 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:43:20 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:43:20 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:43:20 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:43:20 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:43:20 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:43:20 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:43:20 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:43:20 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:43:20 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:43:20 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:43:20 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:43:20 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:43:20 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:43:20 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:43:20 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:43:20 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:43:20 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:43:20 np0005548731 nova_compute[232433]:    <controller type="usb" index="0"/>
Dec  6 02:43:20 np0005548731 nova_compute[232433]:    <memballoon model="virtio">
Dec  6 02:43:20 np0005548731 nova_compute[232433]:      <stats period="10"/>
Dec  6 02:43:20 np0005548731 nova_compute[232433]:    </memballoon>
Dec  6 02:43:20 np0005548731 nova_compute[232433]:  </devices>
Dec  6 02:43:20 np0005548731 nova_compute[232433]: </domain>
Dec  6 02:43:20 np0005548731 nova_compute[232433]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec  6 02:43:20 np0005548731 nova_compute[232433]: 2025-12-06 07:43:20.829 232437 DEBUG nova.compute.manager [None req-16071c71-f9d3-45f4-a64d-de26d1bb2ee1 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] [instance: 4c6cadbd-1c16-490c-aa8c-049b1595ca1b] Preparing to wait for external event network-vif-plugged-59bb701f-9125-43d2-9023-6af0a1f18fea prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec  6 02:43:20 np0005548731 nova_compute[232433]: 2025-12-06 07:43:20.829 232437 DEBUG oslo_concurrency.lockutils [None req-16071c71-f9d3-45f4-a64d-de26d1bb2ee1 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Acquiring lock "4c6cadbd-1c16-490c-aa8c-049b1595ca1b-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:43:20 np0005548731 nova_compute[232433]: 2025-12-06 07:43:20.830 232437 DEBUG oslo_concurrency.lockutils [None req-16071c71-f9d3-45f4-a64d-de26d1bb2ee1 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Lock "4c6cadbd-1c16-490c-aa8c-049b1595ca1b-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:43:20 np0005548731 nova_compute[232433]: 2025-12-06 07:43:20.830 232437 DEBUG oslo_concurrency.lockutils [None req-16071c71-f9d3-45f4-a64d-de26d1bb2ee1 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Lock "4c6cadbd-1c16-490c-aa8c-049b1595ca1b-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:43:20 np0005548731 nova_compute[232433]: 2025-12-06 07:43:20.830 232437 DEBUG nova.virt.libvirt.vif [None req-16071c71-f9d3-45f4-a64d-de26d1bb2ee1 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T07:43:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-616994635',display_name='tempest-ServersTestJSON-server-616994635',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serverstestjson-server-616994635',id=151,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='eff1f6a1654b45079de20eddb830e76d',ramdisk_id='',reservation_id='r-261ay309',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-374151197',owner_user_name='tempest-ServersTestJSON-374151197-project-me
mber'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:43:14Z,user_data=None,user_id='0d8b62a3276f4a8b8349af67b82134c8',uuid=4c6cadbd-1c16-490c-aa8c-049b1595ca1b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "59bb701f-9125-43d2-9023-6af0a1f18fea", "address": "fa:16:3e:e0:ff:4a", "network": {"id": "35a27638-382c-4afb-83b0-edd6d7f4bca8", "bridge": "br-int", "label": "tempest-ServersTestJSON-1603796324-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eff1f6a1654b45079de20eddb830e76d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap59bb701f-91", "ovs_interfaceid": "59bb701f-9125-43d2-9023-6af0a1f18fea", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec  6 02:43:20 np0005548731 nova_compute[232433]: 2025-12-06 07:43:20.831 232437 DEBUG nova.network.os_vif_util [None req-16071c71-f9d3-45f4-a64d-de26d1bb2ee1 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Converting VIF {"id": "59bb701f-9125-43d2-9023-6af0a1f18fea", "address": "fa:16:3e:e0:ff:4a", "network": {"id": "35a27638-382c-4afb-83b0-edd6d7f4bca8", "bridge": "br-int", "label": "tempest-ServersTestJSON-1603796324-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eff1f6a1654b45079de20eddb830e76d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap59bb701f-91", "ovs_interfaceid": "59bb701f-9125-43d2-9023-6af0a1f18fea", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 02:43:20 np0005548731 nova_compute[232433]: 2025-12-06 07:43:20.831 232437 DEBUG nova.network.os_vif_util [None req-16071c71-f9d3-45f4-a64d-de26d1bb2ee1 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e0:ff:4a,bridge_name='br-int',has_traffic_filtering=True,id=59bb701f-9125-43d2-9023-6af0a1f18fea,network=Network(35a27638-382c-4afb-83b0-edd6d7f4bca8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap59bb701f-91') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 02:43:20 np0005548731 nova_compute[232433]: 2025-12-06 07:43:20.832 232437 DEBUG os_vif [None req-16071c71-f9d3-45f4-a64d-de26d1bb2ee1 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e0:ff:4a,bridge_name='br-int',has_traffic_filtering=True,id=59bb701f-9125-43d2-9023-6af0a1f18fea,network=Network(35a27638-382c-4afb-83b0-edd6d7f4bca8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap59bb701f-91') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec  6 02:43:20 np0005548731 nova_compute[232433]: 2025-12-06 07:43:20.832 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:43:20 np0005548731 nova_compute[232433]: 2025-12-06 07:43:20.833 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:43:20 np0005548731 nova_compute[232433]: 2025-12-06 07:43:20.833 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  6 02:43:20 np0005548731 nova_compute[232433]: 2025-12-06 07:43:20.835 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:43:20 np0005548731 nova_compute[232433]: 2025-12-06 07:43:20.836 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap59bb701f-91, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:43:20 np0005548731 nova_compute[232433]: 2025-12-06 07:43:20.836 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap59bb701f-91, col_values=(('external_ids', {'iface-id': '59bb701f-9125-43d2-9023-6af0a1f18fea', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:e0:ff:4a', 'vm-uuid': '4c6cadbd-1c16-490c-aa8c-049b1595ca1b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:43:20 np0005548731 nova_compute[232433]: 2025-12-06 07:43:20.837 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:43:20 np0005548731 NetworkManager[49182]: <info>  [1765007000.8384] manager: (tap59bb701f-91): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/318)
Dec  6 02:43:20 np0005548731 nova_compute[232433]: 2025-12-06 07:43:20.840 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  6 02:43:20 np0005548731 nova_compute[232433]: 2025-12-06 07:43:20.842 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:43:20 np0005548731 nova_compute[232433]: 2025-12-06 07:43:20.843 232437 INFO os_vif [None req-16071c71-f9d3-45f4-a64d-de26d1bb2ee1 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e0:ff:4a,bridge_name='br-int',has_traffic_filtering=True,id=59bb701f-9125-43d2-9023-6af0a1f18fea,network=Network(35a27638-382c-4afb-83b0-edd6d7f4bca8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap59bb701f-91')#033[00m
Dec  6 02:43:20 np0005548731 nova_compute[232433]: 2025-12-06 07:43:20.901 232437 DEBUG nova.compute.manager [req-22576f30-2b62-4aa0-8bac-a25babdf9583 req-4c7a00d0-b7d5-4707-bc5a-9e4ec5f81268 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: e5068414-3119-4041-97c2-e4cef3bb779a] Received event network-vif-unplugged-9150edb9-25c0-443e-baec-8ba956bec23f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:43:20 np0005548731 nova_compute[232433]: 2025-12-06 07:43:20.901 232437 DEBUG oslo_concurrency.lockutils [req-22576f30-2b62-4aa0-8bac-a25babdf9583 req-4c7a00d0-b7d5-4707-bc5a-9e4ec5f81268 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "e5068414-3119-4041-97c2-e4cef3bb779a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:43:20 np0005548731 nova_compute[232433]: 2025-12-06 07:43:20.902 232437 DEBUG oslo_concurrency.lockutils [req-22576f30-2b62-4aa0-8bac-a25babdf9583 req-4c7a00d0-b7d5-4707-bc5a-9e4ec5f81268 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "e5068414-3119-4041-97c2-e4cef3bb779a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:43:20 np0005548731 nova_compute[232433]: 2025-12-06 07:43:20.902 232437 DEBUG oslo_concurrency.lockutils [req-22576f30-2b62-4aa0-8bac-a25babdf9583 req-4c7a00d0-b7d5-4707-bc5a-9e4ec5f81268 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "e5068414-3119-4041-97c2-e4cef3bb779a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:43:20 np0005548731 nova_compute[232433]: 2025-12-06 07:43:20.903 232437 DEBUG nova.compute.manager [req-22576f30-2b62-4aa0-8bac-a25babdf9583 req-4c7a00d0-b7d5-4707-bc5a-9e4ec5f81268 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: e5068414-3119-4041-97c2-e4cef3bb779a] No waiting events found dispatching network-vif-unplugged-9150edb9-25c0-443e-baec-8ba956bec23f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 02:43:20 np0005548731 nova_compute[232433]: 2025-12-06 07:43:20.903 232437 DEBUG nova.compute.manager [req-22576f30-2b62-4aa0-8bac-a25babdf9583 req-4c7a00d0-b7d5-4707-bc5a-9e4ec5f81268 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: e5068414-3119-4041-97c2-e4cef3bb779a] Received event network-vif-unplugged-9150edb9-25c0-443e-baec-8ba956bec23f for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Dec  6 02:43:20 np0005548731 nova_compute[232433]: 2025-12-06 07:43:20.912 232437 DEBUG nova.virt.libvirt.driver [None req-16071c71-f9d3-45f4-a64d-de26d1bb2ee1 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  6 02:43:20 np0005548731 nova_compute[232433]: 2025-12-06 07:43:20.913 232437 DEBUG nova.virt.libvirt.driver [None req-16071c71-f9d3-45f4-a64d-de26d1bb2ee1 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  6 02:43:20 np0005548731 nova_compute[232433]: 2025-12-06 07:43:20.913 232437 DEBUG nova.virt.libvirt.driver [None req-16071c71-f9d3-45f4-a64d-de26d1bb2ee1 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] No VIF found with MAC fa:16:3e:e0:ff:4a, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec  6 02:43:20 np0005548731 nova_compute[232433]: 2025-12-06 07:43:20.914 232437 INFO nova.virt.libvirt.driver [None req-16071c71-f9d3-45f4-a64d-de26d1bb2ee1 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] [instance: 4c6cadbd-1c16-490c-aa8c-049b1595ca1b] Using config drive#033[00m
Dec  6 02:43:20 np0005548731 nova_compute[232433]: 2025-12-06 07:43:20.948 232437 DEBUG nova.storage.rbd_utils [None req-16071c71-f9d3-45f4-a64d-de26d1bb2ee1 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] rbd image 4c6cadbd-1c16-490c-aa8c-049b1595ca1b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:43:21 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:43:21 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:43:21 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:43:21.224 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:43:21 np0005548731 nova_compute[232433]: 2025-12-06 07:43:21.510 232437 INFO nova.virt.libvirt.driver [None req-16071c71-f9d3-45f4-a64d-de26d1bb2ee1 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] [instance: 4c6cadbd-1c16-490c-aa8c-049b1595ca1b] Creating config drive at /var/lib/nova/instances/4c6cadbd-1c16-490c-aa8c-049b1595ca1b/disk.config#033[00m
Dec  6 02:43:21 np0005548731 nova_compute[232433]: 2025-12-06 07:43:21.515 232437 DEBUG oslo_concurrency.processutils [None req-16071c71-f9d3-45f4-a64d-de26d1bb2ee1 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/4c6cadbd-1c16-490c-aa8c-049b1595ca1b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp5zu30ovm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:43:21 np0005548731 nova_compute[232433]: 2025-12-06 07:43:21.647 232437 DEBUG oslo_concurrency.processutils [None req-16071c71-f9d3-45f4-a64d-de26d1bb2ee1 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/4c6cadbd-1c16-490c-aa8c-049b1595ca1b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp5zu30ovm" returned: 0 in 0.132s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:43:21 np0005548731 nova_compute[232433]: 2025-12-06 07:43:21.674 232437 DEBUG nova.storage.rbd_utils [None req-16071c71-f9d3-45f4-a64d-de26d1bb2ee1 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] rbd image 4c6cadbd-1c16-490c-aa8c-049b1595ca1b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:43:21 np0005548731 nova_compute[232433]: 2025-12-06 07:43:21.678 232437 DEBUG oslo_concurrency.processutils [None req-16071c71-f9d3-45f4-a64d-de26d1bb2ee1 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/4c6cadbd-1c16-490c-aa8c-049b1595ca1b/disk.config 4c6cadbd-1c16-490c-aa8c-049b1595ca1b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:43:21 np0005548731 nova_compute[232433]: 2025-12-06 07:43:21.863 232437 DEBUG oslo_concurrency.processutils [None req-16071c71-f9d3-45f4-a64d-de26d1bb2ee1 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/4c6cadbd-1c16-490c-aa8c-049b1595ca1b/disk.config 4c6cadbd-1c16-490c-aa8c-049b1595ca1b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.185s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:43:21 np0005548731 nova_compute[232433]: 2025-12-06 07:43:21.864 232437 INFO nova.virt.libvirt.driver [None req-16071c71-f9d3-45f4-a64d-de26d1bb2ee1 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] [instance: 4c6cadbd-1c16-490c-aa8c-049b1595ca1b] Deleting local config drive /var/lib/nova/instances/4c6cadbd-1c16-490c-aa8c-049b1595ca1b/disk.config because it was imported into RBD.#033[00m
Dec  6 02:43:21 np0005548731 kernel: tap59bb701f-91: entered promiscuous mode
Dec  6 02:43:21 np0005548731 NetworkManager[49182]: <info>  [1765007001.9169] manager: (tap59bb701f-91): new Tun device (/org/freedesktop/NetworkManager/Devices/319)
Dec  6 02:43:21 np0005548731 nova_compute[232433]: 2025-12-06 07:43:21.916 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:43:21 np0005548731 systemd-udevd[298222]: Network interface NamePolicy= disabled on kernel command line.
Dec  6 02:43:21 np0005548731 ovn_controller[133927]: 2025-12-06T07:43:21Z|00693|binding|INFO|Claiming lport 59bb701f-9125-43d2-9023-6af0a1f18fea for this chassis.
Dec  6 02:43:21 np0005548731 ovn_controller[133927]: 2025-12-06T07:43:21Z|00694|binding|INFO|59bb701f-9125-43d2-9023-6af0a1f18fea: Claiming fa:16:3e:e0:ff:4a 10.100.0.3
Dec  6 02:43:21 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:43:21.923 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e0:ff:4a 10.100.0.3'], port_security=['fa:16:3e:e0:ff:4a 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '4c6cadbd-1c16-490c-aa8c-049b1595ca1b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-35a27638-382c-4afb-83b0-edd6d7f4bca8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'eff1f6a1654b45079de20eddb830e76d', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'b5b8e710-017e-4606-9067-bf1900949ed3', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=80d3c5d2-eecc-4e72-bceb-41384af759f0, chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], logical_port=59bb701f-9125-43d2-9023-6af0a1f18fea) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 02:43:21 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:43:21.924 143965 INFO neutron.agent.ovn.metadata.agent [-] Port 59bb701f-9125-43d2-9023-6af0a1f18fea in datapath 35a27638-382c-4afb-83b0-edd6d7f4bca8 bound to our chassis#033[00m
Dec  6 02:43:21 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:43:21.925 143965 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 35a27638-382c-4afb-83b0-edd6d7f4bca8#033[00m
Dec  6 02:43:21 np0005548731 NetworkManager[49182]: <info>  [1765007001.9293] device (tap59bb701f-91): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec  6 02:43:21 np0005548731 NetworkManager[49182]: <info>  [1765007001.9302] device (tap59bb701f-91): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec  6 02:43:21 np0005548731 ovn_controller[133927]: 2025-12-06T07:43:21Z|00695|binding|INFO|Setting lport 59bb701f-9125-43d2-9023-6af0a1f18fea ovn-installed in OVS
Dec  6 02:43:21 np0005548731 ovn_controller[133927]: 2025-12-06T07:43:21Z|00696|binding|INFO|Setting lport 59bb701f-9125-43d2-9023-6af0a1f18fea up in Southbound
Dec  6 02:43:21 np0005548731 nova_compute[232433]: 2025-12-06 07:43:21.934 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:43:21 np0005548731 nova_compute[232433]: 2025-12-06 07:43:21.937 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:43:21 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:43:21.938 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[6903987f-6665-4e3f-8e85-c8b9aa990c87]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:43:21 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:43:21.939 143965 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap35a27638-31 in ovnmeta-35a27638-382c-4afb-83b0-edd6d7f4bca8 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Dec  6 02:43:21 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:43:21.942 236668 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap35a27638-30 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Dec  6 02:43:21 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:43:21.942 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[615e7636-b2d9-43e5-9cb3-04da1344b613]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:43:21 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:43:21.943 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[182efc22-8792-4a4e-ad53-2ebc9127a810]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:43:21 np0005548731 systemd-machined[195355]: New machine qemu-70-instance-00000097.
Dec  6 02:43:21 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:43:21.955 144079 DEBUG oslo.privsep.daemon [-] privsep: reply[95c53db6-ce35-44a6-b82e-e676f394c060]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:43:21 np0005548731 systemd[1]: Started Virtual Machine qemu-70-instance-00000097.
Dec  6 02:43:21 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:43:21.980 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[7fa0208d-311a-48ab-b65c-6505b2d33e1d]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:43:22 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:43:22.014 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[42f8f70b-d66f-4f4f-bf03-787ba9e0744f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:43:22 np0005548731 NetworkManager[49182]: <info>  [1765007002.0205] manager: (tap35a27638-30): new Veth device (/org/freedesktop/NetworkManager/Devices/320)
Dec  6 02:43:22 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:43:22.019 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[4444170d-68ed-4679-aa55-89dddc137cf8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:43:22 np0005548731 systemd-udevd[298564]: Network interface NamePolicy= disabled on kernel command line.
Dec  6 02:43:22 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:43:22.055 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[5c8487a2-648d-44d5-9fe6-8adea844aed6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:43:22 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:43:22.058 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[15b28b25-d515-4a5b-a319-551a4eea5905]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:43:22 np0005548731 nova_compute[232433]: 2025-12-06 07:43:22.077 232437 DEBUG nova.network.neutron [-] [instance: e5068414-3119-4041-97c2-e4cef3bb779a] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 02:43:22 np0005548731 NetworkManager[49182]: <info>  [1765007002.0872] device (tap35a27638-30): carrier: link connected
Dec  6 02:43:22 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:43:22.093 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[2ea7a6a8-aecd-48c9-ab37-10094981aaa9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:43:22 np0005548731 nova_compute[232433]: 2025-12-06 07:43:22.099 232437 INFO nova.compute.manager [-] [instance: e5068414-3119-4041-97c2-e4cef3bb779a] Took 1.69 seconds to deallocate network for instance.#033[00m
Dec  6 02:43:22 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:43:22.111 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[9b7e630d-0e60-47ba-ab68-c26007498a43]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap35a27638-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:19:c5:27'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 211], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 726409, 'reachable_time': 35712, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 298583, 'error': None, 'target': 'ovnmeta-35a27638-382c-4afb-83b0-edd6d7f4bca8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:43:22 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:43:22.126 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[f9ae857a-b5fb-4fbd-9902-9bc5fb17e280]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe19:c527'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 726409, 'tstamp': 726409}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 298584, 'error': None, 'target': 'ovnmeta-35a27638-382c-4afb-83b0-edd6d7f4bca8', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:43:22 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:43:22.143 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[b8467b1a-6598-4fe1-93f5-8f82f65cff4b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap35a27638-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:19:c5:27'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 211], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 726409, 'reachable_time': 35712, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 298585, 'error': None, 'target': 'ovnmeta-35a27638-382c-4afb-83b0-edd6d7f4bca8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:43:22 np0005548731 nova_compute[232433]: 2025-12-06 07:43:22.147 232437 DEBUG oslo_concurrency.lockutils [None req-89fded64-0805-47d9-884c-cec8e8a186e4 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  6 02:43:22 np0005548731 nova_compute[232433]: 2025-12-06 07:43:22.148 232437 DEBUG oslo_concurrency.lockutils [None req-89fded64-0805-47d9-884c-cec8e8a186e4 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  6 02:43:22 np0005548731 nova_compute[232433]: 2025-12-06 07:43:22.168 232437 DEBUG nova.compute.manager [req-6b70efca-f1f3-49b6-86d5-124b28e7b164 req-cfe0d3cb-b531-4b5f-8822-6d7bb3f2d914 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: e5068414-3119-4041-97c2-e4cef3bb779a] Received event network-vif-deleted-9150edb9-25c0-443e-baec-8ba956bec23f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec  6 02:43:22 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:43:22.172 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[74139b20-2692-4d23-b16d-052f83691481]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  6 02:43:22 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:43:22.227 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[faa74198-8861-4fbd-87f9-2d22741e0b87]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  6 02:43:22 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:43:22.228 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap35a27638-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec  6 02:43:22 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:43:22.228 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec  6 02:43:22 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:43:22.229 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap35a27638-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec  6 02:43:22 np0005548731 nova_compute[232433]: 2025-12-06 07:43:22.230 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 02:43:22 np0005548731 NetworkManager[49182]: <info>  [1765007002.2313] manager: (tap35a27638-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/321)
Dec  6 02:43:22 np0005548731 kernel: tap35a27638-30: entered promiscuous mode
Dec  6 02:43:22 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:43:22.234 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap35a27638-30, col_values=(('external_ids', {'iface-id': '5e371956-96bf-49df-b4a8-97154044dc54'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec  6 02:43:22 np0005548731 ovn_controller[133927]: 2025-12-06T07:43:22Z|00697|binding|INFO|Releasing lport 5e371956-96bf-49df-b4a8-97154044dc54 from this chassis (sb_readonly=0)
Dec  6 02:43:22 np0005548731 nova_compute[232433]: 2025-12-06 07:43:22.233 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 02:43:22 np0005548731 nova_compute[232433]: 2025-12-06 07:43:22.235 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 02:43:22 np0005548731 nova_compute[232433]: 2025-12-06 07:43:22.251 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 02:43:22 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:43:22.252 143965 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/35a27638-382c-4afb-83b0-edd6d7f4bca8.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/35a27638-382c-4afb-83b0-edd6d7f4bca8.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec  6 02:43:22 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:43:22.254 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[cfcc70bf-bcbe-4feb-ab01-58c5c960ca6f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  6 02:43:22 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:43:22.255 143965 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec  6 02:43:22 np0005548731 ovn_metadata_agent[143960]: global
Dec  6 02:43:22 np0005548731 ovn_metadata_agent[143960]:    log         /dev/log local0 debug
Dec  6 02:43:22 np0005548731 ovn_metadata_agent[143960]:    log-tag     haproxy-metadata-proxy-35a27638-382c-4afb-83b0-edd6d7f4bca8
Dec  6 02:43:22 np0005548731 ovn_metadata_agent[143960]:    user        root
Dec  6 02:43:22 np0005548731 ovn_metadata_agent[143960]:    group       root
Dec  6 02:43:22 np0005548731 ovn_metadata_agent[143960]:    maxconn     1024
Dec  6 02:43:22 np0005548731 ovn_metadata_agent[143960]:    pidfile     /var/lib/neutron/external/pids/35a27638-382c-4afb-83b0-edd6d7f4bca8.pid.haproxy
Dec  6 02:43:22 np0005548731 ovn_metadata_agent[143960]:    daemon
Dec  6 02:43:22 np0005548731 ovn_metadata_agent[143960]: 
Dec  6 02:43:22 np0005548731 ovn_metadata_agent[143960]: defaults
Dec  6 02:43:22 np0005548731 ovn_metadata_agent[143960]:    log global
Dec  6 02:43:22 np0005548731 ovn_metadata_agent[143960]:    mode http
Dec  6 02:43:22 np0005548731 ovn_metadata_agent[143960]:    option httplog
Dec  6 02:43:22 np0005548731 ovn_metadata_agent[143960]:    option dontlognull
Dec  6 02:43:22 np0005548731 ovn_metadata_agent[143960]:    option http-server-close
Dec  6 02:43:22 np0005548731 ovn_metadata_agent[143960]:    option forwardfor
Dec  6 02:43:22 np0005548731 ovn_metadata_agent[143960]:    retries                 3
Dec  6 02:43:22 np0005548731 ovn_metadata_agent[143960]:    timeout http-request    30s
Dec  6 02:43:22 np0005548731 ovn_metadata_agent[143960]:    timeout connect         30s
Dec  6 02:43:22 np0005548731 ovn_metadata_agent[143960]:    timeout client          32s
Dec  6 02:43:22 np0005548731 ovn_metadata_agent[143960]:    timeout server          32s
Dec  6 02:43:22 np0005548731 ovn_metadata_agent[143960]:    timeout http-keep-alive 30s
Dec  6 02:43:22 np0005548731 ovn_metadata_agent[143960]: 
Dec  6 02:43:22 np0005548731 ovn_metadata_agent[143960]: 
Dec  6 02:43:22 np0005548731 ovn_metadata_agent[143960]: listen listener
Dec  6 02:43:22 np0005548731 ovn_metadata_agent[143960]:    bind 169.254.169.254:80
Dec  6 02:43:22 np0005548731 ovn_metadata_agent[143960]:    server metadata /var/lib/neutron/metadata_proxy
Dec  6 02:43:22 np0005548731 ovn_metadata_agent[143960]:    http-request add-header X-OVN-Network-ID 35a27638-382c-4afb-83b0-edd6d7f4bca8
Dec  6 02:43:22 np0005548731 ovn_metadata_agent[143960]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec  6 02:43:22 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:43:22.256 143965 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-35a27638-382c-4afb-83b0-edd6d7f4bca8', 'env', 'PROCESS_TAG=haproxy-35a27638-382c-4afb-83b0-edd6d7f4bca8', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/35a27638-382c-4afb-83b0-edd6d7f4bca8.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Dec  6 02:43:22 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:43:22 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:43:22 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:43:22.372 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:43:22 np0005548731 nova_compute[232433]: 2025-12-06 07:43:22.390 232437 DEBUG nova.virt.driver [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Emitting event <LifecycleEvent: 1765007002.3897216, 4c6cadbd-1c16-490c-aa8c-049b1595ca1b => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec  6 02:43:22 np0005548731 nova_compute[232433]: 2025-12-06 07:43:22.390 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 4c6cadbd-1c16-490c-aa8c-049b1595ca1b] VM Started (Lifecycle Event)
Dec  6 02:43:22 np0005548731 nova_compute[232433]: 2025-12-06 07:43:22.413 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 4c6cadbd-1c16-490c-aa8c-049b1595ca1b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec  6 02:43:22 np0005548731 nova_compute[232433]: 2025-12-06 07:43:22.417 232437 DEBUG nova.virt.driver [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Emitting event <LifecycleEvent: 1765007002.3920388, 4c6cadbd-1c16-490c-aa8c-049b1595ca1b => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec  6 02:43:22 np0005548731 nova_compute[232433]: 2025-12-06 07:43:22.417 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 4c6cadbd-1c16-490c-aa8c-049b1595ca1b] VM Paused (Lifecycle Event)
Dec  6 02:43:22 np0005548731 nova_compute[232433]: 2025-12-06 07:43:22.444 232437 DEBUG oslo_concurrency.processutils [None req-89fded64-0805-47d9-884c-cec8e8a186e4 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  6 02:43:22 np0005548731 nova_compute[232433]: 2025-12-06 07:43:22.470 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 4c6cadbd-1c16-490c-aa8c-049b1595ca1b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec  6 02:43:22 np0005548731 nova_compute[232433]: 2025-12-06 07:43:22.473 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 4c6cadbd-1c16-490c-aa8c-049b1595ca1b] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec  6 02:43:22 np0005548731 nova_compute[232433]: 2025-12-06 07:43:22.490 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 4c6cadbd-1c16-490c-aa8c-049b1595ca1b] During sync_power_state the instance has a pending task (spawning). Skip.
Dec  6 02:43:22 np0005548731 nova_compute[232433]: 2025-12-06 07:43:22.586 232437 DEBUG nova.network.neutron [req-7f32edc7-f38a-4e1f-a1ad-2494d22db066 req-c02716a1-209f-4bad-8639-ac3d5cee77e2 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 4c6cadbd-1c16-490c-aa8c-049b1595ca1b] Updated VIF entry in instance network info cache for port 59bb701f-9125-43d2-9023-6af0a1f18fea. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec  6 02:43:22 np0005548731 nova_compute[232433]: 2025-12-06 07:43:22.586 232437 DEBUG nova.network.neutron [req-7f32edc7-f38a-4e1f-a1ad-2494d22db066 req-c02716a1-209f-4bad-8639-ac3d5cee77e2 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 4c6cadbd-1c16-490c-aa8c-049b1595ca1b] Updating instance_info_cache with network_info: [{"id": "59bb701f-9125-43d2-9023-6af0a1f18fea", "address": "fa:16:3e:e0:ff:4a", "network": {"id": "35a27638-382c-4afb-83b0-edd6d7f4bca8", "bridge": "br-int", "label": "tempest-ServersTestJSON-1603796324-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eff1f6a1654b45079de20eddb830e76d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap59bb701f-91", "ovs_interfaceid": "59bb701f-9125-43d2-9023-6af0a1f18fea", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec  6 02:43:22 np0005548731 nova_compute[232433]: 2025-12-06 07:43:22.603 232437 DEBUG oslo_concurrency.lockutils [req-7f32edc7-f38a-4e1f-a1ad-2494d22db066 req-c02716a1-209f-4bad-8639-ac3d5cee77e2 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-4c6cadbd-1c16-490c-aa8c-049b1595ca1b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec  6 02:43:22 np0005548731 podman[298662]: 2025-12-06 07:43:22.58835837 +0000 UTC m=+0.026130429 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec  6 02:43:22 np0005548731 nova_compute[232433]: 2025-12-06 07:43:22.728 232437 DEBUG nova.compute.manager [req-6e644a1c-889d-48a5-9fd7-8ba6852fd51d req-c7c2f472-8b14-4d91-9089-e8620ff8530f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 4c6cadbd-1c16-490c-aa8c-049b1595ca1b] Received event network-vif-plugged-59bb701f-9125-43d2-9023-6af0a1f18fea external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec  6 02:43:22 np0005548731 nova_compute[232433]: 2025-12-06 07:43:22.729 232437 DEBUG oslo_concurrency.lockutils [req-6e644a1c-889d-48a5-9fd7-8ba6852fd51d req-c7c2f472-8b14-4d91-9089-e8620ff8530f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "4c6cadbd-1c16-490c-aa8c-049b1595ca1b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  6 02:43:22 np0005548731 nova_compute[232433]: 2025-12-06 07:43:22.729 232437 DEBUG oslo_concurrency.lockutils [req-6e644a1c-889d-48a5-9fd7-8ba6852fd51d req-c7c2f472-8b14-4d91-9089-e8620ff8530f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "4c6cadbd-1c16-490c-aa8c-049b1595ca1b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  6 02:43:22 np0005548731 nova_compute[232433]: 2025-12-06 07:43:22.730 232437 DEBUG oslo_concurrency.lockutils [req-6e644a1c-889d-48a5-9fd7-8ba6852fd51d req-c7c2f472-8b14-4d91-9089-e8620ff8530f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "4c6cadbd-1c16-490c-aa8c-049b1595ca1b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  6 02:43:22 np0005548731 nova_compute[232433]: 2025-12-06 07:43:22.730 232437 DEBUG nova.compute.manager [req-6e644a1c-889d-48a5-9fd7-8ba6852fd51d req-c7c2f472-8b14-4d91-9089-e8620ff8530f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 4c6cadbd-1c16-490c-aa8c-049b1595ca1b] Processing event network-vif-plugged-59bb701f-9125-43d2-9023-6af0a1f18fea _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec  6 02:43:22 np0005548731 nova_compute[232433]: 2025-12-06 07:43:22.731 232437 DEBUG nova.compute.manager [None req-16071c71-f9d3-45f4-a64d-de26d1bb2ee1 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] [instance: 4c6cadbd-1c16-490c-aa8c-049b1595ca1b] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec  6 02:43:22 np0005548731 nova_compute[232433]: 2025-12-06 07:43:22.734 232437 DEBUG nova.virt.driver [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Emitting event <LifecycleEvent: 1765007002.7346416, 4c6cadbd-1c16-490c-aa8c-049b1595ca1b => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec  6 02:43:22 np0005548731 nova_compute[232433]: 2025-12-06 07:43:22.735 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 4c6cadbd-1c16-490c-aa8c-049b1595ca1b] VM Resumed (Lifecycle Event)
Dec  6 02:43:22 np0005548731 nova_compute[232433]: 2025-12-06 07:43:22.737 232437 DEBUG nova.virt.libvirt.driver [None req-16071c71-f9d3-45f4-a64d-de26d1bb2ee1 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] [instance: 4c6cadbd-1c16-490c-aa8c-049b1595ca1b] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec  6 02:43:22 np0005548731 nova_compute[232433]: 2025-12-06 07:43:22.741 232437 INFO nova.virt.libvirt.driver [-] [instance: 4c6cadbd-1c16-490c-aa8c-049b1595ca1b] Instance spawned successfully.
Dec  6 02:43:22 np0005548731 nova_compute[232433]: 2025-12-06 07:43:22.741 232437 DEBUG nova.virt.libvirt.driver [None req-16071c71-f9d3-45f4-a64d-de26d1bb2ee1 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] [instance: 4c6cadbd-1c16-490c-aa8c-049b1595ca1b] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec  6 02:43:22 np0005548731 nova_compute[232433]: 2025-12-06 07:43:22.754 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 4c6cadbd-1c16-490c-aa8c-049b1595ca1b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec  6 02:43:22 np0005548731 nova_compute[232433]: 2025-12-06 07:43:22.767 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 4c6cadbd-1c16-490c-aa8c-049b1595ca1b] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec  6 02:43:22 np0005548731 nova_compute[232433]: 2025-12-06 07:43:22.769 232437 DEBUG nova.virt.libvirt.driver [None req-16071c71-f9d3-45f4-a64d-de26d1bb2ee1 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] [instance: 4c6cadbd-1c16-490c-aa8c-049b1595ca1b] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec  6 02:43:22 np0005548731 nova_compute[232433]: 2025-12-06 07:43:22.770 232437 DEBUG nova.virt.libvirt.driver [None req-16071c71-f9d3-45f4-a64d-de26d1bb2ee1 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] [instance: 4c6cadbd-1c16-490c-aa8c-049b1595ca1b] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec  6 02:43:22 np0005548731 nova_compute[232433]: 2025-12-06 07:43:22.770 232437 DEBUG nova.virt.libvirt.driver [None req-16071c71-f9d3-45f4-a64d-de26d1bb2ee1 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] [instance: 4c6cadbd-1c16-490c-aa8c-049b1595ca1b] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec  6 02:43:22 np0005548731 nova_compute[232433]: 2025-12-06 07:43:22.770 232437 DEBUG nova.virt.libvirt.driver [None req-16071c71-f9d3-45f4-a64d-de26d1bb2ee1 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] [instance: 4c6cadbd-1c16-490c-aa8c-049b1595ca1b] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec  6 02:43:22 np0005548731 nova_compute[232433]: 2025-12-06 07:43:22.771 232437 DEBUG nova.virt.libvirt.driver [None req-16071c71-f9d3-45f4-a64d-de26d1bb2ee1 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] [instance: 4c6cadbd-1c16-490c-aa8c-049b1595ca1b] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec  6 02:43:22 np0005548731 nova_compute[232433]: 2025-12-06 07:43:22.771 232437 DEBUG nova.virt.libvirt.driver [None req-16071c71-f9d3-45f4-a64d-de26d1bb2ee1 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] [instance: 4c6cadbd-1c16-490c-aa8c-049b1595ca1b] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec  6 02:43:22 np0005548731 nova_compute[232433]: 2025-12-06 07:43:22.792 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 4c6cadbd-1c16-490c-aa8c-049b1595ca1b] During sync_power_state the instance has a pending task (spawning). Skip.
Dec  6 02:43:22 np0005548731 nova_compute[232433]: 2025-12-06 07:43:22.829 232437 INFO nova.compute.manager [None req-16071c71-f9d3-45f4-a64d-de26d1bb2ee1 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] [instance: 4c6cadbd-1c16-490c-aa8c-049b1595ca1b] Took 8.49 seconds to spawn the instance on the hypervisor.
Dec  6 02:43:22 np0005548731 nova_compute[232433]: 2025-12-06 07:43:22.829 232437 DEBUG nova.compute.manager [None req-16071c71-f9d3-45f4-a64d-de26d1bb2ee1 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] [instance: 4c6cadbd-1c16-490c-aa8c-049b1595ca1b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec  6 02:43:22 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 02:43:22 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3041811195' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 02:43:22 np0005548731 nova_compute[232433]: 2025-12-06 07:43:22.864 232437 DEBUG oslo_concurrency.processutils [None req-89fded64-0805-47d9-884c-cec8e8a186e4 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.420s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  6 02:43:22 np0005548731 nova_compute[232433]: 2025-12-06 07:43:22.870 232437 DEBUG nova.compute.provider_tree [None req-89fded64-0805-47d9-884c-cec8e8a186e4 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] Inventory has not changed in ProviderTree for provider: 6d00757a-082f-486d-ae84-869a2ba2e6e7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec  6 02:43:22 np0005548731 nova_compute[232433]: 2025-12-06 07:43:22.886 232437 DEBUG nova.scheduler.client.report [None req-89fded64-0805-47d9-884c-cec8e8a186e4 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] Inventory has not changed for provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec  6 02:43:22 np0005548731 nova_compute[232433]: 2025-12-06 07:43:22.891 232437 INFO nova.compute.manager [None req-16071c71-f9d3-45f4-a64d-de26d1bb2ee1 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] [instance: 4c6cadbd-1c16-490c-aa8c-049b1595ca1b] Took 9.45 seconds to build instance.
Dec  6 02:43:22 np0005548731 nova_compute[232433]: 2025-12-06 07:43:22.915 232437 DEBUG oslo_concurrency.lockutils [None req-89fded64-0805-47d9-884c-cec8e8a186e4 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.767s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  6 02:43:22 np0005548731 nova_compute[232433]: 2025-12-06 07:43:22.918 232437 DEBUG oslo_concurrency.lockutils [None req-16071c71-f9d3-45f4-a64d-de26d1bb2ee1 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Lock "4c6cadbd-1c16-490c-aa8c-049b1595ca1b" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.544s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  6 02:43:22 np0005548731 nova_compute[232433]: 2025-12-06 07:43:22.945 232437 INFO nova.scheduler.client.report [None req-89fded64-0805-47d9-884c-cec8e8a186e4 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] Deleted allocations for instance e5068414-3119-4041-97c2-e4cef3bb779a
Dec  6 02:43:22 np0005548731 nova_compute[232433]: 2025-12-06 07:43:22.976 232437 DEBUG nova.compute.manager [req-f4c9d768-52df-4982-ab2d-0440f7141a7b req-430b0345-44e5-46a5-b190-3ee08e344a39 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: e5068414-3119-4041-97c2-e4cef3bb779a] Received event network-vif-plugged-9150edb9-25c0-443e-baec-8ba956bec23f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec  6 02:43:22 np0005548731 nova_compute[232433]: 2025-12-06 07:43:22.976 232437 DEBUG oslo_concurrency.lockutils [req-f4c9d768-52df-4982-ab2d-0440f7141a7b req-430b0345-44e5-46a5-b190-3ee08e344a39 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "e5068414-3119-4041-97c2-e4cef3bb779a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  6 02:43:22 np0005548731 nova_compute[232433]: 2025-12-06 07:43:22.976 232437 DEBUG oslo_concurrency.lockutils [req-f4c9d768-52df-4982-ab2d-0440f7141a7b req-430b0345-44e5-46a5-b190-3ee08e344a39 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "e5068414-3119-4041-97c2-e4cef3bb779a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  6 02:43:22 np0005548731 nova_compute[232433]: 2025-12-06 07:43:22.976 232437 DEBUG oslo_concurrency.lockutils [req-f4c9d768-52df-4982-ab2d-0440f7141a7b req-430b0345-44e5-46a5-b190-3ee08e344a39 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "e5068414-3119-4041-97c2-e4cef3bb779a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  6 02:43:22 np0005548731 nova_compute[232433]: 2025-12-06 07:43:22.977 232437 DEBUG nova.compute.manager [req-f4c9d768-52df-4982-ab2d-0440f7141a7b req-430b0345-44e5-46a5-b190-3ee08e344a39 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: e5068414-3119-4041-97c2-e4cef3bb779a] No waiting events found dispatching network-vif-plugged-9150edb9-25c0-443e-baec-8ba956bec23f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec  6 02:43:22 np0005548731 nova_compute[232433]: 2025-12-06 07:43:22.977 232437 WARNING nova.compute.manager [req-f4c9d768-52df-4982-ab2d-0440f7141a7b req-430b0345-44e5-46a5-b190-3ee08e344a39 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: e5068414-3119-4041-97c2-e4cef3bb779a] Received unexpected event network-vif-plugged-9150edb9-25c0-443e-baec-8ba956bec23f for instance with vm_state deleted and task_state None.
Dec  6 02:43:23 np0005548731 nova_compute[232433]: 2025-12-06 07:43:23.021 232437 DEBUG oslo_concurrency.lockutils [None req-89fded64-0805-47d9-884c-cec8e8a186e4 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] Lock "e5068414-3119-4041-97c2-e4cef3bb779a" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.484s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  6 02:43:23 np0005548731 nova_compute[232433]: 2025-12-06 07:43:23.105 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  6 02:43:23 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:43:23 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:43:23 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:43:23.227 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:43:23 np0005548731 nova_compute[232433]: 2025-12-06 07:43:23.289 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 02:43:24 np0005548731 nova_compute[232433]: 2025-12-06 07:43:24.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  6 02:43:24 np0005548731 nova_compute[232433]: 2025-12-06 07:43:24.139 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  6 02:43:24 np0005548731 nova_compute[232433]: 2025-12-06 07:43:24.139 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  6 02:43:24 np0005548731 nova_compute[232433]: 2025-12-06 07:43:24.139 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  6 02:43:24 np0005548731 nova_compute[232433]: 2025-12-06 07:43:24.139 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec  6 02:43:24 np0005548731 nova_compute[232433]: 2025-12-06 07:43:24.140 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  6 02:43:24 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:43:24 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:43:24 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:43:24.374 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:43:24 np0005548731 nova_compute[232433]: 2025-12-06 07:43:24.901 232437 DEBUG nova.compute.manager [req-77e299c0-1d48-4a26-872d-09c5d600e731 req-1ae793b9-abb3-4715-b331-70ee11ea18d1 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 4c6cadbd-1c16-490c-aa8c-049b1595ca1b] Received event network-vif-plugged-59bb701f-9125-43d2-9023-6af0a1f18fea external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:43:24 np0005548731 nova_compute[232433]: 2025-12-06 07:43:24.903 232437 DEBUG oslo_concurrency.lockutils [req-77e299c0-1d48-4a26-872d-09c5d600e731 req-1ae793b9-abb3-4715-b331-70ee11ea18d1 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "4c6cadbd-1c16-490c-aa8c-049b1595ca1b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:43:24 np0005548731 nova_compute[232433]: 2025-12-06 07:43:24.903 232437 DEBUG oslo_concurrency.lockutils [req-77e299c0-1d48-4a26-872d-09c5d600e731 req-1ae793b9-abb3-4715-b331-70ee11ea18d1 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "4c6cadbd-1c16-490c-aa8c-049b1595ca1b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:43:24 np0005548731 nova_compute[232433]: 2025-12-06 07:43:24.904 232437 DEBUG oslo_concurrency.lockutils [req-77e299c0-1d48-4a26-872d-09c5d600e731 req-1ae793b9-abb3-4715-b331-70ee11ea18d1 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "4c6cadbd-1c16-490c-aa8c-049b1595ca1b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:43:24 np0005548731 nova_compute[232433]: 2025-12-06 07:43:24.904 232437 DEBUG nova.compute.manager [req-77e299c0-1d48-4a26-872d-09c5d600e731 req-1ae793b9-abb3-4715-b331-70ee11ea18d1 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 4c6cadbd-1c16-490c-aa8c-049b1595ca1b] No waiting events found dispatching network-vif-plugged-59bb701f-9125-43d2-9023-6af0a1f18fea pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 02:43:24 np0005548731 nova_compute[232433]: 2025-12-06 07:43:24.904 232437 WARNING nova.compute.manager [req-77e299c0-1d48-4a26-872d-09c5d600e731 req-1ae793b9-abb3-4715-b331-70ee11ea18d1 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 4c6cadbd-1c16-490c-aa8c-049b1595ca1b] Received unexpected event network-vif-plugged-59bb701f-9125-43d2-9023-6af0a1f18fea for instance with vm_state active and task_state None.#033[00m
Dec  6 02:43:24 np0005548731 podman[298662]: 2025-12-06 07:43:24.973586954 +0000 UTC m=+2.411359013 container create 8a53cb9cad1e564374e58e115081483925e265d9474b01437901806ad334e13c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-35a27638-382c-4afb-83b0-edd6d7f4bca8, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Dec  6 02:43:24 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e328 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:43:25 np0005548731 systemd[1]: Started libpod-conmon-8a53cb9cad1e564374e58e115081483925e265d9474b01437901806ad334e13c.scope.
Dec  6 02:43:25 np0005548731 systemd[1]: Started libcrun container.
Dec  6 02:43:25 np0005548731 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/76fe1bcbce48424da29d12c18955903a64bd43447623e964696dedd86c8d1828/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec  6 02:43:25 np0005548731 podman[298662]: 2025-12-06 07:43:25.080994893 +0000 UTC m=+2.518766982 container init 8a53cb9cad1e564374e58e115081483925e265d9474b01437901806ad334e13c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-35a27638-382c-4afb-83b0-edd6d7f4bca8, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3)
Dec  6 02:43:25 np0005548731 podman[298662]: 2025-12-06 07:43:25.086872776 +0000 UTC m=+2.524644835 container start 8a53cb9cad1e564374e58e115081483925e265d9474b01437901806ad334e13c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-35a27638-382c-4afb-83b0-edd6d7f4bca8, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  6 02:43:25 np0005548731 neutron-haproxy-ovnmeta-35a27638-382c-4afb-83b0-edd6d7f4bca8[298717]: [NOTICE]   (298721) : New worker (298723) forked
Dec  6 02:43:25 np0005548731 neutron-haproxy-ovnmeta-35a27638-382c-4afb-83b0-edd6d7f4bca8[298717]: [NOTICE]   (298721) : Loading success.
Dec  6 02:43:25 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:43:25 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.003000071s ======
Dec  6 02:43:25 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:43:25.230 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.003000071s
Dec  6 02:43:25 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 02:43:25 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2786445463' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 02:43:25 np0005548731 nova_compute[232433]: 2025-12-06 07:43:25.348 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.209s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:43:25 np0005548731 nova_compute[232433]: 2025-12-06 07:43:25.427 232437 DEBUG nova.virt.libvirt.driver [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] skipping disk for instance-00000097 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  6 02:43:25 np0005548731 nova_compute[232433]: 2025-12-06 07:43:25.427 232437 DEBUG nova.virt.libvirt.driver [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] skipping disk for instance-00000097 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  6 02:43:25 np0005548731 nova_compute[232433]: 2025-12-06 07:43:25.597 232437 WARNING nova.virt.libvirt.driver [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  6 02:43:25 np0005548731 nova_compute[232433]: 2025-12-06 07:43:25.598 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4158MB free_disk=20.796375274658203GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  6 02:43:25 np0005548731 nova_compute[232433]: 2025-12-06 07:43:25.598 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:43:25 np0005548731 nova_compute[232433]: 2025-12-06 07:43:25.599 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:43:25 np0005548731 nova_compute[232433]: 2025-12-06 07:43:25.661 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Instance 4c6cadbd-1c16-490c-aa8c-049b1595ca1b actively managed on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128, 'DISK_GB': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  6 02:43:25 np0005548731 nova_compute[232433]: 2025-12-06 07:43:25.662 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  6 02:43:25 np0005548731 nova_compute[232433]: 2025-12-06 07:43:25.662 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  6 02:43:25 np0005548731 nova_compute[232433]: 2025-12-06 07:43:25.699 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:43:25 np0005548731 nova_compute[232433]: 2025-12-06 07:43:25.840 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:43:25 np0005548731 nova_compute[232433]: 2025-12-06 07:43:25.874 232437 DEBUG oslo_concurrency.lockutils [None req-25c2a4a0-bdbc-4252-8cfe-e85144516653 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Acquiring lock "4c6cadbd-1c16-490c-aa8c-049b1595ca1b" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:43:25 np0005548731 nova_compute[232433]: 2025-12-06 07:43:25.875 232437 DEBUG oslo_concurrency.lockutils [None req-25c2a4a0-bdbc-4252-8cfe-e85144516653 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Lock "4c6cadbd-1c16-490c-aa8c-049b1595ca1b" acquired by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:43:25 np0005548731 nova_compute[232433]: 2025-12-06 07:43:25.875 232437 DEBUG nova.compute.manager [None req-25c2a4a0-bdbc-4252-8cfe-e85144516653 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] [instance: 4c6cadbd-1c16-490c-aa8c-049b1595ca1b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:43:25 np0005548731 nova_compute[232433]: 2025-12-06 07:43:25.879 232437 DEBUG nova.compute.manager [None req-25c2a4a0-bdbc-4252-8cfe-e85144516653 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] [instance: 4c6cadbd-1c16-490c-aa8c-049b1595ca1b] Stopping instance; current vm_state: active, current task_state: powering-off, current DB power_state: 1, current VM power_state: 1 do_stop_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3338#033[00m
Dec  6 02:43:25 np0005548731 nova_compute[232433]: 2025-12-06 07:43:25.880 232437 DEBUG nova.objects.instance [None req-25c2a4a0-bdbc-4252-8cfe-e85144516653 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Lazy-loading 'flavor' on Instance uuid 4c6cadbd-1c16-490c-aa8c-049b1595ca1b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:43:25 np0005548731 nova_compute[232433]: 2025-12-06 07:43:25.915 232437 DEBUG nova.virt.libvirt.driver [None req-25c2a4a0-bdbc-4252-8cfe-e85144516653 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] [instance: 4c6cadbd-1c16-490c-aa8c-049b1595ca1b] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Dec  6 02:43:26 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e329 e329: 3 total, 3 up, 3 in
Dec  6 02:43:26 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 02:43:26 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/4128211629' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 02:43:26 np0005548731 nova_compute[232433]: 2025-12-06 07:43:26.138 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.439s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:43:26 np0005548731 nova_compute[232433]: 2025-12-06 07:43:26.147 232437 DEBUG nova.compute.provider_tree [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Inventory has not changed in ProviderTree for provider: 6d00757a-082f-486d-ae84-869a2ba2e6e7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  6 02:43:26 np0005548731 nova_compute[232433]: 2025-12-06 07:43:26.167 232437 DEBUG nova.scheduler.client.report [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Inventory has not changed for provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  6 02:43:26 np0005548731 nova_compute[232433]: 2025-12-06 07:43:26.186 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  6 02:43:26 np0005548731 nova_compute[232433]: 2025-12-06 07:43:26.187 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.588s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:43:26 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:43:26 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 02:43:26 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:43:26.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 02:43:27 np0005548731 nova_compute[232433]: 2025-12-06 07:43:27.186 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:43:27 np0005548731 nova_compute[232433]: 2025-12-06 07:43:27.217 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:43:27 np0005548731 nova_compute[232433]: 2025-12-06 07:43:27.217 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:43:27 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:43:27 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:43:27 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:43:27.233 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:43:28 np0005548731 nova_compute[232433]: 2025-12-06 07:43:28.293 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:43:28 np0005548731 ceph-mon[77458]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #106. Immutable memtables: 0.
Dec  6 02:43:28 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:43:28.369613) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec  6 02:43:28 np0005548731 ceph-mon[77458]: rocksdb: [db/flush_job.cc:856] [default] [JOB 65] Flushing memtable with next log file: 106
Dec  6 02:43:28 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765007008369665, "job": 65, "event": "flush_started", "num_memtables": 1, "num_entries": 861, "num_deletes": 251, "total_data_size": 1650390, "memory_usage": 1668720, "flush_reason": "Manual Compaction"}
Dec  6 02:43:28 np0005548731 ceph-mon[77458]: rocksdb: [db/flush_job.cc:885] [default] [JOB 65] Level-0 flush table #107: started
Dec  6 02:43:28 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:43:28 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:43:28 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:43:28.377 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:43:28 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765007008382332, "cf_name": "default", "job": 65, "event": "table_file_creation", "file_number": 107, "file_size": 1077711, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 55519, "largest_seqno": 56374, "table_properties": {"data_size": 1073528, "index_size": 1835, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1285, "raw_key_size": 9844, "raw_average_key_size": 20, "raw_value_size": 1065006, "raw_average_value_size": 2182, "num_data_blocks": 79, "num_entries": 488, "num_filter_entries": 488, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765006955, "oldest_key_time": 1765006955, "file_creation_time": 1765007008, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bc01f9a1-fda8-4008-aee1-1fa5ff4371a1", "db_session_id": "CWBL8WMDM4D79BU86E9T", "orig_file_number": 107, "seqno_to_time_mapping": "N/A"}}
Dec  6 02:43:28 np0005548731 ceph-mon[77458]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 65] Flush lasted 13015 microseconds, and 5323 cpu microseconds.
Dec  6 02:43:28 np0005548731 ceph-mon[77458]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  6 02:43:28 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:43:28.382628) [db/flush_job.cc:967] [default] [JOB 65] Level-0 flush table #107: 1077711 bytes OK
Dec  6 02:43:28 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:43:28.382753) [db/memtable_list.cc:519] [default] Level-0 commit table #107 started
Dec  6 02:43:28 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:43:28.384514) [db/memtable_list.cc:722] [default] Level-0 commit table #107: memtable #1 done
Dec  6 02:43:28 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:43:28.384530) EVENT_LOG_v1 {"time_micros": 1765007008384525, "job": 65, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec  6 02:43:28 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:43:28.384577) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec  6 02:43:28 np0005548731 ceph-mon[77458]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 65] Try to delete WAL files size 1645888, prev total WAL file size 1645888, number of live WAL files 2.
Dec  6 02:43:28 np0005548731 ceph-mon[77458]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000103.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  6 02:43:28 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:43:28.386175) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730034353138' seq:72057594037927935, type:22 .. '7061786F730034373730' seq:0, type:0; will stop at (end)
Dec  6 02:43:28 np0005548731 ceph-mon[77458]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 66] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec  6 02:43:28 np0005548731 ceph-mon[77458]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 65 Base level 0, inputs: [107(1052KB)], [105(11MB)]
Dec  6 02:43:28 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765007008386220, "job": 66, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [107], "files_L6": [105], "score": -1, "input_data_size": 13052736, "oldest_snapshot_seqno": -1}
Dec  6 02:43:28 np0005548731 ceph-mon[77458]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 66] Generated table #108: 8841 keys, 10855413 bytes, temperature: kUnknown
Dec  6 02:43:28 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765007008464353, "cf_name": "default", "job": 66, "event": "table_file_creation", "file_number": 108, "file_size": 10855413, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10799225, "index_size": 32972, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 22149, "raw_key_size": 229594, "raw_average_key_size": 25, "raw_value_size": 10644909, "raw_average_value_size": 1204, "num_data_blocks": 1284, "num_entries": 8841, "num_filter_entries": 8841, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765002564, "oldest_key_time": 0, "file_creation_time": 1765007008, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bc01f9a1-fda8-4008-aee1-1fa5ff4371a1", "db_session_id": "CWBL8WMDM4D79BU86E9T", "orig_file_number": 108, "seqno_to_time_mapping": "N/A"}}
Dec  6 02:43:28 np0005548731 ceph-mon[77458]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  6 02:43:28 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:43:28.464806) [db/compaction/compaction_job.cc:1663] [default] [JOB 66] Compacted 1@0 + 1@6 files to L6 => 10855413 bytes
Dec  6 02:43:28 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:43:28.466248) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 166.5 rd, 138.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.0, 11.4 +0.0 blob) out(10.4 +0.0 blob), read-write-amplify(22.2) write-amplify(10.1) OK, records in: 9362, records dropped: 521 output_compression: NoCompression
Dec  6 02:43:28 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:43:28.466270) EVENT_LOG_v1 {"time_micros": 1765007008466261, "job": 66, "event": "compaction_finished", "compaction_time_micros": 78402, "compaction_time_cpu_micros": 39167, "output_level": 6, "num_output_files": 1, "total_output_size": 10855413, "num_input_records": 9362, "num_output_records": 8841, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec  6 02:43:28 np0005548731 ceph-mon[77458]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000107.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  6 02:43:28 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765007008467083, "job": 66, "event": "table_file_deletion", "file_number": 107}
Dec  6 02:43:28 np0005548731 ceph-mon[77458]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000105.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  6 02:43:28 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765007008469251, "job": 66, "event": "table_file_deletion", "file_number": 105}
Dec  6 02:43:28 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:43:28.386128) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 02:43:28 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:43:28.469347) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 02:43:28 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:43:28.469351) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 02:43:28 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:43:28.469352) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 02:43:28 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:43:28.469353) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 02:43:28 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:43:28.469355) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 02:43:29 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:43:29 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:43:29 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:43:29.235 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:43:29 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e330 e330: 3 total, 3 up, 3 in
Dec  6 02:43:29 np0005548731 ceph-mon[77458]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #109. Immutable memtables: 0.
Dec  6 02:43:29 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:43:29.356868) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec  6 02:43:29 np0005548731 ceph-mon[77458]: rocksdb: [db/flush_job.cc:856] [default] [JOB 67] Flushing memtable with next log file: 109
Dec  6 02:43:29 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765007009356904, "job": 67, "event": "flush_started", "num_memtables": 1, "num_entries": 291, "num_deletes": 260, "total_data_size": 96589, "memory_usage": 103672, "flush_reason": "Manual Compaction"}
Dec  6 02:43:29 np0005548731 ceph-mon[77458]: rocksdb: [db/flush_job.cc:885] [default] [JOB 67] Level-0 flush table #110: started
Dec  6 02:43:29 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765007009358737, "cf_name": "default", "job": 67, "event": "table_file_creation", "file_number": 110, "file_size": 63496, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 56379, "largest_seqno": 56665, "table_properties": {"data_size": 61528, "index_size": 132, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 773, "raw_key_size": 4999, "raw_average_key_size": 17, "raw_value_size": 57554, "raw_average_value_size": 204, "num_data_blocks": 6, "num_entries": 282, "num_filter_entries": 282, "num_deletions": 260, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765007009, "oldest_key_time": 1765007009, "file_creation_time": 1765007009, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bc01f9a1-fda8-4008-aee1-1fa5ff4371a1", "db_session_id": "CWBL8WMDM4D79BU86E9T", "orig_file_number": 110, "seqno_to_time_mapping": "N/A"}}
Dec  6 02:43:29 np0005548731 ceph-mon[77458]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 67] Flush lasted 1901 microseconds, and 801 cpu microseconds.
Dec  6 02:43:29 np0005548731 ceph-mon[77458]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  6 02:43:29 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:43:29.358767) [db/flush_job.cc:967] [default] [JOB 67] Level-0 flush table #110: 63496 bytes OK
Dec  6 02:43:29 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:43:29.358783) [db/memtable_list.cc:519] [default] Level-0 commit table #110 started
Dec  6 02:43:29 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:43:29.360305) [db/memtable_list.cc:722] [default] Level-0 commit table #110: memtable #1 done
Dec  6 02:43:29 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:43:29.360319) EVENT_LOG_v1 {"time_micros": 1765007009360315, "job": 67, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec  6 02:43:29 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:43:29.360333) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec  6 02:43:29 np0005548731 ceph-mon[77458]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 67] Try to delete WAL files size 94376, prev total WAL file size 94376, number of live WAL files 2.
Dec  6 02:43:29 np0005548731 ceph-mon[77458]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000106.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  6 02:43:29 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:43:29.360711) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0031373638' seq:72057594037927935, type:22 .. '6C6F676D0032303234' seq:0, type:0; will stop at (end)
Dec  6 02:43:29 np0005548731 ceph-mon[77458]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 68] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec  6 02:43:29 np0005548731 ceph-mon[77458]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 67 Base level 0, inputs: [110(62KB)], [108(10MB)]
Dec  6 02:43:29 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765007009360826, "job": 68, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [110], "files_L6": [108], "score": -1, "input_data_size": 10918909, "oldest_snapshot_seqno": -1}
Dec  6 02:43:29 np0005548731 ceph-mon[77458]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 68] Generated table #111: 8592 keys, 10772426 bytes, temperature: kUnknown
Dec  6 02:43:29 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765007009423612, "cf_name": "default", "job": 68, "event": "table_file_creation", "file_number": 111, "file_size": 10772426, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10717460, "index_size": 32390, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 21509, "raw_key_size": 225413, "raw_average_key_size": 26, "raw_value_size": 10566838, "raw_average_value_size": 1229, "num_data_blocks": 1255, "num_entries": 8592, "num_filter_entries": 8592, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765002564, "oldest_key_time": 0, "file_creation_time": 1765007009, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bc01f9a1-fda8-4008-aee1-1fa5ff4371a1", "db_session_id": "CWBL8WMDM4D79BU86E9T", "orig_file_number": 111, "seqno_to_time_mapping": "N/A"}}
Dec  6 02:43:29 np0005548731 ceph-mon[77458]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  6 02:43:29 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:43:29.423849) [db/compaction/compaction_job.cc:1663] [default] [JOB 68] Compacted 1@0 + 1@6 files to L6 => 10772426 bytes
Dec  6 02:43:29 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:43:29.425347) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 173.7 rd, 171.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.1, 10.4 +0.0 blob) out(10.3 +0.0 blob), read-write-amplify(341.6) write-amplify(169.7) OK, records in: 9123, records dropped: 531 output_compression: NoCompression
Dec  6 02:43:29 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:43:29.425363) EVENT_LOG_v1 {"time_micros": 1765007009425356, "job": 68, "event": "compaction_finished", "compaction_time_micros": 62845, "compaction_time_cpu_micros": 25746, "output_level": 6, "num_output_files": 1, "total_output_size": 10772426, "num_input_records": 9123, "num_output_records": 8592, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec  6 02:43:29 np0005548731 ceph-mon[77458]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000110.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  6 02:43:29 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765007009425615, "job": 68, "event": "table_file_deletion", "file_number": 110}
Dec  6 02:43:29 np0005548731 ceph-mon[77458]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000108.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  6 02:43:29 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765007009427218, "job": 68, "event": "table_file_deletion", "file_number": 108}
Dec  6 02:43:29 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:43:29.360571) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 02:43:29 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:43:29.427266) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 02:43:29 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:43:29.427270) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 02:43:29 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:43:29.427271) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 02:43:29 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:43:29.427273) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 02:43:29 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:43:29.427274) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 02:43:29 np0005548731 nova_compute[232433]: 2025-12-06 07:43:29.685 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:43:29 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:43:29.684 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=57, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:ec:b3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '32:72:e7:89:e0:7d'}, ipsec=False) old=SB_Global(nb_cfg=56) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 02:43:29 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:43:29.685 143965 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec  6 02:43:29 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e330 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:43:30 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e331 e331: 3 total, 3 up, 3 in
Dec  6 02:43:30 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:43:30 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:43:30 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:43:30.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:43:30 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:43:30.687 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=9f96b960-b4f2-40bd-ae99-08121f5e8b78, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '57'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:43:30 np0005548731 nova_compute[232433]: 2025-12-06 07:43:30.843 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:43:31 np0005548731 nova_compute[232433]: 2025-12-06 07:43:31.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:43:31 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:43:31 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:43:31 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:43:31.238 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:43:32 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:43:32 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:43:32 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:43:32.380 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:43:33 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:43:33 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:43:33 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:43:33.240 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:43:33 np0005548731 nova_compute[232433]: 2025-12-06 07:43:33.294 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:43:34 np0005548731 nova_compute[232433]: 2025-12-06 07:43:34.174 232437 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765006999.173303, e5068414-3119-4041-97c2-e4cef3bb779a => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 02:43:34 np0005548731 nova_compute[232433]: 2025-12-06 07:43:34.174 232437 INFO nova.compute.manager [-] [instance: e5068414-3119-4041-97c2-e4cef3bb779a] VM Stopped (Lifecycle Event)#033[00m
Dec  6 02:43:34 np0005548731 nova_compute[232433]: 2025-12-06 07:43:34.198 232437 DEBUG nova.compute.manager [None req-cb84d9c7-80e1-4018-8481-0e9702eb8662 - - - - - -] [instance: e5068414-3119-4041-97c2-e4cef3bb779a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:43:34 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:43:34 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:43:34 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:43:34.383 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:43:34 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e331 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:43:35 np0005548731 nova_compute[232433]: 2025-12-06 07:43:35.039 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:43:35 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:43:35 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:43:35 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:43:35.242 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:43:35 np0005548731 ceph-osd[80147]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #54. Immutable memtables: 10.
Dec  6 02:43:35 np0005548731 nova_compute[232433]: 2025-12-06 07:43:35.846 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:43:35 np0005548731 nova_compute[232433]: 2025-12-06 07:43:35.951 232437 DEBUG nova.virt.libvirt.driver [None req-25c2a4a0-bdbc-4252-8cfe-e85144516653 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] [instance: 4c6cadbd-1c16-490c-aa8c-049b1595ca1b] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101#033[00m
Dec  6 02:43:36 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:43:36 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:43:36 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:43:36.385 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:43:37 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:43:37 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:43:37 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:43:37.245 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:43:38 np0005548731 ovn_controller[133927]: 2025-12-06T07:43:38Z|00091|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:e0:ff:4a 10.100.0.3
Dec  6 02:43:38 np0005548731 ovn_controller[133927]: 2025-12-06T07:43:38Z|00092|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:e0:ff:4a 10.100.0.3
Dec  6 02:43:38 np0005548731 nova_compute[232433]: 2025-12-06 07:43:38.190 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:43:38 np0005548731 nova_compute[232433]: 2025-12-06 07:43:38.296 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:43:38 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:43:38 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:43:38 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:43:38.387 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:43:38 np0005548731 podman[298765]: 2025-12-06 07:43:38.886421754 +0000 UTC m=+0.047986071 container health_status 26c2ce9bdb83c6db5ccad7f5b2a7cb4e9354ab0235ad2f942ea42c8feda2142b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent)
Dec  6 02:43:38 np0005548731 podman[298767]: 2025-12-06 07:43:38.940330837 +0000 UTC m=+0.094946515 container health_status ae4fd08cdec31d6ae2a6b91ffff7deb6ca5a8375cb52ee4b685cc6f25796824e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd)
Dec  6 02:43:38 np0005548731 podman[298766]: 2025-12-06 07:43:38.985813837 +0000 UTC m=+0.145093049 container health_status a118c3becf56cfbcae208065771ebf10a65162bb7b96389971293e534823fb78 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Dec  6 02:43:39 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:43:39 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:43:39 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:43:39.248 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:43:39 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e332 e332: 3 total, 3 up, 3 in
Dec  6 02:43:39 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:43:40 np0005548731 nova_compute[232433]: 2025-12-06 07:43:40.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:43:40 np0005548731 nova_compute[232433]: 2025-12-06 07:43:40.104 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Dec  6 02:43:40 np0005548731 nova_compute[232433]: 2025-12-06 07:43:40.125 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Dec  6 02:43:40 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:43:40 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:43:40 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:43:40.390 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:43:40 np0005548731 nova_compute[232433]: 2025-12-06 07:43:40.849 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:43:41 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:43:41 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:43:41 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:43:41.251 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:43:42 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:43:42 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:43:42 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:43:42.392 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:43:43 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:43:43 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:43:43 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:43:43.254 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:43:43 np0005548731 nova_compute[232433]: 2025-12-06 07:43:43.330 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:43:43 np0005548731 nova_compute[232433]: 2025-12-06 07:43:43.978 232437 DEBUG oslo_concurrency.lockutils [None req-8ec154d9-8256-425b-8bcc-293b014167b2 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] Acquiring lock "f67d89a8-836a-4f47-af8d-37cf99529275" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:43:43 np0005548731 nova_compute[232433]: 2025-12-06 07:43:43.979 232437 DEBUG oslo_concurrency.lockutils [None req-8ec154d9-8256-425b-8bcc-293b014167b2 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] Lock "f67d89a8-836a-4f47-af8d-37cf99529275" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:43:44 np0005548731 nova_compute[232433]: 2025-12-06 07:43:43.999 232437 DEBUG nova.compute.manager [None req-8ec154d9-8256-425b-8bcc-293b014167b2 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] [instance: f67d89a8-836a-4f47-af8d-37cf99529275] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Dec  6 02:43:44 np0005548731 nova_compute[232433]: 2025-12-06 07:43:44.085 232437 DEBUG oslo_concurrency.lockutils [None req-8ec154d9-8256-425b-8bcc-293b014167b2 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:43:44 np0005548731 nova_compute[232433]: 2025-12-06 07:43:44.086 232437 DEBUG oslo_concurrency.lockutils [None req-8ec154d9-8256-425b-8bcc-293b014167b2 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:43:44 np0005548731 nova_compute[232433]: 2025-12-06 07:43:44.094 232437 DEBUG nova.virt.hardware [None req-8ec154d9-8256-425b-8bcc-293b014167b2 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Dec  6 02:43:44 np0005548731 nova_compute[232433]: 2025-12-06 07:43:44.095 232437 INFO nova.compute.claims [None req-8ec154d9-8256-425b-8bcc-293b014167b2 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] [instance: f67d89a8-836a-4f47-af8d-37cf99529275] Claim successful on node compute-2.ctlplane.example.com#033[00m
Dec  6 02:43:44 np0005548731 nova_compute[232433]: 2025-12-06 07:43:44.202 232437 DEBUG oslo_concurrency.processutils [None req-8ec154d9-8256-425b-8bcc-293b014167b2 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:43:44 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:43:44 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:43:44 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:43:44.394 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:43:44 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 02:43:44 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3709914878' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 02:43:44 np0005548731 nova_compute[232433]: 2025-12-06 07:43:44.681 232437 DEBUG oslo_concurrency.processutils [None req-8ec154d9-8256-425b-8bcc-293b014167b2 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.478s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:43:44 np0005548731 nova_compute[232433]: 2025-12-06 07:43:44.686 232437 DEBUG nova.compute.provider_tree [None req-8ec154d9-8256-425b-8bcc-293b014167b2 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] Inventory has not changed in ProviderTree for provider: 6d00757a-082f-486d-ae84-869a2ba2e6e7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  6 02:43:44 np0005548731 nova_compute[232433]: 2025-12-06 07:43:44.774 232437 DEBUG nova.scheduler.client.report [None req-8ec154d9-8256-425b-8bcc-293b014167b2 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] Inventory has not changed for provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  6 02:43:44 np0005548731 nova_compute[232433]: 2025-12-06 07:43:44.914 232437 DEBUG oslo_concurrency.lockutils [None req-8ec154d9-8256-425b-8bcc-293b014167b2 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.828s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:43:44 np0005548731 nova_compute[232433]: 2025-12-06 07:43:44.915 232437 DEBUG nova.compute.manager [None req-8ec154d9-8256-425b-8bcc-293b014167b2 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] [instance: f67d89a8-836a-4f47-af8d-37cf99529275] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Dec  6 02:43:44 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:43:45 np0005548731 nova_compute[232433]: 2025-12-06 07:43:45.171 232437 DEBUG nova.compute.manager [None req-8ec154d9-8256-425b-8bcc-293b014167b2 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] [instance: f67d89a8-836a-4f47-af8d-37cf99529275] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Dec  6 02:43:45 np0005548731 nova_compute[232433]: 2025-12-06 07:43:45.172 232437 DEBUG nova.network.neutron [None req-8ec154d9-8256-425b-8bcc-293b014167b2 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] [instance: f67d89a8-836a-4f47-af8d-37cf99529275] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Dec  6 02:43:45 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:43:45 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:43:45 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:43:45.257 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:43:45 np0005548731 nova_compute[232433]: 2025-12-06 07:43:45.332 232437 INFO nova.virt.libvirt.driver [None req-8ec154d9-8256-425b-8bcc-293b014167b2 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] [instance: f67d89a8-836a-4f47-af8d-37cf99529275] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Dec  6 02:43:45 np0005548731 nova_compute[232433]: 2025-12-06 07:43:45.347 232437 DEBUG nova.compute.manager [None req-8ec154d9-8256-425b-8bcc-293b014167b2 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] [instance: f67d89a8-836a-4f47-af8d-37cf99529275] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Dec  6 02:43:45 np0005548731 nova_compute[232433]: 2025-12-06 07:43:45.579 232437 DEBUG nova.compute.manager [None req-8ec154d9-8256-425b-8bcc-293b014167b2 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] [instance: f67d89a8-836a-4f47-af8d-37cf99529275] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Dec  6 02:43:45 np0005548731 nova_compute[232433]: 2025-12-06 07:43:45.580 232437 DEBUG nova.virt.libvirt.driver [None req-8ec154d9-8256-425b-8bcc-293b014167b2 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] [instance: f67d89a8-836a-4f47-af8d-37cf99529275] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Dec  6 02:43:45 np0005548731 nova_compute[232433]: 2025-12-06 07:43:45.581 232437 INFO nova.virt.libvirt.driver [None req-8ec154d9-8256-425b-8bcc-293b014167b2 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] [instance: f67d89a8-836a-4f47-af8d-37cf99529275] Creating image(s)#033[00m
Dec  6 02:43:45 np0005548731 nova_compute[232433]: 2025-12-06 07:43:45.610 232437 DEBUG nova.storage.rbd_utils [None req-8ec154d9-8256-425b-8bcc-293b014167b2 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] rbd image f67d89a8-836a-4f47-af8d-37cf99529275_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:43:45 np0005548731 nova_compute[232433]: 2025-12-06 07:43:45.635 232437 DEBUG nova.storage.rbd_utils [None req-8ec154d9-8256-425b-8bcc-293b014167b2 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] rbd image f67d89a8-836a-4f47-af8d-37cf99529275_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:43:45 np0005548731 nova_compute[232433]: 2025-12-06 07:43:45.659 232437 DEBUG nova.storage.rbd_utils [None req-8ec154d9-8256-425b-8bcc-293b014167b2 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] rbd image f67d89a8-836a-4f47-af8d-37cf99529275_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:43:45 np0005548731 nova_compute[232433]: 2025-12-06 07:43:45.662 232437 DEBUG oslo_concurrency.processutils [None req-8ec154d9-8256-425b-8bcc-293b014167b2 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:43:45 np0005548731 nova_compute[232433]: 2025-12-06 07:43:45.732 232437 DEBUG oslo_concurrency.processutils [None req-8ec154d9-8256-425b-8bcc-293b014167b2 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json" returned: 0 in 0.070s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:43:45 np0005548731 nova_compute[232433]: 2025-12-06 07:43:45.733 232437 DEBUG oslo_concurrency.lockutils [None req-8ec154d9-8256-425b-8bcc-293b014167b2 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] Acquiring lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:43:45 np0005548731 nova_compute[232433]: 2025-12-06 07:43:45.734 232437 DEBUG oslo_concurrency.lockutils [None req-8ec154d9-8256-425b-8bcc-293b014167b2 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:43:45 np0005548731 nova_compute[232433]: 2025-12-06 07:43:45.734 232437 DEBUG oslo_concurrency.lockutils [None req-8ec154d9-8256-425b-8bcc-293b014167b2 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:43:45 np0005548731 nova_compute[232433]: 2025-12-06 07:43:45.757 232437 DEBUG nova.storage.rbd_utils [None req-8ec154d9-8256-425b-8bcc-293b014167b2 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] rbd image f67d89a8-836a-4f47-af8d-37cf99529275_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:43:45 np0005548731 nova_compute[232433]: 2025-12-06 07:43:45.760 232437 DEBUG oslo_concurrency.processutils [None req-8ec154d9-8256-425b-8bcc-293b014167b2 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef f67d89a8-836a-4f47-af8d-37cf99529275_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:43:45 np0005548731 nova_compute[232433]: 2025-12-06 07:43:45.852 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:43:46 np0005548731 nova_compute[232433]: 2025-12-06 07:43:46.177 232437 DEBUG nova.policy [None req-8ec154d9-8256-425b-8bcc-293b014167b2 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '1035ecd55ed54b57aa35fe32fb915cc5', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '4ec9294f6d4b4f44a72414374d646a4a', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Dec  6 02:43:46 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:43:46 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:43:46 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:43:46.396 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:43:46 np0005548731 nova_compute[232433]: 2025-12-06 07:43:46.994 232437 DEBUG nova.virt.libvirt.driver [None req-25c2a4a0-bdbc-4252-8cfe-e85144516653 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] [instance: 4c6cadbd-1c16-490c-aa8c-049b1595ca1b] Instance in state 1 after 21 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101#033[00m
Dec  6 02:43:47 np0005548731 nova_compute[232433]: 2025-12-06 07:43:47.020 232437 DEBUG oslo_concurrency.processutils [None req-8ec154d9-8256-425b-8bcc-293b014167b2 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef f67d89a8-836a-4f47-af8d-37cf99529275_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.260s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:43:47 np0005548731 nova_compute[232433]: 2025-12-06 07:43:47.114 232437 DEBUG nova.storage.rbd_utils [None req-8ec154d9-8256-425b-8bcc-293b014167b2 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] resizing rbd image f67d89a8-836a-4f47-af8d-37cf99529275_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Dec  6 02:43:47 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:43:47 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:43:47 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:43:47.260 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:43:48 np0005548731 nova_compute[232433]: 2025-12-06 07:43:48.130 232437 DEBUG nova.objects.instance [None req-8ec154d9-8256-425b-8bcc-293b014167b2 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] Lazy-loading 'migration_context' on Instance uuid f67d89a8-836a-4f47-af8d-37cf99529275 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:43:48 np0005548731 nova_compute[232433]: 2025-12-06 07:43:48.226 232437 DEBUG nova.virt.libvirt.driver [None req-8ec154d9-8256-425b-8bcc-293b014167b2 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] [instance: f67d89a8-836a-4f47-af8d-37cf99529275] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Dec  6 02:43:48 np0005548731 nova_compute[232433]: 2025-12-06 07:43:48.227 232437 DEBUG nova.virt.libvirt.driver [None req-8ec154d9-8256-425b-8bcc-293b014167b2 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] [instance: f67d89a8-836a-4f47-af8d-37cf99529275] Ensure instance console log exists: /var/lib/nova/instances/f67d89a8-836a-4f47-af8d-37cf99529275/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec  6 02:43:48 np0005548731 nova_compute[232433]: 2025-12-06 07:43:48.227 232437 DEBUG oslo_concurrency.lockutils [None req-8ec154d9-8256-425b-8bcc-293b014167b2 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:43:48 np0005548731 nova_compute[232433]: 2025-12-06 07:43:48.228 232437 DEBUG oslo_concurrency.lockutils [None req-8ec154d9-8256-425b-8bcc-293b014167b2 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:43:48 np0005548731 nova_compute[232433]: 2025-12-06 07:43:48.228 232437 DEBUG oslo_concurrency.lockutils [None req-8ec154d9-8256-425b-8bcc-293b014167b2 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:43:48 np0005548731 nova_compute[232433]: 2025-12-06 07:43:48.297 232437 DEBUG nova.network.neutron [None req-8ec154d9-8256-425b-8bcc-293b014167b2 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] [instance: f67d89a8-836a-4f47-af8d-37cf99529275] Successfully created port: d4087b80-19fa-44d3-8dd5-406c2b01b6f4 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Dec  6 02:43:48 np0005548731 nova_compute[232433]: 2025-12-06 07:43:48.334 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:43:48 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:43:48 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:43:48 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:43:48.398 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:43:49 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:43:49 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:43:49 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:43:49.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:43:49 np0005548731 kernel: tap59bb701f-91 (unregistering): left promiscuous mode
Dec  6 02:43:49 np0005548731 NetworkManager[49182]: <info>  [1765007029.5829] device (tap59bb701f-91): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec  6 02:43:49 np0005548731 nova_compute[232433]: 2025-12-06 07:43:49.591 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:43:49 np0005548731 ovn_controller[133927]: 2025-12-06T07:43:49Z|00698|binding|INFO|Releasing lport 59bb701f-9125-43d2-9023-6af0a1f18fea from this chassis (sb_readonly=0)
Dec  6 02:43:49 np0005548731 ovn_controller[133927]: 2025-12-06T07:43:49Z|00699|binding|INFO|Setting lport 59bb701f-9125-43d2-9023-6af0a1f18fea down in Southbound
Dec  6 02:43:49 np0005548731 ovn_controller[133927]: 2025-12-06T07:43:49Z|00700|binding|INFO|Removing iface tap59bb701f-91 ovn-installed in OVS
Dec  6 02:43:49 np0005548731 nova_compute[232433]: 2025-12-06 07:43:49.594 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:43:49 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:43:49.606 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e0:ff:4a 10.100.0.3'], port_security=['fa:16:3e:e0:ff:4a 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '4c6cadbd-1c16-490c-aa8c-049b1595ca1b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-35a27638-382c-4afb-83b0-edd6d7f4bca8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'eff1f6a1654b45079de20eddb830e76d', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'b5b8e710-017e-4606-9067-bf1900949ed3', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=80d3c5d2-eecc-4e72-bceb-41384af759f0, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], logical_port=59bb701f-9125-43d2-9023-6af0a1f18fea) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 02:43:49 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:43:49.608 143965 INFO neutron.agent.ovn.metadata.agent [-] Port 59bb701f-9125-43d2-9023-6af0a1f18fea in datapath 35a27638-382c-4afb-83b0-edd6d7f4bca8 unbound from our chassis#033[00m
Dec  6 02:43:49 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:43:49.609 143965 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 35a27638-382c-4afb-83b0-edd6d7f4bca8, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec  6 02:43:49 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:43:49.611 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[c1664886-460f-4474-bd37-9f19cd35ce2c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:43:49 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:43:49.611 143965 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-35a27638-382c-4afb-83b0-edd6d7f4bca8 namespace which is not needed anymore#033[00m
Dec  6 02:43:49 np0005548731 nova_compute[232433]: 2025-12-06 07:43:49.614 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:43:49 np0005548731 systemd[1]: machine-qemu\x2d70\x2dinstance\x2d00000097.scope: Deactivated successfully.
Dec  6 02:43:49 np0005548731 systemd[1]: machine-qemu\x2d70\x2dinstance\x2d00000097.scope: Consumed 14.552s CPU time.
Dec  6 02:43:49 np0005548731 systemd-machined[195355]: Machine qemu-70-instance-00000097 terminated.
Dec  6 02:43:49 np0005548731 neutron-haproxy-ovnmeta-35a27638-382c-4afb-83b0-edd6d7f4bca8[298717]: [NOTICE]   (298721) : haproxy version is 2.8.14-c23fe91
Dec  6 02:43:49 np0005548731 neutron-haproxy-ovnmeta-35a27638-382c-4afb-83b0-edd6d7f4bca8[298717]: [NOTICE]   (298721) : path to executable is /usr/sbin/haproxy
Dec  6 02:43:49 np0005548731 neutron-haproxy-ovnmeta-35a27638-382c-4afb-83b0-edd6d7f4bca8[298717]: [WARNING]  (298721) : Exiting Master process...
Dec  6 02:43:49 np0005548731 neutron-haproxy-ovnmeta-35a27638-382c-4afb-83b0-edd6d7f4bca8[298717]: [ALERT]    (298721) : Current worker (298723) exited with code 143 (Terminated)
Dec  6 02:43:49 np0005548731 neutron-haproxy-ovnmeta-35a27638-382c-4afb-83b0-edd6d7f4bca8[298717]: [WARNING]  (298721) : All workers exited. Exiting... (0)
Dec  6 02:43:49 np0005548731 systemd[1]: libpod-8a53cb9cad1e564374e58e115081483925e265d9474b01437901806ad334e13c.scope: Deactivated successfully.
Dec  6 02:43:49 np0005548731 podman[299107]: 2025-12-06 07:43:49.746794973 +0000 UTC m=+0.041381380 container died 8a53cb9cad1e564374e58e115081483925e265d9474b01437901806ad334e13c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-35a27638-382c-4afb-83b0-edd6d7f4bca8, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125)
Dec  6 02:43:49 np0005548731 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-8a53cb9cad1e564374e58e115081483925e265d9474b01437901806ad334e13c-userdata-shm.mount: Deactivated successfully.
Dec  6 02:43:49 np0005548731 systemd[1]: var-lib-containers-storage-overlay-76fe1bcbce48424da29d12c18955903a64bd43447623e964696dedd86c8d1828-merged.mount: Deactivated successfully.
Dec  6 02:43:49 np0005548731 podman[299107]: 2025-12-06 07:43:49.776720133 +0000 UTC m=+0.071306550 container cleanup 8a53cb9cad1e564374e58e115081483925e265d9474b01437901806ad334e13c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-35a27638-382c-4afb-83b0-edd6d7f4bca8, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  6 02:43:49 np0005548731 systemd[1]: libpod-conmon-8a53cb9cad1e564374e58e115081483925e265d9474b01437901806ad334e13c.scope: Deactivated successfully.
Dec  6 02:43:49 np0005548731 podman[299138]: 2025-12-06 07:43:49.846634597 +0000 UTC m=+0.049940189 container remove 8a53cb9cad1e564374e58e115081483925e265d9474b01437901806ad334e13c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-35a27638-382c-4afb-83b0-edd6d7f4bca8, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3)
Dec  6 02:43:49 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:43:49.855 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[9f52b0d2-1155-4a99-913e-ad7e051dcaf2]: (4, ('Sat Dec  6 07:43:49 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-35a27638-382c-4afb-83b0-edd6d7f4bca8 (8a53cb9cad1e564374e58e115081483925e265d9474b01437901806ad334e13c)\n8a53cb9cad1e564374e58e115081483925e265d9474b01437901806ad334e13c\nSat Dec  6 07:43:49 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-35a27638-382c-4afb-83b0-edd6d7f4bca8 (8a53cb9cad1e564374e58e115081483925e265d9474b01437901806ad334e13c)\n8a53cb9cad1e564374e58e115081483925e265d9474b01437901806ad334e13c\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:43:49 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:43:49.858 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[f977caed-cf4d-4287-b61e-31ebae9b2980]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:43:49 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:43:49.859 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap35a27638-30, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:43:49 np0005548731 kernel: tap35a27638-30: left promiscuous mode
Dec  6 02:43:49 np0005548731 nova_compute[232433]: 2025-12-06 07:43:49.861 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:43:49 np0005548731 nova_compute[232433]: 2025-12-06 07:43:49.878 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:43:49 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:43:49.881 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[c03fc64c-d1b7-4aa7-8ee0-458d09df8acd]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:43:49 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:43:49.896 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[9eb604f4-1de3-4c33-b231-18e471aee1d2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:43:49 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:43:49.898 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[b7d847a0-2c9b-4200-b36c-413863175708]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:43:49 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:43:49.915 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[4861396c-4e06-456e-a62d-af7443557d1c]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 726401, 'reachable_time': 35809, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 299168, 'error': None, 'target': 'ovnmeta-35a27638-382c-4afb-83b0-edd6d7f4bca8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:43:49 np0005548731 systemd[1]: run-netns-ovnmeta\x2d35a27638\x2d382c\x2d4afb\x2d83b0\x2dedd6d7f4bca8.mount: Deactivated successfully.
Dec  6 02:43:49 np0005548731 nova_compute[232433]: 2025-12-06 07:43:49.918 232437 DEBUG nova.compute.manager [req-653a8acb-87f7-43e5-aeb3-57c9907d9b03 req-4c18471a-8e32-4ef8-9413-ca7d81dd840b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 4c6cadbd-1c16-490c-aa8c-049b1595ca1b] Received event network-vif-unplugged-59bb701f-9125-43d2-9023-6af0a1f18fea external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:43:49 np0005548731 nova_compute[232433]: 2025-12-06 07:43:49.919 232437 DEBUG oslo_concurrency.lockutils [req-653a8acb-87f7-43e5-aeb3-57c9907d9b03 req-4c18471a-8e32-4ef8-9413-ca7d81dd840b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "4c6cadbd-1c16-490c-aa8c-049b1595ca1b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:43:49 np0005548731 nova_compute[232433]: 2025-12-06 07:43:49.919 232437 DEBUG oslo_concurrency.lockutils [req-653a8acb-87f7-43e5-aeb3-57c9907d9b03 req-4c18471a-8e32-4ef8-9413-ca7d81dd840b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "4c6cadbd-1c16-490c-aa8c-049b1595ca1b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:43:49 np0005548731 nova_compute[232433]: 2025-12-06 07:43:49.919 232437 DEBUG oslo_concurrency.lockutils [req-653a8acb-87f7-43e5-aeb3-57c9907d9b03 req-4c18471a-8e32-4ef8-9413-ca7d81dd840b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "4c6cadbd-1c16-490c-aa8c-049b1595ca1b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:43:49 np0005548731 nova_compute[232433]: 2025-12-06 07:43:49.919 232437 DEBUG nova.compute.manager [req-653a8acb-87f7-43e5-aeb3-57c9907d9b03 req-4c18471a-8e32-4ef8-9413-ca7d81dd840b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 4c6cadbd-1c16-490c-aa8c-049b1595ca1b] No waiting events found dispatching network-vif-unplugged-59bb701f-9125-43d2-9023-6af0a1f18fea pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 02:43:49 np0005548731 nova_compute[232433]: 2025-12-06 07:43:49.919 232437 WARNING nova.compute.manager [req-653a8acb-87f7-43e5-aeb3-57c9907d9b03 req-4c18471a-8e32-4ef8-9413-ca7d81dd840b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 4c6cadbd-1c16-490c-aa8c-049b1595ca1b] Received unexpected event network-vif-unplugged-59bb701f-9125-43d2-9023-6af0a1f18fea for instance with vm_state active and task_state powering-off.#033[00m
Dec  6 02:43:49 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:43:49.919 144079 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-35a27638-382c-4afb-83b0-edd6d7f4bca8 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Dec  6 02:43:49 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:43:49.920 144079 DEBUG oslo.privsep.daemon [-] privsep: reply[533dd26d-98f8-44a9-bebc-6671a33146a9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:43:49 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:43:50 np0005548731 nova_compute[232433]: 2025-12-06 07:43:50.008 232437 INFO nova.virt.libvirt.driver [None req-25c2a4a0-bdbc-4252-8cfe-e85144516653 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] [instance: 4c6cadbd-1c16-490c-aa8c-049b1595ca1b] Instance shutdown successfully after 24 seconds.#033[00m
Dec  6 02:43:50 np0005548731 nova_compute[232433]: 2025-12-06 07:43:50.012 232437 INFO nova.virt.libvirt.driver [-] [instance: 4c6cadbd-1c16-490c-aa8c-049b1595ca1b] Instance destroyed successfully.#033[00m
Dec  6 02:43:50 np0005548731 nova_compute[232433]: 2025-12-06 07:43:50.012 232437 DEBUG nova.objects.instance [None req-25c2a4a0-bdbc-4252-8cfe-e85144516653 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Lazy-loading 'numa_topology' on Instance uuid 4c6cadbd-1c16-490c-aa8c-049b1595ca1b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:43:50 np0005548731 nova_compute[232433]: 2025-12-06 07:43:50.056 232437 DEBUG nova.compute.manager [None req-25c2a4a0-bdbc-4252-8cfe-e85144516653 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] [instance: 4c6cadbd-1c16-490c-aa8c-049b1595ca1b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:43:50 np0005548731 nova_compute[232433]: 2025-12-06 07:43:50.269 232437 DEBUG nova.network.neutron [None req-8ec154d9-8256-425b-8bcc-293b014167b2 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] [instance: f67d89a8-836a-4f47-af8d-37cf99529275] Successfully updated port: d4087b80-19fa-44d3-8dd5-406c2b01b6f4 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Dec  6 02:43:50 np0005548731 nova_compute[232433]: 2025-12-06 07:43:50.272 232437 DEBUG oslo_concurrency.lockutils [None req-25c2a4a0-bdbc-4252-8cfe-e85144516653 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Lock "4c6cadbd-1c16-490c-aa8c-049b1595ca1b" "released" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: held 24.397s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:43:50 np0005548731 nova_compute[232433]: 2025-12-06 07:43:50.298 232437 DEBUG oslo_concurrency.lockutils [None req-8ec154d9-8256-425b-8bcc-293b014167b2 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] Acquiring lock "refresh_cache-f67d89a8-836a-4f47-af8d-37cf99529275" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 02:43:50 np0005548731 nova_compute[232433]: 2025-12-06 07:43:50.299 232437 DEBUG oslo_concurrency.lockutils [None req-8ec154d9-8256-425b-8bcc-293b014167b2 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] Acquired lock "refresh_cache-f67d89a8-836a-4f47-af8d-37cf99529275" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 02:43:50 np0005548731 nova_compute[232433]: 2025-12-06 07:43:50.299 232437 DEBUG nova.network.neutron [None req-8ec154d9-8256-425b-8bcc-293b014167b2 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] [instance: f67d89a8-836a-4f47-af8d-37cf99529275] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec  6 02:43:50 np0005548731 nova_compute[232433]: 2025-12-06 07:43:50.386 232437 DEBUG nova.compute.manager [req-c579d576-5713-4e16-914f-996b5312c410 req-df18b5a0-2ddd-43ab-aa79-dbe669fe3d27 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f67d89a8-836a-4f47-af8d-37cf99529275] Received event network-changed-d4087b80-19fa-44d3-8dd5-406c2b01b6f4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:43:50 np0005548731 nova_compute[232433]: 2025-12-06 07:43:50.387 232437 DEBUG nova.compute.manager [req-c579d576-5713-4e16-914f-996b5312c410 req-df18b5a0-2ddd-43ab-aa79-dbe669fe3d27 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f67d89a8-836a-4f47-af8d-37cf99529275] Refreshing instance network info cache due to event network-changed-d4087b80-19fa-44d3-8dd5-406c2b01b6f4. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  6 02:43:50 np0005548731 nova_compute[232433]: 2025-12-06 07:43:50.387 232437 DEBUG oslo_concurrency.lockutils [req-c579d576-5713-4e16-914f-996b5312c410 req-df18b5a0-2ddd-43ab-aa79-dbe669fe3d27 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-f67d89a8-836a-4f47-af8d-37cf99529275" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 02:43:50 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:43:50 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:43:50 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:43:50.400 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:43:50 np0005548731 nova_compute[232433]: 2025-12-06 07:43:50.855 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:43:51 np0005548731 nova_compute[232433]: 2025-12-06 07:43:51.110 232437 DEBUG nova.network.neutron [None req-8ec154d9-8256-425b-8bcc-293b014167b2 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] [instance: f67d89a8-836a-4f47-af8d-37cf99529275] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec  6 02:43:51 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:43:51 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:43:51 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:43:51.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:43:51 np0005548731 nova_compute[232433]: 2025-12-06 07:43:51.270 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:43:51 np0005548731 nova_compute[232433]: 2025-12-06 07:43:51.303 232437 WARNING nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] While synchronizing instance power states, found 2 instances in the database and 1 instances on the hypervisor.#033[00m
Dec  6 02:43:51 np0005548731 nova_compute[232433]: 2025-12-06 07:43:51.303 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Triggering sync for uuid 4c6cadbd-1c16-490c-aa8c-049b1595ca1b _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m
Dec  6 02:43:51 np0005548731 nova_compute[232433]: 2025-12-06 07:43:51.304 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Triggering sync for uuid f67d89a8-836a-4f47-af8d-37cf99529275 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m
Dec  6 02:43:51 np0005548731 nova_compute[232433]: 2025-12-06 07:43:51.304 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Acquiring lock "4c6cadbd-1c16-490c-aa8c-049b1595ca1b" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:43:51 np0005548731 nova_compute[232433]: 2025-12-06 07:43:51.305 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "4c6cadbd-1c16-490c-aa8c-049b1595ca1b" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:43:51 np0005548731 nova_compute[232433]: 2025-12-06 07:43:51.306 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Acquiring lock "f67d89a8-836a-4f47-af8d-37cf99529275" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:43:51 np0005548731 nova_compute[232433]: 2025-12-06 07:43:51.418 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "4c6cadbd-1c16-490c-aa8c-049b1595ca1b" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.113s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:43:52 np0005548731 nova_compute[232433]: 2025-12-06 07:43:52.303 232437 DEBUG nova.compute.manager [req-92724983-eecb-45d2-98fa-f6793be76b62 req-dd108d97-fb6a-4eff-bbc6-d72df6fb05e4 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 4c6cadbd-1c16-490c-aa8c-049b1595ca1b] Received event network-vif-plugged-59bb701f-9125-43d2-9023-6af0a1f18fea external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:43:52 np0005548731 nova_compute[232433]: 2025-12-06 07:43:52.303 232437 DEBUG oslo_concurrency.lockutils [req-92724983-eecb-45d2-98fa-f6793be76b62 req-dd108d97-fb6a-4eff-bbc6-d72df6fb05e4 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "4c6cadbd-1c16-490c-aa8c-049b1595ca1b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:43:52 np0005548731 nova_compute[232433]: 2025-12-06 07:43:52.303 232437 DEBUG oslo_concurrency.lockutils [req-92724983-eecb-45d2-98fa-f6793be76b62 req-dd108d97-fb6a-4eff-bbc6-d72df6fb05e4 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "4c6cadbd-1c16-490c-aa8c-049b1595ca1b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:43:52 np0005548731 nova_compute[232433]: 2025-12-06 07:43:52.304 232437 DEBUG oslo_concurrency.lockutils [req-92724983-eecb-45d2-98fa-f6793be76b62 req-dd108d97-fb6a-4eff-bbc6-d72df6fb05e4 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "4c6cadbd-1c16-490c-aa8c-049b1595ca1b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:43:52 np0005548731 nova_compute[232433]: 2025-12-06 07:43:52.304 232437 DEBUG nova.compute.manager [req-92724983-eecb-45d2-98fa-f6793be76b62 req-dd108d97-fb6a-4eff-bbc6-d72df6fb05e4 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 4c6cadbd-1c16-490c-aa8c-049b1595ca1b] No waiting events found dispatching network-vif-plugged-59bb701f-9125-43d2-9023-6af0a1f18fea pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 02:43:52 np0005548731 nova_compute[232433]: 2025-12-06 07:43:52.304 232437 WARNING nova.compute.manager [req-92724983-eecb-45d2-98fa-f6793be76b62 req-dd108d97-fb6a-4eff-bbc6-d72df6fb05e4 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 4c6cadbd-1c16-490c-aa8c-049b1595ca1b] Received unexpected event network-vif-plugged-59bb701f-9125-43d2-9023-6af0a1f18fea for instance with vm_state stopped and task_state None.#033[00m
Dec  6 02:43:52 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:43:52 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:43:52 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:43:52.402 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:43:53 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:43:53 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:43:53 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:43:53.272 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:43:53 np0005548731 nova_compute[232433]: 2025-12-06 07:43:53.336 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:43:53 np0005548731 nova_compute[232433]: 2025-12-06 07:43:53.760 232437 DEBUG nova.network.neutron [None req-8ec154d9-8256-425b-8bcc-293b014167b2 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] [instance: f67d89a8-836a-4f47-af8d-37cf99529275] Updating instance_info_cache with network_info: [{"id": "d4087b80-19fa-44d3-8dd5-406c2b01b6f4", "address": "fa:16:3e:16:41:1e", "network": {"id": "67a02abd-6f15-4e26-ba0d-8a091ca98239", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-840496237-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4ec9294f6d4b4f44a72414374d646a4a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd4087b80-19", "ovs_interfaceid": "d4087b80-19fa-44d3-8dd5-406c2b01b6f4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 02:43:53 np0005548731 nova_compute[232433]: 2025-12-06 07:43:53.788 232437 DEBUG oslo_concurrency.lockutils [None req-8ec154d9-8256-425b-8bcc-293b014167b2 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] Releasing lock "refresh_cache-f67d89a8-836a-4f47-af8d-37cf99529275" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 02:43:53 np0005548731 nova_compute[232433]: 2025-12-06 07:43:53.789 232437 DEBUG nova.compute.manager [None req-8ec154d9-8256-425b-8bcc-293b014167b2 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] [instance: f67d89a8-836a-4f47-af8d-37cf99529275] Instance network_info: |[{"id": "d4087b80-19fa-44d3-8dd5-406c2b01b6f4", "address": "fa:16:3e:16:41:1e", "network": {"id": "67a02abd-6f15-4e26-ba0d-8a091ca98239", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-840496237-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4ec9294f6d4b4f44a72414374d646a4a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd4087b80-19", "ovs_interfaceid": "d4087b80-19fa-44d3-8dd5-406c2b01b6f4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Dec  6 02:43:53 np0005548731 nova_compute[232433]: 2025-12-06 07:43:53.789 232437 DEBUG oslo_concurrency.lockutils [req-c579d576-5713-4e16-914f-996b5312c410 req-df18b5a0-2ddd-43ab-aa79-dbe669fe3d27 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-f67d89a8-836a-4f47-af8d-37cf99529275" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 02:43:53 np0005548731 nova_compute[232433]: 2025-12-06 07:43:53.790 232437 DEBUG nova.network.neutron [req-c579d576-5713-4e16-914f-996b5312c410 req-df18b5a0-2ddd-43ab-aa79-dbe669fe3d27 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f67d89a8-836a-4f47-af8d-37cf99529275] Refreshing network info cache for port d4087b80-19fa-44d3-8dd5-406c2b01b6f4 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  6 02:43:53 np0005548731 nova_compute[232433]: 2025-12-06 07:43:53.795 232437 DEBUG nova.virt.libvirt.driver [None req-8ec154d9-8256-425b-8bcc-293b014167b2 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] [instance: f67d89a8-836a-4f47-af8d-37cf99529275] Start _get_guest_xml network_info=[{"id": "d4087b80-19fa-44d3-8dd5-406c2b01b6f4", "address": "fa:16:3e:16:41:1e", "network": {"id": "67a02abd-6f15-4e26-ba0d-8a091ca98239", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-840496237-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4ec9294f6d4b4f44a72414374d646a4a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd4087b80-19", "ovs_interfaceid": "d4087b80-19fa-44d3-8dd5-406c2b01b6f4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'device_type': 'disk', 'encryption_secret_uuid': None, 'disk_bus': 'virtio', 'boot_index': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'encryption_options': None, 'size': 0, 'encrypted': False, 'image_id': '6efab05d-c7cf-4770-a5c3-c806a2739063'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec  6 02:43:53 np0005548731 nova_compute[232433]: 2025-12-06 07:43:53.802 232437 WARNING nova.virt.libvirt.driver [None req-8ec154d9-8256-425b-8bcc-293b014167b2 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  6 02:43:53 np0005548731 nova_compute[232433]: 2025-12-06 07:43:53.815 232437 DEBUG nova.virt.libvirt.host [None req-8ec154d9-8256-425b-8bcc-293b014167b2 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec  6 02:43:53 np0005548731 nova_compute[232433]: 2025-12-06 07:43:53.817 232437 DEBUG nova.virt.libvirt.host [None req-8ec154d9-8256-425b-8bcc-293b014167b2 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec  6 02:43:53 np0005548731 nova_compute[232433]: 2025-12-06 07:43:53.832 232437 DEBUG nova.virt.libvirt.host [None req-8ec154d9-8256-425b-8bcc-293b014167b2 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec  6 02:43:53 np0005548731 nova_compute[232433]: 2025-12-06 07:43:53.833 232437 DEBUG nova.virt.libvirt.host [None req-8ec154d9-8256-425b-8bcc-293b014167b2 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec  6 02:43:53 np0005548731 nova_compute[232433]: 2025-12-06 07:43:53.835 232437 DEBUG nova.virt.libvirt.driver [None req-8ec154d9-8256-425b-8bcc-293b014167b2 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec  6 02:43:53 np0005548731 nova_compute[232433]: 2025-12-06 07:43:53.835 232437 DEBUG nova.virt.hardware [None req-8ec154d9-8256-425b-8bcc-293b014167b2 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-06T06:56:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='25848a18-11d9-4f11-80b5-5d005675c76d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec  6 02:43:53 np0005548731 nova_compute[232433]: 2025-12-06 07:43:53.836 232437 DEBUG nova.virt.hardware [None req-8ec154d9-8256-425b-8bcc-293b014167b2 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec  6 02:43:53 np0005548731 nova_compute[232433]: 2025-12-06 07:43:53.837 232437 DEBUG nova.virt.hardware [None req-8ec154d9-8256-425b-8bcc-293b014167b2 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec  6 02:43:53 np0005548731 nova_compute[232433]: 2025-12-06 07:43:53.837 232437 DEBUG nova.virt.hardware [None req-8ec154d9-8256-425b-8bcc-293b014167b2 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec  6 02:43:53 np0005548731 nova_compute[232433]: 2025-12-06 07:43:53.837 232437 DEBUG nova.virt.hardware [None req-8ec154d9-8256-425b-8bcc-293b014167b2 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec  6 02:43:53 np0005548731 nova_compute[232433]: 2025-12-06 07:43:53.838 232437 DEBUG nova.virt.hardware [None req-8ec154d9-8256-425b-8bcc-293b014167b2 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec  6 02:43:53 np0005548731 nova_compute[232433]: 2025-12-06 07:43:53.838 232437 DEBUG nova.virt.hardware [None req-8ec154d9-8256-425b-8bcc-293b014167b2 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec  6 02:43:53 np0005548731 nova_compute[232433]: 2025-12-06 07:43:53.838 232437 DEBUG nova.virt.hardware [None req-8ec154d9-8256-425b-8bcc-293b014167b2 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec  6 02:43:53 np0005548731 nova_compute[232433]: 2025-12-06 07:43:53.839 232437 DEBUG nova.virt.hardware [None req-8ec154d9-8256-425b-8bcc-293b014167b2 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec  6 02:43:53 np0005548731 nova_compute[232433]: 2025-12-06 07:43:53.839 232437 DEBUG nova.virt.hardware [None req-8ec154d9-8256-425b-8bcc-293b014167b2 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec  6 02:43:53 np0005548731 nova_compute[232433]: 2025-12-06 07:43:53.839 232437 DEBUG nova.virt.hardware [None req-8ec154d9-8256-425b-8bcc-293b014167b2 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec  6 02:43:53 np0005548731 nova_compute[232433]: 2025-12-06 07:43:53.845 232437 DEBUG oslo_concurrency.processutils [None req-8ec154d9-8256-425b-8bcc-293b014167b2 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:43:54 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  6 02:43:54 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2032903449' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  6 02:43:54 np0005548731 nova_compute[232433]: 2025-12-06 07:43:54.299 232437 DEBUG oslo_concurrency.lockutils [None req-899d69a2-d40f-4b21-8c7d-298c84fd4677 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Acquiring lock "4c6cadbd-1c16-490c-aa8c-049b1595ca1b" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:43:54 np0005548731 nova_compute[232433]: 2025-12-06 07:43:54.300 232437 DEBUG oslo_concurrency.lockutils [None req-899d69a2-d40f-4b21-8c7d-298c84fd4677 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Lock "4c6cadbd-1c16-490c-aa8c-049b1595ca1b" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:43:54 np0005548731 nova_compute[232433]: 2025-12-06 07:43:54.300 232437 DEBUG oslo_concurrency.lockutils [None req-899d69a2-d40f-4b21-8c7d-298c84fd4677 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Acquiring lock "4c6cadbd-1c16-490c-aa8c-049b1595ca1b-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:43:54 np0005548731 nova_compute[232433]: 2025-12-06 07:43:54.300 232437 DEBUG oslo_concurrency.lockutils [None req-899d69a2-d40f-4b21-8c7d-298c84fd4677 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Lock "4c6cadbd-1c16-490c-aa8c-049b1595ca1b-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:43:54 np0005548731 nova_compute[232433]: 2025-12-06 07:43:54.301 232437 DEBUG oslo_concurrency.lockutils [None req-899d69a2-d40f-4b21-8c7d-298c84fd4677 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Lock "4c6cadbd-1c16-490c-aa8c-049b1595ca1b-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:43:54 np0005548731 nova_compute[232433]: 2025-12-06 07:43:54.302 232437 INFO nova.compute.manager [None req-899d69a2-d40f-4b21-8c7d-298c84fd4677 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] [instance: 4c6cadbd-1c16-490c-aa8c-049b1595ca1b] Terminating instance#033[00m
Dec  6 02:43:54 np0005548731 nova_compute[232433]: 2025-12-06 07:43:54.303 232437 DEBUG nova.compute.manager [None req-899d69a2-d40f-4b21-8c7d-298c84fd4677 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] [instance: 4c6cadbd-1c16-490c-aa8c-049b1595ca1b] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec  6 02:43:54 np0005548731 nova_compute[232433]: 2025-12-06 07:43:54.309 232437 INFO nova.virt.libvirt.driver [-] [instance: 4c6cadbd-1c16-490c-aa8c-049b1595ca1b] Instance destroyed successfully.#033[00m
Dec  6 02:43:54 np0005548731 nova_compute[232433]: 2025-12-06 07:43:54.309 232437 DEBUG nova.objects.instance [None req-899d69a2-d40f-4b21-8c7d-298c84fd4677 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Lazy-loading 'resources' on Instance uuid 4c6cadbd-1c16-490c-aa8c-049b1595ca1b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:43:54 np0005548731 nova_compute[232433]: 2025-12-06 07:43:54.323 232437 DEBUG nova.virt.libvirt.vif [None req-899d69a2-d40f-4b21-8c7d-298c84fd4677 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T07:43:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestJSON-server-616994635',display_name='tempest-Íñstáñcé-1615481276',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serverstestjson-server-616994635',id=151,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-06T07:43:22Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='eff1f6a1654b45079de20eddb830e76d',ramdisk_id='',reservation_id='r-261ay309',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',ow
ner_project_name='tempest-ServersTestJSON-374151197',owner_user_name='tempest-ServersTestJSON-374151197-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T07:43:52Z,user_data=None,user_id='0d8b62a3276f4a8b8349af67b82134c8',uuid=4c6cadbd-1c16-490c-aa8c-049b1595ca1b,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "59bb701f-9125-43d2-9023-6af0a1f18fea", "address": "fa:16:3e:e0:ff:4a", "network": {"id": "35a27638-382c-4afb-83b0-edd6d7f4bca8", "bridge": "br-int", "label": "tempest-ServersTestJSON-1603796324-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eff1f6a1654b45079de20eddb830e76d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap59bb701f-91", "ovs_interfaceid": "59bb701f-9125-43d2-9023-6af0a1f18fea", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec  6 02:43:54 np0005548731 nova_compute[232433]: 2025-12-06 07:43:54.324 232437 DEBUG nova.network.os_vif_util [None req-899d69a2-d40f-4b21-8c7d-298c84fd4677 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Converting VIF {"id": "59bb701f-9125-43d2-9023-6af0a1f18fea", "address": "fa:16:3e:e0:ff:4a", "network": {"id": "35a27638-382c-4afb-83b0-edd6d7f4bca8", "bridge": "br-int", "label": "tempest-ServersTestJSON-1603796324-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eff1f6a1654b45079de20eddb830e76d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap59bb701f-91", "ovs_interfaceid": "59bb701f-9125-43d2-9023-6af0a1f18fea", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 02:43:54 np0005548731 nova_compute[232433]: 2025-12-06 07:43:54.325 232437 DEBUG nova.network.os_vif_util [None req-899d69a2-d40f-4b21-8c7d-298c84fd4677 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e0:ff:4a,bridge_name='br-int',has_traffic_filtering=True,id=59bb701f-9125-43d2-9023-6af0a1f18fea,network=Network(35a27638-382c-4afb-83b0-edd6d7f4bca8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap59bb701f-91') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 02:43:54 np0005548731 nova_compute[232433]: 2025-12-06 07:43:54.325 232437 DEBUG os_vif [None req-899d69a2-d40f-4b21-8c7d-298c84fd4677 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e0:ff:4a,bridge_name='br-int',has_traffic_filtering=True,id=59bb701f-9125-43d2-9023-6af0a1f18fea,network=Network(35a27638-382c-4afb-83b0-edd6d7f4bca8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap59bb701f-91') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec  6 02:43:54 np0005548731 nova_compute[232433]: 2025-12-06 07:43:54.327 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:43:54 np0005548731 nova_compute[232433]: 2025-12-06 07:43:54.328 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap59bb701f-91, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:43:54 np0005548731 nova_compute[232433]: 2025-12-06 07:43:54.333 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:43:54 np0005548731 nova_compute[232433]: 2025-12-06 07:43:54.334 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  6 02:43:54 np0005548731 nova_compute[232433]: 2025-12-06 07:43:54.342 232437 INFO os_vif [None req-899d69a2-d40f-4b21-8c7d-298c84fd4677 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e0:ff:4a,bridge_name='br-int',has_traffic_filtering=True,id=59bb701f-9125-43d2-9023-6af0a1f18fea,network=Network(35a27638-382c-4afb-83b0-edd6d7f4bca8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap59bb701f-91')#033[00m
Dec  6 02:43:54 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:43:54 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:43:54 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:43:54.404 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:43:54 np0005548731 nova_compute[232433]: 2025-12-06 07:43:54.626 232437 DEBUG oslo_concurrency.processutils [None req-8ec154d9-8256-425b-8bcc-293b014167b2 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.782s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:43:54 np0005548731 nova_compute[232433]: 2025-12-06 07:43:54.653 232437 DEBUG nova.storage.rbd_utils [None req-8ec154d9-8256-425b-8bcc-293b014167b2 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] rbd image f67d89a8-836a-4f47-af8d-37cf99529275_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:43:54 np0005548731 nova_compute[232433]: 2025-12-06 07:43:54.658 232437 DEBUG oslo_concurrency.processutils [None req-8ec154d9-8256-425b-8bcc-293b014167b2 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:43:54 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:43:55 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  6 02:43:55 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3309694968' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  6 02:43:55 np0005548731 nova_compute[232433]: 2025-12-06 07:43:55.096 232437 DEBUG oslo_concurrency.processutils [None req-8ec154d9-8256-425b-8bcc-293b014167b2 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.438s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:43:55 np0005548731 nova_compute[232433]: 2025-12-06 07:43:55.098 232437 DEBUG nova.virt.libvirt.vif [None req-8ec154d9-8256-425b-8bcc-293b014167b2 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T07:43:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersNegativeTestJSON-server-2113241509',display_name='tempest-ServersNegativeTestJSON-server-2113241509',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serversnegativetestjson-server-2113241509',id=153,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4ec9294f6d4b4f44a72414374d646a4a',ramdisk_id='',reservation_id='r-r3odtvmp',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersNegativeTestJSON-776446295',owner_user_name='tempest-ServersNega
tiveTestJSON-776446295-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:43:45Z,user_data=None,user_id='1035ecd55ed54b57aa35fe32fb915cc5',uuid=f67d89a8-836a-4f47-af8d-37cf99529275,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d4087b80-19fa-44d3-8dd5-406c2b01b6f4", "address": "fa:16:3e:16:41:1e", "network": {"id": "67a02abd-6f15-4e26-ba0d-8a091ca98239", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-840496237-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4ec9294f6d4b4f44a72414374d646a4a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd4087b80-19", "ovs_interfaceid": "d4087b80-19fa-44d3-8dd5-406c2b01b6f4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec  6 02:43:55 np0005548731 nova_compute[232433]: 2025-12-06 07:43:55.099 232437 DEBUG nova.network.os_vif_util [None req-8ec154d9-8256-425b-8bcc-293b014167b2 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] Converting VIF {"id": "d4087b80-19fa-44d3-8dd5-406c2b01b6f4", "address": "fa:16:3e:16:41:1e", "network": {"id": "67a02abd-6f15-4e26-ba0d-8a091ca98239", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-840496237-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4ec9294f6d4b4f44a72414374d646a4a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd4087b80-19", "ovs_interfaceid": "d4087b80-19fa-44d3-8dd5-406c2b01b6f4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 02:43:55 np0005548731 nova_compute[232433]: 2025-12-06 07:43:55.100 232437 DEBUG nova.network.os_vif_util [None req-8ec154d9-8256-425b-8bcc-293b014167b2 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:16:41:1e,bridge_name='br-int',has_traffic_filtering=True,id=d4087b80-19fa-44d3-8dd5-406c2b01b6f4,network=Network(67a02abd-6f15-4e26-ba0d-8a091ca98239),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd4087b80-19') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 02:43:55 np0005548731 nova_compute[232433]: 2025-12-06 07:43:55.101 232437 DEBUG nova.objects.instance [None req-8ec154d9-8256-425b-8bcc-293b014167b2 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] Lazy-loading 'pci_devices' on Instance uuid f67d89a8-836a-4f47-af8d-37cf99529275 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:43:55 np0005548731 nova_compute[232433]: 2025-12-06 07:43:55.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:43:55 np0005548731 nova_compute[232433]: 2025-12-06 07:43:55.104 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Dec  6 02:43:55 np0005548731 nova_compute[232433]: 2025-12-06 07:43:55.124 232437 DEBUG nova.virt.libvirt.driver [None req-8ec154d9-8256-425b-8bcc-293b014167b2 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] [instance: f67d89a8-836a-4f47-af8d-37cf99529275] End _get_guest_xml xml=<domain type="kvm">
Dec  6 02:43:55 np0005548731 nova_compute[232433]:  <uuid>f67d89a8-836a-4f47-af8d-37cf99529275</uuid>
Dec  6 02:43:55 np0005548731 nova_compute[232433]:  <name>instance-00000099</name>
Dec  6 02:43:55 np0005548731 nova_compute[232433]:  <memory>131072</memory>
Dec  6 02:43:55 np0005548731 nova_compute[232433]:  <vcpu>1</vcpu>
Dec  6 02:43:55 np0005548731 nova_compute[232433]:  <metadata>
Dec  6 02:43:55 np0005548731 nova_compute[232433]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec  6 02:43:55 np0005548731 nova_compute[232433]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec  6 02:43:55 np0005548731 nova_compute[232433]:      <nova:name>tempest-ServersNegativeTestJSON-server-2113241509</nova:name>
Dec  6 02:43:55 np0005548731 nova_compute[232433]:      <nova:creationTime>2025-12-06 07:43:53</nova:creationTime>
Dec  6 02:43:55 np0005548731 nova_compute[232433]:      <nova:flavor name="m1.nano">
Dec  6 02:43:55 np0005548731 nova_compute[232433]:        <nova:memory>128</nova:memory>
Dec  6 02:43:55 np0005548731 nova_compute[232433]:        <nova:disk>1</nova:disk>
Dec  6 02:43:55 np0005548731 nova_compute[232433]:        <nova:swap>0</nova:swap>
Dec  6 02:43:55 np0005548731 nova_compute[232433]:        <nova:ephemeral>0</nova:ephemeral>
Dec  6 02:43:55 np0005548731 nova_compute[232433]:        <nova:vcpus>1</nova:vcpus>
Dec  6 02:43:55 np0005548731 nova_compute[232433]:      </nova:flavor>
Dec  6 02:43:55 np0005548731 nova_compute[232433]:      <nova:owner>
Dec  6 02:43:55 np0005548731 nova_compute[232433]:        <nova:user uuid="1035ecd55ed54b57aa35fe32fb915cc5">tempest-ServersNegativeTestJSON-776446295-project-member</nova:user>
Dec  6 02:43:55 np0005548731 nova_compute[232433]:        <nova:project uuid="4ec9294f6d4b4f44a72414374d646a4a">tempest-ServersNegativeTestJSON-776446295</nova:project>
Dec  6 02:43:55 np0005548731 nova_compute[232433]:      </nova:owner>
Dec  6 02:43:55 np0005548731 nova_compute[232433]:      <nova:root type="image" uuid="6efab05d-c7cf-4770-a5c3-c806a2739063"/>
Dec  6 02:43:55 np0005548731 nova_compute[232433]:      <nova:ports>
Dec  6 02:43:55 np0005548731 nova_compute[232433]:        <nova:port uuid="d4087b80-19fa-44d3-8dd5-406c2b01b6f4">
Dec  6 02:43:55 np0005548731 nova_compute[232433]:          <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Dec  6 02:43:55 np0005548731 nova_compute[232433]:        </nova:port>
Dec  6 02:43:55 np0005548731 nova_compute[232433]:      </nova:ports>
Dec  6 02:43:55 np0005548731 nova_compute[232433]:    </nova:instance>
Dec  6 02:43:55 np0005548731 nova_compute[232433]:  </metadata>
Dec  6 02:43:55 np0005548731 nova_compute[232433]:  <sysinfo type="smbios">
Dec  6 02:43:55 np0005548731 nova_compute[232433]:    <system>
Dec  6 02:43:55 np0005548731 nova_compute[232433]:      <entry name="manufacturer">RDO</entry>
Dec  6 02:43:55 np0005548731 nova_compute[232433]:      <entry name="product">OpenStack Compute</entry>
Dec  6 02:43:55 np0005548731 nova_compute[232433]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec  6 02:43:55 np0005548731 nova_compute[232433]:      <entry name="serial">f67d89a8-836a-4f47-af8d-37cf99529275</entry>
Dec  6 02:43:55 np0005548731 nova_compute[232433]:      <entry name="uuid">f67d89a8-836a-4f47-af8d-37cf99529275</entry>
Dec  6 02:43:55 np0005548731 nova_compute[232433]:      <entry name="family">Virtual Machine</entry>
Dec  6 02:43:55 np0005548731 nova_compute[232433]:    </system>
Dec  6 02:43:55 np0005548731 nova_compute[232433]:  </sysinfo>
Dec  6 02:43:55 np0005548731 nova_compute[232433]:  <os>
Dec  6 02:43:55 np0005548731 nova_compute[232433]:    <type arch="x86_64" machine="q35">hvm</type>
Dec  6 02:43:55 np0005548731 nova_compute[232433]:    <boot dev="hd"/>
Dec  6 02:43:55 np0005548731 nova_compute[232433]:    <smbios mode="sysinfo"/>
Dec  6 02:43:55 np0005548731 nova_compute[232433]:  </os>
Dec  6 02:43:55 np0005548731 nova_compute[232433]:  <features>
Dec  6 02:43:55 np0005548731 nova_compute[232433]:    <acpi/>
Dec  6 02:43:55 np0005548731 nova_compute[232433]:    <apic/>
Dec  6 02:43:55 np0005548731 nova_compute[232433]:    <vmcoreinfo/>
Dec  6 02:43:55 np0005548731 nova_compute[232433]:  </features>
Dec  6 02:43:55 np0005548731 nova_compute[232433]:  <clock offset="utc">
Dec  6 02:43:55 np0005548731 nova_compute[232433]:    <timer name="pit" tickpolicy="delay"/>
Dec  6 02:43:55 np0005548731 nova_compute[232433]:    <timer name="rtc" tickpolicy="catchup"/>
Dec  6 02:43:55 np0005548731 nova_compute[232433]:    <timer name="hpet" present="no"/>
Dec  6 02:43:55 np0005548731 nova_compute[232433]:  </clock>
Dec  6 02:43:55 np0005548731 nova_compute[232433]:  <cpu mode="custom" match="exact">
Dec  6 02:43:55 np0005548731 nova_compute[232433]:    <model>Nehalem</model>
Dec  6 02:43:55 np0005548731 nova_compute[232433]:    <topology sockets="1" cores="1" threads="1"/>
Dec  6 02:43:55 np0005548731 nova_compute[232433]:  </cpu>
Dec  6 02:43:55 np0005548731 nova_compute[232433]:  <devices>
Dec  6 02:43:55 np0005548731 nova_compute[232433]:    <disk type="network" device="disk">
Dec  6 02:43:55 np0005548731 nova_compute[232433]:      <driver type="raw" cache="none"/>
Dec  6 02:43:55 np0005548731 nova_compute[232433]:      <source protocol="rbd" name="vms/f67d89a8-836a-4f47-af8d-37cf99529275_disk">
Dec  6 02:43:55 np0005548731 nova_compute[232433]:        <host name="192.168.122.100" port="6789"/>
Dec  6 02:43:55 np0005548731 nova_compute[232433]:        <host name="192.168.122.102" port="6789"/>
Dec  6 02:43:55 np0005548731 nova_compute[232433]:        <host name="192.168.122.101" port="6789"/>
Dec  6 02:43:55 np0005548731 nova_compute[232433]:      </source>
Dec  6 02:43:55 np0005548731 nova_compute[232433]:      <auth username="openstack">
Dec  6 02:43:55 np0005548731 nova_compute[232433]:        <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec  6 02:43:55 np0005548731 nova_compute[232433]:      </auth>
Dec  6 02:43:55 np0005548731 nova_compute[232433]:      <target dev="vda" bus="virtio"/>
Dec  6 02:43:55 np0005548731 nova_compute[232433]:    </disk>
Dec  6 02:43:55 np0005548731 nova_compute[232433]:    <disk type="network" device="cdrom">
Dec  6 02:43:55 np0005548731 nova_compute[232433]:      <driver type="raw" cache="none"/>
Dec  6 02:43:55 np0005548731 nova_compute[232433]:      <source protocol="rbd" name="vms/f67d89a8-836a-4f47-af8d-37cf99529275_disk.config">
Dec  6 02:43:55 np0005548731 nova_compute[232433]:        <host name="192.168.122.100" port="6789"/>
Dec  6 02:43:55 np0005548731 nova_compute[232433]:        <host name="192.168.122.102" port="6789"/>
Dec  6 02:43:55 np0005548731 nova_compute[232433]:        <host name="192.168.122.101" port="6789"/>
Dec  6 02:43:55 np0005548731 nova_compute[232433]:      </source>
Dec  6 02:43:55 np0005548731 nova_compute[232433]:      <auth username="openstack">
Dec  6 02:43:55 np0005548731 nova_compute[232433]:        <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec  6 02:43:55 np0005548731 nova_compute[232433]:      </auth>
Dec  6 02:43:55 np0005548731 nova_compute[232433]:      <target dev="sda" bus="sata"/>
Dec  6 02:43:55 np0005548731 nova_compute[232433]:    </disk>
Dec  6 02:43:55 np0005548731 nova_compute[232433]:    <interface type="ethernet">
Dec  6 02:43:55 np0005548731 nova_compute[232433]:      <mac address="fa:16:3e:16:41:1e"/>
Dec  6 02:43:55 np0005548731 nova_compute[232433]:      <model type="virtio"/>
Dec  6 02:43:55 np0005548731 nova_compute[232433]:      <driver name="vhost" rx_queue_size="512"/>
Dec  6 02:43:55 np0005548731 nova_compute[232433]:      <mtu size="1442"/>
Dec  6 02:43:55 np0005548731 nova_compute[232433]:      <target dev="tapd4087b80-19"/>
Dec  6 02:43:55 np0005548731 nova_compute[232433]:    </interface>
Dec  6 02:43:55 np0005548731 nova_compute[232433]:    <serial type="pty">
Dec  6 02:43:55 np0005548731 nova_compute[232433]:      <log file="/var/lib/nova/instances/f67d89a8-836a-4f47-af8d-37cf99529275/console.log" append="off"/>
Dec  6 02:43:55 np0005548731 nova_compute[232433]:    </serial>
Dec  6 02:43:55 np0005548731 nova_compute[232433]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec  6 02:43:55 np0005548731 nova_compute[232433]:    <video>
Dec  6 02:43:55 np0005548731 nova_compute[232433]:      <model type="virtio"/>
Dec  6 02:43:55 np0005548731 nova_compute[232433]:    </video>
Dec  6 02:43:55 np0005548731 nova_compute[232433]:    <input type="tablet" bus="usb"/>
Dec  6 02:43:55 np0005548731 nova_compute[232433]:    <rng model="virtio">
Dec  6 02:43:55 np0005548731 nova_compute[232433]:      <backend model="random">/dev/urandom</backend>
Dec  6 02:43:55 np0005548731 nova_compute[232433]:    </rng>
Dec  6 02:43:55 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root"/>
Dec  6 02:43:55 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:43:55 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:43:55 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:43:55 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:43:55 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:43:55 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:43:55 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:43:55 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:43:55 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:43:55 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:43:55 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:43:55 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:43:55 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:43:55 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:43:55 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:43:55 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:43:55 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:43:55 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:43:55 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:43:55 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:43:55 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:43:55 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:43:55 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:43:55 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:43:55 np0005548731 nova_compute[232433]:    <controller type="usb" index="0"/>
Dec  6 02:43:55 np0005548731 nova_compute[232433]:    <memballoon model="virtio">
Dec  6 02:43:55 np0005548731 nova_compute[232433]:      <stats period="10"/>
Dec  6 02:43:55 np0005548731 nova_compute[232433]:    </memballoon>
Dec  6 02:43:55 np0005548731 nova_compute[232433]:  </devices>
Dec  6 02:43:55 np0005548731 nova_compute[232433]: </domain>
Dec  6 02:43:55 np0005548731 nova_compute[232433]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec  6 02:43:55 np0005548731 nova_compute[232433]: 2025-12-06 07:43:55.125 232437 DEBUG nova.compute.manager [None req-8ec154d9-8256-425b-8bcc-293b014167b2 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] [instance: f67d89a8-836a-4f47-af8d-37cf99529275] Preparing to wait for external event network-vif-plugged-d4087b80-19fa-44d3-8dd5-406c2b01b6f4 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec  6 02:43:55 np0005548731 nova_compute[232433]: 2025-12-06 07:43:55.125 232437 DEBUG oslo_concurrency.lockutils [None req-8ec154d9-8256-425b-8bcc-293b014167b2 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] Acquiring lock "f67d89a8-836a-4f47-af8d-37cf99529275-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:43:55 np0005548731 nova_compute[232433]: 2025-12-06 07:43:55.126 232437 DEBUG oslo_concurrency.lockutils [None req-8ec154d9-8256-425b-8bcc-293b014167b2 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] Lock "f67d89a8-836a-4f47-af8d-37cf99529275-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:43:55 np0005548731 nova_compute[232433]: 2025-12-06 07:43:55.126 232437 DEBUG oslo_concurrency.lockutils [None req-8ec154d9-8256-425b-8bcc-293b014167b2 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] Lock "f67d89a8-836a-4f47-af8d-37cf99529275-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:43:55 np0005548731 nova_compute[232433]: 2025-12-06 07:43:55.127 232437 DEBUG nova.virt.libvirt.vif [None req-8ec154d9-8256-425b-8bcc-293b014167b2 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T07:43:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersNegativeTestJSON-server-2113241509',display_name='tempest-ServersNegativeTestJSON-server-2113241509',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serversnegativetestjson-server-2113241509',id=153,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4ec9294f6d4b4f44a72414374d646a4a',ramdisk_id='',reservation_id='r-r3odtvmp',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersNegativeTestJSON-776446295',owner_user_name='tempest-S
erversNegativeTestJSON-776446295-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:43:45Z,user_data=None,user_id='1035ecd55ed54b57aa35fe32fb915cc5',uuid=f67d89a8-836a-4f47-af8d-37cf99529275,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d4087b80-19fa-44d3-8dd5-406c2b01b6f4", "address": "fa:16:3e:16:41:1e", "network": {"id": "67a02abd-6f15-4e26-ba0d-8a091ca98239", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-840496237-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4ec9294f6d4b4f44a72414374d646a4a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd4087b80-19", "ovs_interfaceid": "d4087b80-19fa-44d3-8dd5-406c2b01b6f4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec  6 02:43:55 np0005548731 nova_compute[232433]: 2025-12-06 07:43:55.127 232437 DEBUG nova.network.os_vif_util [None req-8ec154d9-8256-425b-8bcc-293b014167b2 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] Converting VIF {"id": "d4087b80-19fa-44d3-8dd5-406c2b01b6f4", "address": "fa:16:3e:16:41:1e", "network": {"id": "67a02abd-6f15-4e26-ba0d-8a091ca98239", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-840496237-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4ec9294f6d4b4f44a72414374d646a4a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd4087b80-19", "ovs_interfaceid": "d4087b80-19fa-44d3-8dd5-406c2b01b6f4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 02:43:55 np0005548731 nova_compute[232433]: 2025-12-06 07:43:55.127 232437 DEBUG nova.network.os_vif_util [None req-8ec154d9-8256-425b-8bcc-293b014167b2 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:16:41:1e,bridge_name='br-int',has_traffic_filtering=True,id=d4087b80-19fa-44d3-8dd5-406c2b01b6f4,network=Network(67a02abd-6f15-4e26-ba0d-8a091ca98239),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd4087b80-19') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 02:43:55 np0005548731 nova_compute[232433]: 2025-12-06 07:43:55.128 232437 DEBUG os_vif [None req-8ec154d9-8256-425b-8bcc-293b014167b2 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:16:41:1e,bridge_name='br-int',has_traffic_filtering=True,id=d4087b80-19fa-44d3-8dd5-406c2b01b6f4,network=Network(67a02abd-6f15-4e26-ba0d-8a091ca98239),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd4087b80-19') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec  6 02:43:55 np0005548731 nova_compute[232433]: 2025-12-06 07:43:55.128 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:43:55 np0005548731 nova_compute[232433]: 2025-12-06 07:43:55.129 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:43:55 np0005548731 nova_compute[232433]: 2025-12-06 07:43:55.130 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  6 02:43:55 np0005548731 nova_compute[232433]: 2025-12-06 07:43:55.133 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:43:55 np0005548731 nova_compute[232433]: 2025-12-06 07:43:55.134 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd4087b80-19, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:43:55 np0005548731 nova_compute[232433]: 2025-12-06 07:43:55.134 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapd4087b80-19, col_values=(('external_ids', {'iface-id': 'd4087b80-19fa-44d3-8dd5-406c2b01b6f4', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:16:41:1e', 'vm-uuid': 'f67d89a8-836a-4f47-af8d-37cf99529275'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:43:55 np0005548731 nova_compute[232433]: 2025-12-06 07:43:55.136 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:43:55 np0005548731 NetworkManager[49182]: <info>  [1765007035.1368] manager: (tapd4087b80-19): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/322)
Dec  6 02:43:55 np0005548731 nova_compute[232433]: 2025-12-06 07:43:55.138 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  6 02:43:55 np0005548731 nova_compute[232433]: 2025-12-06 07:43:55.140 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:43:55 np0005548731 nova_compute[232433]: 2025-12-06 07:43:55.141 232437 INFO os_vif [None req-8ec154d9-8256-425b-8bcc-293b014167b2 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:16:41:1e,bridge_name='br-int',has_traffic_filtering=True,id=d4087b80-19fa-44d3-8dd5-406c2b01b6f4,network=Network(67a02abd-6f15-4e26-ba0d-8a091ca98239),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd4087b80-19')#033[00m
Dec  6 02:43:55 np0005548731 nova_compute[232433]: 2025-12-06 07:43:55.204 232437 DEBUG nova.virt.libvirt.driver [None req-8ec154d9-8256-425b-8bcc-293b014167b2 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  6 02:43:55 np0005548731 nova_compute[232433]: 2025-12-06 07:43:55.205 232437 DEBUG nova.virt.libvirt.driver [None req-8ec154d9-8256-425b-8bcc-293b014167b2 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  6 02:43:55 np0005548731 nova_compute[232433]: 2025-12-06 07:43:55.205 232437 DEBUG nova.virt.libvirt.driver [None req-8ec154d9-8256-425b-8bcc-293b014167b2 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] No VIF found with MAC fa:16:3e:16:41:1e, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec  6 02:43:55 np0005548731 nova_compute[232433]: 2025-12-06 07:43:55.206 232437 INFO nova.virt.libvirt.driver [None req-8ec154d9-8256-425b-8bcc-293b014167b2 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] [instance: f67d89a8-836a-4f47-af8d-37cf99529275] Using config drive#033[00m
Dec  6 02:43:55 np0005548731 nova_compute[232433]: 2025-12-06 07:43:55.231 232437 DEBUG nova.storage.rbd_utils [None req-8ec154d9-8256-425b-8bcc-293b014167b2 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] rbd image f67d89a8-836a-4f47-af8d-37cf99529275_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:43:55 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:43:55 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:43:55 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:43:55.277 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:43:55 np0005548731 nova_compute[232433]: 2025-12-06 07:43:55.968 232437 INFO nova.virt.libvirt.driver [None req-8ec154d9-8256-425b-8bcc-293b014167b2 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] [instance: f67d89a8-836a-4f47-af8d-37cf99529275] Creating config drive at /var/lib/nova/instances/f67d89a8-836a-4f47-af8d-37cf99529275/disk.config#033[00m
Dec  6 02:43:55 np0005548731 nova_compute[232433]: 2025-12-06 07:43:55.976 232437 DEBUG oslo_concurrency.processutils [None req-8ec154d9-8256-425b-8bcc-293b014167b2 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/f67d89a8-836a-4f47-af8d-37cf99529275/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpn8bul276 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:43:56 np0005548731 nova_compute[232433]: 2025-12-06 07:43:56.004 232437 DEBUG nova.network.neutron [req-c579d576-5713-4e16-914f-996b5312c410 req-df18b5a0-2ddd-43ab-aa79-dbe669fe3d27 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f67d89a8-836a-4f47-af8d-37cf99529275] Updated VIF entry in instance network info cache for port d4087b80-19fa-44d3-8dd5-406c2b01b6f4. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  6 02:43:56 np0005548731 nova_compute[232433]: 2025-12-06 07:43:56.006 232437 DEBUG nova.network.neutron [req-c579d576-5713-4e16-914f-996b5312c410 req-df18b5a0-2ddd-43ab-aa79-dbe669fe3d27 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f67d89a8-836a-4f47-af8d-37cf99529275] Updating instance_info_cache with network_info: [{"id": "d4087b80-19fa-44d3-8dd5-406c2b01b6f4", "address": "fa:16:3e:16:41:1e", "network": {"id": "67a02abd-6f15-4e26-ba0d-8a091ca98239", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-840496237-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4ec9294f6d4b4f44a72414374d646a4a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd4087b80-19", "ovs_interfaceid": "d4087b80-19fa-44d3-8dd5-406c2b01b6f4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 02:43:56 np0005548731 nova_compute[232433]: 2025-12-06 07:43:56.025 232437 DEBUG oslo_concurrency.lockutils [req-c579d576-5713-4e16-914f-996b5312c410 req-df18b5a0-2ddd-43ab-aa79-dbe669fe3d27 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-f67d89a8-836a-4f47-af8d-37cf99529275" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 02:43:56 np0005548731 nova_compute[232433]: 2025-12-06 07:43:56.111 232437 DEBUG oslo_concurrency.processutils [None req-8ec154d9-8256-425b-8bcc-293b014167b2 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/f67d89a8-836a-4f47-af8d-37cf99529275/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpn8bul276" returned: 0 in 0.135s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:43:56 np0005548731 nova_compute[232433]: 2025-12-06 07:43:56.144 232437 DEBUG nova.storage.rbd_utils [None req-8ec154d9-8256-425b-8bcc-293b014167b2 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] rbd image f67d89a8-836a-4f47-af8d-37cf99529275_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:43:56 np0005548731 nova_compute[232433]: 2025-12-06 07:43:56.148 232437 DEBUG oslo_concurrency.processutils [None req-8ec154d9-8256-425b-8bcc-293b014167b2 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/f67d89a8-836a-4f47-af8d-37cf99529275/disk.config f67d89a8-836a-4f47-af8d-37cf99529275_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:43:56 np0005548731 nova_compute[232433]: 2025-12-06 07:43:56.187 232437 INFO nova.virt.libvirt.driver [None req-899d69a2-d40f-4b21-8c7d-298c84fd4677 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] [instance: 4c6cadbd-1c16-490c-aa8c-049b1595ca1b] Deleting instance files /var/lib/nova/instances/4c6cadbd-1c16-490c-aa8c-049b1595ca1b_del#033[00m
Dec  6 02:43:56 np0005548731 nova_compute[232433]: 2025-12-06 07:43:56.188 232437 INFO nova.virt.libvirt.driver [None req-899d69a2-d40f-4b21-8c7d-298c84fd4677 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] [instance: 4c6cadbd-1c16-490c-aa8c-049b1595ca1b] Deletion of /var/lib/nova/instances/4c6cadbd-1c16-490c-aa8c-049b1595ca1b_del complete#033[00m
Dec  6 02:43:56 np0005548731 nova_compute[232433]: 2025-12-06 07:43:56.257 232437 INFO nova.compute.manager [None req-899d69a2-d40f-4b21-8c7d-298c84fd4677 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] [instance: 4c6cadbd-1c16-490c-aa8c-049b1595ca1b] Took 1.95 seconds to destroy the instance on the hypervisor.#033[00m
Dec  6 02:43:56 np0005548731 nova_compute[232433]: 2025-12-06 07:43:56.259 232437 DEBUG oslo.service.loopingcall [None req-899d69a2-d40f-4b21-8c7d-298c84fd4677 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Dec  6 02:43:56 np0005548731 nova_compute[232433]: 2025-12-06 07:43:56.259 232437 DEBUG nova.compute.manager [-] [instance: 4c6cadbd-1c16-490c-aa8c-049b1595ca1b] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Dec  6 02:43:56 np0005548731 nova_compute[232433]: 2025-12-06 07:43:56.260 232437 DEBUG nova.network.neutron [-] [instance: 4c6cadbd-1c16-490c-aa8c-049b1595ca1b] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Dec  6 02:43:56 np0005548731 nova_compute[232433]: 2025-12-06 07:43:56.393 232437 DEBUG oslo_concurrency.processutils [None req-8ec154d9-8256-425b-8bcc-293b014167b2 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/f67d89a8-836a-4f47-af8d-37cf99529275/disk.config f67d89a8-836a-4f47-af8d-37cf99529275_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.245s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:43:56 np0005548731 nova_compute[232433]: 2025-12-06 07:43:56.394 232437 INFO nova.virt.libvirt.driver [None req-8ec154d9-8256-425b-8bcc-293b014167b2 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] [instance: f67d89a8-836a-4f47-af8d-37cf99529275] Deleting local config drive /var/lib/nova/instances/f67d89a8-836a-4f47-af8d-37cf99529275/disk.config because it was imported into RBD.#033[00m
Dec  6 02:43:56 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:43:56 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:43:56 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:43:56.406 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:43:56 np0005548731 kernel: tapd4087b80-19: entered promiscuous mode
Dec  6 02:43:56 np0005548731 NetworkManager[49182]: <info>  [1765007036.4441] manager: (tapd4087b80-19): new Tun device (/org/freedesktop/NetworkManager/Devices/323)
Dec  6 02:43:56 np0005548731 nova_compute[232433]: 2025-12-06 07:43:56.481 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:43:56 np0005548731 ovn_controller[133927]: 2025-12-06T07:43:56Z|00701|binding|INFO|Claiming lport d4087b80-19fa-44d3-8dd5-406c2b01b6f4 for this chassis.
Dec  6 02:43:56 np0005548731 ovn_controller[133927]: 2025-12-06T07:43:56Z|00702|binding|INFO|d4087b80-19fa-44d3-8dd5-406c2b01b6f4: Claiming fa:16:3e:16:41:1e 10.100.0.7
Dec  6 02:43:56 np0005548731 nova_compute[232433]: 2025-12-06 07:43:56.485 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:43:56 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:43:56.489 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:16:41:1e 10.100.0.7'], port_security=['fa:16:3e:16:41:1e 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'f67d89a8-836a-4f47-af8d-37cf99529275', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-67a02abd-6f15-4e26-ba0d-8a091ca98239', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4ec9294f6d4b4f44a72414374d646a4a', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'ef8d01c6-8791-4eef-8d34-906ccbdd6237', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a13aa6e4-a519-406f-87b7-05ba3d74a296, chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], logical_port=d4087b80-19fa-44d3-8dd5-406c2b01b6f4) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 02:43:56 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:43:56.490 143965 INFO neutron.agent.ovn.metadata.agent [-] Port d4087b80-19fa-44d3-8dd5-406c2b01b6f4 in datapath 67a02abd-6f15-4e26-ba0d-8a091ca98239 bound to our chassis#033[00m
Dec  6 02:43:56 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:43:56.491 143965 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 67a02abd-6f15-4e26-ba0d-8a091ca98239#033[00m
Dec  6 02:43:56 np0005548731 ovn_controller[133927]: 2025-12-06T07:43:56Z|00703|binding|INFO|Setting lport d4087b80-19fa-44d3-8dd5-406c2b01b6f4 ovn-installed in OVS
Dec  6 02:43:56 np0005548731 ovn_controller[133927]: 2025-12-06T07:43:56Z|00704|binding|INFO|Setting lport d4087b80-19fa-44d3-8dd5-406c2b01b6f4 up in Southbound
Dec  6 02:43:56 np0005548731 nova_compute[232433]: 2025-12-06 07:43:56.501 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:43:56 np0005548731 nova_compute[232433]: 2025-12-06 07:43:56.502 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:43:56 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:43:56.506 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[9043b420-04e7-4259-abd0-8bf706d84556]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:43:56 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:43:56.507 143965 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap67a02abd-61 in ovnmeta-67a02abd-6f15-4e26-ba0d-8a091ca98239 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Dec  6 02:43:56 np0005548731 systemd-udevd[299331]: Network interface NamePolicy= disabled on kernel command line.
Dec  6 02:43:56 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:43:56.508 236668 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap67a02abd-60 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Dec  6 02:43:56 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:43:56.509 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[7cd86c4b-e43c-468e-8151-dac7388c8715]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:43:56 np0005548731 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec  6 02:43:56 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:43:56.509 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[9373db0d-083f-41bc-ade8-983721f2faab]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:43:56 np0005548731 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec  6 02:43:56 np0005548731 systemd-machined[195355]: New machine qemu-71-instance-00000099.
Dec  6 02:43:56 np0005548731 NetworkManager[49182]: <info>  [1765007036.5180] device (tapd4087b80-19): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec  6 02:43:56 np0005548731 NetworkManager[49182]: <info>  [1765007036.5192] device (tapd4087b80-19): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec  6 02:43:56 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:43:56.520 144079 DEBUG oslo.privsep.daemon [-] privsep: reply[fa591102-352b-4de6-916f-5a82a2820b29]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:43:56 np0005548731 systemd[1]: Started Virtual Machine qemu-71-instance-00000099.
Dec  6 02:43:56 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:43:56.543 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[b63791dd-926f-4d12-a453-910faa59cf5f]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:43:56 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:43:56.571 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[31893307-0e58-4ca9-8fd8-30c3a969edca]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:43:56 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:43:56.577 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[c031e45d-7539-45d7-9b80-f1dd5d81a9d3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:43:56 np0005548731 NetworkManager[49182]: <info>  [1765007036.5787] manager: (tap67a02abd-60): new Veth device (/org/freedesktop/NetworkManager/Devices/324)
Dec  6 02:43:56 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:43:56.609 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[638a911a-107a-4052-9171-c545680efba1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:43:56 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:43:56.612 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[2631ad78-5d02-45be-acc5-bbda0aae35a3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:43:56 np0005548731 NetworkManager[49182]: <info>  [1765007036.6343] device (tap67a02abd-60): carrier: link connected
Dec  6 02:43:56 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:43:56.640 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[598c5c70-4b6c-4e14-8093-9e2a055667fa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:43:56 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:43:56.655 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[9497d1e2-f840-471e-a8a8-fc53486bfb3f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap67a02abd-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:43:89:ec'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 214], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 729864, 'reachable_time': 40789, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 299364, 'error': None, 'target': 'ovnmeta-67a02abd-6f15-4e26-ba0d-8a091ca98239', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:43:56 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:43:56.673 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[bf2a7db7-5b39-4885-b9d7-7158cea59768]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe43:89ec'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 729864, 'tstamp': 729864}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 299365, 'error': None, 'target': 'ovnmeta-67a02abd-6f15-4e26-ba0d-8a091ca98239', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:43:56 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:43:56.691 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[d10daf28-3385-4e79-9fe4-8e89bdd47499]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap67a02abd-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:43:89:ec'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 214], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 729864, 'reachable_time': 40789, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 299366, 'error': None, 'target': 'ovnmeta-67a02abd-6f15-4e26-ba0d-8a091ca98239', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:43:56 np0005548731 nova_compute[232433]: 2025-12-06 07:43:56.712 232437 DEBUG nova.compute.manager [req-aa3a6668-3854-4d69-8782-cd4f565bbe95 req-e30af83e-d46a-4c02-9da5-fca3dc664460 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f67d89a8-836a-4f47-af8d-37cf99529275] Received event network-vif-plugged-d4087b80-19fa-44d3-8dd5-406c2b01b6f4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:43:56 np0005548731 nova_compute[232433]: 2025-12-06 07:43:56.713 232437 DEBUG oslo_concurrency.lockutils [req-aa3a6668-3854-4d69-8782-cd4f565bbe95 req-e30af83e-d46a-4c02-9da5-fca3dc664460 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "f67d89a8-836a-4f47-af8d-37cf99529275-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:43:56 np0005548731 nova_compute[232433]: 2025-12-06 07:43:56.713 232437 DEBUG oslo_concurrency.lockutils [req-aa3a6668-3854-4d69-8782-cd4f565bbe95 req-e30af83e-d46a-4c02-9da5-fca3dc664460 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "f67d89a8-836a-4f47-af8d-37cf99529275-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:43:56 np0005548731 nova_compute[232433]: 2025-12-06 07:43:56.713 232437 DEBUG oslo_concurrency.lockutils [req-aa3a6668-3854-4d69-8782-cd4f565bbe95 req-e30af83e-d46a-4c02-9da5-fca3dc664460 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "f67d89a8-836a-4f47-af8d-37cf99529275-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:43:56 np0005548731 nova_compute[232433]: 2025-12-06 07:43:56.714 232437 DEBUG nova.compute.manager [req-aa3a6668-3854-4d69-8782-cd4f565bbe95 req-e30af83e-d46a-4c02-9da5-fca3dc664460 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f67d89a8-836a-4f47-af8d-37cf99529275] Processing event network-vif-plugged-d4087b80-19fa-44d3-8dd5-406c2b01b6f4 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Dec  6 02:43:56 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:43:56.720 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[114cdaf0-780c-4761-a5a4-c06a05a5cf5d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:43:56 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:43:56.772 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[8dcd8e00-f9f9-446c-956c-483fce62eb7e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:43:56 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:43:56.773 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap67a02abd-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:43:56 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:43:56.774 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  6 02:43:56 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:43:56.774 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap67a02abd-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:43:56 np0005548731 nova_compute[232433]: 2025-12-06 07:43:56.776 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:43:56 np0005548731 NetworkManager[49182]: <info>  [1765007036.7768] manager: (tap67a02abd-60): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/325)
Dec  6 02:43:56 np0005548731 kernel: tap67a02abd-60: entered promiscuous mode
Dec  6 02:43:56 np0005548731 nova_compute[232433]: 2025-12-06 07:43:56.780 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:43:56 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:43:56.781 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap67a02abd-60, col_values=(('external_ids', {'iface-id': 'f1ca157c-f88b-4351-b03a-b04a75537062'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:43:56 np0005548731 nova_compute[232433]: 2025-12-06 07:43:56.782 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:43:56 np0005548731 ovn_controller[133927]: 2025-12-06T07:43:56Z|00705|binding|INFO|Releasing lport f1ca157c-f88b-4351-b03a-b04a75537062 from this chassis (sb_readonly=0)
Dec  6 02:43:56 np0005548731 nova_compute[232433]: 2025-12-06 07:43:56.796 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:43:56 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:43:56.797 143965 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/67a02abd-6f15-4e26-ba0d-8a091ca98239.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/67a02abd-6f15-4e26-ba0d-8a091ca98239.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Dec  6 02:43:56 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:43:56.798 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[28d91792-9b56-48bb-8671-d880adc00818]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:43:56 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:43:56.799 143965 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec  6 02:43:56 np0005548731 ovn_metadata_agent[143960]: global
Dec  6 02:43:56 np0005548731 ovn_metadata_agent[143960]:    log         /dev/log local0 debug
Dec  6 02:43:56 np0005548731 ovn_metadata_agent[143960]:    log-tag     haproxy-metadata-proxy-67a02abd-6f15-4e26-ba0d-8a091ca98239
Dec  6 02:43:56 np0005548731 ovn_metadata_agent[143960]:    user        root
Dec  6 02:43:56 np0005548731 ovn_metadata_agent[143960]:    group       root
Dec  6 02:43:56 np0005548731 ovn_metadata_agent[143960]:    maxconn     1024
Dec  6 02:43:56 np0005548731 ovn_metadata_agent[143960]:    pidfile     /var/lib/neutron/external/pids/67a02abd-6f15-4e26-ba0d-8a091ca98239.pid.haproxy
Dec  6 02:43:56 np0005548731 ovn_metadata_agent[143960]:    daemon
Dec  6 02:43:56 np0005548731 ovn_metadata_agent[143960]: 
Dec  6 02:43:56 np0005548731 ovn_metadata_agent[143960]: defaults
Dec  6 02:43:56 np0005548731 ovn_metadata_agent[143960]:    log global
Dec  6 02:43:56 np0005548731 ovn_metadata_agent[143960]:    mode http
Dec  6 02:43:56 np0005548731 ovn_metadata_agent[143960]:    option httplog
Dec  6 02:43:56 np0005548731 ovn_metadata_agent[143960]:    option dontlognull
Dec  6 02:43:56 np0005548731 ovn_metadata_agent[143960]:    option http-server-close
Dec  6 02:43:56 np0005548731 ovn_metadata_agent[143960]:    option forwardfor
Dec  6 02:43:56 np0005548731 ovn_metadata_agent[143960]:    retries                 3
Dec  6 02:43:56 np0005548731 ovn_metadata_agent[143960]:    timeout http-request    30s
Dec  6 02:43:56 np0005548731 ovn_metadata_agent[143960]:    timeout connect         30s
Dec  6 02:43:56 np0005548731 ovn_metadata_agent[143960]:    timeout client          32s
Dec  6 02:43:56 np0005548731 ovn_metadata_agent[143960]:    timeout server          32s
Dec  6 02:43:56 np0005548731 ovn_metadata_agent[143960]:    timeout http-keep-alive 30s
Dec  6 02:43:56 np0005548731 ovn_metadata_agent[143960]: 
Dec  6 02:43:56 np0005548731 ovn_metadata_agent[143960]: 
Dec  6 02:43:56 np0005548731 ovn_metadata_agent[143960]: listen listener
Dec  6 02:43:56 np0005548731 ovn_metadata_agent[143960]:    bind 169.254.169.254:80
Dec  6 02:43:56 np0005548731 ovn_metadata_agent[143960]:    server metadata /var/lib/neutron/metadata_proxy
Dec  6 02:43:56 np0005548731 ovn_metadata_agent[143960]:    http-request add-header X-OVN-Network-ID 67a02abd-6f15-4e26-ba0d-8a091ca98239
Dec  6 02:43:56 np0005548731 ovn_metadata_agent[143960]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Dec  6 02:43:56 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:43:56.800 143965 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-67a02abd-6f15-4e26-ba0d-8a091ca98239', 'env', 'PROCESS_TAG=haproxy-67a02abd-6f15-4e26-ba0d-8a091ca98239', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/67a02abd-6f15-4e26-ba0d-8a091ca98239.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Dec  6 02:43:56 np0005548731 nova_compute[232433]: 2025-12-06 07:43:56.983 232437 DEBUG nova.network.neutron [-] [instance: 4c6cadbd-1c16-490c-aa8c-049b1595ca1b] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 02:43:57 np0005548731 nova_compute[232433]: 2025-12-06 07:43:56.998 232437 INFO nova.compute.manager [-] [instance: 4c6cadbd-1c16-490c-aa8c-049b1595ca1b] Took 0.74 seconds to deallocate network for instance.#033[00m
Dec  6 02:43:57 np0005548731 nova_compute[232433]: 2025-12-06 07:43:57.051 232437 DEBUG oslo_concurrency.lockutils [None req-899d69a2-d40f-4b21-8c7d-298c84fd4677 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:43:57 np0005548731 nova_compute[232433]: 2025-12-06 07:43:57.051 232437 DEBUG oslo_concurrency.lockutils [None req-899d69a2-d40f-4b21-8c7d-298c84fd4677 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:43:57 np0005548731 nova_compute[232433]: 2025-12-06 07:43:57.056 232437 DEBUG nova.compute.manager [req-1e5d7643-c5c5-4067-91a7-8783793b2ffb req-11b06873-f64f-4d33-b844-7feb55659c10 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 4c6cadbd-1c16-490c-aa8c-049b1595ca1b] Received event network-vif-deleted-59bb701f-9125-43d2-9023-6af0a1f18fea external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:43:57 np0005548731 nova_compute[232433]: 2025-12-06 07:43:57.214 232437 DEBUG oslo_concurrency.processutils [None req-899d69a2-d40f-4b21-8c7d-298c84fd4677 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:43:57 np0005548731 podman[299398]: 2025-12-06 07:43:57.131376716 +0000 UTC m=+0.035919537 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec  6 02:43:57 np0005548731 podman[299398]: 2025-12-06 07:43:57.270825966 +0000 UTC m=+0.175368787 container create 159207bd313a1a55e655d366f3556200a3a9e1698660b965a9fc239071fae56a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-67a02abd-6f15-4e26-ba0d-8a091ca98239, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Dec  6 02:43:57 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:43:57 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:43:57 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:43:57.279 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:43:57 np0005548731 systemd[1]: Started libpod-conmon-159207bd313a1a55e655d366f3556200a3a9e1698660b965a9fc239071fae56a.scope.
Dec  6 02:43:57 np0005548731 systemd[1]: Started libcrun container.
Dec  6 02:43:57 np0005548731 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d1ab0bc05fb774abac9b26cd4de108994e052c2b276d819756c70d80383af3ee/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec  6 02:43:57 np0005548731 podman[299398]: 2025-12-06 07:43:57.422513075 +0000 UTC m=+0.327055916 container init 159207bd313a1a55e655d366f3556200a3a9e1698660b965a9fc239071fae56a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-67a02abd-6f15-4e26-ba0d-8a091ca98239, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  6 02:43:57 np0005548731 podman[299398]: 2025-12-06 07:43:57.428212193 +0000 UTC m=+0.332755014 container start 159207bd313a1a55e655d366f3556200a3a9e1698660b965a9fc239071fae56a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-67a02abd-6f15-4e26-ba0d-8a091ca98239, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team)
Dec  6 02:43:57 np0005548731 neutron-haproxy-ovnmeta-67a02abd-6f15-4e26-ba0d-8a091ca98239[299449]: [NOTICE]   (299471) : New worker (299476) forked
Dec  6 02:43:57 np0005548731 neutron-haproxy-ovnmeta-67a02abd-6f15-4e26-ba0d-8a091ca98239[299449]: [NOTICE]   (299471) : Loading success.
Dec  6 02:43:57 np0005548731 nova_compute[232433]: 2025-12-06 07:43:57.600 232437 DEBUG nova.compute.manager [None req-8ec154d9-8256-425b-8bcc-293b014167b2 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] [instance: f67d89a8-836a-4f47-af8d-37cf99529275] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec  6 02:43:57 np0005548731 nova_compute[232433]: 2025-12-06 07:43:57.601 232437 DEBUG nova.virt.driver [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Emitting event <LifecycleEvent: 1765007037.5998096, f67d89a8-836a-4f47-af8d-37cf99529275 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 02:43:57 np0005548731 nova_compute[232433]: 2025-12-06 07:43:57.602 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: f67d89a8-836a-4f47-af8d-37cf99529275] VM Started (Lifecycle Event)#033[00m
Dec  6 02:43:57 np0005548731 nova_compute[232433]: 2025-12-06 07:43:57.607 232437 DEBUG nova.virt.libvirt.driver [None req-8ec154d9-8256-425b-8bcc-293b014167b2 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] [instance: f67d89a8-836a-4f47-af8d-37cf99529275] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Dec  6 02:43:57 np0005548731 nova_compute[232433]: 2025-12-06 07:43:57.610 232437 INFO nova.virt.libvirt.driver [-] [instance: f67d89a8-836a-4f47-af8d-37cf99529275] Instance spawned successfully.#033[00m
Dec  6 02:43:57 np0005548731 nova_compute[232433]: 2025-12-06 07:43:57.610 232437 DEBUG nova.virt.libvirt.driver [None req-8ec154d9-8256-425b-8bcc-293b014167b2 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] [instance: f67d89a8-836a-4f47-af8d-37cf99529275] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Dec  6 02:43:57 np0005548731 nova_compute[232433]: 2025-12-06 07:43:57.623 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: f67d89a8-836a-4f47-af8d-37cf99529275] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:43:57 np0005548731 nova_compute[232433]: 2025-12-06 07:43:57.625 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: f67d89a8-836a-4f47-af8d-37cf99529275] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  6 02:43:57 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 02:43:57 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3852900960' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 02:43:57 np0005548731 nova_compute[232433]: 2025-12-06 07:43:57.644 232437 DEBUG nova.virt.libvirt.driver [None req-8ec154d9-8256-425b-8bcc-293b014167b2 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] [instance: f67d89a8-836a-4f47-af8d-37cf99529275] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec  6 02:43:57 np0005548731 nova_compute[232433]: 2025-12-06 07:43:57.645 232437 DEBUG nova.virt.libvirt.driver [None req-8ec154d9-8256-425b-8bcc-293b014167b2 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] [instance: f67d89a8-836a-4f47-af8d-37cf99529275] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec  6 02:43:57 np0005548731 nova_compute[232433]: 2025-12-06 07:43:57.645 232437 DEBUG nova.virt.libvirt.driver [None req-8ec154d9-8256-425b-8bcc-293b014167b2 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] [instance: f67d89a8-836a-4f47-af8d-37cf99529275] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec  6 02:43:57 np0005548731 nova_compute[232433]: 2025-12-06 07:43:57.646 232437 DEBUG nova.virt.libvirt.driver [None req-8ec154d9-8256-425b-8bcc-293b014167b2 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] [instance: f67d89a8-836a-4f47-af8d-37cf99529275] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec  6 02:43:57 np0005548731 nova_compute[232433]: 2025-12-06 07:43:57.646 232437 DEBUG nova.virt.libvirt.driver [None req-8ec154d9-8256-425b-8bcc-293b014167b2 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] [instance: f67d89a8-836a-4f47-af8d-37cf99529275] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec  6 02:43:57 np0005548731 nova_compute[232433]: 2025-12-06 07:43:57.646 232437 DEBUG nova.virt.libvirt.driver [None req-8ec154d9-8256-425b-8bcc-293b014167b2 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] [instance: f67d89a8-836a-4f47-af8d-37cf99529275] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec  6 02:43:57 np0005548731 nova_compute[232433]: 2025-12-06 07:43:57.653 232437 DEBUG oslo_concurrency.processutils [None req-899d69a2-d40f-4b21-8c7d-298c84fd4677 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.438s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  6 02:43:57 np0005548731 nova_compute[232433]: 2025-12-06 07:43:57.657 232437 DEBUG nova.compute.provider_tree [None req-899d69a2-d40f-4b21-8c7d-298c84fd4677 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Inventory has not changed in ProviderTree for provider: 6d00757a-082f-486d-ae84-869a2ba2e6e7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec  6 02:43:57 np0005548731 nova_compute[232433]: 2025-12-06 07:43:57.687 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: f67d89a8-836a-4f47-af8d-37cf99529275] During sync_power_state the instance has a pending task (spawning). Skip.
Dec  6 02:43:57 np0005548731 nova_compute[232433]: 2025-12-06 07:43:57.688 232437 DEBUG nova.virt.driver [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Emitting event <LifecycleEvent: 1765007037.6011288, f67d89a8-836a-4f47-af8d-37cf99529275 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec  6 02:43:57 np0005548731 nova_compute[232433]: 2025-12-06 07:43:57.688 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: f67d89a8-836a-4f47-af8d-37cf99529275] VM Paused (Lifecycle Event)
Dec  6 02:43:57 np0005548731 nova_compute[232433]: 2025-12-06 07:43:57.698 232437 DEBUG nova.scheduler.client.report [None req-899d69a2-d40f-4b21-8c7d-298c84fd4677 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Inventory has not changed for provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec  6 02:43:57 np0005548731 nova_compute[232433]: 2025-12-06 07:43:57.724 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: f67d89a8-836a-4f47-af8d-37cf99529275] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec  6 02:43:57 np0005548731 nova_compute[232433]: 2025-12-06 07:43:57.728 232437 DEBUG nova.virt.driver [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Emitting event <LifecycleEvent: 1765007037.606225, f67d89a8-836a-4f47-af8d-37cf99529275 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec  6 02:43:57 np0005548731 nova_compute[232433]: 2025-12-06 07:43:57.728 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: f67d89a8-836a-4f47-af8d-37cf99529275] VM Resumed (Lifecycle Event)
Dec  6 02:43:57 np0005548731 nova_compute[232433]: 2025-12-06 07:43:57.732 232437 DEBUG oslo_concurrency.lockutils [None req-899d69a2-d40f-4b21-8c7d-298c84fd4677 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.681s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  6 02:43:57 np0005548731 nova_compute[232433]: 2025-12-06 07:43:57.748 232437 INFO nova.compute.manager [None req-8ec154d9-8256-425b-8bcc-293b014167b2 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] [instance: f67d89a8-836a-4f47-af8d-37cf99529275] Took 12.17 seconds to spawn the instance on the hypervisor.
Dec  6 02:43:57 np0005548731 nova_compute[232433]: 2025-12-06 07:43:57.748 232437 DEBUG nova.compute.manager [None req-8ec154d9-8256-425b-8bcc-293b014167b2 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] [instance: f67d89a8-836a-4f47-af8d-37cf99529275] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec  6 02:43:57 np0005548731 nova_compute[232433]: 2025-12-06 07:43:57.757 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: f67d89a8-836a-4f47-af8d-37cf99529275] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec  6 02:43:57 np0005548731 nova_compute[232433]: 2025-12-06 07:43:57.761 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: f67d89a8-836a-4f47-af8d-37cf99529275] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec  6 02:43:57 np0005548731 nova_compute[232433]: 2025-12-06 07:43:57.770 232437 INFO nova.scheduler.client.report [None req-899d69a2-d40f-4b21-8c7d-298c84fd4677 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Deleted allocations for instance 4c6cadbd-1c16-490c-aa8c-049b1595ca1b
Dec  6 02:43:57 np0005548731 nova_compute[232433]: 2025-12-06 07:43:57.796 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: f67d89a8-836a-4f47-af8d-37cf99529275] During sync_power_state the instance has a pending task (spawning). Skip.
Dec  6 02:43:57 np0005548731 nova_compute[232433]: 2025-12-06 07:43:57.943 232437 INFO nova.compute.manager [None req-8ec154d9-8256-425b-8bcc-293b014167b2 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] [instance: f67d89a8-836a-4f47-af8d-37cf99529275] Took 13.89 seconds to build instance.
Dec  6 02:43:57 np0005548731 nova_compute[232433]: 2025-12-06 07:43:57.997 232437 DEBUG oslo_concurrency.lockutils [None req-8ec154d9-8256-425b-8bcc-293b014167b2 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] Lock "f67d89a8-836a-4f47-af8d-37cf99529275" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 14.018s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  6 02:43:57 np0005548731 nova_compute[232433]: 2025-12-06 07:43:57.998 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "f67d89a8-836a-4f47-af8d-37cf99529275" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 6.692s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  6 02:43:57 np0005548731 nova_compute[232433]: 2025-12-06 07:43:57.998 232437 INFO nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] [instance: f67d89a8-836a-4f47-af8d-37cf99529275] During sync_power_state the instance has a pending task (spawning). Skip.
Dec  6 02:43:57 np0005548731 nova_compute[232433]: 2025-12-06 07:43:57.999 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "f67d89a8-836a-4f47-af8d-37cf99529275" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  6 02:43:58 np0005548731 nova_compute[232433]: 2025-12-06 07:43:58.006 232437 DEBUG oslo_concurrency.lockutils [None req-899d69a2-d40f-4b21-8c7d-298c84fd4677 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Lock "4c6cadbd-1c16-490c-aa8c-049b1595ca1b" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.706s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  6 02:43:58 np0005548731 nova_compute[232433]: 2025-12-06 07:43:58.339 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 02:43:58 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:43:58 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:43:58 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:43:58.408 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:43:58 np0005548731 nova_compute[232433]: 2025-12-06 07:43:58.889 232437 DEBUG nova.compute.manager [req-61ac3c89-89b8-4431-87da-463da0525f72 req-18d49d94-2730-4b2e-aa00-ffca4c9a5e21 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f67d89a8-836a-4f47-af8d-37cf99529275] Received event network-vif-plugged-d4087b80-19fa-44d3-8dd5-406c2b01b6f4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec  6 02:43:58 np0005548731 nova_compute[232433]: 2025-12-06 07:43:58.889 232437 DEBUG oslo_concurrency.lockutils [req-61ac3c89-89b8-4431-87da-463da0525f72 req-18d49d94-2730-4b2e-aa00-ffca4c9a5e21 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "f67d89a8-836a-4f47-af8d-37cf99529275-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  6 02:43:58 np0005548731 nova_compute[232433]: 2025-12-06 07:43:58.890 232437 DEBUG oslo_concurrency.lockutils [req-61ac3c89-89b8-4431-87da-463da0525f72 req-18d49d94-2730-4b2e-aa00-ffca4c9a5e21 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "f67d89a8-836a-4f47-af8d-37cf99529275-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  6 02:43:58 np0005548731 nova_compute[232433]: 2025-12-06 07:43:58.890 232437 DEBUG oslo_concurrency.lockutils [req-61ac3c89-89b8-4431-87da-463da0525f72 req-18d49d94-2730-4b2e-aa00-ffca4c9a5e21 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "f67d89a8-836a-4f47-af8d-37cf99529275-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  6 02:43:58 np0005548731 nova_compute[232433]: 2025-12-06 07:43:58.890 232437 DEBUG nova.compute.manager [req-61ac3c89-89b8-4431-87da-463da0525f72 req-18d49d94-2730-4b2e-aa00-ffca4c9a5e21 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f67d89a8-836a-4f47-af8d-37cf99529275] No waiting events found dispatching network-vif-plugged-d4087b80-19fa-44d3-8dd5-406c2b01b6f4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec  6 02:43:58 np0005548731 nova_compute[232433]: 2025-12-06 07:43:58.890 232437 WARNING nova.compute.manager [req-61ac3c89-89b8-4431-87da-463da0525f72 req-18d49d94-2730-4b2e-aa00-ffca4c9a5e21 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f67d89a8-836a-4f47-af8d-37cf99529275] Received unexpected event network-vif-plugged-d4087b80-19fa-44d3-8dd5-406c2b01b6f4 for instance with vm_state active and task_state None.
Dec  6 02:43:59 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:43:59 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:43:59 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:43:59.282 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:43:59 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:44:00 np0005548731 nova_compute[232433]: 2025-12-06 07:44:00.138 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 02:44:00 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:44:00 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:44:00 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:44:00.411 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:44:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:44:00.883 143965 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  6 02:44:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:44:00.885 143965 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  6 02:44:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:44:00.886 143965 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  6 02:44:01 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:44:01 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 02:44:01 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:44:01.283 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 02:44:02 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:44:02 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:44:02 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:44:02.412 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:44:02 np0005548731 nova_compute[232433]: 2025-12-06 07:44:02.512 232437 DEBUG oslo_concurrency.lockutils [None req-cf674562-18ce-4d39-93b5-6fefd29106fa 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] Acquiring lock "64c5b412-f2df-4207-86ed-ebf11666bb96" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  6 02:44:02 np0005548731 nova_compute[232433]: 2025-12-06 07:44:02.513 232437 DEBUG oslo_concurrency.lockutils [None req-cf674562-18ce-4d39-93b5-6fefd29106fa 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] Lock "64c5b412-f2df-4207-86ed-ebf11666bb96" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  6 02:44:02 np0005548731 nova_compute[232433]: 2025-12-06 07:44:02.542 232437 DEBUG nova.compute.manager [None req-cf674562-18ce-4d39-93b5-6fefd29106fa 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] [instance: 64c5b412-f2df-4207-86ed-ebf11666bb96] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec  6 02:44:02 np0005548731 nova_compute[232433]: 2025-12-06 07:44:02.626 232437 DEBUG oslo_concurrency.lockutils [None req-cf674562-18ce-4d39-93b5-6fefd29106fa 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  6 02:44:02 np0005548731 nova_compute[232433]: 2025-12-06 07:44:02.627 232437 DEBUG oslo_concurrency.lockutils [None req-cf674562-18ce-4d39-93b5-6fefd29106fa 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  6 02:44:02 np0005548731 nova_compute[232433]: 2025-12-06 07:44:02.635 232437 DEBUG nova.virt.hardware [None req-cf674562-18ce-4d39-93b5-6fefd29106fa 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec  6 02:44:02 np0005548731 nova_compute[232433]: 2025-12-06 07:44:02.635 232437 INFO nova.compute.claims [None req-cf674562-18ce-4d39-93b5-6fefd29106fa 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] [instance: 64c5b412-f2df-4207-86ed-ebf11666bb96] Claim successful on node compute-2.ctlplane.example.com
Dec  6 02:44:02 np0005548731 nova_compute[232433]: 2025-12-06 07:44:02.738 232437 DEBUG oslo_concurrency.processutils [None req-cf674562-18ce-4d39-93b5-6fefd29106fa 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  6 02:44:03 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 02:44:03 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/561146328' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 02:44:03 np0005548731 nova_compute[232433]: 2025-12-06 07:44:03.187 232437 DEBUG oslo_concurrency.processutils [None req-cf674562-18ce-4d39-93b5-6fefd29106fa 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.449s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  6 02:44:03 np0005548731 nova_compute[232433]: 2025-12-06 07:44:03.196 232437 DEBUG nova.compute.provider_tree [None req-cf674562-18ce-4d39-93b5-6fefd29106fa 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] Inventory has not changed in ProviderTree for provider: 6d00757a-082f-486d-ae84-869a2ba2e6e7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec  6 02:44:03 np0005548731 nova_compute[232433]: 2025-12-06 07:44:03.222 232437 DEBUG nova.scheduler.client.report [None req-cf674562-18ce-4d39-93b5-6fefd29106fa 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] Inventory has not changed for provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec  6 02:44:03 np0005548731 nova_compute[232433]: 2025-12-06 07:44:03.249 232437 DEBUG oslo_concurrency.lockutils [None req-cf674562-18ce-4d39-93b5-6fefd29106fa 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.623s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  6 02:44:03 np0005548731 nova_compute[232433]: 2025-12-06 07:44:03.250 232437 DEBUG nova.compute.manager [None req-cf674562-18ce-4d39-93b5-6fefd29106fa 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] [instance: 64c5b412-f2df-4207-86ed-ebf11666bb96] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec  6 02:44:03 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:44:03 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:44:03 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:44:03.286 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:44:03 np0005548731 nova_compute[232433]: 2025-12-06 07:44:03.312 232437 DEBUG nova.compute.manager [None req-cf674562-18ce-4d39-93b5-6fefd29106fa 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] [instance: 64c5b412-f2df-4207-86ed-ebf11666bb96] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec  6 02:44:03 np0005548731 nova_compute[232433]: 2025-12-06 07:44:03.313 232437 DEBUG nova.network.neutron [None req-cf674562-18ce-4d39-93b5-6fefd29106fa 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] [instance: 64c5b412-f2df-4207-86ed-ebf11666bb96] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec  6 02:44:03 np0005548731 nova_compute[232433]: 2025-12-06 07:44:03.340 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 02:44:03 np0005548731 nova_compute[232433]: 2025-12-06 07:44:03.345 232437 INFO nova.virt.libvirt.driver [None req-cf674562-18ce-4d39-93b5-6fefd29106fa 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] [instance: 64c5b412-f2df-4207-86ed-ebf11666bb96] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec  6 02:44:03 np0005548731 nova_compute[232433]: 2025-12-06 07:44:03.368 232437 DEBUG nova.compute.manager [None req-cf674562-18ce-4d39-93b5-6fefd29106fa 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] [instance: 64c5b412-f2df-4207-86ed-ebf11666bb96] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec  6 02:44:03 np0005548731 nova_compute[232433]: 2025-12-06 07:44:03.570 232437 DEBUG nova.compute.manager [None req-cf674562-18ce-4d39-93b5-6fefd29106fa 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] [instance: 64c5b412-f2df-4207-86ed-ebf11666bb96] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec  6 02:44:03 np0005548731 nova_compute[232433]: 2025-12-06 07:44:03.571 232437 DEBUG nova.virt.libvirt.driver [None req-cf674562-18ce-4d39-93b5-6fefd29106fa 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] [instance: 64c5b412-f2df-4207-86ed-ebf11666bb96] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec  6 02:44:03 np0005548731 nova_compute[232433]: 2025-12-06 07:44:03.572 232437 INFO nova.virt.libvirt.driver [None req-cf674562-18ce-4d39-93b5-6fefd29106fa 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] [instance: 64c5b412-f2df-4207-86ed-ebf11666bb96] Creating image(s)
Dec  6 02:44:03 np0005548731 nova_compute[232433]: 2025-12-06 07:44:03.593 232437 DEBUG nova.storage.rbd_utils [None req-cf674562-18ce-4d39-93b5-6fefd29106fa 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] rbd image 64c5b412-f2df-4207-86ed-ebf11666bb96_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec  6 02:44:03 np0005548731 nova_compute[232433]: 2025-12-06 07:44:03.615 232437 DEBUG nova.storage.rbd_utils [None req-cf674562-18ce-4d39-93b5-6fefd29106fa 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] rbd image 64c5b412-f2df-4207-86ed-ebf11666bb96_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec  6 02:44:03 np0005548731 nova_compute[232433]: 2025-12-06 07:44:03.641 232437 DEBUG nova.storage.rbd_utils [None req-cf674562-18ce-4d39-93b5-6fefd29106fa 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] rbd image 64c5b412-f2df-4207-86ed-ebf11666bb96_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec  6 02:44:03 np0005548731 nova_compute[232433]: 2025-12-06 07:44:03.645 232437 DEBUG oslo_concurrency.processutils [None req-cf674562-18ce-4d39-93b5-6fefd29106fa 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  6 02:44:03 np0005548731 nova_compute[232433]: 2025-12-06 07:44:03.674 232437 DEBUG nova.policy [None req-cf674562-18ce-4d39-93b5-6fefd29106fa 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '1035ecd55ed54b57aa35fe32fb915cc5', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '4ec9294f6d4b4f44a72414374d646a4a', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec  6 02:44:03 np0005548731 nova_compute[232433]: 2025-12-06 07:44:03.711 232437 DEBUG oslo_concurrency.processutils [None req-cf674562-18ce-4d39-93b5-6fefd29106fa 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  6 02:44:03 np0005548731 nova_compute[232433]: 2025-12-06 07:44:03.711 232437 DEBUG oslo_concurrency.lockutils [None req-cf674562-18ce-4d39-93b5-6fefd29106fa 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] Acquiring lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  6 02:44:03 np0005548731 nova_compute[232433]: 2025-12-06 07:44:03.712 232437 DEBUG oslo_concurrency.lockutils [None req-cf674562-18ce-4d39-93b5-6fefd29106fa 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  6 02:44:03 np0005548731 nova_compute[232433]: 2025-12-06 07:44:03.712 232437 DEBUG oslo_concurrency.lockutils [None req-cf674562-18ce-4d39-93b5-6fefd29106fa 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  6 02:44:03 np0005548731 nova_compute[232433]: 2025-12-06 07:44:03.736 232437 DEBUG nova.storage.rbd_utils [None req-cf674562-18ce-4d39-93b5-6fefd29106fa 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] rbd image 64c5b412-f2df-4207-86ed-ebf11666bb96_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec  6 02:44:03 np0005548731 nova_compute[232433]: 2025-12-06 07:44:03.740 232437 DEBUG oslo_concurrency.processutils [None req-cf674562-18ce-4d39-93b5-6fefd29106fa 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef 64c5b412-f2df-4207-86ed-ebf11666bb96_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  6 02:44:04 np0005548731 nova_compute[232433]: 2025-12-06 07:44:04.105 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  6 02:44:04 np0005548731 nova_compute[232433]: 2025-12-06 07:44:04.227 232437 DEBUG nova.network.neutron [None req-cf674562-18ce-4d39-93b5-6fefd29106fa 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] [instance: 64c5b412-f2df-4207-86ed-ebf11666bb96] Successfully created port: d80a22f5-c63c-4325-a07c-77494e2ebebb _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec  6 02:44:04 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:44:04 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:44:04 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:44:04.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:44:04 np0005548731 nova_compute[232433]: 2025-12-06 07:44:04.840 232437 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765007029.8394823, 4c6cadbd-1c16-490c-aa8c-049b1595ca1b => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 02:44:04 np0005548731 nova_compute[232433]: 2025-12-06 07:44:04.841 232437 INFO nova.compute.manager [-] [instance: 4c6cadbd-1c16-490c-aa8c-049b1595ca1b] VM Stopped (Lifecycle Event)#033[00m
Dec  6 02:44:04 np0005548731 nova_compute[232433]: 2025-12-06 07:44:04.898 232437 DEBUG nova.compute.manager [None req-1a163c63-8846-44aa-b704-7feb420f611e - - - - - -] [instance: 4c6cadbd-1c16-490c-aa8c-049b1595ca1b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:44:04 np0005548731 nova_compute[232433]: 2025-12-06 07:44:04.967 232437 DEBUG oslo_concurrency.processutils [None req-cf674562-18ce-4d39-93b5-6fefd29106fa 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef 64c5b412-f2df-4207-86ed-ebf11666bb96_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.227s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:44:04 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:44:05 np0005548731 nova_compute[232433]: 2025-12-06 07:44:05.044 232437 DEBUG nova.storage.rbd_utils [None req-cf674562-18ce-4d39-93b5-6fefd29106fa 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] resizing rbd image 64c5b412-f2df-4207-86ed-ebf11666bb96_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Dec  6 02:44:05 np0005548731 nova_compute[232433]: 2025-12-06 07:44:05.142 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:44:05 np0005548731 nova_compute[232433]: 2025-12-06 07:44:05.271 232437 DEBUG nova.network.neutron [None req-cf674562-18ce-4d39-93b5-6fefd29106fa 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] [instance: 64c5b412-f2df-4207-86ed-ebf11666bb96] Successfully updated port: d80a22f5-c63c-4325-a07c-77494e2ebebb _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Dec  6 02:44:05 np0005548731 nova_compute[232433]: 2025-12-06 07:44:05.286 232437 DEBUG oslo_concurrency.lockutils [None req-cf674562-18ce-4d39-93b5-6fefd29106fa 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] Acquiring lock "refresh_cache-64c5b412-f2df-4207-86ed-ebf11666bb96" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 02:44:05 np0005548731 nova_compute[232433]: 2025-12-06 07:44:05.286 232437 DEBUG oslo_concurrency.lockutils [None req-cf674562-18ce-4d39-93b5-6fefd29106fa 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] Acquired lock "refresh_cache-64c5b412-f2df-4207-86ed-ebf11666bb96" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 02:44:05 np0005548731 nova_compute[232433]: 2025-12-06 07:44:05.286 232437 DEBUG nova.network.neutron [None req-cf674562-18ce-4d39-93b5-6fefd29106fa 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] [instance: 64c5b412-f2df-4207-86ed-ebf11666bb96] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec  6 02:44:05 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:44:05 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:44:05 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:44:05.289 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:44:05 np0005548731 nova_compute[232433]: 2025-12-06 07:44:05.736 232437 DEBUG nova.network.neutron [None req-cf674562-18ce-4d39-93b5-6fefd29106fa 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] [instance: 64c5b412-f2df-4207-86ed-ebf11666bb96] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec  6 02:44:06 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:44:06 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:44:06 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:44:06.416 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:44:06 np0005548731 nova_compute[232433]: 2025-12-06 07:44:06.470 232437 DEBUG nova.network.neutron [None req-cf674562-18ce-4d39-93b5-6fefd29106fa 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] [instance: 64c5b412-f2df-4207-86ed-ebf11666bb96] Updating instance_info_cache with network_info: [{"id": "d80a22f5-c63c-4325-a07c-77494e2ebebb", "address": "fa:16:3e:5b:b9:66", "network": {"id": "67a02abd-6f15-4e26-ba0d-8a091ca98239", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-840496237-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4ec9294f6d4b4f44a72414374d646a4a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd80a22f5-c6", "ovs_interfaceid": "d80a22f5-c63c-4325-a07c-77494e2ebebb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 02:44:06 np0005548731 nova_compute[232433]: 2025-12-06 07:44:06.507 232437 DEBUG oslo_concurrency.lockutils [None req-cf674562-18ce-4d39-93b5-6fefd29106fa 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] Releasing lock "refresh_cache-64c5b412-f2df-4207-86ed-ebf11666bb96" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 02:44:06 np0005548731 nova_compute[232433]: 2025-12-06 07:44:06.508 232437 DEBUG nova.compute.manager [None req-cf674562-18ce-4d39-93b5-6fefd29106fa 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] [instance: 64c5b412-f2df-4207-86ed-ebf11666bb96] Instance network_info: |[{"id": "d80a22f5-c63c-4325-a07c-77494e2ebebb", "address": "fa:16:3e:5b:b9:66", "network": {"id": "67a02abd-6f15-4e26-ba0d-8a091ca98239", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-840496237-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4ec9294f6d4b4f44a72414374d646a4a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd80a22f5-c6", "ovs_interfaceid": "d80a22f5-c63c-4325-a07c-77494e2ebebb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Dec  6 02:44:06 np0005548731 nova_compute[232433]: 2025-12-06 07:44:06.778 232437 DEBUG nova.compute.manager [req-779821ab-0af6-4756-85f5-5ceacff18da3 req-a60d5952-d78f-4281-a327-de0e49a34a1d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 64c5b412-f2df-4207-86ed-ebf11666bb96] Received event network-changed-d80a22f5-c63c-4325-a07c-77494e2ebebb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:44:06 np0005548731 nova_compute[232433]: 2025-12-06 07:44:06.779 232437 DEBUG nova.compute.manager [req-779821ab-0af6-4756-85f5-5ceacff18da3 req-a60d5952-d78f-4281-a327-de0e49a34a1d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 64c5b412-f2df-4207-86ed-ebf11666bb96] Refreshing instance network info cache due to event network-changed-d80a22f5-c63c-4325-a07c-77494e2ebebb. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  6 02:44:06 np0005548731 nova_compute[232433]: 2025-12-06 07:44:06.779 232437 DEBUG oslo_concurrency.lockutils [req-779821ab-0af6-4756-85f5-5ceacff18da3 req-a60d5952-d78f-4281-a327-de0e49a34a1d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-64c5b412-f2df-4207-86ed-ebf11666bb96" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 02:44:06 np0005548731 nova_compute[232433]: 2025-12-06 07:44:06.779 232437 DEBUG oslo_concurrency.lockutils [req-779821ab-0af6-4756-85f5-5ceacff18da3 req-a60d5952-d78f-4281-a327-de0e49a34a1d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-64c5b412-f2df-4207-86ed-ebf11666bb96" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 02:44:06 np0005548731 nova_compute[232433]: 2025-12-06 07:44:06.780 232437 DEBUG nova.network.neutron [req-779821ab-0af6-4756-85f5-5ceacff18da3 req-a60d5952-d78f-4281-a327-de0e49a34a1d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 64c5b412-f2df-4207-86ed-ebf11666bb96] Refreshing network info cache for port d80a22f5-c63c-4325-a07c-77494e2ebebb _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  6 02:44:07 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:44:07 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:44:07 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:44:07.292 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:44:07 np0005548731 nova_compute[232433]: 2025-12-06 07:44:07.932 232437 DEBUG nova.network.neutron [req-779821ab-0af6-4756-85f5-5ceacff18da3 req-a60d5952-d78f-4281-a327-de0e49a34a1d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 64c5b412-f2df-4207-86ed-ebf11666bb96] Updated VIF entry in instance network info cache for port d80a22f5-c63c-4325-a07c-77494e2ebebb. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  6 02:44:07 np0005548731 nova_compute[232433]: 2025-12-06 07:44:07.933 232437 DEBUG nova.network.neutron [req-779821ab-0af6-4756-85f5-5ceacff18da3 req-a60d5952-d78f-4281-a327-de0e49a34a1d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 64c5b412-f2df-4207-86ed-ebf11666bb96] Updating instance_info_cache with network_info: [{"id": "d80a22f5-c63c-4325-a07c-77494e2ebebb", "address": "fa:16:3e:5b:b9:66", "network": {"id": "67a02abd-6f15-4e26-ba0d-8a091ca98239", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-840496237-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4ec9294f6d4b4f44a72414374d646a4a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd80a22f5-c6", "ovs_interfaceid": "d80a22f5-c63c-4325-a07c-77494e2ebebb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 02:44:08 np0005548731 nova_compute[232433]: 2025-12-06 07:44:08.119 232437 DEBUG oslo_concurrency.lockutils [req-779821ab-0af6-4756-85f5-5ceacff18da3 req-a60d5952-d78f-4281-a327-de0e49a34a1d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-64c5b412-f2df-4207-86ed-ebf11666bb96" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 02:44:08 np0005548731 nova_compute[232433]: 2025-12-06 07:44:08.342 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:44:08 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:44:08 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:44:08 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:44:08.418 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:44:09 np0005548731 nova_compute[232433]: 2025-12-06 07:44:09.008 232437 DEBUG nova.objects.instance [None req-cf674562-18ce-4d39-93b5-6fefd29106fa 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] Lazy-loading 'migration_context' on Instance uuid 64c5b412-f2df-4207-86ed-ebf11666bb96 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:44:09 np0005548731 nova_compute[232433]: 2025-12-06 07:44:09.022 232437 DEBUG nova.virt.libvirt.driver [None req-cf674562-18ce-4d39-93b5-6fefd29106fa 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] [instance: 64c5b412-f2df-4207-86ed-ebf11666bb96] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Dec  6 02:44:09 np0005548731 nova_compute[232433]: 2025-12-06 07:44:09.023 232437 DEBUG nova.virt.libvirt.driver [None req-cf674562-18ce-4d39-93b5-6fefd29106fa 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] [instance: 64c5b412-f2df-4207-86ed-ebf11666bb96] Ensure instance console log exists: /var/lib/nova/instances/64c5b412-f2df-4207-86ed-ebf11666bb96/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec  6 02:44:09 np0005548731 nova_compute[232433]: 2025-12-06 07:44:09.023 232437 DEBUG oslo_concurrency.lockutils [None req-cf674562-18ce-4d39-93b5-6fefd29106fa 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:44:09 np0005548731 nova_compute[232433]: 2025-12-06 07:44:09.024 232437 DEBUG oslo_concurrency.lockutils [None req-cf674562-18ce-4d39-93b5-6fefd29106fa 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:44:09 np0005548731 nova_compute[232433]: 2025-12-06 07:44:09.024 232437 DEBUG oslo_concurrency.lockutils [None req-cf674562-18ce-4d39-93b5-6fefd29106fa 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:44:09 np0005548731 nova_compute[232433]: 2025-12-06 07:44:09.026 232437 DEBUG nova.virt.libvirt.driver [None req-cf674562-18ce-4d39-93b5-6fefd29106fa 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] [instance: 64c5b412-f2df-4207-86ed-ebf11666bb96] Start _get_guest_xml network_info=[{"id": "d80a22f5-c63c-4325-a07c-77494e2ebebb", "address": "fa:16:3e:5b:b9:66", "network": {"id": "67a02abd-6f15-4e26-ba0d-8a091ca98239", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-840496237-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4ec9294f6d4b4f44a72414374d646a4a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd80a22f5-c6", "ovs_interfaceid": "d80a22f5-c63c-4325-a07c-77494e2ebebb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'device_type': 'disk', 'encryption_secret_uuid': None, 'disk_bus': 'virtio', 'boot_index': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'encryption_options': None, 'size': 0, 'encrypted': False, 'image_id': '6efab05d-c7cf-4770-a5c3-c806a2739063'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec  6 02:44:09 np0005548731 nova_compute[232433]: 2025-12-06 07:44:09.029 232437 WARNING nova.virt.libvirt.driver [None req-cf674562-18ce-4d39-93b5-6fefd29106fa 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  6 02:44:09 np0005548731 nova_compute[232433]: 2025-12-06 07:44:09.038 232437 DEBUG nova.virt.libvirt.host [None req-cf674562-18ce-4d39-93b5-6fefd29106fa 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec  6 02:44:09 np0005548731 nova_compute[232433]: 2025-12-06 07:44:09.039 232437 DEBUG nova.virt.libvirt.host [None req-cf674562-18ce-4d39-93b5-6fefd29106fa 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec  6 02:44:09 np0005548731 nova_compute[232433]: 2025-12-06 07:44:09.043 232437 DEBUG nova.virt.libvirt.host [None req-cf674562-18ce-4d39-93b5-6fefd29106fa 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec  6 02:44:09 np0005548731 nova_compute[232433]: 2025-12-06 07:44:09.043 232437 DEBUG nova.virt.libvirt.host [None req-cf674562-18ce-4d39-93b5-6fefd29106fa 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec  6 02:44:09 np0005548731 nova_compute[232433]: 2025-12-06 07:44:09.044 232437 DEBUG nova.virt.libvirt.driver [None req-cf674562-18ce-4d39-93b5-6fefd29106fa 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec  6 02:44:09 np0005548731 nova_compute[232433]: 2025-12-06 07:44:09.044 232437 DEBUG nova.virt.hardware [None req-cf674562-18ce-4d39-93b5-6fefd29106fa 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-06T06:56:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='25848a18-11d9-4f11-80b5-5d005675c76d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec  6 02:44:09 np0005548731 nova_compute[232433]: 2025-12-06 07:44:09.045 232437 DEBUG nova.virt.hardware [None req-cf674562-18ce-4d39-93b5-6fefd29106fa 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec  6 02:44:09 np0005548731 nova_compute[232433]: 2025-12-06 07:44:09.045 232437 DEBUG nova.virt.hardware [None req-cf674562-18ce-4d39-93b5-6fefd29106fa 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec  6 02:44:09 np0005548731 nova_compute[232433]: 2025-12-06 07:44:09.045 232437 DEBUG nova.virt.hardware [None req-cf674562-18ce-4d39-93b5-6fefd29106fa 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec  6 02:44:09 np0005548731 nova_compute[232433]: 2025-12-06 07:44:09.046 232437 DEBUG nova.virt.hardware [None req-cf674562-18ce-4d39-93b5-6fefd29106fa 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec  6 02:44:09 np0005548731 nova_compute[232433]: 2025-12-06 07:44:09.046 232437 DEBUG nova.virt.hardware [None req-cf674562-18ce-4d39-93b5-6fefd29106fa 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec  6 02:44:09 np0005548731 nova_compute[232433]: 2025-12-06 07:44:09.046 232437 DEBUG nova.virt.hardware [None req-cf674562-18ce-4d39-93b5-6fefd29106fa 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec  6 02:44:09 np0005548731 nova_compute[232433]: 2025-12-06 07:44:09.046 232437 DEBUG nova.virt.hardware [None req-cf674562-18ce-4d39-93b5-6fefd29106fa 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec  6 02:44:09 np0005548731 nova_compute[232433]: 2025-12-06 07:44:09.047 232437 DEBUG nova.virt.hardware [None req-cf674562-18ce-4d39-93b5-6fefd29106fa 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec  6 02:44:09 np0005548731 nova_compute[232433]: 2025-12-06 07:44:09.047 232437 DEBUG nova.virt.hardware [None req-cf674562-18ce-4d39-93b5-6fefd29106fa 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec  6 02:44:09 np0005548731 nova_compute[232433]: 2025-12-06 07:44:09.047 232437 DEBUG nova.virt.hardware [None req-cf674562-18ce-4d39-93b5-6fefd29106fa 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec  6 02:44:09 np0005548731 nova_compute[232433]: 2025-12-06 07:44:09.050 232437 DEBUG oslo_concurrency.processutils [None req-cf674562-18ce-4d39-93b5-6fefd29106fa 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:44:09 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:44:09 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:44:09 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:44:09.294 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:44:09 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  6 02:44:09 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2028827212' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  6 02:44:09 np0005548731 nova_compute[232433]: 2025-12-06 07:44:09.468 232437 DEBUG oslo_concurrency.processutils [None req-cf674562-18ce-4d39-93b5-6fefd29106fa 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.418s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:44:09 np0005548731 nova_compute[232433]: 2025-12-06 07:44:09.497 232437 DEBUG nova.storage.rbd_utils [None req-cf674562-18ce-4d39-93b5-6fefd29106fa 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] rbd image 64c5b412-f2df-4207-86ed-ebf11666bb96_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:44:09 np0005548731 nova_compute[232433]: 2025-12-06 07:44:09.502 232437 DEBUG oslo_concurrency.processutils [None req-cf674562-18ce-4d39-93b5-6fefd29106fa 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:44:09 np0005548731 podman[299799]: 2025-12-06 07:44:09.893996022 +0000 UTC m=+0.055076473 container health_status 26c2ce9bdb83c6db5ccad7f5b2a7cb4e9354ab0235ad2f942ea42c8feda2142b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Dec  6 02:44:09 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  6 02:44:09 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2461673043' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  6 02:44:09 np0005548731 podman[299800]: 2025-12-06 07:44:09.928426282 +0000 UTC m=+0.087377992 container health_status a118c3becf56cfbcae208065771ebf10a65162bb7b96389971293e534823fb78 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS)
Dec  6 02:44:09 np0005548731 podman[299801]: 2025-12-06 07:44:09.928675048 +0000 UTC m=+0.086686195 container health_status ae4fd08cdec31d6ae2a6b91ffff7deb6ca5a8375cb52ee4b685cc6f25796824e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=multipathd, io.buildah.version=1.41.3)
Dec  6 02:44:09 np0005548731 nova_compute[232433]: 2025-12-06 07:44:09.957 232437 DEBUG oslo_concurrency.processutils [None req-cf674562-18ce-4d39-93b5-6fefd29106fa 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.455s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:44:09 np0005548731 nova_compute[232433]: 2025-12-06 07:44:09.959 232437 DEBUG nova.virt.libvirt.vif [None req-cf674562-18ce-4d39-93b5-6fefd29106fa 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T07:44:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersNegativeTestJSON-server-1667548925',display_name='tempest-ServersNegativeTestJSON-server-1667548925',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serversnegativetestjson-server-1667548925',id=155,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4ec9294f6d4b4f44a72414374d646a4a',ramdisk_id='',reservation_id='r-g0hnfotp',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersNegativeTestJSON-776446295',owner_user_name='tempest-ServersNegativeTestJSON-776446295-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:44:03Z,user_data=None,user_id='1035ecd55ed54b57aa35fe32fb915cc5',uuid=64c5b412-f2df-4207-86ed-ebf11666bb96,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d80a22f5-c63c-4325-a07c-77494e2ebebb", "address": "fa:16:3e:5b:b9:66", "network": {"id": "67a02abd-6f15-4e26-ba0d-8a091ca98239", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-840496237-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4ec9294f6d4b4f44a72414374d646a4a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd80a22f5-c6", "ovs_interfaceid": "d80a22f5-c63c-4325-a07c-77494e2ebebb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec  6 02:44:09 np0005548731 nova_compute[232433]: 2025-12-06 07:44:09.960 232437 DEBUG nova.network.os_vif_util [None req-cf674562-18ce-4d39-93b5-6fefd29106fa 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] Converting VIF {"id": "d80a22f5-c63c-4325-a07c-77494e2ebebb", "address": "fa:16:3e:5b:b9:66", "network": {"id": "67a02abd-6f15-4e26-ba0d-8a091ca98239", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-840496237-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4ec9294f6d4b4f44a72414374d646a4a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd80a22f5-c6", "ovs_interfaceid": "d80a22f5-c63c-4325-a07c-77494e2ebebb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 02:44:09 np0005548731 nova_compute[232433]: 2025-12-06 07:44:09.961 232437 DEBUG nova.network.os_vif_util [None req-cf674562-18ce-4d39-93b5-6fefd29106fa 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5b:b9:66,bridge_name='br-int',has_traffic_filtering=True,id=d80a22f5-c63c-4325-a07c-77494e2ebebb,network=Network(67a02abd-6f15-4e26-ba0d-8a091ca98239),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd80a22f5-c6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 02:44:09 np0005548731 nova_compute[232433]: 2025-12-06 07:44:09.962 232437 DEBUG nova.objects.instance [None req-cf674562-18ce-4d39-93b5-6fefd29106fa 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] Lazy-loading 'pci_devices' on Instance uuid 64c5b412-f2df-4207-86ed-ebf11666bb96 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:44:09 np0005548731 nova_compute[232433]: 2025-12-06 07:44:09.983 232437 DEBUG nova.virt.libvirt.driver [None req-cf674562-18ce-4d39-93b5-6fefd29106fa 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] [instance: 64c5b412-f2df-4207-86ed-ebf11666bb96] End _get_guest_xml xml=<domain type="kvm">
Dec  6 02:44:09 np0005548731 nova_compute[232433]:  <uuid>64c5b412-f2df-4207-86ed-ebf11666bb96</uuid>
Dec  6 02:44:09 np0005548731 nova_compute[232433]:  <name>instance-0000009b</name>
Dec  6 02:44:09 np0005548731 nova_compute[232433]:  <memory>131072</memory>
Dec  6 02:44:09 np0005548731 nova_compute[232433]:  <vcpu>1</vcpu>
Dec  6 02:44:09 np0005548731 nova_compute[232433]:  <metadata>
Dec  6 02:44:09 np0005548731 nova_compute[232433]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec  6 02:44:09 np0005548731 nova_compute[232433]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec  6 02:44:09 np0005548731 nova_compute[232433]:      <nova:name>tempest-ServersNegativeTestJSON-server-1667548925</nova:name>
Dec  6 02:44:09 np0005548731 nova_compute[232433]:      <nova:creationTime>2025-12-06 07:44:09</nova:creationTime>
Dec  6 02:44:09 np0005548731 nova_compute[232433]:      <nova:flavor name="m1.nano">
Dec  6 02:44:09 np0005548731 nova_compute[232433]:        <nova:memory>128</nova:memory>
Dec  6 02:44:09 np0005548731 nova_compute[232433]:        <nova:disk>1</nova:disk>
Dec  6 02:44:09 np0005548731 nova_compute[232433]:        <nova:swap>0</nova:swap>
Dec  6 02:44:09 np0005548731 nova_compute[232433]:        <nova:ephemeral>0</nova:ephemeral>
Dec  6 02:44:09 np0005548731 nova_compute[232433]:        <nova:vcpus>1</nova:vcpus>
Dec  6 02:44:09 np0005548731 nova_compute[232433]:      </nova:flavor>
Dec  6 02:44:09 np0005548731 nova_compute[232433]:      <nova:owner>
Dec  6 02:44:09 np0005548731 nova_compute[232433]:        <nova:user uuid="1035ecd55ed54b57aa35fe32fb915cc5">tempest-ServersNegativeTestJSON-776446295-project-member</nova:user>
Dec  6 02:44:09 np0005548731 nova_compute[232433]:        <nova:project uuid="4ec9294f6d4b4f44a72414374d646a4a">tempest-ServersNegativeTestJSON-776446295</nova:project>
Dec  6 02:44:09 np0005548731 nova_compute[232433]:      </nova:owner>
Dec  6 02:44:09 np0005548731 nova_compute[232433]:      <nova:root type="image" uuid="6efab05d-c7cf-4770-a5c3-c806a2739063"/>
Dec  6 02:44:09 np0005548731 nova_compute[232433]:      <nova:ports>
Dec  6 02:44:09 np0005548731 nova_compute[232433]:        <nova:port uuid="d80a22f5-c63c-4325-a07c-77494e2ebebb">
Dec  6 02:44:09 np0005548731 nova_compute[232433]:          <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Dec  6 02:44:09 np0005548731 nova_compute[232433]:        </nova:port>
Dec  6 02:44:09 np0005548731 nova_compute[232433]:      </nova:ports>
Dec  6 02:44:09 np0005548731 nova_compute[232433]:    </nova:instance>
Dec  6 02:44:09 np0005548731 nova_compute[232433]:  </metadata>
Dec  6 02:44:09 np0005548731 nova_compute[232433]:  <sysinfo type="smbios">
Dec  6 02:44:09 np0005548731 nova_compute[232433]:    <system>
Dec  6 02:44:09 np0005548731 nova_compute[232433]:      <entry name="manufacturer">RDO</entry>
Dec  6 02:44:09 np0005548731 nova_compute[232433]:      <entry name="product">OpenStack Compute</entry>
Dec  6 02:44:09 np0005548731 nova_compute[232433]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec  6 02:44:09 np0005548731 nova_compute[232433]:      <entry name="serial">64c5b412-f2df-4207-86ed-ebf11666bb96</entry>
Dec  6 02:44:09 np0005548731 nova_compute[232433]:      <entry name="uuid">64c5b412-f2df-4207-86ed-ebf11666bb96</entry>
Dec  6 02:44:09 np0005548731 nova_compute[232433]:      <entry name="family">Virtual Machine</entry>
Dec  6 02:44:09 np0005548731 nova_compute[232433]:    </system>
Dec  6 02:44:09 np0005548731 nova_compute[232433]:  </sysinfo>
Dec  6 02:44:09 np0005548731 nova_compute[232433]:  <os>
Dec  6 02:44:09 np0005548731 nova_compute[232433]:    <type arch="x86_64" machine="q35">hvm</type>
Dec  6 02:44:09 np0005548731 nova_compute[232433]:    <boot dev="hd"/>
Dec  6 02:44:09 np0005548731 nova_compute[232433]:    <smbios mode="sysinfo"/>
Dec  6 02:44:09 np0005548731 nova_compute[232433]:  </os>
Dec  6 02:44:09 np0005548731 nova_compute[232433]:  <features>
Dec  6 02:44:09 np0005548731 nova_compute[232433]:    <acpi/>
Dec  6 02:44:09 np0005548731 nova_compute[232433]:    <apic/>
Dec  6 02:44:09 np0005548731 nova_compute[232433]:    <vmcoreinfo/>
Dec  6 02:44:09 np0005548731 nova_compute[232433]:  </features>
Dec  6 02:44:09 np0005548731 nova_compute[232433]:  <clock offset="utc">
Dec  6 02:44:09 np0005548731 nova_compute[232433]:    <timer name="pit" tickpolicy="delay"/>
Dec  6 02:44:09 np0005548731 nova_compute[232433]:    <timer name="rtc" tickpolicy="catchup"/>
Dec  6 02:44:09 np0005548731 nova_compute[232433]:    <timer name="hpet" present="no"/>
Dec  6 02:44:09 np0005548731 nova_compute[232433]:  </clock>
Dec  6 02:44:09 np0005548731 nova_compute[232433]:  <cpu mode="custom" match="exact">
Dec  6 02:44:09 np0005548731 nova_compute[232433]:    <model>Nehalem</model>
Dec  6 02:44:09 np0005548731 nova_compute[232433]:    <topology sockets="1" cores="1" threads="1"/>
Dec  6 02:44:09 np0005548731 nova_compute[232433]:  </cpu>
Dec  6 02:44:09 np0005548731 nova_compute[232433]:  <devices>
Dec  6 02:44:09 np0005548731 nova_compute[232433]:    <disk type="network" device="disk">
Dec  6 02:44:09 np0005548731 nova_compute[232433]:      <driver type="raw" cache="none"/>
Dec  6 02:44:09 np0005548731 nova_compute[232433]:      <source protocol="rbd" name="vms/64c5b412-f2df-4207-86ed-ebf11666bb96_disk">
Dec  6 02:44:09 np0005548731 nova_compute[232433]:        <host name="192.168.122.100" port="6789"/>
Dec  6 02:44:09 np0005548731 nova_compute[232433]:        <host name="192.168.122.102" port="6789"/>
Dec  6 02:44:09 np0005548731 nova_compute[232433]:        <host name="192.168.122.101" port="6789"/>
Dec  6 02:44:09 np0005548731 nova_compute[232433]:      </source>
Dec  6 02:44:09 np0005548731 nova_compute[232433]:      <auth username="openstack">
Dec  6 02:44:09 np0005548731 nova_compute[232433]:        <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec  6 02:44:09 np0005548731 nova_compute[232433]:      </auth>
Dec  6 02:44:09 np0005548731 nova_compute[232433]:      <target dev="vda" bus="virtio"/>
Dec  6 02:44:09 np0005548731 nova_compute[232433]:    </disk>
Dec  6 02:44:09 np0005548731 nova_compute[232433]:    <disk type="network" device="cdrom">
Dec  6 02:44:09 np0005548731 nova_compute[232433]:      <driver type="raw" cache="none"/>
Dec  6 02:44:09 np0005548731 nova_compute[232433]:      <source protocol="rbd" name="vms/64c5b412-f2df-4207-86ed-ebf11666bb96_disk.config">
Dec  6 02:44:09 np0005548731 nova_compute[232433]:        <host name="192.168.122.100" port="6789"/>
Dec  6 02:44:09 np0005548731 nova_compute[232433]:        <host name="192.168.122.102" port="6789"/>
Dec  6 02:44:09 np0005548731 nova_compute[232433]:        <host name="192.168.122.101" port="6789"/>
Dec  6 02:44:09 np0005548731 nova_compute[232433]:      </source>
Dec  6 02:44:09 np0005548731 nova_compute[232433]:      <auth username="openstack">
Dec  6 02:44:09 np0005548731 nova_compute[232433]:        <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec  6 02:44:09 np0005548731 nova_compute[232433]:      </auth>
Dec  6 02:44:09 np0005548731 nova_compute[232433]:      <target dev="sda" bus="sata"/>
Dec  6 02:44:09 np0005548731 nova_compute[232433]:    </disk>
Dec  6 02:44:09 np0005548731 nova_compute[232433]:    <interface type="ethernet">
Dec  6 02:44:09 np0005548731 nova_compute[232433]:      <mac address="fa:16:3e:5b:b9:66"/>
Dec  6 02:44:09 np0005548731 nova_compute[232433]:      <model type="virtio"/>
Dec  6 02:44:09 np0005548731 nova_compute[232433]:      <driver name="vhost" rx_queue_size="512"/>
Dec  6 02:44:09 np0005548731 nova_compute[232433]:      <mtu size="1442"/>
Dec  6 02:44:09 np0005548731 nova_compute[232433]:      <target dev="tapd80a22f5-c6"/>
Dec  6 02:44:09 np0005548731 nova_compute[232433]:    </interface>
Dec  6 02:44:09 np0005548731 nova_compute[232433]:    <serial type="pty">
Dec  6 02:44:09 np0005548731 nova_compute[232433]:      <log file="/var/lib/nova/instances/64c5b412-f2df-4207-86ed-ebf11666bb96/console.log" append="off"/>
Dec  6 02:44:09 np0005548731 nova_compute[232433]:    </serial>
Dec  6 02:44:09 np0005548731 nova_compute[232433]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec  6 02:44:09 np0005548731 nova_compute[232433]:    <video>
Dec  6 02:44:09 np0005548731 nova_compute[232433]:      <model type="virtio"/>
Dec  6 02:44:09 np0005548731 nova_compute[232433]:    </video>
Dec  6 02:44:09 np0005548731 nova_compute[232433]:    <input type="tablet" bus="usb"/>
Dec  6 02:44:09 np0005548731 nova_compute[232433]:    <rng model="virtio">
Dec  6 02:44:09 np0005548731 nova_compute[232433]:      <backend model="random">/dev/urandom</backend>
Dec  6 02:44:09 np0005548731 nova_compute[232433]:    </rng>
Dec  6 02:44:09 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root"/>
Dec  6 02:44:09 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:44:09 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:44:09 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:44:09 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:44:09 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:44:09 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:44:09 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:44:09 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:44:09 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:44:09 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:44:09 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:44:09 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:44:09 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:44:09 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:44:09 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:44:09 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:44:09 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:44:09 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:44:09 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:44:09 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:44:09 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:44:09 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:44:09 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:44:09 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:44:09 np0005548731 nova_compute[232433]:    <controller type="usb" index="0"/>
Dec  6 02:44:09 np0005548731 nova_compute[232433]:    <memballoon model="virtio">
Dec  6 02:44:09 np0005548731 nova_compute[232433]:      <stats period="10"/>
Dec  6 02:44:09 np0005548731 nova_compute[232433]:    </memballoon>
Dec  6 02:44:09 np0005548731 nova_compute[232433]:  </devices>
Dec  6 02:44:09 np0005548731 nova_compute[232433]: </domain>
Dec  6 02:44:09 np0005548731 nova_compute[232433]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec  6 02:44:09 np0005548731 nova_compute[232433]: 2025-12-06 07:44:09.986 232437 DEBUG nova.compute.manager [None req-cf674562-18ce-4d39-93b5-6fefd29106fa 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] [instance: 64c5b412-f2df-4207-86ed-ebf11666bb96] Preparing to wait for external event network-vif-plugged-d80a22f5-c63c-4325-a07c-77494e2ebebb prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec  6 02:44:09 np0005548731 nova_compute[232433]: 2025-12-06 07:44:09.987 232437 DEBUG oslo_concurrency.lockutils [None req-cf674562-18ce-4d39-93b5-6fefd29106fa 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] Acquiring lock "64c5b412-f2df-4207-86ed-ebf11666bb96-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:44:09 np0005548731 nova_compute[232433]: 2025-12-06 07:44:09.987 232437 DEBUG oslo_concurrency.lockutils [None req-cf674562-18ce-4d39-93b5-6fefd29106fa 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] Lock "64c5b412-f2df-4207-86ed-ebf11666bb96-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:44:09 np0005548731 nova_compute[232433]: 2025-12-06 07:44:09.988 232437 DEBUG oslo_concurrency.lockutils [None req-cf674562-18ce-4d39-93b5-6fefd29106fa 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] Lock "64c5b412-f2df-4207-86ed-ebf11666bb96-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:44:09 np0005548731 nova_compute[232433]: 2025-12-06 07:44:09.989 232437 DEBUG nova.virt.libvirt.vif [None req-cf674562-18ce-4d39-93b5-6fefd29106fa 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T07:44:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersNegativeTestJSON-server-1667548925',display_name='tempest-ServersNegativeTestJSON-server-1667548925',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serversnegativetestjson-server-1667548925',id=155,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4ec9294f6d4b4f44a72414374d646a4a',ramdisk_id='',reservation_id='r-g0hnfotp',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersNegativeTestJSON-776446295',owner_user_name='tempest-ServersNegativeTestJSON-776446295-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:44:03Z,user_data=None,user_id='1035ecd55ed54b57aa35fe32fb915cc5',uuid=64c5b412-f2df-4207-86ed-ebf11666bb96,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d80a22f5-c63c-4325-a07c-77494e2ebebb", "address": "fa:16:3e:5b:b9:66", "network": {"id": "67a02abd-6f15-4e26-ba0d-8a091ca98239", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-840496237-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4ec9294f6d4b4f44a72414374d646a4a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd80a22f5-c6", "ovs_interfaceid": "d80a22f5-c63c-4325-a07c-77494e2ebebb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec  6 02:44:09 np0005548731 nova_compute[232433]: 2025-12-06 07:44:09.989 232437 DEBUG nova.network.os_vif_util [None req-cf674562-18ce-4d39-93b5-6fefd29106fa 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] Converting VIF {"id": "d80a22f5-c63c-4325-a07c-77494e2ebebb", "address": "fa:16:3e:5b:b9:66", "network": {"id": "67a02abd-6f15-4e26-ba0d-8a091ca98239", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-840496237-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4ec9294f6d4b4f44a72414374d646a4a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd80a22f5-c6", "ovs_interfaceid": "d80a22f5-c63c-4325-a07c-77494e2ebebb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 02:44:09 np0005548731 nova_compute[232433]: 2025-12-06 07:44:09.990 232437 DEBUG nova.network.os_vif_util [None req-cf674562-18ce-4d39-93b5-6fefd29106fa 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5b:b9:66,bridge_name='br-int',has_traffic_filtering=True,id=d80a22f5-c63c-4325-a07c-77494e2ebebb,network=Network(67a02abd-6f15-4e26-ba0d-8a091ca98239),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd80a22f5-c6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 02:44:09 np0005548731 nova_compute[232433]: 2025-12-06 07:44:09.990 232437 DEBUG os_vif [None req-cf674562-18ce-4d39-93b5-6fefd29106fa 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:5b:b9:66,bridge_name='br-int',has_traffic_filtering=True,id=d80a22f5-c63c-4325-a07c-77494e2ebebb,network=Network(67a02abd-6f15-4e26-ba0d-8a091ca98239),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd80a22f5-c6') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec  6 02:44:09 np0005548731 nova_compute[232433]: 2025-12-06 07:44:09.991 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:44:09 np0005548731 nova_compute[232433]: 2025-12-06 07:44:09.992 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:44:09 np0005548731 nova_compute[232433]: 2025-12-06 07:44:09.992 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  6 02:44:09 np0005548731 nova_compute[232433]: 2025-12-06 07:44:09.997 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:44:09 np0005548731 nova_compute[232433]: 2025-12-06 07:44:09.997 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd80a22f5-c6, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:44:09 np0005548731 nova_compute[232433]: 2025-12-06 07:44:09.997 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapd80a22f5-c6, col_values=(('external_ids', {'iface-id': 'd80a22f5-c63c-4325-a07c-77494e2ebebb', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:5b:b9:66', 'vm-uuid': '64c5b412-f2df-4207-86ed-ebf11666bb96'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:44:10 np0005548731 nova_compute[232433]: 2025-12-06 07:44:09.999 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:44:10 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:44:10 np0005548731 NetworkManager[49182]: <info>  [1765007050.0001] manager: (tapd80a22f5-c6): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/326)
Dec  6 02:44:10 np0005548731 nova_compute[232433]: 2025-12-06 07:44:10.000 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  6 02:44:10 np0005548731 nova_compute[232433]: 2025-12-06 07:44:10.005 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:44:10 np0005548731 nova_compute[232433]: 2025-12-06 07:44:10.006 232437 INFO os_vif [None req-cf674562-18ce-4d39-93b5-6fefd29106fa 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:5b:b9:66,bridge_name='br-int',has_traffic_filtering=True,id=d80a22f5-c63c-4325-a07c-77494e2ebebb,network=Network(67a02abd-6f15-4e26-ba0d-8a091ca98239),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd80a22f5-c6')#033[00m
Dec  6 02:44:10 np0005548731 nova_compute[232433]: 2025-12-06 07:44:10.071 232437 DEBUG nova.virt.libvirt.driver [None req-cf674562-18ce-4d39-93b5-6fefd29106fa 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  6 02:44:10 np0005548731 nova_compute[232433]: 2025-12-06 07:44:10.072 232437 DEBUG nova.virt.libvirt.driver [None req-cf674562-18ce-4d39-93b5-6fefd29106fa 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  6 02:44:10 np0005548731 nova_compute[232433]: 2025-12-06 07:44:10.072 232437 DEBUG nova.virt.libvirt.driver [None req-cf674562-18ce-4d39-93b5-6fefd29106fa 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] No VIF found with MAC fa:16:3e:5b:b9:66, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec  6 02:44:10 np0005548731 nova_compute[232433]: 2025-12-06 07:44:10.073 232437 INFO nova.virt.libvirt.driver [None req-cf674562-18ce-4d39-93b5-6fefd29106fa 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] [instance: 64c5b412-f2df-4207-86ed-ebf11666bb96] Using config drive#033[00m
Dec  6 02:44:10 np0005548731 nova_compute[232433]: 2025-12-06 07:44:10.099 232437 DEBUG nova.storage.rbd_utils [None req-cf674562-18ce-4d39-93b5-6fefd29106fa 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] rbd image 64c5b412-f2df-4207-86ed-ebf11666bb96_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:44:10 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:44:10 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:44:10 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:44:10.420 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:44:11 np0005548731 nova_compute[232433]: 2025-12-06 07:44:11.095 232437 INFO nova.virt.libvirt.driver [None req-cf674562-18ce-4d39-93b5-6fefd29106fa 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] [instance: 64c5b412-f2df-4207-86ed-ebf11666bb96] Creating config drive at /var/lib/nova/instances/64c5b412-f2df-4207-86ed-ebf11666bb96/disk.config#033[00m
Dec  6 02:44:11 np0005548731 nova_compute[232433]: 2025-12-06 07:44:11.101 232437 DEBUG oslo_concurrency.processutils [None req-cf674562-18ce-4d39-93b5-6fefd29106fa 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/64c5b412-f2df-4207-86ed-ebf11666bb96/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpfv8elfzm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:44:11 np0005548731 nova_compute[232433]: 2025-12-06 07:44:11.232 232437 DEBUG oslo_concurrency.processutils [None req-cf674562-18ce-4d39-93b5-6fefd29106fa 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/64c5b412-f2df-4207-86ed-ebf11666bb96/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpfv8elfzm" returned: 0 in 0.131s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:44:11 np0005548731 nova_compute[232433]: 2025-12-06 07:44:11.261 232437 DEBUG nova.storage.rbd_utils [None req-cf674562-18ce-4d39-93b5-6fefd29106fa 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] rbd image 64c5b412-f2df-4207-86ed-ebf11666bb96_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:44:11 np0005548731 nova_compute[232433]: 2025-12-06 07:44:11.265 232437 DEBUG oslo_concurrency.processutils [None req-cf674562-18ce-4d39-93b5-6fefd29106fa 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/64c5b412-f2df-4207-86ed-ebf11666bb96/disk.config 64c5b412-f2df-4207-86ed-ebf11666bb96_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:44:11 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:44:11 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:44:11 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:44:11.297 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:44:11 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e333 e333: 3 total, 3 up, 3 in
Dec  6 02:44:12 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:44:12 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:44:12 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:44:12.422 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:44:13 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:44:13 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:44:13 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:44:13.299 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:44:13 np0005548731 nova_compute[232433]: 2025-12-06 07:44:13.343 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:44:14 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:44:14 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:44:14 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:44:14.424 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:44:14 np0005548731 nova_compute[232433]: 2025-12-06 07:44:14.817 232437 DEBUG oslo_concurrency.processutils [None req-cf674562-18ce-4d39-93b5-6fefd29106fa 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/64c5b412-f2df-4207-86ed-ebf11666bb96/disk.config 64c5b412-f2df-4207-86ed-ebf11666bb96_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 3.551s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:44:14 np0005548731 nova_compute[232433]: 2025-12-06 07:44:14.819 232437 INFO nova.virt.libvirt.driver [None req-cf674562-18ce-4d39-93b5-6fefd29106fa 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] [instance: 64c5b412-f2df-4207-86ed-ebf11666bb96] Deleting local config drive /var/lib/nova/instances/64c5b412-f2df-4207-86ed-ebf11666bb96/disk.config because it was imported into RBD.#033[00m
Dec  6 02:44:14 np0005548731 kernel: tapd80a22f5-c6: entered promiscuous mode
Dec  6 02:44:14 np0005548731 NetworkManager[49182]: <info>  [1765007054.8716] manager: (tapd80a22f5-c6): new Tun device (/org/freedesktop/NetworkManager/Devices/327)
Dec  6 02:44:14 np0005548731 nova_compute[232433]: 2025-12-06 07:44:14.873 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:44:14 np0005548731 ovn_controller[133927]: 2025-12-06T07:44:14Z|00706|binding|INFO|Claiming lport d80a22f5-c63c-4325-a07c-77494e2ebebb for this chassis.
Dec  6 02:44:14 np0005548731 ovn_controller[133927]: 2025-12-06T07:44:14Z|00707|binding|INFO|d80a22f5-c63c-4325-a07c-77494e2ebebb: Claiming fa:16:3e:5b:b9:66 10.100.0.5
Dec  6 02:44:14 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:44:14.890 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5b:b9:66 10.100.0.5'], port_security=['fa:16:3e:5b:b9:66 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '64c5b412-f2df-4207-86ed-ebf11666bb96', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-67a02abd-6f15-4e26-ba0d-8a091ca98239', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4ec9294f6d4b4f44a72414374d646a4a', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'ef8d01c6-8791-4eef-8d34-906ccbdd6237', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a13aa6e4-a519-406f-87b7-05ba3d74a296, chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], logical_port=d80a22f5-c63c-4325-a07c-77494e2ebebb) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 02:44:14 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:44:14.892 143965 INFO neutron.agent.ovn.metadata.agent [-] Port d80a22f5-c63c-4325-a07c-77494e2ebebb in datapath 67a02abd-6f15-4e26-ba0d-8a091ca98239 bound to our chassis#033[00m
Dec  6 02:44:14 np0005548731 ovn_controller[133927]: 2025-12-06T07:44:14Z|00708|binding|INFO|Setting lport d80a22f5-c63c-4325-a07c-77494e2ebebb ovn-installed in OVS
Dec  6 02:44:14 np0005548731 ovn_controller[133927]: 2025-12-06T07:44:14Z|00709|binding|INFO|Setting lport d80a22f5-c63c-4325-a07c-77494e2ebebb up in Southbound
Dec  6 02:44:14 np0005548731 nova_compute[232433]: 2025-12-06 07:44:14.893 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:44:14 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:44:14.895 143965 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 67a02abd-6f15-4e26-ba0d-8a091ca98239#033[00m
Dec  6 02:44:14 np0005548731 nova_compute[232433]: 2025-12-06 07:44:14.900 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:44:14 np0005548731 systemd-udevd[299935]: Network interface NamePolicy= disabled on kernel command line.
Dec  6 02:44:14 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:44:14.911 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[60158ef8-9084-479f-b31a-fa12ab8f2212]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:44:14 np0005548731 NetworkManager[49182]: <info>  [1765007054.9143] device (tapd80a22f5-c6): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec  6 02:44:14 np0005548731 NetworkManager[49182]: <info>  [1765007054.9155] device (tapd80a22f5-c6): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec  6 02:44:14 np0005548731 systemd-machined[195355]: New machine qemu-72-instance-0000009b.
Dec  6 02:44:14 np0005548731 systemd[1]: Started Virtual Machine qemu-72-instance-0000009b.
Dec  6 02:44:14 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:44:14.940 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[0a98b723-2d43-44ac-a540-4062f0ff8aa5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:44:14 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:44:14.945 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[f371ee48-42bc-4563-b251-f39a175ef358]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:44:14 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:44:14.973 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[bd1e75a3-d563-4135-bb3e-07432147e0e3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:44:14 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:44:14.994 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[d5e009e0-6b13-43c2-be1e-a256946e8282]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap67a02abd-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:43:89:ec'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 6, 'tx_packets': 5, 'rx_bytes': 532, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 6, 'tx_packets': 5, 'rx_bytes': 532, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 214], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 729864, 'reachable_time': 40789, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 299952, 'error': None, 'target': 'ovnmeta-67a02abd-6f15-4e26-ba0d-8a091ca98239', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:44:15 np0005548731 nova_compute[232433]: 2025-12-06 07:44:14.999 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:44:15 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e333 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:44:15 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:44:15.011 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[26a764d3-68bb-4c07-9bc4-7ddd694f6cd5]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap67a02abd-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 729874, 'tstamp': 729874}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 299953, 'error': None, 'target': 'ovnmeta-67a02abd-6f15-4e26-ba0d-8a091ca98239', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap67a02abd-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 729877, 'tstamp': 729877}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 299953, 'error': None, 'target': 'ovnmeta-67a02abd-6f15-4e26-ba0d-8a091ca98239', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:44:15 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:44:15.012 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap67a02abd-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:44:15 np0005548731 nova_compute[232433]: 2025-12-06 07:44:15.014 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:44:15 np0005548731 nova_compute[232433]: 2025-12-06 07:44:15.015 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:44:15 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:44:15.015 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap67a02abd-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:44:15 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:44:15.016 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  6 02:44:15 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:44:15.016 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap67a02abd-60, col_values=(('external_ids', {'iface-id': 'f1ca157c-f88b-4351-b03a-b04a75537062'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:44:15 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:44:15.017 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  6 02:44:15 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e334 e334: 3 total, 3 up, 3 in
Dec  6 02:44:15 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:44:15 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:44:15 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:44:15.302 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:44:15 np0005548731 nova_compute[232433]: 2025-12-06 07:44:15.398 232437 DEBUG nova.virt.driver [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Emitting event <LifecycleEvent: 1765007055.3968763, 64c5b412-f2df-4207-86ed-ebf11666bb96 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 02:44:15 np0005548731 nova_compute[232433]: 2025-12-06 07:44:15.398 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 64c5b412-f2df-4207-86ed-ebf11666bb96] VM Started (Lifecycle Event)#033[00m
Dec  6 02:44:15 np0005548731 nova_compute[232433]: 2025-12-06 07:44:15.438 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 64c5b412-f2df-4207-86ed-ebf11666bb96] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:44:15 np0005548731 nova_compute[232433]: 2025-12-06 07:44:15.442 232437 DEBUG nova.virt.driver [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Emitting event <LifecycleEvent: 1765007055.3974707, 64c5b412-f2df-4207-86ed-ebf11666bb96 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 02:44:15 np0005548731 nova_compute[232433]: 2025-12-06 07:44:15.443 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 64c5b412-f2df-4207-86ed-ebf11666bb96] VM Paused (Lifecycle Event)#033[00m
Dec  6 02:44:15 np0005548731 nova_compute[232433]: 2025-12-06 07:44:15.472 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 64c5b412-f2df-4207-86ed-ebf11666bb96] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:44:15 np0005548731 nova_compute[232433]: 2025-12-06 07:44:15.475 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 64c5b412-f2df-4207-86ed-ebf11666bb96] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  6 02:44:15 np0005548731 nova_compute[232433]: 2025-12-06 07:44:15.587 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 64c5b412-f2df-4207-86ed-ebf11666bb96] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  6 02:44:15 np0005548731 nova_compute[232433]: 2025-12-06 07:44:15.646 232437 DEBUG nova.compute.manager [req-4abaa566-11ec-4e96-81ad-f5a5fd7fe240 req-e936736b-3d77-4ffc-89bd-adc09cec1ab0 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 64c5b412-f2df-4207-86ed-ebf11666bb96] Received event network-vif-plugged-d80a22f5-c63c-4325-a07c-77494e2ebebb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:44:15 np0005548731 nova_compute[232433]: 2025-12-06 07:44:15.646 232437 DEBUG oslo_concurrency.lockutils [req-4abaa566-11ec-4e96-81ad-f5a5fd7fe240 req-e936736b-3d77-4ffc-89bd-adc09cec1ab0 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "64c5b412-f2df-4207-86ed-ebf11666bb96-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:44:15 np0005548731 nova_compute[232433]: 2025-12-06 07:44:15.646 232437 DEBUG oslo_concurrency.lockutils [req-4abaa566-11ec-4e96-81ad-f5a5fd7fe240 req-e936736b-3d77-4ffc-89bd-adc09cec1ab0 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "64c5b412-f2df-4207-86ed-ebf11666bb96-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:44:15 np0005548731 nova_compute[232433]: 2025-12-06 07:44:15.647 232437 DEBUG oslo_concurrency.lockutils [req-4abaa566-11ec-4e96-81ad-f5a5fd7fe240 req-e936736b-3d77-4ffc-89bd-adc09cec1ab0 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "64c5b412-f2df-4207-86ed-ebf11666bb96-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:44:15 np0005548731 nova_compute[232433]: 2025-12-06 07:44:15.647 232437 DEBUG nova.compute.manager [req-4abaa566-11ec-4e96-81ad-f5a5fd7fe240 req-e936736b-3d77-4ffc-89bd-adc09cec1ab0 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 64c5b412-f2df-4207-86ed-ebf11666bb96] Processing event network-vif-plugged-d80a22f5-c63c-4325-a07c-77494e2ebebb _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Dec  6 02:44:15 np0005548731 nova_compute[232433]: 2025-12-06 07:44:15.647 232437 DEBUG nova.compute.manager [None req-cf674562-18ce-4d39-93b5-6fefd29106fa 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] [instance: 64c5b412-f2df-4207-86ed-ebf11666bb96] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec  6 02:44:15 np0005548731 nova_compute[232433]: 2025-12-06 07:44:15.651 232437 DEBUG nova.virt.driver [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Emitting event <LifecycleEvent: 1765007055.651239, 64c5b412-f2df-4207-86ed-ebf11666bb96 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 02:44:15 np0005548731 nova_compute[232433]: 2025-12-06 07:44:15.651 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 64c5b412-f2df-4207-86ed-ebf11666bb96] VM Resumed (Lifecycle Event)#033[00m
Dec  6 02:44:15 np0005548731 nova_compute[232433]: 2025-12-06 07:44:15.653 232437 DEBUG nova.virt.libvirt.driver [None req-cf674562-18ce-4d39-93b5-6fefd29106fa 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] [instance: 64c5b412-f2df-4207-86ed-ebf11666bb96] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Dec  6 02:44:15 np0005548731 nova_compute[232433]: 2025-12-06 07:44:15.655 232437 INFO nova.virt.libvirt.driver [-] [instance: 64c5b412-f2df-4207-86ed-ebf11666bb96] Instance spawned successfully.#033[00m
Dec  6 02:44:15 np0005548731 nova_compute[232433]: 2025-12-06 07:44:15.656 232437 DEBUG nova.virt.libvirt.driver [None req-cf674562-18ce-4d39-93b5-6fefd29106fa 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] [instance: 64c5b412-f2df-4207-86ed-ebf11666bb96] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Dec  6 02:44:15 np0005548731 nova_compute[232433]: 2025-12-06 07:44:15.680 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 64c5b412-f2df-4207-86ed-ebf11666bb96] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:44:15 np0005548731 nova_compute[232433]: 2025-12-06 07:44:15.683 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 64c5b412-f2df-4207-86ed-ebf11666bb96] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  6 02:44:15 np0005548731 nova_compute[232433]: 2025-12-06 07:44:15.690 232437 DEBUG nova.virt.libvirt.driver [None req-cf674562-18ce-4d39-93b5-6fefd29106fa 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] [instance: 64c5b412-f2df-4207-86ed-ebf11666bb96] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 02:44:15 np0005548731 nova_compute[232433]: 2025-12-06 07:44:15.691 232437 DEBUG nova.virt.libvirt.driver [None req-cf674562-18ce-4d39-93b5-6fefd29106fa 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] [instance: 64c5b412-f2df-4207-86ed-ebf11666bb96] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 02:44:15 np0005548731 nova_compute[232433]: 2025-12-06 07:44:15.691 232437 DEBUG nova.virt.libvirt.driver [None req-cf674562-18ce-4d39-93b5-6fefd29106fa 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] [instance: 64c5b412-f2df-4207-86ed-ebf11666bb96] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 02:44:15 np0005548731 nova_compute[232433]: 2025-12-06 07:44:15.691 232437 DEBUG nova.virt.libvirt.driver [None req-cf674562-18ce-4d39-93b5-6fefd29106fa 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] [instance: 64c5b412-f2df-4207-86ed-ebf11666bb96] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 02:44:15 np0005548731 nova_compute[232433]: 2025-12-06 07:44:15.692 232437 DEBUG nova.virt.libvirt.driver [None req-cf674562-18ce-4d39-93b5-6fefd29106fa 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] [instance: 64c5b412-f2df-4207-86ed-ebf11666bb96] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 02:44:15 np0005548731 nova_compute[232433]: 2025-12-06 07:44:15.692 232437 DEBUG nova.virt.libvirt.driver [None req-cf674562-18ce-4d39-93b5-6fefd29106fa 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] [instance: 64c5b412-f2df-4207-86ed-ebf11666bb96] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 02:44:15 np0005548731 nova_compute[232433]: 2025-12-06 07:44:15.749 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 64c5b412-f2df-4207-86ed-ebf11666bb96] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  6 02:44:15 np0005548731 nova_compute[232433]: 2025-12-06 07:44:15.849 232437 INFO nova.compute.manager [None req-cf674562-18ce-4d39-93b5-6fefd29106fa 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] [instance: 64c5b412-f2df-4207-86ed-ebf11666bb96] Took 12.28 seconds to spawn the instance on the hypervisor.#033[00m
Dec  6 02:44:15 np0005548731 nova_compute[232433]: 2025-12-06 07:44:15.850 232437 DEBUG nova.compute.manager [None req-cf674562-18ce-4d39-93b5-6fefd29106fa 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] [instance: 64c5b412-f2df-4207-86ed-ebf11666bb96] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:44:15 np0005548731 ovn_controller[133927]: 2025-12-06T07:44:15Z|00093|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:16:41:1e 10.100.0.7
Dec  6 02:44:15 np0005548731 ovn_controller[133927]: 2025-12-06T07:44:15Z|00094|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:16:41:1e 10.100.0.7
Dec  6 02:44:15 np0005548731 nova_compute[232433]: 2025-12-06 07:44:15.997 232437 INFO nova.compute.manager [None req-cf674562-18ce-4d39-93b5-6fefd29106fa 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] [instance: 64c5b412-f2df-4207-86ed-ebf11666bb96] Took 13.39 seconds to build instance.#033[00m
Dec  6 02:44:16 np0005548731 nova_compute[232433]: 2025-12-06 07:44:16.130 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:44:16 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:44:16 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:44:16 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:44:16.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:44:16 np0005548731 nova_compute[232433]: 2025-12-06 07:44:16.507 232437 DEBUG oslo_concurrency.lockutils [None req-cf674562-18ce-4d39-93b5-6fefd29106fa 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] Lock "64c5b412-f2df-4207-86ed-ebf11666bb96" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.994s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:44:16 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e335 e335: 3 total, 3 up, 3 in
Dec  6 02:44:16 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:44:16.795 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=58, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:ec:b3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '32:72:e7:89:e0:7d'}, ipsec=False) old=SB_Global(nb_cfg=57) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 02:44:16 np0005548731 nova_compute[232433]: 2025-12-06 07:44:16.795 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:44:16 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:44:16.796 143965 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec  6 02:44:17 np0005548731 nova_compute[232433]: 2025-12-06 07:44:17.099 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:44:17 np0005548731 nova_compute[232433]: 2025-12-06 07:44:17.103 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:44:17 np0005548731 nova_compute[232433]: 2025-12-06 07:44:17.103 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  6 02:44:17 np0005548731 nova_compute[232433]: 2025-12-06 07:44:17.103 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  6 02:44:17 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:44:17 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:44:17 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:44:17.305 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:44:17 np0005548731 nova_compute[232433]: 2025-12-06 07:44:17.539 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Acquiring lock "refresh_cache-f67d89a8-836a-4f47-af8d-37cf99529275" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 02:44:17 np0005548731 nova_compute[232433]: 2025-12-06 07:44:17.539 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Acquired lock "refresh_cache-f67d89a8-836a-4f47-af8d-37cf99529275" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 02:44:17 np0005548731 nova_compute[232433]: 2025-12-06 07:44:17.539 232437 DEBUG nova.network.neutron [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] [instance: f67d89a8-836a-4f47-af8d-37cf99529275] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Dec  6 02:44:17 np0005548731 nova_compute[232433]: 2025-12-06 07:44:17.539 232437 DEBUG nova.objects.instance [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lazy-loading 'info_cache' on Instance uuid f67d89a8-836a-4f47-af8d-37cf99529275 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:44:17 np0005548731 nova_compute[232433]: 2025-12-06 07:44:17.955 232437 DEBUG nova.compute.manager [req-c0f0ebd5-603d-4f5b-8a92-d7d08a3ec8e7 req-bae97baa-1293-4b14-80de-c47db91e1365 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 64c5b412-f2df-4207-86ed-ebf11666bb96] Received event network-vif-plugged-d80a22f5-c63c-4325-a07c-77494e2ebebb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:44:17 np0005548731 nova_compute[232433]: 2025-12-06 07:44:17.955 232437 DEBUG oslo_concurrency.lockutils [req-c0f0ebd5-603d-4f5b-8a92-d7d08a3ec8e7 req-bae97baa-1293-4b14-80de-c47db91e1365 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "64c5b412-f2df-4207-86ed-ebf11666bb96-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:44:17 np0005548731 nova_compute[232433]: 2025-12-06 07:44:17.955 232437 DEBUG oslo_concurrency.lockutils [req-c0f0ebd5-603d-4f5b-8a92-d7d08a3ec8e7 req-bae97baa-1293-4b14-80de-c47db91e1365 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "64c5b412-f2df-4207-86ed-ebf11666bb96-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:44:17 np0005548731 nova_compute[232433]: 2025-12-06 07:44:17.956 232437 DEBUG oslo_concurrency.lockutils [req-c0f0ebd5-603d-4f5b-8a92-d7d08a3ec8e7 req-bae97baa-1293-4b14-80de-c47db91e1365 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "64c5b412-f2df-4207-86ed-ebf11666bb96-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:44:17 np0005548731 nova_compute[232433]: 2025-12-06 07:44:17.956 232437 DEBUG nova.compute.manager [req-c0f0ebd5-603d-4f5b-8a92-d7d08a3ec8e7 req-bae97baa-1293-4b14-80de-c47db91e1365 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 64c5b412-f2df-4207-86ed-ebf11666bb96] No waiting events found dispatching network-vif-plugged-d80a22f5-c63c-4325-a07c-77494e2ebebb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 02:44:17 np0005548731 nova_compute[232433]: 2025-12-06 07:44:17.956 232437 WARNING nova.compute.manager [req-c0f0ebd5-603d-4f5b-8a92-d7d08a3ec8e7 req-bae97baa-1293-4b14-80de-c47db91e1365 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 64c5b412-f2df-4207-86ed-ebf11666bb96] Received unexpected event network-vif-plugged-d80a22f5-c63c-4325-a07c-77494e2ebebb for instance with vm_state active and task_state None.#033[00m
Dec  6 02:44:18 np0005548731 nova_compute[232433]: 2025-12-06 07:44:18.348 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:44:18 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:44:18 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:44:18 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:44:18.428 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:44:19 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:44:19 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:44:19 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:44:19.308 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:44:19 np0005548731 nova_compute[232433]: 2025-12-06 07:44:19.546 232437 DEBUG oslo_concurrency.lockutils [None req-156d6eea-3fb6-4ac2-b62b-98baa11d48c3 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] Acquiring lock "64c5b412-f2df-4207-86ed-ebf11666bb96" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:44:19 np0005548731 nova_compute[232433]: 2025-12-06 07:44:19.547 232437 DEBUG oslo_concurrency.lockutils [None req-156d6eea-3fb6-4ac2-b62b-98baa11d48c3 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] Lock "64c5b412-f2df-4207-86ed-ebf11666bb96" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:44:19 np0005548731 nova_compute[232433]: 2025-12-06 07:44:19.547 232437 DEBUG oslo_concurrency.lockutils [None req-156d6eea-3fb6-4ac2-b62b-98baa11d48c3 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] Acquiring lock "64c5b412-f2df-4207-86ed-ebf11666bb96-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:44:19 np0005548731 nova_compute[232433]: 2025-12-06 07:44:19.547 232437 DEBUG oslo_concurrency.lockutils [None req-156d6eea-3fb6-4ac2-b62b-98baa11d48c3 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] Lock "64c5b412-f2df-4207-86ed-ebf11666bb96-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:44:19 np0005548731 nova_compute[232433]: 2025-12-06 07:44:19.548 232437 DEBUG oslo_concurrency.lockutils [None req-156d6eea-3fb6-4ac2-b62b-98baa11d48c3 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] Lock "64c5b412-f2df-4207-86ed-ebf11666bb96-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:44:19 np0005548731 nova_compute[232433]: 2025-12-06 07:44:19.549 232437 INFO nova.compute.manager [None req-156d6eea-3fb6-4ac2-b62b-98baa11d48c3 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] [instance: 64c5b412-f2df-4207-86ed-ebf11666bb96] Terminating instance#033[00m
Dec  6 02:44:19 np0005548731 nova_compute[232433]: 2025-12-06 07:44:19.550 232437 DEBUG nova.compute.manager [None req-156d6eea-3fb6-4ac2-b62b-98baa11d48c3 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] [instance: 64c5b412-f2df-4207-86ed-ebf11666bb96] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec  6 02:44:20 np0005548731 nova_compute[232433]: 2025-12-06 07:44:20.003 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:44:20 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e335 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:44:20 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:44:20 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:44:20 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:44:20.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:44:20 np0005548731 kernel: tapd80a22f5-c6 (unregistering): left promiscuous mode
Dec  6 02:44:20 np0005548731 NetworkManager[49182]: <info>  [1765007060.5046] device (tapd80a22f5-c6): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec  6 02:44:20 np0005548731 nova_compute[232433]: 2025-12-06 07:44:20.511 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:44:20 np0005548731 ovn_controller[133927]: 2025-12-06T07:44:20Z|00710|binding|INFO|Releasing lport d80a22f5-c63c-4325-a07c-77494e2ebebb from this chassis (sb_readonly=0)
Dec  6 02:44:20 np0005548731 ovn_controller[133927]: 2025-12-06T07:44:20Z|00711|binding|INFO|Setting lport d80a22f5-c63c-4325-a07c-77494e2ebebb down in Southbound
Dec  6 02:44:20 np0005548731 ovn_controller[133927]: 2025-12-06T07:44:20Z|00712|binding|INFO|Removing iface tapd80a22f5-c6 ovn-installed in OVS
Dec  6 02:44:20 np0005548731 nova_compute[232433]: 2025-12-06 07:44:20.514 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:44:20 np0005548731 nova_compute[232433]: 2025-12-06 07:44:20.528 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:44:20 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:44:20.542 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5b:b9:66 10.100.0.5'], port_security=['fa:16:3e:5b:b9:66 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '64c5b412-f2df-4207-86ed-ebf11666bb96', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-67a02abd-6f15-4e26-ba0d-8a091ca98239', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4ec9294f6d4b4f44a72414374d646a4a', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'ef8d01c6-8791-4eef-8d34-906ccbdd6237', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a13aa6e4-a519-406f-87b7-05ba3d74a296, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], logical_port=d80a22f5-c63c-4325-a07c-77494e2ebebb) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 02:44:20 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:44:20.543 143965 INFO neutron.agent.ovn.metadata.agent [-] Port d80a22f5-c63c-4325-a07c-77494e2ebebb in datapath 67a02abd-6f15-4e26-ba0d-8a091ca98239 unbound from our chassis#033[00m
Dec  6 02:44:20 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:44:20.544 143965 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 67a02abd-6f15-4e26-ba0d-8a091ca98239#033[00m
Dec  6 02:44:20 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:44:20.558 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[28e89027-a363-4686-8a07-e364fe5aae5e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:44:20 np0005548731 systemd[1]: machine-qemu\x2d72\x2dinstance\x2d0000009b.scope: Deactivated successfully.
Dec  6 02:44:20 np0005548731 systemd[1]: machine-qemu\x2d72\x2dinstance\x2d0000009b.scope: Consumed 4.386s CPU time.
Dec  6 02:44:20 np0005548731 systemd-machined[195355]: Machine qemu-72-instance-0000009b terminated.
Dec  6 02:44:20 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:44:20.587 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[19d167d6-1abd-4208-b47a-04931110382e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:44:20 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:44:20.590 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[1a656147-2db8-4303-90a2-8b05f1707775]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:44:20 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:44:20.616 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[ccadb891-18c8-4f0e-8a0a-ad5ca4e9c536]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:44:20 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:44:20.632 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[c6dc779a-e1bd-42bd-8f01-a0c83103863d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap67a02abd-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:43:89:ec'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 6, 'tx_packets': 7, 'rx_bytes': 532, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 6, 'tx_packets': 7, 'rx_bytes': 532, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 214], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 729864, 'reachable_time': 40789, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 300139, 'error': None, 'target': 'ovnmeta-67a02abd-6f15-4e26-ba0d-8a091ca98239', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:44:20 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:44:20.648 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[890781dd-0cbc-4477-82cd-ee33dd3b497e]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap67a02abd-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 729874, 'tstamp': 729874}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 300140, 'error': None, 'target': 'ovnmeta-67a02abd-6f15-4e26-ba0d-8a091ca98239', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap67a02abd-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 729877, 'tstamp': 729877}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 300140, 'error': None, 'target': 'ovnmeta-67a02abd-6f15-4e26-ba0d-8a091ca98239', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:44:20 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:44:20.650 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap67a02abd-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:44:20 np0005548731 nova_compute[232433]: 2025-12-06 07:44:20.651 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:44:20 np0005548731 nova_compute[232433]: 2025-12-06 07:44:20.656 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:44:20 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:44:20.657 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap67a02abd-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:44:20 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:44:20.657 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  6 02:44:20 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:44:20.658 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap67a02abd-60, col_values=(('external_ids', {'iface-id': 'f1ca157c-f88b-4351-b03a-b04a75537062'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:44:20 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:44:20.658 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  6 02:44:20 np0005548731 nova_compute[232433]: 2025-12-06 07:44:20.791 232437 INFO nova.virt.libvirt.driver [-] [instance: 64c5b412-f2df-4207-86ed-ebf11666bb96] Instance destroyed successfully.#033[00m
Dec  6 02:44:20 np0005548731 nova_compute[232433]: 2025-12-06 07:44:20.792 232437 DEBUG nova.objects.instance [None req-156d6eea-3fb6-4ac2-b62b-98baa11d48c3 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] Lazy-loading 'resources' on Instance uuid 64c5b412-f2df-4207-86ed-ebf11666bb96 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:44:20 np0005548731 nova_compute[232433]: 2025-12-06 07:44:20.814 232437 DEBUG nova.virt.libvirt.vif [None req-156d6eea-3fb6-4ac2-b62b-98baa11d48c3 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T07:44:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersNegativeTestJSON-server-1667548925',display_name='tempest-ServersNegativeTestJSON-server-1667548925',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serversnegativetestjson-server-1667548925',id=155,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-06T07:44:15Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='4ec9294f6d4b4f44a72414374d646a4a',ramdisk_id='',reservation_id='r-g0hnfotp',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio'
,image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersNegativeTestJSON-776446295',owner_user_name='tempest-ServersNegativeTestJSON-776446295-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T07:44:15Z,user_data=None,user_id='1035ecd55ed54b57aa35fe32fb915cc5',uuid=64c5b412-f2df-4207-86ed-ebf11666bb96,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "d80a22f5-c63c-4325-a07c-77494e2ebebb", "address": "fa:16:3e:5b:b9:66", "network": {"id": "67a02abd-6f15-4e26-ba0d-8a091ca98239", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-840496237-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4ec9294f6d4b4f44a72414374d646a4a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd80a22f5-c6", "ovs_interfaceid": "d80a22f5-c63c-4325-a07c-77494e2ebebb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec  6 02:44:20 np0005548731 nova_compute[232433]: 2025-12-06 07:44:20.815 232437 DEBUG nova.network.os_vif_util [None req-156d6eea-3fb6-4ac2-b62b-98baa11d48c3 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] Converting VIF {"id": "d80a22f5-c63c-4325-a07c-77494e2ebebb", "address": "fa:16:3e:5b:b9:66", "network": {"id": "67a02abd-6f15-4e26-ba0d-8a091ca98239", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-840496237-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4ec9294f6d4b4f44a72414374d646a4a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd80a22f5-c6", "ovs_interfaceid": "d80a22f5-c63c-4325-a07c-77494e2ebebb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 02:44:20 np0005548731 nova_compute[232433]: 2025-12-06 07:44:20.816 232437 DEBUG nova.network.os_vif_util [None req-156d6eea-3fb6-4ac2-b62b-98baa11d48c3 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5b:b9:66,bridge_name='br-int',has_traffic_filtering=True,id=d80a22f5-c63c-4325-a07c-77494e2ebebb,network=Network(67a02abd-6f15-4e26-ba0d-8a091ca98239),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd80a22f5-c6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 02:44:20 np0005548731 nova_compute[232433]: 2025-12-06 07:44:20.816 232437 DEBUG os_vif [None req-156d6eea-3fb6-4ac2-b62b-98baa11d48c3 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:5b:b9:66,bridge_name='br-int',has_traffic_filtering=True,id=d80a22f5-c63c-4325-a07c-77494e2ebebb,network=Network(67a02abd-6f15-4e26-ba0d-8a091ca98239),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd80a22f5-c6') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec  6 02:44:20 np0005548731 nova_compute[232433]: 2025-12-06 07:44:20.818 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:44:20 np0005548731 nova_compute[232433]: 2025-12-06 07:44:20.818 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd80a22f5-c6, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:44:20 np0005548731 nova_compute[232433]: 2025-12-06 07:44:20.819 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:44:20 np0005548731 nova_compute[232433]: 2025-12-06 07:44:20.821 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:44:20 np0005548731 nova_compute[232433]: 2025-12-06 07:44:20.823 232437 INFO os_vif [None req-156d6eea-3fb6-4ac2-b62b-98baa11d48c3 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:5b:b9:66,bridge_name='br-int',has_traffic_filtering=True,id=d80a22f5-c63c-4325-a07c-77494e2ebebb,network=Network(67a02abd-6f15-4e26-ba0d-8a091ca98239),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd80a22f5-c6')#033[00m
Dec  6 02:44:21 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 02:44:21 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/323194662' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 02:44:21 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:44:21 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:44:21 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:44:21.316 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:44:21 np0005548731 nova_compute[232433]: 2025-12-06 07:44:21.558 232437 DEBUG nova.compute.manager [req-2a5b5494-f262-4154-9978-0c0841be6900 req-f1aa39af-5112-40ba-924d-78f3222a9639 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 64c5b412-f2df-4207-86ed-ebf11666bb96] Received event network-vif-unplugged-d80a22f5-c63c-4325-a07c-77494e2ebebb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:44:21 np0005548731 nova_compute[232433]: 2025-12-06 07:44:21.558 232437 DEBUG oslo_concurrency.lockutils [req-2a5b5494-f262-4154-9978-0c0841be6900 req-f1aa39af-5112-40ba-924d-78f3222a9639 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "64c5b412-f2df-4207-86ed-ebf11666bb96-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:44:21 np0005548731 nova_compute[232433]: 2025-12-06 07:44:21.559 232437 DEBUG oslo_concurrency.lockutils [req-2a5b5494-f262-4154-9978-0c0841be6900 req-f1aa39af-5112-40ba-924d-78f3222a9639 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "64c5b412-f2df-4207-86ed-ebf11666bb96-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:44:21 np0005548731 nova_compute[232433]: 2025-12-06 07:44:21.559 232437 DEBUG oslo_concurrency.lockutils [req-2a5b5494-f262-4154-9978-0c0841be6900 req-f1aa39af-5112-40ba-924d-78f3222a9639 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "64c5b412-f2df-4207-86ed-ebf11666bb96-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:44:21 np0005548731 nova_compute[232433]: 2025-12-06 07:44:21.559 232437 DEBUG nova.compute.manager [req-2a5b5494-f262-4154-9978-0c0841be6900 req-f1aa39af-5112-40ba-924d-78f3222a9639 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 64c5b412-f2df-4207-86ed-ebf11666bb96] No waiting events found dispatching network-vif-unplugged-d80a22f5-c63c-4325-a07c-77494e2ebebb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 02:44:21 np0005548731 nova_compute[232433]: 2025-12-06 07:44:21.560 232437 DEBUG nova.compute.manager [req-2a5b5494-f262-4154-9978-0c0841be6900 req-f1aa39af-5112-40ba-924d-78f3222a9639 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 64c5b412-f2df-4207-86ed-ebf11666bb96] Received event network-vif-unplugged-d80a22f5-c63c-4325-a07c-77494e2ebebb for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Dec  6 02:44:22 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:44:22 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:44:22 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:44:22.433 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:44:22 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:44:22.799 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=9f96b960-b4f2-40bd-ae99-08121f5e8b78, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '58'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:44:22 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  6 02:44:22 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 02:44:22 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  6 02:44:23 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:44:23 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:44:23 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:44:23.319 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:44:23 np0005548731 nova_compute[232433]: 2025-12-06 07:44:23.349 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:44:23 np0005548731 nova_compute[232433]: 2025-12-06 07:44:23.974 232437 DEBUG nova.compute.manager [req-be0c7af2-3388-42cf-b04d-c9910eddccb8 req-c28dcba8-8f73-49d5-835f-8f38149f4b67 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 64c5b412-f2df-4207-86ed-ebf11666bb96] Received event network-vif-plugged-d80a22f5-c63c-4325-a07c-77494e2ebebb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:44:23 np0005548731 nova_compute[232433]: 2025-12-06 07:44:23.975 232437 DEBUG oslo_concurrency.lockutils [req-be0c7af2-3388-42cf-b04d-c9910eddccb8 req-c28dcba8-8f73-49d5-835f-8f38149f4b67 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "64c5b412-f2df-4207-86ed-ebf11666bb96-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:44:23 np0005548731 nova_compute[232433]: 2025-12-06 07:44:23.976 232437 DEBUG oslo_concurrency.lockutils [req-be0c7af2-3388-42cf-b04d-c9910eddccb8 req-c28dcba8-8f73-49d5-835f-8f38149f4b67 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "64c5b412-f2df-4207-86ed-ebf11666bb96-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:44:23 np0005548731 nova_compute[232433]: 2025-12-06 07:44:23.976 232437 DEBUG oslo_concurrency.lockutils [req-be0c7af2-3388-42cf-b04d-c9910eddccb8 req-c28dcba8-8f73-49d5-835f-8f38149f4b67 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "64c5b412-f2df-4207-86ed-ebf11666bb96-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:44:23 np0005548731 nova_compute[232433]: 2025-12-06 07:44:23.977 232437 DEBUG nova.compute.manager [req-be0c7af2-3388-42cf-b04d-c9910eddccb8 req-c28dcba8-8f73-49d5-835f-8f38149f4b67 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 64c5b412-f2df-4207-86ed-ebf11666bb96] No waiting events found dispatching network-vif-plugged-d80a22f5-c63c-4325-a07c-77494e2ebebb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 02:44:23 np0005548731 nova_compute[232433]: 2025-12-06 07:44:23.977 232437 WARNING nova.compute.manager [req-be0c7af2-3388-42cf-b04d-c9910eddccb8 req-c28dcba8-8f73-49d5-835f-8f38149f4b67 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 64c5b412-f2df-4207-86ed-ebf11666bb96] Received unexpected event network-vif-plugged-d80a22f5-c63c-4325-a07c-77494e2ebebb for instance with vm_state active and task_state deleting.#033[00m
Dec  6 02:44:24 np0005548731 nova_compute[232433]: 2025-12-06 07:44:24.070 232437 DEBUG nova.network.neutron [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] [instance: f67d89a8-836a-4f47-af8d-37cf99529275] Updating instance_info_cache with network_info: [{"id": "d4087b80-19fa-44d3-8dd5-406c2b01b6f4", "address": "fa:16:3e:16:41:1e", "network": {"id": "67a02abd-6f15-4e26-ba0d-8a091ca98239", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-840496237-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4ec9294f6d4b4f44a72414374d646a4a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd4087b80-19", "ovs_interfaceid": "d4087b80-19fa-44d3-8dd5-406c2b01b6f4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 02:44:24 np0005548731 nova_compute[232433]: 2025-12-06 07:44:24.291 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Releasing lock "refresh_cache-f67d89a8-836a-4f47-af8d-37cf99529275" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 02:44:24 np0005548731 nova_compute[232433]: 2025-12-06 07:44:24.292 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] [instance: f67d89a8-836a-4f47-af8d-37cf99529275] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Dec  6 02:44:24 np0005548731 nova_compute[232433]: 2025-12-06 07:44:24.292 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:44:24 np0005548731 nova_compute[232433]: 2025-12-06 07:44:24.292 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  6 02:44:24 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:44:24 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:44:24 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:44:24.435 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:44:25 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e335 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:44:25 np0005548731 nova_compute[232433]: 2025-12-06 07:44:25.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:44:25 np0005548731 nova_compute[232433]: 2025-12-06 07:44:25.105 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:44:25 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:44:25 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 02:44:25 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:44:25.322 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 02:44:25 np0005548731 nova_compute[232433]: 2025-12-06 07:44:25.493 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:44:25 np0005548731 nova_compute[232433]: 2025-12-06 07:44:25.494 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:44:25 np0005548731 nova_compute[232433]: 2025-12-06 07:44:25.494 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:44:25 np0005548731 nova_compute[232433]: 2025-12-06 07:44:25.494 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  6 02:44:25 np0005548731 nova_compute[232433]: 2025-12-06 07:44:25.495 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:44:25 np0005548731 nova_compute[232433]: 2025-12-06 07:44:25.819 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:44:25 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e336 e336: 3 total, 3 up, 3 in
Dec  6 02:44:25 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 02:44:25 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3531443425' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 02:44:25 np0005548731 nova_compute[232433]: 2025-12-06 07:44:25.949 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.455s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:44:26 np0005548731 nova_compute[232433]: 2025-12-06 07:44:26.039 232437 DEBUG nova.virt.libvirt.driver [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] skipping disk for instance-00000099 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  6 02:44:26 np0005548731 nova_compute[232433]: 2025-12-06 07:44:26.040 232437 DEBUG nova.virt.libvirt.driver [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] skipping disk for instance-00000099 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  6 02:44:26 np0005548731 nova_compute[232433]: 2025-12-06 07:44:26.042 232437 DEBUG nova.virt.libvirt.driver [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] skipping disk for instance-0000009b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  6 02:44:26 np0005548731 nova_compute[232433]: 2025-12-06 07:44:26.043 232437 DEBUG nova.virt.libvirt.driver [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] skipping disk for instance-0000009b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  6 02:44:26 np0005548731 nova_compute[232433]: 2025-12-06 07:44:26.181 232437 WARNING nova.virt.libvirt.driver [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  6 02:44:26 np0005548731 nova_compute[232433]: 2025-12-06 07:44:26.182 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4107MB free_disk=20.801490783691406GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, 
"label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  6 02:44:26 np0005548731 nova_compute[232433]: 2025-12-06 07:44:26.182 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:44:26 np0005548731 nova_compute[232433]: 2025-12-06 07:44:26.183 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:44:26 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:44:26 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:44:26 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:44:26.437 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:44:26 np0005548731 nova_compute[232433]: 2025-12-06 07:44:26.611 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Instance f67d89a8-836a-4f47-af8d-37cf99529275 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  6 02:44:26 np0005548731 nova_compute[232433]: 2025-12-06 07:44:26.611 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Instance 64c5b412-f2df-4207-86ed-ebf11666bb96 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  6 02:44:26 np0005548731 nova_compute[232433]: 2025-12-06 07:44:26.612 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  6 02:44:26 np0005548731 nova_compute[232433]: 2025-12-06 07:44:26.612 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7680MB used_ram=768MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  6 02:44:26 np0005548731 nova_compute[232433]: 2025-12-06 07:44:26.743 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:44:27 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 02:44:27 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/678080406' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 02:44:27 np0005548731 nova_compute[232433]: 2025-12-06 07:44:27.167 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.424s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:44:27 np0005548731 nova_compute[232433]: 2025-12-06 07:44:27.173 232437 DEBUG nova.compute.provider_tree [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Inventory has not changed in ProviderTree for provider: 6d00757a-082f-486d-ae84-869a2ba2e6e7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  6 02:44:27 np0005548731 nova_compute[232433]: 2025-12-06 07:44:27.284 232437 DEBUG nova.scheduler.client.report [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Inventory has not changed for provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  6 02:44:27 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:44:27 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:44:27 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:44:27.326 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:44:27 np0005548731 nova_compute[232433]: 2025-12-06 07:44:27.345 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  6 02:44:27 np0005548731 nova_compute[232433]: 2025-12-06 07:44:27.345 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.163s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:44:28 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 02:44:28 np0005548731 nova_compute[232433]: 2025-12-06 07:44:28.350 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:44:28 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:44:28 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 02:44:28 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:44:28.439 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 02:44:29 np0005548731 ovn_controller[133927]: 2025-12-06T07:44:29Z|00713|binding|INFO|Releasing lport f1ca157c-f88b-4351-b03a-b04a75537062 from this chassis (sb_readonly=0)
Dec  6 02:44:29 np0005548731 nova_compute[232433]: 2025-12-06 07:44:29.297 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:44:29 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:44:29 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 02:44:29 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:44:29.328 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 02:44:29 np0005548731 nova_compute[232433]: 2025-12-06 07:44:29.346 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:44:29 np0005548731 nova_compute[232433]: 2025-12-06 07:44:29.346 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:44:29 np0005548731 ovn_controller[133927]: 2025-12-06T07:44:29Z|00714|binding|INFO|Releasing lport f1ca157c-f88b-4351-b03a-b04a75537062 from this chassis (sb_readonly=0)
Dec  6 02:44:29 np0005548731 nova_compute[232433]: 2025-12-06 07:44:29.409 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:44:29 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 02:44:30 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e336 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:44:30 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:44:30 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:44:30 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:44:30.441 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:44:30 np0005548731 nova_compute[232433]: 2025-12-06 07:44:30.821 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:44:31 np0005548731 nova_compute[232433]: 2025-12-06 07:44:31.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:44:31 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:44:31 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:44:31 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:44:31.331 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:44:32 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:44:32 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:44:32 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:44:32.443 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:44:33 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:44:33 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:44:33 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:44:33.333 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:44:33 np0005548731 nova_compute[232433]: 2025-12-06 07:44:33.353 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:44:34 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:44:34 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:44:34 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:44:34.445 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:44:35 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e336 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:44:35 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:44:35 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:44:35 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:44:35.336 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:44:35 np0005548731 nova_compute[232433]: 2025-12-06 07:44:35.790 232437 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765007060.7887194, 64c5b412-f2df-4207-86ed-ebf11666bb96 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 02:44:35 np0005548731 nova_compute[232433]: 2025-12-06 07:44:35.791 232437 INFO nova.compute.manager [-] [instance: 64c5b412-f2df-4207-86ed-ebf11666bb96] VM Stopped (Lifecycle Event)#033[00m
Dec  6 02:44:35 np0005548731 nova_compute[232433]: 2025-12-06 07:44:35.822 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:44:35 np0005548731 nova_compute[232433]: 2025-12-06 07:44:35.825 232437 DEBUG nova.compute.manager [None req-093e5a62-ae78-49c5-a738-1c21fc14b008 - - - - - -] [instance: 64c5b412-f2df-4207-86ed-ebf11666bb96] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:44:35 np0005548731 nova_compute[232433]: 2025-12-06 07:44:35.828 232437 DEBUG nova.compute.manager [None req-093e5a62-ae78-49c5-a738-1c21fc14b008 - - - - - -] [instance: 64c5b412-f2df-4207-86ed-ebf11666bb96] Synchronizing instance power state after lifecycle event "Stopped"; current vm_state: active, current task_state: deleting, current DB power_state: 1, VM power_state: 4 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  6 02:44:35 np0005548731 nova_compute[232433]: 2025-12-06 07:44:35.850 232437 INFO nova.compute.manager [None req-093e5a62-ae78-49c5-a738-1c21fc14b008 - - - - - -] [instance: 64c5b412-f2df-4207-86ed-ebf11666bb96] During sync_power_state the instance has a pending task (deleting). Skip.#033[00m
Dec  6 02:44:36 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:44:36 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:44:36 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:44:36.447 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:44:37 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:44:37 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:44:37 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:44:37.339 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:44:38 np0005548731 nova_compute[232433]: 2025-12-06 07:44:38.400 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:44:38 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:44:38 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:44:38 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:44:38.449 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:44:39 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:44:39 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:44:39 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:44:39.342 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:44:40 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e336 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:44:40 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:44:40 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:44:40 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:44:40.451 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:44:40 np0005548731 nova_compute[232433]: 2025-12-06 07:44:40.849 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:44:40 np0005548731 podman[300340]: 2025-12-06 07:44:40.920417193 +0000 UTC m=+0.054247423 container health_status ae4fd08cdec31d6ae2a6b91ffff7deb6ca5a8375cb52ee4b685cc6f25796824e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125)
Dec  6 02:44:40 np0005548731 podman[300328]: 2025-12-06 07:44:40.920417443 +0000 UTC m=+0.081934678 container health_status 26c2ce9bdb83c6db5ccad7f5b2a7cb4e9354ab0235ad2f942ea42c8feda2142b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Dec  6 02:44:40 np0005548731 podman[300337]: 2025-12-06 07:44:40.93668171 +0000 UTC m=+0.073926294 container health_status a118c3becf56cfbcae208065771ebf10a65162bb7b96389971293e534823fb78 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller)
Dec  6 02:44:41 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:44:41 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:44:41 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:44:41.344 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:44:42 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:44:42 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:44:42 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:44:42.452 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:44:42 np0005548731 nova_compute[232433]: 2025-12-06 07:44:42.903 232437 INFO nova.virt.libvirt.driver [None req-156d6eea-3fb6-4ac2-b62b-98baa11d48c3 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] [instance: 64c5b412-f2df-4207-86ed-ebf11666bb96] Deleting instance files /var/lib/nova/instances/64c5b412-f2df-4207-86ed-ebf11666bb96_del#033[00m
Dec  6 02:44:42 np0005548731 nova_compute[232433]: 2025-12-06 07:44:42.904 232437 INFO nova.virt.libvirt.driver [None req-156d6eea-3fb6-4ac2-b62b-98baa11d48c3 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] [instance: 64c5b412-f2df-4207-86ed-ebf11666bb96] Deletion of /var/lib/nova/instances/64c5b412-f2df-4207-86ed-ebf11666bb96_del complete#033[00m
Dec  6 02:44:43 np0005548731 nova_compute[232433]: 2025-12-06 07:44:43.011 232437 INFO nova.compute.manager [None req-156d6eea-3fb6-4ac2-b62b-98baa11d48c3 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] [instance: 64c5b412-f2df-4207-86ed-ebf11666bb96] Took 23.46 seconds to destroy the instance on the hypervisor.#033[00m
Dec  6 02:44:43 np0005548731 nova_compute[232433]: 2025-12-06 07:44:43.012 232437 DEBUG oslo.service.loopingcall [None req-156d6eea-3fb6-4ac2-b62b-98baa11d48c3 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Dec  6 02:44:43 np0005548731 nova_compute[232433]: 2025-12-06 07:44:43.012 232437 DEBUG nova.compute.manager [-] [instance: 64c5b412-f2df-4207-86ed-ebf11666bb96] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Dec  6 02:44:43 np0005548731 nova_compute[232433]: 2025-12-06 07:44:43.012 232437 DEBUG nova.network.neutron [-] [instance: 64c5b412-f2df-4207-86ed-ebf11666bb96] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Dec  6 02:44:43 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:44:43 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:44:43 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:44:43.347 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:44:43 np0005548731 nova_compute[232433]: 2025-12-06 07:44:43.403 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:44:44 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:44:44 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:44:44 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:44:44.454 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:44:45 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e336 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:44:45 np0005548731 nova_compute[232433]: 2025-12-06 07:44:45.265 232437 DEBUG nova.network.neutron [-] [instance: 64c5b412-f2df-4207-86ed-ebf11666bb96] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 02:44:45 np0005548731 nova_compute[232433]: 2025-12-06 07:44:45.300 232437 INFO nova.compute.manager [-] [instance: 64c5b412-f2df-4207-86ed-ebf11666bb96] Took 2.29 seconds to deallocate network for instance.#033[00m
Dec  6 02:44:45 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:44:45 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:44:45 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:44:45.349 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:44:45 np0005548731 nova_compute[232433]: 2025-12-06 07:44:45.382 232437 DEBUG oslo_concurrency.lockutils [None req-156d6eea-3fb6-4ac2-b62b-98baa11d48c3 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:44:45 np0005548731 nova_compute[232433]: 2025-12-06 07:44:45.383 232437 DEBUG oslo_concurrency.lockutils [None req-156d6eea-3fb6-4ac2-b62b-98baa11d48c3 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:44:45 np0005548731 nova_compute[232433]: 2025-12-06 07:44:45.535 232437 DEBUG oslo_concurrency.processutils [None req-156d6eea-3fb6-4ac2-b62b-98baa11d48c3 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:44:45 np0005548731 nova_compute[232433]: 2025-12-06 07:44:45.851 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:44:45 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 02:44:45 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/4294926738' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 02:44:45 np0005548731 nova_compute[232433]: 2025-12-06 07:44:45.956 232437 DEBUG oslo_concurrency.processutils [None req-156d6eea-3fb6-4ac2-b62b-98baa11d48c3 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.421s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:44:45 np0005548731 nova_compute[232433]: 2025-12-06 07:44:45.962 232437 DEBUG nova.compute.provider_tree [None req-156d6eea-3fb6-4ac2-b62b-98baa11d48c3 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] Inventory has not changed in ProviderTree for provider: 6d00757a-082f-486d-ae84-869a2ba2e6e7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  6 02:44:45 np0005548731 nova_compute[232433]: 2025-12-06 07:44:45.991 232437 DEBUG nova.scheduler.client.report [None req-156d6eea-3fb6-4ac2-b62b-98baa11d48c3 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] Inventory has not changed for provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  6 02:44:46 np0005548731 nova_compute[232433]: 2025-12-06 07:44:46.018 232437 DEBUG oslo_concurrency.lockutils [None req-156d6eea-3fb6-4ac2-b62b-98baa11d48c3 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.635s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:44:46 np0005548731 nova_compute[232433]: 2025-12-06 07:44:46.053 232437 INFO nova.scheduler.client.report [None req-156d6eea-3fb6-4ac2-b62b-98baa11d48c3 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] Deleted allocations for instance 64c5b412-f2df-4207-86ed-ebf11666bb96#033[00m
Dec  6 02:44:46 np0005548731 nova_compute[232433]: 2025-12-06 07:44:46.150 232437 DEBUG oslo_concurrency.lockutils [None req-156d6eea-3fb6-4ac2-b62b-98baa11d48c3 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] Lock "64c5b412-f2df-4207-86ed-ebf11666bb96" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 26.603s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:44:46 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:44:46 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:44:46 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:44:46.456 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:44:47 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:44:47 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:44:47 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:44:47.351 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:44:47 np0005548731 nova_compute[232433]: 2025-12-06 07:44:47.679 232437 DEBUG nova.compute.manager [req-ee0b3334-1c39-4a01-8263-51f3c08aceb6 req-e49a82c4-a4d1-48c3-824d-6a70c16447f2 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 64c5b412-f2df-4207-86ed-ebf11666bb96] Received event network-vif-deleted-d80a22f5-c63c-4325-a07c-77494e2ebebb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:44:48 np0005548731 nova_compute[232433]: 2025-12-06 07:44:48.404 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:44:48 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:44:48 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 02:44:48 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:44:48.458 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 02:44:49 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:44:49 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:44:49 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:44:49.353 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:44:50 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e336 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:44:50 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:44:50 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:44:50 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:44:50.460 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:44:50 np0005548731 nova_compute[232433]: 2025-12-06 07:44:50.852 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:44:51 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:44:51 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:44:51 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:44:51.356 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:44:52 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:44:52 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 02:44:52 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:44:52.462 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 02:44:53 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:44:53 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:44:53 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:44:53.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:44:53 np0005548731 nova_compute[232433]: 2025-12-06 07:44:53.406 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:44:54 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:44:54 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:44:54 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:44:54.464 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:44:55 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e336 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:44:55 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:44:55 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 02:44:55 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:44:55.361 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 02:44:55 np0005548731 nova_compute[232433]: 2025-12-06 07:44:55.854 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:44:56 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:44:56 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:44:56 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:44:56.466 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:44:57 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:44:57 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:44:57 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:44:57.364 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:44:58 np0005548731 nova_compute[232433]: 2025-12-06 07:44:58.409 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:44:58 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:44:58 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:44:58 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:44:58.468 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:44:59 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:44:59 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:44:59 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:44:59.367 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:45:00 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e336 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:45:00 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:45:00 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:45:00 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:45:00.470 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:45:00 np0005548731 nova_compute[232433]: 2025-12-06 07:45:00.856 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:45:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:45:00.885 143965 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:45:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:45:00.886 143965 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:45:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:45:00.886 143965 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:45:01 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:45:01 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:45:01 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:45:01.370 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:45:02 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:45:02 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:45:02 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:45:02.472 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:45:02 np0005548731 ceph-mgr[77818]: client.0 ms_handle_reset on v2:192.168.122.100:6800/798720280
Dec  6 02:45:02 np0005548731 nova_compute[232433]: 2025-12-06 07:45:02.929 232437 DEBUG oslo_concurrency.lockutils [None req-4b7cc1b3-b207-498c-8b20-75bb88b5e14d 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] Acquiring lock "d66a90db-5589-4b7f-b805-4096fffbd9f8" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:45:02 np0005548731 nova_compute[232433]: 2025-12-06 07:45:02.929 232437 DEBUG oslo_concurrency.lockutils [None req-4b7cc1b3-b207-498c-8b20-75bb88b5e14d 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] Lock "d66a90db-5589-4b7f-b805-4096fffbd9f8" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:45:02 np0005548731 nova_compute[232433]: 2025-12-06 07:45:02.966 232437 DEBUG nova.compute.manager [None req-4b7cc1b3-b207-498c-8b20-75bb88b5e14d 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] [instance: d66a90db-5589-4b7f-b805-4096fffbd9f8] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Dec  6 02:45:03 np0005548731 nova_compute[232433]: 2025-12-06 07:45:03.085 232437 DEBUG oslo_concurrency.lockutils [None req-4b7cc1b3-b207-498c-8b20-75bb88b5e14d 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:45:03 np0005548731 nova_compute[232433]: 2025-12-06 07:45:03.086 232437 DEBUG oslo_concurrency.lockutils [None req-4b7cc1b3-b207-498c-8b20-75bb88b5e14d 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:45:03 np0005548731 nova_compute[232433]: 2025-12-06 07:45:03.099 232437 DEBUG nova.virt.hardware [None req-4b7cc1b3-b207-498c-8b20-75bb88b5e14d 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Dec  6 02:45:03 np0005548731 nova_compute[232433]: 2025-12-06 07:45:03.100 232437 INFO nova.compute.claims [None req-4b7cc1b3-b207-498c-8b20-75bb88b5e14d 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] [instance: d66a90db-5589-4b7f-b805-4096fffbd9f8] Claim successful on node compute-2.ctlplane.example.com#033[00m
Dec  6 02:45:03 np0005548731 nova_compute[232433]: 2025-12-06 07:45:03.257 232437 DEBUG nova.scheduler.client.report [None req-4b7cc1b3-b207-498c-8b20-75bb88b5e14d 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] Refreshing inventories for resource provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Dec  6 02:45:03 np0005548731 nova_compute[232433]: 2025-12-06 07:45:03.279 232437 DEBUG nova.scheduler.client.report [None req-4b7cc1b3-b207-498c-8b20-75bb88b5e14d 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] Updating ProviderTree inventory for provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Dec  6 02:45:03 np0005548731 nova_compute[232433]: 2025-12-06 07:45:03.279 232437 DEBUG nova.compute.provider_tree [None req-4b7cc1b3-b207-498c-8b20-75bb88b5e14d 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] Updating inventory in ProviderTree for provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Dec  6 02:45:03 np0005548731 nova_compute[232433]: 2025-12-06 07:45:03.300 232437 DEBUG nova.scheduler.client.report [None req-4b7cc1b3-b207-498c-8b20-75bb88b5e14d 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] Refreshing aggregate associations for resource provider 6d00757a-082f-486d-ae84-869a2ba2e6e7, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Dec  6 02:45:03 np0005548731 nova_compute[232433]: 2025-12-06 07:45:03.336 232437 DEBUG nova.scheduler.client.report [None req-4b7cc1b3-b207-498c-8b20-75bb88b5e14d 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] Refreshing trait associations for resource provider 6d00757a-082f-486d-ae84-869a2ba2e6e7, traits: COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_STORAGE_BUS_USB,COMPUTE_ACCELERATORS,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_TRUSTED_CERTS,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_SSE,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_SSE2,HW_CPU_X86_SSE41,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_STORAGE_BUS_SATA,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_SECURITY_TPM_1_2,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_VOLUME_EXTEND,COMPUTE_NODE,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_SECURITY_TPM_2_0,COMPUTE_VIOMMU_MODEL_AUTO,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_SSE42,COMPUTE_STORAGE_BUS_FDC,COMPUTE_RESCUE_BFV,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_IMAGE_TYPE_ARI _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Dec  6 02:45:03 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:45:03 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:45:03 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:45:03.374 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:45:03 np0005548731 nova_compute[232433]: 2025-12-06 07:45:03.411 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:45:03 np0005548731 nova_compute[232433]: 2025-12-06 07:45:03.447 232437 DEBUG oslo_concurrency.processutils [None req-4b7cc1b3-b207-498c-8b20-75bb88b5e14d 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:45:03 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 02:45:03 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2923747230' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 02:45:03 np0005548731 nova_compute[232433]: 2025-12-06 07:45:03.872 232437 DEBUG oslo_concurrency.processutils [None req-4b7cc1b3-b207-498c-8b20-75bb88b5e14d 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.426s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:45:03 np0005548731 nova_compute[232433]: 2025-12-06 07:45:03.878 232437 DEBUG nova.compute.provider_tree [None req-4b7cc1b3-b207-498c-8b20-75bb88b5e14d 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] Inventory has not changed in ProviderTree for provider: 6d00757a-082f-486d-ae84-869a2ba2e6e7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  6 02:45:03 np0005548731 nova_compute[232433]: 2025-12-06 07:45:03.905 232437 DEBUG nova.scheduler.client.report [None req-4b7cc1b3-b207-498c-8b20-75bb88b5e14d 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] Inventory has not changed for provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  6 02:45:03 np0005548731 nova_compute[232433]: 2025-12-06 07:45:03.946 232437 DEBUG oslo_concurrency.lockutils [None req-4b7cc1b3-b207-498c-8b20-75bb88b5e14d 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.861s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:45:03 np0005548731 nova_compute[232433]: 2025-12-06 07:45:03.947 232437 DEBUG nova.compute.manager [None req-4b7cc1b3-b207-498c-8b20-75bb88b5e14d 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] [instance: d66a90db-5589-4b7f-b805-4096fffbd9f8] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Dec  6 02:45:04 np0005548731 nova_compute[232433]: 2025-12-06 07:45:04.088 232437 DEBUG nova.compute.manager [None req-4b7cc1b3-b207-498c-8b20-75bb88b5e14d 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] [instance: d66a90db-5589-4b7f-b805-4096fffbd9f8] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Dec  6 02:45:04 np0005548731 nova_compute[232433]: 2025-12-06 07:45:04.088 232437 DEBUG nova.network.neutron [None req-4b7cc1b3-b207-498c-8b20-75bb88b5e14d 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] [instance: d66a90db-5589-4b7f-b805-4096fffbd9f8] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Dec  6 02:45:04 np0005548731 nova_compute[232433]: 2025-12-06 07:45:04.159 232437 INFO nova.virt.libvirt.driver [None req-4b7cc1b3-b207-498c-8b20-75bb88b5e14d 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] [instance: d66a90db-5589-4b7f-b805-4096fffbd9f8] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Dec  6 02:45:04 np0005548731 nova_compute[232433]: 2025-12-06 07:45:04.190 232437 DEBUG nova.compute.manager [None req-4b7cc1b3-b207-498c-8b20-75bb88b5e14d 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] [instance: d66a90db-5589-4b7f-b805-4096fffbd9f8] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Dec  6 02:45:04 np0005548731 nova_compute[232433]: 2025-12-06 07:45:04.398 232437 DEBUG nova.compute.manager [None req-4b7cc1b3-b207-498c-8b20-75bb88b5e14d 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] [instance: d66a90db-5589-4b7f-b805-4096fffbd9f8] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Dec  6 02:45:04 np0005548731 nova_compute[232433]: 2025-12-06 07:45:04.400 232437 DEBUG nova.virt.libvirt.driver [None req-4b7cc1b3-b207-498c-8b20-75bb88b5e14d 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] [instance: d66a90db-5589-4b7f-b805-4096fffbd9f8] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Dec  6 02:45:04 np0005548731 nova_compute[232433]: 2025-12-06 07:45:04.401 232437 INFO nova.virt.libvirt.driver [None req-4b7cc1b3-b207-498c-8b20-75bb88b5e14d 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] [instance: d66a90db-5589-4b7f-b805-4096fffbd9f8] Creating image(s)#033[00m
Dec  6 02:45:04 np0005548731 nova_compute[232433]: 2025-12-06 07:45:04.433 232437 DEBUG nova.storage.rbd_utils [None req-4b7cc1b3-b207-498c-8b20-75bb88b5e14d 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] rbd image d66a90db-5589-4b7f-b805-4096fffbd9f8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:45:04 np0005548731 nova_compute[232433]: 2025-12-06 07:45:04.461 232437 DEBUG nova.storage.rbd_utils [None req-4b7cc1b3-b207-498c-8b20-75bb88b5e14d 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] rbd image d66a90db-5589-4b7f-b805-4096fffbd9f8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:45:04 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:45:04 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:45:04 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:45:04.475 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:45:04 np0005548731 nova_compute[232433]: 2025-12-06 07:45:04.488 232437 DEBUG nova.storage.rbd_utils [None req-4b7cc1b3-b207-498c-8b20-75bb88b5e14d 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] rbd image d66a90db-5589-4b7f-b805-4096fffbd9f8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:45:04 np0005548731 nova_compute[232433]: 2025-12-06 07:45:04.492 232437 DEBUG oslo_concurrency.processutils [None req-4b7cc1b3-b207-498c-8b20-75bb88b5e14d 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:45:04 np0005548731 nova_compute[232433]: 2025-12-06 07:45:04.553 232437 DEBUG oslo_concurrency.processutils [None req-4b7cc1b3-b207-498c-8b20-75bb88b5e14d 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:45:04 np0005548731 nova_compute[232433]: 2025-12-06 07:45:04.554 232437 DEBUG oslo_concurrency.lockutils [None req-4b7cc1b3-b207-498c-8b20-75bb88b5e14d 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] Acquiring lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:45:04 np0005548731 nova_compute[232433]: 2025-12-06 07:45:04.554 232437 DEBUG oslo_concurrency.lockutils [None req-4b7cc1b3-b207-498c-8b20-75bb88b5e14d 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:45:04 np0005548731 nova_compute[232433]: 2025-12-06 07:45:04.555 232437 DEBUG oslo_concurrency.lockutils [None req-4b7cc1b3-b207-498c-8b20-75bb88b5e14d 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:45:04 np0005548731 nova_compute[232433]: 2025-12-06 07:45:04.580 232437 DEBUG nova.storage.rbd_utils [None req-4b7cc1b3-b207-498c-8b20-75bb88b5e14d 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] rbd image d66a90db-5589-4b7f-b805-4096fffbd9f8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:45:04 np0005548731 nova_compute[232433]: 2025-12-06 07:45:04.583 232437 DEBUG oslo_concurrency.processutils [None req-4b7cc1b3-b207-498c-8b20-75bb88b5e14d 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef d66a90db-5589-4b7f-b805-4096fffbd9f8_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:45:04 np0005548731 nova_compute[232433]: 2025-12-06 07:45:04.663 232437 DEBUG nova.policy [None req-4b7cc1b3-b207-498c-8b20-75bb88b5e14d 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '297bc99c242e4fa8aedea4a6367b61c0', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '741dc47f9ced423cbd99fd6f9d32904f', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Dec  6 02:45:04 np0005548731 nova_compute[232433]: 2025-12-06 07:45:04.901 232437 DEBUG oslo_concurrency.processutils [None req-4b7cc1b3-b207-498c-8b20-75bb88b5e14d 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef d66a90db-5589-4b7f-b805-4096fffbd9f8_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.318s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:45:04 np0005548731 nova_compute[232433]: 2025-12-06 07:45:04.982 232437 DEBUG nova.storage.rbd_utils [None req-4b7cc1b3-b207-498c-8b20-75bb88b5e14d 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] resizing rbd image d66a90db-5589-4b7f-b805-4096fffbd9f8_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Dec  6 02:45:05 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e336 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:45:05 np0005548731 nova_compute[232433]: 2025-12-06 07:45:05.133 232437 DEBUG nova.objects.instance [None req-4b7cc1b3-b207-498c-8b20-75bb88b5e14d 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] Lazy-loading 'migration_context' on Instance uuid d66a90db-5589-4b7f-b805-4096fffbd9f8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:45:05 np0005548731 nova_compute[232433]: 2025-12-06 07:45:05.156 232437 DEBUG nova.virt.libvirt.driver [None req-4b7cc1b3-b207-498c-8b20-75bb88b5e14d 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] [instance: d66a90db-5589-4b7f-b805-4096fffbd9f8] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Dec  6 02:45:05 np0005548731 nova_compute[232433]: 2025-12-06 07:45:05.157 232437 DEBUG nova.virt.libvirt.driver [None req-4b7cc1b3-b207-498c-8b20-75bb88b5e14d 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] [instance: d66a90db-5589-4b7f-b805-4096fffbd9f8] Ensure instance console log exists: /var/lib/nova/instances/d66a90db-5589-4b7f-b805-4096fffbd9f8/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec  6 02:45:05 np0005548731 nova_compute[232433]: 2025-12-06 07:45:05.157 232437 DEBUG oslo_concurrency.lockutils [None req-4b7cc1b3-b207-498c-8b20-75bb88b5e14d 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:45:05 np0005548731 nova_compute[232433]: 2025-12-06 07:45:05.157 232437 DEBUG oslo_concurrency.lockutils [None req-4b7cc1b3-b207-498c-8b20-75bb88b5e14d 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:45:05 np0005548731 nova_compute[232433]: 2025-12-06 07:45:05.158 232437 DEBUG oslo_concurrency.lockutils [None req-4b7cc1b3-b207-498c-8b20-75bb88b5e14d 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:45:05 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:45:05 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:45:05 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:45:05.376 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:45:05 np0005548731 nova_compute[232433]: 2025-12-06 07:45:05.858 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:45:06 np0005548731 nova_compute[232433]: 2025-12-06 07:45:06.053 232437 DEBUG nova.network.neutron [None req-4b7cc1b3-b207-498c-8b20-75bb88b5e14d 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] [instance: d66a90db-5589-4b7f-b805-4096fffbd9f8] Successfully created port: dc0dd2b5-bfe6-4c8b-99b1-9a0ea59a4bf5 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Dec  6 02:45:06 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:45:06 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 02:45:06 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:45:06.477 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 02:45:07 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:45:07 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:45:07 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:45:07.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:45:08 np0005548731 nova_compute[232433]: 2025-12-06 07:45:08.413 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:45:08 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:45:08 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:45:08 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:45:08.479 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:45:08 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  6 02:45:08 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2063318395' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  6 02:45:08 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  6 02:45:08 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2063318395' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  6 02:45:09 np0005548731 nova_compute[232433]: 2025-12-06 07:45:09.273 232437 DEBUG nova.network.neutron [None req-4b7cc1b3-b207-498c-8b20-75bb88b5e14d 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] [instance: d66a90db-5589-4b7f-b805-4096fffbd9f8] Successfully updated port: dc0dd2b5-bfe6-4c8b-99b1-9a0ea59a4bf5 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Dec  6 02:45:09 np0005548731 nova_compute[232433]: 2025-12-06 07:45:09.297 232437 DEBUG oslo_concurrency.lockutils [None req-4b7cc1b3-b207-498c-8b20-75bb88b5e14d 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] Acquiring lock "refresh_cache-d66a90db-5589-4b7f-b805-4096fffbd9f8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 02:45:09 np0005548731 nova_compute[232433]: 2025-12-06 07:45:09.297 232437 DEBUG oslo_concurrency.lockutils [None req-4b7cc1b3-b207-498c-8b20-75bb88b5e14d 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] Acquired lock "refresh_cache-d66a90db-5589-4b7f-b805-4096fffbd9f8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 02:45:09 np0005548731 nova_compute[232433]: 2025-12-06 07:45:09.297 232437 DEBUG nova.network.neutron [None req-4b7cc1b3-b207-498c-8b20-75bb88b5e14d 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] [instance: d66a90db-5589-4b7f-b805-4096fffbd9f8] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec  6 02:45:09 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:45:09 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:45:09 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:45:09.382 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:45:09 np0005548731 nova_compute[232433]: 2025-12-06 07:45:09.426 232437 DEBUG nova.compute.manager [req-ece76451-49cb-4643-8a22-35e89e40715f req-8d603b86-2b10-4c5b-97e4-17022622ef4a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: d66a90db-5589-4b7f-b805-4096fffbd9f8] Received event network-changed-dc0dd2b5-bfe6-4c8b-99b1-9a0ea59a4bf5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:45:09 np0005548731 nova_compute[232433]: 2025-12-06 07:45:09.427 232437 DEBUG nova.compute.manager [req-ece76451-49cb-4643-8a22-35e89e40715f req-8d603b86-2b10-4c5b-97e4-17022622ef4a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: d66a90db-5589-4b7f-b805-4096fffbd9f8] Refreshing instance network info cache due to event network-changed-dc0dd2b5-bfe6-4c8b-99b1-9a0ea59a4bf5. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  6 02:45:09 np0005548731 nova_compute[232433]: 2025-12-06 07:45:09.427 232437 DEBUG oslo_concurrency.lockutils [req-ece76451-49cb-4643-8a22-35e89e40715f req-8d603b86-2b10-4c5b-97e4-17022622ef4a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-d66a90db-5589-4b7f-b805-4096fffbd9f8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 02:45:10 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e336 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:45:10 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:45:10 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:45:10 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:45:10.481 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:45:10 np0005548731 nova_compute[232433]: 2025-12-06 07:45:10.860 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:45:11 np0005548731 nova_compute[232433]: 2025-12-06 07:45:11.240 232437 DEBUG nova.network.neutron [None req-4b7cc1b3-b207-498c-8b20-75bb88b5e14d 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] [instance: d66a90db-5589-4b7f-b805-4096fffbd9f8] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec  6 02:45:11 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:45:11 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:45:11 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:45:11.385 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:45:11 np0005548731 podman[300716]: 2025-12-06 07:45:11.890473364 +0000 UTC m=+0.048584596 container health_status 26c2ce9bdb83c6db5ccad7f5b2a7cb4e9354ab0235ad2f942ea42c8feda2142b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Dec  6 02:45:11 np0005548731 podman[300718]: 2025-12-06 07:45:11.898176541 +0000 UTC m=+0.048745609 container health_status ae4fd08cdec31d6ae2a6b91ffff7deb6ca5a8375cb52ee4b685cc6f25796824e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=multipathd, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible)
Dec  6 02:45:11 np0005548731 podman[300717]: 2025-12-06 07:45:11.930488519 +0000 UTC m=+0.084536792 container health_status a118c3becf56cfbcae208065771ebf10a65162bb7b96389971293e534823fb78 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3)
Dec  6 02:45:12 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:45:12 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 02:45:12 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:45:12.483 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 02:45:13 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:45:13 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:45:13 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:45:13.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:45:13 np0005548731 nova_compute[232433]: 2025-12-06 07:45:13.415 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:45:14 np0005548731 nova_compute[232433]: 2025-12-06 07:45:14.051 232437 DEBUG nova.network.neutron [None req-4b7cc1b3-b207-498c-8b20-75bb88b5e14d 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] [instance: d66a90db-5589-4b7f-b805-4096fffbd9f8] Updating instance_info_cache with network_info: [{"id": "dc0dd2b5-bfe6-4c8b-99b1-9a0ea59a4bf5", "address": "fa:16:3e:d2:01:47", "network": {"id": "3c5d4817-c3d5-45fc-9890-418e779bacb2", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-1824643193-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "741dc47f9ced423cbd99fd6f9d32904f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdc0dd2b5-bf", "ovs_interfaceid": "dc0dd2b5-bfe6-4c8b-99b1-9a0ea59a4bf5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 02:45:14 np0005548731 nova_compute[232433]: 2025-12-06 07:45:14.073 232437 DEBUG oslo_concurrency.lockutils [None req-4b7cc1b3-b207-498c-8b20-75bb88b5e14d 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] Releasing lock "refresh_cache-d66a90db-5589-4b7f-b805-4096fffbd9f8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 02:45:14 np0005548731 nova_compute[232433]: 2025-12-06 07:45:14.074 232437 DEBUG nova.compute.manager [None req-4b7cc1b3-b207-498c-8b20-75bb88b5e14d 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] [instance: d66a90db-5589-4b7f-b805-4096fffbd9f8] Instance network_info: |[{"id": "dc0dd2b5-bfe6-4c8b-99b1-9a0ea59a4bf5", "address": "fa:16:3e:d2:01:47", "network": {"id": "3c5d4817-c3d5-45fc-9890-418e779bacb2", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-1824643193-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "741dc47f9ced423cbd99fd6f9d32904f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdc0dd2b5-bf", "ovs_interfaceid": "dc0dd2b5-bfe6-4c8b-99b1-9a0ea59a4bf5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Dec  6 02:45:14 np0005548731 nova_compute[232433]: 2025-12-06 07:45:14.075 232437 DEBUG oslo_concurrency.lockutils [req-ece76451-49cb-4643-8a22-35e89e40715f req-8d603b86-2b10-4c5b-97e4-17022622ef4a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-d66a90db-5589-4b7f-b805-4096fffbd9f8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 02:45:14 np0005548731 nova_compute[232433]: 2025-12-06 07:45:14.075 232437 DEBUG nova.network.neutron [req-ece76451-49cb-4643-8a22-35e89e40715f req-8d603b86-2b10-4c5b-97e4-17022622ef4a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: d66a90db-5589-4b7f-b805-4096fffbd9f8] Refreshing network info cache for port dc0dd2b5-bfe6-4c8b-99b1-9a0ea59a4bf5 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  6 02:45:14 np0005548731 nova_compute[232433]: 2025-12-06 07:45:14.080 232437 DEBUG nova.virt.libvirt.driver [None req-4b7cc1b3-b207-498c-8b20-75bb88b5e14d 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] [instance: d66a90db-5589-4b7f-b805-4096fffbd9f8] Start _get_guest_xml network_info=[{"id": "dc0dd2b5-bfe6-4c8b-99b1-9a0ea59a4bf5", "address": "fa:16:3e:d2:01:47", "network": {"id": "3c5d4817-c3d5-45fc-9890-418e779bacb2", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-1824643193-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "741dc47f9ced423cbd99fd6f9d32904f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdc0dd2b5-bf", "ovs_interfaceid": "dc0dd2b5-bfe6-4c8b-99b1-9a0ea59a4bf5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'device_type': 'disk', 'encryption_secret_uuid': None, 'disk_bus': 'virtio', 'boot_index': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'encryption_options': None, 'size': 0, 'encrypted': False, 'image_id': '6efab05d-c7cf-4770-a5c3-c806a2739063'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec  6 02:45:14 np0005548731 nova_compute[232433]: 2025-12-06 07:45:14.085 232437 WARNING nova.virt.libvirt.driver [None req-4b7cc1b3-b207-498c-8b20-75bb88b5e14d 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  6 02:45:14 np0005548731 nova_compute[232433]: 2025-12-06 07:45:14.102 232437 DEBUG nova.virt.libvirt.host [None req-4b7cc1b3-b207-498c-8b20-75bb88b5e14d 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec  6 02:45:14 np0005548731 nova_compute[232433]: 2025-12-06 07:45:14.103 232437 DEBUG nova.virt.libvirt.host [None req-4b7cc1b3-b207-498c-8b20-75bb88b5e14d 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec  6 02:45:14 np0005548731 nova_compute[232433]: 2025-12-06 07:45:14.107 232437 DEBUG nova.virt.libvirt.host [None req-4b7cc1b3-b207-498c-8b20-75bb88b5e14d 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec  6 02:45:14 np0005548731 nova_compute[232433]: 2025-12-06 07:45:14.108 232437 DEBUG nova.virt.libvirt.host [None req-4b7cc1b3-b207-498c-8b20-75bb88b5e14d 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec  6 02:45:14 np0005548731 nova_compute[232433]: 2025-12-06 07:45:14.109 232437 DEBUG nova.virt.libvirt.driver [None req-4b7cc1b3-b207-498c-8b20-75bb88b5e14d 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec  6 02:45:14 np0005548731 nova_compute[232433]: 2025-12-06 07:45:14.110 232437 DEBUG nova.virt.hardware [None req-4b7cc1b3-b207-498c-8b20-75bb88b5e14d 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-06T06:56:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='25848a18-11d9-4f11-80b5-5d005675c76d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec  6 02:45:14 np0005548731 nova_compute[232433]: 2025-12-06 07:45:14.110 232437 DEBUG nova.virt.hardware [None req-4b7cc1b3-b207-498c-8b20-75bb88b5e14d 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec  6 02:45:14 np0005548731 nova_compute[232433]: 2025-12-06 07:45:14.111 232437 DEBUG nova.virt.hardware [None req-4b7cc1b3-b207-498c-8b20-75bb88b5e14d 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec  6 02:45:14 np0005548731 nova_compute[232433]: 2025-12-06 07:45:14.111 232437 DEBUG nova.virt.hardware [None req-4b7cc1b3-b207-498c-8b20-75bb88b5e14d 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec  6 02:45:14 np0005548731 nova_compute[232433]: 2025-12-06 07:45:14.111 232437 DEBUG nova.virt.hardware [None req-4b7cc1b3-b207-498c-8b20-75bb88b5e14d 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec  6 02:45:14 np0005548731 nova_compute[232433]: 2025-12-06 07:45:14.111 232437 DEBUG nova.virt.hardware [None req-4b7cc1b3-b207-498c-8b20-75bb88b5e14d 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec  6 02:45:14 np0005548731 nova_compute[232433]: 2025-12-06 07:45:14.112 232437 DEBUG nova.virt.hardware [None req-4b7cc1b3-b207-498c-8b20-75bb88b5e14d 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec  6 02:45:14 np0005548731 nova_compute[232433]: 2025-12-06 07:45:14.112 232437 DEBUG nova.virt.hardware [None req-4b7cc1b3-b207-498c-8b20-75bb88b5e14d 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec  6 02:45:14 np0005548731 nova_compute[232433]: 2025-12-06 07:45:14.112 232437 DEBUG nova.virt.hardware [None req-4b7cc1b3-b207-498c-8b20-75bb88b5e14d 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec  6 02:45:14 np0005548731 nova_compute[232433]: 2025-12-06 07:45:14.113 232437 DEBUG nova.virt.hardware [None req-4b7cc1b3-b207-498c-8b20-75bb88b5e14d 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec  6 02:45:14 np0005548731 nova_compute[232433]: 2025-12-06 07:45:14.113 232437 DEBUG nova.virt.hardware [None req-4b7cc1b3-b207-498c-8b20-75bb88b5e14d 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec  6 02:45:14 np0005548731 nova_compute[232433]: 2025-12-06 07:45:14.116 232437 DEBUG oslo_concurrency.processutils [None req-4b7cc1b3-b207-498c-8b20-75bb88b5e14d 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:45:14 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:45:14 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:45:14 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:45:14.485 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:45:14 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  6 02:45:14 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1731268524' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  6 02:45:14 np0005548731 nova_compute[232433]: 2025-12-06 07:45:14.569 232437 DEBUG oslo_concurrency.processutils [None req-4b7cc1b3-b207-498c-8b20-75bb88b5e14d 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.452s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:45:14 np0005548731 nova_compute[232433]: 2025-12-06 07:45:14.599 232437 DEBUG nova.storage.rbd_utils [None req-4b7cc1b3-b207-498c-8b20-75bb88b5e14d 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] rbd image d66a90db-5589-4b7f-b805-4096fffbd9f8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:45:14 np0005548731 nova_compute[232433]: 2025-12-06 07:45:14.603 232437 DEBUG oslo_concurrency.processutils [None req-4b7cc1b3-b207-498c-8b20-75bb88b5e14d 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:45:15 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  6 02:45:15 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/4129380359' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  6 02:45:15 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e336 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:45:15 np0005548731 nova_compute[232433]: 2025-12-06 07:45:15.036 232437 DEBUG oslo_concurrency.processutils [None req-4b7cc1b3-b207-498c-8b20-75bb88b5e14d 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.433s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:45:15 np0005548731 nova_compute[232433]: 2025-12-06 07:45:15.038 232437 DEBUG nova.virt.libvirt.vif [None req-4b7cc1b3-b207-498c-8b20-75bb88b5e14d 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T07:45:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachVolumeNegativeTest-server-1008074304',display_name='tempest-AttachVolumeNegativeTest-server-1008074304',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-attachvolumenegativetest-server-1008074304',id=158,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGq2Ps+LisHXOt3k1Z1PkGhN+se8X4Ia0K+j3S7twrCOt5pK+3twP1awRwIa//I/8m7510cD7nQ4ICavPZtWHJvZPzsVp8pGmCsyZIyj5ImlYzn8fGUic6wmw2Is2N8TtA==',key_name='tempest-keypair-304190457',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='741dc47f9ced423cbd99fd6f9d32904f',ramdisk_id='',reservation_id='r-9g8i9tcb',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachVolumeNegativeTest-2080911030',owner_user_name='tempest-AttachVolumeNegativeTest-2080911030-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:45:04Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='297bc99c242e4fa8aedea4a6367b61c0',uuid=d66a90db-5589-4b7f-b805-4096fffbd9f8,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "dc0dd2b5-bfe6-4c8b-99b1-9a0ea59a4bf5", "address": "fa:16:3e:d2:01:47", "network": {"id": "3c5d4817-c3d5-45fc-9890-418e779bacb2", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-1824643193-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "741dc47f9ced423cbd99fd6f9d32904f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdc0dd2b5-bf", "ovs_interfaceid": "dc0dd2b5-bfe6-4c8b-99b1-9a0ea59a4bf5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec  6 02:45:15 np0005548731 nova_compute[232433]: 2025-12-06 07:45:15.039 232437 DEBUG nova.network.os_vif_util [None req-4b7cc1b3-b207-498c-8b20-75bb88b5e14d 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] Converting VIF {"id": "dc0dd2b5-bfe6-4c8b-99b1-9a0ea59a4bf5", "address": "fa:16:3e:d2:01:47", "network": {"id": "3c5d4817-c3d5-45fc-9890-418e779bacb2", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-1824643193-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "741dc47f9ced423cbd99fd6f9d32904f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdc0dd2b5-bf", "ovs_interfaceid": "dc0dd2b5-bfe6-4c8b-99b1-9a0ea59a4bf5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 02:45:15 np0005548731 nova_compute[232433]: 2025-12-06 07:45:15.040 232437 DEBUG nova.network.os_vif_util [None req-4b7cc1b3-b207-498c-8b20-75bb88b5e14d 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d2:01:47,bridge_name='br-int',has_traffic_filtering=True,id=dc0dd2b5-bfe6-4c8b-99b1-9a0ea59a4bf5,network=Network(3c5d4817-c3d5-45fc-9890-418e779bacb2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdc0dd2b5-bf') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 02:45:15 np0005548731 nova_compute[232433]: 2025-12-06 07:45:15.041 232437 DEBUG nova.objects.instance [None req-4b7cc1b3-b207-498c-8b20-75bb88b5e14d 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] Lazy-loading 'pci_devices' on Instance uuid d66a90db-5589-4b7f-b805-4096fffbd9f8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:45:15 np0005548731 nova_compute[232433]: 2025-12-06 07:45:15.060 232437 DEBUG nova.virt.libvirt.driver [None req-4b7cc1b3-b207-498c-8b20-75bb88b5e14d 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] [instance: d66a90db-5589-4b7f-b805-4096fffbd9f8] End _get_guest_xml xml=<domain type="kvm">
Dec  6 02:45:15 np0005548731 nova_compute[232433]:  <uuid>d66a90db-5589-4b7f-b805-4096fffbd9f8</uuid>
Dec  6 02:45:15 np0005548731 nova_compute[232433]:  <name>instance-0000009e</name>
Dec  6 02:45:15 np0005548731 nova_compute[232433]:  <memory>131072</memory>
Dec  6 02:45:15 np0005548731 nova_compute[232433]:  <vcpu>1</vcpu>
Dec  6 02:45:15 np0005548731 nova_compute[232433]:  <metadata>
Dec  6 02:45:15 np0005548731 nova_compute[232433]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec  6 02:45:15 np0005548731 nova_compute[232433]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec  6 02:45:15 np0005548731 nova_compute[232433]:      <nova:name>tempest-AttachVolumeNegativeTest-server-1008074304</nova:name>
Dec  6 02:45:15 np0005548731 nova_compute[232433]:      <nova:creationTime>2025-12-06 07:45:14</nova:creationTime>
Dec  6 02:45:15 np0005548731 nova_compute[232433]:      <nova:flavor name="m1.nano">
Dec  6 02:45:15 np0005548731 nova_compute[232433]:        <nova:memory>128</nova:memory>
Dec  6 02:45:15 np0005548731 nova_compute[232433]:        <nova:disk>1</nova:disk>
Dec  6 02:45:15 np0005548731 nova_compute[232433]:        <nova:swap>0</nova:swap>
Dec  6 02:45:15 np0005548731 nova_compute[232433]:        <nova:ephemeral>0</nova:ephemeral>
Dec  6 02:45:15 np0005548731 nova_compute[232433]:        <nova:vcpus>1</nova:vcpus>
Dec  6 02:45:15 np0005548731 nova_compute[232433]:      </nova:flavor>
Dec  6 02:45:15 np0005548731 nova_compute[232433]:      <nova:owner>
Dec  6 02:45:15 np0005548731 nova_compute[232433]:        <nova:user uuid="297bc99c242e4fa8aedea4a6367b61c0">tempest-AttachVolumeNegativeTest-2080911030-project-member</nova:user>
Dec  6 02:45:15 np0005548731 nova_compute[232433]:        <nova:project uuid="741dc47f9ced423cbd99fd6f9d32904f">tempest-AttachVolumeNegativeTest-2080911030</nova:project>
Dec  6 02:45:15 np0005548731 nova_compute[232433]:      </nova:owner>
Dec  6 02:45:15 np0005548731 nova_compute[232433]:      <nova:root type="image" uuid="6efab05d-c7cf-4770-a5c3-c806a2739063"/>
Dec  6 02:45:15 np0005548731 nova_compute[232433]:      <nova:ports>
Dec  6 02:45:15 np0005548731 nova_compute[232433]:        <nova:port uuid="dc0dd2b5-bfe6-4c8b-99b1-9a0ea59a4bf5">
Dec  6 02:45:15 np0005548731 nova_compute[232433]:          <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Dec  6 02:45:15 np0005548731 nova_compute[232433]:        </nova:port>
Dec  6 02:45:15 np0005548731 nova_compute[232433]:      </nova:ports>
Dec  6 02:45:15 np0005548731 nova_compute[232433]:    </nova:instance>
Dec  6 02:45:15 np0005548731 nova_compute[232433]:  </metadata>
Dec  6 02:45:15 np0005548731 nova_compute[232433]:  <sysinfo type="smbios">
Dec  6 02:45:15 np0005548731 nova_compute[232433]:    <system>
Dec  6 02:45:15 np0005548731 nova_compute[232433]:      <entry name="manufacturer">RDO</entry>
Dec  6 02:45:15 np0005548731 nova_compute[232433]:      <entry name="product">OpenStack Compute</entry>
Dec  6 02:45:15 np0005548731 nova_compute[232433]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec  6 02:45:15 np0005548731 nova_compute[232433]:      <entry name="serial">d66a90db-5589-4b7f-b805-4096fffbd9f8</entry>
Dec  6 02:45:15 np0005548731 nova_compute[232433]:      <entry name="uuid">d66a90db-5589-4b7f-b805-4096fffbd9f8</entry>
Dec  6 02:45:15 np0005548731 nova_compute[232433]:      <entry name="family">Virtual Machine</entry>
Dec  6 02:45:15 np0005548731 nova_compute[232433]:    </system>
Dec  6 02:45:15 np0005548731 nova_compute[232433]:  </sysinfo>
Dec  6 02:45:15 np0005548731 nova_compute[232433]:  <os>
Dec  6 02:45:15 np0005548731 nova_compute[232433]:    <type arch="x86_64" machine="q35">hvm</type>
Dec  6 02:45:15 np0005548731 nova_compute[232433]:    <boot dev="hd"/>
Dec  6 02:45:15 np0005548731 nova_compute[232433]:    <smbios mode="sysinfo"/>
Dec  6 02:45:15 np0005548731 nova_compute[232433]:  </os>
Dec  6 02:45:15 np0005548731 nova_compute[232433]:  <features>
Dec  6 02:45:15 np0005548731 nova_compute[232433]:    <acpi/>
Dec  6 02:45:15 np0005548731 nova_compute[232433]:    <apic/>
Dec  6 02:45:15 np0005548731 nova_compute[232433]:    <vmcoreinfo/>
Dec  6 02:45:15 np0005548731 nova_compute[232433]:  </features>
Dec  6 02:45:15 np0005548731 nova_compute[232433]:  <clock offset="utc">
Dec  6 02:45:15 np0005548731 nova_compute[232433]:    <timer name="pit" tickpolicy="delay"/>
Dec  6 02:45:15 np0005548731 nova_compute[232433]:    <timer name="rtc" tickpolicy="catchup"/>
Dec  6 02:45:15 np0005548731 nova_compute[232433]:    <timer name="hpet" present="no"/>
Dec  6 02:45:15 np0005548731 nova_compute[232433]:  </clock>
Dec  6 02:45:15 np0005548731 nova_compute[232433]:  <cpu mode="custom" match="exact">
Dec  6 02:45:15 np0005548731 nova_compute[232433]:    <model>Nehalem</model>
Dec  6 02:45:15 np0005548731 nova_compute[232433]:    <topology sockets="1" cores="1" threads="1"/>
Dec  6 02:45:15 np0005548731 nova_compute[232433]:  </cpu>
Dec  6 02:45:15 np0005548731 nova_compute[232433]:  <devices>
Dec  6 02:45:15 np0005548731 nova_compute[232433]:    <disk type="network" device="disk">
Dec  6 02:45:15 np0005548731 nova_compute[232433]:      <driver type="raw" cache="none"/>
Dec  6 02:45:15 np0005548731 nova_compute[232433]:      <source protocol="rbd" name="vms/d66a90db-5589-4b7f-b805-4096fffbd9f8_disk">
Dec  6 02:45:15 np0005548731 nova_compute[232433]:        <host name="192.168.122.100" port="6789"/>
Dec  6 02:45:15 np0005548731 nova_compute[232433]:        <host name="192.168.122.102" port="6789"/>
Dec  6 02:45:15 np0005548731 nova_compute[232433]:        <host name="192.168.122.101" port="6789"/>
Dec  6 02:45:15 np0005548731 nova_compute[232433]:      </source>
Dec  6 02:45:15 np0005548731 nova_compute[232433]:      <auth username="openstack">
Dec  6 02:45:15 np0005548731 nova_compute[232433]:        <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec  6 02:45:15 np0005548731 nova_compute[232433]:      </auth>
Dec  6 02:45:15 np0005548731 nova_compute[232433]:      <target dev="vda" bus="virtio"/>
Dec  6 02:45:15 np0005548731 nova_compute[232433]:    </disk>
Dec  6 02:45:15 np0005548731 nova_compute[232433]:    <disk type="network" device="cdrom">
Dec  6 02:45:15 np0005548731 nova_compute[232433]:      <driver type="raw" cache="none"/>
Dec  6 02:45:15 np0005548731 nova_compute[232433]:      <source protocol="rbd" name="vms/d66a90db-5589-4b7f-b805-4096fffbd9f8_disk.config">
Dec  6 02:45:15 np0005548731 nova_compute[232433]:        <host name="192.168.122.100" port="6789"/>
Dec  6 02:45:15 np0005548731 nova_compute[232433]:        <host name="192.168.122.102" port="6789"/>
Dec  6 02:45:15 np0005548731 nova_compute[232433]:        <host name="192.168.122.101" port="6789"/>
Dec  6 02:45:15 np0005548731 nova_compute[232433]:      </source>
Dec  6 02:45:15 np0005548731 nova_compute[232433]:      <auth username="openstack">
Dec  6 02:45:15 np0005548731 nova_compute[232433]:        <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec  6 02:45:15 np0005548731 nova_compute[232433]:      </auth>
Dec  6 02:45:15 np0005548731 nova_compute[232433]:      <target dev="sda" bus="sata"/>
Dec  6 02:45:15 np0005548731 nova_compute[232433]:    </disk>
Dec  6 02:45:15 np0005548731 nova_compute[232433]:    <interface type="ethernet">
Dec  6 02:45:15 np0005548731 nova_compute[232433]:      <mac address="fa:16:3e:d2:01:47"/>
Dec  6 02:45:15 np0005548731 nova_compute[232433]:      <model type="virtio"/>
Dec  6 02:45:15 np0005548731 nova_compute[232433]:      <driver name="vhost" rx_queue_size="512"/>
Dec  6 02:45:15 np0005548731 nova_compute[232433]:      <mtu size="1442"/>
Dec  6 02:45:15 np0005548731 nova_compute[232433]:      <target dev="tapdc0dd2b5-bf"/>
Dec  6 02:45:15 np0005548731 nova_compute[232433]:    </interface>
Dec  6 02:45:15 np0005548731 nova_compute[232433]:    <serial type="pty">
Dec  6 02:45:15 np0005548731 nova_compute[232433]:      <log file="/var/lib/nova/instances/d66a90db-5589-4b7f-b805-4096fffbd9f8/console.log" append="off"/>
Dec  6 02:45:15 np0005548731 nova_compute[232433]:    </serial>
Dec  6 02:45:15 np0005548731 nova_compute[232433]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec  6 02:45:15 np0005548731 nova_compute[232433]:    <video>
Dec  6 02:45:15 np0005548731 nova_compute[232433]:      <model type="virtio"/>
Dec  6 02:45:15 np0005548731 nova_compute[232433]:    </video>
Dec  6 02:45:15 np0005548731 nova_compute[232433]:    <input type="tablet" bus="usb"/>
Dec  6 02:45:15 np0005548731 nova_compute[232433]:    <rng model="virtio">
Dec  6 02:45:15 np0005548731 nova_compute[232433]:      <backend model="random">/dev/urandom</backend>
Dec  6 02:45:15 np0005548731 nova_compute[232433]:    </rng>
Dec  6 02:45:15 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root"/>
Dec  6 02:45:15 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:45:15 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:45:15 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:45:15 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:45:15 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:45:15 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:45:15 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:45:15 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:45:15 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:45:15 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:45:15 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:45:15 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:45:15 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:45:15 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:45:15 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:45:15 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:45:15 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:45:15 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:45:15 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:45:15 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:45:15 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:45:15 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:45:15 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:45:15 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:45:15 np0005548731 nova_compute[232433]:    <controller type="usb" index="0"/>
Dec  6 02:45:15 np0005548731 nova_compute[232433]:    <memballoon model="virtio">
Dec  6 02:45:15 np0005548731 nova_compute[232433]:      <stats period="10"/>
Dec  6 02:45:15 np0005548731 nova_compute[232433]:    </memballoon>
Dec  6 02:45:15 np0005548731 nova_compute[232433]:  </devices>
Dec  6 02:45:15 np0005548731 nova_compute[232433]: </domain>
Dec  6 02:45:15 np0005548731 nova_compute[232433]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec  6 02:45:15 np0005548731 nova_compute[232433]: 2025-12-06 07:45:15.063 232437 DEBUG nova.compute.manager [None req-4b7cc1b3-b207-498c-8b20-75bb88b5e14d 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] [instance: d66a90db-5589-4b7f-b805-4096fffbd9f8] Preparing to wait for external event network-vif-plugged-dc0dd2b5-bfe6-4c8b-99b1-9a0ea59a4bf5 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec  6 02:45:15 np0005548731 nova_compute[232433]: 2025-12-06 07:45:15.063 232437 DEBUG oslo_concurrency.lockutils [None req-4b7cc1b3-b207-498c-8b20-75bb88b5e14d 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] Acquiring lock "d66a90db-5589-4b7f-b805-4096fffbd9f8-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:45:15 np0005548731 nova_compute[232433]: 2025-12-06 07:45:15.063 232437 DEBUG oslo_concurrency.lockutils [None req-4b7cc1b3-b207-498c-8b20-75bb88b5e14d 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] Lock "d66a90db-5589-4b7f-b805-4096fffbd9f8-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:45:15 np0005548731 nova_compute[232433]: 2025-12-06 07:45:15.064 232437 DEBUG oslo_concurrency.lockutils [None req-4b7cc1b3-b207-498c-8b20-75bb88b5e14d 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] Lock "d66a90db-5589-4b7f-b805-4096fffbd9f8-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:45:15 np0005548731 nova_compute[232433]: 2025-12-06 07:45:15.064 232437 DEBUG nova.virt.libvirt.vif [None req-4b7cc1b3-b207-498c-8b20-75bb88b5e14d 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T07:45:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachVolumeNegativeTest-server-1008074304',display_name='tempest-AttachVolumeNegativeTest-server-1008074304',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-attachvolumenegativetest-server-1008074304',id=158,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGq2Ps+LisHXOt3k1Z1PkGhN+se8X4Ia0K+j3S7twrCOt5pK+3twP1awRwIa//I/8m7510cD7nQ4ICavPZtWHJvZPzsVp8pGmCsyZIyj5ImlYzn8fGUic6wmw2Is2N8TtA==',key_name='tempest-keypair-304190457',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='741dc47f9ced423cbd99fd6f9d32904f',ramdisk_id='',reservation_id='r-9g8i9tcb',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachVolumeNegativeTest-2080911030',owner_user_name='tempest-AttachVolumeNegativeTest-2080911030-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:45:04Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='297bc99c242e4fa8aedea4a6367b61c0',uuid=d66a90db-5589-4b7f-b805-4096fffbd9f8,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "dc0dd2b5-bfe6-4c8b-99b1-9a0ea59a4bf5", "address": "fa:16:3e:d2:01:47", "network": {"id": "3c5d4817-c3d5-45fc-9890-418e779bacb2", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-1824643193-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "741dc47f9ced423cbd99fd6f9d32904f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdc0dd2b5-bf", "ovs_interfaceid": "dc0dd2b5-bfe6-4c8b-99b1-9a0ea59a4bf5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec  6 02:45:15 np0005548731 nova_compute[232433]: 2025-12-06 07:45:15.065 232437 DEBUG nova.network.os_vif_util [None req-4b7cc1b3-b207-498c-8b20-75bb88b5e14d 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] Converting VIF {"id": "dc0dd2b5-bfe6-4c8b-99b1-9a0ea59a4bf5", "address": "fa:16:3e:d2:01:47", "network": {"id": "3c5d4817-c3d5-45fc-9890-418e779bacb2", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-1824643193-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "741dc47f9ced423cbd99fd6f9d32904f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdc0dd2b5-bf", "ovs_interfaceid": "dc0dd2b5-bfe6-4c8b-99b1-9a0ea59a4bf5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 02:45:15 np0005548731 nova_compute[232433]: 2025-12-06 07:45:15.065 232437 DEBUG nova.network.os_vif_util [None req-4b7cc1b3-b207-498c-8b20-75bb88b5e14d 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d2:01:47,bridge_name='br-int',has_traffic_filtering=True,id=dc0dd2b5-bfe6-4c8b-99b1-9a0ea59a4bf5,network=Network(3c5d4817-c3d5-45fc-9890-418e779bacb2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdc0dd2b5-bf') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 02:45:15 np0005548731 nova_compute[232433]: 2025-12-06 07:45:15.066 232437 DEBUG os_vif [None req-4b7cc1b3-b207-498c-8b20-75bb88b5e14d 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d2:01:47,bridge_name='br-int',has_traffic_filtering=True,id=dc0dd2b5-bfe6-4c8b-99b1-9a0ea59a4bf5,network=Network(3c5d4817-c3d5-45fc-9890-418e779bacb2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdc0dd2b5-bf') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec  6 02:45:15 np0005548731 nova_compute[232433]: 2025-12-06 07:45:15.066 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:45:15 np0005548731 nova_compute[232433]: 2025-12-06 07:45:15.067 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:45:15 np0005548731 nova_compute[232433]: 2025-12-06 07:45:15.067 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  6 02:45:15 np0005548731 nova_compute[232433]: 2025-12-06 07:45:15.070 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:45:15 np0005548731 nova_compute[232433]: 2025-12-06 07:45:15.070 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapdc0dd2b5-bf, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:45:15 np0005548731 nova_compute[232433]: 2025-12-06 07:45:15.071 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapdc0dd2b5-bf, col_values=(('external_ids', {'iface-id': 'dc0dd2b5-bfe6-4c8b-99b1-9a0ea59a4bf5', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:d2:01:47', 'vm-uuid': 'd66a90db-5589-4b7f-b805-4096fffbd9f8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:45:15 np0005548731 nova_compute[232433]: 2025-12-06 07:45:15.072 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:45:15 np0005548731 NetworkManager[49182]: <info>  [1765007115.0729] manager: (tapdc0dd2b5-bf): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/328)
Dec  6 02:45:15 np0005548731 nova_compute[232433]: 2025-12-06 07:45:15.074 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  6 02:45:15 np0005548731 nova_compute[232433]: 2025-12-06 07:45:15.080 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:45:15 np0005548731 nova_compute[232433]: 2025-12-06 07:45:15.080 232437 INFO os_vif [None req-4b7cc1b3-b207-498c-8b20-75bb88b5e14d 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d2:01:47,bridge_name='br-int',has_traffic_filtering=True,id=dc0dd2b5-bfe6-4c8b-99b1-9a0ea59a4bf5,network=Network(3c5d4817-c3d5-45fc-9890-418e779bacb2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdc0dd2b5-bf')#033[00m
Dec  6 02:45:15 np0005548731 nova_compute[232433]: 2025-12-06 07:45:15.134 232437 DEBUG nova.virt.libvirt.driver [None req-4b7cc1b3-b207-498c-8b20-75bb88b5e14d 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  6 02:45:15 np0005548731 nova_compute[232433]: 2025-12-06 07:45:15.135 232437 DEBUG nova.virt.libvirt.driver [None req-4b7cc1b3-b207-498c-8b20-75bb88b5e14d 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  6 02:45:15 np0005548731 nova_compute[232433]: 2025-12-06 07:45:15.135 232437 DEBUG nova.virt.libvirt.driver [None req-4b7cc1b3-b207-498c-8b20-75bb88b5e14d 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] No VIF found with MAC fa:16:3e:d2:01:47, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec  6 02:45:15 np0005548731 nova_compute[232433]: 2025-12-06 07:45:15.135 232437 INFO nova.virt.libvirt.driver [None req-4b7cc1b3-b207-498c-8b20-75bb88b5e14d 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] [instance: d66a90db-5589-4b7f-b805-4096fffbd9f8] Using config drive#033[00m
Dec  6 02:45:15 np0005548731 nova_compute[232433]: 2025-12-06 07:45:15.159 232437 DEBUG nova.storage.rbd_utils [None req-4b7cc1b3-b207-498c-8b20-75bb88b5e14d 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] rbd image d66a90db-5589-4b7f-b805-4096fffbd9f8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:45:15 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:45:15 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:45:15 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:45:15.392 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:45:15 np0005548731 nova_compute[232433]: 2025-12-06 07:45:15.970 232437 INFO nova.virt.libvirt.driver [None req-4b7cc1b3-b207-498c-8b20-75bb88b5e14d 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] [instance: d66a90db-5589-4b7f-b805-4096fffbd9f8] Creating config drive at /var/lib/nova/instances/d66a90db-5589-4b7f-b805-4096fffbd9f8/disk.config#033[00m
Dec  6 02:45:15 np0005548731 nova_compute[232433]: 2025-12-06 07:45:15.977 232437 DEBUG oslo_concurrency.processutils [None req-4b7cc1b3-b207-498c-8b20-75bb88b5e14d 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/d66a90db-5589-4b7f-b805-4096fffbd9f8/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp1ewvw4p7 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:45:16 np0005548731 nova_compute[232433]: 2025-12-06 07:45:16.116 232437 DEBUG oslo_concurrency.processutils [None req-4b7cc1b3-b207-498c-8b20-75bb88b5e14d 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/d66a90db-5589-4b7f-b805-4096fffbd9f8/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp1ewvw4p7" returned: 0 in 0.139s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:45:16 np0005548731 nova_compute[232433]: 2025-12-06 07:45:16.141 232437 DEBUG nova.storage.rbd_utils [None req-4b7cc1b3-b207-498c-8b20-75bb88b5e14d 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] rbd image d66a90db-5589-4b7f-b805-4096fffbd9f8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:45:16 np0005548731 nova_compute[232433]: 2025-12-06 07:45:16.144 232437 DEBUG oslo_concurrency.processutils [None req-4b7cc1b3-b207-498c-8b20-75bb88b5e14d 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/d66a90db-5589-4b7f-b805-4096fffbd9f8/disk.config d66a90db-5589-4b7f-b805-4096fffbd9f8_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:45:16 np0005548731 nova_compute[232433]: 2025-12-06 07:45:16.415 232437 DEBUG oslo_concurrency.processutils [None req-4b7cc1b3-b207-498c-8b20-75bb88b5e14d 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/d66a90db-5589-4b7f-b805-4096fffbd9f8/disk.config d66a90db-5589-4b7f-b805-4096fffbd9f8_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.272s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:45:16 np0005548731 nova_compute[232433]: 2025-12-06 07:45:16.416 232437 INFO nova.virt.libvirt.driver [None req-4b7cc1b3-b207-498c-8b20-75bb88b5e14d 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] [instance: d66a90db-5589-4b7f-b805-4096fffbd9f8] Deleting local config drive /var/lib/nova/instances/d66a90db-5589-4b7f-b805-4096fffbd9f8/disk.config because it was imported into RBD.#033[00m
Dec  6 02:45:16 np0005548731 kernel: tapdc0dd2b5-bf: entered promiscuous mode
Dec  6 02:45:16 np0005548731 nova_compute[232433]: 2025-12-06 07:45:16.464 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:45:16 np0005548731 ovn_controller[133927]: 2025-12-06T07:45:16Z|00715|binding|INFO|Claiming lport dc0dd2b5-bfe6-4c8b-99b1-9a0ea59a4bf5 for this chassis.
Dec  6 02:45:16 np0005548731 ovn_controller[133927]: 2025-12-06T07:45:16Z|00716|binding|INFO|dc0dd2b5-bfe6-4c8b-99b1-9a0ea59a4bf5: Claiming fa:16:3e:d2:01:47 10.100.0.10
Dec  6 02:45:16 np0005548731 NetworkManager[49182]: <info>  [1765007116.4665] manager: (tapdc0dd2b5-bf): new Tun device (/org/freedesktop/NetworkManager/Devices/329)
Dec  6 02:45:16 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:45:16.474 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d2:01:47 10.100.0.10'], port_security=['fa:16:3e:d2:01:47 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'd66a90db-5589-4b7f-b805-4096fffbd9f8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3c5d4817-c3d5-45fc-9890-418e779bacb2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '741dc47f9ced423cbd99fd6f9d32904f', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'fe3fa474-81b7-490f-80d0-dc30cbd0f0d8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a93df87f-d2df-4d3a-b692-98bba32f2fe1, chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], logical_port=dc0dd2b5-bfe6-4c8b-99b1-9a0ea59a4bf5) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 02:45:16 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:45:16.475 143965 INFO neutron.agent.ovn.metadata.agent [-] Port dc0dd2b5-bfe6-4c8b-99b1-9a0ea59a4bf5 in datapath 3c5d4817-c3d5-45fc-9890-418e779bacb2 bound to our chassis#033[00m
Dec  6 02:45:16 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:45:16.476 143965 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 3c5d4817-c3d5-45fc-9890-418e779bacb2#033[00m
Dec  6 02:45:16 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:45:16 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:45:16 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:45:16.487 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:45:16 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:45:16.487 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[beb0dddb-d8f1-4414-bad7-348c4a98da19]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:45:16 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:45:16.488 143965 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap3c5d4817-c1 in ovnmeta-3c5d4817-c3d5-45fc-9890-418e779bacb2 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Dec  6 02:45:16 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:45:16.490 236668 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap3c5d4817-c0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Dec  6 02:45:16 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:45:16.490 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[d8f45e7f-5b2b-4c2b-a188-04726467f2cc]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:45:16 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:45:16.491 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[d92abf1f-f65e-4083-94bb-1af6ee3f6613]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:45:16 np0005548731 systemd-udevd[300917]: Network interface NamePolicy= disabled on kernel command line.
Dec  6 02:45:16 np0005548731 systemd-machined[195355]: New machine qemu-73-instance-0000009e.
Dec  6 02:45:16 np0005548731 NetworkManager[49182]: <info>  [1765007116.5056] device (tapdc0dd2b5-bf): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec  6 02:45:16 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:45:16.505 144079 DEBUG oslo.privsep.daemon [-] privsep: reply[78834e6b-b0e8-4c65-8c7c-d433b81cac9e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:45:16 np0005548731 NetworkManager[49182]: <info>  [1765007116.5069] device (tapdc0dd2b5-bf): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec  6 02:45:16 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:45:16.530 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[81b65050-a863-4296-bd13-6f881020cef6]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:45:16 np0005548731 systemd[1]: Started Virtual Machine qemu-73-instance-0000009e.
Dec  6 02:45:16 np0005548731 nova_compute[232433]: 2025-12-06 07:45:16.553 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:45:16 np0005548731 ovn_controller[133927]: 2025-12-06T07:45:16Z|00717|binding|INFO|Setting lport dc0dd2b5-bfe6-4c8b-99b1-9a0ea59a4bf5 ovn-installed in OVS
Dec  6 02:45:16 np0005548731 ovn_controller[133927]: 2025-12-06T07:45:16Z|00718|binding|INFO|Setting lport dc0dd2b5-bfe6-4c8b-99b1-9a0ea59a4bf5 up in Southbound
Dec  6 02:45:16 np0005548731 nova_compute[232433]: 2025-12-06 07:45:16.557 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:45:16 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:45:16.559 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[c4e5af3a-f371-4386-b946-d931e9fc99a2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:45:16 np0005548731 systemd-udevd[300921]: Network interface NamePolicy= disabled on kernel command line.
Dec  6 02:45:16 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:45:16.564 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[329535c3-37a9-4ddc-8c42-a4aaad589baf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:45:16 np0005548731 NetworkManager[49182]: <info>  [1765007116.5652] manager: (tap3c5d4817-c0): new Veth device (/org/freedesktop/NetworkManager/Devices/330)
Dec  6 02:45:16 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:45:16.591 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[62fa78e4-5673-4c0e-8725-e920e98cd4de]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:45:16 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:45:16.595 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[3a73e2a3-7b3d-4b23-9a84-289912760cb1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:45:16 np0005548731 NetworkManager[49182]: <info>  [1765007116.6164] device (tap3c5d4817-c0): carrier: link connected
Dec  6 02:45:16 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:45:16.623 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[25f0126c-fbe5-47b6-9339-d19ef07e176b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:45:16 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:45:16.639 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[735d780d-9896-4e4a-af22-213897796496]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3c5d4817-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:36:51:7c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 218], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 737862, 'reachable_time': 30263, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 300950, 'error': None, 'target': 'ovnmeta-3c5d4817-c3d5-45fc-9890-418e779bacb2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:45:16 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:45:16.654 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[b9d97205-4039-47ac-8fb8-0c6db1a89bfb]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe36:517c'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 737862, 'tstamp': 737862}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 300951, 'error': None, 'target': 'ovnmeta-3c5d4817-c3d5-45fc-9890-418e779bacb2', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:45:16 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:45:16.673 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[27b70fc8-1918-4623-9576-f5ccaa8aa954]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3c5d4817-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:36:51:7c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 218], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 737862, 'reachable_time': 30263, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 300952, 'error': None, 'target': 'ovnmeta-3c5d4817-c3d5-45fc-9890-418e779bacb2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:45:16 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:45:16.710 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[286e015f-6e5d-41b5-a03e-e6348fd32165]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:45:16 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:45:16.783 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[23dc6b73-ae1e-4fca-9946-b92209ea8e98]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:45:16 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:45:16.784 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3c5d4817-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:45:16 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:45:16.784 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  6 02:45:16 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:45:16.785 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3c5d4817-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:45:16 np0005548731 nova_compute[232433]: 2025-12-06 07:45:16.787 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:45:16 np0005548731 NetworkManager[49182]: <info>  [1765007116.7875] manager: (tap3c5d4817-c0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/331)
Dec  6 02:45:16 np0005548731 kernel: tap3c5d4817-c0: entered promiscuous mode
Dec  6 02:45:16 np0005548731 nova_compute[232433]: 2025-12-06 07:45:16.789 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:45:16 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:45:16.790 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap3c5d4817-c0, col_values=(('external_ids', {'iface-id': 'dc336d05-182d-42ac-ab5e-a73bf30a0662'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:45:16 np0005548731 nova_compute[232433]: 2025-12-06 07:45:16.791 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:45:16 np0005548731 ovn_controller[133927]: 2025-12-06T07:45:16Z|00719|binding|INFO|Releasing lport dc336d05-182d-42ac-ab5e-a73bf30a0662 from this chassis (sb_readonly=0)
Dec  6 02:45:16 np0005548731 nova_compute[232433]: 2025-12-06 07:45:16.812 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:45:16 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:45:16.813 143965 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/3c5d4817-c3d5-45fc-9890-418e779bacb2.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/3c5d4817-c3d5-45fc-9890-418e779bacb2.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Dec  6 02:45:16 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:45:16.814 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[efa7b98f-2aea-4988-a639-68da255d890e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:45:16 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:45:16.814 143965 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec  6 02:45:16 np0005548731 ovn_metadata_agent[143960]: global
Dec  6 02:45:16 np0005548731 ovn_metadata_agent[143960]:    log         /dev/log local0 debug
Dec  6 02:45:16 np0005548731 ovn_metadata_agent[143960]:    log-tag     haproxy-metadata-proxy-3c5d4817-c3d5-45fc-9890-418e779bacb2
Dec  6 02:45:16 np0005548731 ovn_metadata_agent[143960]:    user        root
Dec  6 02:45:16 np0005548731 ovn_metadata_agent[143960]:    group       root
Dec  6 02:45:16 np0005548731 ovn_metadata_agent[143960]:    maxconn     1024
Dec  6 02:45:16 np0005548731 ovn_metadata_agent[143960]:    pidfile     /var/lib/neutron/external/pids/3c5d4817-c3d5-45fc-9890-418e779bacb2.pid.haproxy
Dec  6 02:45:16 np0005548731 ovn_metadata_agent[143960]:    daemon
Dec  6 02:45:16 np0005548731 ovn_metadata_agent[143960]: 
Dec  6 02:45:16 np0005548731 ovn_metadata_agent[143960]: defaults
Dec  6 02:45:16 np0005548731 ovn_metadata_agent[143960]:    log global
Dec  6 02:45:16 np0005548731 ovn_metadata_agent[143960]:    mode http
Dec  6 02:45:16 np0005548731 ovn_metadata_agent[143960]:    option httplog
Dec  6 02:45:16 np0005548731 ovn_metadata_agent[143960]:    option dontlognull
Dec  6 02:45:16 np0005548731 ovn_metadata_agent[143960]:    option http-server-close
Dec  6 02:45:16 np0005548731 ovn_metadata_agent[143960]:    option forwardfor
Dec  6 02:45:16 np0005548731 ovn_metadata_agent[143960]:    retries                 3
Dec  6 02:45:16 np0005548731 ovn_metadata_agent[143960]:    timeout http-request    30s
Dec  6 02:45:16 np0005548731 ovn_metadata_agent[143960]:    timeout connect         30s
Dec  6 02:45:16 np0005548731 ovn_metadata_agent[143960]:    timeout client          32s
Dec  6 02:45:16 np0005548731 ovn_metadata_agent[143960]:    timeout server          32s
Dec  6 02:45:16 np0005548731 ovn_metadata_agent[143960]:    timeout http-keep-alive 30s
Dec  6 02:45:16 np0005548731 ovn_metadata_agent[143960]: 
Dec  6 02:45:16 np0005548731 ovn_metadata_agent[143960]: 
Dec  6 02:45:16 np0005548731 ovn_metadata_agent[143960]: listen listener
Dec  6 02:45:16 np0005548731 ovn_metadata_agent[143960]:    bind 169.254.169.254:80
Dec  6 02:45:16 np0005548731 ovn_metadata_agent[143960]:    server metadata /var/lib/neutron/metadata_proxy
Dec  6 02:45:16 np0005548731 ovn_metadata_agent[143960]:    http-request add-header X-OVN-Network-ID 3c5d4817-c3d5-45fc-9890-418e779bacb2
Dec  6 02:45:16 np0005548731 ovn_metadata_agent[143960]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Dec  6 02:45:16 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:45:16.816 143965 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-3c5d4817-c3d5-45fc-9890-418e779bacb2', 'env', 'PROCESS_TAG=haproxy-3c5d4817-c3d5-45fc-9890-418e779bacb2', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/3c5d4817-c3d5-45fc-9890-418e779bacb2.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Dec  6 02:45:16 np0005548731 nova_compute[232433]: 2025-12-06 07:45:16.918 232437 DEBUG nova.virt.driver [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Emitting event <LifecycleEvent: 1765007116.9178572, d66a90db-5589-4b7f-b805-4096fffbd9f8 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 02:45:16 np0005548731 nova_compute[232433]: 2025-12-06 07:45:16.918 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: d66a90db-5589-4b7f-b805-4096fffbd9f8] VM Started (Lifecycle Event)#033[00m
Dec  6 02:45:16 np0005548731 nova_compute[232433]: 2025-12-06 07:45:16.938 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: d66a90db-5589-4b7f-b805-4096fffbd9f8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:45:16 np0005548731 nova_compute[232433]: 2025-12-06 07:45:16.941 232437 DEBUG nova.virt.driver [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Emitting event <LifecycleEvent: 1765007116.9180188, d66a90db-5589-4b7f-b805-4096fffbd9f8 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 02:45:16 np0005548731 nova_compute[232433]: 2025-12-06 07:45:16.941 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: d66a90db-5589-4b7f-b805-4096fffbd9f8] VM Paused (Lifecycle Event)#033[00m
Dec  6 02:45:16 np0005548731 nova_compute[232433]: 2025-12-06 07:45:16.966 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: d66a90db-5589-4b7f-b805-4096fffbd9f8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:45:16 np0005548731 nova_compute[232433]: 2025-12-06 07:45:16.970 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: d66a90db-5589-4b7f-b805-4096fffbd9f8] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  6 02:45:17 np0005548731 nova_compute[232433]: 2025-12-06 07:45:17.005 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: d66a90db-5589-4b7f-b805-4096fffbd9f8] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  6 02:45:17 np0005548731 podman[301026]: 2025-12-06 07:45:17.165149706 +0000 UTC m=+0.046890445 container create 01474f431b3c7a5ed9a4ee76053146eaba68feca9d0b59e3be1163f6fe66aa50 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3c5d4817-c3d5-45fc-9890-418e779bacb2, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  6 02:45:17 np0005548731 systemd[1]: Started libpod-conmon-01474f431b3c7a5ed9a4ee76053146eaba68feca9d0b59e3be1163f6fe66aa50.scope.
Dec  6 02:45:17 np0005548731 systemd[1]: Started libcrun container.
Dec  6 02:45:17 np0005548731 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cbffc255c26ab06e1bae2717abf4b8e13a91de01b77acaa7585b5b657523d5f4/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec  6 02:45:17 np0005548731 podman[301026]: 2025-12-06 07:45:17.141492889 +0000 UTC m=+0.023233628 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec  6 02:45:17 np0005548731 podman[301026]: 2025-12-06 07:45:17.237147031 +0000 UTC m=+0.118887770 container init 01474f431b3c7a5ed9a4ee76053146eaba68feca9d0b59e3be1163f6fe66aa50 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3c5d4817-c3d5-45fc-9890-418e779bacb2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2)
Dec  6 02:45:17 np0005548731 podman[301026]: 2025-12-06 07:45:17.242624705 +0000 UTC m=+0.124365424 container start 01474f431b3c7a5ed9a4ee76053146eaba68feca9d0b59e3be1163f6fe66aa50 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3c5d4817-c3d5-45fc-9890-418e779bacb2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, io.buildah.version=1.41.3)
Dec  6 02:45:17 np0005548731 neutron-haproxy-ovnmeta-3c5d4817-c3d5-45fc-9890-418e779bacb2[301042]: [NOTICE]   (301046) : New worker (301048) forked
Dec  6 02:45:17 np0005548731 neutron-haproxy-ovnmeta-3c5d4817-c3d5-45fc-9890-418e779bacb2[301042]: [NOTICE]   (301046) : Loading success.
Dec  6 02:45:17 np0005548731 nova_compute[232433]: 2025-12-06 07:45:17.326 232437 DEBUG nova.network.neutron [req-ece76451-49cb-4643-8a22-35e89e40715f req-8d603b86-2b10-4c5b-97e4-17022622ef4a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: d66a90db-5589-4b7f-b805-4096fffbd9f8] Updated VIF entry in instance network info cache for port dc0dd2b5-bfe6-4c8b-99b1-9a0ea59a4bf5. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  6 02:45:17 np0005548731 nova_compute[232433]: 2025-12-06 07:45:17.327 232437 DEBUG nova.network.neutron [req-ece76451-49cb-4643-8a22-35e89e40715f req-8d603b86-2b10-4c5b-97e4-17022622ef4a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: d66a90db-5589-4b7f-b805-4096fffbd9f8] Updating instance_info_cache with network_info: [{"id": "dc0dd2b5-bfe6-4c8b-99b1-9a0ea59a4bf5", "address": "fa:16:3e:d2:01:47", "network": {"id": "3c5d4817-c3d5-45fc-9890-418e779bacb2", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-1824643193-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "741dc47f9ced423cbd99fd6f9d32904f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdc0dd2b5-bf", "ovs_interfaceid": "dc0dd2b5-bfe6-4c8b-99b1-9a0ea59a4bf5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 02:45:17 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:45:17 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:45:17 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:45:17.394 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:45:17 np0005548731 nova_compute[232433]: 2025-12-06 07:45:17.590 232437 DEBUG oslo_concurrency.lockutils [req-ece76451-49cb-4643-8a22-35e89e40715f req-8d603b86-2b10-4c5b-97e4-17022622ef4a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-d66a90db-5589-4b7f-b805-4096fffbd9f8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 02:45:17 np0005548731 nova_compute[232433]: 2025-12-06 07:45:17.829 232437 DEBUG nova.compute.manager [req-362ac607-ae39-48c5-935d-312c83ecef7a req-f9c837f7-37ac-4014-a2b7-71658d66a563 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: d66a90db-5589-4b7f-b805-4096fffbd9f8] Received event network-vif-plugged-dc0dd2b5-bfe6-4c8b-99b1-9a0ea59a4bf5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:45:17 np0005548731 nova_compute[232433]: 2025-12-06 07:45:17.830 232437 DEBUG oslo_concurrency.lockutils [req-362ac607-ae39-48c5-935d-312c83ecef7a req-f9c837f7-37ac-4014-a2b7-71658d66a563 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "d66a90db-5589-4b7f-b805-4096fffbd9f8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:45:17 np0005548731 nova_compute[232433]: 2025-12-06 07:45:17.831 232437 DEBUG oslo_concurrency.lockutils [req-362ac607-ae39-48c5-935d-312c83ecef7a req-f9c837f7-37ac-4014-a2b7-71658d66a563 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "d66a90db-5589-4b7f-b805-4096fffbd9f8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:45:17 np0005548731 nova_compute[232433]: 2025-12-06 07:45:17.831 232437 DEBUG oslo_concurrency.lockutils [req-362ac607-ae39-48c5-935d-312c83ecef7a req-f9c837f7-37ac-4014-a2b7-71658d66a563 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "d66a90db-5589-4b7f-b805-4096fffbd9f8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:45:17 np0005548731 nova_compute[232433]: 2025-12-06 07:45:17.832 232437 DEBUG nova.compute.manager [req-362ac607-ae39-48c5-935d-312c83ecef7a req-f9c837f7-37ac-4014-a2b7-71658d66a563 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: d66a90db-5589-4b7f-b805-4096fffbd9f8] Processing event network-vif-plugged-dc0dd2b5-bfe6-4c8b-99b1-9a0ea59a4bf5 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Dec  6 02:45:17 np0005548731 nova_compute[232433]: 2025-12-06 07:45:17.833 232437 DEBUG nova.compute.manager [None req-4b7cc1b3-b207-498c-8b20-75bb88b5e14d 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] [instance: d66a90db-5589-4b7f-b805-4096fffbd9f8] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec  6 02:45:17 np0005548731 nova_compute[232433]: 2025-12-06 07:45:17.838 232437 DEBUG nova.virt.driver [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Emitting event <LifecycleEvent: 1765007117.8382952, d66a90db-5589-4b7f-b805-4096fffbd9f8 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 02:45:17 np0005548731 nova_compute[232433]: 2025-12-06 07:45:17.839 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: d66a90db-5589-4b7f-b805-4096fffbd9f8] VM Resumed (Lifecycle Event)#033[00m
Dec  6 02:45:17 np0005548731 nova_compute[232433]: 2025-12-06 07:45:17.842 232437 DEBUG nova.virt.libvirt.driver [None req-4b7cc1b3-b207-498c-8b20-75bb88b5e14d 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] [instance: d66a90db-5589-4b7f-b805-4096fffbd9f8] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Dec  6 02:45:17 np0005548731 nova_compute[232433]: 2025-12-06 07:45:17.847 232437 INFO nova.virt.libvirt.driver [-] [instance: d66a90db-5589-4b7f-b805-4096fffbd9f8] Instance spawned successfully.#033[00m
Dec  6 02:45:17 np0005548731 nova_compute[232433]: 2025-12-06 07:45:17.848 232437 DEBUG nova.virt.libvirt.driver [None req-4b7cc1b3-b207-498c-8b20-75bb88b5e14d 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] [instance: d66a90db-5589-4b7f-b805-4096fffbd9f8] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Dec  6 02:45:17 np0005548731 nova_compute[232433]: 2025-12-06 07:45:17.882 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: d66a90db-5589-4b7f-b805-4096fffbd9f8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:45:17 np0005548731 nova_compute[232433]: 2025-12-06 07:45:17.888 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: d66a90db-5589-4b7f-b805-4096fffbd9f8] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  6 02:45:17 np0005548731 nova_compute[232433]: 2025-12-06 07:45:17.893 232437 DEBUG nova.virt.libvirt.driver [None req-4b7cc1b3-b207-498c-8b20-75bb88b5e14d 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] [instance: d66a90db-5589-4b7f-b805-4096fffbd9f8] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 02:45:17 np0005548731 nova_compute[232433]: 2025-12-06 07:45:17.893 232437 DEBUG nova.virt.libvirt.driver [None req-4b7cc1b3-b207-498c-8b20-75bb88b5e14d 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] [instance: d66a90db-5589-4b7f-b805-4096fffbd9f8] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 02:45:17 np0005548731 nova_compute[232433]: 2025-12-06 07:45:17.894 232437 DEBUG nova.virt.libvirt.driver [None req-4b7cc1b3-b207-498c-8b20-75bb88b5e14d 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] [instance: d66a90db-5589-4b7f-b805-4096fffbd9f8] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 02:45:17 np0005548731 nova_compute[232433]: 2025-12-06 07:45:17.894 232437 DEBUG nova.virt.libvirt.driver [None req-4b7cc1b3-b207-498c-8b20-75bb88b5e14d 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] [instance: d66a90db-5589-4b7f-b805-4096fffbd9f8] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 02:45:17 np0005548731 nova_compute[232433]: 2025-12-06 07:45:17.895 232437 DEBUG nova.virt.libvirt.driver [None req-4b7cc1b3-b207-498c-8b20-75bb88b5e14d 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] [instance: d66a90db-5589-4b7f-b805-4096fffbd9f8] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 02:45:17 np0005548731 nova_compute[232433]: 2025-12-06 07:45:17.895 232437 DEBUG nova.virt.libvirt.driver [None req-4b7cc1b3-b207-498c-8b20-75bb88b5e14d 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] [instance: d66a90db-5589-4b7f-b805-4096fffbd9f8] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 02:45:17 np0005548731 nova_compute[232433]: 2025-12-06 07:45:17.956 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: d66a90db-5589-4b7f-b805-4096fffbd9f8] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  6 02:45:18 np0005548731 nova_compute[232433]: 2025-12-06 07:45:18.034 232437 INFO nova.compute.manager [None req-4b7cc1b3-b207-498c-8b20-75bb88b5e14d 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] [instance: d66a90db-5589-4b7f-b805-4096fffbd9f8] Took 13.63 seconds to spawn the instance on the hypervisor.#033[00m
Dec  6 02:45:18 np0005548731 nova_compute[232433]: 2025-12-06 07:45:18.034 232437 DEBUG nova.compute.manager [None req-4b7cc1b3-b207-498c-8b20-75bb88b5e14d 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] [instance: d66a90db-5589-4b7f-b805-4096fffbd9f8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:45:18 np0005548731 nova_compute[232433]: 2025-12-06 07:45:18.099 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:45:18 np0005548731 nova_compute[232433]: 2025-12-06 07:45:18.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:45:18 np0005548731 nova_compute[232433]: 2025-12-06 07:45:18.141 232437 INFO nova.compute.manager [None req-4b7cc1b3-b207-498c-8b20-75bb88b5e14d 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] [instance: d66a90db-5589-4b7f-b805-4096fffbd9f8] Took 15.11 seconds to build instance.#033[00m
Dec  6 02:45:18 np0005548731 nova_compute[232433]: 2025-12-06 07:45:18.159 232437 DEBUG oslo_concurrency.lockutils [None req-4b7cc1b3-b207-498c-8b20-75bb88b5e14d 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] Lock "d66a90db-5589-4b7f-b805-4096fffbd9f8" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 15.230s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:45:18 np0005548731 nova_compute[232433]: 2025-12-06 07:45:18.417 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:45:18 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:45:18 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:45:18 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:45:18.489 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:45:19 np0005548731 nova_compute[232433]: 2025-12-06 07:45:19.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:45:19 np0005548731 nova_compute[232433]: 2025-12-06 07:45:19.105 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  6 02:45:19 np0005548731 nova_compute[232433]: 2025-12-06 07:45:19.148 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Dec  6 02:45:19 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:45:19 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:45:19 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:45:19.396 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:45:20 np0005548731 nova_compute[232433]: 2025-12-06 07:45:20.003 232437 DEBUG nova.compute.manager [req-a5d4482b-9e5b-4e8c-b13a-a4c613715679 req-8555a7e1-2399-4a79-977b-50eff9d1e78b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: d66a90db-5589-4b7f-b805-4096fffbd9f8] Received event network-vif-plugged-dc0dd2b5-bfe6-4c8b-99b1-9a0ea59a4bf5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:45:20 np0005548731 nova_compute[232433]: 2025-12-06 07:45:20.004 232437 DEBUG oslo_concurrency.lockutils [req-a5d4482b-9e5b-4e8c-b13a-a4c613715679 req-8555a7e1-2399-4a79-977b-50eff9d1e78b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "d66a90db-5589-4b7f-b805-4096fffbd9f8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:45:20 np0005548731 nova_compute[232433]: 2025-12-06 07:45:20.004 232437 DEBUG oslo_concurrency.lockutils [req-a5d4482b-9e5b-4e8c-b13a-a4c613715679 req-8555a7e1-2399-4a79-977b-50eff9d1e78b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "d66a90db-5589-4b7f-b805-4096fffbd9f8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:45:20 np0005548731 nova_compute[232433]: 2025-12-06 07:45:20.004 232437 DEBUG oslo_concurrency.lockutils [req-a5d4482b-9e5b-4e8c-b13a-a4c613715679 req-8555a7e1-2399-4a79-977b-50eff9d1e78b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "d66a90db-5589-4b7f-b805-4096fffbd9f8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:45:20 np0005548731 nova_compute[232433]: 2025-12-06 07:45:20.004 232437 DEBUG nova.compute.manager [req-a5d4482b-9e5b-4e8c-b13a-a4c613715679 req-8555a7e1-2399-4a79-977b-50eff9d1e78b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: d66a90db-5589-4b7f-b805-4096fffbd9f8] No waiting events found dispatching network-vif-plugged-dc0dd2b5-bfe6-4c8b-99b1-9a0ea59a4bf5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 02:45:20 np0005548731 nova_compute[232433]: 2025-12-06 07:45:20.004 232437 WARNING nova.compute.manager [req-a5d4482b-9e5b-4e8c-b13a-a4c613715679 req-8555a7e1-2399-4a79-977b-50eff9d1e78b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: d66a90db-5589-4b7f-b805-4096fffbd9f8] Received unexpected event network-vif-plugged-dc0dd2b5-bfe6-4c8b-99b1-9a0ea59a4bf5 for instance with vm_state active and task_state None.#033[00m
Dec  6 02:45:20 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e336 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:45:20 np0005548731 nova_compute[232433]: 2025-12-06 07:45:20.074 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:45:20 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:45:20 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:45:20 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:45:20.491 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:45:21 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:45:21 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 02:45:21 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:45:21.399 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 02:45:22 np0005548731 nova_compute[232433]: 2025-12-06 07:45:22.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:45:22 np0005548731 nova_compute[232433]: 2025-12-06 07:45:22.104 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  6 02:45:22 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:45:22 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:45:22 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:45:22.493 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:45:23 np0005548731 NetworkManager[49182]: <info>  [1765007123.0706] manager: (patch-br-int-to-provnet-9e78c1a1-68f4-477a-abaa-13a98bde06e5): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/332)
Dec  6 02:45:23 np0005548731 NetworkManager[49182]: <info>  [1765007123.0718] manager: (patch-provnet-9e78c1a1-68f4-477a-abaa-13a98bde06e5-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/333)
Dec  6 02:45:23 np0005548731 nova_compute[232433]: 2025-12-06 07:45:23.069 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:45:23 np0005548731 nova_compute[232433]: 2025-12-06 07:45:23.153 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:45:23 np0005548731 ovn_controller[133927]: 2025-12-06T07:45:23Z|00720|binding|INFO|Releasing lport dc336d05-182d-42ac-ab5e-a73bf30a0662 from this chassis (sb_readonly=0)
Dec  6 02:45:23 np0005548731 ovn_controller[133927]: 2025-12-06T07:45:23Z|00721|binding|INFO|Releasing lport f1ca157c-f88b-4351-b03a-b04a75537062 from this chassis (sb_readonly=0)
Dec  6 02:45:23 np0005548731 nova_compute[232433]: 2025-12-06 07:45:23.164 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:45:23 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:45:23 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:45:23 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:45:23.403 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:45:23 np0005548731 nova_compute[232433]: 2025-12-06 07:45:23.421 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:45:23 np0005548731 nova_compute[232433]: 2025-12-06 07:45:23.751 232437 DEBUG nova.compute.manager [req-9cc20513-30c6-4431-9286-dde64fa296b5 req-8bca7a8e-4f1a-43e2-ac8b-8d45d75f18c8 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: d66a90db-5589-4b7f-b805-4096fffbd9f8] Received event network-changed-dc0dd2b5-bfe6-4c8b-99b1-9a0ea59a4bf5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:45:23 np0005548731 nova_compute[232433]: 2025-12-06 07:45:23.751 232437 DEBUG nova.compute.manager [req-9cc20513-30c6-4431-9286-dde64fa296b5 req-8bca7a8e-4f1a-43e2-ac8b-8d45d75f18c8 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: d66a90db-5589-4b7f-b805-4096fffbd9f8] Refreshing instance network info cache due to event network-changed-dc0dd2b5-bfe6-4c8b-99b1-9a0ea59a4bf5. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  6 02:45:23 np0005548731 nova_compute[232433]: 2025-12-06 07:45:23.751 232437 DEBUG oslo_concurrency.lockutils [req-9cc20513-30c6-4431-9286-dde64fa296b5 req-8bca7a8e-4f1a-43e2-ac8b-8d45d75f18c8 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-d66a90db-5589-4b7f-b805-4096fffbd9f8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 02:45:23 np0005548731 nova_compute[232433]: 2025-12-06 07:45:23.751 232437 DEBUG oslo_concurrency.lockutils [req-9cc20513-30c6-4431-9286-dde64fa296b5 req-8bca7a8e-4f1a-43e2-ac8b-8d45d75f18c8 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-d66a90db-5589-4b7f-b805-4096fffbd9f8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 02:45:23 np0005548731 nova_compute[232433]: 2025-12-06 07:45:23.751 232437 DEBUG nova.network.neutron [req-9cc20513-30c6-4431-9286-dde64fa296b5 req-8bca7a8e-4f1a-43e2-ac8b-8d45d75f18c8 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: d66a90db-5589-4b7f-b805-4096fffbd9f8] Refreshing network info cache for port dc0dd2b5-bfe6-4c8b-99b1-9a0ea59a4bf5 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  6 02:45:24 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e337 e337: 3 total, 3 up, 3 in
Dec  6 02:45:24 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:45:24 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:45:24 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:45:24.495 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:45:25 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e337 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:45:25 np0005548731 nova_compute[232433]: 2025-12-06 07:45:25.077 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:45:25 np0005548731 nova_compute[232433]: 2025-12-06 07:45:25.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:45:25 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e338 e338: 3 total, 3 up, 3 in
Dec  6 02:45:25 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:45:25 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:45:25 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:45:25.406 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:45:26 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:45:26 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:45:26 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:45:26.496 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:45:26 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e339 e339: 3 total, 3 up, 3 in
Dec  6 02:45:27 np0005548731 nova_compute[232433]: 2025-12-06 07:45:27.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:45:27 np0005548731 nova_compute[232433]: 2025-12-06 07:45:27.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:45:27 np0005548731 nova_compute[232433]: 2025-12-06 07:45:27.274 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:45:27 np0005548731 nova_compute[232433]: 2025-12-06 07:45:27.274 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:45:27 np0005548731 nova_compute[232433]: 2025-12-06 07:45:27.274 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:45:27 np0005548731 nova_compute[232433]: 2025-12-06 07:45:27.275 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  6 02:45:27 np0005548731 nova_compute[232433]: 2025-12-06 07:45:27.275 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:45:27 np0005548731 nova_compute[232433]: 2025-12-06 07:45:27.344 232437 DEBUG nova.network.neutron [req-9cc20513-30c6-4431-9286-dde64fa296b5 req-8bca7a8e-4f1a-43e2-ac8b-8d45d75f18c8 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: d66a90db-5589-4b7f-b805-4096fffbd9f8] Updated VIF entry in instance network info cache for port dc0dd2b5-bfe6-4c8b-99b1-9a0ea59a4bf5. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  6 02:45:27 np0005548731 nova_compute[232433]: 2025-12-06 07:45:27.345 232437 DEBUG nova.network.neutron [req-9cc20513-30c6-4431-9286-dde64fa296b5 req-8bca7a8e-4f1a-43e2-ac8b-8d45d75f18c8 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: d66a90db-5589-4b7f-b805-4096fffbd9f8] Updating instance_info_cache with network_info: [{"id": "dc0dd2b5-bfe6-4c8b-99b1-9a0ea59a4bf5", "address": "fa:16:3e:d2:01:47", "network": {"id": "3c5d4817-c3d5-45fc-9890-418e779bacb2", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-1824643193-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.198", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "741dc47f9ced423cbd99fd6f9d32904f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdc0dd2b5-bf", "ovs_interfaceid": "dc0dd2b5-bfe6-4c8b-99b1-9a0ea59a4bf5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 02:45:27 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:45:27 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:45:27 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:45:27.408 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:45:27 np0005548731 nova_compute[232433]: 2025-12-06 07:45:27.429 232437 DEBUG oslo_concurrency.lockutils [req-9cc20513-30c6-4431-9286-dde64fa296b5 req-8bca7a8e-4f1a-43e2-ac8b-8d45d75f18c8 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-d66a90db-5589-4b7f-b805-4096fffbd9f8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 02:45:27 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 02:45:27 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/644100411' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 02:45:27 np0005548731 nova_compute[232433]: 2025-12-06 07:45:27.747 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.472s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:45:27 np0005548731 nova_compute[232433]: 2025-12-06 07:45:27.833 232437 DEBUG nova.virt.libvirt.driver [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] skipping disk for instance-00000099 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  6 02:45:27 np0005548731 nova_compute[232433]: 2025-12-06 07:45:27.834 232437 DEBUG nova.virt.libvirt.driver [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] skipping disk for instance-00000099 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  6 02:45:27 np0005548731 nova_compute[232433]: 2025-12-06 07:45:27.837 232437 DEBUG nova.virt.libvirt.driver [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] skipping disk for instance-0000009e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  6 02:45:27 np0005548731 nova_compute[232433]: 2025-12-06 07:45:27.837 232437 DEBUG nova.virt.libvirt.driver [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] skipping disk for instance-0000009e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  6 02:45:27 np0005548731 nova_compute[232433]: 2025-12-06 07:45:27.982 232437 WARNING nova.virt.libvirt.driver [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  6 02:45:27 np0005548731 nova_compute[232433]: 2025-12-06 07:45:27.984 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=3957MB free_disk=20.80970001220703GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, 
"label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  6 02:45:27 np0005548731 nova_compute[232433]: 2025-12-06 07:45:27.984 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:45:27 np0005548731 nova_compute[232433]: 2025-12-06 07:45:27.984 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:45:28 np0005548731 nova_compute[232433]: 2025-12-06 07:45:28.152 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Instance f67d89a8-836a-4f47-af8d-37cf99529275 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  6 02:45:28 np0005548731 nova_compute[232433]: 2025-12-06 07:45:28.152 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Instance d66a90db-5589-4b7f-b805-4096fffbd9f8 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  6 02:45:28 np0005548731 nova_compute[232433]: 2025-12-06 07:45:28.152 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  6 02:45:28 np0005548731 nova_compute[232433]: 2025-12-06 07:45:28.152 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7680MB used_ram=768MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  6 02:45:28 np0005548731 nova_compute[232433]: 2025-12-06 07:45:28.210 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:45:28 np0005548731 nova_compute[232433]: 2025-12-06 07:45:28.422 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:45:28 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:45:28 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:45:28 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:45:28.498 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:45:28 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 02:45:28 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3247020229' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 02:45:28 np0005548731 nova_compute[232433]: 2025-12-06 07:45:28.681 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.471s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:45:28 np0005548731 nova_compute[232433]: 2025-12-06 07:45:28.687 232437 DEBUG nova.compute.provider_tree [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Inventory has not changed in ProviderTree for provider: 6d00757a-082f-486d-ae84-869a2ba2e6e7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  6 02:45:28 np0005548731 nova_compute[232433]: 2025-12-06 07:45:28.713 232437 DEBUG nova.scheduler.client.report [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Inventory has not changed for provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  6 02:45:28 np0005548731 nova_compute[232433]: 2025-12-06 07:45:28.739 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  6 02:45:28 np0005548731 nova_compute[232433]: 2025-12-06 07:45:28.739 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.755s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:45:29 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:45:29 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:45:29 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:45:29.412 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:45:29 np0005548731 nova_compute[232433]: 2025-12-06 07:45:29.740 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:45:30 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e339 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:45:30 np0005548731 nova_compute[232433]: 2025-12-06 07:45:30.116 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:45:30 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:45:30 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:45:30 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:45:30.500 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:45:30 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:45:30.598 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=59, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:ec:b3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '32:72:e7:89:e0:7d'}, ipsec=False) old=SB_Global(nb_cfg=58) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 02:45:30 np0005548731 nova_compute[232433]: 2025-12-06 07:45:30.599 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:45:30 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:45:30.600 143965 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec  6 02:45:31 np0005548731 nova_compute[232433]: 2025-12-06 07:45:31.100 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:45:31 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  6 02:45:31 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 02:45:31 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  6 02:45:31 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:45:31 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 02:45:31 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:45:31.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 02:45:32 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:45:32 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:45:32 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:45:32.502 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:45:32 np0005548731 ovn_controller[133927]: 2025-12-06T07:45:32Z|00095|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:d2:01:47 10.100.0.10
Dec  6 02:45:32 np0005548731 ovn_controller[133927]: 2025-12-06T07:45:32Z|00096|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:d2:01:47 10.100.0.10
Dec  6 02:45:33 np0005548731 nova_compute[232433]: 2025-12-06 07:45:33.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:45:33 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:45:33 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:45:33 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:45:33.418 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:45:33 np0005548731 nova_compute[232433]: 2025-12-06 07:45:33.456 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:45:33 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:45:33.601 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=9f96b960-b4f2-40bd-ae99-08121f5e8b78, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '59'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:45:34 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:45:34 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:45:34 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:45:34.504 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:45:34 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e340 e340: 3 total, 3 up, 3 in
Dec  6 02:45:35 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e340 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:45:35 np0005548731 nova_compute[232433]: 2025-12-06 07:45:35.120 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:45:35 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:45:35 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:45:35 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:45:35.421 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:45:35 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 02:45:35 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 02:45:36 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:45:36 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:45:36 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:45:36.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:45:37 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:45:37 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:45:37 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:45:37.425 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:45:38 np0005548731 nova_compute[232433]: 2025-12-06 07:45:38.500 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:45:38 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:45:38 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:45:38 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:45:38.508 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:45:39 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:45:39 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:45:39 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:45:39.428 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:45:40 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e340 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:45:40 np0005548731 nova_compute[232433]: 2025-12-06 07:45:40.123 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:45:40 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:45:40 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:45:40 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:45:40.510 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:45:41 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:45:41 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:45:41 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:45:41.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:45:42 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:45:42 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:45:42 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:45:42.512 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:45:42 np0005548731 nova_compute[232433]: 2025-12-06 07:45:42.641 232437 DEBUG oslo_concurrency.lockutils [None req-e0f1825d-ad98-4f67-b7f4-08f9d01961aa 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] Acquiring lock "d66a90db-5589-4b7f-b805-4096fffbd9f8" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:45:42 np0005548731 nova_compute[232433]: 2025-12-06 07:45:42.641 232437 DEBUG oslo_concurrency.lockutils [None req-e0f1825d-ad98-4f67-b7f4-08f9d01961aa 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] Lock "d66a90db-5589-4b7f-b805-4096fffbd9f8" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:45:42 np0005548731 nova_compute[232433]: 2025-12-06 07:45:42.670 232437 DEBUG nova.objects.instance [None req-e0f1825d-ad98-4f67-b7f4-08f9d01961aa 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] Lazy-loading 'flavor' on Instance uuid d66a90db-5589-4b7f-b805-4096fffbd9f8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:45:42 np0005548731 nova_compute[232433]: 2025-12-06 07:45:42.790 232437 DEBUG oslo_concurrency.lockutils [None req-e0f1825d-ad98-4f67-b7f4-08f9d01961aa 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] Lock "d66a90db-5589-4b7f-b805-4096fffbd9f8" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.149s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:45:42 np0005548731 podman[301401]: 2025-12-06 07:45:42.926509133 +0000 UTC m=+0.066187985 container health_status ae4fd08cdec31d6ae2a6b91ffff7deb6ca5a8375cb52ee4b685cc6f25796824e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=multipathd, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Dec  6 02:45:42 np0005548731 podman[301399]: 2025-12-06 07:45:42.926502163 +0000 UTC m=+0.063854219 container health_status 26c2ce9bdb83c6db5ccad7f5b2a7cb4e9354ab0235ad2f942ea42c8feda2142b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec  6 02:45:42 np0005548731 podman[301400]: 2025-12-06 07:45:42.94607534 +0000 UTC m=+0.080461042 container health_status a118c3becf56cfbcae208065771ebf10a65162bb7b96389971293e534823fb78 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  6 02:45:43 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:45:43 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:45:43 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:45:43.433 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:45:43 np0005548731 nova_compute[232433]: 2025-12-06 07:45:43.437 232437 DEBUG oslo_concurrency.lockutils [None req-e0f1825d-ad98-4f67-b7f4-08f9d01961aa 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] Acquiring lock "d66a90db-5589-4b7f-b805-4096fffbd9f8" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:45:43 np0005548731 nova_compute[232433]: 2025-12-06 07:45:43.438 232437 DEBUG oslo_concurrency.lockutils [None req-e0f1825d-ad98-4f67-b7f4-08f9d01961aa 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] Lock "d66a90db-5589-4b7f-b805-4096fffbd9f8" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:45:43 np0005548731 nova_compute[232433]: 2025-12-06 07:45:43.438 232437 INFO nova.compute.manager [None req-e0f1825d-ad98-4f67-b7f4-08f9d01961aa 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] [instance: d66a90db-5589-4b7f-b805-4096fffbd9f8] Attaching volume 15c50314-82b8-4e1c-81d1-ad7ba14c56de to /dev/vdb#033[00m
Dec  6 02:45:43 np0005548731 nova_compute[232433]: 2025-12-06 07:45:43.501 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:45:43 np0005548731 nova_compute[232433]: 2025-12-06 07:45:43.705 232437 DEBUG os_brick.utils [None req-e0f1825d-ad98-4f67-b7f4-08f9d01961aa 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.102', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-2.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Dec  6 02:45:43 np0005548731 nova_compute[232433]: 2025-12-06 07:45:43.707 237736 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:45:43 np0005548731 nova_compute[232433]: 2025-12-06 07:45:43.717 237736 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.010s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:45:43 np0005548731 nova_compute[232433]: 2025-12-06 07:45:43.717 237736 DEBUG oslo.privsep.daemon [-] privsep: reply[fc63cd30-478e-4129-9bf6-6dbc0939d9ac]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:45:43 np0005548731 nova_compute[232433]: 2025-12-06 07:45:43.718 237736 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:45:43 np0005548731 nova_compute[232433]: 2025-12-06 07:45:43.725 237736 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.007s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:45:43 np0005548731 nova_compute[232433]: 2025-12-06 07:45:43.726 237736 DEBUG oslo.privsep.daemon [-] privsep: reply[cd7e15fe-9989-4867-87dc-4782cd4f3f8f]: (4, ('InitiatorName=iqn.1994-05.com.redhat:63778d5959f0', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:45:43 np0005548731 nova_compute[232433]: 2025-12-06 07:45:43.727 237736 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:45:43 np0005548731 nova_compute[232433]: 2025-12-06 07:45:43.734 237736 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.007s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:45:43 np0005548731 nova_compute[232433]: 2025-12-06 07:45:43.734 237736 DEBUG oslo.privsep.daemon [-] privsep: reply[5cf45be6-4746-4862-afae-826066cf0d5f]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:45:43 np0005548731 nova_compute[232433]: 2025-12-06 07:45:43.735 237736 DEBUG oslo.privsep.daemon [-] privsep: reply[8f52d141-5c31-40a0-8cef-92b1fb5f1b52]: (4, 'ec2492e2-e0d1-43d3-b74f-744a8272a0e6') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:45:43 np0005548731 nova_compute[232433]: 2025-12-06 07:45:43.736 232437 DEBUG oslo_concurrency.processutils [None req-e0f1825d-ad98-4f67-b7f4-08f9d01961aa 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:45:43 np0005548731 nova_compute[232433]: 2025-12-06 07:45:43.760 232437 DEBUG oslo_concurrency.processutils [None req-e0f1825d-ad98-4f67-b7f4-08f9d01961aa 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] CMD "nvme version" returned: 0 in 0.025s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:45:43 np0005548731 nova_compute[232433]: 2025-12-06 07:45:43.763 232437 DEBUG os_brick.initiator.connectors.lightos [None req-e0f1825d-ad98-4f67-b7f4-08f9d01961aa 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Dec  6 02:45:43 np0005548731 nova_compute[232433]: 2025-12-06 07:45:43.763 232437 DEBUG os_brick.initiator.connectors.lightos [None req-e0f1825d-ad98-4f67-b7f4-08f9d01961aa 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Dec  6 02:45:43 np0005548731 nova_compute[232433]: 2025-12-06 07:45:43.763 232437 DEBUG os_brick.initiator.connectors.lightos [None req-e0f1825d-ad98-4f67-b7f4-08f9d01961aa 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:bf3e0a14-a5f8-4123-aa26-e7cad37b879a dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Dec  6 02:45:43 np0005548731 nova_compute[232433]: 2025-12-06 07:45:43.763 232437 DEBUG os_brick.utils [None req-e0f1825d-ad98-4f67-b7f4-08f9d01961aa 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] <== get_connector_properties: return (57ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.102', 'host': 'compute-2.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:63778d5959f0', 'do_local_attach': False, 'nvme_hostid': 'bf3e0a14-a5f8-4123-aa26-e7cad37b879a', 'system uuid': 'ec2492e2-e0d1-43d3-b74f-744a8272a0e6', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:bf3e0a14-a5f8-4123-aa26-e7cad37b879a', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Dec  6 02:45:43 np0005548731 nova_compute[232433]: 2025-12-06 07:45:43.764 232437 DEBUG nova.virt.block_device [None req-e0f1825d-ad98-4f67-b7f4-08f9d01961aa 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] [instance: d66a90db-5589-4b7f-b805-4096fffbd9f8] Updating existing volume attachment record: 0299eeaa-32ac-40fc-92e9-a9d346380c58 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Dec  6 02:45:44 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:45:44 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:45:44 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:45:44.514 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:45:44 np0005548731 nova_compute[232433]: 2025-12-06 07:45:44.803 232437 DEBUG nova.objects.instance [None req-e0f1825d-ad98-4f67-b7f4-08f9d01961aa 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] Lazy-loading 'flavor' on Instance uuid d66a90db-5589-4b7f-b805-4096fffbd9f8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:45:44 np0005548731 nova_compute[232433]: 2025-12-06 07:45:44.839 232437 DEBUG nova.virt.libvirt.driver [None req-e0f1825d-ad98-4f67-b7f4-08f9d01961aa 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] [instance: d66a90db-5589-4b7f-b805-4096fffbd9f8] Attempting to attach volume 15c50314-82b8-4e1c-81d1-ad7ba14c56de with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168#033[00m
Dec  6 02:45:44 np0005548731 nova_compute[232433]: 2025-12-06 07:45:44.841 232437 DEBUG nova.virt.libvirt.guest [None req-e0f1825d-ad98-4f67-b7f4-08f9d01961aa 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] attach device xml: <disk type="network" device="disk">
Dec  6 02:45:44 np0005548731 nova_compute[232433]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Dec  6 02:45:44 np0005548731 nova_compute[232433]:  <source protocol="rbd" name="volumes/volume-15c50314-82b8-4e1c-81d1-ad7ba14c56de">
Dec  6 02:45:44 np0005548731 nova_compute[232433]:    <host name="192.168.122.100" port="6789"/>
Dec  6 02:45:44 np0005548731 nova_compute[232433]:    <host name="192.168.122.102" port="6789"/>
Dec  6 02:45:44 np0005548731 nova_compute[232433]:    <host name="192.168.122.101" port="6789"/>
Dec  6 02:45:44 np0005548731 nova_compute[232433]:  </source>
Dec  6 02:45:44 np0005548731 nova_compute[232433]:  <auth username="openstack">
Dec  6 02:45:44 np0005548731 nova_compute[232433]:    <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec  6 02:45:44 np0005548731 nova_compute[232433]:  </auth>
Dec  6 02:45:44 np0005548731 nova_compute[232433]:  <target dev="vdb" bus="virtio"/>
Dec  6 02:45:44 np0005548731 nova_compute[232433]:  <serial>15c50314-82b8-4e1c-81d1-ad7ba14c56de</serial>
Dec  6 02:45:44 np0005548731 nova_compute[232433]: </disk>
Dec  6 02:45:44 np0005548731 nova_compute[232433]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339#033[00m
Dec  6 02:45:44 np0005548731 nova_compute[232433]: 2025-12-06 07:45:44.978 232437 DEBUG nova.virt.libvirt.driver [None req-e0f1825d-ad98-4f67-b7f4-08f9d01961aa 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  6 02:45:44 np0005548731 nova_compute[232433]: 2025-12-06 07:45:44.979 232437 DEBUG nova.virt.libvirt.driver [None req-e0f1825d-ad98-4f67-b7f4-08f9d01961aa 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  6 02:45:44 np0005548731 nova_compute[232433]: 2025-12-06 07:45:44.979 232437 DEBUG nova.virt.libvirt.driver [None req-e0f1825d-ad98-4f67-b7f4-08f9d01961aa 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  6 02:45:44 np0005548731 nova_compute[232433]: 2025-12-06 07:45:44.979 232437 DEBUG nova.virt.libvirt.driver [None req-e0f1825d-ad98-4f67-b7f4-08f9d01961aa 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] No VIF found with MAC fa:16:3e:d2:01:47, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec  6 02:45:45 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e340 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:45:45 np0005548731 nova_compute[232433]: 2025-12-06 07:45:45.126 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:45:45 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:45:45 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:45:45 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:45:45.436 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:45:45 np0005548731 nova_compute[232433]: 2025-12-06 07:45:45.589 232437 DEBUG oslo_concurrency.lockutils [None req-e0f1825d-ad98-4f67-b7f4-08f9d01961aa 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] Lock "d66a90db-5589-4b7f-b805-4096fffbd9f8" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 2.151s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:45:46 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:45:46 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:45:46 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:45:46.518 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:45:47 np0005548731 nova_compute[232433]: 2025-12-06 07:45:47.112 232437 INFO nova.compute.manager [None req-bc0a37d7-6f4c-458a-94a9-c8ac33db7262 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] [instance: f67d89a8-836a-4f47-af8d-37cf99529275] Pausing#033[00m
Dec  6 02:45:47 np0005548731 nova_compute[232433]: 2025-12-06 07:45:47.115 232437 DEBUG nova.objects.instance [None req-bc0a37d7-6f4c-458a-94a9-c8ac33db7262 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] Lazy-loading 'flavor' on Instance uuid f67d89a8-836a-4f47-af8d-37cf99529275 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:45:47 np0005548731 nova_compute[232433]: 2025-12-06 07:45:47.119 232437 DEBUG oslo_concurrency.lockutils [None req-41f58143-a523-4bf5-8ae2-c6f980e0727b 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] Acquiring lock "d66a90db-5589-4b7f-b805-4096fffbd9f8" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:45:47 np0005548731 nova_compute[232433]: 2025-12-06 07:45:47.120 232437 DEBUG oslo_concurrency.lockutils [None req-41f58143-a523-4bf5-8ae2-c6f980e0727b 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] Lock "d66a90db-5589-4b7f-b805-4096fffbd9f8" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:45:47 np0005548731 nova_compute[232433]: 2025-12-06 07:45:47.154 232437 INFO nova.compute.manager [None req-41f58143-a523-4bf5-8ae2-c6f980e0727b 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] [instance: d66a90db-5589-4b7f-b805-4096fffbd9f8] Detaching volume 15c50314-82b8-4e1c-81d1-ad7ba14c56de#033[00m
Dec  6 02:45:47 np0005548731 nova_compute[232433]: 2025-12-06 07:45:47.166 232437 DEBUG nova.virt.driver [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Emitting event <LifecycleEvent: 1765007147.1663032, f67d89a8-836a-4f47-af8d-37cf99529275 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 02:45:47 np0005548731 nova_compute[232433]: 2025-12-06 07:45:47.167 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: f67d89a8-836a-4f47-af8d-37cf99529275] VM Paused (Lifecycle Event)#033[00m
Dec  6 02:45:47 np0005548731 nova_compute[232433]: 2025-12-06 07:45:47.172 232437 DEBUG nova.compute.manager [None req-bc0a37d7-6f4c-458a-94a9-c8ac33db7262 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] [instance: f67d89a8-836a-4f47-af8d-37cf99529275] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:45:47 np0005548731 nova_compute[232433]: 2025-12-06 07:45:47.214 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: f67d89a8-836a-4f47-af8d-37cf99529275] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:45:47 np0005548731 nova_compute[232433]: 2025-12-06 07:45:47.218 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: f67d89a8-836a-4f47-af8d-37cf99529275] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: pausing, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  6 02:45:47 np0005548731 nova_compute[232433]: 2025-12-06 07:45:47.250 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: f67d89a8-836a-4f47-af8d-37cf99529275] During sync_power_state the instance has a pending task (pausing). Skip.#033[00m
Dec  6 02:45:47 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:45:47 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:45:47 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:45:47.439 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:45:47 np0005548731 nova_compute[232433]: 2025-12-06 07:45:47.471 232437 INFO nova.virt.block_device [None req-41f58143-a523-4bf5-8ae2-c6f980e0727b 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] [instance: d66a90db-5589-4b7f-b805-4096fffbd9f8] Attempting to driver detach volume 15c50314-82b8-4e1c-81d1-ad7ba14c56de from mountpoint /dev/vdb#033[00m
Dec  6 02:45:47 np0005548731 nova_compute[232433]: 2025-12-06 07:45:47.483 232437 DEBUG nova.virt.libvirt.driver [None req-41f58143-a523-4bf5-8ae2-c6f980e0727b 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] Attempting to detach device vdb from instance d66a90db-5589-4b7f-b805-4096fffbd9f8 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487#033[00m
Dec  6 02:45:47 np0005548731 nova_compute[232433]: 2025-12-06 07:45:47.484 232437 DEBUG nova.virt.libvirt.guest [None req-41f58143-a523-4bf5-8ae2-c6f980e0727b 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] detach device xml: <disk type="network" device="disk">
Dec  6 02:45:47 np0005548731 nova_compute[232433]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Dec  6 02:45:47 np0005548731 nova_compute[232433]:  <source protocol="rbd" name="volumes/volume-15c50314-82b8-4e1c-81d1-ad7ba14c56de">
Dec  6 02:45:47 np0005548731 nova_compute[232433]:    <host name="192.168.122.100" port="6789"/>
Dec  6 02:45:47 np0005548731 nova_compute[232433]:    <host name="192.168.122.102" port="6789"/>
Dec  6 02:45:47 np0005548731 nova_compute[232433]:    <host name="192.168.122.101" port="6789"/>
Dec  6 02:45:47 np0005548731 nova_compute[232433]:  </source>
Dec  6 02:45:47 np0005548731 nova_compute[232433]:  <target dev="vdb" bus="virtio"/>
Dec  6 02:45:47 np0005548731 nova_compute[232433]:  <serial>15c50314-82b8-4e1c-81d1-ad7ba14c56de</serial>
Dec  6 02:45:47 np0005548731 nova_compute[232433]:  <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Dec  6 02:45:47 np0005548731 nova_compute[232433]: </disk>
Dec  6 02:45:47 np0005548731 nova_compute[232433]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Dec  6 02:45:47 np0005548731 nova_compute[232433]: 2025-12-06 07:45:47.509 232437 INFO nova.virt.libvirt.driver [None req-41f58143-a523-4bf5-8ae2-c6f980e0727b 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] Successfully detached device vdb from instance d66a90db-5589-4b7f-b805-4096fffbd9f8 from the persistent domain config.#033[00m
Dec  6 02:45:47 np0005548731 nova_compute[232433]: 2025-12-06 07:45:47.510 232437 DEBUG nova.virt.libvirt.driver [None req-41f58143-a523-4bf5-8ae2-c6f980e0727b 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance d66a90db-5589-4b7f-b805-4096fffbd9f8 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523#033[00m
Dec  6 02:45:47 np0005548731 nova_compute[232433]: 2025-12-06 07:45:47.511 232437 DEBUG nova.virt.libvirt.guest [None req-41f58143-a523-4bf5-8ae2-c6f980e0727b 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] detach device xml: <disk type="network" device="disk">
Dec  6 02:45:47 np0005548731 nova_compute[232433]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Dec  6 02:45:47 np0005548731 nova_compute[232433]:  <source protocol="rbd" name="volumes/volume-15c50314-82b8-4e1c-81d1-ad7ba14c56de">
Dec  6 02:45:47 np0005548731 nova_compute[232433]:    <host name="192.168.122.100" port="6789"/>
Dec  6 02:45:47 np0005548731 nova_compute[232433]:    <host name="192.168.122.102" port="6789"/>
Dec  6 02:45:47 np0005548731 nova_compute[232433]:    <host name="192.168.122.101" port="6789"/>
Dec  6 02:45:47 np0005548731 nova_compute[232433]:  </source>
Dec  6 02:45:47 np0005548731 nova_compute[232433]:  <target dev="vdb" bus="virtio"/>
Dec  6 02:45:47 np0005548731 nova_compute[232433]:  <serial>15c50314-82b8-4e1c-81d1-ad7ba14c56de</serial>
Dec  6 02:45:47 np0005548731 nova_compute[232433]:  <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Dec  6 02:45:47 np0005548731 nova_compute[232433]: </disk>
Dec  6 02:45:47 np0005548731 nova_compute[232433]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Dec  6 02:45:47 np0005548731 nova_compute[232433]: 2025-12-06 07:45:47.657 232437 DEBUG nova.virt.libvirt.driver [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Received event <DeviceRemovedEvent: 1765007147.6567543, d66a90db-5589-4b7f-b805-4096fffbd9f8 => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370#033[00m
Dec  6 02:45:47 np0005548731 nova_compute[232433]: 2025-12-06 07:45:47.660 232437 DEBUG nova.virt.libvirt.driver [None req-41f58143-a523-4bf5-8ae2-c6f980e0727b 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance d66a90db-5589-4b7f-b805-4096fffbd9f8 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599#033[00m
Dec  6 02:45:47 np0005548731 nova_compute[232433]: 2025-12-06 07:45:47.662 232437 INFO nova.virt.libvirt.driver [None req-41f58143-a523-4bf5-8ae2-c6f980e0727b 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] Successfully detached device vdb from instance d66a90db-5589-4b7f-b805-4096fffbd9f8 from the live domain config.#033[00m
Dec  6 02:45:47 np0005548731 nova_compute[232433]: 2025-12-06 07:45:47.954 232437 DEBUG nova.objects.instance [None req-41f58143-a523-4bf5-8ae2-c6f980e0727b 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] Lazy-loading 'flavor' on Instance uuid d66a90db-5589-4b7f-b805-4096fffbd9f8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:45:48 np0005548731 nova_compute[232433]: 2025-12-06 07:45:48.002 232437 DEBUG oslo_concurrency.lockutils [None req-41f58143-a523-4bf5-8ae2-c6f980e0727b 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] Lock "d66a90db-5589-4b7f-b805-4096fffbd9f8" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 0.881s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:45:48 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:45:48 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:45:48 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:45:48.521 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:45:48 np0005548731 nova_compute[232433]: 2025-12-06 07:45:48.531 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:45:48 np0005548731 nova_compute[232433]: 2025-12-06 07:45:48.716 232437 DEBUG oslo_concurrency.lockutils [None req-eeb31a86-12a4-4c4a-8686-fc697b0f53f9 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] Acquiring lock "d66a90db-5589-4b7f-b805-4096fffbd9f8" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:45:48 np0005548731 nova_compute[232433]: 2025-12-06 07:45:48.717 232437 DEBUG oslo_concurrency.lockutils [None req-eeb31a86-12a4-4c4a-8686-fc697b0f53f9 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] Lock "d66a90db-5589-4b7f-b805-4096fffbd9f8" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:45:48 np0005548731 nova_compute[232433]: 2025-12-06 07:45:48.717 232437 DEBUG oslo_concurrency.lockutils [None req-eeb31a86-12a4-4c4a-8686-fc697b0f53f9 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] Acquiring lock "d66a90db-5589-4b7f-b805-4096fffbd9f8-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:45:48 np0005548731 nova_compute[232433]: 2025-12-06 07:45:48.717 232437 DEBUG oslo_concurrency.lockutils [None req-eeb31a86-12a4-4c4a-8686-fc697b0f53f9 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] Lock "d66a90db-5589-4b7f-b805-4096fffbd9f8-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:45:48 np0005548731 nova_compute[232433]: 2025-12-06 07:45:48.718 232437 DEBUG oslo_concurrency.lockutils [None req-eeb31a86-12a4-4c4a-8686-fc697b0f53f9 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] Lock "d66a90db-5589-4b7f-b805-4096fffbd9f8-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:45:48 np0005548731 nova_compute[232433]: 2025-12-06 07:45:48.719 232437 INFO nova.compute.manager [None req-eeb31a86-12a4-4c4a-8686-fc697b0f53f9 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] [instance: d66a90db-5589-4b7f-b805-4096fffbd9f8] Terminating instance#033[00m
Dec  6 02:45:48 np0005548731 nova_compute[232433]: 2025-12-06 07:45:48.720 232437 DEBUG nova.compute.manager [None req-eeb31a86-12a4-4c4a-8686-fc697b0f53f9 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] [instance: d66a90db-5589-4b7f-b805-4096fffbd9f8] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec  6 02:45:48 np0005548731 kernel: tapdc0dd2b5-bf (unregistering): left promiscuous mode
Dec  6 02:45:48 np0005548731 NetworkManager[49182]: <info>  [1765007148.7802] device (tapdc0dd2b5-bf): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec  6 02:45:48 np0005548731 nova_compute[232433]: 2025-12-06 07:45:48.790 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 02:45:48 np0005548731 ovn_controller[133927]: 2025-12-06T07:45:48Z|00722|binding|INFO|Releasing lport dc0dd2b5-bfe6-4c8b-99b1-9a0ea59a4bf5 from this chassis (sb_readonly=0)
Dec  6 02:45:48 np0005548731 ovn_controller[133927]: 2025-12-06T07:45:48Z|00723|binding|INFO|Setting lport dc0dd2b5-bfe6-4c8b-99b1-9a0ea59a4bf5 down in Southbound
Dec  6 02:45:48 np0005548731 ovn_controller[133927]: 2025-12-06T07:45:48Z|00724|binding|INFO|Removing iface tapdc0dd2b5-bf ovn-installed in OVS
Dec  6 02:45:48 np0005548731 nova_compute[232433]: 2025-12-06 07:45:48.793 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 02:45:48 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:45:48.802 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d2:01:47 10.100.0.10'], port_security=['fa:16:3e:d2:01:47 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'd66a90db-5589-4b7f-b805-4096fffbd9f8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3c5d4817-c3d5-45fc-9890-418e779bacb2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '741dc47f9ced423cbd99fd6f9d32904f', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'fe3fa474-81b7-490f-80d0-dc30cbd0f0d8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com', 'neutron:port_fip': '192.168.122.198'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a93df87f-d2df-4d3a-b692-98bba32f2fe1, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], logical_port=dc0dd2b5-bfe6-4c8b-99b1-9a0ea59a4bf5) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec  6 02:45:48 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:45:48.803 143965 INFO neutron.agent.ovn.metadata.agent [-] Port dc0dd2b5-bfe6-4c8b-99b1-9a0ea59a4bf5 in datapath 3c5d4817-c3d5-45fc-9890-418e779bacb2 unbound from our chassis
Dec  6 02:45:48 np0005548731 nova_compute[232433]: 2025-12-06 07:45:48.860 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 02:45:48 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:45:48.804 143965 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 3c5d4817-c3d5-45fc-9890-418e779bacb2, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec  6 02:45:48 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:45:48.861 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[ef07529d-b807-4271-9c4b-a7a561bd3e32]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  6 02:45:48 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:45:48.862 143965 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-3c5d4817-c3d5-45fc-9890-418e779bacb2 namespace which is not needed anymore
Dec  6 02:45:48 np0005548731 systemd[1]: machine-qemu\x2d73\x2dinstance\x2d0000009e.scope: Deactivated successfully.
Dec  6 02:45:48 np0005548731 systemd[1]: machine-qemu\x2d73\x2dinstance\x2d0000009e.scope: Consumed 14.927s CPU time.
Dec  6 02:45:48 np0005548731 systemd-machined[195355]: Machine qemu-73-instance-0000009e terminated.
Dec  6 02:45:48 np0005548731 nova_compute[232433]: 2025-12-06 07:45:48.942 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 02:45:48 np0005548731 nova_compute[232433]: 2025-12-06 07:45:48.947 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 02:45:48 np0005548731 nova_compute[232433]: 2025-12-06 07:45:48.957 232437 INFO nova.virt.libvirt.driver [-] [instance: d66a90db-5589-4b7f-b805-4096fffbd9f8] Instance destroyed successfully.
Dec  6 02:45:48 np0005548731 nova_compute[232433]: 2025-12-06 07:45:48.957 232437 DEBUG nova.objects.instance [None req-eeb31a86-12a4-4c4a-8686-fc697b0f53f9 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] Lazy-loading 'resources' on Instance uuid d66a90db-5589-4b7f-b805-4096fffbd9f8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec  6 02:45:48 np0005548731 nova_compute[232433]: 2025-12-06 07:45:48.979 232437 DEBUG nova.virt.libvirt.vif [None req-eeb31a86-12a4-4c4a-8686-fc697b0f53f9 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T07:45:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachVolumeNegativeTest-server-1008074304',display_name='tempest-AttachVolumeNegativeTest-server-1008074304',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-attachvolumenegativetest-server-1008074304',id=158,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGq2Ps+LisHXOt3k1Z1PkGhN+se8X4Ia0K+j3S7twrCOt5pK+3twP1awRwIa//I/8m7510cD7nQ4ICavPZtWHJvZPzsVp8pGmCsyZIyj5ImlYzn8fGUic6wmw2Is2N8TtA==',key_name='tempest-keypair-304190457',keypairs=<?>,launch_index=0,launched_at=2025-12-06T07:45:18Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='741dc47f9ced423cbd99fd6f9d32904f',ramdisk_id='',reservation_id='r-9g8i9tcb',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachVolumeNegativeTest-2080911030',owner_user_name='tempest-AttachVolumeNegativeTest-2080911030-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T07:45:18Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='297bc99c242e4fa8aedea4a6367b61c0',uuid=d66a90db-5589-4b7f-b805-4096fffbd9f8,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "dc0dd2b5-bfe6-4c8b-99b1-9a0ea59a4bf5", "address": "fa:16:3e:d2:01:47", "network": {"id": "3c5d4817-c3d5-45fc-9890-418e779bacb2", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-1824643193-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.198", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "741dc47f9ced423cbd99fd6f9d32904f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdc0dd2b5-bf", "ovs_interfaceid": "dc0dd2b5-bfe6-4c8b-99b1-9a0ea59a4bf5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec  6 02:45:48 np0005548731 nova_compute[232433]: 2025-12-06 07:45:48.980 232437 DEBUG nova.network.os_vif_util [None req-eeb31a86-12a4-4c4a-8686-fc697b0f53f9 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] Converting VIF {"id": "dc0dd2b5-bfe6-4c8b-99b1-9a0ea59a4bf5", "address": "fa:16:3e:d2:01:47", "network": {"id": "3c5d4817-c3d5-45fc-9890-418e779bacb2", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-1824643193-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.198", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "741dc47f9ced423cbd99fd6f9d32904f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdc0dd2b5-bf", "ovs_interfaceid": "dc0dd2b5-bfe6-4c8b-99b1-9a0ea59a4bf5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec  6 02:45:48 np0005548731 nova_compute[232433]: 2025-12-06 07:45:48.981 232437 DEBUG nova.network.os_vif_util [None req-eeb31a86-12a4-4c4a-8686-fc697b0f53f9 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:d2:01:47,bridge_name='br-int',has_traffic_filtering=True,id=dc0dd2b5-bfe6-4c8b-99b1-9a0ea59a4bf5,network=Network(3c5d4817-c3d5-45fc-9890-418e779bacb2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdc0dd2b5-bf') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec  6 02:45:48 np0005548731 nova_compute[232433]: 2025-12-06 07:45:48.982 232437 DEBUG os_vif [None req-eeb31a86-12a4-4c4a-8686-fc697b0f53f9 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:d2:01:47,bridge_name='br-int',has_traffic_filtering=True,id=dc0dd2b5-bfe6-4c8b-99b1-9a0ea59a4bf5,network=Network(3c5d4817-c3d5-45fc-9890-418e779bacb2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdc0dd2b5-bf') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec  6 02:45:48 np0005548731 nova_compute[232433]: 2025-12-06 07:45:48.984 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 02:45:48 np0005548731 nova_compute[232433]: 2025-12-06 07:45:48.985 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapdc0dd2b5-bf, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec  6 02:45:48 np0005548731 nova_compute[232433]: 2025-12-06 07:45:48.989 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec  6 02:45:48 np0005548731 nova_compute[232433]: 2025-12-06 07:45:48.991 232437 INFO os_vif [None req-eeb31a86-12a4-4c4a-8686-fc697b0f53f9 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:d2:01:47,bridge_name='br-int',has_traffic_filtering=True,id=dc0dd2b5-bfe6-4c8b-99b1-9a0ea59a4bf5,network=Network(3c5d4817-c3d5-45fc-9890-418e779bacb2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdc0dd2b5-bf')
Dec  6 02:45:48 np0005548731 neutron-haproxy-ovnmeta-3c5d4817-c3d5-45fc-9890-418e779bacb2[301042]: [NOTICE]   (301046) : haproxy version is 2.8.14-c23fe91
Dec  6 02:45:48 np0005548731 neutron-haproxy-ovnmeta-3c5d4817-c3d5-45fc-9890-418e779bacb2[301042]: [NOTICE]   (301046) : path to executable is /usr/sbin/haproxy
Dec  6 02:45:48 np0005548731 neutron-haproxy-ovnmeta-3c5d4817-c3d5-45fc-9890-418e779bacb2[301042]: [WARNING]  (301046) : Exiting Master process...
Dec  6 02:45:48 np0005548731 neutron-haproxy-ovnmeta-3c5d4817-c3d5-45fc-9890-418e779bacb2[301042]: [WARNING]  (301046) : Exiting Master process...
Dec  6 02:45:48 np0005548731 neutron-haproxy-ovnmeta-3c5d4817-c3d5-45fc-9890-418e779bacb2[301042]: [ALERT]    (301046) : Current worker (301048) exited with code 143 (Terminated)
Dec  6 02:45:48 np0005548731 neutron-haproxy-ovnmeta-3c5d4817-c3d5-45fc-9890-418e779bacb2[301042]: [WARNING]  (301046) : All workers exited. Exiting... (0)
Dec  6 02:45:48 np0005548731 systemd[1]: libpod-01474f431b3c7a5ed9a4ee76053146eaba68feca9d0b59e3be1163f6fe66aa50.scope: Deactivated successfully.
Dec  6 02:45:49 np0005548731 podman[301521]: 2025-12-06 07:45:49.004204243 +0000 UTC m=+0.051573098 container died 01474f431b3c7a5ed9a4ee76053146eaba68feca9d0b59e3be1163f6fe66aa50 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3c5d4817-c3d5-45fc-9890-418e779bacb2, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec  6 02:45:49 np0005548731 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-01474f431b3c7a5ed9a4ee76053146eaba68feca9d0b59e3be1163f6fe66aa50-userdata-shm.mount: Deactivated successfully.
Dec  6 02:45:49 np0005548731 systemd[1]: var-lib-containers-storage-overlay-cbffc255c26ab06e1bae2717abf4b8e13a91de01b77acaa7585b5b657523d5f4-merged.mount: Deactivated successfully.
Dec  6 02:45:49 np0005548731 podman[301521]: 2025-12-06 07:45:49.05781512 +0000 UTC m=+0.105183965 container cleanup 01474f431b3c7a5ed9a4ee76053146eaba68feca9d0b59e3be1163f6fe66aa50 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3c5d4817-c3d5-45fc-9890-418e779bacb2, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125)
Dec  6 02:45:49 np0005548731 systemd[1]: libpod-conmon-01474f431b3c7a5ed9a4ee76053146eaba68feca9d0b59e3be1163f6fe66aa50.scope: Deactivated successfully.
Dec  6 02:45:49 np0005548731 podman[301577]: 2025-12-06 07:45:49.131601459 +0000 UTC m=+0.050625035 container remove 01474f431b3c7a5ed9a4ee76053146eaba68feca9d0b59e3be1163f6fe66aa50 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3c5d4817-c3d5-45fc-9890-418e779bacb2, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec  6 02:45:49 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:45:49.137 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[97d05bc9-d558-48ae-9452-293dcfecd3cc]: (4, ('Sat Dec  6 07:45:48 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-3c5d4817-c3d5-45fc-9890-418e779bacb2 (01474f431b3c7a5ed9a4ee76053146eaba68feca9d0b59e3be1163f6fe66aa50)\n01474f431b3c7a5ed9a4ee76053146eaba68feca9d0b59e3be1163f6fe66aa50\nSat Dec  6 07:45:49 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-3c5d4817-c3d5-45fc-9890-418e779bacb2 (01474f431b3c7a5ed9a4ee76053146eaba68feca9d0b59e3be1163f6fe66aa50)\n01474f431b3c7a5ed9a4ee76053146eaba68feca9d0b59e3be1163f6fe66aa50\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  6 02:45:49 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:45:49.138 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[255cf98a-6420-4e06-98fa-d67b29ab2f02]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  6 02:45:49 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:45:49.139 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3c5d4817-c0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec  6 02:45:49 np0005548731 nova_compute[232433]: 2025-12-06 07:45:49.141 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 02:45:49 np0005548731 kernel: tap3c5d4817-c0: left promiscuous mode
Dec  6 02:45:49 np0005548731 nova_compute[232433]: 2025-12-06 07:45:49.155 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 02:45:49 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:45:49.158 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[3180bc8a-e607-41d4-9252-3f8eab45fdd8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  6 02:45:49 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:45:49.179 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[5986df63-8b0c-4909-8f88-3d1b1612f908]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  6 02:45:49 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:45:49.181 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[6a2daa25-883e-40cf-aadc-d7d7c4d618c8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  6 02:45:49 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:45:49.197 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[b0872bb7-dd88-4588-9c06-b21de9712563]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 737856, 'reachable_time': 37438, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 301592, 'error': None, 'target': 'ovnmeta-3c5d4817-c3d5-45fc-9890-418e779bacb2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  6 02:45:49 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:45:49.199 144079 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-3c5d4817-c3d5-45fc-9890-418e779bacb2 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec  6 02:45:49 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:45:49.199 144079 DEBUG oslo.privsep.daemon [-] privsep: reply[f071b009-9579-436a-824f-f14e8cd4450f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  6 02:45:49 np0005548731 systemd[1]: run-netns-ovnmeta\x2d3c5d4817\x2dc3d5\x2d45fc\x2d9890\x2d418e779bacb2.mount: Deactivated successfully.
Dec  6 02:45:49 np0005548731 nova_compute[232433]: 2025-12-06 07:45:49.418 232437 INFO nova.virt.libvirt.driver [None req-eeb31a86-12a4-4c4a-8686-fc697b0f53f9 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] [instance: d66a90db-5589-4b7f-b805-4096fffbd9f8] Deleting instance files /var/lib/nova/instances/d66a90db-5589-4b7f-b805-4096fffbd9f8_del
Dec  6 02:45:49 np0005548731 nova_compute[232433]: 2025-12-06 07:45:49.418 232437 INFO nova.virt.libvirt.driver [None req-eeb31a86-12a4-4c4a-8686-fc697b0f53f9 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] [instance: d66a90db-5589-4b7f-b805-4096fffbd9f8] Deletion of /var/lib/nova/instances/d66a90db-5589-4b7f-b805-4096fffbd9f8_del complete
Dec  6 02:45:49 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:45:49 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 02:45:49 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:45:49.442 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 02:45:49 np0005548731 nova_compute[232433]: 2025-12-06 07:45:49.599 232437 INFO nova.compute.manager [None req-eeb31a86-12a4-4c4a-8686-fc697b0f53f9 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] [instance: d66a90db-5589-4b7f-b805-4096fffbd9f8] Took 0.88 seconds to destroy the instance on the hypervisor.
Dec  6 02:45:49 np0005548731 nova_compute[232433]: 2025-12-06 07:45:49.600 232437 DEBUG oslo.service.loopingcall [None req-eeb31a86-12a4-4c4a-8686-fc697b0f53f9 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec  6 02:45:49 np0005548731 nova_compute[232433]: 2025-12-06 07:45:49.600 232437 DEBUG nova.compute.manager [-] [instance: d66a90db-5589-4b7f-b805-4096fffbd9f8] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec  6 02:45:49 np0005548731 nova_compute[232433]: 2025-12-06 07:45:49.600 232437 DEBUG nova.network.neutron [-] [instance: d66a90db-5589-4b7f-b805-4096fffbd9f8] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec  6 02:45:49 np0005548731 nova_compute[232433]: 2025-12-06 07:45:49.890 232437 DEBUG nova.compute.manager [req-4f6208de-5094-4d21-8de5-9aad9fb2a635 req-e9d1fa01-1ad2-44f3-b8ef-ae1e52b5b36b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: d66a90db-5589-4b7f-b805-4096fffbd9f8] Received event network-vif-unplugged-dc0dd2b5-bfe6-4c8b-99b1-9a0ea59a4bf5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec  6 02:45:49 np0005548731 nova_compute[232433]: 2025-12-06 07:45:49.890 232437 DEBUG oslo_concurrency.lockutils [req-4f6208de-5094-4d21-8de5-9aad9fb2a635 req-e9d1fa01-1ad2-44f3-b8ef-ae1e52b5b36b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "d66a90db-5589-4b7f-b805-4096fffbd9f8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  6 02:45:49 np0005548731 nova_compute[232433]: 2025-12-06 07:45:49.891 232437 DEBUG oslo_concurrency.lockutils [req-4f6208de-5094-4d21-8de5-9aad9fb2a635 req-e9d1fa01-1ad2-44f3-b8ef-ae1e52b5b36b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "d66a90db-5589-4b7f-b805-4096fffbd9f8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  6 02:45:49 np0005548731 nova_compute[232433]: 2025-12-06 07:45:49.891 232437 DEBUG oslo_concurrency.lockutils [req-4f6208de-5094-4d21-8de5-9aad9fb2a635 req-e9d1fa01-1ad2-44f3-b8ef-ae1e52b5b36b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "d66a90db-5589-4b7f-b805-4096fffbd9f8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  6 02:45:49 np0005548731 nova_compute[232433]: 2025-12-06 07:45:49.891 232437 DEBUG nova.compute.manager [req-4f6208de-5094-4d21-8de5-9aad9fb2a635 req-e9d1fa01-1ad2-44f3-b8ef-ae1e52b5b36b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: d66a90db-5589-4b7f-b805-4096fffbd9f8] No waiting events found dispatching network-vif-unplugged-dc0dd2b5-bfe6-4c8b-99b1-9a0ea59a4bf5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec  6 02:45:49 np0005548731 nova_compute[232433]: 2025-12-06 07:45:49.892 232437 DEBUG nova.compute.manager [req-4f6208de-5094-4d21-8de5-9aad9fb2a635 req-e9d1fa01-1ad2-44f3-b8ef-ae1e52b5b36b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: d66a90db-5589-4b7f-b805-4096fffbd9f8] Received event network-vif-unplugged-dc0dd2b5-bfe6-4c8b-99b1-9a0ea59a4bf5 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Dec  6 02:45:50 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e340 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:45:50 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:45:50 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:45:50 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:45:50.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:45:51 np0005548731 nova_compute[232433]: 2025-12-06 07:45:51.135 232437 INFO nova.compute.manager [None req-3f801a74-22d2-4a2a-82b6-340683d4a0a6 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] [instance: f67d89a8-836a-4f47-af8d-37cf99529275] Unpausing
Dec  6 02:45:51 np0005548731 nova_compute[232433]: 2025-12-06 07:45:51.136 232437 DEBUG nova.objects.instance [None req-3f801a74-22d2-4a2a-82b6-340683d4a0a6 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] Lazy-loading 'flavor' on Instance uuid f67d89a8-836a-4f47-af8d-37cf99529275 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec  6 02:45:51 np0005548731 nova_compute[232433]: 2025-12-06 07:45:51.172 232437 DEBUG nova.virt.driver [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Emitting event <LifecycleEvent: 1765007151.1721368, f67d89a8-836a-4f47-af8d-37cf99529275 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec  6 02:45:51 np0005548731 nova_compute[232433]: 2025-12-06 07:45:51.172 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: f67d89a8-836a-4f47-af8d-37cf99529275] VM Resumed (Lifecycle Event)
Dec  6 02:45:51 np0005548731 virtqemud[232080]: argument unsupported: QEMU guest agent is not configured
Dec  6 02:45:51 np0005548731 nova_compute[232433]: 2025-12-06 07:45:51.177 232437 DEBUG nova.virt.libvirt.guest [None req-3f801a74-22d2-4a2a-82b6-340683d4a0a6 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] [instance: f67d89a8-836a-4f47-af8d-37cf99529275] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200#033[00m
Dec  6 02:45:51 np0005548731 nova_compute[232433]: 2025-12-06 07:45:51.177 232437 DEBUG nova.compute.manager [None req-3f801a74-22d2-4a2a-82b6-340683d4a0a6 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] [instance: f67d89a8-836a-4f47-af8d-37cf99529275] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:45:51 np0005548731 nova_compute[232433]: 2025-12-06 07:45:51.200 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: f67d89a8-836a-4f47-af8d-37cf99529275] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:45:51 np0005548731 nova_compute[232433]: 2025-12-06 07:45:51.203 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: f67d89a8-836a-4f47-af8d-37cf99529275] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: paused, current task_state: unpausing, current DB power_state: 3, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  6 02:45:51 np0005548731 nova_compute[232433]: 2025-12-06 07:45:51.236 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: f67d89a8-836a-4f47-af8d-37cf99529275] During sync_power_state the instance has a pending task (unpausing). Skip.#033[00m
Dec  6 02:45:51 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:45:51 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:45:51 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:45:51.445 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:45:52 np0005548731 nova_compute[232433]: 2025-12-06 07:45:52.161 232437 DEBUG nova.compute.manager [req-e292bfd3-045b-490b-ae67-907b6ca18037 req-4f57b2d9-07bb-4bef-b79e-9d7b6442e101 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: d66a90db-5589-4b7f-b805-4096fffbd9f8] Received event network-vif-plugged-dc0dd2b5-bfe6-4c8b-99b1-9a0ea59a4bf5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:45:52 np0005548731 nova_compute[232433]: 2025-12-06 07:45:52.162 232437 DEBUG oslo_concurrency.lockutils [req-e292bfd3-045b-490b-ae67-907b6ca18037 req-4f57b2d9-07bb-4bef-b79e-9d7b6442e101 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "d66a90db-5589-4b7f-b805-4096fffbd9f8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:45:52 np0005548731 nova_compute[232433]: 2025-12-06 07:45:52.162 232437 DEBUG oslo_concurrency.lockutils [req-e292bfd3-045b-490b-ae67-907b6ca18037 req-4f57b2d9-07bb-4bef-b79e-9d7b6442e101 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "d66a90db-5589-4b7f-b805-4096fffbd9f8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:45:52 np0005548731 nova_compute[232433]: 2025-12-06 07:45:52.163 232437 DEBUG oslo_concurrency.lockutils [req-e292bfd3-045b-490b-ae67-907b6ca18037 req-4f57b2d9-07bb-4bef-b79e-9d7b6442e101 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "d66a90db-5589-4b7f-b805-4096fffbd9f8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:45:52 np0005548731 nova_compute[232433]: 2025-12-06 07:45:52.163 232437 DEBUG nova.compute.manager [req-e292bfd3-045b-490b-ae67-907b6ca18037 req-4f57b2d9-07bb-4bef-b79e-9d7b6442e101 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: d66a90db-5589-4b7f-b805-4096fffbd9f8] No waiting events found dispatching network-vif-plugged-dc0dd2b5-bfe6-4c8b-99b1-9a0ea59a4bf5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 02:45:52 np0005548731 nova_compute[232433]: 2025-12-06 07:45:52.163 232437 WARNING nova.compute.manager [req-e292bfd3-045b-490b-ae67-907b6ca18037 req-4f57b2d9-07bb-4bef-b79e-9d7b6442e101 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: d66a90db-5589-4b7f-b805-4096fffbd9f8] Received unexpected event network-vif-plugged-dc0dd2b5-bfe6-4c8b-99b1-9a0ea59a4bf5 for instance with vm_state active and task_state deleting.#033[00m
Dec  6 02:45:52 np0005548731 nova_compute[232433]: 2025-12-06 07:45:52.165 232437 DEBUG nova.network.neutron [-] [instance: d66a90db-5589-4b7f-b805-4096fffbd9f8] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 02:45:52 np0005548731 nova_compute[232433]: 2025-12-06 07:45:52.201 232437 INFO nova.compute.manager [-] [instance: d66a90db-5589-4b7f-b805-4096fffbd9f8] Took 2.60 seconds to deallocate network for instance.#033[00m
Dec  6 02:45:52 np0005548731 nova_compute[232433]: 2025-12-06 07:45:52.233 232437 DEBUG nova.compute.manager [req-2de706df-6ea3-4020-b6ec-0254ee117bf2 req-a3f75b3b-7a36-4ffd-844e-ff4b3b9be4e7 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: d66a90db-5589-4b7f-b805-4096fffbd9f8] Received event network-vif-deleted-dc0dd2b5-bfe6-4c8b-99b1-9a0ea59a4bf5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:45:52 np0005548731 nova_compute[232433]: 2025-12-06 07:45:52.262 232437 DEBUG oslo_concurrency.lockutils [None req-eeb31a86-12a4-4c4a-8686-fc697b0f53f9 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:45:52 np0005548731 nova_compute[232433]: 2025-12-06 07:45:52.263 232437 DEBUG oslo_concurrency.lockutils [None req-eeb31a86-12a4-4c4a-8686-fc697b0f53f9 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:45:52 np0005548731 nova_compute[232433]: 2025-12-06 07:45:52.489 232437 DEBUG oslo_concurrency.processutils [None req-eeb31a86-12a4-4c4a-8686-fc697b0f53f9 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:45:52 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:45:52 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:45:52 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:45:52.525 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:45:52 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 02:45:52 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2982845957' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 02:45:52 np0005548731 nova_compute[232433]: 2025-12-06 07:45:52.993 232437 DEBUG oslo_concurrency.processutils [None req-eeb31a86-12a4-4c4a-8686-fc697b0f53f9 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.504s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:45:52 np0005548731 nova_compute[232433]: 2025-12-06 07:45:52.999 232437 DEBUG nova.compute.provider_tree [None req-eeb31a86-12a4-4c4a-8686-fc697b0f53f9 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] Inventory has not changed in ProviderTree for provider: 6d00757a-082f-486d-ae84-869a2ba2e6e7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  6 02:45:53 np0005548731 nova_compute[232433]: 2025-12-06 07:45:53.177 232437 DEBUG nova.scheduler.client.report [None req-eeb31a86-12a4-4c4a-8686-fc697b0f53f9 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] Inventory has not changed for provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  6 02:45:53 np0005548731 nova_compute[232433]: 2025-12-06 07:45:53.256 232437 DEBUG oslo_concurrency.lockutils [None req-eeb31a86-12a4-4c4a-8686-fc697b0f53f9 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.993s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:45:53 np0005548731 nova_compute[232433]: 2025-12-06 07:45:53.297 232437 INFO nova.scheduler.client.report [None req-eeb31a86-12a4-4c4a-8686-fc697b0f53f9 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] Deleted allocations for instance d66a90db-5589-4b7f-b805-4096fffbd9f8#033[00m
Dec  6 02:45:53 np0005548731 nova_compute[232433]: 2025-12-06 07:45:53.390 232437 DEBUG oslo_concurrency.lockutils [None req-eeb31a86-12a4-4c4a-8686-fc697b0f53f9 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] Lock "d66a90db-5589-4b7f-b805-4096fffbd9f8" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.674s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:45:53 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:45:53 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:45:53 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:45:53.448 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:45:53 np0005548731 nova_compute[232433]: 2025-12-06 07:45:53.532 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:45:53 np0005548731 nova_compute[232433]: 2025-12-06 07:45:53.986 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:45:54 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:45:54 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:45:54 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:45:54.527 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:45:55 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e340 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:45:55 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  6 02:45:55 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4244794580' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  6 02:45:55 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  6 02:45:55 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4244794580' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  6 02:45:55 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:45:55 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:45:55 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:45:55.451 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:45:56 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:45:56 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 02:45:56 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:45:56.529 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 02:45:57 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:45:57 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:45:57 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:45:57.454 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:45:58 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:45:58 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:45:58 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:45:58.532 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:45:58 np0005548731 nova_compute[232433]: 2025-12-06 07:45:58.535 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:45:58 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  6 02:45:58 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2381368513' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  6 02:45:58 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  6 02:45:58 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2381368513' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  6 02:45:58 np0005548731 nova_compute[232433]: 2025-12-06 07:45:58.989 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:45:59 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:45:59 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:45:59 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:45:59.457 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:46:00 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e340 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:46:00 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  6 02:46:00 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1020618160' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  6 02:46:00 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  6 02:46:00 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1020618160' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  6 02:46:00 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:46:00 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:46:00 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:46:00.534 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:46:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:46:00.887 143965 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:46:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:46:00.887 143965 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:46:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:46:00.888 143965 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:46:01 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:46:01 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:46:01 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:46:01.460 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:46:02 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:46:02 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:46:02 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:46:02.536 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:46:02 np0005548731 nova_compute[232433]: 2025-12-06 07:46:02.696 232437 DEBUG oslo_concurrency.lockutils [None req-1ad98dc1-af3d-4d45-baef-d3a2c2cb6beb e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Acquiring lock "2b4b3181-da12-4705-9215-7ee5b869102b" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:46:02 np0005548731 nova_compute[232433]: 2025-12-06 07:46:02.697 232437 DEBUG oslo_concurrency.lockutils [None req-1ad98dc1-af3d-4d45-baef-d3a2c2cb6beb e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Lock "2b4b3181-da12-4705-9215-7ee5b869102b" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:46:02 np0005548731 nova_compute[232433]: 2025-12-06 07:46:02.712 232437 DEBUG nova.compute.manager [None req-1ad98dc1-af3d-4d45-baef-d3a2c2cb6beb e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] [instance: 2b4b3181-da12-4705-9215-7ee5b869102b] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Dec  6 02:46:02 np0005548731 nova_compute[232433]: 2025-12-06 07:46:02.783 232437 DEBUG oslo_concurrency.lockutils [None req-1ad98dc1-af3d-4d45-baef-d3a2c2cb6beb e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:46:02 np0005548731 nova_compute[232433]: 2025-12-06 07:46:02.783 232437 DEBUG oslo_concurrency.lockutils [None req-1ad98dc1-af3d-4d45-baef-d3a2c2cb6beb e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:46:02 np0005548731 nova_compute[232433]: 2025-12-06 07:46:02.789 232437 DEBUG nova.virt.hardware [None req-1ad98dc1-af3d-4d45-baef-d3a2c2cb6beb e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Dec  6 02:46:02 np0005548731 nova_compute[232433]: 2025-12-06 07:46:02.789 232437 INFO nova.compute.claims [None req-1ad98dc1-af3d-4d45-baef-d3a2c2cb6beb e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] [instance: 2b4b3181-da12-4705-9215-7ee5b869102b] Claim successful on node compute-2.ctlplane.example.com#033[00m
Dec  6 02:46:02 np0005548731 nova_compute[232433]: 2025-12-06 07:46:02.902 232437 DEBUG oslo_concurrency.processutils [None req-1ad98dc1-af3d-4d45-baef-d3a2c2cb6beb e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:46:03 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 02:46:03 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2049582650' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 02:46:03 np0005548731 nova_compute[232433]: 2025-12-06 07:46:03.368 232437 DEBUG oslo_concurrency.processutils [None req-1ad98dc1-af3d-4d45-baef-d3a2c2cb6beb e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.466s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:46:03 np0005548731 nova_compute[232433]: 2025-12-06 07:46:03.374 232437 DEBUG nova.compute.provider_tree [None req-1ad98dc1-af3d-4d45-baef-d3a2c2cb6beb e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Inventory has not changed in ProviderTree for provider: 6d00757a-082f-486d-ae84-869a2ba2e6e7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  6 02:46:03 np0005548731 nova_compute[232433]: 2025-12-06 07:46:03.399 232437 DEBUG nova.scheduler.client.report [None req-1ad98dc1-af3d-4d45-baef-d3a2c2cb6beb e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Inventory has not changed for provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  6 02:46:03 np0005548731 nova_compute[232433]: 2025-12-06 07:46:03.421 232437 DEBUG oslo_concurrency.lockutils [None req-1ad98dc1-af3d-4d45-baef-d3a2c2cb6beb e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.638s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:46:03 np0005548731 nova_compute[232433]: 2025-12-06 07:46:03.422 232437 DEBUG nova.compute.manager [None req-1ad98dc1-af3d-4d45-baef-d3a2c2cb6beb e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] [instance: 2b4b3181-da12-4705-9215-7ee5b869102b] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Dec  6 02:46:03 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:46:03 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:46:03 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:46:03.463 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:46:03 np0005548731 nova_compute[232433]: 2025-12-06 07:46:03.484 232437 DEBUG nova.compute.manager [None req-1ad98dc1-af3d-4d45-baef-d3a2c2cb6beb e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] [instance: 2b4b3181-da12-4705-9215-7ee5b869102b] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Dec  6 02:46:03 np0005548731 nova_compute[232433]: 2025-12-06 07:46:03.485 232437 DEBUG nova.network.neutron [None req-1ad98dc1-af3d-4d45-baef-d3a2c2cb6beb e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] [instance: 2b4b3181-da12-4705-9215-7ee5b869102b] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Dec  6 02:46:03 np0005548731 nova_compute[232433]: 2025-12-06 07:46:03.508 232437 INFO nova.virt.libvirt.driver [None req-1ad98dc1-af3d-4d45-baef-d3a2c2cb6beb e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] [instance: 2b4b3181-da12-4705-9215-7ee5b869102b] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Dec  6 02:46:03 np0005548731 nova_compute[232433]: 2025-12-06 07:46:03.525 232437 DEBUG nova.compute.manager [None req-1ad98dc1-af3d-4d45-baef-d3a2c2cb6beb e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] [instance: 2b4b3181-da12-4705-9215-7ee5b869102b] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Dec  6 02:46:03 np0005548731 nova_compute[232433]: 2025-12-06 07:46:03.537 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:46:03 np0005548731 nova_compute[232433]: 2025-12-06 07:46:03.618 232437 DEBUG nova.compute.manager [None req-1ad98dc1-af3d-4d45-baef-d3a2c2cb6beb e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] [instance: 2b4b3181-da12-4705-9215-7ee5b869102b] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Dec  6 02:46:03 np0005548731 nova_compute[232433]: 2025-12-06 07:46:03.619 232437 DEBUG nova.virt.libvirt.driver [None req-1ad98dc1-af3d-4d45-baef-d3a2c2cb6beb e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] [instance: 2b4b3181-da12-4705-9215-7ee5b869102b] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Dec  6 02:46:03 np0005548731 nova_compute[232433]: 2025-12-06 07:46:03.620 232437 INFO nova.virt.libvirt.driver [None req-1ad98dc1-af3d-4d45-baef-d3a2c2cb6beb e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] [instance: 2b4b3181-da12-4705-9215-7ee5b869102b] Creating image(s)#033[00m
Dec  6 02:46:03 np0005548731 nova_compute[232433]: 2025-12-06 07:46:03.649 232437 DEBUG nova.storage.rbd_utils [None req-1ad98dc1-af3d-4d45-baef-d3a2c2cb6beb e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] rbd image 2b4b3181-da12-4705-9215-7ee5b869102b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:46:03 np0005548731 nova_compute[232433]: 2025-12-06 07:46:03.673 232437 DEBUG nova.storage.rbd_utils [None req-1ad98dc1-af3d-4d45-baef-d3a2c2cb6beb e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] rbd image 2b4b3181-da12-4705-9215-7ee5b869102b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:46:03 np0005548731 nova_compute[232433]: 2025-12-06 07:46:03.697 232437 DEBUG nova.storage.rbd_utils [None req-1ad98dc1-af3d-4d45-baef-d3a2c2cb6beb e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] rbd image 2b4b3181-da12-4705-9215-7ee5b869102b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:46:03 np0005548731 nova_compute[232433]: 2025-12-06 07:46:03.701 232437 DEBUG oslo_concurrency.processutils [None req-1ad98dc1-af3d-4d45-baef-d3a2c2cb6beb e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:46:03 np0005548731 nova_compute[232433]: 2025-12-06 07:46:03.762 232437 DEBUG nova.policy [None req-1ad98dc1-af3d-4d45-baef-d3a2c2cb6beb e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'e997a5eeee174b368a43ed8cb35fa1d0', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'f44ecb8bdc7e4692a299e29603301124', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Dec  6 02:46:03 np0005548731 nova_compute[232433]: 2025-12-06 07:46:03.766 232437 DEBUG oslo_concurrency.processutils [None req-1ad98dc1-af3d-4d45-baef-d3a2c2cb6beb e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:46:03 np0005548731 nova_compute[232433]: 2025-12-06 07:46:03.766 232437 DEBUG oslo_concurrency.lockutils [None req-1ad98dc1-af3d-4d45-baef-d3a2c2cb6beb e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Acquiring lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:46:03 np0005548731 nova_compute[232433]: 2025-12-06 07:46:03.767 232437 DEBUG oslo_concurrency.lockutils [None req-1ad98dc1-af3d-4d45-baef-d3a2c2cb6beb e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:46:03 np0005548731 nova_compute[232433]: 2025-12-06 07:46:03.767 232437 DEBUG oslo_concurrency.lockutils [None req-1ad98dc1-af3d-4d45-baef-d3a2c2cb6beb e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:46:03 np0005548731 nova_compute[232433]: 2025-12-06 07:46:03.790 232437 DEBUG nova.storage.rbd_utils [None req-1ad98dc1-af3d-4d45-baef-d3a2c2cb6beb e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] rbd image 2b4b3181-da12-4705-9215-7ee5b869102b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:46:03 np0005548731 nova_compute[232433]: 2025-12-06 07:46:03.794 232437 DEBUG oslo_concurrency.processutils [None req-1ad98dc1-af3d-4d45-baef-d3a2c2cb6beb e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef 2b4b3181-da12-4705-9215-7ee5b869102b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:46:03 np0005548731 nova_compute[232433]: 2025-12-06 07:46:03.956 232437 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765007148.9548285, d66a90db-5589-4b7f-b805-4096fffbd9f8 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 02:46:03 np0005548731 nova_compute[232433]: 2025-12-06 07:46:03.957 232437 INFO nova.compute.manager [-] [instance: d66a90db-5589-4b7f-b805-4096fffbd9f8] VM Stopped (Lifecycle Event)#033[00m
Dec  6 02:46:03 np0005548731 nova_compute[232433]: 2025-12-06 07:46:03.990 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:46:03 np0005548731 nova_compute[232433]: 2025-12-06 07:46:03.995 232437 DEBUG nova.compute.manager [None req-2420a936-7c74-4eaa-95e3-84e286f05eb9 - - - - - -] [instance: d66a90db-5589-4b7f-b805-4096fffbd9f8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:46:04 np0005548731 nova_compute[232433]: 2025-12-06 07:46:04.483 232437 DEBUG oslo_concurrency.processutils [None req-1ad98dc1-af3d-4d45-baef-d3a2c2cb6beb e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef 2b4b3181-da12-4705-9215-7ee5b869102b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.689s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:46:04 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:46:04 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:46:04 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:46:04.538 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:46:04 np0005548731 nova_compute[232433]: 2025-12-06 07:46:04.567 232437 DEBUG nova.storage.rbd_utils [None req-1ad98dc1-af3d-4d45-baef-d3a2c2cb6beb e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] resizing rbd image 2b4b3181-da12-4705-9215-7ee5b869102b_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Dec  6 02:46:05 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e340 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:46:05 np0005548731 nova_compute[232433]: 2025-12-06 07:46:05.084 232437 DEBUG nova.network.neutron [None req-1ad98dc1-af3d-4d45-baef-d3a2c2cb6beb e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] [instance: 2b4b3181-da12-4705-9215-7ee5b869102b] Successfully created port: 879edf90-0812-4adf-b171-d993c6cfb26c _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Dec  6 02:46:05 np0005548731 nova_compute[232433]: 2025-12-06 07:46:05.193 232437 DEBUG nova.objects.instance [None req-1ad98dc1-af3d-4d45-baef-d3a2c2cb6beb e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Lazy-loading 'migration_context' on Instance uuid 2b4b3181-da12-4705-9215-7ee5b869102b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:46:05 np0005548731 nova_compute[232433]: 2025-12-06 07:46:05.218 232437 DEBUG nova.virt.libvirt.driver [None req-1ad98dc1-af3d-4d45-baef-d3a2c2cb6beb e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] [instance: 2b4b3181-da12-4705-9215-7ee5b869102b] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Dec  6 02:46:05 np0005548731 nova_compute[232433]: 2025-12-06 07:46:05.219 232437 DEBUG nova.virt.libvirt.driver [None req-1ad98dc1-af3d-4d45-baef-d3a2c2cb6beb e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] [instance: 2b4b3181-da12-4705-9215-7ee5b869102b] Ensure instance console log exists: /var/lib/nova/instances/2b4b3181-da12-4705-9215-7ee5b869102b/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec  6 02:46:05 np0005548731 nova_compute[232433]: 2025-12-06 07:46:05.219 232437 DEBUG oslo_concurrency.lockutils [None req-1ad98dc1-af3d-4d45-baef-d3a2c2cb6beb e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:46:05 np0005548731 nova_compute[232433]: 2025-12-06 07:46:05.220 232437 DEBUG oslo_concurrency.lockutils [None req-1ad98dc1-af3d-4d45-baef-d3a2c2cb6beb e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:46:05 np0005548731 nova_compute[232433]: 2025-12-06 07:46:05.220 232437 DEBUG oslo_concurrency.lockutils [None req-1ad98dc1-af3d-4d45-baef-d3a2c2cb6beb e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:46:05 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:46:05 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:46:05 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:46:05.465 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:46:06 np0005548731 nova_compute[232433]: 2025-12-06 07:46:06.297 232437 DEBUG nova.network.neutron [None req-1ad98dc1-af3d-4d45-baef-d3a2c2cb6beb e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] [instance: 2b4b3181-da12-4705-9215-7ee5b869102b] Successfully updated port: 879edf90-0812-4adf-b171-d993c6cfb26c _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Dec  6 02:46:06 np0005548731 nova_compute[232433]: 2025-12-06 07:46:06.320 232437 DEBUG oslo_concurrency.lockutils [None req-1ad98dc1-af3d-4d45-baef-d3a2c2cb6beb e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Acquiring lock "refresh_cache-2b4b3181-da12-4705-9215-7ee5b869102b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 02:46:06 np0005548731 nova_compute[232433]: 2025-12-06 07:46:06.321 232437 DEBUG oslo_concurrency.lockutils [None req-1ad98dc1-af3d-4d45-baef-d3a2c2cb6beb e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Acquired lock "refresh_cache-2b4b3181-da12-4705-9215-7ee5b869102b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 02:46:06 np0005548731 nova_compute[232433]: 2025-12-06 07:46:06.322 232437 DEBUG nova.network.neutron [None req-1ad98dc1-af3d-4d45-baef-d3a2c2cb6beb e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] [instance: 2b4b3181-da12-4705-9215-7ee5b869102b] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec  6 02:46:06 np0005548731 nova_compute[232433]: 2025-12-06 07:46:06.473 232437 DEBUG nova.compute.manager [req-371f9ecb-e4a3-4af0-95db-6f720ed32b59 req-2ef57b68-bc24-40fd-bfa5-fde7449bea89 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 2b4b3181-da12-4705-9215-7ee5b869102b] Received event network-changed-879edf90-0812-4adf-b171-d993c6cfb26c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:46:06 np0005548731 nova_compute[232433]: 2025-12-06 07:46:06.473 232437 DEBUG nova.compute.manager [req-371f9ecb-e4a3-4af0-95db-6f720ed32b59 req-2ef57b68-bc24-40fd-bfa5-fde7449bea89 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 2b4b3181-da12-4705-9215-7ee5b869102b] Refreshing instance network info cache due to event network-changed-879edf90-0812-4adf-b171-d993c6cfb26c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  6 02:46:06 np0005548731 nova_compute[232433]: 2025-12-06 07:46:06.474 232437 DEBUG oslo_concurrency.lockutils [req-371f9ecb-e4a3-4af0-95db-6f720ed32b59 req-2ef57b68-bc24-40fd-bfa5-fde7449bea89 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-2b4b3181-da12-4705-9215-7ee5b869102b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 02:46:06 np0005548731 nova_compute[232433]: 2025-12-06 07:46:06.530 232437 DEBUG nova.network.neutron [None req-1ad98dc1-af3d-4d45-baef-d3a2c2cb6beb e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] [instance: 2b4b3181-da12-4705-9215-7ee5b869102b] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec  6 02:46:06 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:46:06 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:46:06 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:46:06.540 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:46:07 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:46:07 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:46:07 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:46:07.468 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:46:07 np0005548731 nova_compute[232433]: 2025-12-06 07:46:07.813 232437 DEBUG nova.network.neutron [None req-1ad98dc1-af3d-4d45-baef-d3a2c2cb6beb e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] [instance: 2b4b3181-da12-4705-9215-7ee5b869102b] Updating instance_info_cache with network_info: [{"id": "879edf90-0812-4adf-b171-d993c6cfb26c", "address": "fa:16:3e:9c:dc:f4", "network": {"id": "6d1a17d6-5e44-40b7-832a-81cb86c02e71", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-1698704235-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f44ecb8bdc7e4692a299e29603301124", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap879edf90-08", "ovs_interfaceid": "879edf90-0812-4adf-b171-d993c6cfb26c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 02:46:07 np0005548731 nova_compute[232433]: 2025-12-06 07:46:07.833 232437 DEBUG oslo_concurrency.lockutils [None req-1ad98dc1-af3d-4d45-baef-d3a2c2cb6beb e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Releasing lock "refresh_cache-2b4b3181-da12-4705-9215-7ee5b869102b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 02:46:07 np0005548731 nova_compute[232433]: 2025-12-06 07:46:07.834 232437 DEBUG nova.compute.manager [None req-1ad98dc1-af3d-4d45-baef-d3a2c2cb6beb e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] [instance: 2b4b3181-da12-4705-9215-7ee5b869102b] Instance network_info: |[{"id": "879edf90-0812-4adf-b171-d993c6cfb26c", "address": "fa:16:3e:9c:dc:f4", "network": {"id": "6d1a17d6-5e44-40b7-832a-81cb86c02e71", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-1698704235-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f44ecb8bdc7e4692a299e29603301124", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap879edf90-08", "ovs_interfaceid": "879edf90-0812-4adf-b171-d993c6cfb26c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Dec  6 02:46:07 np0005548731 nova_compute[232433]: 2025-12-06 07:46:07.834 232437 DEBUG oslo_concurrency.lockutils [req-371f9ecb-e4a3-4af0-95db-6f720ed32b59 req-2ef57b68-bc24-40fd-bfa5-fde7449bea89 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-2b4b3181-da12-4705-9215-7ee5b869102b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 02:46:07 np0005548731 nova_compute[232433]: 2025-12-06 07:46:07.835 232437 DEBUG nova.network.neutron [req-371f9ecb-e4a3-4af0-95db-6f720ed32b59 req-2ef57b68-bc24-40fd-bfa5-fde7449bea89 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 2b4b3181-da12-4705-9215-7ee5b869102b] Refreshing network info cache for port 879edf90-0812-4adf-b171-d993c6cfb26c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  6 02:46:07 np0005548731 nova_compute[232433]: 2025-12-06 07:46:07.839 232437 DEBUG nova.virt.libvirt.driver [None req-1ad98dc1-af3d-4d45-baef-d3a2c2cb6beb e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] [instance: 2b4b3181-da12-4705-9215-7ee5b869102b] Start _get_guest_xml network_info=[{"id": "879edf90-0812-4adf-b171-d993c6cfb26c", "address": "fa:16:3e:9c:dc:f4", "network": {"id": "6d1a17d6-5e44-40b7-832a-81cb86c02e71", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-1698704235-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f44ecb8bdc7e4692a299e29603301124", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap879edf90-08", "ovs_interfaceid": "879edf90-0812-4adf-b171-d993c6cfb26c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'device_type': 'disk', 'encryption_secret_uuid': None, 'disk_bus': 'virtio', 'boot_index': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'encryption_options': None, 'size': 0, 'encrypted': False, 'image_id': '6efab05d-c7cf-4770-a5c3-c806a2739063'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec  6 02:46:07 np0005548731 nova_compute[232433]: 2025-12-06 07:46:07.845 232437 WARNING nova.virt.libvirt.driver [None req-1ad98dc1-af3d-4d45-baef-d3a2c2cb6beb e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  6 02:46:07 np0005548731 nova_compute[232433]: 2025-12-06 07:46:07.857 232437 DEBUG nova.virt.libvirt.host [None req-1ad98dc1-af3d-4d45-baef-d3a2c2cb6beb e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec  6 02:46:07 np0005548731 nova_compute[232433]: 2025-12-06 07:46:07.858 232437 DEBUG nova.virt.libvirt.host [None req-1ad98dc1-af3d-4d45-baef-d3a2c2cb6beb e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec  6 02:46:07 np0005548731 nova_compute[232433]: 2025-12-06 07:46:07.869 232437 DEBUG nova.virt.libvirt.host [None req-1ad98dc1-af3d-4d45-baef-d3a2c2cb6beb e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec  6 02:46:07 np0005548731 nova_compute[232433]: 2025-12-06 07:46:07.870 232437 DEBUG nova.virt.libvirt.host [None req-1ad98dc1-af3d-4d45-baef-d3a2c2cb6beb e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec  6 02:46:07 np0005548731 nova_compute[232433]: 2025-12-06 07:46:07.871 232437 DEBUG nova.virt.libvirt.driver [None req-1ad98dc1-af3d-4d45-baef-d3a2c2cb6beb e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec  6 02:46:07 np0005548731 nova_compute[232433]: 2025-12-06 07:46:07.872 232437 DEBUG nova.virt.hardware [None req-1ad98dc1-af3d-4d45-baef-d3a2c2cb6beb e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-06T06:56:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='25848a18-11d9-4f11-80b5-5d005675c76d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec  6 02:46:07 np0005548731 nova_compute[232433]: 2025-12-06 07:46:07.872 232437 DEBUG nova.virt.hardware [None req-1ad98dc1-af3d-4d45-baef-d3a2c2cb6beb e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec  6 02:46:07 np0005548731 nova_compute[232433]: 2025-12-06 07:46:07.872 232437 DEBUG nova.virt.hardware [None req-1ad98dc1-af3d-4d45-baef-d3a2c2cb6beb e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec  6 02:46:07 np0005548731 nova_compute[232433]: 2025-12-06 07:46:07.873 232437 DEBUG nova.virt.hardware [None req-1ad98dc1-af3d-4d45-baef-d3a2c2cb6beb e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec  6 02:46:07 np0005548731 nova_compute[232433]: 2025-12-06 07:46:07.873 232437 DEBUG nova.virt.hardware [None req-1ad98dc1-af3d-4d45-baef-d3a2c2cb6beb e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec  6 02:46:07 np0005548731 nova_compute[232433]: 2025-12-06 07:46:07.873 232437 DEBUG nova.virt.hardware [None req-1ad98dc1-af3d-4d45-baef-d3a2c2cb6beb e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec  6 02:46:07 np0005548731 nova_compute[232433]: 2025-12-06 07:46:07.874 232437 DEBUG nova.virt.hardware [None req-1ad98dc1-af3d-4d45-baef-d3a2c2cb6beb e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec  6 02:46:07 np0005548731 nova_compute[232433]: 2025-12-06 07:46:07.874 232437 DEBUG nova.virt.hardware [None req-1ad98dc1-af3d-4d45-baef-d3a2c2cb6beb e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec  6 02:46:07 np0005548731 nova_compute[232433]: 2025-12-06 07:46:07.874 232437 DEBUG nova.virt.hardware [None req-1ad98dc1-af3d-4d45-baef-d3a2c2cb6beb e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec  6 02:46:07 np0005548731 nova_compute[232433]: 2025-12-06 07:46:07.875 232437 DEBUG nova.virt.hardware [None req-1ad98dc1-af3d-4d45-baef-d3a2c2cb6beb e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec  6 02:46:07 np0005548731 nova_compute[232433]: 2025-12-06 07:46:07.875 232437 DEBUG nova.virt.hardware [None req-1ad98dc1-af3d-4d45-baef-d3a2c2cb6beb e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec  6 02:46:07 np0005548731 nova_compute[232433]: 2025-12-06 07:46:07.879 232437 DEBUG oslo_concurrency.processutils [None req-1ad98dc1-af3d-4d45-baef-d3a2c2cb6beb e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:46:07 np0005548731 nova_compute[232433]: 2025-12-06 07:46:07.952 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:46:08 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  6 02:46:08 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/750190271' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  6 02:46:08 np0005548731 ovn_controller[133927]: 2025-12-06T07:46:08Z|00725|binding|INFO|Releasing lport f1ca157c-f88b-4351-b03a-b04a75537062 from this chassis (sb_readonly=0)
Dec  6 02:46:08 np0005548731 nova_compute[232433]: 2025-12-06 07:46:08.450 232437 DEBUG oslo_concurrency.processutils [None req-1ad98dc1-af3d-4d45-baef-d3a2c2cb6beb e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.571s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:46:08 np0005548731 nova_compute[232433]: 2025-12-06 07:46:08.451 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:46:08 np0005548731 nova_compute[232433]: 2025-12-06 07:46:08.487 232437 DEBUG nova.storage.rbd_utils [None req-1ad98dc1-af3d-4d45-baef-d3a2c2cb6beb e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] rbd image 2b4b3181-da12-4705-9215-7ee5b869102b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:46:08 np0005548731 nova_compute[232433]: 2025-12-06 07:46:08.493 232437 DEBUG oslo_concurrency.processutils [None req-1ad98dc1-af3d-4d45-baef-d3a2c2cb6beb e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:46:08 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:46:08 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:46:08 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:46:08.541 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:46:08 np0005548731 nova_compute[232433]: 2025-12-06 07:46:08.612 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:46:08 np0005548731 ovn_controller[133927]: 2025-12-06T07:46:08Z|00726|binding|INFO|Releasing lport f1ca157c-f88b-4351-b03a-b04a75537062 from this chassis (sb_readonly=0)
Dec  6 02:46:08 np0005548731 nova_compute[232433]: 2025-12-06 07:46:08.625 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:46:08 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  6 02:46:08 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2971671356' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  6 02:46:08 np0005548731 nova_compute[232433]: 2025-12-06 07:46:08.922 232437 DEBUG oslo_concurrency.processutils [None req-1ad98dc1-af3d-4d45-baef-d3a2c2cb6beb e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.430s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:46:08 np0005548731 nova_compute[232433]: 2025-12-06 07:46:08.924 232437 DEBUG nova.virt.libvirt.vif [None req-1ad98dc1-af3d-4d45-baef-d3a2c2cb6beb e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T07:46:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerStableDeviceRescueTest-server-1284166049',display_name='tempest-ServerStableDeviceRescueTest-server-1284166049',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serverstabledevicerescuetest-server-1284166049',id=159,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGiOM3BHZdMb9T+i2z28aV9d25R+ylqHokCUcVIaz2h7DQq++pQYpAQHDFB53HHc1FWwSamucUXGjXIy3rSPSRgsSGsD3yr/u/2kWMgZhyKrH9WAv41/n4dRvjMhKO5fVA==',key_name='tempest-keypair-1110836856',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f44ecb8bdc7e4692a299e29603301124',ramdisk_id='',reservation_id='r-73o1o3hz',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerStableDeviceRescueTest-1830949011',owner_user_name='tempest-ServerStableDeviceRescueTest-1830949011-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:46:03Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='e997a5eeee174b368a43ed8cb35fa1d0',uuid=2b4b3181-da12-4705-9215-7ee5b869102b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "879edf90-0812-4adf-b171-d993c6cfb26c", "address": "fa:16:3e:9c:dc:f4", "network": {"id": "6d1a17d6-5e44-40b7-832a-81cb86c02e71", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-1698704235-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f44ecb8bdc7e4692a299e29603301124", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap879edf90-08", "ovs_interfaceid": "879edf90-0812-4adf-b171-d993c6cfb26c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec  6 02:46:08 np0005548731 nova_compute[232433]: 2025-12-06 07:46:08.925 232437 DEBUG nova.network.os_vif_util [None req-1ad98dc1-af3d-4d45-baef-d3a2c2cb6beb e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Converting VIF {"id": "879edf90-0812-4adf-b171-d993c6cfb26c", "address": "fa:16:3e:9c:dc:f4", "network": {"id": "6d1a17d6-5e44-40b7-832a-81cb86c02e71", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-1698704235-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f44ecb8bdc7e4692a299e29603301124", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap879edf90-08", "ovs_interfaceid": "879edf90-0812-4adf-b171-d993c6cfb26c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 02:46:08 np0005548731 nova_compute[232433]: 2025-12-06 07:46:08.926 232437 DEBUG nova.network.os_vif_util [None req-1ad98dc1-af3d-4d45-baef-d3a2c2cb6beb e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9c:dc:f4,bridge_name='br-int',has_traffic_filtering=True,id=879edf90-0812-4adf-b171-d993c6cfb26c,network=Network(6d1a17d6-5e44-40b7-832a-81cb86c02e71),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap879edf90-08') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 02:46:08 np0005548731 nova_compute[232433]: 2025-12-06 07:46:08.927 232437 DEBUG nova.objects.instance [None req-1ad98dc1-af3d-4d45-baef-d3a2c2cb6beb e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Lazy-loading 'pci_devices' on Instance uuid 2b4b3181-da12-4705-9215-7ee5b869102b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:46:08 np0005548731 nova_compute[232433]: 2025-12-06 07:46:08.946 232437 DEBUG nova.virt.libvirt.driver [None req-1ad98dc1-af3d-4d45-baef-d3a2c2cb6beb e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] [instance: 2b4b3181-da12-4705-9215-7ee5b869102b] End _get_guest_xml xml=<domain type="kvm">
Dec  6 02:46:08 np0005548731 nova_compute[232433]:  <uuid>2b4b3181-da12-4705-9215-7ee5b869102b</uuid>
Dec  6 02:46:08 np0005548731 nova_compute[232433]:  <name>instance-0000009f</name>
Dec  6 02:46:08 np0005548731 nova_compute[232433]:  <memory>131072</memory>
Dec  6 02:46:08 np0005548731 nova_compute[232433]:  <vcpu>1</vcpu>
Dec  6 02:46:08 np0005548731 nova_compute[232433]:  <metadata>
Dec  6 02:46:08 np0005548731 nova_compute[232433]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec  6 02:46:08 np0005548731 nova_compute[232433]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec  6 02:46:08 np0005548731 nova_compute[232433]:      <nova:name>tempest-ServerStableDeviceRescueTest-server-1284166049</nova:name>
Dec  6 02:46:08 np0005548731 nova_compute[232433]:      <nova:creationTime>2025-12-06 07:46:07</nova:creationTime>
Dec  6 02:46:08 np0005548731 nova_compute[232433]:      <nova:flavor name="m1.nano">
Dec  6 02:46:08 np0005548731 nova_compute[232433]:        <nova:memory>128</nova:memory>
Dec  6 02:46:08 np0005548731 nova_compute[232433]:        <nova:disk>1</nova:disk>
Dec  6 02:46:08 np0005548731 nova_compute[232433]:        <nova:swap>0</nova:swap>
Dec  6 02:46:08 np0005548731 nova_compute[232433]:        <nova:ephemeral>0</nova:ephemeral>
Dec  6 02:46:08 np0005548731 nova_compute[232433]:        <nova:vcpus>1</nova:vcpus>
Dec  6 02:46:08 np0005548731 nova_compute[232433]:      </nova:flavor>
Dec  6 02:46:08 np0005548731 nova_compute[232433]:      <nova:owner>
Dec  6 02:46:08 np0005548731 nova_compute[232433]:        <nova:user uuid="e997a5eeee174b368a43ed8cb35fa1d0">tempest-ServerStableDeviceRescueTest-1830949011-project-member</nova:user>
Dec  6 02:46:08 np0005548731 nova_compute[232433]:        <nova:project uuid="f44ecb8bdc7e4692a299e29603301124">tempest-ServerStableDeviceRescueTest-1830949011</nova:project>
Dec  6 02:46:08 np0005548731 nova_compute[232433]:      </nova:owner>
Dec  6 02:46:08 np0005548731 nova_compute[232433]:      <nova:root type="image" uuid="6efab05d-c7cf-4770-a5c3-c806a2739063"/>
Dec  6 02:46:08 np0005548731 nova_compute[232433]:      <nova:ports>
Dec  6 02:46:08 np0005548731 nova_compute[232433]:        <nova:port uuid="879edf90-0812-4adf-b171-d993c6cfb26c">
Dec  6 02:46:08 np0005548731 nova_compute[232433]:          <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Dec  6 02:46:08 np0005548731 nova_compute[232433]:        </nova:port>
Dec  6 02:46:08 np0005548731 nova_compute[232433]:      </nova:ports>
Dec  6 02:46:08 np0005548731 nova_compute[232433]:    </nova:instance>
Dec  6 02:46:08 np0005548731 nova_compute[232433]:  </metadata>
Dec  6 02:46:08 np0005548731 nova_compute[232433]:  <sysinfo type="smbios">
Dec  6 02:46:08 np0005548731 nova_compute[232433]:    <system>
Dec  6 02:46:08 np0005548731 nova_compute[232433]:      <entry name="manufacturer">RDO</entry>
Dec  6 02:46:08 np0005548731 nova_compute[232433]:      <entry name="product">OpenStack Compute</entry>
Dec  6 02:46:08 np0005548731 nova_compute[232433]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec  6 02:46:08 np0005548731 nova_compute[232433]:      <entry name="serial">2b4b3181-da12-4705-9215-7ee5b869102b</entry>
Dec  6 02:46:08 np0005548731 nova_compute[232433]:      <entry name="uuid">2b4b3181-da12-4705-9215-7ee5b869102b</entry>
Dec  6 02:46:08 np0005548731 nova_compute[232433]:      <entry name="family">Virtual Machine</entry>
Dec  6 02:46:08 np0005548731 nova_compute[232433]:    </system>
Dec  6 02:46:08 np0005548731 nova_compute[232433]:  </sysinfo>
Dec  6 02:46:08 np0005548731 nova_compute[232433]:  <os>
Dec  6 02:46:08 np0005548731 nova_compute[232433]:    <type arch="x86_64" machine="q35">hvm</type>
Dec  6 02:46:08 np0005548731 nova_compute[232433]:    <boot dev="hd"/>
Dec  6 02:46:08 np0005548731 nova_compute[232433]:    <smbios mode="sysinfo"/>
Dec  6 02:46:08 np0005548731 nova_compute[232433]:  </os>
Dec  6 02:46:08 np0005548731 nova_compute[232433]:  <features>
Dec  6 02:46:08 np0005548731 nova_compute[232433]:    <acpi/>
Dec  6 02:46:08 np0005548731 nova_compute[232433]:    <apic/>
Dec  6 02:46:08 np0005548731 nova_compute[232433]:    <vmcoreinfo/>
Dec  6 02:46:08 np0005548731 nova_compute[232433]:  </features>
Dec  6 02:46:08 np0005548731 nova_compute[232433]:  <clock offset="utc">
Dec  6 02:46:08 np0005548731 nova_compute[232433]:    <timer name="pit" tickpolicy="delay"/>
Dec  6 02:46:08 np0005548731 nova_compute[232433]:    <timer name="rtc" tickpolicy="catchup"/>
Dec  6 02:46:08 np0005548731 nova_compute[232433]:    <timer name="hpet" present="no"/>
Dec  6 02:46:08 np0005548731 nova_compute[232433]:  </clock>
Dec  6 02:46:08 np0005548731 nova_compute[232433]:  <cpu mode="custom" match="exact">
Dec  6 02:46:08 np0005548731 nova_compute[232433]:    <model>Nehalem</model>
Dec  6 02:46:08 np0005548731 nova_compute[232433]:    <topology sockets="1" cores="1" threads="1"/>
Dec  6 02:46:08 np0005548731 nova_compute[232433]:  </cpu>
Dec  6 02:46:08 np0005548731 nova_compute[232433]:  <devices>
Dec  6 02:46:08 np0005548731 nova_compute[232433]:    <disk type="network" device="disk">
Dec  6 02:46:08 np0005548731 nova_compute[232433]:      <driver type="raw" cache="none"/>
Dec  6 02:46:08 np0005548731 nova_compute[232433]:      <source protocol="rbd" name="vms/2b4b3181-da12-4705-9215-7ee5b869102b_disk">
Dec  6 02:46:08 np0005548731 nova_compute[232433]:        <host name="192.168.122.100" port="6789"/>
Dec  6 02:46:08 np0005548731 nova_compute[232433]:        <host name="192.168.122.102" port="6789"/>
Dec  6 02:46:08 np0005548731 nova_compute[232433]:        <host name="192.168.122.101" port="6789"/>
Dec  6 02:46:08 np0005548731 nova_compute[232433]:      </source>
Dec  6 02:46:08 np0005548731 nova_compute[232433]:      <auth username="openstack">
Dec  6 02:46:08 np0005548731 nova_compute[232433]:        <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec  6 02:46:08 np0005548731 nova_compute[232433]:      </auth>
Dec  6 02:46:08 np0005548731 nova_compute[232433]:      <target dev="vda" bus="virtio"/>
Dec  6 02:46:08 np0005548731 nova_compute[232433]:    </disk>
Dec  6 02:46:08 np0005548731 nova_compute[232433]:    <disk type="network" device="cdrom">
Dec  6 02:46:08 np0005548731 nova_compute[232433]:      <driver type="raw" cache="none"/>
Dec  6 02:46:08 np0005548731 nova_compute[232433]:      <source protocol="rbd" name="vms/2b4b3181-da12-4705-9215-7ee5b869102b_disk.config">
Dec  6 02:46:08 np0005548731 nova_compute[232433]:        <host name="192.168.122.100" port="6789"/>
Dec  6 02:46:08 np0005548731 nova_compute[232433]:        <host name="192.168.122.102" port="6789"/>
Dec  6 02:46:08 np0005548731 nova_compute[232433]:        <host name="192.168.122.101" port="6789"/>
Dec  6 02:46:08 np0005548731 nova_compute[232433]:      </source>
Dec  6 02:46:08 np0005548731 nova_compute[232433]:      <auth username="openstack">
Dec  6 02:46:08 np0005548731 nova_compute[232433]:        <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec  6 02:46:08 np0005548731 nova_compute[232433]:      </auth>
Dec  6 02:46:08 np0005548731 nova_compute[232433]:      <target dev="sda" bus="sata"/>
Dec  6 02:46:08 np0005548731 nova_compute[232433]:    </disk>
Dec  6 02:46:08 np0005548731 nova_compute[232433]:    <interface type="ethernet">
Dec  6 02:46:08 np0005548731 nova_compute[232433]:      <mac address="fa:16:3e:9c:dc:f4"/>
Dec  6 02:46:08 np0005548731 nova_compute[232433]:      <model type="virtio"/>
Dec  6 02:46:08 np0005548731 nova_compute[232433]:      <driver name="vhost" rx_queue_size="512"/>
Dec  6 02:46:08 np0005548731 nova_compute[232433]:      <mtu size="1442"/>
Dec  6 02:46:08 np0005548731 nova_compute[232433]:      <target dev="tap879edf90-08"/>
Dec  6 02:46:08 np0005548731 nova_compute[232433]:    </interface>
Dec  6 02:46:08 np0005548731 nova_compute[232433]:    <serial type="pty">
Dec  6 02:46:08 np0005548731 nova_compute[232433]:      <log file="/var/lib/nova/instances/2b4b3181-da12-4705-9215-7ee5b869102b/console.log" append="off"/>
Dec  6 02:46:08 np0005548731 nova_compute[232433]:    </serial>
Dec  6 02:46:08 np0005548731 nova_compute[232433]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec  6 02:46:08 np0005548731 nova_compute[232433]:    <video>
Dec  6 02:46:08 np0005548731 nova_compute[232433]:      <model type="virtio"/>
Dec  6 02:46:08 np0005548731 nova_compute[232433]:    </video>
Dec  6 02:46:08 np0005548731 nova_compute[232433]:    <input type="tablet" bus="usb"/>
Dec  6 02:46:08 np0005548731 nova_compute[232433]:    <rng model="virtio">
Dec  6 02:46:08 np0005548731 nova_compute[232433]:      <backend model="random">/dev/urandom</backend>
Dec  6 02:46:08 np0005548731 nova_compute[232433]:    </rng>
Dec  6 02:46:08 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root"/>
Dec  6 02:46:08 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:46:08 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:46:08 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:46:08 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:46:08 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:46:08 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:46:08 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:46:08 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:46:08 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:46:08 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:46:08 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:46:08 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:46:08 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:46:08 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:46:08 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:46:08 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:46:08 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:46:08 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:46:08 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:46:08 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:46:08 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:46:08 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:46:08 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:46:08 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:46:08 np0005548731 nova_compute[232433]:    <controller type="usb" index="0"/>
Dec  6 02:46:08 np0005548731 nova_compute[232433]:    <memballoon model="virtio">
Dec  6 02:46:08 np0005548731 nova_compute[232433]:      <stats period="10"/>
Dec  6 02:46:08 np0005548731 nova_compute[232433]:    </memballoon>
Dec  6 02:46:08 np0005548731 nova_compute[232433]:  </devices>
Dec  6 02:46:08 np0005548731 nova_compute[232433]: </domain>
Dec  6 02:46:08 np0005548731 nova_compute[232433]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec  6 02:46:08 np0005548731 nova_compute[232433]: 2025-12-06 07:46:08.948 232437 DEBUG nova.compute.manager [None req-1ad98dc1-af3d-4d45-baef-d3a2c2cb6beb e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] [instance: 2b4b3181-da12-4705-9215-7ee5b869102b] Preparing to wait for external event network-vif-plugged-879edf90-0812-4adf-b171-d993c6cfb26c prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec  6 02:46:08 np0005548731 nova_compute[232433]: 2025-12-06 07:46:08.948 232437 DEBUG oslo_concurrency.lockutils [None req-1ad98dc1-af3d-4d45-baef-d3a2c2cb6beb e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Acquiring lock "2b4b3181-da12-4705-9215-7ee5b869102b-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:46:08 np0005548731 nova_compute[232433]: 2025-12-06 07:46:08.948 232437 DEBUG oslo_concurrency.lockutils [None req-1ad98dc1-af3d-4d45-baef-d3a2c2cb6beb e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Lock "2b4b3181-da12-4705-9215-7ee5b869102b-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:46:08 np0005548731 nova_compute[232433]: 2025-12-06 07:46:08.949 232437 DEBUG oslo_concurrency.lockutils [None req-1ad98dc1-af3d-4d45-baef-d3a2c2cb6beb e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Lock "2b4b3181-da12-4705-9215-7ee5b869102b-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:46:08 np0005548731 nova_compute[232433]: 2025-12-06 07:46:08.949 232437 DEBUG nova.virt.libvirt.vif [None req-1ad98dc1-af3d-4d45-baef-d3a2c2cb6beb e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T07:46:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerStableDeviceRescueTest-server-1284166049',display_name='tempest-ServerStableDeviceRescueTest-server-1284166049',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serverstabledevicerescuetest-server-1284166049',id=159,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGiOM3BHZdMb9T+i2z28aV9d25R+ylqHokCUcVIaz2h7DQq++pQYpAQHDFB53HHc1FWwSamucUXGjXIy3rSPSRgsSGsD3yr/u/2kWMgZhyKrH9WAv41/n4dRvjMhKO5fVA==',key_name='tempest-keypair-1110836856',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f44ecb8bdc7e4692a299e29603301124',ramdisk_id='',reservation_id='r-73o1o3hz',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerStableDeviceRescueTest-1830949011',owner_user_name='tempest-ServerStableDeviceRescueTest-1830949011-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:46:03Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='e997a5eeee174b368a43ed8cb35fa1d0',uuid=2b4b3181-da12-4705-9215-7ee5b869102b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "879edf90-0812-4adf-b171-d993c6cfb26c", "address": "fa:16:3e:9c:dc:f4", "network": {"id": "6d1a17d6-5e44-40b7-832a-81cb86c02e71", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-1698704235-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": 
[{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f44ecb8bdc7e4692a299e29603301124", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap879edf90-08", "ovs_interfaceid": "879edf90-0812-4adf-b171-d993c6cfb26c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec  6 02:46:08 np0005548731 nova_compute[232433]: 2025-12-06 07:46:08.950 232437 DEBUG nova.network.os_vif_util [None req-1ad98dc1-af3d-4d45-baef-d3a2c2cb6beb e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Converting VIF {"id": "879edf90-0812-4adf-b171-d993c6cfb26c", "address": "fa:16:3e:9c:dc:f4", "network": {"id": "6d1a17d6-5e44-40b7-832a-81cb86c02e71", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-1698704235-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f44ecb8bdc7e4692a299e29603301124", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap879edf90-08", "ovs_interfaceid": "879edf90-0812-4adf-b171-d993c6cfb26c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 02:46:08 np0005548731 nova_compute[232433]: 2025-12-06 07:46:08.950 232437 DEBUG nova.network.os_vif_util [None req-1ad98dc1-af3d-4d45-baef-d3a2c2cb6beb e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9c:dc:f4,bridge_name='br-int',has_traffic_filtering=True,id=879edf90-0812-4adf-b171-d993c6cfb26c,network=Network(6d1a17d6-5e44-40b7-832a-81cb86c02e71),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap879edf90-08') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 02:46:08 np0005548731 nova_compute[232433]: 2025-12-06 07:46:08.950 232437 DEBUG os_vif [None req-1ad98dc1-af3d-4d45-baef-d3a2c2cb6beb e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:9c:dc:f4,bridge_name='br-int',has_traffic_filtering=True,id=879edf90-0812-4adf-b171-d993c6cfb26c,network=Network(6d1a17d6-5e44-40b7-832a-81cb86c02e71),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap879edf90-08') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec  6 02:46:08 np0005548731 nova_compute[232433]: 2025-12-06 07:46:08.951 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:46:08 np0005548731 nova_compute[232433]: 2025-12-06 07:46:08.951 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:46:08 np0005548731 nova_compute[232433]: 2025-12-06 07:46:08.952 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  6 02:46:08 np0005548731 nova_compute[232433]: 2025-12-06 07:46:08.955 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:46:08 np0005548731 nova_compute[232433]: 2025-12-06 07:46:08.955 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap879edf90-08, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:46:08 np0005548731 nova_compute[232433]: 2025-12-06 07:46:08.956 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap879edf90-08, col_values=(('external_ids', {'iface-id': '879edf90-0812-4adf-b171-d993c6cfb26c', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:9c:dc:f4', 'vm-uuid': '2b4b3181-da12-4705-9215-7ee5b869102b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:46:09 np0005548731 nova_compute[232433]: 2025-12-06 07:46:09.005 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:46:09 np0005548731 NetworkManager[49182]: <info>  [1765007169.0062] manager: (tap879edf90-08): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/334)
Dec  6 02:46:09 np0005548731 nova_compute[232433]: 2025-12-06 07:46:09.008 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  6 02:46:09 np0005548731 nova_compute[232433]: 2025-12-06 07:46:09.012 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:46:09 np0005548731 nova_compute[232433]: 2025-12-06 07:46:09.012 232437 INFO os_vif [None req-1ad98dc1-af3d-4d45-baef-d3a2c2cb6beb e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:9c:dc:f4,bridge_name='br-int',has_traffic_filtering=True,id=879edf90-0812-4adf-b171-d993c6cfb26c,network=Network(6d1a17d6-5e44-40b7-832a-81cb86c02e71),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap879edf90-08')#033[00m
Dec  6 02:46:09 np0005548731 nova_compute[232433]: 2025-12-06 07:46:09.062 232437 DEBUG nova.virt.libvirt.driver [None req-1ad98dc1-af3d-4d45-baef-d3a2c2cb6beb e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  6 02:46:09 np0005548731 nova_compute[232433]: 2025-12-06 07:46:09.062 232437 DEBUG nova.virt.libvirt.driver [None req-1ad98dc1-af3d-4d45-baef-d3a2c2cb6beb e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  6 02:46:09 np0005548731 nova_compute[232433]: 2025-12-06 07:46:09.063 232437 DEBUG nova.virt.libvirt.driver [None req-1ad98dc1-af3d-4d45-baef-d3a2c2cb6beb e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] No VIF found with MAC fa:16:3e:9c:dc:f4, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec  6 02:46:09 np0005548731 nova_compute[232433]: 2025-12-06 07:46:09.064 232437 INFO nova.virt.libvirt.driver [None req-1ad98dc1-af3d-4d45-baef-d3a2c2cb6beb e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] [instance: 2b4b3181-da12-4705-9215-7ee5b869102b] Using config drive#033[00m
Dec  6 02:46:09 np0005548731 nova_compute[232433]: 2025-12-06 07:46:09.086 232437 DEBUG nova.storage.rbd_utils [None req-1ad98dc1-af3d-4d45-baef-d3a2c2cb6beb e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] rbd image 2b4b3181-da12-4705-9215-7ee5b869102b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:46:09 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:46:09 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:46:09 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:46:09.469 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:46:09 np0005548731 nova_compute[232433]: 2025-12-06 07:46:09.639 232437 INFO nova.virt.libvirt.driver [None req-1ad98dc1-af3d-4d45-baef-d3a2c2cb6beb e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] [instance: 2b4b3181-da12-4705-9215-7ee5b869102b] Creating config drive at /var/lib/nova/instances/2b4b3181-da12-4705-9215-7ee5b869102b/disk.config#033[00m
Dec  6 02:46:09 np0005548731 nova_compute[232433]: 2025-12-06 07:46:09.646 232437 DEBUG oslo_concurrency.processutils [None req-1ad98dc1-af3d-4d45-baef-d3a2c2cb6beb e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/2b4b3181-da12-4705-9215-7ee5b869102b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp5d2xycap execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:46:09 np0005548731 nova_compute[232433]: 2025-12-06 07:46:09.788 232437 DEBUG oslo_concurrency.processutils [None req-1ad98dc1-af3d-4d45-baef-d3a2c2cb6beb e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/2b4b3181-da12-4705-9215-7ee5b869102b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp5d2xycap" returned: 0 in 0.142s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:46:09 np0005548731 nova_compute[232433]: 2025-12-06 07:46:09.812 232437 DEBUG nova.storage.rbd_utils [None req-1ad98dc1-af3d-4d45-baef-d3a2c2cb6beb e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] rbd image 2b4b3181-da12-4705-9215-7ee5b869102b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:46:09 np0005548731 nova_compute[232433]: 2025-12-06 07:46:09.816 232437 DEBUG oslo_concurrency.processutils [None req-1ad98dc1-af3d-4d45-baef-d3a2c2cb6beb e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/2b4b3181-da12-4705-9215-7ee5b869102b/disk.config 2b4b3181-da12-4705-9215-7ee5b869102b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:46:10 np0005548731 nova_compute[232433]: 2025-12-06 07:46:10.008 232437 DEBUG nova.network.neutron [req-371f9ecb-e4a3-4af0-95db-6f720ed32b59 req-2ef57b68-bc24-40fd-bfa5-fde7449bea89 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 2b4b3181-da12-4705-9215-7ee5b869102b] Updated VIF entry in instance network info cache for port 879edf90-0812-4adf-b171-d993c6cfb26c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  6 02:46:10 np0005548731 nova_compute[232433]: 2025-12-06 07:46:10.009 232437 DEBUG nova.network.neutron [req-371f9ecb-e4a3-4af0-95db-6f720ed32b59 req-2ef57b68-bc24-40fd-bfa5-fde7449bea89 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 2b4b3181-da12-4705-9215-7ee5b869102b] Updating instance_info_cache with network_info: [{"id": "879edf90-0812-4adf-b171-d993c6cfb26c", "address": "fa:16:3e:9c:dc:f4", "network": {"id": "6d1a17d6-5e44-40b7-832a-81cb86c02e71", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-1698704235-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f44ecb8bdc7e4692a299e29603301124", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap879edf90-08", "ovs_interfaceid": "879edf90-0812-4adf-b171-d993c6cfb26c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 02:46:10 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e340 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:46:10 np0005548731 nova_compute[232433]: 2025-12-06 07:46:10.038 232437 DEBUG oslo_concurrency.lockutils [req-371f9ecb-e4a3-4af0-95db-6f720ed32b59 req-2ef57b68-bc24-40fd-bfa5-fde7449bea89 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-2b4b3181-da12-4705-9215-7ee5b869102b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 02:46:10 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:46:10 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:46:10 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:46:10.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:46:11 np0005548731 nova_compute[232433]: 2025-12-06 07:46:11.049 232437 DEBUG oslo_concurrency.processutils [None req-1ad98dc1-af3d-4d45-baef-d3a2c2cb6beb e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/2b4b3181-da12-4705-9215-7ee5b869102b/disk.config 2b4b3181-da12-4705-9215-7ee5b869102b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.233s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:46:11 np0005548731 nova_compute[232433]: 2025-12-06 07:46:11.050 232437 INFO nova.virt.libvirt.driver [None req-1ad98dc1-af3d-4d45-baef-d3a2c2cb6beb e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] [instance: 2b4b3181-da12-4705-9215-7ee5b869102b] Deleting local config drive /var/lib/nova/instances/2b4b3181-da12-4705-9215-7ee5b869102b/disk.config because it was imported into RBD.#033[00m
Dec  6 02:46:11 np0005548731 kernel: tap879edf90-08: entered promiscuous mode
Dec  6 02:46:11 np0005548731 NetworkManager[49182]: <info>  [1765007171.0929] manager: (tap879edf90-08): new Tun device (/org/freedesktop/NetworkManager/Devices/335)
Dec  6 02:46:11 np0005548731 nova_compute[232433]: 2025-12-06 07:46:11.092 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:46:11 np0005548731 ovn_controller[133927]: 2025-12-06T07:46:11Z|00727|binding|INFO|Claiming lport 879edf90-0812-4adf-b171-d993c6cfb26c for this chassis.
Dec  6 02:46:11 np0005548731 ovn_controller[133927]: 2025-12-06T07:46:11Z|00728|binding|INFO|879edf90-0812-4adf-b171-d993c6cfb26c: Claiming fa:16:3e:9c:dc:f4 10.100.0.7
Dec  6 02:46:11 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:46:11.104 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9c:dc:f4 10.100.0.7'], port_security=['fa:16:3e:9c:dc:f4 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '2b4b3181-da12-4705-9215-7ee5b869102b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6d1a17d6-5e44-40b7-832a-81cb86c02e71', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f44ecb8bdc7e4692a299e29603301124', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'f9c55c19-5297-4b9d-814f-3c2f976ceba7', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ef95e15f-f36a-4631-8598-89c7e0374fce, chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], logical_port=879edf90-0812-4adf-b171-d993c6cfb26c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 02:46:11 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:46:11.106 143965 INFO neutron.agent.ovn.metadata.agent [-] Port 879edf90-0812-4adf-b171-d993c6cfb26c in datapath 6d1a17d6-5e44-40b7-832a-81cb86c02e71 bound to our chassis#033[00m
Dec  6 02:46:11 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:46:11.107 143965 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 6d1a17d6-5e44-40b7-832a-81cb86c02e71#033[00m
Dec  6 02:46:11 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:46:11.119 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[ba405c94-c14e-4d90-a3f0-fd1986b6e4d3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:46:11 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:46:11.120 143965 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap6d1a17d6-51 in ovnmeta-6d1a17d6-5e44-40b7-832a-81cb86c02e71 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Dec  6 02:46:11 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:46:11.122 236668 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap6d1a17d6-50 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Dec  6 02:46:11 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:46:11.122 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[4b824eac-883b-4c34-8125-46e663220c89]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:46:11 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:46:11.123 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[fd9b4ebf-7154-42e9-9fba-2420fa007430]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:46:11 np0005548731 systemd-udevd[302001]: Network interface NamePolicy= disabled on kernel command line.
Dec  6 02:46:11 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:46:11.133 144079 DEBUG oslo.privsep.daemon [-] privsep: reply[6e2093dc-d7b4-4d2a-b1be-75463cb8c1b8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:46:11 np0005548731 systemd-machined[195355]: New machine qemu-74-instance-0000009f.
Dec  6 02:46:11 np0005548731 NetworkManager[49182]: <info>  [1765007171.1461] device (tap879edf90-08): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec  6 02:46:11 np0005548731 NetworkManager[49182]: <info>  [1765007171.1474] device (tap879edf90-08): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec  6 02:46:11 np0005548731 systemd[1]: Started Virtual Machine qemu-74-instance-0000009f.
Dec  6 02:46:11 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:46:11.159 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[ce06d821-aad3-4d83-a78b-ca67070cc65a]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:46:11 np0005548731 ovn_controller[133927]: 2025-12-06T07:46:11Z|00729|binding|INFO|Setting lport 879edf90-0812-4adf-b171-d993c6cfb26c ovn-installed in OVS
Dec  6 02:46:11 np0005548731 ovn_controller[133927]: 2025-12-06T07:46:11Z|00730|binding|INFO|Setting lport 879edf90-0812-4adf-b171-d993c6cfb26c up in Southbound
Dec  6 02:46:11 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:46:11.188 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[24922ded-47ef-4afb-9159-70ce394cc2a6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:46:11 np0005548731 nova_compute[232433]: 2025-12-06 07:46:11.208 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:46:11 np0005548731 NetworkManager[49182]: <info>  [1765007171.2090] manager: (tap6d1a17d6-50): new Veth device (/org/freedesktop/NetworkManager/Devices/336)
Dec  6 02:46:11 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:46:11.208 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[5e1cc31b-4711-406f-a1b7-59550077cd43]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:46:11 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:46:11.241 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[22eab98d-107c-4666-b448-e79b3de23f1e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:46:11 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:46:11.244 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[f713c33b-3ded-4e87-9ffd-d9486127dd78]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:46:11 np0005548731 NetworkManager[49182]: <info>  [1765007171.2622] device (tap6d1a17d6-50): carrier: link connected
Dec  6 02:46:11 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:46:11.268 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[ce2ff27b-066b-4e7c-bbb4-878fe8724442]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:46:11 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:46:11.284 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[81e54ab0-5edc-4098-b0d3-aaab0dedaa3a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6d1a17d6-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:40:a2:f6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 221], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 743326, 'reachable_time': 29082, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 302033, 'error': None, 'target': 'ovnmeta-6d1a17d6-5e44-40b7-832a-81cb86c02e71', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:46:11 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:46:11.297 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[d3e83c31-8dbe-4a4b-a93c-007e42a1f24c]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe40:a2f6'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 743326, 'tstamp': 743326}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 302035, 'error': None, 'target': 'ovnmeta-6d1a17d6-5e44-40b7-832a-81cb86c02e71', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:46:11 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:46:11.313 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[2b382cad-eee7-4a9a-bc82-0698275f8f4b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6d1a17d6-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:40:a2:f6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 221], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 743326, 'reachable_time': 29082, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 302036, 'error': None, 'target': 'ovnmeta-6d1a17d6-5e44-40b7-832a-81cb86c02e71', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:46:11 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:46:11.342 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[9c18a129-5ac0-4bfb-998f-4d92804b8bfb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:46:11 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:46:11.399 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[a7d5641c-078d-4cbb-8687-e06eb0d3a6c4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:46:11 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:46:11.400 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6d1a17d6-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:46:11 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:46:11.400 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  6 02:46:11 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:46:11.401 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6d1a17d6-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:46:11 np0005548731 NetworkManager[49182]: <info>  [1765007171.4032] manager: (tap6d1a17d6-50): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/337)
Dec  6 02:46:11 np0005548731 kernel: tap6d1a17d6-50: entered promiscuous mode
Dec  6 02:46:11 np0005548731 nova_compute[232433]: 2025-12-06 07:46:11.402 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:46:11 np0005548731 nova_compute[232433]: 2025-12-06 07:46:11.405 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:46:11 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:46:11.406 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap6d1a17d6-50, col_values=(('external_ids', {'iface-id': '6b94462b-5171-4a4e-8d60-ac645842c400'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:46:11 np0005548731 ovn_controller[133927]: 2025-12-06T07:46:11Z|00731|binding|INFO|Releasing lport 6b94462b-5171-4a4e-8d60-ac645842c400 from this chassis (sb_readonly=0)
Dec  6 02:46:11 np0005548731 nova_compute[232433]: 2025-12-06 07:46:11.407 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:46:11 np0005548731 nova_compute[232433]: 2025-12-06 07:46:11.421 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:46:11 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:46:11.422 143965 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/6d1a17d6-5e44-40b7-832a-81cb86c02e71.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/6d1a17d6-5e44-40b7-832a-81cb86c02e71.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Dec  6 02:46:11 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:46:11.423 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[130cf208-7f24-498f-aeb7-30138d81daf1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:46:11 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:46:11.423 143965 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec  6 02:46:11 np0005548731 ovn_metadata_agent[143960]: global
Dec  6 02:46:11 np0005548731 ovn_metadata_agent[143960]:    log         /dev/log local0 debug
Dec  6 02:46:11 np0005548731 ovn_metadata_agent[143960]:    log-tag     haproxy-metadata-proxy-6d1a17d6-5e44-40b7-832a-81cb86c02e71
Dec  6 02:46:11 np0005548731 ovn_metadata_agent[143960]:    user        root
Dec  6 02:46:11 np0005548731 ovn_metadata_agent[143960]:    group       root
Dec  6 02:46:11 np0005548731 ovn_metadata_agent[143960]:    maxconn     1024
Dec  6 02:46:11 np0005548731 ovn_metadata_agent[143960]:    pidfile     /var/lib/neutron/external/pids/6d1a17d6-5e44-40b7-832a-81cb86c02e71.pid.haproxy
Dec  6 02:46:11 np0005548731 ovn_metadata_agent[143960]:    daemon
Dec  6 02:46:11 np0005548731 ovn_metadata_agent[143960]: 
Dec  6 02:46:11 np0005548731 ovn_metadata_agent[143960]: defaults
Dec  6 02:46:11 np0005548731 ovn_metadata_agent[143960]:    log global
Dec  6 02:46:11 np0005548731 ovn_metadata_agent[143960]:    mode http
Dec  6 02:46:11 np0005548731 ovn_metadata_agent[143960]:    option httplog
Dec  6 02:46:11 np0005548731 ovn_metadata_agent[143960]:    option dontlognull
Dec  6 02:46:11 np0005548731 ovn_metadata_agent[143960]:    option http-server-close
Dec  6 02:46:11 np0005548731 ovn_metadata_agent[143960]:    option forwardfor
Dec  6 02:46:11 np0005548731 ovn_metadata_agent[143960]:    retries                 3
Dec  6 02:46:11 np0005548731 ovn_metadata_agent[143960]:    timeout http-request    30s
Dec  6 02:46:11 np0005548731 ovn_metadata_agent[143960]:    timeout connect         30s
Dec  6 02:46:11 np0005548731 ovn_metadata_agent[143960]:    timeout client          32s
Dec  6 02:46:11 np0005548731 ovn_metadata_agent[143960]:    timeout server          32s
Dec  6 02:46:11 np0005548731 ovn_metadata_agent[143960]:    timeout http-keep-alive 30s
Dec  6 02:46:11 np0005548731 ovn_metadata_agent[143960]: 
Dec  6 02:46:11 np0005548731 ovn_metadata_agent[143960]: 
Dec  6 02:46:11 np0005548731 ovn_metadata_agent[143960]: listen listener
Dec  6 02:46:11 np0005548731 ovn_metadata_agent[143960]:    bind 169.254.169.254:80
Dec  6 02:46:11 np0005548731 ovn_metadata_agent[143960]:    server metadata /var/lib/neutron/metadata_proxy
Dec  6 02:46:11 np0005548731 ovn_metadata_agent[143960]:    http-request add-header X-OVN-Network-ID 6d1a17d6-5e44-40b7-832a-81cb86c02e71
Dec  6 02:46:11 np0005548731 ovn_metadata_agent[143960]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Dec  6 02:46:11 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:46:11.424 143965 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-6d1a17d6-5e44-40b7-832a-81cb86c02e71', 'env', 'PROCESS_TAG=haproxy-6d1a17d6-5e44-40b7-832a-81cb86c02e71', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/6d1a17d6-5e44-40b7-832a-81cb86c02e71.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Dec  6 02:46:11 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:46:11 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 02:46:11 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:46:11.472 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 02:46:11 np0005548731 podman[302086]: 2025-12-06 07:46:11.77400064 +0000 UTC m=+0.045087080 container create 665f750fc9f370a1e9af431990bcafa0aae94185f18ace05e7ee8c9c011e8dd2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6d1a17d6-5e44-40b7-832a-81cb86c02e71, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Dec  6 02:46:11 np0005548731 systemd[1]: Started libpod-conmon-665f750fc9f370a1e9af431990bcafa0aae94185f18ace05e7ee8c9c011e8dd2.scope.
Dec  6 02:46:11 np0005548731 systemd[1]: Started libcrun container.
Dec  6 02:46:11 np0005548731 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/102755732f329a9f7f23fea7546836b06af77b58f35091410a2259b4ac9c2348/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec  6 02:46:11 np0005548731 podman[302086]: 2025-12-06 07:46:11.835321486 +0000 UTC m=+0.106407946 container init 665f750fc9f370a1e9af431990bcafa0aae94185f18ace05e7ee8c9c011e8dd2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6d1a17d6-5e44-40b7-832a-81cb86c02e71, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec  6 02:46:11 np0005548731 podman[302086]: 2025-12-06 07:46:11.841459316 +0000 UTC m=+0.112545756 container start 665f750fc9f370a1e9af431990bcafa0aae94185f18ace05e7ee8c9c011e8dd2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6d1a17d6-5e44-40b7-832a-81cb86c02e71, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2)
Dec  6 02:46:11 np0005548731 podman[302086]: 2025-12-06 07:46:11.749982505 +0000 UTC m=+0.021068965 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec  6 02:46:11 np0005548731 neutron-haproxy-ovnmeta-6d1a17d6-5e44-40b7-832a-81cb86c02e71[302120]: [NOTICE]   (302128) : New worker (302131) forked
Dec  6 02:46:11 np0005548731 neutron-haproxy-ovnmeta-6d1a17d6-5e44-40b7-832a-81cb86c02e71[302120]: [NOTICE]   (302128) : Loading success.
Dec  6 02:46:11 np0005548731 nova_compute[232433]: 2025-12-06 07:46:11.892 232437 DEBUG nova.virt.driver [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Emitting event <LifecycleEvent: 1765007171.8915005, 2b4b3181-da12-4705-9215-7ee5b869102b => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 02:46:11 np0005548731 nova_compute[232433]: 2025-12-06 07:46:11.892 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 2b4b3181-da12-4705-9215-7ee5b869102b] VM Started (Lifecycle Event)#033[00m
Dec  6 02:46:11 np0005548731 nova_compute[232433]: 2025-12-06 07:46:11.945 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 2b4b3181-da12-4705-9215-7ee5b869102b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:46:11 np0005548731 nova_compute[232433]: 2025-12-06 07:46:11.949 232437 DEBUG nova.virt.driver [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Emitting event <LifecycleEvent: 1765007171.894184, 2b4b3181-da12-4705-9215-7ee5b869102b => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 02:46:11 np0005548731 nova_compute[232433]: 2025-12-06 07:46:11.950 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 2b4b3181-da12-4705-9215-7ee5b869102b] VM Paused (Lifecycle Event)#033[00m
Dec  6 02:46:12 np0005548731 nova_compute[232433]: 2025-12-06 07:46:12.002 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 2b4b3181-da12-4705-9215-7ee5b869102b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:46:12 np0005548731 nova_compute[232433]: 2025-12-06 07:46:12.006 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 2b4b3181-da12-4705-9215-7ee5b869102b] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  6 02:46:12 np0005548731 nova_compute[232433]: 2025-12-06 07:46:12.040 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 2b4b3181-da12-4705-9215-7ee5b869102b] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  6 02:46:12 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:46:12 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:46:12 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:46:12.545 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:46:12 np0005548731 nova_compute[232433]: 2025-12-06 07:46:12.549 232437 DEBUG nova.compute.manager [req-0ae8cd48-f14b-445f-ab7d-ce5ca15a27b6 req-df2dcbea-43e5-49a5-9572-b2eca298c29b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 2b4b3181-da12-4705-9215-7ee5b869102b] Received event network-vif-plugged-879edf90-0812-4adf-b171-d993c6cfb26c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:46:12 np0005548731 nova_compute[232433]: 2025-12-06 07:46:12.550 232437 DEBUG oslo_concurrency.lockutils [req-0ae8cd48-f14b-445f-ab7d-ce5ca15a27b6 req-df2dcbea-43e5-49a5-9572-b2eca298c29b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "2b4b3181-da12-4705-9215-7ee5b869102b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:46:12 np0005548731 nova_compute[232433]: 2025-12-06 07:46:12.550 232437 DEBUG oslo_concurrency.lockutils [req-0ae8cd48-f14b-445f-ab7d-ce5ca15a27b6 req-df2dcbea-43e5-49a5-9572-b2eca298c29b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "2b4b3181-da12-4705-9215-7ee5b869102b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:46:12 np0005548731 nova_compute[232433]: 2025-12-06 07:46:12.550 232437 DEBUG oslo_concurrency.lockutils [req-0ae8cd48-f14b-445f-ab7d-ce5ca15a27b6 req-df2dcbea-43e5-49a5-9572-b2eca298c29b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "2b4b3181-da12-4705-9215-7ee5b869102b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:46:12 np0005548731 nova_compute[232433]: 2025-12-06 07:46:12.550 232437 DEBUG nova.compute.manager [req-0ae8cd48-f14b-445f-ab7d-ce5ca15a27b6 req-df2dcbea-43e5-49a5-9572-b2eca298c29b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 2b4b3181-da12-4705-9215-7ee5b869102b] Processing event network-vif-plugged-879edf90-0812-4adf-b171-d993c6cfb26c _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Dec  6 02:46:12 np0005548731 nova_compute[232433]: 2025-12-06 07:46:12.551 232437 DEBUG nova.compute.manager [None req-1ad98dc1-af3d-4d45-baef-d3a2c2cb6beb e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] [instance: 2b4b3181-da12-4705-9215-7ee5b869102b] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec  6 02:46:12 np0005548731 nova_compute[232433]: 2025-12-06 07:46:12.554 232437 DEBUG nova.virt.driver [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Emitting event <LifecycleEvent: 1765007172.554092, 2b4b3181-da12-4705-9215-7ee5b869102b => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 02:46:12 np0005548731 nova_compute[232433]: 2025-12-06 07:46:12.554 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 2b4b3181-da12-4705-9215-7ee5b869102b] VM Resumed (Lifecycle Event)#033[00m
Dec  6 02:46:12 np0005548731 nova_compute[232433]: 2025-12-06 07:46:12.557 232437 DEBUG nova.virt.libvirt.driver [None req-1ad98dc1-af3d-4d45-baef-d3a2c2cb6beb e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] [instance: 2b4b3181-da12-4705-9215-7ee5b869102b] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Dec  6 02:46:12 np0005548731 nova_compute[232433]: 2025-12-06 07:46:12.559 232437 INFO nova.virt.libvirt.driver [-] [instance: 2b4b3181-da12-4705-9215-7ee5b869102b] Instance spawned successfully.#033[00m
Dec  6 02:46:12 np0005548731 nova_compute[232433]: 2025-12-06 07:46:12.560 232437 DEBUG nova.virt.libvirt.driver [None req-1ad98dc1-af3d-4d45-baef-d3a2c2cb6beb e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] [instance: 2b4b3181-da12-4705-9215-7ee5b869102b] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Dec  6 02:46:12 np0005548731 nova_compute[232433]: 2025-12-06 07:46:12.579 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 2b4b3181-da12-4705-9215-7ee5b869102b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:46:12 np0005548731 nova_compute[232433]: 2025-12-06 07:46:12.585 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 2b4b3181-da12-4705-9215-7ee5b869102b] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  6 02:46:12 np0005548731 nova_compute[232433]: 2025-12-06 07:46:12.588 232437 DEBUG nova.virt.libvirt.driver [None req-1ad98dc1-af3d-4d45-baef-d3a2c2cb6beb e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] [instance: 2b4b3181-da12-4705-9215-7ee5b869102b] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 02:46:12 np0005548731 nova_compute[232433]: 2025-12-06 07:46:12.589 232437 DEBUG nova.virt.libvirt.driver [None req-1ad98dc1-af3d-4d45-baef-d3a2c2cb6beb e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] [instance: 2b4b3181-da12-4705-9215-7ee5b869102b] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 02:46:12 np0005548731 nova_compute[232433]: 2025-12-06 07:46:12.589 232437 DEBUG nova.virt.libvirt.driver [None req-1ad98dc1-af3d-4d45-baef-d3a2c2cb6beb e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] [instance: 2b4b3181-da12-4705-9215-7ee5b869102b] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 02:46:12 np0005548731 nova_compute[232433]: 2025-12-06 07:46:12.590 232437 DEBUG nova.virt.libvirt.driver [None req-1ad98dc1-af3d-4d45-baef-d3a2c2cb6beb e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] [instance: 2b4b3181-da12-4705-9215-7ee5b869102b] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 02:46:12 np0005548731 nova_compute[232433]: 2025-12-06 07:46:12.590 232437 DEBUG nova.virt.libvirt.driver [None req-1ad98dc1-af3d-4d45-baef-d3a2c2cb6beb e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] [instance: 2b4b3181-da12-4705-9215-7ee5b869102b] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 02:46:12 np0005548731 nova_compute[232433]: 2025-12-06 07:46:12.590 232437 DEBUG nova.virt.libvirt.driver [None req-1ad98dc1-af3d-4d45-baef-d3a2c2cb6beb e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] [instance: 2b4b3181-da12-4705-9215-7ee5b869102b] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 02:46:12 np0005548731 nova_compute[232433]: 2025-12-06 07:46:12.629 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 2b4b3181-da12-4705-9215-7ee5b869102b] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  6 02:46:12 np0005548731 nova_compute[232433]: 2025-12-06 07:46:12.670 232437 INFO nova.compute.manager [None req-1ad98dc1-af3d-4d45-baef-d3a2c2cb6beb e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] [instance: 2b4b3181-da12-4705-9215-7ee5b869102b] Took 9.05 seconds to spawn the instance on the hypervisor.#033[00m
Dec  6 02:46:12 np0005548731 nova_compute[232433]: 2025-12-06 07:46:12.670 232437 DEBUG nova.compute.manager [None req-1ad98dc1-af3d-4d45-baef-d3a2c2cb6beb e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] [instance: 2b4b3181-da12-4705-9215-7ee5b869102b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:46:12 np0005548731 nova_compute[232433]: 2025-12-06 07:46:12.761 232437 INFO nova.compute.manager [None req-1ad98dc1-af3d-4d45-baef-d3a2c2cb6beb e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] [instance: 2b4b3181-da12-4705-9215-7ee5b869102b] Took 10.00 seconds to build instance.#033[00m
Dec  6 02:46:12 np0005548731 nova_compute[232433]: 2025-12-06 07:46:12.781 232437 DEBUG oslo_concurrency.lockutils [None req-1ad98dc1-af3d-4d45-baef-d3a2c2cb6beb e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Lock "2b4b3181-da12-4705-9215-7ee5b869102b" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.084s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:46:13 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:46:13 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:46:13 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:46:13.475 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:46:13 np0005548731 nova_compute[232433]: 2025-12-06 07:46:13.615 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:46:13 np0005548731 podman[302142]: 2025-12-06 07:46:13.922341019 +0000 UTC m=+0.080515564 container health_status a118c3becf56cfbcae208065771ebf10a65162bb7b96389971293e534823fb78 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_controller)
Dec  6 02:46:13 np0005548731 podman[302141]: 2025-12-06 07:46:13.922439642 +0000 UTC m=+0.082078313 container health_status 26c2ce9bdb83c6db5ccad7f5b2a7cb4e9354ab0235ad2f942ea42c8feda2142b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Dec  6 02:46:13 np0005548731 podman[302143]: 2025-12-06 07:46:13.928330195 +0000 UTC m=+0.084248595 container health_status ae4fd08cdec31d6ae2a6b91ffff7deb6ca5a8375cb52ee4b685cc6f25796824e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd)
Dec  6 02:46:14 np0005548731 nova_compute[232433]: 2025-12-06 07:46:14.006 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:46:14 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:46:14 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:46:14 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:46:14.547 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:46:14 np0005548731 nova_compute[232433]: 2025-12-06 07:46:14.719 232437 DEBUG nova.compute.manager [req-2f874002-ffc4-471c-aabe-db970eefcb0b req-5a981b85-5f6c-4cde-b3b6-417d1fa76920 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 2b4b3181-da12-4705-9215-7ee5b869102b] Received event network-vif-plugged-879edf90-0812-4adf-b171-d993c6cfb26c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:46:14 np0005548731 nova_compute[232433]: 2025-12-06 07:46:14.719 232437 DEBUG oslo_concurrency.lockutils [req-2f874002-ffc4-471c-aabe-db970eefcb0b req-5a981b85-5f6c-4cde-b3b6-417d1fa76920 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "2b4b3181-da12-4705-9215-7ee5b869102b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:46:14 np0005548731 nova_compute[232433]: 2025-12-06 07:46:14.720 232437 DEBUG oslo_concurrency.lockutils [req-2f874002-ffc4-471c-aabe-db970eefcb0b req-5a981b85-5f6c-4cde-b3b6-417d1fa76920 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "2b4b3181-da12-4705-9215-7ee5b869102b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:46:14 np0005548731 nova_compute[232433]: 2025-12-06 07:46:14.720 232437 DEBUG oslo_concurrency.lockutils [req-2f874002-ffc4-471c-aabe-db970eefcb0b req-5a981b85-5f6c-4cde-b3b6-417d1fa76920 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "2b4b3181-da12-4705-9215-7ee5b869102b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:46:14 np0005548731 nova_compute[232433]: 2025-12-06 07:46:14.720 232437 DEBUG nova.compute.manager [req-2f874002-ffc4-471c-aabe-db970eefcb0b req-5a981b85-5f6c-4cde-b3b6-417d1fa76920 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 2b4b3181-da12-4705-9215-7ee5b869102b] No waiting events found dispatching network-vif-plugged-879edf90-0812-4adf-b171-d993c6cfb26c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 02:46:14 np0005548731 nova_compute[232433]: 2025-12-06 07:46:14.721 232437 WARNING nova.compute.manager [req-2f874002-ffc4-471c-aabe-db970eefcb0b req-5a981b85-5f6c-4cde-b3b6-417d1fa76920 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 2b4b3181-da12-4705-9215-7ee5b869102b] Received unexpected event network-vif-plugged-879edf90-0812-4adf-b171-d993c6cfb26c for instance with vm_state active and task_state None.#033[00m
Dec  6 02:46:15 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e340 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:46:15 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:46:15 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:46:15 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:46:15.478 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:46:16 np0005548731 nova_compute[232433]: 2025-12-06 07:46:16.158 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:46:16 np0005548731 NetworkManager[49182]: <info>  [1765007176.1598] manager: (patch-br-int-to-provnet-9e78c1a1-68f4-477a-abaa-13a98bde06e5): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/338)
Dec  6 02:46:16 np0005548731 NetworkManager[49182]: <info>  [1765007176.1608] manager: (patch-provnet-9e78c1a1-68f4-477a-abaa-13a98bde06e5-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/339)
Dec  6 02:46:16 np0005548731 nova_compute[232433]: 2025-12-06 07:46:16.255 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:46:16 np0005548731 ovn_controller[133927]: 2025-12-06T07:46:16Z|00732|binding|INFO|Releasing lport f1ca157c-f88b-4351-b03a-b04a75537062 from this chassis (sb_readonly=0)
Dec  6 02:46:16 np0005548731 ovn_controller[133927]: 2025-12-06T07:46:16Z|00733|binding|INFO|Releasing lport 6b94462b-5171-4a4e-8d60-ac645842c400 from this chassis (sb_readonly=0)
Dec  6 02:46:16 np0005548731 nova_compute[232433]: 2025-12-06 07:46:16.264 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:46:16 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:46:16 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 02:46:16 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:46:16.549 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 02:46:17 np0005548731 nova_compute[232433]: 2025-12-06 07:46:17.398 232437 DEBUG nova.compute.manager [req-69d896c2-8e0c-4738-98f8-59aaacc2068e req-36c84675-8dba-4d35-b357-aba9ad1f1355 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 2b4b3181-da12-4705-9215-7ee5b869102b] Received event network-changed-879edf90-0812-4adf-b171-d993c6cfb26c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:46:17 np0005548731 nova_compute[232433]: 2025-12-06 07:46:17.399 232437 DEBUG nova.compute.manager [req-69d896c2-8e0c-4738-98f8-59aaacc2068e req-36c84675-8dba-4d35-b357-aba9ad1f1355 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 2b4b3181-da12-4705-9215-7ee5b869102b] Refreshing instance network info cache due to event network-changed-879edf90-0812-4adf-b171-d993c6cfb26c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  6 02:46:17 np0005548731 nova_compute[232433]: 2025-12-06 07:46:17.399 232437 DEBUG oslo_concurrency.lockutils [req-69d896c2-8e0c-4738-98f8-59aaacc2068e req-36c84675-8dba-4d35-b357-aba9ad1f1355 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-2b4b3181-da12-4705-9215-7ee5b869102b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 02:46:17 np0005548731 nova_compute[232433]: 2025-12-06 07:46:17.400 232437 DEBUG oslo_concurrency.lockutils [req-69d896c2-8e0c-4738-98f8-59aaacc2068e req-36c84675-8dba-4d35-b357-aba9ad1f1355 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-2b4b3181-da12-4705-9215-7ee5b869102b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 02:46:17 np0005548731 nova_compute[232433]: 2025-12-06 07:46:17.400 232437 DEBUG nova.network.neutron [req-69d896c2-8e0c-4738-98f8-59aaacc2068e req-36c84675-8dba-4d35-b357-aba9ad1f1355 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 2b4b3181-da12-4705-9215-7ee5b869102b] Refreshing network info cache for port 879edf90-0812-4adf-b171-d993c6cfb26c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  6 02:46:17 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:46:17 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 02:46:17 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:46:17.481 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 02:46:18 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:46:18 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:46:18 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:46:18.551 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:46:18 np0005548731 nova_compute[232433]: 2025-12-06 07:46:18.618 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:46:19 np0005548731 nova_compute[232433]: 2025-12-06 07:46:19.008 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:46:19 np0005548731 nova_compute[232433]: 2025-12-06 07:46:19.026 232437 DEBUG nova.network.neutron [req-69d896c2-8e0c-4738-98f8-59aaacc2068e req-36c84675-8dba-4d35-b357-aba9ad1f1355 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 2b4b3181-da12-4705-9215-7ee5b869102b] Updated VIF entry in instance network info cache for port 879edf90-0812-4adf-b171-d993c6cfb26c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  6 02:46:19 np0005548731 nova_compute[232433]: 2025-12-06 07:46:19.027 232437 DEBUG nova.network.neutron [req-69d896c2-8e0c-4738-98f8-59aaacc2068e req-36c84675-8dba-4d35-b357-aba9ad1f1355 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 2b4b3181-da12-4705-9215-7ee5b869102b] Updating instance_info_cache with network_info: [{"id": "879edf90-0812-4adf-b171-d993c6cfb26c", "address": "fa:16:3e:9c:dc:f4", "network": {"id": "6d1a17d6-5e44-40b7-832a-81cb86c02e71", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-1698704235-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.221", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f44ecb8bdc7e4692a299e29603301124", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap879edf90-08", "ovs_interfaceid": "879edf90-0812-4adf-b171-d993c6cfb26c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 02:46:19 np0005548731 nova_compute[232433]: 2025-12-06 07:46:19.053 232437 DEBUG oslo_concurrency.lockutils [req-69d896c2-8e0c-4738-98f8-59aaacc2068e req-36c84675-8dba-4d35-b357-aba9ad1f1355 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-2b4b3181-da12-4705-9215-7ee5b869102b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 02:46:19 np0005548731 nova_compute[232433]: 2025-12-06 07:46:19.100 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:46:19 np0005548731 nova_compute[232433]: 2025-12-06 07:46:19.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:46:19 np0005548731 nova_compute[232433]: 2025-12-06 07:46:19.104 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  6 02:46:19 np0005548731 nova_compute[232433]: 2025-12-06 07:46:19.104 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  6 02:46:19 np0005548731 nova_compute[232433]: 2025-12-06 07:46:19.303 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Acquiring lock "refresh_cache-f67d89a8-836a-4f47-af8d-37cf99529275" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 02:46:19 np0005548731 nova_compute[232433]: 2025-12-06 07:46:19.303 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Acquired lock "refresh_cache-f67d89a8-836a-4f47-af8d-37cf99529275" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 02:46:19 np0005548731 nova_compute[232433]: 2025-12-06 07:46:19.304 232437 DEBUG nova.network.neutron [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] [instance: f67d89a8-836a-4f47-af8d-37cf99529275] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Dec  6 02:46:19 np0005548731 nova_compute[232433]: 2025-12-06 07:46:19.304 232437 DEBUG nova.objects.instance [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lazy-loading 'info_cache' on Instance uuid f67d89a8-836a-4f47-af8d-37cf99529275 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:46:19 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:46:19 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:46:19 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:46:19.484 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:46:19 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e341 e341: 3 total, 3 up, 3 in
Dec  6 02:46:19 np0005548731 nova_compute[232433]: 2025-12-06 07:46:19.916 232437 DEBUG oslo_concurrency.lockutils [None req-102a2fca-06f8-4768-aa56-8a215c0631c6 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] Acquiring lock "f67d89a8-836a-4f47-af8d-37cf99529275" by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:46:19 np0005548731 nova_compute[232433]: 2025-12-06 07:46:19.917 232437 DEBUG oslo_concurrency.lockutils [None req-102a2fca-06f8-4768-aa56-8a215c0631c6 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] Lock "f67d89a8-836a-4f47-af8d-37cf99529275" acquired by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:46:19 np0005548731 nova_compute[232433]: 2025-12-06 07:46:19.918 232437 INFO nova.compute.manager [None req-102a2fca-06f8-4768-aa56-8a215c0631c6 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] [instance: f67d89a8-836a-4f47-af8d-37cf99529275] Shelving#033[00m
Dec  6 02:46:19 np0005548731 nova_compute[232433]: 2025-12-06 07:46:19.943 232437 DEBUG nova.virt.libvirt.driver [None req-102a2fca-06f8-4768-aa56-8a215c0631c6 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] [instance: f67d89a8-836a-4f47-af8d-37cf99529275] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Dec  6 02:46:20 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e341 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:46:20 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:46:20 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:46:20 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:46:20.553 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:46:21 np0005548731 nova_compute[232433]: 2025-12-06 07:46:21.473 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:46:21 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:46:21 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:46:21 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:46:21.487 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:46:21 np0005548731 nova_compute[232433]: 2025-12-06 07:46:21.514 232437 DEBUG nova.network.neutron [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] [instance: f67d89a8-836a-4f47-af8d-37cf99529275] Updating instance_info_cache with network_info: [{"id": "d4087b80-19fa-44d3-8dd5-406c2b01b6f4", "address": "fa:16:3e:16:41:1e", "network": {"id": "67a02abd-6f15-4e26-ba0d-8a091ca98239", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-840496237-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4ec9294f6d4b4f44a72414374d646a4a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd4087b80-19", "ovs_interfaceid": "d4087b80-19fa-44d3-8dd5-406c2b01b6f4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 02:46:21 np0005548731 nova_compute[232433]: 2025-12-06 07:46:21.530 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Releasing lock "refresh_cache-f67d89a8-836a-4f47-af8d-37cf99529275" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 02:46:21 np0005548731 nova_compute[232433]: 2025-12-06 07:46:21.530 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] [instance: f67d89a8-836a-4f47-af8d-37cf99529275] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Dec  6 02:46:21 np0005548731 nova_compute[232433]: 2025-12-06 07:46:21.531 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:46:21 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e342 e342: 3 total, 3 up, 3 in
Dec  6 02:46:22 np0005548731 nova_compute[232433]: 2025-12-06 07:46:22.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:46:22 np0005548731 nova_compute[232433]: 2025-12-06 07:46:22.106 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  6 02:46:22 np0005548731 kernel: tapd4087b80-19 (unregistering): left promiscuous mode
Dec  6 02:46:22 np0005548731 NetworkManager[49182]: <info>  [1765007182.1953] device (tapd4087b80-19): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec  6 02:46:22 np0005548731 ovn_controller[133927]: 2025-12-06T07:46:22Z|00734|binding|INFO|Releasing lport d4087b80-19fa-44d3-8dd5-406c2b01b6f4 from this chassis (sb_readonly=0)
Dec  6 02:46:22 np0005548731 ovn_controller[133927]: 2025-12-06T07:46:22Z|00735|binding|INFO|Setting lport d4087b80-19fa-44d3-8dd5-406c2b01b6f4 down in Southbound
Dec  6 02:46:22 np0005548731 nova_compute[232433]: 2025-12-06 07:46:22.203 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:46:22 np0005548731 ovn_controller[133927]: 2025-12-06T07:46:22Z|00736|binding|INFO|Removing iface tapd4087b80-19 ovn-installed in OVS
Dec  6 02:46:22 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:46:22.208 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:16:41:1e 10.100.0.7'], port_security=['fa:16:3e:16:41:1e 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'f67d89a8-836a-4f47-af8d-37cf99529275', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-67a02abd-6f15-4e26-ba0d-8a091ca98239', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4ec9294f6d4b4f44a72414374d646a4a', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'ef8d01c6-8791-4eef-8d34-906ccbdd6237', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a13aa6e4-a519-406f-87b7-05ba3d74a296, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], logical_port=d4087b80-19fa-44d3-8dd5-406c2b01b6f4) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 02:46:22 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:46:22.209 143965 INFO neutron.agent.ovn.metadata.agent [-] Port d4087b80-19fa-44d3-8dd5-406c2b01b6f4 in datapath 67a02abd-6f15-4e26-ba0d-8a091ca98239 unbound from our chassis#033[00m
Dec  6 02:46:22 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:46:22.211 143965 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 67a02abd-6f15-4e26-ba0d-8a091ca98239, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec  6 02:46:22 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:46:22.213 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[5aaa31a8-1fd0-447a-a099-84c6fb54c963]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:46:22 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:46:22.213 143965 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-67a02abd-6f15-4e26-ba0d-8a091ca98239 namespace which is not needed anymore#033[00m
Dec  6 02:46:22 np0005548731 nova_compute[232433]: 2025-12-06 07:46:22.219 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:46:22 np0005548731 systemd[1]: machine-qemu\x2d71\x2dinstance\x2d00000099.scope: Deactivated successfully.
Dec  6 02:46:22 np0005548731 systemd[1]: machine-qemu\x2d71\x2dinstance\x2d00000099.scope: Consumed 19.520s CPU time.
Dec  6 02:46:22 np0005548731 systemd-machined[195355]: Machine qemu-71-instance-00000099 terminated.
Dec  6 02:46:22 np0005548731 neutron-haproxy-ovnmeta-67a02abd-6f15-4e26-ba0d-8a091ca98239[299449]: [NOTICE]   (299471) : haproxy version is 2.8.14-c23fe91
Dec  6 02:46:22 np0005548731 neutron-haproxy-ovnmeta-67a02abd-6f15-4e26-ba0d-8a091ca98239[299449]: [NOTICE]   (299471) : path to executable is /usr/sbin/haproxy
Dec  6 02:46:22 np0005548731 neutron-haproxy-ovnmeta-67a02abd-6f15-4e26-ba0d-8a091ca98239[299449]: [WARNING]  (299471) : Exiting Master process...
Dec  6 02:46:22 np0005548731 neutron-haproxy-ovnmeta-67a02abd-6f15-4e26-ba0d-8a091ca98239[299449]: [ALERT]    (299471) : Current worker (299476) exited with code 143 (Terminated)
Dec  6 02:46:22 np0005548731 neutron-haproxy-ovnmeta-67a02abd-6f15-4e26-ba0d-8a091ca98239[299449]: [WARNING]  (299471) : All workers exited. Exiting... (0)
Dec  6 02:46:22 np0005548731 systemd[1]: libpod-159207bd313a1a55e655d366f3556200a3a9e1698660b965a9fc239071fae56a.scope: Deactivated successfully.
Dec  6 02:46:22 np0005548731 conmon[299449]: conmon 159207bd313a1a55e655 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-159207bd313a1a55e655d366f3556200a3a9e1698660b965a9fc239071fae56a.scope/container/memory.events
Dec  6 02:46:22 np0005548731 podman[302286]: 2025-12-06 07:46:22.368374462 +0000 UTC m=+0.042667851 container died 159207bd313a1a55e655d366f3556200a3a9e1698660b965a9fc239071fae56a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-67a02abd-6f15-4e26-ba0d-8a091ca98239, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  6 02:46:22 np0005548731 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-159207bd313a1a55e655d366f3556200a3a9e1698660b965a9fc239071fae56a-userdata-shm.mount: Deactivated successfully.
Dec  6 02:46:22 np0005548731 systemd[1]: var-lib-containers-storage-overlay-d1ab0bc05fb774abac9b26cd4de108994e052c2b276d819756c70d80383af3ee-merged.mount: Deactivated successfully.
Dec  6 02:46:22 np0005548731 podman[302286]: 2025-12-06 07:46:22.417771636 +0000 UTC m=+0.092065025 container cleanup 159207bd313a1a55e655d366f3556200a3a9e1698660b965a9fc239071fae56a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-67a02abd-6f15-4e26-ba0d-8a091ca98239, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Dec  6 02:46:22 np0005548731 systemd[1]: libpod-conmon-159207bd313a1a55e655d366f3556200a3a9e1698660b965a9fc239071fae56a.scope: Deactivated successfully.
Dec  6 02:46:22 np0005548731 podman[302322]: 2025-12-06 07:46:22.497929941 +0000 UTC m=+0.042907027 container remove 159207bd313a1a55e655d366f3556200a3a9e1698660b965a9fc239071fae56a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-67a02abd-6f15-4e26-ba0d-8a091ca98239, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  6 02:46:22 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:46:22.504 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[1502717d-beb5-4740-8680-0a39b9a92dd3]: (4, ('Sat Dec  6 07:46:22 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-67a02abd-6f15-4e26-ba0d-8a091ca98239 (159207bd313a1a55e655d366f3556200a3a9e1698660b965a9fc239071fae56a)\n159207bd313a1a55e655d366f3556200a3a9e1698660b965a9fc239071fae56a\nSat Dec  6 07:46:22 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-67a02abd-6f15-4e26-ba0d-8a091ca98239 (159207bd313a1a55e655d366f3556200a3a9e1698660b965a9fc239071fae56a)\n159207bd313a1a55e655d366f3556200a3a9e1698660b965a9fc239071fae56a\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:46:22 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:46:22.506 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[6bd955d1-3464-421a-986b-b4f3a70c6bbf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:46:22 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:46:22.507 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap67a02abd-60, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:46:22 np0005548731 nova_compute[232433]: 2025-12-06 07:46:22.508 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:46:22 np0005548731 kernel: tap67a02abd-60: left promiscuous mode
Dec  6 02:46:22 np0005548731 nova_compute[232433]: 2025-12-06 07:46:22.525 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:46:22 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:46:22.528 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[32ed8ae5-f496-4b22-95c5-490b887835ed]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:46:22 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:46:22.543 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[17d19a2c-6960-4a5d-a1a3-5dfc6035b263]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:46:22 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:46:22.545 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[0159a511-fd44-4e18-9616-561b406673f1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:46:22 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:46:22 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:46:22 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:46:22.556 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:46:22 np0005548731 nova_compute[232433]: 2025-12-06 07:46:22.565 232437 DEBUG nova.compute.manager [req-c2550464-15f2-477f-97b3-e2d93c1df1f4 req-865ab59a-bd3f-4857-b2c7-c92db402dee2 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f67d89a8-836a-4f47-af8d-37cf99529275] Received event network-vif-unplugged-d4087b80-19fa-44d3-8dd5-406c2b01b6f4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:46:22 np0005548731 nova_compute[232433]: 2025-12-06 07:46:22.566 232437 DEBUG oslo_concurrency.lockutils [req-c2550464-15f2-477f-97b3-e2d93c1df1f4 req-865ab59a-bd3f-4857-b2c7-c92db402dee2 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "f67d89a8-836a-4f47-af8d-37cf99529275-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:46:22 np0005548731 nova_compute[232433]: 2025-12-06 07:46:22.566 232437 DEBUG oslo_concurrency.lockutils [req-c2550464-15f2-477f-97b3-e2d93c1df1f4 req-865ab59a-bd3f-4857-b2c7-c92db402dee2 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "f67d89a8-836a-4f47-af8d-37cf99529275-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:46:22 np0005548731 nova_compute[232433]: 2025-12-06 07:46:22.566 232437 DEBUG oslo_concurrency.lockutils [req-c2550464-15f2-477f-97b3-e2d93c1df1f4 req-865ab59a-bd3f-4857-b2c7-c92db402dee2 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "f67d89a8-836a-4f47-af8d-37cf99529275-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:46:22 np0005548731 nova_compute[232433]: 2025-12-06 07:46:22.566 232437 DEBUG nova.compute.manager [req-c2550464-15f2-477f-97b3-e2d93c1df1f4 req-865ab59a-bd3f-4857-b2c7-c92db402dee2 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f67d89a8-836a-4f47-af8d-37cf99529275] No waiting events found dispatching network-vif-unplugged-d4087b80-19fa-44d3-8dd5-406c2b01b6f4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 02:46:22 np0005548731 nova_compute[232433]: 2025-12-06 07:46:22.567 232437 WARNING nova.compute.manager [req-c2550464-15f2-477f-97b3-e2d93c1df1f4 req-865ab59a-bd3f-4857-b2c7-c92db402dee2 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f67d89a8-836a-4f47-af8d-37cf99529275] Received unexpected event network-vif-unplugged-d4087b80-19fa-44d3-8dd5-406c2b01b6f4 for instance with vm_state active and task_state shelving.#033[00m
Dec  6 02:46:22 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:46:22.570 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[88915b08-0538-49a2-b4a0-a35d2418d4f5]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 729857, 'reachable_time': 19846, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 302343, 'error': None, 'target': 'ovnmeta-67a02abd-6f15-4e26-ba0d-8a091ca98239', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:46:22 np0005548731 systemd[1]: run-netns-ovnmeta\x2d67a02abd\x2d6f15\x2d4e26\x2dba0d\x2d8a091ca98239.mount: Deactivated successfully.
Dec  6 02:46:22 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:46:22.576 144079 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-67a02abd-6f15-4e26-ba0d-8a091ca98239 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Dec  6 02:46:22 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:46:22.576 144079 DEBUG oslo.privsep.daemon [-] privsep: reply[60119fbf-be6f-46e1-9b0d-87858dd320e7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:46:22 np0005548731 nova_compute[232433]: 2025-12-06 07:46:22.962 232437 INFO nova.virt.libvirt.driver [None req-102a2fca-06f8-4768-aa56-8a215c0631c6 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] [instance: f67d89a8-836a-4f47-af8d-37cf99529275] Instance shutdown successfully after 3 seconds.#033[00m
Dec  6 02:46:22 np0005548731 nova_compute[232433]: 2025-12-06 07:46:22.966 232437 INFO nova.virt.libvirt.driver [-] [instance: f67d89a8-836a-4f47-af8d-37cf99529275] Instance destroyed successfully.#033[00m
Dec  6 02:46:22 np0005548731 nova_compute[232433]: 2025-12-06 07:46:22.966 232437 DEBUG nova.objects.instance [None req-102a2fca-06f8-4768-aa56-8a215c0631c6 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] Lazy-loading 'numa_topology' on Instance uuid f67d89a8-836a-4f47-af8d-37cf99529275 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:46:23 np0005548731 nova_compute[232433]: 2025-12-06 07:46:23.283 232437 INFO nova.virt.libvirt.driver [None req-102a2fca-06f8-4768-aa56-8a215c0631c6 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] [instance: f67d89a8-836a-4f47-af8d-37cf99529275] Beginning cold snapshot process#033[00m
Dec  6 02:46:23 np0005548731 nova_compute[232433]: 2025-12-06 07:46:23.415 232437 DEBUG nova.virt.libvirt.imagebackend [None req-102a2fca-06f8-4768-aa56-8a215c0631c6 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] No parent info for 6efab05d-c7cf-4770-a5c3-c806a2739063; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163#033[00m
Dec  6 02:46:23 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:46:23 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:46:23 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:46:23.490 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:46:23 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  6 02:46:23 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3024054679' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  6 02:46:23 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  6 02:46:23 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3024054679' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  6 02:46:23 np0005548731 nova_compute[232433]: 2025-12-06 07:46:23.621 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:46:23 np0005548731 nova_compute[232433]: 2025-12-06 07:46:23.714 232437 DEBUG nova.storage.rbd_utils [None req-102a2fca-06f8-4768-aa56-8a215c0631c6 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] creating snapshot(8f2b30ad297e4bc9b19d04e9423ebe18) on rbd image(f67d89a8-836a-4f47-af8d-37cf99529275_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Dec  6 02:46:24 np0005548731 nova_compute[232433]: 2025-12-06 07:46:24.542 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:46:24 np0005548731 nova_compute[232433]: 2025-12-06 07:46:24.544 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  6 02:46:24 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:46:24 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:46:24 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:46:24.558 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:46:24 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e343 e343: 3 total, 3 up, 3 in
Dec  6 02:46:24 np0005548731 nova_compute[232433]: 2025-12-06 07:46:24.684 232437 DEBUG nova.compute.manager [req-e269217c-5133-4045-947c-f61ade56adeb req-d72ef01b-36bb-42ed-915d-824af7021b56 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f67d89a8-836a-4f47-af8d-37cf99529275] Received event network-vif-plugged-d4087b80-19fa-44d3-8dd5-406c2b01b6f4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:46:24 np0005548731 nova_compute[232433]: 2025-12-06 07:46:24.684 232437 DEBUG oslo_concurrency.lockutils [req-e269217c-5133-4045-947c-f61ade56adeb req-d72ef01b-36bb-42ed-915d-824af7021b56 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "f67d89a8-836a-4f47-af8d-37cf99529275-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:46:24 np0005548731 nova_compute[232433]: 2025-12-06 07:46:24.685 232437 DEBUG oslo_concurrency.lockutils [req-e269217c-5133-4045-947c-f61ade56adeb req-d72ef01b-36bb-42ed-915d-824af7021b56 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "f67d89a8-836a-4f47-af8d-37cf99529275-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:46:24 np0005548731 nova_compute[232433]: 2025-12-06 07:46:24.685 232437 DEBUG oslo_concurrency.lockutils [req-e269217c-5133-4045-947c-f61ade56adeb req-d72ef01b-36bb-42ed-915d-824af7021b56 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "f67d89a8-836a-4f47-af8d-37cf99529275-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:46:24 np0005548731 nova_compute[232433]: 2025-12-06 07:46:24.685 232437 DEBUG nova.compute.manager [req-e269217c-5133-4045-947c-f61ade56adeb req-d72ef01b-36bb-42ed-915d-824af7021b56 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f67d89a8-836a-4f47-af8d-37cf99529275] No waiting events found dispatching network-vif-plugged-d4087b80-19fa-44d3-8dd5-406c2b01b6f4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 02:46:24 np0005548731 nova_compute[232433]: 2025-12-06 07:46:24.685 232437 WARNING nova.compute.manager [req-e269217c-5133-4045-947c-f61ade56adeb req-d72ef01b-36bb-42ed-915d-824af7021b56 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f67d89a8-836a-4f47-af8d-37cf99529275] Received unexpected event network-vif-plugged-d4087b80-19fa-44d3-8dd5-406c2b01b6f4 for instance with vm_state active and task_state shelving_image_uploading.#033[00m
Dec  6 02:46:24 np0005548731 nova_compute[232433]: 2025-12-06 07:46:24.712 232437 DEBUG nova.storage.rbd_utils [None req-102a2fca-06f8-4768-aa56-8a215c0631c6 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] cloning vms/f67d89a8-836a-4f47-af8d-37cf99529275_disk@8f2b30ad297e4bc9b19d04e9423ebe18 to images/1a501272-45c2-411f-9825-0c663b862e6b clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Dec  6 02:46:24 np0005548731 nova_compute[232433]: 2025-12-06 07:46:24.830 232437 DEBUG nova.storage.rbd_utils [None req-102a2fca-06f8-4768-aa56-8a215c0631c6 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] flattening images/1a501272-45c2-411f-9825-0c663b862e6b flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m
Dec  6 02:46:25 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e343 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:46:25 np0005548731 nova_compute[232433]: 2025-12-06 07:46:25.168 232437 DEBUG nova.storage.rbd_utils [None req-102a2fca-06f8-4768-aa56-8a215c0631c6 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] removing snapshot(8f2b30ad297e4bc9b19d04e9423ebe18) on rbd image(f67d89a8-836a-4f47-af8d-37cf99529275_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Dec  6 02:46:25 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:46:25 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:46:25 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:46:25.493 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:46:25 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e344 e344: 3 total, 3 up, 3 in
Dec  6 02:46:25 np0005548731 nova_compute[232433]: 2025-12-06 07:46:25.689 232437 DEBUG nova.storage.rbd_utils [None req-102a2fca-06f8-4768-aa56-8a215c0631c6 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] creating snapshot(snap) on rbd image(1a501272-45c2-411f-9825-0c663b862e6b) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Dec  6 02:46:26 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:46:26 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:46:26 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:46:26.559 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:46:26 np0005548731 ovn_controller[133927]: 2025-12-06T07:46:26Z|00097|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:9c:dc:f4 10.100.0.7
Dec  6 02:46:26 np0005548731 ovn_controller[133927]: 2025-12-06T07:46:26Z|00098|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:9c:dc:f4 10.100.0.7
Dec  6 02:46:27 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e345 e345: 3 total, 3 up, 3 in
Dec  6 02:46:27 np0005548731 nova_compute[232433]: 2025-12-06 07:46:27.106 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:46:27 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:46:27 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:46:27 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:46:27.497 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:46:28 np0005548731 nova_compute[232433]: 2025-12-06 07:46:28.105 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:46:28 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:46:28 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:46:28 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:46:28.563 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:46:28 np0005548731 nova_compute[232433]: 2025-12-06 07:46:28.623 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:46:29 np0005548731 nova_compute[232433]: 2025-12-06 07:46:29.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:46:29 np0005548731 nova_compute[232433]: 2025-12-06 07:46:29.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:46:29 np0005548731 nova_compute[232433]: 2025-12-06 07:46:29.123 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:46:29 np0005548731 nova_compute[232433]: 2025-12-06 07:46:29.123 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:46:29 np0005548731 nova_compute[232433]: 2025-12-06 07:46:29.123 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:46:29 np0005548731 nova_compute[232433]: 2025-12-06 07:46:29.124 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  6 02:46:29 np0005548731 nova_compute[232433]: 2025-12-06 07:46:29.124 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:46:29 np0005548731 nova_compute[232433]: 2025-12-06 07:46:29.243 232437 INFO nova.virt.libvirt.driver [None req-102a2fca-06f8-4768-aa56-8a215c0631c6 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] [instance: f67d89a8-836a-4f47-af8d-37cf99529275] Snapshot image upload complete#033[00m
Dec  6 02:46:29 np0005548731 nova_compute[232433]: 2025-12-06 07:46:29.244 232437 DEBUG nova.compute.manager [None req-102a2fca-06f8-4768-aa56-8a215c0631c6 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] [instance: f67d89a8-836a-4f47-af8d-37cf99529275] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:46:29 np0005548731 nova_compute[232433]: 2025-12-06 07:46:29.295 232437 INFO nova.compute.manager [None req-102a2fca-06f8-4768-aa56-8a215c0631c6 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] [instance: f67d89a8-836a-4f47-af8d-37cf99529275] Shelve offloading#033[00m
Dec  6 02:46:29 np0005548731 nova_compute[232433]: 2025-12-06 07:46:29.302 232437 INFO nova.virt.libvirt.driver [-] [instance: f67d89a8-836a-4f47-af8d-37cf99529275] Instance destroyed successfully.#033[00m
Dec  6 02:46:29 np0005548731 nova_compute[232433]: 2025-12-06 07:46:29.302 232437 DEBUG nova.compute.manager [None req-102a2fca-06f8-4768-aa56-8a215c0631c6 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] [instance: f67d89a8-836a-4f47-af8d-37cf99529275] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:46:29 np0005548731 nova_compute[232433]: 2025-12-06 07:46:29.304 232437 DEBUG oslo_concurrency.lockutils [None req-102a2fca-06f8-4768-aa56-8a215c0631c6 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] Acquiring lock "refresh_cache-f67d89a8-836a-4f47-af8d-37cf99529275" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 02:46:29 np0005548731 nova_compute[232433]: 2025-12-06 07:46:29.305 232437 DEBUG oslo_concurrency.lockutils [None req-102a2fca-06f8-4768-aa56-8a215c0631c6 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] Acquired lock "refresh_cache-f67d89a8-836a-4f47-af8d-37cf99529275" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 02:46:29 np0005548731 nova_compute[232433]: 2025-12-06 07:46:29.305 232437 DEBUG nova.network.neutron [None req-102a2fca-06f8-4768-aa56-8a215c0631c6 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] [instance: f67d89a8-836a-4f47-af8d-37cf99529275] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec  6 02:46:29 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:46:29 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:46:29 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:46:29.500 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:46:29 np0005548731 nova_compute[232433]: 2025-12-06 07:46:29.544 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:46:29 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 02:46:29 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2389771305' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 02:46:29 np0005548731 nova_compute[232433]: 2025-12-06 07:46:29.572 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.448s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:46:29 np0005548731 nova_compute[232433]: 2025-12-06 07:46:29.654 232437 DEBUG nova.virt.libvirt.driver [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] skipping disk for instance-00000099 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  6 02:46:29 np0005548731 nova_compute[232433]: 2025-12-06 07:46:29.654 232437 DEBUG nova.virt.libvirt.driver [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] skipping disk for instance-00000099 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  6 02:46:29 np0005548731 nova_compute[232433]: 2025-12-06 07:46:29.657 232437 DEBUG nova.virt.libvirt.driver [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] skipping disk for instance-0000009f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  6 02:46:29 np0005548731 nova_compute[232433]: 2025-12-06 07:46:29.657 232437 DEBUG nova.virt.libvirt.driver [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] skipping disk for instance-0000009f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  6 02:46:29 np0005548731 nova_compute[232433]: 2025-12-06 07:46:29.794 232437 WARNING nova.virt.libvirt.driver [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  6 02:46:29 np0005548731 nova_compute[232433]: 2025-12-06 07:46:29.795 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4088MB free_disk=20.761104583740234GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, 
"label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  6 02:46:29 np0005548731 nova_compute[232433]: 2025-12-06 07:46:29.795 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:46:29 np0005548731 nova_compute[232433]: 2025-12-06 07:46:29.795 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:46:29 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e346 e346: 3 total, 3 up, 3 in
Dec  6 02:46:29 np0005548731 nova_compute[232433]: 2025-12-06 07:46:29.906 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Instance f67d89a8-836a-4f47-af8d-37cf99529275 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  6 02:46:29 np0005548731 nova_compute[232433]: 2025-12-06 07:46:29.907 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Instance 2b4b3181-da12-4705-9215-7ee5b869102b actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  6 02:46:29 np0005548731 nova_compute[232433]: 2025-12-06 07:46:29.907 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  6 02:46:29 np0005548731 nova_compute[232433]: 2025-12-06 07:46:29.907 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7680MB used_ram=768MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  6 02:46:29 np0005548731 nova_compute[232433]: 2025-12-06 07:46:29.951 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:46:30 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e346 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:46:30 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 02:46:30 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/347993965' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 02:46:30 np0005548731 nova_compute[232433]: 2025-12-06 07:46:30.395 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.444s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:46:30 np0005548731 nova_compute[232433]: 2025-12-06 07:46:30.400 232437 DEBUG nova.compute.provider_tree [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Inventory has not changed in ProviderTree for provider: 6d00757a-082f-486d-ae84-869a2ba2e6e7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  6 02:46:30 np0005548731 nova_compute[232433]: 2025-12-06 07:46:30.426 232437 DEBUG nova.scheduler.client.report [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Inventory has not changed for provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  6 02:46:30 np0005548731 nova_compute[232433]: 2025-12-06 07:46:30.463 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  6 02:46:30 np0005548731 nova_compute[232433]: 2025-12-06 07:46:30.463 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.668s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:46:30 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:46:30 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:46:30 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:46:30.565 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:46:30 np0005548731 nova_compute[232433]: 2025-12-06 07:46:30.797 232437 DEBUG nova.network.neutron [None req-102a2fca-06f8-4768-aa56-8a215c0631c6 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] [instance: f67d89a8-836a-4f47-af8d-37cf99529275] Updating instance_info_cache with network_info: [{"id": "d4087b80-19fa-44d3-8dd5-406c2b01b6f4", "address": "fa:16:3e:16:41:1e", "network": {"id": "67a02abd-6f15-4e26-ba0d-8a091ca98239", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-840496237-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4ec9294f6d4b4f44a72414374d646a4a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd4087b80-19", "ovs_interfaceid": "d4087b80-19fa-44d3-8dd5-406c2b01b6f4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 02:46:30 np0005548731 nova_compute[232433]: 2025-12-06 07:46:30.863 232437 DEBUG oslo_concurrency.lockutils [None req-102a2fca-06f8-4768-aa56-8a215c0631c6 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] Releasing lock "refresh_cache-f67d89a8-836a-4f47-af8d-37cf99529275" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 02:46:31 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:46:31 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:46:31 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:46:31.503 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:46:31 np0005548731 nova_compute[232433]: 2025-12-06 07:46:31.791 232437 INFO nova.virt.libvirt.driver [-] [instance: f67d89a8-836a-4f47-af8d-37cf99529275] Instance destroyed successfully.#033[00m
Dec  6 02:46:31 np0005548731 nova_compute[232433]: 2025-12-06 07:46:31.792 232437 DEBUG nova.objects.instance [None req-102a2fca-06f8-4768-aa56-8a215c0631c6 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] Lazy-loading 'resources' on Instance uuid f67d89a8-836a-4f47-af8d-37cf99529275 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:46:31 np0005548731 nova_compute[232433]: 2025-12-06 07:46:31.806 232437 DEBUG nova.virt.libvirt.vif [None req-102a2fca-06f8-4768-aa56-8a215c0631c6 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T07:43:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersNegativeTestJSON-server-2113241509',display_name='tempest-ServersNegativeTestJSON-server-2113241509',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serversnegativetestjson-server-2113241509',id=153,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-06T07:43:57Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='4ec9294f6d4b4f44a72414374d646a4a',ramdisk_id='',reservation_id='r-r3odtvmp',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio'
,image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersNegativeTestJSON-776446295',owner_user_name='tempest-ServersNegativeTestJSON-776446295-project-member',shelved_at='2025-12-06T07:46:29.244466',shelved_host='compute-2.ctlplane.example.com',shelved_image_id='1a501272-45c2-411f-9825-0c663b862e6b'},tags=<?>,task_state='shelving_offloading',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T07:46:23Z,user_data=None,user_id='1035ecd55ed54b57aa35fe32fb915cc5',uuid=f67d89a8-836a-4f47-af8d-37cf99529275,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='shelved') vif={"id": "d4087b80-19fa-44d3-8dd5-406c2b01b6f4", "address": "fa:16:3e:16:41:1e", "network": {"id": "67a02abd-6f15-4e26-ba0d-8a091ca98239", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-840496237-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4ec9294f6d4b4f44a72414374d646a4a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd4087b80-19", "ovs_interfaceid": "d4087b80-19fa-44d3-8dd5-406c2b01b6f4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec  6 02:46:31 np0005548731 nova_compute[232433]: 2025-12-06 07:46:31.807 232437 DEBUG nova.network.os_vif_util [None req-102a2fca-06f8-4768-aa56-8a215c0631c6 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] Converting VIF {"id": "d4087b80-19fa-44d3-8dd5-406c2b01b6f4", "address": "fa:16:3e:16:41:1e", "network": {"id": "67a02abd-6f15-4e26-ba0d-8a091ca98239", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-840496237-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4ec9294f6d4b4f44a72414374d646a4a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd4087b80-19", "ovs_interfaceid": "d4087b80-19fa-44d3-8dd5-406c2b01b6f4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 02:46:31 np0005548731 nova_compute[232433]: 2025-12-06 07:46:31.808 232437 DEBUG nova.network.os_vif_util [None req-102a2fca-06f8-4768-aa56-8a215c0631c6 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:16:41:1e,bridge_name='br-int',has_traffic_filtering=True,id=d4087b80-19fa-44d3-8dd5-406c2b01b6f4,network=Network(67a02abd-6f15-4e26-ba0d-8a091ca98239),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd4087b80-19') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 02:46:31 np0005548731 nova_compute[232433]: 2025-12-06 07:46:31.809 232437 DEBUG os_vif [None req-102a2fca-06f8-4768-aa56-8a215c0631c6 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:16:41:1e,bridge_name='br-int',has_traffic_filtering=True,id=d4087b80-19fa-44d3-8dd5-406c2b01b6f4,network=Network(67a02abd-6f15-4e26-ba0d-8a091ca98239),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd4087b80-19') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec  6 02:46:31 np0005548731 nova_compute[232433]: 2025-12-06 07:46:31.812 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:46:31 np0005548731 nova_compute[232433]: 2025-12-06 07:46:31.813 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd4087b80-19, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:46:31 np0005548731 nova_compute[232433]: 2025-12-06 07:46:31.816 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:46:31 np0005548731 nova_compute[232433]: 2025-12-06 07:46:31.819 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  6 02:46:31 np0005548731 nova_compute[232433]: 2025-12-06 07:46:31.821 232437 INFO os_vif [None req-102a2fca-06f8-4768-aa56-8a215c0631c6 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:16:41:1e,bridge_name='br-int',has_traffic_filtering=True,id=d4087b80-19fa-44d3-8dd5-406c2b01b6f4,network=Network(67a02abd-6f15-4e26-ba0d-8a091ca98239),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd4087b80-19')#033[00m
Dec  6 02:46:31 np0005548731 nova_compute[232433]: 2025-12-06 07:46:31.954 232437 DEBUG nova.compute.manager [req-94a23119-bce0-4588-81eb-66b6be241e7e req-f8c610f8-0717-4569-ba26-890fd8704594 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f67d89a8-836a-4f47-af8d-37cf99529275] Received event network-changed-d4087b80-19fa-44d3-8dd5-406c2b01b6f4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:46:31 np0005548731 nova_compute[232433]: 2025-12-06 07:46:31.955 232437 DEBUG nova.compute.manager [req-94a23119-bce0-4588-81eb-66b6be241e7e req-f8c610f8-0717-4569-ba26-890fd8704594 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f67d89a8-836a-4f47-af8d-37cf99529275] Refreshing instance network info cache due to event network-changed-d4087b80-19fa-44d3-8dd5-406c2b01b6f4. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  6 02:46:31 np0005548731 nova_compute[232433]: 2025-12-06 07:46:31.955 232437 DEBUG oslo_concurrency.lockutils [req-94a23119-bce0-4588-81eb-66b6be241e7e req-f8c610f8-0717-4569-ba26-890fd8704594 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-f67d89a8-836a-4f47-af8d-37cf99529275" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 02:46:31 np0005548731 nova_compute[232433]: 2025-12-06 07:46:31.955 232437 DEBUG oslo_concurrency.lockutils [req-94a23119-bce0-4588-81eb-66b6be241e7e req-f8c610f8-0717-4569-ba26-890fd8704594 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-f67d89a8-836a-4f47-af8d-37cf99529275" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 02:46:31 np0005548731 nova_compute[232433]: 2025-12-06 07:46:31.955 232437 DEBUG nova.network.neutron [req-94a23119-bce0-4588-81eb-66b6be241e7e req-f8c610f8-0717-4569-ba26-890fd8704594 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f67d89a8-836a-4f47-af8d-37cf99529275] Refreshing network info cache for port d4087b80-19fa-44d3-8dd5-406c2b01b6f4 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  6 02:46:32 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:46:32 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:46:32 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:46:32.567 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:46:33 np0005548731 nova_compute[232433]: 2025-12-06 07:46:33.386 232437 INFO nova.virt.libvirt.driver [None req-102a2fca-06f8-4768-aa56-8a215c0631c6 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] [instance: f67d89a8-836a-4f47-af8d-37cf99529275] Deleting instance files /var/lib/nova/instances/f67d89a8-836a-4f47-af8d-37cf99529275_del#033[00m
Dec  6 02:46:33 np0005548731 nova_compute[232433]: 2025-12-06 07:46:33.386 232437 INFO nova.virt.libvirt.driver [None req-102a2fca-06f8-4768-aa56-8a215c0631c6 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] [instance: f67d89a8-836a-4f47-af8d-37cf99529275] Deletion of /var/lib/nova/instances/f67d89a8-836a-4f47-af8d-37cf99529275_del complete#033[00m
Dec  6 02:46:33 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:46:33 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 02:46:33 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:46:33.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 02:46:33 np0005548731 nova_compute[232433]: 2025-12-06 07:46:33.529 232437 INFO nova.scheduler.client.report [None req-102a2fca-06f8-4768-aa56-8a215c0631c6 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] Deleted allocations for instance f67d89a8-836a-4f47-af8d-37cf99529275#033[00m
Dec  6 02:46:33 np0005548731 nova_compute[232433]: 2025-12-06 07:46:33.611 232437 DEBUG nova.compute.manager [None req-f7572c46-7bc3-406a-a549-1b57ef35983b e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] [instance: 2b4b3181-da12-4705-9215-7ee5b869102b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:46:33 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:46:33.614 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=60, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:ec:b3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '32:72:e7:89:e0:7d'}, ipsec=False) old=SB_Global(nb_cfg=59) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 02:46:33 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:46:33.615 143965 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec  6 02:46:33 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:46:33.617 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=9f96b960-b4f2-40bd-ae99-08121f5e8b78, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '60'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:46:33 np0005548731 nova_compute[232433]: 2025-12-06 07:46:33.620 232437 DEBUG oslo_concurrency.lockutils [None req-102a2fca-06f8-4768-aa56-8a215c0631c6 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:46:33 np0005548731 nova_compute[232433]: 2025-12-06 07:46:33.620 232437 DEBUG oslo_concurrency.lockutils [None req-102a2fca-06f8-4768-aa56-8a215c0631c6 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:46:33 np0005548731 nova_compute[232433]: 2025-12-06 07:46:33.632 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:46:33 np0005548731 nova_compute[232433]: 2025-12-06 07:46:33.671 232437 INFO nova.compute.manager [None req-f7572c46-7bc3-406a-a549-1b57ef35983b e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] [instance: 2b4b3181-da12-4705-9215-7ee5b869102b] instance snapshotting#033[00m
Dec  6 02:46:33 np0005548731 nova_compute[232433]: 2025-12-06 07:46:33.685 232437 DEBUG oslo_concurrency.processutils [None req-102a2fca-06f8-4768-aa56-8a215c0631c6 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:46:33 np0005548731 nova_compute[232433]: 2025-12-06 07:46:33.817 232437 DEBUG nova.network.neutron [req-94a23119-bce0-4588-81eb-66b6be241e7e req-f8c610f8-0717-4569-ba26-890fd8704594 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f67d89a8-836a-4f47-af8d-37cf99529275] Updated VIF entry in instance network info cache for port d4087b80-19fa-44d3-8dd5-406c2b01b6f4. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  6 02:46:33 np0005548731 nova_compute[232433]: 2025-12-06 07:46:33.818 232437 DEBUG nova.network.neutron [req-94a23119-bce0-4588-81eb-66b6be241e7e req-f8c610f8-0717-4569-ba26-890fd8704594 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f67d89a8-836a-4f47-af8d-37cf99529275] Updating instance_info_cache with network_info: [{"id": "d4087b80-19fa-44d3-8dd5-406c2b01b6f4", "address": "fa:16:3e:16:41:1e", "network": {"id": "67a02abd-6f15-4e26-ba0d-8a091ca98239", "bridge": null, "label": "tempest-ServersNegativeTestJSON-840496237-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4ec9294f6d4b4f44a72414374d646a4a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "unbound", "details": {}, "devname": "tapd4087b80-19", "ovs_interfaceid": null, "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 02:46:33 np0005548731 nova_compute[232433]: 2025-12-06 07:46:33.855 232437 DEBUG oslo_concurrency.lockutils [req-94a23119-bce0-4588-81eb-66b6be241e7e req-f8c610f8-0717-4569-ba26-890fd8704594 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-f67d89a8-836a-4f47-af8d-37cf99529275" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 02:46:33 np0005548731 nova_compute[232433]: 2025-12-06 07:46:33.969 232437 INFO nova.virt.libvirt.driver [None req-f7572c46-7bc3-406a-a549-1b57ef35983b e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] [instance: 2b4b3181-da12-4705-9215-7ee5b869102b] Beginning live snapshot process#033[00m
Dec  6 02:46:34 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 02:46:34 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/462057177' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 02:46:34 np0005548731 nova_compute[232433]: 2025-12-06 07:46:34.160 232437 DEBUG oslo_concurrency.processutils [None req-102a2fca-06f8-4768-aa56-8a215c0631c6 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.475s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:46:34 np0005548731 nova_compute[232433]: 2025-12-06 07:46:34.167 232437 DEBUG nova.virt.libvirt.imagebackend [None req-f7572c46-7bc3-406a-a549-1b57ef35983b e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] No parent info for 6efab05d-c7cf-4770-a5c3-c806a2739063; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163#033[00m
Dec  6 02:46:34 np0005548731 nova_compute[232433]: 2025-12-06 07:46:34.172 232437 DEBUG nova.compute.provider_tree [None req-102a2fca-06f8-4768-aa56-8a215c0631c6 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] Inventory has not changed in ProviderTree for provider: 6d00757a-082f-486d-ae84-869a2ba2e6e7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  6 02:46:34 np0005548731 nova_compute[232433]: 2025-12-06 07:46:34.190 232437 DEBUG nova.scheduler.client.report [None req-102a2fca-06f8-4768-aa56-8a215c0631c6 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] Inventory has not changed for provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  6 02:46:34 np0005548731 nova_compute[232433]: 2025-12-06 07:46:34.219 232437 DEBUG oslo_concurrency.lockutils [None req-102a2fca-06f8-4768-aa56-8a215c0631c6 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.599s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:46:34 np0005548731 nova_compute[232433]: 2025-12-06 07:46:34.275 232437 DEBUG oslo_concurrency.lockutils [None req-102a2fca-06f8-4768-aa56-8a215c0631c6 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] Lock "f67d89a8-836a-4f47-af8d-37cf99529275" "released" by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" :: held 14.358s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:46:34 np0005548731 nova_compute[232433]: 2025-12-06 07:46:34.415 232437 DEBUG nova.storage.rbd_utils [None req-f7572c46-7bc3-406a-a549-1b57ef35983b e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] creating snapshot(bdebe49dbaa44b259d9661fb6c3d7899) on rbd image(2b4b3181-da12-4705-9215-7ee5b869102b_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Dec  6 02:46:34 np0005548731 nova_compute[232433]: 2025-12-06 07:46:34.481 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:46:34 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e347 e347: 3 total, 3 up, 3 in
Dec  6 02:46:34 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:46:34 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:46:34 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:46:34.569 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:46:34 np0005548731 nova_compute[232433]: 2025-12-06 07:46:34.623 232437 DEBUG nova.storage.rbd_utils [None req-f7572c46-7bc3-406a-a549-1b57ef35983b e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] cloning vms/2b4b3181-da12-4705-9215-7ee5b869102b_disk@bdebe49dbaa44b259d9661fb6c3d7899 to images/4a5b4c30-2416-4ed2-b947-8bb73ef88bd7 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Dec  6 02:46:34 np0005548731 nova_compute[232433]: 2025-12-06 07:46:34.763 232437 DEBUG nova.storage.rbd_utils [None req-f7572c46-7bc3-406a-a549-1b57ef35983b e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] flattening images/4a5b4c30-2416-4ed2-b947-8bb73ef88bd7 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m
Dec  6 02:46:34 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e348 e348: 3 total, 3 up, 3 in
Dec  6 02:46:35 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e348 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:46:35 np0005548731 nova_compute[232433]: 2025-12-06 07:46:35.175 232437 DEBUG nova.storage.rbd_utils [None req-f7572c46-7bc3-406a-a549-1b57ef35983b e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] removing snapshot(bdebe49dbaa44b259d9661fb6c3d7899) on rbd image(2b4b3181-da12-4705-9215-7ee5b869102b_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Dec  6 02:46:35 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:46:35 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:46:35 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:46:35.509 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:46:35 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e349 e349: 3 total, 3 up, 3 in
Dec  6 02:46:35 np0005548731 nova_compute[232433]: 2025-12-06 07:46:35.879 232437 DEBUG nova.storage.rbd_utils [None req-f7572c46-7bc3-406a-a549-1b57ef35983b e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] creating snapshot(snap) on rbd image(4a5b4c30-2416-4ed2-b947-8bb73ef88bd7) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Dec  6 02:46:36 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:46:36 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:46:36 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:46:36.571 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:46:36 np0005548731 nova_compute[232433]: 2025-12-06 07:46:36.673 232437 DEBUG oslo_concurrency.lockutils [None req-e7bca8cf-51e0-4453-b3b4-a01ac1c8ab05 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] Acquiring lock "f67d89a8-836a-4f47-af8d-37cf99529275" by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:46:36 np0005548731 nova_compute[232433]: 2025-12-06 07:46:36.675 232437 DEBUG oslo_concurrency.lockutils [None req-e7bca8cf-51e0-4453-b3b4-a01ac1c8ab05 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] Lock "f67d89a8-836a-4f47-af8d-37cf99529275" acquired by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:46:36 np0005548731 nova_compute[232433]: 2025-12-06 07:46:36.675 232437 INFO nova.compute.manager [None req-e7bca8cf-51e0-4453-b3b4-a01ac1c8ab05 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] [instance: f67d89a8-836a-4f47-af8d-37cf99529275] Unshelving#033[00m
Dec  6 02:46:36 np0005548731 nova_compute[232433]: 2025-12-06 07:46:36.734 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:46:36 np0005548731 nova_compute[232433]: 2025-12-06 07:46:36.784 232437 DEBUG oslo_concurrency.lockutils [None req-e7bca8cf-51e0-4453-b3b4-a01ac1c8ab05 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:46:36 np0005548731 nova_compute[232433]: 2025-12-06 07:46:36.784 232437 DEBUG oslo_concurrency.lockutils [None req-e7bca8cf-51e0-4453-b3b4-a01ac1c8ab05 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:46:36 np0005548731 nova_compute[232433]: 2025-12-06 07:46:36.788 232437 DEBUG nova.objects.instance [None req-e7bca8cf-51e0-4453-b3b4-a01ac1c8ab05 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] Lazy-loading 'pci_requests' on Instance uuid f67d89a8-836a-4f47-af8d-37cf99529275 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:46:36 np0005548731 nova_compute[232433]: 2025-12-06 07:46:36.803 232437 DEBUG nova.objects.instance [None req-e7bca8cf-51e0-4453-b3b4-a01ac1c8ab05 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] Lazy-loading 'numa_topology' on Instance uuid f67d89a8-836a-4f47-af8d-37cf99529275 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:46:36 np0005548731 nova_compute[232433]: 2025-12-06 07:46:36.815 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:46:36 np0005548731 nova_compute[232433]: 2025-12-06 07:46:36.822 232437 DEBUG nova.virt.hardware [None req-e7bca8cf-51e0-4453-b3b4-a01ac1c8ab05 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Dec  6 02:46:36 np0005548731 nova_compute[232433]: 2025-12-06 07:46:36.823 232437 INFO nova.compute.claims [None req-e7bca8cf-51e0-4453-b3b4-a01ac1c8ab05 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] [instance: f67d89a8-836a-4f47-af8d-37cf99529275] Claim successful on node compute-2.ctlplane.example.com#033[00m
Dec  6 02:46:36 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Dec  6 02:46:36 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  6 02:46:36 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 02:46:36 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  6 02:46:36 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e350 e350: 3 total, 3 up, 3 in
Dec  6 02:46:37 np0005548731 nova_compute[232433]: 2025-12-06 07:46:37.009 232437 DEBUG oslo_concurrency.processutils [None req-e7bca8cf-51e0-4453-b3b4-a01ac1c8ab05 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  6 02:46:37 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 02:46:37 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/540055131' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 02:46:37 np0005548731 nova_compute[232433]: 2025-12-06 07:46:37.432 232437 DEBUG oslo_concurrency.processutils [None req-e7bca8cf-51e0-4453-b3b4-a01ac1c8ab05 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.423s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  6 02:46:37 np0005548731 nova_compute[232433]: 2025-12-06 07:46:37.438 232437 DEBUG nova.compute.provider_tree [None req-e7bca8cf-51e0-4453-b3b4-a01ac1c8ab05 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] Inventory has not changed in ProviderTree for provider: 6d00757a-082f-486d-ae84-869a2ba2e6e7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec  6 02:46:37 np0005548731 nova_compute[232433]: 2025-12-06 07:46:37.444 232437 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765007182.443919, f67d89a8-836a-4f47-af8d-37cf99529275 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec  6 02:46:37 np0005548731 nova_compute[232433]: 2025-12-06 07:46:37.445 232437 INFO nova.compute.manager [-] [instance: f67d89a8-836a-4f47-af8d-37cf99529275] VM Stopped (Lifecycle Event)
Dec  6 02:46:37 np0005548731 nova_compute[232433]: 2025-12-06 07:46:37.454 232437 DEBUG nova.scheduler.client.report [None req-e7bca8cf-51e0-4453-b3b4-a01ac1c8ab05 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] Inventory has not changed for provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec  6 02:46:37 np0005548731 nova_compute[232433]: 2025-12-06 07:46:37.466 232437 DEBUG nova.compute.manager [None req-1604e1d7-7e96-4aa2-91d3-4ad64f555e17 - - - - - -] [instance: f67d89a8-836a-4f47-af8d-37cf99529275] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec  6 02:46:37 np0005548731 nova_compute[232433]: 2025-12-06 07:46:37.479 232437 DEBUG oslo_concurrency.lockutils [None req-e7bca8cf-51e0-4453-b3b4-a01ac1c8ab05 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.696s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  6 02:46:37 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:46:37 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:46:37 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:46:37.512 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:46:37 np0005548731 nova_compute[232433]: 2025-12-06 07:46:37.681 232437 INFO nova.network.neutron [None req-e7bca8cf-51e0-4453-b3b4-a01ac1c8ab05 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] [instance: f67d89a8-836a-4f47-af8d-37cf99529275] Updating port d4087b80-19fa-44d3-8dd5-406c2b01b6f4 with attributes {'binding:host_id': 'compute-2.ctlplane.example.com', 'device_owner': 'compute:nova'}
Dec  6 02:46:38 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:46:38 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:46:38 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:46:38.573 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:46:38 np0005548731 nova_compute[232433]: 2025-12-06 07:46:38.678 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 02:46:38 np0005548731 nova_compute[232433]: 2025-12-06 07:46:38.977 232437 DEBUG oslo_concurrency.lockutils [None req-e7bca8cf-51e0-4453-b3b4-a01ac1c8ab05 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] Acquiring lock "refresh_cache-f67d89a8-836a-4f47-af8d-37cf99529275" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec  6 02:46:38 np0005548731 nova_compute[232433]: 2025-12-06 07:46:38.978 232437 DEBUG oslo_concurrency.lockutils [None req-e7bca8cf-51e0-4453-b3b4-a01ac1c8ab05 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] Acquired lock "refresh_cache-f67d89a8-836a-4f47-af8d-37cf99529275" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec  6 02:46:38 np0005548731 nova_compute[232433]: 2025-12-06 07:46:38.978 232437 DEBUG nova.network.neutron [None req-e7bca8cf-51e0-4453-b3b4-a01ac1c8ab05 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] [instance: f67d89a8-836a-4f47-af8d-37cf99529275] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec  6 02:46:39 np0005548731 nova_compute[232433]: 2025-12-06 07:46:39.095 232437 DEBUG nova.compute.manager [req-dbe862d1-f19d-443d-88ae-a47680f727e1 req-4ebd566c-3407-43c1-846e-26b183af6f11 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f67d89a8-836a-4f47-af8d-37cf99529275] Received event network-changed-d4087b80-19fa-44d3-8dd5-406c2b01b6f4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec  6 02:46:39 np0005548731 nova_compute[232433]: 2025-12-06 07:46:39.096 232437 DEBUG nova.compute.manager [req-dbe862d1-f19d-443d-88ae-a47680f727e1 req-4ebd566c-3407-43c1-846e-26b183af6f11 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f67d89a8-836a-4f47-af8d-37cf99529275] Refreshing instance network info cache due to event network-changed-d4087b80-19fa-44d3-8dd5-406c2b01b6f4. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec  6 02:46:39 np0005548731 nova_compute[232433]: 2025-12-06 07:46:39.097 232437 DEBUG oslo_concurrency.lockutils [req-dbe862d1-f19d-443d-88ae-a47680f727e1 req-4ebd566c-3407-43c1-846e-26b183af6f11 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-f67d89a8-836a-4f47-af8d-37cf99529275" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec  6 02:46:39 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:46:39 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:46:39 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:46:39.515 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:46:40 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e350 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:46:40 np0005548731 nova_compute[232433]: 2025-12-06 07:46:40.079 232437 INFO nova.virt.libvirt.driver [None req-f7572c46-7bc3-406a-a549-1b57ef35983b e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] [instance: 2b4b3181-da12-4705-9215-7ee5b869102b] Snapshot image upload complete
Dec  6 02:46:40 np0005548731 nova_compute[232433]: 2025-12-06 07:46:40.080 232437 INFO nova.compute.manager [None req-f7572c46-7bc3-406a-a549-1b57ef35983b e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] [instance: 2b4b3181-da12-4705-9215-7ee5b869102b] Took 6.41 seconds to snapshot the instance on the hypervisor.
Dec  6 02:46:40 np0005548731 nova_compute[232433]: 2025-12-06 07:46:40.145 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 02:46:40 np0005548731 nova_compute[232433]: 2025-12-06 07:46:40.195 232437 DEBUG nova.network.neutron [None req-e7bca8cf-51e0-4453-b3b4-a01ac1c8ab05 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] [instance: f67d89a8-836a-4f47-af8d-37cf99529275] Updating instance_info_cache with network_info: [{"id": "d4087b80-19fa-44d3-8dd5-406c2b01b6f4", "address": "fa:16:3e:16:41:1e", "network": {"id": "67a02abd-6f15-4e26-ba0d-8a091ca98239", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-840496237-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4ec9294f6d4b4f44a72414374d646a4a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd4087b80-19", "ovs_interfaceid": "d4087b80-19fa-44d3-8dd5-406c2b01b6f4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec  6 02:46:40 np0005548731 nova_compute[232433]: 2025-12-06 07:46:40.211 232437 DEBUG oslo_concurrency.lockutils [None req-e7bca8cf-51e0-4453-b3b4-a01ac1c8ab05 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] Releasing lock "refresh_cache-f67d89a8-836a-4f47-af8d-37cf99529275" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec  6 02:46:40 np0005548731 nova_compute[232433]: 2025-12-06 07:46:40.212 232437 DEBUG nova.virt.libvirt.driver [None req-e7bca8cf-51e0-4453-b3b4-a01ac1c8ab05 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] [instance: f67d89a8-836a-4f47-af8d-37cf99529275] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec  6 02:46:40 np0005548731 nova_compute[232433]: 2025-12-06 07:46:40.212 232437 INFO nova.virt.libvirt.driver [None req-e7bca8cf-51e0-4453-b3b4-a01ac1c8ab05 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] [instance: f67d89a8-836a-4f47-af8d-37cf99529275] Creating image(s)
Dec  6 02:46:40 np0005548731 nova_compute[232433]: 2025-12-06 07:46:40.242 232437 DEBUG nova.storage.rbd_utils [None req-e7bca8cf-51e0-4453-b3b4-a01ac1c8ab05 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] rbd image f67d89a8-836a-4f47-af8d-37cf99529275_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec  6 02:46:40 np0005548731 nova_compute[232433]: 2025-12-06 07:46:40.247 232437 DEBUG nova.objects.instance [None req-e7bca8cf-51e0-4453-b3b4-a01ac1c8ab05 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] Lazy-loading 'trusted_certs' on Instance uuid f67d89a8-836a-4f47-af8d-37cf99529275 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec  6 02:46:40 np0005548731 nova_compute[232433]: 2025-12-06 07:46:40.249 232437 DEBUG oslo_concurrency.lockutils [req-dbe862d1-f19d-443d-88ae-a47680f727e1 req-4ebd566c-3407-43c1-846e-26b183af6f11 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-f67d89a8-836a-4f47-af8d-37cf99529275" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec  6 02:46:40 np0005548731 nova_compute[232433]: 2025-12-06 07:46:40.250 232437 DEBUG nova.network.neutron [req-dbe862d1-f19d-443d-88ae-a47680f727e1 req-4ebd566c-3407-43c1-846e-26b183af6f11 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f67d89a8-836a-4f47-af8d-37cf99529275] Refreshing network info cache for port d4087b80-19fa-44d3-8dd5-406c2b01b6f4 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec  6 02:46:40 np0005548731 nova_compute[232433]: 2025-12-06 07:46:40.290 232437 DEBUG nova.storage.rbd_utils [None req-e7bca8cf-51e0-4453-b3b4-a01ac1c8ab05 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] rbd image f67d89a8-836a-4f47-af8d-37cf99529275_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec  6 02:46:40 np0005548731 nova_compute[232433]: 2025-12-06 07:46:40.318 232437 DEBUG nova.storage.rbd_utils [None req-e7bca8cf-51e0-4453-b3b4-a01ac1c8ab05 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] rbd image f67d89a8-836a-4f47-af8d-37cf99529275_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec  6 02:46:40 np0005548731 nova_compute[232433]: 2025-12-06 07:46:40.321 232437 DEBUG oslo_concurrency.lockutils [None req-e7bca8cf-51e0-4453-b3b4-a01ac1c8ab05 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] Acquiring lock "2839c55a9ae1267f35ffa18c93e0d76e5fd09939" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  6 02:46:40 np0005548731 nova_compute[232433]: 2025-12-06 07:46:40.322 232437 DEBUG oslo_concurrency.lockutils [None req-e7bca8cf-51e0-4453-b3b4-a01ac1c8ab05 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] Lock "2839c55a9ae1267f35ffa18c93e0d76e5fd09939" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  6 02:46:40 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:46:40 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:46:40 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:46:40.575 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:46:40 np0005548731 nova_compute[232433]: 2025-12-06 07:46:40.643 232437 DEBUG nova.virt.libvirt.imagebackend [None req-e7bca8cf-51e0-4453-b3b4-a01ac1c8ab05 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] Image locations are: [{'url': 'rbd://40a1bae4-cf76-5610-8dab-c75116dfe0bb/images/1a501272-45c2-411f-9825-0c663b862e6b/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://40a1bae4-cf76-5610-8dab-c75116dfe0bb/images/1a501272-45c2-411f-9825-0c663b862e6b/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085
Dec  6 02:46:40 np0005548731 nova_compute[232433]: 2025-12-06 07:46:40.695 232437 DEBUG nova.virt.libvirt.imagebackend [None req-e7bca8cf-51e0-4453-b3b4-a01ac1c8ab05 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] Selected location: {'url': 'rbd://40a1bae4-cf76-5610-8dab-c75116dfe0bb/images/1a501272-45c2-411f-9825-0c663b862e6b/snap', 'metadata': {'store': 'default_backend'}} clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1094
Dec  6 02:46:40 np0005548731 nova_compute[232433]: 2025-12-06 07:46:40.696 232437 DEBUG nova.storage.rbd_utils [None req-e7bca8cf-51e0-4453-b3b4-a01ac1c8ab05 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] cloning images/1a501272-45c2-411f-9825-0c663b862e6b@snap to None/f67d89a8-836a-4f47-af8d-37cf99529275_disk clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Dec  6 02:46:40 np0005548731 nova_compute[232433]: 2025-12-06 07:46:40.814 232437 DEBUG oslo_concurrency.lockutils [None req-e7bca8cf-51e0-4453-b3b4-a01ac1c8ab05 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] Lock "2839c55a9ae1267f35ffa18c93e0d76e5fd09939" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.492s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  6 02:46:40 np0005548731 nova_compute[232433]: 2025-12-06 07:46:40.947 232437 DEBUG nova.objects.instance [None req-e7bca8cf-51e0-4453-b3b4-a01ac1c8ab05 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] Lazy-loading 'migration_context' on Instance uuid f67d89a8-836a-4f47-af8d-37cf99529275 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec  6 02:46:41 np0005548731 nova_compute[232433]: 2025-12-06 07:46:41.010 232437 DEBUG nova.storage.rbd_utils [None req-e7bca8cf-51e0-4453-b3b4-a01ac1c8ab05 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] flattening vms/f67d89a8-836a-4f47-af8d-37cf99529275_disk flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Dec  6 02:46:41 np0005548731 nova_compute[232433]: 2025-12-06 07:46:41.342 232437 DEBUG nova.virt.libvirt.driver [None req-e7bca8cf-51e0-4453-b3b4-a01ac1c8ab05 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] [instance: f67d89a8-836a-4f47-af8d-37cf99529275] Image rbd:vms/f67d89a8-836a-4f47-af8d-37cf99529275_disk:id=openstack:conf=/etc/ceph/ceph.conf flattened successfully while unshelving instance. _try_fetch_image_cache /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11007
Dec  6 02:46:41 np0005548731 nova_compute[232433]: 2025-12-06 07:46:41.342 232437 DEBUG nova.virt.libvirt.driver [None req-e7bca8cf-51e0-4453-b3b4-a01ac1c8ab05 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] [instance: f67d89a8-836a-4f47-af8d-37cf99529275] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec  6 02:46:41 np0005548731 nova_compute[232433]: 2025-12-06 07:46:41.343 232437 DEBUG nova.virt.libvirt.driver [None req-e7bca8cf-51e0-4453-b3b4-a01ac1c8ab05 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] [instance: f67d89a8-836a-4f47-af8d-37cf99529275] Ensure instance console log exists: /var/lib/nova/instances/f67d89a8-836a-4f47-af8d-37cf99529275/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec  6 02:46:41 np0005548731 nova_compute[232433]: 2025-12-06 07:46:41.343 232437 DEBUG oslo_concurrency.lockutils [None req-e7bca8cf-51e0-4453-b3b4-a01ac1c8ab05 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  6 02:46:41 np0005548731 nova_compute[232433]: 2025-12-06 07:46:41.344 232437 DEBUG oslo_concurrency.lockutils [None req-e7bca8cf-51e0-4453-b3b4-a01ac1c8ab05 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  6 02:46:41 np0005548731 nova_compute[232433]: 2025-12-06 07:46:41.344 232437 DEBUG oslo_concurrency.lockutils [None req-e7bca8cf-51e0-4453-b3b4-a01ac1c8ab05 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  6 02:46:41 np0005548731 nova_compute[232433]: 2025-12-06 07:46:41.347 232437 DEBUG nova.virt.libvirt.driver [None req-e7bca8cf-51e0-4453-b3b4-a01ac1c8ab05 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] [instance: f67d89a8-836a-4f47-af8d-37cf99529275] Start _get_guest_xml network_info=[{"id": "d4087b80-19fa-44d3-8dd5-406c2b01b6f4", "address": "fa:16:3e:16:41:1e", "network": {"id": "67a02abd-6f15-4e26-ba0d-8a091ca98239", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-840496237-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4ec9294f6d4b4f44a72414374d646a4a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd4087b80-19", "ovs_interfaceid": "d4087b80-19fa-44d3-8dd5-406c2b01b6f4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='',container_format='bare',created_at=2025-12-06T07:46:19Z,direct_url=<?>,disk_format='raw',id=1a501272-45c2-411f-9825-0c663b862e6b,min_disk=1,min_ram=0,name='tempest-ServersNegativeTestJSON-server-2113241509-shelved',owner='4ec9294f6d4b4f44a72414374d646a4a',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2025-12-06T07:46:28Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'device_type': 'disk', 'encryption_secret_uuid': None, 'disk_bus': 'virtio', 'boot_index': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'encryption_options': None, 'size': 0, 'encrypted': False, 'image_id': '6efab05d-c7cf-4770-a5c3-c806a2739063'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec  6 02:46:41 np0005548731 nova_compute[232433]: 2025-12-06 07:46:41.350 232437 WARNING nova.virt.libvirt.driver [None req-e7bca8cf-51e0-4453-b3b4-a01ac1c8ab05 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec  6 02:46:41 np0005548731 nova_compute[232433]: 2025-12-06 07:46:41.354 232437 DEBUG nova.virt.libvirt.host [None req-e7bca8cf-51e0-4453-b3b4-a01ac1c8ab05 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec  6 02:46:41 np0005548731 nova_compute[232433]: 2025-12-06 07:46:41.355 232437 DEBUG nova.virt.libvirt.host [None req-e7bca8cf-51e0-4453-b3b4-a01ac1c8ab05 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec  6 02:46:41 np0005548731 nova_compute[232433]: 2025-12-06 07:46:41.358 232437 DEBUG nova.virt.libvirt.host [None req-e7bca8cf-51e0-4453-b3b4-a01ac1c8ab05 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec  6 02:46:41 np0005548731 nova_compute[232433]: 2025-12-06 07:46:41.359 232437 DEBUG nova.virt.libvirt.host [None req-e7bca8cf-51e0-4453-b3b4-a01ac1c8ab05 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec  6 02:46:41 np0005548731 nova_compute[232433]: 2025-12-06 07:46:41.360 232437 DEBUG nova.virt.libvirt.driver [None req-e7bca8cf-51e0-4453-b3b4-a01ac1c8ab05 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec  6 02:46:41 np0005548731 nova_compute[232433]: 2025-12-06 07:46:41.360 232437 DEBUG nova.virt.hardware [None req-e7bca8cf-51e0-4453-b3b4-a01ac1c8ab05 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-06T06:56:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='25848a18-11d9-4f11-80b5-5d005675c76d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='',container_format='bare',created_at=2025-12-06T07:46:19Z,direct_url=<?>,disk_format='raw',id=1a501272-45c2-411f-9825-0c663b862e6b,min_disk=1,min_ram=0,name='tempest-ServersNegativeTestJSON-server-2113241509-shelved',owner='4ec9294f6d4b4f44a72414374d646a4a',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2025-12-06T07:46:28Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec  6 02:46:41 np0005548731 nova_compute[232433]: 2025-12-06 07:46:41.361 232437 DEBUG nova.virt.hardware [None req-e7bca8cf-51e0-4453-b3b4-a01ac1c8ab05 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec  6 02:46:41 np0005548731 nova_compute[232433]: 2025-12-06 07:46:41.361 232437 DEBUG nova.virt.hardware [None req-e7bca8cf-51e0-4453-b3b4-a01ac1c8ab05 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec  6 02:46:41 np0005548731 nova_compute[232433]: 2025-12-06 07:46:41.361 232437 DEBUG nova.virt.hardware [None req-e7bca8cf-51e0-4453-b3b4-a01ac1c8ab05 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec  6 02:46:41 np0005548731 nova_compute[232433]: 2025-12-06 07:46:41.361 232437 DEBUG nova.virt.hardware [None req-e7bca8cf-51e0-4453-b3b4-a01ac1c8ab05 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec  6 02:46:41 np0005548731 nova_compute[232433]: 2025-12-06 07:46:41.361 232437 DEBUG nova.virt.hardware [None req-e7bca8cf-51e0-4453-b3b4-a01ac1c8ab05 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec  6 02:46:41 np0005548731 nova_compute[232433]: 2025-12-06 07:46:41.362 232437 DEBUG nova.virt.hardware [None req-e7bca8cf-51e0-4453-b3b4-a01ac1c8ab05 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec  6 02:46:41 np0005548731 nova_compute[232433]: 2025-12-06 07:46:41.362 232437 DEBUG nova.virt.hardware [None req-e7bca8cf-51e0-4453-b3b4-a01ac1c8ab05 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec  6 02:46:41 np0005548731 nova_compute[232433]: 2025-12-06 07:46:41.362 232437 DEBUG nova.virt.hardware [None req-e7bca8cf-51e0-4453-b3b4-a01ac1c8ab05 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec  6 02:46:41 np0005548731 nova_compute[232433]: 2025-12-06 07:46:41.362 232437 DEBUG nova.virt.hardware [None req-e7bca8cf-51e0-4453-b3b4-a01ac1c8ab05 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec  6 02:46:41 np0005548731 nova_compute[232433]: 2025-12-06 07:46:41.362 232437 DEBUG nova.virt.hardware [None req-e7bca8cf-51e0-4453-b3b4-a01ac1c8ab05 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec  6 02:46:41 np0005548731 nova_compute[232433]: 2025-12-06 07:46:41.363 232437 DEBUG nova.objects.instance [None req-e7bca8cf-51e0-4453-b3b4-a01ac1c8ab05 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] Lazy-loading 'vcpu_model' on Instance uuid f67d89a8-836a-4f47-af8d-37cf99529275 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec  6 02:46:41 np0005548731 nova_compute[232433]: 2025-12-06 07:46:41.378 232437 DEBUG oslo_concurrency.processutils [None req-e7bca8cf-51e0-4453-b3b4-a01ac1c8ab05 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  6 02:46:41 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:46:41 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:46:41 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:46:41.518 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:46:41 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  6 02:46:41 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1367365160' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  6 02:46:41 np0005548731 nova_compute[232433]: 2025-12-06 07:46:41.807 232437 DEBUG oslo_concurrency.processutils [None req-e7bca8cf-51e0-4453-b3b4-a01ac1c8ab05 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.429s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:46:41 np0005548731 nova_compute[232433]: 2025-12-06 07:46:41.835 232437 DEBUG nova.storage.rbd_utils [None req-e7bca8cf-51e0-4453-b3b4-a01ac1c8ab05 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] rbd image f67d89a8-836a-4f47-af8d-37cf99529275_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:46:41 np0005548731 nova_compute[232433]: 2025-12-06 07:46:41.839 232437 DEBUG oslo_concurrency.processutils [None req-e7bca8cf-51e0-4453-b3b4-a01ac1c8ab05 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:46:41 np0005548731 nova_compute[232433]: 2025-12-06 07:46:41.865 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:46:42 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  6 02:46:42 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3457157827' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  6 02:46:42 np0005548731 nova_compute[232433]: 2025-12-06 07:46:42.303 232437 DEBUG oslo_concurrency.processutils [None req-e7bca8cf-51e0-4453-b3b4-a01ac1c8ab05 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.464s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:46:42 np0005548731 nova_compute[232433]: 2025-12-06 07:46:42.305 232437 DEBUG nova.virt.libvirt.vif [None req-e7bca8cf-51e0-4453-b3b4-a01ac1c8ab05 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-12-06T07:43:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersNegativeTestJSON-server-2113241509',display_name='tempest-ServersNegativeTestJSON-server-2113241509',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serversnegativetestjson-server-2113241509',id=153,image_ref='1a501272-45c2-411f-9825-0c663b862e6b',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-06T07:43:57Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=4,progress=0,project_id='4ec9294f6d4b4f44a72414374d646a4a',ramdisk_id='',reservation_id='r-r3odtvmp',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersNegativeTestJSON-776446295',owner_user_name='tempest-ServersNegativeTestJSON-776446295-project-member',shelved_at='2025-12-06T07:46:29.244466',shelved_host='compute-2.ctlplane.example.com',shelved_image_id='1a501272-45c2-411f-9825-0c663b862e6b'},tags=<?>,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:46:36Z,user_data=None,user_id='1035ecd55ed54b57aa35fe32fb915cc5',uuid=f67d89a8-836a-4f47-af8d-37cf99529275,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='shelved_offloaded') vif={"id": "d4087b80-19fa-44d3-8dd5-406c2b01b6f4", "address": "fa:16:3e:16:41:1e", "network": {"id": "67a02abd-6f15-4e26-ba0d-8a091ca98239", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-840496237-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4ec9294f6d4b4f44a72414374d646a4a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd4087b80-19", "ovs_interfaceid": "d4087b80-19fa-44d3-8dd5-406c2b01b6f4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec  6 02:46:42 np0005548731 nova_compute[232433]: 2025-12-06 07:46:42.305 232437 DEBUG nova.network.os_vif_util [None req-e7bca8cf-51e0-4453-b3b4-a01ac1c8ab05 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] Converting VIF {"id": "d4087b80-19fa-44d3-8dd5-406c2b01b6f4", "address": "fa:16:3e:16:41:1e", "network": {"id": "67a02abd-6f15-4e26-ba0d-8a091ca98239", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-840496237-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4ec9294f6d4b4f44a72414374d646a4a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd4087b80-19", "ovs_interfaceid": "d4087b80-19fa-44d3-8dd5-406c2b01b6f4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 02:46:42 np0005548731 nova_compute[232433]: 2025-12-06 07:46:42.306 232437 DEBUG nova.network.os_vif_util [None req-e7bca8cf-51e0-4453-b3b4-a01ac1c8ab05 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:16:41:1e,bridge_name='br-int',has_traffic_filtering=True,id=d4087b80-19fa-44d3-8dd5-406c2b01b6f4,network=Network(67a02abd-6f15-4e26-ba0d-8a091ca98239),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd4087b80-19') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 02:46:42 np0005548731 nova_compute[232433]: 2025-12-06 07:46:42.307 232437 DEBUG nova.objects.instance [None req-e7bca8cf-51e0-4453-b3b4-a01ac1c8ab05 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] Lazy-loading 'pci_devices' on Instance uuid f67d89a8-836a-4f47-af8d-37cf99529275 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:46:42 np0005548731 nova_compute[232433]: 2025-12-06 07:46:42.321 232437 DEBUG nova.virt.libvirt.driver [None req-e7bca8cf-51e0-4453-b3b4-a01ac1c8ab05 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] [instance: f67d89a8-836a-4f47-af8d-37cf99529275] End _get_guest_xml xml=<domain type="kvm">
Dec  6 02:46:42 np0005548731 nova_compute[232433]:  <uuid>f67d89a8-836a-4f47-af8d-37cf99529275</uuid>
Dec  6 02:46:42 np0005548731 nova_compute[232433]:  <name>instance-00000099</name>
Dec  6 02:46:42 np0005548731 nova_compute[232433]:  <memory>131072</memory>
Dec  6 02:46:42 np0005548731 nova_compute[232433]:  <vcpu>1</vcpu>
Dec  6 02:46:42 np0005548731 nova_compute[232433]:  <metadata>
Dec  6 02:46:42 np0005548731 nova_compute[232433]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec  6 02:46:42 np0005548731 nova_compute[232433]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec  6 02:46:42 np0005548731 nova_compute[232433]:      <nova:name>tempest-ServersNegativeTestJSON-server-2113241509</nova:name>
Dec  6 02:46:42 np0005548731 nova_compute[232433]:      <nova:creationTime>2025-12-06 07:46:41</nova:creationTime>
Dec  6 02:46:42 np0005548731 nova_compute[232433]:      <nova:flavor name="m1.nano">
Dec  6 02:46:42 np0005548731 nova_compute[232433]:        <nova:memory>128</nova:memory>
Dec  6 02:46:42 np0005548731 nova_compute[232433]:        <nova:disk>1</nova:disk>
Dec  6 02:46:42 np0005548731 nova_compute[232433]:        <nova:swap>0</nova:swap>
Dec  6 02:46:42 np0005548731 nova_compute[232433]:        <nova:ephemeral>0</nova:ephemeral>
Dec  6 02:46:42 np0005548731 nova_compute[232433]:        <nova:vcpus>1</nova:vcpus>
Dec  6 02:46:42 np0005548731 nova_compute[232433]:      </nova:flavor>
Dec  6 02:46:42 np0005548731 nova_compute[232433]:      <nova:owner>
Dec  6 02:46:42 np0005548731 nova_compute[232433]:        <nova:user uuid="1035ecd55ed54b57aa35fe32fb915cc5">tempest-ServersNegativeTestJSON-776446295-project-member</nova:user>
Dec  6 02:46:42 np0005548731 nova_compute[232433]:        <nova:project uuid="4ec9294f6d4b4f44a72414374d646a4a">tempest-ServersNegativeTestJSON-776446295</nova:project>
Dec  6 02:46:42 np0005548731 nova_compute[232433]:      </nova:owner>
Dec  6 02:46:42 np0005548731 nova_compute[232433]:      <nova:root type="image" uuid="1a501272-45c2-411f-9825-0c663b862e6b"/>
Dec  6 02:46:42 np0005548731 nova_compute[232433]:      <nova:ports>
Dec  6 02:46:42 np0005548731 nova_compute[232433]:        <nova:port uuid="d4087b80-19fa-44d3-8dd5-406c2b01b6f4">
Dec  6 02:46:42 np0005548731 nova_compute[232433]:          <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Dec  6 02:46:42 np0005548731 nova_compute[232433]:        </nova:port>
Dec  6 02:46:42 np0005548731 nova_compute[232433]:      </nova:ports>
Dec  6 02:46:42 np0005548731 nova_compute[232433]:    </nova:instance>
Dec  6 02:46:42 np0005548731 nova_compute[232433]:  </metadata>
Dec  6 02:46:42 np0005548731 nova_compute[232433]:  <sysinfo type="smbios">
Dec  6 02:46:42 np0005548731 nova_compute[232433]:    <system>
Dec  6 02:46:42 np0005548731 nova_compute[232433]:      <entry name="manufacturer">RDO</entry>
Dec  6 02:46:42 np0005548731 nova_compute[232433]:      <entry name="product">OpenStack Compute</entry>
Dec  6 02:46:42 np0005548731 nova_compute[232433]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec  6 02:46:42 np0005548731 nova_compute[232433]:      <entry name="serial">f67d89a8-836a-4f47-af8d-37cf99529275</entry>
Dec  6 02:46:42 np0005548731 nova_compute[232433]:      <entry name="uuid">f67d89a8-836a-4f47-af8d-37cf99529275</entry>
Dec  6 02:46:42 np0005548731 nova_compute[232433]:      <entry name="family">Virtual Machine</entry>
Dec  6 02:46:42 np0005548731 nova_compute[232433]:    </system>
Dec  6 02:46:42 np0005548731 nova_compute[232433]:  </sysinfo>
Dec  6 02:46:42 np0005548731 nova_compute[232433]:  <os>
Dec  6 02:46:42 np0005548731 nova_compute[232433]:    <type arch="x86_64" machine="q35">hvm</type>
Dec  6 02:46:42 np0005548731 nova_compute[232433]:    <boot dev="hd"/>
Dec  6 02:46:42 np0005548731 nova_compute[232433]:    <smbios mode="sysinfo"/>
Dec  6 02:46:42 np0005548731 nova_compute[232433]:  </os>
Dec  6 02:46:42 np0005548731 nova_compute[232433]:  <features>
Dec  6 02:46:42 np0005548731 nova_compute[232433]:    <acpi/>
Dec  6 02:46:42 np0005548731 nova_compute[232433]:    <apic/>
Dec  6 02:46:42 np0005548731 nova_compute[232433]:    <vmcoreinfo/>
Dec  6 02:46:42 np0005548731 nova_compute[232433]:  </features>
Dec  6 02:46:42 np0005548731 nova_compute[232433]:  <clock offset="utc">
Dec  6 02:46:42 np0005548731 nova_compute[232433]:    <timer name="pit" tickpolicy="delay"/>
Dec  6 02:46:42 np0005548731 nova_compute[232433]:    <timer name="rtc" tickpolicy="catchup"/>
Dec  6 02:46:42 np0005548731 nova_compute[232433]:    <timer name="hpet" present="no"/>
Dec  6 02:46:42 np0005548731 nova_compute[232433]:  </clock>
Dec  6 02:46:42 np0005548731 nova_compute[232433]:  <cpu mode="custom" match="exact">
Dec  6 02:46:42 np0005548731 nova_compute[232433]:    <model>Nehalem</model>
Dec  6 02:46:42 np0005548731 nova_compute[232433]:    <topology sockets="1" cores="1" threads="1"/>
Dec  6 02:46:42 np0005548731 nova_compute[232433]:  </cpu>
Dec  6 02:46:42 np0005548731 nova_compute[232433]:  <devices>
Dec  6 02:46:42 np0005548731 nova_compute[232433]:    <disk type="network" device="disk">
Dec  6 02:46:42 np0005548731 nova_compute[232433]:      <driver type="raw" cache="none"/>
Dec  6 02:46:42 np0005548731 nova_compute[232433]:      <source protocol="rbd" name="vms/f67d89a8-836a-4f47-af8d-37cf99529275_disk">
Dec  6 02:46:42 np0005548731 nova_compute[232433]:        <host name="192.168.122.100" port="6789"/>
Dec  6 02:46:42 np0005548731 nova_compute[232433]:        <host name="192.168.122.102" port="6789"/>
Dec  6 02:46:42 np0005548731 nova_compute[232433]:        <host name="192.168.122.101" port="6789"/>
Dec  6 02:46:42 np0005548731 nova_compute[232433]:      </source>
Dec  6 02:46:42 np0005548731 nova_compute[232433]:      <auth username="openstack">
Dec  6 02:46:42 np0005548731 nova_compute[232433]:        <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec  6 02:46:42 np0005548731 nova_compute[232433]:      </auth>
Dec  6 02:46:42 np0005548731 nova_compute[232433]:      <target dev="vda" bus="virtio"/>
Dec  6 02:46:42 np0005548731 nova_compute[232433]:    </disk>
Dec  6 02:46:42 np0005548731 nova_compute[232433]:    <disk type="network" device="cdrom">
Dec  6 02:46:42 np0005548731 nova_compute[232433]:      <driver type="raw" cache="none"/>
Dec  6 02:46:42 np0005548731 nova_compute[232433]:      <source protocol="rbd" name="vms/f67d89a8-836a-4f47-af8d-37cf99529275_disk.config">
Dec  6 02:46:42 np0005548731 nova_compute[232433]:        <host name="192.168.122.100" port="6789"/>
Dec  6 02:46:42 np0005548731 nova_compute[232433]:        <host name="192.168.122.102" port="6789"/>
Dec  6 02:46:42 np0005548731 nova_compute[232433]:        <host name="192.168.122.101" port="6789"/>
Dec  6 02:46:42 np0005548731 nova_compute[232433]:      </source>
Dec  6 02:46:42 np0005548731 nova_compute[232433]:      <auth username="openstack">
Dec  6 02:46:42 np0005548731 nova_compute[232433]:        <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec  6 02:46:42 np0005548731 nova_compute[232433]:      </auth>
Dec  6 02:46:42 np0005548731 nova_compute[232433]:      <target dev="sda" bus="sata"/>
Dec  6 02:46:42 np0005548731 nova_compute[232433]:    </disk>
Dec  6 02:46:42 np0005548731 nova_compute[232433]:    <interface type="ethernet">
Dec  6 02:46:42 np0005548731 nova_compute[232433]:      <mac address="fa:16:3e:16:41:1e"/>
Dec  6 02:46:42 np0005548731 nova_compute[232433]:      <model type="virtio"/>
Dec  6 02:46:42 np0005548731 nova_compute[232433]:      <driver name="vhost" rx_queue_size="512"/>
Dec  6 02:46:42 np0005548731 nova_compute[232433]:      <mtu size="1442"/>
Dec  6 02:46:42 np0005548731 nova_compute[232433]:      <target dev="tapd4087b80-19"/>
Dec  6 02:46:42 np0005548731 nova_compute[232433]:    </interface>
Dec  6 02:46:42 np0005548731 nova_compute[232433]:    <serial type="pty">
Dec  6 02:46:42 np0005548731 nova_compute[232433]:      <log file="/var/lib/nova/instances/f67d89a8-836a-4f47-af8d-37cf99529275/console.log" append="off"/>
Dec  6 02:46:42 np0005548731 nova_compute[232433]:    </serial>
Dec  6 02:46:42 np0005548731 nova_compute[232433]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec  6 02:46:42 np0005548731 nova_compute[232433]:    <video>
Dec  6 02:46:42 np0005548731 nova_compute[232433]:      <model type="virtio"/>
Dec  6 02:46:42 np0005548731 nova_compute[232433]:    </video>
Dec  6 02:46:42 np0005548731 nova_compute[232433]:    <input type="tablet" bus="usb"/>
Dec  6 02:46:42 np0005548731 nova_compute[232433]:    <input type="keyboard" bus="usb"/>
Dec  6 02:46:42 np0005548731 nova_compute[232433]:    <rng model="virtio">
Dec  6 02:46:42 np0005548731 nova_compute[232433]:      <backend model="random">/dev/urandom</backend>
Dec  6 02:46:42 np0005548731 nova_compute[232433]:    </rng>
Dec  6 02:46:42 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root"/>
Dec  6 02:46:42 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:46:42 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:46:42 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:46:42 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:46:42 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:46:42 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:46:42 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:46:42 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:46:42 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:46:42 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:46:42 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:46:42 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:46:42 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:46:42 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:46:42 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:46:42 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:46:42 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:46:42 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:46:42 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:46:42 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:46:42 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:46:42 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:46:42 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:46:42 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:46:42 np0005548731 nova_compute[232433]:    <controller type="usb" index="0"/>
Dec  6 02:46:42 np0005548731 nova_compute[232433]:    <memballoon model="virtio">
Dec  6 02:46:42 np0005548731 nova_compute[232433]:      <stats period="10"/>
Dec  6 02:46:42 np0005548731 nova_compute[232433]:    </memballoon>
Dec  6 02:46:42 np0005548731 nova_compute[232433]:  </devices>
Dec  6 02:46:42 np0005548731 nova_compute[232433]: </domain>
Dec  6 02:46:42 np0005548731 nova_compute[232433]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec  6 02:46:42 np0005548731 nova_compute[232433]: 2025-12-06 07:46:42.322 232437 DEBUG nova.compute.manager [None req-e7bca8cf-51e0-4453-b3b4-a01ac1c8ab05 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] [instance: f67d89a8-836a-4f47-af8d-37cf99529275] Preparing to wait for external event network-vif-plugged-d4087b80-19fa-44d3-8dd5-406c2b01b6f4 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec  6 02:46:42 np0005548731 nova_compute[232433]: 2025-12-06 07:46:42.323 232437 DEBUG oslo_concurrency.lockutils [None req-e7bca8cf-51e0-4453-b3b4-a01ac1c8ab05 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] Acquiring lock "f67d89a8-836a-4f47-af8d-37cf99529275-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:46:42 np0005548731 nova_compute[232433]: 2025-12-06 07:46:42.323 232437 DEBUG oslo_concurrency.lockutils [None req-e7bca8cf-51e0-4453-b3b4-a01ac1c8ab05 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] Lock "f67d89a8-836a-4f47-af8d-37cf99529275-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:46:42 np0005548731 nova_compute[232433]: 2025-12-06 07:46:42.323 232437 DEBUG oslo_concurrency.lockutils [None req-e7bca8cf-51e0-4453-b3b4-a01ac1c8ab05 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] Lock "f67d89a8-836a-4f47-af8d-37cf99529275-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:46:42 np0005548731 nova_compute[232433]: 2025-12-06 07:46:42.324 232437 DEBUG nova.virt.libvirt.vif [None req-e7bca8cf-51e0-4453-b3b4-a01ac1c8ab05 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-12-06T07:43:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersNegativeTestJSON-server-2113241509',display_name='tempest-ServersNegativeTestJSON-server-2113241509',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serversnegativetestjson-server-2113241509',id=153,image_ref='1a501272-45c2-411f-9825-0c663b862e6b',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-06T07:43:57Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=4,progress=0,project_id='4ec9294f6d4b4f44a72414374d646a4a',ramdisk_id='',reservation_id='r-r3odtvmp',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersNegativeTestJSON-776446295',owner_user_name='tempest-ServersNegativeTestJSON-776446295-project-member',shelved_at='2025-12-06T07:46:29.244466',shelved_host='compute-2.ctlplane.example.com',shelved_image_id='1a501272-45c2-411f-9825-0c663b862e6b'},tags=<?>,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:46:36Z,user_data=None,user_id='1035ecd55ed54b57aa35fe32fb915cc5',uuid=f67d89a8-836a-4f47-af8d-37cf99529275,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='shelved_offloaded') vif={"id": "d4087b80-19fa-44d3-8dd5-406c2b01b6f4", "address": "fa:16:3e:16:41:1e", "network": {"id": "67a02abd-6f15-4e26-ba0d-8a091ca98239", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-840496237-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4ec9294f6d4b4f44a72414374d646a4a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd4087b80-19", "ovs_interfaceid": "d4087b80-19fa-44d3-8dd5-406c2b01b6f4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec  6 02:46:42 np0005548731 nova_compute[232433]: 2025-12-06 07:46:42.324 232437 DEBUG nova.network.os_vif_util [None req-e7bca8cf-51e0-4453-b3b4-a01ac1c8ab05 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] Converting VIF {"id": "d4087b80-19fa-44d3-8dd5-406c2b01b6f4", "address": "fa:16:3e:16:41:1e", "network": {"id": "67a02abd-6f15-4e26-ba0d-8a091ca98239", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-840496237-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4ec9294f6d4b4f44a72414374d646a4a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd4087b80-19", "ovs_interfaceid": "d4087b80-19fa-44d3-8dd5-406c2b01b6f4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 02:46:42 np0005548731 nova_compute[232433]: 2025-12-06 07:46:42.325 232437 DEBUG nova.network.os_vif_util [None req-e7bca8cf-51e0-4453-b3b4-a01ac1c8ab05 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:16:41:1e,bridge_name='br-int',has_traffic_filtering=True,id=d4087b80-19fa-44d3-8dd5-406c2b01b6f4,network=Network(67a02abd-6f15-4e26-ba0d-8a091ca98239),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd4087b80-19') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 02:46:42 np0005548731 nova_compute[232433]: 2025-12-06 07:46:42.325 232437 DEBUG os_vif [None req-e7bca8cf-51e0-4453-b3b4-a01ac1c8ab05 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:16:41:1e,bridge_name='br-int',has_traffic_filtering=True,id=d4087b80-19fa-44d3-8dd5-406c2b01b6f4,network=Network(67a02abd-6f15-4e26-ba0d-8a091ca98239),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd4087b80-19') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec  6 02:46:42 np0005548731 nova_compute[232433]: 2025-12-06 07:46:42.326 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:46:42 np0005548731 nova_compute[232433]: 2025-12-06 07:46:42.326 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:46:42 np0005548731 nova_compute[232433]: 2025-12-06 07:46:42.327 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  6 02:46:42 np0005548731 nova_compute[232433]: 2025-12-06 07:46:42.329 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:46:42 np0005548731 nova_compute[232433]: 2025-12-06 07:46:42.329 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd4087b80-19, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:46:42 np0005548731 nova_compute[232433]: 2025-12-06 07:46:42.330 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapd4087b80-19, col_values=(('external_ids', {'iface-id': 'd4087b80-19fa-44d3-8dd5-406c2b01b6f4', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:16:41:1e', 'vm-uuid': 'f67d89a8-836a-4f47-af8d-37cf99529275'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:46:42 np0005548731 nova_compute[232433]: 2025-12-06 07:46:42.331 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:46:42 np0005548731 NetworkManager[49182]: <info>  [1765007202.3320] manager: (tapd4087b80-19): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/340)
Dec  6 02:46:42 np0005548731 nova_compute[232433]: 2025-12-06 07:46:42.334 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  6 02:46:42 np0005548731 nova_compute[232433]: 2025-12-06 07:46:42.338 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:46:42 np0005548731 nova_compute[232433]: 2025-12-06 07:46:42.339 232437 INFO os_vif [None req-e7bca8cf-51e0-4453-b3b4-a01ac1c8ab05 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:16:41:1e,bridge_name='br-int',has_traffic_filtering=True,id=d4087b80-19fa-44d3-8dd5-406c2b01b6f4,network=Network(67a02abd-6f15-4e26-ba0d-8a091ca98239),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd4087b80-19')#033[00m
Dec  6 02:46:42 np0005548731 nova_compute[232433]: 2025-12-06 07:46:42.380 232437 DEBUG nova.virt.libvirt.driver [None req-e7bca8cf-51e0-4453-b3b4-a01ac1c8ab05 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  6 02:46:42 np0005548731 nova_compute[232433]: 2025-12-06 07:46:42.381 232437 DEBUG nova.virt.libvirt.driver [None req-e7bca8cf-51e0-4453-b3b4-a01ac1c8ab05 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  6 02:46:42 np0005548731 nova_compute[232433]: 2025-12-06 07:46:42.381 232437 DEBUG nova.virt.libvirt.driver [None req-e7bca8cf-51e0-4453-b3b4-a01ac1c8ab05 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] No VIF found with MAC fa:16:3e:16:41:1e, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec  6 02:46:42 np0005548731 nova_compute[232433]: 2025-12-06 07:46:42.381 232437 INFO nova.virt.libvirt.driver [None req-e7bca8cf-51e0-4453-b3b4-a01ac1c8ab05 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] [instance: f67d89a8-836a-4f47-af8d-37cf99529275] Using config drive#033[00m
Dec  6 02:46:42 np0005548731 nova_compute[232433]: 2025-12-06 07:46:42.403 232437 DEBUG nova.storage.rbd_utils [None req-e7bca8cf-51e0-4453-b3b4-a01ac1c8ab05 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] rbd image f67d89a8-836a-4f47-af8d-37cf99529275_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:46:42 np0005548731 nova_compute[232433]: 2025-12-06 07:46:42.434 232437 DEBUG nova.objects.instance [None req-e7bca8cf-51e0-4453-b3b4-a01ac1c8ab05 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] Lazy-loading 'ec2_ids' on Instance uuid f67d89a8-836a-4f47-af8d-37cf99529275 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:46:42 np0005548731 nova_compute[232433]: 2025-12-06 07:46:42.495 232437 DEBUG nova.objects.instance [None req-e7bca8cf-51e0-4453-b3b4-a01ac1c8ab05 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] Lazy-loading 'keypairs' on Instance uuid f67d89a8-836a-4f47-af8d-37cf99529275 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:46:42 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:46:42 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:46:42 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:46:42.577 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:46:43 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:46:43 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:46:43 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:46:43.521 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:46:43 np0005548731 nova_compute[232433]: 2025-12-06 07:46:43.625 232437 INFO nova.virt.libvirt.driver [None req-e7bca8cf-51e0-4453-b3b4-a01ac1c8ab05 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] [instance: f67d89a8-836a-4f47-af8d-37cf99529275] Creating config drive at /var/lib/nova/instances/f67d89a8-836a-4f47-af8d-37cf99529275/disk.config#033[00m
Dec  6 02:46:43 np0005548731 nova_compute[232433]: 2025-12-06 07:46:43.630 232437 DEBUG oslo_concurrency.processutils [None req-e7bca8cf-51e0-4453-b3b4-a01ac1c8ab05 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/f67d89a8-836a-4f47-af8d-37cf99529275/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpx2wb59qy execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:46:43 np0005548731 nova_compute[232433]: 2025-12-06 07:46:43.680 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:46:43 np0005548731 nova_compute[232433]: 2025-12-06 07:46:43.782 232437 DEBUG oslo_concurrency.processutils [None req-e7bca8cf-51e0-4453-b3b4-a01ac1c8ab05 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/f67d89a8-836a-4f47-af8d-37cf99529275/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpx2wb59qy" returned: 0 in 0.152s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:46:43 np0005548731 nova_compute[232433]: 2025-12-06 07:46:43.812 232437 DEBUG nova.storage.rbd_utils [None req-e7bca8cf-51e0-4453-b3b4-a01ac1c8ab05 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] rbd image f67d89a8-836a-4f47-af8d-37cf99529275_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:46:43 np0005548731 nova_compute[232433]: 2025-12-06 07:46:43.816 232437 DEBUG oslo_concurrency.processutils [None req-e7bca8cf-51e0-4453-b3b4-a01ac1c8ab05 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/f67d89a8-836a-4f47-af8d-37cf99529275/disk.config f67d89a8-836a-4f47-af8d-37cf99529275_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:46:43 np0005548731 nova_compute[232433]: 2025-12-06 07:46:43.905 232437 DEBUG nova.network.neutron [req-dbe862d1-f19d-443d-88ae-a47680f727e1 req-4ebd566c-3407-43c1-846e-26b183af6f11 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f67d89a8-836a-4f47-af8d-37cf99529275] Updated VIF entry in instance network info cache for port d4087b80-19fa-44d3-8dd5-406c2b01b6f4. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  6 02:46:43 np0005548731 nova_compute[232433]: 2025-12-06 07:46:43.906 232437 DEBUG nova.network.neutron [req-dbe862d1-f19d-443d-88ae-a47680f727e1 req-4ebd566c-3407-43c1-846e-26b183af6f11 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f67d89a8-836a-4f47-af8d-37cf99529275] Updating instance_info_cache with network_info: [{"id": "d4087b80-19fa-44d3-8dd5-406c2b01b6f4", "address": "fa:16:3e:16:41:1e", "network": {"id": "67a02abd-6f15-4e26-ba0d-8a091ca98239", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-840496237-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4ec9294f6d4b4f44a72414374d646a4a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd4087b80-19", "ovs_interfaceid": "d4087b80-19fa-44d3-8dd5-406c2b01b6f4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 02:46:43 np0005548731 nova_compute[232433]: 2025-12-06 07:46:43.929 232437 DEBUG oslo_concurrency.lockutils [req-dbe862d1-f19d-443d-88ae-a47680f727e1 req-4ebd566c-3407-43c1-846e-26b183af6f11 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-f67d89a8-836a-4f47-af8d-37cf99529275" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 02:46:44 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 02:46:44 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 02:46:44 np0005548731 nova_compute[232433]: 2025-12-06 07:46:44.278 232437 DEBUG oslo_concurrency.processutils [None req-e7bca8cf-51e0-4453-b3b4-a01ac1c8ab05 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/f67d89a8-836a-4f47-af8d-37cf99529275/disk.config f67d89a8-836a-4f47-af8d-37cf99529275_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:46:44 np0005548731 nova_compute[232433]: 2025-12-06 07:46:44.279 232437 INFO nova.virt.libvirt.driver [None req-e7bca8cf-51e0-4453-b3b4-a01ac1c8ab05 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] [instance: f67d89a8-836a-4f47-af8d-37cf99529275] Deleting local config drive /var/lib/nova/instances/f67d89a8-836a-4f47-af8d-37cf99529275/disk.config because it was imported into RBD.#033[00m
Dec  6 02:46:44 np0005548731 kernel: tapd4087b80-19: entered promiscuous mode
Dec  6 02:46:44 np0005548731 NetworkManager[49182]: <info>  [1765007204.3367] manager: (tapd4087b80-19): new Tun device (/org/freedesktop/NetworkManager/Devices/341)
Dec  6 02:46:44 np0005548731 ovn_controller[133927]: 2025-12-06T07:46:44Z|00737|binding|INFO|Claiming lport d4087b80-19fa-44d3-8dd5-406c2b01b6f4 for this chassis.
Dec  6 02:46:44 np0005548731 ovn_controller[133927]: 2025-12-06T07:46:44Z|00738|binding|INFO|d4087b80-19fa-44d3-8dd5-406c2b01b6f4: Claiming fa:16:3e:16:41:1e 10.100.0.7
Dec  6 02:46:44 np0005548731 nova_compute[232433]: 2025-12-06 07:46:44.340 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:46:44 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:46:44.345 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:16:41:1e 10.100.0.7'], port_security=['fa:16:3e:16:41:1e 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'f67d89a8-836a-4f47-af8d-37cf99529275', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-67a02abd-6f15-4e26-ba0d-8a091ca98239', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4ec9294f6d4b4f44a72414374d646a4a', 'neutron:revision_number': '7', 'neutron:security_group_ids': 'ef8d01c6-8791-4eef-8d34-906ccbdd6237', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a13aa6e4-a519-406f-87b7-05ba3d74a296, chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], logical_port=d4087b80-19fa-44d3-8dd5-406c2b01b6f4) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 02:46:44 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:46:44.346 143965 INFO neutron.agent.ovn.metadata.agent [-] Port d4087b80-19fa-44d3-8dd5-406c2b01b6f4 in datapath 67a02abd-6f15-4e26-ba0d-8a091ca98239 bound to our chassis#033[00m
Dec  6 02:46:44 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:46:44.348 143965 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 67a02abd-6f15-4e26-ba0d-8a091ca98239#033[00m
Dec  6 02:46:44 np0005548731 ovn_controller[133927]: 2025-12-06T07:46:44Z|00739|binding|INFO|Setting lport d4087b80-19fa-44d3-8dd5-406c2b01b6f4 ovn-installed in OVS
Dec  6 02:46:44 np0005548731 ovn_controller[133927]: 2025-12-06T07:46:44Z|00740|binding|INFO|Setting lport d4087b80-19fa-44d3-8dd5-406c2b01b6f4 up in Southbound
Dec  6 02:46:44 np0005548731 nova_compute[232433]: 2025-12-06 07:46:44.358 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:46:44 np0005548731 nova_compute[232433]: 2025-12-06 07:46:44.361 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:46:44 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:46:44.360 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[4b8c5038-38f7-412a-8265-e11c75dbd7a4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:46:44 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:46:44.362 143965 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap67a02abd-61 in ovnmeta-67a02abd-6f15-4e26-ba0d-8a091ca98239 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Dec  6 02:46:44 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:46:44.364 236668 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap67a02abd-60 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Dec  6 02:46:44 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:46:44.364 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[62c5e121-d797-4354-8f6f-9346aa0e7f84]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:46:44 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:46:44.366 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[2fde7c18-8db1-4f6d-8394-8218bfa6584c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:46:44 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:46:44.386 144079 DEBUG oslo.privsep.daemon [-] privsep: reply[a83e3a6e-b6c0-4adf-ba70-713a871d24e6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:46:44 np0005548731 systemd-machined[195355]: New machine qemu-75-instance-00000099.
Dec  6 02:46:44 np0005548731 systemd-udevd[303354]: Network interface NamePolicy= disabled on kernel command line.
Dec  6 02:46:44 np0005548731 systemd[1]: Started Virtual Machine qemu-75-instance-00000099.
Dec  6 02:46:44 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:46:44.400 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[62d5dff6-3b71-452f-87de-f95d5bbc8deb]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:46:44 np0005548731 NetworkManager[49182]: <info>  [1765007204.4186] device (tapd4087b80-19): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec  6 02:46:44 np0005548731 NetworkManager[49182]: <info>  [1765007204.4207] device (tapd4087b80-19): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec  6 02:46:44 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:46:44.437 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[e517bcb6-1ead-44f7-8fd0-1f0937461ef5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:46:44 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:46:44.443 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[8a3ba4e2-a244-402f-b331-0767e1c09fa7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:46:44 np0005548731 NetworkManager[49182]: <info>  [1765007204.4461] manager: (tap67a02abd-60): new Veth device (/org/freedesktop/NetworkManager/Devices/342)
Dec  6 02:46:44 np0005548731 podman[303327]: 2025-12-06 07:46:44.453120799 +0000 UTC m=+0.078020832 container health_status ae4fd08cdec31d6ae2a6b91ffff7deb6ca5a8375cb52ee4b685cc6f25796824e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Dec  6 02:46:44 np0005548731 podman[303324]: 2025-12-06 07:46:44.476627013 +0000 UTC m=+0.099319833 container health_status 26c2ce9bdb83c6db5ccad7f5b2a7cb4e9354ab0235ad2f942ea42c8feda2142b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Dec  6 02:46:44 np0005548731 podman[303326]: 2025-12-06 07:46:44.480317413 +0000 UTC m=+0.105186625 container health_status a118c3becf56cfbcae208065771ebf10a65162bb7b96389971293e534823fb78 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=ovn_controller, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Dec  6 02:46:44 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:46:44.479 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[31ff9782-23e7-47d7-b743-99dd95daa61d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:46:44 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:46:44.483 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[3979b805-0094-488f-95d7-f67d89dbee6c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:46:44 np0005548731 NetworkManager[49182]: <info>  [1765007204.5034] device (tap67a02abd-60): carrier: link connected
Dec  6 02:46:44 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:46:44.510 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[b401f0eb-7047-4f34-9c4c-8e0e6dbd7054]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:46:44 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:46:44.527 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[f6959e2c-101a-4199-a4a4-2f6ea5fee4e8]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap67a02abd-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:43:89:ec'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 224], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 746651, 'reachable_time': 31489, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 303418, 'error': None, 'target': 'ovnmeta-67a02abd-6f15-4e26-ba0d-8a091ca98239', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:46:44 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:46:44.543 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[0810b51b-3de0-4ebd-875a-78d304f81635]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe43:89ec'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 746651, 'tstamp': 746651}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 303419, 'error': None, 'target': 'ovnmeta-67a02abd-6f15-4e26-ba0d-8a091ca98239', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:46:44 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:46:44.557 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[450e1e6e-1d75-4e7e-94f0-2ab5f9077d49]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap67a02abd-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:43:89:ec'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 224], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 746651, 'reachable_time': 31489, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 303420, 'error': None, 'target': 'ovnmeta-67a02abd-6f15-4e26-ba0d-8a091ca98239', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:46:44 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:46:44 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:46:44 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:46:44.579 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:46:44 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:46:44.586 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[1d22bdef-dd18-4fa9-b4f6-6665c4cb144b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:46:44 np0005548731 nova_compute[232433]: 2025-12-06 07:46:44.632 232437 DEBUG nova.compute.manager [req-1c39c378-32f6-4893-8c5f-33fe270c0411 req-7a24f5d4-b176-4a49-b521-aae0daf36060 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f67d89a8-836a-4f47-af8d-37cf99529275] Received event network-vif-plugged-d4087b80-19fa-44d3-8dd5-406c2b01b6f4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:46:44 np0005548731 nova_compute[232433]: 2025-12-06 07:46:44.632 232437 DEBUG oslo_concurrency.lockutils [req-1c39c378-32f6-4893-8c5f-33fe270c0411 req-7a24f5d4-b176-4a49-b521-aae0daf36060 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "f67d89a8-836a-4f47-af8d-37cf99529275-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:46:44 np0005548731 nova_compute[232433]: 2025-12-06 07:46:44.633 232437 DEBUG oslo_concurrency.lockutils [req-1c39c378-32f6-4893-8c5f-33fe270c0411 req-7a24f5d4-b176-4a49-b521-aae0daf36060 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "f67d89a8-836a-4f47-af8d-37cf99529275-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:46:44 np0005548731 nova_compute[232433]: 2025-12-06 07:46:44.633 232437 DEBUG oslo_concurrency.lockutils [req-1c39c378-32f6-4893-8c5f-33fe270c0411 req-7a24f5d4-b176-4a49-b521-aae0daf36060 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "f67d89a8-836a-4f47-af8d-37cf99529275-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:46:44 np0005548731 nova_compute[232433]: 2025-12-06 07:46:44.633 232437 DEBUG nova.compute.manager [req-1c39c378-32f6-4893-8c5f-33fe270c0411 req-7a24f5d4-b176-4a49-b521-aae0daf36060 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f67d89a8-836a-4f47-af8d-37cf99529275] Processing event network-vif-plugged-d4087b80-19fa-44d3-8dd5-406c2b01b6f4 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Dec  6 02:46:44 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:46:44.647 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[0fb872fb-1e50-4533-b645-fff05c446cff]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:46:44 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:46:44.649 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap67a02abd-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:46:44 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:46:44.649 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  6 02:46:44 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:46:44.650 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap67a02abd-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:46:44 np0005548731 NetworkManager[49182]: <info>  [1765007204.6524] manager: (tap67a02abd-60): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/343)
Dec  6 02:46:44 np0005548731 kernel: tap67a02abd-60: entered promiscuous mode
Dec  6 02:46:44 np0005548731 nova_compute[232433]: 2025-12-06 07:46:44.651 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:46:44 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:46:44.655 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap67a02abd-60, col_values=(('external_ids', {'iface-id': 'f1ca157c-f88b-4351-b03a-b04a75537062'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:46:44 np0005548731 ovn_controller[133927]: 2025-12-06T07:46:44Z|00741|binding|INFO|Releasing lport f1ca157c-f88b-4351-b03a-b04a75537062 from this chassis (sb_readonly=0)
Dec  6 02:46:44 np0005548731 nova_compute[232433]: 2025-12-06 07:46:44.656 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:46:44 np0005548731 nova_compute[232433]: 2025-12-06 07:46:44.675 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:46:44 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:46:44.677 143965 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/67a02abd-6f15-4e26-ba0d-8a091ca98239.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/67a02abd-6f15-4e26-ba0d-8a091ca98239.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Dec  6 02:46:44 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:46:44.679 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[d64c273e-672f-470c-bdd1-eab0956b5c31]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:46:44 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:46:44.681 143965 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec  6 02:46:44 np0005548731 ovn_metadata_agent[143960]: global
Dec  6 02:46:44 np0005548731 ovn_metadata_agent[143960]:    log         /dev/log local0 debug
Dec  6 02:46:44 np0005548731 ovn_metadata_agent[143960]:    log-tag     haproxy-metadata-proxy-67a02abd-6f15-4e26-ba0d-8a091ca98239
Dec  6 02:46:44 np0005548731 ovn_metadata_agent[143960]:    user        root
Dec  6 02:46:44 np0005548731 ovn_metadata_agent[143960]:    group       root
Dec  6 02:46:44 np0005548731 ovn_metadata_agent[143960]:    maxconn     1024
Dec  6 02:46:44 np0005548731 ovn_metadata_agent[143960]:    pidfile     /var/lib/neutron/external/pids/67a02abd-6f15-4e26-ba0d-8a091ca98239.pid.haproxy
Dec  6 02:46:44 np0005548731 ovn_metadata_agent[143960]:    daemon
Dec  6 02:46:44 np0005548731 ovn_metadata_agent[143960]: 
Dec  6 02:46:44 np0005548731 ovn_metadata_agent[143960]: defaults
Dec  6 02:46:44 np0005548731 ovn_metadata_agent[143960]:    log global
Dec  6 02:46:44 np0005548731 ovn_metadata_agent[143960]:    mode http
Dec  6 02:46:44 np0005548731 ovn_metadata_agent[143960]:    option httplog
Dec  6 02:46:44 np0005548731 ovn_metadata_agent[143960]:    option dontlognull
Dec  6 02:46:44 np0005548731 ovn_metadata_agent[143960]:    option http-server-close
Dec  6 02:46:44 np0005548731 ovn_metadata_agent[143960]:    option forwardfor
Dec  6 02:46:44 np0005548731 ovn_metadata_agent[143960]:    retries                 3
Dec  6 02:46:44 np0005548731 ovn_metadata_agent[143960]:    timeout http-request    30s
Dec  6 02:46:44 np0005548731 ovn_metadata_agent[143960]:    timeout connect         30s
Dec  6 02:46:44 np0005548731 ovn_metadata_agent[143960]:    timeout client          32s
Dec  6 02:46:44 np0005548731 ovn_metadata_agent[143960]:    timeout server          32s
Dec  6 02:46:44 np0005548731 ovn_metadata_agent[143960]:    timeout http-keep-alive 30s
Dec  6 02:46:44 np0005548731 ovn_metadata_agent[143960]: 
Dec  6 02:46:44 np0005548731 ovn_metadata_agent[143960]: 
Dec  6 02:46:44 np0005548731 ovn_metadata_agent[143960]: listen listener
Dec  6 02:46:44 np0005548731 ovn_metadata_agent[143960]:    bind 169.254.169.254:80
Dec  6 02:46:44 np0005548731 ovn_metadata_agent[143960]:    server metadata /var/lib/neutron/metadata_proxy
Dec  6 02:46:44 np0005548731 ovn_metadata_agent[143960]:    http-request add-header X-OVN-Network-ID 67a02abd-6f15-4e26-ba0d-8a091ca98239
Dec  6 02:46:44 np0005548731 ovn_metadata_agent[143960]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Dec  6 02:46:44 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:46:44.683 143965 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-67a02abd-6f15-4e26-ba0d-8a091ca98239', 'env', 'PROCESS_TAG=haproxy-67a02abd-6f15-4e26-ba0d-8a091ca98239', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/67a02abd-6f15-4e26-ba0d-8a091ca98239.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Dec  6 02:46:44 np0005548731 nova_compute[232433]: 2025-12-06 07:46:44.726 232437 DEBUG oslo_concurrency.lockutils [None req-70b0497e-03ab-45d7-b3e4-e940eccb1224 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Acquiring lock "2b4b3181-da12-4705-9215-7ee5b869102b" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:46:44 np0005548731 nova_compute[232433]: 2025-12-06 07:46:44.726 232437 DEBUG oslo_concurrency.lockutils [None req-70b0497e-03ab-45d7-b3e4-e940eccb1224 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Lock "2b4b3181-da12-4705-9215-7ee5b869102b" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:46:44 np0005548731 nova_compute[232433]: 2025-12-06 07:46:44.746 232437 DEBUG nova.objects.instance [None req-70b0497e-03ab-45d7-b3e4-e940eccb1224 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Lazy-loading 'flavor' on Instance uuid 2b4b3181-da12-4705-9215-7ee5b869102b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:46:44 np0005548731 nova_compute[232433]: 2025-12-06 07:46:44.799 232437 DEBUG oslo_concurrency.lockutils [None req-70b0497e-03ab-45d7-b3e4-e940eccb1224 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Lock "2b4b3181-da12-4705-9215-7ee5b869102b" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.073s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:46:44 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e351 e351: 3 total, 3 up, 3 in
Dec  6 02:46:44 np0005548731 ceph-mon[77458]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #112. Immutable memtables: 0.
Dec  6 02:46:44 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:46:44.847827) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec  6 02:46:44 np0005548731 ceph-mon[77458]: rocksdb: [db/flush_job.cc:856] [default] [JOB 69] Flushing memtable with next log file: 112
Dec  6 02:46:44 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765007204847860, "job": 69, "event": "flush_started", "num_memtables": 1, "num_entries": 2496, "num_deletes": 257, "total_data_size": 5672224, "memory_usage": 5739088, "flush_reason": "Manual Compaction"}
Dec  6 02:46:44 np0005548731 ceph-mon[77458]: rocksdb: [db/flush_job.cc:885] [default] [JOB 69] Level-0 flush table #113: started
Dec  6 02:46:44 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765007204867856, "cf_name": "default", "job": 69, "event": "table_file_creation", "file_number": 113, "file_size": 3714441, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 56671, "largest_seqno": 59161, "table_properties": {"data_size": 3704248, "index_size": 6495, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2629, "raw_key_size": 21860, "raw_average_key_size": 20, "raw_value_size": 3683631, "raw_average_value_size": 3528, "num_data_blocks": 281, "num_entries": 1044, "num_filter_entries": 1044, "num_deletions": 257, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765007009, "oldest_key_time": 1765007009, "file_creation_time": 1765007204, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bc01f9a1-fda8-4008-aee1-1fa5ff4371a1", "db_session_id": "CWBL8WMDM4D79BU86E9T", "orig_file_number": 113, "seqno_to_time_mapping": "N/A"}}
Dec  6 02:46:44 np0005548731 ceph-mon[77458]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 69] Flush lasted 20099 microseconds, and 7363 cpu microseconds.
Dec  6 02:46:44 np0005548731 ceph-mon[77458]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  6 02:46:44 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:46:44.867919) [db/flush_job.cc:967] [default] [JOB 69] Level-0 flush table #113: 3714441 bytes OK
Dec  6 02:46:44 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:46:44.867940) [db/memtable_list.cc:519] [default] Level-0 commit table #113 started
Dec  6 02:46:44 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:46:44.869640) [db/memtable_list.cc:722] [default] Level-0 commit table #113: memtable #1 done
Dec  6 02:46:44 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:46:44.869653) EVENT_LOG_v1 {"time_micros": 1765007204869648, "job": 69, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec  6 02:46:44 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:46:44.869669) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec  6 02:46:44 np0005548731 ceph-mon[77458]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 69] Try to delete WAL files size 5661167, prev total WAL file size 5661167, number of live WAL files 2.
Dec  6 02:46:44 np0005548731 ceph-mon[77458]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000109.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  6 02:46:44 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:46:44.870865) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730034373639' seq:72057594037927935, type:22 .. '7061786F730035303231' seq:0, type:0; will stop at (end)
Dec  6 02:46:44 np0005548731 ceph-mon[77458]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 70] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec  6 02:46:44 np0005548731 ceph-mon[77458]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 69 Base level 0, inputs: [113(3627KB)], [111(10MB)]
Dec  6 02:46:44 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765007204870906, "job": 70, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [113], "files_L6": [111], "score": -1, "input_data_size": 14486867, "oldest_snapshot_seqno": -1}
Dec  6 02:46:44 np0005548731 ceph-mon[77458]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 70] Generated table #114: 9108 keys, 12445654 bytes, temperature: kUnknown
Dec  6 02:46:44 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765007204958832, "cf_name": "default", "job": 70, "event": "table_file_creation", "file_number": 114, "file_size": 12445654, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12385807, "index_size": 35991, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 22789, "raw_key_size": 237280, "raw_average_key_size": 26, "raw_value_size": 12224803, "raw_average_value_size": 1342, "num_data_blocks": 1400, "num_entries": 9108, "num_filter_entries": 9108, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765002564, "oldest_key_time": 0, "file_creation_time": 1765007204, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bc01f9a1-fda8-4008-aee1-1fa5ff4371a1", "db_session_id": "CWBL8WMDM4D79BU86E9T", "orig_file_number": 114, "seqno_to_time_mapping": "N/A"}}
Dec  6 02:46:44 np0005548731 ceph-mon[77458]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  6 02:46:44 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:46:44.959170) [db/compaction/compaction_job.cc:1663] [default] [JOB 70] Compacted 1@0 + 1@6 files to L6 => 12445654 bytes
Dec  6 02:46:44 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:46:44.961274) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 164.6 rd, 141.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.5, 10.3 +0.0 blob) out(11.9 +0.0 blob), read-write-amplify(7.3) write-amplify(3.4) OK, records in: 9636, records dropped: 528 output_compression: NoCompression
Dec  6 02:46:44 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:46:44.961304) EVENT_LOG_v1 {"time_micros": 1765007204961291, "job": 70, "event": "compaction_finished", "compaction_time_micros": 88018, "compaction_time_cpu_micros": 27928, "output_level": 6, "num_output_files": 1, "total_output_size": 12445654, "num_input_records": 9636, "num_output_records": 9108, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec  6 02:46:44 np0005548731 ceph-mon[77458]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000113.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  6 02:46:44 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765007204962598, "job": 70, "event": "table_file_deletion", "file_number": 113}
Dec  6 02:46:44 np0005548731 ceph-mon[77458]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000111.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  6 02:46:44 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765007204966292, "job": 70, "event": "table_file_deletion", "file_number": 111}
Dec  6 02:46:44 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:46:44.870811) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 02:46:44 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:46:44.966340) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 02:46:44 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:46:44.966346) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 02:46:44 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:46:44.966349) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 02:46:44 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:46:44.966352) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 02:46:44 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:46:44.966355) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 02:46:45 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e351 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:46:45 np0005548731 podman[303450]: 2025-12-06 07:46:45.056135542 +0000 UTC m=+0.048691139 container create 7611ee79a96f6c342b121961796fd839c2a20601f168bc773e5b1deae6deded0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-67a02abd-6f15-4e26-ba0d-8a091ca98239, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec  6 02:46:45 np0005548731 systemd[1]: Started libpod-conmon-7611ee79a96f6c342b121961796fd839c2a20601f168bc773e5b1deae6deded0.scope.
Dec  6 02:46:45 np0005548731 systemd[1]: Started libcrun container.
Dec  6 02:46:45 np0005548731 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a3462333cce9fe9b224782a8d9933f8f19479ef9415736baf9ff89afa2b339ba/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec  6 02:46:45 np0005548731 podman[303450]: 2025-12-06 07:46:45.029970284 +0000 UTC m=+0.022525901 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec  6 02:46:45 np0005548731 podman[303450]: 2025-12-06 07:46:45.131325055 +0000 UTC m=+0.123880672 container init 7611ee79a96f6c342b121961796fd839c2a20601f168bc773e5b1deae6deded0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-67a02abd-6f15-4e26-ba0d-8a091ca98239, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3)
Dec  6 02:46:45 np0005548731 podman[303450]: 2025-12-06 07:46:45.137242909 +0000 UTC m=+0.129798516 container start 7611ee79a96f6c342b121961796fd839c2a20601f168bc773e5b1deae6deded0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-67a02abd-6f15-4e26-ba0d-8a091ca98239, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Dec  6 02:46:45 np0005548731 neutron-haproxy-ovnmeta-67a02abd-6f15-4e26-ba0d-8a091ca98239[303498]: [NOTICE]   (303510) : New worker (303513) forked
Dec  6 02:46:45 np0005548731 neutron-haproxy-ovnmeta-67a02abd-6f15-4e26-ba0d-8a091ca98239[303498]: [NOTICE]   (303510) : Loading success.
Dec  6 02:46:45 np0005548731 nova_compute[232433]: 2025-12-06 07:46:45.182 232437 DEBUG oslo_concurrency.lockutils [None req-70b0497e-03ab-45d7-b3e4-e940eccb1224 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Acquiring lock "2b4b3181-da12-4705-9215-7ee5b869102b" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:46:45 np0005548731 nova_compute[232433]: 2025-12-06 07:46:45.182 232437 DEBUG oslo_concurrency.lockutils [None req-70b0497e-03ab-45d7-b3e4-e940eccb1224 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Lock "2b4b3181-da12-4705-9215-7ee5b869102b" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:46:45 np0005548731 nova_compute[232433]: 2025-12-06 07:46:45.182 232437 INFO nova.compute.manager [None req-70b0497e-03ab-45d7-b3e4-e940eccb1224 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] [instance: 2b4b3181-da12-4705-9215-7ee5b869102b] Attaching volume 1392b4ff-bd9d-4acf-a769-40407523ee0a to /dev/vdb#033[00m
Dec  6 02:46:45 np0005548731 nova_compute[232433]: 2025-12-06 07:46:45.234 232437 DEBUG nova.compute.manager [None req-e7bca8cf-51e0-4453-b3b4-a01ac1c8ab05 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] [instance: f67d89a8-836a-4f47-af8d-37cf99529275] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec  6 02:46:45 np0005548731 nova_compute[232433]: 2025-12-06 07:46:45.235 232437 DEBUG nova.virt.driver [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Emitting event <LifecycleEvent: 1765007205.2342656, f67d89a8-836a-4f47-af8d-37cf99529275 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec  6 02:46:45 np0005548731 nova_compute[232433]: 2025-12-06 07:46:45.235 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: f67d89a8-836a-4f47-af8d-37cf99529275] VM Started (Lifecycle Event)
Dec  6 02:46:45 np0005548731 nova_compute[232433]: 2025-12-06 07:46:45.239 232437 DEBUG nova.virt.libvirt.driver [None req-e7bca8cf-51e0-4453-b3b4-a01ac1c8ab05 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] [instance: f67d89a8-836a-4f47-af8d-37cf99529275] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec  6 02:46:45 np0005548731 nova_compute[232433]: 2025-12-06 07:46:45.243 232437 INFO nova.virt.libvirt.driver [-] [instance: f67d89a8-836a-4f47-af8d-37cf99529275] Instance spawned successfully.
Dec  6 02:46:45 np0005548731 nova_compute[232433]: 2025-12-06 07:46:45.259 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: f67d89a8-836a-4f47-af8d-37cf99529275] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec  6 02:46:45 np0005548731 nova_compute[232433]: 2025-12-06 07:46:45.263 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: f67d89a8-836a-4f47-af8d-37cf99529275] Synchronizing instance power state after lifecycle event "Started"; current vm_state: shelved_offloaded, current task_state: spawning, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec  6 02:46:45 np0005548731 nova_compute[232433]: 2025-12-06 07:46:45.290 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: f67d89a8-836a-4f47-af8d-37cf99529275] During sync_power_state the instance has a pending task (spawning). Skip.
Dec  6 02:46:45 np0005548731 nova_compute[232433]: 2025-12-06 07:46:45.291 232437 DEBUG nova.virt.driver [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Emitting event <LifecycleEvent: 1765007205.2355716, f67d89a8-836a-4f47-af8d-37cf99529275 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec  6 02:46:45 np0005548731 nova_compute[232433]: 2025-12-06 07:46:45.291 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: f67d89a8-836a-4f47-af8d-37cf99529275] VM Paused (Lifecycle Event)
Dec  6 02:46:45 np0005548731 nova_compute[232433]: 2025-12-06 07:46:45.318 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: f67d89a8-836a-4f47-af8d-37cf99529275] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec  6 02:46:45 np0005548731 nova_compute[232433]: 2025-12-06 07:46:45.322 232437 DEBUG nova.virt.driver [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Emitting event <LifecycleEvent: 1765007205.2384214, f67d89a8-836a-4f47-af8d-37cf99529275 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec  6 02:46:45 np0005548731 nova_compute[232433]: 2025-12-06 07:46:45.323 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: f67d89a8-836a-4f47-af8d-37cf99529275] VM Resumed (Lifecycle Event)
Dec  6 02:46:45 np0005548731 nova_compute[232433]: 2025-12-06 07:46:45.343 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: f67d89a8-836a-4f47-af8d-37cf99529275] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec  6 02:46:45 np0005548731 nova_compute[232433]: 2025-12-06 07:46:45.347 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: f67d89a8-836a-4f47-af8d-37cf99529275] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: shelved_offloaded, current task_state: spawning, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec  6 02:46:45 np0005548731 nova_compute[232433]: 2025-12-06 07:46:45.370 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: f67d89a8-836a-4f47-af8d-37cf99529275] During sync_power_state the instance has a pending task (spawning). Skip.
Dec  6 02:46:45 np0005548731 nova_compute[232433]: 2025-12-06 07:46:45.371 232437 DEBUG os_brick.utils [None req-70b0497e-03ab-45d7-b3e4-e940eccb1224 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.102', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-2.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Dec  6 02:46:45 np0005548731 nova_compute[232433]: 2025-12-06 07:46:45.373 237736 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  6 02:46:45 np0005548731 nova_compute[232433]: 2025-12-06 07:46:45.384 237736 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.011s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  6 02:46:45 np0005548731 nova_compute[232433]: 2025-12-06 07:46:45.384 237736 DEBUG oslo.privsep.daemon [-] privsep: reply[7d9a9685-777f-4c76-8f2e-f9b1060deff2]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  6 02:46:45 np0005548731 nova_compute[232433]: 2025-12-06 07:46:45.385 237736 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  6 02:46:45 np0005548731 nova_compute[232433]: 2025-12-06 07:46:45.395 237736 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  6 02:46:45 np0005548731 nova_compute[232433]: 2025-12-06 07:46:45.395 237736 DEBUG oslo.privsep.daemon [-] privsep: reply[8c473694-e3b2-4844-ae9d-0604baaf9df5]: (4, ('InitiatorName=iqn.1994-05.com.redhat:63778d5959f0', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  6 02:46:45 np0005548731 nova_compute[232433]: 2025-12-06 07:46:45.397 237736 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  6 02:46:45 np0005548731 nova_compute[232433]: 2025-12-06 07:46:45.406 237736 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  6 02:46:45 np0005548731 nova_compute[232433]: 2025-12-06 07:46:45.406 237736 DEBUG oslo.privsep.daemon [-] privsep: reply[b353deda-43a2-402a-8838-7dca1242c897]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  6 02:46:45 np0005548731 nova_compute[232433]: 2025-12-06 07:46:45.408 237736 DEBUG oslo.privsep.daemon [-] privsep: reply[3d7b2954-dea2-4514-9c8a-23c092466330]: (4, 'ec2492e2-e0d1-43d3-b74f-744a8272a0e6') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  6 02:46:45 np0005548731 nova_compute[232433]: 2025-12-06 07:46:45.408 232437 DEBUG oslo_concurrency.processutils [None req-70b0497e-03ab-45d7-b3e4-e940eccb1224 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  6 02:46:45 np0005548731 nova_compute[232433]: 2025-12-06 07:46:45.437 232437 DEBUG oslo_concurrency.processutils [None req-70b0497e-03ab-45d7-b3e4-e940eccb1224 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] CMD "nvme version" returned: 0 in 0.029s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  6 02:46:45 np0005548731 nova_compute[232433]: 2025-12-06 07:46:45.440 232437 DEBUG os_brick.initiator.connectors.lightos [None req-70b0497e-03ab-45d7-b3e4-e940eccb1224 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Dec  6 02:46:45 np0005548731 nova_compute[232433]: 2025-12-06 07:46:45.440 232437 DEBUG os_brick.initiator.connectors.lightos [None req-70b0497e-03ab-45d7-b3e4-e940eccb1224 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Dec  6 02:46:45 np0005548731 nova_compute[232433]: 2025-12-06 07:46:45.440 232437 DEBUG os_brick.initiator.connectors.lightos [None req-70b0497e-03ab-45d7-b3e4-e940eccb1224 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:bf3e0a14-a5f8-4123-aa26-e7cad37b879a dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Dec  6 02:46:45 np0005548731 nova_compute[232433]: 2025-12-06 07:46:45.441 232437 DEBUG os_brick.utils [None req-70b0497e-03ab-45d7-b3e4-e940eccb1224 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] <== get_connector_properties: return (68ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.102', 'host': 'compute-2.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:63778d5959f0', 'do_local_attach': False, 'nvme_hostid': 'bf3e0a14-a5f8-4123-aa26-e7cad37b879a', 'system uuid': 'ec2492e2-e0d1-43d3-b74f-744a8272a0e6', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:bf3e0a14-a5f8-4123-aa26-e7cad37b879a', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Dec  6 02:46:45 np0005548731 nova_compute[232433]: 2025-12-06 07:46:45.441 232437 DEBUG nova.virt.block_device [None req-70b0497e-03ab-45d7-b3e4-e940eccb1224 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] [instance: 2b4b3181-da12-4705-9215-7ee5b869102b] Updating existing volume attachment record: 6344a4b6-c2e7-4399-bfff-63a5072820b9 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Dec  6 02:46:45 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:46:45 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:46:45 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:46:45.524 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:46:45 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e352 e352: 3 total, 3 up, 3 in
Dec  6 02:46:46 np0005548731 nova_compute[232433]: 2025-12-06 07:46:46.197 232437 DEBUG nova.objects.instance [None req-70b0497e-03ab-45d7-b3e4-e940eccb1224 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Lazy-loading 'flavor' on Instance uuid 2b4b3181-da12-4705-9215-7ee5b869102b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec  6 02:46:46 np0005548731 nova_compute[232433]: 2025-12-06 07:46:46.216 232437 DEBUG nova.virt.libvirt.driver [None req-70b0497e-03ab-45d7-b3e4-e940eccb1224 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] [instance: 2b4b3181-da12-4705-9215-7ee5b869102b] Attempting to attach volume 1392b4ff-bd9d-4acf-a769-40407523ee0a with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168
Dec  6 02:46:46 np0005548731 nova_compute[232433]: 2025-12-06 07:46:46.219 232437 DEBUG nova.virt.libvirt.guest [None req-70b0497e-03ab-45d7-b3e4-e940eccb1224 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] attach device xml: <disk type="network" device="disk">
Dec  6 02:46:46 np0005548731 nova_compute[232433]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Dec  6 02:46:46 np0005548731 nova_compute[232433]:  <source protocol="rbd" name="volumes/volume-1392b4ff-bd9d-4acf-a769-40407523ee0a">
Dec  6 02:46:46 np0005548731 nova_compute[232433]:    <host name="192.168.122.100" port="6789"/>
Dec  6 02:46:46 np0005548731 nova_compute[232433]:    <host name="192.168.122.102" port="6789"/>
Dec  6 02:46:46 np0005548731 nova_compute[232433]:    <host name="192.168.122.101" port="6789"/>
Dec  6 02:46:46 np0005548731 nova_compute[232433]:  </source>
Dec  6 02:46:46 np0005548731 nova_compute[232433]:  <auth username="openstack">
Dec  6 02:46:46 np0005548731 nova_compute[232433]:    <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec  6 02:46:46 np0005548731 nova_compute[232433]:  </auth>
Dec  6 02:46:46 np0005548731 nova_compute[232433]:  <target dev="vdb" bus="virtio"/>
Dec  6 02:46:46 np0005548731 nova_compute[232433]:  <serial>1392b4ff-bd9d-4acf-a769-40407523ee0a</serial>
Dec  6 02:46:46 np0005548731 nova_compute[232433]: </disk>
Dec  6 02:46:46 np0005548731 nova_compute[232433]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
Dec  6 02:46:46 np0005548731 nova_compute[232433]: 2025-12-06 07:46:46.243 232437 DEBUG nova.compute.manager [None req-e7bca8cf-51e0-4453-b3b4-a01ac1c8ab05 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] [instance: f67d89a8-836a-4f47-af8d-37cf99529275] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec  6 02:46:46 np0005548731 nova_compute[232433]: 2025-12-06 07:46:46.299 232437 DEBUG oslo_concurrency.lockutils [None req-e7bca8cf-51e0-4453-b3b4-a01ac1c8ab05 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] Lock "f67d89a8-836a-4f47-af8d-37cf99529275" "released" by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" :: held 9.624s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  6 02:46:46 np0005548731 nova_compute[232433]: 2025-12-06 07:46:46.417 232437 DEBUG nova.virt.libvirt.driver [None req-70b0497e-03ab-45d7-b3e4-e940eccb1224 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec  6 02:46:46 np0005548731 nova_compute[232433]: 2025-12-06 07:46:46.417 232437 DEBUG nova.virt.libvirt.driver [None req-70b0497e-03ab-45d7-b3e4-e940eccb1224 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec  6 02:46:46 np0005548731 nova_compute[232433]: 2025-12-06 07:46:46.417 232437 DEBUG nova.virt.libvirt.driver [None req-70b0497e-03ab-45d7-b3e4-e940eccb1224 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec  6 02:46:46 np0005548731 nova_compute[232433]: 2025-12-06 07:46:46.417 232437 DEBUG nova.virt.libvirt.driver [None req-70b0497e-03ab-45d7-b3e4-e940eccb1224 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] No VIF found with MAC fa:16:3e:9c:dc:f4, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec  6 02:46:46 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:46:46 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:46:46 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:46:46.581 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:46:46 np0005548731 nova_compute[232433]: 2025-12-06 07:46:46.642 232437 DEBUG oslo_concurrency.lockutils [None req-70b0497e-03ab-45d7-b3e4-e940eccb1224 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Lock "2b4b3181-da12-4705-9215-7ee5b869102b" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 1.460s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  6 02:46:46 np0005548731 nova_compute[232433]: 2025-12-06 07:46:46.737 232437 DEBUG nova.compute.manager [req-424c63d0-963d-4aef-9c3d-d33648e105e7 req-430451b9-45d1-4a05-a872-7441751bb634 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f67d89a8-836a-4f47-af8d-37cf99529275] Received event network-vif-plugged-d4087b80-19fa-44d3-8dd5-406c2b01b6f4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec  6 02:46:46 np0005548731 nova_compute[232433]: 2025-12-06 07:46:46.738 232437 DEBUG oslo_concurrency.lockutils [req-424c63d0-963d-4aef-9c3d-d33648e105e7 req-430451b9-45d1-4a05-a872-7441751bb634 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "f67d89a8-836a-4f47-af8d-37cf99529275-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  6 02:46:46 np0005548731 nova_compute[232433]: 2025-12-06 07:46:46.739 232437 DEBUG oslo_concurrency.lockutils [req-424c63d0-963d-4aef-9c3d-d33648e105e7 req-430451b9-45d1-4a05-a872-7441751bb634 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "f67d89a8-836a-4f47-af8d-37cf99529275-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  6 02:46:46 np0005548731 nova_compute[232433]: 2025-12-06 07:46:46.739 232437 DEBUG oslo_concurrency.lockutils [req-424c63d0-963d-4aef-9c3d-d33648e105e7 req-430451b9-45d1-4a05-a872-7441751bb634 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "f67d89a8-836a-4f47-af8d-37cf99529275-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  6 02:46:46 np0005548731 nova_compute[232433]: 2025-12-06 07:46:46.740 232437 DEBUG nova.compute.manager [req-424c63d0-963d-4aef-9c3d-d33648e105e7 req-430451b9-45d1-4a05-a872-7441751bb634 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f67d89a8-836a-4f47-af8d-37cf99529275] No waiting events found dispatching network-vif-plugged-d4087b80-19fa-44d3-8dd5-406c2b01b6f4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec  6 02:46:46 np0005548731 nova_compute[232433]: 2025-12-06 07:46:46.740 232437 WARNING nova.compute.manager [req-424c63d0-963d-4aef-9c3d-d33648e105e7 req-430451b9-45d1-4a05-a872-7441751bb634 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f67d89a8-836a-4f47-af8d-37cf99529275] Received unexpected event network-vif-plugged-d4087b80-19fa-44d3-8dd5-406c2b01b6f4 for instance with vm_state active and task_state None.
Dec  6 02:46:47 np0005548731 nova_compute[232433]: 2025-12-06 07:46:47.329 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 02:46:47 np0005548731 nova_compute[232433]: 2025-12-06 07:46:47.331 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 02:46:47 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:46:47 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 02:46:47 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:46:47.527 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 02:46:48 np0005548731 nova_compute[232433]: 2025-12-06 07:46:48.523 232437 INFO nova.compute.manager [None req-93f37293-ba2a-46e2-b06e-b00bfa78dc93 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] [instance: 2b4b3181-da12-4705-9215-7ee5b869102b] Rescuing
Dec  6 02:46:48 np0005548731 nova_compute[232433]: 2025-12-06 07:46:48.524 232437 DEBUG oslo_concurrency.lockutils [None req-93f37293-ba2a-46e2-b06e-b00bfa78dc93 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Acquiring lock "refresh_cache-2b4b3181-da12-4705-9215-7ee5b869102b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec  6 02:46:48 np0005548731 nova_compute[232433]: 2025-12-06 07:46:48.524 232437 DEBUG oslo_concurrency.lockutils [None req-93f37293-ba2a-46e2-b06e-b00bfa78dc93 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Acquired lock "refresh_cache-2b4b3181-da12-4705-9215-7ee5b869102b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec  6 02:46:48 np0005548731 nova_compute[232433]: 2025-12-06 07:46:48.524 232437 DEBUG nova.network.neutron [None req-93f37293-ba2a-46e2-b06e-b00bfa78dc93 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] [instance: 2b4b3181-da12-4705-9215-7ee5b869102b] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec  6 02:46:48 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:46:48 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:46:48 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:46:48.583 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:46:48 np0005548731 nova_compute[232433]: 2025-12-06 07:46:48.720 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 02:46:49 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:46:49 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:46:49 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:46:49.530 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:46:50 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e352 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:46:50 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:46:50 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:46:50 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:46:50.584 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:46:51 np0005548731 nova_compute[232433]: 2025-12-06 07:46:51.533 232437 DEBUG nova.network.neutron [None req-93f37293-ba2a-46e2-b06e-b00bfa78dc93 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] [instance: 2b4b3181-da12-4705-9215-7ee5b869102b] Updating instance_info_cache with network_info: [{"id": "879edf90-0812-4adf-b171-d993c6cfb26c", "address": "fa:16:3e:9c:dc:f4", "network": {"id": "6d1a17d6-5e44-40b7-832a-81cb86c02e71", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-1698704235-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.221", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f44ecb8bdc7e4692a299e29603301124", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap879edf90-08", "ovs_interfaceid": "879edf90-0812-4adf-b171-d993c6cfb26c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec  6 02:46:51 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:46:51 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:46:51 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:46:51.532 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:46:51 np0005548731 nova_compute[232433]: 2025-12-06 07:46:51.561 232437 DEBUG oslo_concurrency.lockutils [None req-93f37293-ba2a-46e2-b06e-b00bfa78dc93 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Releasing lock "refresh_cache-2b4b3181-da12-4705-9215-7ee5b869102b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec  6 02:46:51 np0005548731 nova_compute[232433]: 2025-12-06 07:46:51.873 232437 DEBUG nova.virt.libvirt.driver [None req-93f37293-ba2a-46e2-b06e-b00bfa78dc93 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] [instance: 2b4b3181-da12-4705-9215-7ee5b869102b] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Dec  6 02:46:52 np0005548731 nova_compute[232433]: 2025-12-06 07:46:52.333 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 02:46:52 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:46:52 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:46:52 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:46:52.586 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:46:53 np0005548731 nova_compute[232433]: 2025-12-06 07:46:53.116 232437 DEBUG nova.objects.instance [None req-cfe15110-5f35-4aca-8a79-49312e6306c0 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] Lazy-loading 'pci_devices' on Instance uuid f67d89a8-836a-4f47-af8d-37cf99529275 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec  6 02:46:53 np0005548731 nova_compute[232433]: 2025-12-06 07:46:53.151 232437 DEBUG nova.virt.driver [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Emitting event <LifecycleEvent: 1765007213.150837, f67d89a8-836a-4f47-af8d-37cf99529275 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec  6 02:46:53 np0005548731 nova_compute[232433]: 2025-12-06 07:46:53.151 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: f67d89a8-836a-4f47-af8d-37cf99529275] VM Paused (Lifecycle Event)
Dec  6 02:46:53 np0005548731 nova_compute[232433]: 2025-12-06 07:46:53.311 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: f67d89a8-836a-4f47-af8d-37cf99529275] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec  6 02:46:53 np0005548731 nova_compute[232433]: 2025-12-06 07:46:53.315 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: f67d89a8-836a-4f47-af8d-37cf99529275] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: suspending, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec  6 02:46:53 np0005548731 nova_compute[232433]: 2025-12-06 07:46:53.469 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: f67d89a8-836a-4f47-af8d-37cf99529275] During sync_power_state the instance has a pending task (suspending). Skip.
Dec  6 02:46:53 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:46:53 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:46:53 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:46:53.535 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:46:53 np0005548731 nova_compute[232433]: 2025-12-06 07:46:53.722 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 02:46:54 np0005548731 kernel: tapd4087b80-19 (unregistering): left promiscuous mode
Dec  6 02:46:54 np0005548731 NetworkManager[49182]: <info>  [1765007214.3021] device (tapd4087b80-19): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec  6 02:46:54 np0005548731 ovn_controller[133927]: 2025-12-06T07:46:54Z|00742|binding|INFO|Releasing lport d4087b80-19fa-44d3-8dd5-406c2b01b6f4 from this chassis (sb_readonly=0)
Dec  6 02:46:54 np0005548731 ovn_controller[133927]: 2025-12-06T07:46:54Z|00743|binding|INFO|Setting lport d4087b80-19fa-44d3-8dd5-406c2b01b6f4 down in Southbound
Dec  6 02:46:54 np0005548731 ovn_controller[133927]: 2025-12-06T07:46:54Z|00744|binding|INFO|Removing iface tapd4087b80-19 ovn-installed in OVS
Dec  6 02:46:54 np0005548731 nova_compute[232433]: 2025-12-06 07:46:54.315 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 02:46:54 np0005548731 nova_compute[232433]: 2025-12-06 07:46:54.317 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 02:46:54 np0005548731 nova_compute[232433]: 2025-12-06 07:46:54.353 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 02:46:54 np0005548731 systemd[1]: machine-qemu\x2d75\x2dinstance\x2d00000099.scope: Deactivated successfully.
Dec  6 02:46:54 np0005548731 systemd[1]: machine-qemu\x2d75\x2dinstance\x2d00000099.scope: Consumed 9.184s CPU time.
Dec  6 02:46:54 np0005548731 systemd-machined[195355]: Machine qemu-75-instance-00000099 terminated.
Dec  6 02:46:54 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:46:54.406 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:16:41:1e 10.100.0.7'], port_security=['fa:16:3e:16:41:1e 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'f67d89a8-836a-4f47-af8d-37cf99529275', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-67a02abd-6f15-4e26-ba0d-8a091ca98239', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4ec9294f6d4b4f44a72414374d646a4a', 'neutron:revision_number': '9', 'neutron:security_group_ids': 'ef8d01c6-8791-4eef-8d34-906ccbdd6237', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a13aa6e4-a519-406f-87b7-05ba3d74a296, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], logical_port=d4087b80-19fa-44d3-8dd5-406c2b01b6f4) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec  6 02:46:54 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:46:54.409 143965 INFO neutron.agent.ovn.metadata.agent [-] Port d4087b80-19fa-44d3-8dd5-406c2b01b6f4 in datapath 67a02abd-6f15-4e26-ba0d-8a091ca98239 unbound from our chassis
Dec  6 02:46:54 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:46:54.413 143965 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 67a02abd-6f15-4e26-ba0d-8a091ca98239, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec  6 02:46:54 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:46:54.414 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[92d59888-88ff-4498-99d0-43be97a63dc2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  6 02:46:54 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:46:54.415 143965 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-67a02abd-6f15-4e26-ba0d-8a091ca98239 namespace which is not needed anymore
Dec  6 02:46:54 np0005548731 nova_compute[232433]: 2025-12-06 07:46:54.522 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 02:46:54 np0005548731 nova_compute[232433]: 2025-12-06 07:46:54.527 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 02:46:54 np0005548731 nova_compute[232433]: 2025-12-06 07:46:54.530 232437 DEBUG nova.compute.manager [None req-cfe15110-5f35-4aca-8a79-49312e6306c0 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] [instance: f67d89a8-836a-4f47-af8d-37cf99529275] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec  6 02:46:54 np0005548731 neutron-haproxy-ovnmeta-67a02abd-6f15-4e26-ba0d-8a091ca98239[303498]: [NOTICE]   (303510) : haproxy version is 2.8.14-c23fe91
Dec  6 02:46:54 np0005548731 neutron-haproxy-ovnmeta-67a02abd-6f15-4e26-ba0d-8a091ca98239[303498]: [NOTICE]   (303510) : path to executable is /usr/sbin/haproxy
Dec  6 02:46:54 np0005548731 neutron-haproxy-ovnmeta-67a02abd-6f15-4e26-ba0d-8a091ca98239[303498]: [WARNING]  (303510) : Exiting Master process...
Dec  6 02:46:54 np0005548731 neutron-haproxy-ovnmeta-67a02abd-6f15-4e26-ba0d-8a091ca98239[303498]: [WARNING]  (303510) : Exiting Master process...
Dec  6 02:46:54 np0005548731 neutron-haproxy-ovnmeta-67a02abd-6f15-4e26-ba0d-8a091ca98239[303498]: [ALERT]    (303510) : Current worker (303513) exited with code 143 (Terminated)
Dec  6 02:46:54 np0005548731 neutron-haproxy-ovnmeta-67a02abd-6f15-4e26-ba0d-8a091ca98239[303498]: [WARNING]  (303510) : All workers exited. Exiting... (0)
Dec  6 02:46:54 np0005548731 systemd[1]: libpod-7611ee79a96f6c342b121961796fd839c2a20601f168bc773e5b1deae6deded0.scope: Deactivated successfully.
Dec  6 02:46:54 np0005548731 podman[303580]: 2025-12-06 07:46:54.574342137 +0000 UTC m=+0.054562342 container died 7611ee79a96f6c342b121961796fd839c2a20601f168bc773e5b1deae6deded0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-67a02abd-6f15-4e26-ba0d-8a091ca98239, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Dec  6 02:46:54 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:46:54 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:46:54 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:46:54.588 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:46:54 np0005548731 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-7611ee79a96f6c342b121961796fd839c2a20601f168bc773e5b1deae6deded0-userdata-shm.mount: Deactivated successfully.
Dec  6 02:46:54 np0005548731 systemd[1]: var-lib-containers-storage-overlay-a3462333cce9fe9b224782a8d9933f8f19479ef9415736baf9ff89afa2b339ba-merged.mount: Deactivated successfully.
Dec  6 02:46:54 np0005548731 podman[303580]: 2025-12-06 07:46:54.619949199 +0000 UTC m=+0.100169414 container cleanup 7611ee79a96f6c342b121961796fd839c2a20601f168bc773e5b1deae6deded0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-67a02abd-6f15-4e26-ba0d-8a091ca98239, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Dec  6 02:46:54 np0005548731 systemd[1]: libpod-conmon-7611ee79a96f6c342b121961796fd839c2a20601f168bc773e5b1deae6deded0.scope: Deactivated successfully.
Dec  6 02:46:54 np0005548731 podman[303616]: 2025-12-06 07:46:54.71390518 +0000 UTC m=+0.073885543 container remove 7611ee79a96f6c342b121961796fd839c2a20601f168bc773e5b1deae6deded0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-67a02abd-6f15-4e26-ba0d-8a091ca98239, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2)
Dec  6 02:46:54 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:46:54.719 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[ef4c7281-34bc-4924-a72d-f5e995eab7cc]: (4, ('Sat Dec  6 07:46:54 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-67a02abd-6f15-4e26-ba0d-8a091ca98239 (7611ee79a96f6c342b121961796fd839c2a20601f168bc773e5b1deae6deded0)\n7611ee79a96f6c342b121961796fd839c2a20601f168bc773e5b1deae6deded0\nSat Dec  6 07:46:54 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-67a02abd-6f15-4e26-ba0d-8a091ca98239 (7611ee79a96f6c342b121961796fd839c2a20601f168bc773e5b1deae6deded0)\n7611ee79a96f6c342b121961796fd839c2a20601f168bc773e5b1deae6deded0\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:46:54 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:46:54.721 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[5fd416a3-629d-4f06-b4d9-faeba9ba55e8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:46:54 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:46:54.722 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap67a02abd-60, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:46:54 np0005548731 nova_compute[232433]: 2025-12-06 07:46:54.724 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:46:54 np0005548731 kernel: tap67a02abd-60: left promiscuous mode
Dec  6 02:46:54 np0005548731 nova_compute[232433]: 2025-12-06 07:46:54.743 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:46:54 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:46:54.745 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[5c5577dd-f928-4bd7-8f91-73cfb4574feb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:46:54 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:46:54.759 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[46673bf4-1b24-4028-b903-299f87f3b84d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:46:54 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:46:54.760 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[4fa4ce2d-bb46-434b-a0bb-7b883e2d17f6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:46:54 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:46:54.776 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[0bc6d4fa-40a5-46c2-b979-fc21947cf8ea]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 746643, 'reachable_time': 21393, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 303633, 'error': None, 'target': 'ovnmeta-67a02abd-6f15-4e26-ba0d-8a091ca98239', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:46:54 np0005548731 systemd[1]: run-netns-ovnmeta\x2d67a02abd\x2d6f15\x2d4e26\x2dba0d\x2d8a091ca98239.mount: Deactivated successfully.
Dec  6 02:46:54 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:46:54.778 144079 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-67a02abd-6f15-4e26-ba0d-8a091ca98239 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Dec  6 02:46:54 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:46:54.778 144079 DEBUG oslo.privsep.daemon [-] privsep: reply[c0f8b24b-9552-49c1-88b2-0cb4eefba169]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:46:54 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e353 e353: 3 total, 3 up, 3 in
Dec  6 02:46:55 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e353 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:46:55 np0005548731 kernel: tap879edf90-08 (unregistering): left promiscuous mode
Dec  6 02:46:55 np0005548731 NetworkManager[49182]: <info>  [1765007215.2375] device (tap879edf90-08): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec  6 02:46:55 np0005548731 nova_compute[232433]: 2025-12-06 07:46:55.239 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:46:55 np0005548731 ovn_controller[133927]: 2025-12-06T07:46:55Z|00745|binding|INFO|Releasing lport 879edf90-0812-4adf-b171-d993c6cfb26c from this chassis (sb_readonly=0)
Dec  6 02:46:55 np0005548731 ovn_controller[133927]: 2025-12-06T07:46:55Z|00746|binding|INFO|Setting lport 879edf90-0812-4adf-b171-d993c6cfb26c down in Southbound
Dec  6 02:46:55 np0005548731 ovn_controller[133927]: 2025-12-06T07:46:55Z|00747|binding|INFO|Removing iface tap879edf90-08 ovn-installed in OVS
Dec  6 02:46:55 np0005548731 nova_compute[232433]: 2025-12-06 07:46:55.248 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:46:55 np0005548731 nova_compute[232433]: 2025-12-06 07:46:55.249 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:46:55 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:46:55.254 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9c:dc:f4 10.100.0.7'], port_security=['fa:16:3e:9c:dc:f4 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '2b4b3181-da12-4705-9215-7ee5b869102b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6d1a17d6-5e44-40b7-832a-81cb86c02e71', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f44ecb8bdc7e4692a299e29603301124', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'f9c55c19-5297-4b9d-814f-3c2f976ceba7', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com', 'neutron:port_fip': '192.168.122.221'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ef95e15f-f36a-4631-8598-89c7e0374fce, chassis=[], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], logical_port=879edf90-0812-4adf-b171-d993c6cfb26c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 02:46:55 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:46:55.255 143965 INFO neutron.agent.ovn.metadata.agent [-] Port 879edf90-0812-4adf-b171-d993c6cfb26c in datapath 6d1a17d6-5e44-40b7-832a-81cb86c02e71 unbound from our chassis#033[00m
Dec  6 02:46:55 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:46:55.256 143965 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 6d1a17d6-5e44-40b7-832a-81cb86c02e71, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec  6 02:46:55 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:46:55.257 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[683e4319-6ef5-4137-b82f-febb7c1d91cc]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:46:55 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:46:55.257 143965 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-6d1a17d6-5e44-40b7-832a-81cb86c02e71 namespace which is not needed anymore#033[00m
Dec  6 02:46:55 np0005548731 nova_compute[232433]: 2025-12-06 07:46:55.263 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:46:55 np0005548731 systemd[1]: machine-qemu\x2d74\x2dinstance\x2d0000009f.scope: Deactivated successfully.
Dec  6 02:46:55 np0005548731 systemd[1]: machine-qemu\x2d74\x2dinstance\x2d0000009f.scope: Consumed 14.769s CPU time.
Dec  6 02:46:55 np0005548731 systemd-machined[195355]: Machine qemu-74-instance-0000009f terminated.
Dec  6 02:46:55 np0005548731 neutron-haproxy-ovnmeta-6d1a17d6-5e44-40b7-832a-81cb86c02e71[302120]: [NOTICE]   (302128) : haproxy version is 2.8.14-c23fe91
Dec  6 02:46:55 np0005548731 neutron-haproxy-ovnmeta-6d1a17d6-5e44-40b7-832a-81cb86c02e71[302120]: [NOTICE]   (302128) : path to executable is /usr/sbin/haproxy
Dec  6 02:46:55 np0005548731 neutron-haproxy-ovnmeta-6d1a17d6-5e44-40b7-832a-81cb86c02e71[302120]: [WARNING]  (302128) : Exiting Master process...
Dec  6 02:46:55 np0005548731 neutron-haproxy-ovnmeta-6d1a17d6-5e44-40b7-832a-81cb86c02e71[302120]: [ALERT]    (302128) : Current worker (302131) exited with code 143 (Terminated)
Dec  6 02:46:55 np0005548731 neutron-haproxy-ovnmeta-6d1a17d6-5e44-40b7-832a-81cb86c02e71[302120]: [WARNING]  (302128) : All workers exited. Exiting... (0)
Dec  6 02:46:55 np0005548731 systemd[1]: libpod-665f750fc9f370a1e9af431990bcafa0aae94185f18ace05e7ee8c9c011e8dd2.scope: Deactivated successfully.
Dec  6 02:46:55 np0005548731 podman[303656]: 2025-12-06 07:46:55.371764668 +0000 UTC m=+0.040518258 container died 665f750fc9f370a1e9af431990bcafa0aae94185f18ace05e7ee8c9c011e8dd2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6d1a17d6-5e44-40b7-832a-81cb86c02e71, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS)
Dec  6 02:46:55 np0005548731 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-665f750fc9f370a1e9af431990bcafa0aae94185f18ace05e7ee8c9c011e8dd2-userdata-shm.mount: Deactivated successfully.
Dec  6 02:46:55 np0005548731 systemd[1]: var-lib-containers-storage-overlay-102755732f329a9f7f23fea7546836b06af77b58f35091410a2259b4ac9c2348-merged.mount: Deactivated successfully.
Dec  6 02:46:55 np0005548731 podman[303656]: 2025-12-06 07:46:55.399600857 +0000 UTC m=+0.068354447 container cleanup 665f750fc9f370a1e9af431990bcafa0aae94185f18ace05e7ee8c9c011e8dd2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6d1a17d6-5e44-40b7-832a-81cb86c02e71, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Dec  6 02:46:55 np0005548731 systemd[1]: libpod-conmon-665f750fc9f370a1e9af431990bcafa0aae94185f18ace05e7ee8c9c011e8dd2.scope: Deactivated successfully.
Dec  6 02:46:55 np0005548731 podman[303688]: 2025-12-06 07:46:55.45431337 +0000 UTC m=+0.034347467 container remove 665f750fc9f370a1e9af431990bcafa0aae94185f18ace05e7ee8c9c011e8dd2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6d1a17d6-5e44-40b7-832a-81cb86c02e71, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec  6 02:46:55 np0005548731 NetworkManager[49182]: <info>  [1765007215.4580] manager: (tap879edf90-08): new Tun device (/org/freedesktop/NetworkManager/Devices/344)
Dec  6 02:46:55 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:46:55.460 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[7d78acaf-b86f-4b51-bd1f-cc1c3d6130c0]: (4, ('Sat Dec  6 07:46:55 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-6d1a17d6-5e44-40b7-832a-81cb86c02e71 (665f750fc9f370a1e9af431990bcafa0aae94185f18ace05e7ee8c9c011e8dd2)\n665f750fc9f370a1e9af431990bcafa0aae94185f18ace05e7ee8c9c011e8dd2\nSat Dec  6 07:46:55 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-6d1a17d6-5e44-40b7-832a-81cb86c02e71 (665f750fc9f370a1e9af431990bcafa0aae94185f18ace05e7ee8c9c011e8dd2)\n665f750fc9f370a1e9af431990bcafa0aae94185f18ace05e7ee8c9c011e8dd2\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:46:55 np0005548731 nova_compute[232433]: 2025-12-06 07:46:55.462 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:46:55 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:46:55.462 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[99272c09-8d81-4989-9123-d228f03a1c0e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:46:55 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:46:55.463 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6d1a17d6-50, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:46:55 np0005548731 nova_compute[232433]: 2025-12-06 07:46:55.464 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:46:55 np0005548731 nova_compute[232433]: 2025-12-06 07:46:55.482 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:46:55 np0005548731 kernel: tap6d1a17d6-50: left promiscuous mode
Dec  6 02:46:55 np0005548731 nova_compute[232433]: 2025-12-06 07:46:55.490 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:46:55 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:46:55.492 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[debeb719-0336-4999-b3fa-9f6d5ec01f18]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:46:55 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:46:55.507 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[bf56f4d7-6247-49ee-aa00-90681445b551]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:46:55 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:46:55.508 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[1dc399cd-becc-43d8-8061-3c45748a34b4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:46:55 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:46:55.524 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[1962d51e-f33f-4e34-bb00-e1bc375f9549]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 743319, 'reachable_time': 32915, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 303721, 'error': None, 'target': 'ovnmeta-6d1a17d6-5e44-40b7-832a-81cb86c02e71', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:46:55 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:46:55.526 144079 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-6d1a17d6-5e44-40b7-832a-81cb86c02e71 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Dec  6 02:46:55 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:46:55.526 144079 DEBUG oslo.privsep.daemon [-] privsep: reply[4f67ee94-0602-4e8a-a1cd-efc83016cba9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:46:55 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:46:55 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:46:55 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:46:55.538 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:46:55 np0005548731 nova_compute[232433]: 2025-12-06 07:46:55.545 232437 DEBUG nova.compute.manager [req-6c3dbb20-7b54-479f-8328-53b3811813fd req-f21a9755-aef9-477f-bddf-324149be6469 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f67d89a8-836a-4f47-af8d-37cf99529275] Received event network-vif-unplugged-d4087b80-19fa-44d3-8dd5-406c2b01b6f4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:46:55 np0005548731 nova_compute[232433]: 2025-12-06 07:46:55.545 232437 DEBUG oslo_concurrency.lockutils [req-6c3dbb20-7b54-479f-8328-53b3811813fd req-f21a9755-aef9-477f-bddf-324149be6469 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "f67d89a8-836a-4f47-af8d-37cf99529275-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:46:55 np0005548731 nova_compute[232433]: 2025-12-06 07:46:55.546 232437 DEBUG oslo_concurrency.lockutils [req-6c3dbb20-7b54-479f-8328-53b3811813fd req-f21a9755-aef9-477f-bddf-324149be6469 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "f67d89a8-836a-4f47-af8d-37cf99529275-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:46:55 np0005548731 nova_compute[232433]: 2025-12-06 07:46:55.546 232437 DEBUG oslo_concurrency.lockutils [req-6c3dbb20-7b54-479f-8328-53b3811813fd req-f21a9755-aef9-477f-bddf-324149be6469 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "f67d89a8-836a-4f47-af8d-37cf99529275-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:46:55 np0005548731 nova_compute[232433]: 2025-12-06 07:46:55.546 232437 DEBUG nova.compute.manager [req-6c3dbb20-7b54-479f-8328-53b3811813fd req-f21a9755-aef9-477f-bddf-324149be6469 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f67d89a8-836a-4f47-af8d-37cf99529275] No waiting events found dispatching network-vif-unplugged-d4087b80-19fa-44d3-8dd5-406c2b01b6f4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 02:46:55 np0005548731 nova_compute[232433]: 2025-12-06 07:46:55.546 232437 WARNING nova.compute.manager [req-6c3dbb20-7b54-479f-8328-53b3811813fd req-f21a9755-aef9-477f-bddf-324149be6469 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f67d89a8-836a-4f47-af8d-37cf99529275] Received unexpected event network-vif-unplugged-d4087b80-19fa-44d3-8dd5-406c2b01b6f4 for instance with vm_state suspended and task_state None.#033[00m
Dec  6 02:46:55 np0005548731 systemd[1]: run-netns-ovnmeta\x2d6d1a17d6\x2d5e44\x2d40b7\x2d832a\x2d81cb86c02e71.mount: Deactivated successfully.
Dec  6 02:46:55 np0005548731 nova_compute[232433]: 2025-12-06 07:46:55.618 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:46:55 np0005548731 nova_compute[232433]: 2025-12-06 07:46:55.894 232437 INFO nova.virt.libvirt.driver [None req-93f37293-ba2a-46e2-b06e-b00bfa78dc93 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] [instance: 2b4b3181-da12-4705-9215-7ee5b869102b] Instance shutdown successfully after 4 seconds.#033[00m
Dec  6 02:46:55 np0005548731 nova_compute[232433]: 2025-12-06 07:46:55.901 232437 INFO nova.virt.libvirt.driver [-] [instance: 2b4b3181-da12-4705-9215-7ee5b869102b] Instance destroyed successfully.#033[00m
Dec  6 02:46:55 np0005548731 nova_compute[232433]: 2025-12-06 07:46:55.901 232437 DEBUG nova.objects.instance [None req-93f37293-ba2a-46e2-b06e-b00bfa78dc93 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Lazy-loading 'numa_topology' on Instance uuid 2b4b3181-da12-4705-9215-7ee5b869102b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:46:55 np0005548731 nova_compute[232433]: 2025-12-06 07:46:55.917 232437 INFO nova.virt.libvirt.driver [None req-93f37293-ba2a-46e2-b06e-b00bfa78dc93 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] [instance: 2b4b3181-da12-4705-9215-7ee5b869102b] Attempting a stable device rescue#033[00m
Dec  6 02:46:56 np0005548731 nova_compute[232433]: 2025-12-06 07:46:56.197 232437 DEBUG nova.virt.libvirt.driver [None req-93f37293-ba2a-46e2-b06e-b00bfa78dc93 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] rescue generated disk_info: {'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vdb': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}, 'disk.rescue': {'bus': 'virtio', 'dev': 'vdc', 'type': 'disk'}}} rescue /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4314#033[00m
Dec  6 02:46:56 np0005548731 nova_compute[232433]: 2025-12-06 07:46:56.201 232437 DEBUG nova.virt.libvirt.driver [None req-93f37293-ba2a-46e2-b06e-b00bfa78dc93 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] [instance: 2b4b3181-da12-4705-9215-7ee5b869102b] Instance directory exists: not creating _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4719#033[00m
Dec  6 02:46:56 np0005548731 nova_compute[232433]: 2025-12-06 07:46:56.201 232437 INFO nova.virt.libvirt.driver [None req-93f37293-ba2a-46e2-b06e-b00bfa78dc93 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] [instance: 2b4b3181-da12-4705-9215-7ee5b869102b] Creating image(s)#033[00m
Dec  6 02:46:56 np0005548731 nova_compute[232433]: 2025-12-06 07:46:56.227 232437 DEBUG nova.storage.rbd_utils [None req-93f37293-ba2a-46e2-b06e-b00bfa78dc93 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] rbd image 2b4b3181-da12-4705-9215-7ee5b869102b_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:46:56 np0005548731 nova_compute[232433]: 2025-12-06 07:46:56.231 232437 DEBUG nova.objects.instance [None req-93f37293-ba2a-46e2-b06e-b00bfa78dc93 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Lazy-loading 'trusted_certs' on Instance uuid 2b4b3181-da12-4705-9215-7ee5b869102b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:46:56 np0005548731 nova_compute[232433]: 2025-12-06 07:46:56.270 232437 DEBUG nova.storage.rbd_utils [None req-93f37293-ba2a-46e2-b06e-b00bfa78dc93 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] rbd image 2b4b3181-da12-4705-9215-7ee5b869102b_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:46:56 np0005548731 nova_compute[232433]: 2025-12-06 07:46:56.299 232437 DEBUG nova.storage.rbd_utils [None req-93f37293-ba2a-46e2-b06e-b00bfa78dc93 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] rbd image 2b4b3181-da12-4705-9215-7ee5b869102b_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:46:56 np0005548731 nova_compute[232433]: 2025-12-06 07:46:56.302 232437 DEBUG oslo_concurrency.lockutils [None req-93f37293-ba2a-46e2-b06e-b00bfa78dc93 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Acquiring lock "915d626f80cee8cb03d7948b5a868ee0cb382579" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:46:56 np0005548731 nova_compute[232433]: 2025-12-06 07:46:56.303 232437 DEBUG oslo_concurrency.lockutils [None req-93f37293-ba2a-46e2-b06e-b00bfa78dc93 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Lock "915d626f80cee8cb03d7948b5a868ee0cb382579" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:46:56 np0005548731 nova_compute[232433]: 2025-12-06 07:46:56.456 232437 INFO nova.compute.manager [None req-ba191309-52e7-4450-bfae-7e9029eb2973 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] [instance: f67d89a8-836a-4f47-af8d-37cf99529275] Resuming#033[00m
Dec  6 02:46:56 np0005548731 nova_compute[232433]: 2025-12-06 07:46:56.457 232437 DEBUG nova.objects.instance [None req-ba191309-52e7-4450-bfae-7e9029eb2973 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] Lazy-loading 'flavor' on Instance uuid f67d89a8-836a-4f47-af8d-37cf99529275 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:46:56 np0005548731 nova_compute[232433]: 2025-12-06 07:46:56.513 232437 DEBUG oslo_concurrency.lockutils [None req-ba191309-52e7-4450-bfae-7e9029eb2973 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] Acquiring lock "refresh_cache-f67d89a8-836a-4f47-af8d-37cf99529275" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 02:46:56 np0005548731 nova_compute[232433]: 2025-12-06 07:46:56.514 232437 DEBUG oslo_concurrency.lockutils [None req-ba191309-52e7-4450-bfae-7e9029eb2973 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] Acquired lock "refresh_cache-f67d89a8-836a-4f47-af8d-37cf99529275" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 02:46:56 np0005548731 nova_compute[232433]: 2025-12-06 07:46:56.514 232437 DEBUG nova.network.neutron [None req-ba191309-52e7-4450-bfae-7e9029eb2973 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] [instance: f67d89a8-836a-4f47-af8d-37cf99529275] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec  6 02:46:56 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:46:56 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:46:56 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:46:56.590 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:46:56 np0005548731 nova_compute[232433]: 2025-12-06 07:46:56.637 232437 DEBUG nova.virt.libvirt.imagebackend [None req-93f37293-ba2a-46e2-b06e-b00bfa78dc93 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Image locations are: [{'url': 'rbd://40a1bae4-cf76-5610-8dab-c75116dfe0bb/images/4a5b4c30-2416-4ed2-b947-8bb73ef88bd7/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://40a1bae4-cf76-5610-8dab-c75116dfe0bb/images/4a5b4c30-2416-4ed2-b947-8bb73ef88bd7/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085#033[00m
Dec  6 02:46:56 np0005548731 nova_compute[232433]: 2025-12-06 07:46:56.695 232437 DEBUG nova.virt.libvirt.imagebackend [None req-93f37293-ba2a-46e2-b06e-b00bfa78dc93 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Selected location: {'url': 'rbd://40a1bae4-cf76-5610-8dab-c75116dfe0bb/images/4a5b4c30-2416-4ed2-b947-8bb73ef88bd7/snap', 'metadata': {'store': 'default_backend'}} clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1094#033[00m
Dec  6 02:46:56 np0005548731 nova_compute[232433]: 2025-12-06 07:46:56.696 232437 DEBUG nova.storage.rbd_utils [None req-93f37293-ba2a-46e2-b06e-b00bfa78dc93 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] cloning images/4a5b4c30-2416-4ed2-b947-8bb73ef88bd7@snap to None/2b4b3181-da12-4705-9215-7ee5b869102b_disk.rescue clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Dec  6 02:46:57 np0005548731 nova_compute[232433]: 2025-12-06 07:46:57.335 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:46:57 np0005548731 nova_compute[232433]: 2025-12-06 07:46:57.435 232437 DEBUG oslo_concurrency.lockutils [None req-93f37293-ba2a-46e2-b06e-b00bfa78dc93 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Lock "915d626f80cee8cb03d7948b5a868ee0cb382579" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 1.132s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:46:57 np0005548731 nova_compute[232433]: 2025-12-06 07:46:57.481 232437 DEBUG nova.objects.instance [None req-93f37293-ba2a-46e2-b06e-b00bfa78dc93 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Lazy-loading 'migration_context' on Instance uuid 2b4b3181-da12-4705-9215-7ee5b869102b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:46:57 np0005548731 nova_compute[232433]: 2025-12-06 07:46:57.507 232437 DEBUG nova.virt.libvirt.driver [None req-93f37293-ba2a-46e2-b06e-b00bfa78dc93 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] [instance: 2b4b3181-da12-4705-9215-7ee5b869102b] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Dec  6 02:46:57 np0005548731 nova_compute[232433]: 2025-12-06 07:46:57.510 232437 DEBUG nova.virt.libvirt.driver [None req-93f37293-ba2a-46e2-b06e-b00bfa78dc93 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] [instance: 2b4b3181-da12-4705-9215-7ee5b869102b] Start _get_guest_xml network_info=[{"id": "879edf90-0812-4adf-b171-d993c6cfb26c", "address": "fa:16:3e:9c:dc:f4", "network": {"id": "6d1a17d6-5e44-40b7-832a-81cb86c02e71", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-1698704235-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.221", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-ServerStableDeviceRescueTest-1698704235-network", "vif_mac": "fa:16:3e:9c:dc:f4"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f44ecb8bdc7e4692a299e29603301124", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap879edf90-08", "ovs_interfaceid": "879edf90-0812-4adf-b171-d993c6cfb26c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vdb': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}, 'disk.rescue': {'bus': 'virtio', 'dev': 'vdc', 'type': 'disk'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>) rescue={'image_id': '4a5b4c30-2416-4ed2-b947-8bb73ef88bd7', 'kernel_id': '', 'ramdisk_id': ''} block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'device_type': 'disk', 'encryption_secret_uuid': None, 'disk_bus': 'virtio', 'boot_index': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'encryption_options': None, 'size': 0, 'encrypted': False, 'image_id': '6efab05d-c7cf-4770-a5c3-c806a2739063'}], 'ephemerals': [], 'block_device_mapping': [{'guest_format': None, 'device_type': 'disk', 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-1392b4ff-bd9d-4acf-a769-40407523ee0a', 'hosts': ['192.168.122.100', '192.168.122.102', '192.168.122.101'], 'ports': ['6789', '6789', '6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '1392b4ff-bd9d-4acf-a769-40407523ee0a', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': '2b4b3181-da12-4705-9215-7ee5b869102b', 'attached_at': '', 'detached_at': '', 'volume_id': '1392b4ff-bd9d-4acf-a769-40407523ee0a', 'serial': '1392b4ff-bd9d-4acf-a769-40407523ee0a'}, 'disk_bus': 'virtio', 'boot_index': None, 'delete_on_termination': False, 'mount_device': '/dev/vdb', 'attachment_id': '6344a4b6-c2e7-4399-bfff-63a5072820b9', 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec  6 02:46:57 np0005548731 nova_compute[232433]: 2025-12-06 07:46:57.510 232437 DEBUG nova.objects.instance [None req-93f37293-ba2a-46e2-b06e-b00bfa78dc93 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Lazy-loading 'resources' on Instance uuid 2b4b3181-da12-4705-9215-7ee5b869102b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:46:57 np0005548731 nova_compute[232433]: 2025-12-06 07:46:57.531 232437 WARNING nova.virt.libvirt.driver [None req-93f37293-ba2a-46e2-b06e-b00bfa78dc93 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  6 02:46:57 np0005548731 nova_compute[232433]: 2025-12-06 07:46:57.540 232437 DEBUG nova.virt.libvirt.host [None req-93f37293-ba2a-46e2-b06e-b00bfa78dc93 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec  6 02:46:57 np0005548731 nova_compute[232433]: 2025-12-06 07:46:57.541 232437 DEBUG nova.virt.libvirt.host [None req-93f37293-ba2a-46e2-b06e-b00bfa78dc93 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec  6 02:46:57 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:46:57 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:46:57 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:46:57.541 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:46:57 np0005548731 nova_compute[232433]: 2025-12-06 07:46:57.548 232437 DEBUG nova.virt.libvirt.host [None req-93f37293-ba2a-46e2-b06e-b00bfa78dc93 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec  6 02:46:57 np0005548731 nova_compute[232433]: 2025-12-06 07:46:57.549 232437 DEBUG nova.virt.libvirt.host [None req-93f37293-ba2a-46e2-b06e-b00bfa78dc93 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec  6 02:46:57 np0005548731 nova_compute[232433]: 2025-12-06 07:46:57.550 232437 DEBUG nova.virt.libvirt.driver [None req-93f37293-ba2a-46e2-b06e-b00bfa78dc93 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec  6 02:46:57 np0005548731 nova_compute[232433]: 2025-12-06 07:46:57.550 232437 DEBUG nova.virt.hardware [None req-93f37293-ba2a-46e2-b06e-b00bfa78dc93 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-06T06:56:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='25848a18-11d9-4f11-80b5-5d005675c76d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec  6 02:46:57 np0005548731 nova_compute[232433]: 2025-12-06 07:46:57.551 232437 DEBUG nova.virt.hardware [None req-93f37293-ba2a-46e2-b06e-b00bfa78dc93 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec  6 02:46:57 np0005548731 nova_compute[232433]: 2025-12-06 07:46:57.551 232437 DEBUG nova.virt.hardware [None req-93f37293-ba2a-46e2-b06e-b00bfa78dc93 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec  6 02:46:57 np0005548731 nova_compute[232433]: 2025-12-06 07:46:57.551 232437 DEBUG nova.virt.hardware [None req-93f37293-ba2a-46e2-b06e-b00bfa78dc93 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec  6 02:46:57 np0005548731 nova_compute[232433]: 2025-12-06 07:46:57.551 232437 DEBUG nova.virt.hardware [None req-93f37293-ba2a-46e2-b06e-b00bfa78dc93 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec  6 02:46:57 np0005548731 nova_compute[232433]: 2025-12-06 07:46:57.551 232437 DEBUG nova.virt.hardware [None req-93f37293-ba2a-46e2-b06e-b00bfa78dc93 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec  6 02:46:57 np0005548731 nova_compute[232433]: 2025-12-06 07:46:57.552 232437 DEBUG nova.virt.hardware [None req-93f37293-ba2a-46e2-b06e-b00bfa78dc93 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec  6 02:46:57 np0005548731 nova_compute[232433]: 2025-12-06 07:46:57.552 232437 DEBUG nova.virt.hardware [None req-93f37293-ba2a-46e2-b06e-b00bfa78dc93 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec  6 02:46:57 np0005548731 nova_compute[232433]: 2025-12-06 07:46:57.552 232437 DEBUG nova.virt.hardware [None req-93f37293-ba2a-46e2-b06e-b00bfa78dc93 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec  6 02:46:57 np0005548731 nova_compute[232433]: 2025-12-06 07:46:57.552 232437 DEBUG nova.virt.hardware [None req-93f37293-ba2a-46e2-b06e-b00bfa78dc93 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec  6 02:46:57 np0005548731 nova_compute[232433]: 2025-12-06 07:46:57.552 232437 DEBUG nova.virt.hardware [None req-93f37293-ba2a-46e2-b06e-b00bfa78dc93 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec  6 02:46:57 np0005548731 nova_compute[232433]: 2025-12-06 07:46:57.553 232437 DEBUG nova.objects.instance [None req-93f37293-ba2a-46e2-b06e-b00bfa78dc93 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 2b4b3181-da12-4705-9215-7ee5b869102b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:46:57 np0005548731 nova_compute[232433]: 2025-12-06 07:46:57.574 232437 DEBUG oslo_concurrency.processutils [None req-93f37293-ba2a-46e2-b06e-b00bfa78dc93 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:46:57 np0005548731 nova_compute[232433]: 2025-12-06 07:46:57.694 232437 DEBUG nova.compute.manager [req-0814affc-4a28-437e-8456-d895a62d42a5 req-fe644e42-ee65-443d-8f77-26004a60ca20 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f67d89a8-836a-4f47-af8d-37cf99529275] Received event network-vif-plugged-d4087b80-19fa-44d3-8dd5-406c2b01b6f4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:46:57 np0005548731 nova_compute[232433]: 2025-12-06 07:46:57.695 232437 DEBUG oslo_concurrency.lockutils [req-0814affc-4a28-437e-8456-d895a62d42a5 req-fe644e42-ee65-443d-8f77-26004a60ca20 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "f67d89a8-836a-4f47-af8d-37cf99529275-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:46:57 np0005548731 nova_compute[232433]: 2025-12-06 07:46:57.695 232437 DEBUG oslo_concurrency.lockutils [req-0814affc-4a28-437e-8456-d895a62d42a5 req-fe644e42-ee65-443d-8f77-26004a60ca20 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "f67d89a8-836a-4f47-af8d-37cf99529275-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:46:57 np0005548731 nova_compute[232433]: 2025-12-06 07:46:57.695 232437 DEBUG oslo_concurrency.lockutils [req-0814affc-4a28-437e-8456-d895a62d42a5 req-fe644e42-ee65-443d-8f77-26004a60ca20 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "f67d89a8-836a-4f47-af8d-37cf99529275-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:46:57 np0005548731 nova_compute[232433]: 2025-12-06 07:46:57.696 232437 DEBUG nova.compute.manager [req-0814affc-4a28-437e-8456-d895a62d42a5 req-fe644e42-ee65-443d-8f77-26004a60ca20 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f67d89a8-836a-4f47-af8d-37cf99529275] No waiting events found dispatching network-vif-plugged-d4087b80-19fa-44d3-8dd5-406c2b01b6f4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 02:46:57 np0005548731 nova_compute[232433]: 2025-12-06 07:46:57.696 232437 WARNING nova.compute.manager [req-0814affc-4a28-437e-8456-d895a62d42a5 req-fe644e42-ee65-443d-8f77-26004a60ca20 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f67d89a8-836a-4f47-af8d-37cf99529275] Received unexpected event network-vif-plugged-d4087b80-19fa-44d3-8dd5-406c2b01b6f4 for instance with vm_state suspended and task_state resuming.#033[00m
Dec  6 02:46:57 np0005548731 nova_compute[232433]: 2025-12-06 07:46:57.696 232437 DEBUG nova.compute.manager [req-0814affc-4a28-437e-8456-d895a62d42a5 req-fe644e42-ee65-443d-8f77-26004a60ca20 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 2b4b3181-da12-4705-9215-7ee5b869102b] Received event network-vif-unplugged-879edf90-0812-4adf-b171-d993c6cfb26c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:46:57 np0005548731 nova_compute[232433]: 2025-12-06 07:46:57.696 232437 DEBUG oslo_concurrency.lockutils [req-0814affc-4a28-437e-8456-d895a62d42a5 req-fe644e42-ee65-443d-8f77-26004a60ca20 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "2b4b3181-da12-4705-9215-7ee5b869102b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:46:57 np0005548731 nova_compute[232433]: 2025-12-06 07:46:57.696 232437 DEBUG oslo_concurrency.lockutils [req-0814affc-4a28-437e-8456-d895a62d42a5 req-fe644e42-ee65-443d-8f77-26004a60ca20 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "2b4b3181-da12-4705-9215-7ee5b869102b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:46:57 np0005548731 nova_compute[232433]: 2025-12-06 07:46:57.697 232437 DEBUG oslo_concurrency.lockutils [req-0814affc-4a28-437e-8456-d895a62d42a5 req-fe644e42-ee65-443d-8f77-26004a60ca20 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "2b4b3181-da12-4705-9215-7ee5b869102b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:46:57 np0005548731 nova_compute[232433]: 2025-12-06 07:46:57.697 232437 DEBUG nova.compute.manager [req-0814affc-4a28-437e-8456-d895a62d42a5 req-fe644e42-ee65-443d-8f77-26004a60ca20 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 2b4b3181-da12-4705-9215-7ee5b869102b] No waiting events found dispatching network-vif-unplugged-879edf90-0812-4adf-b171-d993c6cfb26c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 02:46:57 np0005548731 nova_compute[232433]: 2025-12-06 07:46:57.697 232437 WARNING nova.compute.manager [req-0814affc-4a28-437e-8456-d895a62d42a5 req-fe644e42-ee65-443d-8f77-26004a60ca20 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 2b4b3181-da12-4705-9215-7ee5b869102b] Received unexpected event network-vif-unplugged-879edf90-0812-4adf-b171-d993c6cfb26c for instance with vm_state active and task_state rescuing.#033[00m
Dec  6 02:46:57 np0005548731 nova_compute[232433]: 2025-12-06 07:46:57.697 232437 DEBUG nova.compute.manager [req-0814affc-4a28-437e-8456-d895a62d42a5 req-fe644e42-ee65-443d-8f77-26004a60ca20 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 2b4b3181-da12-4705-9215-7ee5b869102b] Received event network-vif-plugged-879edf90-0812-4adf-b171-d993c6cfb26c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:46:57 np0005548731 nova_compute[232433]: 2025-12-06 07:46:57.697 232437 DEBUG oslo_concurrency.lockutils [req-0814affc-4a28-437e-8456-d895a62d42a5 req-fe644e42-ee65-443d-8f77-26004a60ca20 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "2b4b3181-da12-4705-9215-7ee5b869102b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:46:57 np0005548731 nova_compute[232433]: 2025-12-06 07:46:57.698 232437 DEBUG oslo_concurrency.lockutils [req-0814affc-4a28-437e-8456-d895a62d42a5 req-fe644e42-ee65-443d-8f77-26004a60ca20 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "2b4b3181-da12-4705-9215-7ee5b869102b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:46:57 np0005548731 nova_compute[232433]: 2025-12-06 07:46:57.698 232437 DEBUG oslo_concurrency.lockutils [req-0814affc-4a28-437e-8456-d895a62d42a5 req-fe644e42-ee65-443d-8f77-26004a60ca20 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "2b4b3181-da12-4705-9215-7ee5b869102b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:46:57 np0005548731 nova_compute[232433]: 2025-12-06 07:46:57.698 232437 DEBUG nova.compute.manager [req-0814affc-4a28-437e-8456-d895a62d42a5 req-fe644e42-ee65-443d-8f77-26004a60ca20 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 2b4b3181-da12-4705-9215-7ee5b869102b] No waiting events found dispatching network-vif-plugged-879edf90-0812-4adf-b171-d993c6cfb26c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 02:46:57 np0005548731 nova_compute[232433]: 2025-12-06 07:46:57.698 232437 WARNING nova.compute.manager [req-0814affc-4a28-437e-8456-d895a62d42a5 req-fe644e42-ee65-443d-8f77-26004a60ca20 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 2b4b3181-da12-4705-9215-7ee5b869102b] Received unexpected event network-vif-plugged-879edf90-0812-4adf-b171-d993c6cfb26c for instance with vm_state active and task_state rescuing.#033[00m
Dec  6 02:46:57 np0005548731 nova_compute[232433]: 2025-12-06 07:46:57.966 232437 DEBUG nova.network.neutron [None req-ba191309-52e7-4450-bfae-7e9029eb2973 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] [instance: f67d89a8-836a-4f47-af8d-37cf99529275] Updating instance_info_cache with network_info: [{"id": "d4087b80-19fa-44d3-8dd5-406c2b01b6f4", "address": "fa:16:3e:16:41:1e", "network": {"id": "67a02abd-6f15-4e26-ba0d-8a091ca98239", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-840496237-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4ec9294f6d4b4f44a72414374d646a4a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd4087b80-19", "ovs_interfaceid": "d4087b80-19fa-44d3-8dd5-406c2b01b6f4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 02:46:57 np0005548731 nova_compute[232433]: 2025-12-06 07:46:57.989 232437 DEBUG oslo_concurrency.lockutils [None req-ba191309-52e7-4450-bfae-7e9029eb2973 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] Releasing lock "refresh_cache-f67d89a8-836a-4f47-af8d-37cf99529275" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 02:46:57 np0005548731 nova_compute[232433]: 2025-12-06 07:46:57.995 232437 DEBUG nova.virt.libvirt.vif [None req-ba191309-52e7-4450-bfae-7e9029eb2973 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-12-06T07:43:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersNegativeTestJSON-server-2113241509',display_name='tempest-ServersNegativeTestJSON-server-2113241509',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serversnegativetestjson-server-2113241509',id=153,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-06T07:46:46Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='4ec9294f6d4b4f44a72414374d646a4a',ramdisk_id='',reservation_id='r-r3odtvmp',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vi
f_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-ServersNegativeTestJSON-776446295',owner_user_name='tempest-ServersNegativeTestJSON-776446295-project-member'},tags=<?>,task_state='resuming',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T07:46:54Z,user_data=None,user_id='1035ecd55ed54b57aa35fe32fb915cc5',uuid=f67d89a8-836a-4f47-af8d-37cf99529275,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='suspended') vif={"id": "d4087b80-19fa-44d3-8dd5-406c2b01b6f4", "address": "fa:16:3e:16:41:1e", "network": {"id": "67a02abd-6f15-4e26-ba0d-8a091ca98239", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-840496237-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4ec9294f6d4b4f44a72414374d646a4a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd4087b80-19", "ovs_interfaceid": "d4087b80-19fa-44d3-8dd5-406c2b01b6f4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec  6 02:46:57 np0005548731 nova_compute[232433]: 2025-12-06 07:46:57.995 232437 DEBUG nova.network.os_vif_util [None req-ba191309-52e7-4450-bfae-7e9029eb2973 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] Converting VIF {"id": "d4087b80-19fa-44d3-8dd5-406c2b01b6f4", "address": "fa:16:3e:16:41:1e", "network": {"id": "67a02abd-6f15-4e26-ba0d-8a091ca98239", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-840496237-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4ec9294f6d4b4f44a72414374d646a4a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd4087b80-19", "ovs_interfaceid": "d4087b80-19fa-44d3-8dd5-406c2b01b6f4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 02:46:57 np0005548731 nova_compute[232433]: 2025-12-06 07:46:57.996 232437 DEBUG nova.network.os_vif_util [None req-ba191309-52e7-4450-bfae-7e9029eb2973 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:16:41:1e,bridge_name='br-int',has_traffic_filtering=True,id=d4087b80-19fa-44d3-8dd5-406c2b01b6f4,network=Network(67a02abd-6f15-4e26-ba0d-8a091ca98239),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd4087b80-19') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 02:46:57 np0005548731 nova_compute[232433]: 2025-12-06 07:46:57.997 232437 DEBUG os_vif [None req-ba191309-52e7-4450-bfae-7e9029eb2973 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:16:41:1e,bridge_name='br-int',has_traffic_filtering=True,id=d4087b80-19fa-44d3-8dd5-406c2b01b6f4,network=Network(67a02abd-6f15-4e26-ba0d-8a091ca98239),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd4087b80-19') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec  6 02:46:57 np0005548731 nova_compute[232433]: 2025-12-06 07:46:57.997 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:46:57 np0005548731 nova_compute[232433]: 2025-12-06 07:46:57.998 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:46:57 np0005548731 nova_compute[232433]: 2025-12-06 07:46:57.998 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  6 02:46:58 np0005548731 nova_compute[232433]: 2025-12-06 07:46:58.000 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:46:58 np0005548731 nova_compute[232433]: 2025-12-06 07:46:58.000 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd4087b80-19, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:46:58 np0005548731 nova_compute[232433]: 2025-12-06 07:46:58.001 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapd4087b80-19, col_values=(('external_ids', {'iface-id': 'd4087b80-19fa-44d3-8dd5-406c2b01b6f4', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:16:41:1e', 'vm-uuid': 'f67d89a8-836a-4f47-af8d-37cf99529275'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:46:58 np0005548731 nova_compute[232433]: 2025-12-06 07:46:58.001 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  6 02:46:58 np0005548731 nova_compute[232433]: 2025-12-06 07:46:58.001 232437 INFO os_vif [None req-ba191309-52e7-4450-bfae-7e9029eb2973 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:16:41:1e,bridge_name='br-int',has_traffic_filtering=True,id=d4087b80-19fa-44d3-8dd5-406c2b01b6f4,network=Network(67a02abd-6f15-4e26-ba0d-8a091ca98239),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd4087b80-19')#033[00m
Dec  6 02:46:58 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  6 02:46:58 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1548366512' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  6 02:46:58 np0005548731 nova_compute[232433]: 2025-12-06 07:46:58.022 232437 DEBUG nova.objects.instance [None req-ba191309-52e7-4450-bfae-7e9029eb2973 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] Lazy-loading 'numa_topology' on Instance uuid f67d89a8-836a-4f47-af8d-37cf99529275 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:46:58 np0005548731 nova_compute[232433]: 2025-12-06 07:46:58.039 232437 DEBUG oslo_concurrency.processutils [None req-93f37293-ba2a-46e2-b06e-b00bfa78dc93 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.465s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:46:58 np0005548731 nova_compute[232433]: 2025-12-06 07:46:58.075 232437 DEBUG oslo_concurrency.processutils [None req-93f37293-ba2a-46e2-b06e-b00bfa78dc93 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:46:58 np0005548731 NetworkManager[49182]: <info>  [1765007218.1138] manager: (tapd4087b80-19): new Tun device (/org/freedesktop/NetworkManager/Devices/345)
Dec  6 02:46:58 np0005548731 kernel: tapd4087b80-19: entered promiscuous mode
Dec  6 02:46:58 np0005548731 ovn_controller[133927]: 2025-12-06T07:46:58Z|00748|binding|INFO|Claiming lport d4087b80-19fa-44d3-8dd5-406c2b01b6f4 for this chassis.
Dec  6 02:46:58 np0005548731 ovn_controller[133927]: 2025-12-06T07:46:58Z|00749|binding|INFO|d4087b80-19fa-44d3-8dd5-406c2b01b6f4: Claiming fa:16:3e:16:41:1e 10.100.0.7
Dec  6 02:46:58 np0005548731 nova_compute[232433]: 2025-12-06 07:46:58.180 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:46:58 np0005548731 ovn_controller[133927]: 2025-12-06T07:46:58Z|00750|binding|INFO|Setting lport d4087b80-19fa-44d3-8dd5-406c2b01b6f4 ovn-installed in OVS
Dec  6 02:46:58 np0005548731 nova_compute[232433]: 2025-12-06 07:46:58.202 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:46:58 np0005548731 systemd-udevd[303928]: Network interface NamePolicy= disabled on kernel command line.
Dec  6 02:46:58 np0005548731 nova_compute[232433]: 2025-12-06 07:46:58.208 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:46:58 np0005548731 NetworkManager[49182]: <info>  [1765007218.2182] device (tapd4087b80-19): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec  6 02:46:58 np0005548731 NetworkManager[49182]: <info>  [1765007218.2194] device (tapd4087b80-19): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec  6 02:46:58 np0005548731 systemd-machined[195355]: New machine qemu-76-instance-00000099.
Dec  6 02:46:58 np0005548731 systemd[1]: Started Virtual Machine qemu-76-instance-00000099.
Dec  6 02:46:58 np0005548731 ovn_controller[133927]: 2025-12-06T07:46:58Z|00751|binding|INFO|Setting lport d4087b80-19fa-44d3-8dd5-406c2b01b6f4 up in Southbound
Dec  6 02:46:58 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:46:58.284 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:16:41:1e 10.100.0.7'], port_security=['fa:16:3e:16:41:1e 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'f67d89a8-836a-4f47-af8d-37cf99529275', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-67a02abd-6f15-4e26-ba0d-8a091ca98239', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4ec9294f6d4b4f44a72414374d646a4a', 'neutron:revision_number': '10', 'neutron:security_group_ids': 'ef8d01c6-8791-4eef-8d34-906ccbdd6237', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a13aa6e4-a519-406f-87b7-05ba3d74a296, chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], logical_port=d4087b80-19fa-44d3-8dd5-406c2b01b6f4) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 02:46:58 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:46:58.285 143965 INFO neutron.agent.ovn.metadata.agent [-] Port d4087b80-19fa-44d3-8dd5-406c2b01b6f4 in datapath 67a02abd-6f15-4e26-ba0d-8a091ca98239 bound to our chassis#033[00m
Dec  6 02:46:58 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:46:58.287 143965 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 67a02abd-6f15-4e26-ba0d-8a091ca98239#033[00m
Dec  6 02:46:58 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:46:58.299 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[81d207e4-20db-49c5-a6a5-9df192404aab]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:46:58 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:46:58.300 143965 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap67a02abd-61 in ovnmeta-67a02abd-6f15-4e26-ba0d-8a091ca98239 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Dec  6 02:46:58 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:46:58.301 236668 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap67a02abd-60 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Dec  6 02:46:58 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:46:58.301 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[a817d455-206b-4acd-bbb5-9af379bbc1d9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:46:58 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:46:58.302 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[4d1d44fb-e392-48f0-a0e8-73cffeb20b57]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:46:58 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:46:58.312 144079 DEBUG oslo.privsep.daemon [-] privsep: reply[e3598d73-e41d-42bd-b5c0-2fc7fc769052]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:46:58 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:46:58.336 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[4c9e007f-e254-4d4a-abfd-30ce87822c14]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:46:58 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:46:58.365 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[61d9e3b0-39e5-4509-84ea-a4024e140277]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:46:58 np0005548731 NetworkManager[49182]: <info>  [1765007218.3746] manager: (tap67a02abd-60): new Veth device (/org/freedesktop/NetworkManager/Devices/346)
Dec  6 02:46:58 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:46:58.374 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[554f5f25-64e3-40a0-8d24-74d43a48e368]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:46:58 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:46:58.418 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[49a50dbb-dc4d-4231-85aa-ec84bbf0b608]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:46:58 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:46:58.421 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[9c68d678-aeab-4302-9116-9de1ce8c7d11]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:46:58 np0005548731 NetworkManager[49182]: <info>  [1765007218.4478] device (tap67a02abd-60): carrier: link connected
Dec  6 02:46:58 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:46:58.455 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[d4e36490-97f6-436c-8931-38de1ee1c0ed]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:46:58 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:46:58.476 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[5cad2b8e-b544-4c98-b7dd-01196668af2a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap67a02abd-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:43:89:ec'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 228], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 748045, 'reachable_time': 31084, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 303978, 'error': None, 'target': 'ovnmeta-67a02abd-6f15-4e26-ba0d-8a091ca98239', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:46:58 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:46:58.499 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[2c543938-461e-4955-a63c-ed8a226c60f2]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe43:89ec'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 748045, 'tstamp': 748045}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 303979, 'error': None, 'target': 'ovnmeta-67a02abd-6f15-4e26-ba0d-8a091ca98239', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:46:58 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:46:58.521 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[88455f81-c2b7-44a6-8665-1fbd36695a53]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap67a02abd-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:43:89:ec'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 228], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 748045, 'reachable_time': 31084, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 148, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 148, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 303980, 'error': None, 'target': 'ovnmeta-67a02abd-6f15-4e26-ba0d-8a091ca98239', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:46:58 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:46:58.551 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[0202fa27-b06a-4e01-aa84-22b0f6b2370c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:46:58 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:46:58 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:46:58 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:46:58.592 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:46:58 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:46:58.609 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[cacec16d-5b01-4de7-aa08-97a2c98a370a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:46:58 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:46:58.611 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap67a02abd-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:46:58 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:46:58.611 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  6 02:46:58 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:46:58.612 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap67a02abd-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:46:58 np0005548731 nova_compute[232433]: 2025-12-06 07:46:58.613 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:46:58 np0005548731 NetworkManager[49182]: <info>  [1765007218.6140] manager: (tap67a02abd-60): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/347)
Dec  6 02:46:58 np0005548731 kernel: tap67a02abd-60: entered promiscuous mode
Dec  6 02:46:58 np0005548731 nova_compute[232433]: 2025-12-06 07:46:58.619 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:46:58 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:46:58.620 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap67a02abd-60, col_values=(('external_ids', {'iface-id': 'f1ca157c-f88b-4351-b03a-b04a75537062'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:46:58 np0005548731 nova_compute[232433]: 2025-12-06 07:46:58.621 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:46:58 np0005548731 ovn_controller[133927]: 2025-12-06T07:46:58Z|00752|binding|INFO|Releasing lport f1ca157c-f88b-4351-b03a-b04a75537062 from this chassis (sb_readonly=0)
Dec  6 02:46:58 np0005548731 nova_compute[232433]: 2025-12-06 07:46:58.636 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:46:58 np0005548731 nova_compute[232433]: 2025-12-06 07:46:58.639 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:46:58 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:46:58.640 143965 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/67a02abd-6f15-4e26-ba0d-8a091ca98239.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/67a02abd-6f15-4e26-ba0d-8a091ca98239.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Dec  6 02:46:58 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:46:58.641 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[837f2d53-7719-4ad8-bfd0-bbb1ce8377f4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:46:58 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:46:58.641 143965 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec  6 02:46:58 np0005548731 ovn_metadata_agent[143960]: global
Dec  6 02:46:58 np0005548731 ovn_metadata_agent[143960]:    log         /dev/log local0 debug
Dec  6 02:46:58 np0005548731 ovn_metadata_agent[143960]:    log-tag     haproxy-metadata-proxy-67a02abd-6f15-4e26-ba0d-8a091ca98239
Dec  6 02:46:58 np0005548731 ovn_metadata_agent[143960]:    user        root
Dec  6 02:46:58 np0005548731 ovn_metadata_agent[143960]:    group       root
Dec  6 02:46:58 np0005548731 ovn_metadata_agent[143960]:    maxconn     1024
Dec  6 02:46:58 np0005548731 ovn_metadata_agent[143960]:    pidfile     /var/lib/neutron/external/pids/67a02abd-6f15-4e26-ba0d-8a091ca98239.pid.haproxy
Dec  6 02:46:58 np0005548731 ovn_metadata_agent[143960]:    daemon
Dec  6 02:46:58 np0005548731 ovn_metadata_agent[143960]: 
Dec  6 02:46:58 np0005548731 ovn_metadata_agent[143960]: defaults
Dec  6 02:46:58 np0005548731 ovn_metadata_agent[143960]:    log global
Dec  6 02:46:58 np0005548731 ovn_metadata_agent[143960]:    mode http
Dec  6 02:46:58 np0005548731 ovn_metadata_agent[143960]:    option httplog
Dec  6 02:46:58 np0005548731 ovn_metadata_agent[143960]:    option dontlognull
Dec  6 02:46:58 np0005548731 ovn_metadata_agent[143960]:    option http-server-close
Dec  6 02:46:58 np0005548731 ovn_metadata_agent[143960]:    option forwardfor
Dec  6 02:46:58 np0005548731 ovn_metadata_agent[143960]:    retries                 3
Dec  6 02:46:58 np0005548731 ovn_metadata_agent[143960]:    timeout http-request    30s
Dec  6 02:46:58 np0005548731 ovn_metadata_agent[143960]:    timeout connect         30s
Dec  6 02:46:58 np0005548731 ovn_metadata_agent[143960]:    timeout client          32s
Dec  6 02:46:58 np0005548731 ovn_metadata_agent[143960]:    timeout server          32s
Dec  6 02:46:58 np0005548731 ovn_metadata_agent[143960]:    timeout http-keep-alive 30s
Dec  6 02:46:58 np0005548731 ovn_metadata_agent[143960]: 
Dec  6 02:46:58 np0005548731 ovn_metadata_agent[143960]: 
Dec  6 02:46:58 np0005548731 ovn_metadata_agent[143960]: listen listener
Dec  6 02:46:58 np0005548731 ovn_metadata_agent[143960]:    bind 169.254.169.254:80
Dec  6 02:46:58 np0005548731 ovn_metadata_agent[143960]:    server metadata /var/lib/neutron/metadata_proxy
Dec  6 02:46:58 np0005548731 ovn_metadata_agent[143960]:    http-request add-header X-OVN-Network-ID 67a02abd-6f15-4e26-ba0d-8a091ca98239
Dec  6 02:46:58 np0005548731 ovn_metadata_agent[143960]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Dec  6 02:46:58 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:46:58.642 143965 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-67a02abd-6f15-4e26-ba0d-8a091ca98239', 'env', 'PROCESS_TAG=haproxy-67a02abd-6f15-4e26-ba0d-8a091ca98239', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/67a02abd-6f15-4e26-ba0d-8a091ca98239.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Dec  6 02:46:58 np0005548731 nova_compute[232433]: 2025-12-06 07:46:58.723 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:46:58 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  6 02:46:58 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1075072744' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  6 02:46:58 np0005548731 nova_compute[232433]: 2025-12-06 07:46:58.966 232437 DEBUG oslo_concurrency.processutils [None req-93f37293-ba2a-46e2-b06e-b00bfa78dc93 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.891s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:46:58 np0005548731 nova_compute[232433]: 2025-12-06 07:46:58.996 232437 DEBUG oslo_concurrency.processutils [None req-93f37293-ba2a-46e2-b06e-b00bfa78dc93 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:46:59 np0005548731 podman[304029]: 2025-12-06 07:46:59.058648309 +0000 UTC m=+0.096595696 container create 8fba2e2fdc7d60b3ca7ba34f6e4cbf1f918e754d3affc1a12d20397815173b03 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-67a02abd-6f15-4e26-ba0d-8a091ca98239, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125)
Dec  6 02:46:59 np0005548731 podman[304029]: 2025-12-06 07:46:58.987418602 +0000 UTC m=+0.025366009 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec  6 02:46:59 np0005548731 systemd[1]: Started libpod-conmon-8fba2e2fdc7d60b3ca7ba34f6e4cbf1f918e754d3affc1a12d20397815173b03.scope.
Dec  6 02:46:59 np0005548731 systemd[1]: Started libcrun container.
Dec  6 02:46:59 np0005548731 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/361f1b5329e450f33d13583799db1087f543b008b246e8b5ab20dcf9869d0c53/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec  6 02:46:59 np0005548731 podman[304029]: 2025-12-06 07:46:59.172941725 +0000 UTC m=+0.210889122 container init 8fba2e2fdc7d60b3ca7ba34f6e4cbf1f918e754d3affc1a12d20397815173b03 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-67a02abd-6f15-4e26-ba0d-8a091ca98239, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec  6 02:46:59 np0005548731 podman[304029]: 2025-12-06 07:46:59.179577047 +0000 UTC m=+0.217524434 container start 8fba2e2fdc7d60b3ca7ba34f6e4cbf1f918e754d3affc1a12d20397815173b03 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-67a02abd-6f15-4e26-ba0d-8a091ca98239, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Dec  6 02:46:59 np0005548731 neutron-haproxy-ovnmeta-67a02abd-6f15-4e26-ba0d-8a091ca98239[304069]: [NOTICE]   (304073) : New worker (304075) forked
Dec  6 02:46:59 np0005548731 neutron-haproxy-ovnmeta-67a02abd-6f15-4e26-ba0d-8a091ca98239[304069]: [NOTICE]   (304073) : Loading success.
Dec  6 02:46:59 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  6 02:46:59 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3035636638' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  6 02:46:59 np0005548731 nova_compute[232433]: 2025-12-06 07:46:59.439 232437 DEBUG oslo_concurrency.processutils [None req-93f37293-ba2a-46e2-b06e-b00bfa78dc93 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.443s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:46:59 np0005548731 nova_compute[232433]: 2025-12-06 07:46:59.441 232437 DEBUG nova.virt.libvirt.vif [None req-93f37293-ba2a-46e2-b06e-b00bfa78dc93 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T07:46:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerStableDeviceRescueTest-server-1284166049',display_name='tempest-ServerStableDeviceRescueTest-server-1284166049',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serverstabledevicerescuetest-server-1284166049',id=159,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGiOM3BHZdMb9T+i2z28aV9d25R+ylqHokCUcVIaz2h7DQq++pQYpAQHDFB53HHc1FWwSamucUXGjXIy3rSPSRgsSGsD3yr/u/2kWMgZhyKrH9WAv41/n4dRvjMhKO5fVA==',key_name='tempest-keypair-1110836856',keypairs=<?>,launch_index=0,launched_at=2025-12-06T07:46:12Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='f44ecb8bdc7e4692a299e29603301124',ramdisk_id='',reservation_id='r-73o1o3hz',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerStableDeviceRescueTest-1830949011',owner_user_name='tempest-ServerStableDeviceRescueTest-1830949011-project-member'},tags=<?>,task_state='rescuing',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:46:40Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='e997a5eeee174b368a43ed8cb35fa1d0',uuid=2b4b3181-da12-4705-9215-7ee5b869102b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "879edf90-0812-4adf-b171-d993c6cfb26c", "address": "fa:16:3e:9c:dc:f4", "network": {"id": "6d1a17d6-5e44-40b7-832a-81cb86c02e71", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-1698704235-network", "subnets": [{"cidr": 
"10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.221", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-ServerStableDeviceRescueTest-1698704235-network", "vif_mac": "fa:16:3e:9c:dc:f4"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f44ecb8bdc7e4692a299e29603301124", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap879edf90-08", "ovs_interfaceid": "879edf90-0812-4adf-b171-d993c6cfb26c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec  6 02:46:59 np0005548731 nova_compute[232433]: 2025-12-06 07:46:59.442 232437 DEBUG nova.network.os_vif_util [None req-93f37293-ba2a-46e2-b06e-b00bfa78dc93 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Converting VIF {"id": "879edf90-0812-4adf-b171-d993c6cfb26c", "address": "fa:16:3e:9c:dc:f4", "network": {"id": "6d1a17d6-5e44-40b7-832a-81cb86c02e71", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-1698704235-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.221", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-ServerStableDeviceRescueTest-1698704235-network", "vif_mac": "fa:16:3e:9c:dc:f4"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f44ecb8bdc7e4692a299e29603301124", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap879edf90-08", "ovs_interfaceid": "879edf90-0812-4adf-b171-d993c6cfb26c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 02:46:59 np0005548731 nova_compute[232433]: 2025-12-06 07:46:59.443 232437 DEBUG nova.network.os_vif_util [None req-93f37293-ba2a-46e2-b06e-b00bfa78dc93 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:9c:dc:f4,bridge_name='br-int',has_traffic_filtering=True,id=879edf90-0812-4adf-b171-d993c6cfb26c,network=Network(6d1a17d6-5e44-40b7-832a-81cb86c02e71),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap879edf90-08') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 02:46:59 np0005548731 nova_compute[232433]: 2025-12-06 07:46:59.444 232437 DEBUG nova.objects.instance [None req-93f37293-ba2a-46e2-b06e-b00bfa78dc93 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Lazy-loading 'pci_devices' on Instance uuid 2b4b3181-da12-4705-9215-7ee5b869102b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:46:59 np0005548731 nova_compute[232433]: 2025-12-06 07:46:59.459 232437 DEBUG nova.virt.libvirt.driver [None req-93f37293-ba2a-46e2-b06e-b00bfa78dc93 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] [instance: 2b4b3181-da12-4705-9215-7ee5b869102b] End _get_guest_xml xml=<domain type="kvm">
Dec  6 02:46:59 np0005548731 nova_compute[232433]:  <uuid>2b4b3181-da12-4705-9215-7ee5b869102b</uuid>
Dec  6 02:46:59 np0005548731 nova_compute[232433]:  <name>instance-0000009f</name>
Dec  6 02:46:59 np0005548731 nova_compute[232433]:  <memory>131072</memory>
Dec  6 02:46:59 np0005548731 nova_compute[232433]:  <vcpu>1</vcpu>
Dec  6 02:46:59 np0005548731 nova_compute[232433]:  <metadata>
Dec  6 02:46:59 np0005548731 nova_compute[232433]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec  6 02:46:59 np0005548731 nova_compute[232433]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec  6 02:46:59 np0005548731 nova_compute[232433]:      <nova:name>tempest-ServerStableDeviceRescueTest-server-1284166049</nova:name>
Dec  6 02:46:59 np0005548731 nova_compute[232433]:      <nova:creationTime>2025-12-06 07:46:57</nova:creationTime>
Dec  6 02:46:59 np0005548731 nova_compute[232433]:      <nova:flavor name="m1.nano">
Dec  6 02:46:59 np0005548731 nova_compute[232433]:        <nova:memory>128</nova:memory>
Dec  6 02:46:59 np0005548731 nova_compute[232433]:        <nova:disk>1</nova:disk>
Dec  6 02:46:59 np0005548731 nova_compute[232433]:        <nova:swap>0</nova:swap>
Dec  6 02:46:59 np0005548731 nova_compute[232433]:        <nova:ephemeral>0</nova:ephemeral>
Dec  6 02:46:59 np0005548731 nova_compute[232433]:        <nova:vcpus>1</nova:vcpus>
Dec  6 02:46:59 np0005548731 nova_compute[232433]:      </nova:flavor>
Dec  6 02:46:59 np0005548731 nova_compute[232433]:      <nova:owner>
Dec  6 02:46:59 np0005548731 nova_compute[232433]:        <nova:user uuid="e997a5eeee174b368a43ed8cb35fa1d0">tempest-ServerStableDeviceRescueTest-1830949011-project-member</nova:user>
Dec  6 02:46:59 np0005548731 nova_compute[232433]:        <nova:project uuid="f44ecb8bdc7e4692a299e29603301124">tempest-ServerStableDeviceRescueTest-1830949011</nova:project>
Dec  6 02:46:59 np0005548731 nova_compute[232433]:      </nova:owner>
Dec  6 02:46:59 np0005548731 nova_compute[232433]:      <nova:root type="image" uuid="6efab05d-c7cf-4770-a5c3-c806a2739063"/>
Dec  6 02:46:59 np0005548731 nova_compute[232433]:      <nova:ports>
Dec  6 02:46:59 np0005548731 nova_compute[232433]:        <nova:port uuid="879edf90-0812-4adf-b171-d993c6cfb26c">
Dec  6 02:46:59 np0005548731 nova_compute[232433]:          <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Dec  6 02:46:59 np0005548731 nova_compute[232433]:        </nova:port>
Dec  6 02:46:59 np0005548731 nova_compute[232433]:      </nova:ports>
Dec  6 02:46:59 np0005548731 nova_compute[232433]:    </nova:instance>
Dec  6 02:46:59 np0005548731 nova_compute[232433]:  </metadata>
Dec  6 02:46:59 np0005548731 nova_compute[232433]:  <sysinfo type="smbios">
Dec  6 02:46:59 np0005548731 nova_compute[232433]:    <system>
Dec  6 02:46:59 np0005548731 nova_compute[232433]:      <entry name="manufacturer">RDO</entry>
Dec  6 02:46:59 np0005548731 nova_compute[232433]:      <entry name="product">OpenStack Compute</entry>
Dec  6 02:46:59 np0005548731 nova_compute[232433]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec  6 02:46:59 np0005548731 nova_compute[232433]:      <entry name="serial">2b4b3181-da12-4705-9215-7ee5b869102b</entry>
Dec  6 02:46:59 np0005548731 nova_compute[232433]:      <entry name="uuid">2b4b3181-da12-4705-9215-7ee5b869102b</entry>
Dec  6 02:46:59 np0005548731 nova_compute[232433]:      <entry name="family">Virtual Machine</entry>
Dec  6 02:46:59 np0005548731 nova_compute[232433]:    </system>
Dec  6 02:46:59 np0005548731 nova_compute[232433]:  </sysinfo>
Dec  6 02:46:59 np0005548731 nova_compute[232433]:  <os>
Dec  6 02:46:59 np0005548731 nova_compute[232433]:    <type arch="x86_64" machine="q35">hvm</type>
Dec  6 02:46:59 np0005548731 nova_compute[232433]:    <smbios mode="sysinfo"/>
Dec  6 02:46:59 np0005548731 nova_compute[232433]:  </os>
Dec  6 02:46:59 np0005548731 nova_compute[232433]:  <features>
Dec  6 02:46:59 np0005548731 nova_compute[232433]:    <acpi/>
Dec  6 02:46:59 np0005548731 nova_compute[232433]:    <apic/>
Dec  6 02:46:59 np0005548731 nova_compute[232433]:    <vmcoreinfo/>
Dec  6 02:46:59 np0005548731 nova_compute[232433]:  </features>
Dec  6 02:46:59 np0005548731 nova_compute[232433]:  <clock offset="utc">
Dec  6 02:46:59 np0005548731 nova_compute[232433]:    <timer name="pit" tickpolicy="delay"/>
Dec  6 02:46:59 np0005548731 nova_compute[232433]:    <timer name="rtc" tickpolicy="catchup"/>
Dec  6 02:46:59 np0005548731 nova_compute[232433]:    <timer name="hpet" present="no"/>
Dec  6 02:46:59 np0005548731 nova_compute[232433]:  </clock>
Dec  6 02:46:59 np0005548731 nova_compute[232433]:  <cpu mode="custom" match="exact">
Dec  6 02:46:59 np0005548731 nova_compute[232433]:    <model>Nehalem</model>
Dec  6 02:46:59 np0005548731 nova_compute[232433]:    <topology sockets="1" cores="1" threads="1"/>
Dec  6 02:46:59 np0005548731 nova_compute[232433]:  </cpu>
Dec  6 02:46:59 np0005548731 nova_compute[232433]:  <devices>
Dec  6 02:46:59 np0005548731 nova_compute[232433]:    <disk type="network" device="disk">
Dec  6 02:46:59 np0005548731 nova_compute[232433]:      <driver type="raw" cache="none"/>
Dec  6 02:46:59 np0005548731 nova_compute[232433]:      <source protocol="rbd" name="vms/2b4b3181-da12-4705-9215-7ee5b869102b_disk">
Dec  6 02:46:59 np0005548731 nova_compute[232433]:        <host name="192.168.122.100" port="6789"/>
Dec  6 02:46:59 np0005548731 nova_compute[232433]:        <host name="192.168.122.102" port="6789"/>
Dec  6 02:46:59 np0005548731 nova_compute[232433]:        <host name="192.168.122.101" port="6789"/>
Dec  6 02:46:59 np0005548731 nova_compute[232433]:      </source>
Dec  6 02:46:59 np0005548731 nova_compute[232433]:      <auth username="openstack">
Dec  6 02:46:59 np0005548731 nova_compute[232433]:        <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec  6 02:46:59 np0005548731 nova_compute[232433]:      </auth>
Dec  6 02:46:59 np0005548731 nova_compute[232433]:      <target dev="vda" bus="virtio"/>
Dec  6 02:46:59 np0005548731 nova_compute[232433]:    </disk>
Dec  6 02:46:59 np0005548731 nova_compute[232433]:    <disk type="network" device="cdrom">
Dec  6 02:46:59 np0005548731 nova_compute[232433]:      <driver type="raw" cache="none"/>
Dec  6 02:46:59 np0005548731 nova_compute[232433]:      <source protocol="rbd" name="vms/2b4b3181-da12-4705-9215-7ee5b869102b_disk.config">
Dec  6 02:46:59 np0005548731 nova_compute[232433]:        <host name="192.168.122.100" port="6789"/>
Dec  6 02:46:59 np0005548731 nova_compute[232433]:        <host name="192.168.122.102" port="6789"/>
Dec  6 02:46:59 np0005548731 nova_compute[232433]:        <host name="192.168.122.101" port="6789"/>
Dec  6 02:46:59 np0005548731 nova_compute[232433]:      </source>
Dec  6 02:46:59 np0005548731 nova_compute[232433]:      <auth username="openstack">
Dec  6 02:46:59 np0005548731 nova_compute[232433]:        <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec  6 02:46:59 np0005548731 nova_compute[232433]:      </auth>
Dec  6 02:46:59 np0005548731 nova_compute[232433]:      <target dev="sda" bus="sata"/>
Dec  6 02:46:59 np0005548731 nova_compute[232433]:    </disk>
Dec  6 02:46:59 np0005548731 nova_compute[232433]:    <disk type="network" device="disk">
Dec  6 02:46:59 np0005548731 nova_compute[232433]:      <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Dec  6 02:46:59 np0005548731 nova_compute[232433]:      <source protocol="rbd" name="volumes/volume-1392b4ff-bd9d-4acf-a769-40407523ee0a">
Dec  6 02:46:59 np0005548731 nova_compute[232433]:        <host name="192.168.122.100" port="6789"/>
Dec  6 02:46:59 np0005548731 nova_compute[232433]:        <host name="192.168.122.102" port="6789"/>
Dec  6 02:46:59 np0005548731 nova_compute[232433]:        <host name="192.168.122.101" port="6789"/>
Dec  6 02:46:59 np0005548731 nova_compute[232433]:      </source>
Dec  6 02:46:59 np0005548731 nova_compute[232433]:      <auth username="openstack">
Dec  6 02:46:59 np0005548731 nova_compute[232433]:        <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec  6 02:46:59 np0005548731 nova_compute[232433]:      </auth>
Dec  6 02:46:59 np0005548731 nova_compute[232433]:      <target dev="vdb" bus="virtio"/>
Dec  6 02:46:59 np0005548731 nova_compute[232433]:      <serial>1392b4ff-bd9d-4acf-a769-40407523ee0a</serial>
Dec  6 02:46:59 np0005548731 nova_compute[232433]:    </disk>
Dec  6 02:46:59 np0005548731 nova_compute[232433]:    <disk type="network" device="disk">
Dec  6 02:46:59 np0005548731 nova_compute[232433]:      <driver type="raw" cache="none"/>
Dec  6 02:46:59 np0005548731 nova_compute[232433]:      <source protocol="rbd" name="vms/2b4b3181-da12-4705-9215-7ee5b869102b_disk.rescue">
Dec  6 02:46:59 np0005548731 nova_compute[232433]:        <host name="192.168.122.100" port="6789"/>
Dec  6 02:46:59 np0005548731 nova_compute[232433]:        <host name="192.168.122.102" port="6789"/>
Dec  6 02:46:59 np0005548731 nova_compute[232433]:        <host name="192.168.122.101" port="6789"/>
Dec  6 02:46:59 np0005548731 nova_compute[232433]:      </source>
Dec  6 02:46:59 np0005548731 nova_compute[232433]:      <auth username="openstack">
Dec  6 02:46:59 np0005548731 nova_compute[232433]:        <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec  6 02:46:59 np0005548731 nova_compute[232433]:      </auth>
Dec  6 02:46:59 np0005548731 nova_compute[232433]:      <target dev="vdc" bus="virtio"/>
Dec  6 02:46:59 np0005548731 nova_compute[232433]:      <boot order="1"/>
Dec  6 02:46:59 np0005548731 nova_compute[232433]:    </disk>
Dec  6 02:46:59 np0005548731 nova_compute[232433]:    <interface type="ethernet">
Dec  6 02:46:59 np0005548731 nova_compute[232433]:      <mac address="fa:16:3e:9c:dc:f4"/>
Dec  6 02:46:59 np0005548731 nova_compute[232433]:      <model type="virtio"/>
Dec  6 02:46:59 np0005548731 nova_compute[232433]:      <driver name="vhost" rx_queue_size="512"/>
Dec  6 02:46:59 np0005548731 nova_compute[232433]:      <mtu size="1442"/>
Dec  6 02:46:59 np0005548731 nova_compute[232433]:      <target dev="tap879edf90-08"/>
Dec  6 02:46:59 np0005548731 nova_compute[232433]:    </interface>
Dec  6 02:46:59 np0005548731 nova_compute[232433]:    <serial type="pty">
Dec  6 02:46:59 np0005548731 nova_compute[232433]:      <log file="/var/lib/nova/instances/2b4b3181-da12-4705-9215-7ee5b869102b/console.log" append="off"/>
Dec  6 02:46:59 np0005548731 nova_compute[232433]:    </serial>
Dec  6 02:46:59 np0005548731 nova_compute[232433]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec  6 02:46:59 np0005548731 nova_compute[232433]:    <video>
Dec  6 02:46:59 np0005548731 nova_compute[232433]:      <model type="virtio"/>
Dec  6 02:46:59 np0005548731 nova_compute[232433]:    </video>
Dec  6 02:46:59 np0005548731 nova_compute[232433]:    <input type="tablet" bus="usb"/>
Dec  6 02:46:59 np0005548731 nova_compute[232433]:    <rng model="virtio">
Dec  6 02:46:59 np0005548731 nova_compute[232433]:      <backend model="random">/dev/urandom</backend>
Dec  6 02:46:59 np0005548731 nova_compute[232433]:    </rng>
Dec  6 02:46:59 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root"/>
Dec  6 02:46:59 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:46:59 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:46:59 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:46:59 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:46:59 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:46:59 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:46:59 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:46:59 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:46:59 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:46:59 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:46:59 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:46:59 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:46:59 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:46:59 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:46:59 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:46:59 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:46:59 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:46:59 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:46:59 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:46:59 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:46:59 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:46:59 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:46:59 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:46:59 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:46:59 np0005548731 nova_compute[232433]:    <controller type="usb" index="0"/>
Dec  6 02:46:59 np0005548731 nova_compute[232433]:    <memballoon model="virtio">
Dec  6 02:46:59 np0005548731 nova_compute[232433]:      <stats period="10"/>
Dec  6 02:46:59 np0005548731 nova_compute[232433]:    </memballoon>
Dec  6 02:46:59 np0005548731 nova_compute[232433]:  </devices>
Dec  6 02:46:59 np0005548731 nova_compute[232433]: </domain>
Dec  6 02:46:59 np0005548731 nova_compute[232433]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec  6 02:46:59 np0005548731 nova_compute[232433]: 2025-12-06 07:46:59.467 232437 INFO nova.virt.libvirt.driver [-] [instance: 2b4b3181-da12-4705-9215-7ee5b869102b] Instance destroyed successfully.
Dec  6 02:46:59 np0005548731 nova_compute[232433]: 2025-12-06 07:46:59.540 232437 DEBUG nova.virt.libvirt.driver [None req-93f37293-ba2a-46e2-b06e-b00bfa78dc93 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec  6 02:46:59 np0005548731 nova_compute[232433]: 2025-12-06 07:46:59.541 232437 DEBUG nova.virt.libvirt.driver [None req-93f37293-ba2a-46e2-b06e-b00bfa78dc93 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec  6 02:46:59 np0005548731 nova_compute[232433]: 2025-12-06 07:46:59.541 232437 DEBUG nova.virt.libvirt.driver [None req-93f37293-ba2a-46e2-b06e-b00bfa78dc93 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] No BDM found with device name vdc, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec  6 02:46:59 np0005548731 nova_compute[232433]: 2025-12-06 07:46:59.541 232437 DEBUG nova.virt.libvirt.driver [None req-93f37293-ba2a-46e2-b06e-b00bfa78dc93 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec  6 02:46:59 np0005548731 nova_compute[232433]: 2025-12-06 07:46:59.542 232437 DEBUG nova.virt.libvirt.driver [None req-93f37293-ba2a-46e2-b06e-b00bfa78dc93 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] No VIF found with MAC fa:16:3e:9c:dc:f4, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec  6 02:46:59 np0005548731 nova_compute[232433]: 2025-12-06 07:46:59.542 232437 INFO nova.virt.libvirt.driver [None req-93f37293-ba2a-46e2-b06e-b00bfa78dc93 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] [instance: 2b4b3181-da12-4705-9215-7ee5b869102b] Using config drive
Dec  6 02:46:59 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:46:59 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:46:59 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:46:59.544 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:46:59 np0005548731 nova_compute[232433]: 2025-12-06 07:46:59.567 232437 DEBUG nova.storage.rbd_utils [None req-93f37293-ba2a-46e2-b06e-b00bfa78dc93 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] rbd image 2b4b3181-da12-4705-9215-7ee5b869102b_disk.config.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec  6 02:46:59 np0005548731 nova_compute[232433]: 2025-12-06 07:46:59.596 232437 DEBUG nova.objects.instance [None req-93f37293-ba2a-46e2-b06e-b00bfa78dc93 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Lazy-loading 'ec2_ids' on Instance uuid 2b4b3181-da12-4705-9215-7ee5b869102b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec  6 02:46:59 np0005548731 nova_compute[232433]: 2025-12-06 07:46:59.622 232437 DEBUG nova.objects.instance [None req-93f37293-ba2a-46e2-b06e-b00bfa78dc93 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Lazy-loading 'keypairs' on Instance uuid 2b4b3181-da12-4705-9215-7ee5b869102b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec  6 02:46:59 np0005548731 nova_compute[232433]: 2025-12-06 07:46:59.837 232437 DEBUG nova.compute.manager [req-c3be811e-313c-4a1d-b2fd-d1d41046fb62 req-2c639ed9-2c1a-4263-9619-4b34fedc09e4 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f67d89a8-836a-4f47-af8d-37cf99529275] Received event network-vif-plugged-d4087b80-19fa-44d3-8dd5-406c2b01b6f4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec  6 02:46:59 np0005548731 nova_compute[232433]: 2025-12-06 07:46:59.838 232437 DEBUG oslo_concurrency.lockutils [req-c3be811e-313c-4a1d-b2fd-d1d41046fb62 req-2c639ed9-2c1a-4263-9619-4b34fedc09e4 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "f67d89a8-836a-4f47-af8d-37cf99529275-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  6 02:46:59 np0005548731 nova_compute[232433]: 2025-12-06 07:46:59.838 232437 DEBUG oslo_concurrency.lockutils [req-c3be811e-313c-4a1d-b2fd-d1d41046fb62 req-2c639ed9-2c1a-4263-9619-4b34fedc09e4 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "f67d89a8-836a-4f47-af8d-37cf99529275-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  6 02:46:59 np0005548731 nova_compute[232433]: 2025-12-06 07:46:59.839 232437 DEBUG oslo_concurrency.lockutils [req-c3be811e-313c-4a1d-b2fd-d1d41046fb62 req-2c639ed9-2c1a-4263-9619-4b34fedc09e4 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "f67d89a8-836a-4f47-af8d-37cf99529275-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  6 02:46:59 np0005548731 nova_compute[232433]: 2025-12-06 07:46:59.839 232437 DEBUG nova.compute.manager [req-c3be811e-313c-4a1d-b2fd-d1d41046fb62 req-2c639ed9-2c1a-4263-9619-4b34fedc09e4 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f67d89a8-836a-4f47-af8d-37cf99529275] No waiting events found dispatching network-vif-plugged-d4087b80-19fa-44d3-8dd5-406c2b01b6f4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec  6 02:46:59 np0005548731 nova_compute[232433]: 2025-12-06 07:46:59.839 232437 WARNING nova.compute.manager [req-c3be811e-313c-4a1d-b2fd-d1d41046fb62 req-2c639ed9-2c1a-4263-9619-4b34fedc09e4 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f67d89a8-836a-4f47-af8d-37cf99529275] Received unexpected event network-vif-plugged-d4087b80-19fa-44d3-8dd5-406c2b01b6f4 for instance with vm_state suspended and task_state resuming.
Dec  6 02:46:59 np0005548731 nova_compute[232433]: 2025-12-06 07:46:59.839 232437 DEBUG nova.compute.manager [req-c3be811e-313c-4a1d-b2fd-d1d41046fb62 req-2c639ed9-2c1a-4263-9619-4b34fedc09e4 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f67d89a8-836a-4f47-af8d-37cf99529275] Received event network-vif-plugged-d4087b80-19fa-44d3-8dd5-406c2b01b6f4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec  6 02:46:59 np0005548731 nova_compute[232433]: 2025-12-06 07:46:59.840 232437 DEBUG oslo_concurrency.lockutils [req-c3be811e-313c-4a1d-b2fd-d1d41046fb62 req-2c639ed9-2c1a-4263-9619-4b34fedc09e4 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "f67d89a8-836a-4f47-af8d-37cf99529275-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  6 02:46:59 np0005548731 nova_compute[232433]: 2025-12-06 07:46:59.840 232437 DEBUG oslo_concurrency.lockutils [req-c3be811e-313c-4a1d-b2fd-d1d41046fb62 req-2c639ed9-2c1a-4263-9619-4b34fedc09e4 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "f67d89a8-836a-4f47-af8d-37cf99529275-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  6 02:46:59 np0005548731 nova_compute[232433]: 2025-12-06 07:46:59.840 232437 DEBUG oslo_concurrency.lockutils [req-c3be811e-313c-4a1d-b2fd-d1d41046fb62 req-2c639ed9-2c1a-4263-9619-4b34fedc09e4 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "f67d89a8-836a-4f47-af8d-37cf99529275-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  6 02:46:59 np0005548731 nova_compute[232433]: 2025-12-06 07:46:59.840 232437 DEBUG nova.compute.manager [req-c3be811e-313c-4a1d-b2fd-d1d41046fb62 req-2c639ed9-2c1a-4263-9619-4b34fedc09e4 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f67d89a8-836a-4f47-af8d-37cf99529275] No waiting events found dispatching network-vif-plugged-d4087b80-19fa-44d3-8dd5-406c2b01b6f4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec  6 02:46:59 np0005548731 nova_compute[232433]: 2025-12-06 07:46:59.841 232437 WARNING nova.compute.manager [req-c3be811e-313c-4a1d-b2fd-d1d41046fb62 req-2c639ed9-2c1a-4263-9619-4b34fedc09e4 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f67d89a8-836a-4f47-af8d-37cf99529275] Received unexpected event network-vif-plugged-d4087b80-19fa-44d3-8dd5-406c2b01b6f4 for instance with vm_state suspended and task_state resuming.
Dec  6 02:47:00 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e353 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:47:00 np0005548731 nova_compute[232433]: 2025-12-06 07:47:00.135 232437 INFO nova.virt.libvirt.driver [None req-93f37293-ba2a-46e2-b06e-b00bfa78dc93 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] [instance: 2b4b3181-da12-4705-9215-7ee5b869102b] Creating config drive at /var/lib/nova/instances/2b4b3181-da12-4705-9215-7ee5b869102b/disk.config.rescue
Dec  6 02:47:00 np0005548731 nova_compute[232433]: 2025-12-06 07:47:00.149 232437 DEBUG oslo_concurrency.processutils [None req-93f37293-ba2a-46e2-b06e-b00bfa78dc93 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/2b4b3181-da12-4705-9215-7ee5b869102b/disk.config.rescue -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp405a4iz6 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  6 02:47:00 np0005548731 nova_compute[232433]: 2025-12-06 07:47:00.285 232437 DEBUG oslo_concurrency.processutils [None req-93f37293-ba2a-46e2-b06e-b00bfa78dc93 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/2b4b3181-da12-4705-9215-7ee5b869102b/disk.config.rescue -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp405a4iz6" returned: 0 in 0.137s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  6 02:47:00 np0005548731 nova_compute[232433]: 2025-12-06 07:47:00.322 232437 DEBUG nova.storage.rbd_utils [None req-93f37293-ba2a-46e2-b06e-b00bfa78dc93 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] rbd image 2b4b3181-da12-4705-9215-7ee5b869102b_disk.config.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec  6 02:47:00 np0005548731 nova_compute[232433]: 2025-12-06 07:47:00.326 232437 DEBUG oslo_concurrency.processutils [None req-93f37293-ba2a-46e2-b06e-b00bfa78dc93 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/2b4b3181-da12-4705-9215-7ee5b869102b/disk.config.rescue 2b4b3181-da12-4705-9215-7ee5b869102b_disk.config.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  6 02:47:00 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:47:00 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:47:00 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:47:00.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:47:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:47:00.888 143965 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  6 02:47:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:47:00.889 143965 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  6 02:47:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:47:00.889 143965 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  6 02:47:01 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:47:01 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:47:01 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:47:01.547 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:47:02 np0005548731 nova_compute[232433]: 2025-12-06 07:47:02.337 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 02:47:02 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:47:02 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:47:02 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:47:02.596 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:47:03 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:47:03 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:47:03 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:47:03.550 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:47:03 np0005548731 nova_compute[232433]: 2025-12-06 07:47:03.625 232437 DEBUG nova.virt.libvirt.host [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Removed pending event for f67d89a8-836a-4f47-af8d-37cf99529275 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Dec  6 02:47:03 np0005548731 nova_compute[232433]: 2025-12-06 07:47:03.625 232437 DEBUG nova.virt.driver [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Emitting event <LifecycleEvent: 1765007223.6246011, f67d89a8-836a-4f47-af8d-37cf99529275 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec  6 02:47:03 np0005548731 nova_compute[232433]: 2025-12-06 07:47:03.625 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: f67d89a8-836a-4f47-af8d-37cf99529275] VM Started (Lifecycle Event)
Dec  6 02:47:03 np0005548731 nova_compute[232433]: 2025-12-06 07:47:03.639 232437 DEBUG nova.compute.manager [None req-ba191309-52e7-4450-bfae-7e9029eb2973 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] [instance: f67d89a8-836a-4f47-af8d-37cf99529275] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec  6 02:47:03 np0005548731 nova_compute[232433]: 2025-12-06 07:47:03.640 232437 DEBUG nova.objects.instance [None req-ba191309-52e7-4450-bfae-7e9029eb2973 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] Lazy-loading 'pci_devices' on Instance uuid f67d89a8-836a-4f47-af8d-37cf99529275 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec  6 02:47:03 np0005548731 nova_compute[232433]: 2025-12-06 07:47:03.653 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: f67d89a8-836a-4f47-af8d-37cf99529275] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec  6 02:47:03 np0005548731 nova_compute[232433]: 2025-12-06 07:47:03.659 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: f67d89a8-836a-4f47-af8d-37cf99529275] Synchronizing instance power state after lifecycle event "Started"; current vm_state: suspended, current task_state: resuming, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec  6 02:47:03 np0005548731 nova_compute[232433]: 2025-12-06 07:47:03.661 232437 INFO nova.virt.libvirt.driver [-] [instance: f67d89a8-836a-4f47-af8d-37cf99529275] Instance running successfully.
Dec  6 02:47:03 np0005548731 virtqemud[232080]: argument unsupported: QEMU guest agent is not configured
Dec  6 02:47:03 np0005548731 nova_compute[232433]: 2025-12-06 07:47:03.663 232437 DEBUG nova.virt.libvirt.guest [None req-ba191309-52e7-4450-bfae-7e9029eb2973 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] [instance: f67d89a8-836a-4f47-af8d-37cf99529275] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200
Dec  6 02:47:03 np0005548731 nova_compute[232433]: 2025-12-06 07:47:03.664 232437 DEBUG nova.compute.manager [None req-ba191309-52e7-4450-bfae-7e9029eb2973 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] [instance: f67d89a8-836a-4f47-af8d-37cf99529275] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec  6 02:47:03 np0005548731 nova_compute[232433]: 2025-12-06 07:47:03.684 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: f67d89a8-836a-4f47-af8d-37cf99529275] During sync_power_state the instance has a pending task (resuming). Skip.
Dec  6 02:47:03 np0005548731 nova_compute[232433]: 2025-12-06 07:47:03.684 232437 DEBUG nova.virt.driver [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Emitting event <LifecycleEvent: 1765007223.6277928, f67d89a8-836a-4f47-af8d-37cf99529275 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec  6 02:47:03 np0005548731 nova_compute[232433]: 2025-12-06 07:47:03.685 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: f67d89a8-836a-4f47-af8d-37cf99529275] VM Resumed (Lifecycle Event)
Dec  6 02:47:03 np0005548731 nova_compute[232433]: 2025-12-06 07:47:03.725 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 02:47:03 np0005548731 nova_compute[232433]: 2025-12-06 07:47:03.763 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: f67d89a8-836a-4f47-af8d-37cf99529275] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec  6 02:47:03 np0005548731 nova_compute[232433]: 2025-12-06 07:47:03.767 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: f67d89a8-836a-4f47-af8d-37cf99529275] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec  6 02:47:04 np0005548731 nova_compute[232433]: 2025-12-06 07:47:04.193 232437 DEBUG oslo_concurrency.processutils [None req-93f37293-ba2a-46e2-b06e-b00bfa78dc93 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/2b4b3181-da12-4705-9215-7ee5b869102b/disk.config.rescue 2b4b3181-da12-4705-9215-7ee5b869102b_disk.config.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 3.868s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  6 02:47:04 np0005548731 nova_compute[232433]: 2025-12-06 07:47:04.194 232437 INFO nova.virt.libvirt.driver [None req-93f37293-ba2a-46e2-b06e-b00bfa78dc93 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] [instance: 2b4b3181-da12-4705-9215-7ee5b869102b] Deleting local config drive /var/lib/nova/instances/2b4b3181-da12-4705-9215-7ee5b869102b/disk.config.rescue because it was imported into RBD.
Dec  6 02:47:04 np0005548731 kernel: tap879edf90-08: entered promiscuous mode
Dec  6 02:47:04 np0005548731 NetworkManager[49182]: <info>  [1765007224.2678] manager: (tap879edf90-08): new Tun device (/org/freedesktop/NetworkManager/Devices/348)
Dec  6 02:47:04 np0005548731 ovn_controller[133927]: 2025-12-06T07:47:04Z|00753|binding|INFO|Claiming lport 879edf90-0812-4adf-b171-d993c6cfb26c for this chassis.
Dec  6 02:47:04 np0005548731 ovn_controller[133927]: 2025-12-06T07:47:04Z|00754|binding|INFO|879edf90-0812-4adf-b171-d993c6cfb26c: Claiming fa:16:3e:9c:dc:f4 10.100.0.7
Dec  6 02:47:04 np0005548731 nova_compute[232433]: 2025-12-06 07:47:04.269 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 02:47:04 np0005548731 systemd-udevd[304219]: Network interface NamePolicy= disabled on kernel command line.
Dec  6 02:47:04 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:47:04.284 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9c:dc:f4 10.100.0.7'], port_security=['fa:16:3e:9c:dc:f4 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '2b4b3181-da12-4705-9215-7ee5b869102b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6d1a17d6-5e44-40b7-832a-81cb86c02e71', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f44ecb8bdc7e4692a299e29603301124', 'neutron:revision_number': '5', 'neutron:security_group_ids': 'f9c55c19-5297-4b9d-814f-3c2f976ceba7', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.221'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ef95e15f-f36a-4631-8598-89c7e0374fce, chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], logical_port=879edf90-0812-4adf-b171-d993c6cfb26c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec  6 02:47:04 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:47:04.285 143965 INFO neutron.agent.ovn.metadata.agent [-] Port 879edf90-0812-4adf-b171-d993c6cfb26c in datapath 6d1a17d6-5e44-40b7-832a-81cb86c02e71 bound to our chassis
Dec  6 02:47:04 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:47:04.287 143965 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 6d1a17d6-5e44-40b7-832a-81cb86c02e71
Dec  6 02:47:04 np0005548731 NetworkManager[49182]: <info>  [1765007224.2885] device (tap879edf90-08): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec  6 02:47:04 np0005548731 NetworkManager[49182]: <info>  [1765007224.2899] device (tap879edf90-08): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec  6 02:47:04 np0005548731 ovn_controller[133927]: 2025-12-06T07:47:04Z|00755|binding|INFO|Setting lport 879edf90-0812-4adf-b171-d993c6cfb26c ovn-installed in OVS
Dec  6 02:47:04 np0005548731 ovn_controller[133927]: 2025-12-06T07:47:04Z|00756|binding|INFO|Setting lport 879edf90-0812-4adf-b171-d993c6cfb26c up in Southbound
Dec  6 02:47:04 np0005548731 nova_compute[232433]: 2025-12-06 07:47:04.292 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 02:47:04 np0005548731 nova_compute[232433]: 2025-12-06 07:47:04.298 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 02:47:04 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:47:04.305 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[02e3c841-f082-4004-8aae-2361705c955f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  6 02:47:04 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:47:04.306 143965 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap6d1a17d6-51 in ovnmeta-6d1a17d6-5e44-40b7-832a-81cb86c02e71 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Dec  6 02:47:04 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:47:04.309 236668 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap6d1a17d6-50 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec  6 02:47:04 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:47:04.309 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[ca10067b-8b5b-469d-9203-7d2a95c80c85]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  6 02:47:04 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:47:04.310 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[33135b2c-ac4e-4daa-87b1-c705825af91b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  6 02:47:04 np0005548731 systemd-machined[195355]: New machine qemu-77-instance-0000009f.
Dec  6 02:47:04 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:47:04.325 144079 DEBUG oslo.privsep.daemon [-] privsep: reply[27d841c4-bd28-48d4-98de-9e436181af52]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:47:04 np0005548731 systemd[1]: Started Virtual Machine qemu-77-instance-0000009f.
Dec  6 02:47:04 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:47:04.348 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[2338d32c-e250-4811-93ce-5bf5b2d2ec63]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:47:04 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:47:04.381 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[edfdd080-3efd-42f6-874c-b13956e14ac1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:47:04 np0005548731 NetworkManager[49182]: <info>  [1765007224.3897] manager: (tap6d1a17d6-50): new Veth device (/org/freedesktop/NetworkManager/Devices/349)
Dec  6 02:47:04 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:47:04.388 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[d2d9d77a-9b98-4909-9f57-196c8c5c8ede]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:47:04 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:47:04.426 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[58bdbe72-0ba7-4db7-a2e0-4dbac5b6281d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:47:04 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:47:04.431 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[7fd786d4-b844-4148-849b-ec971e000000]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:47:04 np0005548731 NetworkManager[49182]: <info>  [1765007224.4542] device (tap6d1a17d6-50): carrier: link connected
Dec  6 02:47:04 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:47:04.459 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[5543c7b7-fc5a-489e-93c3-4d44f09767b7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:47:04 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:47:04.476 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[b1ae1ff9-96f9-49e9-ae1c-aee224fa1ea6]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6d1a17d6-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:40:a2:f6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 230], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 748646, 'reachable_time': 15979, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 304270, 'error': None, 'target': 'ovnmeta-6d1a17d6-5e44-40b7-832a-81cb86c02e71', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:47:04 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:47:04.493 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[c46fd9df-2f3c-47d4-aa13-27c3ae173cc6]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe40:a2f6'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 748646, 'tstamp': 748646}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 304271, 'error': None, 'target': 'ovnmeta-6d1a17d6-5e44-40b7-832a-81cb86c02e71', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:47:04 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:47:04.511 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[ae6b15ba-dd5f-477a-955c-c9d4440f2e11]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6d1a17d6-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:40:a2:f6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 230], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 748646, 'reachable_time': 15979, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 304272, 'error': None, 'target': 'ovnmeta-6d1a17d6-5e44-40b7-832a-81cb86c02e71', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:47:04 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:47:04.542 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[381c25dc-84a7-497e-9426-a2adc0836b59]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:47:04 np0005548731 nova_compute[232433]: 2025-12-06 07:47:04.589 232437 DEBUG nova.compute.manager [req-9206c4cb-4ec4-4ccd-8fed-1595c01c31cf req-60bc272a-f198-42a4-804e-c0547b001892 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 2b4b3181-da12-4705-9215-7ee5b869102b] Received event network-vif-plugged-879edf90-0812-4adf-b171-d993c6cfb26c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:47:04 np0005548731 nova_compute[232433]: 2025-12-06 07:47:04.589 232437 DEBUG oslo_concurrency.lockutils [req-9206c4cb-4ec4-4ccd-8fed-1595c01c31cf req-60bc272a-f198-42a4-804e-c0547b001892 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "2b4b3181-da12-4705-9215-7ee5b869102b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:47:04 np0005548731 nova_compute[232433]: 2025-12-06 07:47:04.589 232437 DEBUG oslo_concurrency.lockutils [req-9206c4cb-4ec4-4ccd-8fed-1595c01c31cf req-60bc272a-f198-42a4-804e-c0547b001892 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "2b4b3181-da12-4705-9215-7ee5b869102b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:47:04 np0005548731 nova_compute[232433]: 2025-12-06 07:47:04.590 232437 DEBUG oslo_concurrency.lockutils [req-9206c4cb-4ec4-4ccd-8fed-1595c01c31cf req-60bc272a-f198-42a4-804e-c0547b001892 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "2b4b3181-da12-4705-9215-7ee5b869102b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:47:04 np0005548731 nova_compute[232433]: 2025-12-06 07:47:04.590 232437 DEBUG nova.compute.manager [req-9206c4cb-4ec4-4ccd-8fed-1595c01c31cf req-60bc272a-f198-42a4-804e-c0547b001892 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 2b4b3181-da12-4705-9215-7ee5b869102b] No waiting events found dispatching network-vif-plugged-879edf90-0812-4adf-b171-d993c6cfb26c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 02:47:04 np0005548731 nova_compute[232433]: 2025-12-06 07:47:04.590 232437 WARNING nova.compute.manager [req-9206c4cb-4ec4-4ccd-8fed-1595c01c31cf req-60bc272a-f198-42a4-804e-c0547b001892 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 2b4b3181-da12-4705-9215-7ee5b869102b] Received unexpected event network-vif-plugged-879edf90-0812-4adf-b171-d993c6cfb26c for instance with vm_state active and task_state rescuing.#033[00m
Dec  6 02:47:04 np0005548731 nova_compute[232433]: 2025-12-06 07:47:04.594 232437 DEBUG oslo_concurrency.lockutils [None req-d18a8586-6949-4dd6-939f-32391ebf3806 5933a27f4e504e23a5d084501196c0fb 329d4c9562c84ec5a42ca68894cbf27f - - default default] Acquiring lock "e268006c-e26b-4400-8c9a-da1925cd1a57" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:47:04 np0005548731 nova_compute[232433]: 2025-12-06 07:47:04.594 232437 DEBUG oslo_concurrency.lockutils [None req-d18a8586-6949-4dd6-939f-32391ebf3806 5933a27f4e504e23a5d084501196c0fb 329d4c9562c84ec5a42ca68894cbf27f - - default default] Lock "e268006c-e26b-4400-8c9a-da1925cd1a57" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:47:04 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:47:04.598 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[6b748a8f-5a4e-4b82-98d6-ae8524e4f78c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:47:04 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:47:04 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:47:04 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:47:04.598 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:47:04 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:47:04.600 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6d1a17d6-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:47:04 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:47:04.601 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  6 02:47:04 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:47:04.601 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6d1a17d6-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:47:04 np0005548731 kernel: tap6d1a17d6-50: entered promiscuous mode
Dec  6 02:47:04 np0005548731 NetworkManager[49182]: <info>  [1765007224.6036] manager: (tap6d1a17d6-50): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/350)
Dec  6 02:47:04 np0005548731 nova_compute[232433]: 2025-12-06 07:47:04.602 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:47:04 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:47:04.606 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap6d1a17d6-50, col_values=(('external_ids', {'iface-id': '6b94462b-5171-4a4e-8d60-ac645842c400'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:47:04 np0005548731 nova_compute[232433]: 2025-12-06 07:47:04.607 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:47:04 np0005548731 ovn_controller[133927]: 2025-12-06T07:47:04Z|00757|binding|INFO|Releasing lport 6b94462b-5171-4a4e-8d60-ac645842c400 from this chassis (sb_readonly=0)
Dec  6 02:47:04 np0005548731 nova_compute[232433]: 2025-12-06 07:47:04.614 232437 DEBUG nova.compute.manager [None req-d18a8586-6949-4dd6-939f-32391ebf3806 5933a27f4e504e23a5d084501196c0fb 329d4c9562c84ec5a42ca68894cbf27f - - default default] [instance: e268006c-e26b-4400-8c9a-da1925cd1a57] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Dec  6 02:47:04 np0005548731 nova_compute[232433]: 2025-12-06 07:47:04.623 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:47:04 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:47:04.625 143965 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/6d1a17d6-5e44-40b7-832a-81cb86c02e71.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/6d1a17d6-5e44-40b7-832a-81cb86c02e71.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Dec  6 02:47:04 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:47:04.626 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[3bdccfce-7462-4595-97aa-13edc985542d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:47:04 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:47:04.626 143965 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec  6 02:47:04 np0005548731 ovn_metadata_agent[143960]: global
Dec  6 02:47:04 np0005548731 ovn_metadata_agent[143960]:    log         /dev/log local0 debug
Dec  6 02:47:04 np0005548731 ovn_metadata_agent[143960]:    log-tag     haproxy-metadata-proxy-6d1a17d6-5e44-40b7-832a-81cb86c02e71
Dec  6 02:47:04 np0005548731 ovn_metadata_agent[143960]:    user        root
Dec  6 02:47:04 np0005548731 ovn_metadata_agent[143960]:    group       root
Dec  6 02:47:04 np0005548731 ovn_metadata_agent[143960]:    maxconn     1024
Dec  6 02:47:04 np0005548731 ovn_metadata_agent[143960]:    pidfile     /var/lib/neutron/external/pids/6d1a17d6-5e44-40b7-832a-81cb86c02e71.pid.haproxy
Dec  6 02:47:04 np0005548731 ovn_metadata_agent[143960]:    daemon
Dec  6 02:47:04 np0005548731 ovn_metadata_agent[143960]: 
Dec  6 02:47:04 np0005548731 ovn_metadata_agent[143960]: defaults
Dec  6 02:47:04 np0005548731 ovn_metadata_agent[143960]:    log global
Dec  6 02:47:04 np0005548731 ovn_metadata_agent[143960]:    mode http
Dec  6 02:47:04 np0005548731 ovn_metadata_agent[143960]:    option httplog
Dec  6 02:47:04 np0005548731 ovn_metadata_agent[143960]:    option dontlognull
Dec  6 02:47:04 np0005548731 ovn_metadata_agent[143960]:    option http-server-close
Dec  6 02:47:04 np0005548731 ovn_metadata_agent[143960]:    option forwardfor
Dec  6 02:47:04 np0005548731 ovn_metadata_agent[143960]:    retries                 3
Dec  6 02:47:04 np0005548731 ovn_metadata_agent[143960]:    timeout http-request    30s
Dec  6 02:47:04 np0005548731 ovn_metadata_agent[143960]:    timeout connect         30s
Dec  6 02:47:04 np0005548731 ovn_metadata_agent[143960]:    timeout client          32s
Dec  6 02:47:04 np0005548731 ovn_metadata_agent[143960]:    timeout server          32s
Dec  6 02:47:04 np0005548731 ovn_metadata_agent[143960]:    timeout http-keep-alive 30s
Dec  6 02:47:04 np0005548731 ovn_metadata_agent[143960]: 
Dec  6 02:47:04 np0005548731 ovn_metadata_agent[143960]: 
Dec  6 02:47:04 np0005548731 ovn_metadata_agent[143960]: listen listener
Dec  6 02:47:04 np0005548731 ovn_metadata_agent[143960]:    bind 169.254.169.254:80
Dec  6 02:47:04 np0005548731 ovn_metadata_agent[143960]:    server metadata /var/lib/neutron/metadata_proxy
Dec  6 02:47:04 np0005548731 ovn_metadata_agent[143960]:    http-request add-header X-OVN-Network-ID 6d1a17d6-5e44-40b7-832a-81cb86c02e71
Dec  6 02:47:04 np0005548731 ovn_metadata_agent[143960]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Dec  6 02:47:04 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:47:04.628 143965 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-6d1a17d6-5e44-40b7-832a-81cb86c02e71', 'env', 'PROCESS_TAG=haproxy-6d1a17d6-5e44-40b7-832a-81cb86c02e71', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/6d1a17d6-5e44-40b7-832a-81cb86c02e71.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Dec  6 02:47:04 np0005548731 nova_compute[232433]: 2025-12-06 07:47:04.697 232437 DEBUG oslo_concurrency.lockutils [None req-d18a8586-6949-4dd6-939f-32391ebf3806 5933a27f4e504e23a5d084501196c0fb 329d4c9562c84ec5a42ca68894cbf27f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:47:04 np0005548731 nova_compute[232433]: 2025-12-06 07:47:04.698 232437 DEBUG oslo_concurrency.lockutils [None req-d18a8586-6949-4dd6-939f-32391ebf3806 5933a27f4e504e23a5d084501196c0fb 329d4c9562c84ec5a42ca68894cbf27f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:47:04 np0005548731 nova_compute[232433]: 2025-12-06 07:47:04.705 232437 DEBUG nova.virt.hardware [None req-d18a8586-6949-4dd6-939f-32391ebf3806 5933a27f4e504e23a5d084501196c0fb 329d4c9562c84ec5a42ca68894cbf27f - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Dec  6 02:47:04 np0005548731 nova_compute[232433]: 2025-12-06 07:47:04.706 232437 INFO nova.compute.claims [None req-d18a8586-6949-4dd6-939f-32391ebf3806 5933a27f4e504e23a5d084501196c0fb 329d4c9562c84ec5a42ca68894cbf27f - - default default] [instance: e268006c-e26b-4400-8c9a-da1925cd1a57] Claim successful on node compute-2.ctlplane.example.com#033[00m
Dec  6 02:47:04 np0005548731 nova_compute[232433]: 2025-12-06 07:47:04.833 232437 DEBUG oslo_concurrency.processutils [None req-d18a8586-6949-4dd6-939f-32391ebf3806 5933a27f4e504e23a5d084501196c0fb 329d4c9562c84ec5a42ca68894cbf27f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:47:04 np0005548731 podman[304323]: 2025-12-06 07:47:04.985612776 +0000 UTC m=+0.058911097 container create 3302dcf5ca1e83c0b3575f8f32b65527197f1e4955a8c8efc90ed13e17ab1b4a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6d1a17d6-5e44-40b7-832a-81cb86c02e71, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Dec  6 02:47:05 np0005548731 systemd[1]: Started libpod-conmon-3302dcf5ca1e83c0b3575f8f32b65527197f1e4955a8c8efc90ed13e17ab1b4a.scope.
Dec  6 02:47:05 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e353 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:47:05 np0005548731 podman[304323]: 2025-12-06 07:47:04.956314922 +0000 UTC m=+0.029613243 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec  6 02:47:05 np0005548731 systemd[1]: Started libcrun container.
Dec  6 02:47:05 np0005548731 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7eb77b1a06819636d33bf70070f140c8cbee869f4f68e367166f67c63e125e82/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec  6 02:47:05 np0005548731 podman[304323]: 2025-12-06 07:47:05.073319434 +0000 UTC m=+0.146617775 container init 3302dcf5ca1e83c0b3575f8f32b65527197f1e4955a8c8efc90ed13e17ab1b4a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6d1a17d6-5e44-40b7-832a-81cb86c02e71, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Dec  6 02:47:05 np0005548731 podman[304323]: 2025-12-06 07:47:05.079232989 +0000 UTC m=+0.152531300 container start 3302dcf5ca1e83c0b3575f8f32b65527197f1e4955a8c8efc90ed13e17ab1b4a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6d1a17d6-5e44-40b7-832a-81cb86c02e71, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  6 02:47:05 np0005548731 neutron-haproxy-ovnmeta-6d1a17d6-5e44-40b7-832a-81cb86c02e71[304393]: [NOTICE]   (304415) : New worker (304422) forked
Dec  6 02:47:05 np0005548731 neutron-haproxy-ovnmeta-6d1a17d6-5e44-40b7-832a-81cb86c02e71[304393]: [NOTICE]   (304415) : Loading success.
Dec  6 02:47:05 np0005548731 nova_compute[232433]: 2025-12-06 07:47:05.196 232437 DEBUG nova.virt.libvirt.host [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Removed pending event for 2b4b3181-da12-4705-9215-7ee5b869102b due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Dec  6 02:47:05 np0005548731 nova_compute[232433]: 2025-12-06 07:47:05.197 232437 DEBUG nova.virt.driver [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Emitting event <LifecycleEvent: 1765007225.1947505, 2b4b3181-da12-4705-9215-7ee5b869102b => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 02:47:05 np0005548731 nova_compute[232433]: 2025-12-06 07:47:05.197 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 2b4b3181-da12-4705-9215-7ee5b869102b] VM Resumed (Lifecycle Event)#033[00m
Dec  6 02:47:05 np0005548731 nova_compute[232433]: 2025-12-06 07:47:05.205 232437 DEBUG nova.compute.manager [None req-93f37293-ba2a-46e2-b06e-b00bfa78dc93 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] [instance: 2b4b3181-da12-4705-9215-7ee5b869102b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:47:05 np0005548731 nova_compute[232433]: 2025-12-06 07:47:05.229 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 2b4b3181-da12-4705-9215-7ee5b869102b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:47:05 np0005548731 nova_compute[232433]: 2025-12-06 07:47:05.233 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 2b4b3181-da12-4705-9215-7ee5b869102b] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rescuing, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  6 02:47:05 np0005548731 nova_compute[232433]: 2025-12-06 07:47:05.254 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 2b4b3181-da12-4705-9215-7ee5b869102b] During sync_power_state the instance has a pending task (rescuing). Skip.#033[00m
Dec  6 02:47:05 np0005548731 nova_compute[232433]: 2025-12-06 07:47:05.255 232437 DEBUG nova.virt.driver [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Emitting event <LifecycleEvent: 1765007225.1974685, 2b4b3181-da12-4705-9215-7ee5b869102b => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 02:47:05 np0005548731 nova_compute[232433]: 2025-12-06 07:47:05.255 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 2b4b3181-da12-4705-9215-7ee5b869102b] VM Started (Lifecycle Event)#033[00m
Dec  6 02:47:05 np0005548731 nova_compute[232433]: 2025-12-06 07:47:05.278 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 2b4b3181-da12-4705-9215-7ee5b869102b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:47:05 np0005548731 nova_compute[232433]: 2025-12-06 07:47:05.282 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 2b4b3181-da12-4705-9215-7ee5b869102b] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: rescuing, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  6 02:47:05 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 02:47:05 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2972039573' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 02:47:05 np0005548731 nova_compute[232433]: 2025-12-06 07:47:05.302 232437 DEBUG oslo_concurrency.processutils [None req-d18a8586-6949-4dd6-939f-32391ebf3806 5933a27f4e504e23a5d084501196c0fb 329d4c9562c84ec5a42ca68894cbf27f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.468s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:47:05 np0005548731 nova_compute[232433]: 2025-12-06 07:47:05.309 232437 DEBUG nova.compute.provider_tree [None req-d18a8586-6949-4dd6-939f-32391ebf3806 5933a27f4e504e23a5d084501196c0fb 329d4c9562c84ec5a42ca68894cbf27f - - default default] Inventory has not changed in ProviderTree for provider: 6d00757a-082f-486d-ae84-869a2ba2e6e7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  6 02:47:05 np0005548731 nova_compute[232433]: 2025-12-06 07:47:05.327 232437 DEBUG nova.scheduler.client.report [None req-d18a8586-6949-4dd6-939f-32391ebf3806 5933a27f4e504e23a5d084501196c0fb 329d4c9562c84ec5a42ca68894cbf27f - - default default] Inventory has not changed for provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  6 02:47:05 np0005548731 nova_compute[232433]: 2025-12-06 07:47:05.357 232437 DEBUG oslo_concurrency.lockutils [None req-d18a8586-6949-4dd6-939f-32391ebf3806 5933a27f4e504e23a5d084501196c0fb 329d4c9562c84ec5a42ca68894cbf27f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.659s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:47:05 np0005548731 nova_compute[232433]: 2025-12-06 07:47:05.359 232437 DEBUG nova.compute.manager [None req-d18a8586-6949-4dd6-939f-32391ebf3806 5933a27f4e504e23a5d084501196c0fb 329d4c9562c84ec5a42ca68894cbf27f - - default default] [instance: e268006c-e26b-4400-8c9a-da1925cd1a57] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Dec  6 02:47:05 np0005548731 nova_compute[232433]: 2025-12-06 07:47:05.430 232437 DEBUG nova.compute.manager [None req-d18a8586-6949-4dd6-939f-32391ebf3806 5933a27f4e504e23a5d084501196c0fb 329d4c9562c84ec5a42ca68894cbf27f - - default default] [instance: e268006c-e26b-4400-8c9a-da1925cd1a57] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Dec  6 02:47:05 np0005548731 nova_compute[232433]: 2025-12-06 07:47:05.431 232437 DEBUG nova.network.neutron [None req-d18a8586-6949-4dd6-939f-32391ebf3806 5933a27f4e504e23a5d084501196c0fb 329d4c9562c84ec5a42ca68894cbf27f - - default default] [instance: e268006c-e26b-4400-8c9a-da1925cd1a57] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Dec  6 02:47:05 np0005548731 nova_compute[232433]: 2025-12-06 07:47:05.458 232437 INFO nova.virt.libvirt.driver [None req-d18a8586-6949-4dd6-939f-32391ebf3806 5933a27f4e504e23a5d084501196c0fb 329d4c9562c84ec5a42ca68894cbf27f - - default default] [instance: e268006c-e26b-4400-8c9a-da1925cd1a57] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Dec  6 02:47:05 np0005548731 nova_compute[232433]: 2025-12-06 07:47:05.498 232437 DEBUG nova.compute.manager [None req-d18a8586-6949-4dd6-939f-32391ebf3806 5933a27f4e504e23a5d084501196c0fb 329d4c9562c84ec5a42ca68894cbf27f - - default default] [instance: e268006c-e26b-4400-8c9a-da1925cd1a57] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Dec  6 02:47:05 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:47:05 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:47:05 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:47:05.553 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:47:05 np0005548731 nova_compute[232433]: 2025-12-06 07:47:05.633 232437 DEBUG nova.compute.manager [None req-d18a8586-6949-4dd6-939f-32391ebf3806 5933a27f4e504e23a5d084501196c0fb 329d4c9562c84ec5a42ca68894cbf27f - - default default] [instance: e268006c-e26b-4400-8c9a-da1925cd1a57] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Dec  6 02:47:05 np0005548731 nova_compute[232433]: 2025-12-06 07:47:05.634 232437 DEBUG nova.virt.libvirt.driver [None req-d18a8586-6949-4dd6-939f-32391ebf3806 5933a27f4e504e23a5d084501196c0fb 329d4c9562c84ec5a42ca68894cbf27f - - default default] [instance: e268006c-e26b-4400-8c9a-da1925cd1a57] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Dec  6 02:47:05 np0005548731 nova_compute[232433]: 2025-12-06 07:47:05.635 232437 INFO nova.virt.libvirt.driver [None req-d18a8586-6949-4dd6-939f-32391ebf3806 5933a27f4e504e23a5d084501196c0fb 329d4c9562c84ec5a42ca68894cbf27f - - default default] [instance: e268006c-e26b-4400-8c9a-da1925cd1a57] Creating image(s)#033[00m
Dec  6 02:47:05 np0005548731 nova_compute[232433]: 2025-12-06 07:47:05.668 232437 DEBUG nova.storage.rbd_utils [None req-d18a8586-6949-4dd6-939f-32391ebf3806 5933a27f4e504e23a5d084501196c0fb 329d4c9562c84ec5a42ca68894cbf27f - - default default] rbd image e268006c-e26b-4400-8c9a-da1925cd1a57_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:47:05 np0005548731 nova_compute[232433]: 2025-12-06 07:47:05.707 232437 DEBUG nova.storage.rbd_utils [None req-d18a8586-6949-4dd6-939f-32391ebf3806 5933a27f4e504e23a5d084501196c0fb 329d4c9562c84ec5a42ca68894cbf27f - - default default] rbd image e268006c-e26b-4400-8c9a-da1925cd1a57_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:47:05 np0005548731 nova_compute[232433]: 2025-12-06 07:47:05.747 232437 DEBUG nova.storage.rbd_utils [None req-d18a8586-6949-4dd6-939f-32391ebf3806 5933a27f4e504e23a5d084501196c0fb 329d4c9562c84ec5a42ca68894cbf27f - - default default] rbd image e268006c-e26b-4400-8c9a-da1925cd1a57_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:47:05 np0005548731 nova_compute[232433]: 2025-12-06 07:47:05.753 232437 DEBUG oslo_concurrency.processutils [None req-d18a8586-6949-4dd6-939f-32391ebf3806 5933a27f4e504e23a5d084501196c0fb 329d4c9562c84ec5a42ca68894cbf27f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:47:05 np0005548731 nova_compute[232433]: 2025-12-06 07:47:05.841 232437 DEBUG oslo_concurrency.processutils [None req-d18a8586-6949-4dd6-939f-32391ebf3806 5933a27f4e504e23a5d084501196c0fb 329d4c9562c84ec5a42ca68894cbf27f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json" returned: 0 in 0.088s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:47:05 np0005548731 nova_compute[232433]: 2025-12-06 07:47:05.842 232437 DEBUG oslo_concurrency.lockutils [None req-d18a8586-6949-4dd6-939f-32391ebf3806 5933a27f4e504e23a5d084501196c0fb 329d4c9562c84ec5a42ca68894cbf27f - - default default] Acquiring lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:47:05 np0005548731 nova_compute[232433]: 2025-12-06 07:47:05.844 232437 DEBUG oslo_concurrency.lockutils [None req-d18a8586-6949-4dd6-939f-32391ebf3806 5933a27f4e504e23a5d084501196c0fb 329d4c9562c84ec5a42ca68894cbf27f - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:47:05 np0005548731 nova_compute[232433]: 2025-12-06 07:47:05.845 232437 DEBUG oslo_concurrency.lockutils [None req-d18a8586-6949-4dd6-939f-32391ebf3806 5933a27f4e504e23a5d084501196c0fb 329d4c9562c84ec5a42ca68894cbf27f - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:47:05 np0005548731 nova_compute[232433]: 2025-12-06 07:47:05.883 232437 DEBUG nova.storage.rbd_utils [None req-d18a8586-6949-4dd6-939f-32391ebf3806 5933a27f4e504e23a5d084501196c0fb 329d4c9562c84ec5a42ca68894cbf27f - - default default] rbd image e268006c-e26b-4400-8c9a-da1925cd1a57_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:47:05 np0005548731 nova_compute[232433]: 2025-12-06 07:47:05.890 232437 DEBUG oslo_concurrency.processutils [None req-d18a8586-6949-4dd6-939f-32391ebf3806 5933a27f4e504e23a5d084501196c0fb 329d4c9562c84ec5a42ca68894cbf27f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef e268006c-e26b-4400-8c9a-da1925cd1a57_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:47:06 np0005548731 nova_compute[232433]: 2025-12-06 07:47:06.001 232437 DEBUG nova.policy [None req-d18a8586-6949-4dd6-939f-32391ebf3806 5933a27f4e504e23a5d084501196c0fb 329d4c9562c84ec5a42ca68894cbf27f - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '5933a27f4e504e23a5d084501196c0fb', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '329d4c9562c84ec5a42ca68894cbf27f', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Dec  6 02:47:06 np0005548731 nova_compute[232433]: 2025-12-06 07:47:06.212 232437 DEBUG oslo_concurrency.processutils [None req-d18a8586-6949-4dd6-939f-32391ebf3806 5933a27f4e504e23a5d084501196c0fb 329d4c9562c84ec5a42ca68894cbf27f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef e268006c-e26b-4400-8c9a-da1925cd1a57_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.322s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:47:06 np0005548731 nova_compute[232433]: 2025-12-06 07:47:06.306 232437 DEBUG nova.storage.rbd_utils [None req-d18a8586-6949-4dd6-939f-32391ebf3806 5933a27f4e504e23a5d084501196c0fb 329d4c9562c84ec5a42ca68894cbf27f - - default default] resizing rbd image e268006c-e26b-4400-8c9a-da1925cd1a57_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Dec  6 02:47:06 np0005548731 nova_compute[232433]: 2025-12-06 07:47:06.342 232437 INFO nova.compute.manager [None req-0dfc4f11-abc1-4c2a-8de1-7e4c3608034d e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] [instance: 2b4b3181-da12-4705-9215-7ee5b869102b] Unrescuing#033[00m
Dec  6 02:47:06 np0005548731 nova_compute[232433]: 2025-12-06 07:47:06.343 232437 DEBUG oslo_concurrency.lockutils [None req-0dfc4f11-abc1-4c2a-8de1-7e4c3608034d e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Acquiring lock "refresh_cache-2b4b3181-da12-4705-9215-7ee5b869102b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 02:47:06 np0005548731 nova_compute[232433]: 2025-12-06 07:47:06.343 232437 DEBUG oslo_concurrency.lockutils [None req-0dfc4f11-abc1-4c2a-8de1-7e4c3608034d e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Acquired lock "refresh_cache-2b4b3181-da12-4705-9215-7ee5b869102b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 02:47:06 np0005548731 nova_compute[232433]: 2025-12-06 07:47:06.343 232437 DEBUG nova.network.neutron [None req-0dfc4f11-abc1-4c2a-8de1-7e4c3608034d e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] [instance: 2b4b3181-da12-4705-9215-7ee5b869102b] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec  6 02:47:06 np0005548731 nova_compute[232433]: 2025-12-06 07:47:06.409 232437 DEBUG nova.objects.instance [None req-d18a8586-6949-4dd6-939f-32391ebf3806 5933a27f4e504e23a5d084501196c0fb 329d4c9562c84ec5a42ca68894cbf27f - - default default] Lazy-loading 'migration_context' on Instance uuid e268006c-e26b-4400-8c9a-da1925cd1a57 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:47:06 np0005548731 nova_compute[232433]: 2025-12-06 07:47:06.427 232437 DEBUG nova.virt.libvirt.driver [None req-d18a8586-6949-4dd6-939f-32391ebf3806 5933a27f4e504e23a5d084501196c0fb 329d4c9562c84ec5a42ca68894cbf27f - - default default] [instance: e268006c-e26b-4400-8c9a-da1925cd1a57] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Dec  6 02:47:06 np0005548731 nova_compute[232433]: 2025-12-06 07:47:06.428 232437 DEBUG nova.virt.libvirt.driver [None req-d18a8586-6949-4dd6-939f-32391ebf3806 5933a27f4e504e23a5d084501196c0fb 329d4c9562c84ec5a42ca68894cbf27f - - default default] [instance: e268006c-e26b-4400-8c9a-da1925cd1a57] Ensure instance console log exists: /var/lib/nova/instances/e268006c-e26b-4400-8c9a-da1925cd1a57/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec  6 02:47:06 np0005548731 nova_compute[232433]: 2025-12-06 07:47:06.428 232437 DEBUG oslo_concurrency.lockutils [None req-d18a8586-6949-4dd6-939f-32391ebf3806 5933a27f4e504e23a5d084501196c0fb 329d4c9562c84ec5a42ca68894cbf27f - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:47:06 np0005548731 nova_compute[232433]: 2025-12-06 07:47:06.429 232437 DEBUG oslo_concurrency.lockutils [None req-d18a8586-6949-4dd6-939f-32391ebf3806 5933a27f4e504e23a5d084501196c0fb 329d4c9562c84ec5a42ca68894cbf27f - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:47:06 np0005548731 nova_compute[232433]: 2025-12-06 07:47:06.429 232437 DEBUG oslo_concurrency.lockutils [None req-d18a8586-6949-4dd6-939f-32391ebf3806 5933a27f4e504e23a5d084501196c0fb 329d4c9562c84ec5a42ca68894cbf27f - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:47:06 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:47:06 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:47:06 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:47:06.600 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:47:06 np0005548731 nova_compute[232433]: 2025-12-06 07:47:06.716 232437 DEBUG nova.compute.manager [req-f8610b2f-d9e7-4686-bd41-43216673eb16 req-1ac5c40e-fd52-4e69-9697-cf8383e3ee5a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 2b4b3181-da12-4705-9215-7ee5b869102b] Received event network-vif-plugged-879edf90-0812-4adf-b171-d993c6cfb26c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:47:06 np0005548731 nova_compute[232433]: 2025-12-06 07:47:06.717 232437 DEBUG oslo_concurrency.lockutils [req-f8610b2f-d9e7-4686-bd41-43216673eb16 req-1ac5c40e-fd52-4e69-9697-cf8383e3ee5a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "2b4b3181-da12-4705-9215-7ee5b869102b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:47:06 np0005548731 nova_compute[232433]: 2025-12-06 07:47:06.717 232437 DEBUG oslo_concurrency.lockutils [req-f8610b2f-d9e7-4686-bd41-43216673eb16 req-1ac5c40e-fd52-4e69-9697-cf8383e3ee5a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "2b4b3181-da12-4705-9215-7ee5b869102b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:47:06 np0005548731 nova_compute[232433]: 2025-12-06 07:47:06.717 232437 DEBUG oslo_concurrency.lockutils [req-f8610b2f-d9e7-4686-bd41-43216673eb16 req-1ac5c40e-fd52-4e69-9697-cf8383e3ee5a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "2b4b3181-da12-4705-9215-7ee5b869102b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:47:06 np0005548731 nova_compute[232433]: 2025-12-06 07:47:06.717 232437 DEBUG nova.compute.manager [req-f8610b2f-d9e7-4686-bd41-43216673eb16 req-1ac5c40e-fd52-4e69-9697-cf8383e3ee5a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 2b4b3181-da12-4705-9215-7ee5b869102b] No waiting events found dispatching network-vif-plugged-879edf90-0812-4adf-b171-d993c6cfb26c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 02:47:06 np0005548731 nova_compute[232433]: 2025-12-06 07:47:06.717 232437 WARNING nova.compute.manager [req-f8610b2f-d9e7-4686-bd41-43216673eb16 req-1ac5c40e-fd52-4e69-9697-cf8383e3ee5a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 2b4b3181-da12-4705-9215-7ee5b869102b] Received unexpected event network-vif-plugged-879edf90-0812-4adf-b171-d993c6cfb26c for instance with vm_state rescued and task_state unrescuing.#033[00m
Dec  6 02:47:06 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e354 e354: 3 total, 3 up, 3 in
Dec  6 02:47:07 np0005548731 nova_compute[232433]: 2025-12-06 07:47:07.340 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:47:07 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:47:07 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:47:07 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:47:07.556 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:47:07 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e355 e355: 3 total, 3 up, 3 in
Dec  6 02:47:08 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:47:08 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:47:08 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:47:08.602 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:47:08 np0005548731 nova_compute[232433]: 2025-12-06 07:47:08.695 232437 DEBUG nova.network.neutron [None req-d18a8586-6949-4dd6-939f-32391ebf3806 5933a27f4e504e23a5d084501196c0fb 329d4c9562c84ec5a42ca68894cbf27f - - default default] [instance: e268006c-e26b-4400-8c9a-da1925cd1a57] Successfully created port: 9182be1d-dcb1-498a-af53-8a36faebc1e2 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Dec  6 02:47:08 np0005548731 nova_compute[232433]: 2025-12-06 07:47:08.727 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:47:08 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e356 e356: 3 total, 3 up, 3 in
Dec  6 02:47:09 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:47:09 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 02:47:09 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:47:09.558 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 02:47:10 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e356 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:47:10 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:47:10 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:47:10 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:47:10.605 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:47:10 np0005548731 ovn_controller[133927]: 2025-12-06T07:47:10Z|00099|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:16:41:1e 10.100.0.7
Dec  6 02:47:10 np0005548731 nova_compute[232433]: 2025-12-06 07:47:10.895 232437 DEBUG nova.network.neutron [None req-0dfc4f11-abc1-4c2a-8de1-7e4c3608034d e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] [instance: 2b4b3181-da12-4705-9215-7ee5b869102b] Updating instance_info_cache with network_info: [{"id": "879edf90-0812-4adf-b171-d993c6cfb26c", "address": "fa:16:3e:9c:dc:f4", "network": {"id": "6d1a17d6-5e44-40b7-832a-81cb86c02e71", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-1698704235-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.221", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f44ecb8bdc7e4692a299e29603301124", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap879edf90-08", "ovs_interfaceid": "879edf90-0812-4adf-b171-d993c6cfb26c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 02:47:11 np0005548731 nova_compute[232433]: 2025-12-06 07:47:11.192 232437 DEBUG oslo_concurrency.lockutils [None req-0dfc4f11-abc1-4c2a-8de1-7e4c3608034d e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Releasing lock "refresh_cache-2b4b3181-da12-4705-9215-7ee5b869102b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 02:47:11 np0005548731 nova_compute[232433]: 2025-12-06 07:47:11.193 232437 DEBUG nova.objects.instance [None req-0dfc4f11-abc1-4c2a-8de1-7e4c3608034d e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Lazy-loading 'flavor' on Instance uuid 2b4b3181-da12-4705-9215-7ee5b869102b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:47:11 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:47:11 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:47:11 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:47:11.562 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:47:11 np0005548731 nova_compute[232433]: 2025-12-06 07:47:11.629 232437 DEBUG nova.compute.manager [req-d1ee51ad-9b8a-495b-a104-431a77b290c3 req-06be7a97-0497-4a5f-926d-fe6467ff031f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 2b4b3181-da12-4705-9215-7ee5b869102b] Received event network-changed-879edf90-0812-4adf-b171-d993c6cfb26c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:47:11 np0005548731 nova_compute[232433]: 2025-12-06 07:47:11.629 232437 DEBUG nova.compute.manager [req-d1ee51ad-9b8a-495b-a104-431a77b290c3 req-06be7a97-0497-4a5f-926d-fe6467ff031f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 2b4b3181-da12-4705-9215-7ee5b869102b] Refreshing instance network info cache due to event network-changed-879edf90-0812-4adf-b171-d993c6cfb26c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  6 02:47:11 np0005548731 nova_compute[232433]: 2025-12-06 07:47:11.630 232437 DEBUG oslo_concurrency.lockutils [req-d1ee51ad-9b8a-495b-a104-431a77b290c3 req-06be7a97-0497-4a5f-926d-fe6467ff031f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-2b4b3181-da12-4705-9215-7ee5b869102b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 02:47:11 np0005548731 nova_compute[232433]: 2025-12-06 07:47:11.630 232437 DEBUG oslo_concurrency.lockutils [req-d1ee51ad-9b8a-495b-a104-431a77b290c3 req-06be7a97-0497-4a5f-926d-fe6467ff031f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-2b4b3181-da12-4705-9215-7ee5b869102b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 02:47:11 np0005548731 nova_compute[232433]: 2025-12-06 07:47:11.630 232437 DEBUG nova.network.neutron [req-d1ee51ad-9b8a-495b-a104-431a77b290c3 req-06be7a97-0497-4a5f-926d-fe6467ff031f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 2b4b3181-da12-4705-9215-7ee5b869102b] Refreshing network info cache for port 879edf90-0812-4adf-b171-d993c6cfb26c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  6 02:47:12 np0005548731 kernel: tap879edf90-08 (unregistering): left promiscuous mode
Dec  6 02:47:12 np0005548731 NetworkManager[49182]: <info>  [1765007232.0427] device (tap879edf90-08): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec  6 02:47:12 np0005548731 ovn_controller[133927]: 2025-12-06T07:47:12Z|00758|binding|INFO|Releasing lport 879edf90-0812-4adf-b171-d993c6cfb26c from this chassis (sb_readonly=0)
Dec  6 02:47:12 np0005548731 ovn_controller[133927]: 2025-12-06T07:47:12Z|00759|binding|INFO|Setting lport 879edf90-0812-4adf-b171-d993c6cfb26c down in Southbound
Dec  6 02:47:12 np0005548731 nova_compute[232433]: 2025-12-06 07:47:12.055 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 02:47:12 np0005548731 ovn_controller[133927]: 2025-12-06T07:47:12Z|00760|binding|INFO|Removing iface tap879edf90-08 ovn-installed in OVS
Dec  6 02:47:12 np0005548731 nova_compute[232433]: 2025-12-06 07:47:12.058 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 02:47:12 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:47:12.061 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9c:dc:f4 10.100.0.7'], port_security=['fa:16:3e:9c:dc:f4 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '2b4b3181-da12-4705-9215-7ee5b869102b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6d1a17d6-5e44-40b7-832a-81cb86c02e71', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f44ecb8bdc7e4692a299e29603301124', 'neutron:revision_number': '6', 'neutron:security_group_ids': 'f9c55c19-5297-4b9d-814f-3c2f976ceba7', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.221', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ef95e15f-f36a-4631-8598-89c7e0374fce, chassis=[], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], logical_port=879edf90-0812-4adf-b171-d993c6cfb26c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec  6 02:47:12 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:47:12.062 143965 INFO neutron.agent.ovn.metadata.agent [-] Port 879edf90-0812-4adf-b171-d993c6cfb26c in datapath 6d1a17d6-5e44-40b7-832a-81cb86c02e71 unbound from our chassis
Dec  6 02:47:12 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:47:12.064 143965 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 6d1a17d6-5e44-40b7-832a-81cb86c02e71, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec  6 02:47:12 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:47:12.065 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[c2cccbe5-6ced-4222-b1a7-d5502d676536]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  6 02:47:12 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:47:12.066 143965 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-6d1a17d6-5e44-40b7-832a-81cb86c02e71 namespace which is not needed anymore
Dec  6 02:47:12 np0005548731 nova_compute[232433]: 2025-12-06 07:47:12.071 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 02:47:12 np0005548731 systemd[1]: machine-qemu\x2d77\x2dinstance\x2d0000009f.scope: Deactivated successfully.
Dec  6 02:47:12 np0005548731 systemd[1]: machine-qemu\x2d77\x2dinstance\x2d0000009f.scope: Consumed 7.287s CPU time.
Dec  6 02:47:12 np0005548731 systemd-machined[195355]: Machine qemu-77-instance-0000009f terminated.
Dec  6 02:47:12 np0005548731 neutron-haproxy-ovnmeta-6d1a17d6-5e44-40b7-832a-81cb86c02e71[304393]: [NOTICE]   (304415) : haproxy version is 2.8.14-c23fe91
Dec  6 02:47:12 np0005548731 neutron-haproxy-ovnmeta-6d1a17d6-5e44-40b7-832a-81cb86c02e71[304393]: [NOTICE]   (304415) : path to executable is /usr/sbin/haproxy
Dec  6 02:47:12 np0005548731 neutron-haproxy-ovnmeta-6d1a17d6-5e44-40b7-832a-81cb86c02e71[304393]: [WARNING]  (304415) : Exiting Master process...
Dec  6 02:47:12 np0005548731 neutron-haproxy-ovnmeta-6d1a17d6-5e44-40b7-832a-81cb86c02e71[304393]: [ALERT]    (304415) : Current worker (304422) exited with code 143 (Terminated)
Dec  6 02:47:12 np0005548731 neutron-haproxy-ovnmeta-6d1a17d6-5e44-40b7-832a-81cb86c02e71[304393]: [WARNING]  (304415) : All workers exited. Exiting... (0)
Dec  6 02:47:12 np0005548731 systemd[1]: libpod-3302dcf5ca1e83c0b3575f8f32b65527197f1e4955a8c8efc90ed13e17ab1b4a.scope: Deactivated successfully.
Dec  6 02:47:12 np0005548731 podman[304627]: 2025-12-06 07:47:12.205815972 +0000 UTC m=+0.043403799 container died 3302dcf5ca1e83c0b3575f8f32b65527197f1e4955a8c8efc90ed13e17ab1b4a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6d1a17d6-5e44-40b7-832a-81cb86c02e71, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec  6 02:47:12 np0005548731 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-3302dcf5ca1e83c0b3575f8f32b65527197f1e4955a8c8efc90ed13e17ab1b4a-userdata-shm.mount: Deactivated successfully.
Dec  6 02:47:12 np0005548731 nova_compute[232433]: 2025-12-06 07:47:12.244 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 02:47:12 np0005548731 systemd[1]: var-lib-containers-storage-overlay-7eb77b1a06819636d33bf70070f140c8cbee869f4f68e367166f67c63e125e82-merged.mount: Deactivated successfully.
Dec  6 02:47:12 np0005548731 nova_compute[232433]: 2025-12-06 07:47:12.250 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 02:47:12 np0005548731 podman[304627]: 2025-12-06 07:47:12.257212665 +0000 UTC m=+0.094800492 container cleanup 3302dcf5ca1e83c0b3575f8f32b65527197f1e4955a8c8efc90ed13e17ab1b4a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6d1a17d6-5e44-40b7-832a-81cb86c02e71, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec  6 02:47:12 np0005548731 nova_compute[232433]: 2025-12-06 07:47:12.262 232437 INFO nova.virt.libvirt.driver [-] [instance: 2b4b3181-da12-4705-9215-7ee5b869102b] Instance destroyed successfully.
Dec  6 02:47:12 np0005548731 nova_compute[232433]: 2025-12-06 07:47:12.263 232437 DEBUG nova.objects.instance [None req-0dfc4f11-abc1-4c2a-8de1-7e4c3608034d e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Lazy-loading 'numa_topology' on Instance uuid 2b4b3181-da12-4705-9215-7ee5b869102b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec  6 02:47:12 np0005548731 systemd[1]: libpod-conmon-3302dcf5ca1e83c0b3575f8f32b65527197f1e4955a8c8efc90ed13e17ab1b4a.scope: Deactivated successfully.
Dec  6 02:47:12 np0005548731 podman[304666]: 2025-12-06 07:47:12.317105165 +0000 UTC m=+0.040011897 container remove 3302dcf5ca1e83c0b3575f8f32b65527197f1e4955a8c8efc90ed13e17ab1b4a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6d1a17d6-5e44-40b7-832a-81cb86c02e71, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  6 02:47:12 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:47:12.322 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[7a3cb394-81c3-4f21-9216-64281b2df736]: (4, ('Sat Dec  6 07:47:12 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-6d1a17d6-5e44-40b7-832a-81cb86c02e71 (3302dcf5ca1e83c0b3575f8f32b65527197f1e4955a8c8efc90ed13e17ab1b4a)\n3302dcf5ca1e83c0b3575f8f32b65527197f1e4955a8c8efc90ed13e17ab1b4a\nSat Dec  6 07:47:12 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-6d1a17d6-5e44-40b7-832a-81cb86c02e71 (3302dcf5ca1e83c0b3575f8f32b65527197f1e4955a8c8efc90ed13e17ab1b4a)\n3302dcf5ca1e83c0b3575f8f32b65527197f1e4955a8c8efc90ed13e17ab1b4a\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  6 02:47:12 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:47:12.325 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[a38374a2-88ac-4a8f-be41-c31ecfb35b65]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  6 02:47:12 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:47:12.326 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6d1a17d6-50, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec  6 02:47:12 np0005548731 nova_compute[232433]: 2025-12-06 07:47:12.327 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 02:47:12 np0005548731 kernel: tap6d1a17d6-50: left promiscuous mode
Dec  6 02:47:12 np0005548731 NetworkManager[49182]: <info>  [1765007232.3385] manager: (tap879edf90-08): new Tun device (/org/freedesktop/NetworkManager/Devices/351)
Dec  6 02:47:12 np0005548731 systemd-udevd[304608]: Network interface NamePolicy= disabled on kernel command line.
Dec  6 02:47:12 np0005548731 nova_compute[232433]: 2025-12-06 07:47:12.341 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 02:47:12 np0005548731 nova_compute[232433]: 2025-12-06 07:47:12.346 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 02:47:12 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:47:12.349 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[84def7b5-2434-481b-809d-b6fad83fd141]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  6 02:47:12 np0005548731 kernel: tap879edf90-08: entered promiscuous mode
Dec  6 02:47:12 np0005548731 NetworkManager[49182]: <info>  [1765007232.3520] device (tap879edf90-08): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec  6 02:47:12 np0005548731 ovn_controller[133927]: 2025-12-06T07:47:12Z|00761|binding|INFO|Claiming lport 879edf90-0812-4adf-b171-d993c6cfb26c for this chassis.
Dec  6 02:47:12 np0005548731 ovn_controller[133927]: 2025-12-06T07:47:12Z|00762|binding|INFO|879edf90-0812-4adf-b171-d993c6cfb26c: Claiming fa:16:3e:9c:dc:f4 10.100.0.7
Dec  6 02:47:12 np0005548731 nova_compute[232433]: 2025-12-06 07:47:12.352 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 02:47:12 np0005548731 NetworkManager[49182]: <info>  [1765007232.3555] device (tap879edf90-08): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec  6 02:47:12 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:47:12.360 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9c:dc:f4 10.100.0.7'], port_security=['fa:16:3e:9c:dc:f4 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '2b4b3181-da12-4705-9215-7ee5b869102b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6d1a17d6-5e44-40b7-832a-81cb86c02e71', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f44ecb8bdc7e4692a299e29603301124', 'neutron:revision_number': '6', 'neutron:security_group_ids': 'f9c55c19-5297-4b9d-814f-3c2f976ceba7', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.221', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ef95e15f-f36a-4631-8598-89c7e0374fce, chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], logical_port=879edf90-0812-4adf-b171-d993c6cfb26c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec  6 02:47:12 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:47:12.363 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[e99d844a-514d-4b83-a4de-c597330533bd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  6 02:47:12 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:47:12.364 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[f28fb3d1-2b47-47d0-af1c-fad156a0a7f2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  6 02:47:12 np0005548731 systemd-machined[195355]: New machine qemu-78-instance-0000009f.
Dec  6 02:47:12 np0005548731 ovn_controller[133927]: 2025-12-06T07:47:12Z|00763|binding|INFO|Setting lport 879edf90-0812-4adf-b171-d993c6cfb26c ovn-installed in OVS
Dec  6 02:47:12 np0005548731 ovn_controller[133927]: 2025-12-06T07:47:12Z|00764|binding|INFO|Setting lport 879edf90-0812-4adf-b171-d993c6cfb26c up in Southbound
Dec  6 02:47:12 np0005548731 nova_compute[232433]: 2025-12-06 07:47:12.374 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 02:47:12 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:47:12.380 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[9a20db95-1821-480f-8e5e-e3a18b4a4ab2]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 748638, 'reachable_time': 16127, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 304695, 'error': None, 'target': 'ovnmeta-6d1a17d6-5e44-40b7-832a-81cb86c02e71', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  6 02:47:12 np0005548731 systemd[1]: Started Virtual Machine qemu-78-instance-0000009f.
Dec  6 02:47:12 np0005548731 systemd[1]: run-netns-ovnmeta\x2d6d1a17d6\x2d5e44\x2d40b7\x2d832a\x2d81cb86c02e71.mount: Deactivated successfully.
Dec  6 02:47:12 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:47:12.384 144079 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-6d1a17d6-5e44-40b7-832a-81cb86c02e71 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec  6 02:47:12 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:47:12.384 144079 DEBUG oslo.privsep.daemon [-] privsep: reply[3dad7c27-7f67-4284-92c8-f2b451510c37]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  6 02:47:12 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:47:12.385 143965 INFO neutron.agent.ovn.metadata.agent [-] Port 879edf90-0812-4adf-b171-d993c6cfb26c in datapath 6d1a17d6-5e44-40b7-832a-81cb86c02e71 unbound from our chassis
Dec  6 02:47:12 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:47:12.387 143965 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 6d1a17d6-5e44-40b7-832a-81cb86c02e71
Dec  6 02:47:12 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:47:12.396 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[c73d0d14-4391-4259-a513-906545f79888]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  6 02:47:12 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:47:12.397 143965 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap6d1a17d6-51 in ovnmeta-6d1a17d6-5e44-40b7-832a-81cb86c02e71 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Dec  6 02:47:12 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:47:12.398 236668 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap6d1a17d6-50 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec  6 02:47:12 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:47:12.398 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[e8440d1b-10c9-4804-8fba-375cc02effe3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  6 02:47:12 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:47:12.399 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[ac7767e0-2864-4229-b06d-93c666415f03]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  6 02:47:12 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:47:12.410 144079 DEBUG oslo.privsep.daemon [-] privsep: reply[1b98c725-6e13-4345-ac22-f214f55b3df4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  6 02:47:12 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:47:12.423 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[bffd2efe-ba21-4a0f-8d48-eb24a3c632b0]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  6 02:47:12 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:47:12.453 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[32cb33e5-837e-42f5-9c56-d91df3ae32b3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  6 02:47:12 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:47:12.460 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[9fbba94c-a5c8-459e-b6e9-2db87c4884e5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  6 02:47:12 np0005548731 NetworkManager[49182]: <info>  [1765007232.4618] manager: (tap6d1a17d6-50): new Veth device (/org/freedesktop/NetworkManager/Devices/352)
Dec  6 02:47:12 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:47:12.493 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[eb8f7d07-e15b-43b8-b3be-bcf39ae8d80f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  6 02:47:12 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:47:12.496 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[ccc551bd-6223-45eb-acd5-020ba78dca3c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  6 02:47:12 np0005548731 NetworkManager[49182]: <info>  [1765007232.5216] device (tap6d1a17d6-50): carrier: link connected
Dec  6 02:47:12 np0005548731 nova_compute[232433]: 2025-12-06 07:47:12.529 232437 DEBUG nova.network.neutron [None req-d18a8586-6949-4dd6-939f-32391ebf3806 5933a27f4e504e23a5d084501196c0fb 329d4c9562c84ec5a42ca68894cbf27f - - default default] [instance: e268006c-e26b-4400-8c9a-da1925cd1a57] Successfully updated port: 9182be1d-dcb1-498a-af53-8a36faebc1e2 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec  6 02:47:12 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:47:12.530 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[c541a28a-3965-45d9-8a94-bdbb1d3004d0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  6 02:47:12 np0005548731 nova_compute[232433]: 2025-12-06 07:47:12.543 232437 DEBUG oslo_concurrency.lockutils [None req-d18a8586-6949-4dd6-939f-32391ebf3806 5933a27f4e504e23a5d084501196c0fb 329d4c9562c84ec5a42ca68894cbf27f - - default default] Acquiring lock "refresh_cache-e268006c-e26b-4400-8c9a-da1925cd1a57" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec  6 02:47:12 np0005548731 nova_compute[232433]: 2025-12-06 07:47:12.543 232437 DEBUG oslo_concurrency.lockutils [None req-d18a8586-6949-4dd6-939f-32391ebf3806 5933a27f4e504e23a5d084501196c0fb 329d4c9562c84ec5a42ca68894cbf27f - - default default] Acquired lock "refresh_cache-e268006c-e26b-4400-8c9a-da1925cd1a57" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec  6 02:47:12 np0005548731 nova_compute[232433]: 2025-12-06 07:47:12.543 232437 DEBUG nova.network.neutron [None req-d18a8586-6949-4dd6-939f-32391ebf3806 5933a27f4e504e23a5d084501196c0fb 329d4c9562c84ec5a42ca68894cbf27f - - default default] [instance: e268006c-e26b-4400-8c9a-da1925cd1a57] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec  6 02:47:12 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:47:12.549 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[95994ca0-c737-4436-ab56-f87733f94360]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6d1a17d6-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:40:a2:f6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 233], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 749452, 'reachable_time': 28514, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 304728, 'error': None, 'target': 'ovnmeta-6d1a17d6-5e44-40b7-832a-81cb86c02e71', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  6 02:47:12 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:47:12.565 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[75b4783a-ce4c-4a5d-b0c5-d0d2ed82f3e1]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe40:a2f6'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 749452, 'tstamp': 749452}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 304729, 'error': None, 'target': 'ovnmeta-6d1a17d6-5e44-40b7-832a-81cb86c02e71', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:47:12 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:47:12.586 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[e09e3a83-c5dc-4b2d-8777-8c2eb57be087]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6d1a17d6-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:40:a2:f6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 233], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 749452, 'reachable_time': 28514, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 304730, 'error': None, 'target': 'ovnmeta-6d1a17d6-5e44-40b7-832a-81cb86c02e71', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:47:12 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:47:12 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:47:12 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:47:12.607 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:47:12 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:47:12.621 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[ca2b076c-208c-40aa-b7e0-bd6298eb4c6d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:47:12 np0005548731 nova_compute[232433]: 2025-12-06 07:47:12.662 232437 DEBUG nova.compute.manager [req-295a9047-e837-4f82-b30b-863897e54fcc req-bd095709-06d7-4824-af65-2f292d5b09f2 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 2b4b3181-da12-4705-9215-7ee5b869102b] Received event network-vif-unplugged-879edf90-0812-4adf-b171-d993c6cfb26c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:47:12 np0005548731 nova_compute[232433]: 2025-12-06 07:47:12.662 232437 DEBUG oslo_concurrency.lockutils [req-295a9047-e837-4f82-b30b-863897e54fcc req-bd095709-06d7-4824-af65-2f292d5b09f2 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "2b4b3181-da12-4705-9215-7ee5b869102b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:47:12 np0005548731 nova_compute[232433]: 2025-12-06 07:47:12.662 232437 DEBUG oslo_concurrency.lockutils [req-295a9047-e837-4f82-b30b-863897e54fcc req-bd095709-06d7-4824-af65-2f292d5b09f2 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "2b4b3181-da12-4705-9215-7ee5b869102b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:47:12 np0005548731 nova_compute[232433]: 2025-12-06 07:47:12.662 232437 DEBUG oslo_concurrency.lockutils [req-295a9047-e837-4f82-b30b-863897e54fcc req-bd095709-06d7-4824-af65-2f292d5b09f2 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "2b4b3181-da12-4705-9215-7ee5b869102b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:47:12 np0005548731 nova_compute[232433]: 2025-12-06 07:47:12.663 232437 DEBUG nova.compute.manager [req-295a9047-e837-4f82-b30b-863897e54fcc req-bd095709-06d7-4824-af65-2f292d5b09f2 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 2b4b3181-da12-4705-9215-7ee5b869102b] No waiting events found dispatching network-vif-unplugged-879edf90-0812-4adf-b171-d993c6cfb26c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 02:47:12 np0005548731 nova_compute[232433]: 2025-12-06 07:47:12.663 232437 WARNING nova.compute.manager [req-295a9047-e837-4f82-b30b-863897e54fcc req-bd095709-06d7-4824-af65-2f292d5b09f2 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 2b4b3181-da12-4705-9215-7ee5b869102b] Received unexpected event network-vif-unplugged-879edf90-0812-4adf-b171-d993c6cfb26c for instance with vm_state rescued and task_state unrescuing.#033[00m
Dec  6 02:47:12 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:47:12.678 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[f83cca22-c30f-4a0d-90f1-0fb307b5cdb2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:47:12 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:47:12.679 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6d1a17d6-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:47:12 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:47:12.680 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  6 02:47:12 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:47:12.680 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6d1a17d6-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:47:12 np0005548731 nova_compute[232433]: 2025-12-06 07:47:12.682 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:47:12 np0005548731 NetworkManager[49182]: <info>  [1765007232.6828] manager: (tap6d1a17d6-50): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/353)
Dec  6 02:47:12 np0005548731 kernel: tap6d1a17d6-50: entered promiscuous mode
Dec  6 02:47:12 np0005548731 nova_compute[232433]: 2025-12-06 07:47:12.684 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:47:12 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:47:12.687 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap6d1a17d6-50, col_values=(('external_ids', {'iface-id': '6b94462b-5171-4a4e-8d60-ac645842c400'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:47:12 np0005548731 nova_compute[232433]: 2025-12-06 07:47:12.688 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:47:12 np0005548731 ovn_controller[133927]: 2025-12-06T07:47:12Z|00765|binding|INFO|Releasing lport 6b94462b-5171-4a4e-8d60-ac645842c400 from this chassis (sb_readonly=0)
Dec  6 02:47:12 np0005548731 nova_compute[232433]: 2025-12-06 07:47:12.689 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:47:12 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:47:12.690 143965 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/6d1a17d6-5e44-40b7-832a-81cb86c02e71.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/6d1a17d6-5e44-40b7-832a-81cb86c02e71.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Dec  6 02:47:12 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:47:12.691 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[a7eec17f-2d93-4318-95fa-160cd7dabd00]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:47:12 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:47:12.692 143965 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec  6 02:47:12 np0005548731 ovn_metadata_agent[143960]: global
Dec  6 02:47:12 np0005548731 ovn_metadata_agent[143960]:    log         /dev/log local0 debug
Dec  6 02:47:12 np0005548731 ovn_metadata_agent[143960]:    log-tag     haproxy-metadata-proxy-6d1a17d6-5e44-40b7-832a-81cb86c02e71
Dec  6 02:47:12 np0005548731 ovn_metadata_agent[143960]:    user        root
Dec  6 02:47:12 np0005548731 ovn_metadata_agent[143960]:    group       root
Dec  6 02:47:12 np0005548731 ovn_metadata_agent[143960]:    maxconn     1024
Dec  6 02:47:12 np0005548731 ovn_metadata_agent[143960]:    pidfile     /var/lib/neutron/external/pids/6d1a17d6-5e44-40b7-832a-81cb86c02e71.pid.haproxy
Dec  6 02:47:12 np0005548731 ovn_metadata_agent[143960]:    daemon
Dec  6 02:47:12 np0005548731 ovn_metadata_agent[143960]: 
Dec  6 02:47:12 np0005548731 ovn_metadata_agent[143960]: defaults
Dec  6 02:47:12 np0005548731 ovn_metadata_agent[143960]:    log global
Dec  6 02:47:12 np0005548731 ovn_metadata_agent[143960]:    mode http
Dec  6 02:47:12 np0005548731 ovn_metadata_agent[143960]:    option httplog
Dec  6 02:47:12 np0005548731 ovn_metadata_agent[143960]:    option dontlognull
Dec  6 02:47:12 np0005548731 ovn_metadata_agent[143960]:    option http-server-close
Dec  6 02:47:12 np0005548731 ovn_metadata_agent[143960]:    option forwardfor
Dec  6 02:47:12 np0005548731 ovn_metadata_agent[143960]:    retries                 3
Dec  6 02:47:12 np0005548731 ovn_metadata_agent[143960]:    timeout http-request    30s
Dec  6 02:47:12 np0005548731 ovn_metadata_agent[143960]:    timeout connect         30s
Dec  6 02:47:12 np0005548731 ovn_metadata_agent[143960]:    timeout client          32s
Dec  6 02:47:12 np0005548731 ovn_metadata_agent[143960]:    timeout server          32s
Dec  6 02:47:12 np0005548731 ovn_metadata_agent[143960]:    timeout http-keep-alive 30s
Dec  6 02:47:12 np0005548731 ovn_metadata_agent[143960]: 
Dec  6 02:47:12 np0005548731 ovn_metadata_agent[143960]: 
Dec  6 02:47:12 np0005548731 ovn_metadata_agent[143960]: listen listener
Dec  6 02:47:12 np0005548731 ovn_metadata_agent[143960]:    bind 169.254.169.254:80
Dec  6 02:47:12 np0005548731 ovn_metadata_agent[143960]:    server metadata /var/lib/neutron/metadata_proxy
Dec  6 02:47:12 np0005548731 ovn_metadata_agent[143960]:    http-request add-header X-OVN-Network-ID 6d1a17d6-5e44-40b7-832a-81cb86c02e71
Dec  6 02:47:12 np0005548731 ovn_metadata_agent[143960]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Dec  6 02:47:12 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:47:12.692 143965 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-6d1a17d6-5e44-40b7-832a-81cb86c02e71', 'env', 'PROCESS_TAG=haproxy-6d1a17d6-5e44-40b7-832a-81cb86c02e71', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/6d1a17d6-5e44-40b7-832a-81cb86c02e71.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Dec  6 02:47:12 np0005548731 nova_compute[232433]: 2025-12-06 07:47:12.705 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:47:12 np0005548731 nova_compute[232433]: 2025-12-06 07:47:12.726 232437 DEBUG nova.network.neutron [None req-d18a8586-6949-4dd6-939f-32391ebf3806 5933a27f4e504e23a5d084501196c0fb 329d4c9562c84ec5a42ca68894cbf27f - - default default] [instance: e268006c-e26b-4400-8c9a-da1925cd1a57] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec  6 02:47:13 np0005548731 nova_compute[232433]: 2025-12-06 07:47:13.003 232437 DEBUG nova.virt.libvirt.host [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Removed pending event for 2b4b3181-da12-4705-9215-7ee5b869102b due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Dec  6 02:47:13 np0005548731 nova_compute[232433]: 2025-12-06 07:47:13.003 232437 DEBUG nova.virt.driver [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Emitting event <LifecycleEvent: 1765007232.9877245, 2b4b3181-da12-4705-9215-7ee5b869102b => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 02:47:13 np0005548731 nova_compute[232433]: 2025-12-06 07:47:13.004 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 2b4b3181-da12-4705-9215-7ee5b869102b] VM Resumed (Lifecycle Event)#033[00m
Dec  6 02:47:13 np0005548731 nova_compute[232433]: 2025-12-06 07:47:13.036 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 2b4b3181-da12-4705-9215-7ee5b869102b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:47:13 np0005548731 nova_compute[232433]: 2025-12-06 07:47:13.040 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 2b4b3181-da12-4705-9215-7ee5b869102b] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: rescued, current task_state: unrescuing, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  6 02:47:13 np0005548731 podman[304829]: 2025-12-06 07:47:13.04739172 +0000 UTC m=+0.042498357 container create 8fdda6825839511fd07c0d32e757fd06e6eb46125471194fa158cd1259d49249 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6d1a17d6-5e44-40b7-832a-81cb86c02e71, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec  6 02:47:13 np0005548731 nova_compute[232433]: 2025-12-06 07:47:13.059 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 2b4b3181-da12-4705-9215-7ee5b869102b] During sync_power_state the instance has a pending task (unrescuing). Skip.#033[00m
Dec  6 02:47:13 np0005548731 nova_compute[232433]: 2025-12-06 07:47:13.060 232437 DEBUG nova.virt.driver [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Emitting event <LifecycleEvent: 1765007232.9879541, 2b4b3181-da12-4705-9215-7ee5b869102b => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 02:47:13 np0005548731 nova_compute[232433]: 2025-12-06 07:47:13.060 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 2b4b3181-da12-4705-9215-7ee5b869102b] VM Started (Lifecycle Event)#033[00m
Dec  6 02:47:13 np0005548731 systemd[1]: Started libpod-conmon-8fdda6825839511fd07c0d32e757fd06e6eb46125471194fa158cd1259d49249.scope.
Dec  6 02:47:13 np0005548731 nova_compute[232433]: 2025-12-06 07:47:13.092 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 2b4b3181-da12-4705-9215-7ee5b869102b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:47:13 np0005548731 nova_compute[232433]: 2025-12-06 07:47:13.095 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 2b4b3181-da12-4705-9215-7ee5b869102b] Synchronizing instance power state after lifecycle event "Started"; current vm_state: rescued, current task_state: unrescuing, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  6 02:47:13 np0005548731 systemd[1]: Started libcrun container.
Dec  6 02:47:13 np0005548731 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/20c79853b6cdc4327f207120ebbba64f9acda939186e8940588e8c29860df0a1/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec  6 02:47:13 np0005548731 podman[304829]: 2025-12-06 07:47:13.117093139 +0000 UTC m=+0.112199776 container init 8fdda6825839511fd07c0d32e757fd06e6eb46125471194fa158cd1259d49249 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6d1a17d6-5e44-40b7-832a-81cb86c02e71, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Dec  6 02:47:13 np0005548731 podman[304829]: 2025-12-06 07:47:13.024338278 +0000 UTC m=+0.019444935 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec  6 02:47:13 np0005548731 podman[304829]: 2025-12-06 07:47:13.122940482 +0000 UTC m=+0.118047119 container start 8fdda6825839511fd07c0d32e757fd06e6eb46125471194fa158cd1259d49249 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6d1a17d6-5e44-40b7-832a-81cb86c02e71, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec  6 02:47:13 np0005548731 nova_compute[232433]: 2025-12-06 07:47:13.125 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 2b4b3181-da12-4705-9215-7ee5b869102b] During sync_power_state the instance has a pending task (unrescuing). Skip.#033[00m
Dec  6 02:47:13 np0005548731 neutron-haproxy-ovnmeta-6d1a17d6-5e44-40b7-832a-81cb86c02e71[304854]: [NOTICE]   (304858) : New worker (304860) forked
Dec  6 02:47:13 np0005548731 neutron-haproxy-ovnmeta-6d1a17d6-5e44-40b7-832a-81cb86c02e71[304854]: [NOTICE]   (304858) : Loading success.
Dec  6 02:47:13 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:47:13 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:47:13 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:47:13.564 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:47:13 np0005548731 nova_compute[232433]: 2025-12-06 07:47:13.729 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:47:13 np0005548731 nova_compute[232433]: 2025-12-06 07:47:13.787 232437 DEBUG nova.compute.manager [req-cbe61cab-4bf4-4a2f-a908-8ce9c6c5618a req-eb71e660-0e83-4384-890b-7c20701088a2 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 2b4b3181-da12-4705-9215-7ee5b869102b] Received event network-changed-879edf90-0812-4adf-b171-d993c6cfb26c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:47:13 np0005548731 nova_compute[232433]: 2025-12-06 07:47:13.787 232437 DEBUG nova.compute.manager [req-cbe61cab-4bf4-4a2f-a908-8ce9c6c5618a req-eb71e660-0e83-4384-890b-7c20701088a2 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 2b4b3181-da12-4705-9215-7ee5b869102b] Refreshing instance network info cache due to event network-changed-879edf90-0812-4adf-b171-d993c6cfb26c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  6 02:47:13 np0005548731 nova_compute[232433]: 2025-12-06 07:47:13.788 232437 DEBUG oslo_concurrency.lockutils [req-cbe61cab-4bf4-4a2f-a908-8ce9c6c5618a req-eb71e660-0e83-4384-890b-7c20701088a2 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-2b4b3181-da12-4705-9215-7ee5b869102b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 02:47:14 np0005548731 nova_compute[232433]: 2025-12-06 07:47:14.162 232437 DEBUG nova.network.neutron [req-d1ee51ad-9b8a-495b-a104-431a77b290c3 req-06be7a97-0497-4a5f-926d-fe6467ff031f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 2b4b3181-da12-4705-9215-7ee5b869102b] Updated VIF entry in instance network info cache for port 879edf90-0812-4adf-b171-d993c6cfb26c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  6 02:47:14 np0005548731 nova_compute[232433]: 2025-12-06 07:47:14.163 232437 DEBUG nova.network.neutron [req-d1ee51ad-9b8a-495b-a104-431a77b290c3 req-06be7a97-0497-4a5f-926d-fe6467ff031f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 2b4b3181-da12-4705-9215-7ee5b869102b] Updating instance_info_cache with network_info: [{"id": "879edf90-0812-4adf-b171-d993c6cfb26c", "address": "fa:16:3e:9c:dc:f4", "network": {"id": "6d1a17d6-5e44-40b7-832a-81cb86c02e71", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-1698704235-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.221", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f44ecb8bdc7e4692a299e29603301124", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap879edf90-08", "ovs_interfaceid": "879edf90-0812-4adf-b171-d993c6cfb26c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 02:47:14 np0005548731 nova_compute[232433]: 2025-12-06 07:47:14.188 232437 DEBUG oslo_concurrency.lockutils [req-d1ee51ad-9b8a-495b-a104-431a77b290c3 req-06be7a97-0497-4a5f-926d-fe6467ff031f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-2b4b3181-da12-4705-9215-7ee5b869102b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 02:47:14 np0005548731 nova_compute[232433]: 2025-12-06 07:47:14.189 232437 DEBUG oslo_concurrency.lockutils [req-cbe61cab-4bf4-4a2f-a908-8ce9c6c5618a req-eb71e660-0e83-4384-890b-7c20701088a2 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-2b4b3181-da12-4705-9215-7ee5b869102b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 02:47:14 np0005548731 nova_compute[232433]: 2025-12-06 07:47:14.189 232437 DEBUG nova.network.neutron [req-cbe61cab-4bf4-4a2f-a908-8ce9c6c5618a req-eb71e660-0e83-4384-890b-7c20701088a2 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 2b4b3181-da12-4705-9215-7ee5b869102b] Refreshing network info cache for port 879edf90-0812-4adf-b171-d993c6cfb26c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  6 02:47:14 np0005548731 nova_compute[232433]: 2025-12-06 07:47:14.202 232437 DEBUG nova.network.neutron [None req-d18a8586-6949-4dd6-939f-32391ebf3806 5933a27f4e504e23a5d084501196c0fb 329d4c9562c84ec5a42ca68894cbf27f - - default default] [instance: e268006c-e26b-4400-8c9a-da1925cd1a57] Updating instance_info_cache with network_info: [{"id": "9182be1d-dcb1-498a-af53-8a36faebc1e2", "address": "fa:16:3e:06:7a:7f", "network": {"id": "61f8d3af-3627-497f-85d1-05f726aad6e5", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-1070596354-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "329d4c9562c84ec5a42ca68894cbf27f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9182be1d-dc", "ovs_interfaceid": "9182be1d-dcb1-498a-af53-8a36faebc1e2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 02:47:14 np0005548731 nova_compute[232433]: 2025-12-06 07:47:14.236 232437 DEBUG oslo_concurrency.lockutils [None req-d18a8586-6949-4dd6-939f-32391ebf3806 5933a27f4e504e23a5d084501196c0fb 329d4c9562c84ec5a42ca68894cbf27f - - default default] Releasing lock "refresh_cache-e268006c-e26b-4400-8c9a-da1925cd1a57" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 02:47:14 np0005548731 nova_compute[232433]: 2025-12-06 07:47:14.236 232437 DEBUG nova.compute.manager [None req-d18a8586-6949-4dd6-939f-32391ebf3806 5933a27f4e504e23a5d084501196c0fb 329d4c9562c84ec5a42ca68894cbf27f - - default default] [instance: e268006c-e26b-4400-8c9a-da1925cd1a57] Instance network_info: |[{"id": "9182be1d-dcb1-498a-af53-8a36faebc1e2", "address": "fa:16:3e:06:7a:7f", "network": {"id": "61f8d3af-3627-497f-85d1-05f726aad6e5", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-1070596354-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "329d4c9562c84ec5a42ca68894cbf27f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9182be1d-dc", "ovs_interfaceid": "9182be1d-dcb1-498a-af53-8a36faebc1e2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Dec  6 02:47:14 np0005548731 nova_compute[232433]: 2025-12-06 07:47:14.240 232437 DEBUG nova.virt.libvirt.driver [None req-d18a8586-6949-4dd6-939f-32391ebf3806 5933a27f4e504e23a5d084501196c0fb 329d4c9562c84ec5a42ca68894cbf27f - - default default] [instance: e268006c-e26b-4400-8c9a-da1925cd1a57] Start _get_guest_xml network_info=[{"id": "9182be1d-dcb1-498a-af53-8a36faebc1e2", "address": "fa:16:3e:06:7a:7f", "network": {"id": "61f8d3af-3627-497f-85d1-05f726aad6e5", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-1070596354-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "329d4c9562c84ec5a42ca68894cbf27f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9182be1d-dc", "ovs_interfaceid": "9182be1d-dcb1-498a-af53-8a36faebc1e2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'device_type': 'disk', 'encryption_secret_uuid': None, 'disk_bus': 'virtio', 'boot_index': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'encryption_options': None, 'size': 0, 'encrypted': False, 'image_id': '6efab05d-c7cf-4770-a5c3-c806a2739063'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec  6 02:47:14 np0005548731 nova_compute[232433]: 2025-12-06 07:47:14.253 232437 WARNING nova.virt.libvirt.driver [None req-d18a8586-6949-4dd6-939f-32391ebf3806 5933a27f4e504e23a5d084501196c0fb 329d4c9562c84ec5a42ca68894cbf27f - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  6 02:47:14 np0005548731 nova_compute[232433]: 2025-12-06 07:47:14.258 232437 DEBUG nova.virt.libvirt.host [None req-d18a8586-6949-4dd6-939f-32391ebf3806 5933a27f4e504e23a5d084501196c0fb 329d4c9562c84ec5a42ca68894cbf27f - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec  6 02:47:14 np0005548731 nova_compute[232433]: 2025-12-06 07:47:14.259 232437 DEBUG nova.virt.libvirt.host [None req-d18a8586-6949-4dd6-939f-32391ebf3806 5933a27f4e504e23a5d084501196c0fb 329d4c9562c84ec5a42ca68894cbf27f - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec  6 02:47:14 np0005548731 nova_compute[232433]: 2025-12-06 07:47:14.263 232437 DEBUG nova.virt.libvirt.host [None req-d18a8586-6949-4dd6-939f-32391ebf3806 5933a27f4e504e23a5d084501196c0fb 329d4c9562c84ec5a42ca68894cbf27f - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec  6 02:47:14 np0005548731 nova_compute[232433]: 2025-12-06 07:47:14.264 232437 DEBUG nova.virt.libvirt.host [None req-d18a8586-6949-4dd6-939f-32391ebf3806 5933a27f4e504e23a5d084501196c0fb 329d4c9562c84ec5a42ca68894cbf27f - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec  6 02:47:14 np0005548731 nova_compute[232433]: 2025-12-06 07:47:14.265 232437 DEBUG nova.virt.libvirt.driver [None req-d18a8586-6949-4dd6-939f-32391ebf3806 5933a27f4e504e23a5d084501196c0fb 329d4c9562c84ec5a42ca68894cbf27f - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec  6 02:47:14 np0005548731 nova_compute[232433]: 2025-12-06 07:47:14.265 232437 DEBUG nova.virt.hardware [None req-d18a8586-6949-4dd6-939f-32391ebf3806 5933a27f4e504e23a5d084501196c0fb 329d4c9562c84ec5a42ca68894cbf27f - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-06T06:56:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='25848a18-11d9-4f11-80b5-5d005675c76d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec  6 02:47:14 np0005548731 nova_compute[232433]: 2025-12-06 07:47:14.266 232437 DEBUG nova.virt.hardware [None req-d18a8586-6949-4dd6-939f-32391ebf3806 5933a27f4e504e23a5d084501196c0fb 329d4c9562c84ec5a42ca68894cbf27f - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec  6 02:47:14 np0005548731 nova_compute[232433]: 2025-12-06 07:47:14.266 232437 DEBUG nova.virt.hardware [None req-d18a8586-6949-4dd6-939f-32391ebf3806 5933a27f4e504e23a5d084501196c0fb 329d4c9562c84ec5a42ca68894cbf27f - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec  6 02:47:14 np0005548731 nova_compute[232433]: 2025-12-06 07:47:14.267 232437 DEBUG nova.virt.hardware [None req-d18a8586-6949-4dd6-939f-32391ebf3806 5933a27f4e504e23a5d084501196c0fb 329d4c9562c84ec5a42ca68894cbf27f - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec  6 02:47:14 np0005548731 nova_compute[232433]: 2025-12-06 07:47:14.267 232437 DEBUG nova.virt.hardware [None req-d18a8586-6949-4dd6-939f-32391ebf3806 5933a27f4e504e23a5d084501196c0fb 329d4c9562c84ec5a42ca68894cbf27f - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec  6 02:47:14 np0005548731 nova_compute[232433]: 2025-12-06 07:47:14.267 232437 DEBUG nova.virt.hardware [None req-d18a8586-6949-4dd6-939f-32391ebf3806 5933a27f4e504e23a5d084501196c0fb 329d4c9562c84ec5a42ca68894cbf27f - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec  6 02:47:14 np0005548731 nova_compute[232433]: 2025-12-06 07:47:14.267 232437 DEBUG nova.virt.hardware [None req-d18a8586-6949-4dd6-939f-32391ebf3806 5933a27f4e504e23a5d084501196c0fb 329d4c9562c84ec5a42ca68894cbf27f - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec  6 02:47:14 np0005548731 nova_compute[232433]: 2025-12-06 07:47:14.268 232437 DEBUG nova.virt.hardware [None req-d18a8586-6949-4dd6-939f-32391ebf3806 5933a27f4e504e23a5d084501196c0fb 329d4c9562c84ec5a42ca68894cbf27f - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec  6 02:47:14 np0005548731 nova_compute[232433]: 2025-12-06 07:47:14.268 232437 DEBUG nova.virt.hardware [None req-d18a8586-6949-4dd6-939f-32391ebf3806 5933a27f4e504e23a5d084501196c0fb 329d4c9562c84ec5a42ca68894cbf27f - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec  6 02:47:14 np0005548731 nova_compute[232433]: 2025-12-06 07:47:14.268 232437 DEBUG nova.virt.hardware [None req-d18a8586-6949-4dd6-939f-32391ebf3806 5933a27f4e504e23a5d084501196c0fb 329d4c9562c84ec5a42ca68894cbf27f - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec  6 02:47:14 np0005548731 nova_compute[232433]: 2025-12-06 07:47:14.268 232437 DEBUG nova.virt.hardware [None req-d18a8586-6949-4dd6-939f-32391ebf3806 5933a27f4e504e23a5d084501196c0fb 329d4c9562c84ec5a42ca68894cbf27f - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec  6 02:47:14 np0005548731 nova_compute[232433]: 2025-12-06 07:47:14.271 232437 DEBUG oslo_concurrency.processutils [None req-d18a8586-6949-4dd6-939f-32391ebf3806 5933a27f4e504e23a5d084501196c0fb 329d4c9562c84ec5a42ca68894cbf27f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:47:14 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:47:14 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:47:14 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:47:14.609 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:47:14 np0005548731 podman[304881]: 2025-12-06 07:47:14.923380979 +0000 UTC m=+0.078115076 container health_status 26c2ce9bdb83c6db5ccad7f5b2a7cb4e9354ab0235ad2f942ea42c8feda2142b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Dec  6 02:47:14 np0005548731 nova_compute[232433]: 2025-12-06 07:47:14.924 232437 DEBUG nova.compute.manager [req-a4430ba1-71f9-47e7-b73d-f769a232945e req-15208edb-ae44-498d-bc0d-13f0243f433f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 2b4b3181-da12-4705-9215-7ee5b869102b] Received event network-vif-plugged-879edf90-0812-4adf-b171-d993c6cfb26c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:47:14 np0005548731 nova_compute[232433]: 2025-12-06 07:47:14.926 232437 DEBUG oslo_concurrency.lockutils [req-a4430ba1-71f9-47e7-b73d-f769a232945e req-15208edb-ae44-498d-bc0d-13f0243f433f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "2b4b3181-da12-4705-9215-7ee5b869102b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:47:14 np0005548731 nova_compute[232433]: 2025-12-06 07:47:14.926 232437 DEBUG oslo_concurrency.lockutils [req-a4430ba1-71f9-47e7-b73d-f769a232945e req-15208edb-ae44-498d-bc0d-13f0243f433f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "2b4b3181-da12-4705-9215-7ee5b869102b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:47:14 np0005548731 nova_compute[232433]: 2025-12-06 07:47:14.926 232437 DEBUG oslo_concurrency.lockutils [req-a4430ba1-71f9-47e7-b73d-f769a232945e req-15208edb-ae44-498d-bc0d-13f0243f433f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "2b4b3181-da12-4705-9215-7ee5b869102b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:47:14 np0005548731 nova_compute[232433]: 2025-12-06 07:47:14.927 232437 DEBUG nova.compute.manager [req-a4430ba1-71f9-47e7-b73d-f769a232945e req-15208edb-ae44-498d-bc0d-13f0243f433f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 2b4b3181-da12-4705-9215-7ee5b869102b] No waiting events found dispatching network-vif-plugged-879edf90-0812-4adf-b171-d993c6cfb26c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 02:47:14 np0005548731 nova_compute[232433]: 2025-12-06 07:47:14.927 232437 WARNING nova.compute.manager [req-a4430ba1-71f9-47e7-b73d-f769a232945e req-15208edb-ae44-498d-bc0d-13f0243f433f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 2b4b3181-da12-4705-9215-7ee5b869102b] Received unexpected event network-vif-plugged-879edf90-0812-4adf-b171-d993c6cfb26c for instance with vm_state rescued and task_state unrescuing.#033[00m
Dec  6 02:47:14 np0005548731 nova_compute[232433]: 2025-12-06 07:47:14.927 232437 DEBUG nova.compute.manager [req-a4430ba1-71f9-47e7-b73d-f769a232945e req-15208edb-ae44-498d-bc0d-13f0243f433f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 2b4b3181-da12-4705-9215-7ee5b869102b] Received event network-vif-plugged-879edf90-0812-4adf-b171-d993c6cfb26c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:47:14 np0005548731 nova_compute[232433]: 2025-12-06 07:47:14.928 232437 DEBUG oslo_concurrency.lockutils [req-a4430ba1-71f9-47e7-b73d-f769a232945e req-15208edb-ae44-498d-bc0d-13f0243f433f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "2b4b3181-da12-4705-9215-7ee5b869102b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:47:14 np0005548731 nova_compute[232433]: 2025-12-06 07:47:14.928 232437 DEBUG oslo_concurrency.lockutils [req-a4430ba1-71f9-47e7-b73d-f769a232945e req-15208edb-ae44-498d-bc0d-13f0243f433f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "2b4b3181-da12-4705-9215-7ee5b869102b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:47:14 np0005548731 nova_compute[232433]: 2025-12-06 07:47:14.928 232437 DEBUG oslo_concurrency.lockutils [req-a4430ba1-71f9-47e7-b73d-f769a232945e req-15208edb-ae44-498d-bc0d-13f0243f433f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "2b4b3181-da12-4705-9215-7ee5b869102b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:47:14 np0005548731 nova_compute[232433]: 2025-12-06 07:47:14.931 232437 DEBUG nova.compute.manager [req-a4430ba1-71f9-47e7-b73d-f769a232945e req-15208edb-ae44-498d-bc0d-13f0243f433f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 2b4b3181-da12-4705-9215-7ee5b869102b] No waiting events found dispatching network-vif-plugged-879edf90-0812-4adf-b171-d993c6cfb26c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 02:47:14 np0005548731 nova_compute[232433]: 2025-12-06 07:47:14.931 232437 WARNING nova.compute.manager [req-a4430ba1-71f9-47e7-b73d-f769a232945e req-15208edb-ae44-498d-bc0d-13f0243f433f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 2b4b3181-da12-4705-9215-7ee5b869102b] Received unexpected event network-vif-plugged-879edf90-0812-4adf-b171-d993c6cfb26c for instance with vm_state rescued and task_state unrescuing.#033[00m
Dec  6 02:47:14 np0005548731 nova_compute[232433]: 2025-12-06 07:47:14.931 232437 DEBUG nova.compute.manager [req-a4430ba1-71f9-47e7-b73d-f769a232945e req-15208edb-ae44-498d-bc0d-13f0243f433f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 2b4b3181-da12-4705-9215-7ee5b869102b] Received event network-vif-plugged-879edf90-0812-4adf-b171-d993c6cfb26c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:47:14 np0005548731 nova_compute[232433]: 2025-12-06 07:47:14.932 232437 DEBUG oslo_concurrency.lockutils [req-a4430ba1-71f9-47e7-b73d-f769a232945e req-15208edb-ae44-498d-bc0d-13f0243f433f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "2b4b3181-da12-4705-9215-7ee5b869102b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:47:14 np0005548731 nova_compute[232433]: 2025-12-06 07:47:14.932 232437 DEBUG oslo_concurrency.lockutils [req-a4430ba1-71f9-47e7-b73d-f769a232945e req-15208edb-ae44-498d-bc0d-13f0243f433f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "2b4b3181-da12-4705-9215-7ee5b869102b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:47:14 np0005548731 nova_compute[232433]: 2025-12-06 07:47:14.933 232437 DEBUG oslo_concurrency.lockutils [req-a4430ba1-71f9-47e7-b73d-f769a232945e req-15208edb-ae44-498d-bc0d-13f0243f433f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "2b4b3181-da12-4705-9215-7ee5b869102b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:47:14 np0005548731 nova_compute[232433]: 2025-12-06 07:47:14.933 232437 DEBUG nova.compute.manager [req-a4430ba1-71f9-47e7-b73d-f769a232945e req-15208edb-ae44-498d-bc0d-13f0243f433f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 2b4b3181-da12-4705-9215-7ee5b869102b] No waiting events found dispatching network-vif-plugged-879edf90-0812-4adf-b171-d993c6cfb26c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 02:47:14 np0005548731 nova_compute[232433]: 2025-12-06 07:47:14.933 232437 WARNING nova.compute.manager [req-a4430ba1-71f9-47e7-b73d-f769a232945e req-15208edb-ae44-498d-bc0d-13f0243f433f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 2b4b3181-da12-4705-9215-7ee5b869102b] Received unexpected event network-vif-plugged-879edf90-0812-4adf-b171-d993c6cfb26c for instance with vm_state rescued and task_state unrescuing.#033[00m
Dec  6 02:47:14 np0005548731 podman[304883]: 2025-12-06 07:47:14.938568229 +0000 UTC m=+0.091518412 container health_status ae4fd08cdec31d6ae2a6b91ffff7deb6ca5a8375cb52ee4b685cc6f25796824e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec  6 02:47:14 np0005548731 podman[304882]: 2025-12-06 07:47:14.943243883 +0000 UTC m=+0.100275456 container health_status a118c3becf56cfbcae208065771ebf10a65162bb7b96389971293e534823fb78 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec  6 02:47:15 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e356 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:47:15 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:47:15 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.002000047s ======
Dec  6 02:47:15 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:47:15.567 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000047s
Dec  6 02:47:15 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  6 02:47:15 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/183954237' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  6 02:47:15 np0005548731 nova_compute[232433]: 2025-12-06 07:47:15.920 232437 DEBUG nova.network.neutron [req-cbe61cab-4bf4-4a2f-a908-8ce9c6c5618a req-eb71e660-0e83-4384-890b-7c20701088a2 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 2b4b3181-da12-4705-9215-7ee5b869102b] Updated VIF entry in instance network info cache for port 879edf90-0812-4adf-b171-d993c6cfb26c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  6 02:47:15 np0005548731 nova_compute[232433]: 2025-12-06 07:47:15.921 232437 DEBUG nova.network.neutron [req-cbe61cab-4bf4-4a2f-a908-8ce9c6c5618a req-eb71e660-0e83-4384-890b-7c20701088a2 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 2b4b3181-da12-4705-9215-7ee5b869102b] Updating instance_info_cache with network_info: [{"id": "879edf90-0812-4adf-b171-d993c6cfb26c", "address": "fa:16:3e:9c:dc:f4", "network": {"id": "6d1a17d6-5e44-40b7-832a-81cb86c02e71", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-1698704235-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.221", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f44ecb8bdc7e4692a299e29603301124", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap879edf90-08", "ovs_interfaceid": "879edf90-0812-4adf-b171-d993c6cfb26c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 02:47:16 np0005548731 nova_compute[232433]: 2025-12-06 07:47:16.219 232437 DEBUG oslo_concurrency.lockutils [req-cbe61cab-4bf4-4a2f-a908-8ce9c6c5618a req-eb71e660-0e83-4384-890b-7c20701088a2 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-2b4b3181-da12-4705-9215-7ee5b869102b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 02:47:16 np0005548731 nova_compute[232433]: 2025-12-06 07:47:16.220 232437 DEBUG nova.compute.manager [req-cbe61cab-4bf4-4a2f-a908-8ce9c6c5618a req-eb71e660-0e83-4384-890b-7c20701088a2 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: e268006c-e26b-4400-8c9a-da1925cd1a57] Received event network-changed-9182be1d-dcb1-498a-af53-8a36faebc1e2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:47:16 np0005548731 nova_compute[232433]: 2025-12-06 07:47:16.220 232437 DEBUG nova.compute.manager [req-cbe61cab-4bf4-4a2f-a908-8ce9c6c5618a req-eb71e660-0e83-4384-890b-7c20701088a2 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: e268006c-e26b-4400-8c9a-da1925cd1a57] Refreshing instance network info cache due to event network-changed-9182be1d-dcb1-498a-af53-8a36faebc1e2. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  6 02:47:16 np0005548731 nova_compute[232433]: 2025-12-06 07:47:16.220 232437 DEBUG oslo_concurrency.lockutils [req-cbe61cab-4bf4-4a2f-a908-8ce9c6c5618a req-eb71e660-0e83-4384-890b-7c20701088a2 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-e268006c-e26b-4400-8c9a-da1925cd1a57" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 02:47:16 np0005548731 nova_compute[232433]: 2025-12-06 07:47:16.220 232437 DEBUG oslo_concurrency.lockutils [req-cbe61cab-4bf4-4a2f-a908-8ce9c6c5618a req-eb71e660-0e83-4384-890b-7c20701088a2 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-e268006c-e26b-4400-8c9a-da1925cd1a57" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 02:47:16 np0005548731 nova_compute[232433]: 2025-12-06 07:47:16.220 232437 DEBUG nova.network.neutron [req-cbe61cab-4bf4-4a2f-a908-8ce9c6c5618a req-eb71e660-0e83-4384-890b-7c20701088a2 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: e268006c-e26b-4400-8c9a-da1925cd1a57] Refreshing network info cache for port 9182be1d-dcb1-498a-af53-8a36faebc1e2 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  6 02:47:16 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:47:16 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:47:16 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:47:16.611 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:47:17 np0005548731 nova_compute[232433]: 2025-12-06 07:47:17.345 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:47:17 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:47:17 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:47:17 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:47:17.572 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:47:17 np0005548731 nova_compute[232433]: 2025-12-06 07:47:17.757 232437 DEBUG nova.network.neutron [req-cbe61cab-4bf4-4a2f-a908-8ce9c6c5618a req-eb71e660-0e83-4384-890b-7c20701088a2 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: e268006c-e26b-4400-8c9a-da1925cd1a57] Updated VIF entry in instance network info cache for port 9182be1d-dcb1-498a-af53-8a36faebc1e2. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  6 02:47:17 np0005548731 nova_compute[232433]: 2025-12-06 07:47:17.757 232437 DEBUG nova.network.neutron [req-cbe61cab-4bf4-4a2f-a908-8ce9c6c5618a req-eb71e660-0e83-4384-890b-7c20701088a2 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: e268006c-e26b-4400-8c9a-da1925cd1a57] Updating instance_info_cache with network_info: [{"id": "9182be1d-dcb1-498a-af53-8a36faebc1e2", "address": "fa:16:3e:06:7a:7f", "network": {"id": "61f8d3af-3627-497f-85d1-05f726aad6e5", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-1070596354-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "329d4c9562c84ec5a42ca68894cbf27f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9182be1d-dc", "ovs_interfaceid": "9182be1d-dcb1-498a-af53-8a36faebc1e2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 02:47:17 np0005548731 nova_compute[232433]: 2025-12-06 07:47:17.781 232437 DEBUG oslo_concurrency.lockutils [req-cbe61cab-4bf4-4a2f-a908-8ce9c6c5618a req-eb71e660-0e83-4384-890b-7c20701088a2 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-e268006c-e26b-4400-8c9a-da1925cd1a57" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 02:47:18 np0005548731 nova_compute[232433]: 2025-12-06 07:47:18.282 232437 DEBUG oslo_concurrency.processutils [None req-d18a8586-6949-4dd6-939f-32391ebf3806 5933a27f4e504e23a5d084501196c0fb 329d4c9562c84ec5a42ca68894cbf27f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 4.011s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:47:18 np0005548731 nova_compute[232433]: 2025-12-06 07:47:18.457 232437 DEBUG nova.storage.rbd_utils [None req-d18a8586-6949-4dd6-939f-32391ebf3806 5933a27f4e504e23a5d084501196c0fb 329d4c9562c84ec5a42ca68894cbf27f - - default default] rbd image e268006c-e26b-4400-8c9a-da1925cd1a57_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:47:18 np0005548731 nova_compute[232433]: 2025-12-06 07:47:18.461 232437 DEBUG oslo_concurrency.processutils [None req-d18a8586-6949-4dd6-939f-32391ebf3806 5933a27f4e504e23a5d084501196c0fb 329d4c9562c84ec5a42ca68894cbf27f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:47:18 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:47:18 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:47:18 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:47:18.613 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:47:18 np0005548731 nova_compute[232433]: 2025-12-06 07:47:18.704 232437 DEBUG nova.compute.manager [None req-0dfc4f11-abc1-4c2a-8de1-7e4c3608034d e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] [instance: 2b4b3181-da12-4705-9215-7ee5b869102b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:47:18 np0005548731 nova_compute[232433]: 2025-12-06 07:47:18.733 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:47:18 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  6 02:47:18 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/4117359800' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  6 02:47:18 np0005548731 nova_compute[232433]: 2025-12-06 07:47:18.916 232437 DEBUG oslo_concurrency.processutils [None req-d18a8586-6949-4dd6-939f-32391ebf3806 5933a27f4e504e23a5d084501196c0fb 329d4c9562c84ec5a42ca68894cbf27f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.455s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:47:18 np0005548731 nova_compute[232433]: 2025-12-06 07:47:18.917 232437 DEBUG nova.virt.libvirt.vif [None req-d18a8586-6949-4dd6-939f-32391ebf3806 5933a27f4e504e23a5d084501196c0fb 329d4c9562c84ec5a42ca68894cbf27f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T07:47:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestEncryptedCinderVolumes-server-658018281',display_name='tempest-TestEncryptedCinderVolumes-server-658018281',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-testencryptedcindervolumes-server-658018281',id=161,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBzQXsxJ4IlVU9/ziKdus7ckrF5bZNSwdBfqRB5VjmxUmjLd5h8iTt4JA98i7mxL4C828L5gt+P1gg0wPunY6lsZpjBnTw5Zc0jrUn7eK9GFfGlA2dk7bgT7gIXBkA03DA==',key_name='tempest-keypair-527730895',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='329d4c9562c84ec5a42ca68894cbf27f',ramdisk_id='',reservation_id='r-20kj5uxy',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestEncryptedCinderVolumes-1437526103',owner_user_name='tempest-TestEncryptedCinderVolumes-1437526103-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:47:05Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='5933a27f4e504e23a5d084501196c0fb',uuid=e268006c-e26b-4400-8c9a-da1925cd1a57,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "9182be1d-dcb1-498a-af53-8a36faebc1e2", "address": "fa:16:3e:06:7a:7f", "network": {"id": "61f8d3af-3627-497f-85d1-05f726aad6e5", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-1070596354-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "329d4c9562c84ec5a42ca68894cbf27f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9182be1d-dc", "ovs_interfaceid": "9182be1d-dcb1-498a-af53-8a36faebc1e2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec  6 02:47:18 np0005548731 nova_compute[232433]: 2025-12-06 07:47:18.918 232437 DEBUG nova.network.os_vif_util [None req-d18a8586-6949-4dd6-939f-32391ebf3806 5933a27f4e504e23a5d084501196c0fb 329d4c9562c84ec5a42ca68894cbf27f - - default default] Converting VIF {"id": "9182be1d-dcb1-498a-af53-8a36faebc1e2", "address": "fa:16:3e:06:7a:7f", "network": {"id": "61f8d3af-3627-497f-85d1-05f726aad6e5", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-1070596354-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "329d4c9562c84ec5a42ca68894cbf27f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9182be1d-dc", "ovs_interfaceid": "9182be1d-dcb1-498a-af53-8a36faebc1e2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 02:47:18 np0005548731 nova_compute[232433]: 2025-12-06 07:47:18.918 232437 DEBUG nova.network.os_vif_util [None req-d18a8586-6949-4dd6-939f-32391ebf3806 5933a27f4e504e23a5d084501196c0fb 329d4c9562c84ec5a42ca68894cbf27f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:06:7a:7f,bridge_name='br-int',has_traffic_filtering=True,id=9182be1d-dcb1-498a-af53-8a36faebc1e2,network=Network(61f8d3af-3627-497f-85d1-05f726aad6e5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9182be1d-dc') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 02:47:18 np0005548731 nova_compute[232433]: 2025-12-06 07:47:18.919 232437 DEBUG nova.objects.instance [None req-d18a8586-6949-4dd6-939f-32391ebf3806 5933a27f4e504e23a5d084501196c0fb 329d4c9562c84ec5a42ca68894cbf27f - - default default] Lazy-loading 'pci_devices' on Instance uuid e268006c-e26b-4400-8c9a-da1925cd1a57 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:47:18 np0005548731 nova_compute[232433]: 2025-12-06 07:47:18.934 232437 DEBUG nova.virt.libvirt.driver [None req-d18a8586-6949-4dd6-939f-32391ebf3806 5933a27f4e504e23a5d084501196c0fb 329d4c9562c84ec5a42ca68894cbf27f - - default default] [instance: e268006c-e26b-4400-8c9a-da1925cd1a57] End _get_guest_xml xml=<domain type="kvm">
Dec  6 02:47:18 np0005548731 nova_compute[232433]:  <uuid>e268006c-e26b-4400-8c9a-da1925cd1a57</uuid>
Dec  6 02:47:18 np0005548731 nova_compute[232433]:  <name>instance-000000a1</name>
Dec  6 02:47:18 np0005548731 nova_compute[232433]:  <memory>131072</memory>
Dec  6 02:47:18 np0005548731 nova_compute[232433]:  <vcpu>1</vcpu>
Dec  6 02:47:18 np0005548731 nova_compute[232433]:  <metadata>
Dec  6 02:47:18 np0005548731 nova_compute[232433]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec  6 02:47:18 np0005548731 nova_compute[232433]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec  6 02:47:18 np0005548731 nova_compute[232433]:      <nova:name>tempest-TestEncryptedCinderVolumes-server-658018281</nova:name>
Dec  6 02:47:18 np0005548731 nova_compute[232433]:      <nova:creationTime>2025-12-06 07:47:14</nova:creationTime>
Dec  6 02:47:18 np0005548731 nova_compute[232433]:      <nova:flavor name="m1.nano">
Dec  6 02:47:18 np0005548731 nova_compute[232433]:        <nova:memory>128</nova:memory>
Dec  6 02:47:18 np0005548731 nova_compute[232433]:        <nova:disk>1</nova:disk>
Dec  6 02:47:18 np0005548731 nova_compute[232433]:        <nova:swap>0</nova:swap>
Dec  6 02:47:18 np0005548731 nova_compute[232433]:        <nova:ephemeral>0</nova:ephemeral>
Dec  6 02:47:18 np0005548731 nova_compute[232433]:        <nova:vcpus>1</nova:vcpus>
Dec  6 02:47:18 np0005548731 nova_compute[232433]:      </nova:flavor>
Dec  6 02:47:18 np0005548731 nova_compute[232433]:      <nova:owner>
Dec  6 02:47:18 np0005548731 nova_compute[232433]:        <nova:user uuid="5933a27f4e504e23a5d084501196c0fb">tempest-TestEncryptedCinderVolumes-1437526103-project-member</nova:user>
Dec  6 02:47:18 np0005548731 nova_compute[232433]:        <nova:project uuid="329d4c9562c84ec5a42ca68894cbf27f">tempest-TestEncryptedCinderVolumes-1437526103</nova:project>
Dec  6 02:47:18 np0005548731 nova_compute[232433]:      </nova:owner>
Dec  6 02:47:18 np0005548731 nova_compute[232433]:      <nova:root type="image" uuid="6efab05d-c7cf-4770-a5c3-c806a2739063"/>
Dec  6 02:47:18 np0005548731 nova_compute[232433]:      <nova:ports>
Dec  6 02:47:18 np0005548731 nova_compute[232433]:        <nova:port uuid="9182be1d-dcb1-498a-af53-8a36faebc1e2">
Dec  6 02:47:18 np0005548731 nova_compute[232433]:          <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Dec  6 02:47:18 np0005548731 nova_compute[232433]:        </nova:port>
Dec  6 02:47:18 np0005548731 nova_compute[232433]:      </nova:ports>
Dec  6 02:47:18 np0005548731 nova_compute[232433]:    </nova:instance>
Dec  6 02:47:18 np0005548731 nova_compute[232433]:  </metadata>
Dec  6 02:47:18 np0005548731 nova_compute[232433]:  <sysinfo type="smbios">
Dec  6 02:47:18 np0005548731 nova_compute[232433]:    <system>
Dec  6 02:47:18 np0005548731 nova_compute[232433]:      <entry name="manufacturer">RDO</entry>
Dec  6 02:47:18 np0005548731 nova_compute[232433]:      <entry name="product">OpenStack Compute</entry>
Dec  6 02:47:18 np0005548731 nova_compute[232433]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec  6 02:47:18 np0005548731 nova_compute[232433]:      <entry name="serial">e268006c-e26b-4400-8c9a-da1925cd1a57</entry>
Dec  6 02:47:18 np0005548731 nova_compute[232433]:      <entry name="uuid">e268006c-e26b-4400-8c9a-da1925cd1a57</entry>
Dec  6 02:47:18 np0005548731 nova_compute[232433]:      <entry name="family">Virtual Machine</entry>
Dec  6 02:47:18 np0005548731 nova_compute[232433]:    </system>
Dec  6 02:47:18 np0005548731 nova_compute[232433]:  </sysinfo>
Dec  6 02:47:18 np0005548731 nova_compute[232433]:  <os>
Dec  6 02:47:18 np0005548731 nova_compute[232433]:    <type arch="x86_64" machine="q35">hvm</type>
Dec  6 02:47:18 np0005548731 nova_compute[232433]:    <boot dev="hd"/>
Dec  6 02:47:18 np0005548731 nova_compute[232433]:    <smbios mode="sysinfo"/>
Dec  6 02:47:18 np0005548731 nova_compute[232433]:  </os>
Dec  6 02:47:18 np0005548731 nova_compute[232433]:  <features>
Dec  6 02:47:18 np0005548731 nova_compute[232433]:    <acpi/>
Dec  6 02:47:18 np0005548731 nova_compute[232433]:    <apic/>
Dec  6 02:47:18 np0005548731 nova_compute[232433]:    <vmcoreinfo/>
Dec  6 02:47:18 np0005548731 nova_compute[232433]:  </features>
Dec  6 02:47:18 np0005548731 nova_compute[232433]:  <clock offset="utc">
Dec  6 02:47:18 np0005548731 nova_compute[232433]:    <timer name="pit" tickpolicy="delay"/>
Dec  6 02:47:18 np0005548731 nova_compute[232433]:    <timer name="rtc" tickpolicy="catchup"/>
Dec  6 02:47:18 np0005548731 nova_compute[232433]:    <timer name="hpet" present="no"/>
Dec  6 02:47:18 np0005548731 nova_compute[232433]:  </clock>
Dec  6 02:47:18 np0005548731 nova_compute[232433]:  <cpu mode="custom" match="exact">
Dec  6 02:47:18 np0005548731 nova_compute[232433]:    <model>Nehalem</model>
Dec  6 02:47:18 np0005548731 nova_compute[232433]:    <topology sockets="1" cores="1" threads="1"/>
Dec  6 02:47:18 np0005548731 nova_compute[232433]:  </cpu>
Dec  6 02:47:18 np0005548731 nova_compute[232433]:  <devices>
Dec  6 02:47:18 np0005548731 nova_compute[232433]:    <disk type="network" device="disk">
Dec  6 02:47:18 np0005548731 nova_compute[232433]:      <driver type="raw" cache="none"/>
Dec  6 02:47:18 np0005548731 nova_compute[232433]:      <source protocol="rbd" name="vms/e268006c-e26b-4400-8c9a-da1925cd1a57_disk">
Dec  6 02:47:18 np0005548731 nova_compute[232433]:        <host name="192.168.122.100" port="6789"/>
Dec  6 02:47:18 np0005548731 nova_compute[232433]:        <host name="192.168.122.102" port="6789"/>
Dec  6 02:47:18 np0005548731 nova_compute[232433]:        <host name="192.168.122.101" port="6789"/>
Dec  6 02:47:18 np0005548731 nova_compute[232433]:      </source>
Dec  6 02:47:18 np0005548731 nova_compute[232433]:      <auth username="openstack">
Dec  6 02:47:18 np0005548731 nova_compute[232433]:        <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec  6 02:47:18 np0005548731 nova_compute[232433]:      </auth>
Dec  6 02:47:18 np0005548731 nova_compute[232433]:      <target dev="vda" bus="virtio"/>
Dec  6 02:47:18 np0005548731 nova_compute[232433]:    </disk>
Dec  6 02:47:18 np0005548731 nova_compute[232433]:    <disk type="network" device="cdrom">
Dec  6 02:47:18 np0005548731 nova_compute[232433]:      <driver type="raw" cache="none"/>
Dec  6 02:47:18 np0005548731 nova_compute[232433]:      <source protocol="rbd" name="vms/e268006c-e26b-4400-8c9a-da1925cd1a57_disk.config">
Dec  6 02:47:18 np0005548731 nova_compute[232433]:        <host name="192.168.122.100" port="6789"/>
Dec  6 02:47:18 np0005548731 nova_compute[232433]:        <host name="192.168.122.102" port="6789"/>
Dec  6 02:47:18 np0005548731 nova_compute[232433]:        <host name="192.168.122.101" port="6789"/>
Dec  6 02:47:18 np0005548731 nova_compute[232433]:      </source>
Dec  6 02:47:18 np0005548731 nova_compute[232433]:      <auth username="openstack">
Dec  6 02:47:18 np0005548731 nova_compute[232433]:        <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec  6 02:47:18 np0005548731 nova_compute[232433]:      </auth>
Dec  6 02:47:18 np0005548731 nova_compute[232433]:      <target dev="sda" bus="sata"/>
Dec  6 02:47:18 np0005548731 nova_compute[232433]:    </disk>
Dec  6 02:47:18 np0005548731 nova_compute[232433]:    <interface type="ethernet">
Dec  6 02:47:18 np0005548731 nova_compute[232433]:      <mac address="fa:16:3e:06:7a:7f"/>
Dec  6 02:47:18 np0005548731 nova_compute[232433]:      <model type="virtio"/>
Dec  6 02:47:18 np0005548731 nova_compute[232433]:      <driver name="vhost" rx_queue_size="512"/>
Dec  6 02:47:18 np0005548731 nova_compute[232433]:      <mtu size="1442"/>
Dec  6 02:47:18 np0005548731 nova_compute[232433]:      <target dev="tap9182be1d-dc"/>
Dec  6 02:47:18 np0005548731 nova_compute[232433]:    </interface>
Dec  6 02:47:18 np0005548731 nova_compute[232433]:    <serial type="pty">
Dec  6 02:47:18 np0005548731 nova_compute[232433]:      <log file="/var/lib/nova/instances/e268006c-e26b-4400-8c9a-da1925cd1a57/console.log" append="off"/>
Dec  6 02:47:18 np0005548731 nova_compute[232433]:    </serial>
Dec  6 02:47:18 np0005548731 nova_compute[232433]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec  6 02:47:18 np0005548731 nova_compute[232433]:    <video>
Dec  6 02:47:18 np0005548731 nova_compute[232433]:      <model type="virtio"/>
Dec  6 02:47:18 np0005548731 nova_compute[232433]:    </video>
Dec  6 02:47:18 np0005548731 nova_compute[232433]:    <input type="tablet" bus="usb"/>
Dec  6 02:47:18 np0005548731 nova_compute[232433]:    <rng model="virtio">
Dec  6 02:47:18 np0005548731 nova_compute[232433]:      <backend model="random">/dev/urandom</backend>
Dec  6 02:47:18 np0005548731 nova_compute[232433]:    </rng>
Dec  6 02:47:18 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root"/>
Dec  6 02:47:18 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:47:18 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:47:18 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:47:18 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:47:18 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:47:18 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:47:18 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:47:18 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:47:18 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:47:18 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:47:18 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:47:18 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:47:18 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:47:18 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:47:18 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:47:18 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:47:18 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:47:18 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:47:18 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:47:18 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:47:18 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:47:18 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:47:18 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:47:18 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:47:18 np0005548731 nova_compute[232433]:    <controller type="usb" index="0"/>
Dec  6 02:47:18 np0005548731 nova_compute[232433]:    <memballoon model="virtio">
Dec  6 02:47:18 np0005548731 nova_compute[232433]:      <stats period="10"/>
Dec  6 02:47:18 np0005548731 nova_compute[232433]:    </memballoon>
Dec  6 02:47:18 np0005548731 nova_compute[232433]:  </devices>
Dec  6 02:47:18 np0005548731 nova_compute[232433]: </domain>
Dec  6 02:47:18 np0005548731 nova_compute[232433]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec  6 02:47:18 np0005548731 nova_compute[232433]: 2025-12-06 07:47:18.936 232437 DEBUG nova.compute.manager [None req-d18a8586-6949-4dd6-939f-32391ebf3806 5933a27f4e504e23a5d084501196c0fb 329d4c9562c84ec5a42ca68894cbf27f - - default default] [instance: e268006c-e26b-4400-8c9a-da1925cd1a57] Preparing to wait for external event network-vif-plugged-9182be1d-dcb1-498a-af53-8a36faebc1e2 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec  6 02:47:18 np0005548731 nova_compute[232433]: 2025-12-06 07:47:18.936 232437 DEBUG oslo_concurrency.lockutils [None req-d18a8586-6949-4dd6-939f-32391ebf3806 5933a27f4e504e23a5d084501196c0fb 329d4c9562c84ec5a42ca68894cbf27f - - default default] Acquiring lock "e268006c-e26b-4400-8c9a-da1925cd1a57-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:47:18 np0005548731 nova_compute[232433]: 2025-12-06 07:47:18.936 232437 DEBUG oslo_concurrency.lockutils [None req-d18a8586-6949-4dd6-939f-32391ebf3806 5933a27f4e504e23a5d084501196c0fb 329d4c9562c84ec5a42ca68894cbf27f - - default default] Lock "e268006c-e26b-4400-8c9a-da1925cd1a57-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:47:18 np0005548731 nova_compute[232433]: 2025-12-06 07:47:18.936 232437 DEBUG oslo_concurrency.lockutils [None req-d18a8586-6949-4dd6-939f-32391ebf3806 5933a27f4e504e23a5d084501196c0fb 329d4c9562c84ec5a42ca68894cbf27f - - default default] Lock "e268006c-e26b-4400-8c9a-da1925cd1a57-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:47:18 np0005548731 nova_compute[232433]: 2025-12-06 07:47:18.937 232437 DEBUG nova.virt.libvirt.vif [None req-d18a8586-6949-4dd6-939f-32391ebf3806 5933a27f4e504e23a5d084501196c0fb 329d4c9562c84ec5a42ca68894cbf27f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T07:47:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestEncryptedCinderVolumes-server-658018281',display_name='tempest-TestEncryptedCinderVolumes-server-658018281',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-testencryptedcindervolumes-server-658018281',id=161,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBzQXsxJ4IlVU9/ziKdus7ckrF5bZNSwdBfqRB5VjmxUmjLd5h8iTt4JA98i7mxL4C828L5gt+P1gg0wPunY6lsZpjBnTw5Zc0jrUn7eK9GFfGlA2dk7bgT7gIXBkA03DA==',key_name='tempest-keypair-527730895',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='329d4c9562c84ec5a42ca68894cbf27f',ramdisk_id='',reservation_id='r-20kj5uxy',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestEncryptedCinderVolumes-1437526103',owner_user_name='tempest-TestEncryptedCinderVolumes-1437526103-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:47:05Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='5933a27f4e504e23a5d084501196c0fb',uuid=e268006c-e26b-4400-8c9a-da1925cd1a57,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "9182be1d-dcb1-498a-af53-8a36faebc1e2", "address": "fa:16:3e:06:7a:7f", "network": {"id": "61f8d3af-3627-497f-85d1-05f726aad6e5", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-1070596354-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "329d4c9562c84ec5a42ca68894cbf27f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9182be1d-dc", "ovs_interfaceid": "9182be1d-dcb1-498a-af53-8a36faebc1e2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec  6 02:47:18 np0005548731 nova_compute[232433]: 2025-12-06 07:47:18.937 232437 DEBUG nova.network.os_vif_util [None req-d18a8586-6949-4dd6-939f-32391ebf3806 5933a27f4e504e23a5d084501196c0fb 329d4c9562c84ec5a42ca68894cbf27f - - default default] Converting VIF {"id": "9182be1d-dcb1-498a-af53-8a36faebc1e2", "address": "fa:16:3e:06:7a:7f", "network": {"id": "61f8d3af-3627-497f-85d1-05f726aad6e5", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-1070596354-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "329d4c9562c84ec5a42ca68894cbf27f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9182be1d-dc", "ovs_interfaceid": "9182be1d-dcb1-498a-af53-8a36faebc1e2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 02:47:18 np0005548731 nova_compute[232433]: 2025-12-06 07:47:18.938 232437 DEBUG nova.network.os_vif_util [None req-d18a8586-6949-4dd6-939f-32391ebf3806 5933a27f4e504e23a5d084501196c0fb 329d4c9562c84ec5a42ca68894cbf27f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:06:7a:7f,bridge_name='br-int',has_traffic_filtering=True,id=9182be1d-dcb1-498a-af53-8a36faebc1e2,network=Network(61f8d3af-3627-497f-85d1-05f726aad6e5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9182be1d-dc') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 02:47:18 np0005548731 nova_compute[232433]: 2025-12-06 07:47:18.938 232437 DEBUG os_vif [None req-d18a8586-6949-4dd6-939f-32391ebf3806 5933a27f4e504e23a5d084501196c0fb 329d4c9562c84ec5a42ca68894cbf27f - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:06:7a:7f,bridge_name='br-int',has_traffic_filtering=True,id=9182be1d-dcb1-498a-af53-8a36faebc1e2,network=Network(61f8d3af-3627-497f-85d1-05f726aad6e5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9182be1d-dc') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec  6 02:47:18 np0005548731 nova_compute[232433]: 2025-12-06 07:47:18.939 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:47:18 np0005548731 nova_compute[232433]: 2025-12-06 07:47:18.939 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:47:18 np0005548731 nova_compute[232433]: 2025-12-06 07:47:18.970 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  6 02:47:18 np0005548731 nova_compute[232433]: 2025-12-06 07:47:18.976 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:47:18 np0005548731 nova_compute[232433]: 2025-12-06 07:47:18.977 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9182be1d-dc, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:47:18 np0005548731 nova_compute[232433]: 2025-12-06 07:47:18.977 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap9182be1d-dc, col_values=(('external_ids', {'iface-id': '9182be1d-dcb1-498a-af53-8a36faebc1e2', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:06:7a:7f', 'vm-uuid': 'e268006c-e26b-4400-8c9a-da1925cd1a57'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:47:18 np0005548731 NetworkManager[49182]: <info>  [1765007238.9802] manager: (tap9182be1d-dc): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/354)
Dec  6 02:47:18 np0005548731 nova_compute[232433]: 2025-12-06 07:47:18.980 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:47:18 np0005548731 nova_compute[232433]: 2025-12-06 07:47:18.982 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  6 02:47:18 np0005548731 nova_compute[232433]: 2025-12-06 07:47:18.986 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:47:18 np0005548731 nova_compute[232433]: 2025-12-06 07:47:18.987 232437 INFO os_vif [None req-d18a8586-6949-4dd6-939f-32391ebf3806 5933a27f4e504e23a5d084501196c0fb 329d4c9562c84ec5a42ca68894cbf27f - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:06:7a:7f,bridge_name='br-int',has_traffic_filtering=True,id=9182be1d-dcb1-498a-af53-8a36faebc1e2,network=Network(61f8d3af-3627-497f-85d1-05f726aad6e5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9182be1d-dc')#033[00m
Dec  6 02:47:19 np0005548731 nova_compute[232433]: 2025-12-06 07:47:19.041 232437 DEBUG nova.virt.libvirt.driver [None req-d18a8586-6949-4dd6-939f-32391ebf3806 5933a27f4e504e23a5d084501196c0fb 329d4c9562c84ec5a42ca68894cbf27f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  6 02:47:19 np0005548731 nova_compute[232433]: 2025-12-06 07:47:19.042 232437 DEBUG nova.virt.libvirt.driver [None req-d18a8586-6949-4dd6-939f-32391ebf3806 5933a27f4e504e23a5d084501196c0fb 329d4c9562c84ec5a42ca68894cbf27f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  6 02:47:19 np0005548731 nova_compute[232433]: 2025-12-06 07:47:19.042 232437 DEBUG nova.virt.libvirt.driver [None req-d18a8586-6949-4dd6-939f-32391ebf3806 5933a27f4e504e23a5d084501196c0fb 329d4c9562c84ec5a42ca68894cbf27f - - default default] No VIF found with MAC fa:16:3e:06:7a:7f, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec  6 02:47:19 np0005548731 nova_compute[232433]: 2025-12-06 07:47:19.042 232437 INFO nova.virt.libvirt.driver [None req-d18a8586-6949-4dd6-939f-32391ebf3806 5933a27f4e504e23a5d084501196c0fb 329d4c9562c84ec5a42ca68894cbf27f - - default default] [instance: e268006c-e26b-4400-8c9a-da1925cd1a57] Using config drive#033[00m
Dec  6 02:47:19 np0005548731 nova_compute[232433]: 2025-12-06 07:47:19.065 232437 DEBUG nova.storage.rbd_utils [None req-d18a8586-6949-4dd6-939f-32391ebf3806 5933a27f4e504e23a5d084501196c0fb 329d4c9562c84ec5a42ca68894cbf27f - - default default] rbd image e268006c-e26b-4400-8c9a-da1925cd1a57_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:47:19 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:47:19.366 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=61, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:ec:b3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '32:72:e7:89:e0:7d'}, ipsec=False) old=SB_Global(nb_cfg=60) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 02:47:19 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:47:19.367 143965 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec  6 02:47:19 np0005548731 nova_compute[232433]: 2025-12-06 07:47:19.413 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:47:19 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:47:19 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:47:19 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:47:19.575 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:47:19 np0005548731 nova_compute[232433]: 2025-12-06 07:47:19.629 232437 INFO nova.virt.libvirt.driver [None req-d18a8586-6949-4dd6-939f-32391ebf3806 5933a27f4e504e23a5d084501196c0fb 329d4c9562c84ec5a42ca68894cbf27f - - default default] [instance: e268006c-e26b-4400-8c9a-da1925cd1a57] Creating config drive at /var/lib/nova/instances/e268006c-e26b-4400-8c9a-da1925cd1a57/disk.config#033[00m
Dec  6 02:47:19 np0005548731 nova_compute[232433]: 2025-12-06 07:47:19.642 232437 DEBUG oslo_concurrency.processutils [None req-d18a8586-6949-4dd6-939f-32391ebf3806 5933a27f4e504e23a5d084501196c0fb 329d4c9562c84ec5a42ca68894cbf27f - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/e268006c-e26b-4400-8c9a-da1925cd1a57/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpn2t723g6 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:47:19 np0005548731 nova_compute[232433]: 2025-12-06 07:47:19.781 232437 DEBUG oslo_concurrency.processutils [None req-d18a8586-6949-4dd6-939f-32391ebf3806 5933a27f4e504e23a5d084501196c0fb 329d4c9562c84ec5a42ca68894cbf27f - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/e268006c-e26b-4400-8c9a-da1925cd1a57/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpn2t723g6" returned: 0 in 0.139s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:47:20 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:47:20 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:47:20 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:47:20.615 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:47:21 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e356 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:47:21 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:47:21 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:47:21 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:47:21.577 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:47:21 np0005548731 nova_compute[232433]: 2025-12-06 07:47:21.804 232437 DEBUG nova.storage.rbd_utils [None req-d18a8586-6949-4dd6-939f-32391ebf3806 5933a27f4e504e23a5d084501196c0fb 329d4c9562c84ec5a42ca68894cbf27f - - default default] rbd image e268006c-e26b-4400-8c9a-da1925cd1a57_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:47:21 np0005548731 nova_compute[232433]: 2025-12-06 07:47:21.810 232437 DEBUG oslo_concurrency.processutils [None req-d18a8586-6949-4dd6-939f-32391ebf3806 5933a27f4e504e23a5d084501196c0fb 329d4c9562c84ec5a42ca68894cbf27f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/e268006c-e26b-4400-8c9a-da1925cd1a57/disk.config e268006c-e26b-4400-8c9a-da1925cd1a57_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:47:21 np0005548731 nova_compute[232433]: 2025-12-06 07:47:21.841 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:47:21 np0005548731 nova_compute[232433]: 2025-12-06 07:47:21.845 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:47:21 np0005548731 nova_compute[232433]: 2025-12-06 07:47:21.846 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  6 02:47:22 np0005548731 nova_compute[232433]: 2025-12-06 07:47:22.016 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Acquiring lock "refresh_cache-2b4b3181-da12-4705-9215-7ee5b869102b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 02:47:22 np0005548731 nova_compute[232433]: 2025-12-06 07:47:22.017 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Acquired lock "refresh_cache-2b4b3181-da12-4705-9215-7ee5b869102b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 02:47:22 np0005548731 nova_compute[232433]: 2025-12-06 07:47:22.017 232437 DEBUG nova.network.neutron [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] [instance: 2b4b3181-da12-4705-9215-7ee5b869102b] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Dec  6 02:47:22 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e357 e357: 3 total, 3 up, 3 in
Dec  6 02:47:22 np0005548731 nova_compute[232433]: 2025-12-06 07:47:22.437 232437 DEBUG oslo_concurrency.processutils [None req-d18a8586-6949-4dd6-939f-32391ebf3806 5933a27f4e504e23a5d084501196c0fb 329d4c9562c84ec5a42ca68894cbf27f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/e268006c-e26b-4400-8c9a-da1925cd1a57/disk.config e268006c-e26b-4400-8c9a-da1925cd1a57_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.627s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:47:22 np0005548731 nova_compute[232433]: 2025-12-06 07:47:22.437 232437 INFO nova.virt.libvirt.driver [None req-d18a8586-6949-4dd6-939f-32391ebf3806 5933a27f4e504e23a5d084501196c0fb 329d4c9562c84ec5a42ca68894cbf27f - - default default] [instance: e268006c-e26b-4400-8c9a-da1925cd1a57] Deleting local config drive /var/lib/nova/instances/e268006c-e26b-4400-8c9a-da1925cd1a57/disk.config because it was imported into RBD.#033[00m
Dec  6 02:47:22 np0005548731 kernel: tap9182be1d-dc: entered promiscuous mode
Dec  6 02:47:22 np0005548731 NetworkManager[49182]: <info>  [1765007242.4856] manager: (tap9182be1d-dc): new Tun device (/org/freedesktop/NetworkManager/Devices/355)
Dec  6 02:47:22 np0005548731 nova_compute[232433]: 2025-12-06 07:47:22.491 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:47:22 np0005548731 ovn_controller[133927]: 2025-12-06T07:47:22Z|00766|binding|INFO|Claiming lport 9182be1d-dcb1-498a-af53-8a36faebc1e2 for this chassis.
Dec  6 02:47:22 np0005548731 ovn_controller[133927]: 2025-12-06T07:47:22Z|00767|binding|INFO|9182be1d-dcb1-498a-af53-8a36faebc1e2: Claiming fa:16:3e:06:7a:7f 10.100.0.9
Dec  6 02:47:22 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:47:22.503 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:06:7a:7f 10.100.0.9'], port_security=['fa:16:3e:06:7a:7f 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'e268006c-e26b-4400-8c9a-da1925cd1a57', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-61f8d3af-3627-497f-85d1-05f726aad6e5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '329d4c9562c84ec5a42ca68894cbf27f', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'b225c3bc-4906-441b-9e23-854a574bcecc', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=92477244-099e-4077-9b92-5060681e1a10, chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], logical_port=9182be1d-dcb1-498a-af53-8a36faebc1e2) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 02:47:22 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:47:22.504 143965 INFO neutron.agent.ovn.metadata.agent [-] Port 9182be1d-dcb1-498a-af53-8a36faebc1e2 in datapath 61f8d3af-3627-497f-85d1-05f726aad6e5 bound to our chassis#033[00m
Dec  6 02:47:22 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:47:22.506 143965 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 61f8d3af-3627-497f-85d1-05f726aad6e5#033[00m
Dec  6 02:47:22 np0005548731 ovn_controller[133927]: 2025-12-06T07:47:22Z|00768|binding|INFO|Setting lport 9182be1d-dcb1-498a-af53-8a36faebc1e2 ovn-installed in OVS
Dec  6 02:47:22 np0005548731 ovn_controller[133927]: 2025-12-06T07:47:22Z|00769|binding|INFO|Setting lport 9182be1d-dcb1-498a-af53-8a36faebc1e2 up in Southbound
Dec  6 02:47:22 np0005548731 nova_compute[232433]: 2025-12-06 07:47:22.514 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:47:22 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:47:22.520 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[eba6bf61-5d5c-4b2e-b164-0c605c65a686]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:47:22 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:47:22.521 143965 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap61f8d3af-31 in ovnmeta-61f8d3af-3627-497f-85d1-05f726aad6e5 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Dec  6 02:47:22 np0005548731 nova_compute[232433]: 2025-12-06 07:47:22.523 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:47:22 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:47:22.526 236668 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap61f8d3af-30 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Dec  6 02:47:22 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:47:22.526 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[55ffd32a-5572-4736-b966-0f6ba85ba8fa]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:47:22 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:47:22.528 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[b06ab6e5-77c2-4aaf-92ef-f44d73d1694e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:47:22 np0005548731 systemd-udevd[305121]: Network interface NamePolicy= disabled on kernel command line.
Dec  6 02:47:22 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:47:22.539 144079 DEBUG oslo.privsep.daemon [-] privsep: reply[36f20b80-03ee-4455-98cc-480fe8ffe010]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:47:22 np0005548731 systemd-machined[195355]: New machine qemu-79-instance-000000a1.
Dec  6 02:47:22 np0005548731 NetworkManager[49182]: <info>  [1765007242.5530] device (tap9182be1d-dc): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec  6 02:47:22 np0005548731 NetworkManager[49182]: <info>  [1765007242.5539] device (tap9182be1d-dc): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec  6 02:47:22 np0005548731 systemd[1]: Started Virtual Machine qemu-79-instance-000000a1.
Dec  6 02:47:22 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:47:22.557 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[1835ad68-8520-4ee3-b631-a02ee059bda3]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:47:22 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:47:22.584 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[cc9f2416-d48c-49ae-b339-901994c089b7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:47:22 np0005548731 NetworkManager[49182]: <info>  [1765007242.5902] manager: (tap61f8d3af-30): new Veth device (/org/freedesktop/NetworkManager/Devices/356)
Dec  6 02:47:22 np0005548731 systemd-udevd[305126]: Network interface NamePolicy= disabled on kernel command line.
Dec  6 02:47:22 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:47:22.592 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[1fc86a04-d458-4cc5-93f1-002c28a06d5f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:47:22 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:47:22 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:47:22 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:47:22.617 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:47:22 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:47:22.619 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[bf1aa267-c17a-44f7-8300-55c21d8566df]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:47:22 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:47:22.622 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[0a294412-909a-4f91-909c-d7f2c097786b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:47:22 np0005548731 NetworkManager[49182]: <info>  [1765007242.6433] device (tap61f8d3af-30): carrier: link connected
Dec  6 02:47:22 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:47:22.649 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[60e84f04-62ca-4951-9912-cbf3d04ff081]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:47:22 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:47:22.666 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[cdd29c12-8b38-4ad1-8855-a011b3ec351e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap61f8d3af-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:91:b2:a1'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 235], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 750465, 'reachable_time': 21133, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 305154, 'error': None, 'target': 'ovnmeta-61f8d3af-3627-497f-85d1-05f726aad6e5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:47:22 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:47:22.682 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[acca4290-f43b-4405-98e6-83d85b086c30]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe91:b2a1'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 750465, 'tstamp': 750465}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 305155, 'error': None, 'target': 'ovnmeta-61f8d3af-3627-497f-85d1-05f726aad6e5', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:47:22 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:47:22.696 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[c5f2ebff-c1f9-4ad2-9f9c-92af17f0b400]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap61f8d3af-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:91:b2:a1'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 235], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 750465, 'reachable_time': 21133, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 305156, 'error': None, 'target': 'ovnmeta-61f8d3af-3627-497f-85d1-05f726aad6e5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:47:22 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:47:22.726 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[f488e509-3f8c-4e44-b250-0fdb5d28d0bd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:47:22 np0005548731 nova_compute[232433]: 2025-12-06 07:47:22.763 232437 DEBUG nova.compute.manager [req-00a77c6c-533a-4ba5-9ab0-fbf2b8478f70 req-68fabf8c-ae00-4969-80b6-83045bd66e7e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: e268006c-e26b-4400-8c9a-da1925cd1a57] Received event network-vif-plugged-9182be1d-dcb1-498a-af53-8a36faebc1e2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:47:22 np0005548731 nova_compute[232433]: 2025-12-06 07:47:22.764 232437 DEBUG oslo_concurrency.lockutils [req-00a77c6c-533a-4ba5-9ab0-fbf2b8478f70 req-68fabf8c-ae00-4969-80b6-83045bd66e7e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "e268006c-e26b-4400-8c9a-da1925cd1a57-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:47:22 np0005548731 nova_compute[232433]: 2025-12-06 07:47:22.764 232437 DEBUG oslo_concurrency.lockutils [req-00a77c6c-533a-4ba5-9ab0-fbf2b8478f70 req-68fabf8c-ae00-4969-80b6-83045bd66e7e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "e268006c-e26b-4400-8c9a-da1925cd1a57-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:47:22 np0005548731 nova_compute[232433]: 2025-12-06 07:47:22.764 232437 DEBUG oslo_concurrency.lockutils [req-00a77c6c-533a-4ba5-9ab0-fbf2b8478f70 req-68fabf8c-ae00-4969-80b6-83045bd66e7e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "e268006c-e26b-4400-8c9a-da1925cd1a57-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:47:22 np0005548731 nova_compute[232433]: 2025-12-06 07:47:22.764 232437 DEBUG nova.compute.manager [req-00a77c6c-533a-4ba5-9ab0-fbf2b8478f70 req-68fabf8c-ae00-4969-80b6-83045bd66e7e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: e268006c-e26b-4400-8c9a-da1925cd1a57] Processing event network-vif-plugged-9182be1d-dcb1-498a-af53-8a36faebc1e2 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Dec  6 02:47:22 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:47:22.775 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[a433feda-5e1d-4c8d-a4e3-cd070c388499]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:47:22 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:47:22.777 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap61f8d3af-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:47:22 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:47:22.778 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  6 02:47:22 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:47:22.778 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap61f8d3af-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:47:22 np0005548731 NetworkManager[49182]: <info>  [1765007242.7809] manager: (tap61f8d3af-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/357)
Dec  6 02:47:22 np0005548731 nova_compute[232433]: 2025-12-06 07:47:22.780 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:47:22 np0005548731 kernel: tap61f8d3af-30: entered promiscuous mode
Dec  6 02:47:22 np0005548731 nova_compute[232433]: 2025-12-06 07:47:22.783 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:47:22 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:47:22.783 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap61f8d3af-30, col_values=(('external_ids', {'iface-id': '8ee8956a-e293-43e2-a41a-6931cf826f0b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:47:22 np0005548731 ovn_controller[133927]: 2025-12-06T07:47:22Z|00770|binding|INFO|Releasing lport 8ee8956a-e293-43e2-a41a-6931cf826f0b from this chassis (sb_readonly=0)
Dec  6 02:47:22 np0005548731 nova_compute[232433]: 2025-12-06 07:47:22.785 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:47:22 np0005548731 nova_compute[232433]: 2025-12-06 07:47:22.802 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:47:22 np0005548731 nova_compute[232433]: 2025-12-06 07:47:22.803 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:47:22 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:47:22.803 143965 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/61f8d3af-3627-497f-85d1-05f726aad6e5.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/61f8d3af-3627-497f-85d1-05f726aad6e5.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Dec  6 02:47:22 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:47:22.805 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[723002af-df92-4f36-80e9-95be01f9c600]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:47:22 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:47:22.806 143965 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec  6 02:47:22 np0005548731 ovn_metadata_agent[143960]: global
Dec  6 02:47:22 np0005548731 ovn_metadata_agent[143960]:    log         /dev/log local0 debug
Dec  6 02:47:22 np0005548731 ovn_metadata_agent[143960]:    log-tag     haproxy-metadata-proxy-61f8d3af-3627-497f-85d1-05f726aad6e5
Dec  6 02:47:22 np0005548731 ovn_metadata_agent[143960]:    user        root
Dec  6 02:47:22 np0005548731 ovn_metadata_agent[143960]:    group       root
Dec  6 02:47:22 np0005548731 ovn_metadata_agent[143960]:    maxconn     1024
Dec  6 02:47:22 np0005548731 ovn_metadata_agent[143960]:    pidfile     /var/lib/neutron/external/pids/61f8d3af-3627-497f-85d1-05f726aad6e5.pid.haproxy
Dec  6 02:47:22 np0005548731 ovn_metadata_agent[143960]:    daemon
Dec  6 02:47:22 np0005548731 ovn_metadata_agent[143960]: 
Dec  6 02:47:22 np0005548731 ovn_metadata_agent[143960]: defaults
Dec  6 02:47:22 np0005548731 ovn_metadata_agent[143960]:    log global
Dec  6 02:47:22 np0005548731 ovn_metadata_agent[143960]:    mode http
Dec  6 02:47:22 np0005548731 ovn_metadata_agent[143960]:    option httplog
Dec  6 02:47:22 np0005548731 ovn_metadata_agent[143960]:    option dontlognull
Dec  6 02:47:22 np0005548731 ovn_metadata_agent[143960]:    option http-server-close
Dec  6 02:47:22 np0005548731 ovn_metadata_agent[143960]:    option forwardfor
Dec  6 02:47:22 np0005548731 ovn_metadata_agent[143960]:    retries                 3
Dec  6 02:47:22 np0005548731 ovn_metadata_agent[143960]:    timeout http-request    30s
Dec  6 02:47:22 np0005548731 ovn_metadata_agent[143960]:    timeout connect         30s
Dec  6 02:47:22 np0005548731 ovn_metadata_agent[143960]:    timeout client          32s
Dec  6 02:47:22 np0005548731 ovn_metadata_agent[143960]:    timeout server          32s
Dec  6 02:47:22 np0005548731 ovn_metadata_agent[143960]:    timeout http-keep-alive 30s
Dec  6 02:47:22 np0005548731 ovn_metadata_agent[143960]: 
Dec  6 02:47:22 np0005548731 ovn_metadata_agent[143960]: 
Dec  6 02:47:22 np0005548731 ovn_metadata_agent[143960]: listen listener
Dec  6 02:47:22 np0005548731 ovn_metadata_agent[143960]:    bind 169.254.169.254:80
Dec  6 02:47:22 np0005548731 ovn_metadata_agent[143960]:    server metadata /var/lib/neutron/metadata_proxy
Dec  6 02:47:22 np0005548731 ovn_metadata_agent[143960]:    http-request add-header X-OVN-Network-ID 61f8d3af-3627-497f-85d1-05f726aad6e5
Dec  6 02:47:22 np0005548731 ovn_metadata_agent[143960]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Dec  6 02:47:22 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:47:22.806 143965 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-61f8d3af-3627-497f-85d1-05f726aad6e5', 'env', 'PROCESS_TAG=haproxy-61f8d3af-3627-497f-85d1-05f726aad6e5', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/61f8d3af-3627-497f-85d1-05f726aad6e5.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Dec  6 02:47:23 np0005548731 nova_compute[232433]: 2025-12-06 07:47:23.138 232437 DEBUG nova.compute.manager [None req-d18a8586-6949-4dd6-939f-32391ebf3806 5933a27f4e504e23a5d084501196c0fb 329d4c9562c84ec5a42ca68894cbf27f - - default default] [instance: e268006c-e26b-4400-8c9a-da1925cd1a57] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec  6 02:47:23 np0005548731 nova_compute[232433]: 2025-12-06 07:47:23.139 232437 DEBUG nova.virt.driver [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Emitting event <LifecycleEvent: 1765007243.1372492, e268006c-e26b-4400-8c9a-da1925cd1a57 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 02:47:23 np0005548731 nova_compute[232433]: 2025-12-06 07:47:23.140 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: e268006c-e26b-4400-8c9a-da1925cd1a57] VM Started (Lifecycle Event)#033[00m
Dec  6 02:47:23 np0005548731 nova_compute[232433]: 2025-12-06 07:47:23.145 232437 DEBUG nova.virt.libvirt.driver [None req-d18a8586-6949-4dd6-939f-32391ebf3806 5933a27f4e504e23a5d084501196c0fb 329d4c9562c84ec5a42ca68894cbf27f - - default default] [instance: e268006c-e26b-4400-8c9a-da1925cd1a57] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Dec  6 02:47:23 np0005548731 nova_compute[232433]: 2025-12-06 07:47:23.148 232437 INFO nova.virt.libvirt.driver [-] [instance: e268006c-e26b-4400-8c9a-da1925cd1a57] Instance spawned successfully.#033[00m
Dec  6 02:47:23 np0005548731 nova_compute[232433]: 2025-12-06 07:47:23.151 232437 DEBUG nova.virt.libvirt.driver [None req-d18a8586-6949-4dd6-939f-32391ebf3806 5933a27f4e504e23a5d084501196c0fb 329d4c9562c84ec5a42ca68894cbf27f - - default default] [instance: e268006c-e26b-4400-8c9a-da1925cd1a57] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Dec  6 02:47:23 np0005548731 podman[305230]: 2025-12-06 07:47:23.190759586 +0000 UTC m=+0.047289434 container create cc78759f05919b753ad2668556e185746a0a2db5dad90a2d0e360090ffe5bafc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-61f8d3af-3627-497f-85d1-05f726aad6e5, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Dec  6 02:47:23 np0005548731 nova_compute[232433]: 2025-12-06 07:47:23.212 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: e268006c-e26b-4400-8c9a-da1925cd1a57] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:47:23 np0005548731 nova_compute[232433]: 2025-12-06 07:47:23.219 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: e268006c-e26b-4400-8c9a-da1925cd1a57] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  6 02:47:23 np0005548731 systemd[1]: Started libpod-conmon-cc78759f05919b753ad2668556e185746a0a2db5dad90a2d0e360090ffe5bafc.scope.
Dec  6 02:47:23 np0005548731 systemd[1]: Started libcrun container.
Dec  6 02:47:23 np0005548731 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e2fd9a41ecf342434ea4260835ff8ece91e4f9f8e438fa2ae26762e87177c6e7/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec  6 02:47:23 np0005548731 podman[305230]: 2025-12-06 07:47:23.167638753 +0000 UTC m=+0.024168621 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec  6 02:47:23 np0005548731 podman[305230]: 2025-12-06 07:47:23.27091558 +0000 UTC m=+0.127445428 container init cc78759f05919b753ad2668556e185746a0a2db5dad90a2d0e360090ffe5bafc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-61f8d3af-3627-497f-85d1-05f726aad6e5, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Dec  6 02:47:23 np0005548731 podman[305230]: 2025-12-06 07:47:23.277066531 +0000 UTC m=+0.133596379 container start cc78759f05919b753ad2668556e185746a0a2db5dad90a2d0e360090ffe5bafc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-61f8d3af-3627-497f-85d1-05f726aad6e5, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Dec  6 02:47:23 np0005548731 nova_compute[232433]: 2025-12-06 07:47:23.292 232437 DEBUG nova.network.neutron [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] [instance: 2b4b3181-da12-4705-9215-7ee5b869102b] Updating instance_info_cache with network_info: [{"id": "879edf90-0812-4adf-b171-d993c6cfb26c", "address": "fa:16:3e:9c:dc:f4", "network": {"id": "6d1a17d6-5e44-40b7-832a-81cb86c02e71", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-1698704235-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.221", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f44ecb8bdc7e4692a299e29603301124", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap879edf90-08", "ovs_interfaceid": "879edf90-0812-4adf-b171-d993c6cfb26c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 02:47:23 np0005548731 neutron-haproxy-ovnmeta-61f8d3af-3627-497f-85d1-05f726aad6e5[305245]: [NOTICE]   (305249) : New worker (305251) forked
Dec  6 02:47:23 np0005548731 neutron-haproxy-ovnmeta-61f8d3af-3627-497f-85d1-05f726aad6e5[305245]: [NOTICE]   (305249) : Loading success.
Dec  6 02:47:23 np0005548731 nova_compute[232433]: 2025-12-06 07:47:23.367 232437 DEBUG nova.virt.libvirt.driver [None req-d18a8586-6949-4dd6-939f-32391ebf3806 5933a27f4e504e23a5d084501196c0fb 329d4c9562c84ec5a42ca68894cbf27f - - default default] [instance: e268006c-e26b-4400-8c9a-da1925cd1a57] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 02:47:23 np0005548731 nova_compute[232433]: 2025-12-06 07:47:23.368 232437 DEBUG nova.virt.libvirt.driver [None req-d18a8586-6949-4dd6-939f-32391ebf3806 5933a27f4e504e23a5d084501196c0fb 329d4c9562c84ec5a42ca68894cbf27f - - default default] [instance: e268006c-e26b-4400-8c9a-da1925cd1a57] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 02:47:23 np0005548731 nova_compute[232433]: 2025-12-06 07:47:23.368 232437 DEBUG nova.virt.libvirt.driver [None req-d18a8586-6949-4dd6-939f-32391ebf3806 5933a27f4e504e23a5d084501196c0fb 329d4c9562c84ec5a42ca68894cbf27f - - default default] [instance: e268006c-e26b-4400-8c9a-da1925cd1a57] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 02:47:23 np0005548731 nova_compute[232433]: 2025-12-06 07:47:23.369 232437 DEBUG nova.virt.libvirt.driver [None req-d18a8586-6949-4dd6-939f-32391ebf3806 5933a27f4e504e23a5d084501196c0fb 329d4c9562c84ec5a42ca68894cbf27f - - default default] [instance: e268006c-e26b-4400-8c9a-da1925cd1a57] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 02:47:23 np0005548731 nova_compute[232433]: 2025-12-06 07:47:23.369 232437 DEBUG nova.virt.libvirt.driver [None req-d18a8586-6949-4dd6-939f-32391ebf3806 5933a27f4e504e23a5d084501196c0fb 329d4c9562c84ec5a42ca68894cbf27f - - default default] [instance: e268006c-e26b-4400-8c9a-da1925cd1a57] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 02:47:23 np0005548731 nova_compute[232433]: 2025-12-06 07:47:23.370 232437 DEBUG nova.virt.libvirt.driver [None req-d18a8586-6949-4dd6-939f-32391ebf3806 5933a27f4e504e23a5d084501196c0fb 329d4c9562c84ec5a42ca68894cbf27f - - default default] [instance: e268006c-e26b-4400-8c9a-da1925cd1a57] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 02:47:23 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:47:23 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 02:47:23 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:47:23.580 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 02:47:23 np0005548731 nova_compute[232433]: 2025-12-06 07:47:23.786 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:47:23 np0005548731 nova_compute[232433]: 2025-12-06 07:47:23.844 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: e268006c-e26b-4400-8c9a-da1925cd1a57] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  6 02:47:23 np0005548731 nova_compute[232433]: 2025-12-06 07:47:23.845 232437 DEBUG nova.virt.driver [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Emitting event <LifecycleEvent: 1765007243.1379883, e268006c-e26b-4400-8c9a-da1925cd1a57 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 02:47:23 np0005548731 nova_compute[232433]: 2025-12-06 07:47:23.845 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: e268006c-e26b-4400-8c9a-da1925cd1a57] VM Paused (Lifecycle Event)#033[00m
Dec  6 02:47:23 np0005548731 nova_compute[232433]: 2025-12-06 07:47:23.848 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Releasing lock "refresh_cache-2b4b3181-da12-4705-9215-7ee5b869102b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 02:47:23 np0005548731 nova_compute[232433]: 2025-12-06 07:47:23.849 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] [instance: 2b4b3181-da12-4705-9215-7ee5b869102b] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Dec  6 02:47:23 np0005548731 nova_compute[232433]: 2025-12-06 07:47:23.849 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:47:23 np0005548731 nova_compute[232433]: 2025-12-06 07:47:23.850 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:47:23 np0005548731 nova_compute[232433]: 2025-12-06 07:47:23.850 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  6 02:47:23 np0005548731 nova_compute[232433]: 2025-12-06 07:47:23.906 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: e268006c-e26b-4400-8c9a-da1925cd1a57] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:47:23 np0005548731 nova_compute[232433]: 2025-12-06 07:47:23.909 232437 DEBUG nova.virt.driver [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Emitting event <LifecycleEvent: 1765007243.1436543, e268006c-e26b-4400-8c9a-da1925cd1a57 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 02:47:23 np0005548731 nova_compute[232433]: 2025-12-06 07:47:23.910 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: e268006c-e26b-4400-8c9a-da1925cd1a57] VM Resumed (Lifecycle Event)#033[00m
Dec  6 02:47:23 np0005548731 nova_compute[232433]: 2025-12-06 07:47:23.979 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:47:24 np0005548731 nova_compute[232433]: 2025-12-06 07:47:24.377 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: e268006c-e26b-4400-8c9a-da1925cd1a57] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:47:24 np0005548731 nova_compute[232433]: 2025-12-06 07:47:24.382 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: e268006c-e26b-4400-8c9a-da1925cd1a57] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  6 02:47:24 np0005548731 nova_compute[232433]: 2025-12-06 07:47:24.414 232437 INFO nova.compute.manager [None req-d18a8586-6949-4dd6-939f-32391ebf3806 5933a27f4e504e23a5d084501196c0fb 329d4c9562c84ec5a42ca68894cbf27f - - default default] [instance: e268006c-e26b-4400-8c9a-da1925cd1a57] Took 18.78 seconds to spawn the instance on the hypervisor.#033[00m
Dec  6 02:47:24 np0005548731 nova_compute[232433]: 2025-12-06 07:47:24.415 232437 DEBUG nova.compute.manager [None req-d18a8586-6949-4dd6-939f-32391ebf3806 5933a27f4e504e23a5d084501196c0fb 329d4c9562c84ec5a42ca68894cbf27f - - default default] [instance: e268006c-e26b-4400-8c9a-da1925cd1a57] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:47:24 np0005548731 nova_compute[232433]: 2025-12-06 07:47:24.431 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: e268006c-e26b-4400-8c9a-da1925cd1a57] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  6 02:47:24 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  6 02:47:24 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2017072707' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  6 02:47:24 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  6 02:47:24 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2017072707' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  6 02:47:24 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:47:24 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:47:24 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:47:24.619 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:47:25 np0005548731 nova_compute[232433]: 2025-12-06 07:47:25.175 232437 INFO nova.compute.manager [None req-d18a8586-6949-4dd6-939f-32391ebf3806 5933a27f4e504e23a5d084501196c0fb 329d4c9562c84ec5a42ca68894cbf27f - - default default] [instance: e268006c-e26b-4400-8c9a-da1925cd1a57] Took 20.51 seconds to build instance.#033[00m
Dec  6 02:47:25 np0005548731 nova_compute[232433]: 2025-12-06 07:47:25.221 232437 DEBUG nova.compute.manager [req-ed6cecc9-434c-4d58-b667-decaef59c044 req-02dd4855-0224-4fc2-b7af-c19d281b21cf 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: e268006c-e26b-4400-8c9a-da1925cd1a57] Received event network-vif-plugged-9182be1d-dcb1-498a-af53-8a36faebc1e2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:47:25 np0005548731 nova_compute[232433]: 2025-12-06 07:47:25.222 232437 DEBUG oslo_concurrency.lockutils [req-ed6cecc9-434c-4d58-b667-decaef59c044 req-02dd4855-0224-4fc2-b7af-c19d281b21cf 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "e268006c-e26b-4400-8c9a-da1925cd1a57-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:47:25 np0005548731 nova_compute[232433]: 2025-12-06 07:47:25.222 232437 DEBUG oslo_concurrency.lockutils [req-ed6cecc9-434c-4d58-b667-decaef59c044 req-02dd4855-0224-4fc2-b7af-c19d281b21cf 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "e268006c-e26b-4400-8c9a-da1925cd1a57-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:47:25 np0005548731 nova_compute[232433]: 2025-12-06 07:47:25.223 232437 DEBUG oslo_concurrency.lockutils [req-ed6cecc9-434c-4d58-b667-decaef59c044 req-02dd4855-0224-4fc2-b7af-c19d281b21cf 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "e268006c-e26b-4400-8c9a-da1925cd1a57-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:47:25 np0005548731 nova_compute[232433]: 2025-12-06 07:47:25.223 232437 DEBUG nova.compute.manager [req-ed6cecc9-434c-4d58-b667-decaef59c044 req-02dd4855-0224-4fc2-b7af-c19d281b21cf 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: e268006c-e26b-4400-8c9a-da1925cd1a57] No waiting events found dispatching network-vif-plugged-9182be1d-dcb1-498a-af53-8a36faebc1e2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 02:47:25 np0005548731 nova_compute[232433]: 2025-12-06 07:47:25.223 232437 WARNING nova.compute.manager [req-ed6cecc9-434c-4d58-b667-decaef59c044 req-02dd4855-0224-4fc2-b7af-c19d281b21cf 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: e268006c-e26b-4400-8c9a-da1925cd1a57] Received unexpected event network-vif-plugged-9182be1d-dcb1-498a-af53-8a36faebc1e2 for instance with vm_state building and task_state spawning.#033[00m
Dec  6 02:47:25 np0005548731 nova_compute[232433]: 2025-12-06 07:47:25.259 232437 DEBUG oslo_concurrency.lockutils [None req-d18a8586-6949-4dd6-939f-32391ebf3806 5933a27f4e504e23a5d084501196c0fb 329d4c9562c84ec5a42ca68894cbf27f - - default default] Lock "e268006c-e26b-4400-8c9a-da1925cd1a57" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 20.664s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:47:25 np0005548731 nova_compute[232433]: 2025-12-06 07:47:25.302 232437 DEBUG oslo_concurrency.lockutils [None req-b196ebd4-a18a-4390-927e-509cdf25ce14 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] Acquiring lock "f67d89a8-836a-4f47-af8d-37cf99529275" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:47:25 np0005548731 nova_compute[232433]: 2025-12-06 07:47:25.302 232437 DEBUG oslo_concurrency.lockutils [None req-b196ebd4-a18a-4390-927e-509cdf25ce14 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] Lock "f67d89a8-836a-4f47-af8d-37cf99529275" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:47:25 np0005548731 nova_compute[232433]: 2025-12-06 07:47:25.303 232437 DEBUG oslo_concurrency.lockutils [None req-b196ebd4-a18a-4390-927e-509cdf25ce14 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] Acquiring lock "f67d89a8-836a-4f47-af8d-37cf99529275-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:47:25 np0005548731 nova_compute[232433]: 2025-12-06 07:47:25.303 232437 DEBUG oslo_concurrency.lockutils [None req-b196ebd4-a18a-4390-927e-509cdf25ce14 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] Lock "f67d89a8-836a-4f47-af8d-37cf99529275-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:47:25 np0005548731 nova_compute[232433]: 2025-12-06 07:47:25.304 232437 DEBUG oslo_concurrency.lockutils [None req-b196ebd4-a18a-4390-927e-509cdf25ce14 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] Lock "f67d89a8-836a-4f47-af8d-37cf99529275-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:47:25 np0005548731 nova_compute[232433]: 2025-12-06 07:47:25.305 232437 INFO nova.compute.manager [None req-b196ebd4-a18a-4390-927e-509cdf25ce14 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] [instance: f67d89a8-836a-4f47-af8d-37cf99529275] Terminating instance#033[00m
Dec  6 02:47:25 np0005548731 nova_compute[232433]: 2025-12-06 07:47:25.307 232437 DEBUG nova.compute.manager [None req-b196ebd4-a18a-4390-927e-509cdf25ce14 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] [instance: f67d89a8-836a-4f47-af8d-37cf99529275] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec  6 02:47:25 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:47:25 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:47:25 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:47:25.583 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:47:26 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e357 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:47:26 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:47:26 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:47:26 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:47:26.621 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:47:26 np0005548731 kernel: tapd4087b80-19 (unregistering): left promiscuous mode
Dec  6 02:47:26 np0005548731 NetworkManager[49182]: <info>  [1765007246.7900] device (tapd4087b80-19): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec  6 02:47:26 np0005548731 ovn_controller[133927]: 2025-12-06T07:47:26Z|00771|binding|INFO|Releasing lport d4087b80-19fa-44d3-8dd5-406c2b01b6f4 from this chassis (sb_readonly=0)
Dec  6 02:47:26 np0005548731 ovn_controller[133927]: 2025-12-06T07:47:26Z|00772|binding|INFO|Setting lport d4087b80-19fa-44d3-8dd5-406c2b01b6f4 down in Southbound
Dec  6 02:47:26 np0005548731 ovn_controller[133927]: 2025-12-06T07:47:26Z|00773|binding|INFO|Removing iface tapd4087b80-19 ovn-installed in OVS
Dec  6 02:47:26 np0005548731 nova_compute[232433]: 2025-12-06 07:47:26.804 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:47:26 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:47:26.817 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:16:41:1e 10.100.0.7'], port_security=['fa:16:3e:16:41:1e 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'f67d89a8-836a-4f47-af8d-37cf99529275', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-67a02abd-6f15-4e26-ba0d-8a091ca98239', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4ec9294f6d4b4f44a72414374d646a4a', 'neutron:revision_number': '11', 'neutron:security_group_ids': 'ef8d01c6-8791-4eef-8d34-906ccbdd6237', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a13aa6e4-a519-406f-87b7-05ba3d74a296, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], logical_port=d4087b80-19fa-44d3-8dd5-406c2b01b6f4) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 02:47:26 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:47:26.818 143965 INFO neutron.agent.ovn.metadata.agent [-] Port d4087b80-19fa-44d3-8dd5-406c2b01b6f4 in datapath 67a02abd-6f15-4e26-ba0d-8a091ca98239 unbound from our chassis#033[00m
Dec  6 02:47:26 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:47:26.822 143965 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 67a02abd-6f15-4e26-ba0d-8a091ca98239, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec  6 02:47:26 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:47:26.826 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[f5a90ceb-8c79-4428-90c0-46e12cad7745]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:47:26 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:47:26.827 143965 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-67a02abd-6f15-4e26-ba0d-8a091ca98239 namespace which is not needed anymore#033[00m
Dec  6 02:47:26 np0005548731 nova_compute[232433]: 2025-12-06 07:47:26.831 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:47:26 np0005548731 systemd[1]: machine-qemu\x2d76\x2dinstance\x2d00000099.scope: Deactivated successfully.
Dec  6 02:47:26 np0005548731 systemd[1]: machine-qemu\x2d76\x2dinstance\x2d00000099.scope: Consumed 7.215s CPU time.
Dec  6 02:47:26 np0005548731 systemd-machined[195355]: Machine qemu-76-instance-00000099 terminated.
Dec  6 02:47:26 np0005548731 kernel: tapd4087b80-19: entered promiscuous mode
Dec  6 02:47:26 np0005548731 NetworkManager[49182]: <info>  [1765007246.9292] manager: (tapd4087b80-19): new Tun device (/org/freedesktop/NetworkManager/Devices/358)
Dec  6 02:47:26 np0005548731 ovn_controller[133927]: 2025-12-06T07:47:26Z|00774|binding|INFO|Claiming lport d4087b80-19fa-44d3-8dd5-406c2b01b6f4 for this chassis.
Dec  6 02:47:26 np0005548731 ovn_controller[133927]: 2025-12-06T07:47:26Z|00775|binding|INFO|d4087b80-19fa-44d3-8dd5-406c2b01b6f4: Claiming fa:16:3e:16:41:1e 10.100.0.7
Dec  6 02:47:26 np0005548731 kernel: tapd4087b80-19 (unregistering): left promiscuous mode
Dec  6 02:47:26 np0005548731 nova_compute[232433]: 2025-12-06 07:47:26.931 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:47:26 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:47:26.936 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:16:41:1e 10.100.0.7'], port_security=['fa:16:3e:16:41:1e 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'f67d89a8-836a-4f47-af8d-37cf99529275', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-67a02abd-6f15-4e26-ba0d-8a091ca98239', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4ec9294f6d4b4f44a72414374d646a4a', 'neutron:revision_number': '11', 'neutron:security_group_ids': 'ef8d01c6-8791-4eef-8d34-906ccbdd6237', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a13aa6e4-a519-406f-87b7-05ba3d74a296, chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], logical_port=d4087b80-19fa-44d3-8dd5-406c2b01b6f4) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 02:47:26 np0005548731 nova_compute[232433]: 2025-12-06 07:47:26.956 232437 INFO nova.virt.libvirt.driver [-] [instance: f67d89a8-836a-4f47-af8d-37cf99529275] Instance destroyed successfully.#033[00m
Dec  6 02:47:26 np0005548731 nova_compute[232433]: 2025-12-06 07:47:26.957 232437 DEBUG nova.objects.instance [None req-b196ebd4-a18a-4390-927e-509cdf25ce14 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] Lazy-loading 'resources' on Instance uuid f67d89a8-836a-4f47-af8d-37cf99529275 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:47:26 np0005548731 nova_compute[232433]: 2025-12-06 07:47:26.961 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:47:26 np0005548731 ovn_controller[133927]: 2025-12-06T07:47:26Z|00776|binding|INFO|Setting lport d4087b80-19fa-44d3-8dd5-406c2b01b6f4 ovn-installed in OVS
Dec  6 02:47:26 np0005548731 ovn_controller[133927]: 2025-12-06T07:47:26Z|00777|binding|INFO|Setting lport d4087b80-19fa-44d3-8dd5-406c2b01b6f4 up in Southbound
Dec  6 02:47:26 np0005548731 ovn_controller[133927]: 2025-12-06T07:47:26Z|00778|binding|INFO|Releasing lport d4087b80-19fa-44d3-8dd5-406c2b01b6f4 from this chassis (sb_readonly=1)
Dec  6 02:47:26 np0005548731 ovn_controller[133927]: 2025-12-06T07:47:26Z|00779|if_status|INFO|Dropped 4 log messages in last 409 seconds (most recently, 403 seconds ago) due to excessive rate
Dec  6 02:47:26 np0005548731 ovn_controller[133927]: 2025-12-06T07:47:26Z|00780|if_status|INFO|Not setting lport d4087b80-19fa-44d3-8dd5-406c2b01b6f4 down as sb is readonly
Dec  6 02:47:26 np0005548731 ovn_controller[133927]: 2025-12-06T07:47:26Z|00781|binding|INFO|Removing iface tapd4087b80-19 ovn-installed in OVS
Dec  6 02:47:26 np0005548731 ovn_controller[133927]: 2025-12-06T07:47:26Z|00782|binding|INFO|Releasing lport d4087b80-19fa-44d3-8dd5-406c2b01b6f4 from this chassis (sb_readonly=0)
Dec  6 02:47:26 np0005548731 ovn_controller[133927]: 2025-12-06T07:47:26Z|00783|binding|INFO|Setting lport d4087b80-19fa-44d3-8dd5-406c2b01b6f4 down in Southbound
Dec  6 02:47:26 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:47:26.972 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:16:41:1e 10.100.0.7'], port_security=['fa:16:3e:16:41:1e 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'f67d89a8-836a-4f47-af8d-37cf99529275', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-67a02abd-6f15-4e26-ba0d-8a091ca98239', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4ec9294f6d4b4f44a72414374d646a4a', 'neutron:revision_number': '11', 'neutron:security_group_ids': 'ef8d01c6-8791-4eef-8d34-906ccbdd6237', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a13aa6e4-a519-406f-87b7-05ba3d74a296, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], logical_port=d4087b80-19fa-44d3-8dd5-406c2b01b6f4) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 02:47:26 np0005548731 nova_compute[232433]: 2025-12-06 07:47:26.975 232437 DEBUG nova.virt.libvirt.vif [None req-b196ebd4-a18a-4390-927e-509cdf25ce14 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-12-06T07:43:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersNegativeTestJSON-server-2113241509',display_name='tempest-ServersNegativeTestJSON-server-2113241509',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serversnegativetestjson-server-2113241509',id=153,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-06T07:46:46Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='4ec9294f6d4b4f44a72414374d646a4a',ramdisk_id='',reservation_id='r-r3odtvmp',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_
vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersNegativeTestJSON-776446295',owner_user_name='tempest-ServersNegativeTestJSON-776446295-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T07:47:03Z,user_data=None,user_id='1035ecd55ed54b57aa35fe32fb915cc5',uuid=f67d89a8-836a-4f47-af8d-37cf99529275,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "d4087b80-19fa-44d3-8dd5-406c2b01b6f4", "address": "fa:16:3e:16:41:1e", "network": {"id": "67a02abd-6f15-4e26-ba0d-8a091ca98239", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-840496237-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4ec9294f6d4b4f44a72414374d646a4a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd4087b80-19", "ovs_interfaceid": "d4087b80-19fa-44d3-8dd5-406c2b01b6f4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec  6 02:47:26 np0005548731 nova_compute[232433]: 2025-12-06 07:47:26.975 232437 DEBUG nova.network.os_vif_util [None req-b196ebd4-a18a-4390-927e-509cdf25ce14 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] Converting VIF {"id": "d4087b80-19fa-44d3-8dd5-406c2b01b6f4", "address": "fa:16:3e:16:41:1e", "network": {"id": "67a02abd-6f15-4e26-ba0d-8a091ca98239", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-840496237-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4ec9294f6d4b4f44a72414374d646a4a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd4087b80-19", "ovs_interfaceid": "d4087b80-19fa-44d3-8dd5-406c2b01b6f4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 02:47:26 np0005548731 nova_compute[232433]: 2025-12-06 07:47:26.976 232437 DEBUG nova.network.os_vif_util [None req-b196ebd4-a18a-4390-927e-509cdf25ce14 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:16:41:1e,bridge_name='br-int',has_traffic_filtering=True,id=d4087b80-19fa-44d3-8dd5-406c2b01b6f4,network=Network(67a02abd-6f15-4e26-ba0d-8a091ca98239),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd4087b80-19') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 02:47:26 np0005548731 nova_compute[232433]: 2025-12-06 07:47:26.977 232437 DEBUG os_vif [None req-b196ebd4-a18a-4390-927e-509cdf25ce14 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:16:41:1e,bridge_name='br-int',has_traffic_filtering=True,id=d4087b80-19fa-44d3-8dd5-406c2b01b6f4,network=Network(67a02abd-6f15-4e26-ba0d-8a091ca98239),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd4087b80-19') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec  6 02:47:26 np0005548731 nova_compute[232433]: 2025-12-06 07:47:26.978 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:47:26 np0005548731 nova_compute[232433]: 2025-12-06 07:47:26.979 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd4087b80-19, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:47:26 np0005548731 nova_compute[232433]: 2025-12-06 07:47:26.982 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:47:26 np0005548731 nova_compute[232433]: 2025-12-06 07:47:26.985 232437 INFO os_vif [None req-b196ebd4-a18a-4390-927e-509cdf25ce14 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:16:41:1e,bridge_name='br-int',has_traffic_filtering=True,id=d4087b80-19fa-44d3-8dd5-406c2b01b6f4,network=Network(67a02abd-6f15-4e26-ba0d-8a091ca98239),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd4087b80-19')#033[00m
Dec  6 02:47:27 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:47:27.369 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=9f96b960-b4f2-40bd-ae99-08121f5e8b78, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '61'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:47:27 np0005548731 ovn_controller[133927]: 2025-12-06T07:47:27Z|00100|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:9c:dc:f4 10.100.0.7
Dec  6 02:47:27 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:47:27 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:47:27 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:47:27.587 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:47:27 np0005548731 neutron-haproxy-ovnmeta-67a02abd-6f15-4e26-ba0d-8a091ca98239[304069]: [NOTICE]   (304073) : haproxy version is 2.8.14-c23fe91
Dec  6 02:47:27 np0005548731 neutron-haproxy-ovnmeta-67a02abd-6f15-4e26-ba0d-8a091ca98239[304069]: [NOTICE]   (304073) : path to executable is /usr/sbin/haproxy
Dec  6 02:47:27 np0005548731 neutron-haproxy-ovnmeta-67a02abd-6f15-4e26-ba0d-8a091ca98239[304069]: [ALERT]    (304073) : Current worker (304075) exited with code 143 (Terminated)
Dec  6 02:47:27 np0005548731 neutron-haproxy-ovnmeta-67a02abd-6f15-4e26-ba0d-8a091ca98239[304069]: [WARNING]  (304073) : All workers exited. Exiting... (0)
Dec  6 02:47:27 np0005548731 systemd[1]: libpod-8fba2e2fdc7d60b3ca7ba34f6e4cbf1f918e754d3affc1a12d20397815173b03.scope: Deactivated successfully.
Dec  6 02:47:27 np0005548731 conmon[304069]: conmon 8fba2e2fdc7d60b3ca7b <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-8fba2e2fdc7d60b3ca7ba34f6e4cbf1f918e754d3affc1a12d20397815173b03.scope/container/memory.events
Dec  6 02:47:27 np0005548731 podman[305286]: 2025-12-06 07:47:27.725250622 +0000 UTC m=+0.785726848 container died 8fba2e2fdc7d60b3ca7ba34f6e4cbf1f918e754d3affc1a12d20397815173b03 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-67a02abd-6f15-4e26-ba0d-8a091ca98239, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec  6 02:47:27 np0005548731 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-8fba2e2fdc7d60b3ca7ba34f6e4cbf1f918e754d3affc1a12d20397815173b03-userdata-shm.mount: Deactivated successfully.
Dec  6 02:47:27 np0005548731 systemd[1]: var-lib-containers-storage-overlay-361f1b5329e450f33d13583799db1087f543b008b246e8b5ab20dcf9869d0c53-merged.mount: Deactivated successfully.
Dec  6 02:47:27 np0005548731 podman[305286]: 2025-12-06 07:47:27.770827213 +0000 UTC m=+0.831303419 container cleanup 8fba2e2fdc7d60b3ca7ba34f6e4cbf1f918e754d3affc1a12d20397815173b03 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-67a02abd-6f15-4e26-ba0d-8a091ca98239, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Dec  6 02:47:27 np0005548731 systemd[1]: libpod-conmon-8fba2e2fdc7d60b3ca7ba34f6e4cbf1f918e754d3affc1a12d20397815173b03.scope: Deactivated successfully.
Dec  6 02:47:27 np0005548731 podman[305333]: 2025-12-06 07:47:27.859777472 +0000 UTC m=+0.049835047 container remove 8fba2e2fdc7d60b3ca7ba34f6e4cbf1f918e754d3affc1a12d20397815173b03 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-67a02abd-6f15-4e26-ba0d-8a091ca98239, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS)
Dec  6 02:47:27 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:47:27.866 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[d97c541d-b4bc-4680-a654-ea2597fc489b]: (4, ('Sat Dec  6 07:47:26 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-67a02abd-6f15-4e26-ba0d-8a091ca98239 (8fba2e2fdc7d60b3ca7ba34f6e4cbf1f918e754d3affc1a12d20397815173b03)\n8fba2e2fdc7d60b3ca7ba34f6e4cbf1f918e754d3affc1a12d20397815173b03\nSat Dec  6 07:47:27 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-67a02abd-6f15-4e26-ba0d-8a091ca98239 (8fba2e2fdc7d60b3ca7ba34f6e4cbf1f918e754d3affc1a12d20397815173b03)\n8fba2e2fdc7d60b3ca7ba34f6e4cbf1f918e754d3affc1a12d20397815173b03\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:47:27 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:47:27.868 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[12ffe8e4-3995-4f0e-a0f1-0fbbffc308a6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:47:27 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:47:27.870 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap67a02abd-60, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:47:27 np0005548731 nova_compute[232433]: 2025-12-06 07:47:27.872 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:47:27 np0005548731 kernel: tap67a02abd-60: left promiscuous mode
Dec  6 02:47:27 np0005548731 nova_compute[232433]: 2025-12-06 07:47:27.890 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:47:27 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:47:27.894 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[7bf920a1-1949-4e53-810e-96d2c812ba17]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:47:27 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:47:27.909 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[984f86da-e153-4ddd-8ed1-ad2225244764]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:47:27 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:47:27.910 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[743f9d2b-7812-48ed-9778-d62673cd3d2f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:47:27 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:47:27.927 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[799305ef-6405-4050-9090-e9f2c0d5c9eb]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 748036, 'reachable_time': 25606, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 305347, 'error': None, 'target': 'ovnmeta-67a02abd-6f15-4e26-ba0d-8a091ca98239', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:47:27 np0005548731 systemd[1]: run-netns-ovnmeta\x2d67a02abd\x2d6f15\x2d4e26\x2dba0d\x2d8a091ca98239.mount: Deactivated successfully.
Dec  6 02:47:27 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:47:27.932 144079 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-67a02abd-6f15-4e26-ba0d-8a091ca98239 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Dec  6 02:47:27 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:47:27.933 144079 DEBUG oslo.privsep.daemon [-] privsep: reply[faf33df1-81ab-4bf8-b2d1-a9b2bf181560]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:47:27 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:47:27.933 143965 INFO neutron.agent.ovn.metadata.agent [-] Port d4087b80-19fa-44d3-8dd5-406c2b01b6f4 in datapath 67a02abd-6f15-4e26-ba0d-8a091ca98239 unbound from our chassis#033[00m
Dec  6 02:47:27 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:47:27.935 143965 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 67a02abd-6f15-4e26-ba0d-8a091ca98239, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec  6 02:47:27 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:47:27.936 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[86fb0451-c625-443d-9946-7bdce077cac3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:47:27 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:47:27.936 143965 INFO neutron.agent.ovn.metadata.agent [-] Port d4087b80-19fa-44d3-8dd5-406c2b01b6f4 in datapath 67a02abd-6f15-4e26-ba0d-8a091ca98239 unbound from our chassis#033[00m
Dec  6 02:47:27 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:47:27.937 143965 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 67a02abd-6f15-4e26-ba0d-8a091ca98239, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec  6 02:47:27 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:47:27.938 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[32532fb7-92df-4987-9406-8eabcd258d9d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:47:28 np0005548731 nova_compute[232433]: 2025-12-06 07:47:28.105 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:47:28 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:47:28 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:47:28 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:47:28.623 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:47:28 np0005548731 nova_compute[232433]: 2025-12-06 07:47:28.788 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:47:28 np0005548731 nova_compute[232433]: 2025-12-06 07:47:28.797 232437 DEBUG nova.compute.manager [req-21c971f9-b50a-432b-bab0-12dfc2c03c17 req-578d8724-a5e0-41d1-acea-d7f3b3e27684 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f67d89a8-836a-4f47-af8d-37cf99529275] Received event network-vif-unplugged-d4087b80-19fa-44d3-8dd5-406c2b01b6f4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:47:28 np0005548731 nova_compute[232433]: 2025-12-06 07:47:28.798 232437 DEBUG oslo_concurrency.lockutils [req-21c971f9-b50a-432b-bab0-12dfc2c03c17 req-578d8724-a5e0-41d1-acea-d7f3b3e27684 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "f67d89a8-836a-4f47-af8d-37cf99529275-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:47:28 np0005548731 nova_compute[232433]: 2025-12-06 07:47:28.798 232437 DEBUG oslo_concurrency.lockutils [req-21c971f9-b50a-432b-bab0-12dfc2c03c17 req-578d8724-a5e0-41d1-acea-d7f3b3e27684 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "f67d89a8-836a-4f47-af8d-37cf99529275-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:47:28 np0005548731 nova_compute[232433]: 2025-12-06 07:47:28.799 232437 DEBUG oslo_concurrency.lockutils [req-21c971f9-b50a-432b-bab0-12dfc2c03c17 req-578d8724-a5e0-41d1-acea-d7f3b3e27684 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "f67d89a8-836a-4f47-af8d-37cf99529275-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:47:28 np0005548731 nova_compute[232433]: 2025-12-06 07:47:28.799 232437 DEBUG nova.compute.manager [req-21c971f9-b50a-432b-bab0-12dfc2c03c17 req-578d8724-a5e0-41d1-acea-d7f3b3e27684 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f67d89a8-836a-4f47-af8d-37cf99529275] No waiting events found dispatching network-vif-unplugged-d4087b80-19fa-44d3-8dd5-406c2b01b6f4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 02:47:28 np0005548731 nova_compute[232433]: 2025-12-06 07:47:28.799 232437 DEBUG nova.compute.manager [req-21c971f9-b50a-432b-bab0-12dfc2c03c17 req-578d8724-a5e0-41d1-acea-d7f3b3e27684 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f67d89a8-836a-4f47-af8d-37cf99529275] Received event network-vif-unplugged-d4087b80-19fa-44d3-8dd5-406c2b01b6f4 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Dec  6 02:47:29 np0005548731 nova_compute[232433]: 2025-12-06 07:47:29.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:47:29 np0005548731 nova_compute[232433]: 2025-12-06 07:47:29.579 232437 INFO nova.virt.libvirt.driver [None req-b196ebd4-a18a-4390-927e-509cdf25ce14 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] [instance: f67d89a8-836a-4f47-af8d-37cf99529275] Deleting instance files /var/lib/nova/instances/f67d89a8-836a-4f47-af8d-37cf99529275_del#033[00m
Dec  6 02:47:29 np0005548731 nova_compute[232433]: 2025-12-06 07:47:29.580 232437 INFO nova.virt.libvirt.driver [None req-b196ebd4-a18a-4390-927e-509cdf25ce14 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] [instance: f67d89a8-836a-4f47-af8d-37cf99529275] Deletion of /var/lib/nova/instances/f67d89a8-836a-4f47-af8d-37cf99529275_del complete#033[00m
Dec  6 02:47:29 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:47:29 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:47:29 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:47:29.589 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:47:29 np0005548731 nova_compute[232433]: 2025-12-06 07:47:29.983 232437 INFO nova.compute.manager [None req-b196ebd4-a18a-4390-927e-509cdf25ce14 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] [instance: f67d89a8-836a-4f47-af8d-37cf99529275] Took 4.68 seconds to destroy the instance on the hypervisor.#033[00m
Dec  6 02:47:29 np0005548731 nova_compute[232433]: 2025-12-06 07:47:29.984 232437 DEBUG oslo.service.loopingcall [None req-b196ebd4-a18a-4390-927e-509cdf25ce14 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Dec  6 02:47:29 np0005548731 nova_compute[232433]: 2025-12-06 07:47:29.985 232437 DEBUG nova.compute.manager [-] [instance: f67d89a8-836a-4f47-af8d-37cf99529275] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Dec  6 02:47:29 np0005548731 nova_compute[232433]: 2025-12-06 07:47:29.985 232437 DEBUG nova.network.neutron [-] [instance: f67d89a8-836a-4f47-af8d-37cf99529275] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Dec  6 02:47:30 np0005548731 nova_compute[232433]: 2025-12-06 07:47:30.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:47:30 np0005548731 nova_compute[232433]: 2025-12-06 07:47:30.149 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:47:30 np0005548731 nova_compute[232433]: 2025-12-06 07:47:30.149 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:47:30 np0005548731 nova_compute[232433]: 2025-12-06 07:47:30.150 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:47:30 np0005548731 nova_compute[232433]: 2025-12-06 07:47:30.150 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  6 02:47:30 np0005548731 nova_compute[232433]: 2025-12-06 07:47:30.150 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:47:30 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 02:47:30 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/855239122' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 02:47:30 np0005548731 nova_compute[232433]: 2025-12-06 07:47:30.606 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.456s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:47:30 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:47:30 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:47:30 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:47:30.625 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:47:31 np0005548731 nova_compute[232433]: 2025-12-06 07:47:31.392 232437 DEBUG nova.compute.manager [req-ffec59a4-a8a3-4360-bdf8-12564f72474c req-fd572dab-8a7e-4372-befc-36f7ef1dd56d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f67d89a8-836a-4f47-af8d-37cf99529275] Received event network-vif-plugged-d4087b80-19fa-44d3-8dd5-406c2b01b6f4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:47:31 np0005548731 nova_compute[232433]: 2025-12-06 07:47:31.393 232437 DEBUG oslo_concurrency.lockutils [req-ffec59a4-a8a3-4360-bdf8-12564f72474c req-fd572dab-8a7e-4372-befc-36f7ef1dd56d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "f67d89a8-836a-4f47-af8d-37cf99529275-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:47:31 np0005548731 nova_compute[232433]: 2025-12-06 07:47:31.394 232437 DEBUG oslo_concurrency.lockutils [req-ffec59a4-a8a3-4360-bdf8-12564f72474c req-fd572dab-8a7e-4372-befc-36f7ef1dd56d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "f67d89a8-836a-4f47-af8d-37cf99529275-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:47:31 np0005548731 nova_compute[232433]: 2025-12-06 07:47:31.394 232437 DEBUG oslo_concurrency.lockutils [req-ffec59a4-a8a3-4360-bdf8-12564f72474c req-fd572dab-8a7e-4372-befc-36f7ef1dd56d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "f67d89a8-836a-4f47-af8d-37cf99529275-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:47:31 np0005548731 nova_compute[232433]: 2025-12-06 07:47:31.394 232437 DEBUG nova.compute.manager [req-ffec59a4-a8a3-4360-bdf8-12564f72474c req-fd572dab-8a7e-4372-befc-36f7ef1dd56d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f67d89a8-836a-4f47-af8d-37cf99529275] No waiting events found dispatching network-vif-plugged-d4087b80-19fa-44d3-8dd5-406c2b01b6f4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 02:47:31 np0005548731 nova_compute[232433]: 2025-12-06 07:47:31.395 232437 WARNING nova.compute.manager [req-ffec59a4-a8a3-4360-bdf8-12564f72474c req-fd572dab-8a7e-4372-befc-36f7ef1dd56d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f67d89a8-836a-4f47-af8d-37cf99529275] Received unexpected event network-vif-plugged-d4087b80-19fa-44d3-8dd5-406c2b01b6f4 for instance with vm_state active and task_state deleting.#033[00m
Dec  6 02:47:31 np0005548731 nova_compute[232433]: 2025-12-06 07:47:31.433 232437 DEBUG nova.virt.libvirt.driver [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] skipping disk for instance-000000a1 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  6 02:47:31 np0005548731 nova_compute[232433]: 2025-12-06 07:47:31.434 232437 DEBUG nova.virt.libvirt.driver [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] skipping disk for instance-000000a1 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  6 02:47:31 np0005548731 nova_compute[232433]: 2025-12-06 07:47:31.440 232437 DEBUG nova.virt.libvirt.driver [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] skipping disk for instance-0000009f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  6 02:47:31 np0005548731 nova_compute[232433]: 2025-12-06 07:47:31.440 232437 DEBUG nova.virt.libvirt.driver [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] skipping disk for instance-0000009f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  6 02:47:31 np0005548731 nova_compute[232433]: 2025-12-06 07:47:31.441 232437 DEBUG nova.virt.libvirt.driver [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] skipping disk for instance-0000009f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  6 02:47:31 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e357 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:47:31 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:47:31 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:47:31 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:47:31.592 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:47:31 np0005548731 nova_compute[232433]: 2025-12-06 07:47:31.638 232437 WARNING nova.virt.libvirt.driver [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  6 02:47:31 np0005548731 nova_compute[232433]: 2025-12-06 07:47:31.639 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=3863MB free_disk=20.693702697753906GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, 
"label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  6 02:47:31 np0005548731 nova_compute[232433]: 2025-12-06 07:47:31.640 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:47:31 np0005548731 nova_compute[232433]: 2025-12-06 07:47:31.640 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:47:32 np0005548731 nova_compute[232433]: 2025-12-06 07:47:32.030 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:47:32 np0005548731 nova_compute[232433]: 2025-12-06 07:47:32.128 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Instance 2b4b3181-da12-4705-9215-7ee5b869102b actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  6 02:47:32 np0005548731 nova_compute[232433]: 2025-12-06 07:47:32.129 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Instance f67d89a8-836a-4f47-af8d-37cf99529275 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  6 02:47:32 np0005548731 nova_compute[232433]: 2025-12-06 07:47:32.129 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Instance e268006c-e26b-4400-8c9a-da1925cd1a57 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  6 02:47:32 np0005548731 nova_compute[232433]: 2025-12-06 07:47:32.129 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  6 02:47:32 np0005548731 nova_compute[232433]: 2025-12-06 07:47:32.129 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7680MB used_ram=896MB phys_disk=20GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  6 02:47:32 np0005548731 nova_compute[232433]: 2025-12-06 07:47:32.207 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:47:32 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 02:47:32 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3830138832' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 02:47:32 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:47:32 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:47:32 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:47:32.626 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:47:32 np0005548731 nova_compute[232433]: 2025-12-06 07:47:32.638 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.432s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:47:32 np0005548731 nova_compute[232433]: 2025-12-06 07:47:32.644 232437 DEBUG nova.compute.provider_tree [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Inventory has not changed in ProviderTree for provider: 6d00757a-082f-486d-ae84-869a2ba2e6e7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  6 02:47:32 np0005548731 nova_compute[232433]: 2025-12-06 07:47:32.814 232437 DEBUG nova.scheduler.client.report [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Inventory has not changed for provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  6 02:47:32 np0005548731 nova_compute[232433]: 2025-12-06 07:47:32.819 232437 DEBUG nova.network.neutron [-] [instance: f67d89a8-836a-4f47-af8d-37cf99529275] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 02:47:32 np0005548731 nova_compute[232433]: 2025-12-06 07:47:32.861 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  6 02:47:32 np0005548731 nova_compute[232433]: 2025-12-06 07:47:32.862 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.222s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:47:32 np0005548731 nova_compute[232433]: 2025-12-06 07:47:32.862 232437 INFO nova.compute.manager [-] [instance: f67d89a8-836a-4f47-af8d-37cf99529275] Took 2.88 seconds to deallocate network for instance.#033[00m
Dec  6 02:47:33 np0005548731 nova_compute[232433]: 2025-12-06 07:47:33.144 232437 DEBUG oslo_concurrency.lockutils [None req-b196ebd4-a18a-4390-927e-509cdf25ce14 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:47:33 np0005548731 nova_compute[232433]: 2025-12-06 07:47:33.145 232437 DEBUG oslo_concurrency.lockutils [None req-b196ebd4-a18a-4390-927e-509cdf25ce14 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:47:33 np0005548731 nova_compute[232433]: 2025-12-06 07:47:33.421 232437 DEBUG nova.compute.manager [req-da7eff52-e7ab-4fe5-a277-4c8c56e5dff1 req-5978ab7c-760b-4e82-8718-bfd538c562a4 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f67d89a8-836a-4f47-af8d-37cf99529275] Received event network-vif-deleted-d4087b80-19fa-44d3-8dd5-406c2b01b6f4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:47:33 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:47:33 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:47:33 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:47:33.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:47:33 np0005548731 nova_compute[232433]: 2025-12-06 07:47:33.791 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 02:47:33 np0005548731 nova_compute[232433]: 2025-12-06 07:47:33.863 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  6 02:47:33 np0005548731 nova_compute[232433]: 2025-12-06 07:47:33.978 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  6 02:47:33 np0005548731 nova_compute[232433]: 2025-12-06 07:47:33.979 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  6 02:47:34 np0005548731 nova_compute[232433]: 2025-12-06 07:47:34.219 232437 DEBUG oslo_concurrency.processutils [None req-b196ebd4-a18a-4390-927e-509cdf25ce14 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  6 02:47:34 np0005548731 nova_compute[232433]: 2025-12-06 07:47:34.260 232437 DEBUG nova.compute.manager [req-8529c70e-a3d9-42cd-b0b2-a1deec0955bf req-d4e705b5-dbcc-4dd7-8d4c-64a339878ef1 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f67d89a8-836a-4f47-af8d-37cf99529275] Received event network-vif-plugged-d4087b80-19fa-44d3-8dd5-406c2b01b6f4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec  6 02:47:34 np0005548731 nova_compute[232433]: 2025-12-06 07:47:34.261 232437 DEBUG oslo_concurrency.lockutils [req-8529c70e-a3d9-42cd-b0b2-a1deec0955bf req-d4e705b5-dbcc-4dd7-8d4c-64a339878ef1 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "f67d89a8-836a-4f47-af8d-37cf99529275-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  6 02:47:34 np0005548731 nova_compute[232433]: 2025-12-06 07:47:34.261 232437 DEBUG oslo_concurrency.lockutils [req-8529c70e-a3d9-42cd-b0b2-a1deec0955bf req-d4e705b5-dbcc-4dd7-8d4c-64a339878ef1 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "f67d89a8-836a-4f47-af8d-37cf99529275-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  6 02:47:34 np0005548731 nova_compute[232433]: 2025-12-06 07:47:34.261 232437 DEBUG oslo_concurrency.lockutils [req-8529c70e-a3d9-42cd-b0b2-a1deec0955bf req-d4e705b5-dbcc-4dd7-8d4c-64a339878ef1 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "f67d89a8-836a-4f47-af8d-37cf99529275-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  6 02:47:34 np0005548731 nova_compute[232433]: 2025-12-06 07:47:34.262 232437 DEBUG nova.compute.manager [req-8529c70e-a3d9-42cd-b0b2-a1deec0955bf req-d4e705b5-dbcc-4dd7-8d4c-64a339878ef1 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f67d89a8-836a-4f47-af8d-37cf99529275] No waiting events found dispatching network-vif-plugged-d4087b80-19fa-44d3-8dd5-406c2b01b6f4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec  6 02:47:34 np0005548731 nova_compute[232433]: 2025-12-06 07:47:34.262 232437 WARNING nova.compute.manager [req-8529c70e-a3d9-42cd-b0b2-a1deec0955bf req-d4e705b5-dbcc-4dd7-8d4c-64a339878ef1 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f67d89a8-836a-4f47-af8d-37cf99529275] Received unexpected event network-vif-plugged-d4087b80-19fa-44d3-8dd5-406c2b01b6f4 for instance with vm_state deleted and task_state None.
Dec  6 02:47:34 np0005548731 nova_compute[232433]: 2025-12-06 07:47:34.262 232437 DEBUG nova.compute.manager [req-8529c70e-a3d9-42cd-b0b2-a1deec0955bf req-d4e705b5-dbcc-4dd7-8d4c-64a339878ef1 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: e268006c-e26b-4400-8c9a-da1925cd1a57] Received event network-changed-9182be1d-dcb1-498a-af53-8a36faebc1e2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec  6 02:47:34 np0005548731 nova_compute[232433]: 2025-12-06 07:47:34.262 232437 DEBUG nova.compute.manager [req-8529c70e-a3d9-42cd-b0b2-a1deec0955bf req-d4e705b5-dbcc-4dd7-8d4c-64a339878ef1 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: e268006c-e26b-4400-8c9a-da1925cd1a57] Refreshing instance network info cache due to event network-changed-9182be1d-dcb1-498a-af53-8a36faebc1e2. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec  6 02:47:34 np0005548731 nova_compute[232433]: 2025-12-06 07:47:34.263 232437 DEBUG oslo_concurrency.lockutils [req-8529c70e-a3d9-42cd-b0b2-a1deec0955bf req-d4e705b5-dbcc-4dd7-8d4c-64a339878ef1 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-e268006c-e26b-4400-8c9a-da1925cd1a57" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec  6 02:47:34 np0005548731 nova_compute[232433]: 2025-12-06 07:47:34.263 232437 DEBUG oslo_concurrency.lockutils [req-8529c70e-a3d9-42cd-b0b2-a1deec0955bf req-d4e705b5-dbcc-4dd7-8d4c-64a339878ef1 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-e268006c-e26b-4400-8c9a-da1925cd1a57" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec  6 02:47:34 np0005548731 nova_compute[232433]: 2025-12-06 07:47:34.263 232437 DEBUG nova.network.neutron [req-8529c70e-a3d9-42cd-b0b2-a1deec0955bf req-d4e705b5-dbcc-4dd7-8d4c-64a339878ef1 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: e268006c-e26b-4400-8c9a-da1925cd1a57] Refreshing network info cache for port 9182be1d-dcb1-498a-af53-8a36faebc1e2 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec  6 02:47:34 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:47:34 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:47:34 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:47:34.628 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:47:34 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 02:47:34 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3956898978' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 02:47:34 np0005548731 nova_compute[232433]: 2025-12-06 07:47:34.662 232437 DEBUG oslo_concurrency.processutils [None req-b196ebd4-a18a-4390-927e-509cdf25ce14 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.443s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  6 02:47:34 np0005548731 nova_compute[232433]: 2025-12-06 07:47:34.668 232437 DEBUG nova.compute.provider_tree [None req-b196ebd4-a18a-4390-927e-509cdf25ce14 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] Inventory has not changed in ProviderTree for provider: 6d00757a-082f-486d-ae84-869a2ba2e6e7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec  6 02:47:34 np0005548731 nova_compute[232433]: 2025-12-06 07:47:34.684 232437 DEBUG nova.scheduler.client.report [None req-b196ebd4-a18a-4390-927e-509cdf25ce14 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] Inventory has not changed for provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec  6 02:47:34 np0005548731 nova_compute[232433]: 2025-12-06 07:47:34.719 232437 DEBUG oslo_concurrency.lockutils [None req-b196ebd4-a18a-4390-927e-509cdf25ce14 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.573s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  6 02:47:34 np0005548731 nova_compute[232433]: 2025-12-06 07:47:34.749 232437 INFO nova.scheduler.client.report [None req-b196ebd4-a18a-4390-927e-509cdf25ce14 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] Deleted allocations for instance f67d89a8-836a-4f47-af8d-37cf99529275
Dec  6 02:47:34 np0005548731 nova_compute[232433]: 2025-12-06 07:47:34.808 232437 DEBUG oslo_concurrency.lockutils [None req-b196ebd4-a18a-4390-927e-509cdf25ce14 1035ecd55ed54b57aa35fe32fb915cc5 4ec9294f6d4b4f44a72414374d646a4a - - default default] Lock "f67d89a8-836a-4f47-af8d-37cf99529275" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 9.505s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  6 02:47:34 np0005548731 ceph-osd[80147]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #55. Immutable memtables: 11.
Dec  6 02:47:35 np0005548731 nova_compute[232433]: 2025-12-06 07:47:35.375 232437 DEBUG nova.network.neutron [req-8529c70e-a3d9-42cd-b0b2-a1deec0955bf req-d4e705b5-dbcc-4dd7-8d4c-64a339878ef1 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: e268006c-e26b-4400-8c9a-da1925cd1a57] Updated VIF entry in instance network info cache for port 9182be1d-dcb1-498a-af53-8a36faebc1e2. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec  6 02:47:35 np0005548731 nova_compute[232433]: 2025-12-06 07:47:35.376 232437 DEBUG nova.network.neutron [req-8529c70e-a3d9-42cd-b0b2-a1deec0955bf req-d4e705b5-dbcc-4dd7-8d4c-64a339878ef1 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: e268006c-e26b-4400-8c9a-da1925cd1a57] Updating instance_info_cache with network_info: [{"id": "9182be1d-dcb1-498a-af53-8a36faebc1e2", "address": "fa:16:3e:06:7a:7f", "network": {"id": "61f8d3af-3627-497f-85d1-05f726aad6e5", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-1070596354-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.231", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "329d4c9562c84ec5a42ca68894cbf27f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9182be1d-dc", "ovs_interfaceid": "9182be1d-dcb1-498a-af53-8a36faebc1e2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec  6 02:47:35 np0005548731 nova_compute[232433]: 2025-12-06 07:47:35.394 232437 DEBUG oslo_concurrency.lockutils [req-8529c70e-a3d9-42cd-b0b2-a1deec0955bf req-d4e705b5-dbcc-4dd7-8d4c-64a339878ef1 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-e268006c-e26b-4400-8c9a-da1925cd1a57" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec  6 02:47:35 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:47:35 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:47:35 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:47:35.597 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:47:36 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e357 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:47:36 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:47:36 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:47:36 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:47:36.630 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:47:37 np0005548731 nova_compute[232433]: 2025-12-06 07:47:37.033 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 02:47:37 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:47:37 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:47:37 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:47:37.601 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:47:38 np0005548731 ovn_controller[133927]: 2025-12-06T07:47:38Z|00784|binding|INFO|Releasing lport 8ee8956a-e293-43e2-a41a-6931cf826f0b from this chassis (sb_readonly=0)
Dec  6 02:47:38 np0005548731 ovn_controller[133927]: 2025-12-06T07:47:38Z|00785|binding|INFO|Releasing lport 6b94462b-5171-4a4e-8d60-ac645842c400 from this chassis (sb_readonly=0)
Dec  6 02:47:38 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:47:38 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:47:38 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:47:38.632 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:47:38 np0005548731 nova_compute[232433]: 2025-12-06 07:47:38.655 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 02:47:38 np0005548731 ovn_controller[133927]: 2025-12-06T07:47:38Z|00101|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:06:7a:7f 10.100.0.9
Dec  6 02:47:38 np0005548731 ovn_controller[133927]: 2025-12-06T07:47:38Z|00102|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:06:7a:7f 10.100.0.9
Dec  6 02:47:38 np0005548731 nova_compute[232433]: 2025-12-06 07:47:38.797 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 02:47:39 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:47:39 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:47:39 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:47:39.604 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:47:40 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:47:40 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 02:47:40 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:47:40.634 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 02:47:41 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e357 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:47:41 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:47:41 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 02:47:41 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:47:41.607 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 02:47:41 np0005548731 nova_compute[232433]: 2025-12-06 07:47:41.953 232437 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765007246.9503407, f67d89a8-836a-4f47-af8d-37cf99529275 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec  6 02:47:41 np0005548731 nova_compute[232433]: 2025-12-06 07:47:41.954 232437 INFO nova.compute.manager [-] [instance: f67d89a8-836a-4f47-af8d-37cf99529275] VM Stopped (Lifecycle Event)
Dec  6 02:47:42 np0005548731 nova_compute[232433]: 2025-12-06 07:47:42.035 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 02:47:42 np0005548731 nova_compute[232433]: 2025-12-06 07:47:42.085 232437 DEBUG nova.compute.manager [None req-860dbeed-617a-4b9b-89ca-3ec1c85abee9 - - - - - -] [instance: f67d89a8-836a-4f47-af8d-37cf99529275] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec  6 02:47:42 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:47:42 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 02:47:42 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:47:42.636 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 02:47:43 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:47:43 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:47:43 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:47:43.609 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:47:43 np0005548731 nova_compute[232433]: 2025-12-06 07:47:43.794 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 02:47:44 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  6 02:47:44 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 02:47:44 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  6 02:47:44 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:47:44 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:47:44 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:47:44.638 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:47:44 np0005548731 nova_compute[232433]: 2025-12-06 07:47:44.738 232437 DEBUG oslo_concurrency.lockutils [None req-3c94a278-12c3-4e11-a46e-dd8e892e4714 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Acquiring lock "2b4b3181-da12-4705-9215-7ee5b869102b" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  6 02:47:44 np0005548731 nova_compute[232433]: 2025-12-06 07:47:44.738 232437 DEBUG oslo_concurrency.lockutils [None req-3c94a278-12c3-4e11-a46e-dd8e892e4714 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Lock "2b4b3181-da12-4705-9215-7ee5b869102b" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  6 02:47:44 np0005548731 nova_compute[232433]: 2025-12-06 07:47:44.757 232437 INFO nova.compute.manager [None req-3c94a278-12c3-4e11-a46e-dd8e892e4714 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] [instance: 2b4b3181-da12-4705-9215-7ee5b869102b] Detaching volume 1392b4ff-bd9d-4acf-a769-40407523ee0a
Dec  6 02:47:44 np0005548731 nova_compute[232433]: 2025-12-06 07:47:44.961 232437 INFO nova.virt.block_device [None req-3c94a278-12c3-4e11-a46e-dd8e892e4714 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] [instance: 2b4b3181-da12-4705-9215-7ee5b869102b] Attempting to driver detach volume 1392b4ff-bd9d-4acf-a769-40407523ee0a from mountpoint /dev/vdb
Dec  6 02:47:44 np0005548731 nova_compute[232433]: 2025-12-06 07:47:44.974 232437 DEBUG nova.virt.libvirt.driver [None req-3c94a278-12c3-4e11-a46e-dd8e892e4714 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Attempting to detach device vdb from instance 2b4b3181-da12-4705-9215-7ee5b869102b from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Dec  6 02:47:44 np0005548731 nova_compute[232433]: 2025-12-06 07:47:44.975 232437 DEBUG nova.virt.libvirt.guest [None req-3c94a278-12c3-4e11-a46e-dd8e892e4714 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] detach device xml: <disk type="network" device="disk">
Dec  6 02:47:44 np0005548731 nova_compute[232433]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Dec  6 02:47:44 np0005548731 nova_compute[232433]:  <source protocol="rbd" name="volumes/volume-1392b4ff-bd9d-4acf-a769-40407523ee0a">
Dec  6 02:47:44 np0005548731 nova_compute[232433]:    <host name="192.168.122.100" port="6789"/>
Dec  6 02:47:44 np0005548731 nova_compute[232433]:    <host name="192.168.122.102" port="6789"/>
Dec  6 02:47:44 np0005548731 nova_compute[232433]:    <host name="192.168.122.101" port="6789"/>
Dec  6 02:47:44 np0005548731 nova_compute[232433]:  </source>
Dec  6 02:47:44 np0005548731 nova_compute[232433]:  <target dev="vdb" bus="virtio"/>
Dec  6 02:47:44 np0005548731 nova_compute[232433]:  <serial>1392b4ff-bd9d-4acf-a769-40407523ee0a</serial>
Dec  6 02:47:44 np0005548731 nova_compute[232433]:  <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Dec  6 02:47:44 np0005548731 nova_compute[232433]: </disk>
Dec  6 02:47:44 np0005548731 nova_compute[232433]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Dec  6 02:47:44 np0005548731 nova_compute[232433]: 2025-12-06 07:47:44.983 232437 INFO nova.virt.libvirt.driver [None req-3c94a278-12c3-4e11-a46e-dd8e892e4714 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Successfully detached device vdb from instance 2b4b3181-da12-4705-9215-7ee5b869102b from the persistent domain config.
Dec  6 02:47:44 np0005548731 nova_compute[232433]: 2025-12-06 07:47:44.983 232437 DEBUG nova.virt.libvirt.driver [None req-3c94a278-12c3-4e11-a46e-dd8e892e4714 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance 2b4b3181-da12-4705-9215-7ee5b869102b from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Dec  6 02:47:44 np0005548731 nova_compute[232433]: 2025-12-06 07:47:44.984 232437 DEBUG nova.virt.libvirt.guest [None req-3c94a278-12c3-4e11-a46e-dd8e892e4714 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] detach device xml: <disk type="network" device="disk">
Dec  6 02:47:44 np0005548731 nova_compute[232433]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Dec  6 02:47:44 np0005548731 nova_compute[232433]:  <source protocol="rbd" name="volumes/volume-1392b4ff-bd9d-4acf-a769-40407523ee0a">
Dec  6 02:47:44 np0005548731 nova_compute[232433]:    <host name="192.168.122.100" port="6789"/>
Dec  6 02:47:44 np0005548731 nova_compute[232433]:    <host name="192.168.122.102" port="6789"/>
Dec  6 02:47:44 np0005548731 nova_compute[232433]:    <host name="192.168.122.101" port="6789"/>
Dec  6 02:47:44 np0005548731 nova_compute[232433]:  </source>
Dec  6 02:47:44 np0005548731 nova_compute[232433]:  <target dev="vdb" bus="virtio"/>
Dec  6 02:47:44 np0005548731 nova_compute[232433]:  <serial>1392b4ff-bd9d-4acf-a769-40407523ee0a</serial>
Dec  6 02:47:44 np0005548731 nova_compute[232433]:  <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Dec  6 02:47:44 np0005548731 nova_compute[232433]: </disk>
Dec  6 02:47:44 np0005548731 nova_compute[232433]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Dec  6 02:47:45 np0005548731 nova_compute[232433]: 2025-12-06 07:47:45.099 232437 DEBUG nova.virt.libvirt.driver [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Received event <DeviceRemovedEvent: 1765007265.0989041, 2b4b3181-da12-4705-9215-7ee5b869102b => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Dec  6 02:47:45 np0005548731 nova_compute[232433]: 2025-12-06 07:47:45.101 232437 DEBUG nova.virt.libvirt.driver [None req-3c94a278-12c3-4e11-a46e-dd8e892e4714 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance 2b4b3181-da12-4705-9215-7ee5b869102b _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Dec  6 02:47:45 np0005548731 nova_compute[232433]: 2025-12-06 07:47:45.104 232437 INFO nova.virt.libvirt.driver [None req-3c94a278-12c3-4e11-a46e-dd8e892e4714 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Successfully detached device vdb from instance 2b4b3181-da12-4705-9215-7ee5b869102b from the live domain config.
Dec  6 02:47:45 np0005548731 ceph-mon[77458]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #115. Immutable memtables: 0.
Dec  6 02:47:45 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:47:45.143978) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec  6 02:47:45 np0005548731 ceph-mon[77458]: rocksdb: [db/flush_job.cc:856] [default] [JOB 71] Flushing memtable with next log file: 115
Dec  6 02:47:45 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765007265144195, "job": 71, "event": "flush_started", "num_memtables": 1, "num_entries": 874, "num_deletes": 251, "total_data_size": 1546195, "memory_usage": 1573600, "flush_reason": "Manual Compaction"}
Dec  6 02:47:45 np0005548731 ceph-mon[77458]: rocksdb: [db/flush_job.cc:885] [default] [JOB 71] Level-0 flush table #116: started
Dec  6 02:47:45 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765007265153280, "cf_name": "default", "job": 71, "event": "table_file_creation", "file_number": 116, "file_size": 1019453, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 59166, "largest_seqno": 60035, "table_properties": {"data_size": 1015316, "index_size": 1853, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1221, "raw_key_size": 8795, "raw_average_key_size": 18, "raw_value_size": 1006798, "raw_average_value_size": 2071, "num_data_blocks": 82, "num_entries": 486, "num_filter_entries": 486, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765007205, "oldest_key_time": 1765007205, "file_creation_time": 1765007265, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bc01f9a1-fda8-4008-aee1-1fa5ff4371a1", "db_session_id": "CWBL8WMDM4D79BU86E9T", "orig_file_number": 116, "seqno_to_time_mapping": "N/A"}}
Dec  6 02:47:45 np0005548731 ceph-mon[77458]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 71] Flush lasted 9225 microseconds, and 4423 cpu microseconds.
Dec  6 02:47:45 np0005548731 ceph-mon[77458]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  6 02:47:45 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:47:45.153366) [db/flush_job.cc:967] [default] [JOB 71] Level-0 flush table #116: 1019453 bytes OK
Dec  6 02:47:45 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:47:45.153387) [db/memtable_list.cc:519] [default] Level-0 commit table #116 started
Dec  6 02:47:45 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:47:45.156359) [db/memtable_list.cc:722] [default] Level-0 commit table #116: memtable #1 done
Dec  6 02:47:45 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:47:45.156382) EVENT_LOG_v1 {"time_micros": 1765007265156376, "job": 71, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec  6 02:47:45 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:47:45.156402) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec  6 02:47:45 np0005548731 ceph-mon[77458]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 71] Try to delete WAL files size 1541662, prev total WAL file size 1541662, number of live WAL files 2.
Dec  6 02:47:45 np0005548731 ceph-mon[77458]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000112.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  6 02:47:45 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:47:45.157462) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6B7600323530' seq:72057594037927935, type:22 .. '6B7600353031' seq:0, type:0; will stop at (end)
Dec  6 02:47:45 np0005548731 ceph-mon[77458]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 72] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec  6 02:47:45 np0005548731 ceph-mon[77458]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 71 Base level 0, inputs: [116(995KB)], [114(11MB)]
Dec  6 02:47:45 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765007265157506, "job": 72, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [116], "files_L6": [114], "score": -1, "input_data_size": 13465107, "oldest_snapshot_seqno": -1}
Dec  6 02:47:45 np0005548731 ceph-mon[77458]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 72] Generated table #117: 9074 keys, 12373168 bytes, temperature: kUnknown
Dec  6 02:47:45 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765007265240469, "cf_name": "default", "job": 72, "event": "table_file_creation", "file_number": 117, "file_size": 12373168, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12313450, "index_size": 35926, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 22725, "raw_key_size": 238485, "raw_average_key_size": 26, "raw_value_size": 12152725, "raw_average_value_size": 1339, "num_data_blocks": 1380, "num_entries": 9074, "num_filter_entries": 9074, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765002564, "oldest_key_time": 0, "file_creation_time": 1765007265, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bc01f9a1-fda8-4008-aee1-1fa5ff4371a1", "db_session_id": "CWBL8WMDM4D79BU86E9T", "orig_file_number": 117, "seqno_to_time_mapping": "N/A"}}
Dec  6 02:47:45 np0005548731 ceph-mon[77458]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  6 02:47:45 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:47:45.240811) [db/compaction/compaction_job.cc:1663] [default] [JOB 72] Compacted 1@0 + 1@6 files to L6 => 12373168 bytes
Dec  6 02:47:45 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:47:45.243246) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 162.0 rd, 148.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.0, 11.9 +0.0 blob) out(11.8 +0.0 blob), read-write-amplify(25.3) write-amplify(12.1) OK, records in: 9594, records dropped: 520 output_compression: NoCompression
Dec  6 02:47:45 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:47:45.243268) EVENT_LOG_v1 {"time_micros": 1765007265243259, "job": 72, "event": "compaction_finished", "compaction_time_micros": 83101, "compaction_time_cpu_micros": 30642, "output_level": 6, "num_output_files": 1, "total_output_size": 12373168, "num_input_records": 9594, "num_output_records": 9074, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec  6 02:47:45 np0005548731 ceph-mon[77458]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000116.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  6 02:47:45 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765007265243578, "job": 72, "event": "table_file_deletion", "file_number": 116}
Dec  6 02:47:45 np0005548731 ceph-mon[77458]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000114.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  6 02:47:45 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765007265245480, "job": 72, "event": "table_file_deletion", "file_number": 114}
Dec  6 02:47:45 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:47:45.157420) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 02:47:45 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:47:45.245599) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 02:47:45 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:47:45.245607) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 02:47:45 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:47:45.245609) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 02:47:45 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:47:45.245612) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 02:47:45 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:47:45.245615) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 02:47:45 np0005548731 nova_compute[232433]: 2025-12-06 07:47:45.427 232437 DEBUG nova.objects.instance [None req-3c94a278-12c3-4e11-a46e-dd8e892e4714 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Lazy-loading 'flavor' on Instance uuid 2b4b3181-da12-4705-9215-7ee5b869102b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:47:45 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:47:45 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:47:45 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:47:45.612 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:47:45 np0005548731 nova_compute[232433]: 2025-12-06 07:47:45.814 232437 DEBUG oslo_concurrency.lockutils [None req-3c94a278-12c3-4e11-a46e-dd8e892e4714 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Lock "2b4b3181-da12-4705-9215-7ee5b869102b" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 1.075s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:47:45 np0005548731 podman[305609]: 2025-12-06 07:47:45.895998893 +0000 UTC m=+0.055985336 container health_status 26c2ce9bdb83c6db5ccad7f5b2a7cb4e9354ab0235ad2f942ea42c8feda2142b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, 
org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Dec  6 02:47:45 np0005548731 podman[305611]: 2025-12-06 07:47:45.908437296 +0000 UTC m=+0.063232293 container health_status ae4fd08cdec31d6ae2a6b91ffff7deb6ca5a8375cb52ee4b685cc6f25796824e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Dec  6 02:47:45 np0005548731 podman[305610]: 2025-12-06 07:47:45.941386419 +0000 UTC m=+0.094478894 container health_status a118c3becf56cfbcae208065771ebf10a65162bb7b96389971293e534823fb78 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller, tcib_managed=true, config_id=ovn_controller)
Dec  6 02:47:46 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e358 e358: 3 total, 3 up, 3 in
Dec  6 02:47:46 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e358 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:47:46 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:47:46 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:47:46 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:47:46.640 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:47:47 np0005548731 nova_compute[232433]: 2025-12-06 07:47:47.037 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:47:47 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:47:47 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:47:47 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:47:47.616 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:47:47 np0005548731 nova_compute[232433]: 2025-12-06 07:47:47.635 232437 DEBUG oslo_concurrency.lockutils [None req-886f8678-ee95-4079-b7ef-c3f5b3ac976d e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Acquiring lock "2b4b3181-da12-4705-9215-7ee5b869102b" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:47:47 np0005548731 nova_compute[232433]: 2025-12-06 07:47:47.636 232437 DEBUG oslo_concurrency.lockutils [None req-886f8678-ee95-4079-b7ef-c3f5b3ac976d e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Lock "2b4b3181-da12-4705-9215-7ee5b869102b" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:47:47 np0005548731 nova_compute[232433]: 2025-12-06 07:47:47.637 232437 DEBUG oslo_concurrency.lockutils [None req-886f8678-ee95-4079-b7ef-c3f5b3ac976d e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Acquiring lock "2b4b3181-da12-4705-9215-7ee5b869102b-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:47:47 np0005548731 nova_compute[232433]: 2025-12-06 07:47:47.637 232437 DEBUG oslo_concurrency.lockutils [None req-886f8678-ee95-4079-b7ef-c3f5b3ac976d e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Lock "2b4b3181-da12-4705-9215-7ee5b869102b-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:47:47 np0005548731 nova_compute[232433]: 2025-12-06 07:47:47.638 232437 DEBUG oslo_concurrency.lockutils [None req-886f8678-ee95-4079-b7ef-c3f5b3ac976d e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Lock "2b4b3181-da12-4705-9215-7ee5b869102b-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:47:47 np0005548731 nova_compute[232433]: 2025-12-06 07:47:47.640 232437 INFO nova.compute.manager [None req-886f8678-ee95-4079-b7ef-c3f5b3ac976d e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] [instance: 2b4b3181-da12-4705-9215-7ee5b869102b] Terminating instance#033[00m
Dec  6 02:47:47 np0005548731 nova_compute[232433]: 2025-12-06 07:47:47.642 232437 DEBUG nova.compute.manager [None req-886f8678-ee95-4079-b7ef-c3f5b3ac976d e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] [instance: 2b4b3181-da12-4705-9215-7ee5b869102b] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec  6 02:47:48 np0005548731 kernel: tap879edf90-08 (unregistering): left promiscuous mode
Dec  6 02:47:48 np0005548731 NetworkManager[49182]: <info>  [1765007268.5333] device (tap879edf90-08): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec  6 02:47:48 np0005548731 nova_compute[232433]: 2025-12-06 07:47:48.543 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:47:48 np0005548731 ovn_controller[133927]: 2025-12-06T07:47:48Z|00786|binding|INFO|Releasing lport 879edf90-0812-4adf-b171-d993c6cfb26c from this chassis (sb_readonly=0)
Dec  6 02:47:48 np0005548731 ovn_controller[133927]: 2025-12-06T07:47:48Z|00787|binding|INFO|Setting lport 879edf90-0812-4adf-b171-d993c6cfb26c down in Southbound
Dec  6 02:47:48 np0005548731 ovn_controller[133927]: 2025-12-06T07:47:48Z|00788|binding|INFO|Removing iface tap879edf90-08 ovn-installed in OVS
Dec  6 02:47:48 np0005548731 nova_compute[232433]: 2025-12-06 07:47:48.546 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:47:48 np0005548731 nova_compute[232433]: 2025-12-06 07:47:48.564 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:47:48 np0005548731 systemd[1]: machine-qemu\x2d78\x2dinstance\x2d0000009f.scope: Deactivated successfully.
Dec  6 02:47:48 np0005548731 systemd[1]: machine-qemu\x2d78\x2dinstance\x2d0000009f.scope: Consumed 14.627s CPU time.
Dec  6 02:47:48 np0005548731 systemd-machined[195355]: Machine qemu-78-instance-0000009f terminated.
Dec  6 02:47:48 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:47:48 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:47:48 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:47:48.642 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:47:48 np0005548731 nova_compute[232433]: 2025-12-06 07:47:48.678 232437 INFO nova.virt.libvirt.driver [-] [instance: 2b4b3181-da12-4705-9215-7ee5b869102b] Instance destroyed successfully.#033[00m
Dec  6 02:47:48 np0005548731 nova_compute[232433]: 2025-12-06 07:47:48.678 232437 DEBUG nova.objects.instance [None req-886f8678-ee95-4079-b7ef-c3f5b3ac976d e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Lazy-loading 'resources' on Instance uuid 2b4b3181-da12-4705-9215-7ee5b869102b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:47:48 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:47:48.786 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9c:dc:f4 10.100.0.7'], port_security=['fa:16:3e:9c:dc:f4 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '2b4b3181-da12-4705-9215-7ee5b869102b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6d1a17d6-5e44-40b7-832a-81cb86c02e71', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f44ecb8bdc7e4692a299e29603301124', 'neutron:revision_number': '8', 'neutron:security_group_ids': 'f9c55c19-5297-4b9d-814f-3c2f976ceba7', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.221', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ef95e15f-f36a-4631-8598-89c7e0374fce, chassis=[], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], logical_port=879edf90-0812-4adf-b171-d993c6cfb26c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 02:47:48 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:47:48.787 143965 INFO neutron.agent.ovn.metadata.agent [-] Port 879edf90-0812-4adf-b171-d993c6cfb26c in datapath 6d1a17d6-5e44-40b7-832a-81cb86c02e71 unbound from our chassis#033[00m
Dec  6 02:47:48 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:47:48.788 143965 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 6d1a17d6-5e44-40b7-832a-81cb86c02e71, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec  6 02:47:48 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:47:48.789 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[7f0af275-5e5a-4d22-a1ce-3c654320d1f5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:47:48 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:47:48.789 143965 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-6d1a17d6-5e44-40b7-832a-81cb86c02e71 namespace which is not needed anymore#033[00m
Dec  6 02:47:48 np0005548731 nova_compute[232433]: 2025-12-06 07:47:48.797 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:47:48 np0005548731 nova_compute[232433]: 2025-12-06 07:47:48.808 232437 DEBUG nova.virt.libvirt.vif [None req-886f8678-ee95-4079-b7ef-c3f5b3ac976d e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T07:46:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerStableDeviceRescueTest-server-1284166049',display_name='tempest-ServerStableDeviceRescueTest-server-1284166049',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serverstabledevicerescuetest-server-1284166049',id=159,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGiOM3BHZdMb9T+i2z28aV9d25R+ylqHokCUcVIaz2h7DQq++pQYpAQHDFB53HHc1FWwSamucUXGjXIy3rSPSRgsSGsD3yr/u/2kWMgZhyKrH9WAv41/n4dRvjMhKO5fVA==',key_name='tempest-keypair-1110836856',keypairs=<?>,launch_index=0,launched_at=2025-12-06T07:47:05Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='f44ecb8bdc7e4692a299e29603301124',ramdisk_id='',reservation_id='r-73o1o3hz',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerStableDeviceRescueTest-1830949011',owner_user_name='tempest-ServerStableDeviceRescueTest-1830949011-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T07:47:18Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='e997a5eeee174b368a43ed8cb35fa1d0',uuid=2b4b3181-da12-4705-9215-7ee5b869102b,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "879edf90-0812-4adf-b171-d993c6cfb26c", "address": "fa:16:3e:9c:dc:f4", "network": {"id": "6d1a17d6-5e44-40b7-832a-81cb86c02e71", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-1698704235-network", "subnets": [{"cidr": 
"10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.221", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f44ecb8bdc7e4692a299e29603301124", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap879edf90-08", "ovs_interfaceid": "879edf90-0812-4adf-b171-d993c6cfb26c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec  6 02:47:48 np0005548731 nova_compute[232433]: 2025-12-06 07:47:48.808 232437 DEBUG nova.network.os_vif_util [None req-886f8678-ee95-4079-b7ef-c3f5b3ac976d e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Converting VIF {"id": "879edf90-0812-4adf-b171-d993c6cfb26c", "address": "fa:16:3e:9c:dc:f4", "network": {"id": "6d1a17d6-5e44-40b7-832a-81cb86c02e71", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-1698704235-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.221", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f44ecb8bdc7e4692a299e29603301124", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap879edf90-08", "ovs_interfaceid": "879edf90-0812-4adf-b171-d993c6cfb26c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 02:47:48 np0005548731 nova_compute[232433]: 2025-12-06 07:47:48.809 232437 DEBUG nova.network.os_vif_util [None req-886f8678-ee95-4079-b7ef-c3f5b3ac976d e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:9c:dc:f4,bridge_name='br-int',has_traffic_filtering=True,id=879edf90-0812-4adf-b171-d993c6cfb26c,network=Network(6d1a17d6-5e44-40b7-832a-81cb86c02e71),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap879edf90-08') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 02:47:48 np0005548731 nova_compute[232433]: 2025-12-06 07:47:48.809 232437 DEBUG os_vif [None req-886f8678-ee95-4079-b7ef-c3f5b3ac976d e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:9c:dc:f4,bridge_name='br-int',has_traffic_filtering=True,id=879edf90-0812-4adf-b171-d993c6cfb26c,network=Network(6d1a17d6-5e44-40b7-832a-81cb86c02e71),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap879edf90-08') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec  6 02:47:48 np0005548731 nova_compute[232433]: 2025-12-06 07:47:48.810 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:47:48 np0005548731 nova_compute[232433]: 2025-12-06 07:47:48.810 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap879edf90-08, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:47:48 np0005548731 nova_compute[232433]: 2025-12-06 07:47:48.812 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:47:48 np0005548731 nova_compute[232433]: 2025-12-06 07:47:48.814 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:47:48 np0005548731 nova_compute[232433]: 2025-12-06 07:47:48.818 232437 INFO os_vif [None req-886f8678-ee95-4079-b7ef-c3f5b3ac976d e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:9c:dc:f4,bridge_name='br-int',has_traffic_filtering=True,id=879edf90-0812-4adf-b171-d993c6cfb26c,network=Network(6d1a17d6-5e44-40b7-832a-81cb86c02e71),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap879edf90-08')#033[00m
Dec  6 02:47:48 np0005548731 nova_compute[232433]: 2025-12-06 07:47:48.848 232437 DEBUG oslo_concurrency.lockutils [None req-42fcbf2c-a5ec-4965-8b34-3427a1d88dad 5933a27f4e504e23a5d084501196c0fb 329d4c9562c84ec5a42ca68894cbf27f - - default default] Acquiring lock "e268006c-e26b-4400-8c9a-da1925cd1a57" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:47:48 np0005548731 nova_compute[232433]: 2025-12-06 07:47:48.850 232437 DEBUG oslo_concurrency.lockutils [None req-42fcbf2c-a5ec-4965-8b34-3427a1d88dad 5933a27f4e504e23a5d084501196c0fb 329d4c9562c84ec5a42ca68894cbf27f - - default default] Lock "e268006c-e26b-4400-8c9a-da1925cd1a57" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:47:48 np0005548731 neutron-haproxy-ovnmeta-6d1a17d6-5e44-40b7-832a-81cb86c02e71[304854]: [NOTICE]   (304858) : haproxy version is 2.8.14-c23fe91
Dec  6 02:47:48 np0005548731 neutron-haproxy-ovnmeta-6d1a17d6-5e44-40b7-832a-81cb86c02e71[304854]: [NOTICE]   (304858) : path to executable is /usr/sbin/haproxy
Dec  6 02:47:48 np0005548731 neutron-haproxy-ovnmeta-6d1a17d6-5e44-40b7-832a-81cb86c02e71[304854]: [WARNING]  (304858) : Exiting Master process...
Dec  6 02:47:48 np0005548731 neutron-haproxy-ovnmeta-6d1a17d6-5e44-40b7-832a-81cb86c02e71[304854]: [ALERT]    (304858) : Current worker (304860) exited with code 143 (Terminated)
Dec  6 02:47:48 np0005548731 neutron-haproxy-ovnmeta-6d1a17d6-5e44-40b7-832a-81cb86c02e71[304854]: [WARNING]  (304858) : All workers exited. Exiting... (0)
Dec  6 02:47:48 np0005548731 systemd[1]: libpod-8fdda6825839511fd07c0d32e757fd06e6eb46125471194fa158cd1259d49249.scope: Deactivated successfully.
Dec  6 02:47:48 np0005548731 podman[305725]: 2025-12-06 07:47:48.936839371 +0000 UTC m=+0.051007274 container died 8fdda6825839511fd07c0d32e757fd06e6eb46125471194fa158cd1259d49249 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6d1a17d6-5e44-40b7-832a-81cb86c02e71, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec  6 02:47:48 np0005548731 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-8fdda6825839511fd07c0d32e757fd06e6eb46125471194fa158cd1259d49249-userdata-shm.mount: Deactivated successfully.
Dec  6 02:47:48 np0005548731 systemd[1]: var-lib-containers-storage-overlay-20c79853b6cdc4327f207120ebbba64f9acda939186e8940588e8c29860df0a1-merged.mount: Deactivated successfully.
Dec  6 02:47:48 np0005548731 podman[305725]: 2025-12-06 07:47:48.990452089 +0000 UTC m=+0.104620032 container cleanup 8fdda6825839511fd07c0d32e757fd06e6eb46125471194fa158cd1259d49249 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6d1a17d6-5e44-40b7-832a-81cb86c02e71, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec  6 02:47:48 np0005548731 systemd[1]: libpod-conmon-8fdda6825839511fd07c0d32e757fd06e6eb46125471194fa158cd1259d49249.scope: Deactivated successfully.
Dec  6 02:47:49 np0005548731 nova_compute[232433]: 2025-12-06 07:47:49.007 232437 DEBUG nova.objects.instance [None req-42fcbf2c-a5ec-4965-8b34-3427a1d88dad 5933a27f4e504e23a5d084501196c0fb 329d4c9562c84ec5a42ca68894cbf27f - - default default] Lazy-loading 'flavor' on Instance uuid e268006c-e26b-4400-8c9a-da1925cd1a57 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:47:49 np0005548731 podman[305756]: 2025-12-06 07:47:49.060366233 +0000 UTC m=+0.046451224 container remove 8fdda6825839511fd07c0d32e757fd06e6eb46125471194fa158cd1259d49249 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6d1a17d6-5e44-40b7-832a-81cb86c02e71, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  6 02:47:49 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:47:49.066 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[1efc51fd-083a-4c86-823d-cd8147c43213]: (4, ('Sat Dec  6 07:47:48 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-6d1a17d6-5e44-40b7-832a-81cb86c02e71 (8fdda6825839511fd07c0d32e757fd06e6eb46125471194fa158cd1259d49249)\n8fdda6825839511fd07c0d32e757fd06e6eb46125471194fa158cd1259d49249\nSat Dec  6 07:47:49 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-6d1a17d6-5e44-40b7-832a-81cb86c02e71 (8fdda6825839511fd07c0d32e757fd06e6eb46125471194fa158cd1259d49249)\n8fdda6825839511fd07c0d32e757fd06e6eb46125471194fa158cd1259d49249\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:47:49 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:47:49.068 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[eab084ab-fb8c-454c-99e7-f83771c28fe1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:47:49 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:47:49.069 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6d1a17d6-50, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:47:49 np0005548731 kernel: tap6d1a17d6-50: left promiscuous mode
Dec  6 02:47:49 np0005548731 nova_compute[232433]: 2025-12-06 07:47:49.081 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:47:49 np0005548731 nova_compute[232433]: 2025-12-06 07:47:49.098 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:47:49 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:47:49.101 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[c3e9e691-ece4-47ac-a5e0-20d9cb54c66e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:47:49 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:47:49.120 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[a513ab86-4924-4379-953e-2378ff7e4bcd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:47:49 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:47:49.122 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[6f09acbe-72c4-4940-aa33-76aea390e74f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:47:49 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:47:49.140 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[13df0aee-bba2-449d-94a4-fef45d65a8b1]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 749445, 'reachable_time': 36731, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 305771, 'error': None, 'target': 'ovnmeta-6d1a17d6-5e44-40b7-832a-81cb86c02e71', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:47:49 np0005548731 systemd[1]: run-netns-ovnmeta\x2d6d1a17d6\x2d5e44\x2d40b7\x2d832a\x2d81cb86c02e71.mount: Deactivated successfully.
Dec  6 02:47:49 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:47:49.145 144079 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-6d1a17d6-5e44-40b7-832a-81cb86c02e71 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Dec  6 02:47:49 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:47:49.145 144079 DEBUG oslo.privsep.daemon [-] privsep: reply[78766299-c325-44a8-93e2-4cc9f3371bae]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:47:49 np0005548731 nova_compute[232433]: 2025-12-06 07:47:49.235 232437 DEBUG nova.compute.manager [req-948dcacd-d88f-4e01-96fc-95aa4974b57e req-3a580ce2-2442-4a32-a543-bae507c8e8b5 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 2b4b3181-da12-4705-9215-7ee5b869102b] Received event network-vif-unplugged-879edf90-0812-4adf-b171-d993c6cfb26c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:47:49 np0005548731 nova_compute[232433]: 2025-12-06 07:47:49.235 232437 DEBUG oslo_concurrency.lockutils [req-948dcacd-d88f-4e01-96fc-95aa4974b57e req-3a580ce2-2442-4a32-a543-bae507c8e8b5 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "2b4b3181-da12-4705-9215-7ee5b869102b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:47:49 np0005548731 nova_compute[232433]: 2025-12-06 07:47:49.235 232437 DEBUG oslo_concurrency.lockutils [req-948dcacd-d88f-4e01-96fc-95aa4974b57e req-3a580ce2-2442-4a32-a543-bae507c8e8b5 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "2b4b3181-da12-4705-9215-7ee5b869102b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:47:49 np0005548731 nova_compute[232433]: 2025-12-06 07:47:49.236 232437 DEBUG oslo_concurrency.lockutils [req-948dcacd-d88f-4e01-96fc-95aa4974b57e req-3a580ce2-2442-4a32-a543-bae507c8e8b5 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "2b4b3181-da12-4705-9215-7ee5b869102b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:47:49 np0005548731 nova_compute[232433]: 2025-12-06 07:47:49.236 232437 DEBUG nova.compute.manager [req-948dcacd-d88f-4e01-96fc-95aa4974b57e req-3a580ce2-2442-4a32-a543-bae507c8e8b5 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 2b4b3181-da12-4705-9215-7ee5b869102b] No waiting events found dispatching network-vif-unplugged-879edf90-0812-4adf-b171-d993c6cfb26c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 02:47:49 np0005548731 nova_compute[232433]: 2025-12-06 07:47:49.236 232437 DEBUG nova.compute.manager [req-948dcacd-d88f-4e01-96fc-95aa4974b57e req-3a580ce2-2442-4a32-a543-bae507c8e8b5 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 2b4b3181-da12-4705-9215-7ee5b869102b] Received event network-vif-unplugged-879edf90-0812-4adf-b171-d993c6cfb26c for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Dec  6 02:47:49 np0005548731 nova_compute[232433]: 2025-12-06 07:47:49.266 232437 DEBUG oslo_concurrency.lockutils [None req-42fcbf2c-a5ec-4965-8b34-3427a1d88dad 5933a27f4e504e23a5d084501196c0fb 329d4c9562c84ec5a42ca68894cbf27f - - default default] Lock "e268006c-e26b-4400-8c9a-da1925cd1a57" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.416s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:47:49 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:47:49 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:47:49 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:47:49.619 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:47:50 np0005548731 nova_compute[232433]: 2025-12-06 07:47:50.457 232437 INFO nova.virt.libvirt.driver [None req-886f8678-ee95-4079-b7ef-c3f5b3ac976d e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] [instance: 2b4b3181-da12-4705-9215-7ee5b869102b] Deleting instance files /var/lib/nova/instances/2b4b3181-da12-4705-9215-7ee5b869102b_del#033[00m
Dec  6 02:47:50 np0005548731 nova_compute[232433]: 2025-12-06 07:47:50.458 232437 INFO nova.virt.libvirt.driver [None req-886f8678-ee95-4079-b7ef-c3f5b3ac976d e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] [instance: 2b4b3181-da12-4705-9215-7ee5b869102b] Deletion of /var/lib/nova/instances/2b4b3181-da12-4705-9215-7ee5b869102b_del complete#033[00m
Dec  6 02:47:50 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:47:50 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:47:50 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:47:50.644 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:47:50 np0005548731 nova_compute[232433]: 2025-12-06 07:47:50.853 232437 INFO nova.compute.manager [None req-886f8678-ee95-4079-b7ef-c3f5b3ac976d e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] [instance: 2b4b3181-da12-4705-9215-7ee5b869102b] Took 3.21 seconds to destroy the instance on the hypervisor.#033[00m
Dec  6 02:47:50 np0005548731 nova_compute[232433]: 2025-12-06 07:47:50.853 232437 DEBUG oslo.service.loopingcall [None req-886f8678-ee95-4079-b7ef-c3f5b3ac976d e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Dec  6 02:47:50 np0005548731 nova_compute[232433]: 2025-12-06 07:47:50.854 232437 DEBUG nova.compute.manager [-] [instance: 2b4b3181-da12-4705-9215-7ee5b869102b] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Dec  6 02:47:50 np0005548731 nova_compute[232433]: 2025-12-06 07:47:50.854 232437 DEBUG nova.network.neutron [-] [instance: 2b4b3181-da12-4705-9215-7ee5b869102b] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Dec  6 02:47:50 np0005548731 nova_compute[232433]: 2025-12-06 07:47:50.971 232437 DEBUG oslo_concurrency.lockutils [None req-42fcbf2c-a5ec-4965-8b34-3427a1d88dad 5933a27f4e504e23a5d084501196c0fb 329d4c9562c84ec5a42ca68894cbf27f - - default default] Acquiring lock "e268006c-e26b-4400-8c9a-da1925cd1a57" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:47:50 np0005548731 nova_compute[232433]: 2025-12-06 07:47:50.972 232437 DEBUG oslo_concurrency.lockutils [None req-42fcbf2c-a5ec-4965-8b34-3427a1d88dad 5933a27f4e504e23a5d084501196c0fb 329d4c9562c84ec5a42ca68894cbf27f - - default default] Lock "e268006c-e26b-4400-8c9a-da1925cd1a57" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:47:50 np0005548731 nova_compute[232433]: 2025-12-06 07:47:50.972 232437 INFO nova.compute.manager [None req-42fcbf2c-a5ec-4965-8b34-3427a1d88dad 5933a27f4e504e23a5d084501196c0fb 329d4c9562c84ec5a42ca68894cbf27f - - default default] [instance: e268006c-e26b-4400-8c9a-da1925cd1a57] Attaching volume d3873e66-0cd3-4a84-b18d-551e693a7554 to /dev/vdb#033[00m
Dec  6 02:47:51 np0005548731 nova_compute[232433]: 2025-12-06 07:47:51.163 232437 DEBUG os_brick.utils [None req-42fcbf2c-a5ec-4965-8b34-3427a1d88dad 5933a27f4e504e23a5d084501196c0fb 329d4c9562c84ec5a42ca68894cbf27f - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.102', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-2.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Dec  6 02:47:51 np0005548731 nova_compute[232433]: 2025-12-06 07:47:51.165 237736 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:47:51 np0005548731 nova_compute[232433]: 2025-12-06 07:47:51.178 237736 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.013s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:47:51 np0005548731 nova_compute[232433]: 2025-12-06 07:47:51.178 237736 DEBUG oslo.privsep.daemon [-] privsep: reply[e35f871b-4379-4169-92de-71fdc149c1d7]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:47:51 np0005548731 nova_compute[232433]: 2025-12-06 07:47:51.180 237736 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:47:51 np0005548731 nova_compute[232433]: 2025-12-06 07:47:51.189 237736 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:47:51 np0005548731 nova_compute[232433]: 2025-12-06 07:47:51.189 237736 DEBUG oslo.privsep.daemon [-] privsep: reply[da098d0d-8443-4ed7-b27d-2c2904498401]: (4, ('InitiatorName=iqn.1994-05.com.redhat:63778d5959f0', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:47:51 np0005548731 nova_compute[232433]: 2025-12-06 07:47:51.191 237736 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:47:51 np0005548731 nova_compute[232433]: 2025-12-06 07:47:51.200 237736 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:47:51 np0005548731 nova_compute[232433]: 2025-12-06 07:47:51.200 237736 DEBUG oslo.privsep.daemon [-] privsep: reply[20f674d7-2153-41da-86d2-b033b3a99e8a]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:47:51 np0005548731 nova_compute[232433]: 2025-12-06 07:47:51.202 237736 DEBUG oslo.privsep.daemon [-] privsep: reply[36ddf68d-ab34-47d1-9cb1-976e87c3a755]: (4, 'ec2492e2-e0d1-43d3-b74f-744a8272a0e6') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:47:51 np0005548731 nova_compute[232433]: 2025-12-06 07:47:51.203 232437 DEBUG oslo_concurrency.processutils [None req-42fcbf2c-a5ec-4965-8b34-3427a1d88dad 5933a27f4e504e23a5d084501196c0fb 329d4c9562c84ec5a42ca68894cbf27f - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:47:51 np0005548731 nova_compute[232433]: 2025-12-06 07:47:51.236 232437 DEBUG oslo_concurrency.processutils [None req-42fcbf2c-a5ec-4965-8b34-3427a1d88dad 5933a27f4e504e23a5d084501196c0fb 329d4c9562c84ec5a42ca68894cbf27f - - default default] CMD "nvme version" returned: 0 in 0.033s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:47:51 np0005548731 nova_compute[232433]: 2025-12-06 07:47:51.238 232437 DEBUG os_brick.initiator.connectors.lightos [None req-42fcbf2c-a5ec-4965-8b34-3427a1d88dad 5933a27f4e504e23a5d084501196c0fb 329d4c9562c84ec5a42ca68894cbf27f - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Dec  6 02:47:51 np0005548731 nova_compute[232433]: 2025-12-06 07:47:51.239 232437 DEBUG os_brick.initiator.connectors.lightos [None req-42fcbf2c-a5ec-4965-8b34-3427a1d88dad 5933a27f4e504e23a5d084501196c0fb 329d4c9562c84ec5a42ca68894cbf27f - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Dec  6 02:47:51 np0005548731 nova_compute[232433]: 2025-12-06 07:47:51.239 232437 DEBUG os_brick.initiator.connectors.lightos [None req-42fcbf2c-a5ec-4965-8b34-3427a1d88dad 5933a27f4e504e23a5d084501196c0fb 329d4c9562c84ec5a42ca68894cbf27f - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:bf3e0a14-a5f8-4123-aa26-e7cad37b879a dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Dec  6 02:47:51 np0005548731 nova_compute[232433]: 2025-12-06 07:47:51.239 232437 DEBUG os_brick.utils [None req-42fcbf2c-a5ec-4965-8b34-3427a1d88dad 5933a27f4e504e23a5d084501196c0fb 329d4c9562c84ec5a42ca68894cbf27f - - default default] <== get_connector_properties: return (76ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.102', 'host': 'compute-2.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:63778d5959f0', 'do_local_attach': False, 'nvme_hostid': 'bf3e0a14-a5f8-4123-aa26-e7cad37b879a', 'system uuid': 'ec2492e2-e0d1-43d3-b74f-744a8272a0e6', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:bf3e0a14-a5f8-4123-aa26-e7cad37b879a', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Dec  6 02:47:51 np0005548731 nova_compute[232433]: 2025-12-06 07:47:51.240 232437 DEBUG nova.virt.block_device [None req-42fcbf2c-a5ec-4965-8b34-3427a1d88dad 5933a27f4e504e23a5d084501196c0fb 329d4c9562c84ec5a42ca68894cbf27f - - default default] [instance: e268006c-e26b-4400-8c9a-da1925cd1a57] Updating existing volume attachment record: 08050d54-8477-42c7-a114-d7e13e3651ee _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Dec  6 02:47:51 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 02:47:51 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 02:47:51 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e358 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:47:51 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:47:51 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:47:51 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:47:51.622 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:47:51 np0005548731 nova_compute[232433]: 2025-12-06 07:47:51.625 232437 DEBUG nova.compute.manager [req-c77672e0-04dc-4a5d-b18c-96886135c436 req-feb87683-f46f-47e4-872f-782d0e9fe8e5 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 2b4b3181-da12-4705-9215-7ee5b869102b] Received event network-vif-plugged-879edf90-0812-4adf-b171-d993c6cfb26c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:47:51 np0005548731 nova_compute[232433]: 2025-12-06 07:47:51.625 232437 DEBUG oslo_concurrency.lockutils [req-c77672e0-04dc-4a5d-b18c-96886135c436 req-feb87683-f46f-47e4-872f-782d0e9fe8e5 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "2b4b3181-da12-4705-9215-7ee5b869102b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:47:51 np0005548731 nova_compute[232433]: 2025-12-06 07:47:51.625 232437 DEBUG oslo_concurrency.lockutils [req-c77672e0-04dc-4a5d-b18c-96886135c436 req-feb87683-f46f-47e4-872f-782d0e9fe8e5 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "2b4b3181-da12-4705-9215-7ee5b869102b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:47:51 np0005548731 nova_compute[232433]: 2025-12-06 07:47:51.626 232437 DEBUG oslo_concurrency.lockutils [req-c77672e0-04dc-4a5d-b18c-96886135c436 req-feb87683-f46f-47e4-872f-782d0e9fe8e5 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "2b4b3181-da12-4705-9215-7ee5b869102b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:47:51 np0005548731 nova_compute[232433]: 2025-12-06 07:47:51.626 232437 DEBUG nova.compute.manager [req-c77672e0-04dc-4a5d-b18c-96886135c436 req-feb87683-f46f-47e4-872f-782d0e9fe8e5 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 2b4b3181-da12-4705-9215-7ee5b869102b] No waiting events found dispatching network-vif-plugged-879edf90-0812-4adf-b171-d993c6cfb26c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 02:47:51 np0005548731 nova_compute[232433]: 2025-12-06 07:47:51.626 232437 WARNING nova.compute.manager [req-c77672e0-04dc-4a5d-b18c-96886135c436 req-feb87683-f46f-47e4-872f-782d0e9fe8e5 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 2b4b3181-da12-4705-9215-7ee5b869102b] Received unexpected event network-vif-plugged-879edf90-0812-4adf-b171-d993c6cfb26c for instance with vm_state active and task_state deleting.#033[00m
Dec  6 02:47:52 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:47:52 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:47:52 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:47:52.647 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:47:53 np0005548731 nova_compute[232433]: 2025-12-06 07:47:53.184 232437 DEBUG nova.network.neutron [-] [instance: 2b4b3181-da12-4705-9215-7ee5b869102b] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec  6 02:47:53 np0005548731 nova_compute[232433]: 2025-12-06 07:47:53.196 232437 DEBUG os_brick.encryptors [None req-42fcbf2c-a5ec-4965-8b34-3427a1d88dad 5933a27f4e504e23a5d084501196c0fb 329d4c9562c84ec5a42ca68894cbf27f - - default default] Using volume encryption metadata '{'encryption_key_id': '7eed8ca9-eab0-4d8c-a68e-c776c26acb17', 'control_location': 'front-end', 'cipher': 'aes-xts-plain64', 'key_size': 256, 'provider': 'luks'}' for connection: {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-d3873e66-0cd3-4a84-b18d-551e693a7554', 'hosts': ['192.168.122.100', '192.168.122.102', '192.168.122.101'], 'ports': ['6789', '6789', '6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': 'd3873e66-0cd3-4a84-b18d-551e693a7554', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': True, 'cacheable': False}, 'status': 'reserved', 'instance': 'e268006c-e26b-4400-8c9a-da1925cd1a57', 'attached_at': '', 'detached_at': '', 'volume_id': 'd3873e66-0cd3-4a84-b18d-551e693a7554', 'serial': '} get_encryption_metadata /usr/lib/python3.9/site-packages/os_brick/encryptors/__init__.py:135
Dec  6 02:47:53 np0005548731 nova_compute[232433]: 2025-12-06 07:47:53.202 232437 DEBUG barbicanclient.client [None req-42fcbf2c-a5ec-4965-8b34-3427a1d88dad 5933a27f4e504e23a5d084501196c0fb 329d4c9562c84ec5a42ca68894cbf27f - - default default] Creating Client object Client /usr/lib/python3.9/site-packages/barbicanclient/client.py:163
Dec  6 02:47:53 np0005548731 nova_compute[232433]: 2025-12-06 07:47:53.228 232437 DEBUG barbicanclient.v1.secrets [None req-42fcbf2c-a5ec-4965-8b34-3427a1d88dad 5933a27f4e504e23a5d084501196c0fb 329d4c9562c84ec5a42ca68894cbf27f - - default default] Getting secret - Secret href: https://barbican-internal.openstack.svc:9311/secrets/7eed8ca9-eab0-4d8c-a68e-c776c26acb17 get /usr/lib/python3.9/site-packages/barbicanclient/v1/secrets.py:514
Dec  6 02:47:53 np0005548731 nova_compute[232433]: 2025-12-06 07:47:53.229 232437 INFO barbicanclient.base [None req-42fcbf2c-a5ec-4965-8b34-3427a1d88dad 5933a27f4e504e23a5d084501196c0fb 329d4c9562c84ec5a42ca68894cbf27f - - default default] Calculated Secrets uuid ref: secrets/7eed8ca9-eab0-4d8c-a68e-c776c26acb17
Dec  6 02:47:53 np0005548731 nova_compute[232433]: 2025-12-06 07:47:53.258 232437 DEBUG barbicanclient.client [None req-42fcbf2c-a5ec-4965-8b34-3427a1d88dad 5933a27f4e504e23a5d084501196c0fb 329d4c9562c84ec5a42ca68894cbf27f - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Dec  6 02:47:53 np0005548731 nova_compute[232433]: 2025-12-06 07:47:53.258 232437 INFO barbicanclient.base [None req-42fcbf2c-a5ec-4965-8b34-3427a1d88dad 5933a27f4e504e23a5d084501196c0fb 329d4c9562c84ec5a42ca68894cbf27f - - default default] Calculated Secrets uuid ref: secrets/7eed8ca9-eab0-4d8c-a68e-c776c26acb17
Dec  6 02:47:53 np0005548731 nova_compute[232433]: 2025-12-06 07:47:53.286 232437 DEBUG barbicanclient.client [None req-42fcbf2c-a5ec-4965-8b34-3427a1d88dad 5933a27f4e504e23a5d084501196c0fb 329d4c9562c84ec5a42ca68894cbf27f - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Dec  6 02:47:53 np0005548731 nova_compute[232433]: 2025-12-06 07:47:53.287 232437 INFO barbicanclient.base [None req-42fcbf2c-a5ec-4965-8b34-3427a1d88dad 5933a27f4e504e23a5d084501196c0fb 329d4c9562c84ec5a42ca68894cbf27f - - default default] Calculated Secrets uuid ref: secrets/7eed8ca9-eab0-4d8c-a68e-c776c26acb17
Dec  6 02:47:53 np0005548731 nova_compute[232433]: 2025-12-06 07:47:53.297 232437 INFO nova.compute.manager [-] [instance: 2b4b3181-da12-4705-9215-7ee5b869102b] Took 2.44 seconds to deallocate network for instance.
Dec  6 02:47:53 np0005548731 nova_compute[232433]: 2025-12-06 07:47:53.309 232437 DEBUG nova.compute.manager [req-b65a5a02-0e0f-41ef-b770-78841fc66122 req-41dbb233-e0f9-4c08-b80b-0f85d41cfa7d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 2b4b3181-da12-4705-9215-7ee5b869102b] Received event network-vif-deleted-879edf90-0812-4adf-b171-d993c6cfb26c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec  6 02:47:53 np0005548731 nova_compute[232433]: 2025-12-06 07:47:53.310 232437 INFO nova.compute.manager [req-b65a5a02-0e0f-41ef-b770-78841fc66122 req-41dbb233-e0f9-4c08-b80b-0f85d41cfa7d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 2b4b3181-da12-4705-9215-7ee5b869102b] Neutron deleted interface 879edf90-0812-4adf-b171-d993c6cfb26c; detaching it from the instance and deleting it from the info cache
Dec  6 02:47:53 np0005548731 nova_compute[232433]: 2025-12-06 07:47:53.310 232437 DEBUG nova.network.neutron [req-b65a5a02-0e0f-41ef-b770-78841fc66122 req-41dbb233-e0f9-4c08-b80b-0f85d41cfa7d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 2b4b3181-da12-4705-9215-7ee5b869102b] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec  6 02:47:53 np0005548731 nova_compute[232433]: 2025-12-06 07:47:53.319 232437 DEBUG barbicanclient.client [None req-42fcbf2c-a5ec-4965-8b34-3427a1d88dad 5933a27f4e504e23a5d084501196c0fb 329d4c9562c84ec5a42ca68894cbf27f - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Dec  6 02:47:53 np0005548731 nova_compute[232433]: 2025-12-06 07:47:53.320 232437 INFO barbicanclient.base [None req-42fcbf2c-a5ec-4965-8b34-3427a1d88dad 5933a27f4e504e23a5d084501196c0fb 329d4c9562c84ec5a42ca68894cbf27f - - default default] Calculated Secrets uuid ref: secrets/7eed8ca9-eab0-4d8c-a68e-c776c26acb17
Dec  6 02:47:53 np0005548731 nova_compute[232433]: 2025-12-06 07:47:53.343 232437 DEBUG nova.compute.manager [req-b65a5a02-0e0f-41ef-b770-78841fc66122 req-41dbb233-e0f9-4c08-b80b-0f85d41cfa7d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 2b4b3181-da12-4705-9215-7ee5b869102b] Detach interface failed, port_id=879edf90-0812-4adf-b171-d993c6cfb26c, reason: Instance 2b4b3181-da12-4705-9215-7ee5b869102b could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Dec  6 02:47:53 np0005548731 nova_compute[232433]: 2025-12-06 07:47:53.368 232437 DEBUG barbicanclient.client [None req-42fcbf2c-a5ec-4965-8b34-3427a1d88dad 5933a27f4e504e23a5d084501196c0fb 329d4c9562c84ec5a42ca68894cbf27f - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Dec  6 02:47:53 np0005548731 nova_compute[232433]: 2025-12-06 07:47:53.368 232437 INFO barbicanclient.base [None req-42fcbf2c-a5ec-4965-8b34-3427a1d88dad 5933a27f4e504e23a5d084501196c0fb 329d4c9562c84ec5a42ca68894cbf27f - - default default] Calculated Secrets uuid ref: secrets/7eed8ca9-eab0-4d8c-a68e-c776c26acb17
Dec  6 02:47:53 np0005548731 nova_compute[232433]: 2025-12-06 07:47:53.401 232437 DEBUG oslo_concurrency.lockutils [None req-886f8678-ee95-4079-b7ef-c3f5b3ac976d e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  6 02:47:53 np0005548731 nova_compute[232433]: 2025-12-06 07:47:53.401 232437 DEBUG oslo_concurrency.lockutils [None req-886f8678-ee95-4079-b7ef-c3f5b3ac976d e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  6 02:47:53 np0005548731 nova_compute[232433]: 2025-12-06 07:47:53.404 232437 DEBUG barbicanclient.client [None req-42fcbf2c-a5ec-4965-8b34-3427a1d88dad 5933a27f4e504e23a5d084501196c0fb 329d4c9562c84ec5a42ca68894cbf27f - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Dec  6 02:47:53 np0005548731 nova_compute[232433]: 2025-12-06 07:47:53.404 232437 INFO barbicanclient.base [None req-42fcbf2c-a5ec-4965-8b34-3427a1d88dad 5933a27f4e504e23a5d084501196c0fb 329d4c9562c84ec5a42ca68894cbf27f - - default default] Calculated Secrets uuid ref: secrets/7eed8ca9-eab0-4d8c-a68e-c776c26acb17
Dec  6 02:47:53 np0005548731 nova_compute[232433]: 2025-12-06 07:47:53.461 232437 DEBUG barbicanclient.client [None req-42fcbf2c-a5ec-4965-8b34-3427a1d88dad 5933a27f4e504e23a5d084501196c0fb 329d4c9562c84ec5a42ca68894cbf27f - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Dec  6 02:47:53 np0005548731 nova_compute[232433]: 2025-12-06 07:47:53.461 232437 INFO barbicanclient.base [None req-42fcbf2c-a5ec-4965-8b34-3427a1d88dad 5933a27f4e504e23a5d084501196c0fb 329d4c9562c84ec5a42ca68894cbf27f - - default default] Calculated Secrets uuid ref: secrets/7eed8ca9-eab0-4d8c-a68e-c776c26acb17
Dec  6 02:47:53 np0005548731 nova_compute[232433]: 2025-12-06 07:47:53.488 232437 DEBUG barbicanclient.client [None req-42fcbf2c-a5ec-4965-8b34-3427a1d88dad 5933a27f4e504e23a5d084501196c0fb 329d4c9562c84ec5a42ca68894cbf27f - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Dec  6 02:47:53 np0005548731 nova_compute[232433]: 2025-12-06 07:47:53.489 232437 INFO barbicanclient.base [None req-42fcbf2c-a5ec-4965-8b34-3427a1d88dad 5933a27f4e504e23a5d084501196c0fb 329d4c9562c84ec5a42ca68894cbf27f - - default default] Calculated Secrets uuid ref: secrets/7eed8ca9-eab0-4d8c-a68e-c776c26acb17
Dec  6 02:47:53 np0005548731 nova_compute[232433]: 2025-12-06 07:47:53.519 232437 DEBUG barbicanclient.client [None req-42fcbf2c-a5ec-4965-8b34-3427a1d88dad 5933a27f4e504e23a5d084501196c0fb 329d4c9562c84ec5a42ca68894cbf27f - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Dec  6 02:47:53 np0005548731 nova_compute[232433]: 2025-12-06 07:47:53.520 232437 INFO barbicanclient.base [None req-42fcbf2c-a5ec-4965-8b34-3427a1d88dad 5933a27f4e504e23a5d084501196c0fb 329d4c9562c84ec5a42ca68894cbf27f - - default default] Calculated Secrets uuid ref: secrets/7eed8ca9-eab0-4d8c-a68e-c776c26acb17
Dec  6 02:47:53 np0005548731 nova_compute[232433]: 2025-12-06 07:47:53.529 232437 DEBUG oslo_concurrency.processutils [None req-886f8678-ee95-4079-b7ef-c3f5b3ac976d e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  6 02:47:53 np0005548731 nova_compute[232433]: 2025-12-06 07:47:53.560 232437 DEBUG barbicanclient.client [None req-42fcbf2c-a5ec-4965-8b34-3427a1d88dad 5933a27f4e504e23a5d084501196c0fb 329d4c9562c84ec5a42ca68894cbf27f - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Dec  6 02:47:53 np0005548731 nova_compute[232433]: 2025-12-06 07:47:53.560 232437 INFO barbicanclient.base [None req-42fcbf2c-a5ec-4965-8b34-3427a1d88dad 5933a27f4e504e23a5d084501196c0fb 329d4c9562c84ec5a42ca68894cbf27f - - default default] Calculated Secrets uuid ref: secrets/7eed8ca9-eab0-4d8c-a68e-c776c26acb17
Dec  6 02:47:53 np0005548731 nova_compute[232433]: 2025-12-06 07:47:53.591 232437 DEBUG barbicanclient.client [None req-42fcbf2c-a5ec-4965-8b34-3427a1d88dad 5933a27f4e504e23a5d084501196c0fb 329d4c9562c84ec5a42ca68894cbf27f - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Dec  6 02:47:53 np0005548731 nova_compute[232433]: 2025-12-06 07:47:53.592 232437 INFO barbicanclient.base [None req-42fcbf2c-a5ec-4965-8b34-3427a1d88dad 5933a27f4e504e23a5d084501196c0fb 329d4c9562c84ec5a42ca68894cbf27f - - default default] Calculated Secrets uuid ref: secrets/7eed8ca9-eab0-4d8c-a68e-c776c26acb17
Dec  6 02:47:53 np0005548731 nova_compute[232433]: 2025-12-06 07:47:53.613 232437 DEBUG barbicanclient.client [None req-42fcbf2c-a5ec-4965-8b34-3427a1d88dad 5933a27f4e504e23a5d084501196c0fb 329d4c9562c84ec5a42ca68894cbf27f - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Dec  6 02:47:53 np0005548731 nova_compute[232433]: 2025-12-06 07:47:53.613 232437 INFO barbicanclient.base [None req-42fcbf2c-a5ec-4965-8b34-3427a1d88dad 5933a27f4e504e23a5d084501196c0fb 329d4c9562c84ec5a42ca68894cbf27f - - default default] Calculated Secrets uuid ref: secrets/7eed8ca9-eab0-4d8c-a68e-c776c26acb17
Dec  6 02:47:53 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:47:53 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:47:53 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:47:53.625 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:47:53 np0005548731 nova_compute[232433]: 2025-12-06 07:47:53.632 232437 DEBUG barbicanclient.client [None req-42fcbf2c-a5ec-4965-8b34-3427a1d88dad 5933a27f4e504e23a5d084501196c0fb 329d4c9562c84ec5a42ca68894cbf27f - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Dec  6 02:47:53 np0005548731 nova_compute[232433]: 2025-12-06 07:47:53.632 232437 INFO barbicanclient.base [None req-42fcbf2c-a5ec-4965-8b34-3427a1d88dad 5933a27f4e504e23a5d084501196c0fb 329d4c9562c84ec5a42ca68894cbf27f - - default default] Calculated Secrets uuid ref: secrets/7eed8ca9-eab0-4d8c-a68e-c776c26acb17
Dec  6 02:47:53 np0005548731 nova_compute[232433]: 2025-12-06 07:47:53.651 232437 DEBUG barbicanclient.client [None req-42fcbf2c-a5ec-4965-8b34-3427a1d88dad 5933a27f4e504e23a5d084501196c0fb 329d4c9562c84ec5a42ca68894cbf27f - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Dec  6 02:47:53 np0005548731 nova_compute[232433]: 2025-12-06 07:47:53.652 232437 INFO barbicanclient.base [None req-42fcbf2c-a5ec-4965-8b34-3427a1d88dad 5933a27f4e504e23a5d084501196c0fb 329d4c9562c84ec5a42ca68894cbf27f - - default default] Calculated Secrets uuid ref: secrets/7eed8ca9-eab0-4d8c-a68e-c776c26acb17
Dec  6 02:47:53 np0005548731 nova_compute[232433]: 2025-12-06 07:47:53.695 232437 DEBUG barbicanclient.client [None req-42fcbf2c-a5ec-4965-8b34-3427a1d88dad 5933a27f4e504e23a5d084501196c0fb 329d4c9562c84ec5a42ca68894cbf27f - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Dec  6 02:47:53 np0005548731 nova_compute[232433]: 2025-12-06 07:47:53.695 232437 INFO barbicanclient.base [None req-42fcbf2c-a5ec-4965-8b34-3427a1d88dad 5933a27f4e504e23a5d084501196c0fb 329d4c9562c84ec5a42ca68894cbf27f - - default default] Calculated Secrets uuid ref: secrets/7eed8ca9-eab0-4d8c-a68e-c776c26acb17
Dec  6 02:47:53 np0005548731 nova_compute[232433]: 2025-12-06 07:47:53.722 232437 DEBUG barbicanclient.client [None req-42fcbf2c-a5ec-4965-8b34-3427a1d88dad 5933a27f4e504e23a5d084501196c0fb 329d4c9562c84ec5a42ca68894cbf27f - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Dec  6 02:47:53 np0005548731 nova_compute[232433]: 2025-12-06 07:47:53.723 232437 DEBUG nova.virt.libvirt.host [None req-42fcbf2c-a5ec-4965-8b34-3427a1d88dad 5933a27f4e504e23a5d084501196c0fb 329d4c9562c84ec5a42ca68894cbf27f - - default default] Secret XML: <secret ephemeral="no" private="no">
Dec  6 02:47:53 np0005548731 nova_compute[232433]:  <usage type="volume">
Dec  6 02:47:53 np0005548731 nova_compute[232433]:    <volume>d3873e66-0cd3-4a84-b18d-551e693a7554</volume>
Dec  6 02:47:53 np0005548731 nova_compute[232433]:  </usage>
Dec  6 02:47:53 np0005548731 nova_compute[232433]: </secret>
Dec  6 02:47:53 np0005548731 nova_compute[232433]: create_secret /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1131
Dec  6 02:47:53 np0005548731 nova_compute[232433]: 2025-12-06 07:47:53.735 232437 DEBUG nova.objects.instance [None req-42fcbf2c-a5ec-4965-8b34-3427a1d88dad 5933a27f4e504e23a5d084501196c0fb 329d4c9562c84ec5a42ca68894cbf27f - - default default] Lazy-loading 'flavor' on Instance uuid e268006c-e26b-4400-8c9a-da1925cd1a57 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec  6 02:47:53 np0005548731 nova_compute[232433]: 2025-12-06 07:47:53.799 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 02:47:53 np0005548731 nova_compute[232433]: 2025-12-06 07:47:53.812 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 02:47:53 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 02:47:53 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3420904072' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 02:47:53 np0005548731 nova_compute[232433]: 2025-12-06 07:47:53.948 232437 DEBUG oslo_concurrency.processutils [None req-886f8678-ee95-4079-b7ef-c3f5b3ac976d e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.419s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  6 02:47:53 np0005548731 nova_compute[232433]: 2025-12-06 07:47:53.956 232437 DEBUG nova.compute.provider_tree [None req-886f8678-ee95-4079-b7ef-c3f5b3ac976d e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Inventory has not changed in ProviderTree for provider: 6d00757a-082f-486d-ae84-869a2ba2e6e7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec  6 02:47:54 np0005548731 nova_compute[232433]: 2025-12-06 07:47:54.060 232437 DEBUG nova.virt.libvirt.driver [None req-42fcbf2c-a5ec-4965-8b34-3427a1d88dad 5933a27f4e504e23a5d084501196c0fb 329d4c9562c84ec5a42ca68894cbf27f - - default default] [instance: e268006c-e26b-4400-8c9a-da1925cd1a57] Attempting to attach volume d3873e66-0cd3-4a84-b18d-551e693a7554 with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168
Dec  6 02:47:54 np0005548731 nova_compute[232433]: 2025-12-06 07:47:54.061 232437 DEBUG nova.scheduler.client.report [None req-886f8678-ee95-4079-b7ef-c3f5b3ac976d e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Inventory has not changed for provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec  6 02:47:54 np0005548731 nova_compute[232433]: 2025-12-06 07:47:54.065 232437 DEBUG nova.virt.libvirt.guest [None req-42fcbf2c-a5ec-4965-8b34-3427a1d88dad 5933a27f4e504e23a5d084501196c0fb 329d4c9562c84ec5a42ca68894cbf27f - - default default] attach device xml: <disk type="network" device="disk">
Dec  6 02:47:54 np0005548731 nova_compute[232433]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Dec  6 02:47:54 np0005548731 nova_compute[232433]:  <source protocol="rbd" name="volumes/volume-d3873e66-0cd3-4a84-b18d-551e693a7554">
Dec  6 02:47:54 np0005548731 nova_compute[232433]:    <host name="192.168.122.100" port="6789"/>
Dec  6 02:47:54 np0005548731 nova_compute[232433]:    <host name="192.168.122.102" port="6789"/>
Dec  6 02:47:54 np0005548731 nova_compute[232433]:    <host name="192.168.122.101" port="6789"/>
Dec  6 02:47:54 np0005548731 nova_compute[232433]:  </source>
Dec  6 02:47:54 np0005548731 nova_compute[232433]:  <auth username="openstack">
Dec  6 02:47:54 np0005548731 nova_compute[232433]:    <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec  6 02:47:54 np0005548731 nova_compute[232433]:  </auth>
Dec  6 02:47:54 np0005548731 nova_compute[232433]:  <target dev="vdb" bus="virtio"/>
Dec  6 02:47:54 np0005548731 nova_compute[232433]:  <serial>d3873e66-0cd3-4a84-b18d-551e693a7554</serial>
Dec  6 02:47:54 np0005548731 nova_compute[232433]:  <encryption format="luks">
Dec  6 02:47:54 np0005548731 nova_compute[232433]:    <secret type="passphrase" uuid="d7fa13c7-6665-4d30-9609-3f857055e3e1"/>
Dec  6 02:47:54 np0005548731 nova_compute[232433]:  </encryption>
Dec  6 02:47:54 np0005548731 nova_compute[232433]: </disk>
Dec  6 02:47:54 np0005548731 nova_compute[232433]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
Dec  6 02:47:54 np0005548731 nova_compute[232433]: 2025-12-06 07:47:54.192 232437 DEBUG oslo_concurrency.lockutils [None req-886f8678-ee95-4079-b7ef-c3f5b3ac976d e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.791s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  6 02:47:54 np0005548731 nova_compute[232433]: 2025-12-06 07:47:54.226 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 02:47:54 np0005548731 nova_compute[232433]: 2025-12-06 07:47:54.312 232437 INFO nova.scheduler.client.report [None req-886f8678-ee95-4079-b7ef-c3f5b3ac976d e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Deleted allocations for instance 2b4b3181-da12-4705-9215-7ee5b869102b
Dec  6 02:47:54 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:47:54 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:47:54 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:47:54.649 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:47:54 np0005548731 nova_compute[232433]: 2025-12-06 07:47:54.660 232437 DEBUG oslo_concurrency.lockutils [None req-886f8678-ee95-4079-b7ef-c3f5b3ac976d e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Lock "2b4b3181-da12-4705-9215-7ee5b869102b" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 7.023s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  6 02:47:55 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e359 e359: 3 total, 3 up, 3 in
Dec  6 02:47:55 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e360 e360: 3 total, 3 up, 3 in
Dec  6 02:47:55 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:47:55 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:47:55 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:47:55.629 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:47:56 np0005548731 nova_compute[232433]: 2025-12-06 07:47:56.524 232437 DEBUG nova.virt.libvirt.driver [None req-42fcbf2c-a5ec-4965-8b34-3427a1d88dad 5933a27f4e504e23a5d084501196c0fb 329d4c9562c84ec5a42ca68894cbf27f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec  6 02:47:56 np0005548731 nova_compute[232433]: 2025-12-06 07:47:56.524 232437 DEBUG nova.virt.libvirt.driver [None req-42fcbf2c-a5ec-4965-8b34-3427a1d88dad 5933a27f4e504e23a5d084501196c0fb 329d4c9562c84ec5a42ca68894cbf27f - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec  6 02:47:56 np0005548731 nova_compute[232433]: 2025-12-06 07:47:56.525 232437 DEBUG nova.virt.libvirt.driver [None req-42fcbf2c-a5ec-4965-8b34-3427a1d88dad 5933a27f4e504e23a5d084501196c0fb 329d4c9562c84ec5a42ca68894cbf27f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec  6 02:47:56 np0005548731 nova_compute[232433]: 2025-12-06 07:47:56.525 232437 DEBUG nova.virt.libvirt.driver [None req-42fcbf2c-a5ec-4965-8b34-3427a1d88dad 5933a27f4e504e23a5d084501196c0fb 329d4c9562c84ec5a42ca68894cbf27f - - default default] No VIF found with MAC fa:16:3e:06:7a:7f, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec  6 02:47:56 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e360 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:47:56 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:47:56 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:47:56 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:47:56.651 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:47:57 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:47:57 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:47:57 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:47:57.632 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:47:57 np0005548731 nova_compute[232433]: 2025-12-06 07:47:57.777 232437 DEBUG oslo_concurrency.lockutils [None req-42fcbf2c-a5ec-4965-8b34-3427a1d88dad 5933a27f4e504e23a5d084501196c0fb 329d4c9562c84ec5a42ca68894cbf27f - - default default] Lock "e268006c-e26b-4400-8c9a-da1925cd1a57" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 6.805s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  6 02:47:58 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:47:58 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:47:58 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:47:58.653 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:47:58 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e361 e361: 3 total, 3 up, 3 in
Dec  6 02:47:58 np0005548731 nova_compute[232433]: 2025-12-06 07:47:58.814 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 02:47:58 np0005548731 nova_compute[232433]: 2025-12-06 07:47:58.876 232437 DEBUG oslo_concurrency.lockutils [None req-7874dd43-ff05-4873-b5a4-21ed28e1d9e5 5933a27f4e504e23a5d084501196c0fb 329d4c9562c84ec5a42ca68894cbf27f - - default default] Acquiring lock "e268006c-e26b-4400-8c9a-da1925cd1a57" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  6 02:47:58 np0005548731 nova_compute[232433]: 2025-12-06 07:47:58.876 232437 DEBUG oslo_concurrency.lockutils [None req-7874dd43-ff05-4873-b5a4-21ed28e1d9e5 5933a27f4e504e23a5d084501196c0fb 329d4c9562c84ec5a42ca68894cbf27f - - default default] Lock "e268006c-e26b-4400-8c9a-da1925cd1a57" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  6 02:47:58 np0005548731 nova_compute[232433]: 2025-12-06 07:47:58.905 232437 INFO nova.compute.manager [None req-7874dd43-ff05-4873-b5a4-21ed28e1d9e5 5933a27f4e504e23a5d084501196c0fb 329d4c9562c84ec5a42ca68894cbf27f - - default default] [instance: e268006c-e26b-4400-8c9a-da1925cd1a57] Detaching volume d3873e66-0cd3-4a84-b18d-551e693a7554
Dec  6 02:47:59 np0005548731 nova_compute[232433]: 2025-12-06 07:47:59.043 232437 INFO nova.virt.block_device [None req-7874dd43-ff05-4873-b5a4-21ed28e1d9e5 5933a27f4e504e23a5d084501196c0fb 329d4c9562c84ec5a42ca68894cbf27f - - default default] [instance: e268006c-e26b-4400-8c9a-da1925cd1a57] Attempting to driver detach volume d3873e66-0cd3-4a84-b18d-551e693a7554 from mountpoint /dev/vdb
Dec  6 02:47:59 np0005548731 nova_compute[232433]: 2025-12-06 07:47:59.182 232437 DEBUG os_brick.encryptors [None req-7874dd43-ff05-4873-b5a4-21ed28e1d9e5 5933a27f4e504e23a5d084501196c0fb 329d4c9562c84ec5a42ca68894cbf27f - - default default] Using volume encryption metadata '{'encryption_key_id': '7eed8ca9-eab0-4d8c-a68e-c776c26acb17', 'control_location': 'front-end', 'cipher': 'aes-xts-plain64', 'key_size': 256, 'provider': 'luks'}' for connection: {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-d3873e66-0cd3-4a84-b18d-551e693a7554', 'hosts': ['192.168.122.100', '192.168.122.102', '192.168.122.101'], 'ports': ['6789', '6789', '6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': 'd3873e66-0cd3-4a84-b18d-551e693a7554', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': True, 'cacheable': False}, 'status': 'reserved', 'instance': 'e268006c-e26b-4400-8c9a-da1925cd1a57', 'attached_at': '', 'detached_at': '', 'volume_id': 'd3873e66-0cd3-4a84-b18d-551e693a7554', 'serial': '} get_encryption_metadata /usr/lib/python3.9/site-packages/os_brick/encryptors/__init__.py:135
Dec  6 02:47:59 np0005548731 nova_compute[232433]: 2025-12-06 07:47:59.188 232437 DEBUG nova.virt.libvirt.driver [None req-7874dd43-ff05-4873-b5a4-21ed28e1d9e5 5933a27f4e504e23a5d084501196c0fb 329d4c9562c84ec5a42ca68894cbf27f - - default default] Attempting to detach device vdb from instance e268006c-e26b-4400-8c9a-da1925cd1a57 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Dec  6 02:47:59 np0005548731 nova_compute[232433]: 2025-12-06 07:47:59.189 232437 DEBUG nova.virt.libvirt.guest [None req-7874dd43-ff05-4873-b5a4-21ed28e1d9e5 5933a27f4e504e23a5d084501196c0fb 329d4c9562c84ec5a42ca68894cbf27f - - default default] detach device xml: <disk type="network" device="disk">
Dec  6 02:47:59 np0005548731 nova_compute[232433]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Dec  6 02:47:59 np0005548731 nova_compute[232433]:  <source protocol="rbd" name="volumes/volume-d3873e66-0cd3-4a84-b18d-551e693a7554">
Dec  6 02:47:59 np0005548731 nova_compute[232433]:    <host name="192.168.122.100" port="6789"/>
Dec  6 02:47:59 np0005548731 nova_compute[232433]:    <host name="192.168.122.102" port="6789"/>
Dec  6 02:47:59 np0005548731 nova_compute[232433]:    <host name="192.168.122.101" port="6789"/>
Dec  6 02:47:59 np0005548731 nova_compute[232433]:  </source>
Dec  6 02:47:59 np0005548731 nova_compute[232433]:  <target dev="vdb" bus="virtio"/>
Dec  6 02:47:59 np0005548731 nova_compute[232433]:  <serial>d3873e66-0cd3-4a84-b18d-551e693a7554</serial>
Dec  6 02:47:59 np0005548731 nova_compute[232433]:  <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Dec  6 02:47:59 np0005548731 nova_compute[232433]:  <encryption format="luks">
Dec  6 02:47:59 np0005548731 nova_compute[232433]:    <secret type="passphrase" uuid="d7fa13c7-6665-4d30-9609-3f857055e3e1"/>
Dec  6 02:47:59 np0005548731 nova_compute[232433]:  </encryption>
Dec  6 02:47:59 np0005548731 nova_compute[232433]: </disk>
Dec  6 02:47:59 np0005548731 nova_compute[232433]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Dec  6 02:47:59 np0005548731 nova_compute[232433]: 2025-12-06 07:47:59.196 232437 INFO nova.virt.libvirt.driver [None req-7874dd43-ff05-4873-b5a4-21ed28e1d9e5 5933a27f4e504e23a5d084501196c0fb 329d4c9562c84ec5a42ca68894cbf27f - - default default] Successfully detached device vdb from instance e268006c-e26b-4400-8c9a-da1925cd1a57 from the persistent domain config.#033[00m
Dec  6 02:47:59 np0005548731 nova_compute[232433]: 2025-12-06 07:47:59.196 232437 DEBUG nova.virt.libvirt.driver [None req-7874dd43-ff05-4873-b5a4-21ed28e1d9e5 5933a27f4e504e23a5d084501196c0fb 329d4c9562c84ec5a42ca68894cbf27f - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance e268006c-e26b-4400-8c9a-da1925cd1a57 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523#033[00m
Dec  6 02:47:59 np0005548731 nova_compute[232433]: 2025-12-06 07:47:59.197 232437 DEBUG nova.virt.libvirt.guest [None req-7874dd43-ff05-4873-b5a4-21ed28e1d9e5 5933a27f4e504e23a5d084501196c0fb 329d4c9562c84ec5a42ca68894cbf27f - - default default] detach device xml: <disk type="network" device="disk">
Dec  6 02:47:59 np0005548731 nova_compute[232433]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Dec  6 02:47:59 np0005548731 nova_compute[232433]:  <source protocol="rbd" name="volumes/volume-d3873e66-0cd3-4a84-b18d-551e693a7554">
Dec  6 02:47:59 np0005548731 nova_compute[232433]:    <host name="192.168.122.100" port="6789"/>
Dec  6 02:47:59 np0005548731 nova_compute[232433]:    <host name="192.168.122.102" port="6789"/>
Dec  6 02:47:59 np0005548731 nova_compute[232433]:    <host name="192.168.122.101" port="6789"/>
Dec  6 02:47:59 np0005548731 nova_compute[232433]:  </source>
Dec  6 02:47:59 np0005548731 nova_compute[232433]:  <target dev="vdb" bus="virtio"/>
Dec  6 02:47:59 np0005548731 nova_compute[232433]:  <serial>d3873e66-0cd3-4a84-b18d-551e693a7554</serial>
Dec  6 02:47:59 np0005548731 nova_compute[232433]:  <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Dec  6 02:47:59 np0005548731 nova_compute[232433]:  <encryption format="luks">
Dec  6 02:47:59 np0005548731 nova_compute[232433]:    <secret type="passphrase" uuid="d7fa13c7-6665-4d30-9609-3f857055e3e1"/>
Dec  6 02:47:59 np0005548731 nova_compute[232433]:  </encryption>
Dec  6 02:47:59 np0005548731 nova_compute[232433]: </disk>
Dec  6 02:47:59 np0005548731 nova_compute[232433]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Dec  6 02:47:59 np0005548731 nova_compute[232433]: 2025-12-06 07:47:59.246 232437 DEBUG nova.virt.libvirt.driver [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Received event <DeviceRemovedEvent: 1765007279.2463536, e268006c-e26b-4400-8c9a-da1925cd1a57 => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370#033[00m
Dec  6 02:47:59 np0005548731 nova_compute[232433]: 2025-12-06 07:47:59.248 232437 DEBUG nova.virt.libvirt.driver [None req-7874dd43-ff05-4873-b5a4-21ed28e1d9e5 5933a27f4e504e23a5d084501196c0fb 329d4c9562c84ec5a42ca68894cbf27f - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance e268006c-e26b-4400-8c9a-da1925cd1a57 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599#033[00m
Dec  6 02:47:59 np0005548731 nova_compute[232433]: 2025-12-06 07:47:59.249 232437 INFO nova.virt.libvirt.driver [None req-7874dd43-ff05-4873-b5a4-21ed28e1d9e5 5933a27f4e504e23a5d084501196c0fb 329d4c9562c84ec5a42ca68894cbf27f - - default default] Successfully detached device vdb from instance e268006c-e26b-4400-8c9a-da1925cd1a57 from the live domain config.#033[00m
Dec  6 02:47:59 np0005548731 nova_compute[232433]: 2025-12-06 07:47:59.617 232437 DEBUG nova.objects.instance [None req-7874dd43-ff05-4873-b5a4-21ed28e1d9e5 5933a27f4e504e23a5d084501196c0fb 329d4c9562c84ec5a42ca68894cbf27f - - default default] Lazy-loading 'flavor' on Instance uuid e268006c-e26b-4400-8c9a-da1925cd1a57 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:47:59 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:47:59 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:47:59 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:47:59.634 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:47:59 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e362 e362: 3 total, 3 up, 3 in
Dec  6 02:47:59 np0005548731 nova_compute[232433]: 2025-12-06 07:47:59.924 232437 DEBUG oslo_concurrency.lockutils [None req-7874dd43-ff05-4873-b5a4-21ed28e1d9e5 5933a27f4e504e23a5d084501196c0fb 329d4c9562c84ec5a42ca68894cbf27f - - default default] Lock "e268006c-e26b-4400-8c9a-da1925cd1a57" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 1.048s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:48:00 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:48:00 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:48:00 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:48:00.655 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:48:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:48:00.889 143965 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:48:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:48:00.890 143965 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:48:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:48:00.890 143965 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:48:00 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e363 e363: 3 total, 3 up, 3 in
Dec  6 02:48:01 np0005548731 nova_compute[232433]: 2025-12-06 07:48:01.495 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:48:01 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e363 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:48:01 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:48:01 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 02:48:01 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:48:01.637 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 02:48:01 np0005548731 nova_compute[232433]: 2025-12-06 07:48:01.647 232437 DEBUG oslo_concurrency.lockutils [None req-888eadd4-6825-4c07-ab64-2a983732a5ee 5933a27f4e504e23a5d084501196c0fb 329d4c9562c84ec5a42ca68894cbf27f - - default default] Acquiring lock "e268006c-e26b-4400-8c9a-da1925cd1a57" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:48:01 np0005548731 nova_compute[232433]: 2025-12-06 07:48:01.647 232437 DEBUG oslo_concurrency.lockutils [None req-888eadd4-6825-4c07-ab64-2a983732a5ee 5933a27f4e504e23a5d084501196c0fb 329d4c9562c84ec5a42ca68894cbf27f - - default default] Lock "e268006c-e26b-4400-8c9a-da1925cd1a57" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:48:01 np0005548731 nova_compute[232433]: 2025-12-06 07:48:01.647 232437 DEBUG oslo_concurrency.lockutils [None req-888eadd4-6825-4c07-ab64-2a983732a5ee 5933a27f4e504e23a5d084501196c0fb 329d4c9562c84ec5a42ca68894cbf27f - - default default] Acquiring lock "e268006c-e26b-4400-8c9a-da1925cd1a57-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:48:01 np0005548731 nova_compute[232433]: 2025-12-06 07:48:01.648 232437 DEBUG oslo_concurrency.lockutils [None req-888eadd4-6825-4c07-ab64-2a983732a5ee 5933a27f4e504e23a5d084501196c0fb 329d4c9562c84ec5a42ca68894cbf27f - - default default] Lock "e268006c-e26b-4400-8c9a-da1925cd1a57-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:48:01 np0005548731 nova_compute[232433]: 2025-12-06 07:48:01.648 232437 DEBUG oslo_concurrency.lockutils [None req-888eadd4-6825-4c07-ab64-2a983732a5ee 5933a27f4e504e23a5d084501196c0fb 329d4c9562c84ec5a42ca68894cbf27f - - default default] Lock "e268006c-e26b-4400-8c9a-da1925cd1a57-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:48:01 np0005548731 nova_compute[232433]: 2025-12-06 07:48:01.649 232437 INFO nova.compute.manager [None req-888eadd4-6825-4c07-ab64-2a983732a5ee 5933a27f4e504e23a5d084501196c0fb 329d4c9562c84ec5a42ca68894cbf27f - - default default] [instance: e268006c-e26b-4400-8c9a-da1925cd1a57] Terminating instance#033[00m
Dec  6 02:48:01 np0005548731 nova_compute[232433]: 2025-12-06 07:48:01.650 232437 DEBUG nova.compute.manager [None req-888eadd4-6825-4c07-ab64-2a983732a5ee 5933a27f4e504e23a5d084501196c0fb 329d4c9562c84ec5a42ca68894cbf27f - - default default] [instance: e268006c-e26b-4400-8c9a-da1925cd1a57] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec  6 02:48:01 np0005548731 kernel: tap9182be1d-dc (unregistering): left promiscuous mode
Dec  6 02:48:01 np0005548731 NetworkManager[49182]: <info>  [1765007281.6935] device (tap9182be1d-dc): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec  6 02:48:01 np0005548731 ovn_controller[133927]: 2025-12-06T07:48:01Z|00789|binding|INFO|Releasing lport 9182be1d-dcb1-498a-af53-8a36faebc1e2 from this chassis (sb_readonly=0)
Dec  6 02:48:01 np0005548731 nova_compute[232433]: 2025-12-06 07:48:01.699 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:48:01 np0005548731 ovn_controller[133927]: 2025-12-06T07:48:01Z|00790|binding|INFO|Setting lport 9182be1d-dcb1-498a-af53-8a36faebc1e2 down in Southbound
Dec  6 02:48:01 np0005548731 ovn_controller[133927]: 2025-12-06T07:48:01Z|00791|binding|INFO|Removing iface tap9182be1d-dc ovn-installed in OVS
Dec  6 02:48:01 np0005548731 nova_compute[232433]: 2025-12-06 07:48:01.702 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:48:01 np0005548731 nova_compute[232433]: 2025-12-06 07:48:01.720 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:48:01 np0005548731 systemd[1]: machine-qemu\x2d79\x2dinstance\x2d000000a1.scope: Deactivated successfully.
Dec  6 02:48:01 np0005548731 systemd[1]: machine-qemu\x2d79\x2dinstance\x2d000000a1.scope: Consumed 18.056s CPU time.
Dec  6 02:48:01 np0005548731 systemd-machined[195355]: Machine qemu-79-instance-000000a1 terminated.
Dec  6 02:48:01 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:48:01.801 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:06:7a:7f 10.100.0.9'], port_security=['fa:16:3e:06:7a:7f 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'e268006c-e26b-4400-8c9a-da1925cd1a57', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-61f8d3af-3627-497f-85d1-05f726aad6e5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '329d4c9562c84ec5a42ca68894cbf27f', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'b225c3bc-4906-441b-9e23-854a574bcecc', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com', 'neutron:port_fip': '192.168.122.231'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=92477244-099e-4077-9b92-5060681e1a10, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], logical_port=9182be1d-dcb1-498a-af53-8a36faebc1e2) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 02:48:01 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:48:01.802 143965 INFO neutron.agent.ovn.metadata.agent [-] Port 9182be1d-dcb1-498a-af53-8a36faebc1e2 in datapath 61f8d3af-3627-497f-85d1-05f726aad6e5 unbound from our chassis#033[00m
Dec  6 02:48:01 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:48:01.804 143965 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 61f8d3af-3627-497f-85d1-05f726aad6e5, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec  6 02:48:01 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:48:01.805 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[f8e99f25-ccc0-47bf-842a-b71d34b83352]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:48:01 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:48:01.806 143965 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-61f8d3af-3627-497f-85d1-05f726aad6e5 namespace which is not needed anymore#033[00m
Dec  6 02:48:01 np0005548731 nova_compute[232433]: 2025-12-06 07:48:01.883 232437 INFO nova.virt.libvirt.driver [-] [instance: e268006c-e26b-4400-8c9a-da1925cd1a57] Instance destroyed successfully.#033[00m
Dec  6 02:48:01 np0005548731 nova_compute[232433]: 2025-12-06 07:48:01.884 232437 DEBUG nova.objects.instance [None req-888eadd4-6825-4c07-ab64-2a983732a5ee 5933a27f4e504e23a5d084501196c0fb 329d4c9562c84ec5a42ca68894cbf27f - - default default] Lazy-loading 'resources' on Instance uuid e268006c-e26b-4400-8c9a-da1925cd1a57 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:48:01 np0005548731 neutron-haproxy-ovnmeta-61f8d3af-3627-497f-85d1-05f726aad6e5[305245]: [NOTICE]   (305249) : haproxy version is 2.8.14-c23fe91
Dec  6 02:48:01 np0005548731 neutron-haproxy-ovnmeta-61f8d3af-3627-497f-85d1-05f726aad6e5[305245]: [NOTICE]   (305249) : path to executable is /usr/sbin/haproxy
Dec  6 02:48:01 np0005548731 neutron-haproxy-ovnmeta-61f8d3af-3627-497f-85d1-05f726aad6e5[305245]: [WARNING]  (305249) : Exiting Master process...
Dec  6 02:48:01 np0005548731 neutron-haproxy-ovnmeta-61f8d3af-3627-497f-85d1-05f726aad6e5[305245]: [ALERT]    (305249) : Current worker (305251) exited with code 143 (Terminated)
Dec  6 02:48:01 np0005548731 neutron-haproxy-ovnmeta-61f8d3af-3627-497f-85d1-05f726aad6e5[305245]: [WARNING]  (305249) : All workers exited. Exiting... (0)
Dec  6 02:48:01 np0005548731 systemd[1]: libpod-cc78759f05919b753ad2668556e185746a0a2db5dad90a2d0e360090ffe5bafc.scope: Deactivated successfully.
Dec  6 02:48:01 np0005548731 podman[305912]: 2025-12-06 07:48:01.935692005 +0000 UTC m=+0.044354612 container died cc78759f05919b753ad2668556e185746a0a2db5dad90a2d0e360090ffe5bafc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-61f8d3af-3627-497f-85d1-05f726aad6e5, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Dec  6 02:48:01 np0005548731 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-cc78759f05919b753ad2668556e185746a0a2db5dad90a2d0e360090ffe5bafc-userdata-shm.mount: Deactivated successfully.
Dec  6 02:48:01 np0005548731 systemd[1]: var-lib-containers-storage-overlay-e2fd9a41ecf342434ea4260835ff8ece91e4f9f8e438fa2ae26762e87177c6e7-merged.mount: Deactivated successfully.
Dec  6 02:48:01 np0005548731 podman[305912]: 2025-12-06 07:48:01.971210162 +0000 UTC m=+0.079872819 container cleanup cc78759f05919b753ad2668556e185746a0a2db5dad90a2d0e360090ffe5bafc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-61f8d3af-3627-497f-85d1-05f726aad6e5, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  6 02:48:01 np0005548731 systemd[1]: libpod-conmon-cc78759f05919b753ad2668556e185746a0a2db5dad90a2d0e360090ffe5bafc.scope: Deactivated successfully.
Dec  6 02:48:02 np0005548731 podman[305945]: 2025-12-06 07:48:02.027621597 +0000 UTC m=+0.037486065 container remove cc78759f05919b753ad2668556e185746a0a2db5dad90a2d0e360090ffe5bafc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-61f8d3af-3627-497f-85d1-05f726aad6e5, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Dec  6 02:48:02 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:48:02.036 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[3d6a9cc7-d0c9-4758-b771-a47ea305cb93]: (4, ('Sat Dec  6 07:48:01 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-61f8d3af-3627-497f-85d1-05f726aad6e5 (cc78759f05919b753ad2668556e185746a0a2db5dad90a2d0e360090ffe5bafc)\ncc78759f05919b753ad2668556e185746a0a2db5dad90a2d0e360090ffe5bafc\nSat Dec  6 07:48:01 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-61f8d3af-3627-497f-85d1-05f726aad6e5 (cc78759f05919b753ad2668556e185746a0a2db5dad90a2d0e360090ffe5bafc)\ncc78759f05919b753ad2668556e185746a0a2db5dad90a2d0e360090ffe5bafc\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:48:02 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:48:02.037 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[78721d44-82b8-45fd-a601-277865cb52ea]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:48:02 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:48:02.038 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap61f8d3af-30, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:48:02 np0005548731 nova_compute[232433]: 2025-12-06 07:48:02.040 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:48:02 np0005548731 kernel: tap61f8d3af-30: left promiscuous mode
Dec  6 02:48:02 np0005548731 nova_compute[232433]: 2025-12-06 07:48:02.066 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:48:02 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:48:02.069 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[73fc629c-fc78-4e28-b761-9028728529e9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:48:02 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:48:02.086 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[c12f0f66-1f7d-40b6-8dd7-ed3c19a29c11]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:48:02 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:48:02.087 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[c95d3f02-af1d-4d56-bf02-ca0814546f35]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:48:02 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:48:02.101 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[64e952de-6db4-43ca-a0a8-e4bf300b0305]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 750458, 'reachable_time': 31640, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 305964, 'error': None, 'target': 'ovnmeta-61f8d3af-3627-497f-85d1-05f726aad6e5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:48:02 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:48:02.103 144079 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-61f8d3af-3627-497f-85d1-05f726aad6e5 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Dec  6 02:48:02 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:48:02.104 144079 DEBUG oslo.privsep.daemon [-] privsep: reply[e279a50b-7413-4b44-b3a3-e219d81da83d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:48:02 np0005548731 systemd[1]: run-netns-ovnmeta\x2d61f8d3af\x2d3627\x2d497f\x2d85d1\x2d05f726aad6e5.mount: Deactivated successfully.
Dec  6 02:48:02 np0005548731 nova_compute[232433]: 2025-12-06 07:48:02.270 232437 DEBUG nova.virt.libvirt.vif [None req-888eadd4-6825-4c07-ab64-2a983732a5ee 5933a27f4e504e23a5d084501196c0fb 329d4c9562c84ec5a42ca68894cbf27f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T07:47:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestEncryptedCinderVolumes-server-658018281',display_name='tempest-TestEncryptedCinderVolumes-server-658018281',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-testencryptedcindervolumes-server-658018281',id=161,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBzQXsxJ4IlVU9/ziKdus7ckrF5bZNSwdBfqRB5VjmxUmjLd5h8iTt4JA98i7mxL4C828L5gt+P1gg0wPunY6lsZpjBnTw5Zc0jrUn7eK9GFfGlA2dk7bgT7gIXBkA03DA==',key_name='tempest-keypair-527730895',keypairs=<?>,launch_index=0,launched_at=2025-12-06T07:47:24Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='329d4c9562c84ec5a42ca68894cbf27f',ramdisk_id='',reservation_id='r-20kj5uxy',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestEncryptedCinderVolumes-1437526103',owner_user_name='tempest-TestEncryptedCinderVolumes-1437526103-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T07:47:24Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='5933a27f4e504e23a5d084501196c0fb',uuid=e268006c-e26b-4400-8c9a-da1925cd1a57,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "9182be1d-dcb1-498a-af53-8a36faebc1e2", "address": "fa:16:3e:06:7a:7f", "network": {"id": "61f8d3af-3627-497f-85d1-05f726aad6e5", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-1070596354-network", "subnets": [{"cidr": "10.100.0.0/28", 
"dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.231", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "329d4c9562c84ec5a42ca68894cbf27f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9182be1d-dc", "ovs_interfaceid": "9182be1d-dcb1-498a-af53-8a36faebc1e2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec  6 02:48:02 np0005548731 nova_compute[232433]: 2025-12-06 07:48:02.271 232437 DEBUG nova.network.os_vif_util [None req-888eadd4-6825-4c07-ab64-2a983732a5ee 5933a27f4e504e23a5d084501196c0fb 329d4c9562c84ec5a42ca68894cbf27f - - default default] Converting VIF {"id": "9182be1d-dcb1-498a-af53-8a36faebc1e2", "address": "fa:16:3e:06:7a:7f", "network": {"id": "61f8d3af-3627-497f-85d1-05f726aad6e5", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-1070596354-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.231", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "329d4c9562c84ec5a42ca68894cbf27f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9182be1d-dc", "ovs_interfaceid": "9182be1d-dcb1-498a-af53-8a36faebc1e2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 02:48:02 np0005548731 nova_compute[232433]: 2025-12-06 07:48:02.272 232437 DEBUG nova.network.os_vif_util [None req-888eadd4-6825-4c07-ab64-2a983732a5ee 5933a27f4e504e23a5d084501196c0fb 329d4c9562c84ec5a42ca68894cbf27f - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:06:7a:7f,bridge_name='br-int',has_traffic_filtering=True,id=9182be1d-dcb1-498a-af53-8a36faebc1e2,network=Network(61f8d3af-3627-497f-85d1-05f726aad6e5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9182be1d-dc') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 02:48:02 np0005548731 nova_compute[232433]: 2025-12-06 07:48:02.273 232437 DEBUG os_vif [None req-888eadd4-6825-4c07-ab64-2a983732a5ee 5933a27f4e504e23a5d084501196c0fb 329d4c9562c84ec5a42ca68894cbf27f - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:06:7a:7f,bridge_name='br-int',has_traffic_filtering=True,id=9182be1d-dcb1-498a-af53-8a36faebc1e2,network=Network(61f8d3af-3627-497f-85d1-05f726aad6e5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9182be1d-dc') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec  6 02:48:02 np0005548731 nova_compute[232433]: 2025-12-06 07:48:02.276 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:48:02 np0005548731 nova_compute[232433]: 2025-12-06 07:48:02.276 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9182be1d-dc, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:48:02 np0005548731 nova_compute[232433]: 2025-12-06 07:48:02.279 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:48:02 np0005548731 nova_compute[232433]: 2025-12-06 07:48:02.282 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  6 02:48:02 np0005548731 nova_compute[232433]: 2025-12-06 07:48:02.284 232437 INFO os_vif [None req-888eadd4-6825-4c07-ab64-2a983732a5ee 5933a27f4e504e23a5d084501196c0fb 329d4c9562c84ec5a42ca68894cbf27f - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:06:7a:7f,bridge_name='br-int',has_traffic_filtering=True,id=9182be1d-dcb1-498a-af53-8a36faebc1e2,network=Network(61f8d3af-3627-497f-85d1-05f726aad6e5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9182be1d-dc')#033[00m
Dec  6 02:48:02 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:48:02 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:48:02 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:48:02.657 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:48:03 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e364 e364: 3 total, 3 up, 3 in
Dec  6 02:48:03 np0005548731 nova_compute[232433]: 2025-12-06 07:48:03.100 232437 INFO nova.virt.libvirt.driver [None req-888eadd4-6825-4c07-ab64-2a983732a5ee 5933a27f4e504e23a5d084501196c0fb 329d4c9562c84ec5a42ca68894cbf27f - - default default] [instance: e268006c-e26b-4400-8c9a-da1925cd1a57] Deleting instance files /var/lib/nova/instances/e268006c-e26b-4400-8c9a-da1925cd1a57_del#033[00m
Dec  6 02:48:03 np0005548731 nova_compute[232433]: 2025-12-06 07:48:03.101 232437 INFO nova.virt.libvirt.driver [None req-888eadd4-6825-4c07-ab64-2a983732a5ee 5933a27f4e504e23a5d084501196c0fb 329d4c9562c84ec5a42ca68894cbf27f - - default default] [instance: e268006c-e26b-4400-8c9a-da1925cd1a57] Deletion of /var/lib/nova/instances/e268006c-e26b-4400-8c9a-da1925cd1a57_del complete#033[00m
Dec  6 02:48:03 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:48:03 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:48:03 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:48:03.641 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:48:03 np0005548731 nova_compute[232433]: 2025-12-06 07:48:03.676 232437 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765007268.6751995, 2b4b3181-da12-4705-9215-7ee5b869102b => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 02:48:03 np0005548731 nova_compute[232433]: 2025-12-06 07:48:03.676 232437 INFO nova.compute.manager [-] [instance: 2b4b3181-da12-4705-9215-7ee5b869102b] VM Stopped (Lifecycle Event)#033[00m
Dec  6 02:48:03 np0005548731 nova_compute[232433]: 2025-12-06 07:48:03.816 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:48:04 np0005548731 nova_compute[232433]: 2025-12-06 07:48:04.063 232437 DEBUG nova.compute.manager [None req-3ec32bc8-0e07-46c4-98a6-ec3c2a5b1879 - - - - - -] [instance: 2b4b3181-da12-4705-9215-7ee5b869102b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:48:04 np0005548731 nova_compute[232433]: 2025-12-06 07:48:04.122 232437 INFO nova.compute.manager [None req-888eadd4-6825-4c07-ab64-2a983732a5ee 5933a27f4e504e23a5d084501196c0fb 329d4c9562c84ec5a42ca68894cbf27f - - default default] [instance: e268006c-e26b-4400-8c9a-da1925cd1a57] Took 2.47 seconds to destroy the instance on the hypervisor.#033[00m
Dec  6 02:48:04 np0005548731 nova_compute[232433]: 2025-12-06 07:48:04.123 232437 DEBUG oslo.service.loopingcall [None req-888eadd4-6825-4c07-ab64-2a983732a5ee 5933a27f4e504e23a5d084501196c0fb 329d4c9562c84ec5a42ca68894cbf27f - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Dec  6 02:48:04 np0005548731 nova_compute[232433]: 2025-12-06 07:48:04.123 232437 DEBUG nova.compute.manager [-] [instance: e268006c-e26b-4400-8c9a-da1925cd1a57] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Dec  6 02:48:04 np0005548731 nova_compute[232433]: 2025-12-06 07:48:04.124 232437 DEBUG nova.network.neutron [-] [instance: e268006c-e26b-4400-8c9a-da1925cd1a57] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Dec  6 02:48:04 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:48:04 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:48:04 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:48:04.659 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:48:05 np0005548731 nova_compute[232433]: 2025-12-06 07:48:05.258 232437 DEBUG nova.compute.manager [req-1885a3b1-48c7-4530-b7a2-df74254fc3ef req-69668c56-a9ca-4435-85d0-46543d3e4c3c 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: e268006c-e26b-4400-8c9a-da1925cd1a57] Received event network-vif-unplugged-9182be1d-dcb1-498a-af53-8a36faebc1e2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:48:05 np0005548731 nova_compute[232433]: 2025-12-06 07:48:05.258 232437 DEBUG oslo_concurrency.lockutils [req-1885a3b1-48c7-4530-b7a2-df74254fc3ef req-69668c56-a9ca-4435-85d0-46543d3e4c3c 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "e268006c-e26b-4400-8c9a-da1925cd1a57-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:48:05 np0005548731 nova_compute[232433]: 2025-12-06 07:48:05.259 232437 DEBUG oslo_concurrency.lockutils [req-1885a3b1-48c7-4530-b7a2-df74254fc3ef req-69668c56-a9ca-4435-85d0-46543d3e4c3c 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "e268006c-e26b-4400-8c9a-da1925cd1a57-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:48:05 np0005548731 nova_compute[232433]: 2025-12-06 07:48:05.259 232437 DEBUG oslo_concurrency.lockutils [req-1885a3b1-48c7-4530-b7a2-df74254fc3ef req-69668c56-a9ca-4435-85d0-46543d3e4c3c 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "e268006c-e26b-4400-8c9a-da1925cd1a57-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:48:05 np0005548731 nova_compute[232433]: 2025-12-06 07:48:05.259 232437 DEBUG nova.compute.manager [req-1885a3b1-48c7-4530-b7a2-df74254fc3ef req-69668c56-a9ca-4435-85d0-46543d3e4c3c 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: e268006c-e26b-4400-8c9a-da1925cd1a57] No waiting events found dispatching network-vif-unplugged-9182be1d-dcb1-498a-af53-8a36faebc1e2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 02:48:05 np0005548731 nova_compute[232433]: 2025-12-06 07:48:05.260 232437 DEBUG nova.compute.manager [req-1885a3b1-48c7-4530-b7a2-df74254fc3ef req-69668c56-a9ca-4435-85d0-46543d3e4c3c 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: e268006c-e26b-4400-8c9a-da1925cd1a57] Received event network-vif-unplugged-9182be1d-dcb1-498a-af53-8a36faebc1e2 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Dec  6 02:48:05 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e365 e365: 3 total, 3 up, 3 in
Dec  6 02:48:05 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:48:05 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:48:05 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:48:05.644 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:48:05 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:48:05.941 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=62, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:ec:b3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '32:72:e7:89:e0:7d'}, ipsec=False) old=SB_Global(nb_cfg=61) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 02:48:05 np0005548731 nova_compute[232433]: 2025-12-06 07:48:05.942 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:48:05 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:48:05.943 143965 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec  6 02:48:06 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e365 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:48:06 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:48:06 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:48:06 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:48:06.661 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:48:07 np0005548731 nova_compute[232433]: 2025-12-06 07:48:07.027 232437 DEBUG nova.compute.manager [req-6ecec602-a978-48ad-adf0-253198d20265 req-ea0d9251-b09e-4bf2-a3ef-2e210fea1d42 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: e268006c-e26b-4400-8c9a-da1925cd1a57] Received event network-vif-deleted-9182be1d-dcb1-498a-af53-8a36faebc1e2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:48:07 np0005548731 nova_compute[232433]: 2025-12-06 07:48:07.028 232437 INFO nova.compute.manager [req-6ecec602-a978-48ad-adf0-253198d20265 req-ea0d9251-b09e-4bf2-a3ef-2e210fea1d42 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: e268006c-e26b-4400-8c9a-da1925cd1a57] Neutron deleted interface 9182be1d-dcb1-498a-af53-8a36faebc1e2; detaching it from the instance and deleting it from the info cache#033[00m
Dec  6 02:48:07 np0005548731 nova_compute[232433]: 2025-12-06 07:48:07.028 232437 DEBUG nova.network.neutron [req-6ecec602-a978-48ad-adf0-253198d20265 req-ea0d9251-b09e-4bf2-a3ef-2e210fea1d42 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: e268006c-e26b-4400-8c9a-da1925cd1a57] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 02:48:07 np0005548731 nova_compute[232433]: 2025-12-06 07:48:07.035 232437 DEBUG nova.network.neutron [-] [instance: e268006c-e26b-4400-8c9a-da1925cd1a57] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 02:48:07 np0005548731 nova_compute[232433]: 2025-12-06 07:48:07.246 232437 INFO nova.compute.manager [-] [instance: e268006c-e26b-4400-8c9a-da1925cd1a57] Took 3.12 seconds to deallocate network for instance.#033[00m
Dec  6 02:48:07 np0005548731 nova_compute[232433]: 2025-12-06 07:48:07.251 232437 DEBUG nova.compute.manager [req-6ecec602-a978-48ad-adf0-253198d20265 req-ea0d9251-b09e-4bf2-a3ef-2e210fea1d42 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: e268006c-e26b-4400-8c9a-da1925cd1a57] Detach interface failed, port_id=9182be1d-dcb1-498a-af53-8a36faebc1e2, reason: Instance e268006c-e26b-4400-8c9a-da1925cd1a57 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882#033[00m
Dec  6 02:48:07 np0005548731 nova_compute[232433]: 2025-12-06 07:48:07.328 232437 DEBUG oslo_concurrency.lockutils [None req-888eadd4-6825-4c07-ab64-2a983732a5ee 5933a27f4e504e23a5d084501196c0fb 329d4c9562c84ec5a42ca68894cbf27f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:48:07 np0005548731 nova_compute[232433]: 2025-12-06 07:48:07.328 232437 DEBUG oslo_concurrency.lockutils [None req-888eadd4-6825-4c07-ab64-2a983732a5ee 5933a27f4e504e23a5d084501196c0fb 329d4c9562c84ec5a42ca68894cbf27f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:48:07 np0005548731 nova_compute[232433]: 2025-12-06 07:48:07.329 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:48:07 np0005548731 nova_compute[232433]: 2025-12-06 07:48:07.597 232437 DEBUG oslo_concurrency.processutils [None req-888eadd4-6825-4c07-ab64-2a983732a5ee 5933a27f4e504e23a5d084501196c0fb 329d4c9562c84ec5a42ca68894cbf27f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:48:07 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:48:07 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:48:07 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:48:07.646 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:48:07 np0005548731 nova_compute[232433]: 2025-12-06 07:48:07.767 232437 DEBUG nova.compute.manager [req-a5f64559-bde1-423a-b9a8-8c552f119d3c req-4150437f-56e3-4aed-a30c-908c080e3663 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: e268006c-e26b-4400-8c9a-da1925cd1a57] Received event network-vif-plugged-9182be1d-dcb1-498a-af53-8a36faebc1e2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:48:07 np0005548731 nova_compute[232433]: 2025-12-06 07:48:07.767 232437 DEBUG oslo_concurrency.lockutils [req-a5f64559-bde1-423a-b9a8-8c552f119d3c req-4150437f-56e3-4aed-a30c-908c080e3663 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "e268006c-e26b-4400-8c9a-da1925cd1a57-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:48:07 np0005548731 nova_compute[232433]: 2025-12-06 07:48:07.768 232437 DEBUG oslo_concurrency.lockutils [req-a5f64559-bde1-423a-b9a8-8c552f119d3c req-4150437f-56e3-4aed-a30c-908c080e3663 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "e268006c-e26b-4400-8c9a-da1925cd1a57-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:48:07 np0005548731 nova_compute[232433]: 2025-12-06 07:48:07.768 232437 DEBUG oslo_concurrency.lockutils [req-a5f64559-bde1-423a-b9a8-8c552f119d3c req-4150437f-56e3-4aed-a30c-908c080e3663 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "e268006c-e26b-4400-8c9a-da1925cd1a57-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:48:07 np0005548731 nova_compute[232433]: 2025-12-06 07:48:07.768 232437 DEBUG nova.compute.manager [req-a5f64559-bde1-423a-b9a8-8c552f119d3c req-4150437f-56e3-4aed-a30c-908c080e3663 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: e268006c-e26b-4400-8c9a-da1925cd1a57] No waiting events found dispatching network-vif-plugged-9182be1d-dcb1-498a-af53-8a36faebc1e2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 02:48:07 np0005548731 nova_compute[232433]: 2025-12-06 07:48:07.768 232437 WARNING nova.compute.manager [req-a5f64559-bde1-423a-b9a8-8c552f119d3c req-4150437f-56e3-4aed-a30c-908c080e3663 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: e268006c-e26b-4400-8c9a-da1925cd1a57] Received unexpected event network-vif-plugged-9182be1d-dcb1-498a-af53-8a36faebc1e2 for instance with vm_state deleted and task_state None.#033[00m
Dec  6 02:48:08 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 02:48:08 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1771189620' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 02:48:08 np0005548731 nova_compute[232433]: 2025-12-06 07:48:08.037 232437 DEBUG oslo_concurrency.processutils [None req-888eadd4-6825-4c07-ab64-2a983732a5ee 5933a27f4e504e23a5d084501196c0fb 329d4c9562c84ec5a42ca68894cbf27f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.439s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:48:08 np0005548731 nova_compute[232433]: 2025-12-06 07:48:08.042 232437 DEBUG nova.compute.provider_tree [None req-888eadd4-6825-4c07-ab64-2a983732a5ee 5933a27f4e504e23a5d084501196c0fb 329d4c9562c84ec5a42ca68894cbf27f - - default default] Inventory has not changed in ProviderTree for provider: 6d00757a-082f-486d-ae84-869a2ba2e6e7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  6 02:48:08 np0005548731 nova_compute[232433]: 2025-12-06 07:48:08.335 232437 DEBUG nova.scheduler.client.report [None req-888eadd4-6825-4c07-ab64-2a983732a5ee 5933a27f4e504e23a5d084501196c0fb 329d4c9562c84ec5a42ca68894cbf27f - - default default] Inventory has not changed for provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  6 02:48:08 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:48:08 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:48:08 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:48:08.663 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:48:08 np0005548731 nova_compute[232433]: 2025-12-06 07:48:08.691 232437 DEBUG oslo_concurrency.lockutils [None req-888eadd4-6825-4c07-ab64-2a983732a5ee 5933a27f4e504e23a5d084501196c0fb 329d4c9562c84ec5a42ca68894cbf27f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.363s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:48:08 np0005548731 nova_compute[232433]: 2025-12-06 07:48:08.818 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:48:09 np0005548731 nova_compute[232433]: 2025-12-06 07:48:09.040 232437 INFO nova.scheduler.client.report [None req-888eadd4-6825-4c07-ab64-2a983732a5ee 5933a27f4e504e23a5d084501196c0fb 329d4c9562c84ec5a42ca68894cbf27f - - default default] Deleted allocations for instance e268006c-e26b-4400-8c9a-da1925cd1a57#033[00m
Dec  6 02:48:09 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:48:09 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:48:09 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:48:09.649 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:48:09 np0005548731 nova_compute[232433]: 2025-12-06 07:48:09.862 232437 DEBUG oslo_concurrency.lockutils [None req-888eadd4-6825-4c07-ab64-2a983732a5ee 5933a27f4e504e23a5d084501196c0fb 329d4c9562c84ec5a42ca68894cbf27f - - default default] Lock "e268006c-e26b-4400-8c9a-da1925cd1a57" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 8.215s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:48:10 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:48:10 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 02:48:10 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:48:10.665 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 02:48:11 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e366 e366: 3 total, 3 up, 3 in
Dec  6 02:48:11 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e366 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:48:11 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:48:11 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:48:11 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:48:11.652 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:48:11 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:48:11.945 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=9f96b960-b4f2-40bd-ae99-08121f5e8b78, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '62'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:48:12 np0005548731 nova_compute[232433]: 2025-12-06 07:48:12.330 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:48:12 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:48:12 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:48:12 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:48:12.667 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:48:13 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e367 e367: 3 total, 3 up, 3 in
Dec  6 02:48:13 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:48:13 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:48:13 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:48:13.655 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:48:13 np0005548731 nova_compute[232433]: 2025-12-06 07:48:13.820 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:48:14 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:48:14 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:48:14 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:48:14.668 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:48:15 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:48:15 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:48:15 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:48:15.657 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:48:15 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e368 e368: 3 total, 3 up, 3 in
Dec  6 02:48:16 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e368 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:48:16 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:48:16 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:48:16 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:48:16.670 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:48:16 np0005548731 nova_compute[232433]: 2025-12-06 07:48:16.883 232437 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765007281.8810577, e268006c-e26b-4400-8c9a-da1925cd1a57 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 02:48:16 np0005548731 nova_compute[232433]: 2025-12-06 07:48:16.883 232437 INFO nova.compute.manager [-] [instance: e268006c-e26b-4400-8c9a-da1925cd1a57] VM Stopped (Lifecycle Event)#033[00m
Dec  6 02:48:16 np0005548731 nova_compute[232433]: 2025-12-06 07:48:16.906 232437 DEBUG nova.compute.manager [None req-8788872e-f030-485f-93d5-93e1e870001d - - - - - -] [instance: e268006c-e26b-4400-8c9a-da1925cd1a57] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:48:16 np0005548731 podman[306066]: 2025-12-06 07:48:16.922564681 +0000 UTC m=+0.078961875 container health_status a118c3becf56cfbcae208065771ebf10a65162bb7b96389971293e534823fb78 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3)
Dec  6 02:48:16 np0005548731 podman[306065]: 2025-12-06 07:48:16.92250131 +0000 UTC m=+0.079380167 container health_status 26c2ce9bdb83c6db5ccad7f5b2a7cb4e9354ab0235ad2f942ea42c8feda2142b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Dec  6 02:48:16 np0005548731 podman[306067]: 2025-12-06 07:48:16.930045804 +0000 UTC m=+0.082845361 container health_status ae4fd08cdec31d6ae2a6b91ffff7deb6ca5a8375cb52ee4b685cc6f25796824e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=multipathd)
Dec  6 02:48:17 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  6 02:48:17 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2535541170' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  6 02:48:17 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  6 02:48:17 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2535541170' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  6 02:48:17 np0005548731 nova_compute[232433]: 2025-12-06 07:48:17.333 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:48:17 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:48:17 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:48:17 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:48:17.660 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:48:18 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:48:18 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:48:18 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:48:18.672 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:48:18 np0005548731 nova_compute[232433]: 2025-12-06 07:48:18.861 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:48:19 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e369 e369: 3 total, 3 up, 3 in
Dec  6 02:48:19 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:48:19 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:48:19 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:48:19.662 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:48:19 np0005548731 nova_compute[232433]: 2025-12-06 07:48:19.825 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:48:20 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:48:20 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:48:20 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:48:20.675 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:48:20 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e370 e370: 3 total, 3 up, 3 in
Dec  6 02:48:21 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e370 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:48:21 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:48:21 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:48:21 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:48:21.665 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:48:22 np0005548731 nova_compute[232433]: 2025-12-06 07:48:22.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:48:22 np0005548731 nova_compute[232433]: 2025-12-06 07:48:22.105 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:48:22 np0005548731 nova_compute[232433]: 2025-12-06 07:48:22.105 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  6 02:48:22 np0005548731 nova_compute[232433]: 2025-12-06 07:48:22.106 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  6 02:48:22 np0005548731 nova_compute[232433]: 2025-12-06 07:48:22.126 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Dec  6 02:48:22 np0005548731 nova_compute[232433]: 2025-12-06 07:48:22.373 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:48:22 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:48:22 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:48:22 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:48:22.677 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:48:22 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e371 e371: 3 total, 3 up, 3 in
Dec  6 02:48:23 np0005548731 nova_compute[232433]: 2025-12-06 07:48:23.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:48:23 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:48:23 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:48:23 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:48:23.667 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:48:23 np0005548731 nova_compute[232433]: 2025-12-06 07:48:23.862 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:48:24 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:48:24 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:48:24 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:48:24.680 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:48:25 np0005548731 nova_compute[232433]: 2025-12-06 07:48:25.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:48:25 np0005548731 nova_compute[232433]: 2025-12-06 07:48:25.104 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  6 02:48:25 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:48:25 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 02:48:25 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:48:25.669 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 02:48:25 np0005548731 nova_compute[232433]: 2025-12-06 07:48:25.988 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:48:26 np0005548731 nova_compute[232433]: 2025-12-06 07:48:26.388 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:48:26 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e371 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:48:26 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:48:26 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:48:26 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:48:26.682 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:48:27 np0005548731 nova_compute[232433]: 2025-12-06 07:48:27.375 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:48:27 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:48:27 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:48:27 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:48:27.673 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:48:28 np0005548731 nova_compute[232433]: 2025-12-06 07:48:28.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:48:28 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:48:28 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:48:28 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:48:28.684 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:48:28 np0005548731 nova_compute[232433]: 2025-12-06 07:48:28.864 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:48:29 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:48:29 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 02:48:29 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:48:29.676 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 02:48:30 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:48:30 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:48:30 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:48:30.686 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:48:30 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e372 e372: 3 total, 3 up, 3 in
Dec  6 02:48:31 np0005548731 nova_compute[232433]: 2025-12-06 07:48:31.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:48:31 np0005548731 nova_compute[232433]: 2025-12-06 07:48:31.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:48:31 np0005548731 nova_compute[232433]: 2025-12-06 07:48:31.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:48:31 np0005548731 nova_compute[232433]: 2025-12-06 07:48:31.155 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:48:31 np0005548731 nova_compute[232433]: 2025-12-06 07:48:31.156 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:48:31 np0005548731 nova_compute[232433]: 2025-12-06 07:48:31.156 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:48:31 np0005548731 nova_compute[232433]: 2025-12-06 07:48:31.156 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  6 02:48:31 np0005548731 nova_compute[232433]: 2025-12-06 07:48:31.156 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:48:31 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e372 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:48:31 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 02:48:31 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2733307890' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 02:48:31 np0005548731 nova_compute[232433]: 2025-12-06 07:48:31.621 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.464s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:48:31 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:48:31 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:48:31 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:48:31.680 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:48:31 np0005548731 nova_compute[232433]: 2025-12-06 07:48:31.781 232437 WARNING nova.virt.libvirt.driver [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  6 02:48:31 np0005548731 nova_compute[232433]: 2025-12-06 07:48:31.782 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4303MB free_disk=20.988113403320312GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  6 02:48:31 np0005548731 nova_compute[232433]: 2025-12-06 07:48:31.783 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:48:31 np0005548731 nova_compute[232433]: 2025-12-06 07:48:31.783 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:48:32 np0005548731 nova_compute[232433]: 2025-12-06 07:48:32.046 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  6 02:48:32 np0005548731 nova_compute[232433]: 2025-12-06 07:48:32.046 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  6 02:48:32 np0005548731 nova_compute[232433]: 2025-12-06 07:48:32.064 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:48:32 np0005548731 nova_compute[232433]: 2025-12-06 07:48:32.377 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:48:32 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 02:48:32 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2858422135' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 02:48:32 np0005548731 nova_compute[232433]: 2025-12-06 07:48:32.511 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:48:32 np0005548731 nova_compute[232433]: 2025-12-06 07:48:32.518 232437 DEBUG nova.compute.provider_tree [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Inventory has not changed in ProviderTree for provider: 6d00757a-082f-486d-ae84-869a2ba2e6e7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  6 02:48:32 np0005548731 nova_compute[232433]: 2025-12-06 07:48:32.596 232437 DEBUG nova.scheduler.client.report [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Inventory has not changed for provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  6 02:48:32 np0005548731 nova_compute[232433]: 2025-12-06 07:48:32.670 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  6 02:48:32 np0005548731 nova_compute[232433]: 2025-12-06 07:48:32.671 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.888s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:48:32 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:48:32 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:48:32 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:48:32.688 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:48:33 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:48:33 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:48:33 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:48:33.683 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:48:33 np0005548731 nova_compute[232433]: 2025-12-06 07:48:33.927 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:48:34 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:48:34 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:48:34 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:48:34.691 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:48:34 np0005548731 nova_compute[232433]: 2025-12-06 07:48:34.901 232437 DEBUG oslo_concurrency.lockutils [None req-8c4a269c-3373-4cc1-902b-1fb9e8584580 f1ecb7cb2b5e454d80f5a0ba9240a894 86b361be239c42758c8ab85ae6854857 - - default default] Acquiring lock "c44fa39f-3d4d-4c6f-bbb2-5761daec080d" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:48:34 np0005548731 nova_compute[232433]: 2025-12-06 07:48:34.902 232437 DEBUG oslo_concurrency.lockutils [None req-8c4a269c-3373-4cc1-902b-1fb9e8584580 f1ecb7cb2b5e454d80f5a0ba9240a894 86b361be239c42758c8ab85ae6854857 - - default default] Lock "c44fa39f-3d4d-4c6f-bbb2-5761daec080d" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:48:34 np0005548731 nova_compute[232433]: 2025-12-06 07:48:34.924 232437 DEBUG nova.compute.manager [None req-8c4a269c-3373-4cc1-902b-1fb9e8584580 f1ecb7cb2b5e454d80f5a0ba9240a894 86b361be239c42758c8ab85ae6854857 - - default default] [instance: c44fa39f-3d4d-4c6f-bbb2-5761daec080d] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Dec  6 02:48:35 np0005548731 nova_compute[232433]: 2025-12-06 07:48:35.214 232437 DEBUG oslo_concurrency.lockutils [None req-8c4a269c-3373-4cc1-902b-1fb9e8584580 f1ecb7cb2b5e454d80f5a0ba9240a894 86b361be239c42758c8ab85ae6854857 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:48:35 np0005548731 nova_compute[232433]: 2025-12-06 07:48:35.215 232437 DEBUG oslo_concurrency.lockutils [None req-8c4a269c-3373-4cc1-902b-1fb9e8584580 f1ecb7cb2b5e454d80f5a0ba9240a894 86b361be239c42758c8ab85ae6854857 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:48:35 np0005548731 nova_compute[232433]: 2025-12-06 07:48:35.231 232437 DEBUG nova.virt.hardware [None req-8c4a269c-3373-4cc1-902b-1fb9e8584580 f1ecb7cb2b5e454d80f5a0ba9240a894 86b361be239c42758c8ab85ae6854857 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Dec  6 02:48:35 np0005548731 nova_compute[232433]: 2025-12-06 07:48:35.232 232437 INFO nova.compute.claims [None req-8c4a269c-3373-4cc1-902b-1fb9e8584580 f1ecb7cb2b5e454d80f5a0ba9240a894 86b361be239c42758c8ab85ae6854857 - - default default] [instance: c44fa39f-3d4d-4c6f-bbb2-5761daec080d] Claim successful on node compute-2.ctlplane.example.com#033[00m
Dec  6 02:48:35 np0005548731 nova_compute[232433]: 2025-12-06 07:48:35.671 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:48:35 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:48:35 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:48:35 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:48:35.687 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:48:36 np0005548731 nova_compute[232433]: 2025-12-06 07:48:36.117 232437 DEBUG oslo_concurrency.processutils [None req-8c4a269c-3373-4cc1-902b-1fb9e8584580 f1ecb7cb2b5e454d80f5a0ba9240a894 86b361be239c42758c8ab85ae6854857 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:48:36 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e372 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:48:36 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 02:48:36 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3510655874' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 02:48:36 np0005548731 nova_compute[232433]: 2025-12-06 07:48:36.600 232437 DEBUG oslo_concurrency.processutils [None req-8c4a269c-3373-4cc1-902b-1fb9e8584580 f1ecb7cb2b5e454d80f5a0ba9240a894 86b361be239c42758c8ab85ae6854857 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.483s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:48:36 np0005548731 nova_compute[232433]: 2025-12-06 07:48:36.607 232437 DEBUG nova.compute.provider_tree [None req-8c4a269c-3373-4cc1-902b-1fb9e8584580 f1ecb7cb2b5e454d80f5a0ba9240a894 86b361be239c42758c8ab85ae6854857 - - default default] Inventory has not changed in ProviderTree for provider: 6d00757a-082f-486d-ae84-869a2ba2e6e7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  6 02:48:36 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:48:36 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:48:36 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:48:36.693 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:48:37 np0005548731 nova_compute[232433]: 2025-12-06 07:48:37.379 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:48:37 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:48:37 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:48:37 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:48:37.690 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:48:38 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:48:38 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:48:38 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:48:38.695 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:48:38 np0005548731 nova_compute[232433]: 2025-12-06 07:48:38.929 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:48:39 np0005548731 nova_compute[232433]: 2025-12-06 07:48:39.687 232437 DEBUG nova.scheduler.client.report [None req-8c4a269c-3373-4cc1-902b-1fb9e8584580 f1ecb7cb2b5e454d80f5a0ba9240a894 86b361be239c42758c8ab85ae6854857 - - default default] Inventory has not changed for provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  6 02:48:39 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:48:39 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:48:39 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:48:39.692 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:48:40 np0005548731 nova_compute[232433]: 2025-12-06 07:48:40.198 232437 DEBUG oslo_concurrency.lockutils [None req-8c4a269c-3373-4cc1-902b-1fb9e8584580 f1ecb7cb2b5e454d80f5a0ba9240a894 86b361be239c42758c8ab85ae6854857 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 4.983s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:48:40 np0005548731 nova_compute[232433]: 2025-12-06 07:48:40.199 232437 DEBUG nova.compute.manager [None req-8c4a269c-3373-4cc1-902b-1fb9e8584580 f1ecb7cb2b5e454d80f5a0ba9240a894 86b361be239c42758c8ab85ae6854857 - - default default] [instance: c44fa39f-3d4d-4c6f-bbb2-5761daec080d] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Dec  6 02:48:40 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:48:40 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:48:40 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:48:40.697 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:48:40 np0005548731 nova_compute[232433]: 2025-12-06 07:48:40.783 232437 DEBUG nova.compute.manager [None req-8c4a269c-3373-4cc1-902b-1fb9e8584580 f1ecb7cb2b5e454d80f5a0ba9240a894 86b361be239c42758c8ab85ae6854857 - - default default] [instance: c44fa39f-3d4d-4c6f-bbb2-5761daec080d] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Dec  6 02:48:40 np0005548731 nova_compute[232433]: 2025-12-06 07:48:40.784 232437 DEBUG nova.network.neutron [None req-8c4a269c-3373-4cc1-902b-1fb9e8584580 f1ecb7cb2b5e454d80f5a0ba9240a894 86b361be239c42758c8ab85ae6854857 - - default default] [instance: c44fa39f-3d4d-4c6f-bbb2-5761daec080d] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Dec  6 02:48:41 np0005548731 nova_compute[232433]: 2025-12-06 07:48:41.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:48:41 np0005548731 nova_compute[232433]: 2025-12-06 07:48:41.105 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Dec  6 02:48:41 np0005548731 nova_compute[232433]: 2025-12-06 07:48:41.131 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Dec  6 02:48:41 np0005548731 nova_compute[232433]: 2025-12-06 07:48:41.139 232437 INFO nova.virt.libvirt.driver [None req-8c4a269c-3373-4cc1-902b-1fb9e8584580 f1ecb7cb2b5e454d80f5a0ba9240a894 86b361be239c42758c8ab85ae6854857 - - default default] [instance: c44fa39f-3d4d-4c6f-bbb2-5761daec080d] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Dec  6 02:48:41 np0005548731 nova_compute[232433]: 2025-12-06 07:48:41.358 232437 DEBUG nova.compute.manager [None req-8c4a269c-3373-4cc1-902b-1fb9e8584580 f1ecb7cb2b5e454d80f5a0ba9240a894 86b361be239c42758c8ab85ae6854857 - - default default] [instance: c44fa39f-3d4d-4c6f-bbb2-5761daec080d] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Dec  6 02:48:41 np0005548731 nova_compute[232433]: 2025-12-06 07:48:41.546 232437 DEBUG nova.compute.manager [None req-8c4a269c-3373-4cc1-902b-1fb9e8584580 f1ecb7cb2b5e454d80f5a0ba9240a894 86b361be239c42758c8ab85ae6854857 - - default default] [instance: c44fa39f-3d4d-4c6f-bbb2-5761daec080d] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Dec  6 02:48:41 np0005548731 nova_compute[232433]: 2025-12-06 07:48:41.547 232437 DEBUG nova.virt.libvirt.driver [None req-8c4a269c-3373-4cc1-902b-1fb9e8584580 f1ecb7cb2b5e454d80f5a0ba9240a894 86b361be239c42758c8ab85ae6854857 - - default default] [instance: c44fa39f-3d4d-4c6f-bbb2-5761daec080d] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Dec  6 02:48:41 np0005548731 nova_compute[232433]: 2025-12-06 07:48:41.547 232437 INFO nova.virt.libvirt.driver [None req-8c4a269c-3373-4cc1-902b-1fb9e8584580 f1ecb7cb2b5e454d80f5a0ba9240a894 86b361be239c42758c8ab85ae6854857 - - default default] [instance: c44fa39f-3d4d-4c6f-bbb2-5761daec080d] Creating image(s)#033[00m
Dec  6 02:48:41 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e372 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:48:41 np0005548731 nova_compute[232433]: 2025-12-06 07:48:41.573 232437 DEBUG nova.storage.rbd_utils [None req-8c4a269c-3373-4cc1-902b-1fb9e8584580 f1ecb7cb2b5e454d80f5a0ba9240a894 86b361be239c42758c8ab85ae6854857 - - default default] rbd image c44fa39f-3d4d-4c6f-bbb2-5761daec080d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:48:41 np0005548731 nova_compute[232433]: 2025-12-06 07:48:41.599 232437 DEBUG nova.storage.rbd_utils [None req-8c4a269c-3373-4cc1-902b-1fb9e8584580 f1ecb7cb2b5e454d80f5a0ba9240a894 86b361be239c42758c8ab85ae6854857 - - default default] rbd image c44fa39f-3d4d-4c6f-bbb2-5761daec080d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:48:41 np0005548731 nova_compute[232433]: 2025-12-06 07:48:41.627 232437 DEBUG nova.storage.rbd_utils [None req-8c4a269c-3373-4cc1-902b-1fb9e8584580 f1ecb7cb2b5e454d80f5a0ba9240a894 86b361be239c42758c8ab85ae6854857 - - default default] rbd image c44fa39f-3d4d-4c6f-bbb2-5761daec080d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:48:41 np0005548731 nova_compute[232433]: 2025-12-06 07:48:41.631 232437 DEBUG oslo_concurrency.processutils [None req-8c4a269c-3373-4cc1-902b-1fb9e8584580 f1ecb7cb2b5e454d80f5a0ba9240a894 86b361be239c42758c8ab85ae6854857 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:48:41 np0005548731 nova_compute[232433]: 2025-12-06 07:48:41.696 232437 DEBUG oslo_concurrency.processutils [None req-8c4a269c-3373-4cc1-902b-1fb9e8584580 f1ecb7cb2b5e454d80f5a0ba9240a894 86b361be239c42758c8ab85ae6854857 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:48:41 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:48:41 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:48:41 np0005548731 nova_compute[232433]: 2025-12-06 07:48:41.697 232437 DEBUG oslo_concurrency.lockutils [None req-8c4a269c-3373-4cc1-902b-1fb9e8584580 f1ecb7cb2b5e454d80f5a0ba9240a894 86b361be239c42758c8ab85ae6854857 - - default default] Acquiring lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:48:41 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:48:41.696 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:48:41 np0005548731 nova_compute[232433]: 2025-12-06 07:48:41.698 232437 DEBUG oslo_concurrency.lockutils [None req-8c4a269c-3373-4cc1-902b-1fb9e8584580 f1ecb7cb2b5e454d80f5a0ba9240a894 86b361be239c42758c8ab85ae6854857 - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:48:41 np0005548731 nova_compute[232433]: 2025-12-06 07:48:41.698 232437 DEBUG oslo_concurrency.lockutils [None req-8c4a269c-3373-4cc1-902b-1fb9e8584580 f1ecb7cb2b5e454d80f5a0ba9240a894 86b361be239c42758c8ab85ae6854857 - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:48:41 np0005548731 nova_compute[232433]: 2025-12-06 07:48:41.723 232437 DEBUG nova.storage.rbd_utils [None req-8c4a269c-3373-4cc1-902b-1fb9e8584580 f1ecb7cb2b5e454d80f5a0ba9240a894 86b361be239c42758c8ab85ae6854857 - - default default] rbd image c44fa39f-3d4d-4c6f-bbb2-5761daec080d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:48:41 np0005548731 nova_compute[232433]: 2025-12-06 07:48:41.727 232437 DEBUG oslo_concurrency.processutils [None req-8c4a269c-3373-4cc1-902b-1fb9e8584580 f1ecb7cb2b5e454d80f5a0ba9240a894 86b361be239c42758c8ab85ae6854857 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef c44fa39f-3d4d-4c6f-bbb2-5761daec080d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:48:41 np0005548731 nova_compute[232433]: 2025-12-06 07:48:41.880 232437 DEBUG nova.policy [None req-8c4a269c-3373-4cc1-902b-1fb9e8584580 f1ecb7cb2b5e454d80f5a0ba9240a894 86b361be239c42758c8ab85ae6854857 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'f1ecb7cb2b5e454d80f5a0ba9240a894', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '86b361be239c42758c8ab85ae6854857', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Dec  6 02:48:42 np0005548731 nova_compute[232433]: 2025-12-06 07:48:42.322 232437 DEBUG oslo_concurrency.processutils [None req-8c4a269c-3373-4cc1-902b-1fb9e8584580 f1ecb7cb2b5e454d80f5a0ba9240a894 86b361be239c42758c8ab85ae6854857 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef c44fa39f-3d4d-4c6f-bbb2-5761daec080d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.596s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:48:42 np0005548731 nova_compute[232433]: 2025-12-06 07:48:42.381 232437 DEBUG nova.storage.rbd_utils [None req-8c4a269c-3373-4cc1-902b-1fb9e8584580 f1ecb7cb2b5e454d80f5a0ba9240a894 86b361be239c42758c8ab85ae6854857 - - default default] resizing rbd image c44fa39f-3d4d-4c6f-bbb2-5761daec080d_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Dec  6 02:48:42 np0005548731 nova_compute[232433]: 2025-12-06 07:48:42.408 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:48:42 np0005548731 nova_compute[232433]: 2025-12-06 07:48:42.591 232437 DEBUG nova.objects.instance [None req-8c4a269c-3373-4cc1-902b-1fb9e8584580 f1ecb7cb2b5e454d80f5a0ba9240a894 86b361be239c42758c8ab85ae6854857 - - default default] Lazy-loading 'migration_context' on Instance uuid c44fa39f-3d4d-4c6f-bbb2-5761daec080d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:48:42 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:48:42 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:48:42 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:48:42.699 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:48:43 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:48:43 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:48:43 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:48:43.699 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:48:43 np0005548731 nova_compute[232433]: 2025-12-06 07:48:43.931 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:48:44 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:48:44 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:48:44 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:48:44.701 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:48:44 np0005548731 nova_compute[232433]: 2025-12-06 07:48:44.993 232437 DEBUG nova.virt.libvirt.driver [None req-8c4a269c-3373-4cc1-902b-1fb9e8584580 f1ecb7cb2b5e454d80f5a0ba9240a894 86b361be239c42758c8ab85ae6854857 - - default default] [instance: c44fa39f-3d4d-4c6f-bbb2-5761daec080d] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Dec  6 02:48:44 np0005548731 nova_compute[232433]: 2025-12-06 07:48:44.994 232437 DEBUG nova.virt.libvirt.driver [None req-8c4a269c-3373-4cc1-902b-1fb9e8584580 f1ecb7cb2b5e454d80f5a0ba9240a894 86b361be239c42758c8ab85ae6854857 - - default default] [instance: c44fa39f-3d4d-4c6f-bbb2-5761daec080d] Ensure instance console log exists: /var/lib/nova/instances/c44fa39f-3d4d-4c6f-bbb2-5761daec080d/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec  6 02:48:44 np0005548731 nova_compute[232433]: 2025-12-06 07:48:44.994 232437 DEBUG oslo_concurrency.lockutils [None req-8c4a269c-3373-4cc1-902b-1fb9e8584580 f1ecb7cb2b5e454d80f5a0ba9240a894 86b361be239c42758c8ab85ae6854857 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:48:44 np0005548731 nova_compute[232433]: 2025-12-06 07:48:44.995 232437 DEBUG oslo_concurrency.lockutils [None req-8c4a269c-3373-4cc1-902b-1fb9e8584580 f1ecb7cb2b5e454d80f5a0ba9240a894 86b361be239c42758c8ab85ae6854857 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:48:44 np0005548731 nova_compute[232433]: 2025-12-06 07:48:44.995 232437 DEBUG oslo_concurrency.lockutils [None req-8c4a269c-3373-4cc1-902b-1fb9e8584580 f1ecb7cb2b5e454d80f5a0ba9240a894 86b361be239c42758c8ab85ae6854857 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:48:45 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:48:45 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:48:45 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:48:45.702 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:48:46 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e372 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:48:46 np0005548731 nova_compute[232433]: 2025-12-06 07:48:46.689 232437 DEBUG nova.network.neutron [None req-8c4a269c-3373-4cc1-902b-1fb9e8584580 f1ecb7cb2b5e454d80f5a0ba9240a894 86b361be239c42758c8ab85ae6854857 - - default default] [instance: c44fa39f-3d4d-4c6f-bbb2-5761daec080d] Successfully created port: 6922c7df-0b48-4e43-9326-8eeab11521fa _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Dec  6 02:48:46 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:48:46 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:48:46 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:48:46.703 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:48:47 np0005548731 nova_compute[232433]: 2025-12-06 07:48:47.411 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:48:47 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:48:47 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:48:47 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:48:47.705 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:48:47 np0005548731 podman[306480]: 2025-12-06 07:48:47.903598121 +0000 UTC m=+0.067585239 container health_status 26c2ce9bdb83c6db5ccad7f5b2a7cb4e9354ab0235ad2f942ea42c8feda2142b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Dec  6 02:48:47 np0005548731 podman[306481]: 2025-12-06 07:48:47.929470942 +0000 UTC m=+0.090734964 container health_status a118c3becf56cfbcae208065771ebf10a65162bb7b96389971293e534823fb78 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller)
Dec  6 02:48:47 np0005548731 podman[306482]: 2025-12-06 07:48:47.932327141 +0000 UTC m=+0.090707012 container health_status ae4fd08cdec31d6ae2a6b91ffff7deb6ca5a8375cb52ee4b685cc6f25796824e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Dec  6 02:48:48 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:48:48 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 02:48:48 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:48:48.705 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 02:48:48 np0005548731 nova_compute[232433]: 2025-12-06 07:48:48.933 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:48:49 np0005548731 nova_compute[232433]: 2025-12-06 07:48:49.168 232437 DEBUG nova.network.neutron [None req-8c4a269c-3373-4cc1-902b-1fb9e8584580 f1ecb7cb2b5e454d80f5a0ba9240a894 86b361be239c42758c8ab85ae6854857 - - default default] [instance: c44fa39f-3d4d-4c6f-bbb2-5761daec080d] Successfully updated port: 6922c7df-0b48-4e43-9326-8eeab11521fa _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Dec  6 02:48:49 np0005548731 nova_compute[232433]: 2025-12-06 07:48:49.211 232437 DEBUG oslo_concurrency.lockutils [None req-8c4a269c-3373-4cc1-902b-1fb9e8584580 f1ecb7cb2b5e454d80f5a0ba9240a894 86b361be239c42758c8ab85ae6854857 - - default default] Acquiring lock "refresh_cache-c44fa39f-3d4d-4c6f-bbb2-5761daec080d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 02:48:49 np0005548731 nova_compute[232433]: 2025-12-06 07:48:49.211 232437 DEBUG oslo_concurrency.lockutils [None req-8c4a269c-3373-4cc1-902b-1fb9e8584580 f1ecb7cb2b5e454d80f5a0ba9240a894 86b361be239c42758c8ab85ae6854857 - - default default] Acquired lock "refresh_cache-c44fa39f-3d4d-4c6f-bbb2-5761daec080d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 02:48:49 np0005548731 nova_compute[232433]: 2025-12-06 07:48:49.211 232437 DEBUG nova.network.neutron [None req-8c4a269c-3373-4cc1-902b-1fb9e8584580 f1ecb7cb2b5e454d80f5a0ba9240a894 86b361be239c42758c8ab85ae6854857 - - default default] [instance: c44fa39f-3d4d-4c6f-bbb2-5761daec080d] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec  6 02:48:49 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:48:49.285 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=63, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:ec:b3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '32:72:e7:89:e0:7d'}, ipsec=False) old=SB_Global(nb_cfg=62) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 02:48:49 np0005548731 nova_compute[232433]: 2025-12-06 07:48:49.285 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:48:49 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:48:49.286 143965 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec  6 02:48:49 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:48:49 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:48:49 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:48:49.707 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:48:49 np0005548731 nova_compute[232433]: 2025-12-06 07:48:49.824 232437 DEBUG nova.compute.manager [req-b9d0645c-6397-4e14-98ef-d31c8b374513 req-715e1178-bfb1-4797-8e13-87c343a98be8 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: c44fa39f-3d4d-4c6f-bbb2-5761daec080d] Received event network-changed-6922c7df-0b48-4e43-9326-8eeab11521fa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:48:49 np0005548731 nova_compute[232433]: 2025-12-06 07:48:49.825 232437 DEBUG nova.compute.manager [req-b9d0645c-6397-4e14-98ef-d31c8b374513 req-715e1178-bfb1-4797-8e13-87c343a98be8 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: c44fa39f-3d4d-4c6f-bbb2-5761daec080d] Refreshing instance network info cache due to event network-changed-6922c7df-0b48-4e43-9326-8eeab11521fa. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  6 02:48:49 np0005548731 nova_compute[232433]: 2025-12-06 07:48:49.825 232437 DEBUG oslo_concurrency.lockutils [req-b9d0645c-6397-4e14-98ef-d31c8b374513 req-715e1178-bfb1-4797-8e13-87c343a98be8 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-c44fa39f-3d4d-4c6f-bbb2-5761daec080d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 02:48:50 np0005548731 nova_compute[232433]: 2025-12-06 07:48:50.597 232437 DEBUG nova.network.neutron [None req-8c4a269c-3373-4cc1-902b-1fb9e8584580 f1ecb7cb2b5e454d80f5a0ba9240a894 86b361be239c42758c8ab85ae6854857 - - default default] [instance: c44fa39f-3d4d-4c6f-bbb2-5761daec080d] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec  6 02:48:50 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:48:50 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:48:50 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:48:50.707 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:48:51 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e372 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:48:51 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:48:51 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:48:51 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:48:51.711 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:48:52 np0005548731 nova_compute[232433]: 2025-12-06 07:48:52.413 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:48:52 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:48:52 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:48:52 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:48:52.709 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:48:53 np0005548731 nova_compute[232433]: 2025-12-06 07:48:53.416 232437 DEBUG nova.network.neutron [None req-8c4a269c-3373-4cc1-902b-1fb9e8584580 f1ecb7cb2b5e454d80f5a0ba9240a894 86b361be239c42758c8ab85ae6854857 - - default default] [instance: c44fa39f-3d4d-4c6f-bbb2-5761daec080d] Updating instance_info_cache with network_info: [{"id": "6922c7df-0b48-4e43-9326-8eeab11521fa", "address": "fa:16:3e:8c:0f:0e", "network": {"id": "c213d8a6-b20d-4508-a037-6df70874ff25", "bridge": "br-int", "label": "tempest-ServersNegativeTestMultiTenantJSON-816884140-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "86b361be239c42758c8ab85ae6854857", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6922c7df-0b", "ovs_interfaceid": "6922c7df-0b48-4e43-9326-8eeab11521fa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 02:48:53 np0005548731 nova_compute[232433]: 2025-12-06 07:48:53.482 232437 DEBUG oslo_concurrency.lockutils [None req-8c4a269c-3373-4cc1-902b-1fb9e8584580 f1ecb7cb2b5e454d80f5a0ba9240a894 86b361be239c42758c8ab85ae6854857 - - default default] Releasing lock "refresh_cache-c44fa39f-3d4d-4c6f-bbb2-5761daec080d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 02:48:53 np0005548731 nova_compute[232433]: 2025-12-06 07:48:53.482 232437 DEBUG nova.compute.manager [None req-8c4a269c-3373-4cc1-902b-1fb9e8584580 f1ecb7cb2b5e454d80f5a0ba9240a894 86b361be239c42758c8ab85ae6854857 - - default default] [instance: c44fa39f-3d4d-4c6f-bbb2-5761daec080d] Instance network_info: |[{"id": "6922c7df-0b48-4e43-9326-8eeab11521fa", "address": "fa:16:3e:8c:0f:0e", "network": {"id": "c213d8a6-b20d-4508-a037-6df70874ff25", "bridge": "br-int", "label": "tempest-ServersNegativeTestMultiTenantJSON-816884140-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "86b361be239c42758c8ab85ae6854857", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6922c7df-0b", "ovs_interfaceid": "6922c7df-0b48-4e43-9326-8eeab11521fa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Dec  6 02:48:53 np0005548731 nova_compute[232433]: 2025-12-06 07:48:53.483 232437 DEBUG oslo_concurrency.lockutils [req-b9d0645c-6397-4e14-98ef-d31c8b374513 req-715e1178-bfb1-4797-8e13-87c343a98be8 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-c44fa39f-3d4d-4c6f-bbb2-5761daec080d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 02:48:53 np0005548731 nova_compute[232433]: 2025-12-06 07:48:53.483 232437 DEBUG nova.network.neutron [req-b9d0645c-6397-4e14-98ef-d31c8b374513 req-715e1178-bfb1-4797-8e13-87c343a98be8 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: c44fa39f-3d4d-4c6f-bbb2-5761daec080d] Refreshing network info cache for port 6922c7df-0b48-4e43-9326-8eeab11521fa _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  6 02:48:53 np0005548731 nova_compute[232433]: 2025-12-06 07:48:53.485 232437 DEBUG nova.virt.libvirt.driver [None req-8c4a269c-3373-4cc1-902b-1fb9e8584580 f1ecb7cb2b5e454d80f5a0ba9240a894 86b361be239c42758c8ab85ae6854857 - - default default] [instance: c44fa39f-3d4d-4c6f-bbb2-5761daec080d] Start _get_guest_xml network_info=[{"id": "6922c7df-0b48-4e43-9326-8eeab11521fa", "address": "fa:16:3e:8c:0f:0e", "network": {"id": "c213d8a6-b20d-4508-a037-6df70874ff25", "bridge": "br-int", "label": "tempest-ServersNegativeTestMultiTenantJSON-816884140-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "86b361be239c42758c8ab85ae6854857", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6922c7df-0b", "ovs_interfaceid": "6922c7df-0b48-4e43-9326-8eeab11521fa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'device_type': 'disk', 'encryption_secret_uuid': None, 'disk_bus': 'virtio', 'boot_index': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'encryption_options': None, 'size': 0, 'encrypted': False, 'image_id': '6efab05d-c7cf-4770-a5c3-c806a2739063'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec  6 02:48:53 np0005548731 nova_compute[232433]: 2025-12-06 07:48:53.490 232437 WARNING nova.virt.libvirt.driver [None req-8c4a269c-3373-4cc1-902b-1fb9e8584580 f1ecb7cb2b5e454d80f5a0ba9240a894 86b361be239c42758c8ab85ae6854857 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  6 02:48:53 np0005548731 nova_compute[232433]: 2025-12-06 07:48:53.494 232437 DEBUG nova.virt.libvirt.host [None req-8c4a269c-3373-4cc1-902b-1fb9e8584580 f1ecb7cb2b5e454d80f5a0ba9240a894 86b361be239c42758c8ab85ae6854857 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec  6 02:48:53 np0005548731 nova_compute[232433]: 2025-12-06 07:48:53.495 232437 DEBUG nova.virt.libvirt.host [None req-8c4a269c-3373-4cc1-902b-1fb9e8584580 f1ecb7cb2b5e454d80f5a0ba9240a894 86b361be239c42758c8ab85ae6854857 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec  6 02:48:53 np0005548731 nova_compute[232433]: 2025-12-06 07:48:53.510 232437 DEBUG nova.virt.libvirt.host [None req-8c4a269c-3373-4cc1-902b-1fb9e8584580 f1ecb7cb2b5e454d80f5a0ba9240a894 86b361be239c42758c8ab85ae6854857 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec  6 02:48:53 np0005548731 nova_compute[232433]: 2025-12-06 07:48:53.511 232437 DEBUG nova.virt.libvirt.host [None req-8c4a269c-3373-4cc1-902b-1fb9e8584580 f1ecb7cb2b5e454d80f5a0ba9240a894 86b361be239c42758c8ab85ae6854857 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec  6 02:48:53 np0005548731 nova_compute[232433]: 2025-12-06 07:48:53.513 232437 DEBUG nova.virt.libvirt.driver [None req-8c4a269c-3373-4cc1-902b-1fb9e8584580 f1ecb7cb2b5e454d80f5a0ba9240a894 86b361be239c42758c8ab85ae6854857 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec  6 02:48:53 np0005548731 nova_compute[232433]: 2025-12-06 07:48:53.513 232437 DEBUG nova.virt.hardware [None req-8c4a269c-3373-4cc1-902b-1fb9e8584580 f1ecb7cb2b5e454d80f5a0ba9240a894 86b361be239c42758c8ab85ae6854857 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-06T06:56:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='25848a18-11d9-4f11-80b5-5d005675c76d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec  6 02:48:53 np0005548731 nova_compute[232433]: 2025-12-06 07:48:53.513 232437 DEBUG nova.virt.hardware [None req-8c4a269c-3373-4cc1-902b-1fb9e8584580 f1ecb7cb2b5e454d80f5a0ba9240a894 86b361be239c42758c8ab85ae6854857 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec  6 02:48:53 np0005548731 nova_compute[232433]: 2025-12-06 07:48:53.513 232437 DEBUG nova.virt.hardware [None req-8c4a269c-3373-4cc1-902b-1fb9e8584580 f1ecb7cb2b5e454d80f5a0ba9240a894 86b361be239c42758c8ab85ae6854857 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec  6 02:48:53 np0005548731 nova_compute[232433]: 2025-12-06 07:48:53.514 232437 DEBUG nova.virt.hardware [None req-8c4a269c-3373-4cc1-902b-1fb9e8584580 f1ecb7cb2b5e454d80f5a0ba9240a894 86b361be239c42758c8ab85ae6854857 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec  6 02:48:53 np0005548731 nova_compute[232433]: 2025-12-06 07:48:53.514 232437 DEBUG nova.virt.hardware [None req-8c4a269c-3373-4cc1-902b-1fb9e8584580 f1ecb7cb2b5e454d80f5a0ba9240a894 86b361be239c42758c8ab85ae6854857 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec  6 02:48:53 np0005548731 nova_compute[232433]: 2025-12-06 07:48:53.514 232437 DEBUG nova.virt.hardware [None req-8c4a269c-3373-4cc1-902b-1fb9e8584580 f1ecb7cb2b5e454d80f5a0ba9240a894 86b361be239c42758c8ab85ae6854857 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec  6 02:48:53 np0005548731 nova_compute[232433]: 2025-12-06 07:48:53.514 232437 DEBUG nova.virt.hardware [None req-8c4a269c-3373-4cc1-902b-1fb9e8584580 f1ecb7cb2b5e454d80f5a0ba9240a894 86b361be239c42758c8ab85ae6854857 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec  6 02:48:53 np0005548731 nova_compute[232433]: 2025-12-06 07:48:53.514 232437 DEBUG nova.virt.hardware [None req-8c4a269c-3373-4cc1-902b-1fb9e8584580 f1ecb7cb2b5e454d80f5a0ba9240a894 86b361be239c42758c8ab85ae6854857 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec  6 02:48:53 np0005548731 nova_compute[232433]: 2025-12-06 07:48:53.515 232437 DEBUG nova.virt.hardware [None req-8c4a269c-3373-4cc1-902b-1fb9e8584580 f1ecb7cb2b5e454d80f5a0ba9240a894 86b361be239c42758c8ab85ae6854857 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec  6 02:48:53 np0005548731 nova_compute[232433]: 2025-12-06 07:48:53.515 232437 DEBUG nova.virt.hardware [None req-8c4a269c-3373-4cc1-902b-1fb9e8584580 f1ecb7cb2b5e454d80f5a0ba9240a894 86b361be239c42758c8ab85ae6854857 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec  6 02:48:53 np0005548731 nova_compute[232433]: 2025-12-06 07:48:53.515 232437 DEBUG nova.virt.hardware [None req-8c4a269c-3373-4cc1-902b-1fb9e8584580 f1ecb7cb2b5e454d80f5a0ba9240a894 86b361be239c42758c8ab85ae6854857 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec  6 02:48:53 np0005548731 nova_compute[232433]: 2025-12-06 07:48:53.518 232437 DEBUG oslo_concurrency.processutils [None req-8c4a269c-3373-4cc1-902b-1fb9e8584580 f1ecb7cb2b5e454d80f5a0ba9240a894 86b361be239c42758c8ab85ae6854857 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:48:53 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 02:48:53 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 02:48:53 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  6 02:48:53 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 02:48:53 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  6 02:48:53 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:48:53 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:48:53 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:48:53.714 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:48:53 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  6 02:48:53 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/994656536' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  6 02:48:53 np0005548731 nova_compute[232433]: 2025-12-06 07:48:53.928 232437 DEBUG oslo_concurrency.processutils [None req-8c4a269c-3373-4cc1-902b-1fb9e8584580 f1ecb7cb2b5e454d80f5a0ba9240a894 86b361be239c42758c8ab85ae6854857 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.409s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:48:53 np0005548731 nova_compute[232433]: 2025-12-06 07:48:53.980 232437 DEBUG nova.storage.rbd_utils [None req-8c4a269c-3373-4cc1-902b-1fb9e8584580 f1ecb7cb2b5e454d80f5a0ba9240a894 86b361be239c42758c8ab85ae6854857 - - default default] rbd image c44fa39f-3d4d-4c6f-bbb2-5761daec080d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:48:53 np0005548731 nova_compute[232433]: 2025-12-06 07:48:53.985 232437 DEBUG oslo_concurrency.processutils [None req-8c4a269c-3373-4cc1-902b-1fb9e8584580 f1ecb7cb2b5e454d80f5a0ba9240a894 86b361be239c42758c8ab85ae6854857 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:48:54 np0005548731 nova_compute[232433]: 2025-12-06 07:48:54.011 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:48:54 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:48:54.289 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=9f96b960-b4f2-40bd-ae99-08121f5e8b78, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '63'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:48:54 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  6 02:48:54 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2368498693' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  6 02:48:54 np0005548731 nova_compute[232433]: 2025-12-06 07:48:54.430 232437 DEBUG oslo_concurrency.processutils [None req-8c4a269c-3373-4cc1-902b-1fb9e8584580 f1ecb7cb2b5e454d80f5a0ba9240a894 86b361be239c42758c8ab85ae6854857 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.445s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:48:54 np0005548731 nova_compute[232433]: 2025-12-06 07:48:54.432 232437 DEBUG nova.virt.libvirt.vif [None req-8c4a269c-3373-4cc1-902b-1fb9e8584580 f1ecb7cb2b5e454d80f5a0ba9240a894 86b361be239c42758c8ab85ae6854857 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T07:48:32Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersNegativeTestMultiTenantJSON-server-1601820272',display_name='tempest-ServersNegativeTestMultiTenantJSON-server-1601820272',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serversnegativetestmultitenantjson-server-1601820272',id=163,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='86b361be239c42758c8ab85ae6854857',ramdisk_id='',reservation_id='r-l4rv750a',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersNegativeTestMultiTenantJSON-661
901113',owner_user_name='tempest-ServersNegativeTestMultiTenantJSON-661901113-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:48:41Z,user_data=None,user_id='f1ecb7cb2b5e454d80f5a0ba9240a894',uuid=c44fa39f-3d4d-4c6f-bbb2-5761daec080d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "6922c7df-0b48-4e43-9326-8eeab11521fa", "address": "fa:16:3e:8c:0f:0e", "network": {"id": "c213d8a6-b20d-4508-a037-6df70874ff25", "bridge": "br-int", "label": "tempest-ServersNegativeTestMultiTenantJSON-816884140-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "86b361be239c42758c8ab85ae6854857", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6922c7df-0b", "ovs_interfaceid": "6922c7df-0b48-4e43-9326-8eeab11521fa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec  6 02:48:54 np0005548731 nova_compute[232433]: 2025-12-06 07:48:54.433 232437 DEBUG nova.network.os_vif_util [None req-8c4a269c-3373-4cc1-902b-1fb9e8584580 f1ecb7cb2b5e454d80f5a0ba9240a894 86b361be239c42758c8ab85ae6854857 - - default default] Converting VIF {"id": "6922c7df-0b48-4e43-9326-8eeab11521fa", "address": "fa:16:3e:8c:0f:0e", "network": {"id": "c213d8a6-b20d-4508-a037-6df70874ff25", "bridge": "br-int", "label": "tempest-ServersNegativeTestMultiTenantJSON-816884140-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "86b361be239c42758c8ab85ae6854857", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6922c7df-0b", "ovs_interfaceid": "6922c7df-0b48-4e43-9326-8eeab11521fa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 02:48:54 np0005548731 nova_compute[232433]: 2025-12-06 07:48:54.434 232437 DEBUG nova.network.os_vif_util [None req-8c4a269c-3373-4cc1-902b-1fb9e8584580 f1ecb7cb2b5e454d80f5a0ba9240a894 86b361be239c42758c8ab85ae6854857 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8c:0f:0e,bridge_name='br-int',has_traffic_filtering=True,id=6922c7df-0b48-4e43-9326-8eeab11521fa,network=Network(c213d8a6-b20d-4508-a037-6df70874ff25),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6922c7df-0b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 02:48:54 np0005548731 nova_compute[232433]: 2025-12-06 07:48:54.436 232437 DEBUG nova.objects.instance [None req-8c4a269c-3373-4cc1-902b-1fb9e8584580 f1ecb7cb2b5e454d80f5a0ba9240a894 86b361be239c42758c8ab85ae6854857 - - default default] Lazy-loading 'pci_devices' on Instance uuid c44fa39f-3d4d-4c6f-bbb2-5761daec080d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:48:54 np0005548731 nova_compute[232433]: 2025-12-06 07:48:54.469 232437 DEBUG nova.virt.libvirt.driver [None req-8c4a269c-3373-4cc1-902b-1fb9e8584580 f1ecb7cb2b5e454d80f5a0ba9240a894 86b361be239c42758c8ab85ae6854857 - - default default] [instance: c44fa39f-3d4d-4c6f-bbb2-5761daec080d] End _get_guest_xml xml=<domain type="kvm">
Dec  6 02:48:54 np0005548731 nova_compute[232433]:  <uuid>c44fa39f-3d4d-4c6f-bbb2-5761daec080d</uuid>
Dec  6 02:48:54 np0005548731 nova_compute[232433]:  <name>instance-000000a3</name>
Dec  6 02:48:54 np0005548731 nova_compute[232433]:  <memory>131072</memory>
Dec  6 02:48:54 np0005548731 nova_compute[232433]:  <vcpu>1</vcpu>
Dec  6 02:48:54 np0005548731 nova_compute[232433]:  <metadata>
Dec  6 02:48:54 np0005548731 nova_compute[232433]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec  6 02:48:54 np0005548731 nova_compute[232433]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec  6 02:48:54 np0005548731 nova_compute[232433]:      <nova:name>tempest-ServersNegativeTestMultiTenantJSON-server-1601820272</nova:name>
Dec  6 02:48:54 np0005548731 nova_compute[232433]:      <nova:creationTime>2025-12-06 07:48:53</nova:creationTime>
Dec  6 02:48:54 np0005548731 nova_compute[232433]:      <nova:flavor name="m1.nano">
Dec  6 02:48:54 np0005548731 nova_compute[232433]:        <nova:memory>128</nova:memory>
Dec  6 02:48:54 np0005548731 nova_compute[232433]:        <nova:disk>1</nova:disk>
Dec  6 02:48:54 np0005548731 nova_compute[232433]:        <nova:swap>0</nova:swap>
Dec  6 02:48:54 np0005548731 nova_compute[232433]:        <nova:ephemeral>0</nova:ephemeral>
Dec  6 02:48:54 np0005548731 nova_compute[232433]:        <nova:vcpus>1</nova:vcpus>
Dec  6 02:48:54 np0005548731 nova_compute[232433]:      </nova:flavor>
Dec  6 02:48:54 np0005548731 nova_compute[232433]:      <nova:owner>
Dec  6 02:48:54 np0005548731 nova_compute[232433]:        <nova:user uuid="f1ecb7cb2b5e454d80f5a0ba9240a894">tempest-ServersNegativeTestMultiTenantJSON-661901113-project-member</nova:user>
Dec  6 02:48:54 np0005548731 nova_compute[232433]:        <nova:project uuid="86b361be239c42758c8ab85ae6854857">tempest-ServersNegativeTestMultiTenantJSON-661901113</nova:project>
Dec  6 02:48:54 np0005548731 nova_compute[232433]:      </nova:owner>
Dec  6 02:48:54 np0005548731 nova_compute[232433]:      <nova:root type="image" uuid="6efab05d-c7cf-4770-a5c3-c806a2739063"/>
Dec  6 02:48:54 np0005548731 nova_compute[232433]:      <nova:ports>
Dec  6 02:48:54 np0005548731 nova_compute[232433]:        <nova:port uuid="6922c7df-0b48-4e43-9326-8eeab11521fa">
Dec  6 02:48:54 np0005548731 nova_compute[232433]:          <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Dec  6 02:48:54 np0005548731 nova_compute[232433]:        </nova:port>
Dec  6 02:48:54 np0005548731 nova_compute[232433]:      </nova:ports>
Dec  6 02:48:54 np0005548731 nova_compute[232433]:    </nova:instance>
Dec  6 02:48:54 np0005548731 nova_compute[232433]:  </metadata>
Dec  6 02:48:54 np0005548731 nova_compute[232433]:  <sysinfo type="smbios">
Dec  6 02:48:54 np0005548731 nova_compute[232433]:    <system>
Dec  6 02:48:54 np0005548731 nova_compute[232433]:      <entry name="manufacturer">RDO</entry>
Dec  6 02:48:54 np0005548731 nova_compute[232433]:      <entry name="product">OpenStack Compute</entry>
Dec  6 02:48:54 np0005548731 nova_compute[232433]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec  6 02:48:54 np0005548731 nova_compute[232433]:      <entry name="serial">c44fa39f-3d4d-4c6f-bbb2-5761daec080d</entry>
Dec  6 02:48:54 np0005548731 nova_compute[232433]:      <entry name="uuid">c44fa39f-3d4d-4c6f-bbb2-5761daec080d</entry>
Dec  6 02:48:54 np0005548731 nova_compute[232433]:      <entry name="family">Virtual Machine</entry>
Dec  6 02:48:54 np0005548731 nova_compute[232433]:    </system>
Dec  6 02:48:54 np0005548731 nova_compute[232433]:  </sysinfo>
Dec  6 02:48:54 np0005548731 nova_compute[232433]:  <os>
Dec  6 02:48:54 np0005548731 nova_compute[232433]:    <type arch="x86_64" machine="q35">hvm</type>
Dec  6 02:48:54 np0005548731 nova_compute[232433]:    <boot dev="hd"/>
Dec  6 02:48:54 np0005548731 nova_compute[232433]:    <smbios mode="sysinfo"/>
Dec  6 02:48:54 np0005548731 nova_compute[232433]:  </os>
Dec  6 02:48:54 np0005548731 nova_compute[232433]:  <features>
Dec  6 02:48:54 np0005548731 nova_compute[232433]:    <acpi/>
Dec  6 02:48:54 np0005548731 nova_compute[232433]:    <apic/>
Dec  6 02:48:54 np0005548731 nova_compute[232433]:    <vmcoreinfo/>
Dec  6 02:48:54 np0005548731 nova_compute[232433]:  </features>
Dec  6 02:48:54 np0005548731 nova_compute[232433]:  <clock offset="utc">
Dec  6 02:48:54 np0005548731 nova_compute[232433]:    <timer name="pit" tickpolicy="delay"/>
Dec  6 02:48:54 np0005548731 nova_compute[232433]:    <timer name="rtc" tickpolicy="catchup"/>
Dec  6 02:48:54 np0005548731 nova_compute[232433]:    <timer name="hpet" present="no"/>
Dec  6 02:48:54 np0005548731 nova_compute[232433]:  </clock>
Dec  6 02:48:54 np0005548731 nova_compute[232433]:  <cpu mode="custom" match="exact">
Dec  6 02:48:54 np0005548731 nova_compute[232433]:    <model>Nehalem</model>
Dec  6 02:48:54 np0005548731 nova_compute[232433]:    <topology sockets="1" cores="1" threads="1"/>
Dec  6 02:48:54 np0005548731 nova_compute[232433]:  </cpu>
Dec  6 02:48:54 np0005548731 nova_compute[232433]:  <devices>
Dec  6 02:48:54 np0005548731 nova_compute[232433]:    <disk type="network" device="disk">
Dec  6 02:48:54 np0005548731 nova_compute[232433]:      <driver type="raw" cache="none"/>
Dec  6 02:48:54 np0005548731 nova_compute[232433]:      <source protocol="rbd" name="vms/c44fa39f-3d4d-4c6f-bbb2-5761daec080d_disk">
Dec  6 02:48:54 np0005548731 nova_compute[232433]:        <host name="192.168.122.100" port="6789"/>
Dec  6 02:48:54 np0005548731 nova_compute[232433]:        <host name="192.168.122.102" port="6789"/>
Dec  6 02:48:54 np0005548731 nova_compute[232433]:        <host name="192.168.122.101" port="6789"/>
Dec  6 02:48:54 np0005548731 nova_compute[232433]:      </source>
Dec  6 02:48:54 np0005548731 nova_compute[232433]:      <auth username="openstack">
Dec  6 02:48:54 np0005548731 nova_compute[232433]:        <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec  6 02:48:54 np0005548731 nova_compute[232433]:      </auth>
Dec  6 02:48:54 np0005548731 nova_compute[232433]:      <target dev="vda" bus="virtio"/>
Dec  6 02:48:54 np0005548731 nova_compute[232433]:    </disk>
Dec  6 02:48:54 np0005548731 nova_compute[232433]:    <disk type="network" device="cdrom">
Dec  6 02:48:54 np0005548731 nova_compute[232433]:      <driver type="raw" cache="none"/>
Dec  6 02:48:54 np0005548731 nova_compute[232433]:      <source protocol="rbd" name="vms/c44fa39f-3d4d-4c6f-bbb2-5761daec080d_disk.config">
Dec  6 02:48:54 np0005548731 nova_compute[232433]:        <host name="192.168.122.100" port="6789"/>
Dec  6 02:48:54 np0005548731 nova_compute[232433]:        <host name="192.168.122.102" port="6789"/>
Dec  6 02:48:54 np0005548731 nova_compute[232433]:        <host name="192.168.122.101" port="6789"/>
Dec  6 02:48:54 np0005548731 nova_compute[232433]:      </source>
Dec  6 02:48:54 np0005548731 nova_compute[232433]:      <auth username="openstack">
Dec  6 02:48:54 np0005548731 nova_compute[232433]:        <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec  6 02:48:54 np0005548731 nova_compute[232433]:      </auth>
Dec  6 02:48:54 np0005548731 nova_compute[232433]:      <target dev="sda" bus="sata"/>
Dec  6 02:48:54 np0005548731 nova_compute[232433]:    </disk>
Dec  6 02:48:54 np0005548731 nova_compute[232433]:    <interface type="ethernet">
Dec  6 02:48:54 np0005548731 nova_compute[232433]:      <mac address="fa:16:3e:8c:0f:0e"/>
Dec  6 02:48:54 np0005548731 nova_compute[232433]:      <model type="virtio"/>
Dec  6 02:48:54 np0005548731 nova_compute[232433]:      <driver name="vhost" rx_queue_size="512"/>
Dec  6 02:48:54 np0005548731 nova_compute[232433]:      <mtu size="1442"/>
Dec  6 02:48:54 np0005548731 nova_compute[232433]:      <target dev="tap6922c7df-0b"/>
Dec  6 02:48:54 np0005548731 nova_compute[232433]:    </interface>
Dec  6 02:48:54 np0005548731 nova_compute[232433]:    <serial type="pty">
Dec  6 02:48:54 np0005548731 nova_compute[232433]:      <log file="/var/lib/nova/instances/c44fa39f-3d4d-4c6f-bbb2-5761daec080d/console.log" append="off"/>
Dec  6 02:48:54 np0005548731 nova_compute[232433]:    </serial>
Dec  6 02:48:54 np0005548731 nova_compute[232433]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec  6 02:48:54 np0005548731 nova_compute[232433]:    <video>
Dec  6 02:48:54 np0005548731 nova_compute[232433]:      <model type="virtio"/>
Dec  6 02:48:54 np0005548731 nova_compute[232433]:    </video>
Dec  6 02:48:54 np0005548731 nova_compute[232433]:    <input type="tablet" bus="usb"/>
Dec  6 02:48:54 np0005548731 nova_compute[232433]:    <rng model="virtio">
Dec  6 02:48:54 np0005548731 nova_compute[232433]:      <backend model="random">/dev/urandom</backend>
Dec  6 02:48:54 np0005548731 nova_compute[232433]:    </rng>
Dec  6 02:48:54 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root"/>
Dec  6 02:48:54 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:48:54 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:48:54 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:48:54 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:48:54 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:48:54 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:48:54 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:48:54 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:48:54 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:48:54 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:48:54 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:48:54 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:48:54 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:48:54 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:48:54 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:48:54 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:48:54 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:48:54 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:48:54 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:48:54 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:48:54 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:48:54 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:48:54 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:48:54 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:48:54 np0005548731 nova_compute[232433]:    <controller type="usb" index="0"/>
Dec  6 02:48:54 np0005548731 nova_compute[232433]:    <memballoon model="virtio">
Dec  6 02:48:54 np0005548731 nova_compute[232433]:      <stats period="10"/>
Dec  6 02:48:54 np0005548731 nova_compute[232433]:    </memballoon>
Dec  6 02:48:54 np0005548731 nova_compute[232433]:  </devices>
Dec  6 02:48:54 np0005548731 nova_compute[232433]: </domain>
Dec  6 02:48:54 np0005548731 nova_compute[232433]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec  6 02:48:54 np0005548731 nova_compute[232433]: 2025-12-06 07:48:54.471 232437 DEBUG nova.compute.manager [None req-8c4a269c-3373-4cc1-902b-1fb9e8584580 f1ecb7cb2b5e454d80f5a0ba9240a894 86b361be239c42758c8ab85ae6854857 - - default default] [instance: c44fa39f-3d4d-4c6f-bbb2-5761daec080d] Preparing to wait for external event network-vif-plugged-6922c7df-0b48-4e43-9326-8eeab11521fa prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec  6 02:48:54 np0005548731 nova_compute[232433]: 2025-12-06 07:48:54.472 232437 DEBUG oslo_concurrency.lockutils [None req-8c4a269c-3373-4cc1-902b-1fb9e8584580 f1ecb7cb2b5e454d80f5a0ba9240a894 86b361be239c42758c8ab85ae6854857 - - default default] Acquiring lock "c44fa39f-3d4d-4c6f-bbb2-5761daec080d-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:48:54 np0005548731 nova_compute[232433]: 2025-12-06 07:48:54.472 232437 DEBUG oslo_concurrency.lockutils [None req-8c4a269c-3373-4cc1-902b-1fb9e8584580 f1ecb7cb2b5e454d80f5a0ba9240a894 86b361be239c42758c8ab85ae6854857 - - default default] Lock "c44fa39f-3d4d-4c6f-bbb2-5761daec080d-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:48:54 np0005548731 nova_compute[232433]: 2025-12-06 07:48:54.473 232437 DEBUG oslo_concurrency.lockutils [None req-8c4a269c-3373-4cc1-902b-1fb9e8584580 f1ecb7cb2b5e454d80f5a0ba9240a894 86b361be239c42758c8ab85ae6854857 - - default default] Lock "c44fa39f-3d4d-4c6f-bbb2-5761daec080d-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:48:54 np0005548731 nova_compute[232433]: 2025-12-06 07:48:54.474 232437 DEBUG nova.virt.libvirt.vif [None req-8c4a269c-3373-4cc1-902b-1fb9e8584580 f1ecb7cb2b5e454d80f5a0ba9240a894 86b361be239c42758c8ab85ae6854857 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T07:48:32Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersNegativeTestMultiTenantJSON-server-1601820272',display_name='tempest-ServersNegativeTestMultiTenantJSON-server-1601820272',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serversnegativetestmultitenantjson-server-1601820272',id=163,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='86b361be239c42758c8ab85ae6854857',ramdisk_id='',reservation_id='r-l4rv750a',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersNegativeTestMultiTenantJSON-661901113',owner_user_name='tempest-ServersNegativeTestMultiTenantJSON-661901113-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:48:41Z,user_data=None,user_id='f1ecb7cb2b5e454d80f5a0ba9240a894',uuid=c44fa39f-3d4d-4c6f-bbb2-5761daec080d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "6922c7df-0b48-4e43-9326-8eeab11521fa", "address": "fa:16:3e:8c:0f:0e", "network": {"id": "c213d8a6-b20d-4508-a037-6df70874ff25", "bridge": "br-int", "label": "tempest-ServersNegativeTestMultiTenantJSON-816884140-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "86b361be239c42758c8ab85ae6854857", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6922c7df-0b", "ovs_interfaceid": "6922c7df-0b48-4e43-9326-8eeab11521fa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec  6 02:48:54 np0005548731 nova_compute[232433]: 2025-12-06 07:48:54.475 232437 DEBUG nova.network.os_vif_util [None req-8c4a269c-3373-4cc1-902b-1fb9e8584580 f1ecb7cb2b5e454d80f5a0ba9240a894 86b361be239c42758c8ab85ae6854857 - - default default] Converting VIF {"id": "6922c7df-0b48-4e43-9326-8eeab11521fa", "address": "fa:16:3e:8c:0f:0e", "network": {"id": "c213d8a6-b20d-4508-a037-6df70874ff25", "bridge": "br-int", "label": "tempest-ServersNegativeTestMultiTenantJSON-816884140-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "86b361be239c42758c8ab85ae6854857", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6922c7df-0b", "ovs_interfaceid": "6922c7df-0b48-4e43-9326-8eeab11521fa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 02:48:54 np0005548731 nova_compute[232433]: 2025-12-06 07:48:54.475 232437 DEBUG nova.network.os_vif_util [None req-8c4a269c-3373-4cc1-902b-1fb9e8584580 f1ecb7cb2b5e454d80f5a0ba9240a894 86b361be239c42758c8ab85ae6854857 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8c:0f:0e,bridge_name='br-int',has_traffic_filtering=True,id=6922c7df-0b48-4e43-9326-8eeab11521fa,network=Network(c213d8a6-b20d-4508-a037-6df70874ff25),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6922c7df-0b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 02:48:54 np0005548731 nova_compute[232433]: 2025-12-06 07:48:54.476 232437 DEBUG os_vif [None req-8c4a269c-3373-4cc1-902b-1fb9e8584580 f1ecb7cb2b5e454d80f5a0ba9240a894 86b361be239c42758c8ab85ae6854857 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:8c:0f:0e,bridge_name='br-int',has_traffic_filtering=True,id=6922c7df-0b48-4e43-9326-8eeab11521fa,network=Network(c213d8a6-b20d-4508-a037-6df70874ff25),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6922c7df-0b') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec  6 02:48:54 np0005548731 nova_compute[232433]: 2025-12-06 07:48:54.477 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:48:54 np0005548731 nova_compute[232433]: 2025-12-06 07:48:54.478 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:48:54 np0005548731 nova_compute[232433]: 2025-12-06 07:48:54.479 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  6 02:48:54 np0005548731 nova_compute[232433]: 2025-12-06 07:48:54.482 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:48:54 np0005548731 nova_compute[232433]: 2025-12-06 07:48:54.482 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6922c7df-0b, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:48:54 np0005548731 nova_compute[232433]: 2025-12-06 07:48:54.483 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap6922c7df-0b, col_values=(('external_ids', {'iface-id': '6922c7df-0b48-4e43-9326-8eeab11521fa', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:8c:0f:0e', 'vm-uuid': 'c44fa39f-3d4d-4c6f-bbb2-5761daec080d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:48:54 np0005548731 nova_compute[232433]: 2025-12-06 07:48:54.485 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:48:54 np0005548731 NetworkManager[49182]: <info>  [1765007334.4863] manager: (tap6922c7df-0b): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/359)
Dec  6 02:48:54 np0005548731 nova_compute[232433]: 2025-12-06 07:48:54.487 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  6 02:48:54 np0005548731 nova_compute[232433]: 2025-12-06 07:48:54.495 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:48:54 np0005548731 nova_compute[232433]: 2025-12-06 07:48:54.497 232437 INFO os_vif [None req-8c4a269c-3373-4cc1-902b-1fb9e8584580 f1ecb7cb2b5e454d80f5a0ba9240a894 86b361be239c42758c8ab85ae6854857 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:8c:0f:0e,bridge_name='br-int',has_traffic_filtering=True,id=6922c7df-0b48-4e43-9326-8eeab11521fa,network=Network(c213d8a6-b20d-4508-a037-6df70874ff25),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6922c7df-0b')#033[00m
Dec  6 02:48:54 np0005548731 nova_compute[232433]: 2025-12-06 07:48:54.588 232437 DEBUG nova.virt.libvirt.driver [None req-8c4a269c-3373-4cc1-902b-1fb9e8584580 f1ecb7cb2b5e454d80f5a0ba9240a894 86b361be239c42758c8ab85ae6854857 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  6 02:48:54 np0005548731 nova_compute[232433]: 2025-12-06 07:48:54.589 232437 DEBUG nova.virt.libvirt.driver [None req-8c4a269c-3373-4cc1-902b-1fb9e8584580 f1ecb7cb2b5e454d80f5a0ba9240a894 86b361be239c42758c8ab85ae6854857 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  6 02:48:54 np0005548731 nova_compute[232433]: 2025-12-06 07:48:54.589 232437 DEBUG nova.virt.libvirt.driver [None req-8c4a269c-3373-4cc1-902b-1fb9e8584580 f1ecb7cb2b5e454d80f5a0ba9240a894 86b361be239c42758c8ab85ae6854857 - - default default] No VIF found with MAC fa:16:3e:8c:0f:0e, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec  6 02:48:54 np0005548731 nova_compute[232433]: 2025-12-06 07:48:54.589 232437 INFO nova.virt.libvirt.driver [None req-8c4a269c-3373-4cc1-902b-1fb9e8584580 f1ecb7cb2b5e454d80f5a0ba9240a894 86b361be239c42758c8ab85ae6854857 - - default default] [instance: c44fa39f-3d4d-4c6f-bbb2-5761daec080d] Using config drive#033[00m
Dec  6 02:48:54 np0005548731 nova_compute[232433]: 2025-12-06 07:48:54.613 232437 DEBUG nova.storage.rbd_utils [None req-8c4a269c-3373-4cc1-902b-1fb9e8584580 f1ecb7cb2b5e454d80f5a0ba9240a894 86b361be239c42758c8ab85ae6854857 - - default default] rbd image c44fa39f-3d4d-4c6f-bbb2-5761daec080d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:48:54 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:48:54 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:48:54 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:48:54.711 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:48:55 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:48:55 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:48:55 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:48:55.718 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:48:55 np0005548731 nova_compute[232433]: 2025-12-06 07:48:55.739 232437 INFO nova.virt.libvirt.driver [None req-8c4a269c-3373-4cc1-902b-1fb9e8584580 f1ecb7cb2b5e454d80f5a0ba9240a894 86b361be239c42758c8ab85ae6854857 - - default default] [instance: c44fa39f-3d4d-4c6f-bbb2-5761daec080d] Creating config drive at /var/lib/nova/instances/c44fa39f-3d4d-4c6f-bbb2-5761daec080d/disk.config#033[00m
Dec  6 02:48:55 np0005548731 nova_compute[232433]: 2025-12-06 07:48:55.745 232437 DEBUG oslo_concurrency.processutils [None req-8c4a269c-3373-4cc1-902b-1fb9e8584580 f1ecb7cb2b5e454d80f5a0ba9240a894 86b361be239c42758c8ab85ae6854857 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/c44fa39f-3d4d-4c6f-bbb2-5761daec080d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp1yh3vpgt execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:48:55 np0005548731 nova_compute[232433]: 2025-12-06 07:48:55.879 232437 DEBUG oslo_concurrency.processutils [None req-8c4a269c-3373-4cc1-902b-1fb9e8584580 f1ecb7cb2b5e454d80f5a0ba9240a894 86b361be239c42758c8ab85ae6854857 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/c44fa39f-3d4d-4c6f-bbb2-5761daec080d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp1yh3vpgt" returned: 0 in 0.134s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:48:55 np0005548731 nova_compute[232433]: 2025-12-06 07:48:55.908 232437 DEBUG nova.storage.rbd_utils [None req-8c4a269c-3373-4cc1-902b-1fb9e8584580 f1ecb7cb2b5e454d80f5a0ba9240a894 86b361be239c42758c8ab85ae6854857 - - default default] rbd image c44fa39f-3d4d-4c6f-bbb2-5761daec080d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:48:55 np0005548731 nova_compute[232433]: 2025-12-06 07:48:55.911 232437 DEBUG oslo_concurrency.processutils [None req-8c4a269c-3373-4cc1-902b-1fb9e8584580 f1ecb7cb2b5e454d80f5a0ba9240a894 86b361be239c42758c8ab85ae6854857 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/c44fa39f-3d4d-4c6f-bbb2-5761daec080d/disk.config c44fa39f-3d4d-4c6f-bbb2-5761daec080d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:48:56 np0005548731 nova_compute[232433]: 2025-12-06 07:48:56.053 232437 DEBUG oslo_concurrency.processutils [None req-8c4a269c-3373-4cc1-902b-1fb9e8584580 f1ecb7cb2b5e454d80f5a0ba9240a894 86b361be239c42758c8ab85ae6854857 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/c44fa39f-3d4d-4c6f-bbb2-5761daec080d/disk.config c44fa39f-3d4d-4c6f-bbb2-5761daec080d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.142s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:48:56 np0005548731 nova_compute[232433]: 2025-12-06 07:48:56.054 232437 INFO nova.virt.libvirt.driver [None req-8c4a269c-3373-4cc1-902b-1fb9e8584580 f1ecb7cb2b5e454d80f5a0ba9240a894 86b361be239c42758c8ab85ae6854857 - - default default] [instance: c44fa39f-3d4d-4c6f-bbb2-5761daec080d] Deleting local config drive /var/lib/nova/instances/c44fa39f-3d4d-4c6f-bbb2-5761daec080d/disk.config because it was imported into RBD.#033[00m
Dec  6 02:48:56 np0005548731 kernel: tap6922c7df-0b: entered promiscuous mode
Dec  6 02:48:56 np0005548731 NetworkManager[49182]: <info>  [1765007336.1030] manager: (tap6922c7df-0b): new Tun device (/org/freedesktop/NetworkManager/Devices/360)
Dec  6 02:48:56 np0005548731 nova_compute[232433]: 2025-12-06 07:48:56.112 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:48:56 np0005548731 nova_compute[232433]: 2025-12-06 07:48:56.113 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Dec  6 02:48:56 np0005548731 ovn_controller[133927]: 2025-12-06T07:48:56Z|00792|binding|INFO|Claiming lport 6922c7df-0b48-4e43-9326-8eeab11521fa for this chassis.
Dec  6 02:48:56 np0005548731 ovn_controller[133927]: 2025-12-06T07:48:56Z|00793|binding|INFO|6922c7df-0b48-4e43-9326-8eeab11521fa: Claiming fa:16:3e:8c:0f:0e 10.100.0.11
Dec  6 02:48:56 np0005548731 nova_compute[232433]: 2025-12-06 07:48:56.115 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:48:56 np0005548731 nova_compute[232433]: 2025-12-06 07:48:56.119 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:48:56 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:48:56.134 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8c:0f:0e 10.100.0.11'], port_security=['fa:16:3e:8c:0f:0e 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'c44fa39f-3d4d-4c6f-bbb2-5761daec080d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c213d8a6-b20d-4508-a037-6df70874ff25', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '86b361be239c42758c8ab85ae6854857', 'neutron:revision_number': '2', 'neutron:security_group_ids': '085cf754-0e38-47f7-b63f-f72cc1c26f11', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c79a033f-7989-40ac-9f02-edd6c92b350b, chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], logical_port=6922c7df-0b48-4e43-9326-8eeab11521fa) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 02:48:56 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:48:56.135 143965 INFO neutron.agent.ovn.metadata.agent [-] Port 6922c7df-0b48-4e43-9326-8eeab11521fa in datapath c213d8a6-b20d-4508-a037-6df70874ff25 bound to our chassis#033[00m
Dec  6 02:48:56 np0005548731 systemd-udevd[306816]: Network interface NamePolicy= disabled on kernel command line.
Dec  6 02:48:56 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:48:56.136 143965 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network c213d8a6-b20d-4508-a037-6df70874ff25#033[00m
Dec  6 02:48:56 np0005548731 systemd-machined[195355]: New machine qemu-80-instance-000000a3.
Dec  6 02:48:56 np0005548731 NetworkManager[49182]: <info>  [1765007336.1484] device (tap6922c7df-0b): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec  6 02:48:56 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:48:56.148 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[0c2c2c57-1cc6-474b-a5e8-25942e987247]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:48:56 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:48:56.149 143965 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapc213d8a6-b1 in ovnmeta-c213d8a6-b20d-4508-a037-6df70874ff25 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Dec  6 02:48:56 np0005548731 NetworkManager[49182]: <info>  [1765007336.1498] device (tap6922c7df-0b): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec  6 02:48:56 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:48:56.153 236668 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapc213d8a6-b0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Dec  6 02:48:56 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:48:56.153 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[ee919ad4-609c-4e08-9def-cb26e9274d74]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:48:56 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:48:56.154 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[4c1f045c-146d-4e7a-a867-05207653b8ef]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:48:56 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:48:56.164 144079 DEBUG oslo.privsep.daemon [-] privsep: reply[dd110923-fa2a-4977-b6fe-65b203350e6b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:48:56 np0005548731 systemd[1]: Started Virtual Machine qemu-80-instance-000000a3.
Dec  6 02:48:56 np0005548731 ovn_controller[133927]: 2025-12-06T07:48:56Z|00794|binding|INFO|Setting lport 6922c7df-0b48-4e43-9326-8eeab11521fa ovn-installed in OVS
Dec  6 02:48:56 np0005548731 ovn_controller[133927]: 2025-12-06T07:48:56Z|00795|binding|INFO|Setting lport 6922c7df-0b48-4e43-9326-8eeab11521fa up in Southbound
Dec  6 02:48:56 np0005548731 nova_compute[232433]: 2025-12-06 07:48:56.183 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:48:56 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:48:56.191 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[1f884535-ea26-4aaa-94e3-3e95f6ce6e9c]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:48:56 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:48:56.224 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[25b8139f-a418-4a72-addd-b7427c92e745]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:48:56 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:48:56.228 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[eb3eae49-8717-4ec9-9c1f-864014d7b3f6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:48:56 np0005548731 NetworkManager[49182]: <info>  [1765007336.2292] manager: (tapc213d8a6-b0): new Veth device (/org/freedesktop/NetworkManager/Devices/361)
Dec  6 02:48:56 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:48:56.257 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[7b261c03-0c4b-446b-acb8-cae7a5e1f8de]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:48:56 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:48:56.260 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[7f1d2fdb-fb97-44b2-9d78-38959be5a77a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:48:56 np0005548731 NetworkManager[49182]: <info>  [1765007336.2805] device (tapc213d8a6-b0): carrier: link connected
Dec  6 02:48:56 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:48:56.286 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[1126550b-bda2-4ba7-94af-034185ea2eb1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:48:56 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:48:56.304 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[2090823c-7e8c-4300-aec7-4d81dd882e1c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc213d8a6-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ac:ce:8d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 240], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 759828, 'reachable_time': 28535, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 306849, 'error': None, 'target': 'ovnmeta-c213d8a6-b20d-4508-a037-6df70874ff25', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:48:56 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:48:56.319 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[bd2e5b83-2d22-4b56-b8ae-a927d4bf77ce]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feac:ce8d'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 759828, 'tstamp': 759828}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 306850, 'error': None, 'target': 'ovnmeta-c213d8a6-b20d-4508-a037-6df70874ff25', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:48:56 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:48:56.335 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[6a2a8014-cf36-40eb-a9ad-177e7944cc7e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc213d8a6-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ac:ce:8d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 240], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 759828, 'reachable_time': 28535, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 306852, 'error': None, 'target': 'ovnmeta-c213d8a6-b20d-4508-a037-6df70874ff25', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:48:56 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:48:56.360 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[41ff666b-6ef4-40af-9e31-53e93c267f93]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:48:56 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:48:56.419 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[43b6224d-4dcd-40a8-9b8b-6b4ce5828623]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:48:56 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:48:56.422 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc213d8a6-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:48:56 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:48:56.423 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  6 02:48:56 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:48:56.423 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc213d8a6-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:48:56 np0005548731 nova_compute[232433]: 2025-12-06 07:48:56.425 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:48:56 np0005548731 NetworkManager[49182]: <info>  [1765007336.4258] manager: (tapc213d8a6-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/362)
Dec  6 02:48:56 np0005548731 kernel: tapc213d8a6-b0: entered promiscuous mode
Dec  6 02:48:56 np0005548731 nova_compute[232433]: 2025-12-06 07:48:56.436 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:48:56 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:48:56.437 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapc213d8a6-b0, col_values=(('external_ids', {'iface-id': 'a0d5d20b-a229-475d-8eb0-b6ed69ef1301'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:48:56 np0005548731 nova_compute[232433]: 2025-12-06 07:48:56.438 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:48:56 np0005548731 ovn_controller[133927]: 2025-12-06T07:48:56Z|00796|binding|INFO|Releasing lport a0d5d20b-a229-475d-8eb0-b6ed69ef1301 from this chassis (sb_readonly=0)
Dec  6 02:48:56 np0005548731 nova_compute[232433]: 2025-12-06 07:48:56.469 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:48:56 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:48:56.470 143965 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/c213d8a6-b20d-4508-a037-6df70874ff25.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/c213d8a6-b20d-4508-a037-6df70874ff25.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Dec  6 02:48:56 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:48:56.471 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[74711274-3a1c-4dbd-b46d-85c301d6f3fd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:48:56 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:48:56.472 143965 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec  6 02:48:56 np0005548731 ovn_metadata_agent[143960]: global
Dec  6 02:48:56 np0005548731 ovn_metadata_agent[143960]:    log         /dev/log local0 debug
Dec  6 02:48:56 np0005548731 ovn_metadata_agent[143960]:    log-tag     haproxy-metadata-proxy-c213d8a6-b20d-4508-a037-6df70874ff25
Dec  6 02:48:56 np0005548731 ovn_metadata_agent[143960]:    user        root
Dec  6 02:48:56 np0005548731 ovn_metadata_agent[143960]:    group       root
Dec  6 02:48:56 np0005548731 ovn_metadata_agent[143960]:    maxconn     1024
Dec  6 02:48:56 np0005548731 ovn_metadata_agent[143960]:    pidfile     /var/lib/neutron/external/pids/c213d8a6-b20d-4508-a037-6df70874ff25.pid.haproxy
Dec  6 02:48:56 np0005548731 ovn_metadata_agent[143960]:    daemon
Dec  6 02:48:56 np0005548731 ovn_metadata_agent[143960]: 
Dec  6 02:48:56 np0005548731 ovn_metadata_agent[143960]: defaults
Dec  6 02:48:56 np0005548731 ovn_metadata_agent[143960]:    log global
Dec  6 02:48:56 np0005548731 ovn_metadata_agent[143960]:    mode http
Dec  6 02:48:56 np0005548731 ovn_metadata_agent[143960]:    option httplog
Dec  6 02:48:56 np0005548731 ovn_metadata_agent[143960]:    option dontlognull
Dec  6 02:48:56 np0005548731 ovn_metadata_agent[143960]:    option http-server-close
Dec  6 02:48:56 np0005548731 ovn_metadata_agent[143960]:    option forwardfor
Dec  6 02:48:56 np0005548731 ovn_metadata_agent[143960]:    retries                 3
Dec  6 02:48:56 np0005548731 ovn_metadata_agent[143960]:    timeout http-request    30s
Dec  6 02:48:56 np0005548731 ovn_metadata_agent[143960]:    timeout connect         30s
Dec  6 02:48:56 np0005548731 ovn_metadata_agent[143960]:    timeout client          32s
Dec  6 02:48:56 np0005548731 ovn_metadata_agent[143960]:    timeout server          32s
Dec  6 02:48:56 np0005548731 ovn_metadata_agent[143960]:    timeout http-keep-alive 30s
Dec  6 02:48:56 np0005548731 ovn_metadata_agent[143960]: 
Dec  6 02:48:56 np0005548731 ovn_metadata_agent[143960]: 
Dec  6 02:48:56 np0005548731 ovn_metadata_agent[143960]: listen listener
Dec  6 02:48:56 np0005548731 ovn_metadata_agent[143960]:    bind 169.254.169.254:80
Dec  6 02:48:56 np0005548731 ovn_metadata_agent[143960]:    server metadata /var/lib/neutron/metadata_proxy
Dec  6 02:48:56 np0005548731 ovn_metadata_agent[143960]:    http-request add-header X-OVN-Network-ID c213d8a6-b20d-4508-a037-6df70874ff25
Dec  6 02:48:56 np0005548731 ovn_metadata_agent[143960]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Dec  6 02:48:56 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:48:56.474 143965 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-c213d8a6-b20d-4508-a037-6df70874ff25', 'env', 'PROCESS_TAG=haproxy-c213d8a6-b20d-4508-a037-6df70874ff25', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/c213d8a6-b20d-4508-a037-6df70874ff25.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Dec  6 02:48:56 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e372 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:48:56 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:48:56 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:48:56 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:48:56.714 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:48:56 np0005548731 nova_compute[232433]: 2025-12-06 07:48:56.716 232437 DEBUG nova.virt.driver [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Emitting event <LifecycleEvent: 1765007336.715875, c44fa39f-3d4d-4c6f-bbb2-5761daec080d => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 02:48:56 np0005548731 nova_compute[232433]: 2025-12-06 07:48:56.717 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: c44fa39f-3d4d-4c6f-bbb2-5761daec080d] VM Started (Lifecycle Event)#033[00m
Dec  6 02:48:56 np0005548731 podman[306926]: 2025-12-06 07:48:56.839355893 +0000 UTC m=+0.043545142 container create 5a251f5a500086f943b4ae7be203840f5f5b8c77a695d749920069bda7127a7a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c213d8a6-b20d-4508-a037-6df70874ff25, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  6 02:48:56 np0005548731 systemd[1]: Started libpod-conmon-5a251f5a500086f943b4ae7be203840f5f5b8c77a695d749920069bda7127a7a.scope.
Dec  6 02:48:56 np0005548731 systemd[1]: Started libcrun container.
Dec  6 02:48:56 np0005548731 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8d2c1297d48a568c8912d9d658fa0cae019c27d0c84c2a3d7a862a1981ea3fa6/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec  6 02:48:56 np0005548731 podman[306926]: 2025-12-06 07:48:56.817403369 +0000 UTC m=+0.021592638 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec  6 02:48:56 np0005548731 podman[306926]: 2025-12-06 07:48:56.921488226 +0000 UTC m=+0.125677485 container init 5a251f5a500086f943b4ae7be203840f5f5b8c77a695d749920069bda7127a7a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c213d8a6-b20d-4508-a037-6df70874ff25, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.license=GPLv2)
Dec  6 02:48:56 np0005548731 podman[306926]: 2025-12-06 07:48:56.926863057 +0000 UTC m=+0.131052306 container start 5a251f5a500086f943b4ae7be203840f5f5b8c77a695d749920069bda7127a7a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c213d8a6-b20d-4508-a037-6df70874ff25, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec  6 02:48:56 np0005548731 neutron-haproxy-ovnmeta-c213d8a6-b20d-4508-a037-6df70874ff25[306942]: [NOTICE]   (306946) : New worker (306948) forked
Dec  6 02:48:56 np0005548731 neutron-haproxy-ovnmeta-c213d8a6-b20d-4508-a037-6df70874ff25[306942]: [NOTICE]   (306946) : Loading success.
Dec  6 02:48:57 np0005548731 nova_compute[232433]: 2025-12-06 07:48:57.259 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: c44fa39f-3d4d-4c6f-bbb2-5761daec080d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:48:57 np0005548731 nova_compute[232433]: 2025-12-06 07:48:57.263 232437 DEBUG nova.virt.driver [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Emitting event <LifecycleEvent: 1765007336.7169757, c44fa39f-3d4d-4c6f-bbb2-5761daec080d => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 02:48:57 np0005548731 nova_compute[232433]: 2025-12-06 07:48:57.263 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: c44fa39f-3d4d-4c6f-bbb2-5761daec080d] VM Paused (Lifecycle Event)#033[00m
Dec  6 02:48:57 np0005548731 nova_compute[232433]: 2025-12-06 07:48:57.561 232437 DEBUG nova.network.neutron [req-b9d0645c-6397-4e14-98ef-d31c8b374513 req-715e1178-bfb1-4797-8e13-87c343a98be8 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: c44fa39f-3d4d-4c6f-bbb2-5761daec080d] Updated VIF entry in instance network info cache for port 6922c7df-0b48-4e43-9326-8eeab11521fa. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  6 02:48:57 np0005548731 nova_compute[232433]: 2025-12-06 07:48:57.562 232437 DEBUG nova.network.neutron [req-b9d0645c-6397-4e14-98ef-d31c8b374513 req-715e1178-bfb1-4797-8e13-87c343a98be8 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: c44fa39f-3d4d-4c6f-bbb2-5761daec080d] Updating instance_info_cache with network_info: [{"id": "6922c7df-0b48-4e43-9326-8eeab11521fa", "address": "fa:16:3e:8c:0f:0e", "network": {"id": "c213d8a6-b20d-4508-a037-6df70874ff25", "bridge": "br-int", "label": "tempest-ServersNegativeTestMultiTenantJSON-816884140-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "86b361be239c42758c8ab85ae6854857", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6922c7df-0b", "ovs_interfaceid": "6922c7df-0b48-4e43-9326-8eeab11521fa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 02:48:57 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:48:57 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:48:57 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:48:57.720 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:48:57 np0005548731 nova_compute[232433]: 2025-12-06 07:48:57.864 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: c44fa39f-3d4d-4c6f-bbb2-5761daec080d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:48:57 np0005548731 nova_compute[232433]: 2025-12-06 07:48:57.864 232437 DEBUG oslo_concurrency.lockutils [req-b9d0645c-6397-4e14-98ef-d31c8b374513 req-715e1178-bfb1-4797-8e13-87c343a98be8 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-c44fa39f-3d4d-4c6f-bbb2-5761daec080d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 02:48:57 np0005548731 nova_compute[232433]: 2025-12-06 07:48:57.867 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: c44fa39f-3d4d-4c6f-bbb2-5761daec080d] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  6 02:48:57 np0005548731 nova_compute[232433]: 2025-12-06 07:48:57.956 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: c44fa39f-3d4d-4c6f-bbb2-5761daec080d] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  6 02:48:58 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:48:58 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:48:58 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:48:58.715 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:48:58 np0005548731 nova_compute[232433]: 2025-12-06 07:48:58.720 232437 DEBUG nova.compute.manager [req-1600df26-34be-4118-9e35-98c5c6935a98 req-2b2dbd6a-026e-40ba-9ff2-28a93a87ebba 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: c44fa39f-3d4d-4c6f-bbb2-5761daec080d] Received event network-vif-plugged-6922c7df-0b48-4e43-9326-8eeab11521fa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:48:58 np0005548731 nova_compute[232433]: 2025-12-06 07:48:58.720 232437 DEBUG oslo_concurrency.lockutils [req-1600df26-34be-4118-9e35-98c5c6935a98 req-2b2dbd6a-026e-40ba-9ff2-28a93a87ebba 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "c44fa39f-3d4d-4c6f-bbb2-5761daec080d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:48:58 np0005548731 nova_compute[232433]: 2025-12-06 07:48:58.720 232437 DEBUG oslo_concurrency.lockutils [req-1600df26-34be-4118-9e35-98c5c6935a98 req-2b2dbd6a-026e-40ba-9ff2-28a93a87ebba 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "c44fa39f-3d4d-4c6f-bbb2-5761daec080d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:48:58 np0005548731 nova_compute[232433]: 2025-12-06 07:48:58.720 232437 DEBUG oslo_concurrency.lockutils [req-1600df26-34be-4118-9e35-98c5c6935a98 req-2b2dbd6a-026e-40ba-9ff2-28a93a87ebba 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "c44fa39f-3d4d-4c6f-bbb2-5761daec080d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:48:58 np0005548731 nova_compute[232433]: 2025-12-06 07:48:58.721 232437 DEBUG nova.compute.manager [req-1600df26-34be-4118-9e35-98c5c6935a98 req-2b2dbd6a-026e-40ba-9ff2-28a93a87ebba 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: c44fa39f-3d4d-4c6f-bbb2-5761daec080d] Processing event network-vif-plugged-6922c7df-0b48-4e43-9326-8eeab11521fa _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Dec  6 02:48:58 np0005548731 nova_compute[232433]: 2025-12-06 07:48:58.721 232437 DEBUG nova.compute.manager [None req-8c4a269c-3373-4cc1-902b-1fb9e8584580 f1ecb7cb2b5e454d80f5a0ba9240a894 86b361be239c42758c8ab85ae6854857 - - default default] [instance: c44fa39f-3d4d-4c6f-bbb2-5761daec080d] Instance event wait completed in 2 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec  6 02:48:58 np0005548731 nova_compute[232433]: 2025-12-06 07:48:58.724 232437 DEBUG nova.virt.driver [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Emitting event <LifecycleEvent: 1765007338.7245066, c44fa39f-3d4d-4c6f-bbb2-5761daec080d => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 02:48:58 np0005548731 nova_compute[232433]: 2025-12-06 07:48:58.724 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: c44fa39f-3d4d-4c6f-bbb2-5761daec080d] VM Resumed (Lifecycle Event)#033[00m
Dec  6 02:48:58 np0005548731 nova_compute[232433]: 2025-12-06 07:48:58.726 232437 DEBUG nova.virt.libvirt.driver [None req-8c4a269c-3373-4cc1-902b-1fb9e8584580 f1ecb7cb2b5e454d80f5a0ba9240a894 86b361be239c42758c8ab85ae6854857 - - default default] [instance: c44fa39f-3d4d-4c6f-bbb2-5761daec080d] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Dec  6 02:48:58 np0005548731 nova_compute[232433]: 2025-12-06 07:48:58.729 232437 INFO nova.virt.libvirt.driver [-] [instance: c44fa39f-3d4d-4c6f-bbb2-5761daec080d] Instance spawned successfully.#033[00m
Dec  6 02:48:58 np0005548731 nova_compute[232433]: 2025-12-06 07:48:58.729 232437 DEBUG nova.virt.libvirt.driver [None req-8c4a269c-3373-4cc1-902b-1fb9e8584580 f1ecb7cb2b5e454d80f5a0ba9240a894 86b361be239c42758c8ab85ae6854857 - - default default] [instance: c44fa39f-3d4d-4c6f-bbb2-5761daec080d] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Dec  6 02:48:58 np0005548731 nova_compute[232433]: 2025-12-06 07:48:58.802 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: c44fa39f-3d4d-4c6f-bbb2-5761daec080d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:48:58 np0005548731 nova_compute[232433]: 2025-12-06 07:48:58.808 232437 DEBUG nova.virt.libvirt.driver [None req-8c4a269c-3373-4cc1-902b-1fb9e8584580 f1ecb7cb2b5e454d80f5a0ba9240a894 86b361be239c42758c8ab85ae6854857 - - default default] [instance: c44fa39f-3d4d-4c6f-bbb2-5761daec080d] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 02:48:58 np0005548731 nova_compute[232433]: 2025-12-06 07:48:58.808 232437 DEBUG nova.virt.libvirt.driver [None req-8c4a269c-3373-4cc1-902b-1fb9e8584580 f1ecb7cb2b5e454d80f5a0ba9240a894 86b361be239c42758c8ab85ae6854857 - - default default] [instance: c44fa39f-3d4d-4c6f-bbb2-5761daec080d] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 02:48:58 np0005548731 nova_compute[232433]: 2025-12-06 07:48:58.808 232437 DEBUG nova.virt.libvirt.driver [None req-8c4a269c-3373-4cc1-902b-1fb9e8584580 f1ecb7cb2b5e454d80f5a0ba9240a894 86b361be239c42758c8ab85ae6854857 - - default default] [instance: c44fa39f-3d4d-4c6f-bbb2-5761daec080d] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 02:48:58 np0005548731 nova_compute[232433]: 2025-12-06 07:48:58.809 232437 DEBUG nova.virt.libvirt.driver [None req-8c4a269c-3373-4cc1-902b-1fb9e8584580 f1ecb7cb2b5e454d80f5a0ba9240a894 86b361be239c42758c8ab85ae6854857 - - default default] [instance: c44fa39f-3d4d-4c6f-bbb2-5761daec080d] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 02:48:58 np0005548731 nova_compute[232433]: 2025-12-06 07:48:58.809 232437 DEBUG nova.virt.libvirt.driver [None req-8c4a269c-3373-4cc1-902b-1fb9e8584580 f1ecb7cb2b5e454d80f5a0ba9240a894 86b361be239c42758c8ab85ae6854857 - - default default] [instance: c44fa39f-3d4d-4c6f-bbb2-5761daec080d] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 02:48:58 np0005548731 nova_compute[232433]: 2025-12-06 07:48:58.809 232437 DEBUG nova.virt.libvirt.driver [None req-8c4a269c-3373-4cc1-902b-1fb9e8584580 f1ecb7cb2b5e454d80f5a0ba9240a894 86b361be239c42758c8ab85ae6854857 - - default default] [instance: c44fa39f-3d4d-4c6f-bbb2-5761daec080d] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 02:48:58 np0005548731 nova_compute[232433]: 2025-12-06 07:48:58.812 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: c44fa39f-3d4d-4c6f-bbb2-5761daec080d] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  6 02:48:58 np0005548731 nova_compute[232433]: 2025-12-06 07:48:58.938 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:48:58 np0005548731 nova_compute[232433]: 2025-12-06 07:48:58.949 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: c44fa39f-3d4d-4c6f-bbb2-5761daec080d] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  6 02:48:59 np0005548731 nova_compute[232433]: 2025-12-06 07:48:59.029 232437 INFO nova.compute.manager [None req-8c4a269c-3373-4cc1-902b-1fb9e8584580 f1ecb7cb2b5e454d80f5a0ba9240a894 86b361be239c42758c8ab85ae6854857 - - default default] [instance: c44fa39f-3d4d-4c6f-bbb2-5761daec080d] Took 17.48 seconds to spawn the instance on the hypervisor.#033[00m
Dec  6 02:48:59 np0005548731 nova_compute[232433]: 2025-12-06 07:48:59.030 232437 DEBUG nova.compute.manager [None req-8c4a269c-3373-4cc1-902b-1fb9e8584580 f1ecb7cb2b5e454d80f5a0ba9240a894 86b361be239c42758c8ab85ae6854857 - - default default] [instance: c44fa39f-3d4d-4c6f-bbb2-5761daec080d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:48:59 np0005548731 nova_compute[232433]: 2025-12-06 07:48:59.201 232437 INFO nova.compute.manager [None req-8c4a269c-3373-4cc1-902b-1fb9e8584580 f1ecb7cb2b5e454d80f5a0ba9240a894 86b361be239c42758c8ab85ae6854857 - - default default] [instance: c44fa39f-3d4d-4c6f-bbb2-5761daec080d] Took 24.20 seconds to build instance.#033[00m
Dec  6 02:48:59 np0005548731 nova_compute[232433]: 2025-12-06 07:48:59.281 232437 DEBUG oslo_concurrency.lockutils [None req-8c4a269c-3373-4cc1-902b-1fb9e8584580 f1ecb7cb2b5e454d80f5a0ba9240a894 86b361be239c42758c8ab85ae6854857 - - default default] Lock "c44fa39f-3d4d-4c6f-bbb2-5761daec080d" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 24.380s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:48:59 np0005548731 nova_compute[232433]: 2025-12-06 07:48:59.486 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:48:59 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:48:59 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 02:48:59 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:48:59.722 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 02:49:00 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:49:00 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:49:00 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:49:00.717 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:49:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:49:00.892 143965 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:49:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:49:00.892 143965 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:49:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:49:00.893 143965 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:49:01 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e372 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:49:01 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:49:01 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:49:01 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:49:01.724 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:49:02 np0005548731 nova_compute[232433]: 2025-12-06 07:49:02.265 232437 DEBUG nova.compute.manager [req-5d7224a6-8f4a-4a34-9bee-3225c1471aec req-77a1884f-ed9f-4a5e-8105-25ee26d7cd2b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: c44fa39f-3d4d-4c6f-bbb2-5761daec080d] Received event network-vif-plugged-6922c7df-0b48-4e43-9326-8eeab11521fa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:49:02 np0005548731 nova_compute[232433]: 2025-12-06 07:49:02.266 232437 DEBUG oslo_concurrency.lockutils [req-5d7224a6-8f4a-4a34-9bee-3225c1471aec req-77a1884f-ed9f-4a5e-8105-25ee26d7cd2b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "c44fa39f-3d4d-4c6f-bbb2-5761daec080d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:49:02 np0005548731 nova_compute[232433]: 2025-12-06 07:49:02.266 232437 DEBUG oslo_concurrency.lockutils [req-5d7224a6-8f4a-4a34-9bee-3225c1471aec req-77a1884f-ed9f-4a5e-8105-25ee26d7cd2b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "c44fa39f-3d4d-4c6f-bbb2-5761daec080d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:49:02 np0005548731 nova_compute[232433]: 2025-12-06 07:49:02.266 232437 DEBUG oslo_concurrency.lockutils [req-5d7224a6-8f4a-4a34-9bee-3225c1471aec req-77a1884f-ed9f-4a5e-8105-25ee26d7cd2b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "c44fa39f-3d4d-4c6f-bbb2-5761daec080d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:49:02 np0005548731 nova_compute[232433]: 2025-12-06 07:49:02.266 232437 DEBUG nova.compute.manager [req-5d7224a6-8f4a-4a34-9bee-3225c1471aec req-77a1884f-ed9f-4a5e-8105-25ee26d7cd2b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: c44fa39f-3d4d-4c6f-bbb2-5761daec080d] No waiting events found dispatching network-vif-plugged-6922c7df-0b48-4e43-9326-8eeab11521fa pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 02:49:02 np0005548731 nova_compute[232433]: 2025-12-06 07:49:02.266 232437 WARNING nova.compute.manager [req-5d7224a6-8f4a-4a34-9bee-3225c1471aec req-77a1884f-ed9f-4a5e-8105-25ee26d7cd2b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: c44fa39f-3d4d-4c6f-bbb2-5761daec080d] Received unexpected event network-vif-plugged-6922c7df-0b48-4e43-9326-8eeab11521fa for instance with vm_state active and task_state None.#033[00m
Dec  6 02:49:02 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:49:02 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:49:02 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:49:02.719 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:49:03 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 02:49:03 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 02:49:03 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:49:03 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:49:03 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:49:03.728 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:49:03 np0005548731 nova_compute[232433]: 2025-12-06 07:49:03.940 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:49:04 np0005548731 nova_compute[232433]: 2025-12-06 07:49:04.488 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:49:04 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:49:04 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:49:04 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:49:04.721 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:49:05 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:49:05 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:49:05 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:49:05.731 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:49:06 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e372 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:49:06 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:49:06 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:49:06 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:49:06.723 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:49:07 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:49:07 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:49:07 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:49:07.733 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:49:08 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:49:08 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:49:08 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:49:08.725 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:49:08 np0005548731 nova_compute[232433]: 2025-12-06 07:49:08.942 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:49:09 np0005548731 nova_compute[232433]: 2025-12-06 07:49:09.489 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:49:09 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:49:09 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:49:09 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:49:09.736 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:49:10 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:49:10 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:49:10 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:49:10.727 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:49:11 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e372 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:49:11 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:49:11 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:49:11 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:49:11.739 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:49:12 np0005548731 ovn_controller[133927]: 2025-12-06T07:49:12Z|00103|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:8c:0f:0e 10.100.0.11
Dec  6 02:49:12 np0005548731 ovn_controller[133927]: 2025-12-06T07:49:12Z|00104|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:8c:0f:0e 10.100.0.11
Dec  6 02:49:12 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:49:12 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:49:12 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:49:12.729 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:49:13 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:49:13 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:49:13 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:49:13.743 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:49:13 np0005548731 nova_compute[232433]: 2025-12-06 07:49:13.943 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:49:14 np0005548731 nova_compute[232433]: 2025-12-06 07:49:14.492 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:49:14 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:49:14 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:49:14 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:49:14.731 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:49:15 np0005548731 nova_compute[232433]: 2025-12-06 07:49:15.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:49:15 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:49:15 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 02:49:15 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:49:15.746 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 02:49:16 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e372 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:49:16 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:49:16 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:49:16 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:49:16.734 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:49:17 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:49:17 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:49:17 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:49:17.751 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:49:18 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:49:18 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:49:18 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:49:18.736 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:49:18 np0005548731 podman[307070]: 2025-12-06 07:49:18.893317149 +0000 UTC m=+0.050250056 container health_status 26c2ce9bdb83c6db5ccad7f5b2a7cb4e9354ab0235ad2f942ea42c8feda2142b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Dec  6 02:49:18 np0005548731 podman[307072]: 2025-12-06 07:49:18.900468854 +0000 UTC m=+0.056438468 container health_status ae4fd08cdec31d6ae2a6b91ffff7deb6ca5a8375cb52ee4b685cc6f25796824e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_id=multipathd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  6 02:49:18 np0005548731 podman[307071]: 2025-12-06 07:49:18.930346902 +0000 UTC m=+0.087279309 container health_status a118c3becf56cfbcae208065771ebf10a65162bb7b96389971293e534823fb78 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.schema-version=1.0)
Dec  6 02:49:18 np0005548731 nova_compute[232433]: 2025-12-06 07:49:18.944 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:49:19 np0005548731 nova_compute[232433]: 2025-12-06 07:49:19.494 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:49:19 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:49:19 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:49:19 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:49:19.755 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:49:20 np0005548731 nova_compute[232433]: 2025-12-06 07:49:20.616 232437 DEBUG oslo_concurrency.lockutils [None req-a9951f3f-c28b-4b37-8998-6368fedea791 f1ecb7cb2b5e454d80f5a0ba9240a894 86b361be239c42758c8ab85ae6854857 - - default default] Acquiring lock "c44fa39f-3d4d-4c6f-bbb2-5761daec080d" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:49:20 np0005548731 nova_compute[232433]: 2025-12-06 07:49:20.617 232437 DEBUG oslo_concurrency.lockutils [None req-a9951f3f-c28b-4b37-8998-6368fedea791 f1ecb7cb2b5e454d80f5a0ba9240a894 86b361be239c42758c8ab85ae6854857 - - default default] Lock "c44fa39f-3d4d-4c6f-bbb2-5761daec080d" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:49:20 np0005548731 nova_compute[232433]: 2025-12-06 07:49:20.617 232437 DEBUG oslo_concurrency.lockutils [None req-a9951f3f-c28b-4b37-8998-6368fedea791 f1ecb7cb2b5e454d80f5a0ba9240a894 86b361be239c42758c8ab85ae6854857 - - default default] Acquiring lock "c44fa39f-3d4d-4c6f-bbb2-5761daec080d-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:49:20 np0005548731 nova_compute[232433]: 2025-12-06 07:49:20.617 232437 DEBUG oslo_concurrency.lockutils [None req-a9951f3f-c28b-4b37-8998-6368fedea791 f1ecb7cb2b5e454d80f5a0ba9240a894 86b361be239c42758c8ab85ae6854857 - - default default] Lock "c44fa39f-3d4d-4c6f-bbb2-5761daec080d-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:49:20 np0005548731 nova_compute[232433]: 2025-12-06 07:49:20.618 232437 DEBUG oslo_concurrency.lockutils [None req-a9951f3f-c28b-4b37-8998-6368fedea791 f1ecb7cb2b5e454d80f5a0ba9240a894 86b361be239c42758c8ab85ae6854857 - - default default] Lock "c44fa39f-3d4d-4c6f-bbb2-5761daec080d-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:49:20 np0005548731 nova_compute[232433]: 2025-12-06 07:49:20.619 232437 INFO nova.compute.manager [None req-a9951f3f-c28b-4b37-8998-6368fedea791 f1ecb7cb2b5e454d80f5a0ba9240a894 86b361be239c42758c8ab85ae6854857 - - default default] [instance: c44fa39f-3d4d-4c6f-bbb2-5761daec080d] Terminating instance#033[00m
Dec  6 02:49:20 np0005548731 nova_compute[232433]: 2025-12-06 07:49:20.620 232437 DEBUG nova.compute.manager [None req-a9951f3f-c28b-4b37-8998-6368fedea791 f1ecb7cb2b5e454d80f5a0ba9240a894 86b361be239c42758c8ab85ae6854857 - - default default] [instance: c44fa39f-3d4d-4c6f-bbb2-5761daec080d] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec  6 02:49:20 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:49:20 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:49:20 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:49:20.738 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:49:21 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e372 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:49:21 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:49:21 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:49:21 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:49:21.758 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:49:22 np0005548731 kernel: tap6922c7df-0b (unregistering): left promiscuous mode
Dec  6 02:49:22 np0005548731 NetworkManager[49182]: <info>  [1765007362.0241] device (tap6922c7df-0b): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec  6 02:49:22 np0005548731 nova_compute[232433]: 2025-12-06 07:49:22.035 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:49:22 np0005548731 ovn_controller[133927]: 2025-12-06T07:49:22Z|00797|binding|INFO|Releasing lport 6922c7df-0b48-4e43-9326-8eeab11521fa from this chassis (sb_readonly=0)
Dec  6 02:49:22 np0005548731 ovn_controller[133927]: 2025-12-06T07:49:22Z|00798|binding|INFO|Setting lport 6922c7df-0b48-4e43-9326-8eeab11521fa down in Southbound
Dec  6 02:49:22 np0005548731 ovn_controller[133927]: 2025-12-06T07:49:22Z|00799|binding|INFO|Removing iface tap6922c7df-0b ovn-installed in OVS
Dec  6 02:49:22 np0005548731 nova_compute[232433]: 2025-12-06 07:49:22.061 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:49:22 np0005548731 systemd[1]: machine-qemu\x2d80\x2dinstance\x2d000000a3.scope: Deactivated successfully.
Dec  6 02:49:22 np0005548731 systemd[1]: machine-qemu\x2d80\x2dinstance\x2d000000a3.scope: Consumed 14.140s CPU time.
Dec  6 02:49:22 np0005548731 systemd-machined[195355]: Machine qemu-80-instance-000000a3 terminated.
Dec  6 02:49:22 np0005548731 nova_compute[232433]: 2025-12-06 07:49:22.264 232437 INFO nova.virt.libvirt.driver [-] [instance: c44fa39f-3d4d-4c6f-bbb2-5761daec080d] Instance destroyed successfully.#033[00m
Dec  6 02:49:22 np0005548731 nova_compute[232433]: 2025-12-06 07:49:22.265 232437 DEBUG nova.objects.instance [None req-a9951f3f-c28b-4b37-8998-6368fedea791 f1ecb7cb2b5e454d80f5a0ba9240a894 86b361be239c42758c8ab85ae6854857 - - default default] Lazy-loading 'resources' on Instance uuid c44fa39f-3d4d-4c6f-bbb2-5761daec080d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:49:22 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:49:22 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:49:22 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:49:22.740 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:49:23 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:49:23.436 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8c:0f:0e 10.100.0.11'], port_security=['fa:16:3e:8c:0f:0e 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'c44fa39f-3d4d-4c6f-bbb2-5761daec080d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c213d8a6-b20d-4508-a037-6df70874ff25', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '86b361be239c42758c8ab85ae6854857', 'neutron:revision_number': '4', 'neutron:security_group_ids': '085cf754-0e38-47f7-b63f-f72cc1c26f11', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c79a033f-7989-40ac-9f02-edd6c92b350b, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], logical_port=6922c7df-0b48-4e43-9326-8eeab11521fa) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 02:49:23 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:49:23.438 143965 INFO neutron.agent.ovn.metadata.agent [-] Port 6922c7df-0b48-4e43-9326-8eeab11521fa in datapath c213d8a6-b20d-4508-a037-6df70874ff25 unbound from our chassis#033[00m
Dec  6 02:49:23 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:49:23.439 143965 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network c213d8a6-b20d-4508-a037-6df70874ff25, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec  6 02:49:23 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:49:23.441 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[e6039e60-41c1-48c9-abf4-08ea5165bda2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:49:23 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:49:23.441 143965 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-c213d8a6-b20d-4508-a037-6df70874ff25 namespace which is not needed anymore#033[00m
Dec  6 02:49:23 np0005548731 nova_compute[232433]: 2025-12-06 07:49:23.535 232437 DEBUG nova.virt.libvirt.vif [None req-a9951f3f-c28b-4b37-8998-6368fedea791 f1ecb7cb2b5e454d80f5a0ba9240a894 86b361be239c42758c8ab85ae6854857 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T07:48:32Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersNegativeTestMultiTenantJSON-server-1601820272',display_name='tempest-ServersNegativeTestMultiTenantJSON-server-1601820272',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serversnegativetestmultitenantjson-server-1601820272',id=163,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-06T07:48:59Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='86b361be239c42758c8ab85ae6854857',ramdisk_id='',reservation_id='r-l4rv750a',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='vi
rtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersNegativeTestMultiTenantJSON-661901113',owner_user_name='tempest-ServersNegativeTestMultiTenantJSON-661901113-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T07:48:59Z,user_data=None,user_id='f1ecb7cb2b5e454d80f5a0ba9240a894',uuid=c44fa39f-3d4d-4c6f-bbb2-5761daec080d,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "6922c7df-0b48-4e43-9326-8eeab11521fa", "address": "fa:16:3e:8c:0f:0e", "network": {"id": "c213d8a6-b20d-4508-a037-6df70874ff25", "bridge": "br-int", "label": "tempest-ServersNegativeTestMultiTenantJSON-816884140-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "86b361be239c42758c8ab85ae6854857", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6922c7df-0b", "ovs_interfaceid": "6922c7df-0b48-4e43-9326-8eeab11521fa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec  6 02:49:23 np0005548731 nova_compute[232433]: 2025-12-06 07:49:23.536 232437 DEBUG nova.network.os_vif_util [None req-a9951f3f-c28b-4b37-8998-6368fedea791 f1ecb7cb2b5e454d80f5a0ba9240a894 86b361be239c42758c8ab85ae6854857 - - default default] Converting VIF {"id": "6922c7df-0b48-4e43-9326-8eeab11521fa", "address": "fa:16:3e:8c:0f:0e", "network": {"id": "c213d8a6-b20d-4508-a037-6df70874ff25", "bridge": "br-int", "label": "tempest-ServersNegativeTestMultiTenantJSON-816884140-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "86b361be239c42758c8ab85ae6854857", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6922c7df-0b", "ovs_interfaceid": "6922c7df-0b48-4e43-9326-8eeab11521fa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 02:49:23 np0005548731 nova_compute[232433]: 2025-12-06 07:49:23.536 232437 DEBUG nova.network.os_vif_util [None req-a9951f3f-c28b-4b37-8998-6368fedea791 f1ecb7cb2b5e454d80f5a0ba9240a894 86b361be239c42758c8ab85ae6854857 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8c:0f:0e,bridge_name='br-int',has_traffic_filtering=True,id=6922c7df-0b48-4e43-9326-8eeab11521fa,network=Network(c213d8a6-b20d-4508-a037-6df70874ff25),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6922c7df-0b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 02:49:23 np0005548731 nova_compute[232433]: 2025-12-06 07:49:23.537 232437 DEBUG os_vif [None req-a9951f3f-c28b-4b37-8998-6368fedea791 f1ecb7cb2b5e454d80f5a0ba9240a894 86b361be239c42758c8ab85ae6854857 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:8c:0f:0e,bridge_name='br-int',has_traffic_filtering=True,id=6922c7df-0b48-4e43-9326-8eeab11521fa,network=Network(c213d8a6-b20d-4508-a037-6df70874ff25),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6922c7df-0b') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec  6 02:49:23 np0005548731 nova_compute[232433]: 2025-12-06 07:49:23.538 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:49:23 np0005548731 nova_compute[232433]: 2025-12-06 07:49:23.538 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6922c7df-0b, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:49:23 np0005548731 nova_compute[232433]: 2025-12-06 07:49:23.540 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:49:23 np0005548731 nova_compute[232433]: 2025-12-06 07:49:23.541 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:49:23 np0005548731 nova_compute[232433]: 2025-12-06 07:49:23.543 232437 INFO os_vif [None req-a9951f3f-c28b-4b37-8998-6368fedea791 f1ecb7cb2b5e454d80f5a0ba9240a894 86b361be239c42758c8ab85ae6854857 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:8c:0f:0e,bridge_name='br-int',has_traffic_filtering=True,id=6922c7df-0b48-4e43-9326-8eeab11521fa,network=Network(c213d8a6-b20d-4508-a037-6df70874ff25),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6922c7df-0b')#033[00m
Dec  6 02:49:23 np0005548731 neutron-haproxy-ovnmeta-c213d8a6-b20d-4508-a037-6df70874ff25[306942]: [NOTICE]   (306946) : haproxy version is 2.8.14-c23fe91
Dec  6 02:49:23 np0005548731 neutron-haproxy-ovnmeta-c213d8a6-b20d-4508-a037-6df70874ff25[306942]: [NOTICE]   (306946) : path to executable is /usr/sbin/haproxy
Dec  6 02:49:23 np0005548731 neutron-haproxy-ovnmeta-c213d8a6-b20d-4508-a037-6df70874ff25[306942]: [WARNING]  (306946) : Exiting Master process...
Dec  6 02:49:23 np0005548731 neutron-haproxy-ovnmeta-c213d8a6-b20d-4508-a037-6df70874ff25[306942]: [ALERT]    (306946) : Current worker (306948) exited with code 143 (Terminated)
Dec  6 02:49:23 np0005548731 neutron-haproxy-ovnmeta-c213d8a6-b20d-4508-a037-6df70874ff25[306942]: [WARNING]  (306946) : All workers exited. Exiting... (0)
Dec  6 02:49:23 np0005548731 systemd[1]: libpod-5a251f5a500086f943b4ae7be203840f5f5b8c77a695d749920069bda7127a7a.scope: Deactivated successfully.
Dec  6 02:49:23 np0005548731 podman[307213]: 2025-12-06 07:49:23.575031152 +0000 UTC m=+0.047538950 container died 5a251f5a500086f943b4ae7be203840f5f5b8c77a695d749920069bda7127a7a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c213d8a6-b20d-4508-a037-6df70874ff25, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  6 02:49:23 np0005548731 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-5a251f5a500086f943b4ae7be203840f5f5b8c77a695d749920069bda7127a7a-userdata-shm.mount: Deactivated successfully.
Dec  6 02:49:23 np0005548731 systemd[1]: var-lib-containers-storage-overlay-8d2c1297d48a568c8912d9d658fa0cae019c27d0c84c2a3d7a862a1981ea3fa6-merged.mount: Deactivated successfully.
Dec  6 02:49:23 np0005548731 podman[307213]: 2025-12-06 07:49:23.612941437 +0000 UTC m=+0.085449235 container cleanup 5a251f5a500086f943b4ae7be203840f5f5b8c77a695d749920069bda7127a7a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c213d8a6-b20d-4508-a037-6df70874ff25, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  6 02:49:23 np0005548731 systemd[1]: libpod-conmon-5a251f5a500086f943b4ae7be203840f5f5b8c77a695d749920069bda7127a7a.scope: Deactivated successfully.
Dec  6 02:49:23 np0005548731 podman[307257]: 2025-12-06 07:49:23.669197639 +0000 UTC m=+0.038486260 container remove 5a251f5a500086f943b4ae7be203840f5f5b8c77a695d749920069bda7127a7a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c213d8a6-b20d-4508-a037-6df70874ff25, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Dec  6 02:49:23 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:49:23.676 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[bac04d56-bf63-4373-8e4b-50886cc12c5a]: (4, ('Sat Dec  6 07:49:23 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-c213d8a6-b20d-4508-a037-6df70874ff25 (5a251f5a500086f943b4ae7be203840f5f5b8c77a695d749920069bda7127a7a)\n5a251f5a500086f943b4ae7be203840f5f5b8c77a695d749920069bda7127a7a\nSat Dec  6 07:49:23 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-c213d8a6-b20d-4508-a037-6df70874ff25 (5a251f5a500086f943b4ae7be203840f5f5b8c77a695d749920069bda7127a7a)\n5a251f5a500086f943b4ae7be203840f5f5b8c77a695d749920069bda7127a7a\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:49:23 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:49:23.678 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[d3a4756d-d5c3-45d1-bd8d-f2455325a75c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:49:23 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:49:23.680 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc213d8a6-b0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:49:23 np0005548731 nova_compute[232433]: 2025-12-06 07:49:23.682 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:49:23 np0005548731 kernel: tapc213d8a6-b0: left promiscuous mode
Dec  6 02:49:23 np0005548731 nova_compute[232433]: 2025-12-06 07:49:23.695 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:49:23 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:49:23.698 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[a4a35bd5-af9a-4c9e-98ad-65601f61c371]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:49:23 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:49:23.717 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[1f72cf41-5237-41c3-acf3-b6d386ab3d39]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:49:23 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:49:23.719 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[462e64dd-7d7b-4f1e-9031-a1968f200739]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:49:23 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:49:23.733 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[986763a3-2615-4dc2-a3f9-83b88da95ed1]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 759822, 'reachable_time': 20007, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 307273, 'error': None, 'target': 'ovnmeta-c213d8a6-b20d-4508-a037-6df70874ff25', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:49:23 np0005548731 systemd[1]: run-netns-ovnmeta\x2dc213d8a6\x2db20d\x2d4508\x2da037\x2d6df70874ff25.mount: Deactivated successfully.
Dec  6 02:49:23 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:49:23.735 144079 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-c213d8a6-b20d-4508-a037-6df70874ff25 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Dec  6 02:49:23 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:49:23.736 144079 DEBUG oslo.privsep.daemon [-] privsep: reply[4060a89d-6c3f-4740-a0f5-ef4a0c93eb08]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:49:23 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:49:23 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:49:23 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:49:23.762 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:49:23 np0005548731 nova_compute[232433]: 2025-12-06 07:49:23.946 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:49:24 np0005548731 nova_compute[232433]: 2025-12-06 07:49:24.307 232437 DEBUG nova.compute.manager [req-57452eec-d139-4511-8d0d-632987f4e055 req-2aa8e3c0-970e-458a-9378-d22c4439ebf7 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: c44fa39f-3d4d-4c6f-bbb2-5761daec080d] Received event network-vif-unplugged-6922c7df-0b48-4e43-9326-8eeab11521fa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:49:24 np0005548731 nova_compute[232433]: 2025-12-06 07:49:24.308 232437 DEBUG oslo_concurrency.lockutils [req-57452eec-d139-4511-8d0d-632987f4e055 req-2aa8e3c0-970e-458a-9378-d22c4439ebf7 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "c44fa39f-3d4d-4c6f-bbb2-5761daec080d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  6 02:49:24 np0005548731 nova_compute[232433]: 2025-12-06 07:49:24.309 232437 DEBUG oslo_concurrency.lockutils [req-57452eec-d139-4511-8d0d-632987f4e055 req-2aa8e3c0-970e-458a-9378-d22c4439ebf7 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "c44fa39f-3d4d-4c6f-bbb2-5761daec080d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  6 02:49:24 np0005548731 nova_compute[232433]: 2025-12-06 07:49:24.309 232437 DEBUG oslo_concurrency.lockutils [req-57452eec-d139-4511-8d0d-632987f4e055 req-2aa8e3c0-970e-458a-9378-d22c4439ebf7 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "c44fa39f-3d4d-4c6f-bbb2-5761daec080d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  6 02:49:24 np0005548731 nova_compute[232433]: 2025-12-06 07:49:24.310 232437 DEBUG nova.compute.manager [req-57452eec-d139-4511-8d0d-632987f4e055 req-2aa8e3c0-970e-458a-9378-d22c4439ebf7 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: c44fa39f-3d4d-4c6f-bbb2-5761daec080d] No waiting events found dispatching network-vif-unplugged-6922c7df-0b48-4e43-9326-8eeab11521fa pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec  6 02:49:24 np0005548731 nova_compute[232433]: 2025-12-06 07:49:24.310 232437 DEBUG nova.compute.manager [req-57452eec-d139-4511-8d0d-632987f4e055 req-2aa8e3c0-970e-458a-9378-d22c4439ebf7 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: c44fa39f-3d4d-4c6f-bbb2-5761daec080d] Received event network-vif-unplugged-6922c7df-0b48-4e43-9326-8eeab11521fa for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Dec  6 02:49:24 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:49:24 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:49:24 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:49:24.742 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:49:24 np0005548731 ceph-mon[77458]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec  6 02:49:24 np0005548731 ceph-mon[77458]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 4800.0 total, 600.0 interval
Cumulative writes: 11K writes, 61K keys, 11K commit groups, 1.0 writes per commit group, ingest: 0.12 GB, 0.03 MB/s
Cumulative WAL: 11K writes, 11K syncs, 1.00 writes per sync, written: 0.12 GB, 0.03 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 1693 writes, 8398 keys, 1693 commit groups, 1.0 writes per commit group, ingest: 16.55 MB, 0.03 MB/s
Interval WAL: 1693 writes, 1693 syncs, 1.00 writes per sync, written: 0.02 GB, 0.03 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     31.9      2.34              0.22        36    0.065       0      0       0.0       0.0
  L6      1/0   11.80 MB   0.0      0.4     0.1      0.3       0.3      0.0       0.0   4.7     91.8     77.9      4.50              1.02        35    0.128    243K    19K       0.0       0.0
 Sum      1/0   11.80 MB   0.0      0.4     0.1      0.3       0.4      0.1       0.0   5.7     60.4     62.1      6.83              1.24        71    0.096    243K    19K       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   7.2    146.5    148.2      0.53              0.22        12    0.045     56K   3109       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Low      0/0    0.00 KB   0.0      0.4     0.1      0.3       0.3      0.0       0.0   0.0     91.8     77.9      4.50              1.02        35    0.128    243K    19K       0.0       0.0
High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     32.0      2.34              0.22        35    0.067       0      0       0.0       0.0
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.6      0.00              0.00         1    0.003       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 4800.0 total, 600.0 interval
Flush(GB): cumulative 0.073, interval 0.011
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.41 GB write, 0.09 MB/s write, 0.40 GB read, 0.09 MB/s read, 6.8 seconds
Interval compaction: 0.08 GB write, 0.13 MB/s write, 0.08 GB read, 0.13 MB/s read, 0.5 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x5619171151f0#2 capacity: 304.00 MB usage: 46.05 MB table_size: 0 occupancy: 18446744073709551615 collections: 9 last_copies: 0 last_secs: 0.000238 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(2618,44.27 MB,14.5619%) FilterBlock(71,683.30 KB,0.219501%) IndexBlock(71,1.12 MB,0.367712%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **
Dec  6 02:49:25 np0005548731 nova_compute[232433]: 2025-12-06 07:49:25.037 232437 INFO nova.virt.libvirt.driver [None req-a9951f3f-c28b-4b37-8998-6368fedea791 f1ecb7cb2b5e454d80f5a0ba9240a894 86b361be239c42758c8ab85ae6854857 - - default default] [instance: c44fa39f-3d4d-4c6f-bbb2-5761daec080d] Deleting instance files /var/lib/nova/instances/c44fa39f-3d4d-4c6f-bbb2-5761daec080d_del
Dec  6 02:49:25 np0005548731 nova_compute[232433]: 2025-12-06 07:49:25.038 232437 INFO nova.virt.libvirt.driver [None req-a9951f3f-c28b-4b37-8998-6368fedea791 f1ecb7cb2b5e454d80f5a0ba9240a894 86b361be239c42758c8ab85ae6854857 - - default default] [instance: c44fa39f-3d4d-4c6f-bbb2-5761daec080d] Deletion of /var/lib/nova/instances/c44fa39f-3d4d-4c6f-bbb2-5761daec080d_del complete
Dec  6 02:49:25 np0005548731 nova_compute[232433]: 2025-12-06 07:49:25.172 232437 INFO nova.compute.manager [None req-a9951f3f-c28b-4b37-8998-6368fedea791 f1ecb7cb2b5e454d80f5a0ba9240a894 86b361be239c42758c8ab85ae6854857 - - default default] [instance: c44fa39f-3d4d-4c6f-bbb2-5761daec080d] Took 4.55 seconds to destroy the instance on the hypervisor.
Dec  6 02:49:25 np0005548731 nova_compute[232433]: 2025-12-06 07:49:25.173 232437 DEBUG oslo.service.loopingcall [None req-a9951f3f-c28b-4b37-8998-6368fedea791 f1ecb7cb2b5e454d80f5a0ba9240a894 86b361be239c42758c8ab85ae6854857 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec  6 02:49:25 np0005548731 nova_compute[232433]: 2025-12-06 07:49:25.173 232437 DEBUG nova.compute.manager [-] [instance: c44fa39f-3d4d-4c6f-bbb2-5761daec080d] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec  6 02:49:25 np0005548731 nova_compute[232433]: 2025-12-06 07:49:25.174 232437 DEBUG nova.network.neutron [-] [instance: c44fa39f-3d4d-4c6f-bbb2-5761daec080d] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec  6 02:49:25 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:49:25 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:49:25 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:49:25.765 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:49:26 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e372 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:49:26 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:49:26 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 02:49:26 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:49:26.744 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 02:49:26 np0005548731 nova_compute[232433]: 2025-12-06 07:49:26.755 232437 DEBUG nova.compute.manager [req-e383a173-15b7-4b0b-9765-1e4ca755839e req-72c12a81-b652-43c1-b173-9763e5f68e27 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: c44fa39f-3d4d-4c6f-bbb2-5761daec080d] Received event network-vif-plugged-6922c7df-0b48-4e43-9326-8eeab11521fa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec  6 02:49:26 np0005548731 nova_compute[232433]: 2025-12-06 07:49:26.755 232437 DEBUG oslo_concurrency.lockutils [req-e383a173-15b7-4b0b-9765-1e4ca755839e req-72c12a81-b652-43c1-b173-9763e5f68e27 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "c44fa39f-3d4d-4c6f-bbb2-5761daec080d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  6 02:49:26 np0005548731 nova_compute[232433]: 2025-12-06 07:49:26.756 232437 DEBUG oslo_concurrency.lockutils [req-e383a173-15b7-4b0b-9765-1e4ca755839e req-72c12a81-b652-43c1-b173-9763e5f68e27 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "c44fa39f-3d4d-4c6f-bbb2-5761daec080d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  6 02:49:26 np0005548731 nova_compute[232433]: 2025-12-06 07:49:26.756 232437 DEBUG oslo_concurrency.lockutils [req-e383a173-15b7-4b0b-9765-1e4ca755839e req-72c12a81-b652-43c1-b173-9763e5f68e27 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "c44fa39f-3d4d-4c6f-bbb2-5761daec080d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  6 02:49:26 np0005548731 nova_compute[232433]: 2025-12-06 07:49:26.756 232437 DEBUG nova.compute.manager [req-e383a173-15b7-4b0b-9765-1e4ca755839e req-72c12a81-b652-43c1-b173-9763e5f68e27 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: c44fa39f-3d4d-4c6f-bbb2-5761daec080d] No waiting events found dispatching network-vif-plugged-6922c7df-0b48-4e43-9326-8eeab11521fa pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec  6 02:49:26 np0005548731 nova_compute[232433]: 2025-12-06 07:49:26.756 232437 WARNING nova.compute.manager [req-e383a173-15b7-4b0b-9765-1e4ca755839e req-72c12a81-b652-43c1-b173-9763e5f68e27 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: c44fa39f-3d4d-4c6f-bbb2-5761daec080d] Received unexpected event network-vif-plugged-6922c7df-0b48-4e43-9326-8eeab11521fa for instance with vm_state active and task_state deleting.
Dec  6 02:49:27 np0005548731 nova_compute[232433]: 2025-12-06 07:49:27.059 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  6 02:49:27 np0005548731 nova_compute[232433]: 2025-12-06 07:49:27.059 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  6 02:49:27 np0005548731 nova_compute[232433]: 2025-12-06 07:49:27.060 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec  6 02:49:27 np0005548731 nova_compute[232433]: 2025-12-06 07:49:27.060 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec  6 02:49:27 np0005548731 nova_compute[232433]: 2025-12-06 07:49:27.088 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] [instance: c44fa39f-3d4d-4c6f-bbb2-5761daec080d] Skipping network cache update for instance because it is being deleted. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9875
Dec  6 02:49:27 np0005548731 nova_compute[232433]: 2025-12-06 07:49:27.088 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec  6 02:49:27 np0005548731 nova_compute[232433]: 2025-12-06 07:49:27.089 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  6 02:49:27 np0005548731 nova_compute[232433]: 2025-12-06 07:49:27.089 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  6 02:49:27 np0005548731 nova_compute[232433]: 2025-12-06 07:49:27.089 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec  6 02:49:27 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:49:27 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 02:49:27 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:49:27.768 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 02:49:28 np0005548731 nova_compute[232433]: 2025-12-06 07:49:28.542 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 02:49:28 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:49:28 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:49:28 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:49:28.746 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:49:28 np0005548731 nova_compute[232433]: 2025-12-06 07:49:28.949 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 02:49:29 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:49:29 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:49:29 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:49:29.772 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:49:30 np0005548731 nova_compute[232433]: 2025-12-06 07:49:30.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  6 02:49:30 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:49:30 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:49:30 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:49:30.748 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:49:31 np0005548731 nova_compute[232433]: 2025-12-06 07:49:31.103 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  6 02:49:31 np0005548731 nova_compute[232433]: 2025-12-06 07:49:31.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  6 02:49:31 np0005548731 nova_compute[232433]: 2025-12-06 07:49:31.543 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  6 02:49:31 np0005548731 nova_compute[232433]: 2025-12-06 07:49:31.543 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  6 02:49:31 np0005548731 nova_compute[232433]: 2025-12-06 07:49:31.543 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  6 02:49:31 np0005548731 nova_compute[232433]: 2025-12-06 07:49:31.544 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec  6 02:49:31 np0005548731 nova_compute[232433]: 2025-12-06 07:49:31.544 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  6 02:49:31 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e372 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:49:31 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:49:31 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:49:31 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:49:31.776 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:49:31 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 02:49:31 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1050604975' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 02:49:31 np0005548731 nova_compute[232433]: 2025-12-06 07:49:31.964 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.421s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  6 02:49:32 np0005548731 nova_compute[232433]: 2025-12-06 07:49:32.133 232437 WARNING nova.virt.libvirt.driver [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec  6 02:49:32 np0005548731 nova_compute[232433]: 2025-12-06 07:49:32.135 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4307MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec  6 02:49:32 np0005548731 nova_compute[232433]: 2025-12-06 07:49:32.135 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  6 02:49:32 np0005548731 nova_compute[232433]: 2025-12-06 07:49:32.135 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  6 02:49:32 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:49:32 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:49:32 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:49:32.750 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:49:32 np0005548731 nova_compute[232433]: 2025-12-06 07:49:32.944 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Instance c44fa39f-3d4d-4c6f-bbb2-5761daec080d actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec  6 02:49:32 np0005548731 nova_compute[232433]: 2025-12-06 07:49:32.944 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec  6 02:49:32 np0005548731 nova_compute[232433]: 2025-12-06 07:49:32.945 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec  6 02:49:33 np0005548731 nova_compute[232433]: 2025-12-06 07:49:33.028 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  6 02:49:33 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 02:49:33 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3221781244' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 02:49:33 np0005548731 nova_compute[232433]: 2025-12-06 07:49:33.472 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.445s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  6 02:49:33 np0005548731 nova_compute[232433]: 2025-12-06 07:49:33.477 232437 DEBUG nova.compute.provider_tree [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Inventory has not changed in ProviderTree for provider: 6d00757a-082f-486d-ae84-869a2ba2e6e7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec  6 02:49:33 np0005548731 nova_compute[232433]: 2025-12-06 07:49:33.544 232437 DEBUG nova.scheduler.client.report [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Inventory has not changed for provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec  6 02:49:33 np0005548731 nova_compute[232433]: 2025-12-06 07:49:33.589 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 02:49:33 np0005548731 nova_compute[232433]: 2025-12-06 07:49:33.715 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec  6 02:49:33 np0005548731 nova_compute[232433]: 2025-12-06 07:49:33.715 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.580s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  6 02:49:33 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:49:33 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:49:33 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:49:33.780 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:49:33 np0005548731 nova_compute[232433]: 2025-12-06 07:49:33.951 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 02:49:33 np0005548731 nova_compute[232433]: 2025-12-06 07:49:33.954 232437 DEBUG nova.network.neutron [-] [instance: c44fa39f-3d4d-4c6f-bbb2-5761daec080d] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec  6 02:49:34 np0005548731 nova_compute[232433]: 2025-12-06 07:49:34.059 232437 DEBUG nova.compute.manager [req-1972d4cf-2d7f-43f0-8ea6-db98cba9f451 req-9f50ee48-0310-47f9-a087-3b9b1e49beb7 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: c44fa39f-3d4d-4c6f-bbb2-5761daec080d] Received event network-vif-deleted-6922c7df-0b48-4e43-9326-8eeab11521fa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec  6 02:49:34 np0005548731 nova_compute[232433]: 2025-12-06 07:49:34.059 232437 INFO nova.compute.manager [req-1972d4cf-2d7f-43f0-8ea6-db98cba9f451 req-9f50ee48-0310-47f9-a087-3b9b1e49beb7 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: c44fa39f-3d4d-4c6f-bbb2-5761daec080d] Neutron deleted interface 6922c7df-0b48-4e43-9326-8eeab11521fa; detaching it from the instance and deleting it from the info cache
Dec  6 02:49:34 np0005548731 nova_compute[232433]: 2025-12-06 07:49:34.059 232437 DEBUG nova.network.neutron [req-1972d4cf-2d7f-43f0-8ea6-db98cba9f451 req-9f50ee48-0310-47f9-a087-3b9b1e49beb7 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: c44fa39f-3d4d-4c6f-bbb2-5761daec080d] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec  6 02:49:34 np0005548731 nova_compute[232433]: 2025-12-06 07:49:34.098 232437 INFO nova.compute.manager [-] [instance: c44fa39f-3d4d-4c6f-bbb2-5761daec080d] Took 8.92 seconds to deallocate network for instance.
Dec  6 02:49:34 np0005548731 nova_compute[232433]: 2025-12-06 07:49:34.109 232437 DEBUG nova.compute.manager [req-1972d4cf-2d7f-43f0-8ea6-db98cba9f451 req-9f50ee48-0310-47f9-a087-3b9b1e49beb7 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: c44fa39f-3d4d-4c6f-bbb2-5761daec080d] Detach interface failed, port_id=6922c7df-0b48-4e43-9326-8eeab11521fa, reason: Instance c44fa39f-3d4d-4c6f-bbb2-5761daec080d could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Dec  6 02:49:34 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:49:34.158 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=64, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:ec:b3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '32:72:e7:89:e0:7d'}, ipsec=False) old=SB_Global(nb_cfg=63) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec  6 02:49:34 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:49:34.159 143965 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec  6 02:49:34 np0005548731 nova_compute[232433]: 2025-12-06 07:49:34.159 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 02:49:34 np0005548731 nova_compute[232433]: 2025-12-06 07:49:34.192 232437 DEBUG oslo_concurrency.lockutils [None req-a9951f3f-c28b-4b37-8998-6368fedea791 f1ecb7cb2b5e454d80f5a0ba9240a894 86b361be239c42758c8ab85ae6854857 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  6 02:49:34 np0005548731 nova_compute[232433]: 2025-12-06 07:49:34.192 232437 DEBUG oslo_concurrency.lockutils [None req-a9951f3f-c28b-4b37-8998-6368fedea791 f1ecb7cb2b5e454d80f5a0ba9240a894 86b361be239c42758c8ab85ae6854857 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  6 02:49:34 np0005548731 nova_compute[232433]: 2025-12-06 07:49:34.257 232437 DEBUG oslo_concurrency.processutils [None req-a9951f3f-c28b-4b37-8998-6368fedea791 f1ecb7cb2b5e454d80f5a0ba9240a894 86b361be239c42758c8ab85ae6854857 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  6 02:49:34 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 02:49:34 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/4007388396' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 02:49:34 np0005548731 nova_compute[232433]: 2025-12-06 07:49:34.725 232437 DEBUG oslo_concurrency.processutils [None req-a9951f3f-c28b-4b37-8998-6368fedea791 f1ecb7cb2b5e454d80f5a0ba9240a894 86b361be239c42758c8ab85ae6854857 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.468s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:49:34 np0005548731 nova_compute[232433]: 2025-12-06 07:49:34.733 232437 DEBUG nova.compute.provider_tree [None req-a9951f3f-c28b-4b37-8998-6368fedea791 f1ecb7cb2b5e454d80f5a0ba9240a894 86b361be239c42758c8ab85ae6854857 - - default default] Inventory has not changed in ProviderTree for provider: 6d00757a-082f-486d-ae84-869a2ba2e6e7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  6 02:49:34 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:49:34 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:49:34 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:49:34.752 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:49:35 np0005548731 nova_compute[232433]: 2025-12-06 07:49:35.717 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:49:35 np0005548731 nova_compute[232433]: 2025-12-06 07:49:35.718 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:49:35 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:49:35 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 02:49:35 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:49:35.783 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 02:49:36 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e372 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:49:36 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:49:36 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 02:49:36 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:49:36.754 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 02:49:36 np0005548731 nova_compute[232433]: 2025-12-06 07:49:36.883 232437 DEBUG nova.scheduler.client.report [None req-a9951f3f-c28b-4b37-8998-6368fedea791 f1ecb7cb2b5e454d80f5a0ba9240a894 86b361be239c42758c8ab85ae6854857 - - default default] Inventory has not changed for provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  6 02:49:36 np0005548731 nova_compute[232433]: 2025-12-06 07:49:36.962 232437 DEBUG oslo_concurrency.lockutils [None req-a9951f3f-c28b-4b37-8998-6368fedea791 f1ecb7cb2b5e454d80f5a0ba9240a894 86b361be239c42758c8ab85ae6854857 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 2.770s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:49:37 np0005548731 nova_compute[232433]: 2025-12-06 07:49:37.049 232437 INFO nova.scheduler.client.report [None req-a9951f3f-c28b-4b37-8998-6368fedea791 f1ecb7cb2b5e454d80f5a0ba9240a894 86b361be239c42758c8ab85ae6854857 - - default default] Deleted allocations for instance c44fa39f-3d4d-4c6f-bbb2-5761daec080d#033[00m
Dec  6 02:49:37 np0005548731 nova_compute[232433]: 2025-12-06 07:49:37.100 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:49:37 np0005548731 nova_compute[232433]: 2025-12-06 07:49:37.160 232437 DEBUG oslo_concurrency.lockutils [None req-a9951f3f-c28b-4b37-8998-6368fedea791 f1ecb7cb2b5e454d80f5a0ba9240a894 86b361be239c42758c8ab85ae6854857 - - default default] Lock "c44fa39f-3d4d-4c6f-bbb2-5761daec080d" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 16.543s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:49:37 np0005548731 nova_compute[232433]: 2025-12-06 07:49:37.262 232437 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765007362.26069, c44fa39f-3d4d-4c6f-bbb2-5761daec080d => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 02:49:37 np0005548731 nova_compute[232433]: 2025-12-06 07:49:37.262 232437 INFO nova.compute.manager [-] [instance: c44fa39f-3d4d-4c6f-bbb2-5761daec080d] VM Stopped (Lifecycle Event)#033[00m
Dec  6 02:49:37 np0005548731 nova_compute[232433]: 2025-12-06 07:49:37.711 232437 DEBUG nova.compute.manager [None req-59bcf2a1-41bf-4bab-8ea1-22b4a1ddc54b - - - - - -] [instance: c44fa39f-3d4d-4c6f-bbb2-5761daec080d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:49:37 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:49:37 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:49:37 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:49:37.786 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:49:38 np0005548731 nova_compute[232433]: 2025-12-06 07:49:38.633 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:49:38 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:49:38 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:49:38 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:49:38.756 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:49:38 np0005548731 nova_compute[232433]: 2025-12-06 07:49:38.953 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:49:39 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:49:39 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:49:39 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:49:39.789 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:49:40 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:49:40 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:49:40 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:49:40.757 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:49:41 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e372 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:49:41 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:49:41 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:49:41 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:49:41.793 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:49:42 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:49:42.161 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=9f96b960-b4f2-40bd-ae99-08121f5e8b78, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '64'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:49:42 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:49:42 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:49:42 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:49:42.759 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:49:43 np0005548731 nova_compute[232433]: 2025-12-06 07:49:43.636 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:49:43 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:49:43 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:49:43 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:49:43.796 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:49:43 np0005548731 nova_compute[232433]: 2025-12-06 07:49:43.955 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:49:44 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:49:44 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:49:44 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:49:44.761 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:49:45 np0005548731 nova_compute[232433]: 2025-12-06 07:49:45.469 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:49:45 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:49:45 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:49:45 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:49:45.799 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:49:46 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e372 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:49:46 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:49:46 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:49:46 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:49:46.763 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:49:47 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:49:47 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:49:47 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:49:47.803 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:49:48 np0005548731 nova_compute[232433]: 2025-12-06 07:49:48.640 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:49:48 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:49:48 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:49:48 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:49:48.766 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:49:48 np0005548731 nova_compute[232433]: 2025-12-06 07:49:48.959 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:49:49 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:49:49 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:49:49 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:49:49.806 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:49:49 np0005548731 podman[307406]: 2025-12-06 07:49:49.937715562 +0000 UTC m=+0.080403482 container health_status 26c2ce9bdb83c6db5ccad7f5b2a7cb4e9354ab0235ad2f942ea42c8feda2142b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Dec  6 02:49:49 np0005548731 podman[307408]: 2025-12-06 07:49:49.942399736 +0000 UTC m=+0.085618459 container health_status ae4fd08cdec31d6ae2a6b91ffff7deb6ca5a8375cb52ee4b685cc6f25796824e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=multipathd)
Dec  6 02:49:49 np0005548731 podman[307407]: 2025-12-06 07:49:49.992918957 +0000 UTC m=+0.135611147 container health_status a118c3becf56cfbcae208065771ebf10a65162bb7b96389971293e534823fb78 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  6 02:49:50 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:49:50 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:49:50 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:49:50.768 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:49:51 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e372 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:49:51 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:49:51 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:49:51 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:49:51.828 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:49:52 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:49:52 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:49:52 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:49:52.770 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:49:53 np0005548731 nova_compute[232433]: 2025-12-06 07:49:53.644 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:49:53 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:49:53 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:49:53 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:49:53.831 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:49:53 np0005548731 nova_compute[232433]: 2025-12-06 07:49:53.961 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:49:54 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:49:54 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:49:54 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:49:54.772 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:49:55 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:49:55 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:49:55 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:49:55.834 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:49:56 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e372 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:49:56 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:49:56 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 02:49:56 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:49:56.774 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 02:49:57 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:49:57 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 02:49:57 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:49:57.837 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 02:49:58 np0005548731 nova_compute[232433]: 2025-12-06 07:49:58.647 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:49:58 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:49:58 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 02:49:58 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:49:58.777 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 02:49:58 np0005548731 nova_compute[232433]: 2025-12-06 07:49:58.963 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:49:59 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:49:59 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 02:49:59 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:49:59.843 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 02:50:00 np0005548731 ceph-mon[77458]: overall HEALTH_OK
Dec  6 02:50:00 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:50:00 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:50:00 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:50:00.780 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:50:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:50:00.893 143965 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:50:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:50:00.893 143965 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:50:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:50:00.893 143965 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:50:01 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e372 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:50:01 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:50:01 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:50:01 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:50:01.847 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:50:02 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:50:02 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:50:02 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:50:02.782 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:50:03 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 02:50:03 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 02:50:03 np0005548731 nova_compute[232433]: 2025-12-06 07:50:03.650 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:50:03 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:50:03 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:50:03 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:50:03.850 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:50:03 np0005548731 nova_compute[232433]: 2025-12-06 07:50:03.964 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:50:04 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 02:50:04 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 02:50:04 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:50:04 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:50:04 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:50:04.784 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:50:05 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  6 02:50:05 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 02:50:05 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  6 02:50:05 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e373 e373: 3 total, 3 up, 3 in
Dec  6 02:50:05 np0005548731 ceph-mon[77458]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #118. Immutable memtables: 0.
Dec  6 02:50:05 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:50:05.594077) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec  6 02:50:05 np0005548731 ceph-mon[77458]: rocksdb: [db/flush_job.cc:856] [default] [JOB 73] Flushing memtable with next log file: 118
Dec  6 02:50:05 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765007405594124, "job": 73, "event": "flush_started", "num_memtables": 1, "num_entries": 1880, "num_deletes": 260, "total_data_size": 4239370, "memory_usage": 4300096, "flush_reason": "Manual Compaction"}
Dec  6 02:50:05 np0005548731 ceph-mon[77458]: rocksdb: [db/flush_job.cc:885] [default] [JOB 73] Level-0 flush table #119: started
Dec  6 02:50:05 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765007405611853, "cf_name": "default", "job": 73, "event": "table_file_creation", "file_number": 119, "file_size": 2760768, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 60040, "largest_seqno": 61915, "table_properties": {"data_size": 2752918, "index_size": 4728, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2053, "raw_key_size": 16981, "raw_average_key_size": 20, "raw_value_size": 2737003, "raw_average_value_size": 3354, "num_data_blocks": 206, "num_entries": 816, "num_filter_entries": 816, "num_deletions": 260, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765007265, "oldest_key_time": 1765007265, "file_creation_time": 1765007405, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bc01f9a1-fda8-4008-aee1-1fa5ff4371a1", "db_session_id": "CWBL8WMDM4D79BU86E9T", "orig_file_number": 119, "seqno_to_time_mapping": "N/A"}}
Dec  6 02:50:05 np0005548731 ceph-mon[77458]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 73] Flush lasted 17848 microseconds, and 7994 cpu microseconds.
Dec  6 02:50:05 np0005548731 ceph-mon[77458]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  6 02:50:05 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:50:05.611924) [db/flush_job.cc:967] [default] [JOB 73] Level-0 flush table #119: 2760768 bytes OK
Dec  6 02:50:05 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:50:05.611942) [db/memtable_list.cc:519] [default] Level-0 commit table #119 started
Dec  6 02:50:05 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:50:05.613504) [db/memtable_list.cc:722] [default] Level-0 commit table #119: memtable #1 done
Dec  6 02:50:05 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:50:05.613520) EVENT_LOG_v1 {"time_micros": 1765007405613515, "job": 73, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec  6 02:50:05 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:50:05.613563) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec  6 02:50:05 np0005548731 ceph-mon[77458]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 73] Try to delete WAL files size 4230786, prev total WAL file size 4230786, number of live WAL files 2.
Dec  6 02:50:05 np0005548731 ceph-mon[77458]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000115.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  6 02:50:05 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:50:05.614854) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730035303230' seq:72057594037927935, type:22 .. '7061786F730035323732' seq:0, type:0; will stop at (end)
Dec  6 02:50:05 np0005548731 ceph-mon[77458]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 74] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec  6 02:50:05 np0005548731 ceph-mon[77458]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 73 Base level 0, inputs: [119(2696KB)], [117(11MB)]
Dec  6 02:50:05 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765007405615175, "job": 74, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [119], "files_L6": [117], "score": -1, "input_data_size": 15133936, "oldest_snapshot_seqno": -1}
Dec  6 02:50:05 np0005548731 ceph-mon[77458]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 74] Generated table #120: 9356 keys, 13186300 bytes, temperature: kUnknown
Dec  6 02:50:05 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765007405693162, "cf_name": "default", "job": 74, "event": "table_file_creation", "file_number": 120, "file_size": 13186300, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13123896, "index_size": 37908, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 23429, "raw_key_size": 245177, "raw_average_key_size": 26, "raw_value_size": 12957420, "raw_average_value_size": 1384, "num_data_blocks": 1459, "num_entries": 9356, "num_filter_entries": 9356, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765002564, "oldest_key_time": 0, "file_creation_time": 1765007405, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bc01f9a1-fda8-4008-aee1-1fa5ff4371a1", "db_session_id": "CWBL8WMDM4D79BU86E9T", "orig_file_number": 120, "seqno_to_time_mapping": "N/A"}}
Dec  6 02:50:05 np0005548731 ceph-mon[77458]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  6 02:50:05 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:50:05.693420) [db/compaction/compaction_job.cc:1663] [default] [JOB 74] Compacted 1@0 + 1@6 files to L6 => 13186300 bytes
Dec  6 02:50:05 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:50:05.695119) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 193.9 rd, 168.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.6, 11.8 +0.0 blob) out(12.6 +0.0 blob), read-write-amplify(10.3) write-amplify(4.8) OK, records in: 9890, records dropped: 534 output_compression: NoCompression
Dec  6 02:50:05 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:50:05.695157) EVENT_LOG_v1 {"time_micros": 1765007405695144, "job": 74, "event": "compaction_finished", "compaction_time_micros": 78070, "compaction_time_cpu_micros": 29775, "output_level": 6, "num_output_files": 1, "total_output_size": 13186300, "num_input_records": 9890, "num_output_records": 9356, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec  6 02:50:05 np0005548731 ceph-mon[77458]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000119.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  6 02:50:05 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765007405695794, "job": 74, "event": "table_file_deletion", "file_number": 119}
Dec  6 02:50:05 np0005548731 ceph-mon[77458]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000117.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  6 02:50:05 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765007405697749, "job": 74, "event": "table_file_deletion", "file_number": 117}
Dec  6 02:50:05 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:50:05.614706) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 02:50:05 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:50:05.697800) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 02:50:05 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:50:05.697804) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 02:50:05 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:50:05.697805) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 02:50:05 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:50:05.697807) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 02:50:05 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:50:05.697808) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 02:50:05 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:50:05 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:50:05 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:50:05.853 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:50:06 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e373 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:50:06 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:50:06 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:50:06 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:50:06.786 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:50:06 np0005548731 ceph-mon[77458]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #121. Immutable memtables: 0.
Dec  6 02:50:06 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:50:06.993932) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec  6 02:50:06 np0005548731 ceph-mon[77458]: rocksdb: [db/flush_job.cc:856] [default] [JOB 75] Flushing memtable with next log file: 121
Dec  6 02:50:06 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765007406993959, "job": 75, "event": "flush_started", "num_memtables": 1, "num_entries": 270, "num_deletes": 256, "total_data_size": 22901, "memory_usage": 28336, "flush_reason": "Manual Compaction"}
Dec  6 02:50:06 np0005548731 ceph-mon[77458]: rocksdb: [db/flush_job.cc:885] [default] [JOB 75] Level-0 flush table #122: started
Dec  6 02:50:06 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765007406996218, "cf_name": "default", "job": 75, "event": "table_file_creation", "file_number": 122, "file_size": 14382, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 61920, "largest_seqno": 62185, "table_properties": {"data_size": 12555, "index_size": 60, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 709, "raw_key_size": 4589, "raw_average_key_size": 17, "raw_value_size": 9012, "raw_average_value_size": 34, "num_data_blocks": 3, "num_entries": 265, "num_filter_entries": 265, "num_deletions": 256, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765007406, "oldest_key_time": 1765007406, "file_creation_time": 1765007406, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bc01f9a1-fda8-4008-aee1-1fa5ff4371a1", "db_session_id": "CWBL8WMDM4D79BU86E9T", "orig_file_number": 122, "seqno_to_time_mapping": "N/A"}}
Dec  6 02:50:06 np0005548731 ceph-mon[77458]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 75] Flush lasted 2335 microseconds, and 650 cpu microseconds.
Dec  6 02:50:06 np0005548731 ceph-mon[77458]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  6 02:50:06 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:50:06.996265) [db/flush_job.cc:967] [default] [JOB 75] Level-0 flush table #122: 14382 bytes OK
Dec  6 02:50:06 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:50:06.996280) [db/memtable_list.cc:519] [default] Level-0 commit table #122 started
Dec  6 02:50:06 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:50:06.997306) [db/memtable_list.cc:722] [default] Level-0 commit table #122: memtable #1 done
Dec  6 02:50:06 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:50:06.997322) EVENT_LOG_v1 {"time_micros": 1765007406997317, "job": 75, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec  6 02:50:06 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:50:06.997336) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec  6 02:50:06 np0005548731 ceph-mon[77458]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 75] Try to delete WAL files size 20803, prev total WAL file size 20803, number of live WAL files 2.
Dec  6 02:50:06 np0005548731 ceph-mon[77458]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000118.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  6 02:50:06 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:50:06.997682) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0032303233' seq:72057594037927935, type:22 .. '6C6F676D0032323735' seq:0, type:0; will stop at (end)
Dec  6 02:50:06 np0005548731 ceph-mon[77458]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 76] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec  6 02:50:06 np0005548731 ceph-mon[77458]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 75 Base level 0, inputs: [122(14KB)], [120(12MB)]
Dec  6 02:50:06 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765007406997727, "job": 76, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [122], "files_L6": [120], "score": -1, "input_data_size": 13200682, "oldest_snapshot_seqno": -1}
Dec  6 02:50:07 np0005548731 ceph-mon[77458]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 76] Generated table #123: 9104 keys, 13036310 bytes, temperature: kUnknown
Dec  6 02:50:07 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765007407065030, "cf_name": "default", "job": 76, "event": "table_file_creation", "file_number": 123, "file_size": 13036310, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12975294, "index_size": 37160, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 22789, "raw_key_size": 240882, "raw_average_key_size": 26, "raw_value_size": 12812865, "raw_average_value_size": 1407, "num_data_blocks": 1423, "num_entries": 9104, "num_filter_entries": 9104, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765002564, "oldest_key_time": 0, "file_creation_time": 1765007406, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bc01f9a1-fda8-4008-aee1-1fa5ff4371a1", "db_session_id": "CWBL8WMDM4D79BU86E9T", "orig_file_number": 123, "seqno_to_time_mapping": "N/A"}}
Dec  6 02:50:07 np0005548731 ceph-mon[77458]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  6 02:50:07 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:50:07.065323) [db/compaction/compaction_job.cc:1663] [default] [JOB 76] Compacted 1@0 + 1@6 files to L6 => 13036310 bytes
Dec  6 02:50:07 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:50:07.066652) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 195.8 rd, 193.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.0, 12.6 +0.0 blob) out(12.4 +0.0 blob), read-write-amplify(1824.3) write-amplify(906.4) OK, records in: 9621, records dropped: 517 output_compression: NoCompression
Dec  6 02:50:07 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:50:07.066669) EVENT_LOG_v1 {"time_micros": 1765007407066661, "job": 76, "event": "compaction_finished", "compaction_time_micros": 67418, "compaction_time_cpu_micros": 29552, "output_level": 6, "num_output_files": 1, "total_output_size": 13036310, "num_input_records": 9621, "num_output_records": 9104, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec  6 02:50:07 np0005548731 ceph-mon[77458]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000122.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  6 02:50:07 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765007407066769, "job": 76, "event": "table_file_deletion", "file_number": 122}
Dec  6 02:50:07 np0005548731 ceph-mon[77458]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000120.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  6 02:50:07 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765007407069200, "job": 76, "event": "table_file_deletion", "file_number": 120}
Dec  6 02:50:07 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:50:06.997603) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 02:50:07 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:50:07.069287) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 02:50:07 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:50:07.069293) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 02:50:07 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:50:07.069297) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 02:50:07 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:50:07.069299) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 02:50:07 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:50:07.069302) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 02:50:07 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:50:07 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:50:07 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:50:07.857 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:50:08 np0005548731 nova_compute[232433]: 2025-12-06 07:50:08.653 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:50:08 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:50:08 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:50:08 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:50:08.788 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:50:08 np0005548731 nova_compute[232433]: 2025-12-06 07:50:08.966 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:50:09 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:50:09 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:50:09 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:50:09.859 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:50:10 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:50:10 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:50:10 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:50:10.790 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:50:11 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e373 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:50:11 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:50:11 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:50:11 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:50:11.862 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:50:12 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 02:50:12 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 02:50:12 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:50:12 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:50:12 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:50:12.792 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:50:13 np0005548731 nova_compute[232433]: 2025-12-06 07:50:13.656 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:50:13 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:50:13 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:50:13 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:50:13.864 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:50:13 np0005548731 nova_compute[232433]: 2025-12-06 07:50:13.968 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:50:14 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:50:14 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:50:14 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:50:14.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:50:15 np0005548731 nova_compute[232433]: 2025-12-06 07:50:15.120 232437 DEBUG oslo_concurrency.lockutils [None req-d215b82b-4294-4748-b739-5735b4cd7eb8 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] Acquiring lock "fa71018e-7574-4438-bb85-43d1c96cf9b9" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:50:15 np0005548731 nova_compute[232433]: 2025-12-06 07:50:15.120 232437 DEBUG oslo_concurrency.lockutils [None req-d215b82b-4294-4748-b739-5735b4cd7eb8 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] Lock "fa71018e-7574-4438-bb85-43d1c96cf9b9" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:50:15 np0005548731 nova_compute[232433]: 2025-12-06 07:50:15.153 232437 DEBUG nova.compute.manager [None req-d215b82b-4294-4748-b739-5735b4cd7eb8 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] [instance: fa71018e-7574-4438-bb85-43d1c96cf9b9] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Dec  6 02:50:15 np0005548731 nova_compute[232433]: 2025-12-06 07:50:15.477 232437 DEBUG oslo_concurrency.lockutils [None req-d215b82b-4294-4748-b739-5735b4cd7eb8 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:50:15 np0005548731 nova_compute[232433]: 2025-12-06 07:50:15.478 232437 DEBUG oslo_concurrency.lockutils [None req-d215b82b-4294-4748-b739-5735b4cd7eb8 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:50:15 np0005548731 nova_compute[232433]: 2025-12-06 07:50:15.487 232437 DEBUG nova.virt.hardware [None req-d215b82b-4294-4748-b739-5735b4cd7eb8 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Dec  6 02:50:15 np0005548731 nova_compute[232433]: 2025-12-06 07:50:15.487 232437 INFO nova.compute.claims [None req-d215b82b-4294-4748-b739-5735b4cd7eb8 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] [instance: fa71018e-7574-4438-bb85-43d1c96cf9b9] Claim successful on node compute-2.ctlplane.example.com#033[00m
Dec  6 02:50:15 np0005548731 nova_compute[232433]: 2025-12-06 07:50:15.715 232437 DEBUG nova.scheduler.client.report [None req-d215b82b-4294-4748-b739-5735b4cd7eb8 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] Refreshing inventories for resource provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Dec  6 02:50:15 np0005548731 nova_compute[232433]: 2025-12-06 07:50:15.799 232437 DEBUG nova.scheduler.client.report [None req-d215b82b-4294-4748-b739-5735b4cd7eb8 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] Updating ProviderTree inventory for provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Dec  6 02:50:15 np0005548731 nova_compute[232433]: 2025-12-06 07:50:15.799 232437 DEBUG nova.compute.provider_tree [None req-d215b82b-4294-4748-b739-5735b4cd7eb8 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] Updating inventory in ProviderTree for provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Dec  6 02:50:15 np0005548731 nova_compute[232433]: 2025-12-06 07:50:15.855 232437 DEBUG nova.scheduler.client.report [None req-d215b82b-4294-4748-b739-5735b4cd7eb8 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] Refreshing aggregate associations for resource provider 6d00757a-082f-486d-ae84-869a2ba2e6e7, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Dec  6 02:50:15 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:50:15 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:50:15 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:50:15.867 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:50:15 np0005548731 nova_compute[232433]: 2025-12-06 07:50:15.991 232437 DEBUG nova.scheduler.client.report [None req-d215b82b-4294-4748-b739-5735b4cd7eb8 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] Refreshing trait associations for resource provider 6d00757a-082f-486d-ae84-869a2ba2e6e7, traits: COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_STORAGE_BUS_USB,COMPUTE_ACCELERATORS,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_TRUSTED_CERTS,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_SSE,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_SSE2,HW_CPU_X86_SSE41,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_STORAGE_BUS_SATA,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_SECURITY_TPM_1_2,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_VOLUME_EXTEND,COMPUTE_NODE,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_SECURITY_TPM_2_0,COMPUTE_VIOMMU_MODEL_AUTO,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_SSE42,COMPUTE_STORAGE_BUS_FDC,COMPUTE_RESCUE_BFV,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_IMAGE_TYPE_ARI _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Dec  6 02:50:16 np0005548731 nova_compute[232433]: 2025-12-06 07:50:16.054 232437 DEBUG oslo_concurrency.processutils [None req-d215b82b-4294-4748-b739-5735b4cd7eb8 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  6 02:50:16 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 02:50:16 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/272318413' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 02:50:16 np0005548731 nova_compute[232433]: 2025-12-06 07:50:16.548 232437 DEBUG oslo_concurrency.processutils [None req-d215b82b-4294-4748-b739-5735b4cd7eb8 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.494s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  6 02:50:16 np0005548731 nova_compute[232433]: 2025-12-06 07:50:16.554 232437 DEBUG nova.compute.provider_tree [None req-d215b82b-4294-4748-b739-5735b4cd7eb8 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] Inventory has not changed in ProviderTree for provider: 6d00757a-082f-486d-ae84-869a2ba2e6e7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec  6 02:50:16 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e373 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:50:16 np0005548731 nova_compute[232433]: 2025-12-06 07:50:16.797 232437 DEBUG nova.scheduler.client.report [None req-d215b82b-4294-4748-b739-5735b4cd7eb8 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] Inventory has not changed for provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec  6 02:50:16 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:50:16 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:50:16 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:50:16.797 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:50:17 np0005548731 nova_compute[232433]: 2025-12-06 07:50:17.286 232437 DEBUG oslo_concurrency.lockutils [None req-d215b82b-4294-4748-b739-5735b4cd7eb8 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.809s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  6 02:50:17 np0005548731 nova_compute[232433]: 2025-12-06 07:50:17.287 232437 DEBUG nova.compute.manager [None req-d215b82b-4294-4748-b739-5735b4cd7eb8 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] [instance: fa71018e-7574-4438-bb85-43d1c96cf9b9] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec  6 02:50:17 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:50:17 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 02:50:17 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:50:17.871 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 02:50:17 np0005548731 nova_compute[232433]: 2025-12-06 07:50:17.895 232437 DEBUG nova.compute.manager [None req-d215b82b-4294-4748-b739-5735b4cd7eb8 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] [instance: fa71018e-7574-4438-bb85-43d1c96cf9b9] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec  6 02:50:17 np0005548731 nova_compute[232433]: 2025-12-06 07:50:17.895 232437 DEBUG nova.network.neutron [None req-d215b82b-4294-4748-b739-5735b4cd7eb8 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] [instance: fa71018e-7574-4438-bb85-43d1c96cf9b9] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec  6 02:50:18 np0005548731 nova_compute[232433]: 2025-12-06 07:50:18.260 232437 INFO nova.virt.libvirt.driver [None req-d215b82b-4294-4748-b739-5735b4cd7eb8 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] [instance: fa71018e-7574-4438-bb85-43d1c96cf9b9] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec  6 02:50:18 np0005548731 nova_compute[232433]: 2025-12-06 07:50:18.283 232437 DEBUG nova.compute.manager [None req-d215b82b-4294-4748-b739-5735b4cd7eb8 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] [instance: fa71018e-7574-4438-bb85-43d1c96cf9b9] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec  6 02:50:18 np0005548731 nova_compute[232433]: 2025-12-06 07:50:18.660 232437 DEBUG nova.policy [None req-d215b82b-4294-4748-b739-5735b4cd7eb8 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '0f669e963dc54ad7bebf8dd20341428a', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '0c8fc5bc237e42bfad505a0bca6681eb', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec  6 02:50:18 np0005548731 nova_compute[232433]: 2025-12-06 07:50:18.662 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 02:50:18 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:50:18 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:50:18 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:50:18.799 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:50:18 np0005548731 nova_compute[232433]: 2025-12-06 07:50:18.950 232437 DEBUG nova.compute.manager [None req-d215b82b-4294-4748-b739-5735b4cd7eb8 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] [instance: fa71018e-7574-4438-bb85-43d1c96cf9b9] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec  6 02:50:18 np0005548731 nova_compute[232433]: 2025-12-06 07:50:18.951 232437 DEBUG nova.virt.libvirt.driver [None req-d215b82b-4294-4748-b739-5735b4cd7eb8 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] [instance: fa71018e-7574-4438-bb85-43d1c96cf9b9] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec  6 02:50:18 np0005548731 nova_compute[232433]: 2025-12-06 07:50:18.951 232437 INFO nova.virt.libvirt.driver [None req-d215b82b-4294-4748-b739-5735b4cd7eb8 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] [instance: fa71018e-7574-4438-bb85-43d1c96cf9b9] Creating image(s)
Dec  6 02:50:19 np0005548731 nova_compute[232433]: 2025-12-06 07:50:19.015 232437 DEBUG nova.storage.rbd_utils [None req-d215b82b-4294-4748-b739-5735b4cd7eb8 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] rbd image fa71018e-7574-4438-bb85-43d1c96cf9b9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec  6 02:50:19 np0005548731 nova_compute[232433]: 2025-12-06 07:50:19.040 232437 DEBUG nova.storage.rbd_utils [None req-d215b82b-4294-4748-b739-5735b4cd7eb8 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] rbd image fa71018e-7574-4438-bb85-43d1c96cf9b9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec  6 02:50:19 np0005548731 nova_compute[232433]: 2025-12-06 07:50:19.066 232437 DEBUG nova.storage.rbd_utils [None req-d215b82b-4294-4748-b739-5735b4cd7eb8 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] rbd image fa71018e-7574-4438-bb85-43d1c96cf9b9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec  6 02:50:19 np0005548731 nova_compute[232433]: 2025-12-06 07:50:19.070 232437 DEBUG oslo_concurrency.lockutils [None req-d215b82b-4294-4748-b739-5735b4cd7eb8 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] Acquiring lock "8d5797d05121f09476d1863b26e82e86b77e0d74" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  6 02:50:19 np0005548731 nova_compute[232433]: 2025-12-06 07:50:19.071 232437 DEBUG oslo_concurrency.lockutils [None req-d215b82b-4294-4748-b739-5735b4cd7eb8 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] Lock "8d5797d05121f09476d1863b26e82e86b77e0d74" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  6 02:50:19 np0005548731 nova_compute[232433]: 2025-12-06 07:50:19.074 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 02:50:19 np0005548731 nova_compute[232433]: 2025-12-06 07:50:19.450 232437 DEBUG nova.virt.libvirt.imagebackend [None req-d215b82b-4294-4748-b739-5735b4cd7eb8 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] Image locations are: [{'url': 'rbd://40a1bae4-cf76-5610-8dab-c75116dfe0bb/images/6916f635-11b7-4158-ab13-60ff56406973/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://40a1bae4-cf76-5610-8dab-c75116dfe0bb/images/6916f635-11b7-4158-ab13-60ff56406973/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085
Dec  6 02:50:19 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:50:19 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:50:19 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:50:19.875 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:50:20 np0005548731 nova_compute[232433]: 2025-12-06 07:50:20.399 232437 DEBUG nova.network.neutron [None req-d215b82b-4294-4748-b739-5735b4cd7eb8 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] [instance: fa71018e-7574-4438-bb85-43d1c96cf9b9] Successfully created port: 341d07a0-1551-46f6-85b6-aace80d14532 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec  6 02:50:20 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:50:20 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:50:20 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:50:20.800 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:50:20 np0005548731 podman[307793]: 2025-12-06 07:50:20.883057699 +0000 UTC m=+0.048441892 container health_status 26c2ce9bdb83c6db5ccad7f5b2a7cb4e9354ab0235ad2f942ea42c8feda2142b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent)
Dec  6 02:50:20 np0005548731 podman[307795]: 2025-12-06 07:50:20.89089133 +0000 UTC m=+0.050281377 container health_status ae4fd08cdec31d6ae2a6b91ffff7deb6ca5a8375cb52ee4b685cc6f25796824e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=multipathd, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  6 02:50:20 np0005548731 podman[307794]: 2025-12-06 07:50:20.916570656 +0000 UTC m=+0.079689563 container health_status a118c3becf56cfbcae208065771ebf10a65162bb7b96389971293e534823fb78 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=ovn_controller, org.label-schema.schema-version=1.0)
Dec  6 02:50:21 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e373 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:50:21 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:50:21.712 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=65, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:ec:b3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '32:72:e7:89:e0:7d'}, ipsec=False) old=SB_Global(nb_cfg=64) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec  6 02:50:21 np0005548731 nova_compute[232433]: 2025-12-06 07:50:21.713 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 02:50:21 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:50:21.713 143965 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec  6 02:50:21 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:50:21 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:50:21 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:50:21.877 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:50:21 np0005548731 nova_compute[232433]: 2025-12-06 07:50:21.907 232437 DEBUG nova.network.neutron [None req-d215b82b-4294-4748-b739-5735b4cd7eb8 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] [instance: fa71018e-7574-4438-bb85-43d1c96cf9b9] Successfully updated port: 341d07a0-1551-46f6-85b6-aace80d14532 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec  6 02:50:22 np0005548731 nova_compute[232433]: 2025-12-06 07:50:22.250 232437 DEBUG nova.compute.manager [req-741ba9e0-ee90-4464-bc9c-83ce123312af req-ca291006-f65d-4ae5-bed9-61bb06259078 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: fa71018e-7574-4438-bb85-43d1c96cf9b9] Received event network-changed-341d07a0-1551-46f6-85b6-aace80d14532 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec  6 02:50:22 np0005548731 nova_compute[232433]: 2025-12-06 07:50:22.250 232437 DEBUG nova.compute.manager [req-741ba9e0-ee90-4464-bc9c-83ce123312af req-ca291006-f65d-4ae5-bed9-61bb06259078 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: fa71018e-7574-4438-bb85-43d1c96cf9b9] Refreshing instance network info cache due to event network-changed-341d07a0-1551-46f6-85b6-aace80d14532. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec  6 02:50:22 np0005548731 nova_compute[232433]: 2025-12-06 07:50:22.251 232437 DEBUG oslo_concurrency.lockutils [req-741ba9e0-ee90-4464-bc9c-83ce123312af req-ca291006-f65d-4ae5-bed9-61bb06259078 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-fa71018e-7574-4438-bb85-43d1c96cf9b9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec  6 02:50:22 np0005548731 nova_compute[232433]: 2025-12-06 07:50:22.251 232437 DEBUG oslo_concurrency.lockutils [req-741ba9e0-ee90-4464-bc9c-83ce123312af req-ca291006-f65d-4ae5-bed9-61bb06259078 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-fa71018e-7574-4438-bb85-43d1c96cf9b9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec  6 02:50:22 np0005548731 nova_compute[232433]: 2025-12-06 07:50:22.252 232437 DEBUG nova.network.neutron [req-741ba9e0-ee90-4464-bc9c-83ce123312af req-ca291006-f65d-4ae5-bed9-61bb06259078 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: fa71018e-7574-4438-bb85-43d1c96cf9b9] Refreshing network info cache for port 341d07a0-1551-46f6-85b6-aace80d14532 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec  6 02:50:22 np0005548731 nova_compute[232433]: 2025-12-06 07:50:22.409 232437 DEBUG oslo_concurrency.lockutils [None req-d215b82b-4294-4748-b739-5735b4cd7eb8 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] Acquiring lock "refresh_cache-fa71018e-7574-4438-bb85-43d1c96cf9b9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec  6 02:50:22 np0005548731 nova_compute[232433]: 2025-12-06 07:50:22.518 232437 DEBUG oslo_concurrency.processutils [None req-d215b82b-4294-4748-b739-5735b4cd7eb8 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/8d5797d05121f09476d1863b26e82e86b77e0d74.part --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  6 02:50:22 np0005548731 nova_compute[232433]: 2025-12-06 07:50:22.584 232437 DEBUG oslo_concurrency.processutils [None req-d215b82b-4294-4748-b739-5735b4cd7eb8 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/8d5797d05121f09476d1863b26e82e86b77e0d74.part --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  6 02:50:22 np0005548731 nova_compute[232433]: 2025-12-06 07:50:22.586 232437 DEBUG nova.virt.images [None req-d215b82b-4294-4748-b739-5735b4cd7eb8 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] 6916f635-11b7-4158-ab13-60ff56406973 was qcow2, converting to raw fetch_to_raw /usr/lib/python3.9/site-packages/nova/virt/images.py:242
Dec  6 02:50:22 np0005548731 nova_compute[232433]: 2025-12-06 07:50:22.588 232437 DEBUG nova.privsep.utils [None req-d215b82b-4294-4748-b739-5735b4cd7eb8 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63
Dec  6 02:50:22 np0005548731 nova_compute[232433]: 2025-12-06 07:50:22.589 232437 DEBUG oslo_concurrency.processutils [None req-d215b82b-4294-4748-b739-5735b4cd7eb8 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] Running cmd (subprocess): qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/8d5797d05121f09476d1863b26e82e86b77e0d74.part /var/lib/nova/instances/_base/8d5797d05121f09476d1863b26e82e86b77e0d74.converted execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  6 02:50:22 np0005548731 nova_compute[232433]: 2025-12-06 07:50:22.644 232437 DEBUG nova.network.neutron [req-741ba9e0-ee90-4464-bc9c-83ce123312af req-ca291006-f65d-4ae5-bed9-61bb06259078 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: fa71018e-7574-4438-bb85-43d1c96cf9b9] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec  6 02:50:22 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:50:22.716 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=9f96b960-b4f2-40bd-ae99-08121f5e8b78, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '65'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec  6 02:50:22 np0005548731 nova_compute[232433]: 2025-12-06 07:50:22.799 232437 DEBUG oslo_concurrency.processutils [None req-d215b82b-4294-4748-b739-5735b4cd7eb8 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] CMD "qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/8d5797d05121f09476d1863b26e82e86b77e0d74.part /var/lib/nova/instances/_base/8d5797d05121f09476d1863b26e82e86b77e0d74.converted" returned: 0 in 0.210s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  6 02:50:22 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:50:22 np0005548731 nova_compute[232433]: 2025-12-06 07:50:22.803 232437 DEBUG oslo_concurrency.processutils [None req-d215b82b-4294-4748-b739-5735b4cd7eb8 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/8d5797d05121f09476d1863b26e82e86b77e0d74.converted --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  6 02:50:22 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:50:22 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:50:22.802 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:50:22 np0005548731 nova_compute[232433]: 2025-12-06 07:50:22.877 232437 DEBUG oslo_concurrency.processutils [None req-d215b82b-4294-4748-b739-5735b4cd7eb8 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/8d5797d05121f09476d1863b26e82e86b77e0d74.converted --force-share --output=json" returned: 0 in 0.073s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  6 02:50:22 np0005548731 nova_compute[232433]: 2025-12-06 07:50:22.878 232437 DEBUG oslo_concurrency.lockutils [None req-d215b82b-4294-4748-b739-5735b4cd7eb8 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] Lock "8d5797d05121f09476d1863b26e82e86b77e0d74" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 3.807s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  6 02:50:22 np0005548731 nova_compute[232433]: 2025-12-06 07:50:22.903 232437 DEBUG nova.storage.rbd_utils [None req-d215b82b-4294-4748-b739-5735b4cd7eb8 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] rbd image fa71018e-7574-4438-bb85-43d1c96cf9b9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec  6 02:50:22 np0005548731 nova_compute[232433]: 2025-12-06 07:50:22.906 232437 DEBUG oslo_concurrency.processutils [None req-d215b82b-4294-4748-b739-5735b4cd7eb8 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/8d5797d05121f09476d1863b26e82e86b77e0d74 fa71018e-7574-4438-bb85-43d1c96cf9b9_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  6 02:50:23 np0005548731 nova_compute[232433]: 2025-12-06 07:50:23.186 232437 DEBUG nova.network.neutron [req-741ba9e0-ee90-4464-bc9c-83ce123312af req-ca291006-f65d-4ae5-bed9-61bb06259078 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: fa71018e-7574-4438-bb85-43d1c96cf9b9] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec  6 02:50:23 np0005548731 nova_compute[232433]: 2025-12-06 07:50:23.209 232437 DEBUG oslo_concurrency.lockutils [req-741ba9e0-ee90-4464-bc9c-83ce123312af req-ca291006-f65d-4ae5-bed9-61bb06259078 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-fa71018e-7574-4438-bb85-43d1c96cf9b9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec  6 02:50:23 np0005548731 nova_compute[232433]: 2025-12-06 07:50:23.210 232437 DEBUG oslo_concurrency.lockutils [None req-d215b82b-4294-4748-b739-5735b4cd7eb8 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] Acquired lock "refresh_cache-fa71018e-7574-4438-bb85-43d1c96cf9b9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec  6 02:50:23 np0005548731 nova_compute[232433]: 2025-12-06 07:50:23.210 232437 DEBUG nova.network.neutron [None req-d215b82b-4294-4748-b739-5735b4cd7eb8 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] [instance: fa71018e-7574-4438-bb85-43d1c96cf9b9] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec  6 02:50:23 np0005548731 nova_compute[232433]: 2025-12-06 07:50:23.627 232437 DEBUG nova.network.neutron [None req-d215b82b-4294-4748-b739-5735b4cd7eb8 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] [instance: fa71018e-7574-4438-bb85-43d1c96cf9b9] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec  6 02:50:23 np0005548731 nova_compute[232433]: 2025-12-06 07:50:23.665 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:50:23 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:50:23 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:50:23 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:50:23.880 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:50:23 np0005548731 nova_compute[232433]: 2025-12-06 07:50:23.979 232437 DEBUG oslo_concurrency.processutils [None req-d215b82b-4294-4748-b739-5735b4cd7eb8 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/8d5797d05121f09476d1863b26e82e86b77e0d74 fa71018e-7574-4438-bb85-43d1c96cf9b9_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.074s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:50:24 np0005548731 nova_compute[232433]: 2025-12-06 07:50:24.029 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:50:24 np0005548731 nova_compute[232433]: 2025-12-06 07:50:24.068 232437 DEBUG nova.storage.rbd_utils [None req-d215b82b-4294-4748-b739-5735b4cd7eb8 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] resizing rbd image fa71018e-7574-4438-bb85-43d1c96cf9b9_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Dec  6 02:50:24 np0005548731 nova_compute[232433]: 2025-12-06 07:50:24.372 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:50:24 np0005548731 nova_compute[232433]: 2025-12-06 07:50:24.377 232437 DEBUG nova.objects.instance [None req-d215b82b-4294-4748-b739-5735b4cd7eb8 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] Lazy-loading 'migration_context' on Instance uuid fa71018e-7574-4438-bb85-43d1c96cf9b9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:50:24 np0005548731 nova_compute[232433]: 2025-12-06 07:50:24.401 232437 DEBUG nova.virt.libvirt.driver [None req-d215b82b-4294-4748-b739-5735b4cd7eb8 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] [instance: fa71018e-7574-4438-bb85-43d1c96cf9b9] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Dec  6 02:50:24 np0005548731 nova_compute[232433]: 2025-12-06 07:50:24.401 232437 DEBUG nova.virt.libvirt.driver [None req-d215b82b-4294-4748-b739-5735b4cd7eb8 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] [instance: fa71018e-7574-4438-bb85-43d1c96cf9b9] Ensure instance console log exists: /var/lib/nova/instances/fa71018e-7574-4438-bb85-43d1c96cf9b9/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec  6 02:50:24 np0005548731 nova_compute[232433]: 2025-12-06 07:50:24.402 232437 DEBUG oslo_concurrency.lockutils [None req-d215b82b-4294-4748-b739-5735b4cd7eb8 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:50:24 np0005548731 nova_compute[232433]: 2025-12-06 07:50:24.403 232437 DEBUG oslo_concurrency.lockutils [None req-d215b82b-4294-4748-b739-5735b4cd7eb8 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:50:24 np0005548731 nova_compute[232433]: 2025-12-06 07:50:24.403 232437 DEBUG oslo_concurrency.lockutils [None req-d215b82b-4294-4748-b739-5735b4cd7eb8 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:50:24 np0005548731 nova_compute[232433]: 2025-12-06 07:50:24.796 232437 DEBUG nova.network.neutron [None req-d215b82b-4294-4748-b739-5735b4cd7eb8 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] [instance: fa71018e-7574-4438-bb85-43d1c96cf9b9] Updating instance_info_cache with network_info: [{"id": "341d07a0-1551-46f6-85b6-aace80d14532", "address": "fa:16:3e:71:74:18", "network": {"id": "f08afae8-f952-4a01-a643-61a4dc212937", "bridge": "br-int", "label": "tempest-TestMinimumBasicScenario-655838283-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0c8fc5bc237e42bfad505a0bca6681eb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap341d07a0-15", "ovs_interfaceid": "341d07a0-1551-46f6-85b6-aace80d14532", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 02:50:24 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:50:24 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:50:24 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:50:24.804 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:50:24 np0005548731 nova_compute[232433]: 2025-12-06 07:50:24.833 232437 DEBUG oslo_concurrency.lockutils [None req-d215b82b-4294-4748-b739-5735b4cd7eb8 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] Releasing lock "refresh_cache-fa71018e-7574-4438-bb85-43d1c96cf9b9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 02:50:24 np0005548731 nova_compute[232433]: 2025-12-06 07:50:24.834 232437 DEBUG nova.compute.manager [None req-d215b82b-4294-4748-b739-5735b4cd7eb8 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] [instance: fa71018e-7574-4438-bb85-43d1c96cf9b9] Instance network_info: |[{"id": "341d07a0-1551-46f6-85b6-aace80d14532", "address": "fa:16:3e:71:74:18", "network": {"id": "f08afae8-f952-4a01-a643-61a4dc212937", "bridge": "br-int", "label": "tempest-TestMinimumBasicScenario-655838283-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0c8fc5bc237e42bfad505a0bca6681eb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap341d07a0-15", "ovs_interfaceid": "341d07a0-1551-46f6-85b6-aace80d14532", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Dec  6 02:50:24 np0005548731 nova_compute[232433]: 2025-12-06 07:50:24.835 232437 DEBUG nova.virt.libvirt.driver [None req-d215b82b-4294-4748-b739-5735b4cd7eb8 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] [instance: fa71018e-7574-4438-bb85-43d1c96cf9b9] Start _get_guest_xml network_info=[{"id": "341d07a0-1551-46f6-85b6-aace80d14532", "address": "fa:16:3e:71:74:18", "network": {"id": "f08afae8-f952-4a01-a643-61a4dc212937", "bridge": "br-int", "label": "tempest-TestMinimumBasicScenario-655838283-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0c8fc5bc237e42bfad505a0bca6681eb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap341d07a0-15", "ovs_interfaceid": "341d07a0-1551-46f6-85b6-aace80d14532", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T07:50:02Z,direct_url=<?>,disk_format='qcow2',id=6916f635-11b7-4158-ab13-60ff56406973,min_disk=0,min_ram=0,name='tempest-scenario-img--953834163',owner='0c8fc5bc237e42bfad505a0bca6681eb',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T07:50:05Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'device_type': 'disk', 'encryption_secret_uuid': None, 'disk_bus': 'virtio', 'boot_index': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'encryption_options': None, 'size': 0, 'encrypted': False, 'image_id': '6916f635-11b7-4158-ab13-60ff56406973'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec  6 02:50:24 np0005548731 nova_compute[232433]: 2025-12-06 07:50:24.840 232437 WARNING nova.virt.libvirt.driver [None req-d215b82b-4294-4748-b739-5735b4cd7eb8 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  6 02:50:24 np0005548731 nova_compute[232433]: 2025-12-06 07:50:24.848 232437 DEBUG nova.virt.libvirt.host [None req-d215b82b-4294-4748-b739-5735b4cd7eb8 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec  6 02:50:24 np0005548731 nova_compute[232433]: 2025-12-06 07:50:24.849 232437 DEBUG nova.virt.libvirt.host [None req-d215b82b-4294-4748-b739-5735b4cd7eb8 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec  6 02:50:24 np0005548731 nova_compute[232433]: 2025-12-06 07:50:24.852 232437 DEBUG nova.virt.libvirt.host [None req-d215b82b-4294-4748-b739-5735b4cd7eb8 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec  6 02:50:24 np0005548731 nova_compute[232433]: 2025-12-06 07:50:24.852 232437 DEBUG nova.virt.libvirt.host [None req-d215b82b-4294-4748-b739-5735b4cd7eb8 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec  6 02:50:24 np0005548731 nova_compute[232433]: 2025-12-06 07:50:24.853 232437 DEBUG nova.virt.libvirt.driver [None req-d215b82b-4294-4748-b739-5735b4cd7eb8 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec  6 02:50:24 np0005548731 nova_compute[232433]: 2025-12-06 07:50:24.853 232437 DEBUG nova.virt.hardware [None req-d215b82b-4294-4748-b739-5735b4cd7eb8 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-06T06:56:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='25848a18-11d9-4f11-80b5-5d005675c76d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T07:50:02Z,direct_url=<?>,disk_format='qcow2',id=6916f635-11b7-4158-ab13-60ff56406973,min_disk=0,min_ram=0,name='tempest-scenario-img--953834163',owner='0c8fc5bc237e42bfad505a0bca6681eb',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T07:50:05Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec  6 02:50:24 np0005548731 nova_compute[232433]: 2025-12-06 07:50:24.854 232437 DEBUG nova.virt.hardware [None req-d215b82b-4294-4748-b739-5735b4cd7eb8 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec  6 02:50:24 np0005548731 nova_compute[232433]: 2025-12-06 07:50:24.854 232437 DEBUG nova.virt.hardware [None req-d215b82b-4294-4748-b739-5735b4cd7eb8 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec  6 02:50:24 np0005548731 nova_compute[232433]: 2025-12-06 07:50:24.854 232437 DEBUG nova.virt.hardware [None req-d215b82b-4294-4748-b739-5735b4cd7eb8 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec  6 02:50:24 np0005548731 nova_compute[232433]: 2025-12-06 07:50:24.855 232437 DEBUG nova.virt.hardware [None req-d215b82b-4294-4748-b739-5735b4cd7eb8 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec  6 02:50:24 np0005548731 nova_compute[232433]: 2025-12-06 07:50:24.855 232437 DEBUG nova.virt.hardware [None req-d215b82b-4294-4748-b739-5735b4cd7eb8 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec  6 02:50:24 np0005548731 nova_compute[232433]: 2025-12-06 07:50:24.855 232437 DEBUG nova.virt.hardware [None req-d215b82b-4294-4748-b739-5735b4cd7eb8 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec  6 02:50:24 np0005548731 nova_compute[232433]: 2025-12-06 07:50:24.855 232437 DEBUG nova.virt.hardware [None req-d215b82b-4294-4748-b739-5735b4cd7eb8 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec  6 02:50:24 np0005548731 nova_compute[232433]: 2025-12-06 07:50:24.856 232437 DEBUG nova.virt.hardware [None req-d215b82b-4294-4748-b739-5735b4cd7eb8 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec  6 02:50:24 np0005548731 nova_compute[232433]: 2025-12-06 07:50:24.856 232437 DEBUG nova.virt.hardware [None req-d215b82b-4294-4748-b739-5735b4cd7eb8 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec  6 02:50:24 np0005548731 nova_compute[232433]: 2025-12-06 07:50:24.856 232437 DEBUG nova.virt.hardware [None req-d215b82b-4294-4748-b739-5735b4cd7eb8 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec  6 02:50:24 np0005548731 nova_compute[232433]: 2025-12-06 07:50:24.859 232437 DEBUG oslo_concurrency.processutils [None req-d215b82b-4294-4748-b739-5735b4cd7eb8 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:50:25 np0005548731 nova_compute[232433]: 2025-12-06 07:50:25.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:50:25 np0005548731 nova_compute[232433]: 2025-12-06 07:50:25.105 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  6 02:50:25 np0005548731 nova_compute[232433]: 2025-12-06 07:50:25.105 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  6 02:50:25 np0005548731 nova_compute[232433]: 2025-12-06 07:50:25.159 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] [instance: fa71018e-7574-4438-bb85-43d1c96cf9b9] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Dec  6 02:50:25 np0005548731 nova_compute[232433]: 2025-12-06 07:50:25.160 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Dec  6 02:50:25 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  6 02:50:25 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/11715429' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  6 02:50:25 np0005548731 nova_compute[232433]: 2025-12-06 07:50:25.297 232437 DEBUG oslo_concurrency.processutils [None req-d215b82b-4294-4748-b739-5735b4cd7eb8 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.439s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:50:25 np0005548731 nova_compute[232433]: 2025-12-06 07:50:25.326 232437 DEBUG nova.storage.rbd_utils [None req-d215b82b-4294-4748-b739-5735b4cd7eb8 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] rbd image fa71018e-7574-4438-bb85-43d1c96cf9b9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:50:25 np0005548731 nova_compute[232433]: 2025-12-06 07:50:25.329 232437 DEBUG oslo_concurrency.processutils [None req-d215b82b-4294-4748-b739-5735b4cd7eb8 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:50:25 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  6 02:50:25 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/281735642' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  6 02:50:25 np0005548731 nova_compute[232433]: 2025-12-06 07:50:25.772 232437 DEBUG oslo_concurrency.processutils [None req-d215b82b-4294-4748-b739-5735b4cd7eb8 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.443s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:50:25 np0005548731 nova_compute[232433]: 2025-12-06 07:50:25.775 232437 DEBUG nova.virt.libvirt.vif [None req-d215b82b-4294-4748-b739-5735b4cd7eb8 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T07:50:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestMinimumBasicScenario-server-235586357',display_name='tempest-TestMinimumBasicScenario-server-235586357',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-testminimumbasicscenario-server-235586357',id=166,image_ref='6916f635-11b7-4158-ab13-60ff56406973',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKJFOYwUkK/AU5j/16C3dY42Bin5X9do17czkQVWr96Bc5yu/WMHsTfBb0AVGwIJHTi79KNz3aKY1rNv6m7H2M+tAVea1sAVgXE0sQ5V7zYqD0Y4j4BUibNqoce1/1RtaQ==',key_name='tempest-TestMinimumBasicScenario-1468814315',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='0c8fc5bc237e42bfad505a0bca6681eb',ramdisk_id='',reservation_id='r-fawmn8ln',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6916f635-11b7-4158-ab13-60ff56406973',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestMinimumBasicScenario-1310413980',owner_user_name='tempest-TestMinimumBasicScenario-1310413980-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:50:18Z,user_data=None,user_id='0f669e963dc54ad7bebf8dd20341428a',uuid=fa71018e-7574-4438-bb85-43d1c96cf9b9,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "341d07a0-1551-46f6-85b6-aace80d14532", "address": "fa:16:3e:71:74:18", "network": {"id": "f08afae8-f952-4a01-a643-61a4dc212937", "bridge": "br-int", "label": "tempest-TestMinimumBasicScenario-655838283-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "0c8fc5bc237e42bfad505a0bca6681eb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap341d07a0-15", "ovs_interfaceid": "341d07a0-1551-46f6-85b6-aace80d14532", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec  6 02:50:25 np0005548731 nova_compute[232433]: 2025-12-06 07:50:25.776 232437 DEBUG nova.network.os_vif_util [None req-d215b82b-4294-4748-b739-5735b4cd7eb8 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] Converting VIF {"id": "341d07a0-1551-46f6-85b6-aace80d14532", "address": "fa:16:3e:71:74:18", "network": {"id": "f08afae8-f952-4a01-a643-61a4dc212937", "bridge": "br-int", "label": "tempest-TestMinimumBasicScenario-655838283-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0c8fc5bc237e42bfad505a0bca6681eb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap341d07a0-15", "ovs_interfaceid": "341d07a0-1551-46f6-85b6-aace80d14532", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 02:50:25 np0005548731 nova_compute[232433]: 2025-12-06 07:50:25.777 232437 DEBUG nova.network.os_vif_util [None req-d215b82b-4294-4748-b739-5735b4cd7eb8 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:71:74:18,bridge_name='br-int',has_traffic_filtering=True,id=341d07a0-1551-46f6-85b6-aace80d14532,network=Network(f08afae8-f952-4a01-a643-61a4dc212937),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap341d07a0-15') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 02:50:25 np0005548731 nova_compute[232433]: 2025-12-06 07:50:25.779 232437 DEBUG nova.objects.instance [None req-d215b82b-4294-4748-b739-5735b4cd7eb8 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] Lazy-loading 'pci_devices' on Instance uuid fa71018e-7574-4438-bb85-43d1c96cf9b9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:50:25 np0005548731 nova_compute[232433]: 2025-12-06 07:50:25.809 232437 DEBUG nova.virt.libvirt.driver [None req-d215b82b-4294-4748-b739-5735b4cd7eb8 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] [instance: fa71018e-7574-4438-bb85-43d1c96cf9b9] End _get_guest_xml xml=<domain type="kvm">
Dec  6 02:50:25 np0005548731 nova_compute[232433]:  <uuid>fa71018e-7574-4438-bb85-43d1c96cf9b9</uuid>
Dec  6 02:50:25 np0005548731 nova_compute[232433]:  <name>instance-000000a6</name>
Dec  6 02:50:25 np0005548731 nova_compute[232433]:  <memory>131072</memory>
Dec  6 02:50:25 np0005548731 nova_compute[232433]:  <vcpu>1</vcpu>
Dec  6 02:50:25 np0005548731 nova_compute[232433]:  <metadata>
Dec  6 02:50:25 np0005548731 nova_compute[232433]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec  6 02:50:25 np0005548731 nova_compute[232433]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec  6 02:50:25 np0005548731 nova_compute[232433]:      <nova:name>tempest-TestMinimumBasicScenario-server-235586357</nova:name>
Dec  6 02:50:25 np0005548731 nova_compute[232433]:      <nova:creationTime>2025-12-06 07:50:24</nova:creationTime>
Dec  6 02:50:25 np0005548731 nova_compute[232433]:      <nova:flavor name="m1.nano">
Dec  6 02:50:25 np0005548731 nova_compute[232433]:        <nova:memory>128</nova:memory>
Dec  6 02:50:25 np0005548731 nova_compute[232433]:        <nova:disk>1</nova:disk>
Dec  6 02:50:25 np0005548731 nova_compute[232433]:        <nova:swap>0</nova:swap>
Dec  6 02:50:25 np0005548731 nova_compute[232433]:        <nova:ephemeral>0</nova:ephemeral>
Dec  6 02:50:25 np0005548731 nova_compute[232433]:        <nova:vcpus>1</nova:vcpus>
Dec  6 02:50:25 np0005548731 nova_compute[232433]:      </nova:flavor>
Dec  6 02:50:25 np0005548731 nova_compute[232433]:      <nova:owner>
Dec  6 02:50:25 np0005548731 nova_compute[232433]:        <nova:user uuid="0f669e963dc54ad7bebf8dd20341428a">tempest-TestMinimumBasicScenario-1310413980-project-member</nova:user>
Dec  6 02:50:25 np0005548731 nova_compute[232433]:        <nova:project uuid="0c8fc5bc237e42bfad505a0bca6681eb">tempest-TestMinimumBasicScenario-1310413980</nova:project>
Dec  6 02:50:25 np0005548731 nova_compute[232433]:      </nova:owner>
Dec  6 02:50:25 np0005548731 nova_compute[232433]:      <nova:root type="image" uuid="6916f635-11b7-4158-ab13-60ff56406973"/>
Dec  6 02:50:25 np0005548731 nova_compute[232433]:      <nova:ports>
Dec  6 02:50:25 np0005548731 nova_compute[232433]:        <nova:port uuid="341d07a0-1551-46f6-85b6-aace80d14532">
Dec  6 02:50:25 np0005548731 nova_compute[232433]:          <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Dec  6 02:50:25 np0005548731 nova_compute[232433]:        </nova:port>
Dec  6 02:50:25 np0005548731 nova_compute[232433]:      </nova:ports>
Dec  6 02:50:25 np0005548731 nova_compute[232433]:    </nova:instance>
Dec  6 02:50:25 np0005548731 nova_compute[232433]:  </metadata>
Dec  6 02:50:25 np0005548731 nova_compute[232433]:  <sysinfo type="smbios">
Dec  6 02:50:25 np0005548731 nova_compute[232433]:    <system>
Dec  6 02:50:25 np0005548731 nova_compute[232433]:      <entry name="manufacturer">RDO</entry>
Dec  6 02:50:25 np0005548731 nova_compute[232433]:      <entry name="product">OpenStack Compute</entry>
Dec  6 02:50:25 np0005548731 nova_compute[232433]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec  6 02:50:25 np0005548731 nova_compute[232433]:      <entry name="serial">fa71018e-7574-4438-bb85-43d1c96cf9b9</entry>
Dec  6 02:50:25 np0005548731 nova_compute[232433]:      <entry name="uuid">fa71018e-7574-4438-bb85-43d1c96cf9b9</entry>
Dec  6 02:50:25 np0005548731 nova_compute[232433]:      <entry name="family">Virtual Machine</entry>
Dec  6 02:50:25 np0005548731 nova_compute[232433]:    </system>
Dec  6 02:50:25 np0005548731 nova_compute[232433]:  </sysinfo>
Dec  6 02:50:25 np0005548731 nova_compute[232433]:  <os>
Dec  6 02:50:25 np0005548731 nova_compute[232433]:    <type arch="x86_64" machine="q35">hvm</type>
Dec  6 02:50:25 np0005548731 nova_compute[232433]:    <boot dev="hd"/>
Dec  6 02:50:25 np0005548731 nova_compute[232433]:    <smbios mode="sysinfo"/>
Dec  6 02:50:25 np0005548731 nova_compute[232433]:  </os>
Dec  6 02:50:25 np0005548731 nova_compute[232433]:  <features>
Dec  6 02:50:25 np0005548731 nova_compute[232433]:    <acpi/>
Dec  6 02:50:25 np0005548731 nova_compute[232433]:    <apic/>
Dec  6 02:50:25 np0005548731 nova_compute[232433]:    <vmcoreinfo/>
Dec  6 02:50:25 np0005548731 nova_compute[232433]:  </features>
Dec  6 02:50:25 np0005548731 nova_compute[232433]:  <clock offset="utc">
Dec  6 02:50:25 np0005548731 nova_compute[232433]:    <timer name="pit" tickpolicy="delay"/>
Dec  6 02:50:25 np0005548731 nova_compute[232433]:    <timer name="rtc" tickpolicy="catchup"/>
Dec  6 02:50:25 np0005548731 nova_compute[232433]:    <timer name="hpet" present="no"/>
Dec  6 02:50:25 np0005548731 nova_compute[232433]:  </clock>
Dec  6 02:50:25 np0005548731 nova_compute[232433]:  <cpu mode="custom" match="exact">
Dec  6 02:50:25 np0005548731 nova_compute[232433]:    <model>Nehalem</model>
Dec  6 02:50:25 np0005548731 nova_compute[232433]:    <topology sockets="1" cores="1" threads="1"/>
Dec  6 02:50:25 np0005548731 nova_compute[232433]:  </cpu>
Dec  6 02:50:25 np0005548731 nova_compute[232433]:  <devices>
Dec  6 02:50:25 np0005548731 nova_compute[232433]:    <disk type="network" device="disk">
Dec  6 02:50:25 np0005548731 nova_compute[232433]:      <driver type="raw" cache="none"/>
Dec  6 02:50:25 np0005548731 nova_compute[232433]:      <source protocol="rbd" name="vms/fa71018e-7574-4438-bb85-43d1c96cf9b9_disk">
Dec  6 02:50:25 np0005548731 nova_compute[232433]:        <host name="192.168.122.100" port="6789"/>
Dec  6 02:50:25 np0005548731 nova_compute[232433]:        <host name="192.168.122.102" port="6789"/>
Dec  6 02:50:25 np0005548731 nova_compute[232433]:        <host name="192.168.122.101" port="6789"/>
Dec  6 02:50:25 np0005548731 nova_compute[232433]:      </source>
Dec  6 02:50:25 np0005548731 nova_compute[232433]:      <auth username="openstack">
Dec  6 02:50:25 np0005548731 nova_compute[232433]:        <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec  6 02:50:25 np0005548731 nova_compute[232433]:      </auth>
Dec  6 02:50:25 np0005548731 nova_compute[232433]:      <target dev="vda" bus="virtio"/>
Dec  6 02:50:25 np0005548731 nova_compute[232433]:    </disk>
Dec  6 02:50:25 np0005548731 nova_compute[232433]:    <disk type="network" device="cdrom">
Dec  6 02:50:25 np0005548731 nova_compute[232433]:      <driver type="raw" cache="none"/>
Dec  6 02:50:25 np0005548731 nova_compute[232433]:      <source protocol="rbd" name="vms/fa71018e-7574-4438-bb85-43d1c96cf9b9_disk.config">
Dec  6 02:50:25 np0005548731 nova_compute[232433]:        <host name="192.168.122.100" port="6789"/>
Dec  6 02:50:25 np0005548731 nova_compute[232433]:        <host name="192.168.122.102" port="6789"/>
Dec  6 02:50:25 np0005548731 nova_compute[232433]:        <host name="192.168.122.101" port="6789"/>
Dec  6 02:50:25 np0005548731 nova_compute[232433]:      </source>
Dec  6 02:50:25 np0005548731 nova_compute[232433]:      <auth username="openstack">
Dec  6 02:50:25 np0005548731 nova_compute[232433]:        <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec  6 02:50:25 np0005548731 nova_compute[232433]:      </auth>
Dec  6 02:50:25 np0005548731 nova_compute[232433]:      <target dev="sda" bus="sata"/>
Dec  6 02:50:25 np0005548731 nova_compute[232433]:    </disk>
Dec  6 02:50:25 np0005548731 nova_compute[232433]:    <interface type="ethernet">
Dec  6 02:50:25 np0005548731 nova_compute[232433]:      <mac address="fa:16:3e:71:74:18"/>
Dec  6 02:50:25 np0005548731 nova_compute[232433]:      <model type="virtio"/>
Dec  6 02:50:25 np0005548731 nova_compute[232433]:      <driver name="vhost" rx_queue_size="512"/>
Dec  6 02:50:25 np0005548731 nova_compute[232433]:      <mtu size="1442"/>
Dec  6 02:50:25 np0005548731 nova_compute[232433]:      <target dev="tap341d07a0-15"/>
Dec  6 02:50:25 np0005548731 nova_compute[232433]:    </interface>
Dec  6 02:50:25 np0005548731 nova_compute[232433]:    <serial type="pty">
Dec  6 02:50:25 np0005548731 nova_compute[232433]:      <log file="/var/lib/nova/instances/fa71018e-7574-4438-bb85-43d1c96cf9b9/console.log" append="off"/>
Dec  6 02:50:25 np0005548731 nova_compute[232433]:    </serial>
Dec  6 02:50:25 np0005548731 nova_compute[232433]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec  6 02:50:25 np0005548731 nova_compute[232433]:    <video>
Dec  6 02:50:25 np0005548731 nova_compute[232433]:      <model type="virtio"/>
Dec  6 02:50:25 np0005548731 nova_compute[232433]:    </video>
Dec  6 02:50:25 np0005548731 nova_compute[232433]:    <input type="tablet" bus="usb"/>
Dec  6 02:50:25 np0005548731 nova_compute[232433]:    <rng model="virtio">
Dec  6 02:50:25 np0005548731 nova_compute[232433]:      <backend model="random">/dev/urandom</backend>
Dec  6 02:50:25 np0005548731 nova_compute[232433]:    </rng>
Dec  6 02:50:25 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root"/>
Dec  6 02:50:25 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:50:25 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:50:25 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:50:25 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:50:25 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:50:25 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:50:25 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:50:25 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:50:25 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:50:25 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:50:25 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:50:25 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:50:25 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:50:25 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:50:25 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:50:25 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:50:25 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:50:25 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:50:25 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:50:25 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:50:25 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:50:25 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:50:25 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:50:25 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:50:25 np0005548731 nova_compute[232433]:    <controller type="usb" index="0"/>
Dec  6 02:50:25 np0005548731 nova_compute[232433]:    <memballoon model="virtio">
Dec  6 02:50:25 np0005548731 nova_compute[232433]:      <stats period="10"/>
Dec  6 02:50:25 np0005548731 nova_compute[232433]:    </memballoon>
Dec  6 02:50:25 np0005548731 nova_compute[232433]:  </devices>
Dec  6 02:50:25 np0005548731 nova_compute[232433]: </domain>
Dec  6 02:50:25 np0005548731 nova_compute[232433]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec  6 02:50:25 np0005548731 nova_compute[232433]: 2025-12-06 07:50:25.810 232437 DEBUG nova.compute.manager [None req-d215b82b-4294-4748-b739-5735b4cd7eb8 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] [instance: fa71018e-7574-4438-bb85-43d1c96cf9b9] Preparing to wait for external event network-vif-plugged-341d07a0-1551-46f6-85b6-aace80d14532 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec  6 02:50:25 np0005548731 nova_compute[232433]: 2025-12-06 07:50:25.811 232437 DEBUG oslo_concurrency.lockutils [None req-d215b82b-4294-4748-b739-5735b4cd7eb8 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] Acquiring lock "fa71018e-7574-4438-bb85-43d1c96cf9b9-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:50:25 np0005548731 nova_compute[232433]: 2025-12-06 07:50:25.811 232437 DEBUG oslo_concurrency.lockutils [None req-d215b82b-4294-4748-b739-5735b4cd7eb8 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] Lock "fa71018e-7574-4438-bb85-43d1c96cf9b9-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:50:25 np0005548731 nova_compute[232433]: 2025-12-06 07:50:25.812 232437 DEBUG oslo_concurrency.lockutils [None req-d215b82b-4294-4748-b739-5735b4cd7eb8 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] Lock "fa71018e-7574-4438-bb85-43d1c96cf9b9-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:50:25 np0005548731 nova_compute[232433]: 2025-12-06 07:50:25.812 232437 DEBUG nova.virt.libvirt.vif [None req-d215b82b-4294-4748-b739-5735b4cd7eb8 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T07:50:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestMinimumBasicScenario-server-235586357',display_name='tempest-TestMinimumBasicScenario-server-235586357',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-testminimumbasicscenario-server-235586357',id=166,image_ref='6916f635-11b7-4158-ab13-60ff56406973',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKJFOYwUkK/AU5j/16C3dY42Bin5X9do17czkQVWr96Bc5yu/WMHsTfBb0AVGwIJHTi79KNz3aKY1rNv6m7H2M+tAVea1sAVgXE0sQ5V7zYqD0Y4j4BUibNqoce1/1RtaQ==',key_name='tempest-TestMinimumBasicScenario-1468814315',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='0c8fc5bc237e42bfad505a0bca6681eb',ramdisk_id='',reservation_id='r-fawmn8ln',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6916f635-11b7-4158-ab13-60ff56406973',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestMinimumBasicScenario-1310413980',owner_user_name='tempest-TestMinimumBasicScenario-1310413980-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:50:18Z,user_data=None,user_id='0f669e963dc54ad7bebf8dd20341428a',uuid=fa71018e-7574-4438-bb85-43d1c96cf9b9,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "341d07a0-1551-46f6-85b6-aace80d14532", "address": "fa:16:3e:71:74:18", "network": {"id": "f08afae8-f952-4a01-a643-61a4dc212937", "bridge": "br-int", "label": "tempest-TestMinimumBasicScenario-655838283-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], 
"meta": {"injected": false, "tenant_id": "0c8fc5bc237e42bfad505a0bca6681eb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap341d07a0-15", "ovs_interfaceid": "341d07a0-1551-46f6-85b6-aace80d14532", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec  6 02:50:25 np0005548731 nova_compute[232433]: 2025-12-06 07:50:25.813 232437 DEBUG nova.network.os_vif_util [None req-d215b82b-4294-4748-b739-5735b4cd7eb8 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] Converting VIF {"id": "341d07a0-1551-46f6-85b6-aace80d14532", "address": "fa:16:3e:71:74:18", "network": {"id": "f08afae8-f952-4a01-a643-61a4dc212937", "bridge": "br-int", "label": "tempest-TestMinimumBasicScenario-655838283-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0c8fc5bc237e42bfad505a0bca6681eb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap341d07a0-15", "ovs_interfaceid": "341d07a0-1551-46f6-85b6-aace80d14532", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 02:50:25 np0005548731 nova_compute[232433]: 2025-12-06 07:50:25.813 232437 DEBUG nova.network.os_vif_util [None req-d215b82b-4294-4748-b739-5735b4cd7eb8 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:71:74:18,bridge_name='br-int',has_traffic_filtering=True,id=341d07a0-1551-46f6-85b6-aace80d14532,network=Network(f08afae8-f952-4a01-a643-61a4dc212937),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap341d07a0-15') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 02:50:25 np0005548731 nova_compute[232433]: 2025-12-06 07:50:25.814 232437 DEBUG os_vif [None req-d215b82b-4294-4748-b739-5735b4cd7eb8 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:71:74:18,bridge_name='br-int',has_traffic_filtering=True,id=341d07a0-1551-46f6-85b6-aace80d14532,network=Network(f08afae8-f952-4a01-a643-61a4dc212937),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap341d07a0-15') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec  6 02:50:25 np0005548731 nova_compute[232433]: 2025-12-06 07:50:25.814 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:50:25 np0005548731 nova_compute[232433]: 2025-12-06 07:50:25.815 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:50:25 np0005548731 nova_compute[232433]: 2025-12-06 07:50:25.815 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  6 02:50:25 np0005548731 nova_compute[232433]: 2025-12-06 07:50:25.818 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:50:25 np0005548731 nova_compute[232433]: 2025-12-06 07:50:25.818 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap341d07a0-15, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:50:25 np0005548731 nova_compute[232433]: 2025-12-06 07:50:25.819 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap341d07a0-15, col_values=(('external_ids', {'iface-id': '341d07a0-1551-46f6-85b6-aace80d14532', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:71:74:18', 'vm-uuid': 'fa71018e-7574-4438-bb85-43d1c96cf9b9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:50:25 np0005548731 NetworkManager[49182]: <info>  [1765007425.8640] manager: (tap341d07a0-15): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/363)
Dec  6 02:50:25 np0005548731 nova_compute[232433]: 2025-12-06 07:50:25.863 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:50:25 np0005548731 nova_compute[232433]: 2025-12-06 07:50:25.867 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  6 02:50:25 np0005548731 nova_compute[232433]: 2025-12-06 07:50:25.871 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:50:25 np0005548731 nova_compute[232433]: 2025-12-06 07:50:25.873 232437 INFO os_vif [None req-d215b82b-4294-4748-b739-5735b4cd7eb8 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:71:74:18,bridge_name='br-int',has_traffic_filtering=True,id=341d07a0-1551-46f6-85b6-aace80d14532,network=Network(f08afae8-f952-4a01-a643-61a4dc212937),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap341d07a0-15')#033[00m
Dec  6 02:50:25 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:50:25 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:50:25 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:50:25.882 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:50:25 np0005548731 nova_compute[232433]: 2025-12-06 07:50:25.940 232437 DEBUG nova.virt.libvirt.driver [None req-d215b82b-4294-4748-b739-5735b4cd7eb8 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  6 02:50:25 np0005548731 nova_compute[232433]: 2025-12-06 07:50:25.941 232437 DEBUG nova.virt.libvirt.driver [None req-d215b82b-4294-4748-b739-5735b4cd7eb8 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  6 02:50:25 np0005548731 nova_compute[232433]: 2025-12-06 07:50:25.941 232437 DEBUG nova.virt.libvirt.driver [None req-d215b82b-4294-4748-b739-5735b4cd7eb8 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] No VIF found with MAC fa:16:3e:71:74:18, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec  6 02:50:25 np0005548731 nova_compute[232433]: 2025-12-06 07:50:25.942 232437 INFO nova.virt.libvirt.driver [None req-d215b82b-4294-4748-b739-5735b4cd7eb8 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] [instance: fa71018e-7574-4438-bb85-43d1c96cf9b9] Using config drive#033[00m
Dec  6 02:50:25 np0005548731 nova_compute[232433]: 2025-12-06 07:50:25.967 232437 DEBUG nova.storage.rbd_utils [None req-d215b82b-4294-4748-b739-5735b4cd7eb8 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] rbd image fa71018e-7574-4438-bb85-43d1c96cf9b9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:50:26 np0005548731 nova_compute[232433]: 2025-12-06 07:50:26.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:50:26 np0005548731 nova_compute[232433]: 2025-12-06 07:50:26.105 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  6 02:50:26 np0005548731 nova_compute[232433]: 2025-12-06 07:50:26.296 232437 INFO nova.virt.libvirt.driver [None req-d215b82b-4294-4748-b739-5735b4cd7eb8 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] [instance: fa71018e-7574-4438-bb85-43d1c96cf9b9] Creating config drive at /var/lib/nova/instances/fa71018e-7574-4438-bb85-43d1c96cf9b9/disk.config#033[00m
Dec  6 02:50:26 np0005548731 nova_compute[232433]: 2025-12-06 07:50:26.301 232437 DEBUG oslo_concurrency.processutils [None req-d215b82b-4294-4748-b739-5735b4cd7eb8 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/fa71018e-7574-4438-bb85-43d1c96cf9b9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp7dtkxd93 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:50:26 np0005548731 ovn_controller[133927]: 2025-12-06T07:50:26Z|00800|memory_trim|INFO|Detected inactivity (last active 30001 ms ago): trimming memory
Dec  6 02:50:26 np0005548731 nova_compute[232433]: 2025-12-06 07:50:26.435 232437 DEBUG oslo_concurrency.processutils [None req-d215b82b-4294-4748-b739-5735b4cd7eb8 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/fa71018e-7574-4438-bb85-43d1c96cf9b9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp7dtkxd93" returned: 0 in 0.134s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:50:26 np0005548731 nova_compute[232433]: 2025-12-06 07:50:26.466 232437 DEBUG nova.storage.rbd_utils [None req-d215b82b-4294-4748-b739-5735b4cd7eb8 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] rbd image fa71018e-7574-4438-bb85-43d1c96cf9b9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:50:26 np0005548731 nova_compute[232433]: 2025-12-06 07:50:26.469 232437 DEBUG oslo_concurrency.processutils [None req-d215b82b-4294-4748-b739-5735b4cd7eb8 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/fa71018e-7574-4438-bb85-43d1c96cf9b9/disk.config fa71018e-7574-4438-bb85-43d1c96cf9b9_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:50:26 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e373 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:50:26 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:50:26 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:50:26 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:50:26.806 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:50:27 np0005548731 nova_compute[232433]: 2025-12-06 07:50:27.105 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:50:27 np0005548731 nova_compute[232433]: 2025-12-06 07:50:27.667 232437 DEBUG oslo_concurrency.processutils [None req-d215b82b-4294-4748-b739-5735b4cd7eb8 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/fa71018e-7574-4438-bb85-43d1c96cf9b9/disk.config fa71018e-7574-4438-bb85-43d1c96cf9b9_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.198s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:50:27 np0005548731 nova_compute[232433]: 2025-12-06 07:50:27.669 232437 INFO nova.virt.libvirt.driver [None req-d215b82b-4294-4748-b739-5735b4cd7eb8 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] [instance: fa71018e-7574-4438-bb85-43d1c96cf9b9] Deleting local config drive /var/lib/nova/instances/fa71018e-7574-4438-bb85-43d1c96cf9b9/disk.config because it was imported into RBD.#033[00m
Dec  6 02:50:27 np0005548731 kernel: tap341d07a0-15: entered promiscuous mode
Dec  6 02:50:27 np0005548731 NetworkManager[49182]: <info>  [1765007427.7164] manager: (tap341d07a0-15): new Tun device (/org/freedesktop/NetworkManager/Devices/364)
Dec  6 02:50:27 np0005548731 ovn_controller[133927]: 2025-12-06T07:50:27Z|00801|binding|INFO|Claiming lport 341d07a0-1551-46f6-85b6-aace80d14532 for this chassis.
Dec  6 02:50:27 np0005548731 ovn_controller[133927]: 2025-12-06T07:50:27Z|00802|binding|INFO|341d07a0-1551-46f6-85b6-aace80d14532: Claiming fa:16:3e:71:74:18 10.100.0.13
Dec  6 02:50:27 np0005548731 nova_compute[232433]: 2025-12-06 07:50:27.718 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:50:27 np0005548731 nova_compute[232433]: 2025-12-06 07:50:27.721 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:50:27 np0005548731 nova_compute[232433]: 2025-12-06 07:50:27.723 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:50:27 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:50:27.733 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:71:74:18 10.100.0.13'], port_security=['fa:16:3e:71:74:18 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'fa71018e-7574-4438-bb85-43d1c96cf9b9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f08afae8-f952-4a01-a643-61a4dc212937', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0c8fc5bc237e42bfad505a0bca6681eb', 'neutron:revision_number': '2', 'neutron:security_group_ids': '97e2366f-5968-4702-838b-5830aacf120f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e9bb6731-402a-4b02-b4cf-9ac913838fd2, chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], logical_port=341d07a0-1551-46f6-85b6-aace80d14532) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 02:50:27 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:50:27.734 143965 INFO neutron.agent.ovn.metadata.agent [-] Port 341d07a0-1551-46f6-85b6-aace80d14532 in datapath f08afae8-f952-4a01-a643-61a4dc212937 bound to our chassis#033[00m
Dec  6 02:50:27 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:50:27.736 143965 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f08afae8-f952-4a01-a643-61a4dc212937#033[00m
Dec  6 02:50:27 np0005548731 systemd-udevd[308168]: Network interface NamePolicy= disabled on kernel command line.
Dec  6 02:50:27 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:50:27.748 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[6a3b2ab5-ca05-4394-b731-313f78709f29]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:50:27 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:50:27.749 143965 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapf08afae8-f1 in ovnmeta-f08afae8-f952-4a01-a643-61a4dc212937 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Dec  6 02:50:27 np0005548731 systemd-machined[195355]: New machine qemu-81-instance-000000a6.
Dec  6 02:50:27 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:50:27.752 236668 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapf08afae8-f0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Dec  6 02:50:27 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:50:27.752 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[e7695301-2cfc-4572-80f6-1d66caa17e30]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:50:27 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:50:27.753 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[8b77ea7c-f478-46cf-89a5-0b68b31997e4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:50:27 np0005548731 NetworkManager[49182]: <info>  [1765007427.7625] device (tap341d07a0-15): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec  6 02:50:27 np0005548731 NetworkManager[49182]: <info>  [1765007427.7632] device (tap341d07a0-15): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec  6 02:50:27 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:50:27.764 144079 DEBUG oslo.privsep.daemon [-] privsep: reply[595fb218-f52a-494d-ba2c-6e3e57e1282b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:50:27 np0005548731 systemd[1]: Started Virtual Machine qemu-81-instance-000000a6.
Dec  6 02:50:27 np0005548731 nova_compute[232433]: 2025-12-06 07:50:27.788 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:50:27 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:50:27.788 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[0356c76f-2b67-40a8-a1a2-2f51a1341e15]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:50:27 np0005548731 ovn_controller[133927]: 2025-12-06T07:50:27Z|00803|binding|INFO|Setting lport 341d07a0-1551-46f6-85b6-aace80d14532 ovn-installed in OVS
Dec  6 02:50:27 np0005548731 ovn_controller[133927]: 2025-12-06T07:50:27Z|00804|binding|INFO|Setting lport 341d07a0-1551-46f6-85b6-aace80d14532 up in Southbound
Dec  6 02:50:27 np0005548731 nova_compute[232433]: 2025-12-06 07:50:27.793 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:50:27 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:50:27.819 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[303f63a2-8bd6-4f9a-bcd4-f7c638d8ba35]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:50:27 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:50:27.826 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[1167c485-27e5-43bd-8a85-cc67e34bb089]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:50:27 np0005548731 NetworkManager[49182]: <info>  [1765007427.8276] manager: (tapf08afae8-f0): new Veth device (/org/freedesktop/NetworkManager/Devices/365)
Dec  6 02:50:27 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:50:27.860 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[719b4d4c-0ad2-4a9d-b5e4-0973d99591f2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:50:27 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:50:27.865 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[c2f48af5-2a7c-40d7-bc5a-ea928832039f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:50:27 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:50:27 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:50:27 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:50:27.885 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:50:27 np0005548731 NetworkManager[49182]: <info>  [1765007427.8991] device (tapf08afae8-f0): carrier: link connected
Dec  6 02:50:27 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:50:27.948 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[bf6e5c25-d33b-4501-bd17-205d1b233fcf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:50:27 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:50:27.965 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[d5b2837e-f706-4439-98c5-11e3ca753224]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf08afae8-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:43:11:18'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 243], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 768990, 'reachable_time': 28115, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 308200, 'error': None, 'target': 'ovnmeta-f08afae8-f952-4a01-a643-61a4dc212937', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:50:27 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:50:27.985 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[d15cd396-b1a3-4e10-af2d-971277d16e35]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe43:1118'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 768990, 'tstamp': 768990}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 308201, 'error': None, 'target': 'ovnmeta-f08afae8-f952-4a01-a643-61a4dc212937', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:50:28 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:50:28.004 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[79eef3b3-cb69-442b-bb8c-92c24ffe9ec1]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf08afae8-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:43:11:18'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 243], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 768990, 'reachable_time': 28115, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 308202, 'error': None, 'target': 'ovnmeta-f08afae8-f952-4a01-a643-61a4dc212937', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:50:28 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:50:28.041 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[cb5b38f8-72e0-47ba-8196-733a38ac4742]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:50:28 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:50:28.104 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[315b951e-8420-4586-a8a4-a4d939f0ef2a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:50:28 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:50:28.105 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf08afae8-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:50:28 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:50:28.105 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  6 02:50:28 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:50:28.106 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf08afae8-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:50:28 np0005548731 kernel: tapf08afae8-f0: entered promiscuous mode
Dec  6 02:50:28 np0005548731 nova_compute[232433]: 2025-12-06 07:50:28.159 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:50:28 np0005548731 NetworkManager[49182]: <info>  [1765007428.1601] manager: (tapf08afae8-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/366)
Dec  6 02:50:28 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:50:28.164 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf08afae8-f0, col_values=(('external_ids', {'iface-id': '684342c7-1709-4776-be04-f6d5a6b0b0ae'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:50:28 np0005548731 ovn_controller[133927]: 2025-12-06T07:50:28Z|00805|binding|INFO|Releasing lport 684342c7-1709-4776-be04-f6d5a6b0b0ae from this chassis (sb_readonly=0)
Dec  6 02:50:28 np0005548731 nova_compute[232433]: 2025-12-06 07:50:28.165 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:50:28 np0005548731 nova_compute[232433]: 2025-12-06 07:50:28.178 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:50:28 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:50:28.180 143965 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/f08afae8-f952-4a01-a643-61a4dc212937.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/f08afae8-f952-4a01-a643-61a4dc212937.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Dec  6 02:50:28 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:50:28.182 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[55ca7618-c0e5-4471-8810-842e1c1cf256]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:50:28 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:50:28.184 143965 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec  6 02:50:28 np0005548731 ovn_metadata_agent[143960]: global
Dec  6 02:50:28 np0005548731 ovn_metadata_agent[143960]:    log         /dev/log local0 debug
Dec  6 02:50:28 np0005548731 ovn_metadata_agent[143960]:    log-tag     haproxy-metadata-proxy-f08afae8-f952-4a01-a643-61a4dc212937
Dec  6 02:50:28 np0005548731 ovn_metadata_agent[143960]:    user        root
Dec  6 02:50:28 np0005548731 ovn_metadata_agent[143960]:    group       root
Dec  6 02:50:28 np0005548731 ovn_metadata_agent[143960]:    maxconn     1024
Dec  6 02:50:28 np0005548731 ovn_metadata_agent[143960]:    pidfile     /var/lib/neutron/external/pids/f08afae8-f952-4a01-a643-61a4dc212937.pid.haproxy
Dec  6 02:50:28 np0005548731 ovn_metadata_agent[143960]:    daemon
Dec  6 02:50:28 np0005548731 ovn_metadata_agent[143960]: 
Dec  6 02:50:28 np0005548731 ovn_metadata_agent[143960]: defaults
Dec  6 02:50:28 np0005548731 ovn_metadata_agent[143960]:    log global
Dec  6 02:50:28 np0005548731 ovn_metadata_agent[143960]:    mode http
Dec  6 02:50:28 np0005548731 ovn_metadata_agent[143960]:    option httplog
Dec  6 02:50:28 np0005548731 ovn_metadata_agent[143960]:    option dontlognull
Dec  6 02:50:28 np0005548731 ovn_metadata_agent[143960]:    option http-server-close
Dec  6 02:50:28 np0005548731 ovn_metadata_agent[143960]:    option forwardfor
Dec  6 02:50:28 np0005548731 ovn_metadata_agent[143960]:    retries                 3
Dec  6 02:50:28 np0005548731 ovn_metadata_agent[143960]:    timeout http-request    30s
Dec  6 02:50:28 np0005548731 ovn_metadata_agent[143960]:    timeout connect         30s
Dec  6 02:50:28 np0005548731 ovn_metadata_agent[143960]:    timeout client          32s
Dec  6 02:50:28 np0005548731 ovn_metadata_agent[143960]:    timeout server          32s
Dec  6 02:50:28 np0005548731 ovn_metadata_agent[143960]:    timeout http-keep-alive 30s
Dec  6 02:50:28 np0005548731 ovn_metadata_agent[143960]: 
Dec  6 02:50:28 np0005548731 ovn_metadata_agent[143960]: 
Dec  6 02:50:28 np0005548731 ovn_metadata_agent[143960]: listen listener
Dec  6 02:50:28 np0005548731 ovn_metadata_agent[143960]:    bind 169.254.169.254:80
Dec  6 02:50:28 np0005548731 ovn_metadata_agent[143960]:    server metadata /var/lib/neutron/metadata_proxy
Dec  6 02:50:28 np0005548731 ovn_metadata_agent[143960]:    http-request add-header X-OVN-Network-ID f08afae8-f952-4a01-a643-61a4dc212937
Dec  6 02:50:28 np0005548731 ovn_metadata_agent[143960]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Dec  6 02:50:28 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:50:28.184 143965 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-f08afae8-f952-4a01-a643-61a4dc212937', 'env', 'PROCESS_TAG=haproxy-f08afae8-f952-4a01-a643-61a4dc212937', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/f08afae8-f952-4a01-a643-61a4dc212937.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Dec  6 02:50:28 np0005548731 nova_compute[232433]: 2025-12-06 07:50:28.264 232437 DEBUG nova.virt.driver [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Emitting event <LifecycleEvent: 1765007428.264122, fa71018e-7574-4438-bb85-43d1c96cf9b9 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 02:50:28 np0005548731 nova_compute[232433]: 2025-12-06 07:50:28.265 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: fa71018e-7574-4438-bb85-43d1c96cf9b9] VM Started (Lifecycle Event)#033[00m
Dec  6 02:50:28 np0005548731 nova_compute[232433]: 2025-12-06 07:50:28.297 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: fa71018e-7574-4438-bb85-43d1c96cf9b9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:50:28 np0005548731 nova_compute[232433]: 2025-12-06 07:50:28.301 232437 DEBUG nova.virt.driver [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Emitting event <LifecycleEvent: 1765007428.2655146, fa71018e-7574-4438-bb85-43d1c96cf9b9 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 02:50:28 np0005548731 nova_compute[232433]: 2025-12-06 07:50:28.301 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: fa71018e-7574-4438-bb85-43d1c96cf9b9] VM Paused (Lifecycle Event)#033[00m
Dec  6 02:50:28 np0005548731 nova_compute[232433]: 2025-12-06 07:50:28.336 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: fa71018e-7574-4438-bb85-43d1c96cf9b9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:50:28 np0005548731 nova_compute[232433]: 2025-12-06 07:50:28.340 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: fa71018e-7574-4438-bb85-43d1c96cf9b9] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  6 02:50:28 np0005548731 nova_compute[232433]: 2025-12-06 07:50:28.381 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: fa71018e-7574-4438-bb85-43d1c96cf9b9] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  6 02:50:28 np0005548731 podman[308277]: 2025-12-06 07:50:28.562381877 +0000 UTC m=+0.050904641 container create 5a76585a745b69166b7d9e1bea06442b653604ef013e81a91769d410f08ef7e2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f08afae8-f952-4a01-a643-61a4dc212937, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  6 02:50:28 np0005548731 systemd[1]: Started libpod-conmon-5a76585a745b69166b7d9e1bea06442b653604ef013e81a91769d410f08ef7e2.scope.
Dec  6 02:50:28 np0005548731 systemd[1]: Started libcrun container.
Dec  6 02:50:28 np0005548731 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f9803d469c14b5c32a1af21850c82d75fcc02c6fad9e88654859ee6c2f41eb6/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec  6 02:50:28 np0005548731 podman[308277]: 2025-12-06 07:50:28.538016134 +0000 UTC m=+0.026538948 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec  6 02:50:28 np0005548731 podman[308277]: 2025-12-06 07:50:28.636431633 +0000 UTC m=+0.124954417 container init 5a76585a745b69166b7d9e1bea06442b653604ef013e81a91769d410f08ef7e2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f08afae8-f952-4a01-a643-61a4dc212937, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec  6 02:50:28 np0005548731 podman[308277]: 2025-12-06 07:50:28.641682311 +0000 UTC m=+0.130205075 container start 5a76585a745b69166b7d9e1bea06442b653604ef013e81a91769d410f08ef7e2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f08afae8-f952-4a01-a643-61a4dc212937, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Dec  6 02:50:28 np0005548731 neutron-haproxy-ovnmeta-f08afae8-f952-4a01-a643-61a4dc212937[308292]: [NOTICE]   (308296) : New worker (308298) forked
Dec  6 02:50:28 np0005548731 neutron-haproxy-ovnmeta-f08afae8-f952-4a01-a643-61a4dc212937[308292]: [NOTICE]   (308296) : Loading success.
Dec  6 02:50:28 np0005548731 nova_compute[232433]: 2025-12-06 07:50:28.732 232437 DEBUG nova.compute.manager [req-beaffbed-aa0b-4727-9016-3a44877b3876 req-bf89aefd-46bd-4dc9-a779-d22c0205e699 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: fa71018e-7574-4438-bb85-43d1c96cf9b9] Received event network-vif-plugged-341d07a0-1551-46f6-85b6-aace80d14532 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:50:28 np0005548731 nova_compute[232433]: 2025-12-06 07:50:28.732 232437 DEBUG oslo_concurrency.lockutils [req-beaffbed-aa0b-4727-9016-3a44877b3876 req-bf89aefd-46bd-4dc9-a779-d22c0205e699 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "fa71018e-7574-4438-bb85-43d1c96cf9b9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:50:28 np0005548731 nova_compute[232433]: 2025-12-06 07:50:28.732 232437 DEBUG oslo_concurrency.lockutils [req-beaffbed-aa0b-4727-9016-3a44877b3876 req-bf89aefd-46bd-4dc9-a779-d22c0205e699 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "fa71018e-7574-4438-bb85-43d1c96cf9b9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:50:28 np0005548731 nova_compute[232433]: 2025-12-06 07:50:28.733 232437 DEBUG oslo_concurrency.lockutils [req-beaffbed-aa0b-4727-9016-3a44877b3876 req-bf89aefd-46bd-4dc9-a779-d22c0205e699 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "fa71018e-7574-4438-bb85-43d1c96cf9b9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:50:28 np0005548731 nova_compute[232433]: 2025-12-06 07:50:28.733 232437 DEBUG nova.compute.manager [req-beaffbed-aa0b-4727-9016-3a44877b3876 req-bf89aefd-46bd-4dc9-a779-d22c0205e699 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: fa71018e-7574-4438-bb85-43d1c96cf9b9] Processing event network-vif-plugged-341d07a0-1551-46f6-85b6-aace80d14532 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Dec  6 02:50:28 np0005548731 nova_compute[232433]: 2025-12-06 07:50:28.733 232437 DEBUG nova.compute.manager [None req-d215b82b-4294-4748-b739-5735b4cd7eb8 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] [instance: fa71018e-7574-4438-bb85-43d1c96cf9b9] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec  6 02:50:28 np0005548731 nova_compute[232433]: 2025-12-06 07:50:28.737 232437 DEBUG nova.virt.driver [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Emitting event <LifecycleEvent: 1765007428.737015, fa71018e-7574-4438-bb85-43d1c96cf9b9 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 02:50:28 np0005548731 nova_compute[232433]: 2025-12-06 07:50:28.737 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: fa71018e-7574-4438-bb85-43d1c96cf9b9] VM Resumed (Lifecycle Event)#033[00m
Dec  6 02:50:28 np0005548731 nova_compute[232433]: 2025-12-06 07:50:28.739 232437 DEBUG nova.virt.libvirt.driver [None req-d215b82b-4294-4748-b739-5735b4cd7eb8 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] [instance: fa71018e-7574-4438-bb85-43d1c96cf9b9] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Dec  6 02:50:28 np0005548731 nova_compute[232433]: 2025-12-06 07:50:28.743 232437 INFO nova.virt.libvirt.driver [-] [instance: fa71018e-7574-4438-bb85-43d1c96cf9b9] Instance spawned successfully.#033[00m
Dec  6 02:50:28 np0005548731 nova_compute[232433]: 2025-12-06 07:50:28.743 232437 DEBUG nova.virt.libvirt.driver [None req-d215b82b-4294-4748-b739-5735b4cd7eb8 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] [instance: fa71018e-7574-4438-bb85-43d1c96cf9b9] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Dec  6 02:50:28 np0005548731 nova_compute[232433]: 2025-12-06 07:50:28.764 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: fa71018e-7574-4438-bb85-43d1c96cf9b9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:50:28 np0005548731 nova_compute[232433]: 2025-12-06 07:50:28.769 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: fa71018e-7574-4438-bb85-43d1c96cf9b9] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  6 02:50:28 np0005548731 nova_compute[232433]: 2025-12-06 07:50:28.773 232437 DEBUG nova.virt.libvirt.driver [None req-d215b82b-4294-4748-b739-5735b4cd7eb8 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] [instance: fa71018e-7574-4438-bb85-43d1c96cf9b9] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 02:50:28 np0005548731 nova_compute[232433]: 2025-12-06 07:50:28.773 232437 DEBUG nova.virt.libvirt.driver [None req-d215b82b-4294-4748-b739-5735b4cd7eb8 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] [instance: fa71018e-7574-4438-bb85-43d1c96cf9b9] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 02:50:28 np0005548731 nova_compute[232433]: 2025-12-06 07:50:28.773 232437 DEBUG nova.virt.libvirt.driver [None req-d215b82b-4294-4748-b739-5735b4cd7eb8 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] [instance: fa71018e-7574-4438-bb85-43d1c96cf9b9] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 02:50:28 np0005548731 nova_compute[232433]: 2025-12-06 07:50:28.774 232437 DEBUG nova.virt.libvirt.driver [None req-d215b82b-4294-4748-b739-5735b4cd7eb8 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] [instance: fa71018e-7574-4438-bb85-43d1c96cf9b9] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 02:50:28 np0005548731 nova_compute[232433]: 2025-12-06 07:50:28.774 232437 DEBUG nova.virt.libvirt.driver [None req-d215b82b-4294-4748-b739-5735b4cd7eb8 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] [instance: fa71018e-7574-4438-bb85-43d1c96cf9b9] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 02:50:28 np0005548731 nova_compute[232433]: 2025-12-06 07:50:28.775 232437 DEBUG nova.virt.libvirt.driver [None req-d215b82b-4294-4748-b739-5735b4cd7eb8 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] [instance: fa71018e-7574-4438-bb85-43d1c96cf9b9] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 02:50:28 np0005548731 nova_compute[232433]: 2025-12-06 07:50:28.799 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: fa71018e-7574-4438-bb85-43d1c96cf9b9] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  6 02:50:28 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:50:28 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:50:28 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:50:28.808 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:50:28 np0005548731 nova_compute[232433]: 2025-12-06 07:50:28.849 232437 INFO nova.compute.manager [None req-d215b82b-4294-4748-b739-5735b4cd7eb8 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] [instance: fa71018e-7574-4438-bb85-43d1c96cf9b9] Took 9.90 seconds to spawn the instance on the hypervisor.#033[00m
Dec  6 02:50:28 np0005548731 nova_compute[232433]: 2025-12-06 07:50:28.850 232437 DEBUG nova.compute.manager [None req-d215b82b-4294-4748-b739-5735b4cd7eb8 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] [instance: fa71018e-7574-4438-bb85-43d1c96cf9b9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:50:28 np0005548731 nova_compute[232433]: 2025-12-06 07:50:28.931 232437 INFO nova.compute.manager [None req-d215b82b-4294-4748-b739-5735b4cd7eb8 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] [instance: fa71018e-7574-4438-bb85-43d1c96cf9b9] Took 13.56 seconds to build instance.#033[00m
Dec  6 02:50:28 np0005548731 nova_compute[232433]: 2025-12-06 07:50:28.946 232437 DEBUG oslo_concurrency.lockutils [None req-d215b82b-4294-4748-b739-5735b4cd7eb8 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] Lock "fa71018e-7574-4438-bb85-43d1c96cf9b9" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.825s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:50:29 np0005548731 nova_compute[232433]: 2025-12-06 07:50:29.016 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:50:29 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:50:29 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:50:29 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:50:29.888 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:50:30 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:50:30 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:50:30 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:50:30.810 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:50:30 np0005548731 nova_compute[232433]: 2025-12-06 07:50:30.830 232437 DEBUG nova.compute.manager [req-7c67563e-cd0b-4a34-9eff-b917910e28f2 req-88fe9398-283d-4857-8d9e-14bab5d158b7 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: fa71018e-7574-4438-bb85-43d1c96cf9b9] Received event network-vif-plugged-341d07a0-1551-46f6-85b6-aace80d14532 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:50:30 np0005548731 nova_compute[232433]: 2025-12-06 07:50:30.830 232437 DEBUG oslo_concurrency.lockutils [req-7c67563e-cd0b-4a34-9eff-b917910e28f2 req-88fe9398-283d-4857-8d9e-14bab5d158b7 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "fa71018e-7574-4438-bb85-43d1c96cf9b9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:50:30 np0005548731 nova_compute[232433]: 2025-12-06 07:50:30.831 232437 DEBUG oslo_concurrency.lockutils [req-7c67563e-cd0b-4a34-9eff-b917910e28f2 req-88fe9398-283d-4857-8d9e-14bab5d158b7 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "fa71018e-7574-4438-bb85-43d1c96cf9b9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:50:30 np0005548731 nova_compute[232433]: 2025-12-06 07:50:30.831 232437 DEBUG oslo_concurrency.lockutils [req-7c67563e-cd0b-4a34-9eff-b917910e28f2 req-88fe9398-283d-4857-8d9e-14bab5d158b7 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "fa71018e-7574-4438-bb85-43d1c96cf9b9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:50:30 np0005548731 nova_compute[232433]: 2025-12-06 07:50:30.831 232437 DEBUG nova.compute.manager [req-7c67563e-cd0b-4a34-9eff-b917910e28f2 req-88fe9398-283d-4857-8d9e-14bab5d158b7 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: fa71018e-7574-4438-bb85-43d1c96cf9b9] No waiting events found dispatching network-vif-plugged-341d07a0-1551-46f6-85b6-aace80d14532 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 02:50:30 np0005548731 nova_compute[232433]: 2025-12-06 07:50:30.831 232437 WARNING nova.compute.manager [req-7c67563e-cd0b-4a34-9eff-b917910e28f2 req-88fe9398-283d-4857-8d9e-14bab5d158b7 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: fa71018e-7574-4438-bb85-43d1c96cf9b9] Received unexpected event network-vif-plugged-341d07a0-1551-46f6-85b6-aace80d14532 for instance with vm_state active and task_state None.#033[00m
Dec  6 02:50:30 np0005548731 nova_compute[232433]: 2025-12-06 07:50:30.865 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:50:31 np0005548731 ceph-osd[80147]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec  6 02:50:31 np0005548731 ceph-osd[80147]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 4802.4 total, 600.0 interval#012Cumulative writes: 59K writes, 239K keys, 59K commit groups, 1.0 writes per commit group, ingest: 0.24 GB, 0.05 MB/s#012Cumulative WAL: 59K writes, 21K syncs, 2.74 writes per sync, written: 0.24 GB, 0.05 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 10K writes, 40K keys, 10K commit groups, 1.0 writes per commit group, ingest: 41.89 MB, 0.07 MB/s#012Interval WAL: 10K writes, 4286 syncs, 2.51 writes per sync, written: 0.04 GB, 0.07 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec  6 02:50:31 np0005548731 nova_compute[232433]: 2025-12-06 07:50:31.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:50:31 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e373 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:50:31 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:50:31 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:50:31 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:50:31.891 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:50:32 np0005548731 nova_compute[232433]: 2025-12-06 07:50:32.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:50:32 np0005548731 nova_compute[232433]: 2025-12-06 07:50:32.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:50:32 np0005548731 nova_compute[232433]: 2025-12-06 07:50:32.127 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:50:32 np0005548731 nova_compute[232433]: 2025-12-06 07:50:32.127 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:50:32 np0005548731 nova_compute[232433]: 2025-12-06 07:50:32.128 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:50:32 np0005548731 nova_compute[232433]: 2025-12-06 07:50:32.128 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  6 02:50:32 np0005548731 nova_compute[232433]: 2025-12-06 07:50:32.128 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:50:32 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 02:50:32 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3325595356' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 02:50:32 np0005548731 nova_compute[232433]: 2025-12-06 07:50:32.741 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.612s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:50:32 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:50:32 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:50:32 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:50:32.812 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:50:32 np0005548731 nova_compute[232433]: 2025-12-06 07:50:32.818 232437 DEBUG nova.virt.libvirt.driver [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] skipping disk for instance-000000a6 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  6 02:50:32 np0005548731 nova_compute[232433]: 2025-12-06 07:50:32.818 232437 DEBUG nova.virt.libvirt.driver [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] skipping disk for instance-000000a6 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  6 02:50:32 np0005548731 nova_compute[232433]: 2025-12-06 07:50:32.976 232437 WARNING nova.virt.libvirt.driver [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  6 02:50:32 np0005548731 nova_compute[232433]: 2025-12-06 07:50:32.977 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4125MB free_disk=20.861888885498047GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  6 02:50:32 np0005548731 nova_compute[232433]: 2025-12-06 07:50:32.977 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:50:32 np0005548731 nova_compute[232433]: 2025-12-06 07:50:32.978 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:50:33 np0005548731 nova_compute[232433]: 2025-12-06 07:50:33.068 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Instance fa71018e-7574-4438-bb85-43d1c96cf9b9 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  6 02:50:33 np0005548731 nova_compute[232433]: 2025-12-06 07:50:33.069 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  6 02:50:33 np0005548731 nova_compute[232433]: 2025-12-06 07:50:33.069 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  6 02:50:33 np0005548731 nova_compute[232433]: 2025-12-06 07:50:33.134 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:50:33 np0005548731 nova_compute[232433]: 2025-12-06 07:50:33.568 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.434s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:50:33 np0005548731 nova_compute[232433]: 2025-12-06 07:50:33.575 232437 DEBUG nova.compute.provider_tree [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Inventory has not changed in ProviderTree for provider: 6d00757a-082f-486d-ae84-869a2ba2e6e7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  6 02:50:33 np0005548731 nova_compute[232433]: 2025-12-06 07:50:33.783 232437 DEBUG nova.scheduler.client.report [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Inventory has not changed for provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  6 02:50:33 np0005548731 nova_compute[232433]: 2025-12-06 07:50:33.876 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  6 02:50:33 np0005548731 nova_compute[232433]: 2025-12-06 07:50:33.877 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.899s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:50:33 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:50:33 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:50:33 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:50:33.894 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:50:34 np0005548731 nova_compute[232433]: 2025-12-06 07:50:34.048 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:50:34 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:50:34 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:50:34 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:50:34.815 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:50:34 np0005548731 nova_compute[232433]: 2025-12-06 07:50:34.877 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:50:35 np0005548731 nova_compute[232433]: 2025-12-06 07:50:35.868 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:50:35 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:50:35 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:50:35 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:50:35.897 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:50:36 np0005548731 nova_compute[232433]: 2025-12-06 07:50:36.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:50:36 np0005548731 nova_compute[232433]: 2025-12-06 07:50:36.298 232437 DEBUG oslo_concurrency.lockutils [None req-71c99f44-8c86-45e5-a802-d58fd01147d0 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] Acquiring lock "0b9681c0-c0e7-4bd8-9040-865c1bff517b" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:50:36 np0005548731 nova_compute[232433]: 2025-12-06 07:50:36.298 232437 DEBUG oslo_concurrency.lockutils [None req-71c99f44-8c86-45e5-a802-d58fd01147d0 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] Lock "0b9681c0-c0e7-4bd8-9040-865c1bff517b" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:50:36 np0005548731 nova_compute[232433]: 2025-12-06 07:50:36.350 232437 DEBUG nova.compute.manager [None req-71c99f44-8c86-45e5-a802-d58fd01147d0 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] [instance: 0b9681c0-c0e7-4bd8-9040-865c1bff517b] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Dec  6 02:50:36 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e373 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:50:36 np0005548731 nova_compute[232433]: 2025-12-06 07:50:36.743 232437 DEBUG oslo_concurrency.lockutils [None req-c4240f6d-c86f-4730-a164-c32b0647d681 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] Acquiring lock "fa71018e-7574-4438-bb85-43d1c96cf9b9" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:50:36 np0005548731 nova_compute[232433]: 2025-12-06 07:50:36.743 232437 DEBUG oslo_concurrency.lockutils [None req-c4240f6d-c86f-4730-a164-c32b0647d681 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] Lock "fa71018e-7574-4438-bb85-43d1c96cf9b9" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:50:36 np0005548731 nova_compute[232433]: 2025-12-06 07:50:36.763 232437 DEBUG nova.objects.instance [None req-c4240f6d-c86f-4730-a164-c32b0647d681 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] Lazy-loading 'flavor' on Instance uuid fa71018e-7574-4438-bb85-43d1c96cf9b9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:50:36 np0005548731 nova_compute[232433]: 2025-12-06 07:50:36.802 232437 DEBUG oslo_concurrency.lockutils [None req-c4240f6d-c86f-4730-a164-c32b0647d681 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] Lock "fa71018e-7574-4438-bb85-43d1c96cf9b9" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.059s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:50:36 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:50:36 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:50:36 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:50:36.817 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:50:37 np0005548731 nova_compute[232433]: 2025-12-06 07:50:37.076 232437 DEBUG oslo_concurrency.lockutils [None req-c4240f6d-c86f-4730-a164-c32b0647d681 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] Acquiring lock "fa71018e-7574-4438-bb85-43d1c96cf9b9" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:50:37 np0005548731 nova_compute[232433]: 2025-12-06 07:50:37.077 232437 DEBUG oslo_concurrency.lockutils [None req-c4240f6d-c86f-4730-a164-c32b0647d681 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] Lock "fa71018e-7574-4438-bb85-43d1c96cf9b9" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:50:37 np0005548731 nova_compute[232433]: 2025-12-06 07:50:37.078 232437 INFO nova.compute.manager [None req-c4240f6d-c86f-4730-a164-c32b0647d681 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] [instance: fa71018e-7574-4438-bb85-43d1c96cf9b9] Attaching volume 1c4642f5-e00c-416f-9c41-e5aa7293d85b to /dev/vdb#033[00m
Dec  6 02:50:37 np0005548731 nova_compute[232433]: 2025-12-06 07:50:37.241 232437 DEBUG oslo_concurrency.lockutils [None req-71c99f44-8c86-45e5-a802-d58fd01147d0 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:50:37 np0005548731 nova_compute[232433]: 2025-12-06 07:50:37.242 232437 DEBUG oslo_concurrency.lockutils [None req-71c99f44-8c86-45e5-a802-d58fd01147d0 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:50:37 np0005548731 nova_compute[232433]: 2025-12-06 07:50:37.243 232437 DEBUG os_brick.utils [None req-c4240f6d-c86f-4730-a164-c32b0647d681 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.102', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-2.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Dec  6 02:50:37 np0005548731 nova_compute[232433]: 2025-12-06 07:50:37.245 237736 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:50:37 np0005548731 nova_compute[232433]: 2025-12-06 07:50:37.249 232437 DEBUG nova.virt.hardware [None req-71c99f44-8c86-45e5-a802-d58fd01147d0 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Dec  6 02:50:37 np0005548731 nova_compute[232433]: 2025-12-06 07:50:37.250 232437 INFO nova.compute.claims [None req-71c99f44-8c86-45e5-a802-d58fd01147d0 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] [instance: 0b9681c0-c0e7-4bd8-9040-865c1bff517b] Claim successful on node compute-2.ctlplane.example.com#033[00m
Dec  6 02:50:37 np0005548731 nova_compute[232433]: 2025-12-06 07:50:37.260 237736 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.015s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:50:37 np0005548731 nova_compute[232433]: 2025-12-06 07:50:37.260 237736 DEBUG oslo.privsep.daemon [-] privsep: reply[9a3336ab-995d-4e84-87bf-ba096557f64d]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:50:37 np0005548731 nova_compute[232433]: 2025-12-06 07:50:37.262 237736 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:50:37 np0005548731 nova_compute[232433]: 2025-12-06 07:50:37.269 237736 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:50:37 np0005548731 nova_compute[232433]: 2025-12-06 07:50:37.270 237736 DEBUG oslo.privsep.daemon [-] privsep: reply[21647360-5f37-44cf-bb62-740339b63b74]: (4, ('InitiatorName=iqn.1994-05.com.redhat:63778d5959f0', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:50:37 np0005548731 nova_compute[232433]: 2025-12-06 07:50:37.271 237736 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:50:37 np0005548731 nova_compute[232433]: 2025-12-06 07:50:37.280 237736 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:50:37 np0005548731 nova_compute[232433]: 2025-12-06 07:50:37.280 237736 DEBUG oslo.privsep.daemon [-] privsep: reply[e4a76e4d-db69-4034-9f8c-88d0515eaded]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:50:37 np0005548731 nova_compute[232433]: 2025-12-06 07:50:37.281 237736 DEBUG oslo.privsep.daemon [-] privsep: reply[5f6367ea-6158-40ec-8883-ce5dfd3b16a7]: (4, 'ec2492e2-e0d1-43d3-b74f-744a8272a0e6') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:50:37 np0005548731 nova_compute[232433]: 2025-12-06 07:50:37.282 232437 DEBUG oslo_concurrency.processutils [None req-c4240f6d-c86f-4730-a164-c32b0647d681 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:50:37 np0005548731 nova_compute[232433]: 2025-12-06 07:50:37.318 232437 DEBUG oslo_concurrency.processutils [None req-c4240f6d-c86f-4730-a164-c32b0647d681 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] CMD "nvme version" returned: 0 in 0.036s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:50:37 np0005548731 nova_compute[232433]: 2025-12-06 07:50:37.320 232437 DEBUG os_brick.initiator.connectors.lightos [None req-c4240f6d-c86f-4730-a164-c32b0647d681 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Dec  6 02:50:37 np0005548731 nova_compute[232433]: 2025-12-06 07:50:37.321 232437 DEBUG os_brick.initiator.connectors.lightos [None req-c4240f6d-c86f-4730-a164-c32b0647d681 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Dec  6 02:50:37 np0005548731 nova_compute[232433]: 2025-12-06 07:50:37.321 232437 DEBUG os_brick.initiator.connectors.lightos [None req-c4240f6d-c86f-4730-a164-c32b0647d681 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:bf3e0a14-a5f8-4123-aa26-e7cad37b879a dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Dec  6 02:50:37 np0005548731 nova_compute[232433]: 2025-12-06 07:50:37.321 232437 DEBUG os_brick.utils [None req-c4240f6d-c86f-4730-a164-c32b0647d681 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] <== get_connector_properties: return (77ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.102', 'host': 'compute-2.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:63778d5959f0', 'do_local_attach': False, 'nvme_hostid': 'bf3e0a14-a5f8-4123-aa26-e7cad37b879a', 'system uuid': 'ec2492e2-e0d1-43d3-b74f-744a8272a0e6', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:bf3e0a14-a5f8-4123-aa26-e7cad37b879a', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Dec  6 02:50:37 np0005548731 nova_compute[232433]: 2025-12-06 07:50:37.321 232437 DEBUG nova.virt.block_device [None req-c4240f6d-c86f-4730-a164-c32b0647d681 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] [instance: fa71018e-7574-4438-bb85-43d1c96cf9b9] Updating existing volume attachment record: 8af8b9ed-2781-4f3f-8961-beda66d23d33 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Dec  6 02:50:37 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:50:37 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:50:37 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:50:37.900 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:50:38 np0005548731 nova_compute[232433]: 2025-12-06 07:50:38.246 232437 DEBUG oslo_concurrency.processutils [None req-71c99f44-8c86-45e5-a802-d58fd01147d0 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:50:38 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 02:50:38 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/375235154' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 02:50:38 np0005548731 nova_compute[232433]: 2025-12-06 07:50:38.709 232437 DEBUG oslo_concurrency.processutils [None req-71c99f44-8c86-45e5-a802-d58fd01147d0 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:50:38 np0005548731 nova_compute[232433]: 2025-12-06 07:50:38.715 232437 DEBUG nova.compute.provider_tree [None req-71c99f44-8c86-45e5-a802-d58fd01147d0 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] Inventory has not changed in ProviderTree for provider: 6d00757a-082f-486d-ae84-869a2ba2e6e7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  6 02:50:38 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:50:38 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:50:38 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:50:38.819 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:50:39 np0005548731 nova_compute[232433]: 2025-12-06 07:50:39.049 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:50:39 np0005548731 nova_compute[232433]: 2025-12-06 07:50:39.180 232437 DEBUG nova.scheduler.client.report [None req-71c99f44-8c86-45e5-a802-d58fd01147d0 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] Inventory has not changed for provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  6 02:50:39 np0005548731 nova_compute[232433]: 2025-12-06 07:50:39.463 232437 DEBUG oslo_concurrency.lockutils [None req-71c99f44-8c86-45e5-a802-d58fd01147d0 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 2.222s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:50:39 np0005548731 nova_compute[232433]: 2025-12-06 07:50:39.464 232437 DEBUG nova.compute.manager [None req-71c99f44-8c86-45e5-a802-d58fd01147d0 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] [instance: 0b9681c0-c0e7-4bd8-9040-865c1bff517b] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Dec  6 02:50:39 np0005548731 nova_compute[232433]: 2025-12-06 07:50:39.543 232437 DEBUG nova.compute.manager [None req-71c99f44-8c86-45e5-a802-d58fd01147d0 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] [instance: 0b9681c0-c0e7-4bd8-9040-865c1bff517b] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Dec  6 02:50:39 np0005548731 nova_compute[232433]: 2025-12-06 07:50:39.543 232437 DEBUG nova.network.neutron [None req-71c99f44-8c86-45e5-a802-d58fd01147d0 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] [instance: 0b9681c0-c0e7-4bd8-9040-865c1bff517b] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Dec  6 02:50:39 np0005548731 nova_compute[232433]: 2025-12-06 07:50:39.554 232437 DEBUG nova.objects.instance [None req-c4240f6d-c86f-4730-a164-c32b0647d681 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] Lazy-loading 'flavor' on Instance uuid fa71018e-7574-4438-bb85-43d1c96cf9b9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:50:39 np0005548731 nova_compute[232433]: 2025-12-06 07:50:39.586 232437 INFO nova.virt.libvirt.driver [None req-71c99f44-8c86-45e5-a802-d58fd01147d0 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] [instance: 0b9681c0-c0e7-4bd8-9040-865c1bff517b] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Dec  6 02:50:39 np0005548731 nova_compute[232433]: 2025-12-06 07:50:39.594 232437 DEBUG nova.virt.libvirt.driver [None req-c4240f6d-c86f-4730-a164-c32b0647d681 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] [instance: fa71018e-7574-4438-bb85-43d1c96cf9b9] Attempting to attach volume 1c4642f5-e00c-416f-9c41-e5aa7293d85b with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168#033[00m
Dec  6 02:50:39 np0005548731 nova_compute[232433]: 2025-12-06 07:50:39.596 232437 DEBUG nova.virt.libvirt.guest [None req-c4240f6d-c86f-4730-a164-c32b0647d681 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] attach device xml: <disk type="network" device="disk">
Dec  6 02:50:39 np0005548731 nova_compute[232433]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Dec  6 02:50:39 np0005548731 nova_compute[232433]:  <source protocol="rbd" name="volumes/volume-1c4642f5-e00c-416f-9c41-e5aa7293d85b">
Dec  6 02:50:39 np0005548731 nova_compute[232433]:    <host name="192.168.122.100" port="6789"/>
Dec  6 02:50:39 np0005548731 nova_compute[232433]:    <host name="192.168.122.102" port="6789"/>
Dec  6 02:50:39 np0005548731 nova_compute[232433]:    <host name="192.168.122.101" port="6789"/>
Dec  6 02:50:39 np0005548731 nova_compute[232433]:  </source>
Dec  6 02:50:39 np0005548731 nova_compute[232433]:  <auth username="openstack">
Dec  6 02:50:39 np0005548731 nova_compute[232433]:    <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec  6 02:50:39 np0005548731 nova_compute[232433]:  </auth>
Dec  6 02:50:39 np0005548731 nova_compute[232433]:  <target dev="vdb" bus="virtio"/>
Dec  6 02:50:39 np0005548731 nova_compute[232433]:  <serial>1c4642f5-e00c-416f-9c41-e5aa7293d85b</serial>
Dec  6 02:50:39 np0005548731 nova_compute[232433]: </disk>
Dec  6 02:50:39 np0005548731 nova_compute[232433]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339#033[00m
Dec  6 02:50:39 np0005548731 nova_compute[232433]: 2025-12-06 07:50:39.605 232437 DEBUG nova.compute.manager [None req-71c99f44-8c86-45e5-a802-d58fd01147d0 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] [instance: 0b9681c0-c0e7-4bd8-9040-865c1bff517b] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Dec  6 02:50:39 np0005548731 nova_compute[232433]: 2025-12-06 07:50:39.718 232437 DEBUG nova.compute.manager [None req-71c99f44-8c86-45e5-a802-d58fd01147d0 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] [instance: 0b9681c0-c0e7-4bd8-9040-865c1bff517b] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Dec  6 02:50:39 np0005548731 nova_compute[232433]: 2025-12-06 07:50:39.719 232437 DEBUG nova.virt.libvirt.driver [None req-71c99f44-8c86-45e5-a802-d58fd01147d0 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] [instance: 0b9681c0-c0e7-4bd8-9040-865c1bff517b] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Dec  6 02:50:39 np0005548731 nova_compute[232433]: 2025-12-06 07:50:39.720 232437 INFO nova.virt.libvirt.driver [None req-71c99f44-8c86-45e5-a802-d58fd01147d0 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] [instance: 0b9681c0-c0e7-4bd8-9040-865c1bff517b] Creating image(s)#033[00m
Dec  6 02:50:39 np0005548731 nova_compute[232433]: 2025-12-06 07:50:39.747 232437 DEBUG nova.storage.rbd_utils [None req-71c99f44-8c86-45e5-a802-d58fd01147d0 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] rbd image 0b9681c0-c0e7-4bd8-9040-865c1bff517b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:50:39 np0005548731 nova_compute[232433]: 2025-12-06 07:50:39.774 232437 DEBUG nova.storage.rbd_utils [None req-71c99f44-8c86-45e5-a802-d58fd01147d0 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] rbd image 0b9681c0-c0e7-4bd8-9040-865c1bff517b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:50:39 np0005548731 nova_compute[232433]: 2025-12-06 07:50:39.801 232437 DEBUG nova.storage.rbd_utils [None req-71c99f44-8c86-45e5-a802-d58fd01147d0 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] rbd image 0b9681c0-c0e7-4bd8-9040-865c1bff517b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:50:39 np0005548731 nova_compute[232433]: 2025-12-06 07:50:39.804 232437 DEBUG oslo_concurrency.processutils [None req-71c99f44-8c86-45e5-a802-d58fd01147d0 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:50:39 np0005548731 nova_compute[232433]: 2025-12-06 07:50:39.833 232437 DEBUG nova.policy [None req-71c99f44-8c86-45e5-a802-d58fd01147d0 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '90c9de6e67724c898a8e23b05fbf14da', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'cfa713d92cc94fa1b94404ed58b0563f', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Dec  6 02:50:39 np0005548731 nova_compute[232433]: 2025-12-06 07:50:39.842 232437 DEBUG nova.virt.libvirt.driver [None req-c4240f6d-c86f-4730-a164-c32b0647d681 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  6 02:50:39 np0005548731 nova_compute[232433]: 2025-12-06 07:50:39.843 232437 DEBUG nova.virt.libvirt.driver [None req-c4240f6d-c86f-4730-a164-c32b0647d681 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  6 02:50:39 np0005548731 nova_compute[232433]: 2025-12-06 07:50:39.843 232437 DEBUG nova.virt.libvirt.driver [None req-c4240f6d-c86f-4730-a164-c32b0647d681 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  6 02:50:39 np0005548731 nova_compute[232433]: 2025-12-06 07:50:39.843 232437 DEBUG nova.virt.libvirt.driver [None req-c4240f6d-c86f-4730-a164-c32b0647d681 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] No VIF found with MAC fa:16:3e:71:74:18, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec  6 02:50:39 np0005548731 nova_compute[232433]: 2025-12-06 07:50:39.873 232437 DEBUG oslo_concurrency.processutils [None req-71c99f44-8c86-45e5-a802-d58fd01147d0 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:50:39 np0005548731 nova_compute[232433]: 2025-12-06 07:50:39.873 232437 DEBUG oslo_concurrency.lockutils [None req-71c99f44-8c86-45e5-a802-d58fd01147d0 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] Acquiring lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:50:39 np0005548731 nova_compute[232433]: 2025-12-06 07:50:39.874 232437 DEBUG oslo_concurrency.lockutils [None req-71c99f44-8c86-45e5-a802-d58fd01147d0 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:50:39 np0005548731 nova_compute[232433]: 2025-12-06 07:50:39.874 232437 DEBUG oslo_concurrency.lockutils [None req-71c99f44-8c86-45e5-a802-d58fd01147d0 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:50:39 np0005548731 nova_compute[232433]: 2025-12-06 07:50:39.898 232437 DEBUG nova.storage.rbd_utils [None req-71c99f44-8c86-45e5-a802-d58fd01147d0 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] rbd image 0b9681c0-c0e7-4bd8-9040-865c1bff517b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:50:39 np0005548731 nova_compute[232433]: 2025-12-06 07:50:39.902 232437 DEBUG oslo_concurrency.processutils [None req-71c99f44-8c86-45e5-a802-d58fd01147d0 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef 0b9681c0-c0e7-4bd8-9040-865c1bff517b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:50:39 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:50:39 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 02:50:39 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:50:39.902 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 02:50:40 np0005548731 nova_compute[232433]: 2025-12-06 07:50:40.644 232437 DEBUG oslo_concurrency.lockutils [None req-c4240f6d-c86f-4730-a164-c32b0647d681 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] Lock "fa71018e-7574-4438-bb85-43d1c96cf9b9" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 3.567s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:50:40 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:50:40 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:50:40 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:50:40.821 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:50:40 np0005548731 nova_compute[232433]: 2025-12-06 07:50:40.914 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:50:41 np0005548731 nova_compute[232433]: 2025-12-06 07:50:41.037 232437 DEBUG nova.network.neutron [None req-71c99f44-8c86-45e5-a802-d58fd01147d0 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] [instance: 0b9681c0-c0e7-4bd8-9040-865c1bff517b] Successfully created port: 1d320b87-e6ec-40ef-b2ce-50bd50b6f5fc _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Dec  6 02:50:41 np0005548731 nova_compute[232433]: 2025-12-06 07:50:41.542 232437 DEBUG oslo_concurrency.processutils [None req-71c99f44-8c86-45e5-a802-d58fd01147d0 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef 0b9681c0-c0e7-4bd8-9040-865c1bff517b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.641s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:50:41 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e373 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:50:41 np0005548731 nova_compute[232433]: 2025-12-06 07:50:41.618 232437 DEBUG nova.storage.rbd_utils [None req-71c99f44-8c86-45e5-a802-d58fd01147d0 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] resizing rbd image 0b9681c0-c0e7-4bd8-9040-865c1bff517b_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Dec  6 02:50:41 np0005548731 nova_compute[232433]: 2025-12-06 07:50:41.710 232437 DEBUG nova.objects.instance [None req-71c99f44-8c86-45e5-a802-d58fd01147d0 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] Lazy-loading 'migration_context' on Instance uuid 0b9681c0-c0e7-4bd8-9040-865c1bff517b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:50:41 np0005548731 nova_compute[232433]: 2025-12-06 07:50:41.724 232437 DEBUG nova.virt.libvirt.driver [None req-71c99f44-8c86-45e5-a802-d58fd01147d0 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] [instance: 0b9681c0-c0e7-4bd8-9040-865c1bff517b] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Dec  6 02:50:41 np0005548731 nova_compute[232433]: 2025-12-06 07:50:41.724 232437 DEBUG nova.virt.libvirt.driver [None req-71c99f44-8c86-45e5-a802-d58fd01147d0 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] [instance: 0b9681c0-c0e7-4bd8-9040-865c1bff517b] Ensure instance console log exists: /var/lib/nova/instances/0b9681c0-c0e7-4bd8-9040-865c1bff517b/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec  6 02:50:41 np0005548731 nova_compute[232433]: 2025-12-06 07:50:41.725 232437 DEBUG oslo_concurrency.lockutils [None req-71c99f44-8c86-45e5-a802-d58fd01147d0 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:50:41 np0005548731 nova_compute[232433]: 2025-12-06 07:50:41.726 232437 DEBUG oslo_concurrency.lockutils [None req-71c99f44-8c86-45e5-a802-d58fd01147d0 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:50:41 np0005548731 nova_compute[232433]: 2025-12-06 07:50:41.726 232437 DEBUG oslo_concurrency.lockutils [None req-71c99f44-8c86-45e5-a802-d58fd01147d0 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:50:41 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:50:41 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:50:41 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:50:41.906 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:50:41 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e374 e374: 3 total, 3 up, 3 in
Dec  6 02:50:42 np0005548731 nova_compute[232433]: 2025-12-06 07:50:42.175 232437 DEBUG nova.network.neutron [None req-71c99f44-8c86-45e5-a802-d58fd01147d0 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] [instance: 0b9681c0-c0e7-4bd8-9040-865c1bff517b] Successfully updated port: 1d320b87-e6ec-40ef-b2ce-50bd50b6f5fc _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Dec  6 02:50:42 np0005548731 nova_compute[232433]: 2025-12-06 07:50:42.196 232437 DEBUG oslo_concurrency.lockutils [None req-71c99f44-8c86-45e5-a802-d58fd01147d0 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] Acquiring lock "refresh_cache-0b9681c0-c0e7-4bd8-9040-865c1bff517b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 02:50:42 np0005548731 nova_compute[232433]: 2025-12-06 07:50:42.196 232437 DEBUG oslo_concurrency.lockutils [None req-71c99f44-8c86-45e5-a802-d58fd01147d0 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] Acquired lock "refresh_cache-0b9681c0-c0e7-4bd8-9040-865c1bff517b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 02:50:42 np0005548731 nova_compute[232433]: 2025-12-06 07:50:42.197 232437 DEBUG nova.network.neutron [None req-71c99f44-8c86-45e5-a802-d58fd01147d0 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] [instance: 0b9681c0-c0e7-4bd8-9040-865c1bff517b] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec  6 02:50:42 np0005548731 nova_compute[232433]: 2025-12-06 07:50:42.369 232437 DEBUG nova.network.neutron [None req-71c99f44-8c86-45e5-a802-d58fd01147d0 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] [instance: 0b9681c0-c0e7-4bd8-9040-865c1bff517b] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec  6 02:50:42 np0005548731 nova_compute[232433]: 2025-12-06 07:50:42.392 232437 DEBUG nova.compute.manager [req-6cde9deb-412d-40fd-ace4-1050875f79c6 req-f90ad332-a7ac-4a33-a2ed-aaa6b54beadf 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 0b9681c0-c0e7-4bd8-9040-865c1bff517b] Received event network-changed-1d320b87-e6ec-40ef-b2ce-50bd50b6f5fc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:50:42 np0005548731 nova_compute[232433]: 2025-12-06 07:50:42.392 232437 DEBUG nova.compute.manager [req-6cde9deb-412d-40fd-ace4-1050875f79c6 req-f90ad332-a7ac-4a33-a2ed-aaa6b54beadf 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 0b9681c0-c0e7-4bd8-9040-865c1bff517b] Refreshing instance network info cache due to event network-changed-1d320b87-e6ec-40ef-b2ce-50bd50b6f5fc. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  6 02:50:42 np0005548731 nova_compute[232433]: 2025-12-06 07:50:42.393 232437 DEBUG oslo_concurrency.lockutils [req-6cde9deb-412d-40fd-ace4-1050875f79c6 req-f90ad332-a7ac-4a33-a2ed-aaa6b54beadf 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-0b9681c0-c0e7-4bd8-9040-865c1bff517b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 02:50:42 np0005548731 ovn_controller[133927]: 2025-12-06T07:50:42Z|00105|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:71:74:18 10.100.0.13
Dec  6 02:50:42 np0005548731 ovn_controller[133927]: 2025-12-06T07:50:42Z|00106|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:71:74:18 10.100.0.13
Dec  6 02:50:42 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:50:42 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:50:42 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:50:42.823 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:50:43 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:50:43 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:50:43 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:50:43.909 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:50:44 np0005548731 nova_compute[232433]: 2025-12-06 07:50:44.052 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:50:44 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:50:44 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:50:44 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:50:44.826 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:50:45 np0005548731 nova_compute[232433]: 2025-12-06 07:50:45.749 232437 DEBUG nova.network.neutron [None req-71c99f44-8c86-45e5-a802-d58fd01147d0 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] [instance: 0b9681c0-c0e7-4bd8-9040-865c1bff517b] Updating instance_info_cache with network_info: [{"id": "1d320b87-e6ec-40ef-b2ce-50bd50b6f5fc", "address": "fa:16:3e:b7:01:5b", "network": {"id": "45904a2f-a5c2-4047-9c19-a87d36354c1b", "bridge": "br-int", "label": "tempest-AttachVolumeShelveTestJSON-1547381509-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cfa713d92cc94fa1b94404ed58b0563f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1d320b87-e6", "ovs_interfaceid": "1d320b87-e6ec-40ef-b2ce-50bd50b6f5fc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 02:50:45 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:50:45 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:50:45 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:50:45.912 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:50:45 np0005548731 nova_compute[232433]: 2025-12-06 07:50:45.916 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:50:46 np0005548731 nova_compute[232433]: 2025-12-06 07:50:46.139 232437 DEBUG oslo_concurrency.lockutils [None req-71c99f44-8c86-45e5-a802-d58fd01147d0 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] Releasing lock "refresh_cache-0b9681c0-c0e7-4bd8-9040-865c1bff517b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 02:50:46 np0005548731 nova_compute[232433]: 2025-12-06 07:50:46.139 232437 DEBUG nova.compute.manager [None req-71c99f44-8c86-45e5-a802-d58fd01147d0 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] [instance: 0b9681c0-c0e7-4bd8-9040-865c1bff517b] Instance network_info: |[{"id": "1d320b87-e6ec-40ef-b2ce-50bd50b6f5fc", "address": "fa:16:3e:b7:01:5b", "network": {"id": "45904a2f-a5c2-4047-9c19-a87d36354c1b", "bridge": "br-int", "label": "tempest-AttachVolumeShelveTestJSON-1547381509-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cfa713d92cc94fa1b94404ed58b0563f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1d320b87-e6", "ovs_interfaceid": "1d320b87-e6ec-40ef-b2ce-50bd50b6f5fc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Dec  6 02:50:46 np0005548731 nova_compute[232433]: 2025-12-06 07:50:46.140 232437 DEBUG oslo_concurrency.lockutils [req-6cde9deb-412d-40fd-ace4-1050875f79c6 req-f90ad332-a7ac-4a33-a2ed-aaa6b54beadf 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-0b9681c0-c0e7-4bd8-9040-865c1bff517b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 02:50:46 np0005548731 nova_compute[232433]: 2025-12-06 07:50:46.140 232437 DEBUG nova.network.neutron [req-6cde9deb-412d-40fd-ace4-1050875f79c6 req-f90ad332-a7ac-4a33-a2ed-aaa6b54beadf 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 0b9681c0-c0e7-4bd8-9040-865c1bff517b] Refreshing network info cache for port 1d320b87-e6ec-40ef-b2ce-50bd50b6f5fc _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  6 02:50:46 np0005548731 nova_compute[232433]: 2025-12-06 07:50:46.144 232437 DEBUG nova.virt.libvirt.driver [None req-71c99f44-8c86-45e5-a802-d58fd01147d0 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] [instance: 0b9681c0-c0e7-4bd8-9040-865c1bff517b] Start _get_guest_xml network_info=[{"id": "1d320b87-e6ec-40ef-b2ce-50bd50b6f5fc", "address": "fa:16:3e:b7:01:5b", "network": {"id": "45904a2f-a5c2-4047-9c19-a87d36354c1b", "bridge": "br-int", "label": "tempest-AttachVolumeShelveTestJSON-1547381509-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cfa713d92cc94fa1b94404ed58b0563f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1d320b87-e6", "ovs_interfaceid": "1d320b87-e6ec-40ef-b2ce-50bd50b6f5fc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'device_type': 'disk', 'encryption_secret_uuid': None, 'disk_bus': 'virtio', 'boot_index': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'encryption_options': None, 'size': 0, 'encrypted': False, 'image_id': '6efab05d-c7cf-4770-a5c3-c806a2739063'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec  6 02:50:46 np0005548731 nova_compute[232433]: 2025-12-06 07:50:46.148 232437 WARNING nova.virt.libvirt.driver [None req-71c99f44-8c86-45e5-a802-d58fd01147d0 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  6 02:50:46 np0005548731 nova_compute[232433]: 2025-12-06 07:50:46.153 232437 DEBUG nova.virt.libvirt.host [None req-71c99f44-8c86-45e5-a802-d58fd01147d0 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec  6 02:50:46 np0005548731 nova_compute[232433]: 2025-12-06 07:50:46.154 232437 DEBUG nova.virt.libvirt.host [None req-71c99f44-8c86-45e5-a802-d58fd01147d0 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec  6 02:50:46 np0005548731 nova_compute[232433]: 2025-12-06 07:50:46.159 232437 DEBUG nova.virt.libvirt.host [None req-71c99f44-8c86-45e5-a802-d58fd01147d0 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec  6 02:50:46 np0005548731 nova_compute[232433]: 2025-12-06 07:50:46.159 232437 DEBUG nova.virt.libvirt.host [None req-71c99f44-8c86-45e5-a802-d58fd01147d0 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec  6 02:50:46 np0005548731 nova_compute[232433]: 2025-12-06 07:50:46.160 232437 DEBUG nova.virt.libvirt.driver [None req-71c99f44-8c86-45e5-a802-d58fd01147d0 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec  6 02:50:46 np0005548731 nova_compute[232433]: 2025-12-06 07:50:46.160 232437 DEBUG nova.virt.hardware [None req-71c99f44-8c86-45e5-a802-d58fd01147d0 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-06T06:56:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='25848a18-11d9-4f11-80b5-5d005675c76d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec  6 02:50:46 np0005548731 nova_compute[232433]: 2025-12-06 07:50:46.161 232437 DEBUG nova.virt.hardware [None req-71c99f44-8c86-45e5-a802-d58fd01147d0 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec  6 02:50:46 np0005548731 nova_compute[232433]: 2025-12-06 07:50:46.161 232437 DEBUG nova.virt.hardware [None req-71c99f44-8c86-45e5-a802-d58fd01147d0 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec  6 02:50:46 np0005548731 nova_compute[232433]: 2025-12-06 07:50:46.161 232437 DEBUG nova.virt.hardware [None req-71c99f44-8c86-45e5-a802-d58fd01147d0 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec  6 02:50:46 np0005548731 nova_compute[232433]: 2025-12-06 07:50:46.161 232437 DEBUG nova.virt.hardware [None req-71c99f44-8c86-45e5-a802-d58fd01147d0 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec  6 02:50:46 np0005548731 nova_compute[232433]: 2025-12-06 07:50:46.161 232437 DEBUG nova.virt.hardware [None req-71c99f44-8c86-45e5-a802-d58fd01147d0 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec  6 02:50:46 np0005548731 nova_compute[232433]: 2025-12-06 07:50:46.162 232437 DEBUG nova.virt.hardware [None req-71c99f44-8c86-45e5-a802-d58fd01147d0 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec  6 02:50:46 np0005548731 nova_compute[232433]: 2025-12-06 07:50:46.162 232437 DEBUG nova.virt.hardware [None req-71c99f44-8c86-45e5-a802-d58fd01147d0 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec  6 02:50:46 np0005548731 nova_compute[232433]: 2025-12-06 07:50:46.162 232437 DEBUG nova.virt.hardware [None req-71c99f44-8c86-45e5-a802-d58fd01147d0 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec  6 02:50:46 np0005548731 nova_compute[232433]: 2025-12-06 07:50:46.162 232437 DEBUG nova.virt.hardware [None req-71c99f44-8c86-45e5-a802-d58fd01147d0 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec  6 02:50:46 np0005548731 nova_compute[232433]: 2025-12-06 07:50:46.162 232437 DEBUG nova.virt.hardware [None req-71c99f44-8c86-45e5-a802-d58fd01147d0 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec  6 02:50:46 np0005548731 nova_compute[232433]: 2025-12-06 07:50:46.165 232437 DEBUG oslo_concurrency.processutils [None req-71c99f44-8c86-45e5-a802-d58fd01147d0 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:50:46 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e374 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:50:46 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  6 02:50:46 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2810597353' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  6 02:50:46 np0005548731 nova_compute[232433]: 2025-12-06 07:50:46.593 232437 DEBUG oslo_concurrency.processutils [None req-71c99f44-8c86-45e5-a802-d58fd01147d0 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.428s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:50:46 np0005548731 nova_compute[232433]: 2025-12-06 07:50:46.618 232437 DEBUG nova.storage.rbd_utils [None req-71c99f44-8c86-45e5-a802-d58fd01147d0 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] rbd image 0b9681c0-c0e7-4bd8-9040-865c1bff517b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:50:46 np0005548731 nova_compute[232433]: 2025-12-06 07:50:46.622 232437 DEBUG oslo_concurrency.processutils [None req-71c99f44-8c86-45e5-a802-d58fd01147d0 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:50:46 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:50:46 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:50:46 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:50:46.828 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:50:47 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  6 02:50:47 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/588140338' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  6 02:50:47 np0005548731 nova_compute[232433]: 2025-12-06 07:50:47.071 232437 DEBUG oslo_concurrency.processutils [None req-71c99f44-8c86-45e5-a802-d58fd01147d0 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.449s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:50:47 np0005548731 nova_compute[232433]: 2025-12-06 07:50:47.073 232437 DEBUG nova.virt.libvirt.vif [None req-71c99f44-8c86-45e5-a802-d58fd01147d0 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T07:50:34Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-AttachVolumeShelveTestJSON-server-676074581',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-attachvolumeshelvetestjson-server-676074581',id=168,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNKYB7DoHCE/Zq13G9bNkMnqez+ah/cvrfQVcJ98DATDISc+hKRWhcY3n96hXJBgGVFk2F3L+nAWB+E3c8HLOV2K86PN0fFtoRWNhKFWMW6mo0EoTV5X0kp2CVI8eUnC1g==',key_name='tempest-keypair-12606996',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='cfa713d92cc94fa1b94404ed58b0563f',ramdisk_id='',reservation_id='r-qd0qnxrq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachVolumeShelveTestJSON-1510980811',owner_user_name='tempest-AttachVolumeShelveTestJSON-1510980811-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:50:39Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='90c9de6e67724c898a8e23b05fbf14da',uuid=0b9681c0-c0e7-4bd8-9040-865c1bff517b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1d320b87-e6ec-40ef-b2ce-50bd50b6f5fc", "address": "fa:16:3e:b7:01:5b", "network": {"id": "45904a2f-a5c2-4047-9c19-a87d36354c1b", "bridge": "br-int", "label": "tempest-AttachVolumeShelveTestJSON-1547381509-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cfa713d92cc94fa1b94404ed58b0563f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1d320b87-e6", "ovs_interfaceid": "1d320b87-e6ec-40ef-b2ce-50bd50b6f5fc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec  6 02:50:47 np0005548731 nova_compute[232433]: 2025-12-06 07:50:47.073 232437 DEBUG nova.network.os_vif_util [None req-71c99f44-8c86-45e5-a802-d58fd01147d0 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] Converting VIF {"id": "1d320b87-e6ec-40ef-b2ce-50bd50b6f5fc", "address": "fa:16:3e:b7:01:5b", "network": {"id": "45904a2f-a5c2-4047-9c19-a87d36354c1b", "bridge": "br-int", "label": "tempest-AttachVolumeShelveTestJSON-1547381509-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cfa713d92cc94fa1b94404ed58b0563f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1d320b87-e6", "ovs_interfaceid": "1d320b87-e6ec-40ef-b2ce-50bd50b6f5fc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 02:50:47 np0005548731 nova_compute[232433]: 2025-12-06 07:50:47.074 232437 DEBUG nova.network.os_vif_util [None req-71c99f44-8c86-45e5-a802-d58fd01147d0 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b7:01:5b,bridge_name='br-int',has_traffic_filtering=True,id=1d320b87-e6ec-40ef-b2ce-50bd50b6f5fc,network=Network(45904a2f-a5c2-4047-9c19-a87d36354c1b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1d320b87-e6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 02:50:47 np0005548731 nova_compute[232433]: 2025-12-06 07:50:47.075 232437 DEBUG nova.objects.instance [None req-71c99f44-8c86-45e5-a802-d58fd01147d0 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] Lazy-loading 'pci_devices' on Instance uuid 0b9681c0-c0e7-4bd8-9040-865c1bff517b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:50:47 np0005548731 nova_compute[232433]: 2025-12-06 07:50:47.146 232437 DEBUG nova.virt.libvirt.driver [None req-71c99f44-8c86-45e5-a802-d58fd01147d0 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] [instance: 0b9681c0-c0e7-4bd8-9040-865c1bff517b] End _get_guest_xml xml=<domain type="kvm">
Dec  6 02:50:47 np0005548731 nova_compute[232433]:  <uuid>0b9681c0-c0e7-4bd8-9040-865c1bff517b</uuid>
Dec  6 02:50:47 np0005548731 nova_compute[232433]:  <name>instance-000000a8</name>
Dec  6 02:50:47 np0005548731 nova_compute[232433]:  <memory>131072</memory>
Dec  6 02:50:47 np0005548731 nova_compute[232433]:  <vcpu>1</vcpu>
Dec  6 02:50:47 np0005548731 nova_compute[232433]:  <metadata>
Dec  6 02:50:47 np0005548731 nova_compute[232433]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec  6 02:50:47 np0005548731 nova_compute[232433]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec  6 02:50:47 np0005548731 nova_compute[232433]:      <nova:name>tempest-AttachVolumeShelveTestJSON-server-676074581</nova:name>
Dec  6 02:50:47 np0005548731 nova_compute[232433]:      <nova:creationTime>2025-12-06 07:50:46</nova:creationTime>
Dec  6 02:50:47 np0005548731 nova_compute[232433]:      <nova:flavor name="m1.nano">
Dec  6 02:50:47 np0005548731 nova_compute[232433]:        <nova:memory>128</nova:memory>
Dec  6 02:50:47 np0005548731 nova_compute[232433]:        <nova:disk>1</nova:disk>
Dec  6 02:50:47 np0005548731 nova_compute[232433]:        <nova:swap>0</nova:swap>
Dec  6 02:50:47 np0005548731 nova_compute[232433]:        <nova:ephemeral>0</nova:ephemeral>
Dec  6 02:50:47 np0005548731 nova_compute[232433]:        <nova:vcpus>1</nova:vcpus>
Dec  6 02:50:47 np0005548731 nova_compute[232433]:      </nova:flavor>
Dec  6 02:50:47 np0005548731 nova_compute[232433]:      <nova:owner>
Dec  6 02:50:47 np0005548731 nova_compute[232433]:        <nova:user uuid="90c9de6e67724c898a8e23b05fbf14da">tempest-AttachVolumeShelveTestJSON-1510980811-project-member</nova:user>
Dec  6 02:50:47 np0005548731 nova_compute[232433]:        <nova:project uuid="cfa713d92cc94fa1b94404ed58b0563f">tempest-AttachVolumeShelveTestJSON-1510980811</nova:project>
Dec  6 02:50:47 np0005548731 nova_compute[232433]:      </nova:owner>
Dec  6 02:50:47 np0005548731 nova_compute[232433]:      <nova:root type="image" uuid="6efab05d-c7cf-4770-a5c3-c806a2739063"/>
Dec  6 02:50:47 np0005548731 nova_compute[232433]:      <nova:ports>
Dec  6 02:50:47 np0005548731 nova_compute[232433]:        <nova:port uuid="1d320b87-e6ec-40ef-b2ce-50bd50b6f5fc">
Dec  6 02:50:47 np0005548731 nova_compute[232433]:          <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Dec  6 02:50:47 np0005548731 nova_compute[232433]:        </nova:port>
Dec  6 02:50:47 np0005548731 nova_compute[232433]:      </nova:ports>
Dec  6 02:50:47 np0005548731 nova_compute[232433]:    </nova:instance>
Dec  6 02:50:47 np0005548731 nova_compute[232433]:  </metadata>
Dec  6 02:50:47 np0005548731 nova_compute[232433]:  <sysinfo type="smbios">
Dec  6 02:50:47 np0005548731 nova_compute[232433]:    <system>
Dec  6 02:50:47 np0005548731 nova_compute[232433]:      <entry name="manufacturer">RDO</entry>
Dec  6 02:50:47 np0005548731 nova_compute[232433]:      <entry name="product">OpenStack Compute</entry>
Dec  6 02:50:47 np0005548731 nova_compute[232433]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec  6 02:50:47 np0005548731 nova_compute[232433]:      <entry name="serial">0b9681c0-c0e7-4bd8-9040-865c1bff517b</entry>
Dec  6 02:50:47 np0005548731 nova_compute[232433]:      <entry name="uuid">0b9681c0-c0e7-4bd8-9040-865c1bff517b</entry>
Dec  6 02:50:47 np0005548731 nova_compute[232433]:      <entry name="family">Virtual Machine</entry>
Dec  6 02:50:47 np0005548731 nova_compute[232433]:    </system>
Dec  6 02:50:47 np0005548731 nova_compute[232433]:  </sysinfo>
Dec  6 02:50:47 np0005548731 nova_compute[232433]:  <os>
Dec  6 02:50:47 np0005548731 nova_compute[232433]:    <type arch="x86_64" machine="q35">hvm</type>
Dec  6 02:50:47 np0005548731 nova_compute[232433]:    <boot dev="hd"/>
Dec  6 02:50:47 np0005548731 nova_compute[232433]:    <smbios mode="sysinfo"/>
Dec  6 02:50:47 np0005548731 nova_compute[232433]:  </os>
Dec  6 02:50:47 np0005548731 nova_compute[232433]:  <features>
Dec  6 02:50:47 np0005548731 nova_compute[232433]:    <acpi/>
Dec  6 02:50:47 np0005548731 nova_compute[232433]:    <apic/>
Dec  6 02:50:47 np0005548731 nova_compute[232433]:    <vmcoreinfo/>
Dec  6 02:50:47 np0005548731 nova_compute[232433]:  </features>
Dec  6 02:50:47 np0005548731 nova_compute[232433]:  <clock offset="utc">
Dec  6 02:50:47 np0005548731 nova_compute[232433]:    <timer name="pit" tickpolicy="delay"/>
Dec  6 02:50:47 np0005548731 nova_compute[232433]:    <timer name="rtc" tickpolicy="catchup"/>
Dec  6 02:50:47 np0005548731 nova_compute[232433]:    <timer name="hpet" present="no"/>
Dec  6 02:50:47 np0005548731 nova_compute[232433]:  </clock>
Dec  6 02:50:47 np0005548731 nova_compute[232433]:  <cpu mode="custom" match="exact">
Dec  6 02:50:47 np0005548731 nova_compute[232433]:    <model>Nehalem</model>
Dec  6 02:50:47 np0005548731 nova_compute[232433]:    <topology sockets="1" cores="1" threads="1"/>
Dec  6 02:50:47 np0005548731 nova_compute[232433]:  </cpu>
Dec  6 02:50:47 np0005548731 nova_compute[232433]:  <devices>
Dec  6 02:50:47 np0005548731 nova_compute[232433]:    <disk type="network" device="disk">
Dec  6 02:50:47 np0005548731 nova_compute[232433]:      <driver type="raw" cache="none"/>
Dec  6 02:50:47 np0005548731 nova_compute[232433]:      <source protocol="rbd" name="vms/0b9681c0-c0e7-4bd8-9040-865c1bff517b_disk">
Dec  6 02:50:47 np0005548731 nova_compute[232433]:        <host name="192.168.122.100" port="6789"/>
Dec  6 02:50:47 np0005548731 nova_compute[232433]:        <host name="192.168.122.102" port="6789"/>
Dec  6 02:50:47 np0005548731 nova_compute[232433]:        <host name="192.168.122.101" port="6789"/>
Dec  6 02:50:47 np0005548731 nova_compute[232433]:      </source>
Dec  6 02:50:47 np0005548731 nova_compute[232433]:      <auth username="openstack">
Dec  6 02:50:47 np0005548731 nova_compute[232433]:        <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec  6 02:50:47 np0005548731 nova_compute[232433]:      </auth>
Dec  6 02:50:47 np0005548731 nova_compute[232433]:      <target dev="vda" bus="virtio"/>
Dec  6 02:50:47 np0005548731 nova_compute[232433]:    </disk>
Dec  6 02:50:47 np0005548731 nova_compute[232433]:    <disk type="network" device="cdrom">
Dec  6 02:50:47 np0005548731 nova_compute[232433]:      <driver type="raw" cache="none"/>
Dec  6 02:50:47 np0005548731 nova_compute[232433]:      <source protocol="rbd" name="vms/0b9681c0-c0e7-4bd8-9040-865c1bff517b_disk.config">
Dec  6 02:50:47 np0005548731 nova_compute[232433]:        <host name="192.168.122.100" port="6789"/>
Dec  6 02:50:47 np0005548731 nova_compute[232433]:        <host name="192.168.122.102" port="6789"/>
Dec  6 02:50:47 np0005548731 nova_compute[232433]:        <host name="192.168.122.101" port="6789"/>
Dec  6 02:50:47 np0005548731 nova_compute[232433]:      </source>
Dec  6 02:50:47 np0005548731 nova_compute[232433]:      <auth username="openstack">
Dec  6 02:50:47 np0005548731 nova_compute[232433]:        <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec  6 02:50:47 np0005548731 nova_compute[232433]:      </auth>
Dec  6 02:50:47 np0005548731 nova_compute[232433]:      <target dev="sda" bus="sata"/>
Dec  6 02:50:47 np0005548731 nova_compute[232433]:    </disk>
Dec  6 02:50:47 np0005548731 nova_compute[232433]:    <interface type="ethernet">
Dec  6 02:50:47 np0005548731 nova_compute[232433]:      <mac address="fa:16:3e:b7:01:5b"/>
Dec  6 02:50:47 np0005548731 nova_compute[232433]:      <model type="virtio"/>
Dec  6 02:50:47 np0005548731 nova_compute[232433]:      <driver name="vhost" rx_queue_size="512"/>
Dec  6 02:50:47 np0005548731 nova_compute[232433]:      <mtu size="1442"/>
Dec  6 02:50:47 np0005548731 nova_compute[232433]:      <target dev="tap1d320b87-e6"/>
Dec  6 02:50:47 np0005548731 nova_compute[232433]:    </interface>
Dec  6 02:50:47 np0005548731 nova_compute[232433]:    <serial type="pty">
Dec  6 02:50:47 np0005548731 nova_compute[232433]:      <log file="/var/lib/nova/instances/0b9681c0-c0e7-4bd8-9040-865c1bff517b/console.log" append="off"/>
Dec  6 02:50:47 np0005548731 nova_compute[232433]:    </serial>
Dec  6 02:50:47 np0005548731 nova_compute[232433]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec  6 02:50:47 np0005548731 nova_compute[232433]:    <video>
Dec  6 02:50:47 np0005548731 nova_compute[232433]:      <model type="virtio"/>
Dec  6 02:50:47 np0005548731 nova_compute[232433]:    </video>
Dec  6 02:50:47 np0005548731 nova_compute[232433]:    <input type="tablet" bus="usb"/>
Dec  6 02:50:47 np0005548731 nova_compute[232433]:    <rng model="virtio">
Dec  6 02:50:47 np0005548731 nova_compute[232433]:      <backend model="random">/dev/urandom</backend>
Dec  6 02:50:47 np0005548731 nova_compute[232433]:    </rng>
Dec  6 02:50:47 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root"/>
Dec  6 02:50:47 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:50:47 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:50:47 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:50:47 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:50:47 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:50:47 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:50:47 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:50:47 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:50:47 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:50:47 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:50:47 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:50:47 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:50:47 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:50:47 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:50:47 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:50:47 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:50:47 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:50:47 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:50:47 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:50:47 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:50:47 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:50:47 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:50:47 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:50:47 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:50:47 np0005548731 nova_compute[232433]:    <controller type="usb" index="0"/>
Dec  6 02:50:47 np0005548731 nova_compute[232433]:    <memballoon model="virtio">
Dec  6 02:50:47 np0005548731 nova_compute[232433]:      <stats period="10"/>
Dec  6 02:50:47 np0005548731 nova_compute[232433]:    </memballoon>
Dec  6 02:50:47 np0005548731 nova_compute[232433]:  </devices>
Dec  6 02:50:47 np0005548731 nova_compute[232433]: </domain>
Dec  6 02:50:47 np0005548731 nova_compute[232433]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec  6 02:50:47 np0005548731 nova_compute[232433]: 2025-12-06 07:50:47.147 232437 DEBUG nova.compute.manager [None req-71c99f44-8c86-45e5-a802-d58fd01147d0 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] [instance: 0b9681c0-c0e7-4bd8-9040-865c1bff517b] Preparing to wait for external event network-vif-plugged-1d320b87-e6ec-40ef-b2ce-50bd50b6f5fc prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec  6 02:50:47 np0005548731 nova_compute[232433]: 2025-12-06 07:50:47.148 232437 DEBUG oslo_concurrency.lockutils [None req-71c99f44-8c86-45e5-a802-d58fd01147d0 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] Acquiring lock "0b9681c0-c0e7-4bd8-9040-865c1bff517b-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:50:47 np0005548731 nova_compute[232433]: 2025-12-06 07:50:47.148 232437 DEBUG oslo_concurrency.lockutils [None req-71c99f44-8c86-45e5-a802-d58fd01147d0 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] Lock "0b9681c0-c0e7-4bd8-9040-865c1bff517b-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:50:47 np0005548731 nova_compute[232433]: 2025-12-06 07:50:47.149 232437 DEBUG oslo_concurrency.lockutils [None req-71c99f44-8c86-45e5-a802-d58fd01147d0 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] Lock "0b9681c0-c0e7-4bd8-9040-865c1bff517b-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:50:47 np0005548731 nova_compute[232433]: 2025-12-06 07:50:47.149 232437 DEBUG nova.virt.libvirt.vif [None req-71c99f44-8c86-45e5-a802-d58fd01147d0 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T07:50:34Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-AttachVolumeShelveTestJSON-server-676074581',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-attachvolumeshelvetestjson-server-676074581',id=168,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNKYB7DoHCE/Zq13G9bNkMnqez+ah/cvrfQVcJ98DATDISc+hKRWhcY3n96hXJBgGVFk2F3L+nAWB+E3c8HLOV2K86PN0fFtoRWNhKFWMW6mo0EoTV5X0kp2CVI8eUnC1g==',key_name='tempest-keypair-12606996',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='cfa713d92cc94fa1b94404ed58b0563f',ramdisk_id='',reservation_id='r-qd0qnxrq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachVolumeShelveTestJSON-1510980811',owner_user_name='tempest-AttachVolumeShelveTestJSON-1510980811-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:50:39Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='90c9de6e67724c898a8e23b05fbf14da',uuid=0b9681c0-c0e7-4bd8-9040-865c1bff517b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1d320b87-e6ec-40ef-b2ce-50bd50b6f5fc", "address": "fa:16:3e:b7:01:5b", "network": {"id": "45904a2f-a5c2-4047-9c19-a87d36354c1b", "bridge": "br-int", "label": "tempest-AttachVolumeShelveTestJSON-1547381509-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cfa713d92cc94fa1b94404ed58b0563f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1d320b87-e6", "ovs_interfaceid": "1d320b87-e6ec-40ef-b2ce-50bd50b6f5fc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec  6 02:50:47 np0005548731 nova_compute[232433]: 2025-12-06 07:50:47.150 232437 DEBUG nova.network.os_vif_util [None req-71c99f44-8c86-45e5-a802-d58fd01147d0 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] Converting VIF {"id": "1d320b87-e6ec-40ef-b2ce-50bd50b6f5fc", "address": "fa:16:3e:b7:01:5b", "network": {"id": "45904a2f-a5c2-4047-9c19-a87d36354c1b", "bridge": "br-int", "label": "tempest-AttachVolumeShelveTestJSON-1547381509-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cfa713d92cc94fa1b94404ed58b0563f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1d320b87-e6", "ovs_interfaceid": "1d320b87-e6ec-40ef-b2ce-50bd50b6f5fc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 02:50:47 np0005548731 nova_compute[232433]: 2025-12-06 07:50:47.150 232437 DEBUG nova.network.os_vif_util [None req-71c99f44-8c86-45e5-a802-d58fd01147d0 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b7:01:5b,bridge_name='br-int',has_traffic_filtering=True,id=1d320b87-e6ec-40ef-b2ce-50bd50b6f5fc,network=Network(45904a2f-a5c2-4047-9c19-a87d36354c1b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1d320b87-e6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 02:50:47 np0005548731 nova_compute[232433]: 2025-12-06 07:50:47.151 232437 DEBUG os_vif [None req-71c99f44-8c86-45e5-a802-d58fd01147d0 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b7:01:5b,bridge_name='br-int',has_traffic_filtering=True,id=1d320b87-e6ec-40ef-b2ce-50bd50b6f5fc,network=Network(45904a2f-a5c2-4047-9c19-a87d36354c1b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1d320b87-e6') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec  6 02:50:47 np0005548731 nova_compute[232433]: 2025-12-06 07:50:47.151 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:50:47 np0005548731 nova_compute[232433]: 2025-12-06 07:50:47.153 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:50:47 np0005548731 nova_compute[232433]: 2025-12-06 07:50:47.154 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  6 02:50:47 np0005548731 nova_compute[232433]: 2025-12-06 07:50:47.157 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:50:47 np0005548731 nova_compute[232433]: 2025-12-06 07:50:47.157 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1d320b87-e6, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:50:47 np0005548731 nova_compute[232433]: 2025-12-06 07:50:47.158 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap1d320b87-e6, col_values=(('external_ids', {'iface-id': '1d320b87-e6ec-40ef-b2ce-50bd50b6f5fc', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:b7:01:5b', 'vm-uuid': '0b9681c0-c0e7-4bd8-9040-865c1bff517b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:50:47 np0005548731 nova_compute[232433]: 2025-12-06 07:50:47.159 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:50:47 np0005548731 NetworkManager[49182]: <info>  [1765007447.1609] manager: (tap1d320b87-e6): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/367)
Dec  6 02:50:47 np0005548731 nova_compute[232433]: 2025-12-06 07:50:47.162 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  6 02:50:47 np0005548731 nova_compute[232433]: 2025-12-06 07:50:47.166 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:50:47 np0005548731 nova_compute[232433]: 2025-12-06 07:50:47.167 232437 INFO os_vif [None req-71c99f44-8c86-45e5-a802-d58fd01147d0 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b7:01:5b,bridge_name='br-int',has_traffic_filtering=True,id=1d320b87-e6ec-40ef-b2ce-50bd50b6f5fc,network=Network(45904a2f-a5c2-4047-9c19-a87d36354c1b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1d320b87-e6')#033[00m
Dec  6 02:50:47 np0005548731 nova_compute[232433]: 2025-12-06 07:50:47.375 232437 DEBUG nova.virt.libvirt.driver [None req-71c99f44-8c86-45e5-a802-d58fd01147d0 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  6 02:50:47 np0005548731 nova_compute[232433]: 2025-12-06 07:50:47.375 232437 DEBUG nova.virt.libvirt.driver [None req-71c99f44-8c86-45e5-a802-d58fd01147d0 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  6 02:50:47 np0005548731 nova_compute[232433]: 2025-12-06 07:50:47.375 232437 DEBUG nova.virt.libvirt.driver [None req-71c99f44-8c86-45e5-a802-d58fd01147d0 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] No VIF found with MAC fa:16:3e:b7:01:5b, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec  6 02:50:47 np0005548731 nova_compute[232433]: 2025-12-06 07:50:47.376 232437 INFO nova.virt.libvirt.driver [None req-71c99f44-8c86-45e5-a802-d58fd01147d0 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] [instance: 0b9681c0-c0e7-4bd8-9040-865c1bff517b] Using config drive#033[00m
Dec  6 02:50:47 np0005548731 nova_compute[232433]: 2025-12-06 07:50:47.401 232437 DEBUG nova.storage.rbd_utils [None req-71c99f44-8c86-45e5-a802-d58fd01147d0 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] rbd image 0b9681c0-c0e7-4bd8-9040-865c1bff517b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:50:47 np0005548731 nova_compute[232433]: 2025-12-06 07:50:47.830 232437 INFO nova.virt.libvirt.driver [None req-71c99f44-8c86-45e5-a802-d58fd01147d0 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] [instance: 0b9681c0-c0e7-4bd8-9040-865c1bff517b] Creating config drive at /var/lib/nova/instances/0b9681c0-c0e7-4bd8-9040-865c1bff517b/disk.config#033[00m
Dec  6 02:50:47 np0005548731 nova_compute[232433]: 2025-12-06 07:50:47.839 232437 DEBUG oslo_concurrency.processutils [None req-71c99f44-8c86-45e5-a802-d58fd01147d0 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/0b9681c0-c0e7-4bd8-9040-865c1bff517b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp8gla_xbj execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:50:47 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:50:47 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:50:47 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:50:47.915 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:50:47 np0005548731 nova_compute[232433]: 2025-12-06 07:50:47.978 232437 DEBUG oslo_concurrency.processutils [None req-71c99f44-8c86-45e5-a802-d58fd01147d0 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/0b9681c0-c0e7-4bd8-9040-865c1bff517b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp8gla_xbj" returned: 0 in 0.140s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:50:48 np0005548731 nova_compute[232433]: 2025-12-06 07:50:48.013 232437 DEBUG nova.storage.rbd_utils [None req-71c99f44-8c86-45e5-a802-d58fd01147d0 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] rbd image 0b9681c0-c0e7-4bd8-9040-865c1bff517b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:50:48 np0005548731 nova_compute[232433]: 2025-12-06 07:50:48.018 232437 DEBUG oslo_concurrency.processutils [None req-71c99f44-8c86-45e5-a802-d58fd01147d0 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/0b9681c0-c0e7-4bd8-9040-865c1bff517b/disk.config 0b9681c0-c0e7-4bd8-9040-865c1bff517b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:50:48 np0005548731 nova_compute[232433]: 2025-12-06 07:50:48.412 232437 DEBUG oslo_concurrency.processutils [None req-71c99f44-8c86-45e5-a802-d58fd01147d0 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/0b9681c0-c0e7-4bd8-9040-865c1bff517b/disk.config 0b9681c0-c0e7-4bd8-9040-865c1bff517b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.394s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:50:48 np0005548731 nova_compute[232433]: 2025-12-06 07:50:48.413 232437 INFO nova.virt.libvirt.driver [None req-71c99f44-8c86-45e5-a802-d58fd01147d0 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] [instance: 0b9681c0-c0e7-4bd8-9040-865c1bff517b] Deleting local config drive /var/lib/nova/instances/0b9681c0-c0e7-4bd8-9040-865c1bff517b/disk.config because it was imported into RBD.#033[00m
Dec  6 02:50:48 np0005548731 kernel: tap1d320b87-e6: entered promiscuous mode
Dec  6 02:50:48 np0005548731 NetworkManager[49182]: <info>  [1765007448.4725] manager: (tap1d320b87-e6): new Tun device (/org/freedesktop/NetworkManager/Devices/368)
Dec  6 02:50:48 np0005548731 ovn_controller[133927]: 2025-12-06T07:50:48Z|00806|binding|INFO|Claiming lport 1d320b87-e6ec-40ef-b2ce-50bd50b6f5fc for this chassis.
Dec  6 02:50:48 np0005548731 ovn_controller[133927]: 2025-12-06T07:50:48Z|00807|binding|INFO|1d320b87-e6ec-40ef-b2ce-50bd50b6f5fc: Claiming fa:16:3e:b7:01:5b 10.100.0.14
Dec  6 02:50:48 np0005548731 nova_compute[232433]: 2025-12-06 07:50:48.474 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:50:48 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:50:48.489 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b7:01:5b 10.100.0.14'], port_security=['fa:16:3e:b7:01:5b 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '0b9681c0-c0e7-4bd8-9040-865c1bff517b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-45904a2f-a5c2-4047-9c19-a87d36354c1b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'cfa713d92cc94fa1b94404ed58b0563f', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'a26fe1ae-b98b-40c8-b5a2-fa6264313a90', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5411afcf-f935-4976-affc-7b12214f8e50, chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], logical_port=1d320b87-e6ec-40ef-b2ce-50bd50b6f5fc) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 02:50:48 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:50:48.491 143965 INFO neutron.agent.ovn.metadata.agent [-] Port 1d320b87-e6ec-40ef-b2ce-50bd50b6f5fc in datapath 45904a2f-a5c2-4047-9c19-a87d36354c1b bound to our chassis#033[00m
Dec  6 02:50:48 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:50:48.494 143965 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 45904a2f-a5c2-4047-9c19-a87d36354c1b#033[00m
Dec  6 02:50:48 np0005548731 systemd-udevd[308761]: Network interface NamePolicy= disabled on kernel command line.
Dec  6 02:50:48 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:50:48.507 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[e50dcf97-bef8-4131-a70c-cee562f92e98]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:50:48 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:50:48.508 143965 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap45904a2f-a1 in ovnmeta-45904a2f-a5c2-4047-9c19-a87d36354c1b namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Dec  6 02:50:48 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:50:48.510 236668 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap45904a2f-a0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Dec  6 02:50:48 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:50:48.510 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[d57d5c6b-7b24-4ced-b9b2-cd8fc98abba8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:50:48 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:50:48.511 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[816488a8-8d2b-4a8e-bcd0-05ea1b111072]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:50:48 np0005548731 NetworkManager[49182]: <info>  [1765007448.5220] device (tap1d320b87-e6): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec  6 02:50:48 np0005548731 NetworkManager[49182]: <info>  [1765007448.5237] device (tap1d320b87-e6): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec  6 02:50:48 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:50:48.524 144079 DEBUG oslo.privsep.daemon [-] privsep: reply[e9b9c06f-47d8-429c-977c-b2eee0123f61]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:50:48 np0005548731 systemd-machined[195355]: New machine qemu-82-instance-000000a8.
Dec  6 02:50:48 np0005548731 nova_compute[232433]: 2025-12-06 07:50:48.543 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:50:48 np0005548731 systemd[1]: Started Virtual Machine qemu-82-instance-000000a8.
Dec  6 02:50:48 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:50:48.549 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[8831701f-7ff9-4acf-a6c7-a30443a96655]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:50:48 np0005548731 ovn_controller[133927]: 2025-12-06T07:50:48Z|00808|binding|INFO|Setting lport 1d320b87-e6ec-40ef-b2ce-50bd50b6f5fc ovn-installed in OVS
Dec  6 02:50:48 np0005548731 ovn_controller[133927]: 2025-12-06T07:50:48Z|00809|binding|INFO|Setting lport 1d320b87-e6ec-40ef-b2ce-50bd50b6f5fc up in Southbound
Dec  6 02:50:48 np0005548731 nova_compute[232433]: 2025-12-06 07:50:48.557 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:50:48 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:50:48.577 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[934933e7-fae9-425a-9fea-88d64c37e1b8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:50:48 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:50:48.582 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[3deefaa9-2e06-4c3b-b8fb-7eee751481d1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:50:48 np0005548731 systemd-udevd[308766]: Network interface NamePolicy= disabled on kernel command line.
Dec  6 02:50:48 np0005548731 NetworkManager[49182]: <info>  [1765007448.5837] manager: (tap45904a2f-a0): new Veth device (/org/freedesktop/NetworkManager/Devices/369)
Dec  6 02:50:48 np0005548731 nova_compute[232433]: 2025-12-06 07:50:48.594 232437 DEBUG nova.network.neutron [req-6cde9deb-412d-40fd-ace4-1050875f79c6 req-f90ad332-a7ac-4a33-a2ed-aaa6b54beadf 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 0b9681c0-c0e7-4bd8-9040-865c1bff517b] Updated VIF entry in instance network info cache for port 1d320b87-e6ec-40ef-b2ce-50bd50b6f5fc. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  6 02:50:48 np0005548731 nova_compute[232433]: 2025-12-06 07:50:48.595 232437 DEBUG nova.network.neutron [req-6cde9deb-412d-40fd-ace4-1050875f79c6 req-f90ad332-a7ac-4a33-a2ed-aaa6b54beadf 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 0b9681c0-c0e7-4bd8-9040-865c1bff517b] Updating instance_info_cache with network_info: [{"id": "1d320b87-e6ec-40ef-b2ce-50bd50b6f5fc", "address": "fa:16:3e:b7:01:5b", "network": {"id": "45904a2f-a5c2-4047-9c19-a87d36354c1b", "bridge": "br-int", "label": "tempest-AttachVolumeShelveTestJSON-1547381509-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cfa713d92cc94fa1b94404ed58b0563f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1d320b87-e6", "ovs_interfaceid": "1d320b87-e6ec-40ef-b2ce-50bd50b6f5fc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 02:50:48 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:50:48.616 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[4ad330d9-6fae-457e-9f0e-f4f744b51941]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:50:48 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:50:48.621 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[5949ccf3-d0d9-435f-abb3-5b4cb02d3f93]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:50:48 np0005548731 NetworkManager[49182]: <info>  [1765007448.6417] device (tap45904a2f-a0): carrier: link connected
Dec  6 02:50:48 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:50:48.647 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[e3dd45e4-6a2a-49fa-b269-a3a89ba24d03]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:50:48 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:50:48.665 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[a45b5538-6a0d-4926-b210-b6d633b0447c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap45904a2f-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a8:67:fe'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 245], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 771064, 'reachable_time': 21238, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 308795, 'error': None, 'target': 'ovnmeta-45904a2f-a5c2-4047-9c19-a87d36354c1b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:50:48 np0005548731 nova_compute[232433]: 2025-12-06 07:50:48.666 232437 DEBUG oslo_concurrency.lockutils [req-6cde9deb-412d-40fd-ace4-1050875f79c6 req-f90ad332-a7ac-4a33-a2ed-aaa6b54beadf 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-0b9681c0-c0e7-4bd8-9040-865c1bff517b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 02:50:48 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:50:48.684 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[0a2d31bd-d00e-4327-bf45-60f56fc8cc84]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fea8:67fe'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 771064, 'tstamp': 771064}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 308796, 'error': None, 'target': 'ovnmeta-45904a2f-a5c2-4047-9c19-a87d36354c1b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:50:48 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:50:48.698 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[0780adcc-381b-441c-8dfb-a8312824309e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap45904a2f-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a8:67:fe'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 245], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 771064, 'reachable_time': 21238, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 308797, 'error': None, 'target': 'ovnmeta-45904a2f-a5c2-4047-9c19-a87d36354c1b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:50:48 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:50:48.725 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[676eaca1-1205-4e3f-b203-5a48013d70e1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:50:48 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:50:48.781 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[b5b6359e-44ef-47d1-9b60-1273d6cb99d0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:50:48 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:50:48.782 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap45904a2f-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:50:48 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:50:48.783 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  6 02:50:48 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:50:48.783 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap45904a2f-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:50:48 np0005548731 nova_compute[232433]: 2025-12-06 07:50:48.784 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:50:48 np0005548731 kernel: tap45904a2f-a0: entered promiscuous mode
Dec  6 02:50:48 np0005548731 NetworkManager[49182]: <info>  [1765007448.7855] manager: (tap45904a2f-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/370)
Dec  6 02:50:48 np0005548731 nova_compute[232433]: 2025-12-06 07:50:48.786 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:50:48 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:50:48.789 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap45904a2f-a0, col_values=(('external_ids', {'iface-id': 'e43e784e-bee5-49c8-8bc7-c45a17996abf'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:50:48 np0005548731 nova_compute[232433]: 2025-12-06 07:50:48.790 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:50:48 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:50:48.792 143965 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/45904a2f-a5c2-4047-9c19-a87d36354c1b.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/45904a2f-a5c2-4047-9c19-a87d36354c1b.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Dec  6 02:50:48 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:50:48.793 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[3eba0912-27c2-434a-ae7d-908c120c83f3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:50:48 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:50:48.794 143965 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec  6 02:50:48 np0005548731 ovn_metadata_agent[143960]: global
Dec  6 02:50:48 np0005548731 ovn_metadata_agent[143960]:    log         /dev/log local0 debug
Dec  6 02:50:48 np0005548731 ovn_metadata_agent[143960]:    log-tag     haproxy-metadata-proxy-45904a2f-a5c2-4047-9c19-a87d36354c1b
Dec  6 02:50:48 np0005548731 ovn_metadata_agent[143960]:    user        root
Dec  6 02:50:48 np0005548731 ovn_metadata_agent[143960]:    group       root
Dec  6 02:50:48 np0005548731 ovn_metadata_agent[143960]:    maxconn     1024
Dec  6 02:50:48 np0005548731 ovn_metadata_agent[143960]:    pidfile     /var/lib/neutron/external/pids/45904a2f-a5c2-4047-9c19-a87d36354c1b.pid.haproxy
Dec  6 02:50:48 np0005548731 ovn_metadata_agent[143960]:    daemon
Dec  6 02:50:48 np0005548731 ovn_metadata_agent[143960]: 
Dec  6 02:50:48 np0005548731 ovn_metadata_agent[143960]: defaults
Dec  6 02:50:48 np0005548731 ovn_metadata_agent[143960]:    log global
Dec  6 02:50:48 np0005548731 ovn_metadata_agent[143960]:    mode http
Dec  6 02:50:48 np0005548731 ovn_metadata_agent[143960]:    option httplog
Dec  6 02:50:48 np0005548731 ovn_metadata_agent[143960]:    option dontlognull
Dec  6 02:50:48 np0005548731 ovn_metadata_agent[143960]:    option http-server-close
Dec  6 02:50:48 np0005548731 ovn_metadata_agent[143960]:    option forwardfor
Dec  6 02:50:48 np0005548731 ovn_metadata_agent[143960]:    retries                 3
Dec  6 02:50:48 np0005548731 ovn_metadata_agent[143960]:    timeout http-request    30s
Dec  6 02:50:48 np0005548731 ovn_metadata_agent[143960]:    timeout connect         30s
Dec  6 02:50:48 np0005548731 ovn_metadata_agent[143960]:    timeout client          32s
Dec  6 02:50:48 np0005548731 ovn_metadata_agent[143960]:    timeout server          32s
Dec  6 02:50:48 np0005548731 ovn_metadata_agent[143960]:    timeout http-keep-alive 30s
Dec  6 02:50:48 np0005548731 ovn_metadata_agent[143960]: 
Dec  6 02:50:48 np0005548731 ovn_metadata_agent[143960]: 
Dec  6 02:50:48 np0005548731 ovn_metadata_agent[143960]: listen listener
Dec  6 02:50:48 np0005548731 ovn_metadata_agent[143960]:    bind 169.254.169.254:80
Dec  6 02:50:48 np0005548731 ovn_metadata_agent[143960]:    server metadata /var/lib/neutron/metadata_proxy
Dec  6 02:50:48 np0005548731 ovn_metadata_agent[143960]:    http-request add-header X-OVN-Network-ID 45904a2f-a5c2-4047-9c19-a87d36354c1b
Dec  6 02:50:48 np0005548731 ovn_metadata_agent[143960]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Dec  6 02:50:48 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:50:48.794 143965 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-45904a2f-a5c2-4047-9c19-a87d36354c1b', 'env', 'PROCESS_TAG=haproxy-45904a2f-a5c2-4047-9c19-a87d36354c1b', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/45904a2f-a5c2-4047-9c19-a87d36354c1b.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Dec  6 02:50:48 np0005548731 ovn_controller[133927]: 2025-12-06T07:50:48Z|00810|binding|INFO|Releasing lport e43e784e-bee5-49c8-8bc7-c45a17996abf from this chassis (sb_readonly=0)
Dec  6 02:50:48 np0005548731 nova_compute[232433]: 2025-12-06 07:50:48.816 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:50:48 np0005548731 NetworkManager[49182]: <info>  [1765007448.8173] manager: (patch-br-int-to-provnet-9e78c1a1-68f4-477a-abaa-13a98bde06e5): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/371)
Dec  6 02:50:48 np0005548731 NetworkManager[49182]: <info>  [1765007448.8179] manager: (patch-provnet-9e78c1a1-68f4-477a-abaa-13a98bde06e5-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/372)
Dec  6 02:50:48 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:50:48 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:50:48 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:50:48.830 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:50:48 np0005548731 nova_compute[232433]: 2025-12-06 07:50:48.966 232437 DEBUG nova.virt.driver [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Emitting event <LifecycleEvent: 1765007448.9663777, 0b9681c0-c0e7-4bd8-9040-865c1bff517b => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 02:50:48 np0005548731 nova_compute[232433]: 2025-12-06 07:50:48.967 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 0b9681c0-c0e7-4bd8-9040-865c1bff517b] VM Started (Lifecycle Event)#033[00m
Dec  6 02:50:49 np0005548731 nova_compute[232433]: 2025-12-06 07:50:49.027 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:50:49 np0005548731 ovn_controller[133927]: 2025-12-06T07:50:49Z|00811|binding|INFO|Releasing lport 684342c7-1709-4776-be04-f6d5a6b0b0ae from this chassis (sb_readonly=0)
Dec  6 02:50:49 np0005548731 ovn_controller[133927]: 2025-12-06T07:50:49Z|00812|binding|INFO|Releasing lport e43e784e-bee5-49c8-8bc7-c45a17996abf from this chassis (sb_readonly=0)
Dec  6 02:50:49 np0005548731 nova_compute[232433]: 2025-12-06 07:50:49.050 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 0b9681c0-c0e7-4bd8-9040-865c1bff517b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:50:49 np0005548731 nova_compute[232433]: 2025-12-06 07:50:49.053 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:50:49 np0005548731 nova_compute[232433]: 2025-12-06 07:50:49.055 232437 DEBUG nova.virt.driver [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Emitting event <LifecycleEvent: 1765007448.9673789, 0b9681c0-c0e7-4bd8-9040-865c1bff517b => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 02:50:49 np0005548731 nova_compute[232433]: 2025-12-06 07:50:49.056 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 0b9681c0-c0e7-4bd8-9040-865c1bff517b] VM Paused (Lifecycle Event)#033[00m
Dec  6 02:50:49 np0005548731 nova_compute[232433]: 2025-12-06 07:50:49.061 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:50:49 np0005548731 nova_compute[232433]: 2025-12-06 07:50:49.073 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:50:49 np0005548731 nova_compute[232433]: 2025-12-06 07:50:49.076 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 0b9681c0-c0e7-4bd8-9040-865c1bff517b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:50:49 np0005548731 nova_compute[232433]: 2025-12-06 07:50:49.080 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 0b9681c0-c0e7-4bd8-9040-865c1bff517b] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  6 02:50:49 np0005548731 nova_compute[232433]: 2025-12-06 07:50:49.137 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 0b9681c0-c0e7-4bd8-9040-865c1bff517b] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  6 02:50:49 np0005548731 podman[308871]: 2025-12-06 07:50:49.172254908 +0000 UTC m=+0.048932684 container create 1839d8aa77ccdec1e51bd23368255cd4efefdcde1ff3a2e845bbd7313bdb405e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-45904a2f-a5c2-4047-9c19-a87d36354c1b, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS)
Dec  6 02:50:49 np0005548731 systemd[1]: Started libpod-conmon-1839d8aa77ccdec1e51bd23368255cd4efefdcde1ff3a2e845bbd7313bdb405e.scope.
Dec  6 02:50:49 np0005548731 podman[308871]: 2025-12-06 07:50:49.145452475 +0000 UTC m=+0.022130281 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec  6 02:50:49 np0005548731 systemd[1]: Started libcrun container.
Dec  6 02:50:49 np0005548731 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5b3573929415c6e400d8ee4569198989d7cbfd01a2eb44321818fb09c82edb4c/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec  6 02:50:49 np0005548731 podman[308871]: 2025-12-06 07:50:49.271498068 +0000 UTC m=+0.148175864 container init 1839d8aa77ccdec1e51bd23368255cd4efefdcde1ff3a2e845bbd7313bdb405e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-45904a2f-a5c2-4047-9c19-a87d36354c1b, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0)
Dec  6 02:50:49 np0005548731 podman[308871]: 2025-12-06 07:50:49.276590013 +0000 UTC m=+0.153267799 container start 1839d8aa77ccdec1e51bd23368255cd4efefdcde1ff3a2e845bbd7313bdb405e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-45904a2f-a5c2-4047-9c19-a87d36354c1b, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251125)
Dec  6 02:50:49 np0005548731 neutron-haproxy-ovnmeta-45904a2f-a5c2-4047-9c19-a87d36354c1b[308886]: [NOTICE]   (308890) : New worker (308892) forked
Dec  6 02:50:49 np0005548731 neutron-haproxy-ovnmeta-45904a2f-a5c2-4047-9c19-a87d36354c1b[308886]: [NOTICE]   (308890) : Loading success.
Dec  6 02:50:49 np0005548731 nova_compute[232433]: 2025-12-06 07:50:49.762 232437 DEBUG nova.compute.manager [req-828e4928-69a7-4f6c-a9ba-597191ea6d7a req-5b64d0ab-ea01-48c2-bba4-1f798bc8db53 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: fa71018e-7574-4438-bb85-43d1c96cf9b9] Received event network-changed-341d07a0-1551-46f6-85b6-aace80d14532 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:50:49 np0005548731 nova_compute[232433]: 2025-12-06 07:50:49.763 232437 DEBUG nova.compute.manager [req-828e4928-69a7-4f6c-a9ba-597191ea6d7a req-5b64d0ab-ea01-48c2-bba4-1f798bc8db53 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: fa71018e-7574-4438-bb85-43d1c96cf9b9] Refreshing instance network info cache due to event network-changed-341d07a0-1551-46f6-85b6-aace80d14532. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  6 02:50:49 np0005548731 nova_compute[232433]: 2025-12-06 07:50:49.763 232437 DEBUG oslo_concurrency.lockutils [req-828e4928-69a7-4f6c-a9ba-597191ea6d7a req-5b64d0ab-ea01-48c2-bba4-1f798bc8db53 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-fa71018e-7574-4438-bb85-43d1c96cf9b9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 02:50:49 np0005548731 nova_compute[232433]: 2025-12-06 07:50:49.764 232437 DEBUG oslo_concurrency.lockutils [req-828e4928-69a7-4f6c-a9ba-597191ea6d7a req-5b64d0ab-ea01-48c2-bba4-1f798bc8db53 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-fa71018e-7574-4438-bb85-43d1c96cf9b9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 02:50:49 np0005548731 nova_compute[232433]: 2025-12-06 07:50:49.765 232437 DEBUG nova.network.neutron [req-828e4928-69a7-4f6c-a9ba-597191ea6d7a req-5b64d0ab-ea01-48c2-bba4-1f798bc8db53 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: fa71018e-7574-4438-bb85-43d1c96cf9b9] Refreshing network info cache for port 341d07a0-1551-46f6-85b6-aace80d14532 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  6 02:50:49 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:50:49 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:50:49 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:50:49.918 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:50:50 np0005548731 nova_compute[232433]: 2025-12-06 07:50:50.386 232437 DEBUG nova.compute.manager [req-25f5af4a-4c4b-42bd-a70d-30872ec9dc86 req-7ff266a5-47d2-4f93-a7a9-54d2fb118870 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 0b9681c0-c0e7-4bd8-9040-865c1bff517b] Received event network-vif-plugged-1d320b87-e6ec-40ef-b2ce-50bd50b6f5fc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:50:50 np0005548731 nova_compute[232433]: 2025-12-06 07:50:50.386 232437 DEBUG oslo_concurrency.lockutils [req-25f5af4a-4c4b-42bd-a70d-30872ec9dc86 req-7ff266a5-47d2-4f93-a7a9-54d2fb118870 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "0b9681c0-c0e7-4bd8-9040-865c1bff517b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:50:50 np0005548731 nova_compute[232433]: 2025-12-06 07:50:50.387 232437 DEBUG oslo_concurrency.lockutils [req-25f5af4a-4c4b-42bd-a70d-30872ec9dc86 req-7ff266a5-47d2-4f93-a7a9-54d2fb118870 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "0b9681c0-c0e7-4bd8-9040-865c1bff517b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:50:50 np0005548731 nova_compute[232433]: 2025-12-06 07:50:50.387 232437 DEBUG oslo_concurrency.lockutils [req-25f5af4a-4c4b-42bd-a70d-30872ec9dc86 req-7ff266a5-47d2-4f93-a7a9-54d2fb118870 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "0b9681c0-c0e7-4bd8-9040-865c1bff517b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:50:50 np0005548731 nova_compute[232433]: 2025-12-06 07:50:50.387 232437 DEBUG nova.compute.manager [req-25f5af4a-4c4b-42bd-a70d-30872ec9dc86 req-7ff266a5-47d2-4f93-a7a9-54d2fb118870 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 0b9681c0-c0e7-4bd8-9040-865c1bff517b] Processing event network-vif-plugged-1d320b87-e6ec-40ef-b2ce-50bd50b6f5fc _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Dec  6 02:50:50 np0005548731 nova_compute[232433]: 2025-12-06 07:50:50.388 232437 DEBUG nova.compute.manager [req-25f5af4a-4c4b-42bd-a70d-30872ec9dc86 req-7ff266a5-47d2-4f93-a7a9-54d2fb118870 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 0b9681c0-c0e7-4bd8-9040-865c1bff517b] Received event network-vif-plugged-1d320b87-e6ec-40ef-b2ce-50bd50b6f5fc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:50:50 np0005548731 nova_compute[232433]: 2025-12-06 07:50:50.388 232437 DEBUG oslo_concurrency.lockutils [req-25f5af4a-4c4b-42bd-a70d-30872ec9dc86 req-7ff266a5-47d2-4f93-a7a9-54d2fb118870 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "0b9681c0-c0e7-4bd8-9040-865c1bff517b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:50:50 np0005548731 nova_compute[232433]: 2025-12-06 07:50:50.388 232437 DEBUG oslo_concurrency.lockutils [req-25f5af4a-4c4b-42bd-a70d-30872ec9dc86 req-7ff266a5-47d2-4f93-a7a9-54d2fb118870 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "0b9681c0-c0e7-4bd8-9040-865c1bff517b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:50:50 np0005548731 nova_compute[232433]: 2025-12-06 07:50:50.388 232437 DEBUG oslo_concurrency.lockutils [req-25f5af4a-4c4b-42bd-a70d-30872ec9dc86 req-7ff266a5-47d2-4f93-a7a9-54d2fb118870 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "0b9681c0-c0e7-4bd8-9040-865c1bff517b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:50:50 np0005548731 nova_compute[232433]: 2025-12-06 07:50:50.389 232437 DEBUG nova.compute.manager [req-25f5af4a-4c4b-42bd-a70d-30872ec9dc86 req-7ff266a5-47d2-4f93-a7a9-54d2fb118870 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 0b9681c0-c0e7-4bd8-9040-865c1bff517b] No waiting events found dispatching network-vif-plugged-1d320b87-e6ec-40ef-b2ce-50bd50b6f5fc pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 02:50:50 np0005548731 nova_compute[232433]: 2025-12-06 07:50:50.389 232437 WARNING nova.compute.manager [req-25f5af4a-4c4b-42bd-a70d-30872ec9dc86 req-7ff266a5-47d2-4f93-a7a9-54d2fb118870 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 0b9681c0-c0e7-4bd8-9040-865c1bff517b] Received unexpected event network-vif-plugged-1d320b87-e6ec-40ef-b2ce-50bd50b6f5fc for instance with vm_state building and task_state spawning.#033[00m
Dec  6 02:50:50 np0005548731 nova_compute[232433]: 2025-12-06 07:50:50.390 232437 DEBUG nova.compute.manager [None req-71c99f44-8c86-45e5-a802-d58fd01147d0 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] [instance: 0b9681c0-c0e7-4bd8-9040-865c1bff517b] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec  6 02:50:50 np0005548731 nova_compute[232433]: 2025-12-06 07:50:50.394 232437 DEBUG nova.virt.driver [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Emitting event <LifecycleEvent: 1765007450.3944323, 0b9681c0-c0e7-4bd8-9040-865c1bff517b => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 02:50:50 np0005548731 nova_compute[232433]: 2025-12-06 07:50:50.395 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 0b9681c0-c0e7-4bd8-9040-865c1bff517b] VM Resumed (Lifecycle Event)#033[00m
Dec  6 02:50:50 np0005548731 nova_compute[232433]: 2025-12-06 07:50:50.401 232437 DEBUG nova.virt.libvirt.driver [None req-71c99f44-8c86-45e5-a802-d58fd01147d0 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] [instance: 0b9681c0-c0e7-4bd8-9040-865c1bff517b] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Dec  6 02:50:50 np0005548731 nova_compute[232433]: 2025-12-06 07:50:50.405 232437 INFO nova.virt.libvirt.driver [-] [instance: 0b9681c0-c0e7-4bd8-9040-865c1bff517b] Instance spawned successfully.#033[00m
Dec  6 02:50:50 np0005548731 nova_compute[232433]: 2025-12-06 07:50:50.405 232437 DEBUG nova.virt.libvirt.driver [None req-71c99f44-8c86-45e5-a802-d58fd01147d0 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] [instance: 0b9681c0-c0e7-4bd8-9040-865c1bff517b] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Dec  6 02:50:50 np0005548731 nova_compute[232433]: 2025-12-06 07:50:50.426 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 0b9681c0-c0e7-4bd8-9040-865c1bff517b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:50:50 np0005548731 nova_compute[232433]: 2025-12-06 07:50:50.430 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 0b9681c0-c0e7-4bd8-9040-865c1bff517b] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  6 02:50:50 np0005548731 nova_compute[232433]: 2025-12-06 07:50:50.439 232437 DEBUG nova.virt.libvirt.driver [None req-71c99f44-8c86-45e5-a802-d58fd01147d0 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] [instance: 0b9681c0-c0e7-4bd8-9040-865c1bff517b] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 02:50:50 np0005548731 nova_compute[232433]: 2025-12-06 07:50:50.439 232437 DEBUG nova.virt.libvirt.driver [None req-71c99f44-8c86-45e5-a802-d58fd01147d0 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] [instance: 0b9681c0-c0e7-4bd8-9040-865c1bff517b] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 02:50:50 np0005548731 nova_compute[232433]: 2025-12-06 07:50:50.440 232437 DEBUG nova.virt.libvirt.driver [None req-71c99f44-8c86-45e5-a802-d58fd01147d0 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] [instance: 0b9681c0-c0e7-4bd8-9040-865c1bff517b] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 02:50:50 np0005548731 nova_compute[232433]: 2025-12-06 07:50:50.440 232437 DEBUG nova.virt.libvirt.driver [None req-71c99f44-8c86-45e5-a802-d58fd01147d0 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] [instance: 0b9681c0-c0e7-4bd8-9040-865c1bff517b] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 02:50:50 np0005548731 nova_compute[232433]: 2025-12-06 07:50:50.440 232437 DEBUG nova.virt.libvirt.driver [None req-71c99f44-8c86-45e5-a802-d58fd01147d0 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] [instance: 0b9681c0-c0e7-4bd8-9040-865c1bff517b] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 02:50:50 np0005548731 nova_compute[232433]: 2025-12-06 07:50:50.441 232437 DEBUG nova.virt.libvirt.driver [None req-71c99f44-8c86-45e5-a802-d58fd01147d0 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] [instance: 0b9681c0-c0e7-4bd8-9040-865c1bff517b] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 02:50:50 np0005548731 nova_compute[232433]: 2025-12-06 07:50:50.503 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 0b9681c0-c0e7-4bd8-9040-865c1bff517b] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  6 02:50:50 np0005548731 nova_compute[232433]: 2025-12-06 07:50:50.585 232437 INFO nova.compute.manager [None req-71c99f44-8c86-45e5-a802-d58fd01147d0 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] [instance: 0b9681c0-c0e7-4bd8-9040-865c1bff517b] Took 10.87 seconds to spawn the instance on the hypervisor.#033[00m
Dec  6 02:50:50 np0005548731 nova_compute[232433]: 2025-12-06 07:50:50.585 232437 DEBUG nova.compute.manager [None req-71c99f44-8c86-45e5-a802-d58fd01147d0 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] [instance: 0b9681c0-c0e7-4bd8-9040-865c1bff517b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:50:50 np0005548731 nova_compute[232433]: 2025-12-06 07:50:50.661 232437 INFO nova.compute.manager [None req-71c99f44-8c86-45e5-a802-d58fd01147d0 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] [instance: 0b9681c0-c0e7-4bd8-9040-865c1bff517b] Took 14.27 seconds to build instance.#033[00m
Dec  6 02:50:50 np0005548731 nova_compute[232433]: 2025-12-06 07:50:50.692 232437 DEBUG oslo_concurrency.lockutils [None req-71c99f44-8c86-45e5-a802-d58fd01147d0 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] Lock "0b9681c0-c0e7-4bd8-9040-865c1bff517b" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 14.393s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:50:50 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:50:50 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:50:50 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:50:50.832 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:50:51 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e374 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:50:51 np0005548731 podman[308904]: 2025-12-06 07:50:51.898530488 +0000 UTC m=+0.054160572 container health_status ae4fd08cdec31d6ae2a6b91ffff7deb6ca5a8375cb52ee4b685cc6f25796824e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=multipathd, managed_by=edpm_ansible, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  6 02:50:51 np0005548731 podman[308902]: 2025-12-06 07:50:51.918766062 +0000 UTC m=+0.078946186 container health_status 26c2ce9bdb83c6db5ccad7f5b2a7cb4e9354ab0235ad2f942ea42c8feda2142b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Dec  6 02:50:51 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:50:51 np0005548731 podman[308903]: 2025-12-06 07:50:51.921351264 +0000 UTC m=+0.081531288 container health_status a118c3becf56cfbcae208065771ebf10a65162bb7b96389971293e534823fb78 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller)
Dec  6 02:50:51 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:50:51 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:50:51.920 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:50:52 np0005548731 nova_compute[232433]: 2025-12-06 07:50:52.042 232437 DEBUG nova.compute.manager [req-c19c31d2-1f87-4fdc-b3cd-2ec21f1275f2 req-fe9db544-0fae-48d4-9cae-73e8cb79555d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: fa71018e-7574-4438-bb85-43d1c96cf9b9] Received event network-changed-341d07a0-1551-46f6-85b6-aace80d14532 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:50:52 np0005548731 nova_compute[232433]: 2025-12-06 07:50:52.042 232437 DEBUG nova.compute.manager [req-c19c31d2-1f87-4fdc-b3cd-2ec21f1275f2 req-fe9db544-0fae-48d4-9cae-73e8cb79555d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: fa71018e-7574-4438-bb85-43d1c96cf9b9] Refreshing instance network info cache due to event network-changed-341d07a0-1551-46f6-85b6-aace80d14532. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  6 02:50:52 np0005548731 nova_compute[232433]: 2025-12-06 07:50:52.043 232437 DEBUG oslo_concurrency.lockutils [req-c19c31d2-1f87-4fdc-b3cd-2ec21f1275f2 req-fe9db544-0fae-48d4-9cae-73e8cb79555d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-fa71018e-7574-4438-bb85-43d1c96cf9b9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 02:50:52 np0005548731 nova_compute[232433]: 2025-12-06 07:50:52.159 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:50:52 np0005548731 nova_compute[232433]: 2025-12-06 07:50:52.201 232437 DEBUG nova.network.neutron [req-828e4928-69a7-4f6c-a9ba-597191ea6d7a req-5b64d0ab-ea01-48c2-bba4-1f798bc8db53 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: fa71018e-7574-4438-bb85-43d1c96cf9b9] Updated VIF entry in instance network info cache for port 341d07a0-1551-46f6-85b6-aace80d14532. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  6 02:50:52 np0005548731 nova_compute[232433]: 2025-12-06 07:50:52.202 232437 DEBUG nova.network.neutron [req-828e4928-69a7-4f6c-a9ba-597191ea6d7a req-5b64d0ab-ea01-48c2-bba4-1f798bc8db53 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: fa71018e-7574-4438-bb85-43d1c96cf9b9] Updating instance_info_cache with network_info: [{"id": "341d07a0-1551-46f6-85b6-aace80d14532", "address": "fa:16:3e:71:74:18", "network": {"id": "f08afae8-f952-4a01-a643-61a4dc212937", "bridge": "br-int", "label": "tempest-TestMinimumBasicScenario-655838283-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.186", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0c8fc5bc237e42bfad505a0bca6681eb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap341d07a0-15", "ovs_interfaceid": "341d07a0-1551-46f6-85b6-aace80d14532", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 02:50:52 np0005548731 nova_compute[232433]: 2025-12-06 07:50:52.798 232437 DEBUG oslo_concurrency.lockutils [req-828e4928-69a7-4f6c-a9ba-597191ea6d7a req-5b64d0ab-ea01-48c2-bba4-1f798bc8db53 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-fa71018e-7574-4438-bb85-43d1c96cf9b9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 02:50:52 np0005548731 nova_compute[232433]: 2025-12-06 07:50:52.800 232437 DEBUG oslo_concurrency.lockutils [req-c19c31d2-1f87-4fdc-b3cd-2ec21f1275f2 req-fe9db544-0fae-48d4-9cae-73e8cb79555d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-fa71018e-7574-4438-bb85-43d1c96cf9b9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 02:50:52 np0005548731 nova_compute[232433]: 2025-12-06 07:50:52.800 232437 DEBUG nova.network.neutron [req-c19c31d2-1f87-4fdc-b3cd-2ec21f1275f2 req-fe9db544-0fae-48d4-9cae-73e8cb79555d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: fa71018e-7574-4438-bb85-43d1c96cf9b9] Refreshing network info cache for port 341d07a0-1551-46f6-85b6-aace80d14532 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  6 02:50:52 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:50:52 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 02:50:52 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:50:52.834 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 02:50:53 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:50:53 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:50:53 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:50:53.923 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:50:54 np0005548731 nova_compute[232433]: 2025-12-06 07:50:54.055 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:50:54 np0005548731 nova_compute[232433]: 2025-12-06 07:50:54.405 232437 DEBUG nova.compute.manager [req-fe1a06f3-205d-4e8f-adef-1ec05c015923 req-23a7dfbc-1466-402b-a494-107983942fb4 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 0b9681c0-c0e7-4bd8-9040-865c1bff517b] Received event network-changed-1d320b87-e6ec-40ef-b2ce-50bd50b6f5fc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:50:54 np0005548731 nova_compute[232433]: 2025-12-06 07:50:54.406 232437 DEBUG nova.compute.manager [req-fe1a06f3-205d-4e8f-adef-1ec05c015923 req-23a7dfbc-1466-402b-a494-107983942fb4 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 0b9681c0-c0e7-4bd8-9040-865c1bff517b] Refreshing instance network info cache due to event network-changed-1d320b87-e6ec-40ef-b2ce-50bd50b6f5fc. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  6 02:50:54 np0005548731 nova_compute[232433]: 2025-12-06 07:50:54.406 232437 DEBUG oslo_concurrency.lockutils [req-fe1a06f3-205d-4e8f-adef-1ec05c015923 req-23a7dfbc-1466-402b-a494-107983942fb4 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-0b9681c0-c0e7-4bd8-9040-865c1bff517b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 02:50:54 np0005548731 nova_compute[232433]: 2025-12-06 07:50:54.406 232437 DEBUG oslo_concurrency.lockutils [req-fe1a06f3-205d-4e8f-adef-1ec05c015923 req-23a7dfbc-1466-402b-a494-107983942fb4 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-0b9681c0-c0e7-4bd8-9040-865c1bff517b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 02:50:54 np0005548731 nova_compute[232433]: 2025-12-06 07:50:54.407 232437 DEBUG nova.network.neutron [req-fe1a06f3-205d-4e8f-adef-1ec05c015923 req-23a7dfbc-1466-402b-a494-107983942fb4 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 0b9681c0-c0e7-4bd8-9040-865c1bff517b] Refreshing network info cache for port 1d320b87-e6ec-40ef-b2ce-50bd50b6f5fc _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  6 02:50:54 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:50:54 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:50:54 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:50:54.836 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:50:55 np0005548731 nova_compute[232433]: 2025-12-06 07:50:55.292 232437 DEBUG nova.network.neutron [req-c19c31d2-1f87-4fdc-b3cd-2ec21f1275f2 req-fe9db544-0fae-48d4-9cae-73e8cb79555d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: fa71018e-7574-4438-bb85-43d1c96cf9b9] Updated VIF entry in instance network info cache for port 341d07a0-1551-46f6-85b6-aace80d14532. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  6 02:50:55 np0005548731 nova_compute[232433]: 2025-12-06 07:50:55.294 232437 DEBUG nova.network.neutron [req-c19c31d2-1f87-4fdc-b3cd-2ec21f1275f2 req-fe9db544-0fae-48d4-9cae-73e8cb79555d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: fa71018e-7574-4438-bb85-43d1c96cf9b9] Updating instance_info_cache with network_info: [{"id": "341d07a0-1551-46f6-85b6-aace80d14532", "address": "fa:16:3e:71:74:18", "network": {"id": "f08afae8-f952-4a01-a643-61a4dc212937", "bridge": "br-int", "label": "tempest-TestMinimumBasicScenario-655838283-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.186", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0c8fc5bc237e42bfad505a0bca6681eb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap341d07a0-15", "ovs_interfaceid": "341d07a0-1551-46f6-85b6-aace80d14532", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 02:50:55 np0005548731 nova_compute[232433]: 2025-12-06 07:50:55.464 232437 DEBUG oslo_concurrency.lockutils [req-c19c31d2-1f87-4fdc-b3cd-2ec21f1275f2 req-fe9db544-0fae-48d4-9cae-73e8cb79555d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-fa71018e-7574-4438-bb85-43d1c96cf9b9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 02:50:55 np0005548731 nova_compute[232433]: 2025-12-06 07:50:55.464 232437 DEBUG nova.compute.manager [req-c19c31d2-1f87-4fdc-b3cd-2ec21f1275f2 req-fe9db544-0fae-48d4-9cae-73e8cb79555d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: fa71018e-7574-4438-bb85-43d1c96cf9b9] Received event network-changed-341d07a0-1551-46f6-85b6-aace80d14532 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:50:55 np0005548731 nova_compute[232433]: 2025-12-06 07:50:55.465 232437 DEBUG nova.compute.manager [req-c19c31d2-1f87-4fdc-b3cd-2ec21f1275f2 req-fe9db544-0fae-48d4-9cae-73e8cb79555d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: fa71018e-7574-4438-bb85-43d1c96cf9b9] Refreshing instance network info cache due to event network-changed-341d07a0-1551-46f6-85b6-aace80d14532. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  6 02:50:55 np0005548731 nova_compute[232433]: 2025-12-06 07:50:55.465 232437 DEBUG oslo_concurrency.lockutils [req-c19c31d2-1f87-4fdc-b3cd-2ec21f1275f2 req-fe9db544-0fae-48d4-9cae-73e8cb79555d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-fa71018e-7574-4438-bb85-43d1c96cf9b9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 02:50:55 np0005548731 nova_compute[232433]: 2025-12-06 07:50:55.465 232437 DEBUG oslo_concurrency.lockutils [req-c19c31d2-1f87-4fdc-b3cd-2ec21f1275f2 req-fe9db544-0fae-48d4-9cae-73e8cb79555d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-fa71018e-7574-4438-bb85-43d1c96cf9b9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 02:50:55 np0005548731 nova_compute[232433]: 2025-12-06 07:50:55.466 232437 DEBUG nova.network.neutron [req-c19c31d2-1f87-4fdc-b3cd-2ec21f1275f2 req-fe9db544-0fae-48d4-9cae-73e8cb79555d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: fa71018e-7574-4438-bb85-43d1c96cf9b9] Refreshing network info cache for port 341d07a0-1551-46f6-85b6-aace80d14532 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  6 02:50:55 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:50:55 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:50:55 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:50:55.927 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:50:56 np0005548731 nova_compute[232433]: 2025-12-06 07:50:56.118 232437 DEBUG nova.network.neutron [req-fe1a06f3-205d-4e8f-adef-1ec05c015923 req-23a7dfbc-1466-402b-a494-107983942fb4 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 0b9681c0-c0e7-4bd8-9040-865c1bff517b] Updated VIF entry in instance network info cache for port 1d320b87-e6ec-40ef-b2ce-50bd50b6f5fc. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  6 02:50:56 np0005548731 nova_compute[232433]: 2025-12-06 07:50:56.119 232437 DEBUG nova.network.neutron [req-fe1a06f3-205d-4e8f-adef-1ec05c015923 req-23a7dfbc-1466-402b-a494-107983942fb4 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 0b9681c0-c0e7-4bd8-9040-865c1bff517b] Updating instance_info_cache with network_info: [{"id": "1d320b87-e6ec-40ef-b2ce-50bd50b6f5fc", "address": "fa:16:3e:b7:01:5b", "network": {"id": "45904a2f-a5c2-4047-9c19-a87d36354c1b", "bridge": "br-int", "label": "tempest-AttachVolumeShelveTestJSON-1547381509-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.189", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cfa713d92cc94fa1b94404ed58b0563f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1d320b87-e6", "ovs_interfaceid": "1d320b87-e6ec-40ef-b2ce-50bd50b6f5fc", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 02:50:56 np0005548731 nova_compute[232433]: 2025-12-06 07:50:56.230 232437 DEBUG oslo_concurrency.lockutils [req-fe1a06f3-205d-4e8f-adef-1ec05c015923 req-23a7dfbc-1466-402b-a494-107983942fb4 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-0b9681c0-c0e7-4bd8-9040-865c1bff517b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 02:50:56 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e374 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:50:56 np0005548731 nova_compute[232433]: 2025-12-06 07:50:56.705 232437 DEBUG nova.network.neutron [req-c19c31d2-1f87-4fdc-b3cd-2ec21f1275f2 req-fe9db544-0fae-48d4-9cae-73e8cb79555d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: fa71018e-7574-4438-bb85-43d1c96cf9b9] Updated VIF entry in instance network info cache for port 341d07a0-1551-46f6-85b6-aace80d14532. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  6 02:50:56 np0005548731 nova_compute[232433]: 2025-12-06 07:50:56.706 232437 DEBUG nova.network.neutron [req-c19c31d2-1f87-4fdc-b3cd-2ec21f1275f2 req-fe9db544-0fae-48d4-9cae-73e8cb79555d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: fa71018e-7574-4438-bb85-43d1c96cf9b9] Updating instance_info_cache with network_info: [{"id": "341d07a0-1551-46f6-85b6-aace80d14532", "address": "fa:16:3e:71:74:18", "network": {"id": "f08afae8-f952-4a01-a643-61a4dc212937", "bridge": "br-int", "label": "tempest-TestMinimumBasicScenario-655838283-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.186", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0c8fc5bc237e42bfad505a0bca6681eb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap341d07a0-15", "ovs_interfaceid": "341d07a0-1551-46f6-85b6-aace80d14532", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 02:50:56 np0005548731 nova_compute[232433]: 2025-12-06 07:50:56.724 232437 DEBUG oslo_concurrency.lockutils [req-c19c31d2-1f87-4fdc-b3cd-2ec21f1275f2 req-fe9db544-0fae-48d4-9cae-73e8cb79555d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-fa71018e-7574-4438-bb85-43d1c96cf9b9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 02:50:56 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:50:56 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:50:56 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:50:56.839 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:50:57 np0005548731 nova_compute[232433]: 2025-12-06 07:50:57.162 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:50:57 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:50:57 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:50:57 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:50:57.929 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:50:58 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:50:58 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:50:58 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:50:58.841 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:50:59 np0005548731 nova_compute[232433]: 2025-12-06 07:50:59.056 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:50:59 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:50:59 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:50:59 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:50:59.931 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:51:00 np0005548731 nova_compute[232433]: 2025-12-06 07:51:00.253 232437 DEBUG nova.compute.manager [req-9defa22c-dd9f-4cca-af2f-e478fd8469c1 req-d9859128-247d-4246-918b-078d766ee5da 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: fa71018e-7574-4438-bb85-43d1c96cf9b9] Received event network-changed-341d07a0-1551-46f6-85b6-aace80d14532 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:51:00 np0005548731 nova_compute[232433]: 2025-12-06 07:51:00.254 232437 DEBUG nova.compute.manager [req-9defa22c-dd9f-4cca-af2f-e478fd8469c1 req-d9859128-247d-4246-918b-078d766ee5da 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: fa71018e-7574-4438-bb85-43d1c96cf9b9] Refreshing instance network info cache due to event network-changed-341d07a0-1551-46f6-85b6-aace80d14532. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  6 02:51:00 np0005548731 nova_compute[232433]: 2025-12-06 07:51:00.254 232437 DEBUG oslo_concurrency.lockutils [req-9defa22c-dd9f-4cca-af2f-e478fd8469c1 req-d9859128-247d-4246-918b-078d766ee5da 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-fa71018e-7574-4438-bb85-43d1c96cf9b9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 02:51:00 np0005548731 nova_compute[232433]: 2025-12-06 07:51:00.254 232437 DEBUG oslo_concurrency.lockutils [req-9defa22c-dd9f-4cca-af2f-e478fd8469c1 req-d9859128-247d-4246-918b-078d766ee5da 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-fa71018e-7574-4438-bb85-43d1c96cf9b9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 02:51:00 np0005548731 nova_compute[232433]: 2025-12-06 07:51:00.255 232437 DEBUG nova.network.neutron [req-9defa22c-dd9f-4cca-af2f-e478fd8469c1 req-d9859128-247d-4246-918b-078d766ee5da 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: fa71018e-7574-4438-bb85-43d1c96cf9b9] Refreshing network info cache for port 341d07a0-1551-46f6-85b6-aace80d14532 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  6 02:51:00 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:51:00 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 02:51:00 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:51:00.843 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 02:51:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:51:00.895 143965 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:51:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:51:00.896 143965 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:51:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:51:00.897 143965 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:51:01 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e374 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:51:01 np0005548731 nova_compute[232433]: 2025-12-06 07:51:01.899 232437 DEBUG nova.network.neutron [req-9defa22c-dd9f-4cca-af2f-e478fd8469c1 req-d9859128-247d-4246-918b-078d766ee5da 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: fa71018e-7574-4438-bb85-43d1c96cf9b9] Updated VIF entry in instance network info cache for port 341d07a0-1551-46f6-85b6-aace80d14532. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  6 02:51:01 np0005548731 nova_compute[232433]: 2025-12-06 07:51:01.900 232437 DEBUG nova.network.neutron [req-9defa22c-dd9f-4cca-af2f-e478fd8469c1 req-d9859128-247d-4246-918b-078d766ee5da 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: fa71018e-7574-4438-bb85-43d1c96cf9b9] Updating instance_info_cache with network_info: [{"id": "341d07a0-1551-46f6-85b6-aace80d14532", "address": "fa:16:3e:71:74:18", "network": {"id": "f08afae8-f952-4a01-a643-61a4dc212937", "bridge": "br-int", "label": "tempest-TestMinimumBasicScenario-655838283-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.186", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0c8fc5bc237e42bfad505a0bca6681eb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap341d07a0-15", "ovs_interfaceid": "341d07a0-1551-46f6-85b6-aace80d14532", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 02:51:01 np0005548731 nova_compute[232433]: 2025-12-06 07:51:01.921 232437 DEBUG oslo_concurrency.lockutils [req-9defa22c-dd9f-4cca-af2f-e478fd8469c1 req-d9859128-247d-4246-918b-078d766ee5da 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-fa71018e-7574-4438-bb85-43d1c96cf9b9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 02:51:01 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:51:01 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:51:01 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:51:01.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:51:02 np0005548731 nova_compute[232433]: 2025-12-06 07:51:02.164 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:51:02 np0005548731 ceph-mon[77458]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #124. Immutable memtables: 0.
Dec  6 02:51:02 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:51:02.779512) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec  6 02:51:02 np0005548731 ceph-mon[77458]: rocksdb: [db/flush_job.cc:856] [default] [JOB 77] Flushing memtable with next log file: 124
Dec  6 02:51:02 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765007462779580, "job": 77, "event": "flush_started", "num_memtables": 1, "num_entries": 804, "num_deletes": 250, "total_data_size": 1518853, "memory_usage": 1539368, "flush_reason": "Manual Compaction"}
Dec  6 02:51:02 np0005548731 ceph-mon[77458]: rocksdb: [db/flush_job.cc:885] [default] [JOB 77] Level-0 flush table #125: started
Dec  6 02:51:02 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765007462790743, "cf_name": "default", "job": 77, "event": "table_file_creation", "file_number": 125, "file_size": 665305, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 62190, "largest_seqno": 62989, "table_properties": {"data_size": 662026, "index_size": 1123, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1093, "raw_key_size": 8960, "raw_average_key_size": 20, "raw_value_size": 654996, "raw_average_value_size": 1523, "num_data_blocks": 49, "num_entries": 430, "num_filter_entries": 430, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765007407, "oldest_key_time": 1765007407, "file_creation_time": 1765007462, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bc01f9a1-fda8-4008-aee1-1fa5ff4371a1", "db_session_id": "CWBL8WMDM4D79BU86E9T", "orig_file_number": 125, "seqno_to_time_mapping": "N/A"}}
Dec  6 02:51:02 np0005548731 ceph-mon[77458]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 77] Flush lasted 11345 microseconds, and 3475 cpu microseconds.
Dec  6 02:51:02 np0005548731 ceph-mon[77458]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  6 02:51:02 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:51:02.790851) [db/flush_job.cc:967] [default] [JOB 77] Level-0 flush table #125: 665305 bytes OK
Dec  6 02:51:02 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:51:02.790876) [db/memtable_list.cc:519] [default] Level-0 commit table #125 started
Dec  6 02:51:02 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:51:02.798822) [db/memtable_list.cc:722] [default] Level-0 commit table #125: memtable #1 done
Dec  6 02:51:02 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:51:02.798864) EVENT_LOG_v1 {"time_micros": 1765007462798855, "job": 77, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec  6 02:51:02 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:51:02.798885) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec  6 02:51:02 np0005548731 ceph-mon[77458]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 77] Try to delete WAL files size 1514643, prev total WAL file size 1514643, number of live WAL files 2.
Dec  6 02:51:02 np0005548731 ceph-mon[77458]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000121.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  6 02:51:02 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:51:02.799720) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740032303236' seq:72057594037927935, type:22 .. '6D6772737461740032323737' seq:0, type:0; will stop at (end)
Dec  6 02:51:02 np0005548731 ceph-mon[77458]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 78] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec  6 02:51:02 np0005548731 ceph-mon[77458]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 77 Base level 0, inputs: [125(649KB)], [123(12MB)]
Dec  6 02:51:02 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765007462799804, "job": 78, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [125], "files_L6": [123], "score": -1, "input_data_size": 13701615, "oldest_snapshot_seqno": -1}
Dec  6 02:51:02 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:51:02 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:51:02 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:51:02.846 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:51:02 np0005548731 ceph-mon[77458]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 78] Generated table #126: 9040 keys, 10192277 bytes, temperature: kUnknown
Dec  6 02:51:02 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765007462888137, "cf_name": "default", "job": 78, "event": "table_file_creation", "file_number": 126, "file_size": 10192277, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10135745, "index_size": 32804, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 22661, "raw_key_size": 239797, "raw_average_key_size": 26, "raw_value_size": 9978549, "raw_average_value_size": 1103, "num_data_blocks": 1242, "num_entries": 9040, "num_filter_entries": 9040, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765002564, "oldest_key_time": 0, "file_creation_time": 1765007462, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bc01f9a1-fda8-4008-aee1-1fa5ff4371a1", "db_session_id": "CWBL8WMDM4D79BU86E9T", "orig_file_number": 126, "seqno_to_time_mapping": "N/A"}}
Dec  6 02:51:02 np0005548731 ceph-mon[77458]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  6 02:51:02 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:51:02.888446) [db/compaction/compaction_job.cc:1663] [default] [JOB 78] Compacted 1@0 + 1@6 files to L6 => 10192277 bytes
Dec  6 02:51:02 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:51:02.891096) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 155.0 rd, 115.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.6, 12.4 +0.0 blob) out(9.7 +0.0 blob), read-write-amplify(35.9) write-amplify(15.3) OK, records in: 9534, records dropped: 494 output_compression: NoCompression
Dec  6 02:51:02 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:51:02.891116) EVENT_LOG_v1 {"time_micros": 1765007462891107, "job": 78, "event": "compaction_finished", "compaction_time_micros": 88402, "compaction_time_cpu_micros": 30982, "output_level": 6, "num_output_files": 1, "total_output_size": 10192277, "num_input_records": 9534, "num_output_records": 9040, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec  6 02:51:02 np0005548731 ceph-mon[77458]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000125.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  6 02:51:02 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765007462891351, "job": 78, "event": "table_file_deletion", "file_number": 125}
Dec  6 02:51:02 np0005548731 ceph-mon[77458]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000123.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  6 02:51:02 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765007462894311, "job": 78, "event": "table_file_deletion", "file_number": 123}
Dec  6 02:51:02 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:51:02.799608) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 02:51:02 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:51:02.894360) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 02:51:02 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:51:02.894365) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 02:51:02 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:51:02.894367) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 02:51:02 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:51:02.894368) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 02:51:02 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:51:02.894369) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 02:51:03 np0005548731 ovn_controller[133927]: 2025-12-06T07:51:03Z|00107|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:b7:01:5b 10.100.0.14
Dec  6 02:51:03 np0005548731 ovn_controller[133927]: 2025-12-06T07:51:03Z|00108|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:b7:01:5b 10.100.0.14
Dec  6 02:51:03 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:51:03 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:51:03 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:51:03.936 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:51:03 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e375 e375: 3 total, 3 up, 3 in
Dec  6 02:51:04 np0005548731 nova_compute[232433]: 2025-12-06 07:51:04.057 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:51:04 np0005548731 nova_compute[232433]: 2025-12-06 07:51:04.754 232437 DEBUG oslo_concurrency.lockutils [None req-96936b7e-0203-447f-816d-28f3e108a5b2 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] Acquiring lock "fa71018e-7574-4438-bb85-43d1c96cf9b9" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:51:04 np0005548731 nova_compute[232433]: 2025-12-06 07:51:04.755 232437 DEBUG oslo_concurrency.lockutils [None req-96936b7e-0203-447f-816d-28f3e108a5b2 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] Lock "fa71018e-7574-4438-bb85-43d1c96cf9b9" acquired by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:51:04 np0005548731 nova_compute[232433]: 2025-12-06 07:51:04.755 232437 INFO nova.compute.manager [None req-96936b7e-0203-447f-816d-28f3e108a5b2 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] [instance: fa71018e-7574-4438-bb85-43d1c96cf9b9] Rebooting instance#033[00m
Dec  6 02:51:04 np0005548731 nova_compute[232433]: 2025-12-06 07:51:04.784 232437 DEBUG oslo_concurrency.lockutils [None req-96936b7e-0203-447f-816d-28f3e108a5b2 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] Acquiring lock "refresh_cache-fa71018e-7574-4438-bb85-43d1c96cf9b9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 02:51:04 np0005548731 nova_compute[232433]: 2025-12-06 07:51:04.785 232437 DEBUG oslo_concurrency.lockutils [None req-96936b7e-0203-447f-816d-28f3e108a5b2 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] Acquired lock "refresh_cache-fa71018e-7574-4438-bb85-43d1c96cf9b9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 02:51:04 np0005548731 nova_compute[232433]: 2025-12-06 07:51:04.785 232437 DEBUG nova.network.neutron [None req-96936b7e-0203-447f-816d-28f3e108a5b2 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] [instance: fa71018e-7574-4438-bb85-43d1c96cf9b9] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec  6 02:51:04 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:51:04 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 02:51:04 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:51:04.847 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 02:51:05 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:51:05 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:51:05 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:51:05.939 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:51:06 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e375 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:51:06 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:51:06 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:51:06 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:51:06.849 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:51:07 np0005548731 nova_compute[232433]: 2025-12-06 07:51:07.200 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:51:07 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:51:07 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:51:07 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:51:07.942 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:51:08 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:51:08 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:51:08 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:51:08.851 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:51:09 np0005548731 nova_compute[232433]: 2025-12-06 07:51:09.059 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:51:09 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:51:09 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:51:09 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:51:09.945 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:51:10 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  6 02:51:10 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4194304661' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  6 02:51:10 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  6 02:51:10 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4194304661' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  6 02:51:10 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:51:10 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:51:10 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:51:10.853 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:51:11 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e375 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:51:11 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:51:11 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:51:11 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:51:11.948 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:51:12 np0005548731 nova_compute[232433]: 2025-12-06 07:51:12.203 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:51:12 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e376 e376: 3 total, 3 up, 3 in
Dec  6 02:51:12 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:51:12 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:51:12 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:51:12.855 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:51:13 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  6 02:51:13 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 02:51:13 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  6 02:51:13 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:51:13 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:51:13 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:51:13.951 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:51:14 np0005548731 nova_compute[232433]: 2025-12-06 07:51:14.124 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:51:14 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:51:14 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:51:14 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:51:14.857 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:51:15 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:51:15 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:51:15 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:51:15.954 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:51:16 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e376 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:51:16 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:51:16 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:51:16 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:51:16.858 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:51:17 np0005548731 nova_compute[232433]: 2025-12-06 07:51:17.204 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:51:17 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:51:17 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:51:17 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:51:17.956 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:51:18 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:51:18 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:51:18 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:51:18.860 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:51:19 np0005548731 nova_compute[232433]: 2025-12-06 07:51:19.164 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:51:19 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:51:19 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:51:19 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:51:19.960 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:51:20 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 02:51:20 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:51:20 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:51:20 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:51:20.862 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:51:21 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e376 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:51:21 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:51:21 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:51:21 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:51:21.963 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:51:22 np0005548731 nova_compute[232433]: 2025-12-06 07:51:22.206 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:51:22 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 02:51:22 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:51:22 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:51:22 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:51:22.864 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:51:22 np0005548731 podman[309216]: 2025-12-06 07:51:22.891771948 +0000 UTC m=+0.050595834 container health_status 26c2ce9bdb83c6db5ccad7f5b2a7cb4e9354ab0235ad2f942ea42c8feda2142b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  6 02:51:22 np0005548731 podman[309218]: 2025-12-06 07:51:22.903460613 +0000 UTC m=+0.058950539 container health_status ae4fd08cdec31d6ae2a6b91ffff7deb6ca5a8375cb52ee4b685cc6f25796824e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec  6 02:51:22 np0005548731 podman[309217]: 2025-12-06 07:51:22.924606278 +0000 UTC m=+0.081720523 container health_status a118c3becf56cfbcae208065771ebf10a65162bb7b96389971293e534823fb78 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec  6 02:51:23 np0005548731 nova_compute[232433]: 2025-12-06 07:51:23.568 232437 DEBUG nova.network.neutron [None req-96936b7e-0203-447f-816d-28f3e108a5b2 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] [instance: fa71018e-7574-4438-bb85-43d1c96cf9b9] Updating instance_info_cache with network_info: [{"id": "341d07a0-1551-46f6-85b6-aace80d14532", "address": "fa:16:3e:71:74:18", "network": {"id": "f08afae8-f952-4a01-a643-61a4dc212937", "bridge": "br-int", "label": "tempest-TestMinimumBasicScenario-655838283-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.186", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0c8fc5bc237e42bfad505a0bca6681eb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap341d07a0-15", "ovs_interfaceid": "341d07a0-1551-46f6-85b6-aace80d14532", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 02:51:23 np0005548731 ovn_controller[133927]: 2025-12-06T07:51:23Z|00813|memory_trim|INFO|Detected inactivity (last active 30001 ms ago): trimming memory
Dec  6 02:51:23 np0005548731 nova_compute[232433]: 2025-12-06 07:51:23.869 232437 DEBUG oslo_concurrency.lockutils [None req-96936b7e-0203-447f-816d-28f3e108a5b2 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] Releasing lock "refresh_cache-fa71018e-7574-4438-bb85-43d1c96cf9b9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 02:51:23 np0005548731 nova_compute[232433]: 2025-12-06 07:51:23.870 232437 DEBUG nova.compute.manager [None req-96936b7e-0203-447f-816d-28f3e108a5b2 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] [instance: fa71018e-7574-4438-bb85-43d1c96cf9b9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:51:23 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:51:23 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:51:23 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:51:23.966 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:51:24 np0005548731 nova_compute[232433]: 2025-12-06 07:51:24.165 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:51:24 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:51:24 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:51:24 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:51:24.867 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:51:25 np0005548731 nova_compute[232433]: 2025-12-06 07:51:25.105 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:51:25 np0005548731 nova_compute[232433]: 2025-12-06 07:51:25.105 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  6 02:51:25 np0005548731 nova_compute[232433]: 2025-12-06 07:51:25.105 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  6 02:51:25 np0005548731 nova_compute[232433]: 2025-12-06 07:51:25.198 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Acquiring lock "refresh_cache-fa71018e-7574-4438-bb85-43d1c96cf9b9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 02:51:25 np0005548731 nova_compute[232433]: 2025-12-06 07:51:25.198 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Acquired lock "refresh_cache-fa71018e-7574-4438-bb85-43d1c96cf9b9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 02:51:25 np0005548731 nova_compute[232433]: 2025-12-06 07:51:25.198 232437 DEBUG nova.network.neutron [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] [instance: fa71018e-7574-4438-bb85-43d1c96cf9b9] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Dec  6 02:51:25 np0005548731 nova_compute[232433]: 2025-12-06 07:51:25.198 232437 DEBUG nova.objects.instance [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lazy-loading 'info_cache' on Instance uuid fa71018e-7574-4438-bb85-43d1c96cf9b9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:51:25 np0005548731 kernel: tap341d07a0-15 (unregistering): left promiscuous mode
Dec  6 02:51:25 np0005548731 NetworkManager[49182]: <info>  [1765007485.4821] device (tap341d07a0-15): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec  6 02:51:25 np0005548731 ovn_controller[133927]: 2025-12-06T07:51:25Z|00814|binding|INFO|Releasing lport 341d07a0-1551-46f6-85b6-aace80d14532 from this chassis (sb_readonly=0)
Dec  6 02:51:25 np0005548731 nova_compute[232433]: 2025-12-06 07:51:25.490 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:51:25 np0005548731 ovn_controller[133927]: 2025-12-06T07:51:25Z|00815|binding|INFO|Setting lport 341d07a0-1551-46f6-85b6-aace80d14532 down in Southbound
Dec  6 02:51:25 np0005548731 ovn_controller[133927]: 2025-12-06T07:51:25Z|00816|binding|INFO|Removing iface tap341d07a0-15 ovn-installed in OVS
Dec  6 02:51:25 np0005548731 nova_compute[232433]: 2025-12-06 07:51:25.493 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:51:25 np0005548731 nova_compute[232433]: 2025-12-06 07:51:25.524 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:51:25 np0005548731 systemd[1]: machine-qemu\x2d81\x2dinstance\x2d000000a6.scope: Deactivated successfully.
Dec  6 02:51:25 np0005548731 systemd[1]: machine-qemu\x2d81\x2dinstance\x2d000000a6.scope: Consumed 15.634s CPU time.
Dec  6 02:51:25 np0005548731 systemd-machined[195355]: Machine qemu-81-instance-000000a6 terminated.
Dec  6 02:51:25 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:51:25.669 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:71:74:18 10.100.0.13'], port_security=['fa:16:3e:71:74:18 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'fa71018e-7574-4438-bb85-43d1c96cf9b9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f08afae8-f952-4a01-a643-61a4dc212937', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0c8fc5bc237e42bfad505a0bca6681eb', 'neutron:revision_number': '5', 'neutron:security_group_ids': '6303b693-a47b-4375-8b74-5068fffec871 97e2366f-5968-4702-838b-5830aacf120f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com', 'neutron:port_fip': '192.168.122.186'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e9bb6731-402a-4b02-b4cf-9ac913838fd2, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], logical_port=341d07a0-1551-46f6-85b6-aace80d14532) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 02:51:25 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:51:25.671 143965 INFO neutron.agent.ovn.metadata.agent [-] Port 341d07a0-1551-46f6-85b6-aace80d14532 in datapath f08afae8-f952-4a01-a643-61a4dc212937 unbound from our chassis#033[00m
Dec  6 02:51:25 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:51:25.673 143965 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network f08afae8-f952-4a01-a643-61a4dc212937, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec  6 02:51:25 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:51:25.674 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[9a7bf6f7-8ae2-4702-b4b5-1ff43eeeb331]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:51:25 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:51:25.674 143965 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-f08afae8-f952-4a01-a643-61a4dc212937 namespace which is not needed anymore#033[00m
Dec  6 02:51:25 np0005548731 nova_compute[232433]: 2025-12-06 07:51:25.697 232437 INFO nova.virt.libvirt.driver [-] [instance: fa71018e-7574-4438-bb85-43d1c96cf9b9] Instance destroyed successfully.#033[00m
Dec  6 02:51:25 np0005548731 nova_compute[232433]: 2025-12-06 07:51:25.698 232437 DEBUG nova.objects.instance [None req-96936b7e-0203-447f-816d-28f3e108a5b2 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] Lazy-loading 'resources' on Instance uuid fa71018e-7574-4438-bb85-43d1c96cf9b9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:51:25 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:51:25 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:51:25 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:51:25.969 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:51:26 np0005548731 neutron-haproxy-ovnmeta-f08afae8-f952-4a01-a643-61a4dc212937[308292]: [NOTICE]   (308296) : haproxy version is 2.8.14-c23fe91
Dec  6 02:51:26 np0005548731 neutron-haproxy-ovnmeta-f08afae8-f952-4a01-a643-61a4dc212937[308292]: [NOTICE]   (308296) : path to executable is /usr/sbin/haproxy
Dec  6 02:51:26 np0005548731 neutron-haproxy-ovnmeta-f08afae8-f952-4a01-a643-61a4dc212937[308292]: [WARNING]  (308296) : Exiting Master process...
Dec  6 02:51:26 np0005548731 neutron-haproxy-ovnmeta-f08afae8-f952-4a01-a643-61a4dc212937[308292]: [WARNING]  (308296) : Exiting Master process...
Dec  6 02:51:26 np0005548731 neutron-haproxy-ovnmeta-f08afae8-f952-4a01-a643-61a4dc212937[308292]: [ALERT]    (308296) : Current worker (308298) exited with code 143 (Terminated)
Dec  6 02:51:26 np0005548731 neutron-haproxy-ovnmeta-f08afae8-f952-4a01-a643-61a4dc212937[308292]: [WARNING]  (308296) : All workers exited. Exiting... (0)
Dec  6 02:51:26 np0005548731 systemd[1]: libpod-5a76585a745b69166b7d9e1bea06442b653604ef013e81a91769d410f08ef7e2.scope: Deactivated successfully.
Dec  6 02:51:26 np0005548731 podman[309362]: 2025-12-06 07:51:26.302556647 +0000 UTC m=+0.537097786 container died 5a76585a745b69166b7d9e1bea06442b653604ef013e81a91769d410f08ef7e2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f08afae8-f952-4a01-a643-61a4dc212937, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  6 02:51:26 np0005548731 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-5a76585a745b69166b7d9e1bea06442b653604ef013e81a91769d410f08ef7e2-userdata-shm.mount: Deactivated successfully.
Dec  6 02:51:26 np0005548731 systemd[1]: var-lib-containers-storage-overlay-7f9803d469c14b5c32a1af21850c82d75fcc02c6fad9e88654859ee6c2f41eb6-merged.mount: Deactivated successfully.
Dec  6 02:51:26 np0005548731 nova_compute[232433]: 2025-12-06 07:51:26.342 232437 DEBUG nova.virt.libvirt.vif [None req-96936b7e-0203-447f-816d-28f3e108a5b2 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T07:50:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestMinimumBasicScenario-server-235586357',display_name='tempest-TestMinimumBasicScenario-server-235586357',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-testminimumbasicscenario-server-235586357',id=166,image_ref='6916f635-11b7-4158-ab13-60ff56406973',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKJFOYwUkK/AU5j/16C3dY42Bin5X9do17czkQVWr96Bc5yu/WMHsTfBb0AVGwIJHTi79KNz3aKY1rNv6m7H2M+tAVea1sAVgXE0sQ5V7zYqD0Y4j4BUibNqoce1/1RtaQ==',key_name='tempest-TestMinimumBasicScenario-1468814315',keypairs=<?>,launch_index=0,launched_at=2025-12-06T07:50:28Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='0c8fc5bc237e42bfad505a0bca6681eb',ramdisk_id='',reservation_id='r-fawmn8ln',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6916f635-11b7-4158-ab13-60ff56406973',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestMinimumBasicScenario-1310413980',owner_user_name='tempest-TestMinimumBasicScenario-1310413980-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T07:51:23Z,user_data=None,user_id='0f669e963dc54ad7bebf8dd20341428a',uuid=fa71018e-7574-4438-bb85-43d1c96cf9b9,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "341d07a0-1551-46f6-85b6-aace80d14532", "address": "fa:16:3e:71:74:18", "network": {"id": "f08afae8-f952-4a01-a643-61a4dc212937", "bridge": "br-int", "label": "tempest-TestMinimumBasicScenario-655838283-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": 
"fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.186", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0c8fc5bc237e42bfad505a0bca6681eb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap341d07a0-15", "ovs_interfaceid": "341d07a0-1551-46f6-85b6-aace80d14532", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec  6 02:51:26 np0005548731 nova_compute[232433]: 2025-12-06 07:51:26.343 232437 DEBUG nova.network.os_vif_util [None req-96936b7e-0203-447f-816d-28f3e108a5b2 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] Converting VIF {"id": "341d07a0-1551-46f6-85b6-aace80d14532", "address": "fa:16:3e:71:74:18", "network": {"id": "f08afae8-f952-4a01-a643-61a4dc212937", "bridge": "br-int", "label": "tempest-TestMinimumBasicScenario-655838283-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.186", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0c8fc5bc237e42bfad505a0bca6681eb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap341d07a0-15", "ovs_interfaceid": "341d07a0-1551-46f6-85b6-aace80d14532", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 02:51:26 np0005548731 nova_compute[232433]: 2025-12-06 07:51:26.344 232437 DEBUG nova.network.os_vif_util [None req-96936b7e-0203-447f-816d-28f3e108a5b2 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:71:74:18,bridge_name='br-int',has_traffic_filtering=True,id=341d07a0-1551-46f6-85b6-aace80d14532,network=Network(f08afae8-f952-4a01-a643-61a4dc212937),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap341d07a0-15') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 02:51:26 np0005548731 nova_compute[232433]: 2025-12-06 07:51:26.344 232437 DEBUG os_vif [None req-96936b7e-0203-447f-816d-28f3e108a5b2 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:71:74:18,bridge_name='br-int',has_traffic_filtering=True,id=341d07a0-1551-46f6-85b6-aace80d14532,network=Network(f08afae8-f952-4a01-a643-61a4dc212937),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap341d07a0-15') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec  6 02:51:26 np0005548731 nova_compute[232433]: 2025-12-06 07:51:26.346 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:51:26 np0005548731 nova_compute[232433]: 2025-12-06 07:51:26.346 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap341d07a0-15, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:51:26 np0005548731 nova_compute[232433]: 2025-12-06 07:51:26.348 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:51:26 np0005548731 nova_compute[232433]: 2025-12-06 07:51:26.349 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:51:26 np0005548731 nova_compute[232433]: 2025-12-06 07:51:26.351 232437 INFO os_vif [None req-96936b7e-0203-447f-816d-28f3e108a5b2 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:71:74:18,bridge_name='br-int',has_traffic_filtering=True,id=341d07a0-1551-46f6-85b6-aace80d14532,network=Network(f08afae8-f952-4a01-a643-61a4dc212937),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap341d07a0-15')#033[00m
Dec  6 02:51:26 np0005548731 nova_compute[232433]: 2025-12-06 07:51:26.361 232437 DEBUG nova.virt.libvirt.driver [None req-96936b7e-0203-447f-816d-28f3e108a5b2 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] [instance: fa71018e-7574-4438-bb85-43d1c96cf9b9] Start _get_guest_xml network_info=[{"id": "341d07a0-1551-46f6-85b6-aace80d14532", "address": "fa:16:3e:71:74:18", "network": {"id": "f08afae8-f952-4a01-a643-61a4dc212937", "bridge": "br-int", "label": "tempest-TestMinimumBasicScenario-655838283-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.186", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0c8fc5bc237e42bfad505a0bca6681eb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap341d07a0-15", "ovs_interfaceid": "341d07a0-1551-46f6-85b6-aace80d14532", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vdb': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=6916f635-11b7-4158-ab13-60ff56406973,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'device_type': 'disk', 'encryption_secret_uuid': None, 'disk_bus': 'virtio', 'boot_index': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'encryption_options': None, 'size': 0, 'encrypted': False, 'image_id': '6916f635-11b7-4158-ab13-60ff56406973'}], 'ephemerals': [], 'block_device_mapping': [{'guest_format': None, 'device_type': 'disk', 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-1c4642f5-e00c-416f-9c41-e5aa7293d85b', 'hosts': ['192.168.122.100', '192.168.122.102', '192.168.122.101'], 'ports': ['6789', '6789', '6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '1c4642f5-e00c-416f-9c41-e5aa7293d85b', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': 'fa71018e-7574-4438-bb85-43d1c96cf9b9', 'attached_at': '', 'detached_at': '', 'volume_id': '1c4642f5-e00c-416f-9c41-e5aa7293d85b', 'serial': '1c4642f5-e00c-416f-9c41-e5aa7293d85b'}, 'disk_bus': 'virtio', 'boot_index': None, 'delete_on_termination': False, 'mount_device': '/dev/vdb', 'attachment_id': '8af8b9ed-2781-4f3f-8961-beda66d23d33', 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec  6 02:51:26 np0005548731 nova_compute[232433]: 2025-12-06 07:51:26.364 232437 WARNING nova.virt.libvirt.driver [None req-96936b7e-0203-447f-816d-28f3e108a5b2 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  6 02:51:26 np0005548731 nova_compute[232433]: 2025-12-06 07:51:26.373 232437 DEBUG nova.virt.libvirt.host [None req-96936b7e-0203-447f-816d-28f3e108a5b2 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec  6 02:51:26 np0005548731 nova_compute[232433]: 2025-12-06 07:51:26.373 232437 DEBUG nova.virt.libvirt.host [None req-96936b7e-0203-447f-816d-28f3e108a5b2 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec  6 02:51:26 np0005548731 nova_compute[232433]: 2025-12-06 07:51:26.380 232437 DEBUG nova.virt.libvirt.host [None req-96936b7e-0203-447f-816d-28f3e108a5b2 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec  6 02:51:26 np0005548731 nova_compute[232433]: 2025-12-06 07:51:26.381 232437 DEBUG nova.virt.libvirt.host [None req-96936b7e-0203-447f-816d-28f3e108a5b2 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec  6 02:51:26 np0005548731 nova_compute[232433]: 2025-12-06 07:51:26.382 232437 DEBUG nova.virt.libvirt.driver [None req-96936b7e-0203-447f-816d-28f3e108a5b2 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec  6 02:51:26 np0005548731 nova_compute[232433]: 2025-12-06 07:51:26.382 232437 DEBUG nova.virt.hardware [None req-96936b7e-0203-447f-816d-28f3e108a5b2 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-06T06:56:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='25848a18-11d9-4f11-80b5-5d005675c76d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=6916f635-11b7-4158-ab13-60ff56406973,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec  6 02:51:26 np0005548731 nova_compute[232433]: 2025-12-06 07:51:26.382 232437 DEBUG nova.virt.hardware [None req-96936b7e-0203-447f-816d-28f3e108a5b2 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec  6 02:51:26 np0005548731 nova_compute[232433]: 2025-12-06 07:51:26.383 232437 DEBUG nova.virt.hardware [None req-96936b7e-0203-447f-816d-28f3e108a5b2 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec  6 02:51:26 np0005548731 nova_compute[232433]: 2025-12-06 07:51:26.383 232437 DEBUG nova.virt.hardware [None req-96936b7e-0203-447f-816d-28f3e108a5b2 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec  6 02:51:26 np0005548731 nova_compute[232433]: 2025-12-06 07:51:26.383 232437 DEBUG nova.virt.hardware [None req-96936b7e-0203-447f-816d-28f3e108a5b2 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec  6 02:51:26 np0005548731 nova_compute[232433]: 2025-12-06 07:51:26.383 232437 DEBUG nova.virt.hardware [None req-96936b7e-0203-447f-816d-28f3e108a5b2 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec  6 02:51:26 np0005548731 nova_compute[232433]: 2025-12-06 07:51:26.384 232437 DEBUG nova.virt.hardware [None req-96936b7e-0203-447f-816d-28f3e108a5b2 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec  6 02:51:26 np0005548731 nova_compute[232433]: 2025-12-06 07:51:26.384 232437 DEBUG nova.virt.hardware [None req-96936b7e-0203-447f-816d-28f3e108a5b2 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec  6 02:51:26 np0005548731 nova_compute[232433]: 2025-12-06 07:51:26.384 232437 DEBUG nova.virt.hardware [None req-96936b7e-0203-447f-816d-28f3e108a5b2 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec  6 02:51:26 np0005548731 nova_compute[232433]: 2025-12-06 07:51:26.384 232437 DEBUG nova.virt.hardware [None req-96936b7e-0203-447f-816d-28f3e108a5b2 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec  6 02:51:26 np0005548731 nova_compute[232433]: 2025-12-06 07:51:26.385 232437 DEBUG nova.virt.hardware [None req-96936b7e-0203-447f-816d-28f3e108a5b2 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec  6 02:51:26 np0005548731 nova_compute[232433]: 2025-12-06 07:51:26.385 232437 DEBUG nova.objects.instance [None req-96936b7e-0203-447f-816d-28f3e108a5b2 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] Lazy-loading 'vcpu_model' on Instance uuid fa71018e-7574-4438-bb85-43d1c96cf9b9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:51:26 np0005548731 podman[309362]: 2025-12-06 07:51:26.403130539 +0000 UTC m=+0.637671668 container cleanup 5a76585a745b69166b7d9e1bea06442b653604ef013e81a91769d410f08ef7e2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f08afae8-f952-4a01-a643-61a4dc212937, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251125)
Dec  6 02:51:26 np0005548731 systemd[1]: libpod-conmon-5a76585a745b69166b7d9e1bea06442b653604ef013e81a91769d410f08ef7e2.scope: Deactivated successfully.
Dec  6 02:51:26 np0005548731 podman[309391]: 2025-12-06 07:51:26.472704016 +0000 UTC m=+0.046123706 container remove 5a76585a745b69166b7d9e1bea06442b653604ef013e81a91769d410f08ef7e2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f08afae8-f952-4a01-a643-61a4dc212937, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125)
Dec  6 02:51:26 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:51:26.478 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[0d8e559a-4118-4b78-80ea-26255e587f98]: (4, ('Sat Dec  6 07:51:25 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-f08afae8-f952-4a01-a643-61a4dc212937 (5a76585a745b69166b7d9e1bea06442b653604ef013e81a91769d410f08ef7e2)\n5a76585a745b69166b7d9e1bea06442b653604ef013e81a91769d410f08ef7e2\nSat Dec  6 07:51:26 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-f08afae8-f952-4a01-a643-61a4dc212937 (5a76585a745b69166b7d9e1bea06442b653604ef013e81a91769d410f08ef7e2)\n5a76585a745b69166b7d9e1bea06442b653604ef013e81a91769d410f08ef7e2\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:51:26 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:51:26.480 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[67121f42-4ea3-45d8-a38a-fb01629c0c17]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:51:26 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:51:26.481 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf08afae8-f0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:51:26 np0005548731 nova_compute[232433]: 2025-12-06 07:51:26.483 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:51:26 np0005548731 kernel: tapf08afae8-f0: left promiscuous mode
Dec  6 02:51:26 np0005548731 nova_compute[232433]: 2025-12-06 07:51:26.485 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:51:26 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:51:26.489 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[83f489fa-2257-426c-9320-f79d474e78e5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:51:26 np0005548731 nova_compute[232433]: 2025-12-06 07:51:26.497 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:51:26 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:51:26.505 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[c15e2851-966c-4596-bba9-f5b9b211393d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:51:26 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:51:26.507 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[10b2559d-d3b8-4e3a-888d-7c5e05270074]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:51:26 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:51:26.522 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[00b95bc3-d147-4326-938f-0fd822f469c1]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 768982, 'reachable_time': 31107, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 309404, 'error': None, 'target': 'ovnmeta-f08afae8-f952-4a01-a643-61a4dc212937', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:51:26 np0005548731 systemd[1]: run-netns-ovnmeta\x2df08afae8\x2df952\x2d4a01\x2da643\x2d61a4dc212937.mount: Deactivated successfully.
Dec  6 02:51:26 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:51:26.526 144079 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-f08afae8-f952-4a01-a643-61a4dc212937 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Dec  6 02:51:26 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:51:26.526 144079 DEBUG oslo.privsep.daemon [-] privsep: reply[5a365348-0e51-4493-a6a3-336ccd675f8a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:51:26 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e376 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:51:26 np0005548731 nova_compute[232433]: 2025-12-06 07:51:26.645 232437 DEBUG oslo_concurrency.processutils [None req-96936b7e-0203-447f-816d-28f3e108a5b2 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:51:26 np0005548731 nova_compute[232433]: 2025-12-06 07:51:26.677 232437 DEBUG oslo_concurrency.lockutils [None req-516c9607-79cd-44d9-bc89-e8949d0d7962 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] Acquiring lock "0b9681c0-c0e7-4bd8-9040-865c1bff517b" by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:51:26 np0005548731 nova_compute[232433]: 2025-12-06 07:51:26.678 232437 DEBUG oslo_concurrency.lockutils [None req-516c9607-79cd-44d9-bc89-e8949d0d7962 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] Lock "0b9681c0-c0e7-4bd8-9040-865c1bff517b" acquired by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:51:26 np0005548731 nova_compute[232433]: 2025-12-06 07:51:26.678 232437 INFO nova.compute.manager [None req-516c9607-79cd-44d9-bc89-e8949d0d7962 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] [instance: 0b9681c0-c0e7-4bd8-9040-865c1bff517b] Shelving#033[00m
Dec  6 02:51:26 np0005548731 nova_compute[232433]: 2025-12-06 07:51:26.750 232437 DEBUG nova.virt.libvirt.driver [None req-516c9607-79cd-44d9-bc89-e8949d0d7962 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] [instance: 0b9681c0-c0e7-4bd8-9040-865c1bff517b] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Dec  6 02:51:26 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:51:26 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:51:26 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:51:26.869 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:51:27 np0005548731 nova_compute[232433]: 2025-12-06 07:51:27.022 232437 DEBUG nova.compute.manager [req-7f6755b5-02d8-4b92-bd60-d6eeba66ed75 req-c7d1352c-9a95-4478-a230-c47ffbf0faa1 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: fa71018e-7574-4438-bb85-43d1c96cf9b9] Received event network-vif-unplugged-341d07a0-1551-46f6-85b6-aace80d14532 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:51:27 np0005548731 nova_compute[232433]: 2025-12-06 07:51:27.023 232437 DEBUG oslo_concurrency.lockutils [req-7f6755b5-02d8-4b92-bd60-d6eeba66ed75 req-c7d1352c-9a95-4478-a230-c47ffbf0faa1 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "fa71018e-7574-4438-bb85-43d1c96cf9b9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:51:27 np0005548731 nova_compute[232433]: 2025-12-06 07:51:27.023 232437 DEBUG oslo_concurrency.lockutils [req-7f6755b5-02d8-4b92-bd60-d6eeba66ed75 req-c7d1352c-9a95-4478-a230-c47ffbf0faa1 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "fa71018e-7574-4438-bb85-43d1c96cf9b9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:51:27 np0005548731 nova_compute[232433]: 2025-12-06 07:51:27.024 232437 DEBUG oslo_concurrency.lockutils [req-7f6755b5-02d8-4b92-bd60-d6eeba66ed75 req-c7d1352c-9a95-4478-a230-c47ffbf0faa1 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "fa71018e-7574-4438-bb85-43d1c96cf9b9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:51:27 np0005548731 nova_compute[232433]: 2025-12-06 07:51:27.024 232437 DEBUG nova.compute.manager [req-7f6755b5-02d8-4b92-bd60-d6eeba66ed75 req-c7d1352c-9a95-4478-a230-c47ffbf0faa1 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: fa71018e-7574-4438-bb85-43d1c96cf9b9] No waiting events found dispatching network-vif-unplugged-341d07a0-1551-46f6-85b6-aace80d14532 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 02:51:27 np0005548731 nova_compute[232433]: 2025-12-06 07:51:27.024 232437 WARNING nova.compute.manager [req-7f6755b5-02d8-4b92-bd60-d6eeba66ed75 req-c7d1352c-9a95-4478-a230-c47ffbf0faa1 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: fa71018e-7574-4438-bb85-43d1c96cf9b9] Received unexpected event network-vif-unplugged-341d07a0-1551-46f6-85b6-aace80d14532 for instance with vm_state active and task_state reboot_started_hard.#033[00m
Dec  6 02:51:27 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  6 02:51:27 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/600074495' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  6 02:51:27 np0005548731 nova_compute[232433]: 2025-12-06 07:51:27.094 232437 DEBUG oslo_concurrency.processutils [None req-96936b7e-0203-447f-816d-28f3e108a5b2 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.449s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:51:27 np0005548731 nova_compute[232433]: 2025-12-06 07:51:27.139 232437 DEBUG oslo_concurrency.processutils [None req-96936b7e-0203-447f-816d-28f3e108a5b2 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:51:27 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  6 02:51:27 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1438578257' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  6 02:51:27 np0005548731 nova_compute[232433]: 2025-12-06 07:51:27.576 232437 DEBUG oslo_concurrency.processutils [None req-96936b7e-0203-447f-816d-28f3e108a5b2 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.437s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:51:27 np0005548731 nova_compute[232433]: 2025-12-06 07:51:27.682 232437 DEBUG nova.virt.libvirt.vif [None req-96936b7e-0203-447f-816d-28f3e108a5b2 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T07:50:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestMinimumBasicScenario-server-235586357',display_name='tempest-TestMinimumBasicScenario-server-235586357',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-testminimumbasicscenario-server-235586357',id=166,image_ref='6916f635-11b7-4158-ab13-60ff56406973',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKJFOYwUkK/AU5j/16C3dY42Bin5X9do17czkQVWr96Bc5yu/WMHsTfBb0AVGwIJHTi79KNz3aKY1rNv6m7H2M+tAVea1sAVgXE0sQ5V7zYqD0Y4j4BUibNqoce1/1RtaQ==',key_name='tempest-TestMinimumBasicScenario-1468814315',keypairs=<?>,launch_index=0,launched_at=2025-12-06T07:50:28Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='0c8fc5bc237e42bfad505a0bca6681eb',ramdisk_id='',reservation_id='r-fawmn8ln',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6916f635-11b7-4158-ab13-60ff56406973',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestMinimumBasicScenario-1310413980',owner_user_name='tempest-TestMinimumBasicScenario-1310413980-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T07:51:23Z,user_data=None,user_id='0f669e963dc54ad7bebf8dd20341428a',uuid=fa71018e-7574-4438-bb85-43d1c96cf9b9,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "341d07a0-1551-46f6-85b6-aace80d14532", "address": "fa:16:3e:71:74:18", "network": {"id": "f08afae8-f952-4a01-a643-61a4dc212937", "bridge": "br-int", "label": "tempest-TestMinimumBasicScenario-655838283-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.186", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0c8fc5bc237e42bfad505a0bca6681eb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap341d07a0-15", "ovs_interfaceid": "341d07a0-1551-46f6-85b6-aace80d14532", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec  6 02:51:27 np0005548731 nova_compute[232433]: 2025-12-06 07:51:27.683 232437 DEBUG nova.network.os_vif_util [None req-96936b7e-0203-447f-816d-28f3e108a5b2 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] Converting VIF {"id": "341d07a0-1551-46f6-85b6-aace80d14532", "address": "fa:16:3e:71:74:18", "network": {"id": "f08afae8-f952-4a01-a643-61a4dc212937", "bridge": "br-int", "label": "tempest-TestMinimumBasicScenario-655838283-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.186", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0c8fc5bc237e42bfad505a0bca6681eb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap341d07a0-15", "ovs_interfaceid": "341d07a0-1551-46f6-85b6-aace80d14532", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 02:51:27 np0005548731 nova_compute[232433]: 2025-12-06 07:51:27.684 232437 DEBUG nova.network.os_vif_util [None req-96936b7e-0203-447f-816d-28f3e108a5b2 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:71:74:18,bridge_name='br-int',has_traffic_filtering=True,id=341d07a0-1551-46f6-85b6-aace80d14532,network=Network(f08afae8-f952-4a01-a643-61a4dc212937),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap341d07a0-15') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 02:51:27 np0005548731 nova_compute[232433]: 2025-12-06 07:51:27.685 232437 DEBUG nova.objects.instance [None req-96936b7e-0203-447f-816d-28f3e108a5b2 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] Lazy-loading 'pci_devices' on Instance uuid fa71018e-7574-4438-bb85-43d1c96cf9b9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:51:27 np0005548731 nova_compute[232433]: 2025-12-06 07:51:27.716 232437 DEBUG nova.virt.libvirt.driver [None req-96936b7e-0203-447f-816d-28f3e108a5b2 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] [instance: fa71018e-7574-4438-bb85-43d1c96cf9b9] End _get_guest_xml xml=<domain type="kvm">
Dec  6 02:51:27 np0005548731 nova_compute[232433]:  <uuid>fa71018e-7574-4438-bb85-43d1c96cf9b9</uuid>
Dec  6 02:51:27 np0005548731 nova_compute[232433]:  <name>instance-000000a6</name>
Dec  6 02:51:27 np0005548731 nova_compute[232433]:  <memory>131072</memory>
Dec  6 02:51:27 np0005548731 nova_compute[232433]:  <vcpu>1</vcpu>
Dec  6 02:51:27 np0005548731 nova_compute[232433]:  <metadata>
Dec  6 02:51:27 np0005548731 nova_compute[232433]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec  6 02:51:27 np0005548731 nova_compute[232433]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec  6 02:51:27 np0005548731 nova_compute[232433]:      <nova:name>tempest-TestMinimumBasicScenario-server-235586357</nova:name>
Dec  6 02:51:27 np0005548731 nova_compute[232433]:      <nova:creationTime>2025-12-06 07:51:26</nova:creationTime>
Dec  6 02:51:27 np0005548731 nova_compute[232433]:      <nova:flavor name="m1.nano">
Dec  6 02:51:27 np0005548731 nova_compute[232433]:        <nova:memory>128</nova:memory>
Dec  6 02:51:27 np0005548731 nova_compute[232433]:        <nova:disk>1</nova:disk>
Dec  6 02:51:27 np0005548731 nova_compute[232433]:        <nova:swap>0</nova:swap>
Dec  6 02:51:27 np0005548731 nova_compute[232433]:        <nova:ephemeral>0</nova:ephemeral>
Dec  6 02:51:27 np0005548731 nova_compute[232433]:        <nova:vcpus>1</nova:vcpus>
Dec  6 02:51:27 np0005548731 nova_compute[232433]:      </nova:flavor>
Dec  6 02:51:27 np0005548731 nova_compute[232433]:      <nova:owner>
Dec  6 02:51:27 np0005548731 nova_compute[232433]:        <nova:user uuid="0f669e963dc54ad7bebf8dd20341428a">tempest-TestMinimumBasicScenario-1310413980-project-member</nova:user>
Dec  6 02:51:27 np0005548731 nova_compute[232433]:        <nova:project uuid="0c8fc5bc237e42bfad505a0bca6681eb">tempest-TestMinimumBasicScenario-1310413980</nova:project>
Dec  6 02:51:27 np0005548731 nova_compute[232433]:      </nova:owner>
Dec  6 02:51:27 np0005548731 nova_compute[232433]:      <nova:root type="image" uuid="6916f635-11b7-4158-ab13-60ff56406973"/>
Dec  6 02:51:27 np0005548731 nova_compute[232433]:      <nova:ports>
Dec  6 02:51:27 np0005548731 nova_compute[232433]:        <nova:port uuid="341d07a0-1551-46f6-85b6-aace80d14532">
Dec  6 02:51:27 np0005548731 nova_compute[232433]:          <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Dec  6 02:51:27 np0005548731 nova_compute[232433]:        </nova:port>
Dec  6 02:51:27 np0005548731 nova_compute[232433]:      </nova:ports>
Dec  6 02:51:27 np0005548731 nova_compute[232433]:    </nova:instance>
Dec  6 02:51:27 np0005548731 nova_compute[232433]:  </metadata>
Dec  6 02:51:27 np0005548731 nova_compute[232433]:  <sysinfo type="smbios">
Dec  6 02:51:27 np0005548731 nova_compute[232433]:    <system>
Dec  6 02:51:27 np0005548731 nova_compute[232433]:      <entry name="manufacturer">RDO</entry>
Dec  6 02:51:27 np0005548731 nova_compute[232433]:      <entry name="product">OpenStack Compute</entry>
Dec  6 02:51:27 np0005548731 nova_compute[232433]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec  6 02:51:27 np0005548731 nova_compute[232433]:      <entry name="serial">fa71018e-7574-4438-bb85-43d1c96cf9b9</entry>
Dec  6 02:51:27 np0005548731 nova_compute[232433]:      <entry name="uuid">fa71018e-7574-4438-bb85-43d1c96cf9b9</entry>
Dec  6 02:51:27 np0005548731 nova_compute[232433]:      <entry name="family">Virtual Machine</entry>
Dec  6 02:51:27 np0005548731 nova_compute[232433]:    </system>
Dec  6 02:51:27 np0005548731 nova_compute[232433]:  </sysinfo>
Dec  6 02:51:27 np0005548731 nova_compute[232433]:  <os>
Dec  6 02:51:27 np0005548731 nova_compute[232433]:    <type arch="x86_64" machine="q35">hvm</type>
Dec  6 02:51:27 np0005548731 nova_compute[232433]:    <boot dev="hd"/>
Dec  6 02:51:27 np0005548731 nova_compute[232433]:    <smbios mode="sysinfo"/>
Dec  6 02:51:27 np0005548731 nova_compute[232433]:  </os>
Dec  6 02:51:27 np0005548731 nova_compute[232433]:  <features>
Dec  6 02:51:27 np0005548731 nova_compute[232433]:    <acpi/>
Dec  6 02:51:27 np0005548731 nova_compute[232433]:    <apic/>
Dec  6 02:51:27 np0005548731 nova_compute[232433]:    <vmcoreinfo/>
Dec  6 02:51:27 np0005548731 nova_compute[232433]:  </features>
Dec  6 02:51:27 np0005548731 nova_compute[232433]:  <clock offset="utc">
Dec  6 02:51:27 np0005548731 nova_compute[232433]:    <timer name="pit" tickpolicy="delay"/>
Dec  6 02:51:27 np0005548731 nova_compute[232433]:    <timer name="rtc" tickpolicy="catchup"/>
Dec  6 02:51:27 np0005548731 nova_compute[232433]:    <timer name="hpet" present="no"/>
Dec  6 02:51:27 np0005548731 nova_compute[232433]:  </clock>
Dec  6 02:51:27 np0005548731 nova_compute[232433]:  <cpu mode="custom" match="exact">
Dec  6 02:51:27 np0005548731 nova_compute[232433]:    <model>Nehalem</model>
Dec  6 02:51:27 np0005548731 nova_compute[232433]:    <topology sockets="1" cores="1" threads="1"/>
Dec  6 02:51:27 np0005548731 nova_compute[232433]:  </cpu>
Dec  6 02:51:27 np0005548731 nova_compute[232433]:  <devices>
Dec  6 02:51:27 np0005548731 nova_compute[232433]:    <disk type="network" device="disk">
Dec  6 02:51:27 np0005548731 nova_compute[232433]:      <driver type="raw" cache="none"/>
Dec  6 02:51:27 np0005548731 nova_compute[232433]:      <source protocol="rbd" name="vms/fa71018e-7574-4438-bb85-43d1c96cf9b9_disk">
Dec  6 02:51:27 np0005548731 nova_compute[232433]:        <host name="192.168.122.100" port="6789"/>
Dec  6 02:51:27 np0005548731 nova_compute[232433]:        <host name="192.168.122.102" port="6789"/>
Dec  6 02:51:27 np0005548731 nova_compute[232433]:        <host name="192.168.122.101" port="6789"/>
Dec  6 02:51:27 np0005548731 nova_compute[232433]:      </source>
Dec  6 02:51:27 np0005548731 nova_compute[232433]:      <auth username="openstack">
Dec  6 02:51:27 np0005548731 nova_compute[232433]:        <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec  6 02:51:27 np0005548731 nova_compute[232433]:      </auth>
Dec  6 02:51:27 np0005548731 nova_compute[232433]:      <target dev="vda" bus="virtio"/>
Dec  6 02:51:27 np0005548731 nova_compute[232433]:    </disk>
Dec  6 02:51:27 np0005548731 nova_compute[232433]:    <disk type="network" device="cdrom">
Dec  6 02:51:27 np0005548731 nova_compute[232433]:      <driver type="raw" cache="none"/>
Dec  6 02:51:27 np0005548731 nova_compute[232433]:      <source protocol="rbd" name="vms/fa71018e-7574-4438-bb85-43d1c96cf9b9_disk.config">
Dec  6 02:51:27 np0005548731 nova_compute[232433]:        <host name="192.168.122.100" port="6789"/>
Dec  6 02:51:27 np0005548731 nova_compute[232433]:        <host name="192.168.122.102" port="6789"/>
Dec  6 02:51:27 np0005548731 nova_compute[232433]:        <host name="192.168.122.101" port="6789"/>
Dec  6 02:51:27 np0005548731 nova_compute[232433]:      </source>
Dec  6 02:51:27 np0005548731 nova_compute[232433]:      <auth username="openstack">
Dec  6 02:51:27 np0005548731 nova_compute[232433]:        <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec  6 02:51:27 np0005548731 nova_compute[232433]:      </auth>
Dec  6 02:51:27 np0005548731 nova_compute[232433]:      <target dev="sda" bus="sata"/>
Dec  6 02:51:27 np0005548731 nova_compute[232433]:    </disk>
Dec  6 02:51:27 np0005548731 nova_compute[232433]:    <disk type="network" device="disk">
Dec  6 02:51:27 np0005548731 nova_compute[232433]:      <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Dec  6 02:51:27 np0005548731 nova_compute[232433]:      <source protocol="rbd" name="volumes/volume-1c4642f5-e00c-416f-9c41-e5aa7293d85b">
Dec  6 02:51:27 np0005548731 nova_compute[232433]:        <host name="192.168.122.100" port="6789"/>
Dec  6 02:51:27 np0005548731 nova_compute[232433]:        <host name="192.168.122.102" port="6789"/>
Dec  6 02:51:27 np0005548731 nova_compute[232433]:        <host name="192.168.122.101" port="6789"/>
Dec  6 02:51:27 np0005548731 nova_compute[232433]:      </source>
Dec  6 02:51:27 np0005548731 nova_compute[232433]:      <auth username="openstack">
Dec  6 02:51:27 np0005548731 nova_compute[232433]:        <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec  6 02:51:27 np0005548731 nova_compute[232433]:      </auth>
Dec  6 02:51:27 np0005548731 nova_compute[232433]:      <target dev="vdb" bus="virtio"/>
Dec  6 02:51:27 np0005548731 nova_compute[232433]:      <serial>1c4642f5-e00c-416f-9c41-e5aa7293d85b</serial>
Dec  6 02:51:27 np0005548731 nova_compute[232433]:    </disk>
Dec  6 02:51:27 np0005548731 nova_compute[232433]:    <interface type="ethernet">
Dec  6 02:51:27 np0005548731 nova_compute[232433]:      <mac address="fa:16:3e:71:74:18"/>
Dec  6 02:51:27 np0005548731 nova_compute[232433]:      <model type="virtio"/>
Dec  6 02:51:27 np0005548731 nova_compute[232433]:      <driver name="vhost" rx_queue_size="512"/>
Dec  6 02:51:27 np0005548731 nova_compute[232433]:      <mtu size="1442"/>
Dec  6 02:51:27 np0005548731 nova_compute[232433]:      <target dev="tap341d07a0-15"/>
Dec  6 02:51:27 np0005548731 nova_compute[232433]:    </interface>
Dec  6 02:51:27 np0005548731 nova_compute[232433]:    <serial type="pty">
Dec  6 02:51:27 np0005548731 nova_compute[232433]:      <log file="/var/lib/nova/instances/fa71018e-7574-4438-bb85-43d1c96cf9b9/console.log" append="off"/>
Dec  6 02:51:27 np0005548731 nova_compute[232433]:    </serial>
Dec  6 02:51:27 np0005548731 nova_compute[232433]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec  6 02:51:27 np0005548731 nova_compute[232433]:    <video>
Dec  6 02:51:27 np0005548731 nova_compute[232433]:      <model type="virtio"/>
Dec  6 02:51:27 np0005548731 nova_compute[232433]:    </video>
Dec  6 02:51:27 np0005548731 nova_compute[232433]:    <input type="tablet" bus="usb"/>
Dec  6 02:51:27 np0005548731 nova_compute[232433]:    <input type="keyboard" bus="usb"/>
Dec  6 02:51:27 np0005548731 nova_compute[232433]:    <rng model="virtio">
Dec  6 02:51:27 np0005548731 nova_compute[232433]:      <backend model="random">/dev/urandom</backend>
Dec  6 02:51:27 np0005548731 nova_compute[232433]:    </rng>
Dec  6 02:51:27 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root"/>
Dec  6 02:51:27 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:51:27 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:51:27 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:51:27 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:51:27 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:51:27 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:51:27 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:51:27 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:51:27 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:51:27 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:51:27 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:51:27 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:51:27 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:51:27 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:51:27 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:51:27 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:51:27 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:51:27 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:51:27 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:51:27 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:51:27 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:51:27 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:51:27 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:51:27 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:51:27 np0005548731 nova_compute[232433]:    <controller type="usb" index="0"/>
Dec  6 02:51:27 np0005548731 nova_compute[232433]:    <memballoon model="virtio">
Dec  6 02:51:27 np0005548731 nova_compute[232433]:      <stats period="10"/>
Dec  6 02:51:27 np0005548731 nova_compute[232433]:    </memballoon>
Dec  6 02:51:27 np0005548731 nova_compute[232433]:  </devices>
Dec  6 02:51:27 np0005548731 nova_compute[232433]: </domain>
Dec  6 02:51:27 np0005548731 nova_compute[232433]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec  6 02:51:27 np0005548731 nova_compute[232433]: 2025-12-06 07:51:27.718 232437 DEBUG nova.virt.libvirt.driver [None req-96936b7e-0203-447f-816d-28f3e108a5b2 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] skipping disk for instance-000000a6 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  6 02:51:27 np0005548731 nova_compute[232433]: 2025-12-06 07:51:27.718 232437 DEBUG nova.virt.libvirt.driver [None req-96936b7e-0203-447f-816d-28f3e108a5b2 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] skipping disk for instance-000000a6 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  6 02:51:27 np0005548731 nova_compute[232433]: 2025-12-06 07:51:27.718 232437 DEBUG nova.virt.libvirt.driver [None req-96936b7e-0203-447f-816d-28f3e108a5b2 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] skipping disk for instance-000000a6 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  6 02:51:27 np0005548731 nova_compute[232433]: 2025-12-06 07:51:27.719 232437 DEBUG nova.virt.libvirt.vif [None req-96936b7e-0203-447f-816d-28f3e108a5b2 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T07:50:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestMinimumBasicScenario-server-235586357',display_name='tempest-TestMinimumBasicScenario-server-235586357',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-testminimumbasicscenario-server-235586357',id=166,image_ref='6916f635-11b7-4158-ab13-60ff56406973',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKJFOYwUkK/AU5j/16C3dY42Bin5X9do17czkQVWr96Bc5yu/WMHsTfBb0AVGwIJHTi79KNz3aKY1rNv6m7H2M+tAVea1sAVgXE0sQ5V7zYqD0Y4j4BUibNqoce1/1RtaQ==',key_name='tempest-TestMinimumBasicScenario-1468814315',keypairs=<?>,launch_index=0,launched_at=2025-12-06T07:50:28Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=<?>,power_state=1,progress=0,project_id='0c8fc5bc237e42bfad505a0bca6681eb',ramdisk_id='',reservation_id='r-fawmn8ln',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6916f635-11b7-4158-ab13-60ff56406973',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestMinimumBasicScenario-1310413980',owner_user_name='tempest-TestMinimumBasicScenario-1310413980-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T07:51:23Z,user_data=None,user_id='0f669e963dc54ad7bebf8dd20341428a',uuid=fa71018e-7574-4438-bb85-43d1c96cf9b9,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "341d07a0-1551-46f6-85b6-aace80d14532", "address": "fa:16:3e:71:74:18", "network": {"id": "f08afae8-f952-4a01-a643-61a4dc212937", "bridge": "br-int", "label": "tempest-TestMinimumBasicScenario-655838283-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.186", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0c8fc5bc237e42bfad505a0bca6681eb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap341d07a0-15", "ovs_interfaceid": "341d07a0-1551-46f6-85b6-aace80d14532", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec  6 02:51:27 np0005548731 nova_compute[232433]: 2025-12-06 07:51:27.720 232437 DEBUG nova.network.os_vif_util [None req-96936b7e-0203-447f-816d-28f3e108a5b2 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] Converting VIF {"id": "341d07a0-1551-46f6-85b6-aace80d14532", "address": "fa:16:3e:71:74:18", "network": {"id": "f08afae8-f952-4a01-a643-61a4dc212937", "bridge": "br-int", "label": "tempest-TestMinimumBasicScenario-655838283-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.186", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0c8fc5bc237e42bfad505a0bca6681eb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap341d07a0-15", "ovs_interfaceid": "341d07a0-1551-46f6-85b6-aace80d14532", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec  6 02:51:27 np0005548731 nova_compute[232433]: 2025-12-06 07:51:27.720 232437 DEBUG nova.network.os_vif_util [None req-96936b7e-0203-447f-816d-28f3e108a5b2 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:71:74:18,bridge_name='br-int',has_traffic_filtering=True,id=341d07a0-1551-46f6-85b6-aace80d14532,network=Network(f08afae8-f952-4a01-a643-61a4dc212937),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap341d07a0-15') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec  6 02:51:27 np0005548731 nova_compute[232433]: 2025-12-06 07:51:27.721 232437 DEBUG os_vif [None req-96936b7e-0203-447f-816d-28f3e108a5b2 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] Plugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:71:74:18,bridge_name='br-int',has_traffic_filtering=True,id=341d07a0-1551-46f6-85b6-aace80d14532,network=Network(f08afae8-f952-4a01-a643-61a4dc212937),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap341d07a0-15') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec  6 02:51:27 np0005548731 nova_compute[232433]: 2025-12-06 07:51:27.721 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 02:51:27 np0005548731 nova_compute[232433]: 2025-12-06 07:51:27.722 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec  6 02:51:27 np0005548731 nova_compute[232433]: 2025-12-06 07:51:27.722 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec  6 02:51:27 np0005548731 nova_compute[232433]: 2025-12-06 07:51:27.724 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 02:51:27 np0005548731 nova_compute[232433]: 2025-12-06 07:51:27.725 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap341d07a0-15, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec  6 02:51:27 np0005548731 nova_compute[232433]: 2025-12-06 07:51:27.725 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap341d07a0-15, col_values=(('external_ids', {'iface-id': '341d07a0-1551-46f6-85b6-aace80d14532', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:71:74:18', 'vm-uuid': 'fa71018e-7574-4438-bb85-43d1c96cf9b9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec  6 02:51:27 np0005548731 nova_compute[232433]: 2025-12-06 07:51:27.731 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 02:51:27 np0005548731 NetworkManager[49182]: <info>  [1765007487.7335] manager: (tap341d07a0-15): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/373)
Dec  6 02:51:27 np0005548731 nova_compute[232433]: 2025-12-06 07:51:27.734 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec  6 02:51:27 np0005548731 nova_compute[232433]: 2025-12-06 07:51:27.738 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 02:51:27 np0005548731 nova_compute[232433]: 2025-12-06 07:51:27.739 232437 INFO os_vif [None req-96936b7e-0203-447f-816d-28f3e108a5b2 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] Successfully plugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:71:74:18,bridge_name='br-int',has_traffic_filtering=True,id=341d07a0-1551-46f6-85b6-aace80d14532,network=Network(f08afae8-f952-4a01-a643-61a4dc212937),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap341d07a0-15')
Dec  6 02:51:27 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:51:27.776 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=66, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:ec:b3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '32:72:e7:89:e0:7d'}, ipsec=False) old=SB_Global(nb_cfg=65) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec  6 02:51:27 np0005548731 nova_compute[232433]: 2025-12-06 07:51:27.777 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 02:51:27 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:51:27.777 143965 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec  6 02:51:27 np0005548731 kernel: tap341d07a0-15: entered promiscuous mode
Dec  6 02:51:27 np0005548731 NetworkManager[49182]: <info>  [1765007487.7992] manager: (tap341d07a0-15): new Tun device (/org/freedesktop/NetworkManager/Devices/374)
Dec  6 02:51:27 np0005548731 ovn_controller[133927]: 2025-12-06T07:51:27Z|00817|binding|INFO|Claiming lport 341d07a0-1551-46f6-85b6-aace80d14532 for this chassis.
Dec  6 02:51:27 np0005548731 ovn_controller[133927]: 2025-12-06T07:51:27Z|00818|binding|INFO|341d07a0-1551-46f6-85b6-aace80d14532: Claiming fa:16:3e:71:74:18 10.100.0.13
Dec  6 02:51:27 np0005548731 systemd-udevd[309330]: Network interface NamePolicy= disabled on kernel command line.
Dec  6 02:51:27 np0005548731 nova_compute[232433]: 2025-12-06 07:51:27.800 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 02:51:27 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:51:27.807 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:71:74:18 10.100.0.13'], port_security=['fa:16:3e:71:74:18 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'fa71018e-7574-4438-bb85-43d1c96cf9b9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f08afae8-f952-4a01-a643-61a4dc212937', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0c8fc5bc237e42bfad505a0bca6681eb', 'neutron:revision_number': '6', 'neutron:security_group_ids': '6303b693-a47b-4375-8b74-5068fffec871 97e2366f-5968-4702-838b-5830aacf120f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.186'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e9bb6731-402a-4b02-b4cf-9ac913838fd2, chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], logical_port=341d07a0-1551-46f6-85b6-aace80d14532) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec  6 02:51:27 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:51:27.808 143965 INFO neutron.agent.ovn.metadata.agent [-] Port 341d07a0-1551-46f6-85b6-aace80d14532 in datapath f08afae8-f952-4a01-a643-61a4dc212937 bound to our chassis
Dec  6 02:51:27 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:51:27.810 143965 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f08afae8-f952-4a01-a643-61a4dc212937
Dec  6 02:51:27 np0005548731 NetworkManager[49182]: <info>  [1765007487.8122] device (tap341d07a0-15): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec  6 02:51:27 np0005548731 NetworkManager[49182]: <info>  [1765007487.8138] device (tap341d07a0-15): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec  6 02:51:27 np0005548731 ovn_controller[133927]: 2025-12-06T07:51:27Z|00819|binding|INFO|Setting lport 341d07a0-1551-46f6-85b6-aace80d14532 ovn-installed in OVS
Dec  6 02:51:27 np0005548731 ovn_controller[133927]: 2025-12-06T07:51:27Z|00820|binding|INFO|Setting lport 341d07a0-1551-46f6-85b6-aace80d14532 up in Southbound
Dec  6 02:51:27 np0005548731 nova_compute[232433]: 2025-12-06 07:51:27.820 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 02:51:27 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:51:27.822 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[ad103e36-e50d-426c-af32-c05709fd6f9e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  6 02:51:27 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:51:27.823 143965 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapf08afae8-f1 in ovnmeta-f08afae8-f952-4a01-a643-61a4dc212937 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Dec  6 02:51:27 np0005548731 nova_compute[232433]: 2025-12-06 07:51:27.825 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 02:51:27 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:51:27.827 236668 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapf08afae8-f0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec  6 02:51:27 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:51:27.827 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[39aaf073-c66d-4e67-a99c-dc07df6dae54]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  6 02:51:27 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:51:27.828 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[8e59e5e2-bc22-4e00-8f53-2d3fe5e1a979]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  6 02:51:27 np0005548731 systemd-machined[195355]: New machine qemu-83-instance-000000a6.
Dec  6 02:51:27 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:51:27.839 144079 DEBUG oslo.privsep.daemon [-] privsep: reply[699b39e0-d1ce-4618-be7c-58da85931f43]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  6 02:51:27 np0005548731 systemd[1]: Started Virtual Machine qemu-83-instance-000000a6.
Dec  6 02:51:27 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:51:27.851 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[cb63e5c2-5194-46c7-85be-fc55dc872ff1]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  6 02:51:27 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:51:27.883 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[6912c6c1-ab37-4728-be24-5ffd70753253]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  6 02:51:27 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:51:27.887 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[c2c18f47-9e60-4a2e-a399-f457ba4e78c5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  6 02:51:27 np0005548731 NetworkManager[49182]: <info>  [1765007487.8888] manager: (tapf08afae8-f0): new Veth device (/org/freedesktop/NetworkManager/Devices/375)
Dec  6 02:51:27 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:51:27.916 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[0ecc29ac-c52a-40ce-9892-d8943dce6888]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  6 02:51:27 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:51:27.920 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[16c024a3-dc16-490f-9454-7418c0a915ed]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  6 02:51:27 np0005548731 NetworkManager[49182]: <info>  [1765007487.9458] device (tapf08afae8-f0): carrier: link connected
Dec  6 02:51:27 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:51:27.954 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[7ae8efe7-067e-4fa5-b323-587967e0aae4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  6 02:51:27 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:51:27.971 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[745def70-8bcb-4047-b1a1-5905b5191641]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf08afae8-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:43:11:18'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 248], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 774995, 'reachable_time': 36170, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 309513, 'error': None, 'target': 'ovnmeta-f08afae8-f952-4a01-a643-61a4dc212937', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  6 02:51:27 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:51:27 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:51:27 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:51:27.972 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:51:27 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:51:27.989 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[f3d15cf9-1517-47fa-ae35-580fe701ab1c]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe43:1118'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 774995, 'tstamp': 774995}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 309514, 'error': None, 'target': 'ovnmeta-f08afae8-f952-4a01-a643-61a4dc212937', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  6 02:51:28 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:51:28.004 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[0b6632f1-e7bb-4e2f-a2cc-c04441c8895d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf08afae8-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:43:11:18'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 248], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 774995, 'reachable_time': 36170, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 309515, 'error': None, 'target': 'ovnmeta-f08afae8-f952-4a01-a643-61a4dc212937', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  6 02:51:28 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:51:28.037 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[6df516a1-5035-456f-a851-0c9b9d6d914d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  6 02:51:28 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:51:28.095 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[192089e6-4228-4b7c-b247-be43d516b939]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  6 02:51:28 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:51:28.096 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf08afae8-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec  6 02:51:28 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:51:28.096 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec  6 02:51:28 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:51:28.097 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf08afae8-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec  6 02:51:28 np0005548731 nova_compute[232433]: 2025-12-06 07:51:28.098 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 02:51:28 np0005548731 NetworkManager[49182]: <info>  [1765007488.0991] manager: (tapf08afae8-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/376)
Dec  6 02:51:28 np0005548731 kernel: tapf08afae8-f0: entered promiscuous mode
Dec  6 02:51:28 np0005548731 nova_compute[232433]: 2025-12-06 07:51:28.101 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 02:51:28 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:51:28.104 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf08afae8-f0, col_values=(('external_ids', {'iface-id': '684342c7-1709-4776-be04-f6d5a6b0b0ae'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec  6 02:51:28 np0005548731 ovn_controller[133927]: 2025-12-06T07:51:28Z|00821|binding|INFO|Releasing lport 684342c7-1709-4776-be04-f6d5a6b0b0ae from this chassis (sb_readonly=0)
Dec  6 02:51:28 np0005548731 nova_compute[232433]: 2025-12-06 07:51:28.105 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 02:51:28 np0005548731 nova_compute[232433]: 2025-12-06 07:51:28.106 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 02:51:28 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:51:28.107 143965 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/f08afae8-f952-4a01-a643-61a4dc212937.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/f08afae8-f952-4a01-a643-61a4dc212937.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec  6 02:51:28 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:51:28.108 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[5da55c40-1eb2-47dd-94c6-1b5c34bee7fc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  6 02:51:28 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:51:28.108 143965 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec  6 02:51:28 np0005548731 ovn_metadata_agent[143960]: global
Dec  6 02:51:28 np0005548731 ovn_metadata_agent[143960]:    log         /dev/log local0 debug
Dec  6 02:51:28 np0005548731 ovn_metadata_agent[143960]:    log-tag     haproxy-metadata-proxy-f08afae8-f952-4a01-a643-61a4dc212937
Dec  6 02:51:28 np0005548731 ovn_metadata_agent[143960]:    user        root
Dec  6 02:51:28 np0005548731 ovn_metadata_agent[143960]:    group       root
Dec  6 02:51:28 np0005548731 ovn_metadata_agent[143960]:    maxconn     1024
Dec  6 02:51:28 np0005548731 ovn_metadata_agent[143960]:    pidfile     /var/lib/neutron/external/pids/f08afae8-f952-4a01-a643-61a4dc212937.pid.haproxy
Dec  6 02:51:28 np0005548731 ovn_metadata_agent[143960]:    daemon
Dec  6 02:51:28 np0005548731 ovn_metadata_agent[143960]: 
Dec  6 02:51:28 np0005548731 ovn_metadata_agent[143960]: defaults
Dec  6 02:51:28 np0005548731 ovn_metadata_agent[143960]:    log global
Dec  6 02:51:28 np0005548731 ovn_metadata_agent[143960]:    mode http
Dec  6 02:51:28 np0005548731 ovn_metadata_agent[143960]:    option httplog
Dec  6 02:51:28 np0005548731 ovn_metadata_agent[143960]:    option dontlognull
Dec  6 02:51:28 np0005548731 ovn_metadata_agent[143960]:    option http-server-close
Dec  6 02:51:28 np0005548731 ovn_metadata_agent[143960]:    option forwardfor
Dec  6 02:51:28 np0005548731 ovn_metadata_agent[143960]:    retries                 3
Dec  6 02:51:28 np0005548731 ovn_metadata_agent[143960]:    timeout http-request    30s
Dec  6 02:51:28 np0005548731 ovn_metadata_agent[143960]:    timeout connect         30s
Dec  6 02:51:28 np0005548731 ovn_metadata_agent[143960]:    timeout client          32s
Dec  6 02:51:28 np0005548731 ovn_metadata_agent[143960]:    timeout server          32s
Dec  6 02:51:28 np0005548731 ovn_metadata_agent[143960]:    timeout http-keep-alive 30s
Dec  6 02:51:28 np0005548731 ovn_metadata_agent[143960]: 
Dec  6 02:51:28 np0005548731 ovn_metadata_agent[143960]: 
Dec  6 02:51:28 np0005548731 ovn_metadata_agent[143960]: listen listener
Dec  6 02:51:28 np0005548731 ovn_metadata_agent[143960]:    bind 169.254.169.254:80
Dec  6 02:51:28 np0005548731 ovn_metadata_agent[143960]:    server metadata /var/lib/neutron/metadata_proxy
Dec  6 02:51:28 np0005548731 ovn_metadata_agent[143960]:    http-request add-header X-OVN-Network-ID f08afae8-f952-4a01-a643-61a4dc212937
Dec  6 02:51:28 np0005548731 ovn_metadata_agent[143960]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Dec  6 02:51:28 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:51:28.109 143965 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-f08afae8-f952-4a01-a643-61a4dc212937', 'env', 'PROCESS_TAG=haproxy-f08afae8-f952-4a01-a643-61a4dc212937', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/f08afae8-f952-4a01-a643-61a4dc212937.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Dec  6 02:51:28 np0005548731 nova_compute[232433]: 2025-12-06 07:51:28.119 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:51:28 np0005548731 nova_compute[232433]: 2025-12-06 07:51:28.164 232437 DEBUG nova.network.neutron [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] [instance: fa71018e-7574-4438-bb85-43d1c96cf9b9] Updating instance_info_cache with network_info: [{"id": "341d07a0-1551-46f6-85b6-aace80d14532", "address": "fa:16:3e:71:74:18", "network": {"id": "f08afae8-f952-4a01-a643-61a4dc212937", "bridge": "br-int", "label": "tempest-TestMinimumBasicScenario-655838283-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.186", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0c8fc5bc237e42bfad505a0bca6681eb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap341d07a0-15", "ovs_interfaceid": "341d07a0-1551-46f6-85b6-aace80d14532", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 02:51:28 np0005548731 nova_compute[232433]: 2025-12-06 07:51:28.181 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Releasing lock "refresh_cache-fa71018e-7574-4438-bb85-43d1c96cf9b9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 02:51:28 np0005548731 nova_compute[232433]: 2025-12-06 07:51:28.182 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] [instance: fa71018e-7574-4438-bb85-43d1c96cf9b9] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Dec  6 02:51:28 np0005548731 nova_compute[232433]: 2025-12-06 07:51:28.183 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:51:28 np0005548731 nova_compute[232433]: 2025-12-06 07:51:28.183 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:51:28 np0005548731 nova_compute[232433]: 2025-12-06 07:51:28.184 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  6 02:51:28 np0005548731 podman[309584]: 2025-12-06 07:51:28.466075905 +0000 UTC m=+0.053402202 container create 9c29eef5ac0815812b262be9c10a7b68cbd0bf1529d8288781ac85822ee9c40e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f08afae8-f952-4a01-a643-61a4dc212937, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Dec  6 02:51:28 np0005548731 systemd[1]: Started libpod-conmon-9c29eef5ac0815812b262be9c10a7b68cbd0bf1529d8288781ac85822ee9c40e.scope.
Dec  6 02:51:28 np0005548731 systemd[1]: Started libcrun container.
Dec  6 02:51:28 np0005548731 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/489316ef5604d5410596aac10c3f0e78c66c207a3ad85f7bbe847e539f967d22/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec  6 02:51:28 np0005548731 podman[309584]: 2025-12-06 07:51:28.528403456 +0000 UTC m=+0.115729773 container init 9c29eef5ac0815812b262be9c10a7b68cbd0bf1529d8288781ac85822ee9c40e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f08afae8-f952-4a01-a643-61a4dc212937, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec  6 02:51:28 np0005548731 podman[309584]: 2025-12-06 07:51:28.436188037 +0000 UTC m=+0.023514354 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec  6 02:51:28 np0005548731 podman[309584]: 2025-12-06 07:51:28.534280909 +0000 UTC m=+0.121607206 container start 9c29eef5ac0815812b262be9c10a7b68cbd0bf1529d8288781ac85822ee9c40e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f08afae8-f952-4a01-a643-61a4dc212937, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  6 02:51:28 np0005548731 neutron-haproxy-ovnmeta-f08afae8-f952-4a01-a643-61a4dc212937[309619]: [NOTICE]   (309627) : New worker (309629) forked
Dec  6 02:51:28 np0005548731 neutron-haproxy-ovnmeta-f08afae8-f952-4a01-a643-61a4dc212937[309619]: [NOTICE]   (309627) : Loading success.
Dec  6 02:51:28 np0005548731 nova_compute[232433]: 2025-12-06 07:51:28.596 232437 DEBUG nova.virt.libvirt.host [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Removed pending event for fa71018e-7574-4438-bb85-43d1c96cf9b9 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Dec  6 02:51:28 np0005548731 nova_compute[232433]: 2025-12-06 07:51:28.597 232437 DEBUG nova.virt.driver [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Emitting event <LifecycleEvent: 1765007488.596297, fa71018e-7574-4438-bb85-43d1c96cf9b9 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 02:51:28 np0005548731 nova_compute[232433]: 2025-12-06 07:51:28.597 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: fa71018e-7574-4438-bb85-43d1c96cf9b9] VM Resumed (Lifecycle Event)#033[00m
Dec  6 02:51:28 np0005548731 nova_compute[232433]: 2025-12-06 07:51:28.599 232437 DEBUG nova.compute.manager [None req-96936b7e-0203-447f-816d-28f3e108a5b2 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] [instance: fa71018e-7574-4438-bb85-43d1c96cf9b9] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec  6 02:51:28 np0005548731 nova_compute[232433]: 2025-12-06 07:51:28.602 232437 INFO nova.virt.libvirt.driver [-] [instance: fa71018e-7574-4438-bb85-43d1c96cf9b9] Instance rebooted successfully.#033[00m
Dec  6 02:51:28 np0005548731 nova_compute[232433]: 2025-12-06 07:51:28.602 232437 DEBUG nova.compute.manager [None req-96936b7e-0203-447f-816d-28f3e108a5b2 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] [instance: fa71018e-7574-4438-bb85-43d1c96cf9b9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:51:28 np0005548731 nova_compute[232433]: 2025-12-06 07:51:28.726 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: fa71018e-7574-4438-bb85-43d1c96cf9b9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:51:28 np0005548731 nova_compute[232433]: 2025-12-06 07:51:28.731 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: fa71018e-7574-4438-bb85-43d1c96cf9b9] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: reboot_started_hard, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  6 02:51:28 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:51:28.779 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=9f96b960-b4f2-40bd-ae99-08121f5e8b78, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '66'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:51:28 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:51:28 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:51:28 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:51:28.871 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:51:28 np0005548731 nova_compute[232433]: 2025-12-06 07:51:28.885 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: fa71018e-7574-4438-bb85-43d1c96cf9b9] During sync_power_state the instance has a pending task (reboot_started_hard). Skip.#033[00m
Dec  6 02:51:28 np0005548731 nova_compute[232433]: 2025-12-06 07:51:28.886 232437 DEBUG nova.virt.driver [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Emitting event <LifecycleEvent: 1765007488.5964215, fa71018e-7574-4438-bb85-43d1c96cf9b9 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 02:51:28 np0005548731 nova_compute[232433]: 2025-12-06 07:51:28.886 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: fa71018e-7574-4438-bb85-43d1c96cf9b9] VM Started (Lifecycle Event)#033[00m
Dec  6 02:51:28 np0005548731 nova_compute[232433]: 2025-12-06 07:51:28.969 232437 DEBUG oslo_concurrency.lockutils [None req-96936b7e-0203-447f-816d-28f3e108a5b2 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] Lock "fa71018e-7574-4438-bb85-43d1c96cf9b9" "released" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: held 24.215s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:51:28 np0005548731 nova_compute[232433]: 2025-12-06 07:51:28.980 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: fa71018e-7574-4438-bb85-43d1c96cf9b9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:51:28 np0005548731 nova_compute[232433]: 2025-12-06 07:51:28.984 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: fa71018e-7574-4438-bb85-43d1c96cf9b9] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  6 02:51:29 np0005548731 kernel: tap1d320b87-e6 (unregistering): left promiscuous mode
Dec  6 02:51:29 np0005548731 NetworkManager[49182]: <info>  [1765007489.0361] device (tap1d320b87-e6): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec  6 02:51:29 np0005548731 ovn_controller[133927]: 2025-12-06T07:51:29Z|00822|binding|INFO|Releasing lport 1d320b87-e6ec-40ef-b2ce-50bd50b6f5fc from this chassis (sb_readonly=0)
Dec  6 02:51:29 np0005548731 ovn_controller[133927]: 2025-12-06T07:51:29Z|00823|binding|INFO|Setting lport 1d320b87-e6ec-40ef-b2ce-50bd50b6f5fc down in Southbound
Dec  6 02:51:29 np0005548731 nova_compute[232433]: 2025-12-06 07:51:29.043 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:51:29 np0005548731 ovn_controller[133927]: 2025-12-06T07:51:29Z|00824|binding|INFO|Removing iface tap1d320b87-e6 ovn-installed in OVS
Dec  6 02:51:29 np0005548731 nova_compute[232433]: 2025-12-06 07:51:29.045 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:51:29 np0005548731 nova_compute[232433]: 2025-12-06 07:51:29.057 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:51:29 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:51:29.084 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b7:01:5b 10.100.0.14'], port_security=['fa:16:3e:b7:01:5b 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '0b9681c0-c0e7-4bd8-9040-865c1bff517b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-45904a2f-a5c2-4047-9c19-a87d36354c1b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'cfa713d92cc94fa1b94404ed58b0563f', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'a26fe1ae-b98b-40c8-b5a2-fa6264313a90', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com', 'neutron:port_fip': '192.168.122.189'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5411afcf-f935-4976-affc-7b12214f8e50, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], logical_port=1d320b87-e6ec-40ef-b2ce-50bd50b6f5fc) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 02:51:29 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:51:29.085 143965 INFO neutron.agent.ovn.metadata.agent [-] Port 1d320b87-e6ec-40ef-b2ce-50bd50b6f5fc in datapath 45904a2f-a5c2-4047-9c19-a87d36354c1b unbound from our chassis#033[00m
Dec  6 02:51:29 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:51:29.087 143965 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 45904a2f-a5c2-4047-9c19-a87d36354c1b, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec  6 02:51:29 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:51:29.088 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[6373c3b0-e445-4d0b-9765-7f0eb48922de]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:51:29 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:51:29.089 143965 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-45904a2f-a5c2-4047-9c19-a87d36354c1b namespace which is not needed anymore#033[00m
Dec  6 02:51:29 np0005548731 systemd[1]: machine-qemu\x2d82\x2dinstance\x2d000000a8.scope: Deactivated successfully.
Dec  6 02:51:29 np0005548731 systemd[1]: machine-qemu\x2d82\x2dinstance\x2d000000a8.scope: Consumed 14.857s CPU time.
Dec  6 02:51:29 np0005548731 systemd-machined[195355]: Machine qemu-82-instance-000000a8 terminated.
Dec  6 02:51:29 np0005548731 nova_compute[232433]: 2025-12-06 07:51:29.167 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:51:29 np0005548731 nova_compute[232433]: 2025-12-06 07:51:29.178 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:51:29 np0005548731 nova_compute[232433]: 2025-12-06 07:51:29.210 232437 DEBUG nova.compute.manager [req-88c9bace-8c9d-4e1b-8ef8-d757dee8d447 req-198d0811-07a1-4c3c-b3bd-1d04024f9c8a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: fa71018e-7574-4438-bb85-43d1c96cf9b9] Received event network-vif-plugged-341d07a0-1551-46f6-85b6-aace80d14532 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:51:29 np0005548731 nova_compute[232433]: 2025-12-06 07:51:29.211 232437 DEBUG oslo_concurrency.lockutils [req-88c9bace-8c9d-4e1b-8ef8-d757dee8d447 req-198d0811-07a1-4c3c-b3bd-1d04024f9c8a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "fa71018e-7574-4438-bb85-43d1c96cf9b9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:51:29 np0005548731 nova_compute[232433]: 2025-12-06 07:51:29.211 232437 DEBUG oslo_concurrency.lockutils [req-88c9bace-8c9d-4e1b-8ef8-d757dee8d447 req-198d0811-07a1-4c3c-b3bd-1d04024f9c8a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "fa71018e-7574-4438-bb85-43d1c96cf9b9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:51:29 np0005548731 nova_compute[232433]: 2025-12-06 07:51:29.212 232437 DEBUG oslo_concurrency.lockutils [req-88c9bace-8c9d-4e1b-8ef8-d757dee8d447 req-198d0811-07a1-4c3c-b3bd-1d04024f9c8a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "fa71018e-7574-4438-bb85-43d1c96cf9b9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:51:29 np0005548731 nova_compute[232433]: 2025-12-06 07:51:29.212 232437 DEBUG nova.compute.manager [req-88c9bace-8c9d-4e1b-8ef8-d757dee8d447 req-198d0811-07a1-4c3c-b3bd-1d04024f9c8a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: fa71018e-7574-4438-bb85-43d1c96cf9b9] No waiting events found dispatching network-vif-plugged-341d07a0-1551-46f6-85b6-aace80d14532 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 02:51:29 np0005548731 nova_compute[232433]: 2025-12-06 07:51:29.212 232437 WARNING nova.compute.manager [req-88c9bace-8c9d-4e1b-8ef8-d757dee8d447 req-198d0811-07a1-4c3c-b3bd-1d04024f9c8a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: fa71018e-7574-4438-bb85-43d1c96cf9b9] Received unexpected event network-vif-plugged-341d07a0-1551-46f6-85b6-aace80d14532 for instance with vm_state active and task_state None.#033[00m
Dec  6 02:51:29 np0005548731 nova_compute[232433]: 2025-12-06 07:51:29.212 232437 DEBUG nova.compute.manager [req-88c9bace-8c9d-4e1b-8ef8-d757dee8d447 req-198d0811-07a1-4c3c-b3bd-1d04024f9c8a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: fa71018e-7574-4438-bb85-43d1c96cf9b9] Received event network-vif-plugged-341d07a0-1551-46f6-85b6-aace80d14532 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:51:29 np0005548731 nova_compute[232433]: 2025-12-06 07:51:29.213 232437 DEBUG oslo_concurrency.lockutils [req-88c9bace-8c9d-4e1b-8ef8-d757dee8d447 req-198d0811-07a1-4c3c-b3bd-1d04024f9c8a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "fa71018e-7574-4438-bb85-43d1c96cf9b9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:51:29 np0005548731 nova_compute[232433]: 2025-12-06 07:51:29.213 232437 DEBUG oslo_concurrency.lockutils [req-88c9bace-8c9d-4e1b-8ef8-d757dee8d447 req-198d0811-07a1-4c3c-b3bd-1d04024f9c8a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "fa71018e-7574-4438-bb85-43d1c96cf9b9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:51:29 np0005548731 nova_compute[232433]: 2025-12-06 07:51:29.213 232437 DEBUG oslo_concurrency.lockutils [req-88c9bace-8c9d-4e1b-8ef8-d757dee8d447 req-198d0811-07a1-4c3c-b3bd-1d04024f9c8a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "fa71018e-7574-4438-bb85-43d1c96cf9b9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:51:29 np0005548731 nova_compute[232433]: 2025-12-06 07:51:29.214 232437 DEBUG nova.compute.manager [req-88c9bace-8c9d-4e1b-8ef8-d757dee8d447 req-198d0811-07a1-4c3c-b3bd-1d04024f9c8a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: fa71018e-7574-4438-bb85-43d1c96cf9b9] No waiting events found dispatching network-vif-plugged-341d07a0-1551-46f6-85b6-aace80d14532 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 02:51:29 np0005548731 nova_compute[232433]: 2025-12-06 07:51:29.214 232437 WARNING nova.compute.manager [req-88c9bace-8c9d-4e1b-8ef8-d757dee8d447 req-198d0811-07a1-4c3c-b3bd-1d04024f9c8a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: fa71018e-7574-4438-bb85-43d1c96cf9b9] Received unexpected event network-vif-plugged-341d07a0-1551-46f6-85b6-aace80d14532 for instance with vm_state active and task_state None.#033[00m
Dec  6 02:51:29 np0005548731 nova_compute[232433]: 2025-12-06 07:51:29.214 232437 DEBUG nova.compute.manager [req-88c9bace-8c9d-4e1b-8ef8-d757dee8d447 req-198d0811-07a1-4c3c-b3bd-1d04024f9c8a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: fa71018e-7574-4438-bb85-43d1c96cf9b9] Received event network-vif-plugged-341d07a0-1551-46f6-85b6-aace80d14532 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:51:29 np0005548731 nova_compute[232433]: 2025-12-06 07:51:29.214 232437 DEBUG oslo_concurrency.lockutils [req-88c9bace-8c9d-4e1b-8ef8-d757dee8d447 req-198d0811-07a1-4c3c-b3bd-1d04024f9c8a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "fa71018e-7574-4438-bb85-43d1c96cf9b9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:51:29 np0005548731 neutron-haproxy-ovnmeta-45904a2f-a5c2-4047-9c19-a87d36354c1b[308886]: [NOTICE]   (308890) : haproxy version is 2.8.14-c23fe91
Dec  6 02:51:29 np0005548731 neutron-haproxy-ovnmeta-45904a2f-a5c2-4047-9c19-a87d36354c1b[308886]: [NOTICE]   (308890) : path to executable is /usr/sbin/haproxy
Dec  6 02:51:29 np0005548731 neutron-haproxy-ovnmeta-45904a2f-a5c2-4047-9c19-a87d36354c1b[308886]: [WARNING]  (308890) : Exiting Master process...
Dec  6 02:51:29 np0005548731 nova_compute[232433]: 2025-12-06 07:51:29.215 232437 DEBUG oslo_concurrency.lockutils [req-88c9bace-8c9d-4e1b-8ef8-d757dee8d447 req-198d0811-07a1-4c3c-b3bd-1d04024f9c8a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "fa71018e-7574-4438-bb85-43d1c96cf9b9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:51:29 np0005548731 neutron-haproxy-ovnmeta-45904a2f-a5c2-4047-9c19-a87d36354c1b[308886]: [ALERT]    (308890) : Current worker (308892) exited with code 143 (Terminated)
Dec  6 02:51:29 np0005548731 neutron-haproxy-ovnmeta-45904a2f-a5c2-4047-9c19-a87d36354c1b[308886]: [WARNING]  (308890) : All workers exited. Exiting... (0)
Dec  6 02:51:29 np0005548731 nova_compute[232433]: 2025-12-06 07:51:29.215 232437 DEBUG oslo_concurrency.lockutils [req-88c9bace-8c9d-4e1b-8ef8-d757dee8d447 req-198d0811-07a1-4c3c-b3bd-1d04024f9c8a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "fa71018e-7574-4438-bb85-43d1c96cf9b9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:51:29 np0005548731 nova_compute[232433]: 2025-12-06 07:51:29.216 232437 DEBUG nova.compute.manager [req-88c9bace-8c9d-4e1b-8ef8-d757dee8d447 req-198d0811-07a1-4c3c-b3bd-1d04024f9c8a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: fa71018e-7574-4438-bb85-43d1c96cf9b9] No waiting events found dispatching network-vif-plugged-341d07a0-1551-46f6-85b6-aace80d14532 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 02:51:29 np0005548731 nova_compute[232433]: 2025-12-06 07:51:29.216 232437 WARNING nova.compute.manager [req-88c9bace-8c9d-4e1b-8ef8-d757dee8d447 req-198d0811-07a1-4c3c-b3bd-1d04024f9c8a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: fa71018e-7574-4438-bb85-43d1c96cf9b9] Received unexpected event network-vif-plugged-341d07a0-1551-46f6-85b6-aace80d14532 for instance with vm_state active and task_state None.#033[00m
Dec  6 02:51:29 np0005548731 systemd[1]: libpod-1839d8aa77ccdec1e51bd23368255cd4efefdcde1ff3a2e845bbd7313bdb405e.scope: Deactivated successfully.
Dec  6 02:51:29 np0005548731 podman[309660]: 2025-12-06 07:51:29.22791785 +0000 UTC m=+0.047990871 container died 1839d8aa77ccdec1e51bd23368255cd4efefdcde1ff3a2e845bbd7313bdb405e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-45904a2f-a5c2-4047-9c19-a87d36354c1b, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Dec  6 02:51:29 np0005548731 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-1839d8aa77ccdec1e51bd23368255cd4efefdcde1ff3a2e845bbd7313bdb405e-userdata-shm.mount: Deactivated successfully.
Dec  6 02:51:29 np0005548731 systemd[1]: var-lib-containers-storage-overlay-5b3573929415c6e400d8ee4569198989d7cbfd01a2eb44321818fb09c82edb4c-merged.mount: Deactivated successfully.
Dec  6 02:51:29 np0005548731 podman[309660]: 2025-12-06 07:51:29.279745524 +0000 UTC m=+0.099818525 container cleanup 1839d8aa77ccdec1e51bd23368255cd4efefdcde1ff3a2e845bbd7313bdb405e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-45904a2f-a5c2-4047-9c19-a87d36354c1b, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  6 02:51:29 np0005548731 NetworkManager[49182]: <info>  [1765007489.2823] manager: (tap1d320b87-e6): new Tun device (/org/freedesktop/NetworkManager/Devices/377)
Dec  6 02:51:29 np0005548731 systemd[1]: libpod-conmon-1839d8aa77ccdec1e51bd23368255cd4efefdcde1ff3a2e845bbd7313bdb405e.scope: Deactivated successfully.
Dec  6 02:51:29 np0005548731 podman[309693]: 2025-12-06 07:51:29.35627477 +0000 UTC m=+0.048186886 container remove 1839d8aa77ccdec1e51bd23368255cd4efefdcde1ff3a2e845bbd7313bdb405e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-45904a2f-a5c2-4047-9c19-a87d36354c1b, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Dec  6 02:51:29 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:51:29.363 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[91cffaee-82e4-43db-b747-dfc10a2a239d]: (4, ('Sat Dec  6 07:51:29 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-45904a2f-a5c2-4047-9c19-a87d36354c1b (1839d8aa77ccdec1e51bd23368255cd4efefdcde1ff3a2e845bbd7313bdb405e)\n1839d8aa77ccdec1e51bd23368255cd4efefdcde1ff3a2e845bbd7313bdb405e\nSat Dec  6 07:51:29 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-45904a2f-a5c2-4047-9c19-a87d36354c1b (1839d8aa77ccdec1e51bd23368255cd4efefdcde1ff3a2e845bbd7313bdb405e)\n1839d8aa77ccdec1e51bd23368255cd4efefdcde1ff3a2e845bbd7313bdb405e\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:51:29 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:51:29.367 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[597dd099-a3aa-4b20-92b9-31302e70c7d1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:51:29 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:51:29.368 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap45904a2f-a0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:51:29 np0005548731 nova_compute[232433]: 2025-12-06 07:51:29.371 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:51:29 np0005548731 kernel: tap45904a2f-a0: left promiscuous mode
Dec  6 02:51:29 np0005548731 nova_compute[232433]: 2025-12-06 07:51:29.388 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:51:29 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:51:29.391 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[df6264fd-34b2-4adf-afcb-f9b9fd434e56]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:51:29 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:51:29.408 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[ee0289de-65a9-4db7-bde4-b42fc9f17bab]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:51:29 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:51:29.409 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[295a1379-73fa-43ce-a16d-ec8415b28d2a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:51:29 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:51:29.426 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[c27c615e-d85f-409a-bf91-10f1a4965bd4]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 771058, 'reachable_time': 26426, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 309713, 'error': None, 'target': 'ovnmeta-45904a2f-a5c2-4047-9c19-a87d36354c1b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:51:29 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:51:29.428 144079 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-45904a2f-a5c2-4047-9c19-a87d36354c1b deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Dec  6 02:51:29 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:51:29.429 144079 DEBUG oslo.privsep.daemon [-] privsep: reply[8df060b0-b95a-45ec-9f16-601c09b0346b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:51:29 np0005548731 systemd[1]: run-netns-ovnmeta\x2d45904a2f\x2da5c2\x2d4047\x2d9c19\x2da87d36354c1b.mount: Deactivated successfully.
Dec  6 02:51:29 np0005548731 nova_compute[232433]: 2025-12-06 07:51:29.768 232437 INFO nova.virt.libvirt.driver [None req-516c9607-79cd-44d9-bc89-e8949d0d7962 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] [instance: 0b9681c0-c0e7-4bd8-9040-865c1bff517b] Instance shutdown successfully after 3 seconds.#033[00m
Dec  6 02:51:29 np0005548731 nova_compute[232433]: 2025-12-06 07:51:29.780 232437 INFO nova.virt.libvirt.driver [-] [instance: 0b9681c0-c0e7-4bd8-9040-865c1bff517b] Instance destroyed successfully.#033[00m
Dec  6 02:51:29 np0005548731 nova_compute[232433]: 2025-12-06 07:51:29.781 232437 DEBUG nova.objects.instance [None req-516c9607-79cd-44d9-bc89-e8949d0d7962 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] Lazy-loading 'numa_topology' on Instance uuid 0b9681c0-c0e7-4bd8-9040-865c1bff517b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:51:29 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:51:29 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:51:29 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:51:29.975 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:51:30 np0005548731 nova_compute[232433]: 2025-12-06 07:51:30.175 232437 INFO nova.virt.libvirt.driver [None req-516c9607-79cd-44d9-bc89-e8949d0d7962 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] [instance: 0b9681c0-c0e7-4bd8-9040-865c1bff517b] Beginning cold snapshot process#033[00m
Dec  6 02:51:30 np0005548731 nova_compute[232433]: 2025-12-06 07:51:30.304 232437 DEBUG nova.virt.libvirt.imagebackend [None req-516c9607-79cd-44d9-bc89-e8949d0d7962 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] No parent info for 6efab05d-c7cf-4770-a5c3-c806a2739063; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163#033[00m
Dec  6 02:51:30 np0005548731 nova_compute[232433]: 2025-12-06 07:51:30.505 232437 DEBUG nova.storage.rbd_utils [None req-516c9607-79cd-44d9-bc89-e8949d0d7962 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] creating snapshot(18b6e777245e4814a529df57c9156f84) on rbd image(0b9681c0-c0e7-4bd8-9040-865c1bff517b_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Dec  6 02:51:30 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:51:30 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:51:30 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:51:30.873 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:51:31 np0005548731 nova_compute[232433]: 2025-12-06 07:51:31.310 232437 DEBUG nova.compute.manager [req-375d8871-6e87-42dd-b9f3-1d5b63fbe3e0 req-6c3b518a-4522-4e43-b314-f28cfc291db8 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 0b9681c0-c0e7-4bd8-9040-865c1bff517b] Received event network-vif-unplugged-1d320b87-e6ec-40ef-b2ce-50bd50b6f5fc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:51:31 np0005548731 nova_compute[232433]: 2025-12-06 07:51:31.310 232437 DEBUG oslo_concurrency.lockutils [req-375d8871-6e87-42dd-b9f3-1d5b63fbe3e0 req-6c3b518a-4522-4e43-b314-f28cfc291db8 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "0b9681c0-c0e7-4bd8-9040-865c1bff517b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:51:31 np0005548731 nova_compute[232433]: 2025-12-06 07:51:31.311 232437 DEBUG oslo_concurrency.lockutils [req-375d8871-6e87-42dd-b9f3-1d5b63fbe3e0 req-6c3b518a-4522-4e43-b314-f28cfc291db8 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "0b9681c0-c0e7-4bd8-9040-865c1bff517b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:51:31 np0005548731 nova_compute[232433]: 2025-12-06 07:51:31.311 232437 DEBUG oslo_concurrency.lockutils [req-375d8871-6e87-42dd-b9f3-1d5b63fbe3e0 req-6c3b518a-4522-4e43-b314-f28cfc291db8 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "0b9681c0-c0e7-4bd8-9040-865c1bff517b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:51:31 np0005548731 nova_compute[232433]: 2025-12-06 07:51:31.311 232437 DEBUG nova.compute.manager [req-375d8871-6e87-42dd-b9f3-1d5b63fbe3e0 req-6c3b518a-4522-4e43-b314-f28cfc291db8 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 0b9681c0-c0e7-4bd8-9040-865c1bff517b] No waiting events found dispatching network-vif-unplugged-1d320b87-e6ec-40ef-b2ce-50bd50b6f5fc pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 02:51:31 np0005548731 nova_compute[232433]: 2025-12-06 07:51:31.312 232437 WARNING nova.compute.manager [req-375d8871-6e87-42dd-b9f3-1d5b63fbe3e0 req-6c3b518a-4522-4e43-b314-f28cfc291db8 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 0b9681c0-c0e7-4bd8-9040-865c1bff517b] Received unexpected event network-vif-unplugged-1d320b87-e6ec-40ef-b2ce-50bd50b6f5fc for instance with vm_state active and task_state shelving_image_uploading.#033[00m
Dec  6 02:51:31 np0005548731 nova_compute[232433]: 2025-12-06 07:51:31.312 232437 DEBUG nova.compute.manager [req-375d8871-6e87-42dd-b9f3-1d5b63fbe3e0 req-6c3b518a-4522-4e43-b314-f28cfc291db8 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 0b9681c0-c0e7-4bd8-9040-865c1bff517b] Received event network-vif-plugged-1d320b87-e6ec-40ef-b2ce-50bd50b6f5fc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:51:31 np0005548731 nova_compute[232433]: 2025-12-06 07:51:31.312 232437 DEBUG oslo_concurrency.lockutils [req-375d8871-6e87-42dd-b9f3-1d5b63fbe3e0 req-6c3b518a-4522-4e43-b314-f28cfc291db8 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "0b9681c0-c0e7-4bd8-9040-865c1bff517b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:51:31 np0005548731 nova_compute[232433]: 2025-12-06 07:51:31.312 232437 DEBUG oslo_concurrency.lockutils [req-375d8871-6e87-42dd-b9f3-1d5b63fbe3e0 req-6c3b518a-4522-4e43-b314-f28cfc291db8 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "0b9681c0-c0e7-4bd8-9040-865c1bff517b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:51:31 np0005548731 nova_compute[232433]: 2025-12-06 07:51:31.313 232437 DEBUG oslo_concurrency.lockutils [req-375d8871-6e87-42dd-b9f3-1d5b63fbe3e0 req-6c3b518a-4522-4e43-b314-f28cfc291db8 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "0b9681c0-c0e7-4bd8-9040-865c1bff517b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:51:31 np0005548731 nova_compute[232433]: 2025-12-06 07:51:31.313 232437 DEBUG nova.compute.manager [req-375d8871-6e87-42dd-b9f3-1d5b63fbe3e0 req-6c3b518a-4522-4e43-b314-f28cfc291db8 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 0b9681c0-c0e7-4bd8-9040-865c1bff517b] No waiting events found dispatching network-vif-plugged-1d320b87-e6ec-40ef-b2ce-50bd50b6f5fc pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 02:51:31 np0005548731 nova_compute[232433]: 2025-12-06 07:51:31.313 232437 WARNING nova.compute.manager [req-375d8871-6e87-42dd-b9f3-1d5b63fbe3e0 req-6c3b518a-4522-4e43-b314-f28cfc291db8 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 0b9681c0-c0e7-4bd8-9040-865c1bff517b] Received unexpected event network-vif-plugged-1d320b87-e6ec-40ef-b2ce-50bd50b6f5fc for instance with vm_state active and task_state shelving_image_uploading.#033[00m
Dec  6 02:51:31 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e377 e377: 3 total, 3 up, 3 in
Dec  6 02:51:31 np0005548731 nova_compute[232433]: 2025-12-06 07:51:31.425 232437 DEBUG nova.storage.rbd_utils [None req-516c9607-79cd-44d9-bc89-e8949d0d7962 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] cloning vms/0b9681c0-c0e7-4bd8-9040-865c1bff517b_disk@18b6e777245e4814a529df57c9156f84 to images/2bf7c3bd-26e5-44a8-90a5-c1b8c9d58e1e clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Dec  6 02:51:31 np0005548731 nova_compute[232433]: 2025-12-06 07:51:31.530 232437 DEBUG nova.storage.rbd_utils [None req-516c9607-79cd-44d9-bc89-e8949d0d7962 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] flattening images/2bf7c3bd-26e5-44a8-90a5-c1b8c9d58e1e flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m
Dec  6 02:51:31 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e377 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:51:31 np0005548731 nova_compute[232433]: 2025-12-06 07:51:31.941 232437 DEBUG nova.storage.rbd_utils [None req-516c9607-79cd-44d9-bc89-e8949d0d7962 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] removing snapshot(18b6e777245e4814a529df57c9156f84) on rbd image(0b9681c0-c0e7-4bd8-9040-865c1bff517b_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Dec  6 02:51:31 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:51:31 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:51:31 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:51:31.978 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:51:32 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e378 e378: 3 total, 3 up, 3 in
Dec  6 02:51:32 np0005548731 nova_compute[232433]: 2025-12-06 07:51:32.657 232437 DEBUG nova.storage.rbd_utils [None req-516c9607-79cd-44d9-bc89-e8949d0d7962 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] creating snapshot(snap) on rbd image(2bf7c3bd-26e5-44a8-90a5-c1b8c9d58e1e) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Dec  6 02:51:32 np0005548731 nova_compute[232433]: 2025-12-06 07:51:32.733 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:51:32 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:51:32 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:51:32 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:51:32.875 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:51:33 np0005548731 nova_compute[232433]: 2025-12-06 07:51:33.103 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:51:33 np0005548731 nova_compute[232433]: 2025-12-06 07:51:33.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:51:33 np0005548731 nova_compute[232433]: 2025-12-06 07:51:33.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:51:33 np0005548731 nova_compute[232433]: 2025-12-06 07:51:33.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:51:33 np0005548731 nova_compute[232433]: 2025-12-06 07:51:33.181 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:51:33 np0005548731 nova_compute[232433]: 2025-12-06 07:51:33.181 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:51:33 np0005548731 nova_compute[232433]: 2025-12-06 07:51:33.182 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:51:33 np0005548731 nova_compute[232433]: 2025-12-06 07:51:33.182 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  6 02:51:33 np0005548731 nova_compute[232433]: 2025-12-06 07:51:33.182 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:51:33 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 02:51:33 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/415410419' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 02:51:33 np0005548731 nova_compute[232433]: 2025-12-06 07:51:33.622 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.440s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:51:33 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e379 e379: 3 total, 3 up, 3 in
Dec  6 02:51:33 np0005548731 nova_compute[232433]: 2025-12-06 07:51:33.695 232437 DEBUG nova.virt.libvirt.driver [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] skipping disk for instance-000000a6 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  6 02:51:33 np0005548731 nova_compute[232433]: 2025-12-06 07:51:33.695 232437 DEBUG nova.virt.libvirt.driver [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] skipping disk for instance-000000a6 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  6 02:51:33 np0005548731 nova_compute[232433]: 2025-12-06 07:51:33.695 232437 DEBUG nova.virt.libvirt.driver [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] skipping disk for instance-000000a6 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  6 02:51:33 np0005548731 nova_compute[232433]: 2025-12-06 07:51:33.699 232437 DEBUG nova.virt.libvirt.driver [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] skipping disk for instance-000000a8 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  6 02:51:33 np0005548731 nova_compute[232433]: 2025-12-06 07:51:33.699 232437 DEBUG nova.virt.libvirt.driver [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] skipping disk for instance-000000a8 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  6 02:51:33 np0005548731 nova_compute[232433]: 2025-12-06 07:51:33.879 232437 WARNING nova.virt.libvirt.driver [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  6 02:51:33 np0005548731 nova_compute[232433]: 2025-12-06 07:51:33.880 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4043MB free_disk=20.718521118164062GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, 
"label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  6 02:51:33 np0005548731 nova_compute[232433]: 2025-12-06 07:51:33.881 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:51:33 np0005548731 nova_compute[232433]: 2025-12-06 07:51:33.882 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:51:33 np0005548731 nova_compute[232433]: 2025-12-06 07:51:33.961 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Instance fa71018e-7574-4438-bb85-43d1c96cf9b9 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  6 02:51:33 np0005548731 nova_compute[232433]: 2025-12-06 07:51:33.962 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Instance 0b9681c0-c0e7-4bd8-9040-865c1bff517b actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  6 02:51:33 np0005548731 nova_compute[232433]: 2025-12-06 07:51:33.962 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  6 02:51:33 np0005548731 nova_compute[232433]: 2025-12-06 07:51:33.962 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7680MB used_ram=768MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  6 02:51:33 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:51:33 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:51:33 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:51:33.981 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:51:34 np0005548731 nova_compute[232433]: 2025-12-06 07:51:34.035 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:51:34 np0005548731 nova_compute[232433]: 2025-12-06 07:51:34.170 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:51:34 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 02:51:34 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1423580668' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 02:51:34 np0005548731 nova_compute[232433]: 2025-12-06 07:51:34.451 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.416s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:51:34 np0005548731 nova_compute[232433]: 2025-12-06 07:51:34.456 232437 DEBUG nova.compute.provider_tree [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Inventory has not changed in ProviderTree for provider: 6d00757a-082f-486d-ae84-869a2ba2e6e7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  6 02:51:34 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e380 e380: 3 total, 3 up, 3 in
Dec  6 02:51:34 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:51:34 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:51:34 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:51:34.877 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:51:35 np0005548731 nova_compute[232433]: 2025-12-06 07:51:35.005 232437 DEBUG nova.scheduler.client.report [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Inventory has not changed for provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  6 02:51:35 np0005548731 nova_compute[232433]: 2025-12-06 07:51:35.238 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  6 02:51:35 np0005548731 nova_compute[232433]: 2025-12-06 07:51:35.239 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.357s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:51:35 np0005548731 nova_compute[232433]: 2025-12-06 07:51:35.271 232437 INFO nova.virt.libvirt.driver [None req-516c9607-79cd-44d9-bc89-e8949d0d7962 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] [instance: 0b9681c0-c0e7-4bd8-9040-865c1bff517b] Snapshot image upload complete#033[00m
Dec  6 02:51:35 np0005548731 nova_compute[232433]: 2025-12-06 07:51:35.272 232437 DEBUG nova.compute.manager [None req-516c9607-79cd-44d9-bc89-e8949d0d7962 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] [instance: 0b9681c0-c0e7-4bd8-9040-865c1bff517b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:51:35 np0005548731 nova_compute[232433]: 2025-12-06 07:51:35.505 232437 INFO nova.compute.manager [None req-516c9607-79cd-44d9-bc89-e8949d0d7962 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] [instance: 0b9681c0-c0e7-4bd8-9040-865c1bff517b] Shelve offloading#033[00m
Dec  6 02:51:35 np0005548731 nova_compute[232433]: 2025-12-06 07:51:35.513 232437 INFO nova.virt.libvirt.driver [-] [instance: 0b9681c0-c0e7-4bd8-9040-865c1bff517b] Instance destroyed successfully.#033[00m
Dec  6 02:51:35 np0005548731 nova_compute[232433]: 2025-12-06 07:51:35.513 232437 DEBUG nova.compute.manager [None req-516c9607-79cd-44d9-bc89-e8949d0d7962 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] [instance: 0b9681c0-c0e7-4bd8-9040-865c1bff517b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:51:35 np0005548731 nova_compute[232433]: 2025-12-06 07:51:35.516 232437 DEBUG oslo_concurrency.lockutils [None req-516c9607-79cd-44d9-bc89-e8949d0d7962 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] Acquiring lock "refresh_cache-0b9681c0-c0e7-4bd8-9040-865c1bff517b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 02:51:35 np0005548731 nova_compute[232433]: 2025-12-06 07:51:35.516 232437 DEBUG oslo_concurrency.lockutils [None req-516c9607-79cd-44d9-bc89-e8949d0d7962 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] Acquired lock "refresh_cache-0b9681c0-c0e7-4bd8-9040-865c1bff517b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 02:51:35 np0005548731 nova_compute[232433]: 2025-12-06 07:51:35.516 232437 DEBUG nova.network.neutron [None req-516c9607-79cd-44d9-bc89-e8949d0d7962 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] [instance: 0b9681c0-c0e7-4bd8-9040-865c1bff517b] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec  6 02:51:35 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:51:35 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:51:35 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:51:35.984 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:51:36 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e380 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:51:36 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:51:36 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 02:51:36 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:51:36.878 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 02:51:37 np0005548731 nova_compute[232433]: 2025-12-06 07:51:37.736 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:51:37 np0005548731 nova_compute[232433]: 2025-12-06 07:51:37.804 232437 DEBUG nova.network.neutron [None req-516c9607-79cd-44d9-bc89-e8949d0d7962 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] [instance: 0b9681c0-c0e7-4bd8-9040-865c1bff517b] Updating instance_info_cache with network_info: [{"id": "1d320b87-e6ec-40ef-b2ce-50bd50b6f5fc", "address": "fa:16:3e:b7:01:5b", "network": {"id": "45904a2f-a5c2-4047-9c19-a87d36354c1b", "bridge": "br-int", "label": "tempest-AttachVolumeShelveTestJSON-1547381509-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.189", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cfa713d92cc94fa1b94404ed58b0563f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1d320b87-e6", "ovs_interfaceid": "1d320b87-e6ec-40ef-b2ce-50bd50b6f5fc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 02:51:37 np0005548731 nova_compute[232433]: 2025-12-06 07:51:37.827 232437 DEBUG oslo_concurrency.lockutils [None req-516c9607-79cd-44d9-bc89-e8949d0d7962 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] Releasing lock "refresh_cache-0b9681c0-c0e7-4bd8-9040-865c1bff517b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 02:51:37 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:51:37 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:51:37 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:51:37.987 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:51:38 np0005548731 nova_compute[232433]: 2025-12-06 07:51:38.240 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:51:38 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e381 e381: 3 total, 3 up, 3 in
Dec  6 02:51:38 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:51:38 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:51:38 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:51:38.880 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:51:38 np0005548731 nova_compute[232433]: 2025-12-06 07:51:38.888 232437 INFO nova.virt.libvirt.driver [-] [instance: 0b9681c0-c0e7-4bd8-9040-865c1bff517b] Instance destroyed successfully.#033[00m
Dec  6 02:51:38 np0005548731 nova_compute[232433]: 2025-12-06 07:51:38.889 232437 DEBUG nova.objects.instance [None req-516c9607-79cd-44d9-bc89-e8949d0d7962 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] Lazy-loading 'resources' on Instance uuid 0b9681c0-c0e7-4bd8-9040-865c1bff517b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:51:38 np0005548731 nova_compute[232433]: 2025-12-06 07:51:38.908 232437 DEBUG nova.virt.libvirt.vif [None req-516c9607-79cd-44d9-bc89-e8949d0d7962 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T07:50:34Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-AttachVolumeShelveTestJSON-server-676074581',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-attachvolumeshelvetestjson-server-676074581',id=168,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNKYB7DoHCE/Zq13G9bNkMnqez+ah/cvrfQVcJ98DATDISc+hKRWhcY3n96hXJBgGVFk2F3L+nAWB+E3c8HLOV2K86PN0fFtoRWNhKFWMW6mo0EoTV5X0kp2CVI8eUnC1g==',key_name='tempest-keypair-12606996',keypairs=<?>,launch_index=0,launched_at=2025-12-06T07:50:50Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='cfa713d92cc94fa1b94404ed58b0563f',ramdisk_id='',reservation_id='r-qd0qnxrq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachVolumeShelveTestJSON-1510980811',owner_user_name='tempest-AttachVolumeShelveTestJSON-1510980811-project-member',shelved_at='2025-12-06T07:51:35.272180',shelved_host='compute-2.ctlplane.example.com',shelved_image_id='2bf7c3bd-26e5-44a8-90a5-c1b8c9d58e1e'},tags=<?>,task_state='shelving_offloading',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T07:51:30Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='90c9de6e67724c898a8e23b05fbf14da',uuid=0b9681c0-c0e7-4bd8-9040-865c1bff517b,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='shelved') vif={"id": "1d320b87-e6ec-40ef-b2ce-50bd50b6f5fc", "address": "fa:16:3e:b7:01:5b", "network": {"id": 
"45904a2f-a5c2-4047-9c19-a87d36354c1b", "bridge": "br-int", "label": "tempest-AttachVolumeShelveTestJSON-1547381509-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.189", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cfa713d92cc94fa1b94404ed58b0563f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1d320b87-e6", "ovs_interfaceid": "1d320b87-e6ec-40ef-b2ce-50bd50b6f5fc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec  6 02:51:38 np0005548731 nova_compute[232433]: 2025-12-06 07:51:38.908 232437 DEBUG nova.network.os_vif_util [None req-516c9607-79cd-44d9-bc89-e8949d0d7962 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] Converting VIF {"id": "1d320b87-e6ec-40ef-b2ce-50bd50b6f5fc", "address": "fa:16:3e:b7:01:5b", "network": {"id": "45904a2f-a5c2-4047-9c19-a87d36354c1b", "bridge": "br-int", "label": "tempest-AttachVolumeShelveTestJSON-1547381509-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.189", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cfa713d92cc94fa1b94404ed58b0563f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1d320b87-e6", "ovs_interfaceid": "1d320b87-e6ec-40ef-b2ce-50bd50b6f5fc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 02:51:38 np0005548731 nova_compute[232433]: 2025-12-06 07:51:38.909 232437 DEBUG nova.network.os_vif_util [None req-516c9607-79cd-44d9-bc89-e8949d0d7962 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b7:01:5b,bridge_name='br-int',has_traffic_filtering=True,id=1d320b87-e6ec-40ef-b2ce-50bd50b6f5fc,network=Network(45904a2f-a5c2-4047-9c19-a87d36354c1b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1d320b87-e6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 02:51:38 np0005548731 nova_compute[232433]: 2025-12-06 07:51:38.910 232437 DEBUG os_vif [None req-516c9607-79cd-44d9-bc89-e8949d0d7962 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b7:01:5b,bridge_name='br-int',has_traffic_filtering=True,id=1d320b87-e6ec-40ef-b2ce-50bd50b6f5fc,network=Network(45904a2f-a5c2-4047-9c19-a87d36354c1b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1d320b87-e6') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec  6 02:51:38 np0005548731 nova_compute[232433]: 2025-12-06 07:51:38.911 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:51:38 np0005548731 nova_compute[232433]: 2025-12-06 07:51:38.911 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1d320b87-e6, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:51:38 np0005548731 nova_compute[232433]: 2025-12-06 07:51:38.913 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:51:38 np0005548731 nova_compute[232433]: 2025-12-06 07:51:38.914 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:51:38 np0005548731 nova_compute[232433]: 2025-12-06 07:51:38.916 232437 INFO os_vif [None req-516c9607-79cd-44d9-bc89-e8949d0d7962 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b7:01:5b,bridge_name='br-int',has_traffic_filtering=True,id=1d320b87-e6ec-40ef-b2ce-50bd50b6f5fc,network=Network(45904a2f-a5c2-4047-9c19-a87d36354c1b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1d320b87-e6')#033[00m
Dec  6 02:51:39 np0005548731 nova_compute[232433]: 2025-12-06 07:51:39.027 232437 DEBUG nova.compute.manager [req-dff9d5c9-0a87-4c48-a567-d0476e4c3ee0 req-c4f87c54-4179-4190-8eb9-2b3997404358 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 0b9681c0-c0e7-4bd8-9040-865c1bff517b] Received event network-changed-1d320b87-e6ec-40ef-b2ce-50bd50b6f5fc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:51:39 np0005548731 nova_compute[232433]: 2025-12-06 07:51:39.028 232437 DEBUG nova.compute.manager [req-dff9d5c9-0a87-4c48-a567-d0476e4c3ee0 req-c4f87c54-4179-4190-8eb9-2b3997404358 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 0b9681c0-c0e7-4bd8-9040-865c1bff517b] Refreshing instance network info cache due to event network-changed-1d320b87-e6ec-40ef-b2ce-50bd50b6f5fc. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  6 02:51:39 np0005548731 nova_compute[232433]: 2025-12-06 07:51:39.028 232437 DEBUG oslo_concurrency.lockutils [req-dff9d5c9-0a87-4c48-a567-d0476e4c3ee0 req-c4f87c54-4179-4190-8eb9-2b3997404358 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-0b9681c0-c0e7-4bd8-9040-865c1bff517b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 02:51:39 np0005548731 nova_compute[232433]: 2025-12-06 07:51:39.028 232437 DEBUG oslo_concurrency.lockutils [req-dff9d5c9-0a87-4c48-a567-d0476e4c3ee0 req-c4f87c54-4179-4190-8eb9-2b3997404358 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-0b9681c0-c0e7-4bd8-9040-865c1bff517b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 02:51:39 np0005548731 nova_compute[232433]: 2025-12-06 07:51:39.028 232437 DEBUG nova.network.neutron [req-dff9d5c9-0a87-4c48-a567-d0476e4c3ee0 req-c4f87c54-4179-4190-8eb9-2b3997404358 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 0b9681c0-c0e7-4bd8-9040-865c1bff517b] Refreshing network info cache for port 1d320b87-e6ec-40ef-b2ce-50bd50b6f5fc _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  6 02:51:39 np0005548731 nova_compute[232433]: 2025-12-06 07:51:39.171 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:51:39 np0005548731 nova_compute[232433]: 2025-12-06 07:51:39.356 232437 INFO nova.virt.libvirt.driver [None req-516c9607-79cd-44d9-bc89-e8949d0d7962 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] [instance: 0b9681c0-c0e7-4bd8-9040-865c1bff517b] Deleting instance files /var/lib/nova/instances/0b9681c0-c0e7-4bd8-9040-865c1bff517b_del#033[00m
Dec  6 02:51:39 np0005548731 nova_compute[232433]: 2025-12-06 07:51:39.357 232437 INFO nova.virt.libvirt.driver [None req-516c9607-79cd-44d9-bc89-e8949d0d7962 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] [instance: 0b9681c0-c0e7-4bd8-9040-865c1bff517b] Deletion of /var/lib/nova/instances/0b9681c0-c0e7-4bd8-9040-865c1bff517b_del complete#033[00m
Dec  6 02:51:39 np0005548731 nova_compute[232433]: 2025-12-06 07:51:39.473 232437 INFO nova.scheduler.client.report [None req-516c9607-79cd-44d9-bc89-e8949d0d7962 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] Deleted allocations for instance 0b9681c0-c0e7-4bd8-9040-865c1bff517b#033[00m
Dec  6 02:51:39 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e382 e382: 3 total, 3 up, 3 in
Dec  6 02:51:39 np0005548731 nova_compute[232433]: 2025-12-06 07:51:39.535 232437 DEBUG oslo_concurrency.lockutils [None req-516c9607-79cd-44d9-bc89-e8949d0d7962 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:51:39 np0005548731 nova_compute[232433]: 2025-12-06 07:51:39.536 232437 DEBUG oslo_concurrency.lockutils [None req-516c9607-79cd-44d9-bc89-e8949d0d7962 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:51:39 np0005548731 nova_compute[232433]: 2025-12-06 07:51:39.723 232437 DEBUG oslo_concurrency.processutils [None req-516c9607-79cd-44d9-bc89-e8949d0d7962 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:51:39 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:51:39 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:51:39 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:51:39.990 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:51:40 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 02:51:40 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/520152717' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 02:51:40 np0005548731 nova_compute[232433]: 2025-12-06 07:51:40.168 232437 DEBUG oslo_concurrency.processutils [None req-516c9607-79cd-44d9-bc89-e8949d0d7962 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.445s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:51:40 np0005548731 nova_compute[232433]: 2025-12-06 07:51:40.174 232437 DEBUG nova.compute.provider_tree [None req-516c9607-79cd-44d9-bc89-e8949d0d7962 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] Inventory has not changed in ProviderTree for provider: 6d00757a-082f-486d-ae84-869a2ba2e6e7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  6 02:51:40 np0005548731 nova_compute[232433]: 2025-12-06 07:51:40.292 232437 DEBUG nova.scheduler.client.report [None req-516c9607-79cd-44d9-bc89-e8949d0d7962 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] Inventory has not changed for provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  6 02:51:40 np0005548731 nova_compute[232433]: 2025-12-06 07:51:40.301 232437 DEBUG nova.network.neutron [req-dff9d5c9-0a87-4c48-a567-d0476e4c3ee0 req-c4f87c54-4179-4190-8eb9-2b3997404358 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 0b9681c0-c0e7-4bd8-9040-865c1bff517b] Updated VIF entry in instance network info cache for port 1d320b87-e6ec-40ef-b2ce-50bd50b6f5fc. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  6 02:51:40 np0005548731 nova_compute[232433]: 2025-12-06 07:51:40.302 232437 DEBUG nova.network.neutron [req-dff9d5c9-0a87-4c48-a567-d0476e4c3ee0 req-c4f87c54-4179-4190-8eb9-2b3997404358 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 0b9681c0-c0e7-4bd8-9040-865c1bff517b] Updating instance_info_cache with network_info: [{"id": "1d320b87-e6ec-40ef-b2ce-50bd50b6f5fc", "address": "fa:16:3e:b7:01:5b", "network": {"id": "45904a2f-a5c2-4047-9c19-a87d36354c1b", "bridge": null, "label": "tempest-AttachVolumeShelveTestJSON-1547381509-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.189", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cfa713d92cc94fa1b94404ed58b0563f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "unbound", "details": {}, "devname": "tap1d320b87-e6", "ovs_interfaceid": null, "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 02:51:40 np0005548731 nova_compute[232433]: 2025-12-06 07:51:40.760 232437 DEBUG oslo_concurrency.lockutils [None req-516c9607-79cd-44d9-bc89-e8949d0d7962 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.224s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:51:40 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:51:40 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:51:40 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:51:40.882 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:51:40 np0005548731 nova_compute[232433]: 2025-12-06 07:51:40.885 232437 DEBUG oslo_concurrency.lockutils [req-dff9d5c9-0a87-4c48-a567-d0476e4c3ee0 req-c4f87c54-4179-4190-8eb9-2b3997404358 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-0b9681c0-c0e7-4bd8-9040-865c1bff517b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 02:51:41 np0005548731 nova_compute[232433]: 2025-12-06 07:51:41.099 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:51:41 np0005548731 nova_compute[232433]: 2025-12-06 07:51:41.134 232437 DEBUG oslo_concurrency.lockutils [None req-516c9607-79cd-44d9-bc89-e8949d0d7962 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] Lock "0b9681c0-c0e7-4bd8-9040-865c1bff517b" "released" by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" :: held 14.456s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:51:41 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e382 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:51:41 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:51:41 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:51:41 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:51:41.993 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:51:42 np0005548731 ovn_controller[133927]: 2025-12-06T07:51:42Z|00109|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:71:74:18 10.100.0.13
Dec  6 02:51:42 np0005548731 nova_compute[232433]: 2025-12-06 07:51:42.552 232437 DEBUG oslo_concurrency.lockutils [None req-3cc475b2-e3fe-45cc-b80c-82e97e99fc30 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Acquiring lock "9cc0604e-1ff7-4781-8383-c780b6f598c5" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:51:42 np0005548731 nova_compute[232433]: 2025-12-06 07:51:42.553 232437 DEBUG oslo_concurrency.lockutils [None req-3cc475b2-e3fe-45cc-b80c-82e97e99fc30 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Lock "9cc0604e-1ff7-4781-8383-c780b6f598c5" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:51:42 np0005548731 nova_compute[232433]: 2025-12-06 07:51:42.581 232437 DEBUG nova.compute.manager [None req-3cc475b2-e3fe-45cc-b80c-82e97e99fc30 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] [instance: 9cc0604e-1ff7-4781-8383-c780b6f598c5] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Dec  6 02:51:42 np0005548731 nova_compute[232433]: 2025-12-06 07:51:42.646 232437 DEBUG oslo_concurrency.lockutils [None req-3cc475b2-e3fe-45cc-b80c-82e97e99fc30 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:51:42 np0005548731 nova_compute[232433]: 2025-12-06 07:51:42.646 232437 DEBUG oslo_concurrency.lockutils [None req-3cc475b2-e3fe-45cc-b80c-82e97e99fc30 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:51:42 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e383 e383: 3 total, 3 up, 3 in
Dec  6 02:51:42 np0005548731 nova_compute[232433]: 2025-12-06 07:51:42.653 232437 DEBUG nova.virt.hardware [None req-3cc475b2-e3fe-45cc-b80c-82e97e99fc30 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Dec  6 02:51:42 np0005548731 nova_compute[232433]: 2025-12-06 07:51:42.653 232437 INFO nova.compute.claims [None req-3cc475b2-e3fe-45cc-b80c-82e97e99fc30 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] [instance: 9cc0604e-1ff7-4781-8383-c780b6f598c5] Claim successful on node compute-2.ctlplane.example.com#033[00m
Dec  6 02:51:42 np0005548731 nova_compute[232433]: 2025-12-06 07:51:42.767 232437 DEBUG oslo_concurrency.processutils [None req-3cc475b2-e3fe-45cc-b80c-82e97e99fc30 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:51:42 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:51:42 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 02:51:42 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:51:42.884 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 02:51:43 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 02:51:43 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3778624018' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 02:51:43 np0005548731 nova_compute[232433]: 2025-12-06 07:51:43.235 232437 DEBUG oslo_concurrency.processutils [None req-3cc475b2-e3fe-45cc-b80c-82e97e99fc30 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.468s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:51:43 np0005548731 nova_compute[232433]: 2025-12-06 07:51:43.245 232437 DEBUG nova.compute.provider_tree [None req-3cc475b2-e3fe-45cc-b80c-82e97e99fc30 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Inventory has not changed in ProviderTree for provider: 6d00757a-082f-486d-ae84-869a2ba2e6e7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  6 02:51:43 np0005548731 nova_compute[232433]: 2025-12-06 07:51:43.263 232437 DEBUG nova.scheduler.client.report [None req-3cc475b2-e3fe-45cc-b80c-82e97e99fc30 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Inventory has not changed for provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  6 02:51:43 np0005548731 nova_compute[232433]: 2025-12-06 07:51:43.283 232437 DEBUG oslo_concurrency.lockutils [None req-3cc475b2-e3fe-45cc-b80c-82e97e99fc30 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.636s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:51:43 np0005548731 nova_compute[232433]: 2025-12-06 07:51:43.284 232437 DEBUG nova.compute.manager [None req-3cc475b2-e3fe-45cc-b80c-82e97e99fc30 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] [instance: 9cc0604e-1ff7-4781-8383-c780b6f598c5] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Dec  6 02:51:43 np0005548731 nova_compute[232433]: 2025-12-06 07:51:43.325 232437 DEBUG nova.compute.manager [None req-3cc475b2-e3fe-45cc-b80c-82e97e99fc30 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] [instance: 9cc0604e-1ff7-4781-8383-c780b6f598c5] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Dec  6 02:51:43 np0005548731 nova_compute[232433]: 2025-12-06 07:51:43.325 232437 DEBUG nova.network.neutron [None req-3cc475b2-e3fe-45cc-b80c-82e97e99fc30 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] [instance: 9cc0604e-1ff7-4781-8383-c780b6f598c5] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Dec  6 02:51:43 np0005548731 nova_compute[232433]: 2025-12-06 07:51:43.352 232437 INFO nova.virt.libvirt.driver [None req-3cc475b2-e3fe-45cc-b80c-82e97e99fc30 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] [instance: 9cc0604e-1ff7-4781-8383-c780b6f598c5] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Dec  6 02:51:43 np0005548731 nova_compute[232433]: 2025-12-06 07:51:43.378 232437 DEBUG nova.compute.manager [None req-3cc475b2-e3fe-45cc-b80c-82e97e99fc30 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] [instance: 9cc0604e-1ff7-4781-8383-c780b6f598c5] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Dec  6 02:51:43 np0005548731 nova_compute[232433]: 2025-12-06 07:51:43.462 232437 DEBUG nova.compute.manager [None req-3cc475b2-e3fe-45cc-b80c-82e97e99fc30 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] [instance: 9cc0604e-1ff7-4781-8383-c780b6f598c5] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Dec  6 02:51:43 np0005548731 nova_compute[232433]: 2025-12-06 07:51:43.463 232437 DEBUG nova.virt.libvirt.driver [None req-3cc475b2-e3fe-45cc-b80c-82e97e99fc30 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] [instance: 9cc0604e-1ff7-4781-8383-c780b6f598c5] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Dec  6 02:51:43 np0005548731 nova_compute[232433]: 2025-12-06 07:51:43.464 232437 INFO nova.virt.libvirt.driver [None req-3cc475b2-e3fe-45cc-b80c-82e97e99fc30 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] [instance: 9cc0604e-1ff7-4781-8383-c780b6f598c5] Creating image(s)#033[00m
Dec  6 02:51:43 np0005548731 nova_compute[232433]: 2025-12-06 07:51:43.543 232437 DEBUG nova.storage.rbd_utils [None req-3cc475b2-e3fe-45cc-b80c-82e97e99fc30 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] rbd image 9cc0604e-1ff7-4781-8383-c780b6f598c5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:51:43 np0005548731 nova_compute[232433]: 2025-12-06 07:51:43.576 232437 DEBUG nova.storage.rbd_utils [None req-3cc475b2-e3fe-45cc-b80c-82e97e99fc30 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] rbd image 9cc0604e-1ff7-4781-8383-c780b6f598c5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:51:43 np0005548731 nova_compute[232433]: 2025-12-06 07:51:43.605 232437 DEBUG nova.storage.rbd_utils [None req-3cc475b2-e3fe-45cc-b80c-82e97e99fc30 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] rbd image 9cc0604e-1ff7-4781-8383-c780b6f598c5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:51:43 np0005548731 nova_compute[232433]: 2025-12-06 07:51:43.608 232437 DEBUG oslo_concurrency.processutils [None req-3cc475b2-e3fe-45cc-b80c-82e97e99fc30 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:51:43 np0005548731 nova_compute[232433]: 2025-12-06 07:51:43.679 232437 DEBUG oslo_concurrency.processutils [None req-3cc475b2-e3fe-45cc-b80c-82e97e99fc30 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:51:43 np0005548731 nova_compute[232433]: 2025-12-06 07:51:43.680 232437 DEBUG oslo_concurrency.lockutils [None req-3cc475b2-e3fe-45cc-b80c-82e97e99fc30 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Acquiring lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:51:43 np0005548731 nova_compute[232433]: 2025-12-06 07:51:43.681 232437 DEBUG oslo_concurrency.lockutils [None req-3cc475b2-e3fe-45cc-b80c-82e97e99fc30 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:51:43 np0005548731 nova_compute[232433]: 2025-12-06 07:51:43.682 232437 DEBUG oslo_concurrency.lockutils [None req-3cc475b2-e3fe-45cc-b80c-82e97e99fc30 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:51:43 np0005548731 nova_compute[232433]: 2025-12-06 07:51:43.708 232437 DEBUG nova.storage.rbd_utils [None req-3cc475b2-e3fe-45cc-b80c-82e97e99fc30 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] rbd image 9cc0604e-1ff7-4781-8383-c780b6f598c5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:51:43 np0005548731 nova_compute[232433]: 2025-12-06 07:51:43.711 232437 DEBUG oslo_concurrency.processutils [None req-3cc475b2-e3fe-45cc-b80c-82e97e99fc30 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef 9cc0604e-1ff7-4781-8383-c780b6f598c5_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:51:43 np0005548731 nova_compute[232433]: 2025-12-06 07:51:43.742 232437 DEBUG nova.policy [None req-3cc475b2-e3fe-45cc-b80c-82e97e99fc30 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'f2335740042045fba7f544ee5140eb87', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '4842ecff6dce4ccc981a6b65a14ea406', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Dec  6 02:51:43 np0005548731 nova_compute[232433]: 2025-12-06 07:51:43.914 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:51:43 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:51:43 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:51:43 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:51:43.996 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:51:44 np0005548731 nova_compute[232433]: 2025-12-06 07:51:44.174 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:51:44 np0005548731 nova_compute[232433]: 2025-12-06 07:51:44.300 232437 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765007489.2982247, 0b9681c0-c0e7-4bd8-9040-865c1bff517b => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 02:51:44 np0005548731 nova_compute[232433]: 2025-12-06 07:51:44.301 232437 INFO nova.compute.manager [-] [instance: 0b9681c0-c0e7-4bd8-9040-865c1bff517b] VM Stopped (Lifecycle Event)#033[00m
Dec  6 02:51:44 np0005548731 nova_compute[232433]: 2025-12-06 07:51:44.323 232437 DEBUG nova.compute.manager [None req-d24363ab-79cc-4084-b09a-640453346139 - - - - - -] [instance: 0b9681c0-c0e7-4bd8-9040-865c1bff517b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:51:44 np0005548731 nova_compute[232433]: 2025-12-06 07:51:44.333 232437 DEBUG oslo_concurrency.processutils [None req-3cc475b2-e3fe-45cc-b80c-82e97e99fc30 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef 9cc0604e-1ff7-4781-8383-c780b6f598c5_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.622s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:51:44 np0005548731 nova_compute[232433]: 2025-12-06 07:51:44.396 232437 DEBUG nova.storage.rbd_utils [None req-3cc475b2-e3fe-45cc-b80c-82e97e99fc30 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] resizing rbd image 9cc0604e-1ff7-4781-8383-c780b6f598c5_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Dec  6 02:51:44 np0005548731 nova_compute[232433]: 2025-12-06 07:51:44.552 232437 DEBUG nova.objects.instance [None req-3cc475b2-e3fe-45cc-b80c-82e97e99fc30 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Lazy-loading 'migration_context' on Instance uuid 9cc0604e-1ff7-4781-8383-c780b6f598c5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:51:44 np0005548731 nova_compute[232433]: 2025-12-06 07:51:44.821 232437 DEBUG nova.virt.libvirt.driver [None req-3cc475b2-e3fe-45cc-b80c-82e97e99fc30 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] [instance: 9cc0604e-1ff7-4781-8383-c780b6f598c5] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Dec  6 02:51:44 np0005548731 nova_compute[232433]: 2025-12-06 07:51:44.822 232437 DEBUG nova.virt.libvirt.driver [None req-3cc475b2-e3fe-45cc-b80c-82e97e99fc30 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] [instance: 9cc0604e-1ff7-4781-8383-c780b6f598c5] Ensure instance console log exists: /var/lib/nova/instances/9cc0604e-1ff7-4781-8383-c780b6f598c5/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec  6 02:51:44 np0005548731 nova_compute[232433]: 2025-12-06 07:51:44.822 232437 DEBUG oslo_concurrency.lockutils [None req-3cc475b2-e3fe-45cc-b80c-82e97e99fc30 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:51:44 np0005548731 nova_compute[232433]: 2025-12-06 07:51:44.823 232437 DEBUG oslo_concurrency.lockutils [None req-3cc475b2-e3fe-45cc-b80c-82e97e99fc30 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:51:44 np0005548731 nova_compute[232433]: 2025-12-06 07:51:44.823 232437 DEBUG oslo_concurrency.lockutils [None req-3cc475b2-e3fe-45cc-b80c-82e97e99fc30 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:51:44 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:51:44 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:51:44 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:51:44.886 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:51:45 np0005548731 nova_compute[232433]: 2025-12-06 07:51:45.494 232437 DEBUG nova.network.neutron [None req-3cc475b2-e3fe-45cc-b80c-82e97e99fc30 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] [instance: 9cc0604e-1ff7-4781-8383-c780b6f598c5] Successfully created port: 8e1136a8-e1c4-47b4-ada8-22860a8df286 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Dec  6 02:51:45 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:51:46 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:51:46 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:51:45.998 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:51:46 np0005548731 nova_compute[232433]: 2025-12-06 07:51:46.360 232437 DEBUG nova.network.neutron [None req-3cc475b2-e3fe-45cc-b80c-82e97e99fc30 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] [instance: 9cc0604e-1ff7-4781-8383-c780b6f598c5] Successfully updated port: 8e1136a8-e1c4-47b4-ada8-22860a8df286 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Dec  6 02:51:46 np0005548731 nova_compute[232433]: 2025-12-06 07:51:46.380 232437 DEBUG oslo_concurrency.lockutils [None req-3cc475b2-e3fe-45cc-b80c-82e97e99fc30 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Acquiring lock "refresh_cache-9cc0604e-1ff7-4781-8383-c780b6f598c5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 02:51:46 np0005548731 nova_compute[232433]: 2025-12-06 07:51:46.380 232437 DEBUG oslo_concurrency.lockutils [None req-3cc475b2-e3fe-45cc-b80c-82e97e99fc30 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Acquired lock "refresh_cache-9cc0604e-1ff7-4781-8383-c780b6f598c5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 02:51:46 np0005548731 nova_compute[232433]: 2025-12-06 07:51:46.381 232437 DEBUG nova.network.neutron [None req-3cc475b2-e3fe-45cc-b80c-82e97e99fc30 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] [instance: 9cc0604e-1ff7-4781-8383-c780b6f598c5] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec  6 02:51:46 np0005548731 nova_compute[232433]: 2025-12-06 07:51:46.471 232437 DEBUG nova.compute.manager [req-4c83dd74-8832-45ce-9a7e-8a26488aaf7b req-0447b913-2b75-4948-876c-3482dfacf4a5 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 9cc0604e-1ff7-4781-8383-c780b6f598c5] Received event network-changed-8e1136a8-e1c4-47b4-ada8-22860a8df286 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:51:46 np0005548731 nova_compute[232433]: 2025-12-06 07:51:46.471 232437 DEBUG nova.compute.manager [req-4c83dd74-8832-45ce-9a7e-8a26488aaf7b req-0447b913-2b75-4948-876c-3482dfacf4a5 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 9cc0604e-1ff7-4781-8383-c780b6f598c5] Refreshing instance network info cache due to event network-changed-8e1136a8-e1c4-47b4-ada8-22860a8df286. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  6 02:51:46 np0005548731 nova_compute[232433]: 2025-12-06 07:51:46.471 232437 DEBUG oslo_concurrency.lockutils [req-4c83dd74-8832-45ce-9a7e-8a26488aaf7b req-0447b913-2b75-4948-876c-3482dfacf4a5 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-9cc0604e-1ff7-4781-8383-c780b6f598c5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 02:51:46 np0005548731 nova_compute[232433]: 2025-12-06 07:51:46.573 232437 DEBUG nova.network.neutron [None req-3cc475b2-e3fe-45cc-b80c-82e97e99fc30 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] [instance: 9cc0604e-1ff7-4781-8383-c780b6f598c5] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec  6 02:51:46 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e383 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:51:46 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:51:46 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:51:46 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:51:46.887 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:51:47 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e384 e384: 3 total, 3 up, 3 in
Dec  6 02:51:48 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:51:48 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:51:48 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:51:47.999 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:51:48 np0005548731 nova_compute[232433]: 2025-12-06 07:51:48.591 232437 DEBUG nova.network.neutron [None req-3cc475b2-e3fe-45cc-b80c-82e97e99fc30 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] [instance: 9cc0604e-1ff7-4781-8383-c780b6f598c5] Updating instance_info_cache with network_info: [{"id": "8e1136a8-e1c4-47b4-ada8-22860a8df286", "address": "fa:16:3e:3b:5f:69", "network": {"id": "3d151181-0dfe-43ab-b47e-15b53add33a6", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-534312753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4842ecff6dce4ccc981a6b65a14ea406", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8e1136a8-e1", "ovs_interfaceid": "8e1136a8-e1c4-47b4-ada8-22860a8df286", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 02:51:48 np0005548731 nova_compute[232433]: 2025-12-06 07:51:48.679 232437 DEBUG oslo_concurrency.lockutils [None req-3cc475b2-e3fe-45cc-b80c-82e97e99fc30 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Releasing lock "refresh_cache-9cc0604e-1ff7-4781-8383-c780b6f598c5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 02:51:48 np0005548731 nova_compute[232433]: 2025-12-06 07:51:48.680 232437 DEBUG nova.compute.manager [None req-3cc475b2-e3fe-45cc-b80c-82e97e99fc30 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] [instance: 9cc0604e-1ff7-4781-8383-c780b6f598c5] Instance network_info: |[{"id": "8e1136a8-e1c4-47b4-ada8-22860a8df286", "address": "fa:16:3e:3b:5f:69", "network": {"id": "3d151181-0dfe-43ab-b47e-15b53add33a6", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-534312753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4842ecff6dce4ccc981a6b65a14ea406", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8e1136a8-e1", "ovs_interfaceid": "8e1136a8-e1c4-47b4-ada8-22860a8df286", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Dec  6 02:51:48 np0005548731 nova_compute[232433]: 2025-12-06 07:51:48.680 232437 DEBUG oslo_concurrency.lockutils [req-4c83dd74-8832-45ce-9a7e-8a26488aaf7b req-0447b913-2b75-4948-876c-3482dfacf4a5 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-9cc0604e-1ff7-4781-8383-c780b6f598c5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 02:51:48 np0005548731 nova_compute[232433]: 2025-12-06 07:51:48.680 232437 DEBUG nova.network.neutron [req-4c83dd74-8832-45ce-9a7e-8a26488aaf7b req-0447b913-2b75-4948-876c-3482dfacf4a5 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 9cc0604e-1ff7-4781-8383-c780b6f598c5] Refreshing network info cache for port 8e1136a8-e1c4-47b4-ada8-22860a8df286 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  6 02:51:48 np0005548731 nova_compute[232433]: 2025-12-06 07:51:48.683 232437 DEBUG nova.virt.libvirt.driver [None req-3cc475b2-e3fe-45cc-b80c-82e97e99fc30 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] [instance: 9cc0604e-1ff7-4781-8383-c780b6f598c5] Start _get_guest_xml network_info=[{"id": "8e1136a8-e1c4-47b4-ada8-22860a8df286", "address": "fa:16:3e:3b:5f:69", "network": {"id": "3d151181-0dfe-43ab-b47e-15b53add33a6", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-534312753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4842ecff6dce4ccc981a6b65a14ea406", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8e1136a8-e1", "ovs_interfaceid": "8e1136a8-e1c4-47b4-ada8-22860a8df286", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'device_type': 'disk', 'encryption_secret_uuid': None, 'disk_bus': 'virtio', 'boot_index': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'encryption_options': None, 'size': 0, 'encrypted': False, 'image_id': '6efab05d-c7cf-4770-a5c3-c806a2739063'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec  6 02:51:48 np0005548731 nova_compute[232433]: 2025-12-06 07:51:48.687 232437 WARNING nova.virt.libvirt.driver [None req-3cc475b2-e3fe-45cc-b80c-82e97e99fc30 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  6 02:51:48 np0005548731 nova_compute[232433]: 2025-12-06 07:51:48.692 232437 DEBUG nova.virt.libvirt.host [None req-3cc475b2-e3fe-45cc-b80c-82e97e99fc30 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec  6 02:51:48 np0005548731 nova_compute[232433]: 2025-12-06 07:51:48.692 232437 DEBUG nova.virt.libvirt.host [None req-3cc475b2-e3fe-45cc-b80c-82e97e99fc30 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec  6 02:51:48 np0005548731 nova_compute[232433]: 2025-12-06 07:51:48.695 232437 DEBUG nova.virt.libvirt.host [None req-3cc475b2-e3fe-45cc-b80c-82e97e99fc30 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec  6 02:51:48 np0005548731 nova_compute[232433]: 2025-12-06 07:51:48.695 232437 DEBUG nova.virt.libvirt.host [None req-3cc475b2-e3fe-45cc-b80c-82e97e99fc30 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec  6 02:51:48 np0005548731 nova_compute[232433]: 2025-12-06 07:51:48.696 232437 DEBUG nova.virt.libvirt.driver [None req-3cc475b2-e3fe-45cc-b80c-82e97e99fc30 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec  6 02:51:48 np0005548731 nova_compute[232433]: 2025-12-06 07:51:48.696 232437 DEBUG nova.virt.hardware [None req-3cc475b2-e3fe-45cc-b80c-82e97e99fc30 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-06T06:56:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='25848a18-11d9-4f11-80b5-5d005675c76d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec  6 02:51:48 np0005548731 nova_compute[232433]: 2025-12-06 07:51:48.697 232437 DEBUG nova.virt.hardware [None req-3cc475b2-e3fe-45cc-b80c-82e97e99fc30 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec  6 02:51:48 np0005548731 nova_compute[232433]: 2025-12-06 07:51:48.697 232437 DEBUG nova.virt.hardware [None req-3cc475b2-e3fe-45cc-b80c-82e97e99fc30 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec  6 02:51:48 np0005548731 nova_compute[232433]: 2025-12-06 07:51:48.697 232437 DEBUG nova.virt.hardware [None req-3cc475b2-e3fe-45cc-b80c-82e97e99fc30 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec  6 02:51:48 np0005548731 nova_compute[232433]: 2025-12-06 07:51:48.697 232437 DEBUG nova.virt.hardware [None req-3cc475b2-e3fe-45cc-b80c-82e97e99fc30 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec  6 02:51:48 np0005548731 nova_compute[232433]: 2025-12-06 07:51:48.697 232437 DEBUG nova.virt.hardware [None req-3cc475b2-e3fe-45cc-b80c-82e97e99fc30 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec  6 02:51:48 np0005548731 nova_compute[232433]: 2025-12-06 07:51:48.698 232437 DEBUG nova.virt.hardware [None req-3cc475b2-e3fe-45cc-b80c-82e97e99fc30 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec  6 02:51:48 np0005548731 nova_compute[232433]: 2025-12-06 07:51:48.698 232437 DEBUG nova.virt.hardware [None req-3cc475b2-e3fe-45cc-b80c-82e97e99fc30 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec  6 02:51:48 np0005548731 nova_compute[232433]: 2025-12-06 07:51:48.698 232437 DEBUG nova.virt.hardware [None req-3cc475b2-e3fe-45cc-b80c-82e97e99fc30 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec  6 02:51:48 np0005548731 nova_compute[232433]: 2025-12-06 07:51:48.698 232437 DEBUG nova.virt.hardware [None req-3cc475b2-e3fe-45cc-b80c-82e97e99fc30 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec  6 02:51:48 np0005548731 nova_compute[232433]: 2025-12-06 07:51:48.698 232437 DEBUG nova.virt.hardware [None req-3cc475b2-e3fe-45cc-b80c-82e97e99fc30 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec  6 02:51:48 np0005548731 nova_compute[232433]: 2025-12-06 07:51:48.701 232437 DEBUG oslo_concurrency.processutils [None req-3cc475b2-e3fe-45cc-b80c-82e97e99fc30 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:51:48 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:51:48 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:51:48 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:51:48.889 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:51:48 np0005548731 nova_compute[232433]: 2025-12-06 07:51:48.916 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:51:49 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  6 02:51:49 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1411954260' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  6 02:51:49 np0005548731 nova_compute[232433]: 2025-12-06 07:51:49.200 232437 DEBUG oslo_concurrency.processutils [None req-3cc475b2-e3fe-45cc-b80c-82e97e99fc30 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.499s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:51:49 np0005548731 nova_compute[232433]: 2025-12-06 07:51:49.226 232437 DEBUG nova.storage.rbd_utils [None req-3cc475b2-e3fe-45cc-b80c-82e97e99fc30 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] rbd image 9cc0604e-1ff7-4781-8383-c780b6f598c5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:51:49 np0005548731 nova_compute[232433]: 2025-12-06 07:51:49.229 232437 DEBUG oslo_concurrency.processutils [None req-3cc475b2-e3fe-45cc-b80c-82e97e99fc30 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:51:49 np0005548731 nova_compute[232433]: 2025-12-06 07:51:49.262 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:51:49 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  6 02:51:49 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2754697469' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  6 02:51:49 np0005548731 nova_compute[232433]: 2025-12-06 07:51:49.660 232437 DEBUG oslo_concurrency.processutils [None req-3cc475b2-e3fe-45cc-b80c-82e97e99fc30 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.430s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:51:49 np0005548731 nova_compute[232433]: 2025-12-06 07:51:49.661 232437 DEBUG nova.virt.libvirt.vif [None req-3cc475b2-e3fe-45cc-b80c-82e97e99fc30 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T07:51:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerRescueNegativeTestJSON-server-193302101',display_name='tempest-ServerRescueNegativeTestJSON-server-193302101',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serverrescuenegativetestjson-server-193302101',id=169,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBH5A7PUR8B+q/aEm961UR40RWH5j2d0ebX0KEZ8+WjamXhjP8ST8tm7kS6TxJuU8YsY9D43lwbEhi9EKe7kSKkxSw3JIb+Ggj3DIhBD4zOx4oUM+gOFYaZD0JjPWKPEMBw==',key_name='tempest-keypair-414016542',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4842ecff6dce4ccc981a6b65a14ea406',ramdisk_id='',reservation_id='r-gcbn1atd',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerRescueNegativeTestJSON-1304226499',owner_user_name='tempest-ServerRescueNegativeTestJSON-1304226499-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:51:43Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='f2335740042045fba7f544ee5140eb87',uuid=9cc0604e-1ff7-4781-8383-c780b6f598c5,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "8e1136a8-e1c4-47b4-ada8-22860a8df286", "address": "fa:16:3e:3b:5f:69", "network": {"id": "3d151181-0dfe-43ab-b47e-15b53add33a6", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-534312753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4842ecff6dce4ccc981a6b65a14ea406", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8e1136a8-e1", "ovs_interfaceid": "8e1136a8-e1c4-47b4-ada8-22860a8df286", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec  6 02:51:49 np0005548731 nova_compute[232433]: 2025-12-06 07:51:49.662 232437 DEBUG nova.network.os_vif_util [None req-3cc475b2-e3fe-45cc-b80c-82e97e99fc30 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Converting VIF {"id": "8e1136a8-e1c4-47b4-ada8-22860a8df286", "address": "fa:16:3e:3b:5f:69", "network": {"id": "3d151181-0dfe-43ab-b47e-15b53add33a6", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-534312753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4842ecff6dce4ccc981a6b65a14ea406", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8e1136a8-e1", "ovs_interfaceid": "8e1136a8-e1c4-47b4-ada8-22860a8df286", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 02:51:49 np0005548731 nova_compute[232433]: 2025-12-06 07:51:49.663 232437 DEBUG nova.network.os_vif_util [None req-3cc475b2-e3fe-45cc-b80c-82e97e99fc30 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3b:5f:69,bridge_name='br-int',has_traffic_filtering=True,id=8e1136a8-e1c4-47b4-ada8-22860a8df286,network=Network(3d151181-0dfe-43ab-b47e-15b53add33a6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8e1136a8-e1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 02:51:49 np0005548731 nova_compute[232433]: 2025-12-06 07:51:49.664 232437 DEBUG nova.objects.instance [None req-3cc475b2-e3fe-45cc-b80c-82e97e99fc30 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Lazy-loading 'pci_devices' on Instance uuid 9cc0604e-1ff7-4781-8383-c780b6f598c5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:51:49 np0005548731 nova_compute[232433]: 2025-12-06 07:51:49.685 232437 DEBUG nova.virt.libvirt.driver [None req-3cc475b2-e3fe-45cc-b80c-82e97e99fc30 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] [instance: 9cc0604e-1ff7-4781-8383-c780b6f598c5] End _get_guest_xml xml=<domain type="kvm">
Dec  6 02:51:49 np0005548731 nova_compute[232433]:  <uuid>9cc0604e-1ff7-4781-8383-c780b6f598c5</uuid>
Dec  6 02:51:49 np0005548731 nova_compute[232433]:  <name>instance-000000a9</name>
Dec  6 02:51:49 np0005548731 nova_compute[232433]:  <memory>131072</memory>
Dec  6 02:51:49 np0005548731 nova_compute[232433]:  <vcpu>1</vcpu>
Dec  6 02:51:49 np0005548731 nova_compute[232433]:  <metadata>
Dec  6 02:51:49 np0005548731 nova_compute[232433]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec  6 02:51:49 np0005548731 nova_compute[232433]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec  6 02:51:49 np0005548731 nova_compute[232433]:      <nova:name>tempest-ServerRescueNegativeTestJSON-server-193302101</nova:name>
Dec  6 02:51:49 np0005548731 nova_compute[232433]:      <nova:creationTime>2025-12-06 07:51:48</nova:creationTime>
Dec  6 02:51:49 np0005548731 nova_compute[232433]:      <nova:flavor name="m1.nano">
Dec  6 02:51:49 np0005548731 nova_compute[232433]:        <nova:memory>128</nova:memory>
Dec  6 02:51:49 np0005548731 nova_compute[232433]:        <nova:disk>1</nova:disk>
Dec  6 02:51:49 np0005548731 nova_compute[232433]:        <nova:swap>0</nova:swap>
Dec  6 02:51:49 np0005548731 nova_compute[232433]:        <nova:ephemeral>0</nova:ephemeral>
Dec  6 02:51:49 np0005548731 nova_compute[232433]:        <nova:vcpus>1</nova:vcpus>
Dec  6 02:51:49 np0005548731 nova_compute[232433]:      </nova:flavor>
Dec  6 02:51:49 np0005548731 nova_compute[232433]:      <nova:owner>
Dec  6 02:51:49 np0005548731 nova_compute[232433]:        <nova:user uuid="f2335740042045fba7f544ee5140eb87">tempest-ServerRescueNegativeTestJSON-1304226499-project-member</nova:user>
Dec  6 02:51:49 np0005548731 nova_compute[232433]:        <nova:project uuid="4842ecff6dce4ccc981a6b65a14ea406">tempest-ServerRescueNegativeTestJSON-1304226499</nova:project>
Dec  6 02:51:49 np0005548731 nova_compute[232433]:      </nova:owner>
Dec  6 02:51:49 np0005548731 nova_compute[232433]:      <nova:root type="image" uuid="6efab05d-c7cf-4770-a5c3-c806a2739063"/>
Dec  6 02:51:49 np0005548731 nova_compute[232433]:      <nova:ports>
Dec  6 02:51:49 np0005548731 nova_compute[232433]:        <nova:port uuid="8e1136a8-e1c4-47b4-ada8-22860a8df286">
Dec  6 02:51:49 np0005548731 nova_compute[232433]:          <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Dec  6 02:51:49 np0005548731 nova_compute[232433]:        </nova:port>
Dec  6 02:51:49 np0005548731 nova_compute[232433]:      </nova:ports>
Dec  6 02:51:49 np0005548731 nova_compute[232433]:    </nova:instance>
Dec  6 02:51:49 np0005548731 nova_compute[232433]:  </metadata>
Dec  6 02:51:49 np0005548731 nova_compute[232433]:  <sysinfo type="smbios">
Dec  6 02:51:49 np0005548731 nova_compute[232433]:    <system>
Dec  6 02:51:49 np0005548731 nova_compute[232433]:      <entry name="manufacturer">RDO</entry>
Dec  6 02:51:49 np0005548731 nova_compute[232433]:      <entry name="product">OpenStack Compute</entry>
Dec  6 02:51:49 np0005548731 nova_compute[232433]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec  6 02:51:49 np0005548731 nova_compute[232433]:      <entry name="serial">9cc0604e-1ff7-4781-8383-c780b6f598c5</entry>
Dec  6 02:51:49 np0005548731 nova_compute[232433]:      <entry name="uuid">9cc0604e-1ff7-4781-8383-c780b6f598c5</entry>
Dec  6 02:51:49 np0005548731 nova_compute[232433]:      <entry name="family">Virtual Machine</entry>
Dec  6 02:51:49 np0005548731 nova_compute[232433]:    </system>
Dec  6 02:51:49 np0005548731 nova_compute[232433]:  </sysinfo>
Dec  6 02:51:49 np0005548731 nova_compute[232433]:  <os>
Dec  6 02:51:49 np0005548731 nova_compute[232433]:    <type arch="x86_64" machine="q35">hvm</type>
Dec  6 02:51:49 np0005548731 nova_compute[232433]:    <boot dev="hd"/>
Dec  6 02:51:49 np0005548731 nova_compute[232433]:    <smbios mode="sysinfo"/>
Dec  6 02:51:49 np0005548731 nova_compute[232433]:  </os>
Dec  6 02:51:49 np0005548731 nova_compute[232433]:  <features>
Dec  6 02:51:49 np0005548731 nova_compute[232433]:    <acpi/>
Dec  6 02:51:49 np0005548731 nova_compute[232433]:    <apic/>
Dec  6 02:51:49 np0005548731 nova_compute[232433]:    <vmcoreinfo/>
Dec  6 02:51:49 np0005548731 nova_compute[232433]:  </features>
Dec  6 02:51:49 np0005548731 nova_compute[232433]:  <clock offset="utc">
Dec  6 02:51:49 np0005548731 nova_compute[232433]:    <timer name="pit" tickpolicy="delay"/>
Dec  6 02:51:49 np0005548731 nova_compute[232433]:    <timer name="rtc" tickpolicy="catchup"/>
Dec  6 02:51:49 np0005548731 nova_compute[232433]:    <timer name="hpet" present="no"/>
Dec  6 02:51:49 np0005548731 nova_compute[232433]:  </clock>
Dec  6 02:51:49 np0005548731 nova_compute[232433]:  <cpu mode="custom" match="exact">
Dec  6 02:51:49 np0005548731 nova_compute[232433]:    <model>Nehalem</model>
Dec  6 02:51:49 np0005548731 nova_compute[232433]:    <topology sockets="1" cores="1" threads="1"/>
Dec  6 02:51:49 np0005548731 nova_compute[232433]:  </cpu>
Dec  6 02:51:49 np0005548731 nova_compute[232433]:  <devices>
Dec  6 02:51:49 np0005548731 nova_compute[232433]:    <disk type="network" device="disk">
Dec  6 02:51:49 np0005548731 nova_compute[232433]:      <driver type="raw" cache="none"/>
Dec  6 02:51:49 np0005548731 nova_compute[232433]:      <source protocol="rbd" name="vms/9cc0604e-1ff7-4781-8383-c780b6f598c5_disk">
Dec  6 02:51:49 np0005548731 nova_compute[232433]:        <host name="192.168.122.100" port="6789"/>
Dec  6 02:51:49 np0005548731 nova_compute[232433]:        <host name="192.168.122.102" port="6789"/>
Dec  6 02:51:49 np0005548731 nova_compute[232433]:        <host name="192.168.122.101" port="6789"/>
Dec  6 02:51:49 np0005548731 nova_compute[232433]:      </source>
Dec  6 02:51:49 np0005548731 nova_compute[232433]:      <auth username="openstack">
Dec  6 02:51:49 np0005548731 nova_compute[232433]:        <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec  6 02:51:49 np0005548731 nova_compute[232433]:      </auth>
Dec  6 02:51:49 np0005548731 nova_compute[232433]:      <target dev="vda" bus="virtio"/>
Dec  6 02:51:49 np0005548731 nova_compute[232433]:    </disk>
Dec  6 02:51:49 np0005548731 nova_compute[232433]:    <disk type="network" device="cdrom">
Dec  6 02:51:49 np0005548731 nova_compute[232433]:      <driver type="raw" cache="none"/>
Dec  6 02:51:49 np0005548731 nova_compute[232433]:      <source protocol="rbd" name="vms/9cc0604e-1ff7-4781-8383-c780b6f598c5_disk.config">
Dec  6 02:51:49 np0005548731 nova_compute[232433]:        <host name="192.168.122.100" port="6789"/>
Dec  6 02:51:49 np0005548731 nova_compute[232433]:        <host name="192.168.122.102" port="6789"/>
Dec  6 02:51:49 np0005548731 nova_compute[232433]:        <host name="192.168.122.101" port="6789"/>
Dec  6 02:51:49 np0005548731 nova_compute[232433]:      </source>
Dec  6 02:51:49 np0005548731 nova_compute[232433]:      <auth username="openstack">
Dec  6 02:51:49 np0005548731 nova_compute[232433]:        <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec  6 02:51:49 np0005548731 nova_compute[232433]:      </auth>
Dec  6 02:51:49 np0005548731 nova_compute[232433]:      <target dev="sda" bus="sata"/>
Dec  6 02:51:49 np0005548731 nova_compute[232433]:    </disk>
Dec  6 02:51:49 np0005548731 nova_compute[232433]:    <interface type="ethernet">
Dec  6 02:51:49 np0005548731 nova_compute[232433]:      <mac address="fa:16:3e:3b:5f:69"/>
Dec  6 02:51:49 np0005548731 nova_compute[232433]:      <model type="virtio"/>
Dec  6 02:51:49 np0005548731 nova_compute[232433]:      <driver name="vhost" rx_queue_size="512"/>
Dec  6 02:51:49 np0005548731 nova_compute[232433]:      <mtu size="1442"/>
Dec  6 02:51:49 np0005548731 nova_compute[232433]:      <target dev="tap8e1136a8-e1"/>
Dec  6 02:51:49 np0005548731 nova_compute[232433]:    </interface>
Dec  6 02:51:49 np0005548731 nova_compute[232433]:    <serial type="pty">
Dec  6 02:51:49 np0005548731 nova_compute[232433]:      <log file="/var/lib/nova/instances/9cc0604e-1ff7-4781-8383-c780b6f598c5/console.log" append="off"/>
Dec  6 02:51:49 np0005548731 nova_compute[232433]:    </serial>
Dec  6 02:51:49 np0005548731 nova_compute[232433]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec  6 02:51:49 np0005548731 nova_compute[232433]:    <video>
Dec  6 02:51:49 np0005548731 nova_compute[232433]:      <model type="virtio"/>
Dec  6 02:51:49 np0005548731 nova_compute[232433]:    </video>
Dec  6 02:51:49 np0005548731 nova_compute[232433]:    <input type="tablet" bus="usb"/>
Dec  6 02:51:49 np0005548731 nova_compute[232433]:    <rng model="virtio">
Dec  6 02:51:49 np0005548731 nova_compute[232433]:      <backend model="random">/dev/urandom</backend>
Dec  6 02:51:49 np0005548731 nova_compute[232433]:    </rng>
Dec  6 02:51:49 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root"/>
Dec  6 02:51:49 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:51:49 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:51:49 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:51:49 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:51:49 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:51:49 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:51:49 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:51:49 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:51:49 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:51:49 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:51:49 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:51:49 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:51:49 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:51:49 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:51:49 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:51:49 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:51:49 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:51:49 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:51:49 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:51:49 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:51:49 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:51:49 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:51:49 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:51:49 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:51:49 np0005548731 nova_compute[232433]:    <controller type="usb" index="0"/>
Dec  6 02:51:49 np0005548731 nova_compute[232433]:    <memballoon model="virtio">
Dec  6 02:51:49 np0005548731 nova_compute[232433]:      <stats period="10"/>
Dec  6 02:51:49 np0005548731 nova_compute[232433]:    </memballoon>
Dec  6 02:51:49 np0005548731 nova_compute[232433]:  </devices>
Dec  6 02:51:49 np0005548731 nova_compute[232433]: </domain>
Dec  6 02:51:49 np0005548731 nova_compute[232433]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec  6 02:51:49 np0005548731 nova_compute[232433]: 2025-12-06 07:51:49.687 232437 DEBUG nova.compute.manager [None req-3cc475b2-e3fe-45cc-b80c-82e97e99fc30 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] [instance: 9cc0604e-1ff7-4781-8383-c780b6f598c5] Preparing to wait for external event network-vif-plugged-8e1136a8-e1c4-47b4-ada8-22860a8df286 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec  6 02:51:49 np0005548731 nova_compute[232433]: 2025-12-06 07:51:49.687 232437 DEBUG oslo_concurrency.lockutils [None req-3cc475b2-e3fe-45cc-b80c-82e97e99fc30 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Acquiring lock "9cc0604e-1ff7-4781-8383-c780b6f598c5-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:51:49 np0005548731 nova_compute[232433]: 2025-12-06 07:51:49.687 232437 DEBUG oslo_concurrency.lockutils [None req-3cc475b2-e3fe-45cc-b80c-82e97e99fc30 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Lock "9cc0604e-1ff7-4781-8383-c780b6f598c5-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:51:49 np0005548731 nova_compute[232433]: 2025-12-06 07:51:49.688 232437 DEBUG oslo_concurrency.lockutils [None req-3cc475b2-e3fe-45cc-b80c-82e97e99fc30 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Lock "9cc0604e-1ff7-4781-8383-c780b6f598c5-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:51:49 np0005548731 nova_compute[232433]: 2025-12-06 07:51:49.688 232437 DEBUG nova.virt.libvirt.vif [None req-3cc475b2-e3fe-45cc-b80c-82e97e99fc30 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T07:51:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerRescueNegativeTestJSON-server-193302101',display_name='tempest-ServerRescueNegativeTestJSON-server-193302101',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serverrescuenegativetestjson-server-193302101',id=169,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBH5A7PUR8B+q/aEm961UR40RWH5j2d0ebX0KEZ8+WjamXhjP8ST8tm7kS6TxJuU8YsY9D43lwbEhi9EKe7kSKkxSw3JIb+Ggj3DIhBD4zOx4oUM+gOFYaZD0JjPWKPEMBw==',key_name='tempest-keypair-414016542',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4842ecff6dce4ccc981a6b65a14ea406',ramdisk_id='',reservation_id='r-gcbn1atd',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerRescueNegativeTestJSON-1304226499',owner_user_name='tempest-ServerRescueNegativeTestJSON-1304226499-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:51:43Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='f2335740042045fba7f544ee5140eb87',uuid=9cc0604e-1ff7-4781-8383-c780b6f598c5,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "8e1136a8-e1c4-47b4-ada8-22860a8df286", "address": "fa:16:3e:3b:5f:69", "network": {"id": "3d151181-0dfe-43ab-b47e-15b53add33a6", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-534312753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4842ecff6dce4ccc981a6b65a14ea406", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8e1136a8-e1", "ovs_interfaceid": "8e1136a8-e1c4-47b4-ada8-22860a8df286", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec  6 02:51:49 np0005548731 nova_compute[232433]: 2025-12-06 07:51:49.689 232437 DEBUG nova.network.os_vif_util [None req-3cc475b2-e3fe-45cc-b80c-82e97e99fc30 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Converting VIF {"id": "8e1136a8-e1c4-47b4-ada8-22860a8df286", "address": "fa:16:3e:3b:5f:69", "network": {"id": "3d151181-0dfe-43ab-b47e-15b53add33a6", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-534312753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4842ecff6dce4ccc981a6b65a14ea406", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8e1136a8-e1", "ovs_interfaceid": "8e1136a8-e1c4-47b4-ada8-22860a8df286", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 02:51:49 np0005548731 nova_compute[232433]: 2025-12-06 07:51:49.689 232437 DEBUG nova.network.os_vif_util [None req-3cc475b2-e3fe-45cc-b80c-82e97e99fc30 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3b:5f:69,bridge_name='br-int',has_traffic_filtering=True,id=8e1136a8-e1c4-47b4-ada8-22860a8df286,network=Network(3d151181-0dfe-43ab-b47e-15b53add33a6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8e1136a8-e1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 02:51:49 np0005548731 nova_compute[232433]: 2025-12-06 07:51:49.690 232437 DEBUG os_vif [None req-3cc475b2-e3fe-45cc-b80c-82e97e99fc30 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:3b:5f:69,bridge_name='br-int',has_traffic_filtering=True,id=8e1136a8-e1c4-47b4-ada8-22860a8df286,network=Network(3d151181-0dfe-43ab-b47e-15b53add33a6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8e1136a8-e1') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec  6 02:51:49 np0005548731 nova_compute[232433]: 2025-12-06 07:51:49.690 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:51:49 np0005548731 nova_compute[232433]: 2025-12-06 07:51:49.690 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:51:49 np0005548731 nova_compute[232433]: 2025-12-06 07:51:49.691 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  6 02:51:49 np0005548731 nova_compute[232433]: 2025-12-06 07:51:49.693 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:51:49 np0005548731 nova_compute[232433]: 2025-12-06 07:51:49.694 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8e1136a8-e1, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:51:49 np0005548731 nova_compute[232433]: 2025-12-06 07:51:49.694 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap8e1136a8-e1, col_values=(('external_ids', {'iface-id': '8e1136a8-e1c4-47b4-ada8-22860a8df286', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:3b:5f:69', 'vm-uuid': '9cc0604e-1ff7-4781-8383-c780b6f598c5'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:51:49 np0005548731 NetworkManager[49182]: <info>  [1765007509.6967] manager: (tap8e1136a8-e1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/378)
Dec  6 02:51:49 np0005548731 nova_compute[232433]: 2025-12-06 07:51:49.698 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  6 02:51:49 np0005548731 nova_compute[232433]: 2025-12-06 07:51:49.702 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:51:49 np0005548731 nova_compute[232433]: 2025-12-06 07:51:49.703 232437 INFO os_vif [None req-3cc475b2-e3fe-45cc-b80c-82e97e99fc30 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:3b:5f:69,bridge_name='br-int',has_traffic_filtering=True,id=8e1136a8-e1c4-47b4-ada8-22860a8df286,network=Network(3d151181-0dfe-43ab-b47e-15b53add33a6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8e1136a8-e1')#033[00m
Dec  6 02:51:49 np0005548731 nova_compute[232433]: 2025-12-06 07:51:49.752 232437 DEBUG nova.virt.libvirt.driver [None req-3cc475b2-e3fe-45cc-b80c-82e97e99fc30 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  6 02:51:49 np0005548731 nova_compute[232433]: 2025-12-06 07:51:49.753 232437 DEBUG nova.virt.libvirt.driver [None req-3cc475b2-e3fe-45cc-b80c-82e97e99fc30 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  6 02:51:49 np0005548731 nova_compute[232433]: 2025-12-06 07:51:49.753 232437 DEBUG nova.virt.libvirt.driver [None req-3cc475b2-e3fe-45cc-b80c-82e97e99fc30 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] No VIF found with MAC fa:16:3e:3b:5f:69, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec  6 02:51:49 np0005548731 nova_compute[232433]: 2025-12-06 07:51:49.753 232437 INFO nova.virt.libvirt.driver [None req-3cc475b2-e3fe-45cc-b80c-82e97e99fc30 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] [instance: 9cc0604e-1ff7-4781-8383-c780b6f598c5] Using config drive#033[00m
Dec  6 02:51:49 np0005548731 nova_compute[232433]: 2025-12-06 07:51:49.778 232437 DEBUG nova.storage.rbd_utils [None req-3cc475b2-e3fe-45cc-b80c-82e97e99fc30 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] rbd image 9cc0604e-1ff7-4781-8383-c780b6f598c5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:51:49 np0005548731 nova_compute[232433]: 2025-12-06 07:51:49.944 232437 DEBUG nova.network.neutron [req-4c83dd74-8832-45ce-9a7e-8a26488aaf7b req-0447b913-2b75-4948-876c-3482dfacf4a5 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 9cc0604e-1ff7-4781-8383-c780b6f598c5] Updated VIF entry in instance network info cache for port 8e1136a8-e1c4-47b4-ada8-22860a8df286. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  6 02:51:49 np0005548731 nova_compute[232433]: 2025-12-06 07:51:49.945 232437 DEBUG nova.network.neutron [req-4c83dd74-8832-45ce-9a7e-8a26488aaf7b req-0447b913-2b75-4948-876c-3482dfacf4a5 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 9cc0604e-1ff7-4781-8383-c780b6f598c5] Updating instance_info_cache with network_info: [{"id": "8e1136a8-e1c4-47b4-ada8-22860a8df286", "address": "fa:16:3e:3b:5f:69", "network": {"id": "3d151181-0dfe-43ab-b47e-15b53add33a6", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-534312753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4842ecff6dce4ccc981a6b65a14ea406", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8e1136a8-e1", "ovs_interfaceid": "8e1136a8-e1c4-47b4-ada8-22860a8df286", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 02:51:49 np0005548731 nova_compute[232433]: 2025-12-06 07:51:49.962 232437 DEBUG oslo_concurrency.lockutils [req-4c83dd74-8832-45ce-9a7e-8a26488aaf7b req-0447b913-2b75-4948-876c-3482dfacf4a5 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-9cc0604e-1ff7-4781-8383-c780b6f598c5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 02:51:50 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:51:50 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:51:50 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:51:50.001 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:51:50 np0005548731 nova_compute[232433]: 2025-12-06 07:51:50.711 232437 INFO nova.virt.libvirt.driver [None req-3cc475b2-e3fe-45cc-b80c-82e97e99fc30 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] [instance: 9cc0604e-1ff7-4781-8383-c780b6f598c5] Creating config drive at /var/lib/nova/instances/9cc0604e-1ff7-4781-8383-c780b6f598c5/disk.config#033[00m
Dec  6 02:51:50 np0005548731 nova_compute[232433]: 2025-12-06 07:51:50.720 232437 DEBUG oslo_concurrency.processutils [None req-3cc475b2-e3fe-45cc-b80c-82e97e99fc30 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/9cc0604e-1ff7-4781-8383-c780b6f598c5/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpot5b5kd5 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:51:50 np0005548731 nova_compute[232433]: 2025-12-06 07:51:50.858 232437 DEBUG oslo_concurrency.processutils [None req-3cc475b2-e3fe-45cc-b80c-82e97e99fc30 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/9cc0604e-1ff7-4781-8383-c780b6f598c5/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpot5b5kd5" returned: 0 in 0.138s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:51:50 np0005548731 nova_compute[232433]: 2025-12-06 07:51:50.884 232437 DEBUG nova.storage.rbd_utils [None req-3cc475b2-e3fe-45cc-b80c-82e97e99fc30 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] rbd image 9cc0604e-1ff7-4781-8383-c780b6f598c5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:51:50 np0005548731 nova_compute[232433]: 2025-12-06 07:51:50.888 232437 DEBUG oslo_concurrency.processutils [None req-3cc475b2-e3fe-45cc-b80c-82e97e99fc30 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/9cc0604e-1ff7-4781-8383-c780b6f598c5/disk.config 9cc0604e-1ff7-4781-8383-c780b6f598c5_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:51:50 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:51:50 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:51:50 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:51:50.891 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:51:51 np0005548731 nova_compute[232433]: 2025-12-06 07:51:51.115 232437 DEBUG oslo_concurrency.processutils [None req-3cc475b2-e3fe-45cc-b80c-82e97e99fc30 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/9cc0604e-1ff7-4781-8383-c780b6f598c5/disk.config 9cc0604e-1ff7-4781-8383-c780b6f598c5_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.227s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:51:51 np0005548731 nova_compute[232433]: 2025-12-06 07:51:51.116 232437 INFO nova.virt.libvirt.driver [None req-3cc475b2-e3fe-45cc-b80c-82e97e99fc30 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] [instance: 9cc0604e-1ff7-4781-8383-c780b6f598c5] Deleting local config drive /var/lib/nova/instances/9cc0604e-1ff7-4781-8383-c780b6f598c5/disk.config because it was imported into RBD.#033[00m
Dec  6 02:51:51 np0005548731 kernel: tap8e1136a8-e1: entered promiscuous mode
Dec  6 02:51:51 np0005548731 NetworkManager[49182]: <info>  [1765007511.1648] manager: (tap8e1136a8-e1): new Tun device (/org/freedesktop/NetworkManager/Devices/379)
Dec  6 02:51:51 np0005548731 nova_compute[232433]: 2025-12-06 07:51:51.196 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:51:51 np0005548731 ovn_controller[133927]: 2025-12-06T07:51:51Z|00825|binding|INFO|Claiming lport 8e1136a8-e1c4-47b4-ada8-22860a8df286 for this chassis.
Dec  6 02:51:51 np0005548731 ovn_controller[133927]: 2025-12-06T07:51:51Z|00826|binding|INFO|8e1136a8-e1c4-47b4-ada8-22860a8df286: Claiming fa:16:3e:3b:5f:69 10.100.0.12
Dec  6 02:51:51 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:51:51.212 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3b:5f:69 10.100.0.12'], port_security=['fa:16:3e:3b:5f:69 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '9cc0604e-1ff7-4781-8383-c780b6f598c5', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3d151181-0dfe-43ab-b47e-15b53add33a6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4842ecff6dce4ccc981a6b65a14ea406', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'c2f9858c-886d-4e74-a599-a5dabbf1ba8e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=328c1a1e-05c1-492e-8ea7-52ea97c29304, chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], logical_port=8e1136a8-e1c4-47b4-ada8-22860a8df286) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 02:51:51 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:51:51.214 143965 INFO neutron.agent.ovn.metadata.agent [-] Port 8e1136a8-e1c4-47b4-ada8-22860a8df286 in datapath 3d151181-0dfe-43ab-b47e-15b53add33a6 bound to our chassis#033[00m
Dec  6 02:51:51 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:51:51.215 143965 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 3d151181-0dfe-43ab-b47e-15b53add33a6#033[00m
Dec  6 02:51:51 np0005548731 ovn_controller[133927]: 2025-12-06T07:51:51Z|00827|binding|INFO|Setting lport 8e1136a8-e1c4-47b4-ada8-22860a8df286 ovn-installed in OVS
Dec  6 02:51:51 np0005548731 ovn_controller[133927]: 2025-12-06T07:51:51Z|00828|binding|INFO|Setting lport 8e1136a8-e1c4-47b4-ada8-22860a8df286 up in Southbound
Dec  6 02:51:51 np0005548731 nova_compute[232433]: 2025-12-06 07:51:51.220 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:51:51 np0005548731 nova_compute[232433]: 2025-12-06 07:51:51.225 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:51:51 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:51:51.230 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[6c53951d-4209-4c76-a92d-6e1e35b8210b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:51:51 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:51:51.231 143965 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap3d151181-01 in ovnmeta-3d151181-0dfe-43ab-b47e-15b53add33a6 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Dec  6 02:51:51 np0005548731 systemd-machined[195355]: New machine qemu-84-instance-000000a9.
Dec  6 02:51:51 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:51:51.233 236668 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap3d151181-00 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Dec  6 02:51:51 np0005548731 systemd-udevd[310326]: Network interface NamePolicy= disabled on kernel command line.
Dec  6 02:51:51 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:51:51.233 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[8208b6ab-40cb-4acd-a41c-11e96bb6a604]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:51:51 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:51:51.234 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[1589599a-3df3-49c7-9d4a-4bcbf7bcdbcb]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:51:51 np0005548731 systemd[1]: Started Virtual Machine qemu-84-instance-000000a9.
Dec  6 02:51:51 np0005548731 NetworkManager[49182]: <info>  [1765007511.2479] device (tap8e1136a8-e1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec  6 02:51:51 np0005548731 NetworkManager[49182]: <info>  [1765007511.2492] device (tap8e1136a8-e1): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec  6 02:51:51 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:51:51.249 144079 DEBUG oslo.privsep.daemon [-] privsep: reply[73e3d8a9-d2d4-41d4-b106-2a5dd4661058]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:51:51 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:51:51.264 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[d85720ad-bbd2-409d-a8e8-bcc3c1797647]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:51:51 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:51:51.295 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[ae2560fb-3a80-4550-a524-38aac9f4ad63]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:51:51 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:51:51.300 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[62771ba3-6eb5-43b8-9dc7-b129df9a4845]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:51:51 np0005548731 NetworkManager[49182]: <info>  [1765007511.3013] manager: (tap3d151181-00): new Veth device (/org/freedesktop/NetworkManager/Devices/380)
Dec  6 02:51:51 np0005548731 systemd-udevd[310329]: Network interface NamePolicy= disabled on kernel command line.
Dec  6 02:51:51 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:51:51.331 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[8501b8f5-2d7a-481b-8065-fbf1b7f4d9ea]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:51:51 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:51:51.334 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[b8f719fa-3b59-4fc6-9dae-693157fdfa83]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:51:51 np0005548731 NetworkManager[49182]: <info>  [1765007511.3562] device (tap3d151181-00): carrier: link connected
Dec  6 02:51:51 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:51:51.361 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[509f33b9-5cc7-4a52-99d6-8c96c3f512ff]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:51:51 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:51:51.377 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[b6a09520-a3df-41e8-a8da-001e89bd8c74]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3d151181-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b4:13:0b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 251], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 777336, 'reachable_time': 38192, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 152, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 152, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 310358, 'error': None, 'target': 'ovnmeta-3d151181-0dfe-43ab-b47e-15b53add33a6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:51:51 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:51:51.392 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[9ba946c0-5251-4e35-b5b5-fde715951bf3]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feb4:130b'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 777336, 'tstamp': 777336}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 310359, 'error': None, 'target': 'ovnmeta-3d151181-0dfe-43ab-b47e-15b53add33a6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:51:51 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:51:51.408 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[8586b004-603b-460c-95f8-d775e9ed4c05]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3d151181-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b4:13:0b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 251], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 777336, 'reachable_time': 38192, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 152, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 152, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 310360, 'error': None, 'target': 'ovnmeta-3d151181-0dfe-43ab-b47e-15b53add33a6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:51:51 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:51:51.437 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[3398ebab-6f93-4ce4-9ad9-d85fde430bfe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:51:51 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:51:51.496 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[46b1817e-f883-4b32-9b31-423d843d7b1a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:51:51 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:51:51.497 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3d151181-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:51:51 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:51:51.497 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  6 02:51:51 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:51:51.498 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3d151181-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:51:51 np0005548731 nova_compute[232433]: 2025-12-06 07:51:51.543 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:51:51 np0005548731 NetworkManager[49182]: <info>  [1765007511.5439] manager: (tap3d151181-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/381)
Dec  6 02:51:51 np0005548731 kernel: tap3d151181-00: entered promiscuous mode
Dec  6 02:51:51 np0005548731 nova_compute[232433]: 2025-12-06 07:51:51.548 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:51:51 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:51:51.549 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap3d151181-00, col_values=(('external_ids', {'iface-id': '7c0488e1-35c2-4c92-b43c-271fbeecd9ea'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:51:51 np0005548731 nova_compute[232433]: 2025-12-06 07:51:51.550 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:51:51 np0005548731 ovn_controller[133927]: 2025-12-06T07:51:51Z|00829|binding|INFO|Releasing lport 7c0488e1-35c2-4c92-b43c-271fbeecd9ea from this chassis (sb_readonly=0)
Dec  6 02:51:51 np0005548731 nova_compute[232433]: 2025-12-06 07:51:51.564 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:51:51 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:51:51.565 143965 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/3d151181-0dfe-43ab-b47e-15b53add33a6.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/3d151181-0dfe-43ab-b47e-15b53add33a6.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Dec  6 02:51:51 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:51:51.566 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[2d3c0f0b-ff96-4aef-8052-7ffe84222f86]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:51:51 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:51:51.567 143965 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec  6 02:51:51 np0005548731 ovn_metadata_agent[143960]: global
Dec  6 02:51:51 np0005548731 ovn_metadata_agent[143960]:    log         /dev/log local0 debug
Dec  6 02:51:51 np0005548731 ovn_metadata_agent[143960]:    log-tag     haproxy-metadata-proxy-3d151181-0dfe-43ab-b47e-15b53add33a6
Dec  6 02:51:51 np0005548731 ovn_metadata_agent[143960]:    user        root
Dec  6 02:51:51 np0005548731 ovn_metadata_agent[143960]:    group       root
Dec  6 02:51:51 np0005548731 ovn_metadata_agent[143960]:    maxconn     1024
Dec  6 02:51:51 np0005548731 ovn_metadata_agent[143960]:    pidfile     /var/lib/neutron/external/pids/3d151181-0dfe-43ab-b47e-15b53add33a6.pid.haproxy
Dec  6 02:51:51 np0005548731 ovn_metadata_agent[143960]:    daemon
Dec  6 02:51:51 np0005548731 ovn_metadata_agent[143960]: 
Dec  6 02:51:51 np0005548731 ovn_metadata_agent[143960]: defaults
Dec  6 02:51:51 np0005548731 ovn_metadata_agent[143960]:    log global
Dec  6 02:51:51 np0005548731 ovn_metadata_agent[143960]:    mode http
Dec  6 02:51:51 np0005548731 ovn_metadata_agent[143960]:    option httplog
Dec  6 02:51:51 np0005548731 ovn_metadata_agent[143960]:    option dontlognull
Dec  6 02:51:51 np0005548731 ovn_metadata_agent[143960]:    option http-server-close
Dec  6 02:51:51 np0005548731 ovn_metadata_agent[143960]:    option forwardfor
Dec  6 02:51:51 np0005548731 ovn_metadata_agent[143960]:    retries                 3
Dec  6 02:51:51 np0005548731 ovn_metadata_agent[143960]:    timeout http-request    30s
Dec  6 02:51:51 np0005548731 ovn_metadata_agent[143960]:    timeout connect         30s
Dec  6 02:51:51 np0005548731 ovn_metadata_agent[143960]:    timeout client          32s
Dec  6 02:51:51 np0005548731 ovn_metadata_agent[143960]:    timeout server          32s
Dec  6 02:51:51 np0005548731 ovn_metadata_agent[143960]:    timeout http-keep-alive 30s
Dec  6 02:51:51 np0005548731 ovn_metadata_agent[143960]: 
Dec  6 02:51:51 np0005548731 ovn_metadata_agent[143960]: 
Dec  6 02:51:51 np0005548731 ovn_metadata_agent[143960]: listen listener
Dec  6 02:51:51 np0005548731 ovn_metadata_agent[143960]:    bind 169.254.169.254:80
Dec  6 02:51:51 np0005548731 ovn_metadata_agent[143960]:    server metadata /var/lib/neutron/metadata_proxy
Dec  6 02:51:51 np0005548731 ovn_metadata_agent[143960]:    http-request add-header X-OVN-Network-ID 3d151181-0dfe-43ab-b47e-15b53add33a6
Dec  6 02:51:51 np0005548731 ovn_metadata_agent[143960]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Dec  6 02:51:51 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:51:51.568 143965 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-3d151181-0dfe-43ab-b47e-15b53add33a6', 'env', 'PROCESS_TAG=haproxy-3d151181-0dfe-43ab-b47e-15b53add33a6', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/3d151181-0dfe-43ab-b47e-15b53add33a6.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Dec  6 02:51:51 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:51:51 np0005548731 nova_compute[232433]: 2025-12-06 07:51:51.946 232437 DEBUG nova.compute.manager [req-d691cd06-79b4-48d3-9a7f-83d5953d2c30 req-ffa79d5a-a271-4e0d-ae03-c5c27ee9799d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 9cc0604e-1ff7-4781-8383-c780b6f598c5] Received event network-vif-plugged-8e1136a8-e1c4-47b4-ada8-22860a8df286 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:51:51 np0005548731 nova_compute[232433]: 2025-12-06 07:51:51.947 232437 DEBUG oslo_concurrency.lockutils [req-d691cd06-79b4-48d3-9a7f-83d5953d2c30 req-ffa79d5a-a271-4e0d-ae03-c5c27ee9799d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "9cc0604e-1ff7-4781-8383-c780b6f598c5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:51:51 np0005548731 nova_compute[232433]: 2025-12-06 07:51:51.947 232437 DEBUG oslo_concurrency.lockutils [req-d691cd06-79b4-48d3-9a7f-83d5953d2c30 req-ffa79d5a-a271-4e0d-ae03-c5c27ee9799d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "9cc0604e-1ff7-4781-8383-c780b6f598c5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:51:51 np0005548731 nova_compute[232433]: 2025-12-06 07:51:51.948 232437 DEBUG oslo_concurrency.lockutils [req-d691cd06-79b4-48d3-9a7f-83d5953d2c30 req-ffa79d5a-a271-4e0d-ae03-c5c27ee9799d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "9cc0604e-1ff7-4781-8383-c780b6f598c5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:51:51 np0005548731 nova_compute[232433]: 2025-12-06 07:51:51.948 232437 DEBUG nova.compute.manager [req-d691cd06-79b4-48d3-9a7f-83d5953d2c30 req-ffa79d5a-a271-4e0d-ae03-c5c27ee9799d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 9cc0604e-1ff7-4781-8383-c780b6f598c5] Processing event network-vif-plugged-8e1136a8-e1c4-47b4-ada8-22860a8df286 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Dec  6 02:51:51 np0005548731 podman[310393]: 2025-12-06 07:51:51.960353611 +0000 UTC m=+0.063572096 container create 40903214fc9fbf2330b9823ba5ba40e7827fce42827f17b9d71161f330a9ae2c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3d151181-0dfe-43ab-b47e-15b53add33a6, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Dec  6 02:51:52 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:51:52 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:51:52 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:51:52.005 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:51:52 np0005548731 podman[310393]: 2025-12-06 07:51:51.923187592 +0000 UTC m=+0.026406107 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec  6 02:51:52 np0005548731 systemd[1]: Started libpod-conmon-40903214fc9fbf2330b9823ba5ba40e7827fce42827f17b9d71161f330a9ae2c.scope.
Dec  6 02:51:52 np0005548731 systemd[1]: Started libcrun container.
Dec  6 02:51:52 np0005548731 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c2b04b537af80f86fd98b1ce5a80acde92dff61ee3c84c111db5d565c41d8db/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec  6 02:51:52 np0005548731 podman[310393]: 2025-12-06 07:51:52.079163058 +0000 UTC m=+0.182381543 container init 40903214fc9fbf2330b9823ba5ba40e7827fce42827f17b9d71161f330a9ae2c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3d151181-0dfe-43ab-b47e-15b53add33a6, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  6 02:51:52 np0005548731 podman[310393]: 2025-12-06 07:51:52.084808496 +0000 UTC m=+0.188026981 container start 40903214fc9fbf2330b9823ba5ba40e7827fce42827f17b9d71161f330a9ae2c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3d151181-0dfe-43ab-b47e-15b53add33a6, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec  6 02:51:52 np0005548731 neutron-haproxy-ovnmeta-3d151181-0dfe-43ab-b47e-15b53add33a6[310431]: [NOTICE]   (310450) : New worker (310456) forked
Dec  6 02:51:52 np0005548731 neutron-haproxy-ovnmeta-3d151181-0dfe-43ab-b47e-15b53add33a6[310431]: [NOTICE]   (310450) : Loading success.
Dec  6 02:51:52 np0005548731 nova_compute[232433]: 2025-12-06 07:51:52.177 232437 DEBUG nova.compute.manager [None req-3cc475b2-e3fe-45cc-b80c-82e97e99fc30 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] [instance: 9cc0604e-1ff7-4781-8383-c780b6f598c5] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec  6 02:51:52 np0005548731 nova_compute[232433]: 2025-12-06 07:51:52.179 232437 DEBUG nova.virt.driver [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Emitting event <LifecycleEvent: 1765007512.1770308, 9cc0604e-1ff7-4781-8383-c780b6f598c5 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 02:51:52 np0005548731 nova_compute[232433]: 2025-12-06 07:51:52.179 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 9cc0604e-1ff7-4781-8383-c780b6f598c5] VM Started (Lifecycle Event)#033[00m
Dec  6 02:51:52 np0005548731 nova_compute[232433]: 2025-12-06 07:51:52.182 232437 DEBUG nova.virt.libvirt.driver [None req-3cc475b2-e3fe-45cc-b80c-82e97e99fc30 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] [instance: 9cc0604e-1ff7-4781-8383-c780b6f598c5] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Dec  6 02:51:52 np0005548731 nova_compute[232433]: 2025-12-06 07:51:52.186 232437 INFO nova.virt.libvirt.driver [-] [instance: 9cc0604e-1ff7-4781-8383-c780b6f598c5] Instance spawned successfully.#033[00m
Dec  6 02:51:52 np0005548731 nova_compute[232433]: 2025-12-06 07:51:52.186 232437 DEBUG nova.virt.libvirt.driver [None req-3cc475b2-e3fe-45cc-b80c-82e97e99fc30 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] [instance: 9cc0604e-1ff7-4781-8383-c780b6f598c5] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Dec  6 02:51:52 np0005548731 nova_compute[232433]: 2025-12-06 07:51:52.199 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 9cc0604e-1ff7-4781-8383-c780b6f598c5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:51:52 np0005548731 nova_compute[232433]: 2025-12-06 07:51:52.204 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 9cc0604e-1ff7-4781-8383-c780b6f598c5] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  6 02:51:52 np0005548731 nova_compute[232433]: 2025-12-06 07:51:52.207 232437 DEBUG nova.virt.libvirt.driver [None req-3cc475b2-e3fe-45cc-b80c-82e97e99fc30 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] [instance: 9cc0604e-1ff7-4781-8383-c780b6f598c5] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 02:51:52 np0005548731 nova_compute[232433]: 2025-12-06 07:51:52.207 232437 DEBUG nova.virt.libvirt.driver [None req-3cc475b2-e3fe-45cc-b80c-82e97e99fc30 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] [instance: 9cc0604e-1ff7-4781-8383-c780b6f598c5] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 02:51:52 np0005548731 nova_compute[232433]: 2025-12-06 07:51:52.208 232437 DEBUG nova.virt.libvirt.driver [None req-3cc475b2-e3fe-45cc-b80c-82e97e99fc30 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] [instance: 9cc0604e-1ff7-4781-8383-c780b6f598c5] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 02:51:52 np0005548731 nova_compute[232433]: 2025-12-06 07:51:52.208 232437 DEBUG nova.virt.libvirt.driver [None req-3cc475b2-e3fe-45cc-b80c-82e97e99fc30 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] [instance: 9cc0604e-1ff7-4781-8383-c780b6f598c5] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 02:51:52 np0005548731 nova_compute[232433]: 2025-12-06 07:51:52.208 232437 DEBUG nova.virt.libvirt.driver [None req-3cc475b2-e3fe-45cc-b80c-82e97e99fc30 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] [instance: 9cc0604e-1ff7-4781-8383-c780b6f598c5] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 02:51:52 np0005548731 nova_compute[232433]: 2025-12-06 07:51:52.209 232437 DEBUG nova.virt.libvirt.driver [None req-3cc475b2-e3fe-45cc-b80c-82e97e99fc30 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] [instance: 9cc0604e-1ff7-4781-8383-c780b6f598c5] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 02:51:52 np0005548731 nova_compute[232433]: 2025-12-06 07:51:52.234 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 9cc0604e-1ff7-4781-8383-c780b6f598c5] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  6 02:51:52 np0005548731 nova_compute[232433]: 2025-12-06 07:51:52.235 232437 DEBUG nova.virt.driver [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Emitting event <LifecycleEvent: 1765007512.1783519, 9cc0604e-1ff7-4781-8383-c780b6f598c5 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 02:51:52 np0005548731 nova_compute[232433]: 2025-12-06 07:51:52.235 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 9cc0604e-1ff7-4781-8383-c780b6f598c5] VM Paused (Lifecycle Event)#033[00m
Dec  6 02:51:52 np0005548731 nova_compute[232433]: 2025-12-06 07:51:52.260 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 9cc0604e-1ff7-4781-8383-c780b6f598c5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:51:52 np0005548731 nova_compute[232433]: 2025-12-06 07:51:52.263 232437 DEBUG nova.virt.driver [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Emitting event <LifecycleEvent: 1765007512.182163, 9cc0604e-1ff7-4781-8383-c780b6f598c5 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 02:51:52 np0005548731 nova_compute[232433]: 2025-12-06 07:51:52.263 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 9cc0604e-1ff7-4781-8383-c780b6f598c5] VM Resumed (Lifecycle Event)#033[00m
Dec  6 02:51:52 np0005548731 nova_compute[232433]: 2025-12-06 07:51:52.270 232437 INFO nova.compute.manager [None req-3cc475b2-e3fe-45cc-b80c-82e97e99fc30 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] [instance: 9cc0604e-1ff7-4781-8383-c780b6f598c5] Took 8.81 seconds to spawn the instance on the hypervisor.#033[00m
Dec  6 02:51:52 np0005548731 nova_compute[232433]: 2025-12-06 07:51:52.270 232437 DEBUG nova.compute.manager [None req-3cc475b2-e3fe-45cc-b80c-82e97e99fc30 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] [instance: 9cc0604e-1ff7-4781-8383-c780b6f598c5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:51:52 np0005548731 nova_compute[232433]: 2025-12-06 07:51:52.280 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 9cc0604e-1ff7-4781-8383-c780b6f598c5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:51:52 np0005548731 nova_compute[232433]: 2025-12-06 07:51:52.283 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 9cc0604e-1ff7-4781-8383-c780b6f598c5] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  6 02:51:52 np0005548731 nova_compute[232433]: 2025-12-06 07:51:52.302 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 9cc0604e-1ff7-4781-8383-c780b6f598c5] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  6 02:51:52 np0005548731 nova_compute[232433]: 2025-12-06 07:51:52.329 232437 INFO nova.compute.manager [None req-3cc475b2-e3fe-45cc-b80c-82e97e99fc30 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] [instance: 9cc0604e-1ff7-4781-8383-c780b6f598c5] Took 9.70 seconds to build instance.#033[00m
Dec  6 02:51:52 np0005548731 nova_compute[232433]: 2025-12-06 07:51:52.349 232437 DEBUG oslo_concurrency.lockutils [None req-3cc475b2-e3fe-45cc-b80c-82e97e99fc30 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Lock "9cc0604e-1ff7-4781-8383-c780b6f598c5" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.796s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:51:52 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:51:52 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:51:52 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:51:52.893 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:51:53 np0005548731 podman[310467]: 2025-12-06 07:51:53.895921868 +0000 UTC m=+0.055056629 container health_status 26c2ce9bdb83c6db5ccad7f5b2a7cb4e9354ab0235ad2f942ea42c8feda2142b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, 
org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Dec  6 02:51:53 np0005548731 podman[310469]: 2025-12-06 07:51:53.928348831 +0000 UTC m=+0.083554295 container health_status ae4fd08cdec31d6ae2a6b91ffff7deb6ca5a8375cb52ee4b685cc6f25796824e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, container_name=multipathd, org.label-schema.build-date=20251125, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec  6 02:51:53 np0005548731 podman[310468]: 2025-12-06 07:51:53.975301959 +0000 UTC m=+0.133394674 container health_status a118c3becf56cfbcae208065771ebf10a65162bb7b96389971293e534823fb78 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Dec  6 02:51:54 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:51:54 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:51:54 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:51:54.008 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:51:54 np0005548731 nova_compute[232433]: 2025-12-06 07:51:54.205 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:51:54 np0005548731 nova_compute[232433]: 2025-12-06 07:51:54.471 232437 DEBUG nova.compute.manager [req-ef033e24-84fc-4362-ac6f-dfc323262c24 req-28fda395-cfc6-4a94-9a29-673df52a816b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 9cc0604e-1ff7-4781-8383-c780b6f598c5] Received event network-vif-plugged-8e1136a8-e1c4-47b4-ada8-22860a8df286 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:51:54 np0005548731 nova_compute[232433]: 2025-12-06 07:51:54.471 232437 DEBUG oslo_concurrency.lockutils [req-ef033e24-84fc-4362-ac6f-dfc323262c24 req-28fda395-cfc6-4a94-9a29-673df52a816b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "9cc0604e-1ff7-4781-8383-c780b6f598c5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:51:54 np0005548731 nova_compute[232433]: 2025-12-06 07:51:54.472 232437 DEBUG oslo_concurrency.lockutils [req-ef033e24-84fc-4362-ac6f-dfc323262c24 req-28fda395-cfc6-4a94-9a29-673df52a816b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "9cc0604e-1ff7-4781-8383-c780b6f598c5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:51:54 np0005548731 nova_compute[232433]: 2025-12-06 07:51:54.472 232437 DEBUG oslo_concurrency.lockutils [req-ef033e24-84fc-4362-ac6f-dfc323262c24 req-28fda395-cfc6-4a94-9a29-673df52a816b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "9cc0604e-1ff7-4781-8383-c780b6f598c5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:51:54 np0005548731 nova_compute[232433]: 2025-12-06 07:51:54.472 232437 DEBUG nova.compute.manager [req-ef033e24-84fc-4362-ac6f-dfc323262c24 req-28fda395-cfc6-4a94-9a29-673df52a816b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 9cc0604e-1ff7-4781-8383-c780b6f598c5] No waiting events found dispatching network-vif-plugged-8e1136a8-e1c4-47b4-ada8-22860a8df286 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 02:51:54 np0005548731 nova_compute[232433]: 2025-12-06 07:51:54.472 232437 WARNING nova.compute.manager [req-ef033e24-84fc-4362-ac6f-dfc323262c24 req-28fda395-cfc6-4a94-9a29-673df52a816b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 9cc0604e-1ff7-4781-8383-c780b6f598c5] Received unexpected event network-vif-plugged-8e1136a8-e1c4-47b4-ada8-22860a8df286 for instance with vm_state active and task_state None.#033[00m
Dec  6 02:51:54 np0005548731 nova_compute[232433]: 2025-12-06 07:51:54.695 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:51:54 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:51:54 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:51:54 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:51:54.895 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:51:55 np0005548731 nova_compute[232433]: 2025-12-06 07:51:55.947 232437 DEBUG nova.compute.manager [req-13a6aeda-64e2-4782-86de-da1edc56cd5d req-1eda78a5-01f2-4930-8892-fae2a09b73a4 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 9cc0604e-1ff7-4781-8383-c780b6f598c5] Received event network-changed-8e1136a8-e1c4-47b4-ada8-22860a8df286 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:51:55 np0005548731 nova_compute[232433]: 2025-12-06 07:51:55.947 232437 DEBUG nova.compute.manager [req-13a6aeda-64e2-4782-86de-da1edc56cd5d req-1eda78a5-01f2-4930-8892-fae2a09b73a4 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 9cc0604e-1ff7-4781-8383-c780b6f598c5] Refreshing instance network info cache due to event network-changed-8e1136a8-e1c4-47b4-ada8-22860a8df286. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  6 02:51:55 np0005548731 nova_compute[232433]: 2025-12-06 07:51:55.947 232437 DEBUG oslo_concurrency.lockutils [req-13a6aeda-64e2-4782-86de-da1edc56cd5d req-1eda78a5-01f2-4930-8892-fae2a09b73a4 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-9cc0604e-1ff7-4781-8383-c780b6f598c5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 02:51:55 np0005548731 nova_compute[232433]: 2025-12-06 07:51:55.947 232437 DEBUG oslo_concurrency.lockutils [req-13a6aeda-64e2-4782-86de-da1edc56cd5d req-1eda78a5-01f2-4930-8892-fae2a09b73a4 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-9cc0604e-1ff7-4781-8383-c780b6f598c5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 02:51:55 np0005548731 nova_compute[232433]: 2025-12-06 07:51:55.947 232437 DEBUG nova.network.neutron [req-13a6aeda-64e2-4782-86de-da1edc56cd5d req-1eda78a5-01f2-4930-8892-fae2a09b73a4 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 9cc0604e-1ff7-4781-8383-c780b6f598c5] Refreshing network info cache for port 8e1136a8-e1c4-47b4-ada8-22860a8df286 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  6 02:51:56 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:51:56 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:51:56 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:51:56.011 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:51:56 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:51:56 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:51:56 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:51:56 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:51:56.897 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:51:57 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e385 e385: 3 total, 3 up, 3 in
Dec  6 02:51:57 np0005548731 nova_compute[232433]: 2025-12-06 07:51:57.774 232437 DEBUG nova.network.neutron [req-13a6aeda-64e2-4782-86de-da1edc56cd5d req-1eda78a5-01f2-4930-8892-fae2a09b73a4 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 9cc0604e-1ff7-4781-8383-c780b6f598c5] Updated VIF entry in instance network info cache for port 8e1136a8-e1c4-47b4-ada8-22860a8df286. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  6 02:51:57 np0005548731 nova_compute[232433]: 2025-12-06 07:51:57.775 232437 DEBUG nova.network.neutron [req-13a6aeda-64e2-4782-86de-da1edc56cd5d req-1eda78a5-01f2-4930-8892-fae2a09b73a4 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 9cc0604e-1ff7-4781-8383-c780b6f598c5] Updating instance_info_cache with network_info: [{"id": "8e1136a8-e1c4-47b4-ada8-22860a8df286", "address": "fa:16:3e:3b:5f:69", "network": {"id": "3d151181-0dfe-43ab-b47e-15b53add33a6", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-534312753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.221", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4842ecff6dce4ccc981a6b65a14ea406", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8e1136a8-e1", "ovs_interfaceid": "8e1136a8-e1c4-47b4-ada8-22860a8df286", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 02:51:57 np0005548731 nova_compute[232433]: 2025-12-06 07:51:57.811 232437 DEBUG oslo_concurrency.lockutils [req-13a6aeda-64e2-4782-86de-da1edc56cd5d req-1eda78a5-01f2-4930-8892-fae2a09b73a4 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-9cc0604e-1ff7-4781-8383-c780b6f598c5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 02:51:58 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:51:58 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:51:58 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:51:58.014 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:51:58 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:51:58 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:51:58 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:51:58.899 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:51:59 np0005548731 nova_compute[232433]: 2025-12-06 07:51:59.207 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:51:59 np0005548731 nova_compute[232433]: 2025-12-06 07:51:59.697 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:52:00 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:52:00 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:52:00 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:52:00.017 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:52:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:52:00.897 143965 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:52:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:52:00.898 143965 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:52:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:52:00.899 143965 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:52:00 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:52:00 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:52:00 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:52:00.901 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:52:01 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e385 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:52:02 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:52:02 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:52:02 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:52:02.020 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:52:02 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e386 e386: 3 total, 3 up, 3 in
Dec  6 02:52:02 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:52:02 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:52:02 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:52:02.903 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:52:04 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:52:04 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:52:04 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:52:04.023 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:52:04 np0005548731 nova_compute[232433]: 2025-12-06 07:52:04.208 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:52:04 np0005548731 nova_compute[232433]: 2025-12-06 07:52:04.403 232437 DEBUG nova.compute.manager [req-4444d92f-b628-4eb7-b294-045909f6a4fb req-79c769e0-a17c-4496-8026-c7507b8a8463 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: fa71018e-7574-4438-bb85-43d1c96cf9b9] Received event network-changed-341d07a0-1551-46f6-85b6-aace80d14532 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:52:04 np0005548731 nova_compute[232433]: 2025-12-06 07:52:04.404 232437 DEBUG nova.compute.manager [req-4444d92f-b628-4eb7-b294-045909f6a4fb req-79c769e0-a17c-4496-8026-c7507b8a8463 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: fa71018e-7574-4438-bb85-43d1c96cf9b9] Refreshing instance network info cache due to event network-changed-341d07a0-1551-46f6-85b6-aace80d14532. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  6 02:52:04 np0005548731 nova_compute[232433]: 2025-12-06 07:52:04.404 232437 DEBUG oslo_concurrency.lockutils [req-4444d92f-b628-4eb7-b294-045909f6a4fb req-79c769e0-a17c-4496-8026-c7507b8a8463 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-fa71018e-7574-4438-bb85-43d1c96cf9b9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 02:52:04 np0005548731 nova_compute[232433]: 2025-12-06 07:52:04.405 232437 DEBUG oslo_concurrency.lockutils [req-4444d92f-b628-4eb7-b294-045909f6a4fb req-79c769e0-a17c-4496-8026-c7507b8a8463 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-fa71018e-7574-4438-bb85-43d1c96cf9b9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 02:52:04 np0005548731 nova_compute[232433]: 2025-12-06 07:52:04.405 232437 DEBUG nova.network.neutron [req-4444d92f-b628-4eb7-b294-045909f6a4fb req-79c769e0-a17c-4496-8026-c7507b8a8463 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: fa71018e-7574-4438-bb85-43d1c96cf9b9] Refreshing network info cache for port 341d07a0-1551-46f6-85b6-aace80d14532 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  6 02:52:04 np0005548731 nova_compute[232433]: 2025-12-06 07:52:04.749 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:52:04 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:52:04 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:52:04 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:52:04.905 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:52:06 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:52:06 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:52:06 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:52:06.027 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:52:06 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e386 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:52:06 np0005548731 nova_compute[232433]: 2025-12-06 07:52:06.723 232437 DEBUG nova.network.neutron [req-4444d92f-b628-4eb7-b294-045909f6a4fb req-79c769e0-a17c-4496-8026-c7507b8a8463 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: fa71018e-7574-4438-bb85-43d1c96cf9b9] Updated VIF entry in instance network info cache for port 341d07a0-1551-46f6-85b6-aace80d14532. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  6 02:52:06 np0005548731 nova_compute[232433]: 2025-12-06 07:52:06.725 232437 DEBUG nova.network.neutron [req-4444d92f-b628-4eb7-b294-045909f6a4fb req-79c769e0-a17c-4496-8026-c7507b8a8463 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: fa71018e-7574-4438-bb85-43d1c96cf9b9] Updating instance_info_cache with network_info: [{"id": "341d07a0-1551-46f6-85b6-aace80d14532", "address": "fa:16:3e:71:74:18", "network": {"id": "f08afae8-f952-4a01-a643-61a4dc212937", "bridge": "br-int", "label": "tempest-TestMinimumBasicScenario-655838283-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0c8fc5bc237e42bfad505a0bca6681eb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap341d07a0-15", "ovs_interfaceid": "341d07a0-1551-46f6-85b6-aace80d14532", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 02:52:06 np0005548731 nova_compute[232433]: 2025-12-06 07:52:06.751 232437 DEBUG oslo_concurrency.lockutils [req-4444d92f-b628-4eb7-b294-045909f6a4fb req-79c769e0-a17c-4496-8026-c7507b8a8463 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-fa71018e-7574-4438-bb85-43d1c96cf9b9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 02:52:06 np0005548731 ovn_controller[133927]: 2025-12-06T07:52:06Z|00110|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:3b:5f:69 10.100.0.12
Dec  6 02:52:06 np0005548731 ovn_controller[133927]: 2025-12-06T07:52:06Z|00111|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:3b:5f:69 10.100.0.12
Dec  6 02:52:06 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:52:06 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:52:06 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:52:06.908 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:52:08 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:52:08 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:52:08 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:52:08.030 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:52:08 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:52:08 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:52:08 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:52:08.910 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:52:09 np0005548731 nova_compute[232433]: 2025-12-06 07:52:09.210 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:52:09 np0005548731 nova_compute[232433]: 2025-12-06 07:52:09.816 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:52:10 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:52:10 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 02:52:10 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:52:10.032 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 02:52:10 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:52:10 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:52:10 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:52:10.912 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:52:11 np0005548731 nova_compute[232433]: 2025-12-06 07:52:11.075 232437 DEBUG nova.compute.manager [req-a4e192dc-5685-4fdf-b352-83d10aa17981 req-1dd634c4-f58b-4b79-af99-8b686c77c51c 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: fa71018e-7574-4438-bb85-43d1c96cf9b9] Received event network-changed-341d07a0-1551-46f6-85b6-aace80d14532 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:52:11 np0005548731 nova_compute[232433]: 2025-12-06 07:52:11.075 232437 DEBUG nova.compute.manager [req-a4e192dc-5685-4fdf-b352-83d10aa17981 req-1dd634c4-f58b-4b79-af99-8b686c77c51c 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: fa71018e-7574-4438-bb85-43d1c96cf9b9] Refreshing instance network info cache due to event network-changed-341d07a0-1551-46f6-85b6-aace80d14532. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  6 02:52:11 np0005548731 nova_compute[232433]: 2025-12-06 07:52:11.075 232437 DEBUG oslo_concurrency.lockutils [req-a4e192dc-5685-4fdf-b352-83d10aa17981 req-1dd634c4-f58b-4b79-af99-8b686c77c51c 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-fa71018e-7574-4438-bb85-43d1c96cf9b9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 02:52:11 np0005548731 nova_compute[232433]: 2025-12-06 07:52:11.075 232437 DEBUG oslo_concurrency.lockutils [req-a4e192dc-5685-4fdf-b352-83d10aa17981 req-1dd634c4-f58b-4b79-af99-8b686c77c51c 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-fa71018e-7574-4438-bb85-43d1c96cf9b9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 02:52:11 np0005548731 nova_compute[232433]: 2025-12-06 07:52:11.076 232437 DEBUG nova.network.neutron [req-a4e192dc-5685-4fdf-b352-83d10aa17981 req-1dd634c4-f58b-4b79-af99-8b686c77c51c 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: fa71018e-7574-4438-bb85-43d1c96cf9b9] Refreshing network info cache for port 341d07a0-1551-46f6-85b6-aace80d14532 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  6 02:52:11 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e386 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:52:12 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:52:12 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:52:12 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:52:12.034 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:52:12 np0005548731 nova_compute[232433]: 2025-12-06 07:52:12.596 232437 DEBUG nova.network.neutron [req-a4e192dc-5685-4fdf-b352-83d10aa17981 req-1dd634c4-f58b-4b79-af99-8b686c77c51c 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: fa71018e-7574-4438-bb85-43d1c96cf9b9] Updated VIF entry in instance network info cache for port 341d07a0-1551-46f6-85b6-aace80d14532. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  6 02:52:12 np0005548731 nova_compute[232433]: 2025-12-06 07:52:12.597 232437 DEBUG nova.network.neutron [req-a4e192dc-5685-4fdf-b352-83d10aa17981 req-1dd634c4-f58b-4b79-af99-8b686c77c51c 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: fa71018e-7574-4438-bb85-43d1c96cf9b9] Updating instance_info_cache with network_info: [{"id": "341d07a0-1551-46f6-85b6-aace80d14532", "address": "fa:16:3e:71:74:18", "network": {"id": "f08afae8-f952-4a01-a643-61a4dc212937", "bridge": "br-int", "label": "tempest-TestMinimumBasicScenario-655838283-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0c8fc5bc237e42bfad505a0bca6681eb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap341d07a0-15", "ovs_interfaceid": "341d07a0-1551-46f6-85b6-aace80d14532", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 02:52:12 np0005548731 nova_compute[232433]: 2025-12-06 07:52:12.614 232437 DEBUG oslo_concurrency.lockutils [req-a4e192dc-5685-4fdf-b352-83d10aa17981 req-1dd634c4-f58b-4b79-af99-8b686c77c51c 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-fa71018e-7574-4438-bb85-43d1c96cf9b9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 02:52:12 np0005548731 nova_compute[232433]: 2025-12-06 07:52:12.726 232437 DEBUG oslo_concurrency.lockutils [None req-78020185-7c72-4ad7-a320-426d70e6b689 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] Acquiring lock "fa71018e-7574-4438-bb85-43d1c96cf9b9" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:52:12 np0005548731 nova_compute[232433]: 2025-12-06 07:52:12.727 232437 DEBUG oslo_concurrency.lockutils [None req-78020185-7c72-4ad7-a320-426d70e6b689 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] Lock "fa71018e-7574-4438-bb85-43d1c96cf9b9" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:52:12 np0005548731 nova_compute[232433]: 2025-12-06 07:52:12.743 232437 INFO nova.compute.manager [None req-78020185-7c72-4ad7-a320-426d70e6b689 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] [instance: fa71018e-7574-4438-bb85-43d1c96cf9b9] Detaching volume 1c4642f5-e00c-416f-9c41-e5aa7293d85b#033[00m
Dec  6 02:52:12 np0005548731 nova_compute[232433]: 2025-12-06 07:52:12.892 232437 INFO nova.virt.block_device [None req-78020185-7c72-4ad7-a320-426d70e6b689 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] [instance: fa71018e-7574-4438-bb85-43d1c96cf9b9] Attempting to driver detach volume 1c4642f5-e00c-416f-9c41-e5aa7293d85b from mountpoint /dev/vdb#033[00m
Dec  6 02:52:12 np0005548731 nova_compute[232433]: 2025-12-06 07:52:12.901 232437 DEBUG nova.virt.libvirt.driver [None req-78020185-7c72-4ad7-a320-426d70e6b689 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] Attempting to detach device vdb from instance fa71018e-7574-4438-bb85-43d1c96cf9b9 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487#033[00m
Dec  6 02:52:12 np0005548731 nova_compute[232433]: 2025-12-06 07:52:12.902 232437 DEBUG nova.virt.libvirt.guest [None req-78020185-7c72-4ad7-a320-426d70e6b689 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] detach device xml: <disk type="network" device="disk">
Dec  6 02:52:12 np0005548731 nova_compute[232433]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Dec  6 02:52:12 np0005548731 nova_compute[232433]:  <source protocol="rbd" name="volumes/volume-1c4642f5-e00c-416f-9c41-e5aa7293d85b">
Dec  6 02:52:12 np0005548731 nova_compute[232433]:    <host name="192.168.122.100" port="6789"/>
Dec  6 02:52:12 np0005548731 nova_compute[232433]:    <host name="192.168.122.102" port="6789"/>
Dec  6 02:52:12 np0005548731 nova_compute[232433]:    <host name="192.168.122.101" port="6789"/>
Dec  6 02:52:12 np0005548731 nova_compute[232433]:  </source>
Dec  6 02:52:12 np0005548731 nova_compute[232433]:  <target dev="vdb" bus="virtio"/>
Dec  6 02:52:12 np0005548731 nova_compute[232433]:  <serial>1c4642f5-e00c-416f-9c41-e5aa7293d85b</serial>
Dec  6 02:52:12 np0005548731 nova_compute[232433]:  <address type="pci" domain="0x0000" bus="0x04" slot="0x00" function="0x0"/>
Dec  6 02:52:12 np0005548731 nova_compute[232433]: </disk>
Dec  6 02:52:12 np0005548731 nova_compute[232433]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Dec  6 02:52:12 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:52:12 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:52:12 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:52:12.915 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:52:13 np0005548731 nova_compute[232433]: 2025-12-06 07:52:13.144 232437 INFO nova.virt.libvirt.driver [None req-78020185-7c72-4ad7-a320-426d70e6b689 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] Successfully detached device vdb from instance fa71018e-7574-4438-bb85-43d1c96cf9b9 from the persistent domain config.#033[00m
Dec  6 02:52:13 np0005548731 nova_compute[232433]: 2025-12-06 07:52:13.145 232437 DEBUG nova.virt.libvirt.driver [None req-78020185-7c72-4ad7-a320-426d70e6b689 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance fa71018e-7574-4438-bb85-43d1c96cf9b9 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523#033[00m
Dec  6 02:52:13 np0005548731 nova_compute[232433]: 2025-12-06 07:52:13.145 232437 DEBUG nova.virt.libvirt.guest [None req-78020185-7c72-4ad7-a320-426d70e6b689 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] detach device xml: <disk type="network" device="disk">
Dec  6 02:52:13 np0005548731 nova_compute[232433]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Dec  6 02:52:13 np0005548731 nova_compute[232433]:  <source protocol="rbd" name="volumes/volume-1c4642f5-e00c-416f-9c41-e5aa7293d85b">
Dec  6 02:52:13 np0005548731 nova_compute[232433]:    <host name="192.168.122.100" port="6789"/>
Dec  6 02:52:13 np0005548731 nova_compute[232433]:    <host name="192.168.122.102" port="6789"/>
Dec  6 02:52:13 np0005548731 nova_compute[232433]:    <host name="192.168.122.101" port="6789"/>
Dec  6 02:52:13 np0005548731 nova_compute[232433]:  </source>
Dec  6 02:52:13 np0005548731 nova_compute[232433]:  <target dev="vdb" bus="virtio"/>
Dec  6 02:52:13 np0005548731 nova_compute[232433]:  <serial>1c4642f5-e00c-416f-9c41-e5aa7293d85b</serial>
Dec  6 02:52:13 np0005548731 nova_compute[232433]:  <address type="pci" domain="0x0000" bus="0x04" slot="0x00" function="0x0"/>
Dec  6 02:52:13 np0005548731 nova_compute[232433]: </disk>
Dec  6 02:52:13 np0005548731 nova_compute[232433]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Dec  6 02:52:13 np0005548731 nova_compute[232433]: 2025-12-06 07:52:13.199 232437 DEBUG nova.virt.libvirt.driver [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Received event <DeviceRemovedEvent: 1765007533.1989377, fa71018e-7574-4438-bb85-43d1c96cf9b9 => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370#033[00m
Dec  6 02:52:13 np0005548731 nova_compute[232433]: 2025-12-06 07:52:13.200 232437 DEBUG nova.virt.libvirt.driver [None req-78020185-7c72-4ad7-a320-426d70e6b689 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance fa71018e-7574-4438-bb85-43d1c96cf9b9 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599#033[00m
Dec  6 02:52:13 np0005548731 nova_compute[232433]: 2025-12-06 07:52:13.202 232437 INFO nova.virt.libvirt.driver [None req-78020185-7c72-4ad7-a320-426d70e6b689 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] Successfully detached device vdb from instance fa71018e-7574-4438-bb85-43d1c96cf9b9 from the live domain config.#033[00m
Dec  6 02:52:13 np0005548731 nova_compute[232433]: 2025-12-06 07:52:13.329 232437 DEBUG nova.objects.instance [None req-78020185-7c72-4ad7-a320-426d70e6b689 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] Lazy-loading 'flavor' on Instance uuid fa71018e-7574-4438-bb85-43d1c96cf9b9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:52:13 np0005548731 nova_compute[232433]: 2025-12-06 07:52:13.370 232437 DEBUG oslo_concurrency.lockutils [None req-78020185-7c72-4ad7-a320-426d70e6b689 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] Lock "fa71018e-7574-4438-bb85-43d1c96cf9b9" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 0.643s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:52:14 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:52:14 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:52:14 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:52:14.038 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:52:14 np0005548731 nova_compute[232433]: 2025-12-06 07:52:14.212 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:52:14 np0005548731 nova_compute[232433]: 2025-12-06 07:52:14.818 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:52:14 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:52:14 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:52:14 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:52:14.917 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:52:16 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:52:16 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:52:16 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:52:16.041 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:52:16 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e386 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:52:16 np0005548731 nova_compute[232433]: 2025-12-06 07:52:16.740 232437 DEBUG oslo_concurrency.lockutils [None req-e87a7a85-f018-4952-9b20-697110a2cc7e 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] Acquiring lock "fa71018e-7574-4438-bb85-43d1c96cf9b9" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:52:16 np0005548731 nova_compute[232433]: 2025-12-06 07:52:16.741 232437 DEBUG oslo_concurrency.lockutils [None req-e87a7a85-f018-4952-9b20-697110a2cc7e 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] Lock "fa71018e-7574-4438-bb85-43d1c96cf9b9" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:52:16 np0005548731 nova_compute[232433]: 2025-12-06 07:52:16.741 232437 DEBUG oslo_concurrency.lockutils [None req-e87a7a85-f018-4952-9b20-697110a2cc7e 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] Acquiring lock "fa71018e-7574-4438-bb85-43d1c96cf9b9-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:52:16 np0005548731 nova_compute[232433]: 2025-12-06 07:52:16.741 232437 DEBUG oslo_concurrency.lockutils [None req-e87a7a85-f018-4952-9b20-697110a2cc7e 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] Lock "fa71018e-7574-4438-bb85-43d1c96cf9b9-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:52:16 np0005548731 nova_compute[232433]: 2025-12-06 07:52:16.741 232437 DEBUG oslo_concurrency.lockutils [None req-e87a7a85-f018-4952-9b20-697110a2cc7e 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] Lock "fa71018e-7574-4438-bb85-43d1c96cf9b9-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:52:16 np0005548731 nova_compute[232433]: 2025-12-06 07:52:16.742 232437 INFO nova.compute.manager [None req-e87a7a85-f018-4952-9b20-697110a2cc7e 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] [instance: fa71018e-7574-4438-bb85-43d1c96cf9b9] Terminating instance#033[00m
Dec  6 02:52:16 np0005548731 nova_compute[232433]: 2025-12-06 07:52:16.743 232437 DEBUG nova.compute.manager [None req-e87a7a85-f018-4952-9b20-697110a2cc7e 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] [instance: fa71018e-7574-4438-bb85-43d1c96cf9b9] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec  6 02:52:16 np0005548731 kernel: tap341d07a0-15 (unregistering): left promiscuous mode
Dec  6 02:52:16 np0005548731 NetworkManager[49182]: <info>  [1765007536.7970] device (tap341d07a0-15): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec  6 02:52:16 np0005548731 ovn_controller[133927]: 2025-12-06T07:52:16Z|00830|binding|INFO|Releasing lport 341d07a0-1551-46f6-85b6-aace80d14532 from this chassis (sb_readonly=0)
Dec  6 02:52:16 np0005548731 ovn_controller[133927]: 2025-12-06T07:52:16Z|00831|binding|INFO|Setting lport 341d07a0-1551-46f6-85b6-aace80d14532 down in Southbound
Dec  6 02:52:16 np0005548731 nova_compute[232433]: 2025-12-06 07:52:16.805 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:52:16 np0005548731 ovn_controller[133927]: 2025-12-06T07:52:16Z|00832|binding|INFO|Removing iface tap341d07a0-15 ovn-installed in OVS
Dec  6 02:52:16 np0005548731 nova_compute[232433]: 2025-12-06 07:52:16.807 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:52:16 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:52:16.813 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:71:74:18 10.100.0.13'], port_security=['fa:16:3e:71:74:18 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'fa71018e-7574-4438-bb85-43d1c96cf9b9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f08afae8-f952-4a01-a643-61a4dc212937', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0c8fc5bc237e42bfad505a0bca6681eb', 'neutron:revision_number': '8', 'neutron:security_group_ids': '97e2366f-5968-4702-838b-5830aacf120f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e9bb6731-402a-4b02-b4cf-9ac913838fd2, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], logical_port=341d07a0-1551-46f6-85b6-aace80d14532) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 02:52:16 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:52:16.815 143965 INFO neutron.agent.ovn.metadata.agent [-] Port 341d07a0-1551-46f6-85b6-aace80d14532 in datapath f08afae8-f952-4a01-a643-61a4dc212937 unbound from our chassis#033[00m
Dec  6 02:52:16 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:52:16.816 143965 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network f08afae8-f952-4a01-a643-61a4dc212937, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec  6 02:52:16 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:52:16.818 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[bd9ddf36-f0af-47fa-a342-f1b32a4922c3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:52:16 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:52:16.819 143965 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-f08afae8-f952-4a01-a643-61a4dc212937 namespace which is not needed anymore#033[00m
Dec  6 02:52:16 np0005548731 nova_compute[232433]: 2025-12-06 07:52:16.822 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:52:16 np0005548731 systemd[1]: machine-qemu\x2d83\x2dinstance\x2d000000a6.scope: Deactivated successfully.
Dec  6 02:52:16 np0005548731 systemd[1]: machine-qemu\x2d83\x2dinstance\x2d000000a6.scope: Consumed 16.066s CPU time.
Dec  6 02:52:16 np0005548731 systemd-machined[195355]: Machine qemu-83-instance-000000a6 terminated.
Dec  6 02:52:16 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:52:16 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:52:16 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:52:16.919 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:52:16 np0005548731 neutron-haproxy-ovnmeta-f08afae8-f952-4a01-a643-61a4dc212937[309619]: [NOTICE]   (309627) : haproxy version is 2.8.14-c23fe91
Dec  6 02:52:16 np0005548731 neutron-haproxy-ovnmeta-f08afae8-f952-4a01-a643-61a4dc212937[309619]: [NOTICE]   (309627) : path to executable is /usr/sbin/haproxy
Dec  6 02:52:16 np0005548731 neutron-haproxy-ovnmeta-f08afae8-f952-4a01-a643-61a4dc212937[309619]: [WARNING]  (309627) : Exiting Master process...
Dec  6 02:52:16 np0005548731 neutron-haproxy-ovnmeta-f08afae8-f952-4a01-a643-61a4dc212937[309619]: [WARNING]  (309627) : Exiting Master process...
Dec  6 02:52:16 np0005548731 neutron-haproxy-ovnmeta-f08afae8-f952-4a01-a643-61a4dc212937[309619]: [ALERT]    (309627) : Current worker (309629) exited with code 143 (Terminated)
Dec  6 02:52:16 np0005548731 neutron-haproxy-ovnmeta-f08afae8-f952-4a01-a643-61a4dc212937[309619]: [WARNING]  (309627) : All workers exited. Exiting... (0)
Dec  6 02:52:16 np0005548731 systemd[1]: libpod-9c29eef5ac0815812b262be9c10a7b68cbd0bf1529d8288781ac85822ee9c40e.scope: Deactivated successfully.
Dec  6 02:52:16 np0005548731 podman[310622]: 2025-12-06 07:52:16.957929662 +0000 UTC m=+0.045999147 container died 9c29eef5ac0815812b262be9c10a7b68cbd0bf1529d8288781ac85822ee9c40e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f08afae8-f952-4a01-a643-61a4dc212937, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3)
Dec  6 02:52:16 np0005548731 nova_compute[232433]: 2025-12-06 07:52:16.976 232437 INFO nova.virt.libvirt.driver [-] [instance: fa71018e-7574-4438-bb85-43d1c96cf9b9] Instance destroyed successfully.#033[00m
Dec  6 02:52:16 np0005548731 nova_compute[232433]: 2025-12-06 07:52:16.977 232437 DEBUG nova.objects.instance [None req-e87a7a85-f018-4952-9b20-697110a2cc7e 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] Lazy-loading 'resources' on Instance uuid fa71018e-7574-4438-bb85-43d1c96cf9b9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:52:16 np0005548731 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-9c29eef5ac0815812b262be9c10a7b68cbd0bf1529d8288781ac85822ee9c40e-userdata-shm.mount: Deactivated successfully.
Dec  6 02:52:16 np0005548731 systemd[1]: var-lib-containers-storage-overlay-489316ef5604d5410596aac10c3f0e78c66c207a3ad85f7bbe847e539f967d22-merged.mount: Deactivated successfully.
Dec  6 02:52:16 np0005548731 nova_compute[232433]: 2025-12-06 07:52:16.992 232437 DEBUG nova.virt.libvirt.vif [None req-e87a7a85-f018-4952-9b20-697110a2cc7e 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T07:50:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestMinimumBasicScenario-server-235586357',display_name='tempest-TestMinimumBasicScenario-server-235586357',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-testminimumbasicscenario-server-235586357',id=166,image_ref='6916f635-11b7-4158-ab13-60ff56406973',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKJFOYwUkK/AU5j/16C3dY42Bin5X9do17czkQVWr96Bc5yu/WMHsTfBb0AVGwIJHTi79KNz3aKY1rNv6m7H2M+tAVea1sAVgXE0sQ5V7zYqD0Y4j4BUibNqoce1/1RtaQ==',key_name='tempest-TestMinimumBasicScenario-1468814315',keypairs=<?>,launch_index=0,launched_at=2025-12-06T07:50:28Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='0c8fc5bc237e42bfad505a0bca6681eb',ramdisk_id='',reservation_id='r-fawmn8ln',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6916f635-11b7-4158-ab13-60ff56406973',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestMinimumBasicScenario-1310413980',owner_user_name='tempest-TestMinimumBasicScenario-1310413980-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T07:51:28Z,user_data=None,user_id='0f669e963dc54ad7bebf8dd20341428a',uuid=fa71018e-7574-4438-bb85-43d1c96cf9b9,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "341d07a0-1551-46f6-85b6-aace80d14532", "address": "fa:16:3e:71:74:18", "network": {"id": "f08afae8-f952-4a01-a643-61a4dc212937", "bridge": "br-int", "label": "tempest-TestMinimumBasicScenario-655838283-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0c8fc5bc237e42bfad505a0bca6681eb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap341d07a0-15", "ovs_interfaceid": "341d07a0-1551-46f6-85b6-aace80d14532", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec  6 02:52:16 np0005548731 nova_compute[232433]: 2025-12-06 07:52:16.993 232437 DEBUG nova.network.os_vif_util [None req-e87a7a85-f018-4952-9b20-697110a2cc7e 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] Converting VIF {"id": "341d07a0-1551-46f6-85b6-aace80d14532", "address": "fa:16:3e:71:74:18", "network": {"id": "f08afae8-f952-4a01-a643-61a4dc212937", "bridge": "br-int", "label": "tempest-TestMinimumBasicScenario-655838283-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0c8fc5bc237e42bfad505a0bca6681eb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap341d07a0-15", "ovs_interfaceid": "341d07a0-1551-46f6-85b6-aace80d14532", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 02:52:16 np0005548731 nova_compute[232433]: 2025-12-06 07:52:16.994 232437 DEBUG nova.network.os_vif_util [None req-e87a7a85-f018-4952-9b20-697110a2cc7e 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:71:74:18,bridge_name='br-int',has_traffic_filtering=True,id=341d07a0-1551-46f6-85b6-aace80d14532,network=Network(f08afae8-f952-4a01-a643-61a4dc212937),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap341d07a0-15') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 02:52:16 np0005548731 nova_compute[232433]: 2025-12-06 07:52:16.995 232437 DEBUG os_vif [None req-e87a7a85-f018-4952-9b20-697110a2cc7e 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:71:74:18,bridge_name='br-int',has_traffic_filtering=True,id=341d07a0-1551-46f6-85b6-aace80d14532,network=Network(f08afae8-f952-4a01-a643-61a4dc212937),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap341d07a0-15') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec  6 02:52:16 np0005548731 nova_compute[232433]: 2025-12-06 07:52:16.996 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:52:16 np0005548731 nova_compute[232433]: 2025-12-06 07:52:16.997 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap341d07a0-15, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:52:16 np0005548731 nova_compute[232433]: 2025-12-06 07:52:16.998 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:52:16 np0005548731 podman[310622]: 2025-12-06 07:52:16.999688573 +0000 UTC m=+0.087758068 container cleanup 9c29eef5ac0815812b262be9c10a7b68cbd0bf1529d8288781ac85822ee9c40e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f08afae8-f952-4a01-a643-61a4dc212937, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS)
Dec  6 02:52:17 np0005548731 nova_compute[232433]: 2025-12-06 07:52:17.000 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:52:17 np0005548731 nova_compute[232433]: 2025-12-06 07:52:17.002 232437 INFO os_vif [None req-e87a7a85-f018-4952-9b20-697110a2cc7e 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:71:74:18,bridge_name='br-int',has_traffic_filtering=True,id=341d07a0-1551-46f6-85b6-aace80d14532,network=Network(f08afae8-f952-4a01-a643-61a4dc212937),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap341d07a0-15')#033[00m
Dec  6 02:52:17 np0005548731 systemd[1]: libpod-conmon-9c29eef5ac0815812b262be9c10a7b68cbd0bf1529d8288781ac85822ee9c40e.scope: Deactivated successfully.
Dec  6 02:52:17 np0005548731 podman[310663]: 2025-12-06 07:52:17.066444736 +0000 UTC m=+0.042294676 container remove 9c29eef5ac0815812b262be9c10a7b68cbd0bf1529d8288781ac85822ee9c40e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f08afae8-f952-4a01-a643-61a4dc212937, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec  6 02:52:17 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:52:17.072 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[49b77b36-ca2f-4f2f-aa0a-bedf5b967816]: (4, ('Sat Dec  6 07:52:16 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-f08afae8-f952-4a01-a643-61a4dc212937 (9c29eef5ac0815812b262be9c10a7b68cbd0bf1529d8288781ac85822ee9c40e)\n9c29eef5ac0815812b262be9c10a7b68cbd0bf1529d8288781ac85822ee9c40e\nSat Dec  6 07:52:17 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-f08afae8-f952-4a01-a643-61a4dc212937 (9c29eef5ac0815812b262be9c10a7b68cbd0bf1529d8288781ac85822ee9c40e)\n9c29eef5ac0815812b262be9c10a7b68cbd0bf1529d8288781ac85822ee9c40e\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:52:17 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:52:17.074 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[7f722a12-ef8e-4626-8f8f-a12af2dc7779]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:52:17 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:52:17.075 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf08afae8-f0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:52:17 np0005548731 nova_compute[232433]: 2025-12-06 07:52:17.077 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:52:17 np0005548731 kernel: tapf08afae8-f0: left promiscuous mode
Dec  6 02:52:17 np0005548731 nova_compute[232433]: 2025-12-06 07:52:17.092 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:52:17 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:52:17.094 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[bcbf11ac-128a-45a5-b653-e217984fd641]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:52:17 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:52:17.107 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[b7784bc0-5d8a-45a7-8e12-3897292dffbe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:52:17 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:52:17.108 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[e04e7b9d-289c-4bb3-afa0-6d1991a415fa]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:52:17 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:52:17.123 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[a5cdae8c-5f40-4f9f-adce-96bf652a9c94]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 774988, 'reachable_time': 29566, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 310695, 'error': None, 'target': 'ovnmeta-f08afae8-f952-4a01-a643-61a4dc212937', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:52:17 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:52:17.126 144079 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-f08afae8-f952-4a01-a643-61a4dc212937 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Dec  6 02:52:17 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:52:17.126 144079 DEBUG oslo.privsep.daemon [-] privsep: reply[ab283b0f-014f-4273-a9e1-c11be51e8677]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:52:17 np0005548731 systemd[1]: run-netns-ovnmeta\x2df08afae8\x2df952\x2d4a01\x2da643\x2d61a4dc212937.mount: Deactivated successfully.
Dec  6 02:52:17 np0005548731 nova_compute[232433]: 2025-12-06 07:52:17.847 232437 DEBUG nova.compute.manager [req-10fe45ab-9d45-4541-8111-b37e15d63385 req-b844fd60-269c-4eb6-b4e9-73582425014f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: fa71018e-7574-4438-bb85-43d1c96cf9b9] Received event network-vif-unplugged-341d07a0-1551-46f6-85b6-aace80d14532 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:52:17 np0005548731 nova_compute[232433]: 2025-12-06 07:52:17.847 232437 DEBUG oslo_concurrency.lockutils [req-10fe45ab-9d45-4541-8111-b37e15d63385 req-b844fd60-269c-4eb6-b4e9-73582425014f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "fa71018e-7574-4438-bb85-43d1c96cf9b9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:52:17 np0005548731 nova_compute[232433]: 2025-12-06 07:52:17.847 232437 DEBUG oslo_concurrency.lockutils [req-10fe45ab-9d45-4541-8111-b37e15d63385 req-b844fd60-269c-4eb6-b4e9-73582425014f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "fa71018e-7574-4438-bb85-43d1c96cf9b9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:52:17 np0005548731 nova_compute[232433]: 2025-12-06 07:52:17.848 232437 DEBUG oslo_concurrency.lockutils [req-10fe45ab-9d45-4541-8111-b37e15d63385 req-b844fd60-269c-4eb6-b4e9-73582425014f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "fa71018e-7574-4438-bb85-43d1c96cf9b9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:52:17 np0005548731 nova_compute[232433]: 2025-12-06 07:52:17.848 232437 DEBUG nova.compute.manager [req-10fe45ab-9d45-4541-8111-b37e15d63385 req-b844fd60-269c-4eb6-b4e9-73582425014f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: fa71018e-7574-4438-bb85-43d1c96cf9b9] No waiting events found dispatching network-vif-unplugged-341d07a0-1551-46f6-85b6-aace80d14532 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 02:52:17 np0005548731 nova_compute[232433]: 2025-12-06 07:52:17.848 232437 DEBUG nova.compute.manager [req-10fe45ab-9d45-4541-8111-b37e15d63385 req-b844fd60-269c-4eb6-b4e9-73582425014f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: fa71018e-7574-4438-bb85-43d1c96cf9b9] Received event network-vif-unplugged-341d07a0-1551-46f6-85b6-aace80d14532 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Dec  6 02:52:18 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:52:18 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:52:18 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:52:18.043 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:52:18 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:52:18 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:52:18 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:52:18.921 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:52:19 np0005548731 nova_compute[232433]: 2025-12-06 07:52:19.263 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:52:19 np0005548731 nova_compute[232433]: 2025-12-06 07:52:19.918 232437 DEBUG nova.compute.manager [req-dbfdea48-e9ee-4524-8a4e-619e058653be req-37e2f9af-188e-4774-92da-7a188eb4982f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: fa71018e-7574-4438-bb85-43d1c96cf9b9] Received event network-vif-plugged-341d07a0-1551-46f6-85b6-aace80d14532 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:52:19 np0005548731 nova_compute[232433]: 2025-12-06 07:52:19.919 232437 DEBUG oslo_concurrency.lockutils [req-dbfdea48-e9ee-4524-8a4e-619e058653be req-37e2f9af-188e-4774-92da-7a188eb4982f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "fa71018e-7574-4438-bb85-43d1c96cf9b9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:52:19 np0005548731 nova_compute[232433]: 2025-12-06 07:52:19.919 232437 DEBUG oslo_concurrency.lockutils [req-dbfdea48-e9ee-4524-8a4e-619e058653be req-37e2f9af-188e-4774-92da-7a188eb4982f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "fa71018e-7574-4438-bb85-43d1c96cf9b9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:52:19 np0005548731 nova_compute[232433]: 2025-12-06 07:52:19.919 232437 DEBUG oslo_concurrency.lockutils [req-dbfdea48-e9ee-4524-8a4e-619e058653be req-37e2f9af-188e-4774-92da-7a188eb4982f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "fa71018e-7574-4438-bb85-43d1c96cf9b9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:52:19 np0005548731 nova_compute[232433]: 2025-12-06 07:52:19.920 232437 DEBUG nova.compute.manager [req-dbfdea48-e9ee-4524-8a4e-619e058653be req-37e2f9af-188e-4774-92da-7a188eb4982f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: fa71018e-7574-4438-bb85-43d1c96cf9b9] No waiting events found dispatching network-vif-plugged-341d07a0-1551-46f6-85b6-aace80d14532 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 02:52:19 np0005548731 nova_compute[232433]: 2025-12-06 07:52:19.920 232437 WARNING nova.compute.manager [req-dbfdea48-e9ee-4524-8a4e-619e058653be req-37e2f9af-188e-4774-92da-7a188eb4982f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: fa71018e-7574-4438-bb85-43d1c96cf9b9] Received unexpected event network-vif-plugged-341d07a0-1551-46f6-85b6-aace80d14532 for instance with vm_state active and task_state deleting.#033[00m
Dec  6 02:52:20 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:52:20 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:52:20 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:52:20.045 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:52:20 np0005548731 nova_compute[232433]: 2025-12-06 07:52:20.518 232437 INFO nova.virt.libvirt.driver [None req-e87a7a85-f018-4952-9b20-697110a2cc7e 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] [instance: fa71018e-7574-4438-bb85-43d1c96cf9b9] Deleting instance files /var/lib/nova/instances/fa71018e-7574-4438-bb85-43d1c96cf9b9_del#033[00m
Dec  6 02:52:20 np0005548731 nova_compute[232433]: 2025-12-06 07:52:20.518 232437 INFO nova.virt.libvirt.driver [None req-e87a7a85-f018-4952-9b20-697110a2cc7e 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] [instance: fa71018e-7574-4438-bb85-43d1c96cf9b9] Deletion of /var/lib/nova/instances/fa71018e-7574-4438-bb85-43d1c96cf9b9_del complete#033[00m
Dec  6 02:52:20 np0005548731 nova_compute[232433]: 2025-12-06 07:52:20.580 232437 INFO nova.compute.manager [None req-e87a7a85-f018-4952-9b20-697110a2cc7e 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] [instance: fa71018e-7574-4438-bb85-43d1c96cf9b9] Took 3.84 seconds to destroy the instance on the hypervisor.#033[00m
Dec  6 02:52:20 np0005548731 nova_compute[232433]: 2025-12-06 07:52:20.580 232437 DEBUG oslo.service.loopingcall [None req-e87a7a85-f018-4952-9b20-697110a2cc7e 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Dec  6 02:52:20 np0005548731 nova_compute[232433]: 2025-12-06 07:52:20.581 232437 DEBUG nova.compute.manager [-] [instance: fa71018e-7574-4438-bb85-43d1c96cf9b9] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Dec  6 02:52:20 np0005548731 nova_compute[232433]: 2025-12-06 07:52:20.581 232437 DEBUG nova.network.neutron [-] [instance: fa71018e-7574-4438-bb85-43d1c96cf9b9] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Dec  6 02:52:20 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:52:20 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:52:20 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:52:20.923 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:52:21 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 02:52:21 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 02:52:21 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e386 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:52:21 np0005548731 nova_compute[232433]: 2025-12-06 07:52:21.920 232437 DEBUG nova.network.neutron [-] [instance: fa71018e-7574-4438-bb85-43d1c96cf9b9] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 02:52:21 np0005548731 nova_compute[232433]: 2025-12-06 07:52:21.957 232437 INFO nova.compute.manager [-] [instance: fa71018e-7574-4438-bb85-43d1c96cf9b9] Took 1.38 seconds to deallocate network for instance.#033[00m
Dec  6 02:52:21 np0005548731 nova_compute[232433]: 2025-12-06 07:52:21.998 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:52:22 np0005548731 nova_compute[232433]: 2025-12-06 07:52:22.009 232437 DEBUG nova.compute.manager [req-cff490a1-9d6f-4673-ab85-b0b626145117 req-1f4bf093-375a-4c64-822a-b986b4a219cb 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: fa71018e-7574-4438-bb85-43d1c96cf9b9] Received event network-vif-deleted-341d07a0-1551-46f6-85b6-aace80d14532 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:52:22 np0005548731 nova_compute[232433]: 2025-12-06 07:52:22.011 232437 DEBUG oslo_concurrency.lockutils [None req-e87a7a85-f018-4952-9b20-697110a2cc7e 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:52:22 np0005548731 nova_compute[232433]: 2025-12-06 07:52:22.011 232437 DEBUG oslo_concurrency.lockutils [None req-e87a7a85-f018-4952-9b20-697110a2cc7e 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:52:22 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:52:22 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 02:52:22 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:52:22.047 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 02:52:22 np0005548731 nova_compute[232433]: 2025-12-06 07:52:22.063 232437 DEBUG oslo_concurrency.processutils [None req-e87a7a85-f018-4952-9b20-697110a2cc7e 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:52:22 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  6 02:52:22 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 02:52:22 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  6 02:52:22 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 02:52:22 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2314624498' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 02:52:22 np0005548731 nova_compute[232433]: 2025-12-06 07:52:22.490 232437 DEBUG oslo_concurrency.processutils [None req-e87a7a85-f018-4952-9b20-697110a2cc7e 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.427s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:52:22 np0005548731 nova_compute[232433]: 2025-12-06 07:52:22.497 232437 DEBUG nova.compute.provider_tree [None req-e87a7a85-f018-4952-9b20-697110a2cc7e 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] Inventory has not changed in ProviderTree for provider: 6d00757a-082f-486d-ae84-869a2ba2e6e7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  6 02:52:22 np0005548731 nova_compute[232433]: 2025-12-06 07:52:22.594 232437 DEBUG nova.scheduler.client.report [None req-e87a7a85-f018-4952-9b20-697110a2cc7e 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] Inventory has not changed for provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  6 02:52:22 np0005548731 nova_compute[232433]: 2025-12-06 07:52:22.665 232437 DEBUG oslo_concurrency.lockutils [None req-e87a7a85-f018-4952-9b20-697110a2cc7e 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.654s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:52:22 np0005548731 nova_compute[232433]: 2025-12-06 07:52:22.691 232437 INFO nova.scheduler.client.report [None req-e87a7a85-f018-4952-9b20-697110a2cc7e 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] Deleted allocations for instance fa71018e-7574-4438-bb85-43d1c96cf9b9#033[00m
Dec  6 02:52:22 np0005548731 nova_compute[232433]: 2025-12-06 07:52:22.743 232437 DEBUG oslo_concurrency.lockutils [None req-e87a7a85-f018-4952-9b20-697110a2cc7e 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] Lock "fa71018e-7574-4438-bb85-43d1c96cf9b9" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 6.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:52:22 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:52:22 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:52:22 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:52:22.925 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:52:23 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e387 e387: 3 total, 3 up, 3 in
Dec  6 02:52:24 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:52:24 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:52:24 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:52:24.049 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:52:24 np0005548731 nova_compute[232433]: 2025-12-06 07:52:24.265 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:52:24 np0005548731 podman[310876]: 2025-12-06 07:52:24.412403676 +0000 UTC m=+0.060414039 container health_status 26c2ce9bdb83c6db5ccad7f5b2a7cb4e9354ab0235ad2f942ea42c8feda2142b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125)
Dec  6 02:52:24 np0005548731 podman[310879]: 2025-12-06 07:52:24.444508131 +0000 UTC m=+0.088262550 container health_status ae4fd08cdec31d6ae2a6b91ffff7deb6ca5a8375cb52ee4b685cc6f25796824e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  6 02:52:24 np0005548731 podman[310878]: 2025-12-06 07:52:24.462614374 +0000 UTC m=+0.110385572 container health_status a118c3becf56cfbcae208065771ebf10a65162bb7b96389971293e534823fb78 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller)
Dec  6 02:52:24 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:52:24 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:52:24 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:52:24.927 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:52:26 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:52:26 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:52:26 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:52:26.053 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:52:26 np0005548731 nova_compute[232433]: 2025-12-06 07:52:26.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:52:26 np0005548731 nova_compute[232433]: 2025-12-06 07:52:26.105 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  6 02:52:26 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e387 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:52:26 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:52:26 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:52:26 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:52:26.929 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:52:27 np0005548731 nova_compute[232433]: 2025-12-06 07:52:27.000 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:52:27 np0005548731 nova_compute[232433]: 2025-12-06 07:52:27.100 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:52:27 np0005548731 nova_compute[232433]: 2025-12-06 07:52:27.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:52:27 np0005548731 nova_compute[232433]: 2025-12-06 07:52:27.104 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  6 02:52:27 np0005548731 nova_compute[232433]: 2025-12-06 07:52:27.128 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] [instance: 0b9681c0-c0e7-4bd8-9040-865c1bff517b] Skipping network cache update for instance because it has been migrated to another host. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9902#033[00m
Dec  6 02:52:27 np0005548731 nova_compute[232433]: 2025-12-06 07:52:27.128 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Dec  6 02:52:28 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:52:28 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:52:28 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:52:28.055 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:52:28 np0005548731 nova_compute[232433]: 2025-12-06 07:52:28.217 232437 DEBUG oslo_concurrency.lockutils [None req-2d9a3cf4-f45c-492e-8979-60a112ff9d11 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Acquiring lock "9cc0604e-1ff7-4781-8383-c780b6f598c5" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:52:28 np0005548731 nova_compute[232433]: 2025-12-06 07:52:28.217 232437 DEBUG oslo_concurrency.lockutils [None req-2d9a3cf4-f45c-492e-8979-60a112ff9d11 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Lock "9cc0604e-1ff7-4781-8383-c780b6f598c5" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:52:28 np0005548731 nova_compute[232433]: 2025-12-06 07:52:28.237 232437 DEBUG nova.objects.instance [None req-2d9a3cf4-f45c-492e-8979-60a112ff9d11 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Lazy-loading 'flavor' on Instance uuid 9cc0604e-1ff7-4781-8383-c780b6f598c5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:52:28 np0005548731 nova_compute[232433]: 2025-12-06 07:52:28.280 232437 DEBUG oslo_concurrency.lockutils [None req-2d9a3cf4-f45c-492e-8979-60a112ff9d11 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Lock "9cc0604e-1ff7-4781-8383-c780b6f598c5" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.062s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:52:28 np0005548731 nova_compute[232433]: 2025-12-06 07:52:28.545 232437 DEBUG oslo_concurrency.lockutils [None req-2d9a3cf4-f45c-492e-8979-60a112ff9d11 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Acquiring lock "9cc0604e-1ff7-4781-8383-c780b6f598c5" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:52:28 np0005548731 nova_compute[232433]: 2025-12-06 07:52:28.545 232437 DEBUG oslo_concurrency.lockutils [None req-2d9a3cf4-f45c-492e-8979-60a112ff9d11 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Lock "9cc0604e-1ff7-4781-8383-c780b6f598c5" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:52:28 np0005548731 nova_compute[232433]: 2025-12-06 07:52:28.546 232437 INFO nova.compute.manager [None req-2d9a3cf4-f45c-492e-8979-60a112ff9d11 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] [instance: 9cc0604e-1ff7-4781-8383-c780b6f598c5] Attaching volume abe8f868-d32c-4fcd-9cd6-843e5af74a06 to /dev/vdb#033[00m
Dec  6 02:52:28 np0005548731 nova_compute[232433]: 2025-12-06 07:52:28.731 232437 DEBUG os_brick.utils [None req-2d9a3cf4-f45c-492e-8979-60a112ff9d11 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.102', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-2.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Dec  6 02:52:28 np0005548731 nova_compute[232433]: 2025-12-06 07:52:28.734 237736 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:52:28 np0005548731 nova_compute[232433]: 2025-12-06 07:52:28.748 237736 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.014s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:52:28 np0005548731 nova_compute[232433]: 2025-12-06 07:52:28.748 237736 DEBUG oslo.privsep.daemon [-] privsep: reply[cc650a54-db98-4e1b-b4f9-4e7d1643ead8]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:52:28 np0005548731 nova_compute[232433]: 2025-12-06 07:52:28.750 237736 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:52:28 np0005548731 nova_compute[232433]: 2025-12-06 07:52:28.756 237736 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.007s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:52:28 np0005548731 nova_compute[232433]: 2025-12-06 07:52:28.757 237736 DEBUG oslo.privsep.daemon [-] privsep: reply[827a4bdd-608b-4d40-a51e-8a43aba45f97]: (4, ('InitiatorName=iqn.1994-05.com.redhat:63778d5959f0', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:52:28 np0005548731 nova_compute[232433]: 2025-12-06 07:52:28.758 237736 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:52:28 np0005548731 nova_compute[232433]: 2025-12-06 07:52:28.766 237736 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:52:28 np0005548731 nova_compute[232433]: 2025-12-06 07:52:28.766 237736 DEBUG oslo.privsep.daemon [-] privsep: reply[02f63b0a-a892-4005-8466-1830e7ed484a]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:52:28 np0005548731 nova_compute[232433]: 2025-12-06 07:52:28.768 237736 DEBUG oslo.privsep.daemon [-] privsep: reply[7e85bf55-41d5-4ff6-b022-8b04770901e3]: (4, 'ec2492e2-e0d1-43d3-b74f-744a8272a0e6') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:52:28 np0005548731 nova_compute[232433]: 2025-12-06 07:52:28.768 232437 DEBUG oslo_concurrency.processutils [None req-2d9a3cf4-f45c-492e-8979-60a112ff9d11 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:52:28 np0005548731 nova_compute[232433]: 2025-12-06 07:52:28.806 232437 DEBUG oslo_concurrency.processutils [None req-2d9a3cf4-f45c-492e-8979-60a112ff9d11 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] CMD "nvme version" returned: 0 in 0.038s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:52:28 np0005548731 nova_compute[232433]: 2025-12-06 07:52:28.809 232437 DEBUG os_brick.initiator.connectors.lightos [None req-2d9a3cf4-f45c-492e-8979-60a112ff9d11 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Dec  6 02:52:28 np0005548731 nova_compute[232433]: 2025-12-06 07:52:28.809 232437 DEBUG os_brick.initiator.connectors.lightos [None req-2d9a3cf4-f45c-492e-8979-60a112ff9d11 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Dec  6 02:52:28 np0005548731 nova_compute[232433]: 2025-12-06 07:52:28.809 232437 DEBUG os_brick.initiator.connectors.lightos [None req-2d9a3cf4-f45c-492e-8979-60a112ff9d11 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:bf3e0a14-a5f8-4123-aa26-e7cad37b879a dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Dec  6 02:52:28 np0005548731 nova_compute[232433]: 2025-12-06 07:52:28.810 232437 DEBUG os_brick.utils [None req-2d9a3cf4-f45c-492e-8979-60a112ff9d11 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] <== get_connector_properties: return (77ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.102', 'host': 'compute-2.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:63778d5959f0', 'do_local_attach': False, 'nvme_hostid': 'bf3e0a14-a5f8-4123-aa26-e7cad37b879a', 'system uuid': 'ec2492e2-e0d1-43d3-b74f-744a8272a0e6', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:bf3e0a14-a5f8-4123-aa26-e7cad37b879a', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Dec  6 02:52:28 np0005548731 nova_compute[232433]: 2025-12-06 07:52:28.810 232437 DEBUG nova.virt.block_device [None req-2d9a3cf4-f45c-492e-8979-60a112ff9d11 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] [instance: 9cc0604e-1ff7-4781-8383-c780b6f598c5] Updating existing volume attachment record: 9b25da7f-a519-440f-b579-a9606260e07b _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Dec  6 02:52:28 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:52:28 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.002000047s ======
Dec  6 02:52:28 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:52:28.930 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000047s
Dec  6 02:52:29 np0005548731 nova_compute[232433]: 2025-12-06 07:52:29.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:52:29 np0005548731 nova_compute[232433]: 2025-12-06 07:52:29.267 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:52:29 np0005548731 nova_compute[232433]: 2025-12-06 07:52:29.918 232437 DEBUG nova.objects.instance [None req-2d9a3cf4-f45c-492e-8979-60a112ff9d11 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Lazy-loading 'flavor' on Instance uuid 9cc0604e-1ff7-4781-8383-c780b6f598c5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:52:29 np0005548731 nova_compute[232433]: 2025-12-06 07:52:29.954 232437 DEBUG nova.virt.libvirt.driver [None req-2d9a3cf4-f45c-492e-8979-60a112ff9d11 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] [instance: 9cc0604e-1ff7-4781-8383-c780b6f598c5] Attempting to attach volume abe8f868-d32c-4fcd-9cd6-843e5af74a06 with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168#033[00m
Dec  6 02:52:29 np0005548731 nova_compute[232433]: 2025-12-06 07:52:29.957 232437 DEBUG nova.virt.libvirt.guest [None req-2d9a3cf4-f45c-492e-8979-60a112ff9d11 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] attach device xml: <disk type="network" device="disk">
Dec  6 02:52:29 np0005548731 nova_compute[232433]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Dec  6 02:52:29 np0005548731 nova_compute[232433]:  <source protocol="rbd" name="volumes/volume-abe8f868-d32c-4fcd-9cd6-843e5af74a06">
Dec  6 02:52:29 np0005548731 nova_compute[232433]:    <host name="192.168.122.100" port="6789"/>
Dec  6 02:52:29 np0005548731 nova_compute[232433]:    <host name="192.168.122.102" port="6789"/>
Dec  6 02:52:29 np0005548731 nova_compute[232433]:    <host name="192.168.122.101" port="6789"/>
Dec  6 02:52:29 np0005548731 nova_compute[232433]:  </source>
Dec  6 02:52:29 np0005548731 nova_compute[232433]:  <auth username="openstack">
Dec  6 02:52:29 np0005548731 nova_compute[232433]:    <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec  6 02:52:29 np0005548731 nova_compute[232433]:  </auth>
Dec  6 02:52:29 np0005548731 nova_compute[232433]:  <target dev="vdb" bus="virtio"/>
Dec  6 02:52:29 np0005548731 nova_compute[232433]:  <serial>abe8f868-d32c-4fcd-9cd6-843e5af74a06</serial>
Dec  6 02:52:29 np0005548731 nova_compute[232433]: </disk>
Dec  6 02:52:29 np0005548731 nova_compute[232433]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339#033[00m
Dec  6 02:52:30 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:52:30 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:52:30 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:52:30.057 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:52:30 np0005548731 nova_compute[232433]: 2025-12-06 07:52:30.081 232437 DEBUG nova.virt.libvirt.driver [None req-2d9a3cf4-f45c-492e-8979-60a112ff9d11 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  6 02:52:30 np0005548731 nova_compute[232433]: 2025-12-06 07:52:30.081 232437 DEBUG nova.virt.libvirt.driver [None req-2d9a3cf4-f45c-492e-8979-60a112ff9d11 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  6 02:52:30 np0005548731 nova_compute[232433]: 2025-12-06 07:52:30.081 232437 DEBUG nova.virt.libvirt.driver [None req-2d9a3cf4-f45c-492e-8979-60a112ff9d11 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  6 02:52:30 np0005548731 nova_compute[232433]: 2025-12-06 07:52:30.082 232437 DEBUG nova.virt.libvirt.driver [None req-2d9a3cf4-f45c-492e-8979-60a112ff9d11 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] No VIF found with MAC fa:16:3e:3b:5f:69, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec  6 02:52:30 np0005548731 nova_compute[232433]: 2025-12-06 07:52:30.288 232437 DEBUG oslo_concurrency.lockutils [None req-2d9a3cf4-f45c-492e-8979-60a112ff9d11 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Lock "9cc0604e-1ff7-4781-8383-c780b6f598c5" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 1.742s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:52:30 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:52:30 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:52:30 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:52:30.932 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:52:31 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e388 e388: 3 total, 3 up, 3 in
Dec  6 02:52:31 np0005548731 nova_compute[232433]: 2025-12-06 07:52:31.265 232437 INFO nova.compute.manager [None req-f937f3d4-4cac-4f12-ad10-62a2c586ffdf f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] [instance: 9cc0604e-1ff7-4781-8383-c780b6f598c5] Rescuing#033[00m
Dec  6 02:52:31 np0005548731 nova_compute[232433]: 2025-12-06 07:52:31.265 232437 DEBUG oslo_concurrency.lockutils [None req-f937f3d4-4cac-4f12-ad10-62a2c586ffdf f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Acquiring lock "refresh_cache-9cc0604e-1ff7-4781-8383-c780b6f598c5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 02:52:31 np0005548731 nova_compute[232433]: 2025-12-06 07:52:31.265 232437 DEBUG oslo_concurrency.lockutils [None req-f937f3d4-4cac-4f12-ad10-62a2c586ffdf f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Acquired lock "refresh_cache-9cc0604e-1ff7-4781-8383-c780b6f598c5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 02:52:31 np0005548731 nova_compute[232433]: 2025-12-06 07:52:31.266 232437 DEBUG nova.network.neutron [None req-f937f3d4-4cac-4f12-ad10-62a2c586ffdf f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] [instance: 9cc0604e-1ff7-4781-8383-c780b6f598c5] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec  6 02:52:31 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e388 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:52:31 np0005548731 nova_compute[232433]: 2025-12-06 07:52:31.974 232437 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765007536.9732456, fa71018e-7574-4438-bb85-43d1c96cf9b9 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 02:52:31 np0005548731 nova_compute[232433]: 2025-12-06 07:52:31.975 232437 INFO nova.compute.manager [-] [instance: fa71018e-7574-4438-bb85-43d1c96cf9b9] VM Stopped (Lifecycle Event)#033[00m
Dec  6 02:52:31 np0005548731 nova_compute[232433]: 2025-12-06 07:52:31.995 232437 DEBUG nova.compute.manager [None req-8ec24892-7b1b-4928-a2dc-7caf29202a52 - - - - - -] [instance: fa71018e-7574-4438-bb85-43d1c96cf9b9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:52:32 np0005548731 nova_compute[232433]: 2025-12-06 07:52:32.002 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:52:32 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:52:32 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:52:32 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:52:32.060 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:52:32 np0005548731 nova_compute[232433]: 2025-12-06 07:52:32.754 232437 DEBUG nova.network.neutron [None req-f937f3d4-4cac-4f12-ad10-62a2c586ffdf f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] [instance: 9cc0604e-1ff7-4781-8383-c780b6f598c5] Updating instance_info_cache with network_info: [{"id": "8e1136a8-e1c4-47b4-ada8-22860a8df286", "address": "fa:16:3e:3b:5f:69", "network": {"id": "3d151181-0dfe-43ab-b47e-15b53add33a6", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-534312753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.221", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4842ecff6dce4ccc981a6b65a14ea406", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8e1136a8-e1", "ovs_interfaceid": "8e1136a8-e1c4-47b4-ada8-22860a8df286", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 02:52:32 np0005548731 nova_compute[232433]: 2025-12-06 07:52:32.772 232437 DEBUG oslo_concurrency.lockutils [None req-f937f3d4-4cac-4f12-ad10-62a2c586ffdf f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Releasing lock "refresh_cache-9cc0604e-1ff7-4781-8383-c780b6f598c5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 02:52:32 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e389 e389: 3 total, 3 up, 3 in
Dec  6 02:52:32 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:52:32 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:52:32 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:52:32.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:52:33 np0005548731 nova_compute[232433]: 2025-12-06 07:52:33.103 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:52:33 np0005548731 nova_compute[232433]: 2025-12-06 07:52:33.353 232437 DEBUG nova.virt.libvirt.driver [None req-f937f3d4-4cac-4f12-ad10-62a2c586ffdf f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] [instance: 9cc0604e-1ff7-4781-8383-c780b6f598c5] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Dec  6 02:52:34 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:52:34 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:52:34 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:52:34.064 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:52:34 np0005548731 nova_compute[232433]: 2025-12-06 07:52:34.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:52:34 np0005548731 nova_compute[232433]: 2025-12-06 07:52:34.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:52:34 np0005548731 nova_compute[232433]: 2025-12-06 07:52:34.105 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:52:34 np0005548731 nova_compute[232433]: 2025-12-06 07:52:34.128 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:52:34 np0005548731 nova_compute[232433]: 2025-12-06 07:52:34.128 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:52:34 np0005548731 nova_compute[232433]: 2025-12-06 07:52:34.128 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:52:34 np0005548731 nova_compute[232433]: 2025-12-06 07:52:34.129 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  6 02:52:34 np0005548731 nova_compute[232433]: 2025-12-06 07:52:34.129 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:52:34 np0005548731 nova_compute[232433]: 2025-12-06 07:52:34.319 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:52:34 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 02:52:34 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/853213052' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 02:52:34 np0005548731 nova_compute[232433]: 2025-12-06 07:52:34.551 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.422s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:52:34 np0005548731 nova_compute[232433]: 2025-12-06 07:52:34.635 232437 DEBUG nova.virt.libvirt.driver [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] skipping disk for instance-000000a9 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  6 02:52:34 np0005548731 nova_compute[232433]: 2025-12-06 07:52:34.635 232437 DEBUG nova.virt.libvirt.driver [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] skipping disk for instance-000000a9 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  6 02:52:34 np0005548731 nova_compute[232433]: 2025-12-06 07:52:34.635 232437 DEBUG nova.virt.libvirt.driver [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] skipping disk for instance-000000a9 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  6 02:52:34 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:52:34.637 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=67, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:ec:b3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '32:72:e7:89:e0:7d'}, ipsec=False) old=SB_Global(nb_cfg=66) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 02:52:34 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:52:34.638 143965 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec  6 02:52:34 np0005548731 nova_compute[232433]: 2025-12-06 07:52:34.639 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:52:34 np0005548731 nova_compute[232433]: 2025-12-06 07:52:34.778 232437 WARNING nova.virt.libvirt.driver [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  6 02:52:34 np0005548731 nova_compute[232433]: 2025-12-06 07:52:34.779 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4030MB free_disk=20.733062744140625GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  6 02:52:34 np0005548731 nova_compute[232433]: 2025-12-06 07:52:34.779 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:52:34 np0005548731 nova_compute[232433]: 2025-12-06 07:52:34.780 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:52:34 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:52:34 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:52:34 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:52:34.936 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:52:34 np0005548731 nova_compute[232433]: 2025-12-06 07:52:34.989 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Instance 9cc0604e-1ff7-4781-8383-c780b6f598c5 actively managed on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128, 'DISK_GB': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  6 02:52:34 np0005548731 nova_compute[232433]: 2025-12-06 07:52:34.990 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  6 02:52:34 np0005548731 nova_compute[232433]: 2025-12-06 07:52:34.990 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  6 02:52:35 np0005548731 nova_compute[232433]: 2025-12-06 07:52:35.027 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:52:35 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 02:52:35 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2536026822' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 02:52:35 np0005548731 nova_compute[232433]: 2025-12-06 07:52:35.456 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.429s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:52:35 np0005548731 nova_compute[232433]: 2025-12-06 07:52:35.463 232437 DEBUG nova.compute.provider_tree [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Inventory has not changed in ProviderTree for provider: 6d00757a-082f-486d-ae84-869a2ba2e6e7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  6 02:52:35 np0005548731 nova_compute[232433]: 2025-12-06 07:52:35.500 232437 DEBUG nova.scheduler.client.report [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Inventory has not changed for provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  6 02:52:35 np0005548731 nova_compute[232433]: 2025-12-06 07:52:35.532 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  6 02:52:35 np0005548731 nova_compute[232433]: 2025-12-06 07:52:35.533 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.753s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:52:36 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:52:36 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:52:36 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:52:36.065 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:52:36 np0005548731 nova_compute[232433]: 2025-12-06 07:52:36.370 232437 INFO nova.virt.libvirt.driver [None req-f937f3d4-4cac-4f12-ad10-62a2c586ffdf f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] [instance: 9cc0604e-1ff7-4781-8383-c780b6f598c5] Instance shutdown successfully after 3 seconds.#033[00m
Dec  6 02:52:36 np0005548731 nova_compute[232433]: 2025-12-06 07:52:36.532 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_shelved_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:52:36 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e389 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:52:36 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:52:36 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:52:36 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:52:36.938 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:52:37 np0005548731 nova_compute[232433]: 2025-12-06 07:52:37.004 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:52:37 np0005548731 nova_compute[232433]: 2025-12-06 07:52:37.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:52:37 np0005548731 kernel: tap8e1136a8-e1 (unregistering): left promiscuous mode
Dec  6 02:52:37 np0005548731 NetworkManager[49182]: <info>  [1765007557.5234] device (tap8e1136a8-e1): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec  6 02:52:37 np0005548731 ovn_controller[133927]: 2025-12-06T07:52:37Z|00833|binding|INFO|Releasing lport 8e1136a8-e1c4-47b4-ada8-22860a8df286 from this chassis (sb_readonly=0)
Dec  6 02:52:37 np0005548731 ovn_controller[133927]: 2025-12-06T07:52:37Z|00834|binding|INFO|Setting lport 8e1136a8-e1c4-47b4-ada8-22860a8df286 down in Southbound
Dec  6 02:52:37 np0005548731 ovn_controller[133927]: 2025-12-06T07:52:37Z|00835|binding|INFO|Removing iface tap8e1136a8-e1 ovn-installed in OVS
Dec  6 02:52:37 np0005548731 nova_compute[232433]: 2025-12-06 07:52:37.531 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:52:37 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:52:37.536 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3b:5f:69 10.100.0.12'], port_security=['fa:16:3e:3b:5f:69 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '9cc0604e-1ff7-4781-8383-c780b6f598c5', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3d151181-0dfe-43ab-b47e-15b53add33a6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4842ecff6dce4ccc981a6b65a14ea406', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'c2f9858c-886d-4e74-a599-a5dabbf1ba8e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com', 'neutron:port_fip': '192.168.122.221'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=328c1a1e-05c1-492e-8ea7-52ea97c29304, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], logical_port=8e1136a8-e1c4-47b4-ada8-22860a8df286) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 02:52:37 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:52:37.537 143965 INFO neutron.agent.ovn.metadata.agent [-] Port 8e1136a8-e1c4-47b4-ada8-22860a8df286 in datapath 3d151181-0dfe-43ab-b47e-15b53add33a6 unbound from our chassis#033[00m
Dec  6 02:52:37 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:52:37.541 143965 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 3d151181-0dfe-43ab-b47e-15b53add33a6, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec  6 02:52:37 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:52:37.542 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[51fd102b-5d56-4edd-b4b3-10c66a07b463]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:52:37 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:52:37.543 143965 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-3d151181-0dfe-43ab-b47e-15b53add33a6 namespace which is not needed anymore#033[00m
Dec  6 02:52:37 np0005548731 nova_compute[232433]: 2025-12-06 07:52:37.553 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:52:37 np0005548731 systemd[1]: machine-qemu\x2d84\x2dinstance\x2d000000a9.scope: Deactivated successfully.
Dec  6 02:52:37 np0005548731 systemd[1]: machine-qemu\x2d84\x2dinstance\x2d000000a9.scope: Consumed 15.549s CPU time.
Dec  6 02:52:37 np0005548731 systemd-machined[195355]: Machine qemu-84-instance-000000a9 terminated.
Dec  6 02:52:37 np0005548731 neutron-haproxy-ovnmeta-3d151181-0dfe-43ab-b47e-15b53add33a6[310431]: [NOTICE]   (310450) : haproxy version is 2.8.14-c23fe91
Dec  6 02:52:37 np0005548731 neutron-haproxy-ovnmeta-3d151181-0dfe-43ab-b47e-15b53add33a6[310431]: [NOTICE]   (310450) : path to executable is /usr/sbin/haproxy
Dec  6 02:52:37 np0005548731 neutron-haproxy-ovnmeta-3d151181-0dfe-43ab-b47e-15b53add33a6[310431]: [WARNING]  (310450) : Exiting Master process...
Dec  6 02:52:37 np0005548731 neutron-haproxy-ovnmeta-3d151181-0dfe-43ab-b47e-15b53add33a6[310431]: [ALERT]    (310450) : Current worker (310456) exited with code 143 (Terminated)
Dec  6 02:52:37 np0005548731 neutron-haproxy-ovnmeta-3d151181-0dfe-43ab-b47e-15b53add33a6[310431]: [WARNING]  (310450) : All workers exited. Exiting... (0)
Dec  6 02:52:37 np0005548731 systemd[1]: libpod-40903214fc9fbf2330b9823ba5ba40e7827fce42827f17b9d71161f330a9ae2c.scope: Deactivated successfully.
Dec  6 02:52:37 np0005548731 podman[311066]: 2025-12-06 07:52:37.681070274 +0000 UTC m=+0.053710926 container died 40903214fc9fbf2330b9823ba5ba40e7827fce42827f17b9d71161f330a9ae2c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3d151181-0dfe-43ab-b47e-15b53add33a6, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125)
Dec  6 02:52:37 np0005548731 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-40903214fc9fbf2330b9823ba5ba40e7827fce42827f17b9d71161f330a9ae2c-userdata-shm.mount: Deactivated successfully.
Dec  6 02:52:37 np0005548731 systemd[1]: var-lib-containers-storage-overlay-6c2b04b537af80f86fd98b1ce5a80acde92dff61ee3c84c111db5d565c41d8db-merged.mount: Deactivated successfully.
Dec  6 02:52:37 np0005548731 podman[311066]: 2025-12-06 07:52:37.724769463 +0000 UTC m=+0.097410115 container cleanup 40903214fc9fbf2330b9823ba5ba40e7827fce42827f17b9d71161f330a9ae2c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3d151181-0dfe-43ab-b47e-15b53add33a6, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec  6 02:52:37 np0005548731 systemd[1]: libpod-conmon-40903214fc9fbf2330b9823ba5ba40e7827fce42827f17b9d71161f330a9ae2c.scope: Deactivated successfully.
Dec  6 02:52:37 np0005548731 podman[311098]: 2025-12-06 07:52:37.784780331 +0000 UTC m=+0.038387460 container remove 40903214fc9fbf2330b9823ba5ba40e7827fce42827f17b9d71161f330a9ae2c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3d151181-0dfe-43ab-b47e-15b53add33a6, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251125)
Dec  6 02:52:37 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:52:37.793 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[ab7f65c9-010f-4755-bdc2-12f3586c329d]: (4, ('Sat Dec  6 07:52:37 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-3d151181-0dfe-43ab-b47e-15b53add33a6 (40903214fc9fbf2330b9823ba5ba40e7827fce42827f17b9d71161f330a9ae2c)\n40903214fc9fbf2330b9823ba5ba40e7827fce42827f17b9d71161f330a9ae2c\nSat Dec  6 07:52:37 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-3d151181-0dfe-43ab-b47e-15b53add33a6 (40903214fc9fbf2330b9823ba5ba40e7827fce42827f17b9d71161f330a9ae2c)\n40903214fc9fbf2330b9823ba5ba40e7827fce42827f17b9d71161f330a9ae2c\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:52:37 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:52:37.795 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[5d0a47b4-9a7e-4bf8-b750-0066aa17ef0f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:52:37 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:52:37.797 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3d151181-00, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:52:37 np0005548731 nova_compute[232433]: 2025-12-06 07:52:37.800 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:52:37 np0005548731 nova_compute[232433]: 2025-12-06 07:52:37.805 232437 INFO nova.virt.libvirt.driver [-] [instance: 9cc0604e-1ff7-4781-8383-c780b6f598c5] Instance destroyed successfully.#033[00m
Dec  6 02:52:37 np0005548731 nova_compute[232433]: 2025-12-06 07:52:37.806 232437 DEBUG nova.objects.instance [None req-f937f3d4-4cac-4f12-ad10-62a2c586ffdf f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Lazy-loading 'numa_topology' on Instance uuid 9cc0604e-1ff7-4781-8383-c780b6f598c5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:52:37 np0005548731 kernel: tap3d151181-00: left promiscuous mode
Dec  6 02:52:37 np0005548731 nova_compute[232433]: 2025-12-06 07:52:37.816 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:52:37 np0005548731 nova_compute[232433]: 2025-12-06 07:52:37.818 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:52:37 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:52:37.823 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[b2ec18bd-eded-4b29-8e23-36b76c113c56]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:52:37 np0005548731 nova_compute[232433]: 2025-12-06 07:52:37.823 232437 INFO nova.virt.libvirt.driver [None req-f937f3d4-4cac-4f12-ad10-62a2c586ffdf f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] [instance: 9cc0604e-1ff7-4781-8383-c780b6f598c5] Attempting rescue#033[00m
Dec  6 02:52:37 np0005548731 nova_compute[232433]: 2025-12-06 07:52:37.824 232437 DEBUG nova.virt.libvirt.driver [None req-f937f3d4-4cac-4f12-ad10-62a2c586ffdf f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] rescue generated disk_info: {'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'disk.rescue': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config.rescue': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} rescue /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4314#033[00m
Dec  6 02:52:37 np0005548731 nova_compute[232433]: 2025-12-06 07:52:37.829 232437 DEBUG nova.virt.libvirt.driver [None req-f937f3d4-4cac-4f12-ad10-62a2c586ffdf f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] [instance: 9cc0604e-1ff7-4781-8383-c780b6f598c5] Instance directory exists: not creating _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4719#033[00m
Dec  6 02:52:37 np0005548731 nova_compute[232433]: 2025-12-06 07:52:37.829 232437 INFO nova.virt.libvirt.driver [None req-f937f3d4-4cac-4f12-ad10-62a2c586ffdf f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] [instance: 9cc0604e-1ff7-4781-8383-c780b6f598c5] Creating image(s)#033[00m
Dec  6 02:52:37 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:52:37.839 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[371a9aee-8645-40f4-a918-eca39929b75a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:52:37 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:52:37.841 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[7a1a4874-ca11-4d62-b96a-c700585a2fcc]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:52:37 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:52:37.856 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[ed7f0428-c6ec-4c38-bb15-e95c685f1613]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 777329, 'reachable_time': 44577, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 311168, 'error': None, 'target': 'ovnmeta-3d151181-0dfe-43ab-b47e-15b53add33a6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:52:37 np0005548731 nova_compute[232433]: 2025-12-06 07:52:37.857 232437 DEBUG nova.storage.rbd_utils [None req-f937f3d4-4cac-4f12-ad10-62a2c586ffdf f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] rbd image 9cc0604e-1ff7-4781-8383-c780b6f598c5_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:52:37 np0005548731 systemd[1]: run-netns-ovnmeta\x2d3d151181\x2d0dfe\x2d43ab\x2db47e\x2d15b53add33a6.mount: Deactivated successfully.
Dec  6 02:52:37 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:52:37.860 144079 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-3d151181-0dfe-43ab-b47e-15b53add33a6 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Dec  6 02:52:37 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:52:37.860 144079 DEBUG oslo.privsep.daemon [-] privsep: reply[aba6c674-1fd0-475b-9559-6779a27ebbdf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:52:37 np0005548731 nova_compute[232433]: 2025-12-06 07:52:37.864 232437 DEBUG nova.objects.instance [None req-f937f3d4-4cac-4f12-ad10-62a2c586ffdf f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Lazy-loading 'trusted_certs' on Instance uuid 9cc0604e-1ff7-4781-8383-c780b6f598c5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:52:37 np0005548731 nova_compute[232433]: 2025-12-06 07:52:37.910 232437 DEBUG nova.storage.rbd_utils [None req-f937f3d4-4cac-4f12-ad10-62a2c586ffdf f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] rbd image 9cc0604e-1ff7-4781-8383-c780b6f598c5_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:52:37 np0005548731 nova_compute[232433]: 2025-12-06 07:52:37.937 232437 DEBUG nova.storage.rbd_utils [None req-f937f3d4-4cac-4f12-ad10-62a2c586ffdf f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] rbd image 9cc0604e-1ff7-4781-8383-c780b6f598c5_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:52:37 np0005548731 nova_compute[232433]: 2025-12-06 07:52:37.940 232437 DEBUG oslo_concurrency.processutils [None req-f937f3d4-4cac-4f12-ad10-62a2c586ffdf f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:52:38 np0005548731 nova_compute[232433]: 2025-12-06 07:52:38.006 232437 DEBUG oslo_concurrency.processutils [None req-f937f3d4-4cac-4f12-ad10-62a2c586ffdf f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:52:38 np0005548731 nova_compute[232433]: 2025-12-06 07:52:38.007 232437 DEBUG oslo_concurrency.lockutils [None req-f937f3d4-4cac-4f12-ad10-62a2c586ffdf f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Acquiring lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:52:38 np0005548731 nova_compute[232433]: 2025-12-06 07:52:38.008 232437 DEBUG oslo_concurrency.lockutils [None req-f937f3d4-4cac-4f12-ad10-62a2c586ffdf f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:52:38 np0005548731 nova_compute[232433]: 2025-12-06 07:52:38.008 232437 DEBUG oslo_concurrency.lockutils [None req-f937f3d4-4cac-4f12-ad10-62a2c586ffdf f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:52:38 np0005548731 nova_compute[232433]: 2025-12-06 07:52:38.034 232437 DEBUG nova.storage.rbd_utils [None req-f937f3d4-4cac-4f12-ad10-62a2c586ffdf f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] rbd image 9cc0604e-1ff7-4781-8383-c780b6f598c5_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:52:38 np0005548731 nova_compute[232433]: 2025-12-06 07:52:38.038 232437 DEBUG oslo_concurrency.processutils [None req-f937f3d4-4cac-4f12-ad10-62a2c586ffdf f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef 9cc0604e-1ff7-4781-8383-c780b6f598c5_disk.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:52:38 np0005548731 nova_compute[232433]: 2025-12-06 07:52:38.067 232437 DEBUG nova.compute.manager [req-48d40ecb-355c-4d07-98ea-c3dda553d347 req-f59cb4c1-b677-4c95-97e5-f261fbf6c949 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 9cc0604e-1ff7-4781-8383-c780b6f598c5] Received event network-vif-unplugged-8e1136a8-e1c4-47b4-ada8-22860a8df286 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:52:38 np0005548731 nova_compute[232433]: 2025-12-06 07:52:38.068 232437 DEBUG oslo_concurrency.lockutils [req-48d40ecb-355c-4d07-98ea-c3dda553d347 req-f59cb4c1-b677-4c95-97e5-f261fbf6c949 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "9cc0604e-1ff7-4781-8383-c780b6f598c5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:52:38 np0005548731 nova_compute[232433]: 2025-12-06 07:52:38.068 232437 DEBUG oslo_concurrency.lockutils [req-48d40ecb-355c-4d07-98ea-c3dda553d347 req-f59cb4c1-b677-4c95-97e5-f261fbf6c949 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "9cc0604e-1ff7-4781-8383-c780b6f598c5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:52:38 np0005548731 nova_compute[232433]: 2025-12-06 07:52:38.068 232437 DEBUG oslo_concurrency.lockutils [req-48d40ecb-355c-4d07-98ea-c3dda553d347 req-f59cb4c1-b677-4c95-97e5-f261fbf6c949 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "9cc0604e-1ff7-4781-8383-c780b6f598c5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:52:38 np0005548731 nova_compute[232433]: 2025-12-06 07:52:38.068 232437 DEBUG nova.compute.manager [req-48d40ecb-355c-4d07-98ea-c3dda553d347 req-f59cb4c1-b677-4c95-97e5-f261fbf6c949 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 9cc0604e-1ff7-4781-8383-c780b6f598c5] No waiting events found dispatching network-vif-unplugged-8e1136a8-e1c4-47b4-ada8-22860a8df286 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 02:52:38 np0005548731 nova_compute[232433]: 2025-12-06 07:52:38.069 232437 WARNING nova.compute.manager [req-48d40ecb-355c-4d07-98ea-c3dda553d347 req-f59cb4c1-b677-4c95-97e5-f261fbf6c949 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 9cc0604e-1ff7-4781-8383-c780b6f598c5] Received unexpected event network-vif-unplugged-8e1136a8-e1c4-47b4-ada8-22860a8df286 for instance with vm_state active and task_state rescuing.#033[00m
Dec  6 02:52:38 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:52:38 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:52:38 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:52:38.067 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:52:38 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 02:52:38 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 02:52:38 np0005548731 nova_compute[232433]: 2025-12-06 07:52:38.288 232437 DEBUG oslo_concurrency.processutils [None req-f937f3d4-4cac-4f12-ad10-62a2c586ffdf f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef 9cc0604e-1ff7-4781-8383-c780b6f598c5_disk.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.250s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:52:38 np0005548731 nova_compute[232433]: 2025-12-06 07:52:38.289 232437 DEBUG nova.objects.instance [None req-f937f3d4-4cac-4f12-ad10-62a2c586ffdf f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Lazy-loading 'migration_context' on Instance uuid 9cc0604e-1ff7-4781-8383-c780b6f598c5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:52:38 np0005548731 nova_compute[232433]: 2025-12-06 07:52:38.401 232437 DEBUG nova.virt.libvirt.driver [None req-f937f3d4-4cac-4f12-ad10-62a2c586ffdf f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] [instance: 9cc0604e-1ff7-4781-8383-c780b6f598c5] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Dec  6 02:52:38 np0005548731 nova_compute[232433]: 2025-12-06 07:52:38.401 232437 DEBUG nova.virt.libvirt.driver [None req-f937f3d4-4cac-4f12-ad10-62a2c586ffdf f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] [instance: 9cc0604e-1ff7-4781-8383-c780b6f598c5] Start _get_guest_xml network_info=[{"id": "8e1136a8-e1c4-47b4-ada8-22860a8df286", "address": "fa:16:3e:3b:5f:69", "network": {"id": "3d151181-0dfe-43ab-b47e-15b53add33a6", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-534312753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.221", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-ServerRescueNegativeTestJSON-534312753-network", "vif_mac": "fa:16:3e:3b:5f:69"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4842ecff6dce4ccc981a6b65a14ea406", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8e1136a8-e1", "ovs_interfaceid": "8e1136a8-e1c4-47b4-ada8-22860a8df286", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'disk.rescue': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config.rescue': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>) rescue={'image_id': '6efab05d-c7cf-4770-a5c3-c806a2739063', 'kernel_id': '', 'ramdisk_id': ''} block_device_info=None _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec  6 02:52:38 np0005548731 nova_compute[232433]: 2025-12-06 07:52:38.402 232437 DEBUG nova.objects.instance [None req-f937f3d4-4cac-4f12-ad10-62a2c586ffdf f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Lazy-loading 'resources' on Instance uuid 9cc0604e-1ff7-4781-8383-c780b6f598c5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:52:38 np0005548731 nova_compute[232433]: 2025-12-06 07:52:38.431 232437 WARNING nova.virt.libvirt.driver [None req-f937f3d4-4cac-4f12-ad10-62a2c586ffdf f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  6 02:52:38 np0005548731 nova_compute[232433]: 2025-12-06 07:52:38.438 232437 DEBUG nova.virt.libvirt.host [None req-f937f3d4-4cac-4f12-ad10-62a2c586ffdf f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec  6 02:52:38 np0005548731 nova_compute[232433]: 2025-12-06 07:52:38.438 232437 DEBUG nova.virt.libvirt.host [None req-f937f3d4-4cac-4f12-ad10-62a2c586ffdf f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec  6 02:52:38 np0005548731 nova_compute[232433]: 2025-12-06 07:52:38.441 232437 DEBUG nova.virt.libvirt.host [None req-f937f3d4-4cac-4f12-ad10-62a2c586ffdf f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec  6 02:52:38 np0005548731 nova_compute[232433]: 2025-12-06 07:52:38.441 232437 DEBUG nova.virt.libvirt.host [None req-f937f3d4-4cac-4f12-ad10-62a2c586ffdf f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec  6 02:52:38 np0005548731 nova_compute[232433]: 2025-12-06 07:52:38.442 232437 DEBUG nova.virt.libvirt.driver [None req-f937f3d4-4cac-4f12-ad10-62a2c586ffdf f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec  6 02:52:38 np0005548731 nova_compute[232433]: 2025-12-06 07:52:38.443 232437 DEBUG nova.virt.hardware [None req-f937f3d4-4cac-4f12-ad10-62a2c586ffdf f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-06T06:56:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='25848a18-11d9-4f11-80b5-5d005675c76d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec  6 02:52:38 np0005548731 nova_compute[232433]: 2025-12-06 07:52:38.443 232437 DEBUG nova.virt.hardware [None req-f937f3d4-4cac-4f12-ad10-62a2c586ffdf f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec  6 02:52:38 np0005548731 nova_compute[232433]: 2025-12-06 07:52:38.443 232437 DEBUG nova.virt.hardware [None req-f937f3d4-4cac-4f12-ad10-62a2c586ffdf f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec  6 02:52:38 np0005548731 nova_compute[232433]: 2025-12-06 07:52:38.443 232437 DEBUG nova.virt.hardware [None req-f937f3d4-4cac-4f12-ad10-62a2c586ffdf f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec  6 02:52:38 np0005548731 nova_compute[232433]: 2025-12-06 07:52:38.444 232437 DEBUG nova.virt.hardware [None req-f937f3d4-4cac-4f12-ad10-62a2c586ffdf f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec  6 02:52:38 np0005548731 nova_compute[232433]: 2025-12-06 07:52:38.444 232437 DEBUG nova.virt.hardware [None req-f937f3d4-4cac-4f12-ad10-62a2c586ffdf f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec  6 02:52:38 np0005548731 nova_compute[232433]: 2025-12-06 07:52:38.444 232437 DEBUG nova.virt.hardware [None req-f937f3d4-4cac-4f12-ad10-62a2c586ffdf f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec  6 02:52:38 np0005548731 nova_compute[232433]: 2025-12-06 07:52:38.444 232437 DEBUG nova.virt.hardware [None req-f937f3d4-4cac-4f12-ad10-62a2c586ffdf f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec  6 02:52:38 np0005548731 nova_compute[232433]: 2025-12-06 07:52:38.444 232437 DEBUG nova.virt.hardware [None req-f937f3d4-4cac-4f12-ad10-62a2c586ffdf f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec  6 02:52:38 np0005548731 nova_compute[232433]: 2025-12-06 07:52:38.444 232437 DEBUG nova.virt.hardware [None req-f937f3d4-4cac-4f12-ad10-62a2c586ffdf f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec  6 02:52:38 np0005548731 nova_compute[232433]: 2025-12-06 07:52:38.444 232437 DEBUG nova.virt.hardware [None req-f937f3d4-4cac-4f12-ad10-62a2c586ffdf f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec  6 02:52:38 np0005548731 nova_compute[232433]: 2025-12-06 07:52:38.445 232437 DEBUG nova.objects.instance [None req-f937f3d4-4cac-4f12-ad10-62a2c586ffdf f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 9cc0604e-1ff7-4781-8383-c780b6f598c5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:52:38 np0005548731 nova_compute[232433]: 2025-12-06 07:52:38.461 232437 DEBUG oslo_concurrency.processutils [None req-f937f3d4-4cac-4f12-ad10-62a2c586ffdf f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:52:38 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  6 02:52:38 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2547543116' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  6 02:52:38 np0005548731 nova_compute[232433]: 2025-12-06 07:52:38.907 232437 DEBUG oslo_concurrency.processutils [None req-f937f3d4-4cac-4f12-ad10-62a2c586ffdf f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:52:38 np0005548731 nova_compute[232433]: 2025-12-06 07:52:38.908 232437 DEBUG oslo_concurrency.processutils [None req-f937f3d4-4cac-4f12-ad10-62a2c586ffdf f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:52:38 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:52:38 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:52:38 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:52:38.940 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:52:39 np0005548731 nova_compute[232433]: 2025-12-06 07:52:39.320 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:52:39 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  6 02:52:39 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2287609545' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  6 02:52:39 np0005548731 nova_compute[232433]: 2025-12-06 07:52:39.360 232437 DEBUG oslo_concurrency.processutils [None req-f937f3d4-4cac-4f12-ad10-62a2c586ffdf f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:52:39 np0005548731 nova_compute[232433]: 2025-12-06 07:52:39.361 232437 DEBUG oslo_concurrency.processutils [None req-f937f3d4-4cac-4f12-ad10-62a2c586ffdf f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:52:39 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  6 02:52:39 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/4213075360' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  6 02:52:39 np0005548731 nova_compute[232433]: 2025-12-06 07:52:39.814 232437 DEBUG oslo_concurrency.processutils [None req-f937f3d4-4cac-4f12-ad10-62a2c586ffdf f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.453s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:52:39 np0005548731 nova_compute[232433]: 2025-12-06 07:52:39.815 232437 DEBUG nova.virt.libvirt.vif [None req-f937f3d4-4cac-4f12-ad10-62a2c586ffdf f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T07:51:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerRescueNegativeTestJSON-server-193302101',display_name='tempest-ServerRescueNegativeTestJSON-server-193302101',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serverrescuenegativetestjson-server-193302101',id=169,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBH5A7PUR8B+q/aEm961UR40RWH5j2d0ebX0KEZ8+WjamXhjP8ST8tm7kS6TxJuU8YsY9D43lwbEhi9EKe7kSKkxSw3JIb+Ggj3DIhBD4zOx4oUM+gOFYaZD0JjPWKPEMBw==',key_name='tempest-keypair-414016542',keypairs=<?>,launch_index=0,launched_at=2025-12-06T07:51:52Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='4842ecff6dce4ccc981a6b65a14ea406',ramdisk_id='',reservation_id='r-gcbn1atd',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerRescueNegativeTestJSON-1304226499',owner_user_name='tempest-ServerRescueNegativeTestJSON-1304226499-project-member'},tags=<?>,task_state='rescuing',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:51:52Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='f2335740042045fba7f544ee5140eb87',uuid=9cc0604e-1ff7-4781-8383-c780b6f598c5,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "8e1136a8-e1c4-47b4-ada8-22860a8df286", "address": "fa:16:3e:3b:5f:69", "network": {"id": "3d151181-0dfe-43ab-b47e-15b53add33a6", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-534312753-network", "subnets": [{"cidr": 
"10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.221", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-ServerRescueNegativeTestJSON-534312753-network", "vif_mac": "fa:16:3e:3b:5f:69"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4842ecff6dce4ccc981a6b65a14ea406", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8e1136a8-e1", "ovs_interfaceid": "8e1136a8-e1c4-47b4-ada8-22860a8df286", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec  6 02:52:39 np0005548731 nova_compute[232433]: 2025-12-06 07:52:39.816 232437 DEBUG nova.network.os_vif_util [None req-f937f3d4-4cac-4f12-ad10-62a2c586ffdf f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Converting VIF {"id": "8e1136a8-e1c4-47b4-ada8-22860a8df286", "address": "fa:16:3e:3b:5f:69", "network": {"id": "3d151181-0dfe-43ab-b47e-15b53add33a6", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-534312753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.221", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-ServerRescueNegativeTestJSON-534312753-network", "vif_mac": "fa:16:3e:3b:5f:69"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4842ecff6dce4ccc981a6b65a14ea406", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8e1136a8-e1", "ovs_interfaceid": "8e1136a8-e1c4-47b4-ada8-22860a8df286", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 02:52:39 np0005548731 nova_compute[232433]: 2025-12-06 07:52:39.817 232437 DEBUG nova.network.os_vif_util [None req-f937f3d4-4cac-4f12-ad10-62a2c586ffdf f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:3b:5f:69,bridge_name='br-int',has_traffic_filtering=True,id=8e1136a8-e1c4-47b4-ada8-22860a8df286,network=Network(3d151181-0dfe-43ab-b47e-15b53add33a6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8e1136a8-e1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 02:52:39 np0005548731 nova_compute[232433]: 2025-12-06 07:52:39.818 232437 DEBUG nova.objects.instance [None req-f937f3d4-4cac-4f12-ad10-62a2c586ffdf f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Lazy-loading 'pci_devices' on Instance uuid 9cc0604e-1ff7-4781-8383-c780b6f598c5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:52:40 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:52:40 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:52:40 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:52:40.069 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:52:40 np0005548731 nova_compute[232433]: 2025-12-06 07:52:40.329 232437 DEBUG nova.compute.manager [req-fc179e70-e27d-4151-b52a-b6ed28864e7e req-b959c506-8de5-474f-945b-61dcec7192a3 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 9cc0604e-1ff7-4781-8383-c780b6f598c5] Received event network-vif-plugged-8e1136a8-e1c4-47b4-ada8-22860a8df286 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:52:40 np0005548731 nova_compute[232433]: 2025-12-06 07:52:40.329 232437 DEBUG oslo_concurrency.lockutils [req-fc179e70-e27d-4151-b52a-b6ed28864e7e req-b959c506-8de5-474f-945b-61dcec7192a3 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "9cc0604e-1ff7-4781-8383-c780b6f598c5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:52:40 np0005548731 nova_compute[232433]: 2025-12-06 07:52:40.329 232437 DEBUG oslo_concurrency.lockutils [req-fc179e70-e27d-4151-b52a-b6ed28864e7e req-b959c506-8de5-474f-945b-61dcec7192a3 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "9cc0604e-1ff7-4781-8383-c780b6f598c5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:52:40 np0005548731 nova_compute[232433]: 2025-12-06 07:52:40.330 232437 DEBUG oslo_concurrency.lockutils [req-fc179e70-e27d-4151-b52a-b6ed28864e7e req-b959c506-8de5-474f-945b-61dcec7192a3 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "9cc0604e-1ff7-4781-8383-c780b6f598c5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:52:40 np0005548731 nova_compute[232433]: 2025-12-06 07:52:40.330 232437 DEBUG nova.compute.manager [req-fc179e70-e27d-4151-b52a-b6ed28864e7e req-b959c506-8de5-474f-945b-61dcec7192a3 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 9cc0604e-1ff7-4781-8383-c780b6f598c5] No waiting events found dispatching network-vif-plugged-8e1136a8-e1c4-47b4-ada8-22860a8df286 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 02:52:40 np0005548731 nova_compute[232433]: 2025-12-06 07:52:40.330 232437 WARNING nova.compute.manager [req-fc179e70-e27d-4151-b52a-b6ed28864e7e req-b959c506-8de5-474f-945b-61dcec7192a3 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 9cc0604e-1ff7-4781-8383-c780b6f598c5] Received unexpected event network-vif-plugged-8e1136a8-e1c4-47b4-ada8-22860a8df286 for instance with vm_state active and task_state rescuing.#033[00m
Dec  6 02:52:40 np0005548731 nova_compute[232433]: 2025-12-06 07:52:40.339 232437 DEBUG nova.virt.libvirt.driver [None req-f937f3d4-4cac-4f12-ad10-62a2c586ffdf f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] [instance: 9cc0604e-1ff7-4781-8383-c780b6f598c5] End _get_guest_xml xml=<domain type="kvm">
Dec  6 02:52:40 np0005548731 nova_compute[232433]:  <uuid>9cc0604e-1ff7-4781-8383-c780b6f598c5</uuid>
Dec  6 02:52:40 np0005548731 nova_compute[232433]:  <name>instance-000000a9</name>
Dec  6 02:52:40 np0005548731 nova_compute[232433]:  <memory>131072</memory>
Dec  6 02:52:40 np0005548731 nova_compute[232433]:  <vcpu>1</vcpu>
Dec  6 02:52:40 np0005548731 nova_compute[232433]:  <metadata>
Dec  6 02:52:40 np0005548731 nova_compute[232433]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec  6 02:52:40 np0005548731 nova_compute[232433]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec  6 02:52:40 np0005548731 nova_compute[232433]:      <nova:name>tempest-ServerRescueNegativeTestJSON-server-193302101</nova:name>
Dec  6 02:52:40 np0005548731 nova_compute[232433]:      <nova:creationTime>2025-12-06 07:52:38</nova:creationTime>
Dec  6 02:52:40 np0005548731 nova_compute[232433]:      <nova:flavor name="m1.nano">
Dec  6 02:52:40 np0005548731 nova_compute[232433]:        <nova:memory>128</nova:memory>
Dec  6 02:52:40 np0005548731 nova_compute[232433]:        <nova:disk>1</nova:disk>
Dec  6 02:52:40 np0005548731 nova_compute[232433]:        <nova:swap>0</nova:swap>
Dec  6 02:52:40 np0005548731 nova_compute[232433]:        <nova:ephemeral>0</nova:ephemeral>
Dec  6 02:52:40 np0005548731 nova_compute[232433]:        <nova:vcpus>1</nova:vcpus>
Dec  6 02:52:40 np0005548731 nova_compute[232433]:      </nova:flavor>
Dec  6 02:52:40 np0005548731 nova_compute[232433]:      <nova:owner>
Dec  6 02:52:40 np0005548731 nova_compute[232433]:        <nova:user uuid="f2335740042045fba7f544ee5140eb87">tempest-ServerRescueNegativeTestJSON-1304226499-project-member</nova:user>
Dec  6 02:52:40 np0005548731 nova_compute[232433]:        <nova:project uuid="4842ecff6dce4ccc981a6b65a14ea406">tempest-ServerRescueNegativeTestJSON-1304226499</nova:project>
Dec  6 02:52:40 np0005548731 nova_compute[232433]:      </nova:owner>
Dec  6 02:52:40 np0005548731 nova_compute[232433]:      <nova:root type="image" uuid="6efab05d-c7cf-4770-a5c3-c806a2739063"/>
Dec  6 02:52:40 np0005548731 nova_compute[232433]:      <nova:ports>
Dec  6 02:52:40 np0005548731 nova_compute[232433]:        <nova:port uuid="8e1136a8-e1c4-47b4-ada8-22860a8df286">
Dec  6 02:52:40 np0005548731 nova_compute[232433]:          <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Dec  6 02:52:40 np0005548731 nova_compute[232433]:        </nova:port>
Dec  6 02:52:40 np0005548731 nova_compute[232433]:      </nova:ports>
Dec  6 02:52:40 np0005548731 nova_compute[232433]:    </nova:instance>
Dec  6 02:52:40 np0005548731 nova_compute[232433]:  </metadata>
Dec  6 02:52:40 np0005548731 nova_compute[232433]:  <sysinfo type="smbios">
Dec  6 02:52:40 np0005548731 nova_compute[232433]:    <system>
Dec  6 02:52:40 np0005548731 nova_compute[232433]:      <entry name="manufacturer">RDO</entry>
Dec  6 02:52:40 np0005548731 nova_compute[232433]:      <entry name="product">OpenStack Compute</entry>
Dec  6 02:52:40 np0005548731 nova_compute[232433]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec  6 02:52:40 np0005548731 nova_compute[232433]:      <entry name="serial">9cc0604e-1ff7-4781-8383-c780b6f598c5</entry>
Dec  6 02:52:40 np0005548731 nova_compute[232433]:      <entry name="uuid">9cc0604e-1ff7-4781-8383-c780b6f598c5</entry>
Dec  6 02:52:40 np0005548731 nova_compute[232433]:      <entry name="family">Virtual Machine</entry>
Dec  6 02:52:40 np0005548731 nova_compute[232433]:    </system>
Dec  6 02:52:40 np0005548731 nova_compute[232433]:  </sysinfo>
Dec  6 02:52:40 np0005548731 nova_compute[232433]:  <os>
Dec  6 02:52:40 np0005548731 nova_compute[232433]:    <type arch="x86_64" machine="q35">hvm</type>
Dec  6 02:52:40 np0005548731 nova_compute[232433]:    <smbios mode="sysinfo"/>
Dec  6 02:52:40 np0005548731 nova_compute[232433]:  </os>
Dec  6 02:52:40 np0005548731 nova_compute[232433]:  <features>
Dec  6 02:52:40 np0005548731 nova_compute[232433]:    <acpi/>
Dec  6 02:52:40 np0005548731 nova_compute[232433]:    <apic/>
Dec  6 02:52:40 np0005548731 nova_compute[232433]:    <vmcoreinfo/>
Dec  6 02:52:40 np0005548731 nova_compute[232433]:  </features>
Dec  6 02:52:40 np0005548731 nova_compute[232433]:  <clock offset="utc">
Dec  6 02:52:40 np0005548731 nova_compute[232433]:    <timer name="pit" tickpolicy="delay"/>
Dec  6 02:52:40 np0005548731 nova_compute[232433]:    <timer name="rtc" tickpolicy="catchup"/>
Dec  6 02:52:40 np0005548731 nova_compute[232433]:    <timer name="hpet" present="no"/>
Dec  6 02:52:40 np0005548731 nova_compute[232433]:  </clock>
Dec  6 02:52:40 np0005548731 nova_compute[232433]:  <cpu mode="custom" match="exact">
Dec  6 02:52:40 np0005548731 nova_compute[232433]:    <model>Nehalem</model>
Dec  6 02:52:40 np0005548731 nova_compute[232433]:    <topology sockets="1" cores="1" threads="1"/>
Dec  6 02:52:40 np0005548731 nova_compute[232433]:  </cpu>
Dec  6 02:52:40 np0005548731 nova_compute[232433]:  <devices>
Dec  6 02:52:40 np0005548731 nova_compute[232433]:    <disk type="network" device="disk">
Dec  6 02:52:40 np0005548731 nova_compute[232433]:      <driver type="raw" cache="none"/>
Dec  6 02:52:40 np0005548731 nova_compute[232433]:      <source protocol="rbd" name="vms/9cc0604e-1ff7-4781-8383-c780b6f598c5_disk.rescue">
Dec  6 02:52:40 np0005548731 nova_compute[232433]:        <host name="192.168.122.100" port="6789"/>
Dec  6 02:52:40 np0005548731 nova_compute[232433]:        <host name="192.168.122.102" port="6789"/>
Dec  6 02:52:40 np0005548731 nova_compute[232433]:        <host name="192.168.122.101" port="6789"/>
Dec  6 02:52:40 np0005548731 nova_compute[232433]:      </source>
Dec  6 02:52:40 np0005548731 nova_compute[232433]:      <auth username="openstack">
Dec  6 02:52:40 np0005548731 nova_compute[232433]:        <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec  6 02:52:40 np0005548731 nova_compute[232433]:      </auth>
Dec  6 02:52:40 np0005548731 nova_compute[232433]:      <target dev="vda" bus="virtio"/>
Dec  6 02:52:40 np0005548731 nova_compute[232433]:    </disk>
Dec  6 02:52:40 np0005548731 nova_compute[232433]:    <disk type="network" device="disk">
Dec  6 02:52:40 np0005548731 nova_compute[232433]:      <driver type="raw" cache="none"/>
Dec  6 02:52:40 np0005548731 nova_compute[232433]:      <source protocol="rbd" name="vms/9cc0604e-1ff7-4781-8383-c780b6f598c5_disk">
Dec  6 02:52:40 np0005548731 nova_compute[232433]:        <host name="192.168.122.100" port="6789"/>
Dec  6 02:52:40 np0005548731 nova_compute[232433]:        <host name="192.168.122.102" port="6789"/>
Dec  6 02:52:40 np0005548731 nova_compute[232433]:        <host name="192.168.122.101" port="6789"/>
Dec  6 02:52:40 np0005548731 nova_compute[232433]:      </source>
Dec  6 02:52:40 np0005548731 nova_compute[232433]:      <auth username="openstack">
Dec  6 02:52:40 np0005548731 nova_compute[232433]:        <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec  6 02:52:40 np0005548731 nova_compute[232433]:      </auth>
Dec  6 02:52:40 np0005548731 nova_compute[232433]:      <target dev="vdb" bus="virtio"/>
Dec  6 02:52:40 np0005548731 nova_compute[232433]:    </disk>
Dec  6 02:52:40 np0005548731 nova_compute[232433]:    <disk type="network" device="cdrom">
Dec  6 02:52:40 np0005548731 nova_compute[232433]:      <driver type="raw" cache="none"/>
Dec  6 02:52:40 np0005548731 nova_compute[232433]:      <source protocol="rbd" name="vms/9cc0604e-1ff7-4781-8383-c780b6f598c5_disk.config.rescue">
Dec  6 02:52:40 np0005548731 nova_compute[232433]:        <host name="192.168.122.100" port="6789"/>
Dec  6 02:52:40 np0005548731 nova_compute[232433]:        <host name="192.168.122.102" port="6789"/>
Dec  6 02:52:40 np0005548731 nova_compute[232433]:        <host name="192.168.122.101" port="6789"/>
Dec  6 02:52:40 np0005548731 nova_compute[232433]:      </source>
Dec  6 02:52:40 np0005548731 nova_compute[232433]:      <auth username="openstack">
Dec  6 02:52:40 np0005548731 nova_compute[232433]:        <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec  6 02:52:40 np0005548731 nova_compute[232433]:      </auth>
Dec  6 02:52:40 np0005548731 nova_compute[232433]:      <target dev="sda" bus="sata"/>
Dec  6 02:52:40 np0005548731 nova_compute[232433]:    </disk>
Dec  6 02:52:40 np0005548731 nova_compute[232433]:    <interface type="ethernet">
Dec  6 02:52:40 np0005548731 nova_compute[232433]:      <mac address="fa:16:3e:3b:5f:69"/>
Dec  6 02:52:40 np0005548731 nova_compute[232433]:      <model type="virtio"/>
Dec  6 02:52:40 np0005548731 nova_compute[232433]:      <driver name="vhost" rx_queue_size="512"/>
Dec  6 02:52:40 np0005548731 nova_compute[232433]:      <mtu size="1442"/>
Dec  6 02:52:40 np0005548731 nova_compute[232433]:      <target dev="tap8e1136a8-e1"/>
Dec  6 02:52:40 np0005548731 nova_compute[232433]:    </interface>
Dec  6 02:52:40 np0005548731 nova_compute[232433]:    <serial type="pty">
Dec  6 02:52:40 np0005548731 nova_compute[232433]:      <log file="/var/lib/nova/instances/9cc0604e-1ff7-4781-8383-c780b6f598c5/console.log" append="off"/>
Dec  6 02:52:40 np0005548731 nova_compute[232433]:    </serial>
Dec  6 02:52:40 np0005548731 nova_compute[232433]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec  6 02:52:40 np0005548731 nova_compute[232433]:    <video>
Dec  6 02:52:40 np0005548731 nova_compute[232433]:      <model type="virtio"/>
Dec  6 02:52:40 np0005548731 nova_compute[232433]:    </video>
Dec  6 02:52:40 np0005548731 nova_compute[232433]:    <input type="tablet" bus="usb"/>
Dec  6 02:52:40 np0005548731 nova_compute[232433]:    <rng model="virtio">
Dec  6 02:52:40 np0005548731 nova_compute[232433]:      <backend model="random">/dev/urandom</backend>
Dec  6 02:52:40 np0005548731 nova_compute[232433]:    </rng>
Dec  6 02:52:40 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root"/>
Dec  6 02:52:40 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:52:40 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:52:40 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:52:40 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:52:40 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:52:40 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:52:40 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:52:40 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:52:40 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:52:40 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:52:40 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:52:40 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:52:40 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:52:40 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:52:40 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:52:40 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:52:40 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:52:40 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:52:40 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:52:40 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:52:40 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:52:40 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:52:40 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:52:40 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:52:40 np0005548731 nova_compute[232433]:    <controller type="usb" index="0"/>
Dec  6 02:52:40 np0005548731 nova_compute[232433]:    <memballoon model="virtio">
Dec  6 02:52:40 np0005548731 nova_compute[232433]:      <stats period="10"/>
Dec  6 02:52:40 np0005548731 nova_compute[232433]:    </memballoon>
Dec  6 02:52:40 np0005548731 nova_compute[232433]:  </devices>
Dec  6 02:52:40 np0005548731 nova_compute[232433]: </domain>
Dec  6 02:52:40 np0005548731 nova_compute[232433]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec  6 02:52:40 np0005548731 nova_compute[232433]: 2025-12-06 07:52:40.344 232437 INFO nova.virt.libvirt.driver [-] [instance: 9cc0604e-1ff7-4781-8383-c780b6f598c5] Instance destroyed successfully.#033[00m
Dec  6 02:52:40 np0005548731 nova_compute[232433]: 2025-12-06 07:52:40.652 232437 DEBUG nova.virt.libvirt.driver [None req-f937f3d4-4cac-4f12-ad10-62a2c586ffdf f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  6 02:52:40 np0005548731 nova_compute[232433]: 2025-12-06 07:52:40.653 232437 DEBUG nova.virt.libvirt.driver [None req-f937f3d4-4cac-4f12-ad10-62a2c586ffdf f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  6 02:52:40 np0005548731 nova_compute[232433]: 2025-12-06 07:52:40.653 232437 DEBUG nova.virt.libvirt.driver [None req-f937f3d4-4cac-4f12-ad10-62a2c586ffdf f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  6 02:52:40 np0005548731 nova_compute[232433]: 2025-12-06 07:52:40.653 232437 DEBUG nova.virt.libvirt.driver [None req-f937f3d4-4cac-4f12-ad10-62a2c586ffdf f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] No VIF found with MAC fa:16:3e:3b:5f:69, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec  6 02:52:40 np0005548731 nova_compute[232433]: 2025-12-06 07:52:40.654 232437 INFO nova.virt.libvirt.driver [None req-f937f3d4-4cac-4f12-ad10-62a2c586ffdf f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] [instance: 9cc0604e-1ff7-4781-8383-c780b6f598c5] Using config drive#033[00m
Dec  6 02:52:40 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:52:40 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:52:40 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:52:40.942 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:52:42 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:52:42 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 02:52:42 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:52:42.071 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 02:52:42 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:52:42 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 02:52:42 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:52:42.944 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 02:52:43 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:52:43.640 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=9f96b960-b4f2-40bd-ae99-08121f5e8b78, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '67'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:52:43 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e389 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:52:43 np0005548731 nova_compute[232433]: 2025-12-06 07:52:43.969 232437 DEBUG nova.storage.rbd_utils [None req-f937f3d4-4cac-4f12-ad10-62a2c586ffdf f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] rbd image 9cc0604e-1ff7-4781-8383-c780b6f598c5_disk.config.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:52:43 np0005548731 nova_compute[232433]: 2025-12-06 07:52:43.975 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:52:43 np0005548731 nova_compute[232433]: 2025-12-06 07:52:43.999 232437 DEBUG nova.objects.instance [None req-f937f3d4-4cac-4f12-ad10-62a2c586ffdf f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Lazy-loading 'ec2_ids' on Instance uuid 9cc0604e-1ff7-4781-8383-c780b6f598c5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:52:44 np0005548731 nova_compute[232433]: 2025-12-06 07:52:44.028 232437 DEBUG nova.objects.instance [None req-f937f3d4-4cac-4f12-ad10-62a2c586ffdf f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Lazy-loading 'keypairs' on Instance uuid 9cc0604e-1ff7-4781-8383-c780b6f598c5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:52:44 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:52:44 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:52:44 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:52:44.073 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:52:44 np0005548731 nova_compute[232433]: 2025-12-06 07:52:44.321 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:52:44 np0005548731 nova_compute[232433]: 2025-12-06 07:52:44.353 232437 INFO nova.virt.libvirt.driver [None req-f937f3d4-4cac-4f12-ad10-62a2c586ffdf f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] [instance: 9cc0604e-1ff7-4781-8383-c780b6f598c5] Creating config drive at /var/lib/nova/instances/9cc0604e-1ff7-4781-8383-c780b6f598c5/disk.config.rescue#033[00m
Dec  6 02:52:44 np0005548731 nova_compute[232433]: 2025-12-06 07:52:44.358 232437 DEBUG oslo_concurrency.processutils [None req-f937f3d4-4cac-4f12-ad10-62a2c586ffdf f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/9cc0604e-1ff7-4781-8383-c780b6f598c5/disk.config.rescue -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpga3qbsvh execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:52:44 np0005548731 nova_compute[232433]: 2025-12-06 07:52:44.492 232437 DEBUG oslo_concurrency.processutils [None req-f937f3d4-4cac-4f12-ad10-62a2c586ffdf f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/9cc0604e-1ff7-4781-8383-c780b6f598c5/disk.config.rescue -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpga3qbsvh" returned: 0 in 0.134s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:52:44 np0005548731 nova_compute[232433]: 2025-12-06 07:52:44.913 232437 DEBUG nova.storage.rbd_utils [None req-f937f3d4-4cac-4f12-ad10-62a2c586ffdf f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] rbd image 9cc0604e-1ff7-4781-8383-c780b6f598c5_disk.config.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:52:44 np0005548731 nova_compute[232433]: 2025-12-06 07:52:44.917 232437 DEBUG oslo_concurrency.processutils [None req-f937f3d4-4cac-4f12-ad10-62a2c586ffdf f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/9cc0604e-1ff7-4781-8383-c780b6f598c5/disk.config.rescue 9cc0604e-1ff7-4781-8383-c780b6f598c5_disk.config.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:52:44 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:52:44 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:52:44 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:52:44.946 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:52:45 np0005548731 nova_compute[232433]: 2025-12-06 07:52:45.091 232437 DEBUG oslo_concurrency.processutils [None req-f937f3d4-4cac-4f12-ad10-62a2c586ffdf f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/9cc0604e-1ff7-4781-8383-c780b6f598c5/disk.config.rescue 9cc0604e-1ff7-4781-8383-c780b6f598c5_disk.config.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.175s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:52:45 np0005548731 nova_compute[232433]: 2025-12-06 07:52:45.092 232437 INFO nova.virt.libvirt.driver [None req-f937f3d4-4cac-4f12-ad10-62a2c586ffdf f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] [instance: 9cc0604e-1ff7-4781-8383-c780b6f598c5] Deleting local config drive /var/lib/nova/instances/9cc0604e-1ff7-4781-8383-c780b6f598c5/disk.config.rescue because it was imported into RBD.#033[00m
Dec  6 02:52:45 np0005548731 kernel: tap8e1136a8-e1: entered promiscuous mode
Dec  6 02:52:45 np0005548731 NetworkManager[49182]: <info>  [1765007565.1489] manager: (tap8e1136a8-e1): new Tun device (/org/freedesktop/NetworkManager/Devices/382)
Dec  6 02:52:45 np0005548731 ovn_controller[133927]: 2025-12-06T07:52:45Z|00836|binding|INFO|Claiming lport 8e1136a8-e1c4-47b4-ada8-22860a8df286 for this chassis.
Dec  6 02:52:45 np0005548731 nova_compute[232433]: 2025-12-06 07:52:45.186 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:52:45 np0005548731 ovn_controller[133927]: 2025-12-06T07:52:45Z|00837|binding|INFO|8e1136a8-e1c4-47b4-ada8-22860a8df286: Claiming fa:16:3e:3b:5f:69 10.100.0.12
Dec  6 02:52:45 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:52:45.194 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3b:5f:69 10.100.0.12'], port_security=['fa:16:3e:3b:5f:69 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '9cc0604e-1ff7-4781-8383-c780b6f598c5', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3d151181-0dfe-43ab-b47e-15b53add33a6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4842ecff6dce4ccc981a6b65a14ea406', 'neutron:revision_number': '5', 'neutron:security_group_ids': 'c2f9858c-886d-4e74-a599-a5dabbf1ba8e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.221'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=328c1a1e-05c1-492e-8ea7-52ea97c29304, chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], logical_port=8e1136a8-e1c4-47b4-ada8-22860a8df286) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 02:52:45 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:52:45.195 143965 INFO neutron.agent.ovn.metadata.agent [-] Port 8e1136a8-e1c4-47b4-ada8-22860a8df286 in datapath 3d151181-0dfe-43ab-b47e-15b53add33a6 bound to our chassis#033[00m
Dec  6 02:52:45 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:52:45.197 143965 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 3d151181-0dfe-43ab-b47e-15b53add33a6#033[00m
Dec  6 02:52:45 np0005548731 ovn_controller[133927]: 2025-12-06T07:52:45Z|00838|binding|INFO|Setting lport 8e1136a8-e1c4-47b4-ada8-22860a8df286 ovn-installed in OVS
Dec  6 02:52:45 np0005548731 ovn_controller[133927]: 2025-12-06T07:52:45Z|00839|binding|INFO|Setting lport 8e1136a8-e1c4-47b4-ada8-22860a8df286 up in Southbound
Dec  6 02:52:45 np0005548731 nova_compute[232433]: 2025-12-06 07:52:45.206 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:52:45 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:52:45.209 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[cc7a4a46-1f0e-49d1-af4e-711d82d61ccb]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:52:45 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:52:45.210 143965 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap3d151181-01 in ovnmeta-3d151181-0dfe-43ab-b47e-15b53add33a6 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Dec  6 02:52:45 np0005548731 systemd-udevd[311465]: Network interface NamePolicy= disabled on kernel command line.
Dec  6 02:52:45 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:52:45.211 236668 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap3d151181-00 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Dec  6 02:52:45 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:52:45.211 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[0655aa25-efa4-4b26-81d6-8423b04544a3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:52:45 np0005548731 systemd-machined[195355]: New machine qemu-85-instance-000000a9.
Dec  6 02:52:45 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:52:45.212 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[55893c88-0c00-43e1-a781-151dfcc0951e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:52:45 np0005548731 NetworkManager[49182]: <info>  [1765007565.2227] device (tap8e1136a8-e1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec  6 02:52:45 np0005548731 NetworkManager[49182]: <info>  [1765007565.2234] device (tap8e1136a8-e1): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec  6 02:52:45 np0005548731 systemd[1]: Started Virtual Machine qemu-85-instance-000000a9.
Dec  6 02:52:45 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:52:45.223 144079 DEBUG oslo.privsep.daemon [-] privsep: reply[ae22e399-5dbb-46b3-b8c9-d122539c50c0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:52:45 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:52:45.247 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[dd72e7ff-a5a0-4c9e-b2f0-54be5036899f]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:52:45 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:52:45.277 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[cb8004a6-8fe9-4a9e-b721-36e5e440c4b8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:52:45 np0005548731 NetworkManager[49182]: <info>  [1765007565.2831] manager: (tap3d151181-00): new Veth device (/org/freedesktop/NetworkManager/Devices/383)
Dec  6 02:52:45 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:52:45.284 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[3a264093-1b38-440e-bf5b-853187eb9a44]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:52:45 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:52:45.313 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[b8aa04f2-caee-4709-b603-2b9c030642af]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:52:45 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:52:45.316 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[d50a7fd5-ecc5-40cb-9901-181fb76a1924]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:52:45 np0005548731 NetworkManager[49182]: <info>  [1765007565.3331] device (tap3d151181-00): carrier: link connected
Dec  6 02:52:45 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:52:45.336 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[4d521db5-4047-45a4-9d6c-0de6f98947ee]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:52:45 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:52:45.350 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[1adc288d-b0dd-47d6-9d2f-6d3fa9cbe0c4]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3d151181-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b4:13:0b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 255], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 782734, 'reachable_time': 34113, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 311497, 'error': None, 'target': 'ovnmeta-3d151181-0dfe-43ab-b47e-15b53add33a6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:52:45 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:52:45.362 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[ff63e4be-e326-4e4e-b74d-114a9df95ecf]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feb4:130b'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 782734, 'tstamp': 782734}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 311498, 'error': None, 'target': 'ovnmeta-3d151181-0dfe-43ab-b47e-15b53add33a6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:52:45 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:52:45.374 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[aa4f8cc6-413e-4ad3-9e09-ee523e54452a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3d151181-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b4:13:0b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 255], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 782734, 'reachable_time': 34113, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 152, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 152, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 311499, 'error': None, 'target': 'ovnmeta-3d151181-0dfe-43ab-b47e-15b53add33a6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:52:45 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:52:45.396 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[3cbf07ae-71fc-4918-a11d-bd782ee8b462]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:52:45 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:52:45.440 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[77c68928-ef95-4c3a-8ca3-547a14a192f7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:52:45 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:52:45.441 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3d151181-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:52:45 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:52:45.441 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  6 02:52:45 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:52:45.441 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3d151181-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:52:45 np0005548731 NetworkManager[49182]: <info>  [1765007565.4442] manager: (tap3d151181-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/384)
Dec  6 02:52:45 np0005548731 nova_compute[232433]: 2025-12-06 07:52:45.443 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:52:45 np0005548731 kernel: tap3d151181-00: entered promiscuous mode
Dec  6 02:52:45 np0005548731 nova_compute[232433]: 2025-12-06 07:52:45.445 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:52:45 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:52:45.446 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap3d151181-00, col_values=(('external_ids', {'iface-id': '7c0488e1-35c2-4c92-b43c-271fbeecd9ea'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:52:45 np0005548731 nova_compute[232433]: 2025-12-06 07:52:45.447 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:52:45 np0005548731 ovn_controller[133927]: 2025-12-06T07:52:45Z|00840|binding|INFO|Releasing lport 7c0488e1-35c2-4c92-b43c-271fbeecd9ea from this chassis (sb_readonly=0)
Dec  6 02:52:45 np0005548731 nova_compute[232433]: 2025-12-06 07:52:45.461 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:52:45 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:52:45.461 143965 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/3d151181-0dfe-43ab-b47e-15b53add33a6.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/3d151181-0dfe-43ab-b47e-15b53add33a6.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Dec  6 02:52:45 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:52:45.462 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[7de0d0f6-c752-42f0-a6eb-e6e7d5a87590]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:52:45 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:52:45.463 143965 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec  6 02:52:45 np0005548731 ovn_metadata_agent[143960]: global
Dec  6 02:52:45 np0005548731 ovn_metadata_agent[143960]:    log         /dev/log local0 debug
Dec  6 02:52:45 np0005548731 ovn_metadata_agent[143960]:    log-tag     haproxy-metadata-proxy-3d151181-0dfe-43ab-b47e-15b53add33a6
Dec  6 02:52:45 np0005548731 ovn_metadata_agent[143960]:    user        root
Dec  6 02:52:45 np0005548731 ovn_metadata_agent[143960]:    group       root
Dec  6 02:52:45 np0005548731 ovn_metadata_agent[143960]:    maxconn     1024
Dec  6 02:52:45 np0005548731 ovn_metadata_agent[143960]:    pidfile     /var/lib/neutron/external/pids/3d151181-0dfe-43ab-b47e-15b53add33a6.pid.haproxy
Dec  6 02:52:45 np0005548731 ovn_metadata_agent[143960]:    daemon
Dec  6 02:52:45 np0005548731 ovn_metadata_agent[143960]: 
Dec  6 02:52:45 np0005548731 ovn_metadata_agent[143960]: defaults
Dec  6 02:52:45 np0005548731 ovn_metadata_agent[143960]:    log global
Dec  6 02:52:45 np0005548731 ovn_metadata_agent[143960]:    mode http
Dec  6 02:52:45 np0005548731 ovn_metadata_agent[143960]:    option httplog
Dec  6 02:52:45 np0005548731 ovn_metadata_agent[143960]:    option dontlognull
Dec  6 02:52:45 np0005548731 ovn_metadata_agent[143960]:    option http-server-close
Dec  6 02:52:45 np0005548731 ovn_metadata_agent[143960]:    option forwardfor
Dec  6 02:52:45 np0005548731 ovn_metadata_agent[143960]:    retries                 3
Dec  6 02:52:45 np0005548731 ovn_metadata_agent[143960]:    timeout http-request    30s
Dec  6 02:52:45 np0005548731 ovn_metadata_agent[143960]:    timeout connect         30s
Dec  6 02:52:45 np0005548731 ovn_metadata_agent[143960]:    timeout client          32s
Dec  6 02:52:45 np0005548731 ovn_metadata_agent[143960]:    timeout server          32s
Dec  6 02:52:45 np0005548731 ovn_metadata_agent[143960]:    timeout http-keep-alive 30s
Dec  6 02:52:45 np0005548731 ovn_metadata_agent[143960]: 
Dec  6 02:52:45 np0005548731 ovn_metadata_agent[143960]: 
Dec  6 02:52:45 np0005548731 ovn_metadata_agent[143960]: listen listener
Dec  6 02:52:45 np0005548731 ovn_metadata_agent[143960]:    bind 169.254.169.254:80
Dec  6 02:52:45 np0005548731 ovn_metadata_agent[143960]:    server metadata /var/lib/neutron/metadata_proxy
Dec  6 02:52:45 np0005548731 ovn_metadata_agent[143960]:    http-request add-header X-OVN-Network-ID 3d151181-0dfe-43ab-b47e-15b53add33a6
Dec  6 02:52:45 np0005548731 ovn_metadata_agent[143960]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Dec  6 02:52:45 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:52:45.463 143965 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-3d151181-0dfe-43ab-b47e-15b53add33a6', 'env', 'PROCESS_TAG=haproxy-3d151181-0dfe-43ab-b47e-15b53add33a6', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/3d151181-0dfe-43ab-b47e-15b53add33a6.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Dec  6 02:52:45 np0005548731 podman[311567]: 2025-12-06 07:52:45.784009284 +0000 UTC m=+0.022850961 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec  6 02:52:46 np0005548731 podman[311567]: 2025-12-06 07:52:46.008182229 +0000 UTC m=+0.247023886 container create 7c990d858385fe666dc9eec562d030a86cffbc2372f6a1ef4eaeb2f1f3af318e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3d151181-0dfe-43ab-b47e-15b53add33a6, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true)
Dec  6 02:52:46 np0005548731 systemd[1]: Started libpod-conmon-7c990d858385fe666dc9eec562d030a86cffbc2372f6a1ef4eaeb2f1f3af318e.scope.
Dec  6 02:52:46 np0005548731 systemd[1]: Started libcrun container.
Dec  6 02:52:46 np0005548731 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/64e221a6cedb0c309f6f3cf5e93836e9579d51ff287742ee0f10c1e09aa80eca/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec  6 02:52:46 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:52:46 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:52:46 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:52:46.077 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:52:46 np0005548731 podman[311567]: 2025-12-06 07:52:46.090854141 +0000 UTC m=+0.329695818 container init 7c990d858385fe666dc9eec562d030a86cffbc2372f6a1ef4eaeb2f1f3af318e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3d151181-0dfe-43ab-b47e-15b53add33a6, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, tcib_managed=true)
Dec  6 02:52:46 np0005548731 podman[311567]: 2025-12-06 07:52:46.097069253 +0000 UTC m=+0.335910940 container start 7c990d858385fe666dc9eec562d030a86cffbc2372f6a1ef4eaeb2f1f3af318e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3d151181-0dfe-43ab-b47e-15b53add33a6, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Dec  6 02:52:46 np0005548731 neutron-haproxy-ovnmeta-3d151181-0dfe-43ab-b47e-15b53add33a6[311602]: [NOTICE]   (311609) : New worker (311612) forked
Dec  6 02:52:46 np0005548731 neutron-haproxy-ovnmeta-3d151181-0dfe-43ab-b47e-15b53add33a6[311602]: [NOTICE]   (311609) : Loading success.
Dec  6 02:52:46 np0005548731 nova_compute[232433]: 2025-12-06 07:52:46.153 232437 DEBUG nova.virt.libvirt.host [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Removed pending event for 9cc0604e-1ff7-4781-8383-c780b6f598c5 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Dec  6 02:52:46 np0005548731 nova_compute[232433]: 2025-12-06 07:52:46.154 232437 DEBUG nova.virt.driver [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Emitting event <LifecycleEvent: 1765007566.153117, 9cc0604e-1ff7-4781-8383-c780b6f598c5 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 02:52:46 np0005548731 nova_compute[232433]: 2025-12-06 07:52:46.154 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 9cc0604e-1ff7-4781-8383-c780b6f598c5] VM Resumed (Lifecycle Event)#033[00m
Dec  6 02:52:46 np0005548731 nova_compute[232433]: 2025-12-06 07:52:46.159 232437 DEBUG nova.compute.manager [None req-f937f3d4-4cac-4f12-ad10-62a2c586ffdf f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] [instance: 9cc0604e-1ff7-4781-8383-c780b6f598c5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:52:46 np0005548731 nova_compute[232433]: 2025-12-06 07:52:46.193 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 9cc0604e-1ff7-4781-8383-c780b6f598c5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:52:46 np0005548731 nova_compute[232433]: 2025-12-06 07:52:46.195 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 9cc0604e-1ff7-4781-8383-c780b6f598c5] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rescuing, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  6 02:52:46 np0005548731 nova_compute[232433]: 2025-12-06 07:52:46.225 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 9cc0604e-1ff7-4781-8383-c780b6f598c5] During sync_power_state the instance has a pending task (rescuing). Skip.#033[00m
Dec  6 02:52:46 np0005548731 nova_compute[232433]: 2025-12-06 07:52:46.226 232437 DEBUG nova.virt.driver [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Emitting event <LifecycleEvent: 1765007566.157157, 9cc0604e-1ff7-4781-8383-c780b6f598c5 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 02:52:46 np0005548731 nova_compute[232433]: 2025-12-06 07:52:46.226 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 9cc0604e-1ff7-4781-8383-c780b6f598c5] VM Started (Lifecycle Event)#033[00m
Dec  6 02:52:46 np0005548731 nova_compute[232433]: 2025-12-06 07:52:46.251 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 9cc0604e-1ff7-4781-8383-c780b6f598c5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:52:46 np0005548731 nova_compute[232433]: 2025-12-06 07:52:46.255 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 9cc0604e-1ff7-4781-8383-c780b6f598c5] Synchronizing instance power state after lifecycle event "Started"; current vm_state: rescued, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  6 02:52:46 np0005548731 nova_compute[232433]: 2025-12-06 07:52:46.277 232437 DEBUG nova.compute.manager [req-6079bbd2-ecc5-400b-a72c-71da148b80b0 req-d0225fba-91a9-4c94-8384-05cffee0383e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 9cc0604e-1ff7-4781-8383-c780b6f598c5] Received event network-vif-plugged-8e1136a8-e1c4-47b4-ada8-22860a8df286 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:52:46 np0005548731 nova_compute[232433]: 2025-12-06 07:52:46.277 232437 DEBUG oslo_concurrency.lockutils [req-6079bbd2-ecc5-400b-a72c-71da148b80b0 req-d0225fba-91a9-4c94-8384-05cffee0383e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "9cc0604e-1ff7-4781-8383-c780b6f598c5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:52:46 np0005548731 nova_compute[232433]: 2025-12-06 07:52:46.278 232437 DEBUG oslo_concurrency.lockutils [req-6079bbd2-ecc5-400b-a72c-71da148b80b0 req-d0225fba-91a9-4c94-8384-05cffee0383e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "9cc0604e-1ff7-4781-8383-c780b6f598c5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:52:46 np0005548731 nova_compute[232433]: 2025-12-06 07:52:46.278 232437 DEBUG oslo_concurrency.lockutils [req-6079bbd2-ecc5-400b-a72c-71da148b80b0 req-d0225fba-91a9-4c94-8384-05cffee0383e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "9cc0604e-1ff7-4781-8383-c780b6f598c5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:52:46 np0005548731 nova_compute[232433]: 2025-12-06 07:52:46.278 232437 DEBUG nova.compute.manager [req-6079bbd2-ecc5-400b-a72c-71da148b80b0 req-d0225fba-91a9-4c94-8384-05cffee0383e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 9cc0604e-1ff7-4781-8383-c780b6f598c5] No waiting events found dispatching network-vif-plugged-8e1136a8-e1c4-47b4-ada8-22860a8df286 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 02:52:46 np0005548731 nova_compute[232433]: 2025-12-06 07:52:46.278 232437 WARNING nova.compute.manager [req-6079bbd2-ecc5-400b-a72c-71da148b80b0 req-d0225fba-91a9-4c94-8384-05cffee0383e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 9cc0604e-1ff7-4781-8383-c780b6f598c5] Received unexpected event network-vif-plugged-8e1136a8-e1c4-47b4-ada8-22860a8df286 for instance with vm_state rescued and task_state None.#033[00m
Dec  6 02:52:46 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:52:46 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:52:46 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:52:46.948 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:52:48 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:52:48 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:52:48 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:52:48.080 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:52:48 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e389 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:52:48 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:52:48 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:52:48 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:52:48.950 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:52:49 np0005548731 nova_compute[232433]: 2025-12-06 07:52:49.019 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:52:49 np0005548731 nova_compute[232433]: 2025-12-06 07:52:49.314 232437 DEBUG nova.compute.manager [req-7ed35a67-974f-4216-98a8-96305e4f62a0 req-18fae10d-b92f-48a0-b0e8-e0c24bb7c838 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 9cc0604e-1ff7-4781-8383-c780b6f598c5] Received event network-vif-plugged-8e1136a8-e1c4-47b4-ada8-22860a8df286 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:52:49 np0005548731 nova_compute[232433]: 2025-12-06 07:52:49.316 232437 DEBUG oslo_concurrency.lockutils [req-7ed35a67-974f-4216-98a8-96305e4f62a0 req-18fae10d-b92f-48a0-b0e8-e0c24bb7c838 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "9cc0604e-1ff7-4781-8383-c780b6f598c5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:52:49 np0005548731 nova_compute[232433]: 2025-12-06 07:52:49.316 232437 DEBUG oslo_concurrency.lockutils [req-7ed35a67-974f-4216-98a8-96305e4f62a0 req-18fae10d-b92f-48a0-b0e8-e0c24bb7c838 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "9cc0604e-1ff7-4781-8383-c780b6f598c5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:52:49 np0005548731 nova_compute[232433]: 2025-12-06 07:52:49.317 232437 DEBUG oslo_concurrency.lockutils [req-7ed35a67-974f-4216-98a8-96305e4f62a0 req-18fae10d-b92f-48a0-b0e8-e0c24bb7c838 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "9cc0604e-1ff7-4781-8383-c780b6f598c5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:52:49 np0005548731 nova_compute[232433]: 2025-12-06 07:52:49.317 232437 DEBUG nova.compute.manager [req-7ed35a67-974f-4216-98a8-96305e4f62a0 req-18fae10d-b92f-48a0-b0e8-e0c24bb7c838 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 9cc0604e-1ff7-4781-8383-c780b6f598c5] No waiting events found dispatching network-vif-plugged-8e1136a8-e1c4-47b4-ada8-22860a8df286 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 02:52:49 np0005548731 nova_compute[232433]: 2025-12-06 07:52:49.318 232437 WARNING nova.compute.manager [req-7ed35a67-974f-4216-98a8-96305e4f62a0 req-18fae10d-b92f-48a0-b0e8-e0c24bb7c838 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 9cc0604e-1ff7-4781-8383-c780b6f598c5] Received unexpected event network-vif-plugged-8e1136a8-e1c4-47b4-ada8-22860a8df286 for instance with vm_state rescued and task_state None.#033[00m
Dec  6 02:52:49 np0005548731 nova_compute[232433]: 2025-12-06 07:52:49.368 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:52:50 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:52:50 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:52:50 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:52:50.086 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:52:50 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:52:50 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:52:50 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:52:50.952 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:52:51 np0005548731 nova_compute[232433]: 2025-12-06 07:52:51.434 232437 INFO nova.compute.manager [None req-6689d37c-0af0-4a2f-9bb4-505231b6ee4c f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] [instance: 9cc0604e-1ff7-4781-8383-c780b6f598c5] Unrescuing#033[00m
Dec  6 02:52:51 np0005548731 nova_compute[232433]: 2025-12-06 07:52:51.435 232437 DEBUG oslo_concurrency.lockutils [None req-6689d37c-0af0-4a2f-9bb4-505231b6ee4c f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Acquiring lock "refresh_cache-9cc0604e-1ff7-4781-8383-c780b6f598c5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 02:52:51 np0005548731 nova_compute[232433]: 2025-12-06 07:52:51.435 232437 DEBUG oslo_concurrency.lockutils [None req-6689d37c-0af0-4a2f-9bb4-505231b6ee4c f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Acquired lock "refresh_cache-9cc0604e-1ff7-4781-8383-c780b6f598c5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 02:52:51 np0005548731 nova_compute[232433]: 2025-12-06 07:52:51.435 232437 DEBUG nova.network.neutron [None req-6689d37c-0af0-4a2f-9bb4-505231b6ee4c f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] [instance: 9cc0604e-1ff7-4781-8383-c780b6f598c5] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec  6 02:52:52 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:52:52 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:52:52 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:52:52.090 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:52:52 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:52:52 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:52:52 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:52:52.955 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:52:53 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e389 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:52:54 np0005548731 nova_compute[232433]: 2025-12-06 07:52:54.069 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:52:54 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:52:54 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:52:54 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:52:54.092 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:52:54 np0005548731 nova_compute[232433]: 2025-12-06 07:52:54.370 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:52:54 np0005548731 podman[311626]: 2025-12-06 07:52:54.88900723 +0000 UTC m=+0.049888781 container health_status 26c2ce9bdb83c6db5ccad7f5b2a7cb4e9354ab0235ad2f942ea42c8feda2142b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, managed_by=edpm_ansible, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3)
Dec  6 02:52:54 np0005548731 podman[311628]: 2025-12-06 07:52:54.910716121 +0000 UTC m=+0.065049802 container health_status ae4fd08cdec31d6ae2a6b91ffff7deb6ca5a8375cb52ee4b685cc6f25796824e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, org.label-schema.license=GPLv2)
Dec  6 02:52:54 np0005548731 podman[311627]: 2025-12-06 07:52:54.92784162 +0000 UTC m=+0.086788435 container health_status a118c3becf56cfbcae208065771ebf10a65162bb7b96389971293e534823fb78 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS)
Dec  6 02:52:54 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:52:54 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:52:54 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:52:54.956 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:52:56 np0005548731 nova_compute[232433]: 2025-12-06 07:52:56.016 232437 DEBUG nova.network.neutron [None req-6689d37c-0af0-4a2f-9bb4-505231b6ee4c f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] [instance: 9cc0604e-1ff7-4781-8383-c780b6f598c5] Updating instance_info_cache with network_info: [{"id": "8e1136a8-e1c4-47b4-ada8-22860a8df286", "address": "fa:16:3e:3b:5f:69", "network": {"id": "3d151181-0dfe-43ab-b47e-15b53add33a6", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-534312753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.221", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4842ecff6dce4ccc981a6b65a14ea406", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8e1136a8-e1", "ovs_interfaceid": "8e1136a8-e1c4-47b4-ada8-22860a8df286", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 02:52:56 np0005548731 nova_compute[232433]: 2025-12-06 07:52:56.033 232437 DEBUG oslo_concurrency.lockutils [None req-6689d37c-0af0-4a2f-9bb4-505231b6ee4c f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Releasing lock "refresh_cache-9cc0604e-1ff7-4781-8383-c780b6f598c5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 02:52:56 np0005548731 nova_compute[232433]: 2025-12-06 07:52:56.033 232437 DEBUG nova.objects.instance [None req-6689d37c-0af0-4a2f-9bb4-505231b6ee4c f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Lazy-loading 'flavor' on Instance uuid 9cc0604e-1ff7-4781-8383-c780b6f598c5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:52:56 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:52:56 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:52:56 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:52:56.095 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:52:56 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:52:56 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:52:56 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:52:56.958 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:52:57 np0005548731 kernel: tap8e1136a8-e1 (unregistering): left promiscuous mode
Dec  6 02:52:57 np0005548731 NetworkManager[49182]: <info>  [1765007577.1274] device (tap8e1136a8-e1): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec  6 02:52:57 np0005548731 nova_compute[232433]: 2025-12-06 07:52:57.142 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:52:57 np0005548731 ovn_controller[133927]: 2025-12-06T07:52:57Z|00841|binding|INFO|Releasing lport 8e1136a8-e1c4-47b4-ada8-22860a8df286 from this chassis (sb_readonly=0)
Dec  6 02:52:57 np0005548731 ovn_controller[133927]: 2025-12-06T07:52:57Z|00842|binding|INFO|Setting lport 8e1136a8-e1c4-47b4-ada8-22860a8df286 down in Southbound
Dec  6 02:52:57 np0005548731 ovn_controller[133927]: 2025-12-06T07:52:57Z|00843|binding|INFO|Removing iface tap8e1136a8-e1 ovn-installed in OVS
Dec  6 02:52:57 np0005548731 nova_compute[232433]: 2025-12-06 07:52:57.145 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:52:57 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:52:57.152 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3b:5f:69 10.100.0.12'], port_security=['fa:16:3e:3b:5f:69 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '9cc0604e-1ff7-4781-8383-c780b6f598c5', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3d151181-0dfe-43ab-b47e-15b53add33a6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4842ecff6dce4ccc981a6b65a14ea406', 'neutron:revision_number': '6', 'neutron:security_group_ids': 'c2f9858c-886d-4e74-a599-a5dabbf1ba8e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.221', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=328c1a1e-05c1-492e-8ea7-52ea97c29304, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], logical_port=8e1136a8-e1c4-47b4-ada8-22860a8df286) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 02:52:57 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:52:57.154 143965 INFO neutron.agent.ovn.metadata.agent [-] Port 8e1136a8-e1c4-47b4-ada8-22860a8df286 in datapath 3d151181-0dfe-43ab-b47e-15b53add33a6 unbound from our chassis#033[00m
Dec  6 02:52:57 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:52:57.158 143965 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 3d151181-0dfe-43ab-b47e-15b53add33a6, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec  6 02:52:57 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:52:57.160 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[4221e04a-f835-4215-8218-59a70eb37f2d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:52:57 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:52:57.161 143965 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-3d151181-0dfe-43ab-b47e-15b53add33a6 namespace which is not needed anymore#033[00m
Dec  6 02:52:57 np0005548731 nova_compute[232433]: 2025-12-06 07:52:57.167 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:52:57 np0005548731 systemd[1]: machine-qemu\x2d85\x2dinstance\x2d000000a9.scope: Deactivated successfully.
Dec  6 02:52:57 np0005548731 systemd[1]: machine-qemu\x2d85\x2dinstance\x2d000000a9.scope: Consumed 10.555s CPU time.
Dec  6 02:52:57 np0005548731 systemd-machined[195355]: Machine qemu-85-instance-000000a9 terminated.
Dec  6 02:52:57 np0005548731 nova_compute[232433]: 2025-12-06 07:52:57.271 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:52:57 np0005548731 nova_compute[232433]: 2025-12-06 07:52:57.276 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:52:57 np0005548731 neutron-haproxy-ovnmeta-3d151181-0dfe-43ab-b47e-15b53add33a6[311602]: [NOTICE]   (311609) : haproxy version is 2.8.14-c23fe91
Dec  6 02:52:57 np0005548731 neutron-haproxy-ovnmeta-3d151181-0dfe-43ab-b47e-15b53add33a6[311602]: [NOTICE]   (311609) : path to executable is /usr/sbin/haproxy
Dec  6 02:52:57 np0005548731 neutron-haproxy-ovnmeta-3d151181-0dfe-43ab-b47e-15b53add33a6[311602]: [WARNING]  (311609) : Exiting Master process...
Dec  6 02:52:57 np0005548731 neutron-haproxy-ovnmeta-3d151181-0dfe-43ab-b47e-15b53add33a6[311602]: [WARNING]  (311609) : Exiting Master process...
Dec  6 02:52:57 np0005548731 neutron-haproxy-ovnmeta-3d151181-0dfe-43ab-b47e-15b53add33a6[311602]: [ALERT]    (311609) : Current worker (311612) exited with code 143 (Terminated)
Dec  6 02:52:57 np0005548731 neutron-haproxy-ovnmeta-3d151181-0dfe-43ab-b47e-15b53add33a6[311602]: [WARNING]  (311609) : All workers exited. Exiting... (0)
Dec  6 02:52:57 np0005548731 systemd[1]: libpod-7c990d858385fe666dc9eec562d030a86cffbc2372f6a1ef4eaeb2f1f3af318e.scope: Deactivated successfully.
Dec  6 02:52:57 np0005548731 nova_compute[232433]: 2025-12-06 07:52:57.285 232437 INFO nova.virt.libvirt.driver [-] [instance: 9cc0604e-1ff7-4781-8383-c780b6f598c5] Instance destroyed successfully.#033[00m
Dec  6 02:52:57 np0005548731 nova_compute[232433]: 2025-12-06 07:52:57.285 232437 DEBUG nova.objects.instance [None req-6689d37c-0af0-4a2f-9bb4-505231b6ee4c f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Lazy-loading 'numa_topology' on Instance uuid 9cc0604e-1ff7-4781-8383-c780b6f598c5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:52:57 np0005548731 podman[311717]: 2025-12-06 07:52:57.28821818 +0000 UTC m=+0.044492320 container died 7c990d858385fe666dc9eec562d030a86cffbc2372f6a1ef4eaeb2f1f3af318e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3d151181-0dfe-43ab-b47e-15b53add33a6, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2)
Dec  6 02:52:57 np0005548731 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-7c990d858385fe666dc9eec562d030a86cffbc2372f6a1ef4eaeb2f1f3af318e-userdata-shm.mount: Deactivated successfully.
Dec  6 02:52:57 np0005548731 systemd[1]: var-lib-containers-storage-overlay-64e221a6cedb0c309f6f3cf5e93836e9579d51ff287742ee0f10c1e09aa80eca-merged.mount: Deactivated successfully.
Dec  6 02:52:57 np0005548731 podman[311717]: 2025-12-06 07:52:57.331641802 +0000 UTC m=+0.087915942 container cleanup 7c990d858385fe666dc9eec562d030a86cffbc2372f6a1ef4eaeb2f1f3af318e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3d151181-0dfe-43ab-b47e-15b53add33a6, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS)
Dec  6 02:52:57 np0005548731 systemd[1]: libpod-conmon-7c990d858385fe666dc9eec562d030a86cffbc2372f6a1ef4eaeb2f1f3af318e.scope: Deactivated successfully.
Dec  6 02:52:57 np0005548731 kernel: tap8e1136a8-e1: entered promiscuous mode
Dec  6 02:52:57 np0005548731 systemd-udevd[311696]: Network interface NamePolicy= disabled on kernel command line.
Dec  6 02:52:57 np0005548731 NetworkManager[49182]: <info>  [1765007577.3695] manager: (tap8e1136a8-e1): new Tun device (/org/freedesktop/NetworkManager/Devices/385)
Dec  6 02:52:57 np0005548731 nova_compute[232433]: 2025-12-06 07:52:57.370 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:52:57 np0005548731 ovn_controller[133927]: 2025-12-06T07:52:57Z|00844|binding|INFO|Claiming lport 8e1136a8-e1c4-47b4-ada8-22860a8df286 for this chassis.
Dec  6 02:52:57 np0005548731 ovn_controller[133927]: 2025-12-06T07:52:57Z|00845|binding|INFO|8e1136a8-e1c4-47b4-ada8-22860a8df286: Claiming fa:16:3e:3b:5f:69 10.100.0.12
Dec  6 02:52:57 np0005548731 NetworkManager[49182]: <info>  [1765007577.3805] device (tap8e1136a8-e1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec  6 02:52:57 np0005548731 NetworkManager[49182]: <info>  [1765007577.3823] device (tap8e1136a8-e1): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec  6 02:52:57 np0005548731 ovn_controller[133927]: 2025-12-06T07:52:57Z|00846|binding|INFO|Setting lport 8e1136a8-e1c4-47b4-ada8-22860a8df286 ovn-installed in OVS
Dec  6 02:52:57 np0005548731 nova_compute[232433]: 2025-12-06 07:52:57.387 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:52:57 np0005548731 nova_compute[232433]: 2025-12-06 07:52:57.390 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:52:57 np0005548731 ovn_controller[133927]: 2025-12-06T07:52:57Z|00847|binding|INFO|Setting lport 8e1136a8-e1c4-47b4-ada8-22860a8df286 up in Southbound
Dec  6 02:52:57 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:52:57.392 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3b:5f:69 10.100.0.12'], port_security=['fa:16:3e:3b:5f:69 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '9cc0604e-1ff7-4781-8383-c780b6f598c5', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3d151181-0dfe-43ab-b47e-15b53add33a6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4842ecff6dce4ccc981a6b65a14ea406', 'neutron:revision_number': '6', 'neutron:security_group_ids': 'c2f9858c-886d-4e74-a599-a5dabbf1ba8e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.221', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=328c1a1e-05c1-492e-8ea7-52ea97c29304, chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], logical_port=8e1136a8-e1c4-47b4-ada8-22860a8df286) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 02:52:57 np0005548731 podman[311759]: 2025-12-06 07:52:57.394604423 +0000 UTC m=+0.043227789 container remove 7c990d858385fe666dc9eec562d030a86cffbc2372f6a1ef4eaeb2f1f3af318e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3d151181-0dfe-43ab-b47e-15b53add33a6, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Dec  6 02:52:57 np0005548731 systemd-machined[195355]: New machine qemu-86-instance-000000a9.
Dec  6 02:52:57 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:52:57.399 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[c298b336-1d5b-40f7-824c-318512e9958a]: (4, ('Sat Dec  6 07:52:57 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-3d151181-0dfe-43ab-b47e-15b53add33a6 (7c990d858385fe666dc9eec562d030a86cffbc2372f6a1ef4eaeb2f1f3af318e)\n7c990d858385fe666dc9eec562d030a86cffbc2372f6a1ef4eaeb2f1f3af318e\nSat Dec  6 07:52:57 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-3d151181-0dfe-43ab-b47e-15b53add33a6 (7c990d858385fe666dc9eec562d030a86cffbc2372f6a1ef4eaeb2f1f3af318e)\n7c990d858385fe666dc9eec562d030a86cffbc2372f6a1ef4eaeb2f1f3af318e\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:52:57 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:52:57.401 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[c78a0176-baa3-435f-81d9-080034eade21]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:52:57 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:52:57.401 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3d151181-00, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:52:57 np0005548731 nova_compute[232433]: 2025-12-06 07:52:57.403 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:52:57 np0005548731 kernel: tap3d151181-00: left promiscuous mode
Dec  6 02:52:57 np0005548731 nova_compute[232433]: 2025-12-06 07:52:57.404 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:52:57 np0005548731 systemd[1]: Started Virtual Machine qemu-86-instance-000000a9.
Dec  6 02:52:57 np0005548731 nova_compute[232433]: 2025-12-06 07:52:57.418 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:52:57 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:52:57.515 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[1626c58d-9f78-4ac1-9744-b5cbca8d90c7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:52:57 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:52:57.530 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[0a201ce1-1eb5-4005-9e87-f77f9bd94129]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:52:57 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:52:57.531 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[a2d7b7c6-534e-4035-8a8c-8b2632080c18]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:52:57 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:52:57.546 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[0804b522-64e0-4bb6-8203-fe2d96dde9ee]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 782727, 'reachable_time': 44431, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 311790, 'error': None, 'target': 'ovnmeta-3d151181-0dfe-43ab-b47e-15b53add33a6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:52:57 np0005548731 systemd[1]: run-netns-ovnmeta\x2d3d151181\x2d0dfe\x2d43ab\x2db47e\x2d15b53add33a6.mount: Deactivated successfully.
Dec  6 02:52:57 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:52:57.548 144079 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-3d151181-0dfe-43ab-b47e-15b53add33a6 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Dec  6 02:52:57 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:52:57.548 144079 DEBUG oslo.privsep.daemon [-] privsep: reply[265ca1b9-1e53-46b5-ba1d-c1de87d539c7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:52:57 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:52:57.548 143965 INFO neutron.agent.ovn.metadata.agent [-] Port 8e1136a8-e1c4-47b4-ada8-22860a8df286 in datapath 3d151181-0dfe-43ab-b47e-15b53add33a6 unbound from our chassis#033[00m
Dec  6 02:52:57 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:52:57.550 143965 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 3d151181-0dfe-43ab-b47e-15b53add33a6#033[00m
Dec  6 02:52:57 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:52:57.561 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[49baa34b-b1ba-4384-a032-800d512c44fd]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:52:57 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:52:57.562 143965 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap3d151181-01 in ovnmeta-3d151181-0dfe-43ab-b47e-15b53add33a6 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Dec  6 02:52:57 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:52:57.564 236668 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap3d151181-00 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Dec  6 02:52:57 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:52:57.564 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[3b023930-bf23-449e-8ea5-8c3537ea81e2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:52:57 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:52:57.565 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[c75d07f6-ec8d-43f0-adcd-8b4d50abd917]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:52:57 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:52:57.577 144079 DEBUG oslo.privsep.daemon [-] privsep: reply[9a225c50-822c-4124-ab60-1620dab0548a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:52:57 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:52:57.594 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[739379a6-f43c-425f-80c1-193c2a41404a]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:52:57 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:52:57.621 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[4b78b523-4f0d-4e9a-b94c-e01753d6877b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:52:57 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:52:57.625 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[2ce1217b-a43f-4603-b436-d8f843a5b2a8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:52:57 np0005548731 NetworkManager[49182]: <info>  [1765007577.6278] manager: (tap3d151181-00): new Veth device (/org/freedesktop/NetworkManager/Devices/386)
Dec  6 02:52:57 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:52:57.660 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[51bbef30-93d9-43ba-8593-3d96db567ca4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:52:57 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:52:57.664 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[2bdce65b-3b6c-4acc-b24a-15fa5e0e9674]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:52:57 np0005548731 NetworkManager[49182]: <info>  [1765007577.6872] device (tap3d151181-00): carrier: link connected
Dec  6 02:52:57 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:52:57.695 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[52754547-7c91-41d0-933c-4c10a432ebd8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:52:57 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:52:57.715 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[8b885a96-0f7e-4ab3-b7bb-8c7e1d2d6b09]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3d151181-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b4:13:0b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 258], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 783969, 'reachable_time': 26642, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 311814, 'error': None, 'target': 'ovnmeta-3d151181-0dfe-43ab-b47e-15b53add33a6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:52:57 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:52:57.733 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[f58f855d-5804-4657-a1b8-14551ea640c2]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feb4:130b'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 783969, 'tstamp': 783969}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 311815, 'error': None, 'target': 'ovnmeta-3d151181-0dfe-43ab-b47e-15b53add33a6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:52:57 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:52:57.751 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[2dd2dde7-2249-4640-aea0-7ffc06c1872f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3d151181-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b4:13:0b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 258], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 783969, 'reachable_time': 26642, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 311816, 'error': None, 'target': 'ovnmeta-3d151181-0dfe-43ab-b47e-15b53add33a6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:52:57 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:52:57.783 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[422d28f6-ed6e-4ed3-b444-3d0cf625ed0b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:52:57 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:52:57.839 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[cf3b45b6-0e30-49f6-b43b-c404ac458003]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:52:57 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:52:57.840 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3d151181-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:52:57 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:52:57.840 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  6 02:52:57 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:52:57.841 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3d151181-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:52:57 np0005548731 kernel: tap3d151181-00: entered promiscuous mode
Dec  6 02:52:57 np0005548731 nova_compute[232433]: 2025-12-06 07:52:57.842 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:52:57 np0005548731 NetworkManager[49182]: <info>  [1765007577.8434] manager: (tap3d151181-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/387)
Dec  6 02:52:57 np0005548731 nova_compute[232433]: 2025-12-06 07:52:57.844 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:52:57 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:52:57.845 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap3d151181-00, col_values=(('external_ids', {'iface-id': '7c0488e1-35c2-4c92-b43c-271fbeecd9ea'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:52:57 np0005548731 nova_compute[232433]: 2025-12-06 07:52:57.846 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:52:57 np0005548731 ovn_controller[133927]: 2025-12-06T07:52:57Z|00848|binding|INFO|Releasing lport 7c0488e1-35c2-4c92-b43c-271fbeecd9ea from this chassis (sb_readonly=0)
Dec  6 02:52:57 np0005548731 nova_compute[232433]: 2025-12-06 07:52:57.860 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:52:57 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:52:57.861 143965 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/3d151181-0dfe-43ab-b47e-15b53add33a6.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/3d151181-0dfe-43ab-b47e-15b53add33a6.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Dec  6 02:52:57 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:52:57.861 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[0f3cfc4f-9f58-48c9-a195-4894291a1935]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:52:57 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:52:57.862 143965 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec  6 02:52:57 np0005548731 ovn_metadata_agent[143960]: global
Dec  6 02:52:57 np0005548731 ovn_metadata_agent[143960]:    log         /dev/log local0 debug
Dec  6 02:52:57 np0005548731 ovn_metadata_agent[143960]:    log-tag     haproxy-metadata-proxy-3d151181-0dfe-43ab-b47e-15b53add33a6
Dec  6 02:52:57 np0005548731 ovn_metadata_agent[143960]:    user        root
Dec  6 02:52:57 np0005548731 ovn_metadata_agent[143960]:    group       root
Dec  6 02:52:57 np0005548731 ovn_metadata_agent[143960]:    maxconn     1024
Dec  6 02:52:57 np0005548731 ovn_metadata_agent[143960]:    pidfile     /var/lib/neutron/external/pids/3d151181-0dfe-43ab-b47e-15b53add33a6.pid.haproxy
Dec  6 02:52:57 np0005548731 ovn_metadata_agent[143960]:    daemon
Dec  6 02:52:57 np0005548731 ovn_metadata_agent[143960]: 
Dec  6 02:52:57 np0005548731 ovn_metadata_agent[143960]: defaults
Dec  6 02:52:57 np0005548731 ovn_metadata_agent[143960]:    log global
Dec  6 02:52:57 np0005548731 ovn_metadata_agent[143960]:    mode http
Dec  6 02:52:57 np0005548731 ovn_metadata_agent[143960]:    option httplog
Dec  6 02:52:57 np0005548731 ovn_metadata_agent[143960]:    option dontlognull
Dec  6 02:52:57 np0005548731 ovn_metadata_agent[143960]:    option http-server-close
Dec  6 02:52:57 np0005548731 ovn_metadata_agent[143960]:    option forwardfor
Dec  6 02:52:57 np0005548731 ovn_metadata_agent[143960]:    retries                 3
Dec  6 02:52:57 np0005548731 ovn_metadata_agent[143960]:    timeout http-request    30s
Dec  6 02:52:57 np0005548731 ovn_metadata_agent[143960]:    timeout connect         30s
Dec  6 02:52:57 np0005548731 ovn_metadata_agent[143960]:    timeout client          32s
Dec  6 02:52:57 np0005548731 ovn_metadata_agent[143960]:    timeout server          32s
Dec  6 02:52:57 np0005548731 ovn_metadata_agent[143960]:    timeout http-keep-alive 30s
Dec  6 02:52:57 np0005548731 ovn_metadata_agent[143960]: 
Dec  6 02:52:57 np0005548731 ovn_metadata_agent[143960]: 
Dec  6 02:52:57 np0005548731 ovn_metadata_agent[143960]: listen listener
Dec  6 02:52:57 np0005548731 ovn_metadata_agent[143960]:    bind 169.254.169.254:80
Dec  6 02:52:57 np0005548731 ovn_metadata_agent[143960]:    server metadata /var/lib/neutron/metadata_proxy
Dec  6 02:52:57 np0005548731 ovn_metadata_agent[143960]:    http-request add-header X-OVN-Network-ID 3d151181-0dfe-43ab-b47e-15b53add33a6
Dec  6 02:52:57 np0005548731 ovn_metadata_agent[143960]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Dec  6 02:52:57 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:52:57.862 143965 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-3d151181-0dfe-43ab-b47e-15b53add33a6', 'env', 'PROCESS_TAG=haproxy-3d151181-0dfe-43ab-b47e-15b53add33a6', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/3d151181-0dfe-43ab-b47e-15b53add33a6.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Dec  6 02:52:58 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:52:58 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:52:58 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:52:58.097 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:52:58 np0005548731 podman[311884]: 2025-12-06 07:52:58.200235354 +0000 UTC m=+0.042977023 container create 8a5985b7a092081b42ecde9b8c836a6352e7ec079ab46a450440e67ff2795767 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3d151181-0dfe-43ab-b47e-15b53add33a6, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  6 02:52:58 np0005548731 systemd[1]: Started libpod-conmon-8a5985b7a092081b42ecde9b8c836a6352e7ec079ab46a450440e67ff2795767.scope.
Dec  6 02:52:58 np0005548731 systemd[1]: Started libcrun container.
Dec  6 02:52:58 np0005548731 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad1b469adf450771c392e2cf91f0ae2f9402e8802f2f6d740f149d82be00314c/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec  6 02:52:58 np0005548731 podman[311884]: 2025-12-06 07:52:58.265017939 +0000 UTC m=+0.107759608 container init 8a5985b7a092081b42ecde9b8c836a6352e7ec079ab46a450440e67ff2795767 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3d151181-0dfe-43ab-b47e-15b53add33a6, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  6 02:52:58 np0005548731 podman[311884]: 2025-12-06 07:52:58.270441052 +0000 UTC m=+0.113182721 container start 8a5985b7a092081b42ecde9b8c836a6352e7ec079ab46a450440e67ff2795767 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3d151181-0dfe-43ab-b47e-15b53add33a6, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec  6 02:52:58 np0005548731 podman[311884]: 2025-12-06 07:52:58.177934858 +0000 UTC m=+0.020676547 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec  6 02:52:58 np0005548731 nova_compute[232433]: 2025-12-06 07:52:58.285 232437 DEBUG nova.compute.manager [req-6681cb7d-eb31-4850-b7f0-12a29f9eb0b0 req-9ac49cd9-d66c-4dc5-a38f-61e92f107cf3 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 9cc0604e-1ff7-4781-8383-c780b6f598c5] Received event network-vif-unplugged-8e1136a8-e1c4-47b4-ada8-22860a8df286 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:52:58 np0005548731 nova_compute[232433]: 2025-12-06 07:52:58.286 232437 DEBUG oslo_concurrency.lockutils [req-6681cb7d-eb31-4850-b7f0-12a29f9eb0b0 req-9ac49cd9-d66c-4dc5-a38f-61e92f107cf3 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "9cc0604e-1ff7-4781-8383-c780b6f598c5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:52:58 np0005548731 nova_compute[232433]: 2025-12-06 07:52:58.286 232437 DEBUG oslo_concurrency.lockutils [req-6681cb7d-eb31-4850-b7f0-12a29f9eb0b0 req-9ac49cd9-d66c-4dc5-a38f-61e92f107cf3 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "9cc0604e-1ff7-4781-8383-c780b6f598c5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:52:58 np0005548731 nova_compute[232433]: 2025-12-06 07:52:58.287 232437 DEBUG oslo_concurrency.lockutils [req-6681cb7d-eb31-4850-b7f0-12a29f9eb0b0 req-9ac49cd9-d66c-4dc5-a38f-61e92f107cf3 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "9cc0604e-1ff7-4781-8383-c780b6f598c5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:52:58 np0005548731 nova_compute[232433]: 2025-12-06 07:52:58.287 232437 DEBUG nova.compute.manager [req-6681cb7d-eb31-4850-b7f0-12a29f9eb0b0 req-9ac49cd9-d66c-4dc5-a38f-61e92f107cf3 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 9cc0604e-1ff7-4781-8383-c780b6f598c5] No waiting events found dispatching network-vif-unplugged-8e1136a8-e1c4-47b4-ada8-22860a8df286 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 02:52:58 np0005548731 nova_compute[232433]: 2025-12-06 07:52:58.287 232437 WARNING nova.compute.manager [req-6681cb7d-eb31-4850-b7f0-12a29f9eb0b0 req-9ac49cd9-d66c-4dc5-a38f-61e92f107cf3 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 9cc0604e-1ff7-4781-8383-c780b6f598c5] Received unexpected event network-vif-unplugged-8e1136a8-e1c4-47b4-ada8-22860a8df286 for instance with vm_state rescued and task_state unrescuing.#033[00m
Dec  6 02:52:58 np0005548731 neutron-haproxy-ovnmeta-3d151181-0dfe-43ab-b47e-15b53add33a6[311919]: [NOTICE]   (311926) : New worker (311928) forked
Dec  6 02:52:58 np0005548731 neutron-haproxy-ovnmeta-3d151181-0dfe-43ab-b47e-15b53add33a6[311919]: [NOTICE]   (311926) : Loading success.
Dec  6 02:52:58 np0005548731 nova_compute[232433]: 2025-12-06 07:52:58.328 232437 DEBUG nova.virt.libvirt.host [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Removed pending event for 9cc0604e-1ff7-4781-8383-c780b6f598c5 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Dec  6 02:52:58 np0005548731 nova_compute[232433]: 2025-12-06 07:52:58.328 232437 DEBUG nova.virt.driver [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Emitting event <LifecycleEvent: 1765007578.3277173, 9cc0604e-1ff7-4781-8383-c780b6f598c5 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 02:52:58 np0005548731 nova_compute[232433]: 2025-12-06 07:52:58.329 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 9cc0604e-1ff7-4781-8383-c780b6f598c5] VM Resumed (Lifecycle Event)#033[00m
Dec  6 02:52:58 np0005548731 nova_compute[232433]: 2025-12-06 07:52:58.351 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 9cc0604e-1ff7-4781-8383-c780b6f598c5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:52:58 np0005548731 nova_compute[232433]: 2025-12-06 07:52:58.354 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 9cc0604e-1ff7-4781-8383-c780b6f598c5] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: rescued, current task_state: unrescuing, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  6 02:52:58 np0005548731 nova_compute[232433]: 2025-12-06 07:52:58.377 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 9cc0604e-1ff7-4781-8383-c780b6f598c5] During sync_power_state the instance has a pending task (unrescuing). Skip.#033[00m
Dec  6 02:52:58 np0005548731 nova_compute[232433]: 2025-12-06 07:52:58.378 232437 DEBUG nova.virt.driver [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Emitting event <LifecycleEvent: 1765007578.3286245, 9cc0604e-1ff7-4781-8383-c780b6f598c5 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 02:52:58 np0005548731 nova_compute[232433]: 2025-12-06 07:52:58.378 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 9cc0604e-1ff7-4781-8383-c780b6f598c5] VM Started (Lifecycle Event)#033[00m
Dec  6 02:52:58 np0005548731 nova_compute[232433]: 2025-12-06 07:52:58.396 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 9cc0604e-1ff7-4781-8383-c780b6f598c5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:52:58 np0005548731 nova_compute[232433]: 2025-12-06 07:52:58.399 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 9cc0604e-1ff7-4781-8383-c780b6f598c5] Synchronizing instance power state after lifecycle event "Started"; current vm_state: rescued, current task_state: unrescuing, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  6 02:52:58 np0005548731 nova_compute[232433]: 2025-12-06 07:52:58.419 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 9cc0604e-1ff7-4781-8383-c780b6f598c5] During sync_power_state the instance has a pending task (unrescuing). Skip.#033[00m
Dec  6 02:52:58 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e389 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:52:58 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:52:58 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:52:58 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:52:58.960 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:52:59 np0005548731 nova_compute[232433]: 2025-12-06 07:52:59.072 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:52:59 np0005548731 nova_compute[232433]: 2025-12-06 07:52:59.371 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:53:00 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:53:00 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:53:00 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:53:00.099 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:53:00 np0005548731 nova_compute[232433]: 2025-12-06 07:53:00.453 232437 DEBUG nova.compute.manager [req-92522a9d-a2dc-43f5-9438-a0436bce1120 req-893fa44c-139a-4d9d-a959-855b63970837 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 9cc0604e-1ff7-4781-8383-c780b6f598c5] Received event network-vif-plugged-8e1136a8-e1c4-47b4-ada8-22860a8df286 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:53:00 np0005548731 nova_compute[232433]: 2025-12-06 07:53:00.454 232437 DEBUG oslo_concurrency.lockutils [req-92522a9d-a2dc-43f5-9438-a0436bce1120 req-893fa44c-139a-4d9d-a959-855b63970837 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "9cc0604e-1ff7-4781-8383-c780b6f598c5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:53:00 np0005548731 nova_compute[232433]: 2025-12-06 07:53:00.454 232437 DEBUG oslo_concurrency.lockutils [req-92522a9d-a2dc-43f5-9438-a0436bce1120 req-893fa44c-139a-4d9d-a959-855b63970837 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "9cc0604e-1ff7-4781-8383-c780b6f598c5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:53:00 np0005548731 nova_compute[232433]: 2025-12-06 07:53:00.455 232437 DEBUG oslo_concurrency.lockutils [req-92522a9d-a2dc-43f5-9438-a0436bce1120 req-893fa44c-139a-4d9d-a959-855b63970837 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "9cc0604e-1ff7-4781-8383-c780b6f598c5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:53:00 np0005548731 nova_compute[232433]: 2025-12-06 07:53:00.455 232437 DEBUG nova.compute.manager [req-92522a9d-a2dc-43f5-9438-a0436bce1120 req-893fa44c-139a-4d9d-a959-855b63970837 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 9cc0604e-1ff7-4781-8383-c780b6f598c5] No waiting events found dispatching network-vif-plugged-8e1136a8-e1c4-47b4-ada8-22860a8df286 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 02:53:00 np0005548731 nova_compute[232433]: 2025-12-06 07:53:00.455 232437 WARNING nova.compute.manager [req-92522a9d-a2dc-43f5-9438-a0436bce1120 req-893fa44c-139a-4d9d-a959-855b63970837 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 9cc0604e-1ff7-4781-8383-c780b6f598c5] Received unexpected event network-vif-plugged-8e1136a8-e1c4-47b4-ada8-22860a8df286 for instance with vm_state rescued and task_state unrescuing.#033[00m
Dec  6 02:53:00 np0005548731 nova_compute[232433]: 2025-12-06 07:53:00.455 232437 DEBUG nova.compute.manager [req-92522a9d-a2dc-43f5-9438-a0436bce1120 req-893fa44c-139a-4d9d-a959-855b63970837 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 9cc0604e-1ff7-4781-8383-c780b6f598c5] Received event network-vif-plugged-8e1136a8-e1c4-47b4-ada8-22860a8df286 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:53:00 np0005548731 nova_compute[232433]: 2025-12-06 07:53:00.456 232437 DEBUG oslo_concurrency.lockutils [req-92522a9d-a2dc-43f5-9438-a0436bce1120 req-893fa44c-139a-4d9d-a959-855b63970837 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "9cc0604e-1ff7-4781-8383-c780b6f598c5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:53:00 np0005548731 nova_compute[232433]: 2025-12-06 07:53:00.456 232437 DEBUG oslo_concurrency.lockutils [req-92522a9d-a2dc-43f5-9438-a0436bce1120 req-893fa44c-139a-4d9d-a959-855b63970837 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "9cc0604e-1ff7-4781-8383-c780b6f598c5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:53:00 np0005548731 nova_compute[232433]: 2025-12-06 07:53:00.456 232437 DEBUG oslo_concurrency.lockutils [req-92522a9d-a2dc-43f5-9438-a0436bce1120 req-893fa44c-139a-4d9d-a959-855b63970837 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "9cc0604e-1ff7-4781-8383-c780b6f598c5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:53:00 np0005548731 nova_compute[232433]: 2025-12-06 07:53:00.456 232437 DEBUG nova.compute.manager [req-92522a9d-a2dc-43f5-9438-a0436bce1120 req-893fa44c-139a-4d9d-a959-855b63970837 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 9cc0604e-1ff7-4781-8383-c780b6f598c5] No waiting events found dispatching network-vif-plugged-8e1136a8-e1c4-47b4-ada8-22860a8df286 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 02:53:00 np0005548731 nova_compute[232433]: 2025-12-06 07:53:00.457 232437 WARNING nova.compute.manager [req-92522a9d-a2dc-43f5-9438-a0436bce1120 req-893fa44c-139a-4d9d-a959-855b63970837 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 9cc0604e-1ff7-4781-8383-c780b6f598c5] Received unexpected event network-vif-plugged-8e1136a8-e1c4-47b4-ada8-22860a8df286 for instance with vm_state rescued and task_state unrescuing.#033[00m
Dec  6 02:53:00 np0005548731 nova_compute[232433]: 2025-12-06 07:53:00.457 232437 DEBUG nova.compute.manager [req-92522a9d-a2dc-43f5-9438-a0436bce1120 req-893fa44c-139a-4d9d-a959-855b63970837 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 9cc0604e-1ff7-4781-8383-c780b6f598c5] Received event network-vif-plugged-8e1136a8-e1c4-47b4-ada8-22860a8df286 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:53:00 np0005548731 nova_compute[232433]: 2025-12-06 07:53:00.457 232437 DEBUG oslo_concurrency.lockutils [req-92522a9d-a2dc-43f5-9438-a0436bce1120 req-893fa44c-139a-4d9d-a959-855b63970837 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "9cc0604e-1ff7-4781-8383-c780b6f598c5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:53:00 np0005548731 nova_compute[232433]: 2025-12-06 07:53:00.458 232437 DEBUG oslo_concurrency.lockutils [req-92522a9d-a2dc-43f5-9438-a0436bce1120 req-893fa44c-139a-4d9d-a959-855b63970837 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "9cc0604e-1ff7-4781-8383-c780b6f598c5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:53:00 np0005548731 nova_compute[232433]: 2025-12-06 07:53:00.458 232437 DEBUG oslo_concurrency.lockutils [req-92522a9d-a2dc-43f5-9438-a0436bce1120 req-893fa44c-139a-4d9d-a959-855b63970837 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "9cc0604e-1ff7-4781-8383-c780b6f598c5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:53:00 np0005548731 nova_compute[232433]: 2025-12-06 07:53:00.458 232437 DEBUG nova.compute.manager [req-92522a9d-a2dc-43f5-9438-a0436bce1120 req-893fa44c-139a-4d9d-a959-855b63970837 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 9cc0604e-1ff7-4781-8383-c780b6f598c5] No waiting events found dispatching network-vif-plugged-8e1136a8-e1c4-47b4-ada8-22860a8df286 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 02:53:00 np0005548731 nova_compute[232433]: 2025-12-06 07:53:00.458 232437 WARNING nova.compute.manager [req-92522a9d-a2dc-43f5-9438-a0436bce1120 req-893fa44c-139a-4d9d-a959-855b63970837 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 9cc0604e-1ff7-4781-8383-c780b6f598c5] Received unexpected event network-vif-plugged-8e1136a8-e1c4-47b4-ada8-22860a8df286 for instance with vm_state rescued and task_state unrescuing.#033[00m
Dec  6 02:53:00 np0005548731 nova_compute[232433]: 2025-12-06 07:53:00.569 232437 DEBUG nova.compute.manager [None req-6689d37c-0af0-4a2f-9bb4-505231b6ee4c f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] [instance: 9cc0604e-1ff7-4781-8383-c780b6f598c5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:53:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:53:00.897 143965 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:53:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:53:00.898 143965 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:53:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:53:00.899 143965 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:53:00 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:53:00 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:53:00 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:53:00.962 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:53:01 np0005548731 ceph-osd[80147]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #56. Immutable memtables: 12.
Dec  6 02:53:02 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:53:02 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:53:02 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:53:02.101 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:53:02 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:53:02 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:53:02 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:53:02.964 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:53:03 np0005548731 nova_compute[232433]: 2025-12-06 07:53:03.065 232437 DEBUG nova.compute.manager [req-8ae98395-081e-4ac4-864b-436ae2d2adaf req-9a4aa5ba-9280-4573-b47a-29a331bef1c2 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 9cc0604e-1ff7-4781-8383-c780b6f598c5] Received event network-changed-8e1136a8-e1c4-47b4-ada8-22860a8df286 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:53:03 np0005548731 nova_compute[232433]: 2025-12-06 07:53:03.065 232437 DEBUG nova.compute.manager [req-8ae98395-081e-4ac4-864b-436ae2d2adaf req-9a4aa5ba-9280-4573-b47a-29a331bef1c2 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 9cc0604e-1ff7-4781-8383-c780b6f598c5] Refreshing instance network info cache due to event network-changed-8e1136a8-e1c4-47b4-ada8-22860a8df286. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  6 02:53:03 np0005548731 nova_compute[232433]: 2025-12-06 07:53:03.066 232437 DEBUG oslo_concurrency.lockutils [req-8ae98395-081e-4ac4-864b-436ae2d2adaf req-9a4aa5ba-9280-4573-b47a-29a331bef1c2 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-9cc0604e-1ff7-4781-8383-c780b6f598c5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 02:53:03 np0005548731 nova_compute[232433]: 2025-12-06 07:53:03.066 232437 DEBUG oslo_concurrency.lockutils [req-8ae98395-081e-4ac4-864b-436ae2d2adaf req-9a4aa5ba-9280-4573-b47a-29a331bef1c2 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-9cc0604e-1ff7-4781-8383-c780b6f598c5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 02:53:03 np0005548731 nova_compute[232433]: 2025-12-06 07:53:03.066 232437 DEBUG nova.network.neutron [req-8ae98395-081e-4ac4-864b-436ae2d2adaf req-9a4aa5ba-9280-4573-b47a-29a331bef1c2 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 9cc0604e-1ff7-4781-8383-c780b6f598c5] Refreshing network info cache for port 8e1136a8-e1c4-47b4-ada8-22860a8df286 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  6 02:53:03 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  6 02:53:03 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2434060058' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  6 02:53:03 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  6 02:53:03 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2434060058' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  6 02:53:03 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e389 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:53:04 np0005548731 nova_compute[232433]: 2025-12-06 07:53:04.074 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:53:04 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:53:04 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:53:04 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:53:04.104 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:53:04 np0005548731 nova_compute[232433]: 2025-12-06 07:53:04.412 232437 DEBUG nova.network.neutron [req-8ae98395-081e-4ac4-864b-436ae2d2adaf req-9a4aa5ba-9280-4573-b47a-29a331bef1c2 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 9cc0604e-1ff7-4781-8383-c780b6f598c5] Updated VIF entry in instance network info cache for port 8e1136a8-e1c4-47b4-ada8-22860a8df286. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  6 02:53:04 np0005548731 nova_compute[232433]: 2025-12-06 07:53:04.413 232437 DEBUG nova.network.neutron [req-8ae98395-081e-4ac4-864b-436ae2d2adaf req-9a4aa5ba-9280-4573-b47a-29a331bef1c2 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 9cc0604e-1ff7-4781-8383-c780b6f598c5] Updating instance_info_cache with network_info: [{"id": "8e1136a8-e1c4-47b4-ada8-22860a8df286", "address": "fa:16:3e:3b:5f:69", "network": {"id": "3d151181-0dfe-43ab-b47e-15b53add33a6", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-534312753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.221", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4842ecff6dce4ccc981a6b65a14ea406", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8e1136a8-e1", "ovs_interfaceid": "8e1136a8-e1c4-47b4-ada8-22860a8df286", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 02:53:04 np0005548731 nova_compute[232433]: 2025-12-06 07:53:04.414 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:53:04 np0005548731 nova_compute[232433]: 2025-12-06 07:53:04.431 232437 DEBUG oslo_concurrency.lockutils [req-8ae98395-081e-4ac4-864b-436ae2d2adaf req-9a4aa5ba-9280-4573-b47a-29a331bef1c2 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-9cc0604e-1ff7-4781-8383-c780b6f598c5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 02:53:04 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:53:04 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:53:04 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:53:04.966 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:53:05 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e390 e390: 3 total, 3 up, 3 in
Dec  6 02:53:05 np0005548731 nova_compute[232433]: 2025-12-06 07:53:05.164 232437 DEBUG nova.compute.manager [req-d533845f-9bed-484a-8d9d-c5a0d1c1afee req-e9e8a393-e042-4b9f-9233-8365ca235f48 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 9cc0604e-1ff7-4781-8383-c780b6f598c5] Received event network-changed-8e1136a8-e1c4-47b4-ada8-22860a8df286 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:53:05 np0005548731 nova_compute[232433]: 2025-12-06 07:53:05.164 232437 DEBUG nova.compute.manager [req-d533845f-9bed-484a-8d9d-c5a0d1c1afee req-e9e8a393-e042-4b9f-9233-8365ca235f48 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 9cc0604e-1ff7-4781-8383-c780b6f598c5] Refreshing instance network info cache due to event network-changed-8e1136a8-e1c4-47b4-ada8-22860a8df286. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  6 02:53:05 np0005548731 nova_compute[232433]: 2025-12-06 07:53:05.165 232437 DEBUG oslo_concurrency.lockutils [req-d533845f-9bed-484a-8d9d-c5a0d1c1afee req-e9e8a393-e042-4b9f-9233-8365ca235f48 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-9cc0604e-1ff7-4781-8383-c780b6f598c5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 02:53:05 np0005548731 nova_compute[232433]: 2025-12-06 07:53:05.165 232437 DEBUG oslo_concurrency.lockutils [req-d533845f-9bed-484a-8d9d-c5a0d1c1afee req-e9e8a393-e042-4b9f-9233-8365ca235f48 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-9cc0604e-1ff7-4781-8383-c780b6f598c5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 02:53:05 np0005548731 nova_compute[232433]: 2025-12-06 07:53:05.165 232437 DEBUG nova.network.neutron [req-d533845f-9bed-484a-8d9d-c5a0d1c1afee req-e9e8a393-e042-4b9f-9233-8365ca235f48 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 9cc0604e-1ff7-4781-8383-c780b6f598c5] Refreshing network info cache for port 8e1136a8-e1c4-47b4-ada8-22860a8df286 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  6 02:53:06 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:53:06 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:53:06 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:53:06.107 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:53:06 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:53:06 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:53:06 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:53:06.968 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:53:07 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e391 e391: 3 total, 3 up, 3 in
Dec  6 02:53:07 np0005548731 nova_compute[232433]: 2025-12-06 07:53:07.854 232437 DEBUG nova.network.neutron [req-d533845f-9bed-484a-8d9d-c5a0d1c1afee req-e9e8a393-e042-4b9f-9233-8365ca235f48 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 9cc0604e-1ff7-4781-8383-c780b6f598c5] Updated VIF entry in instance network info cache for port 8e1136a8-e1c4-47b4-ada8-22860a8df286. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  6 02:53:07 np0005548731 nova_compute[232433]: 2025-12-06 07:53:07.856 232437 DEBUG nova.network.neutron [req-d533845f-9bed-484a-8d9d-c5a0d1c1afee req-e9e8a393-e042-4b9f-9233-8365ca235f48 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 9cc0604e-1ff7-4781-8383-c780b6f598c5] Updating instance_info_cache with network_info: [{"id": "8e1136a8-e1c4-47b4-ada8-22860a8df286", "address": "fa:16:3e:3b:5f:69", "network": {"id": "3d151181-0dfe-43ab-b47e-15b53add33a6", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-534312753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.221", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4842ecff6dce4ccc981a6b65a14ea406", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8e1136a8-e1", "ovs_interfaceid": "8e1136a8-e1c4-47b4-ada8-22860a8df286", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 02:53:07 np0005548731 nova_compute[232433]: 2025-12-06 07:53:07.879 232437 DEBUG oslo_concurrency.lockutils [req-d533845f-9bed-484a-8d9d-c5a0d1c1afee req-e9e8a393-e042-4b9f-9233-8365ca235f48 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-9cc0604e-1ff7-4781-8383-c780b6f598c5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 02:53:08 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:53:08 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:53:08 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:53:08.110 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:53:08 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e391 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:53:08 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:53:08 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:53:08 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:53:08.970 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:53:09 np0005548731 nova_compute[232433]: 2025-12-06 07:53:09.117 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:53:09 np0005548731 nova_compute[232433]: 2025-12-06 07:53:09.415 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:53:10 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:53:10 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:53:10 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:53:10.112 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:53:10 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:53:10 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:53:10 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:53:10.971 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:53:12 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:53:12 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:53:12 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:53:12.115 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:53:12 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:53:12 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:53:12 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:53:12.973 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:53:13 np0005548731 ovn_controller[133927]: 2025-12-06T07:53:13Z|00112|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:3b:5f:69 10.100.0.12
Dec  6 02:53:13 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e391 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:53:14 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:53:14 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:53:14 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:53:14.116 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:53:14 np0005548731 nova_compute[232433]: 2025-12-06 07:53:14.171 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:53:14 np0005548731 nova_compute[232433]: 2025-12-06 07:53:14.417 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:53:14 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:53:14 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:53:14 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:53:14.975 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:53:16 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:53:16 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:53:16 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:53:16.119 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:53:16 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:53:16 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:53:16 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:53:16.977 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:53:18 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:53:18 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:53:18 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:53:18.122 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:53:18 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e392 e392: 3 total, 3 up, 3 in
Dec  6 02:53:18 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e392 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:53:18 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:53:18 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:53:18 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:53:18.979 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:53:19 np0005548731 nova_compute[232433]: 2025-12-06 07:53:19.173 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:53:19 np0005548731 nova_compute[232433]: 2025-12-06 07:53:19.420 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:53:20 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:53:20 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:53:20 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:53:20.124 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:53:20 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:53:20 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:53:20 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:53:20.982 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:53:22 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:53:22 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:53:22 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:53:22.127 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:53:22 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:53:22 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:53:22 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:53:22.984 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:53:23 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e392 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:53:24 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:53:24 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:53:24 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:53:24.130 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:53:24 np0005548731 nova_compute[232433]: 2025-12-06 07:53:24.176 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:53:24 np0005548731 nova_compute[232433]: 2025-12-06 07:53:24.421 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:53:24 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:53:24 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 02:53:24 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:53:24.986 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 02:53:25 np0005548731 podman[312072]: 2025-12-06 07:53:25.907552022 +0000 UTC m=+0.062599913 container health_status 26c2ce9bdb83c6db5ccad7f5b2a7cb4e9354ab0235ad2f942ea42c8feda2142b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  6 02:53:25 np0005548731 podman[312074]: 2025-12-06 07:53:25.9414097 +0000 UTC m=+0.095549859 container health_status ae4fd08cdec31d6ae2a6b91ffff7deb6ca5a8375cb52ee4b685cc6f25796824e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=multipathd, managed_by=edpm_ansible, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec  6 02:53:25 np0005548731 podman[312073]: 2025-12-06 07:53:25.960295642 +0000 UTC m=+0.114848411 container health_status a118c3becf56cfbcae208065771ebf10a65162bb7b96389971293e534823fb78 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Dec  6 02:53:26 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:53:26 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:53:26 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:53:26.133 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:53:26 np0005548731 ovn_controller[133927]: 2025-12-06T07:53:26Z|00849|binding|INFO|Releasing lport 7c0488e1-35c2-4c92-b43c-271fbeecd9ea from this chassis (sb_readonly=0)
Dec  6 02:53:26 np0005548731 nova_compute[232433]: 2025-12-06 07:53:26.887 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:53:26 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:53:26 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 02:53:26 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:53:26.988 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 02:53:28 np0005548731 nova_compute[232433]: 2025-12-06 07:53:28.100 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:53:28 np0005548731 nova_compute[232433]: 2025-12-06 07:53:28.103 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:53:28 np0005548731 nova_compute[232433]: 2025-12-06 07:53:28.104 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  6 02:53:28 np0005548731 nova_compute[232433]: 2025-12-06 07:53:28.104 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  6 02:53:28 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:53:28 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:53:28 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:53:28.136 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:53:28 np0005548731 nova_compute[232433]: 2025-12-06 07:53:28.627 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Acquiring lock "refresh_cache-9cc0604e-1ff7-4781-8383-c780b6f598c5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 02:53:28 np0005548731 nova_compute[232433]: 2025-12-06 07:53:28.628 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Acquired lock "refresh_cache-9cc0604e-1ff7-4781-8383-c780b6f598c5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 02:53:28 np0005548731 nova_compute[232433]: 2025-12-06 07:53:28.628 232437 DEBUG nova.network.neutron [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] [instance: 9cc0604e-1ff7-4781-8383-c780b6f598c5] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Dec  6 02:53:28 np0005548731 nova_compute[232433]: 2025-12-06 07:53:28.628 232437 DEBUG nova.objects.instance [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lazy-loading 'info_cache' on Instance uuid 9cc0604e-1ff7-4781-8383-c780b6f598c5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:53:28 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e392 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:53:28 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:53:28 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:53:28 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:53:28.989 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:53:29 np0005548731 nova_compute[232433]: 2025-12-06 07:53:29.178 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:53:29 np0005548731 nova_compute[232433]: 2025-12-06 07:53:29.424 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:53:30 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:53:30 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:53:30 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:53:30.139 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:53:30 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:53:30 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:53:30 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:53:30.993 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:53:31 np0005548731 nova_compute[232433]: 2025-12-06 07:53:31.362 232437 DEBUG nova.network.neutron [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] [instance: 9cc0604e-1ff7-4781-8383-c780b6f598c5] Updating instance_info_cache with network_info: [{"id": "8e1136a8-e1c4-47b4-ada8-22860a8df286", "address": "fa:16:3e:3b:5f:69", "network": {"id": "3d151181-0dfe-43ab-b47e-15b53add33a6", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-534312753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.221", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4842ecff6dce4ccc981a6b65a14ea406", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8e1136a8-e1", "ovs_interfaceid": "8e1136a8-e1c4-47b4-ada8-22860a8df286", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 02:53:31 np0005548731 nova_compute[232433]: 2025-12-06 07:53:31.383 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Releasing lock "refresh_cache-9cc0604e-1ff7-4781-8383-c780b6f598c5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 02:53:31 np0005548731 nova_compute[232433]: 2025-12-06 07:53:31.383 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] [instance: 9cc0604e-1ff7-4781-8383-c780b6f598c5] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Dec  6 02:53:31 np0005548731 nova_compute[232433]: 2025-12-06 07:53:31.384 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:53:31 np0005548731 nova_compute[232433]: 2025-12-06 07:53:31.384 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:53:31 np0005548731 nova_compute[232433]: 2025-12-06 07:53:31.384 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  6 02:53:31 np0005548731 nova_compute[232433]: 2025-12-06 07:53:31.384 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:53:32 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:53:32 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:53:32 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:53:32.142 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:53:32 np0005548731 ceph-mon[77458]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #127. Immutable memtables: 0.
Dec  6 02:53:32 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:53:32.702877) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec  6 02:53:32 np0005548731 ceph-mon[77458]: rocksdb: [db/flush_job.cc:856] [default] [JOB 79] Flushing memtable with next log file: 127
Dec  6 02:53:32 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765007612703140, "job": 79, "event": "flush_started", "num_memtables": 1, "num_entries": 1929, "num_deletes": 258, "total_data_size": 4335601, "memory_usage": 4404072, "flush_reason": "Manual Compaction"}
Dec  6 02:53:32 np0005548731 ceph-mon[77458]: rocksdb: [db/flush_job.cc:885] [default] [JOB 79] Level-0 flush table #128: started
Dec  6 02:53:32 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765007612721380, "cf_name": "default", "job": 79, "event": "table_file_creation", "file_number": 128, "file_size": 2846580, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 62995, "largest_seqno": 64918, "table_properties": {"data_size": 2838345, "index_size": 4984, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2181, "raw_key_size": 17926, "raw_average_key_size": 21, "raw_value_size": 2821728, "raw_average_value_size": 3311, "num_data_blocks": 216, "num_entries": 852, "num_filter_entries": 852, "num_deletions": 258, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765007463, "oldest_key_time": 1765007463, "file_creation_time": 1765007612, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bc01f9a1-fda8-4008-aee1-1fa5ff4371a1", "db_session_id": "CWBL8WMDM4D79BU86E9T", "orig_file_number": 128, "seqno_to_time_mapping": "N/A"}}
Dec  6 02:53:32 np0005548731 ceph-mon[77458]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 79] Flush lasted 18346 microseconds, and 6694 cpu microseconds.
Dec  6 02:53:32 np0005548731 ceph-mon[77458]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  6 02:53:32 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:53:32.721425) [db/flush_job.cc:967] [default] [JOB 79] Level-0 flush table #128: 2846580 bytes OK
Dec  6 02:53:32 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:53:32.721443) [db/memtable_list.cc:519] [default] Level-0 commit table #128 started
Dec  6 02:53:32 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:53:32.723235) [db/memtable_list.cc:722] [default] Level-0 commit table #128: memtable #1 done
Dec  6 02:53:32 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:53:32.723248) EVENT_LOG_v1 {"time_micros": 1765007612723243, "job": 79, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec  6 02:53:32 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:53:32.723264) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec  6 02:53:32 np0005548731 ceph-mon[77458]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 79] Try to delete WAL files size 4326748, prev total WAL file size 4326748, number of live WAL files 2.
Dec  6 02:53:32 np0005548731 ceph-mon[77458]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000124.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  6 02:53:32 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:53:32.724251) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730035323731' seq:72057594037927935, type:22 .. '7061786F730035353233' seq:0, type:0; will stop at (end)
Dec  6 02:53:32 np0005548731 ceph-mon[77458]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 80] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec  6 02:53:32 np0005548731 ceph-mon[77458]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 79 Base level 0, inputs: [128(2779KB)], [126(9953KB)]
Dec  6 02:53:32 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765007612724360, "job": 80, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [128], "files_L6": [126], "score": -1, "input_data_size": 13038857, "oldest_snapshot_seqno": -1}
Dec  6 02:53:32 np0005548731 ceph-mon[77458]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 80] Generated table #129: 9361 keys, 11088029 bytes, temperature: kUnknown
Dec  6 02:53:32 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765007612798746, "cf_name": "default", "job": 80, "event": "table_file_creation", "file_number": 129, "file_size": 11088029, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11028509, "index_size": 35023, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 23429, "raw_key_size": 247575, "raw_average_key_size": 26, "raw_value_size": 10864805, "raw_average_value_size": 1160, "num_data_blocks": 1329, "num_entries": 9361, "num_filter_entries": 9361, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765002564, "oldest_key_time": 0, "file_creation_time": 1765007612, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bc01f9a1-fda8-4008-aee1-1fa5ff4371a1", "db_session_id": "CWBL8WMDM4D79BU86E9T", "orig_file_number": 129, "seqno_to_time_mapping": "N/A"}}
Dec  6 02:53:32 np0005548731 ceph-mon[77458]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  6 02:53:32 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:53:32.798987) [db/compaction/compaction_job.cc:1663] [default] [JOB 80] Compacted 1@0 + 1@6 files to L6 => 11088029 bytes
Dec  6 02:53:32 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:53:32.800169) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 175.1 rd, 148.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.7, 9.7 +0.0 blob) out(10.6 +0.0 blob), read-write-amplify(8.5) write-amplify(3.9) OK, records in: 9892, records dropped: 531 output_compression: NoCompression
Dec  6 02:53:32 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:53:32.800187) EVENT_LOG_v1 {"time_micros": 1765007612800179, "job": 80, "event": "compaction_finished", "compaction_time_micros": 74455, "compaction_time_cpu_micros": 27571, "output_level": 6, "num_output_files": 1, "total_output_size": 11088029, "num_input_records": 9892, "num_output_records": 9361, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec  6 02:53:32 np0005548731 ceph-mon[77458]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000128.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  6 02:53:32 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765007612800824, "job": 80, "event": "table_file_deletion", "file_number": 128}
Dec  6 02:53:32 np0005548731 ceph-mon[77458]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000126.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  6 02:53:32 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765007612802340, "job": 80, "event": "table_file_deletion", "file_number": 126}
Dec  6 02:53:32 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:53:32.724071) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 02:53:32 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:53:32.802381) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 02:53:32 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:53:32.802386) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 02:53:32 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:53:32.802388) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 02:53:32 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:53:32.802389) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 02:53:32 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:53:32.802390) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 02:53:32 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:53:32 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:53:32 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:53:32.995 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:53:33 np0005548731 nova_compute[232433]: 2025-12-06 07:53:33.122 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:53:33 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e392 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:53:34 np0005548731 nova_compute[232433]: 2025-12-06 07:53:34.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:53:34 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:53:34 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:53:34 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:53:34.144 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:53:34 np0005548731 nova_compute[232433]: 2025-12-06 07:53:34.180 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:53:34 np0005548731 nova_compute[232433]: 2025-12-06 07:53:34.425 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:53:34 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:53:34 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:53:34 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:53:34.997 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:53:35 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:53:35.090 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=68, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:ec:b3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '32:72:e7:89:e0:7d'}, ipsec=False) old=SB_Global(nb_cfg=67) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 02:53:35 np0005548731 nova_compute[232433]: 2025-12-06 07:53:35.091 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:53:35 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:53:35.091 143965 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec  6 02:53:35 np0005548731 nova_compute[232433]: 2025-12-06 07:53:35.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:53:35 np0005548731 nova_compute[232433]: 2025-12-06 07:53:35.126 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:53:35 np0005548731 nova_compute[232433]: 2025-12-06 07:53:35.127 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:53:35 np0005548731 nova_compute[232433]: 2025-12-06 07:53:35.127 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:53:35 np0005548731 nova_compute[232433]: 2025-12-06 07:53:35.127 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  6 02:53:35 np0005548731 nova_compute[232433]: 2025-12-06 07:53:35.127 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:53:35 np0005548731 ovn_controller[133927]: 2025-12-06T07:53:35Z|00850|binding|INFO|Releasing lport 7c0488e1-35c2-4c92-b43c-271fbeecd9ea from this chassis (sb_readonly=0)
Dec  6 02:53:35 np0005548731 nova_compute[232433]: 2025-12-06 07:53:35.339 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:53:35 np0005548731 nova_compute[232433]: 2025-12-06 07:53:35.419 232437 DEBUG oslo_concurrency.lockutils [None req-a34ac9c9-6f03-40ef-9db1-dc9c5cd20b1b f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Acquiring lock "9cc0604e-1ff7-4781-8383-c780b6f598c5" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:53:35 np0005548731 nova_compute[232433]: 2025-12-06 07:53:35.420 232437 DEBUG oslo_concurrency.lockutils [None req-a34ac9c9-6f03-40ef-9db1-dc9c5cd20b1b f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Lock "9cc0604e-1ff7-4781-8383-c780b6f598c5" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:53:35 np0005548731 nova_compute[232433]: 2025-12-06 07:53:35.443 232437 INFO nova.compute.manager [None req-a34ac9c9-6f03-40ef-9db1-dc9c5cd20b1b f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] [instance: 9cc0604e-1ff7-4781-8383-c780b6f598c5] Detaching volume abe8f868-d32c-4fcd-9cd6-843e5af74a06#033[00m
Dec  6 02:53:35 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 02:53:35 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3540705987' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 02:53:35 np0005548731 nova_compute[232433]: 2025-12-06 07:53:35.577 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.450s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:53:35 np0005548731 nova_compute[232433]: 2025-12-06 07:53:35.604 232437 INFO nova.virt.block_device [None req-a34ac9c9-6f03-40ef-9db1-dc9c5cd20b1b f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] [instance: 9cc0604e-1ff7-4781-8383-c780b6f598c5] Attempting to driver detach volume abe8f868-d32c-4fcd-9cd6-843e5af74a06 from mountpoint /dev/vdb#033[00m
Dec  6 02:53:35 np0005548731 nova_compute[232433]: 2025-12-06 07:53:35.616 232437 DEBUG nova.virt.libvirt.driver [None req-a34ac9c9-6f03-40ef-9db1-dc9c5cd20b1b f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Attempting to detach device vdb from instance 9cc0604e-1ff7-4781-8383-c780b6f598c5 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487#033[00m
Dec  6 02:53:35 np0005548731 nova_compute[232433]: 2025-12-06 07:53:35.617 232437 DEBUG nova.virt.libvirt.guest [None req-a34ac9c9-6f03-40ef-9db1-dc9c5cd20b1b f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] detach device xml: <disk type="network" device="disk">
Dec  6 02:53:35 np0005548731 nova_compute[232433]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Dec  6 02:53:35 np0005548731 nova_compute[232433]:  <source protocol="rbd" name="volumes/volume-abe8f868-d32c-4fcd-9cd6-843e5af74a06">
Dec  6 02:53:35 np0005548731 nova_compute[232433]:    <host name="192.168.122.100" port="6789"/>
Dec  6 02:53:35 np0005548731 nova_compute[232433]:    <host name="192.168.122.102" port="6789"/>
Dec  6 02:53:35 np0005548731 nova_compute[232433]:    <host name="192.168.122.101" port="6789"/>
Dec  6 02:53:35 np0005548731 nova_compute[232433]:  </source>
Dec  6 02:53:35 np0005548731 nova_compute[232433]:  <target dev="vdb" bus="virtio"/>
Dec  6 02:53:35 np0005548731 nova_compute[232433]:  <serial>abe8f868-d32c-4fcd-9cd6-843e5af74a06</serial>
Dec  6 02:53:35 np0005548731 nova_compute[232433]:  <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Dec  6 02:53:35 np0005548731 nova_compute[232433]: </disk>
Dec  6 02:53:35 np0005548731 nova_compute[232433]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Dec  6 02:53:35 np0005548731 nova_compute[232433]: 2025-12-06 07:53:35.625 232437 INFO nova.virt.libvirt.driver [None req-a34ac9c9-6f03-40ef-9db1-dc9c5cd20b1b f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Successfully detached device vdb from instance 9cc0604e-1ff7-4781-8383-c780b6f598c5 from the persistent domain config.#033[00m
Dec  6 02:53:35 np0005548731 nova_compute[232433]: 2025-12-06 07:53:35.625 232437 DEBUG nova.virt.libvirt.driver [None req-a34ac9c9-6f03-40ef-9db1-dc9c5cd20b1b f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance 9cc0604e-1ff7-4781-8383-c780b6f598c5 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523#033[00m
Dec  6 02:53:35 np0005548731 nova_compute[232433]: 2025-12-06 07:53:35.626 232437 DEBUG nova.virt.libvirt.guest [None req-a34ac9c9-6f03-40ef-9db1-dc9c5cd20b1b f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] detach device xml: <disk type="network" device="disk">
Dec  6 02:53:35 np0005548731 nova_compute[232433]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Dec  6 02:53:35 np0005548731 nova_compute[232433]:  <source protocol="rbd" name="volumes/volume-abe8f868-d32c-4fcd-9cd6-843e5af74a06">
Dec  6 02:53:35 np0005548731 nova_compute[232433]:    <host name="192.168.122.100" port="6789"/>
Dec  6 02:53:35 np0005548731 nova_compute[232433]:    <host name="192.168.122.102" port="6789"/>
Dec  6 02:53:35 np0005548731 nova_compute[232433]:    <host name="192.168.122.101" port="6789"/>
Dec  6 02:53:35 np0005548731 nova_compute[232433]:  </source>
Dec  6 02:53:35 np0005548731 nova_compute[232433]:  <target dev="vdb" bus="virtio"/>
Dec  6 02:53:35 np0005548731 nova_compute[232433]:  <serial>abe8f868-d32c-4fcd-9cd6-843e5af74a06</serial>
Dec  6 02:53:35 np0005548731 nova_compute[232433]:  <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Dec  6 02:53:35 np0005548731 nova_compute[232433]: </disk>
Dec  6 02:53:35 np0005548731 nova_compute[232433]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Dec  6 02:53:35 np0005548731 nova_compute[232433]: 2025-12-06 07:53:35.682 232437 DEBUG nova.virt.libvirt.driver [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Received event <DeviceRemovedEvent: 1765007615.6823792, 9cc0604e-1ff7-4781-8383-c780b6f598c5 => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370#033[00m
Dec  6 02:53:35 np0005548731 nova_compute[232433]: 2025-12-06 07:53:35.684 232437 DEBUG nova.virt.libvirt.driver [None req-a34ac9c9-6f03-40ef-9db1-dc9c5cd20b1b f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance 9cc0604e-1ff7-4781-8383-c780b6f598c5 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599#033[00m
Dec  6 02:53:35 np0005548731 nova_compute[232433]: 2025-12-06 07:53:35.687 232437 INFO nova.virt.libvirt.driver [None req-a34ac9c9-6f03-40ef-9db1-dc9c5cd20b1b f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Successfully detached device vdb from instance 9cc0604e-1ff7-4781-8383-c780b6f598c5 from the live domain config.#033[00m
Dec  6 02:53:35 np0005548731 nova_compute[232433]: 2025-12-06 07:53:35.689 232437 DEBUG nova.virt.libvirt.driver [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] skipping disk for instance-000000a9 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  6 02:53:35 np0005548731 nova_compute[232433]: 2025-12-06 07:53:35.689 232437 DEBUG nova.virt.libvirt.driver [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] skipping disk for instance-000000a9 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  6 02:53:35 np0005548731 nova_compute[232433]: 2025-12-06 07:53:35.843 232437 WARNING nova.virt.libvirt.driver [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  6 02:53:35 np0005548731 nova_compute[232433]: 2025-12-06 07:53:35.844 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4024MB free_disk=20.73941421508789GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, 
"label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  6 02:53:35 np0005548731 nova_compute[232433]: 2025-12-06 07:53:35.844 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:53:35 np0005548731 nova_compute[232433]: 2025-12-06 07:53:35.844 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:53:35 np0005548731 nova_compute[232433]: 2025-12-06 07:53:35.923 232437 DEBUG nova.objects.instance [None req-a34ac9c9-6f03-40ef-9db1-dc9c5cd20b1b f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Lazy-loading 'flavor' on Instance uuid 9cc0604e-1ff7-4781-8383-c780b6f598c5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:53:35 np0005548731 nova_compute[232433]: 2025-12-06 07:53:35.943 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Instance 9cc0604e-1ff7-4781-8383-c780b6f598c5 actively managed on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128, 'DISK_GB': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  6 02:53:35 np0005548731 nova_compute[232433]: 2025-12-06 07:53:35.944 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  6 02:53:35 np0005548731 nova_compute[232433]: 2025-12-06 07:53:35.944 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  6 02:53:35 np0005548731 nova_compute[232433]: 2025-12-06 07:53:35.973 232437 DEBUG oslo_concurrency.lockutils [None req-a34ac9c9-6f03-40ef-9db1-dc9c5cd20b1b f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Lock "9cc0604e-1ff7-4781-8383-c780b6f598c5" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 0.553s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:53:35 np0005548731 nova_compute[232433]: 2025-12-06 07:53:35.993 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:53:36 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:53:36 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:53:36 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:53:36.147 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:53:36 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 02:53:36 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3684386716' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 02:53:36 np0005548731 nova_compute[232433]: 2025-12-06 07:53:36.437 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.444s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:53:36 np0005548731 nova_compute[232433]: 2025-12-06 07:53:36.443 232437 DEBUG nova.compute.provider_tree [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Inventory has not changed in ProviderTree for provider: 6d00757a-082f-486d-ae84-869a2ba2e6e7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  6 02:53:36 np0005548731 nova_compute[232433]: 2025-12-06 07:53:36.464 232437 DEBUG nova.scheduler.client.report [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Inventory has not changed for provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  6 02:53:36 np0005548731 nova_compute[232433]: 2025-12-06 07:53:36.528 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  6 02:53:36 np0005548731 nova_compute[232433]: 2025-12-06 07:53:36.528 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.684s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:53:37 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:53:37 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:53:37 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:53:36.999 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:53:37 np0005548731 nova_compute[232433]: 2025-12-06 07:53:37.529 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:53:37 np0005548731 nova_compute[232433]: 2025-12-06 07:53:37.582 232437 DEBUG oslo_concurrency.lockutils [None req-0d38fcdd-cf49-425f-bacc-d66881660401 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Acquiring lock "9cc0604e-1ff7-4781-8383-c780b6f598c5" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:53:37 np0005548731 nova_compute[232433]: 2025-12-06 07:53:37.583 232437 DEBUG oslo_concurrency.lockutils [None req-0d38fcdd-cf49-425f-bacc-d66881660401 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Lock "9cc0604e-1ff7-4781-8383-c780b6f598c5" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:53:37 np0005548731 nova_compute[232433]: 2025-12-06 07:53:37.583 232437 DEBUG oslo_concurrency.lockutils [None req-0d38fcdd-cf49-425f-bacc-d66881660401 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Acquiring lock "9cc0604e-1ff7-4781-8383-c780b6f598c5-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:53:37 np0005548731 nova_compute[232433]: 2025-12-06 07:53:37.583 232437 DEBUG oslo_concurrency.lockutils [None req-0d38fcdd-cf49-425f-bacc-d66881660401 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Lock "9cc0604e-1ff7-4781-8383-c780b6f598c5-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:53:37 np0005548731 nova_compute[232433]: 2025-12-06 07:53:37.584 232437 DEBUG oslo_concurrency.lockutils [None req-0d38fcdd-cf49-425f-bacc-d66881660401 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Lock "9cc0604e-1ff7-4781-8383-c780b6f598c5-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:53:37 np0005548731 nova_compute[232433]: 2025-12-06 07:53:37.585 232437 INFO nova.compute.manager [None req-0d38fcdd-cf49-425f-bacc-d66881660401 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] [instance: 9cc0604e-1ff7-4781-8383-c780b6f598c5] Terminating instance#033[00m
Dec  6 02:53:37 np0005548731 nova_compute[232433]: 2025-12-06 07:53:37.586 232437 DEBUG nova.compute.manager [None req-0d38fcdd-cf49-425f-bacc-d66881660401 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] [instance: 9cc0604e-1ff7-4781-8383-c780b6f598c5] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec  6 02:53:38 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:53:38.093 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=9f96b960-b4f2-40bd-ae99-08121f5e8b78, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '68'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:53:38 np0005548731 nova_compute[232433]: 2025-12-06 07:53:38.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:53:38 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:53:38 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 02:53:38 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:53:38.149 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 02:53:38 np0005548731 kernel: tap8e1136a8-e1 (unregistering): left promiscuous mode
Dec  6 02:53:38 np0005548731 NetworkManager[49182]: <info>  [1765007618.2707] device (tap8e1136a8-e1): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec  6 02:53:38 np0005548731 ovn_controller[133927]: 2025-12-06T07:53:38Z|00851|binding|INFO|Releasing lport 8e1136a8-e1c4-47b4-ada8-22860a8df286 from this chassis (sb_readonly=0)
Dec  6 02:53:38 np0005548731 ovn_controller[133927]: 2025-12-06T07:53:38Z|00852|binding|INFO|Setting lport 8e1136a8-e1c4-47b4-ada8-22860a8df286 down in Southbound
Dec  6 02:53:38 np0005548731 nova_compute[232433]: 2025-12-06 07:53:38.283 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:53:38 np0005548731 ovn_controller[133927]: 2025-12-06T07:53:38Z|00853|binding|INFO|Removing iface tap8e1136a8-e1 ovn-installed in OVS
Dec  6 02:53:38 np0005548731 nova_compute[232433]: 2025-12-06 07:53:38.287 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:53:38 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:53:38.292 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3b:5f:69 10.100.0.12'], port_security=['fa:16:3e:3b:5f:69 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '9cc0604e-1ff7-4781-8383-c780b6f598c5', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3d151181-0dfe-43ab-b47e-15b53add33a6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4842ecff6dce4ccc981a6b65a14ea406', 'neutron:revision_number': '8', 'neutron:security_group_ids': 'c2f9858c-886d-4e74-a599-a5dabbf1ba8e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.221', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=328c1a1e-05c1-492e-8ea7-52ea97c29304, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], logical_port=8e1136a8-e1c4-47b4-ada8-22860a8df286) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec  6 02:53:38 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:53:38.293 143965 INFO neutron.agent.ovn.metadata.agent [-] Port 8e1136a8-e1c4-47b4-ada8-22860a8df286 in datapath 3d151181-0dfe-43ab-b47e-15b53add33a6 unbound from our chassis
Dec  6 02:53:38 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:53:38.295 143965 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 3d151181-0dfe-43ab-b47e-15b53add33a6, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec  6 02:53:38 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:53:38.297 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[042ffb13-bc99-4b0f-a7fb-fcc82ad47e9c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  6 02:53:38 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:53:38.297 143965 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-3d151181-0dfe-43ab-b47e-15b53add33a6 namespace which is not needed anymore
Dec  6 02:53:38 np0005548731 nova_compute[232433]: 2025-12-06 07:53:38.308 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 02:53:38 np0005548731 systemd[1]: machine-qemu\x2d86\x2dinstance\x2d000000a9.scope: Deactivated successfully.
Dec  6 02:53:38 np0005548731 systemd[1]: machine-qemu\x2d86\x2dinstance\x2d000000a9.scope: Consumed 14.538s CPU time.
Dec  6 02:53:38 np0005548731 systemd-machined[195355]: Machine qemu-86-instance-000000a9 terminated.
Dec  6 02:53:38 np0005548731 nova_compute[232433]: 2025-12-06 07:53:38.427 232437 INFO nova.virt.libvirt.driver [-] [instance: 9cc0604e-1ff7-4781-8383-c780b6f598c5] Instance destroyed successfully.
Dec  6 02:53:38 np0005548731 nova_compute[232433]: 2025-12-06 07:53:38.428 232437 DEBUG nova.objects.instance [None req-0d38fcdd-cf49-425f-bacc-d66881660401 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Lazy-loading 'resources' on Instance uuid 9cc0604e-1ff7-4781-8383-c780b6f598c5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec  6 02:53:38 np0005548731 neutron-haproxy-ovnmeta-3d151181-0dfe-43ab-b47e-15b53add33a6[311919]: [NOTICE]   (311926) : haproxy version is 2.8.14-c23fe91
Dec  6 02:53:38 np0005548731 neutron-haproxy-ovnmeta-3d151181-0dfe-43ab-b47e-15b53add33a6[311919]: [NOTICE]   (311926) : path to executable is /usr/sbin/haproxy
Dec  6 02:53:38 np0005548731 neutron-haproxy-ovnmeta-3d151181-0dfe-43ab-b47e-15b53add33a6[311919]: [WARNING]  (311926) : Exiting Master process...
Dec  6 02:53:38 np0005548731 neutron-haproxy-ovnmeta-3d151181-0dfe-43ab-b47e-15b53add33a6[311919]: [WARNING]  (311926) : Exiting Master process...
Dec  6 02:53:38 np0005548731 neutron-haproxy-ovnmeta-3d151181-0dfe-43ab-b47e-15b53add33a6[311919]: [ALERT]    (311926) : Current worker (311928) exited with code 143 (Terminated)
Dec  6 02:53:38 np0005548731 neutron-haproxy-ovnmeta-3d151181-0dfe-43ab-b47e-15b53add33a6[311919]: [WARNING]  (311926) : All workers exited. Exiting... (0)
Dec  6 02:53:38 np0005548731 systemd[1]: libpod-8a5985b7a092081b42ecde9b8c836a6352e7ec079ab46a450440e67ff2795767.scope: Deactivated successfully.
Dec  6 02:53:38 np0005548731 nova_compute[232433]: 2025-12-06 07:53:38.442 232437 DEBUG nova.virt.libvirt.vif [None req-0d38fcdd-cf49-425f-bacc-d66881660401 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T07:51:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerRescueNegativeTestJSON-server-193302101',display_name='tempest-ServerRescueNegativeTestJSON-server-193302101',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serverrescuenegativetestjson-server-193302101',id=169,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBH5A7PUR8B+q/aEm961UR40RWH5j2d0ebX0KEZ8+WjamXhjP8ST8tm7kS6TxJuU8YsY9D43lwbEhi9EKe7kSKkxSw3JIb+Ggj3DIhBD4zOx4oUM+gOFYaZD0JjPWKPEMBw==',key_name='tempest-keypair-414016542',keypairs=<?>,launch_index=0,launched_at=2025-12-06T07:52:46Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='4842ecff6dce4ccc981a6b65a14ea406',ramdisk_id='',reservation_id='r-gcbn1atd',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerRescueNegativeTestJSON-1304226499',owner_user_name='tempest-ServerRescueNegativeTestJSON-1304226499-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T07:53:00Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='f2335740042045fba7f544ee5140eb87',uuid=9cc0604e-1ff7-4781-8383-c780b6f598c5,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "8e1136a8-e1c4-47b4-ada8-22860a8df286", "address": "fa:16:3e:3b:5f:69", "network": {"id": "3d151181-0dfe-43ab-b47e-15b53add33a6", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-534312753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.221", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4842ecff6dce4ccc981a6b65a14ea406", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8e1136a8-e1", "ovs_interfaceid": "8e1136a8-e1c4-47b4-ada8-22860a8df286", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec  6 02:53:38 np0005548731 podman[312334]: 2025-12-06 07:53:38.443271197 +0000 UTC m=+0.052806003 container died 8a5985b7a092081b42ecde9b8c836a6352e7ec079ab46a450440e67ff2795767 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3d151181-0dfe-43ab-b47e-15b53add33a6, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec  6 02:53:38 np0005548731 nova_compute[232433]: 2025-12-06 07:53:38.442 232437 DEBUG nova.network.os_vif_util [None req-0d38fcdd-cf49-425f-bacc-d66881660401 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Converting VIF {"id": "8e1136a8-e1c4-47b4-ada8-22860a8df286", "address": "fa:16:3e:3b:5f:69", "network": {"id": "3d151181-0dfe-43ab-b47e-15b53add33a6", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-534312753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.221", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4842ecff6dce4ccc981a6b65a14ea406", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8e1136a8-e1", "ovs_interfaceid": "8e1136a8-e1c4-47b4-ada8-22860a8df286", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec  6 02:53:38 np0005548731 nova_compute[232433]: 2025-12-06 07:53:38.443 232437 DEBUG nova.network.os_vif_util [None req-0d38fcdd-cf49-425f-bacc-d66881660401 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:3b:5f:69,bridge_name='br-int',has_traffic_filtering=True,id=8e1136a8-e1c4-47b4-ada8-22860a8df286,network=Network(3d151181-0dfe-43ab-b47e-15b53add33a6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8e1136a8-e1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec  6 02:53:38 np0005548731 nova_compute[232433]: 2025-12-06 07:53:38.443 232437 DEBUG os_vif [None req-0d38fcdd-cf49-425f-bacc-d66881660401 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:3b:5f:69,bridge_name='br-int',has_traffic_filtering=True,id=8e1136a8-e1c4-47b4-ada8-22860a8df286,network=Network(3d151181-0dfe-43ab-b47e-15b53add33a6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8e1136a8-e1') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec  6 02:53:38 np0005548731 nova_compute[232433]: 2025-12-06 07:53:38.445 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 02:53:38 np0005548731 nova_compute[232433]: 2025-12-06 07:53:38.445 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8e1136a8-e1, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec  6 02:53:38 np0005548731 nova_compute[232433]: 2025-12-06 07:53:38.447 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 02:53:38 np0005548731 nova_compute[232433]: 2025-12-06 07:53:38.449 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec  6 02:53:38 np0005548731 nova_compute[232433]: 2025-12-06 07:53:38.452 232437 INFO os_vif [None req-0d38fcdd-cf49-425f-bacc-d66881660401 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:3b:5f:69,bridge_name='br-int',has_traffic_filtering=True,id=8e1136a8-e1c4-47b4-ada8-22860a8df286,network=Network(3d151181-0dfe-43ab-b47e-15b53add33a6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8e1136a8-e1')
Dec  6 02:53:38 np0005548731 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-8a5985b7a092081b42ecde9b8c836a6352e7ec079ab46a450440e67ff2795767-userdata-shm.mount: Deactivated successfully.
Dec  6 02:53:38 np0005548731 systemd[1]: var-lib-containers-storage-overlay-ad1b469adf450771c392e2cf91f0ae2f9402e8802f2f6d740f149d82be00314c-merged.mount: Deactivated successfully.
Dec  6 02:53:38 np0005548731 podman[312334]: 2025-12-06 07:53:38.48468111 +0000 UTC m=+0.094215916 container cleanup 8a5985b7a092081b42ecde9b8c836a6352e7ec079ab46a450440e67ff2795767 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3d151181-0dfe-43ab-b47e-15b53add33a6, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec  6 02:53:38 np0005548731 systemd[1]: libpod-conmon-8a5985b7a092081b42ecde9b8c836a6352e7ec079ab46a450440e67ff2795767.scope: Deactivated successfully.
Dec  6 02:53:38 np0005548731 podman[312389]: 2025-12-06 07:53:38.54066021 +0000 UTC m=+0.035960411 container remove 8a5985b7a092081b42ecde9b8c836a6352e7ec079ab46a450440e67ff2795767 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3d151181-0dfe-43ab-b47e-15b53add33a6, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Dec  6 02:53:38 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:53:38.546 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[a5c64221-4fff-4de2-a16a-7f96dd0d1986]: (4, ('Sat Dec  6 07:53:38 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-3d151181-0dfe-43ab-b47e-15b53add33a6 (8a5985b7a092081b42ecde9b8c836a6352e7ec079ab46a450440e67ff2795767)\n8a5985b7a092081b42ecde9b8c836a6352e7ec079ab46a450440e67ff2795767\nSat Dec  6 07:53:38 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-3d151181-0dfe-43ab-b47e-15b53add33a6 (8a5985b7a092081b42ecde9b8c836a6352e7ec079ab46a450440e67ff2795767)\n8a5985b7a092081b42ecde9b8c836a6352e7ec079ab46a450440e67ff2795767\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  6 02:53:38 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:53:38.547 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[70d8c1ea-8e60-47be-8945-f8b786182b69]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  6 02:53:38 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:53:38.548 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3d151181-00, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec  6 02:53:38 np0005548731 nova_compute[232433]: 2025-12-06 07:53:38.585 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 02:53:38 np0005548731 kernel: tap3d151181-00: left promiscuous mode
Dec  6 02:53:38 np0005548731 nova_compute[232433]: 2025-12-06 07:53:38.599 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 02:53:38 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:53:38.603 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[38ae19b7-0ae1-4733-823a-1de880569b92]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  6 02:53:38 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:53:38.619 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[0f61bfc9-cd81-45e6-8993-55d8b9343b05]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  6 02:53:38 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:53:38.620 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[d20a32cd-d09a-495c-9e49-eed90118c8b2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  6 02:53:38 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:53:38.635 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[4a59fa2e-bc41-432f-843e-d0e840189e51]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 783962, 'reachable_time': 27916, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 312407, 'error': None, 'target': 'ovnmeta-3d151181-0dfe-43ab-b47e-15b53add33a6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  6 02:53:38 np0005548731 systemd[1]: run-netns-ovnmeta\x2d3d151181\x2d0dfe\x2d43ab\x2db47e\x2d15b53add33a6.mount: Deactivated successfully.
Dec  6 02:53:38 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:53:38.639 144079 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-3d151181-0dfe-43ab-b47e-15b53add33a6 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec  6 02:53:38 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:53:38.639 144079 DEBUG oslo.privsep.daemon [-] privsep: reply[646980e6-433e-4982-a384-89fce937a5b1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  6 02:53:38 np0005548731 nova_compute[232433]: 2025-12-06 07:53:38.729 232437 DEBUG nova.compute.manager [req-edbd8126-819f-4c20-92d7-3889022a686f req-744a1e00-2288-4b47-8f4f-f252ce9abe46 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 9cc0604e-1ff7-4781-8383-c780b6f598c5] Received event network-vif-unplugged-8e1136a8-e1c4-47b4-ada8-22860a8df286 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec  6 02:53:38 np0005548731 nova_compute[232433]: 2025-12-06 07:53:38.729 232437 DEBUG oslo_concurrency.lockutils [req-edbd8126-819f-4c20-92d7-3889022a686f req-744a1e00-2288-4b47-8f4f-f252ce9abe46 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "9cc0604e-1ff7-4781-8383-c780b6f598c5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  6 02:53:38 np0005548731 nova_compute[232433]: 2025-12-06 07:53:38.729 232437 DEBUG oslo_concurrency.lockutils [req-edbd8126-819f-4c20-92d7-3889022a686f req-744a1e00-2288-4b47-8f4f-f252ce9abe46 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "9cc0604e-1ff7-4781-8383-c780b6f598c5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  6 02:53:38 np0005548731 nova_compute[232433]: 2025-12-06 07:53:38.730 232437 DEBUG oslo_concurrency.lockutils [req-edbd8126-819f-4c20-92d7-3889022a686f req-744a1e00-2288-4b47-8f4f-f252ce9abe46 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "9cc0604e-1ff7-4781-8383-c780b6f598c5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  6 02:53:38 np0005548731 nova_compute[232433]: 2025-12-06 07:53:38.730 232437 DEBUG nova.compute.manager [req-edbd8126-819f-4c20-92d7-3889022a686f req-744a1e00-2288-4b47-8f4f-f252ce9abe46 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 9cc0604e-1ff7-4781-8383-c780b6f598c5] No waiting events found dispatching network-vif-unplugged-8e1136a8-e1c4-47b4-ada8-22860a8df286 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec  6 02:53:38 np0005548731 nova_compute[232433]: 2025-12-06 07:53:38.730 232437 DEBUG nova.compute.manager [req-edbd8126-819f-4c20-92d7-3889022a686f req-744a1e00-2288-4b47-8f4f-f252ce9abe46 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 9cc0604e-1ff7-4781-8383-c780b6f598c5] Received event network-vif-unplugged-8e1136a8-e1c4-47b4-ada8-22860a8df286 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Dec  6 02:53:38 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e392 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:53:39 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:53:39 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:53:39 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:53:39.001 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:53:39 np0005548731 nova_compute[232433]: 2025-12-06 07:53:39.426 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 02:53:40 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:53:40 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:53:40 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:53:40.152 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:53:40 np0005548731 nova_compute[232433]: 2025-12-06 07:53:40.863 232437 DEBUG nova.compute.manager [req-8dc33513-3b1f-4d17-a87e-5c1f4d348697 req-af2be7f3-5067-495e-b7e4-a6bcc65bd6c3 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 9cc0604e-1ff7-4781-8383-c780b6f598c5] Received event network-vif-plugged-8e1136a8-e1c4-47b4-ada8-22860a8df286 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec  6 02:53:40 np0005548731 nova_compute[232433]: 2025-12-06 07:53:40.864 232437 DEBUG oslo_concurrency.lockutils [req-8dc33513-3b1f-4d17-a87e-5c1f4d348697 req-af2be7f3-5067-495e-b7e4-a6bcc65bd6c3 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "9cc0604e-1ff7-4781-8383-c780b6f598c5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  6 02:53:40 np0005548731 nova_compute[232433]: 2025-12-06 07:53:40.864 232437 DEBUG oslo_concurrency.lockutils [req-8dc33513-3b1f-4d17-a87e-5c1f4d348697 req-af2be7f3-5067-495e-b7e4-a6bcc65bd6c3 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "9cc0604e-1ff7-4781-8383-c780b6f598c5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  6 02:53:40 np0005548731 nova_compute[232433]: 2025-12-06 07:53:40.864 232437 DEBUG oslo_concurrency.lockutils [req-8dc33513-3b1f-4d17-a87e-5c1f4d348697 req-af2be7f3-5067-495e-b7e4-a6bcc65bd6c3 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "9cc0604e-1ff7-4781-8383-c780b6f598c5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  6 02:53:40 np0005548731 nova_compute[232433]: 2025-12-06 07:53:40.864 232437 DEBUG nova.compute.manager [req-8dc33513-3b1f-4d17-a87e-5c1f4d348697 req-af2be7f3-5067-495e-b7e4-a6bcc65bd6c3 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 9cc0604e-1ff7-4781-8383-c780b6f598c5] No waiting events found dispatching network-vif-plugged-8e1136a8-e1c4-47b4-ada8-22860a8df286 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec  6 02:53:40 np0005548731 nova_compute[232433]: 2025-12-06 07:53:40.865 232437 WARNING nova.compute.manager [req-8dc33513-3b1f-4d17-a87e-5c1f4d348697 req-af2be7f3-5067-495e-b7e4-a6bcc65bd6c3 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 9cc0604e-1ff7-4781-8383-c780b6f598c5] Received unexpected event network-vif-plugged-8e1136a8-e1c4-47b4-ada8-22860a8df286 for instance with vm_state active and task_state deleting.
Dec  6 02:53:40 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Dec  6 02:53:40 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Dec  6 02:53:41 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:53:41 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:53:41 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:53:41.003 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:53:41 np0005548731 nova_compute[232433]: 2025-12-06 07:53:41.458 232437 INFO nova.virt.libvirt.driver [None req-0d38fcdd-cf49-425f-bacc-d66881660401 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] [instance: 9cc0604e-1ff7-4781-8383-c780b6f598c5] Deleting instance files /var/lib/nova/instances/9cc0604e-1ff7-4781-8383-c780b6f598c5_del
Dec  6 02:53:41 np0005548731 nova_compute[232433]: 2025-12-06 07:53:41.461 232437 INFO nova.virt.libvirt.driver [None req-0d38fcdd-cf49-425f-bacc-d66881660401 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] [instance: 9cc0604e-1ff7-4781-8383-c780b6f598c5] Deletion of /var/lib/nova/instances/9cc0604e-1ff7-4781-8383-c780b6f598c5_del complete
Dec  6 02:53:41 np0005548731 nova_compute[232433]: 2025-12-06 07:53:41.523 232437 INFO nova.compute.manager [None req-0d38fcdd-cf49-425f-bacc-d66881660401 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] [instance: 9cc0604e-1ff7-4781-8383-c780b6f598c5] Took 3.94 seconds to destroy the instance on the hypervisor.
Dec  6 02:53:41 np0005548731 nova_compute[232433]: 2025-12-06 07:53:41.524 232437 DEBUG oslo.service.loopingcall [None req-0d38fcdd-cf49-425f-bacc-d66881660401 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec  6 02:53:41 np0005548731 nova_compute[232433]: 2025-12-06 07:53:41.525 232437 DEBUG nova.compute.manager [-] [instance: 9cc0604e-1ff7-4781-8383-c780b6f598c5] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec  6 02:53:41 np0005548731 nova_compute[232433]: 2025-12-06 07:53:41.525 232437 DEBUG nova.network.neutron [-] [instance: 9cc0604e-1ff7-4781-8383-c780b6f598c5] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec  6 02:53:41 np0005548731 podman[312582]: 2025-12-06 07:53:41.61997776 +0000 UTC m=+0.067616816 container exec 541e7276f6038dd900483907d1613bdfcbf99ea3714cf53a920a7189986fab54 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-mon-compute-2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  6 02:53:41 np0005548731 podman[312582]: 2025-12-06 07:53:41.734097201 +0000 UTC m=+0.181736277 container exec_died 541e7276f6038dd900483907d1613bdfcbf99ea3714cf53a920a7189986fab54 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-mon-compute-2, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef)
Dec  6 02:53:41 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 02:53:41 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 02:53:42 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:53:42 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:53:42 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:53:42.154 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:53:42 np0005548731 podman[312736]: 2025-12-06 07:53:42.270860984 +0000 UTC m=+0.048769134 container exec 42755d5ddab81b86d84b40c9464a8af637e2d196c1a098a61593efb6a9875167 (image=quay.io/ceph/haproxy:2.3, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-haproxy-rgw-default-compute-2-nyemkw)
Dec  6 02:53:42 np0005548731 podman[312736]: 2025-12-06 07:53:42.282807806 +0000 UTC m=+0.060715926 container exec_died 42755d5ddab81b86d84b40c9464a8af637e2d196c1a098a61593efb6a9875167 (image=quay.io/ceph/haproxy:2.3, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-haproxy-rgw-default-compute-2-nyemkw)
Dec  6 02:53:42 np0005548731 nova_compute[232433]: 2025-12-06 07:53:42.380 232437 DEBUG nova.network.neutron [-] [instance: 9cc0604e-1ff7-4781-8383-c780b6f598c5] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 02:53:42 np0005548731 nova_compute[232433]: 2025-12-06 07:53:42.403 232437 INFO nova.compute.manager [-] [instance: 9cc0604e-1ff7-4781-8383-c780b6f598c5] Took 0.88 seconds to deallocate network for instance.#033[00m
Dec  6 02:53:42 np0005548731 nova_compute[232433]: 2025-12-06 07:53:42.463 232437 DEBUG oslo_concurrency.lockutils [None req-0d38fcdd-cf49-425f-bacc-d66881660401 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:53:42 np0005548731 nova_compute[232433]: 2025-12-06 07:53:42.464 232437 DEBUG oslo_concurrency.lockutils [None req-0d38fcdd-cf49-425f-bacc-d66881660401 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:53:42 np0005548731 podman[312803]: 2025-12-06 07:53:42.480849211 +0000 UTC m=+0.050123286 container exec 60f905acca99db805341691264f7f91f9f4f43c91aa728d21396595dfca7cf39 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-keepalived-rgw-default-compute-2-cossgt, build-date=2023-02-22T09:23:20, com.redhat.component=keepalived-container, name=keepalived, architecture=x86_64, distribution-scope=public, io.k8s.display-name=Keepalived on RHEL 9, summary=Provides keepalived on RHEL 9 for Ceph., vcs-type=git, description=keepalived for Ceph, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=Ceph keepalived, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., version=2.2.4, io.buildah.version=1.28.2, io.openshift.expose-services=, release=1793, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9)
Dec  6 02:53:42 np0005548731 podman[312803]: 2025-12-06 07:53:42.492820485 +0000 UTC m=+0.062094540 container exec_died 60f905acca99db805341691264f7f91f9f4f43c91aa728d21396595dfca7cf39 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-keepalived-rgw-default-compute-2-cossgt, io.openshift.tags=Ceph keepalived, io.openshift.expose-services=, com.redhat.component=keepalived-container, distribution-scope=public, summary=Provides keepalived on RHEL 9 for Ceph., vendor=Red Hat, Inc., io.buildah.version=1.28.2, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2023-02-22T09:23:20, name=keepalived, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.display-name=Keepalived on RHEL 9, version=2.2.4, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1793, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, vcs-type=git, description=keepalived for Ceph)
Dec  6 02:53:42 np0005548731 nova_compute[232433]: 2025-12-06 07:53:42.521 232437 DEBUG nova.compute.manager [req-fb092ab2-cf93-48f7-acfc-8e02f3870b87 req-05bc78be-5186-4b07-83b4-0da99b58915d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 9cc0604e-1ff7-4781-8383-c780b6f598c5] Received event network-vif-deleted-8e1136a8-e1c4-47b4-ada8-22860a8df286 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:53:42 np0005548731 nova_compute[232433]: 2025-12-06 07:53:42.543 232437 DEBUG oslo_concurrency.processutils [None req-0d38fcdd-cf49-425f-bacc-d66881660401 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:53:43 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:53:43 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:53:43 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:53:43.005 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:53:43 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 02:53:43 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/194808883' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 02:53:43 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 02:53:43 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 02:53:43 np0005548731 nova_compute[232433]: 2025-12-06 07:53:43.055 232437 DEBUG oslo_concurrency.processutils [None req-0d38fcdd-cf49-425f-bacc-d66881660401 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.512s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:53:43 np0005548731 nova_compute[232433]: 2025-12-06 07:53:43.062 232437 DEBUG nova.compute.provider_tree [None req-0d38fcdd-cf49-425f-bacc-d66881660401 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Inventory has not changed in ProviderTree for provider: 6d00757a-082f-486d-ae84-869a2ba2e6e7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  6 02:53:43 np0005548731 nova_compute[232433]: 2025-12-06 07:53:43.082 232437 DEBUG nova.scheduler.client.report [None req-0d38fcdd-cf49-425f-bacc-d66881660401 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Inventory has not changed for provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  6 02:53:43 np0005548731 nova_compute[232433]: 2025-12-06 07:53:43.111 232437 DEBUG oslo_concurrency.lockutils [None req-0d38fcdd-cf49-425f-bacc-d66881660401 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.648s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:53:43 np0005548731 nova_compute[232433]: 2025-12-06 07:53:43.173 232437 INFO nova.scheduler.client.report [None req-0d38fcdd-cf49-425f-bacc-d66881660401 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Deleted allocations for instance 9cc0604e-1ff7-4781-8383-c780b6f598c5#033[00m
Dec  6 02:53:43 np0005548731 nova_compute[232433]: 2025-12-06 07:53:43.232 232437 DEBUG oslo_concurrency.lockutils [None req-0d38fcdd-cf49-425f-bacc-d66881660401 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Lock "9cc0604e-1ff7-4781-8383-c780b6f598c5" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.649s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:53:43 np0005548731 nova_compute[232433]: 2025-12-06 07:53:43.449 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:53:43 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e392 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:53:44 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  6 02:53:44 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 02:53:44 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  6 02:53:44 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:53:44 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:53:44 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:53:44.157 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:53:44 np0005548731 nova_compute[232433]: 2025-12-06 07:53:44.471 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:53:45 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:53:45 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:53:45 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:53:45.007 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:53:45 np0005548731 nova_compute[232433]: 2025-12-06 07:53:45.100 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:53:46 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e393 e393: 3 total, 3 up, 3 in
Dec  6 02:53:46 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:53:46 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:53:46 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:53:46.159 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:53:47 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:53:47 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:53:47 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:53:47.009 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:53:47 np0005548731 nova_compute[232433]: 2025-12-06 07:53:47.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:53:47 np0005548731 nova_compute[232433]: 2025-12-06 07:53:47.104 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Dec  6 02:53:47 np0005548731 nova_compute[232433]: 2025-12-06 07:53:47.123 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Dec  6 02:53:48 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e394 e394: 3 total, 3 up, 3 in
Dec  6 02:53:48 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:53:48 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:53:48 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:53:48.161 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:53:48 np0005548731 nova_compute[232433]: 2025-12-06 07:53:48.452 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:53:48 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e394 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:53:49 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:53:49 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:53:49 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:53:49.012 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:53:49 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e395 e395: 3 total, 3 up, 3 in
Dec  6 02:53:49 np0005548731 nova_compute[232433]: 2025-12-06 07:53:49.473 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:53:50 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 02:53:50 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 02:53:50 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:53:50 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:53:50 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:53:50.164 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:53:51 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:53:51 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:53:51 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:53:51.014 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:53:52 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:53:52 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:53:52 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:53:52.167 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:53:53 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:53:53 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:53:53 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:53:53.016 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:53:53 np0005548731 nova_compute[232433]: 2025-12-06 07:53:53.425 232437 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765007618.4237618, 9cc0604e-1ff7-4781-8383-c780b6f598c5 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 02:53:53 np0005548731 nova_compute[232433]: 2025-12-06 07:53:53.426 232437 INFO nova.compute.manager [-] [instance: 9cc0604e-1ff7-4781-8383-c780b6f598c5] VM Stopped (Lifecycle Event)#033[00m
Dec  6 02:53:53 np0005548731 nova_compute[232433]: 2025-12-06 07:53:53.451 232437 DEBUG nova.compute.manager [None req-fde07be8-e481-4efd-b134-94f5928e1477 - - - - - -] [instance: 9cc0604e-1ff7-4781-8383-c780b6f598c5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:53:53 np0005548731 nova_compute[232433]: 2025-12-06 07:53:53.454 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:53:53 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e395 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:53:54 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:53:54 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:53:54 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:53:54.169 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:53:54 np0005548731 nova_compute[232433]: 2025-12-06 07:53:54.477 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:53:55 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:53:55 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:53:55 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:53:55.018 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:53:56 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:53:56 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:53:56 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:53:56.172 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:53:56 np0005548731 nova_compute[232433]: 2025-12-06 07:53:56.268 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:53:56 np0005548731 podman[313095]: 2025-12-06 07:53:56.893691599 +0000 UTC m=+0.052020633 container health_status 26c2ce9bdb83c6db5ccad7f5b2a7cb4e9354ab0235ad2f942ea42c8feda2142b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent)
Dec  6 02:53:56 np0005548731 podman[313097]: 2025-12-06 07:53:56.929592338 +0000 UTC m=+0.083761400 container health_status ae4fd08cdec31d6ae2a6b91ffff7deb6ca5a8375cb52ee4b685cc6f25796824e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd)
Dec  6 02:53:56 np0005548731 podman[313096]: 2025-12-06 07:53:56.952806716 +0000 UTC m=+0.111106809 container health_status a118c3becf56cfbcae208065771ebf10a65162bb7b96389971293e534823fb78 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Dec  6 02:53:57 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:53:57 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:53:57 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:53:57.021 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:53:58 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:53:58 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:53:58 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:53:58.175 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:53:58 np0005548731 nova_compute[232433]: 2025-12-06 07:53:58.457 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:53:58 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e396 e396: 3 total, 3 up, 3 in
Dec  6 02:53:58 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e396 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:53:59 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:53:59 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:53:59 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:53:59.022 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:53:59 np0005548731 nova_compute[232433]: 2025-12-06 07:53:59.105 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:53:59 np0005548731 nova_compute[232433]: 2025-12-06 07:53:59.105 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Dec  6 02:53:59 np0005548731 nova_compute[232433]: 2025-12-06 07:53:59.479 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:54:00 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:54:00 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:54:00 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:54:00.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:54:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:54:00.898 143965 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:54:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:54:00.899 143965 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:54:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:54:00.899 143965 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:54:01 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:54:01 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:54:01 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:54:01.024 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:54:02 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:54:02 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:54:02 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:54:02.180 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:54:03 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:54:03 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:54:03 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:54:03.026 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:54:03 np0005548731 nova_compute[232433]: 2025-12-06 07:54:03.460 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:54:03 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e396 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:54:04 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:54:04 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:54:04 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:54:04.183 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:54:04 np0005548731 nova_compute[232433]: 2025-12-06 07:54:04.432 232437 DEBUG oslo_concurrency.lockutils [None req-8139596e-88c3-4ba6-9cbf-01dd99fa4077 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] Acquiring lock "91b85b86-0d07-4df4-80d5-48fa343c00b8" by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:54:04 np0005548731 nova_compute[232433]: 2025-12-06 07:54:04.433 232437 DEBUG oslo_concurrency.lockutils [None req-8139596e-88c3-4ba6-9cbf-01dd99fa4077 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] Lock "91b85b86-0d07-4df4-80d5-48fa343c00b8" acquired by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:54:04 np0005548731 nova_compute[232433]: 2025-12-06 07:54:04.433 232437 INFO nova.compute.manager [None req-8139596e-88c3-4ba6-9cbf-01dd99fa4077 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] [instance: 91b85b86-0d07-4df4-80d5-48fa343c00b8] Unshelving#033[00m
Dec  6 02:54:04 np0005548731 nova_compute[232433]: 2025-12-06 07:54:04.481 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:54:04 np0005548731 nova_compute[232433]: 2025-12-06 07:54:04.514 232437 DEBUG oslo_concurrency.lockutils [None req-8139596e-88c3-4ba6-9cbf-01dd99fa4077 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:54:04 np0005548731 nova_compute[232433]: 2025-12-06 07:54:04.514 232437 DEBUG oslo_concurrency.lockutils [None req-8139596e-88c3-4ba6-9cbf-01dd99fa4077 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:54:04 np0005548731 nova_compute[232433]: 2025-12-06 07:54:04.520 232437 DEBUG nova.objects.instance [None req-8139596e-88c3-4ba6-9cbf-01dd99fa4077 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] Lazy-loading 'pci_requests' on Instance uuid 91b85b86-0d07-4df4-80d5-48fa343c00b8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:54:04 np0005548731 nova_compute[232433]: 2025-12-06 07:54:04.535 232437 DEBUG nova.objects.instance [None req-8139596e-88c3-4ba6-9cbf-01dd99fa4077 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] Lazy-loading 'numa_topology' on Instance uuid 91b85b86-0d07-4df4-80d5-48fa343c00b8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:54:04 np0005548731 nova_compute[232433]: 2025-12-06 07:54:04.544 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:54:04 np0005548731 nova_compute[232433]: 2025-12-06 07:54:04.557 232437 DEBUG nova.virt.hardware [None req-8139596e-88c3-4ba6-9cbf-01dd99fa4077 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Dec  6 02:54:04 np0005548731 nova_compute[232433]: 2025-12-06 07:54:04.558 232437 INFO nova.compute.claims [None req-8139596e-88c3-4ba6-9cbf-01dd99fa4077 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] [instance: 91b85b86-0d07-4df4-80d5-48fa343c00b8] Claim successful on node compute-2.ctlplane.example.com#033[00m
Dec  6 02:54:04 np0005548731 nova_compute[232433]: 2025-12-06 07:54:04.709 232437 DEBUG oslo_concurrency.processutils [None req-8139596e-88c3-4ba6-9cbf-01dd99fa4077 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:54:04 np0005548731 nova_compute[232433]: 2025-12-06 07:54:04.756 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:54:05 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:54:05 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:54:05 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:54:05.028 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:54:05 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 02:54:05 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1188045838' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 02:54:05 np0005548731 nova_compute[232433]: 2025-12-06 07:54:05.212 232437 DEBUG oslo_concurrency.processutils [None req-8139596e-88c3-4ba6-9cbf-01dd99fa4077 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.503s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:54:05 np0005548731 nova_compute[232433]: 2025-12-06 07:54:05.219 232437 DEBUG nova.compute.provider_tree [None req-8139596e-88c3-4ba6-9cbf-01dd99fa4077 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] Inventory has not changed in ProviderTree for provider: 6d00757a-082f-486d-ae84-869a2ba2e6e7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  6 02:54:05 np0005548731 nova_compute[232433]: 2025-12-06 07:54:05.236 232437 DEBUG nova.scheduler.client.report [None req-8139596e-88c3-4ba6-9cbf-01dd99fa4077 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] Inventory has not changed for provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  6 02:54:05 np0005548731 nova_compute[232433]: 2025-12-06 07:54:05.255 232437 DEBUG oslo_concurrency.lockutils [None req-8139596e-88c3-4ba6-9cbf-01dd99fa4077 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.741s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:54:05 np0005548731 nova_compute[232433]: 2025-12-06 07:54:05.834 232437 INFO nova.network.neutron [None req-8139596e-88c3-4ba6-9cbf-01dd99fa4077 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] [instance: 91b85b86-0d07-4df4-80d5-48fa343c00b8] Updating port ad09ca6a-7b57-4547-9e95-4976d37ac5f9 with attributes {'binding:host_id': 'compute-2.ctlplane.example.com', 'device_owner': 'compute:nova'}#033[00m
Dec  6 02:54:06 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:54:06 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:54:06 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:54:06.186 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:54:06 np0005548731 nova_compute[232433]: 2025-12-06 07:54:06.812 232437 DEBUG oslo_concurrency.lockutils [None req-8139596e-88c3-4ba6-9cbf-01dd99fa4077 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] Acquiring lock "refresh_cache-91b85b86-0d07-4df4-80d5-48fa343c00b8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 02:54:06 np0005548731 nova_compute[232433]: 2025-12-06 07:54:06.813 232437 DEBUG oslo_concurrency.lockutils [None req-8139596e-88c3-4ba6-9cbf-01dd99fa4077 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] Acquired lock "refresh_cache-91b85b86-0d07-4df4-80d5-48fa343c00b8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 02:54:06 np0005548731 nova_compute[232433]: 2025-12-06 07:54:06.813 232437 DEBUG nova.network.neutron [None req-8139596e-88c3-4ba6-9cbf-01dd99fa4077 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] [instance: 91b85b86-0d07-4df4-80d5-48fa343c00b8] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec  6 02:54:07 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:54:07 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:54:07 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:54:07.030 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:54:07 np0005548731 nova_compute[232433]: 2025-12-06 07:54:07.035 232437 DEBUG nova.compute.manager [req-5a06409d-b2b3-467c-aa52-9a3e69184369 req-a23d001b-62d7-4bdf-9171-ac3ec518f7e3 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 91b85b86-0d07-4df4-80d5-48fa343c00b8] Received event network-changed-ad09ca6a-7b57-4547-9e95-4976d37ac5f9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:54:07 np0005548731 nova_compute[232433]: 2025-12-06 07:54:07.035 232437 DEBUG nova.compute.manager [req-5a06409d-b2b3-467c-aa52-9a3e69184369 req-a23d001b-62d7-4bdf-9171-ac3ec518f7e3 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 91b85b86-0d07-4df4-80d5-48fa343c00b8] Refreshing instance network info cache due to event network-changed-ad09ca6a-7b57-4547-9e95-4976d37ac5f9. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  6 02:54:07 np0005548731 nova_compute[232433]: 2025-12-06 07:54:07.035 232437 DEBUG oslo_concurrency.lockutils [req-5a06409d-b2b3-467c-aa52-9a3e69184369 req-a23d001b-62d7-4bdf-9171-ac3ec518f7e3 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-91b85b86-0d07-4df4-80d5-48fa343c00b8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 02:54:08 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:54:08 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:54:08 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:54:08.188 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:54:08 np0005548731 nova_compute[232433]: 2025-12-06 07:54:08.462 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:54:08 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e396 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:54:09 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:54:09 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:54:09 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:54:09.032 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:54:09 np0005548731 nova_compute[232433]: 2025-12-06 07:54:09.483 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:54:10 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:54:10 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:54:10 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:54:10.190 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:54:10 np0005548731 nova_compute[232433]: 2025-12-06 07:54:10.478 232437 DEBUG nova.network.neutron [None req-8139596e-88c3-4ba6-9cbf-01dd99fa4077 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] [instance: 91b85b86-0d07-4df4-80d5-48fa343c00b8] Updating instance_info_cache with network_info: [{"id": "ad09ca6a-7b57-4547-9e95-4976d37ac5f9", "address": "fa:16:3e:5d:9b:dc", "network": {"id": "45904a2f-a5c2-4047-9c19-a87d36354c1b", "bridge": "br-int", "label": "tempest-AttachVolumeShelveTestJSON-1547381509-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.190", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cfa713d92cc94fa1b94404ed58b0563f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapad09ca6a-7b", "ovs_interfaceid": "ad09ca6a-7b57-4547-9e95-4976d37ac5f9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 02:54:10 np0005548731 nova_compute[232433]: 2025-12-06 07:54:10.503 232437 DEBUG oslo_concurrency.lockutils [None req-8139596e-88c3-4ba6-9cbf-01dd99fa4077 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] Releasing lock "refresh_cache-91b85b86-0d07-4df4-80d5-48fa343c00b8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 02:54:10 np0005548731 nova_compute[232433]: 2025-12-06 07:54:10.504 232437 DEBUG nova.virt.libvirt.driver [None req-8139596e-88c3-4ba6-9cbf-01dd99fa4077 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] [instance: 91b85b86-0d07-4df4-80d5-48fa343c00b8] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Dec  6 02:54:10 np0005548731 nova_compute[232433]: 2025-12-06 07:54:10.505 232437 INFO nova.virt.libvirt.driver [None req-8139596e-88c3-4ba6-9cbf-01dd99fa4077 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] [instance: 91b85b86-0d07-4df4-80d5-48fa343c00b8] Creating image(s)#033[00m
Dec  6 02:54:10 np0005548731 nova_compute[232433]: 2025-12-06 07:54:10.528 232437 DEBUG nova.storage.rbd_utils [None req-8139596e-88c3-4ba6-9cbf-01dd99fa4077 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] rbd image 91b85b86-0d07-4df4-80d5-48fa343c00b8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:54:10 np0005548731 nova_compute[232433]: 2025-12-06 07:54:10.532 232437 DEBUG nova.objects.instance [None req-8139596e-88c3-4ba6-9cbf-01dd99fa4077 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] Lazy-loading 'trusted_certs' on Instance uuid 91b85b86-0d07-4df4-80d5-48fa343c00b8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:54:10 np0005548731 nova_compute[232433]: 2025-12-06 07:54:10.533 232437 DEBUG oslo_concurrency.lockutils [req-5a06409d-b2b3-467c-aa52-9a3e69184369 req-a23d001b-62d7-4bdf-9171-ac3ec518f7e3 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-91b85b86-0d07-4df4-80d5-48fa343c00b8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 02:54:10 np0005548731 nova_compute[232433]: 2025-12-06 07:54:10.533 232437 DEBUG nova.network.neutron [req-5a06409d-b2b3-467c-aa52-9a3e69184369 req-a23d001b-62d7-4bdf-9171-ac3ec518f7e3 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 91b85b86-0d07-4df4-80d5-48fa343c00b8] Refreshing network info cache for port ad09ca6a-7b57-4547-9e95-4976d37ac5f9 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  6 02:54:10 np0005548731 nova_compute[232433]: 2025-12-06 07:54:10.573 232437 DEBUG nova.storage.rbd_utils [None req-8139596e-88c3-4ba6-9cbf-01dd99fa4077 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] rbd image 91b85b86-0d07-4df4-80d5-48fa343c00b8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:54:10 np0005548731 nova_compute[232433]: 2025-12-06 07:54:10.598 232437 DEBUG nova.storage.rbd_utils [None req-8139596e-88c3-4ba6-9cbf-01dd99fa4077 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] rbd image 91b85b86-0d07-4df4-80d5-48fa343c00b8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:54:10 np0005548731 nova_compute[232433]: 2025-12-06 07:54:10.602 232437 DEBUG oslo_concurrency.lockutils [None req-8139596e-88c3-4ba6-9cbf-01dd99fa4077 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] Acquiring lock "1de6ed854b679c6d5bb4bff88d139b58194cba27" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:54:10 np0005548731 nova_compute[232433]: 2025-12-06 07:54:10.603 232437 DEBUG oslo_concurrency.lockutils [None req-8139596e-88c3-4ba6-9cbf-01dd99fa4077 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] Lock "1de6ed854b679c6d5bb4bff88d139b58194cba27" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:54:11 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:54:11 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:54:11 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:54:11.034 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:54:11 np0005548731 nova_compute[232433]: 2025-12-06 07:54:11.220 232437 DEBUG nova.virt.libvirt.imagebackend [None req-8139596e-88c3-4ba6-9cbf-01dd99fa4077 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] Image locations are: [{'url': 'rbd://40a1bae4-cf76-5610-8dab-c75116dfe0bb/images/f1264656-a902-4225-be11-839d8f664fd7/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://40a1bae4-cf76-5610-8dab-c75116dfe0bb/images/f1264656-a902-4225-be11-839d8f664fd7/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085#033[00m
Dec  6 02:54:11 np0005548731 nova_compute[232433]: 2025-12-06 07:54:11.275 232437 DEBUG nova.virt.libvirt.imagebackend [None req-8139596e-88c3-4ba6-9cbf-01dd99fa4077 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] Selected location: {'url': 'rbd://40a1bae4-cf76-5610-8dab-c75116dfe0bb/images/f1264656-a902-4225-be11-839d8f664fd7/snap', 'metadata': {'store': 'default_backend'}} clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1094#033[00m
Dec  6 02:54:11 np0005548731 nova_compute[232433]: 2025-12-06 07:54:11.276 232437 DEBUG nova.storage.rbd_utils [None req-8139596e-88c3-4ba6-9cbf-01dd99fa4077 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] cloning images/f1264656-a902-4225-be11-839d8f664fd7@snap to None/91b85b86-0d07-4df4-80d5-48fa343c00b8_disk clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Dec  6 02:54:11 np0005548731 nova_compute[232433]: 2025-12-06 07:54:11.385 232437 DEBUG oslo_concurrency.lockutils [None req-8139596e-88c3-4ba6-9cbf-01dd99fa4077 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] Lock "1de6ed854b679c6d5bb4bff88d139b58194cba27" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.782s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:54:11 np0005548731 nova_compute[232433]: 2025-12-06 07:54:11.521 232437 DEBUG nova.objects.instance [None req-8139596e-88c3-4ba6-9cbf-01dd99fa4077 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] Lazy-loading 'migration_context' on Instance uuid 91b85b86-0d07-4df4-80d5-48fa343c00b8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:54:11 np0005548731 nova_compute[232433]: 2025-12-06 07:54:11.602 232437 DEBUG nova.storage.rbd_utils [None req-8139596e-88c3-4ba6-9cbf-01dd99fa4077 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] flattening vms/91b85b86-0d07-4df4-80d5-48fa343c00b8_disk flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m
Dec  6 02:54:12 np0005548731 nova_compute[232433]: 2025-12-06 07:54:12.079 232437 DEBUG nova.virt.libvirt.driver [None req-8139596e-88c3-4ba6-9cbf-01dd99fa4077 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] [instance: 91b85b86-0d07-4df4-80d5-48fa343c00b8] Image rbd:vms/91b85b86-0d07-4df4-80d5-48fa343c00b8_disk:id=openstack:conf=/etc/ceph/ceph.conf flattened successfully while unshelving instance. _try_fetch_image_cache /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11007#033[00m
Dec  6 02:54:12 np0005548731 nova_compute[232433]: 2025-12-06 07:54:12.080 232437 DEBUG nova.virt.libvirt.driver [None req-8139596e-88c3-4ba6-9cbf-01dd99fa4077 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] [instance: 91b85b86-0d07-4df4-80d5-48fa343c00b8] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Dec  6 02:54:12 np0005548731 nova_compute[232433]: 2025-12-06 07:54:12.080 232437 DEBUG nova.virt.libvirt.driver [None req-8139596e-88c3-4ba6-9cbf-01dd99fa4077 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] [instance: 91b85b86-0d07-4df4-80d5-48fa343c00b8] Ensure instance console log exists: /var/lib/nova/instances/91b85b86-0d07-4df4-80d5-48fa343c00b8/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec  6 02:54:12 np0005548731 nova_compute[232433]: 2025-12-06 07:54:12.081 232437 DEBUG oslo_concurrency.lockutils [None req-8139596e-88c3-4ba6-9cbf-01dd99fa4077 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:54:12 np0005548731 nova_compute[232433]: 2025-12-06 07:54:12.081 232437 DEBUG oslo_concurrency.lockutils [None req-8139596e-88c3-4ba6-9cbf-01dd99fa4077 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:54:12 np0005548731 nova_compute[232433]: 2025-12-06 07:54:12.081 232437 DEBUG oslo_concurrency.lockutils [None req-8139596e-88c3-4ba6-9cbf-01dd99fa4077 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:54:12 np0005548731 nova_compute[232433]: 2025-12-06 07:54:12.083 232437 DEBUG nova.virt.libvirt.driver [None req-8139596e-88c3-4ba6-9cbf-01dd99fa4077 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] [instance: 91b85b86-0d07-4df4-80d5-48fa343c00b8] Start _get_guest_xml network_info=[{"id": "ad09ca6a-7b57-4547-9e95-4976d37ac5f9", "address": "fa:16:3e:5d:9b:dc", "network": {"id": "45904a2f-a5c2-4047-9c19-a87d36354c1b", "bridge": "br-int", "label": "tempest-AttachVolumeShelveTestJSON-1547381509-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.190", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cfa713d92cc94fa1b94404ed58b0563f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapad09ca6a-7b", "ovs_interfaceid": "ad09ca6a-7b57-4547-9e95-4976d37ac5f9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='',container_format='bare',created_at=2025-12-06T07:53:42Z,direct_url=<?>,disk_format='raw',id=f1264656-a902-4225-be11-839d8f664fd7,min_disk=1,min_ram=0,name='tempest-AttachVolumeShelveTestJSON-server-1180268370-shelved',owner='cfa713d92cc94fa1b94404ed58b0563f',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2025-12-06T07:53:50Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'device_type': 'disk', 'encryption_secret_uuid': None, 'disk_bus': 'virtio', 'boot_index': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'encryption_options': None, 'size': 0, 'encrypted': False, 'image_id': '6efab05d-c7cf-4770-a5c3-c806a2739063'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec  6 02:54:12 np0005548731 nova_compute[232433]: 2025-12-06 07:54:12.086 232437 WARNING nova.virt.libvirt.driver [None req-8139596e-88c3-4ba6-9cbf-01dd99fa4077 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  6 02:54:12 np0005548731 nova_compute[232433]: 2025-12-06 07:54:12.092 232437 DEBUG nova.virt.libvirt.host [None req-8139596e-88c3-4ba6-9cbf-01dd99fa4077 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec  6 02:54:12 np0005548731 nova_compute[232433]: 2025-12-06 07:54:12.092 232437 DEBUG nova.virt.libvirt.host [None req-8139596e-88c3-4ba6-9cbf-01dd99fa4077 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec  6 02:54:12 np0005548731 nova_compute[232433]: 2025-12-06 07:54:12.096 232437 DEBUG nova.virt.libvirt.host [None req-8139596e-88c3-4ba6-9cbf-01dd99fa4077 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec  6 02:54:12 np0005548731 nova_compute[232433]: 2025-12-06 07:54:12.097 232437 DEBUG nova.virt.libvirt.host [None req-8139596e-88c3-4ba6-9cbf-01dd99fa4077 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec  6 02:54:12 np0005548731 nova_compute[232433]: 2025-12-06 07:54:12.098 232437 DEBUG nova.virt.libvirt.driver [None req-8139596e-88c3-4ba6-9cbf-01dd99fa4077 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec  6 02:54:12 np0005548731 nova_compute[232433]: 2025-12-06 07:54:12.098 232437 DEBUG nova.virt.hardware [None req-8139596e-88c3-4ba6-9cbf-01dd99fa4077 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-06T06:56:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='25848a18-11d9-4f11-80b5-5d005675c76d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='',container_format='bare',created_at=2025-12-06T07:53:42Z,direct_url=<?>,disk_format='raw',id=f1264656-a902-4225-be11-839d8f664fd7,min_disk=1,min_ram=0,name='tempest-AttachVolumeShelveTestJSON-server-1180268370-shelved',owner='cfa713d92cc94fa1b94404ed58b0563f',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2025-12-06T07:53:50Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec  6 02:54:12 np0005548731 nova_compute[232433]: 2025-12-06 07:54:12.098 232437 DEBUG nova.virt.hardware [None req-8139596e-88c3-4ba6-9cbf-01dd99fa4077 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec  6 02:54:12 np0005548731 nova_compute[232433]: 2025-12-06 07:54:12.098 232437 DEBUG nova.virt.hardware [None req-8139596e-88c3-4ba6-9cbf-01dd99fa4077 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec  6 02:54:12 np0005548731 nova_compute[232433]: 2025-12-06 07:54:12.098 232437 DEBUG nova.virt.hardware [None req-8139596e-88c3-4ba6-9cbf-01dd99fa4077 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec  6 02:54:12 np0005548731 nova_compute[232433]: 2025-12-06 07:54:12.099 232437 DEBUG nova.virt.hardware [None req-8139596e-88c3-4ba6-9cbf-01dd99fa4077 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec  6 02:54:12 np0005548731 nova_compute[232433]: 2025-12-06 07:54:12.099 232437 DEBUG nova.virt.hardware [None req-8139596e-88c3-4ba6-9cbf-01dd99fa4077 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec  6 02:54:12 np0005548731 nova_compute[232433]: 2025-12-06 07:54:12.099 232437 DEBUG nova.virt.hardware [None req-8139596e-88c3-4ba6-9cbf-01dd99fa4077 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec  6 02:54:12 np0005548731 nova_compute[232433]: 2025-12-06 07:54:12.099 232437 DEBUG nova.virt.hardware [None req-8139596e-88c3-4ba6-9cbf-01dd99fa4077 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec  6 02:54:12 np0005548731 nova_compute[232433]: 2025-12-06 07:54:12.099 232437 DEBUG nova.virt.hardware [None req-8139596e-88c3-4ba6-9cbf-01dd99fa4077 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec  6 02:54:12 np0005548731 nova_compute[232433]: 2025-12-06 07:54:12.099 232437 DEBUG nova.virt.hardware [None req-8139596e-88c3-4ba6-9cbf-01dd99fa4077 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec  6 02:54:12 np0005548731 nova_compute[232433]: 2025-12-06 07:54:12.099 232437 DEBUG nova.virt.hardware [None req-8139596e-88c3-4ba6-9cbf-01dd99fa4077 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec  6 02:54:12 np0005548731 nova_compute[232433]: 2025-12-06 07:54:12.100 232437 DEBUG nova.objects.instance [None req-8139596e-88c3-4ba6-9cbf-01dd99fa4077 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] Lazy-loading 'vcpu_model' on Instance uuid 91b85b86-0d07-4df4-80d5-48fa343c00b8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:54:12 np0005548731 nova_compute[232433]: 2025-12-06 07:54:12.117 232437 DEBUG oslo_concurrency.processutils [None req-8139596e-88c3-4ba6-9cbf-01dd99fa4077 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:54:12 np0005548731 nova_compute[232433]: 2025-12-06 07:54:12.144 232437 DEBUG nova.network.neutron [req-5a06409d-b2b3-467c-aa52-9a3e69184369 req-a23d001b-62d7-4bdf-9171-ac3ec518f7e3 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 91b85b86-0d07-4df4-80d5-48fa343c00b8] Updated VIF entry in instance network info cache for port ad09ca6a-7b57-4547-9e95-4976d37ac5f9. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  6 02:54:12 np0005548731 nova_compute[232433]: 2025-12-06 07:54:12.145 232437 DEBUG nova.network.neutron [req-5a06409d-b2b3-467c-aa52-9a3e69184369 req-a23d001b-62d7-4bdf-9171-ac3ec518f7e3 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 91b85b86-0d07-4df4-80d5-48fa343c00b8] Updating instance_info_cache with network_info: [{"id": "ad09ca6a-7b57-4547-9e95-4976d37ac5f9", "address": "fa:16:3e:5d:9b:dc", "network": {"id": "45904a2f-a5c2-4047-9c19-a87d36354c1b", "bridge": "br-int", "label": "tempest-AttachVolumeShelveTestJSON-1547381509-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.190", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cfa713d92cc94fa1b94404ed58b0563f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapad09ca6a-7b", "ovs_interfaceid": "ad09ca6a-7b57-4547-9e95-4976d37ac5f9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 02:54:12 np0005548731 nova_compute[232433]: 2025-12-06 07:54:12.161 232437 DEBUG oslo_concurrency.lockutils [req-5a06409d-b2b3-467c-aa52-9a3e69184369 req-a23d001b-62d7-4bdf-9171-ac3ec518f7e3 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-91b85b86-0d07-4df4-80d5-48fa343c00b8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 02:54:12 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:54:12 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:54:12 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:54:12.195 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:54:12 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  6 02:54:12 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/917406840' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  6 02:54:12 np0005548731 nova_compute[232433]: 2025-12-06 07:54:12.581 232437 DEBUG oslo_concurrency.processutils [None req-8139596e-88c3-4ba6-9cbf-01dd99fa4077 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.464s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:54:12 np0005548731 nova_compute[232433]: 2025-12-06 07:54:12.609 232437 DEBUG nova.storage.rbd_utils [None req-8139596e-88c3-4ba6-9cbf-01dd99fa4077 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] rbd image 91b85b86-0d07-4df4-80d5-48fa343c00b8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:54:12 np0005548731 nova_compute[232433]: 2025-12-06 07:54:12.614 232437 DEBUG oslo_concurrency.processutils [None req-8139596e-88c3-4ba6-9cbf-01dd99fa4077 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:54:13 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  6 02:54:13 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2975706700' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  6 02:54:13 np0005548731 nova_compute[232433]: 2025-12-06 07:54:13.026 232437 DEBUG oslo_concurrency.processutils [None req-8139596e-88c3-4ba6-9cbf-01dd99fa4077 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.412s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:54:13 np0005548731 nova_compute[232433]: 2025-12-06 07:54:13.028 232437 DEBUG nova.virt.libvirt.vif [None req-8139596e-88c3-4ba6-9cbf-01dd99fa4077 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-12-06T07:52:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-AttachVolumeShelveTestJSON-server-1180268370',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-attachvolumeshelvetestjson-server-1180268370',id=172,image_ref='f1264656-a902-4225-be11-839d8f664fd7',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name='tempest-keypair-763555621',keypairs=<?>,launch_index=0,launched_at=2025-12-06T07:53:03Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=4,progress=0,project_id='cfa713d92cc94fa1b94404ed58b0563f',ramdisk_id='',reservation_id='r-tqsa30w8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachVolumeShelveTestJSON-1510980811',owner_user_name='tempest-AttachVolumeShelveTestJSON-1510980811-project-member',shelved_at='2025-12-06T07:53:50.655408',shelved_host='compute-0.ctlplane.example.com',shelved_image_id='f1264656-a902-4225-be11-839d8f664fd7'},tags=<?>,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:54:04Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='90c9de6e67724c898a8e23b05fbf14da',uuid=91b85b86-0d07-4df4-80d5-48fa343c00b8,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='shelved_offloaded') vif={"id": "ad09ca6a-7b57-4547-9e95-4976d37ac5f9", "address": "fa:16:3e:5d:9b:dc", "network": {"id": "45904a2f-a5c2-4047-9c19-a87d36354c1b", "bridge": "br-int", "label": "tempest-AttachVolumeShelveTestJSON-1547381509-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.190", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cfa713d92cc94fa1b94404ed58b0563f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapad09ca6a-7b", "ovs_interfaceid": "ad09ca6a-7b57-4547-9e95-4976d37ac5f9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec  6 02:54:13 np0005548731 nova_compute[232433]: 2025-12-06 07:54:13.029 232437 DEBUG nova.network.os_vif_util [None req-8139596e-88c3-4ba6-9cbf-01dd99fa4077 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] Converting VIF {"id": "ad09ca6a-7b57-4547-9e95-4976d37ac5f9", "address": "fa:16:3e:5d:9b:dc", "network": {"id": "45904a2f-a5c2-4047-9c19-a87d36354c1b", "bridge": "br-int", "label": "tempest-AttachVolumeShelveTestJSON-1547381509-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.190", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cfa713d92cc94fa1b94404ed58b0563f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapad09ca6a-7b", "ovs_interfaceid": "ad09ca6a-7b57-4547-9e95-4976d37ac5f9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 02:54:13 np0005548731 nova_compute[232433]: 2025-12-06 07:54:13.029 232437 DEBUG nova.network.os_vif_util [None req-8139596e-88c3-4ba6-9cbf-01dd99fa4077 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5d:9b:dc,bridge_name='br-int',has_traffic_filtering=True,id=ad09ca6a-7b57-4547-9e95-4976d37ac5f9,network=Network(45904a2f-a5c2-4047-9c19-a87d36354c1b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapad09ca6a-7b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 02:54:13 np0005548731 nova_compute[232433]: 2025-12-06 07:54:13.030 232437 DEBUG nova.objects.instance [None req-8139596e-88c3-4ba6-9cbf-01dd99fa4077 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] Lazy-loading 'pci_devices' on Instance uuid 91b85b86-0d07-4df4-80d5-48fa343c00b8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:54:13 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:54:13 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:54:13 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:54:13.036 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:54:13 np0005548731 nova_compute[232433]: 2025-12-06 07:54:13.046 232437 DEBUG nova.virt.libvirt.driver [None req-8139596e-88c3-4ba6-9cbf-01dd99fa4077 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] [instance: 91b85b86-0d07-4df4-80d5-48fa343c00b8] End _get_guest_xml xml=<domain type="kvm">
Dec  6 02:54:13 np0005548731 nova_compute[232433]:  <uuid>91b85b86-0d07-4df4-80d5-48fa343c00b8</uuid>
Dec  6 02:54:13 np0005548731 nova_compute[232433]:  <name>instance-000000ac</name>
Dec  6 02:54:13 np0005548731 nova_compute[232433]:  <memory>131072</memory>
Dec  6 02:54:13 np0005548731 nova_compute[232433]:  <vcpu>1</vcpu>
Dec  6 02:54:13 np0005548731 nova_compute[232433]:  <metadata>
Dec  6 02:54:13 np0005548731 nova_compute[232433]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec  6 02:54:13 np0005548731 nova_compute[232433]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec  6 02:54:13 np0005548731 nova_compute[232433]:      <nova:name>tempest-AttachVolumeShelveTestJSON-server-1180268370</nova:name>
Dec  6 02:54:13 np0005548731 nova_compute[232433]:      <nova:creationTime>2025-12-06 07:54:12</nova:creationTime>
Dec  6 02:54:13 np0005548731 nova_compute[232433]:      <nova:flavor name="m1.nano">
Dec  6 02:54:13 np0005548731 nova_compute[232433]:        <nova:memory>128</nova:memory>
Dec  6 02:54:13 np0005548731 nova_compute[232433]:        <nova:disk>1</nova:disk>
Dec  6 02:54:13 np0005548731 nova_compute[232433]:        <nova:swap>0</nova:swap>
Dec  6 02:54:13 np0005548731 nova_compute[232433]:        <nova:ephemeral>0</nova:ephemeral>
Dec  6 02:54:13 np0005548731 nova_compute[232433]:        <nova:vcpus>1</nova:vcpus>
Dec  6 02:54:13 np0005548731 nova_compute[232433]:      </nova:flavor>
Dec  6 02:54:13 np0005548731 nova_compute[232433]:      <nova:owner>
Dec  6 02:54:13 np0005548731 nova_compute[232433]:        <nova:user uuid="90c9de6e67724c898a8e23b05fbf14da">tempest-AttachVolumeShelveTestJSON-1510980811-project-member</nova:user>
Dec  6 02:54:13 np0005548731 nova_compute[232433]:        <nova:project uuid="cfa713d92cc94fa1b94404ed58b0563f">tempest-AttachVolumeShelveTestJSON-1510980811</nova:project>
Dec  6 02:54:13 np0005548731 nova_compute[232433]:      </nova:owner>
Dec  6 02:54:13 np0005548731 nova_compute[232433]:      <nova:root type="image" uuid="f1264656-a902-4225-be11-839d8f664fd7"/>
Dec  6 02:54:13 np0005548731 nova_compute[232433]:      <nova:ports>
Dec  6 02:54:13 np0005548731 nova_compute[232433]:        <nova:port uuid="ad09ca6a-7b57-4547-9e95-4976d37ac5f9">
Dec  6 02:54:13 np0005548731 nova_compute[232433]:          <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Dec  6 02:54:13 np0005548731 nova_compute[232433]:        </nova:port>
Dec  6 02:54:13 np0005548731 nova_compute[232433]:      </nova:ports>
Dec  6 02:54:13 np0005548731 nova_compute[232433]:    </nova:instance>
Dec  6 02:54:13 np0005548731 nova_compute[232433]:  </metadata>
Dec  6 02:54:13 np0005548731 nova_compute[232433]:  <sysinfo type="smbios">
Dec  6 02:54:13 np0005548731 nova_compute[232433]:    <system>
Dec  6 02:54:13 np0005548731 nova_compute[232433]:      <entry name="manufacturer">RDO</entry>
Dec  6 02:54:13 np0005548731 nova_compute[232433]:      <entry name="product">OpenStack Compute</entry>
Dec  6 02:54:13 np0005548731 nova_compute[232433]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec  6 02:54:13 np0005548731 nova_compute[232433]:      <entry name="serial">91b85b86-0d07-4df4-80d5-48fa343c00b8</entry>
Dec  6 02:54:13 np0005548731 nova_compute[232433]:      <entry name="uuid">91b85b86-0d07-4df4-80d5-48fa343c00b8</entry>
Dec  6 02:54:13 np0005548731 nova_compute[232433]:      <entry name="family">Virtual Machine</entry>
Dec  6 02:54:13 np0005548731 nova_compute[232433]:    </system>
Dec  6 02:54:13 np0005548731 nova_compute[232433]:  </sysinfo>
Dec  6 02:54:13 np0005548731 nova_compute[232433]:  <os>
Dec  6 02:54:13 np0005548731 nova_compute[232433]:    <type arch="x86_64" machine="q35">hvm</type>
Dec  6 02:54:13 np0005548731 nova_compute[232433]:    <boot dev="hd"/>
Dec  6 02:54:13 np0005548731 nova_compute[232433]:    <smbios mode="sysinfo"/>
Dec  6 02:54:13 np0005548731 nova_compute[232433]:  </os>
Dec  6 02:54:13 np0005548731 nova_compute[232433]:  <features>
Dec  6 02:54:13 np0005548731 nova_compute[232433]:    <acpi/>
Dec  6 02:54:13 np0005548731 nova_compute[232433]:    <apic/>
Dec  6 02:54:13 np0005548731 nova_compute[232433]:    <vmcoreinfo/>
Dec  6 02:54:13 np0005548731 nova_compute[232433]:  </features>
Dec  6 02:54:13 np0005548731 nova_compute[232433]:  <clock offset="utc">
Dec  6 02:54:13 np0005548731 nova_compute[232433]:    <timer name="pit" tickpolicy="delay"/>
Dec  6 02:54:13 np0005548731 nova_compute[232433]:    <timer name="rtc" tickpolicy="catchup"/>
Dec  6 02:54:13 np0005548731 nova_compute[232433]:    <timer name="hpet" present="no"/>
Dec  6 02:54:13 np0005548731 nova_compute[232433]:  </clock>
Dec  6 02:54:13 np0005548731 nova_compute[232433]:  <cpu mode="custom" match="exact">
Dec  6 02:54:13 np0005548731 nova_compute[232433]:    <model>Nehalem</model>
Dec  6 02:54:13 np0005548731 nova_compute[232433]:    <topology sockets="1" cores="1" threads="1"/>
Dec  6 02:54:13 np0005548731 nova_compute[232433]:  </cpu>
Dec  6 02:54:13 np0005548731 nova_compute[232433]:  <devices>
Dec  6 02:54:13 np0005548731 nova_compute[232433]:    <disk type="network" device="disk">
Dec  6 02:54:13 np0005548731 nova_compute[232433]:      <driver type="raw" cache="none"/>
Dec  6 02:54:13 np0005548731 nova_compute[232433]:      <source protocol="rbd" name="vms/91b85b86-0d07-4df4-80d5-48fa343c00b8_disk">
Dec  6 02:54:13 np0005548731 nova_compute[232433]:        <host name="192.168.122.100" port="6789"/>
Dec  6 02:54:13 np0005548731 nova_compute[232433]:        <host name="192.168.122.102" port="6789"/>
Dec  6 02:54:13 np0005548731 nova_compute[232433]:        <host name="192.168.122.101" port="6789"/>
Dec  6 02:54:13 np0005548731 nova_compute[232433]:      </source>
Dec  6 02:54:13 np0005548731 nova_compute[232433]:      <auth username="openstack">
Dec  6 02:54:13 np0005548731 nova_compute[232433]:        <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec  6 02:54:13 np0005548731 nova_compute[232433]:      </auth>
Dec  6 02:54:13 np0005548731 nova_compute[232433]:      <target dev="vda" bus="virtio"/>
Dec  6 02:54:13 np0005548731 nova_compute[232433]:    </disk>
Dec  6 02:54:13 np0005548731 nova_compute[232433]:    <disk type="network" device="cdrom">
Dec  6 02:54:13 np0005548731 nova_compute[232433]:      <driver type="raw" cache="none"/>
Dec  6 02:54:13 np0005548731 nova_compute[232433]:      <source protocol="rbd" name="vms/91b85b86-0d07-4df4-80d5-48fa343c00b8_disk.config">
Dec  6 02:54:13 np0005548731 nova_compute[232433]:        <host name="192.168.122.100" port="6789"/>
Dec  6 02:54:13 np0005548731 nova_compute[232433]:        <host name="192.168.122.102" port="6789"/>
Dec  6 02:54:13 np0005548731 nova_compute[232433]:        <host name="192.168.122.101" port="6789"/>
Dec  6 02:54:13 np0005548731 nova_compute[232433]:      </source>
Dec  6 02:54:13 np0005548731 nova_compute[232433]:      <auth username="openstack">
Dec  6 02:54:13 np0005548731 nova_compute[232433]:        <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec  6 02:54:13 np0005548731 nova_compute[232433]:      </auth>
Dec  6 02:54:13 np0005548731 nova_compute[232433]:      <target dev="sda" bus="sata"/>
Dec  6 02:54:13 np0005548731 nova_compute[232433]:    </disk>
Dec  6 02:54:13 np0005548731 nova_compute[232433]:    <interface type="ethernet">
Dec  6 02:54:13 np0005548731 nova_compute[232433]:      <mac address="fa:16:3e:5d:9b:dc"/>
Dec  6 02:54:13 np0005548731 nova_compute[232433]:      <model type="virtio"/>
Dec  6 02:54:13 np0005548731 nova_compute[232433]:      <driver name="vhost" rx_queue_size="512"/>
Dec  6 02:54:13 np0005548731 nova_compute[232433]:      <mtu size="1442"/>
Dec  6 02:54:13 np0005548731 nova_compute[232433]:      <target dev="tapad09ca6a-7b"/>
Dec  6 02:54:13 np0005548731 nova_compute[232433]:    </interface>
Dec  6 02:54:13 np0005548731 nova_compute[232433]:    <serial type="pty">
Dec  6 02:54:13 np0005548731 nova_compute[232433]:      <log file="/var/lib/nova/instances/91b85b86-0d07-4df4-80d5-48fa343c00b8/console.log" append="off"/>
Dec  6 02:54:13 np0005548731 nova_compute[232433]:    </serial>
Dec  6 02:54:13 np0005548731 nova_compute[232433]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec  6 02:54:13 np0005548731 nova_compute[232433]:    <video>
Dec  6 02:54:13 np0005548731 nova_compute[232433]:      <model type="virtio"/>
Dec  6 02:54:13 np0005548731 nova_compute[232433]:    </video>
Dec  6 02:54:13 np0005548731 nova_compute[232433]:    <input type="tablet" bus="usb"/>
Dec  6 02:54:13 np0005548731 nova_compute[232433]:    <input type="keyboard" bus="usb"/>
Dec  6 02:54:13 np0005548731 nova_compute[232433]:    <rng model="virtio">
Dec  6 02:54:13 np0005548731 nova_compute[232433]:      <backend model="random">/dev/urandom</backend>
Dec  6 02:54:13 np0005548731 nova_compute[232433]:    </rng>
Dec  6 02:54:13 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root"/>
Dec  6 02:54:13 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:54:13 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:54:13 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:54:13 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:54:13 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:54:13 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:54:13 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:54:13 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:54:13 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:54:13 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:54:13 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:54:13 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:54:13 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:54:13 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:54:13 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:54:13 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:54:13 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:54:13 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:54:13 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:54:13 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:54:13 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:54:13 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:54:13 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:54:13 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:54:13 np0005548731 nova_compute[232433]:    <controller type="usb" index="0"/>
Dec  6 02:54:13 np0005548731 nova_compute[232433]:    <memballoon model="virtio">
Dec  6 02:54:13 np0005548731 nova_compute[232433]:      <stats period="10"/>
Dec  6 02:54:13 np0005548731 nova_compute[232433]:    </memballoon>
Dec  6 02:54:13 np0005548731 nova_compute[232433]:  </devices>
Dec  6 02:54:13 np0005548731 nova_compute[232433]: </domain>
Dec  6 02:54:13 np0005548731 nova_compute[232433]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec  6 02:54:13 np0005548731 nova_compute[232433]: 2025-12-06 07:54:13.048 232437 DEBUG nova.compute.manager [None req-8139596e-88c3-4ba6-9cbf-01dd99fa4077 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] [instance: 91b85b86-0d07-4df4-80d5-48fa343c00b8] Preparing to wait for external event network-vif-plugged-ad09ca6a-7b57-4547-9e95-4976d37ac5f9 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec  6 02:54:13 np0005548731 nova_compute[232433]: 2025-12-06 07:54:13.048 232437 DEBUG oslo_concurrency.lockutils [None req-8139596e-88c3-4ba6-9cbf-01dd99fa4077 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] Acquiring lock "91b85b86-0d07-4df4-80d5-48fa343c00b8-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:54:13 np0005548731 nova_compute[232433]: 2025-12-06 07:54:13.048 232437 DEBUG oslo_concurrency.lockutils [None req-8139596e-88c3-4ba6-9cbf-01dd99fa4077 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] Lock "91b85b86-0d07-4df4-80d5-48fa343c00b8-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:54:13 np0005548731 nova_compute[232433]: 2025-12-06 07:54:13.049 232437 DEBUG oslo_concurrency.lockutils [None req-8139596e-88c3-4ba6-9cbf-01dd99fa4077 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] Lock "91b85b86-0d07-4df4-80d5-48fa343c00b8-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:54:13 np0005548731 nova_compute[232433]: 2025-12-06 07:54:13.049 232437 DEBUG nova.virt.libvirt.vif [None req-8139596e-88c3-4ba6-9cbf-01dd99fa4077 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-12-06T07:52:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-AttachVolumeShelveTestJSON-server-1180268370',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-attachvolumeshelvetestjson-server-1180268370',id=172,image_ref='f1264656-a902-4225-be11-839d8f664fd7',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name='tempest-keypair-763555621',keypairs=<?>,launch_index=0,launched_at=2025-12-06T07:53:03Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=4,progress=0,project_id='cfa713d92cc94fa1b94404ed58b0563f',ramdisk_id='',reservation_id='r-tqsa30w8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',
image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachVolumeShelveTestJSON-1510980811',owner_user_name='tempest-AttachVolumeShelveTestJSON-1510980811-project-member',shelved_at='2025-12-06T07:53:50.655408',shelved_host='compute-0.ctlplane.example.com',shelved_image_id='f1264656-a902-4225-be11-839d8f664fd7'},tags=<?>,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:54:04Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='90c9de6e67724c898a8e23b05fbf14da',uuid=91b85b86-0d07-4df4-80d5-48fa343c00b8,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='shelved_offloaded') vif={"id": "ad09ca6a-7b57-4547-9e95-4976d37ac5f9", "address": "fa:16:3e:5d:9b:dc", "network": {"id": "45904a2f-a5c2-4047-9c19-a87d36354c1b", "bridge": "br-int", "label": "tempest-AttachVolumeShelveTestJSON-1547381509-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.190", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cfa713d92cc94fa1b94404ed58b0563f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapad09ca6a-7b", "ovs_interfaceid": "ad09ca6a-7b57-4547-9e95-4976d37ac5f9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec  6 02:54:13 np0005548731 nova_compute[232433]: 2025-12-06 07:54:13.050 232437 DEBUG nova.network.os_vif_util [None req-8139596e-88c3-4ba6-9cbf-01dd99fa4077 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] Converting VIF {"id": "ad09ca6a-7b57-4547-9e95-4976d37ac5f9", "address": "fa:16:3e:5d:9b:dc", "network": {"id": "45904a2f-a5c2-4047-9c19-a87d36354c1b", "bridge": "br-int", "label": "tempest-AttachVolumeShelveTestJSON-1547381509-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.190", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cfa713d92cc94fa1b94404ed58b0563f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapad09ca6a-7b", "ovs_interfaceid": "ad09ca6a-7b57-4547-9e95-4976d37ac5f9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 02:54:13 np0005548731 nova_compute[232433]: 2025-12-06 07:54:13.050 232437 DEBUG nova.network.os_vif_util [None req-8139596e-88c3-4ba6-9cbf-01dd99fa4077 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5d:9b:dc,bridge_name='br-int',has_traffic_filtering=True,id=ad09ca6a-7b57-4547-9e95-4976d37ac5f9,network=Network(45904a2f-a5c2-4047-9c19-a87d36354c1b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapad09ca6a-7b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 02:54:13 np0005548731 nova_compute[232433]: 2025-12-06 07:54:13.051 232437 DEBUG os_vif [None req-8139596e-88c3-4ba6-9cbf-01dd99fa4077 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:5d:9b:dc,bridge_name='br-int',has_traffic_filtering=True,id=ad09ca6a-7b57-4547-9e95-4976d37ac5f9,network=Network(45904a2f-a5c2-4047-9c19-a87d36354c1b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapad09ca6a-7b') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec  6 02:54:13 np0005548731 nova_compute[232433]: 2025-12-06 07:54:13.051 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:54:13 np0005548731 nova_compute[232433]: 2025-12-06 07:54:13.052 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:54:13 np0005548731 nova_compute[232433]: 2025-12-06 07:54:13.052 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  6 02:54:13 np0005548731 nova_compute[232433]: 2025-12-06 07:54:13.057 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:54:13 np0005548731 nova_compute[232433]: 2025-12-06 07:54:13.057 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapad09ca6a-7b, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:54:13 np0005548731 nova_compute[232433]: 2025-12-06 07:54:13.058 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapad09ca6a-7b, col_values=(('external_ids', {'iface-id': 'ad09ca6a-7b57-4547-9e95-4976d37ac5f9', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:5d:9b:dc', 'vm-uuid': '91b85b86-0d07-4df4-80d5-48fa343c00b8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:54:13 np0005548731 nova_compute[232433]: 2025-12-06 07:54:13.097 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:54:13 np0005548731 NetworkManager[49182]: <info>  [1765007653.0983] manager: (tapad09ca6a-7b): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/388)
Dec  6 02:54:13 np0005548731 nova_compute[232433]: 2025-12-06 07:54:13.100 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  6 02:54:13 np0005548731 nova_compute[232433]: 2025-12-06 07:54:13.104 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:54:13 np0005548731 nova_compute[232433]: 2025-12-06 07:54:13.105 232437 INFO os_vif [None req-8139596e-88c3-4ba6-9cbf-01dd99fa4077 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:5d:9b:dc,bridge_name='br-int',has_traffic_filtering=True,id=ad09ca6a-7b57-4547-9e95-4976d37ac5f9,network=Network(45904a2f-a5c2-4047-9c19-a87d36354c1b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapad09ca6a-7b')#033[00m
Dec  6 02:54:13 np0005548731 nova_compute[232433]: 2025-12-06 07:54:13.168 232437 DEBUG nova.virt.libvirt.driver [None req-8139596e-88c3-4ba6-9cbf-01dd99fa4077 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  6 02:54:13 np0005548731 nova_compute[232433]: 2025-12-06 07:54:13.168 232437 DEBUG nova.virt.libvirt.driver [None req-8139596e-88c3-4ba6-9cbf-01dd99fa4077 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  6 02:54:13 np0005548731 nova_compute[232433]: 2025-12-06 07:54:13.168 232437 DEBUG nova.virt.libvirt.driver [None req-8139596e-88c3-4ba6-9cbf-01dd99fa4077 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] No VIF found with MAC fa:16:3e:5d:9b:dc, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec  6 02:54:13 np0005548731 nova_compute[232433]: 2025-12-06 07:54:13.169 232437 INFO nova.virt.libvirt.driver [None req-8139596e-88c3-4ba6-9cbf-01dd99fa4077 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] [instance: 91b85b86-0d07-4df4-80d5-48fa343c00b8] Using config drive#033[00m
Dec  6 02:54:13 np0005548731 nova_compute[232433]: 2025-12-06 07:54:13.194 232437 DEBUG nova.storage.rbd_utils [None req-8139596e-88c3-4ba6-9cbf-01dd99fa4077 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] rbd image 91b85b86-0d07-4df4-80d5-48fa343c00b8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:54:13 np0005548731 nova_compute[232433]: 2025-12-06 07:54:13.211 232437 DEBUG nova.objects.instance [None req-8139596e-88c3-4ba6-9cbf-01dd99fa4077 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] Lazy-loading 'ec2_ids' on Instance uuid 91b85b86-0d07-4df4-80d5-48fa343c00b8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:54:13 np0005548731 nova_compute[232433]: 2025-12-06 07:54:13.257 232437 DEBUG nova.objects.instance [None req-8139596e-88c3-4ba6-9cbf-01dd99fa4077 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] Lazy-loading 'keypairs' on Instance uuid 91b85b86-0d07-4df4-80d5-48fa343c00b8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:54:13 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  6 02:54:13 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3823144344' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  6 02:54:13 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  6 02:54:13 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3823144344' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  6 02:54:13 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e396 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:54:14 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:54:14 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:54:14 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:54:14.198 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:54:14 np0005548731 nova_compute[232433]: 2025-12-06 07:54:14.213 232437 INFO nova.virt.libvirt.driver [None req-8139596e-88c3-4ba6-9cbf-01dd99fa4077 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] [instance: 91b85b86-0d07-4df4-80d5-48fa343c00b8] Creating config drive at /var/lib/nova/instances/91b85b86-0d07-4df4-80d5-48fa343c00b8/disk.config#033[00m
Dec  6 02:54:14 np0005548731 nova_compute[232433]: 2025-12-06 07:54:14.221 232437 DEBUG oslo_concurrency.processutils [None req-8139596e-88c3-4ba6-9cbf-01dd99fa4077 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/91b85b86-0d07-4df4-80d5-48fa343c00b8/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpn4yykma7 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:54:14 np0005548731 nova_compute[232433]: 2025-12-06 07:54:14.385 232437 DEBUG oslo_concurrency.processutils [None req-8139596e-88c3-4ba6-9cbf-01dd99fa4077 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/91b85b86-0d07-4df4-80d5-48fa343c00b8/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpn4yykma7" returned: 0 in 0.163s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:54:14 np0005548731 nova_compute[232433]: 2025-12-06 07:54:14.409 232437 DEBUG nova.storage.rbd_utils [None req-8139596e-88c3-4ba6-9cbf-01dd99fa4077 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] rbd image 91b85b86-0d07-4df4-80d5-48fa343c00b8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:54:14 np0005548731 nova_compute[232433]: 2025-12-06 07:54:14.413 232437 DEBUG oslo_concurrency.processutils [None req-8139596e-88c3-4ba6-9cbf-01dd99fa4077 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/91b85b86-0d07-4df4-80d5-48fa343c00b8/disk.config 91b85b86-0d07-4df4-80d5-48fa343c00b8_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:54:14 np0005548731 nova_compute[232433]: 2025-12-06 07:54:14.485 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:54:15 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:54:15 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:54:15 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:54:15.039 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:54:15 np0005548731 nova_compute[232433]: 2025-12-06 07:54:15.210 232437 DEBUG oslo_concurrency.processutils [None req-8139596e-88c3-4ba6-9cbf-01dd99fa4077 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/91b85b86-0d07-4df4-80d5-48fa343c00b8/disk.config 91b85b86-0d07-4df4-80d5-48fa343c00b8_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.797s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:54:15 np0005548731 nova_compute[232433]: 2025-12-06 07:54:15.211 232437 INFO nova.virt.libvirt.driver [None req-8139596e-88c3-4ba6-9cbf-01dd99fa4077 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] [instance: 91b85b86-0d07-4df4-80d5-48fa343c00b8] Deleting local config drive /var/lib/nova/instances/91b85b86-0d07-4df4-80d5-48fa343c00b8/disk.config because it was imported into RBD.#033[00m
Dec  6 02:54:15 np0005548731 kernel: tapad09ca6a-7b: entered promiscuous mode
Dec  6 02:54:15 np0005548731 NetworkManager[49182]: <info>  [1765007655.2766] manager: (tapad09ca6a-7b): new Tun device (/org/freedesktop/NetworkManager/Devices/389)
Dec  6 02:54:15 np0005548731 nova_compute[232433]: 2025-12-06 07:54:15.309 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:54:15 np0005548731 ovn_controller[133927]: 2025-12-06T07:54:15Z|00854|binding|INFO|Claiming lport ad09ca6a-7b57-4547-9e95-4976d37ac5f9 for this chassis.
Dec  6 02:54:15 np0005548731 ovn_controller[133927]: 2025-12-06T07:54:15Z|00855|binding|INFO|ad09ca6a-7b57-4547-9e95-4976d37ac5f9: Claiming fa:16:3e:5d:9b:dc 10.100.0.9
Dec  6 02:54:15 np0005548731 nova_compute[232433]: 2025-12-06 07:54:15.320 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:54:15 np0005548731 NetworkManager[49182]: <info>  [1765007655.3262] manager: (patch-br-int-to-provnet-9e78c1a1-68f4-477a-abaa-13a98bde06e5): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/390)
Dec  6 02:54:15 np0005548731 NetworkManager[49182]: <info>  [1765007655.3279] manager: (patch-provnet-9e78c1a1-68f4-477a-abaa-13a98bde06e5-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/391)
Dec  6 02:54:15 np0005548731 nova_compute[232433]: 2025-12-06 07:54:15.326 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:54:15 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:54:15.328 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5d:9b:dc 10.100.0.9'], port_security=['fa:16:3e:5d:9b:dc 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '91b85b86-0d07-4df4-80d5-48fa343c00b8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-45904a2f-a5c2-4047-9c19-a87d36354c1b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'cfa713d92cc94fa1b94404ed58b0563f', 'neutron:revision_number': '7', 'neutron:security_group_ids': '1e15bf02-5e56-4488-babc-bd5f6809e0ec', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.190'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5411afcf-f935-4976-affc-7b12214f8e50, chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], logical_port=ad09ca6a-7b57-4547-9e95-4976d37ac5f9) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 02:54:15 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:54:15.329 143965 INFO neutron.agent.ovn.metadata.agent [-] Port ad09ca6a-7b57-4547-9e95-4976d37ac5f9 in datapath 45904a2f-a5c2-4047-9c19-a87d36354c1b bound to our chassis#033[00m
Dec  6 02:54:15 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:54:15.330 143965 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 45904a2f-a5c2-4047-9c19-a87d36354c1b#033[00m
Dec  6 02:54:15 np0005548731 systemd-udevd[313589]: Network interface NamePolicy= disabled on kernel command line.
Dec  6 02:54:15 np0005548731 systemd-machined[195355]: New machine qemu-87-instance-000000ac.
Dec  6 02:54:15 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:54:15.345 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[9c602fef-318e-49ee-9c63-18ff7a9160d9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:54:15 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:54:15.346 143965 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap45904a2f-a1 in ovnmeta-45904a2f-a5c2-4047-9c19-a87d36354c1b namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Dec  6 02:54:15 np0005548731 NetworkManager[49182]: <info>  [1765007655.3488] device (tapad09ca6a-7b): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec  6 02:54:15 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:54:15.349 236668 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap45904a2f-a0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Dec  6 02:54:15 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:54:15.349 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[8b0f6be8-8af4-4a61-9a85-cc710804a95b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:54:15 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:54:15.350 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[e99ce664-4f0c-429e-930f-5b3024d7742b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:54:15 np0005548731 NetworkManager[49182]: <info>  [1765007655.3507] device (tapad09ca6a-7b): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec  6 02:54:15 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:54:15.367 144079 DEBUG oslo.privsep.daemon [-] privsep: reply[cdf5c5a9-a8b1-425b-8e23-b629c7f625f2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:54:15 np0005548731 systemd[1]: Started Virtual Machine qemu-87-instance-000000ac.
Dec  6 02:54:15 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:54:15.385 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[11f36685-74c5-48ac-bd13-c45443c0c613]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:54:15 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:54:15.424 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[03a3030b-393c-4e3c-9b0d-628655aa5c66]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:54:15 np0005548731 systemd-udevd[313593]: Network interface NamePolicy= disabled on kernel command line.
Dec  6 02:54:15 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:54:15.435 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[27ec4676-bded-4da8-83f3-b95ca4aecf8b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:54:15 np0005548731 NetworkManager[49182]: <info>  [1765007655.4366] manager: (tap45904a2f-a0): new Veth device (/org/freedesktop/NetworkManager/Devices/392)
Dec  6 02:54:15 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:54:15.473 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[0270032a-33a5-4559-b081-3b8d6c4807cd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:54:15 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:54:15.476 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[650ed168-a953-451d-9416-dac48bc41724]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:54:15 np0005548731 NetworkManager[49182]: <info>  [1765007655.4964] device (tap45904a2f-a0): carrier: link connected
Dec  6 02:54:15 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:54:15.500 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[d9954e0c-a596-42e2-a1b2-2280a86d633e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:54:15 np0005548731 nova_compute[232433]: 2025-12-06 07:54:15.513 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:54:15 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:54:15.517 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[15e7272f-75ef-44dc-a544-d987dc9af3b1]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap45904a2f-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a8:67:fe'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 261], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 791750, 'reachable_time': 23239, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 313623, 'error': None, 'target': 'ovnmeta-45904a2f-a5c2-4047-9c19-a87d36354c1b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:54:15 np0005548731 nova_compute[232433]: 2025-12-06 07:54:15.533 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:54:15 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:54:15.536 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[ca8579d8-6fda-4408-a3f0-22bf85611995]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fea8:67fe'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 791750, 'tstamp': 791750}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 313624, 'error': None, 'target': 'ovnmeta-45904a2f-a5c2-4047-9c19-a87d36354c1b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:54:15 np0005548731 ovn_controller[133927]: 2025-12-06T07:54:15Z|00856|binding|INFO|Setting lport ad09ca6a-7b57-4547-9e95-4976d37ac5f9 ovn-installed in OVS
Dec  6 02:54:15 np0005548731 ovn_controller[133927]: 2025-12-06T07:54:15Z|00857|binding|INFO|Setting lport ad09ca6a-7b57-4547-9e95-4976d37ac5f9 up in Southbound
Dec  6 02:54:15 np0005548731 nova_compute[232433]: 2025-12-06 07:54:15.545 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:54:15 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:54:15.553 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[9270e991-77ff-4cfb-ba96-be5390289d73]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap45904a2f-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a8:67:fe'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 261], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 791750, 'reachable_time': 23239, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 313625, 'error': None, 'target': 'ovnmeta-45904a2f-a5c2-4047-9c19-a87d36354c1b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:54:15 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:54:15.582 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[6752747d-58fb-467b-89bb-526093ca5ed2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:54:15 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:54:15.639 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[42e4c1d9-688b-4a8e-91f7-b144a92a29e3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:54:15 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:54:15.641 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap45904a2f-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:54:15 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:54:15.641 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  6 02:54:15 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:54:15.641 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap45904a2f-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:54:15 np0005548731 nova_compute[232433]: 2025-12-06 07:54:15.643 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:54:15 np0005548731 NetworkManager[49182]: <info>  [1765007655.6439] manager: (tap45904a2f-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/393)
Dec  6 02:54:15 np0005548731 kernel: tap45904a2f-a0: entered promiscuous mode
Dec  6 02:54:15 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:54:15.646 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap45904a2f-a0, col_values=(('external_ids', {'iface-id': 'e43e784e-bee5-49c8-8bc7-c45a17996abf'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:54:15 np0005548731 nova_compute[232433]: 2025-12-06 07:54:15.646 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:54:15 np0005548731 ovn_controller[133927]: 2025-12-06T07:54:15Z|00858|binding|INFO|Releasing lport e43e784e-bee5-49c8-8bc7-c45a17996abf from this chassis (sb_readonly=0)
Dec  6 02:54:15 np0005548731 nova_compute[232433]: 2025-12-06 07:54:15.660 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:54:15 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:54:15.661 143965 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/45904a2f-a5c2-4047-9c19-a87d36354c1b.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/45904a2f-a5c2-4047-9c19-a87d36354c1b.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Dec  6 02:54:15 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:54:15.662 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[77ae2e11-7fee-46fa-b073-1c71ace37c70]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:54:15 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:54:15.663 143965 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec  6 02:54:15 np0005548731 ovn_metadata_agent[143960]: global
Dec  6 02:54:15 np0005548731 ovn_metadata_agent[143960]:    log         /dev/log local0 debug
Dec  6 02:54:15 np0005548731 ovn_metadata_agent[143960]:    log-tag     haproxy-metadata-proxy-45904a2f-a5c2-4047-9c19-a87d36354c1b
Dec  6 02:54:15 np0005548731 ovn_metadata_agent[143960]:    user        root
Dec  6 02:54:15 np0005548731 ovn_metadata_agent[143960]:    group       root
Dec  6 02:54:15 np0005548731 ovn_metadata_agent[143960]:    maxconn     1024
Dec  6 02:54:15 np0005548731 ovn_metadata_agent[143960]:    pidfile     /var/lib/neutron/external/pids/45904a2f-a5c2-4047-9c19-a87d36354c1b.pid.haproxy
Dec  6 02:54:15 np0005548731 ovn_metadata_agent[143960]:    daemon
Dec  6 02:54:15 np0005548731 ovn_metadata_agent[143960]: 
Dec  6 02:54:15 np0005548731 ovn_metadata_agent[143960]: defaults
Dec  6 02:54:15 np0005548731 ovn_metadata_agent[143960]:    log global
Dec  6 02:54:15 np0005548731 ovn_metadata_agent[143960]:    mode http
Dec  6 02:54:15 np0005548731 ovn_metadata_agent[143960]:    option httplog
Dec  6 02:54:15 np0005548731 ovn_metadata_agent[143960]:    option dontlognull
Dec  6 02:54:15 np0005548731 ovn_metadata_agent[143960]:    option http-server-close
Dec  6 02:54:15 np0005548731 ovn_metadata_agent[143960]:    option forwardfor
Dec  6 02:54:15 np0005548731 ovn_metadata_agent[143960]:    retries                 3
Dec  6 02:54:15 np0005548731 ovn_metadata_agent[143960]:    timeout http-request    30s
Dec  6 02:54:15 np0005548731 ovn_metadata_agent[143960]:    timeout connect         30s
Dec  6 02:54:15 np0005548731 ovn_metadata_agent[143960]:    timeout client          32s
Dec  6 02:54:15 np0005548731 ovn_metadata_agent[143960]:    timeout server          32s
Dec  6 02:54:15 np0005548731 ovn_metadata_agent[143960]:    timeout http-keep-alive 30s
Dec  6 02:54:15 np0005548731 ovn_metadata_agent[143960]: 
Dec  6 02:54:15 np0005548731 ovn_metadata_agent[143960]: 
Dec  6 02:54:15 np0005548731 ovn_metadata_agent[143960]: listen listener
Dec  6 02:54:15 np0005548731 ovn_metadata_agent[143960]:    bind 169.254.169.254:80
Dec  6 02:54:15 np0005548731 ovn_metadata_agent[143960]:    server metadata /var/lib/neutron/metadata_proxy
Dec  6 02:54:15 np0005548731 ovn_metadata_agent[143960]:    http-request add-header X-OVN-Network-ID 45904a2f-a5c2-4047-9c19-a87d36354c1b
Dec  6 02:54:15 np0005548731 ovn_metadata_agent[143960]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Dec  6 02:54:15 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:54:15.664 143965 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-45904a2f-a5c2-4047-9c19-a87d36354c1b', 'env', 'PROCESS_TAG=haproxy-45904a2f-a5c2-4047-9c19-a87d36354c1b', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/45904a2f-a5c2-4047-9c19-a87d36354c1b.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Dec  6 02:54:15 np0005548731 nova_compute[232433]: 2025-12-06 07:54:15.874 232437 DEBUG nova.compute.manager [req-2b902640-4eb3-4890-8c04-d9c5265114bd req-de5dae46-45f2-4ea4-84cb-c8ede7311b1e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 91b85b86-0d07-4df4-80d5-48fa343c00b8] Received event network-vif-plugged-ad09ca6a-7b57-4547-9e95-4976d37ac5f9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:54:15 np0005548731 nova_compute[232433]: 2025-12-06 07:54:15.875 232437 DEBUG oslo_concurrency.lockutils [req-2b902640-4eb3-4890-8c04-d9c5265114bd req-de5dae46-45f2-4ea4-84cb-c8ede7311b1e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "91b85b86-0d07-4df4-80d5-48fa343c00b8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:54:15 np0005548731 nova_compute[232433]: 2025-12-06 07:54:15.875 232437 DEBUG oslo_concurrency.lockutils [req-2b902640-4eb3-4890-8c04-d9c5265114bd req-de5dae46-45f2-4ea4-84cb-c8ede7311b1e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "91b85b86-0d07-4df4-80d5-48fa343c00b8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:54:15 np0005548731 nova_compute[232433]: 2025-12-06 07:54:15.875 232437 DEBUG oslo_concurrency.lockutils [req-2b902640-4eb3-4890-8c04-d9c5265114bd req-de5dae46-45f2-4ea4-84cb-c8ede7311b1e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "91b85b86-0d07-4df4-80d5-48fa343c00b8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:54:15 np0005548731 nova_compute[232433]: 2025-12-06 07:54:15.875 232437 DEBUG nova.compute.manager [req-2b902640-4eb3-4890-8c04-d9c5265114bd req-de5dae46-45f2-4ea4-84cb-c8ede7311b1e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 91b85b86-0d07-4df4-80d5-48fa343c00b8] Processing event network-vif-plugged-ad09ca6a-7b57-4547-9e95-4976d37ac5f9 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Dec  6 02:54:15 np0005548731 nova_compute[232433]: 2025-12-06 07:54:15.916 232437 DEBUG nova.virt.driver [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Emitting event <LifecycleEvent: 1765007655.9161994, 91b85b86-0d07-4df4-80d5-48fa343c00b8 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 02:54:15 np0005548731 nova_compute[232433]: 2025-12-06 07:54:15.917 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 91b85b86-0d07-4df4-80d5-48fa343c00b8] VM Started (Lifecycle Event)#033[00m
Dec  6 02:54:15 np0005548731 nova_compute[232433]: 2025-12-06 07:54:15.919 232437 DEBUG nova.compute.manager [None req-8139596e-88c3-4ba6-9cbf-01dd99fa4077 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] [instance: 91b85b86-0d07-4df4-80d5-48fa343c00b8] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec  6 02:54:15 np0005548731 nova_compute[232433]: 2025-12-06 07:54:15.922 232437 DEBUG nova.virt.libvirt.driver [None req-8139596e-88c3-4ba6-9cbf-01dd99fa4077 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] [instance: 91b85b86-0d07-4df4-80d5-48fa343c00b8] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Dec  6 02:54:15 np0005548731 nova_compute[232433]: 2025-12-06 07:54:15.925 232437 INFO nova.virt.libvirt.driver [-] [instance: 91b85b86-0d07-4df4-80d5-48fa343c00b8] Instance spawned successfully.#033[00m
Dec  6 02:54:15 np0005548731 nova_compute[232433]: 2025-12-06 07:54:15.935 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 91b85b86-0d07-4df4-80d5-48fa343c00b8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:54:15 np0005548731 nova_compute[232433]: 2025-12-06 07:54:15.939 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 91b85b86-0d07-4df4-80d5-48fa343c00b8] Synchronizing instance power state after lifecycle event "Started"; current vm_state: shelved_offloaded, current task_state: spawning, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  6 02:54:15 np0005548731 nova_compute[232433]: 2025-12-06 07:54:15.969 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 91b85b86-0d07-4df4-80d5-48fa343c00b8] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  6 02:54:15 np0005548731 nova_compute[232433]: 2025-12-06 07:54:15.970 232437 DEBUG nova.virt.driver [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Emitting event <LifecycleEvent: 1765007655.9163892, 91b85b86-0d07-4df4-80d5-48fa343c00b8 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 02:54:15 np0005548731 nova_compute[232433]: 2025-12-06 07:54:15.970 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 91b85b86-0d07-4df4-80d5-48fa343c00b8] VM Paused (Lifecycle Event)#033[00m
Dec  6 02:54:15 np0005548731 nova_compute[232433]: 2025-12-06 07:54:15.988 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 91b85b86-0d07-4df4-80d5-48fa343c00b8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:54:15 np0005548731 nova_compute[232433]: 2025-12-06 07:54:15.992 232437 DEBUG nova.virt.driver [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Emitting event <LifecycleEvent: 1765007655.9220192, 91b85b86-0d07-4df4-80d5-48fa343c00b8 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 02:54:15 np0005548731 nova_compute[232433]: 2025-12-06 07:54:15.993 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 91b85b86-0d07-4df4-80d5-48fa343c00b8] VM Resumed (Lifecycle Event)#033[00m
Dec  6 02:54:16 np0005548731 nova_compute[232433]: 2025-12-06 07:54:16.011 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 91b85b86-0d07-4df4-80d5-48fa343c00b8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:54:16 np0005548731 nova_compute[232433]: 2025-12-06 07:54:16.014 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 91b85b86-0d07-4df4-80d5-48fa343c00b8] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: shelved_offloaded, current task_state: spawning, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  6 02:54:16 np0005548731 podman[313699]: 2025-12-06 07:54:16.020852635 +0000 UTC m=+0.055328344 container create 643f750f3b457cc4f674b83ea30055d5fbaab80d1c58122dfd9235376c564302 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-45904a2f-a5c2-4047-9c19-a87d36354c1b, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.license=GPLv2)
Dec  6 02:54:16 np0005548731 nova_compute[232433]: 2025-12-06 07:54:16.034 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 91b85b86-0d07-4df4-80d5-48fa343c00b8] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  6 02:54:16 np0005548731 systemd[1]: Started libpod-conmon-643f750f3b457cc4f674b83ea30055d5fbaab80d1c58122dfd9235376c564302.scope.
Dec  6 02:54:16 np0005548731 systemd[1]: Started libcrun container.
Dec  6 02:54:16 np0005548731 podman[313699]: 2025-12-06 07:54:15.991704512 +0000 UTC m=+0.026180241 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec  6 02:54:16 np0005548731 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d0a3e4dd7fb9ea4050a47b0daf4a085ea925ce0292ba6ae4f58fb227ba6c0c77/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec  6 02:54:16 np0005548731 podman[313699]: 2025-12-06 07:54:16.098876044 +0000 UTC m=+0.133351773 container init 643f750f3b457cc4f674b83ea30055d5fbaab80d1c58122dfd9235376c564302 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-45904a2f-a5c2-4047-9c19-a87d36354c1b, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Dec  6 02:54:16 np0005548731 podman[313699]: 2025-12-06 07:54:16.104555683 +0000 UTC m=+0.139031392 container start 643f750f3b457cc4f674b83ea30055d5fbaab80d1c58122dfd9235376c564302 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-45904a2f-a5c2-4047-9c19-a87d36354c1b, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Dec  6 02:54:16 np0005548731 neutron-haproxy-ovnmeta-45904a2f-a5c2-4047-9c19-a87d36354c1b[313715]: [NOTICE]   (313719) : New worker (313721) forked
Dec  6 02:54:16 np0005548731 neutron-haproxy-ovnmeta-45904a2f-a5c2-4047-9c19-a87d36354c1b[313715]: [NOTICE]   (313719) : Loading success.
Dec  6 02:54:16 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:54:16 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:54:16 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:54:16.201 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:54:17 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:54:17 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:54:17 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:54:17.041 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:54:17 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e397 e397: 3 total, 3 up, 3 in
Dec  6 02:54:18 np0005548731 nova_compute[232433]: 2025-12-06 07:54:18.009 232437 DEBUG nova.compute.manager [req-842e52f1-0b90-43e1-bced-e9989452ce5a req-040e4c8e-69d3-4123-bae8-6c483a56cdf5 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 91b85b86-0d07-4df4-80d5-48fa343c00b8] Received event network-vif-plugged-ad09ca6a-7b57-4547-9e95-4976d37ac5f9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:54:18 np0005548731 nova_compute[232433]: 2025-12-06 07:54:18.009 232437 DEBUG oslo_concurrency.lockutils [req-842e52f1-0b90-43e1-bced-e9989452ce5a req-040e4c8e-69d3-4123-bae8-6c483a56cdf5 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "91b85b86-0d07-4df4-80d5-48fa343c00b8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:54:18 np0005548731 nova_compute[232433]: 2025-12-06 07:54:18.010 232437 DEBUG oslo_concurrency.lockutils [req-842e52f1-0b90-43e1-bced-e9989452ce5a req-040e4c8e-69d3-4123-bae8-6c483a56cdf5 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "91b85b86-0d07-4df4-80d5-48fa343c00b8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:54:18 np0005548731 nova_compute[232433]: 2025-12-06 07:54:18.010 232437 DEBUG oslo_concurrency.lockutils [req-842e52f1-0b90-43e1-bced-e9989452ce5a req-040e4c8e-69d3-4123-bae8-6c483a56cdf5 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "91b85b86-0d07-4df4-80d5-48fa343c00b8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:54:18 np0005548731 nova_compute[232433]: 2025-12-06 07:54:18.010 232437 DEBUG nova.compute.manager [req-842e52f1-0b90-43e1-bced-e9989452ce5a req-040e4c8e-69d3-4123-bae8-6c483a56cdf5 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 91b85b86-0d07-4df4-80d5-48fa343c00b8] No waiting events found dispatching network-vif-plugged-ad09ca6a-7b57-4547-9e95-4976d37ac5f9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 02:54:18 np0005548731 nova_compute[232433]: 2025-12-06 07:54:18.011 232437 WARNING nova.compute.manager [req-842e52f1-0b90-43e1-bced-e9989452ce5a req-040e4c8e-69d3-4123-bae8-6c483a56cdf5 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 91b85b86-0d07-4df4-80d5-48fa343c00b8] Received unexpected event network-vif-plugged-ad09ca6a-7b57-4547-9e95-4976d37ac5f9 for instance with vm_state shelved_offloaded and task_state spawning.#033[00m
Dec  6 02:54:18 np0005548731 nova_compute[232433]: 2025-12-06 07:54:18.098 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:54:18 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:54:18 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:54:18 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:54:18.204 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:54:18 np0005548731 nova_compute[232433]: 2025-12-06 07:54:18.415 232437 DEBUG nova.compute.manager [None req-8139596e-88c3-4ba6-9cbf-01dd99fa4077 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] [instance: 91b85b86-0d07-4df4-80d5-48fa343c00b8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:54:18 np0005548731 nova_compute[232433]: 2025-12-06 07:54:18.499 232437 DEBUG oslo_concurrency.lockutils [None req-8139596e-88c3-4ba6-9cbf-01dd99fa4077 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] Lock "91b85b86-0d07-4df4-80d5-48fa343c00b8" "released" by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" :: held 14.066s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:54:18 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e397 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:54:19 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:54:19 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:54:19 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:54:19.043 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:54:19 np0005548731 nova_compute[232433]: 2025-12-06 07:54:19.487 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:54:20 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:54:20 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:54:20 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:54:20.207 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:54:20 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e398 e398: 3 total, 3 up, 3 in
Dec  6 02:54:21 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:54:21 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 02:54:21 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:54:21.045 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 02:54:22 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:54:22 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:54:22 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:54:22.210 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:54:23 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:54:23 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:54:23 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:54:23.047 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:54:23 np0005548731 nova_compute[232433]: 2025-12-06 07:54:23.102 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:54:23 np0005548731 nova_compute[232433]: 2025-12-06 07:54:23.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:54:23 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e398 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:54:24 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:54:24 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:54:24 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:54:24.213 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:54:24 np0005548731 nova_compute[232433]: 2025-12-06 07:54:24.489 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:54:25 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:54:25 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:54:25 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:54:25.049 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:54:26 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:54:26 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:54:26 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:54:26.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:54:26 np0005548731 nova_compute[232433]: 2025-12-06 07:54:26.231 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:54:26 np0005548731 ovn_controller[133927]: 2025-12-06T07:54:26Z|00859|binding|INFO|Releasing lport e43e784e-bee5-49c8-8bc7-c45a17996abf from this chassis (sb_readonly=0)
Dec  6 02:54:26 np0005548731 nova_compute[232433]: 2025-12-06 07:54:26.357 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:54:26 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:54:26.714 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=69, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:ec:b3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '32:72:e7:89:e0:7d'}, ipsec=False) old=SB_Global(nb_cfg=68) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 02:54:26 np0005548731 nova_compute[232433]: 2025-12-06 07:54:26.714 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:54:26 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:54:26.716 143965 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec  6 02:54:27 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:54:27 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:54:27 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:54:27.051 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:54:27 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:54:27.719 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=9f96b960-b4f2-40bd-ae99-08121f5e8b78, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '69'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:54:27 np0005548731 podman[313786]: 2025-12-06 07:54:27.897509854 +0000 UTC m=+0.055710304 container health_status 26c2ce9bdb83c6db5ccad7f5b2a7cb4e9354ab0235ad2f942ea42c8feda2142b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible)
Dec  6 02:54:27 np0005548731 podman[313788]: 2025-12-06 07:54:27.912326316 +0000 UTC m=+0.065983715 container health_status ae4fd08cdec31d6ae2a6b91ffff7deb6ca5a8375cb52ee4b685cc6f25796824e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Dec  6 02:54:27 np0005548731 podman[313787]: 2025-12-06 07:54:27.928289767 +0000 UTC m=+0.085310718 container health_status a118c3becf56cfbcae208065771ebf10a65162bb7b96389971293e534823fb78 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0)
Dec  6 02:54:28 np0005548731 nova_compute[232433]: 2025-12-06 07:54:28.103 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:54:28 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:54:28 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:54:28 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:54:28.224 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:54:28 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e399 e399: 3 total, 3 up, 3 in
Dec  6 02:54:28 np0005548731 ovn_controller[133927]: 2025-12-06T07:54:28Z|00113|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:5d:9b:dc 10.100.0.9
Dec  6 02:54:28 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:54:29 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:54:29 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:54:29 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:54:29.053 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:54:29 np0005548731 nova_compute[232433]: 2025-12-06 07:54:29.111 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:54:29 np0005548731 nova_compute[232433]: 2025-12-06 07:54:29.111 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:54:29 np0005548731 nova_compute[232433]: 2025-12-06 07:54:29.111 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  6 02:54:29 np0005548731 nova_compute[232433]: 2025-12-06 07:54:29.111 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  6 02:54:29 np0005548731 nova_compute[232433]: 2025-12-06 07:54:29.491 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:54:29 np0005548731 nova_compute[232433]: 2025-12-06 07:54:29.896 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Acquiring lock "refresh_cache-91b85b86-0d07-4df4-80d5-48fa343c00b8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 02:54:29 np0005548731 nova_compute[232433]: 2025-12-06 07:54:29.896 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Acquired lock "refresh_cache-91b85b86-0d07-4df4-80d5-48fa343c00b8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 02:54:29 np0005548731 nova_compute[232433]: 2025-12-06 07:54:29.896 232437 DEBUG nova.network.neutron [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] [instance: 91b85b86-0d07-4df4-80d5-48fa343c00b8] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Dec  6 02:54:29 np0005548731 nova_compute[232433]: 2025-12-06 07:54:29.897 232437 DEBUG nova.objects.instance [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lazy-loading 'info_cache' on Instance uuid 91b85b86-0d07-4df4-80d5-48fa343c00b8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:54:30 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:54:30 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:54:30 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:54:30.227 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:54:31 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:54:31 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:54:31 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:54:31.054 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:54:32 np0005548731 nova_compute[232433]: 2025-12-06 07:54:32.205 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:54:32 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:54:32 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:54:32 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:54:32.230 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:54:33 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:54:33 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:54:33 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:54:33.057 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:54:33 np0005548731 nova_compute[232433]: 2025-12-06 07:54:33.140 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:54:33 np0005548731 nova_compute[232433]: 2025-12-06 07:54:33.204 232437 DEBUG nova.network.neutron [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] [instance: 91b85b86-0d07-4df4-80d5-48fa343c00b8] Updating instance_info_cache with network_info: [{"id": "ad09ca6a-7b57-4547-9e95-4976d37ac5f9", "address": "fa:16:3e:5d:9b:dc", "network": {"id": "45904a2f-a5c2-4047-9c19-a87d36354c1b", "bridge": "br-int", "label": "tempest-AttachVolumeShelveTestJSON-1547381509-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.190", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cfa713d92cc94fa1b94404ed58b0563f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapad09ca6a-7b", "ovs_interfaceid": "ad09ca6a-7b57-4547-9e95-4976d37ac5f9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 02:54:33 np0005548731 nova_compute[232433]: 2025-12-06 07:54:33.231 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Releasing lock "refresh_cache-91b85b86-0d07-4df4-80d5-48fa343c00b8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 02:54:33 np0005548731 nova_compute[232433]: 2025-12-06 07:54:33.231 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] [instance: 91b85b86-0d07-4df4-80d5-48fa343c00b8] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Dec  6 02:54:33 np0005548731 nova_compute[232433]: 2025-12-06 07:54:33.232 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:54:33 np0005548731 nova_compute[232433]: 2025-12-06 07:54:33.232 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:54:33 np0005548731 nova_compute[232433]: 2025-12-06 07:54:33.233 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:54:33 np0005548731 nova_compute[232433]: 2025-12-06 07:54:33.233 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  6 02:54:33 np0005548731 ovn_controller[133927]: 2025-12-06T07:54:33Z|00860|binding|INFO|Releasing lport e43e784e-bee5-49c8-8bc7-c45a17996abf from this chassis (sb_readonly=0)
Dec  6 02:54:33 np0005548731 nova_compute[232433]: 2025-12-06 07:54:33.296 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:54:34 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:54:34 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:54:34 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 02:54:34 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:54:34.233 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 02:54:34 np0005548731 nova_compute[232433]: 2025-12-06 07:54:34.493 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:54:35 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:54:35 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:54:35 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:54:35.058 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:54:35 np0005548731 nova_compute[232433]: 2025-12-06 07:54:35.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:54:35 np0005548731 nova_compute[232433]: 2025-12-06 07:54:35.128 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:54:35 np0005548731 nova_compute[232433]: 2025-12-06 07:54:35.129 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:54:35 np0005548731 nova_compute[232433]: 2025-12-06 07:54:35.129 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:54:35 np0005548731 nova_compute[232433]: 2025-12-06 07:54:35.129 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  6 02:54:35 np0005548731 nova_compute[232433]: 2025-12-06 07:54:35.130 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:54:35 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 02:54:35 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1322277353' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 02:54:35 np0005548731 nova_compute[232433]: 2025-12-06 07:54:35.574 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.444s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:54:35 np0005548731 nova_compute[232433]: 2025-12-06 07:54:35.656 232437 DEBUG nova.virt.libvirt.driver [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] skipping disk for instance-000000ac as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  6 02:54:35 np0005548731 nova_compute[232433]: 2025-12-06 07:54:35.657 232437 DEBUG nova.virt.libvirt.driver [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] skipping disk for instance-000000ac as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  6 02:54:35 np0005548731 nova_compute[232433]: 2025-12-06 07:54:35.826 232437 WARNING nova.virt.libvirt.driver [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  6 02:54:35 np0005548731 nova_compute[232433]: 2025-12-06 07:54:35.828 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4069MB free_disk=20.942642211914062GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  6 02:54:35 np0005548731 nova_compute[232433]: 2025-12-06 07:54:35.828 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:54:35 np0005548731 nova_compute[232433]: 2025-12-06 07:54:35.828 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:54:35 np0005548731 nova_compute[232433]: 2025-12-06 07:54:35.932 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Instance 91b85b86-0d07-4df4-80d5-48fa343c00b8 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  6 02:54:35 np0005548731 nova_compute[232433]: 2025-12-06 07:54:35.932 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  6 02:54:35 np0005548731 nova_compute[232433]: 2025-12-06 07:54:35.932 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  6 02:54:35 np0005548731 nova_compute[232433]: 2025-12-06 07:54:35.981 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:54:36 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:54:36 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:54:36 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:54:36.235 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:54:36 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 02:54:36 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3167987060' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 02:54:36 np0005548731 nova_compute[232433]: 2025-12-06 07:54:36.414 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.433s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:54:36 np0005548731 nova_compute[232433]: 2025-12-06 07:54:36.420 232437 DEBUG nova.compute.provider_tree [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Inventory has not changed in ProviderTree for provider: 6d00757a-082f-486d-ae84-869a2ba2e6e7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  6 02:54:36 np0005548731 nova_compute[232433]: 2025-12-06 07:54:36.441 232437 DEBUG nova.scheduler.client.report [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Inventory has not changed for provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  6 02:54:36 np0005548731 nova_compute[232433]: 2025-12-06 07:54:36.490 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  6 02:54:36 np0005548731 nova_compute[232433]: 2025-12-06 07:54:36.490 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.662s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:54:36 np0005548731 nova_compute[232433]: 2025-12-06 07:54:36.942 232437 DEBUG oslo_concurrency.lockutils [None req-d0d812f4-0a4c-4ad8-a6bc-fdce86409dbf 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] Acquiring lock "91b85b86-0d07-4df4-80d5-48fa343c00b8" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:54:36 np0005548731 nova_compute[232433]: 2025-12-06 07:54:36.943 232437 DEBUG oslo_concurrency.lockutils [None req-d0d812f4-0a4c-4ad8-a6bc-fdce86409dbf 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] Lock "91b85b86-0d07-4df4-80d5-48fa343c00b8" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:54:36 np0005548731 nova_compute[232433]: 2025-12-06 07:54:36.943 232437 DEBUG oslo_concurrency.lockutils [None req-d0d812f4-0a4c-4ad8-a6bc-fdce86409dbf 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] Acquiring lock "91b85b86-0d07-4df4-80d5-48fa343c00b8-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:54:36 np0005548731 nova_compute[232433]: 2025-12-06 07:54:36.943 232437 DEBUG oslo_concurrency.lockutils [None req-d0d812f4-0a4c-4ad8-a6bc-fdce86409dbf 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] Lock "91b85b86-0d07-4df4-80d5-48fa343c00b8-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:54:36 np0005548731 nova_compute[232433]: 2025-12-06 07:54:36.943 232437 DEBUG oslo_concurrency.lockutils [None req-d0d812f4-0a4c-4ad8-a6bc-fdce86409dbf 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] Lock "91b85b86-0d07-4df4-80d5-48fa343c00b8-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:54:36 np0005548731 nova_compute[232433]: 2025-12-06 07:54:36.945 232437 INFO nova.compute.manager [None req-d0d812f4-0a4c-4ad8-a6bc-fdce86409dbf 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] [instance: 91b85b86-0d07-4df4-80d5-48fa343c00b8] Terminating instance#033[00m
Dec  6 02:54:36 np0005548731 nova_compute[232433]: 2025-12-06 07:54:36.946 232437 DEBUG nova.compute.manager [None req-d0d812f4-0a4c-4ad8-a6bc-fdce86409dbf 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] [instance: 91b85b86-0d07-4df4-80d5-48fa343c00b8] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec  6 02:54:37 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:54:37 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:54:37 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:54:37.060 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:54:37 np0005548731 kernel: tapad09ca6a-7b (unregistering): left promiscuous mode
Dec  6 02:54:37 np0005548731 NetworkManager[49182]: <info>  [1765007677.2517] device (tapad09ca6a-7b): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec  6 02:54:37 np0005548731 ovn_controller[133927]: 2025-12-06T07:54:37Z|00861|binding|INFO|Releasing lport ad09ca6a-7b57-4547-9e95-4976d37ac5f9 from this chassis (sb_readonly=0)
Dec  6 02:54:37 np0005548731 ovn_controller[133927]: 2025-12-06T07:54:37Z|00862|binding|INFO|Setting lport ad09ca6a-7b57-4547-9e95-4976d37ac5f9 down in Southbound
Dec  6 02:54:37 np0005548731 ovn_controller[133927]: 2025-12-06T07:54:37Z|00863|binding|INFO|Removing iface tapad09ca6a-7b ovn-installed in OVS
Dec  6 02:54:37 np0005548731 nova_compute[232433]: 2025-12-06 07:54:37.302 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:54:37 np0005548731 nova_compute[232433]: 2025-12-06 07:54:37.304 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:54:37 np0005548731 nova_compute[232433]: 2025-12-06 07:54:37.317 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:54:37 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:54:37.330 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5d:9b:dc 10.100.0.9'], port_security=['fa:16:3e:5d:9b:dc 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '91b85b86-0d07-4df4-80d5-48fa343c00b8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-45904a2f-a5c2-4047-9c19-a87d36354c1b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'cfa713d92cc94fa1b94404ed58b0563f', 'neutron:revision_number': '9', 'neutron:security_group_ids': '1e15bf02-5e56-4488-babc-bd5f6809e0ec', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.190', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5411afcf-f935-4976-affc-7b12214f8e50, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], logical_port=ad09ca6a-7b57-4547-9e95-4976d37ac5f9) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 02:54:37 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:54:37.331 143965 INFO neutron.agent.ovn.metadata.agent [-] Port ad09ca6a-7b57-4547-9e95-4976d37ac5f9 in datapath 45904a2f-a5c2-4047-9c19-a87d36354c1b unbound from our chassis#033[00m
Dec  6 02:54:37 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:54:37.333 143965 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 45904a2f-a5c2-4047-9c19-a87d36354c1b, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec  6 02:54:37 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:54:37.333 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[22fa3ef4-680f-469c-b3f3-ff893939c787]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:54:37 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:54:37.334 143965 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-45904a2f-a5c2-4047-9c19-a87d36354c1b namespace which is not needed anymore#033[00m
Dec  6 02:54:37 np0005548731 systemd[1]: machine-qemu\x2d87\x2dinstance\x2d000000ac.scope: Deactivated successfully.
Dec  6 02:54:37 np0005548731 systemd[1]: machine-qemu\x2d87\x2dinstance\x2d000000ac.scope: Consumed 14.616s CPU time.
Dec  6 02:54:37 np0005548731 systemd-machined[195355]: Machine qemu-87-instance-000000ac terminated.
Dec  6 02:54:37 np0005548731 neutron-haproxy-ovnmeta-45904a2f-a5c2-4047-9c19-a87d36354c1b[313715]: [NOTICE]   (313719) : haproxy version is 2.8.14-c23fe91
Dec  6 02:54:37 np0005548731 neutron-haproxy-ovnmeta-45904a2f-a5c2-4047-9c19-a87d36354c1b[313715]: [NOTICE]   (313719) : path to executable is /usr/sbin/haproxy
Dec  6 02:54:37 np0005548731 neutron-haproxy-ovnmeta-45904a2f-a5c2-4047-9c19-a87d36354c1b[313715]: [WARNING]  (313719) : Exiting Master process...
Dec  6 02:54:37 np0005548731 neutron-haproxy-ovnmeta-45904a2f-a5c2-4047-9c19-a87d36354c1b[313715]: [WARNING]  (313719) : Exiting Master process...
Dec  6 02:54:37 np0005548731 neutron-haproxy-ovnmeta-45904a2f-a5c2-4047-9c19-a87d36354c1b[313715]: [ALERT]    (313719) : Current worker (313721) exited with code 143 (Terminated)
Dec  6 02:54:37 np0005548731 neutron-haproxy-ovnmeta-45904a2f-a5c2-4047-9c19-a87d36354c1b[313715]: [WARNING]  (313719) : All workers exited. Exiting... (0)
Dec  6 02:54:37 np0005548731 systemd[1]: libpod-643f750f3b457cc4f674b83ea30055d5fbaab80d1c58122dfd9235376c564302.scope: Deactivated successfully.
Dec  6 02:54:37 np0005548731 podman[313923]: 2025-12-06 07:54:37.45530501 +0000 UTC m=+0.041247521 container died 643f750f3b457cc4f674b83ea30055d5fbaab80d1c58122dfd9235376c564302 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-45904a2f-a5c2-4047-9c19-a87d36354c1b, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec  6 02:54:37 np0005548731 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-643f750f3b457cc4f674b83ea30055d5fbaab80d1c58122dfd9235376c564302-userdata-shm.mount: Deactivated successfully.
Dec  6 02:54:37 np0005548731 systemd[1]: var-lib-containers-storage-overlay-d0a3e4dd7fb9ea4050a47b0daf4a085ea925ce0292ba6ae4f58fb227ba6c0c77-merged.mount: Deactivated successfully.
Dec  6 02:54:37 np0005548731 nova_compute[232433]: 2025-12-06 07:54:37.491 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:54:37 np0005548731 podman[313923]: 2025-12-06 07:54:37.495951914 +0000 UTC m=+0.081894435 container cleanup 643f750f3b457cc4f674b83ea30055d5fbaab80d1c58122dfd9235376c564302 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-45904a2f-a5c2-4047-9c19-a87d36354c1b, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS)
Dec  6 02:54:37 np0005548731 systemd[1]: libpod-conmon-643f750f3b457cc4f674b83ea30055d5fbaab80d1c58122dfd9235376c564302.scope: Deactivated successfully.
Dec  6 02:54:37 np0005548731 podman[313953]: 2025-12-06 07:54:37.557888759 +0000 UTC m=+0.041649500 container remove 643f750f3b457cc4f674b83ea30055d5fbaab80d1c58122dfd9235376c564302 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-45904a2f-a5c2-4047-9c19-a87d36354c1b, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec  6 02:54:37 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:54:37.565 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[b48f6272-79f6-46e1-8ece-208d37cb20df]: (4, ('Sat Dec  6 07:54:37 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-45904a2f-a5c2-4047-9c19-a87d36354c1b (643f750f3b457cc4f674b83ea30055d5fbaab80d1c58122dfd9235376c564302)\n643f750f3b457cc4f674b83ea30055d5fbaab80d1c58122dfd9235376c564302\nSat Dec  6 07:54:37 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-45904a2f-a5c2-4047-9c19-a87d36354c1b (643f750f3b457cc4f674b83ea30055d5fbaab80d1c58122dfd9235376c564302)\n643f750f3b457cc4f674b83ea30055d5fbaab80d1c58122dfd9235376c564302\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:54:37 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:54:37.569 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[2c004c76-3295-439c-a229-1a03f83eb1ec]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:54:37 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:54:37.570 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap45904a2f-a0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:54:37 np0005548731 nova_compute[232433]: 2025-12-06 07:54:37.573 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:54:37 np0005548731 kernel: tap45904a2f-a0: left promiscuous mode
Dec  6 02:54:37 np0005548731 nova_compute[232433]: 2025-12-06 07:54:37.578 232437 INFO nova.virt.libvirt.driver [-] [instance: 91b85b86-0d07-4df4-80d5-48fa343c00b8] Instance destroyed successfully.#033[00m
Dec  6 02:54:37 np0005548731 nova_compute[232433]: 2025-12-06 07:54:37.579 232437 DEBUG nova.objects.instance [None req-d0d812f4-0a4c-4ad8-a6bc-fdce86409dbf 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] Lazy-loading 'resources' on Instance uuid 91b85b86-0d07-4df4-80d5-48fa343c00b8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:54:37 np0005548731 nova_compute[232433]: 2025-12-06 07:54:37.591 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:54:37 np0005548731 nova_compute[232433]: 2025-12-06 07:54:37.592 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:54:37 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:54:37.594 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[4fb50c55-c657-4dec-b722-dfac2d0fafa5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:54:37 np0005548731 nova_compute[232433]: 2025-12-06 07:54:37.598 232437 DEBUG nova.virt.libvirt.vif [None req-d0d812f4-0a4c-4ad8-a6bc-fdce86409dbf 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-12-06T07:52:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-AttachVolumeShelveTestJSON-server-1180268370',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-attachvolumeshelvetestjson-server-1180268370',id=172,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEHi/zU/wIK9gDsaOwcMdl3RsHvsLUGXMCp6e+v7Vsr1tSU1UeVN9QkmLR8bRL7zUBTSmDE2iL72n56YVoqmlRT/okHDeuUDKoH9btDdrzNPAdyAh2Xwe7f5FUrx+EbYFg==',key_name='tempest-keypair-763555621',keypairs=<?>,launch_index=0,launched_at=2025-12-06T07:54:18Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='cfa713d92cc94fa1b94404ed58b0563f',ramdisk_id='',reservation_id='r-tqsa30w8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachVolumeShelveTestJSON-1510980811',owner_user_name='tempest-AttachVolumeShelveTestJSON-1510980811-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T07:54:18Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='90c9de6e67724c898a8e23b05fbf14da',uuid=91b85b86-0d07-4df4-80d5-48fa343c00b8,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "ad09ca6a-7b57-4547-9e95-4976d37ac5f9", "address": "fa:16:3e:5d:9b:dc", "network": {"id": "45904a2f-a5c2-4047-9c19-a87d36354c1b", "bridge": "br-int", "label": "tempest-AttachVolumeShelveTestJSON-1547381509-network", "subnets": [{"cidr": 
"10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.190", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cfa713d92cc94fa1b94404ed58b0563f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapad09ca6a-7b", "ovs_interfaceid": "ad09ca6a-7b57-4547-9e95-4976d37ac5f9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec  6 02:54:37 np0005548731 nova_compute[232433]: 2025-12-06 07:54:37.598 232437 DEBUG nova.network.os_vif_util [None req-d0d812f4-0a4c-4ad8-a6bc-fdce86409dbf 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] Converting VIF {"id": "ad09ca6a-7b57-4547-9e95-4976d37ac5f9", "address": "fa:16:3e:5d:9b:dc", "network": {"id": "45904a2f-a5c2-4047-9c19-a87d36354c1b", "bridge": "br-int", "label": "tempest-AttachVolumeShelveTestJSON-1547381509-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.190", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cfa713d92cc94fa1b94404ed58b0563f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapad09ca6a-7b", "ovs_interfaceid": "ad09ca6a-7b57-4547-9e95-4976d37ac5f9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 02:54:37 np0005548731 nova_compute[232433]: 2025-12-06 07:54:37.599 232437 DEBUG nova.network.os_vif_util [None req-d0d812f4-0a4c-4ad8-a6bc-fdce86409dbf 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:5d:9b:dc,bridge_name='br-int',has_traffic_filtering=True,id=ad09ca6a-7b57-4547-9e95-4976d37ac5f9,network=Network(45904a2f-a5c2-4047-9c19-a87d36354c1b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapad09ca6a-7b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 02:54:37 np0005548731 nova_compute[232433]: 2025-12-06 07:54:37.599 232437 DEBUG os_vif [None req-d0d812f4-0a4c-4ad8-a6bc-fdce86409dbf 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:5d:9b:dc,bridge_name='br-int',has_traffic_filtering=True,id=ad09ca6a-7b57-4547-9e95-4976d37ac5f9,network=Network(45904a2f-a5c2-4047-9c19-a87d36354c1b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapad09ca6a-7b') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec  6 02:54:37 np0005548731 nova_compute[232433]: 2025-12-06 07:54:37.601 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:54:37 np0005548731 nova_compute[232433]: 2025-12-06 07:54:37.601 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapad09ca6a-7b, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:54:37 np0005548731 nova_compute[232433]: 2025-12-06 07:54:37.604 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:54:37 np0005548731 nova_compute[232433]: 2025-12-06 07:54:37.606 232437 INFO os_vif [None req-d0d812f4-0a4c-4ad8-a6bc-fdce86409dbf 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:5d:9b:dc,bridge_name='br-int',has_traffic_filtering=True,id=ad09ca6a-7b57-4547-9e95-4976d37ac5f9,network=Network(45904a2f-a5c2-4047-9c19-a87d36354c1b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapad09ca6a-7b')#033[00m
Dec  6 02:54:37 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:54:37.606 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[ca1d88d9-0b97-48d8-96dc-e28ac9e9ec46]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:54:37 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:54:37.608 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[dcc61314-e713-47a6-bdbc-9c7544cac5e7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:54:37 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:54:37.624 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[5d1f7761-d74f-4de2-bc20-7d5b8c41d659]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 791742, 'reachable_time': 42997, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 313983, 'error': None, 'target': 'ovnmeta-45904a2f-a5c2-4047-9c19-a87d36354c1b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:54:37 np0005548731 systemd[1]: run-netns-ovnmeta\x2d45904a2f\x2da5c2\x2d4047\x2d9c19\x2da87d36354c1b.mount: Deactivated successfully.
Dec  6 02:54:37 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:54:37.629 144079 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-45904a2f-a5c2-4047-9c19-a87d36354c1b deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Dec  6 02:54:37 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:54:37.629 144079 DEBUG oslo.privsep.daemon [-] privsep: reply[a43d9f3f-8c16-4f56-94e4-c5f5c3412b83]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:54:38 np0005548731 nova_compute[232433]: 2025-12-06 07:54:38.005 232437 DEBUG nova.compute.manager [req-0b73d359-42b4-4e06-99fa-aa8def297675 req-425ae3c8-1424-420a-b129-cc4af29db3f2 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 91b85b86-0d07-4df4-80d5-48fa343c00b8] Received event network-vif-unplugged-ad09ca6a-7b57-4547-9e95-4976d37ac5f9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:54:38 np0005548731 nova_compute[232433]: 2025-12-06 07:54:38.005 232437 DEBUG oslo_concurrency.lockutils [req-0b73d359-42b4-4e06-99fa-aa8def297675 req-425ae3c8-1424-420a-b129-cc4af29db3f2 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "91b85b86-0d07-4df4-80d5-48fa343c00b8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:54:38 np0005548731 nova_compute[232433]: 2025-12-06 07:54:38.005 232437 DEBUG oslo_concurrency.lockutils [req-0b73d359-42b4-4e06-99fa-aa8def297675 req-425ae3c8-1424-420a-b129-cc4af29db3f2 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "91b85b86-0d07-4df4-80d5-48fa343c00b8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:54:38 np0005548731 nova_compute[232433]: 2025-12-06 07:54:38.005 232437 DEBUG oslo_concurrency.lockutils [req-0b73d359-42b4-4e06-99fa-aa8def297675 req-425ae3c8-1424-420a-b129-cc4af29db3f2 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "91b85b86-0d07-4df4-80d5-48fa343c00b8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:54:38 np0005548731 nova_compute[232433]: 2025-12-06 07:54:38.006 232437 DEBUG nova.compute.manager [req-0b73d359-42b4-4e06-99fa-aa8def297675 req-425ae3c8-1424-420a-b129-cc4af29db3f2 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 91b85b86-0d07-4df4-80d5-48fa343c00b8] No waiting events found dispatching network-vif-unplugged-ad09ca6a-7b57-4547-9e95-4976d37ac5f9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 02:54:38 np0005548731 nova_compute[232433]: 2025-12-06 07:54:38.006 232437 DEBUG nova.compute.manager [req-0b73d359-42b4-4e06-99fa-aa8def297675 req-425ae3c8-1424-420a-b129-cc4af29db3f2 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 91b85b86-0d07-4df4-80d5-48fa343c00b8] Received event network-vif-unplugged-ad09ca6a-7b57-4547-9e95-4976d37ac5f9 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Dec  6 02:54:38 np0005548731 nova_compute[232433]: 2025-12-06 07:54:38.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:54:38 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:54:38 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:54:38 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:54:38.239 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:54:38 np0005548731 nova_compute[232433]: 2025-12-06 07:54:38.931 232437 INFO nova.virt.libvirt.driver [None req-d0d812f4-0a4c-4ad8-a6bc-fdce86409dbf 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] [instance: 91b85b86-0d07-4df4-80d5-48fa343c00b8] Deleting instance files /var/lib/nova/instances/91b85b86-0d07-4df4-80d5-48fa343c00b8_del#033[00m
Dec  6 02:54:38 np0005548731 nova_compute[232433]: 2025-12-06 07:54:38.932 232437 INFO nova.virt.libvirt.driver [None req-d0d812f4-0a4c-4ad8-a6bc-fdce86409dbf 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] [instance: 91b85b86-0d07-4df4-80d5-48fa343c00b8] Deletion of /var/lib/nova/instances/91b85b86-0d07-4df4-80d5-48fa343c00b8_del complete#033[00m
Dec  6 02:54:38 np0005548731 nova_compute[232433]: 2025-12-06 07:54:38.987 232437 INFO nova.compute.manager [None req-d0d812f4-0a4c-4ad8-a6bc-fdce86409dbf 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] [instance: 91b85b86-0d07-4df4-80d5-48fa343c00b8] Took 2.04 seconds to destroy the instance on the hypervisor.#033[00m
Dec  6 02:54:38 np0005548731 nova_compute[232433]: 2025-12-06 07:54:38.987 232437 DEBUG oslo.service.loopingcall [None req-d0d812f4-0a4c-4ad8-a6bc-fdce86409dbf 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Dec  6 02:54:38 np0005548731 nova_compute[232433]: 2025-12-06 07:54:38.988 232437 DEBUG nova.compute.manager [-] [instance: 91b85b86-0d07-4df4-80d5-48fa343c00b8] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Dec  6 02:54:38 np0005548731 nova_compute[232433]: 2025-12-06 07:54:38.988 232437 DEBUG nova.network.neutron [-] [instance: 91b85b86-0d07-4df4-80d5-48fa343c00b8] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Dec  6 02:54:39 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:54:39 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:54:39 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 02:54:39 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:54:39.062 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 02:54:39 np0005548731 nova_compute[232433]: 2025-12-06 07:54:39.529 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:54:40 np0005548731 nova_compute[232433]: 2025-12-06 07:54:40.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:54:40 np0005548731 nova_compute[232433]: 2025-12-06 07:54:40.124 232437 DEBUG nova.compute.manager [req-b5fe7dca-7a11-4274-b211-8674578fe5c0 req-186f8213-4621-4abc-bd3b-8e7e30880dd6 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 91b85b86-0d07-4df4-80d5-48fa343c00b8] Received event network-vif-plugged-ad09ca6a-7b57-4547-9e95-4976d37ac5f9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:54:40 np0005548731 nova_compute[232433]: 2025-12-06 07:54:40.124 232437 DEBUG oslo_concurrency.lockutils [req-b5fe7dca-7a11-4274-b211-8674578fe5c0 req-186f8213-4621-4abc-bd3b-8e7e30880dd6 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "91b85b86-0d07-4df4-80d5-48fa343c00b8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:54:40 np0005548731 nova_compute[232433]: 2025-12-06 07:54:40.125 232437 DEBUG oslo_concurrency.lockutils [req-b5fe7dca-7a11-4274-b211-8674578fe5c0 req-186f8213-4621-4abc-bd3b-8e7e30880dd6 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "91b85b86-0d07-4df4-80d5-48fa343c00b8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:54:40 np0005548731 nova_compute[232433]: 2025-12-06 07:54:40.125 232437 DEBUG oslo_concurrency.lockutils [req-b5fe7dca-7a11-4274-b211-8674578fe5c0 req-186f8213-4621-4abc-bd3b-8e7e30880dd6 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "91b85b86-0d07-4df4-80d5-48fa343c00b8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:54:40 np0005548731 nova_compute[232433]: 2025-12-06 07:54:40.125 232437 DEBUG nova.compute.manager [req-b5fe7dca-7a11-4274-b211-8674578fe5c0 req-186f8213-4621-4abc-bd3b-8e7e30880dd6 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 91b85b86-0d07-4df4-80d5-48fa343c00b8] No waiting events found dispatching network-vif-plugged-ad09ca6a-7b57-4547-9e95-4976d37ac5f9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 02:54:40 np0005548731 nova_compute[232433]: 2025-12-06 07:54:40.125 232437 WARNING nova.compute.manager [req-b5fe7dca-7a11-4274-b211-8674578fe5c0 req-186f8213-4621-4abc-bd3b-8e7e30880dd6 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 91b85b86-0d07-4df4-80d5-48fa343c00b8] Received unexpected event network-vif-plugged-ad09ca6a-7b57-4547-9e95-4976d37ac5f9 for instance with vm_state active and task_state deleting.#033[00m
Dec  6 02:54:40 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:54:40 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:54:40 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:54:40.242 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:54:40 np0005548731 nova_compute[232433]: 2025-12-06 07:54:40.927 232437 DEBUG nova.network.neutron [-] [instance: 91b85b86-0d07-4df4-80d5-48fa343c00b8] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 02:54:40 np0005548731 nova_compute[232433]: 2025-12-06 07:54:40.946 232437 INFO nova.compute.manager [-] [instance: 91b85b86-0d07-4df4-80d5-48fa343c00b8] Took 1.96 seconds to deallocate network for instance.#033[00m
Dec  6 02:54:41 np0005548731 nova_compute[232433]: 2025-12-06 07:54:41.010 232437 DEBUG oslo_concurrency.lockutils [None req-d0d812f4-0a4c-4ad8-a6bc-fdce86409dbf 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:54:41 np0005548731 nova_compute[232433]: 2025-12-06 07:54:41.011 232437 DEBUG oslo_concurrency.lockutils [None req-d0d812f4-0a4c-4ad8-a6bc-fdce86409dbf 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:54:41 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:54:41 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:54:41 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:54:41.064 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:54:41 np0005548731 nova_compute[232433]: 2025-12-06 07:54:41.168 232437 DEBUG oslo_concurrency.processutils [None req-d0d812f4-0a4c-4ad8-a6bc-fdce86409dbf 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:54:41 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 02:54:41 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3558779861' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 02:54:41 np0005548731 nova_compute[232433]: 2025-12-06 07:54:41.589 232437 DEBUG oslo_concurrency.processutils [None req-d0d812f4-0a4c-4ad8-a6bc-fdce86409dbf 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.420s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:54:41 np0005548731 nova_compute[232433]: 2025-12-06 07:54:41.596 232437 DEBUG nova.compute.provider_tree [None req-d0d812f4-0a4c-4ad8-a6bc-fdce86409dbf 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] Inventory has not changed in ProviderTree for provider: 6d00757a-082f-486d-ae84-869a2ba2e6e7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  6 02:54:41 np0005548731 nova_compute[232433]: 2025-12-06 07:54:41.871 232437 DEBUG nova.scheduler.client.report [None req-d0d812f4-0a4c-4ad8-a6bc-fdce86409dbf 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] Inventory has not changed for provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  6 02:54:41 np0005548731 nova_compute[232433]: 2025-12-06 07:54:41.933 232437 DEBUG oslo_concurrency.lockutils [None req-d0d812f4-0a4c-4ad8-a6bc-fdce86409dbf 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.922s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:54:41 np0005548731 nova_compute[232433]: 2025-12-06 07:54:41.963 232437 INFO nova.scheduler.client.report [None req-d0d812f4-0a4c-4ad8-a6bc-fdce86409dbf 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] Deleted allocations for instance 91b85b86-0d07-4df4-80d5-48fa343c00b8#033[00m
Dec  6 02:54:42 np0005548731 nova_compute[232433]: 2025-12-06 07:54:42.050 232437 DEBUG oslo_concurrency.lockutils [None req-d0d812f4-0a4c-4ad8-a6bc-fdce86409dbf 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] Lock "91b85b86-0d07-4df4-80d5-48fa343c00b8" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.107s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:54:42 np0005548731 nova_compute[232433]: 2025-12-06 07:54:42.212 232437 DEBUG nova.compute.manager [req-13f648c6-247a-4dc8-b348-299265b14b56 req-5efb6416-e946-45f4-80f6-59c3bb57f676 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 91b85b86-0d07-4df4-80d5-48fa343c00b8] Received event network-vif-deleted-ad09ca6a-7b57-4547-9e95-4976d37ac5f9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:54:42 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:54:42 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:54:42 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:54:42.245 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:54:42 np0005548731 nova_compute[232433]: 2025-12-06 07:54:42.603 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:54:43 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:54:43 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:54:43 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:54:43.066 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:54:44 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:54:44 np0005548731 nova_compute[232433]: 2025-12-06 07:54:44.109 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:54:44 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:54:44 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:54:44 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:54:44.248 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:54:44 np0005548731 nova_compute[232433]: 2025-12-06 07:54:44.533 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:54:45 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:54:45 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:54:45 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:54:45.068 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:54:46 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:54:46 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:54:46 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:54:46.250 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:54:47 np0005548731 nova_compute[232433]: 2025-12-06 07:54:47.006 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:54:47 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:54:47 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:54:47 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:54:47.069 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:54:47 np0005548731 nova_compute[232433]: 2025-12-06 07:54:47.604 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:54:48 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:54:48 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:54:48 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:54:48.253 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:54:49 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:54:49 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:54:49 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 02:54:49 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:54:49.070 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 02:54:49 np0005548731 nova_compute[232433]: 2025-12-06 07:54:49.534 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:54:50 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:54:50 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 02:54:50 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:54:50.255 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 02:54:51 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:54:51 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:54:51 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:54:51.072 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:54:51 np0005548731 nova_compute[232433]: 2025-12-06 07:54:51.226 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:54:51 np0005548731 nova_compute[232433]: 2025-12-06 07:54:51.396 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:54:51 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  6 02:54:51 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 02:54:51 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  6 02:54:52 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:54:52 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:54:52 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:54:52.258 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:54:52 np0005548731 nova_compute[232433]: 2025-12-06 07:54:52.577 232437 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765007677.5758498, 91b85b86-0d07-4df4-80d5-48fa343c00b8 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 02:54:52 np0005548731 nova_compute[232433]: 2025-12-06 07:54:52.577 232437 INFO nova.compute.manager [-] [instance: 91b85b86-0d07-4df4-80d5-48fa343c00b8] VM Stopped (Lifecycle Event)#033[00m
Dec  6 02:54:52 np0005548731 nova_compute[232433]: 2025-12-06 07:54:52.595 232437 DEBUG nova.compute.manager [None req-fb927834-ed3a-4c3f-9780-aedcc73197bb - - - - - -] [instance: 91b85b86-0d07-4df4-80d5-48fa343c00b8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:54:52 np0005548731 nova_compute[232433]: 2025-12-06 07:54:52.605 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:54:53 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:54:53 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:54:53 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:54:53.074 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:54:54 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:54:54 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:54:54 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:54:54 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:54:54.263 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:54:54 np0005548731 nova_compute[232433]: 2025-12-06 07:54:54.536 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:54:55 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:54:55 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:54:55 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:54:55.077 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:54:56 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:54:56 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:54:56 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:54:56.266 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:54:57 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:54:57 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:54:57 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:54:57.079 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:54:57 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 02:54:57 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 02:54:57 np0005548731 nova_compute[232433]: 2025-12-06 07:54:57.607 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:54:58 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:54:58 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 02:54:58 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:54:58.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 02:54:58 np0005548731 podman[314265]: 2025-12-06 07:54:58.888777182 +0000 UTC m=+0.051703256 container health_status 26c2ce9bdb83c6db5ccad7f5b2a7cb4e9354ab0235ad2f942ea42c8feda2142b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, 
maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  6 02:54:58 np0005548731 podman[314267]: 2025-12-06 07:54:58.899607067 +0000 UTC m=+0.059390235 container health_status ae4fd08cdec31d6ae2a6b91ffff7deb6ca5a8375cb52ee4b685cc6f25796824e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.build-date=20251125)
Dec  6 02:54:58 np0005548731 podman[314266]: 2025-12-06 07:54:58.917158956 +0000 UTC m=+0.079947767 container health_status a118c3becf56cfbcae208065771ebf10a65162bb7b96389971293e534823fb78 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Dec  6 02:54:59 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:54:59 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:54:59 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:54:59 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:54:59.081 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:54:59 np0005548731 nova_compute[232433]: 2025-12-06 07:54:59.537 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:55:00 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:55:00 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:55:00 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:55:00.271 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:55:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:55:00.900 143965 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:55:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:55:00.901 143965 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:55:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:55:00.901 143965 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:55:01 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:55:01 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:55:01 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:55:01.083 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:55:02 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:55:02 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:55:02 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:55:02.274 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:55:02 np0005548731 nova_compute[232433]: 2025-12-06 07:55:02.609 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:55:03 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:55:03 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:55:03 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:55:03.085 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:55:04 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:55:04 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:55:04 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:55:04 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:55:04.277 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:55:04 np0005548731 nova_compute[232433]: 2025-12-06 07:55:04.539 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:55:05 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:55:05 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:55:05 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:55:05.087 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:55:06 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:55:06 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:55:06 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:55:06.280 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:55:07 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:55:07.050 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=70, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:ec:b3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '32:72:e7:89:e0:7d'}, ipsec=False) old=SB_Global(nb_cfg=69) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 02:55:07 np0005548731 nova_compute[232433]: 2025-12-06 07:55:07.051 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:55:07 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:55:07.051 143965 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec  6 02:55:07 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:55:07 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:55:07 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:55:07.089 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:55:07 np0005548731 nova_compute[232433]: 2025-12-06 07:55:07.642 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:55:08 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:55:08 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:55:08 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:55:08.283 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:55:09 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:55:09 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:55:09 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:55:09.092 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:55:09 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:55:09 np0005548731 nova_compute[232433]: 2025-12-06 07:55:09.543 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:55:10 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:55:10.053 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=9f96b960-b4f2-40bd-ae99-08121f5e8b78, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '70'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:55:10 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:55:10 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:55:10 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:55:10.286 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:55:11 np0005548731 nova_compute[232433]: 2025-12-06 07:55:11.081 232437 DEBUG oslo_concurrency.lockutils [None req-ee23107a-4d8e-4faa-81e5-2d23dd5cd06e 383c99c13fb24b039259ed88ae2f32c0 2b02ca2bf61d48d99724e6446cfd3524 - - default default] Acquiring lock "e600163d-bbdd-421a-ab0e-83bfbac6495a" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:55:11 np0005548731 nova_compute[232433]: 2025-12-06 07:55:11.081 232437 DEBUG oslo_concurrency.lockutils [None req-ee23107a-4d8e-4faa-81e5-2d23dd5cd06e 383c99c13fb24b039259ed88ae2f32c0 2b02ca2bf61d48d99724e6446cfd3524 - - default default] Lock "e600163d-bbdd-421a-ab0e-83bfbac6495a" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:55:11 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:55:11 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:55:11 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:55:11.094 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:55:11 np0005548731 nova_compute[232433]: 2025-12-06 07:55:11.099 232437 DEBUG nova.compute.manager [None req-ee23107a-4d8e-4faa-81e5-2d23dd5cd06e 383c99c13fb24b039259ed88ae2f32c0 2b02ca2bf61d48d99724e6446cfd3524 - - default default] [instance: e600163d-bbdd-421a-ab0e-83bfbac6495a] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Dec  6 02:55:11 np0005548731 nova_compute[232433]: 2025-12-06 07:55:11.185 232437 DEBUG oslo_concurrency.lockutils [None req-ee23107a-4d8e-4faa-81e5-2d23dd5cd06e 383c99c13fb24b039259ed88ae2f32c0 2b02ca2bf61d48d99724e6446cfd3524 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:55:11 np0005548731 nova_compute[232433]: 2025-12-06 07:55:11.186 232437 DEBUG oslo_concurrency.lockutils [None req-ee23107a-4d8e-4faa-81e5-2d23dd5cd06e 383c99c13fb24b039259ed88ae2f32c0 2b02ca2bf61d48d99724e6446cfd3524 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:55:11 np0005548731 nova_compute[232433]: 2025-12-06 07:55:11.194 232437 DEBUG nova.virt.hardware [None req-ee23107a-4d8e-4faa-81e5-2d23dd5cd06e 383c99c13fb24b039259ed88ae2f32c0 2b02ca2bf61d48d99724e6446cfd3524 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Dec  6 02:55:11 np0005548731 nova_compute[232433]: 2025-12-06 07:55:11.194 232437 INFO nova.compute.claims [None req-ee23107a-4d8e-4faa-81e5-2d23dd5cd06e 383c99c13fb24b039259ed88ae2f32c0 2b02ca2bf61d48d99724e6446cfd3524 - - default default] [instance: e600163d-bbdd-421a-ab0e-83bfbac6495a] Claim successful on node compute-2.ctlplane.example.com#033[00m
Dec  6 02:55:11 np0005548731 nova_compute[232433]: 2025-12-06 07:55:11.294 232437 DEBUG oslo_concurrency.processutils [None req-ee23107a-4d8e-4faa-81e5-2d23dd5cd06e 383c99c13fb24b039259ed88ae2f32c0 2b02ca2bf61d48d99724e6446cfd3524 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:55:11 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 02:55:11 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/264023895' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 02:55:11 np0005548731 nova_compute[232433]: 2025-12-06 07:55:11.719 232437 DEBUG oslo_concurrency.processutils [None req-ee23107a-4d8e-4faa-81e5-2d23dd5cd06e 383c99c13fb24b039259ed88ae2f32c0 2b02ca2bf61d48d99724e6446cfd3524 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.424s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:55:11 np0005548731 nova_compute[232433]: 2025-12-06 07:55:11.725 232437 DEBUG nova.compute.provider_tree [None req-ee23107a-4d8e-4faa-81e5-2d23dd5cd06e 383c99c13fb24b039259ed88ae2f32c0 2b02ca2bf61d48d99724e6446cfd3524 - - default default] Inventory has not changed in ProviderTree for provider: 6d00757a-082f-486d-ae84-869a2ba2e6e7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  6 02:55:11 np0005548731 nova_compute[232433]: 2025-12-06 07:55:11.935 232437 DEBUG nova.scheduler.client.report [None req-ee23107a-4d8e-4faa-81e5-2d23dd5cd06e 383c99c13fb24b039259ed88ae2f32c0 2b02ca2bf61d48d99724e6446cfd3524 - - default default] Inventory has not changed for provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  6 02:55:11 np0005548731 nova_compute[232433]: 2025-12-06 07:55:11.972 232437 DEBUG oslo_concurrency.lockutils [None req-ee23107a-4d8e-4faa-81e5-2d23dd5cd06e 383c99c13fb24b039259ed88ae2f32c0 2b02ca2bf61d48d99724e6446cfd3524 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.786s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:55:11 np0005548731 nova_compute[232433]: 2025-12-06 07:55:11.973 232437 DEBUG nova.compute.manager [None req-ee23107a-4d8e-4faa-81e5-2d23dd5cd06e 383c99c13fb24b039259ed88ae2f32c0 2b02ca2bf61d48d99724e6446cfd3524 - - default default] [instance: e600163d-bbdd-421a-ab0e-83bfbac6495a] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Dec  6 02:55:12 np0005548731 nova_compute[232433]: 2025-12-06 07:55:12.054 232437 DEBUG nova.compute.manager [None req-ee23107a-4d8e-4faa-81e5-2d23dd5cd06e 383c99c13fb24b039259ed88ae2f32c0 2b02ca2bf61d48d99724e6446cfd3524 - - default default] [instance: e600163d-bbdd-421a-ab0e-83bfbac6495a] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Dec  6 02:55:12 np0005548731 nova_compute[232433]: 2025-12-06 07:55:12.055 232437 DEBUG nova.network.neutron [None req-ee23107a-4d8e-4faa-81e5-2d23dd5cd06e 383c99c13fb24b039259ed88ae2f32c0 2b02ca2bf61d48d99724e6446cfd3524 - - default default] [instance: e600163d-bbdd-421a-ab0e-83bfbac6495a] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Dec  6 02:55:12 np0005548731 nova_compute[232433]: 2025-12-06 07:55:12.075 232437 INFO nova.virt.libvirt.driver [None req-ee23107a-4d8e-4faa-81e5-2d23dd5cd06e 383c99c13fb24b039259ed88ae2f32c0 2b02ca2bf61d48d99724e6446cfd3524 - - default default] [instance: e600163d-bbdd-421a-ab0e-83bfbac6495a] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Dec  6 02:55:12 np0005548731 nova_compute[232433]: 2025-12-06 07:55:12.095 232437 DEBUG nova.compute.manager [None req-ee23107a-4d8e-4faa-81e5-2d23dd5cd06e 383c99c13fb24b039259ed88ae2f32c0 2b02ca2bf61d48d99724e6446cfd3524 - - default default] [instance: e600163d-bbdd-421a-ab0e-83bfbac6495a] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Dec  6 02:55:12 np0005548731 nova_compute[232433]: 2025-12-06 07:55:12.186 232437 DEBUG nova.compute.manager [None req-ee23107a-4d8e-4faa-81e5-2d23dd5cd06e 383c99c13fb24b039259ed88ae2f32c0 2b02ca2bf61d48d99724e6446cfd3524 - - default default] [instance: e600163d-bbdd-421a-ab0e-83bfbac6495a] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Dec  6 02:55:12 np0005548731 nova_compute[232433]: 2025-12-06 07:55:12.188 232437 DEBUG nova.virt.libvirt.driver [None req-ee23107a-4d8e-4faa-81e5-2d23dd5cd06e 383c99c13fb24b039259ed88ae2f32c0 2b02ca2bf61d48d99724e6446cfd3524 - - default default] [instance: e600163d-bbdd-421a-ab0e-83bfbac6495a] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Dec  6 02:55:12 np0005548731 nova_compute[232433]: 2025-12-06 07:55:12.188 232437 INFO nova.virt.libvirt.driver [None req-ee23107a-4d8e-4faa-81e5-2d23dd5cd06e 383c99c13fb24b039259ed88ae2f32c0 2b02ca2bf61d48d99724e6446cfd3524 - - default default] [instance: e600163d-bbdd-421a-ab0e-83bfbac6495a] Creating image(s)#033[00m
Dec  6 02:55:12 np0005548731 nova_compute[232433]: 2025-12-06 07:55:12.214 232437 DEBUG nova.storage.rbd_utils [None req-ee23107a-4d8e-4faa-81e5-2d23dd5cd06e 383c99c13fb24b039259ed88ae2f32c0 2b02ca2bf61d48d99724e6446cfd3524 - - default default] rbd image e600163d-bbdd-421a-ab0e-83bfbac6495a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:55:12 np0005548731 nova_compute[232433]: 2025-12-06 07:55:12.243 232437 DEBUG nova.storage.rbd_utils [None req-ee23107a-4d8e-4faa-81e5-2d23dd5cd06e 383c99c13fb24b039259ed88ae2f32c0 2b02ca2bf61d48d99724e6446cfd3524 - - default default] rbd image e600163d-bbdd-421a-ab0e-83bfbac6495a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:55:12 np0005548731 nova_compute[232433]: 2025-12-06 07:55:12.270 232437 DEBUG nova.storage.rbd_utils [None req-ee23107a-4d8e-4faa-81e5-2d23dd5cd06e 383c99c13fb24b039259ed88ae2f32c0 2b02ca2bf61d48d99724e6446cfd3524 - - default default] rbd image e600163d-bbdd-421a-ab0e-83bfbac6495a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:55:12 np0005548731 nova_compute[232433]: 2025-12-06 07:55:12.274 232437 DEBUG oslo_concurrency.processutils [None req-ee23107a-4d8e-4faa-81e5-2d23dd5cd06e 383c99c13fb24b039259ed88ae2f32c0 2b02ca2bf61d48d99724e6446cfd3524 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:55:12 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:55:12 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:55:12 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:55:12.290 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:55:12 np0005548731 nova_compute[232433]: 2025-12-06 07:55:12.359 232437 DEBUG oslo_concurrency.processutils [None req-ee23107a-4d8e-4faa-81e5-2d23dd5cd06e 383c99c13fb24b039259ed88ae2f32c0 2b02ca2bf61d48d99724e6446cfd3524 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json" returned: 0 in 0.084s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:55:12 np0005548731 nova_compute[232433]: 2025-12-06 07:55:12.360 232437 DEBUG oslo_concurrency.lockutils [None req-ee23107a-4d8e-4faa-81e5-2d23dd5cd06e 383c99c13fb24b039259ed88ae2f32c0 2b02ca2bf61d48d99724e6446cfd3524 - - default default] Acquiring lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:55:12 np0005548731 nova_compute[232433]: 2025-12-06 07:55:12.360 232437 DEBUG oslo_concurrency.lockutils [None req-ee23107a-4d8e-4faa-81e5-2d23dd5cd06e 383c99c13fb24b039259ed88ae2f32c0 2b02ca2bf61d48d99724e6446cfd3524 - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:55:12 np0005548731 nova_compute[232433]: 2025-12-06 07:55:12.361 232437 DEBUG oslo_concurrency.lockutils [None req-ee23107a-4d8e-4faa-81e5-2d23dd5cd06e 383c99c13fb24b039259ed88ae2f32c0 2b02ca2bf61d48d99724e6446cfd3524 - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:55:12 np0005548731 nova_compute[232433]: 2025-12-06 07:55:12.384 232437 DEBUG nova.storage.rbd_utils [None req-ee23107a-4d8e-4faa-81e5-2d23dd5cd06e 383c99c13fb24b039259ed88ae2f32c0 2b02ca2bf61d48d99724e6446cfd3524 - - default default] rbd image e600163d-bbdd-421a-ab0e-83bfbac6495a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:55:12 np0005548731 nova_compute[232433]: 2025-12-06 07:55:12.387 232437 DEBUG oslo_concurrency.processutils [None req-ee23107a-4d8e-4faa-81e5-2d23dd5cd06e 383c99c13fb24b039259ed88ae2f32c0 2b02ca2bf61d48d99724e6446cfd3524 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef e600163d-bbdd-421a-ab0e-83bfbac6495a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:55:12 np0005548731 nova_compute[232433]: 2025-12-06 07:55:12.644 232437 DEBUG nova.policy [None req-ee23107a-4d8e-4faa-81e5-2d23dd5cd06e 383c99c13fb24b039259ed88ae2f32c0 2b02ca2bf61d48d99724e6446cfd3524 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '383c99c13fb24b039259ed88ae2f32c0', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '2b02ca2bf61d48d99724e6446cfd3524', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Dec  6 02:55:12 np0005548731 nova_compute[232433]: 2025-12-06 07:55:12.646 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:55:12 np0005548731 nova_compute[232433]: 2025-12-06 07:55:12.663 232437 DEBUG oslo_concurrency.processutils [None req-ee23107a-4d8e-4faa-81e5-2d23dd5cd06e 383c99c13fb24b039259ed88ae2f32c0 2b02ca2bf61d48d99724e6446cfd3524 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef e600163d-bbdd-421a-ab0e-83bfbac6495a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.276s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:55:12 np0005548731 nova_compute[232433]: 2025-12-06 07:55:12.724 232437 DEBUG nova.storage.rbd_utils [None req-ee23107a-4d8e-4faa-81e5-2d23dd5cd06e 383c99c13fb24b039259ed88ae2f32c0 2b02ca2bf61d48d99724e6446cfd3524 - - default default] resizing rbd image e600163d-bbdd-421a-ab0e-83bfbac6495a_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Dec  6 02:55:12 np0005548731 nova_compute[232433]: 2025-12-06 07:55:12.824 232437 DEBUG nova.objects.instance [None req-ee23107a-4d8e-4faa-81e5-2d23dd5cd06e 383c99c13fb24b039259ed88ae2f32c0 2b02ca2bf61d48d99724e6446cfd3524 - - default default] Lazy-loading 'migration_context' on Instance uuid e600163d-bbdd-421a-ab0e-83bfbac6495a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:55:12 np0005548731 nova_compute[232433]: 2025-12-06 07:55:12.845 232437 DEBUG nova.virt.libvirt.driver [None req-ee23107a-4d8e-4faa-81e5-2d23dd5cd06e 383c99c13fb24b039259ed88ae2f32c0 2b02ca2bf61d48d99724e6446cfd3524 - - default default] [instance: e600163d-bbdd-421a-ab0e-83bfbac6495a] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Dec  6 02:55:12 np0005548731 nova_compute[232433]: 2025-12-06 07:55:12.845 232437 DEBUG nova.virt.libvirt.driver [None req-ee23107a-4d8e-4faa-81e5-2d23dd5cd06e 383c99c13fb24b039259ed88ae2f32c0 2b02ca2bf61d48d99724e6446cfd3524 - - default default] [instance: e600163d-bbdd-421a-ab0e-83bfbac6495a] Ensure instance console log exists: /var/lib/nova/instances/e600163d-bbdd-421a-ab0e-83bfbac6495a/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec  6 02:55:12 np0005548731 nova_compute[232433]: 2025-12-06 07:55:12.846 232437 DEBUG oslo_concurrency.lockutils [None req-ee23107a-4d8e-4faa-81e5-2d23dd5cd06e 383c99c13fb24b039259ed88ae2f32c0 2b02ca2bf61d48d99724e6446cfd3524 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:55:12 np0005548731 nova_compute[232433]: 2025-12-06 07:55:12.846 232437 DEBUG oslo_concurrency.lockutils [None req-ee23107a-4d8e-4faa-81e5-2d23dd5cd06e 383c99c13fb24b039259ed88ae2f32c0 2b02ca2bf61d48d99724e6446cfd3524 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:55:12 np0005548731 nova_compute[232433]: 2025-12-06 07:55:12.846 232437 DEBUG oslo_concurrency.lockutils [None req-ee23107a-4d8e-4faa-81e5-2d23dd5cd06e 383c99c13fb24b039259ed88ae2f32c0 2b02ca2bf61d48d99724e6446cfd3524 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:55:13 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:55:13 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:55:13 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:55:13.096 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:55:13 np0005548731 nova_compute[232433]: 2025-12-06 07:55:13.628 232437 DEBUG nova.network.neutron [None req-ee23107a-4d8e-4faa-81e5-2d23dd5cd06e 383c99c13fb24b039259ed88ae2f32c0 2b02ca2bf61d48d99724e6446cfd3524 - - default default] [instance: e600163d-bbdd-421a-ab0e-83bfbac6495a] Successfully created port: 06fe0ba2-d0c0-406d-bcb5-0d6f95c9848c _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Dec  6 02:55:14 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:55:14 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:55:14 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:55:14 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:55:14.293 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:55:14 np0005548731 nova_compute[232433]: 2025-12-06 07:55:14.545 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:55:15 np0005548731 nova_compute[232433]: 2025-12-06 07:55:15.079 232437 DEBUG nova.network.neutron [None req-ee23107a-4d8e-4faa-81e5-2d23dd5cd06e 383c99c13fb24b039259ed88ae2f32c0 2b02ca2bf61d48d99724e6446cfd3524 - - default default] [instance: e600163d-bbdd-421a-ab0e-83bfbac6495a] Successfully updated port: 06fe0ba2-d0c0-406d-bcb5-0d6f95c9848c _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Dec  6 02:55:15 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:55:15 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:55:15 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:55:15.098 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:55:15 np0005548731 nova_compute[232433]: 2025-12-06 07:55:15.101 232437 DEBUG oslo_concurrency.lockutils [None req-ee23107a-4d8e-4faa-81e5-2d23dd5cd06e 383c99c13fb24b039259ed88ae2f32c0 2b02ca2bf61d48d99724e6446cfd3524 - - default default] Acquiring lock "refresh_cache-e600163d-bbdd-421a-ab0e-83bfbac6495a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 02:55:15 np0005548731 nova_compute[232433]: 2025-12-06 07:55:15.101 232437 DEBUG oslo_concurrency.lockutils [None req-ee23107a-4d8e-4faa-81e5-2d23dd5cd06e 383c99c13fb24b039259ed88ae2f32c0 2b02ca2bf61d48d99724e6446cfd3524 - - default default] Acquired lock "refresh_cache-e600163d-bbdd-421a-ab0e-83bfbac6495a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 02:55:15 np0005548731 nova_compute[232433]: 2025-12-06 07:55:15.102 232437 DEBUG nova.network.neutron [None req-ee23107a-4d8e-4faa-81e5-2d23dd5cd06e 383c99c13fb24b039259ed88ae2f32c0 2b02ca2bf61d48d99724e6446cfd3524 - - default default] [instance: e600163d-bbdd-421a-ab0e-83bfbac6495a] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec  6 02:55:15 np0005548731 nova_compute[232433]: 2025-12-06 07:55:15.271 232437 DEBUG nova.compute.manager [req-81245fa5-155f-4fa5-8ac7-432da97d1fee req-7750415b-1488-40e8-9f85-0263057711d9 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: e600163d-bbdd-421a-ab0e-83bfbac6495a] Received event network-changed-06fe0ba2-d0c0-406d-bcb5-0d6f95c9848c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:55:15 np0005548731 nova_compute[232433]: 2025-12-06 07:55:15.271 232437 DEBUG nova.compute.manager [req-81245fa5-155f-4fa5-8ac7-432da97d1fee req-7750415b-1488-40e8-9f85-0263057711d9 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: e600163d-bbdd-421a-ab0e-83bfbac6495a] Refreshing instance network info cache due to event network-changed-06fe0ba2-d0c0-406d-bcb5-0d6f95c9848c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  6 02:55:15 np0005548731 nova_compute[232433]: 2025-12-06 07:55:15.272 232437 DEBUG oslo_concurrency.lockutils [req-81245fa5-155f-4fa5-8ac7-432da97d1fee req-7750415b-1488-40e8-9f85-0263057711d9 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-e600163d-bbdd-421a-ab0e-83bfbac6495a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 02:55:15 np0005548731 nova_compute[232433]: 2025-12-06 07:55:15.994 232437 DEBUG nova.network.neutron [None req-ee23107a-4d8e-4faa-81e5-2d23dd5cd06e 383c99c13fb24b039259ed88ae2f32c0 2b02ca2bf61d48d99724e6446cfd3524 - - default default] [instance: e600163d-bbdd-421a-ab0e-83bfbac6495a] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec  6 02:55:16 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:55:16 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:55:16 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:55:16.296 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:55:17 np0005548731 nova_compute[232433]: 2025-12-06 07:55:17.021 232437 DEBUG nova.network.neutron [None req-ee23107a-4d8e-4faa-81e5-2d23dd5cd06e 383c99c13fb24b039259ed88ae2f32c0 2b02ca2bf61d48d99724e6446cfd3524 - - default default] [instance: e600163d-bbdd-421a-ab0e-83bfbac6495a] Updating instance_info_cache with network_info: [{"id": "06fe0ba2-d0c0-406d-bcb5-0d6f95c9848c", "address": "fa:16:3e:ab:c9:69", "network": {"id": "a7f183ec-224d-47f8-a012-9bcde922bafb", "bridge": "br-int", "label": "tempest-TestServerBasicOps-1200474505-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b02ca2bf61d48d99724e6446cfd3524", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap06fe0ba2-d0", "ovs_interfaceid": "06fe0ba2-d0c0-406d-bcb5-0d6f95c9848c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 02:55:17 np0005548731 nova_compute[232433]: 2025-12-06 07:55:17.071 232437 DEBUG oslo_concurrency.lockutils [None req-ee23107a-4d8e-4faa-81e5-2d23dd5cd06e 383c99c13fb24b039259ed88ae2f32c0 2b02ca2bf61d48d99724e6446cfd3524 - - default default] Releasing lock "refresh_cache-e600163d-bbdd-421a-ab0e-83bfbac6495a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 02:55:17 np0005548731 nova_compute[232433]: 2025-12-06 07:55:17.072 232437 DEBUG nova.compute.manager [None req-ee23107a-4d8e-4faa-81e5-2d23dd5cd06e 383c99c13fb24b039259ed88ae2f32c0 2b02ca2bf61d48d99724e6446cfd3524 - - default default] [instance: e600163d-bbdd-421a-ab0e-83bfbac6495a] Instance network_info: |[{"id": "06fe0ba2-d0c0-406d-bcb5-0d6f95c9848c", "address": "fa:16:3e:ab:c9:69", "network": {"id": "a7f183ec-224d-47f8-a012-9bcde922bafb", "bridge": "br-int", "label": "tempest-TestServerBasicOps-1200474505-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b02ca2bf61d48d99724e6446cfd3524", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap06fe0ba2-d0", "ovs_interfaceid": "06fe0ba2-d0c0-406d-bcb5-0d6f95c9848c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Dec  6 02:55:17 np0005548731 nova_compute[232433]: 2025-12-06 07:55:17.072 232437 DEBUG oslo_concurrency.lockutils [req-81245fa5-155f-4fa5-8ac7-432da97d1fee req-7750415b-1488-40e8-9f85-0263057711d9 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-e600163d-bbdd-421a-ab0e-83bfbac6495a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 02:55:17 np0005548731 nova_compute[232433]: 2025-12-06 07:55:17.073 232437 DEBUG nova.network.neutron [req-81245fa5-155f-4fa5-8ac7-432da97d1fee req-7750415b-1488-40e8-9f85-0263057711d9 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: e600163d-bbdd-421a-ab0e-83bfbac6495a] Refreshing network info cache for port 06fe0ba2-d0c0-406d-bcb5-0d6f95c9848c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  6 02:55:17 np0005548731 nova_compute[232433]: 2025-12-06 07:55:17.076 232437 DEBUG nova.virt.libvirt.driver [None req-ee23107a-4d8e-4faa-81e5-2d23dd5cd06e 383c99c13fb24b039259ed88ae2f32c0 2b02ca2bf61d48d99724e6446cfd3524 - - default default] [instance: e600163d-bbdd-421a-ab0e-83bfbac6495a] Start _get_guest_xml network_info=[{"id": "06fe0ba2-d0c0-406d-bcb5-0d6f95c9848c", "address": "fa:16:3e:ab:c9:69", "network": {"id": "a7f183ec-224d-47f8-a012-9bcde922bafb", "bridge": "br-int", "label": "tempest-TestServerBasicOps-1200474505-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b02ca2bf61d48d99724e6446cfd3524", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap06fe0ba2-d0", "ovs_interfaceid": "06fe0ba2-d0c0-406d-bcb5-0d6f95c9848c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'device_type': 'disk', 'encryption_secret_uuid': None, 'disk_bus': 'virtio', 'boot_index': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'encryption_options': None, 'size': 0, 'encrypted': False, 'image_id': '6efab05d-c7cf-4770-a5c3-c806a2739063'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec  6 02:55:17 np0005548731 nova_compute[232433]: 2025-12-06 07:55:17.081 232437 WARNING nova.virt.libvirt.driver [None req-ee23107a-4d8e-4faa-81e5-2d23dd5cd06e 383c99c13fb24b039259ed88ae2f32c0 2b02ca2bf61d48d99724e6446cfd3524 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  6 02:55:17 np0005548731 nova_compute[232433]: 2025-12-06 07:55:17.086 232437 DEBUG nova.virt.libvirt.host [None req-ee23107a-4d8e-4faa-81e5-2d23dd5cd06e 383c99c13fb24b039259ed88ae2f32c0 2b02ca2bf61d48d99724e6446cfd3524 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec  6 02:55:17 np0005548731 nova_compute[232433]: 2025-12-06 07:55:17.087 232437 DEBUG nova.virt.libvirt.host [None req-ee23107a-4d8e-4faa-81e5-2d23dd5cd06e 383c99c13fb24b039259ed88ae2f32c0 2b02ca2bf61d48d99724e6446cfd3524 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec  6 02:55:17 np0005548731 nova_compute[232433]: 2025-12-06 07:55:17.090 232437 DEBUG nova.virt.libvirt.host [None req-ee23107a-4d8e-4faa-81e5-2d23dd5cd06e 383c99c13fb24b039259ed88ae2f32c0 2b02ca2bf61d48d99724e6446cfd3524 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec  6 02:55:17 np0005548731 nova_compute[232433]: 2025-12-06 07:55:17.091 232437 DEBUG nova.virt.libvirt.host [None req-ee23107a-4d8e-4faa-81e5-2d23dd5cd06e 383c99c13fb24b039259ed88ae2f32c0 2b02ca2bf61d48d99724e6446cfd3524 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec  6 02:55:17 np0005548731 nova_compute[232433]: 2025-12-06 07:55:17.092 232437 DEBUG nova.virt.libvirt.driver [None req-ee23107a-4d8e-4faa-81e5-2d23dd5cd06e 383c99c13fb24b039259ed88ae2f32c0 2b02ca2bf61d48d99724e6446cfd3524 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec  6 02:55:17 np0005548731 nova_compute[232433]: 2025-12-06 07:55:17.092 232437 DEBUG nova.virt.hardware [None req-ee23107a-4d8e-4faa-81e5-2d23dd5cd06e 383c99c13fb24b039259ed88ae2f32c0 2b02ca2bf61d48d99724e6446cfd3524 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-06T06:56:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='25848a18-11d9-4f11-80b5-5d005675c76d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec  6 02:55:17 np0005548731 nova_compute[232433]: 2025-12-06 07:55:17.093 232437 DEBUG nova.virt.hardware [None req-ee23107a-4d8e-4faa-81e5-2d23dd5cd06e 383c99c13fb24b039259ed88ae2f32c0 2b02ca2bf61d48d99724e6446cfd3524 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec  6 02:55:17 np0005548731 nova_compute[232433]: 2025-12-06 07:55:17.093 232437 DEBUG nova.virt.hardware [None req-ee23107a-4d8e-4faa-81e5-2d23dd5cd06e 383c99c13fb24b039259ed88ae2f32c0 2b02ca2bf61d48d99724e6446cfd3524 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec  6 02:55:17 np0005548731 nova_compute[232433]: 2025-12-06 07:55:17.094 232437 DEBUG nova.virt.hardware [None req-ee23107a-4d8e-4faa-81e5-2d23dd5cd06e 383c99c13fb24b039259ed88ae2f32c0 2b02ca2bf61d48d99724e6446cfd3524 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec  6 02:55:17 np0005548731 nova_compute[232433]: 2025-12-06 07:55:17.094 232437 DEBUG nova.virt.hardware [None req-ee23107a-4d8e-4faa-81e5-2d23dd5cd06e 383c99c13fb24b039259ed88ae2f32c0 2b02ca2bf61d48d99724e6446cfd3524 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec  6 02:55:17 np0005548731 nova_compute[232433]: 2025-12-06 07:55:17.094 232437 DEBUG nova.virt.hardware [None req-ee23107a-4d8e-4faa-81e5-2d23dd5cd06e 383c99c13fb24b039259ed88ae2f32c0 2b02ca2bf61d48d99724e6446cfd3524 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec  6 02:55:17 np0005548731 nova_compute[232433]: 2025-12-06 07:55:17.094 232437 DEBUG nova.virt.hardware [None req-ee23107a-4d8e-4faa-81e5-2d23dd5cd06e 383c99c13fb24b039259ed88ae2f32c0 2b02ca2bf61d48d99724e6446cfd3524 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec  6 02:55:17 np0005548731 nova_compute[232433]: 2025-12-06 07:55:17.095 232437 DEBUG nova.virt.hardware [None req-ee23107a-4d8e-4faa-81e5-2d23dd5cd06e 383c99c13fb24b039259ed88ae2f32c0 2b02ca2bf61d48d99724e6446cfd3524 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec  6 02:55:17 np0005548731 nova_compute[232433]: 2025-12-06 07:55:17.095 232437 DEBUG nova.virt.hardware [None req-ee23107a-4d8e-4faa-81e5-2d23dd5cd06e 383c99c13fb24b039259ed88ae2f32c0 2b02ca2bf61d48d99724e6446cfd3524 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec  6 02:55:17 np0005548731 nova_compute[232433]: 2025-12-06 07:55:17.096 232437 DEBUG nova.virt.hardware [None req-ee23107a-4d8e-4faa-81e5-2d23dd5cd06e 383c99c13fb24b039259ed88ae2f32c0 2b02ca2bf61d48d99724e6446cfd3524 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec  6 02:55:17 np0005548731 nova_compute[232433]: 2025-12-06 07:55:17.096 232437 DEBUG nova.virt.hardware [None req-ee23107a-4d8e-4faa-81e5-2d23dd5cd06e 383c99c13fb24b039259ed88ae2f32c0 2b02ca2bf61d48d99724e6446cfd3524 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec  6 02:55:17 np0005548731 nova_compute[232433]: 2025-12-06 07:55:17.100 232437 DEBUG oslo_concurrency.processutils [None req-ee23107a-4d8e-4faa-81e5-2d23dd5cd06e 383c99c13fb24b039259ed88ae2f32c0 2b02ca2bf61d48d99724e6446cfd3524 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:55:17 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:55:17 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:55:17 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:55:17.099 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:55:17 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  6 02:55:17 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1464405074' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  6 02:55:17 np0005548731 nova_compute[232433]: 2025-12-06 07:55:17.549 232437 DEBUG oslo_concurrency.processutils [None req-ee23107a-4d8e-4faa-81e5-2d23dd5cd06e 383c99c13fb24b039259ed88ae2f32c0 2b02ca2bf61d48d99724e6446cfd3524 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.449s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:55:17 np0005548731 nova_compute[232433]: 2025-12-06 07:55:17.574 232437 DEBUG nova.storage.rbd_utils [None req-ee23107a-4d8e-4faa-81e5-2d23dd5cd06e 383c99c13fb24b039259ed88ae2f32c0 2b02ca2bf61d48d99724e6446cfd3524 - - default default] rbd image e600163d-bbdd-421a-ab0e-83bfbac6495a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:55:17 np0005548731 nova_compute[232433]: 2025-12-06 07:55:17.577 232437 DEBUG oslo_concurrency.processutils [None req-ee23107a-4d8e-4faa-81e5-2d23dd5cd06e 383c99c13fb24b039259ed88ae2f32c0 2b02ca2bf61d48d99724e6446cfd3524 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:55:17 np0005548731 nova_compute[232433]: 2025-12-06 07:55:17.648 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:55:17 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  6 02:55:17 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2755240412' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  6 02:55:17 np0005548731 nova_compute[232433]: 2025-12-06 07:55:17.978 232437 DEBUG oslo_concurrency.processutils [None req-ee23107a-4d8e-4faa-81e5-2d23dd5cd06e 383c99c13fb24b039259ed88ae2f32c0 2b02ca2bf61d48d99724e6446cfd3524 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.401s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:55:17 np0005548731 nova_compute[232433]: 2025-12-06 07:55:17.980 232437 DEBUG nova.virt.libvirt.vif [None req-ee23107a-4d8e-4faa-81e5-2d23dd5cd06e 383c99c13fb24b039259ed88ae2f32c0 2b02ca2bf61d48d99724e6446cfd3524 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T07:55:09Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestServerBasicOps-server-1079945337',display_name='tempest-TestServerBasicOps-server-1079945337',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-testserverbasicops-server-1079945337',id=174,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCFn8IHrAu1capFTEammcpgozBC0c7wVgyvOhNY1pdFvVuqQz1ZJigEw9y/ImJqmbdoP6LMd9JwjHpW/YGqtrVjbCEXCA4lrId1JNg6dNRmMx1JWzWMSDqrEVBgkgpHkSA==',key_name='tempest-TestServerBasicOps-1564693908',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={meta1='data1',meta2='data2',metaN='dataN'},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2b02ca2bf61d48d99724e6446cfd3524',ramdisk_id='',reservation_id='r-7ox8eliw',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestServerBasicOps-882087666',owner_user_name='tempest-TestServerBasicOps-882087666-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:55:12Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='383c99c13fb24b039259ed88ae2f32c0',uuid=e600163d-bbdd-421a-ab0e-83bfbac6495a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "06fe0ba2-d0c0-406d-bcb5-0d6f95c9848c", "address": "fa:16:3e:ab:c9:69", "network": {"id": "a7f183ec-224d-47f8-a012-9bcde922bafb", "bridge": "br-int", "label": "tempest-TestServerBasicOps-1200474505-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, 
"ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b02ca2bf61d48d99724e6446cfd3524", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap06fe0ba2-d0", "ovs_interfaceid": "06fe0ba2-d0c0-406d-bcb5-0d6f95c9848c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec  6 02:55:17 np0005548731 nova_compute[232433]: 2025-12-06 07:55:17.980 232437 DEBUG nova.network.os_vif_util [None req-ee23107a-4d8e-4faa-81e5-2d23dd5cd06e 383c99c13fb24b039259ed88ae2f32c0 2b02ca2bf61d48d99724e6446cfd3524 - - default default] Converting VIF {"id": "06fe0ba2-d0c0-406d-bcb5-0d6f95c9848c", "address": "fa:16:3e:ab:c9:69", "network": {"id": "a7f183ec-224d-47f8-a012-9bcde922bafb", "bridge": "br-int", "label": "tempest-TestServerBasicOps-1200474505-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b02ca2bf61d48d99724e6446cfd3524", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap06fe0ba2-d0", "ovs_interfaceid": "06fe0ba2-d0c0-406d-bcb5-0d6f95c9848c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 02:55:17 np0005548731 nova_compute[232433]: 2025-12-06 07:55:17.981 232437 DEBUG nova.network.os_vif_util [None req-ee23107a-4d8e-4faa-81e5-2d23dd5cd06e 383c99c13fb24b039259ed88ae2f32c0 2b02ca2bf61d48d99724e6446cfd3524 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ab:c9:69,bridge_name='br-int',has_traffic_filtering=True,id=06fe0ba2-d0c0-406d-bcb5-0d6f95c9848c,network=Network(a7f183ec-224d-47f8-a012-9bcde922bafb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap06fe0ba2-d0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 02:55:17 np0005548731 nova_compute[232433]: 2025-12-06 07:55:17.983 232437 DEBUG nova.objects.instance [None req-ee23107a-4d8e-4faa-81e5-2d23dd5cd06e 383c99c13fb24b039259ed88ae2f32c0 2b02ca2bf61d48d99724e6446cfd3524 - - default default] Lazy-loading 'pci_devices' on Instance uuid e600163d-bbdd-421a-ab0e-83bfbac6495a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:55:18 np0005548731 nova_compute[232433]: 2025-12-06 07:55:18.015 232437 DEBUG nova.virt.libvirt.driver [None req-ee23107a-4d8e-4faa-81e5-2d23dd5cd06e 383c99c13fb24b039259ed88ae2f32c0 2b02ca2bf61d48d99724e6446cfd3524 - - default default] [instance: e600163d-bbdd-421a-ab0e-83bfbac6495a] End _get_guest_xml xml=<domain type="kvm">
Dec  6 02:55:18 np0005548731 nova_compute[232433]:  <uuid>e600163d-bbdd-421a-ab0e-83bfbac6495a</uuid>
Dec  6 02:55:18 np0005548731 nova_compute[232433]:  <name>instance-000000ae</name>
Dec  6 02:55:18 np0005548731 nova_compute[232433]:  <memory>131072</memory>
Dec  6 02:55:18 np0005548731 nova_compute[232433]:  <vcpu>1</vcpu>
Dec  6 02:55:18 np0005548731 nova_compute[232433]:  <metadata>
Dec  6 02:55:18 np0005548731 nova_compute[232433]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec  6 02:55:18 np0005548731 nova_compute[232433]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec  6 02:55:18 np0005548731 nova_compute[232433]:      <nova:name>tempest-TestServerBasicOps-server-1079945337</nova:name>
Dec  6 02:55:18 np0005548731 nova_compute[232433]:      <nova:creationTime>2025-12-06 07:55:17</nova:creationTime>
Dec  6 02:55:18 np0005548731 nova_compute[232433]:      <nova:flavor name="m1.nano">
Dec  6 02:55:18 np0005548731 nova_compute[232433]:        <nova:memory>128</nova:memory>
Dec  6 02:55:18 np0005548731 nova_compute[232433]:        <nova:disk>1</nova:disk>
Dec  6 02:55:18 np0005548731 nova_compute[232433]:        <nova:swap>0</nova:swap>
Dec  6 02:55:18 np0005548731 nova_compute[232433]:        <nova:ephemeral>0</nova:ephemeral>
Dec  6 02:55:18 np0005548731 nova_compute[232433]:        <nova:vcpus>1</nova:vcpus>
Dec  6 02:55:18 np0005548731 nova_compute[232433]:      </nova:flavor>
Dec  6 02:55:18 np0005548731 nova_compute[232433]:      <nova:owner>
Dec  6 02:55:18 np0005548731 nova_compute[232433]:        <nova:user uuid="383c99c13fb24b039259ed88ae2f32c0">tempest-TestServerBasicOps-882087666-project-member</nova:user>
Dec  6 02:55:18 np0005548731 nova_compute[232433]:        <nova:project uuid="2b02ca2bf61d48d99724e6446cfd3524">tempest-TestServerBasicOps-882087666</nova:project>
Dec  6 02:55:18 np0005548731 nova_compute[232433]:      </nova:owner>
Dec  6 02:55:18 np0005548731 nova_compute[232433]:      <nova:root type="image" uuid="6efab05d-c7cf-4770-a5c3-c806a2739063"/>
Dec  6 02:55:18 np0005548731 nova_compute[232433]:      <nova:ports>
Dec  6 02:55:18 np0005548731 nova_compute[232433]:        <nova:port uuid="06fe0ba2-d0c0-406d-bcb5-0d6f95c9848c">
Dec  6 02:55:18 np0005548731 nova_compute[232433]:          <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Dec  6 02:55:18 np0005548731 nova_compute[232433]:        </nova:port>
Dec  6 02:55:18 np0005548731 nova_compute[232433]:      </nova:ports>
Dec  6 02:55:18 np0005548731 nova_compute[232433]:    </nova:instance>
Dec  6 02:55:18 np0005548731 nova_compute[232433]:  </metadata>
Dec  6 02:55:18 np0005548731 nova_compute[232433]:  <sysinfo type="smbios">
Dec  6 02:55:18 np0005548731 nova_compute[232433]:    <system>
Dec  6 02:55:18 np0005548731 nova_compute[232433]:      <entry name="manufacturer">RDO</entry>
Dec  6 02:55:18 np0005548731 nova_compute[232433]:      <entry name="product">OpenStack Compute</entry>
Dec  6 02:55:18 np0005548731 nova_compute[232433]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec  6 02:55:18 np0005548731 nova_compute[232433]:      <entry name="serial">e600163d-bbdd-421a-ab0e-83bfbac6495a</entry>
Dec  6 02:55:18 np0005548731 nova_compute[232433]:      <entry name="uuid">e600163d-bbdd-421a-ab0e-83bfbac6495a</entry>
Dec  6 02:55:18 np0005548731 nova_compute[232433]:      <entry name="family">Virtual Machine</entry>
Dec  6 02:55:18 np0005548731 nova_compute[232433]:    </system>
Dec  6 02:55:18 np0005548731 nova_compute[232433]:  </sysinfo>
Dec  6 02:55:18 np0005548731 nova_compute[232433]:  <os>
Dec  6 02:55:18 np0005548731 nova_compute[232433]:    <type arch="x86_64" machine="q35">hvm</type>
Dec  6 02:55:18 np0005548731 nova_compute[232433]:    <boot dev="hd"/>
Dec  6 02:55:18 np0005548731 nova_compute[232433]:    <smbios mode="sysinfo"/>
Dec  6 02:55:18 np0005548731 nova_compute[232433]:  </os>
Dec  6 02:55:18 np0005548731 nova_compute[232433]:  <features>
Dec  6 02:55:18 np0005548731 nova_compute[232433]:    <acpi/>
Dec  6 02:55:18 np0005548731 nova_compute[232433]:    <apic/>
Dec  6 02:55:18 np0005548731 nova_compute[232433]:    <vmcoreinfo/>
Dec  6 02:55:18 np0005548731 nova_compute[232433]:  </features>
Dec  6 02:55:18 np0005548731 nova_compute[232433]:  <clock offset="utc">
Dec  6 02:55:18 np0005548731 nova_compute[232433]:    <timer name="pit" tickpolicy="delay"/>
Dec  6 02:55:18 np0005548731 nova_compute[232433]:    <timer name="rtc" tickpolicy="catchup"/>
Dec  6 02:55:18 np0005548731 nova_compute[232433]:    <timer name="hpet" present="no"/>
Dec  6 02:55:18 np0005548731 nova_compute[232433]:  </clock>
Dec  6 02:55:18 np0005548731 nova_compute[232433]:  <cpu mode="custom" match="exact">
Dec  6 02:55:18 np0005548731 nova_compute[232433]:    <model>Nehalem</model>
Dec  6 02:55:18 np0005548731 nova_compute[232433]:    <topology sockets="1" cores="1" threads="1"/>
Dec  6 02:55:18 np0005548731 nova_compute[232433]:  </cpu>
Dec  6 02:55:18 np0005548731 nova_compute[232433]:  <devices>
Dec  6 02:55:18 np0005548731 nova_compute[232433]:    <disk type="network" device="disk">
Dec  6 02:55:18 np0005548731 nova_compute[232433]:      <driver type="raw" cache="none"/>
Dec  6 02:55:18 np0005548731 nova_compute[232433]:      <source protocol="rbd" name="vms/e600163d-bbdd-421a-ab0e-83bfbac6495a_disk">
Dec  6 02:55:18 np0005548731 nova_compute[232433]:        <host name="192.168.122.100" port="6789"/>
Dec  6 02:55:18 np0005548731 nova_compute[232433]:        <host name="192.168.122.102" port="6789"/>
Dec  6 02:55:18 np0005548731 nova_compute[232433]:        <host name="192.168.122.101" port="6789"/>
Dec  6 02:55:18 np0005548731 nova_compute[232433]:      </source>
Dec  6 02:55:18 np0005548731 nova_compute[232433]:      <auth username="openstack">
Dec  6 02:55:18 np0005548731 nova_compute[232433]:        <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec  6 02:55:18 np0005548731 nova_compute[232433]:      </auth>
Dec  6 02:55:18 np0005548731 nova_compute[232433]:      <target dev="vda" bus="virtio"/>
Dec  6 02:55:18 np0005548731 nova_compute[232433]:    </disk>
Dec  6 02:55:18 np0005548731 nova_compute[232433]:    <disk type="network" device="cdrom">
Dec  6 02:55:18 np0005548731 nova_compute[232433]:      <driver type="raw" cache="none"/>
Dec  6 02:55:18 np0005548731 nova_compute[232433]:      <source protocol="rbd" name="vms/e600163d-bbdd-421a-ab0e-83bfbac6495a_disk.config">
Dec  6 02:55:18 np0005548731 nova_compute[232433]:        <host name="192.168.122.100" port="6789"/>
Dec  6 02:55:18 np0005548731 nova_compute[232433]:        <host name="192.168.122.102" port="6789"/>
Dec  6 02:55:18 np0005548731 nova_compute[232433]:        <host name="192.168.122.101" port="6789"/>
Dec  6 02:55:18 np0005548731 nova_compute[232433]:      </source>
Dec  6 02:55:18 np0005548731 nova_compute[232433]:      <auth username="openstack">
Dec  6 02:55:18 np0005548731 nova_compute[232433]:        <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec  6 02:55:18 np0005548731 nova_compute[232433]:      </auth>
Dec  6 02:55:18 np0005548731 nova_compute[232433]:      <target dev="sda" bus="sata"/>
Dec  6 02:55:18 np0005548731 nova_compute[232433]:    </disk>
Dec  6 02:55:18 np0005548731 nova_compute[232433]:    <interface type="ethernet">
Dec  6 02:55:18 np0005548731 nova_compute[232433]:      <mac address="fa:16:3e:ab:c9:69"/>
Dec  6 02:55:18 np0005548731 nova_compute[232433]:      <model type="virtio"/>
Dec  6 02:55:18 np0005548731 nova_compute[232433]:      <driver name="vhost" rx_queue_size="512"/>
Dec  6 02:55:18 np0005548731 nova_compute[232433]:      <mtu size="1442"/>
Dec  6 02:55:18 np0005548731 nova_compute[232433]:      <target dev="tap06fe0ba2-d0"/>
Dec  6 02:55:18 np0005548731 nova_compute[232433]:    </interface>
Dec  6 02:55:18 np0005548731 nova_compute[232433]:    <serial type="pty">
Dec  6 02:55:18 np0005548731 nova_compute[232433]:      <log file="/var/lib/nova/instances/e600163d-bbdd-421a-ab0e-83bfbac6495a/console.log" append="off"/>
Dec  6 02:55:18 np0005548731 nova_compute[232433]:    </serial>
Dec  6 02:55:18 np0005548731 nova_compute[232433]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec  6 02:55:18 np0005548731 nova_compute[232433]:    <video>
Dec  6 02:55:18 np0005548731 nova_compute[232433]:      <model type="virtio"/>
Dec  6 02:55:18 np0005548731 nova_compute[232433]:    </video>
Dec  6 02:55:18 np0005548731 nova_compute[232433]:    <input type="tablet" bus="usb"/>
Dec  6 02:55:18 np0005548731 nova_compute[232433]:    <rng model="virtio">
Dec  6 02:55:18 np0005548731 nova_compute[232433]:      <backend model="random">/dev/urandom</backend>
Dec  6 02:55:18 np0005548731 nova_compute[232433]:    </rng>
Dec  6 02:55:18 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root"/>
Dec  6 02:55:18 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:55:18 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:55:18 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:55:18 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:55:18 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:55:18 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:55:18 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:55:18 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:55:18 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:55:18 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:55:18 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:55:18 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:55:18 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:55:18 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:55:18 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:55:18 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:55:18 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:55:18 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:55:18 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:55:18 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:55:18 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:55:18 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:55:18 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:55:18 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:55:18 np0005548731 nova_compute[232433]:    <controller type="usb" index="0"/>
Dec  6 02:55:18 np0005548731 nova_compute[232433]:    <memballoon model="virtio">
Dec  6 02:55:18 np0005548731 nova_compute[232433]:      <stats period="10"/>
Dec  6 02:55:18 np0005548731 nova_compute[232433]:    </memballoon>
Dec  6 02:55:18 np0005548731 nova_compute[232433]:  </devices>
Dec  6 02:55:18 np0005548731 nova_compute[232433]: </domain>
Dec  6 02:55:18 np0005548731 nova_compute[232433]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec  6 02:55:18 np0005548731 nova_compute[232433]: 2025-12-06 07:55:18.015 232437 DEBUG nova.compute.manager [None req-ee23107a-4d8e-4faa-81e5-2d23dd5cd06e 383c99c13fb24b039259ed88ae2f32c0 2b02ca2bf61d48d99724e6446cfd3524 - - default default] [instance: e600163d-bbdd-421a-ab0e-83bfbac6495a] Preparing to wait for external event network-vif-plugged-06fe0ba2-d0c0-406d-bcb5-0d6f95c9848c prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec  6 02:55:18 np0005548731 nova_compute[232433]: 2025-12-06 07:55:18.016 232437 DEBUG oslo_concurrency.lockutils [None req-ee23107a-4d8e-4faa-81e5-2d23dd5cd06e 383c99c13fb24b039259ed88ae2f32c0 2b02ca2bf61d48d99724e6446cfd3524 - - default default] Acquiring lock "e600163d-bbdd-421a-ab0e-83bfbac6495a-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:55:18 np0005548731 nova_compute[232433]: 2025-12-06 07:55:18.016 232437 DEBUG oslo_concurrency.lockutils [None req-ee23107a-4d8e-4faa-81e5-2d23dd5cd06e 383c99c13fb24b039259ed88ae2f32c0 2b02ca2bf61d48d99724e6446cfd3524 - - default default] Lock "e600163d-bbdd-421a-ab0e-83bfbac6495a-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:55:18 np0005548731 nova_compute[232433]: 2025-12-06 07:55:18.016 232437 DEBUG oslo_concurrency.lockutils [None req-ee23107a-4d8e-4faa-81e5-2d23dd5cd06e 383c99c13fb24b039259ed88ae2f32c0 2b02ca2bf61d48d99724e6446cfd3524 - - default default] Lock "e600163d-bbdd-421a-ab0e-83bfbac6495a-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:55:18 np0005548731 nova_compute[232433]: 2025-12-06 07:55:18.017 232437 DEBUG nova.virt.libvirt.vif [None req-ee23107a-4d8e-4faa-81e5-2d23dd5cd06e 383c99c13fb24b039259ed88ae2f32c0 2b02ca2bf61d48d99724e6446cfd3524 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T07:55:09Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestServerBasicOps-server-1079945337',display_name='tempest-TestServerBasicOps-server-1079945337',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-testserverbasicops-server-1079945337',id=174,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCFn8IHrAu1capFTEammcpgozBC0c7wVgyvOhNY1pdFvVuqQz1ZJigEw9y/ImJqmbdoP6LMd9JwjHpW/YGqtrVjbCEXCA4lrId1JNg6dNRmMx1JWzWMSDqrEVBgkgpHkSA==',key_name='tempest-TestServerBasicOps-1564693908',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={meta1='data1',meta2='data2',metaN='dataN'},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2b02ca2bf61d48d99724e6446cfd3524',ramdisk_id='',reservation_id='r-7ox8eliw',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestServerBasicOps-882087666',owner_user_name='tempest-TestServerBasicOps-882087666-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:55:12Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='383c99c13fb24b039259ed88ae2f32c0',uuid=e600163d-bbdd-421a-ab0e-83bfbac6495a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "06fe0ba2-d0c0-406d-bcb5-0d6f95c9848c", "address": "fa:16:3e:ab:c9:69", "network": {"id": "a7f183ec-224d-47f8-a012-9bcde922bafb", "bridge": "br-int", "label": "tempest-TestServerBasicOps-1200474505-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b02ca2bf61d48d99724e6446cfd3524", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap06fe0ba2-d0", "ovs_interfaceid": "06fe0ba2-d0c0-406d-bcb5-0d6f95c9848c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec  6 02:55:18 np0005548731 nova_compute[232433]: 2025-12-06 07:55:18.017 232437 DEBUG nova.network.os_vif_util [None req-ee23107a-4d8e-4faa-81e5-2d23dd5cd06e 383c99c13fb24b039259ed88ae2f32c0 2b02ca2bf61d48d99724e6446cfd3524 - - default default] Converting VIF {"id": "06fe0ba2-d0c0-406d-bcb5-0d6f95c9848c", "address": "fa:16:3e:ab:c9:69", "network": {"id": "a7f183ec-224d-47f8-a012-9bcde922bafb", "bridge": "br-int", "label": "tempest-TestServerBasicOps-1200474505-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b02ca2bf61d48d99724e6446cfd3524", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap06fe0ba2-d0", "ovs_interfaceid": "06fe0ba2-d0c0-406d-bcb5-0d6f95c9848c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 02:55:18 np0005548731 nova_compute[232433]: 2025-12-06 07:55:18.018 232437 DEBUG nova.network.os_vif_util [None req-ee23107a-4d8e-4faa-81e5-2d23dd5cd06e 383c99c13fb24b039259ed88ae2f32c0 2b02ca2bf61d48d99724e6446cfd3524 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ab:c9:69,bridge_name='br-int',has_traffic_filtering=True,id=06fe0ba2-d0c0-406d-bcb5-0d6f95c9848c,network=Network(a7f183ec-224d-47f8-a012-9bcde922bafb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap06fe0ba2-d0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 02:55:18 np0005548731 nova_compute[232433]: 2025-12-06 07:55:18.018 232437 DEBUG os_vif [None req-ee23107a-4d8e-4faa-81e5-2d23dd5cd06e 383c99c13fb24b039259ed88ae2f32c0 2b02ca2bf61d48d99724e6446cfd3524 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ab:c9:69,bridge_name='br-int',has_traffic_filtering=True,id=06fe0ba2-d0c0-406d-bcb5-0d6f95c9848c,network=Network(a7f183ec-224d-47f8-a012-9bcde922bafb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap06fe0ba2-d0') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec  6 02:55:18 np0005548731 nova_compute[232433]: 2025-12-06 07:55:18.018 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:55:18 np0005548731 nova_compute[232433]: 2025-12-06 07:55:18.019 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:55:18 np0005548731 nova_compute[232433]: 2025-12-06 07:55:18.019 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  6 02:55:18 np0005548731 nova_compute[232433]: 2025-12-06 07:55:18.022 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:55:18 np0005548731 nova_compute[232433]: 2025-12-06 07:55:18.022 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap06fe0ba2-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:55:18 np0005548731 nova_compute[232433]: 2025-12-06 07:55:18.023 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap06fe0ba2-d0, col_values=(('external_ids', {'iface-id': '06fe0ba2-d0c0-406d-bcb5-0d6f95c9848c', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:ab:c9:69', 'vm-uuid': 'e600163d-bbdd-421a-ab0e-83bfbac6495a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:55:18 np0005548731 nova_compute[232433]: 2025-12-06 07:55:18.078 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:55:18 np0005548731 NetworkManager[49182]: <info>  [1765007718.0797] manager: (tap06fe0ba2-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/394)
Dec  6 02:55:18 np0005548731 nova_compute[232433]: 2025-12-06 07:55:18.081 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  6 02:55:18 np0005548731 nova_compute[232433]: 2025-12-06 07:55:18.084 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:55:18 np0005548731 nova_compute[232433]: 2025-12-06 07:55:18.085 232437 INFO os_vif [None req-ee23107a-4d8e-4faa-81e5-2d23dd5cd06e 383c99c13fb24b039259ed88ae2f32c0 2b02ca2bf61d48d99724e6446cfd3524 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ab:c9:69,bridge_name='br-int',has_traffic_filtering=True,id=06fe0ba2-d0c0-406d-bcb5-0d6f95c9848c,network=Network(a7f183ec-224d-47f8-a012-9bcde922bafb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap06fe0ba2-d0')#033[00m
Dec  6 02:55:18 np0005548731 nova_compute[232433]: 2025-12-06 07:55:18.170 232437 DEBUG nova.virt.libvirt.driver [None req-ee23107a-4d8e-4faa-81e5-2d23dd5cd06e 383c99c13fb24b039259ed88ae2f32c0 2b02ca2bf61d48d99724e6446cfd3524 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  6 02:55:18 np0005548731 nova_compute[232433]: 2025-12-06 07:55:18.170 232437 DEBUG nova.virt.libvirt.driver [None req-ee23107a-4d8e-4faa-81e5-2d23dd5cd06e 383c99c13fb24b039259ed88ae2f32c0 2b02ca2bf61d48d99724e6446cfd3524 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  6 02:55:18 np0005548731 nova_compute[232433]: 2025-12-06 07:55:18.171 232437 DEBUG nova.virt.libvirt.driver [None req-ee23107a-4d8e-4faa-81e5-2d23dd5cd06e 383c99c13fb24b039259ed88ae2f32c0 2b02ca2bf61d48d99724e6446cfd3524 - - default default] No VIF found with MAC fa:16:3e:ab:c9:69, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec  6 02:55:18 np0005548731 nova_compute[232433]: 2025-12-06 07:55:18.171 232437 INFO nova.virt.libvirt.driver [None req-ee23107a-4d8e-4faa-81e5-2d23dd5cd06e 383c99c13fb24b039259ed88ae2f32c0 2b02ca2bf61d48d99724e6446cfd3524 - - default default] [instance: e600163d-bbdd-421a-ab0e-83bfbac6495a] Using config drive#033[00m
Dec  6 02:55:18 np0005548731 nova_compute[232433]: 2025-12-06 07:55:18.197 232437 DEBUG nova.storage.rbd_utils [None req-ee23107a-4d8e-4faa-81e5-2d23dd5cd06e 383c99c13fb24b039259ed88ae2f32c0 2b02ca2bf61d48d99724e6446cfd3524 - - default default] rbd image e600163d-bbdd-421a-ab0e-83bfbac6495a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:55:18 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:55:18 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:55:18 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:55:18.298 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:55:18 np0005548731 nova_compute[232433]: 2025-12-06 07:55:18.990 232437 INFO nova.virt.libvirt.driver [None req-ee23107a-4d8e-4faa-81e5-2d23dd5cd06e 383c99c13fb24b039259ed88ae2f32c0 2b02ca2bf61d48d99724e6446cfd3524 - - default default] [instance: e600163d-bbdd-421a-ab0e-83bfbac6495a] Creating config drive at /var/lib/nova/instances/e600163d-bbdd-421a-ab0e-83bfbac6495a/disk.config#033[00m
Dec  6 02:55:18 np0005548731 nova_compute[232433]: 2025-12-06 07:55:18.995 232437 DEBUG oslo_concurrency.processutils [None req-ee23107a-4d8e-4faa-81e5-2d23dd5cd06e 383c99c13fb24b039259ed88ae2f32c0 2b02ca2bf61d48d99724e6446cfd3524 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/e600163d-bbdd-421a-ab0e-83bfbac6495a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp6hem7kzp execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:55:19 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:55:19 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:55:19 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:55:19.101 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:55:19 np0005548731 nova_compute[232433]: 2025-12-06 07:55:19.130 232437 DEBUG oslo_concurrency.processutils [None req-ee23107a-4d8e-4faa-81e5-2d23dd5cd06e 383c99c13fb24b039259ed88ae2f32c0 2b02ca2bf61d48d99724e6446cfd3524 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/e600163d-bbdd-421a-ab0e-83bfbac6495a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp6hem7kzp" returned: 0 in 0.134s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:55:19 np0005548731 nova_compute[232433]: 2025-12-06 07:55:19.160 232437 DEBUG nova.storage.rbd_utils [None req-ee23107a-4d8e-4faa-81e5-2d23dd5cd06e 383c99c13fb24b039259ed88ae2f32c0 2b02ca2bf61d48d99724e6446cfd3524 - - default default] rbd image e600163d-bbdd-421a-ab0e-83bfbac6495a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:55:19 np0005548731 nova_compute[232433]: 2025-12-06 07:55:19.164 232437 DEBUG oslo_concurrency.processutils [None req-ee23107a-4d8e-4faa-81e5-2d23dd5cd06e 383c99c13fb24b039259ed88ae2f32c0 2b02ca2bf61d48d99724e6446cfd3524 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/e600163d-bbdd-421a-ab0e-83bfbac6495a/disk.config e600163d-bbdd-421a-ab0e-83bfbac6495a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:55:19 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:55:19 np0005548731 nova_compute[232433]: 2025-12-06 07:55:19.374 232437 DEBUG oslo_concurrency.processutils [None req-ee23107a-4d8e-4faa-81e5-2d23dd5cd06e 383c99c13fb24b039259ed88ae2f32c0 2b02ca2bf61d48d99724e6446cfd3524 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/e600163d-bbdd-421a-ab0e-83bfbac6495a/disk.config e600163d-bbdd-421a-ab0e-83bfbac6495a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.210s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:55:19 np0005548731 nova_compute[232433]: 2025-12-06 07:55:19.375 232437 INFO nova.virt.libvirt.driver [None req-ee23107a-4d8e-4faa-81e5-2d23dd5cd06e 383c99c13fb24b039259ed88ae2f32c0 2b02ca2bf61d48d99724e6446cfd3524 - - default default] [instance: e600163d-bbdd-421a-ab0e-83bfbac6495a] Deleting local config drive /var/lib/nova/instances/e600163d-bbdd-421a-ab0e-83bfbac6495a/disk.config because it was imported into RBD.#033[00m
Dec  6 02:55:19 np0005548731 kernel: tap06fe0ba2-d0: entered promiscuous mode
Dec  6 02:55:19 np0005548731 ovn_controller[133927]: 2025-12-06T07:55:19Z|00864|binding|INFO|Claiming lport 06fe0ba2-d0c0-406d-bcb5-0d6f95c9848c for this chassis.
Dec  6 02:55:19 np0005548731 ovn_controller[133927]: 2025-12-06T07:55:19Z|00865|binding|INFO|06fe0ba2-d0c0-406d-bcb5-0d6f95c9848c: Claiming fa:16:3e:ab:c9:69 10.100.0.14
Dec  6 02:55:19 np0005548731 nova_compute[232433]: 2025-12-06 07:55:19.427 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:55:19 np0005548731 NetworkManager[49182]: <info>  [1765007719.4290] manager: (tap06fe0ba2-d0): new Tun device (/org/freedesktop/NetworkManager/Devices/395)
Dec  6 02:55:19 np0005548731 nova_compute[232433]: 2025-12-06 07:55:19.434 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:55:19 np0005548731 systemd-udevd[314710]: Network interface NamePolicy= disabled on kernel command line.
Dec  6 02:55:19 np0005548731 NetworkManager[49182]: <info>  [1765007719.4685] device (tap06fe0ba2-d0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec  6 02:55:19 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:55:19.467 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ab:c9:69 10.100.0.14'], port_security=['fa:16:3e:ab:c9:69 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'e600163d-bbdd-421a-ab0e-83bfbac6495a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a7f183ec-224d-47f8-a012-9bcde922bafb', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2b02ca2bf61d48d99724e6446cfd3524', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'cb7353fc-9d60-41d9-a986-6d90d5004536 f35ac537-2359-40f7-b84f-66524cb2482e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4cf42949-a96b-4530-989a-41b873f41733, chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], logical_port=06fe0ba2-d0c0-406d-bcb5-0d6f95c9848c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 02:55:19 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:55:19.468 143965 INFO neutron.agent.ovn.metadata.agent [-] Port 06fe0ba2-d0c0-406d-bcb5-0d6f95c9848c in datapath a7f183ec-224d-47f8-a012-9bcde922bafb bound to our chassis#033[00m
Dec  6 02:55:19 np0005548731 NetworkManager[49182]: <info>  [1765007719.4700] device (tap06fe0ba2-d0): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec  6 02:55:19 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:55:19.470 143965 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network a7f183ec-224d-47f8-a012-9bcde922bafb#033[00m
Dec  6 02:55:19 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:55:19.483 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[51d3e127-f86f-48f2-a7c1-f7054d722be5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:55:19 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:55:19.484 143965 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapa7f183ec-21 in ovnmeta-a7f183ec-224d-47f8-a012-9bcde922bafb namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Dec  6 02:55:19 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:55:19.486 236668 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapa7f183ec-20 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Dec  6 02:55:19 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:55:19.486 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[cc8c2c56-e1d2-444c-847b-42ec473dd51b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:55:19 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:55:19.487 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[e1ab15a3-c6f7-4a4e-b9ec-493f5860cd61]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:55:19 np0005548731 systemd-machined[195355]: New machine qemu-88-instance-000000ae.
Dec  6 02:55:19 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:55:19.498 144079 DEBUG oslo.privsep.daemon [-] privsep: reply[4be3fd21-aa3e-47c6-8f6e-254c40b7a8e9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:55:19 np0005548731 nova_compute[232433]: 2025-12-06 07:55:19.500 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:55:19 np0005548731 ovn_controller[133927]: 2025-12-06T07:55:19Z|00866|binding|INFO|Setting lport 06fe0ba2-d0c0-406d-bcb5-0d6f95c9848c ovn-installed in OVS
Dec  6 02:55:19 np0005548731 ovn_controller[133927]: 2025-12-06T07:55:19Z|00867|binding|INFO|Setting lport 06fe0ba2-d0c0-406d-bcb5-0d6f95c9848c up in Southbound
Dec  6 02:55:19 np0005548731 nova_compute[232433]: 2025-12-06 07:55:19.506 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:55:19 np0005548731 systemd[1]: Started Virtual Machine qemu-88-instance-000000ae.
Dec  6 02:55:19 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:55:19.521 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[3ffce3c8-ed89-4a65-8917-85055888cce9]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:55:19 np0005548731 nova_compute[232433]: 2025-12-06 07:55:19.546 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:55:19 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:55:19.553 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[3961d725-55c3-4fa0-92f0-807dbcd1b8a5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:55:19 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:55:19.557 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[67f8745b-75f9-4e11-a731-d904228f65dc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:55:19 np0005548731 NetworkManager[49182]: <info>  [1765007719.5585] manager: (tapa7f183ec-20): new Veth device (/org/freedesktop/NetworkManager/Devices/396)
Dec  6 02:55:19 np0005548731 systemd-udevd[314712]: Network interface NamePolicy= disabled on kernel command line.
Dec  6 02:55:19 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:55:19.586 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[3149b354-1831-45b2-8cc4-506b7d476f42]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:55:19 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:55:19.590 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[bdd3af10-8cdc-4196-bef1-b0f36da8447a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:55:19 np0005548731 NetworkManager[49182]: <info>  [1765007719.6136] device (tapa7f183ec-20): carrier: link connected
Dec  6 02:55:19 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:55:19.620 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[b15a5fa4-4ad9-4d09-9095-e98b04ff69a4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:55:19 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:55:19.638 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[cf513dfb-4c42-46df-9aff-6e3e497f98cb]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa7f183ec-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2a:f5:11'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 264], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 798162, 'reachable_time': 43985, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 314746, 'error': None, 'target': 'ovnmeta-a7f183ec-224d-47f8-a012-9bcde922bafb', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:55:19 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:55:19.654 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[cc40aa96-4bd5-4f54-a8ee-886441b28a42]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe2a:f511'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 798162, 'tstamp': 798162}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 314747, 'error': None, 'target': 'ovnmeta-a7f183ec-224d-47f8-a012-9bcde922bafb', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:55:19 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:55:19.670 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[4af42e69-50bb-4e14-9a20-bb5a0cb454c1]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa7f183ec-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2a:f5:11'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 264], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 798162, 'reachable_time': 43985, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 314748, 'error': None, 'target': 'ovnmeta-a7f183ec-224d-47f8-a012-9bcde922bafb', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:55:19 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:55:19.700 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[1fe696dd-4026-4406-b810-3490d1129ace]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:55:19 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:55:19.754 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[88264db9-7d8a-4d47-a594-12432797f691]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:55:19 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:55:19.755 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa7f183ec-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:55:19 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:55:19.755 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  6 02:55:19 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:55:19.756 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa7f183ec-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:55:19 np0005548731 nova_compute[232433]: 2025-12-06 07:55:19.757 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:55:19 np0005548731 NetworkManager[49182]: <info>  [1765007719.7581] manager: (tapa7f183ec-20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/397)
Dec  6 02:55:19 np0005548731 kernel: tapa7f183ec-20: entered promiscuous mode
Dec  6 02:55:19 np0005548731 nova_compute[232433]: 2025-12-06 07:55:19.760 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:55:19 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:55:19.761 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapa7f183ec-20, col_values=(('external_ids', {'iface-id': 'da468cbe-02bf-4465-aa94-57670ea2997c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:55:19 np0005548731 nova_compute[232433]: 2025-12-06 07:55:19.761 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:55:19 np0005548731 ovn_controller[133927]: 2025-12-06T07:55:19Z|00868|binding|INFO|Releasing lport da468cbe-02bf-4465-aa94-57670ea2997c from this chassis (sb_readonly=0)
Dec  6 02:55:19 np0005548731 nova_compute[232433]: 2025-12-06 07:55:19.776 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:55:19 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:55:19.777 143965 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/a7f183ec-224d-47f8-a012-9bcde922bafb.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/a7f183ec-224d-47f8-a012-9bcde922bafb.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Dec  6 02:55:19 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:55:19.778 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[a12509c4-fe2f-402e-99f9-e78fc75579aa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:55:19 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:55:19.779 143965 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec  6 02:55:19 np0005548731 ovn_metadata_agent[143960]: global
Dec  6 02:55:19 np0005548731 ovn_metadata_agent[143960]:    log         /dev/log local0 debug
Dec  6 02:55:19 np0005548731 ovn_metadata_agent[143960]:    log-tag     haproxy-metadata-proxy-a7f183ec-224d-47f8-a012-9bcde922bafb
Dec  6 02:55:19 np0005548731 ovn_metadata_agent[143960]:    user        root
Dec  6 02:55:19 np0005548731 ovn_metadata_agent[143960]:    group       root
Dec  6 02:55:19 np0005548731 ovn_metadata_agent[143960]:    maxconn     1024
Dec  6 02:55:19 np0005548731 ovn_metadata_agent[143960]:    pidfile     /var/lib/neutron/external/pids/a7f183ec-224d-47f8-a012-9bcde922bafb.pid.haproxy
Dec  6 02:55:19 np0005548731 ovn_metadata_agent[143960]:    daemon
Dec  6 02:55:19 np0005548731 ovn_metadata_agent[143960]: 
Dec  6 02:55:19 np0005548731 ovn_metadata_agent[143960]: defaults
Dec  6 02:55:19 np0005548731 ovn_metadata_agent[143960]:    log global
Dec  6 02:55:19 np0005548731 ovn_metadata_agent[143960]:    mode http
Dec  6 02:55:19 np0005548731 ovn_metadata_agent[143960]:    option httplog
Dec  6 02:55:19 np0005548731 ovn_metadata_agent[143960]:    option dontlognull
Dec  6 02:55:19 np0005548731 ovn_metadata_agent[143960]:    option http-server-close
Dec  6 02:55:19 np0005548731 ovn_metadata_agent[143960]:    option forwardfor
Dec  6 02:55:19 np0005548731 ovn_metadata_agent[143960]:    retries                 3
Dec  6 02:55:19 np0005548731 ovn_metadata_agent[143960]:    timeout http-request    30s
Dec  6 02:55:19 np0005548731 ovn_metadata_agent[143960]:    timeout connect         30s
Dec  6 02:55:19 np0005548731 ovn_metadata_agent[143960]:    timeout client          32s
Dec  6 02:55:19 np0005548731 ovn_metadata_agent[143960]:    timeout server          32s
Dec  6 02:55:19 np0005548731 ovn_metadata_agent[143960]:    timeout http-keep-alive 30s
Dec  6 02:55:19 np0005548731 ovn_metadata_agent[143960]: 
Dec  6 02:55:19 np0005548731 ovn_metadata_agent[143960]: 
Dec  6 02:55:19 np0005548731 ovn_metadata_agent[143960]: listen listener
Dec  6 02:55:19 np0005548731 ovn_metadata_agent[143960]:    bind 169.254.169.254:80
Dec  6 02:55:19 np0005548731 ovn_metadata_agent[143960]:    server metadata /var/lib/neutron/metadata_proxy
Dec  6 02:55:19 np0005548731 ovn_metadata_agent[143960]:    http-request add-header X-OVN-Network-ID a7f183ec-224d-47f8-a012-9bcde922bafb
Dec  6 02:55:19 np0005548731 ovn_metadata_agent[143960]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Dec  6 02:55:19 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:55:19.779 143965 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-a7f183ec-224d-47f8-a012-9bcde922bafb', 'env', 'PROCESS_TAG=haproxy-a7f183ec-224d-47f8-a012-9bcde922bafb', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/a7f183ec-224d-47f8-a012-9bcde922bafb.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Dec  6 02:55:19 np0005548731 nova_compute[232433]: 2025-12-06 07:55:19.910 232437 DEBUG nova.virt.driver [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Emitting event <LifecycleEvent: 1765007719.91016, e600163d-bbdd-421a-ab0e-83bfbac6495a => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 02:55:19 np0005548731 nova_compute[232433]: 2025-12-06 07:55:19.916 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: e600163d-bbdd-421a-ab0e-83bfbac6495a] VM Started (Lifecycle Event)#033[00m
Dec  6 02:55:19 np0005548731 nova_compute[232433]: 2025-12-06 07:55:19.943 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: e600163d-bbdd-421a-ab0e-83bfbac6495a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:55:19 np0005548731 nova_compute[232433]: 2025-12-06 07:55:19.948 232437 DEBUG nova.virt.driver [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Emitting event <LifecycleEvent: 1765007719.911311, e600163d-bbdd-421a-ab0e-83bfbac6495a => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 02:55:19 np0005548731 nova_compute[232433]: 2025-12-06 07:55:19.948 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: e600163d-bbdd-421a-ab0e-83bfbac6495a] VM Paused (Lifecycle Event)#033[00m
Dec  6 02:55:19 np0005548731 nova_compute[232433]: 2025-12-06 07:55:19.972 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: e600163d-bbdd-421a-ab0e-83bfbac6495a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:55:19 np0005548731 nova_compute[232433]: 2025-12-06 07:55:19.975 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: e600163d-bbdd-421a-ab0e-83bfbac6495a] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  6 02:55:19 np0005548731 nova_compute[232433]: 2025-12-06 07:55:19.994 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: e600163d-bbdd-421a-ab0e-83bfbac6495a] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  6 02:55:20 np0005548731 podman[314823]: 2025-12-06 07:55:20.133740853 +0000 UTC m=+0.041789173 container create 4566be7e21d3f25dec456bfe0c9839131b505b547799df11dd1467e7d14689a4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a7f183ec-224d-47f8-a012-9bcde922bafb, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec  6 02:55:20 np0005548731 systemd[1]: Started libpod-conmon-4566be7e21d3f25dec456bfe0c9839131b505b547799df11dd1467e7d14689a4.scope.
Dec  6 02:55:20 np0005548731 systemd[1]: Started libcrun container.
Dec  6 02:55:20 np0005548731 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a15df466c62e630ed3b8de91686da145103cb40d4075a0eeffba2e20d704b5a8/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec  6 02:55:20 np0005548731 podman[314823]: 2025-12-06 07:55:20.11277437 +0000 UTC m=+0.020822720 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec  6 02:55:20 np0005548731 podman[314823]: 2025-12-06 07:55:20.210146723 +0000 UTC m=+0.118195073 container init 4566be7e21d3f25dec456bfe0c9839131b505b547799df11dd1467e7d14689a4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a7f183ec-224d-47f8-a012-9bcde922bafb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Dec  6 02:55:20 np0005548731 podman[314823]: 2025-12-06 07:55:20.214993671 +0000 UTC m=+0.123042001 container start 4566be7e21d3f25dec456bfe0c9839131b505b547799df11dd1467e7d14689a4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a7f183ec-224d-47f8-a012-9bcde922bafb, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2)
Dec  6 02:55:20 np0005548731 neutron-haproxy-ovnmeta-a7f183ec-224d-47f8-a012-9bcde922bafb[314838]: [NOTICE]   (314842) : New worker (314844) forked
Dec  6 02:55:20 np0005548731 neutron-haproxy-ovnmeta-a7f183ec-224d-47f8-a012-9bcde922bafb[314838]: [NOTICE]   (314842) : Loading success.
Dec  6 02:55:20 np0005548731 nova_compute[232433]: 2025-12-06 07:55:20.300 232437 DEBUG nova.network.neutron [req-81245fa5-155f-4fa5-8ac7-432da97d1fee req-7750415b-1488-40e8-9f85-0263057711d9 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: e600163d-bbdd-421a-ab0e-83bfbac6495a] Updated VIF entry in instance network info cache for port 06fe0ba2-d0c0-406d-bcb5-0d6f95c9848c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  6 02:55:20 np0005548731 nova_compute[232433]: 2025-12-06 07:55:20.301 232437 DEBUG nova.network.neutron [req-81245fa5-155f-4fa5-8ac7-432da97d1fee req-7750415b-1488-40e8-9f85-0263057711d9 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: e600163d-bbdd-421a-ab0e-83bfbac6495a] Updating instance_info_cache with network_info: [{"id": "06fe0ba2-d0c0-406d-bcb5-0d6f95c9848c", "address": "fa:16:3e:ab:c9:69", "network": {"id": "a7f183ec-224d-47f8-a012-9bcde922bafb", "bridge": "br-int", "label": "tempest-TestServerBasicOps-1200474505-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b02ca2bf61d48d99724e6446cfd3524", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap06fe0ba2-d0", "ovs_interfaceid": "06fe0ba2-d0c0-406d-bcb5-0d6f95c9848c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 02:55:20 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:55:20 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:55:20 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:55:20.301 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:55:20 np0005548731 nova_compute[232433]: 2025-12-06 07:55:20.343 232437 DEBUG oslo_concurrency.lockutils [req-81245fa5-155f-4fa5-8ac7-432da97d1fee req-7750415b-1488-40e8-9f85-0263057711d9 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-e600163d-bbdd-421a-ab0e-83bfbac6495a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 02:55:20 np0005548731 nova_compute[232433]: 2025-12-06 07:55:20.439 232437 DEBUG nova.compute.manager [req-d4268b93-1034-4f8c-ab0f-12977d43e237 req-cfd968e9-9a12-453e-86d4-7d40d5f6491c 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: e600163d-bbdd-421a-ab0e-83bfbac6495a] Received event network-vif-plugged-06fe0ba2-d0c0-406d-bcb5-0d6f95c9848c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:55:20 np0005548731 nova_compute[232433]: 2025-12-06 07:55:20.441 232437 DEBUG oslo_concurrency.lockutils [req-d4268b93-1034-4f8c-ab0f-12977d43e237 req-cfd968e9-9a12-453e-86d4-7d40d5f6491c 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "e600163d-bbdd-421a-ab0e-83bfbac6495a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:55:20 np0005548731 nova_compute[232433]: 2025-12-06 07:55:20.442 232437 DEBUG oslo_concurrency.lockutils [req-d4268b93-1034-4f8c-ab0f-12977d43e237 req-cfd968e9-9a12-453e-86d4-7d40d5f6491c 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "e600163d-bbdd-421a-ab0e-83bfbac6495a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:55:20 np0005548731 nova_compute[232433]: 2025-12-06 07:55:20.442 232437 DEBUG oslo_concurrency.lockutils [req-d4268b93-1034-4f8c-ab0f-12977d43e237 req-cfd968e9-9a12-453e-86d4-7d40d5f6491c 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "e600163d-bbdd-421a-ab0e-83bfbac6495a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:55:20 np0005548731 nova_compute[232433]: 2025-12-06 07:55:20.442 232437 DEBUG nova.compute.manager [req-d4268b93-1034-4f8c-ab0f-12977d43e237 req-cfd968e9-9a12-453e-86d4-7d40d5f6491c 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: e600163d-bbdd-421a-ab0e-83bfbac6495a] Processing event network-vif-plugged-06fe0ba2-d0c0-406d-bcb5-0d6f95c9848c _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Dec  6 02:55:20 np0005548731 nova_compute[232433]: 2025-12-06 07:55:20.443 232437 DEBUG nova.compute.manager [None req-ee23107a-4d8e-4faa-81e5-2d23dd5cd06e 383c99c13fb24b039259ed88ae2f32c0 2b02ca2bf61d48d99724e6446cfd3524 - - default default] [instance: e600163d-bbdd-421a-ab0e-83bfbac6495a] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec  6 02:55:20 np0005548731 nova_compute[232433]: 2025-12-06 07:55:20.447 232437 DEBUG nova.virt.libvirt.driver [None req-ee23107a-4d8e-4faa-81e5-2d23dd5cd06e 383c99c13fb24b039259ed88ae2f32c0 2b02ca2bf61d48d99724e6446cfd3524 - - default default] [instance: e600163d-bbdd-421a-ab0e-83bfbac6495a] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Dec  6 02:55:20 np0005548731 nova_compute[232433]: 2025-12-06 07:55:20.447 232437 DEBUG nova.virt.driver [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Emitting event <LifecycleEvent: 1765007720.447465, e600163d-bbdd-421a-ab0e-83bfbac6495a => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 02:55:20 np0005548731 nova_compute[232433]: 2025-12-06 07:55:20.447 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: e600163d-bbdd-421a-ab0e-83bfbac6495a] VM Resumed (Lifecycle Event)#033[00m
Dec  6 02:55:20 np0005548731 nova_compute[232433]: 2025-12-06 07:55:20.451 232437 INFO nova.virt.libvirt.driver [-] [instance: e600163d-bbdd-421a-ab0e-83bfbac6495a] Instance spawned successfully.#033[00m
Dec  6 02:55:20 np0005548731 nova_compute[232433]: 2025-12-06 07:55:20.451 232437 DEBUG nova.virt.libvirt.driver [None req-ee23107a-4d8e-4faa-81e5-2d23dd5cd06e 383c99c13fb24b039259ed88ae2f32c0 2b02ca2bf61d48d99724e6446cfd3524 - - default default] [instance: e600163d-bbdd-421a-ab0e-83bfbac6495a] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Dec  6 02:55:20 np0005548731 nova_compute[232433]: 2025-12-06 07:55:20.472 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: e600163d-bbdd-421a-ab0e-83bfbac6495a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:55:20 np0005548731 nova_compute[232433]: 2025-12-06 07:55:20.478 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: e600163d-bbdd-421a-ab0e-83bfbac6495a] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  6 02:55:20 np0005548731 nova_compute[232433]: 2025-12-06 07:55:20.482 232437 DEBUG nova.virt.libvirt.driver [None req-ee23107a-4d8e-4faa-81e5-2d23dd5cd06e 383c99c13fb24b039259ed88ae2f32c0 2b02ca2bf61d48d99724e6446cfd3524 - - default default] [instance: e600163d-bbdd-421a-ab0e-83bfbac6495a] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 02:55:20 np0005548731 nova_compute[232433]: 2025-12-06 07:55:20.482 232437 DEBUG nova.virt.libvirt.driver [None req-ee23107a-4d8e-4faa-81e5-2d23dd5cd06e 383c99c13fb24b039259ed88ae2f32c0 2b02ca2bf61d48d99724e6446cfd3524 - - default default] [instance: e600163d-bbdd-421a-ab0e-83bfbac6495a] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 02:55:20 np0005548731 nova_compute[232433]: 2025-12-06 07:55:20.483 232437 DEBUG nova.virt.libvirt.driver [None req-ee23107a-4d8e-4faa-81e5-2d23dd5cd06e 383c99c13fb24b039259ed88ae2f32c0 2b02ca2bf61d48d99724e6446cfd3524 - - default default] [instance: e600163d-bbdd-421a-ab0e-83bfbac6495a] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 02:55:20 np0005548731 nova_compute[232433]: 2025-12-06 07:55:20.483 232437 DEBUG nova.virt.libvirt.driver [None req-ee23107a-4d8e-4faa-81e5-2d23dd5cd06e 383c99c13fb24b039259ed88ae2f32c0 2b02ca2bf61d48d99724e6446cfd3524 - - default default] [instance: e600163d-bbdd-421a-ab0e-83bfbac6495a] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 02:55:20 np0005548731 nova_compute[232433]: 2025-12-06 07:55:20.483 232437 DEBUG nova.virt.libvirt.driver [None req-ee23107a-4d8e-4faa-81e5-2d23dd5cd06e 383c99c13fb24b039259ed88ae2f32c0 2b02ca2bf61d48d99724e6446cfd3524 - - default default] [instance: e600163d-bbdd-421a-ab0e-83bfbac6495a] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 02:55:20 np0005548731 nova_compute[232433]: 2025-12-06 07:55:20.484 232437 DEBUG nova.virt.libvirt.driver [None req-ee23107a-4d8e-4faa-81e5-2d23dd5cd06e 383c99c13fb24b039259ed88ae2f32c0 2b02ca2bf61d48d99724e6446cfd3524 - - default default] [instance: e600163d-bbdd-421a-ab0e-83bfbac6495a] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 02:55:20 np0005548731 nova_compute[232433]: 2025-12-06 07:55:20.510 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: e600163d-bbdd-421a-ab0e-83bfbac6495a] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  6 02:55:20 np0005548731 nova_compute[232433]: 2025-12-06 07:55:20.764 232437 INFO nova.compute.manager [None req-ee23107a-4d8e-4faa-81e5-2d23dd5cd06e 383c99c13fb24b039259ed88ae2f32c0 2b02ca2bf61d48d99724e6446cfd3524 - - default default] [instance: e600163d-bbdd-421a-ab0e-83bfbac6495a] Took 8.58 seconds to spawn the instance on the hypervisor.#033[00m
Dec  6 02:55:20 np0005548731 nova_compute[232433]: 2025-12-06 07:55:20.768 232437 DEBUG nova.compute.manager [None req-ee23107a-4d8e-4faa-81e5-2d23dd5cd06e 383c99c13fb24b039259ed88ae2f32c0 2b02ca2bf61d48d99724e6446cfd3524 - - default default] [instance: e600163d-bbdd-421a-ab0e-83bfbac6495a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:55:20 np0005548731 nova_compute[232433]: 2025-12-06 07:55:20.866 232437 INFO nova.compute.manager [None req-ee23107a-4d8e-4faa-81e5-2d23dd5cd06e 383c99c13fb24b039259ed88ae2f32c0 2b02ca2bf61d48d99724e6446cfd3524 - - default default] [instance: e600163d-bbdd-421a-ab0e-83bfbac6495a] Took 9.72 seconds to build instance.#033[00m
Dec  6 02:55:20 np0005548731 nova_compute[232433]: 2025-12-06 07:55:20.897 232437 DEBUG oslo_concurrency.lockutils [None req-ee23107a-4d8e-4faa-81e5-2d23dd5cd06e 383c99c13fb24b039259ed88ae2f32c0 2b02ca2bf61d48d99724e6446cfd3524 - - default default] Lock "e600163d-bbdd-421a-ab0e-83bfbac6495a" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.816s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:55:21 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:55:21 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:55:21 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:55:21.114 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:55:22 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:55:22 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:55:22 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:55:22.303 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:55:22 np0005548731 nova_compute[232433]: 2025-12-06 07:55:22.562 232437 DEBUG nova.compute.manager [req-b363c3f2-2229-454a-aeab-50df53d961b0 req-63ad3858-a79c-429c-aa61-2c2b9027a488 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: e600163d-bbdd-421a-ab0e-83bfbac6495a] Received event network-vif-plugged-06fe0ba2-d0c0-406d-bcb5-0d6f95c9848c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:55:22 np0005548731 nova_compute[232433]: 2025-12-06 07:55:22.562 232437 DEBUG oslo_concurrency.lockutils [req-b363c3f2-2229-454a-aeab-50df53d961b0 req-63ad3858-a79c-429c-aa61-2c2b9027a488 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "e600163d-bbdd-421a-ab0e-83bfbac6495a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:55:22 np0005548731 nova_compute[232433]: 2025-12-06 07:55:22.563 232437 DEBUG oslo_concurrency.lockutils [req-b363c3f2-2229-454a-aeab-50df53d961b0 req-63ad3858-a79c-429c-aa61-2c2b9027a488 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "e600163d-bbdd-421a-ab0e-83bfbac6495a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:55:22 np0005548731 nova_compute[232433]: 2025-12-06 07:55:22.563 232437 DEBUG oslo_concurrency.lockutils [req-b363c3f2-2229-454a-aeab-50df53d961b0 req-63ad3858-a79c-429c-aa61-2c2b9027a488 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "e600163d-bbdd-421a-ab0e-83bfbac6495a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:55:22 np0005548731 nova_compute[232433]: 2025-12-06 07:55:22.563 232437 DEBUG nova.compute.manager [req-b363c3f2-2229-454a-aeab-50df53d961b0 req-63ad3858-a79c-429c-aa61-2c2b9027a488 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: e600163d-bbdd-421a-ab0e-83bfbac6495a] No waiting events found dispatching network-vif-plugged-06fe0ba2-d0c0-406d-bcb5-0d6f95c9848c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 02:55:22 np0005548731 nova_compute[232433]: 2025-12-06 07:55:22.564 232437 WARNING nova.compute.manager [req-b363c3f2-2229-454a-aeab-50df53d961b0 req-63ad3858-a79c-429c-aa61-2c2b9027a488 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: e600163d-bbdd-421a-ab0e-83bfbac6495a] Received unexpected event network-vif-plugged-06fe0ba2-d0c0-406d-bcb5-0d6f95c9848c for instance with vm_state active and task_state None.#033[00m
Dec  6 02:55:23 np0005548731 nova_compute[232433]: 2025-12-06 07:55:23.111 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:55:23 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:55:23 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:55:23 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:55:23.115 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:55:23 np0005548731 nova_compute[232433]: 2025-12-06 07:55:23.330 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:55:23 np0005548731 NetworkManager[49182]: <info>  [1765007723.3307] manager: (patch-br-int-to-provnet-9e78c1a1-68f4-477a-abaa-13a98bde06e5): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/398)
Dec  6 02:55:23 np0005548731 NetworkManager[49182]: <info>  [1765007723.3318] manager: (patch-provnet-9e78c1a1-68f4-477a-abaa-13a98bde06e5-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/399)
Dec  6 02:55:23 np0005548731 nova_compute[232433]: 2025-12-06 07:55:23.536 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:55:23 np0005548731 ovn_controller[133927]: 2025-12-06T07:55:23Z|00869|binding|INFO|Releasing lport da468cbe-02bf-4465-aa94-57670ea2997c from this chassis (sb_readonly=0)
Dec  6 02:55:23 np0005548731 nova_compute[232433]: 2025-12-06 07:55:23.557 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:55:23 np0005548731 nova_compute[232433]: 2025-12-06 07:55:23.660 232437 DEBUG nova.compute.manager [req-8b178097-dc33-4ce3-a5a0-643cbf8cff4b req-39bf49ef-aa94-44be-abab-5dc80ff52839 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: e600163d-bbdd-421a-ab0e-83bfbac6495a] Received event network-changed-06fe0ba2-d0c0-406d-bcb5-0d6f95c9848c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:55:23 np0005548731 nova_compute[232433]: 2025-12-06 07:55:23.661 232437 DEBUG nova.compute.manager [req-8b178097-dc33-4ce3-a5a0-643cbf8cff4b req-39bf49ef-aa94-44be-abab-5dc80ff52839 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: e600163d-bbdd-421a-ab0e-83bfbac6495a] Refreshing instance network info cache due to event network-changed-06fe0ba2-d0c0-406d-bcb5-0d6f95c9848c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  6 02:55:23 np0005548731 nova_compute[232433]: 2025-12-06 07:55:23.661 232437 DEBUG oslo_concurrency.lockutils [req-8b178097-dc33-4ce3-a5a0-643cbf8cff4b req-39bf49ef-aa94-44be-abab-5dc80ff52839 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-e600163d-bbdd-421a-ab0e-83bfbac6495a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 02:55:23 np0005548731 nova_compute[232433]: 2025-12-06 07:55:23.661 232437 DEBUG oslo_concurrency.lockutils [req-8b178097-dc33-4ce3-a5a0-643cbf8cff4b req-39bf49ef-aa94-44be-abab-5dc80ff52839 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-e600163d-bbdd-421a-ab0e-83bfbac6495a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 02:55:23 np0005548731 nova_compute[232433]: 2025-12-06 07:55:23.662 232437 DEBUG nova.network.neutron [req-8b178097-dc33-4ce3-a5a0-643cbf8cff4b req-39bf49ef-aa94-44be-abab-5dc80ff52839 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: e600163d-bbdd-421a-ab0e-83bfbac6495a] Refreshing network info cache for port 06fe0ba2-d0c0-406d-bcb5-0d6f95c9848c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  6 02:55:24 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:55:24 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:55:24 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:55:24 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:55:24.305 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:55:24 np0005548731 nova_compute[232433]: 2025-12-06 07:55:24.548 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:55:24 np0005548731 nova_compute[232433]: 2025-12-06 07:55:24.938 232437 DEBUG nova.network.neutron [req-8b178097-dc33-4ce3-a5a0-643cbf8cff4b req-39bf49ef-aa94-44be-abab-5dc80ff52839 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: e600163d-bbdd-421a-ab0e-83bfbac6495a] Updated VIF entry in instance network info cache for port 06fe0ba2-d0c0-406d-bcb5-0d6f95c9848c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  6 02:55:24 np0005548731 nova_compute[232433]: 2025-12-06 07:55:24.939 232437 DEBUG nova.network.neutron [req-8b178097-dc33-4ce3-a5a0-643cbf8cff4b req-39bf49ef-aa94-44be-abab-5dc80ff52839 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: e600163d-bbdd-421a-ab0e-83bfbac6495a] Updating instance_info_cache with network_info: [{"id": "06fe0ba2-d0c0-406d-bcb5-0d6f95c9848c", "address": "fa:16:3e:ab:c9:69", "network": {"id": "a7f183ec-224d-47f8-a012-9bcde922bafb", "bridge": "br-int", "label": "tempest-TestServerBasicOps-1200474505-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.235", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b02ca2bf61d48d99724e6446cfd3524", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap06fe0ba2-d0", "ovs_interfaceid": "06fe0ba2-d0c0-406d-bcb5-0d6f95c9848c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 02:55:24 np0005548731 nova_compute[232433]: 2025-12-06 07:55:24.981 232437 DEBUG oslo_concurrency.lockutils [req-8b178097-dc33-4ce3-a5a0-643cbf8cff4b req-39bf49ef-aa94-44be-abab-5dc80ff52839 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-e600163d-bbdd-421a-ab0e-83bfbac6495a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 02:55:25 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:55:25 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 02:55:25 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:55:25.117 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 02:55:26 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:55:26 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:55:26 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:55:26.309 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:55:26 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e400 e400: 3 total, 3 up, 3 in
Dec  6 02:55:27 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:55:27 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:55:27 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:55:27.118 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:55:28 np0005548731 nova_compute[232433]: 2025-12-06 07:55:28.114 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:55:28 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:55:28 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 02:55:28 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:55:28.312 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 02:55:29 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:55:29 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:55:29 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:55:29.121 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:55:29 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e400 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:55:29 np0005548731 nova_compute[232433]: 2025-12-06 07:55:29.550 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:55:29 np0005548731 podman[314910]: 2025-12-06 07:55:29.898376989 +0000 UTC m=+0.058949174 container health_status 26c2ce9bdb83c6db5ccad7f5b2a7cb4e9354ab0235ad2f942ea42c8feda2142b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  6 02:55:29 np0005548731 podman[314912]: 2025-12-06 07:55:29.900527521 +0000 UTC m=+0.059624520 container health_status ae4fd08cdec31d6ae2a6b91ffff7deb6ca5a8375cb52ee4b685cc6f25796824e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec  6 02:55:29 np0005548731 podman[314911]: 2025-12-06 07:55:29.929366217 +0000 UTC m=+0.089233945 container health_status a118c3becf56cfbcae208065771ebf10a65162bb7b96389971293e534823fb78 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec  6 02:55:30 np0005548731 nova_compute[232433]: 2025-12-06 07:55:30.100 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:55:30 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:55:30 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 02:55:30 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:55:30.314 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 02:55:31 np0005548731 nova_compute[232433]: 2025-12-06 07:55:31.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:55:31 np0005548731 nova_compute[232433]: 2025-12-06 07:55:31.105 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  6 02:55:31 np0005548731 nova_compute[232433]: 2025-12-06 07:55:31.105 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  6 02:55:31 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:55:31 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:55:31 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:55:31.123 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:55:31 np0005548731 nova_compute[232433]: 2025-12-06 07:55:31.923 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Acquiring lock "refresh_cache-e600163d-bbdd-421a-ab0e-83bfbac6495a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 02:55:31 np0005548731 nova_compute[232433]: 2025-12-06 07:55:31.923 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Acquired lock "refresh_cache-e600163d-bbdd-421a-ab0e-83bfbac6495a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 02:55:31 np0005548731 nova_compute[232433]: 2025-12-06 07:55:31.924 232437 DEBUG nova.network.neutron [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] [instance: e600163d-bbdd-421a-ab0e-83bfbac6495a] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Dec  6 02:55:31 np0005548731 nova_compute[232433]: 2025-12-06 07:55:31.924 232437 DEBUG nova.objects.instance [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lazy-loading 'info_cache' on Instance uuid e600163d-bbdd-421a-ab0e-83bfbac6495a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:55:32 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:55:32 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:55:32 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:55:32.317 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:55:33 np0005548731 nova_compute[232433]: 2025-12-06 07:55:33.116 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:55:33 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:55:33 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:55:33 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:55:33.124 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:55:33 np0005548731 nova_compute[232433]: 2025-12-06 07:55:33.838 232437 DEBUG nova.network.neutron [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] [instance: e600163d-bbdd-421a-ab0e-83bfbac6495a] Updating instance_info_cache with network_info: [{"id": "06fe0ba2-d0c0-406d-bcb5-0d6f95c9848c", "address": "fa:16:3e:ab:c9:69", "network": {"id": "a7f183ec-224d-47f8-a012-9bcde922bafb", "bridge": "br-int", "label": "tempest-TestServerBasicOps-1200474505-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.235", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b02ca2bf61d48d99724e6446cfd3524", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap06fe0ba2-d0", "ovs_interfaceid": "06fe0ba2-d0c0-406d-bcb5-0d6f95c9848c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 02:55:34 np0005548731 nova_compute[232433]: 2025-12-06 07:55:34.091 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Releasing lock "refresh_cache-e600163d-bbdd-421a-ab0e-83bfbac6495a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 02:55:34 np0005548731 nova_compute[232433]: 2025-12-06 07:55:34.091 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] [instance: e600163d-bbdd-421a-ab0e-83bfbac6495a] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Dec  6 02:55:34 np0005548731 nova_compute[232433]: 2025-12-06 07:55:34.092 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:55:34 np0005548731 nova_compute[232433]: 2025-12-06 07:55:34.092 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:55:34 np0005548731 nova_compute[232433]: 2025-12-06 07:55:34.092 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:55:34 np0005548731 nova_compute[232433]: 2025-12-06 07:55:34.093 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  6 02:55:34 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e400 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:55:34 np0005548731 ovn_controller[133927]: 2025-12-06T07:55:34Z|00114|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:ab:c9:69 10.100.0.14
Dec  6 02:55:34 np0005548731 ovn_controller[133927]: 2025-12-06T07:55:34Z|00115|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:ab:c9:69 10.100.0.14
Dec  6 02:55:34 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:55:34 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:55:34 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:55:34.320 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:55:34 np0005548731 nova_compute[232433]: 2025-12-06 07:55:34.551 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:55:35 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:55:35 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:55:35 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:55:35.127 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:55:36 np0005548731 nova_compute[232433]: 2025-12-06 07:55:36.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:55:36 np0005548731 nova_compute[232433]: 2025-12-06 07:55:36.126 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:55:36 np0005548731 nova_compute[232433]: 2025-12-06 07:55:36.126 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:55:36 np0005548731 nova_compute[232433]: 2025-12-06 07:55:36.126 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:55:36 np0005548731 nova_compute[232433]: 2025-12-06 07:55:36.127 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  6 02:55:36 np0005548731 nova_compute[232433]: 2025-12-06 07:55:36.127 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:55:36 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:55:36 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:55:36 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:55:36.324 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:55:36 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 02:55:36 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1621257542' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 02:55:36 np0005548731 nova_compute[232433]: 2025-12-06 07:55:36.572 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.445s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:55:36 np0005548731 nova_compute[232433]: 2025-12-06 07:55:36.638 232437 DEBUG nova.virt.libvirt.driver [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] skipping disk for instance-000000ae as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  6 02:55:36 np0005548731 nova_compute[232433]: 2025-12-06 07:55:36.639 232437 DEBUG nova.virt.libvirt.driver [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] skipping disk for instance-000000ae as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  6 02:55:36 np0005548731 nova_compute[232433]: 2025-12-06 07:55:36.799 232437 WARNING nova.virt.libvirt.driver [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  6 02:55:36 np0005548731 nova_compute[232433]: 2025-12-06 07:55:36.800 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4049MB free_disk=20.904495239257812GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  6 02:55:36 np0005548731 nova_compute[232433]: 2025-12-06 07:55:36.800 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:55:36 np0005548731 nova_compute[232433]: 2025-12-06 07:55:36.801 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:55:36 np0005548731 nova_compute[232433]: 2025-12-06 07:55:36.877 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Instance e600163d-bbdd-421a-ab0e-83bfbac6495a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  6 02:55:36 np0005548731 nova_compute[232433]: 2025-12-06 07:55:36.878 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  6 02:55:36 np0005548731 nova_compute[232433]: 2025-12-06 07:55:36.878 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  6 02:55:36 np0005548731 nova_compute[232433]: 2025-12-06 07:55:36.897 232437 DEBUG nova.scheduler.client.report [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Refreshing inventories for resource provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Dec  6 02:55:36 np0005548731 nova_compute[232433]: 2025-12-06 07:55:36.913 232437 DEBUG nova.scheduler.client.report [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Updating ProviderTree inventory for provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Dec  6 02:55:36 np0005548731 nova_compute[232433]: 2025-12-06 07:55:36.914 232437 DEBUG nova.compute.provider_tree [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Updating inventory in ProviderTree for provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Dec  6 02:55:36 np0005548731 nova_compute[232433]: 2025-12-06 07:55:36.931 232437 DEBUG nova.scheduler.client.report [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Refreshing aggregate associations for resource provider 6d00757a-082f-486d-ae84-869a2ba2e6e7, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Dec  6 02:55:36 np0005548731 nova_compute[232433]: 2025-12-06 07:55:36.950 232437 DEBUG nova.scheduler.client.report [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Refreshing trait associations for resource provider 6d00757a-082f-486d-ae84-869a2ba2e6e7, traits: COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_STORAGE_BUS_USB,COMPUTE_ACCELERATORS,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_TRUSTED_CERTS,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_SSE,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_SSE2,HW_CPU_X86_SSE41,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_STORAGE_BUS_SATA,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_SECURITY_TPM_1_2,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_VOLUME_EXTEND,COMPUTE_NODE,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_SECURITY_TPM_2_0,COMPUTE_VIOMMU_MODEL_AUTO,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_SSE42,COMPUTE_STORAGE_BUS_FDC,COMPUTE_RESCUE_BFV,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_IMAGE_TYPE_ARI _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Dec  6 02:55:36 np0005548731 nova_compute[232433]: 2025-12-06 07:55:36.987 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:55:37 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:55:37 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:55:37 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:55:37.129 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:55:37 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 02:55:37 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3361064064' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 02:55:37 np0005548731 nova_compute[232433]: 2025-12-06 07:55:37.441 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.454s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:55:37 np0005548731 nova_compute[232433]: 2025-12-06 07:55:37.447 232437 DEBUG nova.compute.provider_tree [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Inventory has not changed in ProviderTree for provider: 6d00757a-082f-486d-ae84-869a2ba2e6e7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  6 02:55:37 np0005548731 nova_compute[232433]: 2025-12-06 07:55:37.467 232437 DEBUG nova.scheduler.client.report [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Inventory has not changed for provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  6 02:55:37 np0005548731 nova_compute[232433]: 2025-12-06 07:55:37.490 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  6 02:55:37 np0005548731 nova_compute[232433]: 2025-12-06 07:55:37.490 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.689s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:55:38 np0005548731 nova_compute[232433]: 2025-12-06 07:55:38.118 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:55:38 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:55:38 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:55:38 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:55:38.327 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:55:39 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:55:39 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:55:39 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:55:39.132 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:55:39 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e400 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:55:39 np0005548731 nova_compute[232433]: 2025-12-06 07:55:39.491 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:55:39 np0005548731 nova_compute[232433]: 2025-12-06 07:55:39.491 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:55:39 np0005548731 nova_compute[232433]: 2025-12-06 07:55:39.553 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:55:40 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:55:40 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:55:40 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:55:40.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:55:40 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:55:40.878 144074 DEBUG eventlet.wsgi.server [-] (144074) accepted '' server /usr/lib/python3.9/site-packages/eventlet/wsgi.py:1004#033[00m
Dec  6 02:55:40 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:55:40.879 144074 DEBUG neutron.agent.ovn.metadata.server [-] Request: GET /latest/meta-data/public-ipv4 HTTP/1.0#015
Dec  6 02:55:40 np0005548731 ovn_metadata_agent[143960]: Accept: */*#015
Dec  6 02:55:40 np0005548731 ovn_metadata_agent[143960]: Connection: close#015
Dec  6 02:55:40 np0005548731 ovn_metadata_agent[143960]: Content-Type: text/plain#015
Dec  6 02:55:40 np0005548731 ovn_metadata_agent[143960]: Host: 169.254.169.254#015
Dec  6 02:55:40 np0005548731 ovn_metadata_agent[143960]: User-Agent: curl/7.84.0#015
Dec  6 02:55:40 np0005548731 ovn_metadata_agent[143960]: X-Forwarded-For: 10.100.0.14#015
Dec  6 02:55:40 np0005548731 ovn_metadata_agent[143960]: X-Ovn-Network-Id: a7f183ec-224d-47f8-a012-9bcde922bafb __call__ /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:82#033[00m
Dec  6 02:55:41 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:55:41 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:55:41 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:55:41.134 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:55:41 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:55:41.947 144074 DEBUG neutron.agent.ovn.metadata.server [-] <Response [200]> _proxy_request /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:161#033[00m
Dec  6 02:55:41 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:55:41.947 144074 INFO eventlet.wsgi.server [-] 10.100.0.14,<local> "GET /latest/meta-data/public-ipv4 HTTP/1.1" status: 200  len: 151 time: 1.0682318#033[00m
Dec  6 02:55:41 np0005548731 haproxy-metadata-proxy-a7f183ec-224d-47f8-a012-9bcde922bafb[314844]: 10.100.0.14:59092 [06/Dec/2025:07:55:40.876] listener listener/metadata 0/0/0/1070/1070 200 135 - - ---- 1/1/0/0/0 0/0 "GET /latest/meta-data/public-ipv4 HTTP/1.1"
Dec  6 02:55:42 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:55:42.017 144074 DEBUG eventlet.wsgi.server [-] (144074) accepted '' server /usr/lib/python3.9/site-packages/eventlet/wsgi.py:1004#033[00m
Dec  6 02:55:42 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:55:42.017 144074 DEBUG neutron.agent.ovn.metadata.server [-] Request: POST /openstack/2013-10-17/password HTTP/1.0#015
Dec  6 02:55:42 np0005548731 ovn_metadata_agent[143960]: Accept: */*#015
Dec  6 02:55:42 np0005548731 ovn_metadata_agent[143960]: Connection: close#015
Dec  6 02:55:42 np0005548731 ovn_metadata_agent[143960]: Content-Length: 100#015
Dec  6 02:55:42 np0005548731 ovn_metadata_agent[143960]: Content-Type: application/x-www-form-urlencoded#015
Dec  6 02:55:42 np0005548731 ovn_metadata_agent[143960]: Host: 169.254.169.254#015
Dec  6 02:55:42 np0005548731 ovn_metadata_agent[143960]: User-Agent: curl/7.84.0#015
Dec  6 02:55:42 np0005548731 ovn_metadata_agent[143960]: X-Forwarded-For: 10.100.0.14#015
Dec  6 02:55:42 np0005548731 ovn_metadata_agent[143960]: X-Ovn-Network-Id: a7f183ec-224d-47f8-a012-9bcde922bafb#015
Dec  6 02:55:42 np0005548731 ovn_metadata_agent[143960]: #015
Dec  6 02:55:42 np0005548731 ovn_metadata_agent[143960]: testtesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttest __call__ /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:82#033[00m
Dec  6 02:55:42 np0005548731 nova_compute[232433]: 2025-12-06 07:55:42.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:55:42 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:55:42.146 144074 DEBUG neutron.agent.ovn.metadata.server [-] <Response [200]> _proxy_request /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:161#033[00m
Dec  6 02:55:42 np0005548731 haproxy-metadata-proxy-a7f183ec-224d-47f8-a012-9bcde922bafb[314844]: 10.100.0.14:59094 [06/Dec/2025:07:55:42.015] listener listener/metadata 0/0/0/131/131 200 118 - - ---- 1/1/0/0/0 0/0 "POST /openstack/2013-10-17/password HTTP/1.1"
Dec  6 02:55:42 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:55:42.147 144074 INFO eventlet.wsgi.server [-] 10.100.0.14,<local> "POST /openstack/2013-10-17/password HTTP/1.1" status: 200  len: 134 time: 0.1293876#033[00m
Dec  6 02:55:42 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:55:42 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:55:42 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:55:42.333 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:55:43 np0005548731 nova_compute[232433]: 2025-12-06 07:55:43.121 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:55:43 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:55:43 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:55:43 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:55:43.135 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:55:43 np0005548731 nova_compute[232433]: 2025-12-06 07:55:43.887 232437 DEBUG oslo_concurrency.lockutils [None req-978c260d-7a5f-4cd2-bea3-5519aa443acf 383c99c13fb24b039259ed88ae2f32c0 2b02ca2bf61d48d99724e6446cfd3524 - - default default] Acquiring lock "e600163d-bbdd-421a-ab0e-83bfbac6495a" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:55:43 np0005548731 nova_compute[232433]: 2025-12-06 07:55:43.888 232437 DEBUG oslo_concurrency.lockutils [None req-978c260d-7a5f-4cd2-bea3-5519aa443acf 383c99c13fb24b039259ed88ae2f32c0 2b02ca2bf61d48d99724e6446cfd3524 - - default default] Lock "e600163d-bbdd-421a-ab0e-83bfbac6495a" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:55:43 np0005548731 nova_compute[232433]: 2025-12-06 07:55:43.889 232437 DEBUG oslo_concurrency.lockutils [None req-978c260d-7a5f-4cd2-bea3-5519aa443acf 383c99c13fb24b039259ed88ae2f32c0 2b02ca2bf61d48d99724e6446cfd3524 - - default default] Acquiring lock "e600163d-bbdd-421a-ab0e-83bfbac6495a-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:55:43 np0005548731 nova_compute[232433]: 2025-12-06 07:55:43.889 232437 DEBUG oslo_concurrency.lockutils [None req-978c260d-7a5f-4cd2-bea3-5519aa443acf 383c99c13fb24b039259ed88ae2f32c0 2b02ca2bf61d48d99724e6446cfd3524 - - default default] Lock "e600163d-bbdd-421a-ab0e-83bfbac6495a-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:55:43 np0005548731 nova_compute[232433]: 2025-12-06 07:55:43.890 232437 DEBUG oslo_concurrency.lockutils [None req-978c260d-7a5f-4cd2-bea3-5519aa443acf 383c99c13fb24b039259ed88ae2f32c0 2b02ca2bf61d48d99724e6446cfd3524 - - default default] Lock "e600163d-bbdd-421a-ab0e-83bfbac6495a-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:55:43 np0005548731 nova_compute[232433]: 2025-12-06 07:55:43.892 232437 INFO nova.compute.manager [None req-978c260d-7a5f-4cd2-bea3-5519aa443acf 383c99c13fb24b039259ed88ae2f32c0 2b02ca2bf61d48d99724e6446cfd3524 - - default default] [instance: e600163d-bbdd-421a-ab0e-83bfbac6495a] Terminating instance#033[00m
Dec  6 02:55:43 np0005548731 nova_compute[232433]: 2025-12-06 07:55:43.894 232437 DEBUG nova.compute.manager [None req-978c260d-7a5f-4cd2-bea3-5519aa443acf 383c99c13fb24b039259ed88ae2f32c0 2b02ca2bf61d48d99724e6446cfd3524 - - default default] [instance: e600163d-bbdd-421a-ab0e-83bfbac6495a] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec  6 02:55:44 np0005548731 kernel: tap06fe0ba2-d0 (unregistering): left promiscuous mode
Dec  6 02:55:44 np0005548731 NetworkManager[49182]: <info>  [1765007744.1503] device (tap06fe0ba2-d0): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec  6 02:55:44 np0005548731 ovn_controller[133927]: 2025-12-06T07:55:44Z|00870|binding|INFO|Releasing lport 06fe0ba2-d0c0-406d-bcb5-0d6f95c9848c from this chassis (sb_readonly=0)
Dec  6 02:55:44 np0005548731 ovn_controller[133927]: 2025-12-06T07:55:44Z|00871|binding|INFO|Setting lport 06fe0ba2-d0c0-406d-bcb5-0d6f95c9848c down in Southbound
Dec  6 02:55:44 np0005548731 nova_compute[232433]: 2025-12-06 07:55:44.157 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:55:44 np0005548731 ovn_controller[133927]: 2025-12-06T07:55:44Z|00872|binding|INFO|Removing iface tap06fe0ba2-d0 ovn-installed in OVS
Dec  6 02:55:44 np0005548731 nova_compute[232433]: 2025-12-06 07:55:44.160 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:55:44 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:55:44.170 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ab:c9:69 10.100.0.14'], port_security=['fa:16:3e:ab:c9:69 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'e600163d-bbdd-421a-ab0e-83bfbac6495a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a7f183ec-224d-47f8-a012-9bcde922bafb', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2b02ca2bf61d48d99724e6446cfd3524', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'cb7353fc-9d60-41d9-a986-6d90d5004536 f35ac537-2359-40f7-b84f-66524cb2482e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com', 'neutron:port_fip': '192.168.122.235'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4cf42949-a96b-4530-989a-41b873f41733, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], logical_port=06fe0ba2-d0c0-406d-bcb5-0d6f95c9848c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 02:55:44 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:55:44.172 143965 INFO neutron.agent.ovn.metadata.agent [-] Port 06fe0ba2-d0c0-406d-bcb5-0d6f95c9848c in datapath a7f183ec-224d-47f8-a012-9bcde922bafb unbound from our chassis#033[00m
Dec  6 02:55:44 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:55:44.174 143965 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network a7f183ec-224d-47f8-a012-9bcde922bafb, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec  6 02:55:44 np0005548731 nova_compute[232433]: 2025-12-06 07:55:44.177 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:55:44 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:55:44.175 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[06157474-5915-4375-b8c1-78f9a3d2eec1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:55:44 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:55:44.176 143965 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-a7f183ec-224d-47f8-a012-9bcde922bafb namespace which is not needed anymore#033[00m
Dec  6 02:55:44 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e400 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:55:44 np0005548731 systemd[1]: machine-qemu\x2d88\x2dinstance\x2d000000ae.scope: Deactivated successfully.
Dec  6 02:55:44 np0005548731 systemd[1]: machine-qemu\x2d88\x2dinstance\x2d000000ae.scope: Consumed 14.195s CPU time.
Dec  6 02:55:44 np0005548731 systemd-machined[195355]: Machine qemu-88-instance-000000ae terminated.
Dec  6 02:55:44 np0005548731 neutron-haproxy-ovnmeta-a7f183ec-224d-47f8-a012-9bcde922bafb[314838]: [NOTICE]   (314842) : haproxy version is 2.8.14-c23fe91
Dec  6 02:55:44 np0005548731 neutron-haproxy-ovnmeta-a7f183ec-224d-47f8-a012-9bcde922bafb[314838]: [NOTICE]   (314842) : path to executable is /usr/sbin/haproxy
Dec  6 02:55:44 np0005548731 neutron-haproxy-ovnmeta-a7f183ec-224d-47f8-a012-9bcde922bafb[314838]: [WARNING]  (314842) : Exiting Master process...
Dec  6 02:55:44 np0005548731 neutron-haproxy-ovnmeta-a7f183ec-224d-47f8-a012-9bcde922bafb[314838]: [ALERT]    (314842) : Current worker (314844) exited with code 143 (Terminated)
Dec  6 02:55:44 np0005548731 neutron-haproxy-ovnmeta-a7f183ec-224d-47f8-a012-9bcde922bafb[314838]: [WARNING]  (314842) : All workers exited. Exiting... (0)
Dec  6 02:55:44 np0005548731 systemd[1]: libpod-4566be7e21d3f25dec456bfe0c9839131b505b547799df11dd1467e7d14689a4.scope: Deactivated successfully.
Dec  6 02:55:44 np0005548731 nova_compute[232433]: 2025-12-06 07:55:44.331 232437 INFO nova.virt.libvirt.driver [-] [instance: e600163d-bbdd-421a-ab0e-83bfbac6495a] Instance destroyed successfully.#033[00m
Dec  6 02:55:44 np0005548731 nova_compute[232433]: 2025-12-06 07:55:44.332 232437 DEBUG nova.objects.instance [None req-978c260d-7a5f-4cd2-bea3-5519aa443acf 383c99c13fb24b039259ed88ae2f32c0 2b02ca2bf61d48d99724e6446cfd3524 - - default default] Lazy-loading 'resources' on Instance uuid e600163d-bbdd-421a-ab0e-83bfbac6495a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:55:44 np0005548731 podman[315050]: 2025-12-06 07:55:44.333147468 +0000 UTC m=+0.048161339 container died 4566be7e21d3f25dec456bfe0c9839131b505b547799df11dd1467e7d14689a4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a7f183ec-224d-47f8-a012-9bcde922bafb, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3)
Dec  6 02:55:44 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:55:44 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:55:44 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:55:44.336 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:55:44 np0005548731 nova_compute[232433]: 2025-12-06 07:55:44.351 232437 DEBUG nova.virt.libvirt.vif [None req-978c260d-7a5f-4cd2-bea3-5519aa443acf 383c99c13fb24b039259ed88ae2f32c0 2b02ca2bf61d48d99724e6446cfd3524 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T07:55:09Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestServerBasicOps-server-1079945337',display_name='tempest-TestServerBasicOps-server-1079945337',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-testserverbasicops-server-1079945337',id=174,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCFn8IHrAu1capFTEammcpgozBC0c7wVgyvOhNY1pdFvVuqQz1ZJigEw9y/ImJqmbdoP6LMd9JwjHpW/YGqtrVjbCEXCA4lrId1JNg6dNRmMx1JWzWMSDqrEVBgkgpHkSA==',key_name='tempest-TestServerBasicOps-1564693908',keypairs=<?>,launch_index=0,launched_at=2025-12-06T07:55:20Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={meta1='data1',meta2='data2',metaN='dataN'},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='2b02ca2bf61d48d99724e6446cfd3524',ramdisk_id='',reservation_id='r-7ox8eliw',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestServerBasicOps-882087666',owner_user_name='tempest-TestServerBasicOps-882087666-project-member',password_0='testtesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttest',password_1='',password_2='',password_3=''},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T07:55:42Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='383c99c13fb24b039259ed88ae2f32c0',uuid=e600163d-bbdd-421a-ab0e-83bfbac6495a,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "06fe0ba2-d0c0-406d-bcb5-0d6f95c9848c", "address": 
"fa:16:3e:ab:c9:69", "network": {"id": "a7f183ec-224d-47f8-a012-9bcde922bafb", "bridge": "br-int", "label": "tempest-TestServerBasicOps-1200474505-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.235", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b02ca2bf61d48d99724e6446cfd3524", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap06fe0ba2-d0", "ovs_interfaceid": "06fe0ba2-d0c0-406d-bcb5-0d6f95c9848c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec  6 02:55:44 np0005548731 nova_compute[232433]: 2025-12-06 07:55:44.352 232437 DEBUG nova.network.os_vif_util [None req-978c260d-7a5f-4cd2-bea3-5519aa443acf 383c99c13fb24b039259ed88ae2f32c0 2b02ca2bf61d48d99724e6446cfd3524 - - default default] Converting VIF {"id": "06fe0ba2-d0c0-406d-bcb5-0d6f95c9848c", "address": "fa:16:3e:ab:c9:69", "network": {"id": "a7f183ec-224d-47f8-a012-9bcde922bafb", "bridge": "br-int", "label": "tempest-TestServerBasicOps-1200474505-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.235", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b02ca2bf61d48d99724e6446cfd3524", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap06fe0ba2-d0", "ovs_interfaceid": "06fe0ba2-d0c0-406d-bcb5-0d6f95c9848c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 02:55:44 np0005548731 nova_compute[232433]: 2025-12-06 07:55:44.352 232437 DEBUG nova.network.os_vif_util [None req-978c260d-7a5f-4cd2-bea3-5519aa443acf 383c99c13fb24b039259ed88ae2f32c0 2b02ca2bf61d48d99724e6446cfd3524 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:ab:c9:69,bridge_name='br-int',has_traffic_filtering=True,id=06fe0ba2-d0c0-406d-bcb5-0d6f95c9848c,network=Network(a7f183ec-224d-47f8-a012-9bcde922bafb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap06fe0ba2-d0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 02:55:44 np0005548731 nova_compute[232433]: 2025-12-06 07:55:44.353 232437 DEBUG os_vif [None req-978c260d-7a5f-4cd2-bea3-5519aa443acf 383c99c13fb24b039259ed88ae2f32c0 2b02ca2bf61d48d99724e6446cfd3524 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:ab:c9:69,bridge_name='br-int',has_traffic_filtering=True,id=06fe0ba2-d0c0-406d-bcb5-0d6f95c9848c,network=Network(a7f183ec-224d-47f8-a012-9bcde922bafb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap06fe0ba2-d0') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec  6 02:55:44 np0005548731 nova_compute[232433]: 2025-12-06 07:55:44.354 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:55:44 np0005548731 nova_compute[232433]: 2025-12-06 07:55:44.354 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap06fe0ba2-d0, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:55:44 np0005548731 nova_compute[232433]: 2025-12-06 07:55:44.357 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:55:44 np0005548731 nova_compute[232433]: 2025-12-06 07:55:44.358 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  6 02:55:44 np0005548731 nova_compute[232433]: 2025-12-06 07:55:44.361 232437 INFO os_vif [None req-978c260d-7a5f-4cd2-bea3-5519aa443acf 383c99c13fb24b039259ed88ae2f32c0 2b02ca2bf61d48d99724e6446cfd3524 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:ab:c9:69,bridge_name='br-int',has_traffic_filtering=True,id=06fe0ba2-d0c0-406d-bcb5-0d6f95c9848c,network=Network(a7f183ec-224d-47f8-a012-9bcde922bafb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap06fe0ba2-d0')#033[00m
Dec  6 02:55:44 np0005548731 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-4566be7e21d3f25dec456bfe0c9839131b505b547799df11dd1467e7d14689a4-userdata-shm.mount: Deactivated successfully.
Dec  6 02:55:44 np0005548731 systemd[1]: var-lib-containers-storage-overlay-a15df466c62e630ed3b8de91686da145103cb40d4075a0eeffba2e20d704b5a8-merged.mount: Deactivated successfully.
Dec  6 02:55:44 np0005548731 podman[315050]: 2025-12-06 07:55:44.373873964 +0000 UTC m=+0.088887805 container cleanup 4566be7e21d3f25dec456bfe0c9839131b505b547799df11dd1467e7d14689a4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a7f183ec-224d-47f8-a012-9bcde922bafb, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Dec  6 02:55:44 np0005548731 systemd[1]: libpod-conmon-4566be7e21d3f25dec456bfe0c9839131b505b547799df11dd1467e7d14689a4.scope: Deactivated successfully.
Dec  6 02:55:44 np0005548731 podman[315104]: 2025-12-06 07:55:44.431184516 +0000 UTC m=+0.037606551 container remove 4566be7e21d3f25dec456bfe0c9839131b505b547799df11dd1467e7d14689a4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a7f183ec-224d-47f8-a012-9bcde922bafb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true)
Dec  6 02:55:44 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:55:44.436 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[0e81e7b1-7d51-4575-8272-23f54101e861]: (4, ('Sat Dec  6 07:55:44 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-a7f183ec-224d-47f8-a012-9bcde922bafb (4566be7e21d3f25dec456bfe0c9839131b505b547799df11dd1467e7d14689a4)\n4566be7e21d3f25dec456bfe0c9839131b505b547799df11dd1467e7d14689a4\nSat Dec  6 07:55:44 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-a7f183ec-224d-47f8-a012-9bcde922bafb (4566be7e21d3f25dec456bfe0c9839131b505b547799df11dd1467e7d14689a4)\n4566be7e21d3f25dec456bfe0c9839131b505b547799df11dd1467e7d14689a4\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:55:44 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:55:44.438 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[26d227bd-cf1c-40e5-9f84-81d7c4ea0e7a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:55:44 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:55:44.439 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa7f183ec-20, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:55:44 np0005548731 nova_compute[232433]: 2025-12-06 07:55:44.440 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:55:44 np0005548731 kernel: tapa7f183ec-20: left promiscuous mode
Dec  6 02:55:44 np0005548731 nova_compute[232433]: 2025-12-06 07:55:44.491 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:55:44 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:55:44.493 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[c154978b-fd85-4042-ac0f-a8e976f96779]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:55:44 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:55:44.507 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[aa74950e-085a-4347-91af-6b3c9094a9fa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:55:44 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:55:44.508 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[2f5ce8eb-d16d-485d-944c-cb43c28ad0a4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:55:44 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:55:44.523 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[41a48447-d968-4884-8182-c10db3745e53]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 798155, 'reachable_time': 43882, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 315122, 'error': None, 'target': 'ovnmeta-a7f183ec-224d-47f8-a012-9bcde922bafb', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:55:44 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:55:44.526 144079 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-a7f183ec-224d-47f8-a012-9bcde922bafb deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Dec  6 02:55:44 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:55:44.526 144079 DEBUG oslo.privsep.daemon [-] privsep: reply[6b2c65cc-f8e7-4c9e-976b-e625913d42d9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:55:44 np0005548731 systemd[1]: run-netns-ovnmeta\x2da7f183ec\x2d224d\x2d47f8\x2da012\x2d9bcde922bafb.mount: Deactivated successfully.
Dec  6 02:55:44 np0005548731 nova_compute[232433]: 2025-12-06 07:55:44.555 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:55:44 np0005548731 nova_compute[232433]: 2025-12-06 07:55:44.572 232437 DEBUG nova.compute.manager [req-d7847b74-d31d-427d-b919-885f6a5454fc req-43869fce-4f79-4d23-b89a-044c23da837f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: e600163d-bbdd-421a-ab0e-83bfbac6495a] Received event network-vif-unplugged-06fe0ba2-d0c0-406d-bcb5-0d6f95c9848c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:55:44 np0005548731 nova_compute[232433]: 2025-12-06 07:55:44.572 232437 DEBUG oslo_concurrency.lockutils [req-d7847b74-d31d-427d-b919-885f6a5454fc req-43869fce-4f79-4d23-b89a-044c23da837f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "e600163d-bbdd-421a-ab0e-83bfbac6495a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:55:44 np0005548731 nova_compute[232433]: 2025-12-06 07:55:44.572 232437 DEBUG oslo_concurrency.lockutils [req-d7847b74-d31d-427d-b919-885f6a5454fc req-43869fce-4f79-4d23-b89a-044c23da837f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "e600163d-bbdd-421a-ab0e-83bfbac6495a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:55:44 np0005548731 nova_compute[232433]: 2025-12-06 07:55:44.572 232437 DEBUG oslo_concurrency.lockutils [req-d7847b74-d31d-427d-b919-885f6a5454fc req-43869fce-4f79-4d23-b89a-044c23da837f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "e600163d-bbdd-421a-ab0e-83bfbac6495a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:55:44 np0005548731 nova_compute[232433]: 2025-12-06 07:55:44.572 232437 DEBUG nova.compute.manager [req-d7847b74-d31d-427d-b919-885f6a5454fc req-43869fce-4f79-4d23-b89a-044c23da837f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: e600163d-bbdd-421a-ab0e-83bfbac6495a] No waiting events found dispatching network-vif-unplugged-06fe0ba2-d0c0-406d-bcb5-0d6f95c9848c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 02:55:44 np0005548731 nova_compute[232433]: 2025-12-06 07:55:44.573 232437 DEBUG nova.compute.manager [req-d7847b74-d31d-427d-b919-885f6a5454fc req-43869fce-4f79-4d23-b89a-044c23da837f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: e600163d-bbdd-421a-ab0e-83bfbac6495a] Received event network-vif-unplugged-06fe0ba2-d0c0-406d-bcb5-0d6f95c9848c for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Dec  6 02:55:44 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:55:44.703 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=71, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:ec:b3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '32:72:e7:89:e0:7d'}, ipsec=False) old=SB_Global(nb_cfg=70) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 02:55:44 np0005548731 nova_compute[232433]: 2025-12-06 07:55:44.704 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:55:44 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:55:44.708 143965 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec  6 02:55:44 np0005548731 nova_compute[232433]: 2025-12-06 07:55:44.810 232437 INFO nova.virt.libvirt.driver [None req-978c260d-7a5f-4cd2-bea3-5519aa443acf 383c99c13fb24b039259ed88ae2f32c0 2b02ca2bf61d48d99724e6446cfd3524 - - default default] [instance: e600163d-bbdd-421a-ab0e-83bfbac6495a] Deleting instance files /var/lib/nova/instances/e600163d-bbdd-421a-ab0e-83bfbac6495a_del#033[00m
Dec  6 02:55:44 np0005548731 nova_compute[232433]: 2025-12-06 07:55:44.810 232437 INFO nova.virt.libvirt.driver [None req-978c260d-7a5f-4cd2-bea3-5519aa443acf 383c99c13fb24b039259ed88ae2f32c0 2b02ca2bf61d48d99724e6446cfd3524 - - default default] [instance: e600163d-bbdd-421a-ab0e-83bfbac6495a] Deletion of /var/lib/nova/instances/e600163d-bbdd-421a-ab0e-83bfbac6495a_del complete#033[00m
Dec  6 02:55:44 np0005548731 nova_compute[232433]: 2025-12-06 07:55:44.877 232437 INFO nova.compute.manager [None req-978c260d-7a5f-4cd2-bea3-5519aa443acf 383c99c13fb24b039259ed88ae2f32c0 2b02ca2bf61d48d99724e6446cfd3524 - - default default] [instance: e600163d-bbdd-421a-ab0e-83bfbac6495a] Took 0.98 seconds to destroy the instance on the hypervisor.#033[00m
Dec  6 02:55:44 np0005548731 nova_compute[232433]: 2025-12-06 07:55:44.878 232437 DEBUG oslo.service.loopingcall [None req-978c260d-7a5f-4cd2-bea3-5519aa443acf 383c99c13fb24b039259ed88ae2f32c0 2b02ca2bf61d48d99724e6446cfd3524 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Dec  6 02:55:44 np0005548731 nova_compute[232433]: 2025-12-06 07:55:44.878 232437 DEBUG nova.compute.manager [-] [instance: e600163d-bbdd-421a-ab0e-83bfbac6495a] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Dec  6 02:55:44 np0005548731 nova_compute[232433]: 2025-12-06 07:55:44.879 232437 DEBUG nova.network.neutron [-] [instance: e600163d-bbdd-421a-ab0e-83bfbac6495a] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Dec  6 02:55:45 np0005548731 nova_compute[232433]: 2025-12-06 07:55:45.102 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:55:45 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:55:45 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:55:45 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:55:45.137 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:55:45 np0005548731 nova_compute[232433]: 2025-12-06 07:55:45.551 232437 DEBUG oslo_concurrency.lockutils [None req-65605dda-d708-4fb9-a7e8-a63d7c88cfde e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] Acquiring lock "21c591a0-5eed-4aa8-a68b-59a616b16e2b" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  6 02:55:45 np0005548731 nova_compute[232433]: 2025-12-06 07:55:45.551 232437 DEBUG oslo_concurrency.lockutils [None req-65605dda-d708-4fb9-a7e8-a63d7c88cfde e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] Lock "21c591a0-5eed-4aa8-a68b-59a616b16e2b" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  6 02:55:45 np0005548731 nova_compute[232433]: 2025-12-06 07:55:45.614 232437 DEBUG nova.compute.manager [None req-65605dda-d708-4fb9-a7e8-a63d7c88cfde e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] [instance: 21c591a0-5eed-4aa8-a68b-59a616b16e2b] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec  6 02:55:45 np0005548731 nova_compute[232433]: 2025-12-06 07:55:45.763 232437 DEBUG nova.network.neutron [-] [instance: e600163d-bbdd-421a-ab0e-83bfbac6495a] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec  6 02:55:45 np0005548731 nova_compute[232433]: 2025-12-06 07:55:45.855 232437 DEBUG oslo_concurrency.lockutils [None req-65605dda-d708-4fb9-a7e8-a63d7c88cfde e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  6 02:55:45 np0005548731 nova_compute[232433]: 2025-12-06 07:55:45.855 232437 DEBUG oslo_concurrency.lockutils [None req-65605dda-d708-4fb9-a7e8-a63d7c88cfde e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  6 02:55:45 np0005548731 nova_compute[232433]: 2025-12-06 07:55:45.862 232437 DEBUG nova.virt.hardware [None req-65605dda-d708-4fb9-a7e8-a63d7c88cfde e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec  6 02:55:45 np0005548731 nova_compute[232433]: 2025-12-06 07:55:45.862 232437 INFO nova.compute.claims [None req-65605dda-d708-4fb9-a7e8-a63d7c88cfde e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] [instance: 21c591a0-5eed-4aa8-a68b-59a616b16e2b] Claim successful on node compute-2.ctlplane.example.com
Dec  6 02:55:45 np0005548731 nova_compute[232433]: 2025-12-06 07:55:45.895 232437 INFO nova.compute.manager [-] [instance: e600163d-bbdd-421a-ab0e-83bfbac6495a] Took 1.02 seconds to deallocate network for instance.
Dec  6 02:55:46 np0005548731 nova_compute[232433]: 2025-12-06 07:55:46.081 232437 DEBUG oslo_concurrency.lockutils [None req-978c260d-7a5f-4cd2-bea3-5519aa443acf 383c99c13fb24b039259ed88ae2f32c0 2b02ca2bf61d48d99724e6446cfd3524 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  6 02:55:46 np0005548731 nova_compute[232433]: 2025-12-06 07:55:46.212 232437 DEBUG oslo_concurrency.processutils [None req-65605dda-d708-4fb9-a7e8-a63d7c88cfde e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  6 02:55:46 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:55:46 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:55:46 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:55:46.339 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:55:46 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 02:55:46 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/80759059' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 02:55:46 np0005548731 nova_compute[232433]: 2025-12-06 07:55:46.641 232437 DEBUG oslo_concurrency.processutils [None req-65605dda-d708-4fb9-a7e8-a63d7c88cfde e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.430s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  6 02:55:46 np0005548731 nova_compute[232433]: 2025-12-06 07:55:46.647 232437 DEBUG nova.compute.provider_tree [None req-65605dda-d708-4fb9-a7e8-a63d7c88cfde e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] Inventory has not changed in ProviderTree for provider: 6d00757a-082f-486d-ae84-869a2ba2e6e7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec  6 02:55:46 np0005548731 nova_compute[232433]: 2025-12-06 07:55:46.719 232437 DEBUG nova.scheduler.client.report [None req-65605dda-d708-4fb9-a7e8-a63d7c88cfde e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] Inventory has not changed for provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec  6 02:55:46 np0005548731 nova_compute[232433]: 2025-12-06 07:55:46.786 232437 DEBUG nova.compute.manager [req-052cea80-1591-4c7b-a126-b639a3b4fb62 req-deb87cf1-04f1-4a0f-acee-3ea6855e41fb 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: e600163d-bbdd-421a-ab0e-83bfbac6495a] Received event network-vif-plugged-06fe0ba2-d0c0-406d-bcb5-0d6f95c9848c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec  6 02:55:46 np0005548731 nova_compute[232433]: 2025-12-06 07:55:46.789 232437 DEBUG oslo_concurrency.lockutils [req-052cea80-1591-4c7b-a126-b639a3b4fb62 req-deb87cf1-04f1-4a0f-acee-3ea6855e41fb 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "e600163d-bbdd-421a-ab0e-83bfbac6495a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  6 02:55:46 np0005548731 nova_compute[232433]: 2025-12-06 07:55:46.789 232437 DEBUG oslo_concurrency.lockutils [req-052cea80-1591-4c7b-a126-b639a3b4fb62 req-deb87cf1-04f1-4a0f-acee-3ea6855e41fb 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "e600163d-bbdd-421a-ab0e-83bfbac6495a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  6 02:55:46 np0005548731 nova_compute[232433]: 2025-12-06 07:55:46.789 232437 DEBUG oslo_concurrency.lockutils [req-052cea80-1591-4c7b-a126-b639a3b4fb62 req-deb87cf1-04f1-4a0f-acee-3ea6855e41fb 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "e600163d-bbdd-421a-ab0e-83bfbac6495a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  6 02:55:46 np0005548731 nova_compute[232433]: 2025-12-06 07:55:46.789 232437 DEBUG nova.compute.manager [req-052cea80-1591-4c7b-a126-b639a3b4fb62 req-deb87cf1-04f1-4a0f-acee-3ea6855e41fb 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: e600163d-bbdd-421a-ab0e-83bfbac6495a] No waiting events found dispatching network-vif-plugged-06fe0ba2-d0c0-406d-bcb5-0d6f95c9848c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec  6 02:55:46 np0005548731 nova_compute[232433]: 2025-12-06 07:55:46.790 232437 WARNING nova.compute.manager [req-052cea80-1591-4c7b-a126-b639a3b4fb62 req-deb87cf1-04f1-4a0f-acee-3ea6855e41fb 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: e600163d-bbdd-421a-ab0e-83bfbac6495a] Received unexpected event network-vif-plugged-06fe0ba2-d0c0-406d-bcb5-0d6f95c9848c for instance with vm_state deleted and task_state None.
Dec  6 02:55:46 np0005548731 nova_compute[232433]: 2025-12-06 07:55:46.790 232437 DEBUG nova.compute.manager [req-052cea80-1591-4c7b-a126-b639a3b4fb62 req-deb87cf1-04f1-4a0f-acee-3ea6855e41fb 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: e600163d-bbdd-421a-ab0e-83bfbac6495a] Received event network-vif-deleted-06fe0ba2-d0c0-406d-bcb5-0d6f95c9848c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec  6 02:55:46 np0005548731 nova_compute[232433]: 2025-12-06 07:55:46.810 232437 DEBUG oslo_concurrency.lockutils [None req-65605dda-d708-4fb9-a7e8-a63d7c88cfde e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.955s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  6 02:55:46 np0005548731 nova_compute[232433]: 2025-12-06 07:55:46.810 232437 DEBUG nova.compute.manager [None req-65605dda-d708-4fb9-a7e8-a63d7c88cfde e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] [instance: 21c591a0-5eed-4aa8-a68b-59a616b16e2b] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec  6 02:55:46 np0005548731 nova_compute[232433]: 2025-12-06 07:55:46.813 232437 DEBUG oslo_concurrency.lockutils [None req-978c260d-7a5f-4cd2-bea3-5519aa443acf 383c99c13fb24b039259ed88ae2f32c0 2b02ca2bf61d48d99724e6446cfd3524 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.732s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  6 02:55:46 np0005548731 nova_compute[232433]: 2025-12-06 07:55:46.878 232437 DEBUG oslo_concurrency.processutils [None req-978c260d-7a5f-4cd2-bea3-5519aa443acf 383c99c13fb24b039259ed88ae2f32c0 2b02ca2bf61d48d99724e6446cfd3524 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  6 02:55:46 np0005548731 nova_compute[232433]: 2025-12-06 07:55:46.984 232437 DEBUG nova.compute.manager [None req-65605dda-d708-4fb9-a7e8-a63d7c88cfde e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] [instance: 21c591a0-5eed-4aa8-a68b-59a616b16e2b] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec  6 02:55:46 np0005548731 nova_compute[232433]: 2025-12-06 07:55:46.984 232437 DEBUG nova.network.neutron [None req-65605dda-d708-4fb9-a7e8-a63d7c88cfde e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] [instance: 21c591a0-5eed-4aa8-a68b-59a616b16e2b] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec  6 02:55:47 np0005548731 nova_compute[232433]: 2025-12-06 07:55:47.037 232437 INFO nova.virt.libvirt.driver [None req-65605dda-d708-4fb9-a7e8-a63d7c88cfde e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] [instance: 21c591a0-5eed-4aa8-a68b-59a616b16e2b] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec  6 02:55:47 np0005548731 nova_compute[232433]: 2025-12-06 07:55:47.076 232437 DEBUG nova.compute.manager [None req-65605dda-d708-4fb9-a7e8-a63d7c88cfde e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] [instance: 21c591a0-5eed-4aa8-a68b-59a616b16e2b] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec  6 02:55:47 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:55:47 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:55:47 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:55:47.140 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:55:47 np0005548731 nova_compute[232433]: 2025-12-06 07:55:47.148 232437 INFO nova.virt.block_device [None req-65605dda-d708-4fb9-a7e8-a63d7c88cfde e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] [instance: 21c591a0-5eed-4aa8-a68b-59a616b16e2b] Booting with volume bbe9824b-1a7c-4ec6-8a4f-0f69d7c2bc92 at /dev/vda
Dec  6 02:55:47 np0005548731 nova_compute[232433]: 2025-12-06 07:55:47.250 232437 DEBUG nova.policy [None req-65605dda-d708-4fb9-a7e8-a63d7c88cfde e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'e685a049c8a74aa8aea831fbdaf2acf8', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '6164fee998c94b71a37886fe42b4c56c', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec  6 02:55:47 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 02:55:47 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1452116217' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 02:55:47 np0005548731 nova_compute[232433]: 2025-12-06 07:55:47.294 232437 DEBUG oslo_concurrency.processutils [None req-978c260d-7a5f-4cd2-bea3-5519aa443acf 383c99c13fb24b039259ed88ae2f32c0 2b02ca2bf61d48d99724e6446cfd3524 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.416s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  6 02:55:47 np0005548731 nova_compute[232433]: 2025-12-06 07:55:47.299 232437 DEBUG nova.compute.provider_tree [None req-978c260d-7a5f-4cd2-bea3-5519aa443acf 383c99c13fb24b039259ed88ae2f32c0 2b02ca2bf61d48d99724e6446cfd3524 - - default default] Inventory has not changed in ProviderTree for provider: 6d00757a-082f-486d-ae84-869a2ba2e6e7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec  6 02:55:47 np0005548731 nova_compute[232433]: 2025-12-06 07:55:47.348 232437 DEBUG nova.scheduler.client.report [None req-978c260d-7a5f-4cd2-bea3-5519aa443acf 383c99c13fb24b039259ed88ae2f32c0 2b02ca2bf61d48d99724e6446cfd3524 - - default default] Inventory has not changed for provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec  6 02:55:47 np0005548731 nova_compute[232433]: 2025-12-06 07:55:47.358 232437 DEBUG os_brick.utils [None req-65605dda-d708-4fb9-a7e8-a63d7c88cfde e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.102', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-2.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Dec  6 02:55:47 np0005548731 nova_compute[232433]: 2025-12-06 07:55:47.359 237736 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  6 02:55:47 np0005548731 nova_compute[232433]: 2025-12-06 07:55:47.369 237736 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  6 02:55:47 np0005548731 nova_compute[232433]: 2025-12-06 07:55:47.369 237736 DEBUG oslo.privsep.daemon [-] privsep: reply[330911be-fccf-48d8-a372-d27f2e28cc28]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  6 02:55:47 np0005548731 nova_compute[232433]: 2025-12-06 07:55:47.370 237736 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  6 02:55:47 np0005548731 nova_compute[232433]: 2025-12-06 07:55:47.377 237736 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.007s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  6 02:55:47 np0005548731 nova_compute[232433]: 2025-12-06 07:55:47.377 237736 DEBUG oslo.privsep.daemon [-] privsep: reply[ebec54d6-70d1-4379-b1de-9ee67f51c0c6]: (4, ('InitiatorName=iqn.1994-05.com.redhat:63778d5959f0', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  6 02:55:47 np0005548731 nova_compute[232433]: 2025-12-06 07:55:47.378 237736 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  6 02:55:47 np0005548731 nova_compute[232433]: 2025-12-06 07:55:47.385 237736 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.007s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  6 02:55:47 np0005548731 nova_compute[232433]: 2025-12-06 07:55:47.385 237736 DEBUG oslo.privsep.daemon [-] privsep: reply[97b3408b-5e7f-410e-96c2-399029866262]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  6 02:55:47 np0005548731 nova_compute[232433]: 2025-12-06 07:55:47.386 237736 DEBUG oslo.privsep.daemon [-] privsep: reply[9210e378-40bf-4b78-892e-68dfe2a85988]: (4, 'ec2492e2-e0d1-43d3-b74f-744a8272a0e6') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  6 02:55:47 np0005548731 nova_compute[232433]: 2025-12-06 07:55:47.387 232437 DEBUG oslo_concurrency.processutils [None req-65605dda-d708-4fb9-a7e8-a63d7c88cfde e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  6 02:55:47 np0005548731 nova_compute[232433]: 2025-12-06 07:55:47.411 232437 DEBUG oslo_concurrency.processutils [None req-65605dda-d708-4fb9-a7e8-a63d7c88cfde e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] CMD "nvme version" returned: 0 in 0.024s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  6 02:55:47 np0005548731 nova_compute[232433]: 2025-12-06 07:55:47.413 232437 DEBUG os_brick.initiator.connectors.lightos [None req-65605dda-d708-4fb9-a7e8-a63d7c88cfde e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Dec  6 02:55:47 np0005548731 nova_compute[232433]: 2025-12-06 07:55:47.413 232437 DEBUG os_brick.initiator.connectors.lightos [None req-65605dda-d708-4fb9-a7e8-a63d7c88cfde e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Dec  6 02:55:47 np0005548731 nova_compute[232433]: 2025-12-06 07:55:47.414 232437 DEBUG os_brick.initiator.connectors.lightos [None req-65605dda-d708-4fb9-a7e8-a63d7c88cfde e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:bf3e0a14-a5f8-4123-aa26-e7cad37b879a dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Dec  6 02:55:47 np0005548731 nova_compute[232433]: 2025-12-06 07:55:47.414 232437 DEBUG os_brick.utils [None req-65605dda-d708-4fb9-a7e8-a63d7c88cfde e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] <== get_connector_properties: return (55ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.102', 'host': 'compute-2.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:63778d5959f0', 'do_local_attach': False, 'nvme_hostid': 'bf3e0a14-a5f8-4123-aa26-e7cad37b879a', 'system uuid': 'ec2492e2-e0d1-43d3-b74f-744a8272a0e6', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:bf3e0a14-a5f8-4123-aa26-e7cad37b879a', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Dec  6 02:55:47 np0005548731 nova_compute[232433]: 2025-12-06 07:55:47.414 232437 DEBUG nova.virt.block_device [None req-65605dda-d708-4fb9-a7e8-a63d7c88cfde e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] [instance: 21c591a0-5eed-4aa8-a68b-59a616b16e2b] Updating existing volume attachment record: 49dce9ff-d05f-47b8-be29-7f989b6b3505 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Dec  6 02:55:47 np0005548731 nova_compute[232433]: 2025-12-06 07:55:47.513 232437 DEBUG oslo_concurrency.lockutils [None req-978c260d-7a5f-4cd2-bea3-5519aa443acf 383c99c13fb24b039259ed88ae2f32c0 2b02ca2bf61d48d99724e6446cfd3524 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.701s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  6 02:55:47 np0005548731 nova_compute[232433]: 2025-12-06 07:55:47.563 232437 INFO nova.scheduler.client.report [None req-978c260d-7a5f-4cd2-bea3-5519aa443acf 383c99c13fb24b039259ed88ae2f32c0 2b02ca2bf61d48d99724e6446cfd3524 - - default default] Deleted allocations for instance e600163d-bbdd-421a-ab0e-83bfbac6495a
Dec  6 02:55:47 np0005548731 nova_compute[232433]: 2025-12-06 07:55:47.674 232437 DEBUG oslo_concurrency.lockutils [None req-978c260d-7a5f-4cd2-bea3-5519aa443acf 383c99c13fb24b039259ed88ae2f32c0 2b02ca2bf61d48d99724e6446cfd3524 - - default default] Lock "e600163d-bbdd-421a-ab0e-83bfbac6495a" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.786s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  6 02:55:48 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:55:48 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:55:48 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:55:48.342 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:55:48 np0005548731 nova_compute[232433]: 2025-12-06 07:55:48.367 232437 DEBUG nova.network.neutron [None req-65605dda-d708-4fb9-a7e8-a63d7c88cfde e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] [instance: 21c591a0-5eed-4aa8-a68b-59a616b16e2b] Successfully created port: c76db338-0396-40a2-82b3-0c720d28d2bd _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec  6 02:55:48 np0005548731 nova_compute[232433]: 2025-12-06 07:55:48.944 232437 DEBUG nova.compute.manager [None req-65605dda-d708-4fb9-a7e8-a63d7c88cfde e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] [instance: 21c591a0-5eed-4aa8-a68b-59a616b16e2b] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec  6 02:55:48 np0005548731 nova_compute[232433]: 2025-12-06 07:55:48.945 232437 DEBUG nova.virt.libvirt.driver [None req-65605dda-d708-4fb9-a7e8-a63d7c88cfde e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] [instance: 21c591a0-5eed-4aa8-a68b-59a616b16e2b] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec  6 02:55:48 np0005548731 nova_compute[232433]: 2025-12-06 07:55:48.946 232437 INFO nova.virt.libvirt.driver [None req-65605dda-d708-4fb9-a7e8-a63d7c88cfde e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] [instance: 21c591a0-5eed-4aa8-a68b-59a616b16e2b] Creating image(s)
Dec  6 02:55:48 np0005548731 nova_compute[232433]: 2025-12-06 07:55:48.946 232437 DEBUG nova.virt.libvirt.driver [None req-65605dda-d708-4fb9-a7e8-a63d7c88cfde e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] [instance: 21c591a0-5eed-4aa8-a68b-59a616b16e2b] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859
Dec  6 02:55:48 np0005548731 nova_compute[232433]: 2025-12-06 07:55:48.946 232437 DEBUG nova.virt.libvirt.driver [None req-65605dda-d708-4fb9-a7e8-a63d7c88cfde e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] [instance: 21c591a0-5eed-4aa8-a68b-59a616b16e2b] Ensure instance console log exists: /var/lib/nova/instances/21c591a0-5eed-4aa8-a68b-59a616b16e2b/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec  6 02:55:48 np0005548731 nova_compute[232433]: 2025-12-06 07:55:48.947 232437 DEBUG oslo_concurrency.lockutils [None req-65605dda-d708-4fb9-a7e8-a63d7c88cfde e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  6 02:55:48 np0005548731 nova_compute[232433]: 2025-12-06 07:55:48.947 232437 DEBUG oslo_concurrency.lockutils [None req-65605dda-d708-4fb9-a7e8-a63d7c88cfde e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  6 02:55:48 np0005548731 nova_compute[232433]: 2025-12-06 07:55:48.947 232437 DEBUG oslo_concurrency.lockutils [None req-65605dda-d708-4fb9-a7e8-a63d7c88cfde e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  6 02:55:49 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:55:49 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:55:49 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:55:49.141 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:55:49 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e400 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:55:49 np0005548731 nova_compute[232433]: 2025-12-06 07:55:49.358 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 02:55:49 np0005548731 nova_compute[232433]: 2025-12-06 07:55:49.557 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 02:55:49 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:55:49.709 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=9f96b960-b4f2-40bd-ae99-08121f5e8b78, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '71'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec  6 02:55:50 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  6 02:55:50 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1446130076' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  6 02:55:50 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:55:50 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:55:50 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:55:50.345 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:55:50 np0005548731 nova_compute[232433]: 2025-12-06 07:55:50.875 232437 DEBUG nova.network.neutron [None req-65605dda-d708-4fb9-a7e8-a63d7c88cfde e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] [instance: 21c591a0-5eed-4aa8-a68b-59a616b16e2b] Successfully updated port: c76db338-0396-40a2-82b3-0c720d28d2bd _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec  6 02:55:50 np0005548731 nova_compute[232433]: 2025-12-06 07:55:50.896 232437 DEBUG oslo_concurrency.lockutils [None req-65605dda-d708-4fb9-a7e8-a63d7c88cfde e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] Acquiring lock "refresh_cache-21c591a0-5eed-4aa8-a68b-59a616b16e2b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec  6 02:55:50 np0005548731 nova_compute[232433]: 2025-12-06 07:55:50.897 232437 DEBUG oslo_concurrency.lockutils [None req-65605dda-d708-4fb9-a7e8-a63d7c88cfde e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] Acquired lock "refresh_cache-21c591a0-5eed-4aa8-a68b-59a616b16e2b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec  6 02:55:50 np0005548731 nova_compute[232433]: 2025-12-06 07:55:50.897 232437 DEBUG nova.network.neutron [None req-65605dda-d708-4fb9-a7e8-a63d7c88cfde e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] [instance: 21c591a0-5eed-4aa8-a68b-59a616b16e2b] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec  6 02:55:50 np0005548731 nova_compute[232433]: 2025-12-06 07:55:50.990 232437 DEBUG nova.compute.manager [req-78b22f38-b665-4f0a-b9d0-f3a18dea7b1c req-b05bd42a-7dda-4289-887d-c463d6b83474 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 21c591a0-5eed-4aa8-a68b-59a616b16e2b] Received event network-changed-c76db338-0396-40a2-82b3-0c720d28d2bd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec  6 02:55:50 np0005548731 nova_compute[232433]: 2025-12-06 07:55:50.991 232437 DEBUG nova.compute.manager [req-78b22f38-b665-4f0a-b9d0-f3a18dea7b1c req-b05bd42a-7dda-4289-887d-c463d6b83474 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 21c591a0-5eed-4aa8-a68b-59a616b16e2b] Refreshing instance network info cache due to event network-changed-c76db338-0396-40a2-82b3-0c720d28d2bd. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec  6 02:55:50 np0005548731 nova_compute[232433]: 2025-12-06 07:55:50.991 232437 DEBUG oslo_concurrency.lockutils [req-78b22f38-b665-4f0a-b9d0-f3a18dea7b1c req-b05bd42a-7dda-4289-887d-c463d6b83474 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-21c591a0-5eed-4aa8-a68b-59a616b16e2b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec  6 02:55:51 np0005548731 nova_compute[232433]: 2025-12-06 07:55:51.116 232437 DEBUG nova.network.neutron [None req-65605dda-d708-4fb9-a7e8-a63d7c88cfde e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] [instance: 21c591a0-5eed-4aa8-a68b-59a616b16e2b] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec  6 02:55:51 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:55:51 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:55:51 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:55:51.144 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:55:51 np0005548731 nova_compute[232433]: 2025-12-06 07:55:51.960 232437 DEBUG nova.network.neutron [None req-65605dda-d708-4fb9-a7e8-a63d7c88cfde e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] [instance: 21c591a0-5eed-4aa8-a68b-59a616b16e2b] Updating instance_info_cache with network_info: [{"id": "c76db338-0396-40a2-82b3-0c720d28d2bd", "address": "fa:16:3e:31:62:02", "network": {"id": "a3764201-4b86-4407-84d2-684bd05a44b3", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-2060653314-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6164fee998c94b71a37886fe42b4c56c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc76db338-03", "ovs_interfaceid": "c76db338-0396-40a2-82b3-0c720d28d2bd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 02:55:51 np0005548731 nova_compute[232433]: 2025-12-06 07:55:51.986 232437 DEBUG oslo_concurrency.lockutils [None req-65605dda-d708-4fb9-a7e8-a63d7c88cfde e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] Releasing lock "refresh_cache-21c591a0-5eed-4aa8-a68b-59a616b16e2b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 02:55:51 np0005548731 nova_compute[232433]: 2025-12-06 07:55:51.987 232437 DEBUG nova.compute.manager [None req-65605dda-d708-4fb9-a7e8-a63d7c88cfde e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] [instance: 21c591a0-5eed-4aa8-a68b-59a616b16e2b] Instance network_info: |[{"id": "c76db338-0396-40a2-82b3-0c720d28d2bd", "address": "fa:16:3e:31:62:02", "network": {"id": "a3764201-4b86-4407-84d2-684bd05a44b3", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-2060653314-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6164fee998c94b71a37886fe42b4c56c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc76db338-03", "ovs_interfaceid": "c76db338-0396-40a2-82b3-0c720d28d2bd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Dec  6 02:55:51 np0005548731 nova_compute[232433]: 2025-12-06 07:55:51.987 232437 DEBUG oslo_concurrency.lockutils [req-78b22f38-b665-4f0a-b9d0-f3a18dea7b1c req-b05bd42a-7dda-4289-887d-c463d6b83474 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-21c591a0-5eed-4aa8-a68b-59a616b16e2b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 02:55:51 np0005548731 nova_compute[232433]: 2025-12-06 07:55:51.988 232437 DEBUG nova.network.neutron [req-78b22f38-b665-4f0a-b9d0-f3a18dea7b1c req-b05bd42a-7dda-4289-887d-c463d6b83474 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 21c591a0-5eed-4aa8-a68b-59a616b16e2b] Refreshing network info cache for port c76db338-0396-40a2-82b3-0c720d28d2bd _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  6 02:55:51 np0005548731 nova_compute[232433]: 2025-12-06 07:55:51.991 232437 DEBUG nova.virt.libvirt.driver [None req-65605dda-d708-4fb9-a7e8-a63d7c88cfde e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] [instance: 21c591a0-5eed-4aa8-a68b-59a616b16e2b] Start _get_guest_xml network_info=[{"id": "c76db338-0396-40a2-82b3-0c720d28d2bd", "address": "fa:16:3e:31:62:02", "network": {"id": "a3764201-4b86-4407-84d2-684bd05a44b3", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-2060653314-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6164fee998c94b71a37886fe42b4c56c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc76db338-03", "ovs_interfaceid": "c76db338-0396-40a2-82b3-0c720d28d2bd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None 
block_device_info={'root_device_name': '/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'guest_format': None, 'device_type': 'disk', 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-bbe9824b-1a7c-4ec6-8a4f-0f69d7c2bc92', 'hosts': ['192.168.122.100', '192.168.122.102', '192.168.122.101'], 'ports': ['6789', '6789', '6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': 'bbe9824b-1a7c-4ec6-8a4f-0f69d7c2bc92', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': '21c591a0-5eed-4aa8-a68b-59a616b16e2b', 'attached_at': '', 'detached_at': '', 'volume_id': 'bbe9824b-1a7c-4ec6-8a4f-0f69d7c2bc92', 'serial': 'bbe9824b-1a7c-4ec6-8a4f-0f69d7c2bc92'}, 'disk_bus': 'virtio', 'boot_index': 0, 'delete_on_termination': False, 'mount_device': '/dev/vda', 'attachment_id': '49dce9ff-d05f-47b8-be29-7f989b6b3505', 'volume_type': None}], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec  6 02:55:51 np0005548731 nova_compute[232433]: 2025-12-06 07:55:51.996 232437 WARNING nova.virt.libvirt.driver [None req-65605dda-d708-4fb9-a7e8-a63d7c88cfde e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  6 02:55:52 np0005548731 nova_compute[232433]: 2025-12-06 07:55:52.000 232437 DEBUG nova.virt.libvirt.host [None req-65605dda-d708-4fb9-a7e8-a63d7c88cfde e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec  6 02:55:52 np0005548731 nova_compute[232433]: 2025-12-06 07:55:52.001 232437 DEBUG nova.virt.libvirt.host [None req-65605dda-d708-4fb9-a7e8-a63d7c88cfde e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec  6 02:55:52 np0005548731 nova_compute[232433]: 2025-12-06 07:55:52.009 232437 DEBUG nova.virt.libvirt.host [None req-65605dda-d708-4fb9-a7e8-a63d7c88cfde e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec  6 02:55:52 np0005548731 nova_compute[232433]: 2025-12-06 07:55:52.009 232437 DEBUG nova.virt.libvirt.host [None req-65605dda-d708-4fb9-a7e8-a63d7c88cfde e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec  6 02:55:52 np0005548731 nova_compute[232433]: 2025-12-06 07:55:52.011 232437 DEBUG nova.virt.libvirt.driver [None req-65605dda-d708-4fb9-a7e8-a63d7c88cfde e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec  6 02:55:52 np0005548731 nova_compute[232433]: 2025-12-06 07:55:52.011 232437 DEBUG nova.virt.hardware [None req-65605dda-d708-4fb9-a7e8-a63d7c88cfde e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-06T06:56:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='25848a18-11d9-4f11-80b5-5d005675c76d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec  6 02:55:52 np0005548731 nova_compute[232433]: 2025-12-06 07:55:52.011 232437 DEBUG nova.virt.hardware [None req-65605dda-d708-4fb9-a7e8-a63d7c88cfde e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec  6 02:55:52 np0005548731 nova_compute[232433]: 2025-12-06 07:55:52.012 232437 DEBUG nova.virt.hardware [None req-65605dda-d708-4fb9-a7e8-a63d7c88cfde e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec  6 02:55:52 np0005548731 nova_compute[232433]: 2025-12-06 07:55:52.012 232437 DEBUG nova.virt.hardware [None req-65605dda-d708-4fb9-a7e8-a63d7c88cfde e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec  6 02:55:52 np0005548731 nova_compute[232433]: 2025-12-06 07:55:52.012 232437 DEBUG nova.virt.hardware [None req-65605dda-d708-4fb9-a7e8-a63d7c88cfde e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec  6 02:55:52 np0005548731 nova_compute[232433]: 2025-12-06 07:55:52.012 232437 DEBUG nova.virt.hardware [None req-65605dda-d708-4fb9-a7e8-a63d7c88cfde e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec  6 02:55:52 np0005548731 nova_compute[232433]: 2025-12-06 07:55:52.013 232437 DEBUG nova.virt.hardware [None req-65605dda-d708-4fb9-a7e8-a63d7c88cfde e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec  6 02:55:52 np0005548731 nova_compute[232433]: 2025-12-06 07:55:52.013 232437 DEBUG nova.virt.hardware [None req-65605dda-d708-4fb9-a7e8-a63d7c88cfde e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec  6 02:55:52 np0005548731 nova_compute[232433]: 2025-12-06 07:55:52.013 232437 DEBUG nova.virt.hardware [None req-65605dda-d708-4fb9-a7e8-a63d7c88cfde e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec  6 02:55:52 np0005548731 nova_compute[232433]: 2025-12-06 07:55:52.014 232437 DEBUG nova.virt.hardware [None req-65605dda-d708-4fb9-a7e8-a63d7c88cfde e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec  6 02:55:52 np0005548731 nova_compute[232433]: 2025-12-06 07:55:52.014 232437 DEBUG nova.virt.hardware [None req-65605dda-d708-4fb9-a7e8-a63d7c88cfde e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec  6 02:55:52 np0005548731 nova_compute[232433]: 2025-12-06 07:55:52.047 232437 DEBUG nova.storage.rbd_utils [None req-65605dda-d708-4fb9-a7e8-a63d7c88cfde e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] rbd image 21c591a0-5eed-4aa8-a68b-59a616b16e2b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:55:52 np0005548731 nova_compute[232433]: 2025-12-06 07:55:52.051 232437 DEBUG oslo_concurrency.processutils [None req-65605dda-d708-4fb9-a7e8-a63d7c88cfde e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:55:52 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:55:52 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:55:52 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:55:52.348 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:55:52 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  6 02:55:52 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3864288182' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  6 02:55:52 np0005548731 nova_compute[232433]: 2025-12-06 07:55:52.500 232437 DEBUG oslo_concurrency.processutils [None req-65605dda-d708-4fb9-a7e8-a63d7c88cfde e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.448s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:55:52 np0005548731 nova_compute[232433]: 2025-12-06 07:55:52.522 232437 DEBUG nova.virt.libvirt.vif [None req-65605dda-d708-4fb9-a7e8-a63d7c88cfde e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T07:55:44Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-TestInstancesWithCinderVolumes-server-1553758528',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-testinstanceswithcindervolumes-server-1553758528',id=176,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCMlCJgA181fL+hWV96XYAuaaRjR/DFcxIrENEwwuUSLNLg2Wo/zP2WcPtpxKQuFaV64lRGeBPzRnqkTHdlSql81bpyaGplyAnqRHnVLqVTwCxa7e5Tmw+I0TD65PH3Dpw==',key_name='tempest-TestInstancesWithCinderVolumes-1103529456',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6164fee998c94b71a37886fe42b4c56c',ramdisk_id='',reservation_id='r-228wza3f',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_p
roject_name='tempest-TestInstancesWithCinderVolumes-1429596635',owner_user_name='tempest-TestInstancesWithCinderVolumes-1429596635-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:55:47Z,user_data=None,user_id='e685a049c8a74aa8aea831fbdaf2acf8',uuid=21c591a0-5eed-4aa8-a68b-59a616b16e2b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c76db338-0396-40a2-82b3-0c720d28d2bd", "address": "fa:16:3e:31:62:02", "network": {"id": "a3764201-4b86-4407-84d2-684bd05a44b3", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-2060653314-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6164fee998c94b71a37886fe42b4c56c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc76db338-03", "ovs_interfaceid": "c76db338-0396-40a2-82b3-0c720d28d2bd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec  6 02:55:52 np0005548731 nova_compute[232433]: 2025-12-06 07:55:52.522 232437 DEBUG nova.network.os_vif_util [None req-65605dda-d708-4fb9-a7e8-a63d7c88cfde e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] Converting VIF {"id": "c76db338-0396-40a2-82b3-0c720d28d2bd", "address": "fa:16:3e:31:62:02", "network": {"id": "a3764201-4b86-4407-84d2-684bd05a44b3", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-2060653314-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6164fee998c94b71a37886fe42b4c56c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc76db338-03", "ovs_interfaceid": "c76db338-0396-40a2-82b3-0c720d28d2bd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 02:55:52 np0005548731 nova_compute[232433]: 2025-12-06 07:55:52.523 232437 DEBUG nova.network.os_vif_util [None req-65605dda-d708-4fb9-a7e8-a63d7c88cfde e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:31:62:02,bridge_name='br-int',has_traffic_filtering=True,id=c76db338-0396-40a2-82b3-0c720d28d2bd,network=Network(a3764201-4b86-4407-84d2-684bd05a44b3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc76db338-03') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 02:55:52 np0005548731 nova_compute[232433]: 2025-12-06 07:55:52.524 232437 DEBUG nova.objects.instance [None req-65605dda-d708-4fb9-a7e8-a63d7c88cfde e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] Lazy-loading 'pci_devices' on Instance uuid 21c591a0-5eed-4aa8-a68b-59a616b16e2b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:55:52 np0005548731 nova_compute[232433]: 2025-12-06 07:55:52.541 232437 DEBUG nova.virt.libvirt.driver [None req-65605dda-d708-4fb9-a7e8-a63d7c88cfde e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] [instance: 21c591a0-5eed-4aa8-a68b-59a616b16e2b] End _get_guest_xml xml=<domain type="kvm">
Dec  6 02:55:52 np0005548731 nova_compute[232433]:  <uuid>21c591a0-5eed-4aa8-a68b-59a616b16e2b</uuid>
Dec  6 02:55:52 np0005548731 nova_compute[232433]:  <name>instance-000000b0</name>
Dec  6 02:55:52 np0005548731 nova_compute[232433]:  <memory>131072</memory>
Dec  6 02:55:52 np0005548731 nova_compute[232433]:  <vcpu>1</vcpu>
Dec  6 02:55:52 np0005548731 nova_compute[232433]:  <metadata>
Dec  6 02:55:52 np0005548731 nova_compute[232433]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec  6 02:55:52 np0005548731 nova_compute[232433]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec  6 02:55:52 np0005548731 nova_compute[232433]:      <nova:name>tempest-TestInstancesWithCinderVolumes-server-1553758528</nova:name>
Dec  6 02:55:52 np0005548731 nova_compute[232433]:      <nova:creationTime>2025-12-06 07:55:51</nova:creationTime>
Dec  6 02:55:52 np0005548731 nova_compute[232433]:      <nova:flavor name="m1.nano">
Dec  6 02:55:52 np0005548731 nova_compute[232433]:        <nova:memory>128</nova:memory>
Dec  6 02:55:52 np0005548731 nova_compute[232433]:        <nova:disk>1</nova:disk>
Dec  6 02:55:52 np0005548731 nova_compute[232433]:        <nova:swap>0</nova:swap>
Dec  6 02:55:52 np0005548731 nova_compute[232433]:        <nova:ephemeral>0</nova:ephemeral>
Dec  6 02:55:52 np0005548731 nova_compute[232433]:        <nova:vcpus>1</nova:vcpus>
Dec  6 02:55:52 np0005548731 nova_compute[232433]:      </nova:flavor>
Dec  6 02:55:52 np0005548731 nova_compute[232433]:      <nova:owner>
Dec  6 02:55:52 np0005548731 nova_compute[232433]:        <nova:user uuid="e685a049c8a74aa8aea831fbdaf2acf8">tempest-TestInstancesWithCinderVolumes-1429596635-project-member</nova:user>
Dec  6 02:55:52 np0005548731 nova_compute[232433]:        <nova:project uuid="6164fee998c94b71a37886fe42b4c56c">tempest-TestInstancesWithCinderVolumes-1429596635</nova:project>
Dec  6 02:55:52 np0005548731 nova_compute[232433]:      </nova:owner>
Dec  6 02:55:52 np0005548731 nova_compute[232433]:      <nova:ports>
Dec  6 02:55:52 np0005548731 nova_compute[232433]:        <nova:port uuid="c76db338-0396-40a2-82b3-0c720d28d2bd">
Dec  6 02:55:52 np0005548731 nova_compute[232433]:          <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Dec  6 02:55:52 np0005548731 nova_compute[232433]:        </nova:port>
Dec  6 02:55:52 np0005548731 nova_compute[232433]:      </nova:ports>
Dec  6 02:55:52 np0005548731 nova_compute[232433]:    </nova:instance>
Dec  6 02:55:52 np0005548731 nova_compute[232433]:  </metadata>
Dec  6 02:55:52 np0005548731 nova_compute[232433]:  <sysinfo type="smbios">
Dec  6 02:55:52 np0005548731 nova_compute[232433]:    <system>
Dec  6 02:55:52 np0005548731 nova_compute[232433]:      <entry name="manufacturer">RDO</entry>
Dec  6 02:55:52 np0005548731 nova_compute[232433]:      <entry name="product">OpenStack Compute</entry>
Dec  6 02:55:52 np0005548731 nova_compute[232433]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec  6 02:55:52 np0005548731 nova_compute[232433]:      <entry name="serial">21c591a0-5eed-4aa8-a68b-59a616b16e2b</entry>
Dec  6 02:55:52 np0005548731 nova_compute[232433]:      <entry name="uuid">21c591a0-5eed-4aa8-a68b-59a616b16e2b</entry>
Dec  6 02:55:52 np0005548731 nova_compute[232433]:      <entry name="family">Virtual Machine</entry>
Dec  6 02:55:52 np0005548731 nova_compute[232433]:    </system>
Dec  6 02:55:52 np0005548731 nova_compute[232433]:  </sysinfo>
Dec  6 02:55:52 np0005548731 nova_compute[232433]:  <os>
Dec  6 02:55:52 np0005548731 nova_compute[232433]:    <type arch="x86_64" machine="q35">hvm</type>
Dec  6 02:55:52 np0005548731 nova_compute[232433]:    <boot dev="hd"/>
Dec  6 02:55:52 np0005548731 nova_compute[232433]:    <smbios mode="sysinfo"/>
Dec  6 02:55:52 np0005548731 nova_compute[232433]:  </os>
Dec  6 02:55:52 np0005548731 nova_compute[232433]:  <features>
Dec  6 02:55:52 np0005548731 nova_compute[232433]:    <acpi/>
Dec  6 02:55:52 np0005548731 nova_compute[232433]:    <apic/>
Dec  6 02:55:52 np0005548731 nova_compute[232433]:    <vmcoreinfo/>
Dec  6 02:55:52 np0005548731 nova_compute[232433]:  </features>
Dec  6 02:55:52 np0005548731 nova_compute[232433]:  <clock offset="utc">
Dec  6 02:55:52 np0005548731 nova_compute[232433]:    <timer name="pit" tickpolicy="delay"/>
Dec  6 02:55:52 np0005548731 nova_compute[232433]:    <timer name="rtc" tickpolicy="catchup"/>
Dec  6 02:55:52 np0005548731 nova_compute[232433]:    <timer name="hpet" present="no"/>
Dec  6 02:55:52 np0005548731 nova_compute[232433]:  </clock>
Dec  6 02:55:52 np0005548731 nova_compute[232433]:  <cpu mode="custom" match="exact">
Dec  6 02:55:52 np0005548731 nova_compute[232433]:    <model>Nehalem</model>
Dec  6 02:55:52 np0005548731 nova_compute[232433]:    <topology sockets="1" cores="1" threads="1"/>
Dec  6 02:55:52 np0005548731 nova_compute[232433]:  </cpu>
Dec  6 02:55:52 np0005548731 nova_compute[232433]:  <devices>
Dec  6 02:55:52 np0005548731 nova_compute[232433]:    <disk type="network" device="cdrom">
Dec  6 02:55:52 np0005548731 nova_compute[232433]:      <driver type="raw" cache="none"/>
Dec  6 02:55:52 np0005548731 nova_compute[232433]:      <source protocol="rbd" name="vms/21c591a0-5eed-4aa8-a68b-59a616b16e2b_disk.config">
Dec  6 02:55:52 np0005548731 nova_compute[232433]:        <host name="192.168.122.100" port="6789"/>
Dec  6 02:55:52 np0005548731 nova_compute[232433]:        <host name="192.168.122.102" port="6789"/>
Dec  6 02:55:52 np0005548731 nova_compute[232433]:        <host name="192.168.122.101" port="6789"/>
Dec  6 02:55:52 np0005548731 nova_compute[232433]:      </source>
Dec  6 02:55:52 np0005548731 nova_compute[232433]:      <auth username="openstack">
Dec  6 02:55:52 np0005548731 nova_compute[232433]:        <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec  6 02:55:52 np0005548731 nova_compute[232433]:      </auth>
Dec  6 02:55:52 np0005548731 nova_compute[232433]:      <target dev="sda" bus="sata"/>
Dec  6 02:55:52 np0005548731 nova_compute[232433]:    </disk>
Dec  6 02:55:52 np0005548731 nova_compute[232433]:    <disk type="network" device="disk">
Dec  6 02:55:52 np0005548731 nova_compute[232433]:      <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Dec  6 02:55:52 np0005548731 nova_compute[232433]:      <source protocol="rbd" name="volumes/volume-bbe9824b-1a7c-4ec6-8a4f-0f69d7c2bc92">
Dec  6 02:55:52 np0005548731 nova_compute[232433]:        <host name="192.168.122.100" port="6789"/>
Dec  6 02:55:52 np0005548731 nova_compute[232433]:        <host name="192.168.122.102" port="6789"/>
Dec  6 02:55:52 np0005548731 nova_compute[232433]:        <host name="192.168.122.101" port="6789"/>
Dec  6 02:55:52 np0005548731 nova_compute[232433]:      </source>
Dec  6 02:55:52 np0005548731 nova_compute[232433]:      <auth username="openstack">
Dec  6 02:55:52 np0005548731 nova_compute[232433]:        <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec  6 02:55:52 np0005548731 nova_compute[232433]:      </auth>
Dec  6 02:55:52 np0005548731 nova_compute[232433]:      <target dev="vda" bus="virtio"/>
Dec  6 02:55:52 np0005548731 nova_compute[232433]:      <serial>bbe9824b-1a7c-4ec6-8a4f-0f69d7c2bc92</serial>
Dec  6 02:55:52 np0005548731 nova_compute[232433]:    </disk>
Dec  6 02:55:52 np0005548731 nova_compute[232433]:    <interface type="ethernet">
Dec  6 02:55:52 np0005548731 nova_compute[232433]:      <mac address="fa:16:3e:31:62:02"/>
Dec  6 02:55:52 np0005548731 nova_compute[232433]:      <model type="virtio"/>
Dec  6 02:55:52 np0005548731 nova_compute[232433]:      <driver name="vhost" rx_queue_size="512"/>
Dec  6 02:55:52 np0005548731 nova_compute[232433]:      <mtu size="1442"/>
Dec  6 02:55:52 np0005548731 nova_compute[232433]:      <target dev="tapc76db338-03"/>
Dec  6 02:55:52 np0005548731 nova_compute[232433]:    </interface>
Dec  6 02:55:52 np0005548731 nova_compute[232433]:    <serial type="pty">
Dec  6 02:55:52 np0005548731 nova_compute[232433]:      <log file="/var/lib/nova/instances/21c591a0-5eed-4aa8-a68b-59a616b16e2b/console.log" append="off"/>
Dec  6 02:55:52 np0005548731 nova_compute[232433]:    </serial>
Dec  6 02:55:52 np0005548731 nova_compute[232433]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec  6 02:55:52 np0005548731 nova_compute[232433]:    <video>
Dec  6 02:55:52 np0005548731 nova_compute[232433]:      <model type="virtio"/>
Dec  6 02:55:52 np0005548731 nova_compute[232433]:    </video>
Dec  6 02:55:52 np0005548731 nova_compute[232433]:    <input type="tablet" bus="usb"/>
Dec  6 02:55:52 np0005548731 nova_compute[232433]:    <rng model="virtio">
Dec  6 02:55:52 np0005548731 nova_compute[232433]:      <backend model="random">/dev/urandom</backend>
Dec  6 02:55:52 np0005548731 nova_compute[232433]:    </rng>
Dec  6 02:55:52 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root"/>
Dec  6 02:55:52 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:55:52 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:55:52 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:55:52 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:55:52 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:55:52 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:55:52 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:55:52 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:55:52 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:55:52 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:55:52 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:55:52 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:55:52 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:55:52 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:55:52 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:55:52 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:55:52 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:55:52 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:55:52 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:55:52 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:55:52 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:55:52 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:55:52 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:55:52 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:55:52 np0005548731 nova_compute[232433]:    <controller type="usb" index="0"/>
Dec  6 02:55:52 np0005548731 nova_compute[232433]:    <memballoon model="virtio">
Dec  6 02:55:52 np0005548731 nova_compute[232433]:      <stats period="10"/>
Dec  6 02:55:52 np0005548731 nova_compute[232433]:    </memballoon>
Dec  6 02:55:52 np0005548731 nova_compute[232433]:  </devices>
Dec  6 02:55:52 np0005548731 nova_compute[232433]: </domain>
Dec  6 02:55:52 np0005548731 nova_compute[232433]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec  6 02:55:52 np0005548731 nova_compute[232433]: 2025-12-06 07:55:52.542 232437 DEBUG nova.compute.manager [None req-65605dda-d708-4fb9-a7e8-a63d7c88cfde e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] [instance: 21c591a0-5eed-4aa8-a68b-59a616b16e2b] Preparing to wait for external event network-vif-plugged-c76db338-0396-40a2-82b3-0c720d28d2bd prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec  6 02:55:52 np0005548731 nova_compute[232433]: 2025-12-06 07:55:52.542 232437 DEBUG oslo_concurrency.lockutils [None req-65605dda-d708-4fb9-a7e8-a63d7c88cfde e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] Acquiring lock "21c591a0-5eed-4aa8-a68b-59a616b16e2b-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:55:52 np0005548731 nova_compute[232433]: 2025-12-06 07:55:52.543 232437 DEBUG oslo_concurrency.lockutils [None req-65605dda-d708-4fb9-a7e8-a63d7c88cfde e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] Lock "21c591a0-5eed-4aa8-a68b-59a616b16e2b-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:55:52 np0005548731 nova_compute[232433]: 2025-12-06 07:55:52.543 232437 DEBUG oslo_concurrency.lockutils [None req-65605dda-d708-4fb9-a7e8-a63d7c88cfde e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] Lock "21c591a0-5eed-4aa8-a68b-59a616b16e2b-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:55:52 np0005548731 nova_compute[232433]: 2025-12-06 07:55:52.544 232437 DEBUG nova.virt.libvirt.vif [None req-65605dda-d708-4fb9-a7e8-a63d7c88cfde e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T07:55:44Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-TestInstancesWithCinderVolumes-server-1553758528',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-testinstanceswithcindervolumes-server-1553758528',id=176,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCMlCJgA181fL+hWV96XYAuaaRjR/DFcxIrENEwwuUSLNLg2Wo/zP2WcPtpxKQuFaV64lRGeBPzRnqkTHdlSql81bpyaGplyAnqRHnVLqVTwCxa7e5Tmw+I0TD65PH3Dpw==',key_name='tempest-TestInstancesWithCinderVolumes-1103529456',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6164fee998c94b71a37886fe42b4c56c',ramdisk_id='',reservation_id='r-228wza3f',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestInstancesWithCinderVolumes-1429596635',owner_user_name='tempest-TestInstancesWithCinderVolumes-1429596635-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:55:47Z,user_data=None,user_id='e685a049c8a74aa8aea831fbdaf2acf8',uuid=21c591a0-5eed-4aa8-a68b-59a616b16e2b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c76db338-0396-40a2-82b3-0c720d28d2bd", "address": "fa:16:3e:31:62:02", "network": {"id": "a3764201-4b86-4407-84d2-684bd05a44b3", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-2060653314-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6164fee998c94b71a37886fe42b4c56c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc76db338-03", "ovs_interfaceid": "c76db338-0396-40a2-82b3-0c720d28d2bd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec  6 02:55:52 np0005548731 nova_compute[232433]: 2025-12-06 07:55:52.544 232437 DEBUG nova.network.os_vif_util [None req-65605dda-d708-4fb9-a7e8-a63d7c88cfde e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] Converting VIF {"id": "c76db338-0396-40a2-82b3-0c720d28d2bd", "address": "fa:16:3e:31:62:02", "network": {"id": "a3764201-4b86-4407-84d2-684bd05a44b3", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-2060653314-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6164fee998c94b71a37886fe42b4c56c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc76db338-03", "ovs_interfaceid": "c76db338-0396-40a2-82b3-0c720d28d2bd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 02:55:52 np0005548731 nova_compute[232433]: 2025-12-06 07:55:52.545 232437 DEBUG nova.network.os_vif_util [None req-65605dda-d708-4fb9-a7e8-a63d7c88cfde e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:31:62:02,bridge_name='br-int',has_traffic_filtering=True,id=c76db338-0396-40a2-82b3-0c720d28d2bd,network=Network(a3764201-4b86-4407-84d2-684bd05a44b3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc76db338-03') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 02:55:52 np0005548731 nova_compute[232433]: 2025-12-06 07:55:52.545 232437 DEBUG os_vif [None req-65605dda-d708-4fb9-a7e8-a63d7c88cfde e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:31:62:02,bridge_name='br-int',has_traffic_filtering=True,id=c76db338-0396-40a2-82b3-0c720d28d2bd,network=Network(a3764201-4b86-4407-84d2-684bd05a44b3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc76db338-03') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec  6 02:55:52 np0005548731 nova_compute[232433]: 2025-12-06 07:55:52.546 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:55:52 np0005548731 nova_compute[232433]: 2025-12-06 07:55:52.546 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:55:52 np0005548731 nova_compute[232433]: 2025-12-06 07:55:52.546 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  6 02:55:52 np0005548731 nova_compute[232433]: 2025-12-06 07:55:52.550 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:55:52 np0005548731 nova_compute[232433]: 2025-12-06 07:55:52.550 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc76db338-03, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:55:52 np0005548731 nova_compute[232433]: 2025-12-06 07:55:52.550 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapc76db338-03, col_values=(('external_ids', {'iface-id': 'c76db338-0396-40a2-82b3-0c720d28d2bd', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:31:62:02', 'vm-uuid': '21c591a0-5eed-4aa8-a68b-59a616b16e2b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:55:52 np0005548731 nova_compute[232433]: 2025-12-06 07:55:52.551 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:55:52 np0005548731 NetworkManager[49182]: <info>  [1765007752.5530] manager: (tapc76db338-03): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/400)
Dec  6 02:55:52 np0005548731 nova_compute[232433]: 2025-12-06 07:55:52.554 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  6 02:55:52 np0005548731 nova_compute[232433]: 2025-12-06 07:55:52.557 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:55:52 np0005548731 nova_compute[232433]: 2025-12-06 07:55:52.557 232437 INFO os_vif [None req-65605dda-d708-4fb9-a7e8-a63d7c88cfde e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:31:62:02,bridge_name='br-int',has_traffic_filtering=True,id=c76db338-0396-40a2-82b3-0c720d28d2bd,network=Network(a3764201-4b86-4407-84d2-684bd05a44b3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc76db338-03')#033[00m
Dec  6 02:55:52 np0005548731 nova_compute[232433]: 2025-12-06 07:55:52.607 232437 DEBUG nova.virt.libvirt.driver [None req-65605dda-d708-4fb9-a7e8-a63d7c88cfde e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  6 02:55:52 np0005548731 nova_compute[232433]: 2025-12-06 07:55:52.607 232437 DEBUG nova.virt.libvirt.driver [None req-65605dda-d708-4fb9-a7e8-a63d7c88cfde e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  6 02:55:52 np0005548731 nova_compute[232433]: 2025-12-06 07:55:52.607 232437 DEBUG nova.virt.libvirt.driver [None req-65605dda-d708-4fb9-a7e8-a63d7c88cfde e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] No VIF found with MAC fa:16:3e:31:62:02, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec  6 02:55:52 np0005548731 nova_compute[232433]: 2025-12-06 07:55:52.608 232437 INFO nova.virt.libvirt.driver [None req-65605dda-d708-4fb9-a7e8-a63d7c88cfde e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] [instance: 21c591a0-5eed-4aa8-a68b-59a616b16e2b] Using config drive#033[00m
Dec  6 02:55:52 np0005548731 nova_compute[232433]: 2025-12-06 07:55:52.632 232437 DEBUG nova.storage.rbd_utils [None req-65605dda-d708-4fb9-a7e8-a63d7c88cfde e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] rbd image 21c591a0-5eed-4aa8-a68b-59a616b16e2b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:55:53 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:55:53 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:55:53 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:55:53.146 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:55:54 np0005548731 nova_compute[232433]: 2025-12-06 07:55:54.022 232437 INFO nova.virt.libvirt.driver [None req-65605dda-d708-4fb9-a7e8-a63d7c88cfde e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] [instance: 21c591a0-5eed-4aa8-a68b-59a616b16e2b] Creating config drive at /var/lib/nova/instances/21c591a0-5eed-4aa8-a68b-59a616b16e2b/disk.config#033[00m
Dec  6 02:55:54 np0005548731 nova_compute[232433]: 2025-12-06 07:55:54.029 232437 DEBUG oslo_concurrency.processutils [None req-65605dda-d708-4fb9-a7e8-a63d7c88cfde e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/21c591a0-5eed-4aa8-a68b-59a616b16e2b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpynqrydj1 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:55:54 np0005548731 nova_compute[232433]: 2025-12-06 07:55:54.162 232437 DEBUG oslo_concurrency.processutils [None req-65605dda-d708-4fb9-a7e8-a63d7c88cfde e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/21c591a0-5eed-4aa8-a68b-59a616b16e2b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpynqrydj1" returned: 0 in 0.134s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:55:54 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e400 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:55:54 np0005548731 nova_compute[232433]: 2025-12-06 07:55:54.195 232437 DEBUG nova.storage.rbd_utils [None req-65605dda-d708-4fb9-a7e8-a63d7c88cfde e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] rbd image 21c591a0-5eed-4aa8-a68b-59a616b16e2b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:55:54 np0005548731 nova_compute[232433]: 2025-12-06 07:55:54.199 232437 DEBUG oslo_concurrency.processutils [None req-65605dda-d708-4fb9-a7e8-a63d7c88cfde e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/21c591a0-5eed-4aa8-a68b-59a616b16e2b/disk.config 21c591a0-5eed-4aa8-a68b-59a616b16e2b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:55:54 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:55:54 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:55:54 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:55:54.351 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:55:54 np0005548731 nova_compute[232433]: 2025-12-06 07:55:54.370 232437 DEBUG oslo_concurrency.processutils [None req-65605dda-d708-4fb9-a7e8-a63d7c88cfde e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/21c591a0-5eed-4aa8-a68b-59a616b16e2b/disk.config 21c591a0-5eed-4aa8-a68b-59a616b16e2b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.172s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:55:54 np0005548731 nova_compute[232433]: 2025-12-06 07:55:54.371 232437 INFO nova.virt.libvirt.driver [None req-65605dda-d708-4fb9-a7e8-a63d7c88cfde e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] [instance: 21c591a0-5eed-4aa8-a68b-59a616b16e2b] Deleting local config drive /var/lib/nova/instances/21c591a0-5eed-4aa8-a68b-59a616b16e2b/disk.config because it was imported into RBD.#033[00m
Dec  6 02:55:54 np0005548731 kernel: tapc76db338-03: entered promiscuous mode
Dec  6 02:55:54 np0005548731 NetworkManager[49182]: <info>  [1765007754.4192] manager: (tapc76db338-03): new Tun device (/org/freedesktop/NetworkManager/Devices/401)
Dec  6 02:55:54 np0005548731 ovn_controller[133927]: 2025-12-06T07:55:54Z|00873|binding|INFO|Claiming lport c76db338-0396-40a2-82b3-0c720d28d2bd for this chassis.
Dec  6 02:55:54 np0005548731 ovn_controller[133927]: 2025-12-06T07:55:54Z|00874|binding|INFO|c76db338-0396-40a2-82b3-0c720d28d2bd: Claiming fa:16:3e:31:62:02 10.100.0.4
Dec  6 02:55:54 np0005548731 nova_compute[232433]: 2025-12-06 07:55:54.420 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:55:54 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:55:54.425 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:31:62:02 10.100.0.4'], port_security=['fa:16:3e:31:62:02 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '21c591a0-5eed-4aa8-a68b-59a616b16e2b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a3764201-4b86-4407-84d2-684bd05a44b3', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6164fee998c94b71a37886fe42b4c56c', 'neutron:revision_number': '2', 'neutron:security_group_ids': '2bb7af25-e3c4-4687-888a-3caf6297e5c6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7a293aea-136f-4ea2-8198-6213071653ca, chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], logical_port=c76db338-0396-40a2-82b3-0c720d28d2bd) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 02:55:54 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:55:54.426 143965 INFO neutron.agent.ovn.metadata.agent [-] Port c76db338-0396-40a2-82b3-0c720d28d2bd in datapath a3764201-4b86-4407-84d2-684bd05a44b3 bound to our chassis#033[00m
Dec  6 02:55:54 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:55:54.428 143965 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network a3764201-4b86-4407-84d2-684bd05a44b3#033[00m
Dec  6 02:55:54 np0005548731 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec  6 02:55:54 np0005548731 ovn_controller[133927]: 2025-12-06T07:55:54Z|00875|binding|INFO|Setting lport c76db338-0396-40a2-82b3-0c720d28d2bd ovn-installed in OVS
Dec  6 02:55:54 np0005548731 ovn_controller[133927]: 2025-12-06T07:55:54Z|00876|binding|INFO|Setting lport c76db338-0396-40a2-82b3-0c720d28d2bd up in Southbound
Dec  6 02:55:54 np0005548731 nova_compute[232433]: 2025-12-06 07:55:54.440 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:55:54 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:55:54.440 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[549b5953-34bf-4fea-9e07-8a11f1f13550]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:55:54 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:55:54.442 143965 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapa3764201-41 in ovnmeta-a3764201-4b86-4407-84d2-684bd05a44b3 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Dec  6 02:55:54 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:55:54.444 236668 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapa3764201-40 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Dec  6 02:55:54 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:55:54.444 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[6481d1e4-d3f3-4be1-938a-400a2a75aa73]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:55:54 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:55:54.446 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[581edc1d-5526-4a9a-a4e2-a7bfefcf5399]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:55:54 np0005548731 systemd-udevd[315345]: Network interface NamePolicy= disabled on kernel command line.
Dec  6 02:55:54 np0005548731 systemd-machined[195355]: New machine qemu-89-instance-000000b0.
Dec  6 02:55:54 np0005548731 NetworkManager[49182]: <info>  [1765007754.4630] device (tapc76db338-03): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec  6 02:55:54 np0005548731 NetworkManager[49182]: <info>  [1765007754.4641] device (tapc76db338-03): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec  6 02:55:54 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:55:54.463 144079 DEBUG oslo.privsep.daemon [-] privsep: reply[b8e92d1f-505b-4548-a43e-b674f53b4457]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:55:54 np0005548731 systemd[1]: Started Virtual Machine qemu-89-instance-000000b0.
Dec  6 02:55:54 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:55:54.478 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[6e57903d-4669-49d1-91ed-762429d5e16d]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:55:54 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:55:54.520 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[e0c34993-0f23-4ecb-993c-76f00aafece3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:55:54 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:55:54.525 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[7b0337c7-9962-48ba-93cb-78033602d02f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:55:54 np0005548731 NetworkManager[49182]: <info>  [1765007754.5265] manager: (tapa3764201-40): new Veth device (/org/freedesktop/NetworkManager/Devices/402)
Dec  6 02:55:54 np0005548731 systemd-udevd[315349]: Network interface NamePolicy= disabled on kernel command line.
Dec  6 02:55:54 np0005548731 nova_compute[232433]: 2025-12-06 07:55:54.559 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:55:54 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:55:54.561 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[f604f360-812b-4bd8-a1ae-56b908cc9bb5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:55:54 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:55:54.564 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[20412bc1-b3af-42ab-b575-fc73b0652ee8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:55:54 np0005548731 NetworkManager[49182]: <info>  [1765007754.5874] device (tapa3764201-40): carrier: link connected
Dec  6 02:55:54 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:55:54.593 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[8b6526df-51b4-4536-a84c-e44d7175706d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:55:54 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:55:54.610 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[e4499564-85e8-4264-b91c-b41c50aefca8]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa3764201-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d8:90:e9'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 267], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 801659, 'reachable_time': 37297, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 315378, 'error': None, 'target': 'ovnmeta-a3764201-4b86-4407-84d2-684bd05a44b3', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:55:54 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:55:54.626 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[c678b788-c3a9-42e7-8f9e-320cefb630f2]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fed8:90e9'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 801659, 'tstamp': 801659}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 315379, 'error': None, 'target': 'ovnmeta-a3764201-4b86-4407-84d2-684bd05a44b3', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:55:54 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:55:54.641 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[d05b93f2-e553-4a46-94cb-acead5294d57]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa3764201-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d8:90:e9'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 267], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 801659, 'reachable_time': 37297, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 315380, 'error': None, 'target': 'ovnmeta-a3764201-4b86-4407-84d2-684bd05a44b3', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:55:54 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:55:54.670 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[2dc4f030-4335-4fd0-a3f6-a6544c20fd54]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:55:54 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:55:54.725 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[bc772071-0b0a-4c17-aa68-05947887d800]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:55:54 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:55:54.727 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa3764201-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:55:54 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:55:54.727 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  6 02:55:54 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:55:54.727 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa3764201-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:55:54 np0005548731 kernel: tapa3764201-40: entered promiscuous mode
Dec  6 02:55:54 np0005548731 NetworkManager[49182]: <info>  [1765007754.7297] manager: (tapa3764201-40): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/403)
Dec  6 02:55:54 np0005548731 nova_compute[232433]: 2025-12-06 07:55:54.730 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:55:54 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:55:54.731 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapa3764201-40, col_values=(('external_ids', {'iface-id': '901b0fd3-1832-4628-bbf4-0a14b30cd979'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:55:54 np0005548731 ovn_controller[133927]: 2025-12-06T07:55:54Z|00877|binding|INFO|Releasing lport 901b0fd3-1832-4628-bbf4-0a14b30cd979 from this chassis (sb_readonly=0)
Dec  6 02:55:54 np0005548731 nova_compute[232433]: 2025-12-06 07:55:54.745 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:55:54 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:55:54.746 143965 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/a3764201-4b86-4407-84d2-684bd05a44b3.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/a3764201-4b86-4407-84d2-684bd05a44b3.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Dec  6 02:55:54 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:55:54.747 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[d205a9ff-b7b6-4e08-8754-033283b7ae80]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:55:54 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:55:54.748 143965 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec  6 02:55:54 np0005548731 ovn_metadata_agent[143960]: global
Dec  6 02:55:54 np0005548731 ovn_metadata_agent[143960]:    log         /dev/log local0 debug
Dec  6 02:55:54 np0005548731 ovn_metadata_agent[143960]:    log-tag     haproxy-metadata-proxy-a3764201-4b86-4407-84d2-684bd05a44b3
Dec  6 02:55:54 np0005548731 ovn_metadata_agent[143960]:    user        root
Dec  6 02:55:54 np0005548731 ovn_metadata_agent[143960]:    group       root
Dec  6 02:55:54 np0005548731 ovn_metadata_agent[143960]:    maxconn     1024
Dec  6 02:55:54 np0005548731 ovn_metadata_agent[143960]:    pidfile     /var/lib/neutron/external/pids/a3764201-4b86-4407-84d2-684bd05a44b3.pid.haproxy
Dec  6 02:55:54 np0005548731 ovn_metadata_agent[143960]:    daemon
Dec  6 02:55:54 np0005548731 ovn_metadata_agent[143960]: 
Dec  6 02:55:54 np0005548731 ovn_metadata_agent[143960]: defaults
Dec  6 02:55:54 np0005548731 ovn_metadata_agent[143960]:    log global
Dec  6 02:55:54 np0005548731 ovn_metadata_agent[143960]:    mode http
Dec  6 02:55:54 np0005548731 ovn_metadata_agent[143960]:    option httplog
Dec  6 02:55:54 np0005548731 ovn_metadata_agent[143960]:    option dontlognull
Dec  6 02:55:54 np0005548731 ovn_metadata_agent[143960]:    option http-server-close
Dec  6 02:55:54 np0005548731 ovn_metadata_agent[143960]:    option forwardfor
Dec  6 02:55:54 np0005548731 ovn_metadata_agent[143960]:    retries                 3
Dec  6 02:55:54 np0005548731 ovn_metadata_agent[143960]:    timeout http-request    30s
Dec  6 02:55:54 np0005548731 ovn_metadata_agent[143960]:    timeout connect         30s
Dec  6 02:55:54 np0005548731 ovn_metadata_agent[143960]:    timeout client          32s
Dec  6 02:55:54 np0005548731 ovn_metadata_agent[143960]:    timeout server          32s
Dec  6 02:55:54 np0005548731 ovn_metadata_agent[143960]:    timeout http-keep-alive 30s
Dec  6 02:55:54 np0005548731 ovn_metadata_agent[143960]: 
Dec  6 02:55:54 np0005548731 ovn_metadata_agent[143960]: 
Dec  6 02:55:54 np0005548731 ovn_metadata_agent[143960]: listen listener
Dec  6 02:55:54 np0005548731 ovn_metadata_agent[143960]:    bind 169.254.169.254:80
Dec  6 02:55:54 np0005548731 ovn_metadata_agent[143960]:    server metadata /var/lib/neutron/metadata_proxy
Dec  6 02:55:54 np0005548731 ovn_metadata_agent[143960]:    http-request add-header X-OVN-Network-ID a3764201-4b86-4407-84d2-684bd05a44b3
Dec  6 02:55:54 np0005548731 ovn_metadata_agent[143960]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Dec  6 02:55:54 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:55:54.748 143965 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-a3764201-4b86-4407-84d2-684bd05a44b3', 'env', 'PROCESS_TAG=haproxy-a3764201-4b86-4407-84d2-684bd05a44b3', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/a3764201-4b86-4407-84d2-684bd05a44b3.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Dec  6 02:55:54 np0005548731 nova_compute[232433]: 2025-12-06 07:55:54.830 232437 DEBUG nova.compute.manager [req-cfc83ac2-c80f-4628-b25b-c541ae1fcb75 req-9789dee4-34a2-4c1a-ab62-6a9e18e487e8 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 21c591a0-5eed-4aa8-a68b-59a616b16e2b] Received event network-vif-plugged-c76db338-0396-40a2-82b3-0c720d28d2bd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:55:54 np0005548731 nova_compute[232433]: 2025-12-06 07:55:54.831 232437 DEBUG oslo_concurrency.lockutils [req-cfc83ac2-c80f-4628-b25b-c541ae1fcb75 req-9789dee4-34a2-4c1a-ab62-6a9e18e487e8 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "21c591a0-5eed-4aa8-a68b-59a616b16e2b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:55:54 np0005548731 nova_compute[232433]: 2025-12-06 07:55:54.831 232437 DEBUG oslo_concurrency.lockutils [req-cfc83ac2-c80f-4628-b25b-c541ae1fcb75 req-9789dee4-34a2-4c1a-ab62-6a9e18e487e8 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "21c591a0-5eed-4aa8-a68b-59a616b16e2b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:55:54 np0005548731 nova_compute[232433]: 2025-12-06 07:55:54.832 232437 DEBUG oslo_concurrency.lockutils [req-cfc83ac2-c80f-4628-b25b-c541ae1fcb75 req-9789dee4-34a2-4c1a-ab62-6a9e18e487e8 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "21c591a0-5eed-4aa8-a68b-59a616b16e2b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:55:54 np0005548731 nova_compute[232433]: 2025-12-06 07:55:54.832 232437 DEBUG nova.compute.manager [req-cfc83ac2-c80f-4628-b25b-c541ae1fcb75 req-9789dee4-34a2-4c1a-ab62-6a9e18e487e8 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 21c591a0-5eed-4aa8-a68b-59a616b16e2b] Processing event network-vif-plugged-c76db338-0396-40a2-82b3-0c720d28d2bd _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Dec  6 02:55:54 np0005548731 nova_compute[232433]: 2025-12-06 07:55:54.893 232437 DEBUG nova.virt.driver [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Emitting event <LifecycleEvent: 1765007754.8933032, 21c591a0-5eed-4aa8-a68b-59a616b16e2b => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 02:55:54 np0005548731 nova_compute[232433]: 2025-12-06 07:55:54.894 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 21c591a0-5eed-4aa8-a68b-59a616b16e2b] VM Started (Lifecycle Event)#033[00m
Dec  6 02:55:54 np0005548731 nova_compute[232433]: 2025-12-06 07:55:54.897 232437 DEBUG nova.compute.manager [None req-65605dda-d708-4fb9-a7e8-a63d7c88cfde e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] [instance: 21c591a0-5eed-4aa8-a68b-59a616b16e2b] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec  6 02:55:54 np0005548731 nova_compute[232433]: 2025-12-06 07:55:54.901 232437 DEBUG nova.virt.libvirt.driver [None req-65605dda-d708-4fb9-a7e8-a63d7c88cfde e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] [instance: 21c591a0-5eed-4aa8-a68b-59a616b16e2b] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Dec  6 02:55:54 np0005548731 nova_compute[232433]: 2025-12-06 07:55:54.904 232437 INFO nova.virt.libvirt.driver [-] [instance: 21c591a0-5eed-4aa8-a68b-59a616b16e2b] Instance spawned successfully.#033[00m
Dec  6 02:55:54 np0005548731 nova_compute[232433]: 2025-12-06 07:55:54.904 232437 DEBUG nova.virt.libvirt.driver [None req-65605dda-d708-4fb9-a7e8-a63d7c88cfde e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] [instance: 21c591a0-5eed-4aa8-a68b-59a616b16e2b] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Dec  6 02:55:54 np0005548731 nova_compute[232433]: 2025-12-06 07:55:54.914 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 21c591a0-5eed-4aa8-a68b-59a616b16e2b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:55:54 np0005548731 nova_compute[232433]: 2025-12-06 07:55:54.918 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 21c591a0-5eed-4aa8-a68b-59a616b16e2b] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  6 02:55:54 np0005548731 nova_compute[232433]: 2025-12-06 07:55:54.925 232437 DEBUG nova.virt.libvirt.driver [None req-65605dda-d708-4fb9-a7e8-a63d7c88cfde e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] [instance: 21c591a0-5eed-4aa8-a68b-59a616b16e2b] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 02:55:54 np0005548731 nova_compute[232433]: 2025-12-06 07:55:54.925 232437 DEBUG nova.virt.libvirt.driver [None req-65605dda-d708-4fb9-a7e8-a63d7c88cfde e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] [instance: 21c591a0-5eed-4aa8-a68b-59a616b16e2b] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 02:55:54 np0005548731 nova_compute[232433]: 2025-12-06 07:55:54.926 232437 DEBUG nova.virt.libvirt.driver [None req-65605dda-d708-4fb9-a7e8-a63d7c88cfde e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] [instance: 21c591a0-5eed-4aa8-a68b-59a616b16e2b] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 02:55:54 np0005548731 nova_compute[232433]: 2025-12-06 07:55:54.926 232437 DEBUG nova.virt.libvirt.driver [None req-65605dda-d708-4fb9-a7e8-a63d7c88cfde e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] [instance: 21c591a0-5eed-4aa8-a68b-59a616b16e2b] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 02:55:54 np0005548731 nova_compute[232433]: 2025-12-06 07:55:54.927 232437 DEBUG nova.virt.libvirt.driver [None req-65605dda-d708-4fb9-a7e8-a63d7c88cfde e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] [instance: 21c591a0-5eed-4aa8-a68b-59a616b16e2b] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 02:55:54 np0005548731 nova_compute[232433]: 2025-12-06 07:55:54.927 232437 DEBUG nova.virt.libvirt.driver [None req-65605dda-d708-4fb9-a7e8-a63d7c88cfde e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] [instance: 21c591a0-5eed-4aa8-a68b-59a616b16e2b] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 02:55:54 np0005548731 nova_compute[232433]: 2025-12-06 07:55:54.935 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 21c591a0-5eed-4aa8-a68b-59a616b16e2b] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  6 02:55:54 np0005548731 nova_compute[232433]: 2025-12-06 07:55:54.936 232437 DEBUG nova.virt.driver [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Emitting event <LifecycleEvent: 1765007754.8947227, 21c591a0-5eed-4aa8-a68b-59a616b16e2b => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 02:55:54 np0005548731 nova_compute[232433]: 2025-12-06 07:55:54.936 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 21c591a0-5eed-4aa8-a68b-59a616b16e2b] VM Paused (Lifecycle Event)#033[00m
Dec  6 02:55:54 np0005548731 nova_compute[232433]: 2025-12-06 07:55:54.958 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 21c591a0-5eed-4aa8-a68b-59a616b16e2b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:55:54 np0005548731 nova_compute[232433]: 2025-12-06 07:55:54.962 232437 DEBUG nova.virt.driver [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Emitting event <LifecycleEvent: 1765007754.8996892, 21c591a0-5eed-4aa8-a68b-59a616b16e2b => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 02:55:54 np0005548731 nova_compute[232433]: 2025-12-06 07:55:54.963 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 21c591a0-5eed-4aa8-a68b-59a616b16e2b] VM Resumed (Lifecycle Event)#033[00m
Dec  6 02:55:54 np0005548731 nova_compute[232433]: 2025-12-06 07:55:54.996 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 21c591a0-5eed-4aa8-a68b-59a616b16e2b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:55:55 np0005548731 nova_compute[232433]: 2025-12-06 07:55:55.003 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 21c591a0-5eed-4aa8-a68b-59a616b16e2b] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  6 02:55:55 np0005548731 nova_compute[232433]: 2025-12-06 07:55:55.014 232437 INFO nova.compute.manager [None req-65605dda-d708-4fb9-a7e8-a63d7c88cfde e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] [instance: 21c591a0-5eed-4aa8-a68b-59a616b16e2b] Took 6.07 seconds to spawn the instance on the hypervisor.#033[00m
Dec  6 02:55:55 np0005548731 nova_compute[232433]: 2025-12-06 07:55:55.015 232437 DEBUG nova.compute.manager [None req-65605dda-d708-4fb9-a7e8-a63d7c88cfde e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] [instance: 21c591a0-5eed-4aa8-a68b-59a616b16e2b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:55:55 np0005548731 nova_compute[232433]: 2025-12-06 07:55:55.046 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 21c591a0-5eed-4aa8-a68b-59a616b16e2b] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  6 02:55:55 np0005548731 podman[315454]: 2025-12-06 07:55:55.089202818 +0000 UTC m=+0.046211782 container create 6516198d7d294f8bb86531b26a9d474f4d1b5e8b2cd129761a6aeab20b28a424 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a3764201-4b86-4407-84d2-684bd05a44b3, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  6 02:55:55 np0005548731 nova_compute[232433]: 2025-12-06 07:55:55.090 232437 INFO nova.compute.manager [None req-65605dda-d708-4fb9-a7e8-a63d7c88cfde e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] [instance: 21c591a0-5eed-4aa8-a68b-59a616b16e2b] Took 9.41 seconds to build instance.#033[00m
Dec  6 02:55:55 np0005548731 nova_compute[232433]: 2025-12-06 07:55:55.109 232437 DEBUG oslo_concurrency.lockutils [None req-65605dda-d708-4fb9-a7e8-a63d7c88cfde e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] Lock "21c591a0-5eed-4aa8-a68b-59a616b16e2b" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.558s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:55:55 np0005548731 systemd[1]: Started libpod-conmon-6516198d7d294f8bb86531b26a9d474f4d1b5e8b2cd129761a6aeab20b28a424.scope.
Dec  6 02:55:55 np0005548731 systemd[1]: Started libcrun container.
Dec  6 02:55:55 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:55:55 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:55:55 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:55:55.149 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:55:55 np0005548731 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68173a740d969647938b23c145d7d31b01150b56b517124d6746cb8c447e998c/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec  6 02:55:55 np0005548731 podman[315454]: 2025-12-06 07:55:55.065412006 +0000 UTC m=+0.022420980 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec  6 02:55:55 np0005548731 podman[315454]: 2025-12-06 07:55:55.162638555 +0000 UTC m=+0.119647539 container init 6516198d7d294f8bb86531b26a9d474f4d1b5e8b2cd129761a6aeab20b28a424 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a3764201-4b86-4407-84d2-684bd05a44b3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Dec  6 02:55:55 np0005548731 podman[315454]: 2025-12-06 07:55:55.167704869 +0000 UTC m=+0.124713833 container start 6516198d7d294f8bb86531b26a9d474f4d1b5e8b2cd129761a6aeab20b28a424 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a3764201-4b86-4407-84d2-684bd05a44b3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec  6 02:55:55 np0005548731 neutron-haproxy-ovnmeta-a3764201-4b86-4407-84d2-684bd05a44b3[315469]: [NOTICE]   (315473) : New worker (315475) forked
Dec  6 02:55:55 np0005548731 neutron-haproxy-ovnmeta-a3764201-4b86-4407-84d2-684bd05a44b3[315469]: [NOTICE]   (315473) : Loading success.
Dec  6 02:55:55 np0005548731 nova_compute[232433]: 2025-12-06 07:55:55.413 232437 DEBUG nova.network.neutron [req-78b22f38-b665-4f0a-b9d0-f3a18dea7b1c req-b05bd42a-7dda-4289-887d-c463d6b83474 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 21c591a0-5eed-4aa8-a68b-59a616b16e2b] Updated VIF entry in instance network info cache for port c76db338-0396-40a2-82b3-0c720d28d2bd. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  6 02:55:55 np0005548731 nova_compute[232433]: 2025-12-06 07:55:55.414 232437 DEBUG nova.network.neutron [req-78b22f38-b665-4f0a-b9d0-f3a18dea7b1c req-b05bd42a-7dda-4289-887d-c463d6b83474 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 21c591a0-5eed-4aa8-a68b-59a616b16e2b] Updating instance_info_cache with network_info: [{"id": "c76db338-0396-40a2-82b3-0c720d28d2bd", "address": "fa:16:3e:31:62:02", "network": {"id": "a3764201-4b86-4407-84d2-684bd05a44b3", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-2060653314-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6164fee998c94b71a37886fe42b4c56c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc76db338-03", "ovs_interfaceid": "c76db338-0396-40a2-82b3-0c720d28d2bd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 02:55:55 np0005548731 nova_compute[232433]: 2025-12-06 07:55:55.429 232437 DEBUG oslo_concurrency.lockutils [req-78b22f38-b665-4f0a-b9d0-f3a18dea7b1c req-b05bd42a-7dda-4289-887d-c463d6b83474 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-21c591a0-5eed-4aa8-a68b-59a616b16e2b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 02:55:56 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:55:56 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:55:56 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:55:56.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:55:57 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:55:57 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:55:57 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:55:57.151 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:55:57 np0005548731 nova_compute[232433]: 2025-12-06 07:55:57.438 232437 DEBUG nova.compute.manager [req-709d61e7-4976-48c2-bf51-61e031cb05d5 req-e54e8261-73c1-482d-b22b-97f45ee9da8f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 21c591a0-5eed-4aa8-a68b-59a616b16e2b] Received event network-vif-plugged-c76db338-0396-40a2-82b3-0c720d28d2bd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:55:57 np0005548731 nova_compute[232433]: 2025-12-06 07:55:57.438 232437 DEBUG oslo_concurrency.lockutils [req-709d61e7-4976-48c2-bf51-61e031cb05d5 req-e54e8261-73c1-482d-b22b-97f45ee9da8f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "21c591a0-5eed-4aa8-a68b-59a616b16e2b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:55:57 np0005548731 nova_compute[232433]: 2025-12-06 07:55:57.439 232437 DEBUG oslo_concurrency.lockutils [req-709d61e7-4976-48c2-bf51-61e031cb05d5 req-e54e8261-73c1-482d-b22b-97f45ee9da8f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "21c591a0-5eed-4aa8-a68b-59a616b16e2b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:55:57 np0005548731 nova_compute[232433]: 2025-12-06 07:55:57.439 232437 DEBUG oslo_concurrency.lockutils [req-709d61e7-4976-48c2-bf51-61e031cb05d5 req-e54e8261-73c1-482d-b22b-97f45ee9da8f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "21c591a0-5eed-4aa8-a68b-59a616b16e2b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:55:57 np0005548731 nova_compute[232433]: 2025-12-06 07:55:57.439 232437 DEBUG nova.compute.manager [req-709d61e7-4976-48c2-bf51-61e031cb05d5 req-e54e8261-73c1-482d-b22b-97f45ee9da8f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 21c591a0-5eed-4aa8-a68b-59a616b16e2b] No waiting events found dispatching network-vif-plugged-c76db338-0396-40a2-82b3-0c720d28d2bd pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 02:55:57 np0005548731 nova_compute[232433]: 2025-12-06 07:55:57.439 232437 WARNING nova.compute.manager [req-709d61e7-4976-48c2-bf51-61e031cb05d5 req-e54e8261-73c1-482d-b22b-97f45ee9da8f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 21c591a0-5eed-4aa8-a68b-59a616b16e2b] Received unexpected event network-vif-plugged-c76db338-0396-40a2-82b3-0c720d28d2bd for instance with vm_state active and task_state None.#033[00m
Dec  6 02:55:57 np0005548731 nova_compute[232433]: 2025-12-06 07:55:57.552 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:55:58 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:55:58 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:55:58 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:55:58.357 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:55:58 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  6 02:55:58 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 02:55:58 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  6 02:55:58 np0005548731 ovn_controller[133927]: 2025-12-06T07:55:58Z|00878|binding|INFO|Releasing lport 901b0fd3-1832-4628-bbf4-0a14b30cd979 from this chassis (sb_readonly=0)
Dec  6 02:55:58 np0005548731 nova_compute[232433]: 2025-12-06 07:55:58.763 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:55:58 np0005548731 ovn_controller[133927]: 2025-12-06T07:55:58Z|00879|binding|INFO|Releasing lport 901b0fd3-1832-4628-bbf4-0a14b30cd979 from this chassis (sb_readonly=0)
Dec  6 02:55:58 np0005548731 nova_compute[232433]: 2025-12-06 07:55:58.992 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:55:59 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:55:59 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:55:59 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:55:59.151 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:55:59 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e400 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:55:59 np0005548731 nova_compute[232433]: 2025-12-06 07:55:59.331 232437 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765007744.32945, e600163d-bbdd-421a-ab0e-83bfbac6495a => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 02:55:59 np0005548731 nova_compute[232433]: 2025-12-06 07:55:59.331 232437 INFO nova.compute.manager [-] [instance: e600163d-bbdd-421a-ab0e-83bfbac6495a] VM Stopped (Lifecycle Event)#033[00m
Dec  6 02:55:59 np0005548731 nova_compute[232433]: 2025-12-06 07:55:59.357 232437 DEBUG nova.compute.manager [None req-461e8d05-a10e-4004-926c-d9fc24bdb1be - - - - - -] [instance: e600163d-bbdd-421a-ab0e-83bfbac6495a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:55:59 np0005548731 nova_compute[232433]: 2025-12-06 07:55:59.560 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:56:00 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:56:00 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:56:00 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:56:00.358 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:56:00 np0005548731 podman[315619]: 2025-12-06 07:56:00.894460692 +0000 UTC m=+0.053879988 container health_status 26c2ce9bdb83c6db5ccad7f5b2a7cb4e9354ab0235ad2f942ea42c8feda2142b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, 
io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent)
Dec  6 02:56:00 np0005548731 podman[315621]: 2025-12-06 07:56:00.900744367 +0000 UTC m=+0.054976996 container health_status ae4fd08cdec31d6ae2a6b91ffff7deb6ca5a8375cb52ee4b685cc6f25796824e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_managed=true, config_id=multipathd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Dec  6 02:56:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:56:00.902 143965 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:56:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:56:00.906 143965 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.004s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:56:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:56:00.906 143965 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:56:00 np0005548731 podman[315620]: 2025-12-06 07:56:00.92544432 +0000 UTC m=+0.084020386 container health_status a118c3becf56cfbcae208065771ebf10a65162bb7b96389971293e534823fb78 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2)
Dec  6 02:56:01 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:56:01 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:56:01 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:56:01.153 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:56:02 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:56:02 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:56:02 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:56:02.361 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:56:02 np0005548731 nova_compute[232433]: 2025-12-06 07:56:02.555 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:56:03 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:56:03 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:56:03 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:56:03.154 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:56:04 np0005548731 nova_compute[232433]: 2025-12-06 07:56:04.065 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:56:04 np0005548731 NetworkManager[49182]: <info>  [1765007764.0670] manager: (patch-provnet-9e78c1a1-68f4-477a-abaa-13a98bde06e5-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/404)
Dec  6 02:56:04 np0005548731 NetworkManager[49182]: <info>  [1765007764.0684] manager: (patch-br-int-to-provnet-9e78c1a1-68f4-477a-abaa-13a98bde06e5): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/405)
Dec  6 02:56:04 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e400 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:56:04 np0005548731 nova_compute[232433]: 2025-12-06 07:56:04.222 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:56:04 np0005548731 ovn_controller[133927]: 2025-12-06T07:56:04Z|00880|binding|INFO|Releasing lport 901b0fd3-1832-4628-bbf4-0a14b30cd979 from this chassis (sb_readonly=0)
Dec  6 02:56:04 np0005548731 nova_compute[232433]: 2025-12-06 07:56:04.238 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:56:04 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:56:04 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:56:04 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:56:04.366 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:56:04 np0005548731 nova_compute[232433]: 2025-12-06 07:56:04.595 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:56:05 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:56:05 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:56:05 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:56:05.156 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:56:05 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 02:56:05 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 02:56:06 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:56:06 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:56:06 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:56:06.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:56:07 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:56:07 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:56:07 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:56:07.158 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:56:07 np0005548731 nova_compute[232433]: 2025-12-06 07:56:07.556 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:56:08 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:56:08 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:56:08 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:56:08.372 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:56:08 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  6 02:56:08 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2025625938' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  6 02:56:08 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  6 02:56:08 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2025625938' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  6 02:56:09 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:56:09 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:56:09 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:56:09.159 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:56:09 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e400 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:56:09 np0005548731 ovn_controller[133927]: 2025-12-06T07:56:09Z|00116|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:31:62:02 10.100.0.4
Dec  6 02:56:09 np0005548731 ovn_controller[133927]: 2025-12-06T07:56:09Z|00117|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:31:62:02 10.100.0.4
Dec  6 02:56:09 np0005548731 nova_compute[232433]: 2025-12-06 07:56:09.631 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:56:10 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:56:10 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:56:10 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:56:10.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:56:11 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:56:11 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:56:11 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:56:11.161 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:56:12 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:56:12 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:56:12 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:56:12.378 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:56:12 np0005548731 nova_compute[232433]: 2025-12-06 07:56:12.558 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:56:13 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:56:13 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:56:13 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:56:13.163 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:56:14 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e400 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:56:14 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:56:14 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:56:14 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:56:14.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:56:14 np0005548731 nova_compute[232433]: 2025-12-06 07:56:14.662 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:56:15 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:56:15 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:56:15 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:56:15.165 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:56:16 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:56:16 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:56:16 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:56:16.384 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:56:17 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:56:17 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:56:17 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:56:17.167 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:56:17 np0005548731 nova_compute[232433]: 2025-12-06 07:56:17.560 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:56:18 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:56:18 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 02:56:18 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:56:18.387 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 02:56:18 np0005548731 nova_compute[232433]: 2025-12-06 07:56:18.531 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:56:19 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:56:19 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:56:19 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:56:19.169 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:56:19 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e400 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:56:19 np0005548731 nova_compute[232433]: 2025-12-06 07:56:19.697 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:56:20 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:56:20 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:56:20 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:56:20.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:56:21 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:56:21 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:56:21 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:56:21.172 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:56:22 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:56:22 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:56:22 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:56:22.393 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:56:22 np0005548731 nova_compute[232433]: 2025-12-06 07:56:22.561 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:56:22 np0005548731 nova_compute[232433]: 2025-12-06 07:56:22.599 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:56:23 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:56:23 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:56:23 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:56:23.174 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:56:24 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e400 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:56:24 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:56:24 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:56:24 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:56:24.397 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:56:24 np0005548731 nova_compute[232433]: 2025-12-06 07:56:24.740 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:56:25 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:56:25 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 02:56:25 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:56:25.176 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 02:56:26 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:56:26 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:56:26 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:56:26.400 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:56:27 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:56:27 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:56:27 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:56:27.178 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:56:27 np0005548731 nova_compute[232433]: 2025-12-06 07:56:27.564 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:56:27 np0005548731 nova_compute[232433]: 2025-12-06 07:56:27.624 232437 DEBUG oslo_concurrency.lockutils [None req-972e3dc1-c129-455f-bd35-30c5a5c73801 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] Acquiring lock "21c591a0-5eed-4aa8-a68b-59a616b16e2b" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:56:27 np0005548731 nova_compute[232433]: 2025-12-06 07:56:27.625 232437 DEBUG oslo_concurrency.lockutils [None req-972e3dc1-c129-455f-bd35-30c5a5c73801 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] Lock "21c591a0-5eed-4aa8-a68b-59a616b16e2b" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:56:27 np0005548731 nova_compute[232433]: 2025-12-06 07:56:27.643 232437 DEBUG nova.objects.instance [None req-972e3dc1-c129-455f-bd35-30c5a5c73801 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] Lazy-loading 'flavor' on Instance uuid 21c591a0-5eed-4aa8-a68b-59a616b16e2b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:56:27 np0005548731 nova_compute[232433]: 2025-12-06 07:56:27.678 232437 DEBUG oslo_concurrency.lockutils [None req-972e3dc1-c129-455f-bd35-30c5a5c73801 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] Lock "21c591a0-5eed-4aa8-a68b-59a616b16e2b" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.053s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:56:27 np0005548731 nova_compute[232433]: 2025-12-06 07:56:27.838 232437 DEBUG oslo_concurrency.lockutils [None req-972e3dc1-c129-455f-bd35-30c5a5c73801 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] Acquiring lock "21c591a0-5eed-4aa8-a68b-59a616b16e2b" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:56:27 np0005548731 nova_compute[232433]: 2025-12-06 07:56:27.839 232437 DEBUG oslo_concurrency.lockutils [None req-972e3dc1-c129-455f-bd35-30c5a5c73801 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] Lock "21c591a0-5eed-4aa8-a68b-59a616b16e2b" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:56:27 np0005548731 nova_compute[232433]: 2025-12-06 07:56:27.839 232437 INFO nova.compute.manager [None req-972e3dc1-c129-455f-bd35-30c5a5c73801 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] [instance: 21c591a0-5eed-4aa8-a68b-59a616b16e2b] Attaching volume c7a5e27c-c7d8-4759-87be-61891912cec5 to /dev/vdb#033[00m
Dec  6 02:56:28 np0005548731 nova_compute[232433]: 2025-12-06 07:56:28.021 232437 DEBUG os_brick.utils [None req-972e3dc1-c129-455f-bd35-30c5a5c73801 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.102', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-2.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Dec  6 02:56:28 np0005548731 nova_compute[232433]: 2025-12-06 07:56:28.022 237736 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:56:28 np0005548731 nova_compute[232433]: 2025-12-06 07:56:28.034 237736 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.012s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:56:28 np0005548731 nova_compute[232433]: 2025-12-06 07:56:28.034 237736 DEBUG oslo.privsep.daemon [-] privsep: reply[7da5e810-485e-4772-9c9c-25ec7b944412]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:56:28 np0005548731 nova_compute[232433]: 2025-12-06 07:56:28.035 237736 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:56:28 np0005548731 nova_compute[232433]: 2025-12-06 07:56:28.043 237736 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.007s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:56:28 np0005548731 nova_compute[232433]: 2025-12-06 07:56:28.043 237736 DEBUG oslo.privsep.daemon [-] privsep: reply[f9fc61a2-98aa-4124-bdb0-ede4226e0fa4]: (4, ('InitiatorName=iqn.1994-05.com.redhat:63778d5959f0', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:56:28 np0005548731 nova_compute[232433]: 2025-12-06 07:56:28.045 237736 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:56:28 np0005548731 nova_compute[232433]: 2025-12-06 07:56:28.053 237736 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:56:28 np0005548731 nova_compute[232433]: 2025-12-06 07:56:28.054 237736 DEBUG oslo.privsep.daemon [-] privsep: reply[14db0c3c-2562-4114-a321-4ee5b236191a]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:56:28 np0005548731 nova_compute[232433]: 2025-12-06 07:56:28.055 237736 DEBUG oslo.privsep.daemon [-] privsep: reply[77ef6dcb-23af-4879-b608-edbef6318a8a]: (4, 'ec2492e2-e0d1-43d3-b74f-744a8272a0e6') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:56:28 np0005548731 nova_compute[232433]: 2025-12-06 07:56:28.055 232437 DEBUG oslo_concurrency.processutils [None req-972e3dc1-c129-455f-bd35-30c5a5c73801 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:56:28 np0005548731 nova_compute[232433]: 2025-12-06 07:56:28.087 232437 DEBUG oslo_concurrency.processutils [None req-972e3dc1-c129-455f-bd35-30c5a5c73801 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] CMD "nvme version" returned: 0 in 0.032s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:56:28 np0005548731 nova_compute[232433]: 2025-12-06 07:56:28.089 232437 DEBUG os_brick.initiator.connectors.lightos [None req-972e3dc1-c129-455f-bd35-30c5a5c73801 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Dec  6 02:56:28 np0005548731 nova_compute[232433]: 2025-12-06 07:56:28.090 232437 DEBUG os_brick.initiator.connectors.lightos [None req-972e3dc1-c129-455f-bd35-30c5a5c73801 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Dec  6 02:56:28 np0005548731 nova_compute[232433]: 2025-12-06 07:56:28.090 232437 DEBUG os_brick.initiator.connectors.lightos [None req-972e3dc1-c129-455f-bd35-30c5a5c73801 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:bf3e0a14-a5f8-4123-aa26-e7cad37b879a dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Dec  6 02:56:28 np0005548731 nova_compute[232433]: 2025-12-06 07:56:28.090 232437 DEBUG os_brick.utils [None req-972e3dc1-c129-455f-bd35-30c5a5c73801 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] <== get_connector_properties: return (68ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.102', 'host': 'compute-2.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:63778d5959f0', 'do_local_attach': False, 'nvme_hostid': 'bf3e0a14-a5f8-4123-aa26-e7cad37b879a', 'system uuid': 'ec2492e2-e0d1-43d3-b74f-744a8272a0e6', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:bf3e0a14-a5f8-4123-aa26-e7cad37b879a', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Dec  6 02:56:28 np0005548731 nova_compute[232433]: 2025-12-06 07:56:28.090 232437 DEBUG nova.virt.block_device [None req-972e3dc1-c129-455f-bd35-30c5a5c73801 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] [instance: 21c591a0-5eed-4aa8-a68b-59a616b16e2b] Updating existing volume attachment record: ffb6853f-47ee-4734-855e-0bdec24290a2 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Dec  6 02:56:28 np0005548731 nova_compute[232433]: 2025-12-06 07:56:28.342 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:56:28 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:56:28 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:56:28 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:56:28.402 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:56:28 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  6 02:56:28 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3374446509' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  6 02:56:29 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:56:29 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:56:29 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:56:29.179 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:56:29 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e400 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:56:29 np0005548731 nova_compute[232433]: 2025-12-06 07:56:29.718 232437 DEBUG nova.objects.instance [None req-972e3dc1-c129-455f-bd35-30c5a5c73801 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] Lazy-loading 'flavor' on Instance uuid 21c591a0-5eed-4aa8-a68b-59a616b16e2b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:56:29 np0005548731 nova_compute[232433]: 2025-12-06 07:56:29.742 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:56:29 np0005548731 nova_compute[232433]: 2025-12-06 07:56:29.747 232437 DEBUG nova.virt.libvirt.driver [None req-972e3dc1-c129-455f-bd35-30c5a5c73801 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] [instance: 21c591a0-5eed-4aa8-a68b-59a616b16e2b] Attempting to attach volume c7a5e27c-c7d8-4759-87be-61891912cec5 with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168#033[00m
Dec  6 02:56:29 np0005548731 nova_compute[232433]: 2025-12-06 07:56:29.751 232437 DEBUG nova.virt.libvirt.guest [None req-972e3dc1-c129-455f-bd35-30c5a5c73801 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] attach device xml: <disk type="network" device="disk">
Dec  6 02:56:29 np0005548731 nova_compute[232433]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Dec  6 02:56:29 np0005548731 nova_compute[232433]:  <source protocol="rbd" name="volumes/volume-c7a5e27c-c7d8-4759-87be-61891912cec5">
Dec  6 02:56:29 np0005548731 nova_compute[232433]:    <host name="192.168.122.100" port="6789"/>
Dec  6 02:56:29 np0005548731 nova_compute[232433]:    <host name="192.168.122.102" port="6789"/>
Dec  6 02:56:29 np0005548731 nova_compute[232433]:    <host name="192.168.122.101" port="6789"/>
Dec  6 02:56:29 np0005548731 nova_compute[232433]:  </source>
Dec  6 02:56:29 np0005548731 nova_compute[232433]:  <auth username="openstack">
Dec  6 02:56:29 np0005548731 nova_compute[232433]:    <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec  6 02:56:29 np0005548731 nova_compute[232433]:  </auth>
Dec  6 02:56:29 np0005548731 nova_compute[232433]:  <target dev="vdb" bus="virtio"/>
Dec  6 02:56:29 np0005548731 nova_compute[232433]:  <serial>c7a5e27c-c7d8-4759-87be-61891912cec5</serial>
Dec  6 02:56:29 np0005548731 nova_compute[232433]: </disk>
Dec  6 02:56:29 np0005548731 nova_compute[232433]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339#033[00m
Dec  6 02:56:29 np0005548731 nova_compute[232433]: 2025-12-06 07:56:29.867 232437 DEBUG nova.virt.libvirt.driver [None req-972e3dc1-c129-455f-bd35-30c5a5c73801 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  6 02:56:29 np0005548731 nova_compute[232433]: 2025-12-06 07:56:29.868 232437 DEBUG nova.virt.libvirt.driver [None req-972e3dc1-c129-455f-bd35-30c5a5c73801 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  6 02:56:29 np0005548731 nova_compute[232433]: 2025-12-06 07:56:29.868 232437 DEBUG nova.virt.libvirt.driver [None req-972e3dc1-c129-455f-bd35-30c5a5c73801 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  6 02:56:29 np0005548731 nova_compute[232433]: 2025-12-06 07:56:29.868 232437 DEBUG nova.virt.libvirt.driver [None req-972e3dc1-c129-455f-bd35-30c5a5c73801 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] No VIF found with MAC fa:16:3e:31:62:02, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec  6 02:56:30 np0005548731 nova_compute[232433]: 2025-12-06 07:56:30.049 232437 DEBUG oslo_concurrency.lockutils [None req-972e3dc1-c129-455f-bd35-30c5a5c73801 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] Lock "21c591a0-5eed-4aa8-a68b-59a616b16e2b" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 2.210s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:56:30 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:56:30 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:56:30 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:56:30.405 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:56:30 np0005548731 nova_compute[232433]: 2025-12-06 07:56:30.633 232437 DEBUG oslo_concurrency.lockutils [None req-f8433f77-ef98-41d3-9621-37ceda7a088c e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] Acquiring lock "21c591a0-5eed-4aa8-a68b-59a616b16e2b" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:56:30 np0005548731 nova_compute[232433]: 2025-12-06 07:56:30.634 232437 DEBUG oslo_concurrency.lockutils [None req-f8433f77-ef98-41d3-9621-37ceda7a088c e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] Lock "21c591a0-5eed-4aa8-a68b-59a616b16e2b" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:56:30 np0005548731 nova_compute[232433]: 2025-12-06 07:56:30.656 232437 DEBUG nova.objects.instance [None req-f8433f77-ef98-41d3-9621-37ceda7a088c e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] Lazy-loading 'flavor' on Instance uuid 21c591a0-5eed-4aa8-a68b-59a616b16e2b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:56:30 np0005548731 nova_compute[232433]: 2025-12-06 07:56:30.693 232437 DEBUG oslo_concurrency.lockutils [None req-f8433f77-ef98-41d3-9621-37ceda7a088c e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] Lock "21c591a0-5eed-4aa8-a68b-59a616b16e2b" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.060s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:56:30 np0005548731 nova_compute[232433]: 2025-12-06 07:56:30.910 232437 DEBUG oslo_concurrency.lockutils [None req-f8433f77-ef98-41d3-9621-37ceda7a088c e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] Acquiring lock "21c591a0-5eed-4aa8-a68b-59a616b16e2b" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:56:30 np0005548731 nova_compute[232433]: 2025-12-06 07:56:30.911 232437 DEBUG oslo_concurrency.lockutils [None req-f8433f77-ef98-41d3-9621-37ceda7a088c e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] Lock "21c591a0-5eed-4aa8-a68b-59a616b16e2b" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:56:30 np0005548731 nova_compute[232433]: 2025-12-06 07:56:30.911 232437 INFO nova.compute.manager [None req-f8433f77-ef98-41d3-9621-37ceda7a088c e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] [instance: 21c591a0-5eed-4aa8-a68b-59a616b16e2b] Attaching volume cbaeb237-1f40-498b-93c2-2ed85aff86b7 to /dev/vdc#033[00m
Dec  6 02:56:31 np0005548731 nova_compute[232433]: 2025-12-06 07:56:31.053 232437 DEBUG os_brick.utils [None req-f8433f77-ef98-41d3-9621-37ceda7a088c e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.102', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-2.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Dec  6 02:56:31 np0005548731 nova_compute[232433]: 2025-12-06 07:56:31.054 237736 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:56:31 np0005548731 nova_compute[232433]: 2025-12-06 07:56:31.064 237736 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:56:31 np0005548731 nova_compute[232433]: 2025-12-06 07:56:31.064 237736 DEBUG oslo.privsep.daemon [-] privsep: reply[dfd27707-58f7-4ffd-ab78-af8316f5ab61]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:56:31 np0005548731 nova_compute[232433]: 2025-12-06 07:56:31.066 237736 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:56:31 np0005548731 nova_compute[232433]: 2025-12-06 07:56:31.074 237736 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:56:31 np0005548731 nova_compute[232433]: 2025-12-06 07:56:31.075 237736 DEBUG oslo.privsep.daemon [-] privsep: reply[cf29b0f3-3440-4a97-be99-c7e185a30048]: (4, ('InitiatorName=iqn.1994-05.com.redhat:63778d5959f0', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:56:31 np0005548731 nova_compute[232433]: 2025-12-06 07:56:31.076 237736 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:56:31 np0005548731 nova_compute[232433]: 2025-12-06 07:56:31.084 237736 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:56:31 np0005548731 nova_compute[232433]: 2025-12-06 07:56:31.084 237736 DEBUG oslo.privsep.daemon [-] privsep: reply[00ea20c9-7511-47db-b4f8-faa743de1d71]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:56:31 np0005548731 nova_compute[232433]: 2025-12-06 07:56:31.085 237736 DEBUG oslo.privsep.daemon [-] privsep: reply[1be6f179-e919-4493-a1fb-65fbd3fed022]: (4, 'ec2492e2-e0d1-43d3-b74f-744a8272a0e6') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:56:31 np0005548731 nova_compute[232433]: 2025-12-06 07:56:31.086 232437 DEBUG oslo_concurrency.processutils [None req-f8433f77-ef98-41d3-9621-37ceda7a088c e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:56:31 np0005548731 nova_compute[232433]: 2025-12-06 07:56:31.110 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:56:31 np0005548731 nova_compute[232433]: 2025-12-06 07:56:31.110 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  6 02:56:31 np0005548731 nova_compute[232433]: 2025-12-06 07:56:31.111 232437 DEBUG oslo_concurrency.processutils [None req-f8433f77-ef98-41d3-9621-37ceda7a088c e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] CMD "nvme version" returned: 0 in 0.025s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:56:31 np0005548731 nova_compute[232433]: 2025-12-06 07:56:31.113 232437 DEBUG os_brick.initiator.connectors.lightos [None req-f8433f77-ef98-41d3-9621-37ceda7a088c e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Dec  6 02:56:31 np0005548731 nova_compute[232433]: 2025-12-06 07:56:31.113 232437 DEBUG os_brick.initiator.connectors.lightos [None req-f8433f77-ef98-41d3-9621-37ceda7a088c e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Dec  6 02:56:31 np0005548731 nova_compute[232433]: 2025-12-06 07:56:31.113 232437 DEBUG os_brick.initiator.connectors.lightos [None req-f8433f77-ef98-41d3-9621-37ceda7a088c e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:bf3e0a14-a5f8-4123-aa26-e7cad37b879a dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Dec  6 02:56:31 np0005548731 nova_compute[232433]: 2025-12-06 07:56:31.113 232437 DEBUG os_brick.utils [None req-f8433f77-ef98-41d3-9621-37ceda7a088c e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] <== get_connector_properties: return (59ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.102', 'host': 'compute-2.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:63778d5959f0', 'do_local_attach': False, 'nvme_hostid': 'bf3e0a14-a5f8-4123-aa26-e7cad37b879a', 'system uuid': 'ec2492e2-e0d1-43d3-b74f-744a8272a0e6', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:bf3e0a14-a5f8-4123-aa26-e7cad37b879a', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Dec  6 02:56:31 np0005548731 nova_compute[232433]: 2025-12-06 07:56:31.114 232437 DEBUG nova.virt.block_device [None req-f8433f77-ef98-41d3-9621-37ceda7a088c e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] [instance: 21c591a0-5eed-4aa8-a68b-59a616b16e2b] Updating existing volume attachment record: bd946cba-8def-44b2-a537-58e2af18ae02 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Dec  6 02:56:31 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:56:31 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:56:31 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:56:31.182 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:56:31 np0005548731 nova_compute[232433]: 2025-12-06 07:56:31.296 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:56:31 np0005548731 podman[315884]: 2025-12-06 07:56:31.887526026 +0000 UTC m=+0.048088267 container health_status 26c2ce9bdb83c6db5ccad7f5b2a7cb4e9354ab0235ad2f942ea42c8feda2142b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Dec  6 02:56:31 np0005548731 podman[315886]: 2025-12-06 07:56:31.901302813 +0000 UTC m=+0.058792769 container health_status ae4fd08cdec31d6ae2a6b91ffff7deb6ca5a8375cb52ee4b685cc6f25796824e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=multipathd, org.label-schema.license=GPLv2)
Dec  6 02:56:31 np0005548731 podman[315885]: 2025-12-06 07:56:31.951333958 +0000 UTC m=+0.109851219 container health_status a118c3becf56cfbcae208065771ebf10a65162bb7b96389971293e534823fb78 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, container_name=ovn_controller, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Dec  6 02:56:32 np0005548731 nova_compute[232433]: 2025-12-06 07:56:32.099 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:56:32 np0005548731 nova_compute[232433]: 2025-12-06 07:56:32.103 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:56:32 np0005548731 nova_compute[232433]: 2025-12-06 07:56:32.103 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  6 02:56:32 np0005548731 nova_compute[232433]: 2025-12-06 07:56:32.103 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  6 02:56:32 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  6 02:56:32 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2624542154' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  6 02:56:32 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:56:32 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:56:32 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:56:32.408 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:56:32 np0005548731 nova_compute[232433]: 2025-12-06 07:56:32.565 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:56:32 np0005548731 nova_compute[232433]: 2025-12-06 07:56:32.954 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Acquiring lock "refresh_cache-21c591a0-5eed-4aa8-a68b-59a616b16e2b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 02:56:32 np0005548731 nova_compute[232433]: 2025-12-06 07:56:32.955 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Acquired lock "refresh_cache-21c591a0-5eed-4aa8-a68b-59a616b16e2b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 02:56:32 np0005548731 nova_compute[232433]: 2025-12-06 07:56:32.955 232437 DEBUG nova.network.neutron [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] [instance: 21c591a0-5eed-4aa8-a68b-59a616b16e2b] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Dec  6 02:56:32 np0005548731 nova_compute[232433]: 2025-12-06 07:56:32.955 232437 DEBUG nova.objects.instance [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lazy-loading 'info_cache' on Instance uuid 21c591a0-5eed-4aa8-a68b-59a616b16e2b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:56:33 np0005548731 nova_compute[232433]: 2025-12-06 07:56:33.011 232437 DEBUG nova.objects.instance [None req-f8433f77-ef98-41d3-9621-37ceda7a088c e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] Lazy-loading 'flavor' on Instance uuid 21c591a0-5eed-4aa8-a68b-59a616b16e2b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:56:33 np0005548731 nova_compute[232433]: 2025-12-06 07:56:33.040 232437 DEBUG nova.virt.libvirt.driver [None req-f8433f77-ef98-41d3-9621-37ceda7a088c e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] [instance: 21c591a0-5eed-4aa8-a68b-59a616b16e2b] Attempting to attach volume cbaeb237-1f40-498b-93c2-2ed85aff86b7 with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168#033[00m
Dec  6 02:56:33 np0005548731 nova_compute[232433]: 2025-12-06 07:56:33.042 232437 DEBUG nova.virt.libvirt.guest [None req-f8433f77-ef98-41d3-9621-37ceda7a088c e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] attach device xml: <disk type="network" device="disk">
Dec  6 02:56:33 np0005548731 nova_compute[232433]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Dec  6 02:56:33 np0005548731 nova_compute[232433]:  <source protocol="rbd" name="volumes/volume-cbaeb237-1f40-498b-93c2-2ed85aff86b7">
Dec  6 02:56:33 np0005548731 nova_compute[232433]:    <host name="192.168.122.100" port="6789"/>
Dec  6 02:56:33 np0005548731 nova_compute[232433]:    <host name="192.168.122.102" port="6789"/>
Dec  6 02:56:33 np0005548731 nova_compute[232433]:    <host name="192.168.122.101" port="6789"/>
Dec  6 02:56:33 np0005548731 nova_compute[232433]:  </source>
Dec  6 02:56:33 np0005548731 nova_compute[232433]:  <auth username="openstack">
Dec  6 02:56:33 np0005548731 nova_compute[232433]:    <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec  6 02:56:33 np0005548731 nova_compute[232433]:  </auth>
Dec  6 02:56:33 np0005548731 nova_compute[232433]:  <target dev="vdc" bus="virtio"/>
Dec  6 02:56:33 np0005548731 nova_compute[232433]:  <serial>cbaeb237-1f40-498b-93c2-2ed85aff86b7</serial>
Dec  6 02:56:33 np0005548731 nova_compute[232433]: </disk>
Dec  6 02:56:33 np0005548731 nova_compute[232433]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339#033[00m
Dec  6 02:56:33 np0005548731 nova_compute[232433]: 2025-12-06 07:56:33.149 232437 DEBUG nova.virt.libvirt.driver [None req-f8433f77-ef98-41d3-9621-37ceda7a088c e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  6 02:56:33 np0005548731 nova_compute[232433]: 2025-12-06 07:56:33.150 232437 DEBUG nova.virt.libvirt.driver [None req-f8433f77-ef98-41d3-9621-37ceda7a088c e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  6 02:56:33 np0005548731 nova_compute[232433]: 2025-12-06 07:56:33.150 232437 DEBUG nova.virt.libvirt.driver [None req-f8433f77-ef98-41d3-9621-37ceda7a088c e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  6 02:56:33 np0005548731 nova_compute[232433]: 2025-12-06 07:56:33.150 232437 DEBUG nova.virt.libvirt.driver [None req-f8433f77-ef98-41d3-9621-37ceda7a088c e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] No BDM found with device name vdc, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  6 02:56:33 np0005548731 nova_compute[232433]: 2025-12-06 07:56:33.150 232437 DEBUG nova.virt.libvirt.driver [None req-f8433f77-ef98-41d3-9621-37ceda7a088c e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] No VIF found with MAC fa:16:3e:31:62:02, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec  6 02:56:33 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:56:33 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:56:33 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:56:33.184 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:56:33 np0005548731 nova_compute[232433]: 2025-12-06 07:56:33.330 232437 DEBUG oslo_concurrency.lockutils [None req-f8433f77-ef98-41d3-9621-37ceda7a088c e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] Lock "21c591a0-5eed-4aa8-a68b-59a616b16e2b" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 2.419s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:56:34 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e400 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:56:34 np0005548731 nova_compute[232433]: 2025-12-06 07:56:34.315 232437 DEBUG nova.network.neutron [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] [instance: 21c591a0-5eed-4aa8-a68b-59a616b16e2b] Updating instance_info_cache with network_info: [{"id": "c76db338-0396-40a2-82b3-0c720d28d2bd", "address": "fa:16:3e:31:62:02", "network": {"id": "a3764201-4b86-4407-84d2-684bd05a44b3", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-2060653314-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6164fee998c94b71a37886fe42b4c56c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc76db338-03", "ovs_interfaceid": "c76db338-0396-40a2-82b3-0c720d28d2bd", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 02:56:34 np0005548731 nova_compute[232433]: 2025-12-06 07:56:34.335 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Releasing lock "refresh_cache-21c591a0-5eed-4aa8-a68b-59a616b16e2b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 02:56:34 np0005548731 nova_compute[232433]: 2025-12-06 07:56:34.335 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] [instance: 21c591a0-5eed-4aa8-a68b-59a616b16e2b] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Dec  6 02:56:34 np0005548731 nova_compute[232433]: 2025-12-06 07:56:34.336 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:56:34 np0005548731 nova_compute[232433]: 2025-12-06 07:56:34.336 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:56:34 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:56:34 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:56:34 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:56:34.411 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:56:34 np0005548731 nova_compute[232433]: 2025-12-06 07:56:34.770 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:56:35 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:56:35 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:56:35 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:56:35.185 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:56:36 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:56:36 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:56:36 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:56:36.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:56:37 np0005548731 nova_compute[232433]: 2025-12-06 07:56:37.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:56:37 np0005548731 nova_compute[232433]: 2025-12-06 07:56:37.132 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:56:37 np0005548731 nova_compute[232433]: 2025-12-06 07:56:37.132 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:56:37 np0005548731 nova_compute[232433]: 2025-12-06 07:56:37.132 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:56:37 np0005548731 nova_compute[232433]: 2025-12-06 07:56:37.132 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  6 02:56:37 np0005548731 nova_compute[232433]: 2025-12-06 07:56:37.133 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:56:37 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:56:37 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 02:56:37 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:56:37.187 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 02:56:37 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 02:56:37 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3087927977' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 02:56:37 np0005548731 nova_compute[232433]: 2025-12-06 07:56:37.534 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.401s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:56:37 np0005548731 nova_compute[232433]: 2025-12-06 07:56:37.567 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:56:37 np0005548731 nova_compute[232433]: 2025-12-06 07:56:37.624 232437 DEBUG nova.virt.libvirt.driver [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] skipping disk for instance-000000b0 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  6 02:56:37 np0005548731 nova_compute[232433]: 2025-12-06 07:56:37.625 232437 DEBUG nova.virt.libvirt.driver [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] skipping disk for instance-000000b0 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  6 02:56:37 np0005548731 nova_compute[232433]: 2025-12-06 07:56:37.625 232437 DEBUG nova.virt.libvirt.driver [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] skipping disk for instance-000000b0 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  6 02:56:37 np0005548731 nova_compute[232433]: 2025-12-06 07:56:37.625 232437 DEBUG nova.virt.libvirt.driver [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] skipping disk for instance-000000b0 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  6 02:56:37 np0005548731 nova_compute[232433]: 2025-12-06 07:56:37.787 232437 WARNING nova.virt.libvirt.driver [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  6 02:56:37 np0005548731 nova_compute[232433]: 2025-12-06 07:56:37.789 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4048MB free_disk=20.942138671875GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, 
"label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  6 02:56:37 np0005548731 nova_compute[232433]: 2025-12-06 07:56:37.789 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:56:37 np0005548731 nova_compute[232433]: 2025-12-06 07:56:37.789 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:56:37 np0005548731 nova_compute[232433]: 2025-12-06 07:56:37.860 232437 DEBUG nova.compute.manager [req-dcc6ab62-5dba-4934-b992-ff391a70ebb3 req-f8f09cb1-d2f0-470c-88c6-f3673a2866a8 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 21c591a0-5eed-4aa8-a68b-59a616b16e2b] Received event network-changed-c76db338-0396-40a2-82b3-0c720d28d2bd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:56:37 np0005548731 nova_compute[232433]: 2025-12-06 07:56:37.861 232437 DEBUG nova.compute.manager [req-dcc6ab62-5dba-4934-b992-ff391a70ebb3 req-f8f09cb1-d2f0-470c-88c6-f3673a2866a8 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 21c591a0-5eed-4aa8-a68b-59a616b16e2b] Refreshing instance network info cache due to event network-changed-c76db338-0396-40a2-82b3-0c720d28d2bd. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  6 02:56:37 np0005548731 nova_compute[232433]: 2025-12-06 07:56:37.861 232437 DEBUG oslo_concurrency.lockutils [req-dcc6ab62-5dba-4934-b992-ff391a70ebb3 req-f8f09cb1-d2f0-470c-88c6-f3673a2866a8 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-21c591a0-5eed-4aa8-a68b-59a616b16e2b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 02:56:37 np0005548731 nova_compute[232433]: 2025-12-06 07:56:37.862 232437 DEBUG oslo_concurrency.lockutils [req-dcc6ab62-5dba-4934-b992-ff391a70ebb3 req-f8f09cb1-d2f0-470c-88c6-f3673a2866a8 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-21c591a0-5eed-4aa8-a68b-59a616b16e2b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 02:56:37 np0005548731 nova_compute[232433]: 2025-12-06 07:56:37.862 232437 DEBUG nova.network.neutron [req-dcc6ab62-5dba-4934-b992-ff391a70ebb3 req-f8f09cb1-d2f0-470c-88c6-f3673a2866a8 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 21c591a0-5eed-4aa8-a68b-59a616b16e2b] Refreshing network info cache for port c76db338-0396-40a2-82b3-0c720d28d2bd _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  6 02:56:37 np0005548731 nova_compute[232433]: 2025-12-06 07:56:37.864 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Instance 21c591a0-5eed-4aa8-a68b-59a616b16e2b actively managed on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  6 02:56:37 np0005548731 nova_compute[232433]: 2025-12-06 07:56:37.864 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  6 02:56:37 np0005548731 nova_compute[232433]: 2025-12-06 07:56:37.865 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  6 02:56:37 np0005548731 nova_compute[232433]: 2025-12-06 07:56:37.906 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:56:38 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 02:56:38 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/4063988645' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 02:56:38 np0005548731 nova_compute[232433]: 2025-12-06 07:56:38.331 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.425s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:56:38 np0005548731 nova_compute[232433]: 2025-12-06 07:56:38.336 232437 DEBUG nova.compute.provider_tree [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Inventory has not changed in ProviderTree for provider: 6d00757a-082f-486d-ae84-869a2ba2e6e7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  6 02:56:38 np0005548731 nova_compute[232433]: 2025-12-06 07:56:38.352 232437 DEBUG nova.scheduler.client.report [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Inventory has not changed for provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  6 02:56:38 np0005548731 nova_compute[232433]: 2025-12-06 07:56:38.385 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  6 02:56:38 np0005548731 nova_compute[232433]: 2025-12-06 07:56:38.385 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.596s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:56:38 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:56:38 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:56:38 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:56:38.417 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:56:39 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e400 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:56:39 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:56:39 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:56:39 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:56:39.189 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:56:39 np0005548731 nova_compute[232433]: 2025-12-06 07:56:39.341 232437 DEBUG nova.compute.manager [req-65a59b94-06be-4e89-8d5b-d0a1f2508715 req-9e01ab5e-0f45-4cff-999e-ef470f6039f6 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 21c591a0-5eed-4aa8-a68b-59a616b16e2b] Received event network-changed-c76db338-0396-40a2-82b3-0c720d28d2bd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:56:39 np0005548731 nova_compute[232433]: 2025-12-06 07:56:39.342 232437 DEBUG nova.compute.manager [req-65a59b94-06be-4e89-8d5b-d0a1f2508715 req-9e01ab5e-0f45-4cff-999e-ef470f6039f6 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 21c591a0-5eed-4aa8-a68b-59a616b16e2b] Refreshing instance network info cache due to event network-changed-c76db338-0396-40a2-82b3-0c720d28d2bd. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  6 02:56:39 np0005548731 nova_compute[232433]: 2025-12-06 07:56:39.342 232437 DEBUG oslo_concurrency.lockutils [req-65a59b94-06be-4e89-8d5b-d0a1f2508715 req-9e01ab5e-0f45-4cff-999e-ef470f6039f6 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-21c591a0-5eed-4aa8-a68b-59a616b16e2b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 02:56:39 np0005548731 nova_compute[232433]: 2025-12-06 07:56:39.511 232437 DEBUG nova.network.neutron [req-dcc6ab62-5dba-4934-b992-ff391a70ebb3 req-f8f09cb1-d2f0-470c-88c6-f3673a2866a8 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 21c591a0-5eed-4aa8-a68b-59a616b16e2b] Updated VIF entry in instance network info cache for port c76db338-0396-40a2-82b3-0c720d28d2bd. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  6 02:56:39 np0005548731 nova_compute[232433]: 2025-12-06 07:56:39.512 232437 DEBUG nova.network.neutron [req-dcc6ab62-5dba-4934-b992-ff391a70ebb3 req-f8f09cb1-d2f0-470c-88c6-f3673a2866a8 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 21c591a0-5eed-4aa8-a68b-59a616b16e2b] Updating instance_info_cache with network_info: [{"id": "c76db338-0396-40a2-82b3-0c720d28d2bd", "address": "fa:16:3e:31:62:02", "network": {"id": "a3764201-4b86-4407-84d2-684bd05a44b3", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-2060653314-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.190", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6164fee998c94b71a37886fe42b4c56c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc76db338-03", "ovs_interfaceid": "c76db338-0396-40a2-82b3-0c720d28d2bd", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 02:56:39 np0005548731 nova_compute[232433]: 2025-12-06 07:56:39.540 232437 DEBUG oslo_concurrency.lockutils [req-dcc6ab62-5dba-4934-b992-ff391a70ebb3 req-f8f09cb1-d2f0-470c-88c6-f3673a2866a8 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-21c591a0-5eed-4aa8-a68b-59a616b16e2b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 02:56:39 np0005548731 nova_compute[232433]: 2025-12-06 07:56:39.541 232437 DEBUG oslo_concurrency.lockutils [req-65a59b94-06be-4e89-8d5b-d0a1f2508715 req-9e01ab5e-0f45-4cff-999e-ef470f6039f6 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-21c591a0-5eed-4aa8-a68b-59a616b16e2b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 02:56:39 np0005548731 nova_compute[232433]: 2025-12-06 07:56:39.541 232437 DEBUG nova.network.neutron [req-65a59b94-06be-4e89-8d5b-d0a1f2508715 req-9e01ab5e-0f45-4cff-999e-ef470f6039f6 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 21c591a0-5eed-4aa8-a68b-59a616b16e2b] Refreshing network info cache for port c76db338-0396-40a2-82b3-0c720d28d2bd _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  6 02:56:39 np0005548731 nova_compute[232433]: 2025-12-06 07:56:39.794 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:56:40 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:56:40 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:56:40 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:56:40.419 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:56:40 np0005548731 nova_compute[232433]: 2025-12-06 07:56:40.675 232437 DEBUG oslo_concurrency.lockutils [None req-947b8352-420e-4bb1-a598-5b86900f3d69 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] Acquiring lock "9aa31d67-6e8e-4301-99db-832dd1fe00bc" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:56:40 np0005548731 nova_compute[232433]: 2025-12-06 07:56:40.676 232437 DEBUG oslo_concurrency.lockutils [None req-947b8352-420e-4bb1-a598-5b86900f3d69 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] Lock "9aa31d67-6e8e-4301-99db-832dd1fe00bc" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:56:40 np0005548731 nova_compute[232433]: 2025-12-06 07:56:40.714 232437 DEBUG nova.compute.manager [None req-947b8352-420e-4bb1-a598-5b86900f3d69 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] [instance: 9aa31d67-6e8e-4301-99db-832dd1fe00bc] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Dec  6 02:56:40 np0005548731 nova_compute[232433]: 2025-12-06 07:56:40.800 232437 DEBUG oslo_concurrency.lockutils [None req-947b8352-420e-4bb1-a598-5b86900f3d69 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:56:40 np0005548731 nova_compute[232433]: 2025-12-06 07:56:40.801 232437 DEBUG oslo_concurrency.lockutils [None req-947b8352-420e-4bb1-a598-5b86900f3d69 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:56:40 np0005548731 nova_compute[232433]: 2025-12-06 07:56:40.808 232437 DEBUG nova.virt.hardware [None req-947b8352-420e-4bb1-a598-5b86900f3d69 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Dec  6 02:56:40 np0005548731 nova_compute[232433]: 2025-12-06 07:56:40.808 232437 INFO nova.compute.claims [None req-947b8352-420e-4bb1-a598-5b86900f3d69 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] [instance: 9aa31d67-6e8e-4301-99db-832dd1fe00bc] Claim successful on node compute-2.ctlplane.example.com#033[00m
Dec  6 02:56:40 np0005548731 nova_compute[232433]: 2025-12-06 07:56:40.926 232437 DEBUG oslo_concurrency.processutils [None req-947b8352-420e-4bb1-a598-5b86900f3d69 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:56:41 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:56:41 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:56:41 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:56:41.190 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:56:41 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 02:56:41 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2467485622' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 02:56:41 np0005548731 nova_compute[232433]: 2025-12-06 07:56:41.349 232437 DEBUG oslo_concurrency.processutils [None req-947b8352-420e-4bb1-a598-5b86900f3d69 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.423s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:56:41 np0005548731 nova_compute[232433]: 2025-12-06 07:56:41.356 232437 DEBUG nova.compute.provider_tree [None req-947b8352-420e-4bb1-a598-5b86900f3d69 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] Inventory has not changed in ProviderTree for provider: 6d00757a-082f-486d-ae84-869a2ba2e6e7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  6 02:56:41 np0005548731 nova_compute[232433]: 2025-12-06 07:56:41.382 232437 DEBUG nova.scheduler.client.report [None req-947b8352-420e-4bb1-a598-5b86900f3d69 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] Inventory has not changed for provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  6 02:56:41 np0005548731 nova_compute[232433]: 2025-12-06 07:56:41.387 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:56:41 np0005548731 nova_compute[232433]: 2025-12-06 07:56:41.388 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:56:41 np0005548731 nova_compute[232433]: 2025-12-06 07:56:41.403 232437 DEBUG nova.network.neutron [req-65a59b94-06be-4e89-8d5b-d0a1f2508715 req-9e01ab5e-0f45-4cff-999e-ef470f6039f6 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 21c591a0-5eed-4aa8-a68b-59a616b16e2b] Updated VIF entry in instance network info cache for port c76db338-0396-40a2-82b3-0c720d28d2bd. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  6 02:56:41 np0005548731 nova_compute[232433]: 2025-12-06 07:56:41.404 232437 DEBUG nova.network.neutron [req-65a59b94-06be-4e89-8d5b-d0a1f2508715 req-9e01ab5e-0f45-4cff-999e-ef470f6039f6 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 21c591a0-5eed-4aa8-a68b-59a616b16e2b] Updating instance_info_cache with network_info: [{"id": "c76db338-0396-40a2-82b3-0c720d28d2bd", "address": "fa:16:3e:31:62:02", "network": {"id": "a3764201-4b86-4407-84d2-684bd05a44b3", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-2060653314-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.190", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6164fee998c94b71a37886fe42b4c56c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc76db338-03", "ovs_interfaceid": "c76db338-0396-40a2-82b3-0c720d28d2bd", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 02:56:41 np0005548731 nova_compute[232433]: 2025-12-06 07:56:41.414 232437 DEBUG oslo_concurrency.lockutils [None req-947b8352-420e-4bb1-a598-5b86900f3d69 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.613s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:56:41 np0005548731 nova_compute[232433]: 2025-12-06 07:56:41.415 232437 DEBUG nova.compute.manager [None req-947b8352-420e-4bb1-a598-5b86900f3d69 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] [instance: 9aa31d67-6e8e-4301-99db-832dd1fe00bc] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Dec  6 02:56:41 np0005548731 nova_compute[232433]: 2025-12-06 07:56:41.448 232437 DEBUG oslo_concurrency.lockutils [req-65a59b94-06be-4e89-8d5b-d0a1f2508715 req-9e01ab5e-0f45-4cff-999e-ef470f6039f6 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-21c591a0-5eed-4aa8-a68b-59a616b16e2b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 02:56:41 np0005548731 nova_compute[232433]: 2025-12-06 07:56:41.474 232437 DEBUG nova.compute.manager [req-d9f874e3-f2d8-4470-8d2c-e9b1be7ee839 req-856f8d86-b4f5-4041-a71e-a22bb0a1006c 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 21c591a0-5eed-4aa8-a68b-59a616b16e2b] Received event network-changed-c76db338-0396-40a2-82b3-0c720d28d2bd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:56:41 np0005548731 nova_compute[232433]: 2025-12-06 07:56:41.474 232437 DEBUG nova.compute.manager [req-d9f874e3-f2d8-4470-8d2c-e9b1be7ee839 req-856f8d86-b4f5-4041-a71e-a22bb0a1006c 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 21c591a0-5eed-4aa8-a68b-59a616b16e2b] Refreshing instance network info cache due to event network-changed-c76db338-0396-40a2-82b3-0c720d28d2bd. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  6 02:56:41 np0005548731 nova_compute[232433]: 2025-12-06 07:56:41.475 232437 DEBUG oslo_concurrency.lockutils [req-d9f874e3-f2d8-4470-8d2c-e9b1be7ee839 req-856f8d86-b4f5-4041-a71e-a22bb0a1006c 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-21c591a0-5eed-4aa8-a68b-59a616b16e2b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 02:56:41 np0005548731 nova_compute[232433]: 2025-12-06 07:56:41.475 232437 DEBUG oslo_concurrency.lockutils [req-d9f874e3-f2d8-4470-8d2c-e9b1be7ee839 req-856f8d86-b4f5-4041-a71e-a22bb0a1006c 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-21c591a0-5eed-4aa8-a68b-59a616b16e2b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 02:56:41 np0005548731 nova_compute[232433]: 2025-12-06 07:56:41.476 232437 DEBUG nova.network.neutron [req-d9f874e3-f2d8-4470-8d2c-e9b1be7ee839 req-856f8d86-b4f5-4041-a71e-a22bb0a1006c 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 21c591a0-5eed-4aa8-a68b-59a616b16e2b] Refreshing network info cache for port c76db338-0396-40a2-82b3-0c720d28d2bd _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  6 02:56:41 np0005548731 nova_compute[232433]: 2025-12-06 07:56:41.485 232437 DEBUG nova.compute.manager [None req-947b8352-420e-4bb1-a598-5b86900f3d69 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] [instance: 9aa31d67-6e8e-4301-99db-832dd1fe00bc] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Dec  6 02:56:41 np0005548731 nova_compute[232433]: 2025-12-06 07:56:41.486 232437 DEBUG nova.network.neutron [None req-947b8352-420e-4bb1-a598-5b86900f3d69 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] [instance: 9aa31d67-6e8e-4301-99db-832dd1fe00bc] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Dec  6 02:56:41 np0005548731 nova_compute[232433]: 2025-12-06 07:56:41.506 232437 INFO nova.virt.libvirt.driver [None req-947b8352-420e-4bb1-a598-5b86900f3d69 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] [instance: 9aa31d67-6e8e-4301-99db-832dd1fe00bc] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Dec  6 02:56:41 np0005548731 nova_compute[232433]: 2025-12-06 07:56:41.532 232437 DEBUG nova.compute.manager [None req-947b8352-420e-4bb1-a598-5b86900f3d69 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] [instance: 9aa31d67-6e8e-4301-99db-832dd1fe00bc] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Dec  6 02:56:41 np0005548731 nova_compute[232433]: 2025-12-06 07:56:41.628 232437 DEBUG nova.compute.manager [None req-947b8352-420e-4bb1-a598-5b86900f3d69 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] [instance: 9aa31d67-6e8e-4301-99db-832dd1fe00bc] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Dec  6 02:56:41 np0005548731 nova_compute[232433]: 2025-12-06 07:56:41.631 232437 DEBUG nova.virt.libvirt.driver [None req-947b8352-420e-4bb1-a598-5b86900f3d69 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] [instance: 9aa31d67-6e8e-4301-99db-832dd1fe00bc] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Dec  6 02:56:41 np0005548731 nova_compute[232433]: 2025-12-06 07:56:41.632 232437 INFO nova.virt.libvirt.driver [None req-947b8352-420e-4bb1-a598-5b86900f3d69 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] [instance: 9aa31d67-6e8e-4301-99db-832dd1fe00bc] Creating image(s)#033[00m
Dec  6 02:56:41 np0005548731 nova_compute[232433]: 2025-12-06 07:56:41.658 232437 DEBUG nova.storage.rbd_utils [None req-947b8352-420e-4bb1-a598-5b86900f3d69 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] rbd image 9aa31d67-6e8e-4301-99db-832dd1fe00bc_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:56:41 np0005548731 nova_compute[232433]: 2025-12-06 07:56:41.693 232437 DEBUG nova.storage.rbd_utils [None req-947b8352-420e-4bb1-a598-5b86900f3d69 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] rbd image 9aa31d67-6e8e-4301-99db-832dd1fe00bc_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:56:41 np0005548731 nova_compute[232433]: 2025-12-06 07:56:41.721 232437 DEBUG nova.storage.rbd_utils [None req-947b8352-420e-4bb1-a598-5b86900f3d69 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] rbd image 9aa31d67-6e8e-4301-99db-832dd1fe00bc_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:56:41 np0005548731 nova_compute[232433]: 2025-12-06 07:56:41.725 232437 DEBUG oslo_concurrency.processutils [None req-947b8352-420e-4bb1-a598-5b86900f3d69 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:56:41 np0005548731 nova_compute[232433]: 2025-12-06 07:56:41.789 232437 DEBUG oslo_concurrency.processutils [None req-947b8352-420e-4bb1-a598-5b86900f3d69 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:56:41 np0005548731 nova_compute[232433]: 2025-12-06 07:56:41.790 232437 DEBUG oslo_concurrency.lockutils [None req-947b8352-420e-4bb1-a598-5b86900f3d69 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] Acquiring lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:56:41 np0005548731 nova_compute[232433]: 2025-12-06 07:56:41.790 232437 DEBUG oslo_concurrency.lockutils [None req-947b8352-420e-4bb1-a598-5b86900f3d69 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:56:41 np0005548731 nova_compute[232433]: 2025-12-06 07:56:41.791 232437 DEBUG oslo_concurrency.lockutils [None req-947b8352-420e-4bb1-a598-5b86900f3d69 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:56:41 np0005548731 nova_compute[232433]: 2025-12-06 07:56:41.816 232437 DEBUG nova.storage.rbd_utils [None req-947b8352-420e-4bb1-a598-5b86900f3d69 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] rbd image 9aa31d67-6e8e-4301-99db-832dd1fe00bc_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:56:41 np0005548731 nova_compute[232433]: 2025-12-06 07:56:41.820 232437 DEBUG oslo_concurrency.processutils [None req-947b8352-420e-4bb1-a598-5b86900f3d69 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef 9aa31d67-6e8e-4301-99db-832dd1fe00bc_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:56:42 np0005548731 nova_compute[232433]: 2025-12-06 07:56:42.013 232437 DEBUG nova.policy [None req-947b8352-420e-4bb1-a598-5b86900f3d69 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '98e657096e3f4b528cd461a3dd6a750e', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'c3c0564f8e9f4af9ae5b597a275c989f', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Dec  6 02:56:42 np0005548731 nova_compute[232433]: 2025-12-06 07:56:42.105 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:56:42 np0005548731 nova_compute[232433]: 2025-12-06 07:56:42.235 232437 DEBUG oslo_concurrency.processutils [None req-947b8352-420e-4bb1-a598-5b86900f3d69 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef 9aa31d67-6e8e-4301-99db-832dd1fe00bc_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.415s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:56:42 np0005548731 nova_compute[232433]: 2025-12-06 07:56:42.303 232437 DEBUG nova.storage.rbd_utils [None req-947b8352-420e-4bb1-a598-5b86900f3d69 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] resizing rbd image 9aa31d67-6e8e-4301-99db-832dd1fe00bc_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Dec  6 02:56:42 np0005548731 nova_compute[232433]: 2025-12-06 07:56:42.397 232437 DEBUG nova.objects.instance [None req-947b8352-420e-4bb1-a598-5b86900f3d69 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] Lazy-loading 'migration_context' on Instance uuid 9aa31d67-6e8e-4301-99db-832dd1fe00bc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:56:42 np0005548731 nova_compute[232433]: 2025-12-06 07:56:42.421 232437 DEBUG nova.virt.libvirt.driver [None req-947b8352-420e-4bb1-a598-5b86900f3d69 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] [instance: 9aa31d67-6e8e-4301-99db-832dd1fe00bc] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Dec  6 02:56:42 np0005548731 nova_compute[232433]: 2025-12-06 07:56:42.421 232437 DEBUG nova.virt.libvirt.driver [None req-947b8352-420e-4bb1-a598-5b86900f3d69 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] [instance: 9aa31d67-6e8e-4301-99db-832dd1fe00bc] Ensure instance console log exists: /var/lib/nova/instances/9aa31d67-6e8e-4301-99db-832dd1fe00bc/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec  6 02:56:42 np0005548731 nova_compute[232433]: 2025-12-06 07:56:42.422 232437 DEBUG oslo_concurrency.lockutils [None req-947b8352-420e-4bb1-a598-5b86900f3d69 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:56:42 np0005548731 nova_compute[232433]: 2025-12-06 07:56:42.422 232437 DEBUG oslo_concurrency.lockutils [None req-947b8352-420e-4bb1-a598-5b86900f3d69 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:56:42 np0005548731 nova_compute[232433]: 2025-12-06 07:56:42.422 232437 DEBUG oslo_concurrency.lockutils [None req-947b8352-420e-4bb1-a598-5b86900f3d69 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:56:42 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:56:42 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:56:42 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:56:42.421 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:56:42 np0005548731 nova_compute[232433]: 2025-12-06 07:56:42.568 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:56:42 np0005548731 nova_compute[232433]: 2025-12-06 07:56:42.889 232437 DEBUG nova.network.neutron [req-d9f874e3-f2d8-4470-8d2c-e9b1be7ee839 req-856f8d86-b4f5-4041-a71e-a22bb0a1006c 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 21c591a0-5eed-4aa8-a68b-59a616b16e2b] Updated VIF entry in instance network info cache for port c76db338-0396-40a2-82b3-0c720d28d2bd. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  6 02:56:42 np0005548731 nova_compute[232433]: 2025-12-06 07:56:42.889 232437 DEBUG nova.network.neutron [req-d9f874e3-f2d8-4470-8d2c-e9b1be7ee839 req-856f8d86-b4f5-4041-a71e-a22bb0a1006c 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 21c591a0-5eed-4aa8-a68b-59a616b16e2b] Updating instance_info_cache with network_info: [{"id": "c76db338-0396-40a2-82b3-0c720d28d2bd", "address": "fa:16:3e:31:62:02", "network": {"id": "a3764201-4b86-4407-84d2-684bd05a44b3", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-2060653314-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.190", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6164fee998c94b71a37886fe42b4c56c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc76db338-03", "ovs_interfaceid": "c76db338-0396-40a2-82b3-0c720d28d2bd", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 02:56:42 np0005548731 nova_compute[232433]: 2025-12-06 07:56:42.907 232437 DEBUG oslo_concurrency.lockutils [req-d9f874e3-f2d8-4470-8d2c-e9b1be7ee839 req-856f8d86-b4f5-4041-a71e-a22bb0a1006c 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-21c591a0-5eed-4aa8-a68b-59a616b16e2b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 02:56:43 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:56:43 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:56:43 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:56:43.192 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:56:43 np0005548731 nova_compute[232433]: 2025-12-06 07:56:43.284 232437 DEBUG nova.network.neutron [None req-947b8352-420e-4bb1-a598-5b86900f3d69 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] [instance: 9aa31d67-6e8e-4301-99db-832dd1fe00bc] Successfully created port: 3b228814-72c3-4086-9740-e056ea1c6d7b _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Dec  6 02:56:44 np0005548731 nova_compute[232433]: 2025-12-06 07:56:44.012 232437 DEBUG nova.compute.manager [req-bffdf178-de94-4c96-b57c-5d5c3765e0a1 req-1c49c014-f9fe-46bf-b3a2-8aad6d29c820 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 21c591a0-5eed-4aa8-a68b-59a616b16e2b] Received event network-changed-c76db338-0396-40a2-82b3-0c720d28d2bd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:56:44 np0005548731 nova_compute[232433]: 2025-12-06 07:56:44.012 232437 DEBUG nova.compute.manager [req-bffdf178-de94-4c96-b57c-5d5c3765e0a1 req-1c49c014-f9fe-46bf-b3a2-8aad6d29c820 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 21c591a0-5eed-4aa8-a68b-59a616b16e2b] Refreshing instance network info cache due to event network-changed-c76db338-0396-40a2-82b3-0c720d28d2bd. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  6 02:56:44 np0005548731 nova_compute[232433]: 2025-12-06 07:56:44.012 232437 DEBUG oslo_concurrency.lockutils [req-bffdf178-de94-4c96-b57c-5d5c3765e0a1 req-1c49c014-f9fe-46bf-b3a2-8aad6d29c820 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-21c591a0-5eed-4aa8-a68b-59a616b16e2b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 02:56:44 np0005548731 nova_compute[232433]: 2025-12-06 07:56:44.013 232437 DEBUG oslo_concurrency.lockutils [req-bffdf178-de94-4c96-b57c-5d5c3765e0a1 req-1c49c014-f9fe-46bf-b3a2-8aad6d29c820 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-21c591a0-5eed-4aa8-a68b-59a616b16e2b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 02:56:44 np0005548731 nova_compute[232433]: 2025-12-06 07:56:44.013 232437 DEBUG nova.network.neutron [req-bffdf178-de94-4c96-b57c-5d5c3765e0a1 req-1c49c014-f9fe-46bf-b3a2-8aad6d29c820 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 21c591a0-5eed-4aa8-a68b-59a616b16e2b] Refreshing network info cache for port c76db338-0396-40a2-82b3-0c720d28d2bd _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  6 02:56:44 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e400 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:56:44 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:56:44 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 02:56:44 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:56:44.423 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 02:56:44 np0005548731 nova_compute[232433]: 2025-12-06 07:56:44.480 232437 DEBUG nova.network.neutron [None req-947b8352-420e-4bb1-a598-5b86900f3d69 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] [instance: 9aa31d67-6e8e-4301-99db-832dd1fe00bc] Successfully updated port: 3b228814-72c3-4086-9740-e056ea1c6d7b _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Dec  6 02:56:44 np0005548731 nova_compute[232433]: 2025-12-06 07:56:44.499 232437 DEBUG oslo_concurrency.lockutils [None req-947b8352-420e-4bb1-a598-5b86900f3d69 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] Acquiring lock "refresh_cache-9aa31d67-6e8e-4301-99db-832dd1fe00bc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 02:56:44 np0005548731 nova_compute[232433]: 2025-12-06 07:56:44.499 232437 DEBUG oslo_concurrency.lockutils [None req-947b8352-420e-4bb1-a598-5b86900f3d69 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] Acquired lock "refresh_cache-9aa31d67-6e8e-4301-99db-832dd1fe00bc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 02:56:44 np0005548731 nova_compute[232433]: 2025-12-06 07:56:44.500 232437 DEBUG nova.network.neutron [None req-947b8352-420e-4bb1-a598-5b86900f3d69 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] [instance: 9aa31d67-6e8e-4301-99db-832dd1fe00bc] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec  6 02:56:44 np0005548731 nova_compute[232433]: 2025-12-06 07:56:44.704 232437 DEBUG nova.network.neutron [None req-947b8352-420e-4bb1-a598-5b86900f3d69 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] [instance: 9aa31d67-6e8e-4301-99db-832dd1fe00bc] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec  6 02:56:44 np0005548731 nova_compute[232433]: 2025-12-06 07:56:44.795 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:56:45 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:56:45 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:56:45 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:56:45.194 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:56:45 np0005548731 nova_compute[232433]: 2025-12-06 07:56:45.222 232437 DEBUG nova.network.neutron [req-bffdf178-de94-4c96-b57c-5d5c3765e0a1 req-1c49c014-f9fe-46bf-b3a2-8aad6d29c820 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 21c591a0-5eed-4aa8-a68b-59a616b16e2b] Updated VIF entry in instance network info cache for port c76db338-0396-40a2-82b3-0c720d28d2bd. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  6 02:56:45 np0005548731 nova_compute[232433]: 2025-12-06 07:56:45.223 232437 DEBUG nova.network.neutron [req-bffdf178-de94-4c96-b57c-5d5c3765e0a1 req-1c49c014-f9fe-46bf-b3a2-8aad6d29c820 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 21c591a0-5eed-4aa8-a68b-59a616b16e2b] Updating instance_info_cache with network_info: [{"id": "c76db338-0396-40a2-82b3-0c720d28d2bd", "address": "fa:16:3e:31:62:02", "network": {"id": "a3764201-4b86-4407-84d2-684bd05a44b3", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-2060653314-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.190", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6164fee998c94b71a37886fe42b4c56c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc76db338-03", "ovs_interfaceid": "c76db338-0396-40a2-82b3-0c720d28d2bd", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 02:56:45 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:56:45.237 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=72, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:ec:b3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '32:72:e7:89:e0:7d'}, ipsec=False) old=SB_Global(nb_cfg=71) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 02:56:45 np0005548731 nova_compute[232433]: 2025-12-06 07:56:45.238 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:56:45 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:56:45.238 143965 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec  6 02:56:45 np0005548731 nova_compute[232433]: 2025-12-06 07:56:45.290 232437 DEBUG oslo_concurrency.lockutils [req-bffdf178-de94-4c96-b57c-5d5c3765e0a1 req-1c49c014-f9fe-46bf-b3a2-8aad6d29c820 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-21c591a0-5eed-4aa8-a68b-59a616b16e2b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 02:56:46 np0005548731 nova_compute[232433]: 2025-12-06 07:56:46.013 232437 DEBUG nova.network.neutron [None req-947b8352-420e-4bb1-a598-5b86900f3d69 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] [instance: 9aa31d67-6e8e-4301-99db-832dd1fe00bc] Updating instance_info_cache with network_info: [{"id": "3b228814-72c3-4086-9740-e056ea1c6d7b", "address": "fa:16:3e:b9:91:4c", "network": {"id": "8caa40db-27da-43ab-86ca-042284636e71", "bridge": "br-int", "label": "tempest-TestShelveInstance-1111687365-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c3c0564f8e9f4af9ae5b597a275c989f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3b228814-72", "ovs_interfaceid": "3b228814-72c3-4086-9740-e056ea1c6d7b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 02:56:46 np0005548731 nova_compute[232433]: 2025-12-06 07:56:46.061 232437 DEBUG oslo_concurrency.lockutils [None req-947b8352-420e-4bb1-a598-5b86900f3d69 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] Releasing lock "refresh_cache-9aa31d67-6e8e-4301-99db-832dd1fe00bc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 02:56:46 np0005548731 nova_compute[232433]: 2025-12-06 07:56:46.062 232437 DEBUG nova.compute.manager [None req-947b8352-420e-4bb1-a598-5b86900f3d69 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] [instance: 9aa31d67-6e8e-4301-99db-832dd1fe00bc] Instance network_info: |[{"id": "3b228814-72c3-4086-9740-e056ea1c6d7b", "address": "fa:16:3e:b9:91:4c", "network": {"id": "8caa40db-27da-43ab-86ca-042284636e71", "bridge": "br-int", "label": "tempest-TestShelveInstance-1111687365-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c3c0564f8e9f4af9ae5b597a275c989f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3b228814-72", "ovs_interfaceid": "3b228814-72c3-4086-9740-e056ea1c6d7b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Dec  6 02:56:46 np0005548731 nova_compute[232433]: 2025-12-06 07:56:46.068 232437 DEBUG nova.virt.libvirt.driver [None req-947b8352-420e-4bb1-a598-5b86900f3d69 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] [instance: 9aa31d67-6e8e-4301-99db-832dd1fe00bc] Start _get_guest_xml network_info=[{"id": "3b228814-72c3-4086-9740-e056ea1c6d7b", "address": "fa:16:3e:b9:91:4c", "network": {"id": "8caa40db-27da-43ab-86ca-042284636e71", "bridge": "br-int", "label": "tempest-TestShelveInstance-1111687365-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c3c0564f8e9f4af9ae5b597a275c989f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3b228814-72", "ovs_interfaceid": "3b228814-72c3-4086-9740-e056ea1c6d7b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'device_type': 'disk', 'encryption_secret_uuid': None, 'disk_bus': 'virtio', 'boot_index': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'encryption_options': None, 'size': 0, 'encrypted': False, 'image_id': '6efab05d-c7cf-4770-a5c3-c806a2739063'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec  6 02:56:46 np0005548731 nova_compute[232433]: 2025-12-06 07:56:46.074 232437 WARNING nova.virt.libvirt.driver [None req-947b8352-420e-4bb1-a598-5b86900f3d69 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  6 02:56:46 np0005548731 nova_compute[232433]: 2025-12-06 07:56:46.081 232437 DEBUG nova.virt.libvirt.host [None req-947b8352-420e-4bb1-a598-5b86900f3d69 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec  6 02:56:46 np0005548731 nova_compute[232433]: 2025-12-06 07:56:46.081 232437 DEBUG nova.virt.libvirt.host [None req-947b8352-420e-4bb1-a598-5b86900f3d69 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec  6 02:56:46 np0005548731 nova_compute[232433]: 2025-12-06 07:56:46.085 232437 DEBUG nova.virt.libvirt.host [None req-947b8352-420e-4bb1-a598-5b86900f3d69 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec  6 02:56:46 np0005548731 nova_compute[232433]: 2025-12-06 07:56:46.085 232437 DEBUG nova.virt.libvirt.host [None req-947b8352-420e-4bb1-a598-5b86900f3d69 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec  6 02:56:46 np0005548731 nova_compute[232433]: 2025-12-06 07:56:46.086 232437 DEBUG nova.virt.libvirt.driver [None req-947b8352-420e-4bb1-a598-5b86900f3d69 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec  6 02:56:46 np0005548731 nova_compute[232433]: 2025-12-06 07:56:46.087 232437 DEBUG nova.virt.hardware [None req-947b8352-420e-4bb1-a598-5b86900f3d69 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-06T06:56:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='25848a18-11d9-4f11-80b5-5d005675c76d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec  6 02:56:46 np0005548731 nova_compute[232433]: 2025-12-06 07:56:46.087 232437 DEBUG nova.virt.hardware [None req-947b8352-420e-4bb1-a598-5b86900f3d69 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec  6 02:56:46 np0005548731 nova_compute[232433]: 2025-12-06 07:56:46.087 232437 DEBUG nova.virt.hardware [None req-947b8352-420e-4bb1-a598-5b86900f3d69 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec  6 02:56:46 np0005548731 nova_compute[232433]: 2025-12-06 07:56:46.088 232437 DEBUG nova.virt.hardware [None req-947b8352-420e-4bb1-a598-5b86900f3d69 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec  6 02:56:46 np0005548731 nova_compute[232433]: 2025-12-06 07:56:46.088 232437 DEBUG nova.virt.hardware [None req-947b8352-420e-4bb1-a598-5b86900f3d69 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec  6 02:56:46 np0005548731 nova_compute[232433]: 2025-12-06 07:56:46.088 232437 DEBUG nova.virt.hardware [None req-947b8352-420e-4bb1-a598-5b86900f3d69 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec  6 02:56:46 np0005548731 nova_compute[232433]: 2025-12-06 07:56:46.088 232437 DEBUG nova.virt.hardware [None req-947b8352-420e-4bb1-a598-5b86900f3d69 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec  6 02:56:46 np0005548731 nova_compute[232433]: 2025-12-06 07:56:46.089 232437 DEBUG nova.virt.hardware [None req-947b8352-420e-4bb1-a598-5b86900f3d69 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec  6 02:56:46 np0005548731 nova_compute[232433]: 2025-12-06 07:56:46.089 232437 DEBUG nova.virt.hardware [None req-947b8352-420e-4bb1-a598-5b86900f3d69 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec  6 02:56:46 np0005548731 nova_compute[232433]: 2025-12-06 07:56:46.089 232437 DEBUG nova.virt.hardware [None req-947b8352-420e-4bb1-a598-5b86900f3d69 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec  6 02:56:46 np0005548731 nova_compute[232433]: 2025-12-06 07:56:46.089 232437 DEBUG nova.virt.hardware [None req-947b8352-420e-4bb1-a598-5b86900f3d69 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec  6 02:56:46 np0005548731 nova_compute[232433]: 2025-12-06 07:56:46.092 232437 DEBUG oslo_concurrency.processutils [None req-947b8352-420e-4bb1-a598-5b86900f3d69 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:56:46 np0005548731 nova_compute[232433]: 2025-12-06 07:56:46.129 232437 DEBUG nova.compute.manager [req-6378fd5b-021e-4d17-bd6d-caaafbfe453e req-2d2b5bcf-f9a7-4e31-a434-ac36292455e9 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 9aa31d67-6e8e-4301-99db-832dd1fe00bc] Received event network-changed-3b228814-72c3-4086-9740-e056ea1c6d7b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:56:46 np0005548731 nova_compute[232433]: 2025-12-06 07:56:46.129 232437 DEBUG nova.compute.manager [req-6378fd5b-021e-4d17-bd6d-caaafbfe453e req-2d2b5bcf-f9a7-4e31-a434-ac36292455e9 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 9aa31d67-6e8e-4301-99db-832dd1fe00bc] Refreshing instance network info cache due to event network-changed-3b228814-72c3-4086-9740-e056ea1c6d7b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  6 02:56:46 np0005548731 nova_compute[232433]: 2025-12-06 07:56:46.130 232437 DEBUG oslo_concurrency.lockutils [req-6378fd5b-021e-4d17-bd6d-caaafbfe453e req-2d2b5bcf-f9a7-4e31-a434-ac36292455e9 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-9aa31d67-6e8e-4301-99db-832dd1fe00bc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 02:56:46 np0005548731 nova_compute[232433]: 2025-12-06 07:56:46.130 232437 DEBUG oslo_concurrency.lockutils [req-6378fd5b-021e-4d17-bd6d-caaafbfe453e req-2d2b5bcf-f9a7-4e31-a434-ac36292455e9 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-9aa31d67-6e8e-4301-99db-832dd1fe00bc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 02:56:46 np0005548731 nova_compute[232433]: 2025-12-06 07:56:46.130 232437 DEBUG nova.network.neutron [req-6378fd5b-021e-4d17-bd6d-caaafbfe453e req-2d2b5bcf-f9a7-4e31-a434-ac36292455e9 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 9aa31d67-6e8e-4301-99db-832dd1fe00bc] Refreshing network info cache for port 3b228814-72c3-4086-9740-e056ea1c6d7b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  6 02:56:46 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:56:46 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:56:46 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:56:46.425 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:56:46 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  6 02:56:46 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3066351593' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  6 02:56:46 np0005548731 nova_compute[232433]: 2025-12-06 07:56:46.543 232437 DEBUG oslo_concurrency.processutils [None req-947b8352-420e-4bb1-a598-5b86900f3d69 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:56:46 np0005548731 nova_compute[232433]: 2025-12-06 07:56:46.579 232437 DEBUG nova.storage.rbd_utils [None req-947b8352-420e-4bb1-a598-5b86900f3d69 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] rbd image 9aa31d67-6e8e-4301-99db-832dd1fe00bc_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:56:46 np0005548731 nova_compute[232433]: 2025-12-06 07:56:46.582 232437 DEBUG oslo_concurrency.processutils [None req-947b8352-420e-4bb1-a598-5b86900f3d69 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:56:46 np0005548731 nova_compute[232433]: 2025-12-06 07:56:46.647 232437 DEBUG oslo_concurrency.lockutils [None req-59aa0d7b-cfed-41a6-881f-308e5ca6c960 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] Acquiring lock "21c591a0-5eed-4aa8-a68b-59a616b16e2b" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:56:46 np0005548731 nova_compute[232433]: 2025-12-06 07:56:46.648 232437 DEBUG oslo_concurrency.lockutils [None req-59aa0d7b-cfed-41a6-881f-308e5ca6c960 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] Lock "21c591a0-5eed-4aa8-a68b-59a616b16e2b" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:56:46 np0005548731 nova_compute[232433]: 2025-12-06 07:56:46.665 232437 INFO nova.compute.manager [None req-59aa0d7b-cfed-41a6-881f-308e5ca6c960 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] [instance: 21c591a0-5eed-4aa8-a68b-59a616b16e2b] Detaching volume c7a5e27c-c7d8-4759-87be-61891912cec5#033[00m
Dec  6 02:56:46 np0005548731 nova_compute[232433]: 2025-12-06 07:56:46.831 232437 INFO nova.virt.block_device [None req-59aa0d7b-cfed-41a6-881f-308e5ca6c960 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] [instance: 21c591a0-5eed-4aa8-a68b-59a616b16e2b] Attempting to driver detach volume c7a5e27c-c7d8-4759-87be-61891912cec5 from mountpoint /dev/vdb#033[00m
Dec  6 02:56:46 np0005548731 nova_compute[232433]: 2025-12-06 07:56:46.841 232437 DEBUG nova.virt.libvirt.driver [None req-59aa0d7b-cfed-41a6-881f-308e5ca6c960 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] Attempting to detach device vdb from instance 21c591a0-5eed-4aa8-a68b-59a616b16e2b from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487#033[00m
Dec  6 02:56:46 np0005548731 nova_compute[232433]: 2025-12-06 07:56:46.842 232437 DEBUG nova.virt.libvirt.guest [None req-59aa0d7b-cfed-41a6-881f-308e5ca6c960 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] detach device xml: <disk type="network" device="disk">
Dec  6 02:56:46 np0005548731 nova_compute[232433]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Dec  6 02:56:46 np0005548731 nova_compute[232433]:  <source protocol="rbd" name="volumes/volume-c7a5e27c-c7d8-4759-87be-61891912cec5">
Dec  6 02:56:46 np0005548731 nova_compute[232433]:    <host name="192.168.122.100" port="6789"/>
Dec  6 02:56:46 np0005548731 nova_compute[232433]:    <host name="192.168.122.102" port="6789"/>
Dec  6 02:56:46 np0005548731 nova_compute[232433]:    <host name="192.168.122.101" port="6789"/>
Dec  6 02:56:46 np0005548731 nova_compute[232433]:  </source>
Dec  6 02:56:46 np0005548731 nova_compute[232433]:  <target dev="vdb" bus="virtio"/>
Dec  6 02:56:46 np0005548731 nova_compute[232433]:  <serial>c7a5e27c-c7d8-4759-87be-61891912cec5</serial>
Dec  6 02:56:46 np0005548731 nova_compute[232433]:  <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Dec  6 02:56:46 np0005548731 nova_compute[232433]: </disk>
Dec  6 02:56:46 np0005548731 nova_compute[232433]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Dec  6 02:56:46 np0005548731 nova_compute[232433]: 2025-12-06 07:56:46.850 232437 INFO nova.virt.libvirt.driver [None req-59aa0d7b-cfed-41a6-881f-308e5ca6c960 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] Successfully detached device vdb from instance 21c591a0-5eed-4aa8-a68b-59a616b16e2b from the persistent domain config.#033[00m
Dec  6 02:56:46 np0005548731 nova_compute[232433]: 2025-12-06 07:56:46.851 232437 DEBUG nova.virt.libvirt.driver [None req-59aa0d7b-cfed-41a6-881f-308e5ca6c960 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance 21c591a0-5eed-4aa8-a68b-59a616b16e2b from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523#033[00m
Dec  6 02:56:46 np0005548731 nova_compute[232433]: 2025-12-06 07:56:46.852 232437 DEBUG nova.virt.libvirt.guest [None req-59aa0d7b-cfed-41a6-881f-308e5ca6c960 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] detach device xml: <disk type="network" device="disk">
Dec  6 02:56:46 np0005548731 nova_compute[232433]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Dec  6 02:56:46 np0005548731 nova_compute[232433]:  <source protocol="rbd" name="volumes/volume-c7a5e27c-c7d8-4759-87be-61891912cec5">
Dec  6 02:56:46 np0005548731 nova_compute[232433]:    <host name="192.168.122.100" port="6789"/>
Dec  6 02:56:46 np0005548731 nova_compute[232433]:    <host name="192.168.122.102" port="6789"/>
Dec  6 02:56:46 np0005548731 nova_compute[232433]:    <host name="192.168.122.101" port="6789"/>
Dec  6 02:56:46 np0005548731 nova_compute[232433]:  </source>
Dec  6 02:56:46 np0005548731 nova_compute[232433]:  <target dev="vdb" bus="virtio"/>
Dec  6 02:56:46 np0005548731 nova_compute[232433]:  <serial>c7a5e27c-c7d8-4759-87be-61891912cec5</serial>
Dec  6 02:56:46 np0005548731 nova_compute[232433]:  <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Dec  6 02:56:46 np0005548731 nova_compute[232433]: </disk>
Dec  6 02:56:46 np0005548731 nova_compute[232433]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Dec  6 02:56:47 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  6 02:56:47 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2783866018' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  6 02:56:47 np0005548731 nova_compute[232433]: 2025-12-06 07:56:47.023 232437 DEBUG oslo_concurrency.processutils [None req-947b8352-420e-4bb1-a598-5b86900f3d69 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.441s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:56:47 np0005548731 nova_compute[232433]: 2025-12-06 07:56:47.025 232437 DEBUG nova.virt.libvirt.vif [None req-947b8352-420e-4bb1-a598-5b86900f3d69 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T07:56:39Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestShelveInstance-server-760058587',display_name='tempest-TestShelveInstance-server-760058587',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-testshelveinstance-server-760058587',id=178,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPMK8f3hkRq1zqoQC83gMxNZg6M7ZAxKrSEKHe3B4eb91L7pfGPJzplK7LQrq84g1u5RR1ZeVyoQ0VCNm5QQfCXxX7TmQ1C0jimNi3kHh1QlwD/S46/iChPS6huTXb5CpA==',key_name='tempest-TestShelveInstance-967718151',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c3c0564f8e9f4af9ae5b597a275c989f',ramdisk_id='',reservation_id='r-g1sdzs5v',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestShelveInstance-1863009913',owner_user_name='tempest-TestShelveInstance-1863009913-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:56:41Z,user_data=None,user_id='98e657096e3f4b528cd461a3dd6a750e',uuid=9aa31d67-6e8e-4301-99db-832dd1fe00bc,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3b228814-72c3-4086-9740-e056ea1c6d7b", "address": "fa:16:3e:b9:91:4c", "network": {"id": "8caa40db-27da-43ab-86ca-042284636e71", "bridge": "br-int", "label": "tempest-TestShelveInstance-1111687365-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c3c0564f8e9f4af9ae5b597a275c989f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3b228814-72", "ovs_interfaceid": "3b228814-72c3-4086-9740-e056ea1c6d7b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec  6 02:56:47 np0005548731 nova_compute[232433]: 2025-12-06 07:56:47.025 232437 DEBUG nova.network.os_vif_util [None req-947b8352-420e-4bb1-a598-5b86900f3d69 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] Converting VIF {"id": "3b228814-72c3-4086-9740-e056ea1c6d7b", "address": "fa:16:3e:b9:91:4c", "network": {"id": "8caa40db-27da-43ab-86ca-042284636e71", "bridge": "br-int", "label": "tempest-TestShelveInstance-1111687365-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c3c0564f8e9f4af9ae5b597a275c989f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3b228814-72", "ovs_interfaceid": "3b228814-72c3-4086-9740-e056ea1c6d7b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 02:56:47 np0005548731 nova_compute[232433]: 2025-12-06 07:56:47.026 232437 DEBUG nova.network.os_vif_util [None req-947b8352-420e-4bb1-a598-5b86900f3d69 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b9:91:4c,bridge_name='br-int',has_traffic_filtering=True,id=3b228814-72c3-4086-9740-e056ea1c6d7b,network=Network(8caa40db-27da-43ab-86ca-042284636e71),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3b228814-72') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 02:56:47 np0005548731 nova_compute[232433]: 2025-12-06 07:56:47.028 232437 DEBUG nova.objects.instance [None req-947b8352-420e-4bb1-a598-5b86900f3d69 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] Lazy-loading 'pci_devices' on Instance uuid 9aa31d67-6e8e-4301-99db-832dd1fe00bc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:56:47 np0005548731 nova_compute[232433]: 2025-12-06 07:56:47.046 232437 DEBUG nova.virt.libvirt.driver [None req-947b8352-420e-4bb1-a598-5b86900f3d69 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] [instance: 9aa31d67-6e8e-4301-99db-832dd1fe00bc] End _get_guest_xml xml=<domain type="kvm">
Dec  6 02:56:47 np0005548731 nova_compute[232433]:  <uuid>9aa31d67-6e8e-4301-99db-832dd1fe00bc</uuid>
Dec  6 02:56:47 np0005548731 nova_compute[232433]:  <name>instance-000000b2</name>
Dec  6 02:56:47 np0005548731 nova_compute[232433]:  <memory>131072</memory>
Dec  6 02:56:47 np0005548731 nova_compute[232433]:  <vcpu>1</vcpu>
Dec  6 02:56:47 np0005548731 nova_compute[232433]:  <metadata>
Dec  6 02:56:47 np0005548731 nova_compute[232433]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec  6 02:56:47 np0005548731 nova_compute[232433]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec  6 02:56:47 np0005548731 nova_compute[232433]:      <nova:name>tempest-TestShelveInstance-server-760058587</nova:name>
Dec  6 02:56:47 np0005548731 nova_compute[232433]:      <nova:creationTime>2025-12-06 07:56:46</nova:creationTime>
Dec  6 02:56:47 np0005548731 nova_compute[232433]:      <nova:flavor name="m1.nano">
Dec  6 02:56:47 np0005548731 nova_compute[232433]:        <nova:memory>128</nova:memory>
Dec  6 02:56:47 np0005548731 nova_compute[232433]:        <nova:disk>1</nova:disk>
Dec  6 02:56:47 np0005548731 nova_compute[232433]:        <nova:swap>0</nova:swap>
Dec  6 02:56:47 np0005548731 nova_compute[232433]:        <nova:ephemeral>0</nova:ephemeral>
Dec  6 02:56:47 np0005548731 nova_compute[232433]:        <nova:vcpus>1</nova:vcpus>
Dec  6 02:56:47 np0005548731 nova_compute[232433]:      </nova:flavor>
Dec  6 02:56:47 np0005548731 nova_compute[232433]:      <nova:owner>
Dec  6 02:56:47 np0005548731 nova_compute[232433]:        <nova:user uuid="98e657096e3f4b528cd461a3dd6a750e">tempest-TestShelveInstance-1863009913-project-member</nova:user>
Dec  6 02:56:47 np0005548731 nova_compute[232433]:        <nova:project uuid="c3c0564f8e9f4af9ae5b597a275c989f">tempest-TestShelveInstance-1863009913</nova:project>
Dec  6 02:56:47 np0005548731 nova_compute[232433]:      </nova:owner>
Dec  6 02:56:47 np0005548731 nova_compute[232433]:      <nova:root type="image" uuid="6efab05d-c7cf-4770-a5c3-c806a2739063"/>
Dec  6 02:56:47 np0005548731 nova_compute[232433]:      <nova:ports>
Dec  6 02:56:47 np0005548731 nova_compute[232433]:        <nova:port uuid="3b228814-72c3-4086-9740-e056ea1c6d7b">
Dec  6 02:56:47 np0005548731 nova_compute[232433]:          <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Dec  6 02:56:47 np0005548731 nova_compute[232433]:        </nova:port>
Dec  6 02:56:47 np0005548731 nova_compute[232433]:      </nova:ports>
Dec  6 02:56:47 np0005548731 nova_compute[232433]:    </nova:instance>
Dec  6 02:56:47 np0005548731 nova_compute[232433]:  </metadata>
Dec  6 02:56:47 np0005548731 nova_compute[232433]:  <sysinfo type="smbios">
Dec  6 02:56:47 np0005548731 nova_compute[232433]:    <system>
Dec  6 02:56:47 np0005548731 nova_compute[232433]:      <entry name="manufacturer">RDO</entry>
Dec  6 02:56:47 np0005548731 nova_compute[232433]:      <entry name="product">OpenStack Compute</entry>
Dec  6 02:56:47 np0005548731 nova_compute[232433]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec  6 02:56:47 np0005548731 nova_compute[232433]:      <entry name="serial">9aa31d67-6e8e-4301-99db-832dd1fe00bc</entry>
Dec  6 02:56:47 np0005548731 nova_compute[232433]:      <entry name="uuid">9aa31d67-6e8e-4301-99db-832dd1fe00bc</entry>
Dec  6 02:56:47 np0005548731 nova_compute[232433]:      <entry name="family">Virtual Machine</entry>
Dec  6 02:56:47 np0005548731 nova_compute[232433]:    </system>
Dec  6 02:56:47 np0005548731 nova_compute[232433]:  </sysinfo>
Dec  6 02:56:47 np0005548731 nova_compute[232433]:  <os>
Dec  6 02:56:47 np0005548731 nova_compute[232433]:    <type arch="x86_64" machine="q35">hvm</type>
Dec  6 02:56:47 np0005548731 nova_compute[232433]:    <boot dev="hd"/>
Dec  6 02:56:47 np0005548731 nova_compute[232433]:    <smbios mode="sysinfo"/>
Dec  6 02:56:47 np0005548731 nova_compute[232433]:  </os>
Dec  6 02:56:47 np0005548731 nova_compute[232433]:  <features>
Dec  6 02:56:47 np0005548731 nova_compute[232433]:    <acpi/>
Dec  6 02:56:47 np0005548731 nova_compute[232433]:    <apic/>
Dec  6 02:56:47 np0005548731 nova_compute[232433]:    <vmcoreinfo/>
Dec  6 02:56:47 np0005548731 nova_compute[232433]:  </features>
Dec  6 02:56:47 np0005548731 nova_compute[232433]:  <clock offset="utc">
Dec  6 02:56:47 np0005548731 nova_compute[232433]:    <timer name="pit" tickpolicy="delay"/>
Dec  6 02:56:47 np0005548731 nova_compute[232433]:    <timer name="rtc" tickpolicy="catchup"/>
Dec  6 02:56:47 np0005548731 nova_compute[232433]:    <timer name="hpet" present="no"/>
Dec  6 02:56:47 np0005548731 nova_compute[232433]:  </clock>
Dec  6 02:56:47 np0005548731 nova_compute[232433]:  <cpu mode="custom" match="exact">
Dec  6 02:56:47 np0005548731 nova_compute[232433]:    <model>Nehalem</model>
Dec  6 02:56:47 np0005548731 nova_compute[232433]:    <topology sockets="1" cores="1" threads="1"/>
Dec  6 02:56:47 np0005548731 nova_compute[232433]:  </cpu>
Dec  6 02:56:47 np0005548731 nova_compute[232433]:  <devices>
Dec  6 02:56:47 np0005548731 nova_compute[232433]:    <disk type="network" device="disk">
Dec  6 02:56:47 np0005548731 nova_compute[232433]:      <driver type="raw" cache="none"/>
Dec  6 02:56:47 np0005548731 nova_compute[232433]:      <source protocol="rbd" name="vms/9aa31d67-6e8e-4301-99db-832dd1fe00bc_disk">
Dec  6 02:56:47 np0005548731 nova_compute[232433]:        <host name="192.168.122.100" port="6789"/>
Dec  6 02:56:47 np0005548731 nova_compute[232433]:        <host name="192.168.122.102" port="6789"/>
Dec  6 02:56:47 np0005548731 nova_compute[232433]:        <host name="192.168.122.101" port="6789"/>
Dec  6 02:56:47 np0005548731 nova_compute[232433]:      </source>
Dec  6 02:56:47 np0005548731 nova_compute[232433]:      <auth username="openstack">
Dec  6 02:56:47 np0005548731 nova_compute[232433]:        <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec  6 02:56:47 np0005548731 nova_compute[232433]:      </auth>
Dec  6 02:56:47 np0005548731 nova_compute[232433]:      <target dev="vda" bus="virtio"/>
Dec  6 02:56:47 np0005548731 nova_compute[232433]:    </disk>
Dec  6 02:56:47 np0005548731 nova_compute[232433]:    <disk type="network" device="cdrom">
Dec  6 02:56:47 np0005548731 nova_compute[232433]:      <driver type="raw" cache="none"/>
Dec  6 02:56:47 np0005548731 nova_compute[232433]:      <source protocol="rbd" name="vms/9aa31d67-6e8e-4301-99db-832dd1fe00bc_disk.config">
Dec  6 02:56:47 np0005548731 nova_compute[232433]:        <host name="192.168.122.100" port="6789"/>
Dec  6 02:56:47 np0005548731 nova_compute[232433]:        <host name="192.168.122.102" port="6789"/>
Dec  6 02:56:47 np0005548731 nova_compute[232433]:        <host name="192.168.122.101" port="6789"/>
Dec  6 02:56:47 np0005548731 nova_compute[232433]:      </source>
Dec  6 02:56:47 np0005548731 nova_compute[232433]:      <auth username="openstack">
Dec  6 02:56:47 np0005548731 nova_compute[232433]:        <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec  6 02:56:47 np0005548731 nova_compute[232433]:      </auth>
Dec  6 02:56:47 np0005548731 nova_compute[232433]:      <target dev="sda" bus="sata"/>
Dec  6 02:56:47 np0005548731 nova_compute[232433]:    </disk>
Dec  6 02:56:47 np0005548731 nova_compute[232433]:    <interface type="ethernet">
Dec  6 02:56:47 np0005548731 nova_compute[232433]:      <mac address="fa:16:3e:b9:91:4c"/>
Dec  6 02:56:47 np0005548731 nova_compute[232433]:      <model type="virtio"/>
Dec  6 02:56:47 np0005548731 nova_compute[232433]:      <driver name="vhost" rx_queue_size="512"/>
Dec  6 02:56:47 np0005548731 nova_compute[232433]:      <mtu size="1442"/>
Dec  6 02:56:47 np0005548731 nova_compute[232433]:      <target dev="tap3b228814-72"/>
Dec  6 02:56:47 np0005548731 nova_compute[232433]:    </interface>
Dec  6 02:56:47 np0005548731 nova_compute[232433]:    <serial type="pty">
Dec  6 02:56:47 np0005548731 nova_compute[232433]:      <log file="/var/lib/nova/instances/9aa31d67-6e8e-4301-99db-832dd1fe00bc/console.log" append="off"/>
Dec  6 02:56:47 np0005548731 nova_compute[232433]:    </serial>
Dec  6 02:56:47 np0005548731 nova_compute[232433]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec  6 02:56:47 np0005548731 nova_compute[232433]:    <video>
Dec  6 02:56:47 np0005548731 nova_compute[232433]:      <model type="virtio"/>
Dec  6 02:56:47 np0005548731 nova_compute[232433]:    </video>
Dec  6 02:56:47 np0005548731 nova_compute[232433]:    <input type="tablet" bus="usb"/>
Dec  6 02:56:47 np0005548731 nova_compute[232433]:    <rng model="virtio">
Dec  6 02:56:47 np0005548731 nova_compute[232433]:      <backend model="random">/dev/urandom</backend>
Dec  6 02:56:47 np0005548731 nova_compute[232433]:    </rng>
Dec  6 02:56:47 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root"/>
Dec  6 02:56:47 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:56:47 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:56:47 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:56:47 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:56:47 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:56:47 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:56:47 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:56:47 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:56:47 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:56:47 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:56:47 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:56:47 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:56:47 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:56:47 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:56:47 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:56:47 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:56:47 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:56:47 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:56:47 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:56:47 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:56:47 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:56:47 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:56:47 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:56:47 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:56:47 np0005548731 nova_compute[232433]:    <controller type="usb" index="0"/>
Dec  6 02:56:47 np0005548731 nova_compute[232433]:    <memballoon model="virtio">
Dec  6 02:56:47 np0005548731 nova_compute[232433]:      <stats period="10"/>
Dec  6 02:56:47 np0005548731 nova_compute[232433]:    </memballoon>
Dec  6 02:56:47 np0005548731 nova_compute[232433]:  </devices>
Dec  6 02:56:47 np0005548731 nova_compute[232433]: </domain>
Dec  6 02:56:47 np0005548731 nova_compute[232433]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec  6 02:56:47 np0005548731 nova_compute[232433]: 2025-12-06 07:56:47.048 232437 DEBUG nova.compute.manager [None req-947b8352-420e-4bb1-a598-5b86900f3d69 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] [instance: 9aa31d67-6e8e-4301-99db-832dd1fe00bc] Preparing to wait for external event network-vif-plugged-3b228814-72c3-4086-9740-e056ea1c6d7b prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec  6 02:56:47 np0005548731 nova_compute[232433]: 2025-12-06 07:56:47.048 232437 DEBUG oslo_concurrency.lockutils [None req-947b8352-420e-4bb1-a598-5b86900f3d69 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] Acquiring lock "9aa31d67-6e8e-4301-99db-832dd1fe00bc-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:56:47 np0005548731 nova_compute[232433]: 2025-12-06 07:56:47.048 232437 DEBUG oslo_concurrency.lockutils [None req-947b8352-420e-4bb1-a598-5b86900f3d69 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] Lock "9aa31d67-6e8e-4301-99db-832dd1fe00bc-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:56:47 np0005548731 nova_compute[232433]: 2025-12-06 07:56:47.049 232437 DEBUG oslo_concurrency.lockutils [None req-947b8352-420e-4bb1-a598-5b86900f3d69 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] Lock "9aa31d67-6e8e-4301-99db-832dd1fe00bc-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:56:47 np0005548731 nova_compute[232433]: 2025-12-06 07:56:47.049 232437 DEBUG nova.virt.libvirt.vif [None req-947b8352-420e-4bb1-a598-5b86900f3d69 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T07:56:39Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestShelveInstance-server-760058587',display_name='tempest-TestShelveInstance-server-760058587',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-testshelveinstance-server-760058587',id=178,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPMK8f3hkRq1zqoQC83gMxNZg6M7ZAxKrSEKHe3B4eb91L7pfGPJzplK7LQrq84g1u5RR1ZeVyoQ0VCNm5QQfCXxX7TmQ1C0jimNi3kHh1QlwD/S46/iChPS6huTXb5CpA==',key_name='tempest-TestShelveInstance-967718151',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c3c0564f8e9f4af9ae5b597a275c989f',ramdisk_id='',reservation_id='r-g1sdzs5v',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestShelveInstance-1863009913',owner_user_name='tempest-TestShelveInstance-1863009913-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:56:41Z,user_data=None,user_id='98e657096e3f4b528cd461a3dd6a750e',uuid=9aa31d67-6e8e-4301-99db-832dd1fe00bc,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3b228814-72c3-4086-9740-e056ea1c6d7b", "address": "fa:16:3e:b9:91:4c", "network": {"id": "8caa40db-27da-43ab-86ca-042284636e71", "bridge": "br-int", "label": "tempest-TestShelveInstance-1111687365-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c3c0564f8e9f4af9ae5b597a275c989f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3b228814-72", "ovs_interfaceid": "3b228814-72c3-4086-9740-e056ea1c6d7b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec  6 02:56:47 np0005548731 nova_compute[232433]: 2025-12-06 07:56:47.049 232437 DEBUG nova.network.os_vif_util [None req-947b8352-420e-4bb1-a598-5b86900f3d69 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] Converting VIF {"id": "3b228814-72c3-4086-9740-e056ea1c6d7b", "address": "fa:16:3e:b9:91:4c", "network": {"id": "8caa40db-27da-43ab-86ca-042284636e71", "bridge": "br-int", "label": "tempest-TestShelveInstance-1111687365-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c3c0564f8e9f4af9ae5b597a275c989f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3b228814-72", "ovs_interfaceid": "3b228814-72c3-4086-9740-e056ea1c6d7b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 02:56:47 np0005548731 nova_compute[232433]: 2025-12-06 07:56:47.050 232437 DEBUG nova.network.os_vif_util [None req-947b8352-420e-4bb1-a598-5b86900f3d69 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b9:91:4c,bridge_name='br-int',has_traffic_filtering=True,id=3b228814-72c3-4086-9740-e056ea1c6d7b,network=Network(8caa40db-27da-43ab-86ca-042284636e71),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3b228814-72') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 02:56:47 np0005548731 nova_compute[232433]: 2025-12-06 07:56:47.050 232437 DEBUG os_vif [None req-947b8352-420e-4bb1-a598-5b86900f3d69 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b9:91:4c,bridge_name='br-int',has_traffic_filtering=True,id=3b228814-72c3-4086-9740-e056ea1c6d7b,network=Network(8caa40db-27da-43ab-86ca-042284636e71),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3b228814-72') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec  6 02:56:47 np0005548731 nova_compute[232433]: 2025-12-06 07:56:47.051 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:56:47 np0005548731 nova_compute[232433]: 2025-12-06 07:56:47.051 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:56:47 np0005548731 nova_compute[232433]: 2025-12-06 07:56:47.052 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  6 02:56:47 np0005548731 nova_compute[232433]: 2025-12-06 07:56:47.054 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:56:47 np0005548731 nova_compute[232433]: 2025-12-06 07:56:47.054 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3b228814-72, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:56:47 np0005548731 nova_compute[232433]: 2025-12-06 07:56:47.055 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap3b228814-72, col_values=(('external_ids', {'iface-id': '3b228814-72c3-4086-9740-e056ea1c6d7b', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:b9:91:4c', 'vm-uuid': '9aa31d67-6e8e-4301-99db-832dd1fe00bc'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:56:47 np0005548731 nova_compute[232433]: 2025-12-06 07:56:47.091 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:56:47 np0005548731 NetworkManager[49182]: <info>  [1765007807.0930] manager: (tap3b228814-72): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/406)
Dec  6 02:56:47 np0005548731 nova_compute[232433]: 2025-12-06 07:56:47.095 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  6 02:56:47 np0005548731 nova_compute[232433]: 2025-12-06 07:56:47.099 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:56:47 np0005548731 nova_compute[232433]: 2025-12-06 07:56:47.099 232437 INFO os_vif [None req-947b8352-420e-4bb1-a598-5b86900f3d69 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b9:91:4c,bridge_name='br-int',has_traffic_filtering=True,id=3b228814-72c3-4086-9740-e056ea1c6d7b,network=Network(8caa40db-27da-43ab-86ca-042284636e71),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3b228814-72')#033[00m
Dec  6 02:56:47 np0005548731 nova_compute[232433]: 2025-12-06 07:56:47.142 232437 DEBUG nova.virt.libvirt.driver [None req-947b8352-420e-4bb1-a598-5b86900f3d69 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  6 02:56:47 np0005548731 nova_compute[232433]: 2025-12-06 07:56:47.143 232437 DEBUG nova.virt.libvirt.driver [None req-947b8352-420e-4bb1-a598-5b86900f3d69 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  6 02:56:47 np0005548731 nova_compute[232433]: 2025-12-06 07:56:47.143 232437 DEBUG nova.virt.libvirt.driver [None req-947b8352-420e-4bb1-a598-5b86900f3d69 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] No VIF found with MAC fa:16:3e:b9:91:4c, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec  6 02:56:47 np0005548731 nova_compute[232433]: 2025-12-06 07:56:47.143 232437 INFO nova.virt.libvirt.driver [None req-947b8352-420e-4bb1-a598-5b86900f3d69 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] [instance: 9aa31d67-6e8e-4301-99db-832dd1fe00bc] Using config drive#033[00m
Dec  6 02:56:47 np0005548731 nova_compute[232433]: 2025-12-06 07:56:47.167 232437 DEBUG nova.storage.rbd_utils [None req-947b8352-420e-4bb1-a598-5b86900f3d69 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] rbd image 9aa31d67-6e8e-4301-99db-832dd1fe00bc_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:56:47 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:56:47 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:56:47 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:56:47.197 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:56:47 np0005548731 nova_compute[232433]: 2025-12-06 07:56:47.420 232437 DEBUG nova.network.neutron [req-6378fd5b-021e-4d17-bd6d-caaafbfe453e req-2d2b5bcf-f9a7-4e31-a434-ac36292455e9 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 9aa31d67-6e8e-4301-99db-832dd1fe00bc] Updated VIF entry in instance network info cache for port 3b228814-72c3-4086-9740-e056ea1c6d7b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  6 02:56:47 np0005548731 nova_compute[232433]: 2025-12-06 07:56:47.421 232437 DEBUG nova.network.neutron [req-6378fd5b-021e-4d17-bd6d-caaafbfe453e req-2d2b5bcf-f9a7-4e31-a434-ac36292455e9 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 9aa31d67-6e8e-4301-99db-832dd1fe00bc] Updating instance_info_cache with network_info: [{"id": "3b228814-72c3-4086-9740-e056ea1c6d7b", "address": "fa:16:3e:b9:91:4c", "network": {"id": "8caa40db-27da-43ab-86ca-042284636e71", "bridge": "br-int", "label": "tempest-TestShelveInstance-1111687365-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c3c0564f8e9f4af9ae5b597a275c989f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3b228814-72", "ovs_interfaceid": "3b228814-72c3-4086-9740-e056ea1c6d7b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 02:56:47 np0005548731 nova_compute[232433]: 2025-12-06 07:56:47.458 232437 DEBUG oslo_concurrency.lockutils [req-6378fd5b-021e-4d17-bd6d-caaafbfe453e req-2d2b5bcf-f9a7-4e31-a434-ac36292455e9 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-9aa31d67-6e8e-4301-99db-832dd1fe00bc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 02:56:47 np0005548731 nova_compute[232433]: 2025-12-06 07:56:47.515 232437 INFO nova.virt.libvirt.driver [None req-947b8352-420e-4bb1-a598-5b86900f3d69 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] [instance: 9aa31d67-6e8e-4301-99db-832dd1fe00bc] Creating config drive at /var/lib/nova/instances/9aa31d67-6e8e-4301-99db-832dd1fe00bc/disk.config#033[00m
Dec  6 02:56:47 np0005548731 nova_compute[232433]: 2025-12-06 07:56:47.520 232437 DEBUG oslo_concurrency.processutils [None req-947b8352-420e-4bb1-a598-5b86900f3d69 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/9aa31d67-6e8e-4301-99db-832dd1fe00bc/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpx__a7aib execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:56:47 np0005548731 nova_compute[232433]: 2025-12-06 07:56:47.654 232437 DEBUG oslo_concurrency.processutils [None req-947b8352-420e-4bb1-a598-5b86900f3d69 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/9aa31d67-6e8e-4301-99db-832dd1fe00bc/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpx__a7aib" returned: 0 in 0.134s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:56:47 np0005548731 nova_compute[232433]: 2025-12-06 07:56:47.683 232437 DEBUG nova.storage.rbd_utils [None req-947b8352-420e-4bb1-a598-5b86900f3d69 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] rbd image 9aa31d67-6e8e-4301-99db-832dd1fe00bc_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:56:47 np0005548731 nova_compute[232433]: 2025-12-06 07:56:47.688 232437 DEBUG oslo_concurrency.processutils [None req-947b8352-420e-4bb1-a598-5b86900f3d69 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/9aa31d67-6e8e-4301-99db-832dd1fe00bc/disk.config 9aa31d67-6e8e-4301-99db-832dd1fe00bc_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:56:47 np0005548731 nova_compute[232433]: 2025-12-06 07:56:47.714 232437 DEBUG nova.virt.libvirt.driver [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Received event <DeviceRemovedEvent: 1765007807.70019, 21c591a0-5eed-4aa8-a68b-59a616b16e2b => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370#033[00m
Dec  6 02:56:47 np0005548731 nova_compute[232433]: 2025-12-06 07:56:47.715 232437 DEBUG nova.virt.libvirt.driver [None req-59aa0d7b-cfed-41a6-881f-308e5ca6c960 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance 21c591a0-5eed-4aa8-a68b-59a616b16e2b _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599#033[00m
Dec  6 02:56:47 np0005548731 nova_compute[232433]: 2025-12-06 07:56:47.721 232437 INFO nova.virt.libvirt.driver [None req-59aa0d7b-cfed-41a6-881f-308e5ca6c960 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] Successfully detached device vdb from instance 21c591a0-5eed-4aa8-a68b-59a616b16e2b from the live domain config.#033[00m
Dec  6 02:56:47 np0005548731 nova_compute[232433]: 2025-12-06 07:56:47.902 232437 DEBUG nova.objects.instance [None req-59aa0d7b-cfed-41a6-881f-308e5ca6c960 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] Lazy-loading 'flavor' on Instance uuid 21c591a0-5eed-4aa8-a68b-59a616b16e2b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:56:47 np0005548731 nova_compute[232433]: 2025-12-06 07:56:47.950 232437 DEBUG oslo_concurrency.lockutils [None req-59aa0d7b-cfed-41a6-881f-308e5ca6c960 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] Lock "21c591a0-5eed-4aa8-a68b-59a616b16e2b" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 1.302s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:56:47 np0005548731 nova_compute[232433]: 2025-12-06 07:56:47.969 232437 DEBUG oslo_concurrency.processutils [None req-947b8352-420e-4bb1-a598-5b86900f3d69 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/9aa31d67-6e8e-4301-99db-832dd1fe00bc/disk.config 9aa31d67-6e8e-4301-99db-832dd1fe00bc_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.280s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:56:47 np0005548731 nova_compute[232433]: 2025-12-06 07:56:47.969 232437 INFO nova.virt.libvirt.driver [None req-947b8352-420e-4bb1-a598-5b86900f3d69 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] [instance: 9aa31d67-6e8e-4301-99db-832dd1fe00bc] Deleting local config drive /var/lib/nova/instances/9aa31d67-6e8e-4301-99db-832dd1fe00bc/disk.config because it was imported into RBD.#033[00m
Dec  6 02:56:48 np0005548731 kernel: tap3b228814-72: entered promiscuous mode
Dec  6 02:56:48 np0005548731 NetworkManager[49182]: <info>  [1765007808.0159] manager: (tap3b228814-72): new Tun device (/org/freedesktop/NetworkManager/Devices/407)
Dec  6 02:56:48 np0005548731 ovn_controller[133927]: 2025-12-06T07:56:48Z|00881|binding|INFO|Claiming lport 3b228814-72c3-4086-9740-e056ea1c6d7b for this chassis.
Dec  6 02:56:48 np0005548731 ovn_controller[133927]: 2025-12-06T07:56:48Z|00882|binding|INFO|3b228814-72c3-4086-9740-e056ea1c6d7b: Claiming fa:16:3e:b9:91:4c 10.100.0.10
Dec  6 02:56:48 np0005548731 nova_compute[232433]: 2025-12-06 07:56:48.017 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:56:48 np0005548731 ovn_controller[133927]: 2025-12-06T07:56:48Z|00883|binding|INFO|Setting lport 3b228814-72c3-4086-9740-e056ea1c6d7b ovn-installed in OVS
Dec  6 02:56:48 np0005548731 ovn_controller[133927]: 2025-12-06T07:56:48Z|00884|binding|INFO|Setting lport 3b228814-72c3-4086-9740-e056ea1c6d7b up in Southbound
Dec  6 02:56:48 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:56:48.034 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b9:91:4c 10.100.0.10'], port_security=['fa:16:3e:b9:91:4c 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '9aa31d67-6e8e-4301-99db-832dd1fe00bc', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8caa40db-27da-43ab-86ca-042284636e71', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c3c0564f8e9f4af9ae5b597a275c989f', 'neutron:revision_number': '2', 'neutron:security_group_ids': '89072f55-dc9c-4de3-8430-0091f653d55a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3ce9c1bf-feee-480c-a359-3eaf272f4b83, chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], logical_port=3b228814-72c3-4086-9740-e056ea1c6d7b) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 02:56:48 np0005548731 nova_compute[232433]: 2025-12-06 07:56:48.035 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:56:48 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:56:48.037 143965 INFO neutron.agent.ovn.metadata.agent [-] Port 3b228814-72c3-4086-9740-e056ea1c6d7b in datapath 8caa40db-27da-43ab-86ca-042284636e71 bound to our chassis#033[00m
Dec  6 02:56:48 np0005548731 nova_compute[232433]: 2025-12-06 07:56:48.038 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:56:48 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:56:48.039 143965 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 8caa40db-27da-43ab-86ca-042284636e71#033[00m
Dec  6 02:56:48 np0005548731 systemd-udevd[316394]: Network interface NamePolicy= disabled on kernel command line.
Dec  6 02:56:48 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:56:48.053 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[86f8f606-445a-420e-b5d9-a245296d0774]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:56:48 np0005548731 systemd-machined[195355]: New machine qemu-90-instance-000000b2.
Dec  6 02:56:48 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:56:48.054 143965 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap8caa40db-21 in ovnmeta-8caa40db-27da-43ab-86ca-042284636e71 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Dec  6 02:56:48 np0005548731 NetworkManager[49182]: <info>  [1765007808.0577] device (tap3b228814-72): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec  6 02:56:48 np0005548731 NetworkManager[49182]: <info>  [1765007808.0584] device (tap3b228814-72): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec  6 02:56:48 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:56:48.057 236668 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap8caa40db-20 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Dec  6 02:56:48 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:56:48.057 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[7705d615-506f-41fa-81bc-90fb5c149551]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:56:48 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:56:48.059 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[b5a87a9b-f528-4503-b18a-f59fdfa7b061]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:56:48 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:56:48.070 144079 DEBUG oslo.privsep.daemon [-] privsep: reply[1497c890-d2c9-4d1f-81fa-064dfea2849d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:56:48 np0005548731 systemd[1]: Started Virtual Machine qemu-90-instance-000000b2.
Dec  6 02:56:48 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:56:48.085 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[dc19a245-c694-45d6-8a53-81e752f4105e]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:56:48 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:56:48.118 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[ea309225-55d2-4444-917c-bcfa5d2027ae]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:56:48 np0005548731 systemd-udevd[316398]: Network interface NamePolicy= disabled on kernel command line.
Dec  6 02:56:48 np0005548731 NetworkManager[49182]: <info>  [1765007808.1237] manager: (tap8caa40db-20): new Veth device (/org/freedesktop/NetworkManager/Devices/408)
Dec  6 02:56:48 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:56:48.122 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[2ffeb0b6-ebbe-41c9-a7e7-ed2161d7bbe0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:56:48 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:56:48.153 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[e66d309f-5111-4a45-90d8-16a8741015fd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:56:48 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:56:48.157 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[1b3253a7-1168-4261-be9c-7b83adde941b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:56:48 np0005548731 NetworkManager[49182]: <info>  [1765007808.1798] device (tap8caa40db-20): carrier: link connected
Dec  6 02:56:48 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:56:48.189 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[f1977a5c-039e-455a-8e15-dda649b5257c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:56:48 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:56:48.208 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[4c543f2b-484b-4e60-97ba-58da2cf6bfcb]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8caa40db-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b4:45:40'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 269], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 807018, 'reachable_time': 20174, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 316428, 'error': None, 'target': 'ovnmeta-8caa40db-27da-43ab-86ca-042284636e71', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:56:48 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:56:48.223 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[1c0fa071-aae5-4b4c-a7d8-4647c14bb884]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feb4:4540'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 807018, 'tstamp': 807018}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 316429, 'error': None, 'target': 'ovnmeta-8caa40db-27da-43ab-86ca-042284636e71', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:56:48 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:56:48.238 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[c72d7335-30b5-4b9a-a69a-b2ee55998ed1]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8caa40db-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b4:45:40'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 269], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 807018, 'reachable_time': 20174, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 316430, 'error': None, 'target': 'ovnmeta-8caa40db-27da-43ab-86ca-042284636e71', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:56:48 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:56:48.267 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[0376882a-ef17-4155-a4a0-fc792001c38a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:56:48 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:56:48.326 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[8d393f1e-ebce-4006-b5b3-194c30d127e0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:56:48 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:56:48.327 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8caa40db-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:56:48 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:56:48.327 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  6 02:56:48 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:56:48.327 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8caa40db-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:56:48 np0005548731 NetworkManager[49182]: <info>  [1765007808.3638] manager: (tap8caa40db-20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/409)
Dec  6 02:56:48 np0005548731 nova_compute[232433]: 2025-12-06 07:56:48.364 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:56:48 np0005548731 kernel: tap8caa40db-20: entered promiscuous mode
Dec  6 02:56:48 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:56:48.366 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap8caa40db-20, col_values=(('external_ids', {'iface-id': '13935eb5-7198-4d3b-b91f-e4e0daf2886d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:56:48 np0005548731 ovn_controller[133927]: 2025-12-06T07:56:48Z|00885|binding|INFO|Releasing lport 13935eb5-7198-4d3b-b91f-e4e0daf2886d from this chassis (sb_readonly=0)
Dec  6 02:56:48 np0005548731 nova_compute[232433]: 2025-12-06 07:56:48.367 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:56:48 np0005548731 nova_compute[232433]: 2025-12-06 07:56:48.382 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:56:48 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:56:48.383 143965 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/8caa40db-27da-43ab-86ca-042284636e71.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/8caa40db-27da-43ab-86ca-042284636e71.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Dec  6 02:56:48 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:56:48.384 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[2ccf5122-55c2-487f-a267-424362c7761e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:56:48 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:56:48.384 143965 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec  6 02:56:48 np0005548731 ovn_metadata_agent[143960]: global
Dec  6 02:56:48 np0005548731 ovn_metadata_agent[143960]:    log         /dev/log local0 debug
Dec  6 02:56:48 np0005548731 ovn_metadata_agent[143960]:    log-tag     haproxy-metadata-proxy-8caa40db-27da-43ab-86ca-042284636e71
Dec  6 02:56:48 np0005548731 ovn_metadata_agent[143960]:    user        root
Dec  6 02:56:48 np0005548731 ovn_metadata_agent[143960]:    group       root
Dec  6 02:56:48 np0005548731 ovn_metadata_agent[143960]:    maxconn     1024
Dec  6 02:56:48 np0005548731 ovn_metadata_agent[143960]:    pidfile     /var/lib/neutron/external/pids/8caa40db-27da-43ab-86ca-042284636e71.pid.haproxy
Dec  6 02:56:48 np0005548731 ovn_metadata_agent[143960]:    daemon
Dec  6 02:56:48 np0005548731 ovn_metadata_agent[143960]: 
Dec  6 02:56:48 np0005548731 ovn_metadata_agent[143960]: defaults
Dec  6 02:56:48 np0005548731 ovn_metadata_agent[143960]:    log global
Dec  6 02:56:48 np0005548731 ovn_metadata_agent[143960]:    mode http
Dec  6 02:56:48 np0005548731 ovn_metadata_agent[143960]:    option httplog
Dec  6 02:56:48 np0005548731 ovn_metadata_agent[143960]:    option dontlognull
Dec  6 02:56:48 np0005548731 ovn_metadata_agent[143960]:    option http-server-close
Dec  6 02:56:48 np0005548731 ovn_metadata_agent[143960]:    option forwardfor
Dec  6 02:56:48 np0005548731 ovn_metadata_agent[143960]:    retries                 3
Dec  6 02:56:48 np0005548731 ovn_metadata_agent[143960]:    timeout http-request    30s
Dec  6 02:56:48 np0005548731 ovn_metadata_agent[143960]:    timeout connect         30s
Dec  6 02:56:48 np0005548731 ovn_metadata_agent[143960]:    timeout client          32s
Dec  6 02:56:48 np0005548731 ovn_metadata_agent[143960]:    timeout server          32s
Dec  6 02:56:48 np0005548731 ovn_metadata_agent[143960]:    timeout http-keep-alive 30s
Dec  6 02:56:48 np0005548731 ovn_metadata_agent[143960]: 
Dec  6 02:56:48 np0005548731 ovn_metadata_agent[143960]: 
Dec  6 02:56:48 np0005548731 ovn_metadata_agent[143960]: listen listener
Dec  6 02:56:48 np0005548731 ovn_metadata_agent[143960]:    bind 169.254.169.254:80
Dec  6 02:56:48 np0005548731 ovn_metadata_agent[143960]:    server metadata /var/lib/neutron/metadata_proxy
Dec  6 02:56:48 np0005548731 ovn_metadata_agent[143960]:    http-request add-header X-OVN-Network-ID 8caa40db-27da-43ab-86ca-042284636e71
Dec  6 02:56:48 np0005548731 ovn_metadata_agent[143960]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Dec  6 02:56:48 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:56:48.385 143965 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-8caa40db-27da-43ab-86ca-042284636e71', 'env', 'PROCESS_TAG=haproxy-8caa40db-27da-43ab-86ca-042284636e71', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/8caa40db-27da-43ab-86ca-042284636e71.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Dec  6 02:56:48 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:56:48 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:56:48 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:56:48.428 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:56:48 np0005548731 nova_compute[232433]: 2025-12-06 07:56:48.453 232437 DEBUG nova.virt.driver [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Emitting event <LifecycleEvent: 1765007808.4531603, 9aa31d67-6e8e-4301-99db-832dd1fe00bc => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 02:56:48 np0005548731 nova_compute[232433]: 2025-12-06 07:56:48.454 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 9aa31d67-6e8e-4301-99db-832dd1fe00bc] VM Started (Lifecycle Event)#033[00m
Dec  6 02:56:48 np0005548731 nova_compute[232433]: 2025-12-06 07:56:48.477 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 9aa31d67-6e8e-4301-99db-832dd1fe00bc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:56:48 np0005548731 nova_compute[232433]: 2025-12-06 07:56:48.484 232437 DEBUG nova.virt.driver [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Emitting event <LifecycleEvent: 1765007808.455516, 9aa31d67-6e8e-4301-99db-832dd1fe00bc => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 02:56:48 np0005548731 nova_compute[232433]: 2025-12-06 07:56:48.485 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 9aa31d67-6e8e-4301-99db-832dd1fe00bc] VM Paused (Lifecycle Event)#033[00m
Dec  6 02:56:48 np0005548731 nova_compute[232433]: 2025-12-06 07:56:48.501 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 9aa31d67-6e8e-4301-99db-832dd1fe00bc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:56:48 np0005548731 nova_compute[232433]: 2025-12-06 07:56:48.504 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 9aa31d67-6e8e-4301-99db-832dd1fe00bc] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  6 02:56:48 np0005548731 nova_compute[232433]: 2025-12-06 07:56:48.525 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 9aa31d67-6e8e-4301-99db-832dd1fe00bc] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  6 02:56:48 np0005548731 podman[316506]: 2025-12-06 07:56:48.720372368 +0000 UTC m=+0.043300370 container create 55558c9c550399d16fb5212f30e2027d975df5bc5c30105e4f4062aa9e22e14d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8caa40db-27da-43ab-86ca-042284636e71, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team)
Dec  6 02:56:48 np0005548731 systemd[1]: Started libpod-conmon-55558c9c550399d16fb5212f30e2027d975df5bc5c30105e4f4062aa9e22e14d.scope.
Dec  6 02:56:48 np0005548731 systemd[1]: Started libcrun container.
Dec  6 02:56:48 np0005548731 podman[316506]: 2025-12-06 07:56:48.697582831 +0000 UTC m=+0.020510853 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec  6 02:56:48 np0005548731 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d1e995edef3aae70dc0406e49798650db1c28fec096b83d9cbea2bc9bd0d1e45/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec  6 02:56:48 np0005548731 podman[316506]: 2025-12-06 07:56:48.809064089 +0000 UTC m=+0.131992101 container init 55558c9c550399d16fb5212f30e2027d975df5bc5c30105e4f4062aa9e22e14d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8caa40db-27da-43ab-86ca-042284636e71, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Dec  6 02:56:48 np0005548731 podman[316506]: 2025-12-06 07:56:48.815007874 +0000 UTC m=+0.137935876 container start 55558c9c550399d16fb5212f30e2027d975df5bc5c30105e4f4062aa9e22e14d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8caa40db-27da-43ab-86ca-042284636e71, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Dec  6 02:56:48 np0005548731 neutron-haproxy-ovnmeta-8caa40db-27da-43ab-86ca-042284636e71[316521]: [NOTICE]   (316525) : New worker (316527) forked
Dec  6 02:56:48 np0005548731 neutron-haproxy-ovnmeta-8caa40db-27da-43ab-86ca-042284636e71[316521]: [NOTICE]   (316525) : Loading success.
Dec  6 02:56:48 np0005548731 ceph-mon[77458]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #130. Immutable memtables: 0.
Dec  6 02:56:48 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:56:48.954768) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec  6 02:56:48 np0005548731 ceph-mon[77458]: rocksdb: [db/flush_job.cc:856] [default] [JOB 81] Flushing memtable with next log file: 130
Dec  6 02:56:48 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765007808954822, "job": 81, "event": "flush_started", "num_memtables": 1, "num_entries": 2368, "num_deletes": 262, "total_data_size": 5457712, "memory_usage": 5541104, "flush_reason": "Manual Compaction"}
Dec  6 02:56:48 np0005548731 ceph-mon[77458]: rocksdb: [db/flush_job.cc:885] [default] [JOB 81] Level-0 flush table #131: started
Dec  6 02:56:48 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765007808972082, "cf_name": "default", "job": 81, "event": "table_file_creation", "file_number": 131, "file_size": 3565501, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 64923, "largest_seqno": 67286, "table_properties": {"data_size": 3556074, "index_size": 5856, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2501, "raw_key_size": 20437, "raw_average_key_size": 20, "raw_value_size": 3536743, "raw_average_value_size": 3543, "num_data_blocks": 254, "num_entries": 998, "num_filter_entries": 998, "num_deletions": 262, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765007613, "oldest_key_time": 1765007613, "file_creation_time": 1765007808, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bc01f9a1-fda8-4008-aee1-1fa5ff4371a1", "db_session_id": "CWBL8WMDM4D79BU86E9T", "orig_file_number": 131, "seqno_to_time_mapping": "N/A"}}
Dec  6 02:56:48 np0005548731 ceph-mon[77458]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 81] Flush lasted 17386 microseconds, and 7637 cpu microseconds.
Dec  6 02:56:48 np0005548731 ceph-mon[77458]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  6 02:56:48 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:56:48.972153) [db/flush_job.cc:967] [default] [JOB 81] Level-0 flush table #131: 3565501 bytes OK
Dec  6 02:56:48 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:56:48.972176) [db/memtable_list.cc:519] [default] Level-0 commit table #131 started
Dec  6 02:56:48 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:56:48.973606) [db/memtable_list.cc:722] [default] Level-0 commit table #131: memtable #1 done
Dec  6 02:56:48 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:56:48.973619) EVENT_LOG_v1 {"time_micros": 1765007808973615, "job": 81, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec  6 02:56:48 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:56:48.973635) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec  6 02:56:48 np0005548731 ceph-mon[77458]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 81] Try to delete WAL files size 5447150, prev total WAL file size 5447861, number of live WAL files 2.
Dec  6 02:56:48 np0005548731 ceph-mon[77458]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000127.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  6 02:56:48 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:56:48.974755) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0032323734' seq:72057594037927935, type:22 .. '6C6F676D0032353238' seq:0, type:0; will stop at (end)
Dec  6 02:56:48 np0005548731 ceph-mon[77458]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 82] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec  6 02:56:48 np0005548731 ceph-mon[77458]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 81 Base level 0, inputs: [131(3481KB)], [129(10MB)]
Dec  6 02:56:48 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765007808974799, "job": 82, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [131], "files_L6": [129], "score": -1, "input_data_size": 14653530, "oldest_snapshot_seqno": -1}
Dec  6 02:56:49 np0005548731 ceph-mon[77458]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 82] Generated table #132: 9817 keys, 14481684 bytes, temperature: kUnknown
Dec  6 02:56:49 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765007809055399, "cf_name": "default", "job": 82, "event": "table_file_creation", "file_number": 132, "file_size": 14481684, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 14415815, "index_size": 40220, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 24581, "raw_key_size": 258220, "raw_average_key_size": 26, "raw_value_size": 14240960, "raw_average_value_size": 1450, "num_data_blocks": 1547, "num_entries": 9817, "num_filter_entries": 9817, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765002564, "oldest_key_time": 0, "file_creation_time": 1765007808, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bc01f9a1-fda8-4008-aee1-1fa5ff4371a1", "db_session_id": "CWBL8WMDM4D79BU86E9T", "orig_file_number": 132, "seqno_to_time_mapping": "N/A"}}
Dec  6 02:56:49 np0005548731 ceph-mon[77458]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  6 02:56:49 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:56:49.055684) [db/compaction/compaction_job.cc:1663] [default] [JOB 82] Compacted 1@0 + 1@6 files to L6 => 14481684 bytes
Dec  6 02:56:49 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:56:49.057089) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 181.6 rd, 179.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.4, 10.6 +0.0 blob) out(13.8 +0.0 blob), read-write-amplify(8.2) write-amplify(4.1) OK, records in: 10359, records dropped: 542 output_compression: NoCompression
Dec  6 02:56:49 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:56:49.057107) EVENT_LOG_v1 {"time_micros": 1765007809057098, "job": 82, "event": "compaction_finished", "compaction_time_micros": 80671, "compaction_time_cpu_micros": 33955, "output_level": 6, "num_output_files": 1, "total_output_size": 14481684, "num_input_records": 10359, "num_output_records": 9817, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec  6 02:56:49 np0005548731 ceph-mon[77458]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000131.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  6 02:56:49 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765007809057916, "job": 82, "event": "table_file_deletion", "file_number": 131}
Dec  6 02:56:49 np0005548731 ceph-mon[77458]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000129.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  6 02:56:49 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765007809059978, "job": 82, "event": "table_file_deletion", "file_number": 129}
Dec  6 02:56:49 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:56:48.974671) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 02:56:49 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:56:49.060127) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 02:56:49 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:56:49.060136) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 02:56:49 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:56:49.060139) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 02:56:49 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:56:49.060141) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 02:56:49 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:56:49.060143) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 02:56:49 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e400 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:56:49 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:56:49 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:56:49 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:56:49.199 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:56:49 np0005548731 nova_compute[232433]: 2025-12-06 07:56:49.260 232437 DEBUG nova.compute.manager [req-512de56d-2293-4767-9db3-5fb030f3c5e3 req-4333f2e2-e613-4846-8bb5-d65931b2eef6 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 9aa31d67-6e8e-4301-99db-832dd1fe00bc] Received event network-vif-plugged-3b228814-72c3-4086-9740-e056ea1c6d7b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:56:49 np0005548731 nova_compute[232433]: 2025-12-06 07:56:49.260 232437 DEBUG oslo_concurrency.lockutils [req-512de56d-2293-4767-9db3-5fb030f3c5e3 req-4333f2e2-e613-4846-8bb5-d65931b2eef6 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "9aa31d67-6e8e-4301-99db-832dd1fe00bc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:56:49 np0005548731 nova_compute[232433]: 2025-12-06 07:56:49.261 232437 DEBUG oslo_concurrency.lockutils [req-512de56d-2293-4767-9db3-5fb030f3c5e3 req-4333f2e2-e613-4846-8bb5-d65931b2eef6 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "9aa31d67-6e8e-4301-99db-832dd1fe00bc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:56:49 np0005548731 nova_compute[232433]: 2025-12-06 07:56:49.261 232437 DEBUG oslo_concurrency.lockutils [req-512de56d-2293-4767-9db3-5fb030f3c5e3 req-4333f2e2-e613-4846-8bb5-d65931b2eef6 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "9aa31d67-6e8e-4301-99db-832dd1fe00bc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:56:49 np0005548731 nova_compute[232433]: 2025-12-06 07:56:49.261 232437 DEBUG nova.compute.manager [req-512de56d-2293-4767-9db3-5fb030f3c5e3 req-4333f2e2-e613-4846-8bb5-d65931b2eef6 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 9aa31d67-6e8e-4301-99db-832dd1fe00bc] Processing event network-vif-plugged-3b228814-72c3-4086-9740-e056ea1c6d7b _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Dec  6 02:56:49 np0005548731 nova_compute[232433]: 2025-12-06 07:56:49.261 232437 DEBUG nova.compute.manager [req-512de56d-2293-4767-9db3-5fb030f3c5e3 req-4333f2e2-e613-4846-8bb5-d65931b2eef6 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 9aa31d67-6e8e-4301-99db-832dd1fe00bc] Received event network-vif-plugged-3b228814-72c3-4086-9740-e056ea1c6d7b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:56:49 np0005548731 nova_compute[232433]: 2025-12-06 07:56:49.261 232437 DEBUG oslo_concurrency.lockutils [req-512de56d-2293-4767-9db3-5fb030f3c5e3 req-4333f2e2-e613-4846-8bb5-d65931b2eef6 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "9aa31d67-6e8e-4301-99db-832dd1fe00bc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:56:49 np0005548731 nova_compute[232433]: 2025-12-06 07:56:49.262 232437 DEBUG oslo_concurrency.lockutils [req-512de56d-2293-4767-9db3-5fb030f3c5e3 req-4333f2e2-e613-4846-8bb5-d65931b2eef6 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "9aa31d67-6e8e-4301-99db-832dd1fe00bc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:56:49 np0005548731 nova_compute[232433]: 2025-12-06 07:56:49.262 232437 DEBUG oslo_concurrency.lockutils [req-512de56d-2293-4767-9db3-5fb030f3c5e3 req-4333f2e2-e613-4846-8bb5-d65931b2eef6 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "9aa31d67-6e8e-4301-99db-832dd1fe00bc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:56:49 np0005548731 nova_compute[232433]: 2025-12-06 07:56:49.262 232437 DEBUG nova.compute.manager [req-512de56d-2293-4767-9db3-5fb030f3c5e3 req-4333f2e2-e613-4846-8bb5-d65931b2eef6 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 9aa31d67-6e8e-4301-99db-832dd1fe00bc] No waiting events found dispatching network-vif-plugged-3b228814-72c3-4086-9740-e056ea1c6d7b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 02:56:49 np0005548731 nova_compute[232433]: 2025-12-06 07:56:49.262 232437 WARNING nova.compute.manager [req-512de56d-2293-4767-9db3-5fb030f3c5e3 req-4333f2e2-e613-4846-8bb5-d65931b2eef6 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 9aa31d67-6e8e-4301-99db-832dd1fe00bc] Received unexpected event network-vif-plugged-3b228814-72c3-4086-9740-e056ea1c6d7b for instance with vm_state building and task_state spawning.#033[00m
Dec  6 02:56:49 np0005548731 nova_compute[232433]: 2025-12-06 07:56:49.263 232437 DEBUG nova.compute.manager [None req-947b8352-420e-4bb1-a598-5b86900f3d69 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] [instance: 9aa31d67-6e8e-4301-99db-832dd1fe00bc] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec  6 02:56:49 np0005548731 nova_compute[232433]: 2025-12-06 07:56:49.267 232437 DEBUG nova.virt.driver [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Emitting event <LifecycleEvent: 1765007809.2675288, 9aa31d67-6e8e-4301-99db-832dd1fe00bc => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 02:56:49 np0005548731 nova_compute[232433]: 2025-12-06 07:56:49.267 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 9aa31d67-6e8e-4301-99db-832dd1fe00bc] VM Resumed (Lifecycle Event)#033[00m
Dec  6 02:56:49 np0005548731 nova_compute[232433]: 2025-12-06 07:56:49.270 232437 DEBUG nova.virt.libvirt.driver [None req-947b8352-420e-4bb1-a598-5b86900f3d69 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] [instance: 9aa31d67-6e8e-4301-99db-832dd1fe00bc] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Dec  6 02:56:49 np0005548731 nova_compute[232433]: 2025-12-06 07:56:49.274 232437 INFO nova.virt.libvirt.driver [-] [instance: 9aa31d67-6e8e-4301-99db-832dd1fe00bc] Instance spawned successfully.#033[00m
Dec  6 02:56:49 np0005548731 nova_compute[232433]: 2025-12-06 07:56:49.275 232437 DEBUG nova.virt.libvirt.driver [None req-947b8352-420e-4bb1-a598-5b86900f3d69 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] [instance: 9aa31d67-6e8e-4301-99db-832dd1fe00bc] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Dec  6 02:56:49 np0005548731 nova_compute[232433]: 2025-12-06 07:56:49.298 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 9aa31d67-6e8e-4301-99db-832dd1fe00bc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:56:49 np0005548731 nova_compute[232433]: 2025-12-06 07:56:49.306 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 9aa31d67-6e8e-4301-99db-832dd1fe00bc] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  6 02:56:49 np0005548731 nova_compute[232433]: 2025-12-06 07:56:49.309 232437 DEBUG nova.virt.libvirt.driver [None req-947b8352-420e-4bb1-a598-5b86900f3d69 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] [instance: 9aa31d67-6e8e-4301-99db-832dd1fe00bc] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 02:56:49 np0005548731 nova_compute[232433]: 2025-12-06 07:56:49.310 232437 DEBUG nova.virt.libvirt.driver [None req-947b8352-420e-4bb1-a598-5b86900f3d69 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] [instance: 9aa31d67-6e8e-4301-99db-832dd1fe00bc] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 02:56:49 np0005548731 nova_compute[232433]: 2025-12-06 07:56:49.310 232437 DEBUG nova.virt.libvirt.driver [None req-947b8352-420e-4bb1-a598-5b86900f3d69 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] [instance: 9aa31d67-6e8e-4301-99db-832dd1fe00bc] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 02:56:49 np0005548731 nova_compute[232433]: 2025-12-06 07:56:49.311 232437 DEBUG nova.virt.libvirt.driver [None req-947b8352-420e-4bb1-a598-5b86900f3d69 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] [instance: 9aa31d67-6e8e-4301-99db-832dd1fe00bc] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 02:56:49 np0005548731 nova_compute[232433]: 2025-12-06 07:56:49.311 232437 DEBUG nova.virt.libvirt.driver [None req-947b8352-420e-4bb1-a598-5b86900f3d69 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] [instance: 9aa31d67-6e8e-4301-99db-832dd1fe00bc] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 02:56:49 np0005548731 nova_compute[232433]: 2025-12-06 07:56:49.312 232437 DEBUG nova.virt.libvirt.driver [None req-947b8352-420e-4bb1-a598-5b86900f3d69 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] [instance: 9aa31d67-6e8e-4301-99db-832dd1fe00bc] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 02:56:49 np0005548731 nova_compute[232433]: 2025-12-06 07:56:49.337 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 9aa31d67-6e8e-4301-99db-832dd1fe00bc] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  6 02:56:49 np0005548731 nova_compute[232433]: 2025-12-06 07:56:49.426 232437 INFO nova.compute.manager [None req-947b8352-420e-4bb1-a598-5b86900f3d69 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] [instance: 9aa31d67-6e8e-4301-99db-832dd1fe00bc] Took 7.80 seconds to spawn the instance on the hypervisor.#033[00m
Dec  6 02:56:49 np0005548731 nova_compute[232433]: 2025-12-06 07:56:49.427 232437 DEBUG nova.compute.manager [None req-947b8352-420e-4bb1-a598-5b86900f3d69 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] [instance: 9aa31d67-6e8e-4301-99db-832dd1fe00bc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:56:49 np0005548731 nova_compute[232433]: 2025-12-06 07:56:49.502 232437 INFO nova.compute.manager [None req-947b8352-420e-4bb1-a598-5b86900f3d69 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] [instance: 9aa31d67-6e8e-4301-99db-832dd1fe00bc] Took 8.73 seconds to build instance.#033[00m
Dec  6 02:56:49 np0005548731 nova_compute[232433]: 2025-12-06 07:56:49.519 232437 DEBUG oslo_concurrency.lockutils [None req-947b8352-420e-4bb1-a598-5b86900f3d69 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] Lock "9aa31d67-6e8e-4301-99db-832dd1fe00bc" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.844s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:56:49 np0005548731 nova_compute[232433]: 2025-12-06 07:56:49.797 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:56:50 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:56:50.241 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=9f96b960-b4f2-40bd-ae99-08121f5e8b78, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '72'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:56:50 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:56:50 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:56:50 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:56:50.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:56:50 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  6 02:56:50 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/826770176' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  6 02:56:50 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  6 02:56:50 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/826770176' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  6 02:56:51 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:56:51 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:56:51 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:56:51.203 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:56:51 np0005548731 nova_compute[232433]: 2025-12-06 07:56:51.400 232437 DEBUG oslo_concurrency.lockutils [None req-63bcb24a-11bf-4355-a668-545cc4730c51 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] Acquiring lock "21c591a0-5eed-4aa8-a68b-59a616b16e2b" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:56:51 np0005548731 nova_compute[232433]: 2025-12-06 07:56:51.400 232437 DEBUG oslo_concurrency.lockutils [None req-63bcb24a-11bf-4355-a668-545cc4730c51 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] Lock "21c591a0-5eed-4aa8-a68b-59a616b16e2b" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:56:51 np0005548731 nova_compute[232433]: 2025-12-06 07:56:51.418 232437 INFO nova.compute.manager [None req-63bcb24a-11bf-4355-a668-545cc4730c51 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] [instance: 21c591a0-5eed-4aa8-a68b-59a616b16e2b] Detaching volume cbaeb237-1f40-498b-93c2-2ed85aff86b7#033[00m
Dec  6 02:56:51 np0005548731 nova_compute[232433]: 2025-12-06 07:56:51.598 232437 INFO nova.virt.block_device [None req-63bcb24a-11bf-4355-a668-545cc4730c51 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] [instance: 21c591a0-5eed-4aa8-a68b-59a616b16e2b] Attempting to driver detach volume cbaeb237-1f40-498b-93c2-2ed85aff86b7 from mountpoint /dev/vdc#033[00m
Dec  6 02:56:51 np0005548731 nova_compute[232433]: 2025-12-06 07:56:51.606 232437 DEBUG nova.virt.libvirt.driver [None req-63bcb24a-11bf-4355-a668-545cc4730c51 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] Attempting to detach device vdc from instance 21c591a0-5eed-4aa8-a68b-59a616b16e2b from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487#033[00m
Dec  6 02:56:51 np0005548731 nova_compute[232433]: 2025-12-06 07:56:51.607 232437 DEBUG nova.virt.libvirt.guest [None req-63bcb24a-11bf-4355-a668-545cc4730c51 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] detach device xml: <disk type="network" device="disk">
Dec  6 02:56:51 np0005548731 nova_compute[232433]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Dec  6 02:56:51 np0005548731 nova_compute[232433]:  <source protocol="rbd" name="volumes/volume-cbaeb237-1f40-498b-93c2-2ed85aff86b7">
Dec  6 02:56:51 np0005548731 nova_compute[232433]:    <host name="192.168.122.100" port="6789"/>
Dec  6 02:56:51 np0005548731 nova_compute[232433]:    <host name="192.168.122.102" port="6789"/>
Dec  6 02:56:51 np0005548731 nova_compute[232433]:    <host name="192.168.122.101" port="6789"/>
Dec  6 02:56:51 np0005548731 nova_compute[232433]:  </source>
Dec  6 02:56:51 np0005548731 nova_compute[232433]:  <target dev="vdc" bus="virtio"/>
Dec  6 02:56:51 np0005548731 nova_compute[232433]:  <serial>cbaeb237-1f40-498b-93c2-2ed85aff86b7</serial>
Dec  6 02:56:51 np0005548731 nova_compute[232433]:  <address type="pci" domain="0x0000" bus="0x07" slot="0x00" function="0x0"/>
Dec  6 02:56:51 np0005548731 nova_compute[232433]: </disk>
Dec  6 02:56:51 np0005548731 nova_compute[232433]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Dec  6 02:56:51 np0005548731 nova_compute[232433]: 2025-12-06 07:56:51.615 232437 INFO nova.virt.libvirt.driver [None req-63bcb24a-11bf-4355-a668-545cc4730c51 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] Successfully detached device vdc from instance 21c591a0-5eed-4aa8-a68b-59a616b16e2b from the persistent domain config.#033[00m
Dec  6 02:56:51 np0005548731 nova_compute[232433]: 2025-12-06 07:56:51.616 232437 DEBUG nova.virt.libvirt.driver [None req-63bcb24a-11bf-4355-a668-545cc4730c51 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] (1/8): Attempting to detach device vdc with device alias virtio-disk2 from instance 21c591a0-5eed-4aa8-a68b-59a616b16e2b from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523#033[00m
Dec  6 02:56:51 np0005548731 nova_compute[232433]: 2025-12-06 07:56:51.616 232437 DEBUG nova.virt.libvirt.guest [None req-63bcb24a-11bf-4355-a668-545cc4730c51 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] detach device xml: <disk type="network" device="disk">
Dec  6 02:56:51 np0005548731 nova_compute[232433]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Dec  6 02:56:51 np0005548731 nova_compute[232433]:  <source protocol="rbd" name="volumes/volume-cbaeb237-1f40-498b-93c2-2ed85aff86b7">
Dec  6 02:56:51 np0005548731 nova_compute[232433]:    <host name="192.168.122.100" port="6789"/>
Dec  6 02:56:51 np0005548731 nova_compute[232433]:    <host name="192.168.122.102" port="6789"/>
Dec  6 02:56:51 np0005548731 nova_compute[232433]:    <host name="192.168.122.101" port="6789"/>
Dec  6 02:56:51 np0005548731 nova_compute[232433]:  </source>
Dec  6 02:56:51 np0005548731 nova_compute[232433]:  <target dev="vdc" bus="virtio"/>
Dec  6 02:56:51 np0005548731 nova_compute[232433]:  <serial>cbaeb237-1f40-498b-93c2-2ed85aff86b7</serial>
Dec  6 02:56:51 np0005548731 nova_compute[232433]:  <address type="pci" domain="0x0000" bus="0x07" slot="0x00" function="0x0"/>
Dec  6 02:56:51 np0005548731 nova_compute[232433]: </disk>
Dec  6 02:56:51 np0005548731 nova_compute[232433]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Dec  6 02:56:51 np0005548731 nova_compute[232433]: 2025-12-06 07:56:51.676 232437 DEBUG nova.virt.libvirt.driver [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Received event <DeviceRemovedEvent: 1765007811.6765716, 21c591a0-5eed-4aa8-a68b-59a616b16e2b => virtio-disk2> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370#033[00m
Dec  6 02:56:51 np0005548731 nova_compute[232433]: 2025-12-06 07:56:51.678 232437 DEBUG nova.virt.libvirt.driver [None req-63bcb24a-11bf-4355-a668-545cc4730c51 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] Start waiting for the detach event from libvirt for device vdc with device alias virtio-disk2 for instance 21c591a0-5eed-4aa8-a68b-59a616b16e2b _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599#033[00m
Dec  6 02:56:51 np0005548731 nova_compute[232433]: 2025-12-06 07:56:51.680 232437 INFO nova.virt.libvirt.driver [None req-63bcb24a-11bf-4355-a668-545cc4730c51 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] Successfully detached device vdc from instance 21c591a0-5eed-4aa8-a68b-59a616b16e2b from the live domain config.#033[00m
Dec  6 02:56:51 np0005548731 nova_compute[232433]: 2025-12-06 07:56:51.942 232437 DEBUG nova.objects.instance [None req-63bcb24a-11bf-4355-a668-545cc4730c51 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] Lazy-loading 'flavor' on Instance uuid 21c591a0-5eed-4aa8-a68b-59a616b16e2b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:56:51 np0005548731 nova_compute[232433]: 2025-12-06 07:56:51.979 232437 DEBUG oslo_concurrency.lockutils [None req-63bcb24a-11bf-4355-a668-545cc4730c51 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] Lock "21c591a0-5eed-4aa8-a68b-59a616b16e2b" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 0.579s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:56:52 np0005548731 nova_compute[232433]: 2025-12-06 07:56:52.143 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:56:52 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:56:52 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:56:52 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:56:52.433 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:56:53 np0005548731 nova_compute[232433]: 2025-12-06 07:56:53.025 232437 DEBUG nova.compute.manager [req-d8a071f8-e930-4bf0-856f-fcea4148b6e7 req-fefa10bd-a84a-4b92-bc87-ae0faeaa5fe4 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 9aa31d67-6e8e-4301-99db-832dd1fe00bc] Received event network-changed-3b228814-72c3-4086-9740-e056ea1c6d7b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:56:53 np0005548731 nova_compute[232433]: 2025-12-06 07:56:53.025 232437 DEBUG nova.compute.manager [req-d8a071f8-e930-4bf0-856f-fcea4148b6e7 req-fefa10bd-a84a-4b92-bc87-ae0faeaa5fe4 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 9aa31d67-6e8e-4301-99db-832dd1fe00bc] Refreshing instance network info cache due to event network-changed-3b228814-72c3-4086-9740-e056ea1c6d7b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  6 02:56:53 np0005548731 nova_compute[232433]: 2025-12-06 07:56:53.025 232437 DEBUG oslo_concurrency.lockutils [req-d8a071f8-e930-4bf0-856f-fcea4148b6e7 req-fefa10bd-a84a-4b92-bc87-ae0faeaa5fe4 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-9aa31d67-6e8e-4301-99db-832dd1fe00bc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 02:56:53 np0005548731 nova_compute[232433]: 2025-12-06 07:56:53.026 232437 DEBUG oslo_concurrency.lockutils [req-d8a071f8-e930-4bf0-856f-fcea4148b6e7 req-fefa10bd-a84a-4b92-bc87-ae0faeaa5fe4 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-9aa31d67-6e8e-4301-99db-832dd1fe00bc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 02:56:53 np0005548731 nova_compute[232433]: 2025-12-06 07:56:53.026 232437 DEBUG nova.network.neutron [req-d8a071f8-e930-4bf0-856f-fcea4148b6e7 req-fefa10bd-a84a-4b92-bc87-ae0faeaa5fe4 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 9aa31d67-6e8e-4301-99db-832dd1fe00bc] Refreshing network info cache for port 3b228814-72c3-4086-9740-e056ea1c6d7b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  6 02:56:53 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:56:53 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:56:53 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:56:53.206 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:56:53 np0005548731 ceph-mon[77458]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #133. Immutable memtables: 0.
Dec  6 02:56:53 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:56:53.703969) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec  6 02:56:53 np0005548731 ceph-mon[77458]: rocksdb: [db/flush_job.cc:856] [default] [JOB 83] Flushing memtable with next log file: 133
Dec  6 02:56:53 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765007813704007, "job": 83, "event": "flush_started", "num_memtables": 1, "num_entries": 315, "num_deletes": 251, "total_data_size": 170204, "memory_usage": 176816, "flush_reason": "Manual Compaction"}
Dec  6 02:56:53 np0005548731 ceph-mon[77458]: rocksdb: [db/flush_job.cc:885] [default] [JOB 83] Level-0 flush table #134: started
Dec  6 02:56:53 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765007813707492, "cf_name": "default", "job": 83, "event": "table_file_creation", "file_number": 134, "file_size": 111806, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 67291, "largest_seqno": 67601, "table_properties": {"data_size": 109831, "index_size": 202, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 709, "raw_key_size": 5149, "raw_average_key_size": 18, "raw_value_size": 105890, "raw_average_value_size": 379, "num_data_blocks": 9, "num_entries": 279, "num_filter_entries": 279, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765007808, "oldest_key_time": 1765007808, "file_creation_time": 1765007813, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bc01f9a1-fda8-4008-aee1-1fa5ff4371a1", "db_session_id": "CWBL8WMDM4D79BU86E9T", "orig_file_number": 134, "seqno_to_time_mapping": "N/A"}}
Dec  6 02:56:53 np0005548731 ceph-mon[77458]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 83] Flush lasted 3551 microseconds, and 887 cpu microseconds.
Dec  6 02:56:53 np0005548731 ceph-mon[77458]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  6 02:56:53 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:56:53.707521) [db/flush_job.cc:967] [default] [JOB 83] Level-0 flush table #134: 111806 bytes OK
Dec  6 02:56:53 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:56:53.707548) [db/memtable_list.cc:519] [default] Level-0 commit table #134 started
Dec  6 02:56:53 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:56:53.709228) [db/memtable_list.cc:722] [default] Level-0 commit table #134: memtable #1 done
Dec  6 02:56:53 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:56:53.709240) EVENT_LOG_v1 {"time_micros": 1765007813709236, "job": 83, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec  6 02:56:53 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:56:53.709253) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec  6 02:56:53 np0005548731 ceph-mon[77458]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 83] Try to delete WAL files size 167952, prev total WAL file size 167952, number of live WAL files 2.
Dec  6 02:56:53 np0005548731 ceph-mon[77458]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000130.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  6 02:56:53 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:56:53.709529) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730035353232' seq:72057594037927935, type:22 .. '7061786F730035373734' seq:0, type:0; will stop at (end)
Dec  6 02:56:53 np0005548731 ceph-mon[77458]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 84] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec  6 02:56:53 np0005548731 ceph-mon[77458]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 83 Base level 0, inputs: [134(109KB)], [132(13MB)]
Dec  6 02:56:53 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765007813709566, "job": 84, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [134], "files_L6": [132], "score": -1, "input_data_size": 14593490, "oldest_snapshot_seqno": -1}
Dec  6 02:56:53 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  6 02:56:53 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1180554779' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  6 02:56:53 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  6 02:56:53 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1180554779' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  6 02:56:53 np0005548731 ceph-mon[77458]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 84] Generated table #135: 9586 keys, 12657435 bytes, temperature: kUnknown
Dec  6 02:56:53 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765007813768522, "cf_name": "default", "job": 84, "event": "table_file_creation", "file_number": 135, "file_size": 12657435, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12594841, "index_size": 37521, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 24005, "raw_key_size": 254115, "raw_average_key_size": 26, "raw_value_size": 12425719, "raw_average_value_size": 1296, "num_data_blocks": 1425, "num_entries": 9586, "num_filter_entries": 9586, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765002564, "oldest_key_time": 0, "file_creation_time": 1765007813, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bc01f9a1-fda8-4008-aee1-1fa5ff4371a1", "db_session_id": "CWBL8WMDM4D79BU86E9T", "orig_file_number": 135, "seqno_to_time_mapping": "N/A"}}
Dec  6 02:56:53 np0005548731 ceph-mon[77458]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  6 02:56:53 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:56:53.768826) [db/compaction/compaction_job.cc:1663] [default] [JOB 84] Compacted 1@0 + 1@6 files to L6 => 12657435 bytes
Dec  6 02:56:53 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:56:53.770349) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 247.1 rd, 214.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.1, 13.8 +0.0 blob) out(12.1 +0.0 blob), read-write-amplify(243.7) write-amplify(113.2) OK, records in: 10096, records dropped: 510 output_compression: NoCompression
Dec  6 02:56:53 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:56:53.770369) EVENT_LOG_v1 {"time_micros": 1765007813770360, "job": 84, "event": "compaction_finished", "compaction_time_micros": 59065, "compaction_time_cpu_micros": 28742, "output_level": 6, "num_output_files": 1, "total_output_size": 12657435, "num_input_records": 10096, "num_output_records": 9586, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec  6 02:56:53 np0005548731 ceph-mon[77458]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000134.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  6 02:56:53 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765007813770517, "job": 84, "event": "table_file_deletion", "file_number": 134}
Dec  6 02:56:53 np0005548731 ceph-mon[77458]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000132.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  6 02:56:53 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765007813772791, "job": 84, "event": "table_file_deletion", "file_number": 132}
Dec  6 02:56:53 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:56:53.709475) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 02:56:53 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:56:53.772842) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 02:56:53 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:56:53.772846) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 02:56:53 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:56:53.772847) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 02:56:53 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:56:53.772849) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 02:56:53 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:56:53.772850) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 02:56:53 np0005548731 nova_compute[232433]: 2025-12-06 07:56:53.968 232437 DEBUG nova.compute.manager [req-e4e4773f-1de2-4d02-8c74-2a5dfd65f80c req-4d0be138-e696-461d-b798-fe5b1f679d55 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 21c591a0-5eed-4aa8-a68b-59a616b16e2b] Received event network-changed-c76db338-0396-40a2-82b3-0c720d28d2bd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:56:53 np0005548731 nova_compute[232433]: 2025-12-06 07:56:53.969 232437 DEBUG nova.compute.manager [req-e4e4773f-1de2-4d02-8c74-2a5dfd65f80c req-4d0be138-e696-461d-b798-fe5b1f679d55 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 21c591a0-5eed-4aa8-a68b-59a616b16e2b] Refreshing instance network info cache due to event network-changed-c76db338-0396-40a2-82b3-0c720d28d2bd. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  6 02:56:53 np0005548731 nova_compute[232433]: 2025-12-06 07:56:53.969 232437 DEBUG oslo_concurrency.lockutils [req-e4e4773f-1de2-4d02-8c74-2a5dfd65f80c req-4d0be138-e696-461d-b798-fe5b1f679d55 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-21c591a0-5eed-4aa8-a68b-59a616b16e2b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 02:56:53 np0005548731 nova_compute[232433]: 2025-12-06 07:56:53.969 232437 DEBUG oslo_concurrency.lockutils [req-e4e4773f-1de2-4d02-8c74-2a5dfd65f80c req-4d0be138-e696-461d-b798-fe5b1f679d55 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-21c591a0-5eed-4aa8-a68b-59a616b16e2b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 02:56:53 np0005548731 nova_compute[232433]: 2025-12-06 07:56:53.969 232437 DEBUG nova.network.neutron [req-e4e4773f-1de2-4d02-8c74-2a5dfd65f80c req-4d0be138-e696-461d-b798-fe5b1f679d55 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 21c591a0-5eed-4aa8-a68b-59a616b16e2b] Refreshing network info cache for port c76db338-0396-40a2-82b3-0c720d28d2bd _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  6 02:56:54 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e400 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:56:54 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:56:54 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:56:54 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:56:54.435 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:56:54 np0005548731 nova_compute[232433]: 2025-12-06 07:56:54.800 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:56:54 np0005548731 nova_compute[232433]: 2025-12-06 07:56:54.990 232437 DEBUG nova.network.neutron [req-d8a071f8-e930-4bf0-856f-fcea4148b6e7 req-fefa10bd-a84a-4b92-bc87-ae0faeaa5fe4 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 9aa31d67-6e8e-4301-99db-832dd1fe00bc] Updated VIF entry in instance network info cache for port 3b228814-72c3-4086-9740-e056ea1c6d7b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  6 02:56:54 np0005548731 nova_compute[232433]: 2025-12-06 07:56:54.991 232437 DEBUG nova.network.neutron [req-d8a071f8-e930-4bf0-856f-fcea4148b6e7 req-fefa10bd-a84a-4b92-bc87-ae0faeaa5fe4 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 9aa31d67-6e8e-4301-99db-832dd1fe00bc] Updating instance_info_cache with network_info: [{"id": "3b228814-72c3-4086-9740-e056ea1c6d7b", "address": "fa:16:3e:b9:91:4c", "network": {"id": "8caa40db-27da-43ab-86ca-042284636e71", "bridge": "br-int", "label": "tempest-TestShelveInstance-1111687365-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.240", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c3c0564f8e9f4af9ae5b597a275c989f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3b228814-72", "ovs_interfaceid": "3b228814-72c3-4086-9740-e056ea1c6d7b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 02:56:55 np0005548731 nova_compute[232433]: 2025-12-06 07:56:55.024 232437 DEBUG oslo_concurrency.lockutils [req-d8a071f8-e930-4bf0-856f-fcea4148b6e7 req-fefa10bd-a84a-4b92-bc87-ae0faeaa5fe4 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-9aa31d67-6e8e-4301-99db-832dd1fe00bc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 02:56:55 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:56:55 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:56:55 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:56:55.209 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:56:56 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:56:56 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:56:56 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:56:56.438 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:56:56 np0005548731 nova_compute[232433]: 2025-12-06 07:56:56.927 232437 DEBUG nova.network.neutron [req-e4e4773f-1de2-4d02-8c74-2a5dfd65f80c req-4d0be138-e696-461d-b798-fe5b1f679d55 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 21c591a0-5eed-4aa8-a68b-59a616b16e2b] Updated VIF entry in instance network info cache for port c76db338-0396-40a2-82b3-0c720d28d2bd. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  6 02:56:56 np0005548731 nova_compute[232433]: 2025-12-06 07:56:56.929 232437 DEBUG nova.network.neutron [req-e4e4773f-1de2-4d02-8c74-2a5dfd65f80c req-4d0be138-e696-461d-b798-fe5b1f679d55 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 21c591a0-5eed-4aa8-a68b-59a616b16e2b] Updating instance_info_cache with network_info: [{"id": "c76db338-0396-40a2-82b3-0c720d28d2bd", "address": "fa:16:3e:31:62:02", "network": {"id": "a3764201-4b86-4407-84d2-684bd05a44b3", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-2060653314-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6164fee998c94b71a37886fe42b4c56c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc76db338-03", "ovs_interfaceid": "c76db338-0396-40a2-82b3-0c720d28d2bd", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 02:56:56 np0005548731 nova_compute[232433]: 2025-12-06 07:56:56.997 232437 DEBUG oslo_concurrency.lockutils [req-e4e4773f-1de2-4d02-8c74-2a5dfd65f80c req-4d0be138-e696-461d-b798-fe5b1f679d55 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-21c591a0-5eed-4aa8-a68b-59a616b16e2b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 02:56:57 np0005548731 nova_compute[232433]: 2025-12-06 07:56:57.148 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:56:57 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:56:57 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:56:57 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:56:57.211 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:56:58 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:56:58 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:56:58 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:56:58.441 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:56:59 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e400 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:56:59 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:56:59 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:56:59 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:56:59.215 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:56:59 np0005548731 nova_compute[232433]: 2025-12-06 07:56:59.801 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:57:00 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:57:00 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:57:00 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:57:00.446 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:57:00 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  6 02:57:00 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2001879348' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  6 02:57:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:57:00.903 143965 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:57:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:57:00.904 143965 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:57:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:57:00.904 143965 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:57:01 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:57:01 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 02:57:01 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:57:01.218 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 02:57:02 np0005548731 nova_compute[232433]: 2025-12-06 07:57:02.152 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:57:02 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:57:02 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:57:02 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:57:02.449 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:57:02 np0005548731 podman[316545]: 2025-12-06 07:57:02.909601987 +0000 UTC m=+0.065163865 container health_status 26c2ce9bdb83c6db5ccad7f5b2a7cb4e9354ab0235ad2f942ea42c8feda2142b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Dec  6 02:57:02 np0005548731 podman[316547]: 2025-12-06 07:57:02.945435524 +0000 UTC m=+0.094708348 container health_status ae4fd08cdec31d6ae2a6b91ffff7deb6ca5a8375cb52ee4b685cc6f25796824e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=multipathd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec  6 02:57:02 np0005548731 podman[316546]: 2025-12-06 07:57:02.994273869 +0000 UTC m=+0.149351425 container health_status a118c3becf56cfbcae208065771ebf10a65162bb7b96389971293e534823fb78 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_controller)
Dec  6 02:57:03 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:57:03 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:57:03 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:57:03.221 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:57:03 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  6 02:57:03 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3478441018' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  6 02:57:03 np0005548731 ovn_controller[133927]: 2025-12-06T07:57:03Z|00118|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:b9:91:4c 10.100.0.10
Dec  6 02:57:03 np0005548731 ovn_controller[133927]: 2025-12-06T07:57:03Z|00119|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:b9:91:4c 10.100.0.10
Dec  6 02:57:04 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e400 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:57:04 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:57:04 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:57:04 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:57:04.452 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:57:04 np0005548731 nova_compute[232433]: 2025-12-06 07:57:04.803 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:57:05 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:57:05 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:57:05 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:57:05.224 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:57:06 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:57:06 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:57:06 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:57:06.455 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:57:07 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Dec  6 02:57:07 np0005548731 nova_compute[232433]: 2025-12-06 07:57:07.157 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:57:07 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:57:07 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:57:07 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:57:07.226 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:57:08 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:57:08 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 02:57:08 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:57:08.458 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 02:57:08 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 02:57:08 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 02:57:08 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  6 02:57:08 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 02:57:08 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  6 02:57:08 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  6 02:57:08 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2468210167' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  6 02:57:08 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  6 02:57:08 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2468210167' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  6 02:57:09 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e400 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:57:09 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:57:09 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:57:09 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:57:09.229 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:57:09 np0005548731 nova_compute[232433]: 2025-12-06 07:57:09.806 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:57:10 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:57:10 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:57:10 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:57:10.461 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:57:11 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:57:11 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:57:11 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:57:11.231 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:57:12 np0005548731 nova_compute[232433]: 2025-12-06 07:57:12.215 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:57:12 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:57:12 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:57:12 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:57:12.464 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:57:13 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:57:13 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:57:13 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:57:13.233 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:57:14 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e400 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:57:14 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:57:14 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:57:14 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:57:14.467 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:57:14 np0005548731 nova_compute[232433]: 2025-12-06 07:57:14.808 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:57:15 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:57:15 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:57:15 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:57:15.235 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:57:16 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:57:16 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:57:16 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:57:16.470 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:57:17 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:57:17 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:57:17 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:57:17.239 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:57:17 np0005548731 nova_compute[232433]: 2025-12-06 07:57:17.275 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:57:17 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 02:57:17 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 02:57:18 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:57:18 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:57:18 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:57:18.473 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:57:19 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e400 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:57:19 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:57:19 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:57:19 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:57:19.241 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:57:19 np0005548731 nova_compute[232433]: 2025-12-06 07:57:19.852 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:57:20 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:57:20 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:57:20 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:57:20.476 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:57:21 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:57:21 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 02:57:21 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:57:21.243 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 02:57:22 np0005548731 nova_compute[232433]: 2025-12-06 07:57:22.321 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:57:22 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:57:22 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:57:22 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:57:22.479 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:57:23 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:57:23 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:57:23 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:57:23.245 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:57:24 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e400 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:57:24 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:57:24 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:57:24 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:57:24.482 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:57:24 np0005548731 nova_compute[232433]: 2025-12-06 07:57:24.855 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:57:25 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:57:25 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:57:25 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:57:25.247 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:57:25 np0005548731 nova_compute[232433]: 2025-12-06 07:57:25.761 232437 DEBUG oslo_concurrency.lockutils [None req-e2e0224c-825d-40fa-bdd5-7300e24f87b8 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] Acquiring lock "9aa31d67-6e8e-4301-99db-832dd1fe00bc" by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  6 02:57:25 np0005548731 nova_compute[232433]: 2025-12-06 07:57:25.762 232437 DEBUG oslo_concurrency.lockutils [None req-e2e0224c-825d-40fa-bdd5-7300e24f87b8 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] Lock "9aa31d67-6e8e-4301-99db-832dd1fe00bc" acquired by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  6 02:57:25 np0005548731 nova_compute[232433]: 2025-12-06 07:57:25.762 232437 INFO nova.compute.manager [None req-e2e0224c-825d-40fa-bdd5-7300e24f87b8 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] [instance: 9aa31d67-6e8e-4301-99db-832dd1fe00bc] Shelving
Dec  6 02:57:25 np0005548731 nova_compute[232433]: 2025-12-06 07:57:25.786 232437 DEBUG nova.virt.libvirt.driver [None req-e2e0224c-825d-40fa-bdd5-7300e24f87b8 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] [instance: 9aa31d67-6e8e-4301-99db-832dd1fe00bc] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Dec  6 02:57:26 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:57:26 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:57:26 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:57:26.483 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:57:27 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:57:27 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:57:27 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:57:27.249 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:57:27 np0005548731 nova_compute[232433]: 2025-12-06 07:57:27.325 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 02:57:28 np0005548731 kernel: tap3b228814-72 (unregistering): left promiscuous mode
Dec  6 02:57:28 np0005548731 NetworkManager[49182]: <info>  [1765007848.0201] device (tap3b228814-72): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec  6 02:57:28 np0005548731 ovn_controller[133927]: 2025-12-06T07:57:28Z|00886|binding|INFO|Releasing lport 3b228814-72c3-4086-9740-e056ea1c6d7b from this chassis (sb_readonly=0)
Dec  6 02:57:28 np0005548731 ovn_controller[133927]: 2025-12-06T07:57:28Z|00887|binding|INFO|Setting lport 3b228814-72c3-4086-9740-e056ea1c6d7b down in Southbound
Dec  6 02:57:28 np0005548731 nova_compute[232433]: 2025-12-06 07:57:28.033 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 02:57:28 np0005548731 ovn_controller[133927]: 2025-12-06T07:57:28Z|00888|binding|INFO|Removing iface tap3b228814-72 ovn-installed in OVS
Dec  6 02:57:28 np0005548731 nova_compute[232433]: 2025-12-06 07:57:28.035 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 02:57:28 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:57:28.041 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b9:91:4c 10.100.0.10'], port_security=['fa:16:3e:b9:91:4c 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '9aa31d67-6e8e-4301-99db-832dd1fe00bc', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8caa40db-27da-43ab-86ca-042284636e71', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c3c0564f8e9f4af9ae5b597a275c989f', 'neutron:revision_number': '4', 'neutron:security_group_ids': '89072f55-dc9c-4de3-8430-0091f653d55a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com', 'neutron:port_fip': '192.168.122.240'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3ce9c1bf-feee-480c-a359-3eaf272f4b83, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], logical_port=3b228814-72c3-4086-9740-e056ea1c6d7b) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec  6 02:57:28 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:57:28.042 143965 INFO neutron.agent.ovn.metadata.agent [-] Port 3b228814-72c3-4086-9740-e056ea1c6d7b in datapath 8caa40db-27da-43ab-86ca-042284636e71 unbound from our chassis
Dec  6 02:57:28 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:57:28.044 143965 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 8caa40db-27da-43ab-86ca-042284636e71, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec  6 02:57:28 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:57:28.045 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[b2f539b3-0054-4b7f-ba77-b474ae94cf1e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  6 02:57:28 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:57:28.046 143965 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-8caa40db-27da-43ab-86ca-042284636e71 namespace which is not needed anymore
Dec  6 02:57:28 np0005548731 nova_compute[232433]: 2025-12-06 07:57:28.066 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 02:57:28 np0005548731 systemd[1]: machine-qemu\x2d90\x2dinstance\x2d000000b2.scope: Deactivated successfully.
Dec  6 02:57:28 np0005548731 systemd[1]: machine-qemu\x2d90\x2dinstance\x2d000000b2.scope: Consumed 14.559s CPU time.
Dec  6 02:57:28 np0005548731 systemd-machined[195355]: Machine qemu-90-instance-000000b2 terminated.
Dec  6 02:57:28 np0005548731 neutron-haproxy-ovnmeta-8caa40db-27da-43ab-86ca-042284636e71[316521]: [NOTICE]   (316525) : haproxy version is 2.8.14-c23fe91
Dec  6 02:57:28 np0005548731 neutron-haproxy-ovnmeta-8caa40db-27da-43ab-86ca-042284636e71[316521]: [NOTICE]   (316525) : path to executable is /usr/sbin/haproxy
Dec  6 02:57:28 np0005548731 neutron-haproxy-ovnmeta-8caa40db-27da-43ab-86ca-042284636e71[316521]: [WARNING]  (316525) : Exiting Master process...
Dec  6 02:57:28 np0005548731 neutron-haproxy-ovnmeta-8caa40db-27da-43ab-86ca-042284636e71[316521]: [WARNING]  (316525) : Exiting Master process...
Dec  6 02:57:28 np0005548731 neutron-haproxy-ovnmeta-8caa40db-27da-43ab-86ca-042284636e71[316521]: [ALERT]    (316525) : Current worker (316527) exited with code 143 (Terminated)
Dec  6 02:57:28 np0005548731 neutron-haproxy-ovnmeta-8caa40db-27da-43ab-86ca-042284636e71[316521]: [WARNING]  (316525) : All workers exited. Exiting... (0)
Dec  6 02:57:28 np0005548731 systemd[1]: libpod-55558c9c550399d16fb5212f30e2027d975df5bc5c30105e4f4062aa9e22e14d.scope: Deactivated successfully.
Dec  6 02:57:28 np0005548731 podman[316922]: 2025-12-06 07:57:28.173018035 +0000 UTC m=+0.047404911 container died 55558c9c550399d16fb5212f30e2027d975df5bc5c30105e4f4062aa9e22e14d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8caa40db-27da-43ab-86ca-042284636e71, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true)
Dec  6 02:57:28 np0005548731 systemd[1]: var-lib-containers-storage-overlay-d1e995edef3aae70dc0406e49798650db1c28fec096b83d9cbea2bc9bd0d1e45-merged.mount: Deactivated successfully.
Dec  6 02:57:28 np0005548731 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-55558c9c550399d16fb5212f30e2027d975df5bc5c30105e4f4062aa9e22e14d-userdata-shm.mount: Deactivated successfully.
Dec  6 02:57:28 np0005548731 podman[316922]: 2025-12-06 07:57:28.207278252 +0000 UTC m=+0.081665128 container cleanup 55558c9c550399d16fb5212f30e2027d975df5bc5c30105e4f4062aa9e22e14d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8caa40db-27da-43ab-86ca-042284636e71, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  6 02:57:28 np0005548731 systemd[1]: libpod-conmon-55558c9c550399d16fb5212f30e2027d975df5bc5c30105e4f4062aa9e22e14d.scope: Deactivated successfully.
Dec  6 02:57:28 np0005548731 kernel: tap3b228814-72: entered promiscuous mode
Dec  6 02:57:28 np0005548731 NetworkManager[49182]: <info>  [1765007848.2523] manager: (tap3b228814-72): new Tun device (/org/freedesktop/NetworkManager/Devices/410)
Dec  6 02:57:28 np0005548731 systemd-udevd[316902]: Network interface NamePolicy= disabled on kernel command line.
Dec  6 02:57:28 np0005548731 kernel: tap3b228814-72 (unregistering): left promiscuous mode
Dec  6 02:57:28 np0005548731 ovn_controller[133927]: 2025-12-06T07:57:28Z|00889|binding|INFO|Claiming lport 3b228814-72c3-4086-9740-e056ea1c6d7b for this chassis.
Dec  6 02:57:28 np0005548731 ovn_controller[133927]: 2025-12-06T07:57:28Z|00890|binding|INFO|3b228814-72c3-4086-9740-e056ea1c6d7b: Claiming fa:16:3e:b9:91:4c 10.100.0.10
Dec  6 02:57:28 np0005548731 nova_compute[232433]: 2025-12-06 07:57:28.307 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 02:57:28 np0005548731 virtnodedevd[232753]: libvirt version: 11.9.0, package: 1.el9 (builder@centos.org, 2025-11-04-09:54:50, )
Dec  6 02:57:28 np0005548731 virtnodedevd[232753]: hostname: compute-2
Dec  6 02:57:28 np0005548731 virtnodedevd[232753]: ethtool ioctl error on tap3b228814-72: No such device
Dec  6 02:57:28 np0005548731 virtnodedevd[232753]: ethtool ioctl error on tap3b228814-72: No such device
Dec  6 02:57:28 np0005548731 podman[316954]: 2025-12-06 07:57:28.322216294 +0000 UTC m=+0.096886841 container remove 55558c9c550399d16fb5212f30e2027d975df5bc5c30105e4f4062aa9e22e14d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8caa40db-27da-43ab-86ca-042284636e71, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec  6 02:57:28 np0005548731 virtnodedevd[232753]: ethtool ioctl error on tap3b228814-72: No such device
Dec  6 02:57:28 np0005548731 ovn_controller[133927]: 2025-12-06T07:57:28Z|00891|binding|INFO|Setting lport 3b228814-72c3-4086-9740-e056ea1c6d7b ovn-installed in OVS
Dec  6 02:57:28 np0005548731 ovn_controller[133927]: 2025-12-06T07:57:28Z|00892|binding|INFO|Setting lport 3b228814-72c3-4086-9740-e056ea1c6d7b up in Southbound
Dec  6 02:57:28 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:57:28.327 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b9:91:4c 10.100.0.10'], port_security=['fa:16:3e:b9:91:4c 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '9aa31d67-6e8e-4301-99db-832dd1fe00bc', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8caa40db-27da-43ab-86ca-042284636e71', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c3c0564f8e9f4af9ae5b597a275c989f', 'neutron:revision_number': '4', 'neutron:security_group_ids': '89072f55-dc9c-4de3-8430-0091f653d55a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com', 'neutron:port_fip': '192.168.122.240'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3ce9c1bf-feee-480c-a359-3eaf272f4b83, chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], logical_port=3b228814-72c3-4086-9740-e056ea1c6d7b) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec  6 02:57:28 np0005548731 ovn_controller[133927]: 2025-12-06T07:57:28Z|00893|binding|INFO|Releasing lport 3b228814-72c3-4086-9740-e056ea1c6d7b from this chassis (sb_readonly=1)
Dec  6 02:57:28 np0005548731 nova_compute[232433]: 2025-12-06 07:57:28.329 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 02:57:28 np0005548731 virtnodedevd[232753]: ethtool ioctl error on tap3b228814-72: No such device
Dec  6 02:57:28 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:57:28.329 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[2e1b2f89-85de-4703-9175-af058635ff0c]: (4, ('Sat Dec  6 07:57:28 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-8caa40db-27da-43ab-86ca-042284636e71 (55558c9c550399d16fb5212f30e2027d975df5bc5c30105e4f4062aa9e22e14d)\n55558c9c550399d16fb5212f30e2027d975df5bc5c30105e4f4062aa9e22e14d\nSat Dec  6 07:57:28 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-8caa40db-27da-43ab-86ca-042284636e71 (55558c9c550399d16fb5212f30e2027d975df5bc5c30105e4f4062aa9e22e14d)\n55558c9c550399d16fb5212f30e2027d975df5bc5c30105e4f4062aa9e22e14d\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  6 02:57:28 np0005548731 ovn_controller[133927]: 2025-12-06T07:57:28Z|00894|binding|INFO|Removing iface tap3b228814-72 ovn-installed in OVS
Dec  6 02:57:28 np0005548731 ovn_controller[133927]: 2025-12-06T07:57:28Z|00895|if_status|INFO|Dropped 2 log messages in last 601 seconds (most recently, 601 seconds ago) due to excessive rate
Dec  6 02:57:28 np0005548731 ovn_controller[133927]: 2025-12-06T07:57:28Z|00896|if_status|INFO|Not setting lport 3b228814-72c3-4086-9740-e056ea1c6d7b down as sb is readonly
Dec  6 02:57:28 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:57:28.332 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[58ee50fc-86de-4adf-837d-bf30c0261ad8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  6 02:57:28 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:57:28.333 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8caa40db-20, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec  6 02:57:28 np0005548731 ovn_controller[133927]: 2025-12-06T07:57:28Z|00897|binding|INFO|Releasing lport 3b228814-72c3-4086-9740-e056ea1c6d7b from this chassis (sb_readonly=0)
Dec  6 02:57:28 np0005548731 ovn_controller[133927]: 2025-12-06T07:57:28Z|00898|binding|INFO|Setting lport 3b228814-72c3-4086-9740-e056ea1c6d7b down in Southbound
Dec  6 02:57:28 np0005548731 nova_compute[232433]: 2025-12-06 07:57:28.333 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 02:57:28 np0005548731 nova_compute[232433]: 2025-12-06 07:57:28.334 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 02:57:28 np0005548731 virtnodedevd[232753]: ethtool ioctl error on tap3b228814-72: No such device
Dec  6 02:57:28 np0005548731 virtnodedevd[232753]: ethtool ioctl error on tap3b228814-72: No such device
Dec  6 02:57:28 np0005548731 virtnodedevd[232753]: ethtool ioctl error on tap3b228814-72: No such device
Dec  6 02:57:28 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:57:28.345 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b9:91:4c 10.100.0.10'], port_security=['fa:16:3e:b9:91:4c 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '9aa31d67-6e8e-4301-99db-832dd1fe00bc', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8caa40db-27da-43ab-86ca-042284636e71', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c3c0564f8e9f4af9ae5b597a275c989f', 'neutron:revision_number': '4', 'neutron:security_group_ids': '89072f55-dc9c-4de3-8430-0091f653d55a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com', 'neutron:port_fip': '192.168.122.240'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3ce9c1bf-feee-480c-a359-3eaf272f4b83, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], logical_port=3b228814-72c3-4086-9740-e056ea1c6d7b) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec  6 02:57:28 np0005548731 nova_compute[232433]: 2025-12-06 07:57:28.345 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 02:57:28 np0005548731 virtnodedevd[232753]: ethtool ioctl error on tap3b228814-72: No such device
Dec  6 02:57:28 np0005548731 kernel: tap8caa40db-20: left promiscuous mode
Dec  6 02:57:28 np0005548731 nova_compute[232433]: 2025-12-06 07:57:28.356 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 02:57:28 np0005548731 nova_compute[232433]: 2025-12-06 07:57:28.359 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 02:57:28 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:57:28.361 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[159419ce-9b59-4a05-8017-98f2b8101ffd]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  6 02:57:28 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:57:28.377 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[b01fe9fa-b62d-4aa7-85cc-85d9939034ec]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  6 02:57:28 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:57:28.378 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[92eab3c4-c6ff-4158-ba6b-8024629f6f60]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  6 02:57:28 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:57:28.390 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[6bcc3489-de70-473b-b5c9-f6cc39f8457e]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 807011, 'reachable_time': 39544, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 316993, 'error': None, 'target': 'ovnmeta-8caa40db-27da-43ab-86ca-042284636e71', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  6 02:57:28 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:57:28.392 144079 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-8caa40db-27da-43ab-86ca-042284636e71 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec  6 02:57:28 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:57:28.392 144079 DEBUG oslo.privsep.daemon [-] privsep: reply[67ab7512-9c39-4625-b48e-5b721ce15331]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  6 02:57:28 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:57:28.393 143965 INFO neutron.agent.ovn.metadata.agent [-] Port 3b228814-72c3-4086-9740-e056ea1c6d7b in datapath 8caa40db-27da-43ab-86ca-042284636e71 unbound from our chassis
Dec  6 02:57:28 np0005548731 systemd[1]: run-netns-ovnmeta\x2d8caa40db\x2d27da\x2d43ab\x2d86ca\x2d042284636e71.mount: Deactivated successfully.
Dec  6 02:57:28 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:57:28.395 143965 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 8caa40db-27da-43ab-86ca-042284636e71, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec  6 02:57:28 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:57:28.395 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[3952d891-e997-4c53-864e-7a6f3234463a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  6 02:57:28 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:57:28.396 143965 INFO neutron.agent.ovn.metadata.agent [-] Port 3b228814-72c3-4086-9740-e056ea1c6d7b in datapath 8caa40db-27da-43ab-86ca-042284636e71 unbound from our chassis
Dec  6 02:57:28 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:57:28.397 143965 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 8caa40db-27da-43ab-86ca-042284636e71, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec  6 02:57:28 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:57:28.397 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[6667bb40-f823-4f5d-90d3-c8966aee7abf]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  6 02:57:28 np0005548731 nova_compute[232433]: 2025-12-06 07:57:28.409 232437 DEBUG nova.compute.manager [req-2fc5810a-7c04-45ec-bd9c-b6621276998e req-759315c4-0773-488f-97a6-9f1d8ddb8f52 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 9aa31d67-6e8e-4301-99db-832dd1fe00bc] Received event network-vif-unplugged-3b228814-72c3-4086-9740-e056ea1c6d7b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec  6 02:57:28 np0005548731 nova_compute[232433]: 2025-12-06 07:57:28.409 232437 DEBUG oslo_concurrency.lockutils [req-2fc5810a-7c04-45ec-bd9c-b6621276998e req-759315c4-0773-488f-97a6-9f1d8ddb8f52 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "9aa31d67-6e8e-4301-99db-832dd1fe00bc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  6 02:57:28 np0005548731 nova_compute[232433]: 2025-12-06 07:57:28.409 232437 DEBUG oslo_concurrency.lockutils [req-2fc5810a-7c04-45ec-bd9c-b6621276998e req-759315c4-0773-488f-97a6-9f1d8ddb8f52 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "9aa31d67-6e8e-4301-99db-832dd1fe00bc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  6 02:57:28 np0005548731 nova_compute[232433]: 2025-12-06 07:57:28.410 232437 DEBUG oslo_concurrency.lockutils [req-2fc5810a-7c04-45ec-bd9c-b6621276998e req-759315c4-0773-488f-97a6-9f1d8ddb8f52 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "9aa31d67-6e8e-4301-99db-832dd1fe00bc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  6 02:57:28 np0005548731 nova_compute[232433]: 2025-12-06 07:57:28.410 232437 DEBUG nova.compute.manager [req-2fc5810a-7c04-45ec-bd9c-b6621276998e req-759315c4-0773-488f-97a6-9f1d8ddb8f52 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 9aa31d67-6e8e-4301-99db-832dd1fe00bc] No waiting events found dispatching network-vif-unplugged-3b228814-72c3-4086-9740-e056ea1c6d7b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec  6 02:57:28 np0005548731 nova_compute[232433]: 2025-12-06 07:57:28.410 232437 WARNING nova.compute.manager [req-2fc5810a-7c04-45ec-bd9c-b6621276998e req-759315c4-0773-488f-97a6-9f1d8ddb8f52 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 9aa31d67-6e8e-4301-99db-832dd1fe00bc] Received unexpected event network-vif-unplugged-3b228814-72c3-4086-9740-e056ea1c6d7b for instance with vm_state active and task_state shelving.
Dec  6 02:57:28 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:57:28 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:57:28 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:57:28.487 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:57:28 np0005548731 nova_compute[232433]: 2025-12-06 07:57:28.801 232437 INFO nova.virt.libvirt.driver [None req-e2e0224c-825d-40fa-bdd5-7300e24f87b8 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] [instance: 9aa31d67-6e8e-4301-99db-832dd1fe00bc] Instance shutdown successfully after 3 seconds.#033[00m
Dec  6 02:57:28 np0005548731 nova_compute[232433]: 2025-12-06 07:57:28.806 232437 INFO nova.virt.libvirt.driver [-] [instance: 9aa31d67-6e8e-4301-99db-832dd1fe00bc] Instance destroyed successfully.#033[00m
Dec  6 02:57:28 np0005548731 nova_compute[232433]: 2025-12-06 07:57:28.807 232437 DEBUG nova.objects.instance [None req-e2e0224c-825d-40fa-bdd5-7300e24f87b8 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] Lazy-loading 'numa_topology' on Instance uuid 9aa31d67-6e8e-4301-99db-832dd1fe00bc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:57:29 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e400 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:57:29 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:57:29 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:57:29 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:57:29.251 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:57:29 np0005548731 nova_compute[232433]: 2025-12-06 07:57:29.499 232437 INFO nova.virt.libvirt.driver [None req-e2e0224c-825d-40fa-bdd5-7300e24f87b8 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] [instance: 9aa31d67-6e8e-4301-99db-832dd1fe00bc] Beginning cold snapshot process#033[00m
Dec  6 02:57:29 np0005548731 nova_compute[232433]: 2025-12-06 07:57:29.686 232437 DEBUG nova.virt.libvirt.imagebackend [None req-e2e0224c-825d-40fa-bdd5-7300e24f87b8 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] No parent info for 6efab05d-c7cf-4770-a5c3-c806a2739063; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163#033[00m
Dec  6 02:57:29 np0005548731 nova_compute[232433]: 2025-12-06 07:57:29.857 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:57:30 np0005548731 nova_compute[232433]: 2025-12-06 07:57:30.221 232437 DEBUG nova.storage.rbd_utils [None req-e2e0224c-825d-40fa-bdd5-7300e24f87b8 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] creating snapshot(7298fac01d8b4209aabc085a89862708) on rbd image(9aa31d67-6e8e-4301-99db-832dd1fe00bc_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Dec  6 02:57:30 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:57:30 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:57:30 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:57:30.490 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:57:30 np0005548731 nova_compute[232433]: 2025-12-06 07:57:30.504 232437 DEBUG nova.compute.manager [req-d7b2317a-5905-43e1-bdd6-daf2081f07c8 req-7882cefc-6df8-45f0-9bcc-50bf71a98424 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 9aa31d67-6e8e-4301-99db-832dd1fe00bc] Received event network-vif-plugged-3b228814-72c3-4086-9740-e056ea1c6d7b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:57:30 np0005548731 nova_compute[232433]: 2025-12-06 07:57:30.504 232437 DEBUG oslo_concurrency.lockutils [req-d7b2317a-5905-43e1-bdd6-daf2081f07c8 req-7882cefc-6df8-45f0-9bcc-50bf71a98424 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "9aa31d67-6e8e-4301-99db-832dd1fe00bc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:57:30 np0005548731 nova_compute[232433]: 2025-12-06 07:57:30.505 232437 DEBUG oslo_concurrency.lockutils [req-d7b2317a-5905-43e1-bdd6-daf2081f07c8 req-7882cefc-6df8-45f0-9bcc-50bf71a98424 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "9aa31d67-6e8e-4301-99db-832dd1fe00bc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:57:30 np0005548731 nova_compute[232433]: 2025-12-06 07:57:30.505 232437 DEBUG oslo_concurrency.lockutils [req-d7b2317a-5905-43e1-bdd6-daf2081f07c8 req-7882cefc-6df8-45f0-9bcc-50bf71a98424 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "9aa31d67-6e8e-4301-99db-832dd1fe00bc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:57:30 np0005548731 nova_compute[232433]: 2025-12-06 07:57:30.505 232437 DEBUG nova.compute.manager [req-d7b2317a-5905-43e1-bdd6-daf2081f07c8 req-7882cefc-6df8-45f0-9bcc-50bf71a98424 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 9aa31d67-6e8e-4301-99db-832dd1fe00bc] No waiting events found dispatching network-vif-plugged-3b228814-72c3-4086-9740-e056ea1c6d7b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 02:57:30 np0005548731 nova_compute[232433]: 2025-12-06 07:57:30.505 232437 WARNING nova.compute.manager [req-d7b2317a-5905-43e1-bdd6-daf2081f07c8 req-7882cefc-6df8-45f0-9bcc-50bf71a98424 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 9aa31d67-6e8e-4301-99db-832dd1fe00bc] Received unexpected event network-vif-plugged-3b228814-72c3-4086-9740-e056ea1c6d7b for instance with vm_state active and task_state shelving_image_uploading.#033[00m
Dec  6 02:57:30 np0005548731 nova_compute[232433]: 2025-12-06 07:57:30.505 232437 DEBUG nova.compute.manager [req-d7b2317a-5905-43e1-bdd6-daf2081f07c8 req-7882cefc-6df8-45f0-9bcc-50bf71a98424 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 9aa31d67-6e8e-4301-99db-832dd1fe00bc] Received event network-vif-plugged-3b228814-72c3-4086-9740-e056ea1c6d7b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:57:30 np0005548731 nova_compute[232433]: 2025-12-06 07:57:30.506 232437 DEBUG oslo_concurrency.lockutils [req-d7b2317a-5905-43e1-bdd6-daf2081f07c8 req-7882cefc-6df8-45f0-9bcc-50bf71a98424 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "9aa31d67-6e8e-4301-99db-832dd1fe00bc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:57:30 np0005548731 nova_compute[232433]: 2025-12-06 07:57:30.506 232437 DEBUG oslo_concurrency.lockutils [req-d7b2317a-5905-43e1-bdd6-daf2081f07c8 req-7882cefc-6df8-45f0-9bcc-50bf71a98424 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "9aa31d67-6e8e-4301-99db-832dd1fe00bc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:57:30 np0005548731 nova_compute[232433]: 2025-12-06 07:57:30.506 232437 DEBUG oslo_concurrency.lockutils [req-d7b2317a-5905-43e1-bdd6-daf2081f07c8 req-7882cefc-6df8-45f0-9bcc-50bf71a98424 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "9aa31d67-6e8e-4301-99db-832dd1fe00bc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:57:30 np0005548731 nova_compute[232433]: 2025-12-06 07:57:30.506 232437 DEBUG nova.compute.manager [req-d7b2317a-5905-43e1-bdd6-daf2081f07c8 req-7882cefc-6df8-45f0-9bcc-50bf71a98424 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 9aa31d67-6e8e-4301-99db-832dd1fe00bc] No waiting events found dispatching network-vif-plugged-3b228814-72c3-4086-9740-e056ea1c6d7b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 02:57:30 np0005548731 nova_compute[232433]: 2025-12-06 07:57:30.506 232437 WARNING nova.compute.manager [req-d7b2317a-5905-43e1-bdd6-daf2081f07c8 req-7882cefc-6df8-45f0-9bcc-50bf71a98424 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 9aa31d67-6e8e-4301-99db-832dd1fe00bc] Received unexpected event network-vif-plugged-3b228814-72c3-4086-9740-e056ea1c6d7b for instance with vm_state active and task_state shelving_image_uploading.#033[00m
Dec  6 02:57:30 np0005548731 nova_compute[232433]: 2025-12-06 07:57:30.507 232437 DEBUG nova.compute.manager [req-d7b2317a-5905-43e1-bdd6-daf2081f07c8 req-7882cefc-6df8-45f0-9bcc-50bf71a98424 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 9aa31d67-6e8e-4301-99db-832dd1fe00bc] Received event network-vif-plugged-3b228814-72c3-4086-9740-e056ea1c6d7b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:57:30 np0005548731 nova_compute[232433]: 2025-12-06 07:57:30.507 232437 DEBUG oslo_concurrency.lockutils [req-d7b2317a-5905-43e1-bdd6-daf2081f07c8 req-7882cefc-6df8-45f0-9bcc-50bf71a98424 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "9aa31d67-6e8e-4301-99db-832dd1fe00bc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:57:30 np0005548731 nova_compute[232433]: 2025-12-06 07:57:30.507 232437 DEBUG oslo_concurrency.lockutils [req-d7b2317a-5905-43e1-bdd6-daf2081f07c8 req-7882cefc-6df8-45f0-9bcc-50bf71a98424 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "9aa31d67-6e8e-4301-99db-832dd1fe00bc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:57:30 np0005548731 nova_compute[232433]: 2025-12-06 07:57:30.507 232437 DEBUG oslo_concurrency.lockutils [req-d7b2317a-5905-43e1-bdd6-daf2081f07c8 req-7882cefc-6df8-45f0-9bcc-50bf71a98424 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "9aa31d67-6e8e-4301-99db-832dd1fe00bc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:57:30 np0005548731 nova_compute[232433]: 2025-12-06 07:57:30.507 232437 DEBUG nova.compute.manager [req-d7b2317a-5905-43e1-bdd6-daf2081f07c8 req-7882cefc-6df8-45f0-9bcc-50bf71a98424 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 9aa31d67-6e8e-4301-99db-832dd1fe00bc] No waiting events found dispatching network-vif-plugged-3b228814-72c3-4086-9740-e056ea1c6d7b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 02:57:30 np0005548731 nova_compute[232433]: 2025-12-06 07:57:30.508 232437 WARNING nova.compute.manager [req-d7b2317a-5905-43e1-bdd6-daf2081f07c8 req-7882cefc-6df8-45f0-9bcc-50bf71a98424 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 9aa31d67-6e8e-4301-99db-832dd1fe00bc] Received unexpected event network-vif-plugged-3b228814-72c3-4086-9740-e056ea1c6d7b for instance with vm_state active and task_state shelving_image_uploading.#033[00m
Dec  6 02:57:30 np0005548731 nova_compute[232433]: 2025-12-06 07:57:30.508 232437 DEBUG nova.compute.manager [req-d7b2317a-5905-43e1-bdd6-daf2081f07c8 req-7882cefc-6df8-45f0-9bcc-50bf71a98424 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 9aa31d67-6e8e-4301-99db-832dd1fe00bc] Received event network-vif-unplugged-3b228814-72c3-4086-9740-e056ea1c6d7b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:57:30 np0005548731 nova_compute[232433]: 2025-12-06 07:57:30.508 232437 DEBUG oslo_concurrency.lockutils [req-d7b2317a-5905-43e1-bdd6-daf2081f07c8 req-7882cefc-6df8-45f0-9bcc-50bf71a98424 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "9aa31d67-6e8e-4301-99db-832dd1fe00bc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:57:30 np0005548731 nova_compute[232433]: 2025-12-06 07:57:30.508 232437 DEBUG oslo_concurrency.lockutils [req-d7b2317a-5905-43e1-bdd6-daf2081f07c8 req-7882cefc-6df8-45f0-9bcc-50bf71a98424 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "9aa31d67-6e8e-4301-99db-832dd1fe00bc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:57:30 np0005548731 nova_compute[232433]: 2025-12-06 07:57:30.508 232437 DEBUG oslo_concurrency.lockutils [req-d7b2317a-5905-43e1-bdd6-daf2081f07c8 req-7882cefc-6df8-45f0-9bcc-50bf71a98424 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "9aa31d67-6e8e-4301-99db-832dd1fe00bc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:57:30 np0005548731 nova_compute[232433]: 2025-12-06 07:57:30.508 232437 DEBUG nova.compute.manager [req-d7b2317a-5905-43e1-bdd6-daf2081f07c8 req-7882cefc-6df8-45f0-9bcc-50bf71a98424 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 9aa31d67-6e8e-4301-99db-832dd1fe00bc] No waiting events found dispatching network-vif-unplugged-3b228814-72c3-4086-9740-e056ea1c6d7b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 02:57:30 np0005548731 nova_compute[232433]: 2025-12-06 07:57:30.509 232437 WARNING nova.compute.manager [req-d7b2317a-5905-43e1-bdd6-daf2081f07c8 req-7882cefc-6df8-45f0-9bcc-50bf71a98424 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 9aa31d67-6e8e-4301-99db-832dd1fe00bc] Received unexpected event network-vif-unplugged-3b228814-72c3-4086-9740-e056ea1c6d7b for instance with vm_state active and task_state shelving_image_uploading.#033[00m
Dec  6 02:57:30 np0005548731 nova_compute[232433]: 2025-12-06 07:57:30.509 232437 DEBUG nova.compute.manager [req-d7b2317a-5905-43e1-bdd6-daf2081f07c8 req-7882cefc-6df8-45f0-9bcc-50bf71a98424 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 9aa31d67-6e8e-4301-99db-832dd1fe00bc] Received event network-vif-plugged-3b228814-72c3-4086-9740-e056ea1c6d7b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:57:30 np0005548731 nova_compute[232433]: 2025-12-06 07:57:30.509 232437 DEBUG oslo_concurrency.lockutils [req-d7b2317a-5905-43e1-bdd6-daf2081f07c8 req-7882cefc-6df8-45f0-9bcc-50bf71a98424 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "9aa31d67-6e8e-4301-99db-832dd1fe00bc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:57:30 np0005548731 nova_compute[232433]: 2025-12-06 07:57:30.509 232437 DEBUG oslo_concurrency.lockutils [req-d7b2317a-5905-43e1-bdd6-daf2081f07c8 req-7882cefc-6df8-45f0-9bcc-50bf71a98424 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "9aa31d67-6e8e-4301-99db-832dd1fe00bc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:57:30 np0005548731 nova_compute[232433]: 2025-12-06 07:57:30.509 232437 DEBUG oslo_concurrency.lockutils [req-d7b2317a-5905-43e1-bdd6-daf2081f07c8 req-7882cefc-6df8-45f0-9bcc-50bf71a98424 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "9aa31d67-6e8e-4301-99db-832dd1fe00bc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:57:30 np0005548731 nova_compute[232433]: 2025-12-06 07:57:30.510 232437 DEBUG nova.compute.manager [req-d7b2317a-5905-43e1-bdd6-daf2081f07c8 req-7882cefc-6df8-45f0-9bcc-50bf71a98424 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 9aa31d67-6e8e-4301-99db-832dd1fe00bc] No waiting events found dispatching network-vif-plugged-3b228814-72c3-4086-9740-e056ea1c6d7b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 02:57:30 np0005548731 nova_compute[232433]: 2025-12-06 07:57:30.510 232437 WARNING nova.compute.manager [req-d7b2317a-5905-43e1-bdd6-daf2081f07c8 req-7882cefc-6df8-45f0-9bcc-50bf71a98424 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 9aa31d67-6e8e-4301-99db-832dd1fe00bc] Received unexpected event network-vif-plugged-3b228814-72c3-4086-9740-e056ea1c6d7b for instance with vm_state active and task_state shelving_image_uploading.#033[00m
Dec  6 02:57:31 np0005548731 nova_compute[232433]: 2025-12-06 07:57:31.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:57:31 np0005548731 nova_compute[232433]: 2025-12-06 07:57:31.105 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  6 02:57:31 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:57:31 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:57:31 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:57:31.254 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:57:31 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e401 e401: 3 total, 3 up, 3 in
Dec  6 02:57:32 np0005548731 nova_compute[232433]: 2025-12-06 07:57:32.100 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:57:32 np0005548731 nova_compute[232433]: 2025-12-06 07:57:32.199 232437 DEBUG nova.storage.rbd_utils [None req-e2e0224c-825d-40fa-bdd5-7300e24f87b8 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] cloning vms/9aa31d67-6e8e-4301-99db-832dd1fe00bc_disk@7298fac01d8b4209aabc085a89862708 to images/12e13902-aaee-45d7-b251-bc95170ae31e clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Dec  6 02:57:32 np0005548731 nova_compute[232433]: 2025-12-06 07:57:32.326 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:57:32 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:57:32 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:57:32 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:57:32.493 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:57:33 np0005548731 nova_compute[232433]: 2025-12-06 07:57:33.011 232437 DEBUG nova.storage.rbd_utils [None req-e2e0224c-825d-40fa-bdd5-7300e24f87b8 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] flattening images/12e13902-aaee-45d7-b251-bc95170ae31e flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m
Dec  6 02:57:33 np0005548731 nova_compute[232433]: 2025-12-06 07:57:33.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:57:33 np0005548731 nova_compute[232433]: 2025-12-06 07:57:33.105 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  6 02:57:33 np0005548731 nova_compute[232433]: 2025-12-06 07:57:33.105 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  6 02:57:33 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:57:33 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:57:33 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:57:33.255 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:57:33 np0005548731 nova_compute[232433]: 2025-12-06 07:57:33.728 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Acquiring lock "refresh_cache-21c591a0-5eed-4aa8-a68b-59a616b16e2b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 02:57:33 np0005548731 nova_compute[232433]: 2025-12-06 07:57:33.729 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Acquired lock "refresh_cache-21c591a0-5eed-4aa8-a68b-59a616b16e2b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 02:57:33 np0005548731 nova_compute[232433]: 2025-12-06 07:57:33.729 232437 DEBUG nova.network.neutron [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] [instance: 21c591a0-5eed-4aa8-a68b-59a616b16e2b] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Dec  6 02:57:33 np0005548731 nova_compute[232433]: 2025-12-06 07:57:33.729 232437 DEBUG nova.objects.instance [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lazy-loading 'info_cache' on Instance uuid 21c591a0-5eed-4aa8-a68b-59a616b16e2b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:57:33 np0005548731 podman[317102]: 2025-12-06 07:57:33.902802463 +0000 UTC m=+0.063720760 container health_status 26c2ce9bdb83c6db5ccad7f5b2a7cb4e9354ab0235ad2f942ea42c8feda2142b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec  6 02:57:33 np0005548731 podman[317104]: 2025-12-06 07:57:33.906389571 +0000 UTC m=+0.063904685 container health_status ae4fd08cdec31d6ae2a6b91ffff7deb6ca5a8375cb52ee4b685cc6f25796824e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Dec  6 02:57:33 np0005548731 podman[317103]: 2025-12-06 07:57:33.929211969 +0000 UTC m=+0.090252338 container health_status a118c3becf56cfbcae208065771ebf10a65162bb7b96389971293e534823fb78 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, tcib_managed=true)
Dec  6 02:57:34 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e401 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:57:34 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:57:34 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:57:34 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:57:34.496 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:57:34 np0005548731 nova_compute[232433]: 2025-12-06 07:57:34.859 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:57:34 np0005548731 nova_compute[232433]: 2025-12-06 07:57:34.923 232437 DEBUG nova.storage.rbd_utils [None req-e2e0224c-825d-40fa-bdd5-7300e24f87b8 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] removing snapshot(7298fac01d8b4209aabc085a89862708) on rbd image(9aa31d67-6e8e-4301-99db-832dd1fe00bc_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Dec  6 02:57:35 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:57:35 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:57:35 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:57:35.258 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:57:36 np0005548731 nova_compute[232433]: 2025-12-06 07:57:36.297 232437 DEBUG nova.network.neutron [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] [instance: 21c591a0-5eed-4aa8-a68b-59a616b16e2b] Updating instance_info_cache with network_info: [{"id": "c76db338-0396-40a2-82b3-0c720d28d2bd", "address": "fa:16:3e:31:62:02", "network": {"id": "a3764201-4b86-4407-84d2-684bd05a44b3", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-2060653314-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6164fee998c94b71a37886fe42b4c56c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc76db338-03", "ovs_interfaceid": "c76db338-0396-40a2-82b3-0c720d28d2bd", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 02:57:36 np0005548731 nova_compute[232433]: 2025-12-06 07:57:36.316 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Releasing lock "refresh_cache-21c591a0-5eed-4aa8-a68b-59a616b16e2b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 02:57:36 np0005548731 nova_compute[232433]: 2025-12-06 07:57:36.317 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] [instance: 21c591a0-5eed-4aa8-a68b-59a616b16e2b] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Dec  6 02:57:36 np0005548731 nova_compute[232433]: 2025-12-06 07:57:36.317 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:57:36 np0005548731 nova_compute[232433]: 2025-12-06 07:57:36.317 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:57:36 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:57:36 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:57:36 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:57:36.499 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:57:36 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e402 e402: 3 total, 3 up, 3 in
Dec  6 02:57:36 np0005548731 nova_compute[232433]: 2025-12-06 07:57:36.720 232437 DEBUG nova.storage.rbd_utils [None req-e2e0224c-825d-40fa-bdd5-7300e24f87b8 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] creating snapshot(snap) on rbd image(12e13902-aaee-45d7-b251-bc95170ae31e) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Dec  6 02:57:37 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:57:37 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:57:37 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:57:37.260 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:57:37 np0005548731 nova_compute[232433]: 2025-12-06 07:57:37.688 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:57:37 np0005548731 nova_compute[232433]: 2025-12-06 07:57:37.690 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:57:37 np0005548731 nova_compute[232433]: 2025-12-06 07:57:37.692 232437 DEBUG nova.compute.manager [req-bdcc2bbd-d692-4ca5-a7e4-3f883c7ab7d7 req-d16ace6c-bf75-48a5-bca7-d31041b535ac 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 21c591a0-5eed-4aa8-a68b-59a616b16e2b] Received event network-changed-c76db338-0396-40a2-82b3-0c720d28d2bd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:57:37 np0005548731 nova_compute[232433]: 2025-12-06 07:57:37.693 232437 DEBUG nova.compute.manager [req-bdcc2bbd-d692-4ca5-a7e4-3f883c7ab7d7 req-d16ace6c-bf75-48a5-bca7-d31041b535ac 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 21c591a0-5eed-4aa8-a68b-59a616b16e2b] Refreshing instance network info cache due to event network-changed-c76db338-0396-40a2-82b3-0c720d28d2bd. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  6 02:57:37 np0005548731 nova_compute[232433]: 2025-12-06 07:57:37.693 232437 DEBUG oslo_concurrency.lockutils [req-bdcc2bbd-d692-4ca5-a7e4-3f883c7ab7d7 req-d16ace6c-bf75-48a5-bca7-d31041b535ac 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-21c591a0-5eed-4aa8-a68b-59a616b16e2b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 02:57:37 np0005548731 nova_compute[232433]: 2025-12-06 07:57:37.694 232437 DEBUG oslo_concurrency.lockutils [req-bdcc2bbd-d692-4ca5-a7e4-3f883c7ab7d7 req-d16ace6c-bf75-48a5-bca7-d31041b535ac 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-21c591a0-5eed-4aa8-a68b-59a616b16e2b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 02:57:37 np0005548731 nova_compute[232433]: 2025-12-06 07:57:37.694 232437 DEBUG nova.network.neutron [req-bdcc2bbd-d692-4ca5-a7e4-3f883c7ab7d7 req-d16ace6c-bf75-48a5-bca7-d31041b535ac 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 21c591a0-5eed-4aa8-a68b-59a616b16e2b] Refreshing network info cache for port c76db338-0396-40a2-82b3-0c720d28d2bd _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  6 02:57:37 np0005548731 nova_compute[232433]: 2025-12-06 07:57:37.771 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:57:37 np0005548731 nova_compute[232433]: 2025-12-06 07:57:37.772 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:57:37 np0005548731 nova_compute[232433]: 2025-12-06 07:57:37.772 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:57:37 np0005548731 nova_compute[232433]: 2025-12-06 07:57:37.772 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  6 02:57:37 np0005548731 nova_compute[232433]: 2025-12-06 07:57:37.773 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:57:38 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 02:57:38 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1035152140' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 02:57:38 np0005548731 nova_compute[232433]: 2025-12-06 07:57:38.201 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.428s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:57:38 np0005548731 nova_compute[232433]: 2025-12-06 07:57:38.274 232437 DEBUG nova.virt.libvirt.driver [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] skipping disk for instance-000000b2 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  6 02:57:38 np0005548731 nova_compute[232433]: 2025-12-06 07:57:38.274 232437 DEBUG nova.virt.libvirt.driver [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] skipping disk for instance-000000b2 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  6 02:57:38 np0005548731 nova_compute[232433]: 2025-12-06 07:57:38.277 232437 DEBUG nova.virt.libvirt.driver [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] skipping disk for instance-000000b0 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  6 02:57:38 np0005548731 nova_compute[232433]: 2025-12-06 07:57:38.277 232437 DEBUG nova.virt.libvirt.driver [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] skipping disk for instance-000000b0 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  6 02:57:38 np0005548731 nova_compute[232433]: 2025-12-06 07:57:38.442 232437 WARNING nova.virt.libvirt.driver [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  6 02:57:38 np0005548731 nova_compute[232433]: 2025-12-06 07:57:38.443 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=3973MB free_disk=20.896617889404297GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  6 02:57:38 np0005548731 nova_compute[232433]: 2025-12-06 07:57:38.443 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:57:38 np0005548731 nova_compute[232433]: 2025-12-06 07:57:38.444 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:57:38 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:57:38 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:57:38 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:57:38.502 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:57:38 np0005548731 nova_compute[232433]: 2025-12-06 07:57:38.530 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Instance 21c591a0-5eed-4aa8-a68b-59a616b16e2b actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  6 02:57:38 np0005548731 nova_compute[232433]: 2025-12-06 07:57:38.530 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Instance 9aa31d67-6e8e-4301-99db-832dd1fe00bc actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  6 02:57:38 np0005548731 nova_compute[232433]: 2025-12-06 07:57:38.531 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  6 02:57:38 np0005548731 nova_compute[232433]: 2025-12-06 07:57:38.531 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7680MB used_ram=768MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  6 02:57:38 np0005548731 nova_compute[232433]: 2025-12-06 07:57:38.596 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:57:38 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e403 e403: 3 total, 3 up, 3 in
Dec  6 02:57:39 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 02:57:39 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2114481433' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 02:57:39 np0005548731 nova_compute[232433]: 2025-12-06 07:57:39.026 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.430s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:57:39 np0005548731 nova_compute[232433]: 2025-12-06 07:57:39.032 232437 DEBUG nova.compute.provider_tree [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Inventory has not changed in ProviderTree for provider: 6d00757a-082f-486d-ae84-869a2ba2e6e7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  6 02:57:39 np0005548731 nova_compute[232433]: 2025-12-06 07:57:39.104 232437 DEBUG nova.scheduler.client.report [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Inventory has not changed for provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  6 02:57:39 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e403 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:57:39 np0005548731 nova_compute[232433]: 2025-12-06 07:57:39.206 232437 DEBUG nova.network.neutron [req-bdcc2bbd-d692-4ca5-a7e4-3f883c7ab7d7 req-d16ace6c-bf75-48a5-bca7-d31041b535ac 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 21c591a0-5eed-4aa8-a68b-59a616b16e2b] Updated VIF entry in instance network info cache for port c76db338-0396-40a2-82b3-0c720d28d2bd. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  6 02:57:39 np0005548731 nova_compute[232433]: 2025-12-06 07:57:39.206 232437 DEBUG nova.network.neutron [req-bdcc2bbd-d692-4ca5-a7e4-3f883c7ab7d7 req-d16ace6c-bf75-48a5-bca7-d31041b535ac 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 21c591a0-5eed-4aa8-a68b-59a616b16e2b] Updating instance_info_cache with network_info: [{"id": "c76db338-0396-40a2-82b3-0c720d28d2bd", "address": "fa:16:3e:31:62:02", "network": {"id": "a3764201-4b86-4407-84d2-684bd05a44b3", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-2060653314-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6164fee998c94b71a37886fe42b4c56c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc76db338-03", "ovs_interfaceid": "c76db338-0396-40a2-82b3-0c720d28d2bd", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 02:57:39 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:57:39 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:57:39 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:57:39.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:57:39 np0005548731 nova_compute[232433]: 2025-12-06 07:57:39.499 232437 DEBUG oslo_concurrency.lockutils [req-bdcc2bbd-d692-4ca5-a7e4-3f883c7ab7d7 req-d16ace6c-bf75-48a5-bca7-d31041b535ac 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-21c591a0-5eed-4aa8-a68b-59a616b16e2b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 02:57:39 np0005548731 nova_compute[232433]: 2025-12-06 07:57:39.721 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  6 02:57:39 np0005548731 nova_compute[232433]: 2025-12-06 07:57:39.722 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.278s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:57:39 np0005548731 nova_compute[232433]: 2025-12-06 07:57:39.860 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:57:40 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:57:40 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:57:40 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:57:40.505 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:57:41 np0005548731 nova_compute[232433]: 2025-12-06 07:57:41.124 232437 INFO nova.virt.libvirt.driver [None req-e2e0224c-825d-40fa-bdd5-7300e24f87b8 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] [instance: 9aa31d67-6e8e-4301-99db-832dd1fe00bc] Snapshot image upload complete#033[00m
Dec  6 02:57:41 np0005548731 nova_compute[232433]: 2025-12-06 07:57:41.125 232437 DEBUG nova.compute.manager [None req-e2e0224c-825d-40fa-bdd5-7300e24f87b8 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] [instance: 9aa31d67-6e8e-4301-99db-832dd1fe00bc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:57:41 np0005548731 nova_compute[232433]: 2025-12-06 07:57:41.202 232437 INFO nova.compute.manager [None req-e2e0224c-825d-40fa-bdd5-7300e24f87b8 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] [instance: 9aa31d67-6e8e-4301-99db-832dd1fe00bc] Shelve offloading#033[00m
Dec  6 02:57:41 np0005548731 nova_compute[232433]: 2025-12-06 07:57:41.208 232437 INFO nova.virt.libvirt.driver [-] [instance: 9aa31d67-6e8e-4301-99db-832dd1fe00bc] Instance destroyed successfully.#033[00m
Dec  6 02:57:41 np0005548731 nova_compute[232433]: 2025-12-06 07:57:41.208 232437 DEBUG nova.compute.manager [None req-e2e0224c-825d-40fa-bdd5-7300e24f87b8 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] [instance: 9aa31d67-6e8e-4301-99db-832dd1fe00bc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:57:41 np0005548731 nova_compute[232433]: 2025-12-06 07:57:41.211 232437 DEBUG oslo_concurrency.lockutils [None req-e2e0224c-825d-40fa-bdd5-7300e24f87b8 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] Acquiring lock "refresh_cache-9aa31d67-6e8e-4301-99db-832dd1fe00bc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 02:57:41 np0005548731 nova_compute[232433]: 2025-12-06 07:57:41.211 232437 DEBUG oslo_concurrency.lockutils [None req-e2e0224c-825d-40fa-bdd5-7300e24f87b8 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] Acquired lock "refresh_cache-9aa31d67-6e8e-4301-99db-832dd1fe00bc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 02:57:41 np0005548731 nova_compute[232433]: 2025-12-06 07:57:41.211 232437 DEBUG nova.network.neutron [None req-e2e0224c-825d-40fa-bdd5-7300e24f87b8 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] [instance: 9aa31d67-6e8e-4301-99db-832dd1fe00bc] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec  6 02:57:41 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:57:41 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:57:41 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:57:41.266 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:57:42 np0005548731 nova_compute[232433]: 2025-12-06 07:57:42.139 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:57:42 np0005548731 nova_compute[232433]: 2025-12-06 07:57:42.139 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:57:42 np0005548731 nova_compute[232433]: 2025-12-06 07:57:42.140 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:57:42 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:57:42 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:57:42 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:57:42.509 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:57:42 np0005548731 nova_compute[232433]: 2025-12-06 07:57:42.634 232437 DEBUG nova.network.neutron [None req-e2e0224c-825d-40fa-bdd5-7300e24f87b8 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] [instance: 9aa31d67-6e8e-4301-99db-832dd1fe00bc] Updating instance_info_cache with network_info: [{"id": "3b228814-72c3-4086-9740-e056ea1c6d7b", "address": "fa:16:3e:b9:91:4c", "network": {"id": "8caa40db-27da-43ab-86ca-042284636e71", "bridge": "br-int", "label": "tempest-TestShelveInstance-1111687365-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.240", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c3c0564f8e9f4af9ae5b597a275c989f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3b228814-72", "ovs_interfaceid": "3b228814-72c3-4086-9740-e056ea1c6d7b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 02:57:42 np0005548731 nova_compute[232433]: 2025-12-06 07:57:42.711 232437 DEBUG oslo_concurrency.lockutils [None req-e2e0224c-825d-40fa-bdd5-7300e24f87b8 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] Releasing lock "refresh_cache-9aa31d67-6e8e-4301-99db-832dd1fe00bc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 02:57:42 np0005548731 nova_compute[232433]: 2025-12-06 07:57:42.745 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:57:43 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:57:43 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:57:43 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:57:43.269 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:57:43 np0005548731 nova_compute[232433]: 2025-12-06 07:57:43.335 232437 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765007848.3339448, 9aa31d67-6e8e-4301-99db-832dd1fe00bc => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 02:57:43 np0005548731 nova_compute[232433]: 2025-12-06 07:57:43.336 232437 INFO nova.compute.manager [-] [instance: 9aa31d67-6e8e-4301-99db-832dd1fe00bc] VM Stopped (Lifecycle Event)#033[00m
Dec  6 02:57:43 np0005548731 nova_compute[232433]: 2025-12-06 07:57:43.515 232437 DEBUG nova.compute.manager [None req-e4e30358-00b4-4333-a4cf-751ecda36236 - - - - - -] [instance: 9aa31d67-6e8e-4301-99db-832dd1fe00bc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:57:43 np0005548731 nova_compute[232433]: 2025-12-06 07:57:43.518 232437 DEBUG nova.compute.manager [None req-e4e30358-00b4-4333-a4cf-751ecda36236 - - - - - -] [instance: 9aa31d67-6e8e-4301-99db-832dd1fe00bc] Synchronizing instance power state after lifecycle event "Stopped"; current vm_state: shelved, current task_state: shelving_offloading, current DB power_state: 4, VM power_state: 4 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  6 02:57:43 np0005548731 nova_compute[232433]: 2025-12-06 07:57:43.550 232437 INFO nova.compute.manager [None req-e4e30358-00b4-4333-a4cf-751ecda36236 - - - - - -] [instance: 9aa31d67-6e8e-4301-99db-832dd1fe00bc] During sync_power_state the instance has a pending task (shelving_offloading). Skip.#033[00m
Dec  6 02:57:43 np0005548731 nova_compute[232433]: 2025-12-06 07:57:43.735 232437 INFO nova.virt.libvirt.driver [-] [instance: 9aa31d67-6e8e-4301-99db-832dd1fe00bc] Instance destroyed successfully.#033[00m
Dec  6 02:57:43 np0005548731 nova_compute[232433]: 2025-12-06 07:57:43.736 232437 DEBUG nova.objects.instance [None req-e2e0224c-825d-40fa-bdd5-7300e24f87b8 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] Lazy-loading 'resources' on Instance uuid 9aa31d67-6e8e-4301-99db-832dd1fe00bc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:57:43 np0005548731 nova_compute[232433]: 2025-12-06 07:57:43.752 232437 DEBUG nova.virt.libvirt.vif [None req-e2e0224c-825d-40fa-bdd5-7300e24f87b8 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T07:56:39Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestShelveInstance-server-760058587',display_name='tempest-TestShelveInstance-server-760058587',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-testshelveinstance-server-760058587',id=178,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPMK8f3hkRq1zqoQC83gMxNZg6M7ZAxKrSEKHe3B4eb91L7pfGPJzplK7LQrq84g1u5RR1ZeVyoQ0VCNm5QQfCXxX7TmQ1C0jimNi3kHh1QlwD/S46/iChPS6huTXb5CpA==',key_name='tempest-TestShelveInstance-967718151',keypairs=<?>,launch_index=0,launched_at=2025-12-06T07:56:49Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='c3c0564f8e9f4af9ae5b597a275c989f',ramdisk_id='',reservation_id='r-g1sdzs5v',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestShelveInstance-1863009913',owner_user_name='tempest-TestShelveInstance-1863009913-project-member',shelved_at='2025-12-06T07:57:41.125298',shelved_host='compute-2.ctlplane.example.com',shelved_image_id='12e13902-aaee-45d7-b251-bc95170ae31e'},tags=<?>,task_state='shelving_offloading',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T07:57:29Z,user_data=None,user_id='98e657096e3f4b528cd461a3dd6a750e',uuid=9aa31d67-6e8e-4301-99db-832dd1fe00bc,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='shelved') vif={"id": "3b228814-72c3-4086-9740-e056ea1c6d7b", "address": "fa:16:3e:b9:91:4c", "network": {"id": "8caa40db-27da-43ab-86ca-042284636e71", "bridge": "br-int", "label": "tempest-TestShelveInstance-1111687365-network", "subnets": [{"cidr": 
"10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.240", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c3c0564f8e9f4af9ae5b597a275c989f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3b228814-72", "ovs_interfaceid": "3b228814-72c3-4086-9740-e056ea1c6d7b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec  6 02:57:43 np0005548731 nova_compute[232433]: 2025-12-06 07:57:43.752 232437 DEBUG nova.network.os_vif_util [None req-e2e0224c-825d-40fa-bdd5-7300e24f87b8 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] Converting VIF {"id": "3b228814-72c3-4086-9740-e056ea1c6d7b", "address": "fa:16:3e:b9:91:4c", "network": {"id": "8caa40db-27da-43ab-86ca-042284636e71", "bridge": "br-int", "label": "tempest-TestShelveInstance-1111687365-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.240", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c3c0564f8e9f4af9ae5b597a275c989f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3b228814-72", "ovs_interfaceid": "3b228814-72c3-4086-9740-e056ea1c6d7b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 02:57:43 np0005548731 nova_compute[232433]: 2025-12-06 07:57:43.753 232437 DEBUG nova.network.os_vif_util [None req-e2e0224c-825d-40fa-bdd5-7300e24f87b8 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b9:91:4c,bridge_name='br-int',has_traffic_filtering=True,id=3b228814-72c3-4086-9740-e056ea1c6d7b,network=Network(8caa40db-27da-43ab-86ca-042284636e71),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3b228814-72') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 02:57:43 np0005548731 nova_compute[232433]: 2025-12-06 07:57:43.754 232437 DEBUG os_vif [None req-e2e0224c-825d-40fa-bdd5-7300e24f87b8 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b9:91:4c,bridge_name='br-int',has_traffic_filtering=True,id=3b228814-72c3-4086-9740-e056ea1c6d7b,network=Network(8caa40db-27da-43ab-86ca-042284636e71),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3b228814-72') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec  6 02:57:43 np0005548731 nova_compute[232433]: 2025-12-06 07:57:43.756 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:57:43 np0005548731 nova_compute[232433]: 2025-12-06 07:57:43.757 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3b228814-72, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:57:43 np0005548731 nova_compute[232433]: 2025-12-06 07:57:43.758 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:57:43 np0005548731 nova_compute[232433]: 2025-12-06 07:57:43.759 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:57:43 np0005548731 nova_compute[232433]: 2025-12-06 07:57:43.763 232437 INFO os_vif [None req-e2e0224c-825d-40fa-bdd5-7300e24f87b8 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b9:91:4c,bridge_name='br-int',has_traffic_filtering=True,id=3b228814-72c3-4086-9740-e056ea1c6d7b,network=Network(8caa40db-27da-43ab-86ca-042284636e71),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3b228814-72')#033[00m
Dec  6 02:57:43 np0005548731 nova_compute[232433]: 2025-12-06 07:57:43.808 232437 DEBUG nova.compute.manager [req-b6df0217-4fb0-4467-b7a5-766ae8aa84a0 req-aa887218-77b0-46d7-9ab2-5e085a96d8d4 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 9aa31d67-6e8e-4301-99db-832dd1fe00bc] Received event network-changed-3b228814-72c3-4086-9740-e056ea1c6d7b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:57:43 np0005548731 nova_compute[232433]: 2025-12-06 07:57:43.809 232437 DEBUG nova.compute.manager [req-b6df0217-4fb0-4467-b7a5-766ae8aa84a0 req-aa887218-77b0-46d7-9ab2-5e085a96d8d4 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 9aa31d67-6e8e-4301-99db-832dd1fe00bc] Refreshing instance network info cache due to event network-changed-3b228814-72c3-4086-9740-e056ea1c6d7b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  6 02:57:43 np0005548731 nova_compute[232433]: 2025-12-06 07:57:43.809 232437 DEBUG oslo_concurrency.lockutils [req-b6df0217-4fb0-4467-b7a5-766ae8aa84a0 req-aa887218-77b0-46d7-9ab2-5e085a96d8d4 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-9aa31d67-6e8e-4301-99db-832dd1fe00bc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 02:57:43 np0005548731 nova_compute[232433]: 2025-12-06 07:57:43.809 232437 DEBUG oslo_concurrency.lockutils [req-b6df0217-4fb0-4467-b7a5-766ae8aa84a0 req-aa887218-77b0-46d7-9ab2-5e085a96d8d4 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-9aa31d67-6e8e-4301-99db-832dd1fe00bc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 02:57:43 np0005548731 nova_compute[232433]: 2025-12-06 07:57:43.810 232437 DEBUG nova.network.neutron [req-b6df0217-4fb0-4467-b7a5-766ae8aa84a0 req-aa887218-77b0-46d7-9ab2-5e085a96d8d4 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 9aa31d67-6e8e-4301-99db-832dd1fe00bc] Refreshing network info cache for port 3b228814-72c3-4086-9740-e056ea1c6d7b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  6 02:57:44 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e403 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:57:44 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e404 e404: 3 total, 3 up, 3 in
Dec  6 02:57:44 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:57:44 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:57:44 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:57:44.511 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:57:44 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  6 02:57:44 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2598950693' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  6 02:57:44 np0005548731 nova_compute[232433]: 2025-12-06 07:57:44.671 232437 INFO nova.virt.libvirt.driver [None req-e2e0224c-825d-40fa-bdd5-7300e24f87b8 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] [instance: 9aa31d67-6e8e-4301-99db-832dd1fe00bc] Deleting instance files /var/lib/nova/instances/9aa31d67-6e8e-4301-99db-832dd1fe00bc_del#033[00m
Dec  6 02:57:44 np0005548731 nova_compute[232433]: 2025-12-06 07:57:44.672 232437 INFO nova.virt.libvirt.driver [None req-e2e0224c-825d-40fa-bdd5-7300e24f87b8 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] [instance: 9aa31d67-6e8e-4301-99db-832dd1fe00bc] Deletion of /var/lib/nova/instances/9aa31d67-6e8e-4301-99db-832dd1fe00bc_del complete#033[00m
Dec  6 02:57:44 np0005548731 nova_compute[232433]: 2025-12-06 07:57:44.751 232437 INFO nova.scheduler.client.report [None req-e2e0224c-825d-40fa-bdd5-7300e24f87b8 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] Deleted allocations for instance 9aa31d67-6e8e-4301-99db-832dd1fe00bc#033[00m
Dec  6 02:57:44 np0005548731 nova_compute[232433]: 2025-12-06 07:57:44.793 232437 DEBUG oslo_concurrency.lockutils [None req-e2e0224c-825d-40fa-bdd5-7300e24f87b8 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:57:44 np0005548731 nova_compute[232433]: 2025-12-06 07:57:44.794 232437 DEBUG oslo_concurrency.lockutils [None req-e2e0224c-825d-40fa-bdd5-7300e24f87b8 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:57:44 np0005548731 nova_compute[232433]: 2025-12-06 07:57:44.846 232437 DEBUG oslo_concurrency.processutils [None req-e2e0224c-825d-40fa-bdd5-7300e24f87b8 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:57:44 np0005548731 nova_compute[232433]: 2025-12-06 07:57:44.903 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:57:45 np0005548731 nova_compute[232433]: 2025-12-06 07:57:45.100 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:57:45 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:57:45 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:57:45 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:57:45.271 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:57:45 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 02:57:45 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1237685915' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 02:57:45 np0005548731 nova_compute[232433]: 2025-12-06 07:57:45.347 232437 DEBUG oslo_concurrency.processutils [None req-e2e0224c-825d-40fa-bdd5-7300e24f87b8 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.501s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:57:45 np0005548731 nova_compute[232433]: 2025-12-06 07:57:45.353 232437 DEBUG nova.compute.provider_tree [None req-e2e0224c-825d-40fa-bdd5-7300e24f87b8 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] Inventory has not changed in ProviderTree for provider: 6d00757a-082f-486d-ae84-869a2ba2e6e7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  6 02:57:45 np0005548731 nova_compute[232433]: 2025-12-06 07:57:45.377 232437 DEBUG nova.scheduler.client.report [None req-e2e0224c-825d-40fa-bdd5-7300e24f87b8 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] Inventory has not changed for provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  6 02:57:45 np0005548731 nova_compute[232433]: 2025-12-06 07:57:45.407 232437 DEBUG oslo_concurrency.lockutils [None req-e2e0224c-825d-40fa-bdd5-7300e24f87b8 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.613s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:57:45 np0005548731 nova_compute[232433]: 2025-12-06 07:57:45.460 232437 DEBUG oslo_concurrency.lockutils [None req-e2e0224c-825d-40fa-bdd5-7300e24f87b8 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] Lock "9aa31d67-6e8e-4301-99db-832dd1fe00bc" "released" by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" :: held 19.698s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:57:46 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:57:46.150 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=73, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:ec:b3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '32:72:e7:89:e0:7d'}, ipsec=False) old=SB_Global(nb_cfg=72) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 02:57:46 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:57:46.151 143965 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec  6 02:57:46 np0005548731 nova_compute[232433]: 2025-12-06 07:57:46.156 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:57:46 np0005548731 nova_compute[232433]: 2025-12-06 07:57:46.302 232437 DEBUG nova.network.neutron [req-b6df0217-4fb0-4467-b7a5-766ae8aa84a0 req-aa887218-77b0-46d7-9ab2-5e085a96d8d4 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 9aa31d67-6e8e-4301-99db-832dd1fe00bc] Updated VIF entry in instance network info cache for port 3b228814-72c3-4086-9740-e056ea1c6d7b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  6 02:57:46 np0005548731 nova_compute[232433]: 2025-12-06 07:57:46.303 232437 DEBUG nova.network.neutron [req-b6df0217-4fb0-4467-b7a5-766ae8aa84a0 req-aa887218-77b0-46d7-9ab2-5e085a96d8d4 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 9aa31d67-6e8e-4301-99db-832dd1fe00bc] Updating instance_info_cache with network_info: [{"id": "3b228814-72c3-4086-9740-e056ea1c6d7b", "address": "fa:16:3e:b9:91:4c", "network": {"id": "8caa40db-27da-43ab-86ca-042284636e71", "bridge": null, "label": "tempest-TestShelveInstance-1111687365-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.240", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c3c0564f8e9f4af9ae5b597a275c989f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "unbound", "details": {}, "devname": "tap3b228814-72", "ovs_interfaceid": null, "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 02:57:46 np0005548731 nova_compute[232433]: 2025-12-06 07:57:46.322 232437 DEBUG oslo_concurrency.lockutils [req-b6df0217-4fb0-4467-b7a5-766ae8aa84a0 req-aa887218-77b0-46d7-9ab2-5e085a96d8d4 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-9aa31d67-6e8e-4301-99db-832dd1fe00bc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 02:57:46 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:57:46 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:57:46 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:57:46.514 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:57:47 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:57:47 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:57:47 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:57:47.273 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:57:47 np0005548731 nova_compute[232433]: 2025-12-06 07:57:47.527 232437 DEBUG oslo_concurrency.lockutils [None req-761a4b2a-2328-48a3-b815-b4df1c264c28 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] Acquiring lock "21c591a0-5eed-4aa8-a68b-59a616b16e2b" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:57:47 np0005548731 nova_compute[232433]: 2025-12-06 07:57:47.527 232437 DEBUG oslo_concurrency.lockutils [None req-761a4b2a-2328-48a3-b815-b4df1c264c28 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] Lock "21c591a0-5eed-4aa8-a68b-59a616b16e2b" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:57:47 np0005548731 nova_compute[232433]: 2025-12-06 07:57:47.527 232437 DEBUG oslo_concurrency.lockutils [None req-761a4b2a-2328-48a3-b815-b4df1c264c28 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] Acquiring lock "21c591a0-5eed-4aa8-a68b-59a616b16e2b-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:57:47 np0005548731 nova_compute[232433]: 2025-12-06 07:57:47.527 232437 DEBUG oslo_concurrency.lockutils [None req-761a4b2a-2328-48a3-b815-b4df1c264c28 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] Lock "21c591a0-5eed-4aa8-a68b-59a616b16e2b-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:57:47 np0005548731 nova_compute[232433]: 2025-12-06 07:57:47.528 232437 DEBUG oslo_concurrency.lockutils [None req-761a4b2a-2328-48a3-b815-b4df1c264c28 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] Lock "21c591a0-5eed-4aa8-a68b-59a616b16e2b-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:57:47 np0005548731 nova_compute[232433]: 2025-12-06 07:57:47.528 232437 INFO nova.compute.manager [None req-761a4b2a-2328-48a3-b815-b4df1c264c28 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] [instance: 21c591a0-5eed-4aa8-a68b-59a616b16e2b] Terminating instance#033[00m
Dec  6 02:57:47 np0005548731 nova_compute[232433]: 2025-12-06 07:57:47.529 232437 DEBUG nova.compute.manager [None req-761a4b2a-2328-48a3-b815-b4df1c264c28 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] [instance: 21c591a0-5eed-4aa8-a68b-59a616b16e2b] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec  6 02:57:47 np0005548731 kernel: tapc76db338-03 (unregistering): left promiscuous mode
Dec  6 02:57:47 np0005548731 NetworkManager[49182]: <info>  [1765007867.6637] device (tapc76db338-03): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec  6 02:57:47 np0005548731 nova_compute[232433]: 2025-12-06 07:57:47.713 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:57:47 np0005548731 ovn_controller[133927]: 2025-12-06T07:57:47Z|00899|binding|INFO|Releasing lport c76db338-0396-40a2-82b3-0c720d28d2bd from this chassis (sb_readonly=0)
Dec  6 02:57:47 np0005548731 ovn_controller[133927]: 2025-12-06T07:57:47Z|00900|binding|INFO|Setting lport c76db338-0396-40a2-82b3-0c720d28d2bd down in Southbound
Dec  6 02:57:47 np0005548731 ovn_controller[133927]: 2025-12-06T07:57:47Z|00901|binding|INFO|Removing iface tapc76db338-03 ovn-installed in OVS
Dec  6 02:57:47 np0005548731 nova_compute[232433]: 2025-12-06 07:57:47.714 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:57:47 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:57:47.719 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:31:62:02 10.100.0.4'], port_security=['fa:16:3e:31:62:02 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '21c591a0-5eed-4aa8-a68b-59a616b16e2b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a3764201-4b86-4407-84d2-684bd05a44b3', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6164fee998c94b71a37886fe42b4c56c', 'neutron:revision_number': '6', 'neutron:security_group_ids': '2bb7af25-e3c4-4687-888a-3caf6297e5c6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7a293aea-136f-4ea2-8198-6213071653ca, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], logical_port=c76db338-0396-40a2-82b3-0c720d28d2bd) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 02:57:47 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:57:47.720 143965 INFO neutron.agent.ovn.metadata.agent [-] Port c76db338-0396-40a2-82b3-0c720d28d2bd in datapath a3764201-4b86-4407-84d2-684bd05a44b3 unbound from our chassis#033[00m
Dec  6 02:57:47 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:57:47.722 143965 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network a3764201-4b86-4407-84d2-684bd05a44b3, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec  6 02:57:47 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:57:47.723 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[603f0de1-b0c1-48af-9da2-709ddb32900a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:57:47 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:57:47.723 143965 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-a3764201-4b86-4407-84d2-684bd05a44b3 namespace which is not needed anymore#033[00m
Dec  6 02:57:47 np0005548731 nova_compute[232433]: 2025-12-06 07:57:47.728 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:57:47 np0005548731 systemd[1]: machine-qemu\x2d89\x2dinstance\x2d000000b0.scope: Deactivated successfully.
Dec  6 02:57:47 np0005548731 systemd[1]: machine-qemu\x2d89\x2dinstance\x2d000000b0.scope: Consumed 18.502s CPU time.
Dec  6 02:57:47 np0005548731 systemd-machined[195355]: Machine qemu-89-instance-000000b0 terminated.
Dec  6 02:57:47 np0005548731 neutron-haproxy-ovnmeta-a3764201-4b86-4407-84d2-684bd05a44b3[315469]: [NOTICE]   (315473) : haproxy version is 2.8.14-c23fe91
Dec  6 02:57:47 np0005548731 neutron-haproxy-ovnmeta-a3764201-4b86-4407-84d2-684bd05a44b3[315469]: [NOTICE]   (315473) : path to executable is /usr/sbin/haproxy
Dec  6 02:57:47 np0005548731 neutron-haproxy-ovnmeta-a3764201-4b86-4407-84d2-684bd05a44b3[315469]: [WARNING]  (315473) : Exiting Master process...
Dec  6 02:57:47 np0005548731 neutron-haproxy-ovnmeta-a3764201-4b86-4407-84d2-684bd05a44b3[315469]: [WARNING]  (315473) : Exiting Master process...
Dec  6 02:57:47 np0005548731 neutron-haproxy-ovnmeta-a3764201-4b86-4407-84d2-684bd05a44b3[315469]: [ALERT]    (315473) : Current worker (315475) exited with code 143 (Terminated)
Dec  6 02:57:47 np0005548731 neutron-haproxy-ovnmeta-a3764201-4b86-4407-84d2-684bd05a44b3[315469]: [WARNING]  (315473) : All workers exited. Exiting... (0)
Dec  6 02:57:47 np0005548731 systemd[1]: libpod-6516198d7d294f8bb86531b26a9d474f4d1b5e8b2cd129761a6aeab20b28a424.scope: Deactivated successfully.
Dec  6 02:57:47 np0005548731 podman[317371]: 2025-12-06 07:57:47.872959104 +0000 UTC m=+0.046217992 container died 6516198d7d294f8bb86531b26a9d474f4d1b5e8b2cd129761a6aeab20b28a424 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a3764201-4b86-4407-84d2-684bd05a44b3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.license=GPLv2)
Dec  6 02:57:47 np0005548731 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-6516198d7d294f8bb86531b26a9d474f4d1b5e8b2cd129761a6aeab20b28a424-userdata-shm.mount: Deactivated successfully.
Dec  6 02:57:47 np0005548731 systemd[1]: var-lib-containers-storage-overlay-68173a740d969647938b23c145d7d31b01150b56b517124d6746cb8c447e998c-merged.mount: Deactivated successfully.
Dec  6 02:57:47 np0005548731 podman[317371]: 2025-12-06 07:57:47.908173895 +0000 UTC m=+0.081432783 container cleanup 6516198d7d294f8bb86531b26a9d474f4d1b5e8b2cd129761a6aeab20b28a424 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a3764201-4b86-4407-84d2-684bd05a44b3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  6 02:57:47 np0005548731 systemd[1]: libpod-conmon-6516198d7d294f8bb86531b26a9d474f4d1b5e8b2cd129761a6aeab20b28a424.scope: Deactivated successfully.
Dec  6 02:57:47 np0005548731 nova_compute[232433]: 2025-12-06 07:57:47.966 232437 INFO nova.virt.libvirt.driver [-] [instance: 21c591a0-5eed-4aa8-a68b-59a616b16e2b] Instance destroyed successfully.#033[00m
Dec  6 02:57:47 np0005548731 nova_compute[232433]: 2025-12-06 07:57:47.967 232437 DEBUG nova.objects.instance [None req-761a4b2a-2328-48a3-b815-b4df1c264c28 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] Lazy-loading 'resources' on Instance uuid 21c591a0-5eed-4aa8-a68b-59a616b16e2b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:57:47 np0005548731 podman[317400]: 2025-12-06 07:57:47.969599938 +0000 UTC m=+0.041487126 container remove 6516198d7d294f8bb86531b26a9d474f4d1b5e8b2cd129761a6aeab20b28a424 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a3764201-4b86-4407-84d2-684bd05a44b3, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Dec  6 02:57:47 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:57:47.979 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[a13a70ff-abf4-4d8c-ac81-263ba1dda2d3]: (4, ('Sat Dec  6 07:57:47 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-a3764201-4b86-4407-84d2-684bd05a44b3 (6516198d7d294f8bb86531b26a9d474f4d1b5e8b2cd129761a6aeab20b28a424)\n6516198d7d294f8bb86531b26a9d474f4d1b5e8b2cd129761a6aeab20b28a424\nSat Dec  6 07:57:47 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-a3764201-4b86-4407-84d2-684bd05a44b3 (6516198d7d294f8bb86531b26a9d474f4d1b5e8b2cd129761a6aeab20b28a424)\n6516198d7d294f8bb86531b26a9d474f4d1b5e8b2cd129761a6aeab20b28a424\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:57:47 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:57:47.980 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[e8f1b24c-f1ea-4e13-8af5-bd63cb5e1172]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:57:47 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:57:47.981 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa3764201-40, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:57:47 np0005548731 nova_compute[232433]: 2025-12-06 07:57:47.983 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:57:47 np0005548731 kernel: tapa3764201-40: left promiscuous mode
Dec  6 02:57:47 np0005548731 nova_compute[232433]: 2025-12-06 07:57:47.991 232437 DEBUG nova.virt.libvirt.vif [None req-761a4b2a-2328-48a3-b815-b4df1c264c28 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T07:55:44Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-TestInstancesWithCinderVolumes-server-1553758528',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-testinstanceswithcindervolumes-server-1553758528',id=176,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCMlCJgA181fL+hWV96XYAuaaRjR/DFcxIrENEwwuUSLNLg2Wo/zP2WcPtpxKQuFaV64lRGeBPzRnqkTHdlSql81bpyaGplyAnqRHnVLqVTwCxa7e5Tmw+I0TD65PH3Dpw==',key_name='tempest-TestInstancesWithCinderVolumes-1103529456',keypairs=<?>,launch_index=0,launched_at=2025-12-06T07:55:55Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='6164fee998c94b71a37886fe42b4c56c',ramdisk_id='',reservation_id='r-228wza3f',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtable
t',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-TestInstancesWithCinderVolumes-1429596635',owner_user_name='tempest-TestInstancesWithCinderVolumes-1429596635-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T07:55:55Z,user_data=None,user_id='e685a049c8a74aa8aea831fbdaf2acf8',uuid=21c591a0-5eed-4aa8-a68b-59a616b16e2b,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "c76db338-0396-40a2-82b3-0c720d28d2bd", "address": "fa:16:3e:31:62:02", "network": {"id": "a3764201-4b86-4407-84d2-684bd05a44b3", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-2060653314-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6164fee998c94b71a37886fe42b4c56c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc76db338-03", "ovs_interfaceid": "c76db338-0396-40a2-82b3-0c720d28d2bd", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec  6 02:57:47 np0005548731 nova_compute[232433]: 2025-12-06 07:57:47.991 232437 DEBUG nova.network.os_vif_util [None req-761a4b2a-2328-48a3-b815-b4df1c264c28 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] Converting VIF {"id": "c76db338-0396-40a2-82b3-0c720d28d2bd", "address": "fa:16:3e:31:62:02", "network": {"id": "a3764201-4b86-4407-84d2-684bd05a44b3", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-2060653314-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6164fee998c94b71a37886fe42b4c56c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc76db338-03", "ovs_interfaceid": "c76db338-0396-40a2-82b3-0c720d28d2bd", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 02:57:47 np0005548731 nova_compute[232433]: 2025-12-06 07:57:47.992 232437 DEBUG nova.network.os_vif_util [None req-761a4b2a-2328-48a3-b815-b4df1c264c28 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:31:62:02,bridge_name='br-int',has_traffic_filtering=True,id=c76db338-0396-40a2-82b3-0c720d28d2bd,network=Network(a3764201-4b86-4407-84d2-684bd05a44b3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc76db338-03') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 02:57:47 np0005548731 nova_compute[232433]: 2025-12-06 07:57:47.993 232437 DEBUG os_vif [None req-761a4b2a-2328-48a3-b815-b4df1c264c28 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:31:62:02,bridge_name='br-int',has_traffic_filtering=True,id=c76db338-0396-40a2-82b3-0c720d28d2bd,network=Network(a3764201-4b86-4407-84d2-684bd05a44b3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc76db338-03') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec  6 02:57:47 np0005548731 nova_compute[232433]: 2025-12-06 07:57:47.995 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:57:47 np0005548731 nova_compute[232433]: 2025-12-06 07:57:47.995 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc76db338-03, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:57:47 np0005548731 nova_compute[232433]: 2025-12-06 07:57:47.997 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:57:47 np0005548731 nova_compute[232433]: 2025-12-06 07:57:47.999 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  6 02:57:48 np0005548731 nova_compute[232433]: 2025-12-06 07:57:48.000 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:57:48 np0005548731 nova_compute[232433]: 2025-12-06 07:57:48.002 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:57:48 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:57:48.003 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[807fc59d-9927-4d3b-bd15-e4a84811c170]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:57:48 np0005548731 nova_compute[232433]: 2025-12-06 07:57:48.007 232437 INFO os_vif [None req-761a4b2a-2328-48a3-b815-b4df1c264c28 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:31:62:02,bridge_name='br-int',has_traffic_filtering=True,id=c76db338-0396-40a2-82b3-0c720d28d2bd,network=Network(a3764201-4b86-4407-84d2-684bd05a44b3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc76db338-03')#033[00m
Dec  6 02:57:48 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:57:48.019 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[a9cb4ca4-96aa-43cf-bba5-b4bf2655e9c7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:57:48 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:57:48.020 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[0cd3043c-53dc-4363-a613-d9069f4f243c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:57:48 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:57:48.042 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[74b340ef-c6ef-4f46-9824-3d5792b6f615]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 801651, 'reachable_time': 21522, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 317443, 'error': None, 'target': 'ovnmeta-a3764201-4b86-4407-84d2-684bd05a44b3', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:57:48 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:57:48.045 144079 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-a3764201-4b86-4407-84d2-684bd05a44b3 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Dec  6 02:57:48 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:57:48.045 144079 DEBUG oslo.privsep.daemon [-] privsep: reply[79261b4e-8e53-4e18-8dcc-412fe3315f56]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:57:48 np0005548731 systemd[1]: run-netns-ovnmeta\x2da3764201\x2d4b86\x2d4407\x2d84d2\x2d684bd05a44b3.mount: Deactivated successfully.
Dec  6 02:57:48 np0005548731 nova_compute[232433]: 2025-12-06 07:57:48.241 232437 INFO nova.virt.libvirt.driver [None req-761a4b2a-2328-48a3-b815-b4df1c264c28 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] [instance: 21c591a0-5eed-4aa8-a68b-59a616b16e2b] Deleting instance files /var/lib/nova/instances/21c591a0-5eed-4aa8-a68b-59a616b16e2b_del#033[00m
Dec  6 02:57:48 np0005548731 nova_compute[232433]: 2025-12-06 07:57:48.242 232437 INFO nova.virt.libvirt.driver [None req-761a4b2a-2328-48a3-b815-b4df1c264c28 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] [instance: 21c591a0-5eed-4aa8-a68b-59a616b16e2b] Deletion of /var/lib/nova/instances/21c591a0-5eed-4aa8-a68b-59a616b16e2b_del complete#033[00m
Dec  6 02:57:48 np0005548731 nova_compute[232433]: 2025-12-06 07:57:48.351 232437 DEBUG nova.compute.manager [req-da45aa12-d999-4fb7-85ad-132194c749ad req-7a7ddb8c-d27d-47af-8c40-5cc4dbac06f7 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 21c591a0-5eed-4aa8-a68b-59a616b16e2b] Received event network-vif-unplugged-c76db338-0396-40a2-82b3-0c720d28d2bd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:57:48 np0005548731 nova_compute[232433]: 2025-12-06 07:57:48.351 232437 DEBUG oslo_concurrency.lockutils [req-da45aa12-d999-4fb7-85ad-132194c749ad req-7a7ddb8c-d27d-47af-8c40-5cc4dbac06f7 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "21c591a0-5eed-4aa8-a68b-59a616b16e2b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:57:48 np0005548731 nova_compute[232433]: 2025-12-06 07:57:48.352 232437 DEBUG oslo_concurrency.lockutils [req-da45aa12-d999-4fb7-85ad-132194c749ad req-7a7ddb8c-d27d-47af-8c40-5cc4dbac06f7 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "21c591a0-5eed-4aa8-a68b-59a616b16e2b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:57:48 np0005548731 nova_compute[232433]: 2025-12-06 07:57:48.352 232437 DEBUG oslo_concurrency.lockutils [req-da45aa12-d999-4fb7-85ad-132194c749ad req-7a7ddb8c-d27d-47af-8c40-5cc4dbac06f7 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "21c591a0-5eed-4aa8-a68b-59a616b16e2b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:57:48 np0005548731 nova_compute[232433]: 2025-12-06 07:57:48.352 232437 DEBUG nova.compute.manager [req-da45aa12-d999-4fb7-85ad-132194c749ad req-7a7ddb8c-d27d-47af-8c40-5cc4dbac06f7 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 21c591a0-5eed-4aa8-a68b-59a616b16e2b] No waiting events found dispatching network-vif-unplugged-c76db338-0396-40a2-82b3-0c720d28d2bd pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 02:57:48 np0005548731 nova_compute[232433]: 2025-12-06 07:57:48.353 232437 DEBUG nova.compute.manager [req-da45aa12-d999-4fb7-85ad-132194c749ad req-7a7ddb8c-d27d-47af-8c40-5cc4dbac06f7 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 21c591a0-5eed-4aa8-a68b-59a616b16e2b] Received event network-vif-unplugged-c76db338-0396-40a2-82b3-0c720d28d2bd for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Dec  6 02:57:48 np0005548731 nova_compute[232433]: 2025-12-06 07:57:48.366 232437 INFO nova.compute.manager [None req-761a4b2a-2328-48a3-b815-b4df1c264c28 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] [instance: 21c591a0-5eed-4aa8-a68b-59a616b16e2b] Took 0.84 seconds to destroy the instance on the hypervisor.#033[00m
Dec  6 02:57:48 np0005548731 nova_compute[232433]: 2025-12-06 07:57:48.366 232437 DEBUG oslo.service.loopingcall [None req-761a4b2a-2328-48a3-b815-b4df1c264c28 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Dec  6 02:57:48 np0005548731 nova_compute[232433]: 2025-12-06 07:57:48.367 232437 DEBUG nova.compute.manager [-] [instance: 21c591a0-5eed-4aa8-a68b-59a616b16e2b] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Dec  6 02:57:48 np0005548731 nova_compute[232433]: 2025-12-06 07:57:48.367 232437 DEBUG nova.network.neutron [-] [instance: 21c591a0-5eed-4aa8-a68b-59a616b16e2b] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Dec  6 02:57:48 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:57:48 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:57:48 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:57:48.517 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:57:49 np0005548731 nova_compute[232433]: 2025-12-06 07:57:49.012 232437 DEBUG nova.network.neutron [-] [instance: 21c591a0-5eed-4aa8-a68b-59a616b16e2b] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 02:57:49 np0005548731 nova_compute[232433]: 2025-12-06 07:57:49.035 232437 INFO nova.compute.manager [-] [instance: 21c591a0-5eed-4aa8-a68b-59a616b16e2b] Took 0.67 seconds to deallocate network for instance.#033[00m
Dec  6 02:57:49 np0005548731 nova_compute[232433]: 2025-12-06 07:57:49.128 232437 DEBUG nova.compute.manager [req-f208f23e-7248-47a4-8b25-8917754cb3e5 req-a4092bae-e655-40d3-9092-41bf6f7e79b6 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 21c591a0-5eed-4aa8-a68b-59a616b16e2b] Received event network-vif-deleted-c76db338-0396-40a2-82b3-0c720d28d2bd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:57:49 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e404 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:57:49 np0005548731 nova_compute[232433]: 2025-12-06 07:57:49.255 232437 INFO nova.compute.manager [None req-761a4b2a-2328-48a3-b815-b4df1c264c28 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] [instance: 21c591a0-5eed-4aa8-a68b-59a616b16e2b] Took 0.22 seconds to detach 1 volumes for instance.#033[00m
Dec  6 02:57:49 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:57:49 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:57:49 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:57:49.275 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:57:49 np0005548731 nova_compute[232433]: 2025-12-06 07:57:49.319 232437 DEBUG oslo_concurrency.lockutils [None req-761a4b2a-2328-48a3-b815-b4df1c264c28 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:57:49 np0005548731 nova_compute[232433]: 2025-12-06 07:57:49.319 232437 DEBUG oslo_concurrency.lockutils [None req-761a4b2a-2328-48a3-b815-b4df1c264c28 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:57:49 np0005548731 nova_compute[232433]: 2025-12-06 07:57:49.412 232437 DEBUG oslo_concurrency.processutils [None req-761a4b2a-2328-48a3-b815-b4df1c264c28 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:57:49 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 02:57:49 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3680227484' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 02:57:49 np0005548731 nova_compute[232433]: 2025-12-06 07:57:49.839 232437 DEBUG oslo_concurrency.processutils [None req-761a4b2a-2328-48a3-b815-b4df1c264c28 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.427s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:57:49 np0005548731 nova_compute[232433]: 2025-12-06 07:57:49.846 232437 DEBUG nova.compute.provider_tree [None req-761a4b2a-2328-48a3-b815-b4df1c264c28 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] Inventory has not changed in ProviderTree for provider: 6d00757a-082f-486d-ae84-869a2ba2e6e7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  6 02:57:49 np0005548731 nova_compute[232433]: 2025-12-06 07:57:49.862 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:57:49 np0005548731 nova_compute[232433]: 2025-12-06 07:57:49.917 232437 DEBUG nova.scheduler.client.report [None req-761a4b2a-2328-48a3-b815-b4df1c264c28 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] Inventory has not changed for provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  6 02:57:50 np0005548731 nova_compute[232433]: 2025-12-06 07:57:50.242 232437 DEBUG oslo_concurrency.lockutils [None req-761a4b2a-2328-48a3-b815-b4df1c264c28 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.923s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:57:50 np0005548731 nova_compute[232433]: 2025-12-06 07:57:50.437 232437 INFO nova.scheduler.client.report [None req-761a4b2a-2328-48a3-b815-b4df1c264c28 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] Deleted allocations for instance 21c591a0-5eed-4aa8-a68b-59a616b16e2b#033[00m
Dec  6 02:57:50 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:57:50 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:57:50 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:57:50.521 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:57:50 np0005548731 nova_compute[232433]: 2025-12-06 07:57:50.676 232437 DEBUG nova.compute.manager [req-a9add809-716a-4717-ae9a-817470749910 req-621f94ee-428a-455d-ae93-971ac36d5264 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 21c591a0-5eed-4aa8-a68b-59a616b16e2b] Received event network-vif-plugged-c76db338-0396-40a2-82b3-0c720d28d2bd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec  6 02:57:50 np0005548731 nova_compute[232433]: 2025-12-06 07:57:50.677 232437 DEBUG oslo_concurrency.lockutils [req-a9add809-716a-4717-ae9a-817470749910 req-621f94ee-428a-455d-ae93-971ac36d5264 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "21c591a0-5eed-4aa8-a68b-59a616b16e2b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  6 02:57:50 np0005548731 nova_compute[232433]: 2025-12-06 07:57:50.677 232437 DEBUG oslo_concurrency.lockutils [req-a9add809-716a-4717-ae9a-817470749910 req-621f94ee-428a-455d-ae93-971ac36d5264 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "21c591a0-5eed-4aa8-a68b-59a616b16e2b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  6 02:57:50 np0005548731 nova_compute[232433]: 2025-12-06 07:57:50.677 232437 DEBUG oslo_concurrency.lockutils [req-a9add809-716a-4717-ae9a-817470749910 req-621f94ee-428a-455d-ae93-971ac36d5264 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "21c591a0-5eed-4aa8-a68b-59a616b16e2b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  6 02:57:50 np0005548731 nova_compute[232433]: 2025-12-06 07:57:50.678 232437 DEBUG nova.compute.manager [req-a9add809-716a-4717-ae9a-817470749910 req-621f94ee-428a-455d-ae93-971ac36d5264 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 21c591a0-5eed-4aa8-a68b-59a616b16e2b] No waiting events found dispatching network-vif-plugged-c76db338-0396-40a2-82b3-0c720d28d2bd pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec  6 02:57:50 np0005548731 nova_compute[232433]: 2025-12-06 07:57:50.678 232437 WARNING nova.compute.manager [req-a9add809-716a-4717-ae9a-817470749910 req-621f94ee-428a-455d-ae93-971ac36d5264 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 21c591a0-5eed-4aa8-a68b-59a616b16e2b] Received unexpected event network-vif-plugged-c76db338-0396-40a2-82b3-0c720d28d2bd for instance with vm_state deleted and task_state None.
Dec  6 02:57:50 np0005548731 nova_compute[232433]: 2025-12-06 07:57:50.748 232437 DEBUG oslo_concurrency.lockutils [None req-761a4b2a-2328-48a3-b815-b4df1c264c28 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] Lock "21c591a0-5eed-4aa8-a68b-59a616b16e2b" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.221s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  6 02:57:51 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:57:51 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:57:51 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:57:51.277 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:57:52 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:57:52 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:57:52 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:57:52.524 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:57:52 np0005548731 nova_compute[232433]: 2025-12-06 07:57:52.998 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 02:57:53 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:57:53 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:57:53 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:57:53.279 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:57:54 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:57:54.155 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=9f96b960-b4f2-40bd-ae99-08121f5e8b78, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '73'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec  6 02:57:54 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e404 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:57:54 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:57:54 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:57:54 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:57:54.527 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:57:54 np0005548731 nova_compute[232433]: 2025-12-06 07:57:54.863 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 02:57:55 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:57:55 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:57:55 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:57:55.281 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:57:56 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:57:56 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:57:56 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:57:56.530 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:57:57 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:57:57 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:57:57 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:57:57.285 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:57:57 np0005548731 nova_compute[232433]: 2025-12-06 07:57:57.999 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 02:57:58 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:57:58 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:57:58 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:57:58.533 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:57:59 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e404 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:57:59 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:57:59 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:57:59 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:57:59.288 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:57:59 np0005548731 nova_compute[232433]: 2025-12-06 07:57:59.865 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 02:58:00 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:58:00 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:58:00 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:58:00.536 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:58:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:58:00.903 143965 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  6 02:58:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:58:00.904 143965 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  6 02:58:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:58:00.904 143965 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  6 02:58:01 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:58:01 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:58:01 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:58:01.290 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:58:02 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:58:02 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:58:02 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:58:02.539 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:58:02 np0005548731 nova_compute[232433]: 2025-12-06 07:58:02.964 232437 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765007867.9632156, 21c591a0-5eed-4aa8-a68b-59a616b16e2b => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec  6 02:58:02 np0005548731 nova_compute[232433]: 2025-12-06 07:58:02.964 232437 INFO nova.compute.manager [-] [instance: 21c591a0-5eed-4aa8-a68b-59a616b16e2b] VM Stopped (Lifecycle Event)
Dec  6 02:58:03 np0005548731 nova_compute[232433]: 2025-12-06 07:58:03.000 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 02:58:03 np0005548731 nova_compute[232433]: 2025-12-06 07:58:03.095 232437 DEBUG nova.compute.manager [None req-04a47ac4-dfc1-4d4b-951f-7707dea744c1 - - - - - -] [instance: 21c591a0-5eed-4aa8-a68b-59a616b16e2b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec  6 02:58:03 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:58:03 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:58:03 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:58:03.293 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:58:04 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e404 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:58:04 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:58:04 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:58:04 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:58:04.542 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:58:04 np0005548731 nova_compute[232433]: 2025-12-06 07:58:04.867 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 02:58:04 np0005548731 podman[317480]: 2025-12-06 07:58:04.903264593 +0000 UTC m=+0.057805125 container health_status 26c2ce9bdb83c6db5ccad7f5b2a7cb4e9354ab0235ad2f942ea42c8feda2142b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Dec  6 02:58:04 np0005548731 podman[317482]: 2025-12-06 07:58:04.920068464 +0000 UTC m=+0.067766419 container health_status ae4fd08cdec31d6ae2a6b91ffff7deb6ca5a8375cb52ee4b685cc6f25796824e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Dec  6 02:58:04 np0005548731 podman[317481]: 2025-12-06 07:58:04.93747821 +0000 UTC m=+0.085713629 container health_status a118c3becf56cfbcae208065771ebf10a65162bb7b96389971293e534823fb78 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller)
Dec  6 02:58:05 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:58:05 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:58:05 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:58:05.295 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:58:06 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:58:06 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:58:06 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:58:06.545 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:58:07 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:58:07 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:58:07 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:58:07.298 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:58:08 np0005548731 nova_compute[232433]: 2025-12-06 07:58:08.001 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 02:58:08 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e405 e405: 3 total, 3 up, 3 in
Dec  6 02:58:08 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:58:08 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:58:08 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:58:08.547 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:58:09 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  6 02:58:09 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/362523394' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  6 02:58:09 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  6 02:58:09 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/362523394' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  6 02:58:09 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e405 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:58:09 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:58:09 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:58:09 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:58:09.301 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:58:09 np0005548731 nova_compute[232433]: 2025-12-06 07:58:09.869 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 02:58:10 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:58:10 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:58:10 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:58:10.550 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:58:11 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:58:11 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:58:11 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:58:11.303 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:58:12 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:58:12 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:58:12 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:58:12.553 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:58:13 np0005548731 nova_compute[232433]: 2025-12-06 07:58:13.004 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 02:58:13 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:58:13 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:58:13 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:58:13.306 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:58:14 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e405 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:58:14 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:58:14 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:58:14 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:58:14.557 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:58:14 np0005548731 nova_compute[232433]: 2025-12-06 07:58:14.870 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 02:58:15 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:58:15 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:58:15 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:58:15.307 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:58:16 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:58:16 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:58:16 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:58:16.560 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:58:17 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:58:17 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:58:17 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:58:17.309 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:58:18 np0005548731 nova_compute[232433]: 2025-12-06 07:58:18.005 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 02:58:18 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:58:18 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:58:18 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:58:18.562 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:58:18 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e406 e406: 3 total, 3 up, 3 in
Dec  6 02:58:19 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e406 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:58:19 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:58:19 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:58:19 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:58:19.312 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:58:19 np0005548731 nova_compute[232433]: 2025-12-06 07:58:19.909 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 02:58:20 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 02:58:20 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 02:58:20 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  6 02:58:20 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 02:58:20 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  6 02:58:20 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:58:20 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 02:58:20 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:58:20.566 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 02:58:21 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:58:21 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:58:21 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:58:21.315 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:58:22 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  6 02:58:22 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1654241962' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  6 02:58:22 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  6 02:58:22 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1654241962' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  6 02:58:22 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:58:22 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:58:22 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:58:22.569 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:58:23 np0005548731 nova_compute[232433]: 2025-12-06 07:58:23.007 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 02:58:23 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:58:23 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:58:23 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:58:23.317 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:58:24 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e406 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:58:24 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:58:24 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:58:24 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:58:24.573 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:58:24 np0005548731 nova_compute[232433]: 2025-12-06 07:58:24.910 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 02:58:24 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 02:58:24 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 02:58:25 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:58:25 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:58:25 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:58:25.321 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:58:26 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:58:26 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:58:26 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:58:26.576 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:58:26 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:58:26.969 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=74, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:ec:b3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '32:72:e7:89:e0:7d'}, ipsec=False) old=SB_Global(nb_cfg=73) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 02:58:26 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:58:26.970 143965 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec  6 02:58:26 np0005548731 nova_compute[232433]: 2025-12-06 07:58:26.970 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:58:27 np0005548731 nova_compute[232433]: 2025-12-06 07:58:27.241 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:58:27 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:58:27 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:58:27 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:58:27.323 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:58:27 np0005548731 nova_compute[232433]: 2025-12-06 07:58:27.507 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:58:28 np0005548731 nova_compute[232433]: 2025-12-06 07:58:28.008 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:58:28 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:58:28 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:58:28 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:58:28.579 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:58:29 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e407 e407: 3 total, 3 up, 3 in
Dec  6 02:58:29 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:58:29 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:58:29 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:58:29 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:58:29.325 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:58:29 np0005548731 nova_compute[232433]: 2025-12-06 07:58:29.950 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:58:30 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:58:30 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:58:30 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:58:30.584 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:58:31 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:58:31 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 02:58:31 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:58:31.327 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 02:58:32 np0005548731 nova_compute[232433]: 2025-12-06 07:58:32.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:58:32 np0005548731 nova_compute[232433]: 2025-12-06 07:58:32.105 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  6 02:58:32 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:58:32 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:58:32 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:58:32.587 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:58:32 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:58:32.973 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=9f96b960-b4f2-40bd-ae99-08121f5e8b78, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '74'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:58:33 np0005548731 nova_compute[232433]: 2025-12-06 07:58:33.010 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:58:33 np0005548731 nova_compute[232433]: 2025-12-06 07:58:33.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:58:33 np0005548731 nova_compute[232433]: 2025-12-06 07:58:33.105 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  6 02:58:33 np0005548731 nova_compute[232433]: 2025-12-06 07:58:33.315 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Dec  6 02:58:33 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:58:33 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:58:33 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:58:33.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:58:34 np0005548731 nova_compute[232433]: 2025-12-06 07:58:34.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:58:34 np0005548731 nova_compute[232433]: 2025-12-06 07:58:34.105 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:58:34 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:58:34 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:58:34 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:58:34 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:58:34.590 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:58:34 np0005548731 nova_compute[232433]: 2025-12-06 07:58:34.952 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:58:35 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:58:35 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:58:35 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:58:35.333 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:58:35 np0005548731 podman[317844]: 2025-12-06 07:58:35.912148124 +0000 UTC m=+0.067088063 container health_status 26c2ce9bdb83c6db5ccad7f5b2a7cb4e9354ab0235ad2f942ea42c8feda2142b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Dec  6 02:58:35 np0005548731 podman[317846]: 2025-12-06 07:58:35.913653241 +0000 UTC m=+0.060540132 container health_status ae4fd08cdec31d6ae2a6b91ffff7deb6ca5a8375cb52ee4b685cc6f25796824e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=multipathd, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=multipathd)
Dec  6 02:58:35 np0005548731 podman[317845]: 2025-12-06 07:58:35.936192092 +0000 UTC m=+0.090636258 container health_status a118c3becf56cfbcae208065771ebf10a65162bb7b96389971293e534823fb78 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec  6 02:58:36 np0005548731 nova_compute[232433]: 2025-12-06 07:58:36.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:58:36 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:58:36 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:58:36 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:58:36.592 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:58:37 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:58:37 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:58:37 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:58:37.335 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:58:38 np0005548731 nova_compute[232433]: 2025-12-06 07:58:38.013 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:58:38 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:58:38 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:58:38 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:58:38.596 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:58:39 np0005548731 nova_compute[232433]: 2025-12-06 07:58:39.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:58:39 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:58:39 np0005548731 nova_compute[232433]: 2025-12-06 07:58:39.325 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:58:39 np0005548731 nova_compute[232433]: 2025-12-06 07:58:39.326 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:58:39 np0005548731 nova_compute[232433]: 2025-12-06 07:58:39.326 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:58:39 np0005548731 nova_compute[232433]: 2025-12-06 07:58:39.326 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  6 02:58:39 np0005548731 nova_compute[232433]: 2025-12-06 07:58:39.327 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:58:39 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:58:39 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:58:39 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:58:39.338 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:58:39 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 02:58:39 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/570637393' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 02:58:39 np0005548731 nova_compute[232433]: 2025-12-06 07:58:39.790 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.464s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:58:39 np0005548731 nova_compute[232433]: 2025-12-06 07:58:39.933 232437 WARNING nova.virt.libvirt.driver [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  6 02:58:39 np0005548731 nova_compute[232433]: 2025-12-06 07:58:39.934 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4233MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  6 02:58:39 np0005548731 nova_compute[232433]: 2025-12-06 07:58:39.934 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:58:39 np0005548731 nova_compute[232433]: 2025-12-06 07:58:39.935 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:58:39 np0005548731 nova_compute[232433]: 2025-12-06 07:58:39.953 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:58:40 np0005548731 nova_compute[232433]: 2025-12-06 07:58:40.356 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  6 02:58:40 np0005548731 nova_compute[232433]: 2025-12-06 07:58:40.357 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  6 02:58:40 np0005548731 nova_compute[232433]: 2025-12-06 07:58:40.427 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:58:40 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:58:40 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 02:58:40 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:58:40.599 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 02:58:40 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 02:58:40 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/319071054' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 02:58:40 np0005548731 nova_compute[232433]: 2025-12-06 07:58:40.885 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.457s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:58:40 np0005548731 nova_compute[232433]: 2025-12-06 07:58:40.890 232437 DEBUG nova.compute.provider_tree [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Inventory has not changed in ProviderTree for provider: 6d00757a-082f-486d-ae84-869a2ba2e6e7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  6 02:58:40 np0005548731 nova_compute[232433]: 2025-12-06 07:58:40.917 232437 DEBUG nova.scheduler.client.report [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Inventory has not changed for provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  6 02:58:40 np0005548731 nova_compute[232433]: 2025-12-06 07:58:40.957 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  6 02:58:40 np0005548731 nova_compute[232433]: 2025-12-06 07:58:40.957 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.023s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:58:41 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:58:41 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:58:41 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:58:41.341 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:58:42 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:58:42 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:58:42 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:58:42.602 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:58:43 np0005548731 nova_compute[232433]: 2025-12-06 07:58:43.014 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:58:43 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:58:43 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:58:43 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:58:43.343 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:58:43 np0005548731 nova_compute[232433]: 2025-12-06 07:58:43.958 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:58:43 np0005548731 nova_compute[232433]: 2025-12-06 07:58:43.958 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:58:43 np0005548731 nova_compute[232433]: 2025-12-06 07:58:43.958 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:58:44 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:58:44 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:58:44 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:58:44 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:58:44.605 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:58:44 np0005548731 nova_compute[232433]: 2025-12-06 07:58:44.954 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:58:45 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:58:45 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:58:45 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:58:45.345 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:58:46 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:58:46 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:58:46 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:58:46.609 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:58:47 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:58:47 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.002000048s ======
Dec  6 02:58:47 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:58:47.346 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000048s
Dec  6 02:58:48 np0005548731 nova_compute[232433]: 2025-12-06 07:58:48.016 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:58:48 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:58:48 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:58:48 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:58:48.611 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:58:49 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:58:49 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:58:49 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:58:49 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:58:49.348 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:58:49 np0005548731 nova_compute[232433]: 2025-12-06 07:58:49.956 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:58:50 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:58:50 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:58:50 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:58:50.614 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:58:51 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:58:51 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:58:51 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:58:51.350 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:58:51 np0005548731 nova_compute[232433]: 2025-12-06 07:58:51.859 232437 DEBUG oslo_concurrency.lockutils [None req-7fce0135-cf8f-486d-abdc-1aa3e86cd2e6 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] Acquiring lock "27f553d7-b010-40ef-b0cb-42ff0c466354" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:58:51 np0005548731 nova_compute[232433]: 2025-12-06 07:58:51.859 232437 DEBUG oslo_concurrency.lockutils [None req-7fce0135-cf8f-486d-abdc-1aa3e86cd2e6 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] Lock "27f553d7-b010-40ef-b0cb-42ff0c466354" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:58:51 np0005548731 nova_compute[232433]: 2025-12-06 07:58:51.885 232437 DEBUG nova.compute.manager [None req-7fce0135-cf8f-486d-abdc-1aa3e86cd2e6 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] [instance: 27f553d7-b010-40ef-b0cb-42ff0c466354] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Dec  6 02:58:51 np0005548731 nova_compute[232433]: 2025-12-06 07:58:51.966 232437 DEBUG oslo_concurrency.lockutils [None req-7fce0135-cf8f-486d-abdc-1aa3e86cd2e6 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:58:51 np0005548731 nova_compute[232433]: 2025-12-06 07:58:51.967 232437 DEBUG oslo_concurrency.lockutils [None req-7fce0135-cf8f-486d-abdc-1aa3e86cd2e6 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:58:51 np0005548731 nova_compute[232433]: 2025-12-06 07:58:51.977 232437 DEBUG nova.virt.hardware [None req-7fce0135-cf8f-486d-abdc-1aa3e86cd2e6 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Dec  6 02:58:51 np0005548731 nova_compute[232433]: 2025-12-06 07:58:51.977 232437 INFO nova.compute.claims [None req-7fce0135-cf8f-486d-abdc-1aa3e86cd2e6 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] [instance: 27f553d7-b010-40ef-b0cb-42ff0c466354] Claim successful on node compute-2.ctlplane.example.com#033[00m
Dec  6 02:58:52 np0005548731 nova_compute[232433]: 2025-12-06 07:58:52.076 232437 DEBUG oslo_concurrency.processutils [None req-7fce0135-cf8f-486d-abdc-1aa3e86cd2e6 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:58:52 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 02:58:52 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1376359152' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 02:58:52 np0005548731 nova_compute[232433]: 2025-12-06 07:58:52.507 232437 DEBUG oslo_concurrency.processutils [None req-7fce0135-cf8f-486d-abdc-1aa3e86cd2e6 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.431s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:58:52 np0005548731 nova_compute[232433]: 2025-12-06 07:58:52.513 232437 DEBUG nova.compute.provider_tree [None req-7fce0135-cf8f-486d-abdc-1aa3e86cd2e6 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] Inventory has not changed in ProviderTree for provider: 6d00757a-082f-486d-ae84-869a2ba2e6e7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  6 02:58:52 np0005548731 nova_compute[232433]: 2025-12-06 07:58:52.531 232437 DEBUG nova.scheduler.client.report [None req-7fce0135-cf8f-486d-abdc-1aa3e86cd2e6 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] Inventory has not changed for provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  6 02:58:52 np0005548731 nova_compute[232433]: 2025-12-06 07:58:52.557 232437 DEBUG oslo_concurrency.lockutils [None req-7fce0135-cf8f-486d-abdc-1aa3e86cd2e6 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.590s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:58:52 np0005548731 nova_compute[232433]: 2025-12-06 07:58:52.557 232437 DEBUG nova.compute.manager [None req-7fce0135-cf8f-486d-abdc-1aa3e86cd2e6 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] [instance: 27f553d7-b010-40ef-b0cb-42ff0c466354] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Dec  6 02:58:52 np0005548731 nova_compute[232433]: 2025-12-06 07:58:52.606 232437 DEBUG nova.compute.manager [None req-7fce0135-cf8f-486d-abdc-1aa3e86cd2e6 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] [instance: 27f553d7-b010-40ef-b0cb-42ff0c466354] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Dec  6 02:58:52 np0005548731 nova_compute[232433]: 2025-12-06 07:58:52.606 232437 DEBUG nova.network.neutron [None req-7fce0135-cf8f-486d-abdc-1aa3e86cd2e6 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] [instance: 27f553d7-b010-40ef-b0cb-42ff0c466354] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Dec  6 02:58:52 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:58:52 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:58:52 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:58:52.617 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:58:52 np0005548731 nova_compute[232433]: 2025-12-06 07:58:52.639 232437 INFO nova.virt.libvirt.driver [None req-7fce0135-cf8f-486d-abdc-1aa3e86cd2e6 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] [instance: 27f553d7-b010-40ef-b0cb-42ff0c466354] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Dec  6 02:58:52 np0005548731 nova_compute[232433]: 2025-12-06 07:58:52.659 232437 DEBUG nova.compute.manager [None req-7fce0135-cf8f-486d-abdc-1aa3e86cd2e6 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] [instance: 27f553d7-b010-40ef-b0cb-42ff0c466354] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Dec  6 02:58:52 np0005548731 nova_compute[232433]: 2025-12-06 07:58:52.717 232437 INFO nova.virt.block_device [None req-7fce0135-cf8f-486d-abdc-1aa3e86cd2e6 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] [instance: 27f553d7-b010-40ef-b0cb-42ff0c466354] Booting with volume f1cb8c3c-521b-48db-b5f7-453fef5dd2fe at /dev/vda#033[00m
Dec  6 02:58:52 np0005548731 nova_compute[232433]: 2025-12-06 07:58:52.900 232437 DEBUG os_brick.utils [None req-7fce0135-cf8f-486d-abdc-1aa3e86cd2e6 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.102', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-2.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Dec  6 02:58:52 np0005548731 nova_compute[232433]: 2025-12-06 07:58:52.902 237736 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:58:52 np0005548731 nova_compute[232433]: 2025-12-06 07:58:52.913 237736 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.011s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:58:52 np0005548731 nova_compute[232433]: 2025-12-06 07:58:52.913 237736 DEBUG oslo.privsep.daemon [-] privsep: reply[28f36f39-ba5a-42e6-8fec-b029f9e03481]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:58:52 np0005548731 nova_compute[232433]: 2025-12-06 07:58:52.914 237736 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:58:52 np0005548731 nova_compute[232433]: 2025-12-06 07:58:52.922 237736 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:58:52 np0005548731 nova_compute[232433]: 2025-12-06 07:58:52.923 237736 DEBUG oslo.privsep.daemon [-] privsep: reply[804b9450-8320-473f-9d1b-fdf3b36c6a5a]: (4, ('InitiatorName=iqn.1994-05.com.redhat:63778d5959f0', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:58:52 np0005548731 nova_compute[232433]: 2025-12-06 07:58:52.924 237736 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:58:52 np0005548731 nova_compute[232433]: 2025-12-06 07:58:52.935 237736 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.010s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:58:52 np0005548731 nova_compute[232433]: 2025-12-06 07:58:52.935 237736 DEBUG oslo.privsep.daemon [-] privsep: reply[7e3c9684-9b67-4db1-91cc-d1ba981cb4dd]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:58:52 np0005548731 nova_compute[232433]: 2025-12-06 07:58:52.937 237736 DEBUG oslo.privsep.daemon [-] privsep: reply[6e8fd2a0-fb7f-49bb-99aa-56466f98158e]: (4, 'ec2492e2-e0d1-43d3-b74f-744a8272a0e6') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:58:52 np0005548731 nova_compute[232433]: 2025-12-06 07:58:52.937 232437 DEBUG oslo_concurrency.processutils [None req-7fce0135-cf8f-486d-abdc-1aa3e86cd2e6 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:58:52 np0005548731 nova_compute[232433]: 2025-12-06 07:58:52.963 232437 DEBUG oslo_concurrency.processutils [None req-7fce0135-cf8f-486d-abdc-1aa3e86cd2e6 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] CMD "nvme version" returned: 0 in 0.026s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:58:52 np0005548731 nova_compute[232433]: 2025-12-06 07:58:52.966 232437 DEBUG os_brick.initiator.connectors.lightos [None req-7fce0135-cf8f-486d-abdc-1aa3e86cd2e6 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Dec  6 02:58:52 np0005548731 nova_compute[232433]: 2025-12-06 07:58:52.966 232437 DEBUG os_brick.initiator.connectors.lightos [None req-7fce0135-cf8f-486d-abdc-1aa3e86cd2e6 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Dec  6 02:58:52 np0005548731 nova_compute[232433]: 2025-12-06 07:58:52.967 232437 DEBUG os_brick.initiator.connectors.lightos [None req-7fce0135-cf8f-486d-abdc-1aa3e86cd2e6 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:bf3e0a14-a5f8-4123-aa26-e7cad37b879a dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Dec  6 02:58:52 np0005548731 nova_compute[232433]: 2025-12-06 07:58:52.967 232437 DEBUG os_brick.utils [None req-7fce0135-cf8f-486d-abdc-1aa3e86cd2e6 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] <== get_connector_properties: return (65ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.102', 'host': 'compute-2.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:63778d5959f0', 'do_local_attach': False, 'nvme_hostid': 'bf3e0a14-a5f8-4123-aa26-e7cad37b879a', 'system uuid': 'ec2492e2-e0d1-43d3-b74f-744a8272a0e6', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:bf3e0a14-a5f8-4123-aa26-e7cad37b879a', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Dec  6 02:58:52 np0005548731 nova_compute[232433]: 2025-12-06 07:58:52.967 232437 DEBUG nova.virt.block_device [None req-7fce0135-cf8f-486d-abdc-1aa3e86cd2e6 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] [instance: 27f553d7-b010-40ef-b0cb-42ff0c466354] Updating existing volume attachment record: 5f015061-4496-477d-9e1d-627940a2de6c _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Dec  6 02:58:53 np0005548731 nova_compute[232433]: 2025-12-06 07:58:53.017 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:58:53 np0005548731 nova_compute[232433]: 2025-12-06 07:58:53.267 232437 DEBUG nova.policy [None req-7fce0135-cf8f-486d-abdc-1aa3e86cd2e6 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '98e657096e3f4b528cd461a3dd6a750e', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'c3c0564f8e9f4af9ae5b597a275c989f', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Dec  6 02:58:53 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:58:53 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:58:53 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:58:53.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:58:54 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:58:54 np0005548731 nova_compute[232433]: 2025-12-06 07:58:54.350 232437 DEBUG nova.compute.manager [None req-7fce0135-cf8f-486d-abdc-1aa3e86cd2e6 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] [instance: 27f553d7-b010-40ef-b0cb-42ff0c466354] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Dec  6 02:58:54 np0005548731 nova_compute[232433]: 2025-12-06 07:58:54.352 232437 DEBUG nova.virt.libvirt.driver [None req-7fce0135-cf8f-486d-abdc-1aa3e86cd2e6 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] [instance: 27f553d7-b010-40ef-b0cb-42ff0c466354] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Dec  6 02:58:54 np0005548731 nova_compute[232433]: 2025-12-06 07:58:54.353 232437 INFO nova.virt.libvirt.driver [None req-7fce0135-cf8f-486d-abdc-1aa3e86cd2e6 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] [instance: 27f553d7-b010-40ef-b0cb-42ff0c466354] Creating image(s)#033[00m
Dec  6 02:58:54 np0005548731 nova_compute[232433]: 2025-12-06 07:58:54.354 232437 DEBUG nova.virt.libvirt.driver [None req-7fce0135-cf8f-486d-abdc-1aa3e86cd2e6 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] [instance: 27f553d7-b010-40ef-b0cb-42ff0c466354] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859#033[00m
Dec  6 02:58:54 np0005548731 nova_compute[232433]: 2025-12-06 07:58:54.354 232437 DEBUG nova.virt.libvirt.driver [None req-7fce0135-cf8f-486d-abdc-1aa3e86cd2e6 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] [instance: 27f553d7-b010-40ef-b0cb-42ff0c466354] Ensure instance console log exists: /var/lib/nova/instances/27f553d7-b010-40ef-b0cb-42ff0c466354/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec  6 02:58:54 np0005548731 nova_compute[232433]: 2025-12-06 07:58:54.354 232437 DEBUG oslo_concurrency.lockutils [None req-7fce0135-cf8f-486d-abdc-1aa3e86cd2e6 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:58:54 np0005548731 nova_compute[232433]: 2025-12-06 07:58:54.355 232437 DEBUG oslo_concurrency.lockutils [None req-7fce0135-cf8f-486d-abdc-1aa3e86cd2e6 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:58:54 np0005548731 nova_compute[232433]: 2025-12-06 07:58:54.355 232437 DEBUG oslo_concurrency.lockutils [None req-7fce0135-cf8f-486d-abdc-1aa3e86cd2e6 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:58:54 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:58:54 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:58:54 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:58:54.621 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:58:54 np0005548731 nova_compute[232433]: 2025-12-06 07:58:54.958 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:58:55 np0005548731 nova_compute[232433]: 2025-12-06 07:58:55.289 232437 DEBUG nova.network.neutron [None req-7fce0135-cf8f-486d-abdc-1aa3e86cd2e6 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] [instance: 27f553d7-b010-40ef-b0cb-42ff0c466354] Successfully created port: 2b54b935-6ec8-4a68-a404-6be608d5b405 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Dec  6 02:58:55 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:58:55 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:58:55 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:58:55.356 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:58:56 np0005548731 nova_compute[232433]: 2025-12-06 07:58:56.305 232437 DEBUG nova.network.neutron [None req-7fce0135-cf8f-486d-abdc-1aa3e86cd2e6 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] [instance: 27f553d7-b010-40ef-b0cb-42ff0c466354] Successfully updated port: 2b54b935-6ec8-4a68-a404-6be608d5b405 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Dec  6 02:58:56 np0005548731 nova_compute[232433]: 2025-12-06 07:58:56.322 232437 DEBUG oslo_concurrency.lockutils [None req-7fce0135-cf8f-486d-abdc-1aa3e86cd2e6 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] Acquiring lock "refresh_cache-27f553d7-b010-40ef-b0cb-42ff0c466354" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 02:58:56 np0005548731 nova_compute[232433]: 2025-12-06 07:58:56.322 232437 DEBUG oslo_concurrency.lockutils [None req-7fce0135-cf8f-486d-abdc-1aa3e86cd2e6 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] Acquired lock "refresh_cache-27f553d7-b010-40ef-b0cb-42ff0c466354" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 02:58:56 np0005548731 nova_compute[232433]: 2025-12-06 07:58:56.323 232437 DEBUG nova.network.neutron [None req-7fce0135-cf8f-486d-abdc-1aa3e86cd2e6 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] [instance: 27f553d7-b010-40ef-b0cb-42ff0c466354] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec  6 02:58:56 np0005548731 nova_compute[232433]: 2025-12-06 07:58:56.604 232437 DEBUG nova.compute.manager [req-236dc1fa-0567-4fb6-950f-b168ed298bf4 req-1c8d7688-7606-45a2-b9c2-ed14f5fa27d2 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 27f553d7-b010-40ef-b0cb-42ff0c466354] Received event network-changed-2b54b935-6ec8-4a68-a404-6be608d5b405 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:58:56 np0005548731 nova_compute[232433]: 2025-12-06 07:58:56.605 232437 DEBUG nova.compute.manager [req-236dc1fa-0567-4fb6-950f-b168ed298bf4 req-1c8d7688-7606-45a2-b9c2-ed14f5fa27d2 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 27f553d7-b010-40ef-b0cb-42ff0c466354] Refreshing instance network info cache due to event network-changed-2b54b935-6ec8-4a68-a404-6be608d5b405. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  6 02:58:56 np0005548731 nova_compute[232433]: 2025-12-06 07:58:56.605 232437 DEBUG oslo_concurrency.lockutils [req-236dc1fa-0567-4fb6-950f-b168ed298bf4 req-1c8d7688-7606-45a2-b9c2-ed14f5fa27d2 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-27f553d7-b010-40ef-b0cb-42ff0c466354" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 02:58:56 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:58:56 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:58:56 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:58:56.624 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:58:56 np0005548731 nova_compute[232433]: 2025-12-06 07:58:56.699 232437 DEBUG nova.network.neutron [None req-7fce0135-cf8f-486d-abdc-1aa3e86cd2e6 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] [instance: 27f553d7-b010-40ef-b0cb-42ff0c466354] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec  6 02:58:57 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:58:57 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 02:58:57 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:58:57.358 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 02:58:57 np0005548731 nova_compute[232433]: 2025-12-06 07:58:57.709 232437 DEBUG nova.network.neutron [None req-7fce0135-cf8f-486d-abdc-1aa3e86cd2e6 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] [instance: 27f553d7-b010-40ef-b0cb-42ff0c466354] Updating instance_info_cache with network_info: [{"id": "2b54b935-6ec8-4a68-a404-6be608d5b405", "address": "fa:16:3e:15:9d:85", "network": {"id": "8caa40db-27da-43ab-86ca-042284636e71", "bridge": "br-int", "label": "tempest-TestShelveInstance-1111687365-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c3c0564f8e9f4af9ae5b597a275c989f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2b54b935-6e", "ovs_interfaceid": "2b54b935-6ec8-4a68-a404-6be608d5b405", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 02:58:57 np0005548731 nova_compute[232433]: 2025-12-06 07:58:57.733 232437 DEBUG oslo_concurrency.lockutils [None req-7fce0135-cf8f-486d-abdc-1aa3e86cd2e6 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] Releasing lock "refresh_cache-27f553d7-b010-40ef-b0cb-42ff0c466354" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 02:58:57 np0005548731 nova_compute[232433]: 2025-12-06 07:58:57.733 232437 DEBUG nova.compute.manager [None req-7fce0135-cf8f-486d-abdc-1aa3e86cd2e6 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] [instance: 27f553d7-b010-40ef-b0cb-42ff0c466354] Instance network_info: |[{"id": "2b54b935-6ec8-4a68-a404-6be608d5b405", "address": "fa:16:3e:15:9d:85", "network": {"id": "8caa40db-27da-43ab-86ca-042284636e71", "bridge": "br-int", "label": "tempest-TestShelveInstance-1111687365-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c3c0564f8e9f4af9ae5b597a275c989f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2b54b935-6e", "ovs_interfaceid": "2b54b935-6ec8-4a68-a404-6be608d5b405", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Dec  6 02:58:57 np0005548731 nova_compute[232433]: 2025-12-06 07:58:57.734 232437 DEBUG oslo_concurrency.lockutils [req-236dc1fa-0567-4fb6-950f-b168ed298bf4 req-1c8d7688-7606-45a2-b9c2-ed14f5fa27d2 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-27f553d7-b010-40ef-b0cb-42ff0c466354" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 02:58:57 np0005548731 nova_compute[232433]: 2025-12-06 07:58:57.734 232437 DEBUG nova.network.neutron [req-236dc1fa-0567-4fb6-950f-b168ed298bf4 req-1c8d7688-7606-45a2-b9c2-ed14f5fa27d2 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 27f553d7-b010-40ef-b0cb-42ff0c466354] Refreshing network info cache for port 2b54b935-6ec8-4a68-a404-6be608d5b405 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  6 02:58:57 np0005548731 nova_compute[232433]: 2025-12-06 07:58:57.736 232437 DEBUG nova.virt.libvirt.driver [None req-7fce0135-cf8f-486d-abdc-1aa3e86cd2e6 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] [instance: 27f553d7-b010-40ef-b0cb-42ff0c466354] Start _get_guest_xml network_info=[{"id": "2b54b935-6ec8-4a68-a404-6be608d5b405", "address": "fa:16:3e:15:9d:85", "network": {"id": "8caa40db-27da-43ab-86ca-042284636e71", "bridge": "br-int", "label": "tempest-TestShelveInstance-1111687365-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c3c0564f8e9f4af9ae5b597a275c989f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2b54b935-6e", "ovs_interfaceid": "2b54b935-6ec8-4a68-a404-6be608d5b405", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': 
'/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'guest_format': None, 'device_type': 'disk', 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-f1cb8c3c-521b-48db-b5f7-453fef5dd2fe', 'hosts': ['192.168.122.100', '192.168.122.102', '192.168.122.101'], 'ports': ['6789', '6789', '6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': 'f1cb8c3c-521b-48db-b5f7-453fef5dd2fe', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': '27f553d7-b010-40ef-b0cb-42ff0c466354', 'attached_at': '', 'detached_at': '', 'volume_id': 'f1cb8c3c-521b-48db-b5f7-453fef5dd2fe', 'serial': 'f1cb8c3c-521b-48db-b5f7-453fef5dd2fe'}, 'disk_bus': 'virtio', 'boot_index': 0, 'delete_on_termination': True, 'mount_device': '/dev/vda', 'attachment_id': '5f015061-4496-477d-9e1d-627940a2de6c', 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec  6 02:58:57 np0005548731 nova_compute[232433]: 2025-12-06 07:58:57.741 232437 WARNING nova.virt.libvirt.driver [None req-7fce0135-cf8f-486d-abdc-1aa3e86cd2e6 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  6 02:58:57 np0005548731 nova_compute[232433]: 2025-12-06 07:58:57.746 232437 DEBUG nova.virt.libvirt.host [None req-7fce0135-cf8f-486d-abdc-1aa3e86cd2e6 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec  6 02:58:57 np0005548731 nova_compute[232433]: 2025-12-06 07:58:57.746 232437 DEBUG nova.virt.libvirt.host [None req-7fce0135-cf8f-486d-abdc-1aa3e86cd2e6 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec  6 02:58:57 np0005548731 nova_compute[232433]: 2025-12-06 07:58:57.755 232437 DEBUG nova.virt.libvirt.host [None req-7fce0135-cf8f-486d-abdc-1aa3e86cd2e6 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec  6 02:58:57 np0005548731 nova_compute[232433]: 2025-12-06 07:58:57.755 232437 DEBUG nova.virt.libvirt.host [None req-7fce0135-cf8f-486d-abdc-1aa3e86cd2e6 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec  6 02:58:57 np0005548731 nova_compute[232433]: 2025-12-06 07:58:57.756 232437 DEBUG nova.virt.libvirt.driver [None req-7fce0135-cf8f-486d-abdc-1aa3e86cd2e6 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec  6 02:58:57 np0005548731 nova_compute[232433]: 2025-12-06 07:58:57.757 232437 DEBUG nova.virt.hardware [None req-7fce0135-cf8f-486d-abdc-1aa3e86cd2e6 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-06T06:56:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='25848a18-11d9-4f11-80b5-5d005675c76d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec  6 02:58:57 np0005548731 nova_compute[232433]: 2025-12-06 07:58:57.757 232437 DEBUG nova.virt.hardware [None req-7fce0135-cf8f-486d-abdc-1aa3e86cd2e6 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec  6 02:58:57 np0005548731 nova_compute[232433]: 2025-12-06 07:58:57.757 232437 DEBUG nova.virt.hardware [None req-7fce0135-cf8f-486d-abdc-1aa3e86cd2e6 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec  6 02:58:57 np0005548731 nova_compute[232433]: 2025-12-06 07:58:57.758 232437 DEBUG nova.virt.hardware [None req-7fce0135-cf8f-486d-abdc-1aa3e86cd2e6 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec  6 02:58:57 np0005548731 nova_compute[232433]: 2025-12-06 07:58:57.758 232437 DEBUG nova.virt.hardware [None req-7fce0135-cf8f-486d-abdc-1aa3e86cd2e6 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec  6 02:58:57 np0005548731 nova_compute[232433]: 2025-12-06 07:58:57.758 232437 DEBUG nova.virt.hardware [None req-7fce0135-cf8f-486d-abdc-1aa3e86cd2e6 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec  6 02:58:57 np0005548731 nova_compute[232433]: 2025-12-06 07:58:57.758 232437 DEBUG nova.virt.hardware [None req-7fce0135-cf8f-486d-abdc-1aa3e86cd2e6 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec  6 02:58:57 np0005548731 nova_compute[232433]: 2025-12-06 07:58:57.759 232437 DEBUG nova.virt.hardware [None req-7fce0135-cf8f-486d-abdc-1aa3e86cd2e6 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec  6 02:58:57 np0005548731 nova_compute[232433]: 2025-12-06 07:58:57.759 232437 DEBUG nova.virt.hardware [None req-7fce0135-cf8f-486d-abdc-1aa3e86cd2e6 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec  6 02:58:57 np0005548731 nova_compute[232433]: 2025-12-06 07:58:57.759 232437 DEBUG nova.virt.hardware [None req-7fce0135-cf8f-486d-abdc-1aa3e86cd2e6 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec  6 02:58:57 np0005548731 nova_compute[232433]: 2025-12-06 07:58:57.759 232437 DEBUG nova.virt.hardware [None req-7fce0135-cf8f-486d-abdc-1aa3e86cd2e6 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec  6 02:58:57 np0005548731 nova_compute[232433]: 2025-12-06 07:58:57.786 232437 DEBUG nova.storage.rbd_utils [None req-7fce0135-cf8f-486d-abdc-1aa3e86cd2e6 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] rbd image 27f553d7-b010-40ef-b0cb-42ff0c466354_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:58:57 np0005548731 nova_compute[232433]: 2025-12-06 07:58:57.789 232437 DEBUG oslo_concurrency.processutils [None req-7fce0135-cf8f-486d-abdc-1aa3e86cd2e6 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:58:58 np0005548731 nova_compute[232433]: 2025-12-06 07:58:58.018 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:58:58 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  6 02:58:58 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3401036419' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  6 02:58:58 np0005548731 nova_compute[232433]: 2025-12-06 07:58:58.222 232437 DEBUG oslo_concurrency.processutils [None req-7fce0135-cf8f-486d-abdc-1aa3e86cd2e6 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.432s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:58:58 np0005548731 nova_compute[232433]: 2025-12-06 07:58:58.292 232437 DEBUG nova.virt.libvirt.vif [None req-7fce0135-cf8f-486d-abdc-1aa3e86cd2e6 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T07:58:50Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestShelveInstance-server-1733328147',display_name='tempest-TestShelveInstance-server-1733328147',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-testshelveinstance-server-1733328147',id=180,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBM4sXJH069UiaxcddlkMS95GRWWK+3yMzXihiGRU/mHbPKO5jPcAw1Vlvstr18SgoFXBRYxyHafuDIwfD1IsMDXWkERYPnz3TEHdLWZin3OQEFfVSfeX8bcZN+wdDBDnqQ==',key_name='tempest-TestShelveInstance-1084557310',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c3c0564f8e9f4af9ae5b597a275c989f',ramdisk_id='',reservation_id='r-hmpcrqao',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestShelveInstance-1863009913',owner_user_name='tempest-TestShelveInstance-1863009913-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:58:52Z,user_data=None,user_id='98e657096e3f4b528cd461a3dd6a750e',uuid=27f553d7-b010-40ef-b0cb-42ff0c466354,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2b54b935-6ec8-4a68-a404-6be608d5b405", "address": "fa:16:3e:15:9d:85", "network": {"id": "8caa40db-27da-43ab-86ca-042284636e71", "bridge": "br-int", "label": "tempest-TestShelveInstance-1111687365-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": 
"c3c0564f8e9f4af9ae5b597a275c989f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2b54b935-6e", "ovs_interfaceid": "2b54b935-6ec8-4a68-a404-6be608d5b405", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec  6 02:58:58 np0005548731 nova_compute[232433]: 2025-12-06 07:58:58.292 232437 DEBUG nova.network.os_vif_util [None req-7fce0135-cf8f-486d-abdc-1aa3e86cd2e6 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] Converting VIF {"id": "2b54b935-6ec8-4a68-a404-6be608d5b405", "address": "fa:16:3e:15:9d:85", "network": {"id": "8caa40db-27da-43ab-86ca-042284636e71", "bridge": "br-int", "label": "tempest-TestShelveInstance-1111687365-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c3c0564f8e9f4af9ae5b597a275c989f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2b54b935-6e", "ovs_interfaceid": "2b54b935-6ec8-4a68-a404-6be608d5b405", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 02:58:58 np0005548731 nova_compute[232433]: 2025-12-06 07:58:58.293 232437 DEBUG nova.network.os_vif_util [None req-7fce0135-cf8f-486d-abdc-1aa3e86cd2e6 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:15:9d:85,bridge_name='br-int',has_traffic_filtering=True,id=2b54b935-6ec8-4a68-a404-6be608d5b405,network=Network(8caa40db-27da-43ab-86ca-042284636e71),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2b54b935-6e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 02:58:58 np0005548731 nova_compute[232433]: 2025-12-06 07:58:58.294 232437 DEBUG nova.objects.instance [None req-7fce0135-cf8f-486d-abdc-1aa3e86cd2e6 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] Lazy-loading 'pci_devices' on Instance uuid 27f553d7-b010-40ef-b0cb-42ff0c466354 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:58:58 np0005548731 nova_compute[232433]: 2025-12-06 07:58:58.311 232437 DEBUG nova.virt.libvirt.driver [None req-7fce0135-cf8f-486d-abdc-1aa3e86cd2e6 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] [instance: 27f553d7-b010-40ef-b0cb-42ff0c466354] End _get_guest_xml xml=<domain type="kvm">
Dec  6 02:58:58 np0005548731 nova_compute[232433]:  <uuid>27f553d7-b010-40ef-b0cb-42ff0c466354</uuid>
Dec  6 02:58:58 np0005548731 nova_compute[232433]:  <name>instance-000000b4</name>
Dec  6 02:58:58 np0005548731 nova_compute[232433]:  <memory>131072</memory>
Dec  6 02:58:58 np0005548731 nova_compute[232433]:  <vcpu>1</vcpu>
Dec  6 02:58:58 np0005548731 nova_compute[232433]:  <metadata>
Dec  6 02:58:58 np0005548731 nova_compute[232433]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec  6 02:58:58 np0005548731 nova_compute[232433]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec  6 02:58:58 np0005548731 nova_compute[232433]:      <nova:name>tempest-TestShelveInstance-server-1733328147</nova:name>
Dec  6 02:58:58 np0005548731 nova_compute[232433]:      <nova:creationTime>2025-12-06 07:58:57</nova:creationTime>
Dec  6 02:58:58 np0005548731 nova_compute[232433]:      <nova:flavor name="m1.nano">
Dec  6 02:58:58 np0005548731 nova_compute[232433]:        <nova:memory>128</nova:memory>
Dec  6 02:58:58 np0005548731 nova_compute[232433]:        <nova:disk>1</nova:disk>
Dec  6 02:58:58 np0005548731 nova_compute[232433]:        <nova:swap>0</nova:swap>
Dec  6 02:58:58 np0005548731 nova_compute[232433]:        <nova:ephemeral>0</nova:ephemeral>
Dec  6 02:58:58 np0005548731 nova_compute[232433]:        <nova:vcpus>1</nova:vcpus>
Dec  6 02:58:58 np0005548731 nova_compute[232433]:      </nova:flavor>
Dec  6 02:58:58 np0005548731 nova_compute[232433]:      <nova:owner>
Dec  6 02:58:58 np0005548731 nova_compute[232433]:        <nova:user uuid="98e657096e3f4b528cd461a3dd6a750e">tempest-TestShelveInstance-1863009913-project-member</nova:user>
Dec  6 02:58:58 np0005548731 nova_compute[232433]:        <nova:project uuid="c3c0564f8e9f4af9ae5b597a275c989f">tempest-TestShelveInstance-1863009913</nova:project>
Dec  6 02:58:58 np0005548731 nova_compute[232433]:      </nova:owner>
Dec  6 02:58:58 np0005548731 nova_compute[232433]:      <nova:ports>
Dec  6 02:58:58 np0005548731 nova_compute[232433]:        <nova:port uuid="2b54b935-6ec8-4a68-a404-6be608d5b405">
Dec  6 02:58:58 np0005548731 nova_compute[232433]:          <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Dec  6 02:58:58 np0005548731 nova_compute[232433]:        </nova:port>
Dec  6 02:58:58 np0005548731 nova_compute[232433]:      </nova:ports>
Dec  6 02:58:58 np0005548731 nova_compute[232433]:    </nova:instance>
Dec  6 02:58:58 np0005548731 nova_compute[232433]:  </metadata>
Dec  6 02:58:58 np0005548731 nova_compute[232433]:  <sysinfo type="smbios">
Dec  6 02:58:58 np0005548731 nova_compute[232433]:    <system>
Dec  6 02:58:58 np0005548731 nova_compute[232433]:      <entry name="manufacturer">RDO</entry>
Dec  6 02:58:58 np0005548731 nova_compute[232433]:      <entry name="product">OpenStack Compute</entry>
Dec  6 02:58:58 np0005548731 nova_compute[232433]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec  6 02:58:58 np0005548731 nova_compute[232433]:      <entry name="serial">27f553d7-b010-40ef-b0cb-42ff0c466354</entry>
Dec  6 02:58:58 np0005548731 nova_compute[232433]:      <entry name="uuid">27f553d7-b010-40ef-b0cb-42ff0c466354</entry>
Dec  6 02:58:58 np0005548731 nova_compute[232433]:      <entry name="family">Virtual Machine</entry>
Dec  6 02:58:58 np0005548731 nova_compute[232433]:    </system>
Dec  6 02:58:58 np0005548731 nova_compute[232433]:  </sysinfo>
Dec  6 02:58:58 np0005548731 nova_compute[232433]:  <os>
Dec  6 02:58:58 np0005548731 nova_compute[232433]:    <type arch="x86_64" machine="q35">hvm</type>
Dec  6 02:58:58 np0005548731 nova_compute[232433]:    <boot dev="hd"/>
Dec  6 02:58:58 np0005548731 nova_compute[232433]:    <smbios mode="sysinfo"/>
Dec  6 02:58:58 np0005548731 nova_compute[232433]:  </os>
Dec  6 02:58:58 np0005548731 nova_compute[232433]:  <features>
Dec  6 02:58:58 np0005548731 nova_compute[232433]:    <acpi/>
Dec  6 02:58:58 np0005548731 nova_compute[232433]:    <apic/>
Dec  6 02:58:58 np0005548731 nova_compute[232433]:    <vmcoreinfo/>
Dec  6 02:58:58 np0005548731 nova_compute[232433]:  </features>
Dec  6 02:58:58 np0005548731 nova_compute[232433]:  <clock offset="utc">
Dec  6 02:58:58 np0005548731 nova_compute[232433]:    <timer name="pit" tickpolicy="delay"/>
Dec  6 02:58:58 np0005548731 nova_compute[232433]:    <timer name="rtc" tickpolicy="catchup"/>
Dec  6 02:58:58 np0005548731 nova_compute[232433]:    <timer name="hpet" present="no"/>
Dec  6 02:58:58 np0005548731 nova_compute[232433]:  </clock>
Dec  6 02:58:58 np0005548731 nova_compute[232433]:  <cpu mode="custom" match="exact">
Dec  6 02:58:58 np0005548731 nova_compute[232433]:    <model>Nehalem</model>
Dec  6 02:58:58 np0005548731 nova_compute[232433]:    <topology sockets="1" cores="1" threads="1"/>
Dec  6 02:58:58 np0005548731 nova_compute[232433]:  </cpu>
Dec  6 02:58:58 np0005548731 nova_compute[232433]:  <devices>
Dec  6 02:58:58 np0005548731 nova_compute[232433]:    <disk type="network" device="cdrom">
Dec  6 02:58:58 np0005548731 nova_compute[232433]:      <driver type="raw" cache="none"/>
Dec  6 02:58:58 np0005548731 nova_compute[232433]:      <source protocol="rbd" name="vms/27f553d7-b010-40ef-b0cb-42ff0c466354_disk.config">
Dec  6 02:58:58 np0005548731 nova_compute[232433]:        <host name="192.168.122.100" port="6789"/>
Dec  6 02:58:58 np0005548731 nova_compute[232433]:        <host name="192.168.122.102" port="6789"/>
Dec  6 02:58:58 np0005548731 nova_compute[232433]:        <host name="192.168.122.101" port="6789"/>
Dec  6 02:58:58 np0005548731 nova_compute[232433]:      </source>
Dec  6 02:58:58 np0005548731 nova_compute[232433]:      <auth username="openstack">
Dec  6 02:58:58 np0005548731 nova_compute[232433]:        <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec  6 02:58:58 np0005548731 nova_compute[232433]:      </auth>
Dec  6 02:58:58 np0005548731 nova_compute[232433]:      <target dev="sda" bus="sata"/>
Dec  6 02:58:58 np0005548731 nova_compute[232433]:    </disk>
Dec  6 02:58:58 np0005548731 nova_compute[232433]:    <disk type="network" device="disk">
Dec  6 02:58:58 np0005548731 nova_compute[232433]:      <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Dec  6 02:58:58 np0005548731 nova_compute[232433]:      <source protocol="rbd" name="volumes/volume-f1cb8c3c-521b-48db-b5f7-453fef5dd2fe">
Dec  6 02:58:58 np0005548731 nova_compute[232433]:        <host name="192.168.122.100" port="6789"/>
Dec  6 02:58:58 np0005548731 nova_compute[232433]:        <host name="192.168.122.102" port="6789"/>
Dec  6 02:58:58 np0005548731 nova_compute[232433]:        <host name="192.168.122.101" port="6789"/>
Dec  6 02:58:58 np0005548731 nova_compute[232433]:      </source>
Dec  6 02:58:58 np0005548731 nova_compute[232433]:      <auth username="openstack">
Dec  6 02:58:58 np0005548731 nova_compute[232433]:        <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec  6 02:58:58 np0005548731 nova_compute[232433]:      </auth>
Dec  6 02:58:58 np0005548731 nova_compute[232433]:      <target dev="vda" bus="virtio"/>
Dec  6 02:58:58 np0005548731 nova_compute[232433]:      <serial>f1cb8c3c-521b-48db-b5f7-453fef5dd2fe</serial>
Dec  6 02:58:58 np0005548731 nova_compute[232433]:    </disk>
Dec  6 02:58:58 np0005548731 nova_compute[232433]:    <interface type="ethernet">
Dec  6 02:58:58 np0005548731 nova_compute[232433]:      <mac address="fa:16:3e:15:9d:85"/>
Dec  6 02:58:58 np0005548731 nova_compute[232433]:      <model type="virtio"/>
Dec  6 02:58:58 np0005548731 nova_compute[232433]:      <driver name="vhost" rx_queue_size="512"/>
Dec  6 02:58:58 np0005548731 nova_compute[232433]:      <mtu size="1442"/>
Dec  6 02:58:58 np0005548731 nova_compute[232433]:      <target dev="tap2b54b935-6e"/>
Dec  6 02:58:58 np0005548731 nova_compute[232433]:    </interface>
Dec  6 02:58:58 np0005548731 nova_compute[232433]:    <serial type="pty">
Dec  6 02:58:58 np0005548731 nova_compute[232433]:      <log file="/var/lib/nova/instances/27f553d7-b010-40ef-b0cb-42ff0c466354/console.log" append="off"/>
Dec  6 02:58:58 np0005548731 nova_compute[232433]:    </serial>
Dec  6 02:58:58 np0005548731 nova_compute[232433]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec  6 02:58:58 np0005548731 nova_compute[232433]:    <video>
Dec  6 02:58:58 np0005548731 nova_compute[232433]:      <model type="virtio"/>
Dec  6 02:58:58 np0005548731 nova_compute[232433]:    </video>
Dec  6 02:58:58 np0005548731 nova_compute[232433]:    <input type="tablet" bus="usb"/>
Dec  6 02:58:58 np0005548731 nova_compute[232433]:    <rng model="virtio">
Dec  6 02:58:58 np0005548731 nova_compute[232433]:      <backend model="random">/dev/urandom</backend>
Dec  6 02:58:58 np0005548731 nova_compute[232433]:    </rng>
Dec  6 02:58:58 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root"/>
Dec  6 02:58:58 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:58:58 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:58:58 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:58:58 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:58:58 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:58:58 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:58:58 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:58:58 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:58:58 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:58:58 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:58:58 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:58:58 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:58:58 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:58:58 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:58:58 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:58:58 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:58:58 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:58:58 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:58:58 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:58:58 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:58:58 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:58:58 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:58:58 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:58:58 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 02:58:58 np0005548731 nova_compute[232433]:    <controller type="usb" index="0"/>
Dec  6 02:58:58 np0005548731 nova_compute[232433]:    <memballoon model="virtio">
Dec  6 02:58:58 np0005548731 nova_compute[232433]:      <stats period="10"/>
Dec  6 02:58:58 np0005548731 nova_compute[232433]:    </memballoon>
Dec  6 02:58:58 np0005548731 nova_compute[232433]:  </devices>
Dec  6 02:58:58 np0005548731 nova_compute[232433]: </domain>
Dec  6 02:58:58 np0005548731 nova_compute[232433]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec  6 02:58:58 np0005548731 nova_compute[232433]: 2025-12-06 07:58:58.312 232437 DEBUG nova.compute.manager [None req-7fce0135-cf8f-486d-abdc-1aa3e86cd2e6 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] [instance: 27f553d7-b010-40ef-b0cb-42ff0c466354] Preparing to wait for external event network-vif-plugged-2b54b935-6ec8-4a68-a404-6be608d5b405 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec  6 02:58:58 np0005548731 nova_compute[232433]: 2025-12-06 07:58:58.312 232437 DEBUG oslo_concurrency.lockutils [None req-7fce0135-cf8f-486d-abdc-1aa3e86cd2e6 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] Acquiring lock "27f553d7-b010-40ef-b0cb-42ff0c466354-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:58:58 np0005548731 nova_compute[232433]: 2025-12-06 07:58:58.313 232437 DEBUG oslo_concurrency.lockutils [None req-7fce0135-cf8f-486d-abdc-1aa3e86cd2e6 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] Lock "27f553d7-b010-40ef-b0cb-42ff0c466354-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:58:58 np0005548731 nova_compute[232433]: 2025-12-06 07:58:58.313 232437 DEBUG oslo_concurrency.lockutils [None req-7fce0135-cf8f-486d-abdc-1aa3e86cd2e6 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] Lock "27f553d7-b010-40ef-b0cb-42ff0c466354-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:58:58 np0005548731 nova_compute[232433]: 2025-12-06 07:58:58.314 232437 DEBUG nova.virt.libvirt.vif [None req-7fce0135-cf8f-486d-abdc-1aa3e86cd2e6 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T07:58:50Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestShelveInstance-server-1733328147',display_name='tempest-TestShelveInstance-server-1733328147',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-testshelveinstance-server-1733328147',id=180,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBM4sXJH069UiaxcddlkMS95GRWWK+3yMzXihiGRU/mHbPKO5jPcAw1Vlvstr18SgoFXBRYxyHafuDIwfD1IsMDXWkERYPnz3TEHdLWZin3OQEFfVSfeX8bcZN+wdDBDnqQ==',key_name='tempest-TestShelveInstance-1084557310',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c3c0564f8e9f4af9ae5b597a275c989f',ramdisk_id='',reservation_id='r-hmpcrqao',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestShelveInstance-1863009913',owner_user_name='tempest-TestShelveInstance-1863009913-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:58:52Z,user_data=None,user_id='98e657096e3f4b528cd461a3dd6a750e',uuid=27f553d7-b010-40ef-b0cb-42ff0c466354,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2b54b935-6ec8-4a68-a404-6be608d5b405", "address": "fa:16:3e:15:9d:85", "network": {"id": "8caa40db-27da-43ab-86ca-042284636e71", "bridge": "br-int", "label": "tempest-TestShelveInstance-1111687365-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": 
"c3c0564f8e9f4af9ae5b597a275c989f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2b54b935-6e", "ovs_interfaceid": "2b54b935-6ec8-4a68-a404-6be608d5b405", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec  6 02:58:58 np0005548731 nova_compute[232433]: 2025-12-06 07:58:58.316 232437 DEBUG nova.network.os_vif_util [None req-7fce0135-cf8f-486d-abdc-1aa3e86cd2e6 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] Converting VIF {"id": "2b54b935-6ec8-4a68-a404-6be608d5b405", "address": "fa:16:3e:15:9d:85", "network": {"id": "8caa40db-27da-43ab-86ca-042284636e71", "bridge": "br-int", "label": "tempest-TestShelveInstance-1111687365-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c3c0564f8e9f4af9ae5b597a275c989f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2b54b935-6e", "ovs_interfaceid": "2b54b935-6ec8-4a68-a404-6be608d5b405", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 02:58:58 np0005548731 nova_compute[232433]: 2025-12-06 07:58:58.317 232437 DEBUG nova.network.os_vif_util [None req-7fce0135-cf8f-486d-abdc-1aa3e86cd2e6 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:15:9d:85,bridge_name='br-int',has_traffic_filtering=True,id=2b54b935-6ec8-4a68-a404-6be608d5b405,network=Network(8caa40db-27da-43ab-86ca-042284636e71),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2b54b935-6e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 02:58:58 np0005548731 nova_compute[232433]: 2025-12-06 07:58:58.318 232437 DEBUG os_vif [None req-7fce0135-cf8f-486d-abdc-1aa3e86cd2e6 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:15:9d:85,bridge_name='br-int',has_traffic_filtering=True,id=2b54b935-6ec8-4a68-a404-6be608d5b405,network=Network(8caa40db-27da-43ab-86ca-042284636e71),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2b54b935-6e') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec  6 02:58:58 np0005548731 nova_compute[232433]: 2025-12-06 07:58:58.319 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:58:58 np0005548731 nova_compute[232433]: 2025-12-06 07:58:58.320 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:58:58 np0005548731 nova_compute[232433]: 2025-12-06 07:58:58.320 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  6 02:58:58 np0005548731 nova_compute[232433]: 2025-12-06 07:58:58.325 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:58:58 np0005548731 nova_compute[232433]: 2025-12-06 07:58:58.326 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2b54b935-6e, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:58:58 np0005548731 nova_compute[232433]: 2025-12-06 07:58:58.327 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap2b54b935-6e, col_values=(('external_ids', {'iface-id': '2b54b935-6ec8-4a68-a404-6be608d5b405', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:15:9d:85', 'vm-uuid': '27f553d7-b010-40ef-b0cb-42ff0c466354'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:58:58 np0005548731 nova_compute[232433]: 2025-12-06 07:58:58.330 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:58:58 np0005548731 NetworkManager[49182]: <info>  [1765007938.3304] manager: (tap2b54b935-6e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/411)
Dec  6 02:58:58 np0005548731 nova_compute[232433]: 2025-12-06 07:58:58.337 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:58:58 np0005548731 nova_compute[232433]: 2025-12-06 07:58:58.340 232437 INFO os_vif [None req-7fce0135-cf8f-486d-abdc-1aa3e86cd2e6 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:15:9d:85,bridge_name='br-int',has_traffic_filtering=True,id=2b54b935-6ec8-4a68-a404-6be608d5b405,network=Network(8caa40db-27da-43ab-86ca-042284636e71),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2b54b935-6e')#033[00m
Dec  6 02:58:58 np0005548731 nova_compute[232433]: 2025-12-06 07:58:58.490 232437 DEBUG nova.virt.libvirt.driver [None req-7fce0135-cf8f-486d-abdc-1aa3e86cd2e6 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  6 02:58:58 np0005548731 nova_compute[232433]: 2025-12-06 07:58:58.491 232437 DEBUG nova.virt.libvirt.driver [None req-7fce0135-cf8f-486d-abdc-1aa3e86cd2e6 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  6 02:58:58 np0005548731 nova_compute[232433]: 2025-12-06 07:58:58.491 232437 DEBUG nova.virt.libvirt.driver [None req-7fce0135-cf8f-486d-abdc-1aa3e86cd2e6 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] No VIF found with MAC fa:16:3e:15:9d:85, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec  6 02:58:58 np0005548731 nova_compute[232433]: 2025-12-06 07:58:58.492 232437 INFO nova.virt.libvirt.driver [None req-7fce0135-cf8f-486d-abdc-1aa3e86cd2e6 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] [instance: 27f553d7-b010-40ef-b0cb-42ff0c466354] Using config drive#033[00m
Dec  6 02:58:58 np0005548731 nova_compute[232433]: 2025-12-06 07:58:58.516 232437 DEBUG nova.storage.rbd_utils [None req-7fce0135-cf8f-486d-abdc-1aa3e86cd2e6 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] rbd image 27f553d7-b010-40ef-b0cb-42ff0c466354_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:58:58 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:58:58 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:58:58 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:58:58.627 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:58:59 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:58:59 np0005548731 nova_compute[232433]: 2025-12-06 07:58:59.260 232437 INFO nova.virt.libvirt.driver [None req-7fce0135-cf8f-486d-abdc-1aa3e86cd2e6 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] [instance: 27f553d7-b010-40ef-b0cb-42ff0c466354] Creating config drive at /var/lib/nova/instances/27f553d7-b010-40ef-b0cb-42ff0c466354/disk.config#033[00m
Dec  6 02:58:59 np0005548731 nova_compute[232433]: 2025-12-06 07:58:59.265 232437 DEBUG oslo_concurrency.processutils [None req-7fce0135-cf8f-486d-abdc-1aa3e86cd2e6 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/27f553d7-b010-40ef-b0cb-42ff0c466354/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpxkgvtewr execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:58:59 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:58:59 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:58:59 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:58:59.360 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:58:59 np0005548731 nova_compute[232433]: 2025-12-06 07:58:59.408 232437 DEBUG oslo_concurrency.processutils [None req-7fce0135-cf8f-486d-abdc-1aa3e86cd2e6 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/27f553d7-b010-40ef-b0cb-42ff0c466354/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpxkgvtewr" returned: 0 in 0.143s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:58:59 np0005548731 nova_compute[232433]: 2025-12-06 07:58:59.435 232437 DEBUG nova.storage.rbd_utils [None req-7fce0135-cf8f-486d-abdc-1aa3e86cd2e6 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] rbd image 27f553d7-b010-40ef-b0cb-42ff0c466354_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 02:58:59 np0005548731 nova_compute[232433]: 2025-12-06 07:58:59.439 232437 DEBUG oslo_concurrency.processutils [None req-7fce0135-cf8f-486d-abdc-1aa3e86cd2e6 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/27f553d7-b010-40ef-b0cb-42ff0c466354/disk.config 27f553d7-b010-40ef-b0cb-42ff0c466354_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:58:59 np0005548731 nova_compute[232433]: 2025-12-06 07:58:59.506 232437 DEBUG nova.network.neutron [req-236dc1fa-0567-4fb6-950f-b168ed298bf4 req-1c8d7688-7606-45a2-b9c2-ed14f5fa27d2 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 27f553d7-b010-40ef-b0cb-42ff0c466354] Updated VIF entry in instance network info cache for port 2b54b935-6ec8-4a68-a404-6be608d5b405. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  6 02:58:59 np0005548731 nova_compute[232433]: 2025-12-06 07:58:59.508 232437 DEBUG nova.network.neutron [req-236dc1fa-0567-4fb6-950f-b168ed298bf4 req-1c8d7688-7606-45a2-b9c2-ed14f5fa27d2 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 27f553d7-b010-40ef-b0cb-42ff0c466354] Updating instance_info_cache with network_info: [{"id": "2b54b935-6ec8-4a68-a404-6be608d5b405", "address": "fa:16:3e:15:9d:85", "network": {"id": "8caa40db-27da-43ab-86ca-042284636e71", "bridge": "br-int", "label": "tempest-TestShelveInstance-1111687365-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c3c0564f8e9f4af9ae5b597a275c989f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2b54b935-6e", "ovs_interfaceid": "2b54b935-6ec8-4a68-a404-6be608d5b405", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 02:58:59 np0005548731 nova_compute[232433]: 2025-12-06 07:58:59.569 232437 DEBUG oslo_concurrency.lockutils [req-236dc1fa-0567-4fb6-950f-b168ed298bf4 req-1c8d7688-7606-45a2-b9c2-ed14f5fa27d2 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-27f553d7-b010-40ef-b0cb-42ff0c466354" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 02:58:59 np0005548731 nova_compute[232433]: 2025-12-06 07:58:59.585 232437 DEBUG oslo_concurrency.processutils [None req-7fce0135-cf8f-486d-abdc-1aa3e86cd2e6 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/27f553d7-b010-40ef-b0cb-42ff0c466354/disk.config 27f553d7-b010-40ef-b0cb-42ff0c466354_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.146s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:58:59 np0005548731 nova_compute[232433]: 2025-12-06 07:58:59.586 232437 INFO nova.virt.libvirt.driver [None req-7fce0135-cf8f-486d-abdc-1aa3e86cd2e6 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] [instance: 27f553d7-b010-40ef-b0cb-42ff0c466354] Deleting local config drive /var/lib/nova/instances/27f553d7-b010-40ef-b0cb-42ff0c466354/disk.config because it was imported into RBD.#033[00m
Dec  6 02:58:59 np0005548731 kernel: tap2b54b935-6e: entered promiscuous mode
Dec  6 02:58:59 np0005548731 NetworkManager[49182]: <info>  [1765007939.6403] manager: (tap2b54b935-6e): new Tun device (/org/freedesktop/NetworkManager/Devices/412)
Dec  6 02:58:59 np0005548731 nova_compute[232433]: 2025-12-06 07:58:59.640 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:58:59 np0005548731 ovn_controller[133927]: 2025-12-06T07:58:59Z|00902|binding|INFO|Claiming lport 2b54b935-6ec8-4a68-a404-6be608d5b405 for this chassis.
Dec  6 02:58:59 np0005548731 ovn_controller[133927]: 2025-12-06T07:58:59Z|00903|binding|INFO|2b54b935-6ec8-4a68-a404-6be608d5b405: Claiming fa:16:3e:15:9d:85 10.100.0.6
Dec  6 02:58:59 np0005548731 nova_compute[232433]: 2025-12-06 07:58:59.646 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:58:59 np0005548731 systemd-udevd[318149]: Network interface NamePolicy= disabled on kernel command line.
Dec  6 02:58:59 np0005548731 systemd-machined[195355]: New machine qemu-91-instance-000000b4.
Dec  6 02:58:59 np0005548731 NetworkManager[49182]: <info>  [1765007939.6833] device (tap2b54b935-6e): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec  6 02:58:59 np0005548731 NetworkManager[49182]: <info>  [1765007939.6859] device (tap2b54b935-6e): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec  6 02:58:59 np0005548731 systemd[1]: Started Virtual Machine qemu-91-instance-000000b4.
Dec  6 02:58:59 np0005548731 nova_compute[232433]: 2025-12-06 07:58:59.704 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:58:59 np0005548731 nova_compute[232433]: 2025-12-06 07:58:59.710 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:58:59 np0005548731 ovn_controller[133927]: 2025-12-06T07:58:59Z|00904|binding|INFO|Setting lport 2b54b935-6ec8-4a68-a404-6be608d5b405 ovn-installed in OVS
Dec  6 02:58:59 np0005548731 nova_compute[232433]: 2025-12-06 07:58:59.714 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:58:59 np0005548731 ovn_controller[133927]: 2025-12-06T07:58:59Z|00905|binding|INFO|Setting lport 2b54b935-6ec8-4a68-a404-6be608d5b405 up in Southbound
Dec  6 02:58:59 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:58:59.721 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:15:9d:85 10.100.0.6'], port_security=['fa:16:3e:15:9d:85 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '27f553d7-b010-40ef-b0cb-42ff0c466354', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8caa40db-27da-43ab-86ca-042284636e71', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c3c0564f8e9f4af9ae5b597a275c989f', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'fcb04c4c-5a67-4d1a-9338-9588ff78bd36', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3ce9c1bf-feee-480c-a359-3eaf272f4b83, chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], logical_port=2b54b935-6ec8-4a68-a404-6be608d5b405) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 02:58:59 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:58:59.723 143965 INFO neutron.agent.ovn.metadata.agent [-] Port 2b54b935-6ec8-4a68-a404-6be608d5b405 in datapath 8caa40db-27da-43ab-86ca-042284636e71 bound to our chassis#033[00m
Dec  6 02:58:59 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:58:59.727 143965 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 8caa40db-27da-43ab-86ca-042284636e71#033[00m
Dec  6 02:58:59 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:58:59.746 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[615d2e07-20ef-4fc6-a7d1-aeb8f053279c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:58:59 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:58:59.749 143965 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap8caa40db-21 in ovnmeta-8caa40db-27da-43ab-86ca-042284636e71 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Dec  6 02:58:59 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:58:59.753 236668 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap8caa40db-20 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Dec  6 02:58:59 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:58:59.753 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[f550ef25-f9b2-42f7-8a0e-e4d85c07cb8b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:58:59 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:58:59.754 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[5345e217-e6a5-47b9-baaa-6e2f67953035]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:58:59 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:58:59.767 144079 DEBUG oslo.privsep.daemon [-] privsep: reply[9b721a9f-8770-47e4-987a-2274503efbea]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:58:59 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:58:59.797 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[c81b6595-c75c-4701-a8fa-68f7244fe559]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:58:59 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:58:59.825 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[73dde0f6-aa97-41e8-b884-b0a6d010b794]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:58:59 np0005548731 NetworkManager[49182]: <info>  [1765007939.8349] manager: (tap8caa40db-20): new Veth device (/org/freedesktop/NetworkManager/Devices/413)
Dec  6 02:58:59 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:58:59.833 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[9344bcd5-7b4e-4949-8407-a0ca45209834]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:58:59 np0005548731 systemd-udevd[318151]: Network interface NamePolicy= disabled on kernel command line.
Dec  6 02:58:59 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:58:59.868 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[bc087c5a-dc5c-45dc-a08c-d521ac2ae514]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:58:59 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:58:59.871 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[4582af5c-b7bf-4c5a-84da-0c0a5184cd3e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:58:59 np0005548731 NetworkManager[49182]: <info>  [1765007939.8921] device (tap8caa40db-20): carrier: link connected
Dec  6 02:58:59 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:58:59.897 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[e8369f89-c156-4834-a3c6-e4cc7cd3b30b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:58:59 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:58:59.912 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[be8a9aa0-f06a-4fc4-96a4-5f550f7ba5c3]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8caa40db-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b4:45:40'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 273], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 820189, 'reachable_time': 24390, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 318185, 'error': None, 'target': 'ovnmeta-8caa40db-27da-43ab-86ca-042284636e71', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:58:59 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:58:59.927 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[a27de122-2af6-4ffa-a1e7-7c045a99aae2]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feb4:4540'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 820189, 'tstamp': 820189}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 318186, 'error': None, 'target': 'ovnmeta-8caa40db-27da-43ab-86ca-042284636e71', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:58:59 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:58:59.945 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[bd69ee37-ffad-4eb5-8d2e-95852ad1d2c7]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8caa40db-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b4:45:40'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 273], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 820189, 'reachable_time': 24390, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 318187, 'error': None, 'target': 'ovnmeta-8caa40db-27da-43ab-86ca-042284636e71', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:58:59 np0005548731 nova_compute[232433]: 2025-12-06 07:58:59.961 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:58:59 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:58:59.978 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[c1a4bd65-e8c2-41cf-8935-364b962d70bb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:59:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:59:00.032 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[400c97a7-187f-4806-a1ed-50db19a44bea]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:59:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:59:00.034 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8caa40db-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:59:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:59:00.034 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  6 02:59:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:59:00.035 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8caa40db-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:59:00 np0005548731 nova_compute[232433]: 2025-12-06 07:59:00.036 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:59:00 np0005548731 kernel: tap8caa40db-20: entered promiscuous mode
Dec  6 02:59:00 np0005548731 NetworkManager[49182]: <info>  [1765007940.0370] manager: (tap8caa40db-20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/414)
Dec  6 02:59:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:59:00.040 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap8caa40db-20, col_values=(('external_ids', {'iface-id': '13935eb5-7198-4d3b-b91f-e4e0daf2886d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:59:00 np0005548731 nova_compute[232433]: 2025-12-06 07:59:00.041 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:59:00 np0005548731 ovn_controller[133927]: 2025-12-06T07:59:00Z|00906|binding|INFO|Releasing lport 13935eb5-7198-4d3b-b91f-e4e0daf2886d from this chassis (sb_readonly=0)
Dec  6 02:59:00 np0005548731 nova_compute[232433]: 2025-12-06 07:59:00.055 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:59:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:59:00.056 143965 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/8caa40db-27da-43ab-86ca-042284636e71.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/8caa40db-27da-43ab-86ca-042284636e71.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Dec  6 02:59:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:59:00.057 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[2688fce3-77a1-4dc3-a759-9be37e2f701c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:59:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:59:00.057 143965 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec  6 02:59:00 np0005548731 ovn_metadata_agent[143960]: global
Dec  6 02:59:00 np0005548731 ovn_metadata_agent[143960]:    log         /dev/log local0 debug
Dec  6 02:59:00 np0005548731 ovn_metadata_agent[143960]:    log-tag     haproxy-metadata-proxy-8caa40db-27da-43ab-86ca-042284636e71
Dec  6 02:59:00 np0005548731 ovn_metadata_agent[143960]:    user        root
Dec  6 02:59:00 np0005548731 ovn_metadata_agent[143960]:    group       root
Dec  6 02:59:00 np0005548731 ovn_metadata_agent[143960]:    maxconn     1024
Dec  6 02:59:00 np0005548731 ovn_metadata_agent[143960]:    pidfile     /var/lib/neutron/external/pids/8caa40db-27da-43ab-86ca-042284636e71.pid.haproxy
Dec  6 02:59:00 np0005548731 ovn_metadata_agent[143960]:    daemon
Dec  6 02:59:00 np0005548731 ovn_metadata_agent[143960]: 
Dec  6 02:59:00 np0005548731 ovn_metadata_agent[143960]: defaults
Dec  6 02:59:00 np0005548731 ovn_metadata_agent[143960]:    log global
Dec  6 02:59:00 np0005548731 ovn_metadata_agent[143960]:    mode http
Dec  6 02:59:00 np0005548731 ovn_metadata_agent[143960]:    option httplog
Dec  6 02:59:00 np0005548731 ovn_metadata_agent[143960]:    option dontlognull
Dec  6 02:59:00 np0005548731 ovn_metadata_agent[143960]:    option http-server-close
Dec  6 02:59:00 np0005548731 ovn_metadata_agent[143960]:    option forwardfor
Dec  6 02:59:00 np0005548731 ovn_metadata_agent[143960]:    retries                 3
Dec  6 02:59:00 np0005548731 ovn_metadata_agent[143960]:    timeout http-request    30s
Dec  6 02:59:00 np0005548731 ovn_metadata_agent[143960]:    timeout connect         30s
Dec  6 02:59:00 np0005548731 ovn_metadata_agent[143960]:    timeout client          32s
Dec  6 02:59:00 np0005548731 ovn_metadata_agent[143960]:    timeout server          32s
Dec  6 02:59:00 np0005548731 ovn_metadata_agent[143960]:    timeout http-keep-alive 30s
Dec  6 02:59:00 np0005548731 ovn_metadata_agent[143960]: 
Dec  6 02:59:00 np0005548731 ovn_metadata_agent[143960]: 
Dec  6 02:59:00 np0005548731 ovn_metadata_agent[143960]: listen listener
Dec  6 02:59:00 np0005548731 ovn_metadata_agent[143960]:    bind 169.254.169.254:80
Dec  6 02:59:00 np0005548731 ovn_metadata_agent[143960]:    server metadata /var/lib/neutron/metadata_proxy
Dec  6 02:59:00 np0005548731 ovn_metadata_agent[143960]:    http-request add-header X-OVN-Network-ID 8caa40db-27da-43ab-86ca-042284636e71
Dec  6 02:59:00 np0005548731 ovn_metadata_agent[143960]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Dec  6 02:59:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:59:00.058 143965 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-8caa40db-27da-43ab-86ca-042284636e71', 'env', 'PROCESS_TAG=haproxy-8caa40db-27da-43ab-86ca-042284636e71', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/8caa40db-27da-43ab-86ca-042284636e71.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Dec  6 02:59:00 np0005548731 podman[318255]: 2025-12-06 07:59:00.408015788 +0000 UTC m=+0.044123530 container create 7566af901b86dc528b1189e2965aeb8827cd41ab2469975db7c4ae60437caf3e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8caa40db-27da-43ab-86ca-042284636e71, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Dec  6 02:59:00 np0005548731 systemd[1]: Started libpod-conmon-7566af901b86dc528b1189e2965aeb8827cd41ab2469975db7c4ae60437caf3e.scope.
Dec  6 02:59:00 np0005548731 systemd[1]: Started libcrun container.
Dec  6 02:59:00 np0005548731 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e94baafa72671022bac220d3201fff715a6a7595af593eac9050b1e280f0e60d/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec  6 02:59:00 np0005548731 podman[318255]: 2025-12-06 07:59:00.386191784 +0000 UTC m=+0.022299546 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec  6 02:59:00 np0005548731 podman[318255]: 2025-12-06 07:59:00.483637598 +0000 UTC m=+0.119745370 container init 7566af901b86dc528b1189e2965aeb8827cd41ab2469975db7c4ae60437caf3e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8caa40db-27da-43ab-86ca-042284636e71, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.schema-version=1.0)
Dec  6 02:59:00 np0005548731 podman[318255]: 2025-12-06 07:59:00.489081701 +0000 UTC m=+0.125189483 container start 7566af901b86dc528b1189e2965aeb8827cd41ab2469975db7c4ae60437caf3e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8caa40db-27da-43ab-86ca-042284636e71, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Dec  6 02:59:00 np0005548731 neutron-haproxy-ovnmeta-8caa40db-27da-43ab-86ca-042284636e71[318271]: [NOTICE]   (318275) : New worker (318277) forked
Dec  6 02:59:00 np0005548731 neutron-haproxy-ovnmeta-8caa40db-27da-43ab-86ca-042284636e71[318271]: [NOTICE]   (318275) : Loading success.
Dec  6 02:59:00 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:59:00 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:59:00 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:59:00.630 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:59:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:59:00.904 143965 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:59:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:59:00.905 143965 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:59:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:59:00.906 143965 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:59:01 np0005548731 nova_compute[232433]: 2025-12-06 07:59:01.016 232437 DEBUG nova.virt.driver [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Emitting event <LifecycleEvent: 1765007941.0151572, 27f553d7-b010-40ef-b0cb-42ff0c466354 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 02:59:01 np0005548731 nova_compute[232433]: 2025-12-06 07:59:01.018 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 27f553d7-b010-40ef-b0cb-42ff0c466354] VM Started (Lifecycle Event)#033[00m
Dec  6 02:59:01 np0005548731 nova_compute[232433]: 2025-12-06 07:59:01.045 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 27f553d7-b010-40ef-b0cb-42ff0c466354] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:59:01 np0005548731 nova_compute[232433]: 2025-12-06 07:59:01.049 232437 DEBUG nova.virt.driver [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Emitting event <LifecycleEvent: 1765007941.0154264, 27f553d7-b010-40ef-b0cb-42ff0c466354 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 02:59:01 np0005548731 nova_compute[232433]: 2025-12-06 07:59:01.050 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 27f553d7-b010-40ef-b0cb-42ff0c466354] VM Paused (Lifecycle Event)#033[00m
Dec  6 02:59:01 np0005548731 nova_compute[232433]: 2025-12-06 07:59:01.071 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 27f553d7-b010-40ef-b0cb-42ff0c466354] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:59:01 np0005548731 nova_compute[232433]: 2025-12-06 07:59:01.075 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 27f553d7-b010-40ef-b0cb-42ff0c466354] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  6 02:59:01 np0005548731 nova_compute[232433]: 2025-12-06 07:59:01.094 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 27f553d7-b010-40ef-b0cb-42ff0c466354] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  6 02:59:01 np0005548731 nova_compute[232433]: 2025-12-06 07:59:01.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:59:01 np0005548731 nova_compute[232433]: 2025-12-06 07:59:01.105 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Dec  6 02:59:01 np0005548731 nova_compute[232433]: 2025-12-06 07:59:01.124 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Dec  6 02:59:01 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:59:01 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:59:01 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:59:01.363 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:59:02 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:59:02 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:59:02 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:59:02.633 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:59:03 np0005548731 nova_compute[232433]: 2025-12-06 07:59:03.330 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:59:03 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:59:03 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:59:03 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:59:03.365 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:59:03 np0005548731 nova_compute[232433]: 2025-12-06 07:59:03.447 232437 DEBUG nova.compute.manager [req-ab780d7f-93a8-4f8b-99e0-b5b05075f36a req-cc2ae038-c7bb-4c5a-94af-6ef83bf9bcb9 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 27f553d7-b010-40ef-b0cb-42ff0c466354] Received event network-vif-plugged-2b54b935-6ec8-4a68-a404-6be608d5b405 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:59:03 np0005548731 nova_compute[232433]: 2025-12-06 07:59:03.447 232437 DEBUG oslo_concurrency.lockutils [req-ab780d7f-93a8-4f8b-99e0-b5b05075f36a req-cc2ae038-c7bb-4c5a-94af-6ef83bf9bcb9 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "27f553d7-b010-40ef-b0cb-42ff0c466354-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:59:03 np0005548731 nova_compute[232433]: 2025-12-06 07:59:03.447 232437 DEBUG oslo_concurrency.lockutils [req-ab780d7f-93a8-4f8b-99e0-b5b05075f36a req-cc2ae038-c7bb-4c5a-94af-6ef83bf9bcb9 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "27f553d7-b010-40ef-b0cb-42ff0c466354-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:59:03 np0005548731 nova_compute[232433]: 2025-12-06 07:59:03.448 232437 DEBUG oslo_concurrency.lockutils [req-ab780d7f-93a8-4f8b-99e0-b5b05075f36a req-cc2ae038-c7bb-4c5a-94af-6ef83bf9bcb9 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "27f553d7-b010-40ef-b0cb-42ff0c466354-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:59:03 np0005548731 nova_compute[232433]: 2025-12-06 07:59:03.448 232437 DEBUG nova.compute.manager [req-ab780d7f-93a8-4f8b-99e0-b5b05075f36a req-cc2ae038-c7bb-4c5a-94af-6ef83bf9bcb9 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 27f553d7-b010-40ef-b0cb-42ff0c466354] Processing event network-vif-plugged-2b54b935-6ec8-4a68-a404-6be608d5b405 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Dec  6 02:59:03 np0005548731 nova_compute[232433]: 2025-12-06 07:59:03.448 232437 DEBUG nova.compute.manager [None req-7fce0135-cf8f-486d-abdc-1aa3e86cd2e6 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] [instance: 27f553d7-b010-40ef-b0cb-42ff0c466354] Instance event wait completed in 2 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec  6 02:59:03 np0005548731 nova_compute[232433]: 2025-12-06 07:59:03.452 232437 DEBUG nova.virt.driver [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Emitting event <LifecycleEvent: 1765007943.452191, 27f553d7-b010-40ef-b0cb-42ff0c466354 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 02:59:03 np0005548731 nova_compute[232433]: 2025-12-06 07:59:03.453 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 27f553d7-b010-40ef-b0cb-42ff0c466354] VM Resumed (Lifecycle Event)#033[00m
Dec  6 02:59:03 np0005548731 nova_compute[232433]: 2025-12-06 07:59:03.455 232437 DEBUG nova.virt.libvirt.driver [None req-7fce0135-cf8f-486d-abdc-1aa3e86cd2e6 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] [instance: 27f553d7-b010-40ef-b0cb-42ff0c466354] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Dec  6 02:59:03 np0005548731 nova_compute[232433]: 2025-12-06 07:59:03.459 232437 INFO nova.virt.libvirt.driver [-] [instance: 27f553d7-b010-40ef-b0cb-42ff0c466354] Instance spawned successfully.#033[00m
Dec  6 02:59:03 np0005548731 nova_compute[232433]: 2025-12-06 07:59:03.460 232437 DEBUG nova.virt.libvirt.driver [None req-7fce0135-cf8f-486d-abdc-1aa3e86cd2e6 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] [instance: 27f553d7-b010-40ef-b0cb-42ff0c466354] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Dec  6 02:59:03 np0005548731 nova_compute[232433]: 2025-12-06 07:59:03.595 232437 DEBUG nova.virt.libvirt.driver [None req-7fce0135-cf8f-486d-abdc-1aa3e86cd2e6 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] [instance: 27f553d7-b010-40ef-b0cb-42ff0c466354] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 02:59:03 np0005548731 nova_compute[232433]: 2025-12-06 07:59:03.596 232437 DEBUG nova.virt.libvirt.driver [None req-7fce0135-cf8f-486d-abdc-1aa3e86cd2e6 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] [instance: 27f553d7-b010-40ef-b0cb-42ff0c466354] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 02:59:03 np0005548731 nova_compute[232433]: 2025-12-06 07:59:03.596 232437 DEBUG nova.virt.libvirt.driver [None req-7fce0135-cf8f-486d-abdc-1aa3e86cd2e6 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] [instance: 27f553d7-b010-40ef-b0cb-42ff0c466354] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 02:59:03 np0005548731 nova_compute[232433]: 2025-12-06 07:59:03.596 232437 DEBUG nova.virt.libvirt.driver [None req-7fce0135-cf8f-486d-abdc-1aa3e86cd2e6 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] [instance: 27f553d7-b010-40ef-b0cb-42ff0c466354] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 02:59:03 np0005548731 nova_compute[232433]: 2025-12-06 07:59:03.597 232437 DEBUG nova.virt.libvirt.driver [None req-7fce0135-cf8f-486d-abdc-1aa3e86cd2e6 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] [instance: 27f553d7-b010-40ef-b0cb-42ff0c466354] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 02:59:03 np0005548731 nova_compute[232433]: 2025-12-06 07:59:03.597 232437 DEBUG nova.virt.libvirt.driver [None req-7fce0135-cf8f-486d-abdc-1aa3e86cd2e6 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] [instance: 27f553d7-b010-40ef-b0cb-42ff0c466354] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 02:59:03 np0005548731 nova_compute[232433]: 2025-12-06 07:59:03.601 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 27f553d7-b010-40ef-b0cb-42ff0c466354] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:59:03 np0005548731 nova_compute[232433]: 2025-12-06 07:59:03.605 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 27f553d7-b010-40ef-b0cb-42ff0c466354] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  6 02:59:03 np0005548731 nova_compute[232433]: 2025-12-06 07:59:03.694 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 27f553d7-b010-40ef-b0cb-42ff0c466354] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  6 02:59:03 np0005548731 nova_compute[232433]: 2025-12-06 07:59:03.778 232437 INFO nova.compute.manager [None req-7fce0135-cf8f-486d-abdc-1aa3e86cd2e6 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] [instance: 27f553d7-b010-40ef-b0cb-42ff0c466354] Took 9.43 seconds to spawn the instance on the hypervisor.#033[00m
Dec  6 02:59:03 np0005548731 nova_compute[232433]: 2025-12-06 07:59:03.779 232437 DEBUG nova.compute.manager [None req-7fce0135-cf8f-486d-abdc-1aa3e86cd2e6 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] [instance: 27f553d7-b010-40ef-b0cb-42ff0c466354] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:59:03 np0005548731 nova_compute[232433]: 2025-12-06 07:59:03.956 232437 INFO nova.compute.manager [None req-7fce0135-cf8f-486d-abdc-1aa3e86cd2e6 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] [instance: 27f553d7-b010-40ef-b0cb-42ff0c466354] Took 12.02 seconds to build instance.#033[00m
Dec  6 02:59:04 np0005548731 nova_compute[232433]: 2025-12-06 07:59:04.076 232437 DEBUG oslo_concurrency.lockutils [None req-7fce0135-cf8f-486d-abdc-1aa3e86cd2e6 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] Lock "27f553d7-b010-40ef-b0cb-42ff0c466354" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.216s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:59:04 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:59:04 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:59:04 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:59:04 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:59:04.636 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:59:04 np0005548731 nova_compute[232433]: 2025-12-06 07:59:04.963 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:59:05 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:59:05 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:59:05 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:59:05.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:59:05 np0005548731 nova_compute[232433]: 2025-12-06 07:59:05.600 232437 DEBUG nova.compute.manager [req-df07e5d6-a647-42bf-84dd-3bc5ac42aa1c req-1172e395-10d6-434d-b09b-ef36abd10480 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 27f553d7-b010-40ef-b0cb-42ff0c466354] Received event network-vif-plugged-2b54b935-6ec8-4a68-a404-6be608d5b405 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:59:05 np0005548731 nova_compute[232433]: 2025-12-06 07:59:05.600 232437 DEBUG oslo_concurrency.lockutils [req-df07e5d6-a647-42bf-84dd-3bc5ac42aa1c req-1172e395-10d6-434d-b09b-ef36abd10480 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "27f553d7-b010-40ef-b0cb-42ff0c466354-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:59:05 np0005548731 nova_compute[232433]: 2025-12-06 07:59:05.601 232437 DEBUG oslo_concurrency.lockutils [req-df07e5d6-a647-42bf-84dd-3bc5ac42aa1c req-1172e395-10d6-434d-b09b-ef36abd10480 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "27f553d7-b010-40ef-b0cb-42ff0c466354-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:59:05 np0005548731 nova_compute[232433]: 2025-12-06 07:59:05.601 232437 DEBUG oslo_concurrency.lockutils [req-df07e5d6-a647-42bf-84dd-3bc5ac42aa1c req-1172e395-10d6-434d-b09b-ef36abd10480 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "27f553d7-b010-40ef-b0cb-42ff0c466354-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:59:05 np0005548731 nova_compute[232433]: 2025-12-06 07:59:05.601 232437 DEBUG nova.compute.manager [req-df07e5d6-a647-42bf-84dd-3bc5ac42aa1c req-1172e395-10d6-434d-b09b-ef36abd10480 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 27f553d7-b010-40ef-b0cb-42ff0c466354] No waiting events found dispatching network-vif-plugged-2b54b935-6ec8-4a68-a404-6be608d5b405 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 02:59:05 np0005548731 nova_compute[232433]: 2025-12-06 07:59:05.601 232437 WARNING nova.compute.manager [req-df07e5d6-a647-42bf-84dd-3bc5ac42aa1c req-1172e395-10d6-434d-b09b-ef36abd10480 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 27f553d7-b010-40ef-b0cb-42ff0c466354] Received unexpected event network-vif-plugged-2b54b935-6ec8-4a68-a404-6be608d5b405 for instance with vm_state active and task_state None.#033[00m
Dec  6 02:59:06 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:59:06 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:59:06 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:59:06.640 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:59:06 np0005548731 podman[318296]: 2025-12-06 07:59:06.962113162 +0000 UTC m=+0.114611244 container health_status 26c2ce9bdb83c6db5ccad7f5b2a7cb4e9354ab0235ad2f942ea42c8feda2142b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, 
org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true)
Dec  6 02:59:06 np0005548731 podman[318298]: 2025-12-06 07:59:06.963386223 +0000 UTC m=+0.109565351 container health_status ae4fd08cdec31d6ae2a6b91ffff7deb6ca5a8375cb52ee4b685cc6f25796824e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=multipathd, org.label-schema.schema-version=1.0, container_name=multipathd, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  6 02:59:06 np0005548731 podman[318297]: 2025-12-06 07:59:06.978456712 +0000 UTC m=+0.131025146 container health_status a118c3becf56cfbcae208065771ebf10a65162bb7b96389971293e534823fb78 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  6 02:59:07 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:59:07 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:59:07 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:59:07.371 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:59:08 np0005548731 nova_compute[232433]: 2025-12-06 07:59:08.333 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:59:08 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:59:08 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:59:08 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:59:08.643 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:59:08 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:59:08.775 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=75, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:ec:b3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '32:72:e7:89:e0:7d'}, ipsec=False) old=SB_Global(nb_cfg=74) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 02:59:08 np0005548731 nova_compute[232433]: 2025-12-06 07:59:08.776 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:59:08 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:59:08.777 143965 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec  6 02:59:08 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  6 02:59:08 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/137019635' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  6 02:59:08 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  6 02:59:08 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/137019635' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  6 02:59:09 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:59:09 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:59:09 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:59:09 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:59:09.374 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:59:09 np0005548731 nova_compute[232433]: 2025-12-06 07:59:09.966 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:59:10 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:59:10 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:59:10 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:59:10.645 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:59:11 np0005548731 nova_compute[232433]: 2025-12-06 07:59:11.105 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:59:11 np0005548731 nova_compute[232433]: 2025-12-06 07:59:11.105 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Dec  6 02:59:11 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:59:11 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:59:11 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:59:11.376 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:59:12 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:59:12 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:59:12 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:59:12.649 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:59:12 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:59:12.779 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=9f96b960-b4f2-40bd-ae99-08121f5e8b78, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '75'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:59:13 np0005548731 nova_compute[232433]: 2025-12-06 07:59:13.335 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:59:13 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:59:13 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:59:13 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:59:13.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:59:14 np0005548731 NetworkManager[49182]: <info>  [1765007954.1378] manager: (patch-br-int-to-provnet-9e78c1a1-68f4-477a-abaa-13a98bde06e5): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/415)
Dec  6 02:59:14 np0005548731 NetworkManager[49182]: <info>  [1765007954.1401] manager: (patch-provnet-9e78c1a1-68f4-477a-abaa-13a98bde06e5-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/416)
Dec  6 02:59:14 np0005548731 nova_compute[232433]: 2025-12-06 07:59:14.139 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:59:14 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:59:14 np0005548731 nova_compute[232433]: 2025-12-06 07:59:14.300 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:59:14 np0005548731 ovn_controller[133927]: 2025-12-06T07:59:14Z|00907|binding|INFO|Releasing lport 13935eb5-7198-4d3b-b91f-e4e0daf2886d from this chassis (sb_readonly=0)
Dec  6 02:59:14 np0005548731 nova_compute[232433]: 2025-12-06 07:59:14.316 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:59:14 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:59:14 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:59:14 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:59:14.652 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:59:14 np0005548731 nova_compute[232433]: 2025-12-06 07:59:14.967 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:59:15 np0005548731 nova_compute[232433]: 2025-12-06 07:59:15.317 232437 DEBUG nova.compute.manager [req-b09142e7-aa22-459c-9c47-4b5d50cf5375 req-dac772f7-b1cc-430a-8857-82f0023af047 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 27f553d7-b010-40ef-b0cb-42ff0c466354] Received event network-changed-2b54b935-6ec8-4a68-a404-6be608d5b405 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:59:15 np0005548731 nova_compute[232433]: 2025-12-06 07:59:15.317 232437 DEBUG nova.compute.manager [req-b09142e7-aa22-459c-9c47-4b5d50cf5375 req-dac772f7-b1cc-430a-8857-82f0023af047 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 27f553d7-b010-40ef-b0cb-42ff0c466354] Refreshing instance network info cache due to event network-changed-2b54b935-6ec8-4a68-a404-6be608d5b405. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  6 02:59:15 np0005548731 nova_compute[232433]: 2025-12-06 07:59:15.318 232437 DEBUG oslo_concurrency.lockutils [req-b09142e7-aa22-459c-9c47-4b5d50cf5375 req-dac772f7-b1cc-430a-8857-82f0023af047 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-27f553d7-b010-40ef-b0cb-42ff0c466354" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 02:59:15 np0005548731 nova_compute[232433]: 2025-12-06 07:59:15.318 232437 DEBUG oslo_concurrency.lockutils [req-b09142e7-aa22-459c-9c47-4b5d50cf5375 req-dac772f7-b1cc-430a-8857-82f0023af047 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-27f553d7-b010-40ef-b0cb-42ff0c466354" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 02:59:15 np0005548731 nova_compute[232433]: 2025-12-06 07:59:15.318 232437 DEBUG nova.network.neutron [req-b09142e7-aa22-459c-9c47-4b5d50cf5375 req-dac772f7-b1cc-430a-8857-82f0023af047 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 27f553d7-b010-40ef-b0cb-42ff0c466354] Refreshing network info cache for port 2b54b935-6ec8-4a68-a404-6be608d5b405 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  6 02:59:15 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:59:15 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:59:15 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:59:15.382 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:59:16 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:59:16 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:59:16 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:59:16.655 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:59:17 np0005548731 nova_compute[232433]: 2025-12-06 07:59:17.260 232437 DEBUG nova.network.neutron [req-b09142e7-aa22-459c-9c47-4b5d50cf5375 req-dac772f7-b1cc-430a-8857-82f0023af047 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 27f553d7-b010-40ef-b0cb-42ff0c466354] Updated VIF entry in instance network info cache for port 2b54b935-6ec8-4a68-a404-6be608d5b405. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  6 02:59:17 np0005548731 nova_compute[232433]: 2025-12-06 07:59:17.261 232437 DEBUG nova.network.neutron [req-b09142e7-aa22-459c-9c47-4b5d50cf5375 req-dac772f7-b1cc-430a-8857-82f0023af047 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 27f553d7-b010-40ef-b0cb-42ff0c466354] Updating instance_info_cache with network_info: [{"id": "2b54b935-6ec8-4a68-a404-6be608d5b405", "address": "fa:16:3e:15:9d:85", "network": {"id": "8caa40db-27da-43ab-86ca-042284636e71", "bridge": "br-int", "label": "tempest-TestShelveInstance-1111687365-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.240", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c3c0564f8e9f4af9ae5b597a275c989f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2b54b935-6e", "ovs_interfaceid": "2b54b935-6ec8-4a68-a404-6be608d5b405", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 02:59:17 np0005548731 nova_compute[232433]: 2025-12-06 07:59:17.336 232437 DEBUG oslo_concurrency.lockutils [req-b09142e7-aa22-459c-9c47-4b5d50cf5375 req-dac772f7-b1cc-430a-8857-82f0023af047 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-27f553d7-b010-40ef-b0cb-42ff0c466354" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 02:59:17 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:59:17 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:59:17 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:59:17.384 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:59:18 np0005548731 ovn_controller[133927]: 2025-12-06T07:59:18Z|00120|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:15:9d:85 10.100.0.6
Dec  6 02:59:18 np0005548731 ovn_controller[133927]: 2025-12-06T07:59:18Z|00121|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:15:9d:85 10.100.0.6
Dec  6 02:59:18 np0005548731 nova_compute[232433]: 2025-12-06 07:59:18.337 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:59:18 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:59:18 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:59:18 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:59:18.658 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:59:19 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:59:19 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:59:19 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:59:19 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:59:19.386 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:59:20 np0005548731 nova_compute[232433]: 2025-12-06 07:59:20.014 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:59:20 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:59:20 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:59:20 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:59:20.661 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:59:21 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:59:21 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:59:21 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:59:21.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:59:22 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:59:22 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:59:22 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:59:22.664 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:59:23 np0005548731 nova_compute[232433]: 2025-12-06 07:59:23.339 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:59:23 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:59:23 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:59:23 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:59:23.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:59:24 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:59:24 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:59:24 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:59:24 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:59:24.667 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:59:24 np0005548731 ceph-mon[77458]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec  6 02:59:24 np0005548731 ceph-mon[77458]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 5400.0 total, 600.0 interval#012Cumulative writes: 13K writes, 69K keys, 13K commit groups, 1.0 writes per commit group, ingest: 0.14 GB, 0.03 MB/s#012Cumulative WAL: 13K writes, 13K syncs, 1.00 writes per sync, written: 0.14 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1583 writes, 7927 keys, 1583 commit groups, 1.0 writes per commit group, ingest: 16.03 MB, 0.03 MB/s#012Interval WAL: 1583 writes, 1583 syncs, 1.00 writes per sync, written: 0.02 GB, 0.03 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     34.9      2.41              0.25        42    0.057       0      0       0.0       0.0#012  L6      1/0   12.07 MB   0.0      0.5     0.1      0.4       0.4      0.0       0.0   5.0     99.8     85.2      4.94              1.20        41    0.121    303K    22K       0.0       0.0#012 Sum      1/0   12.07 MB   0.0      0.5     0.1      0.4       0.5      0.1       0.0   6.0     67.1     68.7      7.35              1.45        83    0.089    303K    22K       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   8.5    155.0    155.5      0.52              0.21        12    0.043     59K   3128       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) 
Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.5     0.1      0.4       0.4      0.0       0.0   0.0     99.8     85.2      4.94              1.20        41    0.121    303K    22K       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     35.0      2.41              0.25        41    0.059       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.6      0.00              0.00         1    0.003       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 5400.0 total, 600.0 interval#012Flush(GB): cumulative 0.082, interval 0.009#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.49 GB write, 0.09 MB/s write, 0.48 GB read, 0.09 MB/s read, 7.4 seconds#012Interval compaction: 0.08 GB write, 0.13 MB/s write, 0.08 GB read, 0.13 MB/s read, 0.5 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x5619171151f0#2 capacity: 304.00 MB usage: 54.31 MB table_size: 0 occupancy: 18446744073709551615 collections: 10 last_copies: 0 last_secs: 0.000326 secs_since: 0#012Block cache entry stats(count,size,portion): 
DataBlock(3096,52.12 MB,17.1435%) FilterBlock(83,839.30 KB,0.269614%) IndexBlock(83,1.37 MB,0.450696%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Dec  6 02:59:25 np0005548731 nova_compute[232433]: 2025-12-06 07:59:25.058 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:59:25 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:59:25 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:59:25 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:59:25.394 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:59:26 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:59:26 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:59:26 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:59:26.670 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:59:27 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 02:59:27 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 02:59:27 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  6 02:59:27 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 02:59:27 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  6 02:59:27 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:59:27 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:59:27 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:59:27.396 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:59:28 np0005548731 nova_compute[232433]: 2025-12-06 07:59:28.340 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:59:28 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:59:28 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:59:28 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:59:28.673 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:59:29 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:59:29 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:59:29 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:59:29 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:59:29.398 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:59:30 np0005548731 nova_compute[232433]: 2025-12-06 07:59:30.108 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:59:30 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:59:30 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:59:30 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:59:30.675 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:59:31 np0005548731 nova_compute[232433]: 2025-12-06 07:59:31.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:59:31 np0005548731 nova_compute[232433]: 2025-12-06 07:59:31.214 232437 DEBUG oslo_concurrency.lockutils [None req-71168235-5a06-41ef-b290-838f681e8bf5 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] Acquiring lock "27f553d7-b010-40ef-b0cb-42ff0c466354" by "nova.compute.manager.ComputeManager.shelve_offload_instance.<locals>.do_shelve_offload_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:59:31 np0005548731 nova_compute[232433]: 2025-12-06 07:59:31.215 232437 DEBUG oslo_concurrency.lockutils [None req-71168235-5a06-41ef-b290-838f681e8bf5 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] Lock "27f553d7-b010-40ef-b0cb-42ff0c466354" acquired by "nova.compute.manager.ComputeManager.shelve_offload_instance.<locals>.do_shelve_offload_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:59:31 np0005548731 nova_compute[232433]: 2025-12-06 07:59:31.215 232437 INFO nova.compute.manager [None req-71168235-5a06-41ef-b290-838f681e8bf5 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] [instance: 27f553d7-b010-40ef-b0cb-42ff0c466354] Shelve offloading#033[00m
Dec  6 02:59:31 np0005548731 nova_compute[232433]: 2025-12-06 07:59:31.238 232437 DEBUG nova.virt.libvirt.driver [None req-71168235-5a06-41ef-b290-838f681e8bf5 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] [instance: 27f553d7-b010-40ef-b0cb-42ff0c466354] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Dec  6 02:59:31 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:59:31 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:59:31 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:59:31.399 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:59:32 np0005548731 nova_compute[232433]: 2025-12-06 07:59:32.159 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:59:32 np0005548731 nova_compute[232433]: 2025-12-06 07:59:32.159 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  6 02:59:32 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:59:32 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:59:32 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:59:32.678 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:59:33 np0005548731 nova_compute[232433]: 2025-12-06 07:59:33.343 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:59:33 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:59:33 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:59:33 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:59:33.401 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:59:33 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 02:59:33 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 02:59:33 np0005548731 kernel: tap2b54b935-6e (unregistering): left promiscuous mode
Dec  6 02:59:33 np0005548731 NetworkManager[49182]: <info>  [1765007973.7420] device (tap2b54b935-6e): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec  6 02:59:33 np0005548731 ovn_controller[133927]: 2025-12-06T07:59:33Z|00908|binding|INFO|Releasing lport 2b54b935-6ec8-4a68-a404-6be608d5b405 from this chassis (sb_readonly=0)
Dec  6 02:59:33 np0005548731 ovn_controller[133927]: 2025-12-06T07:59:33Z|00909|binding|INFO|Setting lport 2b54b935-6ec8-4a68-a404-6be608d5b405 down in Southbound
Dec  6 02:59:33 np0005548731 nova_compute[232433]: 2025-12-06 07:59:33.751 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:59:33 np0005548731 ovn_controller[133927]: 2025-12-06T07:59:33Z|00910|binding|INFO|Removing iface tap2b54b935-6e ovn-installed in OVS
Dec  6 02:59:33 np0005548731 nova_compute[232433]: 2025-12-06 07:59:33.753 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:59:33 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:59:33.758 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:15:9d:85 10.100.0.6'], port_security=['fa:16:3e:15:9d:85 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '27f553d7-b010-40ef-b0cb-42ff0c466354', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8caa40db-27da-43ab-86ca-042284636e71', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c3c0564f8e9f4af9ae5b597a275c989f', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'fcb04c4c-5a67-4d1a-9338-9588ff78bd36', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com', 'neutron:port_fip': '192.168.122.240'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3ce9c1bf-feee-480c-a359-3eaf272f4b83, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], logical_port=2b54b935-6ec8-4a68-a404-6be608d5b405) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 02:59:33 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:59:33.759 143965 INFO neutron.agent.ovn.metadata.agent [-] Port 2b54b935-6ec8-4a68-a404-6be608d5b405 in datapath 8caa40db-27da-43ab-86ca-042284636e71 unbound from our chassis#033[00m
Dec  6 02:59:33 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:59:33.760 143965 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 8caa40db-27da-43ab-86ca-042284636e71, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec  6 02:59:33 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:59:33.762 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[301423ea-8c51-434f-8156-633e46da95f0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:59:33 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:59:33.762 143965 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-8caa40db-27da-43ab-86ca-042284636e71 namespace which is not needed anymore#033[00m
Dec  6 02:59:33 np0005548731 nova_compute[232433]: 2025-12-06 07:59:33.768 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:59:33 np0005548731 systemd[1]: machine-qemu\x2d91\x2dinstance\x2d000000b4.scope: Deactivated successfully.
Dec  6 02:59:33 np0005548731 systemd[1]: machine-qemu\x2d91\x2dinstance\x2d000000b4.scope: Consumed 14.543s CPU time.
Dec  6 02:59:33 np0005548731 systemd-machined[195355]: Machine qemu-91-instance-000000b4 terminated.
Dec  6 02:59:33 np0005548731 neutron-haproxy-ovnmeta-8caa40db-27da-43ab-86ca-042284636e71[318271]: [NOTICE]   (318275) : haproxy version is 2.8.14-c23fe91
Dec  6 02:59:33 np0005548731 neutron-haproxy-ovnmeta-8caa40db-27da-43ab-86ca-042284636e71[318271]: [NOTICE]   (318275) : path to executable is /usr/sbin/haproxy
Dec  6 02:59:33 np0005548731 neutron-haproxy-ovnmeta-8caa40db-27da-43ab-86ca-042284636e71[318271]: [WARNING]  (318275) : Exiting Master process...
Dec  6 02:59:33 np0005548731 neutron-haproxy-ovnmeta-8caa40db-27da-43ab-86ca-042284636e71[318271]: [ALERT]    (318275) : Current worker (318277) exited with code 143 (Terminated)
Dec  6 02:59:33 np0005548731 neutron-haproxy-ovnmeta-8caa40db-27da-43ab-86ca-042284636e71[318271]: [WARNING]  (318275) : All workers exited. Exiting... (0)
Dec  6 02:59:33 np0005548731 systemd[1]: libpod-7566af901b86dc528b1189e2965aeb8827cd41ab2469975db7c4ae60437caf3e.scope: Deactivated successfully.
Dec  6 02:59:33 np0005548731 podman[318683]: 2025-12-06 07:59:33.912491001 +0000 UTC m=+0.046918520 container died 7566af901b86dc528b1189e2965aeb8827cd41ab2469975db7c4ae60437caf3e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8caa40db-27da-43ab-86ca-042284636e71, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Dec  6 02:59:33 np0005548731 ceph-mon[77458]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #136. Immutable memtables: 0.
Dec  6 02:59:33 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:59:33.916284) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec  6 02:59:33 np0005548731 ceph-mon[77458]: rocksdb: [db/flush_job.cc:856] [default] [JOB 85] Flushing memtable with next log file: 136
Dec  6 02:59:33 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765007973916328, "job": 85, "event": "flush_started", "num_memtables": 1, "num_entries": 1936, "num_deletes": 253, "total_data_size": 4498498, "memory_usage": 4577720, "flush_reason": "Manual Compaction"}
Dec  6 02:59:33 np0005548731 ceph-mon[77458]: rocksdb: [db/flush_job.cc:885] [default] [JOB 85] Level-0 flush table #137: started
Dec  6 02:59:33 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765007973929831, "cf_name": "default", "job": 85, "event": "table_file_creation", "file_number": 137, "file_size": 1835587, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 67606, "largest_seqno": 69537, "table_properties": {"data_size": 1829327, "index_size": 3205, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1989, "raw_key_size": 16842, "raw_average_key_size": 21, "raw_value_size": 1815433, "raw_average_value_size": 2318, "num_data_blocks": 142, "num_entries": 783, "num_filter_entries": 783, "num_deletions": 253, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765007814, "oldest_key_time": 1765007814, "file_creation_time": 1765007973, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bc01f9a1-fda8-4008-aee1-1fa5ff4371a1", "db_session_id": "CWBL8WMDM4D79BU86E9T", "orig_file_number": 137, "seqno_to_time_mapping": "N/A"}}
Dec  6 02:59:33 np0005548731 ceph-mon[77458]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 85] Flush lasted 13593 microseconds, and 5710 cpu microseconds.
Dec  6 02:59:33 np0005548731 ceph-mon[77458]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  6 02:59:33 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:59:33.929876) [db/flush_job.cc:967] [default] [JOB 85] Level-0 flush table #137: 1835587 bytes OK
Dec  6 02:59:33 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:59:33.929896) [db/memtable_list.cc:519] [default] Level-0 commit table #137 started
Dec  6 02:59:33 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:59:33.931469) [db/memtable_list.cc:722] [default] Level-0 commit table #137: memtable #1 done
Dec  6 02:59:33 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:59:33.931484) EVENT_LOG_v1 {"time_micros": 1765007973931479, "job": 85, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec  6 02:59:33 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:59:33.931501) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec  6 02:59:33 np0005548731 ceph-mon[77458]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 85] Try to delete WAL files size 4489649, prev total WAL file size 4489649, number of live WAL files 2.
Dec  6 02:59:33 np0005548731 ceph-mon[77458]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000133.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  6 02:59:33 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:59:33.933082) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740032323736' seq:72057594037927935, type:22 .. '6D6772737461740032353237' seq:0, type:0; will stop at (end)
Dec  6 02:59:33 np0005548731 ceph-mon[77458]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 86] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec  6 02:59:33 np0005548731 ceph-mon[77458]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 85 Base level 0, inputs: [137(1792KB)], [135(12MB)]
Dec  6 02:59:33 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765007973933135, "job": 86, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [137], "files_L6": [135], "score": -1, "input_data_size": 14493022, "oldest_snapshot_seqno": -1}
Dec  6 02:59:33 np0005548731 systemd[1]: var-lib-containers-storage-overlay-e94baafa72671022bac220d3201fff715a6a7595af593eac9050b1e280f0e60d-merged.mount: Deactivated successfully.
Dec  6 02:59:33 np0005548731 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-7566af901b86dc528b1189e2965aeb8827cd41ab2469975db7c4ae60437caf3e-userdata-shm.mount: Deactivated successfully.
Dec  6 02:59:33 np0005548731 podman[318683]: 2025-12-06 07:59:33.95984733 +0000 UTC m=+0.094274849 container cleanup 7566af901b86dc528b1189e2965aeb8827cd41ab2469975db7c4ae60437caf3e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8caa40db-27da-43ab-86ca-042284636e71, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, tcib_managed=true)
Dec  6 02:59:33 np0005548731 systemd[1]: libpod-conmon-7566af901b86dc528b1189e2965aeb8827cd41ab2469975db7c4ae60437caf3e.scope: Deactivated successfully.
Dec  6 02:59:33 np0005548731 ceph-mon[77458]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 86] Generated table #138: 9916 keys, 11730335 bytes, temperature: kUnknown
Dec  6 02:59:33 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765007973993354, "cf_name": "default", "job": 86, "event": "table_file_creation", "file_number": 138, "file_size": 11730335, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11667979, "index_size": 36451, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 24837, "raw_key_size": 261498, "raw_average_key_size": 26, "raw_value_size": 11495630, "raw_average_value_size": 1159, "num_data_blocks": 1384, "num_entries": 9916, "num_filter_entries": 9916, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765002564, "oldest_key_time": 0, "file_creation_time": 1765007973, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bc01f9a1-fda8-4008-aee1-1fa5ff4371a1", "db_session_id": "CWBL8WMDM4D79BU86E9T", "orig_file_number": 138, "seqno_to_time_mapping": "N/A"}}
Dec  6 02:59:33 np0005548731 ceph-mon[77458]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  6 02:59:33 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:59:33.993624) [db/compaction/compaction_job.cc:1663] [default] [JOB 86] Compacted 1@0 + 1@6 files to L6 => 11730335 bytes
Dec  6 02:59:33 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:59:33.994951) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 240.3 rd, 194.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.8, 12.1 +0.0 blob) out(11.2 +0.0 blob), read-write-amplify(14.3) write-amplify(6.4) OK, records in: 10369, records dropped: 453 output_compression: NoCompression
Dec  6 02:59:33 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:59:33.994968) EVENT_LOG_v1 {"time_micros": 1765007973994960, "job": 86, "event": "compaction_finished", "compaction_time_micros": 60303, "compaction_time_cpu_micros": 33328, "output_level": 6, "num_output_files": 1, "total_output_size": 11730335, "num_input_records": 10369, "num_output_records": 9916, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec  6 02:59:33 np0005548731 ceph-mon[77458]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000137.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  6 02:59:33 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765007973995417, "job": 86, "event": "table_file_deletion", "file_number": 137}
Dec  6 02:59:33 np0005548731 ceph-mon[77458]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000135.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  6 02:59:33 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765007973997520, "job": 86, "event": "table_file_deletion", "file_number": 135}
Dec  6 02:59:33 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:59:33.932631) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 02:59:33 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:59:33.997591) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 02:59:33 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:59:33.997595) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 02:59:33 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:59:33.997596) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 02:59:33 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:59:33.997598) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 02:59:33 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-07:59:33.997599) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 02:59:34 np0005548731 podman[318718]: 2025-12-06 07:59:34.021243501 +0000 UTC m=+0.041565877 container remove 7566af901b86dc528b1189e2965aeb8827cd41ab2469975db7c4ae60437caf3e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8caa40db-27da-43ab-86ca-042284636e71, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Dec  6 02:59:34 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:59:34.028 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[9461ed2d-3d86-488e-85d9-47f39b9465dd]: (4, ('Sat Dec  6 07:59:33 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-8caa40db-27da-43ab-86ca-042284636e71 (7566af901b86dc528b1189e2965aeb8827cd41ab2469975db7c4ae60437caf3e)\n7566af901b86dc528b1189e2965aeb8827cd41ab2469975db7c4ae60437caf3e\nSat Dec  6 07:59:33 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-8caa40db-27da-43ab-86ca-042284636e71 (7566af901b86dc528b1189e2965aeb8827cd41ab2469975db7c4ae60437caf3e)\n7566af901b86dc528b1189e2965aeb8827cd41ab2469975db7c4ae60437caf3e\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:59:34 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:59:34.030 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[a70a5025-f0c2-459c-89f2-a2916a988f57]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:59:34 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:59:34.031 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8caa40db-20, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:59:34 np0005548731 nova_compute[232433]: 2025-12-06 07:59:34.033 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:59:34 np0005548731 kernel: tap8caa40db-20: left promiscuous mode
Dec  6 02:59:34 np0005548731 nova_compute[232433]: 2025-12-06 07:59:34.054 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:59:34 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:59:34.057 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[875a980a-0438-4374-880b-19c0a398c32e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:59:34 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:59:34.072 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[16939ae4-3fd0-43ff-ab25-4ac2aa7c4a4a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:59:34 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:59:34.073 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[277d5a66-e324-4dcb-8533-8e21f121d2aa]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:59:34 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:59:34.087 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[cf7f6de6-1967-4992-b7ca-a7e7314419e9]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 820182, 'reachable_time': 27738, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 318742, 'error': None, 'target': 'ovnmeta-8caa40db-27da-43ab-86ca-042284636e71', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:59:34 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:59:34.089 144079 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-8caa40db-27da-43ab-86ca-042284636e71 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Dec  6 02:59:34 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:59:34.089 144079 DEBUG oslo.privsep.daemon [-] privsep: reply[1eb78899-9040-4b19-ac32-5191cea5853a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:59:34 np0005548731 systemd[1]: run-netns-ovnmeta\x2d8caa40db\x2d27da\x2d43ab\x2d86ca\x2d042284636e71.mount: Deactivated successfully.
Dec  6 02:59:34 np0005548731 nova_compute[232433]: 2025-12-06 07:59:34.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:59:34 np0005548731 nova_compute[232433]: 2025-12-06 07:59:34.105 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  6 02:59:34 np0005548731 nova_compute[232433]: 2025-12-06 07:59:34.105 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  6 02:59:34 np0005548731 nova_compute[232433]: 2025-12-06 07:59:34.162 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Acquiring lock "refresh_cache-27f553d7-b010-40ef-b0cb-42ff0c466354" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 02:59:34 np0005548731 nova_compute[232433]: 2025-12-06 07:59:34.163 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Acquired lock "refresh_cache-27f553d7-b010-40ef-b0cb-42ff0c466354" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 02:59:34 np0005548731 nova_compute[232433]: 2025-12-06 07:59:34.163 232437 DEBUG nova.network.neutron [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] [instance: 27f553d7-b010-40ef-b0cb-42ff0c466354] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Dec  6 02:59:34 np0005548731 nova_compute[232433]: 2025-12-06 07:59:34.163 232437 DEBUG nova.objects.instance [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lazy-loading 'info_cache' on Instance uuid 27f553d7-b010-40ef-b0cb-42ff0c466354 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:59:34 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:59:34 np0005548731 nova_compute[232433]: 2025-12-06 07:59:34.254 232437 INFO nova.virt.libvirt.driver [None req-71168235-5a06-41ef-b290-838f681e8bf5 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] [instance: 27f553d7-b010-40ef-b0cb-42ff0c466354] Instance shutdown successfully after 3 seconds.#033[00m
Dec  6 02:59:34 np0005548731 nova_compute[232433]: 2025-12-06 07:59:34.259 232437 INFO nova.virt.libvirt.driver [-] [instance: 27f553d7-b010-40ef-b0cb-42ff0c466354] Instance destroyed successfully.#033[00m
Dec  6 02:59:34 np0005548731 nova_compute[232433]: 2025-12-06 07:59:34.260 232437 DEBUG nova.objects.instance [None req-71168235-5a06-41ef-b290-838f681e8bf5 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] Lazy-loading 'numa_topology' on Instance uuid 27f553d7-b010-40ef-b0cb-42ff0c466354 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:59:34 np0005548731 nova_compute[232433]: 2025-12-06 07:59:34.455 232437 DEBUG nova.compute.manager [None req-71168235-5a06-41ef-b290-838f681e8bf5 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] [instance: 27f553d7-b010-40ef-b0cb-42ff0c466354] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:59:34 np0005548731 nova_compute[232433]: 2025-12-06 07:59:34.458 232437 DEBUG oslo_concurrency.lockutils [None req-71168235-5a06-41ef-b290-838f681e8bf5 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] Acquiring lock "refresh_cache-27f553d7-b010-40ef-b0cb-42ff0c466354" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 02:59:34 np0005548731 nova_compute[232433]: 2025-12-06 07:59:34.467 232437 DEBUG nova.compute.manager [req-361ce349-f7fc-4cb6-b378-c642dbe2e1d8 req-9c9c0e4b-1fcb-4dfc-897e-b55344c9b0c1 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 27f553d7-b010-40ef-b0cb-42ff0c466354] Received event network-vif-unplugged-2b54b935-6ec8-4a68-a404-6be608d5b405 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:59:34 np0005548731 nova_compute[232433]: 2025-12-06 07:59:34.467 232437 DEBUG oslo_concurrency.lockutils [req-361ce349-f7fc-4cb6-b378-c642dbe2e1d8 req-9c9c0e4b-1fcb-4dfc-897e-b55344c9b0c1 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "27f553d7-b010-40ef-b0cb-42ff0c466354-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:59:34 np0005548731 nova_compute[232433]: 2025-12-06 07:59:34.468 232437 DEBUG oslo_concurrency.lockutils [req-361ce349-f7fc-4cb6-b378-c642dbe2e1d8 req-9c9c0e4b-1fcb-4dfc-897e-b55344c9b0c1 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "27f553d7-b010-40ef-b0cb-42ff0c466354-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:59:34 np0005548731 nova_compute[232433]: 2025-12-06 07:59:34.468 232437 DEBUG oslo_concurrency.lockutils [req-361ce349-f7fc-4cb6-b378-c642dbe2e1d8 req-9c9c0e4b-1fcb-4dfc-897e-b55344c9b0c1 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "27f553d7-b010-40ef-b0cb-42ff0c466354-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:59:34 np0005548731 nova_compute[232433]: 2025-12-06 07:59:34.468 232437 DEBUG nova.compute.manager [req-361ce349-f7fc-4cb6-b378-c642dbe2e1d8 req-9c9c0e4b-1fcb-4dfc-897e-b55344c9b0c1 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 27f553d7-b010-40ef-b0cb-42ff0c466354] No waiting events found dispatching network-vif-unplugged-2b54b935-6ec8-4a68-a404-6be608d5b405 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 02:59:34 np0005548731 nova_compute[232433]: 2025-12-06 07:59:34.468 232437 WARNING nova.compute.manager [req-361ce349-f7fc-4cb6-b378-c642dbe2e1d8 req-9c9c0e4b-1fcb-4dfc-897e-b55344c9b0c1 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 27f553d7-b010-40ef-b0cb-42ff0c466354] Received unexpected event network-vif-unplugged-2b54b935-6ec8-4a68-a404-6be608d5b405 for instance with vm_state active and task_state shelving.#033[00m
Dec  6 02:59:34 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:59:34 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:59:34 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:59:34.682 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:59:35 np0005548731 nova_compute[232433]: 2025-12-06 07:59:35.160 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:59:35 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:59:35 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:59:35 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:59:35.404 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:59:35 np0005548731 nova_compute[232433]: 2025-12-06 07:59:35.747 232437 DEBUG nova.network.neutron [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] [instance: 27f553d7-b010-40ef-b0cb-42ff0c466354] Updating instance_info_cache with network_info: [{"id": "2b54b935-6ec8-4a68-a404-6be608d5b405", "address": "fa:16:3e:15:9d:85", "network": {"id": "8caa40db-27da-43ab-86ca-042284636e71", "bridge": "br-int", "label": "tempest-TestShelveInstance-1111687365-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.240", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c3c0564f8e9f4af9ae5b597a275c989f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2b54b935-6e", "ovs_interfaceid": "2b54b935-6ec8-4a68-a404-6be608d5b405", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 02:59:35 np0005548731 nova_compute[232433]: 2025-12-06 07:59:35.778 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Releasing lock "refresh_cache-27f553d7-b010-40ef-b0cb-42ff0c466354" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 02:59:35 np0005548731 nova_compute[232433]: 2025-12-06 07:59:35.779 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] [instance: 27f553d7-b010-40ef-b0cb-42ff0c466354] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Dec  6 02:59:35 np0005548731 nova_compute[232433]: 2025-12-06 07:59:35.779 232437 DEBUG oslo_concurrency.lockutils [None req-71168235-5a06-41ef-b290-838f681e8bf5 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] Acquired lock "refresh_cache-27f553d7-b010-40ef-b0cb-42ff0c466354" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 02:59:35 np0005548731 nova_compute[232433]: 2025-12-06 07:59:35.779 232437 DEBUG nova.network.neutron [None req-71168235-5a06-41ef-b290-838f681e8bf5 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] [instance: 27f553d7-b010-40ef-b0cb-42ff0c466354] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec  6 02:59:35 np0005548731 nova_compute[232433]: 2025-12-06 07:59:35.780 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:59:36 np0005548731 nova_compute[232433]: 2025-12-06 07:59:36.566 232437 DEBUG nova.compute.manager [req-7a15cad6-5eb4-471d-b48c-e0f75a212c3c req-56b5e78d-0d84-49de-aeb8-3b95f4fffd60 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 27f553d7-b010-40ef-b0cb-42ff0c466354] Received event network-vif-plugged-2b54b935-6ec8-4a68-a404-6be608d5b405 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:59:36 np0005548731 nova_compute[232433]: 2025-12-06 07:59:36.567 232437 DEBUG oslo_concurrency.lockutils [req-7a15cad6-5eb4-471d-b48c-e0f75a212c3c req-56b5e78d-0d84-49de-aeb8-3b95f4fffd60 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "27f553d7-b010-40ef-b0cb-42ff0c466354-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:59:36 np0005548731 nova_compute[232433]: 2025-12-06 07:59:36.567 232437 DEBUG oslo_concurrency.lockutils [req-7a15cad6-5eb4-471d-b48c-e0f75a212c3c req-56b5e78d-0d84-49de-aeb8-3b95f4fffd60 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "27f553d7-b010-40ef-b0cb-42ff0c466354-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:59:36 np0005548731 nova_compute[232433]: 2025-12-06 07:59:36.568 232437 DEBUG oslo_concurrency.lockutils [req-7a15cad6-5eb4-471d-b48c-e0f75a212c3c req-56b5e78d-0d84-49de-aeb8-3b95f4fffd60 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "27f553d7-b010-40ef-b0cb-42ff0c466354-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:59:36 np0005548731 nova_compute[232433]: 2025-12-06 07:59:36.568 232437 DEBUG nova.compute.manager [req-7a15cad6-5eb4-471d-b48c-e0f75a212c3c req-56b5e78d-0d84-49de-aeb8-3b95f4fffd60 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 27f553d7-b010-40ef-b0cb-42ff0c466354] No waiting events found dispatching network-vif-plugged-2b54b935-6ec8-4a68-a404-6be608d5b405 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 02:59:36 np0005548731 nova_compute[232433]: 2025-12-06 07:59:36.568 232437 WARNING nova.compute.manager [req-7a15cad6-5eb4-471d-b48c-e0f75a212c3c req-56b5e78d-0d84-49de-aeb8-3b95f4fffd60 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 27f553d7-b010-40ef-b0cb-42ff0c466354] Received unexpected event network-vif-plugged-2b54b935-6ec8-4a68-a404-6be608d5b405 for instance with vm_state active and task_state shelving.#033[00m
Dec  6 02:59:36 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:59:36 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:59:36 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:59:36.684 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:59:36 np0005548731 nova_compute[232433]: 2025-12-06 07:59:36.775 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:59:37 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:59:37 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:59:37 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:59:37.407 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:59:37 np0005548731 nova_compute[232433]: 2025-12-06 07:59:37.428 232437 DEBUG nova.network.neutron [None req-71168235-5a06-41ef-b290-838f681e8bf5 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] [instance: 27f553d7-b010-40ef-b0cb-42ff0c466354] Updating instance_info_cache with network_info: [{"id": "2b54b935-6ec8-4a68-a404-6be608d5b405", "address": "fa:16:3e:15:9d:85", "network": {"id": "8caa40db-27da-43ab-86ca-042284636e71", "bridge": "br-int", "label": "tempest-TestShelveInstance-1111687365-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.240", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c3c0564f8e9f4af9ae5b597a275c989f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2b54b935-6e", "ovs_interfaceid": "2b54b935-6ec8-4a68-a404-6be608d5b405", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 02:59:37 np0005548731 nova_compute[232433]: 2025-12-06 07:59:37.651 232437 DEBUG oslo_concurrency.lockutils [None req-71168235-5a06-41ef-b290-838f681e8bf5 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] Releasing lock "refresh_cache-27f553d7-b010-40ef-b0cb-42ff0c466354" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 02:59:37 np0005548731 podman[318745]: 2025-12-06 07:59:37.88406293 +0000 UTC m=+0.052133776 container health_status 26c2ce9bdb83c6db5ccad7f5b2a7cb4e9354ab0235ad2f942ea42c8feda2142b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, 
container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec  6 02:59:37 np0005548731 podman[318747]: 2025-12-06 07:59:37.897463588 +0000 UTC m=+0.059832485 container health_status ae4fd08cdec31d6ae2a6b91ffff7deb6ca5a8375cb52ee4b685cc6f25796824e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.vendor=CentOS, container_name=multipathd, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  6 02:59:37 np0005548731 podman[318746]: 2025-12-06 07:59:37.92125796 +0000 UTC m=+0.088168318 container health_status a118c3becf56cfbcae208065771ebf10a65162bb7b96389971293e534823fb78 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Dec  6 02:59:38 np0005548731 nova_compute[232433]: 2025-12-06 07:59:38.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:59:38 np0005548731 nova_compute[232433]: 2025-12-06 07:59:38.345 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:59:38 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:59:38 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 02:59:38 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:59:38.687 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 02:59:39 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:59:39 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:59:39 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:59:39 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:59:39.409 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:59:40 np0005548731 nova_compute[232433]: 2025-12-06 07:59:40.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:59:40 np0005548731 nova_compute[232433]: 2025-12-06 07:59:40.162 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:59:40 np0005548731 nova_compute[232433]: 2025-12-06 07:59:40.458 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:59:40 np0005548731 nova_compute[232433]: 2025-12-06 07:59:40.458 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:59:40 np0005548731 nova_compute[232433]: 2025-12-06 07:59:40.459 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:59:40 np0005548731 nova_compute[232433]: 2025-12-06 07:59:40.459 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  6 02:59:40 np0005548731 nova_compute[232433]: 2025-12-06 07:59:40.459 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:59:40 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:59:40 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:59:40 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:59:40.690 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:59:40 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 02:59:40 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2980456092' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 02:59:40 np0005548731 nova_compute[232433]: 2025-12-06 07:59:40.953 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.494s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:59:41 np0005548731 nova_compute[232433]: 2025-12-06 07:59:41.242 232437 DEBUG nova.virt.libvirt.driver [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] skipping disk for instance-000000b4 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  6 02:59:41 np0005548731 nova_compute[232433]: 2025-12-06 07:59:41.243 232437 DEBUG nova.virt.libvirt.driver [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] skipping disk for instance-000000b4 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  6 02:59:41 np0005548731 nova_compute[232433]: 2025-12-06 07:59:41.404 232437 WARNING nova.virt.libvirt.driver [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  6 02:59:41 np0005548731 nova_compute[232433]: 2025-12-06 07:59:41.405 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4205MB free_disk=20.897357940673828GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, 
"label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  6 02:59:41 np0005548731 nova_compute[232433]: 2025-12-06 07:59:41.406 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:59:41 np0005548731 nova_compute[232433]: 2025-12-06 07:59:41.406 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:59:41 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:59:41 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:59:41 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:59:41.412 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:59:41 np0005548731 nova_compute[232433]: 2025-12-06 07:59:41.745 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Instance 27f553d7-b010-40ef-b0cb-42ff0c466354 actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  6 02:59:41 np0005548731 nova_compute[232433]: 2025-12-06 07:59:41.745 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  6 02:59:41 np0005548731 nova_compute[232433]: 2025-12-06 07:59:41.745 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  6 02:59:41 np0005548731 nova_compute[232433]: 2025-12-06 07:59:41.795 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:59:42 np0005548731 nova_compute[232433]: 2025-12-06 07:59:42.125 232437 INFO nova.virt.libvirt.driver [-] [instance: 27f553d7-b010-40ef-b0cb-42ff0c466354] Instance destroyed successfully.#033[00m
Dec  6 02:59:42 np0005548731 nova_compute[232433]: 2025-12-06 07:59:42.126 232437 DEBUG nova.objects.instance [None req-71168235-5a06-41ef-b290-838f681e8bf5 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] Lazy-loading 'resources' on Instance uuid 27f553d7-b010-40ef-b0cb-42ff0c466354 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:59:42 np0005548731 nova_compute[232433]: 2025-12-06 07:59:42.164 232437 DEBUG nova.virt.libvirt.vif [None req-71168235-5a06-41ef-b290-838f681e8bf5 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T07:58:50Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestShelveInstance-server-1733328147',display_name='tempest-TestShelveInstance-server-1733328147',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-testshelveinstance-server-1733328147',id=180,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBM4sXJH069UiaxcddlkMS95GRWWK+3yMzXihiGRU/mHbPKO5jPcAw1Vlvstr18SgoFXBRYxyHafuDIwfD1IsMDXWkERYPnz3TEHdLWZin3OQEFfVSfeX8bcZN+wdDBDnqQ==',key_name='tempest-TestShelveInstance-1084557310',keypairs=<?>,launch_index=0,launched_at=2025-12-06T07:59:03Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='c3c0564f8e9f4af9ae5b597a275c989f',ramdisk_id='',reservation_id='r-hmpcrqao',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-TestShelveInstance-1863009913',owner_user_name='tempest-TestShelveInstance-1863009913-project-member'},tags=<?>,task_state='shelving',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T07:59:03Z,user_data=None,user_id='98e657096e3f4b528cd461a3dd6a750e',uuid=27f553d7-b010-40ef-b0cb-42ff0c466354,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "2b54b935-6ec8-4a68-a404-6be608d5b405", "address": "fa:16:3e:15:9d:85", "network": {"id": "8caa40db-27da-43ab-86ca-042284636e71", "bridge": "br-int", "label": "tempest-TestShelveInstance-1111687365-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": 
"192.168.122.240", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c3c0564f8e9f4af9ae5b597a275c989f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2b54b935-6e", "ovs_interfaceid": "2b54b935-6ec8-4a68-a404-6be608d5b405", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec  6 02:59:42 np0005548731 nova_compute[232433]: 2025-12-06 07:59:42.165 232437 DEBUG nova.network.os_vif_util [None req-71168235-5a06-41ef-b290-838f681e8bf5 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] Converting VIF {"id": "2b54b935-6ec8-4a68-a404-6be608d5b405", "address": "fa:16:3e:15:9d:85", "network": {"id": "8caa40db-27da-43ab-86ca-042284636e71", "bridge": "br-int", "label": "tempest-TestShelveInstance-1111687365-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.240", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c3c0564f8e9f4af9ae5b597a275c989f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2b54b935-6e", "ovs_interfaceid": "2b54b935-6ec8-4a68-a404-6be608d5b405", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 02:59:42 np0005548731 nova_compute[232433]: 2025-12-06 07:59:42.166 232437 DEBUG nova.network.os_vif_util [None req-71168235-5a06-41ef-b290-838f681e8bf5 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:15:9d:85,bridge_name='br-int',has_traffic_filtering=True,id=2b54b935-6ec8-4a68-a404-6be608d5b405,network=Network(8caa40db-27da-43ab-86ca-042284636e71),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2b54b935-6e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 02:59:42 np0005548731 nova_compute[232433]: 2025-12-06 07:59:42.167 232437 DEBUG os_vif [None req-71168235-5a06-41ef-b290-838f681e8bf5 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:15:9d:85,bridge_name='br-int',has_traffic_filtering=True,id=2b54b935-6ec8-4a68-a404-6be608d5b405,network=Network(8caa40db-27da-43ab-86ca-042284636e71),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2b54b935-6e') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec  6 02:59:42 np0005548731 nova_compute[232433]: 2025-12-06 07:59:42.172 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:59:42 np0005548731 nova_compute[232433]: 2025-12-06 07:59:42.173 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2b54b935-6e, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 02:59:42 np0005548731 nova_compute[232433]: 2025-12-06 07:59:42.175 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:59:42 np0005548731 nova_compute[232433]: 2025-12-06 07:59:42.178 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  6 02:59:42 np0005548731 nova_compute[232433]: 2025-12-06 07:59:42.182 232437 INFO os_vif [None req-71168235-5a06-41ef-b290-838f681e8bf5 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:15:9d:85,bridge_name='br-int',has_traffic_filtering=True,id=2b54b935-6ec8-4a68-a404-6be608d5b405,network=Network(8caa40db-27da-43ab-86ca-042284636e71),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2b54b935-6e')#033[00m
Dec  6 02:59:42 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 02:59:42 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3364756026' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 02:59:42 np0005548731 nova_compute[232433]: 2025-12-06 07:59:42.216 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.421s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:59:42 np0005548731 nova_compute[232433]: 2025-12-06 07:59:42.223 232437 DEBUG nova.compute.provider_tree [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Inventory has not changed in ProviderTree for provider: 6d00757a-082f-486d-ae84-869a2ba2e6e7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  6 02:59:42 np0005548731 nova_compute[232433]: 2025-12-06 07:59:42.255 232437 DEBUG nova.scheduler.client.report [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Inventory has not changed for provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  6 02:59:42 np0005548731 nova_compute[232433]: 2025-12-06 07:59:42.304 232437 DEBUG nova.compute.manager [req-97835069-d26b-4ddd-9212-991fd034da03 req-addf71c4-5508-40f3-a81a-77870480e13f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 27f553d7-b010-40ef-b0cb-42ff0c466354] Received event network-changed-2b54b935-6ec8-4a68-a404-6be608d5b405 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 02:59:42 np0005548731 nova_compute[232433]: 2025-12-06 07:59:42.305 232437 DEBUG nova.compute.manager [req-97835069-d26b-4ddd-9212-991fd034da03 req-addf71c4-5508-40f3-a81a-77870480e13f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 27f553d7-b010-40ef-b0cb-42ff0c466354] Refreshing instance network info cache due to event network-changed-2b54b935-6ec8-4a68-a404-6be608d5b405. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  6 02:59:42 np0005548731 nova_compute[232433]: 2025-12-06 07:59:42.305 232437 DEBUG oslo_concurrency.lockutils [req-97835069-d26b-4ddd-9212-991fd034da03 req-addf71c4-5508-40f3-a81a-77870480e13f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-27f553d7-b010-40ef-b0cb-42ff0c466354" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 02:59:42 np0005548731 nova_compute[232433]: 2025-12-06 07:59:42.306 232437 DEBUG oslo_concurrency.lockutils [req-97835069-d26b-4ddd-9212-991fd034da03 req-addf71c4-5508-40f3-a81a-77870480e13f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-27f553d7-b010-40ef-b0cb-42ff0c466354" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 02:59:42 np0005548731 nova_compute[232433]: 2025-12-06 07:59:42.306 232437 DEBUG nova.network.neutron [req-97835069-d26b-4ddd-9212-991fd034da03 req-addf71c4-5508-40f3-a81a-77870480e13f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 27f553d7-b010-40ef-b0cb-42ff0c466354] Refreshing network info cache for port 2b54b935-6ec8-4a68-a404-6be608d5b405 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  6 02:59:42 np0005548731 nova_compute[232433]: 2025-12-06 07:59:42.308 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  6 02:59:42 np0005548731 nova_compute[232433]: 2025-12-06 07:59:42.308 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.902s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:59:42 np0005548731 nova_compute[232433]: 2025-12-06 07:59:42.400 232437 INFO nova.virt.libvirt.driver [None req-71168235-5a06-41ef-b290-838f681e8bf5 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] [instance: 27f553d7-b010-40ef-b0cb-42ff0c466354] Deleting instance files /var/lib/nova/instances/27f553d7-b010-40ef-b0cb-42ff0c466354_del#033[00m
Dec  6 02:59:42 np0005548731 nova_compute[232433]: 2025-12-06 07:59:42.401 232437 INFO nova.virt.libvirt.driver [None req-71168235-5a06-41ef-b290-838f681e8bf5 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] [instance: 27f553d7-b010-40ef-b0cb-42ff0c466354] Deletion of /var/lib/nova/instances/27f553d7-b010-40ef-b0cb-42ff0c466354_del complete#033[00m
Dec  6 02:59:42 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:59:42 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:59:42 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:59:42.693 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:59:43 np0005548731 nova_compute[232433]: 2025-12-06 07:59:43.119 232437 INFO nova.scheduler.client.report [None req-71168235-5a06-41ef-b290-838f681e8bf5 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] Deleted allocations for instance 27f553d7-b010-40ef-b0cb-42ff0c466354#033[00m
Dec  6 02:59:43 np0005548731 nova_compute[232433]: 2025-12-06 07:59:43.184 232437 DEBUG oslo_concurrency.lockutils [None req-71168235-5a06-41ef-b290-838f681e8bf5 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:59:43 np0005548731 nova_compute[232433]: 2025-12-06 07:59:43.185 232437 DEBUG oslo_concurrency.lockutils [None req-71168235-5a06-41ef-b290-838f681e8bf5 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:59:43 np0005548731 nova_compute[232433]: 2025-12-06 07:59:43.211 232437 DEBUG oslo_concurrency.processutils [None req-71168235-5a06-41ef-b290-838f681e8bf5 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:59:43 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:59:43 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 02:59:43 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:59:43.415 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 02:59:43 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 02:59:43 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1719363118' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 02:59:43 np0005548731 nova_compute[232433]: 2025-12-06 07:59:43.666 232437 DEBUG oslo_concurrency.processutils [None req-71168235-5a06-41ef-b290-838f681e8bf5 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.455s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:59:43 np0005548731 nova_compute[232433]: 2025-12-06 07:59:43.674 232437 DEBUG nova.compute.provider_tree [None req-71168235-5a06-41ef-b290-838f681e8bf5 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] Inventory has not changed in ProviderTree for provider: 6d00757a-082f-486d-ae84-869a2ba2e6e7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  6 02:59:43 np0005548731 nova_compute[232433]: 2025-12-06 07:59:43.719 232437 DEBUG nova.scheduler.client.report [None req-71168235-5a06-41ef-b290-838f681e8bf5 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] Inventory has not changed for provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  6 02:59:43 np0005548731 nova_compute[232433]: 2025-12-06 07:59:43.767 232437 DEBUG oslo_concurrency.lockutils [None req-71168235-5a06-41ef-b290-838f681e8bf5 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.583s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:59:43 np0005548731 nova_compute[232433]: 2025-12-06 07:59:43.917 232437 DEBUG oslo_concurrency.lockutils [None req-71168235-5a06-41ef-b290-838f681e8bf5 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] Lock "27f553d7-b010-40ef-b0cb-42ff0c466354" "released" by "nova.compute.manager.ComputeManager.shelve_offload_instance.<locals>.do_shelve_offload_instance" :: held 12.702s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 02:59:44 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:59:44 np0005548731 nova_compute[232433]: 2025-12-06 07:59:44.308 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:59:44 np0005548731 nova_compute[232433]: 2025-12-06 07:59:44.308 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:59:44 np0005548731 nova_compute[232433]: 2025-12-06 07:59:44.309 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:59:44 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:59:44 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 02:59:44 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:59:44.696 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 02:59:45 np0005548731 nova_compute[232433]: 2025-12-06 07:59:45.189 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:59:45 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:59:45 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:59:45 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:59:45.418 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:59:46 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:59:46 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:59:46 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:59:46.699 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:59:47 np0005548731 nova_compute[232433]: 2025-12-06 07:59:47.021 232437 DEBUG nova.network.neutron [req-97835069-d26b-4ddd-9212-991fd034da03 req-addf71c4-5508-40f3-a81a-77870480e13f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 27f553d7-b010-40ef-b0cb-42ff0c466354] Updated VIF entry in instance network info cache for port 2b54b935-6ec8-4a68-a404-6be608d5b405. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  6 02:59:47 np0005548731 nova_compute[232433]: 2025-12-06 07:59:47.022 232437 DEBUG nova.network.neutron [req-97835069-d26b-4ddd-9212-991fd034da03 req-addf71c4-5508-40f3-a81a-77870480e13f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 27f553d7-b010-40ef-b0cb-42ff0c466354] Updating instance_info_cache with network_info: [{"id": "2b54b935-6ec8-4a68-a404-6be608d5b405", "address": "fa:16:3e:15:9d:85", "network": {"id": "8caa40db-27da-43ab-86ca-042284636e71", "bridge": null, "label": "tempest-TestShelveInstance-1111687365-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.240", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c3c0564f8e9f4af9ae5b597a275c989f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "unbound", "details": {}, "devname": "tap2b54b935-6e", "ovs_interfaceid": null, "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 02:59:47 np0005548731 nova_compute[232433]: 2025-12-06 07:59:47.093 232437 DEBUG oslo_concurrency.lockutils [req-97835069-d26b-4ddd-9212-991fd034da03 req-addf71c4-5508-40f3-a81a-77870480e13f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-27f553d7-b010-40ef-b0cb-42ff0c466354" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 02:59:47 np0005548731 nova_compute[232433]: 2025-12-06 07:59:47.177 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:59:47 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:59:47 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:59:47 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:59:47.421 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:59:48 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:59:48 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:59:48 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:59:48.703 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:59:48 np0005548731 nova_compute[232433]: 2025-12-06 07:59:48.979 232437 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765007973.9782896, 27f553d7-b010-40ef-b0cb-42ff0c466354 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 02:59:48 np0005548731 nova_compute[232433]: 2025-12-06 07:59:48.980 232437 INFO nova.compute.manager [-] [instance: 27f553d7-b010-40ef-b0cb-42ff0c466354] VM Stopped (Lifecycle Event)#033[00m
Dec  6 02:59:49 np0005548731 nova_compute[232433]: 2025-12-06 07:59:49.100 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 02:59:49 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:59:49 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:59:49 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:59:49 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:59:49.425 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:59:50 np0005548731 nova_compute[232433]: 2025-12-06 07:59:50.191 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:59:50 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:59:50 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:59:50 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:59:50.706 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:59:51 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:59:51 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:59:51 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:59:51.427 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:59:51 np0005548731 nova_compute[232433]: 2025-12-06 07:59:51.628 232437 DEBUG nova.compute.manager [None req-7a98432e-743d-4954-9bb7-307702882da6 - - - - - -] [instance: 27f553d7-b010-40ef-b0cb-42ff0c466354] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 02:59:52 np0005548731 nova_compute[232433]: 2025-12-06 07:59:52.188 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:59:52 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:59:52 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:59:52 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:59:52.709 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:59:53 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:59:53 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:59:53 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:59:53.429 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:59:54 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:59:54 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:59:54 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:59:54 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:59:54.712 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:59:54 np0005548731 nova_compute[232433]: 2025-12-06 07:59:54.914 232437 DEBUG oslo_concurrency.lockutils [None req-46aaf0b4-4d43-4a15-8f59-13d36da9de86 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] Acquiring lock "27f553d7-b010-40ef-b0cb-42ff0c466354" by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:59:54 np0005548731 nova_compute[232433]: 2025-12-06 07:59:54.915 232437 DEBUG oslo_concurrency.lockutils [None req-46aaf0b4-4d43-4a15-8f59-13d36da9de86 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] Lock "27f553d7-b010-40ef-b0cb-42ff0c466354" acquired by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:59:54 np0005548731 nova_compute[232433]: 2025-12-06 07:59:54.915 232437 INFO nova.compute.manager [None req-46aaf0b4-4d43-4a15-8f59-13d36da9de86 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] [instance: 27f553d7-b010-40ef-b0cb-42ff0c466354] Unshelving#033[00m
Dec  6 02:59:55 np0005548731 nova_compute[232433]: 2025-12-06 07:59:55.015 232437 INFO nova.virt.block_device [None req-46aaf0b4-4d43-4a15-8f59-13d36da9de86 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] [instance: 27f553d7-b010-40ef-b0cb-42ff0c466354] Booting with volume f1cb8c3c-521b-48db-b5f7-453fef5dd2fe at /dev/vda#033[00m
Dec  6 02:59:55 np0005548731 nova_compute[232433]: 2025-12-06 07:59:55.193 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:59:55 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:59:55 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:59:55 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:59:55.431 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:59:55 np0005548731 nova_compute[232433]: 2025-12-06 07:59:55.808 232437 DEBUG os_brick.utils [None req-46aaf0b4-4d43-4a15-8f59-13d36da9de86 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.102', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-2.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Dec  6 02:59:55 np0005548731 nova_compute[232433]: 2025-12-06 07:59:55.810 237736 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:59:55 np0005548731 nova_compute[232433]: 2025-12-06 07:59:55.822 237736 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.012s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:59:55 np0005548731 nova_compute[232433]: 2025-12-06 07:59:55.822 237736 DEBUG oslo.privsep.daemon [-] privsep: reply[0c956c8e-651f-41ae-b7dc-dea8fd5bcf83]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:59:55 np0005548731 nova_compute[232433]: 2025-12-06 07:59:55.824 237736 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:59:55 np0005548731 nova_compute[232433]: 2025-12-06 07:59:55.833 237736 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:59:55 np0005548731 nova_compute[232433]: 2025-12-06 07:59:55.833 237736 DEBUG oslo.privsep.daemon [-] privsep: reply[fe5b1bb7-e8e4-45c5-936f-ae9dab1b9c39]: (4, ('InitiatorName=iqn.1994-05.com.redhat:63778d5959f0', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:59:55 np0005548731 nova_compute[232433]: 2025-12-06 07:59:55.835 237736 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:59:55 np0005548731 nova_compute[232433]: 2025-12-06 07:59:55.843 237736 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:59:55 np0005548731 nova_compute[232433]: 2025-12-06 07:59:55.844 237736 DEBUG oslo.privsep.daemon [-] privsep: reply[edaf03d7-a424-4b35-9e52-ca17d5313d04]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:59:55 np0005548731 nova_compute[232433]: 2025-12-06 07:59:55.845 237736 DEBUG oslo.privsep.daemon [-] privsep: reply[619deb4b-c1f6-406b-aa8c-cf1202f39d61]: (4, 'ec2492e2-e0d1-43d3-b74f-744a8272a0e6') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 02:59:55 np0005548731 nova_compute[232433]: 2025-12-06 07:59:55.846 232437 DEBUG oslo_concurrency.processutils [None req-46aaf0b4-4d43-4a15-8f59-13d36da9de86 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:59:55 np0005548731 nova_compute[232433]: 2025-12-06 07:59:55.880 232437 DEBUG oslo_concurrency.processutils [None req-46aaf0b4-4d43-4a15-8f59-13d36da9de86 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] CMD "nvme version" returned: 0 in 0.035s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:59:55 np0005548731 nova_compute[232433]: 2025-12-06 07:59:55.883 232437 DEBUG os_brick.initiator.connectors.lightos [None req-46aaf0b4-4d43-4a15-8f59-13d36da9de86 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Dec  6 02:59:55 np0005548731 nova_compute[232433]: 2025-12-06 07:59:55.883 232437 DEBUG os_brick.initiator.connectors.lightos [None req-46aaf0b4-4d43-4a15-8f59-13d36da9de86 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Dec  6 02:59:55 np0005548731 nova_compute[232433]: 2025-12-06 07:59:55.883 232437 DEBUG os_brick.initiator.connectors.lightos [None req-46aaf0b4-4d43-4a15-8f59-13d36da9de86 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:bf3e0a14-a5f8-4123-aa26-e7cad37b879a dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Dec  6 02:59:55 np0005548731 nova_compute[232433]: 2025-12-06 07:59:55.883 232437 DEBUG os_brick.utils [None req-46aaf0b4-4d43-4a15-8f59-13d36da9de86 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] <== get_connector_properties: return (75ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.102', 'host': 'compute-2.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:63778d5959f0', 'do_local_attach': False, 'nvme_hostid': 'bf3e0a14-a5f8-4123-aa26-e7cad37b879a', 'system uuid': 'ec2492e2-e0d1-43d3-b74f-744a8272a0e6', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:bf3e0a14-a5f8-4123-aa26-e7cad37b879a', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Dec  6 02:59:55 np0005548731 nova_compute[232433]: 2025-12-06 07:59:55.884 232437 DEBUG nova.virt.block_device [None req-46aaf0b4-4d43-4a15-8f59-13d36da9de86 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] [instance: 27f553d7-b010-40ef-b0cb-42ff0c466354] Updating existing volume attachment record: e469efc4-a2a5-40fa-b1ad-62c2931170d2 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Dec  6 02:59:56 np0005548731 nova_compute[232433]: 2025-12-06 07:59:56.059 232437 DEBUG nova.compute.manager [None req-c0f0095b-d508-4403-a689-5b9203472ba0 1bdbfd9a9c034d4baf0368c23697a002 0280d2f586294ccf97547f8bc41590f8 - - default default] [instance: d1a8ef9c-0ce7-4841-9523-7f11435a1884] Stashing vm_state: active _prep_resize /usr/lib/python3.9/site-packages/nova/compute/manager.py:5560#033[00m
Dec  6 02:59:56 np0005548731 nova_compute[232433]: 2025-12-06 07:59:56.205 232437 DEBUG oslo_concurrency.lockutils [None req-c0f0095b-d508-4403-a689-5b9203472ba0 1bdbfd9a9c034d4baf0368c23697a002 0280d2f586294ccf97547f8bc41590f8 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.resize_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 02:59:56 np0005548731 nova_compute[232433]: 2025-12-06 07:59:56.206 232437 DEBUG oslo_concurrency.lockutils [None req-c0f0095b-d508-4403-a689-5b9203472ba0 1bdbfd9a9c034d4baf0368c23697a002 0280d2f586294ccf97547f8bc41590f8 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.resize_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 02:59:56 np0005548731 nova_compute[232433]: 2025-12-06 07:59:56.486 232437 DEBUG nova.objects.instance [None req-c0f0095b-d508-4403-a689-5b9203472ba0 1bdbfd9a9c034d4baf0368c23697a002 0280d2f586294ccf97547f8bc41590f8 - - default default] Lazy-loading 'pci_requests' on Instance uuid d1a8ef9c-0ce7-4841-9523-7f11435a1884 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:59:56 np0005548731 nova_compute[232433]: 2025-12-06 07:59:56.507 232437 DEBUG nova.virt.hardware [None req-c0f0095b-d508-4403-a689-5b9203472ba0 1bdbfd9a9c034d4baf0368c23697a002 0280d2f586294ccf97547f8bc41590f8 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Dec  6 02:59:56 np0005548731 nova_compute[232433]: 2025-12-06 07:59:56.507 232437 INFO nova.compute.claims [None req-c0f0095b-d508-4403-a689-5b9203472ba0 1bdbfd9a9c034d4baf0368c23697a002 0280d2f586294ccf97547f8bc41590f8 - - default default] [instance: d1a8ef9c-0ce7-4841-9523-7f11435a1884] Claim successful on node compute-2.ctlplane.example.com#033[00m
Dec  6 02:59:56 np0005548731 nova_compute[232433]: 2025-12-06 07:59:56.508 232437 DEBUG nova.objects.instance [None req-c0f0095b-d508-4403-a689-5b9203472ba0 1bdbfd9a9c034d4baf0368c23697a002 0280d2f586294ccf97547f8bc41590f8 - - default default] Lazy-loading 'resources' on Instance uuid d1a8ef9c-0ce7-4841-9523-7f11435a1884 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:59:56 np0005548731 nova_compute[232433]: 2025-12-06 07:59:56.586 232437 DEBUG nova.objects.instance [None req-c0f0095b-d508-4403-a689-5b9203472ba0 1bdbfd9a9c034d4baf0368c23697a002 0280d2f586294ccf97547f8bc41590f8 - - default default] Lazy-loading 'numa_topology' on Instance uuid d1a8ef9c-0ce7-4841-9523-7f11435a1884 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:59:56 np0005548731 nova_compute[232433]: 2025-12-06 07:59:56.652 232437 DEBUG nova.objects.instance [None req-c0f0095b-d508-4403-a689-5b9203472ba0 1bdbfd9a9c034d4baf0368c23697a002 0280d2f586294ccf97547f8bc41590f8 - - default default] Lazy-loading 'pci_devices' on Instance uuid d1a8ef9c-0ce7-4841-9523-7f11435a1884 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 02:59:56 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:59:56 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:59:56 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:59:56.715 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:59:57 np0005548731 nova_compute[232433]: 2025-12-06 07:59:57.190 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:59:57 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:59:57 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 02:59:57 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:59:57.434 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 02:59:58 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:59:58 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:59:58 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:07:59:58.718 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:59:59 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 02:59:59 np0005548731 nova_compute[232433]: 2025-12-06 07:59:59.316 232437 INFO nova.compute.resource_tracker [None req-c0f0095b-d508-4403-a689-5b9203472ba0 1bdbfd9a9c034d4baf0368c23697a002 0280d2f586294ccf97547f8bc41590f8 - - default default] [instance: d1a8ef9c-0ce7-4841-9523-7f11435a1884] Updating resource usage from migration 3edf7ec4-1fd2-4a82-b2cd-ff838e7ab052#033[00m
Dec  6 02:59:59 np0005548731 nova_compute[232433]: 2025-12-06 07:59:59.316 232437 DEBUG nova.compute.resource_tracker [None req-c0f0095b-d508-4403-a689-5b9203472ba0 1bdbfd9a9c034d4baf0368c23697a002 0280d2f586294ccf97547f8bc41590f8 - - default default] [instance: d1a8ef9c-0ce7-4841-9523-7f11435a1884] Starting to track incoming migration 3edf7ec4-1fd2-4a82-b2cd-ff838e7ab052 with flavor 25848a18-11d9-4f11-80b5-5d005675c76d _update_usage_from_migration /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1431#033[00m
Dec  6 02:59:59 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:59:59.342 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=76, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:ec:b3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '32:72:e7:89:e0:7d'}, ipsec=False) old=SB_Global(nb_cfg=75) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 02:59:59 np0005548731 nova_compute[232433]: 2025-12-06 07:59:59.343 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 02:59:59 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 07:59:59.344 143965 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec  6 02:59:59 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 02:59:59 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 02:59:59 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:07:59:59.436 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 02:59:59 np0005548731 nova_compute[232433]: 2025-12-06 07:59:59.484 232437 DEBUG oslo_concurrency.processutils [None req-c0f0095b-d508-4403-a689-5b9203472ba0 1bdbfd9a9c034d4baf0368c23697a002 0280d2f586294ccf97547f8bc41590f8 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 02:59:59 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 02:59:59 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1256096928' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 02:59:59 np0005548731 nova_compute[232433]: 2025-12-06 07:59:59.925 232437 DEBUG oslo_concurrency.processutils [None req-c0f0095b-d508-4403-a689-5b9203472ba0 1bdbfd9a9c034d4baf0368c23697a002 0280d2f586294ccf97547f8bc41590f8 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.441s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 02:59:59 np0005548731 nova_compute[232433]: 2025-12-06 07:59:59.933 232437 DEBUG nova.compute.provider_tree [None req-c0f0095b-d508-4403-a689-5b9203472ba0 1bdbfd9a9c034d4baf0368c23697a002 0280d2f586294ccf97547f8bc41590f8 - - default default] Inventory has not changed in ProviderTree for provider: 6d00757a-082f-486d-ae84-869a2ba2e6e7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  6 03:00:00 np0005548731 nova_compute[232433]: 2025-12-06 08:00:00.233 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:00:00 np0005548731 ceph-mon[77458]: overall HEALTH_OK
Dec  6 03:00:00 np0005548731 nova_compute[232433]: 2025-12-06 08:00:00.549 232437 DEBUG oslo_concurrency.lockutils [None req-46aaf0b4-4d43-4a15-8f59-13d36da9de86 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:00:00 np0005548731 nova_compute[232433]: 2025-12-06 08:00:00.557 232437 DEBUG nova.scheduler.client.report [None req-c0f0095b-d508-4403-a689-5b9203472ba0 1bdbfd9a9c034d4baf0368c23697a002 0280d2f586294ccf97547f8bc41590f8 - - default default] Inventory has not changed for provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  6 03:00:00 np0005548731 nova_compute[232433]: 2025-12-06 08:00:00.652 232437 DEBUG oslo_concurrency.lockutils [None req-c0f0095b-d508-4403-a689-5b9203472ba0 1bdbfd9a9c034d4baf0368c23697a002 0280d2f586294ccf97547f8bc41590f8 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.resize_claim" :: held 4.446s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:00:00 np0005548731 nova_compute[232433]: 2025-12-06 08:00:00.652 232437 INFO nova.compute.manager [None req-c0f0095b-d508-4403-a689-5b9203472ba0 1bdbfd9a9c034d4baf0368c23697a002 0280d2f586294ccf97547f8bc41590f8 - - default default] [instance: d1a8ef9c-0ce7-4841-9523-7f11435a1884] Migrating#033[00m
Dec  6 03:00:00 np0005548731 nova_compute[232433]: 2025-12-06 08:00:00.659 232437 DEBUG oslo_concurrency.lockutils [None req-46aaf0b4-4d43-4a15-8f59-13d36da9de86 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.111s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:00:00 np0005548731 nova_compute[232433]: 2025-12-06 08:00:00.665 232437 DEBUG nova.objects.instance [None req-46aaf0b4-4d43-4a15-8f59-13d36da9de86 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] Lazy-loading 'pci_requests' on Instance uuid 27f553d7-b010-40ef-b0cb-42ff0c466354 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 03:00:00 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:00:00 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:00:00 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:00:00.721 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:00:00 np0005548731 nova_compute[232433]: 2025-12-06 08:00:00.769 232437 DEBUG nova.objects.instance [None req-46aaf0b4-4d43-4a15-8f59-13d36da9de86 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] Lazy-loading 'numa_topology' on Instance uuid 27f553d7-b010-40ef-b0cb-42ff0c466354 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 03:00:00 np0005548731 nova_compute[232433]: 2025-12-06 08:00:00.791 232437 DEBUG nova.virt.hardware [None req-46aaf0b4-4d43-4a15-8f59-13d36da9de86 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Dec  6 03:00:00 np0005548731 nova_compute[232433]: 2025-12-06 08:00:00.792 232437 INFO nova.compute.claims [None req-46aaf0b4-4d43-4a15-8f59-13d36da9de86 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] [instance: 27f553d7-b010-40ef-b0cb-42ff0c466354] Claim successful on node compute-2.ctlplane.example.com#033[00m
Dec  6 03:00:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:00:00.905 143965 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:00:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:00:00.906 143965 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:00:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:00:00.906 143965 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:00:01 np0005548731 nova_compute[232433]: 2025-12-06 08:00:01.006 232437 DEBUG oslo_concurrency.processutils [None req-46aaf0b4-4d43-4a15-8f59-13d36da9de86 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 03:00:01 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 03:00:01 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3876234082' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 03:00:01 np0005548731 nova_compute[232433]: 2025-12-06 08:00:01.438 232437 DEBUG oslo_concurrency.processutils [None req-46aaf0b4-4d43-4a15-8f59-13d36da9de86 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.432s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 03:00:01 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:00:01 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:00:01 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:00:01.439 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:00:01 np0005548731 nova_compute[232433]: 2025-12-06 08:00:01.445 232437 DEBUG nova.compute.provider_tree [None req-46aaf0b4-4d43-4a15-8f59-13d36da9de86 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] Inventory has not changed in ProviderTree for provider: 6d00757a-082f-486d-ae84-869a2ba2e6e7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  6 03:00:01 np0005548731 nova_compute[232433]: 2025-12-06 08:00:01.475 232437 DEBUG nova.scheduler.client.report [None req-46aaf0b4-4d43-4a15-8f59-13d36da9de86 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] Inventory has not changed for provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  6 03:00:01 np0005548731 nova_compute[232433]: 2025-12-06 08:00:01.504 232437 DEBUG oslo_concurrency.lockutils [None req-46aaf0b4-4d43-4a15-8f59-13d36da9de86 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.845s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:00:02 np0005548731 nova_compute[232433]: 2025-12-06 08:00:02.175 232437 INFO nova.network.neutron [None req-46aaf0b4-4d43-4a15-8f59-13d36da9de86 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] [instance: 27f553d7-b010-40ef-b0cb-42ff0c466354] Updating port 2b54b935-6ec8-4a68-a404-6be608d5b405 with attributes {'binding:host_id': 'compute-2.ctlplane.example.com', 'device_owner': 'compute:nova'}#033[00m
Dec  6 03:00:02 np0005548731 nova_compute[232433]: 2025-12-06 08:00:02.193 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:00:02 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:00:02 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:00:02 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:00:02.726 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:00:02 np0005548731 ceph-mgr[77818]: client.0 ms_handle_reset on v2:192.168.122.100:6800/798720280
Dec  6 03:00:03 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:00:03 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:00:03 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:00:03.441 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:00:04 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:00:04 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:00:04.345 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=9f96b960-b4f2-40bd-ae99-08121f5e8b78, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '76'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 03:00:04 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:00:04 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:00:04 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:00:04.729 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:00:04 np0005548731 nova_compute[232433]: 2025-12-06 08:00:04.918 232437 DEBUG oslo_concurrency.lockutils [None req-46aaf0b4-4d43-4a15-8f59-13d36da9de86 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] Acquiring lock "refresh_cache-27f553d7-b010-40ef-b0cb-42ff0c466354" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 03:00:04 np0005548731 nova_compute[232433]: 2025-12-06 08:00:04.918 232437 DEBUG oslo_concurrency.lockutils [None req-46aaf0b4-4d43-4a15-8f59-13d36da9de86 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] Acquired lock "refresh_cache-27f553d7-b010-40ef-b0cb-42ff0c466354" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 03:00:04 np0005548731 nova_compute[232433]: 2025-12-06 08:00:04.918 232437 DEBUG nova.network.neutron [None req-46aaf0b4-4d43-4a15-8f59-13d36da9de86 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] [instance: 27f553d7-b010-40ef-b0cb-42ff0c466354] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec  6 03:00:05 np0005548731 nova_compute[232433]: 2025-12-06 08:00:05.100 232437 DEBUG nova.compute.manager [req-5efca039-17bd-4460-987a-18ed49d9c713 req-e78c5b68-7610-416b-b2cc-a4b3d0f25479 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 27f553d7-b010-40ef-b0cb-42ff0c466354] Received event network-changed-2b54b935-6ec8-4a68-a404-6be608d5b405 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 03:00:05 np0005548731 nova_compute[232433]: 2025-12-06 08:00:05.101 232437 DEBUG nova.compute.manager [req-5efca039-17bd-4460-987a-18ed49d9c713 req-e78c5b68-7610-416b-b2cc-a4b3d0f25479 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 27f553d7-b010-40ef-b0cb-42ff0c466354] Refreshing instance network info cache due to event network-changed-2b54b935-6ec8-4a68-a404-6be608d5b405. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  6 03:00:05 np0005548731 nova_compute[232433]: 2025-12-06 08:00:05.101 232437 DEBUG oslo_concurrency.lockutils [req-5efca039-17bd-4460-987a-18ed49d9c713 req-e78c5b68-7610-416b-b2cc-a4b3d0f25479 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-27f553d7-b010-40ef-b0cb-42ff0c466354" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 03:00:05 np0005548731 nova_compute[232433]: 2025-12-06 08:00:05.234 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:00:05 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:00:05 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:00:05 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:00:05.443 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:00:06 np0005548731 systemd[1]: Created slice User Slice of UID 42436.
Dec  6 03:00:06 np0005548731 systemd[1]: Starting User Runtime Directory /run/user/42436...
Dec  6 03:00:06 np0005548731 systemd-logind[794]: New session 67 of user nova.
Dec  6 03:00:06 np0005548731 systemd[1]: Finished User Runtime Directory /run/user/42436.
Dec  6 03:00:06 np0005548731 systemd[1]: Starting User Manager for UID 42436...
Dec  6 03:00:06 np0005548731 systemd[319012]: Queued start job for default target Main User Target.
Dec  6 03:00:06 np0005548731 systemd[319012]: Created slice User Application Slice.
Dec  6 03:00:06 np0005548731 systemd[319012]: Started Mark boot as successful after the user session has run 2 minutes.
Dec  6 03:00:06 np0005548731 systemd[319012]: Started Daily Cleanup of User's Temporary Directories.
Dec  6 03:00:06 np0005548731 systemd[319012]: Reached target Paths.
Dec  6 03:00:06 np0005548731 systemd[319012]: Reached target Timers.
Dec  6 03:00:06 np0005548731 systemd[319012]: Starting D-Bus User Message Bus Socket...
Dec  6 03:00:06 np0005548731 systemd[319012]: Starting Create User's Volatile Files and Directories...
Dec  6 03:00:06 np0005548731 systemd[319012]: Finished Create User's Volatile Files and Directories.
Dec  6 03:00:06 np0005548731 systemd[319012]: Listening on D-Bus User Message Bus Socket.
Dec  6 03:00:06 np0005548731 systemd[319012]: Reached target Sockets.
Dec  6 03:00:06 np0005548731 systemd[319012]: Reached target Basic System.
Dec  6 03:00:06 np0005548731 systemd[319012]: Reached target Main User Target.
Dec  6 03:00:06 np0005548731 systemd[319012]: Startup finished in 128ms.
Dec  6 03:00:06 np0005548731 systemd[1]: Started User Manager for UID 42436.
Dec  6 03:00:06 np0005548731 systemd[1]: Started Session 67 of User nova.
Dec  6 03:00:06 np0005548731 systemd-logind[794]: Session 67 logged out. Waiting for processes to exit.
Dec  6 03:00:06 np0005548731 systemd[1]: session-67.scope: Deactivated successfully.
Dec  6 03:00:06 np0005548731 systemd-logind[794]: Removed session 67.
Dec  6 03:00:06 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:00:06 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:00:06 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:00:06.731 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:00:06 np0005548731 systemd-logind[794]: New session 69 of user nova.
Dec  6 03:00:06 np0005548731 systemd[1]: Started Session 69 of User nova.
Dec  6 03:00:06 np0005548731 systemd[1]: session-69.scope: Deactivated successfully.
Dec  6 03:00:06 np0005548731 systemd-logind[794]: Session 69 logged out. Waiting for processes to exit.
Dec  6 03:00:06 np0005548731 systemd-logind[794]: Removed session 69.
Dec  6 03:00:07 np0005548731 nova_compute[232433]: 2025-12-06 08:00:07.233 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:00:07 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:00:07 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 03:00:07 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:00:07.445 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 03:00:08 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:00:08 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:00:08 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:00:08.734 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:00:08 np0005548731 podman[319087]: 2025-12-06 08:00:08.903820172 +0000 UTC m=+0.064205172 container health_status ae4fd08cdec31d6ae2a6b91ffff7deb6ca5a8375cb52ee4b685cc6f25796824e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true)
Dec  6 03:00:08 np0005548731 podman[319085]: 2025-12-06 08:00:08.91273444 +0000 UTC m=+0.075679243 container health_status 26c2ce9bdb83c6db5ccad7f5b2a7cb4e9354ab0235ad2f942ea42c8feda2142b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Dec  6 03:00:08 np0005548731 podman[319086]: 2025-12-06 08:00:08.935475707 +0000 UTC m=+0.095017866 container health_status a118c3becf56cfbcae208065771ebf10a65162bb7b96389971293e534823fb78 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec  6 03:00:08 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  6 03:00:08 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3263341091' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  6 03:00:08 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  6 03:00:08 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3263341091' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  6 03:00:08 np0005548731 nova_compute[232433]: 2025-12-06 08:00:08.946 232437 DEBUG nova.network.neutron [None req-46aaf0b4-4d43-4a15-8f59-13d36da9de86 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] [instance: 27f553d7-b010-40ef-b0cb-42ff0c466354] Updating instance_info_cache with network_info: [{"id": "2b54b935-6ec8-4a68-a404-6be608d5b405", "address": "fa:16:3e:15:9d:85", "network": {"id": "8caa40db-27da-43ab-86ca-042284636e71", "bridge": "br-int", "label": "tempest-TestShelveInstance-1111687365-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.240", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c3c0564f8e9f4af9ae5b597a275c989f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2b54b935-6e", "ovs_interfaceid": "2b54b935-6ec8-4a68-a404-6be608d5b405", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 03:00:08 np0005548731 nova_compute[232433]: 2025-12-06 08:00:08.971 232437 DEBUG oslo_concurrency.lockutils [None req-46aaf0b4-4d43-4a15-8f59-13d36da9de86 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] Releasing lock "refresh_cache-27f553d7-b010-40ef-b0cb-42ff0c466354" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 03:00:08 np0005548731 nova_compute[232433]: 2025-12-06 08:00:08.973 232437 DEBUG nova.virt.libvirt.driver [None req-46aaf0b4-4d43-4a15-8f59-13d36da9de86 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] [instance: 27f553d7-b010-40ef-b0cb-42ff0c466354] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Dec  6 03:00:08 np0005548731 nova_compute[232433]: 2025-12-06 08:00:08.974 232437 INFO nova.virt.libvirt.driver [None req-46aaf0b4-4d43-4a15-8f59-13d36da9de86 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] [instance: 27f553d7-b010-40ef-b0cb-42ff0c466354] Creating image(s)#033[00m
Dec  6 03:00:08 np0005548731 nova_compute[232433]: 2025-12-06 08:00:08.974 232437 DEBUG nova.virt.libvirt.driver [None req-46aaf0b4-4d43-4a15-8f59-13d36da9de86 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] [instance: 27f553d7-b010-40ef-b0cb-42ff0c466354] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859#033[00m
Dec  6 03:00:08 np0005548731 nova_compute[232433]: 2025-12-06 08:00:08.974 232437 DEBUG nova.virt.libvirt.driver [None req-46aaf0b4-4d43-4a15-8f59-13d36da9de86 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] [instance: 27f553d7-b010-40ef-b0cb-42ff0c466354] Ensure instance console log exists: /var/lib/nova/instances/27f553d7-b010-40ef-b0cb-42ff0c466354/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec  6 03:00:08 np0005548731 nova_compute[232433]: 2025-12-06 08:00:08.975 232437 DEBUG oslo_concurrency.lockutils [None req-46aaf0b4-4d43-4a15-8f59-13d36da9de86 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:00:08 np0005548731 nova_compute[232433]: 2025-12-06 08:00:08.975 232437 DEBUG oslo_concurrency.lockutils [None req-46aaf0b4-4d43-4a15-8f59-13d36da9de86 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:00:08 np0005548731 nova_compute[232433]: 2025-12-06 08:00:08.975 232437 DEBUG oslo_concurrency.lockutils [None req-46aaf0b4-4d43-4a15-8f59-13d36da9de86 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:00:08 np0005548731 nova_compute[232433]: 2025-12-06 08:00:08.978 232437 DEBUG nova.virt.libvirt.driver [None req-46aaf0b4-4d43-4a15-8f59-13d36da9de86 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] [instance: 27f553d7-b010-40ef-b0cb-42ff0c466354] Start _get_guest_xml network_info=[{"id": "2b54b935-6ec8-4a68-a404-6be608d5b405", "address": "fa:16:3e:15:9d:85", "network": {"id": "8caa40db-27da-43ab-86ca-042284636e71", "bridge": "br-int", "label": "tempest-TestShelveInstance-1111687365-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.240", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c3c0564f8e9f4af9ae5b597a275c989f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2b54b935-6e", "ovs_interfaceid": "2b54b935-6ec8-4a68-a404-6be608d5b405", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'guest_format': None, 'device_type': 'disk', 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-f1cb8c3c-521b-48db-b5f7-453fef5dd2fe', 'hosts': ['192.168.122.100', '192.168.122.102', '192.168.122.101'], 'ports': ['6789', '6789', '6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': 'f1cb8c3c-521b-48db-b5f7-453fef5dd2fe', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': '27f553d7-b010-40ef-b0cb-42ff0c466354', 'attached_at': '', 'detached_at': '', 'volume_id': 'f1cb8c3c-521b-48db-b5f7-453fef5dd2fe', 'serial': 'f1cb8c3c-521b-48db-b5f7-453fef5dd2fe'}, 'disk_bus': 'virtio', 'boot_index': 0, 'delete_on_termination': True, 'mount_device': '/dev/vda', 'attachment_id': 'e469efc4-a2a5-40fa-b1ad-62c2931170d2', 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec  6 03:00:08 np0005548731 nova_compute[232433]: 2025-12-06 08:00:08.979 232437 DEBUG oslo_concurrency.lockutils [req-5efca039-17bd-4460-987a-18ed49d9c713 req-e78c5b68-7610-416b-b2cc-a4b3d0f25479 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-27f553d7-b010-40ef-b0cb-42ff0c466354" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 03:00:08 np0005548731 nova_compute[232433]: 2025-12-06 08:00:08.979 232437 DEBUG nova.network.neutron [req-5efca039-17bd-4460-987a-18ed49d9c713 req-e78c5b68-7610-416b-b2cc-a4b3d0f25479 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 27f553d7-b010-40ef-b0cb-42ff0c466354] Refreshing network info cache for port 2b54b935-6ec8-4a68-a404-6be608d5b405 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  6 03:00:08 np0005548731 nova_compute[232433]: 2025-12-06 08:00:08.984 232437 WARNING nova.virt.libvirt.driver [None req-46aaf0b4-4d43-4a15-8f59-13d36da9de86 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  6 03:00:08 np0005548731 nova_compute[232433]: 2025-12-06 08:00:08.990 232437 DEBUG nova.virt.libvirt.host [None req-46aaf0b4-4d43-4a15-8f59-13d36da9de86 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec  6 03:00:08 np0005548731 nova_compute[232433]: 2025-12-06 08:00:08.991 232437 DEBUG nova.virt.libvirt.host [None req-46aaf0b4-4d43-4a15-8f59-13d36da9de86 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec  6 03:00:08 np0005548731 nova_compute[232433]: 2025-12-06 08:00:08.994 232437 DEBUG nova.virt.libvirt.host [None req-46aaf0b4-4d43-4a15-8f59-13d36da9de86 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec  6 03:00:08 np0005548731 nova_compute[232433]: 2025-12-06 08:00:08.994 232437 DEBUG nova.virt.libvirt.host [None req-46aaf0b4-4d43-4a15-8f59-13d36da9de86 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec  6 03:00:08 np0005548731 nova_compute[232433]: 2025-12-06 08:00:08.996 232437 DEBUG nova.virt.libvirt.driver [None req-46aaf0b4-4d43-4a15-8f59-13d36da9de86 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec  6 03:00:08 np0005548731 nova_compute[232433]: 2025-12-06 08:00:08.996 232437 DEBUG nova.virt.hardware [None req-46aaf0b4-4d43-4a15-8f59-13d36da9de86 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-06T06:56:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='25848a18-11d9-4f11-80b5-5d005675c76d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec  6 03:00:08 np0005548731 nova_compute[232433]: 2025-12-06 08:00:08.997 232437 DEBUG nova.virt.hardware [None req-46aaf0b4-4d43-4a15-8f59-13d36da9de86 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec  6 03:00:08 np0005548731 nova_compute[232433]: 2025-12-06 08:00:08.997 232437 DEBUG nova.virt.hardware [None req-46aaf0b4-4d43-4a15-8f59-13d36da9de86 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec  6 03:00:08 np0005548731 nova_compute[232433]: 2025-12-06 08:00:08.997 232437 DEBUG nova.virt.hardware [None req-46aaf0b4-4d43-4a15-8f59-13d36da9de86 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec  6 03:00:08 np0005548731 nova_compute[232433]: 2025-12-06 08:00:08.998 232437 DEBUG nova.virt.hardware [None req-46aaf0b4-4d43-4a15-8f59-13d36da9de86 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec  6 03:00:08 np0005548731 nova_compute[232433]: 2025-12-06 08:00:08.998 232437 DEBUG nova.virt.hardware [None req-46aaf0b4-4d43-4a15-8f59-13d36da9de86 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec  6 03:00:08 np0005548731 nova_compute[232433]: 2025-12-06 08:00:08.999 232437 DEBUG nova.virt.hardware [None req-46aaf0b4-4d43-4a15-8f59-13d36da9de86 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec  6 03:00:08 np0005548731 nova_compute[232433]: 2025-12-06 08:00:08.999 232437 DEBUG nova.virt.hardware [None req-46aaf0b4-4d43-4a15-8f59-13d36da9de86 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec  6 03:00:09 np0005548731 nova_compute[232433]: 2025-12-06 08:00:08.999 232437 DEBUG nova.virt.hardware [None req-46aaf0b4-4d43-4a15-8f59-13d36da9de86 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec  6 03:00:09 np0005548731 nova_compute[232433]: 2025-12-06 08:00:09.000 232437 DEBUG nova.virt.hardware [None req-46aaf0b4-4d43-4a15-8f59-13d36da9de86 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec  6 03:00:09 np0005548731 nova_compute[232433]: 2025-12-06 08:00:09.000 232437 DEBUG nova.virt.hardware [None req-46aaf0b4-4d43-4a15-8f59-13d36da9de86 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec  6 03:00:09 np0005548731 nova_compute[232433]: 2025-12-06 08:00:09.000 232437 DEBUG nova.objects.instance [None req-46aaf0b4-4d43-4a15-8f59-13d36da9de86 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] Lazy-loading 'vcpu_model' on Instance uuid 27f553d7-b010-40ef-b0cb-42ff0c466354 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 03:00:09 np0005548731 nova_compute[232433]: 2025-12-06 08:00:09.046 232437 DEBUG nova.storage.rbd_utils [None req-46aaf0b4-4d43-4a15-8f59-13d36da9de86 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] rbd image 27f553d7-b010-40ef-b0cb-42ff0c466354_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 03:00:09 np0005548731 nova_compute[232433]: 2025-12-06 08:00:09.050 232437 DEBUG oslo_concurrency.processutils [None req-46aaf0b4-4d43-4a15-8f59-13d36da9de86 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 03:00:09 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:00:09 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:00:09 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:00:09 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:00:09.448 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:00:09 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  6 03:00:09 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1989904733' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  6 03:00:09 np0005548731 nova_compute[232433]: 2025-12-06 08:00:09.508 232437 DEBUG oslo_concurrency.processutils [None req-46aaf0b4-4d43-4a15-8f59-13d36da9de86 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.457s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 03:00:09 np0005548731 nova_compute[232433]: 2025-12-06 08:00:09.549 232437 DEBUG nova.virt.libvirt.vif [None req-46aaf0b4-4d43-4a15-8f59-13d36da9de86 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-12-06T07:58:50Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestShelveInstance-server-1733328147',display_name='tempest-TestShelveInstance-server-1733328147',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-testshelveinstance-server-1733328147',id=180,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name='tempest-TestShelveInstance-1084557310',keypairs=<?>,launch_index=0,launched_at=2025-12-06T07:59:03Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=4,progress=0,project_id='c3c0564f8e9f4af9ae5b597a275c989f',ramdisk_id='',reservation_id='r-hmpcrqao',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',own
er_project_name='tempest-TestShelveInstance-1863009913',owner_user_name='tempest-TestShelveInstance-1863009913-project-member'},tags=<?>,task_state='spawning',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T07:59:54Z,user_data=None,user_id='98e657096e3f4b528cd461a3dd6a750e',uuid=27f553d7-b010-40ef-b0cb-42ff0c466354,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='shelved_offloaded') vif={"id": "2b54b935-6ec8-4a68-a404-6be608d5b405", "address": "fa:16:3e:15:9d:85", "network": {"id": "8caa40db-27da-43ab-86ca-042284636e71", "bridge": "br-int", "label": "tempest-TestShelveInstance-1111687365-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.240", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c3c0564f8e9f4af9ae5b597a275c989f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2b54b935-6e", "ovs_interfaceid": "2b54b935-6ec8-4a68-a404-6be608d5b405", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec  6 03:00:09 np0005548731 nova_compute[232433]: 2025-12-06 08:00:09.550 232437 DEBUG nova.network.os_vif_util [None req-46aaf0b4-4d43-4a15-8f59-13d36da9de86 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] Converting VIF {"id": "2b54b935-6ec8-4a68-a404-6be608d5b405", "address": "fa:16:3e:15:9d:85", "network": {"id": "8caa40db-27da-43ab-86ca-042284636e71", "bridge": "br-int", "label": "tempest-TestShelveInstance-1111687365-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.240", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c3c0564f8e9f4af9ae5b597a275c989f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2b54b935-6e", "ovs_interfaceid": "2b54b935-6ec8-4a68-a404-6be608d5b405", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 03:00:09 np0005548731 nova_compute[232433]: 2025-12-06 08:00:09.551 232437 DEBUG nova.network.os_vif_util [None req-46aaf0b4-4d43-4a15-8f59-13d36da9de86 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:15:9d:85,bridge_name='br-int',has_traffic_filtering=True,id=2b54b935-6ec8-4a68-a404-6be608d5b405,network=Network(8caa40db-27da-43ab-86ca-042284636e71),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2b54b935-6e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 03:00:09 np0005548731 nova_compute[232433]: 2025-12-06 08:00:09.552 232437 DEBUG nova.objects.instance [None req-46aaf0b4-4d43-4a15-8f59-13d36da9de86 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] Lazy-loading 'pci_devices' on Instance uuid 27f553d7-b010-40ef-b0cb-42ff0c466354 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 03:00:09 np0005548731 nova_compute[232433]: 2025-12-06 08:00:09.575 232437 DEBUG nova.virt.libvirt.driver [None req-46aaf0b4-4d43-4a15-8f59-13d36da9de86 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] [instance: 27f553d7-b010-40ef-b0cb-42ff0c466354] End _get_guest_xml xml=<domain type="kvm">
Dec  6 03:00:09 np0005548731 nova_compute[232433]:  <uuid>27f553d7-b010-40ef-b0cb-42ff0c466354</uuid>
Dec  6 03:00:09 np0005548731 nova_compute[232433]:  <name>instance-000000b4</name>
Dec  6 03:00:09 np0005548731 nova_compute[232433]:  <memory>131072</memory>
Dec  6 03:00:09 np0005548731 nova_compute[232433]:  <vcpu>1</vcpu>
Dec  6 03:00:09 np0005548731 nova_compute[232433]:  <metadata>
Dec  6 03:00:09 np0005548731 nova_compute[232433]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec  6 03:00:09 np0005548731 nova_compute[232433]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec  6 03:00:09 np0005548731 nova_compute[232433]:      <nova:name>tempest-TestShelveInstance-server-1733328147</nova:name>
Dec  6 03:00:09 np0005548731 nova_compute[232433]:      <nova:creationTime>2025-12-06 08:00:08</nova:creationTime>
Dec  6 03:00:09 np0005548731 nova_compute[232433]:      <nova:flavor name="m1.nano">
Dec  6 03:00:09 np0005548731 nova_compute[232433]:        <nova:memory>128</nova:memory>
Dec  6 03:00:09 np0005548731 nova_compute[232433]:        <nova:disk>1</nova:disk>
Dec  6 03:00:09 np0005548731 nova_compute[232433]:        <nova:swap>0</nova:swap>
Dec  6 03:00:09 np0005548731 nova_compute[232433]:        <nova:ephemeral>0</nova:ephemeral>
Dec  6 03:00:09 np0005548731 nova_compute[232433]:        <nova:vcpus>1</nova:vcpus>
Dec  6 03:00:09 np0005548731 nova_compute[232433]:      </nova:flavor>
Dec  6 03:00:09 np0005548731 nova_compute[232433]:      <nova:owner>
Dec  6 03:00:09 np0005548731 nova_compute[232433]:        <nova:user uuid="98e657096e3f4b528cd461a3dd6a750e">tempest-TestShelveInstance-1863009913-project-member</nova:user>
Dec  6 03:00:09 np0005548731 nova_compute[232433]:        <nova:project uuid="c3c0564f8e9f4af9ae5b597a275c989f">tempest-TestShelveInstance-1863009913</nova:project>
Dec  6 03:00:09 np0005548731 nova_compute[232433]:      </nova:owner>
Dec  6 03:00:09 np0005548731 nova_compute[232433]:      <nova:ports>
Dec  6 03:00:09 np0005548731 nova_compute[232433]:        <nova:port uuid="2b54b935-6ec8-4a68-a404-6be608d5b405">
Dec  6 03:00:09 np0005548731 nova_compute[232433]:          <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Dec  6 03:00:09 np0005548731 nova_compute[232433]:        </nova:port>
Dec  6 03:00:09 np0005548731 nova_compute[232433]:      </nova:ports>
Dec  6 03:00:09 np0005548731 nova_compute[232433]:    </nova:instance>
Dec  6 03:00:09 np0005548731 nova_compute[232433]:  </metadata>
Dec  6 03:00:09 np0005548731 nova_compute[232433]:  <sysinfo type="smbios">
Dec  6 03:00:09 np0005548731 nova_compute[232433]:    <system>
Dec  6 03:00:09 np0005548731 nova_compute[232433]:      <entry name="manufacturer">RDO</entry>
Dec  6 03:00:09 np0005548731 nova_compute[232433]:      <entry name="product">OpenStack Compute</entry>
Dec  6 03:00:09 np0005548731 nova_compute[232433]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec  6 03:00:09 np0005548731 nova_compute[232433]:      <entry name="serial">27f553d7-b010-40ef-b0cb-42ff0c466354</entry>
Dec  6 03:00:09 np0005548731 nova_compute[232433]:      <entry name="uuid">27f553d7-b010-40ef-b0cb-42ff0c466354</entry>
Dec  6 03:00:09 np0005548731 nova_compute[232433]:      <entry name="family">Virtual Machine</entry>
Dec  6 03:00:09 np0005548731 nova_compute[232433]:    </system>
Dec  6 03:00:09 np0005548731 nova_compute[232433]:  </sysinfo>
Dec  6 03:00:09 np0005548731 nova_compute[232433]:  <os>
Dec  6 03:00:09 np0005548731 nova_compute[232433]:    <type arch="x86_64" machine="q35">hvm</type>
Dec  6 03:00:09 np0005548731 nova_compute[232433]:    <boot dev="hd"/>
Dec  6 03:00:09 np0005548731 nova_compute[232433]:    <smbios mode="sysinfo"/>
Dec  6 03:00:09 np0005548731 nova_compute[232433]:  </os>
Dec  6 03:00:09 np0005548731 nova_compute[232433]:  <features>
Dec  6 03:00:09 np0005548731 nova_compute[232433]:    <acpi/>
Dec  6 03:00:09 np0005548731 nova_compute[232433]:    <apic/>
Dec  6 03:00:09 np0005548731 nova_compute[232433]:    <vmcoreinfo/>
Dec  6 03:00:09 np0005548731 nova_compute[232433]:  </features>
Dec  6 03:00:09 np0005548731 nova_compute[232433]:  <clock offset="utc">
Dec  6 03:00:09 np0005548731 nova_compute[232433]:    <timer name="pit" tickpolicy="delay"/>
Dec  6 03:00:09 np0005548731 nova_compute[232433]:    <timer name="rtc" tickpolicy="catchup"/>
Dec  6 03:00:09 np0005548731 nova_compute[232433]:    <timer name="hpet" present="no"/>
Dec  6 03:00:09 np0005548731 nova_compute[232433]:  </clock>
Dec  6 03:00:09 np0005548731 nova_compute[232433]:  <cpu mode="custom" match="exact">
Dec  6 03:00:09 np0005548731 nova_compute[232433]:    <model>Nehalem</model>
Dec  6 03:00:09 np0005548731 nova_compute[232433]:    <topology sockets="1" cores="1" threads="1"/>
Dec  6 03:00:09 np0005548731 nova_compute[232433]:  </cpu>
Dec  6 03:00:09 np0005548731 nova_compute[232433]:  <devices>
Dec  6 03:00:09 np0005548731 nova_compute[232433]:    <disk type="network" device="cdrom">
Dec  6 03:00:09 np0005548731 nova_compute[232433]:      <driver type="raw" cache="none"/>
Dec  6 03:00:09 np0005548731 nova_compute[232433]:      <source protocol="rbd" name="vms/27f553d7-b010-40ef-b0cb-42ff0c466354_disk.config">
Dec  6 03:00:09 np0005548731 nova_compute[232433]:        <host name="192.168.122.100" port="6789"/>
Dec  6 03:00:09 np0005548731 nova_compute[232433]:        <host name="192.168.122.102" port="6789"/>
Dec  6 03:00:09 np0005548731 nova_compute[232433]:        <host name="192.168.122.101" port="6789"/>
Dec  6 03:00:09 np0005548731 nova_compute[232433]:      </source>
Dec  6 03:00:09 np0005548731 nova_compute[232433]:      <auth username="openstack">
Dec  6 03:00:09 np0005548731 nova_compute[232433]:        <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec  6 03:00:09 np0005548731 nova_compute[232433]:      </auth>
Dec  6 03:00:09 np0005548731 nova_compute[232433]:      <target dev="sda" bus="sata"/>
Dec  6 03:00:09 np0005548731 nova_compute[232433]:    </disk>
Dec  6 03:00:09 np0005548731 nova_compute[232433]:    <disk type="network" device="disk">
Dec  6 03:00:09 np0005548731 nova_compute[232433]:      <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Dec  6 03:00:09 np0005548731 nova_compute[232433]:      <source protocol="rbd" name="volumes/volume-f1cb8c3c-521b-48db-b5f7-453fef5dd2fe">
Dec  6 03:00:09 np0005548731 nova_compute[232433]:        <host name="192.168.122.100" port="6789"/>
Dec  6 03:00:09 np0005548731 nova_compute[232433]:        <host name="192.168.122.102" port="6789"/>
Dec  6 03:00:09 np0005548731 nova_compute[232433]:        <host name="192.168.122.101" port="6789"/>
Dec  6 03:00:09 np0005548731 nova_compute[232433]:      </source>
Dec  6 03:00:09 np0005548731 nova_compute[232433]:      <auth username="openstack">
Dec  6 03:00:09 np0005548731 nova_compute[232433]:        <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec  6 03:00:09 np0005548731 nova_compute[232433]:      </auth>
Dec  6 03:00:09 np0005548731 nova_compute[232433]:      <target dev="vda" bus="virtio"/>
Dec  6 03:00:09 np0005548731 nova_compute[232433]:      <serial>f1cb8c3c-521b-48db-b5f7-453fef5dd2fe</serial>
Dec  6 03:00:09 np0005548731 nova_compute[232433]:    </disk>
Dec  6 03:00:09 np0005548731 nova_compute[232433]:    <interface type="ethernet">
Dec  6 03:00:09 np0005548731 nova_compute[232433]:      <mac address="fa:16:3e:15:9d:85"/>
Dec  6 03:00:09 np0005548731 nova_compute[232433]:      <model type="virtio"/>
Dec  6 03:00:09 np0005548731 nova_compute[232433]:      <driver name="vhost" rx_queue_size="512"/>
Dec  6 03:00:09 np0005548731 nova_compute[232433]:      <mtu size="1442"/>
Dec  6 03:00:09 np0005548731 nova_compute[232433]:      <target dev="tap2b54b935-6e"/>
Dec  6 03:00:09 np0005548731 nova_compute[232433]:    </interface>
Dec  6 03:00:09 np0005548731 nova_compute[232433]:    <serial type="pty">
Dec  6 03:00:09 np0005548731 nova_compute[232433]:      <log file="/var/lib/nova/instances/27f553d7-b010-40ef-b0cb-42ff0c466354/console.log" append="off"/>
Dec  6 03:00:09 np0005548731 nova_compute[232433]:    </serial>
Dec  6 03:00:09 np0005548731 nova_compute[232433]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec  6 03:00:09 np0005548731 nova_compute[232433]:    <video>
Dec  6 03:00:09 np0005548731 nova_compute[232433]:      <model type="virtio"/>
Dec  6 03:00:09 np0005548731 nova_compute[232433]:    </video>
Dec  6 03:00:09 np0005548731 nova_compute[232433]:    <input type="tablet" bus="usb"/>
Dec  6 03:00:09 np0005548731 nova_compute[232433]:    <input type="keyboard" bus="usb"/>
Dec  6 03:00:09 np0005548731 nova_compute[232433]:    <rng model="virtio">
Dec  6 03:00:09 np0005548731 nova_compute[232433]:      <backend model="random">/dev/urandom</backend>
Dec  6 03:00:09 np0005548731 nova_compute[232433]:    </rng>
Dec  6 03:00:09 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root"/>
Dec  6 03:00:09 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:00:09 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:00:09 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:00:09 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:00:09 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:00:09 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:00:09 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:00:09 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:00:09 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:00:09 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:00:09 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:00:09 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:00:09 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:00:09 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:00:09 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:00:09 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:00:09 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:00:09 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:00:09 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:00:09 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:00:09 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:00:09 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:00:09 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:00:09 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:00:09 np0005548731 nova_compute[232433]:    <controller type="usb" index="0"/>
Dec  6 03:00:09 np0005548731 nova_compute[232433]:    <memballoon model="virtio">
Dec  6 03:00:09 np0005548731 nova_compute[232433]:      <stats period="10"/>
Dec  6 03:00:09 np0005548731 nova_compute[232433]:    </memballoon>
Dec  6 03:00:09 np0005548731 nova_compute[232433]:  </devices>
Dec  6 03:00:09 np0005548731 nova_compute[232433]: </domain>
Dec  6 03:00:09 np0005548731 nova_compute[232433]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec  6 03:00:09 np0005548731 nova_compute[232433]: 2025-12-06 08:00:09.577 232437 DEBUG nova.compute.manager [None req-46aaf0b4-4d43-4a15-8f59-13d36da9de86 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] [instance: 27f553d7-b010-40ef-b0cb-42ff0c466354] Preparing to wait for external event network-vif-plugged-2b54b935-6ec8-4a68-a404-6be608d5b405 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec  6 03:00:09 np0005548731 nova_compute[232433]: 2025-12-06 08:00:09.577 232437 DEBUG oslo_concurrency.lockutils [None req-46aaf0b4-4d43-4a15-8f59-13d36da9de86 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] Acquiring lock "27f553d7-b010-40ef-b0cb-42ff0c466354-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:00:09 np0005548731 nova_compute[232433]: 2025-12-06 08:00:09.577 232437 DEBUG oslo_concurrency.lockutils [None req-46aaf0b4-4d43-4a15-8f59-13d36da9de86 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] Lock "27f553d7-b010-40ef-b0cb-42ff0c466354-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:00:09 np0005548731 nova_compute[232433]: 2025-12-06 08:00:09.578 232437 DEBUG oslo_concurrency.lockutils [None req-46aaf0b4-4d43-4a15-8f59-13d36da9de86 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] Lock "27f553d7-b010-40ef-b0cb-42ff0c466354-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:00:09 np0005548731 nova_compute[232433]: 2025-12-06 08:00:09.578 232437 DEBUG nova.virt.libvirt.vif [None req-46aaf0b4-4d43-4a15-8f59-13d36da9de86 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-12-06T07:58:50Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestShelveInstance-server-1733328147',display_name='tempest-TestShelveInstance-server-1733328147',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-testshelveinstance-server-1733328147',id=180,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name='tempest-TestShelveInstance-1084557310',keypairs=<?>,launch_index=0,launched_at=2025-12-06T07:59:03Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=4,progress=0,project_id='c3c0564f8e9f4af9ae5b597a275c989f',ramdisk_id='',reservation_id='r-hmpcrqao',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='
False',owner_project_name='tempest-TestShelveInstance-1863009913',owner_user_name='tempest-TestShelveInstance-1863009913-project-member'},tags=<?>,task_state='spawning',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T07:59:54Z,user_data=None,user_id='98e657096e3f4b528cd461a3dd6a750e',uuid=27f553d7-b010-40ef-b0cb-42ff0c466354,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='shelved_offloaded') vif={"id": "2b54b935-6ec8-4a68-a404-6be608d5b405", "address": "fa:16:3e:15:9d:85", "network": {"id": "8caa40db-27da-43ab-86ca-042284636e71", "bridge": "br-int", "label": "tempest-TestShelveInstance-1111687365-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.240", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c3c0564f8e9f4af9ae5b597a275c989f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2b54b935-6e", "ovs_interfaceid": "2b54b935-6ec8-4a68-a404-6be608d5b405", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec  6 03:00:09 np0005548731 nova_compute[232433]: 2025-12-06 08:00:09.579 232437 DEBUG nova.network.os_vif_util [None req-46aaf0b4-4d43-4a15-8f59-13d36da9de86 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] Converting VIF {"id": "2b54b935-6ec8-4a68-a404-6be608d5b405", "address": "fa:16:3e:15:9d:85", "network": {"id": "8caa40db-27da-43ab-86ca-042284636e71", "bridge": "br-int", "label": "tempest-TestShelveInstance-1111687365-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.240", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c3c0564f8e9f4af9ae5b597a275c989f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2b54b935-6e", "ovs_interfaceid": "2b54b935-6ec8-4a68-a404-6be608d5b405", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 03:00:09 np0005548731 nova_compute[232433]: 2025-12-06 08:00:09.579 232437 DEBUG nova.network.os_vif_util [None req-46aaf0b4-4d43-4a15-8f59-13d36da9de86 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:15:9d:85,bridge_name='br-int',has_traffic_filtering=True,id=2b54b935-6ec8-4a68-a404-6be608d5b405,network=Network(8caa40db-27da-43ab-86ca-042284636e71),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2b54b935-6e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 03:00:09 np0005548731 nova_compute[232433]: 2025-12-06 08:00:09.580 232437 DEBUG os_vif [None req-46aaf0b4-4d43-4a15-8f59-13d36da9de86 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:15:9d:85,bridge_name='br-int',has_traffic_filtering=True,id=2b54b935-6ec8-4a68-a404-6be608d5b405,network=Network(8caa40db-27da-43ab-86ca-042284636e71),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2b54b935-6e') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec  6 03:00:09 np0005548731 nova_compute[232433]: 2025-12-06 08:00:09.580 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:00:09 np0005548731 nova_compute[232433]: 2025-12-06 08:00:09.581 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 03:00:09 np0005548731 nova_compute[232433]: 2025-12-06 08:00:09.581 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  6 03:00:09 np0005548731 nova_compute[232433]: 2025-12-06 08:00:09.584 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:00:09 np0005548731 nova_compute[232433]: 2025-12-06 08:00:09.584 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2b54b935-6e, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 03:00:09 np0005548731 nova_compute[232433]: 2025-12-06 08:00:09.585 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap2b54b935-6e, col_values=(('external_ids', {'iface-id': '2b54b935-6ec8-4a68-a404-6be608d5b405', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:15:9d:85', 'vm-uuid': '27f553d7-b010-40ef-b0cb-42ff0c466354'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 03:00:09 np0005548731 nova_compute[232433]: 2025-12-06 08:00:09.586 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:00:09 np0005548731 NetworkManager[49182]: <info>  [1765008009.5873] manager: (tap2b54b935-6e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/417)
Dec  6 03:00:09 np0005548731 nova_compute[232433]: 2025-12-06 08:00:09.590 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  6 03:00:09 np0005548731 nova_compute[232433]: 2025-12-06 08:00:09.593 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:00:09 np0005548731 nova_compute[232433]: 2025-12-06 08:00:09.595 232437 INFO os_vif [None req-46aaf0b4-4d43-4a15-8f59-13d36da9de86 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:15:9d:85,bridge_name='br-int',has_traffic_filtering=True,id=2b54b935-6ec8-4a68-a404-6be608d5b405,network=Network(8caa40db-27da-43ab-86ca-042284636e71),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2b54b935-6e')#033[00m
Dec  6 03:00:09 np0005548731 nova_compute[232433]: 2025-12-06 08:00:09.639 232437 DEBUG nova.virt.libvirt.driver [None req-46aaf0b4-4d43-4a15-8f59-13d36da9de86 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  6 03:00:09 np0005548731 nova_compute[232433]: 2025-12-06 08:00:09.640 232437 DEBUG nova.virt.libvirt.driver [None req-46aaf0b4-4d43-4a15-8f59-13d36da9de86 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  6 03:00:09 np0005548731 nova_compute[232433]: 2025-12-06 08:00:09.640 232437 DEBUG nova.virt.libvirt.driver [None req-46aaf0b4-4d43-4a15-8f59-13d36da9de86 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] No VIF found with MAC fa:16:3e:15:9d:85, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec  6 03:00:09 np0005548731 nova_compute[232433]: 2025-12-06 08:00:09.640 232437 INFO nova.virt.libvirt.driver [None req-46aaf0b4-4d43-4a15-8f59-13d36da9de86 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] [instance: 27f553d7-b010-40ef-b0cb-42ff0c466354] Using config drive#033[00m
Dec  6 03:00:09 np0005548731 nova_compute[232433]: 2025-12-06 08:00:09.668 232437 DEBUG nova.storage.rbd_utils [None req-46aaf0b4-4d43-4a15-8f59-13d36da9de86 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] rbd image 27f553d7-b010-40ef-b0cb-42ff0c466354_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 03:00:09 np0005548731 nova_compute[232433]: 2025-12-06 08:00:09.689 232437 DEBUG nova.objects.instance [None req-46aaf0b4-4d43-4a15-8f59-13d36da9de86 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] Lazy-loading 'ec2_ids' on Instance uuid 27f553d7-b010-40ef-b0cb-42ff0c466354 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 03:00:09 np0005548731 nova_compute[232433]: 2025-12-06 08:00:09.748 232437 DEBUG nova.objects.instance [None req-46aaf0b4-4d43-4a15-8f59-13d36da9de86 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] Lazy-loading 'keypairs' on Instance uuid 27f553d7-b010-40ef-b0cb-42ff0c466354 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 03:00:10 np0005548731 nova_compute[232433]: 2025-12-06 08:00:10.235 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:00:10 np0005548731 nova_compute[232433]: 2025-12-06 08:00:10.271 232437 INFO nova.virt.libvirt.driver [None req-46aaf0b4-4d43-4a15-8f59-13d36da9de86 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] [instance: 27f553d7-b010-40ef-b0cb-42ff0c466354] Creating config drive at /var/lib/nova/instances/27f553d7-b010-40ef-b0cb-42ff0c466354/disk.config#033[00m
Dec  6 03:00:10 np0005548731 nova_compute[232433]: 2025-12-06 08:00:10.278 232437 DEBUG oslo_concurrency.processutils [None req-46aaf0b4-4d43-4a15-8f59-13d36da9de86 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/27f553d7-b010-40ef-b0cb-42ff0c466354/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpfqi4emih execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 03:00:10 np0005548731 nova_compute[232433]: 2025-12-06 08:00:10.434 232437 DEBUG oslo_concurrency.processutils [None req-46aaf0b4-4d43-4a15-8f59-13d36da9de86 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/27f553d7-b010-40ef-b0cb-42ff0c466354/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpfqi4emih" returned: 0 in 0.155s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 03:00:10 np0005548731 nova_compute[232433]: 2025-12-06 08:00:10.459 232437 DEBUG nova.storage.rbd_utils [None req-46aaf0b4-4d43-4a15-8f59-13d36da9de86 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] rbd image 27f553d7-b010-40ef-b0cb-42ff0c466354_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 03:00:10 np0005548731 nova_compute[232433]: 2025-12-06 08:00:10.462 232437 DEBUG oslo_concurrency.processutils [None req-46aaf0b4-4d43-4a15-8f59-13d36da9de86 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/27f553d7-b010-40ef-b0cb-42ff0c466354/disk.config 27f553d7-b010-40ef-b0cb-42ff0c466354_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 03:00:10 np0005548731 nova_compute[232433]: 2025-12-06 08:00:10.655 232437 DEBUG oslo_concurrency.processutils [None req-46aaf0b4-4d43-4a15-8f59-13d36da9de86 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/27f553d7-b010-40ef-b0cb-42ff0c466354/disk.config 27f553d7-b010-40ef-b0cb-42ff0c466354_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.193s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 03:00:10 np0005548731 nova_compute[232433]: 2025-12-06 08:00:10.656 232437 INFO nova.virt.libvirt.driver [None req-46aaf0b4-4d43-4a15-8f59-13d36da9de86 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] [instance: 27f553d7-b010-40ef-b0cb-42ff0c466354] Deleting local config drive /var/lib/nova/instances/27f553d7-b010-40ef-b0cb-42ff0c466354/disk.config because it was imported into RBD.#033[00m
Dec  6 03:00:10 np0005548731 kernel: tap2b54b935-6e: entered promiscuous mode
Dec  6 03:00:10 np0005548731 NetworkManager[49182]: <info>  [1765008010.7241] manager: (tap2b54b935-6e): new Tun device (/org/freedesktop/NetworkManager/Devices/418)
Dec  6 03:00:10 np0005548731 ovn_controller[133927]: 2025-12-06T08:00:10Z|00911|binding|INFO|Claiming lport 2b54b935-6ec8-4a68-a404-6be608d5b405 for this chassis.
Dec  6 03:00:10 np0005548731 ovn_controller[133927]: 2025-12-06T08:00:10Z|00912|binding|INFO|2b54b935-6ec8-4a68-a404-6be608d5b405: Claiming fa:16:3e:15:9d:85 10.100.0.6
Dec  6 03:00:10 np0005548731 nova_compute[232433]: 2025-12-06 08:00:10.724 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:00:10 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:00:10 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:00:10 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:00:10.737 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:00:10 np0005548731 ovn_controller[133927]: 2025-12-06T08:00:10Z|00913|binding|INFO|Setting lport 2b54b935-6ec8-4a68-a404-6be608d5b405 ovn-installed in OVS
Dec  6 03:00:10 np0005548731 nova_compute[232433]: 2025-12-06 08:00:10.746 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:00:10 np0005548731 nova_compute[232433]: 2025-12-06 08:00:10.749 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:00:10 np0005548731 systemd-machined[195355]: New machine qemu-92-instance-000000b4.
Dec  6 03:00:10 np0005548731 systemd-udevd[319259]: Network interface NamePolicy= disabled on kernel command line.
Dec  6 03:00:10 np0005548731 NetworkManager[49182]: <info>  [1765008010.7781] device (tap2b54b935-6e): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec  6 03:00:10 np0005548731 NetworkManager[49182]: <info>  [1765008010.7788] device (tap2b54b935-6e): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec  6 03:00:10 np0005548731 systemd[1]: Started Virtual Machine qemu-92-instance-000000b4.
Dec  6 03:00:10 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:00:10.967 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:15:9d:85 10.100.0.6'], port_security=['fa:16:3e:15:9d:85 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '27f553d7-b010-40ef-b0cb-42ff0c466354', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8caa40db-27da-43ab-86ca-042284636e71', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c3c0564f8e9f4af9ae5b597a275c989f', 'neutron:revision_number': '7', 'neutron:security_group_ids': 'fcb04c4c-5a67-4d1a-9338-9588ff78bd36', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.240'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3ce9c1bf-feee-480c-a359-3eaf272f4b83, chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], logical_port=2b54b935-6ec8-4a68-a404-6be608d5b405) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 03:00:10 np0005548731 ovn_controller[133927]: 2025-12-06T08:00:10Z|00914|binding|INFO|Setting lport 2b54b935-6ec8-4a68-a404-6be608d5b405 up in Southbound
Dec  6 03:00:10 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:00:10.968 143965 INFO neutron.agent.ovn.metadata.agent [-] Port 2b54b935-6ec8-4a68-a404-6be608d5b405 in datapath 8caa40db-27da-43ab-86ca-042284636e71 bound to our chassis#033[00m
Dec  6 03:00:10 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:00:10.970 143965 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 8caa40db-27da-43ab-86ca-042284636e71#033[00m
Dec  6 03:00:10 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:00:10.981 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[bc132f34-07bc-4798-907e-d540ec5e7b60]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:00:10 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:00:10.983 143965 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap8caa40db-21 in ovnmeta-8caa40db-27da-43ab-86ca-042284636e71 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Dec  6 03:00:10 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:00:10.985 236668 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap8caa40db-20 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Dec  6 03:00:10 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:00:10.985 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[0f02e34b-893b-4842-b355-2f011dee7f2d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:00:10 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:00:10.985 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[b1a1bee6-0454-4a15-8c11-b633d6e7b814]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:00:10 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:00:10.998 144079 DEBUG oslo.privsep.daemon [-] privsep: reply[b96c5ea5-6a63-4446-ab95-28f66b915e1e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:00:11 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:00:11.020 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[51e2b4d4-0e52-488e-bf7f-4b8728040b08]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:00:11 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:00:11.053 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[49e725ff-57d5-457b-a094-fe7f44afc0cf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:00:11 np0005548731 NetworkManager[49182]: <info>  [1765008011.0598] manager: (tap8caa40db-20): new Veth device (/org/freedesktop/NetworkManager/Devices/419)
Dec  6 03:00:11 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:00:11.061 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[8ba176db-ad15-478e-ac7b-1ecea42d6736]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:00:11 np0005548731 systemd-udevd[319261]: Network interface NamePolicy= disabled on kernel command line.
Dec  6 03:00:11 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:00:11.099 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[5abc02ea-881f-40e3-ad94-c4eea4ca9e8f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:00:11 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:00:11.102 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[895705b8-15c4-432f-b087-061ff69a9235]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:00:11 np0005548731 NetworkManager[49182]: <info>  [1765008011.1254] device (tap8caa40db-20): carrier: link connected
Dec  6 03:00:11 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:00:11.132 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[9130913c-ac38-4b7f-98a4-c1514b46bba7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:00:11 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:00:11.152 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[bce544ec-2b8a-4825-919c-618c77c475ce]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8caa40db-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b4:45:40'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 276], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 827313, 'reachable_time': 32286, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 319292, 'error': None, 'target': 'ovnmeta-8caa40db-27da-43ab-86ca-042284636e71', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:00:11 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:00:11.169 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[7c68b6f8-601d-48ae-a7d4-0d2de4a1f169]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feb4:4540'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 827313, 'tstamp': 827313}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 319293, 'error': None, 'target': 'ovnmeta-8caa40db-27da-43ab-86ca-042284636e71', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:00:11 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:00:11.191 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[c23866c4-607a-47ca-bde3-b76f084abab8]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8caa40db-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b4:45:40'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 276], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 827313, 'reachable_time': 32286, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 319294, 'error': None, 'target': 'ovnmeta-8caa40db-27da-43ab-86ca-042284636e71', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:00:11 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:00:11.232 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[92154cae-46b4-41f8-8c33-4af18e8dc8d1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:00:11 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:00:11.293 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[8593afe7-4919-423f-ad82-d49e34dffe69]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:00:11 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:00:11.294 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8caa40db-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 03:00:11 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:00:11.295 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  6 03:00:11 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:00:11.295 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8caa40db-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 03:00:11 np0005548731 nova_compute[232433]: 2025-12-06 08:00:11.332 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:00:11 np0005548731 kernel: tap8caa40db-20: entered promiscuous mode
Dec  6 03:00:11 np0005548731 nova_compute[232433]: 2025-12-06 08:00:11.335 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:00:11 np0005548731 NetworkManager[49182]: <info>  [1765008011.3361] manager: (tap8caa40db-20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/420)
Dec  6 03:00:11 np0005548731 ovn_controller[133927]: 2025-12-06T08:00:11Z|00915|binding|INFO|Releasing lport 13935eb5-7198-4d3b-b91f-e4e0daf2886d from this chassis (sb_readonly=0)
Dec  6 03:00:11 np0005548731 nova_compute[232433]: 2025-12-06 08:00:11.337 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:00:11 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:00:11.336 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap8caa40db-20, col_values=(('external_ids', {'iface-id': '13935eb5-7198-4d3b-b91f-e4e0daf2886d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 03:00:11 np0005548731 nova_compute[232433]: 2025-12-06 08:00:11.352 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:00:11 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:00:11.353 143965 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/8caa40db-27da-43ab-86ca-042284636e71.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/8caa40db-27da-43ab-86ca-042284636e71.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Dec  6 03:00:11 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:00:11.354 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[ffc356a3-513d-4064-b1b3-ddc005fba367]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:00:11 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:00:11.355 143965 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec  6 03:00:11 np0005548731 ovn_metadata_agent[143960]: global
Dec  6 03:00:11 np0005548731 ovn_metadata_agent[143960]:    log         /dev/log local0 debug
Dec  6 03:00:11 np0005548731 ovn_metadata_agent[143960]:    log-tag     haproxy-metadata-proxy-8caa40db-27da-43ab-86ca-042284636e71
Dec  6 03:00:11 np0005548731 ovn_metadata_agent[143960]:    user        root
Dec  6 03:00:11 np0005548731 ovn_metadata_agent[143960]:    group       root
Dec  6 03:00:11 np0005548731 ovn_metadata_agent[143960]:    maxconn     1024
Dec  6 03:00:11 np0005548731 ovn_metadata_agent[143960]:    pidfile     /var/lib/neutron/external/pids/8caa40db-27da-43ab-86ca-042284636e71.pid.haproxy
Dec  6 03:00:11 np0005548731 ovn_metadata_agent[143960]:    daemon
Dec  6 03:00:11 np0005548731 ovn_metadata_agent[143960]: 
Dec  6 03:00:11 np0005548731 ovn_metadata_agent[143960]: defaults
Dec  6 03:00:11 np0005548731 ovn_metadata_agent[143960]:    log global
Dec  6 03:00:11 np0005548731 ovn_metadata_agent[143960]:    mode http
Dec  6 03:00:11 np0005548731 ovn_metadata_agent[143960]:    option httplog
Dec  6 03:00:11 np0005548731 ovn_metadata_agent[143960]:    option dontlognull
Dec  6 03:00:11 np0005548731 ovn_metadata_agent[143960]:    option http-server-close
Dec  6 03:00:11 np0005548731 ovn_metadata_agent[143960]:    option forwardfor
Dec  6 03:00:11 np0005548731 ovn_metadata_agent[143960]:    retries                 3
Dec  6 03:00:11 np0005548731 ovn_metadata_agent[143960]:    timeout http-request    30s
Dec  6 03:00:11 np0005548731 ovn_metadata_agent[143960]:    timeout connect         30s
Dec  6 03:00:11 np0005548731 ovn_metadata_agent[143960]:    timeout client          32s
Dec  6 03:00:11 np0005548731 ovn_metadata_agent[143960]:    timeout server          32s
Dec  6 03:00:11 np0005548731 ovn_metadata_agent[143960]:    timeout http-keep-alive 30s
Dec  6 03:00:11 np0005548731 ovn_metadata_agent[143960]: 
Dec  6 03:00:11 np0005548731 ovn_metadata_agent[143960]: 
Dec  6 03:00:11 np0005548731 ovn_metadata_agent[143960]: listen listener
Dec  6 03:00:11 np0005548731 ovn_metadata_agent[143960]:    bind 169.254.169.254:80
Dec  6 03:00:11 np0005548731 ovn_metadata_agent[143960]:    server metadata /var/lib/neutron/metadata_proxy
Dec  6 03:00:11 np0005548731 ovn_metadata_agent[143960]:    http-request add-header X-OVN-Network-ID 8caa40db-27da-43ab-86ca-042284636e71
Dec  6 03:00:11 np0005548731 ovn_metadata_agent[143960]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Dec  6 03:00:11 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:00:11.356 143965 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-8caa40db-27da-43ab-86ca-042284636e71', 'env', 'PROCESS_TAG=haproxy-8caa40db-27da-43ab-86ca-042284636e71', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/8caa40db-27da-43ab-86ca-042284636e71.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Dec  6 03:00:11 np0005548731 nova_compute[232433]: 2025-12-06 08:00:11.370 232437 DEBUG nova.network.neutron [req-5efca039-17bd-4460-987a-18ed49d9c713 req-e78c5b68-7610-416b-b2cc-a4b3d0f25479 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 27f553d7-b010-40ef-b0cb-42ff0c466354] Updated VIF entry in instance network info cache for port 2b54b935-6ec8-4a68-a404-6be608d5b405. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  6 03:00:11 np0005548731 nova_compute[232433]: 2025-12-06 08:00:11.370 232437 DEBUG nova.network.neutron [req-5efca039-17bd-4460-987a-18ed49d9c713 req-e78c5b68-7610-416b-b2cc-a4b3d0f25479 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 27f553d7-b010-40ef-b0cb-42ff0c466354] Updating instance_info_cache with network_info: [{"id": "2b54b935-6ec8-4a68-a404-6be608d5b405", "address": "fa:16:3e:15:9d:85", "network": {"id": "8caa40db-27da-43ab-86ca-042284636e71", "bridge": "br-int", "label": "tempest-TestShelveInstance-1111687365-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.240", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c3c0564f8e9f4af9ae5b597a275c989f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2b54b935-6e", "ovs_interfaceid": "2b54b935-6ec8-4a68-a404-6be608d5b405", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 03:00:11 np0005548731 nova_compute[232433]: 2025-12-06 08:00:11.394 232437 DEBUG oslo_concurrency.lockutils [req-5efca039-17bd-4460-987a-18ed49d9c713 req-e78c5b68-7610-416b-b2cc-a4b3d0f25479 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-27f553d7-b010-40ef-b0cb-42ff0c466354" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec  6 03:00:11 np0005548731 nova_compute[232433]: 2025-12-06 08:00:11.416 232437 DEBUG nova.compute.manager [req-1db7003a-7e7e-4efc-ac76-208674fb26aa req-44de25ee-4b2f-40bb-9296-46d1d6d9e8f9 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: d1a8ef9c-0ce7-4841-9523-7f11435a1884] Received event network-vif-unplugged-05c9980c-d230-4d61-9d98-4586e200fac5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec  6 03:00:11 np0005548731 nova_compute[232433]: 2025-12-06 08:00:11.417 232437 DEBUG oslo_concurrency.lockutils [req-1db7003a-7e7e-4efc-ac76-208674fb26aa req-44de25ee-4b2f-40bb-9296-46d1d6d9e8f9 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "d1a8ef9c-0ce7-4841-9523-7f11435a1884-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  6 03:00:11 np0005548731 nova_compute[232433]: 2025-12-06 08:00:11.417 232437 DEBUG oslo_concurrency.lockutils [req-1db7003a-7e7e-4efc-ac76-208674fb26aa req-44de25ee-4b2f-40bb-9296-46d1d6d9e8f9 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "d1a8ef9c-0ce7-4841-9523-7f11435a1884-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  6 03:00:11 np0005548731 nova_compute[232433]: 2025-12-06 08:00:11.418 232437 DEBUG oslo_concurrency.lockutils [req-1db7003a-7e7e-4efc-ac76-208674fb26aa req-44de25ee-4b2f-40bb-9296-46d1d6d9e8f9 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "d1a8ef9c-0ce7-4841-9523-7f11435a1884-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  6 03:00:11 np0005548731 nova_compute[232433]: 2025-12-06 08:00:11.418 232437 DEBUG nova.compute.manager [req-1db7003a-7e7e-4efc-ac76-208674fb26aa req-44de25ee-4b2f-40bb-9296-46d1d6d9e8f9 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: d1a8ef9c-0ce7-4841-9523-7f11435a1884] No waiting events found dispatching network-vif-unplugged-05c9980c-d230-4d61-9d98-4586e200fac5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec  6 03:00:11 np0005548731 nova_compute[232433]: 2025-12-06 08:00:11.418 232437 WARNING nova.compute.manager [req-1db7003a-7e7e-4efc-ac76-208674fb26aa req-44de25ee-4b2f-40bb-9296-46d1d6d9e8f9 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: d1a8ef9c-0ce7-4841-9523-7f11435a1884] Received unexpected event network-vif-unplugged-05c9980c-d230-4d61-9d98-4586e200fac5 for instance with vm_state active and task_state resize_migrating.
Dec  6 03:00:11 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:00:11 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:00:11 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:00:11.449 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:00:11 np0005548731 nova_compute[232433]: 2025-12-06 08:00:11.554 232437 DEBUG nova.compute.manager [req-58c7d534-30dc-4a5a-940a-3cd33d7bd690 req-9eb0672f-73f7-4f1d-8985-8a09cc80226a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 27f553d7-b010-40ef-b0cb-42ff0c466354] Received event network-vif-plugged-2b54b935-6ec8-4a68-a404-6be608d5b405 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec  6 03:00:11 np0005548731 nova_compute[232433]: 2025-12-06 08:00:11.555 232437 DEBUG oslo_concurrency.lockutils [req-58c7d534-30dc-4a5a-940a-3cd33d7bd690 req-9eb0672f-73f7-4f1d-8985-8a09cc80226a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "27f553d7-b010-40ef-b0cb-42ff0c466354-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  6 03:00:11 np0005548731 nova_compute[232433]: 2025-12-06 08:00:11.555 232437 DEBUG oslo_concurrency.lockutils [req-58c7d534-30dc-4a5a-940a-3cd33d7bd690 req-9eb0672f-73f7-4f1d-8985-8a09cc80226a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "27f553d7-b010-40ef-b0cb-42ff0c466354-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  6 03:00:11 np0005548731 nova_compute[232433]: 2025-12-06 08:00:11.555 232437 DEBUG oslo_concurrency.lockutils [req-58c7d534-30dc-4a5a-940a-3cd33d7bd690 req-9eb0672f-73f7-4f1d-8985-8a09cc80226a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "27f553d7-b010-40ef-b0cb-42ff0c466354-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  6 03:00:11 np0005548731 nova_compute[232433]: 2025-12-06 08:00:11.555 232437 DEBUG nova.compute.manager [req-58c7d534-30dc-4a5a-940a-3cd33d7bd690 req-9eb0672f-73f7-4f1d-8985-8a09cc80226a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 27f553d7-b010-40ef-b0cb-42ff0c466354] Processing event network-vif-plugged-2b54b935-6ec8-4a68-a404-6be608d5b405 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec  6 03:00:11 np0005548731 podman[319326]: 2025-12-06 08:00:11.70718762 +0000 UTC m=+0.044381216 container create 4a6a97e5749910a24af38173541969d5e81e43ddeae6c9221f049a868082f8d3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8caa40db-27da-43ab-86ca-042284636e71, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec  6 03:00:11 np0005548731 systemd[1]: Started libpod-conmon-4a6a97e5749910a24af38173541969d5e81e43ddeae6c9221f049a868082f8d3.scope.
Dec  6 03:00:11 np0005548731 systemd[1]: Started libcrun container.
Dec  6 03:00:11 np0005548731 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5b33ecc8da489d6f5dafb4c0f4d7488ac589556eebb344f009ba0348cf859bfc/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec  6 03:00:11 np0005548731 podman[319326]: 2025-12-06 08:00:11.684568707 +0000 UTC m=+0.021762333 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec  6 03:00:11 np0005548731 podman[319326]: 2025-12-06 08:00:11.786156193 +0000 UTC m=+0.123349789 container init 4a6a97e5749910a24af38173541969d5e81e43ddeae6c9221f049a868082f8d3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8caa40db-27da-43ab-86ca-042284636e71, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125)
Dec  6 03:00:11 np0005548731 podman[319326]: 2025-12-06 08:00:11.791001261 +0000 UTC m=+0.128194857 container start 4a6a97e5749910a24af38173541969d5e81e43ddeae6c9221f049a868082f8d3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8caa40db-27da-43ab-86ca-042284636e71, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Dec  6 03:00:11 np0005548731 neutron-haproxy-ovnmeta-8caa40db-27da-43ab-86ca-042284636e71[319342]: [NOTICE]   (319346) : New worker (319348) forked
Dec  6 03:00:11 np0005548731 neutron-haproxy-ovnmeta-8caa40db-27da-43ab-86ca-042284636e71[319342]: [NOTICE]   (319346) : Loading success.
Dec  6 03:00:12 np0005548731 nova_compute[232433]: 2025-12-06 08:00:12.165 232437 INFO nova.network.neutron [None req-c0f0095b-d508-4403-a689-5b9203472ba0 1bdbfd9a9c034d4baf0368c23697a002 0280d2f586294ccf97547f8bc41590f8 - - default default] [instance: d1a8ef9c-0ce7-4841-9523-7f11435a1884] Updating port 05c9980c-d230-4d61-9d98-4586e200fac5 with attributes {'binding:host_id': 'compute-2.ctlplane.example.com', 'device_owner': 'compute:nova'}
Dec  6 03:00:12 np0005548731 nova_compute[232433]: 2025-12-06 08:00:12.558 232437 DEBUG nova.compute.manager [None req-46aaf0b4-4d43-4a15-8f59-13d36da9de86 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] [instance: 27f553d7-b010-40ef-b0cb-42ff0c466354] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec  6 03:00:12 np0005548731 nova_compute[232433]: 2025-12-06 08:00:12.559 232437 DEBUG nova.virt.driver [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Emitting event <LifecycleEvent: 1765008012.5579827, 27f553d7-b010-40ef-b0cb-42ff0c466354 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec  6 03:00:12 np0005548731 nova_compute[232433]: 2025-12-06 08:00:12.559 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 27f553d7-b010-40ef-b0cb-42ff0c466354] VM Started (Lifecycle Event)
Dec  6 03:00:12 np0005548731 nova_compute[232433]: 2025-12-06 08:00:12.562 232437 DEBUG nova.virt.libvirt.driver [None req-46aaf0b4-4d43-4a15-8f59-13d36da9de86 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] [instance: 27f553d7-b010-40ef-b0cb-42ff0c466354] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec  6 03:00:12 np0005548731 nova_compute[232433]: 2025-12-06 08:00:12.566 232437 INFO nova.virt.libvirt.driver [-] [instance: 27f553d7-b010-40ef-b0cb-42ff0c466354] Instance spawned successfully.
Dec  6 03:00:12 np0005548731 nova_compute[232433]: 2025-12-06 08:00:12.566 232437 DEBUG nova.compute.manager [None req-46aaf0b4-4d43-4a15-8f59-13d36da9de86 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] [instance: 27f553d7-b010-40ef-b0cb-42ff0c466354] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec  6 03:00:12 np0005548731 nova_compute[232433]: 2025-12-06 08:00:12.600 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 27f553d7-b010-40ef-b0cb-42ff0c466354] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec  6 03:00:12 np0005548731 nova_compute[232433]: 2025-12-06 08:00:12.604 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 27f553d7-b010-40ef-b0cb-42ff0c466354] Synchronizing instance power state after lifecycle event "Started"; current vm_state: shelved_offloaded, current task_state: spawning, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec  6 03:00:12 np0005548731 nova_compute[232433]: 2025-12-06 08:00:12.653 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 27f553d7-b010-40ef-b0cb-42ff0c466354] During sync_power_state the instance has a pending task (spawning). Skip.
Dec  6 03:00:12 np0005548731 nova_compute[232433]: 2025-12-06 08:00:12.653 232437 DEBUG nova.virt.driver [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Emitting event <LifecycleEvent: 1765008012.5599809, 27f553d7-b010-40ef-b0cb-42ff0c466354 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec  6 03:00:12 np0005548731 nova_compute[232433]: 2025-12-06 08:00:12.654 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 27f553d7-b010-40ef-b0cb-42ff0c466354] VM Paused (Lifecycle Event)
Dec  6 03:00:12 np0005548731 nova_compute[232433]: 2025-12-06 08:00:12.665 232437 DEBUG oslo_concurrency.lockutils [None req-46aaf0b4-4d43-4a15-8f59-13d36da9de86 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] Lock "27f553d7-b010-40ef-b0cb-42ff0c466354" "released" by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" :: held 17.751s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  6 03:00:12 np0005548731 nova_compute[232433]: 2025-12-06 08:00:12.675 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 27f553d7-b010-40ef-b0cb-42ff0c466354] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec  6 03:00:12 np0005548731 nova_compute[232433]: 2025-12-06 08:00:12.678 232437 DEBUG nova.virt.driver [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Emitting event <LifecycleEvent: 1765008012.5618925, 27f553d7-b010-40ef-b0cb-42ff0c466354 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec  6 03:00:12 np0005548731 nova_compute[232433]: 2025-12-06 08:00:12.678 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 27f553d7-b010-40ef-b0cb-42ff0c466354] VM Resumed (Lifecycle Event)
Dec  6 03:00:12 np0005548731 nova_compute[232433]: 2025-12-06 08:00:12.697 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 27f553d7-b010-40ef-b0cb-42ff0c466354] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec  6 03:00:12 np0005548731 nova_compute[232433]: 2025-12-06 08:00:12.700 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 27f553d7-b010-40ef-b0cb-42ff0c466354] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec  6 03:00:12 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:00:12 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:00:12 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:00:12.739 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:00:13 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:00:13 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:00:13 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:00:13.451 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:00:13 np0005548731 nova_compute[232433]: 2025-12-06 08:00:13.479 232437 DEBUG oslo_concurrency.lockutils [None req-c0f0095b-d508-4403-a689-5b9203472ba0 1bdbfd9a9c034d4baf0368c23697a002 0280d2f586294ccf97547f8bc41590f8 - - default default] Acquiring lock "refresh_cache-d1a8ef9c-0ce7-4841-9523-7f11435a1884" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec  6 03:00:13 np0005548731 nova_compute[232433]: 2025-12-06 08:00:13.479 232437 DEBUG oslo_concurrency.lockutils [None req-c0f0095b-d508-4403-a689-5b9203472ba0 1bdbfd9a9c034d4baf0368c23697a002 0280d2f586294ccf97547f8bc41590f8 - - default default] Acquired lock "refresh_cache-d1a8ef9c-0ce7-4841-9523-7f11435a1884" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec  6 03:00:13 np0005548731 nova_compute[232433]: 2025-12-06 08:00:13.479 232437 DEBUG nova.network.neutron [None req-c0f0095b-d508-4403-a689-5b9203472ba0 1bdbfd9a9c034d4baf0368c23697a002 0280d2f586294ccf97547f8bc41590f8 - - default default] [instance: d1a8ef9c-0ce7-4841-9523-7f11435a1884] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec  6 03:00:13 np0005548731 nova_compute[232433]: 2025-12-06 08:00:13.607 232437 DEBUG nova.compute.manager [req-ca50f585-bfbf-4c8d-a0e6-f58c7b5842ec req-7a6bbeb9-4e25-495d-8840-22a8c00e3b53 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: d1a8ef9c-0ce7-4841-9523-7f11435a1884] Received event network-vif-plugged-05c9980c-d230-4d61-9d98-4586e200fac5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec  6 03:00:13 np0005548731 nova_compute[232433]: 2025-12-06 08:00:13.608 232437 DEBUG oslo_concurrency.lockutils [req-ca50f585-bfbf-4c8d-a0e6-f58c7b5842ec req-7a6bbeb9-4e25-495d-8840-22a8c00e3b53 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "d1a8ef9c-0ce7-4841-9523-7f11435a1884-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  6 03:00:13 np0005548731 nova_compute[232433]: 2025-12-06 08:00:13.608 232437 DEBUG oslo_concurrency.lockutils [req-ca50f585-bfbf-4c8d-a0e6-f58c7b5842ec req-7a6bbeb9-4e25-495d-8840-22a8c00e3b53 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "d1a8ef9c-0ce7-4841-9523-7f11435a1884-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  6 03:00:13 np0005548731 nova_compute[232433]: 2025-12-06 08:00:13.608 232437 DEBUG oslo_concurrency.lockutils [req-ca50f585-bfbf-4c8d-a0e6-f58c7b5842ec req-7a6bbeb9-4e25-495d-8840-22a8c00e3b53 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "d1a8ef9c-0ce7-4841-9523-7f11435a1884-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  6 03:00:13 np0005548731 nova_compute[232433]: 2025-12-06 08:00:13.609 232437 DEBUG nova.compute.manager [req-ca50f585-bfbf-4c8d-a0e6-f58c7b5842ec req-7a6bbeb9-4e25-495d-8840-22a8c00e3b53 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: d1a8ef9c-0ce7-4841-9523-7f11435a1884] No waiting events found dispatching network-vif-plugged-05c9980c-d230-4d61-9d98-4586e200fac5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec  6 03:00:13 np0005548731 nova_compute[232433]: 2025-12-06 08:00:13.609 232437 WARNING nova.compute.manager [req-ca50f585-bfbf-4c8d-a0e6-f58c7b5842ec req-7a6bbeb9-4e25-495d-8840-22a8c00e3b53 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: d1a8ef9c-0ce7-4841-9523-7f11435a1884] Received unexpected event network-vif-plugged-05c9980c-d230-4d61-9d98-4586e200fac5 for instance with vm_state active and task_state resize_migrated.
Dec  6 03:00:13 np0005548731 nova_compute[232433]: 2025-12-06 08:00:13.636 232437 DEBUG nova.compute.manager [req-b1a1fa4a-84bc-4138-8022-fbbc424b4ec9 req-2d74281c-c30e-444f-bab0-3c8c1aac4a9f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: d1a8ef9c-0ce7-4841-9523-7f11435a1884] Received event network-changed-05c9980c-d230-4d61-9d98-4586e200fac5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec  6 03:00:13 np0005548731 nova_compute[232433]: 2025-12-06 08:00:13.637 232437 DEBUG nova.compute.manager [req-b1a1fa4a-84bc-4138-8022-fbbc424b4ec9 req-2d74281c-c30e-444f-bab0-3c8c1aac4a9f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: d1a8ef9c-0ce7-4841-9523-7f11435a1884] Refreshing instance network info cache due to event network-changed-05c9980c-d230-4d61-9d98-4586e200fac5. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec  6 03:00:13 np0005548731 nova_compute[232433]: 2025-12-06 08:00:13.637 232437 DEBUG oslo_concurrency.lockutils [req-b1a1fa4a-84bc-4138-8022-fbbc424b4ec9 req-2d74281c-c30e-444f-bab0-3c8c1aac4a9f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-d1a8ef9c-0ce7-4841-9523-7f11435a1884" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec  6 03:00:13 np0005548731 nova_compute[232433]: 2025-12-06 08:00:13.860 232437 DEBUG nova.compute.manager [req-8703bd97-24a6-4453-bcf6-c90ec1566e14 req-0c3ec5d3-2ab4-4aae-bb1f-1e1f56a3badd 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 27f553d7-b010-40ef-b0cb-42ff0c466354] Received event network-vif-plugged-2b54b935-6ec8-4a68-a404-6be608d5b405 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec  6 03:00:13 np0005548731 nova_compute[232433]: 2025-12-06 08:00:13.861 232437 DEBUG oslo_concurrency.lockutils [req-8703bd97-24a6-4453-bcf6-c90ec1566e14 req-0c3ec5d3-2ab4-4aae-bb1f-1e1f56a3badd 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "27f553d7-b010-40ef-b0cb-42ff0c466354-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  6 03:00:13 np0005548731 nova_compute[232433]: 2025-12-06 08:00:13.861 232437 DEBUG oslo_concurrency.lockutils [req-8703bd97-24a6-4453-bcf6-c90ec1566e14 req-0c3ec5d3-2ab4-4aae-bb1f-1e1f56a3badd 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "27f553d7-b010-40ef-b0cb-42ff0c466354-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  6 03:00:13 np0005548731 nova_compute[232433]: 2025-12-06 08:00:13.861 232437 DEBUG oslo_concurrency.lockutils [req-8703bd97-24a6-4453-bcf6-c90ec1566e14 req-0c3ec5d3-2ab4-4aae-bb1f-1e1f56a3badd 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "27f553d7-b010-40ef-b0cb-42ff0c466354-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  6 03:00:13 np0005548731 nova_compute[232433]: 2025-12-06 08:00:13.862 232437 DEBUG nova.compute.manager [req-8703bd97-24a6-4453-bcf6-c90ec1566e14 req-0c3ec5d3-2ab4-4aae-bb1f-1e1f56a3badd 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 27f553d7-b010-40ef-b0cb-42ff0c466354] No waiting events found dispatching network-vif-plugged-2b54b935-6ec8-4a68-a404-6be608d5b405 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec  6 03:00:13 np0005548731 nova_compute[232433]: 2025-12-06 08:00:13.862 232437 WARNING nova.compute.manager [req-8703bd97-24a6-4453-bcf6-c90ec1566e14 req-0c3ec5d3-2ab4-4aae-bb1f-1e1f56a3badd 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 27f553d7-b010-40ef-b0cb-42ff0c466354] Received unexpected event network-vif-plugged-2b54b935-6ec8-4a68-a404-6be608d5b405 for instance with vm_state active and task_state None.
Dec  6 03:00:14 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:00:14 np0005548731 nova_compute[232433]: 2025-12-06 08:00:14.588 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 03:00:14 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:00:14 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:00:14 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:00:14.742 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:00:15 np0005548731 nova_compute[232433]: 2025-12-06 08:00:15.211 232437 DEBUG nova.network.neutron [None req-c0f0095b-d508-4403-a689-5b9203472ba0 1bdbfd9a9c034d4baf0368c23697a002 0280d2f586294ccf97547f8bc41590f8 - - default default] [instance: d1a8ef9c-0ce7-4841-9523-7f11435a1884] Updating instance_info_cache with network_info: [{"id": "05c9980c-d230-4d61-9d98-4586e200fac5", "address": "fa:16:3e:28:35:99", "network": {"id": "ecf01de9-e04e-423a-b106-dcf22b107dc4", "bridge": "br-int", "label": "tempest-network-smoke--425482001", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.250", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fd8e24e430c64364ace789d88a68ba5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05c9980c-d2", "ovs_interfaceid": "05c9980c-d230-4d61-9d98-4586e200fac5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec  6 03:00:15 np0005548731 nova_compute[232433]: 2025-12-06 08:00:15.283 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 03:00:15 np0005548731 ovn_controller[133927]: 2025-12-06T08:00:15Z|00916|memory_trim|INFO|Detected inactivity (last active 30003 ms ago): trimming memory
Dec  6 03:00:15 np0005548731 nova_compute[232433]: 2025-12-06 08:00:15.408 232437 DEBUG oslo_concurrency.lockutils [None req-c0f0095b-d508-4403-a689-5b9203472ba0 1bdbfd9a9c034d4baf0368c23697a002 0280d2f586294ccf97547f8bc41590f8 - - default default] Releasing lock "refresh_cache-d1a8ef9c-0ce7-4841-9523-7f11435a1884" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec  6 03:00:15 np0005548731 nova_compute[232433]: 2025-12-06 08:00:15.412 232437 DEBUG oslo_concurrency.lockutils [req-b1a1fa4a-84bc-4138-8022-fbbc424b4ec9 req-2d74281c-c30e-444f-bab0-3c8c1aac4a9f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-d1a8ef9c-0ce7-4841-9523-7f11435a1884" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec  6 03:00:15 np0005548731 nova_compute[232433]: 2025-12-06 08:00:15.412 232437 DEBUG nova.network.neutron [req-b1a1fa4a-84bc-4138-8022-fbbc424b4ec9 req-2d74281c-c30e-444f-bab0-3c8c1aac4a9f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: d1a8ef9c-0ce7-4841-9523-7f11435a1884] Refreshing network info cache for port 05c9980c-d230-4d61-9d98-4586e200fac5 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec  6 03:00:15 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:00:15 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:00:15 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:00:15.453 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:00:15 np0005548731 nova_compute[232433]: 2025-12-06 08:00:15.691 232437 DEBUG nova.virt.libvirt.driver [None req-c0f0095b-d508-4403-a689-5b9203472ba0 1bdbfd9a9c034d4baf0368c23697a002 0280d2f586294ccf97547f8bc41590f8 - - default default] [instance: d1a8ef9c-0ce7-4841-9523-7f11435a1884] Starting finish_migration finish_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11698
Dec  6 03:00:15 np0005548731 nova_compute[232433]: 2025-12-06 08:00:15.693 232437 DEBUG nova.virt.libvirt.driver [None req-c0f0095b-d508-4403-a689-5b9203472ba0 1bdbfd9a9c034d4baf0368c23697a002 0280d2f586294ccf97547f8bc41590f8 - - default default] [instance: d1a8ef9c-0ce7-4841-9523-7f11435a1884] Instance directory exists: not creating _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4719
Dec  6 03:00:15 np0005548731 nova_compute[232433]: 2025-12-06 08:00:15.693 232437 INFO nova.virt.libvirt.driver [None req-c0f0095b-d508-4403-a689-5b9203472ba0 1bdbfd9a9c034d4baf0368c23697a002 0280d2f586294ccf97547f8bc41590f8 - - default default] [instance: d1a8ef9c-0ce7-4841-9523-7f11435a1884] Creating image(s)
Dec  6 03:00:15 np0005548731 nova_compute[232433]: 2025-12-06 08:00:15.732 232437 DEBUG nova.storage.rbd_utils [None req-c0f0095b-d508-4403-a689-5b9203472ba0 1bdbfd9a9c034d4baf0368c23697a002 0280d2f586294ccf97547f8bc41590f8 - - default default] creating snapshot(nova-resize) on rbd image(d1a8ef9c-0ce7-4841-9523-7f11435a1884_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Dec  6 03:00:16 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e408 e408: 3 total, 3 up, 3 in
Dec  6 03:00:16 np0005548731 nova_compute[232433]: 2025-12-06 08:00:16.715 232437 DEBUG nova.objects.instance [None req-c0f0095b-d508-4403-a689-5b9203472ba0 1bdbfd9a9c034d4baf0368c23697a002 0280d2f586294ccf97547f8bc41590f8 - - default default] Lazy-loading 'trusted_certs' on Instance uuid d1a8ef9c-0ce7-4841-9523-7f11435a1884 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec  6 03:00:16 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:00:16 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:00:16 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:00:16.744 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:00:16 np0005548731 nova_compute[232433]: 2025-12-06 08:00:16.830 232437 DEBUG nova.virt.libvirt.driver [None req-c0f0095b-d508-4403-a689-5b9203472ba0 1bdbfd9a9c034d4baf0368c23697a002 0280d2f586294ccf97547f8bc41590f8 - - default default] [instance: d1a8ef9c-0ce7-4841-9523-7f11435a1884] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859
Dec  6 03:00:16 np0005548731 nova_compute[232433]: 2025-12-06 08:00:16.830 232437 DEBUG nova.virt.libvirt.driver [None req-c0f0095b-d508-4403-a689-5b9203472ba0 1bdbfd9a9c034d4baf0368c23697a002 0280d2f586294ccf97547f8bc41590f8 - - default default] [instance: d1a8ef9c-0ce7-4841-9523-7f11435a1884] Ensure instance console log exists: /var/lib/nova/instances/d1a8ef9c-0ce7-4841-9523-7f11435a1884/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec  6 03:00:16 np0005548731 nova_compute[232433]: 2025-12-06 08:00:16.831 232437 DEBUG oslo_concurrency.lockutils [None req-c0f0095b-d508-4403-a689-5b9203472ba0 1bdbfd9a9c034d4baf0368c23697a002 0280d2f586294ccf97547f8bc41590f8 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  6 03:00:16 np0005548731 nova_compute[232433]: 2025-12-06 08:00:16.831 232437 DEBUG oslo_concurrency.lockutils [None req-c0f0095b-d508-4403-a689-5b9203472ba0 1bdbfd9a9c034d4baf0368c23697a002 0280d2f586294ccf97547f8bc41590f8 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  6 03:00:16 np0005548731 nova_compute[232433]: 2025-12-06 08:00:16.831 232437 DEBUG oslo_concurrency.lockutils [None req-c0f0095b-d508-4403-a689-5b9203472ba0 1bdbfd9a9c034d4baf0368c23697a002 0280d2f586294ccf97547f8bc41590f8 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  6 03:00:16 np0005548731 nova_compute[232433]: 2025-12-06 08:00:16.834 232437 DEBUG nova.virt.libvirt.driver [None req-c0f0095b-d508-4403-a689-5b9203472ba0 1bdbfd9a9c034d4baf0368c23697a002 0280d2f586294ccf97547f8bc41590f8 - - default default] [instance: d1a8ef9c-0ce7-4841-9523-7f11435a1884] Start _get_guest_xml network_info=[{"id": "05c9980c-d230-4d61-9d98-4586e200fac5", "address": "fa:16:3e:28:35:99", "network": {"id": "ecf01de9-e04e-423a-b106-dcf22b107dc4", "bridge": "br-int", "label": "tempest-network-smoke--425482001", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.250", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-network-smoke--425482001", "vif_mac": "fa:16:3e:28:35:99"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fd8e24e430c64364ace789d88a68ba5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05c9980c-d2", "ovs_interfaceid": "05c9980c-d230-4d61-9d98-4586e200fac5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'device_type': 'disk', 'encryption_secret_uuid': None, 'disk_bus': 'virtio', 'boot_index': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'encryption_options': None, 'size': 0, 'encrypted': False, 'image_id': '6efab05d-c7cf-4770-a5c3-c806a2739063'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec  6 03:00:16 np0005548731 nova_compute[232433]: 2025-12-06 08:00:16.838 232437 WARNING nova.virt.libvirt.driver [None req-c0f0095b-d508-4403-a689-5b9203472ba0 1bdbfd9a9c034d4baf0368c23697a002 0280d2f586294ccf97547f8bc41590f8 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  6 03:00:16 np0005548731 nova_compute[232433]: 2025-12-06 08:00:16.842 232437 DEBUG nova.virt.libvirt.host [None req-c0f0095b-d508-4403-a689-5b9203472ba0 1bdbfd9a9c034d4baf0368c23697a002 0280d2f586294ccf97547f8bc41590f8 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec  6 03:00:16 np0005548731 nova_compute[232433]: 2025-12-06 08:00:16.843 232437 DEBUG nova.virt.libvirt.host [None req-c0f0095b-d508-4403-a689-5b9203472ba0 1bdbfd9a9c034d4baf0368c23697a002 0280d2f586294ccf97547f8bc41590f8 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec  6 03:00:16 np0005548731 nova_compute[232433]: 2025-12-06 08:00:16.845 232437 DEBUG nova.virt.libvirt.host [None req-c0f0095b-d508-4403-a689-5b9203472ba0 1bdbfd9a9c034d4baf0368c23697a002 0280d2f586294ccf97547f8bc41590f8 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec  6 03:00:16 np0005548731 nova_compute[232433]: 2025-12-06 08:00:16.846 232437 DEBUG nova.virt.libvirt.host [None req-c0f0095b-d508-4403-a689-5b9203472ba0 1bdbfd9a9c034d4baf0368c23697a002 0280d2f586294ccf97547f8bc41590f8 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec  6 03:00:16 np0005548731 nova_compute[232433]: 2025-12-06 08:00:16.847 232437 DEBUG nova.virt.libvirt.driver [None req-c0f0095b-d508-4403-a689-5b9203472ba0 1bdbfd9a9c034d4baf0368c23697a002 0280d2f586294ccf97547f8bc41590f8 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec  6 03:00:16 np0005548731 nova_compute[232433]: 2025-12-06 08:00:16.847 232437 DEBUG nova.virt.hardware [None req-c0f0095b-d508-4403-a689-5b9203472ba0 1bdbfd9a9c034d4baf0368c23697a002 0280d2f586294ccf97547f8bc41590f8 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-06T06:56:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='25848a18-11d9-4f11-80b5-5d005675c76d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec  6 03:00:16 np0005548731 nova_compute[232433]: 2025-12-06 08:00:16.847 232437 DEBUG nova.virt.hardware [None req-c0f0095b-d508-4403-a689-5b9203472ba0 1bdbfd9a9c034d4baf0368c23697a002 0280d2f586294ccf97547f8bc41590f8 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec  6 03:00:16 np0005548731 nova_compute[232433]: 2025-12-06 08:00:16.848 232437 DEBUG nova.virt.hardware [None req-c0f0095b-d508-4403-a689-5b9203472ba0 1bdbfd9a9c034d4baf0368c23697a002 0280d2f586294ccf97547f8bc41590f8 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec  6 03:00:16 np0005548731 nova_compute[232433]: 2025-12-06 08:00:16.848 232437 DEBUG nova.virt.hardware [None req-c0f0095b-d508-4403-a689-5b9203472ba0 1bdbfd9a9c034d4baf0368c23697a002 0280d2f586294ccf97547f8bc41590f8 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec  6 03:00:16 np0005548731 nova_compute[232433]: 2025-12-06 08:00:16.848 232437 DEBUG nova.virt.hardware [None req-c0f0095b-d508-4403-a689-5b9203472ba0 1bdbfd9a9c034d4baf0368c23697a002 0280d2f586294ccf97547f8bc41590f8 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec  6 03:00:16 np0005548731 nova_compute[232433]: 2025-12-06 08:00:16.848 232437 DEBUG nova.virt.hardware [None req-c0f0095b-d508-4403-a689-5b9203472ba0 1bdbfd9a9c034d4baf0368c23697a002 0280d2f586294ccf97547f8bc41590f8 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec  6 03:00:16 np0005548731 nova_compute[232433]: 2025-12-06 08:00:16.848 232437 DEBUG nova.virt.hardware [None req-c0f0095b-d508-4403-a689-5b9203472ba0 1bdbfd9a9c034d4baf0368c23697a002 0280d2f586294ccf97547f8bc41590f8 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec  6 03:00:16 np0005548731 nova_compute[232433]: 2025-12-06 08:00:16.849 232437 DEBUG nova.virt.hardware [None req-c0f0095b-d508-4403-a689-5b9203472ba0 1bdbfd9a9c034d4baf0368c23697a002 0280d2f586294ccf97547f8bc41590f8 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec  6 03:00:16 np0005548731 nova_compute[232433]: 2025-12-06 08:00:16.849 232437 DEBUG nova.virt.hardware [None req-c0f0095b-d508-4403-a689-5b9203472ba0 1bdbfd9a9c034d4baf0368c23697a002 0280d2f586294ccf97547f8bc41590f8 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec  6 03:00:16 np0005548731 nova_compute[232433]: 2025-12-06 08:00:16.849 232437 DEBUG nova.virt.hardware [None req-c0f0095b-d508-4403-a689-5b9203472ba0 1bdbfd9a9c034d4baf0368c23697a002 0280d2f586294ccf97547f8bc41590f8 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec  6 03:00:16 np0005548731 nova_compute[232433]: 2025-12-06 08:00:16.849 232437 DEBUG nova.virt.hardware [None req-c0f0095b-d508-4403-a689-5b9203472ba0 1bdbfd9a9c034d4baf0368c23697a002 0280d2f586294ccf97547f8bc41590f8 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec  6 03:00:16 np0005548731 nova_compute[232433]: 2025-12-06 08:00:16.849 232437 DEBUG nova.objects.instance [None req-c0f0095b-d508-4403-a689-5b9203472ba0 1bdbfd9a9c034d4baf0368c23697a002 0280d2f586294ccf97547f8bc41590f8 - - default default] Lazy-loading 'vcpu_model' on Instance uuid d1a8ef9c-0ce7-4841-9523-7f11435a1884 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 03:00:16 np0005548731 nova_compute[232433]: 2025-12-06 08:00:16.866 232437 DEBUG oslo_concurrency.processutils [None req-c0f0095b-d508-4403-a689-5b9203472ba0 1bdbfd9a9c034d4baf0368c23697a002 0280d2f586294ccf97547f8bc41590f8 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 03:00:17 np0005548731 systemd[1]: Stopping User Manager for UID 42436...
Dec  6 03:00:17 np0005548731 systemd[319012]: Activating special unit Exit the Session...
Dec  6 03:00:17 np0005548731 systemd[319012]: Stopped target Main User Target.
Dec  6 03:00:17 np0005548731 systemd[319012]: Stopped target Basic System.
Dec  6 03:00:17 np0005548731 systemd[319012]: Stopped target Paths.
Dec  6 03:00:17 np0005548731 systemd[319012]: Stopped target Sockets.
Dec  6 03:00:17 np0005548731 systemd[319012]: Stopped target Timers.
Dec  6 03:00:17 np0005548731 systemd[319012]: Stopped Mark boot as successful after the user session has run 2 minutes.
Dec  6 03:00:17 np0005548731 systemd[319012]: Stopped Daily Cleanup of User's Temporary Directories.
Dec  6 03:00:17 np0005548731 systemd[319012]: Closed D-Bus User Message Bus Socket.
Dec  6 03:00:17 np0005548731 systemd[319012]: Stopped Create User's Volatile Files and Directories.
Dec  6 03:00:17 np0005548731 systemd[319012]: Removed slice User Application Slice.
Dec  6 03:00:17 np0005548731 systemd[319012]: Reached target Shutdown.
Dec  6 03:00:17 np0005548731 systemd[319012]: Finished Exit the Session.
Dec  6 03:00:17 np0005548731 systemd[319012]: Reached target Exit the Session.
Dec  6 03:00:17 np0005548731 systemd[1]: user@42436.service: Deactivated successfully.
Dec  6 03:00:17 np0005548731 systemd[1]: Stopped User Manager for UID 42436.
Dec  6 03:00:17 np0005548731 systemd[1]: Stopping User Runtime Directory /run/user/42436...
Dec  6 03:00:17 np0005548731 systemd[1]: run-user-42436.mount: Deactivated successfully.
Dec  6 03:00:17 np0005548731 systemd[1]: user-runtime-dir@42436.service: Deactivated successfully.
Dec  6 03:00:17 np0005548731 systemd[1]: Stopped User Runtime Directory /run/user/42436.
Dec  6 03:00:17 np0005548731 systemd[1]: Removed slice User Slice of UID 42436.
Dec  6 03:00:17 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  6 03:00:17 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3011078489' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  6 03:00:17 np0005548731 nova_compute[232433]: 2025-12-06 08:00:17.326 232437 DEBUG oslo_concurrency.processutils [None req-c0f0095b-d508-4403-a689-5b9203472ba0 1bdbfd9a9c034d4baf0368c23697a002 0280d2f586294ccf97547f8bc41590f8 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.460s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 03:00:17 np0005548731 nova_compute[232433]: 2025-12-06 08:00:17.362 232437 DEBUG oslo_concurrency.processutils [None req-c0f0095b-d508-4403-a689-5b9203472ba0 1bdbfd9a9c034d4baf0368c23697a002 0280d2f586294ccf97547f8bc41590f8 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 03:00:17 np0005548731 nova_compute[232433]: 2025-12-06 08:00:17.392 232437 DEBUG nova.network.neutron [req-b1a1fa4a-84bc-4138-8022-fbbc424b4ec9 req-2d74281c-c30e-444f-bab0-3c8c1aac4a9f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: d1a8ef9c-0ce7-4841-9523-7f11435a1884] Updated VIF entry in instance network info cache for port 05c9980c-d230-4d61-9d98-4586e200fac5. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  6 03:00:17 np0005548731 nova_compute[232433]: 2025-12-06 08:00:17.393 232437 DEBUG nova.network.neutron [req-b1a1fa4a-84bc-4138-8022-fbbc424b4ec9 req-2d74281c-c30e-444f-bab0-3c8c1aac4a9f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: d1a8ef9c-0ce7-4841-9523-7f11435a1884] Updating instance_info_cache with network_info: [{"id": "05c9980c-d230-4d61-9d98-4586e200fac5", "address": "fa:16:3e:28:35:99", "network": {"id": "ecf01de9-e04e-423a-b106-dcf22b107dc4", "bridge": "br-int", "label": "tempest-network-smoke--425482001", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.250", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fd8e24e430c64364ace789d88a68ba5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05c9980c-d2", "ovs_interfaceid": "05c9980c-d230-4d61-9d98-4586e200fac5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 03:00:17 np0005548731 nova_compute[232433]: 2025-12-06 08:00:17.425 232437 DEBUG oslo_concurrency.lockutils [req-b1a1fa4a-84bc-4138-8022-fbbc424b4ec9 req-2d74281c-c30e-444f-bab0-3c8c1aac4a9f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-d1a8ef9c-0ce7-4841-9523-7f11435a1884" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 03:00:17 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:00:17 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:00:17 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:00:17.454 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:00:17 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  6 03:00:17 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/188999392' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  6 03:00:17 np0005548731 nova_compute[232433]: 2025-12-06 08:00:17.795 232437 DEBUG oslo_concurrency.processutils [None req-c0f0095b-d508-4403-a689-5b9203472ba0 1bdbfd9a9c034d4baf0368c23697a002 0280d2f586294ccf97547f8bc41590f8 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.433s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 03:00:17 np0005548731 nova_compute[232433]: 2025-12-06 08:00:17.796 232437 DEBUG nova.virt.libvirt.vif [None req-c0f0095b-d508-4403-a689-5b9203472ba0 1bdbfd9a9c034d4baf0368c23697a002 0280d2f586294ccf97547f8bc41590f8 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T07:58:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1262973925',display_name='tempest-TestNetworkAdvancedServerOps-server-1262973925',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1262973925',id=181,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHajJo6sYKmo7BjEFfbTegHFFaysH3CPUR6yuP2Rayw3S9ts1Wd6TY6anx2QtLxK6yp4z4nQqn7Ss4CGPtBiZQsZd5U8dFeDqjYG81KqlV6e9SPXI48qB0u9ty6SGnMpqw==',key_name='tempest-TestNetworkAdvancedServerOps-720860597',keypairs=<?>,launch_index=0,launched_at=2025-12-06T07:59:19Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=MigrationContext,new_flavor=Flavor(1),node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=Flavor(1),os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='fd8e24e430c64364ace789d88a68ba5f',ramdisk_id='',reservation_id='r-1koojzsy',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=ServiceList,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-TestNetworkAdvancedServerOps-1171852383',owner_user_name='tempest-TestNetworkAdvancedServerOps-1171852383-project-member'},tags=<?>,task_state='resize_finish',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T08:00:11Z,user_data=None,user_id='2ed2d17026504d70b893923a85cece4d',uuid=d1a8ef9c-0ce7-4841-9523-7f11435a1884,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "05c9980c-d230-4d61-9d98-4586e200fac5", "address": "fa:16:3e:28:35:99", "network": {"id": "ecf01de9-e04e-423a-b106-dcf22b107dc4", "bridge": "br-int", "label": "tempest-network-smoke--425482001", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": 
{"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.250", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-network-smoke--425482001", "vif_mac": "fa:16:3e:28:35:99"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fd8e24e430c64364ace789d88a68ba5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05c9980c-d2", "ovs_interfaceid": "05c9980c-d230-4d61-9d98-4586e200fac5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec  6 03:00:17 np0005548731 nova_compute[232433]: 2025-12-06 08:00:17.797 232437 DEBUG nova.network.os_vif_util [None req-c0f0095b-d508-4403-a689-5b9203472ba0 1bdbfd9a9c034d4baf0368c23697a002 0280d2f586294ccf97547f8bc41590f8 - - default default] Converting VIF {"id": "05c9980c-d230-4d61-9d98-4586e200fac5", "address": "fa:16:3e:28:35:99", "network": {"id": "ecf01de9-e04e-423a-b106-dcf22b107dc4", "bridge": "br-int", "label": "tempest-network-smoke--425482001", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.250", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-network-smoke--425482001", "vif_mac": "fa:16:3e:28:35:99"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fd8e24e430c64364ace789d88a68ba5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05c9980c-d2", "ovs_interfaceid": "05c9980c-d230-4d61-9d98-4586e200fac5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 03:00:17 np0005548731 nova_compute[232433]: 2025-12-06 08:00:17.798 232437 DEBUG nova.network.os_vif_util [None req-c0f0095b-d508-4403-a689-5b9203472ba0 1bdbfd9a9c034d4baf0368c23697a002 0280d2f586294ccf97547f8bc41590f8 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:28:35:99,bridge_name='br-int',has_traffic_filtering=True,id=05c9980c-d230-4d61-9d98-4586e200fac5,network=Network(ecf01de9-e04e-423a-b106-dcf22b107dc4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap05c9980c-d2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 03:00:17 np0005548731 nova_compute[232433]: 2025-12-06 08:00:17.800 232437 DEBUG nova.virt.libvirt.driver [None req-c0f0095b-d508-4403-a689-5b9203472ba0 1bdbfd9a9c034d4baf0368c23697a002 0280d2f586294ccf97547f8bc41590f8 - - default default] [instance: d1a8ef9c-0ce7-4841-9523-7f11435a1884] End _get_guest_xml xml=<domain type="kvm">
Dec  6 03:00:17 np0005548731 nova_compute[232433]:  <uuid>d1a8ef9c-0ce7-4841-9523-7f11435a1884</uuid>
Dec  6 03:00:17 np0005548731 nova_compute[232433]:  <name>instance-000000b5</name>
Dec  6 03:00:17 np0005548731 nova_compute[232433]:  <memory>131072</memory>
Dec  6 03:00:17 np0005548731 nova_compute[232433]:  <vcpu>1</vcpu>
Dec  6 03:00:17 np0005548731 nova_compute[232433]:  <metadata>
Dec  6 03:00:17 np0005548731 nova_compute[232433]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec  6 03:00:17 np0005548731 nova_compute[232433]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec  6 03:00:17 np0005548731 nova_compute[232433]:      <nova:name>tempest-TestNetworkAdvancedServerOps-server-1262973925</nova:name>
Dec  6 03:00:17 np0005548731 nova_compute[232433]:      <nova:creationTime>2025-12-06 08:00:16</nova:creationTime>
Dec  6 03:00:17 np0005548731 nova_compute[232433]:      <nova:flavor name="m1.nano">
Dec  6 03:00:17 np0005548731 nova_compute[232433]:        <nova:memory>128</nova:memory>
Dec  6 03:00:17 np0005548731 nova_compute[232433]:        <nova:disk>1</nova:disk>
Dec  6 03:00:17 np0005548731 nova_compute[232433]:        <nova:swap>0</nova:swap>
Dec  6 03:00:17 np0005548731 nova_compute[232433]:        <nova:ephemeral>0</nova:ephemeral>
Dec  6 03:00:17 np0005548731 nova_compute[232433]:        <nova:vcpus>1</nova:vcpus>
Dec  6 03:00:17 np0005548731 nova_compute[232433]:      </nova:flavor>
Dec  6 03:00:17 np0005548731 nova_compute[232433]:      <nova:owner>
Dec  6 03:00:17 np0005548731 nova_compute[232433]:        <nova:user uuid="2ed2d17026504d70b893923a85cece4d">tempest-TestNetworkAdvancedServerOps-1171852383-project-member</nova:user>
Dec  6 03:00:17 np0005548731 nova_compute[232433]:        <nova:project uuid="fd8e24e430c64364ace789d88a68ba5f">tempest-TestNetworkAdvancedServerOps-1171852383</nova:project>
Dec  6 03:00:17 np0005548731 nova_compute[232433]:      </nova:owner>
Dec  6 03:00:17 np0005548731 nova_compute[232433]:      <nova:root type="image" uuid="6efab05d-c7cf-4770-a5c3-c806a2739063"/>
Dec  6 03:00:17 np0005548731 nova_compute[232433]:      <nova:ports>
Dec  6 03:00:17 np0005548731 nova_compute[232433]:        <nova:port uuid="05c9980c-d230-4d61-9d98-4586e200fac5">
Dec  6 03:00:17 np0005548731 nova_compute[232433]:          <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Dec  6 03:00:17 np0005548731 nova_compute[232433]:        </nova:port>
Dec  6 03:00:17 np0005548731 nova_compute[232433]:      </nova:ports>
Dec  6 03:00:17 np0005548731 nova_compute[232433]:    </nova:instance>
Dec  6 03:00:17 np0005548731 nova_compute[232433]:  </metadata>
Dec  6 03:00:17 np0005548731 nova_compute[232433]:  <sysinfo type="smbios">
Dec  6 03:00:17 np0005548731 nova_compute[232433]:    <system>
Dec  6 03:00:17 np0005548731 nova_compute[232433]:      <entry name="manufacturer">RDO</entry>
Dec  6 03:00:17 np0005548731 nova_compute[232433]:      <entry name="product">OpenStack Compute</entry>
Dec  6 03:00:17 np0005548731 nova_compute[232433]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec  6 03:00:17 np0005548731 nova_compute[232433]:      <entry name="serial">d1a8ef9c-0ce7-4841-9523-7f11435a1884</entry>
Dec  6 03:00:17 np0005548731 nova_compute[232433]:      <entry name="uuid">d1a8ef9c-0ce7-4841-9523-7f11435a1884</entry>
Dec  6 03:00:17 np0005548731 nova_compute[232433]:      <entry name="family">Virtual Machine</entry>
Dec  6 03:00:17 np0005548731 nova_compute[232433]:    </system>
Dec  6 03:00:17 np0005548731 nova_compute[232433]:  </sysinfo>
Dec  6 03:00:17 np0005548731 nova_compute[232433]:  <os>
Dec  6 03:00:17 np0005548731 nova_compute[232433]:    <type arch="x86_64" machine="q35">hvm</type>
Dec  6 03:00:17 np0005548731 nova_compute[232433]:    <boot dev="hd"/>
Dec  6 03:00:17 np0005548731 nova_compute[232433]:    <smbios mode="sysinfo"/>
Dec  6 03:00:17 np0005548731 nova_compute[232433]:  </os>
Dec  6 03:00:17 np0005548731 nova_compute[232433]:  <features>
Dec  6 03:00:17 np0005548731 nova_compute[232433]:    <acpi/>
Dec  6 03:00:17 np0005548731 nova_compute[232433]:    <apic/>
Dec  6 03:00:17 np0005548731 nova_compute[232433]:    <vmcoreinfo/>
Dec  6 03:00:17 np0005548731 nova_compute[232433]:  </features>
Dec  6 03:00:17 np0005548731 nova_compute[232433]:  <clock offset="utc">
Dec  6 03:00:17 np0005548731 nova_compute[232433]:    <timer name="pit" tickpolicy="delay"/>
Dec  6 03:00:17 np0005548731 nova_compute[232433]:    <timer name="rtc" tickpolicy="catchup"/>
Dec  6 03:00:17 np0005548731 nova_compute[232433]:    <timer name="hpet" present="no"/>
Dec  6 03:00:17 np0005548731 nova_compute[232433]:  </clock>
Dec  6 03:00:17 np0005548731 nova_compute[232433]:  <cpu mode="custom" match="exact">
Dec  6 03:00:17 np0005548731 nova_compute[232433]:    <model>Nehalem</model>
Dec  6 03:00:17 np0005548731 nova_compute[232433]:    <topology sockets="1" cores="1" threads="1"/>
Dec  6 03:00:17 np0005548731 nova_compute[232433]:  </cpu>
Dec  6 03:00:17 np0005548731 nova_compute[232433]:  <devices>
Dec  6 03:00:17 np0005548731 nova_compute[232433]:    <disk type="network" device="disk">
Dec  6 03:00:17 np0005548731 nova_compute[232433]:      <driver type="raw" cache="none"/>
Dec  6 03:00:17 np0005548731 nova_compute[232433]:      <source protocol="rbd" name="vms/d1a8ef9c-0ce7-4841-9523-7f11435a1884_disk">
Dec  6 03:00:17 np0005548731 nova_compute[232433]:        <host name="192.168.122.100" port="6789"/>
Dec  6 03:00:17 np0005548731 nova_compute[232433]:        <host name="192.168.122.102" port="6789"/>
Dec  6 03:00:17 np0005548731 nova_compute[232433]:        <host name="192.168.122.101" port="6789"/>
Dec  6 03:00:17 np0005548731 nova_compute[232433]:      </source>
Dec  6 03:00:17 np0005548731 nova_compute[232433]:      <auth username="openstack">
Dec  6 03:00:17 np0005548731 nova_compute[232433]:        <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec  6 03:00:17 np0005548731 nova_compute[232433]:      </auth>
Dec  6 03:00:17 np0005548731 nova_compute[232433]:      <target dev="vda" bus="virtio"/>
Dec  6 03:00:17 np0005548731 nova_compute[232433]:    </disk>
Dec  6 03:00:17 np0005548731 nova_compute[232433]:    <disk type="network" device="cdrom">
Dec  6 03:00:17 np0005548731 nova_compute[232433]:      <driver type="raw" cache="none"/>
Dec  6 03:00:17 np0005548731 nova_compute[232433]:      <source protocol="rbd" name="vms/d1a8ef9c-0ce7-4841-9523-7f11435a1884_disk.config">
Dec  6 03:00:17 np0005548731 nova_compute[232433]:        <host name="192.168.122.100" port="6789"/>
Dec  6 03:00:17 np0005548731 nova_compute[232433]:        <host name="192.168.122.102" port="6789"/>
Dec  6 03:00:17 np0005548731 nova_compute[232433]:        <host name="192.168.122.101" port="6789"/>
Dec  6 03:00:17 np0005548731 nova_compute[232433]:      </source>
Dec  6 03:00:17 np0005548731 nova_compute[232433]:      <auth username="openstack">
Dec  6 03:00:17 np0005548731 nova_compute[232433]:        <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec  6 03:00:17 np0005548731 nova_compute[232433]:      </auth>
Dec  6 03:00:17 np0005548731 nova_compute[232433]:      <target dev="sda" bus="sata"/>
Dec  6 03:00:17 np0005548731 nova_compute[232433]:    </disk>
Dec  6 03:00:17 np0005548731 nova_compute[232433]:    <interface type="ethernet">
Dec  6 03:00:17 np0005548731 nova_compute[232433]:      <mac address="fa:16:3e:28:35:99"/>
Dec  6 03:00:17 np0005548731 nova_compute[232433]:      <model type="virtio"/>
Dec  6 03:00:17 np0005548731 nova_compute[232433]:      <driver name="vhost" rx_queue_size="512"/>
Dec  6 03:00:17 np0005548731 nova_compute[232433]:      <mtu size="1442"/>
Dec  6 03:00:17 np0005548731 nova_compute[232433]:      <target dev="tap05c9980c-d2"/>
Dec  6 03:00:17 np0005548731 nova_compute[232433]:    </interface>
Dec  6 03:00:17 np0005548731 nova_compute[232433]:    <serial type="pty">
Dec  6 03:00:17 np0005548731 nova_compute[232433]:      <log file="/var/lib/nova/instances/d1a8ef9c-0ce7-4841-9523-7f11435a1884/console.log" append="off"/>
Dec  6 03:00:17 np0005548731 nova_compute[232433]:    </serial>
Dec  6 03:00:17 np0005548731 nova_compute[232433]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec  6 03:00:17 np0005548731 nova_compute[232433]:    <video>
Dec  6 03:00:17 np0005548731 nova_compute[232433]:      <model type="virtio"/>
Dec  6 03:00:17 np0005548731 nova_compute[232433]:    </video>
Dec  6 03:00:17 np0005548731 nova_compute[232433]:    <input type="tablet" bus="usb"/>
Dec  6 03:00:17 np0005548731 nova_compute[232433]:    <rng model="virtio">
Dec  6 03:00:17 np0005548731 nova_compute[232433]:      <backend model="random">/dev/urandom</backend>
Dec  6 03:00:17 np0005548731 nova_compute[232433]:    </rng>
Dec  6 03:00:17 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root"/>
Dec  6 03:00:17 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:00:17 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:00:17 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:00:17 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:00:17 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:00:17 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:00:17 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:00:17 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:00:17 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:00:17 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:00:17 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:00:17 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:00:17 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:00:17 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:00:17 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:00:17 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:00:17 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:00:17 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:00:17 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:00:17 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:00:17 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:00:17 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:00:17 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:00:17 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:00:17 np0005548731 nova_compute[232433]:    <controller type="usb" index="0"/>
Dec  6 03:00:17 np0005548731 nova_compute[232433]:    <memballoon model="virtio">
Dec  6 03:00:17 np0005548731 nova_compute[232433]:      <stats period="10"/>
Dec  6 03:00:17 np0005548731 nova_compute[232433]:    </memballoon>
Dec  6 03:00:17 np0005548731 nova_compute[232433]:  </devices>
Dec  6 03:00:17 np0005548731 nova_compute[232433]: </domain>
Dec  6 03:00:17 np0005548731 nova_compute[232433]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec  6 03:00:17 np0005548731 nova_compute[232433]: 2025-12-06 08:00:17.801 232437 DEBUG nova.virt.libvirt.vif [None req-c0f0095b-d508-4403-a689-5b9203472ba0 1bdbfd9a9c034d4baf0368c23697a002 0280d2f586294ccf97547f8bc41590f8 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T07:58:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1262973925',display_name='tempest-TestNetworkAdvancedServerOps-server-1262973925',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1262973925',id=181,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHajJo6sYKmo7BjEFfbTegHFFaysH3CPUR6yuP2Rayw3S9ts1Wd6TY6anx2QtLxK6yp4z4nQqn7Ss4CGPtBiZQsZd5U8dFeDqjYG81KqlV6e9SPXI48qB0u9ty6SGnMpqw==',key_name='tempest-TestNetworkAdvancedServerOps-720860597',keypairs=<?>,launch_index=0,launched_at=2025-12-06T07:59:19Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=MigrationContext,new_flavor=Flavor(1),node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=Flavor(1),os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='fd8e24e430c64364ace789d88a68ba5f',ramdisk_id='',reservation_id='r-1koojzsy',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=ServiceList,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-TestNetworkAdvancedServerOps-1171852383',owner_user_name='tempest-TestNetworkAdvancedServerOps-1171852383-project-member'},tags=<?>,task_state='resize_finish',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T08:00:11Z,user_data=None,user_id='2ed2d17026504d70b893923a85cece4d',uuid=d1a8ef9c-0ce7-4841-9523-7f11435a1884,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "05c9980c-d230-4d61-9d98-4586e200fac5", "address": "fa:16:3e:28:35:99", "network": {"id": "ecf01de9-e04e-423a-b106-dcf22b107dc4", "bridge": "br-int", "label": "tempest-network-smoke--425482001", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": 
{"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.250", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-network-smoke--425482001", "vif_mac": "fa:16:3e:28:35:99"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fd8e24e430c64364ace789d88a68ba5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05c9980c-d2", "ovs_interfaceid": "05c9980c-d230-4d61-9d98-4586e200fac5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec  6 03:00:17 np0005548731 nova_compute[232433]: 2025-12-06 08:00:17.802 232437 DEBUG nova.network.os_vif_util [None req-c0f0095b-d508-4403-a689-5b9203472ba0 1bdbfd9a9c034d4baf0368c23697a002 0280d2f586294ccf97547f8bc41590f8 - - default default] Converting VIF {"id": "05c9980c-d230-4d61-9d98-4586e200fac5", "address": "fa:16:3e:28:35:99", "network": {"id": "ecf01de9-e04e-423a-b106-dcf22b107dc4", "bridge": "br-int", "label": "tempest-network-smoke--425482001", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.250", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-network-smoke--425482001", "vif_mac": "fa:16:3e:28:35:99"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fd8e24e430c64364ace789d88a68ba5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05c9980c-d2", "ovs_interfaceid": "05c9980c-d230-4d61-9d98-4586e200fac5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 03:00:17 np0005548731 nova_compute[232433]: 2025-12-06 08:00:17.802 232437 DEBUG nova.network.os_vif_util [None req-c0f0095b-d508-4403-a689-5b9203472ba0 1bdbfd9a9c034d4baf0368c23697a002 0280d2f586294ccf97547f8bc41590f8 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:28:35:99,bridge_name='br-int',has_traffic_filtering=True,id=05c9980c-d230-4d61-9d98-4586e200fac5,network=Network(ecf01de9-e04e-423a-b106-dcf22b107dc4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap05c9980c-d2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 03:00:17 np0005548731 nova_compute[232433]: 2025-12-06 08:00:17.803 232437 DEBUG os_vif [None req-c0f0095b-d508-4403-a689-5b9203472ba0 1bdbfd9a9c034d4baf0368c23697a002 0280d2f586294ccf97547f8bc41590f8 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:28:35:99,bridge_name='br-int',has_traffic_filtering=True,id=05c9980c-d230-4d61-9d98-4586e200fac5,network=Network(ecf01de9-e04e-423a-b106-dcf22b107dc4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap05c9980c-d2') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec  6 03:00:17 np0005548731 nova_compute[232433]: 2025-12-06 08:00:17.803 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:00:17 np0005548731 nova_compute[232433]: 2025-12-06 08:00:17.804 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 03:00:17 np0005548731 nova_compute[232433]: 2025-12-06 08:00:17.804 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  6 03:00:17 np0005548731 nova_compute[232433]: 2025-12-06 08:00:17.806 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:00:17 np0005548731 nova_compute[232433]: 2025-12-06 08:00:17.806 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap05c9980c-d2, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 03:00:17 np0005548731 nova_compute[232433]: 2025-12-06 08:00:17.806 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap05c9980c-d2, col_values=(('external_ids', {'iface-id': '05c9980c-d230-4d61-9d98-4586e200fac5', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:28:35:99', 'vm-uuid': 'd1a8ef9c-0ce7-4841-9523-7f11435a1884'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 03:00:17 np0005548731 nova_compute[232433]: 2025-12-06 08:00:17.808 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:00:17 np0005548731 NetworkManager[49182]: <info>  [1765008017.8088] manager: (tap05c9980c-d2): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/421)
Dec  6 03:00:17 np0005548731 nova_compute[232433]: 2025-12-06 08:00:17.810 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  6 03:00:17 np0005548731 nova_compute[232433]: 2025-12-06 08:00:17.814 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:00:17 np0005548731 nova_compute[232433]: 2025-12-06 08:00:17.815 232437 INFO os_vif [None req-c0f0095b-d508-4403-a689-5b9203472ba0 1bdbfd9a9c034d4baf0368c23697a002 0280d2f586294ccf97547f8bc41590f8 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:28:35:99,bridge_name='br-int',has_traffic_filtering=True,id=05c9980c-d230-4d61-9d98-4586e200fac5,network=Network(ecf01de9-e04e-423a-b106-dcf22b107dc4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap05c9980c-d2')#033[00m
Dec  6 03:00:17 np0005548731 nova_compute[232433]: 2025-12-06 08:00:17.900 232437 DEBUG nova.virt.libvirt.driver [None req-c0f0095b-d508-4403-a689-5b9203472ba0 1bdbfd9a9c034d4baf0368c23697a002 0280d2f586294ccf97547f8bc41590f8 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  6 03:00:17 np0005548731 nova_compute[232433]: 2025-12-06 08:00:17.901 232437 DEBUG nova.virt.libvirt.driver [None req-c0f0095b-d508-4403-a689-5b9203472ba0 1bdbfd9a9c034d4baf0368c23697a002 0280d2f586294ccf97547f8bc41590f8 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  6 03:00:17 np0005548731 nova_compute[232433]: 2025-12-06 08:00:17.901 232437 DEBUG nova.virt.libvirt.driver [None req-c0f0095b-d508-4403-a689-5b9203472ba0 1bdbfd9a9c034d4baf0368c23697a002 0280d2f586294ccf97547f8bc41590f8 - - default default] No VIF found with MAC fa:16:3e:28:35:99, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec  6 03:00:17 np0005548731 nova_compute[232433]: 2025-12-06 08:00:17.902 232437 INFO nova.virt.libvirt.driver [None req-c0f0095b-d508-4403-a689-5b9203472ba0 1bdbfd9a9c034d4baf0368c23697a002 0280d2f586294ccf97547f8bc41590f8 - - default default] [instance: d1a8ef9c-0ce7-4841-9523-7f11435a1884] Using config drive#033[00m
Dec  6 03:00:17 np0005548731 kernel: tap05c9980c-d2: entered promiscuous mode
Dec  6 03:00:18 np0005548731 NetworkManager[49182]: <info>  [1765008017.9833] manager: (tap05c9980c-d2): new Tun device (/org/freedesktop/NetworkManager/Devices/422)
Dec  6 03:00:18 np0005548731 nova_compute[232433]: 2025-12-06 08:00:17.983 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:00:18 np0005548731 nova_compute[232433]: 2025-12-06 08:00:17.989 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:00:18 np0005548731 ovn_controller[133927]: 2025-12-06T08:00:17Z|00917|binding|INFO|Claiming lport 05c9980c-d230-4d61-9d98-4586e200fac5 for this chassis.
Dec  6 03:00:18 np0005548731 ovn_controller[133927]: 2025-12-06T08:00:17Z|00918|binding|INFO|05c9980c-d230-4d61-9d98-4586e200fac5: Claiming fa:16:3e:28:35:99 10.100.0.14
Dec  6 03:00:18 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:00:17.999 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:28:35:99 10.100.0.14'], port_security=['fa:16:3e:28:35:99 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'd1a8ef9c-0ce7-4841-9523-7f11435a1884', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ecf01de9-e04e-423a-b106-dcf22b107dc4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'fd8e24e430c64364ace789d88a68ba5f', 'neutron:revision_number': '6', 'neutron:security_group_ids': '6d104e6f-7c98-46ec-bf5b-e2d926211253', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.250'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4c0a0ddb-0305-44e4-8a0a-e612a87c4904, chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], logical_port=05c9980c-d230-4d61-9d98-4586e200fac5) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 03:00:18 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:00:18.000 143965 INFO neutron.agent.ovn.metadata.agent [-] Port 05c9980c-d230-4d61-9d98-4586e200fac5 in datapath ecf01de9-e04e-423a-b106-dcf22b107dc4 bound to our chassis#033[00m
Dec  6 03:00:18 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:00:18.002 143965 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ecf01de9-e04e-423a-b106-dcf22b107dc4#033[00m
Dec  6 03:00:18 np0005548731 systemd-udevd[319574]: Network interface NamePolicy= disabled on kernel command line.
Dec  6 03:00:18 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:00:18.013 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[148557bf-f176-4df2-acc8-9f6dbf9a96fd]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:00:18 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:00:18.015 143965 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapecf01de9-e1 in ovnmeta-ecf01de9-e04e-423a-b106-dcf22b107dc4 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Dec  6 03:00:18 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:00:18.017 236668 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapecf01de9-e0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Dec  6 03:00:18 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:00:18.017 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[1ca4ac30-145c-413d-8913-39d06db9f9cc]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:00:18 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:00:18.017 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[70a6175b-e06d-44a7-b7b8-84c6eed44ea6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:00:18 np0005548731 nova_compute[232433]: 2025-12-06 08:00:18.025 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:00:18 np0005548731 ovn_controller[133927]: 2025-12-06T08:00:18Z|00919|binding|INFO|Setting lport 05c9980c-d230-4d61-9d98-4586e200fac5 ovn-installed in OVS
Dec  6 03:00:18 np0005548731 ovn_controller[133927]: 2025-12-06T08:00:18Z|00920|binding|INFO|Setting lport 05c9980c-d230-4d61-9d98-4586e200fac5 up in Southbound
Dec  6 03:00:18 np0005548731 NetworkManager[49182]: <info>  [1765008018.0301] device (tap05c9980c-d2): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec  6 03:00:18 np0005548731 NetworkManager[49182]: <info>  [1765008018.0307] device (tap05c9980c-d2): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec  6 03:00:18 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:00:18.029 144079 DEBUG oslo.privsep.daemon [-] privsep: reply[fe22b2c3-a9b2-4d1b-867d-c8507da82219]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:00:18 np0005548731 systemd-machined[195355]: New machine qemu-93-instance-000000b5.
Dec  6 03:00:18 np0005548731 systemd[1]: Started Virtual Machine qemu-93-instance-000000b5.
Dec  6 03:00:18 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:00:18.052 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[863aeec5-267d-46ed-81a9-e544367ce0a3]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:00:18 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:00:18.096 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[79fc4c8f-488b-4564-af89-c830e271d6d1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:00:18 np0005548731 NetworkManager[49182]: <info>  [1765008018.1055] manager: (tapecf01de9-e0): new Veth device (/org/freedesktop/NetworkManager/Devices/423)
Dec  6 03:00:18 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:00:18.104 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[58c33d69-6882-4957-9506-733267804109]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:00:18 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:00:18.134 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[f814c761-100f-43b2-88f3-73a0377db8f0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:00:18 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:00:18.138 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[879e8379-6589-4692-bfaf-2f12966557bc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:00:18 np0005548731 NetworkManager[49182]: <info>  [1765008018.1627] device (tapecf01de9-e0): carrier: link connected
Dec  6 03:00:18 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:00:18.168 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[b8cccb68-b073-4dcc-adbb-1edd43b060a2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:00:18 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:00:18.185 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[3dff0b12-0609-4572-ad36-2db47164ecd0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapecf01de9-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e2:13:85'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 278], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 828016, 'reachable_time': 32240, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 319609, 'error': None, 'target': 'ovnmeta-ecf01de9-e04e-423a-b106-dcf22b107dc4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:00:18 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:00:18.201 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[0b4f8900-5bd3-4040-8668-64ca2b086247]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fee2:1385'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 828016, 'tstamp': 828016}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 319610, 'error': None, 'target': 'ovnmeta-ecf01de9-e04e-423a-b106-dcf22b107dc4', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:00:18 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:00:18.218 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[30655df4-30a6-4c8b-9418-29622d0bafe1]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapecf01de9-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e2:13:85'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 278], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 828016, 'reachable_time': 32240, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 319611, 'error': None, 'target': 'ovnmeta-ecf01de9-e04e-423a-b106-dcf22b107dc4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:00:18 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:00:18.254 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[e47dfe16-c3ac-4ba1-9db4-0ded14369d1b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:00:18 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:00:18.335 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[fbe657ff-46f4-4114-9140-c065d4a76278]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:00:18 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:00:18.337 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapecf01de9-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 03:00:18 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:00:18.337 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  6 03:00:18 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:00:18.337 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapecf01de9-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 03:00:18 np0005548731 nova_compute[232433]: 2025-12-06 08:00:18.339 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:00:18 np0005548731 kernel: tapecf01de9-e0: entered promiscuous mode
Dec  6 03:00:18 np0005548731 NetworkManager[49182]: <info>  [1765008018.3402] manager: (tapecf01de9-e0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/424)
Dec  6 03:00:18 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:00:18.342 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapecf01de9-e0, col_values=(('external_ids', {'iface-id': '99dcb7eb-e90a-4aef-bab3-916110abd3fa'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 03:00:18 np0005548731 ovn_controller[133927]: 2025-12-06T08:00:18Z|00921|binding|INFO|Releasing lport 99dcb7eb-e90a-4aef-bab3-916110abd3fa from this chassis (sb_readonly=0)
Dec  6 03:00:18 np0005548731 nova_compute[232433]: 2025-12-06 08:00:18.346 232437 DEBUG nova.compute.manager [req-7fbf0448-d48f-4d44-a668-2e53116cba77 req-1e779d7d-c7cd-4321-afcf-de5d04cc5f42 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: d1a8ef9c-0ce7-4841-9523-7f11435a1884] Received event network-vif-plugged-05c9980c-d230-4d61-9d98-4586e200fac5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 03:00:18 np0005548731 nova_compute[232433]: 2025-12-06 08:00:18.347 232437 DEBUG oslo_concurrency.lockutils [req-7fbf0448-d48f-4d44-a668-2e53116cba77 req-1e779d7d-c7cd-4321-afcf-de5d04cc5f42 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "d1a8ef9c-0ce7-4841-9523-7f11435a1884-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:00:18 np0005548731 nova_compute[232433]: 2025-12-06 08:00:18.347 232437 DEBUG oslo_concurrency.lockutils [req-7fbf0448-d48f-4d44-a668-2e53116cba77 req-1e779d7d-c7cd-4321-afcf-de5d04cc5f42 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "d1a8ef9c-0ce7-4841-9523-7f11435a1884-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:00:18 np0005548731 nova_compute[232433]: 2025-12-06 08:00:18.347 232437 DEBUG oslo_concurrency.lockutils [req-7fbf0448-d48f-4d44-a668-2e53116cba77 req-1e779d7d-c7cd-4321-afcf-de5d04cc5f42 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "d1a8ef9c-0ce7-4841-9523-7f11435a1884-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:00:18 np0005548731 nova_compute[232433]: 2025-12-06 08:00:18.348 232437 DEBUG nova.compute.manager [req-7fbf0448-d48f-4d44-a668-2e53116cba77 req-1e779d7d-c7cd-4321-afcf-de5d04cc5f42 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: d1a8ef9c-0ce7-4841-9523-7f11435a1884] No waiting events found dispatching network-vif-plugged-05c9980c-d230-4d61-9d98-4586e200fac5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 03:00:18 np0005548731 nova_compute[232433]: 2025-12-06 08:00:18.348 232437 WARNING nova.compute.manager [req-7fbf0448-d48f-4d44-a668-2e53116cba77 req-1e779d7d-c7cd-4321-afcf-de5d04cc5f42 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: d1a8ef9c-0ce7-4841-9523-7f11435a1884] Received unexpected event network-vif-plugged-05c9980c-d230-4d61-9d98-4586e200fac5 for instance with vm_state active and task_state resize_finish.#033[00m
Dec  6 03:00:18 np0005548731 nova_compute[232433]: 2025-12-06 08:00:18.349 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:00:18 np0005548731 nova_compute[232433]: 2025-12-06 08:00:18.358 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:00:18 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:00:18.359 143965 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/ecf01de9-e04e-423a-b106-dcf22b107dc4.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/ecf01de9-e04e-423a-b106-dcf22b107dc4.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Dec  6 03:00:18 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:00:18.360 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[95a2bc4c-9b85-488a-85ed-bffbd6dd5ad5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:00:18 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:00:18.361 143965 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec  6 03:00:18 np0005548731 ovn_metadata_agent[143960]: global
Dec  6 03:00:18 np0005548731 ovn_metadata_agent[143960]:    log         /dev/log local0 debug
Dec  6 03:00:18 np0005548731 ovn_metadata_agent[143960]:    log-tag     haproxy-metadata-proxy-ecf01de9-e04e-423a-b106-dcf22b107dc4
Dec  6 03:00:18 np0005548731 ovn_metadata_agent[143960]:    user        root
Dec  6 03:00:18 np0005548731 ovn_metadata_agent[143960]:    group       root
Dec  6 03:00:18 np0005548731 ovn_metadata_agent[143960]:    maxconn     1024
Dec  6 03:00:18 np0005548731 ovn_metadata_agent[143960]:    pidfile     /var/lib/neutron/external/pids/ecf01de9-e04e-423a-b106-dcf22b107dc4.pid.haproxy
Dec  6 03:00:18 np0005548731 ovn_metadata_agent[143960]:    daemon
Dec  6 03:00:18 np0005548731 ovn_metadata_agent[143960]: 
Dec  6 03:00:18 np0005548731 ovn_metadata_agent[143960]: defaults
Dec  6 03:00:18 np0005548731 ovn_metadata_agent[143960]:    log global
Dec  6 03:00:18 np0005548731 ovn_metadata_agent[143960]:    mode http
Dec  6 03:00:18 np0005548731 ovn_metadata_agent[143960]:    option httplog
Dec  6 03:00:18 np0005548731 ovn_metadata_agent[143960]:    option dontlognull
Dec  6 03:00:18 np0005548731 ovn_metadata_agent[143960]:    option http-server-close
Dec  6 03:00:18 np0005548731 ovn_metadata_agent[143960]:    option forwardfor
Dec  6 03:00:18 np0005548731 ovn_metadata_agent[143960]:    retries                 3
Dec  6 03:00:18 np0005548731 ovn_metadata_agent[143960]:    timeout http-request    30s
Dec  6 03:00:18 np0005548731 ovn_metadata_agent[143960]:    timeout connect         30s
Dec  6 03:00:18 np0005548731 ovn_metadata_agent[143960]:    timeout client          32s
Dec  6 03:00:18 np0005548731 ovn_metadata_agent[143960]:    timeout server          32s
Dec  6 03:00:18 np0005548731 ovn_metadata_agent[143960]:    timeout http-keep-alive 30s
Dec  6 03:00:18 np0005548731 ovn_metadata_agent[143960]: 
Dec  6 03:00:18 np0005548731 ovn_metadata_agent[143960]: 
Dec  6 03:00:18 np0005548731 ovn_metadata_agent[143960]: listen listener
Dec  6 03:00:18 np0005548731 ovn_metadata_agent[143960]:    bind 169.254.169.254:80
Dec  6 03:00:18 np0005548731 ovn_metadata_agent[143960]:    server metadata /var/lib/neutron/metadata_proxy
Dec  6 03:00:18 np0005548731 ovn_metadata_agent[143960]:    http-request add-header X-OVN-Network-ID ecf01de9-e04e-423a-b106-dcf22b107dc4
Dec  6 03:00:18 np0005548731 ovn_metadata_agent[143960]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Dec  6 03:00:18 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:00:18.361 143965 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-ecf01de9-e04e-423a-b106-dcf22b107dc4', 'env', 'PROCESS_TAG=haproxy-ecf01de9-e04e-423a-b106-dcf22b107dc4', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/ecf01de9-e04e-423a-b106-dcf22b107dc4.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Dec  6 03:00:18 np0005548731 podman[319644]: 2025-12-06 08:00:18.717037621 +0000 UTC m=+0.044925499 container create 588d00c79bb982c10f2e74d226e5d6f711d3dd09cbe5b8b62d6fd8b13864a2d0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ecf01de9-e04e-423a-b106-dcf22b107dc4, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, tcib_managed=true)
Dec  6 03:00:18 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:00:18 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:00:18 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:00:18.746 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:00:18 np0005548731 systemd[1]: Started libpod-conmon-588d00c79bb982c10f2e74d226e5d6f711d3dd09cbe5b8b62d6fd8b13864a2d0.scope.
Dec  6 03:00:18 np0005548731 systemd[1]: Started libcrun container.
Dec  6 03:00:18 np0005548731 podman[319644]: 2025-12-06 08:00:18.694275425 +0000 UTC m=+0.022163323 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec  6 03:00:18 np0005548731 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/26b0bd473afbf5164d03b8273a4a4e80dec511953821abec8aaa2d7c2e1b8393/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec  6 03:00:18 np0005548731 podman[319644]: 2025-12-06 08:00:18.801408304 +0000 UTC m=+0.129296182 container init 588d00c79bb982c10f2e74d226e5d6f711d3dd09cbe5b8b62d6fd8b13864a2d0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ecf01de9-e04e-423a-b106-dcf22b107dc4, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Dec  6 03:00:18 np0005548731 podman[319644]: 2025-12-06 08:00:18.80617684 +0000 UTC m=+0.134064718 container start 588d00c79bb982c10f2e74d226e5d6f711d3dd09cbe5b8b62d6fd8b13864a2d0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ecf01de9-e04e-423a-b106-dcf22b107dc4, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  6 03:00:18 np0005548731 neutron-haproxy-ovnmeta-ecf01de9-e04e-423a-b106-dcf22b107dc4[319659]: [NOTICE]   (319663) : New worker (319665) forked
Dec  6 03:00:18 np0005548731 neutron-haproxy-ovnmeta-ecf01de9-e04e-423a-b106-dcf22b107dc4[319659]: [NOTICE]   (319663) : Loading success.
Dec  6 03:00:19 np0005548731 nova_compute[232433]: 2025-12-06 08:00:19.005 232437 DEBUG nova.virt.driver [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Emitting event <LifecycleEvent: 1765008019.0050635, d1a8ef9c-0ce7-4841-9523-7f11435a1884 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 03:00:19 np0005548731 nova_compute[232433]: 2025-12-06 08:00:19.007 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: d1a8ef9c-0ce7-4841-9523-7f11435a1884] VM Resumed (Lifecycle Event)#033[00m
Dec  6 03:00:19 np0005548731 nova_compute[232433]: 2025-12-06 08:00:19.009 232437 DEBUG nova.compute.manager [None req-c0f0095b-d508-4403-a689-5b9203472ba0 1bdbfd9a9c034d4baf0368c23697a002 0280d2f586294ccf97547f8bc41590f8 - - default default] [instance: d1a8ef9c-0ce7-4841-9523-7f11435a1884] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec  6 03:00:19 np0005548731 nova_compute[232433]: 2025-12-06 08:00:19.012 232437 INFO nova.virt.libvirt.driver [-] [instance: d1a8ef9c-0ce7-4841-9523-7f11435a1884] Instance running successfully.#033[00m
Dec  6 03:00:19 np0005548731 virtqemud[232080]: argument unsupported: QEMU guest agent is not configured
Dec  6 03:00:19 np0005548731 nova_compute[232433]: 2025-12-06 08:00:19.014 232437 DEBUG nova.virt.libvirt.guest [None req-c0f0095b-d508-4403-a689-5b9203472ba0 1bdbfd9a9c034d4baf0368c23697a002 0280d2f586294ccf97547f8bc41590f8 - - default default] [instance: d1a8ef9c-0ce7-4841-9523-7f11435a1884] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200#033[00m
Dec  6 03:00:19 np0005548731 nova_compute[232433]: 2025-12-06 08:00:19.015 232437 DEBUG nova.virt.libvirt.driver [None req-c0f0095b-d508-4403-a689-5b9203472ba0 1bdbfd9a9c034d4baf0368c23697a002 0280d2f586294ccf97547f8bc41590f8 - - default default] [instance: d1a8ef9c-0ce7-4841-9523-7f11435a1884] finish_migration finished successfully. finish_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11793#033[00m
Dec  6 03:00:19 np0005548731 nova_compute[232433]: 2025-12-06 08:00:19.045 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: d1a8ef9c-0ce7-4841-9523-7f11435a1884] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 03:00:19 np0005548731 nova_compute[232433]: 2025-12-06 08:00:19.053 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: d1a8ef9c-0ce7-4841-9523-7f11435a1884] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: resize_finish, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  6 03:00:19 np0005548731 nova_compute[232433]: 2025-12-06 08:00:19.087 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: d1a8ef9c-0ce7-4841-9523-7f11435a1884] During sync_power_state the instance has a pending task (resize_finish). Skip.#033[00m
Dec  6 03:00:19 np0005548731 nova_compute[232433]: 2025-12-06 08:00:19.087 232437 DEBUG nova.virt.driver [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Emitting event <LifecycleEvent: 1765008019.0059628, d1a8ef9c-0ce7-4841-9523-7f11435a1884 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 03:00:19 np0005548731 nova_compute[232433]: 2025-12-06 08:00:19.088 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: d1a8ef9c-0ce7-4841-9523-7f11435a1884] VM Started (Lifecycle Event)#033[00m
Dec  6 03:00:19 np0005548731 nova_compute[232433]: 2025-12-06 08:00:19.119 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: d1a8ef9c-0ce7-4841-9523-7f11435a1884] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 03:00:19 np0005548731 nova_compute[232433]: 2025-12-06 08:00:19.122 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: d1a8ef9c-0ce7-4841-9523-7f11435a1884] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: resize_finish, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  6 03:00:19 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e408 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:00:19 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:00:19 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:00:19 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:00:19.456 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:00:20 np0005548731 nova_compute[232433]: 2025-12-06 08:00:20.285 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:00:20 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:00:20 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:00:20 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:00:20.750 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:00:20 np0005548731 nova_compute[232433]: 2025-12-06 08:00:20.757 232437 DEBUG nova.compute.manager [req-2f29faa5-8cd1-48a0-991c-4bb324ed9a14 req-6dfcf0c5-65c7-4399-82b0-aafd36cff95a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: d1a8ef9c-0ce7-4841-9523-7f11435a1884] Received event network-vif-plugged-05c9980c-d230-4d61-9d98-4586e200fac5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 03:00:20 np0005548731 nova_compute[232433]: 2025-12-06 08:00:20.757 232437 DEBUG oslo_concurrency.lockutils [req-2f29faa5-8cd1-48a0-991c-4bb324ed9a14 req-6dfcf0c5-65c7-4399-82b0-aafd36cff95a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "d1a8ef9c-0ce7-4841-9523-7f11435a1884-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:00:20 np0005548731 nova_compute[232433]: 2025-12-06 08:00:20.757 232437 DEBUG oslo_concurrency.lockutils [req-2f29faa5-8cd1-48a0-991c-4bb324ed9a14 req-6dfcf0c5-65c7-4399-82b0-aafd36cff95a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "d1a8ef9c-0ce7-4841-9523-7f11435a1884-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:00:20 np0005548731 nova_compute[232433]: 2025-12-06 08:00:20.758 232437 DEBUG oslo_concurrency.lockutils [req-2f29faa5-8cd1-48a0-991c-4bb324ed9a14 req-6dfcf0c5-65c7-4399-82b0-aafd36cff95a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "d1a8ef9c-0ce7-4841-9523-7f11435a1884-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:00:20 np0005548731 nova_compute[232433]: 2025-12-06 08:00:20.758 232437 DEBUG nova.compute.manager [req-2f29faa5-8cd1-48a0-991c-4bb324ed9a14 req-6dfcf0c5-65c7-4399-82b0-aafd36cff95a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: d1a8ef9c-0ce7-4841-9523-7f11435a1884] No waiting events found dispatching network-vif-plugged-05c9980c-d230-4d61-9d98-4586e200fac5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 03:00:20 np0005548731 nova_compute[232433]: 2025-12-06 08:00:20.758 232437 WARNING nova.compute.manager [req-2f29faa5-8cd1-48a0-991c-4bb324ed9a14 req-6dfcf0c5-65c7-4399-82b0-aafd36cff95a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: d1a8ef9c-0ce7-4841-9523-7f11435a1884] Received unexpected event network-vif-plugged-05c9980c-d230-4d61-9d98-4586e200fac5 for instance with vm_state resized and task_state None.#033[00m
Dec  6 03:00:21 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:00:21 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:00:21 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:00:21.459 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:00:22 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:00:22 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:00:22 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:00:22.752 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:00:22 np0005548731 nova_compute[232433]: 2025-12-06 08:00:22.828 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:00:23 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:00:23 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:00:23 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:00:23.461 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:00:24 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e408 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:00:24 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:00:24 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:00:24 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:00:24.756 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:00:25 np0005548731 nova_compute[232433]: 2025-12-06 08:00:25.328 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:00:25 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:00:25 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 03:00:25 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:00:25.463 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 03:00:25 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e409 e409: 3 total, 3 up, 3 in
Dec  6 03:00:25 np0005548731 ceph-mon[77458]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #139. Immutable memtables: 0.
Dec  6 03:00:25 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:00:25.944184) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec  6 03:00:25 np0005548731 ceph-mon[77458]: rocksdb: [db/flush_job.cc:856] [default] [JOB 87] Flushing memtable with next log file: 139
Dec  6 03:00:25 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765008025944240, "job": 87, "event": "flush_started", "num_memtables": 1, "num_entries": 756, "num_deletes": 251, "total_data_size": 1355828, "memory_usage": 1376680, "flush_reason": "Manual Compaction"}
Dec  6 03:00:25 np0005548731 ceph-mon[77458]: rocksdb: [db/flush_job.cc:885] [default] [JOB 87] Level-0 flush table #140: started
Dec  6 03:00:25 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765008025951941, "cf_name": "default", "job": 87, "event": "table_file_creation", "file_number": 140, "file_size": 894187, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 69542, "largest_seqno": 70293, "table_properties": {"data_size": 890607, "index_size": 1423, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1093, "raw_key_size": 8317, "raw_average_key_size": 19, "raw_value_size": 883367, "raw_average_value_size": 2063, "num_data_blocks": 63, "num_entries": 428, "num_filter_entries": 428, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765007974, "oldest_key_time": 1765007974, "file_creation_time": 1765008025, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bc01f9a1-fda8-4008-aee1-1fa5ff4371a1", "db_session_id": "CWBL8WMDM4D79BU86E9T", "orig_file_number": 140, "seqno_to_time_mapping": "N/A"}}
Dec  6 03:00:25 np0005548731 ceph-mon[77458]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 87] Flush lasted 7812 microseconds, and 3253 cpu microseconds.
Dec  6 03:00:25 np0005548731 ceph-mon[77458]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  6 03:00:25 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:00:25.952002) [db/flush_job.cc:967] [default] [JOB 87] Level-0 flush table #140: 894187 bytes OK
Dec  6 03:00:25 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:00:25.952025) [db/memtable_list.cc:519] [default] Level-0 commit table #140 started
Dec  6 03:00:25 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:00:25.953264) [db/memtable_list.cc:722] [default] Level-0 commit table #140: memtable #1 done
Dec  6 03:00:25 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:00:25.953276) EVENT_LOG_v1 {"time_micros": 1765008025953272, "job": 87, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec  6 03:00:25 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:00:25.953293) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec  6 03:00:25 np0005548731 ceph-mon[77458]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 87] Try to delete WAL files size 1351845, prev total WAL file size 1351845, number of live WAL files 2.
Dec  6 03:00:25 np0005548731 ceph-mon[77458]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000136.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  6 03:00:25 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:00:25.953788) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730035373733' seq:72057594037927935, type:22 .. '7061786F730036303235' seq:0, type:0; will stop at (end)
Dec  6 03:00:25 np0005548731 ceph-mon[77458]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 88] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec  6 03:00:25 np0005548731 ceph-mon[77458]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 87 Base level 0, inputs: [140(873KB)], [138(11MB)]
Dec  6 03:00:25 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765008025953823, "job": 88, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [140], "files_L6": [138], "score": -1, "input_data_size": 12624522, "oldest_snapshot_seqno": -1}
Dec  6 03:00:26 np0005548731 ceph-mon[77458]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 88] Generated table #141: 9830 keys, 10656696 bytes, temperature: kUnknown
Dec  6 03:00:26 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765008026002387, "cf_name": "default", "job": 88, "event": "table_file_creation", "file_number": 141, "file_size": 10656696, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10595905, "index_size": 35142, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 24581, "raw_key_size": 260453, "raw_average_key_size": 26, "raw_value_size": 10425967, "raw_average_value_size": 1060, "num_data_blocks": 1322, "num_entries": 9830, "num_filter_entries": 9830, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765002564, "oldest_key_time": 0, "file_creation_time": 1765008025, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bc01f9a1-fda8-4008-aee1-1fa5ff4371a1", "db_session_id": "CWBL8WMDM4D79BU86E9T", "orig_file_number": 141, "seqno_to_time_mapping": "N/A"}}
Dec  6 03:00:26 np0005548731 ceph-mon[77458]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  6 03:00:26 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:00:26.002759) [db/compaction/compaction_job.cc:1663] [default] [JOB 88] Compacted 1@0 + 1@6 files to L6 => 10656696 bytes
Dec  6 03:00:26 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:00:26.003930) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 258.8 rd, 218.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.9, 11.2 +0.0 blob) out(10.2 +0.0 blob), read-write-amplify(26.0) write-amplify(11.9) OK, records in: 10344, records dropped: 514 output_compression: NoCompression
Dec  6 03:00:26 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:00:26.003950) EVENT_LOG_v1 {"time_micros": 1765008026003941, "job": 88, "event": "compaction_finished", "compaction_time_micros": 48785, "compaction_time_cpu_micros": 24814, "output_level": 6, "num_output_files": 1, "total_output_size": 10656696, "num_input_records": 10344, "num_output_records": 9830, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec  6 03:00:26 np0005548731 ceph-mon[77458]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000140.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  6 03:00:26 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765008026004679, "job": 88, "event": "table_file_deletion", "file_number": 140}
Dec  6 03:00:26 np0005548731 ceph-mon[77458]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000138.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  6 03:00:26 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765008026007202, "job": 88, "event": "table_file_deletion", "file_number": 138}
Dec  6 03:00:26 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:00:25.953745) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 03:00:26 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:00:26.007295) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 03:00:26 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:00:26.007300) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 03:00:26 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:00:26.007301) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 03:00:26 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:00:26.007303) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 03:00:26 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:00:26.007304) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 03:00:26 np0005548731 ovn_controller[133927]: 2025-12-06T08:00:26Z|00122|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:15:9d:85 10.100.0.6
Dec  6 03:00:26 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 03:00:26 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/404955260' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 03:00:26 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:00:26 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:00:26 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:00:26.759 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:00:27 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:00:27 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:00:27 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:00:27.465 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:00:27 np0005548731 nova_compute[232433]: 2025-12-06 08:00:27.831 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:00:28 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:00:28 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:00:28 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:00:28.762 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:00:29 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e409 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:00:29 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:00:29 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:00:29 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:00:29.467 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:00:30 np0005548731 nova_compute[232433]: 2025-12-06 08:00:30.330 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:00:30 np0005548731 ceph-osd[80147]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #57. Immutable memtables: 13.
Dec  6 03:00:30 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:00:30 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:00:30 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:00:30.766 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:00:31 np0005548731 ceph-osd[80147]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec  6 03:00:31 np0005548731 ceph-osd[80147]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 5402.4 total, 600.0 interval#012Cumulative writes: 67K writes, 270K keys, 67K commit groups, 1.0 writes per commit group, ingest: 0.27 GB, 0.05 MB/s#012Cumulative WAL: 67K writes, 25K syncs, 2.70 writes per sync, written: 0.27 GB, 0.05 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 8272 writes, 30K keys, 8272 commit groups, 1.0 writes per commit group, ingest: 32.58 MB, 0.05 MB/s#012Interval WAL: 8273 writes, 3385 syncs, 2.44 writes per sync, written: 0.03 GB, 0.05 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec  6 03:00:31 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:00:31 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:00:31 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:00:31.471 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:00:32 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:00:32 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:00:32 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:00:32.768 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:00:32 np0005548731 nova_compute[232433]: 2025-12-06 08:00:32.823 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:00:32 np0005548731 nova_compute[232433]: 2025-12-06 08:00:32.832 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:00:33 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:00:33 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:00:33 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:00:33.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:00:33 np0005548731 ovn_controller[133927]: 2025-12-06T08:00:33Z|00123|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:28:35:99 10.100.0.14
Dec  6 03:00:33 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e410 e410: 3 total, 3 up, 3 in
Dec  6 03:00:34 np0005548731 nova_compute[232433]: 2025-12-06 08:00:34.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:00:34 np0005548731 nova_compute[232433]: 2025-12-06 08:00:34.104 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  6 03:00:34 np0005548731 nova_compute[232433]: 2025-12-06 08:00:34.105 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  6 03:00:34 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e410 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:00:34 np0005548731 nova_compute[232433]: 2025-12-06 08:00:34.305 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Acquiring lock "refresh_cache-d1a8ef9c-0ce7-4841-9523-7f11435a1884" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 03:00:34 np0005548731 nova_compute[232433]: 2025-12-06 08:00:34.306 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Acquired lock "refresh_cache-d1a8ef9c-0ce7-4841-9523-7f11435a1884" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 03:00:34 np0005548731 nova_compute[232433]: 2025-12-06 08:00:34.306 232437 DEBUG nova.network.neutron [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] [instance: d1a8ef9c-0ce7-4841-9523-7f11435a1884] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Dec  6 03:00:34 np0005548731 nova_compute[232433]: 2025-12-06 08:00:34.307 232437 DEBUG nova.objects.instance [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lazy-loading 'info_cache' on Instance uuid d1a8ef9c-0ce7-4841-9523-7f11435a1884 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 03:00:34 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 03:00:34 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2233248341' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 03:00:34 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:00:34 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:00:34 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:00:34.772 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:00:34 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 03:00:34 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 03:00:34 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 03:00:34 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 03:00:34 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 03:00:34 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 03:00:35 np0005548731 nova_compute[232433]: 2025-12-06 08:00:35.373 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:00:35 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:00:35 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:00:35 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:00:35.476 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:00:36 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:00:36 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:00:36 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:00:36.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:00:37 np0005548731 nova_compute[232433]: 2025-12-06 08:00:37.213 232437 DEBUG nova.network.neutron [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] [instance: d1a8ef9c-0ce7-4841-9523-7f11435a1884] Updating instance_info_cache with network_info: [{"id": "05c9980c-d230-4d61-9d98-4586e200fac5", "address": "fa:16:3e:28:35:99", "network": {"id": "ecf01de9-e04e-423a-b106-dcf22b107dc4", "bridge": "br-int", "label": "tempest-network-smoke--425482001", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.250", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fd8e24e430c64364ace789d88a68ba5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05c9980c-d2", "ovs_interfaceid": "05c9980c-d230-4d61-9d98-4586e200fac5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 03:00:37 np0005548731 nova_compute[232433]: 2025-12-06 08:00:37.229 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Releasing lock "refresh_cache-d1a8ef9c-0ce7-4841-9523-7f11435a1884" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 03:00:37 np0005548731 nova_compute[232433]: 2025-12-06 08:00:37.229 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] [instance: d1a8ef9c-0ce7-4841-9523-7f11435a1884] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Dec  6 03:00:37 np0005548731 nova_compute[232433]: 2025-12-06 08:00:37.229 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:00:37 np0005548731 nova_compute[232433]: 2025-12-06 08:00:37.230 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:00:37 np0005548731 nova_compute[232433]: 2025-12-06 08:00:37.230 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  6 03:00:37 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:00:37 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:00:37 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:00:37.479 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:00:37 np0005548731 nova_compute[232433]: 2025-12-06 08:00:37.836 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:00:38 np0005548731 nova_compute[232433]: 2025-12-06 08:00:38.225 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:00:38 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 03:00:38 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 03:00:38 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 03:00:38 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 03:00:38 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  6 03:00:38 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 03:00:38 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  6 03:00:38 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:00:38 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:00:38 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:00:38.778 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:00:39 np0005548731 nova_compute[232433]: 2025-12-06 08:00:39.050 232437 INFO nova.compute.manager [None req-2e742e04-de42-44f1-977e-fe0603f0e6bd 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: d1a8ef9c-0ce7-4841-9523-7f11435a1884] Get console output#033[00m
Dec  6 03:00:39 np0005548731 nova_compute[232433]: 2025-12-06 08:00:39.057 261230 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Dec  6 03:00:39 np0005548731 nova_compute[232433]: 2025-12-06 08:00:39.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:00:39 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e410 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:00:39 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:00:39 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 03:00:39 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:00:39.481 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 03:00:39 np0005548731 podman[320025]: 2025-12-06 08:00:39.935426671 +0000 UTC m=+0.088993996 container health_status 26c2ce9bdb83c6db5ccad7f5b2a7cb4e9354ab0235ad2f942ea42c8feda2142b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  6 03:00:39 np0005548731 podman[320027]: 2025-12-06 08:00:39.949140857 +0000 UTC m=+0.070369031 container health_status ae4fd08cdec31d6ae2a6b91ffff7deb6ca5a8375cb52ee4b685cc6f25796824e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, container_name=multipathd)
Dec  6 03:00:40 np0005548731 podman[320026]: 2025-12-06 08:00:40.003774046 +0000 UTC m=+0.154615844 container health_status a118c3becf56cfbcae208065771ebf10a65162bb7b96389971293e534823fb78 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team)
Dec  6 03:00:40 np0005548731 nova_compute[232433]: 2025-12-06 08:00:40.376 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:00:40 np0005548731 nova_compute[232433]: 2025-12-06 08:00:40.762 232437 DEBUG nova.compute.manager [req-ef9b058b-1415-4be8-960e-b1b9b6dac38d req-50d15b87-f6d6-455c-90de-64591894bc27 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: d1a8ef9c-0ce7-4841-9523-7f11435a1884] Received event network-changed-05c9980c-d230-4d61-9d98-4586e200fac5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 03:00:40 np0005548731 nova_compute[232433]: 2025-12-06 08:00:40.763 232437 DEBUG nova.compute.manager [req-ef9b058b-1415-4be8-960e-b1b9b6dac38d req-50d15b87-f6d6-455c-90de-64591894bc27 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: d1a8ef9c-0ce7-4841-9523-7f11435a1884] Refreshing instance network info cache due to event network-changed-05c9980c-d230-4d61-9d98-4586e200fac5. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  6 03:00:40 np0005548731 nova_compute[232433]: 2025-12-06 08:00:40.763 232437 DEBUG oslo_concurrency.lockutils [req-ef9b058b-1415-4be8-960e-b1b9b6dac38d req-50d15b87-f6d6-455c-90de-64591894bc27 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-d1a8ef9c-0ce7-4841-9523-7f11435a1884" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 03:00:40 np0005548731 nova_compute[232433]: 2025-12-06 08:00:40.763 232437 DEBUG oslo_concurrency.lockutils [req-ef9b058b-1415-4be8-960e-b1b9b6dac38d req-50d15b87-f6d6-455c-90de-64591894bc27 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-d1a8ef9c-0ce7-4841-9523-7f11435a1884" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 03:00:40 np0005548731 nova_compute[232433]: 2025-12-06 08:00:40.763 232437 DEBUG nova.network.neutron [req-ef9b058b-1415-4be8-960e-b1b9b6dac38d req-50d15b87-f6d6-455c-90de-64591894bc27 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: d1a8ef9c-0ce7-4841-9523-7f11435a1884] Refreshing network info cache for port 05c9980c-d230-4d61-9d98-4586e200fac5 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  6 03:00:40 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:00:40 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 03:00:40 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:00:40.782 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 03:00:40 np0005548731 nova_compute[232433]: 2025-12-06 08:00:40.817 232437 DEBUG oslo_concurrency.lockutils [None req-558cd0a9-6afc-4565-b120-4a28d9eff00c 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Acquiring lock "d1a8ef9c-0ce7-4841-9523-7f11435a1884" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:00:40 np0005548731 nova_compute[232433]: 2025-12-06 08:00:40.818 232437 DEBUG oslo_concurrency.lockutils [None req-558cd0a9-6afc-4565-b120-4a28d9eff00c 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Lock "d1a8ef9c-0ce7-4841-9523-7f11435a1884" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:00:40 np0005548731 nova_compute[232433]: 2025-12-06 08:00:40.819 232437 DEBUG oslo_concurrency.lockutils [None req-558cd0a9-6afc-4565-b120-4a28d9eff00c 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Acquiring lock "d1a8ef9c-0ce7-4841-9523-7f11435a1884-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:00:40 np0005548731 nova_compute[232433]: 2025-12-06 08:00:40.819 232437 DEBUG oslo_concurrency.lockutils [None req-558cd0a9-6afc-4565-b120-4a28d9eff00c 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Lock "d1a8ef9c-0ce7-4841-9523-7f11435a1884-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:00:40 np0005548731 nova_compute[232433]: 2025-12-06 08:00:40.819 232437 DEBUG oslo_concurrency.lockutils [None req-558cd0a9-6afc-4565-b120-4a28d9eff00c 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Lock "d1a8ef9c-0ce7-4841-9523-7f11435a1884-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:00:40 np0005548731 nova_compute[232433]: 2025-12-06 08:00:40.821 232437 INFO nova.compute.manager [None req-558cd0a9-6afc-4565-b120-4a28d9eff00c 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: d1a8ef9c-0ce7-4841-9523-7f11435a1884] Terminating instance#033[00m
Dec  6 03:00:40 np0005548731 nova_compute[232433]: 2025-12-06 08:00:40.822 232437 DEBUG nova.compute.manager [None req-558cd0a9-6afc-4565-b120-4a28d9eff00c 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: d1a8ef9c-0ce7-4841-9523-7f11435a1884] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec  6 03:00:40 np0005548731 kernel: tap05c9980c-d2 (unregistering): left promiscuous mode
Dec  6 03:00:40 np0005548731 NetworkManager[49182]: <info>  [1765008040.8761] device (tap05c9980c-d2): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec  6 03:00:40 np0005548731 nova_compute[232433]: 2025-12-06 08:00:40.893 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:00:40 np0005548731 nova_compute[232433]: 2025-12-06 08:00:40.899 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:00:40 np0005548731 ovn_controller[133927]: 2025-12-06T08:00:40Z|00922|binding|INFO|Releasing lport 05c9980c-d230-4d61-9d98-4586e200fac5 from this chassis (sb_readonly=0)
Dec  6 03:00:40 np0005548731 ovn_controller[133927]: 2025-12-06T08:00:40Z|00923|binding|INFO|Setting lport 05c9980c-d230-4d61-9d98-4586e200fac5 down in Southbound
Dec  6 03:00:40 np0005548731 ovn_controller[133927]: 2025-12-06T08:00:40Z|00924|binding|INFO|Removing iface tap05c9980c-d2 ovn-installed in OVS
Dec  6 03:00:40 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:00:40.903 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:28:35:99 10.100.0.14'], port_security=['fa:16:3e:28:35:99 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'd1a8ef9c-0ce7-4841-9523-7f11435a1884', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ecf01de9-e04e-423a-b106-dcf22b107dc4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'fd8e24e430c64364ace789d88a68ba5f', 'neutron:revision_number': '8', 'neutron:security_group_ids': '6d104e6f-7c98-46ec-bf5b-e2d926211253', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4c0a0ddb-0305-44e4-8a0a-e612a87c4904, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], logical_port=05c9980c-d230-4d61-9d98-4586e200fac5) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 03:00:40 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:00:40.905 143965 INFO neutron.agent.ovn.metadata.agent [-] Port 05c9980c-d230-4d61-9d98-4586e200fac5 in datapath ecf01de9-e04e-423a-b106-dcf22b107dc4 unbound from our chassis#033[00m
Dec  6 03:00:40 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:00:40.907 143965 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network ecf01de9-e04e-423a-b106-dcf22b107dc4, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec  6 03:00:40 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:00:40.911 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[c6b113b8-960e-418a-92ce-9f949dc6018e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:00:40 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:00:40.912 143965 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-ecf01de9-e04e-423a-b106-dcf22b107dc4 namespace which is not needed anymore#033[00m
Dec  6 03:00:40 np0005548731 nova_compute[232433]: 2025-12-06 08:00:40.919 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:00:40 np0005548731 systemd[1]: machine-qemu\x2d93\x2dinstance\x2d000000b5.scope: Deactivated successfully.
Dec  6 03:00:40 np0005548731 systemd[1]: machine-qemu\x2d93\x2dinstance\x2d000000b5.scope: Consumed 15.824s CPU time.
Dec  6 03:00:40 np0005548731 systemd-machined[195355]: Machine qemu-93-instance-000000b5 terminated.
Dec  6 03:00:41 np0005548731 neutron-haproxy-ovnmeta-ecf01de9-e04e-423a-b106-dcf22b107dc4[319659]: [NOTICE]   (319663) : haproxy version is 2.8.14-c23fe91
Dec  6 03:00:41 np0005548731 neutron-haproxy-ovnmeta-ecf01de9-e04e-423a-b106-dcf22b107dc4[319659]: [NOTICE]   (319663) : path to executable is /usr/sbin/haproxy
Dec  6 03:00:41 np0005548731 neutron-haproxy-ovnmeta-ecf01de9-e04e-423a-b106-dcf22b107dc4[319659]: [WARNING]  (319663) : Exiting Master process...
Dec  6 03:00:41 np0005548731 neutron-haproxy-ovnmeta-ecf01de9-e04e-423a-b106-dcf22b107dc4[319659]: [ALERT]    (319663) : Current worker (319665) exited with code 143 (Terminated)
Dec  6 03:00:41 np0005548731 neutron-haproxy-ovnmeta-ecf01de9-e04e-423a-b106-dcf22b107dc4[319659]: [WARNING]  (319663) : All workers exited. Exiting... (0)
Dec  6 03:00:41 np0005548731 systemd[1]: libpod-588d00c79bb982c10f2e74d226e5d6f711d3dd09cbe5b8b62d6fd8b13864a2d0.scope: Deactivated successfully.
Dec  6 03:00:41 np0005548731 podman[320111]: 2025-12-06 08:00:41.123767911 +0000 UTC m=+0.110131539 container died 588d00c79bb982c10f2e74d226e5d6f711d3dd09cbe5b8b62d6fd8b13864a2d0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ecf01de9-e04e-423a-b106-dcf22b107dc4, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Dec  6 03:00:41 np0005548731 nova_compute[232433]: 2025-12-06 08:00:41.127 232437 INFO nova.virt.libvirt.driver [-] [instance: d1a8ef9c-0ce7-4841-9523-7f11435a1884] Instance destroyed successfully.#033[00m
Dec  6 03:00:41 np0005548731 nova_compute[232433]: 2025-12-06 08:00:41.128 232437 DEBUG nova.objects.instance [None req-558cd0a9-6afc-4565-b120-4a28d9eff00c 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Lazy-loading 'resources' on Instance uuid d1a8ef9c-0ce7-4841-9523-7f11435a1884 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 03:00:41 np0005548731 nova_compute[232433]: 2025-12-06 08:00:41.141 232437 DEBUG nova.virt.libvirt.vif [None req-558cd0a9-6afc-4565-b120-4a28d9eff00c 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T07:58:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1262973925',display_name='tempest-TestNetworkAdvancedServerOps-server-1262973925',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1262973925',id=181,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHajJo6sYKmo7BjEFfbTegHFFaysH3CPUR6yuP2Rayw3S9ts1Wd6TY6anx2QtLxK6yp4z4nQqn7Ss4CGPtBiZQsZd5U8dFeDqjYG81KqlV6e9SPXI48qB0u9ty6SGnMpqw==',key_name='tempest-TestNetworkAdvancedServerOps-720860597',keypairs=<?>,launch_index=0,launched_at=2025-12-06T08:00:19Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='fd8e24e430c64364ace789d88a68ba5f',ramdisk_id='',reservation_id='r-1koojzsy',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-1171852383',owner_user_name='tempest-TestNetworkAdvancedServerOps-1171852383-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T08:00:27Z,user_data=None,user_id='2ed2d17026504d70b893923a85cece4d',uuid=d1a8ef9c-0ce7-4841-9523-7f11435a1884,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "05c9980c-d230-4d61-9d98-4586e200fac5", "address": "fa:16:3e:28:35:99", "network": {"id": "ecf01de9-e04e-423a-b106-dcf22b107dc4", "bridge": "br-int", "label": "tempest-network-smoke--425482001", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.250", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fd8e24e430c64364ace789d88a68ba5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05c9980c-d2", "ovs_interfaceid": "05c9980c-d230-4d61-9d98-4586e200fac5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec  6 03:00:41 np0005548731 nova_compute[232433]: 2025-12-06 08:00:41.141 232437 DEBUG nova.network.os_vif_util [None req-558cd0a9-6afc-4565-b120-4a28d9eff00c 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Converting VIF {"id": "05c9980c-d230-4d61-9d98-4586e200fac5", "address": "fa:16:3e:28:35:99", "network": {"id": "ecf01de9-e04e-423a-b106-dcf22b107dc4", "bridge": "br-int", "label": "tempest-network-smoke--425482001", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.250", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fd8e24e430c64364ace789d88a68ba5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05c9980c-d2", "ovs_interfaceid": "05c9980c-d230-4d61-9d98-4586e200fac5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 03:00:41 np0005548731 nova_compute[232433]: 2025-12-06 08:00:41.142 232437 DEBUG nova.network.os_vif_util [None req-558cd0a9-6afc-4565-b120-4a28d9eff00c 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:28:35:99,bridge_name='br-int',has_traffic_filtering=True,id=05c9980c-d230-4d61-9d98-4586e200fac5,network=Network(ecf01de9-e04e-423a-b106-dcf22b107dc4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap05c9980c-d2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 03:00:41 np0005548731 nova_compute[232433]: 2025-12-06 08:00:41.142 232437 DEBUG os_vif [None req-558cd0a9-6afc-4565-b120-4a28d9eff00c 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:28:35:99,bridge_name='br-int',has_traffic_filtering=True,id=05c9980c-d230-4d61-9d98-4586e200fac5,network=Network(ecf01de9-e04e-423a-b106-dcf22b107dc4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap05c9980c-d2') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec  6 03:00:41 np0005548731 nova_compute[232433]: 2025-12-06 08:00:41.144 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:00:41 np0005548731 nova_compute[232433]: 2025-12-06 08:00:41.144 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap05c9980c-d2, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 03:00:41 np0005548731 nova_compute[232433]: 2025-12-06 08:00:41.147 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:00:41 np0005548731 nova_compute[232433]: 2025-12-06 08:00:41.148 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  6 03:00:41 np0005548731 nova_compute[232433]: 2025-12-06 08:00:41.149 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:00:41 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:00:41.152 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=77, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:ec:b3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '32:72:e7:89:e0:7d'}, ipsec=False) old=SB_Global(nb_cfg=76) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 03:00:41 np0005548731 nova_compute[232433]: 2025-12-06 08:00:41.153 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:00:41 np0005548731 nova_compute[232433]: 2025-12-06 08:00:41.154 232437 INFO os_vif [None req-558cd0a9-6afc-4565-b120-4a28d9eff00c 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:28:35:99,bridge_name='br-int',has_traffic_filtering=True,id=05c9980c-d230-4d61-9d98-4586e200fac5,network=Network(ecf01de9-e04e-423a-b106-dcf22b107dc4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap05c9980c-d2')#033[00m
Dec  6 03:00:41 np0005548731 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-588d00c79bb982c10f2e74d226e5d6f711d3dd09cbe5b8b62d6fd8b13864a2d0-userdata-shm.mount: Deactivated successfully.
Dec  6 03:00:41 np0005548731 systemd[1]: var-lib-containers-storage-overlay-26b0bd473afbf5164d03b8273a4a4e80dec511953821abec8aaa2d7c2e1b8393-merged.mount: Deactivated successfully.
Dec  6 03:00:41 np0005548731 podman[320111]: 2025-12-06 08:00:41.17977479 +0000 UTC m=+0.166138428 container cleanup 588d00c79bb982c10f2e74d226e5d6f711d3dd09cbe5b8b62d6fd8b13864a2d0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ecf01de9-e04e-423a-b106-dcf22b107dc4, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS)
Dec  6 03:00:41 np0005548731 systemd[1]: libpod-conmon-588d00c79bb982c10f2e74d226e5d6f711d3dd09cbe5b8b62d6fd8b13864a2d0.scope: Deactivated successfully.
Dec  6 03:00:41 np0005548731 podman[320165]: 2025-12-06 08:00:41.248408937 +0000 UTC m=+0.046310593 container remove 588d00c79bb982c10f2e74d226e5d6f711d3dd09cbe5b8b62d6fd8b13864a2d0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ecf01de9-e04e-423a-b106-dcf22b107dc4, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  6 03:00:41 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:00:41.257 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[514bb6d6-79b6-4a4b-9c04-baa14b37f3f8]: (4, ('Sat Dec  6 08:00:40 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-ecf01de9-e04e-423a-b106-dcf22b107dc4 (588d00c79bb982c10f2e74d226e5d6f711d3dd09cbe5b8b62d6fd8b13864a2d0)\n588d00c79bb982c10f2e74d226e5d6f711d3dd09cbe5b8b62d6fd8b13864a2d0\nSat Dec  6 08:00:41 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-ecf01de9-e04e-423a-b106-dcf22b107dc4 (588d00c79bb982c10f2e74d226e5d6f711d3dd09cbe5b8b62d6fd8b13864a2d0)\n588d00c79bb982c10f2e74d226e5d6f711d3dd09cbe5b8b62d6fd8b13864a2d0\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:00:41 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:00:41.260 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[3f674c3e-19ee-4f91-955e-f99c4d3611fb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:00:41 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:00:41.261 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapecf01de9-e0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 03:00:41 np0005548731 nova_compute[232433]: 2025-12-06 08:00:41.263 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:00:41 np0005548731 kernel: tapecf01de9-e0: left promiscuous mode
Dec  6 03:00:41 np0005548731 nova_compute[232433]: 2025-12-06 08:00:41.293 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:00:41 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:00:41.296 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[90843c0a-f06a-4020-9736-dbf39aa4ec1c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:00:41 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:00:41.314 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[b185f95f-ef65-427e-b44e-4a1f615d5152]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:00:41 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:00:41.315 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[506812a8-3980-43f4-ab5c-405a6dab6047]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:00:41 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:00:41.338 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[a7dc4188-66cb-4433-89da-2f55cdd836dc]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 828009, 'reachable_time': 30205, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 320183, 'error': None, 'target': 'ovnmeta-ecf01de9-e04e-423a-b106-dcf22b107dc4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:00:41 np0005548731 systemd[1]: run-netns-ovnmeta\x2decf01de9\x2de04e\x2d423a\x2db106\x2ddcf22b107dc4.mount: Deactivated successfully.
Dec  6 03:00:41 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:00:41.342 144079 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-ecf01de9-e04e-423a-b106-dcf22b107dc4 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Dec  6 03:00:41 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:00:41.342 144079 DEBUG oslo.privsep.daemon [-] privsep: reply[bf850270-32a6-4d9d-880c-62d9aff6039c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:00:41 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:00:41.343 143965 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec  6 03:00:41 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:00:41 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:00:41 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:00:41.483 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:00:41 np0005548731 nova_compute[232433]: 2025-12-06 08:00:41.959 232437 INFO nova.virt.libvirt.driver [None req-558cd0a9-6afc-4565-b120-4a28d9eff00c 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: d1a8ef9c-0ce7-4841-9523-7f11435a1884] Deleting instance files /var/lib/nova/instances/d1a8ef9c-0ce7-4841-9523-7f11435a1884_del#033[00m
Dec  6 03:00:41 np0005548731 nova_compute[232433]: 2025-12-06 08:00:41.960 232437 INFO nova.virt.libvirt.driver [None req-558cd0a9-6afc-4565-b120-4a28d9eff00c 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: d1a8ef9c-0ce7-4841-9523-7f11435a1884] Deletion of /var/lib/nova/instances/d1a8ef9c-0ce7-4841-9523-7f11435a1884_del complete#033[00m
Dec  6 03:00:42 np0005548731 nova_compute[232433]: 2025-12-06 08:00:42.017 232437 INFO nova.compute.manager [None req-558cd0a9-6afc-4565-b120-4a28d9eff00c 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: d1a8ef9c-0ce7-4841-9523-7f11435a1884] Took 1.19 seconds to destroy the instance on the hypervisor.#033[00m
Dec  6 03:00:42 np0005548731 nova_compute[232433]: 2025-12-06 08:00:42.018 232437 DEBUG oslo.service.loopingcall [None req-558cd0a9-6afc-4565-b120-4a28d9eff00c 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Dec  6 03:00:42 np0005548731 nova_compute[232433]: 2025-12-06 08:00:42.018 232437 DEBUG nova.compute.manager [-] [instance: d1a8ef9c-0ce7-4841-9523-7f11435a1884] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Dec  6 03:00:42 np0005548731 nova_compute[232433]: 2025-12-06 08:00:42.019 232437 DEBUG nova.network.neutron [-] [instance: d1a8ef9c-0ce7-4841-9523-7f11435a1884] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Dec  6 03:00:42 np0005548731 nova_compute[232433]: 2025-12-06 08:00:42.102 232437 DEBUG nova.network.neutron [req-ef9b058b-1415-4be8-960e-b1b9b6dac38d req-50d15b87-f6d6-455c-90de-64591894bc27 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: d1a8ef9c-0ce7-4841-9523-7f11435a1884] Updated VIF entry in instance network info cache for port 05c9980c-d230-4d61-9d98-4586e200fac5. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  6 03:00:42 np0005548731 nova_compute[232433]: 2025-12-06 08:00:42.103 232437 DEBUG nova.network.neutron [req-ef9b058b-1415-4be8-960e-b1b9b6dac38d req-50d15b87-f6d6-455c-90de-64591894bc27 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: d1a8ef9c-0ce7-4841-9523-7f11435a1884] Updating instance_info_cache with network_info: [{"id": "05c9980c-d230-4d61-9d98-4586e200fac5", "address": "fa:16:3e:28:35:99", "network": {"id": "ecf01de9-e04e-423a-b106-dcf22b107dc4", "bridge": "br-int", "label": "tempest-network-smoke--425482001", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fd8e24e430c64364ace789d88a68ba5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05c9980c-d2", "ovs_interfaceid": "05c9980c-d230-4d61-9d98-4586e200fac5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 03:00:42 np0005548731 nova_compute[232433]: 2025-12-06 08:00:42.105 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:00:42 np0005548731 nova_compute[232433]: 2025-12-06 08:00:42.106 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:00:42 np0005548731 nova_compute[232433]: 2025-12-06 08:00:42.131 232437 DEBUG oslo_concurrency.lockutils [req-ef9b058b-1415-4be8-960e-b1b9b6dac38d req-50d15b87-f6d6-455c-90de-64591894bc27 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-d1a8ef9c-0ce7-4841-9523-7f11435a1884" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 03:00:42 np0005548731 nova_compute[232433]: 2025-12-06 08:00:42.135 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:00:42 np0005548731 nova_compute[232433]: 2025-12-06 08:00:42.135 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:00:42 np0005548731 nova_compute[232433]: 2025-12-06 08:00:42.136 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:00:42 np0005548731 nova_compute[232433]: 2025-12-06 08:00:42.136 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  6 03:00:42 np0005548731 nova_compute[232433]: 2025-12-06 08:00:42.137 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 03:00:42 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 03:00:42 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/744438241' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 03:00:42 np0005548731 nova_compute[232433]: 2025-12-06 08:00:42.625 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.488s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 03:00:42 np0005548731 nova_compute[232433]: 2025-12-06 08:00:42.639 232437 DEBUG nova.network.neutron [-] [instance: d1a8ef9c-0ce7-4841-9523-7f11435a1884] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 03:00:42 np0005548731 nova_compute[232433]: 2025-12-06 08:00:42.654 232437 INFO nova.compute.manager [-] [instance: d1a8ef9c-0ce7-4841-9523-7f11435a1884] Took 0.64 seconds to deallocate network for instance.#033[00m
Dec  6 03:00:42 np0005548731 nova_compute[232433]: 2025-12-06 08:00:42.713 232437 DEBUG oslo_concurrency.lockutils [None req-558cd0a9-6afc-4565-b120-4a28d9eff00c 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:00:42 np0005548731 nova_compute[232433]: 2025-12-06 08:00:42.713 232437 DEBUG oslo_concurrency.lockutils [None req-558cd0a9-6afc-4565-b120-4a28d9eff00c 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:00:42 np0005548731 nova_compute[232433]: 2025-12-06 08:00:42.718 232437 DEBUG oslo_concurrency.lockutils [None req-558cd0a9-6afc-4565-b120-4a28d9eff00c 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.005s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:00:42 np0005548731 nova_compute[232433]: 2025-12-06 08:00:42.723 232437 DEBUG nova.virt.libvirt.driver [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] skipping disk for instance-000000b4 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  6 03:00:42 np0005548731 nova_compute[232433]: 2025-12-06 08:00:42.723 232437 DEBUG nova.virt.libvirt.driver [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] skipping disk for instance-000000b4 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  6 03:00:42 np0005548731 nova_compute[232433]: 2025-12-06 08:00:42.732 232437 DEBUG nova.compute.manager [req-c30e2703-ea2b-47a2-a54b-0d7e57ef2f97 req-735c76f2-c92a-4ae0-956d-f71155ba574f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: d1a8ef9c-0ce7-4841-9523-7f11435a1884] Received event network-vif-deleted-05c9980c-d230-4d61-9d98-4586e200fac5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 03:00:42 np0005548731 nova_compute[232433]: 2025-12-06 08:00:42.750 232437 INFO nova.scheduler.client.report [None req-558cd0a9-6afc-4565-b120-4a28d9eff00c 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Deleted allocations for instance d1a8ef9c-0ce7-4841-9523-7f11435a1884#033[00m
Dec  6 03:00:42 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:00:42 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:00:42 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:00:42.785 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:00:42 np0005548731 nova_compute[232433]: 2025-12-06 08:00:42.815 232437 DEBUG oslo_concurrency.lockutils [None req-558cd0a9-6afc-4565-b120-4a28d9eff00c 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Lock "d1a8ef9c-0ce7-4841-9523-7f11435a1884" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 1.997s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:00:42 np0005548731 nova_compute[232433]: 2025-12-06 08:00:42.850 232437 DEBUG nova.compute.manager [req-0532b4f7-d5ab-4669-b9de-93a42a94e959 req-4faa7f5f-8ed2-4fe8-a445-a1625b0247b4 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: d1a8ef9c-0ce7-4841-9523-7f11435a1884] Received event network-vif-unplugged-05c9980c-d230-4d61-9d98-4586e200fac5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 03:00:42 np0005548731 nova_compute[232433]: 2025-12-06 08:00:42.850 232437 DEBUG oslo_concurrency.lockutils [req-0532b4f7-d5ab-4669-b9de-93a42a94e959 req-4faa7f5f-8ed2-4fe8-a445-a1625b0247b4 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "d1a8ef9c-0ce7-4841-9523-7f11435a1884-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:00:42 np0005548731 nova_compute[232433]: 2025-12-06 08:00:42.850 232437 DEBUG oslo_concurrency.lockutils [req-0532b4f7-d5ab-4669-b9de-93a42a94e959 req-4faa7f5f-8ed2-4fe8-a445-a1625b0247b4 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "d1a8ef9c-0ce7-4841-9523-7f11435a1884-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:00:42 np0005548731 nova_compute[232433]: 2025-12-06 08:00:42.850 232437 DEBUG oslo_concurrency.lockutils [req-0532b4f7-d5ab-4669-b9de-93a42a94e959 req-4faa7f5f-8ed2-4fe8-a445-a1625b0247b4 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "d1a8ef9c-0ce7-4841-9523-7f11435a1884-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:00:42 np0005548731 nova_compute[232433]: 2025-12-06 08:00:42.850 232437 DEBUG nova.compute.manager [req-0532b4f7-d5ab-4669-b9de-93a42a94e959 req-4faa7f5f-8ed2-4fe8-a445-a1625b0247b4 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: d1a8ef9c-0ce7-4841-9523-7f11435a1884] No waiting events found dispatching network-vif-unplugged-05c9980c-d230-4d61-9d98-4586e200fac5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 03:00:42 np0005548731 nova_compute[232433]: 2025-12-06 08:00:42.851 232437 WARNING nova.compute.manager [req-0532b4f7-d5ab-4669-b9de-93a42a94e959 req-4faa7f5f-8ed2-4fe8-a445-a1625b0247b4 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: d1a8ef9c-0ce7-4841-9523-7f11435a1884] Received unexpected event network-vif-unplugged-05c9980c-d230-4d61-9d98-4586e200fac5 for instance with vm_state deleted and task_state None.#033[00m
Dec  6 03:00:42 np0005548731 nova_compute[232433]: 2025-12-06 08:00:42.851 232437 DEBUG nova.compute.manager [req-0532b4f7-d5ab-4669-b9de-93a42a94e959 req-4faa7f5f-8ed2-4fe8-a445-a1625b0247b4 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: d1a8ef9c-0ce7-4841-9523-7f11435a1884] Received event network-vif-plugged-05c9980c-d230-4d61-9d98-4586e200fac5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 03:00:42 np0005548731 nova_compute[232433]: 2025-12-06 08:00:42.851 232437 DEBUG oslo_concurrency.lockutils [req-0532b4f7-d5ab-4669-b9de-93a42a94e959 req-4faa7f5f-8ed2-4fe8-a445-a1625b0247b4 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "d1a8ef9c-0ce7-4841-9523-7f11435a1884-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:00:42 np0005548731 nova_compute[232433]: 2025-12-06 08:00:42.851 232437 DEBUG oslo_concurrency.lockutils [req-0532b4f7-d5ab-4669-b9de-93a42a94e959 req-4faa7f5f-8ed2-4fe8-a445-a1625b0247b4 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "d1a8ef9c-0ce7-4841-9523-7f11435a1884-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:00:42 np0005548731 nova_compute[232433]: 2025-12-06 08:00:42.851 232437 DEBUG oslo_concurrency.lockutils [req-0532b4f7-d5ab-4669-b9de-93a42a94e959 req-4faa7f5f-8ed2-4fe8-a445-a1625b0247b4 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "d1a8ef9c-0ce7-4841-9523-7f11435a1884-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:00:42 np0005548731 nova_compute[232433]: 2025-12-06 08:00:42.851 232437 DEBUG nova.compute.manager [req-0532b4f7-d5ab-4669-b9de-93a42a94e959 req-4faa7f5f-8ed2-4fe8-a445-a1625b0247b4 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: d1a8ef9c-0ce7-4841-9523-7f11435a1884] No waiting events found dispatching network-vif-plugged-05c9980c-d230-4d61-9d98-4586e200fac5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 03:00:42 np0005548731 nova_compute[232433]: 2025-12-06 08:00:42.852 232437 WARNING nova.compute.manager [req-0532b4f7-d5ab-4669-b9de-93a42a94e959 req-4faa7f5f-8ed2-4fe8-a445-a1625b0247b4 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: d1a8ef9c-0ce7-4841-9523-7f11435a1884] Received unexpected event network-vif-plugged-05c9980c-d230-4d61-9d98-4586e200fac5 for instance with vm_state deleted and task_state None.#033[00m
Dec  6 03:00:42 np0005548731 nova_compute[232433]: 2025-12-06 08:00:42.885 232437 WARNING nova.virt.libvirt.driver [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  6 03:00:42 np0005548731 nova_compute[232433]: 2025-12-06 08:00:42.886 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4007MB free_disk=20.851505279541016GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  6 03:00:42 np0005548731 nova_compute[232433]: 2025-12-06 08:00:42.886 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:00:42 np0005548731 nova_compute[232433]: 2025-12-06 08:00:42.887 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:00:42 np0005548731 nova_compute[232433]: 2025-12-06 08:00:42.960 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Instance 27f553d7-b010-40ef-b0cb-42ff0c466354 actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  6 03:00:42 np0005548731 nova_compute[232433]: 2025-12-06 08:00:42.961 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  6 03:00:42 np0005548731 nova_compute[232433]: 2025-12-06 08:00:42.961 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  6 03:00:42 np0005548731 nova_compute[232433]: 2025-12-06 08:00:42.975 232437 DEBUG nova.scheduler.client.report [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Refreshing inventories for resource provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Dec  6 03:00:42 np0005548731 nova_compute[232433]: 2025-12-06 08:00:42.991 232437 DEBUG nova.scheduler.client.report [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Updating ProviderTree inventory for provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Dec  6 03:00:42 np0005548731 nova_compute[232433]: 2025-12-06 08:00:42.991 232437 DEBUG nova.compute.provider_tree [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Updating inventory in ProviderTree for provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Dec  6 03:00:43 np0005548731 nova_compute[232433]: 2025-12-06 08:00:43.004 232437 DEBUG nova.scheduler.client.report [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Refreshing aggregate associations for resource provider 6d00757a-082f-486d-ae84-869a2ba2e6e7, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Dec  6 03:00:43 np0005548731 nova_compute[232433]: 2025-12-06 08:00:43.024 232437 DEBUG nova.scheduler.client.report [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Refreshing trait associations for resource provider 6d00757a-082f-486d-ae84-869a2ba2e6e7, traits: COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_STORAGE_BUS_USB,COMPUTE_ACCELERATORS,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_TRUSTED_CERTS,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_SSE,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_SSE2,HW_CPU_X86_SSE41,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_STORAGE_BUS_SATA,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_SECURITY_TPM_1_2,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_VOLUME_EXTEND,COMPUTE_NODE,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_SECURITY_TPM_2_0,COMPUTE_VIOMMU_MODEL_AUTO,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_SSE42,COMPUTE_STORAGE_BUS_FDC,COMPUTE_RESCUE_BFV,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_IMAGE_TYPE_ARI _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Dec  6 03:00:43 np0005548731 nova_compute[232433]: 2025-12-06 08:00:43.064 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 03:00:43 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 03:00:43 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1692009206' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 03:00:43 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:00:43 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:00:43 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:00:43.486 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:00:43 np0005548731 nova_compute[232433]: 2025-12-06 08:00:43.489 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.426s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 03:00:43 np0005548731 nova_compute[232433]: 2025-12-06 08:00:43.497 232437 DEBUG nova.compute.provider_tree [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Inventory has not changed in ProviderTree for provider: 6d00757a-082f-486d-ae84-869a2ba2e6e7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  6 03:00:43 np0005548731 nova_compute[232433]: 2025-12-06 08:00:43.521 232437 DEBUG nova.scheduler.client.report [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Inventory has not changed for provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  6 03:00:43 np0005548731 nova_compute[232433]: 2025-12-06 08:00:43.560 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  6 03:00:43 np0005548731 nova_compute[232433]: 2025-12-06 08:00:43.561 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.674s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:00:44 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e410 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:00:44 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:00:44 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:00:44 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:00:44.788 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:00:45 np0005548731 nova_compute[232433]: 2025-12-06 08:00:45.377 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:00:45 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:00:45 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:00:45 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:00:45.488 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:00:45 np0005548731 nova_compute[232433]: 2025-12-06 08:00:45.561 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:00:45 np0005548731 nova_compute[232433]: 2025-12-06 08:00:45.562 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:00:46 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 03:00:46 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 03:00:46 np0005548731 nova_compute[232433]: 2025-12-06 08:00:46.147 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:00:46 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:00:46.345 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=9f96b960-b4f2-40bd-ae99-08121f5e8b78, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '77'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 03:00:46 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:00:46 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:00:46 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:00:46.791 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:00:47 np0005548731 nova_compute[232433]: 2025-12-06 08:00:47.121 232437 DEBUG nova.compute.manager [req-ed6f548f-983a-4e97-a245-9dd114fc6dd1 req-23d075ac-ed94-40cc-a042-67d9e307c499 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 27f553d7-b010-40ef-b0cb-42ff0c466354] Received event network-changed-2b54b935-6ec8-4a68-a404-6be608d5b405 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 03:00:47 np0005548731 nova_compute[232433]: 2025-12-06 08:00:47.121 232437 DEBUG nova.compute.manager [req-ed6f548f-983a-4e97-a245-9dd114fc6dd1 req-23d075ac-ed94-40cc-a042-67d9e307c499 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 27f553d7-b010-40ef-b0cb-42ff0c466354] Refreshing instance network info cache due to event network-changed-2b54b935-6ec8-4a68-a404-6be608d5b405. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  6 03:00:47 np0005548731 nova_compute[232433]: 2025-12-06 08:00:47.121 232437 DEBUG oslo_concurrency.lockutils [req-ed6f548f-983a-4e97-a245-9dd114fc6dd1 req-23d075ac-ed94-40cc-a042-67d9e307c499 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-27f553d7-b010-40ef-b0cb-42ff0c466354" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 03:00:47 np0005548731 nova_compute[232433]: 2025-12-06 08:00:47.122 232437 DEBUG oslo_concurrency.lockutils [req-ed6f548f-983a-4e97-a245-9dd114fc6dd1 req-23d075ac-ed94-40cc-a042-67d9e307c499 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-27f553d7-b010-40ef-b0cb-42ff0c466354" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 03:00:47 np0005548731 nova_compute[232433]: 2025-12-06 08:00:47.122 232437 DEBUG nova.network.neutron [req-ed6f548f-983a-4e97-a245-9dd114fc6dd1 req-23d075ac-ed94-40cc-a042-67d9e307c499 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 27f553d7-b010-40ef-b0cb-42ff0c466354] Refreshing network info cache for port 2b54b935-6ec8-4a68-a404-6be608d5b405 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  6 03:00:47 np0005548731 nova_compute[232433]: 2025-12-06 08:00:47.204 232437 DEBUG oslo_concurrency.lockutils [None req-697eff34-ef89-41cb-a7d8-a2d8bfc3f3bc 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] Acquiring lock "27f553d7-b010-40ef-b0cb-42ff0c466354" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:00:47 np0005548731 nova_compute[232433]: 2025-12-06 08:00:47.205 232437 DEBUG oslo_concurrency.lockutils [None req-697eff34-ef89-41cb-a7d8-a2d8bfc3f3bc 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] Lock "27f553d7-b010-40ef-b0cb-42ff0c466354" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:00:47 np0005548731 nova_compute[232433]: 2025-12-06 08:00:47.206 232437 DEBUG oslo_concurrency.lockutils [None req-697eff34-ef89-41cb-a7d8-a2d8bfc3f3bc 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] Acquiring lock "27f553d7-b010-40ef-b0cb-42ff0c466354-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:00:47 np0005548731 nova_compute[232433]: 2025-12-06 08:00:47.206 232437 DEBUG oslo_concurrency.lockutils [None req-697eff34-ef89-41cb-a7d8-a2d8bfc3f3bc 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] Lock "27f553d7-b010-40ef-b0cb-42ff0c466354-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:00:47 np0005548731 nova_compute[232433]: 2025-12-06 08:00:47.207 232437 DEBUG oslo_concurrency.lockutils [None req-697eff34-ef89-41cb-a7d8-a2d8bfc3f3bc 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] Lock "27f553d7-b010-40ef-b0cb-42ff0c466354-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:00:47 np0005548731 nova_compute[232433]: 2025-12-06 08:00:47.209 232437 INFO nova.compute.manager [None req-697eff34-ef89-41cb-a7d8-a2d8bfc3f3bc 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] [instance: 27f553d7-b010-40ef-b0cb-42ff0c466354] Terminating instance#033[00m
Dec  6 03:00:47 np0005548731 nova_compute[232433]: 2025-12-06 08:00:47.211 232437 DEBUG nova.compute.manager [None req-697eff34-ef89-41cb-a7d8-a2d8bfc3f3bc 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] [instance: 27f553d7-b010-40ef-b0cb-42ff0c466354] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec  6 03:00:47 np0005548731 kernel: tap2b54b935-6e (unregistering): left promiscuous mode
Dec  6 03:00:47 np0005548731 NetworkManager[49182]: <info>  [1765008047.2697] device (tap2b54b935-6e): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec  6 03:00:47 np0005548731 ovn_controller[133927]: 2025-12-06T08:00:47Z|00925|binding|INFO|Releasing lport 2b54b935-6ec8-4a68-a404-6be608d5b405 from this chassis (sb_readonly=0)
Dec  6 03:00:47 np0005548731 ovn_controller[133927]: 2025-12-06T08:00:47Z|00926|binding|INFO|Setting lport 2b54b935-6ec8-4a68-a404-6be608d5b405 down in Southbound
Dec  6 03:00:47 np0005548731 ovn_controller[133927]: 2025-12-06T08:00:47Z|00927|binding|INFO|Removing iface tap2b54b935-6e ovn-installed in OVS
Dec  6 03:00:47 np0005548731 nova_compute[232433]: 2025-12-06 08:00:47.282 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:00:47 np0005548731 nova_compute[232433]: 2025-12-06 08:00:47.284 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:00:47 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:00:47.288 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:15:9d:85 10.100.0.6'], port_security=['fa:16:3e:15:9d:85 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '27f553d7-b010-40ef-b0cb-42ff0c466354', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8caa40db-27da-43ab-86ca-042284636e71', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c3c0564f8e9f4af9ae5b597a275c989f', 'neutron:revision_number': '9', 'neutron:security_group_ids': 'fcb04c4c-5a67-4d1a-9338-9588ff78bd36', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3ce9c1bf-feee-480c-a359-3eaf272f4b83, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], logical_port=2b54b935-6ec8-4a68-a404-6be608d5b405) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 03:00:47 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:00:47.289 143965 INFO neutron.agent.ovn.metadata.agent [-] Port 2b54b935-6ec8-4a68-a404-6be608d5b405 in datapath 8caa40db-27da-43ab-86ca-042284636e71 unbound from our chassis#033[00m
Dec  6 03:00:47 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:00:47.291 143965 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 8caa40db-27da-43ab-86ca-042284636e71, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec  6 03:00:47 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:00:47.292 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[17e63a8e-e4c7-43ce-8ec3-748419c968b4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:00:47 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:00:47.292 143965 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-8caa40db-27da-43ab-86ca-042284636e71 namespace which is not needed anymore#033[00m
Dec  6 03:00:47 np0005548731 nova_compute[232433]: 2025-12-06 08:00:47.307 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:00:47 np0005548731 systemd[1]: machine-qemu\x2d92\x2dinstance\x2d000000b4.scope: Deactivated successfully.
Dec  6 03:00:47 np0005548731 systemd[1]: machine-qemu\x2d92\x2dinstance\x2d000000b4.scope: Consumed 15.691s CPU time.
Dec  6 03:00:47 np0005548731 systemd-machined[195355]: Machine qemu-92-instance-000000b4 terminated.
Dec  6 03:00:47 np0005548731 neutron-haproxy-ovnmeta-8caa40db-27da-43ab-86ca-042284636e71[319342]: [NOTICE]   (319346) : haproxy version is 2.8.14-c23fe91
Dec  6 03:00:47 np0005548731 neutron-haproxy-ovnmeta-8caa40db-27da-43ab-86ca-042284636e71[319342]: [NOTICE]   (319346) : path to executable is /usr/sbin/haproxy
Dec  6 03:00:47 np0005548731 neutron-haproxy-ovnmeta-8caa40db-27da-43ab-86ca-042284636e71[319342]: [WARNING]  (319346) : Exiting Master process...
Dec  6 03:00:47 np0005548731 neutron-haproxy-ovnmeta-8caa40db-27da-43ab-86ca-042284636e71[319342]: [ALERT]    (319346) : Current worker (319348) exited with code 143 (Terminated)
Dec  6 03:00:47 np0005548731 neutron-haproxy-ovnmeta-8caa40db-27da-43ab-86ca-042284636e71[319342]: [WARNING]  (319346) : All workers exited. Exiting... (0)
Dec  6 03:00:47 np0005548731 systemd[1]: libpod-4a6a97e5749910a24af38173541969d5e81e43ddeae6c9221f049a868082f8d3.scope: Deactivated successfully.
Dec  6 03:00:47 np0005548731 podman[320305]: 2025-12-06 08:00:47.419645238 +0000 UTC m=+0.040456960 container died 4a6a97e5749910a24af38173541969d5e81e43ddeae6c9221f049a868082f8d3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8caa40db-27da-43ab-86ca-042284636e71, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.vendor=CentOS)
Dec  6 03:00:47 np0005548731 nova_compute[232433]: 2025-12-06 08:00:47.446 232437 INFO nova.virt.libvirt.driver [-] [instance: 27f553d7-b010-40ef-b0cb-42ff0c466354] Instance destroyed successfully.#033[00m
Dec  6 03:00:47 np0005548731 nova_compute[232433]: 2025-12-06 08:00:47.447 232437 DEBUG nova.objects.instance [None req-697eff34-ef89-41cb-a7d8-a2d8bfc3f3bc 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] Lazy-loading 'resources' on Instance uuid 27f553d7-b010-40ef-b0cb-42ff0c466354 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 03:00:47 np0005548731 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-4a6a97e5749910a24af38173541969d5e81e43ddeae6c9221f049a868082f8d3-userdata-shm.mount: Deactivated successfully.
Dec  6 03:00:47 np0005548731 systemd[1]: var-lib-containers-storage-overlay-5b33ecc8da489d6f5dafb4c0f4d7488ac589556eebb344f009ba0348cf859bfc-merged.mount: Deactivated successfully.
Dec  6 03:00:47 np0005548731 podman[320305]: 2025-12-06 08:00:47.464954045 +0000 UTC m=+0.085765777 container cleanup 4a6a97e5749910a24af38173541969d5e81e43ddeae6c9221f049a868082f8d3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8caa40db-27da-43ab-86ca-042284636e71, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  6 03:00:47 np0005548731 nova_compute[232433]: 2025-12-06 08:00:47.466 232437 DEBUG nova.virt.libvirt.vif [None req-697eff34-ef89-41cb-a7d8-a2d8bfc3f3bc 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-12-06T07:58:50Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestShelveInstance-server-1733328147',display_name='tempest-TestShelveInstance-server-1733328147',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-testshelveinstance-server-1733328147',id=180,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBM4sXJH069UiaxcddlkMS95GRWWK+3yMzXihiGRU/mHbPKO5jPcAw1Vlvstr18SgoFXBRYxyHafuDIwfD1IsMDXWkERYPnz3TEHdLWZin3OQEFfVSfeX8bcZN+wdDBDnqQ==',key_name='tempest-TestShelveInstance-1084557310',keypairs=<?>,launch_index=0,launched_at=2025-12-06T08:00:12Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='c3c0564f8e9f4af9ae5b597a275c989f',ramdisk_id='',reservation_id='r-hmpcrqao',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-TestShelveInstance-1863009913',owner_user_name='tempest-TestShelveInstance-1863009913-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T08:00:12Z,user_data=None,user_id='98e657096e3f4b528cd461a3dd6a750e',uuid=27f553d7-b010-40ef-b0cb-42ff0c466354,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "2b54b935-6ec8-4a68-a404-6be608d5b405", "address": "fa:16:3e:15:9d:85", "network": {"id": "8caa40db-27da-43ab-86ca-042284636e71", "bridge": "br-int", "label": "tempest-TestShelveInstance-1111687365-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, 
"floating_ips": [{"address": "192.168.122.240", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c3c0564f8e9f4af9ae5b597a275c989f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2b54b935-6e", "ovs_interfaceid": "2b54b935-6ec8-4a68-a404-6be608d5b405", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec  6 03:00:47 np0005548731 nova_compute[232433]: 2025-12-06 08:00:47.467 232437 DEBUG nova.network.os_vif_util [None req-697eff34-ef89-41cb-a7d8-a2d8bfc3f3bc 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] Converting VIF {"id": "2b54b935-6ec8-4a68-a404-6be608d5b405", "address": "fa:16:3e:15:9d:85", "network": {"id": "8caa40db-27da-43ab-86ca-042284636e71", "bridge": "br-int", "label": "tempest-TestShelveInstance-1111687365-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.240", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c3c0564f8e9f4af9ae5b597a275c989f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2b54b935-6e", "ovs_interfaceid": "2b54b935-6ec8-4a68-a404-6be608d5b405", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 03:00:47 np0005548731 nova_compute[232433]: 2025-12-06 08:00:47.468 232437 DEBUG nova.network.os_vif_util [None req-697eff34-ef89-41cb-a7d8-a2d8bfc3f3bc 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:15:9d:85,bridge_name='br-int',has_traffic_filtering=True,id=2b54b935-6ec8-4a68-a404-6be608d5b405,network=Network(8caa40db-27da-43ab-86ca-042284636e71),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2b54b935-6e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 03:00:47 np0005548731 nova_compute[232433]: 2025-12-06 08:00:47.469 232437 DEBUG os_vif [None req-697eff34-ef89-41cb-a7d8-a2d8bfc3f3bc 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:15:9d:85,bridge_name='br-int',has_traffic_filtering=True,id=2b54b935-6ec8-4a68-a404-6be608d5b405,network=Network(8caa40db-27da-43ab-86ca-042284636e71),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2b54b935-6e') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec  6 03:00:47 np0005548731 nova_compute[232433]: 2025-12-06 08:00:47.471 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:00:47 np0005548731 nova_compute[232433]: 2025-12-06 08:00:47.471 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2b54b935-6e, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 03:00:47 np0005548731 systemd[1]: libpod-conmon-4a6a97e5749910a24af38173541969d5e81e43ddeae6c9221f049a868082f8d3.scope: Deactivated successfully.
Dec  6 03:00:47 np0005548731 nova_compute[232433]: 2025-12-06 08:00:47.475 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:00:47 np0005548731 nova_compute[232433]: 2025-12-06 08:00:47.478 232437 INFO os_vif [None req-697eff34-ef89-41cb-a7d8-a2d8bfc3f3bc 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:15:9d:85,bridge_name='br-int',has_traffic_filtering=True,id=2b54b935-6ec8-4a68-a404-6be608d5b405,network=Network(8caa40db-27da-43ab-86ca-042284636e71),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2b54b935-6e')#033[00m
Dec  6 03:00:47 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:00:47 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:00:47 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:00:47.490 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:00:47 np0005548731 podman[320345]: 2025-12-06 08:00:47.525888845 +0000 UTC m=+0.040881531 container remove 4a6a97e5749910a24af38173541969d5e81e43ddeae6c9221f049a868082f8d3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8caa40db-27da-43ab-86ca-042284636e71, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec  6 03:00:47 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:00:47.533 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[868d1218-7ee8-4140-9af8-5ace696a9f38]: (4, ('Sat Dec  6 08:00:47 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-8caa40db-27da-43ab-86ca-042284636e71 (4a6a97e5749910a24af38173541969d5e81e43ddeae6c9221f049a868082f8d3)\n4a6a97e5749910a24af38173541969d5e81e43ddeae6c9221f049a868082f8d3\nSat Dec  6 08:00:47 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-8caa40db-27da-43ab-86ca-042284636e71 (4a6a97e5749910a24af38173541969d5e81e43ddeae6c9221f049a868082f8d3)\n4a6a97e5749910a24af38173541969d5e81e43ddeae6c9221f049a868082f8d3\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:00:47 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:00:47.535 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[d60419ce-1446-4bff-98c8-f45e8f81ef50]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:00:47 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:00:47.536 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8caa40db-20, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 03:00:47 np0005548731 kernel: tap8caa40db-20: left promiscuous mode
Dec  6 03:00:47 np0005548731 nova_compute[232433]: 2025-12-06 08:00:47.539 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:00:47 np0005548731 nova_compute[232433]: 2025-12-06 08:00:47.552 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:00:47 np0005548731 nova_compute[232433]: 2025-12-06 08:00:47.556 232437 DEBUG nova.compute.manager [req-5ac784da-e57e-4716-9d63-f112de632ec9 req-ddd42176-2db1-4517-bda1-a771931fb1ab 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 27f553d7-b010-40ef-b0cb-42ff0c466354] Received event network-vif-unplugged-2b54b935-6ec8-4a68-a404-6be608d5b405 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 03:00:47 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:00:47.555 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[9011a070-0aeb-4c1c-8303-6a64f31745fa]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:00:47 np0005548731 nova_compute[232433]: 2025-12-06 08:00:47.556 232437 DEBUG oslo_concurrency.lockutils [req-5ac784da-e57e-4716-9d63-f112de632ec9 req-ddd42176-2db1-4517-bda1-a771931fb1ab 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "27f553d7-b010-40ef-b0cb-42ff0c466354-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:00:47 np0005548731 nova_compute[232433]: 2025-12-06 08:00:47.556 232437 DEBUG oslo_concurrency.lockutils [req-5ac784da-e57e-4716-9d63-f112de632ec9 req-ddd42176-2db1-4517-bda1-a771931fb1ab 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "27f553d7-b010-40ef-b0cb-42ff0c466354-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:00:47 np0005548731 nova_compute[232433]: 2025-12-06 08:00:47.557 232437 DEBUG oslo_concurrency.lockutils [req-5ac784da-e57e-4716-9d63-f112de632ec9 req-ddd42176-2db1-4517-bda1-a771931fb1ab 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "27f553d7-b010-40ef-b0cb-42ff0c466354-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:00:47 np0005548731 nova_compute[232433]: 2025-12-06 08:00:47.557 232437 DEBUG nova.compute.manager [req-5ac784da-e57e-4716-9d63-f112de632ec9 req-ddd42176-2db1-4517-bda1-a771931fb1ab 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 27f553d7-b010-40ef-b0cb-42ff0c466354] No waiting events found dispatching network-vif-unplugged-2b54b935-6ec8-4a68-a404-6be608d5b405 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 03:00:47 np0005548731 nova_compute[232433]: 2025-12-06 08:00:47.557 232437 DEBUG nova.compute.manager [req-5ac784da-e57e-4716-9d63-f112de632ec9 req-ddd42176-2db1-4517-bda1-a771931fb1ab 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 27f553d7-b010-40ef-b0cb-42ff0c466354] Received event network-vif-unplugged-2b54b935-6ec8-4a68-a404-6be608d5b405 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Dec  6 03:00:47 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:00:47.574 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[7115bb08-2205-408a-83fc-fa3ea3b75414]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:00:47 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:00:47.575 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[74f02bcf-9bcf-4e0c-adb9-3c38db566f96]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:00:47 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:00:47.592 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[c5dfa0b0-3006-4c4a-a344-e22dc7c1be61]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 827305, 'reachable_time': 30496, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 320378, 'error': None, 'target': 'ovnmeta-8caa40db-27da-43ab-86ca-042284636e71', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:00:47 np0005548731 systemd[1]: run-netns-ovnmeta\x2d8caa40db\x2d27da\x2d43ab\x2d86ca\x2d042284636e71.mount: Deactivated successfully.
Dec  6 03:00:47 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:00:47.596 144079 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-8caa40db-27da-43ab-86ca-042284636e71 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Dec  6 03:00:47 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:00:47.596 144079 DEBUG oslo.privsep.daemon [-] privsep: reply[b59bb6ac-1a45-48ae-8e99-d4608898c31f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:00:47 np0005548731 nova_compute[232433]: 2025-12-06 08:00:47.763 232437 INFO nova.virt.libvirt.driver [None req-697eff34-ef89-41cb-a7d8-a2d8bfc3f3bc 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] [instance: 27f553d7-b010-40ef-b0cb-42ff0c466354] Deleting instance files /var/lib/nova/instances/27f553d7-b010-40ef-b0cb-42ff0c466354_del#033[00m
Dec  6 03:00:47 np0005548731 nova_compute[232433]: 2025-12-06 08:00:47.764 232437 INFO nova.virt.libvirt.driver [None req-697eff34-ef89-41cb-a7d8-a2d8bfc3f3bc 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] [instance: 27f553d7-b010-40ef-b0cb-42ff0c466354] Deletion of /var/lib/nova/instances/27f553d7-b010-40ef-b0cb-42ff0c466354_del complete#033[00m
Dec  6 03:00:47 np0005548731 nova_compute[232433]: 2025-12-06 08:00:47.835 232437 INFO nova.compute.manager [None req-697eff34-ef89-41cb-a7d8-a2d8bfc3f3bc 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] [instance: 27f553d7-b010-40ef-b0cb-42ff0c466354] Took 0.62 seconds to destroy the instance on the hypervisor.#033[00m
Dec  6 03:00:47 np0005548731 nova_compute[232433]: 2025-12-06 08:00:47.836 232437 DEBUG oslo.service.loopingcall [None req-697eff34-ef89-41cb-a7d8-a2d8bfc3f3bc 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Dec  6 03:00:47 np0005548731 nova_compute[232433]: 2025-12-06 08:00:47.836 232437 DEBUG nova.compute.manager [-] [instance: 27f553d7-b010-40ef-b0cb-42ff0c466354] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Dec  6 03:00:47 np0005548731 nova_compute[232433]: 2025-12-06 08:00:47.836 232437 DEBUG nova.network.neutron [-] [instance: 27f553d7-b010-40ef-b0cb-42ff0c466354] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Dec  6 03:00:48 np0005548731 nova_compute[232433]: 2025-12-06 08:00:48.112 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:00:48 np0005548731 nova_compute[232433]: 2025-12-06 08:00:48.327 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:00:48 np0005548731 nova_compute[232433]: 2025-12-06 08:00:48.629 232437 DEBUG nova.network.neutron [-] [instance: 27f553d7-b010-40ef-b0cb-42ff0c466354] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 03:00:48 np0005548731 nova_compute[232433]: 2025-12-06 08:00:48.659 232437 INFO nova.compute.manager [-] [instance: 27f553d7-b010-40ef-b0cb-42ff0c466354] Took 0.82 seconds to deallocate network for instance.#033[00m
Dec  6 03:00:48 np0005548731 nova_compute[232433]: 2025-12-06 08:00:48.683 232437 DEBUG nova.network.neutron [req-ed6f548f-983a-4e97-a245-9dd114fc6dd1 req-23d075ac-ed94-40cc-a042-67d9e307c499 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 27f553d7-b010-40ef-b0cb-42ff0c466354] Updated VIF entry in instance network info cache for port 2b54b935-6ec8-4a68-a404-6be608d5b405. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  6 03:00:48 np0005548731 nova_compute[232433]: 2025-12-06 08:00:48.683 232437 DEBUG nova.network.neutron [req-ed6f548f-983a-4e97-a245-9dd114fc6dd1 req-23d075ac-ed94-40cc-a042-67d9e307c499 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 27f553d7-b010-40ef-b0cb-42ff0c466354] Updating instance_info_cache with network_info: [{"id": "2b54b935-6ec8-4a68-a404-6be608d5b405", "address": "fa:16:3e:15:9d:85", "network": {"id": "8caa40db-27da-43ab-86ca-042284636e71", "bridge": "br-int", "label": "tempest-TestShelveInstance-1111687365-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c3c0564f8e9f4af9ae5b597a275c989f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2b54b935-6e", "ovs_interfaceid": "2b54b935-6ec8-4a68-a404-6be608d5b405", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 03:00:48 np0005548731 nova_compute[232433]: 2025-12-06 08:00:48.702 232437 DEBUG oslo_concurrency.lockutils [req-ed6f548f-983a-4e97-a245-9dd114fc6dd1 req-23d075ac-ed94-40cc-a042-67d9e307c499 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-27f553d7-b010-40ef-b0cb-42ff0c466354" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 03:00:48 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:00:48 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:00:48 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:00:48.793 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:00:49 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e410 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:00:49 np0005548731 nova_compute[232433]: 2025-12-06 08:00:49.246 232437 INFO nova.compute.manager [None req-697eff34-ef89-41cb-a7d8-a2d8bfc3f3bc 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] [instance: 27f553d7-b010-40ef-b0cb-42ff0c466354] Took 0.59 seconds to detach 1 volumes for instance.#033[00m
Dec  6 03:00:49 np0005548731 nova_compute[232433]: 2025-12-06 08:00:49.247 232437 DEBUG nova.compute.manager [None req-697eff34-ef89-41cb-a7d8-a2d8bfc3f3bc 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] [instance: 27f553d7-b010-40ef-b0cb-42ff0c466354] Deleting volume: f1cb8c3c-521b-48db-b5f7-453fef5dd2fe _cleanup_volumes /usr/lib/python3.9/site-packages/nova/compute/manager.py:3217#033[00m
Dec  6 03:00:49 np0005548731 nova_compute[232433]: 2025-12-06 08:00:49.285 232437 DEBUG nova.compute.manager [req-879ee0c6-8067-4be6-871f-0587b3bcd73e req-3c9b605a-6fea-464a-9b5d-ec3cb9d56fc1 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 27f553d7-b010-40ef-b0cb-42ff0c466354] Received event network-vif-deleted-2b54b935-6ec8-4a68-a404-6be608d5b405 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 03:00:49 np0005548731 nova_compute[232433]: 2025-12-06 08:00:49.286 232437 INFO nova.compute.manager [req-879ee0c6-8067-4be6-871f-0587b3bcd73e req-3c9b605a-6fea-464a-9b5d-ec3cb9d56fc1 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 27f553d7-b010-40ef-b0cb-42ff0c466354] Neutron deleted interface 2b54b935-6ec8-4a68-a404-6be608d5b405; detaching it from the instance and deleting it from the info cache#033[00m
Dec  6 03:00:49 np0005548731 nova_compute[232433]: 2025-12-06 08:00:49.286 232437 DEBUG nova.network.neutron [req-879ee0c6-8067-4be6-871f-0587b3bcd73e req-3c9b605a-6fea-464a-9b5d-ec3cb9d56fc1 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 27f553d7-b010-40ef-b0cb-42ff0c466354] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 03:00:49 np0005548731 nova_compute[232433]: 2025-12-06 08:00:49.316 232437 DEBUG nova.compute.manager [req-879ee0c6-8067-4be6-871f-0587b3bcd73e req-3c9b605a-6fea-464a-9b5d-ec3cb9d56fc1 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 27f553d7-b010-40ef-b0cb-42ff0c466354] Detach interface failed, port_id=2b54b935-6ec8-4a68-a404-6be608d5b405, reason: Instance 27f553d7-b010-40ef-b0cb-42ff0c466354 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882#033[00m
Dec  6 03:00:49 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:00:49 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:00:49 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:00:49.492 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:00:49 np0005548731 nova_compute[232433]: 2025-12-06 08:00:49.702 232437 DEBUG nova.compute.manager [req-b40228f9-b768-4c90-ba75-d51212f1b78b req-b7946630-8ccf-45fd-8b7c-3c4566a9b17c 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 27f553d7-b010-40ef-b0cb-42ff0c466354] Received event network-vif-plugged-2b54b935-6ec8-4a68-a404-6be608d5b405 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 03:00:49 np0005548731 nova_compute[232433]: 2025-12-06 08:00:49.702 232437 DEBUG oslo_concurrency.lockutils [req-b40228f9-b768-4c90-ba75-d51212f1b78b req-b7946630-8ccf-45fd-8b7c-3c4566a9b17c 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "27f553d7-b010-40ef-b0cb-42ff0c466354-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:00:49 np0005548731 nova_compute[232433]: 2025-12-06 08:00:49.703 232437 DEBUG oslo_concurrency.lockutils [req-b40228f9-b768-4c90-ba75-d51212f1b78b req-b7946630-8ccf-45fd-8b7c-3c4566a9b17c 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "27f553d7-b010-40ef-b0cb-42ff0c466354-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:00:49 np0005548731 nova_compute[232433]: 2025-12-06 08:00:49.703 232437 DEBUG oslo_concurrency.lockutils [req-b40228f9-b768-4c90-ba75-d51212f1b78b req-b7946630-8ccf-45fd-8b7c-3c4566a9b17c 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "27f553d7-b010-40ef-b0cb-42ff0c466354-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:00:49 np0005548731 nova_compute[232433]: 2025-12-06 08:00:49.703 232437 DEBUG nova.compute.manager [req-b40228f9-b768-4c90-ba75-d51212f1b78b req-b7946630-8ccf-45fd-8b7c-3c4566a9b17c 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 27f553d7-b010-40ef-b0cb-42ff0c466354] No waiting events found dispatching network-vif-plugged-2b54b935-6ec8-4a68-a404-6be608d5b405 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 03:00:49 np0005548731 nova_compute[232433]: 2025-12-06 08:00:49.704 232437 WARNING nova.compute.manager [req-b40228f9-b768-4c90-ba75-d51212f1b78b req-b7946630-8ccf-45fd-8b7c-3c4566a9b17c 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 27f553d7-b010-40ef-b0cb-42ff0c466354] Received unexpected event network-vif-plugged-2b54b935-6ec8-4a68-a404-6be608d5b405 for instance with vm_state active and task_state deleting.#033[00m
Dec  6 03:00:50 np0005548731 nova_compute[232433]: 2025-12-06 08:00:50.033 232437 DEBUG oslo_concurrency.lockutils [None req-697eff34-ef89-41cb-a7d8-a2d8bfc3f3bc 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:00:50 np0005548731 nova_compute[232433]: 2025-12-06 08:00:50.033 232437 DEBUG oslo_concurrency.lockutils [None req-697eff34-ef89-41cb-a7d8-a2d8bfc3f3bc 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:00:50 np0005548731 nova_compute[232433]: 2025-12-06 08:00:50.250 232437 DEBUG oslo_concurrency.processutils [None req-697eff34-ef89-41cb-a7d8-a2d8bfc3f3bc 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 03:00:50 np0005548731 nova_compute[232433]: 2025-12-06 08:00:50.427 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:00:50 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 03:00:50 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2562147690' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 03:00:50 np0005548731 nova_compute[232433]: 2025-12-06 08:00:50.731 232437 DEBUG oslo_concurrency.processutils [None req-697eff34-ef89-41cb-a7d8-a2d8bfc3f3bc 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.480s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 03:00:50 np0005548731 nova_compute[232433]: 2025-12-06 08:00:50.739 232437 DEBUG nova.compute.provider_tree [None req-697eff34-ef89-41cb-a7d8-a2d8bfc3f3bc 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] Inventory has not changed in ProviderTree for provider: 6d00757a-082f-486d-ae84-869a2ba2e6e7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  6 03:00:50 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:00:50 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:00:50 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:00:50.796 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:00:50 np0005548731 nova_compute[232433]: 2025-12-06 08:00:50.823 232437 DEBUG nova.scheduler.client.report [None req-697eff34-ef89-41cb-a7d8-a2d8bfc3f3bc 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] Inventory has not changed for provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  6 03:00:50 np0005548731 nova_compute[232433]: 2025-12-06 08:00:50.848 232437 DEBUG oslo_concurrency.lockutils [None req-697eff34-ef89-41cb-a7d8-a2d8bfc3f3bc 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.815s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:00:50 np0005548731 nova_compute[232433]: 2025-12-06 08:00:50.959 232437 INFO nova.scheduler.client.report [None req-697eff34-ef89-41cb-a7d8-a2d8bfc3f3bc 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] Deleted allocations for instance 27f553d7-b010-40ef-b0cb-42ff0c466354#033[00m
Dec  6 03:00:51 np0005548731 nova_compute[232433]: 2025-12-06 08:00:51.059 232437 DEBUG oslo_concurrency.lockutils [None req-697eff34-ef89-41cb-a7d8-a2d8bfc3f3bc 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] Lock "27f553d7-b010-40ef-b0cb-42ff0c466354" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.854s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:00:51 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:00:51 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:00:51 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:00:51.495 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:00:52 np0005548731 nova_compute[232433]: 2025-12-06 08:00:52.476 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:00:52 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:00:52 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:00:52 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:00:52.799 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:00:53 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:00:53 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:00:53 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:00:53.496 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:00:54 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e410 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:00:54 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:00:54 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:00:54 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:00:54.802 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:00:55 np0005548731 nova_compute[232433]: 2025-12-06 08:00:55.456 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:00:55 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:00:55 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:00:55 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:00:55.499 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:00:56 np0005548731 nova_compute[232433]: 2025-12-06 08:00:56.124 232437 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765008041.123796, d1a8ef9c-0ce7-4841-9523-7f11435a1884 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 03:00:56 np0005548731 nova_compute[232433]: 2025-12-06 08:00:56.125 232437 INFO nova.compute.manager [-] [instance: d1a8ef9c-0ce7-4841-9523-7f11435a1884] VM Stopped (Lifecycle Event)#033[00m
Dec  6 03:00:56 np0005548731 nova_compute[232433]: 2025-12-06 08:00:56.433 232437 DEBUG nova.compute.manager [None req-3bb67a95-f31c-4ffe-8231-0293ad7050ff - - - - - -] [instance: d1a8ef9c-0ce7-4841-9523-7f11435a1884] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 03:00:56 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:00:56 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:00:56 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:00:56.805 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:00:57 np0005548731 nova_compute[232433]: 2025-12-06 08:00:57.477 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:00:57 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:00:57 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:00:57 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:00:57.501 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:00:58 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:00:58 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:00:58 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:00:58.809 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:00:59 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e410 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:00:59 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:00:59 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:00:59 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:00:59.503 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:01:00 np0005548731 nova_compute[232433]: 2025-12-06 08:01:00.458 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:01:00 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:01:00 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:01:00 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:01:00.811 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:01:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:01:00.906 143965 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:01:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:01:00.907 143965 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:01:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:01:00.907 143965 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:01:01 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:01:01 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:01:01 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:01:01.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:01:02 np0005548731 nova_compute[232433]: 2025-12-06 08:01:02.442 232437 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765008047.4417236, 27f553d7-b010-40ef-b0cb-42ff0c466354 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 03:01:02 np0005548731 nova_compute[232433]: 2025-12-06 08:01:02.443 232437 INFO nova.compute.manager [-] [instance: 27f553d7-b010-40ef-b0cb-42ff0c466354] VM Stopped (Lifecycle Event)#033[00m
Dec  6 03:01:02 np0005548731 nova_compute[232433]: 2025-12-06 08:01:02.480 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:01:02 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:01:02 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 03:01:02 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:01:02.813 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 03:01:03 np0005548731 nova_compute[232433]: 2025-12-06 08:01:03.169 232437 DEBUG nova.compute.manager [None req-f35311c8-de4d-4da1-8c64-f0466fd76ccb - - - - - -] [instance: 27f553d7-b010-40ef-b0cb-42ff0c466354] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 03:01:03 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:01:03 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:01:03 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:01:03.509 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:01:04 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e410 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:01:04 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:01:04 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 03:01:04 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:01:04.816 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 03:01:05 np0005548731 nova_compute[232433]: 2025-12-06 08:01:05.460 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:01:05 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:01:05 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:01:05 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:01:05.511 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:01:06 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:01:06 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 03:01:06 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:01:06.819 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 03:01:07 np0005548731 nova_compute[232433]: 2025-12-06 08:01:07.484 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:01:07 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:01:07 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 03:01:07 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:01:07.514 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 03:01:08 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:01:08 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:01:08 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:01:08.822 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:01:08 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  6 03:01:08 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2492647070' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  6 03:01:08 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  6 03:01:08 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2492647070' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  6 03:01:09 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e410 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:01:09 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:01:09 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 03:01:09 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:01:09.516 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 03:01:10 np0005548731 nova_compute[232433]: 2025-12-06 08:01:10.462 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:01:10 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:01:10 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:01:10 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:01:10.825 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:01:10 np0005548731 podman[320526]: 2025-12-06 08:01:10.901695833 +0000 UTC m=+0.050907906 container health_status 26c2ce9bdb83c6db5ccad7f5b2a7cb4e9354ab0235ad2f942ea42c8feda2142b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  6 03:01:10 np0005548731 podman[320528]: 2025-12-06 08:01:10.906513051 +0000 UTC m=+0.056622895 container health_status ae4fd08cdec31d6ae2a6b91ffff7deb6ca5a8375cb52ee4b685cc6f25796824e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS)
Dec  6 03:01:10 np0005548731 podman[320527]: 2025-12-06 08:01:10.982459077 +0000 UTC m=+0.133675769 container health_status a118c3becf56cfbcae208065771ebf10a65162bb7b96389971293e534823fb78 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  6 03:01:11 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:01:11 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 03:01:11 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:01:11.518 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 03:01:12 np0005548731 nova_compute[232433]: 2025-12-06 08:01:12.486 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:01:12 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:01:12 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 03:01:12 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:01:12.829 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 03:01:13 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:01:13 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:01:13 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:01:13.522 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:01:14 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e410 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:01:14 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:01:14 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:01:14 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:01:14.832 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:01:15 np0005548731 nova_compute[232433]: 2025-12-06 08:01:15.514 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:01:15 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:01:15 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:01:15 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:01:15.524 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:01:16 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:01:16 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:01:16 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:01:16.835 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:01:17 np0005548731 nova_compute[232433]: 2025-12-06 08:01:17.489 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:01:17 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:01:17 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:01:17 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:01:17.527 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:01:18 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:01:18 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:01:18 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:01:18.838 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:01:19 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e410 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:01:19 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:01:19 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:01:19 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:01:19.530 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:01:20 np0005548731 nova_compute[232433]: 2025-12-06 08:01:20.517 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:01:20 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:01:20 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:01:20 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:01:20.841 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:01:21 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:01:21 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:01:21 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:01:21.533 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:01:22 np0005548731 nova_compute[232433]: 2025-12-06 08:01:22.495 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:01:22 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:01:22 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:01:22 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:01:22.844 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:01:23 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:01:23 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:01:23 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:01:23.535 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:01:24 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e410 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:01:24 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:01:24 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:01:24 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:01:24.847 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:01:25 np0005548731 nova_compute[232433]: 2025-12-06 08:01:25.519 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:01:25 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:01:25 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:01:25 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:01:25.537 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:01:26 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:01:26 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:01:26 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:01:26.850 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:01:27 np0005548731 nova_compute[232433]: 2025-12-06 08:01:27.498 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:01:27 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:01:27 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:01:27 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:01:27.539 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:01:28 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:01:28 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:01:28 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:01:28.853 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:01:29 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e410 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:01:29 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:01:29 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:01:29 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:01:29.541 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:01:30 np0005548731 nova_compute[232433]: 2025-12-06 08:01:30.521 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:01:30 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:01:30 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:01:30 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:01:30.856 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:01:31 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:01:31 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:01:31 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:01:31.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:01:32 np0005548731 nova_compute[232433]: 2025-12-06 08:01:32.501 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:01:32 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:01:32 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:01:32 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:01:32.859 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:01:33 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:01:33 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:01:33 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:01:33.545 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:01:34 np0005548731 nova_compute[232433]: 2025-12-06 08:01:34.105 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:01:34 np0005548731 nova_compute[232433]: 2025-12-06 08:01:34.105 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  6 03:01:34 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e410 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:01:34 np0005548731 nova_compute[232433]: 2025-12-06 08:01:34.731 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Dec  6 03:01:34 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:01:34 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:01:34 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:01:34.862 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:01:35 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:01:35 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:01:35 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:01:35.548 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:01:35 np0005548731 nova_compute[232433]: 2025-12-06 08:01:35.558 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:01:36 np0005548731 nova_compute[232433]: 2025-12-06 08:01:36.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:01:36 np0005548731 nova_compute[232433]: 2025-12-06 08:01:36.104 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  6 03:01:36 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:01:36 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:01:36 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:01:36.864 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:01:37 np0005548731 nova_compute[232433]: 2025-12-06 08:01:37.099 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:01:37 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:01:37.113 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=78, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:ec:b3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '32:72:e7:89:e0:7d'}, ipsec=False) old=SB_Global(nb_cfg=77) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 03:01:37 np0005548731 nova_compute[232433]: 2025-12-06 08:01:37.114 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:01:37 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:01:37.114 143965 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec  6 03:01:37 np0005548731 nova_compute[232433]: 2025-12-06 08:01:37.503 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:01:37 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:01:37 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:01:37 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:01:37.549 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:01:38 np0005548731 nova_compute[232433]: 2025-12-06 08:01:38.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:01:38 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:01:38.117 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=9f96b960-b4f2-40bd-ae99-08121f5e8b78, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '78'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 03:01:38 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:01:38 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:01:38 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:01:38.866 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:01:39 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e410 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:01:39 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:01:39 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:01:39 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:01:39.551 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:01:40 np0005548731 nova_compute[232433]: 2025-12-06 08:01:40.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:01:40 np0005548731 nova_compute[232433]: 2025-12-06 08:01:40.559 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:01:40 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:01:40 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:01:40 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:01:40.869 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:01:41 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:01:41 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:01:41 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:01:41.554 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:01:41 np0005548731 podman[320652]: 2025-12-06 08:01:41.895010334 +0000 UTC m=+0.052931465 container health_status 26c2ce9bdb83c6db5ccad7f5b2a7cb4e9354ab0235ad2f942ea42c8feda2142b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent)
Dec  6 03:01:41 np0005548731 podman[320654]: 2025-12-06 08:01:41.89688782 +0000 UTC m=+0.050927246 container health_status ae4fd08cdec31d6ae2a6b91ffff7deb6ca5a8375cb52ee4b685cc6f25796824e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, org.label-schema.build-date=20251125, managed_by=edpm_ansible, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Dec  6 03:01:41 np0005548731 podman[320653]: 2025-12-06 08:01:41.916750155 +0000 UTC m=+0.072595916 container health_status a118c3becf56cfbcae208065771ebf10a65162bb7b96389971293e534823fb78 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Dec  6 03:01:42 np0005548731 nova_compute[232433]: 2025-12-06 08:01:42.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:01:42 np0005548731 nova_compute[232433]: 2025-12-06 08:01:42.170 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:01:42 np0005548731 nova_compute[232433]: 2025-12-06 08:01:42.170 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:01:42 np0005548731 nova_compute[232433]: 2025-12-06 08:01:42.170 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:01:42 np0005548731 nova_compute[232433]: 2025-12-06 08:01:42.171 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  6 03:01:42 np0005548731 nova_compute[232433]: 2025-12-06 08:01:42.171 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 03:01:42 np0005548731 nova_compute[232433]: 2025-12-06 08:01:42.505 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:01:42 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 03:01:42 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1929047662' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 03:01:42 np0005548731 nova_compute[232433]: 2025-12-06 08:01:42.599 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.428s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 03:01:42 np0005548731 nova_compute[232433]: 2025-12-06 08:01:42.758 232437 WARNING nova.virt.libvirt.driver [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  6 03:01:42 np0005548731 nova_compute[232433]: 2025-12-06 08:01:42.759 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4228MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  6 03:01:42 np0005548731 nova_compute[232433]: 2025-12-06 08:01:42.760 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:01:42 np0005548731 nova_compute[232433]: 2025-12-06 08:01:42.760 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:01:42 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:01:42 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:01:42 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:01:42.872 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:01:43 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:01:43 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:01:43 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:01:43.556 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:01:43 np0005548731 nova_compute[232433]: 2025-12-06 08:01:43.698 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  6 03:01:43 np0005548731 nova_compute[232433]: 2025-12-06 08:01:43.699 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  6 03:01:43 np0005548731 nova_compute[232433]: 2025-12-06 08:01:43.755 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 03:01:44 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 03:01:44 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3042858999' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 03:01:44 np0005548731 nova_compute[232433]: 2025-12-06 08:01:44.154 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.399s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 03:01:44 np0005548731 nova_compute[232433]: 2025-12-06 08:01:44.159 232437 DEBUG nova.compute.provider_tree [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Inventory has not changed in ProviderTree for provider: 6d00757a-082f-486d-ae84-869a2ba2e6e7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  6 03:01:44 np0005548731 nova_compute[232433]: 2025-12-06 08:01:44.176 232437 DEBUG nova.scheduler.client.report [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Inventory has not changed for provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  6 03:01:44 np0005548731 nova_compute[232433]: 2025-12-06 08:01:44.203 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  6 03:01:44 np0005548731 nova_compute[232433]: 2025-12-06 08:01:44.203 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.443s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:01:44 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e410 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:01:44 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:01:44 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:01:44 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:01:44.874 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:01:45 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:01:45 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:01:45 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:01:45.558 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:01:45 np0005548731 nova_compute[232433]: 2025-12-06 08:01:45.606 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:01:46 np0005548731 nova_compute[232433]: 2025-12-06 08:01:46.204 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:01:46 np0005548731 nova_compute[232433]: 2025-12-06 08:01:46.204 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:01:46 np0005548731 nova_compute[232433]: 2025-12-06 08:01:46.205 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:01:46 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:01:46 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:01:46 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:01:46.877 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:01:47 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  6 03:01:47 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 03:01:47 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  6 03:01:47 np0005548731 nova_compute[232433]: 2025-12-06 08:01:47.509 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:01:47 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:01:47 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:01:47 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:01:47.560 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:01:48 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:01:48 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:01:48 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:01:48.880 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:01:49 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e410 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:01:49 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:01:49 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:01:49 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:01:49.563 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:01:50 np0005548731 nova_compute[232433]: 2025-12-06 08:01:50.608 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:01:50 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:01:50 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:01:50 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:01:50.883 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:01:51 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:01:51 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:01:51 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:01:51.565 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:01:52 np0005548731 nova_compute[232433]: 2025-12-06 08:01:52.513 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:01:52 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:01:52 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:01:52 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:01:52.886 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:01:53 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:01:53 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:01:53 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:01:53.567 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:01:54 np0005548731 nova_compute[232433]: 2025-12-06 08:01:54.100 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:01:54 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e410 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:01:54 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:01:54 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:01:54 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:01:54.889 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:01:55 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 03:01:55 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 03:01:55 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:01:55 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:01:55 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:01:55.571 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:01:55 np0005548731 nova_compute[232433]: 2025-12-06 08:01:55.611 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:01:56 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:01:56 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:01:56 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:01:56.893 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:01:57 np0005548731 nova_compute[232433]: 2025-12-06 08:01:57.516 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:01:57 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:01:57 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:01:57 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:01:57.573 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:01:58 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:01:58 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:01:58 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:01:58.896 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:01:59 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e410 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:01:59 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:01:59 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:01:59 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:01:59.575 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:02:00 np0005548731 nova_compute[232433]: 2025-12-06 08:02:00.612 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:02:00 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:02:00 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:02:00 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:02:00.899 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:02:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:02:00.907 143965 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:02:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:02:00.907 143965 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:02:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:02:00.907 143965 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:02:01 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:02:01 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:02:01 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:02:01.578 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:02:02 np0005548731 nova_compute[232433]: 2025-12-06 08:02:02.519 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:02:02 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:02:02 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:02:02 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:02:02.902 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:02:03 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:02:03 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:02:03 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:02:03.580 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:02:04 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e410 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:02:04 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:02:04 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:02:04 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:02:04.905 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:02:05 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:02:05 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:02:05 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:02:05.582 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:02:05 np0005548731 nova_compute[232433]: 2025-12-06 08:02:05.613 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:02:06 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:02:06 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:02:06 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:02:06.908 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:02:07 np0005548731 nova_compute[232433]: 2025-12-06 08:02:07.521 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:02:07 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:02:07 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:02:07 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:02:07.584 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:02:08 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:02:08 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:02:08 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:02:08.911 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:02:09 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  6 03:02:09 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2644919661' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  6 03:02:09 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  6 03:02:09 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2644919661' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  6 03:02:09 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e410 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:02:09 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:02:09 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:02:09 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:02:09.586 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:02:10 np0005548731 nova_compute[232433]: 2025-12-06 08:02:10.614 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:02:10 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:02:10 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:02:10 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:02:10.914 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:02:11 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:02:11 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:02:11 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:02:11.588 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:02:12 np0005548731 ovn_controller[133927]: 2025-12-06T08:02:12Z|00928|memory_trim|INFO|Detected inactivity (last active 30003 ms ago): trimming memory
Dec  6 03:02:12 np0005548731 nova_compute[232433]: 2025-12-06 08:02:12.525 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:02:12 np0005548731 podman[321057]: 2025-12-06 08:02:12.898658468 +0000 UTC m=+0.055674892 container health_status ae4fd08cdec31d6ae2a6b91ffff7deb6ca5a8375cb52ee4b685cc6f25796824e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Dec  6 03:02:12 np0005548731 podman[321055]: 2025-12-06 08:02:12.903407234 +0000 UTC m=+0.063096644 container health_status 26c2ce9bdb83c6db5ccad7f5b2a7cb4e9354ab0235ad2f942ea42c8feda2142b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Dec  6 03:02:12 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:02:12 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:02:12 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:02:12.917 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:02:12 np0005548731 podman[321056]: 2025-12-06 08:02:12.92286261 +0000 UTC m=+0.082138280 container health_status a118c3becf56cfbcae208065771ebf10a65162bb7b96389971293e534823fb78 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3)
Dec  6 03:02:13 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:02:13 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:02:13 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:02:13.590 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:02:14 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e410 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:02:14 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:02:14 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:02:14 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:02:14.920 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:02:15 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:02:15.036 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=79, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:ec:b3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '32:72:e7:89:e0:7d'}, ipsec=False) old=SB_Global(nb_cfg=78) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 03:02:15 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:02:15.037 143965 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec  6 03:02:15 np0005548731 nova_compute[232433]: 2025-12-06 08:02:15.037 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:02:15 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:02:15 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.002000048s ======
Dec  6 03:02:15 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:02:15.592 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000048s
Dec  6 03:02:15 np0005548731 nova_compute[232433]: 2025-12-06 08:02:15.616 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:02:15 np0005548731 nova_compute[232433]: 2025-12-06 08:02:15.857 232437 DEBUG oslo_concurrency.lockutils [None req-c4fc3eac-9369-4851-a425-c48a6166ac8e d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Acquiring lock "5631bb1a-c5dd-480d-ad0d-b8519f7d2858" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:02:15 np0005548731 nova_compute[232433]: 2025-12-06 08:02:15.858 232437 DEBUG oslo_concurrency.lockutils [None req-c4fc3eac-9369-4851-a425-c48a6166ac8e d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Lock "5631bb1a-c5dd-480d-ad0d-b8519f7d2858" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:02:15 np0005548731 nova_compute[232433]: 2025-12-06 08:02:15.890 232437 DEBUG nova.compute.manager [None req-c4fc3eac-9369-4851-a425-c48a6166ac8e d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 5631bb1a-c5dd-480d-ad0d-b8519f7d2858] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Dec  6 03:02:16 np0005548731 nova_compute[232433]: 2025-12-06 08:02:16.000 232437 DEBUG oslo_concurrency.lockutils [None req-c4fc3eac-9369-4851-a425-c48a6166ac8e d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:02:16 np0005548731 nova_compute[232433]: 2025-12-06 08:02:16.001 232437 DEBUG oslo_concurrency.lockutils [None req-c4fc3eac-9369-4851-a425-c48a6166ac8e d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:02:16 np0005548731 nova_compute[232433]: 2025-12-06 08:02:16.009 232437 DEBUG nova.virt.hardware [None req-c4fc3eac-9369-4851-a425-c48a6166ac8e d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Dec  6 03:02:16 np0005548731 nova_compute[232433]: 2025-12-06 08:02:16.010 232437 INFO nova.compute.claims [None req-c4fc3eac-9369-4851-a425-c48a6166ac8e d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 5631bb1a-c5dd-480d-ad0d-b8519f7d2858] Claim successful on node compute-2.ctlplane.example.com#033[00m
Dec  6 03:02:16 np0005548731 nova_compute[232433]: 2025-12-06 08:02:16.155 232437 DEBUG oslo_concurrency.processutils [None req-c4fc3eac-9369-4851-a425-c48a6166ac8e d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 03:02:16 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 03:02:16 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1528343113' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 03:02:16 np0005548731 nova_compute[232433]: 2025-12-06 08:02:16.571 232437 DEBUG oslo_concurrency.processutils [None req-c4fc3eac-9369-4851-a425-c48a6166ac8e d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.416s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 03:02:16 np0005548731 nova_compute[232433]: 2025-12-06 08:02:16.576 232437 DEBUG nova.compute.provider_tree [None req-c4fc3eac-9369-4851-a425-c48a6166ac8e d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Inventory has not changed in ProviderTree for provider: 6d00757a-082f-486d-ae84-869a2ba2e6e7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  6 03:02:16 np0005548731 nova_compute[232433]: 2025-12-06 08:02:16.601 232437 DEBUG nova.scheduler.client.report [None req-c4fc3eac-9369-4851-a425-c48a6166ac8e d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Inventory has not changed for provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  6 03:02:16 np0005548731 nova_compute[232433]: 2025-12-06 08:02:16.629 232437 DEBUG oslo_concurrency.lockutils [None req-c4fc3eac-9369-4851-a425-c48a6166ac8e d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.628s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:02:16 np0005548731 nova_compute[232433]: 2025-12-06 08:02:16.629 232437 DEBUG nova.compute.manager [None req-c4fc3eac-9369-4851-a425-c48a6166ac8e d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 5631bb1a-c5dd-480d-ad0d-b8519f7d2858] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Dec  6 03:02:16 np0005548731 nova_compute[232433]: 2025-12-06 08:02:16.699 232437 DEBUG nova.compute.manager [None req-c4fc3eac-9369-4851-a425-c48a6166ac8e d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 5631bb1a-c5dd-480d-ad0d-b8519f7d2858] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Dec  6 03:02:16 np0005548731 nova_compute[232433]: 2025-12-06 08:02:16.700 232437 DEBUG nova.network.neutron [None req-c4fc3eac-9369-4851-a425-c48a6166ac8e d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 5631bb1a-c5dd-480d-ad0d-b8519f7d2858] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Dec  6 03:02:16 np0005548731 nova_compute[232433]: 2025-12-06 08:02:16.896 232437 INFO nova.virt.libvirt.driver [None req-c4fc3eac-9369-4851-a425-c48a6166ac8e d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 5631bb1a-c5dd-480d-ad0d-b8519f7d2858] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Dec  6 03:02:16 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:02:16 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:02:16 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:02:16.923 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:02:16 np0005548731 nova_compute[232433]: 2025-12-06 08:02:16.924 232437 DEBUG nova.compute.manager [None req-c4fc3eac-9369-4851-a425-c48a6166ac8e d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 5631bb1a-c5dd-480d-ad0d-b8519f7d2858] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Dec  6 03:02:17 np0005548731 nova_compute[232433]: 2025-12-06 08:02:17.041 232437 DEBUG nova.compute.manager [None req-c4fc3eac-9369-4851-a425-c48a6166ac8e d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 5631bb1a-c5dd-480d-ad0d-b8519f7d2858] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Dec  6 03:02:17 np0005548731 nova_compute[232433]: 2025-12-06 08:02:17.043 232437 DEBUG nova.virt.libvirt.driver [None req-c4fc3eac-9369-4851-a425-c48a6166ac8e d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 5631bb1a-c5dd-480d-ad0d-b8519f7d2858] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Dec  6 03:02:17 np0005548731 nova_compute[232433]: 2025-12-06 08:02:17.043 232437 INFO nova.virt.libvirt.driver [None req-c4fc3eac-9369-4851-a425-c48a6166ac8e d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 5631bb1a-c5dd-480d-ad0d-b8519f7d2858] Creating image(s)#033[00m
Dec  6 03:02:17 np0005548731 nova_compute[232433]: 2025-12-06 08:02:17.069 232437 DEBUG nova.storage.rbd_utils [None req-c4fc3eac-9369-4851-a425-c48a6166ac8e d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] rbd image 5631bb1a-c5dd-480d-ad0d-b8519f7d2858_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 03:02:17 np0005548731 nova_compute[232433]: 2025-12-06 08:02:17.094 232437 DEBUG nova.storage.rbd_utils [None req-c4fc3eac-9369-4851-a425-c48a6166ac8e d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] rbd image 5631bb1a-c5dd-480d-ad0d-b8519f7d2858_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 03:02:17 np0005548731 nova_compute[232433]: 2025-12-06 08:02:17.120 232437 DEBUG nova.storage.rbd_utils [None req-c4fc3eac-9369-4851-a425-c48a6166ac8e d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] rbd image 5631bb1a-c5dd-480d-ad0d-b8519f7d2858_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 03:02:17 np0005548731 nova_compute[232433]: 2025-12-06 08:02:17.123 232437 DEBUG oslo_concurrency.processutils [None req-c4fc3eac-9369-4851-a425-c48a6166ac8e d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 03:02:17 np0005548731 nova_compute[232433]: 2025-12-06 08:02:17.195 232437 DEBUG oslo_concurrency.processutils [None req-c4fc3eac-9369-4851-a425-c48a6166ac8e d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 03:02:17 np0005548731 nova_compute[232433]: 2025-12-06 08:02:17.196 232437 DEBUG oslo_concurrency.lockutils [None req-c4fc3eac-9369-4851-a425-c48a6166ac8e d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Acquiring lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:02:17 np0005548731 nova_compute[232433]: 2025-12-06 08:02:17.196 232437 DEBUG oslo_concurrency.lockutils [None req-c4fc3eac-9369-4851-a425-c48a6166ac8e d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:02:17 np0005548731 nova_compute[232433]: 2025-12-06 08:02:17.197 232437 DEBUG oslo_concurrency.lockutils [None req-c4fc3eac-9369-4851-a425-c48a6166ac8e d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:02:17 np0005548731 nova_compute[232433]: 2025-12-06 08:02:17.223 232437 DEBUG nova.storage.rbd_utils [None req-c4fc3eac-9369-4851-a425-c48a6166ac8e d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] rbd image 5631bb1a-c5dd-480d-ad0d-b8519f7d2858_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 03:02:17 np0005548731 nova_compute[232433]: 2025-12-06 08:02:17.226 232437 DEBUG oslo_concurrency.processutils [None req-c4fc3eac-9369-4851-a425-c48a6166ac8e d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef 5631bb1a-c5dd-480d-ad0d-b8519f7d2858_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 03:02:17 np0005548731 nova_compute[232433]: 2025-12-06 08:02:17.372 232437 DEBUG nova.policy [None req-c4fc3eac-9369-4851-a425-c48a6166ac8e d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'd5359905348247d0b9b5b95982e890bb', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'f4735a799c84437b9dd4ea8778ad2fbb', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Dec  6 03:02:17 np0005548731 nova_compute[232433]: 2025-12-06 08:02:17.528 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:02:17 np0005548731 nova_compute[232433]: 2025-12-06 08:02:17.529 232437 DEBUG oslo_concurrency.processutils [None req-c4fc3eac-9369-4851-a425-c48a6166ac8e d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef 5631bb1a-c5dd-480d-ad0d-b8519f7d2858_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.303s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 03:02:17 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:02:17 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:02:17 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:02:17.596 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:02:17 np0005548731 nova_compute[232433]: 2025-12-06 08:02:17.600 232437 DEBUG nova.storage.rbd_utils [None req-c4fc3eac-9369-4851-a425-c48a6166ac8e d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] resizing rbd image 5631bb1a-c5dd-480d-ad0d-b8519f7d2858_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Dec  6 03:02:17 np0005548731 nova_compute[232433]: 2025-12-06 08:02:17.701 232437 DEBUG nova.objects.instance [None req-c4fc3eac-9369-4851-a425-c48a6166ac8e d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Lazy-loading 'migration_context' on Instance uuid 5631bb1a-c5dd-480d-ad0d-b8519f7d2858 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 03:02:17 np0005548731 nova_compute[232433]: 2025-12-06 08:02:17.716 232437 DEBUG nova.virt.libvirt.driver [None req-c4fc3eac-9369-4851-a425-c48a6166ac8e d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 5631bb1a-c5dd-480d-ad0d-b8519f7d2858] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Dec  6 03:02:17 np0005548731 nova_compute[232433]: 2025-12-06 08:02:17.717 232437 DEBUG nova.virt.libvirt.driver [None req-c4fc3eac-9369-4851-a425-c48a6166ac8e d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 5631bb1a-c5dd-480d-ad0d-b8519f7d2858] Ensure instance console log exists: /var/lib/nova/instances/5631bb1a-c5dd-480d-ad0d-b8519f7d2858/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec  6 03:02:17 np0005548731 nova_compute[232433]: 2025-12-06 08:02:17.717 232437 DEBUG oslo_concurrency.lockutils [None req-c4fc3eac-9369-4851-a425-c48a6166ac8e d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:02:17 np0005548731 nova_compute[232433]: 2025-12-06 08:02:17.718 232437 DEBUG oslo_concurrency.lockutils [None req-c4fc3eac-9369-4851-a425-c48a6166ac8e d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:02:17 np0005548731 nova_compute[232433]: 2025-12-06 08:02:17.718 232437 DEBUG oslo_concurrency.lockutils [None req-c4fc3eac-9369-4851-a425-c48a6166ac8e d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:02:18 np0005548731 nova_compute[232433]: 2025-12-06 08:02:18.467 232437 DEBUG nova.network.neutron [None req-c4fc3eac-9369-4851-a425-c48a6166ac8e d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 5631bb1a-c5dd-480d-ad0d-b8519f7d2858] Successfully created port: 8f7a7129-4a69-4a4c-8205-c5719b00005b _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Dec  6 03:02:18 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:02:18 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:02:18 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:02:18.925 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:02:19 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e410 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:02:19 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:02:19 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:02:19 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:02:19.598 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:02:20 np0005548731 nova_compute[232433]: 2025-12-06 08:02:20.081 232437 DEBUG nova.network.neutron [None req-c4fc3eac-9369-4851-a425-c48a6166ac8e d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 5631bb1a-c5dd-480d-ad0d-b8519f7d2858] Successfully updated port: 8f7a7129-4a69-4a4c-8205-c5719b00005b _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Dec  6 03:02:20 np0005548731 nova_compute[232433]: 2025-12-06 08:02:20.104 232437 DEBUG oslo_concurrency.lockutils [None req-c4fc3eac-9369-4851-a425-c48a6166ac8e d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Acquiring lock "refresh_cache-5631bb1a-c5dd-480d-ad0d-b8519f7d2858" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 03:02:20 np0005548731 nova_compute[232433]: 2025-12-06 08:02:20.104 232437 DEBUG oslo_concurrency.lockutils [None req-c4fc3eac-9369-4851-a425-c48a6166ac8e d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Acquired lock "refresh_cache-5631bb1a-c5dd-480d-ad0d-b8519f7d2858" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 03:02:20 np0005548731 nova_compute[232433]: 2025-12-06 08:02:20.104 232437 DEBUG nova.network.neutron [None req-c4fc3eac-9369-4851-a425-c48a6166ac8e d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 5631bb1a-c5dd-480d-ad0d-b8519f7d2858] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec  6 03:02:20 np0005548731 nova_compute[232433]: 2025-12-06 08:02:20.233 232437 DEBUG nova.compute.manager [req-a373cc2f-5006-4b4b-bc70-3989fbad5453 req-0fe19429-a817-4209-95af-0d29b2230f5e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 5631bb1a-c5dd-480d-ad0d-b8519f7d2858] Received event network-changed-8f7a7129-4a69-4a4c-8205-c5719b00005b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 03:02:20 np0005548731 nova_compute[232433]: 2025-12-06 08:02:20.233 232437 DEBUG nova.compute.manager [req-a373cc2f-5006-4b4b-bc70-3989fbad5453 req-0fe19429-a817-4209-95af-0d29b2230f5e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 5631bb1a-c5dd-480d-ad0d-b8519f7d2858] Refreshing instance network info cache due to event network-changed-8f7a7129-4a69-4a4c-8205-c5719b00005b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  6 03:02:20 np0005548731 nova_compute[232433]: 2025-12-06 08:02:20.233 232437 DEBUG oslo_concurrency.lockutils [req-a373cc2f-5006-4b4b-bc70-3989fbad5453 req-0fe19429-a817-4209-95af-0d29b2230f5e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-5631bb1a-c5dd-480d-ad0d-b8519f7d2858" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 03:02:20 np0005548731 nova_compute[232433]: 2025-12-06 08:02:20.318 232437 DEBUG nova.network.neutron [None req-c4fc3eac-9369-4851-a425-c48a6166ac8e d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 5631bb1a-c5dd-480d-ad0d-b8519f7d2858] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec  6 03:02:20 np0005548731 nova_compute[232433]: 2025-12-06 08:02:20.617 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:02:20 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:02:20 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:02:20 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:02:20.928 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:02:21 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:02:21 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:02:21 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:02:21.599 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:02:22 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:02:22.039 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=9f96b960-b4f2-40bd-ae99-08121f5e8b78, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '79'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 03:02:22 np0005548731 nova_compute[232433]: 2025-12-06 08:02:22.531 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:02:22 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:02:22 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:02:22 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:02:22.931 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:02:23 np0005548731 nova_compute[232433]: 2025-12-06 08:02:23.288 232437 DEBUG nova.network.neutron [None req-c4fc3eac-9369-4851-a425-c48a6166ac8e d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 5631bb1a-c5dd-480d-ad0d-b8519f7d2858] Updating instance_info_cache with network_info: [{"id": "8f7a7129-4a69-4a4c-8205-c5719b00005b", "address": "fa:16:3e:2d:94:e7", "network": {"id": "6b5f9e0f-ed86-4284-9c14-39494399dc0e", "bridge": "br-int", "label": "tempest-network-smoke--576336473", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f4735a799c84437b9dd4ea8778ad2fbb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8f7a7129-4a", "ovs_interfaceid": "8f7a7129-4a69-4a4c-8205-c5719b00005b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 03:02:23 np0005548731 nova_compute[232433]: 2025-12-06 08:02:23.319 232437 DEBUG oslo_concurrency.lockutils [None req-c4fc3eac-9369-4851-a425-c48a6166ac8e d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Releasing lock "refresh_cache-5631bb1a-c5dd-480d-ad0d-b8519f7d2858" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 03:02:23 np0005548731 nova_compute[232433]: 2025-12-06 08:02:23.319 232437 DEBUG nova.compute.manager [None req-c4fc3eac-9369-4851-a425-c48a6166ac8e d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 5631bb1a-c5dd-480d-ad0d-b8519f7d2858] Instance network_info: |[{"id": "8f7a7129-4a69-4a4c-8205-c5719b00005b", "address": "fa:16:3e:2d:94:e7", "network": {"id": "6b5f9e0f-ed86-4284-9c14-39494399dc0e", "bridge": "br-int", "label": "tempest-network-smoke--576336473", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f4735a799c84437b9dd4ea8778ad2fbb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8f7a7129-4a", "ovs_interfaceid": "8f7a7129-4a69-4a4c-8205-c5719b00005b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Dec  6 03:02:23 np0005548731 nova_compute[232433]: 2025-12-06 08:02:23.319 232437 DEBUG oslo_concurrency.lockutils [req-a373cc2f-5006-4b4b-bc70-3989fbad5453 req-0fe19429-a817-4209-95af-0d29b2230f5e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-5631bb1a-c5dd-480d-ad0d-b8519f7d2858" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 03:02:23 np0005548731 nova_compute[232433]: 2025-12-06 08:02:23.320 232437 DEBUG nova.network.neutron [req-a373cc2f-5006-4b4b-bc70-3989fbad5453 req-0fe19429-a817-4209-95af-0d29b2230f5e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 5631bb1a-c5dd-480d-ad0d-b8519f7d2858] Refreshing network info cache for port 8f7a7129-4a69-4a4c-8205-c5719b00005b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  6 03:02:23 np0005548731 nova_compute[232433]: 2025-12-06 08:02:23.322 232437 DEBUG nova.virt.libvirt.driver [None req-c4fc3eac-9369-4851-a425-c48a6166ac8e d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 5631bb1a-c5dd-480d-ad0d-b8519f7d2858] Start _get_guest_xml network_info=[{"id": "8f7a7129-4a69-4a4c-8205-c5719b00005b", "address": "fa:16:3e:2d:94:e7", "network": {"id": "6b5f9e0f-ed86-4284-9c14-39494399dc0e", "bridge": "br-int", "label": "tempest-network-smoke--576336473", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f4735a799c84437b9dd4ea8778ad2fbb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8f7a7129-4a", "ovs_interfaceid": "8f7a7129-4a69-4a4c-8205-c5719b00005b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'device_type': 'disk', 'encryption_secret_uuid': None, 'disk_bus': 'virtio', 'boot_index': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'encryption_options': None, 'size': 0, 'encrypted': False, 'image_id': '6efab05d-c7cf-4770-a5c3-c806a2739063'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec  6 03:02:23 np0005548731 nova_compute[232433]: 2025-12-06 08:02:23.326 232437 WARNING nova.virt.libvirt.driver [None req-c4fc3eac-9369-4851-a425-c48a6166ac8e d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  6 03:02:23 np0005548731 nova_compute[232433]: 2025-12-06 08:02:23.330 232437 DEBUG nova.virt.libvirt.host [None req-c4fc3eac-9369-4851-a425-c48a6166ac8e d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec  6 03:02:23 np0005548731 nova_compute[232433]: 2025-12-06 08:02:23.331 232437 DEBUG nova.virt.libvirt.host [None req-c4fc3eac-9369-4851-a425-c48a6166ac8e d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec  6 03:02:23 np0005548731 nova_compute[232433]: 2025-12-06 08:02:23.334 232437 DEBUG nova.virt.libvirt.host [None req-c4fc3eac-9369-4851-a425-c48a6166ac8e d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec  6 03:02:23 np0005548731 nova_compute[232433]: 2025-12-06 08:02:23.334 232437 DEBUG nova.virt.libvirt.host [None req-c4fc3eac-9369-4851-a425-c48a6166ac8e d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec  6 03:02:23 np0005548731 nova_compute[232433]: 2025-12-06 08:02:23.335 232437 DEBUG nova.virt.libvirt.driver [None req-c4fc3eac-9369-4851-a425-c48a6166ac8e d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec  6 03:02:23 np0005548731 nova_compute[232433]: 2025-12-06 08:02:23.336 232437 DEBUG nova.virt.hardware [None req-c4fc3eac-9369-4851-a425-c48a6166ac8e d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-06T06:56:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='25848a18-11d9-4f11-80b5-5d005675c76d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec  6 03:02:23 np0005548731 nova_compute[232433]: 2025-12-06 08:02:23.336 232437 DEBUG nova.virt.hardware [None req-c4fc3eac-9369-4851-a425-c48a6166ac8e d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec  6 03:02:23 np0005548731 nova_compute[232433]: 2025-12-06 08:02:23.336 232437 DEBUG nova.virt.hardware [None req-c4fc3eac-9369-4851-a425-c48a6166ac8e d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec  6 03:02:23 np0005548731 nova_compute[232433]: 2025-12-06 08:02:23.337 232437 DEBUG nova.virt.hardware [None req-c4fc3eac-9369-4851-a425-c48a6166ac8e d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec  6 03:02:23 np0005548731 nova_compute[232433]: 2025-12-06 08:02:23.337 232437 DEBUG nova.virt.hardware [None req-c4fc3eac-9369-4851-a425-c48a6166ac8e d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec  6 03:02:23 np0005548731 nova_compute[232433]: 2025-12-06 08:02:23.337 232437 DEBUG nova.virt.hardware [None req-c4fc3eac-9369-4851-a425-c48a6166ac8e d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec  6 03:02:23 np0005548731 nova_compute[232433]: 2025-12-06 08:02:23.337 232437 DEBUG nova.virt.hardware [None req-c4fc3eac-9369-4851-a425-c48a6166ac8e d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec  6 03:02:23 np0005548731 nova_compute[232433]: 2025-12-06 08:02:23.338 232437 DEBUG nova.virt.hardware [None req-c4fc3eac-9369-4851-a425-c48a6166ac8e d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec  6 03:02:23 np0005548731 nova_compute[232433]: 2025-12-06 08:02:23.338 232437 DEBUG nova.virt.hardware [None req-c4fc3eac-9369-4851-a425-c48a6166ac8e d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec  6 03:02:23 np0005548731 nova_compute[232433]: 2025-12-06 08:02:23.338 232437 DEBUG nova.virt.hardware [None req-c4fc3eac-9369-4851-a425-c48a6166ac8e d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec  6 03:02:23 np0005548731 nova_compute[232433]: 2025-12-06 08:02:23.338 232437 DEBUG nova.virt.hardware [None req-c4fc3eac-9369-4851-a425-c48a6166ac8e d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec  6 03:02:23 np0005548731 nova_compute[232433]: 2025-12-06 08:02:23.342 232437 DEBUG oslo_concurrency.processutils [None req-c4fc3eac-9369-4851-a425-c48a6166ac8e d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 03:02:23 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:02:23 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:02:23 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:02:23.601 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:02:23 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  6 03:02:23 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2139978182' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  6 03:02:23 np0005548731 nova_compute[232433]: 2025-12-06 08:02:23.760 232437 DEBUG oslo_concurrency.processutils [None req-c4fc3eac-9369-4851-a425-c48a6166ac8e d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.418s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 03:02:23 np0005548731 nova_compute[232433]: 2025-12-06 08:02:23.785 232437 DEBUG nova.storage.rbd_utils [None req-c4fc3eac-9369-4851-a425-c48a6166ac8e d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] rbd image 5631bb1a-c5dd-480d-ad0d-b8519f7d2858_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 03:02:23 np0005548731 nova_compute[232433]: 2025-12-06 08:02:23.788 232437 DEBUG oslo_concurrency.processutils [None req-c4fc3eac-9369-4851-a425-c48a6166ac8e d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 03:02:24 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  6 03:02:24 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3856105960' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  6 03:02:24 np0005548731 nova_compute[232433]: 2025-12-06 08:02:24.215 232437 DEBUG oslo_concurrency.processutils [None req-c4fc3eac-9369-4851-a425-c48a6166ac8e d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.426s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 03:02:24 np0005548731 nova_compute[232433]: 2025-12-06 08:02:24.217 232437 DEBUG nova.virt.libvirt.vif [None req-c4fc3eac-9369-4851-a425-c48a6166ac8e d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T08:02:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1271350420',display_name='tempest-TestNetworkBasicOps-server-1271350420',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1271350420',id=185,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBE77FYJgDkrxzG8W4+xLXhPOdFEca45Bj8zAeRPk4LZkIHRLpB+9sImdEg1wjzkrSrzVyN00VUwIMIwYuEfIAoq3Nx58veJTCciIHJ/EgBnAVEqFMMrhoURvjqeCMkMzww==',key_name='tempest-TestNetworkBasicOps-70649793',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f4735a799c84437b9dd4ea8778ad2fbb',ramdisk_id='',reservation_id='r-vjv0p4ap',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1435471576',owner_user_name='tempest-TestNetworkBasicOps-1435471576-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T08:02:16Z,user_data=None,user_id='d5359905348247d0b9b5b95982e890bb',uuid=5631bb1a-c5dd-480d-ad0d-b8519f7d2858,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "8f7a7129-4a69-4a4c-8205-c5719b00005b", "address": "fa:16:3e:2d:94:e7", "network": {"id": "6b5f9e0f-ed86-4284-9c14-39494399dc0e", "bridge": "br-int", "label": "tempest-network-smoke--576336473", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "f4735a799c84437b9dd4ea8778ad2fbb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8f7a7129-4a", "ovs_interfaceid": "8f7a7129-4a69-4a4c-8205-c5719b00005b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec  6 03:02:24 np0005548731 nova_compute[232433]: 2025-12-06 08:02:24.217 232437 DEBUG nova.network.os_vif_util [None req-c4fc3eac-9369-4851-a425-c48a6166ac8e d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Converting VIF {"id": "8f7a7129-4a69-4a4c-8205-c5719b00005b", "address": "fa:16:3e:2d:94:e7", "network": {"id": "6b5f9e0f-ed86-4284-9c14-39494399dc0e", "bridge": "br-int", "label": "tempest-network-smoke--576336473", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f4735a799c84437b9dd4ea8778ad2fbb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8f7a7129-4a", "ovs_interfaceid": "8f7a7129-4a69-4a4c-8205-c5719b00005b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 03:02:24 np0005548731 nova_compute[232433]: 2025-12-06 08:02:24.218 232437 DEBUG nova.network.os_vif_util [None req-c4fc3eac-9369-4851-a425-c48a6166ac8e d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2d:94:e7,bridge_name='br-int',has_traffic_filtering=True,id=8f7a7129-4a69-4a4c-8205-c5719b00005b,network=Network(6b5f9e0f-ed86-4284-9c14-39494399dc0e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8f7a7129-4a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 03:02:24 np0005548731 nova_compute[232433]: 2025-12-06 08:02:24.219 232437 DEBUG nova.objects.instance [None req-c4fc3eac-9369-4851-a425-c48a6166ac8e d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Lazy-loading 'pci_devices' on Instance uuid 5631bb1a-c5dd-480d-ad0d-b8519f7d2858 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 03:02:24 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e410 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:02:24 np0005548731 nova_compute[232433]: 2025-12-06 08:02:24.240 232437 DEBUG nova.virt.libvirt.driver [None req-c4fc3eac-9369-4851-a425-c48a6166ac8e d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 5631bb1a-c5dd-480d-ad0d-b8519f7d2858] End _get_guest_xml xml=<domain type="kvm">
Dec  6 03:02:24 np0005548731 nova_compute[232433]:  <uuid>5631bb1a-c5dd-480d-ad0d-b8519f7d2858</uuid>
Dec  6 03:02:24 np0005548731 nova_compute[232433]:  <name>instance-000000b9</name>
Dec  6 03:02:24 np0005548731 nova_compute[232433]:  <memory>131072</memory>
Dec  6 03:02:24 np0005548731 nova_compute[232433]:  <vcpu>1</vcpu>
Dec  6 03:02:24 np0005548731 nova_compute[232433]:  <metadata>
Dec  6 03:02:24 np0005548731 nova_compute[232433]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec  6 03:02:24 np0005548731 nova_compute[232433]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec  6 03:02:24 np0005548731 nova_compute[232433]:      <nova:name>tempest-TestNetworkBasicOps-server-1271350420</nova:name>
Dec  6 03:02:24 np0005548731 nova_compute[232433]:      <nova:creationTime>2025-12-06 08:02:23</nova:creationTime>
Dec  6 03:02:24 np0005548731 nova_compute[232433]:      <nova:flavor name="m1.nano">
Dec  6 03:02:24 np0005548731 nova_compute[232433]:        <nova:memory>128</nova:memory>
Dec  6 03:02:24 np0005548731 nova_compute[232433]:        <nova:disk>1</nova:disk>
Dec  6 03:02:24 np0005548731 nova_compute[232433]:        <nova:swap>0</nova:swap>
Dec  6 03:02:24 np0005548731 nova_compute[232433]:        <nova:ephemeral>0</nova:ephemeral>
Dec  6 03:02:24 np0005548731 nova_compute[232433]:        <nova:vcpus>1</nova:vcpus>
Dec  6 03:02:24 np0005548731 nova_compute[232433]:      </nova:flavor>
Dec  6 03:02:24 np0005548731 nova_compute[232433]:      <nova:owner>
Dec  6 03:02:24 np0005548731 nova_compute[232433]:        <nova:user uuid="d5359905348247d0b9b5b95982e890bb">tempest-TestNetworkBasicOps-1435471576-project-member</nova:user>
Dec  6 03:02:24 np0005548731 nova_compute[232433]:        <nova:project uuid="f4735a799c84437b9dd4ea8778ad2fbb">tempest-TestNetworkBasicOps-1435471576</nova:project>
Dec  6 03:02:24 np0005548731 nova_compute[232433]:      </nova:owner>
Dec  6 03:02:24 np0005548731 nova_compute[232433]:      <nova:root type="image" uuid="6efab05d-c7cf-4770-a5c3-c806a2739063"/>
Dec  6 03:02:24 np0005548731 nova_compute[232433]:      <nova:ports>
Dec  6 03:02:24 np0005548731 nova_compute[232433]:        <nova:port uuid="8f7a7129-4a69-4a4c-8205-c5719b00005b">
Dec  6 03:02:24 np0005548731 nova_compute[232433]:          <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Dec  6 03:02:24 np0005548731 nova_compute[232433]:        </nova:port>
Dec  6 03:02:24 np0005548731 nova_compute[232433]:      </nova:ports>
Dec  6 03:02:24 np0005548731 nova_compute[232433]:    </nova:instance>
Dec  6 03:02:24 np0005548731 nova_compute[232433]:  </metadata>
Dec  6 03:02:24 np0005548731 nova_compute[232433]:  <sysinfo type="smbios">
Dec  6 03:02:24 np0005548731 nova_compute[232433]:    <system>
Dec  6 03:02:24 np0005548731 nova_compute[232433]:      <entry name="manufacturer">RDO</entry>
Dec  6 03:02:24 np0005548731 nova_compute[232433]:      <entry name="product">OpenStack Compute</entry>
Dec  6 03:02:24 np0005548731 nova_compute[232433]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec  6 03:02:24 np0005548731 nova_compute[232433]:      <entry name="serial">5631bb1a-c5dd-480d-ad0d-b8519f7d2858</entry>
Dec  6 03:02:24 np0005548731 nova_compute[232433]:      <entry name="uuid">5631bb1a-c5dd-480d-ad0d-b8519f7d2858</entry>
Dec  6 03:02:24 np0005548731 nova_compute[232433]:      <entry name="family">Virtual Machine</entry>
Dec  6 03:02:24 np0005548731 nova_compute[232433]:    </system>
Dec  6 03:02:24 np0005548731 nova_compute[232433]:  </sysinfo>
Dec  6 03:02:24 np0005548731 nova_compute[232433]:  <os>
Dec  6 03:02:24 np0005548731 nova_compute[232433]:    <type arch="x86_64" machine="q35">hvm</type>
Dec  6 03:02:24 np0005548731 nova_compute[232433]:    <boot dev="hd"/>
Dec  6 03:02:24 np0005548731 nova_compute[232433]:    <smbios mode="sysinfo"/>
Dec  6 03:02:24 np0005548731 nova_compute[232433]:  </os>
Dec  6 03:02:24 np0005548731 nova_compute[232433]:  <features>
Dec  6 03:02:24 np0005548731 nova_compute[232433]:    <acpi/>
Dec  6 03:02:24 np0005548731 nova_compute[232433]:    <apic/>
Dec  6 03:02:24 np0005548731 nova_compute[232433]:    <vmcoreinfo/>
Dec  6 03:02:24 np0005548731 nova_compute[232433]:  </features>
Dec  6 03:02:24 np0005548731 nova_compute[232433]:  <clock offset="utc">
Dec  6 03:02:24 np0005548731 nova_compute[232433]:    <timer name="pit" tickpolicy="delay"/>
Dec  6 03:02:24 np0005548731 nova_compute[232433]:    <timer name="rtc" tickpolicy="catchup"/>
Dec  6 03:02:24 np0005548731 nova_compute[232433]:    <timer name="hpet" present="no"/>
Dec  6 03:02:24 np0005548731 nova_compute[232433]:  </clock>
Dec  6 03:02:24 np0005548731 nova_compute[232433]:  <cpu mode="custom" match="exact">
Dec  6 03:02:24 np0005548731 nova_compute[232433]:    <model>Nehalem</model>
Dec  6 03:02:24 np0005548731 nova_compute[232433]:    <topology sockets="1" cores="1" threads="1"/>
Dec  6 03:02:24 np0005548731 nova_compute[232433]:  </cpu>
Dec  6 03:02:24 np0005548731 nova_compute[232433]:  <devices>
Dec  6 03:02:24 np0005548731 nova_compute[232433]:    <disk type="network" device="disk">
Dec  6 03:02:24 np0005548731 nova_compute[232433]:      <driver type="raw" cache="none"/>
Dec  6 03:02:24 np0005548731 nova_compute[232433]:      <source protocol="rbd" name="vms/5631bb1a-c5dd-480d-ad0d-b8519f7d2858_disk">
Dec  6 03:02:24 np0005548731 nova_compute[232433]:        <host name="192.168.122.100" port="6789"/>
Dec  6 03:02:24 np0005548731 nova_compute[232433]:        <host name="192.168.122.102" port="6789"/>
Dec  6 03:02:24 np0005548731 nova_compute[232433]:        <host name="192.168.122.101" port="6789"/>
Dec  6 03:02:24 np0005548731 nova_compute[232433]:      </source>
Dec  6 03:02:24 np0005548731 nova_compute[232433]:      <auth username="openstack">
Dec  6 03:02:24 np0005548731 nova_compute[232433]:        <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec  6 03:02:24 np0005548731 nova_compute[232433]:      </auth>
Dec  6 03:02:24 np0005548731 nova_compute[232433]:      <target dev="vda" bus="virtio"/>
Dec  6 03:02:24 np0005548731 nova_compute[232433]:    </disk>
Dec  6 03:02:24 np0005548731 nova_compute[232433]:    <disk type="network" device="cdrom">
Dec  6 03:02:24 np0005548731 nova_compute[232433]:      <driver type="raw" cache="none"/>
Dec  6 03:02:24 np0005548731 nova_compute[232433]:      <source protocol="rbd" name="vms/5631bb1a-c5dd-480d-ad0d-b8519f7d2858_disk.config">
Dec  6 03:02:24 np0005548731 nova_compute[232433]:        <host name="192.168.122.100" port="6789"/>
Dec  6 03:02:24 np0005548731 nova_compute[232433]:        <host name="192.168.122.102" port="6789"/>
Dec  6 03:02:24 np0005548731 nova_compute[232433]:        <host name="192.168.122.101" port="6789"/>
Dec  6 03:02:24 np0005548731 nova_compute[232433]:      </source>
Dec  6 03:02:24 np0005548731 nova_compute[232433]:      <auth username="openstack">
Dec  6 03:02:24 np0005548731 nova_compute[232433]:        <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec  6 03:02:24 np0005548731 nova_compute[232433]:      </auth>
Dec  6 03:02:24 np0005548731 nova_compute[232433]:      <target dev="sda" bus="sata"/>
Dec  6 03:02:24 np0005548731 nova_compute[232433]:    </disk>
Dec  6 03:02:24 np0005548731 nova_compute[232433]:    <interface type="ethernet">
Dec  6 03:02:24 np0005548731 nova_compute[232433]:      <mac address="fa:16:3e:2d:94:e7"/>
Dec  6 03:02:24 np0005548731 nova_compute[232433]:      <model type="virtio"/>
Dec  6 03:02:24 np0005548731 nova_compute[232433]:      <driver name="vhost" rx_queue_size="512"/>
Dec  6 03:02:24 np0005548731 nova_compute[232433]:      <mtu size="1442"/>
Dec  6 03:02:24 np0005548731 nova_compute[232433]:      <target dev="tap8f7a7129-4a"/>
Dec  6 03:02:24 np0005548731 nova_compute[232433]:    </interface>
Dec  6 03:02:24 np0005548731 nova_compute[232433]:    <serial type="pty">
Dec  6 03:02:24 np0005548731 nova_compute[232433]:      <log file="/var/lib/nova/instances/5631bb1a-c5dd-480d-ad0d-b8519f7d2858/console.log" append="off"/>
Dec  6 03:02:24 np0005548731 nova_compute[232433]:    </serial>
Dec  6 03:02:24 np0005548731 nova_compute[232433]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec  6 03:02:24 np0005548731 nova_compute[232433]:    <video>
Dec  6 03:02:24 np0005548731 nova_compute[232433]:      <model type="virtio"/>
Dec  6 03:02:24 np0005548731 nova_compute[232433]:    </video>
Dec  6 03:02:24 np0005548731 nova_compute[232433]:    <input type="tablet" bus="usb"/>
Dec  6 03:02:24 np0005548731 nova_compute[232433]:    <rng model="virtio">
Dec  6 03:02:24 np0005548731 nova_compute[232433]:      <backend model="random">/dev/urandom</backend>
Dec  6 03:02:24 np0005548731 nova_compute[232433]:    </rng>
Dec  6 03:02:24 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root"/>
Dec  6 03:02:24 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:02:24 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:02:24 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:02:24 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:02:24 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:02:24 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:02:24 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:02:24 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:02:24 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:02:24 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:02:24 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:02:24 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:02:24 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:02:24 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:02:24 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:02:24 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:02:24 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:02:24 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:02:24 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:02:24 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:02:24 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:02:24 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:02:24 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:02:24 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:02:24 np0005548731 nova_compute[232433]:    <controller type="usb" index="0"/>
Dec  6 03:02:24 np0005548731 nova_compute[232433]:    <memballoon model="virtio">
Dec  6 03:02:24 np0005548731 nova_compute[232433]:      <stats period="10"/>
Dec  6 03:02:24 np0005548731 nova_compute[232433]:    </memballoon>
Dec  6 03:02:24 np0005548731 nova_compute[232433]:  </devices>
Dec  6 03:02:24 np0005548731 nova_compute[232433]: </domain>
Dec  6 03:02:24 np0005548731 nova_compute[232433]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec  6 03:02:24 np0005548731 nova_compute[232433]: 2025-12-06 08:02:24.242 232437 DEBUG nova.compute.manager [None req-c4fc3eac-9369-4851-a425-c48a6166ac8e d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 5631bb1a-c5dd-480d-ad0d-b8519f7d2858] Preparing to wait for external event network-vif-plugged-8f7a7129-4a69-4a4c-8205-c5719b00005b prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec  6 03:02:24 np0005548731 nova_compute[232433]: 2025-12-06 08:02:24.242 232437 DEBUG oslo_concurrency.lockutils [None req-c4fc3eac-9369-4851-a425-c48a6166ac8e d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Acquiring lock "5631bb1a-c5dd-480d-ad0d-b8519f7d2858-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:02:24 np0005548731 nova_compute[232433]: 2025-12-06 08:02:24.242 232437 DEBUG oslo_concurrency.lockutils [None req-c4fc3eac-9369-4851-a425-c48a6166ac8e d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Lock "5631bb1a-c5dd-480d-ad0d-b8519f7d2858-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:02:24 np0005548731 nova_compute[232433]: 2025-12-06 08:02:24.243 232437 DEBUG oslo_concurrency.lockutils [None req-c4fc3eac-9369-4851-a425-c48a6166ac8e d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Lock "5631bb1a-c5dd-480d-ad0d-b8519f7d2858-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:02:24 np0005548731 nova_compute[232433]: 2025-12-06 08:02:24.243 232437 DEBUG nova.virt.libvirt.vif [None req-c4fc3eac-9369-4851-a425-c48a6166ac8e d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T08:02:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1271350420',display_name='tempest-TestNetworkBasicOps-server-1271350420',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1271350420',id=185,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBE77FYJgDkrxzG8W4+xLXhPOdFEca45Bj8zAeRPk4LZkIHRLpB+9sImdEg1wjzkrSrzVyN00VUwIMIwYuEfIAoq3Nx58veJTCciIHJ/EgBnAVEqFMMrhoURvjqeCMkMzww==',key_name='tempest-TestNetworkBasicOps-70649793',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f4735a799c84437b9dd4ea8778ad2fbb',ramdisk_id='',reservation_id='r-vjv0p4ap',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1435471576',owner_user_name='tempest-TestNetworkBasicOps-1435471576-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T08:02:16Z,user_data=None,user_id='d5359905348247d0b9b5b95982e890bb',uuid=5631bb1a-c5dd-480d-ad0d-b8519f7d2858,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "8f7a7129-4a69-4a4c-8205-c5719b00005b", "address": "fa:16:3e:2d:94:e7", "network": {"id": "6b5f9e0f-ed86-4284-9c14-39494399dc0e", "bridge": "br-int", "label": "tempest-network-smoke--576336473", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "f4735a799c84437b9dd4ea8778ad2fbb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8f7a7129-4a", "ovs_interfaceid": "8f7a7129-4a69-4a4c-8205-c5719b00005b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec  6 03:02:24 np0005548731 nova_compute[232433]: 2025-12-06 08:02:24.244 232437 DEBUG nova.network.os_vif_util [None req-c4fc3eac-9369-4851-a425-c48a6166ac8e d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Converting VIF {"id": "8f7a7129-4a69-4a4c-8205-c5719b00005b", "address": "fa:16:3e:2d:94:e7", "network": {"id": "6b5f9e0f-ed86-4284-9c14-39494399dc0e", "bridge": "br-int", "label": "tempest-network-smoke--576336473", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f4735a799c84437b9dd4ea8778ad2fbb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8f7a7129-4a", "ovs_interfaceid": "8f7a7129-4a69-4a4c-8205-c5719b00005b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 03:02:24 np0005548731 nova_compute[232433]: 2025-12-06 08:02:24.244 232437 DEBUG nova.network.os_vif_util [None req-c4fc3eac-9369-4851-a425-c48a6166ac8e d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2d:94:e7,bridge_name='br-int',has_traffic_filtering=True,id=8f7a7129-4a69-4a4c-8205-c5719b00005b,network=Network(6b5f9e0f-ed86-4284-9c14-39494399dc0e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8f7a7129-4a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 03:02:24 np0005548731 nova_compute[232433]: 2025-12-06 08:02:24.245 232437 DEBUG os_vif [None req-c4fc3eac-9369-4851-a425-c48a6166ac8e d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:2d:94:e7,bridge_name='br-int',has_traffic_filtering=True,id=8f7a7129-4a69-4a4c-8205-c5719b00005b,network=Network(6b5f9e0f-ed86-4284-9c14-39494399dc0e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8f7a7129-4a') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec  6 03:02:24 np0005548731 nova_compute[232433]: 2025-12-06 08:02:24.245 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:02:24 np0005548731 nova_compute[232433]: 2025-12-06 08:02:24.246 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 03:02:24 np0005548731 nova_compute[232433]: 2025-12-06 08:02:24.246 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  6 03:02:24 np0005548731 nova_compute[232433]: 2025-12-06 08:02:24.251 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:02:24 np0005548731 nova_compute[232433]: 2025-12-06 08:02:24.251 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8f7a7129-4a, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 03:02:24 np0005548731 nova_compute[232433]: 2025-12-06 08:02:24.252 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap8f7a7129-4a, col_values=(('external_ids', {'iface-id': '8f7a7129-4a69-4a4c-8205-c5719b00005b', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:2d:94:e7', 'vm-uuid': '5631bb1a-c5dd-480d-ad0d-b8519f7d2858'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 03:02:24 np0005548731 nova_compute[232433]: 2025-12-06 08:02:24.254 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:02:24 np0005548731 NetworkManager[49182]: <info>  [1765008144.2551] manager: (tap8f7a7129-4a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/425)
Dec  6 03:02:24 np0005548731 nova_compute[232433]: 2025-12-06 08:02:24.255 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  6 03:02:24 np0005548731 nova_compute[232433]: 2025-12-06 08:02:24.263 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:02:24 np0005548731 nova_compute[232433]: 2025-12-06 08:02:24.264 232437 INFO os_vif [None req-c4fc3eac-9369-4851-a425-c48a6166ac8e d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:2d:94:e7,bridge_name='br-int',has_traffic_filtering=True,id=8f7a7129-4a69-4a4c-8205-c5719b00005b,network=Network(6b5f9e0f-ed86-4284-9c14-39494399dc0e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8f7a7129-4a')#033[00m
Dec  6 03:02:24 np0005548731 nova_compute[232433]: 2025-12-06 08:02:24.315 232437 DEBUG nova.virt.libvirt.driver [None req-c4fc3eac-9369-4851-a425-c48a6166ac8e d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  6 03:02:24 np0005548731 nova_compute[232433]: 2025-12-06 08:02:24.315 232437 DEBUG nova.virt.libvirt.driver [None req-c4fc3eac-9369-4851-a425-c48a6166ac8e d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  6 03:02:24 np0005548731 nova_compute[232433]: 2025-12-06 08:02:24.316 232437 DEBUG nova.virt.libvirt.driver [None req-c4fc3eac-9369-4851-a425-c48a6166ac8e d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] No VIF found with MAC fa:16:3e:2d:94:e7, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec  6 03:02:24 np0005548731 nova_compute[232433]: 2025-12-06 08:02:24.316 232437 INFO nova.virt.libvirt.driver [None req-c4fc3eac-9369-4851-a425-c48a6166ac8e d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 5631bb1a-c5dd-480d-ad0d-b8519f7d2858] Using config drive#033[00m
Dec  6 03:02:24 np0005548731 nova_compute[232433]: 2025-12-06 08:02:24.343 232437 DEBUG nova.storage.rbd_utils [None req-c4fc3eac-9369-4851-a425-c48a6166ac8e d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] rbd image 5631bb1a-c5dd-480d-ad0d-b8519f7d2858_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 03:02:24 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:02:24 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:02:24 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:02:24.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:02:25 np0005548731 nova_compute[232433]: 2025-12-06 08:02:25.302 232437 INFO nova.virt.libvirt.driver [None req-c4fc3eac-9369-4851-a425-c48a6166ac8e d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 5631bb1a-c5dd-480d-ad0d-b8519f7d2858] Creating config drive at /var/lib/nova/instances/5631bb1a-c5dd-480d-ad0d-b8519f7d2858/disk.config#033[00m
Dec  6 03:02:25 np0005548731 nova_compute[232433]: 2025-12-06 08:02:25.309 232437 DEBUG oslo_concurrency.processutils [None req-c4fc3eac-9369-4851-a425-c48a6166ac8e d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/5631bb1a-c5dd-480d-ad0d-b8519f7d2858/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpz0imyg36 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 03:02:25 np0005548731 nova_compute[232433]: 2025-12-06 08:02:25.453 232437 DEBUG oslo_concurrency.processutils [None req-c4fc3eac-9369-4851-a425-c48a6166ac8e d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/5631bb1a-c5dd-480d-ad0d-b8519f7d2858/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpz0imyg36" returned: 0 in 0.144s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 03:02:25 np0005548731 nova_compute[232433]: 2025-12-06 08:02:25.481 232437 DEBUG nova.storage.rbd_utils [None req-c4fc3eac-9369-4851-a425-c48a6166ac8e d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] rbd image 5631bb1a-c5dd-480d-ad0d-b8519f7d2858_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 03:02:25 np0005548731 nova_compute[232433]: 2025-12-06 08:02:25.486 232437 DEBUG oslo_concurrency.processutils [None req-c4fc3eac-9369-4851-a425-c48a6166ac8e d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/5631bb1a-c5dd-480d-ad0d-b8519f7d2858/disk.config 5631bb1a-c5dd-480d-ad0d-b8519f7d2858_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 03:02:25 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:02:25 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:02:25 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:02:25.604 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:02:25 np0005548731 nova_compute[232433]: 2025-12-06 08:02:25.618 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:02:25 np0005548731 nova_compute[232433]: 2025-12-06 08:02:25.667 232437 DEBUG oslo_concurrency.processutils [None req-c4fc3eac-9369-4851-a425-c48a6166ac8e d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/5631bb1a-c5dd-480d-ad0d-b8519f7d2858/disk.config 5631bb1a-c5dd-480d-ad0d-b8519f7d2858_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.181s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 03:02:25 np0005548731 nova_compute[232433]: 2025-12-06 08:02:25.669 232437 INFO nova.virt.libvirt.driver [None req-c4fc3eac-9369-4851-a425-c48a6166ac8e d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 5631bb1a-c5dd-480d-ad0d-b8519f7d2858] Deleting local config drive /var/lib/nova/instances/5631bb1a-c5dd-480d-ad0d-b8519f7d2858/disk.config because it was imported into RBD.#033[00m
Dec  6 03:02:25 np0005548731 kernel: tap8f7a7129-4a: entered promiscuous mode
Dec  6 03:02:25 np0005548731 NetworkManager[49182]: <info>  [1765008145.7200] manager: (tap8f7a7129-4a): new Tun device (/org/freedesktop/NetworkManager/Devices/426)
Dec  6 03:02:25 np0005548731 ovn_controller[133927]: 2025-12-06T08:02:25Z|00929|binding|INFO|Claiming lport 8f7a7129-4a69-4a4c-8205-c5719b00005b for this chassis.
Dec  6 03:02:25 np0005548731 ovn_controller[133927]: 2025-12-06T08:02:25Z|00930|binding|INFO|8f7a7129-4a69-4a4c-8205-c5719b00005b: Claiming fa:16:3e:2d:94:e7 10.100.0.7
Dec  6 03:02:25 np0005548731 nova_compute[232433]: 2025-12-06 08:02:25.721 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:02:25 np0005548731 nova_compute[232433]: 2025-12-06 08:02:25.726 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:02:25 np0005548731 nova_compute[232433]: 2025-12-06 08:02:25.727 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:02:25 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:02:25.746 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:2d:94:e7 10.100.0.7'], port_security=['fa:16:3e:2d:94:e7 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '5631bb1a-c5dd-480d-ad0d-b8519f7d2858', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6b5f9e0f-ed86-4284-9c14-39494399dc0e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f4735a799c84437b9dd4ea8778ad2fbb', 'neutron:revision_number': '2', 'neutron:security_group_ids': '4f0cdec7-18c7-485f-bd98-88eef8c2e308', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b927104e-a7e2-447d-8b28-1e80d27343ce, chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], logical_port=8f7a7129-4a69-4a4c-8205-c5719b00005b) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 03:02:25 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:02:25.748 143965 INFO neutron.agent.ovn.metadata.agent [-] Port 8f7a7129-4a69-4a4c-8205-c5719b00005b in datapath 6b5f9e0f-ed86-4284-9c14-39494399dc0e bound to our chassis#033[00m
Dec  6 03:02:25 np0005548731 systemd-udevd[321447]: Network interface NamePolicy= disabled on kernel command line.
Dec  6 03:02:25 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:02:25.749 143965 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 6b5f9e0f-ed86-4284-9c14-39494399dc0e#033[00m
Dec  6 03:02:25 np0005548731 systemd-machined[195355]: New machine qemu-94-instance-000000b9.
Dec  6 03:02:25 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:02:25.762 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[904fc716-b393-419b-8a25-8de987e5ec73]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:02:25 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:02:25.764 143965 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap6b5f9e0f-e1 in ovnmeta-6b5f9e0f-ed86-4284-9c14-39494399dc0e namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Dec  6 03:02:25 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:02:25.766 236668 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap6b5f9e0f-e0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Dec  6 03:02:25 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:02:25.766 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[723191b1-8c7e-45f2-b3cd-50dc7fa047a9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:02:25 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:02:25.767 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[e0490423-9469-4cef-86b5-10c7f781edc0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:02:25 np0005548731 NetworkManager[49182]: <info>  [1765008145.7702] device (tap8f7a7129-4a): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec  6 03:02:25 np0005548731 NetworkManager[49182]: <info>  [1765008145.7716] device (tap8f7a7129-4a): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec  6 03:02:25 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:02:25.784 144079 DEBUG oslo.privsep.daemon [-] privsep: reply[bcd546ac-6c83-421a-9c10-513d8854094c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:02:25 np0005548731 nova_compute[232433]: 2025-12-06 08:02:25.786 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:02:25 np0005548731 ovn_controller[133927]: 2025-12-06T08:02:25Z|00931|binding|INFO|Setting lport 8f7a7129-4a69-4a4c-8205-c5719b00005b ovn-installed in OVS
Dec  6 03:02:25 np0005548731 ovn_controller[133927]: 2025-12-06T08:02:25Z|00932|binding|INFO|Setting lport 8f7a7129-4a69-4a4c-8205-c5719b00005b up in Southbound
Dec  6 03:02:25 np0005548731 nova_compute[232433]: 2025-12-06 08:02:25.791 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:02:25 np0005548731 systemd[1]: Started Virtual Machine qemu-94-instance-000000b9.
Dec  6 03:02:25 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:02:25.800 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[ea44a0c9-4f46-4ea6-b979-0b905c21c652]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:02:25 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:02:25.843 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[ce4831ac-601c-41ee-a683-c7b86afab48d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:02:25 np0005548731 NetworkManager[49182]: <info>  [1765008145.8504] manager: (tap6b5f9e0f-e0): new Veth device (/org/freedesktop/NetworkManager/Devices/427)
Dec  6 03:02:25 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:02:25.849 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[b2b82025-1ed3-4d57-b09a-8ffc0f938d0c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:02:25 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:02:25.884 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[00b1c8da-3294-49cd-a09d-8c71e7c99934]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:02:25 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:02:25.887 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[53885596-0dba-4bbb-a41f-6294d1bf0c0d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:02:25 np0005548731 NetworkManager[49182]: <info>  [1765008145.9111] device (tap6b5f9e0f-e0): carrier: link connected
Dec  6 03:02:25 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:02:25.916 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[b102011d-fcea-4c12-9153-6118d1d15fd7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:02:25 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:02:25.931 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[4fc75a51-7b17-4b5f-aa36-9f19b9fd8425]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6b5f9e0f-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:12:a8:30'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 282], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 840791, 'reachable_time': 18763, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 321482, 'error': None, 'target': 'ovnmeta-6b5f9e0f-ed86-4284-9c14-39494399dc0e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:02:25 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:02:25.949 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[2fa30485-0d72-4e38-bbcf-84523d97a4a8]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe12:a830'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 840791, 'tstamp': 840791}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 321483, 'error': None, 'target': 'ovnmeta-6b5f9e0f-ed86-4284-9c14-39494399dc0e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:02:25 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:02:25.964 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[76cbdb56-fcf5-4cda-8337-dd87324d9a7d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6b5f9e0f-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:12:a8:30'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 282], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 840791, 'reachable_time': 18763, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 321484, 'error': None, 'target': 'ovnmeta-6b5f9e0f-ed86-4284-9c14-39494399dc0e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:02:25 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:02:25.992 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[8b4ee670-3ddd-475c-a786-fcff7c4b0309]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:02:26 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:02:26.062 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[b8fc1bfb-f17a-4121-b063-550ba5845bbc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:02:26 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:02:26.063 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6b5f9e0f-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 03:02:26 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:02:26.063 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  6 03:02:26 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:02:26.064 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6b5f9e0f-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 03:02:26 np0005548731 NetworkManager[49182]: <info>  [1765008146.0665] manager: (tap6b5f9e0f-e0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/428)
Dec  6 03:02:26 np0005548731 nova_compute[232433]: 2025-12-06 08:02:26.066 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:02:26 np0005548731 kernel: tap6b5f9e0f-e0: entered promiscuous mode
Dec  6 03:02:26 np0005548731 nova_compute[232433]: 2025-12-06 08:02:26.071 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:02:26 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:02:26.072 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap6b5f9e0f-e0, col_values=(('external_ids', {'iface-id': '90543e6f-010c-4bda-99b8-1a8d429fda2f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 03:02:26 np0005548731 nova_compute[232433]: 2025-12-06 08:02:26.073 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:02:26 np0005548731 ovn_controller[133927]: 2025-12-06T08:02:26Z|00933|binding|INFO|Releasing lport 90543e6f-010c-4bda-99b8-1a8d429fda2f from this chassis (sb_readonly=0)
Dec  6 03:02:26 np0005548731 nova_compute[232433]: 2025-12-06 08:02:26.086 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:02:26 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:02:26.086 143965 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/6b5f9e0f-ed86-4284-9c14-39494399dc0e.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/6b5f9e0f-ed86-4284-9c14-39494399dc0e.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Dec  6 03:02:26 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:02:26.087 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[53961ae6-800f-4a6f-a3b5-3a09a1cea6ec]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:02:26 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:02:26.088 143965 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec  6 03:02:26 np0005548731 ovn_metadata_agent[143960]: global
Dec  6 03:02:26 np0005548731 ovn_metadata_agent[143960]:    log         /dev/log local0 debug
Dec  6 03:02:26 np0005548731 ovn_metadata_agent[143960]:    log-tag     haproxy-metadata-proxy-6b5f9e0f-ed86-4284-9c14-39494399dc0e
Dec  6 03:02:26 np0005548731 ovn_metadata_agent[143960]:    user        root
Dec  6 03:02:26 np0005548731 ovn_metadata_agent[143960]:    group       root
Dec  6 03:02:26 np0005548731 ovn_metadata_agent[143960]:    maxconn     1024
Dec  6 03:02:26 np0005548731 ovn_metadata_agent[143960]:    pidfile     /var/lib/neutron/external/pids/6b5f9e0f-ed86-4284-9c14-39494399dc0e.pid.haproxy
Dec  6 03:02:26 np0005548731 ovn_metadata_agent[143960]:    daemon
Dec  6 03:02:26 np0005548731 ovn_metadata_agent[143960]: 
Dec  6 03:02:26 np0005548731 ovn_metadata_agent[143960]: defaults
Dec  6 03:02:26 np0005548731 ovn_metadata_agent[143960]:    log global
Dec  6 03:02:26 np0005548731 ovn_metadata_agent[143960]:    mode http
Dec  6 03:02:26 np0005548731 ovn_metadata_agent[143960]:    option httplog
Dec  6 03:02:26 np0005548731 ovn_metadata_agent[143960]:    option dontlognull
Dec  6 03:02:26 np0005548731 ovn_metadata_agent[143960]:    option http-server-close
Dec  6 03:02:26 np0005548731 ovn_metadata_agent[143960]:    option forwardfor
Dec  6 03:02:26 np0005548731 ovn_metadata_agent[143960]:    retries                 3
Dec  6 03:02:26 np0005548731 ovn_metadata_agent[143960]:    timeout http-request    30s
Dec  6 03:02:26 np0005548731 ovn_metadata_agent[143960]:    timeout connect         30s
Dec  6 03:02:26 np0005548731 ovn_metadata_agent[143960]:    timeout client          32s
Dec  6 03:02:26 np0005548731 ovn_metadata_agent[143960]:    timeout server          32s
Dec  6 03:02:26 np0005548731 ovn_metadata_agent[143960]:    timeout http-keep-alive 30s
Dec  6 03:02:26 np0005548731 ovn_metadata_agent[143960]: 
Dec  6 03:02:26 np0005548731 ovn_metadata_agent[143960]: 
Dec  6 03:02:26 np0005548731 ovn_metadata_agent[143960]: listen listener
Dec  6 03:02:26 np0005548731 ovn_metadata_agent[143960]:    bind 169.254.169.254:80
Dec  6 03:02:26 np0005548731 ovn_metadata_agent[143960]:    server metadata /var/lib/neutron/metadata_proxy
Dec  6 03:02:26 np0005548731 ovn_metadata_agent[143960]:    http-request add-header X-OVN-Network-ID 6b5f9e0f-ed86-4284-9c14-39494399dc0e
Dec  6 03:02:26 np0005548731 ovn_metadata_agent[143960]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Dec  6 03:02:26 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:02:26.090 143965 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-6b5f9e0f-ed86-4284-9c14-39494399dc0e', 'env', 'PROCESS_TAG=haproxy-6b5f9e0f-ed86-4284-9c14-39494399dc0e', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/6b5f9e0f-ed86-4284-9c14-39494399dc0e.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Dec  6 03:02:26 np0005548731 podman[321516]: 2025-12-06 08:02:26.448394152 +0000 UTC m=+0.050936286 container create 930397b43b82509a4fa36d1ace61ad92706dc075d07b29c280c133547d89eee9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6b5f9e0f-ed86-4284-9c14-39494399dc0e, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec  6 03:02:26 np0005548731 systemd[1]: Started libpod-conmon-930397b43b82509a4fa36d1ace61ad92706dc075d07b29c280c133547d89eee9.scope.
Dec  6 03:02:26 np0005548731 systemd[1]: Started libcrun container.
Dec  6 03:02:26 np0005548731 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/459e0b987752519b1c945198bd2d0071001ba748a184371765317e08e3462970/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec  6 03:02:26 np0005548731 podman[321516]: 2025-12-06 08:02:26.422918399 +0000 UTC m=+0.025460553 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec  6 03:02:26 np0005548731 podman[321516]: 2025-12-06 08:02:26.522455913 +0000 UTC m=+0.124998067 container init 930397b43b82509a4fa36d1ace61ad92706dc075d07b29c280c133547d89eee9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6b5f9e0f-ed86-4284-9c14-39494399dc0e, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec  6 03:02:26 np0005548731 podman[321516]: 2025-12-06 08:02:26.529668679 +0000 UTC m=+0.132210803 container start 930397b43b82509a4fa36d1ace61ad92706dc075d07b29c280c133547d89eee9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6b5f9e0f-ed86-4284-9c14-39494399dc0e, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  6 03:02:26 np0005548731 neutron-haproxy-ovnmeta-6b5f9e0f-ed86-4284-9c14-39494399dc0e[321571]: [NOTICE]   (321578) : New worker (321581) forked
Dec  6 03:02:26 np0005548731 neutron-haproxy-ovnmeta-6b5f9e0f-ed86-4284-9c14-39494399dc0e[321571]: [NOTICE]   (321578) : Loading success.
Dec  6 03:02:26 np0005548731 nova_compute[232433]: 2025-12-06 08:02:26.579 232437 DEBUG nova.virt.driver [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Emitting event <LifecycleEvent: 1765008146.5791433, 5631bb1a-c5dd-480d-ad0d-b8519f7d2858 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 03:02:26 np0005548731 nova_compute[232433]: 2025-12-06 08:02:26.580 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 5631bb1a-c5dd-480d-ad0d-b8519f7d2858] VM Started (Lifecycle Event)#033[00m
Dec  6 03:02:26 np0005548731 nova_compute[232433]: 2025-12-06 08:02:26.633 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 5631bb1a-c5dd-480d-ad0d-b8519f7d2858] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 03:02:26 np0005548731 nova_compute[232433]: 2025-12-06 08:02:26.637 232437 DEBUG nova.virt.driver [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Emitting event <LifecycleEvent: 1765008146.5792897, 5631bb1a-c5dd-480d-ad0d-b8519f7d2858 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 03:02:26 np0005548731 nova_compute[232433]: 2025-12-06 08:02:26.637 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 5631bb1a-c5dd-480d-ad0d-b8519f7d2858] VM Paused (Lifecycle Event)#033[00m
Dec  6 03:02:26 np0005548731 nova_compute[232433]: 2025-12-06 08:02:26.707 232437 DEBUG nova.compute.manager [req-3c018a61-dd2c-4111-844f-b2f9f5a62062 req-debd6011-2375-4249-98e0-ba90000abcff 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 5631bb1a-c5dd-480d-ad0d-b8519f7d2858] Received event network-vif-plugged-8f7a7129-4a69-4a4c-8205-c5719b00005b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 03:02:26 np0005548731 nova_compute[232433]: 2025-12-06 08:02:26.708 232437 DEBUG oslo_concurrency.lockutils [req-3c018a61-dd2c-4111-844f-b2f9f5a62062 req-debd6011-2375-4249-98e0-ba90000abcff 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "5631bb1a-c5dd-480d-ad0d-b8519f7d2858-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:02:26 np0005548731 nova_compute[232433]: 2025-12-06 08:02:26.708 232437 DEBUG oslo_concurrency.lockutils [req-3c018a61-dd2c-4111-844f-b2f9f5a62062 req-debd6011-2375-4249-98e0-ba90000abcff 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "5631bb1a-c5dd-480d-ad0d-b8519f7d2858-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:02:26 np0005548731 nova_compute[232433]: 2025-12-06 08:02:26.708 232437 DEBUG oslo_concurrency.lockutils [req-3c018a61-dd2c-4111-844f-b2f9f5a62062 req-debd6011-2375-4249-98e0-ba90000abcff 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "5631bb1a-c5dd-480d-ad0d-b8519f7d2858-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:02:26 np0005548731 nova_compute[232433]: 2025-12-06 08:02:26.709 232437 DEBUG nova.compute.manager [req-3c018a61-dd2c-4111-844f-b2f9f5a62062 req-debd6011-2375-4249-98e0-ba90000abcff 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 5631bb1a-c5dd-480d-ad0d-b8519f7d2858] Processing event network-vif-plugged-8f7a7129-4a69-4a4c-8205-c5719b00005b _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Dec  6 03:02:26 np0005548731 nova_compute[232433]: 2025-12-06 08:02:26.709 232437 DEBUG nova.compute.manager [None req-c4fc3eac-9369-4851-a425-c48a6166ac8e d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 5631bb1a-c5dd-480d-ad0d-b8519f7d2858] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec  6 03:02:26 np0005548731 nova_compute[232433]: 2025-12-06 08:02:26.712 232437 DEBUG nova.virt.libvirt.driver [None req-c4fc3eac-9369-4851-a425-c48a6166ac8e d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 5631bb1a-c5dd-480d-ad0d-b8519f7d2858] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Dec  6 03:02:26 np0005548731 nova_compute[232433]: 2025-12-06 08:02:26.715 232437 INFO nova.virt.libvirt.driver [-] [instance: 5631bb1a-c5dd-480d-ad0d-b8519f7d2858] Instance spawned successfully.#033[00m
Dec  6 03:02:26 np0005548731 nova_compute[232433]: 2025-12-06 08:02:26.715 232437 DEBUG nova.virt.libvirt.driver [None req-c4fc3eac-9369-4851-a425-c48a6166ac8e d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 5631bb1a-c5dd-480d-ad0d-b8519f7d2858] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Dec  6 03:02:26 np0005548731 nova_compute[232433]: 2025-12-06 08:02:26.727 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 5631bb1a-c5dd-480d-ad0d-b8519f7d2858] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 03:02:26 np0005548731 nova_compute[232433]: 2025-12-06 08:02:26.730 232437 DEBUG nova.virt.driver [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Emitting event <LifecycleEvent: 1765008146.711922, 5631bb1a-c5dd-480d-ad0d-b8519f7d2858 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 03:02:26 np0005548731 nova_compute[232433]: 2025-12-06 08:02:26.731 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 5631bb1a-c5dd-480d-ad0d-b8519f7d2858] VM Resumed (Lifecycle Event)#033[00m
Dec  6 03:02:26 np0005548731 nova_compute[232433]: 2025-12-06 08:02:26.751 232437 DEBUG nova.virt.libvirt.driver [None req-c4fc3eac-9369-4851-a425-c48a6166ac8e d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 5631bb1a-c5dd-480d-ad0d-b8519f7d2858] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 03:02:26 np0005548731 nova_compute[232433]: 2025-12-06 08:02:26.751 232437 DEBUG nova.virt.libvirt.driver [None req-c4fc3eac-9369-4851-a425-c48a6166ac8e d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 5631bb1a-c5dd-480d-ad0d-b8519f7d2858] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 03:02:26 np0005548731 nova_compute[232433]: 2025-12-06 08:02:26.752 232437 DEBUG nova.virt.libvirt.driver [None req-c4fc3eac-9369-4851-a425-c48a6166ac8e d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 5631bb1a-c5dd-480d-ad0d-b8519f7d2858] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 03:02:26 np0005548731 nova_compute[232433]: 2025-12-06 08:02:26.752 232437 DEBUG nova.virt.libvirt.driver [None req-c4fc3eac-9369-4851-a425-c48a6166ac8e d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 5631bb1a-c5dd-480d-ad0d-b8519f7d2858] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 03:02:26 np0005548731 nova_compute[232433]: 2025-12-06 08:02:26.752 232437 DEBUG nova.virt.libvirt.driver [None req-c4fc3eac-9369-4851-a425-c48a6166ac8e d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 5631bb1a-c5dd-480d-ad0d-b8519f7d2858] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 03:02:26 np0005548731 nova_compute[232433]: 2025-12-06 08:02:26.753 232437 DEBUG nova.virt.libvirt.driver [None req-c4fc3eac-9369-4851-a425-c48a6166ac8e d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 5631bb1a-c5dd-480d-ad0d-b8519f7d2858] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 03:02:26 np0005548731 nova_compute[232433]: 2025-12-06 08:02:26.762 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 5631bb1a-c5dd-480d-ad0d-b8519f7d2858] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 03:02:26 np0005548731 nova_compute[232433]: 2025-12-06 08:02:26.765 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 5631bb1a-c5dd-480d-ad0d-b8519f7d2858] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  6 03:02:26 np0005548731 nova_compute[232433]: 2025-12-06 08:02:26.840 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 5631bb1a-c5dd-480d-ad0d-b8519f7d2858] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  6 03:02:26 np0005548731 nova_compute[232433]: 2025-12-06 08:02:26.889 232437 INFO nova.compute.manager [None req-c4fc3eac-9369-4851-a425-c48a6166ac8e d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 5631bb1a-c5dd-480d-ad0d-b8519f7d2858] Took 9.85 seconds to spawn the instance on the hypervisor.#033[00m
Dec  6 03:02:26 np0005548731 nova_compute[232433]: 2025-12-06 08:02:26.890 232437 DEBUG nova.compute.manager [None req-c4fc3eac-9369-4851-a425-c48a6166ac8e d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 5631bb1a-c5dd-480d-ad0d-b8519f7d2858] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 03:02:26 np0005548731 nova_compute[232433]: 2025-12-06 08:02:26.936 232437 DEBUG nova.network.neutron [req-a373cc2f-5006-4b4b-bc70-3989fbad5453 req-0fe19429-a817-4209-95af-0d29b2230f5e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 5631bb1a-c5dd-480d-ad0d-b8519f7d2858] Updated VIF entry in instance network info cache for port 8f7a7129-4a69-4a4c-8205-c5719b00005b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  6 03:02:26 np0005548731 nova_compute[232433]: 2025-12-06 08:02:26.937 232437 DEBUG nova.network.neutron [req-a373cc2f-5006-4b4b-bc70-3989fbad5453 req-0fe19429-a817-4209-95af-0d29b2230f5e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 5631bb1a-c5dd-480d-ad0d-b8519f7d2858] Updating instance_info_cache with network_info: [{"id": "8f7a7129-4a69-4a4c-8205-c5719b00005b", "address": "fa:16:3e:2d:94:e7", "network": {"id": "6b5f9e0f-ed86-4284-9c14-39494399dc0e", "bridge": "br-int", "label": "tempest-network-smoke--576336473", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f4735a799c84437b9dd4ea8778ad2fbb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8f7a7129-4a", "ovs_interfaceid": "8f7a7129-4a69-4a4c-8205-c5719b00005b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 03:02:26 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:02:26 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:02:26 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:02:26.937 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:02:27 np0005548731 nova_compute[232433]: 2025-12-06 08:02:27.205 232437 DEBUG oslo_concurrency.lockutils [req-a373cc2f-5006-4b4b-bc70-3989fbad5453 req-0fe19429-a817-4209-95af-0d29b2230f5e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-5631bb1a-c5dd-480d-ad0d-b8519f7d2858" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 03:02:27 np0005548731 nova_compute[232433]: 2025-12-06 08:02:27.226 232437 INFO nova.compute.manager [None req-c4fc3eac-9369-4851-a425-c48a6166ac8e d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 5631bb1a-c5dd-480d-ad0d-b8519f7d2858] Took 11.27 seconds to build instance.#033[00m
Dec  6 03:02:27 np0005548731 nova_compute[232433]: 2025-12-06 08:02:27.250 232437 DEBUG oslo_concurrency.lockutils [None req-c4fc3eac-9369-4851-a425-c48a6166ac8e d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Lock "5631bb1a-c5dd-480d-ad0d-b8519f7d2858" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.393s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:02:27 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:02:27 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:02:27 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:02:27.606 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:02:28 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:02:28 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:02:28 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:02:28.940 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:02:29 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e410 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:02:29 np0005548731 nova_compute[232433]: 2025-12-06 08:02:29.255 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:02:29 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:02:29 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:02:29 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:02:29.608 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:02:29 np0005548731 nova_compute[232433]: 2025-12-06 08:02:29.956 232437 DEBUG nova.compute.manager [req-a790418d-7408-411f-bc2f-b07889080572 req-1f6398b6-1a78-4391-877e-6bf2f0a31049 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 5631bb1a-c5dd-480d-ad0d-b8519f7d2858] Received event network-vif-plugged-8f7a7129-4a69-4a4c-8205-c5719b00005b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 03:02:29 np0005548731 nova_compute[232433]: 2025-12-06 08:02:29.957 232437 DEBUG oslo_concurrency.lockutils [req-a790418d-7408-411f-bc2f-b07889080572 req-1f6398b6-1a78-4391-877e-6bf2f0a31049 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "5631bb1a-c5dd-480d-ad0d-b8519f7d2858-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:02:29 np0005548731 nova_compute[232433]: 2025-12-06 08:02:29.957 232437 DEBUG oslo_concurrency.lockutils [req-a790418d-7408-411f-bc2f-b07889080572 req-1f6398b6-1a78-4391-877e-6bf2f0a31049 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "5631bb1a-c5dd-480d-ad0d-b8519f7d2858-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:02:29 np0005548731 nova_compute[232433]: 2025-12-06 08:02:29.957 232437 DEBUG oslo_concurrency.lockutils [req-a790418d-7408-411f-bc2f-b07889080572 req-1f6398b6-1a78-4391-877e-6bf2f0a31049 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "5631bb1a-c5dd-480d-ad0d-b8519f7d2858-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:02:29 np0005548731 nova_compute[232433]: 2025-12-06 08:02:29.957 232437 DEBUG nova.compute.manager [req-a790418d-7408-411f-bc2f-b07889080572 req-1f6398b6-1a78-4391-877e-6bf2f0a31049 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 5631bb1a-c5dd-480d-ad0d-b8519f7d2858] No waiting events found dispatching network-vif-plugged-8f7a7129-4a69-4a4c-8205-c5719b00005b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 03:02:29 np0005548731 nova_compute[232433]: 2025-12-06 08:02:29.958 232437 WARNING nova.compute.manager [req-a790418d-7408-411f-bc2f-b07889080572 req-1f6398b6-1a78-4391-877e-6bf2f0a31049 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 5631bb1a-c5dd-480d-ad0d-b8519f7d2858] Received unexpected event network-vif-plugged-8f7a7129-4a69-4a4c-8205-c5719b00005b for instance with vm_state active and task_state None.#033[00m
Dec  6 03:02:30 np0005548731 nova_compute[232433]: 2025-12-06 08:02:30.641 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:02:30 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:02:30 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:02:30 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:02:30.943 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:02:31 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:02:31 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:02:31 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:02:31.610 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:02:32 np0005548731 NetworkManager[49182]: <info>  [1765008152.8985] manager: (patch-br-int-to-provnet-9e78c1a1-68f4-477a-abaa-13a98bde06e5): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/429)
Dec  6 03:02:32 np0005548731 NetworkManager[49182]: <info>  [1765008152.8992] manager: (patch-provnet-9e78c1a1-68f4-477a-abaa-13a98bde06e5-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/430)
Dec  6 03:02:32 np0005548731 nova_compute[232433]: 2025-12-06 08:02:32.903 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:02:32 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:02:32 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:02:32 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:02:32.951 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:02:33 np0005548731 ovn_controller[133927]: 2025-12-06T08:02:32Z|00934|binding|INFO|Releasing lport 90543e6f-010c-4bda-99b8-1a8d429fda2f from this chassis (sb_readonly=0)
Dec  6 03:02:33 np0005548731 ovn_controller[133927]: 2025-12-06T08:02:33Z|00935|binding|INFO|Releasing lport 90543e6f-010c-4bda-99b8-1a8d429fda2f from this chassis (sb_readonly=0)
Dec  6 03:02:33 np0005548731 nova_compute[232433]: 2025-12-06 08:02:33.014 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:02:33 np0005548731 nova_compute[232433]: 2025-12-06 08:02:33.593 232437 DEBUG nova.compute.manager [req-3c2e67bb-ef8b-475a-808c-38d024c9d4b3 req-88f6bd65-da97-4203-956c-25b0b1e3a4e9 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 5631bb1a-c5dd-480d-ad0d-b8519f7d2858] Received event network-changed-8f7a7129-4a69-4a4c-8205-c5719b00005b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 03:02:33 np0005548731 nova_compute[232433]: 2025-12-06 08:02:33.594 232437 DEBUG nova.compute.manager [req-3c2e67bb-ef8b-475a-808c-38d024c9d4b3 req-88f6bd65-da97-4203-956c-25b0b1e3a4e9 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 5631bb1a-c5dd-480d-ad0d-b8519f7d2858] Refreshing instance network info cache due to event network-changed-8f7a7129-4a69-4a4c-8205-c5719b00005b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  6 03:02:33 np0005548731 nova_compute[232433]: 2025-12-06 08:02:33.594 232437 DEBUG oslo_concurrency.lockutils [req-3c2e67bb-ef8b-475a-808c-38d024c9d4b3 req-88f6bd65-da97-4203-956c-25b0b1e3a4e9 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-5631bb1a-c5dd-480d-ad0d-b8519f7d2858" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 03:02:33 np0005548731 nova_compute[232433]: 2025-12-06 08:02:33.594 232437 DEBUG oslo_concurrency.lockutils [req-3c2e67bb-ef8b-475a-808c-38d024c9d4b3 req-88f6bd65-da97-4203-956c-25b0b1e3a4e9 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-5631bb1a-c5dd-480d-ad0d-b8519f7d2858" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 03:02:33 np0005548731 nova_compute[232433]: 2025-12-06 08:02:33.594 232437 DEBUG nova.network.neutron [req-3c2e67bb-ef8b-475a-808c-38d024c9d4b3 req-88f6bd65-da97-4203-956c-25b0b1e3a4e9 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 5631bb1a-c5dd-480d-ad0d-b8519f7d2858] Refreshing network info cache for port 8f7a7129-4a69-4a4c-8205-c5719b00005b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  6 03:02:33 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:02:33 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:02:33 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:02:33.612 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:02:34 np0005548731 nova_compute[232433]: 2025-12-06 08:02:34.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:02:34 np0005548731 nova_compute[232433]: 2025-12-06 08:02:34.104 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  6 03:02:34 np0005548731 nova_compute[232433]: 2025-12-06 08:02:34.104 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  6 03:02:34 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e410 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:02:34 np0005548731 nova_compute[232433]: 2025-12-06 08:02:34.257 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:02:34 np0005548731 nova_compute[232433]: 2025-12-06 08:02:34.545 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Acquiring lock "refresh_cache-5631bb1a-c5dd-480d-ad0d-b8519f7d2858" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 03:02:34 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:02:34 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:02:34 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:02:34.953 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:02:35 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:02:35 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:02:35 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:02:35.615 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:02:35 np0005548731 nova_compute[232433]: 2025-12-06 08:02:35.642 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:02:36 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:02:36 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:02:36 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:02:36.956 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:02:37 np0005548731 nova_compute[232433]: 2025-12-06 08:02:37.028 232437 DEBUG nova.network.neutron [req-3c2e67bb-ef8b-475a-808c-38d024c9d4b3 req-88f6bd65-da97-4203-956c-25b0b1e3a4e9 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 5631bb1a-c5dd-480d-ad0d-b8519f7d2858] Updated VIF entry in instance network info cache for port 8f7a7129-4a69-4a4c-8205-c5719b00005b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  6 03:02:37 np0005548731 nova_compute[232433]: 2025-12-06 08:02:37.028 232437 DEBUG nova.network.neutron [req-3c2e67bb-ef8b-475a-808c-38d024c9d4b3 req-88f6bd65-da97-4203-956c-25b0b1e3a4e9 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 5631bb1a-c5dd-480d-ad0d-b8519f7d2858] Updating instance_info_cache with network_info: [{"id": "8f7a7129-4a69-4a4c-8205-c5719b00005b", "address": "fa:16:3e:2d:94:e7", "network": {"id": "6b5f9e0f-ed86-4284-9c14-39494399dc0e", "bridge": "br-int", "label": "tempest-network-smoke--576336473", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.248", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f4735a799c84437b9dd4ea8778ad2fbb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8f7a7129-4a", "ovs_interfaceid": "8f7a7129-4a69-4a4c-8205-c5719b00005b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 03:02:37 np0005548731 nova_compute[232433]: 2025-12-06 08:02:37.054 232437 DEBUG oslo_concurrency.lockutils [req-3c2e67bb-ef8b-475a-808c-38d024c9d4b3 req-88f6bd65-da97-4203-956c-25b0b1e3a4e9 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-5631bb1a-c5dd-480d-ad0d-b8519f7d2858" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 03:02:37 np0005548731 nova_compute[232433]: 2025-12-06 08:02:37.055 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Acquired lock "refresh_cache-5631bb1a-c5dd-480d-ad0d-b8519f7d2858" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 03:02:37 np0005548731 nova_compute[232433]: 2025-12-06 08:02:37.055 232437 DEBUG nova.network.neutron [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] [instance: 5631bb1a-c5dd-480d-ad0d-b8519f7d2858] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Dec  6 03:02:37 np0005548731 nova_compute[232433]: 2025-12-06 08:02:37.055 232437 DEBUG nova.objects.instance [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lazy-loading 'info_cache' on Instance uuid 5631bb1a-c5dd-480d-ad0d-b8519f7d2858 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 03:02:37 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:02:37 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:02:37 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:02:37.618 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:02:38 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:02:38 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:02:38 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:02:38.960 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:02:39 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e410 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:02:39 np0005548731 nova_compute[232433]: 2025-12-06 08:02:39.300 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:02:39 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:02:39 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:02:39 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:02:39.619 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:02:40 np0005548731 nova_compute[232433]: 2025-12-06 08:02:40.394 232437 DEBUG nova.network.neutron [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] [instance: 5631bb1a-c5dd-480d-ad0d-b8519f7d2858] Updating instance_info_cache with network_info: [{"id": "8f7a7129-4a69-4a4c-8205-c5719b00005b", "address": "fa:16:3e:2d:94:e7", "network": {"id": "6b5f9e0f-ed86-4284-9c14-39494399dc0e", "bridge": "br-int", "label": "tempest-network-smoke--576336473", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.248", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f4735a799c84437b9dd4ea8778ad2fbb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8f7a7129-4a", "ovs_interfaceid": "8f7a7129-4a69-4a4c-8205-c5719b00005b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 03:02:40 np0005548731 nova_compute[232433]: 2025-12-06 08:02:40.414 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Releasing lock "refresh_cache-5631bb1a-c5dd-480d-ad0d-b8519f7d2858" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 03:02:40 np0005548731 nova_compute[232433]: 2025-12-06 08:02:40.414 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] [instance: 5631bb1a-c5dd-480d-ad0d-b8519f7d2858] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Dec  6 03:02:40 np0005548731 nova_compute[232433]: 2025-12-06 08:02:40.415 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:02:40 np0005548731 nova_compute[232433]: 2025-12-06 08:02:40.416 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:02:40 np0005548731 nova_compute[232433]: 2025-12-06 08:02:40.416 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  6 03:02:40 np0005548731 ovn_controller[133927]: 2025-12-06T08:02:40Z|00124|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:2d:94:e7 10.100.0.7
Dec  6 03:02:40 np0005548731 ovn_controller[133927]: 2025-12-06T08:02:40Z|00125|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:2d:94:e7 10.100.0.7
Dec  6 03:02:40 np0005548731 nova_compute[232433]: 2025-12-06 08:02:40.645 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:02:40 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:02:40 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:02:40 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:02:40.963 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:02:41 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:02:41 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:02:41 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:02:41.622 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:02:42 np0005548731 nova_compute[232433]: 2025-12-06 08:02:42.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:02:42 np0005548731 nova_compute[232433]: 2025-12-06 08:02:42.105 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:02:42 np0005548731 nova_compute[232433]: 2025-12-06 08:02:42.106 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:02:42 np0005548731 nova_compute[232433]: 2025-12-06 08:02:42.134 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:02:42 np0005548731 nova_compute[232433]: 2025-12-06 08:02:42.135 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:02:42 np0005548731 nova_compute[232433]: 2025-12-06 08:02:42.135 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:02:42 np0005548731 nova_compute[232433]: 2025-12-06 08:02:42.136 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  6 03:02:42 np0005548731 nova_compute[232433]: 2025-12-06 08:02:42.136 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 03:02:42 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 03:02:42 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2170586488' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 03:02:42 np0005548731 nova_compute[232433]: 2025-12-06 08:02:42.624 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.487s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 03:02:42 np0005548731 nova_compute[232433]: 2025-12-06 08:02:42.719 232437 DEBUG nova.virt.libvirt.driver [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] skipping disk for instance-000000b9 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  6 03:02:42 np0005548731 nova_compute[232433]: 2025-12-06 08:02:42.719 232437 DEBUG nova.virt.libvirt.driver [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] skipping disk for instance-000000b9 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  6 03:02:42 np0005548731 nova_compute[232433]: 2025-12-06 08:02:42.859 232437 WARNING nova.virt.libvirt.driver [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  6 03:02:42 np0005548731 nova_compute[232433]: 2025-12-06 08:02:42.860 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=3966MB free_disk=20.898025512695312GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, 
"label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  6 03:02:42 np0005548731 nova_compute[232433]: 2025-12-06 08:02:42.860 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:02:42 np0005548731 nova_compute[232433]: 2025-12-06 08:02:42.861 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:02:42 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:02:42 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:02:42 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:02:42.966 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:02:42 np0005548731 nova_compute[232433]: 2025-12-06 08:02:42.974 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Instance 5631bb1a-c5dd-480d-ad0d-b8519f7d2858 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  6 03:02:42 np0005548731 nova_compute[232433]: 2025-12-06 08:02:42.974 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  6 03:02:42 np0005548731 nova_compute[232433]: 2025-12-06 08:02:42.974 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  6 03:02:43 np0005548731 nova_compute[232433]: 2025-12-06 08:02:43.018 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 03:02:43 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 03:02:43 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/702947871' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 03:02:43 np0005548731 nova_compute[232433]: 2025-12-06 08:02:43.456 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.438s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 03:02:43 np0005548731 nova_compute[232433]: 2025-12-06 08:02:43.463 232437 DEBUG nova.compute.provider_tree [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Inventory has not changed in ProviderTree for provider: 6d00757a-082f-486d-ae84-869a2ba2e6e7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  6 03:02:43 np0005548731 nova_compute[232433]: 2025-12-06 08:02:43.509 232437 DEBUG nova.scheduler.client.report [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Inventory has not changed for provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  6 03:02:43 np0005548731 nova_compute[232433]: 2025-12-06 08:02:43.547 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  6 03:02:43 np0005548731 nova_compute[232433]: 2025-12-06 08:02:43.548 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.687s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:02:43 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:02:43 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:02:43 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:02:43.626 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:02:43 np0005548731 podman[321695]: 2025-12-06 08:02:43.897354912 +0000 UTC m=+0.056108004 container health_status 26c2ce9bdb83c6db5ccad7f5b2a7cb4e9354ab0235ad2f942ea42c8feda2142b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, 
config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec  6 03:02:43 np0005548731 podman[321697]: 2025-12-06 08:02:43.918146839 +0000 UTC m=+0.067527472 container health_status ae4fd08cdec31d6ae2a6b91ffff7deb6ca5a8375cb52ee4b685cc6f25796824e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Dec  6 03:02:43 np0005548731 podman[321696]: 2025-12-06 08:02:43.941325686 +0000 UTC m=+0.094081421 container health_status a118c3becf56cfbcae208065771ebf10a65162bb7b96389971293e534823fb78 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Dec  6 03:02:44 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e410 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:02:44 np0005548731 nova_compute[232433]: 2025-12-06 08:02:44.302 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:02:44 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:02:44 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:02:44 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:02:44.969 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:02:45 np0005548731 nova_compute[232433]: 2025-12-06 08:02:45.547 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:02:45 np0005548731 nova_compute[232433]: 2025-12-06 08:02:45.547 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:02:45 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:02:45 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:02:45 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:02:45.628 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:02:45 np0005548731 nova_compute[232433]: 2025-12-06 08:02:45.645 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:02:46 np0005548731 nova_compute[232433]: 2025-12-06 08:02:46.105 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:02:46 np0005548731 nova_compute[232433]: 2025-12-06 08:02:46.311 232437 INFO nova.compute.manager [None req-75490dbb-c305-4632-b2ed-5a491650f13d d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 5631bb1a-c5dd-480d-ad0d-b8519f7d2858] Get console output#033[00m
Dec  6 03:02:46 np0005548731 nova_compute[232433]: 2025-12-06 08:02:46.317 261230 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Dec  6 03:02:46 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:02:46 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:02:46 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:02:46.972 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:02:47 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:02:47 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:02:47 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:02:47.630 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:02:48 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:02:48 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:02:48 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:02:48.975 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:02:49 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e410 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:02:49 np0005548731 nova_compute[232433]: 2025-12-06 08:02:49.333 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:02:49 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:02:49 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 03:02:49 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:02:49.632 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 03:02:50 np0005548731 nova_compute[232433]: 2025-12-06 08:02:50.647 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:02:50 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:02:50 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:02:50 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:02:50.978 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:02:51 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:02:51 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:02:51 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:02:51.635 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:02:52 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:02:52 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 03:02:52 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:02:52.981 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 03:02:53 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:02:53 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 03:02:53 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:02:53.637 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 03:02:54 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e410 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:02:54 np0005548731 nova_compute[232433]: 2025-12-06 08:02:54.336 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:02:54 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:02:54 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:02:54 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:02:54.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:02:55 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:02:55 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:02:55 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:02:55.639 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:02:55 np0005548731 nova_compute[232433]: 2025-12-06 08:02:55.649 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:02:56 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e411 e411: 3 total, 3 up, 3 in
Dec  6 03:02:56 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:02:56 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:02:56 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:02:56.988 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:02:57 np0005548731 nova_compute[232433]: 2025-12-06 08:02:57.147 232437 DEBUG oslo_concurrency.lockutils [None req-fdd6af0c-d253-4ca6-a944-6c35ed5cd991 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Acquiring lock "interface-5631bb1a-c5dd-480d-ad0d-b8519f7d2858-None" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:02:57 np0005548731 nova_compute[232433]: 2025-12-06 08:02:57.149 232437 DEBUG oslo_concurrency.lockutils [None req-fdd6af0c-d253-4ca6-a944-6c35ed5cd991 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Lock "interface-5631bb1a-c5dd-480d-ad0d-b8519f7d2858-None" acquired by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:02:57 np0005548731 nova_compute[232433]: 2025-12-06 08:02:57.149 232437 DEBUG nova.objects.instance [None req-fdd6af0c-d253-4ca6-a944-6c35ed5cd991 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Lazy-loading 'flavor' on Instance uuid 5631bb1a-c5dd-480d-ad0d-b8519f7d2858 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 03:02:57 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 03:02:57 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 03:02:57 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  6 03:02:57 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 03:02:57 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  6 03:02:57 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:02:57 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:02:57 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:02:57.641 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:02:58 np0005548731 nova_compute[232433]: 2025-12-06 08:02:58.456 232437 DEBUG nova.objects.instance [None req-fdd6af0c-d253-4ca6-a944-6c35ed5cd991 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Lazy-loading 'pci_requests' on Instance uuid 5631bb1a-c5dd-480d-ad0d-b8519f7d2858 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 03:02:58 np0005548731 nova_compute[232433]: 2025-12-06 08:02:58.472 232437 DEBUG nova.network.neutron [None req-fdd6af0c-d253-4ca6-a944-6c35ed5cd991 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 5631bb1a-c5dd-480d-ad0d-b8519f7d2858] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Dec  6 03:02:58 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:02:58 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:02:58 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:02:58.992 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:02:59 np0005548731 nova_compute[232433]: 2025-12-06 08:02:59.053 232437 DEBUG nova.policy [None req-fdd6af0c-d253-4ca6-a944-6c35ed5cd991 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'd5359905348247d0b9b5b95982e890bb', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'f4735a799c84437b9dd4ea8778ad2fbb', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Dec  6 03:02:59 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e411 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:02:59 np0005548731 nova_compute[232433]: 2025-12-06 08:02:59.395 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:02:59 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:02:59 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:02:59 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:02:59.643 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:03:00 np0005548731 nova_compute[232433]: 2025-12-06 08:03:00.652 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:03:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:03:00.907 143965 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:03:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:03:00.908 143965 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:03:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:03:00.909 143965 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:03:00 np0005548731 nova_compute[232433]: 2025-12-06 08:03:00.971 232437 DEBUG nova.network.neutron [None req-fdd6af0c-d253-4ca6-a944-6c35ed5cd991 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 5631bb1a-c5dd-480d-ad0d-b8519f7d2858] Successfully created port: 837c2676-98c7-4a84-b313-6146c9acfb65 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Dec  6 03:03:00 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:03:00 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:03:00 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:03:00.995 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:03:01 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:03:01 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:03:01 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:03:01.645 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:03:02 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:03:02 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:03:03 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:03:02.997 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:03:03 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:03:03 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:03:03 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:03:03.647 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:03:04 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e411 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:03:04 np0005548731 nova_compute[232433]: 2025-12-06 08:03:04.395 232437 DEBUG nova.network.neutron [None req-fdd6af0c-d253-4ca6-a944-6c35ed5cd991 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 5631bb1a-c5dd-480d-ad0d-b8519f7d2858] Successfully updated port: 837c2676-98c7-4a84-b313-6146c9acfb65 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Dec  6 03:03:04 np0005548731 nova_compute[232433]: 2025-12-06 08:03:04.397 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:03:04 np0005548731 nova_compute[232433]: 2025-12-06 08:03:04.435 232437 DEBUG oslo_concurrency.lockutils [None req-fdd6af0c-d253-4ca6-a944-6c35ed5cd991 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Acquiring lock "refresh_cache-5631bb1a-c5dd-480d-ad0d-b8519f7d2858" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 03:03:04 np0005548731 nova_compute[232433]: 2025-12-06 08:03:04.435 232437 DEBUG oslo_concurrency.lockutils [None req-fdd6af0c-d253-4ca6-a944-6c35ed5cd991 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Acquired lock "refresh_cache-5631bb1a-c5dd-480d-ad0d-b8519f7d2858" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 03:03:04 np0005548731 nova_compute[232433]: 2025-12-06 08:03:04.435 232437 DEBUG nova.network.neutron [None req-fdd6af0c-d253-4ca6-a944-6c35ed5cd991 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 5631bb1a-c5dd-480d-ad0d-b8519f7d2858] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec  6 03:03:04 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 03:03:04 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 03:03:04 np0005548731 nova_compute[232433]: 2025-12-06 08:03:04.646 232437 DEBUG nova.compute.manager [req-36720dfa-ad70-4ada-8e0b-bb1aee5f023e req-bdb604db-b76c-4775-9aa4-37a91b2ad5f7 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 5631bb1a-c5dd-480d-ad0d-b8519f7d2858] Received event network-changed-837c2676-98c7-4a84-b313-6146c9acfb65 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 03:03:04 np0005548731 nova_compute[232433]: 2025-12-06 08:03:04.646 232437 DEBUG nova.compute.manager [req-36720dfa-ad70-4ada-8e0b-bb1aee5f023e req-bdb604db-b76c-4775-9aa4-37a91b2ad5f7 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 5631bb1a-c5dd-480d-ad0d-b8519f7d2858] Refreshing instance network info cache due to event network-changed-837c2676-98c7-4a84-b313-6146c9acfb65. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  6 03:03:04 np0005548731 nova_compute[232433]: 2025-12-06 08:03:04.646 232437 DEBUG oslo_concurrency.lockutils [req-36720dfa-ad70-4ada-8e0b-bb1aee5f023e req-bdb604db-b76c-4775-9aa4-37a91b2ad5f7 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-5631bb1a-c5dd-480d-ad0d-b8519f7d2858" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 03:03:05 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:03:05 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:03:05 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:03:05.001 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:03:05 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:03:05 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:03:05 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:03:05.650 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:03:05 np0005548731 nova_compute[232433]: 2025-12-06 08:03:05.652 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:03:07 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:03:07 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:03:07 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:03:07.002 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:03:07 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:03:07 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:03:07 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:03:07.652 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:03:08 np0005548731 nova_compute[232433]: 2025-12-06 08:03:08.456 232437 DEBUG nova.network.neutron [None req-fdd6af0c-d253-4ca6-a944-6c35ed5cd991 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 5631bb1a-c5dd-480d-ad0d-b8519f7d2858] Updating instance_info_cache with network_info: [{"id": "8f7a7129-4a69-4a4c-8205-c5719b00005b", "address": "fa:16:3e:2d:94:e7", "network": {"id": "6b5f9e0f-ed86-4284-9c14-39494399dc0e", "bridge": "br-int", "label": "tempest-network-smoke--576336473", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.248", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f4735a799c84437b9dd4ea8778ad2fbb", "mtu": null, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8f7a7129-4a", "ovs_interfaceid": "8f7a7129-4a69-4a4c-8205-c5719b00005b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "837c2676-98c7-4a84-b313-6146c9acfb65", "address": "fa:16:3e:84:14:0f", "network": {"id": "07739e87-9923-4e04-a051-ba5152c0258f", "bridge": "br-int", "label": "tempest-network-smoke--1533215199", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.21", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": 
"f4735a799c84437b9dd4ea8778ad2fbb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap837c2676-98", "ovs_interfaceid": "837c2676-98c7-4a84-b313-6146c9acfb65", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 03:03:08 np0005548731 nova_compute[232433]: 2025-12-06 08:03:08.486 232437 DEBUG oslo_concurrency.lockutils [None req-fdd6af0c-d253-4ca6-a944-6c35ed5cd991 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Releasing lock "refresh_cache-5631bb1a-c5dd-480d-ad0d-b8519f7d2858" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 03:03:08 np0005548731 nova_compute[232433]: 2025-12-06 08:03:08.487 232437 DEBUG oslo_concurrency.lockutils [req-36720dfa-ad70-4ada-8e0b-bb1aee5f023e req-bdb604db-b76c-4775-9aa4-37a91b2ad5f7 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-5631bb1a-c5dd-480d-ad0d-b8519f7d2858" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 03:03:08 np0005548731 nova_compute[232433]: 2025-12-06 08:03:08.487 232437 DEBUG nova.network.neutron [req-36720dfa-ad70-4ada-8e0b-bb1aee5f023e req-bdb604db-b76c-4775-9aa4-37a91b2ad5f7 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 5631bb1a-c5dd-480d-ad0d-b8519f7d2858] Refreshing network info cache for port 837c2676-98c7-4a84-b313-6146c9acfb65 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  6 03:03:08 np0005548731 nova_compute[232433]: 2025-12-06 08:03:08.490 232437 DEBUG nova.virt.libvirt.vif [None req-fdd6af0c-d253-4ca6-a944-6c35ed5cd991 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T08:02:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1271350420',display_name='tempest-TestNetworkBasicOps-server-1271350420',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1271350420',id=185,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBE77FYJgDkrxzG8W4+xLXhPOdFEca45Bj8zAeRPk4LZkIHRLpB+9sImdEg1wjzkrSrzVyN00VUwIMIwYuEfIAoq3Nx58veJTCciIHJ/EgBnAVEqFMMrhoURvjqeCMkMzww==',key_name='tempest-TestNetworkBasicOps-70649793',keypairs=<?>,launch_index=0,launched_at=2025-12-06T08:02:26Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='f4735a799c84437b9dd4ea8778ad2fbb',ramdisk_id='',reservation_id='r-vjv0p4ap',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1435471576',owner_user_name='tempest-TestNetworkBasicOps-1435471576-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T08:02:27Z,user_data=None,user_id='d5359905348247d0b9b5b95982e890bb',uuid=5631bb1a-c5dd-480d-ad0d-b8519f7d2858,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "837c2676-98c7-4a84-b313-6146c9acfb65", "address": "fa:16:3e:84:14:0f", "network": {"id": "07739e87-9923-4e04-a051-ba5152c0258f", "bridge": "br-int", "label": "tempest-network-smoke--1533215199", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.21", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f4735a799c84437b9dd4ea8778ad2fbb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap837c2676-98", "ovs_interfaceid": "837c2676-98c7-4a84-b313-6146c9acfb65", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec  6 03:03:08 np0005548731 nova_compute[232433]: 2025-12-06 08:03:08.490 232437 DEBUG nova.network.os_vif_util [None req-fdd6af0c-d253-4ca6-a944-6c35ed5cd991 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Converting VIF {"id": "837c2676-98c7-4a84-b313-6146c9acfb65", "address": "fa:16:3e:84:14:0f", "network": {"id": "07739e87-9923-4e04-a051-ba5152c0258f", "bridge": "br-int", "label": "tempest-network-smoke--1533215199", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.21", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f4735a799c84437b9dd4ea8778ad2fbb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap837c2676-98", "ovs_interfaceid": "837c2676-98c7-4a84-b313-6146c9acfb65", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 03:03:08 np0005548731 nova_compute[232433]: 2025-12-06 08:03:08.491 232437 DEBUG nova.network.os_vif_util [None req-fdd6af0c-d253-4ca6-a944-6c35ed5cd991 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:84:14:0f,bridge_name='br-int',has_traffic_filtering=True,id=837c2676-98c7-4a84-b313-6146c9acfb65,network=Network(07739e87-9923-4e04-a051-ba5152c0258f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap837c2676-98') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 03:03:08 np0005548731 nova_compute[232433]: 2025-12-06 08:03:08.491 232437 DEBUG os_vif [None req-fdd6af0c-d253-4ca6-a944-6c35ed5cd991 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:84:14:0f,bridge_name='br-int',has_traffic_filtering=True,id=837c2676-98c7-4a84-b313-6146c9acfb65,network=Network(07739e87-9923-4e04-a051-ba5152c0258f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap837c2676-98') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec  6 03:03:08 np0005548731 nova_compute[232433]: 2025-12-06 08:03:08.492 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:03:08 np0005548731 nova_compute[232433]: 2025-12-06 08:03:08.492 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 03:03:08 np0005548731 nova_compute[232433]: 2025-12-06 08:03:08.492 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  6 03:03:08 np0005548731 nova_compute[232433]: 2025-12-06 08:03:08.495 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:03:08 np0005548731 nova_compute[232433]: 2025-12-06 08:03:08.495 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap837c2676-98, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 03:03:08 np0005548731 nova_compute[232433]: 2025-12-06 08:03:08.496 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap837c2676-98, col_values=(('external_ids', {'iface-id': '837c2676-98c7-4a84-b313-6146c9acfb65', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:84:14:0f', 'vm-uuid': '5631bb1a-c5dd-480d-ad0d-b8519f7d2858'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 03:03:08 np0005548731 NetworkManager[49182]: <info>  [1765008188.4986] manager: (tap837c2676-98): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/431)
Dec  6 03:03:08 np0005548731 nova_compute[232433]: 2025-12-06 08:03:08.498 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:03:08 np0005548731 nova_compute[232433]: 2025-12-06 08:03:08.500 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  6 03:03:08 np0005548731 nova_compute[232433]: 2025-12-06 08:03:08.503 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:03:08 np0005548731 nova_compute[232433]: 2025-12-06 08:03:08.504 232437 INFO os_vif [None req-fdd6af0c-d253-4ca6-a944-6c35ed5cd991 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:84:14:0f,bridge_name='br-int',has_traffic_filtering=True,id=837c2676-98c7-4a84-b313-6146c9acfb65,network=Network(07739e87-9923-4e04-a051-ba5152c0258f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap837c2676-98')#033[00m
Dec  6 03:03:08 np0005548731 nova_compute[232433]: 2025-12-06 08:03:08.505 232437 DEBUG nova.virt.libvirt.vif [None req-fdd6af0c-d253-4ca6-a944-6c35ed5cd991 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T08:02:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1271350420',display_name='tempest-TestNetworkBasicOps-server-1271350420',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1271350420',id=185,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBE77FYJgDkrxzG8W4+xLXhPOdFEca45Bj8zAeRPk4LZkIHRLpB+9sImdEg1wjzkrSrzVyN00VUwIMIwYuEfIAoq3Nx58veJTCciIHJ/EgBnAVEqFMMrhoURvjqeCMkMzww==',key_name='tempest-TestNetworkBasicOps-70649793',keypairs=<?>,launch_index=0,launched_at=2025-12-06T08:02:26Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='f4735a799c84437b9dd4ea8778ad2fbb',ramdisk_id='',reservation_id='r-vjv0p4ap',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1435471576',owner_user_name='tempest-TestNetworkBasicOps-1435471576-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T08:02:27Z,user_data=None,user_id='d5359905348247d0b9b5b95982e890bb',uuid=5631bb1a-c5dd-480d-ad0d-b8519f7d2858,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "837c2676-98c7-4a84-b313-6146c9acfb65", "address": "fa:16:3e:84:14:0f", "network": {"id": "07739e87-9923-4e04-a051-ba5152c0258f", "bridge": "br-int", "label": "tempest-network-smoke--1533215199", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.21", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f4735a799c84437b9dd4ea8778ad2fbb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap837c2676-98", "ovs_interfaceid": "837c2676-98c7-4a84-b313-6146c9acfb65", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec  6 03:03:08 np0005548731 nova_compute[232433]: 2025-12-06 08:03:08.505 232437 DEBUG nova.network.os_vif_util [None req-fdd6af0c-d253-4ca6-a944-6c35ed5cd991 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Converting VIF {"id": "837c2676-98c7-4a84-b313-6146c9acfb65", "address": "fa:16:3e:84:14:0f", "network": {"id": "07739e87-9923-4e04-a051-ba5152c0258f", "bridge": "br-int", "label": "tempest-network-smoke--1533215199", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.21", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f4735a799c84437b9dd4ea8778ad2fbb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap837c2676-98", "ovs_interfaceid": "837c2676-98c7-4a84-b313-6146c9acfb65", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 03:03:08 np0005548731 nova_compute[232433]: 2025-12-06 08:03:08.506 232437 DEBUG nova.network.os_vif_util [None req-fdd6af0c-d253-4ca6-a944-6c35ed5cd991 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:84:14:0f,bridge_name='br-int',has_traffic_filtering=True,id=837c2676-98c7-4a84-b313-6146c9acfb65,network=Network(07739e87-9923-4e04-a051-ba5152c0258f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap837c2676-98') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 03:03:08 np0005548731 nova_compute[232433]: 2025-12-06 08:03:08.508 232437 DEBUG nova.virt.libvirt.guest [None req-fdd6af0c-d253-4ca6-a944-6c35ed5cd991 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] attach device xml: <interface type="ethernet">
Dec  6 03:03:08 np0005548731 nova_compute[232433]:  <mac address="fa:16:3e:84:14:0f"/>
Dec  6 03:03:08 np0005548731 nova_compute[232433]:  <model type="virtio"/>
Dec  6 03:03:08 np0005548731 nova_compute[232433]:  <driver name="vhost" rx_queue_size="512"/>
Dec  6 03:03:08 np0005548731 nova_compute[232433]:  <mtu size="1442"/>
Dec  6 03:03:08 np0005548731 nova_compute[232433]:  <target dev="tap837c2676-98"/>
Dec  6 03:03:08 np0005548731 nova_compute[232433]: </interface>
Dec  6 03:03:08 np0005548731 nova_compute[232433]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339#033[00m
Dec  6 03:03:08 np0005548731 kernel: tap837c2676-98: entered promiscuous mode
Dec  6 03:03:08 np0005548731 ovn_controller[133927]: 2025-12-06T08:03:08Z|00936|binding|INFO|Claiming lport 837c2676-98c7-4a84-b313-6146c9acfb65 for this chassis.
Dec  6 03:03:08 np0005548731 ovn_controller[133927]: 2025-12-06T08:03:08Z|00937|binding|INFO|837c2676-98c7-4a84-b313-6146c9acfb65: Claiming fa:16:3e:84:14:0f 10.100.0.21
Dec  6 03:03:08 np0005548731 nova_compute[232433]: 2025-12-06 08:03:08.520 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:03:08 np0005548731 NetworkManager[49182]: <info>  [1765008188.5215] manager: (tap837c2676-98): new Tun device (/org/freedesktop/NetworkManager/Devices/432)
Dec  6 03:03:08 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:03:08.532 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:84:14:0f 10.100.0.21'], port_security=['fa:16:3e:84:14:0f 10.100.0.21'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.21/28', 'neutron:device_id': '5631bb1a-c5dd-480d-ad0d-b8519f7d2858', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-07739e87-9923-4e04-a051-ba5152c0258f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f4735a799c84437b9dd4ea8778ad2fbb', 'neutron:revision_number': '2', 'neutron:security_group_ids': '7c0486dd-d5db-4e6a-a51e-94ac8eaed290', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a0a1ae33-3d1f-4bc5-97b2-106300889e6f, chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], logical_port=837c2676-98c7-4a84-b313-6146c9acfb65) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 03:03:08 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:03:08.534 143965 INFO neutron.agent.ovn.metadata.agent [-] Port 837c2676-98c7-4a84-b313-6146c9acfb65 in datapath 07739e87-9923-4e04-a051-ba5152c0258f bound to our chassis#033[00m
Dec  6 03:03:08 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:03:08.536 143965 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 07739e87-9923-4e04-a051-ba5152c0258f#033[00m
Dec  6 03:03:08 np0005548731 systemd-udevd[322011]: Network interface NamePolicy= disabled on kernel command line.
Dec  6 03:03:08 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:03:08.550 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[974afb49-6a86-4b99-9cd1-02978f75851a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:03:08 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:03:08.551 143965 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap07739e87-91 in ovnmeta-07739e87-9923-4e04-a051-ba5152c0258f namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Dec  6 03:03:08 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:03:08.553 236668 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap07739e87-90 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Dec  6 03:03:08 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:03:08.553 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[7eef46d0-0157-4751-9069-6529039167b4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:03:08 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:03:08.554 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[dd9a2f71-28b3-4734-9414-3e3bc38055b4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:03:08 np0005548731 nova_compute[232433]: 2025-12-06 08:03:08.559 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:03:08 np0005548731 ovn_controller[133927]: 2025-12-06T08:03:08Z|00938|binding|INFO|Setting lport 837c2676-98c7-4a84-b313-6146c9acfb65 ovn-installed in OVS
Dec  6 03:03:08 np0005548731 ovn_controller[133927]: 2025-12-06T08:03:08Z|00939|binding|INFO|Setting lport 837c2676-98c7-4a84-b313-6146c9acfb65 up in Southbound
Dec  6 03:03:08 np0005548731 nova_compute[232433]: 2025-12-06 08:03:08.563 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:03:08 np0005548731 NetworkManager[49182]: <info>  [1765008188.5636] device (tap837c2676-98): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec  6 03:03:08 np0005548731 NetworkManager[49182]: <info>  [1765008188.5650] device (tap837c2676-98): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec  6 03:03:08 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:03:08.569 144079 DEBUG oslo.privsep.daemon [-] privsep: reply[085652f0-fac7-4a65-855b-e932f8fed729]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:03:08 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:03:08.597 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[cce00473-c3d8-4f7a-887a-c7cdf2114af2]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:03:08 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:03:08.622 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[fd7fdb56-ccc3-49e1-ad15-def0c443a69b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:03:08 np0005548731 NetworkManager[49182]: <info>  [1765008188.6282] manager: (tap07739e87-90): new Veth device (/org/freedesktop/NetworkManager/Devices/433)
Dec  6 03:03:08 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:03:08.628 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[12a3e5c6-a712-4e50-8c49-69786180f7ac]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:03:08 np0005548731 nova_compute[232433]: 2025-12-06 08:03:08.649 232437 DEBUG nova.virt.libvirt.driver [None req-fdd6af0c-d253-4ca6-a944-6c35ed5cd991 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  6 03:03:08 np0005548731 nova_compute[232433]: 2025-12-06 08:03:08.650 232437 DEBUG nova.virt.libvirt.driver [None req-fdd6af0c-d253-4ca6-a944-6c35ed5cd991 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  6 03:03:08 np0005548731 nova_compute[232433]: 2025-12-06 08:03:08.650 232437 DEBUG nova.virt.libvirt.driver [None req-fdd6af0c-d253-4ca6-a944-6c35ed5cd991 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] No VIF found with MAC fa:16:3e:2d:94:e7, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec  6 03:03:08 np0005548731 nova_compute[232433]: 2025-12-06 08:03:08.650 232437 DEBUG nova.virt.libvirt.driver [None req-fdd6af0c-d253-4ca6-a944-6c35ed5cd991 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] No VIF found with MAC fa:16:3e:84:14:0f, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec  6 03:03:08 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:03:08.663 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[58192498-99cb-4c4b-a283-975e1ac3d9cb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:03:08 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:03:08.666 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[855c3208-47c1-4b15-ac02-76bb56f67318]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:03:08 np0005548731 nova_compute[232433]: 2025-12-06 08:03:08.675 232437 DEBUG nova.virt.libvirt.guest [None req-fdd6af0c-d253-4ca6-a944-6c35ed5cd991 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec  6 03:03:08 np0005548731 nova_compute[232433]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec  6 03:03:08 np0005548731 nova_compute[232433]:  <nova:name>tempest-TestNetworkBasicOps-server-1271350420</nova:name>
Dec  6 03:03:08 np0005548731 nova_compute[232433]:  <nova:creationTime>2025-12-06 08:03:08</nova:creationTime>
Dec  6 03:03:08 np0005548731 nova_compute[232433]:  <nova:flavor name="m1.nano">
Dec  6 03:03:08 np0005548731 nova_compute[232433]:    <nova:memory>128</nova:memory>
Dec  6 03:03:08 np0005548731 nova_compute[232433]:    <nova:disk>1</nova:disk>
Dec  6 03:03:08 np0005548731 nova_compute[232433]:    <nova:swap>0</nova:swap>
Dec  6 03:03:08 np0005548731 nova_compute[232433]:    <nova:ephemeral>0</nova:ephemeral>
Dec  6 03:03:08 np0005548731 nova_compute[232433]:    <nova:vcpus>1</nova:vcpus>
Dec  6 03:03:08 np0005548731 nova_compute[232433]:  </nova:flavor>
Dec  6 03:03:08 np0005548731 nova_compute[232433]:  <nova:owner>
Dec  6 03:03:08 np0005548731 nova_compute[232433]:    <nova:user uuid="d5359905348247d0b9b5b95982e890bb">tempest-TestNetworkBasicOps-1435471576-project-member</nova:user>
Dec  6 03:03:08 np0005548731 nova_compute[232433]:    <nova:project uuid="f4735a799c84437b9dd4ea8778ad2fbb">tempest-TestNetworkBasicOps-1435471576</nova:project>
Dec  6 03:03:08 np0005548731 nova_compute[232433]:  </nova:owner>
Dec  6 03:03:08 np0005548731 nova_compute[232433]:  <nova:root type="image" uuid="6efab05d-c7cf-4770-a5c3-c806a2739063"/>
Dec  6 03:03:08 np0005548731 nova_compute[232433]:  <nova:ports>
Dec  6 03:03:08 np0005548731 nova_compute[232433]:    <nova:port uuid="8f7a7129-4a69-4a4c-8205-c5719b00005b">
Dec  6 03:03:08 np0005548731 nova_compute[232433]:      <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Dec  6 03:03:08 np0005548731 nova_compute[232433]:    </nova:port>
Dec  6 03:03:08 np0005548731 nova_compute[232433]:    <nova:port uuid="837c2676-98c7-4a84-b313-6146c9acfb65">
Dec  6 03:03:08 np0005548731 nova_compute[232433]:      <nova:ip type="fixed" address="10.100.0.21" ipVersion="4"/>
Dec  6 03:03:08 np0005548731 nova_compute[232433]:    </nova:port>
Dec  6 03:03:08 np0005548731 nova_compute[232433]:  </nova:ports>
Dec  6 03:03:08 np0005548731 nova_compute[232433]: </nova:instance>
Dec  6 03:03:08 np0005548731 nova_compute[232433]: set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359#033[00m
Dec  6 03:03:08 np0005548731 NetworkManager[49182]: <info>  [1765008188.6904] device (tap07739e87-90): carrier: link connected
Dec  6 03:03:08 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:03:08.696 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[394e6941-5385-4d08-a1a2-6aafd882e40b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:03:08 np0005548731 nova_compute[232433]: 2025-12-06 08:03:08.712 232437 DEBUG oslo_concurrency.lockutils [None req-fdd6af0c-d253-4ca6-a944-6c35ed5cd991 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Lock "interface-5631bb1a-c5dd-480d-ad0d-b8519f7d2858-None" "released" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: held 11.563s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:03:08 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:03:08.711 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[4153d782-a32c-4895-9467-05d0159aa017]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap07739e87-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:7e:f1:33'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 284], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 845069, 'reachable_time': 26703, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 322037, 'error': None, 'target': 'ovnmeta-07739e87-9923-4e04-a051-ba5152c0258f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:03:08 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:03:08.727 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[c45b8e01-687f-4c27-ab54-b65ed109a535]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe7e:f133'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 845069, 'tstamp': 845069}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 322038, 'error': None, 'target': 'ovnmeta-07739e87-9923-4e04-a051-ba5152c0258f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:03:08 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:03:08.744 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[a4ac0dd1-c900-4eeb-84e3-e275f235d82e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap07739e87-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:7e:f1:33'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 284], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 845069, 'reachable_time': 26703, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 322039, 'error': None, 'target': 'ovnmeta-07739e87-9923-4e04-a051-ba5152c0258f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:03:08 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:03:08.770 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[030cd7dd-d557-4fba-af1e-16044c1b6792]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:03:08 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:03:08.822 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[341a12b7-0d25-4e5a-ac10-29c56a86b896]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:03:08 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:03:08.823 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap07739e87-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 03:03:08 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:03:08.823 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  6 03:03:08 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:03:08.824 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap07739e87-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 03:03:08 np0005548731 nova_compute[232433]: 2025-12-06 08:03:08.825 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:03:08 np0005548731 NetworkManager[49182]: <info>  [1765008188.8263] manager: (tap07739e87-90): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/434)
Dec  6 03:03:08 np0005548731 kernel: tap07739e87-90: entered promiscuous mode
Dec  6 03:03:08 np0005548731 nova_compute[232433]: 2025-12-06 08:03:08.827 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:03:08 np0005548731 nova_compute[232433]: 2025-12-06 08:03:08.829 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:03:08 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:03:08.828 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap07739e87-90, col_values=(('external_ids', {'iface-id': '548a6088-8bfb-4e8b-9d99-5d56c34dc577'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 03:03:08 np0005548731 ovn_controller[133927]: 2025-12-06T08:03:08Z|00940|binding|INFO|Releasing lport 548a6088-8bfb-4e8b-9d99-5d56c34dc577 from this chassis (sb_readonly=0)
Dec  6 03:03:08 np0005548731 nova_compute[232433]: 2025-12-06 08:03:08.845 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:03:08 np0005548731 nova_compute[232433]: 2025-12-06 08:03:08.846 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:03:08 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:03:08.847 143965 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/07739e87-9923-4e04-a051-ba5152c0258f.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/07739e87-9923-4e04-a051-ba5152c0258f.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Dec  6 03:03:08 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:03:08.847 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[92186ee3-d680-4b27-95f5-e91bc6e7d8b6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:03:08 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:03:08.848 143965 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec  6 03:03:08 np0005548731 ovn_metadata_agent[143960]: global
Dec  6 03:03:08 np0005548731 ovn_metadata_agent[143960]:    log         /dev/log local0 debug
Dec  6 03:03:08 np0005548731 ovn_metadata_agent[143960]:    log-tag     haproxy-metadata-proxy-07739e87-9923-4e04-a051-ba5152c0258f
Dec  6 03:03:08 np0005548731 ovn_metadata_agent[143960]:    user        root
Dec  6 03:03:08 np0005548731 ovn_metadata_agent[143960]:    group       root
Dec  6 03:03:08 np0005548731 ovn_metadata_agent[143960]:    maxconn     1024
Dec  6 03:03:08 np0005548731 ovn_metadata_agent[143960]:    pidfile     /var/lib/neutron/external/pids/07739e87-9923-4e04-a051-ba5152c0258f.pid.haproxy
Dec  6 03:03:08 np0005548731 ovn_metadata_agent[143960]:    daemon
Dec  6 03:03:08 np0005548731 ovn_metadata_agent[143960]: 
Dec  6 03:03:08 np0005548731 ovn_metadata_agent[143960]: defaults
Dec  6 03:03:08 np0005548731 ovn_metadata_agent[143960]:    log global
Dec  6 03:03:08 np0005548731 ovn_metadata_agent[143960]:    mode http
Dec  6 03:03:08 np0005548731 ovn_metadata_agent[143960]:    option httplog
Dec  6 03:03:08 np0005548731 ovn_metadata_agent[143960]:    option dontlognull
Dec  6 03:03:08 np0005548731 ovn_metadata_agent[143960]:    option http-server-close
Dec  6 03:03:08 np0005548731 ovn_metadata_agent[143960]:    option forwardfor
Dec  6 03:03:08 np0005548731 ovn_metadata_agent[143960]:    retries                 3
Dec  6 03:03:08 np0005548731 ovn_metadata_agent[143960]:    timeout http-request    30s
Dec  6 03:03:08 np0005548731 ovn_metadata_agent[143960]:    timeout connect         30s
Dec  6 03:03:08 np0005548731 ovn_metadata_agent[143960]:    timeout client          32s
Dec  6 03:03:08 np0005548731 ovn_metadata_agent[143960]:    timeout server          32s
Dec  6 03:03:08 np0005548731 ovn_metadata_agent[143960]:    timeout http-keep-alive 30s
Dec  6 03:03:08 np0005548731 ovn_metadata_agent[143960]: 
Dec  6 03:03:08 np0005548731 ovn_metadata_agent[143960]: 
Dec  6 03:03:08 np0005548731 ovn_metadata_agent[143960]: listen listener
Dec  6 03:03:08 np0005548731 ovn_metadata_agent[143960]:    bind 169.254.169.254:80
Dec  6 03:03:08 np0005548731 ovn_metadata_agent[143960]:    server metadata /var/lib/neutron/metadata_proxy
Dec  6 03:03:08 np0005548731 ovn_metadata_agent[143960]:    http-request add-header X-OVN-Network-ID 07739e87-9923-4e04-a051-ba5152c0258f
Dec  6 03:03:08 np0005548731 ovn_metadata_agent[143960]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Dec  6 03:03:08 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:03:08.849 143965 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-07739e87-9923-4e04-a051-ba5152c0258f', 'env', 'PROCESS_TAG=haproxy-07739e87-9923-4e04-a051-ba5152c0258f', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/07739e87-9923-4e04-a051-ba5152c0258f.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Dec  6 03:03:09 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:03:09 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:03:09 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:03:09.005 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:03:09 np0005548731 podman[322121]: 2025-12-06 08:03:09.199287423 +0000 UTC m=+0.051258375 container create d4828f0f99d79d15f10b1cac5292004219e1a34ffaebd27d4f2a821a79bf9808 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-07739e87-9923-4e04-a051-ba5152c0258f, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2)
Dec  6 03:03:09 np0005548731 systemd[1]: Started libpod-conmon-d4828f0f99d79d15f10b1cac5292004219e1a34ffaebd27d4f2a821a79bf9808.scope.
Dec  6 03:03:09 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e411 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:03:09 np0005548731 systemd[1]: Started libcrun container.
Dec  6 03:03:09 np0005548731 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9f9d102acecce32236d33ffaf73a4d3c470ffaaaeac98e8c21c8fe09e601d2c9/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec  6 03:03:09 np0005548731 podman[322121]: 2025-12-06 08:03:09.170061448 +0000 UTC m=+0.022032450 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec  6 03:03:09 np0005548731 podman[322121]: 2025-12-06 08:03:09.272566764 +0000 UTC m=+0.124537716 container init d4828f0f99d79d15f10b1cac5292004219e1a34ffaebd27d4f2a821a79bf9808 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-07739e87-9923-4e04-a051-ba5152c0258f, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Dec  6 03:03:09 np0005548731 podman[322121]: 2025-12-06 08:03:09.27767631 +0000 UTC m=+0.129647262 container start d4828f0f99d79d15f10b1cac5292004219e1a34ffaebd27d4f2a821a79bf9808 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-07739e87-9923-4e04-a051-ba5152c0258f, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec  6 03:03:09 np0005548731 neutron-haproxy-ovnmeta-07739e87-9923-4e04-a051-ba5152c0258f[322136]: [NOTICE]   (322140) : New worker (322142) forked
Dec  6 03:03:09 np0005548731 neutron-haproxy-ovnmeta-07739e87-9923-4e04-a051-ba5152c0258f[322136]: [NOTICE]   (322140) : Loading success.
Dec  6 03:03:09 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:03:09 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:03:09 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:03:09.654 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:03:10 np0005548731 ovn_controller[133927]: 2025-12-06T08:03:10Z|00126|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:84:14:0f 10.100.0.21
Dec  6 03:03:10 np0005548731 ovn_controller[133927]: 2025-12-06T08:03:10Z|00127|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:84:14:0f 10.100.0.21
Dec  6 03:03:10 np0005548731 nova_compute[232433]: 2025-12-06 08:03:10.654 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:03:10 np0005548731 nova_compute[232433]: 2025-12-06 08:03:10.976 232437 DEBUG nova.compute.manager [req-482411ff-1634-409a-8fb8-2500fc3c1671 req-37f0df25-fffc-4d8a-898b-a6f25f6002f9 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 5631bb1a-c5dd-480d-ad0d-b8519f7d2858] Received event network-vif-plugged-837c2676-98c7-4a84-b313-6146c9acfb65 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 03:03:10 np0005548731 nova_compute[232433]: 2025-12-06 08:03:10.977 232437 DEBUG oslo_concurrency.lockutils [req-482411ff-1634-409a-8fb8-2500fc3c1671 req-37f0df25-fffc-4d8a-898b-a6f25f6002f9 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "5631bb1a-c5dd-480d-ad0d-b8519f7d2858-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:03:10 np0005548731 nova_compute[232433]: 2025-12-06 08:03:10.977 232437 DEBUG oslo_concurrency.lockutils [req-482411ff-1634-409a-8fb8-2500fc3c1671 req-37f0df25-fffc-4d8a-898b-a6f25f6002f9 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "5631bb1a-c5dd-480d-ad0d-b8519f7d2858-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:03:10 np0005548731 nova_compute[232433]: 2025-12-06 08:03:10.977 232437 DEBUG oslo_concurrency.lockutils [req-482411ff-1634-409a-8fb8-2500fc3c1671 req-37f0df25-fffc-4d8a-898b-a6f25f6002f9 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "5631bb1a-c5dd-480d-ad0d-b8519f7d2858-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:03:10 np0005548731 nova_compute[232433]: 2025-12-06 08:03:10.977 232437 DEBUG nova.compute.manager [req-482411ff-1634-409a-8fb8-2500fc3c1671 req-37f0df25-fffc-4d8a-898b-a6f25f6002f9 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 5631bb1a-c5dd-480d-ad0d-b8519f7d2858] No waiting events found dispatching network-vif-plugged-837c2676-98c7-4a84-b313-6146c9acfb65 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 03:03:10 np0005548731 nova_compute[232433]: 2025-12-06 08:03:10.977 232437 WARNING nova.compute.manager [req-482411ff-1634-409a-8fb8-2500fc3c1671 req-37f0df25-fffc-4d8a-898b-a6f25f6002f9 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 5631bb1a-c5dd-480d-ad0d-b8519f7d2858] Received unexpected event network-vif-plugged-837c2676-98c7-4a84-b313-6146c9acfb65 for instance with vm_state active and task_state None.#033[00m
Dec  6 03:03:10 np0005548731 nova_compute[232433]: 2025-12-06 08:03:10.977 232437 DEBUG nova.compute.manager [req-482411ff-1634-409a-8fb8-2500fc3c1671 req-37f0df25-fffc-4d8a-898b-a6f25f6002f9 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 5631bb1a-c5dd-480d-ad0d-b8519f7d2858] Received event network-vif-plugged-837c2676-98c7-4a84-b313-6146c9acfb65 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 03:03:10 np0005548731 nova_compute[232433]: 2025-12-06 08:03:10.978 232437 DEBUG oslo_concurrency.lockutils [req-482411ff-1634-409a-8fb8-2500fc3c1671 req-37f0df25-fffc-4d8a-898b-a6f25f6002f9 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "5631bb1a-c5dd-480d-ad0d-b8519f7d2858-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:03:10 np0005548731 nova_compute[232433]: 2025-12-06 08:03:10.978 232437 DEBUG oslo_concurrency.lockutils [req-482411ff-1634-409a-8fb8-2500fc3c1671 req-37f0df25-fffc-4d8a-898b-a6f25f6002f9 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "5631bb1a-c5dd-480d-ad0d-b8519f7d2858-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:03:10 np0005548731 nova_compute[232433]: 2025-12-06 08:03:10.978 232437 DEBUG oslo_concurrency.lockutils [req-482411ff-1634-409a-8fb8-2500fc3c1671 req-37f0df25-fffc-4d8a-898b-a6f25f6002f9 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "5631bb1a-c5dd-480d-ad0d-b8519f7d2858-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:03:10 np0005548731 nova_compute[232433]: 2025-12-06 08:03:10.978 232437 DEBUG nova.compute.manager [req-482411ff-1634-409a-8fb8-2500fc3c1671 req-37f0df25-fffc-4d8a-898b-a6f25f6002f9 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 5631bb1a-c5dd-480d-ad0d-b8519f7d2858] No waiting events found dispatching network-vif-plugged-837c2676-98c7-4a84-b313-6146c9acfb65 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 03:03:10 np0005548731 nova_compute[232433]: 2025-12-06 08:03:10.978 232437 WARNING nova.compute.manager [req-482411ff-1634-409a-8fb8-2500fc3c1671 req-37f0df25-fffc-4d8a-898b-a6f25f6002f9 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 5631bb1a-c5dd-480d-ad0d-b8519f7d2858] Received unexpected event network-vif-plugged-837c2676-98c7-4a84-b313-6146c9acfb65 for instance with vm_state active and task_state None.#033[00m
Dec  6 03:03:11 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:03:11 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:03:11 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:03:11.008 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:03:11 np0005548731 nova_compute[232433]: 2025-12-06 08:03:11.166 232437 DEBUG oslo_concurrency.lockutils [None req-cca8475b-494e-4e87-a02c-71844362d792 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Acquiring lock "interface-5631bb1a-c5dd-480d-ad0d-b8519f7d2858-837c2676-98c7-4a84-b313-6146c9acfb65" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:03:11 np0005548731 nova_compute[232433]: 2025-12-06 08:03:11.167 232437 DEBUG oslo_concurrency.lockutils [None req-cca8475b-494e-4e87-a02c-71844362d792 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Lock "interface-5631bb1a-c5dd-480d-ad0d-b8519f7d2858-837c2676-98c7-4a84-b313-6146c9acfb65" acquired by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:03:11 np0005548731 nova_compute[232433]: 2025-12-06 08:03:11.252 232437 DEBUG nova.objects.instance [None req-cca8475b-494e-4e87-a02c-71844362d792 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Lazy-loading 'flavor' on Instance uuid 5631bb1a-c5dd-480d-ad0d-b8519f7d2858 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 03:03:11 np0005548731 nova_compute[232433]: 2025-12-06 08:03:11.276 232437 DEBUG nova.virt.libvirt.vif [None req-cca8475b-494e-4e87-a02c-71844362d792 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T08:02:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1271350420',display_name='tempest-TestNetworkBasicOps-server-1271350420',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1271350420',id=185,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBE77FYJgDkrxzG8W4+xLXhPOdFEca45Bj8zAeRPk4LZkIHRLpB+9sImdEg1wjzkrSrzVyN00VUwIMIwYuEfIAoq3Nx58veJTCciIHJ/EgBnAVEqFMMrhoURvjqeCMkMzww==',key_name='tempest-TestNetworkBasicOps-70649793',keypairs=<?>,launch_index=0,launched_at=2025-12-06T08:02:26Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='f4735a799c84437b9dd4ea8778ad2fbb',ramdisk_id='',reservation_id='r-vjv0p4ap',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1435471576',owner_user_name='tempest-TestNetworkBasicOps-1435471576-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T08:02:27Z,user_data=None,user_id='d5359905348247d0b9b5b95982e890bb',uuid=5631bb1a-c5dd-480d-ad0d-b8519f7d2858,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "837c2676-98c7-4a84-b313-6146c9acfb65", "address": "fa:16:3e:84:14:0f", "network": {"id": "07739e87-9923-4e04-a051-ba5152c0258f", "bridge": "br-int", "label": "tempest-network-smoke--1533215199", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.21", "type": "fixed", "version": 4, 
"meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f4735a799c84437b9dd4ea8778ad2fbb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap837c2676-98", "ovs_interfaceid": "837c2676-98c7-4a84-b313-6146c9acfb65", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec  6 03:03:11 np0005548731 nova_compute[232433]: 2025-12-06 08:03:11.277 232437 DEBUG nova.network.os_vif_util [None req-cca8475b-494e-4e87-a02c-71844362d792 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Converting VIF {"id": "837c2676-98c7-4a84-b313-6146c9acfb65", "address": "fa:16:3e:84:14:0f", "network": {"id": "07739e87-9923-4e04-a051-ba5152c0258f", "bridge": "br-int", "label": "tempest-network-smoke--1533215199", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.21", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f4735a799c84437b9dd4ea8778ad2fbb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap837c2676-98", "ovs_interfaceid": "837c2676-98c7-4a84-b313-6146c9acfb65", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 03:03:11 np0005548731 nova_compute[232433]: 2025-12-06 08:03:11.277 232437 DEBUG nova.network.os_vif_util [None req-cca8475b-494e-4e87-a02c-71844362d792 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:84:14:0f,bridge_name='br-int',has_traffic_filtering=True,id=837c2676-98c7-4a84-b313-6146c9acfb65,network=Network(07739e87-9923-4e04-a051-ba5152c0258f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap837c2676-98') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 03:03:11 np0005548731 nova_compute[232433]: 2025-12-06 08:03:11.282 232437 DEBUG nova.virt.libvirt.guest [None req-cca8475b-494e-4e87-a02c-71844362d792 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:84:14:0f"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap837c2676-98"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Dec  6 03:03:11 np0005548731 nova_compute[232433]: 2025-12-06 08:03:11.285 232437 DEBUG nova.virt.libvirt.guest [None req-cca8475b-494e-4e87-a02c-71844362d792 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:84:14:0f"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap837c2676-98"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Dec  6 03:03:11 np0005548731 nova_compute[232433]: 2025-12-06 08:03:11.287 232437 DEBUG nova.virt.libvirt.driver [None req-cca8475b-494e-4e87-a02c-71844362d792 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Attempting to detach device tap837c2676-98 from instance 5631bb1a-c5dd-480d-ad0d-b8519f7d2858 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487#033[00m
Dec  6 03:03:11 np0005548731 nova_compute[232433]: 2025-12-06 08:03:11.288 232437 DEBUG nova.virt.libvirt.guest [None req-cca8475b-494e-4e87-a02c-71844362d792 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] detach device xml: <interface type="ethernet">
Dec  6 03:03:11 np0005548731 nova_compute[232433]:  <mac address="fa:16:3e:84:14:0f"/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:  <model type="virtio"/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:  <driver name="vhost" rx_queue_size="512"/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:  <mtu size="1442"/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:  <target dev="tap837c2676-98"/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]: </interface>
Dec  6 03:03:11 np0005548731 nova_compute[232433]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Dec  6 03:03:11 np0005548731 nova_compute[232433]: 2025-12-06 08:03:11.295 232437 DEBUG nova.virt.libvirt.guest [None req-cca8475b-494e-4e87-a02c-71844362d792 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:84:14:0f"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap837c2676-98"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Dec  6 03:03:11 np0005548731 nova_compute[232433]: 2025-12-06 08:03:11.298 232437 DEBUG nova.virt.libvirt.guest [None req-cca8475b-494e-4e87-a02c-71844362d792 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:84:14:0f"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap837c2676-98"/></interface>not found in domain: <domain type='kvm' id='94'>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:  <name>instance-000000b9</name>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:  <uuid>5631bb1a-c5dd-480d-ad0d-b8519f7d2858</uuid>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:  <metadata>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec  6 03:03:11 np0005548731 nova_compute[232433]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:  <nova:name>tempest-TestNetworkBasicOps-server-1271350420</nova:name>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:  <nova:creationTime>2025-12-06 08:03:08</nova:creationTime>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:  <nova:flavor name="m1.nano">
Dec  6 03:03:11 np0005548731 nova_compute[232433]:    <nova:memory>128</nova:memory>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:    <nova:disk>1</nova:disk>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:    <nova:swap>0</nova:swap>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:    <nova:ephemeral>0</nova:ephemeral>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:    <nova:vcpus>1</nova:vcpus>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:  </nova:flavor>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:  <nova:owner>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:    <nova:user uuid="d5359905348247d0b9b5b95982e890bb">tempest-TestNetworkBasicOps-1435471576-project-member</nova:user>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:    <nova:project uuid="f4735a799c84437b9dd4ea8778ad2fbb">tempest-TestNetworkBasicOps-1435471576</nova:project>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:  </nova:owner>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:  <nova:root type="image" uuid="6efab05d-c7cf-4770-a5c3-c806a2739063"/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:  <nova:ports>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:    <nova:port uuid="8f7a7129-4a69-4a4c-8205-c5719b00005b">
Dec  6 03:03:11 np0005548731 nova_compute[232433]:      <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:    </nova:port>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:    <nova:port uuid="837c2676-98c7-4a84-b313-6146c9acfb65">
Dec  6 03:03:11 np0005548731 nova_compute[232433]:      <nova:ip type="fixed" address="10.100.0.21" ipVersion="4"/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:    </nova:port>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:  </nova:ports>
Dec  6 03:03:11 np0005548731 nova_compute[232433]: </nova:instance>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:  </metadata>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:  <memory unit='KiB'>131072</memory>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:  <currentMemory unit='KiB'>131072</currentMemory>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:  <vcpu placement='static'>1</vcpu>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:  <resource>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:    <partition>/machine</partition>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:  </resource>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:  <sysinfo type='smbios'>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:    <system>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:      <entry name='manufacturer'>RDO</entry>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:      <entry name='product'>OpenStack Compute</entry>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:      <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:      <entry name='serial'>5631bb1a-c5dd-480d-ad0d-b8519f7d2858</entry>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:      <entry name='uuid'>5631bb1a-c5dd-480d-ad0d-b8519f7d2858</entry>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:      <entry name='family'>Virtual Machine</entry>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:    </system>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:  </sysinfo>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:  <os>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:    <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:    <boot dev='hd'/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:    <smbios mode='sysinfo'/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:  </os>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:  <features>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:    <acpi/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:    <apic/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:    <vmcoreinfo state='on'/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:  </features>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:  <cpu mode='custom' match='exact' check='full'>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:    <model fallback='forbid'>Nehalem</model>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:    <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:    <feature policy='require' name='x2apic'/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:    <feature policy='require' name='hypervisor'/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:    <feature policy='require' name='vme'/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:  </cpu>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:  <clock offset='utc'>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:    <timer name='pit' tickpolicy='delay'/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:    <timer name='rtc' tickpolicy='catchup'/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:    <timer name='hpet' present='no'/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:  </clock>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:  <on_poweroff>destroy</on_poweroff>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:  <on_reboot>restart</on_reboot>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:  <on_crash>destroy</on_crash>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:  <devices>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:    <emulator>/usr/libexec/qemu-kvm</emulator>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:    <disk type='network' device='disk'>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:      <driver name='qemu' type='raw' cache='none'/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:      <auth username='openstack'>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:        <secret type='ceph' uuid='40a1bae4-cf76-5610-8dab-c75116dfe0bb'/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:      </auth>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:      <source protocol='rbd' name='vms/5631bb1a-c5dd-480d-ad0d-b8519f7d2858_disk' index='2'>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:        <host name='192.168.122.100' port='6789'/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:        <host name='192.168.122.102' port='6789'/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:        <host name='192.168.122.101' port='6789'/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:      </source>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:      <target dev='vda' bus='virtio'/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:      <alias name='virtio-disk0'/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:    </disk>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:    <disk type='network' device='cdrom'>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:      <driver name='qemu' type='raw' cache='none'/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:      <auth username='openstack'>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:        <secret type='ceph' uuid='40a1bae4-cf76-5610-8dab-c75116dfe0bb'/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:      </auth>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:      <source protocol='rbd' name='vms/5631bb1a-c5dd-480d-ad0d-b8519f7d2858_disk.config' index='1'>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:        <host name='192.168.122.100' port='6789'/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:        <host name='192.168.122.102' port='6789'/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:        <host name='192.168.122.101' port='6789'/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:      </source>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:      <target dev='sda' bus='sata'/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:      <readonly/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:      <alias name='sata0-0-0'/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:    </disk>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:    <controller type='pci' index='0' model='pcie-root'>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:      <alias name='pcie.0'/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:    </controller>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:    <controller type='pci' index='1' model='pcie-root-port'>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:      <model name='pcie-root-port'/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:      <target chassis='1' port='0x10'/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:      <alias name='pci.1'/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:    </controller>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:    <controller type='pci' index='2' model='pcie-root-port'>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:      <model name='pcie-root-port'/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:      <target chassis='2' port='0x11'/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:      <alias name='pci.2'/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:    </controller>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:    <controller type='pci' index='3' model='pcie-root-port'>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:      <model name='pcie-root-port'/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:      <target chassis='3' port='0x12'/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:      <alias name='pci.3'/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:    </controller>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:    <controller type='pci' index='4' model='pcie-root-port'>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:      <model name='pcie-root-port'/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:      <target chassis='4' port='0x13'/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:      <alias name='pci.4'/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:    </controller>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:    <controller type='pci' index='5' model='pcie-root-port'>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:      <model name='pcie-root-port'/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:      <target chassis='5' port='0x14'/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:      <alias name='pci.5'/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:    </controller>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:    <controller type='pci' index='6' model='pcie-root-port'>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:      <model name='pcie-root-port'/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:      <target chassis='6' port='0x15'/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:      <alias name='pci.6'/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:    </controller>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:    <controller type='pci' index='7' model='pcie-root-port'>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:      <model name='pcie-root-port'/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:      <target chassis='7' port='0x16'/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:      <alias name='pci.7'/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:    </controller>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:    <controller type='pci' index='8' model='pcie-root-port'>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:      <model name='pcie-root-port'/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:      <target chassis='8' port='0x17'/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:      <alias name='pci.8'/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:    </controller>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:    <controller type='pci' index='9' model='pcie-root-port'>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:      <model name='pcie-root-port'/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:      <target chassis='9' port='0x18'/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:      <alias name='pci.9'/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:    </controller>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:    <controller type='pci' index='10' model='pcie-root-port'>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:      <model name='pcie-root-port'/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:      <target chassis='10' port='0x19'/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:      <alias name='pci.10'/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:    </controller>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:    <controller type='pci' index='11' model='pcie-root-port'>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:      <model name='pcie-root-port'/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:      <target chassis='11' port='0x1a'/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:      <alias name='pci.11'/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:    </controller>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:    <controller type='pci' index='12' model='pcie-root-port'>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:      <model name='pcie-root-port'/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:      <target chassis='12' port='0x1b'/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:      <alias name='pci.12'/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:    </controller>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:    <controller type='pci' index='13' model='pcie-root-port'>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:      <model name='pcie-root-port'/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:      <target chassis='13' port='0x1c'/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:      <alias name='pci.13'/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:    </controller>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:    <controller type='pci' index='14' model='pcie-root-port'>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:      <model name='pcie-root-port'/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:      <target chassis='14' port='0x1d'/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:      <alias name='pci.14'/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:    </controller>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:    <controller type='pci' index='15' model='pcie-root-port'>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:      <model name='pcie-root-port'/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:      <target chassis='15' port='0x1e'/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:      <alias name='pci.15'/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:    </controller>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:    <controller type='pci' index='16' model='pcie-root-port'>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:      <model name='pcie-root-port'/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:      <target chassis='16' port='0x1f'/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:      <alias name='pci.16'/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:    </controller>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:    <controller type='pci' index='17' model='pcie-root-port'>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:      <model name='pcie-root-port'/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:      <target chassis='17' port='0x20'/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:      <alias name='pci.17'/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:    </controller>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:    <controller type='pci' index='18' model='pcie-root-port'>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:      <model name='pcie-root-port'/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:      <target chassis='18' port='0x21'/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:      <alias name='pci.18'/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:    </controller>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:    <controller type='pci' index='19' model='pcie-root-port'>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:      <model name='pcie-root-port'/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:      <target chassis='19' port='0x22'/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:      <alias name='pci.19'/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:    </controller>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:    <controller type='pci' index='20' model='pcie-root-port'>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:      <model name='pcie-root-port'/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:      <target chassis='20' port='0x23'/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:      <alias name='pci.20'/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:    </controller>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:    <controller type='pci' index='21' model='pcie-root-port'>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:      <model name='pcie-root-port'/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:      <target chassis='21' port='0x24'/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:      <alias name='pci.21'/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:    </controller>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:    <controller type='pci' index='22' model='pcie-root-port'>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:      <model name='pcie-root-port'/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:      <target chassis='22' port='0x25'/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:      <alias name='pci.22'/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:    </controller>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:    <controller type='pci' index='23' model='pcie-root-port'>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:      <model name='pcie-root-port'/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:      <target chassis='23' port='0x26'/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:      <alias name='pci.23'/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:    </controller>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:    <controller type='pci' index='24' model='pcie-root-port'>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:      <model name='pcie-root-port'/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:      <target chassis='24' port='0x27'/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:      <alias name='pci.24'/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:    </controller>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:    <controller type='pci' index='25' model='pcie-root-port'>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:      <model name='pcie-root-port'/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:      <target chassis='25' port='0x28'/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:      <alias name='pci.25'/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:    </controller>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:    <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:      <model name='pcie-pci-bridge'/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:      <alias name='pci.26'/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:    </controller>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:    <controller type='usb' index='0' model='piix3-uhci'>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:      <alias name='usb'/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:      <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:    </controller>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:    <controller type='sata' index='0'>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:      <alias name='ide'/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:    </controller>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:    <interface type='ethernet'>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:      <mac address='fa:16:3e:2d:94:e7'/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:      <target dev='tap8f7a7129-4a'/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:      <model type='virtio'/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:      <driver name='vhost' rx_queue_size='512'/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:      <mtu size='1442'/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:      <alias name='net0'/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:    </interface>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:    <interface type='ethernet'>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:      <mac address='fa:16:3e:84:14:0f'/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:      <target dev='tap837c2676-98'/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:      <model type='virtio'/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:      <driver name='vhost' rx_queue_size='512'/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:      <mtu size='1442'/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:      <alias name='net1'/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:      <address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:    </interface>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:    <serial type='pty'>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:      <source path='/dev/pts/0'/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:      <log file='/var/lib/nova/instances/5631bb1a-c5dd-480d-ad0d-b8519f7d2858/console.log' append='off'/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:      <target type='isa-serial' port='0'>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:        <model name='isa-serial'/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:      </target>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:      <alias name='serial0'/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:    </serial>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:    <console type='pty' tty='/dev/pts/0'>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:      <source path='/dev/pts/0'/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:      <log file='/var/lib/nova/instances/5631bb1a-c5dd-480d-ad0d-b8519f7d2858/console.log' append='off'/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:      <target type='serial' port='0'/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:      <alias name='serial0'/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:    </console>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:    <input type='tablet' bus='usb'>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:      <alias name='input0'/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:      <address type='usb' bus='0' port='1'/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:    </input>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:    <input type='mouse' bus='ps2'>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:      <alias name='input1'/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:    </input>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:    <input type='keyboard' bus='ps2'>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:      <alias name='input2'/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:    </input>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:    <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:      <listen type='address' address='::0'/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:    </graphics>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:    <audio id='1' type='none'/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:    <video>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:      <model type='virtio' heads='1' primary='yes'/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:      <alias name='video0'/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:    </video>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:    <watchdog model='itco' action='reset'>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:      <alias name='watchdog0'/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:    </watchdog>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:    <memballoon model='virtio'>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:      <stats period='10'/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:      <alias name='balloon0'/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:    </memballoon>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:    <rng model='virtio'>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:      <backend model='random'>/dev/urandom</backend>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:      <alias name='rng0'/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:    </rng>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:  </devices>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:  <seclabel type='dynamic' model='selinux' relabel='yes'>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:    <label>system_u:system_r:svirt_t:s0:c603,c605</label>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:    <imagelabel>system_u:object_r:svirt_image_t:s0:c603,c605</imagelabel>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:  </seclabel>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:  <seclabel type='dynamic' model='dac' relabel='yes'>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:    <label>+107:+107</label>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:    <imagelabel>+107:+107</imagelabel>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:  </seclabel>
Dec  6 03:03:11 np0005548731 nova_compute[232433]: </domain>
Dec  6 03:03:11 np0005548731 nova_compute[232433]: get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Dec  6 03:03:11 np0005548731 nova_compute[232433]: 2025-12-06 08:03:11.298 232437 INFO nova.virt.libvirt.driver [None req-cca8475b-494e-4e87-a02c-71844362d792 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Successfully detached device tap837c2676-98 from instance 5631bb1a-c5dd-480d-ad0d-b8519f7d2858 from the persistent domain config.
Dec  6 03:03:11 np0005548731 nova_compute[232433]: 2025-12-06 08:03:11.298 232437 DEBUG nova.virt.libvirt.driver [None req-cca8475b-494e-4e87-a02c-71844362d792 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] (1/8): Attempting to detach device tap837c2676-98 with device alias net1 from instance 5631bb1a-c5dd-480d-ad0d-b8519f7d2858 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Dec  6 03:03:11 np0005548731 nova_compute[232433]: 2025-12-06 08:03:11.299 232437 DEBUG nova.virt.libvirt.guest [None req-cca8475b-494e-4e87-a02c-71844362d792 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] detach device xml: <interface type="ethernet">
Dec  6 03:03:11 np0005548731 nova_compute[232433]:  <mac address="fa:16:3e:84:14:0f"/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:  <model type="virtio"/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:  <driver name="vhost" rx_queue_size="512"/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:  <mtu size="1442"/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:  <target dev="tap837c2676-98"/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]: </interface>
Dec  6 03:03:11 np0005548731 nova_compute[232433]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Dec  6 03:03:11 np0005548731 kernel: tap837c2676-98 (unregistering): left promiscuous mode
Dec  6 03:03:11 np0005548731 NetworkManager[49182]: <info>  [1765008191.3982] device (tap837c2676-98): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec  6 03:03:11 np0005548731 ovn_controller[133927]: 2025-12-06T08:03:11Z|00941|binding|INFO|Releasing lport 837c2676-98c7-4a84-b313-6146c9acfb65 from this chassis (sb_readonly=0)
Dec  6 03:03:11 np0005548731 ovn_controller[133927]: 2025-12-06T08:03:11Z|00942|binding|INFO|Setting lport 837c2676-98c7-4a84-b313-6146c9acfb65 down in Southbound
Dec  6 03:03:11 np0005548731 ovn_controller[133927]: 2025-12-06T08:03:11Z|00943|binding|INFO|Removing iface tap837c2676-98 ovn-installed in OVS
Dec  6 03:03:11 np0005548731 nova_compute[232433]: 2025-12-06 08:03:11.411 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 03:03:11 np0005548731 nova_compute[232433]: 2025-12-06 08:03:11.416 232437 DEBUG nova.virt.libvirt.driver [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Received event <DeviceRemovedEvent: 1765008191.4159448, 5631bb1a-c5dd-480d-ad0d-b8519f7d2858 => net1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Dec  6 03:03:11 np0005548731 nova_compute[232433]: 2025-12-06 08:03:11.418 232437 DEBUG nova.virt.libvirt.driver [None req-cca8475b-494e-4e87-a02c-71844362d792 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Start waiting for the detach event from libvirt for device tap837c2676-98 with device alias net1 for instance 5631bb1a-c5dd-480d-ad0d-b8519f7d2858 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Dec  6 03:03:11 np0005548731 nova_compute[232433]: 2025-12-06 08:03:11.419 232437 DEBUG nova.virt.libvirt.guest [None req-cca8475b-494e-4e87-a02c-71844362d792 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:84:14:0f"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap837c2676-98"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Dec  6 03:03:11 np0005548731 nova_compute[232433]: 2025-12-06 08:03:11.423 232437 DEBUG nova.virt.libvirt.guest [None req-cca8475b-494e-4e87-a02c-71844362d792 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:84:14:0f"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap837c2676-98"/></interface>not found in domain: <domain type='kvm' id='94'>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:  <name>instance-000000b9</name>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:  <uuid>5631bb1a-c5dd-480d-ad0d-b8519f7d2858</uuid>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:  <metadata>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec  6 03:03:11 np0005548731 nova_compute[232433]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:  <nova:name>tempest-TestNetworkBasicOps-server-1271350420</nova:name>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:  <nova:creationTime>2025-12-06 08:03:08</nova:creationTime>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:  <nova:flavor name="m1.nano">
Dec  6 03:03:11 np0005548731 nova_compute[232433]:    <nova:memory>128</nova:memory>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:    <nova:disk>1</nova:disk>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:    <nova:swap>0</nova:swap>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:    <nova:ephemeral>0</nova:ephemeral>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:    <nova:vcpus>1</nova:vcpus>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:  </nova:flavor>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:  <nova:owner>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:    <nova:user uuid="d5359905348247d0b9b5b95982e890bb">tempest-TestNetworkBasicOps-1435471576-project-member</nova:user>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:    <nova:project uuid="f4735a799c84437b9dd4ea8778ad2fbb">tempest-TestNetworkBasicOps-1435471576</nova:project>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:  </nova:owner>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:  <nova:root type="image" uuid="6efab05d-c7cf-4770-a5c3-c806a2739063"/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:  <nova:ports>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:    <nova:port uuid="8f7a7129-4a69-4a4c-8205-c5719b00005b">
Dec  6 03:03:11 np0005548731 nova_compute[232433]:      <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:    </nova:port>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:    <nova:port uuid="837c2676-98c7-4a84-b313-6146c9acfb65">
Dec  6 03:03:11 np0005548731 nova_compute[232433]:      <nova:ip type="fixed" address="10.100.0.21" ipVersion="4"/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:    </nova:port>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:  </nova:ports>
Dec  6 03:03:11 np0005548731 nova_compute[232433]: </nova:instance>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:  </metadata>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:  <memory unit='KiB'>131072</memory>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:  <currentMemory unit='KiB'>131072</currentMemory>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:  <vcpu placement='static'>1</vcpu>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:  <resource>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:    <partition>/machine</partition>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:  </resource>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:  <sysinfo type='smbios'>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:    <system>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:      <entry name='manufacturer'>RDO</entry>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:      <entry name='product'>OpenStack Compute</entry>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:      <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:      <entry name='serial'>5631bb1a-c5dd-480d-ad0d-b8519f7d2858</entry>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:      <entry name='uuid'>5631bb1a-c5dd-480d-ad0d-b8519f7d2858</entry>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:      <entry name='family'>Virtual Machine</entry>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:    </system>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:  </sysinfo>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:  <os>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:    <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:    <boot dev='hd'/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:    <smbios mode='sysinfo'/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:  </os>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:  <features>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:    <acpi/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:    <apic/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:    <vmcoreinfo state='on'/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:  </features>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:  <cpu mode='custom' match='exact' check='full'>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:    <model fallback='forbid'>Nehalem</model>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:    <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:    <feature policy='require' name='x2apic'/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:    <feature policy='require' name='hypervisor'/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:    <feature policy='require' name='vme'/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:  </cpu>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:  <clock offset='utc'>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:    <timer name='pit' tickpolicy='delay'/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:    <timer name='rtc' tickpolicy='catchup'/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:    <timer name='hpet' present='no'/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:  </clock>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:  <on_poweroff>destroy</on_poweroff>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:  <on_reboot>restart</on_reboot>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:  <on_crash>destroy</on_crash>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:  <devices>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:    <emulator>/usr/libexec/qemu-kvm</emulator>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:    <disk type='network' device='disk'>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:      <driver name='qemu' type='raw' cache='none'/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:      <auth username='openstack'>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:        <secret type='ceph' uuid='40a1bae4-cf76-5610-8dab-c75116dfe0bb'/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:      </auth>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:      <source protocol='rbd' name='vms/5631bb1a-c5dd-480d-ad0d-b8519f7d2858_disk' index='2'>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:        <host name='192.168.122.100' port='6789'/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:        <host name='192.168.122.102' port='6789'/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:        <host name='192.168.122.101' port='6789'/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:      </source>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:      <target dev='vda' bus='virtio'/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:      <alias name='virtio-disk0'/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:    </disk>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:    <disk type='network' device='cdrom'>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:      <driver name='qemu' type='raw' cache='none'/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:      <auth username='openstack'>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:        <secret type='ceph' uuid='40a1bae4-cf76-5610-8dab-c75116dfe0bb'/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:      </auth>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:      <source protocol='rbd' name='vms/5631bb1a-c5dd-480d-ad0d-b8519f7d2858_disk.config' index='1'>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:        <host name='192.168.122.100' port='6789'/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:        <host name='192.168.122.102' port='6789'/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:        <host name='192.168.122.101' port='6789'/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:      </source>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:      <target dev='sda' bus='sata'/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:      <readonly/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:      <alias name='sata0-0-0'/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:    </disk>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:    <controller type='pci' index='0' model='pcie-root'>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:      <alias name='pcie.0'/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:    </controller>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:    <controller type='pci' index='1' model='pcie-root-port'>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:      <model name='pcie-root-port'/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:      <target chassis='1' port='0x10'/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:      <alias name='pci.1'/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:    </controller>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:    <controller type='pci' index='2' model='pcie-root-port'>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:      <model name='pcie-root-port'/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:      <target chassis='2' port='0x11'/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:      <alias name='pci.2'/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:    </controller>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:    <controller type='pci' index='3' model='pcie-root-port'>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:      <model name='pcie-root-port'/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:      <target chassis='3' port='0x12'/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:      <alias name='pci.3'/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:    </controller>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:    <controller type='pci' index='4' model='pcie-root-port'>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:      <model name='pcie-root-port'/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:      <target chassis='4' port='0x13'/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:      <alias name='pci.4'/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:    </controller>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:    <controller type='pci' index='5' model='pcie-root-port'>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:      <model name='pcie-root-port'/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:      <target chassis='5' port='0x14'/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:      <alias name='pci.5'/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:    </controller>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:    <controller type='pci' index='6' model='pcie-root-port'>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:      <model name='pcie-root-port'/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:      <target chassis='6' port='0x15'/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:      <alias name='pci.6'/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:    </controller>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:    <controller type='pci' index='7' model='pcie-root-port'>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:      <model name='pcie-root-port'/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:      <target chassis='7' port='0x16'/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:      <alias name='pci.7'/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:    </controller>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:    <controller type='pci' index='8' model='pcie-root-port'>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:      <model name='pcie-root-port'/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:      <target chassis='8' port='0x17'/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:      <alias name='pci.8'/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:    </controller>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:    <controller type='pci' index='9' model='pcie-root-port'>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:      <model name='pcie-root-port'/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:      <target chassis='9' port='0x18'/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:      <alias name='pci.9'/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:    </controller>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:    <controller type='pci' index='10' model='pcie-root-port'>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:      <model name='pcie-root-port'/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:      <target chassis='10' port='0x19'/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:      <alias name='pci.10'/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:    </controller>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:    <controller type='pci' index='11' model='pcie-root-port'>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:      <model name='pcie-root-port'/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:      <target chassis='11' port='0x1a'/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:      <alias name='pci.11'/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:    </controller>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:    <controller type='pci' index='12' model='pcie-root-port'>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:      <model name='pcie-root-port'/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:      <target chassis='12' port='0x1b'/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:      <alias name='pci.12'/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:    </controller>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:    <controller type='pci' index='13' model='pcie-root-port'>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:      <model name='pcie-root-port'/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:      <target chassis='13' port='0x1c'/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:      <alias name='pci.13'/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:    </controller>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:    <controller type='pci' index='14' model='pcie-root-port'>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:      <model name='pcie-root-port'/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:      <target chassis='14' port='0x1d'/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:      <alias name='pci.14'/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:    </controller>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:    <controller type='pci' index='15' model='pcie-root-port'>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:      <model name='pcie-root-port'/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:      <target chassis='15' port='0x1e'/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:      <alias name='pci.15'/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:    </controller>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:    <controller type='pci' index='16' model='pcie-root-port'>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:      <model name='pcie-root-port'/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:      <target chassis='16' port='0x1f'/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:      <alias name='pci.16'/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:    </controller>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:    <controller type='pci' index='17' model='pcie-root-port'>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:      <model name='pcie-root-port'/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:      <target chassis='17' port='0x20'/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:      <alias name='pci.17'/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:    </controller>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:    <controller type='pci' index='18' model='pcie-root-port'>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:      <model name='pcie-root-port'/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:      <target chassis='18' port='0x21'/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:      <alias name='pci.18'/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:    </controller>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:    <controller type='pci' index='19' model='pcie-root-port'>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:      <model name='pcie-root-port'/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:      <target chassis='19' port='0x22'/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:      <alias name='pci.19'/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:    </controller>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:    <controller type='pci' index='20' model='pcie-root-port'>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:      <model name='pcie-root-port'/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:      <target chassis='20' port='0x23'/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:      <alias name='pci.20'/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:    </controller>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:    <controller type='pci' index='21' model='pcie-root-port'>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:      <model name='pcie-root-port'/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:      <target chassis='21' port='0x24'/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:      <alias name='pci.21'/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:    </controller>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:    <controller type='pci' index='22' model='pcie-root-port'>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:      <model name='pcie-root-port'/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:      <target chassis='22' port='0x25'/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:      <alias name='pci.22'/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:    </controller>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:    <controller type='pci' index='23' model='pcie-root-port'>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:      <model name='pcie-root-port'/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:      <target chassis='23' port='0x26'/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:      <alias name='pci.23'/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:    </controller>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:    <controller type='pci' index='24' model='pcie-root-port'>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:      <model name='pcie-root-port'/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:      <target chassis='24' port='0x27'/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:      <alias name='pci.24'/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:    </controller>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:    <controller type='pci' index='25' model='pcie-root-port'>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:      <model name='pcie-root-port'/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:      <target chassis='25' port='0x28'/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:      <alias name='pci.25'/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:    </controller>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:    <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:      <model name='pcie-pci-bridge'/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:      <alias name='pci.26'/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:    </controller>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:    <controller type='usb' index='0' model='piix3-uhci'>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:      <alias name='usb'/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:      <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:    </controller>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:    <controller type='sata' index='0'>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:      <alias name='ide'/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:    </controller>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:    <interface type='ethernet'>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:      <mac address='fa:16:3e:2d:94:e7'/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:      <target dev='tap8f7a7129-4a'/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:      <model type='virtio'/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:      <driver name='vhost' rx_queue_size='512'/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:      <mtu size='1442'/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:      <alias name='net0'/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:    </interface>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:    <serial type='pty'>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:      <source path='/dev/pts/0'/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:      <log file='/var/lib/nova/instances/5631bb1a-c5dd-480d-ad0d-b8519f7d2858/console.log' append='off'/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:      <target type='isa-serial' port='0'>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:        <model name='isa-serial'/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:      </target>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:      <alias name='serial0'/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:    </serial>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:    <console type='pty' tty='/dev/pts/0'>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:      <source path='/dev/pts/0'/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:      <log file='/var/lib/nova/instances/5631bb1a-c5dd-480d-ad0d-b8519f7d2858/console.log' append='off'/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:      <target type='serial' port='0'/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:      <alias name='serial0'/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:    </console>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:    <input type='tablet' bus='usb'>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:      <alias name='input0'/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:      <address type='usb' bus='0' port='1'/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:    </input>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:    <input type='mouse' bus='ps2'>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:      <alias name='input1'/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:    </input>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:    <input type='keyboard' bus='ps2'>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:      <alias name='input2'/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:    </input>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:    <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:      <listen type='address' address='::0'/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:    </graphics>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:    <audio id='1' type='none'/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:    <video>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:      <model type='virtio' heads='1' primary='yes'/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:      <alias name='video0'/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:    </video>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:    <watchdog model='itco' action='reset'>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:      <alias name='watchdog0'/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:    </watchdog>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:    <memballoon model='virtio'>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:      <stats period='10'/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:      <alias name='balloon0'/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:    </memballoon>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:    <rng model='virtio'>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:      <backend model='random'>/dev/urandom</backend>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:      <alias name='rng0'/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:    </rng>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:  </devices>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:  <seclabel type='dynamic' model='selinux' relabel='yes'>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:    <label>system_u:system_r:svirt_t:s0:c603,c605</label>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:    <imagelabel>system_u:object_r:svirt_image_t:s0:c603,c605</imagelabel>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:  </seclabel>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:  <seclabel type='dynamic' model='dac' relabel='yes'>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:    <label>+107:+107</label>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:    <imagelabel>+107:+107</imagelabel>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:  </seclabel>
Dec  6 03:03:11 np0005548731 nova_compute[232433]: </domain>
Dec  6 03:03:11 np0005548731 nova_compute[232433]: get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282#033[00m
Dec  6 03:03:11 np0005548731 nova_compute[232433]: 2025-12-06 08:03:11.424 232437 INFO nova.virt.libvirt.driver [None req-cca8475b-494e-4e87-a02c-71844362d792 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Successfully detached device tap837c2676-98 from instance 5631bb1a-c5dd-480d-ad0d-b8519f7d2858 from the live domain config.#033[00m
Dec  6 03:03:11 np0005548731 nova_compute[232433]: 2025-12-06 08:03:11.426 232437 DEBUG nova.virt.libvirt.vif [None req-cca8475b-494e-4e87-a02c-71844362d792 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T08:02:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1271350420',display_name='tempest-TestNetworkBasicOps-server-1271350420',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1271350420',id=185,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBE77FYJgDkrxzG8W4+xLXhPOdFEca45Bj8zAeRPk4LZkIHRLpB+9sImdEg1wjzkrSrzVyN00VUwIMIwYuEfIAoq3Nx58veJTCciIHJ/EgBnAVEqFMMrhoURvjqeCMkMzww==',key_name='tempest-TestNetworkBasicOps-70649793',keypairs=<?>,launch_index=0,launched_at=2025-12-06T08:02:26Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='f4735a799c84437b9dd4ea8778ad2fbb',ramdisk_id='',reservation_id='r-vjv0p4ap',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1435471576',owner_user_name='tempest-TestNetworkBasicOps-1435471576-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T08:02:27Z,user_data=None,user_id='d5359905348247d0b9b5b95982e890bb',uuid=5631bb1a-c5dd-480d-ad0d-b8519f7d2858,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "837c2676-98c7-4a84-b313-6146c9acfb65", "address": "fa:16:3e:84:14:0f", "network": {"id": "07739e87-9923-4e04-a051-ba5152c0258f", "bridge": "br-int", "label": "tempest-network-smoke--1533215199", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.21", "type": "fixed", "version": 4, 
"meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f4735a799c84437b9dd4ea8778ad2fbb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap837c2676-98", "ovs_interfaceid": "837c2676-98c7-4a84-b313-6146c9acfb65", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec  6 03:03:11 np0005548731 nova_compute[232433]: 2025-12-06 08:03:11.427 232437 DEBUG nova.network.os_vif_util [None req-cca8475b-494e-4e87-a02c-71844362d792 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Converting VIF {"id": "837c2676-98c7-4a84-b313-6146c9acfb65", "address": "fa:16:3e:84:14:0f", "network": {"id": "07739e87-9923-4e04-a051-ba5152c0258f", "bridge": "br-int", "label": "tempest-network-smoke--1533215199", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.21", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f4735a799c84437b9dd4ea8778ad2fbb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap837c2676-98", "ovs_interfaceid": "837c2676-98c7-4a84-b313-6146c9acfb65", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 03:03:11 np0005548731 nova_compute[232433]: 2025-12-06 08:03:11.428 232437 DEBUG nova.network.os_vif_util [None req-cca8475b-494e-4e87-a02c-71844362d792 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:84:14:0f,bridge_name='br-int',has_traffic_filtering=True,id=837c2676-98c7-4a84-b313-6146c9acfb65,network=Network(07739e87-9923-4e04-a051-ba5152c0258f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap837c2676-98') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 03:03:11 np0005548731 nova_compute[232433]: 2025-12-06 08:03:11.428 232437 DEBUG os_vif [None req-cca8475b-494e-4e87-a02c-71844362d792 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:84:14:0f,bridge_name='br-int',has_traffic_filtering=True,id=837c2676-98c7-4a84-b313-6146c9acfb65,network=Network(07739e87-9923-4e04-a051-ba5152c0258f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap837c2676-98') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec  6 03:03:11 np0005548731 nova_compute[232433]: 2025-12-06 08:03:11.433 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:03:11 np0005548731 nova_compute[232433]: 2025-12-06 08:03:11.434 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap837c2676-98, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 03:03:11 np0005548731 nova_compute[232433]: 2025-12-06 08:03:11.436 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:03:11 np0005548731 nova_compute[232433]: 2025-12-06 08:03:11.438 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  6 03:03:11 np0005548731 nova_compute[232433]: 2025-12-06 08:03:11.442 232437 INFO os_vif [None req-cca8475b-494e-4e87-a02c-71844362d792 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:84:14:0f,bridge_name='br-int',has_traffic_filtering=True,id=837c2676-98c7-4a84-b313-6146c9acfb65,network=Network(07739e87-9923-4e04-a051-ba5152c0258f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap837c2676-98')#033[00m
Dec  6 03:03:11 np0005548731 nova_compute[232433]: 2025-12-06 08:03:11.443 232437 DEBUG nova.virt.libvirt.guest [None req-cca8475b-494e-4e87-a02c-71844362d792 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec  6 03:03:11 np0005548731 nova_compute[232433]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:  <nova:name>tempest-TestNetworkBasicOps-server-1271350420</nova:name>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:  <nova:creationTime>2025-12-06 08:03:11</nova:creationTime>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:  <nova:flavor name="m1.nano">
Dec  6 03:03:11 np0005548731 nova_compute[232433]:    <nova:memory>128</nova:memory>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:    <nova:disk>1</nova:disk>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:    <nova:swap>0</nova:swap>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:    <nova:ephemeral>0</nova:ephemeral>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:    <nova:vcpus>1</nova:vcpus>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:  </nova:flavor>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:  <nova:owner>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:    <nova:user uuid="d5359905348247d0b9b5b95982e890bb">tempest-TestNetworkBasicOps-1435471576-project-member</nova:user>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:    <nova:project uuid="f4735a799c84437b9dd4ea8778ad2fbb">tempest-TestNetworkBasicOps-1435471576</nova:project>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:  </nova:owner>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:  <nova:root type="image" uuid="6efab05d-c7cf-4770-a5c3-c806a2739063"/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:  <nova:ports>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:    <nova:port uuid="8f7a7129-4a69-4a4c-8205-c5719b00005b">
Dec  6 03:03:11 np0005548731 nova_compute[232433]:      <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:    </nova:port>
Dec  6 03:03:11 np0005548731 nova_compute[232433]:  </nova:ports>
Dec  6 03:03:11 np0005548731 nova_compute[232433]: </nova:instance>
Dec  6 03:03:11 np0005548731 nova_compute[232433]: set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359#033[00m
Dec  6 03:03:11 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:03:11.443 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:84:14:0f 10.100.0.21'], port_security=['fa:16:3e:84:14:0f 10.100.0.21'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.21/28', 'neutron:device_id': '5631bb1a-c5dd-480d-ad0d-b8519f7d2858', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-07739e87-9923-4e04-a051-ba5152c0258f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f4735a799c84437b9dd4ea8778ad2fbb', 'neutron:revision_number': '4', 'neutron:security_group_ids': '7c0486dd-d5db-4e6a-a51e-94ac8eaed290', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a0a1ae33-3d1f-4bc5-97b2-106300889e6f, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], logical_port=837c2676-98c7-4a84-b313-6146c9acfb65) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 03:03:11 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:03:11.447 143965 INFO neutron.agent.ovn.metadata.agent [-] Port 837c2676-98c7-4a84-b313-6146c9acfb65 in datapath 07739e87-9923-4e04-a051-ba5152c0258f unbound from our chassis#033[00m
Dec  6 03:03:11 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:03:11.449 143965 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 07739e87-9923-4e04-a051-ba5152c0258f, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec  6 03:03:11 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:03:11.450 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[6b20d030-0fb2-4acc-b7ab-580ada8dffda]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:03:11 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:03:11.450 143965 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-07739e87-9923-4e04-a051-ba5152c0258f namespace which is not needed anymore#033[00m
Dec  6 03:03:11 np0005548731 nova_compute[232433]: 2025-12-06 08:03:11.544 232437 DEBUG nova.network.neutron [req-36720dfa-ad70-4ada-8e0b-bb1aee5f023e req-bdb604db-b76c-4775-9aa4-37a91b2ad5f7 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 5631bb1a-c5dd-480d-ad0d-b8519f7d2858] Updated VIF entry in instance network info cache for port 837c2676-98c7-4a84-b313-6146c9acfb65. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  6 03:03:11 np0005548731 nova_compute[232433]: 2025-12-06 08:03:11.544 232437 DEBUG nova.network.neutron [req-36720dfa-ad70-4ada-8e0b-bb1aee5f023e req-bdb604db-b76c-4775-9aa4-37a91b2ad5f7 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 5631bb1a-c5dd-480d-ad0d-b8519f7d2858] Updating instance_info_cache with network_info: [{"id": "8f7a7129-4a69-4a4c-8205-c5719b00005b", "address": "fa:16:3e:2d:94:e7", "network": {"id": "6b5f9e0f-ed86-4284-9c14-39494399dc0e", "bridge": "br-int", "label": "tempest-network-smoke--576336473", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.248", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f4735a799c84437b9dd4ea8778ad2fbb", "mtu": null, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8f7a7129-4a", "ovs_interfaceid": "8f7a7129-4a69-4a4c-8205-c5719b00005b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "837c2676-98c7-4a84-b313-6146c9acfb65", "address": "fa:16:3e:84:14:0f", "network": {"id": "07739e87-9923-4e04-a051-ba5152c0258f", "bridge": "br-int", "label": "tempest-network-smoke--1533215199", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.21", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "f4735a799c84437b9dd4ea8778ad2fbb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap837c2676-98", "ovs_interfaceid": "837c2676-98c7-4a84-b313-6146c9acfb65", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 03:03:11 np0005548731 nova_compute[232433]: 2025-12-06 08:03:11.565 232437 DEBUG oslo_concurrency.lockutils [req-36720dfa-ad70-4ada-8e0b-bb1aee5f023e req-bdb604db-b76c-4775-9aa4-37a91b2ad5f7 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-5631bb1a-c5dd-480d-ad0d-b8519f7d2858" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 03:03:11 np0005548731 neutron-haproxy-ovnmeta-07739e87-9923-4e04-a051-ba5152c0258f[322136]: [NOTICE]   (322140) : haproxy version is 2.8.14-c23fe91
Dec  6 03:03:11 np0005548731 neutron-haproxy-ovnmeta-07739e87-9923-4e04-a051-ba5152c0258f[322136]: [NOTICE]   (322140) : path to executable is /usr/sbin/haproxy
Dec  6 03:03:11 np0005548731 neutron-haproxy-ovnmeta-07739e87-9923-4e04-a051-ba5152c0258f[322136]: [WARNING]  (322140) : Exiting Master process...
Dec  6 03:03:11 np0005548731 neutron-haproxy-ovnmeta-07739e87-9923-4e04-a051-ba5152c0258f[322136]: [ALERT]    (322140) : Current worker (322142) exited with code 143 (Terminated)
Dec  6 03:03:11 np0005548731 neutron-haproxy-ovnmeta-07739e87-9923-4e04-a051-ba5152c0258f[322136]: [WARNING]  (322140) : All workers exited. Exiting... (0)
Dec  6 03:03:11 np0005548731 systemd[1]: libpod-d4828f0f99d79d15f10b1cac5292004219e1a34ffaebd27d4f2a821a79bf9808.scope: Deactivated successfully.
Dec  6 03:03:11 np0005548731 podman[322173]: 2025-12-06 08:03:11.628104376 +0000 UTC m=+0.052099085 container died d4828f0f99d79d15f10b1cac5292004219e1a34ffaebd27d4f2a821a79bf9808 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-07739e87-9923-4e04-a051-ba5152c0258f, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Dec  6 03:03:11 np0005548731 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-d4828f0f99d79d15f10b1cac5292004219e1a34ffaebd27d4f2a821a79bf9808-userdata-shm.mount: Deactivated successfully.
Dec  6 03:03:11 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:03:11 np0005548731 systemd[1]: var-lib-containers-storage-overlay-9f9d102acecce32236d33ffaf73a4d3c470ffaaaeac98e8c21c8fe09e601d2c9-merged.mount: Deactivated successfully.
Dec  6 03:03:11 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:03:11 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:03:11.670 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:03:11 np0005548731 podman[322173]: 2025-12-06 08:03:11.686710129 +0000 UTC m=+0.110704878 container cleanup d4828f0f99d79d15f10b1cac5292004219e1a34ffaebd27d4f2a821a79bf9808 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-07739e87-9923-4e04-a051-ba5152c0258f, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  6 03:03:11 np0005548731 systemd[1]: libpod-conmon-d4828f0f99d79d15f10b1cac5292004219e1a34ffaebd27d4f2a821a79bf9808.scope: Deactivated successfully.
Dec  6 03:03:11 np0005548731 podman[322203]: 2025-12-06 08:03:11.752752173 +0000 UTC m=+0.041912926 container remove d4828f0f99d79d15f10b1cac5292004219e1a34ffaebd27d4f2a821a79bf9808 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-07739e87-9923-4e04-a051-ba5152c0258f, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Dec  6 03:03:11 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:03:11.762 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[54a80214-8839-4a9a-95b9-95efc1aa9719]: (4, ('Sat Dec  6 08:03:11 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-07739e87-9923-4e04-a051-ba5152c0258f (d4828f0f99d79d15f10b1cac5292004219e1a34ffaebd27d4f2a821a79bf9808)\nd4828f0f99d79d15f10b1cac5292004219e1a34ffaebd27d4f2a821a79bf9808\nSat Dec  6 08:03:11 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-07739e87-9923-4e04-a051-ba5152c0258f (d4828f0f99d79d15f10b1cac5292004219e1a34ffaebd27d4f2a821a79bf9808)\nd4828f0f99d79d15f10b1cac5292004219e1a34ffaebd27d4f2a821a79bf9808\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:03:11 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:03:11.765 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[dd7ecc02-7934-469d-9ec4-1be1940ac125]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:03:11 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:03:11.766 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap07739e87-90, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 03:03:11 np0005548731 nova_compute[232433]: 2025-12-06 08:03:11.803 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:03:11 np0005548731 kernel: tap07739e87-90: left promiscuous mode
Dec  6 03:03:11 np0005548731 nova_compute[232433]: 2025-12-06 08:03:11.815 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:03:11 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:03:11.818 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[809c134c-cec3-4b03-996d-3d0587622f85]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:03:11 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:03:11.835 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[d9b6ddda-9e67-4262-838b-dd815488148e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:03:11 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:03:11.837 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[69daeb4d-c401-4648-8365-d12743c1f7b3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:03:11 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:03:11.852 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[7b31eeda-9534-4435-8225-8c5745ccbd01]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 845062, 'reachable_time': 34521, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 322216, 'error': None, 'target': 'ovnmeta-07739e87-9923-4e04-a051-ba5152c0258f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:03:11 np0005548731 systemd[1]: run-netns-ovnmeta\x2d07739e87\x2d9923\x2d4e04\x2da051\x2dba5152c0258f.mount: Deactivated successfully.
Dec  6 03:03:11 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:03:11.856 144079 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-07739e87-9923-4e04-a051-ba5152c0258f deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Dec  6 03:03:11 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:03:11.857 144079 DEBUG oslo.privsep.daemon [-] privsep: reply[c6003fbb-4962-4564-83da-0f1cbfef8b25]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:03:13 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:03:13 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:03:13 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:03:13.011 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:03:13 np0005548731 nova_compute[232433]: 2025-12-06 08:03:13.525 232437 DEBUG nova.compute.manager [req-74c79a2c-c789-4548-a459-676a36e50df2 req-4798739c-064a-4eda-addf-0e06b907c625 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 5631bb1a-c5dd-480d-ad0d-b8519f7d2858] Received event network-vif-unplugged-837c2676-98c7-4a84-b313-6146c9acfb65 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 03:03:13 np0005548731 nova_compute[232433]: 2025-12-06 08:03:13.526 232437 DEBUG oslo_concurrency.lockutils [req-74c79a2c-c789-4548-a459-676a36e50df2 req-4798739c-064a-4eda-addf-0e06b907c625 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "5631bb1a-c5dd-480d-ad0d-b8519f7d2858-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:03:13 np0005548731 nova_compute[232433]: 2025-12-06 08:03:13.526 232437 DEBUG oslo_concurrency.lockutils [req-74c79a2c-c789-4548-a459-676a36e50df2 req-4798739c-064a-4eda-addf-0e06b907c625 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "5631bb1a-c5dd-480d-ad0d-b8519f7d2858-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:03:13 np0005548731 nova_compute[232433]: 2025-12-06 08:03:13.527 232437 DEBUG oslo_concurrency.lockutils [req-74c79a2c-c789-4548-a459-676a36e50df2 req-4798739c-064a-4eda-addf-0e06b907c625 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "5631bb1a-c5dd-480d-ad0d-b8519f7d2858-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:03:13 np0005548731 nova_compute[232433]: 2025-12-06 08:03:13.527 232437 DEBUG nova.compute.manager [req-74c79a2c-c789-4548-a459-676a36e50df2 req-4798739c-064a-4eda-addf-0e06b907c625 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 5631bb1a-c5dd-480d-ad0d-b8519f7d2858] No waiting events found dispatching network-vif-unplugged-837c2676-98c7-4a84-b313-6146c9acfb65 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 03:03:13 np0005548731 nova_compute[232433]: 2025-12-06 08:03:13.527 232437 WARNING nova.compute.manager [req-74c79a2c-c789-4548-a459-676a36e50df2 req-4798739c-064a-4eda-addf-0e06b907c625 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 5631bb1a-c5dd-480d-ad0d-b8519f7d2858] Received unexpected event network-vif-unplugged-837c2676-98c7-4a84-b313-6146c9acfb65 for instance with vm_state active and task_state None.#033[00m
Dec  6 03:03:13 np0005548731 nova_compute[232433]: 2025-12-06 08:03:13.528 232437 DEBUG nova.compute.manager [req-74c79a2c-c789-4548-a459-676a36e50df2 req-4798739c-064a-4eda-addf-0e06b907c625 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 5631bb1a-c5dd-480d-ad0d-b8519f7d2858] Received event network-vif-plugged-837c2676-98c7-4a84-b313-6146c9acfb65 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 03:03:13 np0005548731 nova_compute[232433]: 2025-12-06 08:03:13.528 232437 DEBUG oslo_concurrency.lockutils [req-74c79a2c-c789-4548-a459-676a36e50df2 req-4798739c-064a-4eda-addf-0e06b907c625 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "5631bb1a-c5dd-480d-ad0d-b8519f7d2858-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:03:13 np0005548731 nova_compute[232433]: 2025-12-06 08:03:13.528 232437 DEBUG oslo_concurrency.lockutils [req-74c79a2c-c789-4548-a459-676a36e50df2 req-4798739c-064a-4eda-addf-0e06b907c625 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "5631bb1a-c5dd-480d-ad0d-b8519f7d2858-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:03:13 np0005548731 nova_compute[232433]: 2025-12-06 08:03:13.529 232437 DEBUG oslo_concurrency.lockutils [req-74c79a2c-c789-4548-a459-676a36e50df2 req-4798739c-064a-4eda-addf-0e06b907c625 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "5631bb1a-c5dd-480d-ad0d-b8519f7d2858-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:03:13 np0005548731 nova_compute[232433]: 2025-12-06 08:03:13.529 232437 DEBUG nova.compute.manager [req-74c79a2c-c789-4548-a459-676a36e50df2 req-4798739c-064a-4eda-addf-0e06b907c625 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 5631bb1a-c5dd-480d-ad0d-b8519f7d2858] No waiting events found dispatching network-vif-plugged-837c2676-98c7-4a84-b313-6146c9acfb65 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 03:03:13 np0005548731 nova_compute[232433]: 2025-12-06 08:03:13.529 232437 WARNING nova.compute.manager [req-74c79a2c-c789-4548-a459-676a36e50df2 req-4798739c-064a-4eda-addf-0e06b907c625 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 5631bb1a-c5dd-480d-ad0d-b8519f7d2858] Received unexpected event network-vif-plugged-837c2676-98c7-4a84-b313-6146c9acfb65 for instance with vm_state active and task_state None.#033[00m
Dec  6 03:03:13 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:03:13 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:03:13 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:03:13.672 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:03:14 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e411 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:03:14 np0005548731 nova_compute[232433]: 2025-12-06 08:03:14.484 232437 DEBUG oslo_concurrency.lockutils [None req-cca8475b-494e-4e87-a02c-71844362d792 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Acquiring lock "refresh_cache-5631bb1a-c5dd-480d-ad0d-b8519f7d2858" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 03:03:14 np0005548731 nova_compute[232433]: 2025-12-06 08:03:14.485 232437 DEBUG oslo_concurrency.lockutils [None req-cca8475b-494e-4e87-a02c-71844362d792 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Acquired lock "refresh_cache-5631bb1a-c5dd-480d-ad0d-b8519f7d2858" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 03:03:14 np0005548731 nova_compute[232433]: 2025-12-06 08:03:14.485 232437 DEBUG nova.network.neutron [None req-cca8475b-494e-4e87-a02c-71844362d792 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 5631bb1a-c5dd-480d-ad0d-b8519f7d2858] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec  6 03:03:14 np0005548731 nova_compute[232433]: 2025-12-06 08:03:14.593 232437 DEBUG nova.compute.manager [req-771a3038-a49c-4ed0-b054-55a6a75458a9 req-b2346300-fbcb-44f3-a025-50ae9ec43f7e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 5631bb1a-c5dd-480d-ad0d-b8519f7d2858] Received event network-vif-deleted-837c2676-98c7-4a84-b313-6146c9acfb65 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 03:03:14 np0005548731 nova_compute[232433]: 2025-12-06 08:03:14.594 232437 INFO nova.compute.manager [req-771a3038-a49c-4ed0-b054-55a6a75458a9 req-b2346300-fbcb-44f3-a025-50ae9ec43f7e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 5631bb1a-c5dd-480d-ad0d-b8519f7d2858] Neutron deleted interface 837c2676-98c7-4a84-b313-6146c9acfb65; detaching it from the instance and deleting it from the info cache#033[00m
Dec  6 03:03:14 np0005548731 nova_compute[232433]: 2025-12-06 08:03:14.595 232437 DEBUG nova.network.neutron [req-771a3038-a49c-4ed0-b054-55a6a75458a9 req-b2346300-fbcb-44f3-a025-50ae9ec43f7e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 5631bb1a-c5dd-480d-ad0d-b8519f7d2858] Updating instance_info_cache with network_info: [{"id": "8f7a7129-4a69-4a4c-8205-c5719b00005b", "address": "fa:16:3e:2d:94:e7", "network": {"id": "6b5f9e0f-ed86-4284-9c14-39494399dc0e", "bridge": "br-int", "label": "tempest-network-smoke--576336473", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.248", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f4735a799c84437b9dd4ea8778ad2fbb", "mtu": null, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8f7a7129-4a", "ovs_interfaceid": "8f7a7129-4a69-4a4c-8205-c5719b00005b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 03:03:14 np0005548731 nova_compute[232433]: 2025-12-06 08:03:14.627 232437 DEBUG nova.objects.instance [req-771a3038-a49c-4ed0-b054-55a6a75458a9 req-b2346300-fbcb-44f3-a025-50ae9ec43f7e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lazy-loading 'system_metadata' on Instance uuid 5631bb1a-c5dd-480d-ad0d-b8519f7d2858 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 03:03:14 np0005548731 nova_compute[232433]: 2025-12-06 08:03:14.676 232437 DEBUG nova.objects.instance [req-771a3038-a49c-4ed0-b054-55a6a75458a9 req-b2346300-fbcb-44f3-a025-50ae9ec43f7e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lazy-loading 'flavor' on Instance uuid 5631bb1a-c5dd-480d-ad0d-b8519f7d2858 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 03:03:14 np0005548731 nova_compute[232433]: 2025-12-06 08:03:14.706 232437 DEBUG nova.virt.libvirt.vif [req-771a3038-a49c-4ed0-b054-55a6a75458a9 req-b2346300-fbcb-44f3-a025-50ae9ec43f7e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T08:02:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1271350420',display_name='tempest-TestNetworkBasicOps-server-1271350420',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1271350420',id=185,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBE77FYJgDkrxzG8W4+xLXhPOdFEca45Bj8zAeRPk4LZkIHRLpB+9sImdEg1wjzkrSrzVyN00VUwIMIwYuEfIAoq3Nx58veJTCciIHJ/EgBnAVEqFMMrhoURvjqeCMkMzww==',key_name='tempest-TestNetworkBasicOps-70649793',keypairs=<?>,launch_index=0,launched_at=2025-12-06T08:02:26Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata=<?>,migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='f4735a799c84437b9dd4ea8778ad2fbb',ramdisk_id='',reservation_id='r-vjv0p4ap',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1435471576',owner_user_name='tempest-TestNetworkBasicOps-1435471576-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T08:02:27Z,user_data=None,user_id='d5359905348247d0b9b5b95982e890bb',uuid=5631bb1a-c5dd-480d-ad0d-b8519f7d2858,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "837c2676-98c7-4a84-b313-6146c9acfb65", "address": "fa:16:3e:84:14:0f", "network": {"id": "07739e87-9923-4e04-a051-ba5152c0258f", "bridge": "br-int", "label": "tempest-network-smoke--1533215199", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.21", "type": "fixed", "version": 4, "meta": {}, 
"floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f4735a799c84437b9dd4ea8778ad2fbb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap837c2676-98", "ovs_interfaceid": "837c2676-98c7-4a84-b313-6146c9acfb65", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec  6 03:03:14 np0005548731 nova_compute[232433]: 2025-12-06 08:03:14.706 232437 DEBUG nova.network.os_vif_util [req-771a3038-a49c-4ed0-b054-55a6a75458a9 req-b2346300-fbcb-44f3-a025-50ae9ec43f7e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Converting VIF {"id": "837c2676-98c7-4a84-b313-6146c9acfb65", "address": "fa:16:3e:84:14:0f", "network": {"id": "07739e87-9923-4e04-a051-ba5152c0258f", "bridge": "br-int", "label": "tempest-network-smoke--1533215199", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.21", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f4735a799c84437b9dd4ea8778ad2fbb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap837c2676-98", "ovs_interfaceid": "837c2676-98c7-4a84-b313-6146c9acfb65", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 03:03:14 np0005548731 nova_compute[232433]: 2025-12-06 08:03:14.707 232437 DEBUG nova.network.os_vif_util [req-771a3038-a49c-4ed0-b054-55a6a75458a9 req-b2346300-fbcb-44f3-a025-50ae9ec43f7e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:84:14:0f,bridge_name='br-int',has_traffic_filtering=True,id=837c2676-98c7-4a84-b313-6146c9acfb65,network=Network(07739e87-9923-4e04-a051-ba5152c0258f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap837c2676-98') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 03:03:14 np0005548731 nova_compute[232433]: 2025-12-06 08:03:14.710 232437 DEBUG nova.virt.libvirt.guest [req-771a3038-a49c-4ed0-b054-55a6a75458a9 req-b2346300-fbcb-44f3-a025-50ae9ec43f7e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:84:14:0f"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap837c2676-98"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Dec  6 03:03:14 np0005548731 nova_compute[232433]: 2025-12-06 08:03:14.713 232437 DEBUG nova.virt.libvirt.guest [req-771a3038-a49c-4ed0-b054-55a6a75458a9 req-b2346300-fbcb-44f3-a025-50ae9ec43f7e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:84:14:0f"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap837c2676-98"/></interface>not found in domain: <domain type='kvm' id='94'>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:  <name>instance-000000b9</name>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:  <uuid>5631bb1a-c5dd-480d-ad0d-b8519f7d2858</uuid>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:  <metadata>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec  6 03:03:14 np0005548731 nova_compute[232433]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:  <nova:name>tempest-TestNetworkBasicOps-server-1271350420</nova:name>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:  <nova:creationTime>2025-12-06 08:03:11</nova:creationTime>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:  <nova:flavor name="m1.nano">
Dec  6 03:03:14 np0005548731 nova_compute[232433]:    <nova:memory>128</nova:memory>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:    <nova:disk>1</nova:disk>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:    <nova:swap>0</nova:swap>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:    <nova:ephemeral>0</nova:ephemeral>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:    <nova:vcpus>1</nova:vcpus>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:  </nova:flavor>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:  <nova:owner>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:    <nova:user uuid="d5359905348247d0b9b5b95982e890bb">tempest-TestNetworkBasicOps-1435471576-project-member</nova:user>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:    <nova:project uuid="f4735a799c84437b9dd4ea8778ad2fbb">tempest-TestNetworkBasicOps-1435471576</nova:project>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:  </nova:owner>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:  <nova:root type="image" uuid="6efab05d-c7cf-4770-a5c3-c806a2739063"/>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:  <nova:ports>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:    <nova:port uuid="8f7a7129-4a69-4a4c-8205-c5719b00005b">
Dec  6 03:03:14 np0005548731 nova_compute[232433]:      <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:    </nova:port>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:  </nova:ports>
Dec  6 03:03:14 np0005548731 nova_compute[232433]: </nova:instance>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:  </metadata>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:  <memory unit='KiB'>131072</memory>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:  <currentMemory unit='KiB'>131072</currentMemory>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:  <vcpu placement='static'>1</vcpu>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:  <resource>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:    <partition>/machine</partition>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:  </resource>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:  <sysinfo type='smbios'>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:    <system>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:      <entry name='manufacturer'>RDO</entry>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:      <entry name='product'>OpenStack Compute</entry>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:      <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:      <entry name='serial'>5631bb1a-c5dd-480d-ad0d-b8519f7d2858</entry>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:      <entry name='uuid'>5631bb1a-c5dd-480d-ad0d-b8519f7d2858</entry>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:      <entry name='family'>Virtual Machine</entry>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:    </system>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:  </sysinfo>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:  <os>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:    <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:    <boot dev='hd'/>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:    <smbios mode='sysinfo'/>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:  </os>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:  <features>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:    <acpi/>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:    <apic/>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:    <vmcoreinfo state='on'/>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:  </features>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:  <cpu mode='custom' match='exact' check='full'>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:    <model fallback='forbid'>Nehalem</model>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:    <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:    <feature policy='require' name='x2apic'/>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:    <feature policy='require' name='hypervisor'/>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:    <feature policy='require' name='vme'/>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:  </cpu>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:  <clock offset='utc'>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:    <timer name='pit' tickpolicy='delay'/>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:    <timer name='rtc' tickpolicy='catchup'/>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:    <timer name='hpet' present='no'/>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:  </clock>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:  <on_poweroff>destroy</on_poweroff>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:  <on_reboot>restart</on_reboot>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:  <on_crash>destroy</on_crash>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:  <devices>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:    <emulator>/usr/libexec/qemu-kvm</emulator>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:    <disk type='network' device='disk'>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:      <driver name='qemu' type='raw' cache='none'/>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:      <auth username='openstack'>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:        <secret type='ceph' uuid='40a1bae4-cf76-5610-8dab-c75116dfe0bb'/>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:      </auth>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:      <source protocol='rbd' name='vms/5631bb1a-c5dd-480d-ad0d-b8519f7d2858_disk' index='2'>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:        <host name='192.168.122.100' port='6789'/>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:        <host name='192.168.122.102' port='6789'/>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:        <host name='192.168.122.101' port='6789'/>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:      </source>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:      <target dev='vda' bus='virtio'/>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:      <alias name='virtio-disk0'/>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:    </disk>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:    <disk type='network' device='cdrom'>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:      <driver name='qemu' type='raw' cache='none'/>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:      <auth username='openstack'>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:        <secret type='ceph' uuid='40a1bae4-cf76-5610-8dab-c75116dfe0bb'/>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:      </auth>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:      <source protocol='rbd' name='vms/5631bb1a-c5dd-480d-ad0d-b8519f7d2858_disk.config' index='1'>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:        <host name='192.168.122.100' port='6789'/>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:        <host name='192.168.122.102' port='6789'/>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:        <host name='192.168.122.101' port='6789'/>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:      </source>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:      <target dev='sda' bus='sata'/>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:      <readonly/>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:      <alias name='sata0-0-0'/>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:    </disk>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:    <controller type='pci' index='0' model='pcie-root'>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:      <alias name='pcie.0'/>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:    </controller>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:    <controller type='pci' index='1' model='pcie-root-port'>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:      <model name='pcie-root-port'/>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:      <target chassis='1' port='0x10'/>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:      <alias name='pci.1'/>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:    </controller>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:    <controller type='pci' index='2' model='pcie-root-port'>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:      <model name='pcie-root-port'/>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:      <target chassis='2' port='0x11'/>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:      <alias name='pci.2'/>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:    </controller>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:    <controller type='pci' index='3' model='pcie-root-port'>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:      <model name='pcie-root-port'/>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:      <target chassis='3' port='0x12'/>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:      <alias name='pci.3'/>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:    </controller>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:    <controller type='pci' index='4' model='pcie-root-port'>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:      <model name='pcie-root-port'/>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:      <target chassis='4' port='0x13'/>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:      <alias name='pci.4'/>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:    </controller>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:    <controller type='pci' index='5' model='pcie-root-port'>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:      <model name='pcie-root-port'/>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:      <target chassis='5' port='0x14'/>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:      <alias name='pci.5'/>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:    </controller>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:    <controller type='pci' index='6' model='pcie-root-port'>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:      <model name='pcie-root-port'/>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:      <target chassis='6' port='0x15'/>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:      <alias name='pci.6'/>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:    </controller>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:    <controller type='pci' index='7' model='pcie-root-port'>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:      <model name='pcie-root-port'/>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:      <target chassis='7' port='0x16'/>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:      <alias name='pci.7'/>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:    </controller>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:    <controller type='pci' index='8' model='pcie-root-port'>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:      <model name='pcie-root-port'/>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:      <target chassis='8' port='0x17'/>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:      <alias name='pci.8'/>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:    </controller>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:    <controller type='pci' index='9' model='pcie-root-port'>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:      <model name='pcie-root-port'/>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:      <target chassis='9' port='0x18'/>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:      <alias name='pci.9'/>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:    </controller>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:    <controller type='pci' index='10' model='pcie-root-port'>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:      <model name='pcie-root-port'/>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:      <target chassis='10' port='0x19'/>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:      <alias name='pci.10'/>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:    </controller>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:    <controller type='pci' index='11' model='pcie-root-port'>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:      <model name='pcie-root-port'/>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:      <target chassis='11' port='0x1a'/>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:      <alias name='pci.11'/>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:    </controller>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:    <controller type='pci' index='12' model='pcie-root-port'>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:      <model name='pcie-root-port'/>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:      <target chassis='12' port='0x1b'/>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:      <alias name='pci.12'/>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:    </controller>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:    <controller type='pci' index='13' model='pcie-root-port'>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:      <model name='pcie-root-port'/>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:      <target chassis='13' port='0x1c'/>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:      <alias name='pci.13'/>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:    </controller>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:    <controller type='pci' index='14' model='pcie-root-port'>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:      <model name='pcie-root-port'/>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:      <target chassis='14' port='0x1d'/>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:      <alias name='pci.14'/>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:    </controller>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:    <controller type='pci' index='15' model='pcie-root-port'>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:      <model name='pcie-root-port'/>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:      <target chassis='15' port='0x1e'/>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:      <alias name='pci.15'/>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:    </controller>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:    <controller type='pci' index='16' model='pcie-root-port'>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:      <model name='pcie-root-port'/>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:      <target chassis='16' port='0x1f'/>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:      <alias name='pci.16'/>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:    </controller>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:    <controller type='pci' index='17' model='pcie-root-port'>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:      <model name='pcie-root-port'/>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:      <target chassis='17' port='0x20'/>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:      <alias name='pci.17'/>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:    </controller>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:    <controller type='pci' index='18' model='pcie-root-port'>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:      <model name='pcie-root-port'/>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:      <target chassis='18' port='0x21'/>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:      <alias name='pci.18'/>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:    </controller>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:    <controller type='pci' index='19' model='pcie-root-port'>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:      <model name='pcie-root-port'/>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:      <target chassis='19' port='0x22'/>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:      <alias name='pci.19'/>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:    </controller>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:    <controller type='pci' index='20' model='pcie-root-port'>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:      <model name='pcie-root-port'/>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:      <target chassis='20' port='0x23'/>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:      <alias name='pci.20'/>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:    </controller>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:    <controller type='pci' index='21' model='pcie-root-port'>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:      <model name='pcie-root-port'/>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:      <target chassis='21' port='0x24'/>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:      <alias name='pci.21'/>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:    </controller>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:    <controller type='pci' index='22' model='pcie-root-port'>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:      <model name='pcie-root-port'/>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:      <target chassis='22' port='0x25'/>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:      <alias name='pci.22'/>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:    </controller>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:    <controller type='pci' index='23' model='pcie-root-port'>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:      <model name='pcie-root-port'/>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:      <target chassis='23' port='0x26'/>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:      <alias name='pci.23'/>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:    </controller>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:    <controller type='pci' index='24' model='pcie-root-port'>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:      <model name='pcie-root-port'/>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:      <target chassis='24' port='0x27'/>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:      <alias name='pci.24'/>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:    </controller>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:    <controller type='pci' index='25' model='pcie-root-port'>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:      <model name='pcie-root-port'/>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:      <target chassis='25' port='0x28'/>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:      <alias name='pci.25'/>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:    </controller>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:    <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:      <model name='pcie-pci-bridge'/>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:      <alias name='pci.26'/>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:    </controller>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:    <controller type='usb' index='0' model='piix3-uhci'>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:      <alias name='usb'/>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:      <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:    </controller>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:    <controller type='sata' index='0'>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:      <alias name='ide'/>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:    </controller>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:    <interface type='ethernet'>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:      <mac address='fa:16:3e:2d:94:e7'/>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:      <target dev='tap8f7a7129-4a'/>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:      <model type='virtio'/>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:      <driver name='vhost' rx_queue_size='512'/>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:      <mtu size='1442'/>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:      <alias name='net0'/>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:    </interface>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:    <serial type='pty'>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:      <source path='/dev/pts/0'/>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:      <log file='/var/lib/nova/instances/5631bb1a-c5dd-480d-ad0d-b8519f7d2858/console.log' append='off'/>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:      <target type='isa-serial' port='0'>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:        <model name='isa-serial'/>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:      </target>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:      <alias name='serial0'/>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:    </serial>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:    <console type='pty' tty='/dev/pts/0'>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:      <source path='/dev/pts/0'/>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:      <log file='/var/lib/nova/instances/5631bb1a-c5dd-480d-ad0d-b8519f7d2858/console.log' append='off'/>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:      <target type='serial' port='0'/>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:      <alias name='serial0'/>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:    </console>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:    <input type='tablet' bus='usb'>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:      <alias name='input0'/>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:      <address type='usb' bus='0' port='1'/>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:    </input>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:    <input type='mouse' bus='ps2'>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:      <alias name='input1'/>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:    </input>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:    <input type='keyboard' bus='ps2'>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:      <alias name='input2'/>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:    </input>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:    <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:      <listen type='address' address='::0'/>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:    </graphics>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:    <audio id='1' type='none'/>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:    <video>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:      <model type='virtio' heads='1' primary='yes'/>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:      <alias name='video0'/>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:    </video>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:    <watchdog model='itco' action='reset'>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:      <alias name='watchdog0'/>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:    </watchdog>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:    <memballoon model='virtio'>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:      <stats period='10'/>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:      <alias name='balloon0'/>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:    </memballoon>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:    <rng model='virtio'>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:      <backend model='random'>/dev/urandom</backend>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:      <alias name='rng0'/>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:    </rng>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:  </devices>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:  <seclabel type='dynamic' model='selinux' relabel='yes'>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:    <label>system_u:system_r:svirt_t:s0:c603,c605</label>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:    <imagelabel>system_u:object_r:svirt_image_t:s0:c603,c605</imagelabel>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:  </seclabel>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:  <seclabel type='dynamic' model='dac' relabel='yes'>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:    <label>+107:+107</label>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:    <imagelabel>+107:+107</imagelabel>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:  </seclabel>
Dec  6 03:03:14 np0005548731 nova_compute[232433]: </domain>
Dec  6 03:03:14 np0005548731 nova_compute[232433]: get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282#033[00m
Dec  6 03:03:14 np0005548731 nova_compute[232433]: 2025-12-06 08:03:14.714 232437 DEBUG nova.virt.libvirt.guest [req-771a3038-a49c-4ed0-b054-55a6a75458a9 req-b2346300-fbcb-44f3-a025-50ae9ec43f7e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:84:14:0f"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap837c2676-98"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Dec  6 03:03:14 np0005548731 nova_compute[232433]: 2025-12-06 08:03:14.718 232437 DEBUG nova.virt.libvirt.guest [req-771a3038-a49c-4ed0-b054-55a6a75458a9 req-b2346300-fbcb-44f3-a025-50ae9ec43f7e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:84:14:0f"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap837c2676-98"/></interface>not found in domain: <domain type='kvm' id='94'>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:  <name>instance-000000b9</name>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:  <uuid>5631bb1a-c5dd-480d-ad0d-b8519f7d2858</uuid>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:  <metadata>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec  6 03:03:14 np0005548731 nova_compute[232433]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:  <nova:name>tempest-TestNetworkBasicOps-server-1271350420</nova:name>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:  <nova:creationTime>2025-12-06 08:03:11</nova:creationTime>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:  <nova:flavor name="m1.nano">
Dec  6 03:03:14 np0005548731 nova_compute[232433]:    <nova:memory>128</nova:memory>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:    <nova:disk>1</nova:disk>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:    <nova:swap>0</nova:swap>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:    <nova:ephemeral>0</nova:ephemeral>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:    <nova:vcpus>1</nova:vcpus>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:  </nova:flavor>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:  <nova:owner>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:    <nova:user uuid="d5359905348247d0b9b5b95982e890bb">tempest-TestNetworkBasicOps-1435471576-project-member</nova:user>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:    <nova:project uuid="f4735a799c84437b9dd4ea8778ad2fbb">tempest-TestNetworkBasicOps-1435471576</nova:project>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:  </nova:owner>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:  <nova:root type="image" uuid="6efab05d-c7cf-4770-a5c3-c806a2739063"/>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:  <nova:ports>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:    <nova:port uuid="8f7a7129-4a69-4a4c-8205-c5719b00005b">
Dec  6 03:03:14 np0005548731 nova_compute[232433]:      <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:    </nova:port>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:  </nova:ports>
Dec  6 03:03:14 np0005548731 nova_compute[232433]: </nova:instance>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:  </metadata>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:  <memory unit='KiB'>131072</memory>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:  <currentMemory unit='KiB'>131072</currentMemory>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:  <vcpu placement='static'>1</vcpu>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:  <resource>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:    <partition>/machine</partition>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:  </resource>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:  <sysinfo type='smbios'>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:    <system>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:      <entry name='manufacturer'>RDO</entry>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:      <entry name='product'>OpenStack Compute</entry>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:      <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:      <entry name='serial'>5631bb1a-c5dd-480d-ad0d-b8519f7d2858</entry>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:      <entry name='uuid'>5631bb1a-c5dd-480d-ad0d-b8519f7d2858</entry>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:      <entry name='family'>Virtual Machine</entry>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:    </system>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:  </sysinfo>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:  <os>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:    <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:    <boot dev='hd'/>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:    <smbios mode='sysinfo'/>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:  </os>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:  <features>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:    <acpi/>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:    <apic/>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:    <vmcoreinfo state='on'/>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:  </features>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:  <cpu mode='custom' match='exact' check='full'>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:    <model fallback='forbid'>Nehalem</model>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:    <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:    <feature policy='require' name='x2apic'/>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:    <feature policy='require' name='hypervisor'/>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:    <feature policy='require' name='vme'/>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:  </cpu>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:  <clock offset='utc'>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:    <timer name='pit' tickpolicy='delay'/>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:    <timer name='rtc' tickpolicy='catchup'/>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:    <timer name='hpet' present='no'/>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:  </clock>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:  <on_poweroff>destroy</on_poweroff>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:  <on_reboot>restart</on_reboot>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:  <on_crash>destroy</on_crash>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:  <devices>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:    <emulator>/usr/libexec/qemu-kvm</emulator>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:    <disk type='network' device='disk'>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:      <driver name='qemu' type='raw' cache='none'/>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:      <auth username='openstack'>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:        <secret type='ceph' uuid='40a1bae4-cf76-5610-8dab-c75116dfe0bb'/>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:      </auth>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:      <source protocol='rbd' name='vms/5631bb1a-c5dd-480d-ad0d-b8519f7d2858_disk' index='2'>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:        <host name='192.168.122.100' port='6789'/>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:        <host name='192.168.122.102' port='6789'/>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:        <host name='192.168.122.101' port='6789'/>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:      </source>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:      <target dev='vda' bus='virtio'/>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:      <alias name='virtio-disk0'/>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:    </disk>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:    <disk type='network' device='cdrom'>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:      <driver name='qemu' type='raw' cache='none'/>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:      <auth username='openstack'>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:        <secret type='ceph' uuid='40a1bae4-cf76-5610-8dab-c75116dfe0bb'/>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:      </auth>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:      <source protocol='rbd' name='vms/5631bb1a-c5dd-480d-ad0d-b8519f7d2858_disk.config' index='1'>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:        <host name='192.168.122.100' port='6789'/>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:        <host name='192.168.122.102' port='6789'/>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:        <host name='192.168.122.101' port='6789'/>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:      </source>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:      <target dev='sda' bus='sata'/>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:      <readonly/>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:      <alias name='sata0-0-0'/>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:    </disk>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:    <controller type='pci' index='0' model='pcie-root'>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:      <alias name='pcie.0'/>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:    </controller>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:    <controller type='pci' index='1' model='pcie-root-port'>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:      <model name='pcie-root-port'/>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:      <target chassis='1' port='0x10'/>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:      <alias name='pci.1'/>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:    </controller>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:    <controller type='pci' index='2' model='pcie-root-port'>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:      <model name='pcie-root-port'/>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:      <target chassis='2' port='0x11'/>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:      <alias name='pci.2'/>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:    </controller>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:    <controller type='pci' index='3' model='pcie-root-port'>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:      <model name='pcie-root-port'/>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:      <target chassis='3' port='0x12'/>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:      <alias name='pci.3'/>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:    </controller>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:    <controller type='pci' index='4' model='pcie-root-port'>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:      <model name='pcie-root-port'/>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:      <target chassis='4' port='0x13'/>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:      <alias name='pci.4'/>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:    </controller>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:    <controller type='pci' index='5' model='pcie-root-port'>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:      <model name='pcie-root-port'/>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:      <target chassis='5' port='0x14'/>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:      <alias name='pci.5'/>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:    </controller>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:    <controller type='pci' index='6' model='pcie-root-port'>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:      <model name='pcie-root-port'/>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:      <target chassis='6' port='0x15'/>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:      <alias name='pci.6'/>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:    </controller>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:    <controller type='pci' index='7' model='pcie-root-port'>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:      <model name='pcie-root-port'/>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:      <target chassis='7' port='0x16'/>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:      <alias name='pci.7'/>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:    </controller>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:    <controller type='pci' index='8' model='pcie-root-port'>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:      <model name='pcie-root-port'/>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:      <target chassis='8' port='0x17'/>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:      <alias name='pci.8'/>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:    </controller>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:    <controller type='pci' index='9' model='pcie-root-port'>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:      <model name='pcie-root-port'/>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:      <target chassis='9' port='0x18'/>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:      <alias name='pci.9'/>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:    </controller>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:    <controller type='pci' index='10' model='pcie-root-port'>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:      <model name='pcie-root-port'/>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:      <target chassis='10' port='0x19'/>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:      <alias name='pci.10'/>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:    </controller>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:    <controller type='pci' index='11' model='pcie-root-port'>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:      <model name='pcie-root-port'/>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:      <target chassis='11' port='0x1a'/>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:      <alias name='pci.11'/>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:    </controller>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:    <controller type='pci' index='12' model='pcie-root-port'>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:      <model name='pcie-root-port'/>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:      <target chassis='12' port='0x1b'/>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:      <alias name='pci.12'/>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:    </controller>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:    <controller type='pci' index='13' model='pcie-root-port'>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:      <model name='pcie-root-port'/>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:      <target chassis='13' port='0x1c'/>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:      <alias name='pci.13'/>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:    </controller>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:    <controller type='pci' index='14' model='pcie-root-port'>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:      <model name='pcie-root-port'/>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:      <target chassis='14' port='0x1d'/>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:      <alias name='pci.14'/>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:    </controller>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:    <controller type='pci' index='15' model='pcie-root-port'>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:      <model name='pcie-root-port'/>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:      <target chassis='15' port='0x1e'/>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:      <alias name='pci.15'/>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:    </controller>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:    <controller type='pci' index='16' model='pcie-root-port'>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:      <model name='pcie-root-port'/>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:      <target chassis='16' port='0x1f'/>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:      <alias name='pci.16'/>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:    </controller>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:    <controller type='pci' index='17' model='pcie-root-port'>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:      <model name='pcie-root-port'/>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:      <target chassis='17' port='0x20'/>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:      <alias name='pci.17'/>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:    </controller>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:    <controller type='pci' index='18' model='pcie-root-port'>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:      <model name='pcie-root-port'/>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:      <target chassis='18' port='0x21'/>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:      <alias name='pci.18'/>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:    </controller>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:    <controller type='pci' index='19' model='pcie-root-port'>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:      <model name='pcie-root-port'/>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:      <target chassis='19' port='0x22'/>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:      <alias name='pci.19'/>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:    </controller>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:    <controller type='pci' index='20' model='pcie-root-port'>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:      <model name='pcie-root-port'/>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:      <target chassis='20' port='0x23'/>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:      <alias name='pci.20'/>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:    </controller>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:    <controller type='pci' index='21' model='pcie-root-port'>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:      <model name='pcie-root-port'/>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:      <target chassis='21' port='0x24'/>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:      <alias name='pci.21'/>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:    </controller>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:    <controller type='pci' index='22' model='pcie-root-port'>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:      <model name='pcie-root-port'/>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:      <target chassis='22' port='0x25'/>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:      <alias name='pci.22'/>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:    </controller>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:    <controller type='pci' index='23' model='pcie-root-port'>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:      <model name='pcie-root-port'/>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:      <target chassis='23' port='0x26'/>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:      <alias name='pci.23'/>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:    </controller>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:    <controller type='pci' index='24' model='pcie-root-port'>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:      <model name='pcie-root-port'/>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:      <target chassis='24' port='0x27'/>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:      <alias name='pci.24'/>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:    </controller>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:    <controller type='pci' index='25' model='pcie-root-port'>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:      <model name='pcie-root-port'/>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:      <target chassis='25' port='0x28'/>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:      <alias name='pci.25'/>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:    </controller>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:    <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:      <model name='pcie-pci-bridge'/>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:      <alias name='pci.26'/>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:    </controller>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:    <controller type='usb' index='0' model='piix3-uhci'>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:      <alias name='usb'/>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:      <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:    </controller>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:    <controller type='sata' index='0'>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:      <alias name='ide'/>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:    </controller>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:    <interface type='ethernet'>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:      <mac address='fa:16:3e:2d:94:e7'/>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:      <target dev='tap8f7a7129-4a'/>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:      <model type='virtio'/>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:      <driver name='vhost' rx_queue_size='512'/>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:      <mtu size='1442'/>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:      <alias name='net0'/>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:    </interface>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:    <serial type='pty'>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:      <source path='/dev/pts/0'/>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:      <log file='/var/lib/nova/instances/5631bb1a-c5dd-480d-ad0d-b8519f7d2858/console.log' append='off'/>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:      <target type='isa-serial' port='0'>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:        <model name='isa-serial'/>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:      </target>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:      <alias name='serial0'/>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:    </serial>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:    <console type='pty' tty='/dev/pts/0'>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:      <source path='/dev/pts/0'/>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:      <log file='/var/lib/nova/instances/5631bb1a-c5dd-480d-ad0d-b8519f7d2858/console.log' append='off'/>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:      <target type='serial' port='0'/>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:      <alias name='serial0'/>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:    </console>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:    <input type='tablet' bus='usb'>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:      <alias name='input0'/>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:      <address type='usb' bus='0' port='1'/>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:    </input>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:    <input type='mouse' bus='ps2'>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:      <alias name='input1'/>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:    </input>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:    <input type='keyboard' bus='ps2'>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:      <alias name='input2'/>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:    </input>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:    <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:      <listen type='address' address='::0'/>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:    </graphics>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:    <audio id='1' type='none'/>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:    <video>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:      <model type='virtio' heads='1' primary='yes'/>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:      <alias name='video0'/>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:    </video>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:    <watchdog model='itco' action='reset'>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:      <alias name='watchdog0'/>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:    </watchdog>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:    <memballoon model='virtio'>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:      <stats period='10'/>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:      <alias name='balloon0'/>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:    </memballoon>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:    <rng model='virtio'>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:      <backend model='random'>/dev/urandom</backend>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:      <alias name='rng0'/>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:    </rng>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:  </devices>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:  <seclabel type='dynamic' model='selinux' relabel='yes'>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:    <label>system_u:system_r:svirt_t:s0:c603,c605</label>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:    <imagelabel>system_u:object_r:svirt_image_t:s0:c603,c605</imagelabel>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:  </seclabel>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:  <seclabel type='dynamic' model='dac' relabel='yes'>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:    <label>+107:+107</label>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:    <imagelabel>+107:+107</imagelabel>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:  </seclabel>
Dec  6 03:03:14 np0005548731 nova_compute[232433]: </domain>
Dec  6 03:03:14 np0005548731 nova_compute[232433]: get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Dec  6 03:03:14 np0005548731 nova_compute[232433]: 2025-12-06 08:03:14.718 232437 WARNING nova.virt.libvirt.driver [req-771a3038-a49c-4ed0-b054-55a6a75458a9 req-b2346300-fbcb-44f3-a025-50ae9ec43f7e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 5631bb1a-c5dd-480d-ad0d-b8519f7d2858] Detaching interface fa:16:3e:84:14:0f failed because the device is no longer found on the guest.: nova.exception.DeviceNotFound: Device 'tap837c2676-98' not found.
Dec  6 03:03:14 np0005548731 nova_compute[232433]: 2025-12-06 08:03:14.719 232437 DEBUG nova.virt.libvirt.vif [req-771a3038-a49c-4ed0-b054-55a6a75458a9 req-b2346300-fbcb-44f3-a025-50ae9ec43f7e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T08:02:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1271350420',display_name='tempest-TestNetworkBasicOps-server-1271350420',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1271350420',id=185,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBE77FYJgDkrxzG8W4+xLXhPOdFEca45Bj8zAeRPk4LZkIHRLpB+9sImdEg1wjzkrSrzVyN00VUwIMIwYuEfIAoq3Nx58veJTCciIHJ/EgBnAVEqFMMrhoURvjqeCMkMzww==',key_name='tempest-TestNetworkBasicOps-70649793',keypairs=<?>,launch_index=0,launched_at=2025-12-06T08:02:26Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata=<?>,migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='f4735a799c84437b9dd4ea8778ad2fbb',ramdisk_id='',reservation_id='r-vjv0p4ap',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1435471576',owner_user_name='tempest-TestNetworkBasicOps-1435471576-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T08:02:27Z,user_data=None,user_id='d5359905348247d0b9b5b95982e890bb',uuid=5631bb1a-c5dd-480d-ad0d-b8519f7d2858,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "837c2676-98c7-4a84-b313-6146c9acfb65", "address": "fa:16:3e:84:14:0f", "network": {"id": "07739e87-9923-4e04-a051-ba5152c0258f", "bridge": "br-int", "label": "tempest-network-smoke--1533215199", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.21", "type": "fixed", "version": 4, "meta": {}, 
"floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f4735a799c84437b9dd4ea8778ad2fbb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap837c2676-98", "ovs_interfaceid": "837c2676-98c7-4a84-b313-6146c9acfb65", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec  6 03:03:14 np0005548731 nova_compute[232433]: 2025-12-06 08:03:14.719 232437 DEBUG nova.network.os_vif_util [req-771a3038-a49c-4ed0-b054-55a6a75458a9 req-b2346300-fbcb-44f3-a025-50ae9ec43f7e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Converting VIF {"id": "837c2676-98c7-4a84-b313-6146c9acfb65", "address": "fa:16:3e:84:14:0f", "network": {"id": "07739e87-9923-4e04-a051-ba5152c0258f", "bridge": "br-int", "label": "tempest-network-smoke--1533215199", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.21", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f4735a799c84437b9dd4ea8778ad2fbb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap837c2676-98", "ovs_interfaceid": "837c2676-98c7-4a84-b313-6146c9acfb65", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 03:03:14 np0005548731 nova_compute[232433]: 2025-12-06 08:03:14.720 232437 DEBUG nova.network.os_vif_util [req-771a3038-a49c-4ed0-b054-55a6a75458a9 req-b2346300-fbcb-44f3-a025-50ae9ec43f7e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:84:14:0f,bridge_name='br-int',has_traffic_filtering=True,id=837c2676-98c7-4a84-b313-6146c9acfb65,network=Network(07739e87-9923-4e04-a051-ba5152c0258f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap837c2676-98') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 03:03:14 np0005548731 nova_compute[232433]: 2025-12-06 08:03:14.720 232437 DEBUG os_vif [req-771a3038-a49c-4ed0-b054-55a6a75458a9 req-b2346300-fbcb-44f3-a025-50ae9ec43f7e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:84:14:0f,bridge_name='br-int',has_traffic_filtering=True,id=837c2676-98c7-4a84-b313-6146c9acfb65,network=Network(07739e87-9923-4e04-a051-ba5152c0258f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap837c2676-98') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec  6 03:03:14 np0005548731 nova_compute[232433]: 2025-12-06 08:03:14.721 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:03:14 np0005548731 nova_compute[232433]: 2025-12-06 08:03:14.721 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap837c2676-98, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 03:03:14 np0005548731 nova_compute[232433]: 2025-12-06 08:03:14.722 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  6 03:03:14 np0005548731 nova_compute[232433]: 2025-12-06 08:03:14.724 232437 INFO os_vif [req-771a3038-a49c-4ed0-b054-55a6a75458a9 req-b2346300-fbcb-44f3-a025-50ae9ec43f7e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:84:14:0f,bridge_name='br-int',has_traffic_filtering=True,id=837c2676-98c7-4a84-b313-6146c9acfb65,network=Network(07739e87-9923-4e04-a051-ba5152c0258f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap837c2676-98')#033[00m
Dec  6 03:03:14 np0005548731 nova_compute[232433]: 2025-12-06 08:03:14.725 232437 DEBUG nova.virt.libvirt.guest [req-771a3038-a49c-4ed0-b054-55a6a75458a9 req-b2346300-fbcb-44f3-a025-50ae9ec43f7e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec  6 03:03:14 np0005548731 nova_compute[232433]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:  <nova:name>tempest-TestNetworkBasicOps-server-1271350420</nova:name>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:  <nova:creationTime>2025-12-06 08:03:14</nova:creationTime>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:  <nova:flavor name="m1.nano">
Dec  6 03:03:14 np0005548731 nova_compute[232433]:    <nova:memory>128</nova:memory>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:    <nova:disk>1</nova:disk>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:    <nova:swap>0</nova:swap>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:    <nova:ephemeral>0</nova:ephemeral>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:    <nova:vcpus>1</nova:vcpus>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:  </nova:flavor>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:  <nova:owner>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:    <nova:user uuid="d5359905348247d0b9b5b95982e890bb">tempest-TestNetworkBasicOps-1435471576-project-member</nova:user>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:    <nova:project uuid="f4735a799c84437b9dd4ea8778ad2fbb">tempest-TestNetworkBasicOps-1435471576</nova:project>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:  </nova:owner>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:  <nova:root type="image" uuid="6efab05d-c7cf-4770-a5c3-c806a2739063"/>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:  <nova:ports>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:    <nova:port uuid="8f7a7129-4a69-4a4c-8205-c5719b00005b">
Dec  6 03:03:14 np0005548731 nova_compute[232433]:      <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:    </nova:port>
Dec  6 03:03:14 np0005548731 nova_compute[232433]:  </nova:ports>
Dec  6 03:03:14 np0005548731 nova_compute[232433]: </nova:instance>
Dec  6 03:03:14 np0005548731 nova_compute[232433]: set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359#033[00m
Dec  6 03:03:14 np0005548731 podman[322225]: 2025-12-06 08:03:14.894484068 +0000 UTC m=+0.048753483 container health_status 26c2ce9bdb83c6db5ccad7f5b2a7cb4e9354ab0235ad2f942ea42c8feda2142b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Dec  6 03:03:14 np0005548731 podman[322227]: 2025-12-06 08:03:14.93343333 +0000 UTC m=+0.083736428 container health_status ae4fd08cdec31d6ae2a6b91ffff7deb6ca5a8375cb52ee4b685cc6f25796824e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  6 03:03:14 np0005548731 podman[322226]: 2025-12-06 08:03:14.937227593 +0000 UTC m=+0.087912780 container health_status a118c3becf56cfbcae208065771ebf10a65162bb7b96389971293e534823fb78 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Dec  6 03:03:15 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:03:15 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:03:15 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:03:15.014 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:03:15 np0005548731 nova_compute[232433]: 2025-12-06 08:03:15.656 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:03:15 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:03:15 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:03:15 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:03:15.674 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:03:16 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:03:16.183 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=80, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:ec:b3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '32:72:e7:89:e0:7d'}, ipsec=False) old=SB_Global(nb_cfg=79) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 03:03:16 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:03:16.184 143965 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec  6 03:03:16 np0005548731 nova_compute[232433]: 2025-12-06 08:03:16.184 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:03:16 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e412 e412: 3 total, 3 up, 3 in
Dec  6 03:03:16 np0005548731 nova_compute[232433]: 2025-12-06 08:03:16.436 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:03:16 np0005548731 ovn_controller[133927]: 2025-12-06T08:03:16Z|00944|binding|INFO|Releasing lport 90543e6f-010c-4bda-99b8-1a8d429fda2f from this chassis (sb_readonly=0)
Dec  6 03:03:16 np0005548731 nova_compute[232433]: 2025-12-06 08:03:16.863 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:03:17 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:03:17 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:03:17 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:03:17.017 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:03:17 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:03:17.186 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=9f96b960-b4f2-40bd-ae99-08121f5e8b78, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '80'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 03:03:17 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:03:17 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:03:17 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:03:17.676 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:03:17 np0005548731 nova_compute[232433]: 2025-12-06 08:03:17.867 232437 INFO nova.network.neutron [None req-cca8475b-494e-4e87-a02c-71844362d792 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 5631bb1a-c5dd-480d-ad0d-b8519f7d2858] Port 837c2676-98c7-4a84-b313-6146c9acfb65 from network info_cache is no longer associated with instance in Neutron. Removing from network info_cache.#033[00m
Dec  6 03:03:17 np0005548731 nova_compute[232433]: 2025-12-06 08:03:17.868 232437 DEBUG nova.network.neutron [None req-cca8475b-494e-4e87-a02c-71844362d792 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 5631bb1a-c5dd-480d-ad0d-b8519f7d2858] Updating instance_info_cache with network_info: [{"id": "8f7a7129-4a69-4a4c-8205-c5719b00005b", "address": "fa:16:3e:2d:94:e7", "network": {"id": "6b5f9e0f-ed86-4284-9c14-39494399dc0e", "bridge": "br-int", "label": "tempest-network-smoke--576336473", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.248", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f4735a799c84437b9dd4ea8778ad2fbb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8f7a7129-4a", "ovs_interfaceid": "8f7a7129-4a69-4a4c-8205-c5719b00005b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 03:03:17 np0005548731 nova_compute[232433]: 2025-12-06 08:03:17.896 232437 DEBUG oslo_concurrency.lockutils [None req-cca8475b-494e-4e87-a02c-71844362d792 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Releasing lock "refresh_cache-5631bb1a-c5dd-480d-ad0d-b8519f7d2858" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 03:03:17 np0005548731 nova_compute[232433]: 2025-12-06 08:03:17.927 232437 DEBUG oslo_concurrency.lockutils [None req-cca8475b-494e-4e87-a02c-71844362d792 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Lock "interface-5631bb1a-c5dd-480d-ad0d-b8519f7d2858-837c2676-98c7-4a84-b313-6146c9acfb65" "released" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: held 6.760s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:03:18 np0005548731 nova_compute[232433]: 2025-12-06 08:03:18.581 232437 DEBUG nova.compute.manager [req-3dd46d46-bb1a-4d74-9275-88d408a139fb req-aac2ae6a-45d4-48c4-9da0-d013cb0afd8e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 5631bb1a-c5dd-480d-ad0d-b8519f7d2858] Received event network-changed-8f7a7129-4a69-4a4c-8205-c5719b00005b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 03:03:18 np0005548731 nova_compute[232433]: 2025-12-06 08:03:18.582 232437 DEBUG nova.compute.manager [req-3dd46d46-bb1a-4d74-9275-88d408a139fb req-aac2ae6a-45d4-48c4-9da0-d013cb0afd8e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 5631bb1a-c5dd-480d-ad0d-b8519f7d2858] Refreshing instance network info cache due to event network-changed-8f7a7129-4a69-4a4c-8205-c5719b00005b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  6 03:03:18 np0005548731 nova_compute[232433]: 2025-12-06 08:03:18.582 232437 DEBUG oslo_concurrency.lockutils [req-3dd46d46-bb1a-4d74-9275-88d408a139fb req-aac2ae6a-45d4-48c4-9da0-d013cb0afd8e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-5631bb1a-c5dd-480d-ad0d-b8519f7d2858" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 03:03:18 np0005548731 nova_compute[232433]: 2025-12-06 08:03:18.582 232437 DEBUG oslo_concurrency.lockutils [req-3dd46d46-bb1a-4d74-9275-88d408a139fb req-aac2ae6a-45d4-48c4-9da0-d013cb0afd8e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-5631bb1a-c5dd-480d-ad0d-b8519f7d2858" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 03:03:18 np0005548731 nova_compute[232433]: 2025-12-06 08:03:18.582 232437 DEBUG nova.network.neutron [req-3dd46d46-bb1a-4d74-9275-88d408a139fb req-aac2ae6a-45d4-48c4-9da0-d013cb0afd8e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 5631bb1a-c5dd-480d-ad0d-b8519f7d2858] Refreshing network info cache for port 8f7a7129-4a69-4a4c-8205-c5719b00005b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  6 03:03:18 np0005548731 nova_compute[232433]: 2025-12-06 08:03:18.629 232437 DEBUG oslo_concurrency.lockutils [None req-ceb4f65f-28d9-4ea2-a762-6456637c0a7f d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Acquiring lock "5631bb1a-c5dd-480d-ad0d-b8519f7d2858" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:03:18 np0005548731 nova_compute[232433]: 2025-12-06 08:03:18.630 232437 DEBUG oslo_concurrency.lockutils [None req-ceb4f65f-28d9-4ea2-a762-6456637c0a7f d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Lock "5631bb1a-c5dd-480d-ad0d-b8519f7d2858" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:03:18 np0005548731 nova_compute[232433]: 2025-12-06 08:03:18.630 232437 DEBUG oslo_concurrency.lockutils [None req-ceb4f65f-28d9-4ea2-a762-6456637c0a7f d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Acquiring lock "5631bb1a-c5dd-480d-ad0d-b8519f7d2858-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:03:18 np0005548731 nova_compute[232433]: 2025-12-06 08:03:18.630 232437 DEBUG oslo_concurrency.lockutils [None req-ceb4f65f-28d9-4ea2-a762-6456637c0a7f d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Lock "5631bb1a-c5dd-480d-ad0d-b8519f7d2858-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:03:18 np0005548731 nova_compute[232433]: 2025-12-06 08:03:18.631 232437 DEBUG oslo_concurrency.lockutils [None req-ceb4f65f-28d9-4ea2-a762-6456637c0a7f d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Lock "5631bb1a-c5dd-480d-ad0d-b8519f7d2858-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:03:18 np0005548731 nova_compute[232433]: 2025-12-06 08:03:18.632 232437 INFO nova.compute.manager [None req-ceb4f65f-28d9-4ea2-a762-6456637c0a7f d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 5631bb1a-c5dd-480d-ad0d-b8519f7d2858] Terminating instance#033[00m
Dec  6 03:03:18 np0005548731 nova_compute[232433]: 2025-12-06 08:03:18.633 232437 DEBUG nova.compute.manager [None req-ceb4f65f-28d9-4ea2-a762-6456637c0a7f d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 5631bb1a-c5dd-480d-ad0d-b8519f7d2858] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec  6 03:03:18 np0005548731 kernel: tap8f7a7129-4a (unregistering): left promiscuous mode
Dec  6 03:03:18 np0005548731 NetworkManager[49182]: <info>  [1765008198.6888] device (tap8f7a7129-4a): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec  6 03:03:18 np0005548731 ovn_controller[133927]: 2025-12-06T08:03:18Z|00945|binding|INFO|Releasing lport 8f7a7129-4a69-4a4c-8205-c5719b00005b from this chassis (sb_readonly=0)
Dec  6 03:03:18 np0005548731 ovn_controller[133927]: 2025-12-06T08:03:18Z|00946|binding|INFO|Setting lport 8f7a7129-4a69-4a4c-8205-c5719b00005b down in Southbound
Dec  6 03:03:18 np0005548731 ovn_controller[133927]: 2025-12-06T08:03:18Z|00947|binding|INFO|Removing iface tap8f7a7129-4a ovn-installed in OVS
Dec  6 03:03:18 np0005548731 nova_compute[232433]: 2025-12-06 08:03:18.699 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:03:18 np0005548731 nova_compute[232433]: 2025-12-06 08:03:18.701 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:03:18 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:03:18.706 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:2d:94:e7 10.100.0.7'], port_security=['fa:16:3e:2d:94:e7 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '5631bb1a-c5dd-480d-ad0d-b8519f7d2858', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6b5f9e0f-ed86-4284-9c14-39494399dc0e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f4735a799c84437b9dd4ea8778ad2fbb', 'neutron:revision_number': '4', 'neutron:security_group_ids': '4f0cdec7-18c7-485f-bd98-88eef8c2e308', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b927104e-a7e2-447d-8b28-1e80d27343ce, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], logical_port=8f7a7129-4a69-4a4c-8205-c5719b00005b) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 03:03:18 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:03:18.707 143965 INFO neutron.agent.ovn.metadata.agent [-] Port 8f7a7129-4a69-4a4c-8205-c5719b00005b in datapath 6b5f9e0f-ed86-4284-9c14-39494399dc0e unbound from our chassis#033[00m
Dec  6 03:03:18 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:03:18.708 143965 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 6b5f9e0f-ed86-4284-9c14-39494399dc0e, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec  6 03:03:18 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:03:18.710 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[dead64ee-2733-4f46-9324-a497f84e3386]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:03:18 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:03:18.713 143965 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-6b5f9e0f-ed86-4284-9c14-39494399dc0e namespace which is not needed anymore#033[00m
Dec  6 03:03:18 np0005548731 nova_compute[232433]: 2025-12-06 08:03:18.717 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:03:18 np0005548731 systemd[1]: machine-qemu\x2d94\x2dinstance\x2d000000b9.scope: Deactivated successfully.
Dec  6 03:03:18 np0005548731 systemd[1]: machine-qemu\x2d94\x2dinstance\x2d000000b9.scope: Consumed 16.036s CPU time.
Dec  6 03:03:18 np0005548731 systemd-machined[195355]: Machine qemu-94-instance-000000b9 terminated.
Dec  6 03:03:18 np0005548731 neutron-haproxy-ovnmeta-6b5f9e0f-ed86-4284-9c14-39494399dc0e[321571]: [NOTICE]   (321578) : haproxy version is 2.8.14-c23fe91
Dec  6 03:03:18 np0005548731 neutron-haproxy-ovnmeta-6b5f9e0f-ed86-4284-9c14-39494399dc0e[321571]: [NOTICE]   (321578) : path to executable is /usr/sbin/haproxy
Dec  6 03:03:18 np0005548731 neutron-haproxy-ovnmeta-6b5f9e0f-ed86-4284-9c14-39494399dc0e[321571]: [WARNING]  (321578) : Exiting Master process...
Dec  6 03:03:18 np0005548731 neutron-haproxy-ovnmeta-6b5f9e0f-ed86-4284-9c14-39494399dc0e[321571]: [WARNING]  (321578) : Exiting Master process...
Dec  6 03:03:18 np0005548731 neutron-haproxy-ovnmeta-6b5f9e0f-ed86-4284-9c14-39494399dc0e[321571]: [ALERT]    (321578) : Current worker (321581) exited with code 143 (Terminated)
Dec  6 03:03:18 np0005548731 neutron-haproxy-ovnmeta-6b5f9e0f-ed86-4284-9c14-39494399dc0e[321571]: [WARNING]  (321578) : All workers exited. Exiting... (0)
Dec  6 03:03:18 np0005548731 systemd[1]: libpod-930397b43b82509a4fa36d1ace61ad92706dc075d07b29c280c133547d89eee9.scope: Deactivated successfully.
Dec  6 03:03:18 np0005548731 conmon[321571]: conmon 930397b43b82509a4fa3 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-930397b43b82509a4fa36d1ace61ad92706dc075d07b29c280c133547d89eee9.scope/container/memory.events
Dec  6 03:03:18 np0005548731 podman[322315]: 2025-12-06 08:03:18.844217787 +0000 UTC m=+0.047995364 container died 930397b43b82509a4fa36d1ace61ad92706dc075d07b29c280c133547d89eee9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6b5f9e0f-ed86-4284-9c14-39494399dc0e, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec  6 03:03:18 np0005548731 nova_compute[232433]: 2025-12-06 08:03:18.871 232437 INFO nova.virt.libvirt.driver [-] [instance: 5631bb1a-c5dd-480d-ad0d-b8519f7d2858] Instance destroyed successfully.#033[00m
Dec  6 03:03:18 np0005548731 nova_compute[232433]: 2025-12-06 08:03:18.872 232437 DEBUG nova.objects.instance [None req-ceb4f65f-28d9-4ea2-a762-6456637c0a7f d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Lazy-loading 'resources' on Instance uuid 5631bb1a-c5dd-480d-ad0d-b8519f7d2858 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 03:03:18 np0005548731 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-930397b43b82509a4fa36d1ace61ad92706dc075d07b29c280c133547d89eee9-userdata-shm.mount: Deactivated successfully.
Dec  6 03:03:18 np0005548731 systemd[1]: var-lib-containers-storage-overlay-459e0b987752519b1c945198bd2d0071001ba748a184371765317e08e3462970-merged.mount: Deactivated successfully.
Dec  6 03:03:18 np0005548731 podman[322315]: 2025-12-06 08:03:18.889394591 +0000 UTC m=+0.093172158 container cleanup 930397b43b82509a4fa36d1ace61ad92706dc075d07b29c280c133547d89eee9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6b5f9e0f-ed86-4284-9c14-39494399dc0e, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Dec  6 03:03:18 np0005548731 nova_compute[232433]: 2025-12-06 08:03:18.893 232437 DEBUG nova.virt.libvirt.vif [None req-ceb4f65f-28d9-4ea2-a762-6456637c0a7f d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T08:02:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1271350420',display_name='tempest-TestNetworkBasicOps-server-1271350420',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1271350420',id=185,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBE77FYJgDkrxzG8W4+xLXhPOdFEca45Bj8zAeRPk4LZkIHRLpB+9sImdEg1wjzkrSrzVyN00VUwIMIwYuEfIAoq3Nx58veJTCciIHJ/EgBnAVEqFMMrhoURvjqeCMkMzww==',key_name='tempest-TestNetworkBasicOps-70649793',keypairs=<?>,launch_index=0,launched_at=2025-12-06T08:02:26Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='f4735a799c84437b9dd4ea8778ad2fbb',ramdisk_id='',reservation_id='r-vjv0p4ap',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1435471576',owner_user_name='tempest-TestNetworkBasicOps-1435471576-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T08:02:27Z,user_data=None,user_id='d5359905348247d0b9b5b95982e890bb',uuid=5631bb1a-c5dd-480d-ad0d-b8519f7d2858,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "8f7a7129-4a69-4a4c-8205-c5719b00005b", "address": "fa:16:3e:2d:94:e7", "network": {"id": "6b5f9e0f-ed86-4284-9c14-39494399dc0e", "bridge": "br-int", "label": "tempest-network-smoke--576336473", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.248", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f4735a799c84437b9dd4ea8778ad2fbb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8f7a7129-4a", "ovs_interfaceid": "8f7a7129-4a69-4a4c-8205-c5719b00005b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec  6 03:03:18 np0005548731 nova_compute[232433]: 2025-12-06 08:03:18.894 232437 DEBUG nova.network.os_vif_util [None req-ceb4f65f-28d9-4ea2-a762-6456637c0a7f d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Converting VIF {"id": "8f7a7129-4a69-4a4c-8205-c5719b00005b", "address": "fa:16:3e:2d:94:e7", "network": {"id": "6b5f9e0f-ed86-4284-9c14-39494399dc0e", "bridge": "br-int", "label": "tempest-network-smoke--576336473", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.248", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f4735a799c84437b9dd4ea8778ad2fbb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8f7a7129-4a", "ovs_interfaceid": "8f7a7129-4a69-4a4c-8205-c5719b00005b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 03:03:18 np0005548731 nova_compute[232433]: 2025-12-06 08:03:18.894 232437 DEBUG nova.network.os_vif_util [None req-ceb4f65f-28d9-4ea2-a762-6456637c0a7f d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:2d:94:e7,bridge_name='br-int',has_traffic_filtering=True,id=8f7a7129-4a69-4a4c-8205-c5719b00005b,network=Network(6b5f9e0f-ed86-4284-9c14-39494399dc0e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8f7a7129-4a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 03:03:18 np0005548731 nova_compute[232433]: 2025-12-06 08:03:18.895 232437 DEBUG os_vif [None req-ceb4f65f-28d9-4ea2-a762-6456637c0a7f d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:2d:94:e7,bridge_name='br-int',has_traffic_filtering=True,id=8f7a7129-4a69-4a4c-8205-c5719b00005b,network=Network(6b5f9e0f-ed86-4284-9c14-39494399dc0e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8f7a7129-4a') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec  6 03:03:18 np0005548731 nova_compute[232433]: 2025-12-06 08:03:18.896 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:03:18 np0005548731 systemd[1]: libpod-conmon-930397b43b82509a4fa36d1ace61ad92706dc075d07b29c280c133547d89eee9.scope: Deactivated successfully.
Dec  6 03:03:18 np0005548731 nova_compute[232433]: 2025-12-06 08:03:18.897 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8f7a7129-4a, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 03:03:18 np0005548731 podman[322354]: 2025-12-06 08:03:18.952233568 +0000 UTC m=+0.039791834 container remove 930397b43b82509a4fa36d1ace61ad92706dc075d07b29c280c133547d89eee9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6b5f9e0f-ed86-4284-9c14-39494399dc0e, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec  6 03:03:18 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:03:18.954 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[61388ecc-17a0-4904-a5a2-22f4b669f318]: (4, ('Sat Dec  6 08:03:18 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-6b5f9e0f-ed86-4284-9c14-39494399dc0e (930397b43b82509a4fa36d1ace61ad92706dc075d07b29c280c133547d89eee9)\n930397b43b82509a4fa36d1ace61ad92706dc075d07b29c280c133547d89eee9\nSat Dec  6 08:03:18 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-6b5f9e0f-ed86-4284-9c14-39494399dc0e (930397b43b82509a4fa36d1ace61ad92706dc075d07b29c280c133547d89eee9)\n930397b43b82509a4fa36d1ace61ad92706dc075d07b29c280c133547d89eee9\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:03:18 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:03:18.955 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[a554f8be-33aa-4ef6-9443-5ee09fe1ce79]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:03:18 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:03:18.956 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6b5f9e0f-e0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 03:03:18 np0005548731 nova_compute[232433]: 2025-12-06 08:03:18.960 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:03:18 np0005548731 kernel: tap6b5f9e0f-e0: left promiscuous mode
Dec  6 03:03:18 np0005548731 nova_compute[232433]: 2025-12-06 08:03:18.963 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  6 03:03:18 np0005548731 nova_compute[232433]: 2025-12-06 08:03:18.973 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:03:18 np0005548731 nova_compute[232433]: 2025-12-06 08:03:18.976 232437 INFO os_vif [None req-ceb4f65f-28d9-4ea2-a762-6456637c0a7f d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:2d:94:e7,bridge_name='br-int',has_traffic_filtering=True,id=8f7a7129-4a69-4a4c-8205-c5719b00005b,network=Network(6b5f9e0f-ed86-4284-9c14-39494399dc0e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8f7a7129-4a')#033[00m
Dec  6 03:03:18 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:03:18.976 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[6175f0eb-51dc-48e8-9eac-26e5f4777732]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:03:18 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:03:18.992 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[ab5c936f-bcbc-45b9-9b4b-6085f7261081]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:03:18 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:03:18.993 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[a1b3094b-3f6a-4cdd-8b9e-ffff54fd5784]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:03:19 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:03:19.009 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[e0c413dc-ed17-4ead-a6e1-9a76ab9bedc5]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 840784, 'reachable_time': 19178, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 322382, 'error': None, 'target': 'ovnmeta-6b5f9e0f-ed86-4284-9c14-39494399dc0e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:03:19 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:03:19.012 144079 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-6b5f9e0f-ed86-4284-9c14-39494399dc0e deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Dec  6 03:03:19 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:03:19.012 144079 DEBUG oslo.privsep.daemon [-] privsep: reply[e4bb4d4f-3a1e-4e29-b22c-5da6993b50dc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:03:19 np0005548731 systemd[1]: run-netns-ovnmeta\x2d6b5f9e0f\x2ded86\x2d4284\x2d9c14\x2d39494399dc0e.mount: Deactivated successfully.
Dec  6 03:03:19 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:03:19 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:03:19 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:03:19.020 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:03:19 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e412 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:03:19 np0005548731 nova_compute[232433]: 2025-12-06 08:03:19.362 232437 INFO nova.virt.libvirt.driver [None req-ceb4f65f-28d9-4ea2-a762-6456637c0a7f d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 5631bb1a-c5dd-480d-ad0d-b8519f7d2858] Deleting instance files /var/lib/nova/instances/5631bb1a-c5dd-480d-ad0d-b8519f7d2858_del#033[00m
Dec  6 03:03:19 np0005548731 nova_compute[232433]: 2025-12-06 08:03:19.363 232437 INFO nova.virt.libvirt.driver [None req-ceb4f65f-28d9-4ea2-a762-6456637c0a7f d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 5631bb1a-c5dd-480d-ad0d-b8519f7d2858] Deletion of /var/lib/nova/instances/5631bb1a-c5dd-480d-ad0d-b8519f7d2858_del complete#033[00m
Dec  6 03:03:19 np0005548731 nova_compute[232433]: 2025-12-06 08:03:19.494 232437 INFO nova.compute.manager [None req-ceb4f65f-28d9-4ea2-a762-6456637c0a7f d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 5631bb1a-c5dd-480d-ad0d-b8519f7d2858] Took 0.86 seconds to destroy the instance on the hypervisor.#033[00m
Dec  6 03:03:19 np0005548731 nova_compute[232433]: 2025-12-06 08:03:19.495 232437 DEBUG oslo.service.loopingcall [None req-ceb4f65f-28d9-4ea2-a762-6456637c0a7f d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Dec  6 03:03:19 np0005548731 nova_compute[232433]: 2025-12-06 08:03:19.495 232437 DEBUG nova.compute.manager [-] [instance: 5631bb1a-c5dd-480d-ad0d-b8519f7d2858] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Dec  6 03:03:19 np0005548731 nova_compute[232433]: 2025-12-06 08:03:19.495 232437 DEBUG nova.network.neutron [-] [instance: 5631bb1a-c5dd-480d-ad0d-b8519f7d2858] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Dec  6 03:03:19 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:03:19 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:03:19 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:03:19.680 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:03:20 np0005548731 nova_compute[232433]: 2025-12-06 08:03:20.428 232437 DEBUG nova.compute.manager [req-a72d1fb4-02a2-4f85-a19f-7461cf649d2a req-414a5afb-c0f3-445f-b5a3-66dc252c863f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 5631bb1a-c5dd-480d-ad0d-b8519f7d2858] Received event network-vif-unplugged-8f7a7129-4a69-4a4c-8205-c5719b00005b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 03:03:20 np0005548731 nova_compute[232433]: 2025-12-06 08:03:20.428 232437 DEBUG oslo_concurrency.lockutils [req-a72d1fb4-02a2-4f85-a19f-7461cf649d2a req-414a5afb-c0f3-445f-b5a3-66dc252c863f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "5631bb1a-c5dd-480d-ad0d-b8519f7d2858-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:03:20 np0005548731 nova_compute[232433]: 2025-12-06 08:03:20.429 232437 DEBUG oslo_concurrency.lockutils [req-a72d1fb4-02a2-4f85-a19f-7461cf649d2a req-414a5afb-c0f3-445f-b5a3-66dc252c863f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "5631bb1a-c5dd-480d-ad0d-b8519f7d2858-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:03:20 np0005548731 nova_compute[232433]: 2025-12-06 08:03:20.429 232437 DEBUG oslo_concurrency.lockutils [req-a72d1fb4-02a2-4f85-a19f-7461cf649d2a req-414a5afb-c0f3-445f-b5a3-66dc252c863f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "5631bb1a-c5dd-480d-ad0d-b8519f7d2858-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:03:20 np0005548731 nova_compute[232433]: 2025-12-06 08:03:20.429 232437 DEBUG nova.compute.manager [req-a72d1fb4-02a2-4f85-a19f-7461cf649d2a req-414a5afb-c0f3-445f-b5a3-66dc252c863f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 5631bb1a-c5dd-480d-ad0d-b8519f7d2858] No waiting events found dispatching network-vif-unplugged-8f7a7129-4a69-4a4c-8205-c5719b00005b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 03:03:20 np0005548731 nova_compute[232433]: 2025-12-06 08:03:20.429 232437 DEBUG nova.compute.manager [req-a72d1fb4-02a2-4f85-a19f-7461cf649d2a req-414a5afb-c0f3-445f-b5a3-66dc252c863f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 5631bb1a-c5dd-480d-ad0d-b8519f7d2858] Received event network-vif-unplugged-8f7a7129-4a69-4a4c-8205-c5719b00005b for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Dec  6 03:03:20 np0005548731 nova_compute[232433]: 2025-12-06 08:03:20.658 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:03:21 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:03:21 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:03:21 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:03:21.024 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:03:21 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:03:21 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:03:21 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:03:21.683 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:03:21 np0005548731 nova_compute[232433]: 2025-12-06 08:03:21.786 232437 DEBUG nova.network.neutron [-] [instance: 5631bb1a-c5dd-480d-ad0d-b8519f7d2858] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 03:03:21 np0005548731 nova_compute[232433]: 2025-12-06 08:03:21.822 232437 INFO nova.compute.manager [-] [instance: 5631bb1a-c5dd-480d-ad0d-b8519f7d2858] Took 2.33 seconds to deallocate network for instance.#033[00m
Dec  6 03:03:21 np0005548731 nova_compute[232433]: 2025-12-06 08:03:21.892 232437 DEBUG oslo_concurrency.lockutils [None req-ceb4f65f-28d9-4ea2-a762-6456637c0a7f d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:03:21 np0005548731 nova_compute[232433]: 2025-12-06 08:03:21.893 232437 DEBUG oslo_concurrency.lockutils [None req-ceb4f65f-28d9-4ea2-a762-6456637c0a7f d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:03:21 np0005548731 nova_compute[232433]: 2025-12-06 08:03:21.940 232437 DEBUG nova.compute.manager [req-baf3f2f7-ea3c-45d3-a7db-1385cee78859 req-47d191dc-2870-4b87-bac6-75ca8f483f38 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 5631bb1a-c5dd-480d-ad0d-b8519f7d2858] Received event network-vif-deleted-8f7a7129-4a69-4a4c-8205-c5719b00005b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 03:03:21 np0005548731 nova_compute[232433]: 2025-12-06 08:03:21.949 232437 DEBUG oslo_concurrency.processutils [None req-ceb4f65f-28d9-4ea2-a762-6456637c0a7f d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 03:03:22 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 03:03:22 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3120929876' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 03:03:22 np0005548731 nova_compute[232433]: 2025-12-06 08:03:22.367 232437 DEBUG oslo_concurrency.processutils [None req-ceb4f65f-28d9-4ea2-a762-6456637c0a7f d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.417s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 03:03:22 np0005548731 nova_compute[232433]: 2025-12-06 08:03:22.373 232437 DEBUG nova.compute.provider_tree [None req-ceb4f65f-28d9-4ea2-a762-6456637c0a7f d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Inventory has not changed in ProviderTree for provider: 6d00757a-082f-486d-ae84-869a2ba2e6e7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  6 03:03:22 np0005548731 nova_compute[232433]: 2025-12-06 08:03:22.397 232437 DEBUG nova.scheduler.client.report [None req-ceb4f65f-28d9-4ea2-a762-6456637c0a7f d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Inventory has not changed for provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  6 03:03:22 np0005548731 nova_compute[232433]: 2025-12-06 08:03:22.446 232437 DEBUG nova.network.neutron [req-3dd46d46-bb1a-4d74-9275-88d408a139fb req-aac2ae6a-45d4-48c4-9da0-d013cb0afd8e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 5631bb1a-c5dd-480d-ad0d-b8519f7d2858] Updated VIF entry in instance network info cache for port 8f7a7129-4a69-4a4c-8205-c5719b00005b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  6 03:03:22 np0005548731 nova_compute[232433]: 2025-12-06 08:03:22.447 232437 DEBUG nova.network.neutron [req-3dd46d46-bb1a-4d74-9275-88d408a139fb req-aac2ae6a-45d4-48c4-9da0-d013cb0afd8e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 5631bb1a-c5dd-480d-ad0d-b8519f7d2858] Updating instance_info_cache with network_info: [{"id": "8f7a7129-4a69-4a4c-8205-c5719b00005b", "address": "fa:16:3e:2d:94:e7", "network": {"id": "6b5f9e0f-ed86-4284-9c14-39494399dc0e", "bridge": "br-int", "label": "tempest-network-smoke--576336473", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f4735a799c84437b9dd4ea8778ad2fbb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8f7a7129-4a", "ovs_interfaceid": "8f7a7129-4a69-4a4c-8205-c5719b00005b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 03:03:22 np0005548731 nova_compute[232433]: 2025-12-06 08:03:22.449 232437 DEBUG oslo_concurrency.lockutils [None req-ceb4f65f-28d9-4ea2-a762-6456637c0a7f d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.556s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:03:22 np0005548731 nova_compute[232433]: 2025-12-06 08:03:22.467 232437 DEBUG oslo_concurrency.lockutils [req-3dd46d46-bb1a-4d74-9275-88d408a139fb req-aac2ae6a-45d4-48c4-9da0-d013cb0afd8e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-5631bb1a-c5dd-480d-ad0d-b8519f7d2858" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 03:03:22 np0005548731 nova_compute[232433]: 2025-12-06 08:03:22.482 232437 INFO nova.scheduler.client.report [None req-ceb4f65f-28d9-4ea2-a762-6456637c0a7f d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Deleted allocations for instance 5631bb1a-c5dd-480d-ad0d-b8519f7d2858#033[00m
Dec  6 03:03:22 np0005548731 nova_compute[232433]: 2025-12-06 08:03:22.561 232437 DEBUG oslo_concurrency.lockutils [None req-ceb4f65f-28d9-4ea2-a762-6456637c0a7f d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Lock "5631bb1a-c5dd-480d-ad0d-b8519f7d2858" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.931s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:03:22 np0005548731 nova_compute[232433]: 2025-12-06 08:03:22.563 232437 DEBUG nova.compute.manager [req-78602fae-0a2d-412a-9a35-77b7734bd745 req-74b16184-e067-4284-85d0-037b740f4542 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 5631bb1a-c5dd-480d-ad0d-b8519f7d2858] Received event network-vif-plugged-8f7a7129-4a69-4a4c-8205-c5719b00005b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 03:03:22 np0005548731 nova_compute[232433]: 2025-12-06 08:03:22.564 232437 DEBUG oslo_concurrency.lockutils [req-78602fae-0a2d-412a-9a35-77b7734bd745 req-74b16184-e067-4284-85d0-037b740f4542 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "5631bb1a-c5dd-480d-ad0d-b8519f7d2858-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:03:22 np0005548731 nova_compute[232433]: 2025-12-06 08:03:22.564 232437 DEBUG oslo_concurrency.lockutils [req-78602fae-0a2d-412a-9a35-77b7734bd745 req-74b16184-e067-4284-85d0-037b740f4542 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "5631bb1a-c5dd-480d-ad0d-b8519f7d2858-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:03:22 np0005548731 nova_compute[232433]: 2025-12-06 08:03:22.564 232437 DEBUG oslo_concurrency.lockutils [req-78602fae-0a2d-412a-9a35-77b7734bd745 req-74b16184-e067-4284-85d0-037b740f4542 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "5631bb1a-c5dd-480d-ad0d-b8519f7d2858-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:03:22 np0005548731 nova_compute[232433]: 2025-12-06 08:03:22.564 232437 DEBUG nova.compute.manager [req-78602fae-0a2d-412a-9a35-77b7734bd745 req-74b16184-e067-4284-85d0-037b740f4542 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 5631bb1a-c5dd-480d-ad0d-b8519f7d2858] No waiting events found dispatching network-vif-plugged-8f7a7129-4a69-4a4c-8205-c5719b00005b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 03:03:22 np0005548731 nova_compute[232433]: 2025-12-06 08:03:22.565 232437 WARNING nova.compute.manager [req-78602fae-0a2d-412a-9a35-77b7734bd745 req-74b16184-e067-4284-85d0-037b740f4542 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 5631bb1a-c5dd-480d-ad0d-b8519f7d2858] Received unexpected event network-vif-plugged-8f7a7129-4a69-4a4c-8205-c5719b00005b for instance with vm_state deleted and task_state None.#033[00m
Dec  6 03:03:23 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:03:23 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:03:23 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:03:23.026 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:03:23 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:03:23 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:03:23 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:03:23.686 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:03:23 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e413 e413: 3 total, 3 up, 3 in
Dec  6 03:03:24 np0005548731 nova_compute[232433]: 2025-12-06 08:03:24.002 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:03:24 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e413 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:03:25 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:03:25 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:03:25 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:03:25.030 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:03:25 np0005548731 nova_compute[232433]: 2025-12-06 08:03:25.661 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:03:25 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:03:25 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 03:03:25 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:03:25.689 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 03:03:27 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:03:27 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 03:03:27 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:03:27.032 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 03:03:27 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:03:27 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:03:27 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:03:27.692 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:03:29 np0005548731 nova_compute[232433]: 2025-12-06 08:03:29.004 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:03:29 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:03:29 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:03:29 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:03:29.036 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:03:29 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e413 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:03:29 np0005548731 nova_compute[232433]: 2025-12-06 08:03:29.479 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:03:29 np0005548731 nova_compute[232433]: 2025-12-06 08:03:29.593 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:03:29 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:03:29 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:03:29 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:03:29.694 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:03:30 np0005548731 nova_compute[232433]: 2025-12-06 08:03:30.664 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:03:31 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:03:31 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:03:31 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:03:31.038 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:03:31 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:03:31 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:03:31 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:03:31.697 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:03:33 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:03:33 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:03:33 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:03:33.041 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:03:33 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:03:33 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 03:03:33 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:03:33.699 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 03:03:33 np0005548731 nova_compute[232433]: 2025-12-06 08:03:33.867 232437 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765008198.8659968, 5631bb1a-c5dd-480d-ad0d-b8519f7d2858 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 03:03:33 np0005548731 nova_compute[232433]: 2025-12-06 08:03:33.868 232437 INFO nova.compute.manager [-] [instance: 5631bb1a-c5dd-480d-ad0d-b8519f7d2858] VM Stopped (Lifecycle Event)#033[00m
Dec  6 03:03:34 np0005548731 nova_compute[232433]: 2025-12-06 08:03:34.036 232437 DEBUG nova.compute.manager [None req-de14afa4-0829-4294-956a-de83341010bc - - - - - -] [instance: 5631bb1a-c5dd-480d-ad0d-b8519f7d2858] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 03:03:34 np0005548731 nova_compute[232433]: 2025-12-06 08:03:34.037 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:03:34 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e413 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:03:35 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:03:35 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:03:35 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:03:35.043 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:03:35 np0005548731 nova_compute[232433]: 2025-12-06 08:03:35.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:03:35 np0005548731 nova_compute[232433]: 2025-12-06 08:03:35.105 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  6 03:03:35 np0005548731 nova_compute[232433]: 2025-12-06 08:03:35.105 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  6 03:03:35 np0005548731 nova_compute[232433]: 2025-12-06 08:03:35.124 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Dec  6 03:03:35 np0005548731 nova_compute[232433]: 2025-12-06 08:03:35.666 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:03:35 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:03:35 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:03:35 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:03:35.703 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:03:37 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:03:37 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:03:37 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:03:37.046 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:03:37 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:03:37 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:03:37 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:03:37.705 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:03:39 np0005548731 nova_compute[232433]: 2025-12-06 08:03:39.040 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:03:39 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:03:39 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:03:39 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:03:39.049 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:03:39 np0005548731 nova_compute[232433]: 2025-12-06 08:03:39.105 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:03:39 np0005548731 nova_compute[232433]: 2025-12-06 08:03:39.105 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:03:39 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e413 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:03:39 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:03:39 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:03:39 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:03:39.707 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:03:40 np0005548731 nova_compute[232433]: 2025-12-06 08:03:40.105 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:03:40 np0005548731 nova_compute[232433]: 2025-12-06 08:03:40.105 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  6 03:03:40 np0005548731 nova_compute[232433]: 2025-12-06 08:03:40.669 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:03:41 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:03:41 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:03:41 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:03:41.052 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:03:41 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:03:41 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:03:41 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:03:41.709 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:03:43 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:03:43 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:03:43 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:03:43.055 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:03:43 np0005548731 nova_compute[232433]: 2025-12-06 08:03:43.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:03:43 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:03:43 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:03:43 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:03:43.712 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:03:44 np0005548731 nova_compute[232433]: 2025-12-06 08:03:44.042 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:03:44 np0005548731 nova_compute[232433]: 2025-12-06 08:03:44.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:03:44 np0005548731 nova_compute[232433]: 2025-12-06 08:03:44.105 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:03:44 np0005548731 nova_compute[232433]: 2025-12-06 08:03:44.131 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:03:44 np0005548731 nova_compute[232433]: 2025-12-06 08:03:44.132 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:03:44 np0005548731 nova_compute[232433]: 2025-12-06 08:03:44.132 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:03:44 np0005548731 nova_compute[232433]: 2025-12-06 08:03:44.132 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  6 03:03:44 np0005548731 nova_compute[232433]: 2025-12-06 08:03:44.133 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 03:03:44 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e413 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:03:44 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 03:03:44 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1425635196' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 03:03:44 np0005548731 nova_compute[232433]: 2025-12-06 08:03:44.550 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.417s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 03:03:44 np0005548731 nova_compute[232433]: 2025-12-06 08:03:44.696 232437 WARNING nova.virt.libvirt.driver [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  6 03:03:44 np0005548731 nova_compute[232433]: 2025-12-06 08:03:44.698 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4176MB free_disk=20.98813247680664GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  6 03:03:44 np0005548731 nova_compute[232433]: 2025-12-06 08:03:44.698 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:03:44 np0005548731 nova_compute[232433]: 2025-12-06 08:03:44.698 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:03:44 np0005548731 nova_compute[232433]: 2025-12-06 08:03:44.780 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  6 03:03:44 np0005548731 nova_compute[232433]: 2025-12-06 08:03:44.781 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  6 03:03:44 np0005548731 nova_compute[232433]: 2025-12-06 08:03:44.808 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 03:03:45 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:03:45 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:03:45 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:03:45.057 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:03:45 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 03:03:45 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1345692704' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 03:03:45 np0005548731 nova_compute[232433]: 2025-12-06 08:03:45.232 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.424s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 03:03:45 np0005548731 nova_compute[232433]: 2025-12-06 08:03:45.237 232437 DEBUG nova.compute.provider_tree [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Inventory has not changed in ProviderTree for provider: 6d00757a-082f-486d-ae84-869a2ba2e6e7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  6 03:03:45 np0005548731 nova_compute[232433]: 2025-12-06 08:03:45.253 232437 DEBUG nova.scheduler.client.report [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Inventory has not changed for provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  6 03:03:45 np0005548731 nova_compute[232433]: 2025-12-06 08:03:45.281 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  6 03:03:45 np0005548731 nova_compute[232433]: 2025-12-06 08:03:45.282 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.583s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:03:45 np0005548731 nova_compute[232433]: 2025-12-06 08:03:45.671 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:03:45 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:03:45 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:03:45 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:03:45.715 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:03:45 np0005548731 podman[322519]: 2025-12-06 08:03:45.896956462 +0000 UTC m=+0.054564575 container health_status 26c2ce9bdb83c6db5ccad7f5b2a7cb4e9354ab0235ad2f942ea42c8feda2142b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec  6 03:03:45 np0005548731 podman[322521]: 2025-12-06 08:03:45.903319727 +0000 UTC m=+0.054857062 container health_status ae4fd08cdec31d6ae2a6b91ffff7deb6ca5a8375cb52ee4b685cc6f25796824e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec  6 03:03:45 np0005548731 podman[322520]: 2025-12-06 08:03:45.926381262 +0000 UTC m=+0.083054382 container health_status a118c3becf56cfbcae208065771ebf10a65162bb7b96389971293e534823fb78 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Dec  6 03:03:47 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:03:47 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:03:47 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:03:47.058 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:03:47 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:03:47 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:03:47 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:03:47.717 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:03:48 np0005548731 nova_compute[232433]: 2025-12-06 08:03:48.282 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:03:48 np0005548731 nova_compute[232433]: 2025-12-06 08:03:48.283 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:03:49 np0005548731 nova_compute[232433]: 2025-12-06 08:03:49.044 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:03:49 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:03:49 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:03:49 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:03:49.059 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:03:49 np0005548731 nova_compute[232433]: 2025-12-06 08:03:49.183 232437 DEBUG oslo_concurrency.lockutils [None req-f450ac06-05a3-41a0-9357-e7e49a88a9e8 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Acquiring lock "5b86240b-8681-4833-a88a-b665a431a2fb" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:03:49 np0005548731 nova_compute[232433]: 2025-12-06 08:03:49.184 232437 DEBUG oslo_concurrency.lockutils [None req-f450ac06-05a3-41a0-9357-e7e49a88a9e8 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Lock "5b86240b-8681-4833-a88a-b665a431a2fb" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:03:49 np0005548731 nova_compute[232433]: 2025-12-06 08:03:49.198 232437 DEBUG nova.compute.manager [None req-f450ac06-05a3-41a0-9357-e7e49a88a9e8 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 5b86240b-8681-4833-a88a-b665a431a2fb] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Dec  6 03:03:49 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e413 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:03:49 np0005548731 nova_compute[232433]: 2025-12-06 08:03:49.317 232437 DEBUG oslo_concurrency.lockutils [None req-f450ac06-05a3-41a0-9357-e7e49a88a9e8 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:03:49 np0005548731 nova_compute[232433]: 2025-12-06 08:03:49.318 232437 DEBUG oslo_concurrency.lockutils [None req-f450ac06-05a3-41a0-9357-e7e49a88a9e8 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:03:49 np0005548731 nova_compute[232433]: 2025-12-06 08:03:49.325 232437 DEBUG nova.virt.hardware [None req-f450ac06-05a3-41a0-9357-e7e49a88a9e8 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Dec  6 03:03:49 np0005548731 nova_compute[232433]: 2025-12-06 08:03:49.326 232437 INFO nova.compute.claims [None req-f450ac06-05a3-41a0-9357-e7e49a88a9e8 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 5b86240b-8681-4833-a88a-b665a431a2fb] Claim successful on node compute-2.ctlplane.example.com#033[00m
Dec  6 03:03:49 np0005548731 nova_compute[232433]: 2025-12-06 08:03:49.460 232437 DEBUG oslo_concurrency.processutils [None req-f450ac06-05a3-41a0-9357-e7e49a88a9e8 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 03:03:49 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:03:49 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:03:49 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:03:49.720 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:03:49 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 03:03:49 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2935238790' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 03:03:49 np0005548731 nova_compute[232433]: 2025-12-06 08:03:49.945 232437 DEBUG oslo_concurrency.processutils [None req-f450ac06-05a3-41a0-9357-e7e49a88a9e8 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.485s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 03:03:49 np0005548731 nova_compute[232433]: 2025-12-06 08:03:49.951 232437 DEBUG nova.compute.provider_tree [None req-f450ac06-05a3-41a0-9357-e7e49a88a9e8 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Inventory has not changed in ProviderTree for provider: 6d00757a-082f-486d-ae84-869a2ba2e6e7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  6 03:03:49 np0005548731 nova_compute[232433]: 2025-12-06 08:03:49.977 232437 DEBUG nova.scheduler.client.report [None req-f450ac06-05a3-41a0-9357-e7e49a88a9e8 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Inventory has not changed for provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  6 03:03:50 np0005548731 nova_compute[232433]: 2025-12-06 08:03:50.011 232437 DEBUG oslo_concurrency.lockutils [None req-f450ac06-05a3-41a0-9357-e7e49a88a9e8 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.694s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:03:50 np0005548731 nova_compute[232433]: 2025-12-06 08:03:50.012 232437 DEBUG nova.compute.manager [None req-f450ac06-05a3-41a0-9357-e7e49a88a9e8 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 5b86240b-8681-4833-a88a-b665a431a2fb] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Dec  6 03:03:50 np0005548731 nova_compute[232433]: 2025-12-06 08:03:50.064 232437 DEBUG nova.compute.manager [None req-f450ac06-05a3-41a0-9357-e7e49a88a9e8 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 5b86240b-8681-4833-a88a-b665a431a2fb] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Dec  6 03:03:50 np0005548731 nova_compute[232433]: 2025-12-06 08:03:50.064 232437 DEBUG nova.network.neutron [None req-f450ac06-05a3-41a0-9357-e7e49a88a9e8 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 5b86240b-8681-4833-a88a-b665a431a2fb] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Dec  6 03:03:50 np0005548731 nova_compute[232433]: 2025-12-06 08:03:50.096 232437 INFO nova.virt.libvirt.driver [None req-f450ac06-05a3-41a0-9357-e7e49a88a9e8 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 5b86240b-8681-4833-a88a-b665a431a2fb] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Dec  6 03:03:50 np0005548731 nova_compute[232433]: 2025-12-06 08:03:50.113 232437 DEBUG nova.compute.manager [None req-f450ac06-05a3-41a0-9357-e7e49a88a9e8 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 5b86240b-8681-4833-a88a-b665a431a2fb] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Dec  6 03:03:50 np0005548731 nova_compute[232433]: 2025-12-06 08:03:50.210 232437 DEBUG nova.compute.manager [None req-f450ac06-05a3-41a0-9357-e7e49a88a9e8 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 5b86240b-8681-4833-a88a-b665a431a2fb] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Dec  6 03:03:50 np0005548731 nova_compute[232433]: 2025-12-06 08:03:50.211 232437 DEBUG nova.virt.libvirt.driver [None req-f450ac06-05a3-41a0-9357-e7e49a88a9e8 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 5b86240b-8681-4833-a88a-b665a431a2fb] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Dec  6 03:03:50 np0005548731 nova_compute[232433]: 2025-12-06 08:03:50.211 232437 INFO nova.virt.libvirt.driver [None req-f450ac06-05a3-41a0-9357-e7e49a88a9e8 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 5b86240b-8681-4833-a88a-b665a431a2fb] Creating image(s)#033[00m
Dec  6 03:03:50 np0005548731 nova_compute[232433]: 2025-12-06 08:03:50.231 232437 DEBUG nova.storage.rbd_utils [None req-f450ac06-05a3-41a0-9357-e7e49a88a9e8 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] rbd image 5b86240b-8681-4833-a88a-b665a431a2fb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 03:03:50 np0005548731 nova_compute[232433]: 2025-12-06 08:03:50.252 232437 DEBUG nova.storage.rbd_utils [None req-f450ac06-05a3-41a0-9357-e7e49a88a9e8 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] rbd image 5b86240b-8681-4833-a88a-b665a431a2fb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 03:03:50 np0005548731 nova_compute[232433]: 2025-12-06 08:03:50.273 232437 DEBUG nova.storage.rbd_utils [None req-f450ac06-05a3-41a0-9357-e7e49a88a9e8 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] rbd image 5b86240b-8681-4833-a88a-b665a431a2fb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 03:03:50 np0005548731 nova_compute[232433]: 2025-12-06 08:03:50.277 232437 DEBUG oslo_concurrency.processutils [None req-f450ac06-05a3-41a0-9357-e7e49a88a9e8 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 03:03:50 np0005548731 nova_compute[232433]: 2025-12-06 08:03:50.342 232437 DEBUG oslo_concurrency.processutils [None req-f450ac06-05a3-41a0-9357-e7e49a88a9e8 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 03:03:50 np0005548731 nova_compute[232433]: 2025-12-06 08:03:50.343 232437 DEBUG oslo_concurrency.lockutils [None req-f450ac06-05a3-41a0-9357-e7e49a88a9e8 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Acquiring lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:03:50 np0005548731 nova_compute[232433]: 2025-12-06 08:03:50.344 232437 DEBUG oslo_concurrency.lockutils [None req-f450ac06-05a3-41a0-9357-e7e49a88a9e8 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:03:50 np0005548731 nova_compute[232433]: 2025-12-06 08:03:50.344 232437 DEBUG oslo_concurrency.lockutils [None req-f450ac06-05a3-41a0-9357-e7e49a88a9e8 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:03:50 np0005548731 nova_compute[232433]: 2025-12-06 08:03:50.365 232437 DEBUG nova.storage.rbd_utils [None req-f450ac06-05a3-41a0-9357-e7e49a88a9e8 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] rbd image 5b86240b-8681-4833-a88a-b665a431a2fb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 03:03:50 np0005548731 nova_compute[232433]: 2025-12-06 08:03:50.368 232437 DEBUG oslo_concurrency.processutils [None req-f450ac06-05a3-41a0-9357-e7e49a88a9e8 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef 5b86240b-8681-4833-a88a-b665a431a2fb_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 03:03:50 np0005548731 nova_compute[232433]: 2025-12-06 08:03:50.398 232437 DEBUG nova.policy [None req-f450ac06-05a3-41a0-9357-e7e49a88a9e8 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'd5359905348247d0b9b5b95982e890bb', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'f4735a799c84437b9dd4ea8778ad2fbb', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Dec  6 03:03:50 np0005548731 nova_compute[232433]: 2025-12-06 08:03:50.674 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:03:50 np0005548731 nova_compute[232433]: 2025-12-06 08:03:50.688 232437 DEBUG oslo_concurrency.processutils [None req-f450ac06-05a3-41a0-9357-e7e49a88a9e8 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef 5b86240b-8681-4833-a88a-b665a431a2fb_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.319s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 03:03:50 np0005548731 nova_compute[232433]: 2025-12-06 08:03:50.768 232437 DEBUG nova.storage.rbd_utils [None req-f450ac06-05a3-41a0-9357-e7e49a88a9e8 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] resizing rbd image 5b86240b-8681-4833-a88a-b665a431a2fb_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Dec  6 03:03:50 np0005548731 nova_compute[232433]: 2025-12-06 08:03:50.872 232437 DEBUG nova.objects.instance [None req-f450ac06-05a3-41a0-9357-e7e49a88a9e8 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Lazy-loading 'migration_context' on Instance uuid 5b86240b-8681-4833-a88a-b665a431a2fb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 03:03:50 np0005548731 nova_compute[232433]: 2025-12-06 08:03:50.894 232437 DEBUG nova.virt.libvirt.driver [None req-f450ac06-05a3-41a0-9357-e7e49a88a9e8 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 5b86240b-8681-4833-a88a-b665a431a2fb] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Dec  6 03:03:50 np0005548731 nova_compute[232433]: 2025-12-06 08:03:50.895 232437 DEBUG nova.virt.libvirt.driver [None req-f450ac06-05a3-41a0-9357-e7e49a88a9e8 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 5b86240b-8681-4833-a88a-b665a431a2fb] Ensure instance console log exists: /var/lib/nova/instances/5b86240b-8681-4833-a88a-b665a431a2fb/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec  6 03:03:50 np0005548731 nova_compute[232433]: 2025-12-06 08:03:50.895 232437 DEBUG oslo_concurrency.lockutils [None req-f450ac06-05a3-41a0-9357-e7e49a88a9e8 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:03:50 np0005548731 nova_compute[232433]: 2025-12-06 08:03:50.896 232437 DEBUG oslo_concurrency.lockutils [None req-f450ac06-05a3-41a0-9357-e7e49a88a9e8 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:03:50 np0005548731 nova_compute[232433]: 2025-12-06 08:03:50.896 232437 DEBUG oslo_concurrency.lockutils [None req-f450ac06-05a3-41a0-9357-e7e49a88a9e8 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:03:51 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:03:51 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:03:51 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:03:51.061 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:03:51 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:03:51 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:03:51 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:03:51.723 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:03:53 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:03:53 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:03:53 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:03:53.064 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:03:53 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:03:53 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:03:53 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:03:53.725 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:03:53 np0005548731 nova_compute[232433]: 2025-12-06 08:03:53.778 232437 DEBUG nova.network.neutron [None req-f450ac06-05a3-41a0-9357-e7e49a88a9e8 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 5b86240b-8681-4833-a88a-b665a431a2fb] Successfully created port: 1bb80b61-0282-441f-9bd2-9938bad211d8 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Dec  6 03:03:53 np0005548731 ceph-mon[77458]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #142. Immutable memtables: 0.
Dec  6 03:03:53 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:03:53.989256) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec  6 03:03:53 np0005548731 ceph-mon[77458]: rocksdb: [db/flush_job.cc:856] [default] [JOB 89] Flushing memtable with next log file: 142
Dec  6 03:03:53 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765008233989295, "job": 89, "event": "flush_started", "num_memtables": 1, "num_entries": 2372, "num_deletes": 257, "total_data_size": 5881066, "memory_usage": 5952864, "flush_reason": "Manual Compaction"}
Dec  6 03:03:53 np0005548731 ceph-mon[77458]: rocksdb: [db/flush_job.cc:885] [default] [JOB 89] Level-0 flush table #143: started
Dec  6 03:03:54 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765008234012075, "cf_name": "default", "job": 89, "event": "table_file_creation", "file_number": 143, "file_size": 3819360, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 70298, "largest_seqno": 72665, "table_properties": {"data_size": 3809665, "index_size": 6124, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2501, "raw_key_size": 19924, "raw_average_key_size": 20, "raw_value_size": 3790269, "raw_average_value_size": 3867, "num_data_blocks": 267, "num_entries": 980, "num_filter_entries": 980, "num_deletions": 257, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765008026, "oldest_key_time": 1765008026, "file_creation_time": 1765008233, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bc01f9a1-fda8-4008-aee1-1fa5ff4371a1", "db_session_id": "CWBL8WMDM4D79BU86E9T", "orig_file_number": 143, "seqno_to_time_mapping": "N/A"}}
Dec  6 03:03:54 np0005548731 ceph-mon[77458]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 89] Flush lasted 22907 microseconds, and 8235 cpu microseconds.
Dec  6 03:03:54 np0005548731 ceph-mon[77458]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  6 03:03:54 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:03:54.012161) [db/flush_job.cc:967] [default] [JOB 89] Level-0 flush table #143: 3819360 bytes OK
Dec  6 03:03:54 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:03:54.012184) [db/memtable_list.cc:519] [default] Level-0 commit table #143 started
Dec  6 03:03:54 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:03:54.014131) [db/memtable_list.cc:722] [default] Level-0 commit table #143: memtable #1 done
Dec  6 03:03:54 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:03:54.014146) EVENT_LOG_v1 {"time_micros": 1765008234014141, "job": 89, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec  6 03:03:54 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:03:54.014165) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec  6 03:03:54 np0005548731 ceph-mon[77458]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 89] Try to delete WAL files size 5870577, prev total WAL file size 5870577, number of live WAL files 2.
Dec  6 03:03:54 np0005548731 ceph-mon[77458]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000139.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  6 03:03:54 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:03:54.015997) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0032353237' seq:72057594037927935, type:22 .. '6C6F676D0032373738' seq:0, type:0; will stop at (end)
Dec  6 03:03:54 np0005548731 ceph-mon[77458]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 90] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec  6 03:03:54 np0005548731 ceph-mon[77458]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 89 Base level 0, inputs: [143(3729KB)], [141(10MB)]
Dec  6 03:03:54 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765008234016110, "job": 90, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [143], "files_L6": [141], "score": -1, "input_data_size": 14476056, "oldest_snapshot_seqno": -1}
Dec  6 03:03:54 np0005548731 nova_compute[232433]: 2025-12-06 08:03:54.046 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:03:54 np0005548731 ceph-mon[77458]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 90] Generated table #144: 10275 keys, 14316387 bytes, temperature: kUnknown
Dec  6 03:03:54 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765008234109240, "cf_name": "default", "job": 90, "event": "table_file_creation", "file_number": 144, "file_size": 14316387, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 14248985, "index_size": 40601, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 25733, "raw_key_size": 270720, "raw_average_key_size": 26, "raw_value_size": 14067798, "raw_average_value_size": 1369, "num_data_blocks": 1554, "num_entries": 10275, "num_filter_entries": 10275, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765002564, "oldest_key_time": 0, "file_creation_time": 1765008234, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bc01f9a1-fda8-4008-aee1-1fa5ff4371a1", "db_session_id": "CWBL8WMDM4D79BU86E9T", "orig_file_number": 144, "seqno_to_time_mapping": "N/A"}}
Dec  6 03:03:54 np0005548731 ceph-mon[77458]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  6 03:03:54 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:03:54.109724) [db/compaction/compaction_job.cc:1663] [default] [JOB 90] Compacted 1@0 + 1@6 files to L6 => 14316387 bytes
Dec  6 03:03:54 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:03:54.112016) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 155.3 rd, 153.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.6, 10.2 +0.0 blob) out(13.7 +0.0 blob), read-write-amplify(7.5) write-amplify(3.7) OK, records in: 10810, records dropped: 535 output_compression: NoCompression
Dec  6 03:03:54 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:03:54.112066) EVENT_LOG_v1 {"time_micros": 1765008234112047, "job": 90, "event": "compaction_finished", "compaction_time_micros": 93241, "compaction_time_cpu_micros": 58984, "output_level": 6, "num_output_files": 1, "total_output_size": 14316387, "num_input_records": 10810, "num_output_records": 10275, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec  6 03:03:54 np0005548731 ceph-mon[77458]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000143.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  6 03:03:54 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765008234112989, "job": 90, "event": "table_file_deletion", "file_number": 143}
Dec  6 03:03:54 np0005548731 ceph-mon[77458]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000141.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  6 03:03:54 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765008234114827, "job": 90, "event": "table_file_deletion", "file_number": 141}
Dec  6 03:03:54 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:03:54.015876) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 03:03:54 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:03:54.114902) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 03:03:54 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:03:54.114907) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 03:03:54 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:03:54.114909) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 03:03:54 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:03:54.114910) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 03:03:54 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:03:54.114912) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 03:03:54 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e413 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:03:54 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:03:54.759 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=81, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:ec:b3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '32:72:e7:89:e0:7d'}, ipsec=False) old=SB_Global(nb_cfg=80) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 03:03:54 np0005548731 nova_compute[232433]: 2025-12-06 08:03:54.760 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:03:54 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:03:54.760 143965 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec  6 03:03:54 np0005548731 nova_compute[232433]: 2025-12-06 08:03:54.922 232437 DEBUG nova.network.neutron [None req-f450ac06-05a3-41a0-9357-e7e49a88a9e8 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 5b86240b-8681-4833-a88a-b665a431a2fb] Successfully updated port: 1bb80b61-0282-441f-9bd2-9938bad211d8 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Dec  6 03:03:54 np0005548731 nova_compute[232433]: 2025-12-06 08:03:54.938 232437 DEBUG oslo_concurrency.lockutils [None req-f450ac06-05a3-41a0-9357-e7e49a88a9e8 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Acquiring lock "refresh_cache-5b86240b-8681-4833-a88a-b665a431a2fb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 03:03:54 np0005548731 nova_compute[232433]: 2025-12-06 08:03:54.938 232437 DEBUG oslo_concurrency.lockutils [None req-f450ac06-05a3-41a0-9357-e7e49a88a9e8 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Acquired lock "refresh_cache-5b86240b-8681-4833-a88a-b665a431a2fb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 03:03:54 np0005548731 nova_compute[232433]: 2025-12-06 08:03:54.939 232437 DEBUG nova.network.neutron [None req-f450ac06-05a3-41a0-9357-e7e49a88a9e8 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 5b86240b-8681-4833-a88a-b665a431a2fb] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec  6 03:03:55 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:03:55 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 03:03:55 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:03:55.066 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 03:03:55 np0005548731 nova_compute[232433]: 2025-12-06 08:03:55.145 232437 DEBUG nova.compute.manager [req-6b9defe8-6f41-47c9-a3be-27ced98d199a req-f9f9ba63-e837-4a2e-8ca0-acee6cb0a9c4 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 5b86240b-8681-4833-a88a-b665a431a2fb] Received event network-changed-1bb80b61-0282-441f-9bd2-9938bad211d8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 03:03:55 np0005548731 nova_compute[232433]: 2025-12-06 08:03:55.145 232437 DEBUG nova.compute.manager [req-6b9defe8-6f41-47c9-a3be-27ced98d199a req-f9f9ba63-e837-4a2e-8ca0-acee6cb0a9c4 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 5b86240b-8681-4833-a88a-b665a431a2fb] Refreshing instance network info cache due to event network-changed-1bb80b61-0282-441f-9bd2-9938bad211d8. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  6 03:03:55 np0005548731 nova_compute[232433]: 2025-12-06 08:03:55.145 232437 DEBUG oslo_concurrency.lockutils [req-6b9defe8-6f41-47c9-a3be-27ced98d199a req-f9f9ba63-e837-4a2e-8ca0-acee6cb0a9c4 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-5b86240b-8681-4833-a88a-b665a431a2fb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 03:03:55 np0005548731 nova_compute[232433]: 2025-12-06 08:03:55.265 232437 DEBUG nova.network.neutron [None req-f450ac06-05a3-41a0-9357-e7e49a88a9e8 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 5b86240b-8681-4833-a88a-b665a431a2fb] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec  6 03:03:55 np0005548731 nova_compute[232433]: 2025-12-06 08:03:55.676 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:03:55 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:03:55 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 03:03:55 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:03:55.727 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 03:03:56 np0005548731 ceph-mon[77458]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #145. Immutable memtables: 0.
Dec  6 03:03:56 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:03:56.016367) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec  6 03:03:56 np0005548731 ceph-mon[77458]: rocksdb: [db/flush_job.cc:856] [default] [JOB 91] Flushing memtable with next log file: 145
Dec  6 03:03:56 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765008236016404, "job": 91, "event": "flush_started", "num_memtables": 1, "num_entries": 273, "num_deletes": 251, "total_data_size": 69657, "memory_usage": 75688, "flush_reason": "Manual Compaction"}
Dec  6 03:03:56 np0005548731 ceph-mon[77458]: rocksdb: [db/flush_job.cc:885] [default] [JOB 91] Level-0 flush table #146: started
Dec  6 03:03:56 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765008236018882, "cf_name": "default", "job": 91, "event": "table_file_creation", "file_number": 146, "file_size": 45381, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 72670, "largest_seqno": 72938, "table_properties": {"data_size": 43497, "index_size": 112, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 709, "raw_key_size": 4821, "raw_average_key_size": 18, "raw_value_size": 39888, "raw_average_value_size": 151, "num_data_blocks": 5, "num_entries": 264, "num_filter_entries": 264, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765008234, "oldest_key_time": 1765008234, "file_creation_time": 1765008236, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bc01f9a1-fda8-4008-aee1-1fa5ff4371a1", "db_session_id": "CWBL8WMDM4D79BU86E9T", "orig_file_number": 146, "seqno_to_time_mapping": "N/A"}}
Dec  6 03:03:56 np0005548731 ceph-mon[77458]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 91] Flush lasted 2566 microseconds, and 800 cpu microseconds.
Dec  6 03:03:56 np0005548731 ceph-mon[77458]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  6 03:03:56 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:03:56.018935) [db/flush_job.cc:967] [default] [JOB 91] Level-0 flush table #146: 45381 bytes OK
Dec  6 03:03:56 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:03:56.018952) [db/memtable_list.cc:519] [default] Level-0 commit table #146 started
Dec  6 03:03:56 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:03:56.020573) [db/memtable_list.cc:722] [default] Level-0 commit table #146: memtable #1 done
Dec  6 03:03:56 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:03:56.020622) EVENT_LOG_v1 {"time_micros": 1765008236020616, "job": 91, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec  6 03:03:56 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:03:56.020637) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec  6 03:03:56 np0005548731 ceph-mon[77458]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 91] Try to delete WAL files size 67567, prev total WAL file size 67567, number of live WAL files 2.
Dec  6 03:03:56 np0005548731 ceph-mon[77458]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000142.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  6 03:03:56 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:03:56.021099) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730036303234' seq:72057594037927935, type:22 .. '7061786F730036323736' seq:0, type:0; will stop at (end)
Dec  6 03:03:56 np0005548731 ceph-mon[77458]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 92] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec  6 03:03:56 np0005548731 ceph-mon[77458]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 91 Base level 0, inputs: [146(44KB)], [144(13MB)]
Dec  6 03:03:56 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765008236021291, "job": 92, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [146], "files_L6": [144], "score": -1, "input_data_size": 14361768, "oldest_snapshot_seqno": -1}
Dec  6 03:03:56 np0005548731 ceph-mon[77458]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 92] Generated table #147: 10030 keys, 12478999 bytes, temperature: kUnknown
Dec  6 03:03:56 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765008236082924, "cf_name": "default", "job": 92, "event": "table_file_creation", "file_number": 147, "file_size": 12478999, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12414727, "index_size": 38111, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 25093, "raw_key_size": 266322, "raw_average_key_size": 26, "raw_value_size": 12239206, "raw_average_value_size": 1220, "num_data_blocks": 1442, "num_entries": 10030, "num_filter_entries": 10030, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765002564, "oldest_key_time": 0, "file_creation_time": 1765008236, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bc01f9a1-fda8-4008-aee1-1fa5ff4371a1", "db_session_id": "CWBL8WMDM4D79BU86E9T", "orig_file_number": 147, "seqno_to_time_mapping": "N/A"}}
Dec  6 03:03:56 np0005548731 ceph-mon[77458]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  6 03:03:56 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:03:56.083191) [db/compaction/compaction_job.cc:1663] [default] [JOB 92] Compacted 1@0 + 1@6 files to L6 => 12478999 bytes
Dec  6 03:03:56 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:03:56.084488) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 232.7 rd, 202.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.0, 13.7 +0.0 blob) out(11.9 +0.0 blob), read-write-amplify(591.5) write-amplify(275.0) OK, records in: 10539, records dropped: 509 output_compression: NoCompression
Dec  6 03:03:56 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:03:56.084505) EVENT_LOG_v1 {"time_micros": 1765008236084497, "job": 92, "event": "compaction_finished", "compaction_time_micros": 61705, "compaction_time_cpu_micros": 34472, "output_level": 6, "num_output_files": 1, "total_output_size": 12478999, "num_input_records": 10539, "num_output_records": 10030, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec  6 03:03:56 np0005548731 ceph-mon[77458]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000146.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  6 03:03:56 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765008236084636, "job": 92, "event": "table_file_deletion", "file_number": 146}
Dec  6 03:03:56 np0005548731 ceph-mon[77458]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000144.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  6 03:03:56 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765008236087070, "job": 92, "event": "table_file_deletion", "file_number": 144}
Dec  6 03:03:56 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:03:56.020947) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 03:03:56 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:03:56.087124) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 03:03:56 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:03:56.087128) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 03:03:56 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:03:56.087130) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 03:03:56 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:03:56.087131) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 03:03:56 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:03:56.087133) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 03:03:57 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:03:57 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:03:57 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:03:57.068 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:03:57 np0005548731 nova_compute[232433]: 2025-12-06 08:03:57.425 232437 DEBUG nova.network.neutron [None req-f450ac06-05a3-41a0-9357-e7e49a88a9e8 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 5b86240b-8681-4833-a88a-b665a431a2fb] Updating instance_info_cache with network_info: [{"id": "1bb80b61-0282-441f-9bd2-9938bad211d8", "address": "fa:16:3e:b7:1d:27", "network": {"id": "731d5525-647e-47b4-9d5e-931670d91230", "bridge": "br-int", "label": "tempest-network-smoke--921439422", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f4735a799c84437b9dd4ea8778ad2fbb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1bb80b61-02", "ovs_interfaceid": "1bb80b61-0282-441f-9bd2-9938bad211d8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 03:03:57 np0005548731 nova_compute[232433]: 2025-12-06 08:03:57.447 232437 DEBUG oslo_concurrency.lockutils [None req-f450ac06-05a3-41a0-9357-e7e49a88a9e8 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Releasing lock "refresh_cache-5b86240b-8681-4833-a88a-b665a431a2fb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 03:03:57 np0005548731 nova_compute[232433]: 2025-12-06 08:03:57.448 232437 DEBUG nova.compute.manager [None req-f450ac06-05a3-41a0-9357-e7e49a88a9e8 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 5b86240b-8681-4833-a88a-b665a431a2fb] Instance network_info: |[{"id": "1bb80b61-0282-441f-9bd2-9938bad211d8", "address": "fa:16:3e:b7:1d:27", "network": {"id": "731d5525-647e-47b4-9d5e-931670d91230", "bridge": "br-int", "label": "tempest-network-smoke--921439422", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f4735a799c84437b9dd4ea8778ad2fbb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1bb80b61-02", "ovs_interfaceid": "1bb80b61-0282-441f-9bd2-9938bad211d8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Dec  6 03:03:57 np0005548731 nova_compute[232433]: 2025-12-06 08:03:57.448 232437 DEBUG oslo_concurrency.lockutils [req-6b9defe8-6f41-47c9-a3be-27ced98d199a req-f9f9ba63-e837-4a2e-8ca0-acee6cb0a9c4 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-5b86240b-8681-4833-a88a-b665a431a2fb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 03:03:57 np0005548731 nova_compute[232433]: 2025-12-06 08:03:57.449 232437 DEBUG nova.network.neutron [req-6b9defe8-6f41-47c9-a3be-27ced98d199a req-f9f9ba63-e837-4a2e-8ca0-acee6cb0a9c4 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 5b86240b-8681-4833-a88a-b665a431a2fb] Refreshing network info cache for port 1bb80b61-0282-441f-9bd2-9938bad211d8 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  6 03:03:57 np0005548731 nova_compute[232433]: 2025-12-06 08:03:57.454 232437 DEBUG nova.virt.libvirt.driver [None req-f450ac06-05a3-41a0-9357-e7e49a88a9e8 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 5b86240b-8681-4833-a88a-b665a431a2fb] Start _get_guest_xml network_info=[{"id": "1bb80b61-0282-441f-9bd2-9938bad211d8", "address": "fa:16:3e:b7:1d:27", "network": {"id": "731d5525-647e-47b4-9d5e-931670d91230", "bridge": "br-int", "label": "tempest-network-smoke--921439422", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f4735a799c84437b9dd4ea8778ad2fbb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1bb80b61-02", "ovs_interfaceid": "1bb80b61-0282-441f-9bd2-9938bad211d8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'device_type': 'disk', 'encryption_secret_uuid': None, 'disk_bus': 'virtio', 'boot_index': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'encryption_options': None, 'size': 0, 'encrypted': False, 'image_id': '6efab05d-c7cf-4770-a5c3-c806a2739063'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec  6 03:03:57 np0005548731 nova_compute[232433]: 2025-12-06 08:03:57.459 232437 WARNING nova.virt.libvirt.driver [None req-f450ac06-05a3-41a0-9357-e7e49a88a9e8 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  6 03:03:57 np0005548731 nova_compute[232433]: 2025-12-06 08:03:57.465 232437 DEBUG nova.virt.libvirt.host [None req-f450ac06-05a3-41a0-9357-e7e49a88a9e8 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec  6 03:03:57 np0005548731 nova_compute[232433]: 2025-12-06 08:03:57.466 232437 DEBUG nova.virt.libvirt.host [None req-f450ac06-05a3-41a0-9357-e7e49a88a9e8 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec  6 03:03:57 np0005548731 nova_compute[232433]: 2025-12-06 08:03:57.470 232437 DEBUG nova.virt.libvirt.host [None req-f450ac06-05a3-41a0-9357-e7e49a88a9e8 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec  6 03:03:57 np0005548731 nova_compute[232433]: 2025-12-06 08:03:57.471 232437 DEBUG nova.virt.libvirt.host [None req-f450ac06-05a3-41a0-9357-e7e49a88a9e8 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec  6 03:03:57 np0005548731 nova_compute[232433]: 2025-12-06 08:03:57.471 232437 DEBUG nova.virt.libvirt.driver [None req-f450ac06-05a3-41a0-9357-e7e49a88a9e8 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec  6 03:03:57 np0005548731 nova_compute[232433]: 2025-12-06 08:03:57.472 232437 DEBUG nova.virt.hardware [None req-f450ac06-05a3-41a0-9357-e7e49a88a9e8 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-06T06:56:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='25848a18-11d9-4f11-80b5-5d005675c76d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec  6 03:03:57 np0005548731 nova_compute[232433]: 2025-12-06 08:03:57.472 232437 DEBUG nova.virt.hardware [None req-f450ac06-05a3-41a0-9357-e7e49a88a9e8 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec  6 03:03:57 np0005548731 nova_compute[232433]: 2025-12-06 08:03:57.472 232437 DEBUG nova.virt.hardware [None req-f450ac06-05a3-41a0-9357-e7e49a88a9e8 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec  6 03:03:57 np0005548731 nova_compute[232433]: 2025-12-06 08:03:57.472 232437 DEBUG nova.virt.hardware [None req-f450ac06-05a3-41a0-9357-e7e49a88a9e8 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec  6 03:03:57 np0005548731 nova_compute[232433]: 2025-12-06 08:03:57.472 232437 DEBUG nova.virt.hardware [None req-f450ac06-05a3-41a0-9357-e7e49a88a9e8 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec  6 03:03:57 np0005548731 nova_compute[232433]: 2025-12-06 08:03:57.472 232437 DEBUG nova.virt.hardware [None req-f450ac06-05a3-41a0-9357-e7e49a88a9e8 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec  6 03:03:57 np0005548731 nova_compute[232433]: 2025-12-06 08:03:57.473 232437 DEBUG nova.virt.hardware [None req-f450ac06-05a3-41a0-9357-e7e49a88a9e8 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec  6 03:03:57 np0005548731 nova_compute[232433]: 2025-12-06 08:03:57.473 232437 DEBUG nova.virt.hardware [None req-f450ac06-05a3-41a0-9357-e7e49a88a9e8 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec  6 03:03:57 np0005548731 nova_compute[232433]: 2025-12-06 08:03:57.473 232437 DEBUG nova.virt.hardware [None req-f450ac06-05a3-41a0-9357-e7e49a88a9e8 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec  6 03:03:57 np0005548731 nova_compute[232433]: 2025-12-06 08:03:57.473 232437 DEBUG nova.virt.hardware [None req-f450ac06-05a3-41a0-9357-e7e49a88a9e8 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec  6 03:03:57 np0005548731 nova_compute[232433]: 2025-12-06 08:03:57.473 232437 DEBUG nova.virt.hardware [None req-f450ac06-05a3-41a0-9357-e7e49a88a9e8 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec  6 03:03:57 np0005548731 nova_compute[232433]: 2025-12-06 08:03:57.476 232437 DEBUG oslo_concurrency.processutils [None req-f450ac06-05a3-41a0-9357-e7e49a88a9e8 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 03:03:57 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:03:57 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:03:57 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:03:57.729 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:03:57 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  6 03:03:57 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2113418163' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  6 03:03:57 np0005548731 nova_compute[232433]: 2025-12-06 08:03:57.886 232437 DEBUG oslo_concurrency.processutils [None req-f450ac06-05a3-41a0-9357-e7e49a88a9e8 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.410s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 03:03:57 np0005548731 nova_compute[232433]: 2025-12-06 08:03:57.921 232437 DEBUG nova.storage.rbd_utils [None req-f450ac06-05a3-41a0-9357-e7e49a88a9e8 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] rbd image 5b86240b-8681-4833-a88a-b665a431a2fb_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 03:03:57 np0005548731 nova_compute[232433]: 2025-12-06 08:03:57.926 232437 DEBUG oslo_concurrency.processutils [None req-f450ac06-05a3-41a0-9357-e7e49a88a9e8 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 03:03:58 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  6 03:03:58 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3306292249' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  6 03:03:58 np0005548731 nova_compute[232433]: 2025-12-06 08:03:58.336 232437 DEBUG oslo_concurrency.processutils [None req-f450ac06-05a3-41a0-9357-e7e49a88a9e8 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.410s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 03:03:58 np0005548731 nova_compute[232433]: 2025-12-06 08:03:58.338 232437 DEBUG nova.virt.libvirt.vif [None req-f450ac06-05a3-41a0-9357-e7e49a88a9e8 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T08:03:48Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-344208075',display_name='tempest-TestNetworkBasicOps-server-344208075',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-344208075',id=186,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDSpLe9wt3iazbueNL3t1WWFE6UXiHr9wL+zsYKFeYZoxUorBUetaDQvQ0p5TQHThzBBQCIHxwdWWs9lpd08ISgAxIVbg0B4LPEH84xZMbpS+UdtxXt7FeNhtMXABPa5jQ==',key_name='tempest-TestNetworkBasicOps-629546403',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f4735a799c84437b9dd4ea8778ad2fbb',ramdisk_id='',reservation_id='r-itjq9ogk',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1435471576',owner_user_name='tempest-TestNetworkBasicOps-1435471576-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T08:03:50Z,user_data=None,user_id='d5359905348247d0b9b5b95982e890bb',uuid=5b86240b-8681-4833-a88a-b665a431a2fb,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1bb80b61-0282-441f-9bd2-9938bad211d8", "address": "fa:16:3e:b7:1d:27", "network": {"id": "731d5525-647e-47b4-9d5e-931670d91230", "bridge": "br-int", "label": "tempest-network-smoke--921439422", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "f4735a799c84437b9dd4ea8778ad2fbb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1bb80b61-02", "ovs_interfaceid": "1bb80b61-0282-441f-9bd2-9938bad211d8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec  6 03:03:58 np0005548731 nova_compute[232433]: 2025-12-06 08:03:58.338 232437 DEBUG nova.network.os_vif_util [None req-f450ac06-05a3-41a0-9357-e7e49a88a9e8 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Converting VIF {"id": "1bb80b61-0282-441f-9bd2-9938bad211d8", "address": "fa:16:3e:b7:1d:27", "network": {"id": "731d5525-647e-47b4-9d5e-931670d91230", "bridge": "br-int", "label": "tempest-network-smoke--921439422", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f4735a799c84437b9dd4ea8778ad2fbb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1bb80b61-02", "ovs_interfaceid": "1bb80b61-0282-441f-9bd2-9938bad211d8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 03:03:58 np0005548731 nova_compute[232433]: 2025-12-06 08:03:58.339 232437 DEBUG nova.network.os_vif_util [None req-f450ac06-05a3-41a0-9357-e7e49a88a9e8 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b7:1d:27,bridge_name='br-int',has_traffic_filtering=True,id=1bb80b61-0282-441f-9bd2-9938bad211d8,network=Network(731d5525-647e-47b4-9d5e-931670d91230),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1bb80b61-02') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 03:03:58 np0005548731 nova_compute[232433]: 2025-12-06 08:03:58.340 232437 DEBUG nova.objects.instance [None req-f450ac06-05a3-41a0-9357-e7e49a88a9e8 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Lazy-loading 'pci_devices' on Instance uuid 5b86240b-8681-4833-a88a-b665a431a2fb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 03:03:58 np0005548731 nova_compute[232433]: 2025-12-06 08:03:58.359 232437 DEBUG nova.virt.libvirt.driver [None req-f450ac06-05a3-41a0-9357-e7e49a88a9e8 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 5b86240b-8681-4833-a88a-b665a431a2fb] End _get_guest_xml xml=<domain type="kvm">
Dec  6 03:03:58 np0005548731 nova_compute[232433]:  <uuid>5b86240b-8681-4833-a88a-b665a431a2fb</uuid>
Dec  6 03:03:58 np0005548731 nova_compute[232433]:  <name>instance-000000ba</name>
Dec  6 03:03:58 np0005548731 nova_compute[232433]:  <memory>131072</memory>
Dec  6 03:03:58 np0005548731 nova_compute[232433]:  <vcpu>1</vcpu>
Dec  6 03:03:58 np0005548731 nova_compute[232433]:  <metadata>
Dec  6 03:03:58 np0005548731 nova_compute[232433]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec  6 03:03:58 np0005548731 nova_compute[232433]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec  6 03:03:58 np0005548731 nova_compute[232433]:      <nova:name>tempest-TestNetworkBasicOps-server-344208075</nova:name>
Dec  6 03:03:58 np0005548731 nova_compute[232433]:      <nova:creationTime>2025-12-06 08:03:57</nova:creationTime>
Dec  6 03:03:58 np0005548731 nova_compute[232433]:      <nova:flavor name="m1.nano">
Dec  6 03:03:58 np0005548731 nova_compute[232433]:        <nova:memory>128</nova:memory>
Dec  6 03:03:58 np0005548731 nova_compute[232433]:        <nova:disk>1</nova:disk>
Dec  6 03:03:58 np0005548731 nova_compute[232433]:        <nova:swap>0</nova:swap>
Dec  6 03:03:58 np0005548731 nova_compute[232433]:        <nova:ephemeral>0</nova:ephemeral>
Dec  6 03:03:58 np0005548731 nova_compute[232433]:        <nova:vcpus>1</nova:vcpus>
Dec  6 03:03:58 np0005548731 nova_compute[232433]:      </nova:flavor>
Dec  6 03:03:58 np0005548731 nova_compute[232433]:      <nova:owner>
Dec  6 03:03:58 np0005548731 nova_compute[232433]:        <nova:user uuid="d5359905348247d0b9b5b95982e890bb">tempest-TestNetworkBasicOps-1435471576-project-member</nova:user>
Dec  6 03:03:58 np0005548731 nova_compute[232433]:        <nova:project uuid="f4735a799c84437b9dd4ea8778ad2fbb">tempest-TestNetworkBasicOps-1435471576</nova:project>
Dec  6 03:03:58 np0005548731 nova_compute[232433]:      </nova:owner>
Dec  6 03:03:58 np0005548731 nova_compute[232433]:      <nova:root type="image" uuid="6efab05d-c7cf-4770-a5c3-c806a2739063"/>
Dec  6 03:03:58 np0005548731 nova_compute[232433]:      <nova:ports>
Dec  6 03:03:58 np0005548731 nova_compute[232433]:        <nova:port uuid="1bb80b61-0282-441f-9bd2-9938bad211d8">
Dec  6 03:03:58 np0005548731 nova_compute[232433]:          <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Dec  6 03:03:58 np0005548731 nova_compute[232433]:        </nova:port>
Dec  6 03:03:58 np0005548731 nova_compute[232433]:      </nova:ports>
Dec  6 03:03:58 np0005548731 nova_compute[232433]:    </nova:instance>
Dec  6 03:03:58 np0005548731 nova_compute[232433]:  </metadata>
Dec  6 03:03:58 np0005548731 nova_compute[232433]:  <sysinfo type="smbios">
Dec  6 03:03:58 np0005548731 nova_compute[232433]:    <system>
Dec  6 03:03:58 np0005548731 nova_compute[232433]:      <entry name="manufacturer">RDO</entry>
Dec  6 03:03:58 np0005548731 nova_compute[232433]:      <entry name="product">OpenStack Compute</entry>
Dec  6 03:03:58 np0005548731 nova_compute[232433]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec  6 03:03:58 np0005548731 nova_compute[232433]:      <entry name="serial">5b86240b-8681-4833-a88a-b665a431a2fb</entry>
Dec  6 03:03:58 np0005548731 nova_compute[232433]:      <entry name="uuid">5b86240b-8681-4833-a88a-b665a431a2fb</entry>
Dec  6 03:03:58 np0005548731 nova_compute[232433]:      <entry name="family">Virtual Machine</entry>
Dec  6 03:03:58 np0005548731 nova_compute[232433]:    </system>
Dec  6 03:03:58 np0005548731 nova_compute[232433]:  </sysinfo>
Dec  6 03:03:58 np0005548731 nova_compute[232433]:  <os>
Dec  6 03:03:58 np0005548731 nova_compute[232433]:    <type arch="x86_64" machine="q35">hvm</type>
Dec  6 03:03:58 np0005548731 nova_compute[232433]:    <boot dev="hd"/>
Dec  6 03:03:58 np0005548731 nova_compute[232433]:    <smbios mode="sysinfo"/>
Dec  6 03:03:58 np0005548731 nova_compute[232433]:  </os>
Dec  6 03:03:58 np0005548731 nova_compute[232433]:  <features>
Dec  6 03:03:58 np0005548731 nova_compute[232433]:    <acpi/>
Dec  6 03:03:58 np0005548731 nova_compute[232433]:    <apic/>
Dec  6 03:03:58 np0005548731 nova_compute[232433]:    <vmcoreinfo/>
Dec  6 03:03:58 np0005548731 nova_compute[232433]:  </features>
Dec  6 03:03:58 np0005548731 nova_compute[232433]:  <clock offset="utc">
Dec  6 03:03:58 np0005548731 nova_compute[232433]:    <timer name="pit" tickpolicy="delay"/>
Dec  6 03:03:58 np0005548731 nova_compute[232433]:    <timer name="rtc" tickpolicy="catchup"/>
Dec  6 03:03:58 np0005548731 nova_compute[232433]:    <timer name="hpet" present="no"/>
Dec  6 03:03:58 np0005548731 nova_compute[232433]:  </clock>
Dec  6 03:03:58 np0005548731 nova_compute[232433]:  <cpu mode="custom" match="exact">
Dec  6 03:03:58 np0005548731 nova_compute[232433]:    <model>Nehalem</model>
Dec  6 03:03:58 np0005548731 nova_compute[232433]:    <topology sockets="1" cores="1" threads="1"/>
Dec  6 03:03:58 np0005548731 nova_compute[232433]:  </cpu>
Dec  6 03:03:58 np0005548731 nova_compute[232433]:  <devices>
Dec  6 03:03:58 np0005548731 nova_compute[232433]:    <disk type="network" device="disk">
Dec  6 03:03:58 np0005548731 nova_compute[232433]:      <driver type="raw" cache="none"/>
Dec  6 03:03:58 np0005548731 nova_compute[232433]:      <source protocol="rbd" name="vms/5b86240b-8681-4833-a88a-b665a431a2fb_disk">
Dec  6 03:03:58 np0005548731 nova_compute[232433]:        <host name="192.168.122.100" port="6789"/>
Dec  6 03:03:58 np0005548731 nova_compute[232433]:        <host name="192.168.122.102" port="6789"/>
Dec  6 03:03:58 np0005548731 nova_compute[232433]:        <host name="192.168.122.101" port="6789"/>
Dec  6 03:03:58 np0005548731 nova_compute[232433]:      </source>
Dec  6 03:03:58 np0005548731 nova_compute[232433]:      <auth username="openstack">
Dec  6 03:03:58 np0005548731 nova_compute[232433]:        <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec  6 03:03:58 np0005548731 nova_compute[232433]:      </auth>
Dec  6 03:03:58 np0005548731 nova_compute[232433]:      <target dev="vda" bus="virtio"/>
Dec  6 03:03:58 np0005548731 nova_compute[232433]:    </disk>
Dec  6 03:03:58 np0005548731 nova_compute[232433]:    <disk type="network" device="cdrom">
Dec  6 03:03:58 np0005548731 nova_compute[232433]:      <driver type="raw" cache="none"/>
Dec  6 03:03:58 np0005548731 nova_compute[232433]:      <source protocol="rbd" name="vms/5b86240b-8681-4833-a88a-b665a431a2fb_disk.config">
Dec  6 03:03:58 np0005548731 nova_compute[232433]:        <host name="192.168.122.100" port="6789"/>
Dec  6 03:03:58 np0005548731 nova_compute[232433]:        <host name="192.168.122.102" port="6789"/>
Dec  6 03:03:58 np0005548731 nova_compute[232433]:        <host name="192.168.122.101" port="6789"/>
Dec  6 03:03:58 np0005548731 nova_compute[232433]:      </source>
Dec  6 03:03:58 np0005548731 nova_compute[232433]:      <auth username="openstack">
Dec  6 03:03:58 np0005548731 nova_compute[232433]:        <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec  6 03:03:58 np0005548731 nova_compute[232433]:      </auth>
Dec  6 03:03:58 np0005548731 nova_compute[232433]:      <target dev="sda" bus="sata"/>
Dec  6 03:03:58 np0005548731 nova_compute[232433]:    </disk>
Dec  6 03:03:58 np0005548731 nova_compute[232433]:    <interface type="ethernet">
Dec  6 03:03:58 np0005548731 nova_compute[232433]:      <mac address="fa:16:3e:b7:1d:27"/>
Dec  6 03:03:58 np0005548731 nova_compute[232433]:      <model type="virtio"/>
Dec  6 03:03:58 np0005548731 nova_compute[232433]:      <driver name="vhost" rx_queue_size="512"/>
Dec  6 03:03:58 np0005548731 nova_compute[232433]:      <mtu size="1442"/>
Dec  6 03:03:58 np0005548731 nova_compute[232433]:      <target dev="tap1bb80b61-02"/>
Dec  6 03:03:58 np0005548731 nova_compute[232433]:    </interface>
Dec  6 03:03:58 np0005548731 nova_compute[232433]:    <serial type="pty">
Dec  6 03:03:58 np0005548731 nova_compute[232433]:      <log file="/var/lib/nova/instances/5b86240b-8681-4833-a88a-b665a431a2fb/console.log" append="off"/>
Dec  6 03:03:58 np0005548731 nova_compute[232433]:    </serial>
Dec  6 03:03:58 np0005548731 nova_compute[232433]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec  6 03:03:58 np0005548731 nova_compute[232433]:    <video>
Dec  6 03:03:58 np0005548731 nova_compute[232433]:      <model type="virtio"/>
Dec  6 03:03:58 np0005548731 nova_compute[232433]:    </video>
Dec  6 03:03:58 np0005548731 nova_compute[232433]:    <input type="tablet" bus="usb"/>
Dec  6 03:03:58 np0005548731 nova_compute[232433]:    <rng model="virtio">
Dec  6 03:03:58 np0005548731 nova_compute[232433]:      <backend model="random">/dev/urandom</backend>
Dec  6 03:03:58 np0005548731 nova_compute[232433]:    </rng>
Dec  6 03:03:58 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root"/>
Dec  6 03:03:58 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:03:58 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:03:58 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:03:58 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:03:58 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:03:58 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:03:58 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:03:58 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:03:58 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:03:58 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:03:58 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:03:58 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:03:58 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:03:58 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:03:58 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:03:58 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:03:58 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:03:58 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:03:58 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:03:58 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:03:58 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:03:58 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:03:58 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:03:58 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:03:58 np0005548731 nova_compute[232433]:    <controller type="usb" index="0"/>
Dec  6 03:03:58 np0005548731 nova_compute[232433]:    <memballoon model="virtio">
Dec  6 03:03:58 np0005548731 nova_compute[232433]:      <stats period="10"/>
Dec  6 03:03:58 np0005548731 nova_compute[232433]:    </memballoon>
Dec  6 03:03:58 np0005548731 nova_compute[232433]:  </devices>
Dec  6 03:03:58 np0005548731 nova_compute[232433]: </domain>
Dec  6 03:03:58 np0005548731 nova_compute[232433]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec  6 03:03:58 np0005548731 nova_compute[232433]: 2025-12-06 08:03:58.361 232437 DEBUG nova.compute.manager [None req-f450ac06-05a3-41a0-9357-e7e49a88a9e8 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 5b86240b-8681-4833-a88a-b665a431a2fb] Preparing to wait for external event network-vif-plugged-1bb80b61-0282-441f-9bd2-9938bad211d8 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec  6 03:03:58 np0005548731 nova_compute[232433]: 2025-12-06 08:03:58.361 232437 DEBUG oslo_concurrency.lockutils [None req-f450ac06-05a3-41a0-9357-e7e49a88a9e8 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Acquiring lock "5b86240b-8681-4833-a88a-b665a431a2fb-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:03:58 np0005548731 nova_compute[232433]: 2025-12-06 08:03:58.362 232437 DEBUG oslo_concurrency.lockutils [None req-f450ac06-05a3-41a0-9357-e7e49a88a9e8 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Lock "5b86240b-8681-4833-a88a-b665a431a2fb-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:03:58 np0005548731 nova_compute[232433]: 2025-12-06 08:03:58.362 232437 DEBUG oslo_concurrency.lockutils [None req-f450ac06-05a3-41a0-9357-e7e49a88a9e8 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Lock "5b86240b-8681-4833-a88a-b665a431a2fb-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:03:58 np0005548731 nova_compute[232433]: 2025-12-06 08:03:58.362 232437 DEBUG nova.virt.libvirt.vif [None req-f450ac06-05a3-41a0-9357-e7e49a88a9e8 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T08:03:48Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-344208075',display_name='tempest-TestNetworkBasicOps-server-344208075',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-344208075',id=186,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDSpLe9wt3iazbueNL3t1WWFE6UXiHr9wL+zsYKFeYZoxUorBUetaDQvQ0p5TQHThzBBQCIHxwdWWs9lpd08ISgAxIVbg0B4LPEH84xZMbpS+UdtxXt7FeNhtMXABPa5jQ==',key_name='tempest-TestNetworkBasicOps-629546403',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f4735a799c84437b9dd4ea8778ad2fbb',ramdisk_id='',reservation_id='r-itjq9ogk',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1435471576',owner_user_name='tempest-TestNetworkBasicOps-1435471576-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T08:03:50Z,user_data=None,user_id='d5359905348247d0b9b5b95982e890bb',uuid=5b86240b-8681-4833-a88a-b665a431a2fb,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1bb80b61-0282-441f-9bd2-9938bad211d8", "address": "fa:16:3e:b7:1d:27", "network": {"id": "731d5525-647e-47b4-9d5e-931670d91230", "bridge": "br-int", "label": "tempest-network-smoke--921439422", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "f4735a799c84437b9dd4ea8778ad2fbb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1bb80b61-02", "ovs_interfaceid": "1bb80b61-0282-441f-9bd2-9938bad211d8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec  6 03:03:58 np0005548731 nova_compute[232433]: 2025-12-06 08:03:58.363 232437 DEBUG nova.network.os_vif_util [None req-f450ac06-05a3-41a0-9357-e7e49a88a9e8 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Converting VIF {"id": "1bb80b61-0282-441f-9bd2-9938bad211d8", "address": "fa:16:3e:b7:1d:27", "network": {"id": "731d5525-647e-47b4-9d5e-931670d91230", "bridge": "br-int", "label": "tempest-network-smoke--921439422", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f4735a799c84437b9dd4ea8778ad2fbb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1bb80b61-02", "ovs_interfaceid": "1bb80b61-0282-441f-9bd2-9938bad211d8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 03:03:58 np0005548731 nova_compute[232433]: 2025-12-06 08:03:58.363 232437 DEBUG nova.network.os_vif_util [None req-f450ac06-05a3-41a0-9357-e7e49a88a9e8 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b7:1d:27,bridge_name='br-int',has_traffic_filtering=True,id=1bb80b61-0282-441f-9bd2-9938bad211d8,network=Network(731d5525-647e-47b4-9d5e-931670d91230),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1bb80b61-02') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 03:03:58 np0005548731 nova_compute[232433]: 2025-12-06 08:03:58.364 232437 DEBUG os_vif [None req-f450ac06-05a3-41a0-9357-e7e49a88a9e8 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b7:1d:27,bridge_name='br-int',has_traffic_filtering=True,id=1bb80b61-0282-441f-9bd2-9938bad211d8,network=Network(731d5525-647e-47b4-9d5e-931670d91230),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1bb80b61-02') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec  6 03:03:58 np0005548731 nova_compute[232433]: 2025-12-06 08:03:58.364 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:03:58 np0005548731 nova_compute[232433]: 2025-12-06 08:03:58.365 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 03:03:58 np0005548731 nova_compute[232433]: 2025-12-06 08:03:58.365 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  6 03:03:58 np0005548731 nova_compute[232433]: 2025-12-06 08:03:58.368 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:03:58 np0005548731 nova_compute[232433]: 2025-12-06 08:03:58.369 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1bb80b61-02, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 03:03:58 np0005548731 nova_compute[232433]: 2025-12-06 08:03:58.369 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap1bb80b61-02, col_values=(('external_ids', {'iface-id': '1bb80b61-0282-441f-9bd2-9938bad211d8', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:b7:1d:27', 'vm-uuid': '5b86240b-8681-4833-a88a-b665a431a2fb'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 03:03:58 np0005548731 NetworkManager[49182]: <info>  [1765008238.4011] manager: (tap1bb80b61-02): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/435)
Dec  6 03:03:58 np0005548731 nova_compute[232433]: 2025-12-06 08:03:58.400 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:03:58 np0005548731 nova_compute[232433]: 2025-12-06 08:03:58.403 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  6 03:03:58 np0005548731 nova_compute[232433]: 2025-12-06 08:03:58.407 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:03:58 np0005548731 nova_compute[232433]: 2025-12-06 08:03:58.408 232437 INFO os_vif [None req-f450ac06-05a3-41a0-9357-e7e49a88a9e8 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b7:1d:27,bridge_name='br-int',has_traffic_filtering=True,id=1bb80b61-0282-441f-9bd2-9938bad211d8,network=Network(731d5525-647e-47b4-9d5e-931670d91230),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1bb80b61-02')#033[00m
Dec  6 03:03:58 np0005548731 nova_compute[232433]: 2025-12-06 08:03:58.457 232437 DEBUG nova.virt.libvirt.driver [None req-f450ac06-05a3-41a0-9357-e7e49a88a9e8 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  6 03:03:58 np0005548731 nova_compute[232433]: 2025-12-06 08:03:58.457 232437 DEBUG nova.virt.libvirt.driver [None req-f450ac06-05a3-41a0-9357-e7e49a88a9e8 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  6 03:03:58 np0005548731 nova_compute[232433]: 2025-12-06 08:03:58.457 232437 DEBUG nova.virt.libvirt.driver [None req-f450ac06-05a3-41a0-9357-e7e49a88a9e8 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] No VIF found with MAC fa:16:3e:b7:1d:27, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec  6 03:03:58 np0005548731 nova_compute[232433]: 2025-12-06 08:03:58.458 232437 INFO nova.virt.libvirt.driver [None req-f450ac06-05a3-41a0-9357-e7e49a88a9e8 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 5b86240b-8681-4833-a88a-b665a431a2fb] Using config drive#033[00m
Dec  6 03:03:58 np0005548731 nova_compute[232433]: 2025-12-06 08:03:58.481 232437 DEBUG nova.storage.rbd_utils [None req-f450ac06-05a3-41a0-9357-e7e49a88a9e8 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] rbd image 5b86240b-8681-4833-a88a-b665a431a2fb_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 03:03:59 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:03:59 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:03:59 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:03:59.070 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:03:59 np0005548731 nova_compute[232433]: 2025-12-06 08:03:59.100 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:03:59 np0005548731 nova_compute[232433]: 2025-12-06 08:03:59.188 232437 INFO nova.virt.libvirt.driver [None req-f450ac06-05a3-41a0-9357-e7e49a88a9e8 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 5b86240b-8681-4833-a88a-b665a431a2fb] Creating config drive at /var/lib/nova/instances/5b86240b-8681-4833-a88a-b665a431a2fb/disk.config#033[00m
Dec  6 03:03:59 np0005548731 nova_compute[232433]: 2025-12-06 08:03:59.194 232437 DEBUG oslo_concurrency.processutils [None req-f450ac06-05a3-41a0-9357-e7e49a88a9e8 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/5b86240b-8681-4833-a88a-b665a431a2fb/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmphbh6z3c0 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 03:03:59 np0005548731 nova_compute[232433]: 2025-12-06 08:03:59.234 232437 DEBUG nova.network.neutron [req-6b9defe8-6f41-47c9-a3be-27ced98d199a req-f9f9ba63-e837-4a2e-8ca0-acee6cb0a9c4 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 5b86240b-8681-4833-a88a-b665a431a2fb] Updated VIF entry in instance network info cache for port 1bb80b61-0282-441f-9bd2-9938bad211d8. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  6 03:03:59 np0005548731 nova_compute[232433]: 2025-12-06 08:03:59.235 232437 DEBUG nova.network.neutron [req-6b9defe8-6f41-47c9-a3be-27ced98d199a req-f9f9ba63-e837-4a2e-8ca0-acee6cb0a9c4 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 5b86240b-8681-4833-a88a-b665a431a2fb] Updating instance_info_cache with network_info: [{"id": "1bb80b61-0282-441f-9bd2-9938bad211d8", "address": "fa:16:3e:b7:1d:27", "network": {"id": "731d5525-647e-47b4-9d5e-931670d91230", "bridge": "br-int", "label": "tempest-network-smoke--921439422", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f4735a799c84437b9dd4ea8778ad2fbb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1bb80b61-02", "ovs_interfaceid": "1bb80b61-0282-441f-9bd2-9938bad211d8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 03:03:59 np0005548731 nova_compute[232433]: 2025-12-06 08:03:59.251 232437 DEBUG oslo_concurrency.lockutils [req-6b9defe8-6f41-47c9-a3be-27ced98d199a req-f9f9ba63-e837-4a2e-8ca0-acee6cb0a9c4 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-5b86240b-8681-4833-a88a-b665a431a2fb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 03:03:59 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e413 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:03:59 np0005548731 nova_compute[232433]: 2025-12-06 08:03:59.336 232437 DEBUG oslo_concurrency.processutils [None req-f450ac06-05a3-41a0-9357-e7e49a88a9e8 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/5b86240b-8681-4833-a88a-b665a431a2fb/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmphbh6z3c0" returned: 0 in 0.142s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 03:03:59 np0005548731 nova_compute[232433]: 2025-12-06 08:03:59.364 232437 DEBUG nova.storage.rbd_utils [None req-f450ac06-05a3-41a0-9357-e7e49a88a9e8 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] rbd image 5b86240b-8681-4833-a88a-b665a431a2fb_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 03:03:59 np0005548731 nova_compute[232433]: 2025-12-06 08:03:59.368 232437 DEBUG oslo_concurrency.processutils [None req-f450ac06-05a3-41a0-9357-e7e49a88a9e8 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/5b86240b-8681-4833-a88a-b665a431a2fb/disk.config 5b86240b-8681-4833-a88a-b665a431a2fb_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 03:03:59 np0005548731 nova_compute[232433]: 2025-12-06 08:03:59.555 232437 DEBUG oslo_concurrency.processutils [None req-f450ac06-05a3-41a0-9357-e7e49a88a9e8 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/5b86240b-8681-4833-a88a-b665a431a2fb/disk.config 5b86240b-8681-4833-a88a-b665a431a2fb_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.187s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 03:03:59 np0005548731 nova_compute[232433]: 2025-12-06 08:03:59.556 232437 INFO nova.virt.libvirt.driver [None req-f450ac06-05a3-41a0-9357-e7e49a88a9e8 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 5b86240b-8681-4833-a88a-b665a431a2fb] Deleting local config drive /var/lib/nova/instances/5b86240b-8681-4833-a88a-b665a431a2fb/disk.config because it was imported into RBD.#033[00m
Dec  6 03:03:59 np0005548731 kernel: tap1bb80b61-02: entered promiscuous mode
Dec  6 03:03:59 np0005548731 NetworkManager[49182]: <info>  [1765008239.6132] manager: (tap1bb80b61-02): new Tun device (/org/freedesktop/NetworkManager/Devices/436)
Dec  6 03:03:59 np0005548731 ovn_controller[133927]: 2025-12-06T08:03:59Z|00948|binding|INFO|Claiming lport 1bb80b61-0282-441f-9bd2-9938bad211d8 for this chassis.
Dec  6 03:03:59 np0005548731 ovn_controller[133927]: 2025-12-06T08:03:59Z|00949|binding|INFO|1bb80b61-0282-441f-9bd2-9938bad211d8: Claiming fa:16:3e:b7:1d:27 10.100.0.13
Dec  6 03:03:59 np0005548731 nova_compute[232433]: 2025-12-06 08:03:59.615 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:03:59 np0005548731 nova_compute[232433]: 2025-12-06 08:03:59.618 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:03:59 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:03:59.629 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b7:1d:27 10.100.0.13'], port_security=['fa:16:3e:b7:1d:27 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '5b86240b-8681-4833-a88a-b665a431a2fb', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-731d5525-647e-47b4-9d5e-931670d91230', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f4735a799c84437b9dd4ea8778ad2fbb', 'neutron:revision_number': '2', 'neutron:security_group_ids': '43b01da0-0ee5-4320-90f1-0527d78fc942', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d3702498-6e99-49db-bc99-4d978916c2ab, chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], logical_port=1bb80b61-0282-441f-9bd2-9938bad211d8) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 03:03:59 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:03:59.630 143965 INFO neutron.agent.ovn.metadata.agent [-] Port 1bb80b61-0282-441f-9bd2-9938bad211d8 in datapath 731d5525-647e-47b4-9d5e-931670d91230 bound to our chassis#033[00m
Dec  6 03:03:59 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:03:59.631 143965 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 731d5525-647e-47b4-9d5e-931670d91230#033[00m
Dec  6 03:03:59 np0005548731 systemd-udevd[322961]: Network interface NamePolicy= disabled on kernel command line.
Dec  6 03:03:59 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:03:59.644 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[4051dc8b-de10-45b3-98a7-db199cc3b116]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:03:59 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:03:59.645 143965 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap731d5525-61 in ovnmeta-731d5525-647e-47b4-9d5e-931670d91230 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Dec  6 03:03:59 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:03:59.647 236668 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap731d5525-60 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Dec  6 03:03:59 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:03:59.647 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[e9f04041-18e3-4b17-bb45-e2b31dbf0832]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:03:59 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:03:59.647 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[faa76e76-af33-4fae-a581-db3fac6c78e4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:03:59 np0005548731 systemd-machined[195355]: New machine qemu-95-instance-000000ba.
Dec  6 03:03:59 np0005548731 NetworkManager[49182]: <info>  [1765008239.6542] device (tap1bb80b61-02): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec  6 03:03:59 np0005548731 NetworkManager[49182]: <info>  [1765008239.6554] device (tap1bb80b61-02): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec  6 03:03:59 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:03:59.660 144079 DEBUG oslo.privsep.daemon [-] privsep: reply[5b6bb28d-4c7b-4f68-b239-9a476736fe64]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:03:59 np0005548731 systemd[1]: Started Virtual Machine qemu-95-instance-000000ba.
Dec  6 03:03:59 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:03:59.674 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[b1b42f6c-d564-4824-92ba-9f5dfe6b67cc]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:03:59 np0005548731 nova_compute[232433]: 2025-12-06 08:03:59.679 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:03:59 np0005548731 ovn_controller[133927]: 2025-12-06T08:03:59Z|00950|binding|INFO|Setting lport 1bb80b61-0282-441f-9bd2-9938bad211d8 ovn-installed in OVS
Dec  6 03:03:59 np0005548731 ovn_controller[133927]: 2025-12-06T08:03:59Z|00951|binding|INFO|Setting lport 1bb80b61-0282-441f-9bd2-9938bad211d8 up in Southbound
Dec  6 03:03:59 np0005548731 nova_compute[232433]: 2025-12-06 08:03:59.686 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:03:59 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:03:59.706 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[290a958f-509e-43bf-a87a-9efe67380250]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:03:59 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:03:59.711 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[e5dc4349-26c8-404c-885e-c720746aec04]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:03:59 np0005548731 NetworkManager[49182]: <info>  [1765008239.7126] manager: (tap731d5525-60): new Veth device (/org/freedesktop/NetworkManager/Devices/437)
Dec  6 03:03:59 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:03:59 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:03:59 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:03:59.732 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:03:59 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:03:59.741 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[8de9e373-ecac-453b-820a-7f108c4fe285]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:03:59 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:03:59.743 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[869c189b-4ab3-4305-828f-13fb515673c7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:03:59 np0005548731 NetworkManager[49182]: <info>  [1765008239.7654] device (tap731d5525-60): carrier: link connected
Dec  6 03:03:59 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:03:59.769 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[e2150c67-ba4f-472b-adac-1041e9b21e46]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:03:59 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:03:59.786 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[f03d7550-84fc-4683-83a5-89a40f0f0b9f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap731d5525-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d0:79:46'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 287], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 850177, 'reachable_time': 26734, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 322994, 'error': None, 'target': 'ovnmeta-731d5525-647e-47b4-9d5e-931670d91230', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:03:59 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:03:59.801 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[705585b6-5a8c-4f10-b9bc-66d0acc65345]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fed0:7946'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 850177, 'tstamp': 850177}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 322995, 'error': None, 'target': 'ovnmeta-731d5525-647e-47b4-9d5e-931670d91230', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:03:59 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:03:59.814 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[aaf18b71-d70d-4eef-8cf4-c086e6766d4d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap731d5525-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d0:79:46'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 287], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 850177, 'reachable_time': 26734, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 322996, 'error': None, 'target': 'ovnmeta-731d5525-647e-47b4-9d5e-931670d91230', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:03:59 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:03:59.842 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[2b99fcf6-80af-4229-9b11-f58306b31545]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:03:59 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:03:59.890 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[c0d07c3b-9afe-44a9-86cb-c6233ab40b8d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:03:59 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:03:59.895 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap731d5525-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 03:03:59 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:03:59.896 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  6 03:03:59 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:03:59.897 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap731d5525-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 03:03:59 np0005548731 NetworkManager[49182]: <info>  [1765008239.8991] manager: (tap731d5525-60): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/438)
Dec  6 03:03:59 np0005548731 kernel: tap731d5525-60: entered promiscuous mode
Dec  6 03:03:59 np0005548731 nova_compute[232433]: 2025-12-06 08:03:59.898 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:03:59 np0005548731 nova_compute[232433]: 2025-12-06 08:03:59.900 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:03:59 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:03:59.903 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap731d5525-60, col_values=(('external_ids', {'iface-id': '4f69bc24-8a58-474f-9da6-ba47b346dcb9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 03:03:59 np0005548731 ovn_controller[133927]: 2025-12-06T08:03:59Z|00952|binding|INFO|Releasing lport 4f69bc24-8a58-474f-9da6-ba47b346dcb9 from this chassis (sb_readonly=0)
Dec  6 03:03:59 np0005548731 nova_compute[232433]: 2025-12-06 08:03:59.904 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:03:59 np0005548731 nova_compute[232433]: 2025-12-06 08:03:59.905 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:03:59 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:03:59.906 143965 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/731d5525-647e-47b4-9d5e-931670d91230.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/731d5525-647e-47b4-9d5e-931670d91230.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Dec  6 03:03:59 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:03:59.906 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[cb08e8c0-5fb1-4746-a491-ad3be2f24ea0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:03:59 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:03:59.907 143965 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec  6 03:03:59 np0005548731 ovn_metadata_agent[143960]: global
Dec  6 03:03:59 np0005548731 ovn_metadata_agent[143960]:    log         /dev/log local0 debug
Dec  6 03:03:59 np0005548731 ovn_metadata_agent[143960]:    log-tag     haproxy-metadata-proxy-731d5525-647e-47b4-9d5e-931670d91230
Dec  6 03:03:59 np0005548731 ovn_metadata_agent[143960]:    user        root
Dec  6 03:03:59 np0005548731 ovn_metadata_agent[143960]:    group       root
Dec  6 03:03:59 np0005548731 ovn_metadata_agent[143960]:    maxconn     1024
Dec  6 03:03:59 np0005548731 ovn_metadata_agent[143960]:    pidfile     /var/lib/neutron/external/pids/731d5525-647e-47b4-9d5e-931670d91230.pid.haproxy
Dec  6 03:03:59 np0005548731 ovn_metadata_agent[143960]:    daemon
Dec  6 03:03:59 np0005548731 ovn_metadata_agent[143960]: 
Dec  6 03:03:59 np0005548731 ovn_metadata_agent[143960]: defaults
Dec  6 03:03:59 np0005548731 ovn_metadata_agent[143960]:    log global
Dec  6 03:03:59 np0005548731 ovn_metadata_agent[143960]:    mode http
Dec  6 03:03:59 np0005548731 ovn_metadata_agent[143960]:    option httplog
Dec  6 03:03:59 np0005548731 ovn_metadata_agent[143960]:    option dontlognull
Dec  6 03:03:59 np0005548731 ovn_metadata_agent[143960]:    option http-server-close
Dec  6 03:03:59 np0005548731 ovn_metadata_agent[143960]:    option forwardfor
Dec  6 03:03:59 np0005548731 ovn_metadata_agent[143960]:    retries                 3
Dec  6 03:03:59 np0005548731 ovn_metadata_agent[143960]:    timeout http-request    30s
Dec  6 03:03:59 np0005548731 ovn_metadata_agent[143960]:    timeout connect         30s
Dec  6 03:03:59 np0005548731 ovn_metadata_agent[143960]:    timeout client          32s
Dec  6 03:03:59 np0005548731 ovn_metadata_agent[143960]:    timeout server          32s
Dec  6 03:03:59 np0005548731 ovn_metadata_agent[143960]:    timeout http-keep-alive 30s
Dec  6 03:03:59 np0005548731 ovn_metadata_agent[143960]: 
Dec  6 03:03:59 np0005548731 ovn_metadata_agent[143960]: 
Dec  6 03:03:59 np0005548731 ovn_metadata_agent[143960]: listen listener
Dec  6 03:03:59 np0005548731 ovn_metadata_agent[143960]:    bind 169.254.169.254:80
Dec  6 03:03:59 np0005548731 ovn_metadata_agent[143960]:    server metadata /var/lib/neutron/metadata_proxy
Dec  6 03:03:59 np0005548731 ovn_metadata_agent[143960]:    http-request add-header X-OVN-Network-ID 731d5525-647e-47b4-9d5e-931670d91230
Dec  6 03:03:59 np0005548731 ovn_metadata_agent[143960]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Dec  6 03:03:59 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:03:59.908 143965 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-731d5525-647e-47b4-9d5e-931670d91230', 'env', 'PROCESS_TAG=haproxy-731d5525-647e-47b4-9d5e-931670d91230', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/731d5525-647e-47b4-9d5e-931670d91230.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Dec  6 03:03:59 np0005548731 nova_compute[232433]: 2025-12-06 08:03:59.919 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:04:00 np0005548731 nova_compute[232433]: 2025-12-06 08:04:00.087 232437 DEBUG nova.virt.driver [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Emitting event <LifecycleEvent: 1765008240.0867512, 5b86240b-8681-4833-a88a-b665a431a2fb => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 03:04:00 np0005548731 nova_compute[232433]: 2025-12-06 08:04:00.087 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 5b86240b-8681-4833-a88a-b665a431a2fb] VM Started (Lifecycle Event)#033[00m
Dec  6 03:04:00 np0005548731 nova_compute[232433]: 2025-12-06 08:04:00.230 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 5b86240b-8681-4833-a88a-b665a431a2fb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 03:04:00 np0005548731 nova_compute[232433]: 2025-12-06 08:04:00.234 232437 DEBUG nova.virt.driver [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Emitting event <LifecycleEvent: 1765008240.0899205, 5b86240b-8681-4833-a88a-b665a431a2fb => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 03:04:00 np0005548731 nova_compute[232433]: 2025-12-06 08:04:00.234 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 5b86240b-8681-4833-a88a-b665a431a2fb] VM Paused (Lifecycle Event)#033[00m
Dec  6 03:04:00 np0005548731 nova_compute[232433]: 2025-12-06 08:04:00.253 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 5b86240b-8681-4833-a88a-b665a431a2fb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 03:04:00 np0005548731 nova_compute[232433]: 2025-12-06 08:04:00.256 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 5b86240b-8681-4833-a88a-b665a431a2fb] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  6 03:04:00 np0005548731 podman[323070]: 2025-12-06 08:04:00.278163815 +0000 UTC m=+0.051466630 container create 01450d91cbd241ee9ca5a17fd4f7e6b18c9dde482c907da1454825c3d9217874 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-731d5525-647e-47b4-9d5e-931670d91230, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec  6 03:04:00 np0005548731 nova_compute[232433]: 2025-12-06 08:04:00.281 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 5b86240b-8681-4833-a88a-b665a431a2fb] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  6 03:04:00 np0005548731 systemd[1]: Started libpod-conmon-01450d91cbd241ee9ca5a17fd4f7e6b18c9dde482c907da1454825c3d9217874.scope.
Dec  6 03:04:00 np0005548731 systemd[1]: Started libcrun container.
Dec  6 03:04:00 np0005548731 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b2af0f4b0e521e6b450157164f26e222f9cb8d7895a28ae3a7b05058266d0c42/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec  6 03:04:00 np0005548731 podman[323070]: 2025-12-06 08:04:00.253305727 +0000 UTC m=+0.026608562 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec  6 03:04:00 np0005548731 podman[323070]: 2025-12-06 08:04:00.351014086 +0000 UTC m=+0.124316921 container init 01450d91cbd241ee9ca5a17fd4f7e6b18c9dde482c907da1454825c3d9217874 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-731d5525-647e-47b4-9d5e-931670d91230, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Dec  6 03:04:00 np0005548731 podman[323070]: 2025-12-06 08:04:00.356427818 +0000 UTC m=+0.129730633 container start 01450d91cbd241ee9ca5a17fd4f7e6b18c9dde482c907da1454825c3d9217874 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-731d5525-647e-47b4-9d5e-931670d91230, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec  6 03:04:00 np0005548731 neutron-haproxy-ovnmeta-731d5525-647e-47b4-9d5e-931670d91230[323085]: [NOTICE]   (323089) : New worker (323091) forked
Dec  6 03:04:00 np0005548731 neutron-haproxy-ovnmeta-731d5525-647e-47b4-9d5e-931670d91230[323085]: [NOTICE]   (323089) : Loading success.
Dec  6 03:04:00 np0005548731 nova_compute[232433]: 2025-12-06 08:04:00.613 232437 DEBUG nova.compute.manager [req-f329ae8c-b317-46a9-86ea-23fa0a548bf3 req-39ab388b-0687-4ad8-8df6-0ffe6d1865d7 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 5b86240b-8681-4833-a88a-b665a431a2fb] Received event network-vif-plugged-1bb80b61-0282-441f-9bd2-9938bad211d8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 03:04:00 np0005548731 nova_compute[232433]: 2025-12-06 08:04:00.614 232437 DEBUG oslo_concurrency.lockutils [req-f329ae8c-b317-46a9-86ea-23fa0a548bf3 req-39ab388b-0687-4ad8-8df6-0ffe6d1865d7 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "5b86240b-8681-4833-a88a-b665a431a2fb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:04:00 np0005548731 nova_compute[232433]: 2025-12-06 08:04:00.614 232437 DEBUG oslo_concurrency.lockutils [req-f329ae8c-b317-46a9-86ea-23fa0a548bf3 req-39ab388b-0687-4ad8-8df6-0ffe6d1865d7 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "5b86240b-8681-4833-a88a-b665a431a2fb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:04:00 np0005548731 nova_compute[232433]: 2025-12-06 08:04:00.614 232437 DEBUG oslo_concurrency.lockutils [req-f329ae8c-b317-46a9-86ea-23fa0a548bf3 req-39ab388b-0687-4ad8-8df6-0ffe6d1865d7 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "5b86240b-8681-4833-a88a-b665a431a2fb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:04:00 np0005548731 nova_compute[232433]: 2025-12-06 08:04:00.614 232437 DEBUG nova.compute.manager [req-f329ae8c-b317-46a9-86ea-23fa0a548bf3 req-39ab388b-0687-4ad8-8df6-0ffe6d1865d7 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 5b86240b-8681-4833-a88a-b665a431a2fb] Processing event network-vif-plugged-1bb80b61-0282-441f-9bd2-9938bad211d8 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Dec  6 03:04:00 np0005548731 nova_compute[232433]: 2025-12-06 08:04:00.615 232437 DEBUG nova.compute.manager [None req-f450ac06-05a3-41a0-9357-e7e49a88a9e8 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 5b86240b-8681-4833-a88a-b665a431a2fb] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec  6 03:04:00 np0005548731 nova_compute[232433]: 2025-12-06 08:04:00.620 232437 DEBUG nova.virt.driver [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Emitting event <LifecycleEvent: 1765008240.6200113, 5b86240b-8681-4833-a88a-b665a431a2fb => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 03:04:00 np0005548731 nova_compute[232433]: 2025-12-06 08:04:00.623 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 5b86240b-8681-4833-a88a-b665a431a2fb] VM Resumed (Lifecycle Event)#033[00m
Dec  6 03:04:00 np0005548731 nova_compute[232433]: 2025-12-06 08:04:00.625 232437 DEBUG nova.virt.libvirt.driver [None req-f450ac06-05a3-41a0-9357-e7e49a88a9e8 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 5b86240b-8681-4833-a88a-b665a431a2fb] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Dec  6 03:04:00 np0005548731 nova_compute[232433]: 2025-12-06 08:04:00.628 232437 INFO nova.virt.libvirt.driver [-] [instance: 5b86240b-8681-4833-a88a-b665a431a2fb] Instance spawned successfully.#033[00m
Dec  6 03:04:00 np0005548731 nova_compute[232433]: 2025-12-06 08:04:00.629 232437 DEBUG nova.virt.libvirt.driver [None req-f450ac06-05a3-41a0-9357-e7e49a88a9e8 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 5b86240b-8681-4833-a88a-b665a431a2fb] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Dec  6 03:04:00 np0005548731 nova_compute[232433]: 2025-12-06 08:04:00.648 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 5b86240b-8681-4833-a88a-b665a431a2fb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 03:04:00 np0005548731 nova_compute[232433]: 2025-12-06 08:04:00.655 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 5b86240b-8681-4833-a88a-b665a431a2fb] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  6 03:04:00 np0005548731 nova_compute[232433]: 2025-12-06 08:04:00.661 232437 DEBUG nova.virt.libvirt.driver [None req-f450ac06-05a3-41a0-9357-e7e49a88a9e8 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 5b86240b-8681-4833-a88a-b665a431a2fb] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 03:04:00 np0005548731 nova_compute[232433]: 2025-12-06 08:04:00.661 232437 DEBUG nova.virt.libvirt.driver [None req-f450ac06-05a3-41a0-9357-e7e49a88a9e8 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 5b86240b-8681-4833-a88a-b665a431a2fb] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 03:04:00 np0005548731 nova_compute[232433]: 2025-12-06 08:04:00.662 232437 DEBUG nova.virt.libvirt.driver [None req-f450ac06-05a3-41a0-9357-e7e49a88a9e8 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 5b86240b-8681-4833-a88a-b665a431a2fb] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 03:04:00 np0005548731 nova_compute[232433]: 2025-12-06 08:04:00.662 232437 DEBUG nova.virt.libvirt.driver [None req-f450ac06-05a3-41a0-9357-e7e49a88a9e8 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 5b86240b-8681-4833-a88a-b665a431a2fb] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 03:04:00 np0005548731 nova_compute[232433]: 2025-12-06 08:04:00.663 232437 DEBUG nova.virt.libvirt.driver [None req-f450ac06-05a3-41a0-9357-e7e49a88a9e8 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 5b86240b-8681-4833-a88a-b665a431a2fb] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 03:04:00 np0005548731 nova_compute[232433]: 2025-12-06 08:04:00.663 232437 DEBUG nova.virt.libvirt.driver [None req-f450ac06-05a3-41a0-9357-e7e49a88a9e8 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 5b86240b-8681-4833-a88a-b665a431a2fb] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 03:04:00 np0005548731 nova_compute[232433]: 2025-12-06 08:04:00.678 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:04:00 np0005548731 nova_compute[232433]: 2025-12-06 08:04:00.684 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 5b86240b-8681-4833-a88a-b665a431a2fb] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  6 03:04:00 np0005548731 nova_compute[232433]: 2025-12-06 08:04:00.745 232437 INFO nova.compute.manager [None req-f450ac06-05a3-41a0-9357-e7e49a88a9e8 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 5b86240b-8681-4833-a88a-b665a431a2fb] Took 10.53 seconds to spawn the instance on the hypervisor.#033[00m
Dec  6 03:04:00 np0005548731 nova_compute[232433]: 2025-12-06 08:04:00.745 232437 DEBUG nova.compute.manager [None req-f450ac06-05a3-41a0-9357-e7e49a88a9e8 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 5b86240b-8681-4833-a88a-b665a431a2fb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 03:04:00 np0005548731 nova_compute[232433]: 2025-12-06 08:04:00.859 232437 INFO nova.compute.manager [None req-f450ac06-05a3-41a0-9357-e7e49a88a9e8 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 5b86240b-8681-4833-a88a-b665a431a2fb] Took 11.57 seconds to build instance.#033[00m
Dec  6 03:04:00 np0005548731 nova_compute[232433]: 2025-12-06 08:04:00.884 232437 DEBUG oslo_concurrency.lockutils [None req-f450ac06-05a3-41a0-9357-e7e49a88a9e8 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Lock "5b86240b-8681-4833-a88a-b665a431a2fb" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.701s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:04:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:04:00.909 143965 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:04:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:04:00.910 143965 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:04:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:04:00.910 143965 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:04:01 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:04:01 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:04:01 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:04:01.072 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:04:01 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:04:01 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:04:01 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:04:01.735 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:04:01 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:04:01.762 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=9f96b960-b4f2-40bd-ae99-08121f5e8b78, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '81'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 03:04:02 np0005548731 nova_compute[232433]: 2025-12-06 08:04:02.711 232437 DEBUG nova.compute.manager [req-ce3c0fec-2811-4371-b61f-e2c32c84b112 req-6836d226-d85e-4dcf-a8ac-7a81fbfc9c64 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 5b86240b-8681-4833-a88a-b665a431a2fb] Received event network-vif-plugged-1bb80b61-0282-441f-9bd2-9938bad211d8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 03:04:02 np0005548731 nova_compute[232433]: 2025-12-06 08:04:02.712 232437 DEBUG oslo_concurrency.lockutils [req-ce3c0fec-2811-4371-b61f-e2c32c84b112 req-6836d226-d85e-4dcf-a8ac-7a81fbfc9c64 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "5b86240b-8681-4833-a88a-b665a431a2fb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:04:02 np0005548731 nova_compute[232433]: 2025-12-06 08:04:02.712 232437 DEBUG oslo_concurrency.lockutils [req-ce3c0fec-2811-4371-b61f-e2c32c84b112 req-6836d226-d85e-4dcf-a8ac-7a81fbfc9c64 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "5b86240b-8681-4833-a88a-b665a431a2fb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:04:02 np0005548731 nova_compute[232433]: 2025-12-06 08:04:02.712 232437 DEBUG oslo_concurrency.lockutils [req-ce3c0fec-2811-4371-b61f-e2c32c84b112 req-6836d226-d85e-4dcf-a8ac-7a81fbfc9c64 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "5b86240b-8681-4833-a88a-b665a431a2fb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:04:02 np0005548731 nova_compute[232433]: 2025-12-06 08:04:02.713 232437 DEBUG nova.compute.manager [req-ce3c0fec-2811-4371-b61f-e2c32c84b112 req-6836d226-d85e-4dcf-a8ac-7a81fbfc9c64 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 5b86240b-8681-4833-a88a-b665a431a2fb] No waiting events found dispatching network-vif-plugged-1bb80b61-0282-441f-9bd2-9938bad211d8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 03:04:02 np0005548731 nova_compute[232433]: 2025-12-06 08:04:02.713 232437 WARNING nova.compute.manager [req-ce3c0fec-2811-4371-b61f-e2c32c84b112 req-6836d226-d85e-4dcf-a8ac-7a81fbfc9c64 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 5b86240b-8681-4833-a88a-b665a431a2fb] Received unexpected event network-vif-plugged-1bb80b61-0282-441f-9bd2-9938bad211d8 for instance with vm_state active and task_state None.#033[00m
Dec  6 03:04:03 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:04:03 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 03:04:03 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:04:03.075 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 03:04:03 np0005548731 nova_compute[232433]: 2025-12-06 08:04:03.402 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:04:03 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:04:03 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:04:03 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:04:03.737 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:04:04 np0005548731 nova_compute[232433]: 2025-12-06 08:04:04.105 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:04:04 np0005548731 nova_compute[232433]: 2025-12-06 08:04:04.106 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Dec  6 03:04:04 np0005548731 nova_compute[232433]: 2025-12-06 08:04:04.126 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Dec  6 03:04:04 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e413 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:04:05 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:04:05 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:04:05 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:04:05.078 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:04:05 np0005548731 podman[323395]: 2025-12-06 08:04:05.519002471 +0000 UTC m=+0.058404469 container exec 541e7276f6038dd900483907d1613bdfcbf99ea3714cf53a920a7189986fab54 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-mon-compute-2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  6 03:04:05 np0005548731 podman[323395]: 2025-12-06 08:04:05.625759661 +0000 UTC m=+0.165161649 container exec_died 541e7276f6038dd900483907d1613bdfcbf99ea3714cf53a920a7189986fab54 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-mon-compute-2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Dec  6 03:04:05 np0005548731 nova_compute[232433]: 2025-12-06 08:04:05.680 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:04:05 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:04:05 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:04:05 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:04:05.740 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:04:05 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Dec  6 03:04:05 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Dec  6 03:04:05 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 03:04:05 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 03:04:06 np0005548731 podman[323550]: 2025-12-06 08:04:06.212096957 +0000 UTC m=+0.057926497 container exec 42755d5ddab81b86d84b40c9464a8af637e2d196c1a098a61593efb6a9875167 (image=quay.io/ceph/haproxy:2.3, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-haproxy-rgw-default-compute-2-nyemkw)
Dec  6 03:04:06 np0005548731 podman[323550]: 2025-12-06 08:04:06.222919671 +0000 UTC m=+0.068749211 container exec_died 42755d5ddab81b86d84b40c9464a8af637e2d196c1a098a61593efb6a9875167 (image=quay.io/ceph/haproxy:2.3, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-haproxy-rgw-default-compute-2-nyemkw)
Dec  6 03:04:06 np0005548731 podman[323614]: 2025-12-06 08:04:06.416598307 +0000 UTC m=+0.053814837 container exec 60f905acca99db805341691264f7f91f9f4f43c91aa728d21396595dfca7cf39 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-keepalived-rgw-default-compute-2-cossgt, com.redhat.component=keepalived-container, name=keepalived, version=2.2.4, io.buildah.version=1.28.2, release=1793, distribution-scope=public, io.openshift.tags=Ceph keepalived, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-type=git, architecture=x86_64, summary=Provides keepalived on RHEL 9 for Ceph., vendor=Red Hat, Inc., vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, build-date=2023-02-22T09:23:20, description=keepalived for Ceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Keepalived on RHEL 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.openshift.expose-services=)
Dec  6 03:04:06 np0005548731 podman[323614]: 2025-12-06 08:04:06.429806109 +0000 UTC m=+0.067022599 container exec_died 60f905acca99db805341691264f7f91f9f4f43c91aa728d21396595dfca7cf39 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-keepalived-rgw-default-compute-2-cossgt, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, release=1793, architecture=x86_64, distribution-scope=public, description=keepalived for Ceph, summary=Provides keepalived on RHEL 9 for Ceph., vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, build-date=2023-02-22T09:23:20, com.redhat.component=keepalived-container, name=keepalived, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, version=2.2.4, io.buildah.version=1.28.2, io.openshift.tags=Ceph keepalived, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Keepalived on RHEL 9)
Dec  6 03:04:06 np0005548731 NetworkManager[49182]: <info>  [1765008246.5131] manager: (patch-provnet-9e78c1a1-68f4-477a-abaa-13a98bde06e5-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/439)
Dec  6 03:04:06 np0005548731 NetworkManager[49182]: <info>  [1765008246.5138] manager: (patch-br-int-to-provnet-9e78c1a1-68f4-477a-abaa-13a98bde06e5): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/440)
Dec  6 03:04:06 np0005548731 nova_compute[232433]: 2025-12-06 08:04:06.512 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:04:06 np0005548731 nova_compute[232433]: 2025-12-06 08:04:06.515 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:04:06 np0005548731 ovn_controller[133927]: 2025-12-06T08:04:06Z|00953|binding|INFO|Releasing lport 4f69bc24-8a58-474f-9da6-ba47b346dcb9 from this chassis (sb_readonly=0)
Dec  6 03:04:06 np0005548731 nova_compute[232433]: 2025-12-06 08:04:06.523 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:04:06 np0005548731 nova_compute[232433]: 2025-12-06 08:04:06.788 232437 DEBUG nova.compute.manager [req-1e5319db-7792-4839-b1c3-94fc1c9dbe29 req-aae6a63f-cee6-49db-a61e-5d5e2deec2e4 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 5b86240b-8681-4833-a88a-b665a431a2fb] Received event network-changed-1bb80b61-0282-441f-9bd2-9938bad211d8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 03:04:06 np0005548731 nova_compute[232433]: 2025-12-06 08:04:06.789 232437 DEBUG nova.compute.manager [req-1e5319db-7792-4839-b1c3-94fc1c9dbe29 req-aae6a63f-cee6-49db-a61e-5d5e2deec2e4 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 5b86240b-8681-4833-a88a-b665a431a2fb] Refreshing instance network info cache due to event network-changed-1bb80b61-0282-441f-9bd2-9938bad211d8. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  6 03:04:06 np0005548731 nova_compute[232433]: 2025-12-06 08:04:06.789 232437 DEBUG oslo_concurrency.lockutils [req-1e5319db-7792-4839-b1c3-94fc1c9dbe29 req-aae6a63f-cee6-49db-a61e-5d5e2deec2e4 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-5b86240b-8681-4833-a88a-b665a431a2fb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 03:04:06 np0005548731 nova_compute[232433]: 2025-12-06 08:04:06.790 232437 DEBUG oslo_concurrency.lockutils [req-1e5319db-7792-4839-b1c3-94fc1c9dbe29 req-aae6a63f-cee6-49db-a61e-5d5e2deec2e4 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-5b86240b-8681-4833-a88a-b665a431a2fb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 03:04:06 np0005548731 nova_compute[232433]: 2025-12-06 08:04:06.790 232437 DEBUG nova.network.neutron [req-1e5319db-7792-4839-b1c3-94fc1c9dbe29 req-aae6a63f-cee6-49db-a61e-5d5e2deec2e4 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 5b86240b-8681-4833-a88a-b665a431a2fb] Refreshing network info cache for port 1bb80b61-0282-441f-9bd2-9938bad211d8 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  6 03:04:06 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 03:04:06 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 03:04:07 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:04:07 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:04:07 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:04:07.080 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:04:07 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:04:07 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:04:07 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:04:07.742 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:04:07 np0005548731 podman[323919]: 2025-12-06 08:04:07.73249935 +0000 UTC m=+0.021484427 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  6 03:04:07 np0005548731 podman[323919]: 2025-12-06 08:04:07.895330481 +0000 UTC m=+0.184315548 container create b3aa465521737debe5372be6102017bb37b28778781ef01445498e11f41332ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_golick, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default)
Dec  6 03:04:07 np0005548731 systemd[1]: Started libpod-conmon-b3aa465521737debe5372be6102017bb37b28778781ef01445498e11f41332ce.scope.
Dec  6 03:04:07 np0005548731 systemd[1]: Started libcrun container.
Dec  6 03:04:07 np0005548731 podman[323919]: 2025-12-06 08:04:07.982059831 +0000 UTC m=+0.271044908 container init b3aa465521737debe5372be6102017bb37b28778781ef01445498e11f41332ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_golick, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  6 03:04:07 np0005548731 podman[323919]: 2025-12-06 08:04:07.989642686 +0000 UTC m=+0.278627743 container start b3aa465521737debe5372be6102017bb37b28778781ef01445498e11f41332ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_golick, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  6 03:04:07 np0005548731 podman[323919]: 2025-12-06 08:04:07.994544817 +0000 UTC m=+0.283529894 container attach b3aa465521737debe5372be6102017bb37b28778781ef01445498e11f41332ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_golick, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Dec  6 03:04:07 np0005548731 exciting_golick[323935]: 167 167
Dec  6 03:04:07 np0005548731 systemd[1]: libpod-b3aa465521737debe5372be6102017bb37b28778781ef01445498e11f41332ce.scope: Deactivated successfully.
Dec  6 03:04:07 np0005548731 podman[323919]: 2025-12-06 08:04:07.998663807 +0000 UTC m=+0.287648884 container died b3aa465521737debe5372be6102017bb37b28778781ef01445498e11f41332ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_golick, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  6 03:04:08 np0005548731 systemd[1]: var-lib-containers-storage-overlay-86f69a655416c1d02af78eeb246ce47a06d4fe3a4158b569a3d989e60893c3fc-merged.mount: Deactivated successfully.
Dec  6 03:04:08 np0005548731 podman[323919]: 2025-12-06 08:04:08.037587239 +0000 UTC m=+0.326572296 container remove b3aa465521737debe5372be6102017bb37b28778781ef01445498e11f41332ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_golick, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS)
Dec  6 03:04:08 np0005548731 systemd[1]: libpod-conmon-b3aa465521737debe5372be6102017bb37b28778781ef01445498e11f41332ce.scope: Deactivated successfully.
Dec  6 03:04:08 np0005548731 podman[323961]: 2025-12-06 08:04:08.206778816 +0000 UTC m=+0.039206900 container create 9f5c105a6e9b9a6a79bee6df88c6ff163596ca1b31267ecf0db941b9a792a465 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_pike, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0)
Dec  6 03:04:08 np0005548731 systemd[1]: Started libpod-conmon-9f5c105a6e9b9a6a79bee6df88c6ff163596ca1b31267ecf0db941b9a792a465.scope.
Dec  6 03:04:08 np0005548731 systemd[1]: Started libcrun container.
Dec  6 03:04:08 np0005548731 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0119a833247c9fe603324cd2cdc5fb82c74239b1109d26b4cd5a8bfe350eb295/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  6 03:04:08 np0005548731 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0119a833247c9fe603324cd2cdc5fb82c74239b1109d26b4cd5a8bfe350eb295/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  6 03:04:08 np0005548731 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0119a833247c9fe603324cd2cdc5fb82c74239b1109d26b4cd5a8bfe350eb295/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  6 03:04:08 np0005548731 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0119a833247c9fe603324cd2cdc5fb82c74239b1109d26b4cd5a8bfe350eb295/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  6 03:04:08 np0005548731 podman[323961]: 2025-12-06 08:04:08.282959488 +0000 UTC m=+0.115387622 container init 9f5c105a6e9b9a6a79bee6df88c6ff163596ca1b31267ecf0db941b9a792a465 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_pike, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Dec  6 03:04:08 np0005548731 podman[323961]: 2025-12-06 08:04:08.189860561 +0000 UTC m=+0.022288665 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  6 03:04:08 np0005548731 podman[323961]: 2025-12-06 08:04:08.289534389 +0000 UTC m=+0.121962473 container start 9f5c105a6e9b9a6a79bee6df88c6ff163596ca1b31267ecf0db941b9a792a465 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_pike, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Dec  6 03:04:08 np0005548731 podman[323961]: 2025-12-06 08:04:08.292938862 +0000 UTC m=+0.125366996 container attach 9f5c105a6e9b9a6a79bee6df88c6ff163596ca1b31267ecf0db941b9a792a465 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_pike, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  6 03:04:08 np0005548731 nova_compute[232433]: 2025-12-06 08:04:08.404 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:04:08 np0005548731 nova_compute[232433]: 2025-12-06 08:04:08.432 232437 DEBUG nova.network.neutron [req-1e5319db-7792-4839-b1c3-94fc1c9dbe29 req-aae6a63f-cee6-49db-a61e-5d5e2deec2e4 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 5b86240b-8681-4833-a88a-b665a431a2fb] Updated VIF entry in instance network info cache for port 1bb80b61-0282-441f-9bd2-9938bad211d8. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  6 03:04:08 np0005548731 nova_compute[232433]: 2025-12-06 08:04:08.433 232437 DEBUG nova.network.neutron [req-1e5319db-7792-4839-b1c3-94fc1c9dbe29 req-aae6a63f-cee6-49db-a61e-5d5e2deec2e4 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 5b86240b-8681-4833-a88a-b665a431a2fb] Updating instance_info_cache with network_info: [{"id": "1bb80b61-0282-441f-9bd2-9938bad211d8", "address": "fa:16:3e:b7:1d:27", "network": {"id": "731d5525-647e-47b4-9d5e-931670d91230", "bridge": "br-int", "label": "tempest-network-smoke--921439422", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.207", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f4735a799c84437b9dd4ea8778ad2fbb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1bb80b61-02", "ovs_interfaceid": "1bb80b61-0282-441f-9bd2-9938bad211d8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 03:04:08 np0005548731 nova_compute[232433]: 2025-12-06 08:04:08.463 232437 DEBUG oslo_concurrency.lockutils [req-1e5319db-7792-4839-b1c3-94fc1c9dbe29 req-aae6a63f-cee6-49db-a61e-5d5e2deec2e4 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-5b86240b-8681-4833-a88a-b665a431a2fb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 03:04:09 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:04:09 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:04:09 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:04:09.083 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:04:09 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e413 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:04:09 np0005548731 elegant_pike[323978]: [
Dec  6 03:04:09 np0005548731 elegant_pike[323978]:    {
Dec  6 03:04:09 np0005548731 elegant_pike[323978]:        "available": false,
Dec  6 03:04:09 np0005548731 elegant_pike[323978]:        "ceph_device": false,
Dec  6 03:04:09 np0005548731 elegant_pike[323978]:        "device_id": "QEMU_DVD-ROM_QM00001",
Dec  6 03:04:09 np0005548731 elegant_pike[323978]:        "lsm_data": {},
Dec  6 03:04:09 np0005548731 elegant_pike[323978]:        "lvs": [],
Dec  6 03:04:09 np0005548731 elegant_pike[323978]:        "path": "/dev/sr0",
Dec  6 03:04:09 np0005548731 elegant_pike[323978]:        "rejected_reasons": [
Dec  6 03:04:09 np0005548731 elegant_pike[323978]:            "Has a FileSystem",
Dec  6 03:04:09 np0005548731 elegant_pike[323978]:            "Insufficient space (<5GB)"
Dec  6 03:04:09 np0005548731 elegant_pike[323978]:        ],
Dec  6 03:04:09 np0005548731 elegant_pike[323978]:        "sys_api": {
Dec  6 03:04:09 np0005548731 elegant_pike[323978]:            "actuators": null,
Dec  6 03:04:09 np0005548731 elegant_pike[323978]:            "device_nodes": "sr0",
Dec  6 03:04:09 np0005548731 elegant_pike[323978]:            "devname": "sr0",
Dec  6 03:04:09 np0005548731 elegant_pike[323978]:            "human_readable_size": "482.00 KB",
Dec  6 03:04:09 np0005548731 elegant_pike[323978]:            "id_bus": "ata",
Dec  6 03:04:09 np0005548731 elegant_pike[323978]:            "model": "QEMU DVD-ROM",
Dec  6 03:04:09 np0005548731 elegant_pike[323978]:            "nr_requests": "2",
Dec  6 03:04:09 np0005548731 elegant_pike[323978]:            "parent": "/dev/sr0",
Dec  6 03:04:09 np0005548731 elegant_pike[323978]:            "partitions": {},
Dec  6 03:04:09 np0005548731 elegant_pike[323978]:            "path": "/dev/sr0",
Dec  6 03:04:09 np0005548731 elegant_pike[323978]:            "removable": "1",
Dec  6 03:04:09 np0005548731 elegant_pike[323978]:            "rev": "2.5+",
Dec  6 03:04:09 np0005548731 elegant_pike[323978]:            "ro": "0",
Dec  6 03:04:09 np0005548731 elegant_pike[323978]:            "rotational": "1",
Dec  6 03:04:09 np0005548731 elegant_pike[323978]:            "sas_address": "",
Dec  6 03:04:09 np0005548731 elegant_pike[323978]:            "sas_device_handle": "",
Dec  6 03:04:09 np0005548731 elegant_pike[323978]:            "scheduler_mode": "mq-deadline",
Dec  6 03:04:09 np0005548731 elegant_pike[323978]:            "sectors": 0,
Dec  6 03:04:09 np0005548731 elegant_pike[323978]:            "sectorsize": "2048",
Dec  6 03:04:09 np0005548731 elegant_pike[323978]:            "size": 493568.0,
Dec  6 03:04:09 np0005548731 elegant_pike[323978]:            "support_discard": "2048",
Dec  6 03:04:09 np0005548731 elegant_pike[323978]:            "type": "disk",
Dec  6 03:04:09 np0005548731 elegant_pike[323978]:            "vendor": "QEMU"
Dec  6 03:04:09 np0005548731 elegant_pike[323978]:        }
Dec  6 03:04:09 np0005548731 elegant_pike[323978]:    }
Dec  6 03:04:09 np0005548731 elegant_pike[323978]: ]
Dec  6 03:04:09 np0005548731 systemd[1]: libpod-9f5c105a6e9b9a6a79bee6df88c6ff163596ca1b31267ecf0db941b9a792a465.scope: Deactivated successfully.
Dec  6 03:04:09 np0005548731 systemd[1]: libpod-9f5c105a6e9b9a6a79bee6df88c6ff163596ca1b31267ecf0db941b9a792a465.scope: Consumed 1.170s CPU time.
Dec  6 03:04:09 np0005548731 podman[323961]: 2025-12-06 08:04:09.4708005 +0000 UTC m=+1.303228584 container died 9f5c105a6e9b9a6a79bee6df88c6ff163596ca1b31267ecf0db941b9a792a465 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_pike, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Dec  6 03:04:09 np0005548731 systemd[1]: var-lib-containers-storage-overlay-0119a833247c9fe603324cd2cdc5fb82c74239b1109d26b4cd5a8bfe350eb295-merged.mount: Deactivated successfully.
Dec  6 03:04:09 np0005548731 podman[323961]: 2025-12-06 08:04:09.536066616 +0000 UTC m=+1.368494690 container remove 9f5c105a6e9b9a6a79bee6df88c6ff163596ca1b31267ecf0db941b9a792a465 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_pike, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  6 03:04:09 np0005548731 systemd[1]: libpod-conmon-9f5c105a6e9b9a6a79bee6df88c6ff163596ca1b31267ecf0db941b9a792a465.scope: Deactivated successfully.
Dec  6 03:04:09 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:04:09 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:04:09 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:04:09.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:04:10 np0005548731 nova_compute[232433]: 2025-12-06 08:04:10.261 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:04:10 np0005548731 nova_compute[232433]: 2025-12-06 08:04:10.683 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:04:10 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 03:04:10 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 03:04:10 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  6 03:04:10 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 03:04:10 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  6 03:04:11 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:04:11 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:04:11 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:04:11.085 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:04:11 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:04:11 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 03:04:11 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:04:11.748 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 03:04:13 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:04:13 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:04:13 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:04:13.088 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:04:13 np0005548731 nova_compute[232433]: 2025-12-06 08:04:13.470 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:04:13 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:04:13 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:04:13 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:04:13.751 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:04:14 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e413 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:04:15 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:04:15 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:04:15 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:04:15.091 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:04:15 np0005548731 ovn_controller[133927]: 2025-12-06T08:04:15Z|00128|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:b7:1d:27 10.100.0.13
Dec  6 03:04:15 np0005548731 ovn_controller[133927]: 2025-12-06T08:04:15Z|00129|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:b7:1d:27 10.100.0.13
Dec  6 03:04:15 np0005548731 nova_compute[232433]: 2025-12-06 08:04:15.686 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:04:15 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:04:15 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:04:15 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:04:15.753 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:04:16 np0005548731 podman[325317]: 2025-12-06 08:04:16.894322891 +0000 UTC m=+0.053122240 container health_status 26c2ce9bdb83c6db5ccad7f5b2a7cb4e9354ab0235ad2f942ea42c8feda2142b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Dec  6 03:04:16 np0005548731 podman[325319]: 2025-12-06 08:04:16.924586361 +0000 UTC m=+0.079962606 container health_status ae4fd08cdec31d6ae2a6b91ffff7deb6ca5a8375cb52ee4b685cc6f25796824e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec  6 03:04:16 np0005548731 podman[325318]: 2025-12-06 08:04:16.945419071 +0000 UTC m=+0.103696986 container health_status a118c3becf56cfbcae208065771ebf10a65162bb7b96389971293e534823fb78 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ovn_controller, io.buildah.version=1.41.3)
Dec  6 03:04:17 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:04:17 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:04:17 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:04:17.094 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:04:17 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 03:04:17 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 03:04:17 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:04:17 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:04:17 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:04:17.756 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:04:18 np0005548731 nova_compute[232433]: 2025-12-06 08:04:18.472 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:04:19 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:04:19 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:04:19 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:04:19.097 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:04:19 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e413 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:04:19 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:04:19 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:04:19 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:04:19.759 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:04:20 np0005548731 nova_compute[232433]: 2025-12-06 08:04:20.688 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:04:21 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:04:21 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:04:21 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:04:21.100 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:04:21 np0005548731 nova_compute[232433]: 2025-12-06 08:04:21.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:04:21 np0005548731 nova_compute[232433]: 2025-12-06 08:04:21.104 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Dec  6 03:04:21 np0005548731 nova_compute[232433]: 2025-12-06 08:04:21.646 232437 INFO nova.compute.manager [None req-3ac9a68f-c0f8-4a87-945e-d54c4a8d4e1b d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 5b86240b-8681-4833-a88a-b665a431a2fb] Get console output#033[00m
Dec  6 03:04:21 np0005548731 nova_compute[232433]: 2025-12-06 08:04:21.653 261230 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Dec  6 03:04:21 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:04:21 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:04:21 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:04:21.762 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:04:22 np0005548731 nova_compute[232433]: 2025-12-06 08:04:22.690 232437 INFO nova.compute.manager [None req-3a7c3a26-3deb-41d7-b121-564cab3cd0bf d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 5b86240b-8681-4833-a88a-b665a431a2fb] Get console output#033[00m
Dec  6 03:04:22 np0005548731 nova_compute[232433]: 2025-12-06 08:04:22.696 261230 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Dec  6 03:04:23 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:04:23 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:04:23 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:04:23.103 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:04:23 np0005548731 nova_compute[232433]: 2025-12-06 08:04:23.474 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:04:23 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:04:23 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:04:23 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:04:23.764 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:04:24 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e413 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:04:24 np0005548731 nova_compute[232433]: 2025-12-06 08:04:24.263 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:04:24 np0005548731 nova_compute[232433]: 2025-12-06 08:04:24.296 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Triggering sync for uuid 5b86240b-8681-4833-a88a-b665a431a2fb _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m
Dec  6 03:04:24 np0005548731 nova_compute[232433]: 2025-12-06 08:04:24.296 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Acquiring lock "5b86240b-8681-4833-a88a-b665a431a2fb" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:04:24 np0005548731 nova_compute[232433]: 2025-12-06 08:04:24.297 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "5b86240b-8681-4833-a88a-b665a431a2fb" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:04:24 np0005548731 nova_compute[232433]: 2025-12-06 08:04:24.327 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "5b86240b-8681-4833-a88a-b665a431a2fb" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.031s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:04:24 np0005548731 nova_compute[232433]: 2025-12-06 08:04:24.609 232437 DEBUG nova.compute.manager [req-fa228a0b-a11a-4949-89ca-67ff6af73659 req-c001a40e-8c27-4bde-b791-da980742436a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 5b86240b-8681-4833-a88a-b665a431a2fb] Received event network-changed-1bb80b61-0282-441f-9bd2-9938bad211d8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 03:04:24 np0005548731 nova_compute[232433]: 2025-12-06 08:04:24.609 232437 DEBUG nova.compute.manager [req-fa228a0b-a11a-4949-89ca-67ff6af73659 req-c001a40e-8c27-4bde-b791-da980742436a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 5b86240b-8681-4833-a88a-b665a431a2fb] Refreshing instance network info cache due to event network-changed-1bb80b61-0282-441f-9bd2-9938bad211d8. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  6 03:04:24 np0005548731 nova_compute[232433]: 2025-12-06 08:04:24.610 232437 DEBUG oslo_concurrency.lockutils [req-fa228a0b-a11a-4949-89ca-67ff6af73659 req-c001a40e-8c27-4bde-b791-da980742436a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-5b86240b-8681-4833-a88a-b665a431a2fb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 03:04:24 np0005548731 nova_compute[232433]: 2025-12-06 08:04:24.610 232437 DEBUG oslo_concurrency.lockutils [req-fa228a0b-a11a-4949-89ca-67ff6af73659 req-c001a40e-8c27-4bde-b791-da980742436a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-5b86240b-8681-4833-a88a-b665a431a2fb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 03:04:24 np0005548731 nova_compute[232433]: 2025-12-06 08:04:24.610 232437 DEBUG nova.network.neutron [req-fa228a0b-a11a-4949-89ca-67ff6af73659 req-c001a40e-8c27-4bde-b791-da980742436a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 5b86240b-8681-4833-a88a-b665a431a2fb] Refreshing network info cache for port 1bb80b61-0282-441f-9bd2-9938bad211d8 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  6 03:04:24 np0005548731 nova_compute[232433]: 2025-12-06 08:04:24.661 232437 DEBUG oslo_concurrency.lockutils [None req-2d7a6a50-4bce-45ee-8760-3102732159af d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Acquiring lock "5b86240b-8681-4833-a88a-b665a431a2fb" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:04:24 np0005548731 nova_compute[232433]: 2025-12-06 08:04:24.662 232437 DEBUG oslo_concurrency.lockutils [None req-2d7a6a50-4bce-45ee-8760-3102732159af d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Lock "5b86240b-8681-4833-a88a-b665a431a2fb" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:04:24 np0005548731 nova_compute[232433]: 2025-12-06 08:04:24.663 232437 DEBUG oslo_concurrency.lockutils [None req-2d7a6a50-4bce-45ee-8760-3102732159af d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Acquiring lock "5b86240b-8681-4833-a88a-b665a431a2fb-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:04:24 np0005548731 nova_compute[232433]: 2025-12-06 08:04:24.663 232437 DEBUG oslo_concurrency.lockutils [None req-2d7a6a50-4bce-45ee-8760-3102732159af d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Lock "5b86240b-8681-4833-a88a-b665a431a2fb-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:04:24 np0005548731 nova_compute[232433]: 2025-12-06 08:04:24.664 232437 DEBUG oslo_concurrency.lockutils [None req-2d7a6a50-4bce-45ee-8760-3102732159af d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Lock "5b86240b-8681-4833-a88a-b665a431a2fb-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:04:24 np0005548731 nova_compute[232433]: 2025-12-06 08:04:24.665 232437 INFO nova.compute.manager [None req-2d7a6a50-4bce-45ee-8760-3102732159af d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 5b86240b-8681-4833-a88a-b665a431a2fb] Terminating instance#033[00m
Dec  6 03:04:24 np0005548731 nova_compute[232433]: 2025-12-06 08:04:24.667 232437 DEBUG nova.compute.manager [None req-2d7a6a50-4bce-45ee-8760-3102732159af d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 5b86240b-8681-4833-a88a-b665a431a2fb] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec  6 03:04:24 np0005548731 kernel: tap1bb80b61-02 (unregistering): left promiscuous mode
Dec  6 03:04:24 np0005548731 NetworkManager[49182]: <info>  [1765008264.7175] device (tap1bb80b61-02): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec  6 03:04:24 np0005548731 nova_compute[232433]: 2025-12-06 08:04:24.724 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:04:24 np0005548731 ovn_controller[133927]: 2025-12-06T08:04:24Z|00954|binding|INFO|Releasing lport 1bb80b61-0282-441f-9bd2-9938bad211d8 from this chassis (sb_readonly=0)
Dec  6 03:04:24 np0005548731 ovn_controller[133927]: 2025-12-06T08:04:24Z|00955|binding|INFO|Setting lport 1bb80b61-0282-441f-9bd2-9938bad211d8 down in Southbound
Dec  6 03:04:24 np0005548731 ovn_controller[133927]: 2025-12-06T08:04:24Z|00956|binding|INFO|Removing iface tap1bb80b61-02 ovn-installed in OVS
Dec  6 03:04:24 np0005548731 nova_compute[232433]: 2025-12-06 08:04:24.727 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:04:24 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:04:24.731 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b7:1d:27 10.100.0.13'], port_security=['fa:16:3e:b7:1d:27 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '5b86240b-8681-4833-a88a-b665a431a2fb', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-731d5525-647e-47b4-9d5e-931670d91230', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f4735a799c84437b9dd4ea8778ad2fbb', 'neutron:revision_number': '4', 'neutron:security_group_ids': '43b01da0-0ee5-4320-90f1-0527d78fc942', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d3702498-6e99-49db-bc99-4d978916c2ab, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], logical_port=1bb80b61-0282-441f-9bd2-9938bad211d8) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 03:04:24 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:04:24.733 143965 INFO neutron.agent.ovn.metadata.agent [-] Port 1bb80b61-0282-441f-9bd2-9938bad211d8 in datapath 731d5525-647e-47b4-9d5e-931670d91230 unbound from our chassis#033[00m
Dec  6 03:04:24 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:04:24.734 143965 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 731d5525-647e-47b4-9d5e-931670d91230, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec  6 03:04:24 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:04:24.735 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[18b69451-5dfb-4684-b70f-cbf450119615]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:04:24 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:04:24.736 143965 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-731d5525-647e-47b4-9d5e-931670d91230 namespace which is not needed anymore#033[00m
Dec  6 03:04:24 np0005548731 nova_compute[232433]: 2025-12-06 08:04:24.745 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:04:24 np0005548731 systemd[1]: machine-qemu\x2d95\x2dinstance\x2d000000ba.scope: Deactivated successfully.
Dec  6 03:04:24 np0005548731 systemd[1]: machine-qemu\x2d95\x2dinstance\x2d000000ba.scope: Consumed 14.687s CPU time.
Dec  6 03:04:24 np0005548731 systemd-machined[195355]: Machine qemu-95-instance-000000ba terminated.
Dec  6 03:04:24 np0005548731 neutron-haproxy-ovnmeta-731d5525-647e-47b4-9d5e-931670d91230[323085]: [NOTICE]   (323089) : haproxy version is 2.8.14-c23fe91
Dec  6 03:04:24 np0005548731 neutron-haproxy-ovnmeta-731d5525-647e-47b4-9d5e-931670d91230[323085]: [NOTICE]   (323089) : path to executable is /usr/sbin/haproxy
Dec  6 03:04:24 np0005548731 neutron-haproxy-ovnmeta-731d5525-647e-47b4-9d5e-931670d91230[323085]: [WARNING]  (323089) : Exiting Master process...
Dec  6 03:04:24 np0005548731 neutron-haproxy-ovnmeta-731d5525-647e-47b4-9d5e-931670d91230[323085]: [WARNING]  (323089) : Exiting Master process...
Dec  6 03:04:24 np0005548731 neutron-haproxy-ovnmeta-731d5525-647e-47b4-9d5e-931670d91230[323085]: [ALERT]    (323089) : Current worker (323091) exited with code 143 (Terminated)
Dec  6 03:04:24 np0005548731 neutron-haproxy-ovnmeta-731d5525-647e-47b4-9d5e-931670d91230[323085]: [WARNING]  (323089) : All workers exited. Exiting... (0)
Dec  6 03:04:24 np0005548731 systemd[1]: libpod-01450d91cbd241ee9ca5a17fd4f7e6b18c9dde482c907da1454825c3d9217874.scope: Deactivated successfully.
Dec  6 03:04:24 np0005548731 podman[325457]: 2025-12-06 08:04:24.874824151 +0000 UTC m=+0.043036683 container died 01450d91cbd241ee9ca5a17fd4f7e6b18c9dde482c907da1454825c3d9217874 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-731d5525-647e-47b4-9d5e-931670d91230, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Dec  6 03:04:24 np0005548731 nova_compute[232433]: 2025-12-06 08:04:24.905 232437 INFO nova.virt.libvirt.driver [-] [instance: 5b86240b-8681-4833-a88a-b665a431a2fb] Instance destroyed successfully.#033[00m
Dec  6 03:04:24 np0005548731 nova_compute[232433]: 2025-12-06 08:04:24.905 232437 DEBUG nova.objects.instance [None req-2d7a6a50-4bce-45ee-8760-3102732159af d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Lazy-loading 'resources' on Instance uuid 5b86240b-8681-4833-a88a-b665a431a2fb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 03:04:24 np0005548731 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-01450d91cbd241ee9ca5a17fd4f7e6b18c9dde482c907da1454825c3d9217874-userdata-shm.mount: Deactivated successfully.
Dec  6 03:04:24 np0005548731 systemd[1]: var-lib-containers-storage-overlay-b2af0f4b0e521e6b450157164f26e222f9cb8d7895a28ae3a7b05058266d0c42-merged.mount: Deactivated successfully.
Dec  6 03:04:24 np0005548731 podman[325457]: 2025-12-06 08:04:24.916975522 +0000 UTC m=+0.085188054 container cleanup 01450d91cbd241ee9ca5a17fd4f7e6b18c9dde482c907da1454825c3d9217874 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-731d5525-647e-47b4-9d5e-931670d91230, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec  6 03:04:24 np0005548731 nova_compute[232433]: 2025-12-06 08:04:24.920 232437 DEBUG nova.virt.libvirt.vif [None req-2d7a6a50-4bce-45ee-8760-3102732159af d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T08:03:48Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-344208075',display_name='tempest-TestNetworkBasicOps-server-344208075',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-344208075',id=186,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDSpLe9wt3iazbueNL3t1WWFE6UXiHr9wL+zsYKFeYZoxUorBUetaDQvQ0p5TQHThzBBQCIHxwdWWs9lpd08ISgAxIVbg0B4LPEH84xZMbpS+UdtxXt7FeNhtMXABPa5jQ==',key_name='tempest-TestNetworkBasicOps-629546403',keypairs=<?>,launch_index=0,launched_at=2025-12-06T08:04:00Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='f4735a799c84437b9dd4ea8778ad2fbb',ramdisk_id='',reservation_id='r-itjq9ogk',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1435471576',owner_user_name='tempest-TestNetworkBasicOps-1435471576-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T08:04:00Z,user_data=None,user_id='d5359905348247d0b9b5b95982e890bb',uuid=5b86240b-8681-4833-a88a-b665a431a2fb,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "1bb80b61-0282-441f-9bd2-9938bad211d8", "address": "fa:16:3e:b7:1d:27", "network": {"id": "731d5525-647e-47b4-9d5e-931670d91230", "bridge": "br-int", "label": "tempest-network-smoke--921439422", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.207", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f4735a799c84437b9dd4ea8778ad2fbb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1bb80b61-02", "ovs_interfaceid": "1bb80b61-0282-441f-9bd2-9938bad211d8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec  6 03:04:24 np0005548731 nova_compute[232433]: 2025-12-06 08:04:24.920 232437 DEBUG nova.network.os_vif_util [None req-2d7a6a50-4bce-45ee-8760-3102732159af d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Converting VIF {"id": "1bb80b61-0282-441f-9bd2-9938bad211d8", "address": "fa:16:3e:b7:1d:27", "network": {"id": "731d5525-647e-47b4-9d5e-931670d91230", "bridge": "br-int", "label": "tempest-network-smoke--921439422", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.207", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f4735a799c84437b9dd4ea8778ad2fbb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1bb80b61-02", "ovs_interfaceid": "1bb80b61-0282-441f-9bd2-9938bad211d8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 03:04:24 np0005548731 nova_compute[232433]: 2025-12-06 08:04:24.921 232437 DEBUG nova.network.os_vif_util [None req-2d7a6a50-4bce-45ee-8760-3102732159af d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:b7:1d:27,bridge_name='br-int',has_traffic_filtering=True,id=1bb80b61-0282-441f-9bd2-9938bad211d8,network=Network(731d5525-647e-47b4-9d5e-931670d91230),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1bb80b61-02') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 03:04:24 np0005548731 nova_compute[232433]: 2025-12-06 08:04:24.921 232437 DEBUG os_vif [None req-2d7a6a50-4bce-45ee-8760-3102732159af d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:b7:1d:27,bridge_name='br-int',has_traffic_filtering=True,id=1bb80b61-0282-441f-9bd2-9938bad211d8,network=Network(731d5525-647e-47b4-9d5e-931670d91230),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1bb80b61-02') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec  6 03:04:24 np0005548731 nova_compute[232433]: 2025-12-06 08:04:24.923 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:04:24 np0005548731 systemd[1]: libpod-conmon-01450d91cbd241ee9ca5a17fd4f7e6b18c9dde482c907da1454825c3d9217874.scope: Deactivated successfully.
Dec  6 03:04:24 np0005548731 nova_compute[232433]: 2025-12-06 08:04:24.923 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1bb80b61-02, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 03:04:24 np0005548731 nova_compute[232433]: 2025-12-06 08:04:24.925 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:04:24 np0005548731 nova_compute[232433]: 2025-12-06 08:04:24.927 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  6 03:04:24 np0005548731 nova_compute[232433]: 2025-12-06 08:04:24.930 232437 INFO os_vif [None req-2d7a6a50-4bce-45ee-8760-3102732159af d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:b7:1d:27,bridge_name='br-int',has_traffic_filtering=True,id=1bb80b61-0282-441f-9bd2-9938bad211d8,network=Network(731d5525-647e-47b4-9d5e-931670d91230),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1bb80b61-02')#033[00m
Dec  6 03:04:24 np0005548731 podman[325500]: 2025-12-06 08:04:24.979640774 +0000 UTC m=+0.040771587 container remove 01450d91cbd241ee9ca5a17fd4f7e6b18c9dde482c907da1454825c3d9217874 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-731d5525-647e-47b4-9d5e-931670d91230, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true)
Dec  6 03:04:24 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:04:24.985 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[317152b0-a30d-4d6f-8eb4-cffb4c5b29a6]: (4, ('Sat Dec  6 08:04:24 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-731d5525-647e-47b4-9d5e-931670d91230 (01450d91cbd241ee9ca5a17fd4f7e6b18c9dde482c907da1454825c3d9217874)\n01450d91cbd241ee9ca5a17fd4f7e6b18c9dde482c907da1454825c3d9217874\nSat Dec  6 08:04:24 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-731d5525-647e-47b4-9d5e-931670d91230 (01450d91cbd241ee9ca5a17fd4f7e6b18c9dde482c907da1454825c3d9217874)\n01450d91cbd241ee9ca5a17fd4f7e6b18c9dde482c907da1454825c3d9217874\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:04:24 np0005548731 nova_compute[232433]: 2025-12-06 08:04:24.987 232437 DEBUG nova.compute.manager [req-42be166b-7fc8-4f71-9fbc-9996df7648dc req-5df6dcf4-172a-46ba-9eb5-67b7bef80cce 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 5b86240b-8681-4833-a88a-b665a431a2fb] Received event network-vif-unplugged-1bb80b61-0282-441f-9bd2-9938bad211d8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 03:04:24 np0005548731 nova_compute[232433]: 2025-12-06 08:04:24.987 232437 DEBUG oslo_concurrency.lockutils [req-42be166b-7fc8-4f71-9fbc-9996df7648dc req-5df6dcf4-172a-46ba-9eb5-67b7bef80cce 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "5b86240b-8681-4833-a88a-b665a431a2fb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:04:24 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:04:24.987 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[657d7822-b5c2-4ab9-a000-9147cf869e9b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:04:24 np0005548731 nova_compute[232433]: 2025-12-06 08:04:24.988 232437 DEBUG oslo_concurrency.lockutils [req-42be166b-7fc8-4f71-9fbc-9996df7648dc req-5df6dcf4-172a-46ba-9eb5-67b7bef80cce 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "5b86240b-8681-4833-a88a-b665a431a2fb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:04:24 np0005548731 nova_compute[232433]: 2025-12-06 08:04:24.988 232437 DEBUG oslo_concurrency.lockutils [req-42be166b-7fc8-4f71-9fbc-9996df7648dc req-5df6dcf4-172a-46ba-9eb5-67b7bef80cce 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "5b86240b-8681-4833-a88a-b665a431a2fb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:04:24 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:04:24.988 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap731d5525-60, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 03:04:24 np0005548731 nova_compute[232433]: 2025-12-06 08:04:24.988 232437 DEBUG nova.compute.manager [req-42be166b-7fc8-4f71-9fbc-9996df7648dc req-5df6dcf4-172a-46ba-9eb5-67b7bef80cce 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 5b86240b-8681-4833-a88a-b665a431a2fb] No waiting events found dispatching network-vif-unplugged-1bb80b61-0282-441f-9bd2-9938bad211d8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 03:04:24 np0005548731 nova_compute[232433]: 2025-12-06 08:04:24.988 232437 DEBUG nova.compute.manager [req-42be166b-7fc8-4f71-9fbc-9996df7648dc req-5df6dcf4-172a-46ba-9eb5-67b7bef80cce 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 5b86240b-8681-4833-a88a-b665a431a2fb] Received event network-vif-unplugged-1bb80b61-0282-441f-9bd2-9938bad211d8 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Dec  6 03:04:24 np0005548731 nova_compute[232433]: 2025-12-06 08:04:24.990 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:04:24 np0005548731 kernel: tap731d5525-60: left promiscuous mode
Dec  6 03:04:25 np0005548731 nova_compute[232433]: 2025-12-06 08:04:25.003 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:04:25 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:04:25.005 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[a8b638f3-186a-4eb0-bfda-bd25a610ce31]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:04:25 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:04:25.022 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[eaabf6f8-f12f-489f-9db8-12b3fbf9eecc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:04:25 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:04:25.024 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[702dfcd9-1fd3-477d-9ce0-89cdf7010c6a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:04:25 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:04:25.040 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[caecfada-b13f-4867-82a1-f9327bff966f]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 850170, 'reachable_time': 44999, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 325531, 'error': None, 'target': 'ovnmeta-731d5525-647e-47b4-9d5e-931670d91230', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:04:25 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:04:25.042 144079 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-731d5525-647e-47b4-9d5e-931670d91230 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Dec  6 03:04:25 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:04:25.042 144079 DEBUG oslo.privsep.daemon [-] privsep: reply[63fa5486-038d-4e6f-9794-a8ad5b3687df]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:04:25 np0005548731 systemd[1]: run-netns-ovnmeta\x2d731d5525\x2d647e\x2d47b4\x2d9d5e\x2d931670d91230.mount: Deactivated successfully.
Dec  6 03:04:25 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:04:25 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:04:25 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:04:25.104 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:04:25 np0005548731 nova_compute[232433]: 2025-12-06 08:04:25.341 232437 INFO nova.virt.libvirt.driver [None req-2d7a6a50-4bce-45ee-8760-3102732159af d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 5b86240b-8681-4833-a88a-b665a431a2fb] Deleting instance files /var/lib/nova/instances/5b86240b-8681-4833-a88a-b665a431a2fb_del#033[00m
Dec  6 03:04:25 np0005548731 nova_compute[232433]: 2025-12-06 08:04:25.341 232437 INFO nova.virt.libvirt.driver [None req-2d7a6a50-4bce-45ee-8760-3102732159af d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 5b86240b-8681-4833-a88a-b665a431a2fb] Deletion of /var/lib/nova/instances/5b86240b-8681-4833-a88a-b665a431a2fb_del complete#033[00m
Dec  6 03:04:25 np0005548731 nova_compute[232433]: 2025-12-06 08:04:25.445 232437 INFO nova.compute.manager [None req-2d7a6a50-4bce-45ee-8760-3102732159af d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 5b86240b-8681-4833-a88a-b665a431a2fb] Took 0.78 seconds to destroy the instance on the hypervisor.#033[00m
Dec  6 03:04:25 np0005548731 nova_compute[232433]: 2025-12-06 08:04:25.446 232437 DEBUG oslo.service.loopingcall [None req-2d7a6a50-4bce-45ee-8760-3102732159af d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Dec  6 03:04:25 np0005548731 nova_compute[232433]: 2025-12-06 08:04:25.446 232437 DEBUG nova.compute.manager [-] [instance: 5b86240b-8681-4833-a88a-b665a431a2fb] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Dec  6 03:04:25 np0005548731 nova_compute[232433]: 2025-12-06 08:04:25.446 232437 DEBUG nova.network.neutron [-] [instance: 5b86240b-8681-4833-a88a-b665a431a2fb] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Dec  6 03:04:25 np0005548731 nova_compute[232433]: 2025-12-06 08:04:25.689 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:04:25 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:04:25 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:04:25 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:04:25.767 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:04:25 np0005548731 nova_compute[232433]: 2025-12-06 08:04:25.894 232437 DEBUG nova.network.neutron [req-fa228a0b-a11a-4949-89ca-67ff6af73659 req-c001a40e-8c27-4bde-b791-da980742436a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 5b86240b-8681-4833-a88a-b665a431a2fb] Updated VIF entry in instance network info cache for port 1bb80b61-0282-441f-9bd2-9938bad211d8. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  6 03:04:25 np0005548731 nova_compute[232433]: 2025-12-06 08:04:25.894 232437 DEBUG nova.network.neutron [req-fa228a0b-a11a-4949-89ca-67ff6af73659 req-c001a40e-8c27-4bde-b791-da980742436a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 5b86240b-8681-4833-a88a-b665a431a2fb] Updating instance_info_cache with network_info: [{"id": "1bb80b61-0282-441f-9bd2-9938bad211d8", "address": "fa:16:3e:b7:1d:27", "network": {"id": "731d5525-647e-47b4-9d5e-931670d91230", "bridge": "br-int", "label": "tempest-network-smoke--921439422", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f4735a799c84437b9dd4ea8778ad2fbb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1bb80b61-02", "ovs_interfaceid": "1bb80b61-0282-441f-9bd2-9938bad211d8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 03:04:25 np0005548731 nova_compute[232433]: 2025-12-06 08:04:25.922 232437 DEBUG oslo_concurrency.lockutils [req-fa228a0b-a11a-4949-89ca-67ff6af73659 req-c001a40e-8c27-4bde-b791-da980742436a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-5b86240b-8681-4833-a88a-b665a431a2fb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 03:04:26 np0005548731 nova_compute[232433]: 2025-12-06 08:04:26.620 232437 DEBUG nova.network.neutron [-] [instance: 5b86240b-8681-4833-a88a-b665a431a2fb] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 03:04:26 np0005548731 nova_compute[232433]: 2025-12-06 08:04:26.639 232437 INFO nova.compute.manager [-] [instance: 5b86240b-8681-4833-a88a-b665a431a2fb] Took 1.19 seconds to deallocate network for instance.#033[00m
Dec  6 03:04:26 np0005548731 nova_compute[232433]: 2025-12-06 08:04:26.682 232437 DEBUG oslo_concurrency.lockutils [None req-2d7a6a50-4bce-45ee-8760-3102732159af d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:04:26 np0005548731 nova_compute[232433]: 2025-12-06 08:04:26.683 232437 DEBUG oslo_concurrency.lockutils [None req-2d7a6a50-4bce-45ee-8760-3102732159af d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:04:26 np0005548731 nova_compute[232433]: 2025-12-06 08:04:26.703 232437 DEBUG nova.compute.manager [req-8201e9d8-fb5f-4bbd-802e-b85886869ede req-1a2f35a6-d40f-4f13-86e9-a25d3f652f48 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 5b86240b-8681-4833-a88a-b665a431a2fb] Received event network-vif-deleted-1bb80b61-0282-441f-9bd2-9938bad211d8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 03:04:26 np0005548731 nova_compute[232433]: 2025-12-06 08:04:26.727 232437 DEBUG oslo_concurrency.processutils [None req-2d7a6a50-4bce-45ee-8760-3102732159af d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 03:04:27 np0005548731 nova_compute[232433]: 2025-12-06 08:04:27.091 232437 DEBUG nova.compute.manager [req-96a69a67-49d8-43ba-9473-32ab34508a14 req-9c275c43-130a-4be6-8a85-fbee2e3d47f3 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 5b86240b-8681-4833-a88a-b665a431a2fb] Received event network-vif-plugged-1bb80b61-0282-441f-9bd2-9938bad211d8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 03:04:27 np0005548731 nova_compute[232433]: 2025-12-06 08:04:27.092 232437 DEBUG oslo_concurrency.lockutils [req-96a69a67-49d8-43ba-9473-32ab34508a14 req-9c275c43-130a-4be6-8a85-fbee2e3d47f3 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "5b86240b-8681-4833-a88a-b665a431a2fb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:04:27 np0005548731 nova_compute[232433]: 2025-12-06 08:04:27.092 232437 DEBUG oslo_concurrency.lockutils [req-96a69a67-49d8-43ba-9473-32ab34508a14 req-9c275c43-130a-4be6-8a85-fbee2e3d47f3 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "5b86240b-8681-4833-a88a-b665a431a2fb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:04:27 np0005548731 nova_compute[232433]: 2025-12-06 08:04:27.092 232437 DEBUG oslo_concurrency.lockutils [req-96a69a67-49d8-43ba-9473-32ab34508a14 req-9c275c43-130a-4be6-8a85-fbee2e3d47f3 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "5b86240b-8681-4833-a88a-b665a431a2fb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:04:27 np0005548731 nova_compute[232433]: 2025-12-06 08:04:27.093 232437 DEBUG nova.compute.manager [req-96a69a67-49d8-43ba-9473-32ab34508a14 req-9c275c43-130a-4be6-8a85-fbee2e3d47f3 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 5b86240b-8681-4833-a88a-b665a431a2fb] No waiting events found dispatching network-vif-plugged-1bb80b61-0282-441f-9bd2-9938bad211d8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 03:04:27 np0005548731 nova_compute[232433]: 2025-12-06 08:04:27.093 232437 WARNING nova.compute.manager [req-96a69a67-49d8-43ba-9473-32ab34508a14 req-9c275c43-130a-4be6-8a85-fbee2e3d47f3 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 5b86240b-8681-4833-a88a-b665a431a2fb] Received unexpected event network-vif-plugged-1bb80b61-0282-441f-9bd2-9938bad211d8 for instance with vm_state deleted and task_state None.#033[00m
Dec  6 03:04:27 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:04:27 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:04:27 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:04:27.107 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:04:27 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 03:04:27 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/848864160' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 03:04:27 np0005548731 nova_compute[232433]: 2025-12-06 08:04:27.164 232437 DEBUG oslo_concurrency.processutils [None req-2d7a6a50-4bce-45ee-8760-3102732159af d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.437s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 03:04:27 np0005548731 nova_compute[232433]: 2025-12-06 08:04:27.172 232437 DEBUG nova.compute.provider_tree [None req-2d7a6a50-4bce-45ee-8760-3102732159af d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Inventory has not changed in ProviderTree for provider: 6d00757a-082f-486d-ae84-869a2ba2e6e7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  6 03:04:27 np0005548731 nova_compute[232433]: 2025-12-06 08:04:27.190 232437 DEBUG nova.scheduler.client.report [None req-2d7a6a50-4bce-45ee-8760-3102732159af d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Inventory has not changed for provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  6 03:04:27 np0005548731 nova_compute[232433]: 2025-12-06 08:04:27.216 232437 DEBUG oslo_concurrency.lockutils [None req-2d7a6a50-4bce-45ee-8760-3102732159af d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.533s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:04:27 np0005548731 nova_compute[232433]: 2025-12-06 08:04:27.241 232437 INFO nova.scheduler.client.report [None req-2d7a6a50-4bce-45ee-8760-3102732159af d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Deleted allocations for instance 5b86240b-8681-4833-a88a-b665a431a2fb#033[00m
Dec  6 03:04:27 np0005548731 nova_compute[232433]: 2025-12-06 08:04:27.312 232437 DEBUG oslo_concurrency.lockutils [None req-2d7a6a50-4bce-45ee-8760-3102732159af d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Lock "5b86240b-8681-4833-a88a-b665a431a2fb" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.649s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:04:27 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:04:27 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 03:04:27 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:04:27.769 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 03:04:29 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:04:29 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:04:29 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:04:29.109 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:04:29 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e413 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:04:29 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:04:29 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:04:29 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:04:29.772 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:04:29 np0005548731 nova_compute[232433]: 2025-12-06 08:04:29.980 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:04:30 np0005548731 nova_compute[232433]: 2025-12-06 08:04:30.692 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:04:31 np0005548731 nova_compute[232433]: 2025-12-06 08:04:31.099 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:04:31 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:04:31 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 03:04:31 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:04:31.113 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 03:04:31 np0005548731 nova_compute[232433]: 2025-12-06 08:04:31.216 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:04:31 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:04:31 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:04:31 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:04:31.774 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:04:33 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:04:33 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:04:33 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:04:33.116 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:04:33 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:04:33 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:04:33 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:04:33.776 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:04:34 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e413 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:04:34 np0005548731 nova_compute[232433]: 2025-12-06 08:04:34.985 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:04:35 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:04:35 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:04:35 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:04:35.119 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:04:35 np0005548731 nova_compute[232433]: 2025-12-06 08:04:35.694 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:04:35 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:04:35 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:04:35 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:04:35.778 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:04:36 np0005548731 nova_compute[232433]: 2025-12-06 08:04:36.138 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:04:36 np0005548731 nova_compute[232433]: 2025-12-06 08:04:36.139 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  6 03:04:36 np0005548731 nova_compute[232433]: 2025-12-06 08:04:36.139 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  6 03:04:36 np0005548731 nova_compute[232433]: 2025-12-06 08:04:36.156 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Dec  6 03:04:37 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:04:37 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:04:37 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:04:37.122 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:04:37 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:04:37 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:04:37 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:04:37.780 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:04:39 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:04:39 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 03:04:39 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:04:39.124 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 03:04:39 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e413 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:04:39 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:04:39 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:04:39 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:04:39.782 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:04:39 np0005548731 nova_compute[232433]: 2025-12-06 08:04:39.904 232437 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765008264.9033012, 5b86240b-8681-4833-a88a-b665a431a2fb => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 03:04:39 np0005548731 nova_compute[232433]: 2025-12-06 08:04:39.904 232437 INFO nova.compute.manager [-] [instance: 5b86240b-8681-4833-a88a-b665a431a2fb] VM Stopped (Lifecycle Event)#033[00m
Dec  6 03:04:39 np0005548731 nova_compute[232433]: 2025-12-06 08:04:39.923 232437 DEBUG nova.compute.manager [None req-db1dcb07-785d-40a4-a3e6-7c6a66daa8d7 - - - - - -] [instance: 5b86240b-8681-4833-a88a-b665a431a2fb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 03:04:39 np0005548731 nova_compute[232433]: 2025-12-06 08:04:39.987 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:04:40 np0005548731 nova_compute[232433]: 2025-12-06 08:04:40.103 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:04:40 np0005548731 nova_compute[232433]: 2025-12-06 08:04:40.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:04:40 np0005548731 nova_compute[232433]: 2025-12-06 08:04:40.104 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  6 03:04:40 np0005548731 nova_compute[232433]: 2025-12-06 08:04:40.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:04:40 np0005548731 nova_compute[232433]: 2025-12-06 08:04:40.697 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:04:41 np0005548731 nova_compute[232433]: 2025-12-06 08:04:41.110 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:04:41 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:04:41 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:04:41 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:04:41.127 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:04:41 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:04:41 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:04:41 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:04:41.784 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:04:43 np0005548731 nova_compute[232433]: 2025-12-06 08:04:43.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:04:43 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:04:43 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:04:43 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:04:43.130 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:04:43 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:04:43 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:04:43 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:04:43.787 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:04:44 np0005548731 nova_compute[232433]: 2025-12-06 08:04:44.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:04:44 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e413 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:04:44 np0005548731 nova_compute[232433]: 2025-12-06 08:04:44.989 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:04:45 np0005548731 nova_compute[232433]: 2025-12-06 08:04:45.103 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:04:45 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:04:45 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:04:45 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:04:45.134 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:04:45 np0005548731 nova_compute[232433]: 2025-12-06 08:04:45.231 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:04:45 np0005548731 nova_compute[232433]: 2025-12-06 08:04:45.231 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:04:45 np0005548731 nova_compute[232433]: 2025-12-06 08:04:45.231 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:04:45 np0005548731 nova_compute[232433]: 2025-12-06 08:04:45.232 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  6 03:04:45 np0005548731 nova_compute[232433]: 2025-12-06 08:04:45.232 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 03:04:45 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 03:04:45 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/826595747' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 03:04:45 np0005548731 nova_compute[232433]: 2025-12-06 08:04:45.686 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.454s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 03:04:45 np0005548731 nova_compute[232433]: 2025-12-06 08:04:45.700 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:04:45 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:04:45 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:04:45 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:04:45.789 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:04:45 np0005548731 nova_compute[232433]: 2025-12-06 08:04:45.860 232437 WARNING nova.virt.libvirt.driver [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  6 03:04:45 np0005548731 nova_compute[232433]: 2025-12-06 08:04:45.861 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4186MB free_disk=20.942890167236328GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  6 03:04:45 np0005548731 nova_compute[232433]: 2025-12-06 08:04:45.861 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:04:45 np0005548731 nova_compute[232433]: 2025-12-06 08:04:45.861 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:04:45 np0005548731 nova_compute[232433]: 2025-12-06 08:04:45.926 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  6 03:04:45 np0005548731 nova_compute[232433]: 2025-12-06 08:04:45.927 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  6 03:04:45 np0005548731 nova_compute[232433]: 2025-12-06 08:04:45.949 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 03:04:46 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 03:04:46 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2339609487' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 03:04:46 np0005548731 nova_compute[232433]: 2025-12-06 08:04:46.410 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.461s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 03:04:46 np0005548731 nova_compute[232433]: 2025-12-06 08:04:46.416 232437 DEBUG nova.compute.provider_tree [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Inventory has not changed in ProviderTree for provider: 6d00757a-082f-486d-ae84-869a2ba2e6e7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  6 03:04:46 np0005548731 nova_compute[232433]: 2025-12-06 08:04:46.482 232437 DEBUG nova.scheduler.client.report [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Inventory has not changed for provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  6 03:04:46 np0005548731 nova_compute[232433]: 2025-12-06 08:04:46.512 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  6 03:04:46 np0005548731 nova_compute[232433]: 2025-12-06 08:04:46.513 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.651s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:04:47 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:04:47 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:04:47 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:04:47.137 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:04:47 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:04:47 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:04:47 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:04:47.791 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:04:47 np0005548731 podman[325667]: 2025-12-06 08:04:47.896032442 +0000 UTC m=+0.056102292 container health_status 26c2ce9bdb83c6db5ccad7f5b2a7cb4e9354ab0235ad2f942ea42c8feda2142b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Dec  6 03:04:47 np0005548731 podman[325669]: 2025-12-06 08:04:47.898328639 +0000 UTC m=+0.054377321 container health_status ae4fd08cdec31d6ae2a6b91ffff7deb6ca5a8375cb52ee4b685cc6f25796824e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=multipathd, io.buildah.version=1.41.3)
Dec  6 03:04:47 np0005548731 podman[325668]: 2025-12-06 08:04:47.924847167 +0000 UTC m=+0.083954914 container health_status a118c3becf56cfbcae208065771ebf10a65162bb7b96389971293e534823fb78 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, managed_by=edpm_ansible, tcib_managed=true)
Dec  6 03:04:47 np0005548731 nova_compute[232433]: 2025-12-06 08:04:47.982 232437 DEBUG oslo_concurrency.lockutils [None req-53cf589f-8131-41a2-8be1-a2744b87c7c5 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Acquiring lock "53cf05eb-a0e9-4817-a441-cbba2c770c4b" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:04:47 np0005548731 nova_compute[232433]: 2025-12-06 08:04:47.983 232437 DEBUG oslo_concurrency.lockutils [None req-53cf589f-8131-41a2-8be1-a2744b87c7c5 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Lock "53cf05eb-a0e9-4817-a441-cbba2c770c4b" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:04:48 np0005548731 nova_compute[232433]: 2025-12-06 08:04:48.571 232437 DEBUG nova.compute.manager [None req-53cf589f-8131-41a2-8be1-a2744b87c7c5 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 53cf05eb-a0e9-4817-a441-cbba2c770c4b] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Dec  6 03:04:48 np0005548731 nova_compute[232433]: 2025-12-06 08:04:48.640 232437 DEBUG oslo_concurrency.lockutils [None req-53cf589f-8131-41a2-8be1-a2744b87c7c5 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:04:48 np0005548731 nova_compute[232433]: 2025-12-06 08:04:48.640 232437 DEBUG oslo_concurrency.lockutils [None req-53cf589f-8131-41a2-8be1-a2744b87c7c5 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:04:48 np0005548731 nova_compute[232433]: 2025-12-06 08:04:48.646 232437 DEBUG nova.virt.hardware [None req-53cf589f-8131-41a2-8be1-a2744b87c7c5 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Dec  6 03:04:48 np0005548731 nova_compute[232433]: 2025-12-06 08:04:48.646 232437 INFO nova.compute.claims [None req-53cf589f-8131-41a2-8be1-a2744b87c7c5 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 53cf05eb-a0e9-4817-a441-cbba2c770c4b] Claim successful on node compute-2.ctlplane.example.com#033[00m
Dec  6 03:04:49 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:04:49 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:04:49 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:04:49.140 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:04:49 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e413 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:04:49 np0005548731 nova_compute[232433]: 2025-12-06 08:04:49.588 232437 DEBUG oslo_concurrency.processutils [None req-53cf589f-8131-41a2-8be1-a2744b87c7c5 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 03:04:49 np0005548731 nova_compute[232433]: 2025-12-06 08:04:49.618 232437 DEBUG nova.virt.libvirt.driver [None req-214681bc-aa39-4a6d-a65c-40bda5711e05 1bdbfd9a9c034d4baf0368c23697a002 0280d2f586294ccf97547f8bc41590f8 - - default default] [instance: 2ce29812-b64c-4801-a37b-68c55429b70c] Creating tmpfile /var/lib/nova/instances/tmpddofkvg0 to notify to other compute nodes that they should mount the same storage. _create_shared_storage_test_file /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10041#033[00m
Dec  6 03:04:49 np0005548731 nova_compute[232433]: 2025-12-06 08:04:49.743 232437 DEBUG nova.compute.manager [None req-214681bc-aa39-4a6d-a65c-40bda5711e05 1bdbfd9a9c034d4baf0368c23697a002 0280d2f586294ccf97547f8bc41590f8 - - default default] destination check data is LibvirtLiveMigrateData(bdms=<?>,block_migration=False,disk_available_mb=18432,disk_over_commit=False,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpddofkvg0',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path=<?>,is_shared_block_storage=<?>,is_shared_instance_path=<?>,is_volume_backed=<?>,migration=<?>,old_vol_attachment_ids=<?>,serial_listen_addr=None,serial_listen_ports=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) check_can_live_migrate_destination /usr/lib/python3.9/site-packages/nova/compute/manager.py:8476#033[00m
Dec  6 03:04:49 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:04:49 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:04:49 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:04:49.793 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:04:49 np0005548731 nova_compute[232433]: 2025-12-06 08:04:49.992 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:04:49 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 03:04:49 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/4333110' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 03:04:50 np0005548731 nova_compute[232433]: 2025-12-06 08:04:50.012 232437 DEBUG oslo_concurrency.processutils [None req-53cf589f-8131-41a2-8be1-a2744b87c7c5 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.424s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 03:04:50 np0005548731 nova_compute[232433]: 2025-12-06 08:04:50.017 232437 DEBUG nova.compute.provider_tree [None req-53cf589f-8131-41a2-8be1-a2744b87c7c5 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Inventory has not changed in ProviderTree for provider: 6d00757a-082f-486d-ae84-869a2ba2e6e7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  6 03:04:50 np0005548731 nova_compute[232433]: 2025-12-06 08:04:50.041 232437 DEBUG nova.scheduler.client.report [None req-53cf589f-8131-41a2-8be1-a2744b87c7c5 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Inventory has not changed for provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  6 03:04:50 np0005548731 nova_compute[232433]: 2025-12-06 08:04:50.088 232437 DEBUG oslo_concurrency.lockutils [None req-53cf589f-8131-41a2-8be1-a2744b87c7c5 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.448s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:04:50 np0005548731 nova_compute[232433]: 2025-12-06 08:04:50.089 232437 DEBUG nova.compute.manager [None req-53cf589f-8131-41a2-8be1-a2744b87c7c5 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 53cf05eb-a0e9-4817-a441-cbba2c770c4b] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Dec  6 03:04:50 np0005548731 nova_compute[232433]: 2025-12-06 08:04:50.149 232437 DEBUG nova.compute.manager [None req-53cf589f-8131-41a2-8be1-a2744b87c7c5 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 53cf05eb-a0e9-4817-a441-cbba2c770c4b] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Dec  6 03:04:50 np0005548731 nova_compute[232433]: 2025-12-06 08:04:50.149 232437 DEBUG nova.network.neutron [None req-53cf589f-8131-41a2-8be1-a2744b87c7c5 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 53cf05eb-a0e9-4817-a441-cbba2c770c4b] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Dec  6 03:04:50 np0005548731 nova_compute[232433]: 2025-12-06 08:04:50.203 232437 INFO nova.virt.libvirt.driver [None req-53cf589f-8131-41a2-8be1-a2744b87c7c5 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 53cf05eb-a0e9-4817-a441-cbba2c770c4b] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Dec  6 03:04:50 np0005548731 nova_compute[232433]: 2025-12-06 08:04:50.219 232437 DEBUG nova.compute.manager [None req-53cf589f-8131-41a2-8be1-a2744b87c7c5 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 53cf05eb-a0e9-4817-a441-cbba2c770c4b] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Dec  6 03:04:50 np0005548731 nova_compute[232433]: 2025-12-06 08:04:50.305 232437 DEBUG nova.compute.manager [None req-53cf589f-8131-41a2-8be1-a2744b87c7c5 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 53cf05eb-a0e9-4817-a441-cbba2c770c4b] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Dec  6 03:04:50 np0005548731 nova_compute[232433]: 2025-12-06 08:04:50.307 232437 DEBUG nova.virt.libvirt.driver [None req-53cf589f-8131-41a2-8be1-a2744b87c7c5 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 53cf05eb-a0e9-4817-a441-cbba2c770c4b] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Dec  6 03:04:50 np0005548731 nova_compute[232433]: 2025-12-06 08:04:50.308 232437 INFO nova.virt.libvirt.driver [None req-53cf589f-8131-41a2-8be1-a2744b87c7c5 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 53cf05eb-a0e9-4817-a441-cbba2c770c4b] Creating image(s)#033[00m
Dec  6 03:04:50 np0005548731 nova_compute[232433]: 2025-12-06 08:04:50.341 232437 DEBUG nova.storage.rbd_utils [None req-53cf589f-8131-41a2-8be1-a2744b87c7c5 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] rbd image 53cf05eb-a0e9-4817-a441-cbba2c770c4b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 03:04:50 np0005548731 nova_compute[232433]: 2025-12-06 08:04:50.378 232437 DEBUG nova.storage.rbd_utils [None req-53cf589f-8131-41a2-8be1-a2744b87c7c5 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] rbd image 53cf05eb-a0e9-4817-a441-cbba2c770c4b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 03:04:50 np0005548731 nova_compute[232433]: 2025-12-06 08:04:50.403 232437 DEBUG nova.storage.rbd_utils [None req-53cf589f-8131-41a2-8be1-a2744b87c7c5 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] rbd image 53cf05eb-a0e9-4817-a441-cbba2c770c4b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 03:04:50 np0005548731 nova_compute[232433]: 2025-12-06 08:04:50.408 232437 DEBUG oslo_concurrency.processutils [None req-53cf589f-8131-41a2-8be1-a2744b87c7c5 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 03:04:50 np0005548731 nova_compute[232433]: 2025-12-06 08:04:50.512 232437 DEBUG oslo_concurrency.processutils [None req-53cf589f-8131-41a2-8be1-a2744b87c7c5 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json" returned: 0 in 0.104s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 03:04:50 np0005548731 nova_compute[232433]: 2025-12-06 08:04:50.514 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:04:50 np0005548731 nova_compute[232433]: 2025-12-06 08:04:50.514 232437 DEBUG oslo_concurrency.lockutils [None req-53cf589f-8131-41a2-8be1-a2744b87c7c5 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Acquiring lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:04:50 np0005548731 nova_compute[232433]: 2025-12-06 08:04:50.515 232437 DEBUG oslo_concurrency.lockutils [None req-53cf589f-8131-41a2-8be1-a2744b87c7c5 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:04:50 np0005548731 nova_compute[232433]: 2025-12-06 08:04:50.515 232437 DEBUG oslo_concurrency.lockutils [None req-53cf589f-8131-41a2-8be1-a2744b87c7c5 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:04:50 np0005548731 nova_compute[232433]: 2025-12-06 08:04:50.540 232437 DEBUG nova.storage.rbd_utils [None req-53cf589f-8131-41a2-8be1-a2744b87c7c5 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] rbd image 53cf05eb-a0e9-4817-a441-cbba2c770c4b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 03:04:50 np0005548731 nova_compute[232433]: 2025-12-06 08:04:50.543 232437 DEBUG oslo_concurrency.processutils [None req-53cf589f-8131-41a2-8be1-a2744b87c7c5 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef 53cf05eb-a0e9-4817-a441-cbba2c770c4b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 03:04:50 np0005548731 nova_compute[232433]: 2025-12-06 08:04:50.569 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:04:50 np0005548731 nova_compute[232433]: 2025-12-06 08:04:50.667 232437 DEBUG nova.policy [None req-53cf589f-8131-41a2-8be1-a2744b87c7c5 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'd5359905348247d0b9b5b95982e890bb', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'f4735a799c84437b9dd4ea8778ad2fbb', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Dec  6 03:04:50 np0005548731 nova_compute[232433]: 2025-12-06 08:04:50.701 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:04:50 np0005548731 nova_compute[232433]: 2025-12-06 08:04:50.806 232437 DEBUG oslo_concurrency.processutils [None req-53cf589f-8131-41a2-8be1-a2744b87c7c5 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef 53cf05eb-a0e9-4817-a441-cbba2c770c4b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.262s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 03:04:50 np0005548731 nova_compute[232433]: 2025-12-06 08:04:50.863 232437 DEBUG nova.storage.rbd_utils [None req-53cf589f-8131-41a2-8be1-a2744b87c7c5 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] resizing rbd image 53cf05eb-a0e9-4817-a441-cbba2c770c4b_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Dec  6 03:04:50 np0005548731 nova_compute[232433]: 2025-12-06 08:04:50.961 232437 DEBUG nova.objects.instance [None req-53cf589f-8131-41a2-8be1-a2744b87c7c5 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Lazy-loading 'migration_context' on Instance uuid 53cf05eb-a0e9-4817-a441-cbba2c770c4b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 03:04:50 np0005548731 nova_compute[232433]: 2025-12-06 08:04:50.974 232437 DEBUG nova.virt.libvirt.driver [None req-53cf589f-8131-41a2-8be1-a2744b87c7c5 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 53cf05eb-a0e9-4817-a441-cbba2c770c4b] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Dec  6 03:04:50 np0005548731 nova_compute[232433]: 2025-12-06 08:04:50.975 232437 DEBUG nova.virt.libvirt.driver [None req-53cf589f-8131-41a2-8be1-a2744b87c7c5 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 53cf05eb-a0e9-4817-a441-cbba2c770c4b] Ensure instance console log exists: /var/lib/nova/instances/53cf05eb-a0e9-4817-a441-cbba2c770c4b/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec  6 03:04:50 np0005548731 nova_compute[232433]: 2025-12-06 08:04:50.975 232437 DEBUG oslo_concurrency.lockutils [None req-53cf589f-8131-41a2-8be1-a2744b87c7c5 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:04:50 np0005548731 nova_compute[232433]: 2025-12-06 08:04:50.976 232437 DEBUG oslo_concurrency.lockutils [None req-53cf589f-8131-41a2-8be1-a2744b87c7c5 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:04:50 np0005548731 nova_compute[232433]: 2025-12-06 08:04:50.976 232437 DEBUG oslo_concurrency.lockutils [None req-53cf589f-8131-41a2-8be1-a2744b87c7c5 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:04:51 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:04:51 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:04:51 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:04:51.143 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:04:51 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:04:51 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:04:51 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:04:51.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:04:51 np0005548731 nova_compute[232433]: 2025-12-06 08:04:51.935 232437 DEBUG nova.compute.manager [None req-214681bc-aa39-4a6d-a65c-40bda5711e05 1bdbfd9a9c034d4baf0368c23697a002 0280d2f586294ccf97547f8bc41590f8 - - default default] pre_live_migration data is LibvirtLiveMigrateData(bdms=<?>,block_migration=False,disk_available_mb=18432,disk_over_commit=False,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpddofkvg0',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='2ce29812-b64c-4801-a37b-68c55429b70c',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=False,migration=<?>,old_vol_attachment_ids=<?>,serial_listen_addr=None,serial_listen_ports=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) pre_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:8604#033[00m
Dec  6 03:04:51 np0005548731 nova_compute[232433]: 2025-12-06 08:04:51.970 232437 DEBUG oslo_concurrency.lockutils [None req-214681bc-aa39-4a6d-a65c-40bda5711e05 1bdbfd9a9c034d4baf0368c23697a002 0280d2f586294ccf97547f8bc41590f8 - - default default] Acquiring lock "refresh_cache-2ce29812-b64c-4801-a37b-68c55429b70c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 03:04:51 np0005548731 nova_compute[232433]: 2025-12-06 08:04:51.971 232437 DEBUG oslo_concurrency.lockutils [None req-214681bc-aa39-4a6d-a65c-40bda5711e05 1bdbfd9a9c034d4baf0368c23697a002 0280d2f586294ccf97547f8bc41590f8 - - default default] Acquired lock "refresh_cache-2ce29812-b64c-4801-a37b-68c55429b70c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 03:04:51 np0005548731 nova_compute[232433]: 2025-12-06 08:04:51.971 232437 DEBUG nova.network.neutron [None req-214681bc-aa39-4a6d-a65c-40bda5711e05 1bdbfd9a9c034d4baf0368c23697a002 0280d2f586294ccf97547f8bc41590f8 - - default default] [instance: 2ce29812-b64c-4801-a37b-68c55429b70c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec  6 03:04:53 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:04:53 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:04:53 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:04:53.146 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:04:53 np0005548731 nova_compute[232433]: 2025-12-06 08:04:53.190 232437 DEBUG nova.network.neutron [None req-53cf589f-8131-41a2-8be1-a2744b87c7c5 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 53cf05eb-a0e9-4817-a441-cbba2c770c4b] Successfully created port: 43e1b7c7-10b3-4138-8e30-33110bd9f83e _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Dec  6 03:04:53 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:04:53 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:04:53 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:04:53.798 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:04:54 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e413 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:04:54 np0005548731 nova_compute[232433]: 2025-12-06 08:04:54.450 232437 DEBUG nova.network.neutron [None req-53cf589f-8131-41a2-8be1-a2744b87c7c5 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 53cf05eb-a0e9-4817-a441-cbba2c770c4b] Successfully updated port: 43e1b7c7-10b3-4138-8e30-33110bd9f83e _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Dec  6 03:04:54 np0005548731 nova_compute[232433]: 2025-12-06 08:04:54.466 232437 DEBUG oslo_concurrency.lockutils [None req-53cf589f-8131-41a2-8be1-a2744b87c7c5 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Acquiring lock "refresh_cache-53cf05eb-a0e9-4817-a441-cbba2c770c4b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 03:04:54 np0005548731 nova_compute[232433]: 2025-12-06 08:04:54.466 232437 DEBUG oslo_concurrency.lockutils [None req-53cf589f-8131-41a2-8be1-a2744b87c7c5 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Acquired lock "refresh_cache-53cf05eb-a0e9-4817-a441-cbba2c770c4b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 03:04:54 np0005548731 nova_compute[232433]: 2025-12-06 08:04:54.466 232437 DEBUG nova.network.neutron [None req-53cf589f-8131-41a2-8be1-a2744b87c7c5 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 53cf05eb-a0e9-4817-a441-cbba2c770c4b] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec  6 03:04:54 np0005548731 nova_compute[232433]: 2025-12-06 08:04:54.478 232437 DEBUG nova.network.neutron [None req-214681bc-aa39-4a6d-a65c-40bda5711e05 1bdbfd9a9c034d4baf0368c23697a002 0280d2f586294ccf97547f8bc41590f8 - - default default] [instance: 2ce29812-b64c-4801-a37b-68c55429b70c] Updating instance_info_cache with network_info: [{"id": "42776015-1d70-4b92-9890-d31aaa444637", "address": "fa:16:3e:ae:d9:3a", "network": {"id": "7ab4eeff-e26c-426f-afdf-0ed982f0262e", "bridge": "br-int", "label": "tempest-network-smoke--49053478", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.241", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fd8e24e430c64364ace789d88a68ba5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap42776015-1d", "ovs_interfaceid": "42776015-1d70-4b92-9890-d31aaa444637", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 03:04:54 np0005548731 nova_compute[232433]: 2025-12-06 08:04:54.504 232437 DEBUG oslo_concurrency.lockutils [None req-214681bc-aa39-4a6d-a65c-40bda5711e05 1bdbfd9a9c034d4baf0368c23697a002 0280d2f586294ccf97547f8bc41590f8 - - default default] Releasing lock "refresh_cache-2ce29812-b64c-4801-a37b-68c55429b70c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 03:04:54 np0005548731 nova_compute[232433]: 2025-12-06 08:04:54.506 232437 DEBUG nova.virt.libvirt.driver [None req-214681bc-aa39-4a6d-a65c-40bda5711e05 1bdbfd9a9c034d4baf0368c23697a002 0280d2f586294ccf97547f8bc41590f8 - - default default] [instance: 2ce29812-b64c-4801-a37b-68c55429b70c] migrate_data in pre_live_migration: LibvirtLiveMigrateData(bdms=<?>,block_migration=False,disk_available_mb=18432,disk_over_commit=False,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpddofkvg0',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='2ce29812-b64c-4801-a37b-68c55429b70c',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=False,migration=<?>,old_vol_attachment_ids={},serial_listen_addr=None,serial_listen_ports=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) pre_live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10827#033[00m
Dec  6 03:04:54 np0005548731 nova_compute[232433]: 2025-12-06 08:04:54.506 232437 DEBUG nova.virt.libvirt.driver [None req-214681bc-aa39-4a6d-a65c-40bda5711e05 1bdbfd9a9c034d4baf0368c23697a002 0280d2f586294ccf97547f8bc41590f8 - - default default] [instance: 2ce29812-b64c-4801-a37b-68c55429b70c] Creating instance directory: /var/lib/nova/instances/2ce29812-b64c-4801-a37b-68c55429b70c pre_live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10840#033[00m
Dec  6 03:04:54 np0005548731 nova_compute[232433]: 2025-12-06 08:04:54.506 232437 DEBUG nova.virt.libvirt.driver [None req-214681bc-aa39-4a6d-a65c-40bda5711e05 1bdbfd9a9c034d4baf0368c23697a002 0280d2f586294ccf97547f8bc41590f8 - - default default] [instance: 2ce29812-b64c-4801-a37b-68c55429b70c] Ensure instance console log exists: /var/lib/nova/instances/2ce29812-b64c-4801-a37b-68c55429b70c/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec  6 03:04:54 np0005548731 nova_compute[232433]: 2025-12-06 08:04:54.507 232437 DEBUG nova.virt.libvirt.driver [None req-214681bc-aa39-4a6d-a65c-40bda5711e05 1bdbfd9a9c034d4baf0368c23697a002 0280d2f586294ccf97547f8bc41590f8 - - default default] [instance: 2ce29812-b64c-4801-a37b-68c55429b70c] Plugging VIFs using destination host port bindings before live migration. _pre_live_migration_plug_vifs /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10794#033[00m
Dec  6 03:04:54 np0005548731 nova_compute[232433]: 2025-12-06 08:04:54.508 232437 DEBUG nova.virt.libvirt.vif [None req-214681bc-aa39-4a6d-a65c-40bda5711e05 1bdbfd9a9c034d4baf0368c23697a002 0280d2f586294ccf97547f8bc41590f8 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T08:04:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1543702869',display_name='tempest-TestNetworkAdvancedServerOps-server-1543702869',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1543702869',id=187,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBInGUhg7nhCoc2vx+Ix7gLWIjZpxEjyqveZeyfMP/1wxX8FSrtE3tQA2JbvpPn3Vva7vIRTnPCXD+7DHbX9YJlXkUS+5x8l7M/agABi3TQb6p6z9n1aAcCS+pz1vzZhCpQ==',key_name='tempest-TestNetworkAdvancedServerOps-1853257958',keypairs=<?>,launch_index=0,launched_at=2025-12-06T08:04:23Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='fd8e24e430c64364ace789d88a68ba5f',ramdisk_id='',reservation_id='r-q1zcoiif',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-1171852383',owner_user_name='tempest-TestNetworkAdvancedServerOps-1171852383-project-member'},tags=<?>,task_state='migrating',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T08:04:23Z,user_data=None,user_id='2ed2d17026504d70b893923a85cece4d',uuid=2ce29812-b64c-4801-a37b-68c55429b70c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "42776015-1d70-4b92-9890-d31aaa444637", "address": "fa:16:3e:ae:d9:3a", "network": {"id": "7ab4eeff-e26c-426f-afdf-0ed982f0262e", "bridge": "br-int", "label": "tempest-network-smoke--49053478", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": 
[{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.241", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fd8e24e430c64364ace789d88a68ba5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tap42776015-1d", "ovs_interfaceid": "42776015-1d70-4b92-9890-d31aaa444637", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec  6 03:04:54 np0005548731 nova_compute[232433]: 2025-12-06 08:04:54.508 232437 DEBUG nova.network.os_vif_util [None req-214681bc-aa39-4a6d-a65c-40bda5711e05 1bdbfd9a9c034d4baf0368c23697a002 0280d2f586294ccf97547f8bc41590f8 - - default default] Converting VIF {"id": "42776015-1d70-4b92-9890-d31aaa444637", "address": "fa:16:3e:ae:d9:3a", "network": {"id": "7ab4eeff-e26c-426f-afdf-0ed982f0262e", "bridge": "br-int", "label": "tempest-network-smoke--49053478", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.241", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fd8e24e430c64364ace789d88a68ba5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tap42776015-1d", "ovs_interfaceid": "42776015-1d70-4b92-9890-d31aaa444637", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 03:04:54 np0005548731 nova_compute[232433]: 2025-12-06 08:04:54.509 232437 DEBUG nova.network.os_vif_util [None req-214681bc-aa39-4a6d-a65c-40bda5711e05 1bdbfd9a9c034d4baf0368c23697a002 0280d2f586294ccf97547f8bc41590f8 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:ae:d9:3a,bridge_name='br-int',has_traffic_filtering=True,id=42776015-1d70-4b92-9890-d31aaa444637,network=Network(7ab4eeff-e26c-426f-afdf-0ed982f0262e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap42776015-1d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 03:04:54 np0005548731 nova_compute[232433]: 2025-12-06 08:04:54.509 232437 DEBUG os_vif [None req-214681bc-aa39-4a6d-a65c-40bda5711e05 1bdbfd9a9c034d4baf0368c23697a002 0280d2f586294ccf97547f8bc41590f8 - - default default] Plugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:ae:d9:3a,bridge_name='br-int',has_traffic_filtering=True,id=42776015-1d70-4b92-9890-d31aaa444637,network=Network(7ab4eeff-e26c-426f-afdf-0ed982f0262e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap42776015-1d') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec  6 03:04:54 np0005548731 nova_compute[232433]: 2025-12-06 08:04:54.509 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:04:54 np0005548731 nova_compute[232433]: 2025-12-06 08:04:54.510 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 03:04:54 np0005548731 nova_compute[232433]: 2025-12-06 08:04:54.510 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  6 03:04:54 np0005548731 nova_compute[232433]: 2025-12-06 08:04:54.513 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:04:54 np0005548731 nova_compute[232433]: 2025-12-06 08:04:54.513 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap42776015-1d, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 03:04:54 np0005548731 nova_compute[232433]: 2025-12-06 08:04:54.514 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap42776015-1d, col_values=(('external_ids', {'iface-id': '42776015-1d70-4b92-9890-d31aaa444637', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:ae:d9:3a', 'vm-uuid': '2ce29812-b64c-4801-a37b-68c55429b70c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 03:04:54 np0005548731 nova_compute[232433]: 2025-12-06 08:04:54.516 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:04:54 np0005548731 NetworkManager[49182]: <info>  [1765008294.5171] manager: (tap42776015-1d): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/441)
Dec  6 03:04:54 np0005548731 nova_compute[232433]: 2025-12-06 08:04:54.517 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  6 03:04:54 np0005548731 nova_compute[232433]: 2025-12-06 08:04:54.521 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:04:54 np0005548731 nova_compute[232433]: 2025-12-06 08:04:54.522 232437 INFO os_vif [None req-214681bc-aa39-4a6d-a65c-40bda5711e05 1bdbfd9a9c034d4baf0368c23697a002 0280d2f586294ccf97547f8bc41590f8 - - default default] Successfully plugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:ae:d9:3a,bridge_name='br-int',has_traffic_filtering=True,id=42776015-1d70-4b92-9890-d31aaa444637,network=Network(7ab4eeff-e26c-426f-afdf-0ed982f0262e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap42776015-1d')#033[00m
Dec  6 03:04:54 np0005548731 nova_compute[232433]: 2025-12-06 08:04:54.523 232437 DEBUG nova.virt.libvirt.driver [None req-214681bc-aa39-4a6d-a65c-40bda5711e05 1bdbfd9a9c034d4baf0368c23697a002 0280d2f586294ccf97547f8bc41590f8 - - default default] No dst_numa_info in migrate_data, no cores to power up in pre_live_migration. pre_live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10954#033[00m
Dec  6 03:04:54 np0005548731 nova_compute[232433]: 2025-12-06 08:04:54.523 232437 DEBUG nova.compute.manager [None req-214681bc-aa39-4a6d-a65c-40bda5711e05 1bdbfd9a9c034d4baf0368c23697a002 0280d2f586294ccf97547f8bc41590f8 - - default default] driver pre_live_migration data is LibvirtLiveMigrateData(bdms=[],block_migration=False,disk_available_mb=18432,disk_over_commit=False,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpddofkvg0',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='2ce29812-b64c-4801-a37b-68c55429b70c',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=False,migration=<?>,old_vol_attachment_ids={},serial_listen_addr=None,serial_listen_ports=[],src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=[],target_connect_addr=None,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) pre_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:8668#033[00m
Dec  6 03:04:54 np0005548731 nova_compute[232433]: 2025-12-06 08:04:54.568 232437 DEBUG nova.compute.manager [req-74d6ec45-256a-4df9-8755-628a627b565d req-7ea14f0a-ae0a-4074-b61b-cf7e4cb509d3 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 53cf05eb-a0e9-4817-a441-cbba2c770c4b] Received event network-changed-43e1b7c7-10b3-4138-8e30-33110bd9f83e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 03:04:54 np0005548731 nova_compute[232433]: 2025-12-06 08:04:54.568 232437 DEBUG nova.compute.manager [req-74d6ec45-256a-4df9-8755-628a627b565d req-7ea14f0a-ae0a-4074-b61b-cf7e4cb509d3 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 53cf05eb-a0e9-4817-a441-cbba2c770c4b] Refreshing instance network info cache due to event network-changed-43e1b7c7-10b3-4138-8e30-33110bd9f83e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  6 03:04:54 np0005548731 nova_compute[232433]: 2025-12-06 08:04:54.569 232437 DEBUG oslo_concurrency.lockutils [req-74d6ec45-256a-4df9-8755-628a627b565d req-7ea14f0a-ae0a-4074-b61b-cf7e4cb509d3 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-53cf05eb-a0e9-4817-a441-cbba2c770c4b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 03:04:54 np0005548731 nova_compute[232433]: 2025-12-06 08:04:54.673 232437 DEBUG nova.network.neutron [None req-53cf589f-8131-41a2-8be1-a2744b87c7c5 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 53cf05eb-a0e9-4817-a441-cbba2c770c4b] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec  6 03:04:55 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:04:55 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:04:55 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:04:55.149 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:04:55 np0005548731 nova_compute[232433]: 2025-12-06 08:04:55.702 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:04:55 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:04:55 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:04:55 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:04:55.801 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:04:56 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:04:56.166 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=82, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:ec:b3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '32:72:e7:89:e0:7d'}, ipsec=False) old=SB_Global(nb_cfg=81) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 03:04:56 np0005548731 nova_compute[232433]: 2025-12-06 08:04:56.166 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:04:56 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:04:56.168 143965 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec  6 03:04:56 np0005548731 nova_compute[232433]: 2025-12-06 08:04:56.342 232437 DEBUG nova.network.neutron [None req-53cf589f-8131-41a2-8be1-a2744b87c7c5 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 53cf05eb-a0e9-4817-a441-cbba2c770c4b] Updating instance_info_cache with network_info: [{"id": "43e1b7c7-10b3-4138-8e30-33110bd9f83e", "address": "fa:16:3e:87:8e:a2", "network": {"id": "9482cb7a-b1a1-4dca-80a7-c7782ee5fe71", "bridge": "br-int", "label": "tempest-network-smoke--1672901811", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f4735a799c84437b9dd4ea8778ad2fbb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap43e1b7c7-10", "ovs_interfaceid": "43e1b7c7-10b3-4138-8e30-33110bd9f83e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 03:04:56 np0005548731 nova_compute[232433]: 2025-12-06 08:04:56.394 232437 DEBUG oslo_concurrency.lockutils [None req-53cf589f-8131-41a2-8be1-a2744b87c7c5 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Releasing lock "refresh_cache-53cf05eb-a0e9-4817-a441-cbba2c770c4b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 03:04:56 np0005548731 nova_compute[232433]: 2025-12-06 08:04:56.395 232437 DEBUG nova.compute.manager [None req-53cf589f-8131-41a2-8be1-a2744b87c7c5 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 53cf05eb-a0e9-4817-a441-cbba2c770c4b] Instance network_info: |[{"id": "43e1b7c7-10b3-4138-8e30-33110bd9f83e", "address": "fa:16:3e:87:8e:a2", "network": {"id": "9482cb7a-b1a1-4dca-80a7-c7782ee5fe71", "bridge": "br-int", "label": "tempest-network-smoke--1672901811", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f4735a799c84437b9dd4ea8778ad2fbb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap43e1b7c7-10", "ovs_interfaceid": "43e1b7c7-10b3-4138-8e30-33110bd9f83e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Dec  6 03:04:56 np0005548731 nova_compute[232433]: 2025-12-06 08:04:56.395 232437 DEBUG oslo_concurrency.lockutils [req-74d6ec45-256a-4df9-8755-628a627b565d req-7ea14f0a-ae0a-4074-b61b-cf7e4cb509d3 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-53cf05eb-a0e9-4817-a441-cbba2c770c4b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 03:04:56 np0005548731 nova_compute[232433]: 2025-12-06 08:04:56.395 232437 DEBUG nova.network.neutron [req-74d6ec45-256a-4df9-8755-628a627b565d req-7ea14f0a-ae0a-4074-b61b-cf7e4cb509d3 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 53cf05eb-a0e9-4817-a441-cbba2c770c4b] Refreshing network info cache for port 43e1b7c7-10b3-4138-8e30-33110bd9f83e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  6 03:04:56 np0005548731 nova_compute[232433]: 2025-12-06 08:04:56.398 232437 DEBUG nova.virt.libvirt.driver [None req-53cf589f-8131-41a2-8be1-a2744b87c7c5 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 53cf05eb-a0e9-4817-a441-cbba2c770c4b] Start _get_guest_xml network_info=[{"id": "43e1b7c7-10b3-4138-8e30-33110bd9f83e", "address": "fa:16:3e:87:8e:a2", "network": {"id": "9482cb7a-b1a1-4dca-80a7-c7782ee5fe71", "bridge": "br-int", "label": "tempest-network-smoke--1672901811", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f4735a799c84437b9dd4ea8778ad2fbb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap43e1b7c7-10", "ovs_interfaceid": "43e1b7c7-10b3-4138-8e30-33110bd9f83e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'device_type': 'disk', 'encryption_secret_uuid': None, 'disk_bus': 'virtio', 'boot_index': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'encryption_options': None, 'size': 0, 'encrypted': False, 'image_id': '6efab05d-c7cf-4770-a5c3-c806a2739063'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec  6 03:04:56 np0005548731 nova_compute[232433]: 2025-12-06 08:04:56.403 232437 WARNING nova.virt.libvirt.driver [None req-53cf589f-8131-41a2-8be1-a2744b87c7c5 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  6 03:04:56 np0005548731 nova_compute[232433]: 2025-12-06 08:04:56.410 232437 DEBUG nova.virt.libvirt.host [None req-53cf589f-8131-41a2-8be1-a2744b87c7c5 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec  6 03:04:56 np0005548731 nova_compute[232433]: 2025-12-06 08:04:56.411 232437 DEBUG nova.virt.libvirt.host [None req-53cf589f-8131-41a2-8be1-a2744b87c7c5 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec  6 03:04:56 np0005548731 nova_compute[232433]: 2025-12-06 08:04:56.416 232437 DEBUG nova.virt.libvirt.host [None req-53cf589f-8131-41a2-8be1-a2744b87c7c5 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec  6 03:04:56 np0005548731 nova_compute[232433]: 2025-12-06 08:04:56.416 232437 DEBUG nova.virt.libvirt.host [None req-53cf589f-8131-41a2-8be1-a2744b87c7c5 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec  6 03:04:56 np0005548731 nova_compute[232433]: 2025-12-06 08:04:56.417 232437 DEBUG nova.virt.libvirt.driver [None req-53cf589f-8131-41a2-8be1-a2744b87c7c5 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec  6 03:04:56 np0005548731 nova_compute[232433]: 2025-12-06 08:04:56.417 232437 DEBUG nova.virt.hardware [None req-53cf589f-8131-41a2-8be1-a2744b87c7c5 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-06T06:56:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='25848a18-11d9-4f11-80b5-5d005675c76d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec  6 03:04:56 np0005548731 nova_compute[232433]: 2025-12-06 08:04:56.418 232437 DEBUG nova.virt.hardware [None req-53cf589f-8131-41a2-8be1-a2744b87c7c5 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec  6 03:04:56 np0005548731 nova_compute[232433]: 2025-12-06 08:04:56.418 232437 DEBUG nova.virt.hardware [None req-53cf589f-8131-41a2-8be1-a2744b87c7c5 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec  6 03:04:56 np0005548731 nova_compute[232433]: 2025-12-06 08:04:56.418 232437 DEBUG nova.virt.hardware [None req-53cf589f-8131-41a2-8be1-a2744b87c7c5 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec  6 03:04:56 np0005548731 nova_compute[232433]: 2025-12-06 08:04:56.418 232437 DEBUG nova.virt.hardware [None req-53cf589f-8131-41a2-8be1-a2744b87c7c5 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec  6 03:04:56 np0005548731 nova_compute[232433]: 2025-12-06 08:04:56.419 232437 DEBUG nova.virt.hardware [None req-53cf589f-8131-41a2-8be1-a2744b87c7c5 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec  6 03:04:56 np0005548731 nova_compute[232433]: 2025-12-06 08:04:56.419 232437 DEBUG nova.virt.hardware [None req-53cf589f-8131-41a2-8be1-a2744b87c7c5 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec  6 03:04:56 np0005548731 nova_compute[232433]: 2025-12-06 08:04:56.419 232437 DEBUG nova.virt.hardware [None req-53cf589f-8131-41a2-8be1-a2744b87c7c5 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec  6 03:04:56 np0005548731 nova_compute[232433]: 2025-12-06 08:04:56.419 232437 DEBUG nova.virt.hardware [None req-53cf589f-8131-41a2-8be1-a2744b87c7c5 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec  6 03:04:56 np0005548731 nova_compute[232433]: 2025-12-06 08:04:56.419 232437 DEBUG nova.virt.hardware [None req-53cf589f-8131-41a2-8be1-a2744b87c7c5 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec  6 03:04:56 np0005548731 nova_compute[232433]: 2025-12-06 08:04:56.420 232437 DEBUG nova.virt.hardware [None req-53cf589f-8131-41a2-8be1-a2744b87c7c5 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec  6 03:04:56 np0005548731 nova_compute[232433]: 2025-12-06 08:04:56.423 232437 DEBUG oslo_concurrency.processutils [None req-53cf589f-8131-41a2-8be1-a2744b87c7c5 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 03:04:56 np0005548731 nova_compute[232433]: 2025-12-06 08:04:56.594 232437 DEBUG nova.network.neutron [None req-214681bc-aa39-4a6d-a65c-40bda5711e05 1bdbfd9a9c034d4baf0368c23697a002 0280d2f586294ccf97547f8bc41590f8 - - default default] [instance: 2ce29812-b64c-4801-a37b-68c55429b70c] Port 42776015-1d70-4b92-9890-d31aaa444637 updated with migration profile {'migrating_to': 'compute-2.ctlplane.example.com'} successfully _setup_migration_port_profile /usr/lib/python3.9/site-packages/nova/network/neutron.py:354#033[00m
Dec  6 03:04:56 np0005548731 nova_compute[232433]: 2025-12-06 08:04:56.597 232437 DEBUG nova.compute.manager [None req-214681bc-aa39-4a6d-a65c-40bda5711e05 1bdbfd9a9c034d4baf0368c23697a002 0280d2f586294ccf97547f8bc41590f8 - - default default] pre_live_migration result data is LibvirtLiveMigrateData(bdms=[],block_migration=False,disk_available_mb=18432,disk_over_commit=False,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpddofkvg0',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='2ce29812-b64c-4801-a37b-68c55429b70c',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=False,migration=<?>,old_vol_attachment_ids={},serial_listen_addr=None,serial_listen_ports=[],src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=[],target_connect_addr=None,vifs=[VIFMigrateData],wait_for_vif_plugged=True) pre_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:8723#033[00m
Dec  6 03:04:56 np0005548731 systemd[1]: Starting libvirt proxy daemon...
Dec  6 03:04:56 np0005548731 systemd[1]: Started libvirt proxy daemon.
Dec  6 03:04:56 np0005548731 kernel: tap42776015-1d: entered promiscuous mode
Dec  6 03:04:56 np0005548731 NetworkManager[49182]: <info>  [1765008296.8363] manager: (tap42776015-1d): new Tun device (/org/freedesktop/NetworkManager/Devices/442)
Dec  6 03:04:56 np0005548731 nova_compute[232433]: 2025-12-06 08:04:56.837 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:04:56 np0005548731 ovn_controller[133927]: 2025-12-06T08:04:56Z|00957|binding|INFO|Claiming lport 42776015-1d70-4b92-9890-d31aaa444637 for this additional chassis.
Dec  6 03:04:56 np0005548731 ovn_controller[133927]: 2025-12-06T08:04:56Z|00958|binding|INFO|42776015-1d70-4b92-9890-d31aaa444637: Claiming fa:16:3e:ae:d9:3a 10.100.0.9
Dec  6 03:04:56 np0005548731 nova_compute[232433]: 2025-12-06 08:04:56.841 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:04:56 np0005548731 NetworkManager[49182]: <info>  [1765008296.8487] manager: (patch-provnet-9e78c1a1-68f4-477a-abaa-13a98bde06e5-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/443)
Dec  6 03:04:56 np0005548731 nova_compute[232433]: 2025-12-06 08:04:56.848 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:04:56 np0005548731 NetworkManager[49182]: <info>  [1765008296.8493] manager: (patch-br-int-to-provnet-9e78c1a1-68f4-477a-abaa-13a98bde06e5): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/444)
Dec  6 03:04:56 np0005548731 systemd-udevd[326021]: Network interface NamePolicy= disabled on kernel command line.
Dec  6 03:04:56 np0005548731 NetworkManager[49182]: <info>  [1765008296.8745] device (tap42776015-1d): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec  6 03:04:56 np0005548731 NetworkManager[49182]: <info>  [1765008296.8754] device (tap42776015-1d): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec  6 03:04:56 np0005548731 systemd-machined[195355]: New machine qemu-96-instance-000000bb.
Dec  6 03:04:56 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  6 03:04:56 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3888507031' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  6 03:04:56 np0005548731 nova_compute[232433]: 2025-12-06 08:04:56.908 232437 DEBUG oslo_concurrency.processutils [None req-53cf589f-8131-41a2-8be1-a2744b87c7c5 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.485s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 03:04:56 np0005548731 nova_compute[232433]: 2025-12-06 08:04:56.940 232437 DEBUG nova.storage.rbd_utils [None req-53cf589f-8131-41a2-8be1-a2744b87c7c5 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] rbd image 53cf05eb-a0e9-4817-a441-cbba2c770c4b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 03:04:56 np0005548731 nova_compute[232433]: 2025-12-06 08:04:56.945 232437 DEBUG oslo_concurrency.processutils [None req-53cf589f-8131-41a2-8be1-a2744b87c7c5 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 03:04:56 np0005548731 systemd[1]: Started Virtual Machine qemu-96-instance-000000bb.
Dec  6 03:04:56 np0005548731 nova_compute[232433]: 2025-12-06 08:04:56.972 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:04:56 np0005548731 nova_compute[232433]: 2025-12-06 08:04:56.981 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:04:56 np0005548731 ovn_controller[133927]: 2025-12-06T08:04:56Z|00959|binding|INFO|Setting lport 42776015-1d70-4b92-9890-d31aaa444637 ovn-installed in OVS
Dec  6 03:04:56 np0005548731 nova_compute[232433]: 2025-12-06 08:04:56.993 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:04:57 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:04:57 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:04:57 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:04:57.151 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:04:57 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  6 03:04:57 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/4063916969' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  6 03:04:57 np0005548731 nova_compute[232433]: 2025-12-06 08:04:57.379 232437 DEBUG oslo_concurrency.processutils [None req-53cf589f-8131-41a2-8be1-a2744b87c7c5 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.433s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 03:04:57 np0005548731 nova_compute[232433]: 2025-12-06 08:04:57.382 232437 DEBUG nova.virt.libvirt.vif [None req-53cf589f-8131-41a2-8be1-a2744b87c7c5 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T08:04:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1175655928',display_name='tempest-TestNetworkBasicOps-server-1175655928',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1175655928',id=188,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNBt0jLPk2iN+Hpq+kn9o8I8ax4aimgQqm1H7syySMpztN4N1lD49+MzdovmLDVVSxMcVhESXiJzt8WdekylzuvNkp6MvhMMn70BtxkiBahaQSbGPzzOiP2J48ZImfcQaw==',key_name='tempest-TestNetworkBasicOps-640512507',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f4735a799c84437b9dd4ea8778ad2fbb',ramdisk_id='',reservation_id='r-f0wuvw2n',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1435471576',owner_user_name='tempest-TestNetworkBasicOps-1435471576-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T08:04:50Z,user_data=None,user_id='d5359905348247d0b9b5b95982e890bb',uuid=53cf05eb-a0e9-4817-a441-cbba2c770c4b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "43e1b7c7-10b3-4138-8e30-33110bd9f83e", "address": "fa:16:3e:87:8e:a2", "network": {"id": "9482cb7a-b1a1-4dca-80a7-c7782ee5fe71", "bridge": "br-int", "label": "tempest-network-smoke--1672901811", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "f4735a799c84437b9dd4ea8778ad2fbb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap43e1b7c7-10", "ovs_interfaceid": "43e1b7c7-10b3-4138-8e30-33110bd9f83e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec  6 03:04:57 np0005548731 nova_compute[232433]: 2025-12-06 08:04:57.382 232437 DEBUG nova.network.os_vif_util [None req-53cf589f-8131-41a2-8be1-a2744b87c7c5 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Converting VIF {"id": "43e1b7c7-10b3-4138-8e30-33110bd9f83e", "address": "fa:16:3e:87:8e:a2", "network": {"id": "9482cb7a-b1a1-4dca-80a7-c7782ee5fe71", "bridge": "br-int", "label": "tempest-network-smoke--1672901811", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f4735a799c84437b9dd4ea8778ad2fbb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap43e1b7c7-10", "ovs_interfaceid": "43e1b7c7-10b3-4138-8e30-33110bd9f83e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 03:04:57 np0005548731 nova_compute[232433]: 2025-12-06 08:04:57.383 232437 DEBUG nova.network.os_vif_util [None req-53cf589f-8131-41a2-8be1-a2744b87c7c5 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:87:8e:a2,bridge_name='br-int',has_traffic_filtering=True,id=43e1b7c7-10b3-4138-8e30-33110bd9f83e,network=Network(9482cb7a-b1a1-4dca-80a7-c7782ee5fe71),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap43e1b7c7-10') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 03:04:57 np0005548731 nova_compute[232433]: 2025-12-06 08:04:57.385 232437 DEBUG nova.objects.instance [None req-53cf589f-8131-41a2-8be1-a2744b87c7c5 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Lazy-loading 'pci_devices' on Instance uuid 53cf05eb-a0e9-4817-a441-cbba2c770c4b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 03:04:57 np0005548731 nova_compute[232433]: 2025-12-06 08:04:57.402 232437 DEBUG nova.virt.libvirt.driver [None req-53cf589f-8131-41a2-8be1-a2744b87c7c5 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 53cf05eb-a0e9-4817-a441-cbba2c770c4b] End _get_guest_xml xml=<domain type="kvm">
Dec  6 03:04:57 np0005548731 nova_compute[232433]:  <uuid>53cf05eb-a0e9-4817-a441-cbba2c770c4b</uuid>
Dec  6 03:04:57 np0005548731 nova_compute[232433]:  <name>instance-000000bc</name>
Dec  6 03:04:57 np0005548731 nova_compute[232433]:  <memory>131072</memory>
Dec  6 03:04:57 np0005548731 nova_compute[232433]:  <vcpu>1</vcpu>
Dec  6 03:04:57 np0005548731 nova_compute[232433]:  <metadata>
Dec  6 03:04:57 np0005548731 nova_compute[232433]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec  6 03:04:57 np0005548731 nova_compute[232433]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec  6 03:04:57 np0005548731 nova_compute[232433]:      <nova:name>tempest-TestNetworkBasicOps-server-1175655928</nova:name>
Dec  6 03:04:57 np0005548731 nova_compute[232433]:      <nova:creationTime>2025-12-06 08:04:56</nova:creationTime>
Dec  6 03:04:57 np0005548731 nova_compute[232433]:      <nova:flavor name="m1.nano">
Dec  6 03:04:57 np0005548731 nova_compute[232433]:        <nova:memory>128</nova:memory>
Dec  6 03:04:57 np0005548731 nova_compute[232433]:        <nova:disk>1</nova:disk>
Dec  6 03:04:57 np0005548731 nova_compute[232433]:        <nova:swap>0</nova:swap>
Dec  6 03:04:57 np0005548731 nova_compute[232433]:        <nova:ephemeral>0</nova:ephemeral>
Dec  6 03:04:57 np0005548731 nova_compute[232433]:        <nova:vcpus>1</nova:vcpus>
Dec  6 03:04:57 np0005548731 nova_compute[232433]:      </nova:flavor>
Dec  6 03:04:57 np0005548731 nova_compute[232433]:      <nova:owner>
Dec  6 03:04:57 np0005548731 nova_compute[232433]:        <nova:user uuid="d5359905348247d0b9b5b95982e890bb">tempest-TestNetworkBasicOps-1435471576-project-member</nova:user>
Dec  6 03:04:57 np0005548731 nova_compute[232433]:        <nova:project uuid="f4735a799c84437b9dd4ea8778ad2fbb">tempest-TestNetworkBasicOps-1435471576</nova:project>
Dec  6 03:04:57 np0005548731 nova_compute[232433]:      </nova:owner>
Dec  6 03:04:57 np0005548731 nova_compute[232433]:      <nova:root type="image" uuid="6efab05d-c7cf-4770-a5c3-c806a2739063"/>
Dec  6 03:04:57 np0005548731 nova_compute[232433]:      <nova:ports>
Dec  6 03:04:57 np0005548731 nova_compute[232433]:        <nova:port uuid="43e1b7c7-10b3-4138-8e30-33110bd9f83e">
Dec  6 03:04:57 np0005548731 nova_compute[232433]:          <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Dec  6 03:04:57 np0005548731 nova_compute[232433]:        </nova:port>
Dec  6 03:04:57 np0005548731 nova_compute[232433]:      </nova:ports>
Dec  6 03:04:57 np0005548731 nova_compute[232433]:    </nova:instance>
Dec  6 03:04:57 np0005548731 nova_compute[232433]:  </metadata>
Dec  6 03:04:57 np0005548731 nova_compute[232433]:  <sysinfo type="smbios">
Dec  6 03:04:57 np0005548731 nova_compute[232433]:    <system>
Dec  6 03:04:57 np0005548731 nova_compute[232433]:      <entry name="manufacturer">RDO</entry>
Dec  6 03:04:57 np0005548731 nova_compute[232433]:      <entry name="product">OpenStack Compute</entry>
Dec  6 03:04:57 np0005548731 nova_compute[232433]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec  6 03:04:57 np0005548731 nova_compute[232433]:      <entry name="serial">53cf05eb-a0e9-4817-a441-cbba2c770c4b</entry>
Dec  6 03:04:57 np0005548731 nova_compute[232433]:      <entry name="uuid">53cf05eb-a0e9-4817-a441-cbba2c770c4b</entry>
Dec  6 03:04:57 np0005548731 nova_compute[232433]:      <entry name="family">Virtual Machine</entry>
Dec  6 03:04:57 np0005548731 nova_compute[232433]:    </system>
Dec  6 03:04:57 np0005548731 nova_compute[232433]:  </sysinfo>
Dec  6 03:04:57 np0005548731 nova_compute[232433]:  <os>
Dec  6 03:04:57 np0005548731 nova_compute[232433]:    <type arch="x86_64" machine="q35">hvm</type>
Dec  6 03:04:57 np0005548731 nova_compute[232433]:    <boot dev="hd"/>
Dec  6 03:04:57 np0005548731 nova_compute[232433]:    <smbios mode="sysinfo"/>
Dec  6 03:04:57 np0005548731 nova_compute[232433]:  </os>
Dec  6 03:04:57 np0005548731 nova_compute[232433]:  <features>
Dec  6 03:04:57 np0005548731 nova_compute[232433]:    <acpi/>
Dec  6 03:04:57 np0005548731 nova_compute[232433]:    <apic/>
Dec  6 03:04:57 np0005548731 nova_compute[232433]:    <vmcoreinfo/>
Dec  6 03:04:57 np0005548731 nova_compute[232433]:  </features>
Dec  6 03:04:57 np0005548731 nova_compute[232433]:  <clock offset="utc">
Dec  6 03:04:57 np0005548731 nova_compute[232433]:    <timer name="pit" tickpolicy="delay"/>
Dec  6 03:04:57 np0005548731 nova_compute[232433]:    <timer name="rtc" tickpolicy="catchup"/>
Dec  6 03:04:57 np0005548731 nova_compute[232433]:    <timer name="hpet" present="no"/>
Dec  6 03:04:57 np0005548731 nova_compute[232433]:  </clock>
Dec  6 03:04:57 np0005548731 nova_compute[232433]:  <cpu mode="custom" match="exact">
Dec  6 03:04:57 np0005548731 nova_compute[232433]:    <model>Nehalem</model>
Dec  6 03:04:57 np0005548731 nova_compute[232433]:    <topology sockets="1" cores="1" threads="1"/>
Dec  6 03:04:57 np0005548731 nova_compute[232433]:  </cpu>
Dec  6 03:04:57 np0005548731 nova_compute[232433]:  <devices>
Dec  6 03:04:57 np0005548731 nova_compute[232433]:    <disk type="network" device="disk">
Dec  6 03:04:57 np0005548731 nova_compute[232433]:      <driver type="raw" cache="none"/>
Dec  6 03:04:57 np0005548731 nova_compute[232433]:      <source protocol="rbd" name="vms/53cf05eb-a0e9-4817-a441-cbba2c770c4b_disk">
Dec  6 03:04:57 np0005548731 nova_compute[232433]:        <host name="192.168.122.100" port="6789"/>
Dec  6 03:04:57 np0005548731 nova_compute[232433]:        <host name="192.168.122.102" port="6789"/>
Dec  6 03:04:57 np0005548731 nova_compute[232433]:        <host name="192.168.122.101" port="6789"/>
Dec  6 03:04:57 np0005548731 nova_compute[232433]:      </source>
Dec  6 03:04:57 np0005548731 nova_compute[232433]:      <auth username="openstack">
Dec  6 03:04:57 np0005548731 nova_compute[232433]:        <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec  6 03:04:57 np0005548731 nova_compute[232433]:      </auth>
Dec  6 03:04:57 np0005548731 nova_compute[232433]:      <target dev="vda" bus="virtio"/>
Dec  6 03:04:57 np0005548731 nova_compute[232433]:    </disk>
Dec  6 03:04:57 np0005548731 nova_compute[232433]:    <disk type="network" device="cdrom">
Dec  6 03:04:57 np0005548731 nova_compute[232433]:      <driver type="raw" cache="none"/>
Dec  6 03:04:57 np0005548731 nova_compute[232433]:      <source protocol="rbd" name="vms/53cf05eb-a0e9-4817-a441-cbba2c770c4b_disk.config">
Dec  6 03:04:57 np0005548731 nova_compute[232433]:        <host name="192.168.122.100" port="6789"/>
Dec  6 03:04:57 np0005548731 nova_compute[232433]:        <host name="192.168.122.102" port="6789"/>
Dec  6 03:04:57 np0005548731 nova_compute[232433]:        <host name="192.168.122.101" port="6789"/>
Dec  6 03:04:57 np0005548731 nova_compute[232433]:      </source>
Dec  6 03:04:57 np0005548731 nova_compute[232433]:      <auth username="openstack">
Dec  6 03:04:57 np0005548731 nova_compute[232433]:        <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec  6 03:04:57 np0005548731 nova_compute[232433]:      </auth>
Dec  6 03:04:57 np0005548731 nova_compute[232433]:      <target dev="sda" bus="sata"/>
Dec  6 03:04:57 np0005548731 nova_compute[232433]:    </disk>
Dec  6 03:04:57 np0005548731 nova_compute[232433]:    <interface type="ethernet">
Dec  6 03:04:57 np0005548731 nova_compute[232433]:      <mac address="fa:16:3e:87:8e:a2"/>
Dec  6 03:04:57 np0005548731 nova_compute[232433]:      <model type="virtio"/>
Dec  6 03:04:57 np0005548731 nova_compute[232433]:      <driver name="vhost" rx_queue_size="512"/>
Dec  6 03:04:57 np0005548731 nova_compute[232433]:      <mtu size="1442"/>
Dec  6 03:04:57 np0005548731 nova_compute[232433]:      <target dev="tap43e1b7c7-10"/>
Dec  6 03:04:57 np0005548731 nova_compute[232433]:    </interface>
Dec  6 03:04:57 np0005548731 nova_compute[232433]:    <serial type="pty">
Dec  6 03:04:57 np0005548731 nova_compute[232433]:      <log file="/var/lib/nova/instances/53cf05eb-a0e9-4817-a441-cbba2c770c4b/console.log" append="off"/>
Dec  6 03:04:57 np0005548731 nova_compute[232433]:    </serial>
Dec  6 03:04:57 np0005548731 nova_compute[232433]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec  6 03:04:57 np0005548731 nova_compute[232433]:    <video>
Dec  6 03:04:57 np0005548731 nova_compute[232433]:      <model type="virtio"/>
Dec  6 03:04:57 np0005548731 nova_compute[232433]:    </video>
Dec  6 03:04:57 np0005548731 nova_compute[232433]:    <input type="tablet" bus="usb"/>
Dec  6 03:04:57 np0005548731 nova_compute[232433]:    <rng model="virtio">
Dec  6 03:04:57 np0005548731 nova_compute[232433]:      <backend model="random">/dev/urandom</backend>
Dec  6 03:04:57 np0005548731 nova_compute[232433]:    </rng>
Dec  6 03:04:57 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root"/>
Dec  6 03:04:57 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:04:57 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:04:57 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:04:57 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:04:57 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:04:57 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:04:57 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:04:57 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:04:57 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:04:57 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:04:57 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:04:57 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:04:57 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:04:57 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:04:57 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:04:57 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:04:57 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:04:57 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:04:57 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:04:57 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:04:57 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:04:57 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:04:57 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:04:57 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:04:57 np0005548731 nova_compute[232433]:    <controller type="usb" index="0"/>
Dec  6 03:04:57 np0005548731 nova_compute[232433]:    <memballoon model="virtio">
Dec  6 03:04:57 np0005548731 nova_compute[232433]:      <stats period="10"/>
Dec  6 03:04:57 np0005548731 nova_compute[232433]:    </memballoon>
Dec  6 03:04:57 np0005548731 nova_compute[232433]:  </devices>
Dec  6 03:04:57 np0005548731 nova_compute[232433]: </domain>
Dec  6 03:04:57 np0005548731 nova_compute[232433]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec  6 03:04:57 np0005548731 nova_compute[232433]: 2025-12-06 08:04:57.405 232437 DEBUG nova.compute.manager [None req-53cf589f-8131-41a2-8be1-a2744b87c7c5 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 53cf05eb-a0e9-4817-a441-cbba2c770c4b] Preparing to wait for external event network-vif-plugged-43e1b7c7-10b3-4138-8e30-33110bd9f83e prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec  6 03:04:57 np0005548731 nova_compute[232433]: 2025-12-06 08:04:57.405 232437 DEBUG oslo_concurrency.lockutils [None req-53cf589f-8131-41a2-8be1-a2744b87c7c5 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Acquiring lock "53cf05eb-a0e9-4817-a441-cbba2c770c4b-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:04:57 np0005548731 nova_compute[232433]: 2025-12-06 08:04:57.405 232437 DEBUG oslo_concurrency.lockutils [None req-53cf589f-8131-41a2-8be1-a2744b87c7c5 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Lock "53cf05eb-a0e9-4817-a441-cbba2c770c4b-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:04:57 np0005548731 nova_compute[232433]: 2025-12-06 08:04:57.406 232437 DEBUG oslo_concurrency.lockutils [None req-53cf589f-8131-41a2-8be1-a2744b87c7c5 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Lock "53cf05eb-a0e9-4817-a441-cbba2c770c4b-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:04:57 np0005548731 nova_compute[232433]: 2025-12-06 08:04:57.407 232437 DEBUG nova.virt.libvirt.vif [None req-53cf589f-8131-41a2-8be1-a2744b87c7c5 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T08:04:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1175655928',display_name='tempest-TestNetworkBasicOps-server-1175655928',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1175655928',id=188,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNBt0jLPk2iN+Hpq+kn9o8I8ax4aimgQqm1H7syySMpztN4N1lD49+MzdovmLDVVSxMcVhESXiJzt8WdekylzuvNkp6MvhMMn70BtxkiBahaQSbGPzzOiP2J48ZImfcQaw==',key_name='tempest-TestNetworkBasicOps-640512507',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f4735a799c84437b9dd4ea8778ad2fbb',ramdisk_id='',reservation_id='r-f0wuvw2n',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1435471576',owner_user_name='tempest-TestNetworkBasicOps-1435471576-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T08:04:50Z,user_data=None,user_id='d5359905348247d0b9b5b95982e890bb',uuid=53cf05eb-a0e9-4817-a441-cbba2c770c4b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "43e1b7c7-10b3-4138-8e30-33110bd9f83e", "address": "fa:16:3e:87:8e:a2", "network": {"id": "9482cb7a-b1a1-4dca-80a7-c7782ee5fe71", "bridge": "br-int", "label": "tempest-network-smoke--1672901811", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "f4735a799c84437b9dd4ea8778ad2fbb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap43e1b7c7-10", "ovs_interfaceid": "43e1b7c7-10b3-4138-8e30-33110bd9f83e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec  6 03:04:57 np0005548731 nova_compute[232433]: 2025-12-06 08:04:57.407 232437 DEBUG nova.network.os_vif_util [None req-53cf589f-8131-41a2-8be1-a2744b87c7c5 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Converting VIF {"id": "43e1b7c7-10b3-4138-8e30-33110bd9f83e", "address": "fa:16:3e:87:8e:a2", "network": {"id": "9482cb7a-b1a1-4dca-80a7-c7782ee5fe71", "bridge": "br-int", "label": "tempest-network-smoke--1672901811", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f4735a799c84437b9dd4ea8778ad2fbb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap43e1b7c7-10", "ovs_interfaceid": "43e1b7c7-10b3-4138-8e30-33110bd9f83e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 03:04:57 np0005548731 nova_compute[232433]: 2025-12-06 08:04:57.408 232437 DEBUG nova.network.os_vif_util [None req-53cf589f-8131-41a2-8be1-a2744b87c7c5 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:87:8e:a2,bridge_name='br-int',has_traffic_filtering=True,id=43e1b7c7-10b3-4138-8e30-33110bd9f83e,network=Network(9482cb7a-b1a1-4dca-80a7-c7782ee5fe71),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap43e1b7c7-10') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 03:04:57 np0005548731 nova_compute[232433]: 2025-12-06 08:04:57.408 232437 DEBUG os_vif [None req-53cf589f-8131-41a2-8be1-a2744b87c7c5 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:87:8e:a2,bridge_name='br-int',has_traffic_filtering=True,id=43e1b7c7-10b3-4138-8e30-33110bd9f83e,network=Network(9482cb7a-b1a1-4dca-80a7-c7782ee5fe71),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap43e1b7c7-10') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec  6 03:04:57 np0005548731 nova_compute[232433]: 2025-12-06 08:04:57.409 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:04:57 np0005548731 nova_compute[232433]: 2025-12-06 08:04:57.409 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 03:04:57 np0005548731 nova_compute[232433]: 2025-12-06 08:04:57.410 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  6 03:04:57 np0005548731 nova_compute[232433]: 2025-12-06 08:04:57.412 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:04:57 np0005548731 nova_compute[232433]: 2025-12-06 08:04:57.412 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap43e1b7c7-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 03:04:57 np0005548731 nova_compute[232433]: 2025-12-06 08:04:57.413 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap43e1b7c7-10, col_values=(('external_ids', {'iface-id': '43e1b7c7-10b3-4138-8e30-33110bd9f83e', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:87:8e:a2', 'vm-uuid': '53cf05eb-a0e9-4817-a441-cbba2c770c4b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 03:04:57 np0005548731 NetworkManager[49182]: <info>  [1765008297.4164] manager: (tap43e1b7c7-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/445)
Dec  6 03:04:57 np0005548731 nova_compute[232433]: 2025-12-06 08:04:57.416 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  6 03:04:57 np0005548731 nova_compute[232433]: 2025-12-06 08:04:57.420 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:04:57 np0005548731 nova_compute[232433]: 2025-12-06 08:04:57.421 232437 INFO os_vif [None req-53cf589f-8131-41a2-8be1-a2744b87c7c5 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:87:8e:a2,bridge_name='br-int',has_traffic_filtering=True,id=43e1b7c7-10b3-4138-8e30-33110bd9f83e,network=Network(9482cb7a-b1a1-4dca-80a7-c7782ee5fe71),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap43e1b7c7-10')#033[00m
Dec  6 03:04:57 np0005548731 nova_compute[232433]: 2025-12-06 08:04:57.479 232437 DEBUG nova.virt.driver [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Emitting event <LifecycleEvent: 1765008297.4784682, 2ce29812-b64c-4801-a37b-68c55429b70c => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 03:04:57 np0005548731 nova_compute[232433]: 2025-12-06 08:04:57.480 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 2ce29812-b64c-4801-a37b-68c55429b70c] VM Started (Lifecycle Event)#033[00m
Dec  6 03:04:57 np0005548731 nova_compute[232433]: 2025-12-06 08:04:57.483 232437 DEBUG nova.virt.libvirt.driver [None req-53cf589f-8131-41a2-8be1-a2744b87c7c5 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  6 03:04:57 np0005548731 nova_compute[232433]: 2025-12-06 08:04:57.483 232437 DEBUG nova.virt.libvirt.driver [None req-53cf589f-8131-41a2-8be1-a2744b87c7c5 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  6 03:04:57 np0005548731 nova_compute[232433]: 2025-12-06 08:04:57.483 232437 DEBUG nova.virt.libvirt.driver [None req-53cf589f-8131-41a2-8be1-a2744b87c7c5 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] No VIF found with MAC fa:16:3e:87:8e:a2, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec  6 03:04:57 np0005548731 nova_compute[232433]: 2025-12-06 08:04:57.484 232437 INFO nova.virt.libvirt.driver [None req-53cf589f-8131-41a2-8be1-a2744b87c7c5 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 53cf05eb-a0e9-4817-a441-cbba2c770c4b] Using config drive#033[00m
Dec  6 03:04:57 np0005548731 nova_compute[232433]: 2025-12-06 08:04:57.508 232437 DEBUG nova.storage.rbd_utils [None req-53cf589f-8131-41a2-8be1-a2744b87c7c5 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] rbd image 53cf05eb-a0e9-4817-a441-cbba2c770c4b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 03:04:57 np0005548731 nova_compute[232433]: 2025-12-06 08:04:57.514 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 2ce29812-b64c-4801-a37b-68c55429b70c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 03:04:57 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:04:57 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:04:57 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:04:57.803 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:04:57 np0005548731 nova_compute[232433]: 2025-12-06 08:04:57.815 232437 INFO nova.virt.libvirt.driver [None req-53cf589f-8131-41a2-8be1-a2744b87c7c5 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 53cf05eb-a0e9-4817-a441-cbba2c770c4b] Creating config drive at /var/lib/nova/instances/53cf05eb-a0e9-4817-a441-cbba2c770c4b/disk.config#033[00m
Dec  6 03:04:57 np0005548731 nova_compute[232433]: 2025-12-06 08:04:57.820 232437 DEBUG oslo_concurrency.processutils [None req-53cf589f-8131-41a2-8be1-a2744b87c7c5 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/53cf05eb-a0e9-4817-a441-cbba2c770c4b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpdqwt51g1 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 03:04:57 np0005548731 nova_compute[232433]: 2025-12-06 08:04:57.941 232437 DEBUG nova.virt.driver [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Emitting event <LifecycleEvent: 1765008297.9408004, 2ce29812-b64c-4801-a37b-68c55429b70c => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 03:04:57 np0005548731 nova_compute[232433]: 2025-12-06 08:04:57.941 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 2ce29812-b64c-4801-a37b-68c55429b70c] VM Resumed (Lifecycle Event)#033[00m
Dec  6 03:04:57 np0005548731 nova_compute[232433]: 2025-12-06 08:04:57.954 232437 DEBUG oslo_concurrency.processutils [None req-53cf589f-8131-41a2-8be1-a2744b87c7c5 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/53cf05eb-a0e9-4817-a441-cbba2c770c4b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpdqwt51g1" returned: 0 in 0.134s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 03:04:57 np0005548731 nova_compute[232433]: 2025-12-06 08:04:57.984 232437 DEBUG nova.storage.rbd_utils [None req-53cf589f-8131-41a2-8be1-a2744b87c7c5 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] rbd image 53cf05eb-a0e9-4817-a441-cbba2c770c4b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 03:04:57 np0005548731 nova_compute[232433]: 2025-12-06 08:04:57.988 232437 DEBUG oslo_concurrency.processutils [None req-53cf589f-8131-41a2-8be1-a2744b87c7c5 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/53cf05eb-a0e9-4817-a441-cbba2c770c4b/disk.config 53cf05eb-a0e9-4817-a441-cbba2c770c4b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 03:04:58 np0005548731 nova_compute[232433]: 2025-12-06 08:04:58.015 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 2ce29812-b64c-4801-a37b-68c55429b70c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 03:04:58 np0005548731 nova_compute[232433]: 2025-12-06 08:04:58.019 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 2ce29812-b64c-4801-a37b-68c55429b70c] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: migrating, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  6 03:04:58 np0005548731 nova_compute[232433]: 2025-12-06 08:04:58.038 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 2ce29812-b64c-4801-a37b-68c55429b70c] During the sync_power process the instance has moved from host compute-0.ctlplane.example.com to host compute-2.ctlplane.example.com#033[00m
Dec  6 03:04:58 np0005548731 nova_compute[232433]: 2025-12-06 08:04:58.127 232437 DEBUG oslo_concurrency.processutils [None req-53cf589f-8131-41a2-8be1-a2744b87c7c5 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/53cf05eb-a0e9-4817-a441-cbba2c770c4b/disk.config 53cf05eb-a0e9-4817-a441-cbba2c770c4b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.138s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 03:04:58 np0005548731 nova_compute[232433]: 2025-12-06 08:04:58.127 232437 INFO nova.virt.libvirt.driver [None req-53cf589f-8131-41a2-8be1-a2744b87c7c5 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 53cf05eb-a0e9-4817-a441-cbba2c770c4b] Deleting local config drive /var/lib/nova/instances/53cf05eb-a0e9-4817-a441-cbba2c770c4b/disk.config because it was imported into RBD.#033[00m
Dec  6 03:04:58 np0005548731 kernel: tap43e1b7c7-10: entered promiscuous mode
Dec  6 03:04:58 np0005548731 NetworkManager[49182]: <info>  [1765008298.1698] manager: (tap43e1b7c7-10): new Tun device (/org/freedesktop/NetworkManager/Devices/446)
Dec  6 03:04:58 np0005548731 ovn_controller[133927]: 2025-12-06T08:04:58Z|00960|binding|INFO|Claiming lport 43e1b7c7-10b3-4138-8e30-33110bd9f83e for this chassis.
Dec  6 03:04:58 np0005548731 nova_compute[232433]: 2025-12-06 08:04:58.171 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:04:58 np0005548731 ovn_controller[133927]: 2025-12-06T08:04:58Z|00961|binding|INFO|43e1b7c7-10b3-4138-8e30-33110bd9f83e: Claiming fa:16:3e:87:8e:a2 10.100.0.4
Dec  6 03:04:58 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:04:58.177 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:87:8e:a2 10.100.0.4'], port_security=['fa:16:3e:87:8e:a2 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '53cf05eb-a0e9-4817-a441-cbba2c770c4b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9482cb7a-b1a1-4dca-80a7-c7782ee5fe71', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f4735a799c84437b9dd4ea8778ad2fbb', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'f40db9ca-2d53-4929-957f-d781dfa63d97', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d84d8f55-4938-4502-958a-437fbc252df8, chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], logical_port=43e1b7c7-10b3-4138-8e30-33110bd9f83e) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 03:04:58 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:04:58.178 143965 INFO neutron.agent.ovn.metadata.agent [-] Port 43e1b7c7-10b3-4138-8e30-33110bd9f83e in datapath 9482cb7a-b1a1-4dca-80a7-c7782ee5fe71 bound to our chassis#033[00m
Dec  6 03:04:58 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:04:58.180 143965 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 9482cb7a-b1a1-4dca-80a7-c7782ee5fe71#033[00m
Dec  6 03:04:58 np0005548731 NetworkManager[49182]: <info>  [1765008298.1829] device (tap43e1b7c7-10): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec  6 03:04:58 np0005548731 NetworkManager[49182]: <info>  [1765008298.1843] device (tap43e1b7c7-10): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec  6 03:04:58 np0005548731 ovn_controller[133927]: 2025-12-06T08:04:58Z|00962|binding|INFO|Setting lport 43e1b7c7-10b3-4138-8e30-33110bd9f83e ovn-installed in OVS
Dec  6 03:04:58 np0005548731 ovn_controller[133927]: 2025-12-06T08:04:58Z|00963|binding|INFO|Setting lport 43e1b7c7-10b3-4138-8e30-33110bd9f83e up in Southbound
Dec  6 03:04:58 np0005548731 nova_compute[232433]: 2025-12-06 08:04:58.188 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:04:58 np0005548731 nova_compute[232433]: 2025-12-06 08:04:58.191 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:04:58 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:04:58.192 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[b7e4ff96-9d08-4bad-b07d-4dc5f4363b43]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:04:58 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:04:58.193 143965 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap9482cb7a-b1 in ovnmeta-9482cb7a-b1a1-4dca-80a7-c7782ee5fe71 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Dec  6 03:04:58 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:04:58.195 236668 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap9482cb7a-b0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Dec  6 03:04:58 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:04:58.195 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[24bc969d-d792-4ad3-a0cd-f002e3a660e5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:04:58 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:04:58.196 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[4a288b74-cca6-4b47-a3c7-a51f4fb9dced]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:04:58 np0005548731 systemd-machined[195355]: New machine qemu-97-instance-000000bc.
Dec  6 03:04:58 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:04:58.206 144079 DEBUG oslo.privsep.daemon [-] privsep: reply[44d9038e-0f7d-455c-8af2-430a57779304]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:04:58 np0005548731 systemd[1]: Started Virtual Machine qemu-97-instance-000000bc.
Dec  6 03:04:58 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:04:58.218 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[bfb158d3-0fc4-4085-b342-6a7d45569e6c]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:04:58 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:04:58.252 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[593a6f0e-c6f5-4222-bafd-85e9384263f4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:04:58 np0005548731 NetworkManager[49182]: <info>  [1765008298.2577] manager: (tap9482cb7a-b0): new Veth device (/org/freedesktop/NetworkManager/Devices/447)
Dec  6 03:04:58 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:04:58.257 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[701ba3e8-5995-470d-ad3b-c89a58bb12ea]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:04:58 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:04:58.286 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[bbb388df-8fa5-4177-a211-c72d25dea8d6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:04:58 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:04:58.289 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[380f6a33-5c0d-40ec-8474-f79afc365839]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:04:58 np0005548731 NetworkManager[49182]: <info>  [1765008298.3095] device (tap9482cb7a-b0): carrier: link connected
Dec  6 03:04:58 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:04:58.318 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[4463a74a-7f4d-4d0d-afbb-53d93bb393ca]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:04:58 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:04:58.336 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[8bf7ee5e-df2c-4804-81b4-c72f2c177809]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9482cb7a-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:be:30:3c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 291], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 856031, 'reachable_time': 42419, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 326222, 'error': None, 'target': 'ovnmeta-9482cb7a-b1a1-4dca-80a7-c7782ee5fe71', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:04:58 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:04:58.353 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[23a69d88-84cd-475d-93b5-d1f79d7a85b9]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:febe:303c'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 856031, 'tstamp': 856031}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 326223, 'error': None, 'target': 'ovnmeta-9482cb7a-b1a1-4dca-80a7-c7782ee5fe71', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:04:58 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:04:58.371 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[971e2807-8916-4fd1-84db-e1c507e57b5f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9482cb7a-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:be:30:3c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 291], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 856031, 'reachable_time': 42419, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 326224, 'error': None, 'target': 'ovnmeta-9482cb7a-b1a1-4dca-80a7-c7782ee5fe71', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:04:58 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:04:58.399 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[70e1da39-aae6-453e-a01b-56c1ec58d3c0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:04:58 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:04:58.454 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[ea0ded9d-a96f-4399-9684-4d4e833d9434]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:04:58 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:04:58.455 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9482cb7a-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 03:04:58 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:04:58.456 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  6 03:04:58 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:04:58.456 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9482cb7a-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 03:04:58 np0005548731 kernel: tap9482cb7a-b0: entered promiscuous mode
Dec  6 03:04:58 np0005548731 NetworkManager[49182]: <info>  [1765008298.4586] manager: (tap9482cb7a-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/448)
Dec  6 03:04:58 np0005548731 nova_compute[232433]: 2025-12-06 08:04:58.457 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:04:58 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:04:58.461 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap9482cb7a-b0, col_values=(('external_ids', {'iface-id': '7a26b0d3-5a0f-46fe-987b-780e7076a0fa'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 03:04:58 np0005548731 nova_compute[232433]: 2025-12-06 08:04:58.462 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:04:58 np0005548731 ovn_controller[133927]: 2025-12-06T08:04:58Z|00964|binding|INFO|Releasing lport 7a26b0d3-5a0f-46fe-987b-780e7076a0fa from this chassis (sb_readonly=0)
Dec  6 03:04:58 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:04:58.463 143965 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/9482cb7a-b1a1-4dca-80a7-c7782ee5fe71.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/9482cb7a-b1a1-4dca-80a7-c7782ee5fe71.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Dec  6 03:04:58 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:04:58.464 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[d1a8aae8-5d98-47e8-8ca3-cadc6d5c80ee]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:04:58 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:04:58.467 143965 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec  6 03:04:58 np0005548731 ovn_metadata_agent[143960]: global
Dec  6 03:04:58 np0005548731 ovn_metadata_agent[143960]:    log         /dev/log local0 debug
Dec  6 03:04:58 np0005548731 ovn_metadata_agent[143960]:    log-tag     haproxy-metadata-proxy-9482cb7a-b1a1-4dca-80a7-c7782ee5fe71
Dec  6 03:04:58 np0005548731 ovn_metadata_agent[143960]:    user        root
Dec  6 03:04:58 np0005548731 ovn_metadata_agent[143960]:    group       root
Dec  6 03:04:58 np0005548731 ovn_metadata_agent[143960]:    maxconn     1024
Dec  6 03:04:58 np0005548731 ovn_metadata_agent[143960]:    pidfile     /var/lib/neutron/external/pids/9482cb7a-b1a1-4dca-80a7-c7782ee5fe71.pid.haproxy
Dec  6 03:04:58 np0005548731 ovn_metadata_agent[143960]:    daemon
Dec  6 03:04:58 np0005548731 ovn_metadata_agent[143960]: 
Dec  6 03:04:58 np0005548731 ovn_metadata_agent[143960]: defaults
Dec  6 03:04:58 np0005548731 ovn_metadata_agent[143960]:    log global
Dec  6 03:04:58 np0005548731 ovn_metadata_agent[143960]:    mode http
Dec  6 03:04:58 np0005548731 ovn_metadata_agent[143960]:    option httplog
Dec  6 03:04:58 np0005548731 ovn_metadata_agent[143960]:    option dontlognull
Dec  6 03:04:58 np0005548731 ovn_metadata_agent[143960]:    option http-server-close
Dec  6 03:04:58 np0005548731 ovn_metadata_agent[143960]:    option forwardfor
Dec  6 03:04:58 np0005548731 ovn_metadata_agent[143960]:    retries                 3
Dec  6 03:04:58 np0005548731 ovn_metadata_agent[143960]:    timeout http-request    30s
Dec  6 03:04:58 np0005548731 ovn_metadata_agent[143960]:    timeout connect         30s
Dec  6 03:04:58 np0005548731 ovn_metadata_agent[143960]:    timeout client          32s
Dec  6 03:04:58 np0005548731 ovn_metadata_agent[143960]:    timeout server          32s
Dec  6 03:04:58 np0005548731 ovn_metadata_agent[143960]:    timeout http-keep-alive 30s
Dec  6 03:04:58 np0005548731 ovn_metadata_agent[143960]: 
Dec  6 03:04:58 np0005548731 ovn_metadata_agent[143960]: 
Dec  6 03:04:58 np0005548731 ovn_metadata_agent[143960]: listen listener
Dec  6 03:04:58 np0005548731 ovn_metadata_agent[143960]:    bind 169.254.169.254:80
Dec  6 03:04:58 np0005548731 ovn_metadata_agent[143960]:    server metadata /var/lib/neutron/metadata_proxy
Dec  6 03:04:58 np0005548731 ovn_metadata_agent[143960]:    http-request add-header X-OVN-Network-ID 9482cb7a-b1a1-4dca-80a7-c7782ee5fe71
Dec  6 03:04:58 np0005548731 ovn_metadata_agent[143960]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Dec  6 03:04:58 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:04:58.468 143965 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-9482cb7a-b1a1-4dca-80a7-c7782ee5fe71', 'env', 'PROCESS_TAG=haproxy-9482cb7a-b1a1-4dca-80a7-c7782ee5fe71', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/9482cb7a-b1a1-4dca-80a7-c7782ee5fe71.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Dec  6 03:04:58 np0005548731 nova_compute[232433]: 2025-12-06 08:04:58.474 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:04:58 np0005548731 podman[326255]: 2025-12-06 08:04:58.815503199 +0000 UTC m=+0.048486637 container create 8f8e9473206fc201bc4bb85362c4bd75065014c858a44312534e2ebcd70e592c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9482cb7a-b1a1-4dca-80a7-c7782ee5fe71, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec  6 03:04:58 np0005548731 systemd[1]: Started libpod-conmon-8f8e9473206fc201bc4bb85362c4bd75065014c858a44312534e2ebcd70e592c.scope.
Dec  6 03:04:58 np0005548731 podman[326255]: 2025-12-06 08:04:58.787240288 +0000 UTC m=+0.020223736 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec  6 03:04:58 np0005548731 systemd[1]: Started libcrun container.
Dec  6 03:04:58 np0005548731 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/27adc9e7918aa03eb8bb5cc2c21cc5ce9fe53ed5cc9b67d4d83d2c3f04a85d87/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec  6 03:04:58 np0005548731 podman[326255]: 2025-12-06 08:04:58.903950261 +0000 UTC m=+0.136933709 container init 8f8e9473206fc201bc4bb85362c4bd75065014c858a44312534e2ebcd70e592c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9482cb7a-b1a1-4dca-80a7-c7782ee5fe71, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team)
Dec  6 03:04:58 np0005548731 podman[326255]: 2025-12-06 08:04:58.911029434 +0000 UTC m=+0.144012862 container start 8f8e9473206fc201bc4bb85362c4bd75065014c858a44312534e2ebcd70e592c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9482cb7a-b1a1-4dca-80a7-c7782ee5fe71, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251125)
Dec  6 03:04:58 np0005548731 nova_compute[232433]: 2025-12-06 08:04:58.913 232437 DEBUG nova.compute.manager [req-e586cd59-1544-4b08-aeb2-044f7970220f req-e7a14688-04b0-45c9-a8f4-c4264e52de62 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 53cf05eb-a0e9-4817-a441-cbba2c770c4b] Received event network-vif-plugged-43e1b7c7-10b3-4138-8e30-33110bd9f83e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 03:04:58 np0005548731 nova_compute[232433]: 2025-12-06 08:04:58.914 232437 DEBUG oslo_concurrency.lockutils [req-e586cd59-1544-4b08-aeb2-044f7970220f req-e7a14688-04b0-45c9-a8f4-c4264e52de62 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "53cf05eb-a0e9-4817-a441-cbba2c770c4b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:04:58 np0005548731 nova_compute[232433]: 2025-12-06 08:04:58.915 232437 DEBUG oslo_concurrency.lockutils [req-e586cd59-1544-4b08-aeb2-044f7970220f req-e7a14688-04b0-45c9-a8f4-c4264e52de62 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "53cf05eb-a0e9-4817-a441-cbba2c770c4b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:04:58 np0005548731 nova_compute[232433]: 2025-12-06 08:04:58.915 232437 DEBUG oslo_concurrency.lockutils [req-e586cd59-1544-4b08-aeb2-044f7970220f req-e7a14688-04b0-45c9-a8f4-c4264e52de62 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "53cf05eb-a0e9-4817-a441-cbba2c770c4b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:04:58 np0005548731 nova_compute[232433]: 2025-12-06 08:04:58.915 232437 DEBUG nova.compute.manager [req-e586cd59-1544-4b08-aeb2-044f7970220f req-e7a14688-04b0-45c9-a8f4-c4264e52de62 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 53cf05eb-a0e9-4817-a441-cbba2c770c4b] Processing event network-vif-plugged-43e1b7c7-10b3-4138-8e30-33110bd9f83e _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Dec  6 03:04:58 np0005548731 neutron-haproxy-ovnmeta-9482cb7a-b1a1-4dca-80a7-c7782ee5fe71[326270]: [NOTICE]   (326274) : New worker (326276) forked
Dec  6 03:04:58 np0005548731 neutron-haproxy-ovnmeta-9482cb7a-b1a1-4dca-80a7-c7782ee5fe71[326270]: [NOTICE]   (326274) : Loading success.
Dec  6 03:04:59 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:04:59 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:04:59 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:04:59.154 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:04:59 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e413 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:04:59 np0005548731 nova_compute[232433]: 2025-12-06 08:04:59.441 232437 DEBUG nova.virt.driver [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Emitting event <LifecycleEvent: 1765008299.4406426, 53cf05eb-a0e9-4817-a441-cbba2c770c4b => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 03:04:59 np0005548731 nova_compute[232433]: 2025-12-06 08:04:59.442 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 53cf05eb-a0e9-4817-a441-cbba2c770c4b] VM Started (Lifecycle Event)#033[00m
Dec  6 03:04:59 np0005548731 nova_compute[232433]: 2025-12-06 08:04:59.444 232437 DEBUG nova.compute.manager [None req-53cf589f-8131-41a2-8be1-a2744b87c7c5 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 53cf05eb-a0e9-4817-a441-cbba2c770c4b] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec  6 03:04:59 np0005548731 nova_compute[232433]: 2025-12-06 08:04:59.447 232437 DEBUG nova.virt.libvirt.driver [None req-53cf589f-8131-41a2-8be1-a2744b87c7c5 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 53cf05eb-a0e9-4817-a441-cbba2c770c4b] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Dec  6 03:04:59 np0005548731 nova_compute[232433]: 2025-12-06 08:04:59.451 232437 INFO nova.virt.libvirt.driver [-] [instance: 53cf05eb-a0e9-4817-a441-cbba2c770c4b] Instance spawned successfully.#033[00m
Dec  6 03:04:59 np0005548731 nova_compute[232433]: 2025-12-06 08:04:59.451 232437 DEBUG nova.virt.libvirt.driver [None req-53cf589f-8131-41a2-8be1-a2744b87c7c5 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 53cf05eb-a0e9-4817-a441-cbba2c770c4b] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Dec  6 03:04:59 np0005548731 nova_compute[232433]: 2025-12-06 08:04:59.465 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 53cf05eb-a0e9-4817-a441-cbba2c770c4b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 03:04:59 np0005548731 nova_compute[232433]: 2025-12-06 08:04:59.470 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 53cf05eb-a0e9-4817-a441-cbba2c770c4b] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  6 03:04:59 np0005548731 nova_compute[232433]: 2025-12-06 08:04:59.473 232437 DEBUG nova.virt.libvirt.driver [None req-53cf589f-8131-41a2-8be1-a2744b87c7c5 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 53cf05eb-a0e9-4817-a441-cbba2c770c4b] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 03:04:59 np0005548731 nova_compute[232433]: 2025-12-06 08:04:59.474 232437 DEBUG nova.virt.libvirt.driver [None req-53cf589f-8131-41a2-8be1-a2744b87c7c5 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 53cf05eb-a0e9-4817-a441-cbba2c770c4b] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 03:04:59 np0005548731 nova_compute[232433]: 2025-12-06 08:04:59.474 232437 DEBUG nova.virt.libvirt.driver [None req-53cf589f-8131-41a2-8be1-a2744b87c7c5 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 53cf05eb-a0e9-4817-a441-cbba2c770c4b] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 03:04:59 np0005548731 nova_compute[232433]: 2025-12-06 08:04:59.474 232437 DEBUG nova.virt.libvirt.driver [None req-53cf589f-8131-41a2-8be1-a2744b87c7c5 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 53cf05eb-a0e9-4817-a441-cbba2c770c4b] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 03:04:59 np0005548731 nova_compute[232433]: 2025-12-06 08:04:59.475 232437 DEBUG nova.virt.libvirt.driver [None req-53cf589f-8131-41a2-8be1-a2744b87c7c5 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 53cf05eb-a0e9-4817-a441-cbba2c770c4b] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 03:04:59 np0005548731 nova_compute[232433]: 2025-12-06 08:04:59.476 232437 DEBUG nova.virt.libvirt.driver [None req-53cf589f-8131-41a2-8be1-a2744b87c7c5 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 53cf05eb-a0e9-4817-a441-cbba2c770c4b] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 03:04:59 np0005548731 nova_compute[232433]: 2025-12-06 08:04:59.497 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 53cf05eb-a0e9-4817-a441-cbba2c770c4b] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  6 03:04:59 np0005548731 nova_compute[232433]: 2025-12-06 08:04:59.498 232437 DEBUG nova.virt.driver [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Emitting event <LifecycleEvent: 1765008299.4413562, 53cf05eb-a0e9-4817-a441-cbba2c770c4b => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 03:04:59 np0005548731 nova_compute[232433]: 2025-12-06 08:04:59.498 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 53cf05eb-a0e9-4817-a441-cbba2c770c4b] VM Paused (Lifecycle Event)#033[00m
Dec  6 03:04:59 np0005548731 nova_compute[232433]: 2025-12-06 08:04:59.538 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 53cf05eb-a0e9-4817-a441-cbba2c770c4b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 03:04:59 np0005548731 nova_compute[232433]: 2025-12-06 08:04:59.540 232437 DEBUG nova.virt.driver [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Emitting event <LifecycleEvent: 1765008299.4467773, 53cf05eb-a0e9-4817-a441-cbba2c770c4b => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 03:04:59 np0005548731 nova_compute[232433]: 2025-12-06 08:04:59.540 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 53cf05eb-a0e9-4817-a441-cbba2c770c4b] VM Resumed (Lifecycle Event)#033[00m
Dec  6 03:04:59 np0005548731 nova_compute[232433]: 2025-12-06 08:04:59.546 232437 INFO nova.compute.manager [None req-53cf589f-8131-41a2-8be1-a2744b87c7c5 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 53cf05eb-a0e9-4817-a441-cbba2c770c4b] Took 9.24 seconds to spawn the instance on the hypervisor.#033[00m
Dec  6 03:04:59 np0005548731 nova_compute[232433]: 2025-12-06 08:04:59.547 232437 DEBUG nova.compute.manager [None req-53cf589f-8131-41a2-8be1-a2744b87c7c5 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 53cf05eb-a0e9-4817-a441-cbba2c770c4b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 03:04:59 np0005548731 nova_compute[232433]: 2025-12-06 08:04:59.560 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 53cf05eb-a0e9-4817-a441-cbba2c770c4b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 03:04:59 np0005548731 nova_compute[232433]: 2025-12-06 08:04:59.563 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 53cf05eb-a0e9-4817-a441-cbba2c770c4b] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  6 03:04:59 np0005548731 nova_compute[232433]: 2025-12-06 08:04:59.606 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 53cf05eb-a0e9-4817-a441-cbba2c770c4b] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  6 03:04:59 np0005548731 nova_compute[232433]: 2025-12-06 08:04:59.658 232437 INFO nova.compute.manager [None req-53cf589f-8131-41a2-8be1-a2744b87c7c5 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 53cf05eb-a0e9-4817-a441-cbba2c770c4b] Took 11.04 seconds to build instance.#033[00m
Dec  6 03:04:59 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:04:59 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:04:59 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:04:59.805 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:04:59 np0005548731 nova_compute[232433]: 2025-12-06 08:04:59.849 232437 DEBUG oslo_concurrency.lockutils [None req-53cf589f-8131-41a2-8be1-a2744b87c7c5 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Lock "53cf05eb-a0e9-4817-a441-cbba2c770c4b" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.866s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:04:59 np0005548731 nova_compute[232433]: 2025-12-06 08:04:59.915 232437 DEBUG nova.network.neutron [req-74d6ec45-256a-4df9-8755-628a627b565d req-7ea14f0a-ae0a-4074-b61b-cf7e4cb509d3 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 53cf05eb-a0e9-4817-a441-cbba2c770c4b] Updated VIF entry in instance network info cache for port 43e1b7c7-10b3-4138-8e30-33110bd9f83e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  6 03:04:59 np0005548731 nova_compute[232433]: 2025-12-06 08:04:59.916 232437 DEBUG nova.network.neutron [req-74d6ec45-256a-4df9-8755-628a627b565d req-7ea14f0a-ae0a-4074-b61b-cf7e4cb509d3 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 53cf05eb-a0e9-4817-a441-cbba2c770c4b] Updating instance_info_cache with network_info: [{"id": "43e1b7c7-10b3-4138-8e30-33110bd9f83e", "address": "fa:16:3e:87:8e:a2", "network": {"id": "9482cb7a-b1a1-4dca-80a7-c7782ee5fe71", "bridge": "br-int", "label": "tempest-network-smoke--1672901811", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f4735a799c84437b9dd4ea8778ad2fbb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap43e1b7c7-10", "ovs_interfaceid": "43e1b7c7-10b3-4138-8e30-33110bd9f83e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 03:04:59 np0005548731 nova_compute[232433]: 2025-12-06 08:04:59.938 232437 DEBUG oslo_concurrency.lockutils [req-74d6ec45-256a-4df9-8755-628a627b565d req-7ea14f0a-ae0a-4074-b61b-cf7e4cb509d3 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-53cf05eb-a0e9-4817-a441-cbba2c770c4b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 03:05:00 np0005548731 ovn_controller[133927]: 2025-12-06T08:05:00Z|00965|binding|INFO|Claiming lport 42776015-1d70-4b92-9890-d31aaa444637 for this chassis.
Dec  6 03:05:00 np0005548731 ovn_controller[133927]: 2025-12-06T08:05:00Z|00966|binding|INFO|42776015-1d70-4b92-9890-d31aaa444637: Claiming fa:16:3e:ae:d9:3a 10.100.0.9
Dec  6 03:05:00 np0005548731 ovn_controller[133927]: 2025-12-06T08:05:00Z|00967|binding|INFO|Setting lport 42776015-1d70-4b92-9890-d31aaa444637 up in Southbound
Dec  6 03:05:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:05:00.438 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ae:d9:3a 10.100.0.9'], port_security=['fa:16:3e:ae:d9:3a 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '2ce29812-b64c-4801-a37b-68c55429b70c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7ab4eeff-e26c-426f-afdf-0ed982f0262e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'fd8e24e430c64364ace789d88a68ba5f', 'neutron:revision_number': '11', 'neutron:security_group_ids': '2b4f8dab-73cc-4482-899b-78d869a3817d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.241'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=81064306-0257-4e49-ba6e-9b830b20be21, chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], logical_port=42776015-1d70-4b92-9890-d31aaa444637) old=Port_Binding(up=[False], additional_chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 03:05:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:05:00.439 143965 INFO neutron.agent.ovn.metadata.agent [-] Port 42776015-1d70-4b92-9890-d31aaa444637 in datapath 7ab4eeff-e26c-426f-afdf-0ed982f0262e bound to our chassis#033[00m
Dec  6 03:05:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:05:00.441 143965 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 7ab4eeff-e26c-426f-afdf-0ed982f0262e#033[00m
Dec  6 03:05:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:05:00.452 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[ae19b0b6-3397-47bd-9c58-4b1147524299]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:05:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:05:00.452 143965 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap7ab4eeff-e1 in ovnmeta-7ab4eeff-e26c-426f-afdf-0ed982f0262e namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Dec  6 03:05:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:05:00.455 236668 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap7ab4eeff-e0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Dec  6 03:05:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:05:00.455 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[c5c32389-bd3a-4da9-817e-de0f8b69e625]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:05:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:05:00.456 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[d810376a-d668-4421-b2b0-e5e4457c4332]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:05:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:05:00.468 144079 DEBUG oslo.privsep.daemon [-] privsep: reply[a3c2d82e-3b90-4c48-b9a6-adfa32042f46]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:05:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:05:00.480 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[dacec3b3-be26-4d62-9735-c3adaf6cac5b]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:05:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:05:00.517 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[8fad805f-79a1-4bb7-8e91-0d99558536e3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:05:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:05:00.523 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[dbd5d14f-c3d3-4835-a340-79d96f5e1663]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:05:00 np0005548731 NetworkManager[49182]: <info>  [1765008300.5244] manager: (tap7ab4eeff-e0): new Veth device (/org/freedesktop/NetworkManager/Devices/449)
Dec  6 03:05:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:05:00.558 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[01fb338f-62d0-4977-8ca9-071285b1e4fc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:05:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:05:00.561 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[b12e29c0-1171-45e1-b7d9-dbc35195b913]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:05:00 np0005548731 systemd-udevd[326335]: Network interface NamePolicy= disabled on kernel command line.
Dec  6 03:05:00 np0005548731 NetworkManager[49182]: <info>  [1765008300.5916] device (tap7ab4eeff-e0): carrier: link connected
Dec  6 03:05:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:05:00.598 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[a28b1eb8-7724-4dff-9e1f-8364f78156b6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:05:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:05:00.616 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[82b29987-b827-4940-a75d-bd995529ac81]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7ab4eeff-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:89:5d:da'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 292], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 856259, 'reachable_time': 39271, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 326354, 'error': None, 'target': 'ovnmeta-7ab4eeff-e26c-426f-afdf-0ed982f0262e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:05:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:05:00.633 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[d2484a1a-7728-41b7-a992-94dc3cb83049]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe89:5dda'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 856259, 'tstamp': 856259}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 326355, 'error': None, 'target': 'ovnmeta-7ab4eeff-e26c-426f-afdf-0ed982f0262e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:05:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:05:00.648 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[23cfeb86-7b90-4800-9154-294932dc6c56]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7ab4eeff-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:89:5d:da'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 292], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 856259, 'reachable_time': 39271, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 152, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 152, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 326356, 'error': None, 'target': 'ovnmeta-7ab4eeff-e26c-426f-afdf-0ed982f0262e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:05:00 np0005548731 nova_compute[232433]: 2025-12-06 08:05:00.671 232437 INFO nova.compute.manager [None req-214681bc-aa39-4a6d-a65c-40bda5711e05 1bdbfd9a9c034d4baf0368c23697a002 0280d2f586294ccf97547f8bc41590f8 - - default default] [instance: 2ce29812-b64c-4801-a37b-68c55429b70c] Post operation of migration started#033[00m
Dec  6 03:05:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:05:00.679 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[1bc34cc1-cbbb-4fe9-ab9d-079e74814150]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:05:00 np0005548731 nova_compute[232433]: 2025-12-06 08:05:00.704 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:05:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:05:00.735 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[3c820db6-8751-4358-a68e-515c64d043b8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:05:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:05:00.736 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7ab4eeff-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 03:05:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:05:00.736 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  6 03:05:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:05:00.737 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7ab4eeff-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 03:05:00 np0005548731 kernel: tap7ab4eeff-e0: entered promiscuous mode
Dec  6 03:05:00 np0005548731 NetworkManager[49182]: <info>  [1765008300.7396] manager: (tap7ab4eeff-e0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/450)
Dec  6 03:05:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:05:00.742 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap7ab4eeff-e0, col_values=(('external_ids', {'iface-id': '341c5909-2539-4d67-99cf-ac110e415c92'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 03:05:00 np0005548731 ovn_controller[133927]: 2025-12-06T08:05:00Z|00968|binding|INFO|Releasing lport 341c5909-2539-4d67-99cf-ac110e415c92 from this chassis (sb_readonly=0)
Dec  6 03:05:00 np0005548731 nova_compute[232433]: 2025-12-06 08:05:00.743 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:05:00 np0005548731 nova_compute[232433]: 2025-12-06 08:05:00.762 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:05:00 np0005548731 nova_compute[232433]: 2025-12-06 08:05:00.763 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:05:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:05:00.764 143965 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/7ab4eeff-e26c-426f-afdf-0ed982f0262e.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/7ab4eeff-e26c-426f-afdf-0ed982f0262e.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Dec  6 03:05:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:05:00.765 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[65d8e652-f947-4970-bc3e-f19e71ce35e0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:05:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:05:00.765 143965 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec  6 03:05:00 np0005548731 ovn_metadata_agent[143960]: global
Dec  6 03:05:00 np0005548731 ovn_metadata_agent[143960]:    log         /dev/log local0 debug
Dec  6 03:05:00 np0005548731 ovn_metadata_agent[143960]:    log-tag     haproxy-metadata-proxy-7ab4eeff-e26c-426f-afdf-0ed982f0262e
Dec  6 03:05:00 np0005548731 ovn_metadata_agent[143960]:    user        root
Dec  6 03:05:00 np0005548731 ovn_metadata_agent[143960]:    group       root
Dec  6 03:05:00 np0005548731 ovn_metadata_agent[143960]:    maxconn     1024
Dec  6 03:05:00 np0005548731 ovn_metadata_agent[143960]:    pidfile     /var/lib/neutron/external/pids/7ab4eeff-e26c-426f-afdf-0ed982f0262e.pid.haproxy
Dec  6 03:05:00 np0005548731 ovn_metadata_agent[143960]:    daemon
Dec  6 03:05:00 np0005548731 ovn_metadata_agent[143960]: 
Dec  6 03:05:00 np0005548731 ovn_metadata_agent[143960]: defaults
Dec  6 03:05:00 np0005548731 ovn_metadata_agent[143960]:    log global
Dec  6 03:05:00 np0005548731 ovn_metadata_agent[143960]:    mode http
Dec  6 03:05:00 np0005548731 ovn_metadata_agent[143960]:    option httplog
Dec  6 03:05:00 np0005548731 ovn_metadata_agent[143960]:    option dontlognull
Dec  6 03:05:00 np0005548731 ovn_metadata_agent[143960]:    option http-server-close
Dec  6 03:05:00 np0005548731 ovn_metadata_agent[143960]:    option forwardfor
Dec  6 03:05:00 np0005548731 ovn_metadata_agent[143960]:    retries                 3
Dec  6 03:05:00 np0005548731 ovn_metadata_agent[143960]:    timeout http-request    30s
Dec  6 03:05:00 np0005548731 ovn_metadata_agent[143960]:    timeout connect         30s
Dec  6 03:05:00 np0005548731 ovn_metadata_agent[143960]:    timeout client          32s
Dec  6 03:05:00 np0005548731 ovn_metadata_agent[143960]:    timeout server          32s
Dec  6 03:05:00 np0005548731 ovn_metadata_agent[143960]:    timeout http-keep-alive 30s
Dec  6 03:05:00 np0005548731 ovn_metadata_agent[143960]: 
Dec  6 03:05:00 np0005548731 ovn_metadata_agent[143960]: 
Dec  6 03:05:00 np0005548731 ovn_metadata_agent[143960]: listen listener
Dec  6 03:05:00 np0005548731 ovn_metadata_agent[143960]:    bind 169.254.169.254:80
Dec  6 03:05:00 np0005548731 ovn_metadata_agent[143960]:    server metadata /var/lib/neutron/metadata_proxy
Dec  6 03:05:00 np0005548731 ovn_metadata_agent[143960]:    http-request add-header X-OVN-Network-ID 7ab4eeff-e26c-426f-afdf-0ed982f0262e
Dec  6 03:05:00 np0005548731 ovn_metadata_agent[143960]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Dec  6 03:05:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:05:00.766 143965 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-7ab4eeff-e26c-426f-afdf-0ed982f0262e', 'env', 'PROCESS_TAG=haproxy-7ab4eeff-e26c-426f-afdf-0ed982f0262e', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/7ab4eeff-e26c-426f-afdf-0ed982f0262e.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Dec  6 03:05:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:05:00.909 143965 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:05:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:05:00.910 143965 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:05:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:05:00.911 143965 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:05:01 np0005548731 podman[326388]: 2025-12-06 08:05:01.109573437 +0000 UTC m=+0.044947349 container create 9247dd7d98e5b063595e6fd1cc09632e6fe1bf54f8c7a0ea37a6389c50f18e31 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7ab4eeff-e26c-426f-afdf-0ed982f0262e, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec  6 03:05:01 np0005548731 systemd[1]: Started libpod-conmon-9247dd7d98e5b063595e6fd1cc09632e6fe1bf54f8c7a0ea37a6389c50f18e31.scope.
Dec  6 03:05:01 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:05:01 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:05:01 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:05:01.157 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:05:01 np0005548731 systemd[1]: Started libcrun container.
Dec  6 03:05:01 np0005548731 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c272f9041c4e6a92c2eb0d3322f02443c46e07257d663312ba0c7b9cd45e30c/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec  6 03:05:01 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:05:01.170 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=9f96b960-b4f2-40bd-ae99-08121f5e8b78, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '82'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 03:05:01 np0005548731 podman[326388]: 2025-12-06 08:05:01.181083187 +0000 UTC m=+0.116457109 container init 9247dd7d98e5b063595e6fd1cc09632e6fe1bf54f8c7a0ea37a6389c50f18e31 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7ab4eeff-e26c-426f-afdf-0ed982f0262e, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Dec  6 03:05:01 np0005548731 podman[326388]: 2025-12-06 08:05:01.085587101 +0000 UTC m=+0.020961033 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec  6 03:05:01 np0005548731 podman[326388]: 2025-12-06 08:05:01.186532379 +0000 UTC m=+0.121906291 container start 9247dd7d98e5b063595e6fd1cc09632e6fe1bf54f8c7a0ea37a6389c50f18e31 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7ab4eeff-e26c-426f-afdf-0ed982f0262e, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3)
Dec  6 03:05:01 np0005548731 neutron-haproxy-ovnmeta-7ab4eeff-e26c-426f-afdf-0ed982f0262e[326403]: [NOTICE]   (326407) : New worker (326409) forked
Dec  6 03:05:01 np0005548731 neutron-haproxy-ovnmeta-7ab4eeff-e26c-426f-afdf-0ed982f0262e[326403]: [NOTICE]   (326407) : Loading success.
Dec  6 03:05:01 np0005548731 nova_compute[232433]: 2025-12-06 08:05:01.234 232437 DEBUG nova.compute.manager [req-b41bb6dd-d27a-48c0-88ea-a0e70effc023 req-25cbe009-44d0-47d3-ae63-f963432f5e55 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 53cf05eb-a0e9-4817-a441-cbba2c770c4b] Received event network-vif-plugged-43e1b7c7-10b3-4138-8e30-33110bd9f83e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 03:05:01 np0005548731 nova_compute[232433]: 2025-12-06 08:05:01.234 232437 DEBUG oslo_concurrency.lockutils [req-b41bb6dd-d27a-48c0-88ea-a0e70effc023 req-25cbe009-44d0-47d3-ae63-f963432f5e55 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "53cf05eb-a0e9-4817-a441-cbba2c770c4b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:05:01 np0005548731 nova_compute[232433]: 2025-12-06 08:05:01.234 232437 DEBUG oslo_concurrency.lockutils [req-b41bb6dd-d27a-48c0-88ea-a0e70effc023 req-25cbe009-44d0-47d3-ae63-f963432f5e55 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "53cf05eb-a0e9-4817-a441-cbba2c770c4b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:05:01 np0005548731 nova_compute[232433]: 2025-12-06 08:05:01.235 232437 DEBUG oslo_concurrency.lockutils [req-b41bb6dd-d27a-48c0-88ea-a0e70effc023 req-25cbe009-44d0-47d3-ae63-f963432f5e55 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "53cf05eb-a0e9-4817-a441-cbba2c770c4b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:05:01 np0005548731 nova_compute[232433]: 2025-12-06 08:05:01.235 232437 DEBUG nova.compute.manager [req-b41bb6dd-d27a-48c0-88ea-a0e70effc023 req-25cbe009-44d0-47d3-ae63-f963432f5e55 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 53cf05eb-a0e9-4817-a441-cbba2c770c4b] No waiting events found dispatching network-vif-plugged-43e1b7c7-10b3-4138-8e30-33110bd9f83e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 03:05:01 np0005548731 nova_compute[232433]: 2025-12-06 08:05:01.235 232437 WARNING nova.compute.manager [req-b41bb6dd-d27a-48c0-88ea-a0e70effc023 req-25cbe009-44d0-47d3-ae63-f963432f5e55 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 53cf05eb-a0e9-4817-a441-cbba2c770c4b] Received unexpected event network-vif-plugged-43e1b7c7-10b3-4138-8e30-33110bd9f83e for instance with vm_state active and task_state None.#033[00m
Dec  6 03:05:01 np0005548731 nova_compute[232433]: 2025-12-06 08:05:01.680 232437 DEBUG oslo_concurrency.lockutils [None req-214681bc-aa39-4a6d-a65c-40bda5711e05 1bdbfd9a9c034d4baf0368c23697a002 0280d2f586294ccf97547f8bc41590f8 - - default default] Acquiring lock "refresh_cache-2ce29812-b64c-4801-a37b-68c55429b70c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 03:05:01 np0005548731 nova_compute[232433]: 2025-12-06 08:05:01.681 232437 DEBUG oslo_concurrency.lockutils [None req-214681bc-aa39-4a6d-a65c-40bda5711e05 1bdbfd9a9c034d4baf0368c23697a002 0280d2f586294ccf97547f8bc41590f8 - - default default] Acquired lock "refresh_cache-2ce29812-b64c-4801-a37b-68c55429b70c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 03:05:01 np0005548731 nova_compute[232433]: 2025-12-06 08:05:01.681 232437 DEBUG nova.network.neutron [None req-214681bc-aa39-4a6d-a65c-40bda5711e05 1bdbfd9a9c034d4baf0368c23697a002 0280d2f586294ccf97547f8bc41590f8 - - default default] [instance: 2ce29812-b64c-4801-a37b-68c55429b70c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec  6 03:05:01 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:05:01 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:05:01 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:05:01.808 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:05:02 np0005548731 nova_compute[232433]: 2025-12-06 08:05:02.416 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:05:03 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:05:03 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:05:03 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:05:03.159 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:05:03 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:05:03 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:05:03 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:05:03.810 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:05:04 np0005548731 nova_compute[232433]: 2025-12-06 08:05:04.052 232437 DEBUG nova.network.neutron [None req-214681bc-aa39-4a6d-a65c-40bda5711e05 1bdbfd9a9c034d4baf0368c23697a002 0280d2f586294ccf97547f8bc41590f8 - - default default] [instance: 2ce29812-b64c-4801-a37b-68c55429b70c] Updating instance_info_cache with network_info: [{"id": "42776015-1d70-4b92-9890-d31aaa444637", "address": "fa:16:3e:ae:d9:3a", "network": {"id": "7ab4eeff-e26c-426f-afdf-0ed982f0262e", "bridge": "br-int", "label": "tempest-network-smoke--49053478", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.241", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fd8e24e430c64364ace789d88a68ba5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap42776015-1d", "ovs_interfaceid": "42776015-1d70-4b92-9890-d31aaa444637", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 03:05:04 np0005548731 nova_compute[232433]: 2025-12-06 08:05:04.087 232437 DEBUG oslo_concurrency.lockutils [None req-214681bc-aa39-4a6d-a65c-40bda5711e05 1bdbfd9a9c034d4baf0368c23697a002 0280d2f586294ccf97547f8bc41590f8 - - default default] Releasing lock "refresh_cache-2ce29812-b64c-4801-a37b-68c55429b70c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 03:05:04 np0005548731 nova_compute[232433]: 2025-12-06 08:05:04.115 232437 DEBUG oslo_concurrency.lockutils [None req-214681bc-aa39-4a6d-a65c-40bda5711e05 1bdbfd9a9c034d4baf0368c23697a002 0280d2f586294ccf97547f8bc41590f8 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.allocate_pci_devices_for_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:05:04 np0005548731 nova_compute[232433]: 2025-12-06 08:05:04.116 232437 DEBUG oslo_concurrency.lockutils [None req-214681bc-aa39-4a6d-a65c-40bda5711e05 1bdbfd9a9c034d4baf0368c23697a002 0280d2f586294ccf97547f8bc41590f8 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.allocate_pci_devices_for_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:05:04 np0005548731 nova_compute[232433]: 2025-12-06 08:05:04.116 232437 DEBUG oslo_concurrency.lockutils [None req-214681bc-aa39-4a6d-a65c-40bda5711e05 1bdbfd9a9c034d4baf0368c23697a002 0280d2f586294ccf97547f8bc41590f8 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.allocate_pci_devices_for_instance" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:05:04 np0005548731 nova_compute[232433]: 2025-12-06 08:05:04.120 232437 INFO nova.virt.libvirt.driver [None req-214681bc-aa39-4a6d-a65c-40bda5711e05 1bdbfd9a9c034d4baf0368c23697a002 0280d2f586294ccf97547f8bc41590f8 - - default default] [instance: 2ce29812-b64c-4801-a37b-68c55429b70c] Sending announce-self command to QEMU monitor. Attempt 1 of 3#033[00m
Dec  6 03:05:04 np0005548731 virtqemud[232080]: Domain id=96 name='instance-000000bb' uuid=2ce29812-b64c-4801-a37b-68c55429b70c is tainted: custom-monitor
Dec  6 03:05:04 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e413 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:05:05 np0005548731 nova_compute[232433]: 2025-12-06 08:05:05.128 232437 INFO nova.virt.libvirt.driver [None req-214681bc-aa39-4a6d-a65c-40bda5711e05 1bdbfd9a9c034d4baf0368c23697a002 0280d2f586294ccf97547f8bc41590f8 - - default default] [instance: 2ce29812-b64c-4801-a37b-68c55429b70c] Sending announce-self command to QEMU monitor. Attempt 2 of 3#033[00m
Dec  6 03:05:05 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:05:05 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:05:05 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:05:05.165 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:05:05 np0005548731 nova_compute[232433]: 2025-12-06 08:05:05.705 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:05:05 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:05:05 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 03:05:05 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:05:05.812 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 03:05:06 np0005548731 nova_compute[232433]: 2025-12-06 08:05:06.134 232437 INFO nova.virt.libvirt.driver [None req-214681bc-aa39-4a6d-a65c-40bda5711e05 1bdbfd9a9c034d4baf0368c23697a002 0280d2f586294ccf97547f8bc41590f8 - - default default] [instance: 2ce29812-b64c-4801-a37b-68c55429b70c] Sending announce-self command to QEMU monitor. Attempt 3 of 3#033[00m
Dec  6 03:05:06 np0005548731 nova_compute[232433]: 2025-12-06 08:05:06.140 232437 DEBUG nova.compute.manager [None req-214681bc-aa39-4a6d-a65c-40bda5711e05 1bdbfd9a9c034d4baf0368c23697a002 0280d2f586294ccf97547f8bc41590f8 - - default default] [instance: 2ce29812-b64c-4801-a37b-68c55429b70c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 03:05:06 np0005548731 nova_compute[232433]: 2025-12-06 08:05:06.271 232437 DEBUG nova.compute.manager [req-ffb9233f-4eca-459a-b560-9db99ee538c8 req-8bd1cdba-a212-47d2-b621-33b69d21c5d0 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 53cf05eb-a0e9-4817-a441-cbba2c770c4b] Received event network-changed-43e1b7c7-10b3-4138-8e30-33110bd9f83e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 03:05:06 np0005548731 nova_compute[232433]: 2025-12-06 08:05:06.272 232437 DEBUG nova.compute.manager [req-ffb9233f-4eca-459a-b560-9db99ee538c8 req-8bd1cdba-a212-47d2-b621-33b69d21c5d0 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 53cf05eb-a0e9-4817-a441-cbba2c770c4b] Refreshing instance network info cache due to event network-changed-43e1b7c7-10b3-4138-8e30-33110bd9f83e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  6 03:05:06 np0005548731 nova_compute[232433]: 2025-12-06 08:05:06.272 232437 DEBUG oslo_concurrency.lockutils [req-ffb9233f-4eca-459a-b560-9db99ee538c8 req-8bd1cdba-a212-47d2-b621-33b69d21c5d0 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-53cf05eb-a0e9-4817-a441-cbba2c770c4b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 03:05:06 np0005548731 nova_compute[232433]: 2025-12-06 08:05:06.273 232437 DEBUG oslo_concurrency.lockutils [req-ffb9233f-4eca-459a-b560-9db99ee538c8 req-8bd1cdba-a212-47d2-b621-33b69d21c5d0 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-53cf05eb-a0e9-4817-a441-cbba2c770c4b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 03:05:06 np0005548731 nova_compute[232433]: 2025-12-06 08:05:06.273 232437 DEBUG nova.network.neutron [req-ffb9233f-4eca-459a-b560-9db99ee538c8 req-8bd1cdba-a212-47d2-b621-33b69d21c5d0 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 53cf05eb-a0e9-4817-a441-cbba2c770c4b] Refreshing network info cache for port 43e1b7c7-10b3-4138-8e30-33110bd9f83e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  6 03:05:06 np0005548731 nova_compute[232433]: 2025-12-06 08:05:06.289 232437 DEBUG nova.objects.instance [None req-214681bc-aa39-4a6d-a65c-40bda5711e05 1bdbfd9a9c034d4baf0368c23697a002 0280d2f586294ccf97547f8bc41590f8 - - default default] [instance: 2ce29812-b64c-4801-a37b-68c55429b70c] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032#033[00m
Dec  6 03:05:07 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:05:07 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:05:07 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:05:07.167 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:05:07 np0005548731 nova_compute[232433]: 2025-12-06 08:05:07.418 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:05:07 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:05:07 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:05:07 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:05:07.813 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:05:07 np0005548731 nova_compute[232433]: 2025-12-06 08:05:07.949 232437 DEBUG nova.network.neutron [req-ffb9233f-4eca-459a-b560-9db99ee538c8 req-8bd1cdba-a212-47d2-b621-33b69d21c5d0 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 53cf05eb-a0e9-4817-a441-cbba2c770c4b] Updated VIF entry in instance network info cache for port 43e1b7c7-10b3-4138-8e30-33110bd9f83e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  6 03:05:07 np0005548731 nova_compute[232433]: 2025-12-06 08:05:07.950 232437 DEBUG nova.network.neutron [req-ffb9233f-4eca-459a-b560-9db99ee538c8 req-8bd1cdba-a212-47d2-b621-33b69d21c5d0 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 53cf05eb-a0e9-4817-a441-cbba2c770c4b] Updating instance_info_cache with network_info: [{"id": "43e1b7c7-10b3-4138-8e30-33110bd9f83e", "address": "fa:16:3e:87:8e:a2", "network": {"id": "9482cb7a-b1a1-4dca-80a7-c7782ee5fe71", "bridge": "br-int", "label": "tempest-network-smoke--1672901811", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.175", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f4735a799c84437b9dd4ea8778ad2fbb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap43e1b7c7-10", "ovs_interfaceid": "43e1b7c7-10b3-4138-8e30-33110bd9f83e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 03:05:08 np0005548731 nova_compute[232433]: 2025-12-06 08:05:08.676 232437 DEBUG oslo_concurrency.lockutils [req-ffb9233f-4eca-459a-b560-9db99ee538c8 req-8bd1cdba-a212-47d2-b621-33b69d21c5d0 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-53cf05eb-a0e9-4817-a441-cbba2c770c4b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 03:05:09 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:05:09 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:05:09 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:05:09.170 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:05:09 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e413 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:05:09 np0005548731 nova_compute[232433]: 2025-12-06 08:05:09.360 232437 INFO nova.compute.manager [None req-3c69ae2c-fd24-40be-86fa-940e65dbbc6d 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: 2ce29812-b64c-4801-a37b-68c55429b70c] Get console output#033[00m
Dec  6 03:05:09 np0005548731 nova_compute[232433]: 2025-12-06 08:05:09.365 261230 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Dec  6 03:05:09 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:05:09 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:05:09 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:05:09.815 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:05:10 np0005548731 nova_compute[232433]: 2025-12-06 08:05:10.707 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:05:10 np0005548731 nova_compute[232433]: 2025-12-06 08:05:10.810 232437 DEBUG nova.compute.manager [req-1428400a-a3ff-4b48-9c23-e782542d4771 req-a93caf75-e9d4-437a-b3fe-af4dc4655de8 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 2ce29812-b64c-4801-a37b-68c55429b70c] Received event network-changed-42776015-1d70-4b92-9890-d31aaa444637 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 03:05:10 np0005548731 nova_compute[232433]: 2025-12-06 08:05:10.810 232437 DEBUG nova.compute.manager [req-1428400a-a3ff-4b48-9c23-e782542d4771 req-a93caf75-e9d4-437a-b3fe-af4dc4655de8 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 2ce29812-b64c-4801-a37b-68c55429b70c] Refreshing instance network info cache due to event network-changed-42776015-1d70-4b92-9890-d31aaa444637. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  6 03:05:10 np0005548731 nova_compute[232433]: 2025-12-06 08:05:10.810 232437 DEBUG oslo_concurrency.lockutils [req-1428400a-a3ff-4b48-9c23-e782542d4771 req-a93caf75-e9d4-437a-b3fe-af4dc4655de8 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-2ce29812-b64c-4801-a37b-68c55429b70c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 03:05:10 np0005548731 nova_compute[232433]: 2025-12-06 08:05:10.811 232437 DEBUG oslo_concurrency.lockutils [req-1428400a-a3ff-4b48-9c23-e782542d4771 req-a93caf75-e9d4-437a-b3fe-af4dc4655de8 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-2ce29812-b64c-4801-a37b-68c55429b70c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 03:05:10 np0005548731 nova_compute[232433]: 2025-12-06 08:05:10.811 232437 DEBUG nova.network.neutron [req-1428400a-a3ff-4b48-9c23-e782542d4771 req-a93caf75-e9d4-437a-b3fe-af4dc4655de8 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 2ce29812-b64c-4801-a37b-68c55429b70c] Refreshing network info cache for port 42776015-1d70-4b92-9890-d31aaa444637 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  6 03:05:10 np0005548731 nova_compute[232433]: 2025-12-06 08:05:10.878 232437 DEBUG oslo_concurrency.lockutils [None req-436b2da4-f8f4-47a2-9961-cad789c98f61 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Acquiring lock "2ce29812-b64c-4801-a37b-68c55429b70c" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:05:10 np0005548731 nova_compute[232433]: 2025-12-06 08:05:10.878 232437 DEBUG oslo_concurrency.lockutils [None req-436b2da4-f8f4-47a2-9961-cad789c98f61 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Lock "2ce29812-b64c-4801-a37b-68c55429b70c" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:05:10 np0005548731 nova_compute[232433]: 2025-12-06 08:05:10.878 232437 DEBUG oslo_concurrency.lockutils [None req-436b2da4-f8f4-47a2-9961-cad789c98f61 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Acquiring lock "2ce29812-b64c-4801-a37b-68c55429b70c-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:05:10 np0005548731 nova_compute[232433]: 2025-12-06 08:05:10.879 232437 DEBUG oslo_concurrency.lockutils [None req-436b2da4-f8f4-47a2-9961-cad789c98f61 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Lock "2ce29812-b64c-4801-a37b-68c55429b70c-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:05:10 np0005548731 nova_compute[232433]: 2025-12-06 08:05:10.879 232437 DEBUG oslo_concurrency.lockutils [None req-436b2da4-f8f4-47a2-9961-cad789c98f61 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Lock "2ce29812-b64c-4801-a37b-68c55429b70c-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:05:10 np0005548731 nova_compute[232433]: 2025-12-06 08:05:10.880 232437 INFO nova.compute.manager [None req-436b2da4-f8f4-47a2-9961-cad789c98f61 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: 2ce29812-b64c-4801-a37b-68c55429b70c] Terminating instance#033[00m
Dec  6 03:05:10 np0005548731 nova_compute[232433]: 2025-12-06 08:05:10.881 232437 DEBUG nova.compute.manager [None req-436b2da4-f8f4-47a2-9961-cad789c98f61 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: 2ce29812-b64c-4801-a37b-68c55429b70c] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec  6 03:05:10 np0005548731 kernel: tap42776015-1d (unregistering): left promiscuous mode
Dec  6 03:05:10 np0005548731 NetworkManager[49182]: <info>  [1765008310.9347] device (tap42776015-1d): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec  6 03:05:10 np0005548731 nova_compute[232433]: 2025-12-06 08:05:10.943 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:05:10 np0005548731 ovn_controller[133927]: 2025-12-06T08:05:10Z|00969|binding|INFO|Releasing lport 42776015-1d70-4b92-9890-d31aaa444637 from this chassis (sb_readonly=0)
Dec  6 03:05:10 np0005548731 ovn_controller[133927]: 2025-12-06T08:05:10Z|00970|binding|INFO|Setting lport 42776015-1d70-4b92-9890-d31aaa444637 down in Southbound
Dec  6 03:05:10 np0005548731 ovn_controller[133927]: 2025-12-06T08:05:10Z|00971|binding|INFO|Removing iface tap42776015-1d ovn-installed in OVS
Dec  6 03:05:10 np0005548731 nova_compute[232433]: 2025-12-06 08:05:10.945 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:05:10 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:05:10.949 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ae:d9:3a 10.100.0.9'], port_security=['fa:16:3e:ae:d9:3a 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '2ce29812-b64c-4801-a37b-68c55429b70c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7ab4eeff-e26c-426f-afdf-0ed982f0262e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'fd8e24e430c64364ace789d88a68ba5f', 'neutron:revision_number': '13', 'neutron:security_group_ids': '2b4f8dab-73cc-4482-899b-78d869a3817d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=81064306-0257-4e49-ba6e-9b830b20be21, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], logical_port=42776015-1d70-4b92-9890-d31aaa444637) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 03:05:10 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:05:10.950 143965 INFO neutron.agent.ovn.metadata.agent [-] Port 42776015-1d70-4b92-9890-d31aaa444637 in datapath 7ab4eeff-e26c-426f-afdf-0ed982f0262e unbound from our chassis#033[00m
Dec  6 03:05:10 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:05:10.952 143965 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 7ab4eeff-e26c-426f-afdf-0ed982f0262e, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec  6 03:05:10 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:05:10.953 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[7944fe25-b689-4fe6-b4d0-b8fcd6890d8b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:05:10 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:05:10.954 143965 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-7ab4eeff-e26c-426f-afdf-0ed982f0262e namespace which is not needed anymore#033[00m
Dec  6 03:05:10 np0005548731 nova_compute[232433]: 2025-12-06 08:05:10.956 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:05:11 np0005548731 systemd[1]: machine-qemu\x2d96\x2dinstance\x2d000000bb.scope: Deactivated successfully.
Dec  6 03:05:11 np0005548731 systemd[1]: machine-qemu\x2d96\x2dinstance\x2d000000bb.scope: Consumed 1.410s CPU time.
Dec  6 03:05:11 np0005548731 systemd-machined[195355]: Machine qemu-96-instance-000000bb terminated.
Dec  6 03:05:11 np0005548731 nova_compute[232433]: 2025-12-06 08:05:11.119 232437 INFO nova.virt.libvirt.driver [-] [instance: 2ce29812-b64c-4801-a37b-68c55429b70c] Instance destroyed successfully.#033[00m
Dec  6 03:05:11 np0005548731 nova_compute[232433]: 2025-12-06 08:05:11.120 232437 DEBUG nova.objects.instance [None req-436b2da4-f8f4-47a2-9961-cad789c98f61 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Lazy-loading 'resources' on Instance uuid 2ce29812-b64c-4801-a37b-68c55429b70c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 03:05:11 np0005548731 nova_compute[232433]: 2025-12-06 08:05:11.135 232437 DEBUG nova.virt.libvirt.vif [None req-436b2da4-f8f4-47a2-9961-cad789c98f61 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-12-06T08:04:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1543702869',display_name='tempest-TestNetworkAdvancedServerOps-server-1543702869',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1543702869',id=187,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBInGUhg7nhCoc2vx+Ix7gLWIjZpxEjyqveZeyfMP/1wxX8FSrtE3tQA2JbvpPn3Vva7vIRTnPCXD+7DHbX9YJlXkUS+5x8l7M/agABi3TQb6p6z9n1aAcCS+pz1vzZhCpQ==',key_name='tempest-TestNetworkAdvancedServerOps-1853257958',keypairs=<?>,launch_index=0,launched_at=2025-12-06T08:04:23Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='fd8e24e430c64364ace789d88a68ba5f',ramdisk_id='',reservation_id='r-q1zcoiif',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='1',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-1171852383',owner_user_name='tempest-TestNetworkAdvancedServerOps-1171852383-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T08:05:06Z,user_data=None,user_id='2ed2d17026504d70b893923a85cece4d',uuid=2ce29812-b64c-4801-a37b-68c55429b70c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "42776015-1d70-4b92-9890-d31aaa444637", "address": "fa:16:3e:ae:d9:3a", "network": {"id": "7ab4eeff-e26c-426f-afdf-0ed982f0262e", "bridge": "br-int", "label": "tempest-network-smoke--49053478", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": 
[{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.241", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fd8e24e430c64364ace789d88a68ba5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap42776015-1d", "ovs_interfaceid": "42776015-1d70-4b92-9890-d31aaa444637", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec  6 03:05:11 np0005548731 nova_compute[232433]: 2025-12-06 08:05:11.135 232437 DEBUG nova.network.os_vif_util [None req-436b2da4-f8f4-47a2-9961-cad789c98f61 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Converting VIF {"id": "42776015-1d70-4b92-9890-d31aaa444637", "address": "fa:16:3e:ae:d9:3a", "network": {"id": "7ab4eeff-e26c-426f-afdf-0ed982f0262e", "bridge": "br-int", "label": "tempest-network-smoke--49053478", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.241", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fd8e24e430c64364ace789d88a68ba5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap42776015-1d", "ovs_interfaceid": "42776015-1d70-4b92-9890-d31aaa444637", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 03:05:11 np0005548731 nova_compute[232433]: 2025-12-06 08:05:11.136 232437 DEBUG nova.network.os_vif_util [None req-436b2da4-f8f4-47a2-9961-cad789c98f61 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:ae:d9:3a,bridge_name='br-int',has_traffic_filtering=True,id=42776015-1d70-4b92-9890-d31aaa444637,network=Network(7ab4eeff-e26c-426f-afdf-0ed982f0262e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap42776015-1d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 03:05:11 np0005548731 nova_compute[232433]: 2025-12-06 08:05:11.136 232437 DEBUG os_vif [None req-436b2da4-f8f4-47a2-9961-cad789c98f61 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:ae:d9:3a,bridge_name='br-int',has_traffic_filtering=True,id=42776015-1d70-4b92-9890-d31aaa444637,network=Network(7ab4eeff-e26c-426f-afdf-0ed982f0262e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap42776015-1d') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec  6 03:05:11 np0005548731 nova_compute[232433]: 2025-12-06 08:05:11.139 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:05:11 np0005548731 nova_compute[232433]: 2025-12-06 08:05:11.140 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap42776015-1d, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 03:05:11 np0005548731 neutron-haproxy-ovnmeta-7ab4eeff-e26c-426f-afdf-0ed982f0262e[326403]: [NOTICE]   (326407) : haproxy version is 2.8.14-c23fe91
Dec  6 03:05:11 np0005548731 neutron-haproxy-ovnmeta-7ab4eeff-e26c-426f-afdf-0ed982f0262e[326403]: [NOTICE]   (326407) : path to executable is /usr/sbin/haproxy
Dec  6 03:05:11 np0005548731 neutron-haproxy-ovnmeta-7ab4eeff-e26c-426f-afdf-0ed982f0262e[326403]: [WARNING]  (326407) : Exiting Master process...
Dec  6 03:05:11 np0005548731 neutron-haproxy-ovnmeta-7ab4eeff-e26c-426f-afdf-0ed982f0262e[326403]: [ALERT]    (326407) : Current worker (326409) exited with code 143 (Terminated)
Dec  6 03:05:11 np0005548731 neutron-haproxy-ovnmeta-7ab4eeff-e26c-426f-afdf-0ed982f0262e[326403]: [WARNING]  (326407) : All workers exited. Exiting... (0)
Dec  6 03:05:11 np0005548731 nova_compute[232433]: 2025-12-06 08:05:11.144 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:05:11 np0005548731 nova_compute[232433]: 2025-12-06 08:05:11.146 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  6 03:05:11 np0005548731 systemd[1]: libpod-9247dd7d98e5b063595e6fd1cc09632e6fe1bf54f8c7a0ea37a6389c50f18e31.scope: Deactivated successfully.
Dec  6 03:05:11 np0005548731 nova_compute[232433]: 2025-12-06 08:05:11.150 232437 INFO os_vif [None req-436b2da4-f8f4-47a2-9961-cad789c98f61 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:ae:d9:3a,bridge_name='br-int',has_traffic_filtering=True,id=42776015-1d70-4b92-9890-d31aaa444637,network=Network(7ab4eeff-e26c-426f-afdf-0ed982f0262e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap42776015-1d')#033[00m
Dec  6 03:05:11 np0005548731 podman[326496]: 2025-12-06 08:05:11.152089993 +0000 UTC m=+0.107159771 container died 9247dd7d98e5b063595e6fd1cc09632e6fe1bf54f8c7a0ea37a6389c50f18e31 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7ab4eeff-e26c-426f-afdf-0ed982f0262e, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec  6 03:05:11 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:05:11 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:05:11 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:05:11.172 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:05:11 np0005548731 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-9247dd7d98e5b063595e6fd1cc09632e6fe1bf54f8c7a0ea37a6389c50f18e31-userdata-shm.mount: Deactivated successfully.
Dec  6 03:05:11 np0005548731 nova_compute[232433]: 2025-12-06 08:05:11.180 232437 DEBUG nova.compute.manager [req-cc459596-2aea-4b41-94f9-e289b7e72efa req-743509ac-1dae-48bf-bba8-a59cd1f9dced 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 2ce29812-b64c-4801-a37b-68c55429b70c] Received event network-vif-unplugged-42776015-1d70-4b92-9890-d31aaa444637 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 03:05:11 np0005548731 nova_compute[232433]: 2025-12-06 08:05:11.181 232437 DEBUG oslo_concurrency.lockutils [req-cc459596-2aea-4b41-94f9-e289b7e72efa req-743509ac-1dae-48bf-bba8-a59cd1f9dced 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "2ce29812-b64c-4801-a37b-68c55429b70c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:05:11 np0005548731 nova_compute[232433]: 2025-12-06 08:05:11.181 232437 DEBUG oslo_concurrency.lockutils [req-cc459596-2aea-4b41-94f9-e289b7e72efa req-743509ac-1dae-48bf-bba8-a59cd1f9dced 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "2ce29812-b64c-4801-a37b-68c55429b70c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:05:11 np0005548731 nova_compute[232433]: 2025-12-06 08:05:11.181 232437 DEBUG oslo_concurrency.lockutils [req-cc459596-2aea-4b41-94f9-e289b7e72efa req-743509ac-1dae-48bf-bba8-a59cd1f9dced 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "2ce29812-b64c-4801-a37b-68c55429b70c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:05:11 np0005548731 nova_compute[232433]: 2025-12-06 08:05:11.182 232437 DEBUG nova.compute.manager [req-cc459596-2aea-4b41-94f9-e289b7e72efa req-743509ac-1dae-48bf-bba8-a59cd1f9dced 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 2ce29812-b64c-4801-a37b-68c55429b70c] No waiting events found dispatching network-vif-unplugged-42776015-1d70-4b92-9890-d31aaa444637 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 03:05:11 np0005548731 nova_compute[232433]: 2025-12-06 08:05:11.182 232437 DEBUG nova.compute.manager [req-cc459596-2aea-4b41-94f9-e289b7e72efa req-743509ac-1dae-48bf-bba8-a59cd1f9dced 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 2ce29812-b64c-4801-a37b-68c55429b70c] Received event network-vif-unplugged-42776015-1d70-4b92-9890-d31aaa444637 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Dec  6 03:05:11 np0005548731 systemd[1]: var-lib-containers-storage-overlay-1c272f9041c4e6a92c2eb0d3322f02443c46e07257d663312ba0c7b9cd45e30c-merged.mount: Deactivated successfully.
Dec  6 03:05:11 np0005548731 podman[326496]: 2025-12-06 08:05:11.196916049 +0000 UTC m=+0.151985817 container cleanup 9247dd7d98e5b063595e6fd1cc09632e6fe1bf54f8c7a0ea37a6389c50f18e31 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7ab4eeff-e26c-426f-afdf-0ed982f0262e, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125)
Dec  6 03:05:11 np0005548731 systemd[1]: libpod-conmon-9247dd7d98e5b063595e6fd1cc09632e6fe1bf54f8c7a0ea37a6389c50f18e31.scope: Deactivated successfully.
Dec  6 03:05:11 np0005548731 podman[326555]: 2025-12-06 08:05:11.259812747 +0000 UTC m=+0.040907681 container remove 9247dd7d98e5b063595e6fd1cc09632e6fe1bf54f8c7a0ea37a6389c50f18e31 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7ab4eeff-e26c-426f-afdf-0ed982f0262e, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3)
Dec  6 03:05:11 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:05:11.269 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[4f781dc5-0051-4d16-b933-e8455d4e2665]: (4, ('Sat Dec  6 08:05:11 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-7ab4eeff-e26c-426f-afdf-0ed982f0262e (9247dd7d98e5b063595e6fd1cc09632e6fe1bf54f8c7a0ea37a6389c50f18e31)\n9247dd7d98e5b063595e6fd1cc09632e6fe1bf54f8c7a0ea37a6389c50f18e31\nSat Dec  6 08:05:11 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-7ab4eeff-e26c-426f-afdf-0ed982f0262e (9247dd7d98e5b063595e6fd1cc09632e6fe1bf54f8c7a0ea37a6389c50f18e31)\n9247dd7d98e5b063595e6fd1cc09632e6fe1bf54f8c7a0ea37a6389c50f18e31\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:05:11 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:05:11.270 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[587123c5-2a2b-40f6-8504-6adaf519f9b0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:05:11 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:05:11.271 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7ab4eeff-e0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 03:05:11 np0005548731 nova_compute[232433]: 2025-12-06 08:05:11.273 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:05:11 np0005548731 kernel: tap7ab4eeff-e0: left promiscuous mode
Dec  6 03:05:11 np0005548731 nova_compute[232433]: 2025-12-06 08:05:11.286 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:05:11 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:05:11.290 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[b7a6f5ea-4ef2-45d4-bf25-16edc4d7f7a4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:05:11 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:05:11.305 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[5d1b814a-4928-4f21-9219-331324ee8a0c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:05:11 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:05:11.306 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[d5d8dacd-8ad5-4531-a370-4873b6b96a70]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:05:11 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:05:11.326 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[e61a864f-ae59-4708-bc99-86e8db6bc277]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 856251, 'reachable_time': 22251, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 326570, 'error': None, 'target': 'ovnmeta-7ab4eeff-e26c-426f-afdf-0ed982f0262e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:05:11 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:05:11.327 144079 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-7ab4eeff-e26c-426f-afdf-0ed982f0262e deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Dec  6 03:05:11 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:05:11.328 144079 DEBUG oslo.privsep.daemon [-] privsep: reply[c23ff55d-c9ce-46b8-905c-15db82654886]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:05:11 np0005548731 systemd[1]: run-netns-ovnmeta\x2d7ab4eeff\x2de26c\x2d426f\x2dafdf\x2d0ed982f0262e.mount: Deactivated successfully.
Dec  6 03:05:11 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:05:11 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:05:11 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:05:11.817 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:05:11 np0005548731 nova_compute[232433]: 2025-12-06 08:05:11.898 232437 INFO nova.virt.libvirt.driver [None req-436b2da4-f8f4-47a2-9961-cad789c98f61 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: 2ce29812-b64c-4801-a37b-68c55429b70c] Deleting instance files /var/lib/nova/instances/2ce29812-b64c-4801-a37b-68c55429b70c_del#033[00m
Dec  6 03:05:11 np0005548731 nova_compute[232433]: 2025-12-06 08:05:11.899 232437 INFO nova.virt.libvirt.driver [None req-436b2da4-f8f4-47a2-9961-cad789c98f61 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: 2ce29812-b64c-4801-a37b-68c55429b70c] Deletion of /var/lib/nova/instances/2ce29812-b64c-4801-a37b-68c55429b70c_del complete#033[00m
Dec  6 03:05:11 np0005548731 nova_compute[232433]: 2025-12-06 08:05:11.949 232437 INFO nova.compute.manager [None req-436b2da4-f8f4-47a2-9961-cad789c98f61 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: 2ce29812-b64c-4801-a37b-68c55429b70c] Took 1.07 seconds to destroy the instance on the hypervisor.#033[00m
Dec  6 03:05:11 np0005548731 nova_compute[232433]: 2025-12-06 08:05:11.949 232437 DEBUG oslo.service.loopingcall [None req-436b2da4-f8f4-47a2-9961-cad789c98f61 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Dec  6 03:05:11 np0005548731 nova_compute[232433]: 2025-12-06 08:05:11.950 232437 DEBUG nova.compute.manager [-] [instance: 2ce29812-b64c-4801-a37b-68c55429b70c] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Dec  6 03:05:11 np0005548731 nova_compute[232433]: 2025-12-06 08:05:11.950 232437 DEBUG nova.network.neutron [-] [instance: 2ce29812-b64c-4801-a37b-68c55429b70c] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Dec  6 03:05:12 np0005548731 nova_compute[232433]: 2025-12-06 08:05:12.708 232437 DEBUG nova.network.neutron [req-1428400a-a3ff-4b48-9c23-e782542d4771 req-a93caf75-e9d4-437a-b3fe-af4dc4655de8 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 2ce29812-b64c-4801-a37b-68c55429b70c] Updated VIF entry in instance network info cache for port 42776015-1d70-4b92-9890-d31aaa444637. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  6 03:05:12 np0005548731 nova_compute[232433]: 2025-12-06 08:05:12.708 232437 DEBUG nova.network.neutron [req-1428400a-a3ff-4b48-9c23-e782542d4771 req-a93caf75-e9d4-437a-b3fe-af4dc4655de8 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 2ce29812-b64c-4801-a37b-68c55429b70c] Updating instance_info_cache with network_info: [{"id": "42776015-1d70-4b92-9890-d31aaa444637", "address": "fa:16:3e:ae:d9:3a", "network": {"id": "7ab4eeff-e26c-426f-afdf-0ed982f0262e", "bridge": "br-int", "label": "tempest-network-smoke--49053478", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fd8e24e430c64364ace789d88a68ba5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap42776015-1d", "ovs_interfaceid": "42776015-1d70-4b92-9890-d31aaa444637", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 03:05:12 np0005548731 nova_compute[232433]: 2025-12-06 08:05:12.730 232437 DEBUG oslo_concurrency.lockutils [req-1428400a-a3ff-4b48-9c23-e782542d4771 req-a93caf75-e9d4-437a-b3fe-af4dc4655de8 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-2ce29812-b64c-4801-a37b-68c55429b70c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 03:05:13 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:05:13 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:05:13 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:05:13.175 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:05:13 np0005548731 nova_compute[232433]: 2025-12-06 08:05:13.290 232437 DEBUG nova.compute.manager [req-cecfa164-8e7b-42bf-ab06-d1f286d65c76 req-78c9033d-7f4a-47b0-823d-f90a4cb0d44e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 2ce29812-b64c-4801-a37b-68c55429b70c] Received event network-vif-plugged-42776015-1d70-4b92-9890-d31aaa444637 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 03:05:13 np0005548731 nova_compute[232433]: 2025-12-06 08:05:13.290 232437 DEBUG oslo_concurrency.lockutils [req-cecfa164-8e7b-42bf-ab06-d1f286d65c76 req-78c9033d-7f4a-47b0-823d-f90a4cb0d44e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "2ce29812-b64c-4801-a37b-68c55429b70c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:05:13 np0005548731 nova_compute[232433]: 2025-12-06 08:05:13.291 232437 DEBUG oslo_concurrency.lockutils [req-cecfa164-8e7b-42bf-ab06-d1f286d65c76 req-78c9033d-7f4a-47b0-823d-f90a4cb0d44e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "2ce29812-b64c-4801-a37b-68c55429b70c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:05:13 np0005548731 nova_compute[232433]: 2025-12-06 08:05:13.291 232437 DEBUG oslo_concurrency.lockutils [req-cecfa164-8e7b-42bf-ab06-d1f286d65c76 req-78c9033d-7f4a-47b0-823d-f90a4cb0d44e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "2ce29812-b64c-4801-a37b-68c55429b70c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:05:13 np0005548731 nova_compute[232433]: 2025-12-06 08:05:13.291 232437 DEBUG nova.compute.manager [req-cecfa164-8e7b-42bf-ab06-d1f286d65c76 req-78c9033d-7f4a-47b0-823d-f90a4cb0d44e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 2ce29812-b64c-4801-a37b-68c55429b70c] No waiting events found dispatching network-vif-plugged-42776015-1d70-4b92-9890-d31aaa444637 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 03:05:13 np0005548731 nova_compute[232433]: 2025-12-06 08:05:13.292 232437 WARNING nova.compute.manager [req-cecfa164-8e7b-42bf-ab06-d1f286d65c76 req-78c9033d-7f4a-47b0-823d-f90a4cb0d44e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 2ce29812-b64c-4801-a37b-68c55429b70c] Received unexpected event network-vif-plugged-42776015-1d70-4b92-9890-d31aaa444637 for instance with vm_state active and task_state deleting.#033[00m
Dec  6 03:05:13 np0005548731 nova_compute[232433]: 2025-12-06 08:05:13.801 232437 DEBUG nova.network.neutron [-] [instance: 2ce29812-b64c-4801-a37b-68c55429b70c] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 03:05:13 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:05:13 np0005548731 nova_compute[232433]: 2025-12-06 08:05:13.822 232437 INFO nova.compute.manager [-] [instance: 2ce29812-b64c-4801-a37b-68c55429b70c] Took 1.87 seconds to deallocate network for instance.#033[00m
Dec  6 03:05:13 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:05:13 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:05:13.820 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:05:13 np0005548731 nova_compute[232433]: 2025-12-06 08:05:13.889 232437 DEBUG oslo_concurrency.lockutils [None req-436b2da4-f8f4-47a2-9961-cad789c98f61 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:05:13 np0005548731 nova_compute[232433]: 2025-12-06 08:05:13.890 232437 DEBUG oslo_concurrency.lockutils [None req-436b2da4-f8f4-47a2-9961-cad789c98f61 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:05:13 np0005548731 nova_compute[232433]: 2025-12-06 08:05:13.898 232437 DEBUG oslo_concurrency.lockutils [None req-436b2da4-f8f4-47a2-9961-cad789c98f61 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.008s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:05:13 np0005548731 nova_compute[232433]: 2025-12-06 08:05:13.922 232437 DEBUG nova.compute.manager [req-fd989bec-a6d1-4b31-933a-c82eb6fe46ac req-07c34728-cec9-499a-9d00-42b3ef84e7e9 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 2ce29812-b64c-4801-a37b-68c55429b70c] Received event network-vif-deleted-42776015-1d70-4b92-9890-d31aaa444637 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 03:05:13 np0005548731 nova_compute[232433]: 2025-12-06 08:05:13.935 232437 INFO nova.scheduler.client.report [None req-436b2da4-f8f4-47a2-9961-cad789c98f61 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Deleted allocations for instance 2ce29812-b64c-4801-a37b-68c55429b70c#033[00m
Dec  6 03:05:13 np0005548731 nova_compute[232433]: 2025-12-06 08:05:13.995 232437 DEBUG oslo_concurrency.lockutils [None req-436b2da4-f8f4-47a2-9961-cad789c98f61 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Lock "2ce29812-b64c-4801-a37b-68c55429b70c" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.117s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:05:14 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e413 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:05:14 np0005548731 ovn_controller[133927]: 2025-12-06T08:05:14Z|00130|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:87:8e:a2 10.100.0.4
Dec  6 03:05:14 np0005548731 ovn_controller[133927]: 2025-12-06T08:05:14Z|00131|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:87:8e:a2 10.100.0.4
Dec  6 03:05:15 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:05:15 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:05:15 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:05:15.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:05:15 np0005548731 nova_compute[232433]: 2025-12-06 08:05:15.709 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:05:15 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:05:15 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:05:15 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:05:15.822 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:05:16 np0005548731 nova_compute[232433]: 2025-12-06 08:05:16.144 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:05:17 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:05:17 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:05:17 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:05:17.180 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:05:17 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:05:17 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:05:17 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:05:17.824 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:05:18 np0005548731 podman[326707]: 2025-12-06 08:05:18.903757509 +0000 UTC m=+0.051970092 container health_status 26c2ce9bdb83c6db5ccad7f5b2a7cb4e9354ab0235ad2f942ea42c8feda2142b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, 
tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Dec  6 03:05:18 np0005548731 podman[326709]: 2025-12-06 08:05:18.911090629 +0000 UTC m=+0.059358793 container health_status ae4fd08cdec31d6ae2a6b91ffff7deb6ca5a8375cb52ee4b685cc6f25796824e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Dec  6 03:05:18 np0005548731 podman[326708]: 2025-12-06 08:05:18.934701365 +0000 UTC m=+0.083219245 container health_status a118c3becf56cfbcae208065771ebf10a65162bb7b96389971293e534823fb78 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_controller)
Dec  6 03:05:19 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:05:19 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:05:19 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:05:19.183 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:05:19 np0005548731 ovn_controller[133927]: 2025-12-06T08:05:19Z|00972|binding|INFO|Releasing lport 7a26b0d3-5a0f-46fe-987b-780e7076a0fa from this chassis (sb_readonly=0)
Dec  6 03:05:19 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e413 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:05:19 np0005548731 nova_compute[232433]: 2025-12-06 08:05:19.311 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:05:19 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  6 03:05:19 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 03:05:19 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  6 03:05:19 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:05:19 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 03:05:19 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:05:19.826 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 03:05:20 np0005548731 nova_compute[232433]: 2025-12-06 08:05:20.710 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:05:20 np0005548731 nova_compute[232433]: 2025-12-06 08:05:20.774 232437 INFO nova.compute.manager [None req-4e433944-b1b0-4b57-ba6f-751e747030a5 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 53cf05eb-a0e9-4817-a441-cbba2c770c4b] Get console output#033[00m
Dec  6 03:05:20 np0005548731 nova_compute[232433]: 2025-12-06 08:05:20.779 261230 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Dec  6 03:05:21 np0005548731 nova_compute[232433]: 2025-12-06 08:05:21.146 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:05:21 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:05:21 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:05:21 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:05:21.187 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:05:21 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:05:21 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:05:21 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:05:21.828 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:05:23 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:05:23 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:05:23 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:05:23.190 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:05:23 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:05:23 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:05:23 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:05:23.830 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:05:24 np0005548731 nova_compute[232433]: 2025-12-06 08:05:24.060 232437 DEBUG nova.compute.manager [req-10770988-3cf8-48fe-863c-71550c560c85 req-6b47521f-7a6b-4074-a405-f8f1782d51ed 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 53cf05eb-a0e9-4817-a441-cbba2c770c4b] Received event network-changed-43e1b7c7-10b3-4138-8e30-33110bd9f83e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 03:05:24 np0005548731 nova_compute[232433]: 2025-12-06 08:05:24.061 232437 DEBUG nova.compute.manager [req-10770988-3cf8-48fe-863c-71550c560c85 req-6b47521f-7a6b-4074-a405-f8f1782d51ed 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 53cf05eb-a0e9-4817-a441-cbba2c770c4b] Refreshing instance network info cache due to event network-changed-43e1b7c7-10b3-4138-8e30-33110bd9f83e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  6 03:05:24 np0005548731 nova_compute[232433]: 2025-12-06 08:05:24.061 232437 DEBUG oslo_concurrency.lockutils [req-10770988-3cf8-48fe-863c-71550c560c85 req-6b47521f-7a6b-4074-a405-f8f1782d51ed 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-53cf05eb-a0e9-4817-a441-cbba2c770c4b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 03:05:24 np0005548731 nova_compute[232433]: 2025-12-06 08:05:24.061 232437 DEBUG oslo_concurrency.lockutils [req-10770988-3cf8-48fe-863c-71550c560c85 req-6b47521f-7a6b-4074-a405-f8f1782d51ed 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-53cf05eb-a0e9-4817-a441-cbba2c770c4b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 03:05:24 np0005548731 nova_compute[232433]: 2025-12-06 08:05:24.061 232437 DEBUG nova.network.neutron [req-10770988-3cf8-48fe-863c-71550c560c85 req-6b47521f-7a6b-4074-a405-f8f1782d51ed 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 53cf05eb-a0e9-4817-a441-cbba2c770c4b] Refreshing network info cache for port 43e1b7c7-10b3-4138-8e30-33110bd9f83e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  6 03:05:24 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e413 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:05:25 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:05:25 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:05:25 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:05:25.192 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:05:25 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 03:05:25 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 03:05:25 np0005548731 nova_compute[232433]: 2025-12-06 08:05:25.692 232437 DEBUG nova.network.neutron [req-10770988-3cf8-48fe-863c-71550c560c85 req-6b47521f-7a6b-4074-a405-f8f1782d51ed 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 53cf05eb-a0e9-4817-a441-cbba2c770c4b] Updated VIF entry in instance network info cache for port 43e1b7c7-10b3-4138-8e30-33110bd9f83e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  6 03:05:25 np0005548731 nova_compute[232433]: 2025-12-06 08:05:25.693 232437 DEBUG nova.network.neutron [req-10770988-3cf8-48fe-863c-71550c560c85 req-6b47521f-7a6b-4074-a405-f8f1782d51ed 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 53cf05eb-a0e9-4817-a441-cbba2c770c4b] Updating instance_info_cache with network_info: [{"id": "43e1b7c7-10b3-4138-8e30-33110bd9f83e", "address": "fa:16:3e:87:8e:a2", "network": {"id": "9482cb7a-b1a1-4dca-80a7-c7782ee5fe71", "bridge": "br-int", "label": "tempest-network-smoke--1672901811", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f4735a799c84437b9dd4ea8778ad2fbb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap43e1b7c7-10", "ovs_interfaceid": "43e1b7c7-10b3-4138-8e30-33110bd9f83e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 03:05:25 np0005548731 nova_compute[232433]: 2025-12-06 08:05:25.711 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:05:25 np0005548731 nova_compute[232433]: 2025-12-06 08:05:25.717 232437 DEBUG oslo_concurrency.lockutils [req-10770988-3cf8-48fe-863c-71550c560c85 req-6b47521f-7a6b-4074-a405-f8f1782d51ed 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-53cf05eb-a0e9-4817-a441-cbba2c770c4b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 03:05:25 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:05:25 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:05:25 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:05:25.833 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:05:26 np0005548731 nova_compute[232433]: 2025-12-06 08:05:26.115 232437 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765008311.1137526, 2ce29812-b64c-4801-a37b-68c55429b70c => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 03:05:26 np0005548731 nova_compute[232433]: 2025-12-06 08:05:26.116 232437 INFO nova.compute.manager [-] [instance: 2ce29812-b64c-4801-a37b-68c55429b70c] VM Stopped (Lifecycle Event)#033[00m
Dec  6 03:05:26 np0005548731 nova_compute[232433]: 2025-12-06 08:05:26.138 232437 DEBUG nova.compute.manager [None req-64fbdb88-b617-401e-baf7-2b5756eaf938 - - - - - -] [instance: 2ce29812-b64c-4801-a37b-68c55429b70c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 03:05:26 np0005548731 nova_compute[232433]: 2025-12-06 08:05:26.148 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:05:27 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:05:27 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:05:27 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:05:27.196 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:05:27 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:05:27 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:05:27 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:05:27.835 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:05:28 np0005548731 nova_compute[232433]: 2025-12-06 08:05:28.766 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:05:29 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:05:29 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:05:29 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:05:29.199 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:05:29 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e413 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:05:29 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:05:29 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:05:29 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:05:29.837 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:05:30 np0005548731 nova_compute[232433]: 2025-12-06 08:05:30.714 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:05:31 np0005548731 nova_compute[232433]: 2025-12-06 08:05:31.150 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:05:31 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:05:31 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:05:31 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:05:31.201 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:05:31 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:05:31 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:05:31 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:05:31.839 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:05:33 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:05:33 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:05:33 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:05:33.204 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:05:33 np0005548731 nova_compute[232433]: 2025-12-06 08:05:33.771 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:05:33 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:05:33 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:05:33 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:05:33.841 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:05:34 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e413 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:05:35 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:05:35 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:05:35 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:05:35.207 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:05:35 np0005548731 nova_compute[232433]: 2025-12-06 08:05:35.716 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:05:35 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:05:35 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:05:35 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:05:35.843 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:05:36 np0005548731 nova_compute[232433]: 2025-12-06 08:05:36.151 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:05:37 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:05:37 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:05:37 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:05:37.210 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:05:37 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:05:37 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:05:37 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:05:37.845 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:05:38 np0005548731 nova_compute[232433]: 2025-12-06 08:05:38.160 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:05:38 np0005548731 nova_compute[232433]: 2025-12-06 08:05:38.161 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  6 03:05:38 np0005548731 nova_compute[232433]: 2025-12-06 08:05:38.161 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  6 03:05:38 np0005548731 nova_compute[232433]: 2025-12-06 08:05:38.471 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Acquiring lock "refresh_cache-53cf05eb-a0e9-4817-a441-cbba2c770c4b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 03:05:38 np0005548731 nova_compute[232433]: 2025-12-06 08:05:38.472 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Acquired lock "refresh_cache-53cf05eb-a0e9-4817-a441-cbba2c770c4b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 03:05:38 np0005548731 nova_compute[232433]: 2025-12-06 08:05:38.472 232437 DEBUG nova.network.neutron [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] [instance: 53cf05eb-a0e9-4817-a441-cbba2c770c4b] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Dec  6 03:05:38 np0005548731 nova_compute[232433]: 2025-12-06 08:05:38.472 232437 DEBUG nova.objects.instance [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lazy-loading 'info_cache' on Instance uuid 53cf05eb-a0e9-4817-a441-cbba2c770c4b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 03:05:39 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:05:39 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:05:39 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:05:39.213 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:05:39 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e413 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:05:39 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:05:39 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:05:39 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:05:39.847 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:05:40 np0005548731 nova_compute[232433]: 2025-12-06 08:05:40.718 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:05:40 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:05:40.731 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=83, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:ec:b3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '32:72:e7:89:e0:7d'}, ipsec=False) old=SB_Global(nb_cfg=82) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 03:05:40 np0005548731 nova_compute[232433]: 2025-12-06 08:05:40.731 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:05:40 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:05:40.732 143965 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec  6 03:05:40 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:05:40.733 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=9f96b960-b4f2-40bd-ae99-08121f5e8b78, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '83'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 03:05:41 np0005548731 nova_compute[232433]: 2025-12-06 08:05:41.153 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:05:41 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:05:41 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:05:41 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:05:41.217 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:05:41 np0005548731 nova_compute[232433]: 2025-12-06 08:05:41.506 232437 DEBUG nova.network.neutron [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] [instance: 53cf05eb-a0e9-4817-a441-cbba2c770c4b] Updating instance_info_cache with network_info: [{"id": "43e1b7c7-10b3-4138-8e30-33110bd9f83e", "address": "fa:16:3e:87:8e:a2", "network": {"id": "9482cb7a-b1a1-4dca-80a7-c7782ee5fe71", "bridge": "br-int", "label": "tempest-network-smoke--1672901811", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f4735a799c84437b9dd4ea8778ad2fbb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap43e1b7c7-10", "ovs_interfaceid": "43e1b7c7-10b3-4138-8e30-33110bd9f83e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 03:05:41 np0005548731 nova_compute[232433]: 2025-12-06 08:05:41.528 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Releasing lock "refresh_cache-53cf05eb-a0e9-4817-a441-cbba2c770c4b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 03:05:41 np0005548731 nova_compute[232433]: 2025-12-06 08:05:41.528 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] [instance: 53cf05eb-a0e9-4817-a441-cbba2c770c4b] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Dec  6 03:05:41 np0005548731 nova_compute[232433]: 2025-12-06 08:05:41.529 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:05:41 np0005548731 nova_compute[232433]: 2025-12-06 08:05:41.529 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:05:41 np0005548731 nova_compute[232433]: 2025-12-06 08:05:41.529 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  6 03:05:41 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:05:41 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:05:41 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:05:41.849 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:05:43 np0005548731 nova_compute[232433]: 2025-12-06 08:05:43.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:05:43 np0005548731 nova_compute[232433]: 2025-12-06 08:05:43.105 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:05:43 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:05:43 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:05:43 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:05:43.219 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:05:43 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:05:43 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:05:43 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:05:43.851 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:05:44 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e413 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:05:45 np0005548731 nova_compute[232433]: 2025-12-06 08:05:45.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:05:45 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:05:45 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:05:45 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:05:45.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:05:45 np0005548731 nova_compute[232433]: 2025-12-06 08:05:45.751 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:05:45 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:05:45 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:05:45 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:05:45.853 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:05:46 np0005548731 nova_compute[232433]: 2025-12-06 08:05:46.154 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:05:47 np0005548731 nova_compute[232433]: 2025-12-06 08:05:47.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:05:47 np0005548731 nova_compute[232433]: 2025-12-06 08:05:47.128 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:05:47 np0005548731 nova_compute[232433]: 2025-12-06 08:05:47.128 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:05:47 np0005548731 nova_compute[232433]: 2025-12-06 08:05:47.129 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:05:47 np0005548731 nova_compute[232433]: 2025-12-06 08:05:47.129 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  6 03:05:47 np0005548731 nova_compute[232433]: 2025-12-06 08:05:47.129 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 03:05:47 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:05:47 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:05:47 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:05:47.225 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:05:47 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 03:05:47 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/641287599' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 03:05:47 np0005548731 nova_compute[232433]: 2025-12-06 08:05:47.562 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.433s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 03:05:47 np0005548731 nova_compute[232433]: 2025-12-06 08:05:47.647 232437 DEBUG nova.virt.libvirt.driver [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] skipping disk for instance-000000bc as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  6 03:05:47 np0005548731 nova_compute[232433]: 2025-12-06 08:05:47.648 232437 DEBUG nova.virt.libvirt.driver [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] skipping disk for instance-000000bc as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  6 03:05:47 np0005548731 nova_compute[232433]: 2025-12-06 08:05:47.794 232437 WARNING nova.virt.libvirt.driver [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  6 03:05:47 np0005548731 nova_compute[232433]: 2025-12-06 08:05:47.795 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4005MB free_disk=20.901050567626953GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, 
"label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  6 03:05:47 np0005548731 nova_compute[232433]: 2025-12-06 08:05:47.795 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:05:47 np0005548731 nova_compute[232433]: 2025-12-06 08:05:47.795 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:05:47 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:05:47 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:05:47 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:05:47.855 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:05:47 np0005548731 nova_compute[232433]: 2025-12-06 08:05:47.877 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Instance 53cf05eb-a0e9-4817-a441-cbba2c770c4b actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  6 03:05:47 np0005548731 nova_compute[232433]: 2025-12-06 08:05:47.877 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  6 03:05:47 np0005548731 nova_compute[232433]: 2025-12-06 08:05:47.877 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  6 03:05:47 np0005548731 nova_compute[232433]: 2025-12-06 08:05:47.893 232437 DEBUG nova.scheduler.client.report [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Refreshing inventories for resource provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Dec  6 03:05:47 np0005548731 nova_compute[232433]: 2025-12-06 08:05:47.916 232437 DEBUG nova.scheduler.client.report [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Updating ProviderTree inventory for provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Dec  6 03:05:47 np0005548731 nova_compute[232433]: 2025-12-06 08:05:47.916 232437 DEBUG nova.compute.provider_tree [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Updating inventory in ProviderTree for provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Dec  6 03:05:47 np0005548731 nova_compute[232433]: 2025-12-06 08:05:47.936 232437 DEBUG nova.scheduler.client.report [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Refreshing aggregate associations for resource provider 6d00757a-082f-486d-ae84-869a2ba2e6e7, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Dec  6 03:05:47 np0005548731 nova_compute[232433]: 2025-12-06 08:05:47.977 232437 DEBUG nova.scheduler.client.report [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Refreshing trait associations for resource provider 6d00757a-082f-486d-ae84-869a2ba2e6e7, traits: COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_STORAGE_BUS_USB,COMPUTE_ACCELERATORS,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_TRUSTED_CERTS,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_SSE,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_SSE2,HW_CPU_X86_SSE41,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_STORAGE_BUS_SATA,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_SECURITY_TPM_1_2,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_VOLUME_EXTEND,COMPUTE_NODE,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_SECURITY_TPM_2_0,COMPUTE_VIOMMU_MODEL_AUTO,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_SSE42,COMPUTE_STORAGE_BUS_FDC,COMPUTE_RESCUE_BFV,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_IMAGE_TYPE_ARI _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Dec  6 03:05:48 np0005548731 nova_compute[232433]: 2025-12-06 08:05:48.016 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 03:05:48 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 03:05:48 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/795117279' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 03:05:48 np0005548731 nova_compute[232433]: 2025-12-06 08:05:48.432 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.416s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 03:05:48 np0005548731 nova_compute[232433]: 2025-12-06 08:05:48.438 232437 DEBUG nova.compute.provider_tree [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Inventory has not changed in ProviderTree for provider: 6d00757a-082f-486d-ae84-869a2ba2e6e7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  6 03:05:48 np0005548731 nova_compute[232433]: 2025-12-06 08:05:48.605 232437 DEBUG nova.scheduler.client.report [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Inventory has not changed for provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  6 03:05:49 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:05:49 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:05:49 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:05:49.226 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:05:49 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e413 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:05:49 np0005548731 nova_compute[232433]: 2025-12-06 08:05:49.270 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  6 03:05:49 np0005548731 nova_compute[232433]: 2025-12-06 08:05:49.270 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.475s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:05:49 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:05:49 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:05:49 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:05:49.856 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:05:49 np0005548731 podman[326936]: 2025-12-06 08:05:49.896145219 +0000 UTC m=+0.052797141 container health_status 26c2ce9bdb83c6db5ccad7f5b2a7cb4e9354ab0235ad2f942ea42c8feda2142b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, 
tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Dec  6 03:05:49 np0005548731 podman[326938]: 2025-12-06 08:05:49.930470719 +0000 UTC m=+0.084657751 container health_status ae4fd08cdec31d6ae2a6b91ffff7deb6ca5a8375cb52ee4b685cc6f25796824e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Dec  6 03:05:49 np0005548731 podman[326937]: 2025-12-06 08:05:49.949123575 +0000 UTC m=+0.105559612 container health_status a118c3becf56cfbcae208065771ebf10a65162bb7b96389971293e534823fb78 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec  6 03:05:50 np0005548731 nova_compute[232433]: 2025-12-06 08:05:50.753 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:05:51 np0005548731 nova_compute[232433]: 2025-12-06 08:05:51.155 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:05:51 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:05:51 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:05:51 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:05:51.229 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:05:51 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:05:51 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:05:51 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:05:51.858 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:05:53 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:05:53 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:05:53 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:05:53.232 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:05:53 np0005548731 nova_compute[232433]: 2025-12-06 08:05:53.271 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:05:53 np0005548731 nova_compute[232433]: 2025-12-06 08:05:53.272 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:05:53 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:05:53 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 03:05:53 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:05:53.860 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 03:05:54 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e413 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:05:55 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:05:55 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:05:55 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:05:55.235 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:05:55 np0005548731 nova_compute[232433]: 2025-12-06 08:05:55.756 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:05:55 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:05:55 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:05:55 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:05:55.864 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:05:56 np0005548731 nova_compute[232433]: 2025-12-06 08:05:56.158 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:05:57 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:05:57 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:05:57 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:05:57.238 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:05:57 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:05:57 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:05:57 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:05:57.867 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:05:59 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:05:59 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:05:59 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:05:59.240 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:05:59 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e413 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:05:59 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:05:59 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:05:59 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:05:59.870 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:06:00 np0005548731 nova_compute[232433]: 2025-12-06 08:06:00.758 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:06:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:06:00.910 143965 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:06:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:06:00.910 143965 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:06:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:06:00.911 143965 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:06:01 np0005548731 nova_compute[232433]: 2025-12-06 08:06:01.099 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:06:01 np0005548731 nova_compute[232433]: 2025-12-06 08:06:01.160 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:06:01 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:06:01 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:06:01 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:06:01.243 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:06:01 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:06:01 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:06:01 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:06:01.872 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:06:03 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:06:03 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:06:03 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:06:03.246 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:06:03 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:06:03 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:06:03 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:06:03.875 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:06:04 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e413 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:06:05 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:06:05 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:06:05 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:06:05.249 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:06:05 np0005548731 nova_compute[232433]: 2025-12-06 08:06:05.762 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:06:05 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:06:05 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:06:05 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:06:05.878 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:06:06 np0005548731 nova_compute[232433]: 2025-12-06 08:06:06.162 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:06:07 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:06:07 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:06:07 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:06:07.252 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:06:07 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:06:07 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 03:06:07 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:06:07.881 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 03:06:09 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:06:09 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:06:09 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:06:09.256 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:06:09 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e413 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:06:09 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:06:09 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:06:09 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:06:09.884 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:06:10 np0005548731 nova_compute[232433]: 2025-12-06 08:06:10.763 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:06:11 np0005548731 nova_compute[232433]: 2025-12-06 08:06:11.163 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:06:11 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:06:11 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:06:11 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:06:11.259 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:06:11 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:06:11 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:06:11 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:06:11.887 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:06:13 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:06:13 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:06:13 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:06:13.262 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:06:13 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:06:13 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 03:06:13 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:06:13.888 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 03:06:14 np0005548731 nova_compute[232433]: 2025-12-06 08:06:14.227 232437 DEBUG oslo_concurrency.lockutils [None req-2db05f3e-a750-42ab-adbb-705b8141b336 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Acquiring lock "53cf05eb-a0e9-4817-a441-cbba2c770c4b" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:06:14 np0005548731 nova_compute[232433]: 2025-12-06 08:06:14.227 232437 DEBUG oslo_concurrency.lockutils [None req-2db05f3e-a750-42ab-adbb-705b8141b336 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Lock "53cf05eb-a0e9-4817-a441-cbba2c770c4b" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:06:14 np0005548731 nova_compute[232433]: 2025-12-06 08:06:14.228 232437 DEBUG oslo_concurrency.lockutils [None req-2db05f3e-a750-42ab-adbb-705b8141b336 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Acquiring lock "53cf05eb-a0e9-4817-a441-cbba2c770c4b-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:06:14 np0005548731 nova_compute[232433]: 2025-12-06 08:06:14.228 232437 DEBUG oslo_concurrency.lockutils [None req-2db05f3e-a750-42ab-adbb-705b8141b336 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Lock "53cf05eb-a0e9-4817-a441-cbba2c770c4b-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:06:14 np0005548731 nova_compute[232433]: 2025-12-06 08:06:14.228 232437 DEBUG oslo_concurrency.lockutils [None req-2db05f3e-a750-42ab-adbb-705b8141b336 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Lock "53cf05eb-a0e9-4817-a441-cbba2c770c4b-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:06:14 np0005548731 nova_compute[232433]: 2025-12-06 08:06:14.229 232437 INFO nova.compute.manager [None req-2db05f3e-a750-42ab-adbb-705b8141b336 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 53cf05eb-a0e9-4817-a441-cbba2c770c4b] Terminating instance#033[00m
Dec  6 03:06:14 np0005548731 nova_compute[232433]: 2025-12-06 08:06:14.230 232437 DEBUG nova.compute.manager [None req-2db05f3e-a750-42ab-adbb-705b8141b336 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 53cf05eb-a0e9-4817-a441-cbba2c770c4b] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec  6 03:06:14 np0005548731 kernel: tap43e1b7c7-10 (unregistering): left promiscuous mode
Dec  6 03:06:14 np0005548731 NetworkManager[49182]: <info>  [1765008374.2949] device (tap43e1b7c7-10): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec  6 03:06:14 np0005548731 nova_compute[232433]: 2025-12-06 08:06:14.304 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:06:14 np0005548731 ovn_controller[133927]: 2025-12-06T08:06:14Z|00973|binding|INFO|Releasing lport 43e1b7c7-10b3-4138-8e30-33110bd9f83e from this chassis (sb_readonly=0)
Dec  6 03:06:14 np0005548731 ovn_controller[133927]: 2025-12-06T08:06:14Z|00974|binding|INFO|Setting lport 43e1b7c7-10b3-4138-8e30-33110bd9f83e down in Southbound
Dec  6 03:06:14 np0005548731 ovn_controller[133927]: 2025-12-06T08:06:14Z|00975|binding|INFO|Removing iface tap43e1b7c7-10 ovn-installed in OVS
Dec  6 03:06:14 np0005548731 nova_compute[232433]: 2025-12-06 08:06:14.309 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:06:14 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:06:14.312 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:87:8e:a2 10.100.0.4'], port_security=['fa:16:3e:87:8e:a2 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '53cf05eb-a0e9-4817-a441-cbba2c770c4b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9482cb7a-b1a1-4dca-80a7-c7782ee5fe71', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f4735a799c84437b9dd4ea8778ad2fbb', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'f40db9ca-2d53-4929-957f-d781dfa63d97', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d84d8f55-4938-4502-958a-437fbc252df8, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], logical_port=43e1b7c7-10b3-4138-8e30-33110bd9f83e) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 03:06:14 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:06:14.313 143965 INFO neutron.agent.ovn.metadata.agent [-] Port 43e1b7c7-10b3-4138-8e30-33110bd9f83e in datapath 9482cb7a-b1a1-4dca-80a7-c7782ee5fe71 unbound from our chassis#033[00m
Dec  6 03:06:14 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:06:14.315 143965 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 9482cb7a-b1a1-4dca-80a7-c7782ee5fe71, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec  6 03:06:14 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:06:14.317 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[aba4f3c9-62e8-49b8-bf3f-4ad3d596bbaf]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:06:14 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:06:14.318 143965 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-9482cb7a-b1a1-4dca-80a7-c7782ee5fe71 namespace which is not needed anymore#033[00m
Dec  6 03:06:14 np0005548731 nova_compute[232433]: 2025-12-06 08:06:14.328 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:06:14 np0005548731 systemd[1]: machine-qemu\x2d97\x2dinstance\x2d000000bc.scope: Deactivated successfully.
Dec  6 03:06:14 np0005548731 systemd[1]: machine-qemu\x2d97\x2dinstance\x2d000000bc.scope: Consumed 17.232s CPU time.
Dec  6 03:06:14 np0005548731 systemd-machined[195355]: Machine qemu-97-instance-000000bc terminated.
Dec  6 03:06:14 np0005548731 neutron-haproxy-ovnmeta-9482cb7a-b1a1-4dca-80a7-c7782ee5fe71[326270]: [NOTICE]   (326274) : haproxy version is 2.8.14-c23fe91
Dec  6 03:06:14 np0005548731 neutron-haproxy-ovnmeta-9482cb7a-b1a1-4dca-80a7-c7782ee5fe71[326270]: [NOTICE]   (326274) : path to executable is /usr/sbin/haproxy
Dec  6 03:06:14 np0005548731 neutron-haproxy-ovnmeta-9482cb7a-b1a1-4dca-80a7-c7782ee5fe71[326270]: [WARNING]  (326274) : Exiting Master process...
Dec  6 03:06:14 np0005548731 neutron-haproxy-ovnmeta-9482cb7a-b1a1-4dca-80a7-c7782ee5fe71[326270]: [WARNING]  (326274) : Exiting Master process...
Dec  6 03:06:14 np0005548731 neutron-haproxy-ovnmeta-9482cb7a-b1a1-4dca-80a7-c7782ee5fe71[326270]: [ALERT]    (326274) : Current worker (326276) exited with code 143 (Terminated)
Dec  6 03:06:14 np0005548731 neutron-haproxy-ovnmeta-9482cb7a-b1a1-4dca-80a7-c7782ee5fe71[326270]: [WARNING]  (326274) : All workers exited. Exiting... (0)
Dec  6 03:06:14 np0005548731 systemd[1]: libpod-8f8e9473206fc201bc4bb85362c4bd75065014c858a44312534e2ebcd70e592c.scope: Deactivated successfully.
Dec  6 03:06:14 np0005548731 podman[327138]: 2025-12-06 08:06:14.453625891 +0000 UTC m=+0.043615157 container died 8f8e9473206fc201bc4bb85362c4bd75065014c858a44312534e2ebcd70e592c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9482cb7a-b1a1-4dca-80a7-c7782ee5fe71, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Dec  6 03:06:14 np0005548731 nova_compute[232433]: 2025-12-06 08:06:14.465 232437 INFO nova.virt.libvirt.driver [-] [instance: 53cf05eb-a0e9-4817-a441-cbba2c770c4b] Instance destroyed successfully.#033[00m
Dec  6 03:06:14 np0005548731 nova_compute[232433]: 2025-12-06 08:06:14.465 232437 DEBUG nova.objects.instance [None req-2db05f3e-a750-42ab-adbb-705b8141b336 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Lazy-loading 'resources' on Instance uuid 53cf05eb-a0e9-4817-a441-cbba2c770c4b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 03:06:14 np0005548731 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-8f8e9473206fc201bc4bb85362c4bd75065014c858a44312534e2ebcd70e592c-userdata-shm.mount: Deactivated successfully.
Dec  6 03:06:14 np0005548731 nova_compute[232433]: 2025-12-06 08:06:14.481 232437 DEBUG nova.virt.libvirt.vif [None req-2db05f3e-a750-42ab-adbb-705b8141b336 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T08:04:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1175655928',display_name='tempest-TestNetworkBasicOps-server-1175655928',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1175655928',id=188,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNBt0jLPk2iN+Hpq+kn9o8I8ax4aimgQqm1H7syySMpztN4N1lD49+MzdovmLDVVSxMcVhESXiJzt8WdekylzuvNkp6MvhMMn70BtxkiBahaQSbGPzzOiP2J48ZImfcQaw==',key_name='tempest-TestNetworkBasicOps-640512507',keypairs=<?>,launch_index=0,launched_at=2025-12-06T08:04:59Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='f4735a799c84437b9dd4ea8778ad2fbb',ramdisk_id='',reservation_id='r-f0wuvw2n',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1435471576',owner_user_name='tempest-TestNetworkBasicOps-1435471576-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T08:04:59Z,user_data=None,user_id='d5359905348247d0b9b5b95982e890bb',uuid=53cf05eb-a0e9-4817-a441-cbba2c770c4b,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "43e1b7c7-10b3-4138-8e30-33110bd9f83e", "address": "fa:16:3e:87:8e:a2", "network": {"id": "9482cb7a-b1a1-4dca-80a7-c7782ee5fe71", "bridge": "br-int", "label": "tempest-network-smoke--1672901811", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f4735a799c84437b9dd4ea8778ad2fbb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap43e1b7c7-10", "ovs_interfaceid": "43e1b7c7-10b3-4138-8e30-33110bd9f83e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec  6 03:06:14 np0005548731 nova_compute[232433]: 2025-12-06 08:06:14.481 232437 DEBUG nova.network.os_vif_util [None req-2db05f3e-a750-42ab-adbb-705b8141b336 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Converting VIF {"id": "43e1b7c7-10b3-4138-8e30-33110bd9f83e", "address": "fa:16:3e:87:8e:a2", "network": {"id": "9482cb7a-b1a1-4dca-80a7-c7782ee5fe71", "bridge": "br-int", "label": "tempest-network-smoke--1672901811", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f4735a799c84437b9dd4ea8778ad2fbb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap43e1b7c7-10", "ovs_interfaceid": "43e1b7c7-10b3-4138-8e30-33110bd9f83e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 03:06:14 np0005548731 nova_compute[232433]: 2025-12-06 08:06:14.482 232437 DEBUG nova.network.os_vif_util [None req-2db05f3e-a750-42ab-adbb-705b8141b336 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:87:8e:a2,bridge_name='br-int',has_traffic_filtering=True,id=43e1b7c7-10b3-4138-8e30-33110bd9f83e,network=Network(9482cb7a-b1a1-4dca-80a7-c7782ee5fe71),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap43e1b7c7-10') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 03:06:14 np0005548731 nova_compute[232433]: 2025-12-06 08:06:14.482 232437 DEBUG os_vif [None req-2db05f3e-a750-42ab-adbb-705b8141b336 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:87:8e:a2,bridge_name='br-int',has_traffic_filtering=True,id=43e1b7c7-10b3-4138-8e30-33110bd9f83e,network=Network(9482cb7a-b1a1-4dca-80a7-c7782ee5fe71),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap43e1b7c7-10') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec  6 03:06:14 np0005548731 nova_compute[232433]: 2025-12-06 08:06:14.484 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:06:14 np0005548731 nova_compute[232433]: 2025-12-06 08:06:14.484 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap43e1b7c7-10, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 03:06:14 np0005548731 systemd[1]: var-lib-containers-storage-overlay-27adc9e7918aa03eb8bb5cc2c21cc5ce9fe53ed5cc9b67d4d83d2c3f04a85d87-merged.mount: Deactivated successfully.
Dec  6 03:06:14 np0005548731 nova_compute[232433]: 2025-12-06 08:06:14.486 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:06:14 np0005548731 nova_compute[232433]: 2025-12-06 08:06:14.488 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:06:14 np0005548731 nova_compute[232433]: 2025-12-06 08:06:14.492 232437 INFO os_vif [None req-2db05f3e-a750-42ab-adbb-705b8141b336 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:87:8e:a2,bridge_name='br-int',has_traffic_filtering=True,id=43e1b7c7-10b3-4138-8e30-33110bd9f83e,network=Network(9482cb7a-b1a1-4dca-80a7-c7782ee5fe71),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap43e1b7c7-10')#033[00m
Dec  6 03:06:14 np0005548731 podman[327138]: 2025-12-06 08:06:14.501906362 +0000 UTC m=+0.091895628 container cleanup 8f8e9473206fc201bc4bb85362c4bd75065014c858a44312534e2ebcd70e592c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9482cb7a-b1a1-4dca-80a7-c7782ee5fe71, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec  6 03:06:14 np0005548731 systemd[1]: libpod-conmon-8f8e9473206fc201bc4bb85362c4bd75065014c858a44312534e2ebcd70e592c.scope: Deactivated successfully.
Dec  6 03:06:14 np0005548731 nova_compute[232433]: 2025-12-06 08:06:14.513 232437 DEBUG nova.compute.manager [req-293878df-1e5f-49dc-a315-a6f9e548c801 req-d61954f1-2c53-440e-95b7-f8f3822b6e0a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 53cf05eb-a0e9-4817-a441-cbba2c770c4b] Received event network-vif-unplugged-43e1b7c7-10b3-4138-8e30-33110bd9f83e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 03:06:14 np0005548731 nova_compute[232433]: 2025-12-06 08:06:14.513 232437 DEBUG oslo_concurrency.lockutils [req-293878df-1e5f-49dc-a315-a6f9e548c801 req-d61954f1-2c53-440e-95b7-f8f3822b6e0a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "53cf05eb-a0e9-4817-a441-cbba2c770c4b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:06:14 np0005548731 nova_compute[232433]: 2025-12-06 08:06:14.513 232437 DEBUG oslo_concurrency.lockutils [req-293878df-1e5f-49dc-a315-a6f9e548c801 req-d61954f1-2c53-440e-95b7-f8f3822b6e0a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "53cf05eb-a0e9-4817-a441-cbba2c770c4b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:06:14 np0005548731 nova_compute[232433]: 2025-12-06 08:06:14.514 232437 DEBUG oslo_concurrency.lockutils [req-293878df-1e5f-49dc-a315-a6f9e548c801 req-d61954f1-2c53-440e-95b7-f8f3822b6e0a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "53cf05eb-a0e9-4817-a441-cbba2c770c4b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:06:14 np0005548731 nova_compute[232433]: 2025-12-06 08:06:14.514 232437 DEBUG nova.compute.manager [req-293878df-1e5f-49dc-a315-a6f9e548c801 req-d61954f1-2c53-440e-95b7-f8f3822b6e0a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 53cf05eb-a0e9-4817-a441-cbba2c770c4b] No waiting events found dispatching network-vif-unplugged-43e1b7c7-10b3-4138-8e30-33110bd9f83e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 03:06:14 np0005548731 nova_compute[232433]: 2025-12-06 08:06:14.514 232437 DEBUG nova.compute.manager [req-293878df-1e5f-49dc-a315-a6f9e548c801 req-d61954f1-2c53-440e-95b7-f8f3822b6e0a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 53cf05eb-a0e9-4817-a441-cbba2c770c4b] Received event network-vif-unplugged-43e1b7c7-10b3-4138-8e30-33110bd9f83e for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Dec  6 03:06:14 np0005548731 podman[327196]: 2025-12-06 08:06:14.567995907 +0000 UTC m=+0.043593367 container remove 8f8e9473206fc201bc4bb85362c4bd75065014c858a44312534e2ebcd70e592c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9482cb7a-b1a1-4dca-80a7-c7782ee5fe71, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec  6 03:06:14 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:06:14.574 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[2622f4f9-81a4-40c0-b32a-124495b59b77]: (4, ('Sat Dec  6 08:06:14 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-9482cb7a-b1a1-4dca-80a7-c7782ee5fe71 (8f8e9473206fc201bc4bb85362c4bd75065014c858a44312534e2ebcd70e592c)\n8f8e9473206fc201bc4bb85362c4bd75065014c858a44312534e2ebcd70e592c\nSat Dec  6 08:06:14 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-9482cb7a-b1a1-4dca-80a7-c7782ee5fe71 (8f8e9473206fc201bc4bb85362c4bd75065014c858a44312534e2ebcd70e592c)\n8f8e9473206fc201bc4bb85362c4bd75065014c858a44312534e2ebcd70e592c\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:06:14 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:06:14.576 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[6953f7fb-f451-4914-8a8a-7c607603bb20]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:06:14 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:06:14.577 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9482cb7a-b0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 03:06:14 np0005548731 nova_compute[232433]: 2025-12-06 08:06:14.578 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:06:14 np0005548731 kernel: tap9482cb7a-b0: left promiscuous mode
Dec  6 03:06:14 np0005548731 nova_compute[232433]: 2025-12-06 08:06:14.590 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:06:14 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:06:14.593 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[c269220c-8297-4b1c-97fc-75fd19b5b02e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:06:14 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:06:14.612 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[98c84b17-eabf-4bd3-8466-0bd177685323]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:06:14 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:06:14.614 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[11234c78-bf15-419e-8e15-30251feda4fd]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:06:14 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:06:14.629 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[4feb6866-b3e8-40de-8331-b55e49791d2d]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 856025, 'reachable_time': 31813, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 327214, 'error': None, 'target': 'ovnmeta-9482cb7a-b1a1-4dca-80a7-c7782ee5fe71', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:06:14 np0005548731 systemd[1]: run-netns-ovnmeta\x2d9482cb7a\x2db1a1\x2d4dca\x2d80a7\x2dc7782ee5fe71.mount: Deactivated successfully.
Dec  6 03:06:14 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:06:14.633 144079 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-9482cb7a-b1a1-4dca-80a7-c7782ee5fe71 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Dec  6 03:06:14 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:06:14.633 144079 DEBUG oslo.privsep.daemon [-] privsep: reply[c173c399-3f06-4e89-9126-35588edb8da8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:06:14 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e413 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:06:14 np0005548731 nova_compute[232433]: 2025-12-06 08:06:14.845 232437 INFO nova.virt.libvirt.driver [None req-2db05f3e-a750-42ab-adbb-705b8141b336 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 53cf05eb-a0e9-4817-a441-cbba2c770c4b] Deleting instance files /var/lib/nova/instances/53cf05eb-a0e9-4817-a441-cbba2c770c4b_del#033[00m
Dec  6 03:06:14 np0005548731 nova_compute[232433]: 2025-12-06 08:06:14.846 232437 INFO nova.virt.libvirt.driver [None req-2db05f3e-a750-42ab-adbb-705b8141b336 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 53cf05eb-a0e9-4817-a441-cbba2c770c4b] Deletion of /var/lib/nova/instances/53cf05eb-a0e9-4817-a441-cbba2c770c4b_del complete#033[00m
Dec  6 03:06:14 np0005548731 nova_compute[232433]: 2025-12-06 08:06:14.885 232437 INFO nova.compute.manager [None req-2db05f3e-a750-42ab-adbb-705b8141b336 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 53cf05eb-a0e9-4817-a441-cbba2c770c4b] Took 0.65 seconds to destroy the instance on the hypervisor.#033[00m
Dec  6 03:06:14 np0005548731 nova_compute[232433]: 2025-12-06 08:06:14.886 232437 DEBUG oslo.service.loopingcall [None req-2db05f3e-a750-42ab-adbb-705b8141b336 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Dec  6 03:06:14 np0005548731 nova_compute[232433]: 2025-12-06 08:06:14.886 232437 DEBUG nova.compute.manager [-] [instance: 53cf05eb-a0e9-4817-a441-cbba2c770c4b] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Dec  6 03:06:14 np0005548731 nova_compute[232433]: 2025-12-06 08:06:14.887 232437 DEBUG nova.network.neutron [-] [instance: 53cf05eb-a0e9-4817-a441-cbba2c770c4b] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Dec  6 03:06:15 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:06:15 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:06:15 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:06:15.265 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:06:15 np0005548731 nova_compute[232433]: 2025-12-06 08:06:15.425 232437 DEBUG nova.network.neutron [-] [instance: 53cf05eb-a0e9-4817-a441-cbba2c770c4b] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 03:06:15 np0005548731 nova_compute[232433]: 2025-12-06 08:06:15.462 232437 INFO nova.compute.manager [-] [instance: 53cf05eb-a0e9-4817-a441-cbba2c770c4b] Took 0.58 seconds to deallocate network for instance.#033[00m
Dec  6 03:06:15 np0005548731 nova_compute[232433]: 2025-12-06 08:06:15.477 232437 DEBUG nova.compute.manager [req-40e3b766-5028-4f09-9ca1-7ede88c7d87a req-871d5aff-ea8e-4106-ad53-76045fb54cd4 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 53cf05eb-a0e9-4817-a441-cbba2c770c4b] Received event network-vif-deleted-43e1b7c7-10b3-4138-8e30-33110bd9f83e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 03:06:15 np0005548731 nova_compute[232433]: 2025-12-06 08:06:15.514 232437 DEBUG oslo_concurrency.lockutils [None req-2db05f3e-a750-42ab-adbb-705b8141b336 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:06:15 np0005548731 nova_compute[232433]: 2025-12-06 08:06:15.515 232437 DEBUG oslo_concurrency.lockutils [None req-2db05f3e-a750-42ab-adbb-705b8141b336 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:06:15 np0005548731 nova_compute[232433]: 2025-12-06 08:06:15.765 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:06:15 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:06:15 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:06:15 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:06:15.891 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:06:15 np0005548731 nova_compute[232433]: 2025-12-06 08:06:15.903 232437 DEBUG oslo_concurrency.processutils [None req-2db05f3e-a750-42ab-adbb-705b8141b336 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 03:06:16 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 03:06:16 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1944957738' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 03:06:16 np0005548731 nova_compute[232433]: 2025-12-06 08:06:16.333 232437 DEBUG oslo_concurrency.processutils [None req-2db05f3e-a750-42ab-adbb-705b8141b336 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.430s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 03:06:16 np0005548731 nova_compute[232433]: 2025-12-06 08:06:16.339 232437 DEBUG nova.compute.provider_tree [None req-2db05f3e-a750-42ab-adbb-705b8141b336 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Inventory has not changed in ProviderTree for provider: 6d00757a-082f-486d-ae84-869a2ba2e6e7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  6 03:06:16 np0005548731 nova_compute[232433]: 2025-12-06 08:06:16.362 232437 DEBUG nova.scheduler.client.report [None req-2db05f3e-a750-42ab-adbb-705b8141b336 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Inventory has not changed for provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  6 03:06:16 np0005548731 nova_compute[232433]: 2025-12-06 08:06:16.400 232437 DEBUG oslo_concurrency.lockutils [None req-2db05f3e-a750-42ab-adbb-705b8141b336 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.886s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:06:16 np0005548731 nova_compute[232433]: 2025-12-06 08:06:16.471 232437 INFO nova.scheduler.client.report [None req-2db05f3e-a750-42ab-adbb-705b8141b336 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Deleted allocations for instance 53cf05eb-a0e9-4817-a441-cbba2c770c4b#033[00m
Dec  6 03:06:16 np0005548731 nova_compute[232433]: 2025-12-06 08:06:16.546 232437 DEBUG oslo_concurrency.lockutils [None req-2db05f3e-a750-42ab-adbb-705b8141b336 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Lock "53cf05eb-a0e9-4817-a441-cbba2c770c4b" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.319s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:06:16 np0005548731 nova_compute[232433]: 2025-12-06 08:06:16.572 232437 DEBUG nova.compute.manager [req-a4ba64e7-9ffc-4b8b-b22b-3d0c57444567 req-4ad293b4-5849-4b5e-b1f3-ee95a6de861d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 53cf05eb-a0e9-4817-a441-cbba2c770c4b] Received event network-vif-plugged-43e1b7c7-10b3-4138-8e30-33110bd9f83e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 03:06:16 np0005548731 nova_compute[232433]: 2025-12-06 08:06:16.573 232437 DEBUG oslo_concurrency.lockutils [req-a4ba64e7-9ffc-4b8b-b22b-3d0c57444567 req-4ad293b4-5849-4b5e-b1f3-ee95a6de861d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "53cf05eb-a0e9-4817-a441-cbba2c770c4b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:06:16 np0005548731 nova_compute[232433]: 2025-12-06 08:06:16.573 232437 DEBUG oslo_concurrency.lockutils [req-a4ba64e7-9ffc-4b8b-b22b-3d0c57444567 req-4ad293b4-5849-4b5e-b1f3-ee95a6de861d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "53cf05eb-a0e9-4817-a441-cbba2c770c4b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:06:16 np0005548731 nova_compute[232433]: 2025-12-06 08:06:16.573 232437 DEBUG oslo_concurrency.lockutils [req-a4ba64e7-9ffc-4b8b-b22b-3d0c57444567 req-4ad293b4-5849-4b5e-b1f3-ee95a6de861d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "53cf05eb-a0e9-4817-a441-cbba2c770c4b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:06:16 np0005548731 nova_compute[232433]: 2025-12-06 08:06:16.573 232437 DEBUG nova.compute.manager [req-a4ba64e7-9ffc-4b8b-b22b-3d0c57444567 req-4ad293b4-5849-4b5e-b1f3-ee95a6de861d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 53cf05eb-a0e9-4817-a441-cbba2c770c4b] No waiting events found dispatching network-vif-plugged-43e1b7c7-10b3-4138-8e30-33110bd9f83e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 03:06:16 np0005548731 nova_compute[232433]: 2025-12-06 08:06:16.574 232437 WARNING nova.compute.manager [req-a4ba64e7-9ffc-4b8b-b22b-3d0c57444567 req-4ad293b4-5849-4b5e-b1f3-ee95a6de861d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 53cf05eb-a0e9-4817-a441-cbba2c770c4b] Received unexpected event network-vif-plugged-43e1b7c7-10b3-4138-8e30-33110bd9f83e for instance with vm_state deleted and task_state None.#033[00m
Dec  6 03:06:17 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:06:17 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:06:17 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:06:17.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:06:17 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:06:17 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:06:17 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:06:17.894 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:06:19 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:06:19 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:06:19 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:06:19.271 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:06:19 np0005548731 nova_compute[232433]: 2025-12-06 08:06:19.489 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:06:19 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e413 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:06:19 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:06:19 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 03:06:19 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:06:19.896 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 03:06:20 np0005548731 nova_compute[232433]: 2025-12-06 08:06:20.819 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:06:20 np0005548731 podman[327241]: 2025-12-06 08:06:20.895508683 +0000 UTC m=+0.049807319 container health_status 26c2ce9bdb83c6db5ccad7f5b2a7cb4e9354ab0235ad2f942ea42c8feda2142b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, 
tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Dec  6 03:06:20 np0005548731 podman[327243]: 2025-12-06 08:06:20.911530805 +0000 UTC m=+0.058859640 container health_status ae4fd08cdec31d6ae2a6b91ffff7deb6ca5a8375cb52ee4b685cc6f25796824e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=multipathd)
Dec  6 03:06:20 np0005548731 podman[327242]: 2025-12-06 08:06:20.929254328 +0000 UTC m=+0.080439208 container health_status a118c3becf56cfbcae208065771ebf10a65162bb7b96389971293e534823fb78 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Dec  6 03:06:21 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:06:21 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:06:21 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:06:21.274 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:06:21 np0005548731 nova_compute[232433]: 2025-12-06 08:06:21.360 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:06:21 np0005548731 nova_compute[232433]: 2025-12-06 08:06:21.475 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:06:21 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:06:21 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:06:21 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:06:21.900 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:06:23 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:06:23 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:06:23 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:06:23.277 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:06:23 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:06:23 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:06:23 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:06:23.903 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:06:24 np0005548731 nova_compute[232433]: 2025-12-06 08:06:24.491 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:06:24 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e413 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:06:25 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:06:25 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:06:25 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:06:25.281 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:06:25 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  6 03:06:25 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 03:06:25 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  6 03:06:25 np0005548731 nova_compute[232433]: 2025-12-06 08:06:25.822 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:06:25 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:06:25 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:06:25 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:06:25.905 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:06:27 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:06:27 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:06:27 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:06:27.284 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:06:27 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:06:27 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:06:27 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:06:27.908 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:06:29 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:06:29 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:06:29 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:06:29.287 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:06:29 np0005548731 nova_compute[232433]: 2025-12-06 08:06:29.465 232437 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765008374.4635062, 53cf05eb-a0e9-4817-a441-cbba2c770c4b => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 03:06:29 np0005548731 nova_compute[232433]: 2025-12-06 08:06:29.465 232437 INFO nova.compute.manager [-] [instance: 53cf05eb-a0e9-4817-a441-cbba2c770c4b] VM Stopped (Lifecycle Event)#033[00m
Dec  6 03:06:29 np0005548731 nova_compute[232433]: 2025-12-06 08:06:29.494 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:06:29 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:06:29.643 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=84, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:ec:b3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '32:72:e7:89:e0:7d'}, ipsec=False) old=SB_Global(nb_cfg=83) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 03:06:29 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:06:29.644 143965 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec  6 03:06:29 np0005548731 nova_compute[232433]: 2025-12-06 08:06:29.644 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:06:29 np0005548731 nova_compute[232433]: 2025-12-06 08:06:29.663 232437 DEBUG nova.compute.manager [None req-94d4e676-0f28-44a6-ab2e-07d260a4d3ce - - - - - -] [instance: 53cf05eb-a0e9-4817-a441-cbba2c770c4b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 03:06:29 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e413 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:06:29 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:06:29 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:06:29 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:06:29.910 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:06:30 np0005548731 nova_compute[232433]: 2025-12-06 08:06:30.824 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:06:31 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:06:31 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:06:31 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:06:31.290 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:06:31 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:06:31 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 03:06:31 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:06:31.911 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 03:06:33 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 03:06:33 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 03:06:33 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:06:33 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:06:33 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:06:33.293 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:06:33 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:06:33 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:06:33 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:06:33.915 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:06:34 np0005548731 nova_compute[232433]: 2025-12-06 08:06:34.497 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:06:34 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e413 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:06:35 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:06:35 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:06:35 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:06:35.296 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:06:35 np0005548731 nova_compute[232433]: 2025-12-06 08:06:35.826 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:06:35 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:06:35 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:06:35 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:06:35.917 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:06:37 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:06:37 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:06:37 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:06:37.299 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:06:37 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:06:37 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:06:37 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:06:37.919 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:06:38 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:06:38.647 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=9f96b960-b4f2-40bd-ae99-08121f5e8b78, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '84'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 03:06:39 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:06:39 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:06:39 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:06:39.302 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:06:39 np0005548731 nova_compute[232433]: 2025-12-06 08:06:39.499 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:06:39 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e413 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:06:39 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:06:39 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:06:39 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:06:39.922 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:06:40 np0005548731 nova_compute[232433]: 2025-12-06 08:06:40.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:06:40 np0005548731 nova_compute[232433]: 2025-12-06 08:06:40.105 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  6 03:06:40 np0005548731 nova_compute[232433]: 2025-12-06 08:06:40.105 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  6 03:06:40 np0005548731 nova_compute[232433]: 2025-12-06 08:06:40.123 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Dec  6 03:06:40 np0005548731 nova_compute[232433]: 2025-12-06 08:06:40.827 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:06:41 np0005548731 nova_compute[232433]: 2025-12-06 08:06:41.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:06:41 np0005548731 nova_compute[232433]: 2025-12-06 08:06:41.104 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  6 03:06:41 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:06:41 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:06:41 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:06:41.305 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:06:41 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:06:41 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:06:41 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:06:41.924 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:06:42 np0005548731 nova_compute[232433]: 2025-12-06 08:06:42.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:06:43 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:06:43 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:06:43 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:06:43.308 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:06:43 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:06:43 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 03:06:43 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:06:43.926 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 03:06:44 np0005548731 nova_compute[232433]: 2025-12-06 08:06:44.501 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:06:44 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e413 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:06:45 np0005548731 nova_compute[232433]: 2025-12-06 08:06:45.099 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:06:45 np0005548731 nova_compute[232433]: 2025-12-06 08:06:45.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:06:45 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:06:45 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:06:45 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:06:45.311 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:06:45 np0005548731 nova_compute[232433]: 2025-12-06 08:06:45.829 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:06:45 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:06:45 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 03:06:45 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:06:45.929 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 03:06:46 np0005548731 nova_compute[232433]: 2025-12-06 08:06:46.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:06:47 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:06:47 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:06:47 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:06:47.314 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:06:47 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:06:47 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:06:47 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:06:47.933 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:06:48 np0005548731 nova_compute[232433]: 2025-12-06 08:06:48.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:06:48 np0005548731 nova_compute[232433]: 2025-12-06 08:06:48.130 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:06:48 np0005548731 nova_compute[232433]: 2025-12-06 08:06:48.131 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:06:48 np0005548731 nova_compute[232433]: 2025-12-06 08:06:48.131 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:06:48 np0005548731 nova_compute[232433]: 2025-12-06 08:06:48.131 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  6 03:06:48 np0005548731 nova_compute[232433]: 2025-12-06 08:06:48.131 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 03:06:48 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 03:06:48 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/27461616' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 03:06:48 np0005548731 nova_compute[232433]: 2025-12-06 08:06:48.627 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.495s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 03:06:48 np0005548731 nova_compute[232433]: 2025-12-06 08:06:48.778 232437 WARNING nova.virt.libvirt.driver [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  6 03:06:48 np0005548731 nova_compute[232433]: 2025-12-06 08:06:48.780 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4230MB free_disk=20.967525482177734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  6 03:06:48 np0005548731 nova_compute[232433]: 2025-12-06 08:06:48.780 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:06:48 np0005548731 nova_compute[232433]: 2025-12-06 08:06:48.780 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:06:48 np0005548731 nova_compute[232433]: 2025-12-06 08:06:48.882 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  6 03:06:48 np0005548731 nova_compute[232433]: 2025-12-06 08:06:48.883 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  6 03:06:48 np0005548731 nova_compute[232433]: 2025-12-06 08:06:48.906 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 03:06:49 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:06:49 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 03:06:49 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:06:49.316 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 03:06:49 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 03:06:49 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1249092426' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 03:06:49 np0005548731 nova_compute[232433]: 2025-12-06 08:06:49.365 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.459s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 03:06:49 np0005548731 nova_compute[232433]: 2025-12-06 08:06:49.370 232437 DEBUG nova.compute.provider_tree [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Inventory has not changed in ProviderTree for provider: 6d00757a-082f-486d-ae84-869a2ba2e6e7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  6 03:06:49 np0005548731 nova_compute[232433]: 2025-12-06 08:06:49.387 232437 DEBUG nova.scheduler.client.report [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Inventory has not changed for provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  6 03:06:49 np0005548731 nova_compute[232433]: 2025-12-06 08:06:49.409 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  6 03:06:49 np0005548731 nova_compute[232433]: 2025-12-06 08:06:49.409 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.629s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:06:49 np0005548731 nova_compute[232433]: 2025-12-06 08:06:49.503 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:06:49 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e413 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:06:49 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:06:49 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:06:49 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:06:49.936 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:06:50 np0005548731 nova_compute[232433]: 2025-12-06 08:06:50.831 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:06:51 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:06:51 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:06:51 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:06:51.320 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:06:51 np0005548731 podman[327649]: 2025-12-06 08:06:51.894788684 +0000 UTC m=+0.057430655 container health_status 26c2ce9bdb83c6db5ccad7f5b2a7cb4e9354ab0235ad2f942ea42c8feda2142b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Dec  6 03:06:51 np0005548731 podman[327651]: 2025-12-06 08:06:51.899336395 +0000 UTC m=+0.060589432 container health_status ae4fd08cdec31d6ae2a6b91ffff7deb6ca5a8375cb52ee4b685cc6f25796824e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Dec  6 03:06:51 np0005548731 podman[327650]: 2025-12-06 08:06:51.916771151 +0000 UTC m=+0.079104475 container health_status a118c3becf56cfbcae208065771ebf10a65162bb7b96389971293e534823fb78 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Dec  6 03:06:51 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:06:51 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:06:51 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:06:51.938 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:06:53 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:06:53 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:06:53 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:06:53.324 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:06:53 np0005548731 nova_compute[232433]: 2025-12-06 08:06:53.410 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:06:53 np0005548731 nova_compute[232433]: 2025-12-06 08:06:53.411 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:06:53 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:06:53 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:06:53 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:06:53.940 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:06:54 np0005548731 nova_compute[232433]: 2025-12-06 08:06:54.506 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:06:54 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e413 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:06:55 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:06:55 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:06:55 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:06:55.327 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:06:55 np0005548731 nova_compute[232433]: 2025-12-06 08:06:55.833 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:06:55 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:06:55 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:06:55 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:06:55.943 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:06:57 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:06:57 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:06:57 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:06:57.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:06:57 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:06:57 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:06:57 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:06:57.946 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:06:58 np0005548731 systemd[1]: virtproxyd.service: Deactivated successfully.
Dec  6 03:06:59 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:06:59 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:06:59 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:06:59.333 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:06:59 np0005548731 nova_compute[232433]: 2025-12-06 08:06:59.556 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:06:59 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e413 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:06:59 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:06:59 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:06:59 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:06:59.948 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:07:00 np0005548731 nova_compute[232433]: 2025-12-06 08:07:00.835 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:07:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:07:00.911 143965 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:07:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:07:00.911 143965 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:07:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:07:00.911 143965 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:07:01 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:07:01 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:07:01 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:07:01.336 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:07:01 np0005548731 ovn_controller[133927]: 2025-12-06T08:07:01Z|00976|memory_trim|INFO|Detected inactivity (last active 30001 ms ago): trimming memory
Dec  6 03:07:01 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:07:01 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:07:01 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:07:01.950 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:07:03 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:07:03 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:07:03 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:07:03.339 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:07:03 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:07:03 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:07:03 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:07:03.952 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:07:04 np0005548731 nova_compute[232433]: 2025-12-06 08:07:04.559 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:07:04 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e413 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:07:05 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:07:05 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:07:05 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:07:05.342 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:07:05 np0005548731 nova_compute[232433]: 2025-12-06 08:07:05.836 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:07:05 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:07:05 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:07:05 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:07:05.954 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:07:07 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:07:07 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:07:07 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:07:07.345 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:07:07 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:07:07 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:07:07 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:07:07.956 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:07:09 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:07:09 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:07:09 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:07:09.348 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:07:09 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  6 03:07:09 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2782371719' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  6 03:07:09 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  6 03:07:09 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2782371719' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  6 03:07:09 np0005548731 nova_compute[232433]: 2025-12-06 08:07:09.561 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:07:09 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e413 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:07:09 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:07:09 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:07:09 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:07:09.958 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:07:10 np0005548731 nova_compute[232433]: 2025-12-06 08:07:10.839 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:07:11 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:07:11 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:07:11 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:07:11.351 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:07:11 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:07:11 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:07:11 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:07:11.960 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:07:13 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:07:13 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:07:13 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:07:13.353 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:07:13 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:07:13 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:07:13 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:07:13.961 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:07:14 np0005548731 nova_compute[232433]: 2025-12-06 08:07:14.602 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:07:14 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e413 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:07:15 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:07:15 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:07:15 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:07:15.355 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:07:15 np0005548731 nova_compute[232433]: 2025-12-06 08:07:15.841 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:07:15 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:07:15 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:07:15 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:07:15.963 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:07:17 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:07:17 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:07:17 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:07:17.358 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:07:17 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:07:17 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:07:17 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:07:17.965 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:07:19 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:07:19 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:07:19 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:07:19.361 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:07:19 np0005548731 nova_compute[232433]: 2025-12-06 08:07:19.604 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:07:19 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e413 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:07:19 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:07:19 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:07:19 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:07:19.971 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:07:20 np0005548731 nova_compute[232433]: 2025-12-06 08:07:20.843 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:07:21 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:07:21 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:07:21 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:07:21.364 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:07:21 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:07:21 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:07:21 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:07:21.973 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:07:22 np0005548731 podman[327795]: 2025-12-06 08:07:22.898615993 +0000 UTC m=+0.056354478 container health_status ae4fd08cdec31d6ae2a6b91ffff7deb6ca5a8375cb52ee4b685cc6f25796824e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=multipathd, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team)
Dec  6 03:07:22 np0005548731 podman[327793]: 2025-12-06 08:07:22.913249792 +0000 UTC m=+0.076187245 container health_status 26c2ce9bdb83c6db5ccad7f5b2a7cb4e9354ab0235ad2f942ea42c8feda2142b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Dec  6 03:07:22 np0005548731 podman[327794]: 2025-12-06 08:07:22.92262894 +0000 UTC m=+0.083552753 container health_status a118c3becf56cfbcae208065771ebf10a65162bb7b96389971293e534823fb78 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=ovn_controller, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_id=ovn_controller)
Dec  6 03:07:23 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:07:23 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:07:23 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:07:23.367 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:07:23 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:07:23 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:07:23 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:07:23.975 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:07:24 np0005548731 nova_compute[232433]: 2025-12-06 08:07:24.606 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:07:24 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e413 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:07:25 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:07:25 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:07:25 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:07:25.370 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:07:25 np0005548731 nova_compute[232433]: 2025-12-06 08:07:25.891 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:07:25 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:07:25 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:07:25 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:07:25.977 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:07:27 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:07:27 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:07:27 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:07:27.373 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:07:27 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:07:27.395 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=85, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:ec:b3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '32:72:e7:89:e0:7d'}, ipsec=False) old=SB_Global(nb_cfg=84) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 03:07:27 np0005548731 nova_compute[232433]: 2025-12-06 08:07:27.395 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:07:27 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:07:27.396 143965 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec  6 03:07:27 np0005548731 ceph-mon[77458]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #148. Immutable memtables: 0.
Dec  6 03:07:27 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:07:27.834598) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec  6 03:07:27 np0005548731 ceph-mon[77458]: rocksdb: [db/flush_job.cc:856] [default] [JOB 93] Flushing memtable with next log file: 148
Dec  6 03:07:27 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765008447834652, "job": 93, "event": "flush_started", "num_memtables": 1, "num_entries": 2356, "num_deletes": 251, "total_data_size": 5709715, "memory_usage": 5759200, "flush_reason": "Manual Compaction"}
Dec  6 03:07:27 np0005548731 ceph-mon[77458]: rocksdb: [db/flush_job.cc:885] [default] [JOB 93] Level-0 flush table #149: started
Dec  6 03:07:27 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765008447852667, "cf_name": "default", "job": 93, "event": "table_file_creation", "file_number": 149, "file_size": 3722802, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 72943, "largest_seqno": 75294, "table_properties": {"data_size": 3713404, "index_size": 5891, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2437, "raw_key_size": 19496, "raw_average_key_size": 20, "raw_value_size": 3694645, "raw_average_value_size": 3848, "num_data_blocks": 258, "num_entries": 960, "num_filter_entries": 960, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765008237, "oldest_key_time": 1765008237, "file_creation_time": 1765008447, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bc01f9a1-fda8-4008-aee1-1fa5ff4371a1", "db_session_id": "CWBL8WMDM4D79BU86E9T", "orig_file_number": 149, "seqno_to_time_mapping": "N/A"}}
Dec  6 03:07:27 np0005548731 ceph-mon[77458]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 93] Flush lasted 18144 microseconds, and 7408 cpu microseconds.
Dec  6 03:07:27 np0005548731 ceph-mon[77458]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  6 03:07:27 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:07:27.852746) [db/flush_job.cc:967] [default] [JOB 93] Level-0 flush table #149: 3722802 bytes OK
Dec  6 03:07:27 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:07:27.852768) [db/memtable_list.cc:519] [default] Level-0 commit table #149 started
Dec  6 03:07:27 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:07:27.854178) [db/memtable_list.cc:722] [default] Level-0 commit table #149: memtable #1 done
Dec  6 03:07:27 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:07:27.854191) EVENT_LOG_v1 {"time_micros": 1765008447854186, "job": 93, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec  6 03:07:27 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:07:27.854206) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec  6 03:07:27 np0005548731 ceph-mon[77458]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 93] Try to delete WAL files size 5699380, prev total WAL file size 5699380, number of live WAL files 2.
Dec  6 03:07:27 np0005548731 ceph-mon[77458]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000145.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  6 03:07:27 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:07:27.856052) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730036323735' seq:72057594037927935, type:22 .. '7061786F730036353237' seq:0, type:0; will stop at (end)
Dec  6 03:07:27 np0005548731 ceph-mon[77458]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 94] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec  6 03:07:27 np0005548731 ceph-mon[77458]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 93 Base level 0, inputs: [149(3635KB)], [147(11MB)]
Dec  6 03:07:27 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765008447856129, "job": 94, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [149], "files_L6": [147], "score": -1, "input_data_size": 16201801, "oldest_snapshot_seqno": -1}
Dec  6 03:07:27 np0005548731 ceph-mon[77458]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 94] Generated table #150: 10473 keys, 14220385 bytes, temperature: kUnknown
Dec  6 03:07:27 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765008447950839, "cf_name": "default", "job": 94, "event": "table_file_creation", "file_number": 150, "file_size": 14220385, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 14151813, "index_size": 41259, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 26245, "raw_key_size": 276248, "raw_average_key_size": 26, "raw_value_size": 13967352, "raw_average_value_size": 1333, "num_data_blocks": 1573, "num_entries": 10473, "num_filter_entries": 10473, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765002564, "oldest_key_time": 0, "file_creation_time": 1765008447, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bc01f9a1-fda8-4008-aee1-1fa5ff4371a1", "db_session_id": "CWBL8WMDM4D79BU86E9T", "orig_file_number": 150, "seqno_to_time_mapping": "N/A"}}
Dec  6 03:07:27 np0005548731 ceph-mon[77458]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  6 03:07:27 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:07:27.951144) [db/compaction/compaction_job.cc:1663] [default] [JOB 94] Compacted 1@0 + 1@6 files to L6 => 14220385 bytes
Dec  6 03:07:27 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:07:27.952209) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 170.8 rd, 149.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.6, 11.9 +0.0 blob) out(13.6 +0.0 blob), read-write-amplify(8.2) write-amplify(3.8) OK, records in: 10990, records dropped: 517 output_compression: NoCompression
Dec  6 03:07:27 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:07:27.952225) EVENT_LOG_v1 {"time_micros": 1765008447952217, "job": 94, "event": "compaction_finished", "compaction_time_micros": 94851, "compaction_time_cpu_micros": 34195, "output_level": 6, "num_output_files": 1, "total_output_size": 14220385, "num_input_records": 10990, "num_output_records": 10473, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec  6 03:07:27 np0005548731 ceph-mon[77458]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000149.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  6 03:07:27 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765008447952940, "job": 94, "event": "table_file_deletion", "file_number": 149}
Dec  6 03:07:27 np0005548731 ceph-mon[77458]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000147.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  6 03:07:27 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765008447955141, "job": 94, "event": "table_file_deletion", "file_number": 147}
Dec  6 03:07:27 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:07:27.855901) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 03:07:27 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:07:27.955247) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 03:07:27 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:07:27.955253) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 03:07:27 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:07:27.955255) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 03:07:27 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:07:27.955257) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 03:07:27 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:07:27.955259) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 03:07:27 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:07:27 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:07:27 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:07:27.979 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:07:29 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:07:29 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:07:29 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:07:29.376 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:07:29 np0005548731 nova_compute[232433]: 2025-12-06 08:07:29.658 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:07:29 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e413 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:07:29 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:07:29 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:07:29 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:07:29.981 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:07:30 np0005548731 nova_compute[232433]: 2025-12-06 08:07:30.894 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:07:31 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:07:31 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:07:31 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:07:31.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:07:31 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:07:31 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:07:31 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:07:31.984 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:07:33 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:07:33 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:07:33 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:07:33.383 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:07:33 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Dec  6 03:07:33 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  6 03:07:33 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 03:07:33 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  6 03:07:33 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:07:33 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 03:07:33 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:07:33.986 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 03:07:34 np0005548731 nova_compute[232433]: 2025-12-06 08:07:34.660 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:07:34 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e413 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:07:35 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:07:35 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:07:35 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:07:35.386 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:07:35 np0005548731 nova_compute[232433]: 2025-12-06 08:07:35.896 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:07:35 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:07:35 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:07:35 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:07:35.987 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:07:37 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:07:37 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:07:37 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:07:37.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:07:37 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:07:37.398 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=9f96b960-b4f2-40bd-ae99-08121f5e8b78, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '85'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 03:07:37 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:07:37 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:07:37 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:07:37.990 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:07:39 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:07:39 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:07:39 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:07:39.392 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:07:39 np0005548731 nova_compute[232433]: 2025-12-06 08:07:39.662 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:07:39 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e413 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:07:39 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:07:39 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:07:39 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:07:39.992 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:07:40 np0005548731 nova_compute[232433]: 2025-12-06 08:07:40.105 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:07:40 np0005548731 nova_compute[232433]: 2025-12-06 08:07:40.106 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  6 03:07:40 np0005548731 nova_compute[232433]: 2025-12-06 08:07:40.107 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  6 03:07:40 np0005548731 nova_compute[232433]: 2025-12-06 08:07:40.292 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Dec  6 03:07:40 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 03:07:40 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 03:07:40 np0005548731 nova_compute[232433]: 2025-12-06 08:07:40.441 232437 DEBUG oslo_concurrency.lockutils [None req-21674c9c-f744-4119-909c-93082ae35154 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Acquiring lock "48c1afa0-f521-4427-888c-ca46a12e88e3" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:07:40 np0005548731 nova_compute[232433]: 2025-12-06 08:07:40.442 232437 DEBUG oslo_concurrency.lockutils [None req-21674c9c-f744-4119-909c-93082ae35154 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Lock "48c1afa0-f521-4427-888c-ca46a12e88e3" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:07:40 np0005548731 nova_compute[232433]: 2025-12-06 08:07:40.565 232437 DEBUG nova.compute.manager [None req-21674c9c-f744-4119-909c-93082ae35154 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 48c1afa0-f521-4427-888c-ca46a12e88e3] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Dec  6 03:07:40 np0005548731 nova_compute[232433]: 2025-12-06 08:07:40.728 232437 DEBUG oslo_concurrency.lockutils [None req-21674c9c-f744-4119-909c-93082ae35154 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:07:40 np0005548731 nova_compute[232433]: 2025-12-06 08:07:40.729 232437 DEBUG oslo_concurrency.lockutils [None req-21674c9c-f744-4119-909c-93082ae35154 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:07:40 np0005548731 nova_compute[232433]: 2025-12-06 08:07:40.736 232437 DEBUG nova.virt.hardware [None req-21674c9c-f744-4119-909c-93082ae35154 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Dec  6 03:07:40 np0005548731 nova_compute[232433]: 2025-12-06 08:07:40.737 232437 INFO nova.compute.claims [None req-21674c9c-f744-4119-909c-93082ae35154 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 48c1afa0-f521-4427-888c-ca46a12e88e3] Claim successful on node compute-2.ctlplane.example.com#033[00m
Dec  6 03:07:40 np0005548731 nova_compute[232433]: 2025-12-06 08:07:40.896 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:07:41 np0005548731 nova_compute[232433]: 2025-12-06 08:07:41.012 232437 DEBUG oslo_concurrency.processutils [None req-21674c9c-f744-4119-909c-93082ae35154 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 03:07:41 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:07:41 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:07:41 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:07:41.396 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:07:41 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 03:07:41 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3191702422' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 03:07:41 np0005548731 nova_compute[232433]: 2025-12-06 08:07:41.432 232437 DEBUG oslo_concurrency.processutils [None req-21674c9c-f744-4119-909c-93082ae35154 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.420s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 03:07:41 np0005548731 nova_compute[232433]: 2025-12-06 08:07:41.437 232437 DEBUG nova.compute.provider_tree [None req-21674c9c-f744-4119-909c-93082ae35154 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Inventory has not changed in ProviderTree for provider: 6d00757a-082f-486d-ae84-869a2ba2e6e7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  6 03:07:41 np0005548731 nova_compute[232433]: 2025-12-06 08:07:41.495 232437 DEBUG nova.scheduler.client.report [None req-21674c9c-f744-4119-909c-93082ae35154 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Inventory has not changed for provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  6 03:07:41 np0005548731 nova_compute[232433]: 2025-12-06 08:07:41.754 232437 DEBUG oslo_concurrency.lockutils [None req-21674c9c-f744-4119-909c-93082ae35154 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.025s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:07:41 np0005548731 nova_compute[232433]: 2025-12-06 08:07:41.755 232437 DEBUG nova.compute.manager [None req-21674c9c-f744-4119-909c-93082ae35154 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 48c1afa0-f521-4427-888c-ca46a12e88e3] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Dec  6 03:07:41 np0005548731 nova_compute[232433]: 2025-12-06 08:07:41.860 232437 DEBUG nova.compute.manager [None req-21674c9c-f744-4119-909c-93082ae35154 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 48c1afa0-f521-4427-888c-ca46a12e88e3] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Dec  6 03:07:41 np0005548731 nova_compute[232433]: 2025-12-06 08:07:41.861 232437 DEBUG nova.network.neutron [None req-21674c9c-f744-4119-909c-93082ae35154 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 48c1afa0-f521-4427-888c-ca46a12e88e3] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Dec  6 03:07:41 np0005548731 nova_compute[232433]: 2025-12-06 08:07:41.976 232437 INFO nova.virt.libvirt.driver [None req-21674c9c-f744-4119-909c-93082ae35154 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 48c1afa0-f521-4427-888c-ca46a12e88e3] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Dec  6 03:07:41 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:07:41 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:07:41 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:07:41.995 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:07:42 np0005548731 nova_compute[232433]: 2025-12-06 08:07:42.042 232437 DEBUG nova.compute.manager [None req-21674c9c-f744-4119-909c-93082ae35154 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 48c1afa0-f521-4427-888c-ca46a12e88e3] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Dec  6 03:07:42 np0005548731 nova_compute[232433]: 2025-12-06 08:07:42.166 232437 DEBUG nova.compute.manager [None req-21674c9c-f744-4119-909c-93082ae35154 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 48c1afa0-f521-4427-888c-ca46a12e88e3] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Dec  6 03:07:42 np0005548731 nova_compute[232433]: 2025-12-06 08:07:42.167 232437 DEBUG nova.virt.libvirt.driver [None req-21674c9c-f744-4119-909c-93082ae35154 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 48c1afa0-f521-4427-888c-ca46a12e88e3] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Dec  6 03:07:42 np0005548731 nova_compute[232433]: 2025-12-06 08:07:42.167 232437 INFO nova.virt.libvirt.driver [None req-21674c9c-f744-4119-909c-93082ae35154 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 48c1afa0-f521-4427-888c-ca46a12e88e3] Creating image(s)#033[00m
Dec  6 03:07:42 np0005548731 nova_compute[232433]: 2025-12-06 08:07:42.196 232437 DEBUG nova.storage.rbd_utils [None req-21674c9c-f744-4119-909c-93082ae35154 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] rbd image 48c1afa0-f521-4427-888c-ca46a12e88e3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 03:07:42 np0005548731 nova_compute[232433]: 2025-12-06 08:07:42.224 232437 DEBUG nova.storage.rbd_utils [None req-21674c9c-f744-4119-909c-93082ae35154 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] rbd image 48c1afa0-f521-4427-888c-ca46a12e88e3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 03:07:42 np0005548731 nova_compute[232433]: 2025-12-06 08:07:42.250 232437 DEBUG nova.storage.rbd_utils [None req-21674c9c-f744-4119-909c-93082ae35154 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] rbd image 48c1afa0-f521-4427-888c-ca46a12e88e3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 03:07:42 np0005548731 nova_compute[232433]: 2025-12-06 08:07:42.254 232437 DEBUG oslo_concurrency.processutils [None req-21674c9c-f744-4119-909c-93082ae35154 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 03:07:42 np0005548731 nova_compute[232433]: 2025-12-06 08:07:42.328 232437 DEBUG oslo_concurrency.processutils [None req-21674c9c-f744-4119-909c-93082ae35154 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json" returned: 0 in 0.074s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 03:07:42 np0005548731 nova_compute[232433]: 2025-12-06 08:07:42.329 232437 DEBUG oslo_concurrency.lockutils [None req-21674c9c-f744-4119-909c-93082ae35154 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Acquiring lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:07:42 np0005548731 nova_compute[232433]: 2025-12-06 08:07:42.330 232437 DEBUG oslo_concurrency.lockutils [None req-21674c9c-f744-4119-909c-93082ae35154 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:07:42 np0005548731 nova_compute[232433]: 2025-12-06 08:07:42.330 232437 DEBUG oslo_concurrency.lockutils [None req-21674c9c-f744-4119-909c-93082ae35154 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:07:42 np0005548731 nova_compute[232433]: 2025-12-06 08:07:42.355 232437 DEBUG nova.storage.rbd_utils [None req-21674c9c-f744-4119-909c-93082ae35154 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] rbd image 48c1afa0-f521-4427-888c-ca46a12e88e3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 03:07:42 np0005548731 nova_compute[232433]: 2025-12-06 08:07:42.358 232437 DEBUG oslo_concurrency.processutils [None req-21674c9c-f744-4119-909c-93082ae35154 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef 48c1afa0-f521-4427-888c-ca46a12e88e3_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 03:07:42 np0005548731 nova_compute[232433]: 2025-12-06 08:07:42.717 232437 DEBUG nova.policy [None req-21674c9c-f744-4119-909c-93082ae35154 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'd5359905348247d0b9b5b95982e890bb', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'f4735a799c84437b9dd4ea8778ad2fbb', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Dec  6 03:07:42 np0005548731 nova_compute[232433]: 2025-12-06 08:07:42.822 232437 DEBUG oslo_concurrency.processutils [None req-21674c9c-f744-4119-909c-93082ae35154 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef 48c1afa0-f521-4427-888c-ca46a12e88e3_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.465s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 03:07:42 np0005548731 nova_compute[232433]: 2025-12-06 08:07:42.896 232437 DEBUG nova.storage.rbd_utils [None req-21674c9c-f744-4119-909c-93082ae35154 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] resizing rbd image 48c1afa0-f521-4427-888c-ca46a12e88e3_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Dec  6 03:07:43 np0005548731 nova_compute[232433]: 2025-12-06 08:07:43.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:07:43 np0005548731 nova_compute[232433]: 2025-12-06 08:07:43.105 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:07:43 np0005548731 nova_compute[232433]: 2025-12-06 08:07:43.105 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  6 03:07:43 np0005548731 nova_compute[232433]: 2025-12-06 08:07:43.333 232437 DEBUG nova.objects.instance [None req-21674c9c-f744-4119-909c-93082ae35154 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Lazy-loading 'migration_context' on Instance uuid 48c1afa0-f521-4427-888c-ca46a12e88e3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 03:07:43 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:07:43 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:07:43 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:07:43.398 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:07:43 np0005548731 nova_compute[232433]: 2025-12-06 08:07:43.512 232437 DEBUG nova.virt.libvirt.driver [None req-21674c9c-f744-4119-909c-93082ae35154 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 48c1afa0-f521-4427-888c-ca46a12e88e3] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Dec  6 03:07:43 np0005548731 nova_compute[232433]: 2025-12-06 08:07:43.512 232437 DEBUG nova.virt.libvirt.driver [None req-21674c9c-f744-4119-909c-93082ae35154 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 48c1afa0-f521-4427-888c-ca46a12e88e3] Ensure instance console log exists: /var/lib/nova/instances/48c1afa0-f521-4427-888c-ca46a12e88e3/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec  6 03:07:43 np0005548731 nova_compute[232433]: 2025-12-06 08:07:43.513 232437 DEBUG oslo_concurrency.lockutils [None req-21674c9c-f744-4119-909c-93082ae35154 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:07:43 np0005548731 nova_compute[232433]: 2025-12-06 08:07:43.513 232437 DEBUG oslo_concurrency.lockutils [None req-21674c9c-f744-4119-909c-93082ae35154 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:07:43 np0005548731 nova_compute[232433]: 2025-12-06 08:07:43.513 232437 DEBUG oslo_concurrency.lockutils [None req-21674c9c-f744-4119-909c-93082ae35154 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:07:43 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:07:43 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:07:43 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:07:43.998 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:07:44 np0005548731 nova_compute[232433]: 2025-12-06 08:07:44.665 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:07:44 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e413 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:07:45 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:07:45 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:07:45 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:07:45.400 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:07:45 np0005548731 nova_compute[232433]: 2025-12-06 08:07:45.897 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:07:46 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:07:46 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:07:46 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:07:46.000 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:07:46 np0005548731 nova_compute[232433]: 2025-12-06 08:07:46.100 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:07:46 np0005548731 nova_compute[232433]: 2025-12-06 08:07:46.392 232437 DEBUG nova.network.neutron [None req-21674c9c-f744-4119-909c-93082ae35154 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 48c1afa0-f521-4427-888c-ca46a12e88e3] Successfully created port: 9f4a6e70-da76-4bbc-8f15-2c12a822a1f5 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Dec  6 03:07:47 np0005548731 nova_compute[232433]: 2025-12-06 08:07:47.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:07:47 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:07:47 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:07:47 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:07:47.403 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:07:47 np0005548731 nova_compute[232433]: 2025-12-06 08:07:47.545 232437 DEBUG nova.network.neutron [None req-21674c9c-f744-4119-909c-93082ae35154 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 48c1afa0-f521-4427-888c-ca46a12e88e3] Successfully updated port: 9f4a6e70-da76-4bbc-8f15-2c12a822a1f5 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Dec  6 03:07:47 np0005548731 nova_compute[232433]: 2025-12-06 08:07:47.610 232437 DEBUG oslo_concurrency.lockutils [None req-21674c9c-f744-4119-909c-93082ae35154 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Acquiring lock "refresh_cache-48c1afa0-f521-4427-888c-ca46a12e88e3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 03:07:47 np0005548731 nova_compute[232433]: 2025-12-06 08:07:47.611 232437 DEBUG oslo_concurrency.lockutils [None req-21674c9c-f744-4119-909c-93082ae35154 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Acquired lock "refresh_cache-48c1afa0-f521-4427-888c-ca46a12e88e3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 03:07:47 np0005548731 nova_compute[232433]: 2025-12-06 08:07:47.611 232437 DEBUG nova.network.neutron [None req-21674c9c-f744-4119-909c-93082ae35154 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 48c1afa0-f521-4427-888c-ca46a12e88e3] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec  6 03:07:47 np0005548731 nova_compute[232433]: 2025-12-06 08:07:47.760 232437 DEBUG nova.network.neutron [None req-21674c9c-f744-4119-909c-93082ae35154 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 48c1afa0-f521-4427-888c-ca46a12e88e3] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec  6 03:07:47 np0005548731 nova_compute[232433]: 2025-12-06 08:07:47.771 232437 DEBUG nova.compute.manager [req-6608da41-c817-40f0-bfbb-2cd7388c60c6 req-41e26c74-0b25-4f67-aa2c-6c62d8ed5566 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 48c1afa0-f521-4427-888c-ca46a12e88e3] Received event network-changed-9f4a6e70-da76-4bbc-8f15-2c12a822a1f5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 03:07:47 np0005548731 nova_compute[232433]: 2025-12-06 08:07:47.772 232437 DEBUG nova.compute.manager [req-6608da41-c817-40f0-bfbb-2cd7388c60c6 req-41e26c74-0b25-4f67-aa2c-6c62d8ed5566 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 48c1afa0-f521-4427-888c-ca46a12e88e3] Refreshing instance network info cache due to event network-changed-9f4a6e70-da76-4bbc-8f15-2c12a822a1f5. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  6 03:07:47 np0005548731 nova_compute[232433]: 2025-12-06 08:07:47.773 232437 DEBUG oslo_concurrency.lockutils [req-6608da41-c817-40f0-bfbb-2cd7388c60c6 req-41e26c74-0b25-4f67-aa2c-6c62d8ed5566 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-48c1afa0-f521-4427-888c-ca46a12e88e3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 03:07:48 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:07:48 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:07:48 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:07:48.002 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:07:48 np0005548731 nova_compute[232433]: 2025-12-06 08:07:48.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:07:48 np0005548731 nova_compute[232433]: 2025-12-06 08:07:48.961 232437 DEBUG nova.network.neutron [None req-21674c9c-f744-4119-909c-93082ae35154 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 48c1afa0-f521-4427-888c-ca46a12e88e3] Updating instance_info_cache with network_info: [{"id": "9f4a6e70-da76-4bbc-8f15-2c12a822a1f5", "address": "fa:16:3e:42:d0:27", "network": {"id": "b39fdb1c-6386-42dc-9c1d-e70684ee69f2", "bridge": "br-int", "label": "tempest-network-smoke--1171047386", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.20", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f4735a799c84437b9dd4ea8778ad2fbb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9f4a6e70-da", "ovs_interfaceid": "9f4a6e70-da76-4bbc-8f15-2c12a822a1f5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 03:07:48 np0005548731 nova_compute[232433]: 2025-12-06 08:07:48.982 232437 DEBUG oslo_concurrency.lockutils [None req-21674c9c-f744-4119-909c-93082ae35154 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Releasing lock "refresh_cache-48c1afa0-f521-4427-888c-ca46a12e88e3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 03:07:48 np0005548731 nova_compute[232433]: 2025-12-06 08:07:48.982 232437 DEBUG nova.compute.manager [None req-21674c9c-f744-4119-909c-93082ae35154 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 48c1afa0-f521-4427-888c-ca46a12e88e3] Instance network_info: |[{"id": "9f4a6e70-da76-4bbc-8f15-2c12a822a1f5", "address": "fa:16:3e:42:d0:27", "network": {"id": "b39fdb1c-6386-42dc-9c1d-e70684ee69f2", "bridge": "br-int", "label": "tempest-network-smoke--1171047386", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.20", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f4735a799c84437b9dd4ea8778ad2fbb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9f4a6e70-da", "ovs_interfaceid": "9f4a6e70-da76-4bbc-8f15-2c12a822a1f5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Dec  6 03:07:48 np0005548731 nova_compute[232433]: 2025-12-06 08:07:48.983 232437 DEBUG oslo_concurrency.lockutils [req-6608da41-c817-40f0-bfbb-2cd7388c60c6 req-41e26c74-0b25-4f67-aa2c-6c62d8ed5566 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-48c1afa0-f521-4427-888c-ca46a12e88e3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 03:07:48 np0005548731 nova_compute[232433]: 2025-12-06 08:07:48.983 232437 DEBUG nova.network.neutron [req-6608da41-c817-40f0-bfbb-2cd7388c60c6 req-41e26c74-0b25-4f67-aa2c-6c62d8ed5566 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 48c1afa0-f521-4427-888c-ca46a12e88e3] Refreshing network info cache for port 9f4a6e70-da76-4bbc-8f15-2c12a822a1f5 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  6 03:07:48 np0005548731 nova_compute[232433]: 2025-12-06 08:07:48.986 232437 DEBUG nova.virt.libvirt.driver [None req-21674c9c-f744-4119-909c-93082ae35154 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 48c1afa0-f521-4427-888c-ca46a12e88e3] Start _get_guest_xml network_info=[{"id": "9f4a6e70-da76-4bbc-8f15-2c12a822a1f5", "address": "fa:16:3e:42:d0:27", "network": {"id": "b39fdb1c-6386-42dc-9c1d-e70684ee69f2", "bridge": "br-int", "label": "tempest-network-smoke--1171047386", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.20", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f4735a799c84437b9dd4ea8778ad2fbb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9f4a6e70-da", "ovs_interfaceid": "9f4a6e70-da76-4bbc-8f15-2c12a822a1f5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'device_type': 'disk', 'encryption_secret_uuid': None, 'disk_bus': 'virtio', 'boot_index': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'encryption_options': None, 'size': 0, 'encrypted': False, 'image_id': '6efab05d-c7cf-4770-a5c3-c806a2739063'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec  6 03:07:48 np0005548731 nova_compute[232433]: 2025-12-06 08:07:48.991 232437 WARNING nova.virt.libvirt.driver [None req-21674c9c-f744-4119-909c-93082ae35154 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  6 03:07:48 np0005548731 nova_compute[232433]: 2025-12-06 08:07:48.998 232437 DEBUG nova.virt.libvirt.host [None req-21674c9c-f744-4119-909c-93082ae35154 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec  6 03:07:48 np0005548731 nova_compute[232433]: 2025-12-06 08:07:48.998 232437 DEBUG nova.virt.libvirt.host [None req-21674c9c-f744-4119-909c-93082ae35154 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec  6 03:07:49 np0005548731 nova_compute[232433]: 2025-12-06 08:07:49.005 232437 DEBUG nova.virt.libvirt.host [None req-21674c9c-f744-4119-909c-93082ae35154 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec  6 03:07:49 np0005548731 nova_compute[232433]: 2025-12-06 08:07:49.005 232437 DEBUG nova.virt.libvirt.host [None req-21674c9c-f744-4119-909c-93082ae35154 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec  6 03:07:49 np0005548731 nova_compute[232433]: 2025-12-06 08:07:49.007 232437 DEBUG nova.virt.libvirt.driver [None req-21674c9c-f744-4119-909c-93082ae35154 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec  6 03:07:49 np0005548731 nova_compute[232433]: 2025-12-06 08:07:49.007 232437 DEBUG nova.virt.hardware [None req-21674c9c-f744-4119-909c-93082ae35154 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-06T06:56:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='25848a18-11d9-4f11-80b5-5d005675c76d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec  6 03:07:49 np0005548731 nova_compute[232433]: 2025-12-06 08:07:49.007 232437 DEBUG nova.virt.hardware [None req-21674c9c-f744-4119-909c-93082ae35154 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec  6 03:07:49 np0005548731 nova_compute[232433]: 2025-12-06 08:07:49.008 232437 DEBUG nova.virt.hardware [None req-21674c9c-f744-4119-909c-93082ae35154 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec  6 03:07:49 np0005548731 nova_compute[232433]: 2025-12-06 08:07:49.008 232437 DEBUG nova.virt.hardware [None req-21674c9c-f744-4119-909c-93082ae35154 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec  6 03:07:49 np0005548731 nova_compute[232433]: 2025-12-06 08:07:49.008 232437 DEBUG nova.virt.hardware [None req-21674c9c-f744-4119-909c-93082ae35154 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec  6 03:07:49 np0005548731 nova_compute[232433]: 2025-12-06 08:07:49.008 232437 DEBUG nova.virt.hardware [None req-21674c9c-f744-4119-909c-93082ae35154 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec  6 03:07:49 np0005548731 nova_compute[232433]: 2025-12-06 08:07:49.008 232437 DEBUG nova.virt.hardware [None req-21674c9c-f744-4119-909c-93082ae35154 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec  6 03:07:49 np0005548731 nova_compute[232433]: 2025-12-06 08:07:49.009 232437 DEBUG nova.virt.hardware [None req-21674c9c-f744-4119-909c-93082ae35154 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec  6 03:07:49 np0005548731 nova_compute[232433]: 2025-12-06 08:07:49.009 232437 DEBUG nova.virt.hardware [None req-21674c9c-f744-4119-909c-93082ae35154 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec  6 03:07:49 np0005548731 nova_compute[232433]: 2025-12-06 08:07:49.009 232437 DEBUG nova.virt.hardware [None req-21674c9c-f744-4119-909c-93082ae35154 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec  6 03:07:49 np0005548731 nova_compute[232433]: 2025-12-06 08:07:49.009 232437 DEBUG nova.virt.hardware [None req-21674c9c-f744-4119-909c-93082ae35154 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec  6 03:07:49 np0005548731 nova_compute[232433]: 2025-12-06 08:07:49.012 232437 DEBUG oslo_concurrency.processutils [None req-21674c9c-f744-4119-909c-93082ae35154 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 03:07:49 np0005548731 nova_compute[232433]: 2025-12-06 08:07:49.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:07:49 np0005548731 nova_compute[232433]: 2025-12-06 08:07:49.135 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:07:49 np0005548731 nova_compute[232433]: 2025-12-06 08:07:49.136 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:07:49 np0005548731 nova_compute[232433]: 2025-12-06 08:07:49.137 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:07:49 np0005548731 nova_compute[232433]: 2025-12-06 08:07:49.137 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  6 03:07:49 np0005548731 nova_compute[232433]: 2025-12-06 08:07:49.138 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 03:07:49 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:07:49 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 03:07:49 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:07:49.406 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 03:07:49 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  6 03:07:49 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/856270045' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  6 03:07:49 np0005548731 nova_compute[232433]: 2025-12-06 08:07:49.510 232437 DEBUG oslo_concurrency.processutils [None req-21674c9c-f744-4119-909c-93082ae35154 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.498s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 03:07:49 np0005548731 nova_compute[232433]: 2025-12-06 08:07:49.535 232437 DEBUG nova.storage.rbd_utils [None req-21674c9c-f744-4119-909c-93082ae35154 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] rbd image 48c1afa0-f521-4427-888c-ca46a12e88e3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 03:07:49 np0005548731 nova_compute[232433]: 2025-12-06 08:07:49.539 232437 DEBUG oslo_concurrency.processutils [None req-21674c9c-f744-4119-909c-93082ae35154 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 03:07:49 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 03:07:49 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3251160969' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 03:07:49 np0005548731 nova_compute[232433]: 2025-12-06 08:07:49.595 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.456s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 03:07:49 np0005548731 nova_compute[232433]: 2025-12-06 08:07:49.666 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:07:49 np0005548731 nova_compute[232433]: 2025-12-06 08:07:49.737 232437 WARNING nova.virt.libvirt.driver [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  6 03:07:49 np0005548731 nova_compute[232433]: 2025-12-06 08:07:49.738 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4155MB free_disk=20.87627410888672GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  6 03:07:49 np0005548731 nova_compute[232433]: 2025-12-06 08:07:49.738 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:07:49 np0005548731 nova_compute[232433]: 2025-12-06 08:07:49.739 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:07:49 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e413 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:07:49 np0005548731 nova_compute[232433]: 2025-12-06 08:07:49.820 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Instance 48c1afa0-f521-4427-888c-ca46a12e88e3 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  6 03:07:49 np0005548731 nova_compute[232433]: 2025-12-06 08:07:49.821 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  6 03:07:49 np0005548731 nova_compute[232433]: 2025-12-06 08:07:49.821 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  6 03:07:49 np0005548731 nova_compute[232433]: 2025-12-06 08:07:49.862 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 03:07:49 np0005548731 nova_compute[232433]: 2025-12-06 08:07:49.972 232437 DEBUG oslo_concurrency.processutils [None req-21674c9c-f744-4119-909c-93082ae35154 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.433s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 03:07:49 np0005548731 nova_compute[232433]: 2025-12-06 08:07:49.974 232437 DEBUG nova.virt.libvirt.vif [None req-21674c9c-f744-4119-909c-93082ae35154 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T08:07:36Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1530341563',display_name='tempest-TestNetworkBasicOps-server-1530341563',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1530341563',id=193,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGftxVSaWXFdw3+0W4YAJPVSKA5WztiGP+RC6dBXTWM2Ix2Cz1PI0an1VsWLvCpo+53Lcenm7aZa+hjh2xlSJWgRCDczZ3bTiz1yN2VJPN429Rd9uLpOo5kFaWBe4vrMGg==',key_name='tempest-TestNetworkBasicOps-243125875',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f4735a799c84437b9dd4ea8778ad2fbb',ramdisk_id='',reservation_id='r-pbazs4ud',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1435471576',owner_user_name='tempest-TestNetworkBasicOps-1435471576-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T08:07:42Z,user_data=None,user_id='d5359905348247d0b9b5b95982e890bb',uuid=48c1afa0-f521-4427-888c-ca46a12e88e3,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "9f4a6e70-da76-4bbc-8f15-2c12a822a1f5", "address": "fa:16:3e:42:d0:27", "network": {"id": "b39fdb1c-6386-42dc-9c1d-e70684ee69f2", "bridge": "br-int", "label": "tempest-network-smoke--1171047386", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.20", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "f4735a799c84437b9dd4ea8778ad2fbb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9f4a6e70-da", "ovs_interfaceid": "9f4a6e70-da76-4bbc-8f15-2c12a822a1f5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec  6 03:07:49 np0005548731 nova_compute[232433]: 2025-12-06 08:07:49.975 232437 DEBUG nova.network.os_vif_util [None req-21674c9c-f744-4119-909c-93082ae35154 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Converting VIF {"id": "9f4a6e70-da76-4bbc-8f15-2c12a822a1f5", "address": "fa:16:3e:42:d0:27", "network": {"id": "b39fdb1c-6386-42dc-9c1d-e70684ee69f2", "bridge": "br-int", "label": "tempest-network-smoke--1171047386", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.20", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f4735a799c84437b9dd4ea8778ad2fbb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9f4a6e70-da", "ovs_interfaceid": "9f4a6e70-da76-4bbc-8f15-2c12a822a1f5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 03:07:49 np0005548731 nova_compute[232433]: 2025-12-06 08:07:49.976 232437 DEBUG nova.network.os_vif_util [None req-21674c9c-f744-4119-909c-93082ae35154 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:42:d0:27,bridge_name='br-int',has_traffic_filtering=True,id=9f4a6e70-da76-4bbc-8f15-2c12a822a1f5,network=Network(b39fdb1c-6386-42dc-9c1d-e70684ee69f2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9f4a6e70-da') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 03:07:49 np0005548731 nova_compute[232433]: 2025-12-06 08:07:49.977 232437 DEBUG nova.objects.instance [None req-21674c9c-f744-4119-909c-93082ae35154 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Lazy-loading 'pci_devices' on Instance uuid 48c1afa0-f521-4427-888c-ca46a12e88e3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 03:07:50 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:07:50 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:07:50 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:07:50.006 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:07:50 np0005548731 nova_compute[232433]: 2025-12-06 08:07:50.078 232437 DEBUG nova.virt.libvirt.driver [None req-21674c9c-f744-4119-909c-93082ae35154 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 48c1afa0-f521-4427-888c-ca46a12e88e3] End _get_guest_xml xml=<domain type="kvm">
Dec  6 03:07:50 np0005548731 nova_compute[232433]:  <uuid>48c1afa0-f521-4427-888c-ca46a12e88e3</uuid>
Dec  6 03:07:50 np0005548731 nova_compute[232433]:  <name>instance-000000c1</name>
Dec  6 03:07:50 np0005548731 nova_compute[232433]:  <memory>131072</memory>
Dec  6 03:07:50 np0005548731 nova_compute[232433]:  <vcpu>1</vcpu>
Dec  6 03:07:50 np0005548731 nova_compute[232433]:  <metadata>
Dec  6 03:07:50 np0005548731 nova_compute[232433]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec  6 03:07:50 np0005548731 nova_compute[232433]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec  6 03:07:50 np0005548731 nova_compute[232433]:      <nova:name>tempest-TestNetworkBasicOps-server-1530341563</nova:name>
Dec  6 03:07:50 np0005548731 nova_compute[232433]:      <nova:creationTime>2025-12-06 08:07:48</nova:creationTime>
Dec  6 03:07:50 np0005548731 nova_compute[232433]:      <nova:flavor name="m1.nano">
Dec  6 03:07:50 np0005548731 nova_compute[232433]:        <nova:memory>128</nova:memory>
Dec  6 03:07:50 np0005548731 nova_compute[232433]:        <nova:disk>1</nova:disk>
Dec  6 03:07:50 np0005548731 nova_compute[232433]:        <nova:swap>0</nova:swap>
Dec  6 03:07:50 np0005548731 nova_compute[232433]:        <nova:ephemeral>0</nova:ephemeral>
Dec  6 03:07:50 np0005548731 nova_compute[232433]:        <nova:vcpus>1</nova:vcpus>
Dec  6 03:07:50 np0005548731 nova_compute[232433]:      </nova:flavor>
Dec  6 03:07:50 np0005548731 nova_compute[232433]:      <nova:owner>
Dec  6 03:07:50 np0005548731 nova_compute[232433]:        <nova:user uuid="d5359905348247d0b9b5b95982e890bb">tempest-TestNetworkBasicOps-1435471576-project-member</nova:user>
Dec  6 03:07:50 np0005548731 nova_compute[232433]:        <nova:project uuid="f4735a799c84437b9dd4ea8778ad2fbb">tempest-TestNetworkBasicOps-1435471576</nova:project>
Dec  6 03:07:50 np0005548731 nova_compute[232433]:      </nova:owner>
Dec  6 03:07:50 np0005548731 nova_compute[232433]:      <nova:root type="image" uuid="6efab05d-c7cf-4770-a5c3-c806a2739063"/>
Dec  6 03:07:50 np0005548731 nova_compute[232433]:      <nova:ports>
Dec  6 03:07:50 np0005548731 nova_compute[232433]:        <nova:port uuid="9f4a6e70-da76-4bbc-8f15-2c12a822a1f5">
Dec  6 03:07:50 np0005548731 nova_compute[232433]:          <nova:ip type="fixed" address="10.100.0.20" ipVersion="4"/>
Dec  6 03:07:50 np0005548731 nova_compute[232433]:        </nova:port>
Dec  6 03:07:50 np0005548731 nova_compute[232433]:      </nova:ports>
Dec  6 03:07:50 np0005548731 nova_compute[232433]:    </nova:instance>
Dec  6 03:07:50 np0005548731 nova_compute[232433]:  </metadata>
Dec  6 03:07:50 np0005548731 nova_compute[232433]:  <sysinfo type="smbios">
Dec  6 03:07:50 np0005548731 nova_compute[232433]:    <system>
Dec  6 03:07:50 np0005548731 nova_compute[232433]:      <entry name="manufacturer">RDO</entry>
Dec  6 03:07:50 np0005548731 nova_compute[232433]:      <entry name="product">OpenStack Compute</entry>
Dec  6 03:07:50 np0005548731 nova_compute[232433]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec  6 03:07:50 np0005548731 nova_compute[232433]:      <entry name="serial">48c1afa0-f521-4427-888c-ca46a12e88e3</entry>
Dec  6 03:07:50 np0005548731 nova_compute[232433]:      <entry name="uuid">48c1afa0-f521-4427-888c-ca46a12e88e3</entry>
Dec  6 03:07:50 np0005548731 nova_compute[232433]:      <entry name="family">Virtual Machine</entry>
Dec  6 03:07:50 np0005548731 nova_compute[232433]:    </system>
Dec  6 03:07:50 np0005548731 nova_compute[232433]:  </sysinfo>
Dec  6 03:07:50 np0005548731 nova_compute[232433]:  <os>
Dec  6 03:07:50 np0005548731 nova_compute[232433]:    <type arch="x86_64" machine="q35">hvm</type>
Dec  6 03:07:50 np0005548731 nova_compute[232433]:    <boot dev="hd"/>
Dec  6 03:07:50 np0005548731 nova_compute[232433]:    <smbios mode="sysinfo"/>
Dec  6 03:07:50 np0005548731 nova_compute[232433]:  </os>
Dec  6 03:07:50 np0005548731 nova_compute[232433]:  <features>
Dec  6 03:07:50 np0005548731 nova_compute[232433]:    <acpi/>
Dec  6 03:07:50 np0005548731 nova_compute[232433]:    <apic/>
Dec  6 03:07:50 np0005548731 nova_compute[232433]:    <vmcoreinfo/>
Dec  6 03:07:50 np0005548731 nova_compute[232433]:  </features>
Dec  6 03:07:50 np0005548731 nova_compute[232433]:  <clock offset="utc">
Dec  6 03:07:50 np0005548731 nova_compute[232433]:    <timer name="pit" tickpolicy="delay"/>
Dec  6 03:07:50 np0005548731 nova_compute[232433]:    <timer name="rtc" tickpolicy="catchup"/>
Dec  6 03:07:50 np0005548731 nova_compute[232433]:    <timer name="hpet" present="no"/>
Dec  6 03:07:50 np0005548731 nova_compute[232433]:  </clock>
Dec  6 03:07:50 np0005548731 nova_compute[232433]:  <cpu mode="custom" match="exact">
Dec  6 03:07:50 np0005548731 nova_compute[232433]:    <model>Nehalem</model>
Dec  6 03:07:50 np0005548731 nova_compute[232433]:    <topology sockets="1" cores="1" threads="1"/>
Dec  6 03:07:50 np0005548731 nova_compute[232433]:  </cpu>
Dec  6 03:07:50 np0005548731 nova_compute[232433]:  <devices>
Dec  6 03:07:50 np0005548731 nova_compute[232433]:    <disk type="network" device="disk">
Dec  6 03:07:50 np0005548731 nova_compute[232433]:      <driver type="raw" cache="none"/>
Dec  6 03:07:50 np0005548731 nova_compute[232433]:      <source protocol="rbd" name="vms/48c1afa0-f521-4427-888c-ca46a12e88e3_disk">
Dec  6 03:07:50 np0005548731 nova_compute[232433]:        <host name="192.168.122.100" port="6789"/>
Dec  6 03:07:50 np0005548731 nova_compute[232433]:        <host name="192.168.122.102" port="6789"/>
Dec  6 03:07:50 np0005548731 nova_compute[232433]:        <host name="192.168.122.101" port="6789"/>
Dec  6 03:07:50 np0005548731 nova_compute[232433]:      </source>
Dec  6 03:07:50 np0005548731 nova_compute[232433]:      <auth username="openstack">
Dec  6 03:07:50 np0005548731 nova_compute[232433]:        <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec  6 03:07:50 np0005548731 nova_compute[232433]:      </auth>
Dec  6 03:07:50 np0005548731 nova_compute[232433]:      <target dev="vda" bus="virtio"/>
Dec  6 03:07:50 np0005548731 nova_compute[232433]:    </disk>
Dec  6 03:07:50 np0005548731 nova_compute[232433]:    <disk type="network" device="cdrom">
Dec  6 03:07:50 np0005548731 nova_compute[232433]:      <driver type="raw" cache="none"/>
Dec  6 03:07:50 np0005548731 nova_compute[232433]:      <source protocol="rbd" name="vms/48c1afa0-f521-4427-888c-ca46a12e88e3_disk.config">
Dec  6 03:07:50 np0005548731 nova_compute[232433]:        <host name="192.168.122.100" port="6789"/>
Dec  6 03:07:50 np0005548731 nova_compute[232433]:        <host name="192.168.122.102" port="6789"/>
Dec  6 03:07:50 np0005548731 nova_compute[232433]:        <host name="192.168.122.101" port="6789"/>
Dec  6 03:07:50 np0005548731 nova_compute[232433]:      </source>
Dec  6 03:07:50 np0005548731 nova_compute[232433]:      <auth username="openstack">
Dec  6 03:07:50 np0005548731 nova_compute[232433]:        <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec  6 03:07:50 np0005548731 nova_compute[232433]:      </auth>
Dec  6 03:07:50 np0005548731 nova_compute[232433]:      <target dev="sda" bus="sata"/>
Dec  6 03:07:50 np0005548731 nova_compute[232433]:    </disk>
Dec  6 03:07:50 np0005548731 nova_compute[232433]:    <interface type="ethernet">
Dec  6 03:07:50 np0005548731 nova_compute[232433]:      <mac address="fa:16:3e:42:d0:27"/>
Dec  6 03:07:50 np0005548731 nova_compute[232433]:      <model type="virtio"/>
Dec  6 03:07:50 np0005548731 nova_compute[232433]:      <driver name="vhost" rx_queue_size="512"/>
Dec  6 03:07:50 np0005548731 nova_compute[232433]:      <mtu size="1442"/>
Dec  6 03:07:50 np0005548731 nova_compute[232433]:      <target dev="tap9f4a6e70-da"/>
Dec  6 03:07:50 np0005548731 nova_compute[232433]:    </interface>
Dec  6 03:07:50 np0005548731 nova_compute[232433]:    <serial type="pty">
Dec  6 03:07:50 np0005548731 nova_compute[232433]:      <log file="/var/lib/nova/instances/48c1afa0-f521-4427-888c-ca46a12e88e3/console.log" append="off"/>
Dec  6 03:07:50 np0005548731 nova_compute[232433]:    </serial>
Dec  6 03:07:50 np0005548731 nova_compute[232433]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec  6 03:07:50 np0005548731 nova_compute[232433]:    <video>
Dec  6 03:07:50 np0005548731 nova_compute[232433]:      <model type="virtio"/>
Dec  6 03:07:50 np0005548731 nova_compute[232433]:    </video>
Dec  6 03:07:50 np0005548731 nova_compute[232433]:    <input type="tablet" bus="usb"/>
Dec  6 03:07:50 np0005548731 nova_compute[232433]:    <rng model="virtio">
Dec  6 03:07:50 np0005548731 nova_compute[232433]:      <backend model="random">/dev/urandom</backend>
Dec  6 03:07:50 np0005548731 nova_compute[232433]:    </rng>
Dec  6 03:07:50 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root"/>
Dec  6 03:07:50 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:07:50 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:07:50 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:07:50 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:07:50 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:07:50 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:07:50 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:07:50 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:07:50 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:07:50 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:07:50 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:07:50 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:07:50 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:07:50 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:07:50 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:07:50 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:07:50 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:07:50 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:07:50 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:07:50 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:07:50 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:07:50 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:07:50 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:07:50 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:07:50 np0005548731 nova_compute[232433]:    <controller type="usb" index="0"/>
Dec  6 03:07:50 np0005548731 nova_compute[232433]:    <memballoon model="virtio">
Dec  6 03:07:50 np0005548731 nova_compute[232433]:      <stats period="10"/>
Dec  6 03:07:50 np0005548731 nova_compute[232433]:    </memballoon>
Dec  6 03:07:50 np0005548731 nova_compute[232433]:  </devices>
Dec  6 03:07:50 np0005548731 nova_compute[232433]: </domain>
Dec  6 03:07:50 np0005548731 nova_compute[232433]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec  6 03:07:50 np0005548731 nova_compute[232433]: 2025-12-06 08:07:50.084 232437 DEBUG nova.compute.manager [None req-21674c9c-f744-4119-909c-93082ae35154 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 48c1afa0-f521-4427-888c-ca46a12e88e3] Preparing to wait for external event network-vif-plugged-9f4a6e70-da76-4bbc-8f15-2c12a822a1f5 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec  6 03:07:50 np0005548731 nova_compute[232433]: 2025-12-06 08:07:50.084 232437 DEBUG oslo_concurrency.lockutils [None req-21674c9c-f744-4119-909c-93082ae35154 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Acquiring lock "48c1afa0-f521-4427-888c-ca46a12e88e3-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:07:50 np0005548731 nova_compute[232433]: 2025-12-06 08:07:50.085 232437 DEBUG oslo_concurrency.lockutils [None req-21674c9c-f744-4119-909c-93082ae35154 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Lock "48c1afa0-f521-4427-888c-ca46a12e88e3-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:07:50 np0005548731 nova_compute[232433]: 2025-12-06 08:07:50.085 232437 DEBUG oslo_concurrency.lockutils [None req-21674c9c-f744-4119-909c-93082ae35154 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Lock "48c1afa0-f521-4427-888c-ca46a12e88e3-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:07:50 np0005548731 nova_compute[232433]: 2025-12-06 08:07:50.086 232437 DEBUG nova.virt.libvirt.vif [None req-21674c9c-f744-4119-909c-93082ae35154 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T08:07:36Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1530341563',display_name='tempest-TestNetworkBasicOps-server-1530341563',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1530341563',id=193,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGftxVSaWXFdw3+0W4YAJPVSKA5WztiGP+RC6dBXTWM2Ix2Cz1PI0an1VsWLvCpo+53Lcenm7aZa+hjh2xlSJWgRCDczZ3bTiz1yN2VJPN429Rd9uLpOo5kFaWBe4vrMGg==',key_name='tempest-TestNetworkBasicOps-243125875',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f4735a799c84437b9dd4ea8778ad2fbb',ramdisk_id='',reservation_id='r-pbazs4ud',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1435471576',owner_user_name='tempest-TestNetworkBasicOps-1435471576-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T08:07:42Z,user_data=None,user_id='d5359905348247d0b9b5b95982e890bb',uuid=48c1afa0-f521-4427-888c-ca46a12e88e3,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "9f4a6e70-da76-4bbc-8f15-2c12a822a1f5", "address": "fa:16:3e:42:d0:27", "network": {"id": "b39fdb1c-6386-42dc-9c1d-e70684ee69f2", "bridge": "br-int", "label": "tempest-network-smoke--1171047386", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.20", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "f4735a799c84437b9dd4ea8778ad2fbb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9f4a6e70-da", "ovs_interfaceid": "9f4a6e70-da76-4bbc-8f15-2c12a822a1f5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec  6 03:07:50 np0005548731 nova_compute[232433]: 2025-12-06 08:07:50.086 232437 DEBUG nova.network.os_vif_util [None req-21674c9c-f744-4119-909c-93082ae35154 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Converting VIF {"id": "9f4a6e70-da76-4bbc-8f15-2c12a822a1f5", "address": "fa:16:3e:42:d0:27", "network": {"id": "b39fdb1c-6386-42dc-9c1d-e70684ee69f2", "bridge": "br-int", "label": "tempest-network-smoke--1171047386", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.20", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f4735a799c84437b9dd4ea8778ad2fbb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9f4a6e70-da", "ovs_interfaceid": "9f4a6e70-da76-4bbc-8f15-2c12a822a1f5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 03:07:50 np0005548731 nova_compute[232433]: 2025-12-06 08:07:50.087 232437 DEBUG nova.network.os_vif_util [None req-21674c9c-f744-4119-909c-93082ae35154 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:42:d0:27,bridge_name='br-int',has_traffic_filtering=True,id=9f4a6e70-da76-4bbc-8f15-2c12a822a1f5,network=Network(b39fdb1c-6386-42dc-9c1d-e70684ee69f2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9f4a6e70-da') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 03:07:50 np0005548731 nova_compute[232433]: 2025-12-06 08:07:50.087 232437 DEBUG os_vif [None req-21674c9c-f744-4119-909c-93082ae35154 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:42:d0:27,bridge_name='br-int',has_traffic_filtering=True,id=9f4a6e70-da76-4bbc-8f15-2c12a822a1f5,network=Network(b39fdb1c-6386-42dc-9c1d-e70684ee69f2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9f4a6e70-da') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec  6 03:07:50 np0005548731 nova_compute[232433]: 2025-12-06 08:07:50.088 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:07:50 np0005548731 nova_compute[232433]: 2025-12-06 08:07:50.089 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 03:07:50 np0005548731 nova_compute[232433]: 2025-12-06 08:07:50.089 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  6 03:07:50 np0005548731 nova_compute[232433]: 2025-12-06 08:07:50.092 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:07:50 np0005548731 nova_compute[232433]: 2025-12-06 08:07:50.092 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9f4a6e70-da, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 03:07:50 np0005548731 nova_compute[232433]: 2025-12-06 08:07:50.093 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap9f4a6e70-da, col_values=(('external_ids', {'iface-id': '9f4a6e70-da76-4bbc-8f15-2c12a822a1f5', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:42:d0:27', 'vm-uuid': '48c1afa0-f521-4427-888c-ca46a12e88e3'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 03:07:50 np0005548731 nova_compute[232433]: 2025-12-06 08:07:50.094 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:07:50 np0005548731 NetworkManager[49182]: <info>  [1765008470.0954] manager: (tap9f4a6e70-da): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/451)
Dec  6 03:07:50 np0005548731 nova_compute[232433]: 2025-12-06 08:07:50.097 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  6 03:07:50 np0005548731 nova_compute[232433]: 2025-12-06 08:07:50.100 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:07:50 np0005548731 nova_compute[232433]: 2025-12-06 08:07:50.101 232437 INFO os_vif [None req-21674c9c-f744-4119-909c-93082ae35154 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:42:d0:27,bridge_name='br-int',has_traffic_filtering=True,id=9f4a6e70-da76-4bbc-8f15-2c12a822a1f5,network=Network(b39fdb1c-6386-42dc-9c1d-e70684ee69f2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9f4a6e70-da')#033[00m
Dec  6 03:07:50 np0005548731 nova_compute[232433]: 2025-12-06 08:07:50.160 232437 DEBUG nova.virt.libvirt.driver [None req-21674c9c-f744-4119-909c-93082ae35154 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  6 03:07:50 np0005548731 nova_compute[232433]: 2025-12-06 08:07:50.161 232437 DEBUG nova.virt.libvirt.driver [None req-21674c9c-f744-4119-909c-93082ae35154 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  6 03:07:50 np0005548731 nova_compute[232433]: 2025-12-06 08:07:50.161 232437 DEBUG nova.virt.libvirt.driver [None req-21674c9c-f744-4119-909c-93082ae35154 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] No VIF found with MAC fa:16:3e:42:d0:27, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec  6 03:07:50 np0005548731 nova_compute[232433]: 2025-12-06 08:07:50.161 232437 INFO nova.virt.libvirt.driver [None req-21674c9c-f744-4119-909c-93082ae35154 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 48c1afa0-f521-4427-888c-ca46a12e88e3] Using config drive#033[00m
Dec  6 03:07:50 np0005548731 nova_compute[232433]: 2025-12-06 08:07:50.185 232437 DEBUG nova.storage.rbd_utils [None req-21674c9c-f744-4119-909c-93082ae35154 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] rbd image 48c1afa0-f521-4427-888c-ca46a12e88e3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 03:07:50 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 03:07:50 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/228762705' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 03:07:50 np0005548731 nova_compute[232433]: 2025-12-06 08:07:50.280 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.418s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 03:07:50 np0005548731 nova_compute[232433]: 2025-12-06 08:07:50.285 232437 DEBUG nova.compute.provider_tree [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Inventory has not changed in ProviderTree for provider: 6d00757a-082f-486d-ae84-869a2ba2e6e7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  6 03:07:50 np0005548731 nova_compute[232433]: 2025-12-06 08:07:50.303 232437 DEBUG nova.scheduler.client.report [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Inventory has not changed for provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  6 03:07:50 np0005548731 nova_compute[232433]: 2025-12-06 08:07:50.330 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  6 03:07:50 np0005548731 nova_compute[232433]: 2025-12-06 08:07:50.331 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.592s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:07:50 np0005548731 nova_compute[232433]: 2025-12-06 08:07:50.700 232437 INFO nova.virt.libvirt.driver [None req-21674c9c-f744-4119-909c-93082ae35154 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 48c1afa0-f521-4427-888c-ca46a12e88e3] Creating config drive at /var/lib/nova/instances/48c1afa0-f521-4427-888c-ca46a12e88e3/disk.config#033[00m
Dec  6 03:07:50 np0005548731 nova_compute[232433]: 2025-12-06 08:07:50.705 232437 DEBUG oslo_concurrency.processutils [None req-21674c9c-f744-4119-909c-93082ae35154 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/48c1afa0-f521-4427-888c-ca46a12e88e3/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpdrpjvby8 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 03:07:50 np0005548731 nova_compute[232433]: 2025-12-06 08:07:50.736 232437 DEBUG nova.network.neutron [req-6608da41-c817-40f0-bfbb-2cd7388c60c6 req-41e26c74-0b25-4f67-aa2c-6c62d8ed5566 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 48c1afa0-f521-4427-888c-ca46a12e88e3] Updated VIF entry in instance network info cache for port 9f4a6e70-da76-4bbc-8f15-2c12a822a1f5. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  6 03:07:50 np0005548731 nova_compute[232433]: 2025-12-06 08:07:50.737 232437 DEBUG nova.network.neutron [req-6608da41-c817-40f0-bfbb-2cd7388c60c6 req-41e26c74-0b25-4f67-aa2c-6c62d8ed5566 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 48c1afa0-f521-4427-888c-ca46a12e88e3] Updating instance_info_cache with network_info: [{"id": "9f4a6e70-da76-4bbc-8f15-2c12a822a1f5", "address": "fa:16:3e:42:d0:27", "network": {"id": "b39fdb1c-6386-42dc-9c1d-e70684ee69f2", "bridge": "br-int", "label": "tempest-network-smoke--1171047386", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.20", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f4735a799c84437b9dd4ea8778ad2fbb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9f4a6e70-da", "ovs_interfaceid": "9f4a6e70-da76-4bbc-8f15-2c12a822a1f5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 03:07:50 np0005548731 nova_compute[232433]: 2025-12-06 08:07:50.761 232437 DEBUG oslo_concurrency.lockutils [req-6608da41-c817-40f0-bfbb-2cd7388c60c6 req-41e26c74-0b25-4f67-aa2c-6c62d8ed5566 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-48c1afa0-f521-4427-888c-ca46a12e88e3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 03:07:50 np0005548731 nova_compute[232433]: 2025-12-06 08:07:50.846 232437 DEBUG oslo_concurrency.processutils [None req-21674c9c-f744-4119-909c-93082ae35154 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/48c1afa0-f521-4427-888c-ca46a12e88e3/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpdrpjvby8" returned: 0 in 0.140s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 03:07:50 np0005548731 nova_compute[232433]: 2025-12-06 08:07:50.877 232437 DEBUG nova.storage.rbd_utils [None req-21674c9c-f744-4119-909c-93082ae35154 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] rbd image 48c1afa0-f521-4427-888c-ca46a12e88e3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 03:07:50 np0005548731 nova_compute[232433]: 2025-12-06 08:07:50.881 232437 DEBUG oslo_concurrency.processutils [None req-21674c9c-f744-4119-909c-93082ae35154 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/48c1afa0-f521-4427-888c-ca46a12e88e3/disk.config 48c1afa0-f521-4427-888c-ca46a12e88e3_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 03:07:50 np0005548731 nova_compute[232433]: 2025-12-06 08:07:50.908 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:07:51 np0005548731 nova_compute[232433]: 2025-12-06 08:07:51.038 232437 DEBUG oslo_concurrency.processutils [None req-21674c9c-f744-4119-909c-93082ae35154 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/48c1afa0-f521-4427-888c-ca46a12e88e3/disk.config 48c1afa0-f521-4427-888c-ca46a12e88e3_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.157s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 03:07:51 np0005548731 nova_compute[232433]: 2025-12-06 08:07:51.039 232437 INFO nova.virt.libvirt.driver [None req-21674c9c-f744-4119-909c-93082ae35154 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 48c1afa0-f521-4427-888c-ca46a12e88e3] Deleting local config drive /var/lib/nova/instances/48c1afa0-f521-4427-888c-ca46a12e88e3/disk.config because it was imported into RBD.#033[00m
Dec  6 03:07:51 np0005548731 kernel: tap9f4a6e70-da: entered promiscuous mode
Dec  6 03:07:51 np0005548731 NetworkManager[49182]: <info>  [1765008471.0894] manager: (tap9f4a6e70-da): new Tun device (/org/freedesktop/NetworkManager/Devices/452)
Dec  6 03:07:51 np0005548731 nova_compute[232433]: 2025-12-06 08:07:51.087 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:07:51 np0005548731 ovn_controller[133927]: 2025-12-06T08:07:51Z|00977|binding|INFO|Claiming lport 9f4a6e70-da76-4bbc-8f15-2c12a822a1f5 for this chassis.
Dec  6 03:07:51 np0005548731 ovn_controller[133927]: 2025-12-06T08:07:51Z|00978|binding|INFO|9f4a6e70-da76-4bbc-8f15-2c12a822a1f5: Claiming fa:16:3e:42:d0:27 10.100.0.20
Dec  6 03:07:51 np0005548731 nova_compute[232433]: 2025-12-06 08:07:51.091 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:07:51 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:07:51.097 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:42:d0:27 10.100.0.20'], port_security=['fa:16:3e:42:d0:27 10.100.0.20'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.20/28', 'neutron:device_id': '48c1afa0-f521-4427-888c-ca46a12e88e3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b39fdb1c-6386-42dc-9c1d-e70684ee69f2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f4735a799c84437b9dd4ea8778ad2fbb', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'aad56a4a-2405-491c-a92d-879a428e508e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0019305c-8ebd-4d1b-ac3e-eb77d507b742, chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], logical_port=9f4a6e70-da76-4bbc-8f15-2c12a822a1f5) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 03:07:51 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:07:51.099 143965 INFO neutron.agent.ovn.metadata.agent [-] Port 9f4a6e70-da76-4bbc-8f15-2c12a822a1f5 in datapath b39fdb1c-6386-42dc-9c1d-e70684ee69f2 bound to our chassis#033[00m
Dec  6 03:07:51 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:07:51.100 143965 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network b39fdb1c-6386-42dc-9c1d-e70684ee69f2#033[00m
Dec  6 03:07:51 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:07:51.111 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[0e1804b7-48ed-4ab8-8186-155bbe1499b5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:07:51 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:07:51.112 143965 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapb39fdb1c-61 in ovnmeta-b39fdb1c-6386-42dc-9c1d-e70684ee69f2 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Dec  6 03:07:51 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:07:51.113 236668 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapb39fdb1c-60 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Dec  6 03:07:51 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:07:51.113 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[95a469fa-82e3-40a4-af12-01750b327d34]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:07:51 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:07:51.115 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[f97e726f-4977-4bd2-9966-78fb68a294a3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:07:51 np0005548731 systemd-udevd[328522]: Network interface NamePolicy= disabled on kernel command line.
Dec  6 03:07:51 np0005548731 systemd-machined[195355]: New machine qemu-98-instance-000000c1.
Dec  6 03:07:51 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:07:51.125 144079 DEBUG oslo.privsep.daemon [-] privsep: reply[c980e15d-0808-48c2-8b4d-feefe4ffecf7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:07:51 np0005548731 systemd[1]: Started Virtual Machine qemu-98-instance-000000c1.
Dec  6 03:07:51 np0005548731 ovn_controller[133927]: 2025-12-06T08:07:51Z|00979|binding|INFO|Setting lport 9f4a6e70-da76-4bbc-8f15-2c12a822a1f5 ovn-installed in OVS
Dec  6 03:07:51 np0005548731 ovn_controller[133927]: 2025-12-06T08:07:51Z|00980|binding|INFO|Setting lport 9f4a6e70-da76-4bbc-8f15-2c12a822a1f5 up in Southbound
Dec  6 03:07:51 np0005548731 NetworkManager[49182]: <info>  [1765008471.1838] device (tap9f4a6e70-da): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec  6 03:07:51 np0005548731 nova_compute[232433]: 2025-12-06 08:07:51.183 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:07:51 np0005548731 NetworkManager[49182]: <info>  [1765008471.1852] device (tap9f4a6e70-da): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec  6 03:07:51 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:07:51.185 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[927a2b81-3ad3-4b0c-9703-caefcbdb362a]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:07:51 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:07:51.217 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[ad06afd2-cc66-44d1-90fe-c7e9b7ac8d39]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:07:51 np0005548731 NetworkManager[49182]: <info>  [1765008471.2238] manager: (tapb39fdb1c-60): new Veth device (/org/freedesktop/NetworkManager/Devices/453)
Dec  6 03:07:51 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:07:51.222 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[a6b4b6df-5d8f-4a60-9fcf-ca3dcead54bd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:07:51 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:07:51.253 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[252b3f65-6427-4edb-a80d-3c0fc657a7a9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:07:51 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:07:51.258 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[65406b37-2ce2-45db-92c6-fa0e9b35e2d6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:07:51 np0005548731 NetworkManager[49182]: <info>  [1765008471.2807] device (tapb39fdb1c-60): carrier: link connected
Dec  6 03:07:51 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:07:51.287 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[5b69c0d0-cefd-44a7-b446-d0a6d833b82b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:07:51 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:07:51.303 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[e9544767-a7dc-4a55-b2ac-30e158329de1]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb39fdb1c-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:83:c2:84'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 296], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 873328, 'reachable_time': 29563, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 328554, 'error': None, 'target': 'ovnmeta-b39fdb1c-6386-42dc-9c1d-e70684ee69f2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:07:51 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:07:51.317 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[26518d34-af1f-4899-8933-d6e7c82e99f1]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe83:c284'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 873328, 'tstamp': 873328}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 328555, 'error': None, 'target': 'ovnmeta-b39fdb1c-6386-42dc-9c1d-e70684ee69f2', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:07:51 np0005548731 nova_compute[232433]: 2025-12-06 08:07:51.326 232437 DEBUG nova.compute.manager [req-1321ad89-5875-4716-97ea-21f77a994146 req-5ef504b7-d79c-4bd0-ba88-2ff03747defe 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 48c1afa0-f521-4427-888c-ca46a12e88e3] Received event network-vif-plugged-9f4a6e70-da76-4bbc-8f15-2c12a822a1f5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 03:07:51 np0005548731 nova_compute[232433]: 2025-12-06 08:07:51.327 232437 DEBUG oslo_concurrency.lockutils [req-1321ad89-5875-4716-97ea-21f77a994146 req-5ef504b7-d79c-4bd0-ba88-2ff03747defe 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "48c1afa0-f521-4427-888c-ca46a12e88e3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:07:51 np0005548731 nova_compute[232433]: 2025-12-06 08:07:51.327 232437 DEBUG oslo_concurrency.lockutils [req-1321ad89-5875-4716-97ea-21f77a994146 req-5ef504b7-d79c-4bd0-ba88-2ff03747defe 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "48c1afa0-f521-4427-888c-ca46a12e88e3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:07:51 np0005548731 nova_compute[232433]: 2025-12-06 08:07:51.327 232437 DEBUG oslo_concurrency.lockutils [req-1321ad89-5875-4716-97ea-21f77a994146 req-5ef504b7-d79c-4bd0-ba88-2ff03747defe 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "48c1afa0-f521-4427-888c-ca46a12e88e3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:07:51 np0005548731 nova_compute[232433]: 2025-12-06 08:07:51.327 232437 DEBUG nova.compute.manager [req-1321ad89-5875-4716-97ea-21f77a994146 req-5ef504b7-d79c-4bd0-ba88-2ff03747defe 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 48c1afa0-f521-4427-888c-ca46a12e88e3] Processing event network-vif-plugged-9f4a6e70-da76-4bbc-8f15-2c12a822a1f5 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Dec  6 03:07:51 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:07:51.330 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[ac2d0511-83aa-486e-a72c-056d1d2b440b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb39fdb1c-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:83:c2:84'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 296], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 873328, 'reachable_time': 29563, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 328556, 'error': None, 'target': 'ovnmeta-b39fdb1c-6386-42dc-9c1d-e70684ee69f2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:07:51 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:07:51.353 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[23a06358-d44f-4d06-a596-abfed3ff7be3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:07:51 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:07:51.397 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[6246f49e-05ac-4220-acce-469563f9e748]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:07:51 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:07:51.399 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb39fdb1c-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 03:07:51 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:07:51.399 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  6 03:07:51 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:07:51.399 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb39fdb1c-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 03:07:51 np0005548731 nova_compute[232433]: 2025-12-06 08:07:51.401 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:07:51 np0005548731 kernel: tapb39fdb1c-60: entered promiscuous mode
Dec  6 03:07:51 np0005548731 nova_compute[232433]: 2025-12-06 08:07:51.406 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:07:51 np0005548731 NetworkManager[49182]: <info>  [1765008471.4070] manager: (tapb39fdb1c-60): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/454)
Dec  6 03:07:51 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:07:51.408 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapb39fdb1c-60, col_values=(('external_ids', {'iface-id': 'dd1c1cd7-6c16-4899-898c-492393f63b35'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 03:07:51 np0005548731 nova_compute[232433]: 2025-12-06 08:07:51.409 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:07:51 np0005548731 ovn_controller[133927]: 2025-12-06T08:07:51Z|00981|binding|INFO|Releasing lport dd1c1cd7-6c16-4899-898c-492393f63b35 from this chassis (sb_readonly=0)
Dec  6 03:07:51 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:07:51 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:07:51 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:07:51.409 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:07:51 np0005548731 nova_compute[232433]: 2025-12-06 08:07:51.413 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:07:51 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:07:51.414 143965 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/b39fdb1c-6386-42dc-9c1d-e70684ee69f2.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/b39fdb1c-6386-42dc-9c1d-e70684ee69f2.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Dec  6 03:07:51 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:07:51.417 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[e922ed0a-89fa-4748-b101-a6cad4dede82]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:07:51 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:07:51.418 143965 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec  6 03:07:51 np0005548731 ovn_metadata_agent[143960]: global
Dec  6 03:07:51 np0005548731 ovn_metadata_agent[143960]:    log         /dev/log local0 debug
Dec  6 03:07:51 np0005548731 ovn_metadata_agent[143960]:    log-tag     haproxy-metadata-proxy-b39fdb1c-6386-42dc-9c1d-e70684ee69f2
Dec  6 03:07:51 np0005548731 ovn_metadata_agent[143960]:    user        root
Dec  6 03:07:51 np0005548731 ovn_metadata_agent[143960]:    group       root
Dec  6 03:07:51 np0005548731 ovn_metadata_agent[143960]:    maxconn     1024
Dec  6 03:07:51 np0005548731 ovn_metadata_agent[143960]:    pidfile     /var/lib/neutron/external/pids/b39fdb1c-6386-42dc-9c1d-e70684ee69f2.pid.haproxy
Dec  6 03:07:51 np0005548731 ovn_metadata_agent[143960]:    daemon
Dec  6 03:07:51 np0005548731 ovn_metadata_agent[143960]: 
Dec  6 03:07:51 np0005548731 ovn_metadata_agent[143960]: defaults
Dec  6 03:07:51 np0005548731 ovn_metadata_agent[143960]:    log global
Dec  6 03:07:51 np0005548731 ovn_metadata_agent[143960]:    mode http
Dec  6 03:07:51 np0005548731 ovn_metadata_agent[143960]:    option httplog
Dec  6 03:07:51 np0005548731 ovn_metadata_agent[143960]:    option dontlognull
Dec  6 03:07:51 np0005548731 ovn_metadata_agent[143960]:    option http-server-close
Dec  6 03:07:51 np0005548731 ovn_metadata_agent[143960]:    option forwardfor
Dec  6 03:07:51 np0005548731 ovn_metadata_agent[143960]:    retries                 3
Dec  6 03:07:51 np0005548731 ovn_metadata_agent[143960]:    timeout http-request    30s
Dec  6 03:07:51 np0005548731 ovn_metadata_agent[143960]:    timeout connect         30s
Dec  6 03:07:51 np0005548731 ovn_metadata_agent[143960]:    timeout client          32s
Dec  6 03:07:51 np0005548731 ovn_metadata_agent[143960]:    timeout server          32s
Dec  6 03:07:51 np0005548731 ovn_metadata_agent[143960]:    timeout http-keep-alive 30s
Dec  6 03:07:51 np0005548731 ovn_metadata_agent[143960]: 
Dec  6 03:07:51 np0005548731 ovn_metadata_agent[143960]: 
Dec  6 03:07:51 np0005548731 ovn_metadata_agent[143960]: listen listener
Dec  6 03:07:51 np0005548731 ovn_metadata_agent[143960]:    bind 169.254.169.254:80
Dec  6 03:07:51 np0005548731 ovn_metadata_agent[143960]:    server metadata /var/lib/neutron/metadata_proxy
Dec  6 03:07:51 np0005548731 ovn_metadata_agent[143960]:    http-request add-header X-OVN-Network-ID b39fdb1c-6386-42dc-9c1d-e70684ee69f2
Dec  6 03:07:51 np0005548731 ovn_metadata_agent[143960]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Dec  6 03:07:51 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:07:51.420 143965 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-b39fdb1c-6386-42dc-9c1d-e70684ee69f2', 'env', 'PROCESS_TAG=haproxy-b39fdb1c-6386-42dc-9c1d-e70684ee69f2', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/b39fdb1c-6386-42dc-9c1d-e70684ee69f2.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Dec  6 03:07:51 np0005548731 nova_compute[232433]: 2025-12-06 08:07:51.421 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:07:51 np0005548731 nova_compute[232433]: 2025-12-06 08:07:51.682 232437 DEBUG nova.compute.manager [None req-21674c9c-f744-4119-909c-93082ae35154 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 48c1afa0-f521-4427-888c-ca46a12e88e3] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec  6 03:07:51 np0005548731 nova_compute[232433]: 2025-12-06 08:07:51.683 232437 DEBUG nova.virt.driver [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Emitting event <LifecycleEvent: 1765008471.6815786, 48c1afa0-f521-4427-888c-ca46a12e88e3 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 03:07:51 np0005548731 nova_compute[232433]: 2025-12-06 08:07:51.684 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 48c1afa0-f521-4427-888c-ca46a12e88e3] VM Started (Lifecycle Event)#033[00m
Dec  6 03:07:51 np0005548731 nova_compute[232433]: 2025-12-06 08:07:51.688 232437 DEBUG nova.virt.libvirt.driver [None req-21674c9c-f744-4119-909c-93082ae35154 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 48c1afa0-f521-4427-888c-ca46a12e88e3] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Dec  6 03:07:51 np0005548731 nova_compute[232433]: 2025-12-06 08:07:51.691 232437 INFO nova.virt.libvirt.driver [-] [instance: 48c1afa0-f521-4427-888c-ca46a12e88e3] Instance spawned successfully.#033[00m
Dec  6 03:07:51 np0005548731 nova_compute[232433]: 2025-12-06 08:07:51.692 232437 DEBUG nova.virt.libvirt.driver [None req-21674c9c-f744-4119-909c-93082ae35154 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 48c1afa0-f521-4427-888c-ca46a12e88e3] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Dec  6 03:07:51 np0005548731 nova_compute[232433]: 2025-12-06 08:07:51.721 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 48c1afa0-f521-4427-888c-ca46a12e88e3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 03:07:51 np0005548731 nova_compute[232433]: 2025-12-06 08:07:51.728 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 48c1afa0-f521-4427-888c-ca46a12e88e3] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  6 03:07:51 np0005548731 nova_compute[232433]: 2025-12-06 08:07:51.732 232437 DEBUG nova.virt.libvirt.driver [None req-21674c9c-f744-4119-909c-93082ae35154 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 48c1afa0-f521-4427-888c-ca46a12e88e3] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 03:07:51 np0005548731 nova_compute[232433]: 2025-12-06 08:07:51.733 232437 DEBUG nova.virt.libvirt.driver [None req-21674c9c-f744-4119-909c-93082ae35154 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 48c1afa0-f521-4427-888c-ca46a12e88e3] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 03:07:51 np0005548731 nova_compute[232433]: 2025-12-06 08:07:51.733 232437 DEBUG nova.virt.libvirt.driver [None req-21674c9c-f744-4119-909c-93082ae35154 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 48c1afa0-f521-4427-888c-ca46a12e88e3] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 03:07:51 np0005548731 nova_compute[232433]: 2025-12-06 08:07:51.734 232437 DEBUG nova.virt.libvirt.driver [None req-21674c9c-f744-4119-909c-93082ae35154 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 48c1afa0-f521-4427-888c-ca46a12e88e3] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 03:07:51 np0005548731 nova_compute[232433]: 2025-12-06 08:07:51.734 232437 DEBUG nova.virt.libvirt.driver [None req-21674c9c-f744-4119-909c-93082ae35154 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 48c1afa0-f521-4427-888c-ca46a12e88e3] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 03:07:51 np0005548731 nova_compute[232433]: 2025-12-06 08:07:51.735 232437 DEBUG nova.virt.libvirt.driver [None req-21674c9c-f744-4119-909c-93082ae35154 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 48c1afa0-f521-4427-888c-ca46a12e88e3] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 03:07:51 np0005548731 podman[328627]: 2025-12-06 08:07:51.770114791 +0000 UTC m=+0.053101990 container create c9c7eab1bbb200450fd6fbdbc281104578ab5491ffaa6a7be9cd1826c81336a7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b39fdb1c-6386-42dc-9c1d-e70684ee69f2, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Dec  6 03:07:51 np0005548731 nova_compute[232433]: 2025-12-06 08:07:51.780 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 48c1afa0-f521-4427-888c-ca46a12e88e3] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  6 03:07:51 np0005548731 nova_compute[232433]: 2025-12-06 08:07:51.781 232437 DEBUG nova.virt.driver [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Emitting event <LifecycleEvent: 1765008471.682557, 48c1afa0-f521-4427-888c-ca46a12e88e3 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 03:07:51 np0005548731 nova_compute[232433]: 2025-12-06 08:07:51.781 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 48c1afa0-f521-4427-888c-ca46a12e88e3] VM Paused (Lifecycle Event)#033[00m
Dec  6 03:07:51 np0005548731 systemd[1]: Started libpod-conmon-c9c7eab1bbb200450fd6fbdbc281104578ab5491ffaa6a7be9cd1826c81336a7.scope.
Dec  6 03:07:51 np0005548731 nova_compute[232433]: 2025-12-06 08:07:51.827 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 48c1afa0-f521-4427-888c-ca46a12e88e3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 03:07:51 np0005548731 nova_compute[232433]: 2025-12-06 08:07:51.830 232437 DEBUG nova.virt.driver [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Emitting event <LifecycleEvent: 1765008471.686067, 48c1afa0-f521-4427-888c-ca46a12e88e3 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 03:07:51 np0005548731 nova_compute[232433]: 2025-12-06 08:07:51.831 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 48c1afa0-f521-4427-888c-ca46a12e88e3] VM Resumed (Lifecycle Event)#033[00m
Dec  6 03:07:51 np0005548731 systemd[1]: Started libcrun container.
Dec  6 03:07:51 np0005548731 podman[328627]: 2025-12-06 08:07:51.743347975 +0000 UTC m=+0.026335204 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec  6 03:07:51 np0005548731 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/829ab38e4d24126c24c2a8de6443f08668d299bb997acee7b715ad7cc5627b4b/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec  6 03:07:51 np0005548731 nova_compute[232433]: 2025-12-06 08:07:51.852 232437 INFO nova.compute.manager [None req-21674c9c-f744-4119-909c-93082ae35154 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 48c1afa0-f521-4427-888c-ca46a12e88e3] Took 9.69 seconds to spawn the instance on the hypervisor.#033[00m
Dec  6 03:07:51 np0005548731 nova_compute[232433]: 2025-12-06 08:07:51.853 232437 DEBUG nova.compute.manager [None req-21674c9c-f744-4119-909c-93082ae35154 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 48c1afa0-f521-4427-888c-ca46a12e88e3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 03:07:51 np0005548731 nova_compute[232433]: 2025-12-06 08:07:51.855 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 48c1afa0-f521-4427-888c-ca46a12e88e3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 03:07:51 np0005548731 podman[328627]: 2025-12-06 08:07:51.859200988 +0000 UTC m=+0.142188227 container init c9c7eab1bbb200450fd6fbdbc281104578ab5491ffaa6a7be9cd1826c81336a7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b39fdb1c-6386-42dc-9c1d-e70684ee69f2, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0)
Dec  6 03:07:51 np0005548731 nova_compute[232433]: 2025-12-06 08:07:51.860 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 48c1afa0-f521-4427-888c-ca46a12e88e3] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  6 03:07:51 np0005548731 podman[328627]: 2025-12-06 08:07:51.864351424 +0000 UTC m=+0.147338633 container start c9c7eab1bbb200450fd6fbdbc281104578ab5491ffaa6a7be9cd1826c81336a7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b39fdb1c-6386-42dc-9c1d-e70684ee69f2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  6 03:07:51 np0005548731 neutron-haproxy-ovnmeta-b39fdb1c-6386-42dc-9c1d-e70684ee69f2[328641]: [NOTICE]   (328645) : New worker (328647) forked
Dec  6 03:07:51 np0005548731 neutron-haproxy-ovnmeta-b39fdb1c-6386-42dc-9c1d-e70684ee69f2[328641]: [NOTICE]   (328645) : Loading success.
Dec  6 03:07:51 np0005548731 nova_compute[232433]: 2025-12-06 08:07:51.901 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 48c1afa0-f521-4427-888c-ca46a12e88e3] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  6 03:07:51 np0005548731 nova_compute[232433]: 2025-12-06 08:07:51.929 232437 INFO nova.compute.manager [None req-21674c9c-f744-4119-909c-93082ae35154 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 48c1afa0-f521-4427-888c-ca46a12e88e3] Took 11.22 seconds to build instance.#033[00m
Dec  6 03:07:51 np0005548731 nova_compute[232433]: 2025-12-06 08:07:51.952 232437 DEBUG oslo_concurrency.lockutils [None req-21674c9c-f744-4119-909c-93082ae35154 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Lock "48c1afa0-f521-4427-888c-ca46a12e88e3" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.510s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:07:52 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:07:52 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:07:52 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:07:52.008 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:07:53 np0005548731 nova_compute[232433]: 2025-12-06 08:07:53.331 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:07:53 np0005548731 nova_compute[232433]: 2025-12-06 08:07:53.331 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:07:53 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:07:53 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:07:53 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:07:53.412 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:07:53 np0005548731 nova_compute[232433]: 2025-12-06 08:07:53.455 232437 DEBUG nova.compute.manager [req-bb7f4e96-f0f1-44d8-9ddf-6a0720711a14 req-96c86dbc-ced8-403b-9bfd-0aa0477441e8 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 48c1afa0-f521-4427-888c-ca46a12e88e3] Received event network-vif-plugged-9f4a6e70-da76-4bbc-8f15-2c12a822a1f5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 03:07:53 np0005548731 nova_compute[232433]: 2025-12-06 08:07:53.456 232437 DEBUG oslo_concurrency.lockutils [req-bb7f4e96-f0f1-44d8-9ddf-6a0720711a14 req-96c86dbc-ced8-403b-9bfd-0aa0477441e8 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "48c1afa0-f521-4427-888c-ca46a12e88e3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:07:53 np0005548731 nova_compute[232433]: 2025-12-06 08:07:53.456 232437 DEBUG oslo_concurrency.lockutils [req-bb7f4e96-f0f1-44d8-9ddf-6a0720711a14 req-96c86dbc-ced8-403b-9bfd-0aa0477441e8 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "48c1afa0-f521-4427-888c-ca46a12e88e3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:07:53 np0005548731 nova_compute[232433]: 2025-12-06 08:07:53.456 232437 DEBUG oslo_concurrency.lockutils [req-bb7f4e96-f0f1-44d8-9ddf-6a0720711a14 req-96c86dbc-ced8-403b-9bfd-0aa0477441e8 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "48c1afa0-f521-4427-888c-ca46a12e88e3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:07:53 np0005548731 nova_compute[232433]: 2025-12-06 08:07:53.456 232437 DEBUG nova.compute.manager [req-bb7f4e96-f0f1-44d8-9ddf-6a0720711a14 req-96c86dbc-ced8-403b-9bfd-0aa0477441e8 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 48c1afa0-f521-4427-888c-ca46a12e88e3] No waiting events found dispatching network-vif-plugged-9f4a6e70-da76-4bbc-8f15-2c12a822a1f5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 03:07:53 np0005548731 nova_compute[232433]: 2025-12-06 08:07:53.456 232437 WARNING nova.compute.manager [req-bb7f4e96-f0f1-44d8-9ddf-6a0720711a14 req-96c86dbc-ced8-403b-9bfd-0aa0477441e8 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 48c1afa0-f521-4427-888c-ca46a12e88e3] Received unexpected event network-vif-plugged-9f4a6e70-da76-4bbc-8f15-2c12a822a1f5 for instance with vm_state active and task_state None.#033[00m
Dec  6 03:07:53 np0005548731 podman[328657]: 2025-12-06 08:07:53.890591935 +0000 UTC m=+0.053776725 container health_status 26c2ce9bdb83c6db5ccad7f5b2a7cb4e9354ab0235ad2f942ea42c8feda2142b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125)
Dec  6 03:07:53 np0005548731 podman[328659]: 2025-12-06 08:07:53.897709849 +0000 UTC m=+0.054928484 container health_status ae4fd08cdec31d6ae2a6b91ffff7deb6ca5a8375cb52ee4b685cc6f25796824e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, managed_by=edpm_ansible, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Dec  6 03:07:53 np0005548731 podman[328658]: 2025-12-06 08:07:53.923577442 +0000 UTC m=+0.085456191 container health_status a118c3becf56cfbcae208065771ebf10a65162bb7b96389971293e534823fb78 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller)
Dec  6 03:07:54 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:07:54 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:07:54 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:07:54.010 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:07:54 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e413 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:07:55 np0005548731 nova_compute[232433]: 2025-12-06 08:07:55.095 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:07:55 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:07:55 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:07:55 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:07:55.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:07:55 np0005548731 nova_compute[232433]: 2025-12-06 08:07:55.937 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:07:56 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:07:56 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:07:56 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:07:56.011 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:07:57 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:07:57 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:07:57 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:07:57.417 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:07:58 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:07:58 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:07:58 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:07:58.013 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:07:59 np0005548731 ceph-mon[77458]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #151. Immutable memtables: 0.
Dec  6 03:07:59 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:07:59.332354) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec  6 03:07:59 np0005548731 ceph-mon[77458]: rocksdb: [db/flush_job.cc:856] [default] [JOB 95] Flushing memtable with next log file: 151
Dec  6 03:07:59 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765008479332409, "job": 95, "event": "flush_started", "num_memtables": 1, "num_entries": 575, "num_deletes": 251, "total_data_size": 856159, "memory_usage": 866920, "flush_reason": "Manual Compaction"}
Dec  6 03:07:59 np0005548731 ceph-mon[77458]: rocksdb: [db/flush_job.cc:885] [default] [JOB 95] Level-0 flush table #152: started
Dec  6 03:07:59 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765008479337454, "cf_name": "default", "job": 95, "event": "table_file_creation", "file_number": 152, "file_size": 426863, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 75299, "largest_seqno": 75869, "table_properties": {"data_size": 424076, "index_size": 758, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 965, "raw_key_size": 7616, "raw_average_key_size": 20, "raw_value_size": 418292, "raw_average_value_size": 1142, "num_data_blocks": 33, "num_entries": 366, "num_filter_entries": 366, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765008448, "oldest_key_time": 1765008448, "file_creation_time": 1765008479, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bc01f9a1-fda8-4008-aee1-1fa5ff4371a1", "db_session_id": "CWBL8WMDM4D79BU86E9T", "orig_file_number": 152, "seqno_to_time_mapping": "N/A"}}
Dec  6 03:07:59 np0005548731 ceph-mon[77458]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 95] Flush lasted 5134 microseconds, and 2578 cpu microseconds.
Dec  6 03:07:59 np0005548731 ceph-mon[77458]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  6 03:07:59 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:07:59.337493) [db/flush_job.cc:967] [default] [JOB 95] Level-0 flush table #152: 426863 bytes OK
Dec  6 03:07:59 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:07:59.337510) [db/memtable_list.cc:519] [default] Level-0 commit table #152 started
Dec  6 03:07:59 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:07:59.338879) [db/memtable_list.cc:722] [default] Level-0 commit table #152: memtable #1 done
Dec  6 03:07:59 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:07:59.338891) EVENT_LOG_v1 {"time_micros": 1765008479338887, "job": 95, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec  6 03:07:59 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:07:59.338905) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec  6 03:07:59 np0005548731 ceph-mon[77458]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 95] Try to delete WAL files size 852843, prev total WAL file size 852843, number of live WAL files 2.
Dec  6 03:07:59 np0005548731 ceph-mon[77458]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000148.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  6 03:07:59 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:07:59.339398) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740032353236' seq:72057594037927935, type:22 .. '6D6772737461740032373738' seq:0, type:0; will stop at (end)
Dec  6 03:07:59 np0005548731 ceph-mon[77458]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 96] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec  6 03:07:59 np0005548731 ceph-mon[77458]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 95 Base level 0, inputs: [152(416KB)], [150(13MB)]
Dec  6 03:07:59 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765008479339452, "job": 96, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [152], "files_L6": [150], "score": -1, "input_data_size": 14647248, "oldest_snapshot_seqno": -1}
Dec  6 03:07:59 np0005548731 ceph-mon[77458]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 96] Generated table #153: 10333 keys, 10917091 bytes, temperature: kUnknown
Dec  6 03:07:59 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765008479420921, "cf_name": "default", "job": 96, "event": "table_file_creation", "file_number": 153, "file_size": 10917091, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10853951, "index_size": 36211, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 25861, "raw_key_size": 273576, "raw_average_key_size": 26, "raw_value_size": 10676379, "raw_average_value_size": 1033, "num_data_blocks": 1362, "num_entries": 10333, "num_filter_entries": 10333, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765002564, "oldest_key_time": 0, "file_creation_time": 1765008479, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bc01f9a1-fda8-4008-aee1-1fa5ff4371a1", "db_session_id": "CWBL8WMDM4D79BU86E9T", "orig_file_number": 153, "seqno_to_time_mapping": "N/A"}}
Dec  6 03:07:59 np0005548731 ceph-mon[77458]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  6 03:07:59 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:07:59 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:07:59 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:07:59.420 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:07:59 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:07:59.421271) [db/compaction/compaction_job.cc:1663] [default] [JOB 96] Compacted 1@0 + 1@6 files to L6 => 10917091 bytes
Dec  6 03:07:59 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:07:59.422847) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 179.4 rd, 133.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.4, 13.6 +0.0 blob) out(10.4 +0.0 blob), read-write-amplify(59.9) write-amplify(25.6) OK, records in: 10839, records dropped: 506 output_compression: NoCompression
Dec  6 03:07:59 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:07:59.422863) EVENT_LOG_v1 {"time_micros": 1765008479422855, "job": 96, "event": "compaction_finished", "compaction_time_micros": 81660, "compaction_time_cpu_micros": 26710, "output_level": 6, "num_output_files": 1, "total_output_size": 10917091, "num_input_records": 10839, "num_output_records": 10333, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec  6 03:07:59 np0005548731 ceph-mon[77458]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000152.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  6 03:07:59 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765008479423015, "job": 96, "event": "table_file_deletion", "file_number": 152}
Dec  6 03:07:59 np0005548731 ceph-mon[77458]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000150.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  6 03:07:59 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765008479425052, "job": 96, "event": "table_file_deletion", "file_number": 150}
Dec  6 03:07:59 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:07:59.339332) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 03:07:59 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:07:59.425125) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 03:07:59 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:07:59.425130) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 03:07:59 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:07:59.425132) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 03:07:59 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:07:59.425134) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 03:07:59 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:07:59.425135) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 03:07:59 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e413 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:08:00 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:08:00 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:08:00 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:08:00.016 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:08:00 np0005548731 nova_compute[232433]: 2025-12-06 08:08:00.099 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:08:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:08:00.911 143965 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:08:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:08:00.912 143965 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:08:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:08:00.912 143965 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:08:00 np0005548731 nova_compute[232433]: 2025-12-06 08:08:00.940 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:08:01 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:08:01 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:08:01 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:08:01.424 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:08:02 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:08:02 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:08:02 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:08:02.018 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:08:03 np0005548731 nova_compute[232433]: 2025-12-06 08:08:03.100 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:08:03 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:08:03 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:08:03 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:08:03.427 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:08:04 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:08:04 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:08:04 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:08:04.020 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:08:04 np0005548731 ovn_controller[133927]: 2025-12-06T08:08:04Z|00132|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:42:d0:27 10.100.0.20
Dec  6 03:08:04 np0005548731 ovn_controller[133927]: 2025-12-06T08:08:04Z|00133|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:42:d0:27 10.100.0.20
Dec  6 03:08:04 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e413 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:08:05 np0005548731 nova_compute[232433]: 2025-12-06 08:08:05.102 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:08:05 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:08:05 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:08:05 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:08:05.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:08:05 np0005548731 nova_compute[232433]: 2025-12-06 08:08:05.942 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:08:06 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:08:06 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:08:06 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:08:06.022 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:08:07 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:08:07 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:08:07 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:08:07.433 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:08:08 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:08:08 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:08:08 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:08:08.024 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:08:09 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  6 03:08:09 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/840833753' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  6 03:08:09 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  6 03:08:09 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/840833753' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  6 03:08:09 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:08:09 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:08:09 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:08:09.436 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:08:09 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e413 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:08:10 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:08:10 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 03:08:10 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:08:10.026 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 03:08:10 np0005548731 nova_compute[232433]: 2025-12-06 08:08:10.106 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:08:10 np0005548731 nova_compute[232433]: 2025-12-06 08:08:10.946 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:08:11 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:08:11 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:08:11 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:08:11.439 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:08:12 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:08:12 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:08:12 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:08:12.030 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:08:13 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:08:13 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:08:13 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:08:13.442 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:08:14 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:08:14 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:08:14 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:08:14.033 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:08:14 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e413 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:08:15 np0005548731 nova_compute[232433]: 2025-12-06 08:08:15.109 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:08:15 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:08:15 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:08:15 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:08:15.446 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:08:15 np0005548731 nova_compute[232433]: 2025-12-06 08:08:15.987 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:08:16 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:08:16 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:08:16 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:08:16.036 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:08:17 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:08:17 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:08:17 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:08:17.449 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:08:18 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:08:18 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:08:18 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:08:18.038 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:08:19 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:08:19 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:08:19 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:08:19.452 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:08:19 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e413 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:08:20 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:08:20 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:08:20 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:08:20.042 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:08:20 np0005548731 nova_compute[232433]: 2025-12-06 08:08:20.111 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:08:21 np0005548731 nova_compute[232433]: 2025-12-06 08:08:21.027 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:08:21 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:08:21 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:08:21 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:08:21.455 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:08:22 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:08:22 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:08:22 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:08:22.044 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:08:23 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:08:23 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:08:23 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:08:23.458 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:08:24 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:08:24 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:08:24 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:08:24.047 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:08:24 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e413 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:08:24 np0005548731 podman[328789]: 2025-12-06 08:08:24.925301742 +0000 UTC m=+0.064492648 container health_status 26c2ce9bdb83c6db5ccad7f5b2a7cb4e9354ab0235ad2f942ea42c8feda2142b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Dec  6 03:08:24 np0005548731 podman[328791]: 2025-12-06 08:08:24.936438884 +0000 UTC m=+0.075054785 container health_status ae4fd08cdec31d6ae2a6b91ffff7deb6ca5a8375cb52ee4b685cc6f25796824e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Dec  6 03:08:24 np0005548731 podman[328790]: 2025-12-06 08:08:24.960462723 +0000 UTC m=+0.099654519 container health_status a118c3becf56cfbcae208065771ebf10a65162bb7b96389971293e534823fb78 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_controller)
Dec  6 03:08:25 np0005548731 nova_compute[232433]: 2025-12-06 08:08:25.113 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:08:25 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:08:25 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:08:25 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:08:25.461 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:08:26 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:08:26 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:08:26 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:08:26.049 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:08:26 np0005548731 nova_compute[232433]: 2025-12-06 08:08:26.052 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:08:27 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:08:27 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:08:27 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:08:27.464 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:08:28 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:08:28 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:08:28 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:08:28.051 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:08:29 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:08:29 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:08:29 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:08:29.467 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:08:29 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e413 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:08:30 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:08:30 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:08:30 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:08:30.053 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:08:30 np0005548731 nova_compute[232433]: 2025-12-06 08:08:30.115 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:08:31 np0005548731 nova_compute[232433]: 2025-12-06 08:08:31.054 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:08:31 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:08:31 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:08:31 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:08:31.470 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:08:31 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:08:31.720 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=86, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:ec:b3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '32:72:e7:89:e0:7d'}, ipsec=False) old=SB_Global(nb_cfg=85) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 03:08:31 np0005548731 nova_compute[232433]: 2025-12-06 08:08:31.720 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:08:31 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:08:31.722 143965 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec  6 03:08:31 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:08:31.724 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=9f96b960-b4f2-40bd-ae99-08121f5e8b78, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '86'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 03:08:32 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:08:32 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:08:32 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:08:32.055 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:08:33 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:08:33 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:08:33 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:08:33.473 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:08:34 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:08:34 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:08:34 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:08:34.057 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:08:34 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e413 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:08:35 np0005548731 nova_compute[232433]: 2025-12-06 08:08:35.117 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:08:35 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:08:35 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:08:35 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:08:35.476 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:08:36 np0005548731 nova_compute[232433]: 2025-12-06 08:08:36.058 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:08:36 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:08:36 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 03:08:36 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:08:36.059 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 03:08:37 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:08:37 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:08:37 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:08:37.477 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:08:38 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:08:38 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:08:38 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:08:38.062 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:08:39 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:08:39 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:08:39 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:08:39.480 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:08:39 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e413 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:08:40 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:08:40 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:08:40 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:08:40.063 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:08:40 np0005548731 nova_compute[232433]: 2025-12-06 08:08:40.120 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:08:40 np0005548731 nova_compute[232433]: 2025-12-06 08:08:40.449 232437 DEBUG oslo_concurrency.lockutils [None req-b6b7d5c3-0e2b-4313-bb2e-e6daa0f6215e d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Acquiring lock "48c1afa0-f521-4427-888c-ca46a12e88e3" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:08:40 np0005548731 nova_compute[232433]: 2025-12-06 08:08:40.450 232437 DEBUG oslo_concurrency.lockutils [None req-b6b7d5c3-0e2b-4313-bb2e-e6daa0f6215e d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Lock "48c1afa0-f521-4427-888c-ca46a12e88e3" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:08:40 np0005548731 nova_compute[232433]: 2025-12-06 08:08:40.451 232437 DEBUG oslo_concurrency.lockutils [None req-b6b7d5c3-0e2b-4313-bb2e-e6daa0f6215e d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Acquiring lock "48c1afa0-f521-4427-888c-ca46a12e88e3-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:08:40 np0005548731 nova_compute[232433]: 2025-12-06 08:08:40.451 232437 DEBUG oslo_concurrency.lockutils [None req-b6b7d5c3-0e2b-4313-bb2e-e6daa0f6215e d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Lock "48c1afa0-f521-4427-888c-ca46a12e88e3-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:08:40 np0005548731 nova_compute[232433]: 2025-12-06 08:08:40.451 232437 DEBUG oslo_concurrency.lockutils [None req-b6b7d5c3-0e2b-4313-bb2e-e6daa0f6215e d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Lock "48c1afa0-f521-4427-888c-ca46a12e88e3-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:08:40 np0005548731 nova_compute[232433]: 2025-12-06 08:08:40.453 232437 INFO nova.compute.manager [None req-b6b7d5c3-0e2b-4313-bb2e-e6daa0f6215e d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 48c1afa0-f521-4427-888c-ca46a12e88e3] Terminating instance#033[00m
Dec  6 03:08:40 np0005548731 nova_compute[232433]: 2025-12-06 08:08:40.453 232437 DEBUG nova.compute.manager [None req-b6b7d5c3-0e2b-4313-bb2e-e6daa0f6215e d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 48c1afa0-f521-4427-888c-ca46a12e88e3] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec  6 03:08:40 np0005548731 kernel: tap9f4a6e70-da (unregistering): left promiscuous mode
Dec  6 03:08:40 np0005548731 NetworkManager[49182]: <info>  [1765008520.5143] device (tap9f4a6e70-da): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec  6 03:08:40 np0005548731 nova_compute[232433]: 2025-12-06 08:08:40.525 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:08:40 np0005548731 ovn_controller[133927]: 2025-12-06T08:08:40Z|00982|binding|INFO|Releasing lport 9f4a6e70-da76-4bbc-8f15-2c12a822a1f5 from this chassis (sb_readonly=0)
Dec  6 03:08:40 np0005548731 ovn_controller[133927]: 2025-12-06T08:08:40Z|00983|binding|INFO|Setting lport 9f4a6e70-da76-4bbc-8f15-2c12a822a1f5 down in Southbound
Dec  6 03:08:40 np0005548731 ovn_controller[133927]: 2025-12-06T08:08:40Z|00984|binding|INFO|Removing iface tap9f4a6e70-da ovn-installed in OVS
Dec  6 03:08:40 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:08:40.535 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:42:d0:27 10.100.0.20'], port_security=['fa:16:3e:42:d0:27 10.100.0.20'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.20/28', 'neutron:device_id': '48c1afa0-f521-4427-888c-ca46a12e88e3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b39fdb1c-6386-42dc-9c1d-e70684ee69f2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f4735a799c84437b9dd4ea8778ad2fbb', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'aad56a4a-2405-491c-a92d-879a428e508e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0019305c-8ebd-4d1b-ac3e-eb77d507b742, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], logical_port=9f4a6e70-da76-4bbc-8f15-2c12a822a1f5) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 03:08:40 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:08:40.537 143965 INFO neutron.agent.ovn.metadata.agent [-] Port 9f4a6e70-da76-4bbc-8f15-2c12a822a1f5 in datapath b39fdb1c-6386-42dc-9c1d-e70684ee69f2 unbound from our chassis#033[00m
Dec  6 03:08:40 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:08:40.540 143965 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network b39fdb1c-6386-42dc-9c1d-e70684ee69f2, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec  6 03:08:40 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:08:40.542 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[8a111d2b-895d-4124-982f-0f0c29aacda1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:08:40 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:08:40.543 143965 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-b39fdb1c-6386-42dc-9c1d-e70684ee69f2 namespace which is not needed anymore#033[00m
Dec  6 03:08:40 np0005548731 nova_compute[232433]: 2025-12-06 08:08:40.547 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:08:40 np0005548731 systemd[1]: machine-qemu\x2d98\x2dinstance\x2d000000c1.scope: Deactivated successfully.
Dec  6 03:08:40 np0005548731 systemd[1]: machine-qemu\x2d98\x2dinstance\x2d000000c1.scope: Consumed 15.442s CPU time.
Dec  6 03:08:40 np0005548731 systemd-machined[195355]: Machine qemu-98-instance-000000c1 terminated.
Dec  6 03:08:40 np0005548731 neutron-haproxy-ovnmeta-b39fdb1c-6386-42dc-9c1d-e70684ee69f2[328641]: [NOTICE]   (328645) : haproxy version is 2.8.14-c23fe91
Dec  6 03:08:40 np0005548731 neutron-haproxy-ovnmeta-b39fdb1c-6386-42dc-9c1d-e70684ee69f2[328641]: [NOTICE]   (328645) : path to executable is /usr/sbin/haproxy
Dec  6 03:08:40 np0005548731 neutron-haproxy-ovnmeta-b39fdb1c-6386-42dc-9c1d-e70684ee69f2[328641]: [WARNING]  (328645) : Exiting Master process...
Dec  6 03:08:40 np0005548731 neutron-haproxy-ovnmeta-b39fdb1c-6386-42dc-9c1d-e70684ee69f2[328641]: [ALERT]    (328645) : Current worker (328647) exited with code 143 (Terminated)
Dec  6 03:08:40 np0005548731 neutron-haproxy-ovnmeta-b39fdb1c-6386-42dc-9c1d-e70684ee69f2[328641]: [WARNING]  (328645) : All workers exited. Exiting... (0)
Dec  6 03:08:40 np0005548731 systemd[1]: libpod-c9c7eab1bbb200450fd6fbdbc281104578ab5491ffaa6a7be9cd1826c81336a7.scope: Deactivated successfully.
Dec  6 03:08:40 np0005548731 podman[329069]: 2025-12-06 08:08:40.67683214 +0000 UTC m=+0.041893405 container died c9c7eab1bbb200450fd6fbdbc281104578ab5491ffaa6a7be9cd1826c81336a7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b39fdb1c-6386-42dc-9c1d-e70684ee69f2, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  6 03:08:40 np0005548731 nova_compute[232433]: 2025-12-06 08:08:40.693 232437 INFO nova.virt.libvirt.driver [-] [instance: 48c1afa0-f521-4427-888c-ca46a12e88e3] Instance destroyed successfully.#033[00m
Dec  6 03:08:40 np0005548731 nova_compute[232433]: 2025-12-06 08:08:40.694 232437 DEBUG nova.objects.instance [None req-b6b7d5c3-0e2b-4313-bb2e-e6daa0f6215e d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Lazy-loading 'resources' on Instance uuid 48c1afa0-f521-4427-888c-ca46a12e88e3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 03:08:40 np0005548731 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-c9c7eab1bbb200450fd6fbdbc281104578ab5491ffaa6a7be9cd1826c81336a7-userdata-shm.mount: Deactivated successfully.
Dec  6 03:08:40 np0005548731 systemd[1]: var-lib-containers-storage-overlay-829ab38e4d24126c24c2a8de6443f08668d299bb997acee7b715ad7cc5627b4b-merged.mount: Deactivated successfully.
Dec  6 03:08:40 np0005548731 podman[329069]: 2025-12-06 08:08:40.713623491 +0000 UTC m=+0.078684756 container cleanup c9c7eab1bbb200450fd6fbdbc281104578ab5491ffaa6a7be9cd1826c81336a7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b39fdb1c-6386-42dc-9c1d-e70684ee69f2, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  6 03:08:40 np0005548731 systemd[1]: libpod-conmon-c9c7eab1bbb200450fd6fbdbc281104578ab5491ffaa6a7be9cd1826c81336a7.scope: Deactivated successfully.
Dec  6 03:08:40 np0005548731 nova_compute[232433]: 2025-12-06 08:08:40.725 232437 DEBUG nova.virt.libvirt.vif [None req-b6b7d5c3-0e2b-4313-bb2e-e6daa0f6215e d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T08:07:36Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1530341563',display_name='tempest-TestNetworkBasicOps-server-1530341563',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1530341563',id=193,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGftxVSaWXFdw3+0W4YAJPVSKA5WztiGP+RC6dBXTWM2Ix2Cz1PI0an1VsWLvCpo+53Lcenm7aZa+hjh2xlSJWgRCDczZ3bTiz1yN2VJPN429Rd9uLpOo5kFaWBe4vrMGg==',key_name='tempest-TestNetworkBasicOps-243125875',keypairs=<?>,launch_index=0,launched_at=2025-12-06T08:07:51Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='f4735a799c84437b9dd4ea8778ad2fbb',ramdisk_id='',reservation_id='r-pbazs4ud',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1435471576',owner_user_name='tempest-TestNetworkBasicOps-1435471576-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T08:07:51Z,user_data=None,user_id='d5359905348247d0b9b5b95982e890bb',uuid=48c1afa0-f521-4427-888c-ca46a12e88e3,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "9f4a6e70-da76-4bbc-8f15-2c12a822a1f5", "address": "fa:16:3e:42:d0:27", "network": {"id": "b39fdb1c-6386-42dc-9c1d-e70684ee69f2", "bridge": "br-int", "label": "tempest-network-smoke--1171047386", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.20", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f4735a799c84437b9dd4ea8778ad2fbb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9f4a6e70-da", "ovs_interfaceid": "9f4a6e70-da76-4bbc-8f15-2c12a822a1f5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec  6 03:08:40 np0005548731 nova_compute[232433]: 2025-12-06 08:08:40.725 232437 DEBUG nova.network.os_vif_util [None req-b6b7d5c3-0e2b-4313-bb2e-e6daa0f6215e d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Converting VIF {"id": "9f4a6e70-da76-4bbc-8f15-2c12a822a1f5", "address": "fa:16:3e:42:d0:27", "network": {"id": "b39fdb1c-6386-42dc-9c1d-e70684ee69f2", "bridge": "br-int", "label": "tempest-network-smoke--1171047386", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.20", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f4735a799c84437b9dd4ea8778ad2fbb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9f4a6e70-da", "ovs_interfaceid": "9f4a6e70-da76-4bbc-8f15-2c12a822a1f5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 03:08:40 np0005548731 nova_compute[232433]: 2025-12-06 08:08:40.726 232437 DEBUG nova.network.os_vif_util [None req-b6b7d5c3-0e2b-4313-bb2e-e6daa0f6215e d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:42:d0:27,bridge_name='br-int',has_traffic_filtering=True,id=9f4a6e70-da76-4bbc-8f15-2c12a822a1f5,network=Network(b39fdb1c-6386-42dc-9c1d-e70684ee69f2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9f4a6e70-da') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 03:08:40 np0005548731 nova_compute[232433]: 2025-12-06 08:08:40.726 232437 DEBUG os_vif [None req-b6b7d5c3-0e2b-4313-bb2e-e6daa0f6215e d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:42:d0:27,bridge_name='br-int',has_traffic_filtering=True,id=9f4a6e70-da76-4bbc-8f15-2c12a822a1f5,network=Network(b39fdb1c-6386-42dc-9c1d-e70684ee69f2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9f4a6e70-da') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec  6 03:08:40 np0005548731 nova_compute[232433]: 2025-12-06 08:08:40.729 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:08:40 np0005548731 nova_compute[232433]: 2025-12-06 08:08:40.729 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9f4a6e70-da, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 03:08:40 np0005548731 nova_compute[232433]: 2025-12-06 08:08:40.730 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:08:40 np0005548731 nova_compute[232433]: 2025-12-06 08:08:40.731 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:08:40 np0005548731 nova_compute[232433]: 2025-12-06 08:08:40.734 232437 INFO os_vif [None req-b6b7d5c3-0e2b-4313-bb2e-e6daa0f6215e d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:42:d0:27,bridge_name='br-int',has_traffic_filtering=True,id=9f4a6e70-da76-4bbc-8f15-2c12a822a1f5,network=Network(b39fdb1c-6386-42dc-9c1d-e70684ee69f2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9f4a6e70-da')#033[00m
Dec  6 03:08:40 np0005548731 podman[329107]: 2025-12-06 08:08:40.770907931 +0000 UTC m=+0.038781659 container remove c9c7eab1bbb200450fd6fbdbc281104578ab5491ffaa6a7be9cd1826c81336a7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b39fdb1c-6386-42dc-9c1d-e70684ee69f2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec  6 03:08:40 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:08:40.776 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[4626a8e3-1955-44bf-8c68-acddf65854e8]: (4, ('Sat Dec  6 08:08:40 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-b39fdb1c-6386-42dc-9c1d-e70684ee69f2 (c9c7eab1bbb200450fd6fbdbc281104578ab5491ffaa6a7be9cd1826c81336a7)\nc9c7eab1bbb200450fd6fbdbc281104578ab5491ffaa6a7be9cd1826c81336a7\nSat Dec  6 08:08:40 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-b39fdb1c-6386-42dc-9c1d-e70684ee69f2 (c9c7eab1bbb200450fd6fbdbc281104578ab5491ffaa6a7be9cd1826c81336a7)\nc9c7eab1bbb200450fd6fbdbc281104578ab5491ffaa6a7be9cd1826c81336a7\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:08:40 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:08:40.777 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[bef083cc-ada4-4a25-9e17-c442995c42ef]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:08:40 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:08:40.778 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb39fdb1c-60, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 03:08:40 np0005548731 nova_compute[232433]: 2025-12-06 08:08:40.779 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:08:40 np0005548731 kernel: tapb39fdb1c-60: left promiscuous mode
Dec  6 03:08:40 np0005548731 nova_compute[232433]: 2025-12-06 08:08:40.792 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:08:40 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:08:40.796 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[f3d57542-df02-4500-bf76-79c09cbc68e3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:08:40 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:08:40.817 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[47b0b7b8-beae-437f-b8b4-b93d3b9c1fe6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:08:40 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:08:40.818 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[a07f3ff6-5619-4be2-8b55-78543f226d68]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:08:40 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:08:40.835 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[2d580b4f-cd7c-41c6-9b32-394315cf4420]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 873321, 'reachable_time': 21227, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 329141, 'error': None, 'target': 'ovnmeta-b39fdb1c-6386-42dc-9c1d-e70684ee69f2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:08:40 np0005548731 systemd[1]: run-netns-ovnmeta\x2db39fdb1c\x2d6386\x2d42dc\x2d9c1d\x2de70684ee69f2.mount: Deactivated successfully.
Dec  6 03:08:40 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:08:40.838 144079 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-b39fdb1c-6386-42dc-9c1d-e70684ee69f2 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Dec  6 03:08:40 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:08:40.838 144079 DEBUG oslo.privsep.daemon [-] privsep: reply[fc07b5d8-7885-489d-8e35-e3ae3b177895]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:08:41 np0005548731 nova_compute[232433]: 2025-12-06 08:08:41.056 232437 DEBUG nova.compute.manager [req-9c55c4bd-c623-4ace-8be7-bb4baee7ca67 req-d3d6ce2a-202d-44ad-bf89-c11372564beb 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 48c1afa0-f521-4427-888c-ca46a12e88e3] Received event network-vif-unplugged-9f4a6e70-da76-4bbc-8f15-2c12a822a1f5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 03:08:41 np0005548731 nova_compute[232433]: 2025-12-06 08:08:41.057 232437 DEBUG oslo_concurrency.lockutils [req-9c55c4bd-c623-4ace-8be7-bb4baee7ca67 req-d3d6ce2a-202d-44ad-bf89-c11372564beb 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "48c1afa0-f521-4427-888c-ca46a12e88e3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:08:41 np0005548731 nova_compute[232433]: 2025-12-06 08:08:41.057 232437 DEBUG oslo_concurrency.lockutils [req-9c55c4bd-c623-4ace-8be7-bb4baee7ca67 req-d3d6ce2a-202d-44ad-bf89-c11372564beb 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "48c1afa0-f521-4427-888c-ca46a12e88e3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:08:41 np0005548731 nova_compute[232433]: 2025-12-06 08:08:41.057 232437 DEBUG oslo_concurrency.lockutils [req-9c55c4bd-c623-4ace-8be7-bb4baee7ca67 req-d3d6ce2a-202d-44ad-bf89-c11372564beb 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "48c1afa0-f521-4427-888c-ca46a12e88e3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:08:41 np0005548731 nova_compute[232433]: 2025-12-06 08:08:41.058 232437 DEBUG nova.compute.manager [req-9c55c4bd-c623-4ace-8be7-bb4baee7ca67 req-d3d6ce2a-202d-44ad-bf89-c11372564beb 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 48c1afa0-f521-4427-888c-ca46a12e88e3] No waiting events found dispatching network-vif-unplugged-9f4a6e70-da76-4bbc-8f15-2c12a822a1f5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 03:08:41 np0005548731 nova_compute[232433]: 2025-12-06 08:08:41.058 232437 DEBUG nova.compute.manager [req-9c55c4bd-c623-4ace-8be7-bb4baee7ca67 req-d3d6ce2a-202d-44ad-bf89-c11372564beb 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 48c1afa0-f521-4427-888c-ca46a12e88e3] Received event network-vif-unplugged-9f4a6e70-da76-4bbc-8f15-2c12a822a1f5 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Dec  6 03:08:41 np0005548731 nova_compute[232433]: 2025-12-06 08:08:41.096 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:08:41 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  6 03:08:41 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 03:08:41 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  6 03:08:41 np0005548731 nova_compute[232433]: 2025-12-06 08:08:41.192 232437 INFO nova.virt.libvirt.driver [None req-b6b7d5c3-0e2b-4313-bb2e-e6daa0f6215e d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 48c1afa0-f521-4427-888c-ca46a12e88e3] Deleting instance files /var/lib/nova/instances/48c1afa0-f521-4427-888c-ca46a12e88e3_del#033[00m
Dec  6 03:08:41 np0005548731 nova_compute[232433]: 2025-12-06 08:08:41.193 232437 INFO nova.virt.libvirt.driver [None req-b6b7d5c3-0e2b-4313-bb2e-e6daa0f6215e d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 48c1afa0-f521-4427-888c-ca46a12e88e3] Deletion of /var/lib/nova/instances/48c1afa0-f521-4427-888c-ca46a12e88e3_del complete#033[00m
Dec  6 03:08:41 np0005548731 nova_compute[232433]: 2025-12-06 08:08:41.312 232437 INFO nova.compute.manager [None req-b6b7d5c3-0e2b-4313-bb2e-e6daa0f6215e d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 48c1afa0-f521-4427-888c-ca46a12e88e3] Took 0.86 seconds to destroy the instance on the hypervisor.#033[00m
Dec  6 03:08:41 np0005548731 nova_compute[232433]: 2025-12-06 08:08:41.314 232437 DEBUG oslo.service.loopingcall [None req-b6b7d5c3-0e2b-4313-bb2e-e6daa0f6215e d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Dec  6 03:08:41 np0005548731 nova_compute[232433]: 2025-12-06 08:08:41.314 232437 DEBUG nova.compute.manager [-] [instance: 48c1afa0-f521-4427-888c-ca46a12e88e3] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Dec  6 03:08:41 np0005548731 nova_compute[232433]: 2025-12-06 08:08:41.314 232437 DEBUG nova.network.neutron [-] [instance: 48c1afa0-f521-4427-888c-ca46a12e88e3] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Dec  6 03:08:41 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:08:41 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:08:41 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:08:41.483 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:08:42 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:08:42 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:08:42 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:08:42.065 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:08:42 np0005548731 nova_compute[232433]: 2025-12-06 08:08:42.103 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:08:42 np0005548731 nova_compute[232433]: 2025-12-06 08:08:42.104 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  6 03:08:42 np0005548731 nova_compute[232433]: 2025-12-06 08:08:42.104 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  6 03:08:42 np0005548731 nova_compute[232433]: 2025-12-06 08:08:42.131 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] [instance: 48c1afa0-f521-4427-888c-ca46a12e88e3] Skipping network cache update for instance because it is being deleted. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9875#033[00m
Dec  6 03:08:42 np0005548731 nova_compute[232433]: 2025-12-06 08:08:42.131 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Dec  6 03:08:43 np0005548731 nova_compute[232433]: 2025-12-06 08:08:43.103 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:08:43 np0005548731 nova_compute[232433]: 2025-12-06 08:08:43.289 232437 DEBUG nova.compute.manager [req-c9ec917b-9f6b-452c-866b-bd2e9492bfa8 req-88e5f537-9daa-494e-b62c-59097ada6074 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 48c1afa0-f521-4427-888c-ca46a12e88e3] Received event network-vif-plugged-9f4a6e70-da76-4bbc-8f15-2c12a822a1f5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 03:08:43 np0005548731 nova_compute[232433]: 2025-12-06 08:08:43.289 232437 DEBUG oslo_concurrency.lockutils [req-c9ec917b-9f6b-452c-866b-bd2e9492bfa8 req-88e5f537-9daa-494e-b62c-59097ada6074 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "48c1afa0-f521-4427-888c-ca46a12e88e3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:08:43 np0005548731 nova_compute[232433]: 2025-12-06 08:08:43.290 232437 DEBUG oslo_concurrency.lockutils [req-c9ec917b-9f6b-452c-866b-bd2e9492bfa8 req-88e5f537-9daa-494e-b62c-59097ada6074 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "48c1afa0-f521-4427-888c-ca46a12e88e3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:08:43 np0005548731 nova_compute[232433]: 2025-12-06 08:08:43.290 232437 DEBUG oslo_concurrency.lockutils [req-c9ec917b-9f6b-452c-866b-bd2e9492bfa8 req-88e5f537-9daa-494e-b62c-59097ada6074 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "48c1afa0-f521-4427-888c-ca46a12e88e3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:08:43 np0005548731 nova_compute[232433]: 2025-12-06 08:08:43.290 232437 DEBUG nova.compute.manager [req-c9ec917b-9f6b-452c-866b-bd2e9492bfa8 req-88e5f537-9daa-494e-b62c-59097ada6074 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 48c1afa0-f521-4427-888c-ca46a12e88e3] No waiting events found dispatching network-vif-plugged-9f4a6e70-da76-4bbc-8f15-2c12a822a1f5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 03:08:43 np0005548731 nova_compute[232433]: 2025-12-06 08:08:43.290 232437 WARNING nova.compute.manager [req-c9ec917b-9f6b-452c-866b-bd2e9492bfa8 req-88e5f537-9daa-494e-b62c-59097ada6074 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 48c1afa0-f521-4427-888c-ca46a12e88e3] Received unexpected event network-vif-plugged-9f4a6e70-da76-4bbc-8f15-2c12a822a1f5 for instance with vm_state active and task_state deleting.#033[00m
Dec  6 03:08:43 np0005548731 nova_compute[232433]: 2025-12-06 08:08:43.296 232437 DEBUG nova.network.neutron [-] [instance: 48c1afa0-f521-4427-888c-ca46a12e88e3] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 03:08:43 np0005548731 nova_compute[232433]: 2025-12-06 08:08:43.384 232437 INFO nova.compute.manager [-] [instance: 48c1afa0-f521-4427-888c-ca46a12e88e3] Took 2.07 seconds to deallocate network for instance.#033[00m
Dec  6 03:08:43 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:08:43 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:08:43 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:08:43.486 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:08:43 np0005548731 nova_compute[232433]: 2025-12-06 08:08:43.501 232437 DEBUG oslo_concurrency.lockutils [None req-b6b7d5c3-0e2b-4313-bb2e-e6daa0f6215e d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:08:43 np0005548731 nova_compute[232433]: 2025-12-06 08:08:43.501 232437 DEBUG oslo_concurrency.lockutils [None req-b6b7d5c3-0e2b-4313-bb2e-e6daa0f6215e d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:08:43 np0005548731 nova_compute[232433]: 2025-12-06 08:08:43.552 232437 DEBUG oslo_concurrency.processutils [None req-b6b7d5c3-0e2b-4313-bb2e-e6daa0f6215e d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 03:08:43 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 03:08:43 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1075791896' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 03:08:43 np0005548731 nova_compute[232433]: 2025-12-06 08:08:43.977 232437 DEBUG oslo_concurrency.processutils [None req-b6b7d5c3-0e2b-4313-bb2e-e6daa0f6215e d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.424s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 03:08:43 np0005548731 nova_compute[232433]: 2025-12-06 08:08:43.983 232437 DEBUG nova.compute.provider_tree [None req-b6b7d5c3-0e2b-4313-bb2e-e6daa0f6215e d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Inventory has not changed in ProviderTree for provider: 6d00757a-082f-486d-ae84-869a2ba2e6e7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  6 03:08:44 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:08:44 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:08:44 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:08:44.067 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:08:44 np0005548731 nova_compute[232433]: 2025-12-06 08:08:44.101 232437 DEBUG nova.scheduler.client.report [None req-b6b7d5c3-0e2b-4313-bb2e-e6daa0f6215e d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Inventory has not changed for provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  6 03:08:44 np0005548731 nova_compute[232433]: 2025-12-06 08:08:44.508 232437 DEBUG oslo_concurrency.lockutils [None req-b6b7d5c3-0e2b-4313-bb2e-e6daa0f6215e d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.007s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:08:44 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e413 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:08:45 np0005548731 nova_compute[232433]: 2025-12-06 08:08:45.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:08:45 np0005548731 nova_compute[232433]: 2025-12-06 08:08:45.105 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  6 03:08:45 np0005548731 nova_compute[232433]: 2025-12-06 08:08:45.125 232437 INFO nova.scheduler.client.report [None req-b6b7d5c3-0e2b-4313-bb2e-e6daa0f6215e d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Deleted allocations for instance 48c1afa0-f521-4427-888c-ca46a12e88e3#033[00m
Dec  6 03:08:45 np0005548731 nova_compute[232433]: 2025-12-06 08:08:45.199 232437 DEBUG oslo_concurrency.lockutils [None req-b6b7d5c3-0e2b-4313-bb2e-e6daa0f6215e d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Lock "48c1afa0-f521-4427-888c-ca46a12e88e3" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.748s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:08:45 np0005548731 nova_compute[232433]: 2025-12-06 08:08:45.373 232437 DEBUG nova.compute.manager [req-40cd0c41-2ed0-48b3-9fb5-e29d98d39620 req-8224bd08-61ce-466f-bf24-905eb0b7f1fe 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 48c1afa0-f521-4427-888c-ca46a12e88e3] Received event network-vif-deleted-9f4a6e70-da76-4bbc-8f15-2c12a822a1f5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 03:08:45 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:08:45 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:08:45 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:08:45.490 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:08:45 np0005548731 nova_compute[232433]: 2025-12-06 08:08:45.732 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:08:46 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:08:46 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:08:46 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:08:46.069 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:08:46 np0005548731 nova_compute[232433]: 2025-12-06 08:08:46.098 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:08:46 np0005548731 nova_compute[232433]: 2025-12-06 08:08:46.102 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:08:47 np0005548731 nova_compute[232433]: 2025-12-06 08:08:47.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:08:48 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:08:48 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 03:08:48 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:08:48.071 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 03:08:48 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:08:48 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:08:48 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:08:48.402 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:08:49 np0005548731 nova_compute[232433]: 2025-12-06 08:08:49.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:08:49 np0005548731 nova_compute[232433]: 2025-12-06 08:08:49.432 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:08:49 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 03:08:49 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 03:08:49 np0005548731 ceph-osd[80147]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #58. Immutable memtables: 14.
Dec  6 03:08:49 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e413 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:08:50 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:08:50 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:08:50 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:08:50.073 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:08:50 np0005548731 nova_compute[232433]: 2025-12-06 08:08:50.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:08:50 np0005548731 nova_compute[232433]: 2025-12-06 08:08:50.126 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:08:50 np0005548731 nova_compute[232433]: 2025-12-06 08:08:50.127 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:08:50 np0005548731 nova_compute[232433]: 2025-12-06 08:08:50.127 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:08:50 np0005548731 nova_compute[232433]: 2025-12-06 08:08:50.127 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  6 03:08:50 np0005548731 nova_compute[232433]: 2025-12-06 08:08:50.127 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 03:08:50 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:08:50 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:08:50 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:08:50.405 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:08:50 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 03:08:50 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/6449198' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 03:08:50 np0005548731 nova_compute[232433]: 2025-12-06 08:08:50.564 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.437s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 03:08:50 np0005548731 nova_compute[232433]: 2025-12-06 08:08:50.724 232437 WARNING nova.virt.libvirt.driver [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  6 03:08:50 np0005548731 nova_compute[232433]: 2025-12-06 08:08:50.726 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4214MB free_disk=20.921585083007812GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, 
"label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  6 03:08:50 np0005548731 nova_compute[232433]: 2025-12-06 08:08:50.726 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:08:50 np0005548731 nova_compute[232433]: 2025-12-06 08:08:50.727 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:08:50 np0005548731 nova_compute[232433]: 2025-12-06 08:08:50.735 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:08:50 np0005548731 nova_compute[232433]: 2025-12-06 08:08:50.817 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  6 03:08:50 np0005548731 nova_compute[232433]: 2025-12-06 08:08:50.818 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  6 03:08:50 np0005548731 nova_compute[232433]: 2025-12-06 08:08:50.850 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 03:08:51 np0005548731 nova_compute[232433]: 2025-12-06 08:08:51.101 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:08:51 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 03:08:51 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2217186686' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 03:08:51 np0005548731 nova_compute[232433]: 2025-12-06 08:08:51.270 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.420s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 03:08:51 np0005548731 nova_compute[232433]: 2025-12-06 08:08:51.277 232437 DEBUG nova.compute.provider_tree [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Inventory has not changed in ProviderTree for provider: 6d00757a-082f-486d-ae84-869a2ba2e6e7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  6 03:08:51 np0005548731 nova_compute[232433]: 2025-12-06 08:08:51.355 232437 DEBUG nova.scheduler.client.report [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Inventory has not changed for provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  6 03:08:51 np0005548731 nova_compute[232433]: 2025-12-06 08:08:51.444 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  6 03:08:51 np0005548731 nova_compute[232433]: 2025-12-06 08:08:51.445 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.718s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:08:52 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:08:52 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:08:52 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:08:52.074 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:08:52 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:08:52 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:08:52 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:08:52.407 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:08:54 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:08:54 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:08:54 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:08:54.077 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:08:54 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:08:54 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:08:54 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:08:54.410 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:08:54 np0005548731 nova_compute[232433]: 2025-12-06 08:08:54.446 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:08:54 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e413 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:08:55 np0005548731 nova_compute[232433]: 2025-12-06 08:08:55.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:08:55 np0005548731 nova_compute[232433]: 2025-12-06 08:08:55.691 232437 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765008520.6904793, 48c1afa0-f521-4427-888c-ca46a12e88e3 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 03:08:55 np0005548731 nova_compute[232433]: 2025-12-06 08:08:55.691 232437 INFO nova.compute.manager [-] [instance: 48c1afa0-f521-4427-888c-ca46a12e88e3] VM Stopped (Lifecycle Event)#033[00m
Dec  6 03:08:55 np0005548731 nova_compute[232433]: 2025-12-06 08:08:55.738 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:08:55 np0005548731 nova_compute[232433]: 2025-12-06 08:08:55.806 232437 DEBUG nova.compute.manager [None req-803ac1d7-4cd9-41d5-87c5-f5fa011a7bc1 - - - - - -] [instance: 48c1afa0-f521-4427-888c-ca46a12e88e3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 03:08:55 np0005548731 podman[329317]: 2025-12-06 08:08:55.894336777 +0000 UTC m=+0.054550129 container health_status 26c2ce9bdb83c6db5ccad7f5b2a7cb4e9354ab0235ad2f942ea42c8feda2142b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, 
io.buildah.version=1.41.3, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  6 03:08:55 np0005548731 podman[329319]: 2025-12-06 08:08:55.924929742 +0000 UTC m=+0.069222417 container health_status ae4fd08cdec31d6ae2a6b91ffff7deb6ca5a8375cb52ee4b685cc6f25796824e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, io.buildah.version=1.41.3, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible)
Dec  6 03:08:55 np0005548731 podman[329318]: 2025-12-06 08:08:55.930350725 +0000 UTC m=+0.087456782 container health_status a118c3becf56cfbcae208065771ebf10a65162bb7b96389971293e534823fb78 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125)
Dec  6 03:08:56 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:08:56 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:08:56 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:08:56.079 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:08:56 np0005548731 nova_compute[232433]: 2025-12-06 08:08:56.140 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:08:56 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:08:56 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:08:56 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:08:56.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:08:58 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:08:58 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:08:58 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:08:58.082 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:08:58 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:08:58 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:08:58 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:08:58.417 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:08:59 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e413 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:09:00 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:09:00 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:09:00 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:09:00.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:09:00 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:09:00 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:09:00 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:09:00.420 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:09:00 np0005548731 nova_compute[232433]: 2025-12-06 08:09:00.743 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:09:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:09:00.912 143965 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:09:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:09:00.912 143965 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:09:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:09:00.913 143965 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:09:01 np0005548731 nova_compute[232433]: 2025-12-06 08:09:01.189 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:09:02 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:09:02 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:09:02 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:09:02.086 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:09:02 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:09:02 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:09:02 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:09:02.423 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:09:04 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:09:04 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:09:04 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:09:04.087 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:09:04 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:09:04 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:09:04 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:09:04.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:09:04 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e413 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:09:05 np0005548731 nova_compute[232433]: 2025-12-06 08:09:05.746 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:09:06 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:09:06 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:09:06 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:09:06.089 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:09:06 np0005548731 nova_compute[232433]: 2025-12-06 08:09:06.239 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:09:06 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:09:06 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:09:06 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:09:06.429 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:09:08 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:09:08 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:09:08 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:09:08.092 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:09:08 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:09:08 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:09:08 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:09:08.432 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:09:09 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  6 03:09:09 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4283153530' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  6 03:09:09 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  6 03:09:09 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4283153530' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  6 03:09:09 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e413 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:09:10 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:09:10 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:09:10 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:09:10.094 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:09:10 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:09:10 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:09:10 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:09:10.436 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:09:10 np0005548731 nova_compute[232433]: 2025-12-06 08:09:10.750 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:09:11 np0005548731 nova_compute[232433]: 2025-12-06 08:09:11.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:09:11 np0005548731 nova_compute[232433]: 2025-12-06 08:09:11.105 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Dec  6 03:09:11 np0005548731 nova_compute[232433]: 2025-12-06 08:09:11.158 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Dec  6 03:09:11 np0005548731 nova_compute[232433]: 2025-12-06 08:09:11.283 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:09:12 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:09:12 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 03:09:12 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:09:12.096 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 03:09:12 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:09:12 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:09:12 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:09:12.439 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:09:14 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:09:14 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:09:14 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:09:14.100 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:09:14 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:09:14 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 03:09:14 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:09:14.441 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 03:09:14 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e413 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:09:15 np0005548731 nova_compute[232433]: 2025-12-06 08:09:15.789 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:09:16 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:09:16 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 03:09:16 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:09:16.101 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 03:09:16 np0005548731 nova_compute[232433]: 2025-12-06 08:09:16.285 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:09:16 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:09:16 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:09:16 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:09:16.444 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:09:18 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:09:18 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:09:18 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:09:18.103 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:09:18 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:09:18 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:09:18 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:09:18.447 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:09:19 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e413 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:09:19 np0005548731 nova_compute[232433]: 2025-12-06 08:09:19.867 232437 DEBUG oslo_concurrency.lockutils [None req-a56fdc41-27c0-4d17-a987-f80f099cbf74 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Acquiring lock "1414ea75-392a-4bcc-9586-0267d5cf47ab" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:09:19 np0005548731 nova_compute[232433]: 2025-12-06 08:09:19.868 232437 DEBUG oslo_concurrency.lockutils [None req-a56fdc41-27c0-4d17-a987-f80f099cbf74 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Lock "1414ea75-392a-4bcc-9586-0267d5cf47ab" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:09:20 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:09:20 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:09:20 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:09:20.105 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:09:20 np0005548731 nova_compute[232433]: 2025-12-06 08:09:20.191 232437 DEBUG nova.compute.manager [None req-a56fdc41-27c0-4d17-a987-f80f099cbf74 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 1414ea75-392a-4bcc-9586-0267d5cf47ab] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Dec  6 03:09:20 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:09:20 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:09:20 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:09:20.449 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:09:20 np0005548731 nova_compute[232433]: 2025-12-06 08:09:20.454 232437 DEBUG oslo_concurrency.lockutils [None req-a56fdc41-27c0-4d17-a987-f80f099cbf74 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:09:20 np0005548731 nova_compute[232433]: 2025-12-06 08:09:20.454 232437 DEBUG oslo_concurrency.lockutils [None req-a56fdc41-27c0-4d17-a987-f80f099cbf74 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:09:20 np0005548731 nova_compute[232433]: 2025-12-06 08:09:20.466 232437 DEBUG nova.virt.hardware [None req-a56fdc41-27c0-4d17-a987-f80f099cbf74 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Dec  6 03:09:20 np0005548731 nova_compute[232433]: 2025-12-06 08:09:20.466 232437 INFO nova.compute.claims [None req-a56fdc41-27c0-4d17-a987-f80f099cbf74 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 1414ea75-392a-4bcc-9586-0267d5cf47ab] Claim successful on node compute-2.ctlplane.example.com#033[00m
Dec  6 03:09:20 np0005548731 nova_compute[232433]: 2025-12-06 08:09:20.675 232437 DEBUG oslo_concurrency.processutils [None req-a56fdc41-27c0-4d17-a987-f80f099cbf74 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 03:09:20 np0005548731 nova_compute[232433]: 2025-12-06 08:09:20.792 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:09:21 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 03:09:21 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/600799336' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 03:09:21 np0005548731 nova_compute[232433]: 2025-12-06 08:09:21.126 232437 DEBUG oslo_concurrency.processutils [None req-a56fdc41-27c0-4d17-a987-f80f099cbf74 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 03:09:21 np0005548731 nova_compute[232433]: 2025-12-06 08:09:21.131 232437 DEBUG nova.compute.provider_tree [None req-a56fdc41-27c0-4d17-a987-f80f099cbf74 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Inventory has not changed in ProviderTree for provider: 6d00757a-082f-486d-ae84-869a2ba2e6e7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  6 03:09:21 np0005548731 nova_compute[232433]: 2025-12-06 08:09:21.147 232437 DEBUG nova.scheduler.client.report [None req-a56fdc41-27c0-4d17-a987-f80f099cbf74 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Inventory has not changed for provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  6 03:09:21 np0005548731 nova_compute[232433]: 2025-12-06 08:09:21.169 232437 DEBUG oslo_concurrency.lockutils [None req-a56fdc41-27c0-4d17-a987-f80f099cbf74 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.715s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:09:21 np0005548731 nova_compute[232433]: 2025-12-06 08:09:21.170 232437 DEBUG nova.compute.manager [None req-a56fdc41-27c0-4d17-a987-f80f099cbf74 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 1414ea75-392a-4bcc-9586-0267d5cf47ab] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Dec  6 03:09:21 np0005548731 nova_compute[232433]: 2025-12-06 08:09:21.230 232437 DEBUG nova.compute.manager [None req-a56fdc41-27c0-4d17-a987-f80f099cbf74 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 1414ea75-392a-4bcc-9586-0267d5cf47ab] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Dec  6 03:09:21 np0005548731 nova_compute[232433]: 2025-12-06 08:09:21.231 232437 DEBUG nova.network.neutron [None req-a56fdc41-27c0-4d17-a987-f80f099cbf74 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 1414ea75-392a-4bcc-9586-0267d5cf47ab] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Dec  6 03:09:21 np0005548731 nova_compute[232433]: 2025-12-06 08:09:21.264 232437 INFO nova.virt.libvirt.driver [None req-a56fdc41-27c0-4d17-a987-f80f099cbf74 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 1414ea75-392a-4bcc-9586-0267d5cf47ab] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Dec  6 03:09:21 np0005548731 nova_compute[232433]: 2025-12-06 08:09:21.283 232437 DEBUG nova.compute.manager [None req-a56fdc41-27c0-4d17-a987-f80f099cbf74 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 1414ea75-392a-4bcc-9586-0267d5cf47ab] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Dec  6 03:09:21 np0005548731 nova_compute[232433]: 2025-12-06 08:09:21.322 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:09:21 np0005548731 nova_compute[232433]: 2025-12-06 08:09:21.415 232437 DEBUG nova.compute.manager [None req-a56fdc41-27c0-4d17-a987-f80f099cbf74 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 1414ea75-392a-4bcc-9586-0267d5cf47ab] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Dec  6 03:09:21 np0005548731 nova_compute[232433]: 2025-12-06 08:09:21.416 232437 DEBUG nova.virt.libvirt.driver [None req-a56fdc41-27c0-4d17-a987-f80f099cbf74 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 1414ea75-392a-4bcc-9586-0267d5cf47ab] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Dec  6 03:09:21 np0005548731 nova_compute[232433]: 2025-12-06 08:09:21.417 232437 INFO nova.virt.libvirt.driver [None req-a56fdc41-27c0-4d17-a987-f80f099cbf74 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 1414ea75-392a-4bcc-9586-0267d5cf47ab] Creating image(s)#033[00m
Dec  6 03:09:21 np0005548731 nova_compute[232433]: 2025-12-06 08:09:21.442 232437 DEBUG nova.storage.rbd_utils [None req-a56fdc41-27c0-4d17-a987-f80f099cbf74 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] rbd image 1414ea75-392a-4bcc-9586-0267d5cf47ab_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 03:09:21 np0005548731 nova_compute[232433]: 2025-12-06 08:09:21.470 232437 DEBUG nova.storage.rbd_utils [None req-a56fdc41-27c0-4d17-a987-f80f099cbf74 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] rbd image 1414ea75-392a-4bcc-9586-0267d5cf47ab_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 03:09:21 np0005548731 nova_compute[232433]: 2025-12-06 08:09:21.502 232437 DEBUG nova.storage.rbd_utils [None req-a56fdc41-27c0-4d17-a987-f80f099cbf74 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] rbd image 1414ea75-392a-4bcc-9586-0267d5cf47ab_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 03:09:21 np0005548731 nova_compute[232433]: 2025-12-06 08:09:21.506 232437 DEBUG oslo_concurrency.processutils [None req-a56fdc41-27c0-4d17-a987-f80f099cbf74 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 03:09:21 np0005548731 nova_compute[232433]: 2025-12-06 08:09:21.589 232437 DEBUG oslo_concurrency.processutils [None req-a56fdc41-27c0-4d17-a987-f80f099cbf74 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json" returned: 0 in 0.083s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 03:09:21 np0005548731 nova_compute[232433]: 2025-12-06 08:09:21.590 232437 DEBUG oslo_concurrency.lockutils [None req-a56fdc41-27c0-4d17-a987-f80f099cbf74 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Acquiring lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:09:21 np0005548731 nova_compute[232433]: 2025-12-06 08:09:21.590 232437 DEBUG oslo_concurrency.lockutils [None req-a56fdc41-27c0-4d17-a987-f80f099cbf74 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:09:21 np0005548731 nova_compute[232433]: 2025-12-06 08:09:21.591 232437 DEBUG oslo_concurrency.lockutils [None req-a56fdc41-27c0-4d17-a987-f80f099cbf74 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:09:21 np0005548731 nova_compute[232433]: 2025-12-06 08:09:21.617 232437 DEBUG nova.storage.rbd_utils [None req-a56fdc41-27c0-4d17-a987-f80f099cbf74 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] rbd image 1414ea75-392a-4bcc-9586-0267d5cf47ab_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 03:09:21 np0005548731 nova_compute[232433]: 2025-12-06 08:09:21.622 232437 DEBUG oslo_concurrency.processutils [None req-a56fdc41-27c0-4d17-a987-f80f099cbf74 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef 1414ea75-392a-4bcc-9586-0267d5cf47ab_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 03:09:21 np0005548731 nova_compute[232433]: 2025-12-06 08:09:21.901 232437 DEBUG oslo_concurrency.processutils [None req-a56fdc41-27c0-4d17-a987-f80f099cbf74 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef 1414ea75-392a-4bcc-9586-0267d5cf47ab_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.279s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 03:09:21 np0005548731 nova_compute[232433]: 2025-12-06 08:09:21.947 232437 DEBUG nova.policy [None req-a56fdc41-27c0-4d17-a987-f80f099cbf74 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'd5359905348247d0b9b5b95982e890bb', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'f4735a799c84437b9dd4ea8778ad2fbb', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Dec  6 03:09:22 np0005548731 nova_compute[232433]: 2025-12-06 08:09:22.012 232437 DEBUG nova.storage.rbd_utils [None req-a56fdc41-27c0-4d17-a987-f80f099cbf74 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] resizing rbd image 1414ea75-392a-4bcc-9586-0267d5cf47ab_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Dec  6 03:09:22 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:09:22 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 03:09:22 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:09:22.106 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 03:09:22 np0005548731 nova_compute[232433]: 2025-12-06 08:09:22.109 232437 DEBUG nova.objects.instance [None req-a56fdc41-27c0-4d17-a987-f80f099cbf74 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Lazy-loading 'migration_context' on Instance uuid 1414ea75-392a-4bcc-9586-0267d5cf47ab obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 03:09:22 np0005548731 nova_compute[232433]: 2025-12-06 08:09:22.127 232437 DEBUG nova.virt.libvirt.driver [None req-a56fdc41-27c0-4d17-a987-f80f099cbf74 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 1414ea75-392a-4bcc-9586-0267d5cf47ab] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Dec  6 03:09:22 np0005548731 nova_compute[232433]: 2025-12-06 08:09:22.128 232437 DEBUG nova.virt.libvirt.driver [None req-a56fdc41-27c0-4d17-a987-f80f099cbf74 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 1414ea75-392a-4bcc-9586-0267d5cf47ab] Ensure instance console log exists: /var/lib/nova/instances/1414ea75-392a-4bcc-9586-0267d5cf47ab/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec  6 03:09:22 np0005548731 nova_compute[232433]: 2025-12-06 08:09:22.128 232437 DEBUG oslo_concurrency.lockutils [None req-a56fdc41-27c0-4d17-a987-f80f099cbf74 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:09:22 np0005548731 nova_compute[232433]: 2025-12-06 08:09:22.128 232437 DEBUG oslo_concurrency.lockutils [None req-a56fdc41-27c0-4d17-a987-f80f099cbf74 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:09:22 np0005548731 nova_compute[232433]: 2025-12-06 08:09:22.129 232437 DEBUG oslo_concurrency.lockutils [None req-a56fdc41-27c0-4d17-a987-f80f099cbf74 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:09:22 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:09:22 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:09:22 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:09:22.452 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:09:23 np0005548731 nova_compute[232433]: 2025-12-06 08:09:23.866 232437 DEBUG nova.network.neutron [None req-a56fdc41-27c0-4d17-a987-f80f099cbf74 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 1414ea75-392a-4bcc-9586-0267d5cf47ab] Successfully updated port: bd46dc09-5d50-46df-8d43-1b7dca95407f _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Dec  6 03:09:23 np0005548731 nova_compute[232433]: 2025-12-06 08:09:23.888 232437 DEBUG oslo_concurrency.lockutils [None req-a56fdc41-27c0-4d17-a987-f80f099cbf74 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Acquiring lock "refresh_cache-1414ea75-392a-4bcc-9586-0267d5cf47ab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 03:09:23 np0005548731 nova_compute[232433]: 2025-12-06 08:09:23.889 232437 DEBUG oslo_concurrency.lockutils [None req-a56fdc41-27c0-4d17-a987-f80f099cbf74 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Acquired lock "refresh_cache-1414ea75-392a-4bcc-9586-0267d5cf47ab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 03:09:23 np0005548731 nova_compute[232433]: 2025-12-06 08:09:23.889 232437 DEBUG nova.network.neutron [None req-a56fdc41-27c0-4d17-a987-f80f099cbf74 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 1414ea75-392a-4bcc-9586-0267d5cf47ab] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec  6 03:09:24 np0005548731 nova_compute[232433]: 2025-12-06 08:09:24.036 232437 DEBUG nova.compute.manager [req-d454bc9a-e449-4b31-95bb-8647e94ff197 req-572d2230-ab18-4776-95fd-637c04323894 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 1414ea75-392a-4bcc-9586-0267d5cf47ab] Received event network-changed-bd46dc09-5d50-46df-8d43-1b7dca95407f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 03:09:24 np0005548731 nova_compute[232433]: 2025-12-06 08:09:24.037 232437 DEBUG nova.compute.manager [req-d454bc9a-e449-4b31-95bb-8647e94ff197 req-572d2230-ab18-4776-95fd-637c04323894 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 1414ea75-392a-4bcc-9586-0267d5cf47ab] Refreshing instance network info cache due to event network-changed-bd46dc09-5d50-46df-8d43-1b7dca95407f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  6 03:09:24 np0005548731 nova_compute[232433]: 2025-12-06 08:09:24.037 232437 DEBUG oslo_concurrency.lockutils [req-d454bc9a-e449-4b31-95bb-8647e94ff197 req-572d2230-ab18-4776-95fd-637c04323894 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-1414ea75-392a-4bcc-9586-0267d5cf47ab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 03:09:24 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:09:24 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:09:24 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:09:24.108 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:09:24 np0005548731 nova_compute[232433]: 2025-12-06 08:09:24.118 232437 DEBUG nova.network.neutron [None req-a56fdc41-27c0-4d17-a987-f80f099cbf74 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 1414ea75-392a-4bcc-9586-0267d5cf47ab] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec  6 03:09:24 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:09:24 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 03:09:24 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:09:24.455 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 03:09:24 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e413 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:09:24 np0005548731 ceph-mon[77458]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec  6 03:09:24 np0005548731 ceph-mon[77458]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 6000.0 total, 600.0 interval#012Cumulative writes: 14K writes, 76K keys, 14K commit groups, 1.0 writes per commit group, ingest: 0.15 GB, 0.03 MB/s#012Cumulative WAL: 14K writes, 14K syncs, 1.00 writes per sync, written: 0.15 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1519 writes, 7522 keys, 1519 commit groups, 1.0 writes per commit group, ingest: 15.65 MB, 0.03 MB/s#012Interval WAL: 1519 writes, 1519 syncs, 1.00 writes per sync, written: 0.02 GB, 0.03 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     38.1      2.48              0.27        48    0.052       0      0       0.0       0.0#012  L6      1/0   10.41 MB   0.0      0.6     0.1      0.5       0.5      0.0       0.0   5.2    107.0     91.4      5.38              1.41        47    0.115    367K    25K       0.0       0.0#012 Sum      1/0   10.41 MB   0.0      0.6     0.1      0.5       0.6      0.1       0.0   6.2     73.3     74.6      7.86              1.69        95    0.083    367K    25K       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   7.9    162.1    158.8      0.51              0.24        12    0.043     63K   3034       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.6     0.1      0.5       0.5      0.0       0.0   0.0    107.0     91.4      5.38              1.41        47    0.115    367K    25K       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     38.1      2.48              0.27        47    0.053       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.6      0.00              0.00         1    0.003       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 6000.0 total, 600.0 interval#012Flush(GB): cumulative 0.092, interval 0.010#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.57 GB write, 0.10 MB/s write, 0.56 GB read, 0.10 MB/s read, 7.9 seconds#012Interval compaction: 0.08 GB write, 0.14 MB/s write, 0.08 GB read, 0.14 MB/s read, 0.5 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x5619171151f0#2 capacity: 304.00 MB usage: 62.62 MB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 0 last_secs: 0.000269 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3558,60.00 MB,19.7379%) FilterBlock(95,1005.55 KB,0.32302%) IndexBlock(95,1.63 MB,0.536321%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Dec  6 03:09:25 np0005548731 nova_compute[232433]: 2025-12-06 08:09:25.056 232437 DEBUG nova.network.neutron [None req-a56fdc41-27c0-4d17-a987-f80f099cbf74 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 1414ea75-392a-4bcc-9586-0267d5cf47ab] Updating instance_info_cache with network_info: [{"id": "bd46dc09-5d50-46df-8d43-1b7dca95407f", "address": "fa:16:3e:b4:d3:2e", "network": {"id": "2f82da1e-2277-4375-8c9e-32c60bb0f99c", "bridge": "br-int", "label": "tempest-network-smoke--1695874872", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f4735a799c84437b9dd4ea8778ad2fbb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbd46dc09-5d", "ovs_interfaceid": "bd46dc09-5d50-46df-8d43-1b7dca95407f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 03:09:25 np0005548731 nova_compute[232433]: 2025-12-06 08:09:25.167 232437 DEBUG oslo_concurrency.lockutils [None req-a56fdc41-27c0-4d17-a987-f80f099cbf74 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Releasing lock "refresh_cache-1414ea75-392a-4bcc-9586-0267d5cf47ab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 03:09:25 np0005548731 nova_compute[232433]: 2025-12-06 08:09:25.167 232437 DEBUG nova.compute.manager [None req-a56fdc41-27c0-4d17-a987-f80f099cbf74 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 1414ea75-392a-4bcc-9586-0267d5cf47ab] Instance network_info: |[{"id": "bd46dc09-5d50-46df-8d43-1b7dca95407f", "address": "fa:16:3e:b4:d3:2e", "network": {"id": "2f82da1e-2277-4375-8c9e-32c60bb0f99c", "bridge": "br-int", "label": "tempest-network-smoke--1695874872", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f4735a799c84437b9dd4ea8778ad2fbb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbd46dc09-5d", "ovs_interfaceid": "bd46dc09-5d50-46df-8d43-1b7dca95407f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Dec  6 03:09:25 np0005548731 nova_compute[232433]: 2025-12-06 08:09:25.168 232437 DEBUG oslo_concurrency.lockutils [req-d454bc9a-e449-4b31-95bb-8647e94ff197 req-572d2230-ab18-4776-95fd-637c04323894 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-1414ea75-392a-4bcc-9586-0267d5cf47ab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 03:09:25 np0005548731 nova_compute[232433]: 2025-12-06 08:09:25.168 232437 DEBUG nova.network.neutron [req-d454bc9a-e449-4b31-95bb-8647e94ff197 req-572d2230-ab18-4776-95fd-637c04323894 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 1414ea75-392a-4bcc-9586-0267d5cf47ab] Refreshing network info cache for port bd46dc09-5d50-46df-8d43-1b7dca95407f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  6 03:09:25 np0005548731 nova_compute[232433]: 2025-12-06 08:09:25.172 232437 DEBUG nova.virt.libvirt.driver [None req-a56fdc41-27c0-4d17-a987-f80f099cbf74 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 1414ea75-392a-4bcc-9586-0267d5cf47ab] Start _get_guest_xml network_info=[{"id": "bd46dc09-5d50-46df-8d43-1b7dca95407f", "address": "fa:16:3e:b4:d3:2e", "network": {"id": "2f82da1e-2277-4375-8c9e-32c60bb0f99c", "bridge": "br-int", "label": "tempest-network-smoke--1695874872", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f4735a799c84437b9dd4ea8778ad2fbb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbd46dc09-5d", "ovs_interfaceid": "bd46dc09-5d50-46df-8d43-1b7dca95407f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'device_type': 'disk', 'encryption_secret_uuid': None, 'disk_bus': 'virtio', 'boot_index': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'encryption_options': None, 'size': 0, 'encrypted': False, 'image_id': '6efab05d-c7cf-4770-a5c3-c806a2739063'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec  6 03:09:25 np0005548731 nova_compute[232433]: 2025-12-06 08:09:25.176 232437 WARNING nova.virt.libvirt.driver [None req-a56fdc41-27c0-4d17-a987-f80f099cbf74 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  6 03:09:25 np0005548731 nova_compute[232433]: 2025-12-06 08:09:25.180 232437 DEBUG nova.virt.libvirt.host [None req-a56fdc41-27c0-4d17-a987-f80f099cbf74 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec  6 03:09:25 np0005548731 nova_compute[232433]: 2025-12-06 08:09:25.181 232437 DEBUG nova.virt.libvirt.host [None req-a56fdc41-27c0-4d17-a987-f80f099cbf74 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec  6 03:09:25 np0005548731 nova_compute[232433]: 2025-12-06 08:09:25.184 232437 DEBUG nova.virt.libvirt.host [None req-a56fdc41-27c0-4d17-a987-f80f099cbf74 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec  6 03:09:25 np0005548731 nova_compute[232433]: 2025-12-06 08:09:25.185 232437 DEBUG nova.virt.libvirt.host [None req-a56fdc41-27c0-4d17-a987-f80f099cbf74 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec  6 03:09:25 np0005548731 nova_compute[232433]: 2025-12-06 08:09:25.186 232437 DEBUG nova.virt.libvirt.driver [None req-a56fdc41-27c0-4d17-a987-f80f099cbf74 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec  6 03:09:25 np0005548731 nova_compute[232433]: 2025-12-06 08:09:25.186 232437 DEBUG nova.virt.hardware [None req-a56fdc41-27c0-4d17-a987-f80f099cbf74 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-06T06:56:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='25848a18-11d9-4f11-80b5-5d005675c76d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec  6 03:09:25 np0005548731 nova_compute[232433]: 2025-12-06 08:09:25.187 232437 DEBUG nova.virt.hardware [None req-a56fdc41-27c0-4d17-a987-f80f099cbf74 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec  6 03:09:25 np0005548731 nova_compute[232433]: 2025-12-06 08:09:25.187 232437 DEBUG nova.virt.hardware [None req-a56fdc41-27c0-4d17-a987-f80f099cbf74 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec  6 03:09:25 np0005548731 nova_compute[232433]: 2025-12-06 08:09:25.187 232437 DEBUG nova.virt.hardware [None req-a56fdc41-27c0-4d17-a987-f80f099cbf74 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec  6 03:09:25 np0005548731 nova_compute[232433]: 2025-12-06 08:09:25.187 232437 DEBUG nova.virt.hardware [None req-a56fdc41-27c0-4d17-a987-f80f099cbf74 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec  6 03:09:25 np0005548731 nova_compute[232433]: 2025-12-06 08:09:25.188 232437 DEBUG nova.virt.hardware [None req-a56fdc41-27c0-4d17-a987-f80f099cbf74 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec  6 03:09:25 np0005548731 nova_compute[232433]: 2025-12-06 08:09:25.188 232437 DEBUG nova.virt.hardware [None req-a56fdc41-27c0-4d17-a987-f80f099cbf74 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec  6 03:09:25 np0005548731 nova_compute[232433]: 2025-12-06 08:09:25.188 232437 DEBUG nova.virt.hardware [None req-a56fdc41-27c0-4d17-a987-f80f099cbf74 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec  6 03:09:25 np0005548731 nova_compute[232433]: 2025-12-06 08:09:25.188 232437 DEBUG nova.virt.hardware [None req-a56fdc41-27c0-4d17-a987-f80f099cbf74 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec  6 03:09:25 np0005548731 nova_compute[232433]: 2025-12-06 08:09:25.189 232437 DEBUG nova.virt.hardware [None req-a56fdc41-27c0-4d17-a987-f80f099cbf74 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec  6 03:09:25 np0005548731 nova_compute[232433]: 2025-12-06 08:09:25.189 232437 DEBUG nova.virt.hardware [None req-a56fdc41-27c0-4d17-a987-f80f099cbf74 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec  6 03:09:25 np0005548731 nova_compute[232433]: 2025-12-06 08:09:25.192 232437 DEBUG oslo_concurrency.processutils [None req-a56fdc41-27c0-4d17-a987-f80f099cbf74 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 03:09:25 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  6 03:09:25 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/295627267' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  6 03:09:25 np0005548731 nova_compute[232433]: 2025-12-06 08:09:25.626 232437 DEBUG oslo_concurrency.processutils [None req-a56fdc41-27c0-4d17-a987-f80f099cbf74 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.435s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 03:09:25 np0005548731 nova_compute[232433]: 2025-12-06 08:09:25.650 232437 DEBUG nova.storage.rbd_utils [None req-a56fdc41-27c0-4d17-a987-f80f099cbf74 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] rbd image 1414ea75-392a-4bcc-9586-0267d5cf47ab_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 03:09:25 np0005548731 nova_compute[232433]: 2025-12-06 08:09:25.654 232437 DEBUG oslo_concurrency.processutils [None req-a56fdc41-27c0-4d17-a987-f80f099cbf74 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 03:09:25 np0005548731 nova_compute[232433]: 2025-12-06 08:09:25.795 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:09:26 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  6 03:09:26 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1005039998' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  6 03:09:26 np0005548731 nova_compute[232433]: 2025-12-06 08:09:26.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:09:26 np0005548731 nova_compute[232433]: 2025-12-06 08:09:26.105 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Dec  6 03:09:26 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:09:26 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:09:26 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:09:26.109 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:09:26 np0005548731 nova_compute[232433]: 2025-12-06 08:09:26.122 232437 DEBUG oslo_concurrency.processutils [None req-a56fdc41-27c0-4d17-a987-f80f099cbf74 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.468s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 03:09:26 np0005548731 nova_compute[232433]: 2025-12-06 08:09:26.123 232437 DEBUG nova.virt.libvirt.vif [None req-a56fdc41-27c0-4d17-a987-f80f099cbf74 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T08:09:16Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1047508929',display_name='tempest-TestNetworkBasicOps-server-1047508929',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1047508929',id=195,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEZ0CAYIV/3GWGnHWk7RYMRkOyt2U/eOwlCCdyF6MmsyHWsBLaXuBSQYNSgoBNdnMzRm1oWL/OnJoL9e73t+ZmkpX++sKpngWSJd3/OoQG3aE8hAZugjw70q/BCf1OtyQg==',key_name='tempest-TestNetworkBasicOps-1044619278',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f4735a799c84437b9dd4ea8778ad2fbb',ramdisk_id='',reservation_id='r-pj1gxhp7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1435471576',owner_user_name='tempest-TestNetworkBasicOps-1435471576-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T08:09:21Z,user_data=None,user_id='d5359905348247d0b9b5b95982e890bb',uuid=1414ea75-392a-4bcc-9586-0267d5cf47ab,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "bd46dc09-5d50-46df-8d43-1b7dca95407f", "address": "fa:16:3e:b4:d3:2e", "network": {"id": "2f82da1e-2277-4375-8c9e-32c60bb0f99c", "bridge": "br-int", "label": "tempest-network-smoke--1695874872", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "f4735a799c84437b9dd4ea8778ad2fbb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbd46dc09-5d", "ovs_interfaceid": "bd46dc09-5d50-46df-8d43-1b7dca95407f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec  6 03:09:26 np0005548731 nova_compute[232433]: 2025-12-06 08:09:26.124 232437 DEBUG nova.network.os_vif_util [None req-a56fdc41-27c0-4d17-a987-f80f099cbf74 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Converting VIF {"id": "bd46dc09-5d50-46df-8d43-1b7dca95407f", "address": "fa:16:3e:b4:d3:2e", "network": {"id": "2f82da1e-2277-4375-8c9e-32c60bb0f99c", "bridge": "br-int", "label": "tempest-network-smoke--1695874872", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f4735a799c84437b9dd4ea8778ad2fbb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbd46dc09-5d", "ovs_interfaceid": "bd46dc09-5d50-46df-8d43-1b7dca95407f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 03:09:26 np0005548731 nova_compute[232433]: 2025-12-06 08:09:26.124 232437 DEBUG nova.network.os_vif_util [None req-a56fdc41-27c0-4d17-a987-f80f099cbf74 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b4:d3:2e,bridge_name='br-int',has_traffic_filtering=True,id=bd46dc09-5d50-46df-8d43-1b7dca95407f,network=Network(2f82da1e-2277-4375-8c9e-32c60bb0f99c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapbd46dc09-5d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 03:09:26 np0005548731 nova_compute[232433]: 2025-12-06 08:09:26.125 232437 DEBUG nova.objects.instance [None req-a56fdc41-27c0-4d17-a987-f80f099cbf74 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Lazy-loading 'pci_devices' on Instance uuid 1414ea75-392a-4bcc-9586-0267d5cf47ab obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 03:09:26 np0005548731 nova_compute[232433]: 2025-12-06 08:09:26.280 232437 DEBUG nova.virt.libvirt.driver [None req-a56fdc41-27c0-4d17-a987-f80f099cbf74 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 1414ea75-392a-4bcc-9586-0267d5cf47ab] End _get_guest_xml xml=<domain type="kvm">
Dec  6 03:09:26 np0005548731 nova_compute[232433]:  <uuid>1414ea75-392a-4bcc-9586-0267d5cf47ab</uuid>
Dec  6 03:09:26 np0005548731 nova_compute[232433]:  <name>instance-000000c3</name>
Dec  6 03:09:26 np0005548731 nova_compute[232433]:  <memory>131072</memory>
Dec  6 03:09:26 np0005548731 nova_compute[232433]:  <vcpu>1</vcpu>
Dec  6 03:09:26 np0005548731 nova_compute[232433]:  <metadata>
Dec  6 03:09:26 np0005548731 nova_compute[232433]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec  6 03:09:26 np0005548731 nova_compute[232433]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec  6 03:09:26 np0005548731 nova_compute[232433]:      <nova:name>tempest-TestNetworkBasicOps-server-1047508929</nova:name>
Dec  6 03:09:26 np0005548731 nova_compute[232433]:      <nova:creationTime>2025-12-06 08:09:25</nova:creationTime>
Dec  6 03:09:26 np0005548731 nova_compute[232433]:      <nova:flavor name="m1.nano">
Dec  6 03:09:26 np0005548731 nova_compute[232433]:        <nova:memory>128</nova:memory>
Dec  6 03:09:26 np0005548731 nova_compute[232433]:        <nova:disk>1</nova:disk>
Dec  6 03:09:26 np0005548731 nova_compute[232433]:        <nova:swap>0</nova:swap>
Dec  6 03:09:26 np0005548731 nova_compute[232433]:        <nova:ephemeral>0</nova:ephemeral>
Dec  6 03:09:26 np0005548731 nova_compute[232433]:        <nova:vcpus>1</nova:vcpus>
Dec  6 03:09:26 np0005548731 nova_compute[232433]:      </nova:flavor>
Dec  6 03:09:26 np0005548731 nova_compute[232433]:      <nova:owner>
Dec  6 03:09:26 np0005548731 nova_compute[232433]:        <nova:user uuid="d5359905348247d0b9b5b95982e890bb">tempest-TestNetworkBasicOps-1435471576-project-member</nova:user>
Dec  6 03:09:26 np0005548731 nova_compute[232433]:        <nova:project uuid="f4735a799c84437b9dd4ea8778ad2fbb">tempest-TestNetworkBasicOps-1435471576</nova:project>
Dec  6 03:09:26 np0005548731 nova_compute[232433]:      </nova:owner>
Dec  6 03:09:26 np0005548731 nova_compute[232433]:      <nova:root type="image" uuid="6efab05d-c7cf-4770-a5c3-c806a2739063"/>
Dec  6 03:09:26 np0005548731 nova_compute[232433]:      <nova:ports>
Dec  6 03:09:26 np0005548731 nova_compute[232433]:        <nova:port uuid="bd46dc09-5d50-46df-8d43-1b7dca95407f">
Dec  6 03:09:26 np0005548731 nova_compute[232433]:          <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Dec  6 03:09:26 np0005548731 nova_compute[232433]:        </nova:port>
Dec  6 03:09:26 np0005548731 nova_compute[232433]:      </nova:ports>
Dec  6 03:09:26 np0005548731 nova_compute[232433]:    </nova:instance>
Dec  6 03:09:26 np0005548731 nova_compute[232433]:  </metadata>
Dec  6 03:09:26 np0005548731 nova_compute[232433]:  <sysinfo type="smbios">
Dec  6 03:09:26 np0005548731 nova_compute[232433]:    <system>
Dec  6 03:09:26 np0005548731 nova_compute[232433]:      <entry name="manufacturer">RDO</entry>
Dec  6 03:09:26 np0005548731 nova_compute[232433]:      <entry name="product">OpenStack Compute</entry>
Dec  6 03:09:26 np0005548731 nova_compute[232433]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec  6 03:09:26 np0005548731 nova_compute[232433]:      <entry name="serial">1414ea75-392a-4bcc-9586-0267d5cf47ab</entry>
Dec  6 03:09:26 np0005548731 nova_compute[232433]:      <entry name="uuid">1414ea75-392a-4bcc-9586-0267d5cf47ab</entry>
Dec  6 03:09:26 np0005548731 nova_compute[232433]:      <entry name="family">Virtual Machine</entry>
Dec  6 03:09:26 np0005548731 nova_compute[232433]:    </system>
Dec  6 03:09:26 np0005548731 nova_compute[232433]:  </sysinfo>
Dec  6 03:09:26 np0005548731 nova_compute[232433]:  <os>
Dec  6 03:09:26 np0005548731 nova_compute[232433]:    <type arch="x86_64" machine="q35">hvm</type>
Dec  6 03:09:26 np0005548731 nova_compute[232433]:    <boot dev="hd"/>
Dec  6 03:09:26 np0005548731 nova_compute[232433]:    <smbios mode="sysinfo"/>
Dec  6 03:09:26 np0005548731 nova_compute[232433]:  </os>
Dec  6 03:09:26 np0005548731 nova_compute[232433]:  <features>
Dec  6 03:09:26 np0005548731 nova_compute[232433]:    <acpi/>
Dec  6 03:09:26 np0005548731 nova_compute[232433]:    <apic/>
Dec  6 03:09:26 np0005548731 nova_compute[232433]:    <vmcoreinfo/>
Dec  6 03:09:26 np0005548731 nova_compute[232433]:  </features>
Dec  6 03:09:26 np0005548731 nova_compute[232433]:  <clock offset="utc">
Dec  6 03:09:26 np0005548731 nova_compute[232433]:    <timer name="pit" tickpolicy="delay"/>
Dec  6 03:09:26 np0005548731 nova_compute[232433]:    <timer name="rtc" tickpolicy="catchup"/>
Dec  6 03:09:26 np0005548731 nova_compute[232433]:    <timer name="hpet" present="no"/>
Dec  6 03:09:26 np0005548731 nova_compute[232433]:  </clock>
Dec  6 03:09:26 np0005548731 nova_compute[232433]:  <cpu mode="custom" match="exact">
Dec  6 03:09:26 np0005548731 nova_compute[232433]:    <model>Nehalem</model>
Dec  6 03:09:26 np0005548731 nova_compute[232433]:    <topology sockets="1" cores="1" threads="1"/>
Dec  6 03:09:26 np0005548731 nova_compute[232433]:  </cpu>
Dec  6 03:09:26 np0005548731 nova_compute[232433]:  <devices>
Dec  6 03:09:26 np0005548731 nova_compute[232433]:    <disk type="network" device="disk">
Dec  6 03:09:26 np0005548731 nova_compute[232433]:      <driver type="raw" cache="none"/>
Dec  6 03:09:26 np0005548731 nova_compute[232433]:      <source protocol="rbd" name="vms/1414ea75-392a-4bcc-9586-0267d5cf47ab_disk">
Dec  6 03:09:26 np0005548731 nova_compute[232433]:        <host name="192.168.122.100" port="6789"/>
Dec  6 03:09:26 np0005548731 nova_compute[232433]:        <host name="192.168.122.102" port="6789"/>
Dec  6 03:09:26 np0005548731 nova_compute[232433]:        <host name="192.168.122.101" port="6789"/>
Dec  6 03:09:26 np0005548731 nova_compute[232433]:      </source>
Dec  6 03:09:26 np0005548731 nova_compute[232433]:      <auth username="openstack">
Dec  6 03:09:26 np0005548731 nova_compute[232433]:        <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec  6 03:09:26 np0005548731 nova_compute[232433]:      </auth>
Dec  6 03:09:26 np0005548731 nova_compute[232433]:      <target dev="vda" bus="virtio"/>
Dec  6 03:09:26 np0005548731 nova_compute[232433]:    </disk>
Dec  6 03:09:26 np0005548731 nova_compute[232433]:    <disk type="network" device="cdrom">
Dec  6 03:09:26 np0005548731 nova_compute[232433]:      <driver type="raw" cache="none"/>
Dec  6 03:09:26 np0005548731 nova_compute[232433]:      <source protocol="rbd" name="vms/1414ea75-392a-4bcc-9586-0267d5cf47ab_disk.config">
Dec  6 03:09:26 np0005548731 nova_compute[232433]:        <host name="192.168.122.100" port="6789"/>
Dec  6 03:09:26 np0005548731 nova_compute[232433]:        <host name="192.168.122.102" port="6789"/>
Dec  6 03:09:26 np0005548731 nova_compute[232433]:        <host name="192.168.122.101" port="6789"/>
Dec  6 03:09:26 np0005548731 nova_compute[232433]:      </source>
Dec  6 03:09:26 np0005548731 nova_compute[232433]:      <auth username="openstack">
Dec  6 03:09:26 np0005548731 nova_compute[232433]:        <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec  6 03:09:26 np0005548731 nova_compute[232433]:      </auth>
Dec  6 03:09:26 np0005548731 nova_compute[232433]:      <target dev="sda" bus="sata"/>
Dec  6 03:09:26 np0005548731 nova_compute[232433]:    </disk>
Dec  6 03:09:26 np0005548731 nova_compute[232433]:    <interface type="ethernet">
Dec  6 03:09:26 np0005548731 nova_compute[232433]:      <mac address="fa:16:3e:b4:d3:2e"/>
Dec  6 03:09:26 np0005548731 nova_compute[232433]:      <model type="virtio"/>
Dec  6 03:09:26 np0005548731 nova_compute[232433]:      <driver name="vhost" rx_queue_size="512"/>
Dec  6 03:09:26 np0005548731 nova_compute[232433]:      <mtu size="1442"/>
Dec  6 03:09:26 np0005548731 nova_compute[232433]:      <target dev="tapbd46dc09-5d"/>
Dec  6 03:09:26 np0005548731 nova_compute[232433]:    </interface>
Dec  6 03:09:26 np0005548731 nova_compute[232433]:    <serial type="pty">
Dec  6 03:09:26 np0005548731 nova_compute[232433]:      <log file="/var/lib/nova/instances/1414ea75-392a-4bcc-9586-0267d5cf47ab/console.log" append="off"/>
Dec  6 03:09:26 np0005548731 nova_compute[232433]:    </serial>
Dec  6 03:09:26 np0005548731 nova_compute[232433]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec  6 03:09:26 np0005548731 nova_compute[232433]:    <video>
Dec  6 03:09:26 np0005548731 nova_compute[232433]:      <model type="virtio"/>
Dec  6 03:09:26 np0005548731 nova_compute[232433]:    </video>
Dec  6 03:09:26 np0005548731 nova_compute[232433]:    <input type="tablet" bus="usb"/>
Dec  6 03:09:26 np0005548731 nova_compute[232433]:    <rng model="virtio">
Dec  6 03:09:26 np0005548731 nova_compute[232433]:      <backend model="random">/dev/urandom</backend>
Dec  6 03:09:26 np0005548731 nova_compute[232433]:    </rng>
Dec  6 03:09:26 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root"/>
Dec  6 03:09:26 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:09:26 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:09:26 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:09:26 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:09:26 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:09:26 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:09:26 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:09:26 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:09:26 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:09:26 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:09:26 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:09:26 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:09:26 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:09:26 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:09:26 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:09:26 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:09:26 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:09:26 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:09:26 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:09:26 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:09:26 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:09:26 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:09:26 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:09:26 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:09:26 np0005548731 nova_compute[232433]:    <controller type="usb" index="0"/>
Dec  6 03:09:26 np0005548731 nova_compute[232433]:    <memballoon model="virtio">
Dec  6 03:09:26 np0005548731 nova_compute[232433]:      <stats period="10"/>
Dec  6 03:09:26 np0005548731 nova_compute[232433]:    </memballoon>
Dec  6 03:09:26 np0005548731 nova_compute[232433]:  </devices>
Dec  6 03:09:26 np0005548731 nova_compute[232433]: </domain>
Dec  6 03:09:26 np0005548731 nova_compute[232433]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec  6 03:09:26 np0005548731 nova_compute[232433]: 2025-12-06 08:09:26.281 232437 DEBUG nova.compute.manager [None req-a56fdc41-27c0-4d17-a987-f80f099cbf74 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 1414ea75-392a-4bcc-9586-0267d5cf47ab] Preparing to wait for external event network-vif-plugged-bd46dc09-5d50-46df-8d43-1b7dca95407f prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec  6 03:09:26 np0005548731 nova_compute[232433]: 2025-12-06 08:09:26.282 232437 DEBUG oslo_concurrency.lockutils [None req-a56fdc41-27c0-4d17-a987-f80f099cbf74 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Acquiring lock "1414ea75-392a-4bcc-9586-0267d5cf47ab-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:09:26 np0005548731 nova_compute[232433]: 2025-12-06 08:09:26.282 232437 DEBUG oslo_concurrency.lockutils [None req-a56fdc41-27c0-4d17-a987-f80f099cbf74 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Lock "1414ea75-392a-4bcc-9586-0267d5cf47ab-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:09:26 np0005548731 nova_compute[232433]: 2025-12-06 08:09:26.282 232437 DEBUG oslo_concurrency.lockutils [None req-a56fdc41-27c0-4d17-a987-f80f099cbf74 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Lock "1414ea75-392a-4bcc-9586-0267d5cf47ab-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:09:26 np0005548731 nova_compute[232433]: 2025-12-06 08:09:26.283 232437 DEBUG nova.virt.libvirt.vif [None req-a56fdc41-27c0-4d17-a987-f80f099cbf74 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T08:09:16Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1047508929',display_name='tempest-TestNetworkBasicOps-server-1047508929',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1047508929',id=195,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEZ0CAYIV/3GWGnHWk7RYMRkOyt2U/eOwlCCdyF6MmsyHWsBLaXuBSQYNSgoBNdnMzRm1oWL/OnJoL9e73t+ZmkpX++sKpngWSJd3/OoQG3aE8hAZugjw70q/BCf1OtyQg==',key_name='tempest-TestNetworkBasicOps-1044619278',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f4735a799c84437b9dd4ea8778ad2fbb',ramdisk_id='',reservation_id='r-pj1gxhp7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1435471576',owner_user_name='tempest-TestNetworkBasicOps-1435471576-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T08:09:21Z,user_data=None,user_id='d5359905348247d0b9b5b95982e890bb',uuid=1414ea75-392a-4bcc-9586-0267d5cf47ab,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "bd46dc09-5d50-46df-8d43-1b7dca95407f", "address": "fa:16:3e:b4:d3:2e", "network": {"id": "2f82da1e-2277-4375-8c9e-32c60bb0f99c", "bridge": "br-int", "label": "tempest-network-smoke--1695874872", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "f4735a799c84437b9dd4ea8778ad2fbb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbd46dc09-5d", "ovs_interfaceid": "bd46dc09-5d50-46df-8d43-1b7dca95407f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec  6 03:09:26 np0005548731 nova_compute[232433]: 2025-12-06 08:09:26.283 232437 DEBUG nova.network.os_vif_util [None req-a56fdc41-27c0-4d17-a987-f80f099cbf74 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Converting VIF {"id": "bd46dc09-5d50-46df-8d43-1b7dca95407f", "address": "fa:16:3e:b4:d3:2e", "network": {"id": "2f82da1e-2277-4375-8c9e-32c60bb0f99c", "bridge": "br-int", "label": "tempest-network-smoke--1695874872", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f4735a799c84437b9dd4ea8778ad2fbb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbd46dc09-5d", "ovs_interfaceid": "bd46dc09-5d50-46df-8d43-1b7dca95407f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 03:09:26 np0005548731 nova_compute[232433]: 2025-12-06 08:09:26.284 232437 DEBUG nova.network.os_vif_util [None req-a56fdc41-27c0-4d17-a987-f80f099cbf74 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b4:d3:2e,bridge_name='br-int',has_traffic_filtering=True,id=bd46dc09-5d50-46df-8d43-1b7dca95407f,network=Network(2f82da1e-2277-4375-8c9e-32c60bb0f99c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapbd46dc09-5d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 03:09:26 np0005548731 nova_compute[232433]: 2025-12-06 08:09:26.284 232437 DEBUG os_vif [None req-a56fdc41-27c0-4d17-a987-f80f099cbf74 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b4:d3:2e,bridge_name='br-int',has_traffic_filtering=True,id=bd46dc09-5d50-46df-8d43-1b7dca95407f,network=Network(2f82da1e-2277-4375-8c9e-32c60bb0f99c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapbd46dc09-5d') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec  6 03:09:26 np0005548731 nova_compute[232433]: 2025-12-06 08:09:26.285 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:09:26 np0005548731 nova_compute[232433]: 2025-12-06 08:09:26.285 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 03:09:26 np0005548731 nova_compute[232433]: 2025-12-06 08:09:26.286 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  6 03:09:26 np0005548731 nova_compute[232433]: 2025-12-06 08:09:26.289 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:09:26 np0005548731 nova_compute[232433]: 2025-12-06 08:09:26.289 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapbd46dc09-5d, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 03:09:26 np0005548731 nova_compute[232433]: 2025-12-06 08:09:26.289 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapbd46dc09-5d, col_values=(('external_ids', {'iface-id': 'bd46dc09-5d50-46df-8d43-1b7dca95407f', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:b4:d3:2e', 'vm-uuid': '1414ea75-392a-4bcc-9586-0267d5cf47ab'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 03:09:26 np0005548731 nova_compute[232433]: 2025-12-06 08:09:26.298 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:09:26 np0005548731 NetworkManager[49182]: <info>  [1765008566.3007] manager: (tapbd46dc09-5d): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/455)
Dec  6 03:09:26 np0005548731 nova_compute[232433]: 2025-12-06 08:09:26.302 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  6 03:09:26 np0005548731 nova_compute[232433]: 2025-12-06 08:09:26.305 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:09:26 np0005548731 nova_compute[232433]: 2025-12-06 08:09:26.306 232437 INFO os_vif [None req-a56fdc41-27c0-4d17-a987-f80f099cbf74 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b4:d3:2e,bridge_name='br-int',has_traffic_filtering=True,id=bd46dc09-5d50-46df-8d43-1b7dca95407f,network=Network(2f82da1e-2277-4375-8c9e-32c60bb0f99c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapbd46dc09-5d')#033[00m
Dec  6 03:09:26 np0005548731 nova_compute[232433]: 2025-12-06 08:09:26.322 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:09:26 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:09:26 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:09:26 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:09:26.458 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:09:26 np0005548731 nova_compute[232433]: 2025-12-06 08:09:26.502 232437 DEBUG nova.virt.libvirt.driver [None req-a56fdc41-27c0-4d17-a987-f80f099cbf74 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  6 03:09:26 np0005548731 nova_compute[232433]: 2025-12-06 08:09:26.503 232437 DEBUG nova.virt.libvirt.driver [None req-a56fdc41-27c0-4d17-a987-f80f099cbf74 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  6 03:09:26 np0005548731 nova_compute[232433]: 2025-12-06 08:09:26.503 232437 DEBUG nova.virt.libvirt.driver [None req-a56fdc41-27c0-4d17-a987-f80f099cbf74 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] No VIF found with MAC fa:16:3e:b4:d3:2e, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec  6 03:09:26 np0005548731 nova_compute[232433]: 2025-12-06 08:09:26.504 232437 INFO nova.virt.libvirt.driver [None req-a56fdc41-27c0-4d17-a987-f80f099cbf74 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 1414ea75-392a-4bcc-9586-0267d5cf47ab] Using config drive#033[00m
Dec  6 03:09:26 np0005548731 nova_compute[232433]: 2025-12-06 08:09:26.531 232437 DEBUG nova.storage.rbd_utils [None req-a56fdc41-27c0-4d17-a987-f80f099cbf74 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] rbd image 1414ea75-392a-4bcc-9586-0267d5cf47ab_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 03:09:26 np0005548731 podman[329716]: 2025-12-06 08:09:26.890735143 +0000 UTC m=+0.050206434 container health_status 26c2ce9bdb83c6db5ccad7f5b2a7cb4e9354ab0235ad2f942ea42c8feda2142b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Dec  6 03:09:26 np0005548731 podman[329718]: 2025-12-06 08:09:26.898168334 +0000 UTC m=+0.056176369 container health_status ae4fd08cdec31d6ae2a6b91ffff7deb6ca5a8375cb52ee4b685cc6f25796824e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  6 03:09:26 np0005548731 podman[329717]: 2025-12-06 08:09:26.943358655 +0000 UTC m=+0.103773719 container health_status a118c3becf56cfbcae208065771ebf10a65162bb7b96389971293e534823fb78 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Dec  6 03:09:27 np0005548731 nova_compute[232433]: 2025-12-06 08:09:27.756 232437 DEBUG nova.network.neutron [req-d454bc9a-e449-4b31-95bb-8647e94ff197 req-572d2230-ab18-4776-95fd-637c04323894 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 1414ea75-392a-4bcc-9586-0267d5cf47ab] Updated VIF entry in instance network info cache for port bd46dc09-5d50-46df-8d43-1b7dca95407f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  6 03:09:27 np0005548731 nova_compute[232433]: 2025-12-06 08:09:27.756 232437 DEBUG nova.network.neutron [req-d454bc9a-e449-4b31-95bb-8647e94ff197 req-572d2230-ab18-4776-95fd-637c04323894 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 1414ea75-392a-4bcc-9586-0267d5cf47ab] Updating instance_info_cache with network_info: [{"id": "bd46dc09-5d50-46df-8d43-1b7dca95407f", "address": "fa:16:3e:b4:d3:2e", "network": {"id": "2f82da1e-2277-4375-8c9e-32c60bb0f99c", "bridge": "br-int", "label": "tempest-network-smoke--1695874872", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f4735a799c84437b9dd4ea8778ad2fbb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbd46dc09-5d", "ovs_interfaceid": "bd46dc09-5d50-46df-8d43-1b7dca95407f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 03:09:27 np0005548731 nova_compute[232433]: 2025-12-06 08:09:27.788 232437 DEBUG oslo_concurrency.lockutils [req-d454bc9a-e449-4b31-95bb-8647e94ff197 req-572d2230-ab18-4776-95fd-637c04323894 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-1414ea75-392a-4bcc-9586-0267d5cf47ab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 03:09:28 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:09:28 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:09:28 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:09:28.112 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:09:28 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:09:28 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:09:28 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:09:28.460 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:09:28 np0005548731 nova_compute[232433]: 2025-12-06 08:09:28.930 232437 INFO nova.virt.libvirt.driver [None req-a56fdc41-27c0-4d17-a987-f80f099cbf74 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 1414ea75-392a-4bcc-9586-0267d5cf47ab] Creating config drive at /var/lib/nova/instances/1414ea75-392a-4bcc-9586-0267d5cf47ab/disk.config#033[00m
Dec  6 03:09:28 np0005548731 nova_compute[232433]: 2025-12-06 08:09:28.935 232437 DEBUG oslo_concurrency.processutils [None req-a56fdc41-27c0-4d17-a987-f80f099cbf74 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/1414ea75-392a-4bcc-9586-0267d5cf47ab/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp557dnqd2 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 03:09:29 np0005548731 nova_compute[232433]: 2025-12-06 08:09:29.072 232437 DEBUG oslo_concurrency.processutils [None req-a56fdc41-27c0-4d17-a987-f80f099cbf74 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/1414ea75-392a-4bcc-9586-0267d5cf47ab/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp557dnqd2" returned: 0 in 0.137s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 03:09:29 np0005548731 nova_compute[232433]: 2025-12-06 08:09:29.099 232437 DEBUG nova.storage.rbd_utils [None req-a56fdc41-27c0-4d17-a987-f80f099cbf74 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] rbd image 1414ea75-392a-4bcc-9586-0267d5cf47ab_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 03:09:29 np0005548731 nova_compute[232433]: 2025-12-06 08:09:29.102 232437 DEBUG oslo_concurrency.processutils [None req-a56fdc41-27c0-4d17-a987-f80f099cbf74 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/1414ea75-392a-4bcc-9586-0267d5cf47ab/disk.config 1414ea75-392a-4bcc-9586-0267d5cf47ab_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 03:09:29 np0005548731 nova_compute[232433]: 2025-12-06 08:09:29.340 232437 DEBUG oslo_concurrency.processutils [None req-a56fdc41-27c0-4d17-a987-f80f099cbf74 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/1414ea75-392a-4bcc-9586-0267d5cf47ab/disk.config 1414ea75-392a-4bcc-9586-0267d5cf47ab_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.237s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 03:09:29 np0005548731 nova_compute[232433]: 2025-12-06 08:09:29.341 232437 INFO nova.virt.libvirt.driver [None req-a56fdc41-27c0-4d17-a987-f80f099cbf74 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 1414ea75-392a-4bcc-9586-0267d5cf47ab] Deleting local config drive /var/lib/nova/instances/1414ea75-392a-4bcc-9586-0267d5cf47ab/disk.config because it was imported into RBD.#033[00m
Dec  6 03:09:29 np0005548731 kernel: tapbd46dc09-5d: entered promiscuous mode
Dec  6 03:09:29 np0005548731 NetworkManager[49182]: <info>  [1765008569.3920] manager: (tapbd46dc09-5d): new Tun device (/org/freedesktop/NetworkManager/Devices/456)
Dec  6 03:09:29 np0005548731 nova_compute[232433]: 2025-12-06 08:09:29.392 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:09:29 np0005548731 ovn_controller[133927]: 2025-12-06T08:09:29Z|00985|binding|INFO|Claiming lport bd46dc09-5d50-46df-8d43-1b7dca95407f for this chassis.
Dec  6 03:09:29 np0005548731 ovn_controller[133927]: 2025-12-06T08:09:29Z|00986|binding|INFO|bd46dc09-5d50-46df-8d43-1b7dca95407f: Claiming fa:16:3e:b4:d3:2e 10.100.0.10
Dec  6 03:09:29 np0005548731 nova_compute[232433]: 2025-12-06 08:09:29.398 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:09:29 np0005548731 systemd-udevd[329833]: Network interface NamePolicy= disabled on kernel command line.
Dec  6 03:09:29 np0005548731 NetworkManager[49182]: <info>  [1765008569.4298] device (tapbd46dc09-5d): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec  6 03:09:29 np0005548731 NetworkManager[49182]: <info>  [1765008569.4309] device (tapbd46dc09-5d): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec  6 03:09:29 np0005548731 systemd-machined[195355]: New machine qemu-99-instance-000000c3.
Dec  6 03:09:29 np0005548731 systemd[1]: Started Virtual Machine qemu-99-instance-000000c3.
Dec  6 03:09:29 np0005548731 nova_compute[232433]: 2025-12-06 08:09:29.459 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:09:29 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:09:29.461 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b4:d3:2e 10.100.0.10'], port_security=['fa:16:3e:b4:d3:2e 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-TestNetworkBasicOps-1970606987', 'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '1414ea75-392a-4bcc-9586-0267d5cf47ab', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2f82da1e-2277-4375-8c9e-32c60bb0f99c', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-TestNetworkBasicOps-1970606987', 'neutron:project_id': 'f4735a799c84437b9dd4ea8778ad2fbb', 'neutron:revision_number': '2', 'neutron:security_group_ids': '7c0486dd-d5db-4e6a-a51e-94ac8eaed290', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=32cba6b1-9214-48c9-b7f6-652cc6503c7e, chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], logical_port=bd46dc09-5d50-46df-8d43-1b7dca95407f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 03:09:29 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:09:29.462 143965 INFO neutron.agent.ovn.metadata.agent [-] Port bd46dc09-5d50-46df-8d43-1b7dca95407f in datapath 2f82da1e-2277-4375-8c9e-32c60bb0f99c bound to our chassis#033[00m
Dec  6 03:09:29 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:09:29.463 143965 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 2f82da1e-2277-4375-8c9e-32c60bb0f99c#033[00m
Dec  6 03:09:29 np0005548731 ovn_controller[133927]: 2025-12-06T08:09:29Z|00987|binding|INFO|Setting lport bd46dc09-5d50-46df-8d43-1b7dca95407f ovn-installed in OVS
Dec  6 03:09:29 np0005548731 ovn_controller[133927]: 2025-12-06T08:09:29Z|00988|binding|INFO|Setting lport bd46dc09-5d50-46df-8d43-1b7dca95407f up in Southbound
Dec  6 03:09:29 np0005548731 nova_compute[232433]: 2025-12-06 08:09:29.466 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:09:29 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:09:29.476 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[87a95bbf-0ca9-481c-98a9-c40179f8a1f8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:09:29 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:09:29.477 143965 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap2f82da1e-21 in ovnmeta-2f82da1e-2277-4375-8c9e-32c60bb0f99c namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Dec  6 03:09:29 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:09:29.479 236668 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap2f82da1e-20 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Dec  6 03:09:29 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:09:29.479 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[3b2fb321-4212-4df5-a41e-ad3c89f45fd4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:09:29 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:09:29.480 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[ebf17d6a-f981-47df-bdd6-89ca689478d2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:09:29 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:09:29.492 144079 DEBUG oslo.privsep.daemon [-] privsep: reply[ff583c99-bce3-4c75-b407-b3b9d71770ea]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:09:29 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:09:29.507 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[1ac92642-c1ce-4450-b7d9-6153cdfbc606]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:09:29 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:09:29.537 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[d8d69337-e08a-4490-9093-5255c7b822b9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:09:29 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:09:29.541 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[fdf42fc8-f905-4cf9-b970-79117168fce5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:09:29 np0005548731 NetworkManager[49182]: <info>  [1765008569.5426] manager: (tap2f82da1e-20): new Veth device (/org/freedesktop/NetworkManager/Devices/457)
Dec  6 03:09:29 np0005548731 systemd-udevd[329838]: Network interface NamePolicy= disabled on kernel command line.
Dec  6 03:09:29 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:09:29.571 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[789f8d01-a783-4878-a140-6844bb0ba604]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:09:29 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:09:29.575 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[7ddc1f0d-4877-4219-91d8-db3b147e8005]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:09:29 np0005548731 NetworkManager[49182]: <info>  [1765008569.5959] device (tap2f82da1e-20): carrier: link connected
Dec  6 03:09:29 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:09:29.604 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[bb252057-4f5a-40b8-a59f-e1d3617e22c4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:09:29 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:09:29.630 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[dac9270e-0adf-430e-9228-98d60701e69f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2f82da1e-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:3e:c6:6c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 299], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 883160, 'reachable_time': 44845, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 329869, 'error': None, 'target': 'ovnmeta-2f82da1e-2277-4375-8c9e-32c60bb0f99c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:09:29 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:09:29.655 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[2c937618-2f6f-46cc-b164-1b4233baa298]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe3e:c66c'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 883160, 'tstamp': 883160}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 329870, 'error': None, 'target': 'ovnmeta-2f82da1e-2277-4375-8c9e-32c60bb0f99c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:09:29 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:09:29.677 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[5df9bc06-e033-476b-aa28-1137d0d4bc26]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2f82da1e-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:3e:c6:6c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 299], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 883160, 'reachable_time': 44845, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 329871, 'error': None, 'target': 'ovnmeta-2f82da1e-2277-4375-8c9e-32c60bb0f99c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:09:29 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:09:29.722 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[4fce7057-9594-4a4d-9f08-8f1b861c05fe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:09:29 np0005548731 nova_compute[232433]: 2025-12-06 08:09:29.754 232437 DEBUG nova.compute.manager [req-f25acea2-f8c9-4f2b-9d8b-b2294b41397e req-aa8ba3bc-228c-431a-a27e-433116b56b4e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 1414ea75-392a-4bcc-9586-0267d5cf47ab] Received event network-vif-plugged-bd46dc09-5d50-46df-8d43-1b7dca95407f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 03:09:29 np0005548731 nova_compute[232433]: 2025-12-06 08:09:29.754 232437 DEBUG oslo_concurrency.lockutils [req-f25acea2-f8c9-4f2b-9d8b-b2294b41397e req-aa8ba3bc-228c-431a-a27e-433116b56b4e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "1414ea75-392a-4bcc-9586-0267d5cf47ab-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:09:29 np0005548731 nova_compute[232433]: 2025-12-06 08:09:29.755 232437 DEBUG oslo_concurrency.lockutils [req-f25acea2-f8c9-4f2b-9d8b-b2294b41397e req-aa8ba3bc-228c-431a-a27e-433116b56b4e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "1414ea75-392a-4bcc-9586-0267d5cf47ab-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:09:29 np0005548731 nova_compute[232433]: 2025-12-06 08:09:29.755 232437 DEBUG oslo_concurrency.lockutils [req-f25acea2-f8c9-4f2b-9d8b-b2294b41397e req-aa8ba3bc-228c-431a-a27e-433116b56b4e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "1414ea75-392a-4bcc-9586-0267d5cf47ab-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:09:29 np0005548731 nova_compute[232433]: 2025-12-06 08:09:29.755 232437 DEBUG nova.compute.manager [req-f25acea2-f8c9-4f2b-9d8b-b2294b41397e req-aa8ba3bc-228c-431a-a27e-433116b56b4e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 1414ea75-392a-4bcc-9586-0267d5cf47ab] Processing event network-vif-plugged-bd46dc09-5d50-46df-8d43-1b7dca95407f _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Dec  6 03:09:29 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:09:29.791 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[29de1e0b-925e-493c-b6d8-222d8f483bc6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:09:29 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:09:29.792 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2f82da1e-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 03:09:29 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:09:29.792 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  6 03:09:29 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:09:29.793 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2f82da1e-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 03:09:29 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e413 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:09:29 np0005548731 nova_compute[232433]: 2025-12-06 08:09:29.829 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:09:29 np0005548731 NetworkManager[49182]: <info>  [1765008569.8301] manager: (tap2f82da1e-20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/458)
Dec  6 03:09:29 np0005548731 kernel: tap2f82da1e-20: entered promiscuous mode
Dec  6 03:09:29 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:09:29.834 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap2f82da1e-20, col_values=(('external_ids', {'iface-id': '23e37ebf-5927-4427-bb7f-41deb4ee9462'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 03:09:29 np0005548731 nova_compute[232433]: 2025-12-06 08:09:29.835 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:09:29 np0005548731 ovn_controller[133927]: 2025-12-06T08:09:29Z|00989|binding|INFO|Releasing lport 23e37ebf-5927-4427-bb7f-41deb4ee9462 from this chassis (sb_readonly=0)
Dec  6 03:09:29 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:09:29.837 143965 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/2f82da1e-2277-4375-8c9e-32c60bb0f99c.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/2f82da1e-2277-4375-8c9e-32c60bb0f99c.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Dec  6 03:09:29 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:09:29.838 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[f5c1e682-b915-4a0f-a07e-be193e3ecf39]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:09:29 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:09:29.839 143965 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec  6 03:09:29 np0005548731 ovn_metadata_agent[143960]: global
Dec  6 03:09:29 np0005548731 ovn_metadata_agent[143960]:    log         /dev/log local0 debug
Dec  6 03:09:29 np0005548731 ovn_metadata_agent[143960]:    log-tag     haproxy-metadata-proxy-2f82da1e-2277-4375-8c9e-32c60bb0f99c
Dec  6 03:09:29 np0005548731 ovn_metadata_agent[143960]:    user        root
Dec  6 03:09:29 np0005548731 ovn_metadata_agent[143960]:    group       root
Dec  6 03:09:29 np0005548731 ovn_metadata_agent[143960]:    maxconn     1024
Dec  6 03:09:29 np0005548731 ovn_metadata_agent[143960]:    pidfile     /var/lib/neutron/external/pids/2f82da1e-2277-4375-8c9e-32c60bb0f99c.pid.haproxy
Dec  6 03:09:29 np0005548731 ovn_metadata_agent[143960]:    daemon
Dec  6 03:09:29 np0005548731 ovn_metadata_agent[143960]: 
Dec  6 03:09:29 np0005548731 ovn_metadata_agent[143960]: defaults
Dec  6 03:09:29 np0005548731 ovn_metadata_agent[143960]:    log global
Dec  6 03:09:29 np0005548731 ovn_metadata_agent[143960]:    mode http
Dec  6 03:09:29 np0005548731 ovn_metadata_agent[143960]:    option httplog
Dec  6 03:09:29 np0005548731 ovn_metadata_agent[143960]:    option dontlognull
Dec  6 03:09:29 np0005548731 ovn_metadata_agent[143960]:    option http-server-close
Dec  6 03:09:29 np0005548731 ovn_metadata_agent[143960]:    option forwardfor
Dec  6 03:09:29 np0005548731 ovn_metadata_agent[143960]:    retries                 3
Dec  6 03:09:29 np0005548731 ovn_metadata_agent[143960]:    timeout http-request    30s
Dec  6 03:09:29 np0005548731 ovn_metadata_agent[143960]:    timeout connect         30s
Dec  6 03:09:29 np0005548731 ovn_metadata_agent[143960]:    timeout client          32s
Dec  6 03:09:29 np0005548731 ovn_metadata_agent[143960]:    timeout server          32s
Dec  6 03:09:29 np0005548731 ovn_metadata_agent[143960]:    timeout http-keep-alive 30s
Dec  6 03:09:29 np0005548731 ovn_metadata_agent[143960]: 
Dec  6 03:09:29 np0005548731 ovn_metadata_agent[143960]: 
Dec  6 03:09:29 np0005548731 ovn_metadata_agent[143960]: listen listener
Dec  6 03:09:29 np0005548731 ovn_metadata_agent[143960]:    bind 169.254.169.254:80
Dec  6 03:09:29 np0005548731 ovn_metadata_agent[143960]:    server metadata /var/lib/neutron/metadata_proxy
Dec  6 03:09:29 np0005548731 ovn_metadata_agent[143960]:    http-request add-header X-OVN-Network-ID 2f82da1e-2277-4375-8c9e-32c60bb0f99c
Dec  6 03:09:29 np0005548731 ovn_metadata_agent[143960]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Dec  6 03:09:29 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:09:29.840 143965 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-2f82da1e-2277-4375-8c9e-32c60bb0f99c', 'env', 'PROCESS_TAG=haproxy-2f82da1e-2277-4375-8c9e-32c60bb0f99c', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/2f82da1e-2277-4375-8c9e-32c60bb0f99c.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Dec  6 03:09:29 np0005548731 nova_compute[232433]: 2025-12-06 08:09:29.848 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:09:30 np0005548731 nova_compute[232433]: 2025-12-06 08:09:30.025 232437 DEBUG nova.virt.driver [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Emitting event <LifecycleEvent: 1765008570.0254703, 1414ea75-392a-4bcc-9586-0267d5cf47ab => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 03:09:30 np0005548731 nova_compute[232433]: 2025-12-06 08:09:30.027 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 1414ea75-392a-4bcc-9586-0267d5cf47ab] VM Started (Lifecycle Event)#033[00m
Dec  6 03:09:30 np0005548731 nova_compute[232433]: 2025-12-06 08:09:30.029 232437 DEBUG nova.compute.manager [None req-a56fdc41-27c0-4d17-a987-f80f099cbf74 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 1414ea75-392a-4bcc-9586-0267d5cf47ab] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec  6 03:09:30 np0005548731 nova_compute[232433]: 2025-12-06 08:09:30.033 232437 DEBUG nova.virt.libvirt.driver [None req-a56fdc41-27c0-4d17-a987-f80f099cbf74 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 1414ea75-392a-4bcc-9586-0267d5cf47ab] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Dec  6 03:09:30 np0005548731 nova_compute[232433]: 2025-12-06 08:09:30.036 232437 INFO nova.virt.libvirt.driver [-] [instance: 1414ea75-392a-4bcc-9586-0267d5cf47ab] Instance spawned successfully.#033[00m
Dec  6 03:09:30 np0005548731 nova_compute[232433]: 2025-12-06 08:09:30.037 232437 DEBUG nova.virt.libvirt.driver [None req-a56fdc41-27c0-4d17-a987-f80f099cbf74 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 1414ea75-392a-4bcc-9586-0267d5cf47ab] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Dec  6 03:09:30 np0005548731 nova_compute[232433]: 2025-12-06 08:09:30.045 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 1414ea75-392a-4bcc-9586-0267d5cf47ab] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 03:09:30 np0005548731 nova_compute[232433]: 2025-12-06 08:09:30.049 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 1414ea75-392a-4bcc-9586-0267d5cf47ab] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  6 03:09:30 np0005548731 nova_compute[232433]: 2025-12-06 08:09:30.056 232437 DEBUG nova.virt.libvirt.driver [None req-a56fdc41-27c0-4d17-a987-f80f099cbf74 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 1414ea75-392a-4bcc-9586-0267d5cf47ab] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 03:09:30 np0005548731 nova_compute[232433]: 2025-12-06 08:09:30.057 232437 DEBUG nova.virt.libvirt.driver [None req-a56fdc41-27c0-4d17-a987-f80f099cbf74 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 1414ea75-392a-4bcc-9586-0267d5cf47ab] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 03:09:30 np0005548731 nova_compute[232433]: 2025-12-06 08:09:30.058 232437 DEBUG nova.virt.libvirt.driver [None req-a56fdc41-27c0-4d17-a987-f80f099cbf74 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 1414ea75-392a-4bcc-9586-0267d5cf47ab] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 03:09:30 np0005548731 nova_compute[232433]: 2025-12-06 08:09:30.058 232437 DEBUG nova.virt.libvirt.driver [None req-a56fdc41-27c0-4d17-a987-f80f099cbf74 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 1414ea75-392a-4bcc-9586-0267d5cf47ab] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 03:09:30 np0005548731 nova_compute[232433]: 2025-12-06 08:09:30.059 232437 DEBUG nova.virt.libvirt.driver [None req-a56fdc41-27c0-4d17-a987-f80f099cbf74 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 1414ea75-392a-4bcc-9586-0267d5cf47ab] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 03:09:30 np0005548731 nova_compute[232433]: 2025-12-06 08:09:30.059 232437 DEBUG nova.virt.libvirt.driver [None req-a56fdc41-27c0-4d17-a987-f80f099cbf74 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 1414ea75-392a-4bcc-9586-0267d5cf47ab] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 03:09:30 np0005548731 nova_compute[232433]: 2025-12-06 08:09:30.066 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 1414ea75-392a-4bcc-9586-0267d5cf47ab] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  6 03:09:30 np0005548731 nova_compute[232433]: 2025-12-06 08:09:30.067 232437 DEBUG nova.virt.driver [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Emitting event <LifecycleEvent: 1765008570.0261903, 1414ea75-392a-4bcc-9586-0267d5cf47ab => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 03:09:30 np0005548731 nova_compute[232433]: 2025-12-06 08:09:30.067 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 1414ea75-392a-4bcc-9586-0267d5cf47ab] VM Paused (Lifecycle Event)#033[00m
Dec  6 03:09:30 np0005548731 nova_compute[232433]: 2025-12-06 08:09:30.096 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 1414ea75-392a-4bcc-9586-0267d5cf47ab] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 03:09:30 np0005548731 nova_compute[232433]: 2025-12-06 08:09:30.100 232437 DEBUG nova.virt.driver [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Emitting event <LifecycleEvent: 1765008570.0322323, 1414ea75-392a-4bcc-9586-0267d5cf47ab => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 03:09:30 np0005548731 nova_compute[232433]: 2025-12-06 08:09:30.100 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 1414ea75-392a-4bcc-9586-0267d5cf47ab] VM Resumed (Lifecycle Event)#033[00m
Dec  6 03:09:30 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:09:30 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:09:30 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:09:30.113 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:09:30 np0005548731 nova_compute[232433]: 2025-12-06 08:09:30.126 232437 INFO nova.compute.manager [None req-a56fdc41-27c0-4d17-a987-f80f099cbf74 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 1414ea75-392a-4bcc-9586-0267d5cf47ab] Took 8.71 seconds to spawn the instance on the hypervisor.#033[00m
Dec  6 03:09:30 np0005548731 nova_compute[232433]: 2025-12-06 08:09:30.126 232437 DEBUG nova.compute.manager [None req-a56fdc41-27c0-4d17-a987-f80f099cbf74 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 1414ea75-392a-4bcc-9586-0267d5cf47ab] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 03:09:30 np0005548731 nova_compute[232433]: 2025-12-06 08:09:30.127 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 1414ea75-392a-4bcc-9586-0267d5cf47ab] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 03:09:30 np0005548731 nova_compute[232433]: 2025-12-06 08:09:30.134 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 1414ea75-392a-4bcc-9586-0267d5cf47ab] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  6 03:09:30 np0005548731 nova_compute[232433]: 2025-12-06 08:09:30.168 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 1414ea75-392a-4bcc-9586-0267d5cf47ab] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  6 03:09:30 np0005548731 nova_compute[232433]: 2025-12-06 08:09:30.192 232437 INFO nova.compute.manager [None req-a56fdc41-27c0-4d17-a987-f80f099cbf74 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 1414ea75-392a-4bcc-9586-0267d5cf47ab] Took 9.79 seconds to build instance.#033[00m
Dec  6 03:09:30 np0005548731 nova_compute[232433]: 2025-12-06 08:09:30.212 232437 DEBUG oslo_concurrency.lockutils [None req-a56fdc41-27c0-4d17-a987-f80f099cbf74 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Lock "1414ea75-392a-4bcc-9586-0267d5cf47ab" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.345s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:09:30 np0005548731 podman[329941]: 2025-12-06 08:09:30.177370797 +0000 UTC m=+0.021962735 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec  6 03:09:30 np0005548731 podman[329941]: 2025-12-06 08:09:30.313459472 +0000 UTC m=+0.158051380 container create 27b96862533d2e6eb005231ea6cf2ed75f8372cdb2cad08ff252e023a80376a2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2f82da1e-2277-4375-8c9e-32c60bb0f99c, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  6 03:09:30 np0005548731 systemd[1]: Started libpod-conmon-27b96862533d2e6eb005231ea6cf2ed75f8372cdb2cad08ff252e023a80376a2.scope.
Dec  6 03:09:30 np0005548731 systemd[1]: Started libcrun container.
Dec  6 03:09:30 np0005548731 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/611ae72a72d92663176a4f40472be9bd51da6819674e37b8e35bf36a94ee0668/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec  6 03:09:30 np0005548731 podman[329941]: 2025-12-06 08:09:30.459072598 +0000 UTC m=+0.303664506 container init 27b96862533d2e6eb005231ea6cf2ed75f8372cdb2cad08ff252e023a80376a2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2f82da1e-2277-4375-8c9e-32c60bb0f99c, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Dec  6 03:09:30 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:09:30 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:09:30 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:09:30.463 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:09:30 np0005548731 podman[329941]: 2025-12-06 08:09:30.46612727 +0000 UTC m=+0.310719178 container start 27b96862533d2e6eb005231ea6cf2ed75f8372cdb2cad08ff252e023a80376a2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2f82da1e-2277-4375-8c9e-32c60bb0f99c, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Dec  6 03:09:30 np0005548731 neutron-haproxy-ovnmeta-2f82da1e-2277-4375-8c9e-32c60bb0f99c[329956]: [NOTICE]   (329961) : New worker (329963) forked
Dec  6 03:09:30 np0005548731 neutron-haproxy-ovnmeta-2f82da1e-2277-4375-8c9e-32c60bb0f99c[329956]: [NOTICE]   (329961) : Loading success.
Dec  6 03:09:31 np0005548731 nova_compute[232433]: 2025-12-06 08:09:31.299 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:09:31 np0005548731 nova_compute[232433]: 2025-12-06 08:09:31.324 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:09:32 np0005548731 nova_compute[232433]: 2025-12-06 08:09:32.013 232437 DEBUG nova.compute.manager [req-b8109215-ad9e-4d01-8292-13cc93be4e3b req-208bb1df-85f6-45f3-878f-abbc4788b370 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 1414ea75-392a-4bcc-9586-0267d5cf47ab] Received event network-vif-plugged-bd46dc09-5d50-46df-8d43-1b7dca95407f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 03:09:32 np0005548731 nova_compute[232433]: 2025-12-06 08:09:32.013 232437 DEBUG oslo_concurrency.lockutils [req-b8109215-ad9e-4d01-8292-13cc93be4e3b req-208bb1df-85f6-45f3-878f-abbc4788b370 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "1414ea75-392a-4bcc-9586-0267d5cf47ab-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:09:32 np0005548731 nova_compute[232433]: 2025-12-06 08:09:32.014 232437 DEBUG oslo_concurrency.lockutils [req-b8109215-ad9e-4d01-8292-13cc93be4e3b req-208bb1df-85f6-45f3-878f-abbc4788b370 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "1414ea75-392a-4bcc-9586-0267d5cf47ab-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:09:32 np0005548731 nova_compute[232433]: 2025-12-06 08:09:32.014 232437 DEBUG oslo_concurrency.lockutils [req-b8109215-ad9e-4d01-8292-13cc93be4e3b req-208bb1df-85f6-45f3-878f-abbc4788b370 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "1414ea75-392a-4bcc-9586-0267d5cf47ab-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:09:32 np0005548731 nova_compute[232433]: 2025-12-06 08:09:32.014 232437 DEBUG nova.compute.manager [req-b8109215-ad9e-4d01-8292-13cc93be4e3b req-208bb1df-85f6-45f3-878f-abbc4788b370 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 1414ea75-392a-4bcc-9586-0267d5cf47ab] No waiting events found dispatching network-vif-plugged-bd46dc09-5d50-46df-8d43-1b7dca95407f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 03:09:32 np0005548731 nova_compute[232433]: 2025-12-06 08:09:32.014 232437 WARNING nova.compute.manager [req-b8109215-ad9e-4d01-8292-13cc93be4e3b req-208bb1df-85f6-45f3-878f-abbc4788b370 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 1414ea75-392a-4bcc-9586-0267d5cf47ab] Received unexpected event network-vif-plugged-bd46dc09-5d50-46df-8d43-1b7dca95407f for instance with vm_state active and task_state None.#033[00m
Dec  6 03:09:32 np0005548731 nova_compute[232433]: 2025-12-06 08:09:32.052 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:09:32 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:09:32.052 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=87, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:ec:b3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '32:72:e7:89:e0:7d'}, ipsec=False) old=SB_Global(nb_cfg=86) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 03:09:32 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:09:32.053 143965 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec  6 03:09:32 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:09:32 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:09:32 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:09:32.114 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:09:32 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:09:32 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:09:32 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:09:32.466 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:09:33 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:09:33.055 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=9f96b960-b4f2-40bd-ae99-08121f5e8b78, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '87'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 03:09:34 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:09:34 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 03:09:34 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:09:34.116 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 03:09:34 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:09:34 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:09:34 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:09:34.469 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:09:34 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e413 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:09:36 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:09:36 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:09:36 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:09:36.119 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:09:36 np0005548731 nova_compute[232433]: 2025-12-06 08:09:36.303 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:09:36 np0005548731 nova_compute[232433]: 2025-12-06 08:09:36.326 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:09:36 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:09:36 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:09:36 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:09:36.472 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:09:37 np0005548731 NetworkManager[49182]: <info>  [1765008577.3267] manager: (patch-br-int-to-provnet-9e78c1a1-68f4-477a-abaa-13a98bde06e5): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/459)
Dec  6 03:09:37 np0005548731 NetworkManager[49182]: <info>  [1765008577.3277] manager: (patch-provnet-9e78c1a1-68f4-477a-abaa-13a98bde06e5-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/460)
Dec  6 03:09:37 np0005548731 nova_compute[232433]: 2025-12-06 08:09:37.326 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:09:37 np0005548731 ovn_controller[133927]: 2025-12-06T08:09:37Z|00990|binding|INFO|Releasing lport 23e37ebf-5927-4427-bb7f-41deb4ee9462 from this chassis (sb_readonly=0)
Dec  6 03:09:37 np0005548731 nova_compute[232433]: 2025-12-06 08:09:37.357 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:09:37 np0005548731 nova_compute[232433]: 2025-12-06 08:09:37.364 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:09:37 np0005548731 nova_compute[232433]: 2025-12-06 08:09:37.924 232437 DEBUG nova.compute.manager [req-07ac2d2c-1ac5-4e58-a623-f51480feca17 req-31e1c2fd-d8dc-4bf3-aa62-b0bcf2c74015 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 1414ea75-392a-4bcc-9586-0267d5cf47ab] Received event network-changed-bd46dc09-5d50-46df-8d43-1b7dca95407f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 03:09:37 np0005548731 nova_compute[232433]: 2025-12-06 08:09:37.925 232437 DEBUG nova.compute.manager [req-07ac2d2c-1ac5-4e58-a623-f51480feca17 req-31e1c2fd-d8dc-4bf3-aa62-b0bcf2c74015 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 1414ea75-392a-4bcc-9586-0267d5cf47ab] Refreshing instance network info cache due to event network-changed-bd46dc09-5d50-46df-8d43-1b7dca95407f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  6 03:09:37 np0005548731 nova_compute[232433]: 2025-12-06 08:09:37.925 232437 DEBUG oslo_concurrency.lockutils [req-07ac2d2c-1ac5-4e58-a623-f51480feca17 req-31e1c2fd-d8dc-4bf3-aa62-b0bcf2c74015 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-1414ea75-392a-4bcc-9586-0267d5cf47ab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 03:09:37 np0005548731 nova_compute[232433]: 2025-12-06 08:09:37.925 232437 DEBUG oslo_concurrency.lockutils [req-07ac2d2c-1ac5-4e58-a623-f51480feca17 req-31e1c2fd-d8dc-4bf3-aa62-b0bcf2c74015 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-1414ea75-392a-4bcc-9586-0267d5cf47ab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 03:09:37 np0005548731 nova_compute[232433]: 2025-12-06 08:09:37.926 232437 DEBUG nova.network.neutron [req-07ac2d2c-1ac5-4e58-a623-f51480feca17 req-31e1c2fd-d8dc-4bf3-aa62-b0bcf2c74015 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 1414ea75-392a-4bcc-9586-0267d5cf47ab] Refreshing network info cache for port bd46dc09-5d50-46df-8d43-1b7dca95407f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  6 03:09:38 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:09:38 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:09:38 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:09:38.120 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:09:38 np0005548731 nova_compute[232433]: 2025-12-06 08:09:38.199 232437 DEBUG oslo_concurrency.lockutils [None req-13c9b848-4db2-4ce8-a663-49c8da5acff5 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Acquiring lock "1414ea75-392a-4bcc-9586-0267d5cf47ab" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:09:38 np0005548731 nova_compute[232433]: 2025-12-06 08:09:38.200 232437 DEBUG oslo_concurrency.lockutils [None req-13c9b848-4db2-4ce8-a663-49c8da5acff5 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Lock "1414ea75-392a-4bcc-9586-0267d5cf47ab" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:09:38 np0005548731 nova_compute[232433]: 2025-12-06 08:09:38.200 232437 DEBUG oslo_concurrency.lockutils [None req-13c9b848-4db2-4ce8-a663-49c8da5acff5 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Acquiring lock "1414ea75-392a-4bcc-9586-0267d5cf47ab-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:09:38 np0005548731 nova_compute[232433]: 2025-12-06 08:09:38.201 232437 DEBUG oslo_concurrency.lockutils [None req-13c9b848-4db2-4ce8-a663-49c8da5acff5 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Lock "1414ea75-392a-4bcc-9586-0267d5cf47ab-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:09:38 np0005548731 nova_compute[232433]: 2025-12-06 08:09:38.201 232437 DEBUG oslo_concurrency.lockutils [None req-13c9b848-4db2-4ce8-a663-49c8da5acff5 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Lock "1414ea75-392a-4bcc-9586-0267d5cf47ab-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:09:38 np0005548731 nova_compute[232433]: 2025-12-06 08:09:38.202 232437 INFO nova.compute.manager [None req-13c9b848-4db2-4ce8-a663-49c8da5acff5 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 1414ea75-392a-4bcc-9586-0267d5cf47ab] Terminating instance#033[00m
Dec  6 03:09:38 np0005548731 nova_compute[232433]: 2025-12-06 08:09:38.203 232437 DEBUG nova.compute.manager [None req-13c9b848-4db2-4ce8-a663-49c8da5acff5 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 1414ea75-392a-4bcc-9586-0267d5cf47ab] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec  6 03:09:38 np0005548731 kernel: tapbd46dc09-5d (unregistering): left promiscuous mode
Dec  6 03:09:38 np0005548731 NetworkManager[49182]: <info>  [1765008578.2457] device (tapbd46dc09-5d): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec  6 03:09:38 np0005548731 ovn_controller[133927]: 2025-12-06T08:09:38Z|00991|binding|INFO|Releasing lport bd46dc09-5d50-46df-8d43-1b7dca95407f from this chassis (sb_readonly=0)
Dec  6 03:09:38 np0005548731 ovn_controller[133927]: 2025-12-06T08:09:38Z|00992|binding|INFO|Setting lport bd46dc09-5d50-46df-8d43-1b7dca95407f down in Southbound
Dec  6 03:09:38 np0005548731 nova_compute[232433]: 2025-12-06 08:09:38.253 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:09:38 np0005548731 ovn_controller[133927]: 2025-12-06T08:09:38Z|00993|binding|INFO|Removing iface tapbd46dc09-5d ovn-installed in OVS
Dec  6 03:09:38 np0005548731 nova_compute[232433]: 2025-12-06 08:09:38.256 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:09:38 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:09:38.260 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b4:d3:2e 10.100.0.10'], port_security=['fa:16:3e:b4:d3:2e 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-TestNetworkBasicOps-1970606987', 'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '1414ea75-392a-4bcc-9586-0267d5cf47ab', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2f82da1e-2277-4375-8c9e-32c60bb0f99c', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-TestNetworkBasicOps-1970606987', 'neutron:project_id': 'f4735a799c84437b9dd4ea8778ad2fbb', 'neutron:revision_number': '4', 'neutron:security_group_ids': '7c0486dd-d5db-4e6a-a51e-94ac8eaed290', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com', 'neutron:port_fip': '192.168.122.218'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=32cba6b1-9214-48c9-b7f6-652cc6503c7e, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], logical_port=bd46dc09-5d50-46df-8d43-1b7dca95407f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 03:09:38 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:09:38.262 143965 INFO neutron.agent.ovn.metadata.agent [-] Port bd46dc09-5d50-46df-8d43-1b7dca95407f in datapath 2f82da1e-2277-4375-8c9e-32c60bb0f99c unbound from our chassis#033[00m
Dec  6 03:09:38 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:09:38.263 143965 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 2f82da1e-2277-4375-8c9e-32c60bb0f99c, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec  6 03:09:38 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:09:38.264 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[bce2457d-7476-4ab6-bbee-57d4e7087b7b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:09:38 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:09:38.265 143965 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-2f82da1e-2277-4375-8c9e-32c60bb0f99c namespace which is not needed anymore#033[00m
Dec  6 03:09:38 np0005548731 nova_compute[232433]: 2025-12-06 08:09:38.271 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:09:38 np0005548731 systemd[1]: machine-qemu\x2d99\x2dinstance\x2d000000c3.scope: Deactivated successfully.
Dec  6 03:09:38 np0005548731 systemd[1]: machine-qemu\x2d99\x2dinstance\x2d000000c3.scope: Consumed 8.948s CPU time.
Dec  6 03:09:38 np0005548731 systemd-machined[195355]: Machine qemu-99-instance-000000c3 terminated.
Dec  6 03:09:38 np0005548731 nova_compute[232433]: 2025-12-06 08:09:38.445 232437 INFO nova.virt.libvirt.driver [-] [instance: 1414ea75-392a-4bcc-9586-0267d5cf47ab] Instance destroyed successfully.#033[00m
Dec  6 03:09:38 np0005548731 nova_compute[232433]: 2025-12-06 08:09:38.446 232437 DEBUG nova.objects.instance [None req-13c9b848-4db2-4ce8-a663-49c8da5acff5 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Lazy-loading 'resources' on Instance uuid 1414ea75-392a-4bcc-9586-0267d5cf47ab obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 03:09:38 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:09:38 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:09:38 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:09:38.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:09:38 np0005548731 nova_compute[232433]: 2025-12-06 08:09:38.536 232437 DEBUG nova.virt.libvirt.vif [None req-13c9b848-4db2-4ce8-a663-49c8da5acff5 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T08:09:16Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1047508929',display_name='tempest-TestNetworkBasicOps-server-1047508929',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1047508929',id=195,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEZ0CAYIV/3GWGnHWk7RYMRkOyt2U/eOwlCCdyF6MmsyHWsBLaXuBSQYNSgoBNdnMzRm1oWL/OnJoL9e73t+ZmkpX++sKpngWSJd3/OoQG3aE8hAZugjw70q/BCf1OtyQg==',key_name='tempest-TestNetworkBasicOps-1044619278',keypairs=<?>,launch_index=0,launched_at=2025-12-06T08:09:30Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='f4735a799c84437b9dd4ea8778ad2fbb',ramdisk_id='',reservation_id='r-pj1gxhp7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1435471576',owner_user_name='tempest-TestNetworkBasicOps-1435471576-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T08:09:30Z,user_data=None,user_id='d5359905348247d0b9b5b95982e890bb',uuid=1414ea75-392a-4bcc-9586-0267d5cf47ab,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "bd46dc09-5d50-46df-8d43-1b7dca95407f", "address": "fa:16:3e:b4:d3:2e", "network": {"id": "2f82da1e-2277-4375-8c9e-32c60bb0f99c", "bridge": "br-int", "label": "tempest-network-smoke--1695874872", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f4735a799c84437b9dd4ea8778ad2fbb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbd46dc09-5d", "ovs_interfaceid": "bd46dc09-5d50-46df-8d43-1b7dca95407f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec  6 03:09:38 np0005548731 nova_compute[232433]: 2025-12-06 08:09:38.537 232437 DEBUG nova.network.os_vif_util [None req-13c9b848-4db2-4ce8-a663-49c8da5acff5 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Converting VIF {"id": "bd46dc09-5d50-46df-8d43-1b7dca95407f", "address": "fa:16:3e:b4:d3:2e", "network": {"id": "2f82da1e-2277-4375-8c9e-32c60bb0f99c", "bridge": "br-int", "label": "tempest-network-smoke--1695874872", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f4735a799c84437b9dd4ea8778ad2fbb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbd46dc09-5d", "ovs_interfaceid": "bd46dc09-5d50-46df-8d43-1b7dca95407f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 03:09:38 np0005548731 nova_compute[232433]: 2025-12-06 08:09:38.538 232437 DEBUG nova.network.os_vif_util [None req-13c9b848-4db2-4ce8-a663-49c8da5acff5 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b4:d3:2e,bridge_name='br-int',has_traffic_filtering=True,id=bd46dc09-5d50-46df-8d43-1b7dca95407f,network=Network(2f82da1e-2277-4375-8c9e-32c60bb0f99c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapbd46dc09-5d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 03:09:38 np0005548731 nova_compute[232433]: 2025-12-06 08:09:38.538 232437 DEBUG os_vif [None req-13c9b848-4db2-4ce8-a663-49c8da5acff5 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b4:d3:2e,bridge_name='br-int',has_traffic_filtering=True,id=bd46dc09-5d50-46df-8d43-1b7dca95407f,network=Network(2f82da1e-2277-4375-8c9e-32c60bb0f99c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapbd46dc09-5d') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec  6 03:09:38 np0005548731 nova_compute[232433]: 2025-12-06 08:09:38.539 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:09:38 np0005548731 nova_compute[232433]: 2025-12-06 08:09:38.540 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbd46dc09-5d, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 03:09:38 np0005548731 nova_compute[232433]: 2025-12-06 08:09:38.575 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:09:38 np0005548731 nova_compute[232433]: 2025-12-06 08:09:38.577 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:09:38 np0005548731 nova_compute[232433]: 2025-12-06 08:09:38.579 232437 INFO os_vif [None req-13c9b848-4db2-4ce8-a663-49c8da5acff5 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b4:d3:2e,bridge_name='br-int',has_traffic_filtering=True,id=bd46dc09-5d50-46df-8d43-1b7dca95407f,network=Network(2f82da1e-2277-4375-8c9e-32c60bb0f99c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapbd46dc09-5d')#033[00m
Dec  6 03:09:38 np0005548731 neutron-haproxy-ovnmeta-2f82da1e-2277-4375-8c9e-32c60bb0f99c[329956]: [NOTICE]   (329961) : haproxy version is 2.8.14-c23fe91
Dec  6 03:09:38 np0005548731 neutron-haproxy-ovnmeta-2f82da1e-2277-4375-8c9e-32c60bb0f99c[329956]: [NOTICE]   (329961) : path to executable is /usr/sbin/haproxy
Dec  6 03:09:38 np0005548731 neutron-haproxy-ovnmeta-2f82da1e-2277-4375-8c9e-32c60bb0f99c[329956]: [WARNING]  (329961) : Exiting Master process...
Dec  6 03:09:38 np0005548731 neutron-haproxy-ovnmeta-2f82da1e-2277-4375-8c9e-32c60bb0f99c[329956]: [ALERT]    (329961) : Current worker (329963) exited with code 143 (Terminated)
Dec  6 03:09:38 np0005548731 neutron-haproxy-ovnmeta-2f82da1e-2277-4375-8c9e-32c60bb0f99c[329956]: [WARNING]  (329961) : All workers exited. Exiting... (0)
Dec  6 03:09:38 np0005548731 systemd[1]: libpod-27b96862533d2e6eb005231ea6cf2ed75f8372cdb2cad08ff252e023a80376a2.scope: Deactivated successfully.
Dec  6 03:09:38 np0005548731 podman[330053]: 2025-12-06 08:09:38.593700348 +0000 UTC m=+0.246172165 container died 27b96862533d2e6eb005231ea6cf2ed75f8372cdb2cad08ff252e023a80376a2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2f82da1e-2277-4375-8c9e-32c60bb0f99c, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Dec  6 03:09:38 np0005548731 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-27b96862533d2e6eb005231ea6cf2ed75f8372cdb2cad08ff252e023a80376a2-userdata-shm.mount: Deactivated successfully.
Dec  6 03:09:38 np0005548731 systemd[1]: var-lib-containers-storage-overlay-611ae72a72d92663176a4f40472be9bd51da6819674e37b8e35bf36a94ee0668-merged.mount: Deactivated successfully.
Dec  6 03:09:38 np0005548731 podman[330053]: 2025-12-06 08:09:38.641239596 +0000 UTC m=+0.293711413 container cleanup 27b96862533d2e6eb005231ea6cf2ed75f8372cdb2cad08ff252e023a80376a2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2f82da1e-2277-4375-8c9e-32c60bb0f99c, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Dec  6 03:09:38 np0005548731 systemd[1]: libpod-conmon-27b96862533d2e6eb005231ea6cf2ed75f8372cdb2cad08ff252e023a80376a2.scope: Deactivated successfully.
Dec  6 03:09:38 np0005548731 podman[330113]: 2025-12-06 08:09:38.704334713 +0000 UTC m=+0.042716491 container remove 27b96862533d2e6eb005231ea6cf2ed75f8372cdb2cad08ff252e023a80376a2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2f82da1e-2277-4375-8c9e-32c60bb0f99c, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec  6 03:09:38 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:09:38.711 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[a098b6e4-5326-48bb-82d0-430ff639627d]: (4, ('Sat Dec  6 08:09:38 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-2f82da1e-2277-4375-8c9e-32c60bb0f99c (27b96862533d2e6eb005231ea6cf2ed75f8372cdb2cad08ff252e023a80376a2)\n27b96862533d2e6eb005231ea6cf2ed75f8372cdb2cad08ff252e023a80376a2\nSat Dec  6 08:09:38 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-2f82da1e-2277-4375-8c9e-32c60bb0f99c (27b96862533d2e6eb005231ea6cf2ed75f8372cdb2cad08ff252e023a80376a2)\n27b96862533d2e6eb005231ea6cf2ed75f8372cdb2cad08ff252e023a80376a2\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:09:38 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:09:38.712 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[b0e4187e-bea8-4aea-964a-3cb341fe0d6b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:09:38 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:09:38.713 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2f82da1e-20, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 03:09:38 np0005548731 nova_compute[232433]: 2025-12-06 08:09:38.714 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:09:38 np0005548731 kernel: tap2f82da1e-20: left promiscuous mode
Dec  6 03:09:38 np0005548731 nova_compute[232433]: 2025-12-06 08:09:38.728 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:09:38 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:09:38.730 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[f74bd9df-7534-4b9a-96e1-2b46fb5dec70]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:09:38 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:09:38.749 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[8c4461e2-d55c-454b-84dc-e1288d09fa0a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:09:38 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:09:38.751 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[0d4720a3-521a-4b1e-8bb5-056ac10d6c56]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:09:38 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:09:38.766 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[28967866-4a7d-4e0c-935b-f7dca6249da8]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 883153, 'reachable_time': 21327, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 330128, 'error': None, 'target': 'ovnmeta-2f82da1e-2277-4375-8c9e-32c60bb0f99c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:09:38 np0005548731 systemd[1]: run-netns-ovnmeta\x2d2f82da1e\x2d2277\x2d4375\x2d8c9e\x2d32c60bb0f99c.mount: Deactivated successfully.
Dec  6 03:09:38 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:09:38.770 144079 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-2f82da1e-2277-4375-8c9e-32c60bb0f99c deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Dec  6 03:09:38 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:09:38.770 144079 DEBUG oslo.privsep.daemon [-] privsep: reply[58dc42a6-f72c-45e5-ba68-691bfbbc5001]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:09:38 np0005548731 nova_compute[232433]: 2025-12-06 08:09:38.899 232437 DEBUG nova.compute.manager [req-c007627a-db60-4971-84be-43a0810276fd req-cc9d5d80-a6f5-4a09-b585-e9bb17c45d13 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 1414ea75-392a-4bcc-9586-0267d5cf47ab] Received event network-vif-unplugged-bd46dc09-5d50-46df-8d43-1b7dca95407f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 03:09:38 np0005548731 nova_compute[232433]: 2025-12-06 08:09:38.900 232437 DEBUG oslo_concurrency.lockutils [req-c007627a-db60-4971-84be-43a0810276fd req-cc9d5d80-a6f5-4a09-b585-e9bb17c45d13 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "1414ea75-392a-4bcc-9586-0267d5cf47ab-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:09:38 np0005548731 nova_compute[232433]: 2025-12-06 08:09:38.900 232437 DEBUG oslo_concurrency.lockutils [req-c007627a-db60-4971-84be-43a0810276fd req-cc9d5d80-a6f5-4a09-b585-e9bb17c45d13 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "1414ea75-392a-4bcc-9586-0267d5cf47ab-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:09:38 np0005548731 nova_compute[232433]: 2025-12-06 08:09:38.900 232437 DEBUG oslo_concurrency.lockutils [req-c007627a-db60-4971-84be-43a0810276fd req-cc9d5d80-a6f5-4a09-b585-e9bb17c45d13 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "1414ea75-392a-4bcc-9586-0267d5cf47ab-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:09:38 np0005548731 nova_compute[232433]: 2025-12-06 08:09:38.900 232437 DEBUG nova.compute.manager [req-c007627a-db60-4971-84be-43a0810276fd req-cc9d5d80-a6f5-4a09-b585-e9bb17c45d13 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 1414ea75-392a-4bcc-9586-0267d5cf47ab] No waiting events found dispatching network-vif-unplugged-bd46dc09-5d50-46df-8d43-1b7dca95407f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 03:09:38 np0005548731 nova_compute[232433]: 2025-12-06 08:09:38.901 232437 DEBUG nova.compute.manager [req-c007627a-db60-4971-84be-43a0810276fd req-cc9d5d80-a6f5-4a09-b585-e9bb17c45d13 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 1414ea75-392a-4bcc-9586-0267d5cf47ab] Received event network-vif-unplugged-bd46dc09-5d50-46df-8d43-1b7dca95407f for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Dec  6 03:09:38 np0005548731 nova_compute[232433]: 2025-12-06 08:09:38.952 232437 INFO nova.virt.libvirt.driver [None req-13c9b848-4db2-4ce8-a663-49c8da5acff5 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 1414ea75-392a-4bcc-9586-0267d5cf47ab] Deleting instance files /var/lib/nova/instances/1414ea75-392a-4bcc-9586-0267d5cf47ab_del#033[00m
Dec  6 03:09:38 np0005548731 nova_compute[232433]: 2025-12-06 08:09:38.952 232437 INFO nova.virt.libvirt.driver [None req-13c9b848-4db2-4ce8-a663-49c8da5acff5 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 1414ea75-392a-4bcc-9586-0267d5cf47ab] Deletion of /var/lib/nova/instances/1414ea75-392a-4bcc-9586-0267d5cf47ab_del complete#033[00m
Dec  6 03:09:39 np0005548731 nova_compute[232433]: 2025-12-06 08:09:39.113 232437 INFO nova.compute.manager [None req-13c9b848-4db2-4ce8-a663-49c8da5acff5 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 1414ea75-392a-4bcc-9586-0267d5cf47ab] Took 0.91 seconds to destroy the instance on the hypervisor.#033[00m
Dec  6 03:09:39 np0005548731 nova_compute[232433]: 2025-12-06 08:09:39.114 232437 DEBUG oslo.service.loopingcall [None req-13c9b848-4db2-4ce8-a663-49c8da5acff5 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Dec  6 03:09:39 np0005548731 nova_compute[232433]: 2025-12-06 08:09:39.114 232437 DEBUG nova.compute.manager [-] [instance: 1414ea75-392a-4bcc-9586-0267d5cf47ab] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Dec  6 03:09:39 np0005548731 nova_compute[232433]: 2025-12-06 08:09:39.114 232437 DEBUG nova.network.neutron [-] [instance: 1414ea75-392a-4bcc-9586-0267d5cf47ab] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Dec  6 03:09:39 np0005548731 nova_compute[232433]: 2025-12-06 08:09:39.739 232437 DEBUG nova.network.neutron [req-07ac2d2c-1ac5-4e58-a623-f51480feca17 req-31e1c2fd-d8dc-4bf3-aa62-b0bcf2c74015 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 1414ea75-392a-4bcc-9586-0267d5cf47ab] Updated VIF entry in instance network info cache for port bd46dc09-5d50-46df-8d43-1b7dca95407f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  6 03:09:39 np0005548731 nova_compute[232433]: 2025-12-06 08:09:39.740 232437 DEBUG nova.network.neutron [req-07ac2d2c-1ac5-4e58-a623-f51480feca17 req-31e1c2fd-d8dc-4bf3-aa62-b0bcf2c74015 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 1414ea75-392a-4bcc-9586-0267d5cf47ab] Updating instance_info_cache with network_info: [{"id": "bd46dc09-5d50-46df-8d43-1b7dca95407f", "address": "fa:16:3e:b4:d3:2e", "network": {"id": "2f82da1e-2277-4375-8c9e-32c60bb0f99c", "bridge": "br-int", "label": "tempest-network-smoke--1695874872", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.218", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f4735a799c84437b9dd4ea8778ad2fbb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbd46dc09-5d", "ovs_interfaceid": "bd46dc09-5d50-46df-8d43-1b7dca95407f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 03:09:39 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e413 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:09:40 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:09:40 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 03:09:40 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:09:40.122 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 03:09:40 np0005548731 nova_compute[232433]: 2025-12-06 08:09:40.332 232437 DEBUG oslo_concurrency.lockutils [req-07ac2d2c-1ac5-4e58-a623-f51480feca17 req-31e1c2fd-d8dc-4bf3-aa62-b0bcf2c74015 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-1414ea75-392a-4bcc-9586-0267d5cf47ab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 03:09:40 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:09:40 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:09:40 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:09:40.478 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:09:41 np0005548731 nova_compute[232433]: 2025-12-06 08:09:41.068 232437 DEBUG nova.compute.manager [req-64d4aacd-1dfd-422a-8895-06f533c026b4 req-98ccc3bf-a54a-4d58-8b39-2e525fec74df 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 1414ea75-392a-4bcc-9586-0267d5cf47ab] Received event network-vif-plugged-bd46dc09-5d50-46df-8d43-1b7dca95407f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 03:09:41 np0005548731 nova_compute[232433]: 2025-12-06 08:09:41.068 232437 DEBUG oslo_concurrency.lockutils [req-64d4aacd-1dfd-422a-8895-06f533c026b4 req-98ccc3bf-a54a-4d58-8b39-2e525fec74df 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "1414ea75-392a-4bcc-9586-0267d5cf47ab-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:09:41 np0005548731 nova_compute[232433]: 2025-12-06 08:09:41.069 232437 DEBUG oslo_concurrency.lockutils [req-64d4aacd-1dfd-422a-8895-06f533c026b4 req-98ccc3bf-a54a-4d58-8b39-2e525fec74df 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "1414ea75-392a-4bcc-9586-0267d5cf47ab-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:09:41 np0005548731 nova_compute[232433]: 2025-12-06 08:09:41.069 232437 DEBUG oslo_concurrency.lockutils [req-64d4aacd-1dfd-422a-8895-06f533c026b4 req-98ccc3bf-a54a-4d58-8b39-2e525fec74df 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "1414ea75-392a-4bcc-9586-0267d5cf47ab-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:09:41 np0005548731 nova_compute[232433]: 2025-12-06 08:09:41.069 232437 DEBUG nova.compute.manager [req-64d4aacd-1dfd-422a-8895-06f533c026b4 req-98ccc3bf-a54a-4d58-8b39-2e525fec74df 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 1414ea75-392a-4bcc-9586-0267d5cf47ab] No waiting events found dispatching network-vif-plugged-bd46dc09-5d50-46df-8d43-1b7dca95407f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 03:09:41 np0005548731 nova_compute[232433]: 2025-12-06 08:09:41.070 232437 WARNING nova.compute.manager [req-64d4aacd-1dfd-422a-8895-06f533c026b4 req-98ccc3bf-a54a-4d58-8b39-2e525fec74df 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 1414ea75-392a-4bcc-9586-0267d5cf47ab] Received unexpected event network-vif-plugged-bd46dc09-5d50-46df-8d43-1b7dca95407f for instance with vm_state active and task_state deleting.#033[00m
Dec  6 03:09:41 np0005548731 nova_compute[232433]: 2025-12-06 08:09:41.328 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:09:42 np0005548731 nova_compute[232433]: 2025-12-06 08:09:42.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:09:42 np0005548731 nova_compute[232433]: 2025-12-06 08:09:42.104 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  6 03:09:42 np0005548731 nova_compute[232433]: 2025-12-06 08:09:42.104 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  6 03:09:42 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:09:42 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:09:42 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:09:42.124 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:09:42 np0005548731 nova_compute[232433]: 2025-12-06 08:09:42.128 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] [instance: 1414ea75-392a-4bcc-9586-0267d5cf47ab] Skipping network cache update for instance because it is being deleted. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9875#033[00m
Dec  6 03:09:42 np0005548731 nova_compute[232433]: 2025-12-06 08:09:42.128 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Dec  6 03:09:42 np0005548731 nova_compute[232433]: 2025-12-06 08:09:42.128 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:09:42 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:09:42 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:09:42 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:09:42.481 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:09:43 np0005548731 nova_compute[232433]: 2025-12-06 08:09:43.117 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:09:43 np0005548731 nova_compute[232433]: 2025-12-06 08:09:43.577 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:09:44 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:09:44 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:09:44 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:09:44.126 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:09:44 np0005548731 nova_compute[232433]: 2025-12-06 08:09:44.445 232437 DEBUG nova.network.neutron [-] [instance: 1414ea75-392a-4bcc-9586-0267d5cf47ab] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 03:09:44 np0005548731 nova_compute[232433]: 2025-12-06 08:09:44.465 232437 INFO nova.compute.manager [-] [instance: 1414ea75-392a-4bcc-9586-0267d5cf47ab] Took 5.35 seconds to deallocate network for instance.#033[00m
Dec  6 03:09:44 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:09:44 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:09:44 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:09:44.484 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:09:44 np0005548731 nova_compute[232433]: 2025-12-06 08:09:44.542 232437 DEBUG oslo_concurrency.lockutils [None req-13c9b848-4db2-4ce8-a663-49c8da5acff5 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:09:44 np0005548731 nova_compute[232433]: 2025-12-06 08:09:44.543 232437 DEBUG oslo_concurrency.lockutils [None req-13c9b848-4db2-4ce8-a663-49c8da5acff5 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:09:44 np0005548731 nova_compute[232433]: 2025-12-06 08:09:44.606 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:09:44 np0005548731 nova_compute[232433]: 2025-12-06 08:09:44.633 232437 DEBUG oslo_concurrency.processutils [None req-13c9b848-4db2-4ce8-a663-49c8da5acff5 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 03:09:44 np0005548731 nova_compute[232433]: 2025-12-06 08:09:44.738 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:09:44 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e413 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:09:45 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 03:09:45 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2057202737' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 03:09:45 np0005548731 nova_compute[232433]: 2025-12-06 08:09:45.068 232437 DEBUG oslo_concurrency.processutils [None req-13c9b848-4db2-4ce8-a663-49c8da5acff5 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.435s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 03:09:45 np0005548731 nova_compute[232433]: 2025-12-06 08:09:45.073 232437 DEBUG nova.compute.provider_tree [None req-13c9b848-4db2-4ce8-a663-49c8da5acff5 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Inventory has not changed in ProviderTree for provider: 6d00757a-082f-486d-ae84-869a2ba2e6e7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  6 03:09:45 np0005548731 nova_compute[232433]: 2025-12-06 08:09:45.323 232437 DEBUG nova.scheduler.client.report [None req-13c9b848-4db2-4ce8-a663-49c8da5acff5 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Inventory has not changed for provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  6 03:09:45 np0005548731 nova_compute[232433]: 2025-12-06 08:09:45.345 232437 DEBUG oslo_concurrency.lockutils [None req-13c9b848-4db2-4ce8-a663-49c8da5acff5 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.802s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:09:45 np0005548731 nova_compute[232433]: 2025-12-06 08:09:45.373 232437 INFO nova.scheduler.client.report [None req-13c9b848-4db2-4ce8-a663-49c8da5acff5 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Deleted allocations for instance 1414ea75-392a-4bcc-9586-0267d5cf47ab#033[00m
Dec  6 03:09:45 np0005548731 nova_compute[232433]: 2025-12-06 08:09:45.491 232437 DEBUG oslo_concurrency.lockutils [None req-13c9b848-4db2-4ce8-a663-49c8da5acff5 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Lock "1414ea75-392a-4bcc-9586-0267d5cf47ab" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 7.291s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:09:46 np0005548731 nova_compute[232433]: 2025-12-06 08:09:46.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:09:46 np0005548731 nova_compute[232433]: 2025-12-06 08:09:46.104 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  6 03:09:46 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:09:46 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:09:46 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:09:46.128 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:09:46 np0005548731 nova_compute[232433]: 2025-12-06 08:09:46.330 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:09:46 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:09:46 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 03:09:46 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:09:46.486 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 03:09:47 np0005548731 nova_compute[232433]: 2025-12-06 08:09:47.100 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:09:47 np0005548731 nova_compute[232433]: 2025-12-06 08:09:47.103 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:09:48 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:09:48 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:09:48 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:09:48.130 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:09:48 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:09:48 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 03:09:48 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:09:48.488 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 03:09:48 np0005548731 nova_compute[232433]: 2025-12-06 08:09:48.580 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:09:49 np0005548731 nova_compute[232433]: 2025-12-06 08:09:49.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:09:49 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e413 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:09:50 np0005548731 nova_compute[232433]: 2025-12-06 08:09:50.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:09:50 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:09:50 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:09:50 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:09:50.132 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:09:50 np0005548731 nova_compute[232433]: 2025-12-06 08:09:50.134 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:09:50 np0005548731 nova_compute[232433]: 2025-12-06 08:09:50.134 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:09:50 np0005548731 nova_compute[232433]: 2025-12-06 08:09:50.134 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:09:50 np0005548731 nova_compute[232433]: 2025-12-06 08:09:50.135 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  6 03:09:50 np0005548731 nova_compute[232433]: 2025-12-06 08:09:50.135 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 03:09:50 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:09:50 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:09:50 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:09:50.490 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:09:50 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 03:09:50 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1999819615' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 03:09:50 np0005548731 nova_compute[232433]: 2025-12-06 08:09:50.561 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.426s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 03:09:50 np0005548731 nova_compute[232433]: 2025-12-06 08:09:50.721 232437 WARNING nova.virt.libvirt.driver [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  6 03:09:50 np0005548731 nova_compute[232433]: 2025-12-06 08:09:50.722 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4165MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  6 03:09:50 np0005548731 nova_compute[232433]: 2025-12-06 08:09:50.722 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:09:50 np0005548731 nova_compute[232433]: 2025-12-06 08:09:50.722 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:09:50 np0005548731 nova_compute[232433]: 2025-12-06 08:09:50.792 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  6 03:09:50 np0005548731 nova_compute[232433]: 2025-12-06 08:09:50.792 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  6 03:09:50 np0005548731 nova_compute[232433]: 2025-12-06 08:09:50.871 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 03:09:51 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 03:09:51 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1936916397' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 03:09:51 np0005548731 nova_compute[232433]: 2025-12-06 08:09:51.304 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.433s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 03:09:51 np0005548731 nova_compute[232433]: 2025-12-06 08:09:51.309 232437 DEBUG nova.compute.provider_tree [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Inventory has not changed in ProviderTree for provider: 6d00757a-082f-486d-ae84-869a2ba2e6e7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  6 03:09:51 np0005548731 nova_compute[232433]: 2025-12-06 08:09:51.332 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:09:51 np0005548731 nova_compute[232433]: 2025-12-06 08:09:51.337 232437 DEBUG nova.scheduler.client.report [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Inventory has not changed for provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  6 03:09:51 np0005548731 nova_compute[232433]: 2025-12-06 08:09:51.361 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  6 03:09:51 np0005548731 nova_compute[232433]: 2025-12-06 08:09:51.361 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.639s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:09:52 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 03:09:52 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 03:09:52 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  6 03:09:52 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 03:09:52 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  6 03:09:52 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:09:52 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:09:52 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:09:52.134 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:09:52 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:09:52 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:09:52 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:09:52.492 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:09:53 np0005548731 nova_compute[232433]: 2025-12-06 08:09:53.445 232437 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765008578.443891, 1414ea75-392a-4bcc-9586-0267d5cf47ab => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 03:09:53 np0005548731 nova_compute[232433]: 2025-12-06 08:09:53.445 232437 INFO nova.compute.manager [-] [instance: 1414ea75-392a-4bcc-9586-0267d5cf47ab] VM Stopped (Lifecycle Event)#033[00m
Dec  6 03:09:53 np0005548731 nova_compute[232433]: 2025-12-06 08:09:53.480 232437 DEBUG nova.compute.manager [None req-a670f107-e149-40fb-93a0-c40d473ab14b - - - - - -] [instance: 1414ea75-392a-4bcc-9586-0267d5cf47ab] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 03:09:53 np0005548731 nova_compute[232433]: 2025-12-06 08:09:53.584 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:09:54 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:09:54 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:09:54 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:09:54.136 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:09:54 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:09:54 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:09:54 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:09:54.496 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:09:54 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e413 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:09:55 np0005548731 nova_compute[232433]: 2025-12-06 08:09:55.363 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:09:55 np0005548731 nova_compute[232433]: 2025-12-06 08:09:55.843 232437 DEBUG oslo_concurrency.lockutils [None req-4bc83001-bd4b-4c5b-8dd8-2b908da61d3d d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Acquiring lock "689c0f7d-997a-4352-bbea-ecb615ccea8f" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:09:55 np0005548731 nova_compute[232433]: 2025-12-06 08:09:55.843 232437 DEBUG oslo_concurrency.lockutils [None req-4bc83001-bd4b-4c5b-8dd8-2b908da61d3d d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Lock "689c0f7d-997a-4352-bbea-ecb615ccea8f" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:09:55 np0005548731 nova_compute[232433]: 2025-12-06 08:09:55.871 232437 DEBUG nova.compute.manager [None req-4bc83001-bd4b-4c5b-8dd8-2b908da61d3d d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 689c0f7d-997a-4352-bbea-ecb615ccea8f] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Dec  6 03:09:55 np0005548731 nova_compute[232433]: 2025-12-06 08:09:55.950 232437 DEBUG oslo_concurrency.lockutils [None req-4bc83001-bd4b-4c5b-8dd8-2b908da61d3d d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:09:55 np0005548731 nova_compute[232433]: 2025-12-06 08:09:55.950 232437 DEBUG oslo_concurrency.lockutils [None req-4bc83001-bd4b-4c5b-8dd8-2b908da61d3d d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:09:55 np0005548731 nova_compute[232433]: 2025-12-06 08:09:55.956 232437 DEBUG nova.virt.hardware [None req-4bc83001-bd4b-4c5b-8dd8-2b908da61d3d d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Dec  6 03:09:55 np0005548731 nova_compute[232433]: 2025-12-06 08:09:55.957 232437 INFO nova.compute.claims [None req-4bc83001-bd4b-4c5b-8dd8-2b908da61d3d d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 689c0f7d-997a-4352-bbea-ecb615ccea8f] Claim successful on node compute-2.ctlplane.example.com#033[00m
Dec  6 03:09:56 np0005548731 nova_compute[232433]: 2025-12-06 08:09:56.056 232437 DEBUG oslo_concurrency.processutils [None req-4bc83001-bd4b-4c5b-8dd8-2b908da61d3d d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 03:09:56 np0005548731 nova_compute[232433]: 2025-12-06 08:09:56.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:09:56 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:09:56 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 03:09:56 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:09:56.139 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 03:09:56 np0005548731 nova_compute[232433]: 2025-12-06 08:09:56.334 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:09:56 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 03:09:56 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3535291815' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 03:09:56 np0005548731 nova_compute[232433]: 2025-12-06 08:09:56.493 232437 DEBUG oslo_concurrency.processutils [None req-4bc83001-bd4b-4c5b-8dd8-2b908da61d3d d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.436s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 03:09:56 np0005548731 nova_compute[232433]: 2025-12-06 08:09:56.498 232437 DEBUG nova.compute.provider_tree [None req-4bc83001-bd4b-4c5b-8dd8-2b908da61d3d d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Inventory has not changed in ProviderTree for provider: 6d00757a-082f-486d-ae84-869a2ba2e6e7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  6 03:09:56 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:09:56 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:09:56 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:09:56.499 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:09:56 np0005548731 nova_compute[232433]: 2025-12-06 08:09:56.550 232437 DEBUG nova.scheduler.client.report [None req-4bc83001-bd4b-4c5b-8dd8-2b908da61d3d d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Inventory has not changed for provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  6 03:09:56 np0005548731 nova_compute[232433]: 2025-12-06 08:09:56.681 232437 DEBUG oslo_concurrency.lockutils [None req-4bc83001-bd4b-4c5b-8dd8-2b908da61d3d d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.731s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:09:56 np0005548731 nova_compute[232433]: 2025-12-06 08:09:56.682 232437 DEBUG nova.compute.manager [None req-4bc83001-bd4b-4c5b-8dd8-2b908da61d3d d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 689c0f7d-997a-4352-bbea-ecb615ccea8f] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Dec  6 03:09:56 np0005548731 nova_compute[232433]: 2025-12-06 08:09:56.896 232437 DEBUG nova.compute.manager [None req-4bc83001-bd4b-4c5b-8dd8-2b908da61d3d d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 689c0f7d-997a-4352-bbea-ecb615ccea8f] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Dec  6 03:09:56 np0005548731 nova_compute[232433]: 2025-12-06 08:09:56.896 232437 DEBUG nova.network.neutron [None req-4bc83001-bd4b-4c5b-8dd8-2b908da61d3d d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 689c0f7d-997a-4352-bbea-ecb615ccea8f] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Dec  6 03:09:56 np0005548731 nova_compute[232433]: 2025-12-06 08:09:56.936 232437 INFO nova.virt.libvirt.driver [None req-4bc83001-bd4b-4c5b-8dd8-2b908da61d3d d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 689c0f7d-997a-4352-bbea-ecb615ccea8f] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Dec  6 03:09:56 np0005548731 nova_compute[232433]: 2025-12-06 08:09:56.967 232437 DEBUG nova.compute.manager [None req-4bc83001-bd4b-4c5b-8dd8-2b908da61d3d d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 689c0f7d-997a-4352-bbea-ecb615ccea8f] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Dec  6 03:09:57 np0005548731 nova_compute[232433]: 2025-12-06 08:09:57.151 232437 DEBUG nova.compute.manager [None req-4bc83001-bd4b-4c5b-8dd8-2b908da61d3d d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 689c0f7d-997a-4352-bbea-ecb615ccea8f] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Dec  6 03:09:57 np0005548731 nova_compute[232433]: 2025-12-06 08:09:57.152 232437 DEBUG nova.virt.libvirt.driver [None req-4bc83001-bd4b-4c5b-8dd8-2b908da61d3d d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 689c0f7d-997a-4352-bbea-ecb615ccea8f] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Dec  6 03:09:57 np0005548731 nova_compute[232433]: 2025-12-06 08:09:57.152 232437 INFO nova.virt.libvirt.driver [None req-4bc83001-bd4b-4c5b-8dd8-2b908da61d3d d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 689c0f7d-997a-4352-bbea-ecb615ccea8f] Creating image(s)#033[00m
Dec  6 03:09:57 np0005548731 nova_compute[232433]: 2025-12-06 08:09:57.178 232437 DEBUG nova.storage.rbd_utils [None req-4bc83001-bd4b-4c5b-8dd8-2b908da61d3d d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] rbd image 689c0f7d-997a-4352-bbea-ecb615ccea8f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 03:09:57 np0005548731 nova_compute[232433]: 2025-12-06 08:09:57.209 232437 DEBUG nova.storage.rbd_utils [None req-4bc83001-bd4b-4c5b-8dd8-2b908da61d3d d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] rbd image 689c0f7d-997a-4352-bbea-ecb615ccea8f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 03:09:57 np0005548731 nova_compute[232433]: 2025-12-06 08:09:57.238 232437 DEBUG nova.storage.rbd_utils [None req-4bc83001-bd4b-4c5b-8dd8-2b908da61d3d d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] rbd image 689c0f7d-997a-4352-bbea-ecb615ccea8f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 03:09:57 np0005548731 nova_compute[232433]: 2025-12-06 08:09:57.243 232437 DEBUG oslo_concurrency.processutils [None req-4bc83001-bd4b-4c5b-8dd8-2b908da61d3d d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 03:09:57 np0005548731 nova_compute[232433]: 2025-12-06 08:09:57.316 232437 DEBUG oslo_concurrency.processutils [None req-4bc83001-bd4b-4c5b-8dd8-2b908da61d3d d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json" returned: 0 in 0.073s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 03:09:57 np0005548731 nova_compute[232433]: 2025-12-06 08:09:57.317 232437 DEBUG oslo_concurrency.lockutils [None req-4bc83001-bd4b-4c5b-8dd8-2b908da61d3d d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Acquiring lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:09:57 np0005548731 nova_compute[232433]: 2025-12-06 08:09:57.317 232437 DEBUG oslo_concurrency.lockutils [None req-4bc83001-bd4b-4c5b-8dd8-2b908da61d3d d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:09:57 np0005548731 nova_compute[232433]: 2025-12-06 08:09:57.318 232437 DEBUG oslo_concurrency.lockutils [None req-4bc83001-bd4b-4c5b-8dd8-2b908da61d3d d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:09:57 np0005548731 nova_compute[232433]: 2025-12-06 08:09:57.341 232437 DEBUG nova.storage.rbd_utils [None req-4bc83001-bd4b-4c5b-8dd8-2b908da61d3d d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] rbd image 689c0f7d-997a-4352-bbea-ecb615ccea8f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 03:09:57 np0005548731 nova_compute[232433]: 2025-12-06 08:09:57.344 232437 DEBUG oslo_concurrency.processutils [None req-4bc83001-bd4b-4c5b-8dd8-2b908da61d3d d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef 689c0f7d-997a-4352-bbea-ecb615ccea8f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 03:09:57 np0005548731 nova_compute[232433]: 2025-12-06 08:09:57.604 232437 DEBUG oslo_concurrency.processutils [None req-4bc83001-bd4b-4c5b-8dd8-2b908da61d3d d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef 689c0f7d-997a-4352-bbea-ecb615ccea8f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.260s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 03:09:57 np0005548731 nova_compute[232433]: 2025-12-06 08:09:57.674 232437 DEBUG nova.storage.rbd_utils [None req-4bc83001-bd4b-4c5b-8dd8-2b908da61d3d d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] resizing rbd image 689c0f7d-997a-4352-bbea-ecb615ccea8f_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Dec  6 03:09:57 np0005548731 nova_compute[232433]: 2025-12-06 08:09:57.771 232437 DEBUG nova.objects.instance [None req-4bc83001-bd4b-4c5b-8dd8-2b908da61d3d d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Lazy-loading 'migration_context' on Instance uuid 689c0f7d-997a-4352-bbea-ecb615ccea8f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 03:09:57 np0005548731 nova_compute[232433]: 2025-12-06 08:09:57.789 232437 DEBUG nova.virt.libvirt.driver [None req-4bc83001-bd4b-4c5b-8dd8-2b908da61d3d d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 689c0f7d-997a-4352-bbea-ecb615ccea8f] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Dec  6 03:09:57 np0005548731 nova_compute[232433]: 2025-12-06 08:09:57.790 232437 DEBUG nova.virt.libvirt.driver [None req-4bc83001-bd4b-4c5b-8dd8-2b908da61d3d d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 689c0f7d-997a-4352-bbea-ecb615ccea8f] Ensure instance console log exists: /var/lib/nova/instances/689c0f7d-997a-4352-bbea-ecb615ccea8f/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec  6 03:09:57 np0005548731 nova_compute[232433]: 2025-12-06 08:09:57.790 232437 DEBUG oslo_concurrency.lockutils [None req-4bc83001-bd4b-4c5b-8dd8-2b908da61d3d d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:09:57 np0005548731 nova_compute[232433]: 2025-12-06 08:09:57.791 232437 DEBUG oslo_concurrency.lockutils [None req-4bc83001-bd4b-4c5b-8dd8-2b908da61d3d d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:09:57 np0005548731 nova_compute[232433]: 2025-12-06 08:09:57.791 232437 DEBUG oslo_concurrency.lockutils [None req-4bc83001-bd4b-4c5b-8dd8-2b908da61d3d d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:09:57 np0005548731 nova_compute[232433]: 2025-12-06 08:09:57.887 232437 DEBUG nova.policy [None req-4bc83001-bd4b-4c5b-8dd8-2b908da61d3d d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'd5359905348247d0b9b5b95982e890bb', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'f4735a799c84437b9dd4ea8778ad2fbb', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Dec  6 03:09:57 np0005548731 podman[330576]: 2025-12-06 08:09:57.9133579 +0000 UTC m=+0.078837042 container health_status a118c3becf56cfbcae208065771ebf10a65162bb7b96389971293e534823fb78 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller)
Dec  6 03:09:57 np0005548731 podman[330575]: 2025-12-06 08:09:57.91334566 +0000 UTC m=+0.078642776 container health_status 26c2ce9bdb83c6db5ccad7f5b2a7cb4e9354ab0235ad2f942ea42c8feda2142b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  6 03:09:57 np0005548731 podman[330577]: 2025-12-06 08:09:57.914410986 +0000 UTC m=+0.075624543 container health_status ae4fd08cdec31d6ae2a6b91ffff7deb6ca5a8375cb52ee4b685cc6f25796824e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=multipathd)
Dec  6 03:09:58 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:09:58 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:09:58 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:09:58.142 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:09:58 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:09:58 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:09:58 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:09:58.502 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:09:58 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 03:09:58 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 03:09:58 np0005548731 nova_compute[232433]: 2025-12-06 08:09:58.587 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:09:59 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e413 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:09:59 np0005548731 nova_compute[232433]: 2025-12-06 08:09:59.891 232437 DEBUG nova.network.neutron [None req-4bc83001-bd4b-4c5b-8dd8-2b908da61d3d d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 689c0f7d-997a-4352-bbea-ecb615ccea8f] Successfully updated port: bd46dc09-5d50-46df-8d43-1b7dca95407f _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Dec  6 03:09:59 np0005548731 nova_compute[232433]: 2025-12-06 08:09:59.925 232437 DEBUG oslo_concurrency.lockutils [None req-4bc83001-bd4b-4c5b-8dd8-2b908da61d3d d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Acquiring lock "refresh_cache-689c0f7d-997a-4352-bbea-ecb615ccea8f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 03:09:59 np0005548731 nova_compute[232433]: 2025-12-06 08:09:59.925 232437 DEBUG oslo_concurrency.lockutils [None req-4bc83001-bd4b-4c5b-8dd8-2b908da61d3d d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Acquired lock "refresh_cache-689c0f7d-997a-4352-bbea-ecb615ccea8f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 03:09:59 np0005548731 nova_compute[232433]: 2025-12-06 08:09:59.926 232437 DEBUG nova.network.neutron [None req-4bc83001-bd4b-4c5b-8dd8-2b908da61d3d d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 689c0f7d-997a-4352-bbea-ecb615ccea8f] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec  6 03:10:00 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:10:00 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:10:00 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:10:00.145 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:10:00 np0005548731 nova_compute[232433]: 2025-12-06 08:10:00.146 232437 DEBUG nova.compute.manager [req-bc8aefae-ede6-483e-bd2e-65382afd6310 req-1d93ee20-1bfa-494c-89a3-de46736e57f7 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 689c0f7d-997a-4352-bbea-ecb615ccea8f] Received event network-changed-bd46dc09-5d50-46df-8d43-1b7dca95407f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 03:10:00 np0005548731 nova_compute[232433]: 2025-12-06 08:10:00.147 232437 DEBUG nova.compute.manager [req-bc8aefae-ede6-483e-bd2e-65382afd6310 req-1d93ee20-1bfa-494c-89a3-de46736e57f7 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 689c0f7d-997a-4352-bbea-ecb615ccea8f] Refreshing instance network info cache due to event network-changed-bd46dc09-5d50-46df-8d43-1b7dca95407f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  6 03:10:00 np0005548731 nova_compute[232433]: 2025-12-06 08:10:00.147 232437 DEBUG oslo_concurrency.lockutils [req-bc8aefae-ede6-483e-bd2e-65382afd6310 req-1d93ee20-1bfa-494c-89a3-de46736e57f7 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-689c0f7d-997a-4352-bbea-ecb615ccea8f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 03:10:00 np0005548731 nova_compute[232433]: 2025-12-06 08:10:00.159 232437 DEBUG nova.network.neutron [None req-4bc83001-bd4b-4c5b-8dd8-2b908da61d3d d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 689c0f7d-997a-4352-bbea-ecb615ccea8f] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec  6 03:10:00 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:10:00 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:10:00 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:10:00.504 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:10:00 np0005548731 ceph-mon[77458]: overall HEALTH_OK
Dec  6 03:10:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:10:00.913 143965 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:10:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:10:00.913 143965 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:10:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:10:00.913 143965 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:10:01 np0005548731 nova_compute[232433]: 2025-12-06 08:10:01.337 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:10:02 np0005548731 nova_compute[232433]: 2025-12-06 08:10:02.067 232437 DEBUG nova.network.neutron [None req-4bc83001-bd4b-4c5b-8dd8-2b908da61d3d d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 689c0f7d-997a-4352-bbea-ecb615ccea8f] Updating instance_info_cache with network_info: [{"id": "bd46dc09-5d50-46df-8d43-1b7dca95407f", "address": "fa:16:3e:b4:d3:2e", "network": {"id": "2f82da1e-2277-4375-8c9e-32c60bb0f99c", "bridge": "br-int", "label": "tempest-network-smoke--1695874872", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.218", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f4735a799c84437b9dd4ea8778ad2fbb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbd46dc09-5d", "ovs_interfaceid": "bd46dc09-5d50-46df-8d43-1b7dca95407f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 03:10:02 np0005548731 nova_compute[232433]: 2025-12-06 08:10:02.085 232437 DEBUG oslo_concurrency.lockutils [None req-4bc83001-bd4b-4c5b-8dd8-2b908da61d3d d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Releasing lock "refresh_cache-689c0f7d-997a-4352-bbea-ecb615ccea8f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 03:10:02 np0005548731 nova_compute[232433]: 2025-12-06 08:10:02.086 232437 DEBUG nova.compute.manager [None req-4bc83001-bd4b-4c5b-8dd8-2b908da61d3d d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 689c0f7d-997a-4352-bbea-ecb615ccea8f] Instance network_info: |[{"id": "bd46dc09-5d50-46df-8d43-1b7dca95407f", "address": "fa:16:3e:b4:d3:2e", "network": {"id": "2f82da1e-2277-4375-8c9e-32c60bb0f99c", "bridge": "br-int", "label": "tempest-network-smoke--1695874872", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.218", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f4735a799c84437b9dd4ea8778ad2fbb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbd46dc09-5d", "ovs_interfaceid": "bd46dc09-5d50-46df-8d43-1b7dca95407f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Dec  6 03:10:02 np0005548731 nova_compute[232433]: 2025-12-06 08:10:02.086 232437 DEBUG oslo_concurrency.lockutils [req-bc8aefae-ede6-483e-bd2e-65382afd6310 req-1d93ee20-1bfa-494c-89a3-de46736e57f7 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-689c0f7d-997a-4352-bbea-ecb615ccea8f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 03:10:02 np0005548731 nova_compute[232433]: 2025-12-06 08:10:02.086 232437 DEBUG nova.network.neutron [req-bc8aefae-ede6-483e-bd2e-65382afd6310 req-1d93ee20-1bfa-494c-89a3-de46736e57f7 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 689c0f7d-997a-4352-bbea-ecb615ccea8f] Refreshing network info cache for port bd46dc09-5d50-46df-8d43-1b7dca95407f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  6 03:10:02 np0005548731 nova_compute[232433]: 2025-12-06 08:10:02.089 232437 DEBUG nova.virt.libvirt.driver [None req-4bc83001-bd4b-4c5b-8dd8-2b908da61d3d d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 689c0f7d-997a-4352-bbea-ecb615ccea8f] Start _get_guest_xml network_info=[{"id": "bd46dc09-5d50-46df-8d43-1b7dca95407f", "address": "fa:16:3e:b4:d3:2e", "network": {"id": "2f82da1e-2277-4375-8c9e-32c60bb0f99c", "bridge": "br-int", "label": "tempest-network-smoke--1695874872", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.218", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f4735a799c84437b9dd4ea8778ad2fbb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbd46dc09-5d", "ovs_interfaceid": "bd46dc09-5d50-46df-8d43-1b7dca95407f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'device_type': 'disk', 'encryption_secret_uuid': None, 'disk_bus': 'virtio', 'boot_index': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'encryption_options': None, 'size': 0, 'encrypted': False, 'image_id': '6efab05d-c7cf-4770-a5c3-c806a2739063'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec  6 03:10:02 np0005548731 nova_compute[232433]: 2025-12-06 08:10:02.093 232437 WARNING nova.virt.libvirt.driver [None req-4bc83001-bd4b-4c5b-8dd8-2b908da61d3d d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  6 03:10:02 np0005548731 nova_compute[232433]: 2025-12-06 08:10:02.097 232437 DEBUG nova.virt.libvirt.host [None req-4bc83001-bd4b-4c5b-8dd8-2b908da61d3d d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec  6 03:10:02 np0005548731 nova_compute[232433]: 2025-12-06 08:10:02.097 232437 DEBUG nova.virt.libvirt.host [None req-4bc83001-bd4b-4c5b-8dd8-2b908da61d3d d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec  6 03:10:02 np0005548731 nova_compute[232433]: 2025-12-06 08:10:02.100 232437 DEBUG nova.virt.libvirt.host [None req-4bc83001-bd4b-4c5b-8dd8-2b908da61d3d d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec  6 03:10:02 np0005548731 nova_compute[232433]: 2025-12-06 08:10:02.101 232437 DEBUG nova.virt.libvirt.host [None req-4bc83001-bd4b-4c5b-8dd8-2b908da61d3d d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec  6 03:10:02 np0005548731 nova_compute[232433]: 2025-12-06 08:10:02.101 232437 DEBUG nova.virt.libvirt.driver [None req-4bc83001-bd4b-4c5b-8dd8-2b908da61d3d d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec  6 03:10:02 np0005548731 nova_compute[232433]: 2025-12-06 08:10:02.102 232437 DEBUG nova.virt.hardware [None req-4bc83001-bd4b-4c5b-8dd8-2b908da61d3d d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-06T06:56:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='25848a18-11d9-4f11-80b5-5d005675c76d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec  6 03:10:02 np0005548731 nova_compute[232433]: 2025-12-06 08:10:02.102 232437 DEBUG nova.virt.hardware [None req-4bc83001-bd4b-4c5b-8dd8-2b908da61d3d d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec  6 03:10:02 np0005548731 nova_compute[232433]: 2025-12-06 08:10:02.102 232437 DEBUG nova.virt.hardware [None req-4bc83001-bd4b-4c5b-8dd8-2b908da61d3d d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec  6 03:10:02 np0005548731 nova_compute[232433]: 2025-12-06 08:10:02.102 232437 DEBUG nova.virt.hardware [None req-4bc83001-bd4b-4c5b-8dd8-2b908da61d3d d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec  6 03:10:02 np0005548731 nova_compute[232433]: 2025-12-06 08:10:02.103 232437 DEBUG nova.virt.hardware [None req-4bc83001-bd4b-4c5b-8dd8-2b908da61d3d d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec  6 03:10:02 np0005548731 nova_compute[232433]: 2025-12-06 08:10:02.103 232437 DEBUG nova.virt.hardware [None req-4bc83001-bd4b-4c5b-8dd8-2b908da61d3d d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec  6 03:10:02 np0005548731 nova_compute[232433]: 2025-12-06 08:10:02.103 232437 DEBUG nova.virt.hardware [None req-4bc83001-bd4b-4c5b-8dd8-2b908da61d3d d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec  6 03:10:02 np0005548731 nova_compute[232433]: 2025-12-06 08:10:02.103 232437 DEBUG nova.virt.hardware [None req-4bc83001-bd4b-4c5b-8dd8-2b908da61d3d d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec  6 03:10:02 np0005548731 nova_compute[232433]: 2025-12-06 08:10:02.103 232437 DEBUG nova.virt.hardware [None req-4bc83001-bd4b-4c5b-8dd8-2b908da61d3d d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec  6 03:10:02 np0005548731 nova_compute[232433]: 2025-12-06 08:10:02.104 232437 DEBUG nova.virt.hardware [None req-4bc83001-bd4b-4c5b-8dd8-2b908da61d3d d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec  6 03:10:02 np0005548731 nova_compute[232433]: 2025-12-06 08:10:02.104 232437 DEBUG nova.virt.hardware [None req-4bc83001-bd4b-4c5b-8dd8-2b908da61d3d d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec  6 03:10:02 np0005548731 nova_compute[232433]: 2025-12-06 08:10:02.106 232437 DEBUG oslo_concurrency.processutils [None req-4bc83001-bd4b-4c5b-8dd8-2b908da61d3d d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 03:10:02 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:10:02 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:10:02 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:10:02.146 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:10:02 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  6 03:10:02 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2883649163' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  6 03:10:02 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:10:02 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 03:10:02 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:10:02.505 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 03:10:02 np0005548731 nova_compute[232433]: 2025-12-06 08:10:02.518 232437 DEBUG oslo_concurrency.processutils [None req-4bc83001-bd4b-4c5b-8dd8-2b908da61d3d d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.412s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 03:10:02 np0005548731 nova_compute[232433]: 2025-12-06 08:10:02.554 232437 DEBUG nova.storage.rbd_utils [None req-4bc83001-bd4b-4c5b-8dd8-2b908da61d3d d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] rbd image 689c0f7d-997a-4352-bbea-ecb615ccea8f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 03:10:02 np0005548731 nova_compute[232433]: 2025-12-06 08:10:02.558 232437 DEBUG oslo_concurrency.processutils [None req-4bc83001-bd4b-4c5b-8dd8-2b908da61d3d d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 03:10:02 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  6 03:10:02 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3216410970' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  6 03:10:02 np0005548731 nova_compute[232433]: 2025-12-06 08:10:02.998 232437 DEBUG oslo_concurrency.processutils [None req-4bc83001-bd4b-4c5b-8dd8-2b908da61d3d d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.440s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 03:10:03 np0005548731 nova_compute[232433]: 2025-12-06 08:10:03.000 232437 DEBUG nova.virt.libvirt.vif [None req-4bc83001-bd4b-4c5b-8dd8-2b908da61d3d d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T08:09:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1926666901',display_name='tempest-TestNetworkBasicOps-server-1926666901',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1926666901',id=196,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLXJcX9S5tuhjgSccvh5O+jGO7J2gXtm8NpJVHisgACqOBHiGEwJ/JKoT3Q7sz6/mR3QAGntLQRx1IJyWZChJGINMA5wcNi4+wHSbP5236n2Kq91E+5QKMKMUofRtNaQtg==',key_name='tempest-TestNetworkBasicOps-962264356',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f4735a799c84437b9dd4ea8778ad2fbb',ramdisk_id='',reservation_id='r-q0pax7ur',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1435471576',owner_user_name='tempest-TestNetworkBasicOps-1435471576-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T08:09:57Z,user_data=None,user_id='d5359905348247d0b9b5b95982e890bb',uuid=689c0f7d-997a-4352-bbea-ecb615ccea8f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "bd46dc09-5d50-46df-8d43-1b7dca95407f", "address": "fa:16:3e:b4:d3:2e", "network": {"id": "2f82da1e-2277-4375-8c9e-32c60bb0f99c", "bridge": "br-int", "label": "tempest-network-smoke--1695874872", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.218", "type": "floating", "version": 4, "meta": {}}]}], 
"routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f4735a799c84437b9dd4ea8778ad2fbb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbd46dc09-5d", "ovs_interfaceid": "bd46dc09-5d50-46df-8d43-1b7dca95407f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec  6 03:10:03 np0005548731 nova_compute[232433]: 2025-12-06 08:10:03.001 232437 DEBUG nova.network.os_vif_util [None req-4bc83001-bd4b-4c5b-8dd8-2b908da61d3d d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Converting VIF {"id": "bd46dc09-5d50-46df-8d43-1b7dca95407f", "address": "fa:16:3e:b4:d3:2e", "network": {"id": "2f82da1e-2277-4375-8c9e-32c60bb0f99c", "bridge": "br-int", "label": "tempest-network-smoke--1695874872", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.218", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f4735a799c84437b9dd4ea8778ad2fbb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbd46dc09-5d", "ovs_interfaceid": "bd46dc09-5d50-46df-8d43-1b7dca95407f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 03:10:03 np0005548731 nova_compute[232433]: 2025-12-06 08:10:03.002 232437 DEBUG nova.network.os_vif_util [None req-4bc83001-bd4b-4c5b-8dd8-2b908da61d3d d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b4:d3:2e,bridge_name='br-int',has_traffic_filtering=True,id=bd46dc09-5d50-46df-8d43-1b7dca95407f,network=Network(2f82da1e-2277-4375-8c9e-32c60bb0f99c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapbd46dc09-5d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 03:10:03 np0005548731 nova_compute[232433]: 2025-12-06 08:10:03.003 232437 DEBUG nova.objects.instance [None req-4bc83001-bd4b-4c5b-8dd8-2b908da61d3d d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Lazy-loading 'pci_devices' on Instance uuid 689c0f7d-997a-4352-bbea-ecb615ccea8f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 03:10:03 np0005548731 nova_compute[232433]: 2025-12-06 08:10:03.023 232437 DEBUG nova.virt.libvirt.driver [None req-4bc83001-bd4b-4c5b-8dd8-2b908da61d3d d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 689c0f7d-997a-4352-bbea-ecb615ccea8f] End _get_guest_xml xml=<domain type="kvm">
Dec  6 03:10:03 np0005548731 nova_compute[232433]:  <uuid>689c0f7d-997a-4352-bbea-ecb615ccea8f</uuid>
Dec  6 03:10:03 np0005548731 nova_compute[232433]:  <name>instance-000000c4</name>
Dec  6 03:10:03 np0005548731 nova_compute[232433]:  <memory>131072</memory>
Dec  6 03:10:03 np0005548731 nova_compute[232433]:  <vcpu>1</vcpu>
Dec  6 03:10:03 np0005548731 nova_compute[232433]:  <metadata>
Dec  6 03:10:03 np0005548731 nova_compute[232433]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec  6 03:10:03 np0005548731 nova_compute[232433]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec  6 03:10:03 np0005548731 nova_compute[232433]:      <nova:name>tempest-TestNetworkBasicOps-server-1926666901</nova:name>
Dec  6 03:10:03 np0005548731 nova_compute[232433]:      <nova:creationTime>2025-12-06 08:10:02</nova:creationTime>
Dec  6 03:10:03 np0005548731 nova_compute[232433]:      <nova:flavor name="m1.nano">
Dec  6 03:10:03 np0005548731 nova_compute[232433]:        <nova:memory>128</nova:memory>
Dec  6 03:10:03 np0005548731 nova_compute[232433]:        <nova:disk>1</nova:disk>
Dec  6 03:10:03 np0005548731 nova_compute[232433]:        <nova:swap>0</nova:swap>
Dec  6 03:10:03 np0005548731 nova_compute[232433]:        <nova:ephemeral>0</nova:ephemeral>
Dec  6 03:10:03 np0005548731 nova_compute[232433]:        <nova:vcpus>1</nova:vcpus>
Dec  6 03:10:03 np0005548731 nova_compute[232433]:      </nova:flavor>
Dec  6 03:10:03 np0005548731 nova_compute[232433]:      <nova:owner>
Dec  6 03:10:03 np0005548731 nova_compute[232433]:        <nova:user uuid="d5359905348247d0b9b5b95982e890bb">tempest-TestNetworkBasicOps-1435471576-project-member</nova:user>
Dec  6 03:10:03 np0005548731 nova_compute[232433]:        <nova:project uuid="f4735a799c84437b9dd4ea8778ad2fbb">tempest-TestNetworkBasicOps-1435471576</nova:project>
Dec  6 03:10:03 np0005548731 nova_compute[232433]:      </nova:owner>
Dec  6 03:10:03 np0005548731 nova_compute[232433]:      <nova:root type="image" uuid="6efab05d-c7cf-4770-a5c3-c806a2739063"/>
Dec  6 03:10:03 np0005548731 nova_compute[232433]:      <nova:ports>
Dec  6 03:10:03 np0005548731 nova_compute[232433]:        <nova:port uuid="bd46dc09-5d50-46df-8d43-1b7dca95407f">
Dec  6 03:10:03 np0005548731 nova_compute[232433]:          <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Dec  6 03:10:03 np0005548731 nova_compute[232433]:        </nova:port>
Dec  6 03:10:03 np0005548731 nova_compute[232433]:      </nova:ports>
Dec  6 03:10:03 np0005548731 nova_compute[232433]:    </nova:instance>
Dec  6 03:10:03 np0005548731 nova_compute[232433]:  </metadata>
Dec  6 03:10:03 np0005548731 nova_compute[232433]:  <sysinfo type="smbios">
Dec  6 03:10:03 np0005548731 nova_compute[232433]:    <system>
Dec  6 03:10:03 np0005548731 nova_compute[232433]:      <entry name="manufacturer">RDO</entry>
Dec  6 03:10:03 np0005548731 nova_compute[232433]:      <entry name="product">OpenStack Compute</entry>
Dec  6 03:10:03 np0005548731 nova_compute[232433]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec  6 03:10:03 np0005548731 nova_compute[232433]:      <entry name="serial">689c0f7d-997a-4352-bbea-ecb615ccea8f</entry>
Dec  6 03:10:03 np0005548731 nova_compute[232433]:      <entry name="uuid">689c0f7d-997a-4352-bbea-ecb615ccea8f</entry>
Dec  6 03:10:03 np0005548731 nova_compute[232433]:      <entry name="family">Virtual Machine</entry>
Dec  6 03:10:03 np0005548731 nova_compute[232433]:    </system>
Dec  6 03:10:03 np0005548731 nova_compute[232433]:  </sysinfo>
Dec  6 03:10:03 np0005548731 nova_compute[232433]:  <os>
Dec  6 03:10:03 np0005548731 nova_compute[232433]:    <type arch="x86_64" machine="q35">hvm</type>
Dec  6 03:10:03 np0005548731 nova_compute[232433]:    <boot dev="hd"/>
Dec  6 03:10:03 np0005548731 nova_compute[232433]:    <smbios mode="sysinfo"/>
Dec  6 03:10:03 np0005548731 nova_compute[232433]:  </os>
Dec  6 03:10:03 np0005548731 nova_compute[232433]:  <features>
Dec  6 03:10:03 np0005548731 nova_compute[232433]:    <acpi/>
Dec  6 03:10:03 np0005548731 nova_compute[232433]:    <apic/>
Dec  6 03:10:03 np0005548731 nova_compute[232433]:    <vmcoreinfo/>
Dec  6 03:10:03 np0005548731 nova_compute[232433]:  </features>
Dec  6 03:10:03 np0005548731 nova_compute[232433]:  <clock offset="utc">
Dec  6 03:10:03 np0005548731 nova_compute[232433]:    <timer name="pit" tickpolicy="delay"/>
Dec  6 03:10:03 np0005548731 nova_compute[232433]:    <timer name="rtc" tickpolicy="catchup"/>
Dec  6 03:10:03 np0005548731 nova_compute[232433]:    <timer name="hpet" present="no"/>
Dec  6 03:10:03 np0005548731 nova_compute[232433]:  </clock>
Dec  6 03:10:03 np0005548731 nova_compute[232433]:  <cpu mode="custom" match="exact">
Dec  6 03:10:03 np0005548731 nova_compute[232433]:    <model>Nehalem</model>
Dec  6 03:10:03 np0005548731 nova_compute[232433]:    <topology sockets="1" cores="1" threads="1"/>
Dec  6 03:10:03 np0005548731 nova_compute[232433]:  </cpu>
Dec  6 03:10:03 np0005548731 nova_compute[232433]:  <devices>
Dec  6 03:10:03 np0005548731 nova_compute[232433]:    <disk type="network" device="disk">
Dec  6 03:10:03 np0005548731 nova_compute[232433]:      <driver type="raw" cache="none"/>
Dec  6 03:10:03 np0005548731 nova_compute[232433]:      <source protocol="rbd" name="vms/689c0f7d-997a-4352-bbea-ecb615ccea8f_disk">
Dec  6 03:10:03 np0005548731 nova_compute[232433]:        <host name="192.168.122.100" port="6789"/>
Dec  6 03:10:03 np0005548731 nova_compute[232433]:        <host name="192.168.122.102" port="6789"/>
Dec  6 03:10:03 np0005548731 nova_compute[232433]:        <host name="192.168.122.101" port="6789"/>
Dec  6 03:10:03 np0005548731 nova_compute[232433]:      </source>
Dec  6 03:10:03 np0005548731 nova_compute[232433]:      <auth username="openstack">
Dec  6 03:10:03 np0005548731 nova_compute[232433]:        <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec  6 03:10:03 np0005548731 nova_compute[232433]:      </auth>
Dec  6 03:10:03 np0005548731 nova_compute[232433]:      <target dev="vda" bus="virtio"/>
Dec  6 03:10:03 np0005548731 nova_compute[232433]:    </disk>
Dec  6 03:10:03 np0005548731 nova_compute[232433]:    <disk type="network" device="cdrom">
Dec  6 03:10:03 np0005548731 nova_compute[232433]:      <driver type="raw" cache="none"/>
Dec  6 03:10:03 np0005548731 nova_compute[232433]:      <source protocol="rbd" name="vms/689c0f7d-997a-4352-bbea-ecb615ccea8f_disk.config">
Dec  6 03:10:03 np0005548731 nova_compute[232433]:        <host name="192.168.122.100" port="6789"/>
Dec  6 03:10:03 np0005548731 nova_compute[232433]:        <host name="192.168.122.102" port="6789"/>
Dec  6 03:10:03 np0005548731 nova_compute[232433]:        <host name="192.168.122.101" port="6789"/>
Dec  6 03:10:03 np0005548731 nova_compute[232433]:      </source>
Dec  6 03:10:03 np0005548731 nova_compute[232433]:      <auth username="openstack">
Dec  6 03:10:03 np0005548731 nova_compute[232433]:        <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec  6 03:10:03 np0005548731 nova_compute[232433]:      </auth>
Dec  6 03:10:03 np0005548731 nova_compute[232433]:      <target dev="sda" bus="sata"/>
Dec  6 03:10:03 np0005548731 nova_compute[232433]:    </disk>
Dec  6 03:10:03 np0005548731 nova_compute[232433]:    <interface type="ethernet">
Dec  6 03:10:03 np0005548731 nova_compute[232433]:      <mac address="fa:16:3e:b4:d3:2e"/>
Dec  6 03:10:03 np0005548731 nova_compute[232433]:      <model type="virtio"/>
Dec  6 03:10:03 np0005548731 nova_compute[232433]:      <driver name="vhost" rx_queue_size="512"/>
Dec  6 03:10:03 np0005548731 nova_compute[232433]:      <mtu size="1442"/>
Dec  6 03:10:03 np0005548731 nova_compute[232433]:      <target dev="tapbd46dc09-5d"/>
Dec  6 03:10:03 np0005548731 nova_compute[232433]:    </interface>
Dec  6 03:10:03 np0005548731 nova_compute[232433]:    <serial type="pty">
Dec  6 03:10:03 np0005548731 nova_compute[232433]:      <log file="/var/lib/nova/instances/689c0f7d-997a-4352-bbea-ecb615ccea8f/console.log" append="off"/>
Dec  6 03:10:03 np0005548731 nova_compute[232433]:    </serial>
Dec  6 03:10:03 np0005548731 nova_compute[232433]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec  6 03:10:03 np0005548731 nova_compute[232433]:    <video>
Dec  6 03:10:03 np0005548731 nova_compute[232433]:      <model type="virtio"/>
Dec  6 03:10:03 np0005548731 nova_compute[232433]:    </video>
Dec  6 03:10:03 np0005548731 nova_compute[232433]:    <input type="tablet" bus="usb"/>
Dec  6 03:10:03 np0005548731 nova_compute[232433]:    <rng model="virtio">
Dec  6 03:10:03 np0005548731 nova_compute[232433]:      <backend model="random">/dev/urandom</backend>
Dec  6 03:10:03 np0005548731 nova_compute[232433]:    </rng>
Dec  6 03:10:03 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root"/>
Dec  6 03:10:03 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:10:03 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:10:03 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:10:03 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:10:03 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:10:03 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:10:03 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:10:03 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:10:03 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:10:03 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:10:03 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:10:03 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:10:03 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:10:03 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:10:03 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:10:03 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:10:03 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:10:03 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:10:03 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:10:03 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:10:03 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:10:03 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:10:03 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:10:03 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:10:03 np0005548731 nova_compute[232433]:    <controller type="usb" index="0"/>
Dec  6 03:10:03 np0005548731 nova_compute[232433]:    <memballoon model="virtio">
Dec  6 03:10:03 np0005548731 nova_compute[232433]:      <stats period="10"/>
Dec  6 03:10:03 np0005548731 nova_compute[232433]:    </memballoon>
Dec  6 03:10:03 np0005548731 nova_compute[232433]:  </devices>
Dec  6 03:10:03 np0005548731 nova_compute[232433]: </domain>
Dec  6 03:10:03 np0005548731 nova_compute[232433]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec  6 03:10:03 np0005548731 nova_compute[232433]: 2025-12-06 08:10:03.025 232437 DEBUG nova.compute.manager [None req-4bc83001-bd4b-4c5b-8dd8-2b908da61d3d d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 689c0f7d-997a-4352-bbea-ecb615ccea8f] Preparing to wait for external event network-vif-plugged-bd46dc09-5d50-46df-8d43-1b7dca95407f prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec  6 03:10:03 np0005548731 nova_compute[232433]: 2025-12-06 08:10:03.025 232437 DEBUG oslo_concurrency.lockutils [None req-4bc83001-bd4b-4c5b-8dd8-2b908da61d3d d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Acquiring lock "689c0f7d-997a-4352-bbea-ecb615ccea8f-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:10:03 np0005548731 nova_compute[232433]: 2025-12-06 08:10:03.026 232437 DEBUG oslo_concurrency.lockutils [None req-4bc83001-bd4b-4c5b-8dd8-2b908da61d3d d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Lock "689c0f7d-997a-4352-bbea-ecb615ccea8f-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:10:03 np0005548731 nova_compute[232433]: 2025-12-06 08:10:03.026 232437 DEBUG oslo_concurrency.lockutils [None req-4bc83001-bd4b-4c5b-8dd8-2b908da61d3d d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Lock "689c0f7d-997a-4352-bbea-ecb615ccea8f-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:10:03 np0005548731 nova_compute[232433]: 2025-12-06 08:10:03.027 232437 DEBUG nova.virt.libvirt.vif [None req-4bc83001-bd4b-4c5b-8dd8-2b908da61d3d d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T08:09:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1926666901',display_name='tempest-TestNetworkBasicOps-server-1926666901',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1926666901',id=196,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLXJcX9S5tuhjgSccvh5O+jGO7J2gXtm8NpJVHisgACqOBHiGEwJ/JKoT3Q7sz6/mR3QAGntLQRx1IJyWZChJGINMA5wcNi4+wHSbP5236n2Kq91E+5QKMKMUofRtNaQtg==',key_name='tempest-TestNetworkBasicOps-962264356',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f4735a799c84437b9dd4ea8778ad2fbb',ramdisk_id='',reservation_id='r-q0pax7ur',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1435471576',owner_user_name='tempest-TestNetworkBasicOps-1435471576-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T08:09:57Z,user_data=None,user_id='d5359905348247d0b9b5b95982e890bb',uuid=689c0f7d-997a-4352-bbea-ecb615ccea8f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "bd46dc09-5d50-46df-8d43-1b7dca95407f", "address": "fa:16:3e:b4:d3:2e", "network": {"id": "2f82da1e-2277-4375-8c9e-32c60bb0f99c", "bridge": "br-int", "label": "tempest-network-smoke--1695874872", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.218", "type": "floating", "version": 4, "meta": 
{}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f4735a799c84437b9dd4ea8778ad2fbb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbd46dc09-5d", "ovs_interfaceid": "bd46dc09-5d50-46df-8d43-1b7dca95407f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec  6 03:10:03 np0005548731 nova_compute[232433]: 2025-12-06 08:10:03.027 232437 DEBUG nova.network.os_vif_util [None req-4bc83001-bd4b-4c5b-8dd8-2b908da61d3d d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Converting VIF {"id": "bd46dc09-5d50-46df-8d43-1b7dca95407f", "address": "fa:16:3e:b4:d3:2e", "network": {"id": "2f82da1e-2277-4375-8c9e-32c60bb0f99c", "bridge": "br-int", "label": "tempest-network-smoke--1695874872", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.218", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f4735a799c84437b9dd4ea8778ad2fbb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbd46dc09-5d", "ovs_interfaceid": "bd46dc09-5d50-46df-8d43-1b7dca95407f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 03:10:03 np0005548731 nova_compute[232433]: 2025-12-06 08:10:03.028 232437 DEBUG nova.network.os_vif_util [None req-4bc83001-bd4b-4c5b-8dd8-2b908da61d3d d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b4:d3:2e,bridge_name='br-int',has_traffic_filtering=True,id=bd46dc09-5d50-46df-8d43-1b7dca95407f,network=Network(2f82da1e-2277-4375-8c9e-32c60bb0f99c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapbd46dc09-5d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 03:10:03 np0005548731 nova_compute[232433]: 2025-12-06 08:10:03.028 232437 DEBUG os_vif [None req-4bc83001-bd4b-4c5b-8dd8-2b908da61d3d d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b4:d3:2e,bridge_name='br-int',has_traffic_filtering=True,id=bd46dc09-5d50-46df-8d43-1b7dca95407f,network=Network(2f82da1e-2277-4375-8c9e-32c60bb0f99c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapbd46dc09-5d') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec  6 03:10:03 np0005548731 nova_compute[232433]: 2025-12-06 08:10:03.029 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:10:03 np0005548731 nova_compute[232433]: 2025-12-06 08:10:03.029 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 03:10:03 np0005548731 nova_compute[232433]: 2025-12-06 08:10:03.030 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  6 03:10:03 np0005548731 nova_compute[232433]: 2025-12-06 08:10:03.033 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:10:03 np0005548731 nova_compute[232433]: 2025-12-06 08:10:03.033 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapbd46dc09-5d, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 03:10:03 np0005548731 nova_compute[232433]: 2025-12-06 08:10:03.034 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapbd46dc09-5d, col_values=(('external_ids', {'iface-id': 'bd46dc09-5d50-46df-8d43-1b7dca95407f', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:b4:d3:2e', 'vm-uuid': '689c0f7d-997a-4352-bbea-ecb615ccea8f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 03:10:03 np0005548731 nova_compute[232433]: 2025-12-06 08:10:03.036 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:10:03 np0005548731 NetworkManager[49182]: <info>  [1765008603.0375] manager: (tapbd46dc09-5d): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/461)
Dec  6 03:10:03 np0005548731 nova_compute[232433]: 2025-12-06 08:10:03.039 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  6 03:10:03 np0005548731 nova_compute[232433]: 2025-12-06 08:10:03.042 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:10:03 np0005548731 nova_compute[232433]: 2025-12-06 08:10:03.043 232437 INFO os_vif [None req-4bc83001-bd4b-4c5b-8dd8-2b908da61d3d d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b4:d3:2e,bridge_name='br-int',has_traffic_filtering=True,id=bd46dc09-5d50-46df-8d43-1b7dca95407f,network=Network(2f82da1e-2277-4375-8c9e-32c60bb0f99c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapbd46dc09-5d')#033[00m
Dec  6 03:10:03 np0005548731 nova_compute[232433]: 2025-12-06 08:10:03.091 232437 DEBUG nova.virt.libvirt.driver [None req-4bc83001-bd4b-4c5b-8dd8-2b908da61d3d d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  6 03:10:03 np0005548731 nova_compute[232433]: 2025-12-06 08:10:03.092 232437 DEBUG nova.virt.libvirt.driver [None req-4bc83001-bd4b-4c5b-8dd8-2b908da61d3d d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  6 03:10:03 np0005548731 nova_compute[232433]: 2025-12-06 08:10:03.092 232437 DEBUG nova.virt.libvirt.driver [None req-4bc83001-bd4b-4c5b-8dd8-2b908da61d3d d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] No VIF found with MAC fa:16:3e:b4:d3:2e, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec  6 03:10:03 np0005548731 nova_compute[232433]: 2025-12-06 08:10:03.093 232437 INFO nova.virt.libvirt.driver [None req-4bc83001-bd4b-4c5b-8dd8-2b908da61d3d d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 689c0f7d-997a-4352-bbea-ecb615ccea8f] Using config drive#033[00m
Dec  6 03:10:03 np0005548731 nova_compute[232433]: 2025-12-06 08:10:03.120 232437 DEBUG nova.storage.rbd_utils [None req-4bc83001-bd4b-4c5b-8dd8-2b908da61d3d d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] rbd image 689c0f7d-997a-4352-bbea-ecb615ccea8f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 03:10:03 np0005548731 nova_compute[232433]: 2025-12-06 08:10:03.755 232437 INFO nova.virt.libvirt.driver [None req-4bc83001-bd4b-4c5b-8dd8-2b908da61d3d d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 689c0f7d-997a-4352-bbea-ecb615ccea8f] Creating config drive at /var/lib/nova/instances/689c0f7d-997a-4352-bbea-ecb615ccea8f/disk.config#033[00m
Dec  6 03:10:03 np0005548731 nova_compute[232433]: 2025-12-06 08:10:03.760 232437 DEBUG oslo_concurrency.processutils [None req-4bc83001-bd4b-4c5b-8dd8-2b908da61d3d d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/689c0f7d-997a-4352-bbea-ecb615ccea8f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp8zxus1ky execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 03:10:03 np0005548731 nova_compute[232433]: 2025-12-06 08:10:03.787 232437 DEBUG nova.network.neutron [req-bc8aefae-ede6-483e-bd2e-65382afd6310 req-1d93ee20-1bfa-494c-89a3-de46736e57f7 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 689c0f7d-997a-4352-bbea-ecb615ccea8f] Updated VIF entry in instance network info cache for port bd46dc09-5d50-46df-8d43-1b7dca95407f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  6 03:10:03 np0005548731 nova_compute[232433]: 2025-12-06 08:10:03.788 232437 DEBUG nova.network.neutron [req-bc8aefae-ede6-483e-bd2e-65382afd6310 req-1d93ee20-1bfa-494c-89a3-de46736e57f7 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 689c0f7d-997a-4352-bbea-ecb615ccea8f] Updating instance_info_cache with network_info: [{"id": "bd46dc09-5d50-46df-8d43-1b7dca95407f", "address": "fa:16:3e:b4:d3:2e", "network": {"id": "2f82da1e-2277-4375-8c9e-32c60bb0f99c", "bridge": "br-int", "label": "tempest-network-smoke--1695874872", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.218", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f4735a799c84437b9dd4ea8778ad2fbb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbd46dc09-5d", "ovs_interfaceid": "bd46dc09-5d50-46df-8d43-1b7dca95407f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 03:10:03 np0005548731 nova_compute[232433]: 2025-12-06 08:10:03.803 232437 DEBUG oslo_concurrency.lockutils [req-bc8aefae-ede6-483e-bd2e-65382afd6310 req-1d93ee20-1bfa-494c-89a3-de46736e57f7 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-689c0f7d-997a-4352-bbea-ecb615ccea8f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 03:10:03 np0005548731 nova_compute[232433]: 2025-12-06 08:10:03.894 232437 DEBUG oslo_concurrency.processutils [None req-4bc83001-bd4b-4c5b-8dd8-2b908da61d3d d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/689c0f7d-997a-4352-bbea-ecb615ccea8f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp8zxus1ky" returned: 0 in 0.134s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 03:10:03 np0005548731 nova_compute[232433]: 2025-12-06 08:10:03.931 232437 DEBUG nova.storage.rbd_utils [None req-4bc83001-bd4b-4c5b-8dd8-2b908da61d3d d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] rbd image 689c0f7d-997a-4352-bbea-ecb615ccea8f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 03:10:03 np0005548731 nova_compute[232433]: 2025-12-06 08:10:03.935 232437 DEBUG oslo_concurrency.processutils [None req-4bc83001-bd4b-4c5b-8dd8-2b908da61d3d d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/689c0f7d-997a-4352-bbea-ecb615ccea8f/disk.config 689c0f7d-997a-4352-bbea-ecb615ccea8f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 03:10:04 np0005548731 nova_compute[232433]: 2025-12-06 08:10:04.103 232437 DEBUG oslo_concurrency.processutils [None req-4bc83001-bd4b-4c5b-8dd8-2b908da61d3d d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/689c0f7d-997a-4352-bbea-ecb615ccea8f/disk.config 689c0f7d-997a-4352-bbea-ecb615ccea8f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.168s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 03:10:04 np0005548731 nova_compute[232433]: 2025-12-06 08:10:04.104 232437 INFO nova.virt.libvirt.driver [None req-4bc83001-bd4b-4c5b-8dd8-2b908da61d3d d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 689c0f7d-997a-4352-bbea-ecb615ccea8f] Deleting local config drive /var/lib/nova/instances/689c0f7d-997a-4352-bbea-ecb615ccea8f/disk.config because it was imported into RBD.#033[00m
Dec  6 03:10:04 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:10:04 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:10:04 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:10:04.148 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:10:04 np0005548731 kernel: tapbd46dc09-5d: entered promiscuous mode
Dec  6 03:10:04 np0005548731 NetworkManager[49182]: <info>  [1765008604.1586] manager: (tapbd46dc09-5d): new Tun device (/org/freedesktop/NetworkManager/Devices/462)
Dec  6 03:10:04 np0005548731 nova_compute[232433]: 2025-12-06 08:10:04.159 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:10:04 np0005548731 ovn_controller[133927]: 2025-12-06T08:10:04Z|00994|binding|INFO|Claiming lport bd46dc09-5d50-46df-8d43-1b7dca95407f for this chassis.
Dec  6 03:10:04 np0005548731 ovn_controller[133927]: 2025-12-06T08:10:04Z|00995|binding|INFO|bd46dc09-5d50-46df-8d43-1b7dca95407f: Claiming fa:16:3e:b4:d3:2e 10.100.0.10
Dec  6 03:10:04 np0005548731 nova_compute[232433]: 2025-12-06 08:10:04.170 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:10:04 np0005548731 nova_compute[232433]: 2025-12-06 08:10:04.175 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:10:04 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:10:04.177 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b4:d3:2e 10.100.0.10'], port_security=['fa:16:3e:b4:d3:2e 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-TestNetworkBasicOps-1970606987', 'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '689c0f7d-997a-4352-bbea-ecb615ccea8f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2f82da1e-2277-4375-8c9e-32c60bb0f99c', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-TestNetworkBasicOps-1970606987', 'neutron:project_id': 'f4735a799c84437b9dd4ea8778ad2fbb', 'neutron:revision_number': '7', 'neutron:security_group_ids': '7c0486dd-d5db-4e6a-a51e-94ac8eaed290', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.218'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=32cba6b1-9214-48c9-b7f6-652cc6503c7e, chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], logical_port=bd46dc09-5d50-46df-8d43-1b7dca95407f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 03:10:04 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:10:04.178 143965 INFO neutron.agent.ovn.metadata.agent [-] Port bd46dc09-5d50-46df-8d43-1b7dca95407f in datapath 2f82da1e-2277-4375-8c9e-32c60bb0f99c bound to our chassis#033[00m
Dec  6 03:10:04 np0005548731 NetworkManager[49182]: <info>  [1765008604.1790] manager: (patch-br-int-to-provnet-9e78c1a1-68f4-477a-abaa-13a98bde06e5): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/463)
Dec  6 03:10:04 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:10:04.179 143965 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 2f82da1e-2277-4375-8c9e-32c60bb0f99c#033[00m
Dec  6 03:10:04 np0005548731 NetworkManager[49182]: <info>  [1765008604.1819] manager: (patch-provnet-9e78c1a1-68f4-477a-abaa-13a98bde06e5-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/464)
Dec  6 03:10:04 np0005548731 systemd-udevd[330823]: Network interface NamePolicy= disabled on kernel command line.
Dec  6 03:10:04 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:10:04.192 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[5d6d8f55-76ed-4634-88e7-ea2274b5b9f7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:10:04 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:10:04.193 143965 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap2f82da1e-21 in ovnmeta-2f82da1e-2277-4375-8c9e-32c60bb0f99c namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Dec  6 03:10:04 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:10:04.195 236668 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap2f82da1e-20 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Dec  6 03:10:04 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:10:04.195 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[1e408ca6-890a-4196-b6d1-71a5498056a4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:10:04 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:10:04.196 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[cd1638fd-9187-4700-924c-87266c08a541]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:10:04 np0005548731 systemd-machined[195355]: New machine qemu-100-instance-000000c4.
Dec  6 03:10:04 np0005548731 NetworkManager[49182]: <info>  [1765008604.2033] device (tapbd46dc09-5d): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec  6 03:10:04 np0005548731 NetworkManager[49182]: <info>  [1765008604.2041] device (tapbd46dc09-5d): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec  6 03:10:04 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:10:04.212 144079 DEBUG oslo.privsep.daemon [-] privsep: reply[f9f5b000-9851-41a6-b3b3-38dbf61df9fd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:10:04 np0005548731 systemd[1]: Started Virtual Machine qemu-100-instance-000000c4.
Dec  6 03:10:04 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:10:04.243 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[8c15cd87-4228-4e5e-8da2-2b342323637c]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:10:04 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:10:04.276 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[369689ed-75a2-4d71-852f-bb990f92b4ab]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:10:04 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:10:04.286 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[3f8f540d-5c15-4abb-94fa-f356a075b83a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:10:04 np0005548731 NetworkManager[49182]: <info>  [1765008604.2873] manager: (tap2f82da1e-20): new Veth device (/org/freedesktop/NetworkManager/Devices/465)
Dec  6 03:10:04 np0005548731 systemd-udevd[330827]: Network interface NamePolicy= disabled on kernel command line.
Dec  6 03:10:04 np0005548731 nova_compute[232433]: 2025-12-06 08:10:04.364 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:10:04 np0005548731 nova_compute[232433]: 2025-12-06 08:10:04.376 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:10:04 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:10:04.383 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[684eb27b-b7ac-4c31-8b05-7bf1f801f816]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:10:04 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:10:04.386 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[7856510d-7dd7-4d3b-a9fc-610851d6e5c8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:10:04 np0005548731 ovn_controller[133927]: 2025-12-06T08:10:04Z|00996|binding|INFO|Setting lport bd46dc09-5d50-46df-8d43-1b7dca95407f ovn-installed in OVS
Dec  6 03:10:04 np0005548731 ovn_controller[133927]: 2025-12-06T08:10:04Z|00997|binding|INFO|Setting lport bd46dc09-5d50-46df-8d43-1b7dca95407f up in Southbound
Dec  6 03:10:04 np0005548731 nova_compute[232433]: 2025-12-06 08:10:04.388 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:10:04 np0005548731 NetworkManager[49182]: <info>  [1765008604.4143] device (tap2f82da1e-20): carrier: link connected
Dec  6 03:10:04 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:10:04.423 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[89a09bc4-9c2c-48ce-bf21-3c92de1e9b81]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:10:04 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:10:04.440 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[e2de8a95-0758-4edb-a5db-48233100f97c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2f82da1e-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:3e:c6:6c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 302], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 886642, 'reachable_time': 34900, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 330857, 'error': None, 'target': 'ovnmeta-2f82da1e-2277-4375-8c9e-32c60bb0f99c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:10:04 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:10:04.456 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[1601cbf8-8786-44a5-8ac9-65941b508546]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe3e:c66c'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 886642, 'tstamp': 886642}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 330858, 'error': None, 'target': 'ovnmeta-2f82da1e-2277-4375-8c9e-32c60bb0f99c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:10:04 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:10:04.473 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[ca79aefc-282f-426a-8def-c8e28ac9f07c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2f82da1e-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:3e:c6:6c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 302], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 886642, 'reachable_time': 34900, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 330859, 'error': None, 'target': 'ovnmeta-2f82da1e-2277-4375-8c9e-32c60bb0f99c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:10:04 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:10:04.501 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[d3a0c8e6-5bbb-4ae6-981f-66be499b54cf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:10:04 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:10:04 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:10:04 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:10:04.507 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:10:04 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:10:04.555 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[26fbb639-0307-48d5-b2a7-e1316def2c74]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:10:04 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:10:04.556 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2f82da1e-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 03:10:04 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:10:04.557 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  6 03:10:04 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:10:04.557 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2f82da1e-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 03:10:04 np0005548731 kernel: tap2f82da1e-20: entered promiscuous mode
Dec  6 03:10:04 np0005548731 nova_compute[232433]: 2025-12-06 08:10:04.559 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:10:04 np0005548731 nova_compute[232433]: 2025-12-06 08:10:04.561 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:10:04 np0005548731 NetworkManager[49182]: <info>  [1765008604.5621] manager: (tap2f82da1e-20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/466)
Dec  6 03:10:04 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:10:04.565 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap2f82da1e-20, col_values=(('external_ids', {'iface-id': '23e37ebf-5927-4427-bb7f-41deb4ee9462'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 03:10:04 np0005548731 nova_compute[232433]: 2025-12-06 08:10:04.567 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:10:04 np0005548731 ovn_controller[133927]: 2025-12-06T08:10:04Z|00998|binding|INFO|Releasing lport 23e37ebf-5927-4427-bb7f-41deb4ee9462 from this chassis (sb_readonly=0)
Dec  6 03:10:04 np0005548731 nova_compute[232433]: 2025-12-06 08:10:04.567 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:10:04 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:10:04.568 143965 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/2f82da1e-2277-4375-8c9e-32c60bb0f99c.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/2f82da1e-2277-4375-8c9e-32c60bb0f99c.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Dec  6 03:10:04 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:10:04.569 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[46292f51-87a0-4cd6-a4e0-378c67e24b1c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:10:04 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:10:04.569 143965 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec  6 03:10:04 np0005548731 ovn_metadata_agent[143960]: global
Dec  6 03:10:04 np0005548731 ovn_metadata_agent[143960]:    log         /dev/log local0 debug
Dec  6 03:10:04 np0005548731 ovn_metadata_agent[143960]:    log-tag     haproxy-metadata-proxy-2f82da1e-2277-4375-8c9e-32c60bb0f99c
Dec  6 03:10:04 np0005548731 ovn_metadata_agent[143960]:    user        root
Dec  6 03:10:04 np0005548731 ovn_metadata_agent[143960]:    group       root
Dec  6 03:10:04 np0005548731 ovn_metadata_agent[143960]:    maxconn     1024
Dec  6 03:10:04 np0005548731 ovn_metadata_agent[143960]:    pidfile     /var/lib/neutron/external/pids/2f82da1e-2277-4375-8c9e-32c60bb0f99c.pid.haproxy
Dec  6 03:10:04 np0005548731 ovn_metadata_agent[143960]:    daemon
Dec  6 03:10:04 np0005548731 ovn_metadata_agent[143960]: 
Dec  6 03:10:04 np0005548731 ovn_metadata_agent[143960]: defaults
Dec  6 03:10:04 np0005548731 ovn_metadata_agent[143960]:    log global
Dec  6 03:10:04 np0005548731 ovn_metadata_agent[143960]:    mode http
Dec  6 03:10:04 np0005548731 ovn_metadata_agent[143960]:    option httplog
Dec  6 03:10:04 np0005548731 ovn_metadata_agent[143960]:    option dontlognull
Dec  6 03:10:04 np0005548731 ovn_metadata_agent[143960]:    option http-server-close
Dec  6 03:10:04 np0005548731 ovn_metadata_agent[143960]:    option forwardfor
Dec  6 03:10:04 np0005548731 ovn_metadata_agent[143960]:    retries                 3
Dec  6 03:10:04 np0005548731 ovn_metadata_agent[143960]:    timeout http-request    30s
Dec  6 03:10:04 np0005548731 ovn_metadata_agent[143960]:    timeout connect         30s
Dec  6 03:10:04 np0005548731 ovn_metadata_agent[143960]:    timeout client          32s
Dec  6 03:10:04 np0005548731 ovn_metadata_agent[143960]:    timeout server          32s
Dec  6 03:10:04 np0005548731 ovn_metadata_agent[143960]:    timeout http-keep-alive 30s
Dec  6 03:10:04 np0005548731 ovn_metadata_agent[143960]: 
Dec  6 03:10:04 np0005548731 ovn_metadata_agent[143960]: 
Dec  6 03:10:04 np0005548731 ovn_metadata_agent[143960]: listen listener
Dec  6 03:10:04 np0005548731 ovn_metadata_agent[143960]:    bind 169.254.169.254:80
Dec  6 03:10:04 np0005548731 ovn_metadata_agent[143960]:    server metadata /var/lib/neutron/metadata_proxy
Dec  6 03:10:04 np0005548731 ovn_metadata_agent[143960]:    http-request add-header X-OVN-Network-ID 2f82da1e-2277-4375-8c9e-32c60bb0f99c
Dec  6 03:10:04 np0005548731 ovn_metadata_agent[143960]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Dec  6 03:10:04 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:10:04.570 143965 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-2f82da1e-2277-4375-8c9e-32c60bb0f99c', 'env', 'PROCESS_TAG=haproxy-2f82da1e-2277-4375-8c9e-32c60bb0f99c', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/2f82da1e-2277-4375-8c9e-32c60bb0f99c.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Dec  6 03:10:04 np0005548731 nova_compute[232433]: 2025-12-06 08:10:04.580 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:10:04 np0005548731 nova_compute[232433]: 2025-12-06 08:10:04.729 232437 DEBUG nova.compute.manager [req-c98aeb6c-23c0-4151-acc0-c4052b61a71c req-99b5d512-2850-46d4-9938-8dab1057bc75 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 689c0f7d-997a-4352-bbea-ecb615ccea8f] Received event network-vif-plugged-bd46dc09-5d50-46df-8d43-1b7dca95407f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 03:10:04 np0005548731 nova_compute[232433]: 2025-12-06 08:10:04.735 232437 DEBUG oslo_concurrency.lockutils [req-c98aeb6c-23c0-4151-acc0-c4052b61a71c req-99b5d512-2850-46d4-9938-8dab1057bc75 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "689c0f7d-997a-4352-bbea-ecb615ccea8f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:10:04 np0005548731 nova_compute[232433]: 2025-12-06 08:10:04.735 232437 DEBUG oslo_concurrency.lockutils [req-c98aeb6c-23c0-4151-acc0-c4052b61a71c req-99b5d512-2850-46d4-9938-8dab1057bc75 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "689c0f7d-997a-4352-bbea-ecb615ccea8f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:10:04 np0005548731 nova_compute[232433]: 2025-12-06 08:10:04.736 232437 DEBUG oslo_concurrency.lockutils [req-c98aeb6c-23c0-4151-acc0-c4052b61a71c req-99b5d512-2850-46d4-9938-8dab1057bc75 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "689c0f7d-997a-4352-bbea-ecb615ccea8f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:10:04 np0005548731 nova_compute[232433]: 2025-12-06 08:10:04.736 232437 DEBUG nova.compute.manager [req-c98aeb6c-23c0-4151-acc0-c4052b61a71c req-99b5d512-2850-46d4-9938-8dab1057bc75 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 689c0f7d-997a-4352-bbea-ecb615ccea8f] Processing event network-vif-plugged-bd46dc09-5d50-46df-8d43-1b7dca95407f _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Dec  6 03:10:04 np0005548731 nova_compute[232433]: 2025-12-06 08:10:04.845 232437 DEBUG nova.virt.driver [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Emitting event <LifecycleEvent: 1765008604.8446372, 689c0f7d-997a-4352-bbea-ecb615ccea8f => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 03:10:04 np0005548731 nova_compute[232433]: 2025-12-06 08:10:04.846 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 689c0f7d-997a-4352-bbea-ecb615ccea8f] VM Started (Lifecycle Event)#033[00m
Dec  6 03:10:04 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e413 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:10:04 np0005548731 nova_compute[232433]: 2025-12-06 08:10:04.848 232437 DEBUG nova.compute.manager [None req-4bc83001-bd4b-4c5b-8dd8-2b908da61d3d d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 689c0f7d-997a-4352-bbea-ecb615ccea8f] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec  6 03:10:04 np0005548731 nova_compute[232433]: 2025-12-06 08:10:04.851 232437 DEBUG nova.virt.libvirt.driver [None req-4bc83001-bd4b-4c5b-8dd8-2b908da61d3d d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 689c0f7d-997a-4352-bbea-ecb615ccea8f] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Dec  6 03:10:04 np0005548731 nova_compute[232433]: 2025-12-06 08:10:04.854 232437 INFO nova.virt.libvirt.driver [-] [instance: 689c0f7d-997a-4352-bbea-ecb615ccea8f] Instance spawned successfully.#033[00m
Dec  6 03:10:04 np0005548731 nova_compute[232433]: 2025-12-06 08:10:04.855 232437 DEBUG nova.virt.libvirt.driver [None req-4bc83001-bd4b-4c5b-8dd8-2b908da61d3d d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 689c0f7d-997a-4352-bbea-ecb615ccea8f] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Dec  6 03:10:04 np0005548731 nova_compute[232433]: 2025-12-06 08:10:04.872 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 689c0f7d-997a-4352-bbea-ecb615ccea8f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 03:10:04 np0005548731 nova_compute[232433]: 2025-12-06 08:10:04.875 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 689c0f7d-997a-4352-bbea-ecb615ccea8f] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  6 03:10:04 np0005548731 nova_compute[232433]: 2025-12-06 08:10:04.883 232437 DEBUG nova.virt.libvirt.driver [None req-4bc83001-bd4b-4c5b-8dd8-2b908da61d3d d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 689c0f7d-997a-4352-bbea-ecb615ccea8f] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 03:10:04 np0005548731 nova_compute[232433]: 2025-12-06 08:10:04.884 232437 DEBUG nova.virt.libvirt.driver [None req-4bc83001-bd4b-4c5b-8dd8-2b908da61d3d d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 689c0f7d-997a-4352-bbea-ecb615ccea8f] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 03:10:04 np0005548731 nova_compute[232433]: 2025-12-06 08:10:04.884 232437 DEBUG nova.virt.libvirt.driver [None req-4bc83001-bd4b-4c5b-8dd8-2b908da61d3d d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 689c0f7d-997a-4352-bbea-ecb615ccea8f] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 03:10:04 np0005548731 nova_compute[232433]: 2025-12-06 08:10:04.885 232437 DEBUG nova.virt.libvirt.driver [None req-4bc83001-bd4b-4c5b-8dd8-2b908da61d3d d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 689c0f7d-997a-4352-bbea-ecb615ccea8f] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 03:10:04 np0005548731 nova_compute[232433]: 2025-12-06 08:10:04.885 232437 DEBUG nova.virt.libvirt.driver [None req-4bc83001-bd4b-4c5b-8dd8-2b908da61d3d d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 689c0f7d-997a-4352-bbea-ecb615ccea8f] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 03:10:04 np0005548731 nova_compute[232433]: 2025-12-06 08:10:04.886 232437 DEBUG nova.virt.libvirt.driver [None req-4bc83001-bd4b-4c5b-8dd8-2b908da61d3d d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 689c0f7d-997a-4352-bbea-ecb615ccea8f] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 03:10:04 np0005548731 nova_compute[232433]: 2025-12-06 08:10:04.892 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 689c0f7d-997a-4352-bbea-ecb615ccea8f] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  6 03:10:04 np0005548731 nova_compute[232433]: 2025-12-06 08:10:04.892 232437 DEBUG nova.virt.driver [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Emitting event <LifecycleEvent: 1765008604.8455365, 689c0f7d-997a-4352-bbea-ecb615ccea8f => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 03:10:04 np0005548731 nova_compute[232433]: 2025-12-06 08:10:04.893 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 689c0f7d-997a-4352-bbea-ecb615ccea8f] VM Paused (Lifecycle Event)#033[00m
Dec  6 03:10:04 np0005548731 nova_compute[232433]: 2025-12-06 08:10:04.930 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 689c0f7d-997a-4352-bbea-ecb615ccea8f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 03:10:04 np0005548731 nova_compute[232433]: 2025-12-06 08:10:04.935 232437 DEBUG nova.virt.driver [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Emitting event <LifecycleEvent: 1765008604.8507838, 689c0f7d-997a-4352-bbea-ecb615ccea8f => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 03:10:04 np0005548731 nova_compute[232433]: 2025-12-06 08:10:04.936 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 689c0f7d-997a-4352-bbea-ecb615ccea8f] VM Resumed (Lifecycle Event)#033[00m
Dec  6 03:10:04 np0005548731 podman[330933]: 2025-12-06 08:10:04.950706219 +0000 UTC m=+0.052636563 container create 1ebe70832c7566fddaf730bc1858d126d9e6c0b5e17385cea31c028b3ef36c6a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2f82da1e-2277-4375-8c9e-32c60bb0f99c, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec  6 03:10:04 np0005548731 nova_compute[232433]: 2025-12-06 08:10:04.974 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 689c0f7d-997a-4352-bbea-ecb615ccea8f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 03:10:04 np0005548731 nova_compute[232433]: 2025-12-06 08:10:04.982 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 689c0f7d-997a-4352-bbea-ecb615ccea8f] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  6 03:10:04 np0005548731 systemd[1]: Started libpod-conmon-1ebe70832c7566fddaf730bc1858d126d9e6c0b5e17385cea31c028b3ef36c6a.scope.
Dec  6 03:10:05 np0005548731 nova_compute[232433]: 2025-12-06 08:10:05.001 232437 INFO nova.compute.manager [None req-4bc83001-bd4b-4c5b-8dd8-2b908da61d3d d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 689c0f7d-997a-4352-bbea-ecb615ccea8f] Took 7.85 seconds to spawn the instance on the hypervisor.#033[00m
Dec  6 03:10:05 np0005548731 nova_compute[232433]: 2025-12-06 08:10:05.002 232437 DEBUG nova.compute.manager [None req-4bc83001-bd4b-4c5b-8dd8-2b908da61d3d d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 689c0f7d-997a-4352-bbea-ecb615ccea8f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 03:10:05 np0005548731 systemd[1]: Started libcrun container.
Dec  6 03:10:05 np0005548731 nova_compute[232433]: 2025-12-06 08:10:05.019 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 689c0f7d-997a-4352-bbea-ecb615ccea8f] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  6 03:10:05 np0005548731 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/60b28914efb47b1f98b8837d8551ee30f1a31e490f821ed8a9069785da27b066/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec  6 03:10:05 np0005548731 podman[330933]: 2025-12-06 08:10:04.92406189 +0000 UTC m=+0.025992254 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec  6 03:10:05 np0005548731 podman[330933]: 2025-12-06 08:10:05.033396433 +0000 UTC m=+0.135326807 container init 1ebe70832c7566fddaf730bc1858d126d9e6c0b5e17385cea31c028b3ef36c6a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2f82da1e-2277-4375-8c9e-32c60bb0f99c, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  6 03:10:05 np0005548731 podman[330933]: 2025-12-06 08:10:05.0390276 +0000 UTC m=+0.140957944 container start 1ebe70832c7566fddaf730bc1858d126d9e6c0b5e17385cea31c028b3ef36c6a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2f82da1e-2277-4375-8c9e-32c60bb0f99c, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, tcib_managed=true)
Dec  6 03:10:05 np0005548731 neutron-haproxy-ovnmeta-2f82da1e-2277-4375-8c9e-32c60bb0f99c[330948]: [NOTICE]   (330952) : New worker (330954) forked
Dec  6 03:10:05 np0005548731 neutron-haproxy-ovnmeta-2f82da1e-2277-4375-8c9e-32c60bb0f99c[330948]: [NOTICE]   (330952) : Loading success.
Dec  6 03:10:05 np0005548731 nova_compute[232433]: 2025-12-06 08:10:05.082 232437 INFO nova.compute.manager [None req-4bc83001-bd4b-4c5b-8dd8-2b908da61d3d d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 689c0f7d-997a-4352-bbea-ecb615ccea8f] Took 9.16 seconds to build instance.#033[00m
Dec  6 03:10:05 np0005548731 nova_compute[232433]: 2025-12-06 08:10:05.101 232437 DEBUG oslo_concurrency.lockutils [None req-4bc83001-bd4b-4c5b-8dd8-2b908da61d3d d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Lock "689c0f7d-997a-4352-bbea-ecb615ccea8f" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.258s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:10:06 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:10:06 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:10:06 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:10:06.150 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:10:06 np0005548731 nova_compute[232433]: 2025-12-06 08:10:06.339 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:10:06 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:10:06 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:10:06 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:10:06.509 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:10:06 np0005548731 nova_compute[232433]: 2025-12-06 08:10:06.850 232437 DEBUG nova.compute.manager [req-da950768-b287-4f2d-8a2d-0e2b4d39f5e5 req-f3b4dc2a-82b8-488d-820a-1f6e96b19cd1 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 689c0f7d-997a-4352-bbea-ecb615ccea8f] Received event network-vif-plugged-bd46dc09-5d50-46df-8d43-1b7dca95407f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 03:10:06 np0005548731 nova_compute[232433]: 2025-12-06 08:10:06.850 232437 DEBUG oslo_concurrency.lockutils [req-da950768-b287-4f2d-8a2d-0e2b4d39f5e5 req-f3b4dc2a-82b8-488d-820a-1f6e96b19cd1 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "689c0f7d-997a-4352-bbea-ecb615ccea8f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:10:06 np0005548731 nova_compute[232433]: 2025-12-06 08:10:06.851 232437 DEBUG oslo_concurrency.lockutils [req-da950768-b287-4f2d-8a2d-0e2b4d39f5e5 req-f3b4dc2a-82b8-488d-820a-1f6e96b19cd1 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "689c0f7d-997a-4352-bbea-ecb615ccea8f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:10:06 np0005548731 nova_compute[232433]: 2025-12-06 08:10:06.851 232437 DEBUG oslo_concurrency.lockutils [req-da950768-b287-4f2d-8a2d-0e2b4d39f5e5 req-f3b4dc2a-82b8-488d-820a-1f6e96b19cd1 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "689c0f7d-997a-4352-bbea-ecb615ccea8f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:10:06 np0005548731 nova_compute[232433]: 2025-12-06 08:10:06.852 232437 DEBUG nova.compute.manager [req-da950768-b287-4f2d-8a2d-0e2b4d39f5e5 req-f3b4dc2a-82b8-488d-820a-1f6e96b19cd1 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 689c0f7d-997a-4352-bbea-ecb615ccea8f] No waiting events found dispatching network-vif-plugged-bd46dc09-5d50-46df-8d43-1b7dca95407f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 03:10:06 np0005548731 nova_compute[232433]: 2025-12-06 08:10:06.852 232437 WARNING nova.compute.manager [req-da950768-b287-4f2d-8a2d-0e2b4d39f5e5 req-f3b4dc2a-82b8-488d-820a-1f6e96b19cd1 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 689c0f7d-997a-4352-bbea-ecb615ccea8f] Received unexpected event network-vif-plugged-bd46dc09-5d50-46df-8d43-1b7dca95407f for instance with vm_state active and task_state None.#033[00m
Dec  6 03:10:07 np0005548731 nova_compute[232433]: 2025-12-06 08:10:07.100 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:10:07 np0005548731 nova_compute[232433]: 2025-12-06 08:10:07.143 232437 DEBUG oslo_concurrency.lockutils [None req-4c3685db-a049-4203-9390-96590d4e5940 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Acquiring lock "689c0f7d-997a-4352-bbea-ecb615ccea8f" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:10:07 np0005548731 nova_compute[232433]: 2025-12-06 08:10:07.144 232437 DEBUG oslo_concurrency.lockutils [None req-4c3685db-a049-4203-9390-96590d4e5940 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Lock "689c0f7d-997a-4352-bbea-ecb615ccea8f" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:10:07 np0005548731 nova_compute[232433]: 2025-12-06 08:10:07.144 232437 DEBUG oslo_concurrency.lockutils [None req-4c3685db-a049-4203-9390-96590d4e5940 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Acquiring lock "689c0f7d-997a-4352-bbea-ecb615ccea8f-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:10:07 np0005548731 nova_compute[232433]: 2025-12-06 08:10:07.144 232437 DEBUG oslo_concurrency.lockutils [None req-4c3685db-a049-4203-9390-96590d4e5940 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Lock "689c0f7d-997a-4352-bbea-ecb615ccea8f-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:10:07 np0005548731 nova_compute[232433]: 2025-12-06 08:10:07.145 232437 DEBUG oslo_concurrency.lockutils [None req-4c3685db-a049-4203-9390-96590d4e5940 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Lock "689c0f7d-997a-4352-bbea-ecb615ccea8f-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:10:07 np0005548731 nova_compute[232433]: 2025-12-06 08:10:07.146 232437 INFO nova.compute.manager [None req-4c3685db-a049-4203-9390-96590d4e5940 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 689c0f7d-997a-4352-bbea-ecb615ccea8f] Terminating instance#033[00m
Dec  6 03:10:07 np0005548731 nova_compute[232433]: 2025-12-06 08:10:07.147 232437 DEBUG nova.compute.manager [None req-4c3685db-a049-4203-9390-96590d4e5940 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 689c0f7d-997a-4352-bbea-ecb615ccea8f] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec  6 03:10:07 np0005548731 kernel: tapbd46dc09-5d (unregistering): left promiscuous mode
Dec  6 03:10:07 np0005548731 NetworkManager[49182]: <info>  [1765008607.1877] device (tapbd46dc09-5d): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec  6 03:10:07 np0005548731 ovn_controller[133927]: 2025-12-06T08:10:07Z|00999|binding|INFO|Releasing lport bd46dc09-5d50-46df-8d43-1b7dca95407f from this chassis (sb_readonly=0)
Dec  6 03:10:07 np0005548731 ovn_controller[133927]: 2025-12-06T08:10:07Z|01000|binding|INFO|Setting lport bd46dc09-5d50-46df-8d43-1b7dca95407f down in Southbound
Dec  6 03:10:07 np0005548731 ovn_controller[133927]: 2025-12-06T08:10:07Z|01001|binding|INFO|Removing iface tapbd46dc09-5d ovn-installed in OVS
Dec  6 03:10:07 np0005548731 nova_compute[232433]: 2025-12-06 08:10:07.199 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:10:07 np0005548731 nova_compute[232433]: 2025-12-06 08:10:07.203 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:10:07 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:10:07.214 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b4:d3:2e 10.100.0.10'], port_security=['fa:16:3e:b4:d3:2e 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-TestNetworkBasicOps-1970606987', 'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '689c0f7d-997a-4352-bbea-ecb615ccea8f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2f82da1e-2277-4375-8c9e-32c60bb0f99c', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-TestNetworkBasicOps-1970606987', 'neutron:project_id': 'f4735a799c84437b9dd4ea8778ad2fbb', 'neutron:revision_number': '9', 'neutron:security_group_ids': '7c0486dd-d5db-4e6a-a51e-94ac8eaed290', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.218', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=32cba6b1-9214-48c9-b7f6-652cc6503c7e, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], logical_port=bd46dc09-5d50-46df-8d43-1b7dca95407f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 03:10:07 np0005548731 nova_compute[232433]: 2025-12-06 08:10:07.216 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:10:07 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:10:07.215 143965 INFO neutron.agent.ovn.metadata.agent [-] Port bd46dc09-5d50-46df-8d43-1b7dca95407f in datapath 2f82da1e-2277-4375-8c9e-32c60bb0f99c unbound from our chassis#033[00m
Dec  6 03:10:07 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:10:07.217 143965 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 2f82da1e-2277-4375-8c9e-32c60bb0f99c, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec  6 03:10:07 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:10:07.218 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[c5349dd8-66fd-4e03-b163-41c012c8476e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:10:07 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:10:07.219 143965 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-2f82da1e-2277-4375-8c9e-32c60bb0f99c namespace which is not needed anymore#033[00m
Dec  6 03:10:07 np0005548731 systemd[1]: machine-qemu\x2d100\x2dinstance\x2d000000c4.scope: Deactivated successfully.
Dec  6 03:10:07 np0005548731 systemd[1]: machine-qemu\x2d100\x2dinstance\x2d000000c4.scope: Consumed 3.069s CPU time.
Dec  6 03:10:07 np0005548731 systemd-machined[195355]: Machine qemu-100-instance-000000c4 terminated.
Dec  6 03:10:07 np0005548731 neutron-haproxy-ovnmeta-2f82da1e-2277-4375-8c9e-32c60bb0f99c[330948]: [NOTICE]   (330952) : haproxy version is 2.8.14-c23fe91
Dec  6 03:10:07 np0005548731 neutron-haproxy-ovnmeta-2f82da1e-2277-4375-8c9e-32c60bb0f99c[330948]: [NOTICE]   (330952) : path to executable is /usr/sbin/haproxy
Dec  6 03:10:07 np0005548731 neutron-haproxy-ovnmeta-2f82da1e-2277-4375-8c9e-32c60bb0f99c[330948]: [WARNING]  (330952) : Exiting Master process...
Dec  6 03:10:07 np0005548731 neutron-haproxy-ovnmeta-2f82da1e-2277-4375-8c9e-32c60bb0f99c[330948]: [WARNING]  (330952) : Exiting Master process...
Dec  6 03:10:07 np0005548731 neutron-haproxy-ovnmeta-2f82da1e-2277-4375-8c9e-32c60bb0f99c[330948]: [ALERT]    (330952) : Current worker (330954) exited with code 143 (Terminated)
Dec  6 03:10:07 np0005548731 neutron-haproxy-ovnmeta-2f82da1e-2277-4375-8c9e-32c60bb0f99c[330948]: [WARNING]  (330952) : All workers exited. Exiting... (0)
Dec  6 03:10:07 np0005548731 systemd[1]: libpod-1ebe70832c7566fddaf730bc1858d126d9e6c0b5e17385cea31c028b3ef36c6a.scope: Deactivated successfully.
Dec  6 03:10:07 np0005548731 podman[330989]: 2025-12-06 08:10:07.343432022 +0000 UTC m=+0.041368569 container died 1ebe70832c7566fddaf730bc1858d126d9e6c0b5e17385cea31c028b3ef36c6a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2f82da1e-2277-4375-8c9e-32c60bb0f99c, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Dec  6 03:10:07 np0005548731 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-1ebe70832c7566fddaf730bc1858d126d9e6c0b5e17385cea31c028b3ef36c6a-userdata-shm.mount: Deactivated successfully.
Dec  6 03:10:07 np0005548731 systemd[1]: var-lib-containers-storage-overlay-60b28914efb47b1f98b8837d8551ee30f1a31e490f821ed8a9069785da27b066-merged.mount: Deactivated successfully.
Dec  6 03:10:07 np0005548731 nova_compute[232433]: 2025-12-06 08:10:07.384 232437 INFO nova.virt.libvirt.driver [-] [instance: 689c0f7d-997a-4352-bbea-ecb615ccea8f] Instance destroyed successfully.#033[00m
Dec  6 03:10:07 np0005548731 nova_compute[232433]: 2025-12-06 08:10:07.386 232437 DEBUG nova.objects.instance [None req-4c3685db-a049-4203-9390-96590d4e5940 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Lazy-loading 'resources' on Instance uuid 689c0f7d-997a-4352-bbea-ecb615ccea8f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 03:10:07 np0005548731 podman[330989]: 2025-12-06 08:10:07.387010933 +0000 UTC m=+0.084947500 container cleanup 1ebe70832c7566fddaf730bc1858d126d9e6c0b5e17385cea31c028b3ef36c6a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2f82da1e-2277-4375-8c9e-32c60bb0f99c, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Dec  6 03:10:07 np0005548731 systemd[1]: libpod-conmon-1ebe70832c7566fddaf730bc1858d126d9e6c0b5e17385cea31c028b3ef36c6a.scope: Deactivated successfully.
Dec  6 03:10:07 np0005548731 nova_compute[232433]: 2025-12-06 08:10:07.413 232437 DEBUG nova.virt.libvirt.vif [None req-4c3685db-a049-4203-9390-96590d4e5940 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T08:09:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1926666901',display_name='tempest-TestNetworkBasicOps-server-1926666901',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1926666901',id=196,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLXJcX9S5tuhjgSccvh5O+jGO7J2gXtm8NpJVHisgACqOBHiGEwJ/JKoT3Q7sz6/mR3QAGntLQRx1IJyWZChJGINMA5wcNi4+wHSbP5236n2Kq91E+5QKMKMUofRtNaQtg==',key_name='tempest-TestNetworkBasicOps-962264356',keypairs=<?>,launch_index=0,launched_at=2025-12-06T08:10:05Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='f4735a799c84437b9dd4ea8778ad2fbb',ramdisk_id='',reservation_id='r-q0pax7ur',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1435471576',owner_user_name='tempest-TestNetworkBasicOps-1435471576-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T08:10:05Z,user_data=None,user_id='d5359905348247d0b9b5b95982e890bb',uuid=689c0f7d-997a-4352-bbea-ecb615ccea8f,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "bd46dc09-5d50-46df-8d43-1b7dca95407f", "address": "fa:16:3e:b4:d3:2e", "network": {"id": "2f82da1e-2277-4375-8c9e-32c60bb0f99c", "bridge": "br-int", "label": "tempest-network-smoke--1695874872", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.218", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f4735a799c84437b9dd4ea8778ad2fbb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbd46dc09-5d", "ovs_interfaceid": "bd46dc09-5d50-46df-8d43-1b7dca95407f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec  6 03:10:07 np0005548731 nova_compute[232433]: 2025-12-06 08:10:07.414 232437 DEBUG nova.network.os_vif_util [None req-4c3685db-a049-4203-9390-96590d4e5940 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Converting VIF {"id": "bd46dc09-5d50-46df-8d43-1b7dca95407f", "address": "fa:16:3e:b4:d3:2e", "network": {"id": "2f82da1e-2277-4375-8c9e-32c60bb0f99c", "bridge": "br-int", "label": "tempest-network-smoke--1695874872", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.218", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f4735a799c84437b9dd4ea8778ad2fbb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbd46dc09-5d", "ovs_interfaceid": "bd46dc09-5d50-46df-8d43-1b7dca95407f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 03:10:07 np0005548731 nova_compute[232433]: 2025-12-06 08:10:07.415 232437 DEBUG nova.network.os_vif_util [None req-4c3685db-a049-4203-9390-96590d4e5940 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b4:d3:2e,bridge_name='br-int',has_traffic_filtering=True,id=bd46dc09-5d50-46df-8d43-1b7dca95407f,network=Network(2f82da1e-2277-4375-8c9e-32c60bb0f99c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapbd46dc09-5d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 03:10:07 np0005548731 nova_compute[232433]: 2025-12-06 08:10:07.415 232437 DEBUG os_vif [None req-4c3685db-a049-4203-9390-96590d4e5940 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b4:d3:2e,bridge_name='br-int',has_traffic_filtering=True,id=bd46dc09-5d50-46df-8d43-1b7dca95407f,network=Network(2f82da1e-2277-4375-8c9e-32c60bb0f99c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapbd46dc09-5d') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec  6 03:10:07 np0005548731 nova_compute[232433]: 2025-12-06 08:10:07.417 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:10:07 np0005548731 nova_compute[232433]: 2025-12-06 08:10:07.417 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbd46dc09-5d, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 03:10:07 np0005548731 nova_compute[232433]: 2025-12-06 08:10:07.419 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:10:07 np0005548731 nova_compute[232433]: 2025-12-06 08:10:07.420 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:10:07 np0005548731 nova_compute[232433]: 2025-12-06 08:10:07.422 232437 INFO os_vif [None req-4c3685db-a049-4203-9390-96590d4e5940 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b4:d3:2e,bridge_name='br-int',has_traffic_filtering=True,id=bd46dc09-5d50-46df-8d43-1b7dca95407f,network=Network(2f82da1e-2277-4375-8c9e-32c60bb0f99c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapbd46dc09-5d')#033[00m
Dec  6 03:10:07 np0005548731 podman[331028]: 2025-12-06 08:10:07.451287448 +0000 UTC m=+0.041238846 container remove 1ebe70832c7566fddaf730bc1858d126d9e6c0b5e17385cea31c028b3ef36c6a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2f82da1e-2277-4375-8c9e-32c60bb0f99c, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true)
Dec  6 03:10:07 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:10:07.457 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[1292c850-8728-44ce-bb89-96165cdb343e]: (4, ('Sat Dec  6 08:10:07 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-2f82da1e-2277-4375-8c9e-32c60bb0f99c (1ebe70832c7566fddaf730bc1858d126d9e6c0b5e17385cea31c028b3ef36c6a)\n1ebe70832c7566fddaf730bc1858d126d9e6c0b5e17385cea31c028b3ef36c6a\nSat Dec  6 08:10:07 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-2f82da1e-2277-4375-8c9e-32c60bb0f99c (1ebe70832c7566fddaf730bc1858d126d9e6c0b5e17385cea31c028b3ef36c6a)\n1ebe70832c7566fddaf730bc1858d126d9e6c0b5e17385cea31c028b3ef36c6a\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:10:07 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:10:07.459 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[bae8921b-a88f-4762-adfb-9029b1ec58f2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:10:07 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:10:07.460 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2f82da1e-20, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 03:10:07 np0005548731 nova_compute[232433]: 2025-12-06 08:10:07.461 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:10:07 np0005548731 kernel: tap2f82da1e-20: left promiscuous mode
Dec  6 03:10:07 np0005548731 nova_compute[232433]: 2025-12-06 08:10:07.464 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:10:07 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:10:07.466 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[435e0e65-6f77-4daf-8b78-342b9b72b7e9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:10:07 np0005548731 nova_compute[232433]: 2025-12-06 08:10:07.478 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:10:07 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:10:07.483 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[b38ffb07-abec-4b40-ac9a-446397697b36]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:10:07 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:10:07.484 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[38ece687-2af2-4f8f-8c4c-5268bdfd624e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:10:07 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:10:07.499 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[c087cec5-f3eb-46c6-a2a4-33a92b187ed7]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 886627, 'reachable_time': 40671, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 331061, 'error': None, 'target': 'ovnmeta-2f82da1e-2277-4375-8c9e-32c60bb0f99c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:10:07 np0005548731 systemd[1]: run-netns-ovnmeta\x2d2f82da1e\x2d2277\x2d4375\x2d8c9e\x2d32c60bb0f99c.mount: Deactivated successfully.
Dec  6 03:10:07 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:10:07.503 144079 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-2f82da1e-2277-4375-8c9e-32c60bb0f99c deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Dec  6 03:10:07 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:10:07.504 144079 DEBUG oslo.privsep.daemon [-] privsep: reply[cf7d9d69-5bda-4090-b383-a1eef83ba0dd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:10:07 np0005548731 nova_compute[232433]: 2025-12-06 08:10:07.798 232437 INFO nova.virt.libvirt.driver [None req-4c3685db-a049-4203-9390-96590d4e5940 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 689c0f7d-997a-4352-bbea-ecb615ccea8f] Deleting instance files /var/lib/nova/instances/689c0f7d-997a-4352-bbea-ecb615ccea8f_del#033[00m
Dec  6 03:10:07 np0005548731 nova_compute[232433]: 2025-12-06 08:10:07.800 232437 INFO nova.virt.libvirt.driver [None req-4c3685db-a049-4203-9390-96590d4e5940 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 689c0f7d-997a-4352-bbea-ecb615ccea8f] Deletion of /var/lib/nova/instances/689c0f7d-997a-4352-bbea-ecb615ccea8f_del complete#033[00m
Dec  6 03:10:07 np0005548731 nova_compute[232433]: 2025-12-06 08:10:07.866 232437 INFO nova.compute.manager [None req-4c3685db-a049-4203-9390-96590d4e5940 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 689c0f7d-997a-4352-bbea-ecb615ccea8f] Took 0.72 seconds to destroy the instance on the hypervisor.#033[00m
Dec  6 03:10:07 np0005548731 nova_compute[232433]: 2025-12-06 08:10:07.867 232437 DEBUG oslo.service.loopingcall [None req-4c3685db-a049-4203-9390-96590d4e5940 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Dec  6 03:10:07 np0005548731 nova_compute[232433]: 2025-12-06 08:10:07.867 232437 DEBUG nova.compute.manager [-] [instance: 689c0f7d-997a-4352-bbea-ecb615ccea8f] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Dec  6 03:10:07 np0005548731 nova_compute[232433]: 2025-12-06 08:10:07.867 232437 DEBUG nova.network.neutron [-] [instance: 689c0f7d-997a-4352-bbea-ecb615ccea8f] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Dec  6 03:10:07 np0005548731 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec  6 03:10:07 np0005548731 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec  6 03:10:08 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:10:08 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:10:08 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:10:08.153 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:10:08 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:10:08 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 03:10:08 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:10:08.511 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 03:10:08 np0005548731 nova_compute[232433]: 2025-12-06 08:10:08.977 232437 DEBUG nova.compute.manager [req-61d7af20-979b-4dc9-b43f-0ce159292c52 req-e28c6507-888b-4a28-bb8f-ad310336cac0 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 689c0f7d-997a-4352-bbea-ecb615ccea8f] Received event network-vif-unplugged-bd46dc09-5d50-46df-8d43-1b7dca95407f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 03:10:08 np0005548731 nova_compute[232433]: 2025-12-06 08:10:08.977 232437 DEBUG oslo_concurrency.lockutils [req-61d7af20-979b-4dc9-b43f-0ce159292c52 req-e28c6507-888b-4a28-bb8f-ad310336cac0 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "689c0f7d-997a-4352-bbea-ecb615ccea8f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:10:08 np0005548731 nova_compute[232433]: 2025-12-06 08:10:08.977 232437 DEBUG oslo_concurrency.lockutils [req-61d7af20-979b-4dc9-b43f-0ce159292c52 req-e28c6507-888b-4a28-bb8f-ad310336cac0 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "689c0f7d-997a-4352-bbea-ecb615ccea8f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:10:08 np0005548731 nova_compute[232433]: 2025-12-06 08:10:08.978 232437 DEBUG oslo_concurrency.lockutils [req-61d7af20-979b-4dc9-b43f-0ce159292c52 req-e28c6507-888b-4a28-bb8f-ad310336cac0 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "689c0f7d-997a-4352-bbea-ecb615ccea8f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:10:08 np0005548731 nova_compute[232433]: 2025-12-06 08:10:08.978 232437 DEBUG nova.compute.manager [req-61d7af20-979b-4dc9-b43f-0ce159292c52 req-e28c6507-888b-4a28-bb8f-ad310336cac0 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 689c0f7d-997a-4352-bbea-ecb615ccea8f] No waiting events found dispatching network-vif-unplugged-bd46dc09-5d50-46df-8d43-1b7dca95407f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 03:10:08 np0005548731 nova_compute[232433]: 2025-12-06 08:10:08.978 232437 DEBUG nova.compute.manager [req-61d7af20-979b-4dc9-b43f-0ce159292c52 req-e28c6507-888b-4a28-bb8f-ad310336cac0 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 689c0f7d-997a-4352-bbea-ecb615ccea8f] Received event network-vif-unplugged-bd46dc09-5d50-46df-8d43-1b7dca95407f for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Dec  6 03:10:08 np0005548731 nova_compute[232433]: 2025-12-06 08:10:08.978 232437 DEBUG nova.compute.manager [req-61d7af20-979b-4dc9-b43f-0ce159292c52 req-e28c6507-888b-4a28-bb8f-ad310336cac0 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 689c0f7d-997a-4352-bbea-ecb615ccea8f] Received event network-vif-plugged-bd46dc09-5d50-46df-8d43-1b7dca95407f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 03:10:08 np0005548731 nova_compute[232433]: 2025-12-06 08:10:08.978 232437 DEBUG oslo_concurrency.lockutils [req-61d7af20-979b-4dc9-b43f-0ce159292c52 req-e28c6507-888b-4a28-bb8f-ad310336cac0 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "689c0f7d-997a-4352-bbea-ecb615ccea8f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:10:08 np0005548731 nova_compute[232433]: 2025-12-06 08:10:08.978 232437 DEBUG oslo_concurrency.lockutils [req-61d7af20-979b-4dc9-b43f-0ce159292c52 req-e28c6507-888b-4a28-bb8f-ad310336cac0 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "689c0f7d-997a-4352-bbea-ecb615ccea8f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:10:08 np0005548731 nova_compute[232433]: 2025-12-06 08:10:08.978 232437 DEBUG oslo_concurrency.lockutils [req-61d7af20-979b-4dc9-b43f-0ce159292c52 req-e28c6507-888b-4a28-bb8f-ad310336cac0 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "689c0f7d-997a-4352-bbea-ecb615ccea8f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:10:08 np0005548731 nova_compute[232433]: 2025-12-06 08:10:08.979 232437 DEBUG nova.compute.manager [req-61d7af20-979b-4dc9-b43f-0ce159292c52 req-e28c6507-888b-4a28-bb8f-ad310336cac0 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 689c0f7d-997a-4352-bbea-ecb615ccea8f] No waiting events found dispatching network-vif-plugged-bd46dc09-5d50-46df-8d43-1b7dca95407f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 03:10:08 np0005548731 nova_compute[232433]: 2025-12-06 08:10:08.979 232437 WARNING nova.compute.manager [req-61d7af20-979b-4dc9-b43f-0ce159292c52 req-e28c6507-888b-4a28-bb8f-ad310336cac0 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 689c0f7d-997a-4352-bbea-ecb615ccea8f] Received unexpected event network-vif-plugged-bd46dc09-5d50-46df-8d43-1b7dca95407f for instance with vm_state active and task_state deleting.#033[00m
Dec  6 03:10:09 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  6 03:10:09 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1560501208' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  6 03:10:09 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  6 03:10:09 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1560501208' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  6 03:10:09 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e413 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:10:09 np0005548731 nova_compute[232433]: 2025-12-06 08:10:09.979 232437 DEBUG nova.network.neutron [-] [instance: 689c0f7d-997a-4352-bbea-ecb615ccea8f] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 03:10:10 np0005548731 nova_compute[232433]: 2025-12-06 08:10:10.003 232437 INFO nova.compute.manager [-] [instance: 689c0f7d-997a-4352-bbea-ecb615ccea8f] Took 2.14 seconds to deallocate network for instance.#033[00m
Dec  6 03:10:10 np0005548731 nova_compute[232433]: 2025-12-06 08:10:10.072 232437 DEBUG oslo_concurrency.lockutils [None req-4c3685db-a049-4203-9390-96590d4e5940 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:10:10 np0005548731 nova_compute[232433]: 2025-12-06 08:10:10.073 232437 DEBUG oslo_concurrency.lockutils [None req-4c3685db-a049-4203-9390-96590d4e5940 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:10:10 np0005548731 nova_compute[232433]: 2025-12-06 08:10:10.129 232437 DEBUG oslo_concurrency.processutils [None req-4c3685db-a049-4203-9390-96590d4e5940 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 03:10:10 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:10:10 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:10:10 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:10:10.156 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:10:10 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:10:10 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:10:10 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:10:10.514 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:10:10 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 03:10:10 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2754319733' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 03:10:10 np0005548731 nova_compute[232433]: 2025-12-06 08:10:10.541 232437 DEBUG oslo_concurrency.processutils [None req-4c3685db-a049-4203-9390-96590d4e5940 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.412s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 03:10:10 np0005548731 nova_compute[232433]: 2025-12-06 08:10:10.546 232437 DEBUG nova.compute.provider_tree [None req-4c3685db-a049-4203-9390-96590d4e5940 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Inventory has not changed in ProviderTree for provider: 6d00757a-082f-486d-ae84-869a2ba2e6e7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  6 03:10:10 np0005548731 nova_compute[232433]: 2025-12-06 08:10:10.569 232437 DEBUG nova.scheduler.client.report [None req-4c3685db-a049-4203-9390-96590d4e5940 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Inventory has not changed for provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  6 03:10:10 np0005548731 nova_compute[232433]: 2025-12-06 08:10:10.605 232437 DEBUG oslo_concurrency.lockutils [None req-4c3685db-a049-4203-9390-96590d4e5940 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.532s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:10:10 np0005548731 nova_compute[232433]: 2025-12-06 08:10:10.635 232437 INFO nova.scheduler.client.report [None req-4c3685db-a049-4203-9390-96590d4e5940 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Deleted allocations for instance 689c0f7d-997a-4352-bbea-ecb615ccea8f#033[00m
Dec  6 03:10:10 np0005548731 nova_compute[232433]: 2025-12-06 08:10:10.725 232437 DEBUG oslo_concurrency.lockutils [None req-4c3685db-a049-4203-9390-96590d4e5940 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Lock "689c0f7d-997a-4352-bbea-ecb615ccea8f" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.581s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:10:11 np0005548731 nova_compute[232433]: 2025-12-06 08:10:11.382 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:10:12 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:10:12 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:10:12 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:10:12.159 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:10:12 np0005548731 nova_compute[232433]: 2025-12-06 08:10:12.461 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:10:12 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:10:12 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:10:12 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:10:12.517 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:10:14 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:10:14 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:10:14 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:10:14.161 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:10:14 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:10:14.359 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=88, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:ec:b3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '32:72:e7:89:e0:7d'}, ipsec=False) old=SB_Global(nb_cfg=87) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 03:10:14 np0005548731 nova_compute[232433]: 2025-12-06 08:10:14.359 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:10:14 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:10:14.360 143965 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec  6 03:10:14 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:10:14 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 03:10:14 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:10:14.519 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 03:10:14 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e413 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:10:16 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:10:16 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:10:16 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:10:16.165 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:10:16 np0005548731 nova_compute[232433]: 2025-12-06 08:10:16.384 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:10:16 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:10:16 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:10:16 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:10:16.522 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:10:17 np0005548731 nova_compute[232433]: 2025-12-06 08:10:17.462 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:10:18 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:10:18 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:10:18 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:10:18.167 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:10:18 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:10:18 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:10:18 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:10:18.525 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:10:19 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e413 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:10:20 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:10:20 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 03:10:20 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:10:20.170 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 03:10:20 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:10:20 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:10:20 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:10:20.528 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:10:21 np0005548731 nova_compute[232433]: 2025-12-06 08:10:21.420 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:10:22 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:10:22 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:10:22 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:10:22.175 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:10:22 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:10:22.362 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=9f96b960-b4f2-40bd-ae99-08121f5e8b78, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '88'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 03:10:22 np0005548731 nova_compute[232433]: 2025-12-06 08:10:22.383 232437 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765008607.3812313, 689c0f7d-997a-4352-bbea-ecb615ccea8f => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 03:10:22 np0005548731 nova_compute[232433]: 2025-12-06 08:10:22.383 232437 INFO nova.compute.manager [-] [instance: 689c0f7d-997a-4352-bbea-ecb615ccea8f] VM Stopped (Lifecycle Event)#033[00m
Dec  6 03:10:22 np0005548731 nova_compute[232433]: 2025-12-06 08:10:22.442 232437 DEBUG nova.compute.manager [None req-0052c4b4-c819-4c8e-ad95-df4223453208 - - - - - -] [instance: 689c0f7d-997a-4352-bbea-ecb615ccea8f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 03:10:22 np0005548731 nova_compute[232433]: 2025-12-06 08:10:22.497 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:10:22 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:10:22 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:10:22 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:10:22.532 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:10:24 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:10:24 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:10:24 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:10:24.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:10:24 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:10:24 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 03:10:24 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:10:24.535 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 03:10:24 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e413 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:10:26 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:10:26 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:10:26 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:10:26.180 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:10:26 np0005548731 nova_compute[232433]: 2025-12-06 08:10:26.424 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:10:26 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:10:26 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:10:26 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:10:26.538 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:10:27 np0005548731 nova_compute[232433]: 2025-12-06 08:10:27.543 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:10:28 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:10:28 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 03:10:28 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:10:28.184 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 03:10:28 np0005548731 nova_compute[232433]: 2025-12-06 08:10:28.525 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:10:28 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:10:28 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:10:28 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:10:28.540 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:10:28 np0005548731 nova_compute[232433]: 2025-12-06 08:10:28.709 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:10:28 np0005548731 podman[331150]: 2025-12-06 08:10:28.905488736 +0000 UTC m=+0.060329550 container health_status ae4fd08cdec31d6ae2a6b91ffff7deb6ca5a8375cb52ee4b685cc6f25796824e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Dec  6 03:10:28 np0005548731 podman[331148]: 2025-12-06 08:10:28.906303996 +0000 UTC m=+0.067692220 container health_status 26c2ce9bdb83c6db5ccad7f5b2a7cb4e9354ab0235ad2f942ea42c8feda2142b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125)
Dec  6 03:10:28 np0005548731 podman[331149]: 2025-12-06 08:10:28.950741398 +0000 UTC m=+0.110002359 container health_status a118c3becf56cfbcae208065771ebf10a65162bb7b96389971293e534823fb78 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, tcib_managed=true, io.buildah.version=1.41.3)
Dec  6 03:10:29 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e413 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:10:30 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:10:30 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:10:30 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:10:30.188 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:10:30 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:10:30 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:10:30 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:10:30.544 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:10:31 np0005548731 ceph-osd[80147]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec  6 03:10:31 np0005548731 ceph-osd[80147]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 6002.4 total, 600.0 interval#012Cumulative writes: 73K writes, 293K keys, 73K commit groups, 1.0 writes per commit group, ingest: 0.30 GB, 0.05 MB/s#012Cumulative WAL: 73K writes, 27K syncs, 2.69 writes per sync, written: 0.30 GB, 0.05 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 5372 writes, 23K keys, 5372 commit groups, 1.0 writes per commit group, ingest: 25.15 MB, 0.04 MB/s#012Interval WAL: 5372 writes, 2074 syncs, 2.59 writes per sync, written: 0.02 GB, 0.04 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.42              0.00         1    0.419       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.42              0.00         1    0.419       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) 
Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.42              0.00         1    0.419       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 6002.4 total, 4800.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.4 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x5612cf175350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4.5e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) 
Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 6002.4 total, 4800.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x5612cf175350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 
collections: 11 last_copies: 8 last_secs: 4.5e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 6002.4 total, 4800.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 
0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 me
Dec  6 03:10:31 np0005548731 nova_compute[232433]: 2025-12-06 08:10:31.425 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:10:32 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:10:32 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:10:32 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:10:32.190 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:10:32 np0005548731 nova_compute[232433]: 2025-12-06 08:10:32.546 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:10:32 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:10:32 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:10:32 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:10:32.547 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:10:34 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:10:34 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:10:34 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:10:34.192 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:10:34 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:10:34 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:10:34 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:10:34.550 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:10:34 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e413 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:10:36 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:10:36 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 03:10:36 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:10:36.194 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 03:10:36 np0005548731 nova_compute[232433]: 2025-12-06 08:10:36.429 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:10:36 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:10:36 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:10:36 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:10:36.553 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:10:37 np0005548731 nova_compute[232433]: 2025-12-06 08:10:37.548 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:10:38 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:10:38 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:10:38 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:10:38.196 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:10:38 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:10:38 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:10:38 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:10:38.556 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:10:39 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e413 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:10:40 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:10:40 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 03:10:40 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:10:40.198 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 03:10:40 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:10:40 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:10:40 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:10:40.558 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:10:41 np0005548731 nova_compute[232433]: 2025-12-06 08:10:41.430 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:10:42 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:10:42 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:10:42 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:10:42.200 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:10:42 np0005548731 nova_compute[232433]: 2025-12-06 08:10:42.550 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:10:42 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:10:42 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 03:10:42 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:10:42.562 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 03:10:44 np0005548731 nova_compute[232433]: 2025-12-06 08:10:44.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:10:44 np0005548731 nova_compute[232433]: 2025-12-06 08:10:44.105 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  6 03:10:44 np0005548731 nova_compute[232433]: 2025-12-06 08:10:44.105 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  6 03:10:44 np0005548731 nova_compute[232433]: 2025-12-06 08:10:44.134 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Dec  6 03:10:44 np0005548731 nova_compute[232433]: 2025-12-06 08:10:44.135 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:10:44 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:10:44 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:10:44 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:10:44.201 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:10:44 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:10:44 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:10:44 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:10:44.564 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:10:44 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e413 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:10:45 np0005548731 nova_compute[232433]: 2025-12-06 08:10:45.322 232437 DEBUG nova.compute.manager [None req-ed1544f6-01a9-4295-be8c-f51c732b91cc 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: 5f0de650-9c62-4323-9354-1e018f4f06df] Stashing vm_state: active _prep_resize /usr/lib/python3.9/site-packages/nova/compute/manager.py:5560#033[00m
Dec  6 03:10:45 np0005548731 nova_compute[232433]: 2025-12-06 08:10:45.437 232437 DEBUG oslo_concurrency.lockutils [None req-ed1544f6-01a9-4295-be8c-f51c732b91cc 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.resize_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:10:45 np0005548731 nova_compute[232433]: 2025-12-06 08:10:45.437 232437 DEBUG oslo_concurrency.lockutils [None req-ed1544f6-01a9-4295-be8c-f51c732b91cc 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.resize_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:10:45 np0005548731 nova_compute[232433]: 2025-12-06 08:10:45.464 232437 DEBUG nova.objects.instance [None req-ed1544f6-01a9-4295-be8c-f51c732b91cc 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Lazy-loading 'pci_requests' on Instance uuid 5f0de650-9c62-4323-9354-1e018f4f06df obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 03:10:45 np0005548731 nova_compute[232433]: 2025-12-06 08:10:45.495 232437 DEBUG nova.virt.hardware [None req-ed1544f6-01a9-4295-be8c-f51c732b91cc 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Dec  6 03:10:45 np0005548731 nova_compute[232433]: 2025-12-06 08:10:45.496 232437 INFO nova.compute.claims [None req-ed1544f6-01a9-4295-be8c-f51c732b91cc 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: 5f0de650-9c62-4323-9354-1e018f4f06df] Claim successful on node compute-2.ctlplane.example.com#033[00m
Dec  6 03:10:45 np0005548731 nova_compute[232433]: 2025-12-06 08:10:45.496 232437 DEBUG nova.objects.instance [None req-ed1544f6-01a9-4295-be8c-f51c732b91cc 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Lazy-loading 'resources' on Instance uuid 5f0de650-9c62-4323-9354-1e018f4f06df obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 03:10:45 np0005548731 nova_compute[232433]: 2025-12-06 08:10:45.511 232437 DEBUG nova.objects.instance [None req-ed1544f6-01a9-4295-be8c-f51c732b91cc 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Lazy-loading 'pci_devices' on Instance uuid 5f0de650-9c62-4323-9354-1e018f4f06df obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 03:10:45 np0005548731 nova_compute[232433]: 2025-12-06 08:10:45.575 232437 INFO nova.compute.resource_tracker [None req-ed1544f6-01a9-4295-be8c-f51c732b91cc 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: 5f0de650-9c62-4323-9354-1e018f4f06df] Updating resource usage from migration 3c119e16-f7b2-44e7-8cac-2d8634d372fd#033[00m
Dec  6 03:10:45 np0005548731 nova_compute[232433]: 2025-12-06 08:10:45.575 232437 DEBUG nova.compute.resource_tracker [None req-ed1544f6-01a9-4295-be8c-f51c732b91cc 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: 5f0de650-9c62-4323-9354-1e018f4f06df] Starting to track incoming migration 3c119e16-f7b2-44e7-8cac-2d8634d372fd with flavor fb97f55a-36c0-42f2-8156-c1b04eb23dd0 _update_usage_from_migration /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1431#033[00m
Dec  6 03:10:45 np0005548731 nova_compute[232433]: 2025-12-06 08:10:45.626 232437 DEBUG oslo_concurrency.processutils [None req-ed1544f6-01a9-4295-be8c-f51c732b91cc 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 03:10:46 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 03:10:46 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2452691386' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 03:10:46 np0005548731 nova_compute[232433]: 2025-12-06 08:10:46.070 232437 DEBUG oslo_concurrency.processutils [None req-ed1544f6-01a9-4295-be8c-f51c732b91cc 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.443s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 03:10:46 np0005548731 nova_compute[232433]: 2025-12-06 08:10:46.078 232437 DEBUG nova.compute.provider_tree [None req-ed1544f6-01a9-4295-be8c-f51c732b91cc 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Inventory has not changed in ProviderTree for provider: 6d00757a-082f-486d-ae84-869a2ba2e6e7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  6 03:10:46 np0005548731 nova_compute[232433]: 2025-12-06 08:10:46.103 232437 DEBUG nova.scheduler.client.report [None req-ed1544f6-01a9-4295-be8c-f51c732b91cc 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Inventory has not changed for provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  6 03:10:46 np0005548731 nova_compute[232433]: 2025-12-06 08:10:46.144 232437 DEBUG oslo_concurrency.lockutils [None req-ed1544f6-01a9-4295-be8c-f51c732b91cc 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.resize_claim" :: held 0.707s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:10:46 np0005548731 nova_compute[232433]: 2025-12-06 08:10:46.145 232437 INFO nova.compute.manager [None req-ed1544f6-01a9-4295-be8c-f51c732b91cc 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: 5f0de650-9c62-4323-9354-1e018f4f06df] Migrating#033[00m
Dec  6 03:10:46 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:10:46 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 03:10:46 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:10:46.204 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 03:10:46 np0005548731 nova_compute[232433]: 2025-12-06 08:10:46.431 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:10:46 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:10:46 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:10:46 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:10:46.567 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:10:47 np0005548731 nova_compute[232433]: 2025-12-06 08:10:47.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:10:47 np0005548731 nova_compute[232433]: 2025-12-06 08:10:47.105 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  6 03:10:47 np0005548731 nova_compute[232433]: 2025-12-06 08:10:47.551 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:10:48 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:10:48 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:10:48 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:10:48.207 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:10:48 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:10:48 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:10:48 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:10:48.570 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:10:49 np0005548731 nova_compute[232433]: 2025-12-06 08:10:49.100 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:10:49 np0005548731 nova_compute[232433]: 2025-12-06 08:10:49.103 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:10:49 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e413 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:10:50 np0005548731 nova_compute[232433]: 2025-12-06 08:10:50.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:10:50 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:10:50 np0005548731 systemd[1]: Created slice User Slice of UID 42436.
Dec  6 03:10:50 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 03:10:50 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:10:50.209 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 03:10:50 np0005548731 systemd[1]: Starting User Runtime Directory /run/user/42436...
Dec  6 03:10:50 np0005548731 systemd-logind[794]: New session 70 of user nova.
Dec  6 03:10:50 np0005548731 systemd[1]: Finished User Runtime Directory /run/user/42436.
Dec  6 03:10:50 np0005548731 systemd[1]: Starting User Manager for UID 42436...
Dec  6 03:10:50 np0005548731 systemd[331300]: Queued start job for default target Main User Target.
Dec  6 03:10:50 np0005548731 systemd[331300]: Created slice User Application Slice.
Dec  6 03:10:50 np0005548731 systemd[331300]: Started Mark boot as successful after the user session has run 2 minutes.
Dec  6 03:10:50 np0005548731 systemd[331300]: Started Daily Cleanup of User's Temporary Directories.
Dec  6 03:10:50 np0005548731 systemd[331300]: Reached target Paths.
Dec  6 03:10:50 np0005548731 systemd[331300]: Reached target Timers.
Dec  6 03:10:50 np0005548731 systemd[331300]: Starting D-Bus User Message Bus Socket...
Dec  6 03:10:50 np0005548731 systemd[331300]: Starting Create User's Volatile Files and Directories...
Dec  6 03:10:50 np0005548731 systemd[331300]: Finished Create User's Volatile Files and Directories.
Dec  6 03:10:50 np0005548731 systemd[331300]: Listening on D-Bus User Message Bus Socket.
Dec  6 03:10:50 np0005548731 systemd[331300]: Reached target Sockets.
Dec  6 03:10:50 np0005548731 systemd[331300]: Reached target Basic System.
Dec  6 03:10:50 np0005548731 systemd[331300]: Reached target Main User Target.
Dec  6 03:10:50 np0005548731 systemd[331300]: Startup finished in 128ms.
Dec  6 03:10:50 np0005548731 systemd[1]: Started User Manager for UID 42436.
Dec  6 03:10:50 np0005548731 systemd[1]: Started Session 70 of User nova.
Dec  6 03:10:50 np0005548731 systemd[1]: session-70.scope: Deactivated successfully.
Dec  6 03:10:50 np0005548731 systemd-logind[794]: Session 70 logged out. Waiting for processes to exit.
Dec  6 03:10:50 np0005548731 systemd-logind[794]: Removed session 70.
Dec  6 03:10:50 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:10:50 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:10:50 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:10:50.572 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:10:50 np0005548731 systemd-logind[794]: New session 72 of user nova.
Dec  6 03:10:50 np0005548731 systemd[1]: Started Session 72 of User nova.
Dec  6 03:10:50 np0005548731 systemd[1]: session-72.scope: Deactivated successfully.
Dec  6 03:10:50 np0005548731 systemd-logind[794]: Session 72 logged out. Waiting for processes to exit.
Dec  6 03:10:50 np0005548731 systemd-logind[794]: Removed session 72.
Dec  6 03:10:51 np0005548731 nova_compute[232433]: 2025-12-06 08:10:51.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:10:51 np0005548731 nova_compute[232433]: 2025-12-06 08:10:51.130 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:10:51 np0005548731 nova_compute[232433]: 2025-12-06 08:10:51.130 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:10:51 np0005548731 nova_compute[232433]: 2025-12-06 08:10:51.131 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:10:51 np0005548731 nova_compute[232433]: 2025-12-06 08:10:51.131 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  6 03:10:51 np0005548731 nova_compute[232433]: 2025-12-06 08:10:51.131 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 03:10:51 np0005548731 nova_compute[232433]: 2025-12-06 08:10:51.434 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:10:51 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 03:10:51 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/4129445202' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 03:10:51 np0005548731 nova_compute[232433]: 2025-12-06 08:10:51.546 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.415s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 03:10:51 np0005548731 nova_compute[232433]: 2025-12-06 08:10:51.690 232437 WARNING nova.virt.libvirt.driver [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  6 03:10:51 np0005548731 nova_compute[232433]: 2025-12-06 08:10:51.691 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4177MB free_disk=20.942710876464844GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  6 03:10:51 np0005548731 nova_compute[232433]: 2025-12-06 08:10:51.691 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:10:51 np0005548731 nova_compute[232433]: 2025-12-06 08:10:51.691 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:10:51 np0005548731 nova_compute[232433]: 2025-12-06 08:10:51.741 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Migration for instance 5f0de650-9c62-4323-9354-1e018f4f06df refers to another host's instance! _pair_instances_to_migrations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:903#033[00m
Dec  6 03:10:51 np0005548731 nova_compute[232433]: 2025-12-06 08:10:51.779 232437 INFO nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] [instance: 5f0de650-9c62-4323-9354-1e018f4f06df] Updating resource usage from migration 3c119e16-f7b2-44e7-8cac-2d8634d372fd#033[00m
Dec  6 03:10:51 np0005548731 nova_compute[232433]: 2025-12-06 08:10:51.780 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] [instance: 5f0de650-9c62-4323-9354-1e018f4f06df] Starting to track incoming migration 3c119e16-f7b2-44e7-8cac-2d8634d372fd with flavor fb97f55a-36c0-42f2-8156-c1b04eb23dd0 _update_usage_from_migration /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1431#033[00m
Dec  6 03:10:51 np0005548731 nova_compute[232433]: 2025-12-06 08:10:51.841 232437 WARNING nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Instance 5f0de650-9c62-4323-9354-1e018f4f06df has been moved to another host compute-1.ctlplane.example.com(compute-1.ctlplane.example.com). There are allocations remaining against the source host that might need to be removed: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 192, 'VCPU': 1}}.#033[00m
Dec  6 03:10:51 np0005548731 nova_compute[232433]: 2025-12-06 08:10:51.842 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  6 03:10:51 np0005548731 nova_compute[232433]: 2025-12-06 08:10:51.842 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7680MB used_ram=704MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  6 03:10:51 np0005548731 nova_compute[232433]: 2025-12-06 08:10:51.865 232437 DEBUG nova.scheduler.client.report [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Refreshing inventories for resource provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Dec  6 03:10:51 np0005548731 nova_compute[232433]: 2025-12-06 08:10:51.882 232437 DEBUG nova.scheduler.client.report [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Updating ProviderTree inventory for provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Dec  6 03:10:51 np0005548731 nova_compute[232433]: 2025-12-06 08:10:51.882 232437 DEBUG nova.compute.provider_tree [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Updating inventory in ProviderTree for provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Dec  6 03:10:51 np0005548731 nova_compute[232433]: 2025-12-06 08:10:51.896 232437 DEBUG nova.scheduler.client.report [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Refreshing aggregate associations for resource provider 6d00757a-082f-486d-ae84-869a2ba2e6e7, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Dec  6 03:10:51 np0005548731 nova_compute[232433]: 2025-12-06 08:10:51.934 232437 DEBUG nova.scheduler.client.report [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Refreshing trait associations for resource provider 6d00757a-082f-486d-ae84-869a2ba2e6e7, traits: COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_STORAGE_BUS_USB,COMPUTE_ACCELERATORS,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_TRUSTED_CERTS,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_SSE,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_SSE2,HW_CPU_X86_SSE41,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_STORAGE_BUS_SATA,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_SECURITY_TPM_1_2,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_VOLUME_EXTEND,COMPUTE_NODE,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_SECURITY_TPM_2_0,COMPUTE_VIOMMU_MODEL_AUTO,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_SSE42,COMPUTE_STORAGE_BUS_FDC,COMPUTE_RESCUE_BFV,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_IMAGE_TYPE_ARI _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Dec  6 03:10:51 np0005548731 nova_compute[232433]: 2025-12-06 08:10:51.965 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 03:10:52 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:10:52 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:10:52 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:10:52.212 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:10:52 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 03:10:52 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2490917646' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 03:10:52 np0005548731 nova_compute[232433]: 2025-12-06 08:10:52.376 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.410s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 03:10:52 np0005548731 nova_compute[232433]: 2025-12-06 08:10:52.381 232437 DEBUG nova.compute.provider_tree [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Inventory has not changed in ProviderTree for provider: 6d00757a-082f-486d-ae84-869a2ba2e6e7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  6 03:10:52 np0005548731 nova_compute[232433]: 2025-12-06 08:10:52.404 232437 DEBUG nova.scheduler.client.report [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Inventory has not changed for provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  6 03:10:52 np0005548731 nova_compute[232433]: 2025-12-06 08:10:52.432 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  6 03:10:52 np0005548731 nova_compute[232433]: 2025-12-06 08:10:52.432 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.741s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:10:52 np0005548731 nova_compute[232433]: 2025-12-06 08:10:52.554 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:10:52 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:10:52 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:10:52 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:10:52.576 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:10:54 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:10:54 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:10:54 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:10:54.214 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:10:54 np0005548731 ceph-mon[77458]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #154. Immutable memtables: 0.
Dec  6 03:10:54 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:10:54.396429) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec  6 03:10:54 np0005548731 ceph-mon[77458]: rocksdb: [db/flush_job.cc:856] [default] [JOB 97] Flushing memtable with next log file: 154
Dec  6 03:10:54 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765008654396496, "job": 97, "event": "flush_started", "num_memtables": 1, "num_entries": 1926, "num_deletes": 258, "total_data_size": 4545587, "memory_usage": 4590944, "flush_reason": "Manual Compaction"}
Dec  6 03:10:54 np0005548731 ceph-mon[77458]: rocksdb: [db/flush_job.cc:885] [default] [JOB 97] Level-0 flush table #155: started
Dec  6 03:10:54 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765008654417234, "cf_name": "default", "job": 97, "event": "table_file_creation", "file_number": 155, "file_size": 2977267, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 75875, "largest_seqno": 77795, "table_properties": {"data_size": 2969422, "index_size": 4659, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2117, "raw_key_size": 16291, "raw_average_key_size": 19, "raw_value_size": 2953709, "raw_average_value_size": 3602, "num_data_blocks": 205, "num_entries": 820, "num_filter_entries": 820, "num_deletions": 258, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765008479, "oldest_key_time": 1765008479, "file_creation_time": 1765008654, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bc01f9a1-fda8-4008-aee1-1fa5ff4371a1", "db_session_id": "CWBL8WMDM4D79BU86E9T", "orig_file_number": 155, "seqno_to_time_mapping": "N/A"}}
Dec  6 03:10:54 np0005548731 ceph-mon[77458]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 97] Flush lasted 20924 microseconds, and 12305 cpu microseconds.
Dec  6 03:10:54 np0005548731 ceph-mon[77458]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  6 03:10:54 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:10:54.417350) [db/flush_job.cc:967] [default] [JOB 97] Level-0 flush table #155: 2977267 bytes OK
Dec  6 03:10:54 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:10:54.417376) [db/memtable_list.cc:519] [default] Level-0 commit table #155 started
Dec  6 03:10:54 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:10:54.419343) [db/memtable_list.cc:722] [default] Level-0 commit table #155: memtable #1 done
Dec  6 03:10:54 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:10:54.419371) EVENT_LOG_v1 {"time_micros": 1765008654419362, "job": 97, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec  6 03:10:54 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:10:54.419394) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec  6 03:10:54 np0005548731 ceph-mon[77458]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 97] Try to delete WAL files size 4536947, prev total WAL file size 4536947, number of live WAL files 2.
Dec  6 03:10:54 np0005548731 ceph-mon[77458]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000151.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  6 03:10:54 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:10:54.421644) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0032373737' seq:72057594037927935, type:22 .. '6C6F676D0033303331' seq:0, type:0; will stop at (end)
Dec  6 03:10:54 np0005548731 ceph-mon[77458]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 98] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec  6 03:10:54 np0005548731 ceph-mon[77458]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 97 Base level 0, inputs: [155(2907KB)], [153(10MB)]
Dec  6 03:10:54 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765008654421709, "job": 98, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [155], "files_L6": [153], "score": -1, "input_data_size": 13894358, "oldest_snapshot_seqno": -1}
Dec  6 03:10:54 np0005548731 nova_compute[232433]: 2025-12-06 08:10:54.430 232437 INFO nova.network.neutron [None req-ed1544f6-01a9-4295-be8c-f51c732b91cc 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: 5f0de650-9c62-4323-9354-1e018f4f06df] Updating port 77949a7e-19a2-4c1f-812b-a7d9452af1e3 with attributes {'binding:host_id': 'compute-2.ctlplane.example.com', 'device_owner': 'compute:nova'}#033[00m
Dec  6 03:10:54 np0005548731 nova_compute[232433]: 2025-12-06 08:10:54.478 232437 DEBUG nova.compute.manager [req-102e1324-4ac5-459d-babc-34421d02f335 req-6abffd97-c2ca-451e-a648-4aacc2f0437d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 5f0de650-9c62-4323-9354-1e018f4f06df] Received event network-vif-unplugged-77949a7e-19a2-4c1f-812b-a7d9452af1e3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 03:10:54 np0005548731 nova_compute[232433]: 2025-12-06 08:10:54.479 232437 DEBUG oslo_concurrency.lockutils [req-102e1324-4ac5-459d-babc-34421d02f335 req-6abffd97-c2ca-451e-a648-4aacc2f0437d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "5f0de650-9c62-4323-9354-1e018f4f06df-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:10:54 np0005548731 nova_compute[232433]: 2025-12-06 08:10:54.480 232437 DEBUG oslo_concurrency.lockutils [req-102e1324-4ac5-459d-babc-34421d02f335 req-6abffd97-c2ca-451e-a648-4aacc2f0437d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "5f0de650-9c62-4323-9354-1e018f4f06df-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:10:54 np0005548731 nova_compute[232433]: 2025-12-06 08:10:54.480 232437 DEBUG oslo_concurrency.lockutils [req-102e1324-4ac5-459d-babc-34421d02f335 req-6abffd97-c2ca-451e-a648-4aacc2f0437d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "5f0de650-9c62-4323-9354-1e018f4f06df-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:10:54 np0005548731 nova_compute[232433]: 2025-12-06 08:10:54.480 232437 DEBUG nova.compute.manager [req-102e1324-4ac5-459d-babc-34421d02f335 req-6abffd97-c2ca-451e-a648-4aacc2f0437d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 5f0de650-9c62-4323-9354-1e018f4f06df] No waiting events found dispatching network-vif-unplugged-77949a7e-19a2-4c1f-812b-a7d9452af1e3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 03:10:54 np0005548731 nova_compute[232433]: 2025-12-06 08:10:54.481 232437 WARNING nova.compute.manager [req-102e1324-4ac5-459d-babc-34421d02f335 req-6abffd97-c2ca-451e-a648-4aacc2f0437d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 5f0de650-9c62-4323-9354-1e018f4f06df] Received unexpected event network-vif-unplugged-77949a7e-19a2-4c1f-812b-a7d9452af1e3 for instance with vm_state active and task_state resize_migrating.#033[00m
Dec  6 03:10:54 np0005548731 ceph-mon[77458]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 98] Generated table #156: 10624 keys, 13754985 bytes, temperature: kUnknown
Dec  6 03:10:54 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765008654571456, "cf_name": "default", "job": 98, "event": "table_file_creation", "file_number": 156, "file_size": 13754985, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13686994, "index_size": 40358, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 26565, "raw_key_size": 280620, "raw_average_key_size": 26, "raw_value_size": 13501477, "raw_average_value_size": 1270, "num_data_blocks": 1537, "num_entries": 10624, "num_filter_entries": 10624, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765002564, "oldest_key_time": 0, "file_creation_time": 1765008654, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bc01f9a1-fda8-4008-aee1-1fa5ff4371a1", "db_session_id": "CWBL8WMDM4D79BU86E9T", "orig_file_number": 156, "seqno_to_time_mapping": "N/A"}}
Dec  6 03:10:54 np0005548731 ceph-mon[77458]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  6 03:10:54 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:10:54.571736) [db/compaction/compaction_job.cc:1663] [default] [JOB 98] Compacted 1@0 + 1@6 files to L6 => 13754985 bytes
Dec  6 03:10:54 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:10:54.574619) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 92.7 rd, 91.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.8, 10.4 +0.0 blob) out(13.1 +0.0 blob), read-write-amplify(9.3) write-amplify(4.6) OK, records in: 11153, records dropped: 529 output_compression: NoCompression
Dec  6 03:10:54 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:10:54.574655) EVENT_LOG_v1 {"time_micros": 1765008654574639, "job": 98, "event": "compaction_finished", "compaction_time_micros": 149806, "compaction_time_cpu_micros": 61799, "output_level": 6, "num_output_files": 1, "total_output_size": 13754985, "num_input_records": 11153, "num_output_records": 10624, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec  6 03:10:54 np0005548731 ceph-mon[77458]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000155.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  6 03:10:54 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765008654576010, "job": 98, "event": "table_file_deletion", "file_number": 155}
Dec  6 03:10:54 np0005548731 ceph-mon[77458]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000153.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  6 03:10:54 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765008654580065, "job": 98, "event": "table_file_deletion", "file_number": 153}
Dec  6 03:10:54 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:10:54.421506) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 03:10:54 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:10:54.580167) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 03:10:54 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:10:54.580175) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 03:10:54 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:10:54.580177) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 03:10:54 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:10:54.580179) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 03:10:54 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:10:54.580182) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 03:10:54 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:10:54 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:10:54 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:10:54.579 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:10:54 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e413 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:10:55 np0005548731 nova_compute[232433]: 2025-12-06 08:10:55.506 232437 DEBUG oslo_concurrency.lockutils [None req-ed1544f6-01a9-4295-be8c-f51c732b91cc 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Acquiring lock "refresh_cache-5f0de650-9c62-4323-9354-1e018f4f06df" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 03:10:55 np0005548731 nova_compute[232433]: 2025-12-06 08:10:55.507 232437 DEBUG oslo_concurrency.lockutils [None req-ed1544f6-01a9-4295-be8c-f51c732b91cc 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Acquired lock "refresh_cache-5f0de650-9c62-4323-9354-1e018f4f06df" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 03:10:55 np0005548731 nova_compute[232433]: 2025-12-06 08:10:55.507 232437 DEBUG nova.network.neutron [None req-ed1544f6-01a9-4295-be8c-f51c732b91cc 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: 5f0de650-9c62-4323-9354-1e018f4f06df] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec  6 03:10:55 np0005548731 nova_compute[232433]: 2025-12-06 08:10:55.637 232437 DEBUG nova.compute.manager [req-4fd2735f-6f96-4a49-8705-3b55bc38c0ed req-91e2bd8f-dcc1-4112-b82c-0aee181c5111 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 5f0de650-9c62-4323-9354-1e018f4f06df] Received event network-changed-77949a7e-19a2-4c1f-812b-a7d9452af1e3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 03:10:55 np0005548731 nova_compute[232433]: 2025-12-06 08:10:55.637 232437 DEBUG nova.compute.manager [req-4fd2735f-6f96-4a49-8705-3b55bc38c0ed req-91e2bd8f-dcc1-4112-b82c-0aee181c5111 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 5f0de650-9c62-4323-9354-1e018f4f06df] Refreshing instance network info cache due to event network-changed-77949a7e-19a2-4c1f-812b-a7d9452af1e3. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  6 03:10:55 np0005548731 nova_compute[232433]: 2025-12-06 08:10:55.638 232437 DEBUG oslo_concurrency.lockutils [req-4fd2735f-6f96-4a49-8705-3b55bc38c0ed req-91e2bd8f-dcc1-4112-b82c-0aee181c5111 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-5f0de650-9c62-4323-9354-1e018f4f06df" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 03:10:56 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:10:56 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:10:56 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:10:56.217 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:10:56 np0005548731 nova_compute[232433]: 2025-12-06 08:10:56.435 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:10:56 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:10:56 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:10:56 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:10:56.582 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:10:56 np0005548731 nova_compute[232433]: 2025-12-06 08:10:56.644 232437 DEBUG nova.compute.manager [req-2f19a4f5-2ff3-40c0-a999-b72b3ec3806d req-d533477c-59ff-4308-81f7-82ea814c4f2f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 5f0de650-9c62-4323-9354-1e018f4f06df] Received event network-vif-plugged-77949a7e-19a2-4c1f-812b-a7d9452af1e3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 03:10:56 np0005548731 nova_compute[232433]: 2025-12-06 08:10:56.644 232437 DEBUG oslo_concurrency.lockutils [req-2f19a4f5-2ff3-40c0-a999-b72b3ec3806d req-d533477c-59ff-4308-81f7-82ea814c4f2f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "5f0de650-9c62-4323-9354-1e018f4f06df-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:10:56 np0005548731 nova_compute[232433]: 2025-12-06 08:10:56.645 232437 DEBUG oslo_concurrency.lockutils [req-2f19a4f5-2ff3-40c0-a999-b72b3ec3806d req-d533477c-59ff-4308-81f7-82ea814c4f2f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "5f0de650-9c62-4323-9354-1e018f4f06df-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:10:56 np0005548731 nova_compute[232433]: 2025-12-06 08:10:56.645 232437 DEBUG oslo_concurrency.lockutils [req-2f19a4f5-2ff3-40c0-a999-b72b3ec3806d req-d533477c-59ff-4308-81f7-82ea814c4f2f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "5f0de650-9c62-4323-9354-1e018f4f06df-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:10:56 np0005548731 nova_compute[232433]: 2025-12-06 08:10:56.645 232437 DEBUG nova.compute.manager [req-2f19a4f5-2ff3-40c0-a999-b72b3ec3806d req-d533477c-59ff-4308-81f7-82ea814c4f2f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 5f0de650-9c62-4323-9354-1e018f4f06df] No waiting events found dispatching network-vif-plugged-77949a7e-19a2-4c1f-812b-a7d9452af1e3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 03:10:56 np0005548731 nova_compute[232433]: 2025-12-06 08:10:56.645 232437 WARNING nova.compute.manager [req-2f19a4f5-2ff3-40c0-a999-b72b3ec3806d req-d533477c-59ff-4308-81f7-82ea814c4f2f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 5f0de650-9c62-4323-9354-1e018f4f06df] Received unexpected event network-vif-plugged-77949a7e-19a2-4c1f-812b-a7d9452af1e3 for instance with vm_state active and task_state resize_migrated.#033[00m
Dec  6 03:10:56 np0005548731 nova_compute[232433]: 2025-12-06 08:10:56.878 232437 DEBUG nova.network.neutron [None req-ed1544f6-01a9-4295-be8c-f51c732b91cc 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: 5f0de650-9c62-4323-9354-1e018f4f06df] Updating instance_info_cache with network_info: [{"id": "77949a7e-19a2-4c1f-812b-a7d9452af1e3", "address": "fa:16:3e:f6:d1:7b", "network": {"id": "69c999d2-fbd1-484e-8d21-c2d6762854f2", "bridge": "br-int", "label": "tempest-network-smoke--1487736673", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.176", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fd8e24e430c64364ace789d88a68ba5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap77949a7e-19", "ovs_interfaceid": "77949a7e-19a2-4c1f-812b-a7d9452af1e3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 03:10:56 np0005548731 nova_compute[232433]: 2025-12-06 08:10:56.894 232437 DEBUG oslo_concurrency.lockutils [None req-ed1544f6-01a9-4295-be8c-f51c732b91cc 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Releasing lock "refresh_cache-5f0de650-9c62-4323-9354-1e018f4f06df" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 03:10:56 np0005548731 nova_compute[232433]: 2025-12-06 08:10:56.897 232437 DEBUG oslo_concurrency.lockutils [req-4fd2735f-6f96-4a49-8705-3b55bc38c0ed req-91e2bd8f-dcc1-4112-b82c-0aee181c5111 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-5f0de650-9c62-4323-9354-1e018f4f06df" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 03:10:56 np0005548731 nova_compute[232433]: 2025-12-06 08:10:56.898 232437 DEBUG nova.network.neutron [req-4fd2735f-6f96-4a49-8705-3b55bc38c0ed req-91e2bd8f-dcc1-4112-b82c-0aee181c5111 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 5f0de650-9c62-4323-9354-1e018f4f06df] Refreshing network info cache for port 77949a7e-19a2-4c1f-812b-a7d9452af1e3 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  6 03:10:56 np0005548731 nova_compute[232433]: 2025-12-06 08:10:56.984 232437 DEBUG nova.virt.libvirt.driver [None req-ed1544f6-01a9-4295-be8c-f51c732b91cc 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: 5f0de650-9c62-4323-9354-1e018f4f06df] Starting finish_migration finish_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11698#033[00m
Dec  6 03:10:56 np0005548731 nova_compute[232433]: 2025-12-06 08:10:56.986 232437 DEBUG nova.virt.libvirt.driver [None req-ed1544f6-01a9-4295-be8c-f51c732b91cc 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: 5f0de650-9c62-4323-9354-1e018f4f06df] Instance directory exists: not creating _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4719#033[00m
Dec  6 03:10:56 np0005548731 nova_compute[232433]: 2025-12-06 08:10:56.986 232437 INFO nova.virt.libvirt.driver [None req-ed1544f6-01a9-4295-be8c-f51c732b91cc 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: 5f0de650-9c62-4323-9354-1e018f4f06df] Creating image(s)#033[00m
Dec  6 03:10:57 np0005548731 nova_compute[232433]: 2025-12-06 08:10:57.026 232437 DEBUG nova.storage.rbd_utils [None req-ed1544f6-01a9-4295-be8c-f51c732b91cc 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] creating snapshot(nova-resize) on rbd image(5f0de650-9c62-4323-9354-1e018f4f06df_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Dec  6 03:10:57 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e414 e414: 3 total, 3 up, 3 in
Dec  6 03:10:57 np0005548731 nova_compute[232433]: 2025-12-06 08:10:57.432 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:10:57 np0005548731 nova_compute[232433]: 2025-12-06 08:10:57.433 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:10:57 np0005548731 nova_compute[232433]: 2025-12-06 08:10:57.472 232437 DEBUG nova.objects.instance [None req-ed1544f6-01a9-4295-be8c-f51c732b91cc 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Lazy-loading 'trusted_certs' on Instance uuid 5f0de650-9c62-4323-9354-1e018f4f06df obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 03:10:57 np0005548731 nova_compute[232433]: 2025-12-06 08:10:57.595 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:10:57 np0005548731 nova_compute[232433]: 2025-12-06 08:10:57.608 232437 DEBUG nova.virt.libvirt.driver [None req-ed1544f6-01a9-4295-be8c-f51c732b91cc 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: 5f0de650-9c62-4323-9354-1e018f4f06df] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859#033[00m
Dec  6 03:10:57 np0005548731 nova_compute[232433]: 2025-12-06 08:10:57.608 232437 DEBUG nova.virt.libvirt.driver [None req-ed1544f6-01a9-4295-be8c-f51c732b91cc 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: 5f0de650-9c62-4323-9354-1e018f4f06df] Ensure instance console log exists: /var/lib/nova/instances/5f0de650-9c62-4323-9354-1e018f4f06df/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec  6 03:10:57 np0005548731 nova_compute[232433]: 2025-12-06 08:10:57.609 232437 DEBUG oslo_concurrency.lockutils [None req-ed1544f6-01a9-4295-be8c-f51c732b91cc 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:10:57 np0005548731 nova_compute[232433]: 2025-12-06 08:10:57.609 232437 DEBUG oslo_concurrency.lockutils [None req-ed1544f6-01a9-4295-be8c-f51c732b91cc 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:10:57 np0005548731 nova_compute[232433]: 2025-12-06 08:10:57.609 232437 DEBUG oslo_concurrency.lockutils [None req-ed1544f6-01a9-4295-be8c-f51c732b91cc 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:10:57 np0005548731 nova_compute[232433]: 2025-12-06 08:10:57.611 232437 DEBUG nova.virt.libvirt.driver [None req-ed1544f6-01a9-4295-be8c-f51c732b91cc 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: 5f0de650-9c62-4323-9354-1e018f4f06df] Start _get_guest_xml network_info=[{"id": "77949a7e-19a2-4c1f-812b-a7d9452af1e3", "address": "fa:16:3e:f6:d1:7b", "network": {"id": "69c999d2-fbd1-484e-8d21-c2d6762854f2", "bridge": "br-int", "label": "tempest-network-smoke--1487736673", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.176", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-network-smoke--1487736673", "vif_mac": "fa:16:3e:f6:d1:7b"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fd8e24e430c64364ace789d88a68ba5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap77949a7e-19", "ovs_interfaceid": "77949a7e-19a2-4c1f-812b-a7d9452af1e3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'device_type': 'disk', 'encryption_secret_uuid': None, 'disk_bus': 'virtio', 'boot_index': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'encryption_options': None, 'size': 0, 'encrypted': False, 'image_id': '6efab05d-c7cf-4770-a5c3-c806a2739063'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec  6 03:10:57 np0005548731 nova_compute[232433]: 2025-12-06 08:10:57.615 232437 WARNING nova.virt.libvirt.driver [None req-ed1544f6-01a9-4295-be8c-f51c732b91cc 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  6 03:10:57 np0005548731 nova_compute[232433]: 2025-12-06 08:10:57.632 232437 DEBUG nova.virt.libvirt.host [None req-ed1544f6-01a9-4295-be8c-f51c732b91cc 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec  6 03:10:57 np0005548731 nova_compute[232433]: 2025-12-06 08:10:57.633 232437 DEBUG nova.virt.libvirt.host [None req-ed1544f6-01a9-4295-be8c-f51c732b91cc 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec  6 03:10:57 np0005548731 nova_compute[232433]: 2025-12-06 08:10:57.639 232437 DEBUG nova.virt.libvirt.host [None req-ed1544f6-01a9-4295-be8c-f51c732b91cc 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec  6 03:10:57 np0005548731 nova_compute[232433]: 2025-12-06 08:10:57.640 232437 DEBUG nova.virt.libvirt.host [None req-ed1544f6-01a9-4295-be8c-f51c732b91cc 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec  6 03:10:57 np0005548731 nova_compute[232433]: 2025-12-06 08:10:57.641 232437 DEBUG nova.virt.libvirt.driver [None req-ed1544f6-01a9-4295-be8c-f51c732b91cc 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec  6 03:10:57 np0005548731 nova_compute[232433]: 2025-12-06 08:10:57.641 232437 DEBUG nova.virt.hardware [None req-ed1544f6-01a9-4295-be8c-f51c732b91cc 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-06T06:56:24Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fb97f55a-36c0-42f2-8156-c1b04eb23dd0',id=2,is_public=True,memory_mb=192,name='m1.micro',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec  6 03:10:57 np0005548731 nova_compute[232433]: 2025-12-06 08:10:57.641 232437 DEBUG nova.virt.hardware [None req-ed1544f6-01a9-4295-be8c-f51c732b91cc 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec  6 03:10:57 np0005548731 nova_compute[232433]: 2025-12-06 08:10:57.641 232437 DEBUG nova.virt.hardware [None req-ed1544f6-01a9-4295-be8c-f51c732b91cc 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec  6 03:10:57 np0005548731 nova_compute[232433]: 2025-12-06 08:10:57.642 232437 DEBUG nova.virt.hardware [None req-ed1544f6-01a9-4295-be8c-f51c732b91cc 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec  6 03:10:57 np0005548731 nova_compute[232433]: 2025-12-06 08:10:57.642 232437 DEBUG nova.virt.hardware [None req-ed1544f6-01a9-4295-be8c-f51c732b91cc 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec  6 03:10:57 np0005548731 nova_compute[232433]: 2025-12-06 08:10:57.642 232437 DEBUG nova.virt.hardware [None req-ed1544f6-01a9-4295-be8c-f51c732b91cc 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec  6 03:10:57 np0005548731 nova_compute[232433]: 2025-12-06 08:10:57.642 232437 DEBUG nova.virt.hardware [None req-ed1544f6-01a9-4295-be8c-f51c732b91cc 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec  6 03:10:57 np0005548731 nova_compute[232433]: 2025-12-06 08:10:57.642 232437 DEBUG nova.virt.hardware [None req-ed1544f6-01a9-4295-be8c-f51c732b91cc 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec  6 03:10:57 np0005548731 nova_compute[232433]: 2025-12-06 08:10:57.643 232437 DEBUG nova.virt.hardware [None req-ed1544f6-01a9-4295-be8c-f51c732b91cc 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec  6 03:10:57 np0005548731 nova_compute[232433]: 2025-12-06 08:10:57.643 232437 DEBUG nova.virt.hardware [None req-ed1544f6-01a9-4295-be8c-f51c732b91cc 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec  6 03:10:57 np0005548731 nova_compute[232433]: 2025-12-06 08:10:57.643 232437 DEBUG nova.virt.hardware [None req-ed1544f6-01a9-4295-be8c-f51c732b91cc 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec  6 03:10:57 np0005548731 nova_compute[232433]: 2025-12-06 08:10:57.643 232437 DEBUG nova.objects.instance [None req-ed1544f6-01a9-4295-be8c-f51c732b91cc 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Lazy-loading 'vcpu_model' on Instance uuid 5f0de650-9c62-4323-9354-1e018f4f06df obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 03:10:57 np0005548731 nova_compute[232433]: 2025-12-06 08:10:57.661 232437 DEBUG oslo_concurrency.processutils [None req-ed1544f6-01a9-4295-be8c-f51c732b91cc 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 03:10:58 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  6 03:10:58 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/19922362' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  6 03:10:58 np0005548731 nova_compute[232433]: 2025-12-06 08:10:58.080 232437 DEBUG oslo_concurrency.processutils [None req-ed1544f6-01a9-4295-be8c-f51c732b91cc 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.419s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 03:10:58 np0005548731 nova_compute[232433]: 2025-12-06 08:10:58.119 232437 DEBUG oslo_concurrency.processutils [None req-ed1544f6-01a9-4295-be8c-f51c732b91cc 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 03:10:58 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:10:58 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:10:58 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:10:58.219 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:10:58 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  6 03:10:58 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2171125071' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  6 03:10:58 np0005548731 nova_compute[232433]: 2025-12-06 08:10:58.547 232437 DEBUG oslo_concurrency.processutils [None req-ed1544f6-01a9-4295-be8c-f51c732b91cc 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.428s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 03:10:58 np0005548731 nova_compute[232433]: 2025-12-06 08:10:58.552 232437 DEBUG nova.virt.libvirt.vif [None req-ed1544f6-01a9-4295-be8c-f51c732b91cc 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T08:10:03Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-522215966',display_name='tempest-TestNetworkAdvancedServerOps-server-522215966',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-522215966',id=197,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJBGFC8f+teCAymzAzt+sw9RK9rNVgD4R9/B96GYFflUw/uMoFI69FGiUrAFSq23wDgsBI/tn77SEM4/cRaOgXgCYvCjaKqwg+7LU55D+cQXQ9YzNukb2X32Bw/xb16Ugg==',key_name='tempest-TestNetworkAdvancedServerOps-562647975',keypairs=<?>,launch_index=0,launched_at=2025-12-06T08:10:14Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=MigrationContext,new_flavor=Flavor(2),node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=Flavor(1),os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='fd8e24e430c64364ace789d88a68ba5f',ramdisk_id='',reservation_id='r-5fgxfuhj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=ServiceList,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-TestNetworkAdvancedServerOps-1171852383',owner_user_name='tempest-TestNetworkAdvancedServerOps-1171852383-project-member'},tags=<?>,task_state='resize_finish',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T08:10:54Z,user_data=None,user_id='2ed2d17026504d70b893923a85cece4d',uuid=5f0de650-9c62-4323-9354-1e018f4f06df,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "77949a7e-19a2-4c1f-812b-a7d9452af1e3", "address": "fa:16:3e:f6:d1:7b", "network": {"id": "69c999d2-fbd1-484e-8d21-c2d6762854f2", "bridge": "br-int", "label": "tempest-network-smoke--1487736673", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": 
{"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.176", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-network-smoke--1487736673", "vif_mac": "fa:16:3e:f6:d1:7b"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fd8e24e430c64364ace789d88a68ba5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap77949a7e-19", "ovs_interfaceid": "77949a7e-19a2-4c1f-812b-a7d9452af1e3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec  6 03:10:58 np0005548731 nova_compute[232433]: 2025-12-06 08:10:58.552 232437 DEBUG nova.network.os_vif_util [None req-ed1544f6-01a9-4295-be8c-f51c732b91cc 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Converting VIF {"id": "77949a7e-19a2-4c1f-812b-a7d9452af1e3", "address": "fa:16:3e:f6:d1:7b", "network": {"id": "69c999d2-fbd1-484e-8d21-c2d6762854f2", "bridge": "br-int", "label": "tempest-network-smoke--1487736673", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.176", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-network-smoke--1487736673", "vif_mac": "fa:16:3e:f6:d1:7b"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fd8e24e430c64364ace789d88a68ba5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap77949a7e-19", "ovs_interfaceid": "77949a7e-19a2-4c1f-812b-a7d9452af1e3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 03:10:58 np0005548731 nova_compute[232433]: 2025-12-06 08:10:58.553 232437 DEBUG nova.network.os_vif_util [None req-ed1544f6-01a9-4295-be8c-f51c732b91cc 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f6:d1:7b,bridge_name='br-int',has_traffic_filtering=True,id=77949a7e-19a2-4c1f-812b-a7d9452af1e3,network=Network(69c999d2-fbd1-484e-8d21-c2d6762854f2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap77949a7e-19') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 03:10:58 np0005548731 nova_compute[232433]: 2025-12-06 08:10:58.556 232437 DEBUG nova.virt.libvirt.driver [None req-ed1544f6-01a9-4295-be8c-f51c732b91cc 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: 5f0de650-9c62-4323-9354-1e018f4f06df] End _get_guest_xml xml=<domain type="kvm">
Dec  6 03:10:58 np0005548731 nova_compute[232433]:  <uuid>5f0de650-9c62-4323-9354-1e018f4f06df</uuid>
Dec  6 03:10:58 np0005548731 nova_compute[232433]:  <name>instance-000000c5</name>
Dec  6 03:10:58 np0005548731 nova_compute[232433]:  <memory>196608</memory>
Dec  6 03:10:58 np0005548731 nova_compute[232433]:  <vcpu>1</vcpu>
Dec  6 03:10:58 np0005548731 nova_compute[232433]:  <metadata>
Dec  6 03:10:58 np0005548731 nova_compute[232433]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec  6 03:10:58 np0005548731 nova_compute[232433]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec  6 03:10:58 np0005548731 nova_compute[232433]:      <nova:name>tempest-TestNetworkAdvancedServerOps-server-522215966</nova:name>
Dec  6 03:10:58 np0005548731 nova_compute[232433]:      <nova:creationTime>2025-12-06 08:10:57</nova:creationTime>
Dec  6 03:10:58 np0005548731 nova_compute[232433]:      <nova:flavor name="m1.micro">
Dec  6 03:10:58 np0005548731 nova_compute[232433]:        <nova:memory>192</nova:memory>
Dec  6 03:10:58 np0005548731 nova_compute[232433]:        <nova:disk>1</nova:disk>
Dec  6 03:10:58 np0005548731 nova_compute[232433]:        <nova:swap>0</nova:swap>
Dec  6 03:10:58 np0005548731 nova_compute[232433]:        <nova:ephemeral>0</nova:ephemeral>
Dec  6 03:10:58 np0005548731 nova_compute[232433]:        <nova:vcpus>1</nova:vcpus>
Dec  6 03:10:58 np0005548731 nova_compute[232433]:      </nova:flavor>
Dec  6 03:10:58 np0005548731 nova_compute[232433]:      <nova:owner>
Dec  6 03:10:58 np0005548731 nova_compute[232433]:        <nova:user uuid="2ed2d17026504d70b893923a85cece4d">tempest-TestNetworkAdvancedServerOps-1171852383-project-member</nova:user>
Dec  6 03:10:58 np0005548731 nova_compute[232433]:        <nova:project uuid="fd8e24e430c64364ace789d88a68ba5f">tempest-TestNetworkAdvancedServerOps-1171852383</nova:project>
Dec  6 03:10:58 np0005548731 nova_compute[232433]:      </nova:owner>
Dec  6 03:10:58 np0005548731 nova_compute[232433]:      <nova:root type="image" uuid="6efab05d-c7cf-4770-a5c3-c806a2739063"/>
Dec  6 03:10:58 np0005548731 nova_compute[232433]:      <nova:ports>
Dec  6 03:10:58 np0005548731 nova_compute[232433]:        <nova:port uuid="77949a7e-19a2-4c1f-812b-a7d9452af1e3">
Dec  6 03:10:58 np0005548731 nova_compute[232433]:          <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Dec  6 03:10:58 np0005548731 nova_compute[232433]:        </nova:port>
Dec  6 03:10:58 np0005548731 nova_compute[232433]:      </nova:ports>
Dec  6 03:10:58 np0005548731 nova_compute[232433]:    </nova:instance>
Dec  6 03:10:58 np0005548731 nova_compute[232433]:  </metadata>
Dec  6 03:10:58 np0005548731 nova_compute[232433]:  <sysinfo type="smbios">
Dec  6 03:10:58 np0005548731 nova_compute[232433]:    <system>
Dec  6 03:10:58 np0005548731 nova_compute[232433]:      <entry name="manufacturer">RDO</entry>
Dec  6 03:10:58 np0005548731 nova_compute[232433]:      <entry name="product">OpenStack Compute</entry>
Dec  6 03:10:58 np0005548731 nova_compute[232433]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec  6 03:10:58 np0005548731 nova_compute[232433]:      <entry name="serial">5f0de650-9c62-4323-9354-1e018f4f06df</entry>
Dec  6 03:10:58 np0005548731 nova_compute[232433]:      <entry name="uuid">5f0de650-9c62-4323-9354-1e018f4f06df</entry>
Dec  6 03:10:58 np0005548731 nova_compute[232433]:      <entry name="family">Virtual Machine</entry>
Dec  6 03:10:58 np0005548731 nova_compute[232433]:    </system>
Dec  6 03:10:58 np0005548731 nova_compute[232433]:  </sysinfo>
Dec  6 03:10:58 np0005548731 nova_compute[232433]:  <os>
Dec  6 03:10:58 np0005548731 nova_compute[232433]:    <type arch="x86_64" machine="q35">hvm</type>
Dec  6 03:10:58 np0005548731 nova_compute[232433]:    <boot dev="hd"/>
Dec  6 03:10:58 np0005548731 nova_compute[232433]:    <smbios mode="sysinfo"/>
Dec  6 03:10:58 np0005548731 nova_compute[232433]:  </os>
Dec  6 03:10:58 np0005548731 nova_compute[232433]:  <features>
Dec  6 03:10:58 np0005548731 nova_compute[232433]:    <acpi/>
Dec  6 03:10:58 np0005548731 nova_compute[232433]:    <apic/>
Dec  6 03:10:58 np0005548731 nova_compute[232433]:    <vmcoreinfo/>
Dec  6 03:10:58 np0005548731 nova_compute[232433]:  </features>
Dec  6 03:10:58 np0005548731 nova_compute[232433]:  <clock offset="utc">
Dec  6 03:10:58 np0005548731 nova_compute[232433]:    <timer name="pit" tickpolicy="delay"/>
Dec  6 03:10:58 np0005548731 nova_compute[232433]:    <timer name="rtc" tickpolicy="catchup"/>
Dec  6 03:10:58 np0005548731 nova_compute[232433]:    <timer name="hpet" present="no"/>
Dec  6 03:10:58 np0005548731 nova_compute[232433]:  </clock>
Dec  6 03:10:58 np0005548731 nova_compute[232433]:  <cpu mode="custom" match="exact">
Dec  6 03:10:58 np0005548731 nova_compute[232433]:    <model>Nehalem</model>
Dec  6 03:10:58 np0005548731 nova_compute[232433]:    <topology sockets="1" cores="1" threads="1"/>
Dec  6 03:10:58 np0005548731 nova_compute[232433]:  </cpu>
Dec  6 03:10:58 np0005548731 nova_compute[232433]:  <devices>
Dec  6 03:10:58 np0005548731 nova_compute[232433]:    <disk type="network" device="disk">
Dec  6 03:10:58 np0005548731 nova_compute[232433]:      <driver type="raw" cache="none"/>
Dec  6 03:10:58 np0005548731 nova_compute[232433]:      <source protocol="rbd" name="vms/5f0de650-9c62-4323-9354-1e018f4f06df_disk">
Dec  6 03:10:58 np0005548731 nova_compute[232433]:        <host name="192.168.122.100" port="6789"/>
Dec  6 03:10:58 np0005548731 nova_compute[232433]:        <host name="192.168.122.102" port="6789"/>
Dec  6 03:10:58 np0005548731 nova_compute[232433]:        <host name="192.168.122.101" port="6789"/>
Dec  6 03:10:58 np0005548731 nova_compute[232433]:      </source>
Dec  6 03:10:58 np0005548731 nova_compute[232433]:      <auth username="openstack">
Dec  6 03:10:58 np0005548731 nova_compute[232433]:        <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec  6 03:10:58 np0005548731 nova_compute[232433]:      </auth>
Dec  6 03:10:58 np0005548731 nova_compute[232433]:      <target dev="vda" bus="virtio"/>
Dec  6 03:10:58 np0005548731 nova_compute[232433]:    </disk>
Dec  6 03:10:58 np0005548731 nova_compute[232433]:    <disk type="network" device="cdrom">
Dec  6 03:10:58 np0005548731 nova_compute[232433]:      <driver type="raw" cache="none"/>
Dec  6 03:10:58 np0005548731 nova_compute[232433]:      <source protocol="rbd" name="vms/5f0de650-9c62-4323-9354-1e018f4f06df_disk.config">
Dec  6 03:10:58 np0005548731 nova_compute[232433]:        <host name="192.168.122.100" port="6789"/>
Dec  6 03:10:58 np0005548731 nova_compute[232433]:        <host name="192.168.122.102" port="6789"/>
Dec  6 03:10:58 np0005548731 nova_compute[232433]:        <host name="192.168.122.101" port="6789"/>
Dec  6 03:10:58 np0005548731 nova_compute[232433]:      </source>
Dec  6 03:10:58 np0005548731 nova_compute[232433]:      <auth username="openstack">
Dec  6 03:10:58 np0005548731 nova_compute[232433]:        <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec  6 03:10:58 np0005548731 nova_compute[232433]:      </auth>
Dec  6 03:10:58 np0005548731 nova_compute[232433]:      <target dev="sda" bus="sata"/>
Dec  6 03:10:58 np0005548731 nova_compute[232433]:    </disk>
Dec  6 03:10:58 np0005548731 nova_compute[232433]:    <interface type="ethernet">
Dec  6 03:10:58 np0005548731 nova_compute[232433]:      <mac address="fa:16:3e:f6:d1:7b"/>
Dec  6 03:10:58 np0005548731 nova_compute[232433]:      <model type="virtio"/>
Dec  6 03:10:58 np0005548731 nova_compute[232433]:      <driver name="vhost" rx_queue_size="512"/>
Dec  6 03:10:58 np0005548731 nova_compute[232433]:      <mtu size="1442"/>
Dec  6 03:10:58 np0005548731 nova_compute[232433]:      <target dev="tap77949a7e-19"/>
Dec  6 03:10:58 np0005548731 nova_compute[232433]:    </interface>
Dec  6 03:10:58 np0005548731 nova_compute[232433]:    <serial type="pty">
Dec  6 03:10:58 np0005548731 nova_compute[232433]:      <log file="/var/lib/nova/instances/5f0de650-9c62-4323-9354-1e018f4f06df/console.log" append="off"/>
Dec  6 03:10:58 np0005548731 nova_compute[232433]:    </serial>
Dec  6 03:10:58 np0005548731 nova_compute[232433]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec  6 03:10:58 np0005548731 nova_compute[232433]:    <video>
Dec  6 03:10:58 np0005548731 nova_compute[232433]:      <model type="virtio"/>
Dec  6 03:10:58 np0005548731 nova_compute[232433]:    </video>
Dec  6 03:10:58 np0005548731 nova_compute[232433]:    <input type="tablet" bus="usb"/>
Dec  6 03:10:58 np0005548731 nova_compute[232433]:    <rng model="virtio">
Dec  6 03:10:58 np0005548731 nova_compute[232433]:      <backend model="random">/dev/urandom</backend>
Dec  6 03:10:58 np0005548731 nova_compute[232433]:    </rng>
Dec  6 03:10:58 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root"/>
Dec  6 03:10:58 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:10:58 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:10:58 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:10:58 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:10:58 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:10:58 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:10:58 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:10:58 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:10:58 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:10:58 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:10:58 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:10:58 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:10:58 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:10:58 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:10:58 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:10:58 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:10:58 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:10:58 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:10:58 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:10:58 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:10:58 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:10:58 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:10:58 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:10:58 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:10:58 np0005548731 nova_compute[232433]:    <controller type="usb" index="0"/>
Dec  6 03:10:58 np0005548731 nova_compute[232433]:    <memballoon model="virtio">
Dec  6 03:10:58 np0005548731 nova_compute[232433]:      <stats period="10"/>
Dec  6 03:10:58 np0005548731 nova_compute[232433]:    </memballoon>
Dec  6 03:10:58 np0005548731 nova_compute[232433]:  </devices>
Dec  6 03:10:58 np0005548731 nova_compute[232433]: </domain>
Dec  6 03:10:58 np0005548731 nova_compute[232433]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec  6 03:10:58 np0005548731 nova_compute[232433]: 2025-12-06 08:10:58.558 232437 DEBUG nova.virt.libvirt.vif [None req-ed1544f6-01a9-4295-be8c-f51c732b91cc 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T08:10:03Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-522215966',display_name='tempest-TestNetworkAdvancedServerOps-server-522215966',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-522215966',id=197,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJBGFC8f+teCAymzAzt+sw9RK9rNVgD4R9/B96GYFflUw/uMoFI69FGiUrAFSq23wDgsBI/tn77SEM4/cRaOgXgCYvCjaKqwg+7LU55D+cQXQ9YzNukb2X32Bw/xb16Ugg==',key_name='tempest-TestNetworkAdvancedServerOps-562647975',keypairs=<?>,launch_index=0,launched_at=2025-12-06T08:10:14Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=MigrationContext,new_flavor=Flavor(2),node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=Flavor(1),os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='fd8e24e430c64364ace789d88a68ba5f',ramdisk_id='',reservation_id='r-5fgxfuhj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=ServiceList,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-TestNetworkAdvancedServerOps-1171852383',owner_user_name='tempest-TestNetworkAdvancedServerOps-1171852383-project-member'},tags=<?>,task_state='resize_finish',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T08:10:54Z,user_data=None,user_id='2ed2d17026504d70b893923a85cece4d',uuid=5f0de650-9c62-4323-9354-1e018f4f06df,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "77949a7e-19a2-4c1f-812b-a7d9452af1e3", "address": "fa:16:3e:f6:d1:7b", "network": {"id": "69c999d2-fbd1-484e-8d21-c2d6762854f2", "bridge": "br-int", "label": "tempest-network-smoke--1487736673", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": 
{"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.176", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-network-smoke--1487736673", "vif_mac": "fa:16:3e:f6:d1:7b"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fd8e24e430c64364ace789d88a68ba5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap77949a7e-19", "ovs_interfaceid": "77949a7e-19a2-4c1f-812b-a7d9452af1e3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec  6 03:10:58 np0005548731 nova_compute[232433]: 2025-12-06 08:10:58.559 232437 DEBUG nova.network.os_vif_util [None req-ed1544f6-01a9-4295-be8c-f51c732b91cc 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Converting VIF {"id": "77949a7e-19a2-4c1f-812b-a7d9452af1e3", "address": "fa:16:3e:f6:d1:7b", "network": {"id": "69c999d2-fbd1-484e-8d21-c2d6762854f2", "bridge": "br-int", "label": "tempest-network-smoke--1487736673", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.176", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-network-smoke--1487736673", "vif_mac": "fa:16:3e:f6:d1:7b"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fd8e24e430c64364ace789d88a68ba5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap77949a7e-19", "ovs_interfaceid": "77949a7e-19a2-4c1f-812b-a7d9452af1e3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 03:10:58 np0005548731 nova_compute[232433]: 2025-12-06 08:10:58.560 232437 DEBUG nova.network.os_vif_util [None req-ed1544f6-01a9-4295-be8c-f51c732b91cc 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f6:d1:7b,bridge_name='br-int',has_traffic_filtering=True,id=77949a7e-19a2-4c1f-812b-a7d9452af1e3,network=Network(69c999d2-fbd1-484e-8d21-c2d6762854f2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap77949a7e-19') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 03:10:58 np0005548731 nova_compute[232433]: 2025-12-06 08:10:58.560 232437 DEBUG os_vif [None req-ed1544f6-01a9-4295-be8c-f51c732b91cc 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:f6:d1:7b,bridge_name='br-int',has_traffic_filtering=True,id=77949a7e-19a2-4c1f-812b-a7d9452af1e3,network=Network(69c999d2-fbd1-484e-8d21-c2d6762854f2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap77949a7e-19') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec  6 03:10:58 np0005548731 nova_compute[232433]: 2025-12-06 08:10:58.562 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:10:58 np0005548731 nova_compute[232433]: 2025-12-06 08:10:58.562 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 03:10:58 np0005548731 nova_compute[232433]: 2025-12-06 08:10:58.563 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  6 03:10:58 np0005548731 nova_compute[232433]: 2025-12-06 08:10:58.566 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:10:58 np0005548731 nova_compute[232433]: 2025-12-06 08:10:58.566 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap77949a7e-19, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 03:10:58 np0005548731 nova_compute[232433]: 2025-12-06 08:10:58.567 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap77949a7e-19, col_values=(('external_ids', {'iface-id': '77949a7e-19a2-4c1f-812b-a7d9452af1e3', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:f6:d1:7b', 'vm-uuid': '5f0de650-9c62-4323-9354-1e018f4f06df'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 03:10:58 np0005548731 NetworkManager[49182]: <info>  [1765008658.5695] manager: (tap77949a7e-19): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/467)
Dec  6 03:10:58 np0005548731 nova_compute[232433]: 2025-12-06 08:10:58.568 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:10:58 np0005548731 nova_compute[232433]: 2025-12-06 08:10:58.571 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  6 03:10:58 np0005548731 nova_compute[232433]: 2025-12-06 08:10:58.577 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:10:58 np0005548731 nova_compute[232433]: 2025-12-06 08:10:58.577 232437 INFO os_vif [None req-ed1544f6-01a9-4295-be8c-f51c732b91cc 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:f6:d1:7b,bridge_name='br-int',has_traffic_filtering=True,id=77949a7e-19a2-4c1f-812b-a7d9452af1e3,network=Network(69c999d2-fbd1-484e-8d21-c2d6762854f2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap77949a7e-19')#033[00m
Dec  6 03:10:58 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:10:58 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:10:58 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:10:58.603 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:10:58 np0005548731 nova_compute[232433]: 2025-12-06 08:10:58.648 232437 DEBUG nova.virt.libvirt.driver [None req-ed1544f6-01a9-4295-be8c-f51c732b91cc 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  6 03:10:58 np0005548731 nova_compute[232433]: 2025-12-06 08:10:58.649 232437 DEBUG nova.virt.libvirt.driver [None req-ed1544f6-01a9-4295-be8c-f51c732b91cc 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  6 03:10:58 np0005548731 nova_compute[232433]: 2025-12-06 08:10:58.649 232437 DEBUG nova.virt.libvirt.driver [None req-ed1544f6-01a9-4295-be8c-f51c732b91cc 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] No VIF found with MAC fa:16:3e:f6:d1:7b, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec  6 03:10:58 np0005548731 nova_compute[232433]: 2025-12-06 08:10:58.649 232437 INFO nova.virt.libvirt.driver [None req-ed1544f6-01a9-4295-be8c-f51c732b91cc 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: 5f0de650-9c62-4323-9354-1e018f4f06df] Using config drive#033[00m
Dec  6 03:10:58 np0005548731 kernel: tap77949a7e-19: entered promiscuous mode
Dec  6 03:10:58 np0005548731 NetworkManager[49182]: <info>  [1765008658.7430] manager: (tap77949a7e-19): new Tun device (/org/freedesktop/NetworkManager/Devices/468)
Dec  6 03:10:58 np0005548731 ovn_controller[133927]: 2025-12-06T08:10:58Z|01002|binding|INFO|Claiming lport 77949a7e-19a2-4c1f-812b-a7d9452af1e3 for this chassis.
Dec  6 03:10:58 np0005548731 ovn_controller[133927]: 2025-12-06T08:10:58Z|01003|binding|INFO|77949a7e-19a2-4c1f-812b-a7d9452af1e3: Claiming fa:16:3e:f6:d1:7b 10.100.0.12
Dec  6 03:10:58 np0005548731 nova_compute[232433]: 2025-12-06 08:10:58.746 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:10:58 np0005548731 nova_compute[232433]: 2025-12-06 08:10:58.750 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:10:58 np0005548731 nova_compute[232433]: 2025-12-06 08:10:58.758 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:10:58 np0005548731 NetworkManager[49182]: <info>  [1765008658.7603] manager: (patch-provnet-9e78c1a1-68f4-477a-abaa-13a98bde06e5-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/469)
Dec  6 03:10:58 np0005548731 NetworkManager[49182]: <info>  [1765008658.7608] manager: (patch-br-int-to-provnet-9e78c1a1-68f4-477a-abaa-13a98bde06e5): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/470)
Dec  6 03:10:58 np0005548731 nova_compute[232433]: 2025-12-06 08:10:58.759 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:10:58 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:10:58.765 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f6:d1:7b 10.100.0.12'], port_security=['fa:16:3e:f6:d1:7b 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '5f0de650-9c62-4323-9354-1e018f4f06df', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-69c999d2-fbd1-484e-8d21-c2d6762854f2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'fd8e24e430c64364ace789d88a68ba5f', 'neutron:revision_number': '6', 'neutron:security_group_ids': '2de988d7-fe23-4a58-9aaa-b9003f6854a8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.176'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=96ca9842-9e49-4df6-a044-c7e848c3bb8a, chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], logical_port=77949a7e-19a2-4c1f-812b-a7d9452af1e3) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 03:10:58 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:10:58.767 143965 INFO neutron.agent.ovn.metadata.agent [-] Port 77949a7e-19a2-4c1f-812b-a7d9452af1e3 in datapath 69c999d2-fbd1-484e-8d21-c2d6762854f2 bound to our chassis#033[00m
Dec  6 03:10:58 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:10:58.768 143965 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 69c999d2-fbd1-484e-8d21-c2d6762854f2#033[00m
Dec  6 03:10:58 np0005548731 systemd-udevd[331720]: Network interface NamePolicy= disabled on kernel command line.
Dec  6 03:10:58 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:10:58.781 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[51e9eb20-5906-43b3-bc97-30cb3a8419fa]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:10:58 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:10:58.782 143965 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap69c999d2-f1 in ovnmeta-69c999d2-fbd1-484e-8d21-c2d6762854f2 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Dec  6 03:10:58 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:10:58.784 236668 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap69c999d2-f0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Dec  6 03:10:58 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:10:58.784 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[7c8d58fd-8a27-44ca-b62c-d958466ad9b2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:10:58 np0005548731 NetworkManager[49182]: <info>  [1765008658.7863] device (tap77949a7e-19): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec  6 03:10:58 np0005548731 NetworkManager[49182]: <info>  [1765008658.7870] device (tap77949a7e-19): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec  6 03:10:58 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:10:58.785 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[03284f00-22bf-453d-a762-81c126c55c75]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:10:58 np0005548731 systemd-machined[195355]: New machine qemu-101-instance-000000c5.
Dec  6 03:10:58 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:10:58.796 144079 DEBUG oslo.privsep.daemon [-] privsep: reply[ddc4af54-8a33-4097-ab2e-ec3bc9f62fd8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:10:58 np0005548731 systemd[1]: Started Virtual Machine qemu-101-instance-000000c5.
Dec  6 03:10:58 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:10:58.822 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[a3aec36d-5bd9-4635-8c35-5d3acff319aa]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:10:58 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:10:58.856 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[a427fe9e-e72d-4cff-a6f6-219c5fbf04a5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:10:58 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:10:58.864 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[1de4a370-3d70-4e15-9f96-6671ca166252]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:10:58 np0005548731 NetworkManager[49182]: <info>  [1765008658.8655] manager: (tap69c999d2-f0): new Veth device (/org/freedesktop/NetworkManager/Devices/471)
Dec  6 03:10:58 np0005548731 nova_compute[232433]: 2025-12-06 08:10:58.889 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:10:58 np0005548731 nova_compute[232433]: 2025-12-06 08:10:58.892 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:10:58 np0005548731 nova_compute[232433]: 2025-12-06 08:10:58.903 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:10:58 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:10:58.903 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[702e67e7-0c79-4c78-bf18-5f3abae39489]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:10:58 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:10:58.906 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[0bc3e972-b43c-4e72-a87d-6037e6762a8e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:10:58 np0005548731 ovn_controller[133927]: 2025-12-06T08:10:58Z|01004|binding|INFO|Setting lport 77949a7e-19a2-4c1f-812b-a7d9452af1e3 ovn-installed in OVS
Dec  6 03:10:58 np0005548731 ovn_controller[133927]: 2025-12-06T08:10:58Z|01005|binding|INFO|Setting lport 77949a7e-19a2-4c1f-812b-a7d9452af1e3 up in Southbound
Dec  6 03:10:58 np0005548731 nova_compute[232433]: 2025-12-06 08:10:58.917 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:10:58 np0005548731 NetworkManager[49182]: <info>  [1765008658.9314] device (tap69c999d2-f0): carrier: link connected
Dec  6 03:10:58 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:10:58.937 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[877375d0-0f53-4a5d-a4aa-ce101bd32f8e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:10:58 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:10:58.956 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[0904fc64-d9f9-4c4e-b1a4-4320e855cc52]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap69c999d2-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e9:6d:f6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 305], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 892093, 'reachable_time': 30144, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 331755, 'error': None, 'target': 'ovnmeta-69c999d2-fbd1-484e-8d21-c2d6762854f2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:10:58 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:10:58.972 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[844911f3-c8d5-49c1-9276-0e0bf115a6af]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fee9:6df6'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 892093, 'tstamp': 892093}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 331756, 'error': None, 'target': 'ovnmeta-69c999d2-fbd1-484e-8d21-c2d6762854f2', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:10:58 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:10:58.991 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[9dba4488-0283-42bb-abbc-09a26fdab072]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap69c999d2-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e9:6d:f6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 305], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 892093, 'reachable_time': 30144, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 331757, 'error': None, 'target': 'ovnmeta-69c999d2-fbd1-484e-8d21-c2d6762854f2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:10:59 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:10:59.019 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[364b3b9e-ab1b-4dec-bbf9-e748ab264bfd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:10:59 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:10:59.086 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[b3c9a092-8f2d-4044-9963-c71ec613d42b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:10:59 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:10:59.087 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap69c999d2-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 03:10:59 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:10:59.087 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  6 03:10:59 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:10:59.088 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap69c999d2-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 03:10:59 np0005548731 NetworkManager[49182]: <info>  [1765008659.0901] manager: (tap69c999d2-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/472)
Dec  6 03:10:59 np0005548731 kernel: tap69c999d2-f0: entered promiscuous mode
Dec  6 03:10:59 np0005548731 nova_compute[232433]: 2025-12-06 08:10:59.089 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:10:59 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:10:59.092 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap69c999d2-f0, col_values=(('external_ids', {'iface-id': '471bbdbd-5382-4586-8184-aed859e8406d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 03:10:59 np0005548731 nova_compute[232433]: 2025-12-06 08:10:59.092 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:10:59 np0005548731 ovn_controller[133927]: 2025-12-06T08:10:59Z|01006|binding|INFO|Releasing lport 471bbdbd-5382-4586-8184-aed859e8406d from this chassis (sb_readonly=0)
Dec  6 03:10:59 np0005548731 nova_compute[232433]: 2025-12-06 08:10:59.108 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:10:59 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:10:59.109 143965 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/69c999d2-fbd1-484e-8d21-c2d6762854f2.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/69c999d2-fbd1-484e-8d21-c2d6762854f2.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Dec  6 03:10:59 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:10:59.110 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[9c4aa03b-161a-41ea-8128-51a20b6c3902]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:10:59 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:10:59.111 143965 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec  6 03:10:59 np0005548731 ovn_metadata_agent[143960]: global
Dec  6 03:10:59 np0005548731 ovn_metadata_agent[143960]:    log         /dev/log local0 debug
Dec  6 03:10:59 np0005548731 ovn_metadata_agent[143960]:    log-tag     haproxy-metadata-proxy-69c999d2-fbd1-484e-8d21-c2d6762854f2
Dec  6 03:10:59 np0005548731 ovn_metadata_agent[143960]:    user        root
Dec  6 03:10:59 np0005548731 ovn_metadata_agent[143960]:    group       root
Dec  6 03:10:59 np0005548731 ovn_metadata_agent[143960]:    maxconn     1024
Dec  6 03:10:59 np0005548731 ovn_metadata_agent[143960]:    pidfile     /var/lib/neutron/external/pids/69c999d2-fbd1-484e-8d21-c2d6762854f2.pid.haproxy
Dec  6 03:10:59 np0005548731 ovn_metadata_agent[143960]:    daemon
Dec  6 03:10:59 np0005548731 ovn_metadata_agent[143960]: 
Dec  6 03:10:59 np0005548731 ovn_metadata_agent[143960]: defaults
Dec  6 03:10:59 np0005548731 ovn_metadata_agent[143960]:    log global
Dec  6 03:10:59 np0005548731 ovn_metadata_agent[143960]:    mode http
Dec  6 03:10:59 np0005548731 ovn_metadata_agent[143960]:    option httplog
Dec  6 03:10:59 np0005548731 ovn_metadata_agent[143960]:    option dontlognull
Dec  6 03:10:59 np0005548731 ovn_metadata_agent[143960]:    option http-server-close
Dec  6 03:10:59 np0005548731 ovn_metadata_agent[143960]:    option forwardfor
Dec  6 03:10:59 np0005548731 ovn_metadata_agent[143960]:    retries                 3
Dec  6 03:10:59 np0005548731 ovn_metadata_agent[143960]:    timeout http-request    30s
Dec  6 03:10:59 np0005548731 ovn_metadata_agent[143960]:    timeout connect         30s
Dec  6 03:10:59 np0005548731 ovn_metadata_agent[143960]:    timeout client          32s
Dec  6 03:10:59 np0005548731 ovn_metadata_agent[143960]:    timeout server          32s
Dec  6 03:10:59 np0005548731 ovn_metadata_agent[143960]:    timeout http-keep-alive 30s
Dec  6 03:10:59 np0005548731 ovn_metadata_agent[143960]: 
Dec  6 03:10:59 np0005548731 ovn_metadata_agent[143960]: 
Dec  6 03:10:59 np0005548731 ovn_metadata_agent[143960]: listen listener
Dec  6 03:10:59 np0005548731 ovn_metadata_agent[143960]:    bind 169.254.169.254:80
Dec  6 03:10:59 np0005548731 ovn_metadata_agent[143960]:    server metadata /var/lib/neutron/metadata_proxy
Dec  6 03:10:59 np0005548731 ovn_metadata_agent[143960]:    http-request add-header X-OVN-Network-ID 69c999d2-fbd1-484e-8d21-c2d6762854f2
Dec  6 03:10:59 np0005548731 ovn_metadata_agent[143960]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Dec  6 03:10:59 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:10:59.111 143965 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-69c999d2-fbd1-484e-8d21-c2d6762854f2', 'env', 'PROCESS_TAG=haproxy-69c999d2-fbd1-484e-8d21-c2d6762854f2', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/69c999d2-fbd1-484e-8d21-c2d6762854f2.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Dec  6 03:10:59 np0005548731 podman[331791]: 2025-12-06 08:10:59.46478403 +0000 UTC m=+0.054260383 container create c5d7cec350738ea9f4a28dd2e98c6ecbc8e33e616d93a22ee031ef6e38bb2bf5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-69c999d2-fbd1-484e-8d21-c2d6762854f2, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true)
Dec  6 03:10:59 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 03:10:59 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 03:10:59 np0005548731 systemd[1]: Started libpod-conmon-c5d7cec350738ea9f4a28dd2e98c6ecbc8e33e616d93a22ee031ef6e38bb2bf5.scope.
Dec  6 03:10:59 np0005548731 podman[331791]: 2025-12-06 08:10:59.435130308 +0000 UTC m=+0.024606691 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec  6 03:10:59 np0005548731 systemd[1]: Started libcrun container.
Dec  6 03:10:59 np0005548731 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/761681895c1f6c1fc6ad011c8b299645eb9d6f734507a46c0af36a9f07b3f1c0/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec  6 03:10:59 np0005548731 podman[331804]: 2025-12-06 08:10:59.572687257 +0000 UTC m=+0.065992598 container health_status 26c2ce9bdb83c6db5ccad7f5b2a7cb4e9354ab0235ad2f942ea42c8feda2142b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, 
config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  6 03:10:59 np0005548731 podman[331791]: 2025-12-06 08:10:59.574098452 +0000 UTC m=+0.163574815 container init c5d7cec350738ea9f4a28dd2e98c6ecbc8e33e616d93a22ee031ef6e38bb2bf5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-69c999d2-fbd1-484e-8d21-c2d6762854f2, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS)
Dec  6 03:10:59 np0005548731 podman[331806]: 2025-12-06 08:10:59.576693545 +0000 UTC m=+0.069102334 container health_status ae4fd08cdec31d6ae2a6b91ffff7deb6ca5a8375cb52ee4b685cc6f25796824e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  6 03:10:59 np0005548731 podman[331791]: 2025-12-06 08:10:59.583459169 +0000 UTC m=+0.172935512 container start c5d7cec350738ea9f4a28dd2e98c6ecbc8e33e616d93a22ee031ef6e38bb2bf5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-69c999d2-fbd1-484e-8d21-c2d6762854f2, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Dec  6 03:10:59 np0005548731 neutron-haproxy-ovnmeta-69c999d2-fbd1-484e-8d21-c2d6762854f2[331834]: [NOTICE]   (331870) : New worker (331878) forked
Dec  6 03:10:59 np0005548731 neutron-haproxy-ovnmeta-69c999d2-fbd1-484e-8d21-c2d6762854f2[331834]: [NOTICE]   (331870) : Loading success.
Dec  6 03:10:59 np0005548731 podman[331805]: 2025-12-06 08:10:59.60810235 +0000 UTC m=+0.099854243 container health_status a118c3becf56cfbcae208065771ebf10a65162bb7b96389971293e534823fb78 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Dec  6 03:10:59 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e414 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:10:59 np0005548731 nova_compute[232433]: 2025-12-06 08:10:59.913 232437 DEBUG nova.virt.driver [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Emitting event <LifecycleEvent: 1765008659.9128587, 5f0de650-9c62-4323-9354-1e018f4f06df => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 03:10:59 np0005548731 nova_compute[232433]: 2025-12-06 08:10:59.913 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 5f0de650-9c62-4323-9354-1e018f4f06df] VM Resumed (Lifecycle Event)#033[00m
Dec  6 03:10:59 np0005548731 nova_compute[232433]: 2025-12-06 08:10:59.915 232437 DEBUG nova.compute.manager [None req-ed1544f6-01a9-4295-be8c-f51c732b91cc 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: 5f0de650-9c62-4323-9354-1e018f4f06df] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec  6 03:10:59 np0005548731 nova_compute[232433]: 2025-12-06 08:10:59.919 232437 INFO nova.virt.libvirt.driver [-] [instance: 5f0de650-9c62-4323-9354-1e018f4f06df] Instance running successfully.#033[00m
Dec  6 03:10:59 np0005548731 virtqemud[232080]: argument unsupported: QEMU guest agent is not configured
Dec  6 03:10:59 np0005548731 nova_compute[232433]: 2025-12-06 08:10:59.922 232437 DEBUG nova.virt.libvirt.guest [None req-ed1544f6-01a9-4295-be8c-f51c732b91cc 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: 5f0de650-9c62-4323-9354-1e018f4f06df] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200#033[00m
Dec  6 03:10:59 np0005548731 nova_compute[232433]: 2025-12-06 08:10:59.922 232437 DEBUG nova.virt.libvirt.driver [None req-ed1544f6-01a9-4295-be8c-f51c732b91cc 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: 5f0de650-9c62-4323-9354-1e018f4f06df] finish_migration finished successfully. finish_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11793#033[00m
Dec  6 03:10:59 np0005548731 nova_compute[232433]: 2025-12-06 08:10:59.989 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 5f0de650-9c62-4323-9354-1e018f4f06df] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 03:10:59 np0005548731 nova_compute[232433]: 2025-12-06 08:10:59.991 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 5f0de650-9c62-4323-9354-1e018f4f06df] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: resize_finish, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  6 03:11:00 np0005548731 nova_compute[232433]: 2025-12-06 08:11:00.067 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 5f0de650-9c62-4323-9354-1e018f4f06df] During sync_power_state the instance has a pending task (resize_finish). Skip.#033[00m
Dec  6 03:11:00 np0005548731 nova_compute[232433]: 2025-12-06 08:11:00.068 232437 DEBUG nova.virt.driver [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Emitting event <LifecycleEvent: 1765008659.914639, 5f0de650-9c62-4323-9354-1e018f4f06df => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 03:11:00 np0005548731 nova_compute[232433]: 2025-12-06 08:11:00.068 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 5f0de650-9c62-4323-9354-1e018f4f06df] VM Started (Lifecycle Event)#033[00m
Dec  6 03:11:00 np0005548731 nova_compute[232433]: 2025-12-06 08:11:00.114 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 5f0de650-9c62-4323-9354-1e018f4f06df] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 03:11:00 np0005548731 nova_compute[232433]: 2025-12-06 08:11:00.118 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 5f0de650-9c62-4323-9354-1e018f4f06df] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: resize_finish, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  6 03:11:00 np0005548731 nova_compute[232433]: 2025-12-06 08:11:00.151 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 5f0de650-9c62-4323-9354-1e018f4f06df] During sync_power_state the instance has a pending task (resize_finish). Skip.#033[00m
Dec  6 03:11:00 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:11:00 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:11:00 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:11:00.221 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:11:00 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 03:11:00 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 03:11:00 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  6 03:11:00 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 03:11:00 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  6 03:11:00 np0005548731 ceph-mon[77458]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #157. Immutable memtables: 0.
Dec  6 03:11:00 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:11:00.507758) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec  6 03:11:00 np0005548731 ceph-mon[77458]: rocksdb: [db/flush_job.cc:856] [default] [JOB 99] Flushing memtable with next log file: 157
Dec  6 03:11:00 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765008660507840, "job": 99, "event": "flush_started", "num_memtables": 1, "num_entries": 383, "num_deletes": 251, "total_data_size": 382837, "memory_usage": 391416, "flush_reason": "Manual Compaction"}
Dec  6 03:11:00 np0005548731 ceph-mon[77458]: rocksdb: [db/flush_job.cc:885] [default] [JOB 99] Level-0 flush table #158: started
Dec  6 03:11:00 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765008660511707, "cf_name": "default", "job": 99, "event": "table_file_creation", "file_number": 158, "file_size": 252984, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 77800, "largest_seqno": 78178, "table_properties": {"data_size": 250560, "index_size": 523, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 837, "raw_key_size": 6131, "raw_average_key_size": 19, "raw_value_size": 245661, "raw_average_value_size": 777, "num_data_blocks": 21, "num_entries": 316, "num_filter_entries": 316, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765008654, "oldest_key_time": 1765008654, "file_creation_time": 1765008660, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bc01f9a1-fda8-4008-aee1-1fa5ff4371a1", "db_session_id": "CWBL8WMDM4D79BU86E9T", "orig_file_number": 158, "seqno_to_time_mapping": "N/A"}}
Dec  6 03:11:00 np0005548731 ceph-mon[77458]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 99] Flush lasted 3981 microseconds, and 1655 cpu microseconds.
Dec  6 03:11:00 np0005548731 ceph-mon[77458]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  6 03:11:00 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:11:00.511752) [db/flush_job.cc:967] [default] [JOB 99] Level-0 flush table #158: 252984 bytes OK
Dec  6 03:11:00 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:11:00.511770) [db/memtable_list.cc:519] [default] Level-0 commit table #158 started
Dec  6 03:11:00 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:11:00.513429) [db/memtable_list.cc:722] [default] Level-0 commit table #158: memtable #1 done
Dec  6 03:11:00 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:11:00.513443) EVENT_LOG_v1 {"time_micros": 1765008660513438, "job": 99, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec  6 03:11:00 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:11:00.513461) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec  6 03:11:00 np0005548731 ceph-mon[77458]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 99] Try to delete WAL files size 380265, prev total WAL file size 380265, number of live WAL files 2.
Dec  6 03:11:00 np0005548731 ceph-mon[77458]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000154.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  6 03:11:00 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:11:00.514025) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730036353236' seq:72057594037927935, type:22 .. '7061786F730036373738' seq:0, type:0; will stop at (end)
Dec  6 03:11:00 np0005548731 ceph-mon[77458]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 100] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec  6 03:11:00 np0005548731 ceph-mon[77458]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 99 Base level 0, inputs: [158(247KB)], [156(13MB)]
Dec  6 03:11:00 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765008660514072, "job": 100, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [158], "files_L6": [156], "score": -1, "input_data_size": 14007969, "oldest_snapshot_seqno": -1}
Dec  6 03:11:00 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:11:00 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:11:00 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:11:00.605 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:11:00 np0005548731 ceph-mon[77458]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 100] Generated table #159: 10420 keys, 11996394 bytes, temperature: kUnknown
Dec  6 03:11:00 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765008660609501, "cf_name": "default", "job": 100, "event": "table_file_creation", "file_number": 159, "file_size": 11996394, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11931154, "index_size": 38055, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 26117, "raw_key_size": 277097, "raw_average_key_size": 26, "raw_value_size": 11750687, "raw_average_value_size": 1127, "num_data_blocks": 1432, "num_entries": 10420, "num_filter_entries": 10420, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765002564, "oldest_key_time": 0, "file_creation_time": 1765008660, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bc01f9a1-fda8-4008-aee1-1fa5ff4371a1", "db_session_id": "CWBL8WMDM4D79BU86E9T", "orig_file_number": 159, "seqno_to_time_mapping": "N/A"}}
Dec  6 03:11:00 np0005548731 ceph-mon[77458]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  6 03:11:00 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:11:00.609797) [db/compaction/compaction_job.cc:1663] [default] [JOB 100] Compacted 1@0 + 1@6 files to L6 => 11996394 bytes
Dec  6 03:11:00 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:11:00.738980) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 146.6 rd, 125.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.2, 13.1 +0.0 blob) out(11.4 +0.0 blob), read-write-amplify(102.8) write-amplify(47.4) OK, records in: 10940, records dropped: 520 output_compression: NoCompression
Dec  6 03:11:00 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:11:00.739021) EVENT_LOG_v1 {"time_micros": 1765008660739005, "job": 100, "event": "compaction_finished", "compaction_time_micros": 95567, "compaction_time_cpu_micros": 37194, "output_level": 6, "num_output_files": 1, "total_output_size": 11996394, "num_input_records": 10940, "num_output_records": 10420, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec  6 03:11:00 np0005548731 ceph-mon[77458]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000158.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  6 03:11:00 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765008660739454, "job": 100, "event": "table_file_deletion", "file_number": 158}
Dec  6 03:11:00 np0005548731 ceph-mon[77458]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000156.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  6 03:11:00 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765008660742224, "job": 100, "event": "table_file_deletion", "file_number": 156}
Dec  6 03:11:00 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:11:00.513939) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 03:11:00 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:11:00.742382) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 03:11:00 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:11:00.742389) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 03:11:00 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:11:00.742391) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 03:11:00 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:11:00.742393) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 03:11:00 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:11:00.742395) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 03:11:00 np0005548731 systemd[1]: Stopping User Manager for UID 42436...
Dec  6 03:11:00 np0005548731 systemd[331300]: Activating special unit Exit the Session...
Dec  6 03:11:00 np0005548731 systemd[331300]: Stopped target Main User Target.
Dec  6 03:11:00 np0005548731 systemd[331300]: Stopped target Basic System.
Dec  6 03:11:00 np0005548731 systemd[331300]: Stopped target Paths.
Dec  6 03:11:00 np0005548731 systemd[331300]: Stopped target Sockets.
Dec  6 03:11:00 np0005548731 systemd[331300]: Stopped target Timers.
Dec  6 03:11:00 np0005548731 systemd[331300]: Stopped Mark boot as successful after the user session has run 2 minutes.
Dec  6 03:11:00 np0005548731 systemd[331300]: Stopped Daily Cleanup of User's Temporary Directories.
Dec  6 03:11:00 np0005548731 systemd[331300]: Closed D-Bus User Message Bus Socket.
Dec  6 03:11:00 np0005548731 systemd[331300]: Stopped Create User's Volatile Files and Directories.
Dec  6 03:11:00 np0005548731 systemd[331300]: Removed slice User Application Slice.
Dec  6 03:11:00 np0005548731 systemd[331300]: Reached target Shutdown.
Dec  6 03:11:00 np0005548731 systemd[331300]: Finished Exit the Session.
Dec  6 03:11:00 np0005548731 systemd[331300]: Reached target Exit the Session.
Dec  6 03:11:00 np0005548731 systemd[1]: user@42436.service: Deactivated successfully.
Dec  6 03:11:00 np0005548731 systemd[1]: Stopped User Manager for UID 42436.
Dec  6 03:11:00 np0005548731 systemd[1]: Stopping User Runtime Directory /run/user/42436...
Dec  6 03:11:00 np0005548731 systemd[1]: run-user-42436.mount: Deactivated successfully.
Dec  6 03:11:00 np0005548731 systemd[1]: user-runtime-dir@42436.service: Deactivated successfully.
Dec  6 03:11:00 np0005548731 systemd[1]: Stopped User Runtime Directory /run/user/42436.
Dec  6 03:11:00 np0005548731 systemd[1]: Removed slice User Slice of UID 42436.
Dec  6 03:11:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:11:00.913 143965 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:11:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:11:00.914 143965 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:11:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:11:00.915 143965 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:11:00 np0005548731 nova_compute[232433]: 2025-12-06 08:11:00.958 232437 DEBUG nova.compute.manager [req-0463687c-11b3-487a-8a60-79a62785d3f0 req-e46d64f6-af56-4738-968b-31ae108daee6 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 5f0de650-9c62-4323-9354-1e018f4f06df] Received event network-vif-plugged-77949a7e-19a2-4c1f-812b-a7d9452af1e3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 03:11:00 np0005548731 nova_compute[232433]: 2025-12-06 08:11:00.958 232437 DEBUG oslo_concurrency.lockutils [req-0463687c-11b3-487a-8a60-79a62785d3f0 req-e46d64f6-af56-4738-968b-31ae108daee6 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "5f0de650-9c62-4323-9354-1e018f4f06df-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:11:00 np0005548731 nova_compute[232433]: 2025-12-06 08:11:00.959 232437 DEBUG oslo_concurrency.lockutils [req-0463687c-11b3-487a-8a60-79a62785d3f0 req-e46d64f6-af56-4738-968b-31ae108daee6 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "5f0de650-9c62-4323-9354-1e018f4f06df-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:11:00 np0005548731 nova_compute[232433]: 2025-12-06 08:11:00.959 232437 DEBUG oslo_concurrency.lockutils [req-0463687c-11b3-487a-8a60-79a62785d3f0 req-e46d64f6-af56-4738-968b-31ae108daee6 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "5f0de650-9c62-4323-9354-1e018f4f06df-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:11:00 np0005548731 nova_compute[232433]: 2025-12-06 08:11:00.959 232437 DEBUG nova.compute.manager [req-0463687c-11b3-487a-8a60-79a62785d3f0 req-e46d64f6-af56-4738-968b-31ae108daee6 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 5f0de650-9c62-4323-9354-1e018f4f06df] No waiting events found dispatching network-vif-plugged-77949a7e-19a2-4c1f-812b-a7d9452af1e3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 03:11:00 np0005548731 nova_compute[232433]: 2025-12-06 08:11:00.959 232437 WARNING nova.compute.manager [req-0463687c-11b3-487a-8a60-79a62785d3f0 req-e46d64f6-af56-4738-968b-31ae108daee6 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 5f0de650-9c62-4323-9354-1e018f4f06df] Received unexpected event network-vif-plugged-77949a7e-19a2-4c1f-812b-a7d9452af1e3 for instance with vm_state resized and task_state None.#033[00m
Dec  6 03:11:01 np0005548731 nova_compute[232433]: 2025-12-06 08:11:01.437 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:11:01 np0005548731 nova_compute[232433]: 2025-12-06 08:11:01.694 232437 DEBUG nova.network.neutron [req-4fd2735f-6f96-4a49-8705-3b55bc38c0ed req-91e2bd8f-dcc1-4112-b82c-0aee181c5111 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 5f0de650-9c62-4323-9354-1e018f4f06df] Updated VIF entry in instance network info cache for port 77949a7e-19a2-4c1f-812b-a7d9452af1e3. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  6 03:11:01 np0005548731 nova_compute[232433]: 2025-12-06 08:11:01.694 232437 DEBUG nova.network.neutron [req-4fd2735f-6f96-4a49-8705-3b55bc38c0ed req-91e2bd8f-dcc1-4112-b82c-0aee181c5111 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 5f0de650-9c62-4323-9354-1e018f4f06df] Updating instance_info_cache with network_info: [{"id": "77949a7e-19a2-4c1f-812b-a7d9452af1e3", "address": "fa:16:3e:f6:d1:7b", "network": {"id": "69c999d2-fbd1-484e-8d21-c2d6762854f2", "bridge": "br-int", "label": "tempest-network-smoke--1487736673", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.176", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fd8e24e430c64364ace789d88a68ba5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap77949a7e-19", "ovs_interfaceid": "77949a7e-19a2-4c1f-812b-a7d9452af1e3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 03:11:01 np0005548731 nova_compute[232433]: 2025-12-06 08:11:01.716 232437 DEBUG oslo_concurrency.lockutils [req-4fd2735f-6f96-4a49-8705-3b55bc38c0ed req-91e2bd8f-dcc1-4112-b82c-0aee181c5111 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-5f0de650-9c62-4323-9354-1e018f4f06df" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 03:11:02 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:11:02 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:11:02 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:11:02.223 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:11:02 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:11:02 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:11:02 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:11:02.609 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:11:03 np0005548731 nova_compute[232433]: 2025-12-06 08:11:03.111 232437 DEBUG nova.compute.manager [req-085feaf0-7a88-4a57-b435-5b15c3c3411d req-0b10a99b-f20f-45a9-9000-804ce0bdc640 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 5f0de650-9c62-4323-9354-1e018f4f06df] Received event network-vif-plugged-77949a7e-19a2-4c1f-812b-a7d9452af1e3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 03:11:03 np0005548731 nova_compute[232433]: 2025-12-06 08:11:03.112 232437 DEBUG oslo_concurrency.lockutils [req-085feaf0-7a88-4a57-b435-5b15c3c3411d req-0b10a99b-f20f-45a9-9000-804ce0bdc640 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "5f0de650-9c62-4323-9354-1e018f4f06df-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:11:03 np0005548731 nova_compute[232433]: 2025-12-06 08:11:03.113 232437 DEBUG oslo_concurrency.lockutils [req-085feaf0-7a88-4a57-b435-5b15c3c3411d req-0b10a99b-f20f-45a9-9000-804ce0bdc640 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "5f0de650-9c62-4323-9354-1e018f4f06df-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:11:03 np0005548731 nova_compute[232433]: 2025-12-06 08:11:03.114 232437 DEBUG oslo_concurrency.lockutils [req-085feaf0-7a88-4a57-b435-5b15c3c3411d req-0b10a99b-f20f-45a9-9000-804ce0bdc640 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "5f0de650-9c62-4323-9354-1e018f4f06df-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:11:03 np0005548731 nova_compute[232433]: 2025-12-06 08:11:03.114 232437 DEBUG nova.compute.manager [req-085feaf0-7a88-4a57-b435-5b15c3c3411d req-0b10a99b-f20f-45a9-9000-804ce0bdc640 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 5f0de650-9c62-4323-9354-1e018f4f06df] No waiting events found dispatching network-vif-plugged-77949a7e-19a2-4c1f-812b-a7d9452af1e3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 03:11:03 np0005548731 nova_compute[232433]: 2025-12-06 08:11:03.115 232437 WARNING nova.compute.manager [req-085feaf0-7a88-4a57-b435-5b15c3c3411d req-0b10a99b-f20f-45a9-9000-804ce0bdc640 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 5f0de650-9c62-4323-9354-1e018f4f06df] Received unexpected event network-vif-plugged-77949a7e-19a2-4c1f-812b-a7d9452af1e3 for instance with vm_state resized and task_state None.#033[00m
Dec  6 03:11:03 np0005548731 nova_compute[232433]: 2025-12-06 08:11:03.602 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:11:04 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:11:04 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:11:04 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:11:04.228 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:11:04 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:11:04 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 03:11:04 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:11:04.612 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 03:11:04 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e414 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:11:06 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:11:06 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:11:06 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:11:06.230 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:11:06 np0005548731 nova_compute[232433]: 2025-12-06 08:11:06.438 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:11:06 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:11:06 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:11:06 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:11:06.616 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:11:07 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e415 e415: 3 total, 3 up, 3 in
Dec  6 03:11:07 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 03:11:07 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 03:11:07 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:11:07.635 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=89, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:ec:b3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '32:72:e7:89:e0:7d'}, ipsec=False) old=SB_Global(nb_cfg=88) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 03:11:07 np0005548731 nova_compute[232433]: 2025-12-06 08:11:07.636 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:11:07 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:11:07.638 143965 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec  6 03:11:07 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:11:07.639 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=9f96b960-b4f2-40bd-ae99-08121f5e8b78, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '89'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 03:11:08 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:11:08 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:11:08 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:11:08.233 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:11:08 np0005548731 nova_compute[232433]: 2025-12-06 08:11:08.605 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:11:08 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:11:08 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 03:11:08 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:11:08.618 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 03:11:09 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e415 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:11:10 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:11:10 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:11:10 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:11:10.235 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:11:10 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:11:10 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:11:10 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:11:10.621 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:11:11 np0005548731 nova_compute[232433]: 2025-12-06 08:11:11.442 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:11:12 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:11:12 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:11:12 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:11:12.237 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:11:12 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:11:12 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:11:12 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:11:12.623 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:11:13 np0005548731 ovn_controller[133927]: 2025-12-06T08:11:13Z|00134|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:f6:d1:7b 10.100.0.12
Dec  6 03:11:13 np0005548731 nova_compute[232433]: 2025-12-06 08:11:13.609 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:11:14 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:11:14 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 03:11:14 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:11:14.238 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 03:11:14 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e416 e416: 3 total, 3 up, 3 in
Dec  6 03:11:14 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:11:14 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 03:11:14 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:11:14.625 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 03:11:14 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:11:16 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:11:16 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:11:16 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:11:16.241 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:11:16 np0005548731 nova_compute[232433]: 2025-12-06 08:11:16.443 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:11:16 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:11:16 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:11:16 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:11:16.628 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:11:18 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:11:18 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:11:18 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:11:18.243 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:11:18 np0005548731 nova_compute[232433]: 2025-12-06 08:11:18.612 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:11:18 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:11:18 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 03:11:18 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:11:18.631 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 03:11:18 np0005548731 nova_compute[232433]: 2025-12-06 08:11:18.765 232437 INFO nova.compute.manager [None req-7167a28b-09af-422e-a4de-9c26fad12a49 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: 5f0de650-9c62-4323-9354-1e018f4f06df] Get console output#033[00m
Dec  6 03:11:18 np0005548731 nova_compute[232433]: 2025-12-06 08:11:18.771 261230 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Dec  6 03:11:19 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:11:20 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:11:20 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:11:20 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:11:20.245 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:11:20 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:11:20 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:11:20 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:11:20.635 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:11:21 np0005548731 nova_compute[232433]: 2025-12-06 08:11:21.446 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:11:22 np0005548731 nova_compute[232433]: 2025-12-06 08:11:22.067 232437 DEBUG nova.compute.manager [req-d29565b7-5e2c-435b-85e4-2d4da5f15679 req-93eba0ec-593b-4705-b51b-eb7e1b839470 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 5f0de650-9c62-4323-9354-1e018f4f06df] Received event network-changed-77949a7e-19a2-4c1f-812b-a7d9452af1e3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 03:11:22 np0005548731 nova_compute[232433]: 2025-12-06 08:11:22.068 232437 DEBUG nova.compute.manager [req-d29565b7-5e2c-435b-85e4-2d4da5f15679 req-93eba0ec-593b-4705-b51b-eb7e1b839470 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 5f0de650-9c62-4323-9354-1e018f4f06df] Refreshing instance network info cache due to event network-changed-77949a7e-19a2-4c1f-812b-a7d9452af1e3. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  6 03:11:22 np0005548731 nova_compute[232433]: 2025-12-06 08:11:22.068 232437 DEBUG oslo_concurrency.lockutils [req-d29565b7-5e2c-435b-85e4-2d4da5f15679 req-93eba0ec-593b-4705-b51b-eb7e1b839470 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-5f0de650-9c62-4323-9354-1e018f4f06df" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 03:11:22 np0005548731 nova_compute[232433]: 2025-12-06 08:11:22.068 232437 DEBUG oslo_concurrency.lockutils [req-d29565b7-5e2c-435b-85e4-2d4da5f15679 req-93eba0ec-593b-4705-b51b-eb7e1b839470 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-5f0de650-9c62-4323-9354-1e018f4f06df" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 03:11:22 np0005548731 nova_compute[232433]: 2025-12-06 08:11:22.068 232437 DEBUG nova.network.neutron [req-d29565b7-5e2c-435b-85e4-2d4da5f15679 req-93eba0ec-593b-4705-b51b-eb7e1b839470 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 5f0de650-9c62-4323-9354-1e018f4f06df] Refreshing network info cache for port 77949a7e-19a2-4c1f-812b-a7d9452af1e3 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  6 03:11:22 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:11:22 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:11:22 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:11:22.248 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:11:22 np0005548731 nova_compute[232433]: 2025-12-06 08:11:22.464 232437 DEBUG oslo_concurrency.lockutils [None req-be4a8a15-6847-4dde-9581-67d435886776 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Acquiring lock "5f0de650-9c62-4323-9354-1e018f4f06df" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:11:22 np0005548731 nova_compute[232433]: 2025-12-06 08:11:22.465 232437 DEBUG oslo_concurrency.lockutils [None req-be4a8a15-6847-4dde-9581-67d435886776 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Lock "5f0de650-9c62-4323-9354-1e018f4f06df" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:11:22 np0005548731 nova_compute[232433]: 2025-12-06 08:11:22.465 232437 DEBUG oslo_concurrency.lockutils [None req-be4a8a15-6847-4dde-9581-67d435886776 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Acquiring lock "5f0de650-9c62-4323-9354-1e018f4f06df-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:11:22 np0005548731 nova_compute[232433]: 2025-12-06 08:11:22.466 232437 DEBUG oslo_concurrency.lockutils [None req-be4a8a15-6847-4dde-9581-67d435886776 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Lock "5f0de650-9c62-4323-9354-1e018f4f06df-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:11:22 np0005548731 nova_compute[232433]: 2025-12-06 08:11:22.467 232437 DEBUG oslo_concurrency.lockutils [None req-be4a8a15-6847-4dde-9581-67d435886776 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Lock "5f0de650-9c62-4323-9354-1e018f4f06df-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:11:22 np0005548731 nova_compute[232433]: 2025-12-06 08:11:22.468 232437 INFO nova.compute.manager [None req-be4a8a15-6847-4dde-9581-67d435886776 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: 5f0de650-9c62-4323-9354-1e018f4f06df] Terminating instance#033[00m
Dec  6 03:11:22 np0005548731 nova_compute[232433]: 2025-12-06 08:11:22.470 232437 DEBUG nova.compute.manager [None req-be4a8a15-6847-4dde-9581-67d435886776 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: 5f0de650-9c62-4323-9354-1e018f4f06df] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec  6 03:11:22 np0005548731 kernel: tap77949a7e-19 (unregistering): left promiscuous mode
Dec  6 03:11:22 np0005548731 NetworkManager[49182]: <info>  [1765008682.5352] device (tap77949a7e-19): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec  6 03:11:22 np0005548731 ovn_controller[133927]: 2025-12-06T08:11:22Z|01007|binding|INFO|Releasing lport 77949a7e-19a2-4c1f-812b-a7d9452af1e3 from this chassis (sb_readonly=0)
Dec  6 03:11:22 np0005548731 nova_compute[232433]: 2025-12-06 08:11:22.544 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:11:22 np0005548731 ovn_controller[133927]: 2025-12-06T08:11:22Z|01008|binding|INFO|Setting lport 77949a7e-19a2-4c1f-812b-a7d9452af1e3 down in Southbound
Dec  6 03:11:22 np0005548731 ovn_controller[133927]: 2025-12-06T08:11:22Z|01009|binding|INFO|Removing iface tap77949a7e-19 ovn-installed in OVS
Dec  6 03:11:22 np0005548731 nova_compute[232433]: 2025-12-06 08:11:22.549 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:11:22 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:11:22.555 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f6:d1:7b 10.100.0.12'], port_security=['fa:16:3e:f6:d1:7b 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '5f0de650-9c62-4323-9354-1e018f4f06df', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-69c999d2-fbd1-484e-8d21-c2d6762854f2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'fd8e24e430c64364ace789d88a68ba5f', 'neutron:revision_number': '8', 'neutron:security_group_ids': '2de988d7-fe23-4a58-9aaa-b9003f6854a8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=96ca9842-9e49-4df6-a044-c7e848c3bb8a, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], logical_port=77949a7e-19a2-4c1f-812b-a7d9452af1e3) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 03:11:22 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:11:22.557 143965 INFO neutron.agent.ovn.metadata.agent [-] Port 77949a7e-19a2-4c1f-812b-a7d9452af1e3 in datapath 69c999d2-fbd1-484e-8d21-c2d6762854f2 unbound from our chassis#033[00m
Dec  6 03:11:22 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:11:22.559 143965 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 69c999d2-fbd1-484e-8d21-c2d6762854f2, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec  6 03:11:22 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:11:22.560 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[5135d4ab-698f-4bd2-acd8-9c9a30b692c8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:11:22 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:11:22.560 143965 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-69c999d2-fbd1-484e-8d21-c2d6762854f2 namespace which is not needed anymore#033[00m
Dec  6 03:11:22 np0005548731 nova_compute[232433]: 2025-12-06 08:11:22.565 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:11:22 np0005548731 systemd[1]: machine-qemu\x2d101\x2dinstance\x2d000000c5.scope: Deactivated successfully.
Dec  6 03:11:22 np0005548731 systemd[1]: machine-qemu\x2d101\x2dinstance\x2d000000c5.scope: Consumed 14.510s CPU time.
Dec  6 03:11:22 np0005548731 systemd-machined[195355]: Machine qemu-101-instance-000000c5 terminated.
Dec  6 03:11:22 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:11:22 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 03:11:22 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:11:22.637 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 03:11:22 np0005548731 nova_compute[232433]: 2025-12-06 08:11:22.693 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:11:22 np0005548731 nova_compute[232433]: 2025-12-06 08:11:22.698 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:11:22 np0005548731 neutron-haproxy-ovnmeta-69c999d2-fbd1-484e-8d21-c2d6762854f2[331834]: [NOTICE]   (331870) : haproxy version is 2.8.14-c23fe91
Dec  6 03:11:22 np0005548731 neutron-haproxy-ovnmeta-69c999d2-fbd1-484e-8d21-c2d6762854f2[331834]: [NOTICE]   (331870) : path to executable is /usr/sbin/haproxy
Dec  6 03:11:22 np0005548731 neutron-haproxy-ovnmeta-69c999d2-fbd1-484e-8d21-c2d6762854f2[331834]: [WARNING]  (331870) : Exiting Master process...
Dec  6 03:11:22 np0005548731 neutron-haproxy-ovnmeta-69c999d2-fbd1-484e-8d21-c2d6762854f2[331834]: [WARNING]  (331870) : Exiting Master process...
Dec  6 03:11:22 np0005548731 neutron-haproxy-ovnmeta-69c999d2-fbd1-484e-8d21-c2d6762854f2[331834]: [ALERT]    (331870) : Current worker (331878) exited with code 143 (Terminated)
Dec  6 03:11:22 np0005548731 neutron-haproxy-ovnmeta-69c999d2-fbd1-484e-8d21-c2d6762854f2[331834]: [WARNING]  (331870) : All workers exited. Exiting... (0)
Dec  6 03:11:22 np0005548731 systemd[1]: libpod-c5d7cec350738ea9f4a28dd2e98c6ecbc8e33e616d93a22ee031ef6e38bb2bf5.scope: Deactivated successfully.
Dec  6 03:11:22 np0005548731 nova_compute[232433]: 2025-12-06 08:11:22.712 232437 INFO nova.virt.libvirt.driver [-] [instance: 5f0de650-9c62-4323-9354-1e018f4f06df] Instance destroyed successfully.#033[00m
Dec  6 03:11:22 np0005548731 nova_compute[232433]: 2025-12-06 08:11:22.712 232437 DEBUG nova.objects.instance [None req-be4a8a15-6847-4dde-9581-67d435886776 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Lazy-loading 'resources' on Instance uuid 5f0de650-9c62-4323-9354-1e018f4f06df obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 03:11:22 np0005548731 podman[332070]: 2025-12-06 08:11:22.713982391 +0000 UTC m=+0.048460102 container died c5d7cec350738ea9f4a28dd2e98c6ecbc8e33e616d93a22ee031ef6e38bb2bf5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-69c999d2-fbd1-484e-8d21-c2d6762854f2, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Dec  6 03:11:22 np0005548731 nova_compute[232433]: 2025-12-06 08:11:22.731 232437 DEBUG nova.virt.libvirt.vif [None req-be4a8a15-6847-4dde-9581-67d435886776 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T08:10:03Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-522215966',display_name='tempest-TestNetworkAdvancedServerOps-server-522215966',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-522215966',id=197,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJBGFC8f+teCAymzAzt+sw9RK9rNVgD4R9/B96GYFflUw/uMoFI69FGiUrAFSq23wDgsBI/tn77SEM4/cRaOgXgCYvCjaKqwg+7LU55D+cQXQ9YzNukb2X32Bw/xb16Ugg==',key_name='tempest-TestNetworkAdvancedServerOps-562647975',keypairs=<?>,launch_index=0,launched_at=2025-12-06T08:11:00Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='fd8e24e430c64364ace789d88a68ba5f',ramdisk_id='',reservation_id='r-5fgxfuhj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-1171852383',owner_user_name='tempest-TestNetworkAdvancedServerOps-1171852383-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T08:11:08Z,user_data=None,user_id='2ed2d17026504d70b893923a85cece4d',uuid=5f0de650-9c62-4323-9354-1e018f4f06df,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "77949a7e-19a2-4c1f-812b-a7d9452af1e3", "address": "fa:16:3e:f6:d1:7b", "network": {"id": "69c999d2-fbd1-484e-8d21-c2d6762854f2", "bridge": "br-int", "label": "tempest-network-smoke--1487736673", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.176", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fd8e24e430c64364ace789d88a68ba5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap77949a7e-19", "ovs_interfaceid": "77949a7e-19a2-4c1f-812b-a7d9452af1e3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec  6 03:11:22 np0005548731 nova_compute[232433]: 2025-12-06 08:11:22.732 232437 DEBUG nova.network.os_vif_util [None req-be4a8a15-6847-4dde-9581-67d435886776 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Converting VIF {"id": "77949a7e-19a2-4c1f-812b-a7d9452af1e3", "address": "fa:16:3e:f6:d1:7b", "network": {"id": "69c999d2-fbd1-484e-8d21-c2d6762854f2", "bridge": "br-int", "label": "tempest-network-smoke--1487736673", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.176", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fd8e24e430c64364ace789d88a68ba5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap77949a7e-19", "ovs_interfaceid": "77949a7e-19a2-4c1f-812b-a7d9452af1e3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 03:11:22 np0005548731 nova_compute[232433]: 2025-12-06 08:11:22.733 232437 DEBUG nova.network.os_vif_util [None req-be4a8a15-6847-4dde-9581-67d435886776 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:f6:d1:7b,bridge_name='br-int',has_traffic_filtering=True,id=77949a7e-19a2-4c1f-812b-a7d9452af1e3,network=Network(69c999d2-fbd1-484e-8d21-c2d6762854f2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap77949a7e-19') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 03:11:22 np0005548731 nova_compute[232433]: 2025-12-06 08:11:22.733 232437 DEBUG os_vif [None req-be4a8a15-6847-4dde-9581-67d435886776 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:f6:d1:7b,bridge_name='br-int',has_traffic_filtering=True,id=77949a7e-19a2-4c1f-812b-a7d9452af1e3,network=Network(69c999d2-fbd1-484e-8d21-c2d6762854f2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap77949a7e-19') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec  6 03:11:22 np0005548731 nova_compute[232433]: 2025-12-06 08:11:22.735 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:11:22 np0005548731 nova_compute[232433]: 2025-12-06 08:11:22.736 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap77949a7e-19, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 03:11:22 np0005548731 nova_compute[232433]: 2025-12-06 08:11:22.738 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:11:22 np0005548731 nova_compute[232433]: 2025-12-06 08:11:22.741 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  6 03:11:22 np0005548731 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-c5d7cec350738ea9f4a28dd2e98c6ecbc8e33e616d93a22ee031ef6e38bb2bf5-userdata-shm.mount: Deactivated successfully.
Dec  6 03:11:22 np0005548731 nova_compute[232433]: 2025-12-06 08:11:22.746 232437 INFO os_vif [None req-be4a8a15-6847-4dde-9581-67d435886776 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:f6:d1:7b,bridge_name='br-int',has_traffic_filtering=True,id=77949a7e-19a2-4c1f-812b-a7d9452af1e3,network=Network(69c999d2-fbd1-484e-8d21-c2d6762854f2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap77949a7e-19')#033[00m
Dec  6 03:11:22 np0005548731 systemd[1]: var-lib-containers-storage-overlay-761681895c1f6c1fc6ad011c8b299645eb9d6f734507a46c0af36a9f07b3f1c0-merged.mount: Deactivated successfully.
Dec  6 03:11:22 np0005548731 podman[332070]: 2025-12-06 08:11:22.75873218 +0000 UTC m=+0.093209901 container cleanup c5d7cec350738ea9f4a28dd2e98c6ecbc8e33e616d93a22ee031ef6e38bb2bf5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-69c999d2-fbd1-484e-8d21-c2d6762854f2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, tcib_managed=true)
Dec  6 03:11:22 np0005548731 systemd[1]: libpod-conmon-c5d7cec350738ea9f4a28dd2e98c6ecbc8e33e616d93a22ee031ef6e38bb2bf5.scope: Deactivated successfully.
Dec  6 03:11:22 np0005548731 nova_compute[232433]: 2025-12-06 08:11:22.801 232437 DEBUG nova.compute.manager [req-463930fc-b95e-4045-a96a-8fa38cf004c6 req-ddea8b33-da6b-48c8-91aa-80e1630bcc6b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 5f0de650-9c62-4323-9354-1e018f4f06df] Received event network-vif-unplugged-77949a7e-19a2-4c1f-812b-a7d9452af1e3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 03:11:22 np0005548731 nova_compute[232433]: 2025-12-06 08:11:22.802 232437 DEBUG oslo_concurrency.lockutils [req-463930fc-b95e-4045-a96a-8fa38cf004c6 req-ddea8b33-da6b-48c8-91aa-80e1630bcc6b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "5f0de650-9c62-4323-9354-1e018f4f06df-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:11:22 np0005548731 nova_compute[232433]: 2025-12-06 08:11:22.802 232437 DEBUG oslo_concurrency.lockutils [req-463930fc-b95e-4045-a96a-8fa38cf004c6 req-ddea8b33-da6b-48c8-91aa-80e1630bcc6b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "5f0de650-9c62-4323-9354-1e018f4f06df-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:11:22 np0005548731 nova_compute[232433]: 2025-12-06 08:11:22.802 232437 DEBUG oslo_concurrency.lockutils [req-463930fc-b95e-4045-a96a-8fa38cf004c6 req-ddea8b33-da6b-48c8-91aa-80e1630bcc6b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "5f0de650-9c62-4323-9354-1e018f4f06df-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:11:22 np0005548731 nova_compute[232433]: 2025-12-06 08:11:22.802 232437 DEBUG nova.compute.manager [req-463930fc-b95e-4045-a96a-8fa38cf004c6 req-ddea8b33-da6b-48c8-91aa-80e1630bcc6b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 5f0de650-9c62-4323-9354-1e018f4f06df] No waiting events found dispatching network-vif-unplugged-77949a7e-19a2-4c1f-812b-a7d9452af1e3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 03:11:22 np0005548731 nova_compute[232433]: 2025-12-06 08:11:22.803 232437 DEBUG nova.compute.manager [req-463930fc-b95e-4045-a96a-8fa38cf004c6 req-ddea8b33-da6b-48c8-91aa-80e1630bcc6b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 5f0de650-9c62-4323-9354-1e018f4f06df] Received event network-vif-unplugged-77949a7e-19a2-4c1f-812b-a7d9452af1e3 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Dec  6 03:11:22 np0005548731 podman[332127]: 2025-12-06 08:11:22.823033325 +0000 UTC m=+0.038911047 container remove c5d7cec350738ea9f4a28dd2e98c6ecbc8e33e616d93a22ee031ef6e38bb2bf5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-69c999d2-fbd1-484e-8d21-c2d6762854f2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Dec  6 03:11:22 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:11:22.830 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[290c810e-60bd-4f61-9e77-0bf5cf7766be]: (4, ('Sat Dec  6 08:11:22 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-69c999d2-fbd1-484e-8d21-c2d6762854f2 (c5d7cec350738ea9f4a28dd2e98c6ecbc8e33e616d93a22ee031ef6e38bb2bf5)\nc5d7cec350738ea9f4a28dd2e98c6ecbc8e33e616d93a22ee031ef6e38bb2bf5\nSat Dec  6 08:11:22 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-69c999d2-fbd1-484e-8d21-c2d6762854f2 (c5d7cec350738ea9f4a28dd2e98c6ecbc8e33e616d93a22ee031ef6e38bb2bf5)\nc5d7cec350738ea9f4a28dd2e98c6ecbc8e33e616d93a22ee031ef6e38bb2bf5\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:11:22 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:11:22.831 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[1f869669-fe3b-4c82-af4a-9eb34bc1b53c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:11:22 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:11:22.832 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap69c999d2-f0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 03:11:22 np0005548731 kernel: tap69c999d2-f0: left promiscuous mode
Dec  6 03:11:22 np0005548731 nova_compute[232433]: 2025-12-06 08:11:22.835 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:11:22 np0005548731 nova_compute[232433]: 2025-12-06 08:11:22.852 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:11:22 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:11:22.854 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[d3022ce4-acea-482d-a44d-06d10fe4c291]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:11:22 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:11:22.867 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[93124f67-9b73-48fc-8f24-7b3f3323ef14]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:11:22 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:11:22.868 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[1216c363-dc74-410c-ab60-f6aa23edc1db]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:11:22 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:11:22.885 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[7095ae4a-10ed-4b14-90ca-400f978398d2]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 892085, 'reachable_time': 35533, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 332143, 'error': None, 'target': 'ovnmeta-69c999d2-fbd1-484e-8d21-c2d6762854f2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:11:22 np0005548731 systemd[1]: run-netns-ovnmeta\x2d69c999d2\x2dfbd1\x2d484e\x2d8d21\x2dc2d6762854f2.mount: Deactivated successfully.
Dec  6 03:11:22 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:11:22.888 144079 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-69c999d2-fbd1-484e-8d21-c2d6762854f2 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Dec  6 03:11:22 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:11:22.889 144079 DEBUG oslo.privsep.daemon [-] privsep: reply[f2ada86c-4dc4-4ae1-8bff-8fe6833fa46d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:11:23 np0005548731 nova_compute[232433]: 2025-12-06 08:11:23.135 232437 INFO nova.virt.libvirt.driver [None req-be4a8a15-6847-4dde-9581-67d435886776 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: 5f0de650-9c62-4323-9354-1e018f4f06df] Deleting instance files /var/lib/nova/instances/5f0de650-9c62-4323-9354-1e018f4f06df_del#033[00m
Dec  6 03:11:23 np0005548731 nova_compute[232433]: 2025-12-06 08:11:23.136 232437 INFO nova.virt.libvirt.driver [None req-be4a8a15-6847-4dde-9581-67d435886776 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: 5f0de650-9c62-4323-9354-1e018f4f06df] Deletion of /var/lib/nova/instances/5f0de650-9c62-4323-9354-1e018f4f06df_del complete#033[00m
Dec  6 03:11:23 np0005548731 nova_compute[232433]: 2025-12-06 08:11:23.195 232437 INFO nova.compute.manager [None req-be4a8a15-6847-4dde-9581-67d435886776 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: 5f0de650-9c62-4323-9354-1e018f4f06df] Took 0.72 seconds to destroy the instance on the hypervisor.#033[00m
Dec  6 03:11:23 np0005548731 nova_compute[232433]: 2025-12-06 08:11:23.196 232437 DEBUG oslo.service.loopingcall [None req-be4a8a15-6847-4dde-9581-67d435886776 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Dec  6 03:11:23 np0005548731 nova_compute[232433]: 2025-12-06 08:11:23.196 232437 DEBUG nova.compute.manager [-] [instance: 5f0de650-9c62-4323-9354-1e018f4f06df] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Dec  6 03:11:23 np0005548731 nova_compute[232433]: 2025-12-06 08:11:23.196 232437 DEBUG nova.network.neutron [-] [instance: 5f0de650-9c62-4323-9354-1e018f4f06df] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Dec  6 03:11:24 np0005548731 nova_compute[232433]: 2025-12-06 08:11:24.076 232437 DEBUG nova.network.neutron [req-d29565b7-5e2c-435b-85e4-2d4da5f15679 req-93eba0ec-593b-4705-b51b-eb7e1b839470 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 5f0de650-9c62-4323-9354-1e018f4f06df] Updated VIF entry in instance network info cache for port 77949a7e-19a2-4c1f-812b-a7d9452af1e3. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  6 03:11:24 np0005548731 nova_compute[232433]: 2025-12-06 08:11:24.077 232437 DEBUG nova.network.neutron [req-d29565b7-5e2c-435b-85e4-2d4da5f15679 req-93eba0ec-593b-4705-b51b-eb7e1b839470 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 5f0de650-9c62-4323-9354-1e018f4f06df] Updating instance_info_cache with network_info: [{"id": "77949a7e-19a2-4c1f-812b-a7d9452af1e3", "address": "fa:16:3e:f6:d1:7b", "network": {"id": "69c999d2-fbd1-484e-8d21-c2d6762854f2", "bridge": "br-int", "label": "tempest-network-smoke--1487736673", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fd8e24e430c64364ace789d88a68ba5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap77949a7e-19", "ovs_interfaceid": "77949a7e-19a2-4c1f-812b-a7d9452af1e3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 03:11:24 np0005548731 nova_compute[232433]: 2025-12-06 08:11:24.101 232437 DEBUG oslo_concurrency.lockutils [req-d29565b7-5e2c-435b-85e4-2d4da5f15679 req-93eba0ec-593b-4705-b51b-eb7e1b839470 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-5f0de650-9c62-4323-9354-1e018f4f06df" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 03:11:24 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:11:24 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:11:24 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:11:24.250 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:11:24 np0005548731 nova_compute[232433]: 2025-12-06 08:11:24.458 232437 DEBUG nova.network.neutron [-] [instance: 5f0de650-9c62-4323-9354-1e018f4f06df] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 03:11:24 np0005548731 nova_compute[232433]: 2025-12-06 08:11:24.485 232437 INFO nova.compute.manager [-] [instance: 5f0de650-9c62-4323-9354-1e018f4f06df] Took 1.29 seconds to deallocate network for instance.#033[00m
Dec  6 03:11:24 np0005548731 nova_compute[232433]: 2025-12-06 08:11:24.552 232437 DEBUG oslo_concurrency.lockutils [None req-be4a8a15-6847-4dde-9581-67d435886776 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:11:24 np0005548731 nova_compute[232433]: 2025-12-06 08:11:24.553 232437 DEBUG oslo_concurrency.lockutils [None req-be4a8a15-6847-4dde-9581-67d435886776 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:11:24 np0005548731 nova_compute[232433]: 2025-12-06 08:11:24.562 232437 DEBUG oslo_concurrency.lockutils [None req-be4a8a15-6847-4dde-9581-67d435886776 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.009s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:11:24 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:11:24 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:11:24 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:11:24.640 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:11:24 np0005548731 nova_compute[232433]: 2025-12-06 08:11:24.662 232437 INFO nova.scheduler.client.report [None req-be4a8a15-6847-4dde-9581-67d435886776 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Deleted allocations for instance 5f0de650-9c62-4323-9354-1e018f4f06df#033[00m
Dec  6 03:11:24 np0005548731 nova_compute[232433]: 2025-12-06 08:11:24.765 232437 DEBUG oslo_concurrency.lockutils [None req-be4a8a15-6847-4dde-9581-67d435886776 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Lock "5f0de650-9c62-4323-9354-1e018f4f06df" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.300s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:11:24 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:11:24 np0005548731 nova_compute[232433]: 2025-12-06 08:11:24.885 232437 DEBUG nova.compute.manager [req-f2ab8718-c233-4b52-9db7-3c02a7d5e561 req-bae81f4a-a423-4da1-bec9-c0324e69a9a1 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 5f0de650-9c62-4323-9354-1e018f4f06df] Received event network-vif-plugged-77949a7e-19a2-4c1f-812b-a7d9452af1e3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 03:11:24 np0005548731 nova_compute[232433]: 2025-12-06 08:11:24.886 232437 DEBUG oslo_concurrency.lockutils [req-f2ab8718-c233-4b52-9db7-3c02a7d5e561 req-bae81f4a-a423-4da1-bec9-c0324e69a9a1 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "5f0de650-9c62-4323-9354-1e018f4f06df-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:11:24 np0005548731 nova_compute[232433]: 2025-12-06 08:11:24.886 232437 DEBUG oslo_concurrency.lockutils [req-f2ab8718-c233-4b52-9db7-3c02a7d5e561 req-bae81f4a-a423-4da1-bec9-c0324e69a9a1 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "5f0de650-9c62-4323-9354-1e018f4f06df-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:11:24 np0005548731 nova_compute[232433]: 2025-12-06 08:11:24.887 232437 DEBUG oslo_concurrency.lockutils [req-f2ab8718-c233-4b52-9db7-3c02a7d5e561 req-bae81f4a-a423-4da1-bec9-c0324e69a9a1 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "5f0de650-9c62-4323-9354-1e018f4f06df-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:11:24 np0005548731 nova_compute[232433]: 2025-12-06 08:11:24.887 232437 DEBUG nova.compute.manager [req-f2ab8718-c233-4b52-9db7-3c02a7d5e561 req-bae81f4a-a423-4da1-bec9-c0324e69a9a1 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 5f0de650-9c62-4323-9354-1e018f4f06df] No waiting events found dispatching network-vif-plugged-77949a7e-19a2-4c1f-812b-a7d9452af1e3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 03:11:24 np0005548731 nova_compute[232433]: 2025-12-06 08:11:24.887 232437 WARNING nova.compute.manager [req-f2ab8718-c233-4b52-9db7-3c02a7d5e561 req-bae81f4a-a423-4da1-bec9-c0324e69a9a1 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 5f0de650-9c62-4323-9354-1e018f4f06df] Received unexpected event network-vif-plugged-77949a7e-19a2-4c1f-812b-a7d9452af1e3 for instance with vm_state deleted and task_state None.#033[00m
Dec  6 03:11:24 np0005548731 nova_compute[232433]: 2025-12-06 08:11:24.888 232437 DEBUG nova.compute.manager [req-f2ab8718-c233-4b52-9db7-3c02a7d5e561 req-bae81f4a-a423-4da1-bec9-c0324e69a9a1 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 5f0de650-9c62-4323-9354-1e018f4f06df] Received event network-vif-deleted-77949a7e-19a2-4c1f-812b-a7d9452af1e3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 03:11:26 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:11:26 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:11:26 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:11:26.252 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:11:26 np0005548731 nova_compute[232433]: 2025-12-06 08:11:26.448 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:11:26 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:11:26 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:11:26 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:11:26.643 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:11:27 np0005548731 nova_compute[232433]: 2025-12-06 08:11:27.739 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:11:28 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:11:28 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.002000048s ======
Dec  6 03:11:28 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:11:28.254 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000048s
Dec  6 03:11:28 np0005548731 nova_compute[232433]: 2025-12-06 08:11:28.607 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:11:28 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:11:28 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:11:28 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:11:28.644 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:11:28 np0005548731 nova_compute[232433]: 2025-12-06 08:11:28.825 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:11:29 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:11:29 np0005548731 podman[332149]: 2025-12-06 08:11:29.912777438 +0000 UTC m=+0.064996604 container health_status 26c2ce9bdb83c6db5ccad7f5b2a7cb4e9354ab0235ad2f942ea42c8feda2142b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Dec  6 03:11:29 np0005548731 podman[332151]: 2025-12-06 08:11:29.921637223 +0000 UTC m=+0.065790383 container health_status ae4fd08cdec31d6ae2a6b91ffff7deb6ca5a8375cb52ee4b685cc6f25796824e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=multipathd, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible)
Dec  6 03:11:30 np0005548731 podman[332150]: 2025-12-06 08:11:30.008332444 +0000 UTC m=+0.147738488 container health_status a118c3becf56cfbcae208065771ebf10a65162bb7b96389971293e534823fb78 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Dec  6 03:11:30 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:11:30 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 03:11:30 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:11:30.258 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 03:11:30 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:11:30 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:11:30 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:11:30.646 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:11:31 np0005548731 nova_compute[232433]: 2025-12-06 08:11:31.449 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:11:32 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:11:32 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:11:32 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:11:32.261 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:11:32 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:11:32 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:11:32 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:11:32.649 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:11:32 np0005548731 nova_compute[232433]: 2025-12-06 08:11:32.742 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:11:34 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:11:34 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:11:34 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:11:34.265 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:11:34 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:11:34 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:11:34 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:11:34.651 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:11:34 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:11:36 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:11:36 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:11:36 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:11:36.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:11:36 np0005548731 nova_compute[232433]: 2025-12-06 08:11:36.450 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:11:36 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:11:36 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:11:36 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:11:36.653 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:11:37 np0005548731 nova_compute[232433]: 2025-12-06 08:11:37.711 232437 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765008682.708306, 5f0de650-9c62-4323-9354-1e018f4f06df => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 03:11:37 np0005548731 nova_compute[232433]: 2025-12-06 08:11:37.711 232437 INFO nova.compute.manager [-] [instance: 5f0de650-9c62-4323-9354-1e018f4f06df] VM Stopped (Lifecycle Event)#033[00m
Dec  6 03:11:37 np0005548731 nova_compute[232433]: 2025-12-06 08:11:37.746 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:11:38 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:11:38 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:11:38 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:11:38.270 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:11:38 np0005548731 nova_compute[232433]: 2025-12-06 08:11:38.315 232437 DEBUG nova.compute.manager [None req-d8e6ea00-bf49-45f1-9044-ac8fd10b9bcf - - - - - -] [instance: 5f0de650-9c62-4323-9354-1e018f4f06df] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 03:11:38 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:11:38 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 03:11:38 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:11:38.655 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 03:11:39 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:11:40 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:11:40 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:11:40 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:11:40.273 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:11:40 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:11:40 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:11:40 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:11:40.659 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:11:41 np0005548731 nova_compute[232433]: 2025-12-06 08:11:41.452 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:11:42 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:11:42 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 03:11:42 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:11:42.276 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 03:11:42 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:11:42 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:11:42 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:11:42.661 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:11:42 np0005548731 nova_compute[232433]: 2025-12-06 08:11:42.772 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:11:44 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:11:44 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:11:44 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:11:44.279 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:11:44 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:11:44 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:11:44 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:11:44.663 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:11:44 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:11:46 np0005548731 nova_compute[232433]: 2025-12-06 08:11:46.105 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:11:46 np0005548731 nova_compute[232433]: 2025-12-06 08:11:46.106 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  6 03:11:46 np0005548731 nova_compute[232433]: 2025-12-06 08:11:46.106 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  6 03:11:46 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:11:46 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:11:46 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:11:46.283 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:11:46 np0005548731 nova_compute[232433]: 2025-12-06 08:11:46.454 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:11:46 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:11:46 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:11:46 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:11:46.666 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:11:46 np0005548731 nova_compute[232433]: 2025-12-06 08:11:46.985 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Dec  6 03:11:46 np0005548731 nova_compute[232433]: 2025-12-06 08:11:46.985 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:11:47 np0005548731 nova_compute[232433]: 2025-12-06 08:11:47.774 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:11:48 np0005548731 nova_compute[232433]: 2025-12-06 08:11:48.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:11:48 np0005548731 nova_compute[232433]: 2025-12-06 08:11:48.104 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  6 03:11:48 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:11:48 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:11:48 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:11:48.286 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:11:48 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:11:48 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:11:48 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:11:48.668 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:11:48 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:11:48.994 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=90, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:ec:b3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '32:72:e7:89:e0:7d'}, ipsec=False) old=SB_Global(nb_cfg=89) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 03:11:48 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:11:48.995 143965 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec  6 03:11:49 np0005548731 nova_compute[232433]: 2025-12-06 08:11:49.047 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:11:49 np0005548731 nova_compute[232433]: 2025-12-06 08:11:49.100 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:11:49 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:11:50 np0005548731 nova_compute[232433]: 2025-12-06 08:11:50.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:11:50 np0005548731 nova_compute[232433]: 2025-12-06 08:11:50.105 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:11:50 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:11:50 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:11:50 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:11:50.290 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:11:50 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:11:50 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:11:50 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:11:50.671 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:11:51 np0005548731 nova_compute[232433]: 2025-12-06 08:11:51.457 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:11:52 np0005548731 nova_compute[232433]: 2025-12-06 08:11:52.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:11:52 np0005548731 nova_compute[232433]: 2025-12-06 08:11:52.153 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:11:52 np0005548731 nova_compute[232433]: 2025-12-06 08:11:52.154 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:11:52 np0005548731 nova_compute[232433]: 2025-12-06 08:11:52.154 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:11:52 np0005548731 nova_compute[232433]: 2025-12-06 08:11:52.155 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  6 03:11:52 np0005548731 nova_compute[232433]: 2025-12-06 08:11:52.155 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 03:11:52 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:11:52 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 03:11:52 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:11:52.292 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 03:11:52 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 03:11:52 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3257912044' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 03:11:52 np0005548731 nova_compute[232433]: 2025-12-06 08:11:52.632 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.477s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 03:11:52 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:11:52 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:11:52 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:11:52.674 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:11:52 np0005548731 nova_compute[232433]: 2025-12-06 08:11:52.794 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:11:52 np0005548731 nova_compute[232433]: 2025-12-06 08:11:52.886 232437 WARNING nova.virt.libvirt.driver [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  6 03:11:52 np0005548731 nova_compute[232433]: 2025-12-06 08:11:52.888 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4185MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, 
"label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  6 03:11:52 np0005548731 nova_compute[232433]: 2025-12-06 08:11:52.889 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:11:52 np0005548731 nova_compute[232433]: 2025-12-06 08:11:52.890 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:11:53 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:11:53.997 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=9f96b960-b4f2-40bd-ae99-08121f5e8b78, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '90'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 03:11:54 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:11:54 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:11:54 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:11:54.296 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:11:54 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:11:54 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:11:54 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:11:54.677 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:11:54 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:11:55 np0005548731 nova_compute[232433]: 2025-12-06 08:11:55.601 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  6 03:11:55 np0005548731 nova_compute[232433]: 2025-12-06 08:11:55.602 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  6 03:11:55 np0005548731 nova_compute[232433]: 2025-12-06 08:11:55.838 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 03:11:56 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 03:11:56 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3596922978' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 03:11:56 np0005548731 nova_compute[232433]: 2025-12-06 08:11:56.286 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.448s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 03:11:56 np0005548731 nova_compute[232433]: 2025-12-06 08:11:56.293 232437 DEBUG nova.compute.provider_tree [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Inventory has not changed in ProviderTree for provider: 6d00757a-082f-486d-ae84-869a2ba2e6e7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  6 03:11:56 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:11:56 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:11:56 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:11:56.299 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:11:56 np0005548731 nova_compute[232433]: 2025-12-06 08:11:56.319 232437 DEBUG nova.scheduler.client.report [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Inventory has not changed for provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  6 03:11:56 np0005548731 nova_compute[232433]: 2025-12-06 08:11:56.356 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  6 03:11:56 np0005548731 nova_compute[232433]: 2025-12-06 08:11:56.357 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 3.466s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:11:56 np0005548731 nova_compute[232433]: 2025-12-06 08:11:56.504 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:11:56 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:11:56 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:11:56 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:11:56.680 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:11:57 np0005548731 nova_compute[232433]: 2025-12-06 08:11:57.796 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:11:58 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:11:58 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:11:58 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:11:58.302 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:11:58 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:11:58 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 03:11:58 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:11:58.683 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 03:11:59 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:12:00 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:12:00 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 03:12:00 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:12:00.304 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 03:12:00 np0005548731 nova_compute[232433]: 2025-12-06 08:12:00.357 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:12:00 np0005548731 nova_compute[232433]: 2025-12-06 08:12:00.358 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:12:00 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:12:00 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:12:00 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:12:00.686 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:12:00 np0005548731 podman[332376]: 2025-12-06 08:12:00.90524418 +0000 UTC m=+0.056113448 container health_status ae4fd08cdec31d6ae2a6b91ffff7deb6ca5a8375cb52ee4b685cc6f25796824e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  6 03:12:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:12:00.914 143965 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:12:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:12:00.915 143965 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:12:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:12:00.915 143965 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:12:00 np0005548731 podman[332374]: 2025-12-06 08:12:00.918462032 +0000 UTC m=+0.075482579 container health_status 26c2ce9bdb83c6db5ccad7f5b2a7cb4e9354ab0235ad2f942ea42c8feda2142b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Dec  6 03:12:00 np0005548731 podman[332375]: 2025-12-06 08:12:00.949391855 +0000 UTC m=+0.104990688 container health_status a118c3becf56cfbcae208065771ebf10a65162bb7b96389971293e534823fb78 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, managed_by=edpm_ansible)
Dec  6 03:12:01 np0005548731 nova_compute[232433]: 2025-12-06 08:12:01.507 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:12:02 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:12:02 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:12:02 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:12:02.307 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:12:02 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:12:02 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:12:02 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:12:02.689 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:12:02 np0005548731 nova_compute[232433]: 2025-12-06 08:12:02.819 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:12:04 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:12:04 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 03:12:04 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:12:04.310 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 03:12:04 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:12:04 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:12:04 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:12:04.692 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:12:04 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:12:06 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:12:06 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:12:06 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:12:06.312 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:12:06 np0005548731 nova_compute[232433]: 2025-12-06 08:12:06.495 232437 DEBUG oslo_concurrency.lockutils [None req-8c1c7ca7-abfc-4781-a232-a03dce439808 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Acquiring lock "f13f96d6-bf80-493f-8c88-883a6da35105" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:12:06 np0005548731 nova_compute[232433]: 2025-12-06 08:12:06.495 232437 DEBUG oslo_concurrency.lockutils [None req-8c1c7ca7-abfc-4781-a232-a03dce439808 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Lock "f13f96d6-bf80-493f-8c88-883a6da35105" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:12:06 np0005548731 nova_compute[232433]: 2025-12-06 08:12:06.508 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:12:06 np0005548731 nova_compute[232433]: 2025-12-06 08:12:06.515 232437 DEBUG nova.compute.manager [None req-8c1c7ca7-abfc-4781-a232-a03dce439808 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: f13f96d6-bf80-493f-8c88-883a6da35105] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Dec  6 03:12:06 np0005548731 nova_compute[232433]: 2025-12-06 08:12:06.625 232437 DEBUG oslo_concurrency.lockutils [None req-8c1c7ca7-abfc-4781-a232-a03dce439808 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:12:06 np0005548731 nova_compute[232433]: 2025-12-06 08:12:06.626 232437 DEBUG oslo_concurrency.lockutils [None req-8c1c7ca7-abfc-4781-a232-a03dce439808 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:12:06 np0005548731 nova_compute[232433]: 2025-12-06 08:12:06.636 232437 DEBUG nova.virt.hardware [None req-8c1c7ca7-abfc-4781-a232-a03dce439808 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Dec  6 03:12:06 np0005548731 nova_compute[232433]: 2025-12-06 08:12:06.636 232437 INFO nova.compute.claims [None req-8c1c7ca7-abfc-4781-a232-a03dce439808 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: f13f96d6-bf80-493f-8c88-883a6da35105] Claim successful on node compute-2.ctlplane.example.com#033[00m
Dec  6 03:12:06 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:12:06 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:12:06 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:12:06.695 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:12:06 np0005548731 nova_compute[232433]: 2025-12-06 08:12:06.841 232437 DEBUG oslo_concurrency.processutils [None req-8c1c7ca7-abfc-4781-a232-a03dce439808 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 03:12:07 np0005548731 nova_compute[232433]: 2025-12-06 08:12:07.287 232437 DEBUG oslo_concurrency.processutils [None req-8c1c7ca7-abfc-4781-a232-a03dce439808 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.445s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 03:12:07 np0005548731 nova_compute[232433]: 2025-12-06 08:12:07.293 232437 DEBUG nova.compute.provider_tree [None req-8c1c7ca7-abfc-4781-a232-a03dce439808 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Inventory has not changed in ProviderTree for provider: 6d00757a-082f-486d-ae84-869a2ba2e6e7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  6 03:12:07 np0005548731 nova_compute[232433]: 2025-12-06 08:12:07.311 232437 DEBUG nova.scheduler.client.report [None req-8c1c7ca7-abfc-4781-a232-a03dce439808 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Inventory has not changed for provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  6 03:12:07 np0005548731 nova_compute[232433]: 2025-12-06 08:12:07.366 232437 DEBUG oslo_concurrency.lockutils [None req-8c1c7ca7-abfc-4781-a232-a03dce439808 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.740s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:12:07 np0005548731 nova_compute[232433]: 2025-12-06 08:12:07.367 232437 DEBUG nova.compute.manager [None req-8c1c7ca7-abfc-4781-a232-a03dce439808 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: f13f96d6-bf80-493f-8c88-883a6da35105] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Dec  6 03:12:07 np0005548731 nova_compute[232433]: 2025-12-06 08:12:07.446 232437 DEBUG nova.compute.manager [None req-8c1c7ca7-abfc-4781-a232-a03dce439808 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: f13f96d6-bf80-493f-8c88-883a6da35105] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Dec  6 03:12:07 np0005548731 nova_compute[232433]: 2025-12-06 08:12:07.447 232437 DEBUG nova.network.neutron [None req-8c1c7ca7-abfc-4781-a232-a03dce439808 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: f13f96d6-bf80-493f-8c88-883a6da35105] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Dec  6 03:12:07 np0005548731 nova_compute[232433]: 2025-12-06 08:12:07.481 232437 INFO nova.virt.libvirt.driver [None req-8c1c7ca7-abfc-4781-a232-a03dce439808 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: f13f96d6-bf80-493f-8c88-883a6da35105] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Dec  6 03:12:07 np0005548731 nova_compute[232433]: 2025-12-06 08:12:07.508 232437 DEBUG nova.compute.manager [None req-8c1c7ca7-abfc-4781-a232-a03dce439808 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: f13f96d6-bf80-493f-8c88-883a6da35105] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Dec  6 03:12:07 np0005548731 nova_compute[232433]: 2025-12-06 08:12:07.614 232437 DEBUG nova.compute.manager [None req-8c1c7ca7-abfc-4781-a232-a03dce439808 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: f13f96d6-bf80-493f-8c88-883a6da35105] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Dec  6 03:12:07 np0005548731 nova_compute[232433]: 2025-12-06 08:12:07.616 232437 DEBUG nova.virt.libvirt.driver [None req-8c1c7ca7-abfc-4781-a232-a03dce439808 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: f13f96d6-bf80-493f-8c88-883a6da35105] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Dec  6 03:12:07 np0005548731 nova_compute[232433]: 2025-12-06 08:12:07.617 232437 INFO nova.virt.libvirt.driver [None req-8c1c7ca7-abfc-4781-a232-a03dce439808 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: f13f96d6-bf80-493f-8c88-883a6da35105] Creating image(s)#033[00m
Dec  6 03:12:07 np0005548731 nova_compute[232433]: 2025-12-06 08:12:07.653 232437 DEBUG nova.storage.rbd_utils [None req-8c1c7ca7-abfc-4781-a232-a03dce439808 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] rbd image f13f96d6-bf80-493f-8c88-883a6da35105_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 03:12:07 np0005548731 nova_compute[232433]: 2025-12-06 08:12:07.686 232437 DEBUG nova.storage.rbd_utils [None req-8c1c7ca7-abfc-4781-a232-a03dce439808 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] rbd image f13f96d6-bf80-493f-8c88-883a6da35105_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 03:12:07 np0005548731 nova_compute[232433]: 2025-12-06 08:12:07.717 232437 DEBUG nova.storage.rbd_utils [None req-8c1c7ca7-abfc-4781-a232-a03dce439808 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] rbd image f13f96d6-bf80-493f-8c88-883a6da35105_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 03:12:07 np0005548731 nova_compute[232433]: 2025-12-06 08:12:07.722 232437 DEBUG oslo_concurrency.processutils [None req-8c1c7ca7-abfc-4781-a232-a03dce439808 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 03:12:07 np0005548731 nova_compute[232433]: 2025-12-06 08:12:07.755 232437 DEBUG nova.policy [None req-8c1c7ca7-abfc-4781-a232-a03dce439808 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '2ed2d17026504d70b893923a85cece4d', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'fd8e24e430c64364ace789d88a68ba5f', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Dec  6 03:12:07 np0005548731 nova_compute[232433]: 2025-12-06 08:12:07.796 232437 DEBUG oslo_concurrency.processutils [None req-8c1c7ca7-abfc-4781-a232-a03dce439808 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json" returned: 0 in 0.074s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 03:12:07 np0005548731 nova_compute[232433]: 2025-12-06 08:12:07.797 232437 DEBUG oslo_concurrency.lockutils [None req-8c1c7ca7-abfc-4781-a232-a03dce439808 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Acquiring lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:12:07 np0005548731 nova_compute[232433]: 2025-12-06 08:12:07.797 232437 DEBUG oslo_concurrency.lockutils [None req-8c1c7ca7-abfc-4781-a232-a03dce439808 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:12:07 np0005548731 nova_compute[232433]: 2025-12-06 08:12:07.798 232437 DEBUG oslo_concurrency.lockutils [None req-8c1c7ca7-abfc-4781-a232-a03dce439808 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:12:07 np0005548731 nova_compute[232433]: 2025-12-06 08:12:07.821 232437 DEBUG nova.storage.rbd_utils [None req-8c1c7ca7-abfc-4781-a232-a03dce439808 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] rbd image f13f96d6-bf80-493f-8c88-883a6da35105_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 03:12:07 np0005548731 nova_compute[232433]: 2025-12-06 08:12:07.825 232437 DEBUG oslo_concurrency.processutils [None req-8c1c7ca7-abfc-4781-a232-a03dce439808 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef f13f96d6-bf80-493f-8c88-883a6da35105_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 03:12:07 np0005548731 nova_compute[232433]: 2025-12-06 08:12:07.854 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:12:08 np0005548731 nova_compute[232433]: 2025-12-06 08:12:08.082 232437 DEBUG oslo_concurrency.processutils [None req-8c1c7ca7-abfc-4781-a232-a03dce439808 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef f13f96d6-bf80-493f-8c88-883a6da35105_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.257s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 03:12:08 np0005548731 nova_compute[232433]: 2025-12-06 08:12:08.144 232437 DEBUG nova.storage.rbd_utils [None req-8c1c7ca7-abfc-4781-a232-a03dce439808 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] resizing rbd image f13f96d6-bf80-493f-8c88-883a6da35105_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Dec  6 03:12:08 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:12:08 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:12:08 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:12:08.314 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:12:08 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:12:08 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:12:08 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:12:08.698 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:12:08 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  6 03:12:08 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 03:12:08 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  6 03:12:08 np0005548731 nova_compute[232433]: 2025-12-06 08:12:08.838 232437 DEBUG nova.objects.instance [None req-8c1c7ca7-abfc-4781-a232-a03dce439808 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Lazy-loading 'migration_context' on Instance uuid f13f96d6-bf80-493f-8c88-883a6da35105 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 03:12:08 np0005548731 nova_compute[232433]: 2025-12-06 08:12:08.869 232437 DEBUG nova.virt.libvirt.driver [None req-8c1c7ca7-abfc-4781-a232-a03dce439808 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: f13f96d6-bf80-493f-8c88-883a6da35105] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Dec  6 03:12:08 np0005548731 nova_compute[232433]: 2025-12-06 08:12:08.870 232437 DEBUG nova.virt.libvirt.driver [None req-8c1c7ca7-abfc-4781-a232-a03dce439808 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: f13f96d6-bf80-493f-8c88-883a6da35105] Ensure instance console log exists: /var/lib/nova/instances/f13f96d6-bf80-493f-8c88-883a6da35105/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec  6 03:12:08 np0005548731 nova_compute[232433]: 2025-12-06 08:12:08.870 232437 DEBUG oslo_concurrency.lockutils [None req-8c1c7ca7-abfc-4781-a232-a03dce439808 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:12:08 np0005548731 nova_compute[232433]: 2025-12-06 08:12:08.871 232437 DEBUG oslo_concurrency.lockutils [None req-8c1c7ca7-abfc-4781-a232-a03dce439808 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:12:08 np0005548731 nova_compute[232433]: 2025-12-06 08:12:08.871 232437 DEBUG oslo_concurrency.lockutils [None req-8c1c7ca7-abfc-4781-a232-a03dce439808 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:12:09 np0005548731 nova_compute[232433]: 2025-12-06 08:12:09.747 232437 DEBUG nova.network.neutron [None req-8c1c7ca7-abfc-4781-a232-a03dce439808 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: f13f96d6-bf80-493f-8c88-883a6da35105] Successfully created port: a69d734a-1545-4982-9f2c-ed585fa6d7d1 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Dec  6 03:12:09 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:12:10 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:12:10 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:12:10 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:12:10.317 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:12:10 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:12:10 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:12:10 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:12:10.700 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:12:11 np0005548731 nova_compute[232433]: 2025-12-06 08:12:11.510 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:12:12 np0005548731 nova_compute[232433]: 2025-12-06 08:12:12.100 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:12:12 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:12:12 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:12:12 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:12:12.320 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:12:12 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:12:12 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:12:12 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:12:12.703 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:12:12 np0005548731 nova_compute[232433]: 2025-12-06 08:12:12.857 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:12:13 np0005548731 nova_compute[232433]: 2025-12-06 08:12:13.817 232437 DEBUG nova.network.neutron [None req-8c1c7ca7-abfc-4781-a232-a03dce439808 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: f13f96d6-bf80-493f-8c88-883a6da35105] Successfully updated port: a69d734a-1545-4982-9f2c-ed585fa6d7d1 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Dec  6 03:12:13 np0005548731 nova_compute[232433]: 2025-12-06 08:12:13.845 232437 DEBUG oslo_concurrency.lockutils [None req-8c1c7ca7-abfc-4781-a232-a03dce439808 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Acquiring lock "refresh_cache-f13f96d6-bf80-493f-8c88-883a6da35105" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 03:12:13 np0005548731 nova_compute[232433]: 2025-12-06 08:12:13.846 232437 DEBUG oslo_concurrency.lockutils [None req-8c1c7ca7-abfc-4781-a232-a03dce439808 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Acquired lock "refresh_cache-f13f96d6-bf80-493f-8c88-883a6da35105" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 03:12:13 np0005548731 nova_compute[232433]: 2025-12-06 08:12:13.846 232437 DEBUG nova.network.neutron [None req-8c1c7ca7-abfc-4781-a232-a03dce439808 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: f13f96d6-bf80-493f-8c88-883a6da35105] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec  6 03:12:13 np0005548731 nova_compute[232433]: 2025-12-06 08:12:13.998 232437 DEBUG nova.compute.manager [req-1c9d2416-784c-468e-937d-c791d10f883c req-31639a4a-de4d-42d6-95d7-816eb66b14d5 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f13f96d6-bf80-493f-8c88-883a6da35105] Received event network-changed-a69d734a-1545-4982-9f2c-ed585fa6d7d1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 03:12:13 np0005548731 nova_compute[232433]: 2025-12-06 08:12:13.999 232437 DEBUG nova.compute.manager [req-1c9d2416-784c-468e-937d-c791d10f883c req-31639a4a-de4d-42d6-95d7-816eb66b14d5 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f13f96d6-bf80-493f-8c88-883a6da35105] Refreshing instance network info cache due to event network-changed-a69d734a-1545-4982-9f2c-ed585fa6d7d1. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  6 03:12:14 np0005548731 nova_compute[232433]: 2025-12-06 08:12:13.999 232437 DEBUG oslo_concurrency.lockutils [req-1c9d2416-784c-468e-937d-c791d10f883c req-31639a4a-de4d-42d6-95d7-816eb66b14d5 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-f13f96d6-bf80-493f-8c88-883a6da35105" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 03:12:14 np0005548731 nova_compute[232433]: 2025-12-06 08:12:14.232 232437 DEBUG nova.network.neutron [None req-8c1c7ca7-abfc-4781-a232-a03dce439808 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: f13f96d6-bf80-493f-8c88-883a6da35105] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec  6 03:12:14 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:12:14 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:12:14 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:12:14.323 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:12:14 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 03:12:14 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 03:12:14 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:12:14 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:12:14 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:12:14.706 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:12:14 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:12:16 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:12:16 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:12:16 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:12:16.326 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:12:16 np0005548731 nova_compute[232433]: 2025-12-06 08:12:16.515 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:12:16 np0005548731 nova_compute[232433]: 2025-12-06 08:12:16.602 232437 DEBUG nova.network.neutron [None req-8c1c7ca7-abfc-4781-a232-a03dce439808 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: f13f96d6-bf80-493f-8c88-883a6da35105] Updating instance_info_cache with network_info: [{"id": "a69d734a-1545-4982-9f2c-ed585fa6d7d1", "address": "fa:16:3e:7d:8c:ed", "network": {"id": "1fd5471b-8f3d-43f6-83ce-53aad6043b61", "bridge": "br-int", "label": "tempest-network-smoke--236738794", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fd8e24e430c64364ace789d88a68ba5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa69d734a-15", "ovs_interfaceid": "a69d734a-1545-4982-9f2c-ed585fa6d7d1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 03:12:16 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:12:16 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:12:16 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:12:16.709 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:12:17 np0005548731 nova_compute[232433]: 2025-12-06 08:12:17.857 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:12:18 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:12:18 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:12:18 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:12:18.329 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:12:18 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:12:18 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:12:18 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:12:18.712 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:12:18 np0005548731 nova_compute[232433]: 2025-12-06 08:12:18.731 232437 DEBUG oslo_concurrency.lockutils [None req-8c1c7ca7-abfc-4781-a232-a03dce439808 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Releasing lock "refresh_cache-f13f96d6-bf80-493f-8c88-883a6da35105" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 03:12:18 np0005548731 nova_compute[232433]: 2025-12-06 08:12:18.731 232437 DEBUG nova.compute.manager [None req-8c1c7ca7-abfc-4781-a232-a03dce439808 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: f13f96d6-bf80-493f-8c88-883a6da35105] Instance network_info: |[{"id": "a69d734a-1545-4982-9f2c-ed585fa6d7d1", "address": "fa:16:3e:7d:8c:ed", "network": {"id": "1fd5471b-8f3d-43f6-83ce-53aad6043b61", "bridge": "br-int", "label": "tempest-network-smoke--236738794", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fd8e24e430c64364ace789d88a68ba5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa69d734a-15", "ovs_interfaceid": "a69d734a-1545-4982-9f2c-ed585fa6d7d1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Dec  6 03:12:18 np0005548731 nova_compute[232433]: 2025-12-06 08:12:18.732 232437 DEBUG oslo_concurrency.lockutils [req-1c9d2416-784c-468e-937d-c791d10f883c req-31639a4a-de4d-42d6-95d7-816eb66b14d5 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-f13f96d6-bf80-493f-8c88-883a6da35105" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 03:12:18 np0005548731 nova_compute[232433]: 2025-12-06 08:12:18.732 232437 DEBUG nova.network.neutron [req-1c9d2416-784c-468e-937d-c791d10f883c req-31639a4a-de4d-42d6-95d7-816eb66b14d5 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f13f96d6-bf80-493f-8c88-883a6da35105] Refreshing network info cache for port a69d734a-1545-4982-9f2c-ed585fa6d7d1 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  6 03:12:18 np0005548731 nova_compute[232433]: 2025-12-06 08:12:18.734 232437 DEBUG nova.virt.libvirt.driver [None req-8c1c7ca7-abfc-4781-a232-a03dce439808 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: f13f96d6-bf80-493f-8c88-883a6da35105] Start _get_guest_xml network_info=[{"id": "a69d734a-1545-4982-9f2c-ed585fa6d7d1", "address": "fa:16:3e:7d:8c:ed", "network": {"id": "1fd5471b-8f3d-43f6-83ce-53aad6043b61", "bridge": "br-int", "label": "tempest-network-smoke--236738794", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fd8e24e430c64364ace789d88a68ba5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa69d734a-15", "ovs_interfaceid": "a69d734a-1545-4982-9f2c-ed585fa6d7d1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'device_type': 'disk', 'encryption_secret_uuid': None, 'disk_bus': 'virtio', 'boot_index': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'encryption_options': None, 'size': 0, 'encrypted': False, 'image_id': '6efab05d-c7cf-4770-a5c3-c806a2739063'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec  6 03:12:18 np0005548731 nova_compute[232433]: 2025-12-06 08:12:18.740 232437 WARNING nova.virt.libvirt.driver [None req-8c1c7ca7-abfc-4781-a232-a03dce439808 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  6 03:12:18 np0005548731 nova_compute[232433]: 2025-12-06 08:12:18.763 232437 DEBUG nova.virt.libvirt.host [None req-8c1c7ca7-abfc-4781-a232-a03dce439808 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec  6 03:12:18 np0005548731 nova_compute[232433]: 2025-12-06 08:12:18.764 232437 DEBUG nova.virt.libvirt.host [None req-8c1c7ca7-abfc-4781-a232-a03dce439808 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec  6 03:12:18 np0005548731 nova_compute[232433]: 2025-12-06 08:12:18.784 232437 DEBUG nova.virt.libvirt.host [None req-8c1c7ca7-abfc-4781-a232-a03dce439808 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec  6 03:12:18 np0005548731 nova_compute[232433]: 2025-12-06 08:12:18.784 232437 DEBUG nova.virt.libvirt.host [None req-8c1c7ca7-abfc-4781-a232-a03dce439808 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec  6 03:12:18 np0005548731 nova_compute[232433]: 2025-12-06 08:12:18.785 232437 DEBUG nova.virt.libvirt.driver [None req-8c1c7ca7-abfc-4781-a232-a03dce439808 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec  6 03:12:18 np0005548731 nova_compute[232433]: 2025-12-06 08:12:18.785 232437 DEBUG nova.virt.hardware [None req-8c1c7ca7-abfc-4781-a232-a03dce439808 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-06T06:56:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='25848a18-11d9-4f11-80b5-5d005675c76d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec  6 03:12:18 np0005548731 nova_compute[232433]: 2025-12-06 08:12:18.786 232437 DEBUG nova.virt.hardware [None req-8c1c7ca7-abfc-4781-a232-a03dce439808 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec  6 03:12:18 np0005548731 nova_compute[232433]: 2025-12-06 08:12:18.786 232437 DEBUG nova.virt.hardware [None req-8c1c7ca7-abfc-4781-a232-a03dce439808 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec  6 03:12:18 np0005548731 nova_compute[232433]: 2025-12-06 08:12:18.786 232437 DEBUG nova.virt.hardware [None req-8c1c7ca7-abfc-4781-a232-a03dce439808 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec  6 03:12:18 np0005548731 nova_compute[232433]: 2025-12-06 08:12:18.786 232437 DEBUG nova.virt.hardware [None req-8c1c7ca7-abfc-4781-a232-a03dce439808 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec  6 03:12:18 np0005548731 nova_compute[232433]: 2025-12-06 08:12:18.787 232437 DEBUG nova.virt.hardware [None req-8c1c7ca7-abfc-4781-a232-a03dce439808 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec  6 03:12:18 np0005548731 nova_compute[232433]: 2025-12-06 08:12:18.787 232437 DEBUG nova.virt.hardware [None req-8c1c7ca7-abfc-4781-a232-a03dce439808 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec  6 03:12:18 np0005548731 nova_compute[232433]: 2025-12-06 08:12:18.787 232437 DEBUG nova.virt.hardware [None req-8c1c7ca7-abfc-4781-a232-a03dce439808 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec  6 03:12:18 np0005548731 nova_compute[232433]: 2025-12-06 08:12:18.787 232437 DEBUG nova.virt.hardware [None req-8c1c7ca7-abfc-4781-a232-a03dce439808 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec  6 03:12:18 np0005548731 nova_compute[232433]: 2025-12-06 08:12:18.787 232437 DEBUG nova.virt.hardware [None req-8c1c7ca7-abfc-4781-a232-a03dce439808 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec  6 03:12:18 np0005548731 nova_compute[232433]: 2025-12-06 08:12:18.788 232437 DEBUG nova.virt.hardware [None req-8c1c7ca7-abfc-4781-a232-a03dce439808 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec  6 03:12:18 np0005548731 nova_compute[232433]: 2025-12-06 08:12:18.790 232437 DEBUG oslo_concurrency.processutils [None req-8c1c7ca7-abfc-4781-a232-a03dce439808 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 03:12:19 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  6 03:12:19 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/308335499' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  6 03:12:19 np0005548731 nova_compute[232433]: 2025-12-06 08:12:19.264 232437 DEBUG oslo_concurrency.processutils [None req-8c1c7ca7-abfc-4781-a232-a03dce439808 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.474s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 03:12:19 np0005548731 nova_compute[232433]: 2025-12-06 08:12:19.293 232437 DEBUG nova.storage.rbd_utils [None req-8c1c7ca7-abfc-4781-a232-a03dce439808 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] rbd image f13f96d6-bf80-493f-8c88-883a6da35105_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 03:12:19 np0005548731 nova_compute[232433]: 2025-12-06 08:12:19.298 232437 DEBUG oslo_concurrency.processutils [None req-8c1c7ca7-abfc-4781-a232-a03dce439808 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 03:12:19 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  6 03:12:19 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3595195691' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  6 03:12:19 np0005548731 nova_compute[232433]: 2025-12-06 08:12:19.716 232437 DEBUG oslo_concurrency.processutils [None req-8c1c7ca7-abfc-4781-a232-a03dce439808 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.418s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 03:12:19 np0005548731 nova_compute[232433]: 2025-12-06 08:12:19.718 232437 DEBUG nova.virt.libvirt.vif [None req-8c1c7ca7-abfc-4781-a232-a03dce439808 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T08:12:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-193247361',display_name='tempest-TestNetworkAdvancedServerOps-server-193247361',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-193247361',id=199,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLypEWdRECJBPrBTAU9x3pM8rCRxZw7L301HpsN0K17uUZRgP5F7CBi5FWUnWRGAWn4/kMZFUFGjj2mxL7XQEmDUEDcFRVNsKsRyeKF5IIaPVdD+07YBf8H0xoIuS5vQEA==',key_name='tempest-TestNetworkAdvancedServerOps-731579489',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='fd8e24e430c64364ace789d88a68ba5f',ramdisk_id='',reservation_id='r-wjpvo28t',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-1171852383',owner_user_name='tempest-TestNetworkAdvancedServerOps-1171852383-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T08:12:07Z,user_data=None,user_id='2ed2d17026504d70b893923a85cece4d',uuid=f13f96d6-bf80-493f-8c88-883a6da35105,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a69d734a-1545-4982-9f2c-ed585fa6d7d1", "address": "fa:16:3e:7d:8c:ed", "network": {"id": "1fd5471b-8f3d-43f6-83ce-53aad6043b61", "bridge": "br-int", "label": "tempest-network-smoke--236738794", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": 
true}}], "meta": {"injected": false, "tenant_id": "fd8e24e430c64364ace789d88a68ba5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa69d734a-15", "ovs_interfaceid": "a69d734a-1545-4982-9f2c-ed585fa6d7d1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec  6 03:12:19 np0005548731 nova_compute[232433]: 2025-12-06 08:12:19.719 232437 DEBUG nova.network.os_vif_util [None req-8c1c7ca7-abfc-4781-a232-a03dce439808 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Converting VIF {"id": "a69d734a-1545-4982-9f2c-ed585fa6d7d1", "address": "fa:16:3e:7d:8c:ed", "network": {"id": "1fd5471b-8f3d-43f6-83ce-53aad6043b61", "bridge": "br-int", "label": "tempest-network-smoke--236738794", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fd8e24e430c64364ace789d88a68ba5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa69d734a-15", "ovs_interfaceid": "a69d734a-1545-4982-9f2c-ed585fa6d7d1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 03:12:19 np0005548731 nova_compute[232433]: 2025-12-06 08:12:19.720 232437 DEBUG nova.network.os_vif_util [None req-8c1c7ca7-abfc-4781-a232-a03dce439808 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7d:8c:ed,bridge_name='br-int',has_traffic_filtering=True,id=a69d734a-1545-4982-9f2c-ed585fa6d7d1,network=Network(1fd5471b-8f3d-43f6-83ce-53aad6043b61),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa69d734a-15') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 03:12:19 np0005548731 nova_compute[232433]: 2025-12-06 08:12:19.721 232437 DEBUG nova.objects.instance [None req-8c1c7ca7-abfc-4781-a232-a03dce439808 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Lazy-loading 'pci_devices' on Instance uuid f13f96d6-bf80-493f-8c88-883a6da35105 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 03:12:19 np0005548731 nova_compute[232433]: 2025-12-06 08:12:19.832 232437 DEBUG nova.virt.libvirt.driver [None req-8c1c7ca7-abfc-4781-a232-a03dce439808 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: f13f96d6-bf80-493f-8c88-883a6da35105] End _get_guest_xml xml=<domain type="kvm">
Dec  6 03:12:19 np0005548731 nova_compute[232433]:  <uuid>f13f96d6-bf80-493f-8c88-883a6da35105</uuid>
Dec  6 03:12:19 np0005548731 nova_compute[232433]:  <name>instance-000000c7</name>
Dec  6 03:12:19 np0005548731 nova_compute[232433]:  <memory>131072</memory>
Dec  6 03:12:19 np0005548731 nova_compute[232433]:  <vcpu>1</vcpu>
Dec  6 03:12:19 np0005548731 nova_compute[232433]:  <metadata>
Dec  6 03:12:19 np0005548731 nova_compute[232433]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec  6 03:12:19 np0005548731 nova_compute[232433]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec  6 03:12:19 np0005548731 nova_compute[232433]:      <nova:name>tempest-TestNetworkAdvancedServerOps-server-193247361</nova:name>
Dec  6 03:12:19 np0005548731 nova_compute[232433]:      <nova:creationTime>2025-12-06 08:12:18</nova:creationTime>
Dec  6 03:12:19 np0005548731 nova_compute[232433]:      <nova:flavor name="m1.nano">
Dec  6 03:12:19 np0005548731 nova_compute[232433]:        <nova:memory>128</nova:memory>
Dec  6 03:12:19 np0005548731 nova_compute[232433]:        <nova:disk>1</nova:disk>
Dec  6 03:12:19 np0005548731 nova_compute[232433]:        <nova:swap>0</nova:swap>
Dec  6 03:12:19 np0005548731 nova_compute[232433]:        <nova:ephemeral>0</nova:ephemeral>
Dec  6 03:12:19 np0005548731 nova_compute[232433]:        <nova:vcpus>1</nova:vcpus>
Dec  6 03:12:19 np0005548731 nova_compute[232433]:      </nova:flavor>
Dec  6 03:12:19 np0005548731 nova_compute[232433]:      <nova:owner>
Dec  6 03:12:19 np0005548731 nova_compute[232433]:        <nova:user uuid="2ed2d17026504d70b893923a85cece4d">tempest-TestNetworkAdvancedServerOps-1171852383-project-member</nova:user>
Dec  6 03:12:19 np0005548731 nova_compute[232433]:        <nova:project uuid="fd8e24e430c64364ace789d88a68ba5f">tempest-TestNetworkAdvancedServerOps-1171852383</nova:project>
Dec  6 03:12:19 np0005548731 nova_compute[232433]:      </nova:owner>
Dec  6 03:12:19 np0005548731 nova_compute[232433]:      <nova:root type="image" uuid="6efab05d-c7cf-4770-a5c3-c806a2739063"/>
Dec  6 03:12:19 np0005548731 nova_compute[232433]:      <nova:ports>
Dec  6 03:12:19 np0005548731 nova_compute[232433]:        <nova:port uuid="a69d734a-1545-4982-9f2c-ed585fa6d7d1">
Dec  6 03:12:19 np0005548731 nova_compute[232433]:          <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Dec  6 03:12:19 np0005548731 nova_compute[232433]:        </nova:port>
Dec  6 03:12:19 np0005548731 nova_compute[232433]:      </nova:ports>
Dec  6 03:12:19 np0005548731 nova_compute[232433]:    </nova:instance>
Dec  6 03:12:19 np0005548731 nova_compute[232433]:  </metadata>
Dec  6 03:12:19 np0005548731 nova_compute[232433]:  <sysinfo type="smbios">
Dec  6 03:12:19 np0005548731 nova_compute[232433]:    <system>
Dec  6 03:12:19 np0005548731 nova_compute[232433]:      <entry name="manufacturer">RDO</entry>
Dec  6 03:12:19 np0005548731 nova_compute[232433]:      <entry name="product">OpenStack Compute</entry>
Dec  6 03:12:19 np0005548731 nova_compute[232433]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec  6 03:12:19 np0005548731 nova_compute[232433]:      <entry name="serial">f13f96d6-bf80-493f-8c88-883a6da35105</entry>
Dec  6 03:12:19 np0005548731 nova_compute[232433]:      <entry name="uuid">f13f96d6-bf80-493f-8c88-883a6da35105</entry>
Dec  6 03:12:19 np0005548731 nova_compute[232433]:      <entry name="family">Virtual Machine</entry>
Dec  6 03:12:19 np0005548731 nova_compute[232433]:    </system>
Dec  6 03:12:19 np0005548731 nova_compute[232433]:  </sysinfo>
Dec  6 03:12:19 np0005548731 nova_compute[232433]:  <os>
Dec  6 03:12:19 np0005548731 nova_compute[232433]:    <type arch="x86_64" machine="q35">hvm</type>
Dec  6 03:12:19 np0005548731 nova_compute[232433]:    <boot dev="hd"/>
Dec  6 03:12:19 np0005548731 nova_compute[232433]:    <smbios mode="sysinfo"/>
Dec  6 03:12:19 np0005548731 nova_compute[232433]:  </os>
Dec  6 03:12:19 np0005548731 nova_compute[232433]:  <features>
Dec  6 03:12:19 np0005548731 nova_compute[232433]:    <acpi/>
Dec  6 03:12:19 np0005548731 nova_compute[232433]:    <apic/>
Dec  6 03:12:19 np0005548731 nova_compute[232433]:    <vmcoreinfo/>
Dec  6 03:12:19 np0005548731 nova_compute[232433]:  </features>
Dec  6 03:12:19 np0005548731 nova_compute[232433]:  <clock offset="utc">
Dec  6 03:12:19 np0005548731 nova_compute[232433]:    <timer name="pit" tickpolicy="delay"/>
Dec  6 03:12:19 np0005548731 nova_compute[232433]:    <timer name="rtc" tickpolicy="catchup"/>
Dec  6 03:12:19 np0005548731 nova_compute[232433]:    <timer name="hpet" present="no"/>
Dec  6 03:12:19 np0005548731 nova_compute[232433]:  </clock>
Dec  6 03:12:19 np0005548731 nova_compute[232433]:  <cpu mode="custom" match="exact">
Dec  6 03:12:19 np0005548731 nova_compute[232433]:    <model>Nehalem</model>
Dec  6 03:12:19 np0005548731 nova_compute[232433]:    <topology sockets="1" cores="1" threads="1"/>
Dec  6 03:12:19 np0005548731 nova_compute[232433]:  </cpu>
Dec  6 03:12:19 np0005548731 nova_compute[232433]:  <devices>
Dec  6 03:12:19 np0005548731 nova_compute[232433]:    <disk type="network" device="disk">
Dec  6 03:12:19 np0005548731 nova_compute[232433]:      <driver type="raw" cache="none"/>
Dec  6 03:12:19 np0005548731 nova_compute[232433]:      <source protocol="rbd" name="vms/f13f96d6-bf80-493f-8c88-883a6da35105_disk">
Dec  6 03:12:19 np0005548731 nova_compute[232433]:        <host name="192.168.122.100" port="6789"/>
Dec  6 03:12:19 np0005548731 nova_compute[232433]:        <host name="192.168.122.102" port="6789"/>
Dec  6 03:12:19 np0005548731 nova_compute[232433]:        <host name="192.168.122.101" port="6789"/>
Dec  6 03:12:19 np0005548731 nova_compute[232433]:      </source>
Dec  6 03:12:19 np0005548731 nova_compute[232433]:      <auth username="openstack">
Dec  6 03:12:19 np0005548731 nova_compute[232433]:        <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec  6 03:12:19 np0005548731 nova_compute[232433]:      </auth>
Dec  6 03:12:19 np0005548731 nova_compute[232433]:      <target dev="vda" bus="virtio"/>
Dec  6 03:12:19 np0005548731 nova_compute[232433]:    </disk>
Dec  6 03:12:19 np0005548731 nova_compute[232433]:    <disk type="network" device="cdrom">
Dec  6 03:12:19 np0005548731 nova_compute[232433]:      <driver type="raw" cache="none"/>
Dec  6 03:12:19 np0005548731 nova_compute[232433]:      <source protocol="rbd" name="vms/f13f96d6-bf80-493f-8c88-883a6da35105_disk.config">
Dec  6 03:12:19 np0005548731 nova_compute[232433]:        <host name="192.168.122.100" port="6789"/>
Dec  6 03:12:19 np0005548731 nova_compute[232433]:        <host name="192.168.122.102" port="6789"/>
Dec  6 03:12:19 np0005548731 nova_compute[232433]:        <host name="192.168.122.101" port="6789"/>
Dec  6 03:12:19 np0005548731 nova_compute[232433]:      </source>
Dec  6 03:12:19 np0005548731 nova_compute[232433]:      <auth username="openstack">
Dec  6 03:12:19 np0005548731 nova_compute[232433]:        <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec  6 03:12:19 np0005548731 nova_compute[232433]:      </auth>
Dec  6 03:12:19 np0005548731 nova_compute[232433]:      <target dev="sda" bus="sata"/>
Dec  6 03:12:19 np0005548731 nova_compute[232433]:    </disk>
Dec  6 03:12:19 np0005548731 nova_compute[232433]:    <interface type="ethernet">
Dec  6 03:12:19 np0005548731 nova_compute[232433]:      <mac address="fa:16:3e:7d:8c:ed"/>
Dec  6 03:12:19 np0005548731 nova_compute[232433]:      <model type="virtio"/>
Dec  6 03:12:19 np0005548731 nova_compute[232433]:      <driver name="vhost" rx_queue_size="512"/>
Dec  6 03:12:19 np0005548731 nova_compute[232433]:      <mtu size="1442"/>
Dec  6 03:12:19 np0005548731 nova_compute[232433]:      <target dev="tapa69d734a-15"/>
Dec  6 03:12:19 np0005548731 nova_compute[232433]:    </interface>
Dec  6 03:12:19 np0005548731 nova_compute[232433]:    <serial type="pty">
Dec  6 03:12:19 np0005548731 nova_compute[232433]:      <log file="/var/lib/nova/instances/f13f96d6-bf80-493f-8c88-883a6da35105/console.log" append="off"/>
Dec  6 03:12:19 np0005548731 nova_compute[232433]:    </serial>
Dec  6 03:12:19 np0005548731 nova_compute[232433]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec  6 03:12:19 np0005548731 nova_compute[232433]:    <video>
Dec  6 03:12:19 np0005548731 nova_compute[232433]:      <model type="virtio"/>
Dec  6 03:12:19 np0005548731 nova_compute[232433]:    </video>
Dec  6 03:12:19 np0005548731 nova_compute[232433]:    <input type="tablet" bus="usb"/>
Dec  6 03:12:19 np0005548731 nova_compute[232433]:    <rng model="virtio">
Dec  6 03:12:19 np0005548731 nova_compute[232433]:      <backend model="random">/dev/urandom</backend>
Dec  6 03:12:19 np0005548731 nova_compute[232433]:    </rng>
Dec  6 03:12:19 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root"/>
Dec  6 03:12:19 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:12:19 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:12:19 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:12:19 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:12:19 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:12:19 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:12:19 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:12:19 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:12:19 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:12:19 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:12:19 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:12:19 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:12:19 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:12:19 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:12:19 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:12:19 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:12:19 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:12:19 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:12:19 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:12:19 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:12:19 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:12:19 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:12:19 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:12:19 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:12:19 np0005548731 nova_compute[232433]:    <controller type="usb" index="0"/>
Dec  6 03:12:19 np0005548731 nova_compute[232433]:    <memballoon model="virtio">
Dec  6 03:12:19 np0005548731 nova_compute[232433]:      <stats period="10"/>
Dec  6 03:12:19 np0005548731 nova_compute[232433]:    </memballoon>
Dec  6 03:12:19 np0005548731 nova_compute[232433]:  </devices>
Dec  6 03:12:19 np0005548731 nova_compute[232433]: </domain>
Dec  6 03:12:19 np0005548731 nova_compute[232433]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec  6 03:12:19 np0005548731 nova_compute[232433]: 2025-12-06 08:12:19.834 232437 DEBUG nova.compute.manager [None req-8c1c7ca7-abfc-4781-a232-a03dce439808 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: f13f96d6-bf80-493f-8c88-883a6da35105] Preparing to wait for external event network-vif-plugged-a69d734a-1545-4982-9f2c-ed585fa6d7d1 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec  6 03:12:19 np0005548731 nova_compute[232433]: 2025-12-06 08:12:19.835 232437 DEBUG oslo_concurrency.lockutils [None req-8c1c7ca7-abfc-4781-a232-a03dce439808 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Acquiring lock "f13f96d6-bf80-493f-8c88-883a6da35105-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:12:19 np0005548731 nova_compute[232433]: 2025-12-06 08:12:19.835 232437 DEBUG oslo_concurrency.lockutils [None req-8c1c7ca7-abfc-4781-a232-a03dce439808 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Lock "f13f96d6-bf80-493f-8c88-883a6da35105-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:12:19 np0005548731 nova_compute[232433]: 2025-12-06 08:12:19.835 232437 DEBUG oslo_concurrency.lockutils [None req-8c1c7ca7-abfc-4781-a232-a03dce439808 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Lock "f13f96d6-bf80-493f-8c88-883a6da35105-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:12:19 np0005548731 nova_compute[232433]: 2025-12-06 08:12:19.836 232437 DEBUG nova.virt.libvirt.vif [None req-8c1c7ca7-abfc-4781-a232-a03dce439808 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T08:12:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-193247361',display_name='tempest-TestNetworkAdvancedServerOps-server-193247361',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-193247361',id=199,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLypEWdRECJBPrBTAU9x3pM8rCRxZw7L301HpsN0K17uUZRgP5F7CBi5FWUnWRGAWn4/kMZFUFGjj2mxL7XQEmDUEDcFRVNsKsRyeKF5IIaPVdD+07YBf8H0xoIuS5vQEA==',key_name='tempest-TestNetworkAdvancedServerOps-731579489',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='fd8e24e430c64364ace789d88a68ba5f',ramdisk_id='',reservation_id='r-wjpvo28t',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-1171852383',owner_user_name='tempest-TestNetworkAdvancedServerOps-1171852383-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T08:12:07Z,user_data=None,user_id='2ed2d17026504d70b893923a85cece4d',uuid=f13f96d6-bf80-493f-8c88-883a6da35105,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a69d734a-1545-4982-9f2c-ed585fa6d7d1", "address": "fa:16:3e:7d:8c:ed", "network": {"id": "1fd5471b-8f3d-43f6-83ce-53aad6043b61", "bridge": "br-int", "label": "tempest-network-smoke--236738794", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fd8e24e430c64364ace789d88a68ba5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa69d734a-15", "ovs_interfaceid": "a69d734a-1545-4982-9f2c-ed585fa6d7d1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec  6 03:12:19 np0005548731 nova_compute[232433]: 2025-12-06 08:12:19.837 232437 DEBUG nova.network.os_vif_util [None req-8c1c7ca7-abfc-4781-a232-a03dce439808 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Converting VIF {"id": "a69d734a-1545-4982-9f2c-ed585fa6d7d1", "address": "fa:16:3e:7d:8c:ed", "network": {"id": "1fd5471b-8f3d-43f6-83ce-53aad6043b61", "bridge": "br-int", "label": "tempest-network-smoke--236738794", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fd8e24e430c64364ace789d88a68ba5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa69d734a-15", "ovs_interfaceid": "a69d734a-1545-4982-9f2c-ed585fa6d7d1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 03:12:19 np0005548731 nova_compute[232433]: 2025-12-06 08:12:19.837 232437 DEBUG nova.network.os_vif_util [None req-8c1c7ca7-abfc-4781-a232-a03dce439808 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7d:8c:ed,bridge_name='br-int',has_traffic_filtering=True,id=a69d734a-1545-4982-9f2c-ed585fa6d7d1,network=Network(1fd5471b-8f3d-43f6-83ce-53aad6043b61),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa69d734a-15') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 03:12:19 np0005548731 nova_compute[232433]: 2025-12-06 08:12:19.838 232437 DEBUG os_vif [None req-8c1c7ca7-abfc-4781-a232-a03dce439808 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:7d:8c:ed,bridge_name='br-int',has_traffic_filtering=True,id=a69d734a-1545-4982-9f2c-ed585fa6d7d1,network=Network(1fd5471b-8f3d-43f6-83ce-53aad6043b61),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa69d734a-15') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec  6 03:12:19 np0005548731 nova_compute[232433]: 2025-12-06 08:12:19.839 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:12:19 np0005548731 nova_compute[232433]: 2025-12-06 08:12:19.839 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 03:12:19 np0005548731 nova_compute[232433]: 2025-12-06 08:12:19.840 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  6 03:12:19 np0005548731 nova_compute[232433]: 2025-12-06 08:12:19.843 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:12:19 np0005548731 nova_compute[232433]: 2025-12-06 08:12:19.843 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa69d734a-15, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 03:12:19 np0005548731 nova_compute[232433]: 2025-12-06 08:12:19.844 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapa69d734a-15, col_values=(('external_ids', {'iface-id': 'a69d734a-1545-4982-9f2c-ed585fa6d7d1', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:7d:8c:ed', 'vm-uuid': 'f13f96d6-bf80-493f-8c88-883a6da35105'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 03:12:19 np0005548731 NetworkManager[49182]: <info>  [1765008739.8461] manager: (tapa69d734a-15): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/473)
Dec  6 03:12:19 np0005548731 nova_compute[232433]: 2025-12-06 08:12:19.848 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  6 03:12:19 np0005548731 nova_compute[232433]: 2025-12-06 08:12:19.851 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:12:19 np0005548731 nova_compute[232433]: 2025-12-06 08:12:19.852 232437 INFO os_vif [None req-8c1c7ca7-abfc-4781-a232-a03dce439808 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:7d:8c:ed,bridge_name='br-int',has_traffic_filtering=True,id=a69d734a-1545-4982-9f2c-ed585fa6d7d1,network=Network(1fd5471b-8f3d-43f6-83ce-53aad6043b61),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa69d734a-15')#033[00m
Dec  6 03:12:19 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:12:20 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:12:20 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:12:20 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:12:20.331 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:12:20 np0005548731 nova_compute[232433]: 2025-12-06 08:12:20.584 232437 DEBUG nova.virt.libvirt.driver [None req-8c1c7ca7-abfc-4781-a232-a03dce439808 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  6 03:12:20 np0005548731 nova_compute[232433]: 2025-12-06 08:12:20.585 232437 DEBUG nova.virt.libvirt.driver [None req-8c1c7ca7-abfc-4781-a232-a03dce439808 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  6 03:12:20 np0005548731 nova_compute[232433]: 2025-12-06 08:12:20.586 232437 DEBUG nova.virt.libvirt.driver [None req-8c1c7ca7-abfc-4781-a232-a03dce439808 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] No VIF found with MAC fa:16:3e:7d:8c:ed, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec  6 03:12:20 np0005548731 nova_compute[232433]: 2025-12-06 08:12:20.587 232437 INFO nova.virt.libvirt.driver [None req-8c1c7ca7-abfc-4781-a232-a03dce439808 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: f13f96d6-bf80-493f-8c88-883a6da35105] Using config drive#033[00m
Dec  6 03:12:20 np0005548731 nova_compute[232433]: 2025-12-06 08:12:20.619 232437 DEBUG nova.storage.rbd_utils [None req-8c1c7ca7-abfc-4781-a232-a03dce439808 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] rbd image f13f96d6-bf80-493f-8c88-883a6da35105_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 03:12:20 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:12:20 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:12:20 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:12:20.715 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:12:21 np0005548731 nova_compute[232433]: 2025-12-06 08:12:21.516 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:12:21 np0005548731 nova_compute[232433]: 2025-12-06 08:12:21.942 232437 INFO nova.virt.libvirt.driver [None req-8c1c7ca7-abfc-4781-a232-a03dce439808 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: f13f96d6-bf80-493f-8c88-883a6da35105] Creating config drive at /var/lib/nova/instances/f13f96d6-bf80-493f-8c88-883a6da35105/disk.config#033[00m
Dec  6 03:12:21 np0005548731 nova_compute[232433]: 2025-12-06 08:12:21.951 232437 DEBUG oslo_concurrency.processutils [None req-8c1c7ca7-abfc-4781-a232-a03dce439808 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/f13f96d6-bf80-493f-8c88-883a6da35105/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp6n3jqipz execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 03:12:22 np0005548731 nova_compute[232433]: 2025-12-06 08:12:22.092 232437 DEBUG oslo_concurrency.processutils [None req-8c1c7ca7-abfc-4781-a232-a03dce439808 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/f13f96d6-bf80-493f-8c88-883a6da35105/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp6n3jqipz" returned: 0 in 0.141s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 03:12:22 np0005548731 nova_compute[232433]: 2025-12-06 08:12:22.125 232437 DEBUG nova.storage.rbd_utils [None req-8c1c7ca7-abfc-4781-a232-a03dce439808 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] rbd image f13f96d6-bf80-493f-8c88-883a6da35105_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 03:12:22 np0005548731 nova_compute[232433]: 2025-12-06 08:12:22.129 232437 DEBUG oslo_concurrency.processutils [None req-8c1c7ca7-abfc-4781-a232-a03dce439808 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/f13f96d6-bf80-493f-8c88-883a6da35105/disk.config f13f96d6-bf80-493f-8c88-883a6da35105_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 03:12:22 np0005548731 nova_compute[232433]: 2025-12-06 08:12:22.286 232437 DEBUG oslo_concurrency.processutils [None req-8c1c7ca7-abfc-4781-a232-a03dce439808 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/f13f96d6-bf80-493f-8c88-883a6da35105/disk.config f13f96d6-bf80-493f-8c88-883a6da35105_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.158s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 03:12:22 np0005548731 nova_compute[232433]: 2025-12-06 08:12:22.287 232437 INFO nova.virt.libvirt.driver [None req-8c1c7ca7-abfc-4781-a232-a03dce439808 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: f13f96d6-bf80-493f-8c88-883a6da35105] Deleting local config drive /var/lib/nova/instances/f13f96d6-bf80-493f-8c88-883a6da35105/disk.config because it was imported into RBD.#033[00m
Dec  6 03:12:22 np0005548731 kernel: tapa69d734a-15: entered promiscuous mode
Dec  6 03:12:22 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:12:22 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 03:12:22 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:12:22.334 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 03:12:22 np0005548731 NetworkManager[49182]: <info>  [1765008742.3375] manager: (tapa69d734a-15): new Tun device (/org/freedesktop/NetworkManager/Devices/474)
Dec  6 03:12:22 np0005548731 ovn_controller[133927]: 2025-12-06T08:12:22Z|01010|binding|INFO|Claiming lport a69d734a-1545-4982-9f2c-ed585fa6d7d1 for this chassis.
Dec  6 03:12:22 np0005548731 ovn_controller[133927]: 2025-12-06T08:12:22Z|01011|binding|INFO|a69d734a-1545-4982-9f2c-ed585fa6d7d1: Claiming fa:16:3e:7d:8c:ed 10.100.0.12
Dec  6 03:12:22 np0005548731 nova_compute[232433]: 2025-12-06 08:12:22.338 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:12:22 np0005548731 nova_compute[232433]: 2025-12-06 08:12:22.344 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:12:22 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:12:22.357 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7d:8c:ed 10.100.0.12'], port_security=['fa:16:3e:7d:8c:ed 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'f13f96d6-bf80-493f-8c88-883a6da35105', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1fd5471b-8f3d-43f6-83ce-53aad6043b61', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'fd8e24e430c64364ace789d88a68ba5f', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'f28f4192-1561-45a2-a0b3-b0c0baeb6833', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e0bb66e4-1df3-4c78-8bff-c48b2b1586b5, chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], logical_port=a69d734a-1545-4982-9f2c-ed585fa6d7d1) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 03:12:22 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:12:22.358 143965 INFO neutron.agent.ovn.metadata.agent [-] Port a69d734a-1545-4982-9f2c-ed585fa6d7d1 in datapath 1fd5471b-8f3d-43f6-83ce-53aad6043b61 bound to our chassis#033[00m
Dec  6 03:12:22 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:12:22.359 143965 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 1fd5471b-8f3d-43f6-83ce-53aad6043b61#033[00m
Dec  6 03:12:22 np0005548731 systemd-udevd[332998]: Network interface NamePolicy= disabled on kernel command line.
Dec  6 03:12:22 np0005548731 systemd-machined[195355]: New machine qemu-102-instance-000000c7.
Dec  6 03:12:22 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:12:22.371 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[754fa509-822c-4647-935a-8d08f442acb2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:12:22 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:12:22.372 143965 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap1fd5471b-81 in ovnmeta-1fd5471b-8f3d-43f6-83ce-53aad6043b61 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Dec  6 03:12:22 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:12:22.374 236668 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap1fd5471b-80 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Dec  6 03:12:22 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:12:22.374 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[af9ac391-1a86-4409-8f02-e35b7dd3733e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:12:22 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:12:22.375 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[d258d79c-d036-470f-a5c9-e6acf0031567]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:12:22 np0005548731 NetworkManager[49182]: <info>  [1765008742.3807] device (tapa69d734a-15): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec  6 03:12:22 np0005548731 NetworkManager[49182]: <info>  [1765008742.3826] device (tapa69d734a-15): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec  6 03:12:22 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:12:22.386 144079 DEBUG oslo.privsep.daemon [-] privsep: reply[b66196cd-89e3-45de-a75f-b56155c9ea31]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:12:22 np0005548731 systemd[1]: Started Virtual Machine qemu-102-instance-000000c7.
Dec  6 03:12:22 np0005548731 nova_compute[232433]: 2025-12-06 08:12:22.404 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:12:22 np0005548731 ovn_controller[133927]: 2025-12-06T08:12:22Z|01012|binding|INFO|Setting lport a69d734a-1545-4982-9f2c-ed585fa6d7d1 ovn-installed in OVS
Dec  6 03:12:22 np0005548731 ovn_controller[133927]: 2025-12-06T08:12:22Z|01013|binding|INFO|Setting lport a69d734a-1545-4982-9f2c-ed585fa6d7d1 up in Southbound
Dec  6 03:12:22 np0005548731 nova_compute[232433]: 2025-12-06 08:12:22.411 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:12:22 np0005548731 nova_compute[232433]: 2025-12-06 08:12:22.413 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:12:22 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:12:22.412 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[8a85aeeb-15a5-456f-b63f-37c0b58f1278]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:12:22 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:12:22.443 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[0fc7d2de-3384-4984-89a8-d09baf413551]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:12:22 np0005548731 NetworkManager[49182]: <info>  [1765008742.4503] manager: (tap1fd5471b-80): new Veth device (/org/freedesktop/NetworkManager/Devices/475)
Dec  6 03:12:22 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:12:22.449 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[1fe7a07b-db07-448f-93de-80b9fe4f3602]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:12:22 np0005548731 nova_compute[232433]: 2025-12-06 08:12:22.480 232437 DEBUG nova.network.neutron [req-1c9d2416-784c-468e-937d-c791d10f883c req-31639a4a-de4d-42d6-95d7-816eb66b14d5 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f13f96d6-bf80-493f-8c88-883a6da35105] Updated VIF entry in instance network info cache for port a69d734a-1545-4982-9f2c-ed585fa6d7d1. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  6 03:12:22 np0005548731 nova_compute[232433]: 2025-12-06 08:12:22.481 232437 DEBUG nova.network.neutron [req-1c9d2416-784c-468e-937d-c791d10f883c req-31639a4a-de4d-42d6-95d7-816eb66b14d5 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f13f96d6-bf80-493f-8c88-883a6da35105] Updating instance_info_cache with network_info: [{"id": "a69d734a-1545-4982-9f2c-ed585fa6d7d1", "address": "fa:16:3e:7d:8c:ed", "network": {"id": "1fd5471b-8f3d-43f6-83ce-53aad6043b61", "bridge": "br-int", "label": "tempest-network-smoke--236738794", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fd8e24e430c64364ace789d88a68ba5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa69d734a-15", "ovs_interfaceid": "a69d734a-1545-4982-9f2c-ed585fa6d7d1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 03:12:22 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:12:22.481 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[6c98898a-ce9c-4de0-a1f0-b8aca63b7942]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:12:22 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:12:22.485 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[3d7ed5ca-601b-4bf4-924a-efde9e95775c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:12:22 np0005548731 NetworkManager[49182]: <info>  [1765008742.5141] device (tap1fd5471b-80): carrier: link connected
Dec  6 03:12:22 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:12:22.519 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[1b65c25b-d0e8-41e8-9b6d-2e1190af4d95]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:12:22 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:12:22.533 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[a2726b33-9879-43ba-9390-bf57ec1a01a8]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap1fd5471b-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ac:fa:a7'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 308], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 900452, 'reachable_time': 22738, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 333032, 'error': None, 'target': 'ovnmeta-1fd5471b-8f3d-43f6-83ce-53aad6043b61', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:12:22 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:12:22.549 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[76adae12-b01a-41ba-9c25-0843a620ceac]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feac:faa7'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 900452, 'tstamp': 900452}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 333033, 'error': None, 'target': 'ovnmeta-1fd5471b-8f3d-43f6-83ce-53aad6043b61', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:12:22 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:12:22.570 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[3ef0802d-01e5-47f4-9fe4-13101c229946]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap1fd5471b-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ac:fa:a7'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 308], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 900452, 'reachable_time': 22738, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 333034, 'error': None, 'target': 'ovnmeta-1fd5471b-8f3d-43f6-83ce-53aad6043b61', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:12:22 np0005548731 nova_compute[232433]: 2025-12-06 08:12:22.573 232437 DEBUG oslo_concurrency.lockutils [req-1c9d2416-784c-468e-937d-c791d10f883c req-31639a4a-de4d-42d6-95d7-816eb66b14d5 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-f13f96d6-bf80-493f-8c88-883a6da35105" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 03:12:22 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:12:22.607 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[995c517b-f895-4bd3-93a3-b25c25939112]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:12:22 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:12:22.665 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[e971f856-ee8c-4fe2-81a4-5ffe94748624]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:12:22 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:12:22.666 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1fd5471b-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 03:12:22 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:12:22.666 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  6 03:12:22 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:12:22.667 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1fd5471b-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 03:12:22 np0005548731 nova_compute[232433]: 2025-12-06 08:12:22.715 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:12:22 np0005548731 NetworkManager[49182]: <info>  [1765008742.7161] manager: (tap1fd5471b-80): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/476)
Dec  6 03:12:22 np0005548731 kernel: tap1fd5471b-80: entered promiscuous mode
Dec  6 03:12:22 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:12:22 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:12:22 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:12:22.717 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:12:22 np0005548731 nova_compute[232433]: 2025-12-06 08:12:22.721 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:12:22 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:12:22.722 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap1fd5471b-80, col_values=(('external_ids', {'iface-id': '3a794737-9f96-4e47-b2d9-5ff59140d4e7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 03:12:22 np0005548731 ovn_controller[133927]: 2025-12-06T08:12:22Z|01014|binding|INFO|Releasing lport 3a794737-9f96-4e47-b2d9-5ff59140d4e7 from this chassis (sb_readonly=0)
Dec  6 03:12:22 np0005548731 nova_compute[232433]: 2025-12-06 08:12:22.723 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:12:22 np0005548731 nova_compute[232433]: 2025-12-06 08:12:22.737 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:12:22 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:12:22.738 143965 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/1fd5471b-8f3d-43f6-83ce-53aad6043b61.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/1fd5471b-8f3d-43f6-83ce-53aad6043b61.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Dec  6 03:12:22 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:12:22.739 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[4694e0a0-3ea3-43aa-b8cf-08b5c9b260b5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:12:22 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:12:22.740 143965 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec  6 03:12:22 np0005548731 ovn_metadata_agent[143960]: global
Dec  6 03:12:22 np0005548731 ovn_metadata_agent[143960]:    log         /dev/log local0 debug
Dec  6 03:12:22 np0005548731 ovn_metadata_agent[143960]:    log-tag     haproxy-metadata-proxy-1fd5471b-8f3d-43f6-83ce-53aad6043b61
Dec  6 03:12:22 np0005548731 ovn_metadata_agent[143960]:    user        root
Dec  6 03:12:22 np0005548731 ovn_metadata_agent[143960]:    group       root
Dec  6 03:12:22 np0005548731 ovn_metadata_agent[143960]:    maxconn     1024
Dec  6 03:12:22 np0005548731 ovn_metadata_agent[143960]:    pidfile     /var/lib/neutron/external/pids/1fd5471b-8f3d-43f6-83ce-53aad6043b61.pid.haproxy
Dec  6 03:12:22 np0005548731 ovn_metadata_agent[143960]:    daemon
Dec  6 03:12:22 np0005548731 ovn_metadata_agent[143960]: 
Dec  6 03:12:22 np0005548731 ovn_metadata_agent[143960]: defaults
Dec  6 03:12:22 np0005548731 ovn_metadata_agent[143960]:    log global
Dec  6 03:12:22 np0005548731 ovn_metadata_agent[143960]:    mode http
Dec  6 03:12:22 np0005548731 ovn_metadata_agent[143960]:    option httplog
Dec  6 03:12:22 np0005548731 ovn_metadata_agent[143960]:    option dontlognull
Dec  6 03:12:22 np0005548731 ovn_metadata_agent[143960]:    option http-server-close
Dec  6 03:12:22 np0005548731 ovn_metadata_agent[143960]:    option forwardfor
Dec  6 03:12:22 np0005548731 ovn_metadata_agent[143960]:    retries                 3
Dec  6 03:12:22 np0005548731 ovn_metadata_agent[143960]:    timeout http-request    30s
Dec  6 03:12:22 np0005548731 ovn_metadata_agent[143960]:    timeout connect         30s
Dec  6 03:12:22 np0005548731 ovn_metadata_agent[143960]:    timeout client          32s
Dec  6 03:12:22 np0005548731 ovn_metadata_agent[143960]:    timeout server          32s
Dec  6 03:12:22 np0005548731 ovn_metadata_agent[143960]:    timeout http-keep-alive 30s
Dec  6 03:12:22 np0005548731 ovn_metadata_agent[143960]: 
Dec  6 03:12:22 np0005548731 ovn_metadata_agent[143960]: 
Dec  6 03:12:22 np0005548731 ovn_metadata_agent[143960]: listen listener
Dec  6 03:12:22 np0005548731 ovn_metadata_agent[143960]:    bind 169.254.169.254:80
Dec  6 03:12:22 np0005548731 ovn_metadata_agent[143960]:    server metadata /var/lib/neutron/metadata_proxy
Dec  6 03:12:22 np0005548731 ovn_metadata_agent[143960]:    http-request add-header X-OVN-Network-ID 1fd5471b-8f3d-43f6-83ce-53aad6043b61
Dec  6 03:12:22 np0005548731 ovn_metadata_agent[143960]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Dec  6 03:12:22 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:12:22.741 143965 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-1fd5471b-8f3d-43f6-83ce-53aad6043b61', 'env', 'PROCESS_TAG=haproxy-1fd5471b-8f3d-43f6-83ce-53aad6043b61', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/1fd5471b-8f3d-43f6-83ce-53aad6043b61.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Dec  6 03:12:22 np0005548731 nova_compute[232433]: 2025-12-06 08:12:22.795 232437 DEBUG nova.virt.driver [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Emitting event <LifecycleEvent: 1765008742.7952645, f13f96d6-bf80-493f-8c88-883a6da35105 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 03:12:22 np0005548731 nova_compute[232433]: 2025-12-06 08:12:22.796 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: f13f96d6-bf80-493f-8c88-883a6da35105] VM Started (Lifecycle Event)#033[00m
Dec  6 03:12:22 np0005548731 nova_compute[232433]: 2025-12-06 08:12:22.852 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: f13f96d6-bf80-493f-8c88-883a6da35105] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 03:12:22 np0005548731 nova_compute[232433]: 2025-12-06 08:12:22.855 232437 DEBUG nova.virt.driver [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Emitting event <LifecycleEvent: 1765008742.7953587, f13f96d6-bf80-493f-8c88-883a6da35105 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 03:12:22 np0005548731 nova_compute[232433]: 2025-12-06 08:12:22.855 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: f13f96d6-bf80-493f-8c88-883a6da35105] VM Paused (Lifecycle Event)#033[00m
Dec  6 03:12:22 np0005548731 nova_compute[232433]: 2025-12-06 08:12:22.878 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: f13f96d6-bf80-493f-8c88-883a6da35105] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 03:12:22 np0005548731 nova_compute[232433]: 2025-12-06 08:12:22.880 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: f13f96d6-bf80-493f-8c88-883a6da35105] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  6 03:12:22 np0005548731 nova_compute[232433]: 2025-12-06 08:12:22.952 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: f13f96d6-bf80-493f-8c88-883a6da35105] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  6 03:12:23 np0005548731 podman[333108]: 2025-12-06 08:12:23.108697682 +0000 UTC m=+0.059413908 container create 559d18183767ec2b97a9a78b62c4d269f805eb49dfb8821a7c5a95803a84e025 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-1fd5471b-8f3d-43f6-83ce-53aad6043b61, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3)
Dec  6 03:12:23 np0005548731 systemd[1]: Started libpod-conmon-559d18183767ec2b97a9a78b62c4d269f805eb49dfb8821a7c5a95803a84e025.scope.
Dec  6 03:12:23 np0005548731 podman[333108]: 2025-12-06 08:12:23.075805111 +0000 UTC m=+0.026521427 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec  6 03:12:23 np0005548731 systemd[1]: Started libcrun container.
Dec  6 03:12:23 np0005548731 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3be420d3dcecb2c8bfcfd35305357141a20b33b1f14a4bad312757ae74e3acfe/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec  6 03:12:23 np0005548731 podman[333108]: 2025-12-06 08:12:23.201743618 +0000 UTC m=+0.152459854 container init 559d18183767ec2b97a9a78b62c4d269f805eb49dfb8821a7c5a95803a84e025 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-1fd5471b-8f3d-43f6-83ce-53aad6043b61, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Dec  6 03:12:23 np0005548731 podman[333108]: 2025-12-06 08:12:23.208233816 +0000 UTC m=+0.158950042 container start 559d18183767ec2b97a9a78b62c4d269f805eb49dfb8821a7c5a95803a84e025 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-1fd5471b-8f3d-43f6-83ce-53aad6043b61, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3)
Dec  6 03:12:23 np0005548731 nova_compute[232433]: 2025-12-06 08:12:23.211 232437 DEBUG nova.compute.manager [req-1a0cd124-4546-4cd4-b504-a9207a0661f3 req-fc544c60-5bd1-4545-82d8-db797607b1f5 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f13f96d6-bf80-493f-8c88-883a6da35105] Received event network-vif-plugged-a69d734a-1545-4982-9f2c-ed585fa6d7d1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 03:12:23 np0005548731 nova_compute[232433]: 2025-12-06 08:12:23.212 232437 DEBUG oslo_concurrency.lockutils [req-1a0cd124-4546-4cd4-b504-a9207a0661f3 req-fc544c60-5bd1-4545-82d8-db797607b1f5 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "f13f96d6-bf80-493f-8c88-883a6da35105-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:12:23 np0005548731 nova_compute[232433]: 2025-12-06 08:12:23.213 232437 DEBUG oslo_concurrency.lockutils [req-1a0cd124-4546-4cd4-b504-a9207a0661f3 req-fc544c60-5bd1-4545-82d8-db797607b1f5 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "f13f96d6-bf80-493f-8c88-883a6da35105-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:12:23 np0005548731 nova_compute[232433]: 2025-12-06 08:12:23.213 232437 DEBUG oslo_concurrency.lockutils [req-1a0cd124-4546-4cd4-b504-a9207a0661f3 req-fc544c60-5bd1-4545-82d8-db797607b1f5 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "f13f96d6-bf80-493f-8c88-883a6da35105-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:12:23 np0005548731 nova_compute[232433]: 2025-12-06 08:12:23.213 232437 DEBUG nova.compute.manager [req-1a0cd124-4546-4cd4-b504-a9207a0661f3 req-fc544c60-5bd1-4545-82d8-db797607b1f5 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f13f96d6-bf80-493f-8c88-883a6da35105] Processing event network-vif-plugged-a69d734a-1545-4982-9f2c-ed585fa6d7d1 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Dec  6 03:12:23 np0005548731 nova_compute[232433]: 2025-12-06 08:12:23.214 232437 DEBUG nova.compute.manager [None req-8c1c7ca7-abfc-4781-a232-a03dce439808 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: f13f96d6-bf80-493f-8c88-883a6da35105] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec  6 03:12:23 np0005548731 nova_compute[232433]: 2025-12-06 08:12:23.218 232437 DEBUG nova.virt.driver [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Emitting event <LifecycleEvent: 1765008743.218467, f13f96d6-bf80-493f-8c88-883a6da35105 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 03:12:23 np0005548731 nova_compute[232433]: 2025-12-06 08:12:23.219 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: f13f96d6-bf80-493f-8c88-883a6da35105] VM Resumed (Lifecycle Event)#033[00m
Dec  6 03:12:23 np0005548731 nova_compute[232433]: 2025-12-06 08:12:23.221 232437 DEBUG nova.virt.libvirt.driver [None req-8c1c7ca7-abfc-4781-a232-a03dce439808 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: f13f96d6-bf80-493f-8c88-883a6da35105] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Dec  6 03:12:23 np0005548731 nova_compute[232433]: 2025-12-06 08:12:23.224 232437 INFO nova.virt.libvirt.driver [-] [instance: f13f96d6-bf80-493f-8c88-883a6da35105] Instance spawned successfully.#033[00m
Dec  6 03:12:23 np0005548731 nova_compute[232433]: 2025-12-06 08:12:23.225 232437 DEBUG nova.virt.libvirt.driver [None req-8c1c7ca7-abfc-4781-a232-a03dce439808 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: f13f96d6-bf80-493f-8c88-883a6da35105] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Dec  6 03:12:23 np0005548731 neutron-haproxy-ovnmeta-1fd5471b-8f3d-43f6-83ce-53aad6043b61[333123]: [NOTICE]   (333127) : New worker (333129) forked
Dec  6 03:12:23 np0005548731 neutron-haproxy-ovnmeta-1fd5471b-8f3d-43f6-83ce-53aad6043b61[333123]: [NOTICE]   (333127) : Loading success.
Dec  6 03:12:23 np0005548731 nova_compute[232433]: 2025-12-06 08:12:23.255 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: f13f96d6-bf80-493f-8c88-883a6da35105] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 03:12:23 np0005548731 nova_compute[232433]: 2025-12-06 08:12:23.262 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: f13f96d6-bf80-493f-8c88-883a6da35105] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  6 03:12:23 np0005548731 nova_compute[232433]: 2025-12-06 08:12:23.266 232437 DEBUG nova.virt.libvirt.driver [None req-8c1c7ca7-abfc-4781-a232-a03dce439808 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: f13f96d6-bf80-493f-8c88-883a6da35105] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 03:12:23 np0005548731 nova_compute[232433]: 2025-12-06 08:12:23.267 232437 DEBUG nova.virt.libvirt.driver [None req-8c1c7ca7-abfc-4781-a232-a03dce439808 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: f13f96d6-bf80-493f-8c88-883a6da35105] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 03:12:23 np0005548731 nova_compute[232433]: 2025-12-06 08:12:23.268 232437 DEBUG nova.virt.libvirt.driver [None req-8c1c7ca7-abfc-4781-a232-a03dce439808 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: f13f96d6-bf80-493f-8c88-883a6da35105] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 03:12:23 np0005548731 nova_compute[232433]: 2025-12-06 08:12:23.268 232437 DEBUG nova.virt.libvirt.driver [None req-8c1c7ca7-abfc-4781-a232-a03dce439808 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: f13f96d6-bf80-493f-8c88-883a6da35105] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 03:12:23 np0005548731 nova_compute[232433]: 2025-12-06 08:12:23.269 232437 DEBUG nova.virt.libvirt.driver [None req-8c1c7ca7-abfc-4781-a232-a03dce439808 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: f13f96d6-bf80-493f-8c88-883a6da35105] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 03:12:23 np0005548731 nova_compute[232433]: 2025-12-06 08:12:23.270 232437 DEBUG nova.virt.libvirt.driver [None req-8c1c7ca7-abfc-4781-a232-a03dce439808 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: f13f96d6-bf80-493f-8c88-883a6da35105] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 03:12:23 np0005548731 nova_compute[232433]: 2025-12-06 08:12:23.297 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: f13f96d6-bf80-493f-8c88-883a6da35105] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  6 03:12:23 np0005548731 nova_compute[232433]: 2025-12-06 08:12:23.380 232437 INFO nova.compute.manager [None req-8c1c7ca7-abfc-4781-a232-a03dce439808 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: f13f96d6-bf80-493f-8c88-883a6da35105] Took 15.77 seconds to spawn the instance on the hypervisor.#033[00m
Dec  6 03:12:23 np0005548731 nova_compute[232433]: 2025-12-06 08:12:23.381 232437 DEBUG nova.compute.manager [None req-8c1c7ca7-abfc-4781-a232-a03dce439808 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: f13f96d6-bf80-493f-8c88-883a6da35105] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 03:12:23 np0005548731 nova_compute[232433]: 2025-12-06 08:12:23.552 232437 INFO nova.compute.manager [None req-8c1c7ca7-abfc-4781-a232-a03dce439808 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: f13f96d6-bf80-493f-8c88-883a6da35105] Took 16.97 seconds to build instance.#033[00m
Dec  6 03:12:23 np0005548731 nova_compute[232433]: 2025-12-06 08:12:23.623 232437 DEBUG oslo_concurrency.lockutils [None req-8c1c7ca7-abfc-4781-a232-a03dce439808 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Lock "f13f96d6-bf80-493f-8c88-883a6da35105" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 17.128s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:12:24 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:12:24 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:12:24 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:12:24.336 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:12:24 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:12:24 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:12:24 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:12:24.720 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:12:24 np0005548731 nova_compute[232433]: 2025-12-06 08:12:24.846 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:12:24 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:12:25 np0005548731 nova_compute[232433]: 2025-12-06 08:12:25.771 232437 DEBUG nova.compute.manager [req-0feb4045-1c9b-4738-a162-c7ba7d2dace9 req-7e224b5c-df26-4a36-8cf4-8e5cac1b1ee3 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f13f96d6-bf80-493f-8c88-883a6da35105] Received event network-vif-plugged-a69d734a-1545-4982-9f2c-ed585fa6d7d1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 03:12:25 np0005548731 nova_compute[232433]: 2025-12-06 08:12:25.771 232437 DEBUG oslo_concurrency.lockutils [req-0feb4045-1c9b-4738-a162-c7ba7d2dace9 req-7e224b5c-df26-4a36-8cf4-8e5cac1b1ee3 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "f13f96d6-bf80-493f-8c88-883a6da35105-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:12:25 np0005548731 nova_compute[232433]: 2025-12-06 08:12:25.772 232437 DEBUG oslo_concurrency.lockutils [req-0feb4045-1c9b-4738-a162-c7ba7d2dace9 req-7e224b5c-df26-4a36-8cf4-8e5cac1b1ee3 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "f13f96d6-bf80-493f-8c88-883a6da35105-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:12:25 np0005548731 nova_compute[232433]: 2025-12-06 08:12:25.772 232437 DEBUG oslo_concurrency.lockutils [req-0feb4045-1c9b-4738-a162-c7ba7d2dace9 req-7e224b5c-df26-4a36-8cf4-8e5cac1b1ee3 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "f13f96d6-bf80-493f-8c88-883a6da35105-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:12:25 np0005548731 nova_compute[232433]: 2025-12-06 08:12:25.772 232437 DEBUG nova.compute.manager [req-0feb4045-1c9b-4738-a162-c7ba7d2dace9 req-7e224b5c-df26-4a36-8cf4-8e5cac1b1ee3 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f13f96d6-bf80-493f-8c88-883a6da35105] No waiting events found dispatching network-vif-plugged-a69d734a-1545-4982-9f2c-ed585fa6d7d1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 03:12:25 np0005548731 nova_compute[232433]: 2025-12-06 08:12:25.772 232437 WARNING nova.compute.manager [req-0feb4045-1c9b-4738-a162-c7ba7d2dace9 req-7e224b5c-df26-4a36-8cf4-8e5cac1b1ee3 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f13f96d6-bf80-493f-8c88-883a6da35105] Received unexpected event network-vif-plugged-a69d734a-1545-4982-9f2c-ed585fa6d7d1 for instance with vm_state active and task_state None.#033[00m
Dec  6 03:12:26 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:12:26 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:12:26 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:12:26.338 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:12:26 np0005548731 nova_compute[232433]: 2025-12-06 08:12:26.521 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:12:26 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:12:26 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:12:26 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:12:26.724 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:12:26 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:12:26.982 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=91, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:ec:b3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '32:72:e7:89:e0:7d'}, ipsec=False) old=SB_Global(nb_cfg=90) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 03:12:26 np0005548731 nova_compute[232433]: 2025-12-06 08:12:26.984 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:12:26 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:12:26.985 143965 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec  6 03:12:28 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:12:28 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:12:28 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:12:28.341 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:12:28 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:12:28 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:12:28 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:12:28.728 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:12:29 np0005548731 nova_compute[232433]: 2025-12-06 08:12:29.848 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:12:29 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:12:30 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:12:30 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:12:30 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:12:30.343 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:12:30 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:12:30 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:12:30 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:12:30.731 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:12:30 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:12:30.988 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=9f96b960-b4f2-40bd-ae99-08121f5e8b78, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '91'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 03:12:31 np0005548731 nova_compute[232433]: 2025-12-06 08:12:31.522 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:12:31 np0005548731 podman[333144]: 2025-12-06 08:12:31.901611064 +0000 UTC m=+0.058293590 container health_status 26c2ce9bdb83c6db5ccad7f5b2a7cb4e9354ab0235ad2f942ea42c8feda2142b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Dec  6 03:12:31 np0005548731 podman[333146]: 2025-12-06 08:12:31.905324805 +0000 UTC m=+0.059610623 container health_status ae4fd08cdec31d6ae2a6b91ffff7deb6ca5a8375cb52ee4b685cc6f25796824e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd)
Dec  6 03:12:31 np0005548731 podman[333145]: 2025-12-06 08:12:31.92769171 +0000 UTC m=+0.081875916 container health_status a118c3becf56cfbcae208065771ebf10a65162bb7b96389971293e534823fb78 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec  6 03:12:32 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:12:32 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:12:32 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:12:32.345 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:12:32 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:12:32 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 03:12:32 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:12:32.772 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 03:12:34 np0005548731 nova_compute[232433]: 2025-12-06 08:12:34.303 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:12:34 np0005548731 NetworkManager[49182]: <info>  [1765008754.3074] manager: (patch-provnet-9e78c1a1-68f4-477a-abaa-13a98bde06e5-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/477)
Dec  6 03:12:34 np0005548731 NetworkManager[49182]: <info>  [1765008754.3084] manager: (patch-br-int-to-provnet-9e78c1a1-68f4-477a-abaa-13a98bde06e5): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/478)
Dec  6 03:12:34 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:12:34 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:12:34 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:12:34.348 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:12:34 np0005548731 nova_compute[232433]: 2025-12-06 08:12:34.376 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:12:34 np0005548731 ovn_controller[133927]: 2025-12-06T08:12:34Z|01015|binding|INFO|Releasing lport 3a794737-9f96-4e47-b2d9-5ff59140d4e7 from this chassis (sb_readonly=0)
Dec  6 03:12:34 np0005548731 nova_compute[232433]: 2025-12-06 08:12:34.385 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:12:34 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:12:34 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:12:34 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:12:34.776 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:12:34 np0005548731 nova_compute[232433]: 2025-12-06 08:12:34.850 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:12:34 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:12:36 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:12:36 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:12:36 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:12:36.350 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:12:36 np0005548731 nova_compute[232433]: 2025-12-06 08:12:36.562 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:12:36 np0005548731 ovn_controller[133927]: 2025-12-06T08:12:36Z|00135|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:7d:8c:ed 10.100.0.12
Dec  6 03:12:36 np0005548731 ovn_controller[133927]: 2025-12-06T08:12:36Z|00136|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:7d:8c:ed 10.100.0.12
Dec  6 03:12:36 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:12:36 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:12:36 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:12:36.779 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:12:38 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:12:38 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:12:38 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:12:38.352 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:12:38 np0005548731 nova_compute[232433]: 2025-12-06 08:12:38.627 232437 DEBUG nova.compute.manager [req-ef3846eb-2925-44da-9e39-7b06592ef148 req-e55a5a96-e8ad-4f24-b0ee-307f728c2dd7 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f13f96d6-bf80-493f-8c88-883a6da35105] Received event network-changed-a69d734a-1545-4982-9f2c-ed585fa6d7d1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 03:12:38 np0005548731 nova_compute[232433]: 2025-12-06 08:12:38.628 232437 DEBUG nova.compute.manager [req-ef3846eb-2925-44da-9e39-7b06592ef148 req-e55a5a96-e8ad-4f24-b0ee-307f728c2dd7 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f13f96d6-bf80-493f-8c88-883a6da35105] Refreshing instance network info cache due to event network-changed-a69d734a-1545-4982-9f2c-ed585fa6d7d1. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  6 03:12:38 np0005548731 nova_compute[232433]: 2025-12-06 08:12:38.628 232437 DEBUG oslo_concurrency.lockutils [req-ef3846eb-2925-44da-9e39-7b06592ef148 req-e55a5a96-e8ad-4f24-b0ee-307f728c2dd7 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-f13f96d6-bf80-493f-8c88-883a6da35105" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 03:12:38 np0005548731 nova_compute[232433]: 2025-12-06 08:12:38.628 232437 DEBUG oslo_concurrency.lockutils [req-ef3846eb-2925-44da-9e39-7b06592ef148 req-e55a5a96-e8ad-4f24-b0ee-307f728c2dd7 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-f13f96d6-bf80-493f-8c88-883a6da35105" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 03:12:38 np0005548731 nova_compute[232433]: 2025-12-06 08:12:38.628 232437 DEBUG nova.network.neutron [req-ef3846eb-2925-44da-9e39-7b06592ef148 req-e55a5a96-e8ad-4f24-b0ee-307f728c2dd7 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f13f96d6-bf80-493f-8c88-883a6da35105] Refreshing network info cache for port a69d734a-1545-4982-9f2c-ed585fa6d7d1 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  6 03:12:38 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:12:38 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:12:38 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:12:38.782 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:12:39 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:12:39 np0005548731 nova_compute[232433]: 2025-12-06 08:12:39.908 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:12:40 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:12:40 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:12:40 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:12:40.355 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:12:40 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:12:40 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:12:40 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:12:40.785 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:12:41 np0005548731 nova_compute[232433]: 2025-12-06 08:12:41.563 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:12:42 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:12:42 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:12:42 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:12:42.357 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:12:42 np0005548731 nova_compute[232433]: 2025-12-06 08:12:42.486 232437 INFO nova.compute.manager [None req-8cdfeff0-60ee-4f21-870d-ab39ed887869 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: f13f96d6-bf80-493f-8c88-883a6da35105] Get console output#033[00m
Dec  6 03:12:42 np0005548731 nova_compute[232433]: 2025-12-06 08:12:42.491 261230 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Dec  6 03:12:42 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:12:42 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:12:42 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:12:42.789 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:12:43 np0005548731 nova_compute[232433]: 2025-12-06 08:12:43.033 232437 DEBUG oslo_concurrency.lockutils [None req-e45d7dcd-8bf2-40e3-881f-1f69a40a830e 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Acquiring lock "f13f96d6-bf80-493f-8c88-883a6da35105" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:12:43 np0005548731 nova_compute[232433]: 2025-12-06 08:12:43.033 232437 DEBUG oslo_concurrency.lockutils [None req-e45d7dcd-8bf2-40e3-881f-1f69a40a830e 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Lock "f13f96d6-bf80-493f-8c88-883a6da35105" acquired by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:12:43 np0005548731 nova_compute[232433]: 2025-12-06 08:12:43.034 232437 DEBUG nova.compute.manager [None req-e45d7dcd-8bf2-40e3-881f-1f69a40a830e 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: f13f96d6-bf80-493f-8c88-883a6da35105] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 03:12:43 np0005548731 nova_compute[232433]: 2025-12-06 08:12:43.037 232437 DEBUG nova.compute.manager [None req-e45d7dcd-8bf2-40e3-881f-1f69a40a830e 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: f13f96d6-bf80-493f-8c88-883a6da35105] Stopping instance; current vm_state: active, current task_state: powering-off, current DB power_state: 1, current VM power_state: 1 do_stop_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3338#033[00m
Dec  6 03:12:43 np0005548731 nova_compute[232433]: 2025-12-06 08:12:43.038 232437 DEBUG nova.objects.instance [None req-e45d7dcd-8bf2-40e3-881f-1f69a40a830e 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Lazy-loading 'flavor' on Instance uuid f13f96d6-bf80-493f-8c88-883a6da35105 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 03:12:43 np0005548731 nova_compute[232433]: 2025-12-06 08:12:43.095 232437 DEBUG nova.virt.libvirt.driver [None req-e45d7dcd-8bf2-40e3-881f-1f69a40a830e 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: f13f96d6-bf80-493f-8c88-883a6da35105] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Dec  6 03:12:44 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:12:44 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:12:44 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:12:44.734 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:12:44 np0005548731 nova_compute[232433]: 2025-12-06 08:12:44.737 232437 DEBUG nova.network.neutron [req-ef3846eb-2925-44da-9e39-7b06592ef148 req-e55a5a96-e8ad-4f24-b0ee-307f728c2dd7 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f13f96d6-bf80-493f-8c88-883a6da35105] Updated VIF entry in instance network info cache for port a69d734a-1545-4982-9f2c-ed585fa6d7d1. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  6 03:12:44 np0005548731 nova_compute[232433]: 2025-12-06 08:12:44.738 232437 DEBUG nova.network.neutron [req-ef3846eb-2925-44da-9e39-7b06592ef148 req-e55a5a96-e8ad-4f24-b0ee-307f728c2dd7 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f13f96d6-bf80-493f-8c88-883a6da35105] Updating instance_info_cache with network_info: [{"id": "a69d734a-1545-4982-9f2c-ed585fa6d7d1", "address": "fa:16:3e:7d:8c:ed", "network": {"id": "1fd5471b-8f3d-43f6-83ce-53aad6043b61", "bridge": "br-int", "label": "tempest-network-smoke--236738794", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.215", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fd8e24e430c64364ace789d88a68ba5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa69d734a-15", "ovs_interfaceid": "a69d734a-1545-4982-9f2c-ed585fa6d7d1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 03:12:44 np0005548731 nova_compute[232433]: 2025-12-06 08:12:44.777 232437 DEBUG oslo_concurrency.lockutils [req-ef3846eb-2925-44da-9e39-7b06592ef148 req-e55a5a96-e8ad-4f24-b0ee-307f728c2dd7 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-f13f96d6-bf80-493f-8c88-883a6da35105" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 03:12:44 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:12:44 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:12:44 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:12:44.791 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:12:44 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:12:44 np0005548731 nova_compute[232433]: 2025-12-06 08:12:44.910 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:12:46 np0005548731 nova_compute[232433]: 2025-12-06 08:12:46.105 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:12:46 np0005548731 nova_compute[232433]: 2025-12-06 08:12:46.106 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  6 03:12:46 np0005548731 nova_compute[232433]: 2025-12-06 08:12:46.106 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  6 03:12:46 np0005548731 nova_compute[232433]: 2025-12-06 08:12:46.126 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Acquiring lock "refresh_cache-f13f96d6-bf80-493f-8c88-883a6da35105" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 03:12:46 np0005548731 nova_compute[232433]: 2025-12-06 08:12:46.127 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Acquired lock "refresh_cache-f13f96d6-bf80-493f-8c88-883a6da35105" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 03:12:46 np0005548731 nova_compute[232433]: 2025-12-06 08:12:46.127 232437 DEBUG nova.network.neutron [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] [instance: f13f96d6-bf80-493f-8c88-883a6da35105] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Dec  6 03:12:46 np0005548731 nova_compute[232433]: 2025-12-06 08:12:46.128 232437 DEBUG nova.objects.instance [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lazy-loading 'info_cache' on Instance uuid f13f96d6-bf80-493f-8c88-883a6da35105 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 03:12:46 np0005548731 nova_compute[232433]: 2025-12-06 08:12:46.566 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:12:46 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:12:46 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:12:46 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:12:46.738 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:12:46 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:12:46 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:12:46 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:12:46.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:12:47 np0005548731 nova_compute[232433]: 2025-12-06 08:12:47.757 232437 INFO nova.virt.libvirt.driver [None req-e45d7dcd-8bf2-40e3-881f-1f69a40a830e 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: f13f96d6-bf80-493f-8c88-883a6da35105] Instance shutdown successfully after 4 seconds.#033[00m
Dec  6 03:12:48 np0005548731 nova_compute[232433]: 2025-12-06 08:12:48.195 232437 DEBUG nova.network.neutron [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] [instance: f13f96d6-bf80-493f-8c88-883a6da35105] Updating instance_info_cache with network_info: [{"id": "a69d734a-1545-4982-9f2c-ed585fa6d7d1", "address": "fa:16:3e:7d:8c:ed", "network": {"id": "1fd5471b-8f3d-43f6-83ce-53aad6043b61", "bridge": "br-int", "label": "tempest-network-smoke--236738794", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.215", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fd8e24e430c64364ace789d88a68ba5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa69d734a-15", "ovs_interfaceid": "a69d734a-1545-4982-9f2c-ed585fa6d7d1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 03:12:48 np0005548731 nova_compute[232433]: 2025-12-06 08:12:48.215 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Releasing lock "refresh_cache-f13f96d6-bf80-493f-8c88-883a6da35105" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 03:12:48 np0005548731 nova_compute[232433]: 2025-12-06 08:12:48.216 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] [instance: f13f96d6-bf80-493f-8c88-883a6da35105] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Dec  6 03:12:48 np0005548731 nova_compute[232433]: 2025-12-06 08:12:48.216 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:12:48 np0005548731 kernel: tapa69d734a-15 (unregistering): left promiscuous mode
Dec  6 03:12:48 np0005548731 NetworkManager[49182]: <info>  [1765008768.4677] device (tapa69d734a-15): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec  6 03:12:48 np0005548731 nova_compute[232433]: 2025-12-06 08:12:48.470 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:12:48 np0005548731 ovn_controller[133927]: 2025-12-06T08:12:48Z|01016|binding|INFO|Releasing lport a69d734a-1545-4982-9f2c-ed585fa6d7d1 from this chassis (sb_readonly=0)
Dec  6 03:12:48 np0005548731 ovn_controller[133927]: 2025-12-06T08:12:48Z|01017|binding|INFO|Setting lport a69d734a-1545-4982-9f2c-ed585fa6d7d1 down in Southbound
Dec  6 03:12:48 np0005548731 ovn_controller[133927]: 2025-12-06T08:12:48Z|01018|binding|INFO|Removing iface tapa69d734a-15 ovn-installed in OVS
Dec  6 03:12:48 np0005548731 nova_compute[232433]: 2025-12-06 08:12:48.473 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:12:48 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:12:48.483 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7d:8c:ed 10.100.0.12'], port_security=['fa:16:3e:7d:8c:ed 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'f13f96d6-bf80-493f-8c88-883a6da35105', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1fd5471b-8f3d-43f6-83ce-53aad6043b61', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'fd8e24e430c64364ace789d88a68ba5f', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'f28f4192-1561-45a2-a0b3-b0c0baeb6833', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com', 'neutron:port_fip': '192.168.122.215'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e0bb66e4-1df3-4c78-8bff-c48b2b1586b5, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], logical_port=a69d734a-1545-4982-9f2c-ed585fa6d7d1) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 03:12:48 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:12:48.485 143965 INFO neutron.agent.ovn.metadata.agent [-] Port a69d734a-1545-4982-9f2c-ed585fa6d7d1 in datapath 1fd5471b-8f3d-43f6-83ce-53aad6043b61 unbound from our chassis#033[00m
Dec  6 03:12:48 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:12:48.487 143965 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 1fd5471b-8f3d-43f6-83ce-53aad6043b61, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec  6 03:12:48 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:12:48.488 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[2b2f63e7-d780-485b-b23a-da88084091e8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:12:48 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:12:48.488 143965 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-1fd5471b-8f3d-43f6-83ce-53aad6043b61 namespace which is not needed anymore#033[00m
Dec  6 03:12:48 np0005548731 nova_compute[232433]: 2025-12-06 08:12:48.493 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:12:48 np0005548731 systemd[1]: machine-qemu\x2d102\x2dinstance\x2d000000c7.scope: Deactivated successfully.
Dec  6 03:12:48 np0005548731 systemd[1]: machine-qemu\x2d102\x2dinstance\x2d000000c7.scope: Consumed 14.003s CPU time.
Dec  6 03:12:48 np0005548731 systemd-machined[195355]: Machine qemu-102-instance-000000c7 terminated.
Dec  6 03:12:48 np0005548731 nova_compute[232433]: 2025-12-06 08:12:48.594 232437 INFO nova.virt.libvirt.driver [-] [instance: f13f96d6-bf80-493f-8c88-883a6da35105] Instance destroyed successfully.#033[00m
Dec  6 03:12:48 np0005548731 nova_compute[232433]: 2025-12-06 08:12:48.595 232437 DEBUG nova.objects.instance [None req-e45d7dcd-8bf2-40e3-881f-1f69a40a830e 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Lazy-loading 'numa_topology' on Instance uuid f13f96d6-bf80-493f-8c88-883a6da35105 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 03:12:48 np0005548731 nova_compute[232433]: 2025-12-06 08:12:48.622 232437 DEBUG nova.compute.manager [None req-e45d7dcd-8bf2-40e3-881f-1f69a40a830e 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: f13f96d6-bf80-493f-8c88-883a6da35105] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 03:12:48 np0005548731 neutron-haproxy-ovnmeta-1fd5471b-8f3d-43f6-83ce-53aad6043b61[333123]: [NOTICE]   (333127) : haproxy version is 2.8.14-c23fe91
Dec  6 03:12:48 np0005548731 neutron-haproxy-ovnmeta-1fd5471b-8f3d-43f6-83ce-53aad6043b61[333123]: [NOTICE]   (333127) : path to executable is /usr/sbin/haproxy
Dec  6 03:12:48 np0005548731 neutron-haproxy-ovnmeta-1fd5471b-8f3d-43f6-83ce-53aad6043b61[333123]: [WARNING]  (333127) : Exiting Master process...
Dec  6 03:12:48 np0005548731 neutron-haproxy-ovnmeta-1fd5471b-8f3d-43f6-83ce-53aad6043b61[333123]: [ALERT]    (333127) : Current worker (333129) exited with code 143 (Terminated)
Dec  6 03:12:48 np0005548731 neutron-haproxy-ovnmeta-1fd5471b-8f3d-43f6-83ce-53aad6043b61[333123]: [WARNING]  (333127) : All workers exited. Exiting... (0)
Dec  6 03:12:48 np0005548731 systemd[1]: libpod-559d18183767ec2b97a9a78b62c4d269f805eb49dfb8821a7c5a95803a84e025.scope: Deactivated successfully.
Dec  6 03:12:48 np0005548731 podman[333294]: 2025-12-06 08:12:48.653004226 +0000 UTC m=+0.052539930 container died 559d18183767ec2b97a9a78b62c4d269f805eb49dfb8821a7c5a95803a84e025 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-1fd5471b-8f3d-43f6-83ce-53aad6043b61, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  6 03:12:48 np0005548731 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-559d18183767ec2b97a9a78b62c4d269f805eb49dfb8821a7c5a95803a84e025-userdata-shm.mount: Deactivated successfully.
Dec  6 03:12:48 np0005548731 systemd[1]: var-lib-containers-storage-overlay-3be420d3dcecb2c8bfcfd35305357141a20b33b1f14a4bad312757ae74e3acfe-merged.mount: Deactivated successfully.
Dec  6 03:12:48 np0005548731 podman[333294]: 2025-12-06 08:12:48.691697129 +0000 UTC m=+0.091232823 container cleanup 559d18183767ec2b97a9a78b62c4d269f805eb49dfb8821a7c5a95803a84e025 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-1fd5471b-8f3d-43f6-83ce-53aad6043b61, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3)
Dec  6 03:12:48 np0005548731 systemd[1]: libpod-conmon-559d18183767ec2b97a9a78b62c4d269f805eb49dfb8821a7c5a95803a84e025.scope: Deactivated successfully.
Dec  6 03:12:48 np0005548731 nova_compute[232433]: 2025-12-06 08:12:48.706 232437 DEBUG oslo_concurrency.lockutils [None req-e45d7dcd-8bf2-40e3-881f-1f69a40a830e 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Lock "f13f96d6-bf80-493f-8c88-883a6da35105" "released" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: held 5.673s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:12:48 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:12:48 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:12:48 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:12:48.740 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:12:48 np0005548731 podman[333334]: 2025-12-06 08:12:48.756443696 +0000 UTC m=+0.042306191 container remove 559d18183767ec2b97a9a78b62c4d269f805eb49dfb8821a7c5a95803a84e025 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-1fd5471b-8f3d-43f6-83ce-53aad6043b61, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec  6 03:12:48 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:12:48.762 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[20f01c30-0720-418c-9438-374145034fed]: (4, ('Sat Dec  6 08:12:48 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-1fd5471b-8f3d-43f6-83ce-53aad6043b61 (559d18183767ec2b97a9a78b62c4d269f805eb49dfb8821a7c5a95803a84e025)\n559d18183767ec2b97a9a78b62c4d269f805eb49dfb8821a7c5a95803a84e025\nSat Dec  6 08:12:48 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-1fd5471b-8f3d-43f6-83ce-53aad6043b61 (559d18183767ec2b97a9a78b62c4d269f805eb49dfb8821a7c5a95803a84e025)\n559d18183767ec2b97a9a78b62c4d269f805eb49dfb8821a7c5a95803a84e025\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:12:48 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:12:48.764 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[efdef7a7-f5eb-41fd-a859-f87aca9b605d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:12:48 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:12:48.765 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1fd5471b-80, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 03:12:48 np0005548731 nova_compute[232433]: 2025-12-06 08:12:48.766 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:12:48 np0005548731 kernel: tap1fd5471b-80: left promiscuous mode
Dec  6 03:12:48 np0005548731 nova_compute[232433]: 2025-12-06 08:12:48.784 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:12:48 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:12:48.786 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[ac9d6422-045a-4744-9f1c-13d1813176e4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:12:48 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:12:48 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:12:48 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:12:48.799 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:12:48 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:12:48.809 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[39c9cca3-f5f8-4bf3-a9a2-036d59a921aa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:12:48 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:12:48.810 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[88348278-81fa-4918-a467-281e1502063b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:12:48 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:12:48.827 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[773e7075-71cb-47c8-a9e7-b9af05a7aba9]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 900444, 'reachable_time': 43540, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 333353, 'error': None, 'target': 'ovnmeta-1fd5471b-8f3d-43f6-83ce-53aad6043b61', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:12:48 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:12:48.829 144079 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-1fd5471b-8f3d-43f6-83ce-53aad6043b61 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Dec  6 03:12:48 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:12:48.829 144079 DEBUG oslo.privsep.daemon [-] privsep: reply[67023275-1d50-4dea-a8ee-c12d1eced857]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:12:48 np0005548731 systemd[1]: run-netns-ovnmeta\x2d1fd5471b\x2d8f3d\x2d43f6\x2d83ce\x2d53aad6043b61.mount: Deactivated successfully.
Dec  6 03:12:49 np0005548731 nova_compute[232433]: 2025-12-06 08:12:49.184 232437 DEBUG nova.compute.manager [req-0ed87fe3-8196-4b5f-bc67-991c23564667 req-24fb1def-3af1-4967-bd4b-f675123d5899 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f13f96d6-bf80-493f-8c88-883a6da35105] Received event network-vif-unplugged-a69d734a-1545-4982-9f2c-ed585fa6d7d1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 03:12:49 np0005548731 nova_compute[232433]: 2025-12-06 08:12:49.185 232437 DEBUG oslo_concurrency.lockutils [req-0ed87fe3-8196-4b5f-bc67-991c23564667 req-24fb1def-3af1-4967-bd4b-f675123d5899 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "f13f96d6-bf80-493f-8c88-883a6da35105-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:12:49 np0005548731 nova_compute[232433]: 2025-12-06 08:12:49.185 232437 DEBUG oslo_concurrency.lockutils [req-0ed87fe3-8196-4b5f-bc67-991c23564667 req-24fb1def-3af1-4967-bd4b-f675123d5899 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "f13f96d6-bf80-493f-8c88-883a6da35105-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:12:49 np0005548731 nova_compute[232433]: 2025-12-06 08:12:49.185 232437 DEBUG oslo_concurrency.lockutils [req-0ed87fe3-8196-4b5f-bc67-991c23564667 req-24fb1def-3af1-4967-bd4b-f675123d5899 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "f13f96d6-bf80-493f-8c88-883a6da35105-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:12:49 np0005548731 nova_compute[232433]: 2025-12-06 08:12:49.185 232437 DEBUG nova.compute.manager [req-0ed87fe3-8196-4b5f-bc67-991c23564667 req-24fb1def-3af1-4967-bd4b-f675123d5899 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f13f96d6-bf80-493f-8c88-883a6da35105] No waiting events found dispatching network-vif-unplugged-a69d734a-1545-4982-9f2c-ed585fa6d7d1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 03:12:49 np0005548731 nova_compute[232433]: 2025-12-06 08:12:49.186 232437 WARNING nova.compute.manager [req-0ed87fe3-8196-4b5f-bc67-991c23564667 req-24fb1def-3af1-4967-bd4b-f675123d5899 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f13f96d6-bf80-493f-8c88-883a6da35105] Received unexpected event network-vif-unplugged-a69d734a-1545-4982-9f2c-ed585fa6d7d1 for instance with vm_state stopped and task_state None.#033[00m
Dec  6 03:12:49 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:12:49 np0005548731 nova_compute[232433]: 2025-12-06 08:12:49.913 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:12:50 np0005548731 nova_compute[232433]: 2025-12-06 08:12:50.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:12:50 np0005548731 nova_compute[232433]: 2025-12-06 08:12:50.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:12:50 np0005548731 nova_compute[232433]: 2025-12-06 08:12:50.104 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  6 03:12:50 np0005548731 nova_compute[232433]: 2025-12-06 08:12:50.666 232437 INFO nova.compute.manager [None req-10d707b7-5fb1-4180-9cfe-4ed33bde814a 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: f13f96d6-bf80-493f-8c88-883a6da35105] Get console output#033[00m
Dec  6 03:12:50 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:12:50 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:12:50 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:12:50.741 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:12:50 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:12:50 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:12:50 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:12:50.801 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:12:51 np0005548731 nova_compute[232433]: 2025-12-06 08:12:51.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:12:51 np0005548731 nova_compute[232433]: 2025-12-06 08:12:51.182 232437 DEBUG nova.objects.instance [None req-86d3fafc-1d6b-407d-b281-f2268ba041fd 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Lazy-loading 'flavor' on Instance uuid f13f96d6-bf80-493f-8c88-883a6da35105 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 03:12:51 np0005548731 nova_compute[232433]: 2025-12-06 08:12:51.203 232437 DEBUG oslo_concurrency.lockutils [None req-86d3fafc-1d6b-407d-b281-f2268ba041fd 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Acquiring lock "refresh_cache-f13f96d6-bf80-493f-8c88-883a6da35105" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 03:12:51 np0005548731 nova_compute[232433]: 2025-12-06 08:12:51.203 232437 DEBUG oslo_concurrency.lockutils [None req-86d3fafc-1d6b-407d-b281-f2268ba041fd 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Acquired lock "refresh_cache-f13f96d6-bf80-493f-8c88-883a6da35105" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 03:12:51 np0005548731 nova_compute[232433]: 2025-12-06 08:12:51.204 232437 DEBUG nova.network.neutron [None req-86d3fafc-1d6b-407d-b281-f2268ba041fd 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: f13f96d6-bf80-493f-8c88-883a6da35105] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec  6 03:12:51 np0005548731 nova_compute[232433]: 2025-12-06 08:12:51.204 232437 DEBUG nova.objects.instance [None req-86d3fafc-1d6b-407d-b281-f2268ba041fd 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Lazy-loading 'info_cache' on Instance uuid f13f96d6-bf80-493f-8c88-883a6da35105 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 03:12:51 np0005548731 nova_compute[232433]: 2025-12-06 08:12:51.320 232437 DEBUG nova.compute.manager [req-9245425a-89c8-48bc-9a75-094cfc20b5e7 req-fb9b3a2d-4e8f-498a-8247-6539fd14bec6 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f13f96d6-bf80-493f-8c88-883a6da35105] Received event network-vif-plugged-a69d734a-1545-4982-9f2c-ed585fa6d7d1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 03:12:51 np0005548731 nova_compute[232433]: 2025-12-06 08:12:51.321 232437 DEBUG oslo_concurrency.lockutils [req-9245425a-89c8-48bc-9a75-094cfc20b5e7 req-fb9b3a2d-4e8f-498a-8247-6539fd14bec6 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "f13f96d6-bf80-493f-8c88-883a6da35105-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:12:51 np0005548731 nova_compute[232433]: 2025-12-06 08:12:51.321 232437 DEBUG oslo_concurrency.lockutils [req-9245425a-89c8-48bc-9a75-094cfc20b5e7 req-fb9b3a2d-4e8f-498a-8247-6539fd14bec6 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "f13f96d6-bf80-493f-8c88-883a6da35105-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:12:51 np0005548731 nova_compute[232433]: 2025-12-06 08:12:51.321 232437 DEBUG oslo_concurrency.lockutils [req-9245425a-89c8-48bc-9a75-094cfc20b5e7 req-fb9b3a2d-4e8f-498a-8247-6539fd14bec6 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "f13f96d6-bf80-493f-8c88-883a6da35105-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:12:51 np0005548731 nova_compute[232433]: 2025-12-06 08:12:51.322 232437 DEBUG nova.compute.manager [req-9245425a-89c8-48bc-9a75-094cfc20b5e7 req-fb9b3a2d-4e8f-498a-8247-6539fd14bec6 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f13f96d6-bf80-493f-8c88-883a6da35105] No waiting events found dispatching network-vif-plugged-a69d734a-1545-4982-9f2c-ed585fa6d7d1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 03:12:51 np0005548731 nova_compute[232433]: 2025-12-06 08:12:51.322 232437 WARNING nova.compute.manager [req-9245425a-89c8-48bc-9a75-094cfc20b5e7 req-fb9b3a2d-4e8f-498a-8247-6539fd14bec6 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f13f96d6-bf80-493f-8c88-883a6da35105] Received unexpected event network-vif-plugged-a69d734a-1545-4982-9f2c-ed585fa6d7d1 for instance with vm_state stopped and task_state powering-on.#033[00m
Dec  6 03:12:51 np0005548731 nova_compute[232433]: 2025-12-06 08:12:51.568 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:12:52 np0005548731 nova_compute[232433]: 2025-12-06 08:12:52.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:12:52 np0005548731 nova_compute[232433]: 2025-12-06 08:12:52.105 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:12:52 np0005548731 nova_compute[232433]: 2025-12-06 08:12:52.516 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:12:52 np0005548731 nova_compute[232433]: 2025-12-06 08:12:52.517 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:12:52 np0005548731 nova_compute[232433]: 2025-12-06 08:12:52.517 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:12:52 np0005548731 nova_compute[232433]: 2025-12-06 08:12:52.517 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  6 03:12:52 np0005548731 nova_compute[232433]: 2025-12-06 08:12:52.517 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 03:12:52 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:12:52 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:12:52 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:12:52.743 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:12:52 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:12:52 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:12:52 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:12:52.804 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:12:52 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 03:12:52 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1156600947' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 03:12:52 np0005548731 nova_compute[232433]: 2025-12-06 08:12:52.943 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.425s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 03:12:52 np0005548731 nova_compute[232433]: 2025-12-06 08:12:52.948 232437 DEBUG nova.network.neutron [None req-86d3fafc-1d6b-407d-b281-f2268ba041fd 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: f13f96d6-bf80-493f-8c88-883a6da35105] Updating instance_info_cache with network_info: [{"id": "a69d734a-1545-4982-9f2c-ed585fa6d7d1", "address": "fa:16:3e:7d:8c:ed", "network": {"id": "1fd5471b-8f3d-43f6-83ce-53aad6043b61", "bridge": "br-int", "label": "tempest-network-smoke--236738794", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.215", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fd8e24e430c64364ace789d88a68ba5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa69d734a-15", "ovs_interfaceid": "a69d734a-1545-4982-9f2c-ed585fa6d7d1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 03:12:53 np0005548731 nova_compute[232433]: 2025-12-06 08:12:53.107 232437 DEBUG oslo_concurrency.lockutils [None req-86d3fafc-1d6b-407d-b281-f2268ba041fd 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Releasing lock "refresh_cache-f13f96d6-bf80-493f-8c88-883a6da35105" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 03:12:53 np0005548731 nova_compute[232433]: 2025-12-06 08:12:53.184 232437 DEBUG nova.virt.libvirt.driver [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] skipping disk for instance-000000c7 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  6 03:12:53 np0005548731 nova_compute[232433]: 2025-12-06 08:12:53.184 232437 DEBUG nova.virt.libvirt.driver [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] skipping disk for instance-000000c7 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  6 03:12:53 np0005548731 nova_compute[232433]: 2025-12-06 08:12:53.190 232437 INFO nova.virt.libvirt.driver [-] [instance: f13f96d6-bf80-493f-8c88-883a6da35105] Instance destroyed successfully.#033[00m
Dec  6 03:12:53 np0005548731 nova_compute[232433]: 2025-12-06 08:12:53.190 232437 DEBUG nova.objects.instance [None req-86d3fafc-1d6b-407d-b281-f2268ba041fd 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Lazy-loading 'numa_topology' on Instance uuid f13f96d6-bf80-493f-8c88-883a6da35105 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 03:12:53 np0005548731 nova_compute[232433]: 2025-12-06 08:12:53.326 232437 WARNING nova.virt.libvirt.driver [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  6 03:12:53 np0005548731 nova_compute[232433]: 2025-12-06 08:12:53.328 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4187MB free_disk=20.921939849853516GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, 
"label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  6 03:12:53 np0005548731 nova_compute[232433]: 2025-12-06 08:12:53.328 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:12:53 np0005548731 nova_compute[232433]: 2025-12-06 08:12:53.328 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:12:53 np0005548731 nova_compute[232433]: 2025-12-06 08:12:53.407 232437 DEBUG nova.objects.instance [None req-86d3fafc-1d6b-407d-b281-f2268ba041fd 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Lazy-loading 'resources' on Instance uuid f13f96d6-bf80-493f-8c88-883a6da35105 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 03:12:53 np0005548731 nova_compute[232433]: 2025-12-06 08:12:53.918 232437 DEBUG nova.virt.libvirt.vif [None req-86d3fafc-1d6b-407d-b281-f2268ba041fd 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T08:12:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-193247361',display_name='tempest-TestNetworkAdvancedServerOps-server-193247361',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-193247361',id=199,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLypEWdRECJBPrBTAU9x3pM8rCRxZw7L301HpsN0K17uUZRgP5F7CBi5FWUnWRGAWn4/kMZFUFGjj2mxL7XQEmDUEDcFRVNsKsRyeKF5IIaPVdD+07YBf8H0xoIuS5vQEA==',key_name='tempest-TestNetworkAdvancedServerOps-731579489',keypairs=<?>,launch_index=0,launched_at=2025-12-06T08:12:23Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='fd8e24e430c64364ace789d88a68ba5f',ramdisk_id='',reservation_id='r-wjpvo28t',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-1171852383',owner_user_name='tempest-TestNetworkAdvancedServerOps-1171852383-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T08:12:48Z,user_data=None,user_id='2ed2d17026504d70b893923a85cece4d',uuid=f13f96d6-bf80-493f-8c88-883a6da35105,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "a69d734a-1545-4982-9f2c-ed585fa6d7d1", "address": "fa:16:3e:7d:8c:ed", "network": {"id": "1fd5471b-8f3d-43f6-83ce-53aad6043b61", "bridge": "br-int", "label": "tempest-network-smoke--236738794", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": 
"fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.215", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fd8e24e430c64364ace789d88a68ba5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa69d734a-15", "ovs_interfaceid": "a69d734a-1545-4982-9f2c-ed585fa6d7d1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec  6 03:12:53 np0005548731 nova_compute[232433]: 2025-12-06 08:12:53.918 232437 DEBUG nova.network.os_vif_util [None req-86d3fafc-1d6b-407d-b281-f2268ba041fd 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Converting VIF {"id": "a69d734a-1545-4982-9f2c-ed585fa6d7d1", "address": "fa:16:3e:7d:8c:ed", "network": {"id": "1fd5471b-8f3d-43f6-83ce-53aad6043b61", "bridge": "br-int", "label": "tempest-network-smoke--236738794", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.215", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fd8e24e430c64364ace789d88a68ba5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa69d734a-15", "ovs_interfaceid": "a69d734a-1545-4982-9f2c-ed585fa6d7d1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 03:12:53 np0005548731 nova_compute[232433]: 2025-12-06 08:12:53.919 232437 DEBUG nova.network.os_vif_util [None req-86d3fafc-1d6b-407d-b281-f2268ba041fd 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7d:8c:ed,bridge_name='br-int',has_traffic_filtering=True,id=a69d734a-1545-4982-9f2c-ed585fa6d7d1,network=Network(1fd5471b-8f3d-43f6-83ce-53aad6043b61),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa69d734a-15') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 03:12:53 np0005548731 nova_compute[232433]: 2025-12-06 08:12:53.919 232437 DEBUG os_vif [None req-86d3fafc-1d6b-407d-b281-f2268ba041fd 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:7d:8c:ed,bridge_name='br-int',has_traffic_filtering=True,id=a69d734a-1545-4982-9f2c-ed585fa6d7d1,network=Network(1fd5471b-8f3d-43f6-83ce-53aad6043b61),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa69d734a-15') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec  6 03:12:53 np0005548731 nova_compute[232433]: 2025-12-06 08:12:53.921 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:12:53 np0005548731 nova_compute[232433]: 2025-12-06 08:12:53.922 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa69d734a-15, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 03:12:53 np0005548731 nova_compute[232433]: 2025-12-06 08:12:53.925 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:12:53 np0005548731 nova_compute[232433]: 2025-12-06 08:12:53.927 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  6 03:12:53 np0005548731 nova_compute[232433]: 2025-12-06 08:12:53.929 232437 INFO os_vif [None req-86d3fafc-1d6b-407d-b281-f2268ba041fd 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:7d:8c:ed,bridge_name='br-int',has_traffic_filtering=True,id=a69d734a-1545-4982-9f2c-ed585fa6d7d1,network=Network(1fd5471b-8f3d-43f6-83ce-53aad6043b61),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa69d734a-15')#033[00m
Dec  6 03:12:53 np0005548731 nova_compute[232433]: 2025-12-06 08:12:53.935 232437 DEBUG nova.virt.libvirt.driver [None req-86d3fafc-1d6b-407d-b281-f2268ba041fd 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: f13f96d6-bf80-493f-8c88-883a6da35105] Start _get_guest_xml network_info=[{"id": "a69d734a-1545-4982-9f2c-ed585fa6d7d1", "address": "fa:16:3e:7d:8c:ed", "network": {"id": "1fd5471b-8f3d-43f6-83ce-53aad6043b61", "bridge": "br-int", "label": "tempest-network-smoke--236738794", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.215", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fd8e24e430c64364ace789d88a68ba5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa69d734a-15", "ovs_interfaceid": "a69d734a-1545-4982-9f2c-ed585fa6d7d1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'device_type': 'disk', 'encryption_secret_uuid': None, 'disk_bus': 'virtio', 'boot_index': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'encryption_options': None, 'size': 0, 'encrypted': False, 'image_id': '6efab05d-c7cf-4770-a5c3-c806a2739063'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec  6 03:12:53 np0005548731 nova_compute[232433]: 2025-12-06 08:12:53.939 232437 WARNING nova.virt.libvirt.driver [None req-86d3fafc-1d6b-407d-b281-f2268ba041fd 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  6 03:12:53 np0005548731 nova_compute[232433]: 2025-12-06 08:12:53.951 232437 DEBUG nova.virt.libvirt.host [None req-86d3fafc-1d6b-407d-b281-f2268ba041fd 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec  6 03:12:53 np0005548731 nova_compute[232433]: 2025-12-06 08:12:53.952 232437 DEBUG nova.virt.libvirt.host [None req-86d3fafc-1d6b-407d-b281-f2268ba041fd 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec  6 03:12:53 np0005548731 nova_compute[232433]: 2025-12-06 08:12:53.955 232437 DEBUG nova.virt.libvirt.host [None req-86d3fafc-1d6b-407d-b281-f2268ba041fd 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec  6 03:12:53 np0005548731 nova_compute[232433]: 2025-12-06 08:12:53.955 232437 DEBUG nova.virt.libvirt.host [None req-86d3fafc-1d6b-407d-b281-f2268ba041fd 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec  6 03:12:53 np0005548731 nova_compute[232433]: 2025-12-06 08:12:53.956 232437 DEBUG nova.virt.libvirt.driver [None req-86d3fafc-1d6b-407d-b281-f2268ba041fd 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec  6 03:12:53 np0005548731 nova_compute[232433]: 2025-12-06 08:12:53.957 232437 DEBUG nova.virt.hardware [None req-86d3fafc-1d6b-407d-b281-f2268ba041fd 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-06T06:56:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='25848a18-11d9-4f11-80b5-5d005675c76d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec  6 03:12:53 np0005548731 nova_compute[232433]: 2025-12-06 08:12:53.957 232437 DEBUG nova.virt.hardware [None req-86d3fafc-1d6b-407d-b281-f2268ba041fd 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec  6 03:12:53 np0005548731 nova_compute[232433]: 2025-12-06 08:12:53.957 232437 DEBUG nova.virt.hardware [None req-86d3fafc-1d6b-407d-b281-f2268ba041fd 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec  6 03:12:53 np0005548731 nova_compute[232433]: 2025-12-06 08:12:53.957 232437 DEBUG nova.virt.hardware [None req-86d3fafc-1d6b-407d-b281-f2268ba041fd 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec  6 03:12:53 np0005548731 nova_compute[232433]: 2025-12-06 08:12:53.957 232437 DEBUG nova.virt.hardware [None req-86d3fafc-1d6b-407d-b281-f2268ba041fd 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec  6 03:12:53 np0005548731 nova_compute[232433]: 2025-12-06 08:12:53.958 232437 DEBUG nova.virt.hardware [None req-86d3fafc-1d6b-407d-b281-f2268ba041fd 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec  6 03:12:53 np0005548731 nova_compute[232433]: 2025-12-06 08:12:53.958 232437 DEBUG nova.virt.hardware [None req-86d3fafc-1d6b-407d-b281-f2268ba041fd 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec  6 03:12:53 np0005548731 nova_compute[232433]: 2025-12-06 08:12:53.958 232437 DEBUG nova.virt.hardware [None req-86d3fafc-1d6b-407d-b281-f2268ba041fd 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec  6 03:12:53 np0005548731 nova_compute[232433]: 2025-12-06 08:12:53.958 232437 DEBUG nova.virt.hardware [None req-86d3fafc-1d6b-407d-b281-f2268ba041fd 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec  6 03:12:53 np0005548731 nova_compute[232433]: 2025-12-06 08:12:53.958 232437 DEBUG nova.virt.hardware [None req-86d3fafc-1d6b-407d-b281-f2268ba041fd 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec  6 03:12:53 np0005548731 nova_compute[232433]: 2025-12-06 08:12:53.958 232437 DEBUG nova.virt.hardware [None req-86d3fafc-1d6b-407d-b281-f2268ba041fd 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec  6 03:12:53 np0005548731 nova_compute[232433]: 2025-12-06 08:12:53.959 232437 DEBUG nova.objects.instance [None req-86d3fafc-1d6b-407d-b281-f2268ba041fd 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Lazy-loading 'vcpu_model' on Instance uuid f13f96d6-bf80-493f-8c88-883a6da35105 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 03:12:54 np0005548731 nova_compute[232433]: 2025-12-06 08:12:54.011 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Instance f13f96d6-bf80-493f-8c88-883a6da35105 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  6 03:12:54 np0005548731 nova_compute[232433]: 2025-12-06 08:12:54.012 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  6 03:12:54 np0005548731 nova_compute[232433]: 2025-12-06 08:12:54.012 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  6 03:12:54 np0005548731 nova_compute[232433]: 2025-12-06 08:12:54.089 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 03:12:54 np0005548731 nova_compute[232433]: 2025-12-06 08:12:54.122 232437 DEBUG oslo_concurrency.processutils [None req-86d3fafc-1d6b-407d-b281-f2268ba041fd 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 03:12:54 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:12:54 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:12:54 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:12:54.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:12:54 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:12:54 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:12:54 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:12:54.808 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:12:54 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 03:12:54 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3952924626' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 03:12:54 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  6 03:12:54 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3228740001' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  6 03:12:54 np0005548731 nova_compute[232433]: 2025-12-06 08:12:54.863 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.773s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 03:12:54 np0005548731 nova_compute[232433]: 2025-12-06 08:12:54.868 232437 DEBUG nova.compute.provider_tree [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Inventory has not changed in ProviderTree for provider: 6d00757a-082f-486d-ae84-869a2ba2e6e7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  6 03:12:54 np0005548731 nova_compute[232433]: 2025-12-06 08:12:54.951 232437 DEBUG nova.scheduler.client.report [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Inventory has not changed for provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  6 03:12:55 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:12:55 np0005548731 nova_compute[232433]: 2025-12-06 08:12:55.120 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  6 03:12:55 np0005548731 nova_compute[232433]: 2025-12-06 08:12:55.120 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.792s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:12:55 np0005548731 nova_compute[232433]: 2025-12-06 08:12:55.130 232437 DEBUG oslo_concurrency.processutils [None req-86d3fafc-1d6b-407d-b281-f2268ba041fd 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 03:12:55 np0005548731 nova_compute[232433]: 2025-12-06 08:12:55.174 232437 DEBUG oslo_concurrency.processutils [None req-86d3fafc-1d6b-407d-b281-f2268ba041fd 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 03:12:55 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  6 03:12:55 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1362112174' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  6 03:12:55 np0005548731 nova_compute[232433]: 2025-12-06 08:12:55.619 232437 DEBUG oslo_concurrency.processutils [None req-86d3fafc-1d6b-407d-b281-f2268ba041fd 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.445s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 03:12:55 np0005548731 nova_compute[232433]: 2025-12-06 08:12:55.622 232437 DEBUG nova.virt.libvirt.vif [None req-86d3fafc-1d6b-407d-b281-f2268ba041fd 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T08:12:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-193247361',display_name='tempest-TestNetworkAdvancedServerOps-server-193247361',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-193247361',id=199,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLypEWdRECJBPrBTAU9x3pM8rCRxZw7L301HpsN0K17uUZRgP5F7CBi5FWUnWRGAWn4/kMZFUFGjj2mxL7XQEmDUEDcFRVNsKsRyeKF5IIaPVdD+07YBf8H0xoIuS5vQEA==',key_name='tempest-TestNetworkAdvancedServerOps-731579489',keypairs=<?>,launch_index=0,launched_at=2025-12-06T08:12:23Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='fd8e24e430c64364ace789d88a68ba5f',ramdisk_id='',reservation_id='r-wjpvo28t',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-1171852383',owner_user_name='tempest-TestNetworkAdvancedServerOps-1171852383-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T08:12:48Z,user_data=None,user_id='2ed2d17026504d70b893923a85cece4d',uuid=f13f96d6-bf80-493f-8c88-883a6da35105,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "a69d734a-1545-4982-9f2c-ed585fa6d7d1", "address": "fa:16:3e:7d:8c:ed", "network": {"id": "1fd5471b-8f3d-43f6-83ce-53aad6043b61", "bridge": "br-int", "label": "tempest-network-smoke--236738794", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.215", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fd8e24e430c64364ace789d88a68ba5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa69d734a-15", "ovs_interfaceid": "a69d734a-1545-4982-9f2c-ed585fa6d7d1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec  6 03:12:55 np0005548731 nova_compute[232433]: 2025-12-06 08:12:55.622 232437 DEBUG nova.network.os_vif_util [None req-86d3fafc-1d6b-407d-b281-f2268ba041fd 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Converting VIF {"id": "a69d734a-1545-4982-9f2c-ed585fa6d7d1", "address": "fa:16:3e:7d:8c:ed", "network": {"id": "1fd5471b-8f3d-43f6-83ce-53aad6043b61", "bridge": "br-int", "label": "tempest-network-smoke--236738794", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.215", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fd8e24e430c64364ace789d88a68ba5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa69d734a-15", "ovs_interfaceid": "a69d734a-1545-4982-9f2c-ed585fa6d7d1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 03:12:55 np0005548731 nova_compute[232433]: 2025-12-06 08:12:55.623 232437 DEBUG nova.network.os_vif_util [None req-86d3fafc-1d6b-407d-b281-f2268ba041fd 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7d:8c:ed,bridge_name='br-int',has_traffic_filtering=True,id=a69d734a-1545-4982-9f2c-ed585fa6d7d1,network=Network(1fd5471b-8f3d-43f6-83ce-53aad6043b61),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa69d734a-15') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 03:12:55 np0005548731 nova_compute[232433]: 2025-12-06 08:12:55.626 232437 DEBUG nova.objects.instance [None req-86d3fafc-1d6b-407d-b281-f2268ba041fd 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Lazy-loading 'pci_devices' on Instance uuid f13f96d6-bf80-493f-8c88-883a6da35105 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 03:12:55 np0005548731 nova_compute[232433]: 2025-12-06 08:12:55.659 232437 DEBUG nova.virt.libvirt.driver [None req-86d3fafc-1d6b-407d-b281-f2268ba041fd 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: f13f96d6-bf80-493f-8c88-883a6da35105] End _get_guest_xml xml=<domain type="kvm">
Dec  6 03:12:55 np0005548731 nova_compute[232433]:  <uuid>f13f96d6-bf80-493f-8c88-883a6da35105</uuid>
Dec  6 03:12:55 np0005548731 nova_compute[232433]:  <name>instance-000000c7</name>
Dec  6 03:12:55 np0005548731 nova_compute[232433]:  <memory>131072</memory>
Dec  6 03:12:55 np0005548731 nova_compute[232433]:  <vcpu>1</vcpu>
Dec  6 03:12:55 np0005548731 nova_compute[232433]:  <metadata>
Dec  6 03:12:55 np0005548731 nova_compute[232433]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec  6 03:12:55 np0005548731 nova_compute[232433]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec  6 03:12:55 np0005548731 nova_compute[232433]:      <nova:name>tempest-TestNetworkAdvancedServerOps-server-193247361</nova:name>
Dec  6 03:12:55 np0005548731 nova_compute[232433]:      <nova:creationTime>2025-12-06 08:12:53</nova:creationTime>
Dec  6 03:12:55 np0005548731 nova_compute[232433]:      <nova:flavor name="m1.nano">
Dec  6 03:12:55 np0005548731 nova_compute[232433]:        <nova:memory>128</nova:memory>
Dec  6 03:12:55 np0005548731 nova_compute[232433]:        <nova:disk>1</nova:disk>
Dec  6 03:12:55 np0005548731 nova_compute[232433]:        <nova:swap>0</nova:swap>
Dec  6 03:12:55 np0005548731 nova_compute[232433]:        <nova:ephemeral>0</nova:ephemeral>
Dec  6 03:12:55 np0005548731 nova_compute[232433]:        <nova:vcpus>1</nova:vcpus>
Dec  6 03:12:55 np0005548731 nova_compute[232433]:      </nova:flavor>
Dec  6 03:12:55 np0005548731 nova_compute[232433]:      <nova:owner>
Dec  6 03:12:55 np0005548731 nova_compute[232433]:        <nova:user uuid="2ed2d17026504d70b893923a85cece4d">tempest-TestNetworkAdvancedServerOps-1171852383-project-member</nova:user>
Dec  6 03:12:55 np0005548731 nova_compute[232433]:        <nova:project uuid="fd8e24e430c64364ace789d88a68ba5f">tempest-TestNetworkAdvancedServerOps-1171852383</nova:project>
Dec  6 03:12:55 np0005548731 nova_compute[232433]:      </nova:owner>
Dec  6 03:12:55 np0005548731 nova_compute[232433]:      <nova:root type="image" uuid="6efab05d-c7cf-4770-a5c3-c806a2739063"/>
Dec  6 03:12:55 np0005548731 nova_compute[232433]:      <nova:ports>
Dec  6 03:12:55 np0005548731 nova_compute[232433]:        <nova:port uuid="a69d734a-1545-4982-9f2c-ed585fa6d7d1">
Dec  6 03:12:55 np0005548731 nova_compute[232433]:          <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Dec  6 03:12:55 np0005548731 nova_compute[232433]:        </nova:port>
Dec  6 03:12:55 np0005548731 nova_compute[232433]:      </nova:ports>
Dec  6 03:12:55 np0005548731 nova_compute[232433]:    </nova:instance>
Dec  6 03:12:55 np0005548731 nova_compute[232433]:  </metadata>
Dec  6 03:12:55 np0005548731 nova_compute[232433]:  <sysinfo type="smbios">
Dec  6 03:12:55 np0005548731 nova_compute[232433]:    <system>
Dec  6 03:12:55 np0005548731 nova_compute[232433]:      <entry name="manufacturer">RDO</entry>
Dec  6 03:12:55 np0005548731 nova_compute[232433]:      <entry name="product">OpenStack Compute</entry>
Dec  6 03:12:55 np0005548731 nova_compute[232433]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec  6 03:12:55 np0005548731 nova_compute[232433]:      <entry name="serial">f13f96d6-bf80-493f-8c88-883a6da35105</entry>
Dec  6 03:12:55 np0005548731 nova_compute[232433]:      <entry name="uuid">f13f96d6-bf80-493f-8c88-883a6da35105</entry>
Dec  6 03:12:55 np0005548731 nova_compute[232433]:      <entry name="family">Virtual Machine</entry>
Dec  6 03:12:55 np0005548731 nova_compute[232433]:    </system>
Dec  6 03:12:55 np0005548731 nova_compute[232433]:  </sysinfo>
Dec  6 03:12:55 np0005548731 nova_compute[232433]:  <os>
Dec  6 03:12:55 np0005548731 nova_compute[232433]:    <type arch="x86_64" machine="q35">hvm</type>
Dec  6 03:12:55 np0005548731 nova_compute[232433]:    <boot dev="hd"/>
Dec  6 03:12:55 np0005548731 nova_compute[232433]:    <smbios mode="sysinfo"/>
Dec  6 03:12:55 np0005548731 nova_compute[232433]:  </os>
Dec  6 03:12:55 np0005548731 nova_compute[232433]:  <features>
Dec  6 03:12:55 np0005548731 nova_compute[232433]:    <acpi/>
Dec  6 03:12:55 np0005548731 nova_compute[232433]:    <apic/>
Dec  6 03:12:55 np0005548731 nova_compute[232433]:    <vmcoreinfo/>
Dec  6 03:12:55 np0005548731 nova_compute[232433]:  </features>
Dec  6 03:12:55 np0005548731 nova_compute[232433]:  <clock offset="utc">
Dec  6 03:12:55 np0005548731 nova_compute[232433]:    <timer name="pit" tickpolicy="delay"/>
Dec  6 03:12:55 np0005548731 nova_compute[232433]:    <timer name="rtc" tickpolicy="catchup"/>
Dec  6 03:12:55 np0005548731 nova_compute[232433]:    <timer name="hpet" present="no"/>
Dec  6 03:12:55 np0005548731 nova_compute[232433]:  </clock>
Dec  6 03:12:55 np0005548731 nova_compute[232433]:  <cpu mode="custom" match="exact">
Dec  6 03:12:55 np0005548731 nova_compute[232433]:    <model>Nehalem</model>
Dec  6 03:12:55 np0005548731 nova_compute[232433]:    <topology sockets="1" cores="1" threads="1"/>
Dec  6 03:12:55 np0005548731 nova_compute[232433]:  </cpu>
Dec  6 03:12:55 np0005548731 nova_compute[232433]:  <devices>
Dec  6 03:12:55 np0005548731 nova_compute[232433]:    <disk type="network" device="disk">
Dec  6 03:12:55 np0005548731 nova_compute[232433]:      <driver type="raw" cache="none"/>
Dec  6 03:12:55 np0005548731 nova_compute[232433]:      <source protocol="rbd" name="vms/f13f96d6-bf80-493f-8c88-883a6da35105_disk">
Dec  6 03:12:55 np0005548731 nova_compute[232433]:        <host name="192.168.122.100" port="6789"/>
Dec  6 03:12:55 np0005548731 nova_compute[232433]:        <host name="192.168.122.102" port="6789"/>
Dec  6 03:12:55 np0005548731 nova_compute[232433]:        <host name="192.168.122.101" port="6789"/>
Dec  6 03:12:55 np0005548731 nova_compute[232433]:      </source>
Dec  6 03:12:55 np0005548731 nova_compute[232433]:      <auth username="openstack">
Dec  6 03:12:55 np0005548731 nova_compute[232433]:        <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec  6 03:12:55 np0005548731 nova_compute[232433]:      </auth>
Dec  6 03:12:55 np0005548731 nova_compute[232433]:      <target dev="vda" bus="virtio"/>
Dec  6 03:12:55 np0005548731 nova_compute[232433]:    </disk>
Dec  6 03:12:55 np0005548731 nova_compute[232433]:    <disk type="network" device="cdrom">
Dec  6 03:12:55 np0005548731 nova_compute[232433]:      <driver type="raw" cache="none"/>
Dec  6 03:12:55 np0005548731 nova_compute[232433]:      <source protocol="rbd" name="vms/f13f96d6-bf80-493f-8c88-883a6da35105_disk.config">
Dec  6 03:12:55 np0005548731 nova_compute[232433]:        <host name="192.168.122.100" port="6789"/>
Dec  6 03:12:55 np0005548731 nova_compute[232433]:        <host name="192.168.122.102" port="6789"/>
Dec  6 03:12:55 np0005548731 nova_compute[232433]:        <host name="192.168.122.101" port="6789"/>
Dec  6 03:12:55 np0005548731 nova_compute[232433]:      </source>
Dec  6 03:12:55 np0005548731 nova_compute[232433]:      <auth username="openstack">
Dec  6 03:12:55 np0005548731 nova_compute[232433]:        <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec  6 03:12:55 np0005548731 nova_compute[232433]:      </auth>
Dec  6 03:12:55 np0005548731 nova_compute[232433]:      <target dev="sda" bus="sata"/>
Dec  6 03:12:55 np0005548731 nova_compute[232433]:    </disk>
Dec  6 03:12:55 np0005548731 nova_compute[232433]:    <interface type="ethernet">
Dec  6 03:12:55 np0005548731 nova_compute[232433]:      <mac address="fa:16:3e:7d:8c:ed"/>
Dec  6 03:12:55 np0005548731 nova_compute[232433]:      <model type="virtio"/>
Dec  6 03:12:55 np0005548731 nova_compute[232433]:      <driver name="vhost" rx_queue_size="512"/>
Dec  6 03:12:55 np0005548731 nova_compute[232433]:      <mtu size="1442"/>
Dec  6 03:12:55 np0005548731 nova_compute[232433]:      <target dev="tapa69d734a-15"/>
Dec  6 03:12:55 np0005548731 nova_compute[232433]:    </interface>
Dec  6 03:12:55 np0005548731 nova_compute[232433]:    <serial type="pty">
Dec  6 03:12:55 np0005548731 nova_compute[232433]:      <log file="/var/lib/nova/instances/f13f96d6-bf80-493f-8c88-883a6da35105/console.log" append="off"/>
Dec  6 03:12:55 np0005548731 nova_compute[232433]:    </serial>
Dec  6 03:12:55 np0005548731 nova_compute[232433]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec  6 03:12:55 np0005548731 nova_compute[232433]:    <video>
Dec  6 03:12:55 np0005548731 nova_compute[232433]:      <model type="virtio"/>
Dec  6 03:12:55 np0005548731 nova_compute[232433]:    </video>
Dec  6 03:12:55 np0005548731 nova_compute[232433]:    <input type="tablet" bus="usb"/>
Dec  6 03:12:55 np0005548731 nova_compute[232433]:    <input type="keyboard" bus="usb"/>
Dec  6 03:12:55 np0005548731 nova_compute[232433]:    <rng model="virtio">
Dec  6 03:12:55 np0005548731 nova_compute[232433]:      <backend model="random">/dev/urandom</backend>
Dec  6 03:12:55 np0005548731 nova_compute[232433]:    </rng>
Dec  6 03:12:55 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root"/>
Dec  6 03:12:55 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:12:55 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:12:55 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:12:55 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:12:55 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:12:55 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:12:55 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:12:55 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:12:55 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:12:55 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:12:55 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:12:55 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:12:55 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:12:55 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:12:55 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:12:55 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:12:55 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:12:55 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:12:55 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:12:55 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:12:55 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:12:55 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:12:55 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:12:55 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:12:55 np0005548731 nova_compute[232433]:    <controller type="usb" index="0"/>
Dec  6 03:12:55 np0005548731 nova_compute[232433]:    <memballoon model="virtio">
Dec  6 03:12:55 np0005548731 nova_compute[232433]:      <stats period="10"/>
Dec  6 03:12:55 np0005548731 nova_compute[232433]:    </memballoon>
Dec  6 03:12:55 np0005548731 nova_compute[232433]:  </devices>
Dec  6 03:12:55 np0005548731 nova_compute[232433]: </domain>
Dec  6 03:12:55 np0005548731 nova_compute[232433]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec  6 03:12:55 np0005548731 nova_compute[232433]: 2025-12-06 08:12:55.661 232437 DEBUG nova.virt.libvirt.driver [None req-86d3fafc-1d6b-407d-b281-f2268ba041fd 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] skipping disk for instance-000000c7 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  6 03:12:55 np0005548731 nova_compute[232433]: 2025-12-06 08:12:55.661 232437 DEBUG nova.virt.libvirt.driver [None req-86d3fafc-1d6b-407d-b281-f2268ba041fd 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] skipping disk for instance-000000c7 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  6 03:12:55 np0005548731 nova_compute[232433]: 2025-12-06 08:12:55.662 232437 DEBUG nova.virt.libvirt.vif [None req-86d3fafc-1d6b-407d-b281-f2268ba041fd 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T08:12:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-193247361',display_name='tempest-TestNetworkAdvancedServerOps-server-193247361',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-193247361',id=199,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLypEWdRECJBPrBTAU9x3pM8rCRxZw7L301HpsN0K17uUZRgP5F7CBi5FWUnWRGAWn4/kMZFUFGjj2mxL7XQEmDUEDcFRVNsKsRyeKF5IIaPVdD+07YBf8H0xoIuS5vQEA==',key_name='tempest-TestNetworkAdvancedServerOps-731579489',keypairs=<?>,launch_index=0,launched_at=2025-12-06T08:12:23Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=<?>,power_state=4,progress=0,project_id='fd8e24e430c64364ace789d88a68ba5f',ramdisk_id='',reservation_id='r-wjpvo28t',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-1171852383',owner_user_name='tempest-TestNetworkAdvancedServerOps-1171852383-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T08:12:48Z,user_data=None,user_id='2ed2d17026504d70b893923a85cece4d',uuid=f13f96d6-bf80-493f-8c88-883a6da35105,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "a69d734a-1545-4982-9f2c-ed585fa6d7d1", "address": "fa:16:3e:7d:8c:ed", "network": {"id": "1fd5471b-8f3d-43f6-83ce-53aad6043b61", "bridge": "br-int", "label": "tempest-network-smoke--236738794", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.215", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fd8e24e430c64364ace789d88a68ba5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa69d734a-15", "ovs_interfaceid": "a69d734a-1545-4982-9f2c-ed585fa6d7d1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec  6 03:12:55 np0005548731 nova_compute[232433]: 2025-12-06 08:12:55.662 232437 DEBUG nova.network.os_vif_util [None req-86d3fafc-1d6b-407d-b281-f2268ba041fd 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Converting VIF {"id": "a69d734a-1545-4982-9f2c-ed585fa6d7d1", "address": "fa:16:3e:7d:8c:ed", "network": {"id": "1fd5471b-8f3d-43f6-83ce-53aad6043b61", "bridge": "br-int", "label": "tempest-network-smoke--236738794", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.215", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fd8e24e430c64364ace789d88a68ba5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa69d734a-15", "ovs_interfaceid": "a69d734a-1545-4982-9f2c-ed585fa6d7d1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 03:12:55 np0005548731 nova_compute[232433]: 2025-12-06 08:12:55.662 232437 DEBUG nova.network.os_vif_util [None req-86d3fafc-1d6b-407d-b281-f2268ba041fd 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7d:8c:ed,bridge_name='br-int',has_traffic_filtering=True,id=a69d734a-1545-4982-9f2c-ed585fa6d7d1,network=Network(1fd5471b-8f3d-43f6-83ce-53aad6043b61),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa69d734a-15') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 03:12:55 np0005548731 nova_compute[232433]: 2025-12-06 08:12:55.663 232437 DEBUG os_vif [None req-86d3fafc-1d6b-407d-b281-f2268ba041fd 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:7d:8c:ed,bridge_name='br-int',has_traffic_filtering=True,id=a69d734a-1545-4982-9f2c-ed585fa6d7d1,network=Network(1fd5471b-8f3d-43f6-83ce-53aad6043b61),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa69d734a-15') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec  6 03:12:55 np0005548731 nova_compute[232433]: 2025-12-06 08:12:55.663 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:12:55 np0005548731 nova_compute[232433]: 2025-12-06 08:12:55.664 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 03:12:55 np0005548731 nova_compute[232433]: 2025-12-06 08:12:55.664 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  6 03:12:55 np0005548731 nova_compute[232433]: 2025-12-06 08:12:55.667 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:12:55 np0005548731 nova_compute[232433]: 2025-12-06 08:12:55.668 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa69d734a-15, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 03:12:55 np0005548731 nova_compute[232433]: 2025-12-06 08:12:55.668 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapa69d734a-15, col_values=(('external_ids', {'iface-id': 'a69d734a-1545-4982-9f2c-ed585fa6d7d1', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:7d:8c:ed', 'vm-uuid': 'f13f96d6-bf80-493f-8c88-883a6da35105'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 03:12:55 np0005548731 nova_compute[232433]: 2025-12-06 08:12:55.670 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:12:55 np0005548731 NetworkManager[49182]: <info>  [1765008775.6705] manager: (tapa69d734a-15): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/479)
Dec  6 03:12:55 np0005548731 nova_compute[232433]: 2025-12-06 08:12:55.672 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  6 03:12:55 np0005548731 nova_compute[232433]: 2025-12-06 08:12:55.675 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:12:55 np0005548731 nova_compute[232433]: 2025-12-06 08:12:55.675 232437 INFO os_vif [None req-86d3fafc-1d6b-407d-b281-f2268ba041fd 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:7d:8c:ed,bridge_name='br-int',has_traffic_filtering=True,id=a69d734a-1545-4982-9f2c-ed585fa6d7d1,network=Network(1fd5471b-8f3d-43f6-83ce-53aad6043b61),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa69d734a-15')#033[00m
Dec  6 03:12:55 np0005548731 kernel: tapa69d734a-15: entered promiscuous mode
Dec  6 03:12:55 np0005548731 nova_compute[232433]: 2025-12-06 08:12:55.872 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:12:55 np0005548731 NetworkManager[49182]: <info>  [1765008775.8733] manager: (tapa69d734a-15): new Tun device (/org/freedesktop/NetworkManager/Devices/480)
Dec  6 03:12:55 np0005548731 ovn_controller[133927]: 2025-12-06T08:12:55Z|01019|binding|INFO|Claiming lport a69d734a-1545-4982-9f2c-ed585fa6d7d1 for this chassis.
Dec  6 03:12:55 np0005548731 ovn_controller[133927]: 2025-12-06T08:12:55Z|01020|binding|INFO|a69d734a-1545-4982-9f2c-ed585fa6d7d1: Claiming fa:16:3e:7d:8c:ed 10.100.0.12
Dec  6 03:12:55 np0005548731 ovn_controller[133927]: 2025-12-06T08:12:55Z|01021|binding|INFO|Setting lport a69d734a-1545-4982-9f2c-ed585fa6d7d1 ovn-installed in OVS
Dec  6 03:12:55 np0005548731 nova_compute[232433]: 2025-12-06 08:12:55.891 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:12:55 np0005548731 nova_compute[232433]: 2025-12-06 08:12:55.894 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:12:55 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:12:55.897 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7d:8c:ed 10.100.0.12'], port_security=['fa:16:3e:7d:8c:ed 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'f13f96d6-bf80-493f-8c88-883a6da35105', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1fd5471b-8f3d-43f6-83ce-53aad6043b61', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'fd8e24e430c64364ace789d88a68ba5f', 'neutron:revision_number': '5', 'neutron:security_group_ids': 'f28f4192-1561-45a2-a0b3-b0c0baeb6833', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.215'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e0bb66e4-1df3-4c78-8bff-c48b2b1586b5, chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], logical_port=a69d734a-1545-4982-9f2c-ed585fa6d7d1) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 03:12:55 np0005548731 ovn_controller[133927]: 2025-12-06T08:12:55Z|01022|binding|INFO|Setting lport a69d734a-1545-4982-9f2c-ed585fa6d7d1 up in Southbound
Dec  6 03:12:55 np0005548731 systemd-udevd[333525]: Network interface NamePolicy= disabled on kernel command line.
Dec  6 03:12:55 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:12:55.900 143965 INFO neutron.agent.ovn.metadata.agent [-] Port a69d734a-1545-4982-9f2c-ed585fa6d7d1 in datapath 1fd5471b-8f3d-43f6-83ce-53aad6043b61 bound to our chassis#033[00m
Dec  6 03:12:55 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:12:55.904 143965 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 1fd5471b-8f3d-43f6-83ce-53aad6043b61#033[00m
Dec  6 03:12:55 np0005548731 NetworkManager[49182]: <info>  [1765008775.9151] device (tapa69d734a-15): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec  6 03:12:55 np0005548731 NetworkManager[49182]: <info>  [1765008775.9176] device (tapa69d734a-15): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec  6 03:12:55 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:12:55.919 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[3d646ce9-5de7-4c49-a381-4ba32a07ab54]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:12:55 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:12:55.920 143965 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap1fd5471b-81 in ovnmeta-1fd5471b-8f3d-43f6-83ce-53aad6043b61 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Dec  6 03:12:55 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:12:55.922 236668 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap1fd5471b-80 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Dec  6 03:12:55 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:12:55.922 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[1adf2055-331d-48b7-ad3e-ecb6c4023cde]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:12:55 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:12:55.924 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[0ded27a1-ef5b-44a6-a180-d0fc07fcbd7f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:12:55 np0005548731 systemd-machined[195355]: New machine qemu-103-instance-000000c7.
Dec  6 03:12:55 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:12:55.934 144079 DEBUG oslo.privsep.daemon [-] privsep: reply[cd442d49-8907-4ba8-8fa9-a5da5f958749]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:12:55 np0005548731 systemd[1]: Started Virtual Machine qemu-103-instance-000000c7.
Dec  6 03:12:55 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:12:55.949 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[57fa8486-361b-4ef0-bb49-4e20c37d6132]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:12:55 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:12:55.975 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[face05ec-89c4-4495-bd48-495cacc474f4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:12:55 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:12:55.980 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[e1e46de3-1806-4fcb-9322-c802662d69ca]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:12:55 np0005548731 NetworkManager[49182]: <info>  [1765008775.9807] manager: (tap1fd5471b-80): new Veth device (/org/freedesktop/NetworkManager/Devices/481)
Dec  6 03:12:56 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:12:56.010 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[e2b8791b-4654-4e52-8022-72343d5a6a0c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:12:56 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:12:56.013 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[077d8120-898c-4475-bac0-1c576daac4c9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:12:56 np0005548731 NetworkManager[49182]: <info>  [1765008776.0399] device (tap1fd5471b-80): carrier: link connected
Dec  6 03:12:56 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:12:56.045 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[59e7c154-bc4a-4fdf-8e60-6308ae461c66]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:12:56 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:12:56.063 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[77655c66-7878-4ce5-b263-b27bbe2c6815]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap1fd5471b-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ac:fa:a7'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 311], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 903804, 'reachable_time': 38912, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 333561, 'error': None, 'target': 'ovnmeta-1fd5471b-8f3d-43f6-83ce-53aad6043b61', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:12:56 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:12:56.079 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[fcdf69db-2077-4a30-9242-1577e9e9ea69]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feac:faa7'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 903804, 'tstamp': 903804}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 333562, 'error': None, 'target': 'ovnmeta-1fd5471b-8f3d-43f6-83ce-53aad6043b61', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:12:56 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:12:56.102 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[b7e7404f-fc21-42d9-b7f1-9fb7a0d5a9b9]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap1fd5471b-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ac:fa:a7'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 311], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 903804, 'reachable_time': 38912, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 333563, 'error': None, 'target': 'ovnmeta-1fd5471b-8f3d-43f6-83ce-53aad6043b61', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:12:56 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:12:56.137 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[abe7c050-71d1-423c-a627-b7668f58d6fe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:12:56 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:12:56.209 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[3a3ab0c5-e945-4770-ab8c-3fd86ce63bbf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:12:56 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:12:56.212 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1fd5471b-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 03:12:56 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:12:56.213 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  6 03:12:56 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:12:56.213 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1fd5471b-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 03:12:56 np0005548731 NetworkManager[49182]: <info>  [1765008776.2163] manager: (tap1fd5471b-80): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/482)
Dec  6 03:12:56 np0005548731 kernel: tap1fd5471b-80: entered promiscuous mode
Dec  6 03:12:56 np0005548731 nova_compute[232433]: 2025-12-06 08:12:56.215 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:12:56 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:12:56.219 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap1fd5471b-80, col_values=(('external_ids', {'iface-id': '3a794737-9f96-4e47-b2d9-5ff59140d4e7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 03:12:56 np0005548731 ovn_controller[133927]: 2025-12-06T08:12:56Z|01023|binding|INFO|Releasing lport 3a794737-9f96-4e47-b2d9-5ff59140d4e7 from this chassis (sb_readonly=0)
Dec  6 03:12:56 np0005548731 nova_compute[232433]: 2025-12-06 08:12:56.232 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:12:56 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:12:56.233 143965 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/1fd5471b-8f3d-43f6-83ce-53aad6043b61.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/1fd5471b-8f3d-43f6-83ce-53aad6043b61.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Dec  6 03:12:56 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:12:56.234 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[c6c4dae6-46e0-4c2e-afaa-0dd25a7157e4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:12:56 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:12:56.235 143965 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec  6 03:12:56 np0005548731 ovn_metadata_agent[143960]: global
Dec  6 03:12:56 np0005548731 ovn_metadata_agent[143960]:    log         /dev/log local0 debug
Dec  6 03:12:56 np0005548731 ovn_metadata_agent[143960]:    log-tag     haproxy-metadata-proxy-1fd5471b-8f3d-43f6-83ce-53aad6043b61
Dec  6 03:12:56 np0005548731 ovn_metadata_agent[143960]:    user        root
Dec  6 03:12:56 np0005548731 ovn_metadata_agent[143960]:    group       root
Dec  6 03:12:56 np0005548731 ovn_metadata_agent[143960]:    maxconn     1024
Dec  6 03:12:56 np0005548731 ovn_metadata_agent[143960]:    pidfile     /var/lib/neutron/external/pids/1fd5471b-8f3d-43f6-83ce-53aad6043b61.pid.haproxy
Dec  6 03:12:56 np0005548731 ovn_metadata_agent[143960]:    daemon
Dec  6 03:12:56 np0005548731 ovn_metadata_agent[143960]: 
Dec  6 03:12:56 np0005548731 ovn_metadata_agent[143960]: defaults
Dec  6 03:12:56 np0005548731 ovn_metadata_agent[143960]:    log global
Dec  6 03:12:56 np0005548731 ovn_metadata_agent[143960]:    mode http
Dec  6 03:12:56 np0005548731 ovn_metadata_agent[143960]:    option httplog
Dec  6 03:12:56 np0005548731 ovn_metadata_agent[143960]:    option dontlognull
Dec  6 03:12:56 np0005548731 ovn_metadata_agent[143960]:    option http-server-close
Dec  6 03:12:56 np0005548731 ovn_metadata_agent[143960]:    option forwardfor
Dec  6 03:12:56 np0005548731 ovn_metadata_agent[143960]:    retries                 3
Dec  6 03:12:56 np0005548731 ovn_metadata_agent[143960]:    timeout http-request    30s
Dec  6 03:12:56 np0005548731 ovn_metadata_agent[143960]:    timeout connect         30s
Dec  6 03:12:56 np0005548731 ovn_metadata_agent[143960]:    timeout client          32s
Dec  6 03:12:56 np0005548731 ovn_metadata_agent[143960]:    timeout server          32s
Dec  6 03:12:56 np0005548731 ovn_metadata_agent[143960]:    timeout http-keep-alive 30s
Dec  6 03:12:56 np0005548731 ovn_metadata_agent[143960]: 
Dec  6 03:12:56 np0005548731 ovn_metadata_agent[143960]: 
Dec  6 03:12:56 np0005548731 ovn_metadata_agent[143960]: listen listener
Dec  6 03:12:56 np0005548731 ovn_metadata_agent[143960]:    bind 169.254.169.254:80
Dec  6 03:12:56 np0005548731 ovn_metadata_agent[143960]:    server metadata /var/lib/neutron/metadata_proxy
Dec  6 03:12:56 np0005548731 ovn_metadata_agent[143960]:    http-request add-header X-OVN-Network-ID 1fd5471b-8f3d-43f6-83ce-53aad6043b61
Dec  6 03:12:56 np0005548731 ovn_metadata_agent[143960]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Dec  6 03:12:56 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:12:56.235 143965 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-1fd5471b-8f3d-43f6-83ce-53aad6043b61', 'env', 'PROCESS_TAG=haproxy-1fd5471b-8f3d-43f6-83ce-53aad6043b61', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/1fd5471b-8f3d-43f6-83ce-53aad6043b61.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Dec  6 03:12:56 np0005548731 nova_compute[232433]: 2025-12-06 08:12:56.551 232437 DEBUG nova.compute.manager [req-4223cdeb-8ff0-48c0-a32a-d1ecaa0dd78b req-3f557ed2-55cc-4e5d-bed8-2c07bc273002 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f13f96d6-bf80-493f-8c88-883a6da35105] Received event network-vif-plugged-a69d734a-1545-4982-9f2c-ed585fa6d7d1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 03:12:56 np0005548731 nova_compute[232433]: 2025-12-06 08:12:56.552 232437 DEBUG oslo_concurrency.lockutils [req-4223cdeb-8ff0-48c0-a32a-d1ecaa0dd78b req-3f557ed2-55cc-4e5d-bed8-2c07bc273002 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "f13f96d6-bf80-493f-8c88-883a6da35105-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:12:56 np0005548731 nova_compute[232433]: 2025-12-06 08:12:56.552 232437 DEBUG oslo_concurrency.lockutils [req-4223cdeb-8ff0-48c0-a32a-d1ecaa0dd78b req-3f557ed2-55cc-4e5d-bed8-2c07bc273002 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "f13f96d6-bf80-493f-8c88-883a6da35105-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:12:56 np0005548731 nova_compute[232433]: 2025-12-06 08:12:56.553 232437 DEBUG oslo_concurrency.lockutils [req-4223cdeb-8ff0-48c0-a32a-d1ecaa0dd78b req-3f557ed2-55cc-4e5d-bed8-2c07bc273002 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "f13f96d6-bf80-493f-8c88-883a6da35105-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:12:56 np0005548731 nova_compute[232433]: 2025-12-06 08:12:56.553 232437 DEBUG nova.compute.manager [req-4223cdeb-8ff0-48c0-a32a-d1ecaa0dd78b req-3f557ed2-55cc-4e5d-bed8-2c07bc273002 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f13f96d6-bf80-493f-8c88-883a6da35105] No waiting events found dispatching network-vif-plugged-a69d734a-1545-4982-9f2c-ed585fa6d7d1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 03:12:56 np0005548731 nova_compute[232433]: 2025-12-06 08:12:56.553 232437 WARNING nova.compute.manager [req-4223cdeb-8ff0-48c0-a32a-d1ecaa0dd78b req-3f557ed2-55cc-4e5d-bed8-2c07bc273002 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f13f96d6-bf80-493f-8c88-883a6da35105] Received unexpected event network-vif-plugged-a69d734a-1545-4982-9f2c-ed585fa6d7d1 for instance with vm_state stopped and task_state powering-on.#033[00m
Dec  6 03:12:56 np0005548731 nova_compute[232433]: 2025-12-06 08:12:56.570 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:12:56 np0005548731 podman[333632]: 2025-12-06 08:12:56.621179615 +0000 UTC m=+0.063410355 container create b423caaa8f6aad468499ce84feee42933511759cda20a603b4bb0b28cb81bb94 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-1fd5471b-8f3d-43f6-83ce-53aad6043b61, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, tcib_managed=true)
Dec  6 03:12:56 np0005548731 systemd[1]: Started libpod-conmon-b423caaa8f6aad468499ce84feee42933511759cda20a603b4bb0b28cb81bb94.scope.
Dec  6 03:12:56 np0005548731 podman[333632]: 2025-12-06 08:12:56.589156586 +0000 UTC m=+0.031387346 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec  6 03:12:56 np0005548731 systemd[1]: Started libcrun container.
Dec  6 03:12:56 np0005548731 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/41bc93069cc0a7ea2d3b7d4c37192bc93919ee6d740b3c81c06cd78896f8924e/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec  6 03:12:56 np0005548731 nova_compute[232433]: 2025-12-06 08:12:56.704 232437 DEBUG nova.virt.libvirt.host [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Removed pending event for f13f96d6-bf80-493f-8c88-883a6da35105 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Dec  6 03:12:56 np0005548731 nova_compute[232433]: 2025-12-06 08:12:56.705 232437 DEBUG nova.virt.driver [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Emitting event <LifecycleEvent: 1765008776.7039263, f13f96d6-bf80-493f-8c88-883a6da35105 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 03:12:56 np0005548731 nova_compute[232433]: 2025-12-06 08:12:56.705 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: f13f96d6-bf80-493f-8c88-883a6da35105] VM Resumed (Lifecycle Event)#033[00m
Dec  6 03:12:56 np0005548731 nova_compute[232433]: 2025-12-06 08:12:56.707 232437 DEBUG nova.compute.manager [None req-86d3fafc-1d6b-407d-b281-f2268ba041fd 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: f13f96d6-bf80-493f-8c88-883a6da35105] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec  6 03:12:56 np0005548731 nova_compute[232433]: 2025-12-06 08:12:56.712 232437 INFO nova.virt.libvirt.driver [-] [instance: f13f96d6-bf80-493f-8c88-883a6da35105] Instance rebooted successfully.#033[00m
Dec  6 03:12:56 np0005548731 nova_compute[232433]: 2025-12-06 08:12:56.712 232437 DEBUG nova.compute.manager [None req-86d3fafc-1d6b-407d-b281-f2268ba041fd 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: f13f96d6-bf80-493f-8c88-883a6da35105] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 03:12:56 np0005548731 podman[333632]: 2025-12-06 08:12:56.716358793 +0000 UTC m=+0.158589543 container init b423caaa8f6aad468499ce84feee42933511759cda20a603b4bb0b28cb81bb94 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-1fd5471b-8f3d-43f6-83ce-53aad6043b61, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Dec  6 03:12:56 np0005548731 podman[333632]: 2025-12-06 08:12:56.722662847 +0000 UTC m=+0.164893597 container start b423caaa8f6aad468499ce84feee42933511759cda20a603b4bb0b28cb81bb94 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-1fd5471b-8f3d-43f6-83ce-53aad6043b61, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2)
Dec  6 03:12:56 np0005548731 neutron-haproxy-ovnmeta-1fd5471b-8f3d-43f6-83ce-53aad6043b61[333653]: [NOTICE]   (333657) : New worker (333659) forked
Dec  6 03:12:56 np0005548731 neutron-haproxy-ovnmeta-1fd5471b-8f3d-43f6-83ce-53aad6043b61[333653]: [NOTICE]   (333657) : Loading success.
Dec  6 03:12:56 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:12:56 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:12:56 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:12:56.747 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:12:56 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:12:56 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:12:56 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:12:56.814 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:12:56 np0005548731 nova_compute[232433]: 2025-12-06 08:12:56.848 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: f13f96d6-bf80-493f-8c88-883a6da35105] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 03:12:56 np0005548731 nova_compute[232433]: 2025-12-06 08:12:56.852 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: f13f96d6-bf80-493f-8c88-883a6da35105] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: stopped, current task_state: powering-on, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  6 03:12:57 np0005548731 nova_compute[232433]: 2025-12-06 08:12:57.045 232437 DEBUG nova.virt.driver [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Emitting event <LifecycleEvent: 1765008776.7050858, f13f96d6-bf80-493f-8c88-883a6da35105 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 03:12:57 np0005548731 nova_compute[232433]: 2025-12-06 08:12:57.046 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: f13f96d6-bf80-493f-8c88-883a6da35105] VM Started (Lifecycle Event)#033[00m
Dec  6 03:12:57 np0005548731 nova_compute[232433]: 2025-12-06 08:12:57.119 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: f13f96d6-bf80-493f-8c88-883a6da35105] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 03:12:57 np0005548731 nova_compute[232433]: 2025-12-06 08:12:57.123 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: f13f96d6-bf80-493f-8c88-883a6da35105] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  6 03:12:58 np0005548731 nova_compute[232433]: 2025-12-06 08:12:58.639 232437 DEBUG nova.compute.manager [req-aa00e031-5ff1-49ce-87eb-c21cc804371e req-68c93c33-ca5e-422a-a8de-637de21ce5fe 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f13f96d6-bf80-493f-8c88-883a6da35105] Received event network-vif-plugged-a69d734a-1545-4982-9f2c-ed585fa6d7d1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 03:12:58 np0005548731 nova_compute[232433]: 2025-12-06 08:12:58.640 232437 DEBUG oslo_concurrency.lockutils [req-aa00e031-5ff1-49ce-87eb-c21cc804371e req-68c93c33-ca5e-422a-a8de-637de21ce5fe 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "f13f96d6-bf80-493f-8c88-883a6da35105-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:12:58 np0005548731 nova_compute[232433]: 2025-12-06 08:12:58.641 232437 DEBUG oslo_concurrency.lockutils [req-aa00e031-5ff1-49ce-87eb-c21cc804371e req-68c93c33-ca5e-422a-a8de-637de21ce5fe 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "f13f96d6-bf80-493f-8c88-883a6da35105-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:12:58 np0005548731 nova_compute[232433]: 2025-12-06 08:12:58.641 232437 DEBUG oslo_concurrency.lockutils [req-aa00e031-5ff1-49ce-87eb-c21cc804371e req-68c93c33-ca5e-422a-a8de-637de21ce5fe 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "f13f96d6-bf80-493f-8c88-883a6da35105-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:12:58 np0005548731 nova_compute[232433]: 2025-12-06 08:12:58.641 232437 DEBUG nova.compute.manager [req-aa00e031-5ff1-49ce-87eb-c21cc804371e req-68c93c33-ca5e-422a-a8de-637de21ce5fe 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f13f96d6-bf80-493f-8c88-883a6da35105] No waiting events found dispatching network-vif-plugged-a69d734a-1545-4982-9f2c-ed585fa6d7d1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 03:12:58 np0005548731 nova_compute[232433]: 2025-12-06 08:12:58.642 232437 WARNING nova.compute.manager [req-aa00e031-5ff1-49ce-87eb-c21cc804371e req-68c93c33-ca5e-422a-a8de-637de21ce5fe 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f13f96d6-bf80-493f-8c88-883a6da35105] Received unexpected event network-vif-plugged-a69d734a-1545-4982-9f2c-ed585fa6d7d1 for instance with vm_state active and task_state None.#033[00m
Dec  6 03:12:58 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:12:58 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:12:58 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:12:58.749 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:12:58 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:12:58 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:12:58 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:12:58.817 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:12:59 np0005548731 nova_compute[232433]: 2025-12-06 08:12:59.106 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:12:59 np0005548731 nova_compute[232433]: 2025-12-06 08:12:59.106 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:12:59 np0005548731 nova_compute[232433]: 2025-12-06 08:12:59.107 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._run_image_cache_manager_pass run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:12:59 np0005548731 nova_compute[232433]: 2025-12-06 08:12:59.107 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Acquiring lock "storage-registry-lock" by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:12:59 np0005548731 nova_compute[232433]: 2025-12-06 08:12:59.108 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "storage-registry-lock" acquired by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:12:59 np0005548731 nova_compute[232433]: 2025-12-06 08:12:59.109 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "storage-registry-lock" "released" by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:12:59 np0005548731 nova_compute[232433]: 2025-12-06 08:12:59.109 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Acquiring lock "storage-registry-lock" by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:12:59 np0005548731 nova_compute[232433]: 2025-12-06 08:12:59.110 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "storage-registry-lock" acquired by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:12:59 np0005548731 nova_compute[232433]: 2025-12-06 08:12:59.110 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "storage-registry-lock" "released" by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:12:59 np0005548731 nova_compute[232433]: 2025-12-06 08:12:59.130 232437 DEBUG nova.virt.libvirt.imagecache [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Adding ephemeral_1_0706d66 into backend ephemeral images _store_ephemeral_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:100#033[00m
Dec  6 03:12:59 np0005548731 nova_compute[232433]: 2025-12-06 08:12:59.148 232437 DEBUG nova.virt.libvirt.imagecache [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Verify base images _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:314#033[00m
Dec  6 03:12:59 np0005548731 nova_compute[232433]: 2025-12-06 08:12:59.149 232437 DEBUG nova.virt.libvirt.imagecache [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Image id 6efab05d-c7cf-4770-a5c3-c806a2739063 yields fingerprint 890368a5690a3dbdbb6650dcb9de9e2c9dc5acef _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:319#033[00m
Dec  6 03:12:59 np0005548731 nova_compute[232433]: 2025-12-06 08:12:59.149 232437 INFO nova.virt.libvirt.imagecache [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] image 6efab05d-c7cf-4770-a5c3-c806a2739063 at (/var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef): checking#033[00m
Dec  6 03:12:59 np0005548731 nova_compute[232433]: 2025-12-06 08:12:59.149 232437 DEBUG nova.virt.libvirt.imagecache [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] image 6efab05d-c7cf-4770-a5c3-c806a2739063 at (/var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef): image is in use _mark_in_use /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:279#033[00m
Dec  6 03:12:59 np0005548731 nova_compute[232433]: 2025-12-06 08:12:59.152 232437 DEBUG nova.virt.libvirt.imagecache [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Image id  yields fingerprint da39a3ee5e6b4b0d3255bfef95601890afd80709 _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:319#033[00m
Dec  6 03:12:59 np0005548731 nova_compute[232433]: 2025-12-06 08:12:59.152 232437 DEBUG nova.virt.libvirt.imagecache [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] f13f96d6-bf80-493f-8c88-883a6da35105 is a valid instance name _list_backing_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:126#033[00m
Dec  6 03:12:59 np0005548731 nova_compute[232433]: 2025-12-06 08:12:59.152 232437 WARNING nova.virt.libvirt.imagecache [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Unknown base file: /var/lib/nova/instances/_base/3061ff2caf019ed81096e4bbefa75aa61b352e98#033[00m
Dec  6 03:12:59 np0005548731 nova_compute[232433]: 2025-12-06 08:12:59.153 232437 WARNING nova.virt.libvirt.imagecache [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Unknown base file: /var/lib/nova/instances/_base/40c8d19f192ebe6ef01b2a3ea96d896752dcd737#033[00m
Dec  6 03:12:59 np0005548731 nova_compute[232433]: 2025-12-06 08:12:59.153 232437 WARNING nova.virt.libvirt.imagecache [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Unknown base file: /var/lib/nova/instances/_base/8d5797d05121f09476d1863b26e82e86b77e0d74#033[00m
Dec  6 03:12:59 np0005548731 nova_compute[232433]: 2025-12-06 08:12:59.153 232437 INFO nova.virt.libvirt.imagecache [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Active base files: /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef#033[00m
Dec  6 03:12:59 np0005548731 nova_compute[232433]: 2025-12-06 08:12:59.153 232437 INFO nova.virt.libvirt.imagecache [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Removable base files: /var/lib/nova/instances/_base/3061ff2caf019ed81096e4bbefa75aa61b352e98 /var/lib/nova/instances/_base/40c8d19f192ebe6ef01b2a3ea96d896752dcd737 /var/lib/nova/instances/_base/8d5797d05121f09476d1863b26e82e86b77e0d74#033[00m
Dec  6 03:12:59 np0005548731 nova_compute[232433]: 2025-12-06 08:12:59.154 232437 INFO nova.virt.libvirt.imagecache [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Base, swap or ephemeral file too young to remove: /var/lib/nova/instances/_base/3061ff2caf019ed81096e4bbefa75aa61b352e98#033[00m
Dec  6 03:12:59 np0005548731 nova_compute[232433]: 2025-12-06 08:12:59.154 232437 INFO nova.virt.libvirt.imagecache [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Base, swap or ephemeral file too young to remove: /var/lib/nova/instances/_base/40c8d19f192ebe6ef01b2a3ea96d896752dcd737#033[00m
Dec  6 03:12:59 np0005548731 nova_compute[232433]: 2025-12-06 08:12:59.154 232437 INFO nova.virt.libvirt.imagecache [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Base, swap or ephemeral file too young to remove: /var/lib/nova/instances/_base/8d5797d05121f09476d1863b26e82e86b77e0d74#033[00m
Dec  6 03:12:59 np0005548731 nova_compute[232433]: 2025-12-06 08:12:59.155 232437 DEBUG nova.virt.libvirt.imagecache [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Verification complete _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:350#033[00m
Dec  6 03:12:59 np0005548731 nova_compute[232433]: 2025-12-06 08:12:59.155 232437 DEBUG nova.virt.libvirt.imagecache [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Verify swap images _age_and_verify_swap_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:299#033[00m
Dec  6 03:12:59 np0005548731 nova_compute[232433]: 2025-12-06 08:12:59.155 232437 DEBUG nova.virt.libvirt.imagecache [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Verify ephemeral images _age_and_verify_ephemeral_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:284#033[00m
Dec  6 03:12:59 np0005548731 nova_compute[232433]: 2025-12-06 08:12:59.155 232437 INFO nova.virt.libvirt.imagecache [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Base, swap or ephemeral file too young to remove: /var/lib/nova/instances/_base/ephemeral_1_0706d66#033[00m
Dec  6 03:13:00 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:13:00 np0005548731 nova_compute[232433]: 2025-12-06 08:13:00.671 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:13:00 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:13:00 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:13:00 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:13:00.751 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:13:00 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:13:00 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:13:00 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:13:00.820 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:13:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:13:00.916 143965 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:13:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:13:00.916 143965 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:13:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:13:00.917 143965 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:13:01 np0005548731 nova_compute[232433]: 2025-12-06 08:13:01.574 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:13:02 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:13:02 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:13:02 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:13:02.753 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:13:02 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:13:02 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:13:02 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:13:02.823 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:13:02 np0005548731 podman[333674]: 2025-12-06 08:13:02.909327636 +0000 UTC m=+0.065149937 container health_status ae4fd08cdec31d6ae2a6b91ffff7deb6ca5a8375cb52ee4b685cc6f25796824e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, org.label-schema.schema-version=1.0)
Dec  6 03:13:02 np0005548731 podman[333672]: 2025-12-06 08:13:02.917322181 +0000 UTC m=+0.078840010 container health_status 26c2ce9bdb83c6db5ccad7f5b2a7cb4e9354ab0235ad2f942ea42c8feda2142b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Dec  6 03:13:02 np0005548731 podman[333673]: 2025-12-06 08:13:02.955523322 +0000 UTC m=+0.115467613 container health_status a118c3becf56cfbcae208065771ebf10a65162bb7b96389971293e534823fb78 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_controller, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125)
Dec  6 03:13:04 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:13:04 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:13:04 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:13:04.755 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:13:04 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:13:04 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:13:04 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:13:04.826 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:13:05 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:13:05 np0005548731 nova_compute[232433]: 2025-12-06 08:13:05.675 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:13:06 np0005548731 nova_compute[232433]: 2025-12-06 08:13:06.577 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:13:06 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:13:06 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:13:06 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:13:06.757 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:13:06 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:13:06 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:13:06 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:13:06.830 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:13:08 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:13:08 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 03:13:08 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:13:08.759 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 03:13:08 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:13:08 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:13:08 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:13:08.834 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:13:10 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  6 03:13:10 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1088677073' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  6 03:13:10 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  6 03:13:10 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1088677073' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  6 03:13:10 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:13:10 np0005548731 nova_compute[232433]: 2025-12-06 08:13:10.677 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:13:10 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:13:10 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:13:10 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:13:10.762 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:13:10 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:13:10 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:13:10 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:13:10.838 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:13:11 np0005548731 nova_compute[232433]: 2025-12-06 08:13:11.577 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:13:12 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:13:12 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:13:12 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:13:12.764 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:13:12 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:13:12 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:13:12 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:13:12.841 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:13:14 np0005548731 ovn_controller[133927]: 2025-12-06T08:13:14Z|00137|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:7d:8c:ed 10.100.0.12
Dec  6 03:13:14 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:13:14 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:13:14 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:13:14.766 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:13:14 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:13:14 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:13:14 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:13:14.845 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:13:15 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:13:15 np0005548731 nova_compute[232433]: 2025-12-06 08:13:15.680 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:13:16 np0005548731 nova_compute[232433]: 2025-12-06 08:13:16.579 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:13:16 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:13:16 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:13:16 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:13:16.768 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:13:16 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:13:16 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:13:16 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:13:16.848 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:13:16 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 03:13:16 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 03:13:17 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  6 03:13:17 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 03:13:17 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  6 03:13:18 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:13:18 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:13:18 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:13:18.770 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:13:18 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:13:18 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:13:18 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:13:18.851 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:13:19 np0005548731 nova_compute[232433]: 2025-12-06 08:13:19.877 232437 INFO nova.compute.manager [None req-34a3778d-341b-4010-b641-169d2537d174 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: f13f96d6-bf80-493f-8c88-883a6da35105] Get console output#033[00m
Dec  6 03:13:19 np0005548731 nova_compute[232433]: 2025-12-06 08:13:19.883 261230 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Dec  6 03:13:20 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:13:20 np0005548731 nova_compute[232433]: 2025-12-06 08:13:20.653 232437 DEBUG nova.compute.manager [req-05a62fc2-0d56-4f35-974e-4a16b223c955 req-af6efa73-fe61-4ff2-9289-05979c6e180f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f13f96d6-bf80-493f-8c88-883a6da35105] Received event network-changed-a69d734a-1545-4982-9f2c-ed585fa6d7d1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 03:13:20 np0005548731 nova_compute[232433]: 2025-12-06 08:13:20.653 232437 DEBUG nova.compute.manager [req-05a62fc2-0d56-4f35-974e-4a16b223c955 req-af6efa73-fe61-4ff2-9289-05979c6e180f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f13f96d6-bf80-493f-8c88-883a6da35105] Refreshing instance network info cache due to event network-changed-a69d734a-1545-4982-9f2c-ed585fa6d7d1. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  6 03:13:20 np0005548731 nova_compute[232433]: 2025-12-06 08:13:20.653 232437 DEBUG oslo_concurrency.lockutils [req-05a62fc2-0d56-4f35-974e-4a16b223c955 req-af6efa73-fe61-4ff2-9289-05979c6e180f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-f13f96d6-bf80-493f-8c88-883a6da35105" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 03:13:20 np0005548731 nova_compute[232433]: 2025-12-06 08:13:20.654 232437 DEBUG oslo_concurrency.lockutils [req-05a62fc2-0d56-4f35-974e-4a16b223c955 req-af6efa73-fe61-4ff2-9289-05979c6e180f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-f13f96d6-bf80-493f-8c88-883a6da35105" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 03:13:20 np0005548731 nova_compute[232433]: 2025-12-06 08:13:20.654 232437 DEBUG nova.network.neutron [req-05a62fc2-0d56-4f35-974e-4a16b223c955 req-af6efa73-fe61-4ff2-9289-05979c6e180f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f13f96d6-bf80-493f-8c88-883a6da35105] Refreshing network info cache for port a69d734a-1545-4982-9f2c-ed585fa6d7d1 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  6 03:13:20 np0005548731 nova_compute[232433]: 2025-12-06 08:13:20.683 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:13:20 np0005548731 nova_compute[232433]: 2025-12-06 08:13:20.733 232437 DEBUG oslo_concurrency.lockutils [None req-54ae528b-5451-4480-b2e5-54482cd29921 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Acquiring lock "f13f96d6-bf80-493f-8c88-883a6da35105" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:13:20 np0005548731 nova_compute[232433]: 2025-12-06 08:13:20.734 232437 DEBUG oslo_concurrency.lockutils [None req-54ae528b-5451-4480-b2e5-54482cd29921 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Lock "f13f96d6-bf80-493f-8c88-883a6da35105" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:13:20 np0005548731 nova_compute[232433]: 2025-12-06 08:13:20.734 232437 DEBUG oslo_concurrency.lockutils [None req-54ae528b-5451-4480-b2e5-54482cd29921 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Acquiring lock "f13f96d6-bf80-493f-8c88-883a6da35105-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:13:20 np0005548731 nova_compute[232433]: 2025-12-06 08:13:20.734 232437 DEBUG oslo_concurrency.lockutils [None req-54ae528b-5451-4480-b2e5-54482cd29921 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Lock "f13f96d6-bf80-493f-8c88-883a6da35105-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:13:20 np0005548731 nova_compute[232433]: 2025-12-06 08:13:20.735 232437 DEBUG oslo_concurrency.lockutils [None req-54ae528b-5451-4480-b2e5-54482cd29921 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Lock "f13f96d6-bf80-493f-8c88-883a6da35105-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:13:20 np0005548731 nova_compute[232433]: 2025-12-06 08:13:20.736 232437 INFO nova.compute.manager [None req-54ae528b-5451-4480-b2e5-54482cd29921 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: f13f96d6-bf80-493f-8c88-883a6da35105] Terminating instance#033[00m
Dec  6 03:13:20 np0005548731 nova_compute[232433]: 2025-12-06 08:13:20.737 232437 DEBUG nova.compute.manager [None req-54ae528b-5451-4480-b2e5-54482cd29921 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: f13f96d6-bf80-493f-8c88-883a6da35105] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec  6 03:13:20 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:13:20 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:13:20 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:13:20.772 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:13:20 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:13:20 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:13:20 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:13:20.855 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:13:21 np0005548731 nova_compute[232433]: 2025-12-06 08:13:21.580 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:13:21 np0005548731 kernel: tapa69d734a-15 (unregistering): left promiscuous mode
Dec  6 03:13:21 np0005548731 NetworkManager[49182]: <info>  [1765008801.7836] device (tapa69d734a-15): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec  6 03:13:21 np0005548731 ovn_controller[133927]: 2025-12-06T08:13:21Z|01024|binding|INFO|Releasing lport a69d734a-1545-4982-9f2c-ed585fa6d7d1 from this chassis (sb_readonly=0)
Dec  6 03:13:21 np0005548731 ovn_controller[133927]: 2025-12-06T08:13:21Z|01025|binding|INFO|Setting lport a69d734a-1545-4982-9f2c-ed585fa6d7d1 down in Southbound
Dec  6 03:13:21 np0005548731 nova_compute[232433]: 2025-12-06 08:13:21.835 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:13:21 np0005548731 ovn_controller[133927]: 2025-12-06T08:13:21Z|01026|binding|INFO|Removing iface tapa69d734a-15 ovn-installed in OVS
Dec  6 03:13:21 np0005548731 nova_compute[232433]: 2025-12-06 08:13:21.837 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:13:21 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:13:21.840 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7d:8c:ed 10.100.0.12'], port_security=['fa:16:3e:7d:8c:ed 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'f13f96d6-bf80-493f-8c88-883a6da35105', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1fd5471b-8f3d-43f6-83ce-53aad6043b61', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'fd8e24e430c64364ace789d88a68ba5f', 'neutron:revision_number': '6', 'neutron:security_group_ids': 'f28f4192-1561-45a2-a0b3-b0c0baeb6833', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e0bb66e4-1df3-4c78-8bff-c48b2b1586b5, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], logical_port=a69d734a-1545-4982-9f2c-ed585fa6d7d1) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 03:13:21 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:13:21.842 143965 INFO neutron.agent.ovn.metadata.agent [-] Port a69d734a-1545-4982-9f2c-ed585fa6d7d1 in datapath 1fd5471b-8f3d-43f6-83ce-53aad6043b61 unbound from our chassis#033[00m
Dec  6 03:13:21 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:13:21.843 143965 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 1fd5471b-8f3d-43f6-83ce-53aad6043b61, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec  6 03:13:21 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:13:21.844 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[24feb93d-79be-44df-9cbc-8e44b9e90ce2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:13:21 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:13:21.845 143965 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-1fd5471b-8f3d-43f6-83ce-53aad6043b61 namespace which is not needed anymore#033[00m
Dec  6 03:13:21 np0005548731 nova_compute[232433]: 2025-12-06 08:13:21.848 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:13:21 np0005548731 systemd[1]: machine-qemu\x2d103\x2dinstance\x2d000000c7.scope: Deactivated successfully.
Dec  6 03:13:21 np0005548731 systemd[1]: machine-qemu\x2d103\x2dinstance\x2d000000c7.scope: Consumed 14.357s CPU time.
Dec  6 03:13:21 np0005548731 systemd-machined[195355]: Machine qemu-103-instance-000000c7 terminated.
Dec  6 03:13:21 np0005548731 neutron-haproxy-ovnmeta-1fd5471b-8f3d-43f6-83ce-53aad6043b61[333653]: [NOTICE]   (333657) : haproxy version is 2.8.14-c23fe91
Dec  6 03:13:21 np0005548731 neutron-haproxy-ovnmeta-1fd5471b-8f3d-43f6-83ce-53aad6043b61[333653]: [NOTICE]   (333657) : path to executable is /usr/sbin/haproxy
Dec  6 03:13:21 np0005548731 neutron-haproxy-ovnmeta-1fd5471b-8f3d-43f6-83ce-53aad6043b61[333653]: [WARNING]  (333657) : Exiting Master process...
Dec  6 03:13:21 np0005548731 neutron-haproxy-ovnmeta-1fd5471b-8f3d-43f6-83ce-53aad6043b61[333653]: [ALERT]    (333657) : Current worker (333659) exited with code 143 (Terminated)
Dec  6 03:13:21 np0005548731 neutron-haproxy-ovnmeta-1fd5471b-8f3d-43f6-83ce-53aad6043b61[333653]: [WARNING]  (333657) : All workers exited. Exiting... (0)
Dec  6 03:13:21 np0005548731 systemd[1]: libpod-b423caaa8f6aad468499ce84feee42933511759cda20a603b4bb0b28cb81bb94.scope: Deactivated successfully.
Dec  6 03:13:21 np0005548731 podman[333949]: 2025-12-06 08:13:21.968501615 +0000 UTC m=+0.042369773 container died b423caaa8f6aad468499ce84feee42933511759cda20a603b4bb0b28cb81bb94 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-1fd5471b-8f3d-43f6-83ce-53aad6043b61, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3)
Dec  6 03:13:21 np0005548731 nova_compute[232433]: 2025-12-06 08:13:21.972 232437 INFO nova.virt.libvirt.driver [-] [instance: f13f96d6-bf80-493f-8c88-883a6da35105] Instance destroyed successfully.#033[00m
Dec  6 03:13:21 np0005548731 nova_compute[232433]: 2025-12-06 08:13:21.972 232437 DEBUG nova.objects.instance [None req-54ae528b-5451-4480-b2e5-54482cd29921 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Lazy-loading 'resources' on Instance uuid f13f96d6-bf80-493f-8c88-883a6da35105 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 03:13:21 np0005548731 nova_compute[232433]: 2025-12-06 08:13:21.994 232437 DEBUG nova.virt.libvirt.vif [None req-54ae528b-5451-4480-b2e5-54482cd29921 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T08:12:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-193247361',display_name='tempest-TestNetworkAdvancedServerOps-server-193247361',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-193247361',id=199,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLypEWdRECJBPrBTAU9x3pM8rCRxZw7L301HpsN0K17uUZRgP5F7CBi5FWUnWRGAWn4/kMZFUFGjj2mxL7XQEmDUEDcFRVNsKsRyeKF5IIaPVdD+07YBf8H0xoIuS5vQEA==',key_name='tempest-TestNetworkAdvancedServerOps-731579489',keypairs=<?>,launch_index=0,launched_at=2025-12-06T08:12:23Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='fd8e24e430c64364ace789d88a68ba5f',ramdisk_id='',reservation_id='r-wjpvo28t',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-1171852383',owner_user_name='tempest-TestNetworkAdvancedServerOps-1171852383-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T08:12:56Z,user_data=None,user_id='2ed2d17026504d70b893923a85cece4d',uuid=f13f96d6-bf80-493f-8c88-883a6da35105,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a69d734a-1545-4982-9f2c-ed585fa6d7d1", "address": "fa:16:3e:7d:8c:ed", "network": {"id": "1fd5471b-8f3d-43f6-83ce-53aad6043b61", "bridge": "br-int", "label": "tempest-network-smoke--236738794", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.215", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fd8e24e430c64364ace789d88a68ba5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa69d734a-15", "ovs_interfaceid": "a69d734a-1545-4982-9f2c-ed585fa6d7d1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec  6 03:13:21 np0005548731 nova_compute[232433]: 2025-12-06 08:13:21.995 232437 DEBUG nova.network.os_vif_util [None req-54ae528b-5451-4480-b2e5-54482cd29921 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Converting VIF {"id": "a69d734a-1545-4982-9f2c-ed585fa6d7d1", "address": "fa:16:3e:7d:8c:ed", "network": {"id": "1fd5471b-8f3d-43f6-83ce-53aad6043b61", "bridge": "br-int", "label": "tempest-network-smoke--236738794", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.215", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fd8e24e430c64364ace789d88a68ba5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa69d734a-15", "ovs_interfaceid": "a69d734a-1545-4982-9f2c-ed585fa6d7d1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 03:13:21 np0005548731 nova_compute[232433]: 2025-12-06 08:13:21.996 232437 DEBUG nova.network.os_vif_util [None req-54ae528b-5451-4480-b2e5-54482cd29921 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7d:8c:ed,bridge_name='br-int',has_traffic_filtering=True,id=a69d734a-1545-4982-9f2c-ed585fa6d7d1,network=Network(1fd5471b-8f3d-43f6-83ce-53aad6043b61),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa69d734a-15') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 03:13:21 np0005548731 nova_compute[232433]: 2025-12-06 08:13:21.996 232437 DEBUG os_vif [None req-54ae528b-5451-4480-b2e5-54482cd29921 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:7d:8c:ed,bridge_name='br-int',has_traffic_filtering=True,id=a69d734a-1545-4982-9f2c-ed585fa6d7d1,network=Network(1fd5471b-8f3d-43f6-83ce-53aad6043b61),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa69d734a-15') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec  6 03:13:21 np0005548731 nova_compute[232433]: 2025-12-06 08:13:21.998 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:13:21 np0005548731 nova_compute[232433]: 2025-12-06 08:13:21.998 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa69d734a-15, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 03:13:21 np0005548731 nova_compute[232433]: 2025-12-06 08:13:21.999 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:13:22 np0005548731 nova_compute[232433]: 2025-12-06 08:13:22.001 232437 DEBUG nova.network.neutron [req-05a62fc2-0d56-4f35-974e-4a16b223c955 req-af6efa73-fe61-4ff2-9289-05979c6e180f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f13f96d6-bf80-493f-8c88-883a6da35105] Updated VIF entry in instance network info cache for port a69d734a-1545-4982-9f2c-ed585fa6d7d1. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  6 03:13:22 np0005548731 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-b423caaa8f6aad468499ce84feee42933511759cda20a603b4bb0b28cb81bb94-userdata-shm.mount: Deactivated successfully.
Dec  6 03:13:22 np0005548731 nova_compute[232433]: 2025-12-06 08:13:22.002 232437 DEBUG nova.network.neutron [req-05a62fc2-0d56-4f35-974e-4a16b223c955 req-af6efa73-fe61-4ff2-9289-05979c6e180f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f13f96d6-bf80-493f-8c88-883a6da35105] Updating instance_info_cache with network_info: [{"id": "a69d734a-1545-4982-9f2c-ed585fa6d7d1", "address": "fa:16:3e:7d:8c:ed", "network": {"id": "1fd5471b-8f3d-43f6-83ce-53aad6043b61", "bridge": "br-int", "label": "tempest-network-smoke--236738794", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fd8e24e430c64364ace789d88a68ba5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa69d734a-15", "ovs_interfaceid": "a69d734a-1545-4982-9f2c-ed585fa6d7d1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 03:13:22 np0005548731 nova_compute[232433]: 2025-12-06 08:13:22.003 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:13:22 np0005548731 systemd[1]: var-lib-containers-storage-overlay-41bc93069cc0a7ea2d3b7d4c37192bc93919ee6d740b3c81c06cd78896f8924e-merged.mount: Deactivated successfully.
Dec  6 03:13:22 np0005548731 nova_compute[232433]: 2025-12-06 08:13:22.006 232437 INFO os_vif [None req-54ae528b-5451-4480-b2e5-54482cd29921 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:7d:8c:ed,bridge_name='br-int',has_traffic_filtering=True,id=a69d734a-1545-4982-9f2c-ed585fa6d7d1,network=Network(1fd5471b-8f3d-43f6-83ce-53aad6043b61),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa69d734a-15')#033[00m
Dec  6 03:13:22 np0005548731 podman[333949]: 2025-12-06 08:13:22.009433641 +0000 UTC m=+0.083301799 container cleanup b423caaa8f6aad468499ce84feee42933511759cda20a603b4bb0b28cb81bb94 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-1fd5471b-8f3d-43f6-83ce-53aad6043b61, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Dec  6 03:13:22 np0005548731 systemd[1]: libpod-conmon-b423caaa8f6aad468499ce84feee42933511759cda20a603b4bb0b28cb81bb94.scope: Deactivated successfully.
Dec  6 03:13:22 np0005548731 nova_compute[232433]: 2025-12-06 08:13:22.025 232437 DEBUG oslo_concurrency.lockutils [req-05a62fc2-0d56-4f35-974e-4a16b223c955 req-af6efa73-fe61-4ff2-9289-05979c6e180f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-f13f96d6-bf80-493f-8c88-883a6da35105" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 03:13:22 np0005548731 podman[334003]: 2025-12-06 08:13:22.06849537 +0000 UTC m=+0.038545300 container remove b423caaa8f6aad468499ce84feee42933511759cda20a603b4bb0b28cb81bb94 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-1fd5471b-8f3d-43f6-83ce-53aad6043b61, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  6 03:13:22 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:13:22.073 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[6e72cb4c-ef64-4aa8-88af-94d554bae212]: (4, ('Sat Dec  6 08:13:21 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-1fd5471b-8f3d-43f6-83ce-53aad6043b61 (b423caaa8f6aad468499ce84feee42933511759cda20a603b4bb0b28cb81bb94)\nb423caaa8f6aad468499ce84feee42933511759cda20a603b4bb0b28cb81bb94\nSat Dec  6 08:13:22 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-1fd5471b-8f3d-43f6-83ce-53aad6043b61 (b423caaa8f6aad468499ce84feee42933511759cda20a603b4bb0b28cb81bb94)\nb423caaa8f6aad468499ce84feee42933511759cda20a603b4bb0b28cb81bb94\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:13:22 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:13:22.075 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[1dc894bc-fd2e-4be4-96ae-efe7018b4b81]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:13:22 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:13:22.076 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1fd5471b-80, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 03:13:22 np0005548731 nova_compute[232433]: 2025-12-06 08:13:22.077 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:13:22 np0005548731 kernel: tap1fd5471b-80: left promiscuous mode
Dec  6 03:13:22 np0005548731 nova_compute[232433]: 2025-12-06 08:13:22.090 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:13:22 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:13:22.094 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[1e7887d7-ba59-4748-bfba-95ca3c749d7e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:13:22 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:13:22.108 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[319177c0-33ed-4b3f-80a3-3158f8d9f363]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:13:22 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:13:22.109 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[0b67aaa4-f16a-4c38-84d5-6ac8990aa5fd]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:13:22 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:13:22.126 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[a167beab-a37d-4047-bf8b-c879d10b348e]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 903797, 'reachable_time': 19437, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 334024, 'error': None, 'target': 'ovnmeta-1fd5471b-8f3d-43f6-83ce-53aad6043b61', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:13:22 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:13:22.130 144079 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-1fd5471b-8f3d-43f6-83ce-53aad6043b61 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Dec  6 03:13:22 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:13:22.130 144079 DEBUG oslo.privsep.daemon [-] privsep: reply[727b297d-8142-487e-8a63-3e29cfd33b15]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:13:22 np0005548731 systemd[1]: run-netns-ovnmeta\x2d1fd5471b\x2d8f3d\x2d43f6\x2d83ce\x2d53aad6043b61.mount: Deactivated successfully.
Dec  6 03:13:22 np0005548731 nova_compute[232433]: 2025-12-06 08:13:22.143 232437 DEBUG nova.compute.manager [req-932b3a08-c806-4eb8-a1dd-986f1e47d241 req-d800d07b-04ce-4800-b9ec-3821de0f86e4 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f13f96d6-bf80-493f-8c88-883a6da35105] Received event network-vif-unplugged-a69d734a-1545-4982-9f2c-ed585fa6d7d1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 03:13:22 np0005548731 nova_compute[232433]: 2025-12-06 08:13:22.143 232437 DEBUG oslo_concurrency.lockutils [req-932b3a08-c806-4eb8-a1dd-986f1e47d241 req-d800d07b-04ce-4800-b9ec-3821de0f86e4 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "f13f96d6-bf80-493f-8c88-883a6da35105-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:13:22 np0005548731 nova_compute[232433]: 2025-12-06 08:13:22.143 232437 DEBUG oslo_concurrency.lockutils [req-932b3a08-c806-4eb8-a1dd-986f1e47d241 req-d800d07b-04ce-4800-b9ec-3821de0f86e4 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "f13f96d6-bf80-493f-8c88-883a6da35105-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:13:22 np0005548731 nova_compute[232433]: 2025-12-06 08:13:22.143 232437 DEBUG oslo_concurrency.lockutils [req-932b3a08-c806-4eb8-a1dd-986f1e47d241 req-d800d07b-04ce-4800-b9ec-3821de0f86e4 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "f13f96d6-bf80-493f-8c88-883a6da35105-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:13:22 np0005548731 nova_compute[232433]: 2025-12-06 08:13:22.144 232437 DEBUG nova.compute.manager [req-932b3a08-c806-4eb8-a1dd-986f1e47d241 req-d800d07b-04ce-4800-b9ec-3821de0f86e4 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f13f96d6-bf80-493f-8c88-883a6da35105] No waiting events found dispatching network-vif-unplugged-a69d734a-1545-4982-9f2c-ed585fa6d7d1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 03:13:22 np0005548731 nova_compute[232433]: 2025-12-06 08:13:22.144 232437 DEBUG nova.compute.manager [req-932b3a08-c806-4eb8-a1dd-986f1e47d241 req-d800d07b-04ce-4800-b9ec-3821de0f86e4 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f13f96d6-bf80-493f-8c88-883a6da35105] Received event network-vif-unplugged-a69d734a-1545-4982-9f2c-ed585fa6d7d1 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Dec  6 03:13:22 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:13:22.252 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=92, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:ec:b3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '32:72:e7:89:e0:7d'}, ipsec=False) old=SB_Global(nb_cfg=91) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 03:13:22 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:13:22.253 143965 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec  6 03:13:22 np0005548731 nova_compute[232433]: 2025-12-06 08:13:22.253 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:13:22 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:13:22 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:13:22 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:13:22.774 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:13:22 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:13:22 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:13:22 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:13:22.858 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:13:24 np0005548731 nova_compute[232433]: 2025-12-06 08:13:24.265 232437 DEBUG nova.compute.manager [req-d8ff08a8-7cae-4e21-83d4-6b3c79f86b09 req-dfa70968-9bd5-4dce-a98a-5478f7dd11dc 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f13f96d6-bf80-493f-8c88-883a6da35105] Received event network-vif-plugged-a69d734a-1545-4982-9f2c-ed585fa6d7d1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 03:13:24 np0005548731 nova_compute[232433]: 2025-12-06 08:13:24.266 232437 DEBUG oslo_concurrency.lockutils [req-d8ff08a8-7cae-4e21-83d4-6b3c79f86b09 req-dfa70968-9bd5-4dce-a98a-5478f7dd11dc 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "f13f96d6-bf80-493f-8c88-883a6da35105-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:13:24 np0005548731 nova_compute[232433]: 2025-12-06 08:13:24.266 232437 DEBUG oslo_concurrency.lockutils [req-d8ff08a8-7cae-4e21-83d4-6b3c79f86b09 req-dfa70968-9bd5-4dce-a98a-5478f7dd11dc 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "f13f96d6-bf80-493f-8c88-883a6da35105-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:13:24 np0005548731 nova_compute[232433]: 2025-12-06 08:13:24.266 232437 DEBUG oslo_concurrency.lockutils [req-d8ff08a8-7cae-4e21-83d4-6b3c79f86b09 req-dfa70968-9bd5-4dce-a98a-5478f7dd11dc 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "f13f96d6-bf80-493f-8c88-883a6da35105-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:13:24 np0005548731 nova_compute[232433]: 2025-12-06 08:13:24.266 232437 DEBUG nova.compute.manager [req-d8ff08a8-7cae-4e21-83d4-6b3c79f86b09 req-dfa70968-9bd5-4dce-a98a-5478f7dd11dc 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f13f96d6-bf80-493f-8c88-883a6da35105] No waiting events found dispatching network-vif-plugged-a69d734a-1545-4982-9f2c-ed585fa6d7d1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 03:13:24 np0005548731 nova_compute[232433]: 2025-12-06 08:13:24.266 232437 WARNING nova.compute.manager [req-d8ff08a8-7cae-4e21-83d4-6b3c79f86b09 req-dfa70968-9bd5-4dce-a98a-5478f7dd11dc 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f13f96d6-bf80-493f-8c88-883a6da35105] Received unexpected event network-vif-plugged-a69d734a-1545-4982-9f2c-ed585fa6d7d1 for instance with vm_state active and task_state deleting.#033[00m
Dec  6 03:13:24 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:13:24 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:13:24 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:13:24.776 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:13:24 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:13:24 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:13:24 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:13:24.861 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:13:25 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:13:26 np0005548731 nova_compute[232433]: 2025-12-06 08:13:26.582 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:13:26 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:13:26 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:13:26 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:13:26.778 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:13:26 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:13:26 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:13:26 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:13:26.864 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:13:27 np0005548731 nova_compute[232433]: 2025-12-06 08:13:26.999 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:13:27 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:13:27.255 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=9f96b960-b4f2-40bd-ae99-08121f5e8b78, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '92'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 03:13:28 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:13:28 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:13:28 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:13:28.780 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:13:28 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:13:28 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:13:28 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:13:28.867 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:13:30 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:13:30 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:13:30 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:13:30 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:13:30.782 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:13:30 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:13:30 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:13:30 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:13:30.870 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:13:31 np0005548731 nova_compute[232433]: 2025-12-06 08:13:31.623 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:13:31 np0005548731 ceph-mds[82074]: mds.beacon.cephfs.compute-2.tjfgow missed beacon ack from the monitors
Dec  6 03:13:32 np0005548731 nova_compute[232433]: 2025-12-06 08:13:32.001 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:13:32 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:13:32 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:13:32 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:13:32.784 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:13:32 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:13:32 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:13:32 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:13:32.873 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:13:33 np0005548731 podman[334035]: 2025-12-06 08:13:33.903318843 +0000 UTC m=+0.054099649 container health_status ae4fd08cdec31d6ae2a6b91ffff7deb6ca5a8375cb52ee4b685cc6f25796824e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_id=multipathd, io.buildah.version=1.41.3, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  6 03:13:33 np0005548731 podman[334033]: 2025-12-06 08:13:33.9044041 +0000 UTC m=+0.054557460 container health_status 26c2ce9bdb83c6db5ccad7f5b2a7cb4e9354ab0235ad2f942ea42c8feda2142b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec  6 03:13:34 np0005548731 podman[334034]: 2025-12-06 08:13:34.007462909 +0000 UTC m=+0.158311056 container health_status a118c3becf56cfbcae208065771ebf10a65162bb7b96389971293e534823fb78 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_controller, org.label-schema.build-date=20251125, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec  6 03:13:34 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:13:34 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:13:34 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:13:34.785 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:13:34 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:13:34 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:13:34 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:13:34.876 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:13:35 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:13:35 np0005548731 ceph-mds[82074]: mds.beacon.cephfs.compute-2.tjfgow missed beacon ack from the monitors
Dec  6 03:13:35 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).paxos(paxos updating c 6778..7443) lease_timeout -- calling new election
Dec  6 03:13:35 np0005548731 ceph-mon[77458]: log_channel(cluster) log [INF] : mon.compute-2 calling monitor election
Dec  6 03:13:35 np0005548731 ceph-mon[77458]: paxos.1).electionLogic(62) init, last seen epoch 62
Dec  6 03:13:35 np0005548731 ceph-mon[77458]: mon.compute-2@1(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec  6 03:13:35 np0005548731 ceph-mon[77458]: mon.compute-2@1(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec  6 03:13:36 np0005548731 nova_compute[232433]: 2025-12-06 08:13:36.625 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:13:36 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:13:36 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:13:36 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:13:36.787 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:13:36 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:13:36 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:13:36 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:13:36.878 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:13:36 np0005548731 nova_compute[232433]: 2025-12-06 08:13:36.968 232437 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765008801.9672353, f13f96d6-bf80-493f-8c88-883a6da35105 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 03:13:36 np0005548731 nova_compute[232433]: 2025-12-06 08:13:36.969 232437 INFO nova.compute.manager [-] [instance: f13f96d6-bf80-493f-8c88-883a6da35105] VM Stopped (Lifecycle Event)#033[00m
Dec  6 03:13:37 np0005548731 nova_compute[232433]: 2025-12-06 08:13:37.003 232437 DEBUG nova.compute.manager [None req-ca1c74a5-4c47-467e-b8b1-4c44053fed83 - - - - - -] [instance: f13f96d6-bf80-493f-8c88-883a6da35105] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 03:13:37 np0005548731 nova_compute[232433]: 2025-12-06 08:13:37.003 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:13:37 np0005548731 nova_compute[232433]: 2025-12-06 08:13:37.007 232437 DEBUG nova.compute.manager [None req-ca1c74a5-4c47-467e-b8b1-4c44053fed83 - - - - - -] [instance: f13f96d6-bf80-493f-8c88-883a6da35105] Synchronizing instance power state after lifecycle event "Stopped"; current vm_state: active, current task_state: deleting, current DB power_state: 1, VM power_state: 4 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  6 03:13:37 np0005548731 nova_compute[232433]: 2025-12-06 08:13:37.036 232437 INFO nova.compute.manager [None req-ca1c74a5-4c47-467e-b8b1-4c44053fed83 - - - - - -] [instance: f13f96d6-bf80-493f-8c88-883a6da35105] During sync_power_state the instance has a pending task (deleting). Skip.#033[00m
Dec  6 03:13:38 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:13:38 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:13:38 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:13:38.789 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:13:38 np0005548731 ceph-mon[77458]: mon.compute-2@1(electing) e3 handle_auth_request failed to assign global_id
Dec  6 03:13:38 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:13:38 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:13:38 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:13:38.882 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:13:39 np0005548731 ceph-mon[77458]: mon.compute-2@1(electing) e3 handle_auth_request failed to assign global_id
Dec  6 03:13:39 np0005548731 ceph-mon[77458]: mon.compute-2@1(electing) e3 handle_auth_request failed to assign global_id
Dec  6 03:13:39 np0005548731 ceph-mds[82074]: mds.beacon.cephfs.compute-2.tjfgow missed beacon ack from the monitors
Dec  6 03:13:39 np0005548731 nova_compute[232433]: 2025-12-06 08:13:39.847 232437 INFO nova.virt.libvirt.driver [None req-54ae528b-5451-4480-b2e5-54482cd29921 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: f13f96d6-bf80-493f-8c88-883a6da35105] Deleting instance files /var/lib/nova/instances/f13f96d6-bf80-493f-8c88-883a6da35105_del#033[00m
Dec  6 03:13:39 np0005548731 nova_compute[232433]: 2025-12-06 08:13:39.849 232437 INFO nova.virt.libvirt.driver [None req-54ae528b-5451-4480-b2e5-54482cd29921 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: f13f96d6-bf80-493f-8c88-883a6da35105] Deletion of /var/lib/nova/instances/f13f96d6-bf80-493f-8c88-883a6da35105_del complete#033[00m
Dec  6 03:13:39 np0005548731 nova_compute[232433]: 2025-12-06 08:13:39.917 232437 INFO nova.compute.manager [None req-54ae528b-5451-4480-b2e5-54482cd29921 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: f13f96d6-bf80-493f-8c88-883a6da35105] Took 19.18 seconds to destroy the instance on the hypervisor.#033[00m
Dec  6 03:13:39 np0005548731 nova_compute[232433]: 2025-12-06 08:13:39.918 232437 DEBUG oslo.service.loopingcall [None req-54ae528b-5451-4480-b2e5-54482cd29921 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Dec  6 03:13:39 np0005548731 nova_compute[232433]: 2025-12-06 08:13:39.918 232437 DEBUG nova.compute.manager [-] [instance: f13f96d6-bf80-493f-8c88-883a6da35105] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Dec  6 03:13:39 np0005548731 nova_compute[232433]: 2025-12-06 08:13:39.919 232437 DEBUG nova.network.neutron [-] [instance: f13f96d6-bf80-493f-8c88-883a6da35105] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Dec  6 03:13:40 np0005548731 ceph-mon[77458]: mon.compute-2@1(electing) e3 handle_auth_request failed to assign global_id
Dec  6 03:13:40 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:13:40 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:13:40 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:13:40.791 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:13:40 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:13:40 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:13:40 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:13:40.884 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:13:41 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec  6 03:13:41 np0005548731 nova_compute[232433]: 2025-12-06 08:13:41.668 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:13:41 np0005548731 ceph-mon[77458]: log_channel(cluster) log [INF] : mon.compute-2 calling monitor election
Dec  6 03:13:41 np0005548731 ceph-mon[77458]: paxos.1).electionLogic(66) init, last seen epoch 66
Dec  6 03:13:41 np0005548731 ceph-mon[77458]: mon.compute-2@1(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec  6 03:13:41 np0005548731 ceph-mon[77458]: mon.compute-2@1(electing) e3 handle_auth_request failed to assign global_id
Dec  6 03:13:42 np0005548731 nova_compute[232433]: 2025-12-06 08:13:42.005 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:13:42 np0005548731 ceph-mon[77458]: mon.compute-2@1(electing) e3 handle_auth_request failed to assign global_id
Dec  6 03:13:42 np0005548731 ceph-mon[77458]: mon.compute-2@1(electing) e3 handle_auth_request failed to assign global_id
Dec  6 03:13:42 np0005548731 ceph-mon[77458]: mon.compute-2@1(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec  6 03:13:42 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:13:42 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:13:42 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:13:42.793 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:13:42 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:13:42 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:13:42 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:13:42.887 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:13:43 np0005548731 nova_compute[232433]: 2025-12-06 08:13:43.077 232437 DEBUG nova.network.neutron [-] [instance: f13f96d6-bf80-493f-8c88-883a6da35105] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 03:13:43 np0005548731 nova_compute[232433]: 2025-12-06 08:13:43.118 232437 INFO nova.compute.manager [-] [instance: f13f96d6-bf80-493f-8c88-883a6da35105] Took 3.20 seconds to deallocate network for instance.#033[00m
Dec  6 03:13:43 np0005548731 nova_compute[232433]: 2025-12-06 08:13:43.167 232437 DEBUG oslo_concurrency.lockutils [None req-54ae528b-5451-4480-b2e5-54482cd29921 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:13:43 np0005548731 nova_compute[232433]: 2025-12-06 08:13:43.168 232437 DEBUG oslo_concurrency.lockutils [None req-54ae528b-5451-4480-b2e5-54482cd29921 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:13:43 np0005548731 nova_compute[232433]: 2025-12-06 08:13:43.206 232437 DEBUG nova.compute.manager [req-678ab833-93f1-45ab-882b-2339c10e33e5 req-6744325f-099e-4ac4-8fbe-781446de41b7 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f13f96d6-bf80-493f-8c88-883a6da35105] Received event network-vif-deleted-a69d734a-1545-4982-9f2c-ed585fa6d7d1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 03:13:43 np0005548731 nova_compute[232433]: 2025-12-06 08:13:43.239 232437 DEBUG oslo_concurrency.processutils [None req-54ae528b-5451-4480-b2e5-54482cd29921 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 03:13:43 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 03:13:43 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/138621338' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 03:13:43 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec  6 03:13:43 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 03:13:43 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3746927814' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 03:13:43 np0005548731 nova_compute[232433]: 2025-12-06 08:13:43.673 232437 DEBUG oslo_concurrency.processutils [None req-54ae528b-5451-4480-b2e5-54482cd29921 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.434s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 03:13:43 np0005548731 nova_compute[232433]: 2025-12-06 08:13:43.679 232437 DEBUG nova.compute.provider_tree [None req-54ae528b-5451-4480-b2e5-54482cd29921 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Inventory has not changed in ProviderTree for provider: 6d00757a-082f-486d-ae84-869a2ba2e6e7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  6 03:13:43 np0005548731 ceph-mon[77458]: mon.compute-1 calling monitor election
Dec  6 03:13:43 np0005548731 ceph-mon[77458]: mon.compute-0 calling monitor election
Dec  6 03:13:43 np0005548731 ceph-mon[77458]: mon.compute-2 calling monitor election
Dec  6 03:13:43 np0005548731 ceph-mon[77458]: mon.compute-0 is new leader, mons compute-0,compute-2,compute-1 in quorum (ranks 0,1,2)
Dec  6 03:13:43 np0005548731 ceph-mon[77458]: Health check cleared: MON_DOWN (was: 1/3 mons down, quorum compute-0,compute-2)
Dec  6 03:13:43 np0005548731 ceph-mon[77458]: Cluster is now healthy
Dec  6 03:13:43 np0005548731 ceph-mon[77458]: overall HEALTH_OK
Dec  6 03:13:43 np0005548731 nova_compute[232433]: 2025-12-06 08:13:43.696 232437 DEBUG nova.scheduler.client.report [None req-54ae528b-5451-4480-b2e5-54482cd29921 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Inventory has not changed for provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  6 03:13:43 np0005548731 nova_compute[232433]: 2025-12-06 08:13:43.727 232437 DEBUG oslo_concurrency.lockutils [None req-54ae528b-5451-4480-b2e5-54482cd29921 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.560s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:13:43 np0005548731 nova_compute[232433]: 2025-12-06 08:13:43.753 232437 INFO nova.scheduler.client.report [None req-54ae528b-5451-4480-b2e5-54482cd29921 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Deleted allocations for instance f13f96d6-bf80-493f-8c88-883a6da35105#033[00m
Dec  6 03:13:43 np0005548731 nova_compute[232433]: 2025-12-06 08:13:43.817 232437 DEBUG oslo_concurrency.lockutils [None req-54ae528b-5451-4480-b2e5-54482cd29921 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Lock "f13f96d6-bf80-493f-8c88-883a6da35105" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 23.084s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:13:44 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:13:44 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:13:44 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:13:44.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:13:44 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:13:44 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:13:44 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:13:44.891 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:13:45 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:13:46 np0005548731 nova_compute[232433]: 2025-12-06 08:13:46.670 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:13:46 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:13:46 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:13:46 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:13:46.797 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:13:46 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:13:46 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:13:46 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:13:46.894 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:13:47 np0005548731 nova_compute[232433]: 2025-12-06 08:13:47.006 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:13:47 np0005548731 nova_compute[232433]: 2025-12-06 08:13:47.154 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:13:47 np0005548731 nova_compute[232433]: 2025-12-06 08:13:47.154 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  6 03:13:47 np0005548731 nova_compute[232433]: 2025-12-06 08:13:47.155 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  6 03:13:47 np0005548731 nova_compute[232433]: 2025-12-06 08:13:47.235 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Dec  6 03:13:48 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:13:48 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:13:48 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:13:48.799 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:13:48 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:13:48 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:13:48 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:13:48.897 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:13:49 np0005548731 nova_compute[232433]: 2025-12-06 08:13:49.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:13:50 np0005548731 nova_compute[232433]: 2025-12-06 08:13:50.100 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:13:50 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:13:50 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:13:50 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:13:50 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:13:50.801 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:13:50 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:13:50 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 03:13:50 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:13:50.900 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 03:13:51 np0005548731 nova_compute[232433]: 2025-12-06 08:13:51.671 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:13:52 np0005548731 nova_compute[232433]: 2025-12-06 08:13:52.009 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:13:52 np0005548731 nova_compute[232433]: 2025-12-06 08:13:52.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:13:52 np0005548731 nova_compute[232433]: 2025-12-06 08:13:52.105 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:13:52 np0005548731 nova_compute[232433]: 2025-12-06 08:13:52.105 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  6 03:13:52 np0005548731 nova_compute[232433]: 2025-12-06 08:13:52.105 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:13:52 np0005548731 nova_compute[232433]: 2025-12-06 08:13:52.200 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:13:52 np0005548731 nova_compute[232433]: 2025-12-06 08:13:52.200 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:13:52 np0005548731 nova_compute[232433]: 2025-12-06 08:13:52.201 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:13:52 np0005548731 nova_compute[232433]: 2025-12-06 08:13:52.201 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  6 03:13:52 np0005548731 nova_compute[232433]: 2025-12-06 08:13:52.201 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 03:13:52 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 03:13:52 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2634793710' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 03:13:52 np0005548731 nova_compute[232433]: 2025-12-06 08:13:52.622 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.420s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 03:13:52 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:13:52 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:13:52 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:13:52.803 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:13:52 np0005548731 nova_compute[232433]: 2025-12-06 08:13:52.804 232437 WARNING nova.virt.libvirt.driver [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  6 03:13:52 np0005548731 nova_compute[232433]: 2025-12-06 08:13:52.806 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4156MB free_disk=20.89736557006836GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, 
"label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  6 03:13:52 np0005548731 nova_compute[232433]: 2025-12-06 08:13:52.806 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:13:52 np0005548731 nova_compute[232433]: 2025-12-06 08:13:52.806 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:13:52 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:13:52 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:13:52 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:13:52.904 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:13:53 np0005548731 nova_compute[232433]: 2025-12-06 08:13:53.513 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  6 03:13:53 np0005548731 nova_compute[232433]: 2025-12-06 08:13:53.513 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  6 03:13:53 np0005548731 nova_compute[232433]: 2025-12-06 08:13:53.534 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 03:13:53 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 03:13:53 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2605282351' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 03:13:54 np0005548731 nova_compute[232433]: 2025-12-06 08:13:54.004 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.469s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 03:13:54 np0005548731 nova_compute[232433]: 2025-12-06 08:13:54.011 232437 DEBUG nova.compute.provider_tree [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Inventory has not changed in ProviderTree for provider: 6d00757a-082f-486d-ae84-869a2ba2e6e7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  6 03:13:54 np0005548731 nova_compute[232433]: 2025-12-06 08:13:54.042 232437 DEBUG nova.scheduler.client.report [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Inventory has not changed for provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  6 03:13:54 np0005548731 nova_compute[232433]: 2025-12-06 08:13:54.074 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  6 03:13:54 np0005548731 nova_compute[232433]: 2025-12-06 08:13:54.074 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.268s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:13:54 np0005548731 nova_compute[232433]: 2025-12-06 08:13:54.629 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:13:54 np0005548731 nova_compute[232433]: 2025-12-06 08:13:54.779 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:13:54 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:13:54 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:13:54 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:13:54.806 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:13:54 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:13:54 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:13:54 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:13:54.907 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:13:55 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:13:56 np0005548731 nova_compute[232433]: 2025-12-06 08:13:56.074 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:13:56 np0005548731 nova_compute[232433]: 2025-12-06 08:13:56.103 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:13:56 np0005548731 nova_compute[232433]: 2025-12-06 08:13:56.674 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:13:56 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:13:56 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:13:56 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:13:56.808 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:13:56 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:13:56 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:13:56 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:13:56.910 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:13:57 np0005548731 nova_compute[232433]: 2025-12-06 08:13:57.011 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:13:58 np0005548731 nova_compute[232433]: 2025-12-06 08:13:58.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:13:58 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:13:58 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:13:58 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:13:58.810 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:13:58 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:13:58 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 03:13:58 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:13:58.913 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 03:14:00 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:14:00 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:14:00 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:14:00 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:14:00.812 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:14:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:14:00.917 143965 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:14:00 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:14:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:14:00.917 143965 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:14:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:14:00.917 143965 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:14:00 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:14:00 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:14:00.916 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:14:01 np0005548731 nova_compute[232433]: 2025-12-06 08:14:01.718 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:14:02 np0005548731 nova_compute[232433]: 2025-12-06 08:14:02.012 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:14:02 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:14:02.419 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=93, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:ec:b3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '32:72:e7:89:e0:7d'}, ipsec=False) old=SB_Global(nb_cfg=92) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 03:14:02 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:14:02.419 143965 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec  6 03:14:02 np0005548731 nova_compute[232433]: 2025-12-06 08:14:02.420 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:14:02 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:14:02 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:14:02 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:14:02.814 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:14:02 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:14:02 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:14:02 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:14:02.919 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:14:04 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:14:04 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:14:04 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:14:04.816 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:14:04 np0005548731 podman[334334]: 2025-12-06 08:14:04.912060965 +0000 UTC m=+0.065643470 container health_status 26c2ce9bdb83c6db5ccad7f5b2a7cb4e9354ab0235ad2f942ea42c8feda2142b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0)
Dec  6 03:14:04 np0005548731 podman[334336]: 2025-12-06 08:14:04.920026248 +0000 UTC m=+0.066497430 container health_status ae4fd08cdec31d6ae2a6b91ffff7deb6ca5a8375cb52ee4b685cc6f25796824e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=multipathd, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Dec  6 03:14:04 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:14:04 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:14:04 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:14:04.923 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:14:04 np0005548731 podman[334335]: 2025-12-06 08:14:04.943664424 +0000 UTC m=+0.094834170 container health_status a118c3becf56cfbcae208065771ebf10a65162bb7b96389971293e534823fb78 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  6 03:14:05 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:14:06 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:14:06.421 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=9f96b960-b4f2-40bd-ae99-08121f5e8b78, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '93'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 03:14:06 np0005548731 nova_compute[232433]: 2025-12-06 08:14:06.730 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:14:06 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:14:06 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:14:06 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:14:06.819 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:14:06 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:14:06 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:14:06 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:14:06.927 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:14:07 np0005548731 nova_compute[232433]: 2025-12-06 08:14:07.014 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:14:08 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:14:08 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:14:08 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:14:08.821 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:14:08 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:14:08 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:14:08 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:14:08.931 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:14:09 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  6 03:14:09 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3462730774' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  6 03:14:09 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  6 03:14:09 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3462730774' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  6 03:14:10 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:14:10 np0005548731 nova_compute[232433]: 2025-12-06 08:14:10.384 232437 DEBUG oslo_concurrency.lockutils [None req-2e39c80e-9243-4c98-9201-0246ae5be929 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Acquiring lock "b2ee70f1-6a92-4de1-a8d3-63f69565e29e" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:14:10 np0005548731 nova_compute[232433]: 2025-12-06 08:14:10.385 232437 DEBUG oslo_concurrency.lockutils [None req-2e39c80e-9243-4c98-9201-0246ae5be929 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Lock "b2ee70f1-6a92-4de1-a8d3-63f69565e29e" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:14:10 np0005548731 nova_compute[232433]: 2025-12-06 08:14:10.502 232437 DEBUG nova.compute.manager [None req-2e39c80e-9243-4c98-9201-0246ae5be929 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: b2ee70f1-6a92-4de1-a8d3-63f69565e29e] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Dec  6 03:14:10 np0005548731 nova_compute[232433]: 2025-12-06 08:14:10.606 232437 DEBUG oslo_concurrency.lockutils [None req-2e39c80e-9243-4c98-9201-0246ae5be929 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:14:10 np0005548731 nova_compute[232433]: 2025-12-06 08:14:10.607 232437 DEBUG oslo_concurrency.lockutils [None req-2e39c80e-9243-4c98-9201-0246ae5be929 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:14:10 np0005548731 nova_compute[232433]: 2025-12-06 08:14:10.614 232437 DEBUG nova.virt.hardware [None req-2e39c80e-9243-4c98-9201-0246ae5be929 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Dec  6 03:14:10 np0005548731 nova_compute[232433]: 2025-12-06 08:14:10.614 232437 INFO nova.compute.claims [None req-2e39c80e-9243-4c98-9201-0246ae5be929 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: b2ee70f1-6a92-4de1-a8d3-63f69565e29e] Claim successful on node compute-2.ctlplane.example.com#033[00m
Dec  6 03:14:10 np0005548731 nova_compute[232433]: 2025-12-06 08:14:10.746 232437 DEBUG oslo_concurrency.processutils [None req-2e39c80e-9243-4c98-9201-0246ae5be929 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 03:14:10 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:14:10 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:14:10 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:14:10.824 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:14:10 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:14:10 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:14:10 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:14:10.935 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:14:11 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 03:14:11 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/607142524' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 03:14:11 np0005548731 nova_compute[232433]: 2025-12-06 08:14:11.169 232437 DEBUG oslo_concurrency.processutils [None req-2e39c80e-9243-4c98-9201-0246ae5be929 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.423s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 03:14:11 np0005548731 nova_compute[232433]: 2025-12-06 08:14:11.175 232437 DEBUG nova.compute.provider_tree [None req-2e39c80e-9243-4c98-9201-0246ae5be929 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Inventory has not changed in ProviderTree for provider: 6d00757a-082f-486d-ae84-869a2ba2e6e7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  6 03:14:11 np0005548731 nova_compute[232433]: 2025-12-06 08:14:11.276 232437 DEBUG nova.scheduler.client.report [None req-2e39c80e-9243-4c98-9201-0246ae5be929 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Inventory has not changed for provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  6 03:14:11 np0005548731 nova_compute[232433]: 2025-12-06 08:14:11.317 232437 DEBUG oslo_concurrency.lockutils [None req-2e39c80e-9243-4c98-9201-0246ae5be929 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.710s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:14:11 np0005548731 nova_compute[232433]: 2025-12-06 08:14:11.318 232437 DEBUG nova.compute.manager [None req-2e39c80e-9243-4c98-9201-0246ae5be929 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: b2ee70f1-6a92-4de1-a8d3-63f69565e29e] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Dec  6 03:14:11 np0005548731 nova_compute[232433]: 2025-12-06 08:14:11.383 232437 DEBUG nova.compute.manager [None req-2e39c80e-9243-4c98-9201-0246ae5be929 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: b2ee70f1-6a92-4de1-a8d3-63f69565e29e] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Dec  6 03:14:11 np0005548731 nova_compute[232433]: 2025-12-06 08:14:11.384 232437 DEBUG nova.network.neutron [None req-2e39c80e-9243-4c98-9201-0246ae5be929 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: b2ee70f1-6a92-4de1-a8d3-63f69565e29e] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Dec  6 03:14:11 np0005548731 nova_compute[232433]: 2025-12-06 08:14:11.405 232437 INFO nova.virt.libvirt.driver [None req-2e39c80e-9243-4c98-9201-0246ae5be929 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: b2ee70f1-6a92-4de1-a8d3-63f69565e29e] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Dec  6 03:14:11 np0005548731 nova_compute[232433]: 2025-12-06 08:14:11.422 232437 DEBUG nova.compute.manager [None req-2e39c80e-9243-4c98-9201-0246ae5be929 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: b2ee70f1-6a92-4de1-a8d3-63f69565e29e] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Dec  6 03:14:11 np0005548731 nova_compute[232433]: 2025-12-06 08:14:11.528 232437 DEBUG nova.compute.manager [None req-2e39c80e-9243-4c98-9201-0246ae5be929 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: b2ee70f1-6a92-4de1-a8d3-63f69565e29e] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Dec  6 03:14:11 np0005548731 nova_compute[232433]: 2025-12-06 08:14:11.529 232437 DEBUG nova.virt.libvirt.driver [None req-2e39c80e-9243-4c98-9201-0246ae5be929 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: b2ee70f1-6a92-4de1-a8d3-63f69565e29e] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Dec  6 03:14:11 np0005548731 nova_compute[232433]: 2025-12-06 08:14:11.530 232437 INFO nova.virt.libvirt.driver [None req-2e39c80e-9243-4c98-9201-0246ae5be929 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: b2ee70f1-6a92-4de1-a8d3-63f69565e29e] Creating image(s)#033[00m
Dec  6 03:14:11 np0005548731 nova_compute[232433]: 2025-12-06 08:14:11.565 232437 DEBUG nova.storage.rbd_utils [None req-2e39c80e-9243-4c98-9201-0246ae5be929 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] rbd image b2ee70f1-6a92-4de1-a8d3-63f69565e29e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 03:14:11 np0005548731 nova_compute[232433]: 2025-12-06 08:14:11.593 232437 DEBUG nova.storage.rbd_utils [None req-2e39c80e-9243-4c98-9201-0246ae5be929 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] rbd image b2ee70f1-6a92-4de1-a8d3-63f69565e29e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 03:14:11 np0005548731 nova_compute[232433]: 2025-12-06 08:14:11.632 232437 DEBUG nova.storage.rbd_utils [None req-2e39c80e-9243-4c98-9201-0246ae5be929 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] rbd image b2ee70f1-6a92-4de1-a8d3-63f69565e29e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 03:14:11 np0005548731 nova_compute[232433]: 2025-12-06 08:14:11.637 232437 DEBUG oslo_concurrency.processutils [None req-2e39c80e-9243-4c98-9201-0246ae5be929 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 03:14:11 np0005548731 nova_compute[232433]: 2025-12-06 08:14:11.720 232437 DEBUG oslo_concurrency.processutils [None req-2e39c80e-9243-4c98-9201-0246ae5be929 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json" returned: 0 in 0.082s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 03:14:11 np0005548731 nova_compute[232433]: 2025-12-06 08:14:11.721 232437 DEBUG oslo_concurrency.lockutils [None req-2e39c80e-9243-4c98-9201-0246ae5be929 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Acquiring lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:14:11 np0005548731 nova_compute[232433]: 2025-12-06 08:14:11.723 232437 DEBUG oslo_concurrency.lockutils [None req-2e39c80e-9243-4c98-9201-0246ae5be929 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:14:11 np0005548731 nova_compute[232433]: 2025-12-06 08:14:11.723 232437 DEBUG oslo_concurrency.lockutils [None req-2e39c80e-9243-4c98-9201-0246ae5be929 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:14:11 np0005548731 nova_compute[232433]: 2025-12-06 08:14:11.761 232437 DEBUG nova.storage.rbd_utils [None req-2e39c80e-9243-4c98-9201-0246ae5be929 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] rbd image b2ee70f1-6a92-4de1-a8d3-63f69565e29e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 03:14:11 np0005548731 nova_compute[232433]: 2025-12-06 08:14:11.765 232437 DEBUG oslo_concurrency.processutils [None req-2e39c80e-9243-4c98-9201-0246ae5be929 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef b2ee70f1-6a92-4de1-a8d3-63f69565e29e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 03:14:11 np0005548731 nova_compute[232433]: 2025-12-06 08:14:11.802 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:14:11 np0005548731 nova_compute[232433]: 2025-12-06 08:14:11.988 232437 DEBUG nova.policy [None req-2e39c80e-9243-4c98-9201-0246ae5be929 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '2ed2d17026504d70b893923a85cece4d', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'fd8e24e430c64364ace789d88a68ba5f', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Dec  6 03:14:12 np0005548731 nova_compute[232433]: 2025-12-06 08:14:12.015 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:14:12 np0005548731 nova_compute[232433]: 2025-12-06 08:14:12.099 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:14:12 np0005548731 nova_compute[232433]: 2025-12-06 08:14:12.256 232437 DEBUG oslo_concurrency.processutils [None req-2e39c80e-9243-4c98-9201-0246ae5be929 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef b2ee70f1-6a92-4de1-a8d3-63f69565e29e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.490s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 03:14:12 np0005548731 nova_compute[232433]: 2025-12-06 08:14:12.348 232437 DEBUG nova.storage.rbd_utils [None req-2e39c80e-9243-4c98-9201-0246ae5be929 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] resizing rbd image b2ee70f1-6a92-4de1-a8d3-63f69565e29e_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Dec  6 03:14:12 np0005548731 nova_compute[232433]: 2025-12-06 08:14:12.479 232437 DEBUG nova.objects.instance [None req-2e39c80e-9243-4c98-9201-0246ae5be929 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Lazy-loading 'migration_context' on Instance uuid b2ee70f1-6a92-4de1-a8d3-63f69565e29e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 03:14:12 np0005548731 nova_compute[232433]: 2025-12-06 08:14:12.495 232437 DEBUG nova.virt.libvirt.driver [None req-2e39c80e-9243-4c98-9201-0246ae5be929 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: b2ee70f1-6a92-4de1-a8d3-63f69565e29e] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Dec  6 03:14:12 np0005548731 nova_compute[232433]: 2025-12-06 08:14:12.496 232437 DEBUG nova.virt.libvirt.driver [None req-2e39c80e-9243-4c98-9201-0246ae5be929 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: b2ee70f1-6a92-4de1-a8d3-63f69565e29e] Ensure instance console log exists: /var/lib/nova/instances/b2ee70f1-6a92-4de1-a8d3-63f69565e29e/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec  6 03:14:12 np0005548731 nova_compute[232433]: 2025-12-06 08:14:12.497 232437 DEBUG oslo_concurrency.lockutils [None req-2e39c80e-9243-4c98-9201-0246ae5be929 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:14:12 np0005548731 nova_compute[232433]: 2025-12-06 08:14:12.497 232437 DEBUG oslo_concurrency.lockutils [None req-2e39c80e-9243-4c98-9201-0246ae5be929 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:14:12 np0005548731 nova_compute[232433]: 2025-12-06 08:14:12.498 232437 DEBUG oslo_concurrency.lockutils [None req-2e39c80e-9243-4c98-9201-0246ae5be929 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:14:12 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:14:12 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:14:12 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:14:12.827 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:14:12 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:14:12 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:14:12 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:14:12.938 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:14:13 np0005548731 nova_compute[232433]: 2025-12-06 08:14:13.695 232437 DEBUG nova.network.neutron [None req-2e39c80e-9243-4c98-9201-0246ae5be929 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: b2ee70f1-6a92-4de1-a8d3-63f69565e29e] Successfully created port: 9c5705d0-f551-4726-a476-7a9060442f88 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Dec  6 03:14:14 np0005548731 nova_compute[232433]: 2025-12-06 08:14:14.596 232437 DEBUG nova.network.neutron [None req-2e39c80e-9243-4c98-9201-0246ae5be929 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: b2ee70f1-6a92-4de1-a8d3-63f69565e29e] Successfully updated port: 9c5705d0-f551-4726-a476-7a9060442f88 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Dec  6 03:14:14 np0005548731 nova_compute[232433]: 2025-12-06 08:14:14.612 232437 DEBUG oslo_concurrency.lockutils [None req-2e39c80e-9243-4c98-9201-0246ae5be929 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Acquiring lock "refresh_cache-b2ee70f1-6a92-4de1-a8d3-63f69565e29e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 03:14:14 np0005548731 nova_compute[232433]: 2025-12-06 08:14:14.613 232437 DEBUG oslo_concurrency.lockutils [None req-2e39c80e-9243-4c98-9201-0246ae5be929 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Acquired lock "refresh_cache-b2ee70f1-6a92-4de1-a8d3-63f69565e29e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 03:14:14 np0005548731 nova_compute[232433]: 2025-12-06 08:14:14.613 232437 DEBUG nova.network.neutron [None req-2e39c80e-9243-4c98-9201-0246ae5be929 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: b2ee70f1-6a92-4de1-a8d3-63f69565e29e] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec  6 03:14:14 np0005548731 nova_compute[232433]: 2025-12-06 08:14:14.671 232437 DEBUG nova.compute.manager [req-e7554600-e87f-483a-96f8-b7b20f344d9f req-600052e5-e93e-4da2-a83a-ced422e4d379 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: b2ee70f1-6a92-4de1-a8d3-63f69565e29e] Received event network-changed-9c5705d0-f551-4726-a476-7a9060442f88 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 03:14:14 np0005548731 nova_compute[232433]: 2025-12-06 08:14:14.672 232437 DEBUG nova.compute.manager [req-e7554600-e87f-483a-96f8-b7b20f344d9f req-600052e5-e93e-4da2-a83a-ced422e4d379 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: b2ee70f1-6a92-4de1-a8d3-63f69565e29e] Refreshing instance network info cache due to event network-changed-9c5705d0-f551-4726-a476-7a9060442f88. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  6 03:14:14 np0005548731 nova_compute[232433]: 2025-12-06 08:14:14.672 232437 DEBUG oslo_concurrency.lockutils [req-e7554600-e87f-483a-96f8-b7b20f344d9f req-600052e5-e93e-4da2-a83a-ced422e4d379 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-b2ee70f1-6a92-4de1-a8d3-63f69565e29e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 03:14:14 np0005548731 nova_compute[232433]: 2025-12-06 08:14:14.758 232437 DEBUG nova.network.neutron [None req-2e39c80e-9243-4c98-9201-0246ae5be929 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: b2ee70f1-6a92-4de1-a8d3-63f69565e29e] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec  6 03:14:14 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:14:14 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:14:14 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:14:14.829 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:14:14 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:14:14 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:14:14 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:14:14.942 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:14:15 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:14:16 np0005548731 nova_compute[232433]: 2025-12-06 08:14:16.733 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:14:16 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:14:16 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:14:16 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:14:16.830 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:14:16 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:14:16 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:14:16 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:14:16.945 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:14:17 np0005548731 nova_compute[232433]: 2025-12-06 08:14:17.004 232437 DEBUG nova.network.neutron [None req-2e39c80e-9243-4c98-9201-0246ae5be929 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: b2ee70f1-6a92-4de1-a8d3-63f69565e29e] Updating instance_info_cache with network_info: [{"id": "9c5705d0-f551-4726-a476-7a9060442f88", "address": "fa:16:3e:d2:bf:3b", "network": {"id": "a5a2ec32-8155-4bca-8340-a3a1ef336f77", "bridge": "br-int", "label": "tempest-network-smoke--2053374407", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fd8e24e430c64364ace789d88a68ba5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9c5705d0-f5", "ovs_interfaceid": "9c5705d0-f551-4726-a476-7a9060442f88", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 03:14:17 np0005548731 nova_compute[232433]: 2025-12-06 08:14:17.017 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:14:17 np0005548731 nova_compute[232433]: 2025-12-06 08:14:17.029 232437 DEBUG oslo_concurrency.lockutils [None req-2e39c80e-9243-4c98-9201-0246ae5be929 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Releasing lock "refresh_cache-b2ee70f1-6a92-4de1-a8d3-63f69565e29e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 03:14:17 np0005548731 nova_compute[232433]: 2025-12-06 08:14:17.030 232437 DEBUG nova.compute.manager [None req-2e39c80e-9243-4c98-9201-0246ae5be929 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: b2ee70f1-6a92-4de1-a8d3-63f69565e29e] Instance network_info: |[{"id": "9c5705d0-f551-4726-a476-7a9060442f88", "address": "fa:16:3e:d2:bf:3b", "network": {"id": "a5a2ec32-8155-4bca-8340-a3a1ef336f77", "bridge": "br-int", "label": "tempest-network-smoke--2053374407", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fd8e24e430c64364ace789d88a68ba5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9c5705d0-f5", "ovs_interfaceid": "9c5705d0-f551-4726-a476-7a9060442f88", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Dec  6 03:14:17 np0005548731 nova_compute[232433]: 2025-12-06 08:14:17.030 232437 DEBUG oslo_concurrency.lockutils [req-e7554600-e87f-483a-96f8-b7b20f344d9f req-600052e5-e93e-4da2-a83a-ced422e4d379 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-b2ee70f1-6a92-4de1-a8d3-63f69565e29e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 03:14:17 np0005548731 nova_compute[232433]: 2025-12-06 08:14:17.031 232437 DEBUG nova.network.neutron [req-e7554600-e87f-483a-96f8-b7b20f344d9f req-600052e5-e93e-4da2-a83a-ced422e4d379 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: b2ee70f1-6a92-4de1-a8d3-63f69565e29e] Refreshing network info cache for port 9c5705d0-f551-4726-a476-7a9060442f88 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  6 03:14:17 np0005548731 nova_compute[232433]: 2025-12-06 08:14:17.034 232437 DEBUG nova.virt.libvirt.driver [None req-2e39c80e-9243-4c98-9201-0246ae5be929 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: b2ee70f1-6a92-4de1-a8d3-63f69565e29e] Start _get_guest_xml network_info=[{"id": "9c5705d0-f551-4726-a476-7a9060442f88", "address": "fa:16:3e:d2:bf:3b", "network": {"id": "a5a2ec32-8155-4bca-8340-a3a1ef336f77", "bridge": "br-int", "label": "tempest-network-smoke--2053374407", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fd8e24e430c64364ace789d88a68ba5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9c5705d0-f5", "ovs_interfaceid": "9c5705d0-f551-4726-a476-7a9060442f88", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'device_type': 'disk', 'encryption_secret_uuid': None, 'disk_bus': 'virtio', 'boot_index': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'encryption_options': None, 'size': 0, 'encrypted': False, 'image_id': '6efab05d-c7cf-4770-a5c3-c806a2739063'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec  6 03:14:17 np0005548731 nova_compute[232433]: 2025-12-06 08:14:17.038 232437 WARNING nova.virt.libvirt.driver [None req-2e39c80e-9243-4c98-9201-0246ae5be929 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  6 03:14:17 np0005548731 nova_compute[232433]: 2025-12-06 08:14:17.043 232437 DEBUG nova.virt.libvirt.host [None req-2e39c80e-9243-4c98-9201-0246ae5be929 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec  6 03:14:17 np0005548731 nova_compute[232433]: 2025-12-06 08:14:17.044 232437 DEBUG nova.virt.libvirt.host [None req-2e39c80e-9243-4c98-9201-0246ae5be929 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec  6 03:14:17 np0005548731 nova_compute[232433]: 2025-12-06 08:14:17.047 232437 DEBUG nova.virt.libvirt.host [None req-2e39c80e-9243-4c98-9201-0246ae5be929 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec  6 03:14:17 np0005548731 nova_compute[232433]: 2025-12-06 08:14:17.048 232437 DEBUG nova.virt.libvirt.host [None req-2e39c80e-9243-4c98-9201-0246ae5be929 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec  6 03:14:17 np0005548731 nova_compute[232433]: 2025-12-06 08:14:17.049 232437 DEBUG nova.virt.libvirt.driver [None req-2e39c80e-9243-4c98-9201-0246ae5be929 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec  6 03:14:17 np0005548731 nova_compute[232433]: 2025-12-06 08:14:17.050 232437 DEBUG nova.virt.hardware [None req-2e39c80e-9243-4c98-9201-0246ae5be929 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-06T06:56:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='25848a18-11d9-4f11-80b5-5d005675c76d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec  6 03:14:17 np0005548731 nova_compute[232433]: 2025-12-06 08:14:17.051 232437 DEBUG nova.virt.hardware [None req-2e39c80e-9243-4c98-9201-0246ae5be929 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec  6 03:14:17 np0005548731 nova_compute[232433]: 2025-12-06 08:14:17.051 232437 DEBUG nova.virt.hardware [None req-2e39c80e-9243-4c98-9201-0246ae5be929 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec  6 03:14:17 np0005548731 nova_compute[232433]: 2025-12-06 08:14:17.052 232437 DEBUG nova.virt.hardware [None req-2e39c80e-9243-4c98-9201-0246ae5be929 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec  6 03:14:17 np0005548731 nova_compute[232433]: 2025-12-06 08:14:17.052 232437 DEBUG nova.virt.hardware [None req-2e39c80e-9243-4c98-9201-0246ae5be929 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec  6 03:14:17 np0005548731 nova_compute[232433]: 2025-12-06 08:14:17.052 232437 DEBUG nova.virt.hardware [None req-2e39c80e-9243-4c98-9201-0246ae5be929 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec  6 03:14:17 np0005548731 nova_compute[232433]: 2025-12-06 08:14:17.052 232437 DEBUG nova.virt.hardware [None req-2e39c80e-9243-4c98-9201-0246ae5be929 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec  6 03:14:17 np0005548731 nova_compute[232433]: 2025-12-06 08:14:17.053 232437 DEBUG nova.virt.hardware [None req-2e39c80e-9243-4c98-9201-0246ae5be929 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec  6 03:14:17 np0005548731 nova_compute[232433]: 2025-12-06 08:14:17.053 232437 DEBUG nova.virt.hardware [None req-2e39c80e-9243-4c98-9201-0246ae5be929 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec  6 03:14:17 np0005548731 nova_compute[232433]: 2025-12-06 08:14:17.053 232437 DEBUG nova.virt.hardware [None req-2e39c80e-9243-4c98-9201-0246ae5be929 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec  6 03:14:17 np0005548731 nova_compute[232433]: 2025-12-06 08:14:17.054 232437 DEBUG nova.virt.hardware [None req-2e39c80e-9243-4c98-9201-0246ae5be929 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec  6 03:14:17 np0005548731 nova_compute[232433]: 2025-12-06 08:14:17.057 232437 DEBUG oslo_concurrency.processutils [None req-2e39c80e-9243-4c98-9201-0246ae5be929 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 03:14:17 np0005548731 nova_compute[232433]: 2025-12-06 08:14:17.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:14:17 np0005548731 nova_compute[232433]: 2025-12-06 08:14:17.105 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Dec  6 03:14:17 np0005548731 nova_compute[232433]: 2025-12-06 08:14:17.133 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Dec  6 03:14:17 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  6 03:14:17 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3375879178' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  6 03:14:17 np0005548731 nova_compute[232433]: 2025-12-06 08:14:17.534 232437 DEBUG oslo_concurrency.processutils [None req-2e39c80e-9243-4c98-9201-0246ae5be929 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.477s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 03:14:17 np0005548731 nova_compute[232433]: 2025-12-06 08:14:17.562 232437 DEBUG nova.storage.rbd_utils [None req-2e39c80e-9243-4c98-9201-0246ae5be929 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] rbd image b2ee70f1-6a92-4de1-a8d3-63f69565e29e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 03:14:17 np0005548731 nova_compute[232433]: 2025-12-06 08:14:17.566 232437 DEBUG oslo_concurrency.processutils [None req-2e39c80e-9243-4c98-9201-0246ae5be929 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 03:14:17 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  6 03:14:17 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2525064073' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  6 03:14:17 np0005548731 nova_compute[232433]: 2025-12-06 08:14:17.995 232437 DEBUG oslo_concurrency.processutils [None req-2e39c80e-9243-4c98-9201-0246ae5be929 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.428s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 03:14:17 np0005548731 nova_compute[232433]: 2025-12-06 08:14:17.996 232437 DEBUG nova.virt.libvirt.vif [None req-2e39c80e-9243-4c98-9201-0246ae5be929 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T08:14:09Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1967673920',display_name='tempest-TestNetworkAdvancedServerOps-server-1967673920',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1967673920',id=202,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBE7K+0TQYdf6/UDgfdSEP1ZSscXf10O1YcJvJ4OnffpKNQOT8fnXRuOBwRzeckYQj5cV9UoztH3ENvALVUX8qfmg5l022XYIuytOwU07Yk+vPdHH//qvTGSTmErglU1U5g==',key_name='tempest-TestNetworkAdvancedServerOps-1235494473',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='fd8e24e430c64364ace789d88a68ba5f',ramdisk_id='',reservation_id='r-kygr1lne',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-1171852383',owner_user_name='tempest-TestNetworkAdvancedServerOps-1171852383-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T08:14:11Z,user_data=None,user_id='2ed2d17026504d70b893923a85cece4d',uuid=b2ee70f1-6a92-4de1-a8d3-63f69565e29e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "9c5705d0-f551-4726-a476-7a9060442f88", "address": "fa:16:3e:d2:bf:3b", "network": {"id": "a5a2ec32-8155-4bca-8340-a3a1ef336f77", "bridge": "br-int", "label": "tempest-network-smoke--2053374407", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": 
true}}], "meta": {"injected": false, "tenant_id": "fd8e24e430c64364ace789d88a68ba5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9c5705d0-f5", "ovs_interfaceid": "9c5705d0-f551-4726-a476-7a9060442f88", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec  6 03:14:17 np0005548731 nova_compute[232433]: 2025-12-06 08:14:17.997 232437 DEBUG nova.network.os_vif_util [None req-2e39c80e-9243-4c98-9201-0246ae5be929 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Converting VIF {"id": "9c5705d0-f551-4726-a476-7a9060442f88", "address": "fa:16:3e:d2:bf:3b", "network": {"id": "a5a2ec32-8155-4bca-8340-a3a1ef336f77", "bridge": "br-int", "label": "tempest-network-smoke--2053374407", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fd8e24e430c64364ace789d88a68ba5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9c5705d0-f5", "ovs_interfaceid": "9c5705d0-f551-4726-a476-7a9060442f88", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 03:14:17 np0005548731 nova_compute[232433]: 2025-12-06 08:14:17.998 232437 DEBUG nova.network.os_vif_util [None req-2e39c80e-9243-4c98-9201-0246ae5be929 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d2:bf:3b,bridge_name='br-int',has_traffic_filtering=True,id=9c5705d0-f551-4726-a476-7a9060442f88,network=Network(a5a2ec32-8155-4bca-8340-a3a1ef336f77),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9c5705d0-f5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 03:14:17 np0005548731 nova_compute[232433]: 2025-12-06 08:14:17.998 232437 DEBUG nova.objects.instance [None req-2e39c80e-9243-4c98-9201-0246ae5be929 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Lazy-loading 'pci_devices' on Instance uuid b2ee70f1-6a92-4de1-a8d3-63f69565e29e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 03:14:18 np0005548731 nova_compute[232433]: 2025-12-06 08:14:18.037 232437 DEBUG nova.virt.libvirt.driver [None req-2e39c80e-9243-4c98-9201-0246ae5be929 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: b2ee70f1-6a92-4de1-a8d3-63f69565e29e] End _get_guest_xml xml=<domain type="kvm">
Dec  6 03:14:18 np0005548731 nova_compute[232433]:  <uuid>b2ee70f1-6a92-4de1-a8d3-63f69565e29e</uuid>
Dec  6 03:14:18 np0005548731 nova_compute[232433]:  <name>instance-000000ca</name>
Dec  6 03:14:18 np0005548731 nova_compute[232433]:  <memory>131072</memory>
Dec  6 03:14:18 np0005548731 nova_compute[232433]:  <vcpu>1</vcpu>
Dec  6 03:14:18 np0005548731 nova_compute[232433]:  <metadata>
Dec  6 03:14:18 np0005548731 nova_compute[232433]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec  6 03:14:18 np0005548731 nova_compute[232433]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec  6 03:14:18 np0005548731 nova_compute[232433]:      <nova:name>tempest-TestNetworkAdvancedServerOps-server-1967673920</nova:name>
Dec  6 03:14:18 np0005548731 nova_compute[232433]:      <nova:creationTime>2025-12-06 08:14:17</nova:creationTime>
Dec  6 03:14:18 np0005548731 nova_compute[232433]:      <nova:flavor name="m1.nano">
Dec  6 03:14:18 np0005548731 nova_compute[232433]:        <nova:memory>128</nova:memory>
Dec  6 03:14:18 np0005548731 nova_compute[232433]:        <nova:disk>1</nova:disk>
Dec  6 03:14:18 np0005548731 nova_compute[232433]:        <nova:swap>0</nova:swap>
Dec  6 03:14:18 np0005548731 nova_compute[232433]:        <nova:ephemeral>0</nova:ephemeral>
Dec  6 03:14:18 np0005548731 nova_compute[232433]:        <nova:vcpus>1</nova:vcpus>
Dec  6 03:14:18 np0005548731 nova_compute[232433]:      </nova:flavor>
Dec  6 03:14:18 np0005548731 nova_compute[232433]:      <nova:owner>
Dec  6 03:14:18 np0005548731 nova_compute[232433]:        <nova:user uuid="2ed2d17026504d70b893923a85cece4d">tempest-TestNetworkAdvancedServerOps-1171852383-project-member</nova:user>
Dec  6 03:14:18 np0005548731 nova_compute[232433]:        <nova:project uuid="fd8e24e430c64364ace789d88a68ba5f">tempest-TestNetworkAdvancedServerOps-1171852383</nova:project>
Dec  6 03:14:18 np0005548731 nova_compute[232433]:      </nova:owner>
Dec  6 03:14:18 np0005548731 nova_compute[232433]:      <nova:root type="image" uuid="6efab05d-c7cf-4770-a5c3-c806a2739063"/>
Dec  6 03:14:18 np0005548731 nova_compute[232433]:      <nova:ports>
Dec  6 03:14:18 np0005548731 nova_compute[232433]:        <nova:port uuid="9c5705d0-f551-4726-a476-7a9060442f88">
Dec  6 03:14:18 np0005548731 nova_compute[232433]:          <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Dec  6 03:14:18 np0005548731 nova_compute[232433]:        </nova:port>
Dec  6 03:14:18 np0005548731 nova_compute[232433]:      </nova:ports>
Dec  6 03:14:18 np0005548731 nova_compute[232433]:    </nova:instance>
Dec  6 03:14:18 np0005548731 nova_compute[232433]:  </metadata>
Dec  6 03:14:18 np0005548731 nova_compute[232433]:  <sysinfo type="smbios">
Dec  6 03:14:18 np0005548731 nova_compute[232433]:    <system>
Dec  6 03:14:18 np0005548731 nova_compute[232433]:      <entry name="manufacturer">RDO</entry>
Dec  6 03:14:18 np0005548731 nova_compute[232433]:      <entry name="product">OpenStack Compute</entry>
Dec  6 03:14:18 np0005548731 nova_compute[232433]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec  6 03:14:18 np0005548731 nova_compute[232433]:      <entry name="serial">b2ee70f1-6a92-4de1-a8d3-63f69565e29e</entry>
Dec  6 03:14:18 np0005548731 nova_compute[232433]:      <entry name="uuid">b2ee70f1-6a92-4de1-a8d3-63f69565e29e</entry>
Dec  6 03:14:18 np0005548731 nova_compute[232433]:      <entry name="family">Virtual Machine</entry>
Dec  6 03:14:18 np0005548731 nova_compute[232433]:    </system>
Dec  6 03:14:18 np0005548731 nova_compute[232433]:  </sysinfo>
Dec  6 03:14:18 np0005548731 nova_compute[232433]:  <os>
Dec  6 03:14:18 np0005548731 nova_compute[232433]:    <type arch="x86_64" machine="q35">hvm</type>
Dec  6 03:14:18 np0005548731 nova_compute[232433]:    <boot dev="hd"/>
Dec  6 03:14:18 np0005548731 nova_compute[232433]:    <smbios mode="sysinfo"/>
Dec  6 03:14:18 np0005548731 nova_compute[232433]:  </os>
Dec  6 03:14:18 np0005548731 nova_compute[232433]:  <features>
Dec  6 03:14:18 np0005548731 nova_compute[232433]:    <acpi/>
Dec  6 03:14:18 np0005548731 nova_compute[232433]:    <apic/>
Dec  6 03:14:18 np0005548731 nova_compute[232433]:    <vmcoreinfo/>
Dec  6 03:14:18 np0005548731 nova_compute[232433]:  </features>
Dec  6 03:14:18 np0005548731 nova_compute[232433]:  <clock offset="utc">
Dec  6 03:14:18 np0005548731 nova_compute[232433]:    <timer name="pit" tickpolicy="delay"/>
Dec  6 03:14:18 np0005548731 nova_compute[232433]:    <timer name="rtc" tickpolicy="catchup"/>
Dec  6 03:14:18 np0005548731 nova_compute[232433]:    <timer name="hpet" present="no"/>
Dec  6 03:14:18 np0005548731 nova_compute[232433]:  </clock>
Dec  6 03:14:18 np0005548731 nova_compute[232433]:  <cpu mode="custom" match="exact">
Dec  6 03:14:18 np0005548731 nova_compute[232433]:    <model>Nehalem</model>
Dec  6 03:14:18 np0005548731 nova_compute[232433]:    <topology sockets="1" cores="1" threads="1"/>
Dec  6 03:14:18 np0005548731 nova_compute[232433]:  </cpu>
Dec  6 03:14:18 np0005548731 nova_compute[232433]:  <devices>
Dec  6 03:14:18 np0005548731 nova_compute[232433]:    <disk type="network" device="disk">
Dec  6 03:14:18 np0005548731 nova_compute[232433]:      <driver type="raw" cache="none"/>
Dec  6 03:14:18 np0005548731 nova_compute[232433]:      <source protocol="rbd" name="vms/b2ee70f1-6a92-4de1-a8d3-63f69565e29e_disk">
Dec  6 03:14:18 np0005548731 nova_compute[232433]:        <host name="192.168.122.100" port="6789"/>
Dec  6 03:14:18 np0005548731 nova_compute[232433]:        <host name="192.168.122.102" port="6789"/>
Dec  6 03:14:18 np0005548731 nova_compute[232433]:        <host name="192.168.122.101" port="6789"/>
Dec  6 03:14:18 np0005548731 nova_compute[232433]:      </source>
Dec  6 03:14:18 np0005548731 nova_compute[232433]:      <auth username="openstack">
Dec  6 03:14:18 np0005548731 nova_compute[232433]:        <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec  6 03:14:18 np0005548731 nova_compute[232433]:      </auth>
Dec  6 03:14:18 np0005548731 nova_compute[232433]:      <target dev="vda" bus="virtio"/>
Dec  6 03:14:18 np0005548731 nova_compute[232433]:    </disk>
Dec  6 03:14:18 np0005548731 nova_compute[232433]:    <disk type="network" device="cdrom">
Dec  6 03:14:18 np0005548731 nova_compute[232433]:      <driver type="raw" cache="none"/>
Dec  6 03:14:18 np0005548731 nova_compute[232433]:      <source protocol="rbd" name="vms/b2ee70f1-6a92-4de1-a8d3-63f69565e29e_disk.config">
Dec  6 03:14:18 np0005548731 nova_compute[232433]:        <host name="192.168.122.100" port="6789"/>
Dec  6 03:14:18 np0005548731 nova_compute[232433]:        <host name="192.168.122.102" port="6789"/>
Dec  6 03:14:18 np0005548731 nova_compute[232433]:        <host name="192.168.122.101" port="6789"/>
Dec  6 03:14:18 np0005548731 nova_compute[232433]:      </source>
Dec  6 03:14:18 np0005548731 nova_compute[232433]:      <auth username="openstack">
Dec  6 03:14:18 np0005548731 nova_compute[232433]:        <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec  6 03:14:18 np0005548731 nova_compute[232433]:      </auth>
Dec  6 03:14:18 np0005548731 nova_compute[232433]:      <target dev="sda" bus="sata"/>
Dec  6 03:14:18 np0005548731 nova_compute[232433]:    </disk>
Dec  6 03:14:18 np0005548731 nova_compute[232433]:    <interface type="ethernet">
Dec  6 03:14:18 np0005548731 nova_compute[232433]:      <mac address="fa:16:3e:d2:bf:3b"/>
Dec  6 03:14:18 np0005548731 nova_compute[232433]:      <model type="virtio"/>
Dec  6 03:14:18 np0005548731 nova_compute[232433]:      <driver name="vhost" rx_queue_size="512"/>
Dec  6 03:14:18 np0005548731 nova_compute[232433]:      <mtu size="1442"/>
Dec  6 03:14:18 np0005548731 nova_compute[232433]:      <target dev="tap9c5705d0-f5"/>
Dec  6 03:14:18 np0005548731 nova_compute[232433]:    </interface>
Dec  6 03:14:18 np0005548731 nova_compute[232433]:    <serial type="pty">
Dec  6 03:14:18 np0005548731 nova_compute[232433]:      <log file="/var/lib/nova/instances/b2ee70f1-6a92-4de1-a8d3-63f69565e29e/console.log" append="off"/>
Dec  6 03:14:18 np0005548731 nova_compute[232433]:    </serial>
Dec  6 03:14:18 np0005548731 nova_compute[232433]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec  6 03:14:18 np0005548731 nova_compute[232433]:    <video>
Dec  6 03:14:18 np0005548731 nova_compute[232433]:      <model type="virtio"/>
Dec  6 03:14:18 np0005548731 nova_compute[232433]:    </video>
Dec  6 03:14:18 np0005548731 nova_compute[232433]:    <input type="tablet" bus="usb"/>
Dec  6 03:14:18 np0005548731 nova_compute[232433]:    <rng model="virtio">
Dec  6 03:14:18 np0005548731 nova_compute[232433]:      <backend model="random">/dev/urandom</backend>
Dec  6 03:14:18 np0005548731 nova_compute[232433]:    </rng>
Dec  6 03:14:18 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root"/>
Dec  6 03:14:18 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:14:18 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:14:18 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:14:18 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:14:18 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:14:18 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:14:18 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:14:18 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:14:18 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:14:18 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:14:18 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:14:18 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:14:18 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:14:18 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:14:18 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:14:18 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:14:18 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:14:18 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:14:18 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:14:18 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:14:18 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:14:18 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:14:18 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:14:18 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:14:18 np0005548731 nova_compute[232433]:    <controller type="usb" index="0"/>
Dec  6 03:14:18 np0005548731 nova_compute[232433]:    <memballoon model="virtio">
Dec  6 03:14:18 np0005548731 nova_compute[232433]:      <stats period="10"/>
Dec  6 03:14:18 np0005548731 nova_compute[232433]:    </memballoon>
Dec  6 03:14:18 np0005548731 nova_compute[232433]:  </devices>
Dec  6 03:14:18 np0005548731 nova_compute[232433]: </domain>
Dec  6 03:14:18 np0005548731 nova_compute[232433]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec  6 03:14:18 np0005548731 nova_compute[232433]: 2025-12-06 08:14:18.038 232437 DEBUG nova.compute.manager [None req-2e39c80e-9243-4c98-9201-0246ae5be929 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: b2ee70f1-6a92-4de1-a8d3-63f69565e29e] Preparing to wait for external event network-vif-plugged-9c5705d0-f551-4726-a476-7a9060442f88 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec  6 03:14:18 np0005548731 nova_compute[232433]: 2025-12-06 08:14:18.039 232437 DEBUG oslo_concurrency.lockutils [None req-2e39c80e-9243-4c98-9201-0246ae5be929 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Acquiring lock "b2ee70f1-6a92-4de1-a8d3-63f69565e29e-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:14:18 np0005548731 nova_compute[232433]: 2025-12-06 08:14:18.039 232437 DEBUG oslo_concurrency.lockutils [None req-2e39c80e-9243-4c98-9201-0246ae5be929 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Lock "b2ee70f1-6a92-4de1-a8d3-63f69565e29e-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:14:18 np0005548731 nova_compute[232433]: 2025-12-06 08:14:18.039 232437 DEBUG oslo_concurrency.lockutils [None req-2e39c80e-9243-4c98-9201-0246ae5be929 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Lock "b2ee70f1-6a92-4de1-a8d3-63f69565e29e-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:14:18 np0005548731 nova_compute[232433]: 2025-12-06 08:14:18.040 232437 DEBUG nova.virt.libvirt.vif [None req-2e39c80e-9243-4c98-9201-0246ae5be929 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T08:14:09Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1967673920',display_name='tempest-TestNetworkAdvancedServerOps-server-1967673920',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1967673920',id=202,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBE7K+0TQYdf6/UDgfdSEP1ZSscXf10O1YcJvJ4OnffpKNQOT8fnXRuOBwRzeckYQj5cV9UoztH3ENvALVUX8qfmg5l022XYIuytOwU07Yk+vPdHH//qvTGSTmErglU1U5g==',key_name='tempest-TestNetworkAdvancedServerOps-1235494473',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='fd8e24e430c64364ace789d88a68ba5f',ramdisk_id='',reservation_id='r-kygr1lne',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-1171852383',owner_user_name='tempest-TestNetworkAdvancedServerOps-1171852383-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T08:14:11Z,user_data=None,user_id='2ed2d17026504d70b893923a85cece4d',uuid=b2ee70f1-6a92-4de1-a8d3-63f69565e29e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "9c5705d0-f551-4726-a476-7a9060442f88", "address": "fa:16:3e:d2:bf:3b", "network": {"id": "a5a2ec32-8155-4bca-8340-a3a1ef336f77", "bridge": "br-int", "label": "tempest-network-smoke--2053374407", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fd8e24e430c64364ace789d88a68ba5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9c5705d0-f5", "ovs_interfaceid": "9c5705d0-f551-4726-a476-7a9060442f88", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec  6 03:14:18 np0005548731 nova_compute[232433]: 2025-12-06 08:14:18.040 232437 DEBUG nova.network.os_vif_util [None req-2e39c80e-9243-4c98-9201-0246ae5be929 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Converting VIF {"id": "9c5705d0-f551-4726-a476-7a9060442f88", "address": "fa:16:3e:d2:bf:3b", "network": {"id": "a5a2ec32-8155-4bca-8340-a3a1ef336f77", "bridge": "br-int", "label": "tempest-network-smoke--2053374407", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fd8e24e430c64364ace789d88a68ba5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9c5705d0-f5", "ovs_interfaceid": "9c5705d0-f551-4726-a476-7a9060442f88", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 03:14:18 np0005548731 nova_compute[232433]: 2025-12-06 08:14:18.041 232437 DEBUG nova.network.os_vif_util [None req-2e39c80e-9243-4c98-9201-0246ae5be929 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d2:bf:3b,bridge_name='br-int',has_traffic_filtering=True,id=9c5705d0-f551-4726-a476-7a9060442f88,network=Network(a5a2ec32-8155-4bca-8340-a3a1ef336f77),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9c5705d0-f5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 03:14:18 np0005548731 nova_compute[232433]: 2025-12-06 08:14:18.041 232437 DEBUG os_vif [None req-2e39c80e-9243-4c98-9201-0246ae5be929 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d2:bf:3b,bridge_name='br-int',has_traffic_filtering=True,id=9c5705d0-f551-4726-a476-7a9060442f88,network=Network(a5a2ec32-8155-4bca-8340-a3a1ef336f77),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9c5705d0-f5') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec  6 03:14:18 np0005548731 nova_compute[232433]: 2025-12-06 08:14:18.042 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:14:18 np0005548731 nova_compute[232433]: 2025-12-06 08:14:18.042 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 03:14:18 np0005548731 nova_compute[232433]: 2025-12-06 08:14:18.042 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  6 03:14:18 np0005548731 nova_compute[232433]: 2025-12-06 08:14:18.044 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:14:18 np0005548731 nova_compute[232433]: 2025-12-06 08:14:18.045 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9c5705d0-f5, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 03:14:18 np0005548731 nova_compute[232433]: 2025-12-06 08:14:18.045 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap9c5705d0-f5, col_values=(('external_ids', {'iface-id': '9c5705d0-f551-4726-a476-7a9060442f88', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:d2:bf:3b', 'vm-uuid': 'b2ee70f1-6a92-4de1-a8d3-63f69565e29e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 03:14:18 np0005548731 nova_compute[232433]: 2025-12-06 08:14:18.085 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:14:18 np0005548731 NetworkManager[49182]: <info>  [1765008858.0873] manager: (tap9c5705d0-f5): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/483)
Dec  6 03:14:18 np0005548731 nova_compute[232433]: 2025-12-06 08:14:18.089 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  6 03:14:18 np0005548731 nova_compute[232433]: 2025-12-06 08:14:18.093 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:14:18 np0005548731 nova_compute[232433]: 2025-12-06 08:14:18.094 232437 INFO os_vif [None req-2e39c80e-9243-4c98-9201-0246ae5be929 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d2:bf:3b,bridge_name='br-int',has_traffic_filtering=True,id=9c5705d0-f551-4726-a476-7a9060442f88,network=Network(a5a2ec32-8155-4bca-8340-a3a1ef336f77),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9c5705d0-f5')#033[00m
Dec  6 03:14:18 np0005548731 nova_compute[232433]: 2025-12-06 08:14:18.149 232437 DEBUG nova.virt.libvirt.driver [None req-2e39c80e-9243-4c98-9201-0246ae5be929 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  6 03:14:18 np0005548731 nova_compute[232433]: 2025-12-06 08:14:18.150 232437 DEBUG nova.virt.libvirt.driver [None req-2e39c80e-9243-4c98-9201-0246ae5be929 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  6 03:14:18 np0005548731 nova_compute[232433]: 2025-12-06 08:14:18.150 232437 DEBUG nova.virt.libvirt.driver [None req-2e39c80e-9243-4c98-9201-0246ae5be929 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] No VIF found with MAC fa:16:3e:d2:bf:3b, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec  6 03:14:18 np0005548731 nova_compute[232433]: 2025-12-06 08:14:18.151 232437 INFO nova.virt.libvirt.driver [None req-2e39c80e-9243-4c98-9201-0246ae5be929 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: b2ee70f1-6a92-4de1-a8d3-63f69565e29e] Using config drive#033[00m
Dec  6 03:14:18 np0005548731 nova_compute[232433]: 2025-12-06 08:14:18.178 232437 DEBUG nova.storage.rbd_utils [None req-2e39c80e-9243-4c98-9201-0246ae5be929 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] rbd image b2ee70f1-6a92-4de1-a8d3-63f69565e29e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 03:14:18 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:14:18 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:14:18 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:14:18.832 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:14:18 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:14:18 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:14:18 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:14:18.947 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:14:18 np0005548731 nova_compute[232433]: 2025-12-06 08:14:18.993 232437 DEBUG nova.network.neutron [req-e7554600-e87f-483a-96f8-b7b20f344d9f req-600052e5-e93e-4da2-a83a-ced422e4d379 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: b2ee70f1-6a92-4de1-a8d3-63f69565e29e] Updated VIF entry in instance network info cache for port 9c5705d0-f551-4726-a476-7a9060442f88. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  6 03:14:18 np0005548731 nova_compute[232433]: 2025-12-06 08:14:18.994 232437 DEBUG nova.network.neutron [req-e7554600-e87f-483a-96f8-b7b20f344d9f req-600052e5-e93e-4da2-a83a-ced422e4d379 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: b2ee70f1-6a92-4de1-a8d3-63f69565e29e] Updating instance_info_cache with network_info: [{"id": "9c5705d0-f551-4726-a476-7a9060442f88", "address": "fa:16:3e:d2:bf:3b", "network": {"id": "a5a2ec32-8155-4bca-8340-a3a1ef336f77", "bridge": "br-int", "label": "tempest-network-smoke--2053374407", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fd8e24e430c64364ace789d88a68ba5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9c5705d0-f5", "ovs_interfaceid": "9c5705d0-f551-4726-a476-7a9060442f88", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 03:14:19 np0005548731 nova_compute[232433]: 2025-12-06 08:14:19.021 232437 DEBUG oslo_concurrency.lockutils [req-e7554600-e87f-483a-96f8-b7b20f344d9f req-600052e5-e93e-4da2-a83a-ced422e4d379 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-b2ee70f1-6a92-4de1-a8d3-63f69565e29e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 03:14:19 np0005548731 nova_compute[232433]: 2025-12-06 08:14:19.158 232437 INFO nova.virt.libvirt.driver [None req-2e39c80e-9243-4c98-9201-0246ae5be929 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: b2ee70f1-6a92-4de1-a8d3-63f69565e29e] Creating config drive at /var/lib/nova/instances/b2ee70f1-6a92-4de1-a8d3-63f69565e29e/disk.config#033[00m
Dec  6 03:14:19 np0005548731 nova_compute[232433]: 2025-12-06 08:14:19.163 232437 DEBUG oslo_concurrency.processutils [None req-2e39c80e-9243-4c98-9201-0246ae5be929 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/b2ee70f1-6a92-4de1-a8d3-63f69565e29e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpdxx_1t0p execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 03:14:19 np0005548731 nova_compute[232433]: 2025-12-06 08:14:19.298 232437 DEBUG oslo_concurrency.processutils [None req-2e39c80e-9243-4c98-9201-0246ae5be929 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/b2ee70f1-6a92-4de1-a8d3-63f69565e29e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpdxx_1t0p" returned: 0 in 0.134s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 03:14:19 np0005548731 nova_compute[232433]: 2025-12-06 08:14:19.326 232437 DEBUG nova.storage.rbd_utils [None req-2e39c80e-9243-4c98-9201-0246ae5be929 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] rbd image b2ee70f1-6a92-4de1-a8d3-63f69565e29e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 03:14:19 np0005548731 nova_compute[232433]: 2025-12-06 08:14:19.330 232437 DEBUG oslo_concurrency.processutils [None req-2e39c80e-9243-4c98-9201-0246ae5be929 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/b2ee70f1-6a92-4de1-a8d3-63f69565e29e/disk.config b2ee70f1-6a92-4de1-a8d3-63f69565e29e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 03:14:19 np0005548731 nova_compute[232433]: 2025-12-06 08:14:19.692 232437 DEBUG oslo_concurrency.processutils [None req-2e39c80e-9243-4c98-9201-0246ae5be929 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/b2ee70f1-6a92-4de1-a8d3-63f69565e29e/disk.config b2ee70f1-6a92-4de1-a8d3-63f69565e29e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.362s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 03:14:19 np0005548731 nova_compute[232433]: 2025-12-06 08:14:19.693 232437 INFO nova.virt.libvirt.driver [None req-2e39c80e-9243-4c98-9201-0246ae5be929 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: b2ee70f1-6a92-4de1-a8d3-63f69565e29e] Deleting local config drive /var/lib/nova/instances/b2ee70f1-6a92-4de1-a8d3-63f69565e29e/disk.config because it was imported into RBD.#033[00m
Dec  6 03:14:19 np0005548731 kernel: tap9c5705d0-f5: entered promiscuous mode
Dec  6 03:14:19 np0005548731 NetworkManager[49182]: <info>  [1765008859.7384] manager: (tap9c5705d0-f5): new Tun device (/org/freedesktop/NetworkManager/Devices/484)
Dec  6 03:14:19 np0005548731 ovn_controller[133927]: 2025-12-06T08:14:19Z|01027|binding|INFO|Claiming lport 9c5705d0-f551-4726-a476-7a9060442f88 for this chassis.
Dec  6 03:14:19 np0005548731 ovn_controller[133927]: 2025-12-06T08:14:19Z|01028|binding|INFO|9c5705d0-f551-4726-a476-7a9060442f88: Claiming fa:16:3e:d2:bf:3b 10.100.0.5
Dec  6 03:14:19 np0005548731 nova_compute[232433]: 2025-12-06 08:14:19.740 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:14:19 np0005548731 nova_compute[232433]: 2025-12-06 08:14:19.745 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:14:19 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:14:19.753 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d2:bf:3b 10.100.0.5'], port_security=['fa:16:3e:d2:bf:3b 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'b2ee70f1-6a92-4de1-a8d3-63f69565e29e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a5a2ec32-8155-4bca-8340-a3a1ef336f77', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'fd8e24e430c64364ace789d88a68ba5f', 'neutron:revision_number': '2', 'neutron:security_group_ids': '2b618342-4b21-4494-867f-57d49b127e96', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e7e181fc-238b-465b-9a13-8c46b34bbffb, chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], logical_port=9c5705d0-f551-4726-a476-7a9060442f88) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 03:14:19 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:14:19.754 143965 INFO neutron.agent.ovn.metadata.agent [-] Port 9c5705d0-f551-4726-a476-7a9060442f88 in datapath a5a2ec32-8155-4bca-8340-a3a1ef336f77 bound to our chassis#033[00m
Dec  6 03:14:19 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:14:19.756 143965 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network a5a2ec32-8155-4bca-8340-a3a1ef336f77#033[00m
Dec  6 03:14:19 np0005548731 systemd-udevd[334728]: Network interface NamePolicy= disabled on kernel command line.
Dec  6 03:14:19 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:14:19.766 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[49915ac4-1fee-494e-bfff-64223913955d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:14:19 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:14:19.767 143965 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapa5a2ec32-81 in ovnmeta-a5a2ec32-8155-4bca-8340-a3a1ef336f77 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Dec  6 03:14:19 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:14:19.769 236668 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapa5a2ec32-80 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Dec  6 03:14:19 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:14:19.769 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[66be7406-a27f-49fc-8aab-117560711c7d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:14:19 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:14:19.771 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[ee8dfcd2-d874-4be0-82d2-9b503716dd40]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:14:19 np0005548731 systemd-machined[195355]: New machine qemu-104-instance-000000ca.
Dec  6 03:14:19 np0005548731 NetworkManager[49182]: <info>  [1765008859.7844] device (tap9c5705d0-f5): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec  6 03:14:19 np0005548731 NetworkManager[49182]: <info>  [1765008859.7856] device (tap9c5705d0-f5): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec  6 03:14:19 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:14:19.783 144079 DEBUG oslo.privsep.daemon [-] privsep: reply[9268750a-0058-4284-8e6d-d841ddc18afd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:14:19 np0005548731 nova_compute[232433]: 2025-12-06 08:14:19.801 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:14:19 np0005548731 systemd[1]: Started Virtual Machine qemu-104-instance-000000ca.
Dec  6 03:14:19 np0005548731 ovn_controller[133927]: 2025-12-06T08:14:19Z|01029|binding|INFO|Setting lport 9c5705d0-f551-4726-a476-7a9060442f88 ovn-installed in OVS
Dec  6 03:14:19 np0005548731 ovn_controller[133927]: 2025-12-06T08:14:19Z|01030|binding|INFO|Setting lport 9c5705d0-f551-4726-a476-7a9060442f88 up in Southbound
Dec  6 03:14:19 np0005548731 nova_compute[232433]: 2025-12-06 08:14:19.808 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:14:19 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:14:19.809 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[1c2ced23-265e-4ef4-9037-ef8c3ebf2363]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:14:19 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:14:19.840 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[6e414bc9-f325-45eb-bb28-5c719639d6d7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:14:19 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:14:19.844 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[01589ce5-56d8-41a4-9962-8ca0937ba4cc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:14:19 np0005548731 NetworkManager[49182]: <info>  [1765008859.8457] manager: (tapa5a2ec32-80): new Veth device (/org/freedesktop/NetworkManager/Devices/485)
Dec  6 03:14:19 np0005548731 systemd-udevd[334732]: Network interface NamePolicy= disabled on kernel command line.
Dec  6 03:14:19 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:14:19.879 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[e0b5a1a4-748c-4413-8c26-cf04d7be2d75]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:14:19 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:14:19.881 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[be221f29-3f19-4b48-b372-c0970abfecc1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:14:19 np0005548731 NetworkManager[49182]: <info>  [1765008859.9040] device (tapa5a2ec32-80): carrier: link connected
Dec  6 03:14:19 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:14:19.909 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[fc9782ad-e8b3-49c1-bf15-83338670e415]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:14:19 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:14:19.925 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[865d63d0-a42b-44b2-b3e3-a638749b8029]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa5a2ec32-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d0:24:c7'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 314], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 912191, 'reachable_time': 43569, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 334761, 'error': None, 'target': 'ovnmeta-a5a2ec32-8155-4bca-8340-a3a1ef336f77', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:14:19 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:14:19.942 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[b9ca5981-7960-4a21-926d-687a926b749a]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fed0:24c7'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 912191, 'tstamp': 912191}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 334762, 'error': None, 'target': 'ovnmeta-a5a2ec32-8155-4bca-8340-a3a1ef336f77', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:14:19 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:14:19.963 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[3e42f622-7aff-4262-9f96-f07302dbb3e2]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa5a2ec32-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d0:24:c7'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 314], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 912191, 'reachable_time': 43569, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 334763, 'error': None, 'target': 'ovnmeta-a5a2ec32-8155-4bca-8340-a3a1ef336f77', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:14:19 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:14:19.990 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[b802d729-c527-435b-a4a7-bbc38d5400cb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:14:20 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:14:20.047 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[85d09abd-e911-43c2-9fe6-baa3dbed75ce]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:14:20 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:14:20.048 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa5a2ec32-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 03:14:20 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:14:20.048 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  6 03:14:20 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:14:20.048 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa5a2ec32-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 03:14:20 np0005548731 nova_compute[232433]: 2025-12-06 08:14:20.050 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:14:20 np0005548731 kernel: tapa5a2ec32-80: entered promiscuous mode
Dec  6 03:14:20 np0005548731 NetworkManager[49182]: <info>  [1765008860.0513] manager: (tapa5a2ec32-80): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/486)
Dec  6 03:14:20 np0005548731 nova_compute[232433]: 2025-12-06 08:14:20.052 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:14:20 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:14:20.056 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapa5a2ec32-80, col_values=(('external_ids', {'iface-id': 'e4bc5e15-f64d-44cd-a19a-57bd42fe3ca9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 03:14:20 np0005548731 nova_compute[232433]: 2025-12-06 08:14:20.057 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:14:20 np0005548731 ovn_controller[133927]: 2025-12-06T08:14:20Z|01031|binding|INFO|Releasing lport e4bc5e15-f64d-44cd-a19a-57bd42fe3ca9 from this chassis (sb_readonly=0)
Dec  6 03:14:20 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:14:20.070 143965 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/a5a2ec32-8155-4bca-8340-a3a1ef336f77.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/a5a2ec32-8155-4bca-8340-a3a1ef336f77.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Dec  6 03:14:20 np0005548731 nova_compute[232433]: 2025-12-06 08:14:20.073 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:14:20 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:14:20.074 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[97d83dcc-b13c-4c4e-93dd-74f45535953e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:14:20 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:14:20.074 143965 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec  6 03:14:20 np0005548731 ovn_metadata_agent[143960]: global
Dec  6 03:14:20 np0005548731 ovn_metadata_agent[143960]:    log         /dev/log local0 debug
Dec  6 03:14:20 np0005548731 ovn_metadata_agent[143960]:    log-tag     haproxy-metadata-proxy-a5a2ec32-8155-4bca-8340-a3a1ef336f77
Dec  6 03:14:20 np0005548731 ovn_metadata_agent[143960]:    user        root
Dec  6 03:14:20 np0005548731 ovn_metadata_agent[143960]:    group       root
Dec  6 03:14:20 np0005548731 ovn_metadata_agent[143960]:    maxconn     1024
Dec  6 03:14:20 np0005548731 ovn_metadata_agent[143960]:    pidfile     /var/lib/neutron/external/pids/a5a2ec32-8155-4bca-8340-a3a1ef336f77.pid.haproxy
Dec  6 03:14:20 np0005548731 ovn_metadata_agent[143960]:    daemon
Dec  6 03:14:20 np0005548731 ovn_metadata_agent[143960]: 
Dec  6 03:14:20 np0005548731 ovn_metadata_agent[143960]: defaults
Dec  6 03:14:20 np0005548731 ovn_metadata_agent[143960]:    log global
Dec  6 03:14:20 np0005548731 ovn_metadata_agent[143960]:    mode http
Dec  6 03:14:20 np0005548731 ovn_metadata_agent[143960]:    option httplog
Dec  6 03:14:20 np0005548731 ovn_metadata_agent[143960]:    option dontlognull
Dec  6 03:14:20 np0005548731 ovn_metadata_agent[143960]:    option http-server-close
Dec  6 03:14:20 np0005548731 ovn_metadata_agent[143960]:    option forwardfor
Dec  6 03:14:20 np0005548731 ovn_metadata_agent[143960]:    retries                 3
Dec  6 03:14:20 np0005548731 ovn_metadata_agent[143960]:    timeout http-request    30s
Dec  6 03:14:20 np0005548731 ovn_metadata_agent[143960]:    timeout connect         30s
Dec  6 03:14:20 np0005548731 ovn_metadata_agent[143960]:    timeout client          32s
Dec  6 03:14:20 np0005548731 ovn_metadata_agent[143960]:    timeout server          32s
Dec  6 03:14:20 np0005548731 ovn_metadata_agent[143960]:    timeout http-keep-alive 30s
Dec  6 03:14:20 np0005548731 ovn_metadata_agent[143960]: 
Dec  6 03:14:20 np0005548731 ovn_metadata_agent[143960]: 
Dec  6 03:14:20 np0005548731 ovn_metadata_agent[143960]: listen listener
Dec  6 03:14:20 np0005548731 ovn_metadata_agent[143960]:    bind 169.254.169.254:80
Dec  6 03:14:20 np0005548731 ovn_metadata_agent[143960]:    server metadata /var/lib/neutron/metadata_proxy
Dec  6 03:14:20 np0005548731 ovn_metadata_agent[143960]:    http-request add-header X-OVN-Network-ID a5a2ec32-8155-4bca-8340-a3a1ef336f77
Dec  6 03:14:20 np0005548731 ovn_metadata_agent[143960]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Dec  6 03:14:20 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:14:20.075 143965 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-a5a2ec32-8155-4bca-8340-a3a1ef336f77', 'env', 'PROCESS_TAG=haproxy-a5a2ec32-8155-4bca-8340-a3a1ef336f77', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/a5a2ec32-8155-4bca-8340-a3a1ef336f77.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Dec  6 03:14:20 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:14:20 np0005548731 nova_compute[232433]: 2025-12-06 08:14:20.379 232437 DEBUG nova.virt.driver [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Emitting event <LifecycleEvent: 1765008860.3781297, b2ee70f1-6a92-4de1-a8d3-63f69565e29e => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 03:14:20 np0005548731 nova_compute[232433]: 2025-12-06 08:14:20.379 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: b2ee70f1-6a92-4de1-a8d3-63f69565e29e] VM Started (Lifecycle Event)#033[00m
Dec  6 03:14:20 np0005548731 nova_compute[232433]: 2025-12-06 08:14:20.400 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: b2ee70f1-6a92-4de1-a8d3-63f69565e29e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 03:14:20 np0005548731 nova_compute[232433]: 2025-12-06 08:14:20.404 232437 DEBUG nova.virt.driver [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Emitting event <LifecycleEvent: 1765008860.3782609, b2ee70f1-6a92-4de1-a8d3-63f69565e29e => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 03:14:20 np0005548731 nova_compute[232433]: 2025-12-06 08:14:20.405 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: b2ee70f1-6a92-4de1-a8d3-63f69565e29e] VM Paused (Lifecycle Event)#033[00m
Dec  6 03:14:20 np0005548731 nova_compute[232433]: 2025-12-06 08:14:20.424 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: b2ee70f1-6a92-4de1-a8d3-63f69565e29e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 03:14:20 np0005548731 nova_compute[232433]: 2025-12-06 08:14:20.427 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: b2ee70f1-6a92-4de1-a8d3-63f69565e29e] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  6 03:14:20 np0005548731 nova_compute[232433]: 2025-12-06 08:14:20.450 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: b2ee70f1-6a92-4de1-a8d3-63f69565e29e] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  6 03:14:20 np0005548731 podman[334838]: 2025-12-06 08:14:20.446406448 +0000 UTC m=+0.025286988 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec  6 03:14:20 np0005548731 podman[334838]: 2025-12-06 08:14:20.584171112 +0000 UTC m=+0.163051632 container create 78274f2c2361a134a60e344f3a9ad035fb8523e0c154b991b33e399af70a56bf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a5a2ec32-8155-4bca-8340-a3a1ef336f77, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec  6 03:14:20 np0005548731 systemd[1]: Started libpod-conmon-78274f2c2361a134a60e344f3a9ad035fb8523e0c154b991b33e399af70a56bf.scope.
Dec  6 03:14:20 np0005548731 systemd[1]: Started libcrun container.
Dec  6 03:14:20 np0005548731 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd839f8798c1bae00c40bde81e798bdd11ad7e82daa130ced89483d939d5e15a/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec  6 03:14:20 np0005548731 podman[334838]: 2025-12-06 08:14:20.662048179 +0000 UTC m=+0.240928699 container init 78274f2c2361a134a60e344f3a9ad035fb8523e0c154b991b33e399af70a56bf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a5a2ec32-8155-4bca-8340-a3a1ef336f77, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251125, io.buildah.version=1.41.3)
Dec  6 03:14:20 np0005548731 podman[334838]: 2025-12-06 08:14:20.668816823 +0000 UTC m=+0.247697343 container start 78274f2c2361a134a60e344f3a9ad035fb8523e0c154b991b33e399af70a56bf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a5a2ec32-8155-4bca-8340-a3a1ef336f77, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  6 03:14:20 np0005548731 neutron-haproxy-ovnmeta-a5a2ec32-8155-4bca-8340-a3a1ef336f77[334855]: [NOTICE]   (334859) : New worker (334861) forked
Dec  6 03:14:20 np0005548731 neutron-haproxy-ovnmeta-a5a2ec32-8155-4bca-8340-a3a1ef336f77[334855]: [NOTICE]   (334859) : Loading success.
Dec  6 03:14:20 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:14:20 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:14:20 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:14:20.834 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:14:20 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:14:20 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:14:20 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:14:20.951 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:14:21 np0005548731 nova_compute[232433]: 2025-12-06 08:14:21.240 232437 DEBUG nova.compute.manager [req-dff9c3d9-5429-4e6d-a36a-1916fb70c640 req-112121da-89e8-4570-bdd8-bf984c9dd3d2 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: b2ee70f1-6a92-4de1-a8d3-63f69565e29e] Received event network-vif-plugged-9c5705d0-f551-4726-a476-7a9060442f88 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 03:14:21 np0005548731 nova_compute[232433]: 2025-12-06 08:14:21.241 232437 DEBUG oslo_concurrency.lockutils [req-dff9c3d9-5429-4e6d-a36a-1916fb70c640 req-112121da-89e8-4570-bdd8-bf984c9dd3d2 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "b2ee70f1-6a92-4de1-a8d3-63f69565e29e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:14:21 np0005548731 nova_compute[232433]: 2025-12-06 08:14:21.241 232437 DEBUG oslo_concurrency.lockutils [req-dff9c3d9-5429-4e6d-a36a-1916fb70c640 req-112121da-89e8-4570-bdd8-bf984c9dd3d2 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "b2ee70f1-6a92-4de1-a8d3-63f69565e29e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:14:21 np0005548731 nova_compute[232433]: 2025-12-06 08:14:21.241 232437 DEBUG oslo_concurrency.lockutils [req-dff9c3d9-5429-4e6d-a36a-1916fb70c640 req-112121da-89e8-4570-bdd8-bf984c9dd3d2 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "b2ee70f1-6a92-4de1-a8d3-63f69565e29e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:14:21 np0005548731 nova_compute[232433]: 2025-12-06 08:14:21.241 232437 DEBUG nova.compute.manager [req-dff9c3d9-5429-4e6d-a36a-1916fb70c640 req-112121da-89e8-4570-bdd8-bf984c9dd3d2 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: b2ee70f1-6a92-4de1-a8d3-63f69565e29e] Processing event network-vif-plugged-9c5705d0-f551-4726-a476-7a9060442f88 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Dec  6 03:14:21 np0005548731 nova_compute[232433]: 2025-12-06 08:14:21.242 232437 DEBUG nova.compute.manager [None req-2e39c80e-9243-4c98-9201-0246ae5be929 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: b2ee70f1-6a92-4de1-a8d3-63f69565e29e] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec  6 03:14:21 np0005548731 nova_compute[232433]: 2025-12-06 08:14:21.245 232437 DEBUG nova.virt.driver [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Emitting event <LifecycleEvent: 1765008861.2455127, b2ee70f1-6a92-4de1-a8d3-63f69565e29e => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 03:14:21 np0005548731 nova_compute[232433]: 2025-12-06 08:14:21.245 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: b2ee70f1-6a92-4de1-a8d3-63f69565e29e] VM Resumed (Lifecycle Event)#033[00m
Dec  6 03:14:21 np0005548731 nova_compute[232433]: 2025-12-06 08:14:21.247 232437 DEBUG nova.virt.libvirt.driver [None req-2e39c80e-9243-4c98-9201-0246ae5be929 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: b2ee70f1-6a92-4de1-a8d3-63f69565e29e] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Dec  6 03:14:21 np0005548731 nova_compute[232433]: 2025-12-06 08:14:21.250 232437 INFO nova.virt.libvirt.driver [-] [instance: b2ee70f1-6a92-4de1-a8d3-63f69565e29e] Instance spawned successfully.#033[00m
Dec  6 03:14:21 np0005548731 nova_compute[232433]: 2025-12-06 08:14:21.250 232437 DEBUG nova.virt.libvirt.driver [None req-2e39c80e-9243-4c98-9201-0246ae5be929 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: b2ee70f1-6a92-4de1-a8d3-63f69565e29e] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Dec  6 03:14:21 np0005548731 nova_compute[232433]: 2025-12-06 08:14:21.278 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: b2ee70f1-6a92-4de1-a8d3-63f69565e29e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 03:14:21 np0005548731 nova_compute[232433]: 2025-12-06 08:14:21.284 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: b2ee70f1-6a92-4de1-a8d3-63f69565e29e] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  6 03:14:21 np0005548731 nova_compute[232433]: 2025-12-06 08:14:21.287 232437 DEBUG nova.virt.libvirt.driver [None req-2e39c80e-9243-4c98-9201-0246ae5be929 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: b2ee70f1-6a92-4de1-a8d3-63f69565e29e] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 03:14:21 np0005548731 nova_compute[232433]: 2025-12-06 08:14:21.287 232437 DEBUG nova.virt.libvirt.driver [None req-2e39c80e-9243-4c98-9201-0246ae5be929 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: b2ee70f1-6a92-4de1-a8d3-63f69565e29e] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 03:14:21 np0005548731 nova_compute[232433]: 2025-12-06 08:14:21.288 232437 DEBUG nova.virt.libvirt.driver [None req-2e39c80e-9243-4c98-9201-0246ae5be929 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: b2ee70f1-6a92-4de1-a8d3-63f69565e29e] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 03:14:21 np0005548731 nova_compute[232433]: 2025-12-06 08:14:21.288 232437 DEBUG nova.virt.libvirt.driver [None req-2e39c80e-9243-4c98-9201-0246ae5be929 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: b2ee70f1-6a92-4de1-a8d3-63f69565e29e] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 03:14:21 np0005548731 nova_compute[232433]: 2025-12-06 08:14:21.288 232437 DEBUG nova.virt.libvirt.driver [None req-2e39c80e-9243-4c98-9201-0246ae5be929 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: b2ee70f1-6a92-4de1-a8d3-63f69565e29e] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 03:14:21 np0005548731 nova_compute[232433]: 2025-12-06 08:14:21.289 232437 DEBUG nova.virt.libvirt.driver [None req-2e39c80e-9243-4c98-9201-0246ae5be929 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: b2ee70f1-6a92-4de1-a8d3-63f69565e29e] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 03:14:21 np0005548731 nova_compute[232433]: 2025-12-06 08:14:21.328 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: b2ee70f1-6a92-4de1-a8d3-63f69565e29e] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  6 03:14:21 np0005548731 nova_compute[232433]: 2025-12-06 08:14:21.372 232437 INFO nova.compute.manager [None req-2e39c80e-9243-4c98-9201-0246ae5be929 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: b2ee70f1-6a92-4de1-a8d3-63f69565e29e] Took 9.84 seconds to spawn the instance on the hypervisor.#033[00m
Dec  6 03:14:21 np0005548731 nova_compute[232433]: 2025-12-06 08:14:21.372 232437 DEBUG nova.compute.manager [None req-2e39c80e-9243-4c98-9201-0246ae5be929 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: b2ee70f1-6a92-4de1-a8d3-63f69565e29e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 03:14:21 np0005548731 nova_compute[232433]: 2025-12-06 08:14:21.433 232437 INFO nova.compute.manager [None req-2e39c80e-9243-4c98-9201-0246ae5be929 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: b2ee70f1-6a92-4de1-a8d3-63f69565e29e] Took 10.85 seconds to build instance.#033[00m
Dec  6 03:14:21 np0005548731 nova_compute[232433]: 2025-12-06 08:14:21.461 232437 DEBUG oslo_concurrency.lockutils [None req-2e39c80e-9243-4c98-9201-0246ae5be929 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Lock "b2ee70f1-6a92-4de1-a8d3-63f69565e29e" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.076s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:14:21 np0005548731 nova_compute[232433]: 2025-12-06 08:14:21.773 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:14:22 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:14:22 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:14:22 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:14:22.836 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:14:22 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:14:22 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:14:22 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:14:22.954 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:14:23 np0005548731 nova_compute[232433]: 2025-12-06 08:14:23.125 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:14:23 np0005548731 nova_compute[232433]: 2025-12-06 08:14:23.376 232437 DEBUG nova.compute.manager [req-6fd696d8-5d25-4637-87b3-c8b1ae747c6d req-32f61dd3-8a0c-49d0-9612-8533f2807d2b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: b2ee70f1-6a92-4de1-a8d3-63f69565e29e] Received event network-vif-plugged-9c5705d0-f551-4726-a476-7a9060442f88 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 03:14:23 np0005548731 nova_compute[232433]: 2025-12-06 08:14:23.377 232437 DEBUG oslo_concurrency.lockutils [req-6fd696d8-5d25-4637-87b3-c8b1ae747c6d req-32f61dd3-8a0c-49d0-9612-8533f2807d2b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "b2ee70f1-6a92-4de1-a8d3-63f69565e29e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:14:23 np0005548731 nova_compute[232433]: 2025-12-06 08:14:23.377 232437 DEBUG oslo_concurrency.lockutils [req-6fd696d8-5d25-4637-87b3-c8b1ae747c6d req-32f61dd3-8a0c-49d0-9612-8533f2807d2b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "b2ee70f1-6a92-4de1-a8d3-63f69565e29e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:14:23 np0005548731 nova_compute[232433]: 2025-12-06 08:14:23.377 232437 DEBUG oslo_concurrency.lockutils [req-6fd696d8-5d25-4637-87b3-c8b1ae747c6d req-32f61dd3-8a0c-49d0-9612-8533f2807d2b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "b2ee70f1-6a92-4de1-a8d3-63f69565e29e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:14:23 np0005548731 nova_compute[232433]: 2025-12-06 08:14:23.377 232437 DEBUG nova.compute.manager [req-6fd696d8-5d25-4637-87b3-c8b1ae747c6d req-32f61dd3-8a0c-49d0-9612-8533f2807d2b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: b2ee70f1-6a92-4de1-a8d3-63f69565e29e] No waiting events found dispatching network-vif-plugged-9c5705d0-f551-4726-a476-7a9060442f88 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 03:14:23 np0005548731 nova_compute[232433]: 2025-12-06 08:14:23.377 232437 WARNING nova.compute.manager [req-6fd696d8-5d25-4637-87b3-c8b1ae747c6d req-32f61dd3-8a0c-49d0-9612-8533f2807d2b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: b2ee70f1-6a92-4de1-a8d3-63f69565e29e] Received unexpected event network-vif-plugged-9c5705d0-f551-4726-a476-7a9060442f88 for instance with vm_state active and task_state None.#033[00m
Dec  6 03:14:24 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:14:24 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:14:24 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:14:24.838 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:14:24 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:14:24 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:14:24 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:14:24.957 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:14:25 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:14:26 np0005548731 NetworkManager[49182]: <info>  [1765008866.1869] manager: (patch-provnet-9e78c1a1-68f4-477a-abaa-13a98bde06e5-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/487)
Dec  6 03:14:26 np0005548731 NetworkManager[49182]: <info>  [1765008866.1885] manager: (patch-br-int-to-provnet-9e78c1a1-68f4-477a-abaa-13a98bde06e5): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/488)
Dec  6 03:14:26 np0005548731 nova_compute[232433]: 2025-12-06 08:14:26.186 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:14:26 np0005548731 ovn_controller[133927]: 2025-12-06T08:14:26Z|01032|binding|INFO|Releasing lport e4bc5e15-f64d-44cd-a19a-57bd42fe3ca9 from this chassis (sb_readonly=0)
Dec  6 03:14:26 np0005548731 nova_compute[232433]: 2025-12-06 08:14:26.261 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:14:26 np0005548731 nova_compute[232433]: 2025-12-06 08:14:26.268 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:14:26 np0005548731 nova_compute[232433]: 2025-12-06 08:14:26.510 232437 DEBUG nova.compute.manager [req-c7a80a96-1065-4c49-9346-6d7d9335abbc req-70a197c0-2e79-4e1f-917b-a315a40d436a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: b2ee70f1-6a92-4de1-a8d3-63f69565e29e] Received event network-changed-9c5705d0-f551-4726-a476-7a9060442f88 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 03:14:26 np0005548731 nova_compute[232433]: 2025-12-06 08:14:26.511 232437 DEBUG nova.compute.manager [req-c7a80a96-1065-4c49-9346-6d7d9335abbc req-70a197c0-2e79-4e1f-917b-a315a40d436a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: b2ee70f1-6a92-4de1-a8d3-63f69565e29e] Refreshing instance network info cache due to event network-changed-9c5705d0-f551-4726-a476-7a9060442f88. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  6 03:14:26 np0005548731 nova_compute[232433]: 2025-12-06 08:14:26.511 232437 DEBUG oslo_concurrency.lockutils [req-c7a80a96-1065-4c49-9346-6d7d9335abbc req-70a197c0-2e79-4e1f-917b-a315a40d436a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-b2ee70f1-6a92-4de1-a8d3-63f69565e29e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 03:14:26 np0005548731 nova_compute[232433]: 2025-12-06 08:14:26.511 232437 DEBUG oslo_concurrency.lockutils [req-c7a80a96-1065-4c49-9346-6d7d9335abbc req-70a197c0-2e79-4e1f-917b-a315a40d436a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-b2ee70f1-6a92-4de1-a8d3-63f69565e29e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 03:14:26 np0005548731 nova_compute[232433]: 2025-12-06 08:14:26.512 232437 DEBUG nova.network.neutron [req-c7a80a96-1065-4c49-9346-6d7d9335abbc req-70a197c0-2e79-4e1f-917b-a315a40d436a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: b2ee70f1-6a92-4de1-a8d3-63f69565e29e] Refreshing network info cache for port 9c5705d0-f551-4726-a476-7a9060442f88 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  6 03:14:26 np0005548731 nova_compute[232433]: 2025-12-06 08:14:26.774 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:14:26 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:14:26 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:14:26 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:14:26.840 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:14:26 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:14:26 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:14:26 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:14:26.961 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:14:28 np0005548731 nova_compute[232433]: 2025-12-06 08:14:28.129 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:14:28 np0005548731 nova_compute[232433]: 2025-12-06 08:14:28.276 232437 DEBUG nova.network.neutron [req-c7a80a96-1065-4c49-9346-6d7d9335abbc req-70a197c0-2e79-4e1f-917b-a315a40d436a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: b2ee70f1-6a92-4de1-a8d3-63f69565e29e] Updated VIF entry in instance network info cache for port 9c5705d0-f551-4726-a476-7a9060442f88. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  6 03:14:28 np0005548731 nova_compute[232433]: 2025-12-06 08:14:28.277 232437 DEBUG nova.network.neutron [req-c7a80a96-1065-4c49-9346-6d7d9335abbc req-70a197c0-2e79-4e1f-917b-a315a40d436a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: b2ee70f1-6a92-4de1-a8d3-63f69565e29e] Updating instance_info_cache with network_info: [{"id": "9c5705d0-f551-4726-a476-7a9060442f88", "address": "fa:16:3e:d2:bf:3b", "network": {"id": "a5a2ec32-8155-4bca-8340-a3a1ef336f77", "bridge": "br-int", "label": "tempest-network-smoke--2053374407", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.202", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fd8e24e430c64364ace789d88a68ba5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9c5705d0-f5", "ovs_interfaceid": "9c5705d0-f551-4726-a476-7a9060442f88", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 03:14:28 np0005548731 nova_compute[232433]: 2025-12-06 08:14:28.301 232437 DEBUG oslo_concurrency.lockutils [req-c7a80a96-1065-4c49-9346-6d7d9335abbc req-70a197c0-2e79-4e1f-917b-a315a40d436a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-b2ee70f1-6a92-4de1-a8d3-63f69565e29e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 03:14:28 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:14:28 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:14:28 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:14:28.842 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:14:28 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:14:28 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:14:28 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:14:28.964 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:14:30 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:14:30 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:14:30 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:14:30 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:14:30.845 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:14:30 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:14:30 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:14:30 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:14:30.968 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:14:31 np0005548731 nova_compute[232433]: 2025-12-06 08:14:31.810 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:14:32 np0005548731 nova_compute[232433]: 2025-12-06 08:14:32.105 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:14:32 np0005548731 nova_compute[232433]: 2025-12-06 08:14:32.105 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Dec  6 03:14:32 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:14:32 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:14:32 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:14:32.847 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:14:32 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:14:32 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:14:32 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:14:32.971 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:14:33 np0005548731 nova_compute[232433]: 2025-12-06 08:14:33.173 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:14:34 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:14:34 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:14:34 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:14:34.850 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:14:34 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:14:34 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:14:34 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:14:34.975 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:14:35 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:14:35 np0005548731 ovn_controller[133927]: 2025-12-06T08:14:35Z|00138|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:d2:bf:3b 10.100.0.5
Dec  6 03:14:35 np0005548731 ovn_controller[133927]: 2025-12-06T08:14:35Z|00139|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:d2:bf:3b 10.100.0.5
Dec  6 03:14:35 np0005548731 podman[334929]: 2025-12-06 08:14:35.895436353 +0000 UTC m=+0.057504431 container health_status 26c2ce9bdb83c6db5ccad7f5b2a7cb4e9354ab0235ad2f942ea42c8feda2142b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3)
Dec  6 03:14:35 np0005548731 podman[334931]: 2025-12-06 08:14:35.898247861 +0000 UTC m=+0.055247516 container health_status ae4fd08cdec31d6ae2a6b91ffff7deb6ca5a8375cb52ee4b685cc6f25796824e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=multipathd, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  6 03:14:35 np0005548731 podman[334930]: 2025-12-06 08:14:35.923315112 +0000 UTC m=+0.083293449 container health_status a118c3becf56cfbcae208065771ebf10a65162bb7b96389971293e534823fb78 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec  6 03:14:36 np0005548731 nova_compute[232433]: 2025-12-06 08:14:36.815 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:14:36 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:14:36 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:14:36 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:14:36.852 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:14:36 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:14:36 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:14:36 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:14:36.978 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:14:38 np0005548731 nova_compute[232433]: 2025-12-06 08:14:38.175 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:14:38 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:14:38 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:14:38 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:14:38.854 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:14:38 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:14:38 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:14:38 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:14:38.981 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:14:40 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:14:40 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:14:40 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:14:40 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:14:40.857 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:14:40 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:14:40 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:14:40 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:14:40.984 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:14:41 np0005548731 nova_compute[232433]: 2025-12-06 08:14:41.847 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:14:41 np0005548731 nova_compute[232433]: 2025-12-06 08:14:41.918 232437 INFO nova.compute.manager [None req-5c03d480-607b-4e99-acd2-01d2cceb2ec9 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: b2ee70f1-6a92-4de1-a8d3-63f69565e29e] Get console output#033[00m
Dec  6 03:14:41 np0005548731 nova_compute[232433]: 2025-12-06 08:14:41.922 261230 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Dec  6 03:14:42 np0005548731 nova_compute[232433]: 2025-12-06 08:14:42.294 232437 DEBUG nova.objects.instance [None req-896bff9c-7395-4abe-a114-c3f184ccc284 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Lazy-loading 'pci_devices' on Instance uuid b2ee70f1-6a92-4de1-a8d3-63f69565e29e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 03:14:42 np0005548731 nova_compute[232433]: 2025-12-06 08:14:42.332 232437 DEBUG nova.virt.driver [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Emitting event <LifecycleEvent: 1765008882.3317928, b2ee70f1-6a92-4de1-a8d3-63f69565e29e => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 03:14:42 np0005548731 nova_compute[232433]: 2025-12-06 08:14:42.332 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: b2ee70f1-6a92-4de1-a8d3-63f69565e29e] VM Paused (Lifecycle Event)#033[00m
Dec  6 03:14:42 np0005548731 nova_compute[232433]: 2025-12-06 08:14:42.365 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: b2ee70f1-6a92-4de1-a8d3-63f69565e29e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 03:14:42 np0005548731 nova_compute[232433]: 2025-12-06 08:14:42.373 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: b2ee70f1-6a92-4de1-a8d3-63f69565e29e] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: suspending, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  6 03:14:42 np0005548731 nova_compute[232433]: 2025-12-06 08:14:42.407 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: b2ee70f1-6a92-4de1-a8d3-63f69565e29e] During sync_power_state the instance has a pending task (suspending). Skip.#033[00m
Dec  6 03:14:42 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:14:42 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:14:42 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:14:42.859 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:14:42 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:14:42 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:14:42 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:14:42.988 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:14:43 np0005548731 nova_compute[232433]: 2025-12-06 08:14:43.214 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:14:43 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 03:14:43 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 03:14:43 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Dec  6 03:14:43 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Dec  6 03:14:43 np0005548731 podman[335343]: 2025-12-06 08:14:43.262328427 +0000 UTC m=+0.286457598 container exec 541e7276f6038dd900483907d1613bdfcbf99ea3714cf53a920a7189986fab54 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-mon-compute-2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Dec  6 03:14:43 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:14:43.263 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=94, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:ec:b3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '32:72:e7:89:e0:7d'}, ipsec=False) old=SB_Global(nb_cfg=93) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 03:14:43 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:14:43.265 143965 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec  6 03:14:43 np0005548731 nova_compute[232433]: 2025-12-06 08:14:43.264 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:14:43 np0005548731 kernel: tap9c5705d0-f5 (unregistering): left promiscuous mode
Dec  6 03:14:43 np0005548731 NetworkManager[49182]: <info>  [1765008883.2721] device (tap9c5705d0-f5): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec  6 03:14:43 np0005548731 ovn_controller[133927]: 2025-12-06T08:14:43Z|01033|binding|INFO|Releasing lport 9c5705d0-f551-4726-a476-7a9060442f88 from this chassis (sb_readonly=0)
Dec  6 03:14:43 np0005548731 nova_compute[232433]: 2025-12-06 08:14:43.280 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:14:43 np0005548731 ovn_controller[133927]: 2025-12-06T08:14:43Z|01034|binding|INFO|Setting lport 9c5705d0-f551-4726-a476-7a9060442f88 down in Southbound
Dec  6 03:14:43 np0005548731 ovn_controller[133927]: 2025-12-06T08:14:43Z|01035|binding|INFO|Removing iface tap9c5705d0-f5 ovn-installed in OVS
Dec  6 03:14:43 np0005548731 nova_compute[232433]: 2025-12-06 08:14:43.283 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:14:43 np0005548731 nova_compute[232433]: 2025-12-06 08:14:43.299 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:14:43 np0005548731 systemd[1]: machine-qemu\x2d104\x2dinstance\x2d000000ca.scope: Deactivated successfully.
Dec  6 03:14:43 np0005548731 systemd[1]: machine-qemu\x2d104\x2dinstance\x2d000000ca.scope: Consumed 14.242s CPU time.
Dec  6 03:14:43 np0005548731 systemd-machined[195355]: Machine qemu-104-instance-000000ca terminated.
Dec  6 03:14:43 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:14:43.343 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d2:bf:3b 10.100.0.5'], port_security=['fa:16:3e:d2:bf:3b 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'b2ee70f1-6a92-4de1-a8d3-63f69565e29e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a5a2ec32-8155-4bca-8340-a3a1ef336f77', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'fd8e24e430c64364ace789d88a68ba5f', 'neutron:revision_number': '4', 'neutron:security_group_ids': '2b618342-4b21-4494-867f-57d49b127e96', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com', 'neutron:port_fip': '192.168.122.202'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e7e181fc-238b-465b-9a13-8c46b34bbffb, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], logical_port=9c5705d0-f551-4726-a476-7a9060442f88) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 03:14:43 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:14:43.344 143965 INFO neutron.agent.ovn.metadata.agent [-] Port 9c5705d0-f551-4726-a476-7a9060442f88 in datapath a5a2ec32-8155-4bca-8340-a3a1ef336f77 unbound from our chassis#033[00m
Dec  6 03:14:43 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:14:43.345 143965 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network a5a2ec32-8155-4bca-8340-a3a1ef336f77, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec  6 03:14:43 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:14:43.346 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[25900fab-d19d-4a42-8cc6-3c4ad423f1a6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:14:43 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:14:43.346 143965 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-a5a2ec32-8155-4bca-8340-a3a1ef336f77 namespace which is not needed anymore#033[00m
Dec  6 03:14:43 np0005548731 podman[335343]: 2025-12-06 08:14:43.366873823 +0000 UTC m=+0.391002994 container exec_died 541e7276f6038dd900483907d1613bdfcbf99ea3714cf53a920a7189986fab54 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-mon-compute-2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Dec  6 03:14:43 np0005548731 nova_compute[232433]: 2025-12-06 08:14:43.445 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:14:43 np0005548731 nova_compute[232433]: 2025-12-06 08:14:43.454 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:14:43 np0005548731 nova_compute[232433]: 2025-12-06 08:14:43.462 232437 DEBUG nova.compute.manager [None req-896bff9c-7395-4abe-a114-c3f184ccc284 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: b2ee70f1-6a92-4de1-a8d3-63f69565e29e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 03:14:43 np0005548731 neutron-haproxy-ovnmeta-a5a2ec32-8155-4bca-8340-a3a1ef336f77[334855]: [NOTICE]   (334859) : haproxy version is 2.8.14-c23fe91
Dec  6 03:14:43 np0005548731 neutron-haproxy-ovnmeta-a5a2ec32-8155-4bca-8340-a3a1ef336f77[334855]: [NOTICE]   (334859) : path to executable is /usr/sbin/haproxy
Dec  6 03:14:43 np0005548731 neutron-haproxy-ovnmeta-a5a2ec32-8155-4bca-8340-a3a1ef336f77[334855]: [WARNING]  (334859) : Exiting Master process...
Dec  6 03:14:43 np0005548731 neutron-haproxy-ovnmeta-a5a2ec32-8155-4bca-8340-a3a1ef336f77[334855]: [ALERT]    (334859) : Current worker (334861) exited with code 143 (Terminated)
Dec  6 03:14:43 np0005548731 neutron-haproxy-ovnmeta-a5a2ec32-8155-4bca-8340-a3a1ef336f77[334855]: [WARNING]  (334859) : All workers exited. Exiting... (0)
Dec  6 03:14:43 np0005548731 systemd[1]: libpod-78274f2c2361a134a60e344f3a9ad035fb8523e0c154b991b33e399af70a56bf.scope: Deactivated successfully.
Dec  6 03:14:43 np0005548731 podman[335420]: 2025-12-06 08:14:43.523042996 +0000 UTC m=+0.045788166 container died 78274f2c2361a134a60e344f3a9ad035fb8523e0c154b991b33e399af70a56bf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a5a2ec32-8155-4bca-8340-a3a1ef336f77, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125)
Dec  6 03:14:43 np0005548731 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-78274f2c2361a134a60e344f3a9ad035fb8523e0c154b991b33e399af70a56bf-userdata-shm.mount: Deactivated successfully.
Dec  6 03:14:43 np0005548731 systemd[1]: var-lib-containers-storage-overlay-cd839f8798c1bae00c40bde81e798bdd11ad7e82daa130ced89483d939d5e15a-merged.mount: Deactivated successfully.
Dec  6 03:14:43 np0005548731 podman[335420]: 2025-12-06 08:14:43.573205668 +0000 UTC m=+0.095950828 container cleanup 78274f2c2361a134a60e344f3a9ad035fb8523e0c154b991b33e399af70a56bf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a5a2ec32-8155-4bca-8340-a3a1ef336f77, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Dec  6 03:14:43 np0005548731 systemd[1]: libpod-conmon-78274f2c2361a134a60e344f3a9ad035fb8523e0c154b991b33e399af70a56bf.scope: Deactivated successfully.
Dec  6 03:14:43 np0005548731 podman[335462]: 2025-12-06 08:14:43.639394569 +0000 UTC m=+0.042424704 container remove 78274f2c2361a134a60e344f3a9ad035fb8523e0c154b991b33e399af70a56bf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a5a2ec32-8155-4bca-8340-a3a1ef336f77, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Dec  6 03:14:43 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:14:43.644 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[696a203e-2b7b-44a5-8e64-c7df81912dfa]: (4, ('Sat Dec  6 08:14:43 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-a5a2ec32-8155-4bca-8340-a3a1ef336f77 (78274f2c2361a134a60e344f3a9ad035fb8523e0c154b991b33e399af70a56bf)\n78274f2c2361a134a60e344f3a9ad035fb8523e0c154b991b33e399af70a56bf\nSat Dec  6 08:14:43 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-a5a2ec32-8155-4bca-8340-a3a1ef336f77 (78274f2c2361a134a60e344f3a9ad035fb8523e0c154b991b33e399af70a56bf)\n78274f2c2361a134a60e344f3a9ad035fb8523e0c154b991b33e399af70a56bf\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:14:43 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:14:43.646 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[8c527bea-cc46-4b49-8854-417d7f9275b1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:14:43 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:14:43.647 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa5a2ec32-80, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 03:14:43 np0005548731 nova_compute[232433]: 2025-12-06 08:14:43.649 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:14:43 np0005548731 kernel: tapa5a2ec32-80: left promiscuous mode
Dec  6 03:14:43 np0005548731 nova_compute[232433]: 2025-12-06 08:14:43.667 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:14:43 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:14:43.670 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[71c0fb77-775e-42e0-9a30-59459e688b33]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:14:43 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:14:43.684 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[149ec196-ce8b-4488-bf54-18eff6860ba0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:14:43 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:14:43.685 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[c10ac653-6777-483d-9dc9-683e927fb561]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:14:43 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:14:43.699 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[cb0b8c2c-34b4-453d-a6b1-a6e9ab9a56d0]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 912184, 'reachable_time': 28478, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 335503, 'error': None, 'target': 'ovnmeta-a5a2ec32-8155-4bca-8340-a3a1ef336f77', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:14:43 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:14:43.701 144079 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-a5a2ec32-8155-4bca-8340-a3a1ef336f77 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Dec  6 03:14:43 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:14:43.702 144079 DEBUG oslo.privsep.daemon [-] privsep: reply[923a886f-6e23-4d70-8ba2-ad122b019533]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:14:43 np0005548731 systemd[1]: run-netns-ovnmeta\x2da5a2ec32\x2d8155\x2d4bca\x2d8340\x2da3a1ef336f77.mount: Deactivated successfully.
Dec  6 03:14:44 np0005548731 podman[335568]: 2025-12-06 08:14:44.086652843 +0000 UTC m=+0.073355359 container exec 42755d5ddab81b86d84b40c9464a8af637e2d196c1a098a61593efb6a9875167 (image=quay.io/ceph/haproxy:2.3, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-haproxy-rgw-default-compute-2-nyemkw)
Dec  6 03:14:44 np0005548731 podman[335568]: 2025-12-06 08:14:44.124117254 +0000 UTC m=+0.110819760 container exec_died 42755d5ddab81b86d84b40c9464a8af637e2d196c1a098a61593efb6a9875167 (image=quay.io/ceph/haproxy:2.3, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-haproxy-rgw-default-compute-2-nyemkw)
Dec  6 03:14:44 np0005548731 podman[335630]: 2025-12-06 08:14:44.366322423 +0000 UTC m=+0.060390932 container exec 60f905acca99db805341691264f7f91f9f4f43c91aa728d21396595dfca7cf39 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-keepalived-rgw-default-compute-2-cossgt, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, architecture=x86_64, release=1793, distribution-scope=public, name=keepalived, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2023-02-22T09:23:20, description=keepalived for Ceph, io.buildah.version=1.28.2, summary=Provides keepalived on RHEL 9 for Ceph., io.openshift.expose-services=, vcs-type=git, io.openshift.tags=Ceph keepalived, vendor=Red Hat, Inc., com.redhat.component=keepalived-container, version=2.2.4, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Keepalived on RHEL 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9)
Dec  6 03:14:44 np0005548731 podman[335630]: 2025-12-06 08:14:44.377373752 +0000 UTC m=+0.071442281 container exec_died 60f905acca99db805341691264f7f91f9f4f43c91aa728d21396595dfca7cf39 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-keepalived-rgw-default-compute-2-cossgt, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, architecture=x86_64, distribution-scope=public, name=keepalived, com.redhat.component=keepalived-container, vcs-type=git, version=2.2.4, build-date=2023-02-22T09:23:20, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, description=keepalived for Ceph, io.buildah.version=1.28.2, release=1793, io.openshift.expose-services=, io.openshift.tags=Ceph keepalived, summary=Provides keepalived on RHEL 9 for Ceph., vendor=Red Hat, Inc., io.k8s.display-name=Keepalived on RHEL 9, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec  6 03:14:44 np0005548731 nova_compute[232433]: 2025-12-06 08:14:44.497 232437 DEBUG nova.compute.manager [req-3103cf9f-fabc-434d-84e8-5ef42d0575c4 req-b89dacc8-01bd-473e-8bdd-2914e7fd6a13 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: b2ee70f1-6a92-4de1-a8d3-63f69565e29e] Received event network-vif-unplugged-9c5705d0-f551-4726-a476-7a9060442f88 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 03:14:44 np0005548731 nova_compute[232433]: 2025-12-06 08:14:44.498 232437 DEBUG oslo_concurrency.lockutils [req-3103cf9f-fabc-434d-84e8-5ef42d0575c4 req-b89dacc8-01bd-473e-8bdd-2914e7fd6a13 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "b2ee70f1-6a92-4de1-a8d3-63f69565e29e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:14:44 np0005548731 nova_compute[232433]: 2025-12-06 08:14:44.498 232437 DEBUG oslo_concurrency.lockutils [req-3103cf9f-fabc-434d-84e8-5ef42d0575c4 req-b89dacc8-01bd-473e-8bdd-2914e7fd6a13 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "b2ee70f1-6a92-4de1-a8d3-63f69565e29e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:14:44 np0005548731 nova_compute[232433]: 2025-12-06 08:14:44.499 232437 DEBUG oslo_concurrency.lockutils [req-3103cf9f-fabc-434d-84e8-5ef42d0575c4 req-b89dacc8-01bd-473e-8bdd-2914e7fd6a13 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "b2ee70f1-6a92-4de1-a8d3-63f69565e29e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:14:44 np0005548731 nova_compute[232433]: 2025-12-06 08:14:44.499 232437 DEBUG nova.compute.manager [req-3103cf9f-fabc-434d-84e8-5ef42d0575c4 req-b89dacc8-01bd-473e-8bdd-2914e7fd6a13 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: b2ee70f1-6a92-4de1-a8d3-63f69565e29e] No waiting events found dispatching network-vif-unplugged-9c5705d0-f551-4726-a476-7a9060442f88 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 03:14:44 np0005548731 nova_compute[232433]: 2025-12-06 08:14:44.499 232437 WARNING nova.compute.manager [req-3103cf9f-fabc-434d-84e8-5ef42d0575c4 req-b89dacc8-01bd-473e-8bdd-2914e7fd6a13 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: b2ee70f1-6a92-4de1-a8d3-63f69565e29e] Received unexpected event network-vif-unplugged-9c5705d0-f551-4726-a476-7a9060442f88 for instance with vm_state suspended and task_state None.#033[00m
Dec  6 03:14:44 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:14:44 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:14:44 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:14:44.862 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:14:44 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:14:44 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:14:44 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:14:44.991 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:14:45 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:14:45 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 03:14:45 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 03:14:45 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  6 03:14:45 np0005548731 nova_compute[232433]: 2025-12-06 08:14:45.995 232437 INFO nova.compute.manager [None req-cbd4255f-71f4-4b09-af63-439fbc4b1512 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: b2ee70f1-6a92-4de1-a8d3-63f69565e29e] Get console output#033[00m
Dec  6 03:14:46 np0005548731 nova_compute[232433]: 2025-12-06 08:14:46.397 232437 INFO nova.compute.manager [None req-9fd5ec9a-44e6-472f-899a-df2ed8dc0b13 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: b2ee70f1-6a92-4de1-a8d3-63f69565e29e] Resuming#033[00m
Dec  6 03:14:46 np0005548731 nova_compute[232433]: 2025-12-06 08:14:46.398 232437 DEBUG nova.objects.instance [None req-9fd5ec9a-44e6-472f-899a-df2ed8dc0b13 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Lazy-loading 'flavor' on Instance uuid b2ee70f1-6a92-4de1-a8d3-63f69565e29e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 03:14:46 np0005548731 nova_compute[232433]: 2025-12-06 08:14:46.566 232437 DEBUG oslo_concurrency.lockutils [None req-9fd5ec9a-44e6-472f-899a-df2ed8dc0b13 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Acquiring lock "refresh_cache-b2ee70f1-6a92-4de1-a8d3-63f69565e29e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 03:14:46 np0005548731 nova_compute[232433]: 2025-12-06 08:14:46.566 232437 DEBUG oslo_concurrency.lockutils [None req-9fd5ec9a-44e6-472f-899a-df2ed8dc0b13 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Acquired lock "refresh_cache-b2ee70f1-6a92-4de1-a8d3-63f69565e29e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 03:14:46 np0005548731 nova_compute[232433]: 2025-12-06 08:14:46.567 232437 DEBUG nova.network.neutron [None req-9fd5ec9a-44e6-472f-899a-df2ed8dc0b13 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: b2ee70f1-6a92-4de1-a8d3-63f69565e29e] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec  6 03:14:46 np0005548731 nova_compute[232433]: 2025-12-06 08:14:46.689 232437 DEBUG nova.compute.manager [req-095e71c6-c9ba-472b-8c78-8b15b3b2f86f req-d29e99e3-d520-4e4e-b2d5-91b035608ad0 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: b2ee70f1-6a92-4de1-a8d3-63f69565e29e] Received event network-vif-plugged-9c5705d0-f551-4726-a476-7a9060442f88 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 03:14:46 np0005548731 nova_compute[232433]: 2025-12-06 08:14:46.690 232437 DEBUG oslo_concurrency.lockutils [req-095e71c6-c9ba-472b-8c78-8b15b3b2f86f req-d29e99e3-d520-4e4e-b2d5-91b035608ad0 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "b2ee70f1-6a92-4de1-a8d3-63f69565e29e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:14:46 np0005548731 nova_compute[232433]: 2025-12-06 08:14:46.690 232437 DEBUG oslo_concurrency.lockutils [req-095e71c6-c9ba-472b-8c78-8b15b3b2f86f req-d29e99e3-d520-4e4e-b2d5-91b035608ad0 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "b2ee70f1-6a92-4de1-a8d3-63f69565e29e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:14:46 np0005548731 nova_compute[232433]: 2025-12-06 08:14:46.691 232437 DEBUG oslo_concurrency.lockutils [req-095e71c6-c9ba-472b-8c78-8b15b3b2f86f req-d29e99e3-d520-4e4e-b2d5-91b035608ad0 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "b2ee70f1-6a92-4de1-a8d3-63f69565e29e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:14:46 np0005548731 nova_compute[232433]: 2025-12-06 08:14:46.691 232437 DEBUG nova.compute.manager [req-095e71c6-c9ba-472b-8c78-8b15b3b2f86f req-d29e99e3-d520-4e4e-b2d5-91b035608ad0 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: b2ee70f1-6a92-4de1-a8d3-63f69565e29e] No waiting events found dispatching network-vif-plugged-9c5705d0-f551-4726-a476-7a9060442f88 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 03:14:46 np0005548731 nova_compute[232433]: 2025-12-06 08:14:46.692 232437 WARNING nova.compute.manager [req-095e71c6-c9ba-472b-8c78-8b15b3b2f86f req-d29e99e3-d520-4e4e-b2d5-91b035608ad0 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: b2ee70f1-6a92-4de1-a8d3-63f69565e29e] Received unexpected event network-vif-plugged-9c5705d0-f551-4726-a476-7a9060442f88 for instance with vm_state suspended and task_state resuming.#033[00m
Dec  6 03:14:46 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 03:14:46 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  6 03:14:46 np0005548731 nova_compute[232433]: 2025-12-06 08:14:46.848 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:14:46 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:14:46 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:14:46 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:14:46.865 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:14:46 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:14:46 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:14:46 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:14:46.994 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:14:47 np0005548731 nova_compute[232433]: 2025-12-06 08:14:47.118 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:14:47 np0005548731 nova_compute[232433]: 2025-12-06 08:14:47.118 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  6 03:14:47 np0005548731 nova_compute[232433]: 2025-12-06 08:14:47.118 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  6 03:14:47 np0005548731 nova_compute[232433]: 2025-12-06 08:14:47.179 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Acquiring lock "refresh_cache-b2ee70f1-6a92-4de1-a8d3-63f69565e29e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 03:14:48 np0005548731 nova_compute[232433]: 2025-12-06 08:14:48.217 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:14:48 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:14:48.267 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=9f96b960-b4f2-40bd-ae99-08121f5e8b78, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '94'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 03:14:48 np0005548731 nova_compute[232433]: 2025-12-06 08:14:48.837 232437 DEBUG nova.network.neutron [None req-9fd5ec9a-44e6-472f-899a-df2ed8dc0b13 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: b2ee70f1-6a92-4de1-a8d3-63f69565e29e] Updating instance_info_cache with network_info: [{"id": "9c5705d0-f551-4726-a476-7a9060442f88", "address": "fa:16:3e:d2:bf:3b", "network": {"id": "a5a2ec32-8155-4bca-8340-a3a1ef336f77", "bridge": "br-int", "label": "tempest-network-smoke--2053374407", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.202", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fd8e24e430c64364ace789d88a68ba5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9c5705d0-f5", "ovs_interfaceid": "9c5705d0-f551-4726-a476-7a9060442f88", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 03:14:48 np0005548731 nova_compute[232433]: 2025-12-06 08:14:48.865 232437 DEBUG oslo_concurrency.lockutils [None req-9fd5ec9a-44e6-472f-899a-df2ed8dc0b13 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Releasing lock "refresh_cache-b2ee70f1-6a92-4de1-a8d3-63f69565e29e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 03:14:48 np0005548731 nova_compute[232433]: 2025-12-06 08:14:48.866 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Acquired lock "refresh_cache-b2ee70f1-6a92-4de1-a8d3-63f69565e29e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 03:14:48 np0005548731 nova_compute[232433]: 2025-12-06 08:14:48.866 232437 DEBUG nova.network.neutron [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] [instance: b2ee70f1-6a92-4de1-a8d3-63f69565e29e] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Dec  6 03:14:48 np0005548731 nova_compute[232433]: 2025-12-06 08:14:48.867 232437 DEBUG nova.objects.instance [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lazy-loading 'info_cache' on Instance uuid b2ee70f1-6a92-4de1-a8d3-63f69565e29e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 03:14:48 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:14:48 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:14:48 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:14:48.868 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:14:48 np0005548731 nova_compute[232433]: 2025-12-06 08:14:48.881 232437 DEBUG nova.virt.libvirt.vif [None req-9fd5ec9a-44e6-472f-899a-df2ed8dc0b13 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T08:14:09Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1967673920',display_name='tempest-TestNetworkAdvancedServerOps-server-1967673920',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1967673920',id=202,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBE7K+0TQYdf6/UDgfdSEP1ZSscXf10O1YcJvJ4OnffpKNQOT8fnXRuOBwRzeckYQj5cV9UoztH3ENvALVUX8qfmg5l022XYIuytOwU07Yk+vPdHH//qvTGSTmErglU1U5g==',key_name='tempest-TestNetworkAdvancedServerOps-1235494473',keypairs=<?>,launch_index=0,launched_at=2025-12-06T08:14:21Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='fd8e24e430c64364ace789d88a68ba5f',ramdisk_id='',reservation_id='r-kygr1lne',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-TestNetworkAdvancedServerOps-1171852383',owner_user_name='tempest-TestNetworkAdvancedServerOps-1171852383-project-member'},tags=<?>,task_state='resuming',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T08:14:43Z,user_data=None,user_id='2ed2d17026504d70b893923a85cece4d',uuid=b2ee70f1-6a92-4de1-a8d3-63f69565e29e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='suspended') vif={"id": "9c5705d0-f551-4726-a476-7a9060442f88", "address": "fa:16:3e:d2:bf:3b", "network": {"id": "a5a2ec32-8155-4bca-8340-a3a1ef336f77", "bridge": "br-int", "label": "tempest-network-smoke--2053374407", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": 
[{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.202", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fd8e24e430c64364ace789d88a68ba5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9c5705d0-f5", "ovs_interfaceid": "9c5705d0-f551-4726-a476-7a9060442f88", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec  6 03:14:48 np0005548731 nova_compute[232433]: 2025-12-06 08:14:48.882 232437 DEBUG nova.network.os_vif_util [None req-9fd5ec9a-44e6-472f-899a-df2ed8dc0b13 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Converting VIF {"id": "9c5705d0-f551-4726-a476-7a9060442f88", "address": "fa:16:3e:d2:bf:3b", "network": {"id": "a5a2ec32-8155-4bca-8340-a3a1ef336f77", "bridge": "br-int", "label": "tempest-network-smoke--2053374407", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.202", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fd8e24e430c64364ace789d88a68ba5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9c5705d0-f5", "ovs_interfaceid": "9c5705d0-f551-4726-a476-7a9060442f88", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 03:14:48 np0005548731 nova_compute[232433]: 2025-12-06 08:14:48.883 232437 DEBUG nova.network.os_vif_util [None req-9fd5ec9a-44e6-472f-899a-df2ed8dc0b13 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d2:bf:3b,bridge_name='br-int',has_traffic_filtering=True,id=9c5705d0-f551-4726-a476-7a9060442f88,network=Network(a5a2ec32-8155-4bca-8340-a3a1ef336f77),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9c5705d0-f5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 03:14:48 np0005548731 nova_compute[232433]: 2025-12-06 08:14:48.884 232437 DEBUG os_vif [None req-9fd5ec9a-44e6-472f-899a-df2ed8dc0b13 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d2:bf:3b,bridge_name='br-int',has_traffic_filtering=True,id=9c5705d0-f551-4726-a476-7a9060442f88,network=Network(a5a2ec32-8155-4bca-8340-a3a1ef336f77),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9c5705d0-f5') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec  6 03:14:48 np0005548731 nova_compute[232433]: 2025-12-06 08:14:48.884 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:14:48 np0005548731 nova_compute[232433]: 2025-12-06 08:14:48.885 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 03:14:48 np0005548731 nova_compute[232433]: 2025-12-06 08:14:48.886 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  6 03:14:48 np0005548731 nova_compute[232433]: 2025-12-06 08:14:48.891 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:14:48 np0005548731 nova_compute[232433]: 2025-12-06 08:14:48.892 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9c5705d0-f5, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 03:14:48 np0005548731 nova_compute[232433]: 2025-12-06 08:14:48.892 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap9c5705d0-f5, col_values=(('external_ids', {'iface-id': '9c5705d0-f551-4726-a476-7a9060442f88', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:d2:bf:3b', 'vm-uuid': 'b2ee70f1-6a92-4de1-a8d3-63f69565e29e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 03:14:48 np0005548731 nova_compute[232433]: 2025-12-06 08:14:48.893 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  6 03:14:48 np0005548731 nova_compute[232433]: 2025-12-06 08:14:48.893 232437 INFO os_vif [None req-9fd5ec9a-44e6-472f-899a-df2ed8dc0b13 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d2:bf:3b,bridge_name='br-int',has_traffic_filtering=True,id=9c5705d0-f551-4726-a476-7a9060442f88,network=Network(a5a2ec32-8155-4bca-8340-a3a1ef336f77),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9c5705d0-f5')#033[00m
Dec  6 03:14:48 np0005548731 nova_compute[232433]: 2025-12-06 08:14:48.915 232437 DEBUG nova.objects.instance [None req-9fd5ec9a-44e6-472f-899a-df2ed8dc0b13 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Lazy-loading 'numa_topology' on Instance uuid b2ee70f1-6a92-4de1-a8d3-63f69565e29e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 03:14:48 np0005548731 kernel: tap9c5705d0-f5: entered promiscuous mode
Dec  6 03:14:49 np0005548731 NetworkManager[49182]: <info>  [1765008888.9857] manager: (tap9c5705d0-f5): new Tun device (/org/freedesktop/NetworkManager/Devices/489)
Dec  6 03:14:49 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:14:49 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:14:49 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:14:48.998 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:14:49 np0005548731 systemd-udevd[335809]: Network interface NamePolicy= disabled on kernel command line.
Dec  6 03:14:49 np0005548731 nova_compute[232433]: 2025-12-06 08:14:49.030 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:14:49 np0005548731 ovn_controller[133927]: 2025-12-06T08:14:49Z|01036|binding|INFO|Claiming lport 9c5705d0-f551-4726-a476-7a9060442f88 for this chassis.
Dec  6 03:14:49 np0005548731 ovn_controller[133927]: 2025-12-06T08:14:49Z|01037|binding|INFO|9c5705d0-f551-4726-a476-7a9060442f88: Claiming fa:16:3e:d2:bf:3b 10.100.0.5
Dec  6 03:14:49 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:14:49.038 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d2:bf:3b 10.100.0.5'], port_security=['fa:16:3e:d2:bf:3b 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'b2ee70f1-6a92-4de1-a8d3-63f69565e29e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a5a2ec32-8155-4bca-8340-a3a1ef336f77', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'fd8e24e430c64364ace789d88a68ba5f', 'neutron:revision_number': '5', 'neutron:security_group_ids': '2b618342-4b21-4494-867f-57d49b127e96', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.202'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e7e181fc-238b-465b-9a13-8c46b34bbffb, chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], logical_port=9c5705d0-f551-4726-a476-7a9060442f88) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 03:14:49 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:14:49.039 143965 INFO neutron.agent.ovn.metadata.agent [-] Port 9c5705d0-f551-4726-a476-7a9060442f88 in datapath a5a2ec32-8155-4bca-8340-a3a1ef336f77 bound to our chassis#033[00m
Dec  6 03:14:49 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:14:49.040 143965 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network a5a2ec32-8155-4bca-8340-a3a1ef336f77#033[00m
Dec  6 03:14:49 np0005548731 ovn_controller[133927]: 2025-12-06T08:14:49Z|01038|binding|INFO|Setting lport 9c5705d0-f551-4726-a476-7a9060442f88 ovn-installed in OVS
Dec  6 03:14:49 np0005548731 ovn_controller[133927]: 2025-12-06T08:14:49Z|01039|binding|INFO|Setting lport 9c5705d0-f551-4726-a476-7a9060442f88 up in Southbound
Dec  6 03:14:49 np0005548731 NetworkManager[49182]: <info>  [1765008889.0477] device (tap9c5705d0-f5): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec  6 03:14:49 np0005548731 NetworkManager[49182]: <info>  [1765008889.0489] device (tap9c5705d0-f5): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec  6 03:14:49 np0005548731 nova_compute[232433]: 2025-12-06 08:14:49.051 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:14:49 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:14:49.055 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[f98225d0-357c-4129-ad12-77a6870c155a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:14:49 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:14:49.056 143965 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapa5a2ec32-81 in ovnmeta-a5a2ec32-8155-4bca-8340-a3a1ef336f77 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Dec  6 03:14:49 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:14:49.058 236668 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapa5a2ec32-80 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Dec  6 03:14:49 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:14:49.058 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[2b1aa734-be7b-4697-97d9-4adf41b816da]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:14:49 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:14:49.059 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[b980c045-1dba-4bcd-af85-5de9c1cec5f0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:14:49 np0005548731 systemd-machined[195355]: New machine qemu-105-instance-000000ca.
Dec  6 03:14:49 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:14:49.072 144079 DEBUG oslo.privsep.daemon [-] privsep: reply[e3a8c81d-0d9b-4bc4-844a-d11149140781]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:14:49 np0005548731 systemd[1]: Started Virtual Machine qemu-105-instance-000000ca.
Dec  6 03:14:49 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:14:49.095 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[f9173153-9047-4929-8e6c-b83151760dd9]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:14:49 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:14:49.139 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[00514944-308b-4b43-b16b-54ad852a6980]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:14:49 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:14:49.144 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[f4d84f42-8dea-4f56-9193-7514df82e485]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:14:49 np0005548731 NetworkManager[49182]: <info>  [1765008889.1462] manager: (tapa5a2ec32-80): new Veth device (/org/freedesktop/NetworkManager/Devices/490)
Dec  6 03:14:49 np0005548731 systemd-udevd[335814]: Network interface NamePolicy= disabled on kernel command line.
Dec  6 03:14:49 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:14:49.176 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[158c66c9-04a8-4652-abe4-eafdcb570eb5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:14:49 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:14:49.179 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[2f3c7b84-4b7d-4275-b564-584fecdf10ed]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:14:49 np0005548731 NetworkManager[49182]: <info>  [1765008889.2015] device (tapa5a2ec32-80): carrier: link connected
Dec  6 03:14:49 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:14:49.206 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[59587bbb-37f8-4b99-a522-432d79a4284b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:14:49 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:14:49.225 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[c9e453fa-e8c7-418d-8b17-3eb26823b6b8]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa5a2ec32-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d0:24:c7'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 317], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 915120, 'reachable_time': 26316, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 335845, 'error': None, 'target': 'ovnmeta-a5a2ec32-8155-4bca-8340-a3a1ef336f77', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:14:49 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:14:49.240 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[81ceccce-ac8c-4213-9cd5-99ed4b6b8547]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fed0:24c7'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 915120, 'tstamp': 915120}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 335853, 'error': None, 'target': 'ovnmeta-a5a2ec32-8155-4bca-8340-a3a1ef336f77', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:14:49 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:14:49.260 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[dc601fe3-1a2b-4867-94ae-02b253ce2589]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa5a2ec32-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d0:24:c7'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 317], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 915120, 'reachable_time': 26316, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 335863, 'error': None, 'target': 'ovnmeta-a5a2ec32-8155-4bca-8340-a3a1ef336f77', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:14:49 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:14:49.294 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[18e04f4f-103d-4a81-a100-27a0a1be24ec]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:14:49 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:14:49.356 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[5992b8c8-70db-455f-9882-cf7c7c177eb2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:14:49 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:14:49.357 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa5a2ec32-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 03:14:49 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:14:49.357 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  6 03:14:49 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:14:49.357 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa5a2ec32-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 03:14:49 np0005548731 NetworkManager[49182]: <info>  [1765008889.3599] manager: (tapa5a2ec32-80): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/491)
Dec  6 03:14:49 np0005548731 nova_compute[232433]: 2025-12-06 08:14:49.360 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:14:49 np0005548731 kernel: tapa5a2ec32-80: entered promiscuous mode
Dec  6 03:14:49 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:14:49.363 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapa5a2ec32-80, col_values=(('external_ids', {'iface-id': 'e4bc5e15-f64d-44cd-a19a-57bd42fe3ca9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 03:14:49 np0005548731 ovn_controller[133927]: 2025-12-06T08:14:49Z|01040|binding|INFO|Releasing lport e4bc5e15-f64d-44cd-a19a-57bd42fe3ca9 from this chassis (sb_readonly=0)
Dec  6 03:14:49 np0005548731 nova_compute[232433]: 2025-12-06 08:14:49.365 232437 DEBUG nova.compute.manager [req-3eb8f303-a0e5-487a-b577-62da5b79402d req-ade1d613-a96c-4da4-a722-11d8cb9d0fed 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: b2ee70f1-6a92-4de1-a8d3-63f69565e29e] Received event network-vif-plugged-9c5705d0-f551-4726-a476-7a9060442f88 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 03:14:49 np0005548731 nova_compute[232433]: 2025-12-06 08:14:49.365 232437 DEBUG oslo_concurrency.lockutils [req-3eb8f303-a0e5-487a-b577-62da5b79402d req-ade1d613-a96c-4da4-a722-11d8cb9d0fed 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "b2ee70f1-6a92-4de1-a8d3-63f69565e29e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:14:49 np0005548731 nova_compute[232433]: 2025-12-06 08:14:49.366 232437 DEBUG oslo_concurrency.lockutils [req-3eb8f303-a0e5-487a-b577-62da5b79402d req-ade1d613-a96c-4da4-a722-11d8cb9d0fed 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "b2ee70f1-6a92-4de1-a8d3-63f69565e29e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:14:49 np0005548731 nova_compute[232433]: 2025-12-06 08:14:49.366 232437 DEBUG oslo_concurrency.lockutils [req-3eb8f303-a0e5-487a-b577-62da5b79402d req-ade1d613-a96c-4da4-a722-11d8cb9d0fed 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "b2ee70f1-6a92-4de1-a8d3-63f69565e29e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:14:49 np0005548731 nova_compute[232433]: 2025-12-06 08:14:49.366 232437 DEBUG nova.compute.manager [req-3eb8f303-a0e5-487a-b577-62da5b79402d req-ade1d613-a96c-4da4-a722-11d8cb9d0fed 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: b2ee70f1-6a92-4de1-a8d3-63f69565e29e] No waiting events found dispatching network-vif-plugged-9c5705d0-f551-4726-a476-7a9060442f88 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 03:14:49 np0005548731 nova_compute[232433]: 2025-12-06 08:14:49.366 232437 WARNING nova.compute.manager [req-3eb8f303-a0e5-487a-b577-62da5b79402d req-ade1d613-a96c-4da4-a722-11d8cb9d0fed 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: b2ee70f1-6a92-4de1-a8d3-63f69565e29e] Received unexpected event network-vif-plugged-9c5705d0-f551-4726-a476-7a9060442f88 for instance with vm_state suspended and task_state resuming.#033[00m
Dec  6 03:14:49 np0005548731 nova_compute[232433]: 2025-12-06 08:14:49.367 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:14:49 np0005548731 nova_compute[232433]: 2025-12-06 08:14:49.383 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:14:49 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:14:49.384 143965 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/a5a2ec32-8155-4bca-8340-a3a1ef336f77.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/a5a2ec32-8155-4bca-8340-a3a1ef336f77.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Dec  6 03:14:49 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:14:49.385 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[43003379-90fa-4dcc-a5b7-6096afd36d28]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:14:49 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:14:49.386 143965 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec  6 03:14:49 np0005548731 ovn_metadata_agent[143960]: global
Dec  6 03:14:49 np0005548731 ovn_metadata_agent[143960]:    log         /dev/log local0 debug
Dec  6 03:14:49 np0005548731 ovn_metadata_agent[143960]:    log-tag     haproxy-metadata-proxy-a5a2ec32-8155-4bca-8340-a3a1ef336f77
Dec  6 03:14:49 np0005548731 ovn_metadata_agent[143960]:    user        root
Dec  6 03:14:49 np0005548731 ovn_metadata_agent[143960]:    group       root
Dec  6 03:14:49 np0005548731 ovn_metadata_agent[143960]:    maxconn     1024
Dec  6 03:14:49 np0005548731 ovn_metadata_agent[143960]:    pidfile     /var/lib/neutron/external/pids/a5a2ec32-8155-4bca-8340-a3a1ef336f77.pid.haproxy
Dec  6 03:14:49 np0005548731 ovn_metadata_agent[143960]:    daemon
Dec  6 03:14:49 np0005548731 ovn_metadata_agent[143960]: 
Dec  6 03:14:49 np0005548731 ovn_metadata_agent[143960]: defaults
Dec  6 03:14:49 np0005548731 ovn_metadata_agent[143960]:    log global
Dec  6 03:14:49 np0005548731 ovn_metadata_agent[143960]:    mode http
Dec  6 03:14:49 np0005548731 ovn_metadata_agent[143960]:    option httplog
Dec  6 03:14:49 np0005548731 ovn_metadata_agent[143960]:    option dontlognull
Dec  6 03:14:49 np0005548731 ovn_metadata_agent[143960]:    option http-server-close
Dec  6 03:14:49 np0005548731 ovn_metadata_agent[143960]:    option forwardfor
Dec  6 03:14:49 np0005548731 ovn_metadata_agent[143960]:    retries                 3
Dec  6 03:14:49 np0005548731 ovn_metadata_agent[143960]:    timeout http-request    30s
Dec  6 03:14:49 np0005548731 ovn_metadata_agent[143960]:    timeout connect         30s
Dec  6 03:14:49 np0005548731 ovn_metadata_agent[143960]:    timeout client          32s
Dec  6 03:14:49 np0005548731 ovn_metadata_agent[143960]:    timeout server          32s
Dec  6 03:14:49 np0005548731 ovn_metadata_agent[143960]:    timeout http-keep-alive 30s
Dec  6 03:14:49 np0005548731 ovn_metadata_agent[143960]: 
Dec  6 03:14:49 np0005548731 ovn_metadata_agent[143960]: 
Dec  6 03:14:49 np0005548731 ovn_metadata_agent[143960]: listen listener
Dec  6 03:14:49 np0005548731 ovn_metadata_agent[143960]:    bind 169.254.169.254:80
Dec  6 03:14:49 np0005548731 ovn_metadata_agent[143960]:    server metadata /var/lib/neutron/metadata_proxy
Dec  6 03:14:49 np0005548731 ovn_metadata_agent[143960]:    http-request add-header X-OVN-Network-ID a5a2ec32-8155-4bca-8340-a3a1ef336f77
Dec  6 03:14:49 np0005548731 ovn_metadata_agent[143960]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Dec  6 03:14:49 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:14:49.386 143965 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-a5a2ec32-8155-4bca-8340-a3a1ef336f77', 'env', 'PROCESS_TAG=haproxy-a5a2ec32-8155-4bca-8340-a3a1ef336f77', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/a5a2ec32-8155-4bca-8340-a3a1ef336f77.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Dec  6 03:14:49 np0005548731 nova_compute[232433]: 2025-12-06 08:14:49.520 232437 DEBUG nova.virt.libvirt.host [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Removed pending event for b2ee70f1-6a92-4de1-a8d3-63f69565e29e due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Dec  6 03:14:49 np0005548731 nova_compute[232433]: 2025-12-06 08:14:49.520 232437 DEBUG nova.virt.driver [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Emitting event <LifecycleEvent: 1765008889.5169947, b2ee70f1-6a92-4de1-a8d3-63f69565e29e => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 03:14:49 np0005548731 nova_compute[232433]: 2025-12-06 08:14:49.520 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: b2ee70f1-6a92-4de1-a8d3-63f69565e29e] VM Started (Lifecycle Event)#033[00m
Dec  6 03:14:49 np0005548731 nova_compute[232433]: 2025-12-06 08:14:49.547 232437 DEBUG nova.compute.manager [None req-9fd5ec9a-44e6-472f-899a-df2ed8dc0b13 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: b2ee70f1-6a92-4de1-a8d3-63f69565e29e] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec  6 03:14:49 np0005548731 nova_compute[232433]: 2025-12-06 08:14:49.548 232437 DEBUG nova.objects.instance [None req-9fd5ec9a-44e6-472f-899a-df2ed8dc0b13 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Lazy-loading 'pci_devices' on Instance uuid b2ee70f1-6a92-4de1-a8d3-63f69565e29e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 03:14:49 np0005548731 nova_compute[232433]: 2025-12-06 08:14:49.550 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: b2ee70f1-6a92-4de1-a8d3-63f69565e29e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 03:14:49 np0005548731 nova_compute[232433]: 2025-12-06 08:14:49.554 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: b2ee70f1-6a92-4de1-a8d3-63f69565e29e] Synchronizing instance power state after lifecycle event "Started"; current vm_state: suspended, current task_state: resuming, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  6 03:14:49 np0005548731 nova_compute[232433]: 2025-12-06 08:14:49.575 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: b2ee70f1-6a92-4de1-a8d3-63f69565e29e] During sync_power_state the instance has a pending task (resuming). Skip.#033[00m
Dec  6 03:14:49 np0005548731 nova_compute[232433]: 2025-12-06 08:14:49.575 232437 DEBUG nova.virt.driver [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Emitting event <LifecycleEvent: 1765008889.5195487, b2ee70f1-6a92-4de1-a8d3-63f69565e29e => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 03:14:49 np0005548731 nova_compute[232433]: 2025-12-06 08:14:49.576 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: b2ee70f1-6a92-4de1-a8d3-63f69565e29e] VM Resumed (Lifecycle Event)#033[00m
Dec  6 03:14:49 np0005548731 nova_compute[232433]: 2025-12-06 08:14:49.583 232437 INFO nova.virt.libvirt.driver [-] [instance: b2ee70f1-6a92-4de1-a8d3-63f69565e29e] Instance running successfully.#033[00m
Dec  6 03:14:49 np0005548731 virtqemud[232080]: argument unsupported: QEMU guest agent is not configured
Dec  6 03:14:49 np0005548731 nova_compute[232433]: 2025-12-06 08:14:49.587 232437 DEBUG nova.virt.libvirt.guest [None req-9fd5ec9a-44e6-472f-899a-df2ed8dc0b13 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: b2ee70f1-6a92-4de1-a8d3-63f69565e29e] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200#033[00m
Dec  6 03:14:49 np0005548731 nova_compute[232433]: 2025-12-06 08:14:49.588 232437 DEBUG nova.compute.manager [None req-9fd5ec9a-44e6-472f-899a-df2ed8dc0b13 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: b2ee70f1-6a92-4de1-a8d3-63f69565e29e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 03:14:49 np0005548731 nova_compute[232433]: 2025-12-06 08:14:49.594 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: b2ee70f1-6a92-4de1-a8d3-63f69565e29e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 03:14:49 np0005548731 nova_compute[232433]: 2025-12-06 08:14:49.596 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: b2ee70f1-6a92-4de1-a8d3-63f69565e29e] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: suspended, current task_state: resuming, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  6 03:14:49 np0005548731 nova_compute[232433]: 2025-12-06 08:14:49.623 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: b2ee70f1-6a92-4de1-a8d3-63f69565e29e] During sync_power_state the instance has a pending task (resuming). Skip.#033[00m
Dec  6 03:14:49 np0005548731 podman[335921]: 2025-12-06 08:14:49.734071939 +0000 UTC m=+0.050595153 container create 8cbf91dabcabe2d5aef072de839ca25569c2affb1baf06cb4d5c80ff0882ddb2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a5a2ec32-8155-4bca-8340-a3a1ef336f77, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Dec  6 03:14:49 np0005548731 systemd[1]: Started libpod-conmon-8cbf91dabcabe2d5aef072de839ca25569c2affb1baf06cb4d5c80ff0882ddb2.scope.
Dec  6 03:14:49 np0005548731 systemd[1]: Started libcrun container.
Dec  6 03:14:49 np0005548731 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a4b99b7ba5bf1ebc2b81909338d0f45eec03299ca268c439881ef58bd532600d/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec  6 03:14:49 np0005548731 podman[335921]: 2025-12-06 08:14:49.707888392 +0000 UTC m=+0.024411626 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec  6 03:14:49 np0005548731 podman[335921]: 2025-12-06 08:14:49.813049613 +0000 UTC m=+0.129572847 container init 8cbf91dabcabe2d5aef072de839ca25569c2affb1baf06cb4d5c80ff0882ddb2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a5a2ec32-8155-4bca-8340-a3a1ef336f77, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec  6 03:14:49 np0005548731 podman[335921]: 2025-12-06 08:14:49.81828374 +0000 UTC m=+0.134806954 container start 8cbf91dabcabe2d5aef072de839ca25569c2affb1baf06cb4d5c80ff0882ddb2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a5a2ec32-8155-4bca-8340-a3a1ef336f77, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Dec  6 03:14:49 np0005548731 neutron-haproxy-ovnmeta-a5a2ec32-8155-4bca-8340-a3a1ef336f77[335936]: [NOTICE]   (335940) : New worker (335942) forked
Dec  6 03:14:49 np0005548731 neutron-haproxy-ovnmeta-a5a2ec32-8155-4bca-8340-a3a1ef336f77[335936]: [NOTICE]   (335940) : Loading success.
Dec  6 03:14:50 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:14:50 np0005548731 nova_compute[232433]: 2025-12-06 08:14:50.328 232437 DEBUG nova.network.neutron [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] [instance: b2ee70f1-6a92-4de1-a8d3-63f69565e29e] Updating instance_info_cache with network_info: [{"id": "9c5705d0-f551-4726-a476-7a9060442f88", "address": "fa:16:3e:d2:bf:3b", "network": {"id": "a5a2ec32-8155-4bca-8340-a3a1ef336f77", "bridge": "br-int", "label": "tempest-network-smoke--2053374407", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.202", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fd8e24e430c64364ace789d88a68ba5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9c5705d0-f5", "ovs_interfaceid": "9c5705d0-f551-4726-a476-7a9060442f88", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 03:14:50 np0005548731 nova_compute[232433]: 2025-12-06 08:14:50.367 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Releasing lock "refresh_cache-b2ee70f1-6a92-4de1-a8d3-63f69565e29e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 03:14:50 np0005548731 nova_compute[232433]: 2025-12-06 08:14:50.368 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] [instance: b2ee70f1-6a92-4de1-a8d3-63f69565e29e] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Dec  6 03:14:50 np0005548731 nova_compute[232433]: 2025-12-06 08:14:50.368 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:14:50 np0005548731 nova_compute[232433]: 2025-12-06 08:14:50.369 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:14:50 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:14:50 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:14:50 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:14:50.870 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:14:50 np0005548731 nova_compute[232433]: 2025-12-06 08:14:50.990 232437 INFO nova.compute.manager [None req-920f9d12-d3c9-42be-aded-1ddd67e9c9ca 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: b2ee70f1-6a92-4de1-a8d3-63f69565e29e] Get console output#033[00m
Dec  6 03:14:50 np0005548731 nova_compute[232433]: 2025-12-06 08:14:50.995 261230 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Dec  6 03:14:51 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:14:51 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:14:51 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:14:51.001 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:14:51 np0005548731 ceph-mon[77458]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #160. Immutable memtables: 0.
Dec  6 03:14:51 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:14:51.211796) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec  6 03:14:51 np0005548731 ceph-mon[77458]: rocksdb: [db/flush_job.cc:856] [default] [JOB 101] Flushing memtable with next log file: 160
Dec  6 03:14:51 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765008891211869, "job": 101, "event": "flush_started", "num_memtables": 1, "num_entries": 2414, "num_deletes": 252, "total_data_size": 5961640, "memory_usage": 6045168, "flush_reason": "Manual Compaction"}
Dec  6 03:14:51 np0005548731 ceph-mon[77458]: rocksdb: [db/flush_job.cc:885] [default] [JOB 101] Level-0 flush table #161: started
Dec  6 03:14:51 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765008891230453, "cf_name": "default", "job": 101, "event": "table_file_creation", "file_number": 161, "file_size": 3875037, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 78183, "largest_seqno": 80592, "table_properties": {"data_size": 3865185, "index_size": 6281, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2501, "raw_key_size": 20466, "raw_average_key_size": 20, "raw_value_size": 3845433, "raw_average_value_size": 3880, "num_data_blocks": 274, "num_entries": 991, "num_filter_entries": 991, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765008661, "oldest_key_time": 1765008661, "file_creation_time": 1765008891, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bc01f9a1-fda8-4008-aee1-1fa5ff4371a1", "db_session_id": "CWBL8WMDM4D79BU86E9T", "orig_file_number": 161, "seqno_to_time_mapping": "N/A"}}
Dec  6 03:14:51 np0005548731 ceph-mon[77458]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 101] Flush lasted 18816 microseconds, and 8223 cpu microseconds.
Dec  6 03:14:51 np0005548731 ceph-mon[77458]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  6 03:14:51 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:14:51.230529) [db/flush_job.cc:967] [default] [JOB 101] Level-0 flush table #161: 3875037 bytes OK
Dec  6 03:14:51 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:14:51.230644) [db/memtable_list.cc:519] [default] Level-0 commit table #161 started
Dec  6 03:14:51 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:14:51.232864) [db/memtable_list.cc:722] [default] Level-0 commit table #161: memtable #1 done
Dec  6 03:14:51 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:14:51.232906) EVENT_LOG_v1 {"time_micros": 1765008891232896, "job": 101, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec  6 03:14:51 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:14:51.232927) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec  6 03:14:51 np0005548731 ceph-mon[77458]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 101] Try to delete WAL files size 5951078, prev total WAL file size 5951078, number of live WAL files 2.
Dec  6 03:14:51 np0005548731 ceph-mon[77458]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000157.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  6 03:14:51 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:14:51.234694) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730036373737' seq:72057594037927935, type:22 .. '7061786F730037303239' seq:0, type:0; will stop at (end)
Dec  6 03:14:51 np0005548731 ceph-mon[77458]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 102] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec  6 03:14:51 np0005548731 ceph-mon[77458]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 101 Base level 0, inputs: [161(3784KB)], [159(11MB)]
Dec  6 03:14:51 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765008891234744, "job": 102, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [161], "files_L6": [159], "score": -1, "input_data_size": 15871431, "oldest_snapshot_seqno": -1}
Dec  6 03:14:51 np0005548731 nova_compute[232433]: 2025-12-06 08:14:51.260 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:14:51 np0005548731 nova_compute[232433]: 2025-12-06 08:14:51.260 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:14:51 np0005548731 nova_compute[232433]: 2025-12-06 08:14:51.287 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Triggering sync for uuid b2ee70f1-6a92-4de1-a8d3-63f69565e29e _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m
Dec  6 03:14:51 np0005548731 nova_compute[232433]: 2025-12-06 08:14:51.287 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Acquiring lock "b2ee70f1-6a92-4de1-a8d3-63f69565e29e" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:14:51 np0005548731 nova_compute[232433]: 2025-12-06 08:14:51.288 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "b2ee70f1-6a92-4de1-a8d3-63f69565e29e" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:14:51 np0005548731 ceph-mon[77458]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 102] Generated table #162: 10878 keys, 13926927 bytes, temperature: kUnknown
Dec  6 03:14:51 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765008891314270, "cf_name": "default", "job": 102, "event": "table_file_creation", "file_number": 162, "file_size": 13926927, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13857007, "index_size": 41647, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 27205, "raw_key_size": 287481, "raw_average_key_size": 26, "raw_value_size": 13666813, "raw_average_value_size": 1256, "num_data_blocks": 1582, "num_entries": 10878, "num_filter_entries": 10878, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765002564, "oldest_key_time": 0, "file_creation_time": 1765008891, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bc01f9a1-fda8-4008-aee1-1fa5ff4371a1", "db_session_id": "CWBL8WMDM4D79BU86E9T", "orig_file_number": 162, "seqno_to_time_mapping": "N/A"}}
Dec  6 03:14:51 np0005548731 ceph-mon[77458]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  6 03:14:51 np0005548731 nova_compute[232433]: 2025-12-06 08:14:51.315 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "b2ee70f1-6a92-4de1-a8d3-63f69565e29e" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.027s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:14:51 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:14:51.314507) [db/compaction/compaction_job.cc:1663] [default] [JOB 102] Compacted 1@0 + 1@6 files to L6 => 13926927 bytes
Dec  6 03:14:51 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:14:51.316323) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 199.4 rd, 174.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.7, 11.4 +0.0 blob) out(13.3 +0.0 blob), read-write-amplify(7.7) write-amplify(3.6) OK, records in: 11411, records dropped: 533 output_compression: NoCompression
Dec  6 03:14:51 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:14:51.316338) EVENT_LOG_v1 {"time_micros": 1765008891316331, "job": 102, "event": "compaction_finished", "compaction_time_micros": 79608, "compaction_time_cpu_micros": 33717, "output_level": 6, "num_output_files": 1, "total_output_size": 13926927, "num_input_records": 11411, "num_output_records": 10878, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec  6 03:14:51 np0005548731 ceph-mon[77458]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000161.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  6 03:14:51 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765008891317149, "job": 102, "event": "table_file_deletion", "file_number": 161}
Dec  6 03:14:51 np0005548731 ceph-mon[77458]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000159.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  6 03:14:51 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765008891319002, "job": 102, "event": "table_file_deletion", "file_number": 159}
Dec  6 03:14:51 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:14:51.234628) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 03:14:51 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:14:51.319069) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 03:14:51 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:14:51.319073) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 03:14:51 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:14:51.319074) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 03:14:51 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:14:51.319075) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 03:14:51 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:14:51.319077) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 03:14:51 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 03:14:51 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 03:14:51 np0005548731 nova_compute[232433]: 2025-12-06 08:14:51.462 232437 DEBUG nova.compute.manager [req-df0328de-0a51-487f-a8c5-5d974fa4bd3e req-b0e409bd-cd00-46d2-b4cd-80f8f31cade5 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: b2ee70f1-6a92-4de1-a8d3-63f69565e29e] Received event network-vif-plugged-9c5705d0-f551-4726-a476-7a9060442f88 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 03:14:51 np0005548731 nova_compute[232433]: 2025-12-06 08:14:51.463 232437 DEBUG oslo_concurrency.lockutils [req-df0328de-0a51-487f-a8c5-5d974fa4bd3e req-b0e409bd-cd00-46d2-b4cd-80f8f31cade5 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "b2ee70f1-6a92-4de1-a8d3-63f69565e29e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:14:51 np0005548731 nova_compute[232433]: 2025-12-06 08:14:51.463 232437 DEBUG oslo_concurrency.lockutils [req-df0328de-0a51-487f-a8c5-5d974fa4bd3e req-b0e409bd-cd00-46d2-b4cd-80f8f31cade5 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "b2ee70f1-6a92-4de1-a8d3-63f69565e29e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:14:51 np0005548731 nova_compute[232433]: 2025-12-06 08:14:51.463 232437 DEBUG oslo_concurrency.lockutils [req-df0328de-0a51-487f-a8c5-5d974fa4bd3e req-b0e409bd-cd00-46d2-b4cd-80f8f31cade5 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "b2ee70f1-6a92-4de1-a8d3-63f69565e29e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:14:51 np0005548731 nova_compute[232433]: 2025-12-06 08:14:51.463 232437 DEBUG nova.compute.manager [req-df0328de-0a51-487f-a8c5-5d974fa4bd3e req-b0e409bd-cd00-46d2-b4cd-80f8f31cade5 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: b2ee70f1-6a92-4de1-a8d3-63f69565e29e] No waiting events found dispatching network-vif-plugged-9c5705d0-f551-4726-a476-7a9060442f88 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 03:14:51 np0005548731 nova_compute[232433]: 2025-12-06 08:14:51.463 232437 WARNING nova.compute.manager [req-df0328de-0a51-487f-a8c5-5d974fa4bd3e req-b0e409bd-cd00-46d2-b4cd-80f8f31cade5 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: b2ee70f1-6a92-4de1-a8d3-63f69565e29e] Received unexpected event network-vif-plugged-9c5705d0-f551-4726-a476-7a9060442f88 for instance with vm_state active and task_state None.#033[00m
Dec  6 03:14:51 np0005548731 nova_compute[232433]: 2025-12-06 08:14:51.852 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:14:52 np0005548731 nova_compute[232433]: 2025-12-06 08:14:52.131 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:14:52 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:14:52 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:14:52 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:14:52.873 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:14:53 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:14:53 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:14:53 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:14:53.004 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:14:53 np0005548731 nova_compute[232433]: 2025-12-06 08:14:53.220 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:14:54 np0005548731 nova_compute[232433]: 2025-12-06 08:14:54.105 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:14:54 np0005548731 nova_compute[232433]: 2025-12-06 08:14:54.105 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  6 03:14:54 np0005548731 nova_compute[232433]: 2025-12-06 08:14:54.106 232437 DEBUG nova.compute.manager [req-90a95d99-c899-4cdc-b004-823dfb583324 req-1e14ce61-31e0-42fb-bb34-77c17a0b868c 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: b2ee70f1-6a92-4de1-a8d3-63f69565e29e] Received event network-changed-9c5705d0-f551-4726-a476-7a9060442f88 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 03:14:54 np0005548731 nova_compute[232433]: 2025-12-06 08:14:54.106 232437 DEBUG nova.compute.manager [req-90a95d99-c899-4cdc-b004-823dfb583324 req-1e14ce61-31e0-42fb-bb34-77c17a0b868c 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: b2ee70f1-6a92-4de1-a8d3-63f69565e29e] Refreshing instance network info cache due to event network-changed-9c5705d0-f551-4726-a476-7a9060442f88. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  6 03:14:54 np0005548731 nova_compute[232433]: 2025-12-06 08:14:54.106 232437 DEBUG oslo_concurrency.lockutils [req-90a95d99-c899-4cdc-b004-823dfb583324 req-1e14ce61-31e0-42fb-bb34-77c17a0b868c 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-b2ee70f1-6a92-4de1-a8d3-63f69565e29e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 03:14:54 np0005548731 nova_compute[232433]: 2025-12-06 08:14:54.106 232437 DEBUG oslo_concurrency.lockutils [req-90a95d99-c899-4cdc-b004-823dfb583324 req-1e14ce61-31e0-42fb-bb34-77c17a0b868c 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-b2ee70f1-6a92-4de1-a8d3-63f69565e29e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 03:14:54 np0005548731 nova_compute[232433]: 2025-12-06 08:14:54.107 232437 DEBUG nova.network.neutron [req-90a95d99-c899-4cdc-b004-823dfb583324 req-1e14ce61-31e0-42fb-bb34-77c17a0b868c 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: b2ee70f1-6a92-4de1-a8d3-63f69565e29e] Refreshing network info cache for port 9c5705d0-f551-4726-a476-7a9060442f88 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  6 03:14:54 np0005548731 nova_compute[232433]: 2025-12-06 08:14:54.107 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:14:54 np0005548731 nova_compute[232433]: 2025-12-06 08:14:54.132 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:14:54 np0005548731 nova_compute[232433]: 2025-12-06 08:14:54.132 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:14:54 np0005548731 nova_compute[232433]: 2025-12-06 08:14:54.133 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:14:54 np0005548731 nova_compute[232433]: 2025-12-06 08:14:54.133 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  6 03:14:54 np0005548731 nova_compute[232433]: 2025-12-06 08:14:54.133 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 03:14:54 np0005548731 nova_compute[232433]: 2025-12-06 08:14:54.187 232437 DEBUG oslo_concurrency.lockutils [None req-3f65e81a-2b82-4d35-bb95-c777d8ee7fd8 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Acquiring lock "b2ee70f1-6a92-4de1-a8d3-63f69565e29e" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:14:54 np0005548731 nova_compute[232433]: 2025-12-06 08:14:54.187 232437 DEBUG oslo_concurrency.lockutils [None req-3f65e81a-2b82-4d35-bb95-c777d8ee7fd8 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Lock "b2ee70f1-6a92-4de1-a8d3-63f69565e29e" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:14:54 np0005548731 nova_compute[232433]: 2025-12-06 08:14:54.188 232437 DEBUG oslo_concurrency.lockutils [None req-3f65e81a-2b82-4d35-bb95-c777d8ee7fd8 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Acquiring lock "b2ee70f1-6a92-4de1-a8d3-63f69565e29e-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:14:54 np0005548731 nova_compute[232433]: 2025-12-06 08:14:54.188 232437 DEBUG oslo_concurrency.lockutils [None req-3f65e81a-2b82-4d35-bb95-c777d8ee7fd8 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Lock "b2ee70f1-6a92-4de1-a8d3-63f69565e29e-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:14:54 np0005548731 nova_compute[232433]: 2025-12-06 08:14:54.188 232437 DEBUG oslo_concurrency.lockutils [None req-3f65e81a-2b82-4d35-bb95-c777d8ee7fd8 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Lock "b2ee70f1-6a92-4de1-a8d3-63f69565e29e-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:14:54 np0005548731 nova_compute[232433]: 2025-12-06 08:14:54.190 232437 INFO nova.compute.manager [None req-3f65e81a-2b82-4d35-bb95-c777d8ee7fd8 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: b2ee70f1-6a92-4de1-a8d3-63f69565e29e] Terminating instance#033[00m
Dec  6 03:14:54 np0005548731 nova_compute[232433]: 2025-12-06 08:14:54.191 232437 DEBUG nova.compute.manager [None req-3f65e81a-2b82-4d35-bb95-c777d8ee7fd8 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: b2ee70f1-6a92-4de1-a8d3-63f69565e29e] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec  6 03:14:54 np0005548731 kernel: tap9c5705d0-f5 (unregistering): left promiscuous mode
Dec  6 03:14:54 np0005548731 NetworkManager[49182]: <info>  [1765008894.2348] device (tap9c5705d0-f5): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec  6 03:14:54 np0005548731 ovn_controller[133927]: 2025-12-06T08:14:54Z|01041|binding|INFO|Releasing lport 9c5705d0-f551-4726-a476-7a9060442f88 from this chassis (sb_readonly=0)
Dec  6 03:14:54 np0005548731 ovn_controller[133927]: 2025-12-06T08:14:54Z|01042|binding|INFO|Setting lport 9c5705d0-f551-4726-a476-7a9060442f88 down in Southbound
Dec  6 03:14:54 np0005548731 nova_compute[232433]: 2025-12-06 08:14:54.270 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:14:54 np0005548731 ovn_controller[133927]: 2025-12-06T08:14:54Z|01043|binding|INFO|Removing iface tap9c5705d0-f5 ovn-installed in OVS
Dec  6 03:14:54 np0005548731 nova_compute[232433]: 2025-12-06 08:14:54.279 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:14:54 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:14:54.281 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d2:bf:3b 10.100.0.5'], port_security=['fa:16:3e:d2:bf:3b 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'b2ee70f1-6a92-4de1-a8d3-63f69565e29e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a5a2ec32-8155-4bca-8340-a3a1ef336f77', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'fd8e24e430c64364ace789d88a68ba5f', 'neutron:revision_number': '6', 'neutron:security_group_ids': '2b618342-4b21-4494-867f-57d49b127e96', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e7e181fc-238b-465b-9a13-8c46b34bbffb, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], logical_port=9c5705d0-f551-4726-a476-7a9060442f88) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 03:14:54 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:14:54.283 143965 INFO neutron.agent.ovn.metadata.agent [-] Port 9c5705d0-f551-4726-a476-7a9060442f88 in datapath a5a2ec32-8155-4bca-8340-a3a1ef336f77 unbound from our chassis#033[00m
Dec  6 03:14:54 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:14:54.284 143965 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network a5a2ec32-8155-4bca-8340-a3a1ef336f77, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec  6 03:14:54 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:14:54.285 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[35759ea1-a830-469c-9cde-baa6147f3996]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:14:54 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:14:54.285 143965 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-a5a2ec32-8155-4bca-8340-a3a1ef336f77 namespace which is not needed anymore#033[00m
Dec  6 03:14:54 np0005548731 nova_compute[232433]: 2025-12-06 08:14:54.296 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:14:54 np0005548731 systemd[1]: machine-qemu\x2d105\x2dinstance\x2d000000ca.scope: Deactivated successfully.
Dec  6 03:14:54 np0005548731 systemd-machined[195355]: Machine qemu-105-instance-000000ca terminated.
Dec  6 03:14:54 np0005548731 neutron-haproxy-ovnmeta-a5a2ec32-8155-4bca-8340-a3a1ef336f77[335936]: [NOTICE]   (335940) : haproxy version is 2.8.14-c23fe91
Dec  6 03:14:54 np0005548731 neutron-haproxy-ovnmeta-a5a2ec32-8155-4bca-8340-a3a1ef336f77[335936]: [NOTICE]   (335940) : path to executable is /usr/sbin/haproxy
Dec  6 03:14:54 np0005548731 neutron-haproxy-ovnmeta-a5a2ec32-8155-4bca-8340-a3a1ef336f77[335936]: [WARNING]  (335940) : Exiting Master process...
Dec  6 03:14:54 np0005548731 neutron-haproxy-ovnmeta-a5a2ec32-8155-4bca-8340-a3a1ef336f77[335936]: [ALERT]    (335940) : Current worker (335942) exited with code 143 (Terminated)
Dec  6 03:14:54 np0005548731 neutron-haproxy-ovnmeta-a5a2ec32-8155-4bca-8340-a3a1ef336f77[335936]: [WARNING]  (335940) : All workers exited. Exiting... (0)
Dec  6 03:14:54 np0005548731 systemd[1]: libpod-8cbf91dabcabe2d5aef072de839ca25569c2affb1baf06cb4d5c80ff0882ddb2.scope: Deactivated successfully.
Dec  6 03:14:54 np0005548731 podman[336047]: 2025-12-06 08:14:54.428512277 +0000 UTC m=+0.055080092 container died 8cbf91dabcabe2d5aef072de839ca25569c2affb1baf06cb4d5c80ff0882ddb2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a5a2ec32-8155-4bca-8340-a3a1ef336f77, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2)
Dec  6 03:14:54 np0005548731 nova_compute[232433]: 2025-12-06 08:14:54.431 232437 INFO nova.virt.libvirt.driver [-] [instance: b2ee70f1-6a92-4de1-a8d3-63f69565e29e] Instance destroyed successfully.#033[00m
Dec  6 03:14:54 np0005548731 nova_compute[232433]: 2025-12-06 08:14:54.432 232437 DEBUG nova.objects.instance [None req-3f65e81a-2b82-4d35-bb95-c777d8ee7fd8 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Lazy-loading 'resources' on Instance uuid b2ee70f1-6a92-4de1-a8d3-63f69565e29e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 03:14:54 np0005548731 nova_compute[232433]: 2025-12-06 08:14:54.453 232437 DEBUG nova.virt.libvirt.vif [None req-3f65e81a-2b82-4d35-bb95-c777d8ee7fd8 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T08:14:09Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1967673920',display_name='tempest-TestNetworkAdvancedServerOps-server-1967673920',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1967673920',id=202,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBE7K+0TQYdf6/UDgfdSEP1ZSscXf10O1YcJvJ4OnffpKNQOT8fnXRuOBwRzeckYQj5cV9UoztH3ENvALVUX8qfmg5l022XYIuytOwU07Yk+vPdHH//qvTGSTmErglU1U5g==',key_name='tempest-TestNetworkAdvancedServerOps-1235494473',keypairs=<?>,launch_index=0,launched_at=2025-12-06T08:14:21Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='fd8e24e430c64364ace789d88a68ba5f',ramdisk_id='',reservation_id='r-kygr1lne',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-1171852383',owner_user_name='tempest-TestNetworkAdvancedServerOps-1171852383-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T08:14:49Z,user_data=None,user_id='2ed2d17026504d70b893923a85cece4d',uuid=b2ee70f1-6a92-4de1-a8d3-63f69565e29e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "9c5705d0-f551-4726-a476-7a9060442f88", "address": "fa:16:3e:d2:bf:3b", "network": {"id": "a5a2ec32-8155-4bca-8340-a3a1ef336f77", "bridge": "br-int", "label": "tempest-network-smoke--2053374407", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.202", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fd8e24e430c64364ace789d88a68ba5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9c5705d0-f5", "ovs_interfaceid": "9c5705d0-f551-4726-a476-7a9060442f88", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec  6 03:14:54 np0005548731 nova_compute[232433]: 2025-12-06 08:14:54.453 232437 DEBUG nova.network.os_vif_util [None req-3f65e81a-2b82-4d35-bb95-c777d8ee7fd8 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Converting VIF {"id": "9c5705d0-f551-4726-a476-7a9060442f88", "address": "fa:16:3e:d2:bf:3b", "network": {"id": "a5a2ec32-8155-4bca-8340-a3a1ef336f77", "bridge": "br-int", "label": "tempest-network-smoke--2053374407", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.202", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fd8e24e430c64364ace789d88a68ba5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9c5705d0-f5", "ovs_interfaceid": "9c5705d0-f551-4726-a476-7a9060442f88", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 03:14:54 np0005548731 nova_compute[232433]: 2025-12-06 08:14:54.454 232437 DEBUG nova.network.os_vif_util [None req-3f65e81a-2b82-4d35-bb95-c777d8ee7fd8 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d2:bf:3b,bridge_name='br-int',has_traffic_filtering=True,id=9c5705d0-f551-4726-a476-7a9060442f88,network=Network(a5a2ec32-8155-4bca-8340-a3a1ef336f77),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9c5705d0-f5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 03:14:54 np0005548731 nova_compute[232433]: 2025-12-06 08:14:54.455 232437 DEBUG os_vif [None req-3f65e81a-2b82-4d35-bb95-c777d8ee7fd8 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d2:bf:3b,bridge_name='br-int',has_traffic_filtering=True,id=9c5705d0-f551-4726-a476-7a9060442f88,network=Network(a5a2ec32-8155-4bca-8340-a3a1ef336f77),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9c5705d0-f5') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec  6 03:14:54 np0005548731 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-8cbf91dabcabe2d5aef072de839ca25569c2affb1baf06cb4d5c80ff0882ddb2-userdata-shm.mount: Deactivated successfully.
Dec  6 03:14:54 np0005548731 nova_compute[232433]: 2025-12-06 08:14:54.460 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:14:54 np0005548731 systemd[1]: var-lib-containers-storage-overlay-a4b99b7ba5bf1ebc2b81909338d0f45eec03299ca268c439881ef58bd532600d-merged.mount: Deactivated successfully.
Dec  6 03:14:54 np0005548731 nova_compute[232433]: 2025-12-06 08:14:54.463 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9c5705d0-f5, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 03:14:54 np0005548731 nova_compute[232433]: 2025-12-06 08:14:54.464 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:14:54 np0005548731 nova_compute[232433]: 2025-12-06 08:14:54.466 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:14:54 np0005548731 nova_compute[232433]: 2025-12-06 08:14:54.468 232437 INFO os_vif [None req-3f65e81a-2b82-4d35-bb95-c777d8ee7fd8 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d2:bf:3b,bridge_name='br-int',has_traffic_filtering=True,id=9c5705d0-f551-4726-a476-7a9060442f88,network=Network(a5a2ec32-8155-4bca-8340-a3a1ef336f77),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9c5705d0-f5')#033[00m
Dec  6 03:14:54 np0005548731 podman[336047]: 2025-12-06 08:14:54.471159715 +0000 UTC m=+0.097727530 container cleanup 8cbf91dabcabe2d5aef072de839ca25569c2affb1baf06cb4d5c80ff0882ddb2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a5a2ec32-8155-4bca-8340-a3a1ef336f77, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125)
Dec  6 03:14:54 np0005548731 systemd[1]: libpod-conmon-8cbf91dabcabe2d5aef072de839ca25569c2affb1baf06cb4d5c80ff0882ddb2.scope: Deactivated successfully.
Dec  6 03:14:54 np0005548731 nova_compute[232433]: 2025-12-06 08:14:54.500 232437 DEBUG nova.compute.manager [req-313d126d-602d-475f-931c-eb7066a139f3 req-66eef874-7c06-4436-a8b4-e22fb0b4c68c 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: b2ee70f1-6a92-4de1-a8d3-63f69565e29e] Received event network-vif-unplugged-9c5705d0-f551-4726-a476-7a9060442f88 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 03:14:54 np0005548731 nova_compute[232433]: 2025-12-06 08:14:54.501 232437 DEBUG oslo_concurrency.lockutils [req-313d126d-602d-475f-931c-eb7066a139f3 req-66eef874-7c06-4436-a8b4-e22fb0b4c68c 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "b2ee70f1-6a92-4de1-a8d3-63f69565e29e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:14:54 np0005548731 nova_compute[232433]: 2025-12-06 08:14:54.502 232437 DEBUG oslo_concurrency.lockutils [req-313d126d-602d-475f-931c-eb7066a139f3 req-66eef874-7c06-4436-a8b4-e22fb0b4c68c 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "b2ee70f1-6a92-4de1-a8d3-63f69565e29e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:14:54 np0005548731 nova_compute[232433]: 2025-12-06 08:14:54.502 232437 DEBUG oslo_concurrency.lockutils [req-313d126d-602d-475f-931c-eb7066a139f3 req-66eef874-7c06-4436-a8b4-e22fb0b4c68c 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "b2ee70f1-6a92-4de1-a8d3-63f69565e29e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:14:54 np0005548731 nova_compute[232433]: 2025-12-06 08:14:54.502 232437 DEBUG nova.compute.manager [req-313d126d-602d-475f-931c-eb7066a139f3 req-66eef874-7c06-4436-a8b4-e22fb0b4c68c 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: b2ee70f1-6a92-4de1-a8d3-63f69565e29e] No waiting events found dispatching network-vif-unplugged-9c5705d0-f551-4726-a476-7a9060442f88 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 03:14:54 np0005548731 nova_compute[232433]: 2025-12-06 08:14:54.502 232437 DEBUG nova.compute.manager [req-313d126d-602d-475f-931c-eb7066a139f3 req-66eef874-7c06-4436-a8b4-e22fb0b4c68c 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: b2ee70f1-6a92-4de1-a8d3-63f69565e29e] Received event network-vif-unplugged-9c5705d0-f551-4726-a476-7a9060442f88 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Dec  6 03:14:54 np0005548731 podman[336102]: 2025-12-06 08:14:54.531893265 +0000 UTC m=+0.039717028 container remove 8cbf91dabcabe2d5aef072de839ca25569c2affb1baf06cb4d5c80ff0882ddb2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a5a2ec32-8155-4bca-8340-a3a1ef336f77, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  6 03:14:54 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:14:54.537 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[aaf45583-e8e5-487b-b437-f522944eaa08]: (4, ('Sat Dec  6 08:14:54 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-a5a2ec32-8155-4bca-8340-a3a1ef336f77 (8cbf91dabcabe2d5aef072de839ca25569c2affb1baf06cb4d5c80ff0882ddb2)\n8cbf91dabcabe2d5aef072de839ca25569c2affb1baf06cb4d5c80ff0882ddb2\nSat Dec  6 08:14:54 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-a5a2ec32-8155-4bca-8340-a3a1ef336f77 (8cbf91dabcabe2d5aef072de839ca25569c2affb1baf06cb4d5c80ff0882ddb2)\n8cbf91dabcabe2d5aef072de839ca25569c2affb1baf06cb4d5c80ff0882ddb2\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:14:54 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:14:54.539 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[945af752-5878-4666-9b7c-fc01504b87f7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:14:54 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:14:54.540 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa5a2ec32-80, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 03:14:54 np0005548731 nova_compute[232433]: 2025-12-06 08:14:54.542 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:14:54 np0005548731 kernel: tapa5a2ec32-80: left promiscuous mode
Dec  6 03:14:54 np0005548731 nova_compute[232433]: 2025-12-06 08:14:54.556 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:14:54 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:14:54.561 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[695a2109-4a20-48e7-b71d-069ff3fa79fe]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:14:54 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:14:54.573 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[51aefc6b-3ed9-4d63-947b-8ce8fed9e0e3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:14:54 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:14:54.574 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[a703d119-a975-49c7-80da-8aac84700a30]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:14:54 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:14:54.588 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[f3abd00d-8961-4c3b-bcc7-b393510e2313]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 915113, 'reachable_time': 28747, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 336121, 'error': None, 'target': 'ovnmeta-a5a2ec32-8155-4bca-8340-a3a1ef336f77', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:14:54 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:14:54.591 144079 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-a5a2ec32-8155-4bca-8340-a3a1ef336f77 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Dec  6 03:14:54 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:14:54.591 144079 DEBUG oslo.privsep.daemon [-] privsep: reply[add8df3e-853e-4059-9d4b-7dcef6fcd0cf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:14:54 np0005548731 systemd[1]: run-netns-ovnmeta\x2da5a2ec32\x2d8155\x2d4bca\x2d8340\x2da3a1ef336f77.mount: Deactivated successfully.
Dec  6 03:14:54 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 03:14:54 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1644300321' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 03:14:54 np0005548731 nova_compute[232433]: 2025-12-06 08:14:54.628 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.495s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 03:14:54 np0005548731 nova_compute[232433]: 2025-12-06 08:14:54.709 232437 DEBUG nova.virt.libvirt.driver [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] skipping disk for instance-000000ca as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  6 03:14:54 np0005548731 nova_compute[232433]: 2025-12-06 08:14:54.710 232437 DEBUG nova.virt.libvirt.driver [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] skipping disk for instance-000000ca as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  6 03:14:54 np0005548731 nova_compute[232433]: 2025-12-06 08:14:54.859 232437 WARNING nova.virt.libvirt.driver [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  6 03:14:54 np0005548731 nova_compute[232433]: 2025-12-06 08:14:54.860 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4125MB free_disk=20.921855926513672GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, 
"label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  6 03:14:54 np0005548731 nova_compute[232433]: 2025-12-06 08:14:54.860 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:14:54 np0005548731 nova_compute[232433]: 2025-12-06 08:14:54.861 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:14:54 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:14:54 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:14:54 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:14:54.875 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:14:54 np0005548731 nova_compute[232433]: 2025-12-06 08:14:54.890 232437 INFO nova.virt.libvirt.driver [None req-3f65e81a-2b82-4d35-bb95-c777d8ee7fd8 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: b2ee70f1-6a92-4de1-a8d3-63f69565e29e] Deleting instance files /var/lib/nova/instances/b2ee70f1-6a92-4de1-a8d3-63f69565e29e_del#033[00m
Dec  6 03:14:54 np0005548731 nova_compute[232433]: 2025-12-06 08:14:54.891 232437 INFO nova.virt.libvirt.driver [None req-3f65e81a-2b82-4d35-bb95-c777d8ee7fd8 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: b2ee70f1-6a92-4de1-a8d3-63f69565e29e] Deletion of /var/lib/nova/instances/b2ee70f1-6a92-4de1-a8d3-63f69565e29e_del complete#033[00m
Dec  6 03:14:54 np0005548731 nova_compute[232433]: 2025-12-06 08:14:54.953 232437 INFO nova.compute.manager [None req-3f65e81a-2b82-4d35-bb95-c777d8ee7fd8 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: b2ee70f1-6a92-4de1-a8d3-63f69565e29e] Took 0.76 seconds to destroy the instance on the hypervisor.#033[00m
Dec  6 03:14:54 np0005548731 nova_compute[232433]: 2025-12-06 08:14:54.954 232437 DEBUG oslo.service.loopingcall [None req-3f65e81a-2b82-4d35-bb95-c777d8ee7fd8 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Dec  6 03:14:54 np0005548731 nova_compute[232433]: 2025-12-06 08:14:54.954 232437 DEBUG nova.compute.manager [-] [instance: b2ee70f1-6a92-4de1-a8d3-63f69565e29e] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Dec  6 03:14:54 np0005548731 nova_compute[232433]: 2025-12-06 08:14:54.955 232437 DEBUG nova.network.neutron [-] [instance: b2ee70f1-6a92-4de1-a8d3-63f69565e29e] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Dec  6 03:14:54 np0005548731 nova_compute[232433]: 2025-12-06 08:14:54.962 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Instance b2ee70f1-6a92-4de1-a8d3-63f69565e29e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  6 03:14:54 np0005548731 nova_compute[232433]: 2025-12-06 08:14:54.963 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  6 03:14:54 np0005548731 nova_compute[232433]: 2025-12-06 08:14:54.963 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  6 03:14:55 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:14:55 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:14:55 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:14:55.007 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:14:55 np0005548731 nova_compute[232433]: 2025-12-06 08:14:55.059 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 03:14:55 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:14:55 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 03:14:55 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1336330205' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 03:14:55 np0005548731 nova_compute[232433]: 2025-12-06 08:14:55.520 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.461s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 03:14:55 np0005548731 nova_compute[232433]: 2025-12-06 08:14:55.528 232437 DEBUG nova.compute.provider_tree [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Inventory has not changed in ProviderTree for provider: 6d00757a-082f-486d-ae84-869a2ba2e6e7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  6 03:14:55 np0005548731 nova_compute[232433]: 2025-12-06 08:14:55.779 232437 DEBUG nova.scheduler.client.report [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Inventory has not changed for provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  6 03:14:55 np0005548731 nova_compute[232433]: 2025-12-06 08:14:55.980 232437 DEBUG nova.network.neutron [req-90a95d99-c899-4cdc-b004-823dfb583324 req-1e14ce61-31e0-42fb-bb34-77c17a0b868c 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: b2ee70f1-6a92-4de1-a8d3-63f69565e29e] Updated VIF entry in instance network info cache for port 9c5705d0-f551-4726-a476-7a9060442f88. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  6 03:14:55 np0005548731 nova_compute[232433]: 2025-12-06 08:14:55.981 232437 DEBUG nova.network.neutron [req-90a95d99-c899-4cdc-b004-823dfb583324 req-1e14ce61-31e0-42fb-bb34-77c17a0b868c 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: b2ee70f1-6a92-4de1-a8d3-63f69565e29e] Updating instance_info_cache with network_info: [{"id": "9c5705d0-f551-4726-a476-7a9060442f88", "address": "fa:16:3e:d2:bf:3b", "network": {"id": "a5a2ec32-8155-4bca-8340-a3a1ef336f77", "bridge": "br-int", "label": "tempest-network-smoke--2053374407", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fd8e24e430c64364ace789d88a68ba5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9c5705d0-f5", "ovs_interfaceid": "9c5705d0-f551-4726-a476-7a9060442f88", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 03:14:56 np0005548731 nova_compute[232433]: 2025-12-06 08:14:56.406 232437 DEBUG oslo_concurrency.lockutils [req-90a95d99-c899-4cdc-b004-823dfb583324 req-1e14ce61-31e0-42fb-bb34-77c17a0b868c 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-b2ee70f1-6a92-4de1-a8d3-63f69565e29e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 03:14:56 np0005548731 nova_compute[232433]: 2025-12-06 08:14:56.409 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  6 03:14:56 np0005548731 nova_compute[232433]: 2025-12-06 08:14:56.409 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.549s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:14:56 np0005548731 nova_compute[232433]: 2025-12-06 08:14:56.618 232437 DEBUG nova.compute.manager [req-1f83588b-f5d1-4cae-81fd-b0453f8070e6 req-3bb5bd5e-6470-49be-b15d-92d7995ab193 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: b2ee70f1-6a92-4de1-a8d3-63f69565e29e] Received event network-vif-plugged-9c5705d0-f551-4726-a476-7a9060442f88 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 03:14:56 np0005548731 nova_compute[232433]: 2025-12-06 08:14:56.619 232437 DEBUG oslo_concurrency.lockutils [req-1f83588b-f5d1-4cae-81fd-b0453f8070e6 req-3bb5bd5e-6470-49be-b15d-92d7995ab193 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "b2ee70f1-6a92-4de1-a8d3-63f69565e29e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:14:56 np0005548731 nova_compute[232433]: 2025-12-06 08:14:56.620 232437 DEBUG oslo_concurrency.lockutils [req-1f83588b-f5d1-4cae-81fd-b0453f8070e6 req-3bb5bd5e-6470-49be-b15d-92d7995ab193 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "b2ee70f1-6a92-4de1-a8d3-63f69565e29e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:14:56 np0005548731 nova_compute[232433]: 2025-12-06 08:14:56.620 232437 DEBUG oslo_concurrency.lockutils [req-1f83588b-f5d1-4cae-81fd-b0453f8070e6 req-3bb5bd5e-6470-49be-b15d-92d7995ab193 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "b2ee70f1-6a92-4de1-a8d3-63f69565e29e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:14:56 np0005548731 nova_compute[232433]: 2025-12-06 08:14:56.620 232437 DEBUG nova.compute.manager [req-1f83588b-f5d1-4cae-81fd-b0453f8070e6 req-3bb5bd5e-6470-49be-b15d-92d7995ab193 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: b2ee70f1-6a92-4de1-a8d3-63f69565e29e] No waiting events found dispatching network-vif-plugged-9c5705d0-f551-4726-a476-7a9060442f88 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 03:14:56 np0005548731 nova_compute[232433]: 2025-12-06 08:14:56.621 232437 WARNING nova.compute.manager [req-1f83588b-f5d1-4cae-81fd-b0453f8070e6 req-3bb5bd5e-6470-49be-b15d-92d7995ab193 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: b2ee70f1-6a92-4de1-a8d3-63f69565e29e] Received unexpected event network-vif-plugged-9c5705d0-f551-4726-a476-7a9060442f88 for instance with vm_state active and task_state deleting.#033[00m
Dec  6 03:14:56 np0005548731 nova_compute[232433]: 2025-12-06 08:14:56.853 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:14:56 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:14:56 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:14:56 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:14:56.877 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:14:57 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:14:57 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:14:57 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:14:57.009 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:14:57 np0005548731 nova_compute[232433]: 2025-12-06 08:14:57.026 232437 DEBUG nova.network.neutron [-] [instance: b2ee70f1-6a92-4de1-a8d3-63f69565e29e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 03:14:57 np0005548731 nova_compute[232433]: 2025-12-06 08:14:57.048 232437 INFO nova.compute.manager [-] [instance: b2ee70f1-6a92-4de1-a8d3-63f69565e29e] Took 2.09 seconds to deallocate network for instance.#033[00m
Dec  6 03:14:57 np0005548731 nova_compute[232433]: 2025-12-06 08:14:57.124 232437 DEBUG oslo_concurrency.lockutils [None req-3f65e81a-2b82-4d35-bb95-c777d8ee7fd8 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:14:57 np0005548731 nova_compute[232433]: 2025-12-06 08:14:57.125 232437 DEBUG oslo_concurrency.lockutils [None req-3f65e81a-2b82-4d35-bb95-c777d8ee7fd8 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:14:57 np0005548731 nova_compute[232433]: 2025-12-06 08:14:57.168 232437 DEBUG oslo_concurrency.processutils [None req-3f65e81a-2b82-4d35-bb95-c777d8ee7fd8 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 03:14:57 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 03:14:57 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1889324331' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 03:14:57 np0005548731 nova_compute[232433]: 2025-12-06 08:14:57.619 232437 DEBUG oslo_concurrency.processutils [None req-3f65e81a-2b82-4d35-bb95-c777d8ee7fd8 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.450s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 03:14:57 np0005548731 nova_compute[232433]: 2025-12-06 08:14:57.626 232437 DEBUG nova.compute.provider_tree [None req-3f65e81a-2b82-4d35-bb95-c777d8ee7fd8 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Inventory has not changed in ProviderTree for provider: 6d00757a-082f-486d-ae84-869a2ba2e6e7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  6 03:14:57 np0005548731 nova_compute[232433]: 2025-12-06 08:14:57.643 232437 DEBUG nova.scheduler.client.report [None req-3f65e81a-2b82-4d35-bb95-c777d8ee7fd8 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Inventory has not changed for provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  6 03:14:57 np0005548731 nova_compute[232433]: 2025-12-06 08:14:57.663 232437 DEBUG oslo_concurrency.lockutils [None req-3f65e81a-2b82-4d35-bb95-c777d8ee7fd8 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.539s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:14:57 np0005548731 nova_compute[232433]: 2025-12-06 08:14:57.686 232437 INFO nova.scheduler.client.report [None req-3f65e81a-2b82-4d35-bb95-c777d8ee7fd8 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Deleted allocations for instance b2ee70f1-6a92-4de1-a8d3-63f69565e29e#033[00m
Dec  6 03:14:57 np0005548731 nova_compute[232433]: 2025-12-06 08:14:57.757 232437 DEBUG oslo_concurrency.lockutils [None req-3f65e81a-2b82-4d35-bb95-c777d8ee7fd8 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Lock "b2ee70f1-6a92-4de1-a8d3-63f69565e29e" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.569s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:14:58 np0005548731 nova_compute[232433]: 2025-12-06 08:14:58.410 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:14:58 np0005548731 nova_compute[232433]: 2025-12-06 08:14:58.411 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:14:58 np0005548731 nova_compute[232433]: 2025-12-06 08:14:58.698 232437 DEBUG nova.compute.manager [req-c28fb640-3089-4464-ba34-185178853b1b req-3b29b8aa-d102-44e4-bd43-26f1bc407b95 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: b2ee70f1-6a92-4de1-a8d3-63f69565e29e] Received event network-vif-deleted-9c5705d0-f551-4726-a476-7a9060442f88 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 03:14:58 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:14:58 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:14:58 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:14:58.878 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:14:59 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:14:59 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:14:59 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:14:59.013 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:14:59 np0005548731 nova_compute[232433]: 2025-12-06 08:14:59.509 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:15:00 np0005548731 nova_compute[232433]: 2025-12-06 08:15:00.105 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:15:00 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:15:00 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:15:00 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:15:00 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:15:00.880 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:15:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:15:00.918 143965 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:15:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:15:00.918 143965 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:15:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:15:00.918 143965 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:15:01 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:15:01 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:15:01 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:15:01.015 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:15:01 np0005548731 nova_compute[232433]: 2025-12-06 08:15:01.870 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:15:02 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:15:02 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:15:02 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:15:02.882 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:15:02 np0005548731 ceph-mgr[77818]: client.0 ms_handle_reset on v2:192.168.122.100:6800/798720280
Dec  6 03:15:03 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:15:03 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:15:03 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:15:03.019 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:15:03 np0005548731 nova_compute[232433]: 2025-12-06 08:15:03.613 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:15:03 np0005548731 nova_compute[232433]: 2025-12-06 08:15:03.769 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:15:04 np0005548731 nova_compute[232433]: 2025-12-06 08:15:04.511 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:15:04 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:15:04 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:15:04 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:15:04.884 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:15:05 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:15:05 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:15:05 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:15:05.022 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:15:05 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:15:06 np0005548731 nova_compute[232433]: 2025-12-06 08:15:06.872 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:15:06 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:15:06 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:15:06 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:15:06.886 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:15:06 np0005548731 podman[336229]: 2025-12-06 08:15:06.892394082 +0000 UTC m=+0.054140280 container health_status 26c2ce9bdb83c6db5ccad7f5b2a7cb4e9354ab0235ad2f942ea42c8feda2142b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent)
Dec  6 03:15:06 np0005548731 podman[336231]: 2025-12-06 08:15:06.902286473 +0000 UTC m=+0.059383497 container health_status ae4fd08cdec31d6ae2a6b91ffff7deb6ca5a8375cb52ee4b685cc6f25796824e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Dec  6 03:15:06 np0005548731 podman[336230]: 2025-12-06 08:15:06.926366209 +0000 UTC m=+0.085868192 container health_status a118c3becf56cfbcae208065771ebf10a65162bb7b96389971293e534823fb78 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec  6 03:15:07 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:15:07 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:15:07 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:15:07.025 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:15:08 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:15:08 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:15:08 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:15:08.888 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:15:09 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:15:09 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:15:09 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:15:09.028 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:15:09 np0005548731 nova_compute[232433]: 2025-12-06 08:15:09.430 232437 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765008894.4294739, b2ee70f1-6a92-4de1-a8d3-63f69565e29e => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 03:15:09 np0005548731 nova_compute[232433]: 2025-12-06 08:15:09.431 232437 INFO nova.compute.manager [-] [instance: b2ee70f1-6a92-4de1-a8d3-63f69565e29e] VM Stopped (Lifecycle Event)#033[00m
Dec  6 03:15:09 np0005548731 nova_compute[232433]: 2025-12-06 08:15:09.512 232437 DEBUG nova.compute.manager [None req-228067ef-6cf9-4d1e-9a2c-85e350e66d66 - - - - - -] [instance: b2ee70f1-6a92-4de1-a8d3-63f69565e29e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 03:15:09 np0005548731 nova_compute[232433]: 2025-12-06 08:15:09.515 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:15:10 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:15:10 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:15:10 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:15:10 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:15:10.890 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:15:11 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:15:11 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:15:11 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:15:11.031 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:15:11 np0005548731 nova_compute[232433]: 2025-12-06 08:15:11.926 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:15:12 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:15:12 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:15:12 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:15:12.892 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:15:13 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:15:13 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:15:13 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:15:13.034 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:15:14 np0005548731 nova_compute[232433]: 2025-12-06 08:15:14.570 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:15:14 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:15:14 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:15:14 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:15:14.894 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:15:15 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:15:15 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:15:15 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:15:15.037 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:15:15 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:15:16 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:15:16 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:15:16 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:15:16.897 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:15:16 np0005548731 nova_compute[232433]: 2025-12-06 08:15:16.928 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:15:17 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:15:17 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:15:17 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:15:17.041 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:15:18 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:15:18 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:15:18 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:15:18.900 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:15:19 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:15:19 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 03:15:19 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:15:19.044 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 03:15:19 np0005548731 nova_compute[232433]: 2025-12-06 08:15:19.573 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:15:20 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:15:20 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:15:20 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:15:20 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:15:20.903 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:15:21 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:15:21 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:15:21 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:15:21.048 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:15:21 np0005548731 nova_compute[232433]: 2025-12-06 08:15:21.935 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:15:22 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:15:22.220 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=95, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:ec:b3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '32:72:e7:89:e0:7d'}, ipsec=False) old=SB_Global(nb_cfg=94) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 03:15:22 np0005548731 nova_compute[232433]: 2025-12-06 08:15:22.220 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:15:22 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:15:22.221 143965 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec  6 03:15:22 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:15:22 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:15:22 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:15:22.906 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:15:23 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:15:23 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:15:23 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:15:23.053 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:15:24 np0005548731 nova_compute[232433]: 2025-12-06 08:15:24.576 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:15:24 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:15:24 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:15:24 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:15:24.908 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:15:25 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:15:25 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 03:15:25 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:15:25.056 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 03:15:25 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:15:25.224 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=9f96b960-b4f2-40bd-ae99-08121f5e8b78, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '95'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 03:15:25 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:15:26 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:15:26 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:15:26 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:15:26.910 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:15:26 np0005548731 nova_compute[232433]: 2025-12-06 08:15:26.937 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:15:27 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:15:27 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:15:27 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:15:27.060 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:15:28 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:15:28 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:15:28 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:15:28.913 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:15:29 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:15:29 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:15:29 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:15:29.064 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:15:29 np0005548731 nova_compute[232433]: 2025-12-06 08:15:29.632 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:15:30 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:15:30 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:15:30 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:15:30 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:15:30.916 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:15:31 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:15:31 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:15:31 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:15:31.067 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:15:31 np0005548731 nova_compute[232433]: 2025-12-06 08:15:31.940 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:15:32 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:15:32 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:15:32 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:15:32.918 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:15:33 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:15:33 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:15:33 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:15:33.070 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:15:34 np0005548731 nova_compute[232433]: 2025-12-06 08:15:34.687 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:15:34 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:15:34 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:15:34 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:15:34.921 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:15:35 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:15:35 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:15:35 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:15:35.074 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:15:35 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:15:36 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:15:36 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:15:36 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:15:36.923 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:15:36 np0005548731 nova_compute[232433]: 2025-12-06 08:15:36.982 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:15:37 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:15:37 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:15:37 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:15:37.077 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:15:37 np0005548731 podman[336367]: 2025-12-06 08:15:37.907351368 +0000 UTC m=+0.059465719 container health_status ae4fd08cdec31d6ae2a6b91ffff7deb6ca5a8375cb52ee4b685cc6f25796824e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  6 03:15:37 np0005548731 podman[336365]: 2025-12-06 08:15:37.916100261 +0000 UTC m=+0.079037906 container health_status 26c2ce9bdb83c6db5ccad7f5b2a7cb4e9354ab0235ad2f942ea42c8feda2142b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true)
Dec  6 03:15:37 np0005548731 podman[336366]: 2025-12-06 08:15:37.932335467 +0000 UTC m=+0.087442060 container health_status a118c3becf56cfbcae208065771ebf10a65162bb7b96389971293e534823fb78 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_id=ovn_controller)
Dec  6 03:15:38 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:15:38 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:15:38 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:15:38.925 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:15:39 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:15:39 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:15:39 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:15:39.080 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:15:39 np0005548731 nova_compute[232433]: 2025-12-06 08:15:39.689 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:15:40 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:15:40 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:15:40 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:15:40 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:15:40.926 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:15:41 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:15:41 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:15:41 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:15:41.082 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:15:41 np0005548731 nova_compute[232433]: 2025-12-06 08:15:41.984 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:15:42 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:15:42 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:15:42 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:15:42.928 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:15:43 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:15:43 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:15:43 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:15:43.086 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:15:44 np0005548731 nova_compute[232433]: 2025-12-06 08:15:44.692 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:15:44 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:15:44 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:15:44 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:15:44.930 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:15:45 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:15:45 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:15:45 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:15:45.090 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:15:45 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:15:46 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:15:46 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:15:46 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:15:46.932 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:15:46 np0005548731 nova_compute[232433]: 2025-12-06 08:15:46.985 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:15:47 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:15:47 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:15:47 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:15:47.093 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:15:48 np0005548731 nova_compute[232433]: 2025-12-06 08:15:48.105 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:15:48 np0005548731 nova_compute[232433]: 2025-12-06 08:15:48.105 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  6 03:15:48 np0005548731 nova_compute[232433]: 2025-12-06 08:15:48.105 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  6 03:15:48 np0005548731 nova_compute[232433]: 2025-12-06 08:15:48.131 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Dec  6 03:15:48 np0005548731 nova_compute[232433]: 2025-12-06 08:15:48.392 232437 DEBUG oslo_concurrency.lockutils [None req-56685c18-7a67-4377-9203-025a37e0b256 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Acquiring lock "326c22d5-36a0-4d26-bfbf-6dc1811a1ea4" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:15:48 np0005548731 nova_compute[232433]: 2025-12-06 08:15:48.393 232437 DEBUG oslo_concurrency.lockutils [None req-56685c18-7a67-4377-9203-025a37e0b256 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Lock "326c22d5-36a0-4d26-bfbf-6dc1811a1ea4" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:15:48 np0005548731 nova_compute[232433]: 2025-12-06 08:15:48.422 232437 DEBUG nova.compute.manager [None req-56685c18-7a67-4377-9203-025a37e0b256 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] [instance: 326c22d5-36a0-4d26-bfbf-6dc1811a1ea4] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Dec  6 03:15:48 np0005548731 nova_compute[232433]: 2025-12-06 08:15:48.499 232437 DEBUG oslo_concurrency.lockutils [None req-56685c18-7a67-4377-9203-025a37e0b256 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:15:48 np0005548731 nova_compute[232433]: 2025-12-06 08:15:48.499 232437 DEBUG oslo_concurrency.lockutils [None req-56685c18-7a67-4377-9203-025a37e0b256 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:15:48 np0005548731 nova_compute[232433]: 2025-12-06 08:15:48.506 232437 DEBUG nova.virt.hardware [None req-56685c18-7a67-4377-9203-025a37e0b256 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Dec  6 03:15:48 np0005548731 nova_compute[232433]: 2025-12-06 08:15:48.507 232437 INFO nova.compute.claims [None req-56685c18-7a67-4377-9203-025a37e0b256 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] [instance: 326c22d5-36a0-4d26-bfbf-6dc1811a1ea4] Claim successful on node compute-2.ctlplane.example.com#033[00m
Dec  6 03:15:48 np0005548731 nova_compute[232433]: 2025-12-06 08:15:48.620 232437 DEBUG oslo_concurrency.processutils [None req-56685c18-7a67-4377-9203-025a37e0b256 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 03:15:48 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:15:48 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:15:48 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:15:48.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:15:49 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 03:15:49 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1497060737' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 03:15:49 np0005548731 nova_compute[232433]: 2025-12-06 08:15:49.086 232437 DEBUG oslo_concurrency.processutils [None req-56685c18-7a67-4377-9203-025a37e0b256 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.466s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 03:15:49 np0005548731 nova_compute[232433]: 2025-12-06 08:15:49.093 232437 DEBUG nova.compute.provider_tree [None req-56685c18-7a67-4377-9203-025a37e0b256 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Inventory has not changed in ProviderTree for provider: 6d00757a-082f-486d-ae84-869a2ba2e6e7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  6 03:15:49 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:15:49 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:15:49 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:15:49.096 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:15:49 np0005548731 nova_compute[232433]: 2025-12-06 08:15:49.113 232437 DEBUG nova.scheduler.client.report [None req-56685c18-7a67-4377-9203-025a37e0b256 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Inventory has not changed for provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  6 03:15:49 np0005548731 nova_compute[232433]: 2025-12-06 08:15:49.138 232437 DEBUG oslo_concurrency.lockutils [None req-56685c18-7a67-4377-9203-025a37e0b256 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.638s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:15:49 np0005548731 nova_compute[232433]: 2025-12-06 08:15:49.138 232437 DEBUG nova.compute.manager [None req-56685c18-7a67-4377-9203-025a37e0b256 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] [instance: 326c22d5-36a0-4d26-bfbf-6dc1811a1ea4] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Dec  6 03:15:49 np0005548731 nova_compute[232433]: 2025-12-06 08:15:49.179 232437 DEBUG nova.compute.manager [None req-56685c18-7a67-4377-9203-025a37e0b256 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] [instance: 326c22d5-36a0-4d26-bfbf-6dc1811a1ea4] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Dec  6 03:15:49 np0005548731 nova_compute[232433]: 2025-12-06 08:15:49.179 232437 DEBUG nova.network.neutron [None req-56685c18-7a67-4377-9203-025a37e0b256 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] [instance: 326c22d5-36a0-4d26-bfbf-6dc1811a1ea4] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Dec  6 03:15:49 np0005548731 nova_compute[232433]: 2025-12-06 08:15:49.197 232437 INFO nova.virt.libvirt.driver [None req-56685c18-7a67-4377-9203-025a37e0b256 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] [instance: 326c22d5-36a0-4d26-bfbf-6dc1811a1ea4] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Dec  6 03:15:49 np0005548731 nova_compute[232433]: 2025-12-06 08:15:49.214 232437 DEBUG nova.compute.manager [None req-56685c18-7a67-4377-9203-025a37e0b256 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] [instance: 326c22d5-36a0-4d26-bfbf-6dc1811a1ea4] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Dec  6 03:15:49 np0005548731 nova_compute[232433]: 2025-12-06 08:15:49.262 232437 INFO nova.virt.block_device [None req-56685c18-7a67-4377-9203-025a37e0b256 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] [instance: 326c22d5-36a0-4d26-bfbf-6dc1811a1ea4] Booting with volume 0ad7c9d2-a5e5-41f2-b829-63fa58265115 at /dev/vda#033[00m
Dec  6 03:15:49 np0005548731 nova_compute[232433]: 2025-12-06 08:15:49.418 232437 DEBUG os_brick.utils [None req-56685c18-7a67-4377-9203-025a37e0b256 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.102', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-2.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Dec  6 03:15:49 np0005548731 nova_compute[232433]: 2025-12-06 08:15:49.420 237736 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 03:15:49 np0005548731 nova_compute[232433]: 2025-12-06 08:15:49.430 237736 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.010s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 03:15:49 np0005548731 nova_compute[232433]: 2025-12-06 08:15:49.430 237736 DEBUG oslo.privsep.daemon [-] privsep: reply[da73c11a-3ad5-457c-ba75-fffc98be25e4]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:15:49 np0005548731 nova_compute[232433]: 2025-12-06 08:15:49.431 237736 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 03:15:49 np0005548731 nova_compute[232433]: 2025-12-06 08:15:49.437 237736 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.006s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 03:15:49 np0005548731 nova_compute[232433]: 2025-12-06 08:15:49.437 237736 DEBUG oslo.privsep.daemon [-] privsep: reply[8de6f389-dcbc-4118-ac62-582bcf345d4f]: (4, ('InitiatorName=iqn.1994-05.com.redhat:63778d5959f0', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:15:49 np0005548731 nova_compute[232433]: 2025-12-06 08:15:49.438 237736 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 03:15:49 np0005548731 nova_compute[232433]: 2025-12-06 08:15:49.447 237736 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 03:15:49 np0005548731 nova_compute[232433]: 2025-12-06 08:15:49.447 237736 DEBUG oslo.privsep.daemon [-] privsep: reply[629acb0b-5819-4f59-b39e-c0c269c85b88]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:15:49 np0005548731 nova_compute[232433]: 2025-12-06 08:15:49.449 237736 DEBUG oslo.privsep.daemon [-] privsep: reply[7700762c-b11d-4869-8e80-55addd890dd2]: (4, 'ec2492e2-e0d1-43d3-b74f-744a8272a0e6') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:15:49 np0005548731 nova_compute[232433]: 2025-12-06 08:15:49.449 232437 DEBUG oslo_concurrency.processutils [None req-56685c18-7a67-4377-9203-025a37e0b256 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 03:15:49 np0005548731 nova_compute[232433]: 2025-12-06 08:15:49.477 232437 DEBUG oslo_concurrency.processutils [None req-56685c18-7a67-4377-9203-025a37e0b256 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] CMD "nvme version" returned: 0 in 0.028s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 03:15:49 np0005548731 nova_compute[232433]: 2025-12-06 08:15:49.480 232437 DEBUG os_brick.initiator.connectors.lightos [None req-56685c18-7a67-4377-9203-025a37e0b256 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Dec  6 03:15:49 np0005548731 nova_compute[232433]: 2025-12-06 08:15:49.480 232437 DEBUG os_brick.initiator.connectors.lightos [None req-56685c18-7a67-4377-9203-025a37e0b256 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Dec  6 03:15:49 np0005548731 nova_compute[232433]: 2025-12-06 08:15:49.480 232437 DEBUG os_brick.initiator.connectors.lightos [None req-56685c18-7a67-4377-9203-025a37e0b256 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:bf3e0a14-a5f8-4123-aa26-e7cad37b879a dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Dec  6 03:15:49 np0005548731 nova_compute[232433]: 2025-12-06 08:15:49.481 232437 DEBUG os_brick.utils [None req-56685c18-7a67-4377-9203-025a37e0b256 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] <== get_connector_properties: return (61ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.102', 'host': 'compute-2.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:63778d5959f0', 'do_local_attach': False, 'nvme_hostid': 'bf3e0a14-a5f8-4123-aa26-e7cad37b879a', 'system uuid': 'ec2492e2-e0d1-43d3-b74f-744a8272a0e6', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:bf3e0a14-a5f8-4123-aa26-e7cad37b879a', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Dec  6 03:15:49 np0005548731 nova_compute[232433]: 2025-12-06 08:15:49.481 232437 DEBUG nova.virt.block_device [None req-56685c18-7a67-4377-9203-025a37e0b256 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] [instance: 326c22d5-36a0-4d26-bfbf-6dc1811a1ea4] Updating existing volume attachment record: 787f5f22-2874-4af8-a0cf-ee1d95931ae7 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Dec  6 03:15:49 np0005548731 nova_compute[232433]: 2025-12-06 08:15:49.695 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:15:50 np0005548731 nova_compute[232433]: 2025-12-06 08:15:50.037 232437 DEBUG nova.policy [None req-56685c18-7a67-4377-9203-025a37e0b256 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '8e8feb4540af4e2caa45a88a9202dbe2', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '4b2dc4b8729f446a9c7ac69ca446f71d', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Dec  6 03:15:50 np0005548731 nova_compute[232433]: 2025-12-06 08:15:50.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:15:50 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:15:50 np0005548731 nova_compute[232433]: 2025-12-06 08:15:50.516 232437 DEBUG nova.compute.manager [None req-56685c18-7a67-4377-9203-025a37e0b256 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] [instance: 326c22d5-36a0-4d26-bfbf-6dc1811a1ea4] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Dec  6 03:15:50 np0005548731 nova_compute[232433]: 2025-12-06 08:15:50.517 232437 DEBUG nova.virt.libvirt.driver [None req-56685c18-7a67-4377-9203-025a37e0b256 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] [instance: 326c22d5-36a0-4d26-bfbf-6dc1811a1ea4] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Dec  6 03:15:50 np0005548731 nova_compute[232433]: 2025-12-06 08:15:50.518 232437 INFO nova.virt.libvirt.driver [None req-56685c18-7a67-4377-9203-025a37e0b256 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] [instance: 326c22d5-36a0-4d26-bfbf-6dc1811a1ea4] Creating image(s)#033[00m
Dec  6 03:15:50 np0005548731 nova_compute[232433]: 2025-12-06 08:15:50.518 232437 DEBUG nova.virt.libvirt.driver [None req-56685c18-7a67-4377-9203-025a37e0b256 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] [instance: 326c22d5-36a0-4d26-bfbf-6dc1811a1ea4] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859#033[00m
Dec  6 03:15:50 np0005548731 nova_compute[232433]: 2025-12-06 08:15:50.518 232437 DEBUG nova.virt.libvirt.driver [None req-56685c18-7a67-4377-9203-025a37e0b256 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] [instance: 326c22d5-36a0-4d26-bfbf-6dc1811a1ea4] Ensure instance console log exists: /var/lib/nova/instances/326c22d5-36a0-4d26-bfbf-6dc1811a1ea4/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec  6 03:15:50 np0005548731 nova_compute[232433]: 2025-12-06 08:15:50.519 232437 DEBUG oslo_concurrency.lockutils [None req-56685c18-7a67-4377-9203-025a37e0b256 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:15:50 np0005548731 nova_compute[232433]: 2025-12-06 08:15:50.519 232437 DEBUG oslo_concurrency.lockutils [None req-56685c18-7a67-4377-9203-025a37e0b256 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:15:50 np0005548731 nova_compute[232433]: 2025-12-06 08:15:50.519 232437 DEBUG oslo_concurrency.lockutils [None req-56685c18-7a67-4377-9203-025a37e0b256 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:15:50 np0005548731 nova_compute[232433]: 2025-12-06 08:15:50.702 232437 DEBUG nova.network.neutron [None req-56685c18-7a67-4377-9203-025a37e0b256 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] [instance: 326c22d5-36a0-4d26-bfbf-6dc1811a1ea4] Successfully created port: 2fb2e839-a8bc-4fe9-9dc5-09183d3bb063 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Dec  6 03:15:50 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:15:50 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:15:50 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:15:50.936 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:15:51 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:15:51 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:15:51 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:15:51.097 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:15:51 np0005548731 ovn_controller[133927]: 2025-12-06T08:15:51Z|01044|memory_trim|INFO|Detected inactivity (last active 30000 ms ago): trimming memory
Dec  6 03:15:51 np0005548731 nova_compute[232433]: 2025-12-06 08:15:51.527 232437 DEBUG nova.network.neutron [None req-56685c18-7a67-4377-9203-025a37e0b256 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] [instance: 326c22d5-36a0-4d26-bfbf-6dc1811a1ea4] Successfully updated port: 2fb2e839-a8bc-4fe9-9dc5-09183d3bb063 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Dec  6 03:15:51 np0005548731 nova_compute[232433]: 2025-12-06 08:15:51.562 232437 DEBUG oslo_concurrency.lockutils [None req-56685c18-7a67-4377-9203-025a37e0b256 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Acquiring lock "refresh_cache-326c22d5-36a0-4d26-bfbf-6dc1811a1ea4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 03:15:51 np0005548731 nova_compute[232433]: 2025-12-06 08:15:51.562 232437 DEBUG oslo_concurrency.lockutils [None req-56685c18-7a67-4377-9203-025a37e0b256 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Acquired lock "refresh_cache-326c22d5-36a0-4d26-bfbf-6dc1811a1ea4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 03:15:51 np0005548731 nova_compute[232433]: 2025-12-06 08:15:51.563 232437 DEBUG nova.network.neutron [None req-56685c18-7a67-4377-9203-025a37e0b256 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] [instance: 326c22d5-36a0-4d26-bfbf-6dc1811a1ea4] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec  6 03:15:51 np0005548731 nova_compute[232433]: 2025-12-06 08:15:51.681 232437 DEBUG nova.compute.manager [req-61960553-76c3-4a35-9e2e-1224c3b2f438 req-e3e52eeb-caeb-4426-833e-d1fee1f5e7c0 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 326c22d5-36a0-4d26-bfbf-6dc1811a1ea4] Received event network-changed-2fb2e839-a8bc-4fe9-9dc5-09183d3bb063 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 03:15:51 np0005548731 nova_compute[232433]: 2025-12-06 08:15:51.682 232437 DEBUG nova.compute.manager [req-61960553-76c3-4a35-9e2e-1224c3b2f438 req-e3e52eeb-caeb-4426-833e-d1fee1f5e7c0 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 326c22d5-36a0-4d26-bfbf-6dc1811a1ea4] Refreshing instance network info cache due to event network-changed-2fb2e839-a8bc-4fe9-9dc5-09183d3bb063. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  6 03:15:51 np0005548731 nova_compute[232433]: 2025-12-06 08:15:51.682 232437 DEBUG oslo_concurrency.lockutils [req-61960553-76c3-4a35-9e2e-1224c3b2f438 req-e3e52eeb-caeb-4426-833e-d1fee1f5e7c0 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-326c22d5-36a0-4d26-bfbf-6dc1811a1ea4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 03:15:51 np0005548731 nova_compute[232433]: 2025-12-06 08:15:51.736 232437 DEBUG nova.network.neutron [None req-56685c18-7a67-4377-9203-025a37e0b256 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] [instance: 326c22d5-36a0-4d26-bfbf-6dc1811a1ea4] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec  6 03:15:51 np0005548731 nova_compute[232433]: 2025-12-06 08:15:51.986 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:15:52 np0005548731 nova_compute[232433]: 2025-12-06 08:15:52.099 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:15:52 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  6 03:15:52 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 03:15:52 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  6 03:15:52 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:15:52 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 03:15:52 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:15:52.938 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 03:15:53 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:15:53 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:15:53 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:15:53.100 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:15:53 np0005548731 nova_compute[232433]: 2025-12-06 08:15:53.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:15:54 np0005548731 nova_compute[232433]: 2025-12-06 08:15:54.032 232437 DEBUG nova.network.neutron [None req-56685c18-7a67-4377-9203-025a37e0b256 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] [instance: 326c22d5-36a0-4d26-bfbf-6dc1811a1ea4] Updating instance_info_cache with network_info: [{"id": "2fb2e839-a8bc-4fe9-9dc5-09183d3bb063", "address": "fa:16:3e:1a:aa:95", "network": {"id": "b4ef1374-9c77-45a7-8776-50aa60c7d84a", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1664561964-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4b2dc4b8729f446a9c7ac69ca446f71d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2fb2e839-a8", "ovs_interfaceid": "2fb2e839-a8bc-4fe9-9dc5-09183d3bb063", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 03:15:54 np0005548731 nova_compute[232433]: 2025-12-06 08:15:54.052 232437 DEBUG oslo_concurrency.lockutils [None req-56685c18-7a67-4377-9203-025a37e0b256 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Releasing lock "refresh_cache-326c22d5-36a0-4d26-bfbf-6dc1811a1ea4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 03:15:54 np0005548731 nova_compute[232433]: 2025-12-06 08:15:54.053 232437 DEBUG nova.compute.manager [None req-56685c18-7a67-4377-9203-025a37e0b256 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] [instance: 326c22d5-36a0-4d26-bfbf-6dc1811a1ea4] Instance network_info: |[{"id": "2fb2e839-a8bc-4fe9-9dc5-09183d3bb063", "address": "fa:16:3e:1a:aa:95", "network": {"id": "b4ef1374-9c77-45a7-8776-50aa60c7d84a", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1664561964-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4b2dc4b8729f446a9c7ac69ca446f71d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2fb2e839-a8", "ovs_interfaceid": "2fb2e839-a8bc-4fe9-9dc5-09183d3bb063", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Dec  6 03:15:54 np0005548731 nova_compute[232433]: 2025-12-06 08:15:54.053 232437 DEBUG oslo_concurrency.lockutils [req-61960553-76c3-4a35-9e2e-1224c3b2f438 req-e3e52eeb-caeb-4426-833e-d1fee1f5e7c0 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-326c22d5-36a0-4d26-bfbf-6dc1811a1ea4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 03:15:54 np0005548731 nova_compute[232433]: 2025-12-06 08:15:54.053 232437 DEBUG nova.network.neutron [req-61960553-76c3-4a35-9e2e-1224c3b2f438 req-e3e52eeb-caeb-4426-833e-d1fee1f5e7c0 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 326c22d5-36a0-4d26-bfbf-6dc1811a1ea4] Refreshing network info cache for port 2fb2e839-a8bc-4fe9-9dc5-09183d3bb063 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  6 03:15:54 np0005548731 nova_compute[232433]: 2025-12-06 08:15:54.055 232437 DEBUG nova.virt.libvirt.driver [None req-56685c18-7a67-4377-9203-025a37e0b256 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] [instance: 326c22d5-36a0-4d26-bfbf-6dc1811a1ea4] Start _get_guest_xml network_info=[{"id": "2fb2e839-a8bc-4fe9-9dc5-09183d3bb063", "address": "fa:16:3e:1a:aa:95", "network": {"id": "b4ef1374-9c77-45a7-8776-50aa60c7d84a", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1664561964-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4b2dc4b8729f446a9c7ac69ca446f71d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2fb2e839-a8", "ovs_interfaceid": "2fb2e839-a8bc-4fe9-9dc5-09183d3bb063", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': 
'/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'guest_format': None, 'device_type': 'disk', 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-0ad7c9d2-a5e5-41f2-b829-63fa58265115', 'hosts': ['192.168.122.100', '192.168.122.102', '192.168.122.101'], 'ports': ['6789', '6789', '6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '0ad7c9d2-a5e5-41f2-b829-63fa58265115', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': True, 'cacheable': False}, 'status': 'reserved', 'instance': '326c22d5-36a0-4d26-bfbf-6dc1811a1ea4', 'attached_at': '', 'detached_at': '', 'volume_id': '0ad7c9d2-a5e5-41f2-b829-63fa58265115', 'serial': '0ad7c9d2-a5e5-41f2-b829-63fa58265115'}, 'disk_bus': 'virtio', 'boot_index': 0, 'delete_on_termination': False, 'mount_device': '/dev/vda', 'attachment_id': '787f5f22-2874-4af8-a0cf-ee1d95931ae7', 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec  6 03:15:54 np0005548731 nova_compute[232433]: 2025-12-06 08:15:54.061 232437 WARNING nova.virt.libvirt.driver [None req-56685c18-7a67-4377-9203-025a37e0b256 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  6 03:15:54 np0005548731 nova_compute[232433]: 2025-12-06 08:15:54.066 232437 DEBUG nova.virt.libvirt.host [None req-56685c18-7a67-4377-9203-025a37e0b256 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec  6 03:15:54 np0005548731 nova_compute[232433]: 2025-12-06 08:15:54.066 232437 DEBUG nova.virt.libvirt.host [None req-56685c18-7a67-4377-9203-025a37e0b256 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec  6 03:15:54 np0005548731 nova_compute[232433]: 2025-12-06 08:15:54.069 232437 DEBUG nova.virt.libvirt.host [None req-56685c18-7a67-4377-9203-025a37e0b256 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec  6 03:15:54 np0005548731 nova_compute[232433]: 2025-12-06 08:15:54.070 232437 DEBUG nova.virt.libvirt.host [None req-56685c18-7a67-4377-9203-025a37e0b256 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec  6 03:15:54 np0005548731 nova_compute[232433]: 2025-12-06 08:15:54.071 232437 DEBUG nova.virt.libvirt.driver [None req-56685c18-7a67-4377-9203-025a37e0b256 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec  6 03:15:54 np0005548731 nova_compute[232433]: 2025-12-06 08:15:54.071 232437 DEBUG nova.virt.hardware [None req-56685c18-7a67-4377-9203-025a37e0b256 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-06T06:56:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='25848a18-11d9-4f11-80b5-5d005675c76d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec  6 03:15:54 np0005548731 nova_compute[232433]: 2025-12-06 08:15:54.071 232437 DEBUG nova.virt.hardware [None req-56685c18-7a67-4377-9203-025a37e0b256 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec  6 03:15:54 np0005548731 nova_compute[232433]: 2025-12-06 08:15:54.071 232437 DEBUG nova.virt.hardware [None req-56685c18-7a67-4377-9203-025a37e0b256 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec  6 03:15:54 np0005548731 nova_compute[232433]: 2025-12-06 08:15:54.072 232437 DEBUG nova.virt.hardware [None req-56685c18-7a67-4377-9203-025a37e0b256 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec  6 03:15:54 np0005548731 nova_compute[232433]: 2025-12-06 08:15:54.072 232437 DEBUG nova.virt.hardware [None req-56685c18-7a67-4377-9203-025a37e0b256 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec  6 03:15:54 np0005548731 nova_compute[232433]: 2025-12-06 08:15:54.072 232437 DEBUG nova.virt.hardware [None req-56685c18-7a67-4377-9203-025a37e0b256 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec  6 03:15:54 np0005548731 nova_compute[232433]: 2025-12-06 08:15:54.072 232437 DEBUG nova.virt.hardware [None req-56685c18-7a67-4377-9203-025a37e0b256 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec  6 03:15:54 np0005548731 nova_compute[232433]: 2025-12-06 08:15:54.072 232437 DEBUG nova.virt.hardware [None req-56685c18-7a67-4377-9203-025a37e0b256 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec  6 03:15:54 np0005548731 nova_compute[232433]: 2025-12-06 08:15:54.072 232437 DEBUG nova.virt.hardware [None req-56685c18-7a67-4377-9203-025a37e0b256 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec  6 03:15:54 np0005548731 nova_compute[232433]: 2025-12-06 08:15:54.072 232437 DEBUG nova.virt.hardware [None req-56685c18-7a67-4377-9203-025a37e0b256 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec  6 03:15:54 np0005548731 nova_compute[232433]: 2025-12-06 08:15:54.073 232437 DEBUG nova.virt.hardware [None req-56685c18-7a67-4377-9203-025a37e0b256 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec  6 03:15:54 np0005548731 nova_compute[232433]: 2025-12-06 08:15:54.104 232437 DEBUG nova.storage.rbd_utils [None req-56685c18-7a67-4377-9203-025a37e0b256 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] rbd image 326c22d5-36a0-4d26-bfbf-6dc1811a1ea4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 03:15:54 np0005548731 nova_compute[232433]: 2025-12-06 08:15:54.108 232437 DEBUG oslo_concurrency.processutils [None req-56685c18-7a67-4377-9203-025a37e0b256 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 03:15:54 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  6 03:15:54 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2826303976' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  6 03:15:54 np0005548731 nova_compute[232433]: 2025-12-06 08:15:54.538 232437 DEBUG oslo_concurrency.processutils [None req-56685c18-7a67-4377-9203-025a37e0b256 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.431s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 03:15:54 np0005548731 nova_compute[232433]: 2025-12-06 08:15:54.662 232437 DEBUG os_brick.encryptors [None req-56685c18-7a67-4377-9203-025a37e0b256 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Using volume encryption metadata '{'encryption_key_id': '75728468-de3d-4c09-a185-2a907c7b8906', 'control_location': 'front-end', 'cipher': 'aes-xts-plain64', 'key_size': 256, 'provider': 'luks'}' for connection: {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-0ad7c9d2-a5e5-41f2-b829-63fa58265115', 'hosts': ['192.168.122.100', '192.168.122.102', '192.168.122.101'], 'ports': ['6789', '6789', '6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '0ad7c9d2-a5e5-41f2-b829-63fa58265115', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': True, 'cacheable': False}, 'status': 'reserved', 'instance': '326c22d5-36a0-4d26-bfbf-6dc1811a1ea4', 'attached_at': '', 'detached_at': '', 'volume_id': '0ad7c9d2-a5e5-41f2-b829-63fa58265115', 'serial': '} get_encryption_metadata /usr/lib/python3.9/site-packages/os_brick/encryptors/__init__.py:135#033[00m
Dec  6 03:15:54 np0005548731 nova_compute[232433]: 2025-12-06 08:15:54.666 232437 DEBUG barbicanclient.client [None req-56685c18-7a67-4377-9203-025a37e0b256 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Creating Client object Client /usr/lib/python3.9/site-packages/barbicanclient/client.py:163#033[00m
Dec  6 03:15:54 np0005548731 nova_compute[232433]: 2025-12-06 08:15:54.689 232437 DEBUG barbicanclient.v1.secrets [None req-56685c18-7a67-4377-9203-025a37e0b256 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Getting secret - Secret href: https://barbican-internal.openstack.svc:9311/secrets/75728468-de3d-4c09-a185-2a907c7b8906 get /usr/lib/python3.9/site-packages/barbicanclient/v1/secrets.py:514#033[00m
Dec  6 03:15:54 np0005548731 nova_compute[232433]: 2025-12-06 08:15:54.690 232437 INFO barbicanclient.base [None req-56685c18-7a67-4377-9203-025a37e0b256 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Calculated Secrets uuid ref: secrets/75728468-de3d-4c09-a185-2a907c7b8906#033[00m
Dec  6 03:15:54 np0005548731 nova_compute[232433]: 2025-12-06 08:15:54.698 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:15:54 np0005548731 nova_compute[232433]: 2025-12-06 08:15:54.711 232437 DEBUG barbicanclient.client [None req-56685c18-7a67-4377-9203-025a37e0b256 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Dec  6 03:15:54 np0005548731 nova_compute[232433]: 2025-12-06 08:15:54.712 232437 INFO barbicanclient.base [None req-56685c18-7a67-4377-9203-025a37e0b256 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Calculated Secrets uuid ref: secrets/75728468-de3d-4c09-a185-2a907c7b8906#033[00m
Dec  6 03:15:54 np0005548731 nova_compute[232433]: 2025-12-06 08:15:54.735 232437 DEBUG barbicanclient.client [None req-56685c18-7a67-4377-9203-025a37e0b256 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Dec  6 03:15:54 np0005548731 nova_compute[232433]: 2025-12-06 08:15:54.736 232437 INFO barbicanclient.base [None req-56685c18-7a67-4377-9203-025a37e0b256 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Calculated Secrets uuid ref: secrets/75728468-de3d-4c09-a185-2a907c7b8906#033[00m
Dec  6 03:15:54 np0005548731 nova_compute[232433]: 2025-12-06 08:15:54.754 232437 DEBUG barbicanclient.client [None req-56685c18-7a67-4377-9203-025a37e0b256 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Dec  6 03:15:54 np0005548731 nova_compute[232433]: 2025-12-06 08:15:54.755 232437 INFO barbicanclient.base [None req-56685c18-7a67-4377-9203-025a37e0b256 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Calculated Secrets uuid ref: secrets/75728468-de3d-4c09-a185-2a907c7b8906#033[00m
Dec  6 03:15:54 np0005548731 nova_compute[232433]: 2025-12-06 08:15:54.782 232437 DEBUG barbicanclient.client [None req-56685c18-7a67-4377-9203-025a37e0b256 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Dec  6 03:15:54 np0005548731 nova_compute[232433]: 2025-12-06 08:15:54.783 232437 INFO barbicanclient.base [None req-56685c18-7a67-4377-9203-025a37e0b256 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Calculated Secrets uuid ref: secrets/75728468-de3d-4c09-a185-2a907c7b8906#033[00m
Dec  6 03:15:54 np0005548731 nova_compute[232433]: 2025-12-06 08:15:54.806 232437 DEBUG barbicanclient.client [None req-56685c18-7a67-4377-9203-025a37e0b256 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Dec  6 03:15:54 np0005548731 nova_compute[232433]: 2025-12-06 08:15:54.807 232437 INFO barbicanclient.base [None req-56685c18-7a67-4377-9203-025a37e0b256 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Calculated Secrets uuid ref: secrets/75728468-de3d-4c09-a185-2a907c7b8906#033[00m
Dec  6 03:15:54 np0005548731 nova_compute[232433]: 2025-12-06 08:15:54.834 232437 DEBUG barbicanclient.client [None req-56685c18-7a67-4377-9203-025a37e0b256 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Dec  6 03:15:54 np0005548731 nova_compute[232433]: 2025-12-06 08:15:54.834 232437 INFO barbicanclient.base [None req-56685c18-7a67-4377-9203-025a37e0b256 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Calculated Secrets uuid ref: secrets/75728468-de3d-4c09-a185-2a907c7b8906#033[00m
Dec  6 03:15:54 np0005548731 nova_compute[232433]: 2025-12-06 08:15:54.863 232437 DEBUG barbicanclient.client [None req-56685c18-7a67-4377-9203-025a37e0b256 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Dec  6 03:15:54 np0005548731 nova_compute[232433]: 2025-12-06 08:15:54.864 232437 INFO barbicanclient.base [None req-56685c18-7a67-4377-9203-025a37e0b256 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Calculated Secrets uuid ref: secrets/75728468-de3d-4c09-a185-2a907c7b8906#033[00m
Dec  6 03:15:54 np0005548731 nova_compute[232433]: 2025-12-06 08:15:54.887 232437 DEBUG barbicanclient.client [None req-56685c18-7a67-4377-9203-025a37e0b256 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Dec  6 03:15:54 np0005548731 nova_compute[232433]: 2025-12-06 08:15:54.888 232437 INFO barbicanclient.base [None req-56685c18-7a67-4377-9203-025a37e0b256 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Calculated Secrets uuid ref: secrets/75728468-de3d-4c09-a185-2a907c7b8906#033[00m
Dec  6 03:15:54 np0005548731 nova_compute[232433]: 2025-12-06 08:15:54.911 232437 DEBUG barbicanclient.client [None req-56685c18-7a67-4377-9203-025a37e0b256 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Dec  6 03:15:54 np0005548731 nova_compute[232433]: 2025-12-06 08:15:54.912 232437 INFO barbicanclient.base [None req-56685c18-7a67-4377-9203-025a37e0b256 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Calculated Secrets uuid ref: secrets/75728468-de3d-4c09-a185-2a907c7b8906#033[00m
Dec  6 03:15:54 np0005548731 nova_compute[232433]: 2025-12-06 08:15:54.933 232437 DEBUG barbicanclient.client [None req-56685c18-7a67-4377-9203-025a37e0b256 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Dec  6 03:15:54 np0005548731 nova_compute[232433]: 2025-12-06 08:15:54.934 232437 INFO barbicanclient.base [None req-56685c18-7a67-4377-9203-025a37e0b256 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Calculated Secrets uuid ref: secrets/75728468-de3d-4c09-a185-2a907c7b8906#033[00m
Dec  6 03:15:54 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:15:54 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:15:54 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:15:54.941 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:15:54 np0005548731 nova_compute[232433]: 2025-12-06 08:15:54.951 232437 DEBUG barbicanclient.client [None req-56685c18-7a67-4377-9203-025a37e0b256 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Dec  6 03:15:54 np0005548731 nova_compute[232433]: 2025-12-06 08:15:54.952 232437 INFO barbicanclient.base [None req-56685c18-7a67-4377-9203-025a37e0b256 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Calculated Secrets uuid ref: secrets/75728468-de3d-4c09-a185-2a907c7b8906#033[00m
Dec  6 03:15:54 np0005548731 nova_compute[232433]: 2025-12-06 08:15:54.976 232437 DEBUG barbicanclient.client [None req-56685c18-7a67-4377-9203-025a37e0b256 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Dec  6 03:15:54 np0005548731 nova_compute[232433]: 2025-12-06 08:15:54.976 232437 INFO barbicanclient.base [None req-56685c18-7a67-4377-9203-025a37e0b256 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Calculated Secrets uuid ref: secrets/75728468-de3d-4c09-a185-2a907c7b8906#033[00m
Dec  6 03:15:55 np0005548731 nova_compute[232433]: 2025-12-06 08:15:55.004 232437 DEBUG barbicanclient.client [None req-56685c18-7a67-4377-9203-025a37e0b256 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Dec  6 03:15:55 np0005548731 nova_compute[232433]: 2025-12-06 08:15:55.005 232437 INFO barbicanclient.base [None req-56685c18-7a67-4377-9203-025a37e0b256 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Calculated Secrets uuid ref: secrets/75728468-de3d-4c09-a185-2a907c7b8906#033[00m
Dec  6 03:15:55 np0005548731 nova_compute[232433]: 2025-12-06 08:15:55.028 232437 DEBUG barbicanclient.client [None req-56685c18-7a67-4377-9203-025a37e0b256 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Dec  6 03:15:55 np0005548731 nova_compute[232433]: 2025-12-06 08:15:55.029 232437 INFO barbicanclient.base [None req-56685c18-7a67-4377-9203-025a37e0b256 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Calculated Secrets uuid ref: secrets/75728468-de3d-4c09-a185-2a907c7b8906#033[00m
Dec  6 03:15:55 np0005548731 nova_compute[232433]: 2025-12-06 08:15:55.050 232437 DEBUG barbicanclient.client [None req-56685c18-7a67-4377-9203-025a37e0b256 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Dec  6 03:15:55 np0005548731 nova_compute[232433]: 2025-12-06 08:15:55.051 232437 DEBUG nova.virt.libvirt.host [None req-56685c18-7a67-4377-9203-025a37e0b256 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Secret XML: <secret ephemeral="no" private="no">
Dec  6 03:15:55 np0005548731 nova_compute[232433]:  <usage type="volume">
Dec  6 03:15:55 np0005548731 nova_compute[232433]:    <volume>0ad7c9d2-a5e5-41f2-b829-63fa58265115</volume>
Dec  6 03:15:55 np0005548731 nova_compute[232433]:  </usage>
Dec  6 03:15:55 np0005548731 nova_compute[232433]: </secret>
Dec  6 03:15:55 np0005548731 nova_compute[232433]: create_secret /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1131#033[00m
Dec  6 03:15:55 np0005548731 nova_compute[232433]: 2025-12-06 08:15:55.095 232437 DEBUG nova.virt.libvirt.vif [None req-56685c18-7a67-4377-9203-025a37e0b256 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T08:15:47Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-server-2129660522',display_name='tempest-TestVolumeBootPattern-server-2129660522',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-testvolumebootpattern-server-2129660522',id=205,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4b2dc4b8729f446a9c7ac69ca446f71d',ramdisk_id='',reservation_id='r-2o7nux00',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestVolumeBootPattern-97496240',owner_user_name='tempest-TestVolumeBootPattern-97496240-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T08:15:49Z,user_data=None,user_id='
8e8feb4540af4e2caa45a88a9202dbe2',uuid=326c22d5-36a0-4d26-bfbf-6dc1811a1ea4,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2fb2e839-a8bc-4fe9-9dc5-09183d3bb063", "address": "fa:16:3e:1a:aa:95", "network": {"id": "b4ef1374-9c77-45a7-8776-50aa60c7d84a", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1664561964-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4b2dc4b8729f446a9c7ac69ca446f71d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2fb2e839-a8", "ovs_interfaceid": "2fb2e839-a8bc-4fe9-9dc5-09183d3bb063", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec  6 03:15:55 np0005548731 nova_compute[232433]: 2025-12-06 08:15:55.095 232437 DEBUG nova.network.os_vif_util [None req-56685c18-7a67-4377-9203-025a37e0b256 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Converting VIF {"id": "2fb2e839-a8bc-4fe9-9dc5-09183d3bb063", "address": "fa:16:3e:1a:aa:95", "network": {"id": "b4ef1374-9c77-45a7-8776-50aa60c7d84a", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1664561964-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4b2dc4b8729f446a9c7ac69ca446f71d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2fb2e839-a8", "ovs_interfaceid": "2fb2e839-a8bc-4fe9-9dc5-09183d3bb063", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 03:15:55 np0005548731 nova_compute[232433]: 2025-12-06 08:15:55.096 232437 DEBUG nova.network.os_vif_util [None req-56685c18-7a67-4377-9203-025a37e0b256 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1a:aa:95,bridge_name='br-int',has_traffic_filtering=True,id=2fb2e839-a8bc-4fe9-9dc5-09183d3bb063,network=Network(b4ef1374-9c77-45a7-8776-50aa60c7d84a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2fb2e839-a8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 03:15:55 np0005548731 nova_compute[232433]: 2025-12-06 08:15:55.099 232437 DEBUG nova.objects.instance [None req-56685c18-7a67-4377-9203-025a37e0b256 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Lazy-loading 'pci_devices' on Instance uuid 326c22d5-36a0-4d26-bfbf-6dc1811a1ea4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 03:15:55 np0005548731 nova_compute[232433]: 2025-12-06 08:15:55.111 232437 DEBUG nova.virt.libvirt.driver [None req-56685c18-7a67-4377-9203-025a37e0b256 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] [instance: 326c22d5-36a0-4d26-bfbf-6dc1811a1ea4] End _get_guest_xml xml=<domain type="kvm">
Dec  6 03:15:55 np0005548731 nova_compute[232433]:  <uuid>326c22d5-36a0-4d26-bfbf-6dc1811a1ea4</uuid>
Dec  6 03:15:55 np0005548731 nova_compute[232433]:  <name>instance-000000cd</name>
Dec  6 03:15:55 np0005548731 nova_compute[232433]:  <memory>131072</memory>
Dec  6 03:15:55 np0005548731 nova_compute[232433]:  <vcpu>1</vcpu>
Dec  6 03:15:55 np0005548731 nova_compute[232433]:  <metadata>
Dec  6 03:15:55 np0005548731 nova_compute[232433]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec  6 03:15:55 np0005548731 nova_compute[232433]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec  6 03:15:55 np0005548731 nova_compute[232433]:      <nova:name>tempest-TestVolumeBootPattern-server-2129660522</nova:name>
Dec  6 03:15:55 np0005548731 nova_compute[232433]:      <nova:creationTime>2025-12-06 08:15:54</nova:creationTime>
Dec  6 03:15:55 np0005548731 nova_compute[232433]:      <nova:flavor name="m1.nano">
Dec  6 03:15:55 np0005548731 nova_compute[232433]:        <nova:memory>128</nova:memory>
Dec  6 03:15:55 np0005548731 nova_compute[232433]:        <nova:disk>1</nova:disk>
Dec  6 03:15:55 np0005548731 nova_compute[232433]:        <nova:swap>0</nova:swap>
Dec  6 03:15:55 np0005548731 nova_compute[232433]:        <nova:ephemeral>0</nova:ephemeral>
Dec  6 03:15:55 np0005548731 nova_compute[232433]:        <nova:vcpus>1</nova:vcpus>
Dec  6 03:15:55 np0005548731 nova_compute[232433]:      </nova:flavor>
Dec  6 03:15:55 np0005548731 nova_compute[232433]:      <nova:owner>
Dec  6 03:15:55 np0005548731 nova_compute[232433]:        <nova:user uuid="8e8feb4540af4e2caa45a88a9202dbe2">tempest-TestVolumeBootPattern-97496240-project-member</nova:user>
Dec  6 03:15:55 np0005548731 nova_compute[232433]:        <nova:project uuid="4b2dc4b8729f446a9c7ac69ca446f71d">tempest-TestVolumeBootPattern-97496240</nova:project>
Dec  6 03:15:55 np0005548731 nova_compute[232433]:      </nova:owner>
Dec  6 03:15:55 np0005548731 nova_compute[232433]:      <nova:ports>
Dec  6 03:15:55 np0005548731 nova_compute[232433]:        <nova:port uuid="2fb2e839-a8bc-4fe9-9dc5-09183d3bb063">
Dec  6 03:15:55 np0005548731 nova_compute[232433]:          <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Dec  6 03:15:55 np0005548731 nova_compute[232433]:        </nova:port>
Dec  6 03:15:55 np0005548731 nova_compute[232433]:      </nova:ports>
Dec  6 03:15:55 np0005548731 nova_compute[232433]:    </nova:instance>
Dec  6 03:15:55 np0005548731 nova_compute[232433]:  </metadata>
Dec  6 03:15:55 np0005548731 nova_compute[232433]:  <sysinfo type="smbios">
Dec  6 03:15:55 np0005548731 nova_compute[232433]:    <system>
Dec  6 03:15:55 np0005548731 nova_compute[232433]:      <entry name="manufacturer">RDO</entry>
Dec  6 03:15:55 np0005548731 nova_compute[232433]:      <entry name="product">OpenStack Compute</entry>
Dec  6 03:15:55 np0005548731 nova_compute[232433]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec  6 03:15:55 np0005548731 nova_compute[232433]:      <entry name="serial">326c22d5-36a0-4d26-bfbf-6dc1811a1ea4</entry>
Dec  6 03:15:55 np0005548731 nova_compute[232433]:      <entry name="uuid">326c22d5-36a0-4d26-bfbf-6dc1811a1ea4</entry>
Dec  6 03:15:55 np0005548731 nova_compute[232433]:      <entry name="family">Virtual Machine</entry>
Dec  6 03:15:55 np0005548731 nova_compute[232433]:    </system>
Dec  6 03:15:55 np0005548731 nova_compute[232433]:  </sysinfo>
Dec  6 03:15:55 np0005548731 nova_compute[232433]:  <os>
Dec  6 03:15:55 np0005548731 nova_compute[232433]:    <type arch="x86_64" machine="q35">hvm</type>
Dec  6 03:15:55 np0005548731 nova_compute[232433]:    <boot dev="hd"/>
Dec  6 03:15:55 np0005548731 nova_compute[232433]:    <smbios mode="sysinfo"/>
Dec  6 03:15:55 np0005548731 nova_compute[232433]:  </os>
Dec  6 03:15:55 np0005548731 nova_compute[232433]:  <features>
Dec  6 03:15:55 np0005548731 nova_compute[232433]:    <acpi/>
Dec  6 03:15:55 np0005548731 nova_compute[232433]:    <apic/>
Dec  6 03:15:55 np0005548731 nova_compute[232433]:    <vmcoreinfo/>
Dec  6 03:15:55 np0005548731 nova_compute[232433]:  </features>
Dec  6 03:15:55 np0005548731 nova_compute[232433]:  <clock offset="utc">
Dec  6 03:15:55 np0005548731 nova_compute[232433]:    <timer name="pit" tickpolicy="delay"/>
Dec  6 03:15:55 np0005548731 nova_compute[232433]:    <timer name="rtc" tickpolicy="catchup"/>
Dec  6 03:15:55 np0005548731 nova_compute[232433]:    <timer name="hpet" present="no"/>
Dec  6 03:15:55 np0005548731 nova_compute[232433]:  </clock>
Dec  6 03:15:55 np0005548731 nova_compute[232433]:  <cpu mode="custom" match="exact">
Dec  6 03:15:55 np0005548731 nova_compute[232433]:    <model>Nehalem</model>
Dec  6 03:15:55 np0005548731 nova_compute[232433]:    <topology sockets="1" cores="1" threads="1"/>
Dec  6 03:15:55 np0005548731 nova_compute[232433]:  </cpu>
Dec  6 03:15:55 np0005548731 nova_compute[232433]:  <devices>
Dec  6 03:15:55 np0005548731 nova_compute[232433]:    <disk type="network" device="cdrom">
Dec  6 03:15:55 np0005548731 nova_compute[232433]:      <driver type="raw" cache="none"/>
Dec  6 03:15:55 np0005548731 nova_compute[232433]:      <source protocol="rbd" name="vms/326c22d5-36a0-4d26-bfbf-6dc1811a1ea4_disk.config">
Dec  6 03:15:55 np0005548731 nova_compute[232433]:        <host name="192.168.122.100" port="6789"/>
Dec  6 03:15:55 np0005548731 nova_compute[232433]:        <host name="192.168.122.102" port="6789"/>
Dec  6 03:15:55 np0005548731 nova_compute[232433]:        <host name="192.168.122.101" port="6789"/>
Dec  6 03:15:55 np0005548731 nova_compute[232433]:      </source>
Dec  6 03:15:55 np0005548731 nova_compute[232433]:      <auth username="openstack">
Dec  6 03:15:55 np0005548731 nova_compute[232433]:        <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec  6 03:15:55 np0005548731 nova_compute[232433]:      </auth>
Dec  6 03:15:55 np0005548731 nova_compute[232433]:      <target dev="sda" bus="sata"/>
Dec  6 03:15:55 np0005548731 nova_compute[232433]:    </disk>
Dec  6 03:15:55 np0005548731 nova_compute[232433]:    <disk type="network" device="disk">
Dec  6 03:15:55 np0005548731 nova_compute[232433]:      <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Dec  6 03:15:55 np0005548731 nova_compute[232433]:      <source protocol="rbd" name="volumes/volume-0ad7c9d2-a5e5-41f2-b829-63fa58265115">
Dec  6 03:15:55 np0005548731 nova_compute[232433]:        <host name="192.168.122.100" port="6789"/>
Dec  6 03:15:55 np0005548731 nova_compute[232433]:        <host name="192.168.122.102" port="6789"/>
Dec  6 03:15:55 np0005548731 nova_compute[232433]:        <host name="192.168.122.101" port="6789"/>
Dec  6 03:15:55 np0005548731 nova_compute[232433]:      </source>
Dec  6 03:15:55 np0005548731 nova_compute[232433]:      <auth username="openstack">
Dec  6 03:15:55 np0005548731 nova_compute[232433]:        <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec  6 03:15:55 np0005548731 nova_compute[232433]:      </auth>
Dec  6 03:15:55 np0005548731 nova_compute[232433]:      <target dev="vda" bus="virtio"/>
Dec  6 03:15:55 np0005548731 nova_compute[232433]:      <serial>0ad7c9d2-a5e5-41f2-b829-63fa58265115</serial>
Dec  6 03:15:55 np0005548731 nova_compute[232433]:      <encryption format="luks">
Dec  6 03:15:55 np0005548731 nova_compute[232433]:        <secret type="passphrase" uuid="753d4441-9316-4fb5-b8e8-f6266cf62132"/>
Dec  6 03:15:55 np0005548731 nova_compute[232433]:      </encryption>
Dec  6 03:15:55 np0005548731 nova_compute[232433]:    </disk>
Dec  6 03:15:55 np0005548731 nova_compute[232433]:    <interface type="ethernet">
Dec  6 03:15:55 np0005548731 nova_compute[232433]:      <mac address="fa:16:3e:1a:aa:95"/>
Dec  6 03:15:55 np0005548731 nova_compute[232433]:      <model type="virtio"/>
Dec  6 03:15:55 np0005548731 nova_compute[232433]:      <driver name="vhost" rx_queue_size="512"/>
Dec  6 03:15:55 np0005548731 nova_compute[232433]:      <mtu size="1442"/>
Dec  6 03:15:55 np0005548731 nova_compute[232433]:      <target dev="tap2fb2e839-a8"/>
Dec  6 03:15:55 np0005548731 nova_compute[232433]:    </interface>
Dec  6 03:15:55 np0005548731 nova_compute[232433]:    <serial type="pty">
Dec  6 03:15:55 np0005548731 nova_compute[232433]:      <log file="/var/lib/nova/instances/326c22d5-36a0-4d26-bfbf-6dc1811a1ea4/console.log" append="off"/>
Dec  6 03:15:55 np0005548731 nova_compute[232433]:    </serial>
Dec  6 03:15:55 np0005548731 nova_compute[232433]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec  6 03:15:55 np0005548731 nova_compute[232433]:    <video>
Dec  6 03:15:55 np0005548731 nova_compute[232433]:      <model type="virtio"/>
Dec  6 03:15:55 np0005548731 nova_compute[232433]:    </video>
Dec  6 03:15:55 np0005548731 nova_compute[232433]:    <input type="tablet" bus="usb"/>
Dec  6 03:15:55 np0005548731 nova_compute[232433]:    <rng model="virtio">
Dec  6 03:15:55 np0005548731 nova_compute[232433]:      <backend model="random">/dev/urandom</backend>
Dec  6 03:15:55 np0005548731 nova_compute[232433]:    </rng>
Dec  6 03:15:55 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root"/>
Dec  6 03:15:55 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:15:55 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:15:55 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:15:55 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:15:55 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:15:55 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:15:55 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:15:55 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:15:55 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:15:55 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:15:55 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:15:55 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:15:55 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:15:55 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:15:55 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:15:55 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:15:55 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:15:55 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:15:55 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:15:55 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:15:55 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:15:55 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:15:55 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:15:55 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:15:55 np0005548731 nova_compute[232433]:    <controller type="usb" index="0"/>
Dec  6 03:15:55 np0005548731 nova_compute[232433]:    <memballoon model="virtio">
Dec  6 03:15:55 np0005548731 nova_compute[232433]:      <stats period="10"/>
Dec  6 03:15:55 np0005548731 nova_compute[232433]:    </memballoon>
Dec  6 03:15:55 np0005548731 nova_compute[232433]:  </devices>
Dec  6 03:15:55 np0005548731 nova_compute[232433]: </domain>
Dec  6 03:15:55 np0005548731 nova_compute[232433]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec  6 03:15:55 np0005548731 nova_compute[232433]: 2025-12-06 08:15:55.112 232437 DEBUG nova.compute.manager [None req-56685c18-7a67-4377-9203-025a37e0b256 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] [instance: 326c22d5-36a0-4d26-bfbf-6dc1811a1ea4] Preparing to wait for external event network-vif-plugged-2fb2e839-a8bc-4fe9-9dc5-09183d3bb063 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec  6 03:15:55 np0005548731 nova_compute[232433]: 2025-12-06 08:15:55.112 232437 DEBUG oslo_concurrency.lockutils [None req-56685c18-7a67-4377-9203-025a37e0b256 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Acquiring lock "326c22d5-36a0-4d26-bfbf-6dc1811a1ea4-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:15:55 np0005548731 nova_compute[232433]: 2025-12-06 08:15:55.113 232437 DEBUG oslo_concurrency.lockutils [None req-56685c18-7a67-4377-9203-025a37e0b256 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Lock "326c22d5-36a0-4d26-bfbf-6dc1811a1ea4-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:15:55 np0005548731 nova_compute[232433]: 2025-12-06 08:15:55.113 232437 DEBUG oslo_concurrency.lockutils [None req-56685c18-7a67-4377-9203-025a37e0b256 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Lock "326c22d5-36a0-4d26-bfbf-6dc1811a1ea4-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:15:55 np0005548731 nova_compute[232433]: 2025-12-06 08:15:55.113 232437 DEBUG nova.virt.libvirt.vif [None req-56685c18-7a67-4377-9203-025a37e0b256 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T08:15:47Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-server-2129660522',display_name='tempest-TestVolumeBootPattern-server-2129660522',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-testvolumebootpattern-server-2129660522',id=205,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4b2dc4b8729f446a9c7ac69ca446f71d',ramdisk_id='',reservation_id='r-2o7nux00',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestVolumeBootPattern-97496240',owner_user_name='tempest-TestVolumeBootPattern-97496240-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T08:15:49Z,user_data=None
,user_id='8e8feb4540af4e2caa45a88a9202dbe2',uuid=326c22d5-36a0-4d26-bfbf-6dc1811a1ea4,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2fb2e839-a8bc-4fe9-9dc5-09183d3bb063", "address": "fa:16:3e:1a:aa:95", "network": {"id": "b4ef1374-9c77-45a7-8776-50aa60c7d84a", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1664561964-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4b2dc4b8729f446a9c7ac69ca446f71d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2fb2e839-a8", "ovs_interfaceid": "2fb2e839-a8bc-4fe9-9dc5-09183d3bb063", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec  6 03:15:55 np0005548731 nova_compute[232433]: 2025-12-06 08:15:55.114 232437 DEBUG nova.network.os_vif_util [None req-56685c18-7a67-4377-9203-025a37e0b256 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Converting VIF {"id": "2fb2e839-a8bc-4fe9-9dc5-09183d3bb063", "address": "fa:16:3e:1a:aa:95", "network": {"id": "b4ef1374-9c77-45a7-8776-50aa60c7d84a", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1664561964-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4b2dc4b8729f446a9c7ac69ca446f71d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2fb2e839-a8", "ovs_interfaceid": "2fb2e839-a8bc-4fe9-9dc5-09183d3bb063", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 03:15:55 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:15:55 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:15:55 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:15:55.112 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:15:55 np0005548731 nova_compute[232433]: 2025-12-06 08:15:55.114 232437 DEBUG nova.network.os_vif_util [None req-56685c18-7a67-4377-9203-025a37e0b256 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1a:aa:95,bridge_name='br-int',has_traffic_filtering=True,id=2fb2e839-a8bc-4fe9-9dc5-09183d3bb063,network=Network(b4ef1374-9c77-45a7-8776-50aa60c7d84a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2fb2e839-a8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 03:15:55 np0005548731 nova_compute[232433]: 2025-12-06 08:15:55.115 232437 DEBUG os_vif [None req-56685c18-7a67-4377-9203-025a37e0b256 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:1a:aa:95,bridge_name='br-int',has_traffic_filtering=True,id=2fb2e839-a8bc-4fe9-9dc5-09183d3bb063,network=Network(b4ef1374-9c77-45a7-8776-50aa60c7d84a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2fb2e839-a8') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec  6 03:15:55 np0005548731 nova_compute[232433]: 2025-12-06 08:15:55.115 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:15:55 np0005548731 nova_compute[232433]: 2025-12-06 08:15:55.115 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 03:15:55 np0005548731 nova_compute[232433]: 2025-12-06 08:15:55.116 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  6 03:15:55 np0005548731 nova_compute[232433]: 2025-12-06 08:15:55.119 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:15:55 np0005548731 nova_compute[232433]: 2025-12-06 08:15:55.120 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2fb2e839-a8, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 03:15:55 np0005548731 nova_compute[232433]: 2025-12-06 08:15:55.120 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap2fb2e839-a8, col_values=(('external_ids', {'iface-id': '2fb2e839-a8bc-4fe9-9dc5-09183d3bb063', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:1a:aa:95', 'vm-uuid': '326c22d5-36a0-4d26-bfbf-6dc1811a1ea4'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 03:15:55 np0005548731 nova_compute[232433]: 2025-12-06 08:15:55.121 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:15:55 np0005548731 NetworkManager[49182]: <info>  [1765008955.1224] manager: (tap2fb2e839-a8): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/492)
Dec  6 03:15:55 np0005548731 nova_compute[232433]: 2025-12-06 08:15:55.124 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  6 03:15:55 np0005548731 nova_compute[232433]: 2025-12-06 08:15:55.131 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:15:55 np0005548731 nova_compute[232433]: 2025-12-06 08:15:55.133 232437 INFO os_vif [None req-56685c18-7a67-4377-9203-025a37e0b256 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:1a:aa:95,bridge_name='br-int',has_traffic_filtering=True,id=2fb2e839-a8bc-4fe9-9dc5-09183d3bb063,network=Network(b4ef1374-9c77-45a7-8776-50aa60c7d84a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2fb2e839-a8')#033[00m
Dec  6 03:15:55 np0005548731 nova_compute[232433]: 2025-12-06 08:15:55.199 232437 DEBUG nova.virt.libvirt.driver [None req-56685c18-7a67-4377-9203-025a37e0b256 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  6 03:15:55 np0005548731 nova_compute[232433]: 2025-12-06 08:15:55.200 232437 DEBUG nova.virt.libvirt.driver [None req-56685c18-7a67-4377-9203-025a37e0b256 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  6 03:15:55 np0005548731 nova_compute[232433]: 2025-12-06 08:15:55.200 232437 DEBUG nova.virt.libvirt.driver [None req-56685c18-7a67-4377-9203-025a37e0b256 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] No VIF found with MAC fa:16:3e:1a:aa:95, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec  6 03:15:55 np0005548731 nova_compute[232433]: 2025-12-06 08:15:55.201 232437 INFO nova.virt.libvirt.driver [None req-56685c18-7a67-4377-9203-025a37e0b256 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] [instance: 326c22d5-36a0-4d26-bfbf-6dc1811a1ea4] Using config drive#033[00m
Dec  6 03:15:55 np0005548731 nova_compute[232433]: 2025-12-06 08:15:55.232 232437 DEBUG nova.storage.rbd_utils [None req-56685c18-7a67-4377-9203-025a37e0b256 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] rbd image 326c22d5-36a0-4d26-bfbf-6dc1811a1ea4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 03:15:55 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:15:56 np0005548731 nova_compute[232433]: 2025-12-06 08:15:56.064 232437 DEBUG nova.network.neutron [req-61960553-76c3-4a35-9e2e-1224c3b2f438 req-e3e52eeb-caeb-4426-833e-d1fee1f5e7c0 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 326c22d5-36a0-4d26-bfbf-6dc1811a1ea4] Updated VIF entry in instance network info cache for port 2fb2e839-a8bc-4fe9-9dc5-09183d3bb063. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  6 03:15:56 np0005548731 nova_compute[232433]: 2025-12-06 08:15:56.065 232437 DEBUG nova.network.neutron [req-61960553-76c3-4a35-9e2e-1224c3b2f438 req-e3e52eeb-caeb-4426-833e-d1fee1f5e7c0 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 326c22d5-36a0-4d26-bfbf-6dc1811a1ea4] Updating instance_info_cache with network_info: [{"id": "2fb2e839-a8bc-4fe9-9dc5-09183d3bb063", "address": "fa:16:3e:1a:aa:95", "network": {"id": "b4ef1374-9c77-45a7-8776-50aa60c7d84a", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1664561964-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4b2dc4b8729f446a9c7ac69ca446f71d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2fb2e839-a8", "ovs_interfaceid": "2fb2e839-a8bc-4fe9-9dc5-09183d3bb063", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 03:15:56 np0005548731 nova_compute[232433]: 2025-12-06 08:15:56.087 232437 DEBUG oslo_concurrency.lockutils [req-61960553-76c3-4a35-9e2e-1224c3b2f438 req-e3e52eeb-caeb-4426-833e-d1fee1f5e7c0 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-326c22d5-36a0-4d26-bfbf-6dc1811a1ea4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 03:15:56 np0005548731 nova_compute[232433]: 2025-12-06 08:15:56.103 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:15:56 np0005548731 nova_compute[232433]: 2025-12-06 08:15:56.104 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  6 03:15:56 np0005548731 nova_compute[232433]: 2025-12-06 08:15:56.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:15:56 np0005548731 nova_compute[232433]: 2025-12-06 08:15:56.126 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:15:56 np0005548731 nova_compute[232433]: 2025-12-06 08:15:56.127 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:15:56 np0005548731 nova_compute[232433]: 2025-12-06 08:15:56.127 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:15:56 np0005548731 nova_compute[232433]: 2025-12-06 08:15:56.127 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  6 03:15:56 np0005548731 nova_compute[232433]: 2025-12-06 08:15:56.128 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 03:15:56 np0005548731 nova_compute[232433]: 2025-12-06 08:15:56.168 232437 INFO nova.virt.libvirt.driver [None req-56685c18-7a67-4377-9203-025a37e0b256 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] [instance: 326c22d5-36a0-4d26-bfbf-6dc1811a1ea4] Creating config drive at /var/lib/nova/instances/326c22d5-36a0-4d26-bfbf-6dc1811a1ea4/disk.config#033[00m
Dec  6 03:15:56 np0005548731 nova_compute[232433]: 2025-12-06 08:15:56.175 232437 DEBUG oslo_concurrency.processutils [None req-56685c18-7a67-4377-9203-025a37e0b256 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/326c22d5-36a0-4d26-bfbf-6dc1811a1ea4/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmppa0m2p07 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 03:15:56 np0005548731 nova_compute[232433]: 2025-12-06 08:15:56.322 232437 DEBUG oslo_concurrency.processutils [None req-56685c18-7a67-4377-9203-025a37e0b256 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/326c22d5-36a0-4d26-bfbf-6dc1811a1ea4/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmppa0m2p07" returned: 0 in 0.147s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 03:15:56 np0005548731 nova_compute[232433]: 2025-12-06 08:15:56.352 232437 DEBUG nova.storage.rbd_utils [None req-56685c18-7a67-4377-9203-025a37e0b256 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] rbd image 326c22d5-36a0-4d26-bfbf-6dc1811a1ea4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 03:15:56 np0005548731 nova_compute[232433]: 2025-12-06 08:15:56.356 232437 DEBUG oslo_concurrency.processutils [None req-56685c18-7a67-4377-9203-025a37e0b256 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/326c22d5-36a0-4d26-bfbf-6dc1811a1ea4/disk.config 326c22d5-36a0-4d26-bfbf-6dc1811a1ea4_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 03:15:56 np0005548731 nova_compute[232433]: 2025-12-06 08:15:56.521 232437 DEBUG oslo_concurrency.processutils [None req-56685c18-7a67-4377-9203-025a37e0b256 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/326c22d5-36a0-4d26-bfbf-6dc1811a1ea4/disk.config 326c22d5-36a0-4d26-bfbf-6dc1811a1ea4_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.164s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 03:15:56 np0005548731 nova_compute[232433]: 2025-12-06 08:15:56.522 232437 INFO nova.virt.libvirt.driver [None req-56685c18-7a67-4377-9203-025a37e0b256 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] [instance: 326c22d5-36a0-4d26-bfbf-6dc1811a1ea4] Deleting local config drive /var/lib/nova/instances/326c22d5-36a0-4d26-bfbf-6dc1811a1ea4/disk.config because it was imported into RBD.#033[00m
Dec  6 03:15:56 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 03:15:56 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1161032478' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 03:15:56 np0005548731 kernel: tap2fb2e839-a8: entered promiscuous mode
Dec  6 03:15:56 np0005548731 NetworkManager[49182]: <info>  [1765008956.5769] manager: (tap2fb2e839-a8): new Tun device (/org/freedesktop/NetworkManager/Devices/493)
Dec  6 03:15:56 np0005548731 nova_compute[232433]: 2025-12-06 08:15:56.575 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:15:56 np0005548731 ovn_controller[133927]: 2025-12-06T08:15:56Z|01045|binding|INFO|Claiming lport 2fb2e839-a8bc-4fe9-9dc5-09183d3bb063 for this chassis.
Dec  6 03:15:56 np0005548731 ovn_controller[133927]: 2025-12-06T08:15:56Z|01046|binding|INFO|2fb2e839-a8bc-4fe9-9dc5-09183d3bb063: Claiming fa:16:3e:1a:aa:95 10.100.0.4
Dec  6 03:15:56 np0005548731 nova_compute[232433]: 2025-12-06 08:15:56.579 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.452s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 03:15:56 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:15:56.592 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1a:aa:95 10.100.0.4'], port_security=['fa:16:3e:1a:aa:95 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '326c22d5-36a0-4d26-bfbf-6dc1811a1ea4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b4ef1374-9c77-45a7-8776-50aa60c7d84a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4b2dc4b8729f446a9c7ac69ca446f71d', 'neutron:revision_number': '2', 'neutron:security_group_ids': '0d5802d0-c57b-4fcd-911a-7dedb970b91e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=60eec70d-8996-4225-9077-6d0f2705560a, chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], logical_port=2fb2e839-a8bc-4fe9-9dc5-09183d3bb063) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 03:15:56 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:15:56.594 143965 INFO neutron.agent.ovn.metadata.agent [-] Port 2fb2e839-a8bc-4fe9-9dc5-09183d3bb063 in datapath b4ef1374-9c77-45a7-8776-50aa60c7d84a bound to our chassis#033[00m
Dec  6 03:15:56 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:15:56.597 143965 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network b4ef1374-9c77-45a7-8776-50aa60c7d84a#033[00m
Dec  6 03:15:56 np0005548731 systemd-machined[195355]: New machine qemu-106-instance-000000cd.
Dec  6 03:15:56 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:15:56.608 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[0520f5b9-81c9-478f-8075-0002c5ca8e57]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:15:56 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:15:56.609 143965 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapb4ef1374-91 in ovnmeta-b4ef1374-9c77-45a7-8776-50aa60c7d84a namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Dec  6 03:15:56 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:15:56.610 236668 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapb4ef1374-90 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Dec  6 03:15:56 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:15:56.610 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[e4c1c7e7-51c9-46a4-992c-d88fbf1b3151]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:15:56 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:15:56.611 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[c2b02286-870e-4faa-86e0-a25cb5bdd2a8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:15:56 np0005548731 systemd-udevd[336785]: Network interface NamePolicy= disabled on kernel command line.
Dec  6 03:15:56 np0005548731 NetworkManager[49182]: <info>  [1765008956.6237] device (tap2fb2e839-a8): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec  6 03:15:56 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:15:56.622 144079 DEBUG oslo.privsep.daemon [-] privsep: reply[89879adf-2ddb-4109-808e-5b6bcb9595db]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:15:56 np0005548731 NetworkManager[49182]: <info>  [1765008956.6244] device (tap2fb2e839-a8): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec  6 03:15:56 np0005548731 systemd[1]: Started Virtual Machine qemu-106-instance-000000cd.
Dec  6 03:15:56 np0005548731 nova_compute[232433]: 2025-12-06 08:15:56.641 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:15:56 np0005548731 ovn_controller[133927]: 2025-12-06T08:15:56Z|01047|binding|INFO|Setting lport 2fb2e839-a8bc-4fe9-9dc5-09183d3bb063 ovn-installed in OVS
Dec  6 03:15:56 np0005548731 ovn_controller[133927]: 2025-12-06T08:15:56Z|01048|binding|INFO|Setting lport 2fb2e839-a8bc-4fe9-9dc5-09183d3bb063 up in Southbound
Dec  6 03:15:56 np0005548731 nova_compute[232433]: 2025-12-06 08:15:56.645 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:15:56 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:15:56.650 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[1afec92a-1dc5-401c-bd09-43aa40da4151]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:15:56 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:15:56.675 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[75128f25-cb31-4792-be74-5bc935d3e9bc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:15:56 np0005548731 systemd-udevd[336788]: Network interface NamePolicy= disabled on kernel command line.
Dec  6 03:15:56 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:15:56.680 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[bce75286-1c6b-4909-9fb1-dd946fe68ccf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:15:56 np0005548731 NetworkManager[49182]: <info>  [1765008956.6822] manager: (tapb4ef1374-90): new Veth device (/org/freedesktop/NetworkManager/Devices/494)
Dec  6 03:15:56 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:15:56.716 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[4b4fdfc8-a046-4122-9f87-12a42628f055]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:15:56 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:15:56.720 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[b4b782fb-5736-4a40-b436-f9e5e1de2fef]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:15:56 np0005548731 NetworkManager[49182]: <info>  [1765008956.7412] device (tapb4ef1374-90): carrier: link connected
Dec  6 03:15:56 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:15:56.748 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[82865dc2-12f3-405e-bc83-d4985f7401e1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:15:56 np0005548731 nova_compute[232433]: 2025-12-06 08:15:56.749 232437 DEBUG nova.virt.libvirt.driver [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] skipping disk for instance-000000cd as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  6 03:15:56 np0005548731 nova_compute[232433]: 2025-12-06 08:15:56.749 232437 DEBUG nova.virt.libvirt.driver [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] skipping disk for instance-000000cd as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  6 03:15:56 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:15:56.764 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[b82b218c-3a01-4f11-bce7-f2b200c0bb18]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb4ef1374-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f9:d4:b8'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 320], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 921874, 'reachable_time': 44410, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 336817, 'error': None, 'target': 'ovnmeta-b4ef1374-9c77-45a7-8776-50aa60c7d84a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:15:56 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:15:56.778 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[02756420-434b-4aec-8ce7-44be0f4c9c5c]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fef9:d4b8'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 921874, 'tstamp': 921874}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 336818, 'error': None, 'target': 'ovnmeta-b4ef1374-9c77-45a7-8776-50aa60c7d84a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:15:56 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:15:56.794 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[ec332433-2b4a-4a86-9e03-6ab52a690c09]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb4ef1374-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f9:d4:b8'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 320], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 921874, 'reachable_time': 44410, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 148, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 148, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 336819, 'error': None, 'target': 'ovnmeta-b4ef1374-9c77-45a7-8776-50aa60c7d84a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:15:56 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:15:56.832 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[93ab2598-1fba-4c41-b8fe-a4dfa592e165]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:15:56 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:15:56.893 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[a04d1af3-3d68-4c42-85c7-42d089fe12ff]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:15:56 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:15:56.894 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb4ef1374-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 03:15:56 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:15:56.895 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  6 03:15:56 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:15:56.895 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb4ef1374-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 03:15:56 np0005548731 kernel: tapb4ef1374-90: entered promiscuous mode
Dec  6 03:15:56 np0005548731 NetworkManager[49182]: <info>  [1765008956.8983] manager: (tapb4ef1374-90): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/495)
Dec  6 03:15:56 np0005548731 nova_compute[232433]: 2025-12-06 08:15:56.897 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:15:56 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:15:56.900 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapb4ef1374-90, col_values=(('external_ids', {'iface-id': '32c82c25-6496-4edd-ba74-1791824b99ab'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 03:15:56 np0005548731 ovn_controller[133927]: 2025-12-06T08:15:56Z|01049|binding|INFO|Releasing lport 32c82c25-6496-4edd-ba74-1791824b99ab from this chassis (sb_readonly=0)
Dec  6 03:15:56 np0005548731 nova_compute[232433]: 2025-12-06 08:15:56.902 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:15:56 np0005548731 nova_compute[232433]: 2025-12-06 08:15:56.916 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:15:56 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:15:56.917 143965 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/b4ef1374-9c77-45a7-8776-50aa60c7d84a.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/b4ef1374-9c77-45a7-8776-50aa60c7d84a.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Dec  6 03:15:56 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:15:56.918 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[cb145b1c-4b5c-4dc6-932e-c9883dfe1d29]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:15:56 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:15:56.919 143965 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec  6 03:15:56 np0005548731 ovn_metadata_agent[143960]: global
Dec  6 03:15:56 np0005548731 ovn_metadata_agent[143960]:    log         /dev/log local0 debug
Dec  6 03:15:56 np0005548731 ovn_metadata_agent[143960]:    log-tag     haproxy-metadata-proxy-b4ef1374-9c77-45a7-8776-50aa60c7d84a
Dec  6 03:15:56 np0005548731 ovn_metadata_agent[143960]:    user        root
Dec  6 03:15:56 np0005548731 ovn_metadata_agent[143960]:    group       root
Dec  6 03:15:56 np0005548731 ovn_metadata_agent[143960]:    maxconn     1024
Dec  6 03:15:56 np0005548731 ovn_metadata_agent[143960]:    pidfile     /var/lib/neutron/external/pids/b4ef1374-9c77-45a7-8776-50aa60c7d84a.pid.haproxy
Dec  6 03:15:56 np0005548731 ovn_metadata_agent[143960]:    daemon
Dec  6 03:15:56 np0005548731 ovn_metadata_agent[143960]: 
Dec  6 03:15:56 np0005548731 ovn_metadata_agent[143960]: defaults
Dec  6 03:15:56 np0005548731 ovn_metadata_agent[143960]:    log global
Dec  6 03:15:56 np0005548731 ovn_metadata_agent[143960]:    mode http
Dec  6 03:15:56 np0005548731 ovn_metadata_agent[143960]:    option httplog
Dec  6 03:15:56 np0005548731 ovn_metadata_agent[143960]:    option dontlognull
Dec  6 03:15:56 np0005548731 ovn_metadata_agent[143960]:    option http-server-close
Dec  6 03:15:56 np0005548731 ovn_metadata_agent[143960]:    option forwardfor
Dec  6 03:15:56 np0005548731 ovn_metadata_agent[143960]:    retries                 3
Dec  6 03:15:56 np0005548731 ovn_metadata_agent[143960]:    timeout http-request    30s
Dec  6 03:15:56 np0005548731 ovn_metadata_agent[143960]:    timeout connect         30s
Dec  6 03:15:56 np0005548731 ovn_metadata_agent[143960]:    timeout client          32s
Dec  6 03:15:56 np0005548731 ovn_metadata_agent[143960]:    timeout server          32s
Dec  6 03:15:56 np0005548731 ovn_metadata_agent[143960]:    timeout http-keep-alive 30s
Dec  6 03:15:56 np0005548731 ovn_metadata_agent[143960]: 
Dec  6 03:15:56 np0005548731 ovn_metadata_agent[143960]: 
Dec  6 03:15:56 np0005548731 ovn_metadata_agent[143960]: listen listener
Dec  6 03:15:56 np0005548731 ovn_metadata_agent[143960]:    bind 169.254.169.254:80
Dec  6 03:15:56 np0005548731 ovn_metadata_agent[143960]:    server metadata /var/lib/neutron/metadata_proxy
Dec  6 03:15:56 np0005548731 ovn_metadata_agent[143960]:    http-request add-header X-OVN-Network-ID b4ef1374-9c77-45a7-8776-50aa60c7d84a
Dec  6 03:15:56 np0005548731 ovn_metadata_agent[143960]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Dec  6 03:15:56 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:15:56.920 143965 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-b4ef1374-9c77-45a7-8776-50aa60c7d84a', 'env', 'PROCESS_TAG=haproxy-b4ef1374-9c77-45a7-8776-50aa60c7d84a', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/b4ef1374-9c77-45a7-8776-50aa60c7d84a.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Dec  6 03:15:56 np0005548731 nova_compute[232433]: 2025-12-06 08:15:56.931 232437 WARNING nova.virt.libvirt.driver [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  6 03:15:56 np0005548731 nova_compute[232433]: 2025-12-06 08:15:56.932 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4101MB free_disk=20.967517852783203GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, 
"label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  6 03:15:56 np0005548731 nova_compute[232433]: 2025-12-06 08:15:56.933 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:15:56 np0005548731 nova_compute[232433]: 2025-12-06 08:15:56.933 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:15:56 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:15:56 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:15:56 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:15:56.943 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:15:56 np0005548731 nova_compute[232433]: 2025-12-06 08:15:56.987 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:15:57 np0005548731 nova_compute[232433]: 2025-12-06 08:15:57.038 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Instance 326c22d5-36a0-4d26-bfbf-6dc1811a1ea4 actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  6 03:15:57 np0005548731 nova_compute[232433]: 2025-12-06 08:15:57.039 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  6 03:15:57 np0005548731 nova_compute[232433]: 2025-12-06 08:15:57.039 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  6 03:15:57 np0005548731 nova_compute[232433]: 2025-12-06 08:15:57.053 232437 DEBUG nova.scheduler.client.report [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Refreshing inventories for resource provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Dec  6 03:15:57 np0005548731 nova_compute[232433]: 2025-12-06 08:15:57.072 232437 DEBUG nova.scheduler.client.report [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Updating ProviderTree inventory for provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Dec  6 03:15:57 np0005548731 nova_compute[232433]: 2025-12-06 08:15:57.073 232437 DEBUG nova.compute.provider_tree [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Updating inventory in ProviderTree for provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Dec  6 03:15:57 np0005548731 nova_compute[232433]: 2025-12-06 08:15:57.091 232437 DEBUG nova.scheduler.client.report [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Refreshing aggregate associations for resource provider 6d00757a-082f-486d-ae84-869a2ba2e6e7, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Dec  6 03:15:57 np0005548731 nova_compute[232433]: 2025-12-06 08:15:57.115 232437 DEBUG nova.scheduler.client.report [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Refreshing trait associations for resource provider 6d00757a-082f-486d-ae84-869a2ba2e6e7, traits: COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_STORAGE_BUS_USB,COMPUTE_ACCELERATORS,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_TRUSTED_CERTS,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_SSE,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_SSE2,HW_CPU_X86_SSE41,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_STORAGE_BUS_SATA,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_SECURITY_TPM_1_2,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_VOLUME_EXTEND,COMPUTE_NODE,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_SECURITY_TPM_2_0,COMPUTE_VIOMMU_MODEL_AUTO,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_SSE42,COMPUTE_STORAGE_BUS_FDC,COMPUTE_RESCUE_BFV,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_IMAGE_TYPE_ARI _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Dec  6 03:15:57 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:15:57 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:15:57 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:15:57.115 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:15:57 np0005548731 nova_compute[232433]: 2025-12-06 08:15:57.150 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 03:15:57 np0005548731 nova_compute[232433]: 2025-12-06 08:15:57.237 232437 DEBUG nova.compute.manager [req-26ba9cb5-3ea4-44dc-ba30-0a8ccbe0adc9 req-07a76706-0c08-4b89-b9f1-2ec924cf2ac4 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 326c22d5-36a0-4d26-bfbf-6dc1811a1ea4] Received event network-vif-plugged-2fb2e839-a8bc-4fe9-9dc5-09183d3bb063 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 03:15:57 np0005548731 nova_compute[232433]: 2025-12-06 08:15:57.238 232437 DEBUG oslo_concurrency.lockutils [req-26ba9cb5-3ea4-44dc-ba30-0a8ccbe0adc9 req-07a76706-0c08-4b89-b9f1-2ec924cf2ac4 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "326c22d5-36a0-4d26-bfbf-6dc1811a1ea4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:15:57 np0005548731 nova_compute[232433]: 2025-12-06 08:15:57.238 232437 DEBUG oslo_concurrency.lockutils [req-26ba9cb5-3ea4-44dc-ba30-0a8ccbe0adc9 req-07a76706-0c08-4b89-b9f1-2ec924cf2ac4 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "326c22d5-36a0-4d26-bfbf-6dc1811a1ea4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:15:57 np0005548731 nova_compute[232433]: 2025-12-06 08:15:57.238 232437 DEBUG oslo_concurrency.lockutils [req-26ba9cb5-3ea4-44dc-ba30-0a8ccbe0adc9 req-07a76706-0c08-4b89-b9f1-2ec924cf2ac4 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "326c22d5-36a0-4d26-bfbf-6dc1811a1ea4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:15:57 np0005548731 nova_compute[232433]: 2025-12-06 08:15:57.239 232437 DEBUG nova.compute.manager [req-26ba9cb5-3ea4-44dc-ba30-0a8ccbe0adc9 req-07a76706-0c08-4b89-b9f1-2ec924cf2ac4 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 326c22d5-36a0-4d26-bfbf-6dc1811a1ea4] Processing event network-vif-plugged-2fb2e839-a8bc-4fe9-9dc5-09183d3bb063 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Dec  6 03:15:57 np0005548731 podman[336888]: 2025-12-06 08:15:57.273271856 +0000 UTC m=+0.053043853 container create 6471f175ab28414e82ca3ee9daaabd6eb13a8e34eb6b8b221d947b8df829bebc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b4ef1374-9c77-45a7-8776-50aa60c7d84a, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3)
Dec  6 03:15:57 np0005548731 systemd[1]: Started libpod-conmon-6471f175ab28414e82ca3ee9daaabd6eb13a8e34eb6b8b221d947b8df829bebc.scope.
Dec  6 03:15:57 np0005548731 systemd[1]: Started libcrun container.
Dec  6 03:15:57 np0005548731 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5c7469c93c6d6d14d9a513c201f6b31cdb54e5e0ac455a48946040d1532c997e/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec  6 03:15:57 np0005548731 podman[336888]: 2025-12-06 08:15:57.247093038 +0000 UTC m=+0.026865065 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec  6 03:15:57 np0005548731 podman[336888]: 2025-12-06 08:15:57.362603331 +0000 UTC m=+0.142375358 container init 6471f175ab28414e82ca3ee9daaabd6eb13a8e34eb6b8b221d947b8df829bebc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b4ef1374-9c77-45a7-8776-50aa60c7d84a, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, tcib_managed=true)
Dec  6 03:15:57 np0005548731 podman[336888]: 2025-12-06 08:15:57.367944801 +0000 UTC m=+0.147716828 container start 6471f175ab28414e82ca3ee9daaabd6eb13a8e34eb6b8b221d947b8df829bebc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b4ef1374-9c77-45a7-8776-50aa60c7d84a, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125)
Dec  6 03:15:57 np0005548731 neutron-haproxy-ovnmeta-b4ef1374-9c77-45a7-8776-50aa60c7d84a[336922]: [NOTICE]   (336926) : New worker (336928) forked
Dec  6 03:15:57 np0005548731 neutron-haproxy-ovnmeta-b4ef1374-9c77-45a7-8776-50aa60c7d84a[336922]: [NOTICE]   (336926) : Loading success.
Dec  6 03:15:57 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 03:15:57 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2520721463' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 03:15:57 np0005548731 nova_compute[232433]: 2025-12-06 08:15:57.615 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.465s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 03:15:57 np0005548731 nova_compute[232433]: 2025-12-06 08:15:57.624 232437 DEBUG nova.compute.provider_tree [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Inventory has not changed in ProviderTree for provider: 6d00757a-082f-486d-ae84-869a2ba2e6e7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  6 03:15:57 np0005548731 nova_compute[232433]: 2025-12-06 08:15:57.674 232437 DEBUG nova.scheduler.client.report [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Inventory has not changed for provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  6 03:15:57 np0005548731 nova_compute[232433]: 2025-12-06 08:15:57.953 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  6 03:15:57 np0005548731 nova_compute[232433]: 2025-12-06 08:15:57.954 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.021s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:15:58 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:15:58 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:15:58 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:15:58.945 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:15:59 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:15:59 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:15:59 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:15:59.118 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:15:59 np0005548731 nova_compute[232433]: 2025-12-06 08:15:59.366 232437 DEBUG nova.compute.manager [req-65a09019-eb52-49e5-b40b-fc45c4c32df1 req-a0f640c3-ceea-4f61-955e-43fc049f0003 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 326c22d5-36a0-4d26-bfbf-6dc1811a1ea4] Received event network-vif-plugged-2fb2e839-a8bc-4fe9-9dc5-09183d3bb063 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 03:15:59 np0005548731 nova_compute[232433]: 2025-12-06 08:15:59.367 232437 DEBUG oslo_concurrency.lockutils [req-65a09019-eb52-49e5-b40b-fc45c4c32df1 req-a0f640c3-ceea-4f61-955e-43fc049f0003 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "326c22d5-36a0-4d26-bfbf-6dc1811a1ea4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:15:59 np0005548731 nova_compute[232433]: 2025-12-06 08:15:59.367 232437 DEBUG oslo_concurrency.lockutils [req-65a09019-eb52-49e5-b40b-fc45c4c32df1 req-a0f640c3-ceea-4f61-955e-43fc049f0003 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "326c22d5-36a0-4d26-bfbf-6dc1811a1ea4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:15:59 np0005548731 nova_compute[232433]: 2025-12-06 08:15:59.367 232437 DEBUG oslo_concurrency.lockutils [req-65a09019-eb52-49e5-b40b-fc45c4c32df1 req-a0f640c3-ceea-4f61-955e-43fc049f0003 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "326c22d5-36a0-4d26-bfbf-6dc1811a1ea4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:15:59 np0005548731 nova_compute[232433]: 2025-12-06 08:15:59.368 232437 DEBUG nova.compute.manager [req-65a09019-eb52-49e5-b40b-fc45c4c32df1 req-a0f640c3-ceea-4f61-955e-43fc049f0003 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 326c22d5-36a0-4d26-bfbf-6dc1811a1ea4] No waiting events found dispatching network-vif-plugged-2fb2e839-a8bc-4fe9-9dc5-09183d3bb063 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 03:15:59 np0005548731 nova_compute[232433]: 2025-12-06 08:15:59.368 232437 WARNING nova.compute.manager [req-65a09019-eb52-49e5-b40b-fc45c4c32df1 req-a0f640c3-ceea-4f61-955e-43fc049f0003 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 326c22d5-36a0-4d26-bfbf-6dc1811a1ea4] Received unexpected event network-vif-plugged-2fb2e839-a8bc-4fe9-9dc5-09183d3bb063 for instance with vm_state building and task_state spawning.#033[00m
Dec  6 03:15:59 np0005548731 nova_compute[232433]: 2025-12-06 08:15:59.500 232437 DEBUG nova.virt.driver [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Emitting event <LifecycleEvent: 1765008959.4998944, 326c22d5-36a0-4d26-bfbf-6dc1811a1ea4 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 03:15:59 np0005548731 nova_compute[232433]: 2025-12-06 08:15:59.500 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 326c22d5-36a0-4d26-bfbf-6dc1811a1ea4] VM Started (Lifecycle Event)#033[00m
Dec  6 03:15:59 np0005548731 nova_compute[232433]: 2025-12-06 08:15:59.502 232437 DEBUG nova.compute.manager [None req-56685c18-7a67-4377-9203-025a37e0b256 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] [instance: 326c22d5-36a0-4d26-bfbf-6dc1811a1ea4] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec  6 03:15:59 np0005548731 nova_compute[232433]: 2025-12-06 08:15:59.506 232437 DEBUG nova.virt.libvirt.driver [None req-56685c18-7a67-4377-9203-025a37e0b256 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] [instance: 326c22d5-36a0-4d26-bfbf-6dc1811a1ea4] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Dec  6 03:15:59 np0005548731 nova_compute[232433]: 2025-12-06 08:15:59.509 232437 INFO nova.virt.libvirt.driver [-] [instance: 326c22d5-36a0-4d26-bfbf-6dc1811a1ea4] Instance spawned successfully.#033[00m
Dec  6 03:15:59 np0005548731 nova_compute[232433]: 2025-12-06 08:15:59.510 232437 DEBUG nova.virt.libvirt.driver [None req-56685c18-7a67-4377-9203-025a37e0b256 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] [instance: 326c22d5-36a0-4d26-bfbf-6dc1811a1ea4] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Dec  6 03:15:59 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 03:15:59 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 03:15:59 np0005548731 nova_compute[232433]: 2025-12-06 08:15:59.530 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 326c22d5-36a0-4d26-bfbf-6dc1811a1ea4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 03:15:59 np0005548731 nova_compute[232433]: 2025-12-06 08:15:59.535 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 326c22d5-36a0-4d26-bfbf-6dc1811a1ea4] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  6 03:15:59 np0005548731 nova_compute[232433]: 2025-12-06 08:15:59.538 232437 DEBUG nova.virt.libvirt.driver [None req-56685c18-7a67-4377-9203-025a37e0b256 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] [instance: 326c22d5-36a0-4d26-bfbf-6dc1811a1ea4] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 03:15:59 np0005548731 nova_compute[232433]: 2025-12-06 08:15:59.538 232437 DEBUG nova.virt.libvirt.driver [None req-56685c18-7a67-4377-9203-025a37e0b256 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] [instance: 326c22d5-36a0-4d26-bfbf-6dc1811a1ea4] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 03:15:59 np0005548731 nova_compute[232433]: 2025-12-06 08:15:59.538 232437 DEBUG nova.virt.libvirt.driver [None req-56685c18-7a67-4377-9203-025a37e0b256 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] [instance: 326c22d5-36a0-4d26-bfbf-6dc1811a1ea4] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 03:15:59 np0005548731 nova_compute[232433]: 2025-12-06 08:15:59.539 232437 DEBUG nova.virt.libvirt.driver [None req-56685c18-7a67-4377-9203-025a37e0b256 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] [instance: 326c22d5-36a0-4d26-bfbf-6dc1811a1ea4] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 03:15:59 np0005548731 nova_compute[232433]: 2025-12-06 08:15:59.539 232437 DEBUG nova.virt.libvirt.driver [None req-56685c18-7a67-4377-9203-025a37e0b256 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] [instance: 326c22d5-36a0-4d26-bfbf-6dc1811a1ea4] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 03:15:59 np0005548731 nova_compute[232433]: 2025-12-06 08:15:59.539 232437 DEBUG nova.virt.libvirt.driver [None req-56685c18-7a67-4377-9203-025a37e0b256 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] [instance: 326c22d5-36a0-4d26-bfbf-6dc1811a1ea4] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 03:15:59 np0005548731 nova_compute[232433]: 2025-12-06 08:15:59.588 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 326c22d5-36a0-4d26-bfbf-6dc1811a1ea4] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  6 03:15:59 np0005548731 nova_compute[232433]: 2025-12-06 08:15:59.589 232437 DEBUG nova.virt.driver [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Emitting event <LifecycleEvent: 1765008959.5001087, 326c22d5-36a0-4d26-bfbf-6dc1811a1ea4 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 03:15:59 np0005548731 nova_compute[232433]: 2025-12-06 08:15:59.589 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 326c22d5-36a0-4d26-bfbf-6dc1811a1ea4] VM Paused (Lifecycle Event)#033[00m
Dec  6 03:15:59 np0005548731 nova_compute[232433]: 2025-12-06 08:15:59.634 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 326c22d5-36a0-4d26-bfbf-6dc1811a1ea4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 03:15:59 np0005548731 nova_compute[232433]: 2025-12-06 08:15:59.637 232437 DEBUG nova.virt.driver [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Emitting event <LifecycleEvent: 1765008959.5055845, 326c22d5-36a0-4d26-bfbf-6dc1811a1ea4 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 03:15:59 np0005548731 nova_compute[232433]: 2025-12-06 08:15:59.637 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 326c22d5-36a0-4d26-bfbf-6dc1811a1ea4] VM Resumed (Lifecycle Event)#033[00m
Dec  6 03:15:59 np0005548731 nova_compute[232433]: 2025-12-06 08:15:59.668 232437 INFO nova.compute.manager [None req-56685c18-7a67-4377-9203-025a37e0b256 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] [instance: 326c22d5-36a0-4d26-bfbf-6dc1811a1ea4] Took 9.15 seconds to spawn the instance on the hypervisor.#033[00m
Dec  6 03:15:59 np0005548731 nova_compute[232433]: 2025-12-06 08:15:59.669 232437 DEBUG nova.compute.manager [None req-56685c18-7a67-4377-9203-025a37e0b256 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] [instance: 326c22d5-36a0-4d26-bfbf-6dc1811a1ea4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 03:15:59 np0005548731 nova_compute[232433]: 2025-12-06 08:15:59.670 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 326c22d5-36a0-4d26-bfbf-6dc1811a1ea4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 03:15:59 np0005548731 nova_compute[232433]: 2025-12-06 08:15:59.681 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 326c22d5-36a0-4d26-bfbf-6dc1811a1ea4] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  6 03:15:59 np0005548731 nova_compute[232433]: 2025-12-06 08:15:59.731 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 326c22d5-36a0-4d26-bfbf-6dc1811a1ea4] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  6 03:15:59 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:15:59.746 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=96, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:ec:b3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '32:72:e7:89:e0:7d'}, ipsec=False) old=SB_Global(nb_cfg=95) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 03:15:59 np0005548731 nova_compute[232433]: 2025-12-06 08:15:59.747 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:15:59 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:15:59.748 143965 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec  6 03:15:59 np0005548731 nova_compute[232433]: 2025-12-06 08:15:59.758 232437 INFO nova.compute.manager [None req-56685c18-7a67-4377-9203-025a37e0b256 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] [instance: 326c22d5-36a0-4d26-bfbf-6dc1811a1ea4] Took 11.29 seconds to build instance.#033[00m
Dec  6 03:15:59 np0005548731 nova_compute[232433]: 2025-12-06 08:15:59.796 232437 DEBUG oslo_concurrency.lockutils [None req-56685c18-7a67-4377-9203-025a37e0b256 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Lock "326c22d5-36a0-4d26-bfbf-6dc1811a1ea4" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.403s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:15:59 np0005548731 nova_compute[232433]: 2025-12-06 08:15:59.955 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:15:59 np0005548731 nova_compute[232433]: 2025-12-06 08:15:59.955 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:16:00 np0005548731 nova_compute[232433]: 2025-12-06 08:16:00.122 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:16:00 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:16:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:16:00.918 143965 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:16:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:16:00.919 143965 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:16:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:16:00.919 143965 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:16:00 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:16:00 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:16:00 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:16:00.948 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:16:01 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:16:01 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:16:01 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:16:01.121 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:16:01 np0005548731 nova_compute[232433]: 2025-12-06 08:16:01.989 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:16:02 np0005548731 nova_compute[232433]: 2025-12-06 08:16:02.105 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:16:02 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:16:02 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:16:02 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:16:02.951 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:16:03 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:16:03 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:16:03 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:16:03.124 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:16:03 np0005548731 nova_compute[232433]: 2025-12-06 08:16:03.211 232437 DEBUG oslo_concurrency.lockutils [None req-84d10ed6-cbb4-43bc-a284-a956943d0f55 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Acquiring lock "326c22d5-36a0-4d26-bfbf-6dc1811a1ea4" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:16:03 np0005548731 nova_compute[232433]: 2025-12-06 08:16:03.211 232437 DEBUG oslo_concurrency.lockutils [None req-84d10ed6-cbb4-43bc-a284-a956943d0f55 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Lock "326c22d5-36a0-4d26-bfbf-6dc1811a1ea4" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:16:03 np0005548731 nova_compute[232433]: 2025-12-06 08:16:03.211 232437 DEBUG oslo_concurrency.lockutils [None req-84d10ed6-cbb4-43bc-a284-a956943d0f55 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Acquiring lock "326c22d5-36a0-4d26-bfbf-6dc1811a1ea4-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:16:03 np0005548731 nova_compute[232433]: 2025-12-06 08:16:03.212 232437 DEBUG oslo_concurrency.lockutils [None req-84d10ed6-cbb4-43bc-a284-a956943d0f55 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Lock "326c22d5-36a0-4d26-bfbf-6dc1811a1ea4-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:16:03 np0005548731 nova_compute[232433]: 2025-12-06 08:16:03.212 232437 DEBUG oslo_concurrency.lockutils [None req-84d10ed6-cbb4-43bc-a284-a956943d0f55 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Lock "326c22d5-36a0-4d26-bfbf-6dc1811a1ea4-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:16:03 np0005548731 nova_compute[232433]: 2025-12-06 08:16:03.213 232437 INFO nova.compute.manager [None req-84d10ed6-cbb4-43bc-a284-a956943d0f55 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] [instance: 326c22d5-36a0-4d26-bfbf-6dc1811a1ea4] Terminating instance#033[00m
Dec  6 03:16:03 np0005548731 nova_compute[232433]: 2025-12-06 08:16:03.215 232437 DEBUG nova.compute.manager [None req-84d10ed6-cbb4-43bc-a284-a956943d0f55 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] [instance: 326c22d5-36a0-4d26-bfbf-6dc1811a1ea4] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec  6 03:16:03 np0005548731 kernel: tap2fb2e839-a8 (unregistering): left promiscuous mode
Dec  6 03:16:03 np0005548731 NetworkManager[49182]: <info>  [1765008963.4346] device (tap2fb2e839-a8): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec  6 03:16:03 np0005548731 ovn_controller[133927]: 2025-12-06T08:16:03Z|01050|binding|INFO|Releasing lport 2fb2e839-a8bc-4fe9-9dc5-09183d3bb063 from this chassis (sb_readonly=0)
Dec  6 03:16:03 np0005548731 ovn_controller[133927]: 2025-12-06T08:16:03Z|01051|binding|INFO|Setting lport 2fb2e839-a8bc-4fe9-9dc5-09183d3bb063 down in Southbound
Dec  6 03:16:03 np0005548731 nova_compute[232433]: 2025-12-06 08:16:03.443 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:16:03 np0005548731 ovn_controller[133927]: 2025-12-06T08:16:03Z|01052|binding|INFO|Removing iface tap2fb2e839-a8 ovn-installed in OVS
Dec  6 03:16:03 np0005548731 nova_compute[232433]: 2025-12-06 08:16:03.445 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:16:03 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:16:03.449 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1a:aa:95 10.100.0.4'], port_security=['fa:16:3e:1a:aa:95 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '326c22d5-36a0-4d26-bfbf-6dc1811a1ea4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b4ef1374-9c77-45a7-8776-50aa60c7d84a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4b2dc4b8729f446a9c7ac69ca446f71d', 'neutron:revision_number': '4', 'neutron:security_group_ids': '0d5802d0-c57b-4fcd-911a-7dedb970b91e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=60eec70d-8996-4225-9077-6d0f2705560a, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], logical_port=2fb2e839-a8bc-4fe9-9dc5-09183d3bb063) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 03:16:03 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:16:03.450 143965 INFO neutron.agent.ovn.metadata.agent [-] Port 2fb2e839-a8bc-4fe9-9dc5-09183d3bb063 in datapath b4ef1374-9c77-45a7-8776-50aa60c7d84a unbound from our chassis#033[00m
Dec  6 03:16:03 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:16:03.452 143965 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network b4ef1374-9c77-45a7-8776-50aa60c7d84a, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec  6 03:16:03 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:16:03.452 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[a08492ee-4f48-46c2-a1fa-985ca87a749c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:16:03 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:16:03.453 143965 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-b4ef1374-9c77-45a7-8776-50aa60c7d84a namespace which is not needed anymore#033[00m
Dec  6 03:16:03 np0005548731 nova_compute[232433]: 2025-12-06 08:16:03.462 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:16:03 np0005548731 systemd[1]: machine-qemu\x2d106\x2dinstance\x2d000000cd.scope: Deactivated successfully.
Dec  6 03:16:03 np0005548731 systemd[1]: machine-qemu\x2d106\x2dinstance\x2d000000cd.scope: Consumed 2.991s CPU time.
Dec  6 03:16:03 np0005548731 systemd-machined[195355]: Machine qemu-106-instance-000000cd terminated.
Dec  6 03:16:03 np0005548731 neutron-haproxy-ovnmeta-b4ef1374-9c77-45a7-8776-50aa60c7d84a[336922]: [NOTICE]   (336926) : haproxy version is 2.8.14-c23fe91
Dec  6 03:16:03 np0005548731 neutron-haproxy-ovnmeta-b4ef1374-9c77-45a7-8776-50aa60c7d84a[336922]: [NOTICE]   (336926) : path to executable is /usr/sbin/haproxy
Dec  6 03:16:03 np0005548731 neutron-haproxy-ovnmeta-b4ef1374-9c77-45a7-8776-50aa60c7d84a[336922]: [WARNING]  (336926) : Exiting Master process...
Dec  6 03:16:03 np0005548731 neutron-haproxy-ovnmeta-b4ef1374-9c77-45a7-8776-50aa60c7d84a[336922]: [ALERT]    (336926) : Current worker (336928) exited with code 143 (Terminated)
Dec  6 03:16:03 np0005548731 neutron-haproxy-ovnmeta-b4ef1374-9c77-45a7-8776-50aa60c7d84a[336922]: [WARNING]  (336926) : All workers exited. Exiting... (0)
Dec  6 03:16:03 np0005548731 systemd[1]: libpod-6471f175ab28414e82ca3ee9daaabd6eb13a8e34eb6b8b221d947b8df829bebc.scope: Deactivated successfully.
Dec  6 03:16:03 np0005548731 podman[337074]: 2025-12-06 08:16:03.591798938 +0000 UTC m=+0.043176972 container died 6471f175ab28414e82ca3ee9daaabd6eb13a8e34eb6b8b221d947b8df829bebc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b4ef1374-9c77-45a7-8776-50aa60c7d84a, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Dec  6 03:16:03 np0005548731 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-6471f175ab28414e82ca3ee9daaabd6eb13a8e34eb6b8b221d947b8df829bebc-userdata-shm.mount: Deactivated successfully.
Dec  6 03:16:03 np0005548731 systemd[1]: var-lib-containers-storage-overlay-5c7469c93c6d6d14d9a513c201f6b31cdb54e5e0ac455a48946040d1532c997e-merged.mount: Deactivated successfully.
Dec  6 03:16:03 np0005548731 podman[337074]: 2025-12-06 08:16:03.629416134 +0000 UTC m=+0.080794168 container cleanup 6471f175ab28414e82ca3ee9daaabd6eb13a8e34eb6b8b221d947b8df829bebc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b4ef1374-9c77-45a7-8776-50aa60c7d84a, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Dec  6 03:16:03 np0005548731 nova_compute[232433]: 2025-12-06 08:16:03.635 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:16:03 np0005548731 nova_compute[232433]: 2025-12-06 08:16:03.639 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:16:03 np0005548731 systemd[1]: libpod-conmon-6471f175ab28414e82ca3ee9daaabd6eb13a8e34eb6b8b221d947b8df829bebc.scope: Deactivated successfully.
Dec  6 03:16:03 np0005548731 nova_compute[232433]: 2025-12-06 08:16:03.651 232437 INFO nova.virt.libvirt.driver [-] [instance: 326c22d5-36a0-4d26-bfbf-6dc1811a1ea4] Instance destroyed successfully.#033[00m
Dec  6 03:16:03 np0005548731 nova_compute[232433]: 2025-12-06 08:16:03.652 232437 DEBUG nova.objects.instance [None req-84d10ed6-cbb4-43bc-a284-a956943d0f55 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Lazy-loading 'resources' on Instance uuid 326c22d5-36a0-4d26-bfbf-6dc1811a1ea4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 03:16:03 np0005548731 podman[337109]: 2025-12-06 08:16:03.713363778 +0000 UTC m=+0.057062571 container remove 6471f175ab28414e82ca3ee9daaabd6eb13a8e34eb6b8b221d947b8df829bebc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b4ef1374-9c77-45a7-8776-50aa60c7d84a, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  6 03:16:03 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:16:03.719 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[22a32e18-fda7-43bf-9606-07f92ecf027e]: (4, ('Sat Dec  6 08:16:03 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-b4ef1374-9c77-45a7-8776-50aa60c7d84a (6471f175ab28414e82ca3ee9daaabd6eb13a8e34eb6b8b221d947b8df829bebc)\n6471f175ab28414e82ca3ee9daaabd6eb13a8e34eb6b8b221d947b8df829bebc\nSat Dec  6 08:16:03 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-b4ef1374-9c77-45a7-8776-50aa60c7d84a (6471f175ab28414e82ca3ee9daaabd6eb13a8e34eb6b8b221d947b8df829bebc)\n6471f175ab28414e82ca3ee9daaabd6eb13a8e34eb6b8b221d947b8df829bebc\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:16:03 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:16:03.722 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[7c44aec9-7362-4a13-9b0f-170a854322f3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:16:03 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:16:03.723 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb4ef1374-90, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 03:16:03 np0005548731 kernel: tapb4ef1374-90: left promiscuous mode
Dec  6 03:16:03 np0005548731 nova_compute[232433]: 2025-12-06 08:16:03.725 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:16:03 np0005548731 nova_compute[232433]: 2025-12-06 08:16:03.742 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:16:03 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:16:03.746 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[3f30ed90-527d-4eff-b64f-87390c6db78b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:16:03 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:16:03.764 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[1b2da9db-911a-46a9-8df6-0e598a2f6ab3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:16:03 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:16:03.766 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[0f9f0847-0918-470c-9cca-d6e4fd7a18bd]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:16:03 np0005548731 nova_compute[232433]: 2025-12-06 08:16:03.774 232437 DEBUG nova.compute.manager [req-e6048943-344b-4a12-a1ab-bcbce59583c2 req-ba0b98a4-2a78-4fa4-b40e-9ab7272ee4d5 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 326c22d5-36a0-4d26-bfbf-6dc1811a1ea4] Received event network-vif-unplugged-2fb2e839-a8bc-4fe9-9dc5-09183d3bb063 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 03:16:03 np0005548731 nova_compute[232433]: 2025-12-06 08:16:03.774 232437 DEBUG oslo_concurrency.lockutils [req-e6048943-344b-4a12-a1ab-bcbce59583c2 req-ba0b98a4-2a78-4fa4-b40e-9ab7272ee4d5 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "326c22d5-36a0-4d26-bfbf-6dc1811a1ea4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:16:03 np0005548731 nova_compute[232433]: 2025-12-06 08:16:03.775 232437 DEBUG oslo_concurrency.lockutils [req-e6048943-344b-4a12-a1ab-bcbce59583c2 req-ba0b98a4-2a78-4fa4-b40e-9ab7272ee4d5 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "326c22d5-36a0-4d26-bfbf-6dc1811a1ea4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:16:03 np0005548731 nova_compute[232433]: 2025-12-06 08:16:03.775 232437 DEBUG oslo_concurrency.lockutils [req-e6048943-344b-4a12-a1ab-bcbce59583c2 req-ba0b98a4-2a78-4fa4-b40e-9ab7272ee4d5 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "326c22d5-36a0-4d26-bfbf-6dc1811a1ea4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:16:03 np0005548731 nova_compute[232433]: 2025-12-06 08:16:03.775 232437 DEBUG nova.compute.manager [req-e6048943-344b-4a12-a1ab-bcbce59583c2 req-ba0b98a4-2a78-4fa4-b40e-9ab7272ee4d5 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 326c22d5-36a0-4d26-bfbf-6dc1811a1ea4] No waiting events found dispatching network-vif-unplugged-2fb2e839-a8bc-4fe9-9dc5-09183d3bb063 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 03:16:03 np0005548731 nova_compute[232433]: 2025-12-06 08:16:03.775 232437 DEBUG nova.compute.manager [req-e6048943-344b-4a12-a1ab-bcbce59583c2 req-ba0b98a4-2a78-4fa4-b40e-9ab7272ee4d5 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 326c22d5-36a0-4d26-bfbf-6dc1811a1ea4] Received event network-vif-unplugged-2fb2e839-a8bc-4fe9-9dc5-09183d3bb063 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Dec  6 03:16:03 np0005548731 nova_compute[232433]: 2025-12-06 08:16:03.777 232437 DEBUG nova.virt.libvirt.vif [None req-84d10ed6-cbb4-43bc-a284-a956943d0f55 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T08:15:47Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-server-2129660522',display_name='tempest-TestVolumeBootPattern-server-2129660522',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-testvolumebootpattern-server-2129660522',id=205,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-06T08:15:59Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='4b2dc4b8729f446a9c7ac69ca446f71d',ramdisk_id='',reservation_id='r-2o7nux00',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestVolumeBootPattern-97496240',owner_user_name='tempest-TestVolumeBootPattern-97496240-project-
member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T08:15:59Z,user_data=None,user_id='8e8feb4540af4e2caa45a88a9202dbe2',uuid=326c22d5-36a0-4d26-bfbf-6dc1811a1ea4,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "2fb2e839-a8bc-4fe9-9dc5-09183d3bb063", "address": "fa:16:3e:1a:aa:95", "network": {"id": "b4ef1374-9c77-45a7-8776-50aa60c7d84a", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1664561964-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4b2dc4b8729f446a9c7ac69ca446f71d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2fb2e839-a8", "ovs_interfaceid": "2fb2e839-a8bc-4fe9-9dc5-09183d3bb063", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec  6 03:16:03 np0005548731 nova_compute[232433]: 2025-12-06 08:16:03.777 232437 DEBUG nova.network.os_vif_util [None req-84d10ed6-cbb4-43bc-a284-a956943d0f55 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Converting VIF {"id": "2fb2e839-a8bc-4fe9-9dc5-09183d3bb063", "address": "fa:16:3e:1a:aa:95", "network": {"id": "b4ef1374-9c77-45a7-8776-50aa60c7d84a", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1664561964-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4b2dc4b8729f446a9c7ac69ca446f71d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2fb2e839-a8", "ovs_interfaceid": "2fb2e839-a8bc-4fe9-9dc5-09183d3bb063", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 03:16:03 np0005548731 nova_compute[232433]: 2025-12-06 08:16:03.778 232437 DEBUG nova.network.os_vif_util [None req-84d10ed6-cbb4-43bc-a284-a956943d0f55 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1a:aa:95,bridge_name='br-int',has_traffic_filtering=True,id=2fb2e839-a8bc-4fe9-9dc5-09183d3bb063,network=Network(b4ef1374-9c77-45a7-8776-50aa60c7d84a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2fb2e839-a8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 03:16:03 np0005548731 nova_compute[232433]: 2025-12-06 08:16:03.779 232437 DEBUG os_vif [None req-84d10ed6-cbb4-43bc-a284-a956943d0f55 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:1a:aa:95,bridge_name='br-int',has_traffic_filtering=True,id=2fb2e839-a8bc-4fe9-9dc5-09183d3bb063,network=Network(b4ef1374-9c77-45a7-8776-50aa60c7d84a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2fb2e839-a8') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec  6 03:16:03 np0005548731 nova_compute[232433]: 2025-12-06 08:16:03.780 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:16:03 np0005548731 nova_compute[232433]: 2025-12-06 08:16:03.781 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2fb2e839-a8, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 03:16:03 np0005548731 nova_compute[232433]: 2025-12-06 08:16:03.782 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:16:03 np0005548731 nova_compute[232433]: 2025-12-06 08:16:03.784 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:16:03 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:16:03.786 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[3b158aa5-63b8-45ee-bf52-adf2bdec1b31]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 921867, 'reachable_time': 39641, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 337133, 'error': None, 'target': 'ovnmeta-b4ef1374-9c77-45a7-8776-50aa60c7d84a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:16:03 np0005548731 nova_compute[232433]: 2025-12-06 08:16:03.788 232437 INFO os_vif [None req-84d10ed6-cbb4-43bc-a284-a956943d0f55 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:1a:aa:95,bridge_name='br-int',has_traffic_filtering=True,id=2fb2e839-a8bc-4fe9-9dc5-09183d3bb063,network=Network(b4ef1374-9c77-45a7-8776-50aa60c7d84a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2fb2e839-a8')#033[00m
Dec  6 03:16:03 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:16:03.789 144079 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-b4ef1374-9c77-45a7-8776-50aa60c7d84a deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Dec  6 03:16:03 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:16:03.790 144079 DEBUG oslo.privsep.daemon [-] privsep: reply[1378b9a3-3454-4680-a7aa-a76628228894]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:16:03 np0005548731 systemd[1]: run-netns-ovnmeta\x2db4ef1374\x2d9c77\x2d45a7\x2d8776\x2d50aa60c7d84a.mount: Deactivated successfully.
Dec  6 03:16:04 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:16:04 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:16:04 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:16:04.953 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:16:05 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:16:05 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:16:05 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:16:05.126 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:16:05 np0005548731 nova_compute[232433]: 2025-12-06 08:16:05.280 232437 INFO nova.virt.libvirt.driver [None req-84d10ed6-cbb4-43bc-a284-a956943d0f55 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] [instance: 326c22d5-36a0-4d26-bfbf-6dc1811a1ea4] Deleting instance files /var/lib/nova/instances/326c22d5-36a0-4d26-bfbf-6dc1811a1ea4_del#033[00m
Dec  6 03:16:05 np0005548731 nova_compute[232433]: 2025-12-06 08:16:05.281 232437 INFO nova.virt.libvirt.driver [None req-84d10ed6-cbb4-43bc-a284-a956943d0f55 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] [instance: 326c22d5-36a0-4d26-bfbf-6dc1811a1ea4] Deletion of /var/lib/nova/instances/326c22d5-36a0-4d26-bfbf-6dc1811a1ea4_del complete#033[00m
Dec  6 03:16:05 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:16:05 np0005548731 nova_compute[232433]: 2025-12-06 08:16:05.330 232437 INFO nova.compute.manager [None req-84d10ed6-cbb4-43bc-a284-a956943d0f55 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] [instance: 326c22d5-36a0-4d26-bfbf-6dc1811a1ea4] Took 2.11 seconds to destroy the instance on the hypervisor.#033[00m
Dec  6 03:16:05 np0005548731 nova_compute[232433]: 2025-12-06 08:16:05.330 232437 DEBUG oslo.service.loopingcall [None req-84d10ed6-cbb4-43bc-a284-a956943d0f55 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Dec  6 03:16:05 np0005548731 nova_compute[232433]: 2025-12-06 08:16:05.331 232437 DEBUG nova.compute.manager [-] [instance: 326c22d5-36a0-4d26-bfbf-6dc1811a1ea4] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Dec  6 03:16:05 np0005548731 nova_compute[232433]: 2025-12-06 08:16:05.331 232437 DEBUG nova.network.neutron [-] [instance: 326c22d5-36a0-4d26-bfbf-6dc1811a1ea4] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Dec  6 03:16:05 np0005548731 nova_compute[232433]: 2025-12-06 08:16:05.866 232437 DEBUG nova.compute.manager [req-0ff86584-1155-401b-9072-5fe2a40c109a req-9e675e08-b7f5-4cb3-944f-b854c1eee615 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 326c22d5-36a0-4d26-bfbf-6dc1811a1ea4] Received event network-vif-plugged-2fb2e839-a8bc-4fe9-9dc5-09183d3bb063 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 03:16:05 np0005548731 nova_compute[232433]: 2025-12-06 08:16:05.866 232437 DEBUG oslo_concurrency.lockutils [req-0ff86584-1155-401b-9072-5fe2a40c109a req-9e675e08-b7f5-4cb3-944f-b854c1eee615 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "326c22d5-36a0-4d26-bfbf-6dc1811a1ea4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:16:05 np0005548731 nova_compute[232433]: 2025-12-06 08:16:05.867 232437 DEBUG oslo_concurrency.lockutils [req-0ff86584-1155-401b-9072-5fe2a40c109a req-9e675e08-b7f5-4cb3-944f-b854c1eee615 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "326c22d5-36a0-4d26-bfbf-6dc1811a1ea4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:16:05 np0005548731 nova_compute[232433]: 2025-12-06 08:16:05.867 232437 DEBUG oslo_concurrency.lockutils [req-0ff86584-1155-401b-9072-5fe2a40c109a req-9e675e08-b7f5-4cb3-944f-b854c1eee615 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "326c22d5-36a0-4d26-bfbf-6dc1811a1ea4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:16:05 np0005548731 nova_compute[232433]: 2025-12-06 08:16:05.867 232437 DEBUG nova.compute.manager [req-0ff86584-1155-401b-9072-5fe2a40c109a req-9e675e08-b7f5-4cb3-944f-b854c1eee615 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 326c22d5-36a0-4d26-bfbf-6dc1811a1ea4] No waiting events found dispatching network-vif-plugged-2fb2e839-a8bc-4fe9-9dc5-09183d3bb063 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 03:16:05 np0005548731 nova_compute[232433]: 2025-12-06 08:16:05.867 232437 WARNING nova.compute.manager [req-0ff86584-1155-401b-9072-5fe2a40c109a req-9e675e08-b7f5-4cb3-944f-b854c1eee615 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 326c22d5-36a0-4d26-bfbf-6dc1811a1ea4] Received unexpected event network-vif-plugged-2fb2e839-a8bc-4fe9-9dc5-09183d3bb063 for instance with vm_state active and task_state deleting.#033[00m
Dec  6 03:16:06 np0005548731 nova_compute[232433]: 2025-12-06 08:16:06.063 232437 DEBUG nova.network.neutron [-] [instance: 326c22d5-36a0-4d26-bfbf-6dc1811a1ea4] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 03:16:06 np0005548731 nova_compute[232433]: 2025-12-06 08:16:06.082 232437 INFO nova.compute.manager [-] [instance: 326c22d5-36a0-4d26-bfbf-6dc1811a1ea4] Took 0.75 seconds to deallocate network for instance.#033[00m
Dec  6 03:16:06 np0005548731 nova_compute[232433]: 2025-12-06 08:16:06.142 232437 DEBUG nova.compute.manager [req-e3a72db5-0c47-4371-aa60-6408d6343ada req-702b0375-edfd-4f49-b1d8-e88831920897 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 326c22d5-36a0-4d26-bfbf-6dc1811a1ea4] Received event network-vif-deleted-2fb2e839-a8bc-4fe9-9dc5-09183d3bb063 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 03:16:06 np0005548731 nova_compute[232433]: 2025-12-06 08:16:06.295 232437 INFO nova.compute.manager [None req-84d10ed6-cbb4-43bc-a284-a956943d0f55 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] [instance: 326c22d5-36a0-4d26-bfbf-6dc1811a1ea4] Took 0.21 seconds to detach 1 volumes for instance.#033[00m
Dec  6 03:16:06 np0005548731 nova_compute[232433]: 2025-12-06 08:16:06.343 232437 DEBUG oslo_concurrency.lockutils [None req-84d10ed6-cbb4-43bc-a284-a956943d0f55 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:16:06 np0005548731 nova_compute[232433]: 2025-12-06 08:16:06.343 232437 DEBUG oslo_concurrency.lockutils [None req-84d10ed6-cbb4-43bc-a284-a956943d0f55 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:16:06 np0005548731 nova_compute[232433]: 2025-12-06 08:16:06.388 232437 DEBUG oslo_concurrency.processutils [None req-84d10ed6-cbb4-43bc-a284-a956943d0f55 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 03:16:06 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:16:06.750 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=9f96b960-b4f2-40bd-ae99-08121f5e8b78, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '96'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 03:16:06 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 03:16:06 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1624445191' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 03:16:06 np0005548731 nova_compute[232433]: 2025-12-06 08:16:06.806 232437 DEBUG oslo_concurrency.processutils [None req-84d10ed6-cbb4-43bc-a284-a956943d0f55 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.418s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 03:16:06 np0005548731 nova_compute[232433]: 2025-12-06 08:16:06.812 232437 DEBUG nova.compute.provider_tree [None req-84d10ed6-cbb4-43bc-a284-a956943d0f55 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Inventory has not changed in ProviderTree for provider: 6d00757a-082f-486d-ae84-869a2ba2e6e7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  6 03:16:06 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:16:06 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:16:06 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:16:06.955 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:16:06 np0005548731 nova_compute[232433]: 2025-12-06 08:16:06.991 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:16:07 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:16:07 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:16:07 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:16:07.129 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:16:07 np0005548731 nova_compute[232433]: 2025-12-06 08:16:07.285 232437 DEBUG nova.scheduler.client.report [None req-84d10ed6-cbb4-43bc-a284-a956943d0f55 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Inventory has not changed for provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  6 03:16:07 np0005548731 nova_compute[232433]: 2025-12-06 08:16:07.509 232437 DEBUG oslo_concurrency.lockutils [None req-84d10ed6-cbb4-43bc-a284-a956943d0f55 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.166s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:16:07 np0005548731 nova_compute[232433]: 2025-12-06 08:16:07.540 232437 INFO nova.scheduler.client.report [None req-84d10ed6-cbb4-43bc-a284-a956943d0f55 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Deleted allocations for instance 326c22d5-36a0-4d26-bfbf-6dc1811a1ea4#033[00m
Dec  6 03:16:07 np0005548731 nova_compute[232433]: 2025-12-06 08:16:07.635 232437 DEBUG oslo_concurrency.lockutils [None req-84d10ed6-cbb4-43bc-a284-a956943d0f55 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Lock "326c22d5-36a0-4d26-bfbf-6dc1811a1ea4" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.423s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:16:07 np0005548731 ceph-osd[80147]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #59. Immutable memtables: 15.
Dec  6 03:16:08 np0005548731 nova_compute[232433]: 2025-12-06 08:16:08.784 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:16:08 np0005548731 podman[337178]: 2025-12-06 08:16:08.901273356 +0000 UTC m=+0.051867175 container health_status 26c2ce9bdb83c6db5ccad7f5b2a7cb4e9354ab0235ad2f942ea42c8feda2142b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS)
Dec  6 03:16:08 np0005548731 podman[337185]: 2025-12-06 08:16:08.921468708 +0000 UTC m=+0.054438607 container health_status ae4fd08cdec31d6ae2a6b91ffff7deb6ca5a8375cb52ee4b685cc6f25796824e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, config_id=multipathd)
Dec  6 03:16:08 np0005548731 podman[337179]: 2025-12-06 08:16:08.934882275 +0000 UTC m=+0.073854620 container health_status a118c3becf56cfbcae208065771ebf10a65162bb7b96389971293e534823fb78 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Dec  6 03:16:08 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:16:08 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:16:08 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:16:08.957 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:16:09 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:16:09 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 03:16:09 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:16:09.131 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 03:16:10 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:16:10 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:16:10 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:16:10 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:16:10.959 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:16:11 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:16:11 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 03:16:11 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:16:11.133 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 03:16:12 np0005548731 nova_compute[232433]: 2025-12-06 08:16:12.039 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:16:12 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:16:12 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:16:12 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:16:12.961 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:16:13 np0005548731 nova_compute[232433]: 2025-12-06 08:16:13.100 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:16:13 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:16:13 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:16:13 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:16:13.137 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:16:13 np0005548731 nova_compute[232433]: 2025-12-06 08:16:13.786 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:16:14 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:16:14 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:16:14 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:16:14.963 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:16:15 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:16:15 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:16:15 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:16:15.139 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:16:15 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:16:16 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:16:16 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:16:16 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:16:16.965 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:16:17 np0005548731 nova_compute[232433]: 2025-12-06 08:16:17.075 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:16:17 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:16:17 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:16:17 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:16:17.142 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:16:18 np0005548731 nova_compute[232433]: 2025-12-06 08:16:18.650 232437 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765008963.6494548, 326c22d5-36a0-4d26-bfbf-6dc1811a1ea4 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 03:16:18 np0005548731 nova_compute[232433]: 2025-12-06 08:16:18.651 232437 INFO nova.compute.manager [-] [instance: 326c22d5-36a0-4d26-bfbf-6dc1811a1ea4] VM Stopped (Lifecycle Event)#033[00m
Dec  6 03:16:18 np0005548731 nova_compute[232433]: 2025-12-06 08:16:18.673 232437 DEBUG nova.compute.manager [None req-8666a486-1baf-4b50-ac12-64685fa8f145 - - - - - -] [instance: 326c22d5-36a0-4d26-bfbf-6dc1811a1ea4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 03:16:18 np0005548731 nova_compute[232433]: 2025-12-06 08:16:18.787 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:16:18 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:16:18 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:16:18 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:16:18.967 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:16:19 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:16:19 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:16:19 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:16:19.145 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:16:20 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:16:20 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:16:20 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:16:20 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:16:20.969 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:16:21 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:16:21 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:16:21 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:16:21.147 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:16:21 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e417 e417: 3 total, 3 up, 3 in
Dec  6 03:16:22 np0005548731 nova_compute[232433]: 2025-12-06 08:16:22.077 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:16:22 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:16:22 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:16:22 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:16:22.972 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:16:23 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:16:23 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:16:23 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:16:23.150 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:16:23 np0005548731 nova_compute[232433]: 2025-12-06 08:16:23.789 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:16:24 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:16:24 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:16:24 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:16:24.975 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:16:25 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:16:25 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:16:25 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:16:25.152 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:16:25 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e417 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:16:26 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:16:26 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:16:26 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:16:26.978 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:16:27 np0005548731 nova_compute[232433]: 2025-12-06 08:16:27.080 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:16:27 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:16:27 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:16:27 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:16:27.155 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:16:27 np0005548731 nova_compute[232433]: 2025-12-06 08:16:27.494 232437 DEBUG oslo_concurrency.lockutils [None req-cd26e520-6f65-4c43-b3cb-5ee0386ded28 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Acquiring lock "1afa9f23-5514-4eb1-9d88-856a8f5e6744" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:16:27 np0005548731 nova_compute[232433]: 2025-12-06 08:16:27.495 232437 DEBUG oslo_concurrency.lockutils [None req-cd26e520-6f65-4c43-b3cb-5ee0386ded28 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Lock "1afa9f23-5514-4eb1-9d88-856a8f5e6744" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:16:27 np0005548731 nova_compute[232433]: 2025-12-06 08:16:27.517 232437 DEBUG nova.compute.manager [None req-cd26e520-6f65-4c43-b3cb-5ee0386ded28 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] [instance: 1afa9f23-5514-4eb1-9d88-856a8f5e6744] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Dec  6 03:16:27 np0005548731 nova_compute[232433]: 2025-12-06 08:16:27.598 232437 DEBUG oslo_concurrency.lockutils [None req-cd26e520-6f65-4c43-b3cb-5ee0386ded28 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:16:27 np0005548731 nova_compute[232433]: 2025-12-06 08:16:27.598 232437 DEBUG oslo_concurrency.lockutils [None req-cd26e520-6f65-4c43-b3cb-5ee0386ded28 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:16:27 np0005548731 nova_compute[232433]: 2025-12-06 08:16:27.607 232437 DEBUG nova.virt.hardware [None req-cd26e520-6f65-4c43-b3cb-5ee0386ded28 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Dec  6 03:16:27 np0005548731 nova_compute[232433]: 2025-12-06 08:16:27.607 232437 INFO nova.compute.claims [None req-cd26e520-6f65-4c43-b3cb-5ee0386ded28 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] [instance: 1afa9f23-5514-4eb1-9d88-856a8f5e6744] Claim successful on node compute-2.ctlplane.example.com#033[00m
Dec  6 03:16:27 np0005548731 nova_compute[232433]: 2025-12-06 08:16:27.774 232437 DEBUG oslo_concurrency.processutils [None req-cd26e520-6f65-4c43-b3cb-5ee0386ded28 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 03:16:28 np0005548731 nova_compute[232433]: 2025-12-06 08:16:28.791 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:16:28 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:16:28 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:16:28 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:16:28.980 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:16:29 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:16:29 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:16:29 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:16:29.159 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:16:29 np0005548731 nova_compute[232433]: 2025-12-06 08:16:29.177 232437 DEBUG oslo_concurrency.lockutils [None req-4eb84f0c-1c64-4e03-a60c-4af58eebc66d e07a31bfb9584e07a5620ecf91e88a4b c9b819f443be4cbe852a843ef1c9e3c7 - - default default] Acquiring lock "73127705-004d-4660-88f2-e3758c8f14c0" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:16:29 np0005548731 nova_compute[232433]: 2025-12-06 08:16:29.178 232437 DEBUG oslo_concurrency.lockutils [None req-4eb84f0c-1c64-4e03-a60c-4af58eebc66d e07a31bfb9584e07a5620ecf91e88a4b c9b819f443be4cbe852a843ef1c9e3c7 - - default default] Lock "73127705-004d-4660-88f2-e3758c8f14c0" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:16:29 np0005548731 nova_compute[232433]: 2025-12-06 08:16:29.193 232437 DEBUG nova.compute.manager [None req-4eb84f0c-1c64-4e03-a60c-4af58eebc66d e07a31bfb9584e07a5620ecf91e88a4b c9b819f443be4cbe852a843ef1c9e3c7 - - default default] [instance: 73127705-004d-4660-88f2-e3758c8f14c0] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Dec  6 03:16:29 np0005548731 nova_compute[232433]: 2025-12-06 08:16:29.263 232437 DEBUG oslo_concurrency.lockutils [None req-4eb84f0c-1c64-4e03-a60c-4af58eebc66d e07a31bfb9584e07a5620ecf91e88a4b c9b819f443be4cbe852a843ef1c9e3c7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:16:30 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 03:16:30 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1315743655' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 03:16:30 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e417 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:16:30 np0005548731 nova_compute[232433]: 2025-12-06 08:16:30.340 232437 DEBUG oslo_concurrency.processutils [None req-cd26e520-6f65-4c43-b3cb-5ee0386ded28 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.566s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 03:16:30 np0005548731 nova_compute[232433]: 2025-12-06 08:16:30.346 232437 DEBUG nova.compute.provider_tree [None req-cd26e520-6f65-4c43-b3cb-5ee0386ded28 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Inventory has not changed in ProviderTree for provider: 6d00757a-082f-486d-ae84-869a2ba2e6e7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  6 03:16:30 np0005548731 nova_compute[232433]: 2025-12-06 08:16:30.366 232437 DEBUG nova.scheduler.client.report [None req-cd26e520-6f65-4c43-b3cb-5ee0386ded28 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Inventory has not changed for provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  6 03:16:30 np0005548731 nova_compute[232433]: 2025-12-06 08:16:30.387 232437 DEBUG oslo_concurrency.lockutils [None req-cd26e520-6f65-4c43-b3cb-5ee0386ded28 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 2.789s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:16:30 np0005548731 nova_compute[232433]: 2025-12-06 08:16:30.387 232437 DEBUG nova.compute.manager [None req-cd26e520-6f65-4c43-b3cb-5ee0386ded28 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] [instance: 1afa9f23-5514-4eb1-9d88-856a8f5e6744] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Dec  6 03:16:30 np0005548731 nova_compute[232433]: 2025-12-06 08:16:30.389 232437 DEBUG oslo_concurrency.lockutils [None req-4eb84f0c-1c64-4e03-a60c-4af58eebc66d e07a31bfb9584e07a5620ecf91e88a4b c9b819f443be4cbe852a843ef1c9e3c7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 1.126s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:16:30 np0005548731 nova_compute[232433]: 2025-12-06 08:16:30.401 232437 DEBUG nova.virt.hardware [None req-4eb84f0c-1c64-4e03-a60c-4af58eebc66d e07a31bfb9584e07a5620ecf91e88a4b c9b819f443be4cbe852a843ef1c9e3c7 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Dec  6 03:16:30 np0005548731 nova_compute[232433]: 2025-12-06 08:16:30.402 232437 INFO nova.compute.claims [None req-4eb84f0c-1c64-4e03-a60c-4af58eebc66d e07a31bfb9584e07a5620ecf91e88a4b c9b819f443be4cbe852a843ef1c9e3c7 - - default default] [instance: 73127705-004d-4660-88f2-e3758c8f14c0] Claim successful on node compute-2.ctlplane.example.com#033[00m
Dec  6 03:16:30 np0005548731 nova_compute[232433]: 2025-12-06 08:16:30.463 232437 DEBUG nova.compute.manager [None req-cd26e520-6f65-4c43-b3cb-5ee0386ded28 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] [instance: 1afa9f23-5514-4eb1-9d88-856a8f5e6744] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Dec  6 03:16:30 np0005548731 nova_compute[232433]: 2025-12-06 08:16:30.464 232437 DEBUG nova.network.neutron [None req-cd26e520-6f65-4c43-b3cb-5ee0386ded28 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] [instance: 1afa9f23-5514-4eb1-9d88-856a8f5e6744] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Dec  6 03:16:30 np0005548731 nova_compute[232433]: 2025-12-06 08:16:30.488 232437 INFO nova.virt.libvirt.driver [None req-cd26e520-6f65-4c43-b3cb-5ee0386ded28 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] [instance: 1afa9f23-5514-4eb1-9d88-856a8f5e6744] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Dec  6 03:16:30 np0005548731 nova_compute[232433]: 2025-12-06 08:16:30.547 232437 DEBUG nova.compute.manager [None req-cd26e520-6f65-4c43-b3cb-5ee0386ded28 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] [instance: 1afa9f23-5514-4eb1-9d88-856a8f5e6744] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Dec  6 03:16:30 np0005548731 nova_compute[232433]: 2025-12-06 08:16:30.569 232437 DEBUG oslo_concurrency.processutils [None req-4eb84f0c-1c64-4e03-a60c-4af58eebc66d e07a31bfb9584e07a5620ecf91e88a4b c9b819f443be4cbe852a843ef1c9e3c7 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 03:16:30 np0005548731 nova_compute[232433]: 2025-12-06 08:16:30.602 232437 INFO nova.virt.block_device [None req-cd26e520-6f65-4c43-b3cb-5ee0386ded28 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] [instance: 1afa9f23-5514-4eb1-9d88-856a8f5e6744] Booting with volume snapshot 65e19d19-35ce-41d2-a291-8681f1e9bcd8 at /dev/vda#033[00m
Dec  6 03:16:30 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 03:16:30 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3150436561' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 03:16:30 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:16:30 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:16:30 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:16:30.982 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:16:30 np0005548731 nova_compute[232433]: 2025-12-06 08:16:30.989 232437 DEBUG oslo_concurrency.processutils [None req-4eb84f0c-1c64-4e03-a60c-4af58eebc66d e07a31bfb9584e07a5620ecf91e88a4b c9b819f443be4cbe852a843ef1c9e3c7 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.421s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 03:16:30 np0005548731 nova_compute[232433]: 2025-12-06 08:16:30.995 232437 DEBUG nova.compute.provider_tree [None req-4eb84f0c-1c64-4e03-a60c-4af58eebc66d e07a31bfb9584e07a5620ecf91e88a4b c9b819f443be4cbe852a843ef1c9e3c7 - - default default] Inventory has not changed in ProviderTree for provider: 6d00757a-082f-486d-ae84-869a2ba2e6e7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  6 03:16:31 np0005548731 nova_compute[232433]: 2025-12-06 08:16:31.023 232437 DEBUG nova.scheduler.client.report [None req-4eb84f0c-1c64-4e03-a60c-4af58eebc66d e07a31bfb9584e07a5620ecf91e88a4b c9b819f443be4cbe852a843ef1c9e3c7 - - default default] Inventory has not changed for provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  6 03:16:31 np0005548731 nova_compute[232433]: 2025-12-06 08:16:31.051 232437 DEBUG oslo_concurrency.lockutils [None req-4eb84f0c-1c64-4e03-a60c-4af58eebc66d e07a31bfb9584e07a5620ecf91e88a4b c9b819f443be4cbe852a843ef1c9e3c7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.662s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:16:31 np0005548731 nova_compute[232433]: 2025-12-06 08:16:31.052 232437 DEBUG nova.compute.manager [None req-4eb84f0c-1c64-4e03-a60c-4af58eebc66d e07a31bfb9584e07a5620ecf91e88a4b c9b819f443be4cbe852a843ef1c9e3c7 - - default default] [instance: 73127705-004d-4660-88f2-e3758c8f14c0] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Dec  6 03:16:31 np0005548731 nova_compute[232433]: 2025-12-06 08:16:31.056 232437 DEBUG nova.policy [None req-cd26e520-6f65-4c43-b3cb-5ee0386ded28 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '8e8feb4540af4e2caa45a88a9202dbe2', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '4b2dc4b8729f446a9c7ac69ca446f71d', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Dec  6 03:16:31 np0005548731 nova_compute[232433]: 2025-12-06 08:16:31.132 232437 DEBUG nova.compute.manager [None req-4eb84f0c-1c64-4e03-a60c-4af58eebc66d e07a31bfb9584e07a5620ecf91e88a4b c9b819f443be4cbe852a843ef1c9e3c7 - - default default] [instance: 73127705-004d-4660-88f2-e3758c8f14c0] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Dec  6 03:16:31 np0005548731 nova_compute[232433]: 2025-12-06 08:16:31.132 232437 DEBUG nova.network.neutron [None req-4eb84f0c-1c64-4e03-a60c-4af58eebc66d e07a31bfb9584e07a5620ecf91e88a4b c9b819f443be4cbe852a843ef1c9e3c7 - - default default] [instance: 73127705-004d-4660-88f2-e3758c8f14c0] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Dec  6 03:16:31 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:16:31 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:16:31 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:16:31.162 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:16:31 np0005548731 nova_compute[232433]: 2025-12-06 08:16:31.182 232437 INFO nova.virt.libvirt.driver [None req-4eb84f0c-1c64-4e03-a60c-4af58eebc66d e07a31bfb9584e07a5620ecf91e88a4b c9b819f443be4cbe852a843ef1c9e3c7 - - default default] [instance: 73127705-004d-4660-88f2-e3758c8f14c0] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Dec  6 03:16:31 np0005548731 nova_compute[232433]: 2025-12-06 08:16:31.212 232437 DEBUG nova.compute.manager [None req-4eb84f0c-1c64-4e03-a60c-4af58eebc66d e07a31bfb9584e07a5620ecf91e88a4b c9b819f443be4cbe852a843ef1c9e3c7 - - default default] [instance: 73127705-004d-4660-88f2-e3758c8f14c0] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Dec  6 03:16:31 np0005548731 nova_compute[232433]: 2025-12-06 08:16:31.387 232437 DEBUG nova.compute.manager [None req-4eb84f0c-1c64-4e03-a60c-4af58eebc66d e07a31bfb9584e07a5620ecf91e88a4b c9b819f443be4cbe852a843ef1c9e3c7 - - default default] [instance: 73127705-004d-4660-88f2-e3758c8f14c0] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Dec  6 03:16:31 np0005548731 nova_compute[232433]: 2025-12-06 08:16:31.389 232437 DEBUG nova.virt.libvirt.driver [None req-4eb84f0c-1c64-4e03-a60c-4af58eebc66d e07a31bfb9584e07a5620ecf91e88a4b c9b819f443be4cbe852a843ef1c9e3c7 - - default default] [instance: 73127705-004d-4660-88f2-e3758c8f14c0] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Dec  6 03:16:31 np0005548731 nova_compute[232433]: 2025-12-06 08:16:31.390 232437 INFO nova.virt.libvirt.driver [None req-4eb84f0c-1c64-4e03-a60c-4af58eebc66d e07a31bfb9584e07a5620ecf91e88a4b c9b819f443be4cbe852a843ef1c9e3c7 - - default default] [instance: 73127705-004d-4660-88f2-e3758c8f14c0] Creating image(s)#033[00m
Dec  6 03:16:31 np0005548731 nova_compute[232433]: 2025-12-06 08:16:31.416 232437 DEBUG nova.storage.rbd_utils [None req-4eb84f0c-1c64-4e03-a60c-4af58eebc66d e07a31bfb9584e07a5620ecf91e88a4b c9b819f443be4cbe852a843ef1c9e3c7 - - default default] rbd image 73127705-004d-4660-88f2-e3758c8f14c0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 03:16:31 np0005548731 nova_compute[232433]: 2025-12-06 08:16:31.440 232437 DEBUG nova.storage.rbd_utils [None req-4eb84f0c-1c64-4e03-a60c-4af58eebc66d e07a31bfb9584e07a5620ecf91e88a4b c9b819f443be4cbe852a843ef1c9e3c7 - - default default] rbd image 73127705-004d-4660-88f2-e3758c8f14c0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 03:16:31 np0005548731 nova_compute[232433]: 2025-12-06 08:16:31.464 232437 DEBUG nova.storage.rbd_utils [None req-4eb84f0c-1c64-4e03-a60c-4af58eebc66d e07a31bfb9584e07a5620ecf91e88a4b c9b819f443be4cbe852a843ef1c9e3c7 - - default default] rbd image 73127705-004d-4660-88f2-e3758c8f14c0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 03:16:31 np0005548731 nova_compute[232433]: 2025-12-06 08:16:31.467 232437 DEBUG oslo_concurrency.processutils [None req-4eb84f0c-1c64-4e03-a60c-4af58eebc66d e07a31bfb9584e07a5620ecf91e88a4b c9b819f443be4cbe852a843ef1c9e3c7 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 03:16:31 np0005548731 nova_compute[232433]: 2025-12-06 08:16:31.512 232437 DEBUG nova.policy [None req-4eb84f0c-1c64-4e03-a60c-4af58eebc66d e07a31bfb9584e07a5620ecf91e88a4b c9b819f443be4cbe852a843ef1c9e3c7 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'e07a31bfb9584e07a5620ecf91e88a4b', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'c9b819f443be4cbe852a843ef1c9e3c7', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Dec  6 03:16:31 np0005548731 nova_compute[232433]: 2025-12-06 08:16:31.532 232437 DEBUG oslo_concurrency.processutils [None req-4eb84f0c-1c64-4e03-a60c-4af58eebc66d e07a31bfb9584e07a5620ecf91e88a4b c9b819f443be4cbe852a843ef1c9e3c7 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 03:16:31 np0005548731 nova_compute[232433]: 2025-12-06 08:16:31.533 232437 DEBUG oslo_concurrency.lockutils [None req-4eb84f0c-1c64-4e03-a60c-4af58eebc66d e07a31bfb9584e07a5620ecf91e88a4b c9b819f443be4cbe852a843ef1c9e3c7 - - default default] Acquiring lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:16:31 np0005548731 nova_compute[232433]: 2025-12-06 08:16:31.534 232437 DEBUG oslo_concurrency.lockutils [None req-4eb84f0c-1c64-4e03-a60c-4af58eebc66d e07a31bfb9584e07a5620ecf91e88a4b c9b819f443be4cbe852a843ef1c9e3c7 - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:16:31 np0005548731 nova_compute[232433]: 2025-12-06 08:16:31.534 232437 DEBUG oslo_concurrency.lockutils [None req-4eb84f0c-1c64-4e03-a60c-4af58eebc66d e07a31bfb9584e07a5620ecf91e88a4b c9b819f443be4cbe852a843ef1c9e3c7 - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:16:31 np0005548731 nova_compute[232433]: 2025-12-06 08:16:31.557 232437 DEBUG nova.storage.rbd_utils [None req-4eb84f0c-1c64-4e03-a60c-4af58eebc66d e07a31bfb9584e07a5620ecf91e88a4b c9b819f443be4cbe852a843ef1c9e3c7 - - default default] rbd image 73127705-004d-4660-88f2-e3758c8f14c0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 03:16:31 np0005548731 nova_compute[232433]: 2025-12-06 08:16:31.560 232437 DEBUG oslo_concurrency.processutils [None req-4eb84f0c-1c64-4e03-a60c-4af58eebc66d e07a31bfb9584e07a5620ecf91e88a4b c9b819f443be4cbe852a843ef1c9e3c7 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef 73127705-004d-4660-88f2-e3758c8f14c0_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 03:16:31 np0005548731 nova_compute[232433]: 2025-12-06 08:16:31.913 232437 DEBUG nova.network.neutron [None req-cd26e520-6f65-4c43-b3cb-5ee0386ded28 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] [instance: 1afa9f23-5514-4eb1-9d88-856a8f5e6744] Successfully created port: f212f53a-4f91-4f42-93d2-4e480ddb53d4 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Dec  6 03:16:31 np0005548731 nova_compute[232433]: 2025-12-06 08:16:31.923 232437 DEBUG oslo_concurrency.processutils [None req-4eb84f0c-1c64-4e03-a60c-4af58eebc66d e07a31bfb9584e07a5620ecf91e88a4b c9b819f443be4cbe852a843ef1c9e3c7 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef 73127705-004d-4660-88f2-e3758c8f14c0_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.363s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 03:16:31 np0005548731 nova_compute[232433]: 2025-12-06 08:16:31.988 232437 DEBUG nova.storage.rbd_utils [None req-4eb84f0c-1c64-4e03-a60c-4af58eebc66d e07a31bfb9584e07a5620ecf91e88a4b c9b819f443be4cbe852a843ef1c9e3c7 - - default default] resizing rbd image 73127705-004d-4660-88f2-e3758c8f14c0_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Dec  6 03:16:32 np0005548731 nova_compute[232433]: 2025-12-06 08:16:32.084 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:16:32 np0005548731 nova_compute[232433]: 2025-12-06 08:16:32.091 232437 DEBUG nova.objects.instance [None req-4eb84f0c-1c64-4e03-a60c-4af58eebc66d e07a31bfb9584e07a5620ecf91e88a4b c9b819f443be4cbe852a843ef1c9e3c7 - - default default] Lazy-loading 'migration_context' on Instance uuid 73127705-004d-4660-88f2-e3758c8f14c0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 03:16:32 np0005548731 nova_compute[232433]: 2025-12-06 08:16:32.110 232437 DEBUG nova.virt.libvirt.driver [None req-4eb84f0c-1c64-4e03-a60c-4af58eebc66d e07a31bfb9584e07a5620ecf91e88a4b c9b819f443be4cbe852a843ef1c9e3c7 - - default default] [instance: 73127705-004d-4660-88f2-e3758c8f14c0] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Dec  6 03:16:32 np0005548731 nova_compute[232433]: 2025-12-06 08:16:32.111 232437 DEBUG nova.virt.libvirt.driver [None req-4eb84f0c-1c64-4e03-a60c-4af58eebc66d e07a31bfb9584e07a5620ecf91e88a4b c9b819f443be4cbe852a843ef1c9e3c7 - - default default] [instance: 73127705-004d-4660-88f2-e3758c8f14c0] Ensure instance console log exists: /var/lib/nova/instances/73127705-004d-4660-88f2-e3758c8f14c0/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec  6 03:16:32 np0005548731 nova_compute[232433]: 2025-12-06 08:16:32.111 232437 DEBUG oslo_concurrency.lockutils [None req-4eb84f0c-1c64-4e03-a60c-4af58eebc66d e07a31bfb9584e07a5620ecf91e88a4b c9b819f443be4cbe852a843ef1c9e3c7 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:16:32 np0005548731 nova_compute[232433]: 2025-12-06 08:16:32.111 232437 DEBUG oslo_concurrency.lockutils [None req-4eb84f0c-1c64-4e03-a60c-4af58eebc66d e07a31bfb9584e07a5620ecf91e88a4b c9b819f443be4cbe852a843ef1c9e3c7 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:16:32 np0005548731 nova_compute[232433]: 2025-12-06 08:16:32.112 232437 DEBUG oslo_concurrency.lockutils [None req-4eb84f0c-1c64-4e03-a60c-4af58eebc66d e07a31bfb9584e07a5620ecf91e88a4b c9b819f443be4cbe852a843ef1c9e3c7 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:16:32 np0005548731 nova_compute[232433]: 2025-12-06 08:16:32.326 232437 DEBUG nova.network.neutron [None req-4eb84f0c-1c64-4e03-a60c-4af58eebc66d e07a31bfb9584e07a5620ecf91e88a4b c9b819f443be4cbe852a843ef1c9e3c7 - - default default] [instance: 73127705-004d-4660-88f2-e3758c8f14c0] Successfully created port: 2e36dbbd-5e69-4b13-9b5d-12b3e57951c1 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Dec  6 03:16:32 np0005548731 nova_compute[232433]: 2025-12-06 08:16:32.680 232437 DEBUG nova.network.neutron [None req-cd26e520-6f65-4c43-b3cb-5ee0386ded28 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] [instance: 1afa9f23-5514-4eb1-9d88-856a8f5e6744] Successfully updated port: f212f53a-4f91-4f42-93d2-4e480ddb53d4 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Dec  6 03:16:32 np0005548731 nova_compute[232433]: 2025-12-06 08:16:32.706 232437 DEBUG oslo_concurrency.lockutils [None req-cd26e520-6f65-4c43-b3cb-5ee0386ded28 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Acquiring lock "refresh_cache-1afa9f23-5514-4eb1-9d88-856a8f5e6744" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 03:16:32 np0005548731 nova_compute[232433]: 2025-12-06 08:16:32.707 232437 DEBUG oslo_concurrency.lockutils [None req-cd26e520-6f65-4c43-b3cb-5ee0386ded28 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Acquired lock "refresh_cache-1afa9f23-5514-4eb1-9d88-856a8f5e6744" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 03:16:32 np0005548731 nova_compute[232433]: 2025-12-06 08:16:32.707 232437 DEBUG nova.network.neutron [None req-cd26e520-6f65-4c43-b3cb-5ee0386ded28 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] [instance: 1afa9f23-5514-4eb1-9d88-856a8f5e6744] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec  6 03:16:32 np0005548731 nova_compute[232433]: 2025-12-06 08:16:32.802 232437 DEBUG nova.compute.manager [req-caec6e17-5d41-48bf-9a09-ad313a23f764 req-28b86ac9-bbdd-4096-a9b1-57f5d48a293f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 1afa9f23-5514-4eb1-9d88-856a8f5e6744] Received event network-changed-f212f53a-4f91-4f42-93d2-4e480ddb53d4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 03:16:32 np0005548731 nova_compute[232433]: 2025-12-06 08:16:32.803 232437 DEBUG nova.compute.manager [req-caec6e17-5d41-48bf-9a09-ad313a23f764 req-28b86ac9-bbdd-4096-a9b1-57f5d48a293f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 1afa9f23-5514-4eb1-9d88-856a8f5e6744] Refreshing instance network info cache due to event network-changed-f212f53a-4f91-4f42-93d2-4e480ddb53d4. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  6 03:16:32 np0005548731 nova_compute[232433]: 2025-12-06 08:16:32.803 232437 DEBUG oslo_concurrency.lockutils [req-caec6e17-5d41-48bf-9a09-ad313a23f764 req-28b86ac9-bbdd-4096-a9b1-57f5d48a293f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-1afa9f23-5514-4eb1-9d88-856a8f5e6744" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 03:16:32 np0005548731 nova_compute[232433]: 2025-12-06 08:16:32.883 232437 DEBUG nova.network.neutron [None req-cd26e520-6f65-4c43-b3cb-5ee0386ded28 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] [instance: 1afa9f23-5514-4eb1-9d88-856a8f5e6744] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec  6 03:16:32 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:16:32 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:16:32 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:16:32.984 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:16:33 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:16:33 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:16:33 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:16:33.163 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:16:33 np0005548731 nova_compute[232433]: 2025-12-06 08:16:33.793 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:16:34 np0005548731 nova_compute[232433]: 2025-12-06 08:16:34.104 232437 DEBUG nova.network.neutron [None req-4eb84f0c-1c64-4e03-a60c-4af58eebc66d e07a31bfb9584e07a5620ecf91e88a4b c9b819f443be4cbe852a843ef1c9e3c7 - - default default] [instance: 73127705-004d-4660-88f2-e3758c8f14c0] Successfully updated port: 2e36dbbd-5e69-4b13-9b5d-12b3e57951c1 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Dec  6 03:16:34 np0005548731 nova_compute[232433]: 2025-12-06 08:16:34.123 232437 DEBUG oslo_concurrency.lockutils [None req-4eb84f0c-1c64-4e03-a60c-4af58eebc66d e07a31bfb9584e07a5620ecf91e88a4b c9b819f443be4cbe852a843ef1c9e3c7 - - default default] Acquiring lock "refresh_cache-73127705-004d-4660-88f2-e3758c8f14c0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 03:16:34 np0005548731 nova_compute[232433]: 2025-12-06 08:16:34.123 232437 DEBUG oslo_concurrency.lockutils [None req-4eb84f0c-1c64-4e03-a60c-4af58eebc66d e07a31bfb9584e07a5620ecf91e88a4b c9b819f443be4cbe852a843ef1c9e3c7 - - default default] Acquired lock "refresh_cache-73127705-004d-4660-88f2-e3758c8f14c0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 03:16:34 np0005548731 nova_compute[232433]: 2025-12-06 08:16:34.123 232437 DEBUG nova.network.neutron [None req-4eb84f0c-1c64-4e03-a60c-4af58eebc66d e07a31bfb9584e07a5620ecf91e88a4b c9b819f443be4cbe852a843ef1c9e3c7 - - default default] [instance: 73127705-004d-4660-88f2-e3758c8f14c0] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec  6 03:16:34 np0005548731 nova_compute[232433]: 2025-12-06 08:16:34.252 232437 DEBUG nova.compute.manager [req-5890210e-4df6-4b42-ab79-550c3c6ca212 req-a2353569-03a6-4670-9190-0279f92dee4a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 73127705-004d-4660-88f2-e3758c8f14c0] Received event network-changed-2e36dbbd-5e69-4b13-9b5d-12b3e57951c1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 03:16:34 np0005548731 nova_compute[232433]: 2025-12-06 08:16:34.252 232437 DEBUG nova.compute.manager [req-5890210e-4df6-4b42-ab79-550c3c6ca212 req-a2353569-03a6-4670-9190-0279f92dee4a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 73127705-004d-4660-88f2-e3758c8f14c0] Refreshing instance network info cache due to event network-changed-2e36dbbd-5e69-4b13-9b5d-12b3e57951c1. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  6 03:16:34 np0005548731 nova_compute[232433]: 2025-12-06 08:16:34.253 232437 DEBUG oslo_concurrency.lockutils [req-5890210e-4df6-4b42-ab79-550c3c6ca212 req-a2353569-03a6-4670-9190-0279f92dee4a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-73127705-004d-4660-88f2-e3758c8f14c0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 03:16:34 np0005548731 nova_compute[232433]: 2025-12-06 08:16:34.423 232437 DEBUG nova.network.neutron [None req-4eb84f0c-1c64-4e03-a60c-4af58eebc66d e07a31bfb9584e07a5620ecf91e88a4b c9b819f443be4cbe852a843ef1c9e3c7 - - default default] [instance: 73127705-004d-4660-88f2-e3758c8f14c0] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec  6 03:16:34 np0005548731 nova_compute[232433]: 2025-12-06 08:16:34.505 232437 DEBUG nova.network.neutron [None req-cd26e520-6f65-4c43-b3cb-5ee0386ded28 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] [instance: 1afa9f23-5514-4eb1-9d88-856a8f5e6744] Updating instance_info_cache with network_info: [{"id": "f212f53a-4f91-4f42-93d2-4e480ddb53d4", "address": "fa:16:3e:a0:81:13", "network": {"id": "b4ef1374-9c77-45a7-8776-50aa60c7d84a", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1664561964-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4b2dc4b8729f446a9c7ac69ca446f71d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf212f53a-4f", "ovs_interfaceid": "f212f53a-4f91-4f42-93d2-4e480ddb53d4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 03:16:34 np0005548731 nova_compute[232433]: 2025-12-06 08:16:34.533 232437 DEBUG oslo_concurrency.lockutils [None req-cd26e520-6f65-4c43-b3cb-5ee0386ded28 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Releasing lock "refresh_cache-1afa9f23-5514-4eb1-9d88-856a8f5e6744" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 03:16:34 np0005548731 nova_compute[232433]: 2025-12-06 08:16:34.533 232437 DEBUG nova.compute.manager [None req-cd26e520-6f65-4c43-b3cb-5ee0386ded28 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] [instance: 1afa9f23-5514-4eb1-9d88-856a8f5e6744] Instance network_info: |[{"id": "f212f53a-4f91-4f42-93d2-4e480ddb53d4", "address": "fa:16:3e:a0:81:13", "network": {"id": "b4ef1374-9c77-45a7-8776-50aa60c7d84a", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1664561964-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4b2dc4b8729f446a9c7ac69ca446f71d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf212f53a-4f", "ovs_interfaceid": "f212f53a-4f91-4f42-93d2-4e480ddb53d4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Dec  6 03:16:34 np0005548731 nova_compute[232433]: 2025-12-06 08:16:34.534 232437 DEBUG oslo_concurrency.lockutils [req-caec6e17-5d41-48bf-9a09-ad313a23f764 req-28b86ac9-bbdd-4096-a9b1-57f5d48a293f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-1afa9f23-5514-4eb1-9d88-856a8f5e6744" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 03:16:34 np0005548731 nova_compute[232433]: 2025-12-06 08:16:34.534 232437 DEBUG nova.network.neutron [req-caec6e17-5d41-48bf-9a09-ad313a23f764 req-28b86ac9-bbdd-4096-a9b1-57f5d48a293f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 1afa9f23-5514-4eb1-9d88-856a8f5e6744] Refreshing network info cache for port f212f53a-4f91-4f42-93d2-4e480ddb53d4 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  6 03:16:34 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:16:34 np0005548731 ceph-mon[77458]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #163. Immutable memtables: 0.
Dec  6 03:16:34 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:16:34.989556) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec  6 03:16:34 np0005548731 ceph-mon[77458]: rocksdb: [db/flush_job.cc:856] [default] [JOB 103] Flushing memtable with next log file: 163
Dec  6 03:16:34 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765008994989619, "job": 103, "event": "flush_started", "num_memtables": 1, "num_entries": 1232, "num_deletes": 251, "total_data_size": 2701414, "memory_usage": 2735592, "flush_reason": "Manual Compaction"}
Dec  6 03:16:34 np0005548731 ceph-mon[77458]: rocksdb: [db/flush_job.cc:885] [default] [JOB 103] Level-0 flush table #164: started
Dec  6 03:16:34 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:16:34 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:16:34.987 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:16:35 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765008995000429, "cf_name": "default", "job": 103, "event": "table_file_creation", "file_number": 164, "file_size": 1075464, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 80597, "largest_seqno": 81824, "table_properties": {"data_size": 1071257, "index_size": 1730, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1413, "raw_key_size": 11354, "raw_average_key_size": 20, "raw_value_size": 1062089, "raw_average_value_size": 1955, "num_data_blocks": 77, "num_entries": 543, "num_filter_entries": 543, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765008891, "oldest_key_time": 1765008891, "file_creation_time": 1765008994, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bc01f9a1-fda8-4008-aee1-1fa5ff4371a1", "db_session_id": "CWBL8WMDM4D79BU86E9T", "orig_file_number": 164, "seqno_to_time_mapping": "N/A"}}
Dec  6 03:16:35 np0005548731 ceph-mon[77458]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 103] Flush lasted 11048 microseconds, and 3489 cpu microseconds.
Dec  6 03:16:35 np0005548731 ceph-mon[77458]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  6 03:16:35 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:16:35.000606) [db/flush_job.cc:967] [default] [JOB 103] Level-0 flush table #164: 1075464 bytes OK
Dec  6 03:16:35 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:16:35.000631) [db/memtable_list.cc:519] [default] Level-0 commit table #164 started
Dec  6 03:16:35 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:16:35.015982) [db/memtable_list.cc:722] [default] Level-0 commit table #164: memtable #1 done
Dec  6 03:16:35 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:16:35.016027) EVENT_LOG_v1 {"time_micros": 1765008995016017, "job": 103, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec  6 03:16:35 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:16:35.016051) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec  6 03:16:35 np0005548731 ceph-mon[77458]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 103] Try to delete WAL files size 2695587, prev total WAL file size 2695587, number of live WAL files 2.
Dec  6 03:16:35 np0005548731 ceph-mon[77458]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000160.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  6 03:16:35 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:16:35.017009) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740032373737' seq:72057594037927935, type:22 .. '6D6772737461740033303239' seq:0, type:0; will stop at (end)
Dec  6 03:16:35 np0005548731 ceph-mon[77458]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 104] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec  6 03:16:35 np0005548731 ceph-mon[77458]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 103 Base level 0, inputs: [164(1050KB)], [162(13MB)]
Dec  6 03:16:35 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765008995017103, "job": 104, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [164], "files_L6": [162], "score": -1, "input_data_size": 15002391, "oldest_snapshot_seqno": -1}
Dec  6 03:16:35 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:16:35 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 03:16:35 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:16:35.165 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 03:16:35 np0005548731 ceph-mon[77458]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 104] Generated table #165: 10948 keys, 11852999 bytes, temperature: kUnknown
Dec  6 03:16:35 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765008995196605, "cf_name": "default", "job": 104, "event": "table_file_creation", "file_number": 165, "file_size": 11852999, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11785969, "index_size": 38565, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 27397, "raw_key_size": 289179, "raw_average_key_size": 26, "raw_value_size": 11597934, "raw_average_value_size": 1059, "num_data_blocks": 1455, "num_entries": 10948, "num_filter_entries": 10948, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765002564, "oldest_key_time": 0, "file_creation_time": 1765008995, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bc01f9a1-fda8-4008-aee1-1fa5ff4371a1", "db_session_id": "CWBL8WMDM4D79BU86E9T", "orig_file_number": 165, "seqno_to_time_mapping": "N/A"}}
Dec  6 03:16:35 np0005548731 ceph-mon[77458]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  6 03:16:35 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:16:35.196860) [db/compaction/compaction_job.cc:1663] [default] [JOB 104] Compacted 1@0 + 1@6 files to L6 => 11852999 bytes
Dec  6 03:16:35 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:16:35.201991) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 83.5 rd, 66.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.0, 13.3 +0.0 blob) out(11.3 +0.0 blob), read-write-amplify(25.0) write-amplify(11.0) OK, records in: 11421, records dropped: 473 output_compression: NoCompression
Dec  6 03:16:35 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:16:35.202029) EVENT_LOG_v1 {"time_micros": 1765008995202015, "job": 104, "event": "compaction_finished", "compaction_time_micros": 179570, "compaction_time_cpu_micros": 31101, "output_level": 6, "num_output_files": 1, "total_output_size": 11852999, "num_input_records": 11421, "num_output_records": 10948, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec  6 03:16:35 np0005548731 ceph-mon[77458]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000164.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  6 03:16:35 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765008995202413, "job": 104, "event": "table_file_deletion", "file_number": 164}
Dec  6 03:16:35 np0005548731 ceph-mon[77458]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000162.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  6 03:16:35 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765008995205170, "job": 104, "event": "table_file_deletion", "file_number": 162}
Dec  6 03:16:35 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:16:35.016950) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 03:16:35 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:16:35.205315) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 03:16:35 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:16:35.205321) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 03:16:35 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:16:35.205323) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 03:16:35 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:16:35.205324) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 03:16:35 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:16:35.205326) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 03:16:35 np0005548731 nova_compute[232433]: 2025-12-06 08:16:35.208 232437 DEBUG nova.network.neutron [None req-4eb84f0c-1c64-4e03-a60c-4af58eebc66d e07a31bfb9584e07a5620ecf91e88a4b c9b819f443be4cbe852a843ef1c9e3c7 - - default default] [instance: 73127705-004d-4660-88f2-e3758c8f14c0] Updating instance_info_cache with network_info: [{"id": "2e36dbbd-5e69-4b13-9b5d-12b3e57951c1", "address": "fa:16:3e:72:c6:6e", "network": {"id": "cf4b9c80-699e-493c-9b5c-f95417bc87b3", "bridge": "br-int", "label": "tempest-network-smoke--1764011039", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c9b819f443be4cbe852a843ef1c9e3c7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2e36dbbd-5e", "ovs_interfaceid": "2e36dbbd-5e69-4b13-9b5d-12b3e57951c1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 03:16:35 np0005548731 nova_compute[232433]: 2025-12-06 08:16:35.229 232437 DEBUG oslo_concurrency.lockutils [None req-4eb84f0c-1c64-4e03-a60c-4af58eebc66d e07a31bfb9584e07a5620ecf91e88a4b c9b819f443be4cbe852a843ef1c9e3c7 - - default default] Releasing lock "refresh_cache-73127705-004d-4660-88f2-e3758c8f14c0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 03:16:35 np0005548731 nova_compute[232433]: 2025-12-06 08:16:35.229 232437 DEBUG nova.compute.manager [None req-4eb84f0c-1c64-4e03-a60c-4af58eebc66d e07a31bfb9584e07a5620ecf91e88a4b c9b819f443be4cbe852a843ef1c9e3c7 - - default default] [instance: 73127705-004d-4660-88f2-e3758c8f14c0] Instance network_info: |[{"id": "2e36dbbd-5e69-4b13-9b5d-12b3e57951c1", "address": "fa:16:3e:72:c6:6e", "network": {"id": "cf4b9c80-699e-493c-9b5c-f95417bc87b3", "bridge": "br-int", "label": "tempest-network-smoke--1764011039", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c9b819f443be4cbe852a843ef1c9e3c7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2e36dbbd-5e", "ovs_interfaceid": "2e36dbbd-5e69-4b13-9b5d-12b3e57951c1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Dec  6 03:16:35 np0005548731 nova_compute[232433]: 2025-12-06 08:16:35.230 232437 DEBUG oslo_concurrency.lockutils [req-5890210e-4df6-4b42-ab79-550c3c6ca212 req-a2353569-03a6-4670-9190-0279f92dee4a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-73127705-004d-4660-88f2-e3758c8f14c0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 03:16:35 np0005548731 nova_compute[232433]: 2025-12-06 08:16:35.230 232437 DEBUG nova.network.neutron [req-5890210e-4df6-4b42-ab79-550c3c6ca212 req-a2353569-03a6-4670-9190-0279f92dee4a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 73127705-004d-4660-88f2-e3758c8f14c0] Refreshing network info cache for port 2e36dbbd-5e69-4b13-9b5d-12b3e57951c1 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  6 03:16:35 np0005548731 nova_compute[232433]: 2025-12-06 08:16:35.233 232437 DEBUG nova.virt.libvirt.driver [None req-4eb84f0c-1c64-4e03-a60c-4af58eebc66d e07a31bfb9584e07a5620ecf91e88a4b c9b819f443be4cbe852a843ef1c9e3c7 - - default default] [instance: 73127705-004d-4660-88f2-e3758c8f14c0] Start _get_guest_xml network_info=[{"id": "2e36dbbd-5e69-4b13-9b5d-12b3e57951c1", "address": "fa:16:3e:72:c6:6e", "network": {"id": "cf4b9c80-699e-493c-9b5c-f95417bc87b3", "bridge": "br-int", "label": "tempest-network-smoke--1764011039", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c9b819f443be4cbe852a843ef1c9e3c7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2e36dbbd-5e", "ovs_interfaceid": "2e36dbbd-5e69-4b13-9b5d-12b3e57951c1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'device_type': 'disk', 'encryption_secret_uuid': None, 'disk_bus': 'virtio', 'boot_index': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'encryption_options': None, 'size': 0, 'encrypted': False, 'image_id': '6efab05d-c7cf-4770-a5c3-c806a2739063'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec  6 03:16:35 np0005548731 nova_compute[232433]: 2025-12-06 08:16:35.236 232437 DEBUG os_brick.utils [None req-cd26e520-6f65-4c43-b3cb-5ee0386ded28 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.102', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-2.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Dec  6 03:16:35 np0005548731 nova_compute[232433]: 2025-12-06 08:16:35.237 237736 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 03:16:35 np0005548731 nova_compute[232433]: 2025-12-06 08:16:35.240 232437 WARNING nova.virt.libvirt.driver [None req-4eb84f0c-1c64-4e03-a60c-4af58eebc66d e07a31bfb9584e07a5620ecf91e88a4b c9b819f443be4cbe852a843ef1c9e3c7 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  6 03:16:35 np0005548731 nova_compute[232433]: 2025-12-06 08:16:35.245 232437 DEBUG nova.virt.libvirt.host [None req-4eb84f0c-1c64-4e03-a60c-4af58eebc66d e07a31bfb9584e07a5620ecf91e88a4b c9b819f443be4cbe852a843ef1c9e3c7 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec  6 03:16:35 np0005548731 nova_compute[232433]: 2025-12-06 08:16:35.245 232437 DEBUG nova.virt.libvirt.host [None req-4eb84f0c-1c64-4e03-a60c-4af58eebc66d e07a31bfb9584e07a5620ecf91e88a4b c9b819f443be4cbe852a843ef1c9e3c7 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec  6 03:16:35 np0005548731 nova_compute[232433]: 2025-12-06 08:16:35.248 237736 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.011s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 03:16:35 np0005548731 nova_compute[232433]: 2025-12-06 08:16:35.248 237736 DEBUG oslo.privsep.daemon [-] privsep: reply[a12c0a3d-9c68-4ef1-a967-bab2b2c0e524]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:16:35 np0005548731 nova_compute[232433]: 2025-12-06 08:16:35.249 237736 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 03:16:35 np0005548731 nova_compute[232433]: 2025-12-06 08:16:35.254 232437 DEBUG nova.virt.libvirt.host [None req-4eb84f0c-1c64-4e03-a60c-4af58eebc66d e07a31bfb9584e07a5620ecf91e88a4b c9b819f443be4cbe852a843ef1c9e3c7 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec  6 03:16:35 np0005548731 nova_compute[232433]: 2025-12-06 08:16:35.255 232437 DEBUG nova.virt.libvirt.host [None req-4eb84f0c-1c64-4e03-a60c-4af58eebc66d e07a31bfb9584e07a5620ecf91e88a4b c9b819f443be4cbe852a843ef1c9e3c7 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec  6 03:16:35 np0005548731 nova_compute[232433]: 2025-12-06 08:16:35.256 232437 DEBUG nova.virt.libvirt.driver [None req-4eb84f0c-1c64-4e03-a60c-4af58eebc66d e07a31bfb9584e07a5620ecf91e88a4b c9b819f443be4cbe852a843ef1c9e3c7 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec  6 03:16:35 np0005548731 nova_compute[232433]: 2025-12-06 08:16:35.257 232437 DEBUG nova.virt.hardware [None req-4eb84f0c-1c64-4e03-a60c-4af58eebc66d e07a31bfb9584e07a5620ecf91e88a4b c9b819f443be4cbe852a843ef1c9e3c7 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-06T06:56:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='25848a18-11d9-4f11-80b5-5d005675c76d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec  6 03:16:35 np0005548731 nova_compute[232433]: 2025-12-06 08:16:35.257 232437 DEBUG nova.virt.hardware [None req-4eb84f0c-1c64-4e03-a60c-4af58eebc66d e07a31bfb9584e07a5620ecf91e88a4b c9b819f443be4cbe852a843ef1c9e3c7 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec  6 03:16:35 np0005548731 nova_compute[232433]: 2025-12-06 08:16:35.257 232437 DEBUG nova.virt.hardware [None req-4eb84f0c-1c64-4e03-a60c-4af58eebc66d e07a31bfb9584e07a5620ecf91e88a4b c9b819f443be4cbe852a843ef1c9e3c7 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec  6 03:16:35 np0005548731 nova_compute[232433]: 2025-12-06 08:16:35.257 232437 DEBUG nova.virt.hardware [None req-4eb84f0c-1c64-4e03-a60c-4af58eebc66d e07a31bfb9584e07a5620ecf91e88a4b c9b819f443be4cbe852a843ef1c9e3c7 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec  6 03:16:35 np0005548731 nova_compute[232433]: 2025-12-06 08:16:35.258 232437 DEBUG nova.virt.hardware [None req-4eb84f0c-1c64-4e03-a60c-4af58eebc66d e07a31bfb9584e07a5620ecf91e88a4b c9b819f443be4cbe852a843ef1c9e3c7 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec  6 03:16:35 np0005548731 nova_compute[232433]: 2025-12-06 08:16:35.258 232437 DEBUG nova.virt.hardware [None req-4eb84f0c-1c64-4e03-a60c-4af58eebc66d e07a31bfb9584e07a5620ecf91e88a4b c9b819f443be4cbe852a843ef1c9e3c7 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec  6 03:16:35 np0005548731 nova_compute[232433]: 2025-12-06 08:16:35.258 232437 DEBUG nova.virt.hardware [None req-4eb84f0c-1c64-4e03-a60c-4af58eebc66d e07a31bfb9584e07a5620ecf91e88a4b c9b819f443be4cbe852a843ef1c9e3c7 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec  6 03:16:35 np0005548731 nova_compute[232433]: 2025-12-06 08:16:35.258 232437 DEBUG nova.virt.hardware [None req-4eb84f0c-1c64-4e03-a60c-4af58eebc66d e07a31bfb9584e07a5620ecf91e88a4b c9b819f443be4cbe852a843ef1c9e3c7 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec  6 03:16:35 np0005548731 nova_compute[232433]: 2025-12-06 08:16:35.258 232437 DEBUG nova.virt.hardware [None req-4eb84f0c-1c64-4e03-a60c-4af58eebc66d e07a31bfb9584e07a5620ecf91e88a4b c9b819f443be4cbe852a843ef1c9e3c7 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec  6 03:16:35 np0005548731 nova_compute[232433]: 2025-12-06 08:16:35.259 232437 DEBUG nova.virt.hardware [None req-4eb84f0c-1c64-4e03-a60c-4af58eebc66d e07a31bfb9584e07a5620ecf91e88a4b c9b819f443be4cbe852a843ef1c9e3c7 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec  6 03:16:35 np0005548731 nova_compute[232433]: 2025-12-06 08:16:35.259 232437 DEBUG nova.virt.hardware [None req-4eb84f0c-1c64-4e03-a60c-4af58eebc66d e07a31bfb9584e07a5620ecf91e88a4b c9b819f443be4cbe852a843ef1c9e3c7 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec  6 03:16:35 np0005548731 nova_compute[232433]: 2025-12-06 08:16:35.262 232437 DEBUG oslo_concurrency.processutils [None req-4eb84f0c-1c64-4e03-a60c-4af58eebc66d e07a31bfb9584e07a5620ecf91e88a4b c9b819f443be4cbe852a843ef1c9e3c7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 03:16:35 np0005548731 nova_compute[232433]: 2025-12-06 08:16:35.257 237736 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 03:16:35 np0005548731 nova_compute[232433]: 2025-12-06 08:16:35.257 237736 DEBUG oslo.privsep.daemon [-] privsep: reply[bdebb988-e7e6-40b1-af75-c80b338a6182]: (4, ('InitiatorName=iqn.1994-05.com.redhat:63778d5959f0', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:16:35 np0005548731 nova_compute[232433]: 2025-12-06 08:16:35.291 237736 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 03:16:35 np0005548731 nova_compute[232433]: 2025-12-06 08:16:35.299 237736 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 03:16:35 np0005548731 nova_compute[232433]: 2025-12-06 08:16:35.299 237736 DEBUG oslo.privsep.daemon [-] privsep: reply[402fc886-4ae2-4131-bea6-2ce1f768eb77]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:16:35 np0005548731 nova_compute[232433]: 2025-12-06 08:16:35.303 237736 DEBUG oslo.privsep.daemon [-] privsep: reply[d6876637-d5c1-430a-9b95-6ee9a6f28c80]: (4, 'ec2492e2-e0d1-43d3-b74f-744a8272a0e6') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:16:35 np0005548731 nova_compute[232433]: 2025-12-06 08:16:35.303 232437 DEBUG oslo_concurrency.processutils [None req-cd26e520-6f65-4c43-b3cb-5ee0386ded28 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 03:16:35 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e417 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:16:35 np0005548731 nova_compute[232433]: 2025-12-06 08:16:35.338 232437 DEBUG oslo_concurrency.processutils [None req-cd26e520-6f65-4c43-b3cb-5ee0386ded28 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] CMD "nvme version" returned: 0 in 0.034s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 03:16:35 np0005548731 nova_compute[232433]: 2025-12-06 08:16:35.340 232437 DEBUG os_brick.initiator.connectors.lightos [None req-cd26e520-6f65-4c43-b3cb-5ee0386ded28 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Dec  6 03:16:35 np0005548731 nova_compute[232433]: 2025-12-06 08:16:35.340 232437 DEBUG os_brick.initiator.connectors.lightos [None req-cd26e520-6f65-4c43-b3cb-5ee0386ded28 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Dec  6 03:16:35 np0005548731 nova_compute[232433]: 2025-12-06 08:16:35.340 232437 DEBUG os_brick.initiator.connectors.lightos [None req-cd26e520-6f65-4c43-b3cb-5ee0386ded28 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:bf3e0a14-a5f8-4123-aa26-e7cad37b879a dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Dec  6 03:16:35 np0005548731 nova_compute[232433]: 2025-12-06 08:16:35.341 232437 DEBUG os_brick.utils [None req-cd26e520-6f65-4c43-b3cb-5ee0386ded28 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] <== get_connector_properties: return (104ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.102', 'host': 'compute-2.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:63778d5959f0', 'do_local_attach': False, 'nvme_hostid': 'bf3e0a14-a5f8-4123-aa26-e7cad37b879a', 'system uuid': 'ec2492e2-e0d1-43d3-b74f-744a8272a0e6', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:bf3e0a14-a5f8-4123-aa26-e7cad37b879a', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Dec  6 03:16:35 np0005548731 nova_compute[232433]: 2025-12-06 08:16:35.341 232437 DEBUG nova.virt.block_device [None req-cd26e520-6f65-4c43-b3cb-5ee0386ded28 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] [instance: 1afa9f23-5514-4eb1-9d88-856a8f5e6744] Updating existing volume attachment record: d2614f9e-fadc-4f11-bbc8-fcaeae80a1cd _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Dec  6 03:16:35 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  6 03:16:35 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3745854789' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  6 03:16:35 np0005548731 nova_compute[232433]: 2025-12-06 08:16:35.694 232437 DEBUG oslo_concurrency.processutils [None req-4eb84f0c-1c64-4e03-a60c-4af58eebc66d e07a31bfb9584e07a5620ecf91e88a4b c9b819f443be4cbe852a843ef1c9e3c7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.433s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 03:16:35 np0005548731 nova_compute[232433]: 2025-12-06 08:16:35.719 232437 DEBUG nova.storage.rbd_utils [None req-4eb84f0c-1c64-4e03-a60c-4af58eebc66d e07a31bfb9584e07a5620ecf91e88a4b c9b819f443be4cbe852a843ef1c9e3c7 - - default default] rbd image 73127705-004d-4660-88f2-e3758c8f14c0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 03:16:35 np0005548731 nova_compute[232433]: 2025-12-06 08:16:35.722 232437 DEBUG oslo_concurrency.processutils [None req-4eb84f0c-1c64-4e03-a60c-4af58eebc66d e07a31bfb9584e07a5620ecf91e88a4b c9b819f443be4cbe852a843ef1c9e3c7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 03:16:36 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  6 03:16:36 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1508888652' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  6 03:16:36 np0005548731 nova_compute[232433]: 2025-12-06 08:16:36.147 232437 DEBUG oslo_concurrency.processutils [None req-4eb84f0c-1c64-4e03-a60c-4af58eebc66d e07a31bfb9584e07a5620ecf91e88a4b c9b819f443be4cbe852a843ef1c9e3c7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.425s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 03:16:36 np0005548731 nova_compute[232433]: 2025-12-06 08:16:36.149 232437 DEBUG nova.virt.libvirt.vif [None req-4eb84f0c-1c64-4e03-a60c-4af58eebc66d e07a31bfb9584e07a5620ecf91e88a4b c9b819f443be4cbe852a843ef1c9e3c7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T08:16:28Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-193167232-access_point-539910206',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-193167232-access_point-539910206',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-193167232-acc',id=207,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHMmZWG/O6PqcvpqRlVFaRu5xBPj4FqN4KXTvc680k96sNfIz4Sh3CFmI/fthirdlUn2yG6bxynVBLf1/e7VCngDUtWmrOk2Wxd8iITW+fjCRZcW6KQpfbYi88t8MRnpgg==',key_name='tempest-TestSecurityGroupsBasicOps-1647941493',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c9b819f443be4cbe852a843ef1c9e3c7',ramdisk_id='',reservation_id='r-zzimchur',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-193167232',owner_user_name='tempest-TestSecurityGroupsBasicOps-193167232-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T08:16:31Z,user_data=None,user_id='e07a31bfb9584e07a5620ecf91e88a4b',uuid=73127705-004d-4660-88f2-e3758c8f14c0,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2e36dbbd-5e69-4b13-9b5d-12b3e57951c1", "address": "fa:16:3e:72:c6:6e", "network": {"id": "cf4b9c80-699e-493c-9b5c-f95417bc87b3", "bridge": "br-int", "label": "tempest-network-smoke--1764011039", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c9b819f443be4cbe852a843ef1c9e3c7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2e36dbbd-5e", "ovs_interfaceid": "2e36dbbd-5e69-4b13-9b5d-12b3e57951c1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec  6 03:16:36 np0005548731 nova_compute[232433]: 2025-12-06 08:16:36.149 232437 DEBUG nova.network.os_vif_util [None req-4eb84f0c-1c64-4e03-a60c-4af58eebc66d e07a31bfb9584e07a5620ecf91e88a4b c9b819f443be4cbe852a843ef1c9e3c7 - - default default] Converting VIF {"id": "2e36dbbd-5e69-4b13-9b5d-12b3e57951c1", "address": "fa:16:3e:72:c6:6e", "network": {"id": "cf4b9c80-699e-493c-9b5c-f95417bc87b3", "bridge": "br-int", "label": "tempest-network-smoke--1764011039", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c9b819f443be4cbe852a843ef1c9e3c7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2e36dbbd-5e", "ovs_interfaceid": "2e36dbbd-5e69-4b13-9b5d-12b3e57951c1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 03:16:36 np0005548731 nova_compute[232433]: 2025-12-06 08:16:36.150 232437 DEBUG nova.network.os_vif_util [None req-4eb84f0c-1c64-4e03-a60c-4af58eebc66d e07a31bfb9584e07a5620ecf91e88a4b c9b819f443be4cbe852a843ef1c9e3c7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:72:c6:6e,bridge_name='br-int',has_traffic_filtering=True,id=2e36dbbd-5e69-4b13-9b5d-12b3e57951c1,network=Network(cf4b9c80-699e-493c-9b5c-f95417bc87b3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2e36dbbd-5e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 03:16:36 np0005548731 nova_compute[232433]: 2025-12-06 08:16:36.151 232437 DEBUG nova.objects.instance [None req-4eb84f0c-1c64-4e03-a60c-4af58eebc66d e07a31bfb9584e07a5620ecf91e88a4b c9b819f443be4cbe852a843ef1c9e3c7 - - default default] Lazy-loading 'pci_devices' on Instance uuid 73127705-004d-4660-88f2-e3758c8f14c0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 03:16:36 np0005548731 nova_compute[232433]: 2025-12-06 08:16:36.168 232437 DEBUG nova.virt.libvirt.driver [None req-4eb84f0c-1c64-4e03-a60c-4af58eebc66d e07a31bfb9584e07a5620ecf91e88a4b c9b819f443be4cbe852a843ef1c9e3c7 - - default default] [instance: 73127705-004d-4660-88f2-e3758c8f14c0] End _get_guest_xml xml=<domain type="kvm">
Dec  6 03:16:36 np0005548731 nova_compute[232433]:  <uuid>73127705-004d-4660-88f2-e3758c8f14c0</uuid>
Dec  6 03:16:36 np0005548731 nova_compute[232433]:  <name>instance-000000cf</name>
Dec  6 03:16:36 np0005548731 nova_compute[232433]:  <memory>131072</memory>
Dec  6 03:16:36 np0005548731 nova_compute[232433]:  <vcpu>1</vcpu>
Dec  6 03:16:36 np0005548731 nova_compute[232433]:  <metadata>
Dec  6 03:16:36 np0005548731 nova_compute[232433]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec  6 03:16:36 np0005548731 nova_compute[232433]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec  6 03:16:36 np0005548731 nova_compute[232433]:      <nova:name>tempest-server-tempest-TestSecurityGroupsBasicOps-193167232-access_point-539910206</nova:name>
Dec  6 03:16:36 np0005548731 nova_compute[232433]:      <nova:creationTime>2025-12-06 08:16:35</nova:creationTime>
Dec  6 03:16:36 np0005548731 nova_compute[232433]:      <nova:flavor name="m1.nano">
Dec  6 03:16:36 np0005548731 nova_compute[232433]:        <nova:memory>128</nova:memory>
Dec  6 03:16:36 np0005548731 nova_compute[232433]:        <nova:disk>1</nova:disk>
Dec  6 03:16:36 np0005548731 nova_compute[232433]:        <nova:swap>0</nova:swap>
Dec  6 03:16:36 np0005548731 nova_compute[232433]:        <nova:ephemeral>0</nova:ephemeral>
Dec  6 03:16:36 np0005548731 nova_compute[232433]:        <nova:vcpus>1</nova:vcpus>
Dec  6 03:16:36 np0005548731 nova_compute[232433]:      </nova:flavor>
Dec  6 03:16:36 np0005548731 nova_compute[232433]:      <nova:owner>
Dec  6 03:16:36 np0005548731 nova_compute[232433]:        <nova:user uuid="e07a31bfb9584e07a5620ecf91e88a4b">tempest-TestSecurityGroupsBasicOps-193167232-project-member</nova:user>
Dec  6 03:16:36 np0005548731 nova_compute[232433]:        <nova:project uuid="c9b819f443be4cbe852a843ef1c9e3c7">tempest-TestSecurityGroupsBasicOps-193167232</nova:project>
Dec  6 03:16:36 np0005548731 nova_compute[232433]:      </nova:owner>
Dec  6 03:16:36 np0005548731 nova_compute[232433]:      <nova:root type="image" uuid="6efab05d-c7cf-4770-a5c3-c806a2739063"/>
Dec  6 03:16:36 np0005548731 nova_compute[232433]:      <nova:ports>
Dec  6 03:16:36 np0005548731 nova_compute[232433]:        <nova:port uuid="2e36dbbd-5e69-4b13-9b5d-12b3e57951c1">
Dec  6 03:16:36 np0005548731 nova_compute[232433]:          <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Dec  6 03:16:36 np0005548731 nova_compute[232433]:        </nova:port>
Dec  6 03:16:36 np0005548731 nova_compute[232433]:      </nova:ports>
Dec  6 03:16:36 np0005548731 nova_compute[232433]:    </nova:instance>
Dec  6 03:16:36 np0005548731 nova_compute[232433]:  </metadata>
Dec  6 03:16:36 np0005548731 nova_compute[232433]:  <sysinfo type="smbios">
Dec  6 03:16:36 np0005548731 nova_compute[232433]:    <system>
Dec  6 03:16:36 np0005548731 nova_compute[232433]:      <entry name="manufacturer">RDO</entry>
Dec  6 03:16:36 np0005548731 nova_compute[232433]:      <entry name="product">OpenStack Compute</entry>
Dec  6 03:16:36 np0005548731 nova_compute[232433]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec  6 03:16:36 np0005548731 nova_compute[232433]:      <entry name="serial">73127705-004d-4660-88f2-e3758c8f14c0</entry>
Dec  6 03:16:36 np0005548731 nova_compute[232433]:      <entry name="uuid">73127705-004d-4660-88f2-e3758c8f14c0</entry>
Dec  6 03:16:36 np0005548731 nova_compute[232433]:      <entry name="family">Virtual Machine</entry>
Dec  6 03:16:36 np0005548731 nova_compute[232433]:    </system>
Dec  6 03:16:36 np0005548731 nova_compute[232433]:  </sysinfo>
Dec  6 03:16:36 np0005548731 nova_compute[232433]:  <os>
Dec  6 03:16:36 np0005548731 nova_compute[232433]:    <type arch="x86_64" machine="q35">hvm</type>
Dec  6 03:16:36 np0005548731 nova_compute[232433]:    <boot dev="hd"/>
Dec  6 03:16:36 np0005548731 nova_compute[232433]:    <smbios mode="sysinfo"/>
Dec  6 03:16:36 np0005548731 nova_compute[232433]:  </os>
Dec  6 03:16:36 np0005548731 nova_compute[232433]:  <features>
Dec  6 03:16:36 np0005548731 nova_compute[232433]:    <acpi/>
Dec  6 03:16:36 np0005548731 nova_compute[232433]:    <apic/>
Dec  6 03:16:36 np0005548731 nova_compute[232433]:    <vmcoreinfo/>
Dec  6 03:16:36 np0005548731 nova_compute[232433]:  </features>
Dec  6 03:16:36 np0005548731 nova_compute[232433]:  <clock offset="utc">
Dec  6 03:16:36 np0005548731 nova_compute[232433]:    <timer name="pit" tickpolicy="delay"/>
Dec  6 03:16:36 np0005548731 nova_compute[232433]:    <timer name="rtc" tickpolicy="catchup"/>
Dec  6 03:16:36 np0005548731 nova_compute[232433]:    <timer name="hpet" present="no"/>
Dec  6 03:16:36 np0005548731 nova_compute[232433]:  </clock>
Dec  6 03:16:36 np0005548731 nova_compute[232433]:  <cpu mode="custom" match="exact">
Dec  6 03:16:36 np0005548731 nova_compute[232433]:    <model>Nehalem</model>
Dec  6 03:16:36 np0005548731 nova_compute[232433]:    <topology sockets="1" cores="1" threads="1"/>
Dec  6 03:16:36 np0005548731 nova_compute[232433]:  </cpu>
Dec  6 03:16:36 np0005548731 nova_compute[232433]:  <devices>
Dec  6 03:16:36 np0005548731 nova_compute[232433]:    <disk type="network" device="disk">
Dec  6 03:16:36 np0005548731 nova_compute[232433]:      <driver type="raw" cache="none"/>
Dec  6 03:16:36 np0005548731 nova_compute[232433]:      <source protocol="rbd" name="vms/73127705-004d-4660-88f2-e3758c8f14c0_disk">
Dec  6 03:16:36 np0005548731 nova_compute[232433]:        <host name="192.168.122.100" port="6789"/>
Dec  6 03:16:36 np0005548731 nova_compute[232433]:        <host name="192.168.122.102" port="6789"/>
Dec  6 03:16:36 np0005548731 nova_compute[232433]:        <host name="192.168.122.101" port="6789"/>
Dec  6 03:16:36 np0005548731 nova_compute[232433]:      </source>
Dec  6 03:16:36 np0005548731 nova_compute[232433]:      <auth username="openstack">
Dec  6 03:16:36 np0005548731 nova_compute[232433]:        <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec  6 03:16:36 np0005548731 nova_compute[232433]:      </auth>
Dec  6 03:16:36 np0005548731 nova_compute[232433]:      <target dev="vda" bus="virtio"/>
Dec  6 03:16:36 np0005548731 nova_compute[232433]:    </disk>
Dec  6 03:16:36 np0005548731 nova_compute[232433]:    <disk type="network" device="cdrom">
Dec  6 03:16:36 np0005548731 nova_compute[232433]:      <driver type="raw" cache="none"/>
Dec  6 03:16:36 np0005548731 nova_compute[232433]:      <source protocol="rbd" name="vms/73127705-004d-4660-88f2-e3758c8f14c0_disk.config">
Dec  6 03:16:36 np0005548731 nova_compute[232433]:        <host name="192.168.122.100" port="6789"/>
Dec  6 03:16:36 np0005548731 nova_compute[232433]:        <host name="192.168.122.102" port="6789"/>
Dec  6 03:16:36 np0005548731 nova_compute[232433]:        <host name="192.168.122.101" port="6789"/>
Dec  6 03:16:36 np0005548731 nova_compute[232433]:      </source>
Dec  6 03:16:36 np0005548731 nova_compute[232433]:      <auth username="openstack">
Dec  6 03:16:36 np0005548731 nova_compute[232433]:        <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec  6 03:16:36 np0005548731 nova_compute[232433]:      </auth>
Dec  6 03:16:36 np0005548731 nova_compute[232433]:      <target dev="sda" bus="sata"/>
Dec  6 03:16:36 np0005548731 nova_compute[232433]:    </disk>
Dec  6 03:16:36 np0005548731 nova_compute[232433]:    <interface type="ethernet">
Dec  6 03:16:36 np0005548731 nova_compute[232433]:      <mac address="fa:16:3e:72:c6:6e"/>
Dec  6 03:16:36 np0005548731 nova_compute[232433]:      <model type="virtio"/>
Dec  6 03:16:36 np0005548731 nova_compute[232433]:      <driver name="vhost" rx_queue_size="512"/>
Dec  6 03:16:36 np0005548731 nova_compute[232433]:      <mtu size="1442"/>
Dec  6 03:16:36 np0005548731 nova_compute[232433]:      <target dev="tap2e36dbbd-5e"/>
Dec  6 03:16:36 np0005548731 nova_compute[232433]:    </interface>
Dec  6 03:16:36 np0005548731 nova_compute[232433]:    <serial type="pty">
Dec  6 03:16:36 np0005548731 nova_compute[232433]:      <log file="/var/lib/nova/instances/73127705-004d-4660-88f2-e3758c8f14c0/console.log" append="off"/>
Dec  6 03:16:36 np0005548731 nova_compute[232433]:    </serial>
Dec  6 03:16:36 np0005548731 nova_compute[232433]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec  6 03:16:36 np0005548731 nova_compute[232433]:    <video>
Dec  6 03:16:36 np0005548731 nova_compute[232433]:      <model type="virtio"/>
Dec  6 03:16:36 np0005548731 nova_compute[232433]:    </video>
Dec  6 03:16:36 np0005548731 nova_compute[232433]:    <input type="tablet" bus="usb"/>
Dec  6 03:16:36 np0005548731 nova_compute[232433]:    <rng model="virtio">
Dec  6 03:16:36 np0005548731 nova_compute[232433]:      <backend model="random">/dev/urandom</backend>
Dec  6 03:16:36 np0005548731 nova_compute[232433]:    </rng>
Dec  6 03:16:36 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root"/>
Dec  6 03:16:36 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:16:36 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:16:36 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:16:36 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:16:36 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:16:36 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:16:36 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:16:36 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:16:36 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:16:36 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:16:36 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:16:36 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:16:36 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:16:36 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:16:36 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:16:36 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:16:36 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:16:36 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:16:36 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:16:36 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:16:36 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:16:36 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:16:36 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:16:36 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:16:36 np0005548731 nova_compute[232433]:    <controller type="usb" index="0"/>
Dec  6 03:16:36 np0005548731 nova_compute[232433]:    <memballoon model="virtio">
Dec  6 03:16:36 np0005548731 nova_compute[232433]:      <stats period="10"/>
Dec  6 03:16:36 np0005548731 nova_compute[232433]:    </memballoon>
Dec  6 03:16:36 np0005548731 nova_compute[232433]:  </devices>
Dec  6 03:16:36 np0005548731 nova_compute[232433]: </domain>
Dec  6 03:16:36 np0005548731 nova_compute[232433]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec  6 03:16:36 np0005548731 nova_compute[232433]: 2025-12-06 08:16:36.169 232437 DEBUG nova.compute.manager [None req-4eb84f0c-1c64-4e03-a60c-4af58eebc66d e07a31bfb9584e07a5620ecf91e88a4b c9b819f443be4cbe852a843ef1c9e3c7 - - default default] [instance: 73127705-004d-4660-88f2-e3758c8f14c0] Preparing to wait for external event network-vif-plugged-2e36dbbd-5e69-4b13-9b5d-12b3e57951c1 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec  6 03:16:36 np0005548731 nova_compute[232433]: 2025-12-06 08:16:36.169 232437 DEBUG oslo_concurrency.lockutils [None req-4eb84f0c-1c64-4e03-a60c-4af58eebc66d e07a31bfb9584e07a5620ecf91e88a4b c9b819f443be4cbe852a843ef1c9e3c7 - - default default] Acquiring lock "73127705-004d-4660-88f2-e3758c8f14c0-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:16:36 np0005548731 nova_compute[232433]: 2025-12-06 08:16:36.170 232437 DEBUG oslo_concurrency.lockutils [None req-4eb84f0c-1c64-4e03-a60c-4af58eebc66d e07a31bfb9584e07a5620ecf91e88a4b c9b819f443be4cbe852a843ef1c9e3c7 - - default default] Lock "73127705-004d-4660-88f2-e3758c8f14c0-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:16:36 np0005548731 nova_compute[232433]: 2025-12-06 08:16:36.170 232437 DEBUG oslo_concurrency.lockutils [None req-4eb84f0c-1c64-4e03-a60c-4af58eebc66d e07a31bfb9584e07a5620ecf91e88a4b c9b819f443be4cbe852a843ef1c9e3c7 - - default default] Lock "73127705-004d-4660-88f2-e3758c8f14c0-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:16:36 np0005548731 nova_compute[232433]: 2025-12-06 08:16:36.170 232437 DEBUG nova.virt.libvirt.vif [None req-4eb84f0c-1c64-4e03-a60c-4af58eebc66d e07a31bfb9584e07a5620ecf91e88a4b c9b819f443be4cbe852a843ef1c9e3c7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T08:16:28Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-193167232-access_point-539910206',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-193167232-access_point-539910206',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-193167232-acc',id=207,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHMmZWG/O6PqcvpqRlVFaRu5xBPj4FqN4KXTvc680k96sNfIz4Sh3CFmI/fthirdlUn2yG6bxynVBLf1/e7VCngDUtWmrOk2Wxd8iITW+fjCRZcW6KQpfbYi88t8MRnpgg==',key_name='tempest-TestSecurityGroupsBasicOps-1647941493',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c9b819f443be4cbe852a843ef1c9e3c7',ramdisk_id='',reservation_id='r-zzimchur',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-193167232',owner_user_name='tempest-TestSecurityGroupsBasicOps-193167232-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T08:16:31Z,user_data=None,user_id='e07a31bfb9584e07a5620ecf91e88a4b',uuid=73127705-004d-4660-88f2-e3758c8f14c0,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2e36dbbd-5e69-4b13-9b5d-12b3e57951c1", "address": "fa:16:3e:72:c6:6e", "network": {"id": "cf4b9c80-699e-493c-9b5c-f95417bc87b3", "bridge": "br-int", "label": "tempest-network-smoke--1764011039", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c9b819f443be4cbe852a843ef1c9e3c7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2e36dbbd-5e", "ovs_interfaceid": "2e36dbbd-5e69-4b13-9b5d-12b3e57951c1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec  6 03:16:36 np0005548731 nova_compute[232433]: 2025-12-06 08:16:36.171 232437 DEBUG nova.network.os_vif_util [None req-4eb84f0c-1c64-4e03-a60c-4af58eebc66d e07a31bfb9584e07a5620ecf91e88a4b c9b819f443be4cbe852a843ef1c9e3c7 - - default default] Converting VIF {"id": "2e36dbbd-5e69-4b13-9b5d-12b3e57951c1", "address": "fa:16:3e:72:c6:6e", "network": {"id": "cf4b9c80-699e-493c-9b5c-f95417bc87b3", "bridge": "br-int", "label": "tempest-network-smoke--1764011039", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c9b819f443be4cbe852a843ef1c9e3c7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2e36dbbd-5e", "ovs_interfaceid": "2e36dbbd-5e69-4b13-9b5d-12b3e57951c1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 03:16:36 np0005548731 nova_compute[232433]: 2025-12-06 08:16:36.171 232437 DEBUG nova.network.os_vif_util [None req-4eb84f0c-1c64-4e03-a60c-4af58eebc66d e07a31bfb9584e07a5620ecf91e88a4b c9b819f443be4cbe852a843ef1c9e3c7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:72:c6:6e,bridge_name='br-int',has_traffic_filtering=True,id=2e36dbbd-5e69-4b13-9b5d-12b3e57951c1,network=Network(cf4b9c80-699e-493c-9b5c-f95417bc87b3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2e36dbbd-5e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 03:16:36 np0005548731 nova_compute[232433]: 2025-12-06 08:16:36.172 232437 DEBUG os_vif [None req-4eb84f0c-1c64-4e03-a60c-4af58eebc66d e07a31bfb9584e07a5620ecf91e88a4b c9b819f443be4cbe852a843ef1c9e3c7 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:72:c6:6e,bridge_name='br-int',has_traffic_filtering=True,id=2e36dbbd-5e69-4b13-9b5d-12b3e57951c1,network=Network(cf4b9c80-699e-493c-9b5c-f95417bc87b3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2e36dbbd-5e') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec  6 03:16:36 np0005548731 nova_compute[232433]: 2025-12-06 08:16:36.172 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:16:36 np0005548731 nova_compute[232433]: 2025-12-06 08:16:36.173 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 03:16:36 np0005548731 nova_compute[232433]: 2025-12-06 08:16:36.173 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  6 03:16:36 np0005548731 nova_compute[232433]: 2025-12-06 08:16:36.176 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:16:36 np0005548731 nova_compute[232433]: 2025-12-06 08:16:36.176 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2e36dbbd-5e, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 03:16:36 np0005548731 nova_compute[232433]: 2025-12-06 08:16:36.176 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap2e36dbbd-5e, col_values=(('external_ids', {'iface-id': '2e36dbbd-5e69-4b13-9b5d-12b3e57951c1', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:72:c6:6e', 'vm-uuid': '73127705-004d-4660-88f2-e3758c8f14c0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 03:16:36 np0005548731 nova_compute[232433]: 2025-12-06 08:16:36.177 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:16:36 np0005548731 NetworkManager[49182]: <info>  [1765008996.1788] manager: (tap2e36dbbd-5e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/496)
Dec  6 03:16:36 np0005548731 nova_compute[232433]: 2025-12-06 08:16:36.180 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  6 03:16:36 np0005548731 nova_compute[232433]: 2025-12-06 08:16:36.183 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:16:36 np0005548731 nova_compute[232433]: 2025-12-06 08:16:36.184 232437 INFO os_vif [None req-4eb84f0c-1c64-4e03-a60c-4af58eebc66d e07a31bfb9584e07a5620ecf91e88a4b c9b819f443be4cbe852a843ef1c9e3c7 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:72:c6:6e,bridge_name='br-int',has_traffic_filtering=True,id=2e36dbbd-5e69-4b13-9b5d-12b3e57951c1,network=Network(cf4b9c80-699e-493c-9b5c-f95417bc87b3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2e36dbbd-5e')#033[00m
Dec  6 03:16:36 np0005548731 nova_compute[232433]: 2025-12-06 08:16:36.353 232437 DEBUG nova.virt.libvirt.driver [None req-4eb84f0c-1c64-4e03-a60c-4af58eebc66d e07a31bfb9584e07a5620ecf91e88a4b c9b819f443be4cbe852a843ef1c9e3c7 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  6 03:16:36 np0005548731 nova_compute[232433]: 2025-12-06 08:16:36.354 232437 DEBUG nova.virt.libvirt.driver [None req-4eb84f0c-1c64-4e03-a60c-4af58eebc66d e07a31bfb9584e07a5620ecf91e88a4b c9b819f443be4cbe852a843ef1c9e3c7 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  6 03:16:36 np0005548731 nova_compute[232433]: 2025-12-06 08:16:36.354 232437 DEBUG nova.virt.libvirt.driver [None req-4eb84f0c-1c64-4e03-a60c-4af58eebc66d e07a31bfb9584e07a5620ecf91e88a4b c9b819f443be4cbe852a843ef1c9e3c7 - - default default] No VIF found with MAC fa:16:3e:72:c6:6e, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec  6 03:16:36 np0005548731 nova_compute[232433]: 2025-12-06 08:16:36.355 232437 INFO nova.virt.libvirt.driver [None req-4eb84f0c-1c64-4e03-a60c-4af58eebc66d e07a31bfb9584e07a5620ecf91e88a4b c9b819f443be4cbe852a843ef1c9e3c7 - - default default] [instance: 73127705-004d-4660-88f2-e3758c8f14c0] Using config drive#033[00m
Dec  6 03:16:36 np0005548731 nova_compute[232433]: 2025-12-06 08:16:36.380 232437 DEBUG nova.storage.rbd_utils [None req-4eb84f0c-1c64-4e03-a60c-4af58eebc66d e07a31bfb9584e07a5620ecf91e88a4b c9b819f443be4cbe852a843ef1c9e3c7 - - default default] rbd image 73127705-004d-4660-88f2-e3758c8f14c0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 03:16:36 np0005548731 nova_compute[232433]: 2025-12-06 08:16:36.404 232437 DEBUG nova.network.neutron [req-caec6e17-5d41-48bf-9a09-ad313a23f764 req-28b86ac9-bbdd-4096-a9b1-57f5d48a293f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 1afa9f23-5514-4eb1-9d88-856a8f5e6744] Updated VIF entry in instance network info cache for port f212f53a-4f91-4f42-93d2-4e480ddb53d4. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  6 03:16:36 np0005548731 nova_compute[232433]: 2025-12-06 08:16:36.405 232437 DEBUG nova.network.neutron [req-caec6e17-5d41-48bf-9a09-ad313a23f764 req-28b86ac9-bbdd-4096-a9b1-57f5d48a293f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 1afa9f23-5514-4eb1-9d88-856a8f5e6744] Updating instance_info_cache with network_info: [{"id": "f212f53a-4f91-4f42-93d2-4e480ddb53d4", "address": "fa:16:3e:a0:81:13", "network": {"id": "b4ef1374-9c77-45a7-8776-50aa60c7d84a", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1664561964-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4b2dc4b8729f446a9c7ac69ca446f71d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf212f53a-4f", "ovs_interfaceid": "f212f53a-4f91-4f42-93d2-4e480ddb53d4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 03:16:36 np0005548731 nova_compute[232433]: 2025-12-06 08:16:36.425 232437 DEBUG oslo_concurrency.lockutils [req-caec6e17-5d41-48bf-9a09-ad313a23f764 req-28b86ac9-bbdd-4096-a9b1-57f5d48a293f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-1afa9f23-5514-4eb1-9d88-856a8f5e6744" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 03:16:36 np0005548731 nova_compute[232433]: 2025-12-06 08:16:36.576 232437 DEBUG nova.compute.manager [None req-cd26e520-6f65-4c43-b3cb-5ee0386ded28 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] [instance: 1afa9f23-5514-4eb1-9d88-856a8f5e6744] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Dec  6 03:16:36 np0005548731 nova_compute[232433]: 2025-12-06 08:16:36.577 232437 DEBUG nova.virt.libvirt.driver [None req-cd26e520-6f65-4c43-b3cb-5ee0386ded28 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] [instance: 1afa9f23-5514-4eb1-9d88-856a8f5e6744] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Dec  6 03:16:36 np0005548731 nova_compute[232433]: 2025-12-06 08:16:36.578 232437 INFO nova.virt.libvirt.driver [None req-cd26e520-6f65-4c43-b3cb-5ee0386ded28 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] [instance: 1afa9f23-5514-4eb1-9d88-856a8f5e6744] Creating image(s)#033[00m
Dec  6 03:16:36 np0005548731 nova_compute[232433]: 2025-12-06 08:16:36.578 232437 DEBUG nova.virt.libvirt.driver [None req-cd26e520-6f65-4c43-b3cb-5ee0386ded28 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] [instance: 1afa9f23-5514-4eb1-9d88-856a8f5e6744] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859#033[00m
Dec  6 03:16:36 np0005548731 nova_compute[232433]: 2025-12-06 08:16:36.578 232437 DEBUG nova.virt.libvirt.driver [None req-cd26e520-6f65-4c43-b3cb-5ee0386ded28 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] [instance: 1afa9f23-5514-4eb1-9d88-856a8f5e6744] Ensure instance console log exists: /var/lib/nova/instances/1afa9f23-5514-4eb1-9d88-856a8f5e6744/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec  6 03:16:36 np0005548731 nova_compute[232433]: 2025-12-06 08:16:36.579 232437 DEBUG oslo_concurrency.lockutils [None req-cd26e520-6f65-4c43-b3cb-5ee0386ded28 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:16:36 np0005548731 nova_compute[232433]: 2025-12-06 08:16:36.579 232437 DEBUG oslo_concurrency.lockutils [None req-cd26e520-6f65-4c43-b3cb-5ee0386ded28 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:16:36 np0005548731 nova_compute[232433]: 2025-12-06 08:16:36.579 232437 DEBUG oslo_concurrency.lockutils [None req-cd26e520-6f65-4c43-b3cb-5ee0386ded28 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:16:36 np0005548731 nova_compute[232433]: 2025-12-06 08:16:36.581 232437 DEBUG nova.virt.libvirt.driver [None req-cd26e520-6f65-4c43-b3cb-5ee0386ded28 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] [instance: 1afa9f23-5514-4eb1-9d88-856a8f5e6744] Start _get_guest_xml network_info=[{"id": "f212f53a-4f91-4f42-93d2-4e480ddb53d4", "address": "fa:16:3e:a0:81:13", "network": {"id": "b4ef1374-9c77-45a7-8776-50aa60c7d84a", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1664561964-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4b2dc4b8729f446a9c7ac69ca446f71d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf212f53a-4f", "ovs_interfaceid": "f212f53a-4f91-4f42-93d2-4e480ddb53d4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': 
'/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'guest_format': None, 'device_type': 'disk', 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-a427e5a0-fe95-4f1e-be90-c6d37b0a00af', 'hosts': ['192.168.122.100', '192.168.122.102', '192.168.122.101'], 'ports': ['6789', '6789', '6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': 'a427e5a0-fe95-4f1e-be90-c6d37b0a00af', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': '1afa9f23-5514-4eb1-9d88-856a8f5e6744', 'attached_at': '', 'detached_at': '', 'volume_id': 'a427e5a0-fe95-4f1e-be90-c6d37b0a00af', 'serial': 'a427e5a0-fe95-4f1e-be90-c6d37b0a00af'}, 'disk_bus': 'virtio', 'boot_index': 0, 'delete_on_termination': True, 'mount_device': '/dev/vda', 'attachment_id': 'd2614f9e-fadc-4f11-bbc8-fcaeae80a1cd', 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec  6 03:16:36 np0005548731 nova_compute[232433]: 2025-12-06 08:16:36.584 232437 WARNING nova.virt.libvirt.driver [None req-cd26e520-6f65-4c43-b3cb-5ee0386ded28 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  6 03:16:36 np0005548731 nova_compute[232433]: 2025-12-06 08:16:36.589 232437 DEBUG nova.virt.libvirt.host [None req-cd26e520-6f65-4c43-b3cb-5ee0386ded28 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec  6 03:16:36 np0005548731 nova_compute[232433]: 2025-12-06 08:16:36.589 232437 DEBUG nova.virt.libvirt.host [None req-cd26e520-6f65-4c43-b3cb-5ee0386ded28 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec  6 03:16:36 np0005548731 nova_compute[232433]: 2025-12-06 08:16:36.592 232437 DEBUG nova.virt.libvirt.host [None req-cd26e520-6f65-4c43-b3cb-5ee0386ded28 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec  6 03:16:36 np0005548731 nova_compute[232433]: 2025-12-06 08:16:36.592 232437 DEBUG nova.virt.libvirt.host [None req-cd26e520-6f65-4c43-b3cb-5ee0386ded28 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec  6 03:16:36 np0005548731 nova_compute[232433]: 2025-12-06 08:16:36.593 232437 DEBUG nova.virt.libvirt.driver [None req-cd26e520-6f65-4c43-b3cb-5ee0386ded28 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec  6 03:16:36 np0005548731 nova_compute[232433]: 2025-12-06 08:16:36.593 232437 DEBUG nova.virt.hardware [None req-cd26e520-6f65-4c43-b3cb-5ee0386ded28 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-06T06:56:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='25848a18-11d9-4f11-80b5-5d005675c76d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec  6 03:16:36 np0005548731 nova_compute[232433]: 2025-12-06 08:16:36.594 232437 DEBUG nova.virt.hardware [None req-cd26e520-6f65-4c43-b3cb-5ee0386ded28 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec  6 03:16:36 np0005548731 nova_compute[232433]: 2025-12-06 08:16:36.594 232437 DEBUG nova.virt.hardware [None req-cd26e520-6f65-4c43-b3cb-5ee0386ded28 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec  6 03:16:36 np0005548731 nova_compute[232433]: 2025-12-06 08:16:36.594 232437 DEBUG nova.virt.hardware [None req-cd26e520-6f65-4c43-b3cb-5ee0386ded28 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec  6 03:16:36 np0005548731 nova_compute[232433]: 2025-12-06 08:16:36.594 232437 DEBUG nova.virt.hardware [None req-cd26e520-6f65-4c43-b3cb-5ee0386ded28 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec  6 03:16:36 np0005548731 nova_compute[232433]: 2025-12-06 08:16:36.594 232437 DEBUG nova.virt.hardware [None req-cd26e520-6f65-4c43-b3cb-5ee0386ded28 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec  6 03:16:36 np0005548731 nova_compute[232433]: 2025-12-06 08:16:36.595 232437 DEBUG nova.virt.hardware [None req-cd26e520-6f65-4c43-b3cb-5ee0386ded28 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec  6 03:16:36 np0005548731 nova_compute[232433]: 2025-12-06 08:16:36.595 232437 DEBUG nova.virt.hardware [None req-cd26e520-6f65-4c43-b3cb-5ee0386ded28 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec  6 03:16:36 np0005548731 nova_compute[232433]: 2025-12-06 08:16:36.595 232437 DEBUG nova.virt.hardware [None req-cd26e520-6f65-4c43-b3cb-5ee0386ded28 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec  6 03:16:36 np0005548731 nova_compute[232433]: 2025-12-06 08:16:36.595 232437 DEBUG nova.virt.hardware [None req-cd26e520-6f65-4c43-b3cb-5ee0386ded28 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec  6 03:16:36 np0005548731 nova_compute[232433]: 2025-12-06 08:16:36.595 232437 DEBUG nova.virt.hardware [None req-cd26e520-6f65-4c43-b3cb-5ee0386ded28 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec  6 03:16:36 np0005548731 nova_compute[232433]: 2025-12-06 08:16:36.963 232437 DEBUG nova.storage.rbd_utils [None req-cd26e520-6f65-4c43-b3cb-5ee0386ded28 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] rbd image 1afa9f23-5514-4eb1-9d88-856a8f5e6744_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 03:16:36 np0005548731 nova_compute[232433]: 2025-12-06 08:16:36.967 232437 DEBUG oslo_concurrency.processutils [None req-cd26e520-6f65-4c43-b3cb-5ee0386ded28 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 03:16:36 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:16:36 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:16:36 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:16:36.989 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:16:37 np0005548731 nova_compute[232433]: 2025-12-06 08:16:37.045 232437 INFO nova.virt.libvirt.driver [None req-4eb84f0c-1c64-4e03-a60c-4af58eebc66d e07a31bfb9584e07a5620ecf91e88a4b c9b819f443be4cbe852a843ef1c9e3c7 - - default default] [instance: 73127705-004d-4660-88f2-e3758c8f14c0] Creating config drive at /var/lib/nova/instances/73127705-004d-4660-88f2-e3758c8f14c0/disk.config#033[00m
Dec  6 03:16:37 np0005548731 nova_compute[232433]: 2025-12-06 08:16:37.050 232437 DEBUG oslo_concurrency.processutils [None req-4eb84f0c-1c64-4e03-a60c-4af58eebc66d e07a31bfb9584e07a5620ecf91e88a4b c9b819f443be4cbe852a843ef1c9e3c7 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/73127705-004d-4660-88f2-e3758c8f14c0/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp1kbpkmjd execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 03:16:37 np0005548731 nova_compute[232433]: 2025-12-06 08:16:37.081 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:16:37 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:16:37 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:16:37 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:16:37.168 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:16:37 np0005548731 nova_compute[232433]: 2025-12-06 08:16:37.185 232437 DEBUG oslo_concurrency.processutils [None req-4eb84f0c-1c64-4e03-a60c-4af58eebc66d e07a31bfb9584e07a5620ecf91e88a4b c9b819f443be4cbe852a843ef1c9e3c7 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/73127705-004d-4660-88f2-e3758c8f14c0/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp1kbpkmjd" returned: 0 in 0.135s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 03:16:37 np0005548731 nova_compute[232433]: 2025-12-06 08:16:37.214 232437 DEBUG nova.storage.rbd_utils [None req-4eb84f0c-1c64-4e03-a60c-4af58eebc66d e07a31bfb9584e07a5620ecf91e88a4b c9b819f443be4cbe852a843ef1c9e3c7 - - default default] rbd image 73127705-004d-4660-88f2-e3758c8f14c0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 03:16:37 np0005548731 nova_compute[232433]: 2025-12-06 08:16:37.217 232437 DEBUG oslo_concurrency.processutils [None req-4eb84f0c-1c64-4e03-a60c-4af58eebc66d e07a31bfb9584e07a5620ecf91e88a4b c9b819f443be4cbe852a843ef1c9e3c7 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/73127705-004d-4660-88f2-e3758c8f14c0/disk.config 73127705-004d-4660-88f2-e3758c8f14c0_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 03:16:37 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  6 03:16:37 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3235342965' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  6 03:16:37 np0005548731 nova_compute[232433]: 2025-12-06 08:16:37.389 232437 DEBUG oslo_concurrency.processutils [None req-cd26e520-6f65-4c43-b3cb-5ee0386ded28 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.422s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 03:16:37 np0005548731 nova_compute[232433]: 2025-12-06 08:16:37.438 232437 DEBUG nova.virt.libvirt.vif [None req-cd26e520-6f65-4c43-b3cb-5ee0386ded28 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T08:16:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-server-270380989',display_name='tempest-TestVolumeBootPattern-server-270380989',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-testvolumebootpattern-server-270380989',id=206,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4b2dc4b8729f446a9c7ac69ca446f71d',ramdisk_id='',reservation_id='r-81jjc4kq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestVolumeBootPattern-97496240',owner_user_name='tempest-TestVolumeBootPattern-97496240-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=Non
e,updated_at=2025-12-06T08:16:30Z,user_data=None,user_id='8e8feb4540af4e2caa45a88a9202dbe2',uuid=1afa9f23-5514-4eb1-9d88-856a8f5e6744,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f212f53a-4f91-4f42-93d2-4e480ddb53d4", "address": "fa:16:3e:a0:81:13", "network": {"id": "b4ef1374-9c77-45a7-8776-50aa60c7d84a", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1664561964-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4b2dc4b8729f446a9c7ac69ca446f71d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf212f53a-4f", "ovs_interfaceid": "f212f53a-4f91-4f42-93d2-4e480ddb53d4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec  6 03:16:37 np0005548731 nova_compute[232433]: 2025-12-06 08:16:37.439 232437 DEBUG nova.network.os_vif_util [None req-cd26e520-6f65-4c43-b3cb-5ee0386ded28 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Converting VIF {"id": "f212f53a-4f91-4f42-93d2-4e480ddb53d4", "address": "fa:16:3e:a0:81:13", "network": {"id": "b4ef1374-9c77-45a7-8776-50aa60c7d84a", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1664561964-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4b2dc4b8729f446a9c7ac69ca446f71d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf212f53a-4f", "ovs_interfaceid": "f212f53a-4f91-4f42-93d2-4e480ddb53d4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 03:16:37 np0005548731 nova_compute[232433]: 2025-12-06 08:16:37.440 232437 DEBUG nova.network.os_vif_util [None req-cd26e520-6f65-4c43-b3cb-5ee0386ded28 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a0:81:13,bridge_name='br-int',has_traffic_filtering=True,id=f212f53a-4f91-4f42-93d2-4e480ddb53d4,network=Network(b4ef1374-9c77-45a7-8776-50aa60c7d84a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf212f53a-4f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 03:16:37 np0005548731 nova_compute[232433]: 2025-12-06 08:16:37.441 232437 DEBUG nova.objects.instance [None req-cd26e520-6f65-4c43-b3cb-5ee0386ded28 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Lazy-loading 'pci_devices' on Instance uuid 1afa9f23-5514-4eb1-9d88-856a8f5e6744 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 03:16:37 np0005548731 nova_compute[232433]: 2025-12-06 08:16:37.458 232437 DEBUG nova.virt.libvirt.driver [None req-cd26e520-6f65-4c43-b3cb-5ee0386ded28 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] [instance: 1afa9f23-5514-4eb1-9d88-856a8f5e6744] End _get_guest_xml xml=<domain type="kvm">
Dec  6 03:16:37 np0005548731 nova_compute[232433]:  <uuid>1afa9f23-5514-4eb1-9d88-856a8f5e6744</uuid>
Dec  6 03:16:37 np0005548731 nova_compute[232433]:  <name>instance-000000ce</name>
Dec  6 03:16:37 np0005548731 nova_compute[232433]:  <memory>131072</memory>
Dec  6 03:16:37 np0005548731 nova_compute[232433]:  <vcpu>1</vcpu>
Dec  6 03:16:37 np0005548731 nova_compute[232433]:  <metadata>
Dec  6 03:16:37 np0005548731 nova_compute[232433]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec  6 03:16:37 np0005548731 nova_compute[232433]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec  6 03:16:37 np0005548731 nova_compute[232433]:      <nova:name>tempest-TestVolumeBootPattern-server-270380989</nova:name>
Dec  6 03:16:37 np0005548731 nova_compute[232433]:      <nova:creationTime>2025-12-06 08:16:36</nova:creationTime>
Dec  6 03:16:37 np0005548731 nova_compute[232433]:      <nova:flavor name="m1.nano">
Dec  6 03:16:37 np0005548731 nova_compute[232433]:        <nova:memory>128</nova:memory>
Dec  6 03:16:37 np0005548731 nova_compute[232433]:        <nova:disk>1</nova:disk>
Dec  6 03:16:37 np0005548731 nova_compute[232433]:        <nova:swap>0</nova:swap>
Dec  6 03:16:37 np0005548731 nova_compute[232433]:        <nova:ephemeral>0</nova:ephemeral>
Dec  6 03:16:37 np0005548731 nova_compute[232433]:        <nova:vcpus>1</nova:vcpus>
Dec  6 03:16:37 np0005548731 nova_compute[232433]:      </nova:flavor>
Dec  6 03:16:37 np0005548731 nova_compute[232433]:      <nova:owner>
Dec  6 03:16:37 np0005548731 nova_compute[232433]:        <nova:user uuid="8e8feb4540af4e2caa45a88a9202dbe2">tempest-TestVolumeBootPattern-97496240-project-member</nova:user>
Dec  6 03:16:37 np0005548731 nova_compute[232433]:        <nova:project uuid="4b2dc4b8729f446a9c7ac69ca446f71d">tempest-TestVolumeBootPattern-97496240</nova:project>
Dec  6 03:16:37 np0005548731 nova_compute[232433]:      </nova:owner>
Dec  6 03:16:37 np0005548731 nova_compute[232433]:      <nova:ports>
Dec  6 03:16:37 np0005548731 nova_compute[232433]:        <nova:port uuid="f212f53a-4f91-4f42-93d2-4e480ddb53d4">
Dec  6 03:16:37 np0005548731 nova_compute[232433]:          <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Dec  6 03:16:37 np0005548731 nova_compute[232433]:        </nova:port>
Dec  6 03:16:37 np0005548731 nova_compute[232433]:      </nova:ports>
Dec  6 03:16:37 np0005548731 nova_compute[232433]:    </nova:instance>
Dec  6 03:16:37 np0005548731 nova_compute[232433]:  </metadata>
Dec  6 03:16:37 np0005548731 nova_compute[232433]:  <sysinfo type="smbios">
Dec  6 03:16:37 np0005548731 nova_compute[232433]:    <system>
Dec  6 03:16:37 np0005548731 nova_compute[232433]:      <entry name="manufacturer">RDO</entry>
Dec  6 03:16:37 np0005548731 nova_compute[232433]:      <entry name="product">OpenStack Compute</entry>
Dec  6 03:16:37 np0005548731 nova_compute[232433]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec  6 03:16:37 np0005548731 nova_compute[232433]:      <entry name="serial">1afa9f23-5514-4eb1-9d88-856a8f5e6744</entry>
Dec  6 03:16:37 np0005548731 nova_compute[232433]:      <entry name="uuid">1afa9f23-5514-4eb1-9d88-856a8f5e6744</entry>
Dec  6 03:16:37 np0005548731 nova_compute[232433]:      <entry name="family">Virtual Machine</entry>
Dec  6 03:16:37 np0005548731 nova_compute[232433]:    </system>
Dec  6 03:16:37 np0005548731 nova_compute[232433]:  </sysinfo>
Dec  6 03:16:37 np0005548731 nova_compute[232433]:  <os>
Dec  6 03:16:37 np0005548731 nova_compute[232433]:    <type arch="x86_64" machine="q35">hvm</type>
Dec  6 03:16:37 np0005548731 nova_compute[232433]:    <boot dev="hd"/>
Dec  6 03:16:37 np0005548731 nova_compute[232433]:    <smbios mode="sysinfo"/>
Dec  6 03:16:37 np0005548731 nova_compute[232433]:  </os>
Dec  6 03:16:37 np0005548731 nova_compute[232433]:  <features>
Dec  6 03:16:37 np0005548731 nova_compute[232433]:    <acpi/>
Dec  6 03:16:37 np0005548731 nova_compute[232433]:    <apic/>
Dec  6 03:16:37 np0005548731 nova_compute[232433]:    <vmcoreinfo/>
Dec  6 03:16:37 np0005548731 nova_compute[232433]:  </features>
Dec  6 03:16:37 np0005548731 nova_compute[232433]:  <clock offset="utc">
Dec  6 03:16:37 np0005548731 nova_compute[232433]:    <timer name="pit" tickpolicy="delay"/>
Dec  6 03:16:37 np0005548731 nova_compute[232433]:    <timer name="rtc" tickpolicy="catchup"/>
Dec  6 03:16:37 np0005548731 nova_compute[232433]:    <timer name="hpet" present="no"/>
Dec  6 03:16:37 np0005548731 nova_compute[232433]:  </clock>
Dec  6 03:16:37 np0005548731 nova_compute[232433]:  <cpu mode="custom" match="exact">
Dec  6 03:16:37 np0005548731 nova_compute[232433]:    <model>Nehalem</model>
Dec  6 03:16:37 np0005548731 nova_compute[232433]:    <topology sockets="1" cores="1" threads="1"/>
Dec  6 03:16:37 np0005548731 nova_compute[232433]:  </cpu>
Dec  6 03:16:37 np0005548731 nova_compute[232433]:  <devices>
Dec  6 03:16:37 np0005548731 nova_compute[232433]:    <disk type="network" device="cdrom">
Dec  6 03:16:37 np0005548731 nova_compute[232433]:      <driver type="raw" cache="none"/>
Dec  6 03:16:37 np0005548731 nova_compute[232433]:      <source protocol="rbd" name="vms/1afa9f23-5514-4eb1-9d88-856a8f5e6744_disk.config">
Dec  6 03:16:37 np0005548731 nova_compute[232433]:        <host name="192.168.122.100" port="6789"/>
Dec  6 03:16:37 np0005548731 nova_compute[232433]:        <host name="192.168.122.102" port="6789"/>
Dec  6 03:16:37 np0005548731 nova_compute[232433]:        <host name="192.168.122.101" port="6789"/>
Dec  6 03:16:37 np0005548731 nova_compute[232433]:      </source>
Dec  6 03:16:37 np0005548731 nova_compute[232433]:      <auth username="openstack">
Dec  6 03:16:37 np0005548731 nova_compute[232433]:        <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec  6 03:16:37 np0005548731 nova_compute[232433]:      </auth>
Dec  6 03:16:37 np0005548731 nova_compute[232433]:      <target dev="sda" bus="sata"/>
Dec  6 03:16:37 np0005548731 nova_compute[232433]:    </disk>
Dec  6 03:16:37 np0005548731 nova_compute[232433]:    <disk type="network" device="disk">
Dec  6 03:16:37 np0005548731 nova_compute[232433]:      <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Dec  6 03:16:37 np0005548731 nova_compute[232433]:      <source protocol="rbd" name="volumes/volume-a427e5a0-fe95-4f1e-be90-c6d37b0a00af">
Dec  6 03:16:37 np0005548731 nova_compute[232433]:        <host name="192.168.122.100" port="6789"/>
Dec  6 03:16:37 np0005548731 nova_compute[232433]:        <host name="192.168.122.102" port="6789"/>
Dec  6 03:16:37 np0005548731 nova_compute[232433]:        <host name="192.168.122.101" port="6789"/>
Dec  6 03:16:37 np0005548731 nova_compute[232433]:      </source>
Dec  6 03:16:37 np0005548731 nova_compute[232433]:      <auth username="openstack">
Dec  6 03:16:37 np0005548731 nova_compute[232433]:        <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec  6 03:16:37 np0005548731 nova_compute[232433]:      </auth>
Dec  6 03:16:37 np0005548731 nova_compute[232433]:      <target dev="vda" bus="virtio"/>
Dec  6 03:16:37 np0005548731 nova_compute[232433]:      <serial>a427e5a0-fe95-4f1e-be90-c6d37b0a00af</serial>
Dec  6 03:16:37 np0005548731 nova_compute[232433]:    </disk>
Dec  6 03:16:37 np0005548731 nova_compute[232433]:    <interface type="ethernet">
Dec  6 03:16:37 np0005548731 nova_compute[232433]:      <mac address="fa:16:3e:a0:81:13"/>
Dec  6 03:16:37 np0005548731 nova_compute[232433]:      <model type="virtio"/>
Dec  6 03:16:37 np0005548731 nova_compute[232433]:      <driver name="vhost" rx_queue_size="512"/>
Dec  6 03:16:37 np0005548731 nova_compute[232433]:      <mtu size="1442"/>
Dec  6 03:16:37 np0005548731 nova_compute[232433]:      <target dev="tapf212f53a-4f"/>
Dec  6 03:16:37 np0005548731 nova_compute[232433]:    </interface>
Dec  6 03:16:37 np0005548731 nova_compute[232433]:    <serial type="pty">
Dec  6 03:16:37 np0005548731 nova_compute[232433]:      <log file="/var/lib/nova/instances/1afa9f23-5514-4eb1-9d88-856a8f5e6744/console.log" append="off"/>
Dec  6 03:16:37 np0005548731 nova_compute[232433]:    </serial>
Dec  6 03:16:37 np0005548731 nova_compute[232433]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec  6 03:16:37 np0005548731 nova_compute[232433]:    <video>
Dec  6 03:16:37 np0005548731 nova_compute[232433]:      <model type="virtio"/>
Dec  6 03:16:37 np0005548731 nova_compute[232433]:    </video>
Dec  6 03:16:37 np0005548731 nova_compute[232433]:    <input type="tablet" bus="usb"/>
Dec  6 03:16:37 np0005548731 nova_compute[232433]:    <rng model="virtio">
Dec  6 03:16:37 np0005548731 nova_compute[232433]:      <backend model="random">/dev/urandom</backend>
Dec  6 03:16:37 np0005548731 nova_compute[232433]:    </rng>
Dec  6 03:16:37 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root"/>
Dec  6 03:16:37 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:16:37 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:16:37 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:16:37 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:16:37 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:16:37 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:16:37 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:16:37 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:16:37 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:16:37 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:16:37 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:16:37 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:16:37 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:16:37 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:16:37 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:16:37 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:16:37 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:16:37 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:16:37 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:16:37 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:16:37 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:16:37 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:16:37 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:16:37 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:16:37 np0005548731 nova_compute[232433]:    <controller type="usb" index="0"/>
Dec  6 03:16:37 np0005548731 nova_compute[232433]:    <memballoon model="virtio">
Dec  6 03:16:37 np0005548731 nova_compute[232433]:      <stats period="10"/>
Dec  6 03:16:37 np0005548731 nova_compute[232433]:    </memballoon>
Dec  6 03:16:37 np0005548731 nova_compute[232433]:  </devices>
Dec  6 03:16:37 np0005548731 nova_compute[232433]: </domain>
Dec  6 03:16:37 np0005548731 nova_compute[232433]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec  6 03:16:37 np0005548731 nova_compute[232433]: 2025-12-06 08:16:37.459 232437 DEBUG nova.compute.manager [None req-cd26e520-6f65-4c43-b3cb-5ee0386ded28 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] [instance: 1afa9f23-5514-4eb1-9d88-856a8f5e6744] Preparing to wait for external event network-vif-plugged-f212f53a-4f91-4f42-93d2-4e480ddb53d4 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec  6 03:16:37 np0005548731 nova_compute[232433]: 2025-12-06 08:16:37.460 232437 DEBUG oslo_concurrency.lockutils [None req-cd26e520-6f65-4c43-b3cb-5ee0386ded28 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Acquiring lock "1afa9f23-5514-4eb1-9d88-856a8f5e6744-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:16:37 np0005548731 nova_compute[232433]: 2025-12-06 08:16:37.460 232437 DEBUG oslo_concurrency.lockutils [None req-cd26e520-6f65-4c43-b3cb-5ee0386ded28 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Lock "1afa9f23-5514-4eb1-9d88-856a8f5e6744-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:16:37 np0005548731 nova_compute[232433]: 2025-12-06 08:16:37.460 232437 DEBUG oslo_concurrency.lockutils [None req-cd26e520-6f65-4c43-b3cb-5ee0386ded28 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Lock "1afa9f23-5514-4eb1-9d88-856a8f5e6744-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:16:37 np0005548731 nova_compute[232433]: 2025-12-06 08:16:37.461 232437 DEBUG nova.virt.libvirt.vif [None req-cd26e520-6f65-4c43-b3cb-5ee0386ded28 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T08:16:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-server-270380989',display_name='tempest-TestVolumeBootPattern-server-270380989',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-testvolumebootpattern-server-270380989',id=206,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4b2dc4b8729f446a9c7ac69ca446f71d',ramdisk_id='',reservation_id='r-81jjc4kq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestVolumeBootPattern-97496240',owner_user_name='tempest-TestVolumeBootPattern-97496240-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted
_certs=None,updated_at=2025-12-06T08:16:30Z,user_data=None,user_id='8e8feb4540af4e2caa45a88a9202dbe2',uuid=1afa9f23-5514-4eb1-9d88-856a8f5e6744,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f212f53a-4f91-4f42-93d2-4e480ddb53d4", "address": "fa:16:3e:a0:81:13", "network": {"id": "b4ef1374-9c77-45a7-8776-50aa60c7d84a", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1664561964-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4b2dc4b8729f446a9c7ac69ca446f71d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf212f53a-4f", "ovs_interfaceid": "f212f53a-4f91-4f42-93d2-4e480ddb53d4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec  6 03:16:37 np0005548731 nova_compute[232433]: 2025-12-06 08:16:37.461 232437 DEBUG nova.network.os_vif_util [None req-cd26e520-6f65-4c43-b3cb-5ee0386ded28 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Converting VIF {"id": "f212f53a-4f91-4f42-93d2-4e480ddb53d4", "address": "fa:16:3e:a0:81:13", "network": {"id": "b4ef1374-9c77-45a7-8776-50aa60c7d84a", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1664561964-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4b2dc4b8729f446a9c7ac69ca446f71d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf212f53a-4f", "ovs_interfaceid": "f212f53a-4f91-4f42-93d2-4e480ddb53d4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 03:16:37 np0005548731 nova_compute[232433]: 2025-12-06 08:16:37.462 232437 DEBUG nova.network.os_vif_util [None req-cd26e520-6f65-4c43-b3cb-5ee0386ded28 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a0:81:13,bridge_name='br-int',has_traffic_filtering=True,id=f212f53a-4f91-4f42-93d2-4e480ddb53d4,network=Network(b4ef1374-9c77-45a7-8776-50aa60c7d84a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf212f53a-4f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 03:16:37 np0005548731 nova_compute[232433]: 2025-12-06 08:16:37.462 232437 DEBUG os_vif [None req-cd26e520-6f65-4c43-b3cb-5ee0386ded28 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a0:81:13,bridge_name='br-int',has_traffic_filtering=True,id=f212f53a-4f91-4f42-93d2-4e480ddb53d4,network=Network(b4ef1374-9c77-45a7-8776-50aa60c7d84a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf212f53a-4f') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec  6 03:16:37 np0005548731 nova_compute[232433]: 2025-12-06 08:16:37.463 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:16:37 np0005548731 nova_compute[232433]: 2025-12-06 08:16:37.463 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 03:16:37 np0005548731 nova_compute[232433]: 2025-12-06 08:16:37.464 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  6 03:16:37 np0005548731 nova_compute[232433]: 2025-12-06 08:16:37.466 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:16:37 np0005548731 nova_compute[232433]: 2025-12-06 08:16:37.466 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf212f53a-4f, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 03:16:37 np0005548731 nova_compute[232433]: 2025-12-06 08:16:37.466 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapf212f53a-4f, col_values=(('external_ids', {'iface-id': 'f212f53a-4f91-4f42-93d2-4e480ddb53d4', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:a0:81:13', 'vm-uuid': '1afa9f23-5514-4eb1-9d88-856a8f5e6744'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 03:16:37 np0005548731 nova_compute[232433]: 2025-12-06 08:16:37.468 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:16:37 np0005548731 NetworkManager[49182]: <info>  [1765008997.4687] manager: (tapf212f53a-4f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/497)
Dec  6 03:16:37 np0005548731 nova_compute[232433]: 2025-12-06 08:16:37.470 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  6 03:16:37 np0005548731 nova_compute[232433]: 2025-12-06 08:16:37.474 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:16:37 np0005548731 nova_compute[232433]: 2025-12-06 08:16:37.474 232437 INFO os_vif [None req-cd26e520-6f65-4c43-b3cb-5ee0386ded28 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a0:81:13,bridge_name='br-int',has_traffic_filtering=True,id=f212f53a-4f91-4f42-93d2-4e480ddb53d4,network=Network(b4ef1374-9c77-45a7-8776-50aa60c7d84a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf212f53a-4f')#033[00m
Dec  6 03:16:37 np0005548731 nova_compute[232433]: 2025-12-06 08:16:37.538 232437 DEBUG nova.network.neutron [req-5890210e-4df6-4b42-ab79-550c3c6ca212 req-a2353569-03a6-4670-9190-0279f92dee4a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 73127705-004d-4660-88f2-e3758c8f14c0] Updated VIF entry in instance network info cache for port 2e36dbbd-5e69-4b13-9b5d-12b3e57951c1. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  6 03:16:37 np0005548731 nova_compute[232433]: 2025-12-06 08:16:37.538 232437 DEBUG nova.network.neutron [req-5890210e-4df6-4b42-ab79-550c3c6ca212 req-a2353569-03a6-4670-9190-0279f92dee4a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 73127705-004d-4660-88f2-e3758c8f14c0] Updating instance_info_cache with network_info: [{"id": "2e36dbbd-5e69-4b13-9b5d-12b3e57951c1", "address": "fa:16:3e:72:c6:6e", "network": {"id": "cf4b9c80-699e-493c-9b5c-f95417bc87b3", "bridge": "br-int", "label": "tempest-network-smoke--1764011039", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c9b819f443be4cbe852a843ef1c9e3c7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2e36dbbd-5e", "ovs_interfaceid": "2e36dbbd-5e69-4b13-9b5d-12b3e57951c1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 03:16:37 np0005548731 nova_compute[232433]: 2025-12-06 08:16:37.554 232437 DEBUG oslo_concurrency.lockutils [req-5890210e-4df6-4b42-ab79-550c3c6ca212 req-a2353569-03a6-4670-9190-0279f92dee4a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-73127705-004d-4660-88f2-e3758c8f14c0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 03:16:37 np0005548731 nova_compute[232433]: 2025-12-06 08:16:37.803 232437 DEBUG nova.virt.libvirt.driver [None req-cd26e520-6f65-4c43-b3cb-5ee0386ded28 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  6 03:16:37 np0005548731 nova_compute[232433]: 2025-12-06 08:16:37.804 232437 DEBUG nova.virt.libvirt.driver [None req-cd26e520-6f65-4c43-b3cb-5ee0386ded28 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  6 03:16:37 np0005548731 nova_compute[232433]: 2025-12-06 08:16:37.804 232437 DEBUG nova.virt.libvirt.driver [None req-cd26e520-6f65-4c43-b3cb-5ee0386ded28 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] No VIF found with MAC fa:16:3e:a0:81:13, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec  6 03:16:37 np0005548731 nova_compute[232433]: 2025-12-06 08:16:37.804 232437 INFO nova.virt.libvirt.driver [None req-cd26e520-6f65-4c43-b3cb-5ee0386ded28 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] [instance: 1afa9f23-5514-4eb1-9d88-856a8f5e6744] Using config drive#033[00m
Dec  6 03:16:38 np0005548731 nova_compute[232433]: 2025-12-06 08:16:38.126 232437 DEBUG nova.storage.rbd_utils [None req-cd26e520-6f65-4c43-b3cb-5ee0386ded28 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] rbd image 1afa9f23-5514-4eb1-9d88-856a8f5e6744_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 03:16:38 np0005548731 nova_compute[232433]: 2025-12-06 08:16:38.417 232437 DEBUG oslo_concurrency.processutils [None req-4eb84f0c-1c64-4e03-a60c-4af58eebc66d e07a31bfb9584e07a5620ecf91e88a4b c9b819f443be4cbe852a843ef1c9e3c7 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/73127705-004d-4660-88f2-e3758c8f14c0/disk.config 73127705-004d-4660-88f2-e3758c8f14c0_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.200s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 03:16:38 np0005548731 nova_compute[232433]: 2025-12-06 08:16:38.417 232437 INFO nova.virt.libvirt.driver [None req-4eb84f0c-1c64-4e03-a60c-4af58eebc66d e07a31bfb9584e07a5620ecf91e88a4b c9b819f443be4cbe852a843ef1c9e3c7 - - default default] [instance: 73127705-004d-4660-88f2-e3758c8f14c0] Deleting local config drive /var/lib/nova/instances/73127705-004d-4660-88f2-e3758c8f14c0/disk.config because it was imported into RBD.#033[00m
Dec  6 03:16:38 np0005548731 nova_compute[232433]: 2025-12-06 08:16:38.445 232437 INFO nova.virt.libvirt.driver [None req-cd26e520-6f65-4c43-b3cb-5ee0386ded28 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] [instance: 1afa9f23-5514-4eb1-9d88-856a8f5e6744] Creating config drive at /var/lib/nova/instances/1afa9f23-5514-4eb1-9d88-856a8f5e6744/disk.config#033[00m
Dec  6 03:16:38 np0005548731 nova_compute[232433]: 2025-12-06 08:16:38.451 232437 DEBUG oslo_concurrency.processutils [None req-cd26e520-6f65-4c43-b3cb-5ee0386ded28 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/1afa9f23-5514-4eb1-9d88-856a8f5e6744/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp5t3wwcjz execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 03:16:38 np0005548731 kernel: tap2e36dbbd-5e: entered promiscuous mode
Dec  6 03:16:38 np0005548731 NetworkManager[49182]: <info>  [1765008998.4711] manager: (tap2e36dbbd-5e): new Tun device (/org/freedesktop/NetworkManager/Devices/498)
Dec  6 03:16:38 np0005548731 nova_compute[232433]: 2025-12-06 08:16:38.477 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:16:38 np0005548731 ovn_controller[133927]: 2025-12-06T08:16:38Z|01053|binding|INFO|Claiming lport 2e36dbbd-5e69-4b13-9b5d-12b3e57951c1 for this chassis.
Dec  6 03:16:38 np0005548731 ovn_controller[133927]: 2025-12-06T08:16:38Z|01054|binding|INFO|2e36dbbd-5e69-4b13-9b5d-12b3e57951c1: Claiming fa:16:3e:72:c6:6e 10.100.0.9
Dec  6 03:16:38 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:16:38.486 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:72:c6:6e 10.100.0.9'], port_security=['fa:16:3e:72:c6:6e 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '73127705-004d-4660-88f2-e3758c8f14c0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-cf4b9c80-699e-493c-9b5c-f95417bc87b3', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c9b819f443be4cbe852a843ef1c9e3c7', 'neutron:revision_number': '2', 'neutron:security_group_ids': '8eb9efa9-fc9a-4511-8391-9ec775bfc17f bb40e599-4106-4c20-b561-62f2212db0c9', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c077d174-55fe-4d5a-bef2-34f4c94a958a, chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], logical_port=2e36dbbd-5e69-4b13-9b5d-12b3e57951c1) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 03:16:38 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:16:38.488 143965 INFO neutron.agent.ovn.metadata.agent [-] Port 2e36dbbd-5e69-4b13-9b5d-12b3e57951c1 in datapath cf4b9c80-699e-493c-9b5c-f95417bc87b3 bound to our chassis#033[00m
Dec  6 03:16:38 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:16:38.490 143965 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network cf4b9c80-699e-493c-9b5c-f95417bc87b3#033[00m
Dec  6 03:16:38 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:16:38.500 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[ca46a281-26fd-4937-8e1f-d895542d31a3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:16:38 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:16:38.502 143965 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapcf4b9c80-61 in ovnmeta-cf4b9c80-699e-493c-9b5c-f95417bc87b3 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Dec  6 03:16:38 np0005548731 systemd-udevd[337725]: Network interface NamePolicy= disabled on kernel command line.
Dec  6 03:16:38 np0005548731 systemd-machined[195355]: New machine qemu-107-instance-000000cf.
Dec  6 03:16:38 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:16:38.507 236668 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapcf4b9c80-60 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Dec  6 03:16:38 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:16:38.507 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[da893332-a556-43d2-b798-851345127f20]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:16:38 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:16:38.509 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[5529e4df-d753-47ac-8f35-4f604f2e84d6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:16:38 np0005548731 NetworkManager[49182]: <info>  [1765008998.5169] device (tap2e36dbbd-5e): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec  6 03:16:38 np0005548731 NetworkManager[49182]: <info>  [1765008998.5181] device (tap2e36dbbd-5e): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec  6 03:16:38 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:16:38.522 144079 DEBUG oslo.privsep.daemon [-] privsep: reply[3cdb6c8f-aa5f-465b-b8e2-dba4af0e4d9b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:16:38 np0005548731 systemd[1]: Started Virtual Machine qemu-107-instance-000000cf.
Dec  6 03:16:38 np0005548731 nova_compute[232433]: 2025-12-06 08:16:38.543 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:16:38 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:16:38.547 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[62937d59-a0fe-4dc9-a4fc-70c9245554fd]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:16:38 np0005548731 ovn_controller[133927]: 2025-12-06T08:16:38Z|01055|binding|INFO|Setting lport 2e36dbbd-5e69-4b13-9b5d-12b3e57951c1 ovn-installed in OVS
Dec  6 03:16:38 np0005548731 ovn_controller[133927]: 2025-12-06T08:16:38Z|01056|binding|INFO|Setting lport 2e36dbbd-5e69-4b13-9b5d-12b3e57951c1 up in Southbound
Dec  6 03:16:38 np0005548731 nova_compute[232433]: 2025-12-06 08:16:38.555 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:16:38 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:16:38.576 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[7893c611-a414-4dbf-9a40-4c8a14809345]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:16:38 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:16:38.581 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[c3b5bd54-e349-4ade-8de6-4e58b8cccc2d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:16:38 np0005548731 NetworkManager[49182]: <info>  [1765008998.5835] manager: (tapcf4b9c80-60): new Veth device (/org/freedesktop/NetworkManager/Devices/499)
Dec  6 03:16:38 np0005548731 nova_compute[232433]: 2025-12-06 08:16:38.587 232437 DEBUG oslo_concurrency.processutils [None req-cd26e520-6f65-4c43-b3cb-5ee0386ded28 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/1afa9f23-5514-4eb1-9d88-856a8f5e6744/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp5t3wwcjz" returned: 0 in 0.136s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 03:16:38 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:16:38.613 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[f21292e0-64b7-4761-ba43-8be825e1a22d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:16:38 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:16:38.616 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[26c6f650-872b-400f-86d2-1c8d858a65c6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:16:38 np0005548731 nova_compute[232433]: 2025-12-06 08:16:38.624 232437 DEBUG nova.storage.rbd_utils [None req-cd26e520-6f65-4c43-b3cb-5ee0386ded28 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] rbd image 1afa9f23-5514-4eb1-9d88-856a8f5e6744_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 03:16:38 np0005548731 nova_compute[232433]: 2025-12-06 08:16:38.628 232437 DEBUG oslo_concurrency.processutils [None req-cd26e520-6f65-4c43-b3cb-5ee0386ded28 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/1afa9f23-5514-4eb1-9d88-856a8f5e6744/disk.config 1afa9f23-5514-4eb1-9d88-856a8f5e6744_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 03:16:38 np0005548731 NetworkManager[49182]: <info>  [1765008998.6397] device (tapcf4b9c80-60): carrier: link connected
Dec  6 03:16:38 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:16:38.646 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[65b155f2-6d73-4483-a0a4-d8498ad4201f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:16:38 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:16:38.662 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[701cd70d-56e3-4fd9-9a13-a50eb62f81eb]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapcf4b9c80-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:57:fd:c2'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 323], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 926064, 'reachable_time': 17823, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 337781, 'error': None, 'target': 'ovnmeta-cf4b9c80-699e-493c-9b5c-f95417bc87b3', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:16:38 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:16:38.677 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[2f4a5989-7da6-4e36-aebc-888fed7ee64b]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe57:fdc2'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 926064, 'tstamp': 926064}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 337782, 'error': None, 'target': 'ovnmeta-cf4b9c80-699e-493c-9b5c-f95417bc87b3', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:16:38 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:16:38.693 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[bf2f4bd7-7803-4a15-8193-d4152d4a8bce]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapcf4b9c80-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:57:fd:c2'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 323], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 926064, 'reachable_time': 17823, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 337783, 'error': None, 'target': 'ovnmeta-cf4b9c80-699e-493c-9b5c-f95417bc87b3', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:16:38 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:16:38.729 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[299c89c4-1d1e-4070-a58c-33aee20c6fbc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:16:38 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:16:38.792 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[7f18b558-e900-457f-a864-96f36cce404d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:16:38 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:16:38.793 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapcf4b9c80-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 03:16:38 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:16:38.793 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  6 03:16:38 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:16:38.794 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapcf4b9c80-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 03:16:38 np0005548731 NetworkManager[49182]: <info>  [1765008998.7958] manager: (tapcf4b9c80-60): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/500)
Dec  6 03:16:38 np0005548731 nova_compute[232433]: 2025-12-06 08:16:38.796 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:16:38 np0005548731 kernel: tapcf4b9c80-60: entered promiscuous mode
Dec  6 03:16:38 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:16:38.801 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapcf4b9c80-60, col_values=(('external_ids', {'iface-id': '88e0123a-9776-4b2d-beec-18a6ac7f9e65'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 03:16:38 np0005548731 nova_compute[232433]: 2025-12-06 08:16:38.800 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:16:38 np0005548731 ovn_controller[133927]: 2025-12-06T08:16:38Z|01057|binding|INFO|Releasing lport 88e0123a-9776-4b2d-beec-18a6ac7f9e65 from this chassis (sb_readonly=0)
Dec  6 03:16:38 np0005548731 nova_compute[232433]: 2025-12-06 08:16:38.816 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:16:38 np0005548731 nova_compute[232433]: 2025-12-06 08:16:38.819 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:16:38 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:16:38.820 143965 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/cf4b9c80-699e-493c-9b5c-f95417bc87b3.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/cf4b9c80-699e-493c-9b5c-f95417bc87b3.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Dec  6 03:16:38 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:16:38.821 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[28c0cabf-c189-4137-b246-04cd03a85eec]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:16:38 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:16:38.822 143965 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec  6 03:16:38 np0005548731 ovn_metadata_agent[143960]: global
Dec  6 03:16:38 np0005548731 ovn_metadata_agent[143960]:    log         /dev/log local0 debug
Dec  6 03:16:38 np0005548731 ovn_metadata_agent[143960]:    log-tag     haproxy-metadata-proxy-cf4b9c80-699e-493c-9b5c-f95417bc87b3
Dec  6 03:16:38 np0005548731 ovn_metadata_agent[143960]:    user        root
Dec  6 03:16:38 np0005548731 ovn_metadata_agent[143960]:    group       root
Dec  6 03:16:38 np0005548731 ovn_metadata_agent[143960]:    maxconn     1024
Dec  6 03:16:38 np0005548731 ovn_metadata_agent[143960]:    pidfile     /var/lib/neutron/external/pids/cf4b9c80-699e-493c-9b5c-f95417bc87b3.pid.haproxy
Dec  6 03:16:38 np0005548731 ovn_metadata_agent[143960]:    daemon
Dec  6 03:16:38 np0005548731 ovn_metadata_agent[143960]: 
Dec  6 03:16:38 np0005548731 ovn_metadata_agent[143960]: defaults
Dec  6 03:16:38 np0005548731 ovn_metadata_agent[143960]:    log global
Dec  6 03:16:38 np0005548731 ovn_metadata_agent[143960]:    mode http
Dec  6 03:16:38 np0005548731 ovn_metadata_agent[143960]:    option httplog
Dec  6 03:16:38 np0005548731 ovn_metadata_agent[143960]:    option dontlognull
Dec  6 03:16:38 np0005548731 ovn_metadata_agent[143960]:    option http-server-close
Dec  6 03:16:38 np0005548731 ovn_metadata_agent[143960]:    option forwardfor
Dec  6 03:16:38 np0005548731 ovn_metadata_agent[143960]:    retries                 3
Dec  6 03:16:38 np0005548731 ovn_metadata_agent[143960]:    timeout http-request    30s
Dec  6 03:16:38 np0005548731 ovn_metadata_agent[143960]:    timeout connect         30s
Dec  6 03:16:38 np0005548731 ovn_metadata_agent[143960]:    timeout client          32s
Dec  6 03:16:38 np0005548731 ovn_metadata_agent[143960]:    timeout server          32s
Dec  6 03:16:38 np0005548731 ovn_metadata_agent[143960]:    timeout http-keep-alive 30s
Dec  6 03:16:38 np0005548731 ovn_metadata_agent[143960]: 
Dec  6 03:16:38 np0005548731 ovn_metadata_agent[143960]: 
Dec  6 03:16:38 np0005548731 ovn_metadata_agent[143960]: listen listener
Dec  6 03:16:38 np0005548731 ovn_metadata_agent[143960]:    bind 169.254.169.254:80
Dec  6 03:16:38 np0005548731 ovn_metadata_agent[143960]:    server metadata /var/lib/neutron/metadata_proxy
Dec  6 03:16:38 np0005548731 ovn_metadata_agent[143960]:    http-request add-header X-OVN-Network-ID cf4b9c80-699e-493c-9b5c-f95417bc87b3
Dec  6 03:16:38 np0005548731 ovn_metadata_agent[143960]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Dec  6 03:16:38 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:16:38.822 143965 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-cf4b9c80-699e-493c-9b5c-f95417bc87b3', 'env', 'PROCESS_TAG=haproxy-cf4b9c80-699e-493c-9b5c-f95417bc87b3', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/cf4b9c80-699e-493c-9b5c-f95417bc87b3.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Dec  6 03:16:38 np0005548731 nova_compute[232433]: 2025-12-06 08:16:38.853 232437 DEBUG oslo_concurrency.processutils [None req-cd26e520-6f65-4c43-b3cb-5ee0386ded28 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/1afa9f23-5514-4eb1-9d88-856a8f5e6744/disk.config 1afa9f23-5514-4eb1-9d88-856a8f5e6744_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.225s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 03:16:38 np0005548731 nova_compute[232433]: 2025-12-06 08:16:38.854 232437 INFO nova.virt.libvirt.driver [None req-cd26e520-6f65-4c43-b3cb-5ee0386ded28 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] [instance: 1afa9f23-5514-4eb1-9d88-856a8f5e6744] Deleting local config drive /var/lib/nova/instances/1afa9f23-5514-4eb1-9d88-856a8f5e6744/disk.config because it was imported into RBD.#033[00m
Dec  6 03:16:38 np0005548731 kernel: tapf212f53a-4f: entered promiscuous mode
Dec  6 03:16:38 np0005548731 systemd-udevd[337757]: Network interface NamePolicy= disabled on kernel command line.
Dec  6 03:16:38 np0005548731 NetworkManager[49182]: <info>  [1765008998.9016] manager: (tapf212f53a-4f): new Tun device (/org/freedesktop/NetworkManager/Devices/501)
Dec  6 03:16:38 np0005548731 ovn_controller[133927]: 2025-12-06T08:16:38Z|01058|binding|INFO|Claiming lport f212f53a-4f91-4f42-93d2-4e480ddb53d4 for this chassis.
Dec  6 03:16:38 np0005548731 ovn_controller[133927]: 2025-12-06T08:16:38Z|01059|binding|INFO|f212f53a-4f91-4f42-93d2-4e480ddb53d4: Claiming fa:16:3e:a0:81:13 10.100.0.7
Dec  6 03:16:38 np0005548731 nova_compute[232433]: 2025-12-06 08:16:38.902 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:16:38 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:16:38.908 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a0:81:13 10.100.0.7'], port_security=['fa:16:3e:a0:81:13 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '1afa9f23-5514-4eb1-9d88-856a8f5e6744', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b4ef1374-9c77-45a7-8776-50aa60c7d84a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4b2dc4b8729f446a9c7ac69ca446f71d', 'neutron:revision_number': '2', 'neutron:security_group_ids': '0d5802d0-c57b-4fcd-911a-7dedb970b91e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=60eec70d-8996-4225-9077-6d0f2705560a, chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], logical_port=f212f53a-4f91-4f42-93d2-4e480ddb53d4) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 03:16:38 np0005548731 NetworkManager[49182]: <info>  [1765008998.9124] device (tapf212f53a-4f): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec  6 03:16:38 np0005548731 NetworkManager[49182]: <info>  [1765008998.9142] device (tapf212f53a-4f): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec  6 03:16:38 np0005548731 ovn_controller[133927]: 2025-12-06T08:16:38Z|01060|binding|INFO|Setting lport f212f53a-4f91-4f42-93d2-4e480ddb53d4 ovn-installed in OVS
Dec  6 03:16:38 np0005548731 ovn_controller[133927]: 2025-12-06T08:16:38Z|01061|binding|INFO|Setting lport f212f53a-4f91-4f42-93d2-4e480ddb53d4 up in Southbound
Dec  6 03:16:38 np0005548731 nova_compute[232433]: 2025-12-06 08:16:38.917 232437 DEBUG nova.virt.driver [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Emitting event <LifecycleEvent: 1765008998.9166417, 73127705-004d-4660-88f2-e3758c8f14c0 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 03:16:38 np0005548731 nova_compute[232433]: 2025-12-06 08:16:38.917 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 73127705-004d-4660-88f2-e3758c8f14c0] VM Started (Lifecycle Event)#033[00m
Dec  6 03:16:38 np0005548731 nova_compute[232433]: 2025-12-06 08:16:38.919 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:16:38 np0005548731 nova_compute[232433]: 2025-12-06 08:16:38.941 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 73127705-004d-4660-88f2-e3758c8f14c0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 03:16:38 np0005548731 nova_compute[232433]: 2025-12-06 08:16:38.948 232437 DEBUG nova.virt.driver [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Emitting event <LifecycleEvent: 1765008998.9168906, 73127705-004d-4660-88f2-e3758c8f14c0 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 03:16:38 np0005548731 nova_compute[232433]: 2025-12-06 08:16:38.949 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 73127705-004d-4660-88f2-e3758c8f14c0] VM Paused (Lifecycle Event)#033[00m
Dec  6 03:16:38 np0005548731 nova_compute[232433]: 2025-12-06 08:16:38.970 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 73127705-004d-4660-88f2-e3758c8f14c0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 03:16:38 np0005548731 systemd-machined[195355]: New machine qemu-108-instance-000000ce.
Dec  6 03:16:38 np0005548731 systemd[1]: Started Virtual Machine qemu-108-instance-000000ce.
Dec  6 03:16:38 np0005548731 nova_compute[232433]: 2025-12-06 08:16:38.981 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 73127705-004d-4660-88f2-e3758c8f14c0] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  6 03:16:38 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:16:38 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:16:38 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:16:38.991 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:16:39 np0005548731 nova_compute[232433]: 2025-12-06 08:16:39.006 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 73127705-004d-4660-88f2-e3758c8f14c0] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  6 03:16:39 np0005548731 podman[337869]: 2025-12-06 08:16:39.02542925 +0000 UTC m=+0.083324840 container health_status 26c2ce9bdb83c6db5ccad7f5b2a7cb4e9354ab0235ad2f942ea42c8feda2142b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, 
org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Dec  6 03:16:39 np0005548731 podman[337870]: 2025-12-06 08:16:39.025577144 +0000 UTC m=+0.081990038 container health_status ae4fd08cdec31d6ae2a6b91ffff7deb6ca5a8375cb52ee4b685cc6f25796824e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec  6 03:16:39 np0005548731 podman[337884]: 2025-12-06 08:16:39.071599325 +0000 UTC m=+0.083850183 container health_status a118c3becf56cfbcae208065771ebf10a65162bb7b96389971293e534823fb78 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Dec  6 03:16:39 np0005548731 nova_compute[232433]: 2025-12-06 08:16:39.111 232437 DEBUG nova.compute.manager [req-37fd5bc7-6472-4081-afd2-b1a9514a06d5 req-319f768e-9344-402e-8cbc-595b0326fe53 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 73127705-004d-4660-88f2-e3758c8f14c0] Received event network-vif-plugged-2e36dbbd-5e69-4b13-9b5d-12b3e57951c1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 03:16:39 np0005548731 nova_compute[232433]: 2025-12-06 08:16:39.111 232437 DEBUG oslo_concurrency.lockutils [req-37fd5bc7-6472-4081-afd2-b1a9514a06d5 req-319f768e-9344-402e-8cbc-595b0326fe53 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "73127705-004d-4660-88f2-e3758c8f14c0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:16:39 np0005548731 nova_compute[232433]: 2025-12-06 08:16:39.111 232437 DEBUG oslo_concurrency.lockutils [req-37fd5bc7-6472-4081-afd2-b1a9514a06d5 req-319f768e-9344-402e-8cbc-595b0326fe53 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "73127705-004d-4660-88f2-e3758c8f14c0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:16:39 np0005548731 nova_compute[232433]: 2025-12-06 08:16:39.112 232437 DEBUG oslo_concurrency.lockutils [req-37fd5bc7-6472-4081-afd2-b1a9514a06d5 req-319f768e-9344-402e-8cbc-595b0326fe53 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "73127705-004d-4660-88f2-e3758c8f14c0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:16:39 np0005548731 nova_compute[232433]: 2025-12-06 08:16:39.112 232437 DEBUG nova.compute.manager [req-37fd5bc7-6472-4081-afd2-b1a9514a06d5 req-319f768e-9344-402e-8cbc-595b0326fe53 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 73127705-004d-4660-88f2-e3758c8f14c0] Processing event network-vif-plugged-2e36dbbd-5e69-4b13-9b5d-12b3e57951c1 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Dec  6 03:16:39 np0005548731 nova_compute[232433]: 2025-12-06 08:16:39.113 232437 DEBUG nova.compute.manager [None req-4eb84f0c-1c64-4e03-a60c-4af58eebc66d e07a31bfb9584e07a5620ecf91e88a4b c9b819f443be4cbe852a843ef1c9e3c7 - - default default] [instance: 73127705-004d-4660-88f2-e3758c8f14c0] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec  6 03:16:39 np0005548731 nova_compute[232433]: 2025-12-06 08:16:39.117 232437 DEBUG nova.virt.driver [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Emitting event <LifecycleEvent: 1765008999.1173005, 73127705-004d-4660-88f2-e3758c8f14c0 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 03:16:39 np0005548731 nova_compute[232433]: 2025-12-06 08:16:39.117 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 73127705-004d-4660-88f2-e3758c8f14c0] VM Resumed (Lifecycle Event)#033[00m
Dec  6 03:16:39 np0005548731 nova_compute[232433]: 2025-12-06 08:16:39.128 232437 DEBUG nova.virt.libvirt.driver [None req-4eb84f0c-1c64-4e03-a60c-4af58eebc66d e07a31bfb9584e07a5620ecf91e88a4b c9b819f443be4cbe852a843ef1c9e3c7 - - default default] [instance: 73127705-004d-4660-88f2-e3758c8f14c0] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Dec  6 03:16:39 np0005548731 nova_compute[232433]: 2025-12-06 08:16:39.132 232437 DEBUG nova.compute.manager [req-3a80e8f2-6c87-45d3-a639-ab4a432b94c4 req-24d2774d-132d-4dff-9d6d-d36cbca20224 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 1afa9f23-5514-4eb1-9d88-856a8f5e6744] Received event network-vif-plugged-f212f53a-4f91-4f42-93d2-4e480ddb53d4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 03:16:39 np0005548731 nova_compute[232433]: 2025-12-06 08:16:39.133 232437 DEBUG oslo_concurrency.lockutils [req-3a80e8f2-6c87-45d3-a639-ab4a432b94c4 req-24d2774d-132d-4dff-9d6d-d36cbca20224 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "1afa9f23-5514-4eb1-9d88-856a8f5e6744-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:16:39 np0005548731 nova_compute[232433]: 2025-12-06 08:16:39.133 232437 DEBUG oslo_concurrency.lockutils [req-3a80e8f2-6c87-45d3-a639-ab4a432b94c4 req-24d2774d-132d-4dff-9d6d-d36cbca20224 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "1afa9f23-5514-4eb1-9d88-856a8f5e6744-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:16:39 np0005548731 nova_compute[232433]: 2025-12-06 08:16:39.133 232437 DEBUG oslo_concurrency.lockutils [req-3a80e8f2-6c87-45d3-a639-ab4a432b94c4 req-24d2774d-132d-4dff-9d6d-d36cbca20224 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "1afa9f23-5514-4eb1-9d88-856a8f5e6744-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:16:39 np0005548731 nova_compute[232433]: 2025-12-06 08:16:39.134 232437 DEBUG nova.compute.manager [req-3a80e8f2-6c87-45d3-a639-ab4a432b94c4 req-24d2774d-132d-4dff-9d6d-d36cbca20224 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 1afa9f23-5514-4eb1-9d88-856a8f5e6744] Processing event network-vif-plugged-f212f53a-4f91-4f42-93d2-4e480ddb53d4 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Dec  6 03:16:39 np0005548731 nova_compute[232433]: 2025-12-06 08:16:39.135 232437 INFO nova.virt.libvirt.driver [-] [instance: 73127705-004d-4660-88f2-e3758c8f14c0] Instance spawned successfully.#033[00m
Dec  6 03:16:39 np0005548731 nova_compute[232433]: 2025-12-06 08:16:39.135 232437 DEBUG nova.virt.libvirt.driver [None req-4eb84f0c-1c64-4e03-a60c-4af58eebc66d e07a31bfb9584e07a5620ecf91e88a4b c9b819f443be4cbe852a843ef1c9e3c7 - - default default] [instance: 73127705-004d-4660-88f2-e3758c8f14c0] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Dec  6 03:16:39 np0005548731 nova_compute[232433]: 2025-12-06 08:16:39.140 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 73127705-004d-4660-88f2-e3758c8f14c0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 03:16:39 np0005548731 nova_compute[232433]: 2025-12-06 08:16:39.144 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 73127705-004d-4660-88f2-e3758c8f14c0] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  6 03:16:39 np0005548731 nova_compute[232433]: 2025-12-06 08:16:39.154 232437 DEBUG nova.virt.libvirt.driver [None req-4eb84f0c-1c64-4e03-a60c-4af58eebc66d e07a31bfb9584e07a5620ecf91e88a4b c9b819f443be4cbe852a843ef1c9e3c7 - - default default] [instance: 73127705-004d-4660-88f2-e3758c8f14c0] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 03:16:39 np0005548731 nova_compute[232433]: 2025-12-06 08:16:39.155 232437 DEBUG nova.virt.libvirt.driver [None req-4eb84f0c-1c64-4e03-a60c-4af58eebc66d e07a31bfb9584e07a5620ecf91e88a4b c9b819f443be4cbe852a843ef1c9e3c7 - - default default] [instance: 73127705-004d-4660-88f2-e3758c8f14c0] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 03:16:39 np0005548731 nova_compute[232433]: 2025-12-06 08:16:39.156 232437 DEBUG nova.virt.libvirt.driver [None req-4eb84f0c-1c64-4e03-a60c-4af58eebc66d e07a31bfb9584e07a5620ecf91e88a4b c9b819f443be4cbe852a843ef1c9e3c7 - - default default] [instance: 73127705-004d-4660-88f2-e3758c8f14c0] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 03:16:39 np0005548731 nova_compute[232433]: 2025-12-06 08:16:39.156 232437 DEBUG nova.virt.libvirt.driver [None req-4eb84f0c-1c64-4e03-a60c-4af58eebc66d e07a31bfb9584e07a5620ecf91e88a4b c9b819f443be4cbe852a843ef1c9e3c7 - - default default] [instance: 73127705-004d-4660-88f2-e3758c8f14c0] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 03:16:39 np0005548731 nova_compute[232433]: 2025-12-06 08:16:39.156 232437 DEBUG nova.virt.libvirt.driver [None req-4eb84f0c-1c64-4e03-a60c-4af58eebc66d e07a31bfb9584e07a5620ecf91e88a4b c9b819f443be4cbe852a843ef1c9e3c7 - - default default] [instance: 73127705-004d-4660-88f2-e3758c8f14c0] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 03:16:39 np0005548731 nova_compute[232433]: 2025-12-06 08:16:39.157 232437 DEBUG nova.virt.libvirt.driver [None req-4eb84f0c-1c64-4e03-a60c-4af58eebc66d e07a31bfb9584e07a5620ecf91e88a4b c9b819f443be4cbe852a843ef1c9e3c7 - - default default] [instance: 73127705-004d-4660-88f2-e3758c8f14c0] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 03:16:39 np0005548731 nova_compute[232433]: 2025-12-06 08:16:39.165 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 73127705-004d-4660-88f2-e3758c8f14c0] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  6 03:16:39 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:16:39 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:16:39 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:16:39.170 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:16:39 np0005548731 podman[337962]: 2025-12-06 08:16:39.187678851 +0000 UTC m=+0.051002543 container create e40886c67ab8f5853aaf8a0f6c7a95a211335b0bf7d9e3a425a7f7cc78e5e6eb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-cf4b9c80-699e-493c-9b5c-f95417bc87b3, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Dec  6 03:16:39 np0005548731 nova_compute[232433]: 2025-12-06 08:16:39.218 232437 INFO nova.compute.manager [None req-4eb84f0c-1c64-4e03-a60c-4af58eebc66d e07a31bfb9584e07a5620ecf91e88a4b c9b819f443be4cbe852a843ef1c9e3c7 - - default default] [instance: 73127705-004d-4660-88f2-e3758c8f14c0] Took 7.83 seconds to spawn the instance on the hypervisor.#033[00m
Dec  6 03:16:39 np0005548731 nova_compute[232433]: 2025-12-06 08:16:39.218 232437 DEBUG nova.compute.manager [None req-4eb84f0c-1c64-4e03-a60c-4af58eebc66d e07a31bfb9584e07a5620ecf91e88a4b c9b819f443be4cbe852a843ef1c9e3c7 - - default default] [instance: 73127705-004d-4660-88f2-e3758c8f14c0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 03:16:39 np0005548731 systemd[1]: Started libpod-conmon-e40886c67ab8f5853aaf8a0f6c7a95a211335b0bf7d9e3a425a7f7cc78e5e6eb.scope.
Dec  6 03:16:39 np0005548731 systemd[1]: Started libcrun container.
Dec  6 03:16:39 np0005548731 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aac4503fb981d68335a0686f7edaa32e0b45b16087b820aa53506e181f76e9e3/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec  6 03:16:39 np0005548731 podman[337962]: 2025-12-06 08:16:39.162616331 +0000 UTC m=+0.025939963 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec  6 03:16:39 np0005548731 podman[337962]: 2025-12-06 08:16:39.259411359 +0000 UTC m=+0.122735001 container init e40886c67ab8f5853aaf8a0f6c7a95a211335b0bf7d9e3a425a7f7cc78e5e6eb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-cf4b9c80-699e-493c-9b5c-f95417bc87b3, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Dec  6 03:16:39 np0005548731 podman[337962]: 2025-12-06 08:16:39.264651035 +0000 UTC m=+0.127974627 container start e40886c67ab8f5853aaf8a0f6c7a95a211335b0bf7d9e3a425a7f7cc78e5e6eb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-cf4b9c80-699e-493c-9b5c-f95417bc87b3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Dec  6 03:16:39 np0005548731 neutron-haproxy-ovnmeta-cf4b9c80-699e-493c-9b5c-f95417bc87b3[337979]: [NOTICE]   (337983) : New worker (337985) forked
Dec  6 03:16:39 np0005548731 neutron-haproxy-ovnmeta-cf4b9c80-699e-493c-9b5c-f95417bc87b3[337979]: [NOTICE]   (337983) : Loading success.
Dec  6 03:16:39 np0005548731 nova_compute[232433]: 2025-12-06 08:16:39.303 232437 INFO nova.compute.manager [None req-4eb84f0c-1c64-4e03-a60c-4af58eebc66d e07a31bfb9584e07a5620ecf91e88a4b c9b819f443be4cbe852a843ef1c9e3c7 - - default default] [instance: 73127705-004d-4660-88f2-e3758c8f14c0] Took 10.07 seconds to build instance.#033[00m
Dec  6 03:16:39 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:16:39.319 143965 INFO neutron.agent.ovn.metadata.agent [-] Port f212f53a-4f91-4f42-93d2-4e480ddb53d4 in datapath b4ef1374-9c77-45a7-8776-50aa60c7d84a unbound from our chassis#033[00m
Dec  6 03:16:39 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:16:39.320 143965 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network b4ef1374-9c77-45a7-8776-50aa60c7d84a#033[00m
Dec  6 03:16:39 np0005548731 nova_compute[232433]: 2025-12-06 08:16:39.324 232437 DEBUG oslo_concurrency.lockutils [None req-4eb84f0c-1c64-4e03-a60c-4af58eebc66d e07a31bfb9584e07a5620ecf91e88a4b c9b819f443be4cbe852a843ef1c9e3c7 - - default default] Lock "73127705-004d-4660-88f2-e3758c8f14c0" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.147s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:16:39 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:16:39.334 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[6dd00e09-d5e3-42e2-991e-4d584e176981]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:16:39 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:16:39.335 143965 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapb4ef1374-91 in ovnmeta-b4ef1374-9c77-45a7-8776-50aa60c7d84a namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Dec  6 03:16:39 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:16:39.337 236668 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapb4ef1374-90 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Dec  6 03:16:39 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:16:39.337 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[bd947453-831f-4ef4-889c-f8f87b0084b4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:16:39 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:16:39.337 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[c4879783-84c7-4e3a-bf13-a47d7a6b5c75]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:16:39 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:16:39.349 144079 DEBUG oslo.privsep.daemon [-] privsep: reply[d81b77d1-6a27-4aed-8b59-5a83f4e72192]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:16:39 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:16:39.361 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[eb5de876-e0e6-4e3c-931d-772107b733c8]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:16:39 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:16:39.390 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[9935c1ec-6102-440b-aa62-5a4ef139c3d5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:16:39 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:16:39.398 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[ba3e16b0-7845-4995-bb43-58695f744869]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:16:39 np0005548731 NetworkManager[49182]: <info>  [1765008999.3987] manager: (tapb4ef1374-90): new Veth device (/org/freedesktop/NetworkManager/Devices/502)
Dec  6 03:16:39 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:16:39.432 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[b295f3cb-12c3-478c-a88c-a990a71be6e0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:16:39 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:16:39.436 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[286febcf-4f56-4902-9c3b-ee48dcfd50e1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:16:39 np0005548731 NetworkManager[49182]: <info>  [1765008999.4599] device (tapb4ef1374-90): carrier: link connected
Dec  6 03:16:39 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:16:39.469 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[f3609121-4aef-42c0-8d4c-f9ee92fee760]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:16:39 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:16:39.488 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[998f6afe-c8b8-4aca-9991-86b6f144a6d9]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb4ef1374-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f9:d4:b8'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 325], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 926146, 'reachable_time': 30668, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 338037, 'error': None, 'target': 'ovnmeta-b4ef1374-9c77-45a7-8776-50aa60c7d84a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:16:39 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:16:39.504 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[5ea41472-98b6-4e6f-a5d5-6db1f39ba846]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fef9:d4b8'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 926146, 'tstamp': 926146}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 338041, 'error': None, 'target': 'ovnmeta-b4ef1374-9c77-45a7-8776-50aa60c7d84a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:16:39 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:16:39.523 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[934fea3c-71d9-4597-86cf-8dd12a949b4a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb4ef1374-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f9:d4:b8'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 325], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 926146, 'reachable_time': 30668, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 152, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 152, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 338042, 'error': None, 'target': 'ovnmeta-b4ef1374-9c77-45a7-8776-50aa60c7d84a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:16:39 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:16:39.555 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[3c8f24b6-88e6-4d9c-a45b-5ecb86e45196]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:16:39 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:16:39.612 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[f7e484f5-ce77-4c6a-956a-f0014c17b9a3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:16:39 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:16:39.613 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb4ef1374-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 03:16:39 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:16:39.613 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  6 03:16:39 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:16:39.614 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb4ef1374-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 03:16:39 np0005548731 nova_compute[232433]: 2025-12-06 08:16:39.616 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:16:39 np0005548731 kernel: tapb4ef1374-90: entered promiscuous mode
Dec  6 03:16:39 np0005548731 NetworkManager[49182]: <info>  [1765008999.6184] manager: (tapb4ef1374-90): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/503)
Dec  6 03:16:39 np0005548731 nova_compute[232433]: 2025-12-06 08:16:39.618 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:16:39 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:16:39.619 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapb4ef1374-90, col_values=(('external_ids', {'iface-id': '32c82c25-6496-4edd-ba74-1791824b99ab'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 03:16:39 np0005548731 nova_compute[232433]: 2025-12-06 08:16:39.620 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:16:39 np0005548731 ovn_controller[133927]: 2025-12-06T08:16:39Z|01062|binding|INFO|Releasing lport 32c82c25-6496-4edd-ba74-1791824b99ab from this chassis (sb_readonly=0)
Dec  6 03:16:39 np0005548731 nova_compute[232433]: 2025-12-06 08:16:39.628 232437 DEBUG nova.virt.driver [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Emitting event <LifecycleEvent: 1765008999.6284258, 1afa9f23-5514-4eb1-9d88-856a8f5e6744 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 03:16:39 np0005548731 nova_compute[232433]: 2025-12-06 08:16:39.629 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 1afa9f23-5514-4eb1-9d88-856a8f5e6744] VM Started (Lifecycle Event)#033[00m
Dec  6 03:16:39 np0005548731 nova_compute[232433]: 2025-12-06 08:16:39.631 232437 DEBUG nova.compute.manager [None req-cd26e520-6f65-4c43-b3cb-5ee0386ded28 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] [instance: 1afa9f23-5514-4eb1-9d88-856a8f5e6744] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec  6 03:16:39 np0005548731 nova_compute[232433]: 2025-12-06 08:16:39.636 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:16:39 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:16:39.637 143965 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/b4ef1374-9c77-45a7-8776-50aa60c7d84a.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/b4ef1374-9c77-45a7-8776-50aa60c7d84a.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Dec  6 03:16:39 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:16:39.638 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[4b543365-a7ab-4cc1-aea7-220274c70baf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:16:39 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:16:39.639 143965 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec  6 03:16:39 np0005548731 ovn_metadata_agent[143960]: global
Dec  6 03:16:39 np0005548731 ovn_metadata_agent[143960]:    log         /dev/log local0 debug
Dec  6 03:16:39 np0005548731 ovn_metadata_agent[143960]:    log-tag     haproxy-metadata-proxy-b4ef1374-9c77-45a7-8776-50aa60c7d84a
Dec  6 03:16:39 np0005548731 ovn_metadata_agent[143960]:    user        root
Dec  6 03:16:39 np0005548731 ovn_metadata_agent[143960]:    group       root
Dec  6 03:16:39 np0005548731 ovn_metadata_agent[143960]:    maxconn     1024
Dec  6 03:16:39 np0005548731 ovn_metadata_agent[143960]:    pidfile     /var/lib/neutron/external/pids/b4ef1374-9c77-45a7-8776-50aa60c7d84a.pid.haproxy
Dec  6 03:16:39 np0005548731 ovn_metadata_agent[143960]:    daemon
Dec  6 03:16:39 np0005548731 ovn_metadata_agent[143960]: 
Dec  6 03:16:39 np0005548731 ovn_metadata_agent[143960]: defaults
Dec  6 03:16:39 np0005548731 ovn_metadata_agent[143960]:    log global
Dec  6 03:16:39 np0005548731 ovn_metadata_agent[143960]:    mode http
Dec  6 03:16:39 np0005548731 ovn_metadata_agent[143960]:    option httplog
Dec  6 03:16:39 np0005548731 ovn_metadata_agent[143960]:    option dontlognull
Dec  6 03:16:39 np0005548731 ovn_metadata_agent[143960]:    option http-server-close
Dec  6 03:16:39 np0005548731 ovn_metadata_agent[143960]:    option forwardfor
Dec  6 03:16:39 np0005548731 ovn_metadata_agent[143960]:    retries                 3
Dec  6 03:16:39 np0005548731 ovn_metadata_agent[143960]:    timeout http-request    30s
Dec  6 03:16:39 np0005548731 ovn_metadata_agent[143960]:    timeout connect         30s
Dec  6 03:16:39 np0005548731 ovn_metadata_agent[143960]:    timeout client          32s
Dec  6 03:16:39 np0005548731 ovn_metadata_agent[143960]:    timeout server          32s
Dec  6 03:16:39 np0005548731 ovn_metadata_agent[143960]:    timeout http-keep-alive 30s
Dec  6 03:16:39 np0005548731 ovn_metadata_agent[143960]: 
Dec  6 03:16:39 np0005548731 ovn_metadata_agent[143960]: 
Dec  6 03:16:39 np0005548731 ovn_metadata_agent[143960]: listen listener
Dec  6 03:16:39 np0005548731 ovn_metadata_agent[143960]:    bind 169.254.169.254:80
Dec  6 03:16:39 np0005548731 ovn_metadata_agent[143960]:    server metadata /var/lib/neutron/metadata_proxy
Dec  6 03:16:39 np0005548731 ovn_metadata_agent[143960]:    http-request add-header X-OVN-Network-ID b4ef1374-9c77-45a7-8776-50aa60c7d84a
Dec  6 03:16:39 np0005548731 ovn_metadata_agent[143960]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Dec  6 03:16:39 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:16:39.640 143965 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-b4ef1374-9c77-45a7-8776-50aa60c7d84a', 'env', 'PROCESS_TAG=haproxy-b4ef1374-9c77-45a7-8776-50aa60c7d84a', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/b4ef1374-9c77-45a7-8776-50aa60c7d84a.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Dec  6 03:16:39 np0005548731 nova_compute[232433]: 2025-12-06 08:16:39.648 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 1afa9f23-5514-4eb1-9d88-856a8f5e6744] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 03:16:39 np0005548731 nova_compute[232433]: 2025-12-06 08:16:39.652 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 1afa9f23-5514-4eb1-9d88-856a8f5e6744] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  6 03:16:39 np0005548731 nova_compute[232433]: 2025-12-06 08:16:39.655 232437 DEBUG nova.virt.libvirt.driver [None req-cd26e520-6f65-4c43-b3cb-5ee0386ded28 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] [instance: 1afa9f23-5514-4eb1-9d88-856a8f5e6744] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Dec  6 03:16:39 np0005548731 nova_compute[232433]: 2025-12-06 08:16:39.657 232437 INFO nova.virt.libvirt.driver [-] [instance: 1afa9f23-5514-4eb1-9d88-856a8f5e6744] Instance spawned successfully.#033[00m
Dec  6 03:16:39 np0005548731 nova_compute[232433]: 2025-12-06 08:16:39.657 232437 DEBUG nova.virt.libvirt.driver [None req-cd26e520-6f65-4c43-b3cb-5ee0386ded28 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] [instance: 1afa9f23-5514-4eb1-9d88-856a8f5e6744] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Dec  6 03:16:39 np0005548731 nova_compute[232433]: 2025-12-06 08:16:39.681 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 1afa9f23-5514-4eb1-9d88-856a8f5e6744] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  6 03:16:39 np0005548731 nova_compute[232433]: 2025-12-06 08:16:39.681 232437 DEBUG nova.virt.driver [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Emitting event <LifecycleEvent: 1765008999.6286216, 1afa9f23-5514-4eb1-9d88-856a8f5e6744 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 03:16:39 np0005548731 nova_compute[232433]: 2025-12-06 08:16:39.681 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 1afa9f23-5514-4eb1-9d88-856a8f5e6744] VM Paused (Lifecycle Event)#033[00m
Dec  6 03:16:39 np0005548731 nova_compute[232433]: 2025-12-06 08:16:39.689 232437 DEBUG nova.virt.libvirt.driver [None req-cd26e520-6f65-4c43-b3cb-5ee0386ded28 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] [instance: 1afa9f23-5514-4eb1-9d88-856a8f5e6744] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 03:16:39 np0005548731 nova_compute[232433]: 2025-12-06 08:16:39.690 232437 DEBUG nova.virt.libvirt.driver [None req-cd26e520-6f65-4c43-b3cb-5ee0386ded28 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] [instance: 1afa9f23-5514-4eb1-9d88-856a8f5e6744] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 03:16:39 np0005548731 nova_compute[232433]: 2025-12-06 08:16:39.690 232437 DEBUG nova.virt.libvirt.driver [None req-cd26e520-6f65-4c43-b3cb-5ee0386ded28 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] [instance: 1afa9f23-5514-4eb1-9d88-856a8f5e6744] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 03:16:39 np0005548731 nova_compute[232433]: 2025-12-06 08:16:39.691 232437 DEBUG nova.virt.libvirt.driver [None req-cd26e520-6f65-4c43-b3cb-5ee0386ded28 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] [instance: 1afa9f23-5514-4eb1-9d88-856a8f5e6744] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 03:16:39 np0005548731 nova_compute[232433]: 2025-12-06 08:16:39.691 232437 DEBUG nova.virt.libvirt.driver [None req-cd26e520-6f65-4c43-b3cb-5ee0386ded28 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] [instance: 1afa9f23-5514-4eb1-9d88-856a8f5e6744] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 03:16:39 np0005548731 nova_compute[232433]: 2025-12-06 08:16:39.692 232437 DEBUG nova.virt.libvirt.driver [None req-cd26e520-6f65-4c43-b3cb-5ee0386ded28 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] [instance: 1afa9f23-5514-4eb1-9d88-856a8f5e6744] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 03:16:39 np0005548731 nova_compute[232433]: 2025-12-06 08:16:39.702 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 1afa9f23-5514-4eb1-9d88-856a8f5e6744] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 03:16:39 np0005548731 nova_compute[232433]: 2025-12-06 08:16:39.705 232437 DEBUG nova.virt.driver [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Emitting event <LifecycleEvent: 1765008999.6396108, 1afa9f23-5514-4eb1-9d88-856a8f5e6744 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 03:16:39 np0005548731 nova_compute[232433]: 2025-12-06 08:16:39.705 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 1afa9f23-5514-4eb1-9d88-856a8f5e6744] VM Resumed (Lifecycle Event)#033[00m
Dec  6 03:16:39 np0005548731 nova_compute[232433]: 2025-12-06 08:16:39.721 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 1afa9f23-5514-4eb1-9d88-856a8f5e6744] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 03:16:39 np0005548731 nova_compute[232433]: 2025-12-06 08:16:39.725 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 1afa9f23-5514-4eb1-9d88-856a8f5e6744] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  6 03:16:39 np0005548731 nova_compute[232433]: 2025-12-06 08:16:39.745 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 1afa9f23-5514-4eb1-9d88-856a8f5e6744] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  6 03:16:39 np0005548731 nova_compute[232433]: 2025-12-06 08:16:39.754 232437 INFO nova.compute.manager [None req-cd26e520-6f65-4c43-b3cb-5ee0386ded28 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] [instance: 1afa9f23-5514-4eb1-9d88-856a8f5e6744] Took 3.18 seconds to spawn the instance on the hypervisor.#033[00m
Dec  6 03:16:39 np0005548731 nova_compute[232433]: 2025-12-06 08:16:39.754 232437 DEBUG nova.compute.manager [None req-cd26e520-6f65-4c43-b3cb-5ee0386ded28 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] [instance: 1afa9f23-5514-4eb1-9d88-856a8f5e6744] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 03:16:39 np0005548731 nova_compute[232433]: 2025-12-06 08:16:39.810 232437 INFO nova.compute.manager [None req-cd26e520-6f65-4c43-b3cb-5ee0386ded28 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] [instance: 1afa9f23-5514-4eb1-9d88-856a8f5e6744] Took 12.24 seconds to build instance.#033[00m
Dec  6 03:16:39 np0005548731 nova_compute[232433]: 2025-12-06 08:16:39.839 232437 DEBUG oslo_concurrency.lockutils [None req-cd26e520-6f65-4c43-b3cb-5ee0386ded28 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Lock "1afa9f23-5514-4eb1-9d88-856a8f5e6744" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.345s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:16:40 np0005548731 podman[338080]: 2025-12-06 08:16:40.016278951 +0000 UTC m=+0.044270779 container create 30c3dc76ed267353759b7922d9d2baca40afe1dd377d1de7abbe1c043bb41c51 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b4ef1374-9c77-45a7-8776-50aa60c7d84a, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  6 03:16:40 np0005548731 systemd[1]: Started libpod-conmon-30c3dc76ed267353759b7922d9d2baca40afe1dd377d1de7abbe1c043bb41c51.scope.
Dec  6 03:16:40 np0005548731 systemd[1]: Started libcrun container.
Dec  6 03:16:40 np0005548731 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/168ac394a5d6956ac5faab13279cf7f8ab838a19a6b173c86a847ae9840006af/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec  6 03:16:40 np0005548731 podman[338080]: 2025-12-06 08:16:39.993586158 +0000 UTC m=+0.021578016 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec  6 03:16:40 np0005548731 podman[338080]: 2025-12-06 08:16:40.096422283 +0000 UTC m=+0.124414131 container init 30c3dc76ed267353759b7922d9d2baca40afe1dd377d1de7abbe1c043bb41c51 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b4ef1374-9c77-45a7-8776-50aa60c7d84a, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  6 03:16:40 np0005548731 podman[338080]: 2025-12-06 08:16:40.101313082 +0000 UTC m=+0.129304910 container start 30c3dc76ed267353759b7922d9d2baca40afe1dd377d1de7abbe1c043bb41c51 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b4ef1374-9c77-45a7-8776-50aa60c7d84a, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  6 03:16:40 np0005548731 neutron-haproxy-ovnmeta-b4ef1374-9c77-45a7-8776-50aa60c7d84a[338096]: [NOTICE]   (338100) : New worker (338102) forked
Dec  6 03:16:40 np0005548731 neutron-haproxy-ovnmeta-b4ef1374-9c77-45a7-8776-50aa60c7d84a[338096]: [NOTICE]   (338100) : Loading success.
Dec  6 03:16:40 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e417 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:16:40 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:16:40 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:16:40 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:16:40.993 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:16:41 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:16:41 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:16:41 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:16:41.173 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:16:41 np0005548731 nova_compute[232433]: 2025-12-06 08:16:41.208 232437 DEBUG nova.compute.manager [req-6e1dc46a-488e-4133-a94a-01857116dee5 req-64c2f1ab-bbdd-4002-b429-ddabbb047500 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 73127705-004d-4660-88f2-e3758c8f14c0] Received event network-vif-plugged-2e36dbbd-5e69-4b13-9b5d-12b3e57951c1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 03:16:41 np0005548731 nova_compute[232433]: 2025-12-06 08:16:41.208 232437 DEBUG oslo_concurrency.lockutils [req-6e1dc46a-488e-4133-a94a-01857116dee5 req-64c2f1ab-bbdd-4002-b429-ddabbb047500 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "73127705-004d-4660-88f2-e3758c8f14c0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:16:41 np0005548731 nova_compute[232433]: 2025-12-06 08:16:41.209 232437 DEBUG oslo_concurrency.lockutils [req-6e1dc46a-488e-4133-a94a-01857116dee5 req-64c2f1ab-bbdd-4002-b429-ddabbb047500 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "73127705-004d-4660-88f2-e3758c8f14c0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:16:41 np0005548731 nova_compute[232433]: 2025-12-06 08:16:41.210 232437 DEBUG oslo_concurrency.lockutils [req-6e1dc46a-488e-4133-a94a-01857116dee5 req-64c2f1ab-bbdd-4002-b429-ddabbb047500 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "73127705-004d-4660-88f2-e3758c8f14c0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:16:41 np0005548731 nova_compute[232433]: 2025-12-06 08:16:41.210 232437 DEBUG nova.compute.manager [req-6e1dc46a-488e-4133-a94a-01857116dee5 req-64c2f1ab-bbdd-4002-b429-ddabbb047500 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 73127705-004d-4660-88f2-e3758c8f14c0] No waiting events found dispatching network-vif-plugged-2e36dbbd-5e69-4b13-9b5d-12b3e57951c1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 03:16:41 np0005548731 nova_compute[232433]: 2025-12-06 08:16:41.210 232437 WARNING nova.compute.manager [req-6e1dc46a-488e-4133-a94a-01857116dee5 req-64c2f1ab-bbdd-4002-b429-ddabbb047500 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 73127705-004d-4660-88f2-e3758c8f14c0] Received unexpected event network-vif-plugged-2e36dbbd-5e69-4b13-9b5d-12b3e57951c1 for instance with vm_state active and task_state None.#033[00m
Dec  6 03:16:41 np0005548731 nova_compute[232433]: 2025-12-06 08:16:41.240 232437 DEBUG nova.compute.manager [req-8e3a3423-f0e2-4193-8b45-d2ddf8c7666d req-5d54d8a5-7f23-44ec-b0a3-96b4880cfedf 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 1afa9f23-5514-4eb1-9d88-856a8f5e6744] Received event network-vif-plugged-f212f53a-4f91-4f42-93d2-4e480ddb53d4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 03:16:41 np0005548731 nova_compute[232433]: 2025-12-06 08:16:41.241 232437 DEBUG oslo_concurrency.lockutils [req-8e3a3423-f0e2-4193-8b45-d2ddf8c7666d req-5d54d8a5-7f23-44ec-b0a3-96b4880cfedf 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "1afa9f23-5514-4eb1-9d88-856a8f5e6744-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:16:41 np0005548731 nova_compute[232433]: 2025-12-06 08:16:41.241 232437 DEBUG oslo_concurrency.lockutils [req-8e3a3423-f0e2-4193-8b45-d2ddf8c7666d req-5d54d8a5-7f23-44ec-b0a3-96b4880cfedf 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "1afa9f23-5514-4eb1-9d88-856a8f5e6744-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:16:41 np0005548731 nova_compute[232433]: 2025-12-06 08:16:41.241 232437 DEBUG oslo_concurrency.lockutils [req-8e3a3423-f0e2-4193-8b45-d2ddf8c7666d req-5d54d8a5-7f23-44ec-b0a3-96b4880cfedf 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "1afa9f23-5514-4eb1-9d88-856a8f5e6744-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:16:41 np0005548731 nova_compute[232433]: 2025-12-06 08:16:41.242 232437 DEBUG nova.compute.manager [req-8e3a3423-f0e2-4193-8b45-d2ddf8c7666d req-5d54d8a5-7f23-44ec-b0a3-96b4880cfedf 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 1afa9f23-5514-4eb1-9d88-856a8f5e6744] No waiting events found dispatching network-vif-plugged-f212f53a-4f91-4f42-93d2-4e480ddb53d4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 03:16:41 np0005548731 nova_compute[232433]: 2025-12-06 08:16:41.242 232437 WARNING nova.compute.manager [req-8e3a3423-f0e2-4193-8b45-d2ddf8c7666d req-5d54d8a5-7f23-44ec-b0a3-96b4880cfedf 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 1afa9f23-5514-4eb1-9d88-856a8f5e6744] Received unexpected event network-vif-plugged-f212f53a-4f91-4f42-93d2-4e480ddb53d4 for instance with vm_state active and task_state None.#033[00m
Dec  6 03:16:42 np0005548731 nova_compute[232433]: 2025-12-06 08:16:42.083 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:16:42 np0005548731 nova_compute[232433]: 2025-12-06 08:16:42.468 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:16:42 np0005548731 nova_compute[232433]: 2025-12-06 08:16:42.514 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:16:42 np0005548731 NetworkManager[49182]: <info>  [1765009002.5262] manager: (patch-br-int-to-provnet-9e78c1a1-68f4-477a-abaa-13a98bde06e5): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/504)
Dec  6 03:16:42 np0005548731 NetworkManager[49182]: <info>  [1765009002.5273] manager: (patch-provnet-9e78c1a1-68f4-477a-abaa-13a98bde06e5-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/505)
Dec  6 03:16:42 np0005548731 nova_compute[232433]: 2025-12-06 08:16:42.632 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:16:42 np0005548731 ovn_controller[133927]: 2025-12-06T08:16:42Z|01063|binding|INFO|Releasing lport 32c82c25-6496-4edd-ba74-1791824b99ab from this chassis (sb_readonly=0)
Dec  6 03:16:42 np0005548731 ovn_controller[133927]: 2025-12-06T08:16:42Z|01064|binding|INFO|Releasing lport 88e0123a-9776-4b2d-beec-18a6ac7f9e65 from this chassis (sb_readonly=0)
Dec  6 03:16:42 np0005548731 nova_compute[232433]: 2025-12-06 08:16:42.661 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:16:42 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:16:42 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 03:16:42 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:16:42.996 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 03:16:43 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:16:43 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:16:43 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:16:43.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:16:43 np0005548731 nova_compute[232433]: 2025-12-06 08:16:43.433 232437 DEBUG nova.compute.manager [req-f71e4835-ebdc-4cab-90f7-a1cb877a6edd req-654dfb70-aaa1-43c0-80b4-df21753f10de 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 73127705-004d-4660-88f2-e3758c8f14c0] Received event network-changed-2e36dbbd-5e69-4b13-9b5d-12b3e57951c1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 03:16:43 np0005548731 nova_compute[232433]: 2025-12-06 08:16:43.433 232437 DEBUG nova.compute.manager [req-f71e4835-ebdc-4cab-90f7-a1cb877a6edd req-654dfb70-aaa1-43c0-80b4-df21753f10de 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 73127705-004d-4660-88f2-e3758c8f14c0] Refreshing instance network info cache due to event network-changed-2e36dbbd-5e69-4b13-9b5d-12b3e57951c1. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  6 03:16:43 np0005548731 nova_compute[232433]: 2025-12-06 08:16:43.434 232437 DEBUG oslo_concurrency.lockutils [req-f71e4835-ebdc-4cab-90f7-a1cb877a6edd req-654dfb70-aaa1-43c0-80b4-df21753f10de 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-73127705-004d-4660-88f2-e3758c8f14c0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 03:16:43 np0005548731 nova_compute[232433]: 2025-12-06 08:16:43.434 232437 DEBUG oslo_concurrency.lockutils [req-f71e4835-ebdc-4cab-90f7-a1cb877a6edd req-654dfb70-aaa1-43c0-80b4-df21753f10de 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-73127705-004d-4660-88f2-e3758c8f14c0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 03:16:43 np0005548731 nova_compute[232433]: 2025-12-06 08:16:43.434 232437 DEBUG nova.network.neutron [req-f71e4835-ebdc-4cab-90f7-a1cb877a6edd req-654dfb70-aaa1-43c0-80b4-df21753f10de 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 73127705-004d-4660-88f2-e3758c8f14c0] Refreshing network info cache for port 2e36dbbd-5e69-4b13-9b5d-12b3e57951c1 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  6 03:16:43 np0005548731 nova_compute[232433]: 2025-12-06 08:16:43.871 232437 DEBUG oslo_concurrency.lockutils [None req-70b3bfdb-6b0f-4f7f-a665-0951104e049f 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Acquiring lock "1afa9f23-5514-4eb1-9d88-856a8f5e6744" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:16:43 np0005548731 nova_compute[232433]: 2025-12-06 08:16:43.872 232437 DEBUG oslo_concurrency.lockutils [None req-70b3bfdb-6b0f-4f7f-a665-0951104e049f 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Lock "1afa9f23-5514-4eb1-9d88-856a8f5e6744" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:16:43 np0005548731 nova_compute[232433]: 2025-12-06 08:16:43.872 232437 DEBUG oslo_concurrency.lockutils [None req-70b3bfdb-6b0f-4f7f-a665-0951104e049f 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Acquiring lock "1afa9f23-5514-4eb1-9d88-856a8f5e6744-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:16:43 np0005548731 nova_compute[232433]: 2025-12-06 08:16:43.872 232437 DEBUG oslo_concurrency.lockutils [None req-70b3bfdb-6b0f-4f7f-a665-0951104e049f 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Lock "1afa9f23-5514-4eb1-9d88-856a8f5e6744-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:16:43 np0005548731 nova_compute[232433]: 2025-12-06 08:16:43.873 232437 DEBUG oslo_concurrency.lockutils [None req-70b3bfdb-6b0f-4f7f-a665-0951104e049f 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Lock "1afa9f23-5514-4eb1-9d88-856a8f5e6744-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:16:43 np0005548731 nova_compute[232433]: 2025-12-06 08:16:43.874 232437 INFO nova.compute.manager [None req-70b3bfdb-6b0f-4f7f-a665-0951104e049f 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] [instance: 1afa9f23-5514-4eb1-9d88-856a8f5e6744] Terminating instance#033[00m
Dec  6 03:16:43 np0005548731 nova_compute[232433]: 2025-12-06 08:16:43.875 232437 DEBUG nova.compute.manager [None req-70b3bfdb-6b0f-4f7f-a665-0951104e049f 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] [instance: 1afa9f23-5514-4eb1-9d88-856a8f5e6744] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec  6 03:16:43 np0005548731 kernel: tapf212f53a-4f (unregistering): left promiscuous mode
Dec  6 03:16:43 np0005548731 NetworkManager[49182]: <info>  [1765009003.9228] device (tapf212f53a-4f): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec  6 03:16:43 np0005548731 nova_compute[232433]: 2025-12-06 08:16:43.973 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:16:43 np0005548731 ovn_controller[133927]: 2025-12-06T08:16:43Z|01065|binding|INFO|Releasing lport f212f53a-4f91-4f42-93d2-4e480ddb53d4 from this chassis (sb_readonly=0)
Dec  6 03:16:43 np0005548731 ovn_controller[133927]: 2025-12-06T08:16:43Z|01066|binding|INFO|Setting lport f212f53a-4f91-4f42-93d2-4e480ddb53d4 down in Southbound
Dec  6 03:16:43 np0005548731 ovn_controller[133927]: 2025-12-06T08:16:43Z|01067|binding|INFO|Removing iface tapf212f53a-4f ovn-installed in OVS
Dec  6 03:16:43 np0005548731 nova_compute[232433]: 2025-12-06 08:16:43.979 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:16:43 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:16:43.985 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a0:81:13 10.100.0.7'], port_security=['fa:16:3e:a0:81:13 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '1afa9f23-5514-4eb1-9d88-856a8f5e6744', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b4ef1374-9c77-45a7-8776-50aa60c7d84a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4b2dc4b8729f446a9c7ac69ca446f71d', 'neutron:revision_number': '4', 'neutron:security_group_ids': '0d5802d0-c57b-4fcd-911a-7dedb970b91e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=60eec70d-8996-4225-9077-6d0f2705560a, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], logical_port=f212f53a-4f91-4f42-93d2-4e480ddb53d4) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 03:16:43 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:16:43.986 143965 INFO neutron.agent.ovn.metadata.agent [-] Port f212f53a-4f91-4f42-93d2-4e480ddb53d4 in datapath b4ef1374-9c77-45a7-8776-50aa60c7d84a unbound from our chassis#033[00m
Dec  6 03:16:43 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:16:43.988 143965 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network b4ef1374-9c77-45a7-8776-50aa60c7d84a, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec  6 03:16:43 np0005548731 nova_compute[232433]: 2025-12-06 08:16:43.989 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:16:43 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:16:43.989 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[1eafafab-8f5c-4b37-baed-bdc9cbd088bf]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:16:43 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:16:43.990 143965 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-b4ef1374-9c77-45a7-8776-50aa60c7d84a namespace which is not needed anymore#033[00m
Dec  6 03:16:44 np0005548731 systemd[1]: machine-qemu\x2d108\x2dinstance\x2d000000ce.scope: Deactivated successfully.
Dec  6 03:16:44 np0005548731 systemd[1]: machine-qemu\x2d108\x2dinstance\x2d000000ce.scope: Consumed 4.997s CPU time.
Dec  6 03:16:44 np0005548731 systemd-machined[195355]: Machine qemu-108-instance-000000ce terminated.
Dec  6 03:16:44 np0005548731 nova_compute[232433]: 2025-12-06 08:16:44.104 232437 INFO nova.virt.libvirt.driver [-] [instance: 1afa9f23-5514-4eb1-9d88-856a8f5e6744] Instance destroyed successfully.#033[00m
Dec  6 03:16:44 np0005548731 nova_compute[232433]: 2025-12-06 08:16:44.105 232437 DEBUG nova.objects.instance [None req-70b3bfdb-6b0f-4f7f-a665-0951104e049f 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Lazy-loading 'resources' on Instance uuid 1afa9f23-5514-4eb1-9d88-856a8f5e6744 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 03:16:44 np0005548731 neutron-haproxy-ovnmeta-b4ef1374-9c77-45a7-8776-50aa60c7d84a[338096]: [NOTICE]   (338100) : haproxy version is 2.8.14-c23fe91
Dec  6 03:16:44 np0005548731 neutron-haproxy-ovnmeta-b4ef1374-9c77-45a7-8776-50aa60c7d84a[338096]: [NOTICE]   (338100) : path to executable is /usr/sbin/haproxy
Dec  6 03:16:44 np0005548731 neutron-haproxy-ovnmeta-b4ef1374-9c77-45a7-8776-50aa60c7d84a[338096]: [WARNING]  (338100) : Exiting Master process...
Dec  6 03:16:44 np0005548731 neutron-haproxy-ovnmeta-b4ef1374-9c77-45a7-8776-50aa60c7d84a[338096]: [WARNING]  (338100) : Exiting Master process...
Dec  6 03:16:44 np0005548731 neutron-haproxy-ovnmeta-b4ef1374-9c77-45a7-8776-50aa60c7d84a[338096]: [ALERT]    (338100) : Current worker (338102) exited with code 143 (Terminated)
Dec  6 03:16:44 np0005548731 neutron-haproxy-ovnmeta-b4ef1374-9c77-45a7-8776-50aa60c7d84a[338096]: [WARNING]  (338100) : All workers exited. Exiting... (0)
Dec  6 03:16:44 np0005548731 systemd[1]: libpod-30c3dc76ed267353759b7922d9d2baca40afe1dd377d1de7abbe1c043bb41c51.scope: Deactivated successfully.
Dec  6 03:16:44 np0005548731 podman[338188]: 2025-12-06 08:16:44.157971667 +0000 UTC m=+0.089681845 container died 30c3dc76ed267353759b7922d9d2baca40afe1dd377d1de7abbe1c043bb41c51 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b4ef1374-9c77-45a7-8776-50aa60c7d84a, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  6 03:16:44 np0005548731 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-30c3dc76ed267353759b7922d9d2baca40afe1dd377d1de7abbe1c043bb41c51-userdata-shm.mount: Deactivated successfully.
Dec  6 03:16:44 np0005548731 systemd[1]: var-lib-containers-storage-overlay-168ac394a5d6956ac5faab13279cf7f8ab838a19a6b173c86a847ae9840006af-merged.mount: Deactivated successfully.
Dec  6 03:16:44 np0005548731 nova_compute[232433]: 2025-12-06 08:16:44.353 232437 DEBUG nova.virt.libvirt.vif [None req-70b3bfdb-6b0f-4f7f-a665-0951104e049f 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T08:16:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-server-270380989',display_name='tempest-TestVolumeBootPattern-server-270380989',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-testvolumebootpattern-server-270380989',id=206,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-06T08:16:39Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='4b2dc4b8729f446a9c7ac69ca446f71d',ramdisk_id='',reservation_id='r-81jjc4kq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-TestVolumeBootPattern-97496240',owner_
user_name='tempest-TestVolumeBootPattern-97496240-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T08:16:39Z,user_data=None,user_id='8e8feb4540af4e2caa45a88a9202dbe2',uuid=1afa9f23-5514-4eb1-9d88-856a8f5e6744,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "f212f53a-4f91-4f42-93d2-4e480ddb53d4", "address": "fa:16:3e:a0:81:13", "network": {"id": "b4ef1374-9c77-45a7-8776-50aa60c7d84a", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1664561964-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4b2dc4b8729f446a9c7ac69ca446f71d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf212f53a-4f", "ovs_interfaceid": "f212f53a-4f91-4f42-93d2-4e480ddb53d4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec  6 03:16:44 np0005548731 nova_compute[232433]: 2025-12-06 08:16:44.354 232437 DEBUG nova.network.os_vif_util [None req-70b3bfdb-6b0f-4f7f-a665-0951104e049f 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Converting VIF {"id": "f212f53a-4f91-4f42-93d2-4e480ddb53d4", "address": "fa:16:3e:a0:81:13", "network": {"id": "b4ef1374-9c77-45a7-8776-50aa60c7d84a", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1664561964-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4b2dc4b8729f446a9c7ac69ca446f71d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf212f53a-4f", "ovs_interfaceid": "f212f53a-4f91-4f42-93d2-4e480ddb53d4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 03:16:44 np0005548731 nova_compute[232433]: 2025-12-06 08:16:44.355 232437 DEBUG nova.network.os_vif_util [None req-70b3bfdb-6b0f-4f7f-a665-0951104e049f 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a0:81:13,bridge_name='br-int',has_traffic_filtering=True,id=f212f53a-4f91-4f42-93d2-4e480ddb53d4,network=Network(b4ef1374-9c77-45a7-8776-50aa60c7d84a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf212f53a-4f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 03:16:44 np0005548731 nova_compute[232433]: 2025-12-06 08:16:44.355 232437 DEBUG os_vif [None req-70b3bfdb-6b0f-4f7f-a665-0951104e049f 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a0:81:13,bridge_name='br-int',has_traffic_filtering=True,id=f212f53a-4f91-4f42-93d2-4e480ddb53d4,network=Network(b4ef1374-9c77-45a7-8776-50aa60c7d84a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf212f53a-4f') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec  6 03:16:44 np0005548731 nova_compute[232433]: 2025-12-06 08:16:44.356 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:16:44 np0005548731 nova_compute[232433]: 2025-12-06 08:16:44.357 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf212f53a-4f, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 03:16:44 np0005548731 nova_compute[232433]: 2025-12-06 08:16:44.358 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:16:44 np0005548731 nova_compute[232433]: 2025-12-06 08:16:44.360 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  6 03:16:44 np0005548731 nova_compute[232433]: 2025-12-06 08:16:44.361 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:16:44 np0005548731 nova_compute[232433]: 2025-12-06 08:16:44.363 232437 INFO os_vif [None req-70b3bfdb-6b0f-4f7f-a665-0951104e049f 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a0:81:13,bridge_name='br-int',has_traffic_filtering=True,id=f212f53a-4f91-4f42-93d2-4e480ddb53d4,network=Network(b4ef1374-9c77-45a7-8776-50aa60c7d84a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf212f53a-4f')#033[00m
Dec  6 03:16:44 np0005548731 podman[338188]: 2025-12-06 08:16:44.473523673 +0000 UTC m=+0.405233841 container cleanup 30c3dc76ed267353759b7922d9d2baca40afe1dd377d1de7abbe1c043bb41c51 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b4ef1374-9c77-45a7-8776-50aa60c7d84a, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec  6 03:16:44 np0005548731 systemd[1]: libpod-conmon-30c3dc76ed267353759b7922d9d2baca40afe1dd377d1de7abbe1c043bb41c51.scope: Deactivated successfully.
Dec  6 03:16:44 np0005548731 podman[338249]: 2025-12-06 08:16:44.577579017 +0000 UTC m=+0.078862741 container remove 30c3dc76ed267353759b7922d9d2baca40afe1dd377d1de7abbe1c043bb41c51 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b4ef1374-9c77-45a7-8776-50aa60c7d84a, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  6 03:16:44 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:16:44.583 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[7e473952-0422-4ab7-939f-b467994d59c0]: (4, ('Sat Dec  6 08:16:44 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-b4ef1374-9c77-45a7-8776-50aa60c7d84a (30c3dc76ed267353759b7922d9d2baca40afe1dd377d1de7abbe1c043bb41c51)\n30c3dc76ed267353759b7922d9d2baca40afe1dd377d1de7abbe1c043bb41c51\nSat Dec  6 08:16:44 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-b4ef1374-9c77-45a7-8776-50aa60c7d84a (30c3dc76ed267353759b7922d9d2baca40afe1dd377d1de7abbe1c043bb41c51)\n30c3dc76ed267353759b7922d9d2baca40afe1dd377d1de7abbe1c043bb41c51\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:16:44 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:16:44.585 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[e36d93e4-1967-43de-aef3-25cc54851e9b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:16:44 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:16:44.586 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb4ef1374-90, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 03:16:44 np0005548731 kernel: tapb4ef1374-90: left promiscuous mode
Dec  6 03:16:44 np0005548731 nova_compute[232433]: 2025-12-06 08:16:44.590 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:16:44 np0005548731 nova_compute[232433]: 2025-12-06 08:16:44.602 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:16:44 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:16:44.606 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[cc6656f7-c8ed-48e3-abd0-5b419dba5861]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:16:44 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:16:44.624 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[f48de97f-28ea-4413-bfc1-74371cad18e2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:16:44 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:16:44.625 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[7b79323c-7d39-4e53-9c8f-3c3bdfa95ac7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:16:44 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:16:44.642 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[acb70946-783e-4ca2-9ebb-5b29cc1acf38]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 926139, 'reachable_time': 43947, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 338264, 'error': None, 'target': 'ovnmeta-b4ef1374-9c77-45a7-8776-50aa60c7d84a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:16:44 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:16:44.646 144079 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-b4ef1374-9c77-45a7-8776-50aa60c7d84a deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Dec  6 03:16:44 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:16:44.647 144079 DEBUG oslo.privsep.daemon [-] privsep: reply[20091dca-f2ed-4e08-a1fb-4c272d2b0042]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:16:44 np0005548731 systemd[1]: run-netns-ovnmeta\x2db4ef1374\x2d9c77\x2d45a7\x2d8776\x2d50aa60c7d84a.mount: Deactivated successfully.
Dec  6 03:16:44 np0005548731 nova_compute[232433]: 2025-12-06 08:16:44.775 232437 DEBUG nova.network.neutron [req-f71e4835-ebdc-4cab-90f7-a1cb877a6edd req-654dfb70-aaa1-43c0-80b4-df21753f10de 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 73127705-004d-4660-88f2-e3758c8f14c0] Updated VIF entry in instance network info cache for port 2e36dbbd-5e69-4b13-9b5d-12b3e57951c1. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  6 03:16:44 np0005548731 nova_compute[232433]: 2025-12-06 08:16:44.775 232437 DEBUG nova.network.neutron [req-f71e4835-ebdc-4cab-90f7-a1cb877a6edd req-654dfb70-aaa1-43c0-80b4-df21753f10de 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 73127705-004d-4660-88f2-e3758c8f14c0] Updating instance_info_cache with network_info: [{"id": "2e36dbbd-5e69-4b13-9b5d-12b3e57951c1", "address": "fa:16:3e:72:c6:6e", "network": {"id": "cf4b9c80-699e-493c-9b5c-f95417bc87b3", "bridge": "br-int", "label": "tempest-network-smoke--1764011039", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.184", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c9b819f443be4cbe852a843ef1c9e3c7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2e36dbbd-5e", "ovs_interfaceid": "2e36dbbd-5e69-4b13-9b5d-12b3e57951c1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 03:16:44 np0005548731 nova_compute[232433]: 2025-12-06 08:16:44.801 232437 DEBUG oslo_concurrency.lockutils [req-f71e4835-ebdc-4cab-90f7-a1cb877a6edd req-654dfb70-aaa1-43c0-80b4-df21753f10de 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-73127705-004d-4660-88f2-e3758c8f14c0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 03:16:44 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:16:45 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:16:45 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:16:44.998 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:16:45 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:16:45 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:16:45 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:16:45.180 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:16:45 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e417 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:16:45 np0005548731 nova_compute[232433]: 2025-12-06 08:16:45.524 232437 DEBUG nova.compute.manager [req-84dc8f2c-8d12-45d2-9ac5-4342c45d2594 req-2b406cd2-a728-419f-bb88-54030e5fafda 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 1afa9f23-5514-4eb1-9d88-856a8f5e6744] Received event network-vif-unplugged-f212f53a-4f91-4f42-93d2-4e480ddb53d4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 03:16:45 np0005548731 nova_compute[232433]: 2025-12-06 08:16:45.524 232437 DEBUG oslo_concurrency.lockutils [req-84dc8f2c-8d12-45d2-9ac5-4342c45d2594 req-2b406cd2-a728-419f-bb88-54030e5fafda 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "1afa9f23-5514-4eb1-9d88-856a8f5e6744-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:16:45 np0005548731 nova_compute[232433]: 2025-12-06 08:16:45.525 232437 DEBUG oslo_concurrency.lockutils [req-84dc8f2c-8d12-45d2-9ac5-4342c45d2594 req-2b406cd2-a728-419f-bb88-54030e5fafda 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "1afa9f23-5514-4eb1-9d88-856a8f5e6744-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:16:45 np0005548731 nova_compute[232433]: 2025-12-06 08:16:45.525 232437 DEBUG oslo_concurrency.lockutils [req-84dc8f2c-8d12-45d2-9ac5-4342c45d2594 req-2b406cd2-a728-419f-bb88-54030e5fafda 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "1afa9f23-5514-4eb1-9d88-856a8f5e6744-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:16:45 np0005548731 nova_compute[232433]: 2025-12-06 08:16:45.525 232437 DEBUG nova.compute.manager [req-84dc8f2c-8d12-45d2-9ac5-4342c45d2594 req-2b406cd2-a728-419f-bb88-54030e5fafda 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 1afa9f23-5514-4eb1-9d88-856a8f5e6744] No waiting events found dispatching network-vif-unplugged-f212f53a-4f91-4f42-93d2-4e480ddb53d4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 03:16:45 np0005548731 nova_compute[232433]: 2025-12-06 08:16:45.526 232437 DEBUG nova.compute.manager [req-84dc8f2c-8d12-45d2-9ac5-4342c45d2594 req-2b406cd2-a728-419f-bb88-54030e5fafda 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 1afa9f23-5514-4eb1-9d88-856a8f5e6744] Received event network-vif-unplugged-f212f53a-4f91-4f42-93d2-4e480ddb53d4 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Dec  6 03:16:45 np0005548731 nova_compute[232433]: 2025-12-06 08:16:45.526 232437 DEBUG nova.compute.manager [req-84dc8f2c-8d12-45d2-9ac5-4342c45d2594 req-2b406cd2-a728-419f-bb88-54030e5fafda 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 1afa9f23-5514-4eb1-9d88-856a8f5e6744] Received event network-vif-plugged-f212f53a-4f91-4f42-93d2-4e480ddb53d4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 03:16:45 np0005548731 nova_compute[232433]: 2025-12-06 08:16:45.526 232437 DEBUG oslo_concurrency.lockutils [req-84dc8f2c-8d12-45d2-9ac5-4342c45d2594 req-2b406cd2-a728-419f-bb88-54030e5fafda 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "1afa9f23-5514-4eb1-9d88-856a8f5e6744-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:16:45 np0005548731 nova_compute[232433]: 2025-12-06 08:16:45.526 232437 DEBUG oslo_concurrency.lockutils [req-84dc8f2c-8d12-45d2-9ac5-4342c45d2594 req-2b406cd2-a728-419f-bb88-54030e5fafda 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "1afa9f23-5514-4eb1-9d88-856a8f5e6744-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:16:45 np0005548731 nova_compute[232433]: 2025-12-06 08:16:45.527 232437 DEBUG oslo_concurrency.lockutils [req-84dc8f2c-8d12-45d2-9ac5-4342c45d2594 req-2b406cd2-a728-419f-bb88-54030e5fafda 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "1afa9f23-5514-4eb1-9d88-856a8f5e6744-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:16:45 np0005548731 nova_compute[232433]: 2025-12-06 08:16:45.527 232437 DEBUG nova.compute.manager [req-84dc8f2c-8d12-45d2-9ac5-4342c45d2594 req-2b406cd2-a728-419f-bb88-54030e5fafda 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 1afa9f23-5514-4eb1-9d88-856a8f5e6744] No waiting events found dispatching network-vif-plugged-f212f53a-4f91-4f42-93d2-4e480ddb53d4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 03:16:45 np0005548731 nova_compute[232433]: 2025-12-06 08:16:45.527 232437 WARNING nova.compute.manager [req-84dc8f2c-8d12-45d2-9ac5-4342c45d2594 req-2b406cd2-a728-419f-bb88-54030e5fafda 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 1afa9f23-5514-4eb1-9d88-856a8f5e6744] Received unexpected event network-vif-plugged-f212f53a-4f91-4f42-93d2-4e480ddb53d4 for instance with vm_state active and task_state deleting.#033[00m
Dec  6 03:16:47 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:16:47 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:16:47 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:16:47.000 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:16:47 np0005548731 nova_compute[232433]: 2025-12-06 08:16:47.085 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:16:47 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:16:47 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:16:47 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:16:47.183 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:16:47 np0005548731 nova_compute[232433]: 2025-12-06 08:16:47.491 232437 INFO nova.virt.libvirt.driver [None req-70b3bfdb-6b0f-4f7f-a665-0951104e049f 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] [instance: 1afa9f23-5514-4eb1-9d88-856a8f5e6744] Deleting instance files /var/lib/nova/instances/1afa9f23-5514-4eb1-9d88-856a8f5e6744_del#033[00m
Dec  6 03:16:47 np0005548731 nova_compute[232433]: 2025-12-06 08:16:47.492 232437 INFO nova.virt.libvirt.driver [None req-70b3bfdb-6b0f-4f7f-a665-0951104e049f 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] [instance: 1afa9f23-5514-4eb1-9d88-856a8f5e6744] Deletion of /var/lib/nova/instances/1afa9f23-5514-4eb1-9d88-856a8f5e6744_del complete#033[00m
Dec  6 03:16:47 np0005548731 nova_compute[232433]: 2025-12-06 08:16:47.552 232437 INFO nova.compute.manager [None req-70b3bfdb-6b0f-4f7f-a665-0951104e049f 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] [instance: 1afa9f23-5514-4eb1-9d88-856a8f5e6744] Took 3.68 seconds to destroy the instance on the hypervisor.#033[00m
Dec  6 03:16:47 np0005548731 nova_compute[232433]: 2025-12-06 08:16:47.553 232437 DEBUG oslo.service.loopingcall [None req-70b3bfdb-6b0f-4f7f-a665-0951104e049f 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Dec  6 03:16:47 np0005548731 nova_compute[232433]: 2025-12-06 08:16:47.553 232437 DEBUG nova.compute.manager [-] [instance: 1afa9f23-5514-4eb1-9d88-856a8f5e6744] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Dec  6 03:16:47 np0005548731 nova_compute[232433]: 2025-12-06 08:16:47.553 232437 DEBUG nova.network.neutron [-] [instance: 1afa9f23-5514-4eb1-9d88-856a8f5e6744] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Dec  6 03:16:48 np0005548731 nova_compute[232433]: 2025-12-06 08:16:48.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:16:48 np0005548731 nova_compute[232433]: 2025-12-06 08:16:48.105 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  6 03:16:48 np0005548731 nova_compute[232433]: 2025-12-06 08:16:48.105 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  6 03:16:48 np0005548731 nova_compute[232433]: 2025-12-06 08:16:48.137 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] [instance: 1afa9f23-5514-4eb1-9d88-856a8f5e6744] Skipping network cache update for instance because it is being deleted. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9875#033[00m
Dec  6 03:16:48 np0005548731 nova_compute[232433]: 2025-12-06 08:16:48.982 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Acquiring lock "refresh_cache-73127705-004d-4660-88f2-e3758c8f14c0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 03:16:48 np0005548731 nova_compute[232433]: 2025-12-06 08:16:48.983 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Acquired lock "refresh_cache-73127705-004d-4660-88f2-e3758c8f14c0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 03:16:48 np0005548731 nova_compute[232433]: 2025-12-06 08:16:48.984 232437 DEBUG nova.network.neutron [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] [instance: 73127705-004d-4660-88f2-e3758c8f14c0] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Dec  6 03:16:48 np0005548731 nova_compute[232433]: 2025-12-06 08:16:48.984 232437 DEBUG nova.objects.instance [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lazy-loading 'info_cache' on Instance uuid 73127705-004d-4660-88f2-e3758c8f14c0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 03:16:49 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:16:49 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:16:49 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:16:49.003 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:16:49 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:16:49 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:16:49 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:16:49.186 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:16:49 np0005548731 nova_compute[232433]: 2025-12-06 08:16:49.360 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:16:49 np0005548731 nova_compute[232433]: 2025-12-06 08:16:49.375 232437 DEBUG nova.network.neutron [-] [instance: 1afa9f23-5514-4eb1-9d88-856a8f5e6744] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 03:16:49 np0005548731 nova_compute[232433]: 2025-12-06 08:16:49.394 232437 INFO nova.compute.manager [-] [instance: 1afa9f23-5514-4eb1-9d88-856a8f5e6744] Took 1.84 seconds to deallocate network for instance.#033[00m
Dec  6 03:16:49 np0005548731 nova_compute[232433]: 2025-12-06 08:16:49.442 232437 DEBUG nova.compute.manager [req-d5d38f9e-f35f-4cb5-85b6-cb13bab637f6 req-a2d17ac4-2e1d-40c8-8702-db5637b90564 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 1afa9f23-5514-4eb1-9d88-856a8f5e6744] Received event network-vif-deleted-f212f53a-4f91-4f42-93d2-4e480ddb53d4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 03:16:49 np0005548731 nova_compute[232433]: 2025-12-06 08:16:49.618 232437 INFO nova.compute.manager [None req-70b3bfdb-6b0f-4f7f-a665-0951104e049f 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] [instance: 1afa9f23-5514-4eb1-9d88-856a8f5e6744] Took 0.22 seconds to detach 1 volumes for instance.#033[00m
Dec  6 03:16:49 np0005548731 nova_compute[232433]: 2025-12-06 08:16:49.620 232437 DEBUG nova.compute.manager [None req-70b3bfdb-6b0f-4f7f-a665-0951104e049f 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] [instance: 1afa9f23-5514-4eb1-9d88-856a8f5e6744] Deleting volume: a427e5a0-fe95-4f1e-be90-c6d37b0a00af _cleanup_volumes /usr/lib/python3.9/site-packages/nova/compute/manager.py:3217#033[00m
Dec  6 03:16:49 np0005548731 nova_compute[232433]: 2025-12-06 08:16:49.806 232437 DEBUG oslo_concurrency.lockutils [None req-70b3bfdb-6b0f-4f7f-a665-0951104e049f 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:16:49 np0005548731 nova_compute[232433]: 2025-12-06 08:16:49.807 232437 DEBUG oslo_concurrency.lockutils [None req-70b3bfdb-6b0f-4f7f-a665-0951104e049f 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:16:49 np0005548731 nova_compute[232433]: 2025-12-06 08:16:49.874 232437 DEBUG oslo_concurrency.processutils [None req-70b3bfdb-6b0f-4f7f-a665-0951104e049f 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 03:16:50 np0005548731 nova_compute[232433]: 2025-12-06 08:16:50.213 232437 DEBUG nova.network.neutron [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] [instance: 73127705-004d-4660-88f2-e3758c8f14c0] Updating instance_info_cache with network_info: [{"id": "2e36dbbd-5e69-4b13-9b5d-12b3e57951c1", "address": "fa:16:3e:72:c6:6e", "network": {"id": "cf4b9c80-699e-493c-9b5c-f95417bc87b3", "bridge": "br-int", "label": "tempest-network-smoke--1764011039", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.184", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c9b819f443be4cbe852a843ef1c9e3c7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2e36dbbd-5e", "ovs_interfaceid": "2e36dbbd-5e69-4b13-9b5d-12b3e57951c1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 03:16:50 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 03:16:50 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/950012681' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 03:16:50 np0005548731 nova_compute[232433]: 2025-12-06 08:16:50.299 232437 DEBUG oslo_concurrency.processutils [None req-70b3bfdb-6b0f-4f7f-a665-0951104e049f 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.425s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 03:16:50 np0005548731 nova_compute[232433]: 2025-12-06 08:16:50.304 232437 DEBUG nova.compute.provider_tree [None req-70b3bfdb-6b0f-4f7f-a665-0951104e049f 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Inventory has not changed in ProviderTree for provider: 6d00757a-082f-486d-ae84-869a2ba2e6e7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  6 03:16:50 np0005548731 nova_compute[232433]: 2025-12-06 08:16:50.317 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Releasing lock "refresh_cache-73127705-004d-4660-88f2-e3758c8f14c0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 03:16:50 np0005548731 nova_compute[232433]: 2025-12-06 08:16:50.317 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] [instance: 73127705-004d-4660-88f2-e3758c8f14c0] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Dec  6 03:16:50 np0005548731 nova_compute[232433]: 2025-12-06 08:16:50.317 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:16:50 np0005548731 nova_compute[232433]: 2025-12-06 08:16:50.321 232437 DEBUG nova.scheduler.client.report [None req-70b3bfdb-6b0f-4f7f-a665-0951104e049f 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Inventory has not changed for provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  6 03:16:50 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e417 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:16:50 np0005548731 nova_compute[232433]: 2025-12-06 08:16:50.416 232437 DEBUG oslo_concurrency.lockutils [None req-70b3bfdb-6b0f-4f7f-a665-0951104e049f 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.609s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:16:50 np0005548731 nova_compute[232433]: 2025-12-06 08:16:50.558 232437 INFO nova.scheduler.client.report [None req-70b3bfdb-6b0f-4f7f-a665-0951104e049f 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Deleted allocations for instance 1afa9f23-5514-4eb1-9d88-856a8f5e6744#033[00m
Dec  6 03:16:50 np0005548731 nova_compute[232433]: 2025-12-06 08:16:50.648 232437 DEBUG oslo_concurrency.lockutils [None req-70b3bfdb-6b0f-4f7f-a665-0951104e049f 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Lock "1afa9f23-5514-4eb1-9d88-856a8f5e6744" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 6.776s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:16:51 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:16:51 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 03:16:51 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:16:51.006 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 03:16:51 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:16:51 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 03:16:51 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:16:51.189 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 03:16:52 np0005548731 nova_compute[232433]: 2025-12-06 08:16:52.087 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:16:53 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:16:53 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 03:16:53 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:16:53.009 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 03:16:53 np0005548731 nova_compute[232433]: 2025-12-06 08:16:53.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:16:53 np0005548731 nova_compute[232433]: 2025-12-06 08:16:53.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:16:53 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:16:53 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:16:53 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:16:53.192 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:16:53 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e418 e418: 3 total, 3 up, 3 in
Dec  6 03:16:53 np0005548731 ovn_controller[133927]: 2025-12-06T08:16:53Z|00140|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:72:c6:6e 10.100.0.9
Dec  6 03:16:53 np0005548731 ovn_controller[133927]: 2025-12-06T08:16:53Z|00141|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:72:c6:6e 10.100.0.9
Dec  6 03:16:54 np0005548731 nova_compute[232433]: 2025-12-06 08:16:54.365 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:16:55 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:16:55 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:16:55 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:16:55.012 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:16:55 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:16:55 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 03:16:55 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:16:55.194 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 03:16:55 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:16:56 np0005548731 nova_compute[232433]: 2025-12-06 08:16:56.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:16:56 np0005548731 nova_compute[232433]: 2025-12-06 08:16:56.104 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  6 03:16:56 np0005548731 nova_compute[232433]: 2025-12-06 08:16:56.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:16:56 np0005548731 nova_compute[232433]: 2025-12-06 08:16:56.129 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:16:56 np0005548731 nova_compute[232433]: 2025-12-06 08:16:56.130 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:16:56 np0005548731 nova_compute[232433]: 2025-12-06 08:16:56.130 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:16:56 np0005548731 nova_compute[232433]: 2025-12-06 08:16:56.130 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  6 03:16:56 np0005548731 nova_compute[232433]: 2025-12-06 08:16:56.130 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 03:16:56 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 03:16:56 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1484566160' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 03:16:56 np0005548731 nova_compute[232433]: 2025-12-06 08:16:56.593 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.462s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 03:16:56 np0005548731 nova_compute[232433]: 2025-12-06 08:16:56.666 232437 DEBUG nova.virt.libvirt.driver [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] skipping disk for instance-000000cf as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  6 03:16:56 np0005548731 nova_compute[232433]: 2025-12-06 08:16:56.667 232437 DEBUG nova.virt.libvirt.driver [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] skipping disk for instance-000000cf as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  6 03:16:56 np0005548731 nova_compute[232433]: 2025-12-06 08:16:56.874 232437 WARNING nova.virt.libvirt.driver [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  6 03:16:56 np0005548731 nova_compute[232433]: 2025-12-06 08:16:56.875 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=3957MB free_disk=20.90155029296875GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  6 03:16:56 np0005548731 nova_compute[232433]: 2025-12-06 08:16:56.875 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:16:56 np0005548731 nova_compute[232433]: 2025-12-06 08:16:56.876 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:16:57 np0005548731 nova_compute[232433]: 2025-12-06 08:16:57.009 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Instance 73127705-004d-4660-88f2-e3758c8f14c0 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  6 03:16:57 np0005548731 nova_compute[232433]: 2025-12-06 08:16:57.010 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  6 03:16:57 np0005548731 nova_compute[232433]: 2025-12-06 08:16:57.010 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  6 03:16:57 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:16:57 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 03:16:57 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:16:57.015 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 03:16:57 np0005548731 nova_compute[232433]: 2025-12-06 08:16:57.058 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 03:16:57 np0005548731 nova_compute[232433]: 2025-12-06 08:16:57.111 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:16:57 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:16:57 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:16:57 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:16:57.197 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:16:57 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 03:16:57 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1500307962' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 03:16:57 np0005548731 nova_compute[232433]: 2025-12-06 08:16:57.569 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.511s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 03:16:57 np0005548731 nova_compute[232433]: 2025-12-06 08:16:57.577 232437 DEBUG nova.compute.provider_tree [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Inventory has not changed in ProviderTree for provider: 6d00757a-082f-486d-ae84-869a2ba2e6e7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  6 03:16:57 np0005548731 nova_compute[232433]: 2025-12-06 08:16:57.599 232437 DEBUG nova.scheduler.client.report [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Inventory has not changed for provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  6 03:16:57 np0005548731 nova_compute[232433]: 2025-12-06 08:16:57.621 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  6 03:16:57 np0005548731 nova_compute[232433]: 2025-12-06 08:16:57.622 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.746s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:16:59 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:16:59 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:16:59 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:16:59.018 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:16:59 np0005548731 nova_compute[232433]: 2025-12-06 08:16:59.104 232437 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765009004.1033514, 1afa9f23-5514-4eb1-9d88-856a8f5e6744 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 03:16:59 np0005548731 nova_compute[232433]: 2025-12-06 08:16:59.105 232437 INFO nova.compute.manager [-] [instance: 1afa9f23-5514-4eb1-9d88-856a8f5e6744] VM Stopped (Lifecycle Event)#033[00m
Dec  6 03:16:59 np0005548731 nova_compute[232433]: 2025-12-06 08:16:59.122 232437 DEBUG nova.compute.manager [None req-9569924a-0d23-4592-b9ed-23134e3456a5 - - - - - -] [instance: 1afa9f23-5514-4eb1-9d88-856a8f5e6744] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 03:16:59 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:16:59 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:16:59 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:16:59.200 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:16:59 np0005548731 nova_compute[232433]: 2025-12-06 08:16:59.369 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:16:59 np0005548731 nova_compute[232433]: 2025-12-06 08:16:59.622 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:17:00 np0005548731 nova_compute[232433]: 2025-12-06 08:17:00.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:17:00 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e419 e419: 3 total, 3 up, 3 in
Dec  6 03:17:00 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  6 03:17:00 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 03:17:00 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  6 03:17:00 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e419 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:17:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:17:00.920 143965 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:17:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:17:00.920 143965 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:17:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:17:00.921 143965 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:17:01 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:17:01 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:17:01 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:17:01.020 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:17:01 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:17:01 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:17:01 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:17:01.203 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:17:02 np0005548731 nova_compute[232433]: 2025-12-06 08:17:02.091 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:17:03 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:17:03 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:17:03 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:17:03.022 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:17:03 np0005548731 nova_compute[232433]: 2025-12-06 08:17:03.105 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:17:03 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:17:03 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:17:03 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:17:03.207 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:17:04 np0005548731 nova_compute[232433]: 2025-12-06 08:17:04.294 232437 DEBUG nova.compute.manager [req-df6dce0c-6e0b-4410-8285-e39af33f34cc req-af5d580e-0376-4b0b-9daf-e0aa28e601b8 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 73127705-004d-4660-88f2-e3758c8f14c0] Received event network-changed-2e36dbbd-5e69-4b13-9b5d-12b3e57951c1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 03:17:04 np0005548731 nova_compute[232433]: 2025-12-06 08:17:04.294 232437 DEBUG nova.compute.manager [req-df6dce0c-6e0b-4410-8285-e39af33f34cc req-af5d580e-0376-4b0b-9daf-e0aa28e601b8 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 73127705-004d-4660-88f2-e3758c8f14c0] Refreshing instance network info cache due to event network-changed-2e36dbbd-5e69-4b13-9b5d-12b3e57951c1. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  6 03:17:04 np0005548731 nova_compute[232433]: 2025-12-06 08:17:04.295 232437 DEBUG oslo_concurrency.lockutils [req-df6dce0c-6e0b-4410-8285-e39af33f34cc req-af5d580e-0376-4b0b-9daf-e0aa28e601b8 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-73127705-004d-4660-88f2-e3758c8f14c0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 03:17:04 np0005548731 nova_compute[232433]: 2025-12-06 08:17:04.295 232437 DEBUG oslo_concurrency.lockutils [req-df6dce0c-6e0b-4410-8285-e39af33f34cc req-af5d580e-0376-4b0b-9daf-e0aa28e601b8 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-73127705-004d-4660-88f2-e3758c8f14c0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 03:17:04 np0005548731 nova_compute[232433]: 2025-12-06 08:17:04.295 232437 DEBUG nova.network.neutron [req-df6dce0c-6e0b-4410-8285-e39af33f34cc req-af5d580e-0376-4b0b-9daf-e0aa28e601b8 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 73127705-004d-4660-88f2-e3758c8f14c0] Refreshing network info cache for port 2e36dbbd-5e69-4b13-9b5d-12b3e57951c1 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  6 03:17:04 np0005548731 nova_compute[232433]: 2025-12-06 08:17:04.355 232437 DEBUG oslo_concurrency.lockutils [None req-f44e49db-84a5-4387-94c5-09ceca29b28a e07a31bfb9584e07a5620ecf91e88a4b c9b819f443be4cbe852a843ef1c9e3c7 - - default default] Acquiring lock "73127705-004d-4660-88f2-e3758c8f14c0" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:17:04 np0005548731 nova_compute[232433]: 2025-12-06 08:17:04.355 232437 DEBUG oslo_concurrency.lockutils [None req-f44e49db-84a5-4387-94c5-09ceca29b28a e07a31bfb9584e07a5620ecf91e88a4b c9b819f443be4cbe852a843ef1c9e3c7 - - default default] Lock "73127705-004d-4660-88f2-e3758c8f14c0" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:17:04 np0005548731 nova_compute[232433]: 2025-12-06 08:17:04.356 232437 DEBUG oslo_concurrency.lockutils [None req-f44e49db-84a5-4387-94c5-09ceca29b28a e07a31bfb9584e07a5620ecf91e88a4b c9b819f443be4cbe852a843ef1c9e3c7 - - default default] Acquiring lock "73127705-004d-4660-88f2-e3758c8f14c0-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:17:04 np0005548731 nova_compute[232433]: 2025-12-06 08:17:04.356 232437 DEBUG oslo_concurrency.lockutils [None req-f44e49db-84a5-4387-94c5-09ceca29b28a e07a31bfb9584e07a5620ecf91e88a4b c9b819f443be4cbe852a843ef1c9e3c7 - - default default] Lock "73127705-004d-4660-88f2-e3758c8f14c0-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:17:04 np0005548731 nova_compute[232433]: 2025-12-06 08:17:04.356 232437 DEBUG oslo_concurrency.lockutils [None req-f44e49db-84a5-4387-94c5-09ceca29b28a e07a31bfb9584e07a5620ecf91e88a4b c9b819f443be4cbe852a843ef1c9e3c7 - - default default] Lock "73127705-004d-4660-88f2-e3758c8f14c0-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:17:04 np0005548731 nova_compute[232433]: 2025-12-06 08:17:04.358 232437 INFO nova.compute.manager [None req-f44e49db-84a5-4387-94c5-09ceca29b28a e07a31bfb9584e07a5620ecf91e88a4b c9b819f443be4cbe852a843ef1c9e3c7 - - default default] [instance: 73127705-004d-4660-88f2-e3758c8f14c0] Terminating instance#033[00m
Dec  6 03:17:04 np0005548731 nova_compute[232433]: 2025-12-06 08:17:04.359 232437 DEBUG nova.compute.manager [None req-f44e49db-84a5-4387-94c5-09ceca29b28a e07a31bfb9584e07a5620ecf91e88a4b c9b819f443be4cbe852a843ef1c9e3c7 - - default default] [instance: 73127705-004d-4660-88f2-e3758c8f14c0] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec  6 03:17:04 np0005548731 nova_compute[232433]: 2025-12-06 08:17:04.372 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:17:04 np0005548731 kernel: tap2e36dbbd-5e (unregistering): left promiscuous mode
Dec  6 03:17:04 np0005548731 NetworkManager[49182]: <info>  [1765009024.4212] device (tap2e36dbbd-5e): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec  6 03:17:04 np0005548731 nova_compute[232433]: 2025-12-06 08:17:04.439 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:17:04 np0005548731 ovn_controller[133927]: 2025-12-06T08:17:04Z|01068|binding|INFO|Releasing lport 2e36dbbd-5e69-4b13-9b5d-12b3e57951c1 from this chassis (sb_readonly=0)
Dec  6 03:17:04 np0005548731 ovn_controller[133927]: 2025-12-06T08:17:04Z|01069|binding|INFO|Setting lport 2e36dbbd-5e69-4b13-9b5d-12b3e57951c1 down in Southbound
Dec  6 03:17:04 np0005548731 ovn_controller[133927]: 2025-12-06T08:17:04Z|01070|binding|INFO|Removing iface tap2e36dbbd-5e ovn-installed in OVS
Dec  6 03:17:04 np0005548731 nova_compute[232433]: 2025-12-06 08:17:04.443 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:17:04 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:17:04.450 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:72:c6:6e 10.100.0.9'], port_security=['fa:16:3e:72:c6:6e 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '73127705-004d-4660-88f2-e3758c8f14c0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-cf4b9c80-699e-493c-9b5c-f95417bc87b3', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c9b819f443be4cbe852a843ef1c9e3c7', 'neutron:revision_number': '4', 'neutron:security_group_ids': '8eb9efa9-fc9a-4511-8391-9ec775bfc17f bb40e599-4106-4c20-b561-62f2212db0c9', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c077d174-55fe-4d5a-bef2-34f4c94a958a, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], logical_port=2e36dbbd-5e69-4b13-9b5d-12b3e57951c1) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 03:17:04 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:17:04.452 143965 INFO neutron.agent.ovn.metadata.agent [-] Port 2e36dbbd-5e69-4b13-9b5d-12b3e57951c1 in datapath cf4b9c80-699e-493c-9b5c-f95417bc87b3 unbound from our chassis#033[00m
Dec  6 03:17:04 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:17:04.454 143965 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network cf4b9c80-699e-493c-9b5c-f95417bc87b3, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec  6 03:17:04 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:17:04.455 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[6e211056-629d-445e-a6b9-3c9cd26d098f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:17:04 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:17:04.456 143965 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-cf4b9c80-699e-493c-9b5c-f95417bc87b3 namespace which is not needed anymore#033[00m
Dec  6 03:17:04 np0005548731 nova_compute[232433]: 2025-12-06 08:17:04.461 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:17:04 np0005548731 systemd[1]: machine-qemu\x2d107\x2dinstance\x2d000000cf.scope: Deactivated successfully.
Dec  6 03:17:04 np0005548731 systemd[1]: machine-qemu\x2d107\x2dinstance\x2d000000cf.scope: Consumed 13.843s CPU time.
Dec  6 03:17:04 np0005548731 systemd-machined[195355]: Machine qemu-107-instance-000000cf terminated.
Dec  6 03:17:04 np0005548731 nova_compute[232433]: 2025-12-06 08:17:04.600 232437 INFO nova.virt.libvirt.driver [-] [instance: 73127705-004d-4660-88f2-e3758c8f14c0] Instance destroyed successfully.#033[00m
Dec  6 03:17:04 np0005548731 nova_compute[232433]: 2025-12-06 08:17:04.600 232437 DEBUG nova.objects.instance [None req-f44e49db-84a5-4387-94c5-09ceca29b28a e07a31bfb9584e07a5620ecf91e88a4b c9b819f443be4cbe852a843ef1c9e3c7 - - default default] Lazy-loading 'resources' on Instance uuid 73127705-004d-4660-88f2-e3758c8f14c0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 03:17:04 np0005548731 nova_compute[232433]: 2025-12-06 08:17:04.617 232437 DEBUG nova.virt.libvirt.vif [None req-f44e49db-84a5-4387-94c5-09ceca29b28a e07a31bfb9584e07a5620ecf91e88a4b c9b819f443be4cbe852a843ef1c9e3c7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T08:16:28Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-193167232-access_point-539910206',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-193167232-access_point-539910206',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-193167232-acc',id=207,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHMmZWG/O6PqcvpqRlVFaRu5xBPj4FqN4KXTvc680k96sNfIz4Sh3CFmI/fthirdlUn2yG6bxynVBLf1/e7VCngDUtWmrOk2Wxd8iITW+fjCRZcW6KQpfbYi88t8MRnpgg==',key_name='tempest-TestSecurityGroupsBasicOps-1647941493',keypairs=<?>,launch_index=0,launched_at=2025-12-06T08:16:39Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='c9b819f443be4cbe852a843ef1c9e3c7',ramdisk_id='',reservation_id='r-zzimchur',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestSecurityGroupsBasicOps-193167232',owner_user_name='tempest-TestSecurityGroupsBasicOps-193167232-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T08:16:39Z,user_data=None,user_id='e07a31bfb9584e07a5620ecf91e88a4b',uuid=73127705-004d-4660-88f2-e3758c8f14c0,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "2e36dbbd-5e69-4b13-9b5d-12b3e57951c1", "address": "fa:16:3e:72:c6:6e", "network": {"id": "cf4b9c80-699e-493c-9b5c-f95417bc87b3", "bridge": "br-int", "label": "tempest-network-smoke--1764011039", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.184", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c9b819f443be4cbe852a843ef1c9e3c7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2e36dbbd-5e", "ovs_interfaceid": "2e36dbbd-5e69-4b13-9b5d-12b3e57951c1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec  6 03:17:04 np0005548731 nova_compute[232433]: 2025-12-06 08:17:04.619 232437 DEBUG nova.network.os_vif_util [None req-f44e49db-84a5-4387-94c5-09ceca29b28a e07a31bfb9584e07a5620ecf91e88a4b c9b819f443be4cbe852a843ef1c9e3c7 - - default default] Converting VIF {"id": "2e36dbbd-5e69-4b13-9b5d-12b3e57951c1", "address": "fa:16:3e:72:c6:6e", "network": {"id": "cf4b9c80-699e-493c-9b5c-f95417bc87b3", "bridge": "br-int", "label": "tempest-network-smoke--1764011039", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.184", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c9b819f443be4cbe852a843ef1c9e3c7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2e36dbbd-5e", "ovs_interfaceid": "2e36dbbd-5e69-4b13-9b5d-12b3e57951c1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 03:17:04 np0005548731 nova_compute[232433]: 2025-12-06 08:17:04.621 232437 DEBUG nova.network.os_vif_util [None req-f44e49db-84a5-4387-94c5-09ceca29b28a e07a31bfb9584e07a5620ecf91e88a4b c9b819f443be4cbe852a843ef1c9e3c7 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:72:c6:6e,bridge_name='br-int',has_traffic_filtering=True,id=2e36dbbd-5e69-4b13-9b5d-12b3e57951c1,network=Network(cf4b9c80-699e-493c-9b5c-f95417bc87b3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2e36dbbd-5e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 03:17:04 np0005548731 nova_compute[232433]: 2025-12-06 08:17:04.621 232437 DEBUG os_vif [None req-f44e49db-84a5-4387-94c5-09ceca29b28a e07a31bfb9584e07a5620ecf91e88a4b c9b819f443be4cbe852a843ef1c9e3c7 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:72:c6:6e,bridge_name='br-int',has_traffic_filtering=True,id=2e36dbbd-5e69-4b13-9b5d-12b3e57951c1,network=Network(cf4b9c80-699e-493c-9b5c-f95417bc87b3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2e36dbbd-5e') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec  6 03:17:04 np0005548731 nova_compute[232433]: 2025-12-06 08:17:04.623 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:17:04 np0005548731 nova_compute[232433]: 2025-12-06 08:17:04.623 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2e36dbbd-5e, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 03:17:04 np0005548731 nova_compute[232433]: 2025-12-06 08:17:04.625 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:17:04 np0005548731 nova_compute[232433]: 2025-12-06 08:17:04.626 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:17:04 np0005548731 nova_compute[232433]: 2025-12-06 08:17:04.628 232437 INFO os_vif [None req-f44e49db-84a5-4387-94c5-09ceca29b28a e07a31bfb9584e07a5620ecf91e88a4b c9b819f443be4cbe852a843ef1c9e3c7 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:72:c6:6e,bridge_name='br-int',has_traffic_filtering=True,id=2e36dbbd-5e69-4b13-9b5d-12b3e57951c1,network=Network(cf4b9c80-699e-493c-9b5c-f95417bc87b3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2e36dbbd-5e')#033[00m
Dec  6 03:17:04 np0005548731 neutron-haproxy-ovnmeta-cf4b9c80-699e-493c-9b5c-f95417bc87b3[337979]: [NOTICE]   (337983) : haproxy version is 2.8.14-c23fe91
Dec  6 03:17:04 np0005548731 neutron-haproxy-ovnmeta-cf4b9c80-699e-493c-9b5c-f95417bc87b3[337979]: [NOTICE]   (337983) : path to executable is /usr/sbin/haproxy
Dec  6 03:17:04 np0005548731 neutron-haproxy-ovnmeta-cf4b9c80-699e-493c-9b5c-f95417bc87b3[337979]: [WARNING]  (337983) : Exiting Master process...
Dec  6 03:17:04 np0005548731 neutron-haproxy-ovnmeta-cf4b9c80-699e-493c-9b5c-f95417bc87b3[337979]: [ALERT]    (337983) : Current worker (337985) exited with code 143 (Terminated)
Dec  6 03:17:04 np0005548731 neutron-haproxy-ovnmeta-cf4b9c80-699e-493c-9b5c-f95417bc87b3[337979]: [WARNING]  (337983) : All workers exited. Exiting... (0)
Dec  6 03:17:04 np0005548731 systemd[1]: libpod-e40886c67ab8f5853aaf8a0f6c7a95a211335b0bf7d9e3a425a7f7cc78e5e6eb.scope: Deactivated successfully.
Dec  6 03:17:04 np0005548731 podman[338551]: 2025-12-06 08:17:04.664295617 +0000 UTC m=+0.091451689 container died e40886c67ab8f5853aaf8a0f6c7a95a211335b0bf7d9e3a425a7f7cc78e5e6eb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-cf4b9c80-699e-493c-9b5c-f95417bc87b3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2)
Dec  6 03:17:04 np0005548731 nova_compute[232433]: 2025-12-06 08:17:04.673 232437 DEBUG nova.compute.manager [req-cdbff4c9-e7b4-4e3e-b0fd-7fcd83a13b4d req-f27c757e-850c-454a-ab65-ca7108311e41 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 73127705-004d-4660-88f2-e3758c8f14c0] Received event network-vif-unplugged-2e36dbbd-5e69-4b13-9b5d-12b3e57951c1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 03:17:04 np0005548731 nova_compute[232433]: 2025-12-06 08:17:04.674 232437 DEBUG oslo_concurrency.lockutils [req-cdbff4c9-e7b4-4e3e-b0fd-7fcd83a13b4d req-f27c757e-850c-454a-ab65-ca7108311e41 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "73127705-004d-4660-88f2-e3758c8f14c0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:17:04 np0005548731 nova_compute[232433]: 2025-12-06 08:17:04.674 232437 DEBUG oslo_concurrency.lockutils [req-cdbff4c9-e7b4-4e3e-b0fd-7fcd83a13b4d req-f27c757e-850c-454a-ab65-ca7108311e41 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "73127705-004d-4660-88f2-e3758c8f14c0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:17:04 np0005548731 nova_compute[232433]: 2025-12-06 08:17:04.675 232437 DEBUG oslo_concurrency.lockutils [req-cdbff4c9-e7b4-4e3e-b0fd-7fcd83a13b4d req-f27c757e-850c-454a-ab65-ca7108311e41 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "73127705-004d-4660-88f2-e3758c8f14c0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:17:04 np0005548731 nova_compute[232433]: 2025-12-06 08:17:04.675 232437 DEBUG nova.compute.manager [req-cdbff4c9-e7b4-4e3e-b0fd-7fcd83a13b4d req-f27c757e-850c-454a-ab65-ca7108311e41 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 73127705-004d-4660-88f2-e3758c8f14c0] No waiting events found dispatching network-vif-unplugged-2e36dbbd-5e69-4b13-9b5d-12b3e57951c1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 03:17:04 np0005548731 nova_compute[232433]: 2025-12-06 08:17:04.675 232437 DEBUG nova.compute.manager [req-cdbff4c9-e7b4-4e3e-b0fd-7fcd83a13b4d req-f27c757e-850c-454a-ab65-ca7108311e41 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 73127705-004d-4660-88f2-e3758c8f14c0] Received event network-vif-unplugged-2e36dbbd-5e69-4b13-9b5d-12b3e57951c1 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Dec  6 03:17:04 np0005548731 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-e40886c67ab8f5853aaf8a0f6c7a95a211335b0bf7d9e3a425a7f7cc78e5e6eb-userdata-shm.mount: Deactivated successfully.
Dec  6 03:17:04 np0005548731 systemd[1]: var-lib-containers-storage-overlay-aac4503fb981d68335a0686f7edaa32e0b45b16087b820aa53506e181f76e9e3-merged.mount: Deactivated successfully.
Dec  6 03:17:04 np0005548731 podman[338551]: 2025-12-06 08:17:04.70258143 +0000 UTC m=+0.129737512 container cleanup e40886c67ab8f5853aaf8a0f6c7a95a211335b0bf7d9e3a425a7f7cc78e5e6eb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-cf4b9c80-699e-493c-9b5c-f95417bc87b3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125)
Dec  6 03:17:04 np0005548731 systemd[1]: libpod-conmon-e40886c67ab8f5853aaf8a0f6c7a95a211335b0bf7d9e3a425a7f7cc78e5e6eb.scope: Deactivated successfully.
Dec  6 03:17:04 np0005548731 podman[338609]: 2025-12-06 08:17:04.761590297 +0000 UTC m=+0.037306840 container remove e40886c67ab8f5853aaf8a0f6c7a95a211335b0bf7d9e3a425a7f7cc78e5e6eb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-cf4b9c80-699e-493c-9b5c-f95417bc87b3, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Dec  6 03:17:04 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:17:04.767 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[f73c3c14-3ceb-4c7a-8492-c2deedbf8d96]: (4, ('Sat Dec  6 08:17:04 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-cf4b9c80-699e-493c-9b5c-f95417bc87b3 (e40886c67ab8f5853aaf8a0f6c7a95a211335b0bf7d9e3a425a7f7cc78e5e6eb)\ne40886c67ab8f5853aaf8a0f6c7a95a211335b0bf7d9e3a425a7f7cc78e5e6eb\nSat Dec  6 08:17:04 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-cf4b9c80-699e-493c-9b5c-f95417bc87b3 (e40886c67ab8f5853aaf8a0f6c7a95a211335b0bf7d9e3a425a7f7cc78e5e6eb)\ne40886c67ab8f5853aaf8a0f6c7a95a211335b0bf7d9e3a425a7f7cc78e5e6eb\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:17:04 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:17:04.768 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[9b946d78-89b7-4d3e-9066-dad1138dc11f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:17:04 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:17:04.769 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapcf4b9c80-60, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 03:17:04 np0005548731 kernel: tapcf4b9c80-60: left promiscuous mode
Dec  6 03:17:04 np0005548731 nova_compute[232433]: 2025-12-06 08:17:04.774 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:17:04 np0005548731 nova_compute[232433]: 2025-12-06 08:17:04.787 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:17:04 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:17:04.789 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[94fb1966-08d4-4bcd-a2f6-f00227490732]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:17:04 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:17:04.810 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[18313162-397f-4f51-9e63-521c39ff311e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:17:04 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:17:04.811 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[92fef7af-adf1-4b64-99fd-b6616849c9ec]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:17:04 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:17:04.827 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[6f552bac-bb93-4580-b19d-0d720fe73e47]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 926057, 'reachable_time': 44130, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 338625, 'error': None, 'target': 'ovnmeta-cf4b9c80-699e-493c-9b5c-f95417bc87b3', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:17:04 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:17:04.830 144079 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-cf4b9c80-699e-493c-9b5c-f95417bc87b3 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Dec  6 03:17:04 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:17:04.830 144079 DEBUG oslo.privsep.daemon [-] privsep: reply[fa9b8bc1-2c52-4170-a2b4-759d46dee86d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:17:04 np0005548731 systemd[1]: run-netns-ovnmeta\x2dcf4b9c80\x2d699e\x2d493c\x2d9b5c\x2df95417bc87b3.mount: Deactivated successfully.
Dec  6 03:17:05 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:17:05 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:17:05 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:17:05.023 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:17:05 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:17:05 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:17:05 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:17:05.212 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:17:05 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e419 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:17:05 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:17:05.378 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=97, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:ec:b3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '32:72:e7:89:e0:7d'}, ipsec=False) old=SB_Global(nb_cfg=96) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 03:17:05 np0005548731 nova_compute[232433]: 2025-12-06 08:17:05.379 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:17:05 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:17:05.381 143965 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec  6 03:17:05 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:17:05.383 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=9f96b960-b4f2-40bd-ae99-08121f5e8b78, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '97'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 03:17:05 np0005548731 nova_compute[232433]: 2025-12-06 08:17:05.530 232437 INFO nova.virt.libvirt.driver [None req-f44e49db-84a5-4387-94c5-09ceca29b28a e07a31bfb9584e07a5620ecf91e88a4b c9b819f443be4cbe852a843ef1c9e3c7 - - default default] [instance: 73127705-004d-4660-88f2-e3758c8f14c0] Deleting instance files /var/lib/nova/instances/73127705-004d-4660-88f2-e3758c8f14c0_del#033[00m
Dec  6 03:17:05 np0005548731 nova_compute[232433]: 2025-12-06 08:17:05.531 232437 INFO nova.virt.libvirt.driver [None req-f44e49db-84a5-4387-94c5-09ceca29b28a e07a31bfb9584e07a5620ecf91e88a4b c9b819f443be4cbe852a843ef1c9e3c7 - - default default] [instance: 73127705-004d-4660-88f2-e3758c8f14c0] Deletion of /var/lib/nova/instances/73127705-004d-4660-88f2-e3758c8f14c0_del complete#033[00m
Dec  6 03:17:05 np0005548731 nova_compute[232433]: 2025-12-06 08:17:05.637 232437 DEBUG nova.network.neutron [req-df6dce0c-6e0b-4410-8285-e39af33f34cc req-af5d580e-0376-4b0b-9daf-e0aa28e601b8 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 73127705-004d-4660-88f2-e3758c8f14c0] Updated VIF entry in instance network info cache for port 2e36dbbd-5e69-4b13-9b5d-12b3e57951c1. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  6 03:17:05 np0005548731 nova_compute[232433]: 2025-12-06 08:17:05.638 232437 DEBUG nova.network.neutron [req-df6dce0c-6e0b-4410-8285-e39af33f34cc req-af5d580e-0376-4b0b-9daf-e0aa28e601b8 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 73127705-004d-4660-88f2-e3758c8f14c0] Updating instance_info_cache with network_info: [{"id": "2e36dbbd-5e69-4b13-9b5d-12b3e57951c1", "address": "fa:16:3e:72:c6:6e", "network": {"id": "cf4b9c80-699e-493c-9b5c-f95417bc87b3", "bridge": "br-int", "label": "tempest-network-smoke--1764011039", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c9b819f443be4cbe852a843ef1c9e3c7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2e36dbbd-5e", "ovs_interfaceid": "2e36dbbd-5e69-4b13-9b5d-12b3e57951c1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 03:17:05 np0005548731 nova_compute[232433]: 2025-12-06 08:17:05.705 232437 INFO nova.compute.manager [None req-f44e49db-84a5-4387-94c5-09ceca29b28a e07a31bfb9584e07a5620ecf91e88a4b c9b819f443be4cbe852a843ef1c9e3c7 - - default default] [instance: 73127705-004d-4660-88f2-e3758c8f14c0] Took 1.35 seconds to destroy the instance on the hypervisor.#033[00m
Dec  6 03:17:05 np0005548731 nova_compute[232433]: 2025-12-06 08:17:05.705 232437 DEBUG oslo.service.loopingcall [None req-f44e49db-84a5-4387-94c5-09ceca29b28a e07a31bfb9584e07a5620ecf91e88a4b c9b819f443be4cbe852a843ef1c9e3c7 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Dec  6 03:17:05 np0005548731 nova_compute[232433]: 2025-12-06 08:17:05.706 232437 DEBUG nova.compute.manager [-] [instance: 73127705-004d-4660-88f2-e3758c8f14c0] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Dec  6 03:17:05 np0005548731 nova_compute[232433]: 2025-12-06 08:17:05.706 232437 DEBUG nova.network.neutron [-] [instance: 73127705-004d-4660-88f2-e3758c8f14c0] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Dec  6 03:17:05 np0005548731 nova_compute[232433]: 2025-12-06 08:17:05.929 232437 DEBUG oslo_concurrency.lockutils [req-df6dce0c-6e0b-4410-8285-e39af33f34cc req-af5d580e-0376-4b0b-9daf-e0aa28e601b8 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-73127705-004d-4660-88f2-e3758c8f14c0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 03:17:06 np0005548731 nova_compute[232433]: 2025-12-06 08:17:06.560 232437 DEBUG nova.network.neutron [-] [instance: 73127705-004d-4660-88f2-e3758c8f14c0] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 03:17:06 np0005548731 nova_compute[232433]: 2025-12-06 08:17:06.577 232437 INFO nova.compute.manager [-] [instance: 73127705-004d-4660-88f2-e3758c8f14c0] Took 0.87 seconds to deallocate network for instance.#033[00m
Dec  6 03:17:06 np0005548731 nova_compute[232433]: 2025-12-06 08:17:06.619 232437 DEBUG oslo_concurrency.lockutils [None req-f44e49db-84a5-4387-94c5-09ceca29b28a e07a31bfb9584e07a5620ecf91e88a4b c9b819f443be4cbe852a843ef1c9e3c7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:17:06 np0005548731 nova_compute[232433]: 2025-12-06 08:17:06.620 232437 DEBUG oslo_concurrency.lockutils [None req-f44e49db-84a5-4387-94c5-09ceca29b28a e07a31bfb9584e07a5620ecf91e88a4b c9b819f443be4cbe852a843ef1c9e3c7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:17:06 np0005548731 nova_compute[232433]: 2025-12-06 08:17:06.629 232437 DEBUG nova.compute.manager [req-16256442-1b9d-4a34-b831-71e4af744754 req-36b24ab9-c199-4c5f-949b-db76f808f427 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 73127705-004d-4660-88f2-e3758c8f14c0] Received event network-vif-deleted-2e36dbbd-5e69-4b13-9b5d-12b3e57951c1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 03:17:06 np0005548731 nova_compute[232433]: 2025-12-06 08:17:06.671 232437 DEBUG oslo_concurrency.processutils [None req-f44e49db-84a5-4387-94c5-09ceca29b28a e07a31bfb9584e07a5620ecf91e88a4b c9b819f443be4cbe852a843ef1c9e3c7 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 03:17:06 np0005548731 nova_compute[232433]: 2025-12-06 08:17:06.779 232437 DEBUG nova.compute.manager [req-8d967339-a402-47b0-96fe-52f5cc363482 req-b5f68c16-98d5-438a-a1f9-06e595b99f2d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 73127705-004d-4660-88f2-e3758c8f14c0] Received event network-vif-plugged-2e36dbbd-5e69-4b13-9b5d-12b3e57951c1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 03:17:06 np0005548731 nova_compute[232433]: 2025-12-06 08:17:06.779 232437 DEBUG oslo_concurrency.lockutils [req-8d967339-a402-47b0-96fe-52f5cc363482 req-b5f68c16-98d5-438a-a1f9-06e595b99f2d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "73127705-004d-4660-88f2-e3758c8f14c0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:17:06 np0005548731 nova_compute[232433]: 2025-12-06 08:17:06.780 232437 DEBUG oslo_concurrency.lockutils [req-8d967339-a402-47b0-96fe-52f5cc363482 req-b5f68c16-98d5-438a-a1f9-06e595b99f2d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "73127705-004d-4660-88f2-e3758c8f14c0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:17:06 np0005548731 nova_compute[232433]: 2025-12-06 08:17:06.780 232437 DEBUG oslo_concurrency.lockutils [req-8d967339-a402-47b0-96fe-52f5cc363482 req-b5f68c16-98d5-438a-a1f9-06e595b99f2d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "73127705-004d-4660-88f2-e3758c8f14c0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:17:06 np0005548731 nova_compute[232433]: 2025-12-06 08:17:06.780 232437 DEBUG nova.compute.manager [req-8d967339-a402-47b0-96fe-52f5cc363482 req-b5f68c16-98d5-438a-a1f9-06e595b99f2d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 73127705-004d-4660-88f2-e3758c8f14c0] No waiting events found dispatching network-vif-plugged-2e36dbbd-5e69-4b13-9b5d-12b3e57951c1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 03:17:06 np0005548731 nova_compute[232433]: 2025-12-06 08:17:06.780 232437 WARNING nova.compute.manager [req-8d967339-a402-47b0-96fe-52f5cc363482 req-b5f68c16-98d5-438a-a1f9-06e595b99f2d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 73127705-004d-4660-88f2-e3758c8f14c0] Received unexpected event network-vif-plugged-2e36dbbd-5e69-4b13-9b5d-12b3e57951c1 for instance with vm_state deleted and task_state None.#033[00m
Dec  6 03:17:06 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 03:17:06 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 03:17:07 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:17:07 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:17:07 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:17:07.025 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:17:07 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 03:17:07 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/713378447' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 03:17:07 np0005548731 nova_compute[232433]: 2025-12-06 08:17:07.119 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:17:07 np0005548731 nova_compute[232433]: 2025-12-06 08:17:07.135 232437 DEBUG oslo_concurrency.processutils [None req-f44e49db-84a5-4387-94c5-09ceca29b28a e07a31bfb9584e07a5620ecf91e88a4b c9b819f443be4cbe852a843ef1c9e3c7 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 03:17:07 np0005548731 nova_compute[232433]: 2025-12-06 08:17:07.141 232437 DEBUG nova.compute.provider_tree [None req-f44e49db-84a5-4387-94c5-09ceca29b28a e07a31bfb9584e07a5620ecf91e88a4b c9b819f443be4cbe852a843ef1c9e3c7 - - default default] Inventory has not changed in ProviderTree for provider: 6d00757a-082f-486d-ae84-869a2ba2e6e7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  6 03:17:07 np0005548731 nova_compute[232433]: 2025-12-06 08:17:07.158 232437 DEBUG nova.scheduler.client.report [None req-f44e49db-84a5-4387-94c5-09ceca29b28a e07a31bfb9584e07a5620ecf91e88a4b c9b819f443be4cbe852a843ef1c9e3c7 - - default default] Inventory has not changed for provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  6 03:17:07 np0005548731 nova_compute[232433]: 2025-12-06 08:17:07.179 232437 DEBUG oslo_concurrency.lockutils [None req-f44e49db-84a5-4387-94c5-09ceca29b28a e07a31bfb9584e07a5620ecf91e88a4b c9b819f443be4cbe852a843ef1c9e3c7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.559s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:17:07 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:17:07 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:17:07 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:17:07.215 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:17:07 np0005548731 nova_compute[232433]: 2025-12-06 08:17:07.313 232437 INFO nova.scheduler.client.report [None req-f44e49db-84a5-4387-94c5-09ceca29b28a e07a31bfb9584e07a5620ecf91e88a4b c9b819f443be4cbe852a843ef1c9e3c7 - - default default] Deleted allocations for instance 73127705-004d-4660-88f2-e3758c8f14c0#033[00m
Dec  6 03:17:07 np0005548731 nova_compute[232433]: 2025-12-06 08:17:07.421 232437 DEBUG oslo_concurrency.lockutils [None req-f44e49db-84a5-4387-94c5-09ceca29b28a e07a31bfb9584e07a5620ecf91e88a4b c9b819f443be4cbe852a843ef1c9e3c7 - - default default] Lock "73127705-004d-4660-88f2-e3758c8f14c0" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.066s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:17:09 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:17:09 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:17:09 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:17:09.028 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:17:09 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  6 03:17:09 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1028973287' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  6 03:17:09 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  6 03:17:09 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1028973287' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  6 03:17:09 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:17:09 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 03:17:09 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:17:09.217 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 03:17:09 np0005548731 nova_compute[232433]: 2025-12-06 08:17:09.628 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:17:09 np0005548731 nova_compute[232433]: 2025-12-06 08:17:09.708 232437 DEBUG oslo_concurrency.lockutils [None req-b65d101b-9d72-404d-b42c-6e76a0d0b547 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Acquiring lock "02dd8ac7-35d8-410a-9ec7-05f6acfed2a1" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:17:09 np0005548731 nova_compute[232433]: 2025-12-06 08:17:09.708 232437 DEBUG oslo_concurrency.lockutils [None req-b65d101b-9d72-404d-b42c-6e76a0d0b547 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Lock "02dd8ac7-35d8-410a-9ec7-05f6acfed2a1" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:17:09 np0005548731 nova_compute[232433]: 2025-12-06 08:17:09.725 232437 DEBUG nova.compute.manager [None req-b65d101b-9d72-404d-b42c-6e76a0d0b547 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] [instance: 02dd8ac7-35d8-410a-9ec7-05f6acfed2a1] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Dec  6 03:17:09 np0005548731 nova_compute[232433]: 2025-12-06 08:17:09.795 232437 DEBUG oslo_concurrency.lockutils [None req-b65d101b-9d72-404d-b42c-6e76a0d0b547 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:17:09 np0005548731 nova_compute[232433]: 2025-12-06 08:17:09.795 232437 DEBUG oslo_concurrency.lockutils [None req-b65d101b-9d72-404d-b42c-6e76a0d0b547 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:17:09 np0005548731 nova_compute[232433]: 2025-12-06 08:17:09.802 232437 DEBUG nova.virt.hardware [None req-b65d101b-9d72-404d-b42c-6e76a0d0b547 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Dec  6 03:17:09 np0005548731 nova_compute[232433]: 2025-12-06 08:17:09.802 232437 INFO nova.compute.claims [None req-b65d101b-9d72-404d-b42c-6e76a0d0b547 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] [instance: 02dd8ac7-35d8-410a-9ec7-05f6acfed2a1] Claim successful on node compute-2.ctlplane.example.com#033[00m
Dec  6 03:17:09 np0005548731 nova_compute[232433]: 2025-12-06 08:17:09.932 232437 DEBUG oslo_concurrency.processutils [None req-b65d101b-9d72-404d-b42c-6e76a0d0b547 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 03:17:09 np0005548731 podman[338701]: 2025-12-06 08:17:09.946788288 +0000 UTC m=+0.098932371 container health_status 26c2ce9bdb83c6db5ccad7f5b2a7cb4e9354ab0235ad2f942ea42c8feda2142b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Dec  6 03:17:09 np0005548731 podman[338703]: 2025-12-06 08:17:09.963964446 +0000 UTC m=+0.114073109 container health_status ae4fd08cdec31d6ae2a6b91ffff7deb6ca5a8375cb52ee4b685cc6f25796824e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=multipathd)
Dec  6 03:17:09 np0005548731 podman[338702]: 2025-12-06 08:17:09.971853028 +0000 UTC m=+0.132727474 container health_status a118c3becf56cfbcae208065771ebf10a65162bb7b96389971293e534823fb78 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  6 03:17:10 np0005548731 nova_compute[232433]: 2025-12-06 08:17:10.126 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:17:10 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 03:17:10 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/28693135' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 03:17:10 np0005548731 nova_compute[232433]: 2025-12-06 08:17:10.334 232437 DEBUG oslo_concurrency.processutils [None req-b65d101b-9d72-404d-b42c-6e76a0d0b547 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.402s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 03:17:10 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e419 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:17:10 np0005548731 nova_compute[232433]: 2025-12-06 08:17:10.343 232437 DEBUG nova.compute.provider_tree [None req-b65d101b-9d72-404d-b42c-6e76a0d0b547 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Inventory has not changed in ProviderTree for provider: 6d00757a-082f-486d-ae84-869a2ba2e6e7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  6 03:17:10 np0005548731 nova_compute[232433]: 2025-12-06 08:17:10.345 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:17:10 np0005548731 nova_compute[232433]: 2025-12-06 08:17:10.363 232437 DEBUG nova.scheduler.client.report [None req-b65d101b-9d72-404d-b42c-6e76a0d0b547 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Inventory has not changed for provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  6 03:17:10 np0005548731 nova_compute[232433]: 2025-12-06 08:17:10.390 232437 DEBUG oslo_concurrency.lockutils [None req-b65d101b-9d72-404d-b42c-6e76a0d0b547 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.595s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:17:10 np0005548731 nova_compute[232433]: 2025-12-06 08:17:10.391 232437 DEBUG nova.compute.manager [None req-b65d101b-9d72-404d-b42c-6e76a0d0b547 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] [instance: 02dd8ac7-35d8-410a-9ec7-05f6acfed2a1] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Dec  6 03:17:10 np0005548731 nova_compute[232433]: 2025-12-06 08:17:10.442 232437 DEBUG nova.compute.manager [None req-b65d101b-9d72-404d-b42c-6e76a0d0b547 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] [instance: 02dd8ac7-35d8-410a-9ec7-05f6acfed2a1] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Dec  6 03:17:10 np0005548731 nova_compute[232433]: 2025-12-06 08:17:10.442 232437 DEBUG nova.network.neutron [None req-b65d101b-9d72-404d-b42c-6e76a0d0b547 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] [instance: 02dd8ac7-35d8-410a-9ec7-05f6acfed2a1] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Dec  6 03:17:10 np0005548731 nova_compute[232433]: 2025-12-06 08:17:10.469 232437 INFO nova.virt.libvirt.driver [None req-b65d101b-9d72-404d-b42c-6e76a0d0b547 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] [instance: 02dd8ac7-35d8-410a-9ec7-05f6acfed2a1] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Dec  6 03:17:10 np0005548731 nova_compute[232433]: 2025-12-06 08:17:10.486 232437 DEBUG nova.compute.manager [None req-b65d101b-9d72-404d-b42c-6e76a0d0b547 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] [instance: 02dd8ac7-35d8-410a-9ec7-05f6acfed2a1] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Dec  6 03:17:10 np0005548731 nova_compute[232433]: 2025-12-06 08:17:10.544 232437 INFO nova.virt.block_device [None req-b65d101b-9d72-404d-b42c-6e76a0d0b547 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] [instance: 02dd8ac7-35d8-410a-9ec7-05f6acfed2a1] Booting with volume aa997441-9cbb-4e9f-b2e7-f4b6f1e20ab1 at /dev/vda#033[00m
Dec  6 03:17:10 np0005548731 nova_compute[232433]: 2025-12-06 08:17:10.624 232437 DEBUG nova.policy [None req-b65d101b-9d72-404d-b42c-6e76a0d0b547 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '8e8feb4540af4e2caa45a88a9202dbe2', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '4b2dc4b8729f446a9c7ac69ca446f71d', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Dec  6 03:17:10 np0005548731 nova_compute[232433]: 2025-12-06 08:17:10.716 232437 DEBUG os_brick.utils [None req-b65d101b-9d72-404d-b42c-6e76a0d0b547 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.102', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-2.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Dec  6 03:17:10 np0005548731 nova_compute[232433]: 2025-12-06 08:17:10.718 237736 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 03:17:10 np0005548731 nova_compute[232433]: 2025-12-06 08:17:10.739 237736 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.021s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 03:17:10 np0005548731 nova_compute[232433]: 2025-12-06 08:17:10.740 237736 DEBUG oslo.privsep.daemon [-] privsep: reply[222dc0b5-cba7-4df3-8669-957ef43e430e]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:17:10 np0005548731 nova_compute[232433]: 2025-12-06 08:17:10.742 237736 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 03:17:10 np0005548731 nova_compute[232433]: 2025-12-06 08:17:10.752 237736 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.010s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 03:17:10 np0005548731 nova_compute[232433]: 2025-12-06 08:17:10.752 237736 DEBUG oslo.privsep.daemon [-] privsep: reply[811ad2e4-d18e-4463-ad4c-dc6286d17734]: (4, ('InitiatorName=iqn.1994-05.com.redhat:63778d5959f0', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:17:10 np0005548731 nova_compute[232433]: 2025-12-06 08:17:10.755 237736 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 03:17:10 np0005548731 nova_compute[232433]: 2025-12-06 08:17:10.762 237736 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.007s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 03:17:10 np0005548731 nova_compute[232433]: 2025-12-06 08:17:10.762 237736 DEBUG oslo.privsep.daemon [-] privsep: reply[7401ce94-b2b7-4513-a87b-2fcb4c814ee1]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:17:10 np0005548731 nova_compute[232433]: 2025-12-06 08:17:10.764 237736 DEBUG oslo.privsep.daemon [-] privsep: reply[7a3c7a77-46ad-4096-a16c-abe4dcd67f45]: (4, 'ec2492e2-e0d1-43d3-b74f-744a8272a0e6') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:17:10 np0005548731 nova_compute[232433]: 2025-12-06 08:17:10.765 232437 DEBUG oslo_concurrency.processutils [None req-b65d101b-9d72-404d-b42c-6e76a0d0b547 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 03:17:10 np0005548731 nova_compute[232433]: 2025-12-06 08:17:10.810 232437 DEBUG oslo_concurrency.processutils [None req-b65d101b-9d72-404d-b42c-6e76a0d0b547 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] CMD "nvme version" returned: 0 in 0.045s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 03:17:10 np0005548731 nova_compute[232433]: 2025-12-06 08:17:10.815 232437 DEBUG os_brick.initiator.connectors.lightos [None req-b65d101b-9d72-404d-b42c-6e76a0d0b547 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Dec  6 03:17:10 np0005548731 nova_compute[232433]: 2025-12-06 08:17:10.815 232437 DEBUG os_brick.initiator.connectors.lightos [None req-b65d101b-9d72-404d-b42c-6e76a0d0b547 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Dec  6 03:17:10 np0005548731 nova_compute[232433]: 2025-12-06 08:17:10.816 232437 DEBUG os_brick.initiator.connectors.lightos [None req-b65d101b-9d72-404d-b42c-6e76a0d0b547 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:bf3e0a14-a5f8-4123-aa26-e7cad37b879a dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Dec  6 03:17:10 np0005548731 nova_compute[232433]: 2025-12-06 08:17:10.818 232437 DEBUG os_brick.utils [None req-b65d101b-9d72-404d-b42c-6e76a0d0b547 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] <== get_connector_properties: return (99ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.102', 'host': 'compute-2.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:63778d5959f0', 'do_local_attach': False, 'nvme_hostid': 'bf3e0a14-a5f8-4123-aa26-e7cad37b879a', 'system uuid': 'ec2492e2-e0d1-43d3-b74f-744a8272a0e6', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:bf3e0a14-a5f8-4123-aa26-e7cad37b879a', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Dec  6 03:17:10 np0005548731 nova_compute[232433]: 2025-12-06 08:17:10.818 232437 DEBUG nova.virt.block_device [None req-b65d101b-9d72-404d-b42c-6e76a0d0b547 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] [instance: 02dd8ac7-35d8-410a-9ec7-05f6acfed2a1] Updating existing volume attachment record: 555216c0-15c7-4dbd-ad53-603c7a858461 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Dec  6 03:17:11 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:17:11 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 03:17:11 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:17:11.030 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 03:17:11 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:17:11 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:17:11 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:17:11.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:17:11 np0005548731 nova_compute[232433]: 2025-12-06 08:17:11.994 232437 DEBUG nova.compute.manager [None req-b65d101b-9d72-404d-b42c-6e76a0d0b547 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] [instance: 02dd8ac7-35d8-410a-9ec7-05f6acfed2a1] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Dec  6 03:17:11 np0005548731 nova_compute[232433]: 2025-12-06 08:17:11.995 232437 DEBUG nova.virt.libvirt.driver [None req-b65d101b-9d72-404d-b42c-6e76a0d0b547 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] [instance: 02dd8ac7-35d8-410a-9ec7-05f6acfed2a1] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Dec  6 03:17:11 np0005548731 nova_compute[232433]: 2025-12-06 08:17:11.996 232437 INFO nova.virt.libvirt.driver [None req-b65d101b-9d72-404d-b42c-6e76a0d0b547 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] [instance: 02dd8ac7-35d8-410a-9ec7-05f6acfed2a1] Creating image(s)#033[00m
Dec  6 03:17:11 np0005548731 nova_compute[232433]: 2025-12-06 08:17:11.996 232437 DEBUG nova.virt.libvirt.driver [None req-b65d101b-9d72-404d-b42c-6e76a0d0b547 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] [instance: 02dd8ac7-35d8-410a-9ec7-05f6acfed2a1] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859#033[00m
Dec  6 03:17:11 np0005548731 nova_compute[232433]: 2025-12-06 08:17:11.996 232437 DEBUG nova.virt.libvirt.driver [None req-b65d101b-9d72-404d-b42c-6e76a0d0b547 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] [instance: 02dd8ac7-35d8-410a-9ec7-05f6acfed2a1] Ensure instance console log exists: /var/lib/nova/instances/02dd8ac7-35d8-410a-9ec7-05f6acfed2a1/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec  6 03:17:11 np0005548731 nova_compute[232433]: 2025-12-06 08:17:11.997 232437 DEBUG oslo_concurrency.lockutils [None req-b65d101b-9d72-404d-b42c-6e76a0d0b547 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:17:11 np0005548731 nova_compute[232433]: 2025-12-06 08:17:11.997 232437 DEBUG oslo_concurrency.lockutils [None req-b65d101b-9d72-404d-b42c-6e76a0d0b547 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:17:11 np0005548731 nova_compute[232433]: 2025-12-06 08:17:11.997 232437 DEBUG oslo_concurrency.lockutils [None req-b65d101b-9d72-404d-b42c-6e76a0d0b547 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:17:12 np0005548731 nova_compute[232433]: 2025-12-06 08:17:12.062 232437 DEBUG nova.network.neutron [None req-b65d101b-9d72-404d-b42c-6e76a0d0b547 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] [instance: 02dd8ac7-35d8-410a-9ec7-05f6acfed2a1] Successfully created port: f368996f-9dbe-4c0f-8b79-360f8c3c0348 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Dec  6 03:17:12 np0005548731 nova_compute[232433]: 2025-12-06 08:17:12.158 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:17:13 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:17:13 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:17:13 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:17:13.033 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:17:13 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:17:13 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:17:13 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:17:13.224 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:17:13 np0005548731 nova_compute[232433]: 2025-12-06 08:17:13.580 232437 DEBUG nova.network.neutron [None req-b65d101b-9d72-404d-b42c-6e76a0d0b547 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] [instance: 02dd8ac7-35d8-410a-9ec7-05f6acfed2a1] Successfully updated port: f368996f-9dbe-4c0f-8b79-360f8c3c0348 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Dec  6 03:17:13 np0005548731 nova_compute[232433]: 2025-12-06 08:17:13.594 232437 DEBUG oslo_concurrency.lockutils [None req-b65d101b-9d72-404d-b42c-6e76a0d0b547 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Acquiring lock "refresh_cache-02dd8ac7-35d8-410a-9ec7-05f6acfed2a1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 03:17:13 np0005548731 nova_compute[232433]: 2025-12-06 08:17:13.594 232437 DEBUG oslo_concurrency.lockutils [None req-b65d101b-9d72-404d-b42c-6e76a0d0b547 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Acquired lock "refresh_cache-02dd8ac7-35d8-410a-9ec7-05f6acfed2a1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 03:17:13 np0005548731 nova_compute[232433]: 2025-12-06 08:17:13.594 232437 DEBUG nova.network.neutron [None req-b65d101b-9d72-404d-b42c-6e76a0d0b547 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] [instance: 02dd8ac7-35d8-410a-9ec7-05f6acfed2a1] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec  6 03:17:13 np0005548731 nova_compute[232433]: 2025-12-06 08:17:13.723 232437 DEBUG nova.network.neutron [None req-b65d101b-9d72-404d-b42c-6e76a0d0b547 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] [instance: 02dd8ac7-35d8-410a-9ec7-05f6acfed2a1] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec  6 03:17:14 np0005548731 nova_compute[232433]: 2025-12-06 08:17:14.477 232437 DEBUG nova.network.neutron [None req-b65d101b-9d72-404d-b42c-6e76a0d0b547 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] [instance: 02dd8ac7-35d8-410a-9ec7-05f6acfed2a1] Updating instance_info_cache with network_info: [{"id": "f368996f-9dbe-4c0f-8b79-360f8c3c0348", "address": "fa:16:3e:f3:0f:6a", "network": {"id": "b4ef1374-9c77-45a7-8776-50aa60c7d84a", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1664561964-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4b2dc4b8729f446a9c7ac69ca446f71d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf368996f-9d", "ovs_interfaceid": "f368996f-9dbe-4c0f-8b79-360f8c3c0348", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 03:17:14 np0005548731 nova_compute[232433]: 2025-12-06 08:17:14.498 232437 DEBUG oslo_concurrency.lockutils [None req-b65d101b-9d72-404d-b42c-6e76a0d0b547 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Releasing lock "refresh_cache-02dd8ac7-35d8-410a-9ec7-05f6acfed2a1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 03:17:14 np0005548731 nova_compute[232433]: 2025-12-06 08:17:14.498 232437 DEBUG nova.compute.manager [None req-b65d101b-9d72-404d-b42c-6e76a0d0b547 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] [instance: 02dd8ac7-35d8-410a-9ec7-05f6acfed2a1] Instance network_info: |[{"id": "f368996f-9dbe-4c0f-8b79-360f8c3c0348", "address": "fa:16:3e:f3:0f:6a", "network": {"id": "b4ef1374-9c77-45a7-8776-50aa60c7d84a", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1664561964-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4b2dc4b8729f446a9c7ac69ca446f71d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf368996f-9d", "ovs_interfaceid": "f368996f-9dbe-4c0f-8b79-360f8c3c0348", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Dec  6 03:17:14 np0005548731 nova_compute[232433]: 2025-12-06 08:17:14.500 232437 DEBUG nova.virt.libvirt.driver [None req-b65d101b-9d72-404d-b42c-6e76a0d0b547 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] [instance: 02dd8ac7-35d8-410a-9ec7-05f6acfed2a1] Start _get_guest_xml network_info=[{"id": "f368996f-9dbe-4c0f-8b79-360f8c3c0348", "address": "fa:16:3e:f3:0f:6a", "network": {"id": "b4ef1374-9c77-45a7-8776-50aa60c7d84a", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1664561964-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4b2dc4b8729f446a9c7ac69ca446f71d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf368996f-9d", "ovs_interfaceid": "f368996f-9dbe-4c0f-8b79-360f8c3c0348", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'guest_format': None, 'device_type': 'disk', 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-aa997441-9cbb-4e9f-b2e7-f4b6f1e20ab1', 'hosts': ['192.168.122.100', '192.168.122.102', '192.168.122.101'], 'ports': ['6789', '6789', '6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': 'aa997441-9cbb-4e9f-b2e7-f4b6f1e20ab1', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': '02dd8ac7-35d8-410a-9ec7-05f6acfed2a1', 'attached_at': '', 'detached_at': '', 'volume_id': 'aa997441-9cbb-4e9f-b2e7-f4b6f1e20ab1', 'serial': 'aa997441-9cbb-4e9f-b2e7-f4b6f1e20ab1'}, 'disk_bus': 'virtio', 'boot_index': 0, 'delete_on_termination': True, 'mount_device': '/dev/vda', 'attachment_id': '555216c0-15c7-4dbd-ad53-603c7a858461', 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec  6 03:17:14 np0005548731 nova_compute[232433]: 2025-12-06 08:17:14.505 232437 WARNING nova.virt.libvirt.driver [None req-b65d101b-9d72-404d-b42c-6e76a0d0b547 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  6 03:17:14 np0005548731 nova_compute[232433]: 2025-12-06 08:17:14.510 232437 DEBUG nova.virt.libvirt.host [None req-b65d101b-9d72-404d-b42c-6e76a0d0b547 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec  6 03:17:14 np0005548731 nova_compute[232433]: 2025-12-06 08:17:14.511 232437 DEBUG nova.virt.libvirt.host [None req-b65d101b-9d72-404d-b42c-6e76a0d0b547 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec  6 03:17:14 np0005548731 nova_compute[232433]: 2025-12-06 08:17:14.515 232437 DEBUG nova.virt.libvirt.host [None req-b65d101b-9d72-404d-b42c-6e76a0d0b547 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec  6 03:17:14 np0005548731 nova_compute[232433]: 2025-12-06 08:17:14.516 232437 DEBUG nova.virt.libvirt.host [None req-b65d101b-9d72-404d-b42c-6e76a0d0b547 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec  6 03:17:14 np0005548731 nova_compute[232433]: 2025-12-06 08:17:14.517 232437 DEBUG nova.virt.libvirt.driver [None req-b65d101b-9d72-404d-b42c-6e76a0d0b547 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec  6 03:17:14 np0005548731 nova_compute[232433]: 2025-12-06 08:17:14.517 232437 DEBUG nova.virt.hardware [None req-b65d101b-9d72-404d-b42c-6e76a0d0b547 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-06T06:56:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='25848a18-11d9-4f11-80b5-5d005675c76d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec  6 03:17:14 np0005548731 nova_compute[232433]: 2025-12-06 08:17:14.517 232437 DEBUG nova.virt.hardware [None req-b65d101b-9d72-404d-b42c-6e76a0d0b547 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec  6 03:17:14 np0005548731 nova_compute[232433]: 2025-12-06 08:17:14.517 232437 DEBUG nova.virt.hardware [None req-b65d101b-9d72-404d-b42c-6e76a0d0b547 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec  6 03:17:14 np0005548731 nova_compute[232433]: 2025-12-06 08:17:14.518 232437 DEBUG nova.virt.hardware [None req-b65d101b-9d72-404d-b42c-6e76a0d0b547 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec  6 03:17:14 np0005548731 nova_compute[232433]: 2025-12-06 08:17:14.518 232437 DEBUG nova.virt.hardware [None req-b65d101b-9d72-404d-b42c-6e76a0d0b547 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec  6 03:17:14 np0005548731 nova_compute[232433]: 2025-12-06 08:17:14.518 232437 DEBUG nova.virt.hardware [None req-b65d101b-9d72-404d-b42c-6e76a0d0b547 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec  6 03:17:14 np0005548731 nova_compute[232433]: 2025-12-06 08:17:14.518 232437 DEBUG nova.virt.hardware [None req-b65d101b-9d72-404d-b42c-6e76a0d0b547 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec  6 03:17:14 np0005548731 nova_compute[232433]: 2025-12-06 08:17:14.519 232437 DEBUG nova.virt.hardware [None req-b65d101b-9d72-404d-b42c-6e76a0d0b547 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec  6 03:17:14 np0005548731 nova_compute[232433]: 2025-12-06 08:17:14.519 232437 DEBUG nova.virt.hardware [None req-b65d101b-9d72-404d-b42c-6e76a0d0b547 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec  6 03:17:14 np0005548731 nova_compute[232433]: 2025-12-06 08:17:14.519 232437 DEBUG nova.virt.hardware [None req-b65d101b-9d72-404d-b42c-6e76a0d0b547 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec  6 03:17:14 np0005548731 nova_compute[232433]: 2025-12-06 08:17:14.519 232437 DEBUG nova.virt.hardware [None req-b65d101b-9d72-404d-b42c-6e76a0d0b547 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec  6 03:17:14 np0005548731 nova_compute[232433]: 2025-12-06 08:17:14.543 232437 DEBUG nova.storage.rbd_utils [None req-b65d101b-9d72-404d-b42c-6e76a0d0b547 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] rbd image 02dd8ac7-35d8-410a-9ec7-05f6acfed2a1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 03:17:14 np0005548731 nova_compute[232433]: 2025-12-06 08:17:14.547 232437 DEBUG oslo_concurrency.processutils [None req-b65d101b-9d72-404d-b42c-6e76a0d0b547 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 03:17:14 np0005548731 nova_compute[232433]: 2025-12-06 08:17:14.632 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:17:14 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  6 03:17:14 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/686669092' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  6 03:17:14 np0005548731 nova_compute[232433]: 2025-12-06 08:17:14.947 232437 DEBUG oslo_concurrency.processutils [None req-b65d101b-9d72-404d-b42c-6e76a0d0b547 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.400s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 03:17:14 np0005548731 nova_compute[232433]: 2025-12-06 08:17:14.977 232437 DEBUG nova.virt.libvirt.vif [None req-b65d101b-9d72-404d-b42c-6e76a0d0b547 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T08:17:09Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-volume-backed-server-432414758',display_name='tempest-TestVolumeBootPattern-volume-backed-server-432414758',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-testvolumebootpattern-volume-backed-server-432414758',id=208,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEaPA38/V7SDeUJEH1PEQmqBPRdyrr6vTmK0YY+l34Pyl4mgafdDtmsvHmDvkdA4yVsJSOQl578JF9Rc5P9/OWjQks65zuMrvGK8LhjfS2jd2cb0AGhJSBtEZLic7Ae5Rg==',key_name='tempest-keypair-606612846',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4b2dc4b8729f446a9c7ac69ca446f71d',ramdisk_id='',reservation_id='r-s88z9jl4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestVolumeBootPattern-97496240',owner_user_name='tempest-TestVolumeBootPattern-97496240-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T08:17:10Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='8e8feb4540af4e2caa45a88a9202dbe2',uuid=02dd8ac7-35d8-410a-9ec7-05f6acfed2a1,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f368996f-9dbe-4c0f-8b79-360f8c3c0348", "address": "fa:16:3e:f3:0f:6a", "network": {"id": "b4ef1374-9c77-45a7-8776-50aa60c7d84a", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1664561964-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": 
[], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4b2dc4b8729f446a9c7ac69ca446f71d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf368996f-9d", "ovs_interfaceid": "f368996f-9dbe-4c0f-8b79-360f8c3c0348", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec  6 03:17:14 np0005548731 nova_compute[232433]: 2025-12-06 08:17:14.978 232437 DEBUG nova.network.os_vif_util [None req-b65d101b-9d72-404d-b42c-6e76a0d0b547 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Converting VIF {"id": "f368996f-9dbe-4c0f-8b79-360f8c3c0348", "address": "fa:16:3e:f3:0f:6a", "network": {"id": "b4ef1374-9c77-45a7-8776-50aa60c7d84a", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1664561964-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4b2dc4b8729f446a9c7ac69ca446f71d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf368996f-9d", "ovs_interfaceid": "f368996f-9dbe-4c0f-8b79-360f8c3c0348", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 03:17:14 np0005548731 nova_compute[232433]: 2025-12-06 08:17:14.978 232437 DEBUG nova.network.os_vif_util [None req-b65d101b-9d72-404d-b42c-6e76a0d0b547 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f3:0f:6a,bridge_name='br-int',has_traffic_filtering=True,id=f368996f-9dbe-4c0f-8b79-360f8c3c0348,network=Network(b4ef1374-9c77-45a7-8776-50aa60c7d84a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf368996f-9d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 03:17:14 np0005548731 nova_compute[232433]: 2025-12-06 08:17:14.979 232437 DEBUG nova.objects.instance [None req-b65d101b-9d72-404d-b42c-6e76a0d0b547 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Lazy-loading 'pci_devices' on Instance uuid 02dd8ac7-35d8-410a-9ec7-05f6acfed2a1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 03:17:14 np0005548731 nova_compute[232433]: 2025-12-06 08:17:14.996 232437 DEBUG nova.virt.libvirt.driver [None req-b65d101b-9d72-404d-b42c-6e76a0d0b547 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] [instance: 02dd8ac7-35d8-410a-9ec7-05f6acfed2a1] End _get_guest_xml xml=<domain type="kvm">
Dec  6 03:17:14 np0005548731 nova_compute[232433]:  <uuid>02dd8ac7-35d8-410a-9ec7-05f6acfed2a1</uuid>
Dec  6 03:17:14 np0005548731 nova_compute[232433]:  <name>instance-000000d0</name>
Dec  6 03:17:14 np0005548731 nova_compute[232433]:  <memory>131072</memory>
Dec  6 03:17:14 np0005548731 nova_compute[232433]:  <vcpu>1</vcpu>
Dec  6 03:17:14 np0005548731 nova_compute[232433]:  <metadata>
Dec  6 03:17:14 np0005548731 nova_compute[232433]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec  6 03:17:14 np0005548731 nova_compute[232433]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec  6 03:17:14 np0005548731 nova_compute[232433]:      <nova:name>tempest-TestVolumeBootPattern-volume-backed-server-432414758</nova:name>
Dec  6 03:17:14 np0005548731 nova_compute[232433]:      <nova:creationTime>2025-12-06 08:17:14</nova:creationTime>
Dec  6 03:17:14 np0005548731 nova_compute[232433]:      <nova:flavor name="m1.nano">
Dec  6 03:17:14 np0005548731 nova_compute[232433]:        <nova:memory>128</nova:memory>
Dec  6 03:17:14 np0005548731 nova_compute[232433]:        <nova:disk>1</nova:disk>
Dec  6 03:17:14 np0005548731 nova_compute[232433]:        <nova:swap>0</nova:swap>
Dec  6 03:17:14 np0005548731 nova_compute[232433]:        <nova:ephemeral>0</nova:ephemeral>
Dec  6 03:17:14 np0005548731 nova_compute[232433]:        <nova:vcpus>1</nova:vcpus>
Dec  6 03:17:14 np0005548731 nova_compute[232433]:      </nova:flavor>
Dec  6 03:17:14 np0005548731 nova_compute[232433]:      <nova:owner>
Dec  6 03:17:14 np0005548731 nova_compute[232433]:        <nova:user uuid="8e8feb4540af4e2caa45a88a9202dbe2">tempest-TestVolumeBootPattern-97496240-project-member</nova:user>
Dec  6 03:17:14 np0005548731 nova_compute[232433]:        <nova:project uuid="4b2dc4b8729f446a9c7ac69ca446f71d">tempest-TestVolumeBootPattern-97496240</nova:project>
Dec  6 03:17:14 np0005548731 nova_compute[232433]:      </nova:owner>
Dec  6 03:17:14 np0005548731 nova_compute[232433]:      <nova:ports>
Dec  6 03:17:14 np0005548731 nova_compute[232433]:        <nova:port uuid="f368996f-9dbe-4c0f-8b79-360f8c3c0348">
Dec  6 03:17:14 np0005548731 nova_compute[232433]:          <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Dec  6 03:17:14 np0005548731 nova_compute[232433]:        </nova:port>
Dec  6 03:17:14 np0005548731 nova_compute[232433]:      </nova:ports>
Dec  6 03:17:14 np0005548731 nova_compute[232433]:    </nova:instance>
Dec  6 03:17:14 np0005548731 nova_compute[232433]:  </metadata>
Dec  6 03:17:14 np0005548731 nova_compute[232433]:  <sysinfo type="smbios">
Dec  6 03:17:14 np0005548731 nova_compute[232433]:    <system>
Dec  6 03:17:14 np0005548731 nova_compute[232433]:      <entry name="manufacturer">RDO</entry>
Dec  6 03:17:14 np0005548731 nova_compute[232433]:      <entry name="product">OpenStack Compute</entry>
Dec  6 03:17:14 np0005548731 nova_compute[232433]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec  6 03:17:14 np0005548731 nova_compute[232433]:      <entry name="serial">02dd8ac7-35d8-410a-9ec7-05f6acfed2a1</entry>
Dec  6 03:17:14 np0005548731 nova_compute[232433]:      <entry name="uuid">02dd8ac7-35d8-410a-9ec7-05f6acfed2a1</entry>
Dec  6 03:17:14 np0005548731 nova_compute[232433]:      <entry name="family">Virtual Machine</entry>
Dec  6 03:17:14 np0005548731 nova_compute[232433]:    </system>
Dec  6 03:17:14 np0005548731 nova_compute[232433]:  </sysinfo>
Dec  6 03:17:14 np0005548731 nova_compute[232433]:  <os>
Dec  6 03:17:14 np0005548731 nova_compute[232433]:    <type arch="x86_64" machine="q35">hvm</type>
Dec  6 03:17:14 np0005548731 nova_compute[232433]:    <boot dev="hd"/>
Dec  6 03:17:14 np0005548731 nova_compute[232433]:    <smbios mode="sysinfo"/>
Dec  6 03:17:14 np0005548731 nova_compute[232433]:  </os>
Dec  6 03:17:14 np0005548731 nova_compute[232433]:  <features>
Dec  6 03:17:14 np0005548731 nova_compute[232433]:    <acpi/>
Dec  6 03:17:14 np0005548731 nova_compute[232433]:    <apic/>
Dec  6 03:17:14 np0005548731 nova_compute[232433]:    <vmcoreinfo/>
Dec  6 03:17:14 np0005548731 nova_compute[232433]:  </features>
Dec  6 03:17:14 np0005548731 nova_compute[232433]:  <clock offset="utc">
Dec  6 03:17:14 np0005548731 nova_compute[232433]:    <timer name="pit" tickpolicy="delay"/>
Dec  6 03:17:14 np0005548731 nova_compute[232433]:    <timer name="rtc" tickpolicy="catchup"/>
Dec  6 03:17:14 np0005548731 nova_compute[232433]:    <timer name="hpet" present="no"/>
Dec  6 03:17:14 np0005548731 nova_compute[232433]:  </clock>
Dec  6 03:17:14 np0005548731 nova_compute[232433]:  <cpu mode="custom" match="exact">
Dec  6 03:17:14 np0005548731 nova_compute[232433]:    <model>Nehalem</model>
Dec  6 03:17:15 np0005548731 nova_compute[232433]:    <topology sockets="1" cores="1" threads="1"/>
Dec  6 03:17:15 np0005548731 nova_compute[232433]:  </cpu>
Dec  6 03:17:15 np0005548731 nova_compute[232433]:  <devices>
Dec  6 03:17:15 np0005548731 nova_compute[232433]:    <disk type="network" device="cdrom">
Dec  6 03:17:15 np0005548731 nova_compute[232433]:      <driver type="raw" cache="none"/>
Dec  6 03:17:15 np0005548731 nova_compute[232433]:      <source protocol="rbd" name="vms/02dd8ac7-35d8-410a-9ec7-05f6acfed2a1_disk.config">
Dec  6 03:17:15 np0005548731 nova_compute[232433]:        <host name="192.168.122.100" port="6789"/>
Dec  6 03:17:15 np0005548731 nova_compute[232433]:        <host name="192.168.122.102" port="6789"/>
Dec  6 03:17:15 np0005548731 nova_compute[232433]:        <host name="192.168.122.101" port="6789"/>
Dec  6 03:17:15 np0005548731 nova_compute[232433]:      </source>
Dec  6 03:17:15 np0005548731 nova_compute[232433]:      <auth username="openstack">
Dec  6 03:17:15 np0005548731 nova_compute[232433]:        <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec  6 03:17:15 np0005548731 nova_compute[232433]:      </auth>
Dec  6 03:17:15 np0005548731 nova_compute[232433]:      <target dev="sda" bus="sata"/>
Dec  6 03:17:15 np0005548731 nova_compute[232433]:    </disk>
Dec  6 03:17:15 np0005548731 nova_compute[232433]:    <disk type="network" device="disk">
Dec  6 03:17:15 np0005548731 nova_compute[232433]:      <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Dec  6 03:17:15 np0005548731 nova_compute[232433]:      <source protocol="rbd" name="volumes/volume-aa997441-9cbb-4e9f-b2e7-f4b6f1e20ab1">
Dec  6 03:17:15 np0005548731 nova_compute[232433]:        <host name="192.168.122.100" port="6789"/>
Dec  6 03:17:15 np0005548731 nova_compute[232433]:        <host name="192.168.122.102" port="6789"/>
Dec  6 03:17:15 np0005548731 nova_compute[232433]:        <host name="192.168.122.101" port="6789"/>
Dec  6 03:17:15 np0005548731 nova_compute[232433]:      </source>
Dec  6 03:17:15 np0005548731 nova_compute[232433]:      <auth username="openstack">
Dec  6 03:17:15 np0005548731 nova_compute[232433]:        <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec  6 03:17:15 np0005548731 nova_compute[232433]:      </auth>
Dec  6 03:17:15 np0005548731 nova_compute[232433]:      <target dev="vda" bus="virtio"/>
Dec  6 03:17:15 np0005548731 nova_compute[232433]:      <serial>aa997441-9cbb-4e9f-b2e7-f4b6f1e20ab1</serial>
Dec  6 03:17:15 np0005548731 nova_compute[232433]:    </disk>
Dec  6 03:17:15 np0005548731 nova_compute[232433]:    <interface type="ethernet">
Dec  6 03:17:15 np0005548731 nova_compute[232433]:      <mac address="fa:16:3e:f3:0f:6a"/>
Dec  6 03:17:15 np0005548731 nova_compute[232433]:      <model type="virtio"/>
Dec  6 03:17:15 np0005548731 nova_compute[232433]:      <driver name="vhost" rx_queue_size="512"/>
Dec  6 03:17:15 np0005548731 nova_compute[232433]:      <mtu size="1442"/>
Dec  6 03:17:15 np0005548731 nova_compute[232433]:      <target dev="tapf368996f-9d"/>
Dec  6 03:17:15 np0005548731 nova_compute[232433]:    </interface>
Dec  6 03:17:15 np0005548731 nova_compute[232433]:    <serial type="pty">
Dec  6 03:17:15 np0005548731 nova_compute[232433]:      <log file="/var/lib/nova/instances/02dd8ac7-35d8-410a-9ec7-05f6acfed2a1/console.log" append="off"/>
Dec  6 03:17:15 np0005548731 nova_compute[232433]:    </serial>
Dec  6 03:17:15 np0005548731 nova_compute[232433]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec  6 03:17:15 np0005548731 nova_compute[232433]:    <video>
Dec  6 03:17:15 np0005548731 nova_compute[232433]:      <model type="virtio"/>
Dec  6 03:17:15 np0005548731 nova_compute[232433]:    </video>
Dec  6 03:17:15 np0005548731 nova_compute[232433]:    <input type="tablet" bus="usb"/>
Dec  6 03:17:15 np0005548731 nova_compute[232433]:    <rng model="virtio">
Dec  6 03:17:15 np0005548731 nova_compute[232433]:      <backend model="random">/dev/urandom</backend>
Dec  6 03:17:15 np0005548731 nova_compute[232433]:    </rng>
Dec  6 03:17:15 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root"/>
Dec  6 03:17:15 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:17:15 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:17:15 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:17:15 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:17:15 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:17:15 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:17:15 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:17:15 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:17:15 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:17:15 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:17:15 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:17:15 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:17:15 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:17:15 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:17:15 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:17:15 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:17:15 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:17:15 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:17:15 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:17:15 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:17:15 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:17:15 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:17:15 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:17:15 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:17:15 np0005548731 nova_compute[232433]:    <controller type="usb" index="0"/>
Dec  6 03:17:15 np0005548731 nova_compute[232433]:    <memballoon model="virtio">
Dec  6 03:17:15 np0005548731 nova_compute[232433]:      <stats period="10"/>
Dec  6 03:17:15 np0005548731 nova_compute[232433]:    </memballoon>
Dec  6 03:17:15 np0005548731 nova_compute[232433]:  </devices>
Dec  6 03:17:15 np0005548731 nova_compute[232433]: </domain>
Dec  6 03:17:15 np0005548731 nova_compute[232433]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec  6 03:17:15 np0005548731 nova_compute[232433]: 2025-12-06 08:17:14.998 232437 DEBUG nova.compute.manager [None req-b65d101b-9d72-404d-b42c-6e76a0d0b547 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] [instance: 02dd8ac7-35d8-410a-9ec7-05f6acfed2a1] Preparing to wait for external event network-vif-plugged-f368996f-9dbe-4c0f-8b79-360f8c3c0348 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec  6 03:17:15 np0005548731 nova_compute[232433]: 2025-12-06 08:17:14.998 232437 DEBUG oslo_concurrency.lockutils [None req-b65d101b-9d72-404d-b42c-6e76a0d0b547 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Acquiring lock "02dd8ac7-35d8-410a-9ec7-05f6acfed2a1-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:17:15 np0005548731 nova_compute[232433]: 2025-12-06 08:17:14.998 232437 DEBUG oslo_concurrency.lockutils [None req-b65d101b-9d72-404d-b42c-6e76a0d0b547 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Lock "02dd8ac7-35d8-410a-9ec7-05f6acfed2a1-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:17:15 np0005548731 nova_compute[232433]: 2025-12-06 08:17:14.999 232437 DEBUG oslo_concurrency.lockutils [None req-b65d101b-9d72-404d-b42c-6e76a0d0b547 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Lock "02dd8ac7-35d8-410a-9ec7-05f6acfed2a1-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:17:15 np0005548731 nova_compute[232433]: 2025-12-06 08:17:14.999 232437 DEBUG nova.virt.libvirt.vif [None req-b65d101b-9d72-404d-b42c-6e76a0d0b547 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T08:17:09Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-volume-backed-server-432414758',display_name='tempest-TestVolumeBootPattern-volume-backed-server-432414758',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-testvolumebootpattern-volume-backed-server-432414758',id=208,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEaPA38/V7SDeUJEH1PEQmqBPRdyrr6vTmK0YY+l34Pyl4mgafdDtmsvHmDvkdA4yVsJSOQl578JF9Rc5P9/OWjQks65zuMrvGK8LhjfS2jd2cb0AGhJSBtEZLic7Ae5Rg==',key_name='tempest-keypair-606612846',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4b2dc4b8729f446a9c7ac69ca446f71d',ramdisk_id='',reservation_id='r-s88z9jl4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestVolumeBootPattern-97496240',owner_user_name='tempest-TestVolumeBootPattern-97496240-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T08:17:10Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='8e8feb4540af4e2caa45a88a9202dbe2',uuid=02dd8ac7-35d8-410a-9ec7-05f6acfed2a1,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f368996f-9dbe-4c0f-8b79-360f8c3c0348", "address": "fa:16:3e:f3:0f:6a", "network": {"id": "b4ef1374-9c77-45a7-8776-50aa60c7d84a", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1664561964-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], 
"routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4b2dc4b8729f446a9c7ac69ca446f71d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf368996f-9d", "ovs_interfaceid": "f368996f-9dbe-4c0f-8b79-360f8c3c0348", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec  6 03:17:15 np0005548731 nova_compute[232433]: 2025-12-06 08:17:15.000 232437 DEBUG nova.network.os_vif_util [None req-b65d101b-9d72-404d-b42c-6e76a0d0b547 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Converting VIF {"id": "f368996f-9dbe-4c0f-8b79-360f8c3c0348", "address": "fa:16:3e:f3:0f:6a", "network": {"id": "b4ef1374-9c77-45a7-8776-50aa60c7d84a", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1664561964-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4b2dc4b8729f446a9c7ac69ca446f71d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf368996f-9d", "ovs_interfaceid": "f368996f-9dbe-4c0f-8b79-360f8c3c0348", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 03:17:15 np0005548731 nova_compute[232433]: 2025-12-06 08:17:15.000 232437 DEBUG nova.network.os_vif_util [None req-b65d101b-9d72-404d-b42c-6e76a0d0b547 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f3:0f:6a,bridge_name='br-int',has_traffic_filtering=True,id=f368996f-9dbe-4c0f-8b79-360f8c3c0348,network=Network(b4ef1374-9c77-45a7-8776-50aa60c7d84a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf368996f-9d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 03:17:15 np0005548731 nova_compute[232433]: 2025-12-06 08:17:15.000 232437 DEBUG os_vif [None req-b65d101b-9d72-404d-b42c-6e76a0d0b547 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:f3:0f:6a,bridge_name='br-int',has_traffic_filtering=True,id=f368996f-9dbe-4c0f-8b79-360f8c3c0348,network=Network(b4ef1374-9c77-45a7-8776-50aa60c7d84a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf368996f-9d') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec  6 03:17:15 np0005548731 nova_compute[232433]: 2025-12-06 08:17:15.001 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:17:15 np0005548731 nova_compute[232433]: 2025-12-06 08:17:15.001 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 03:17:15 np0005548731 nova_compute[232433]: 2025-12-06 08:17:15.002 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  6 03:17:15 np0005548731 nova_compute[232433]: 2025-12-06 08:17:15.004 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:17:15 np0005548731 nova_compute[232433]: 2025-12-06 08:17:15.005 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf368996f-9d, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 03:17:15 np0005548731 nova_compute[232433]: 2025-12-06 08:17:15.005 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapf368996f-9d, col_values=(('external_ids', {'iface-id': 'f368996f-9dbe-4c0f-8b79-360f8c3c0348', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:f3:0f:6a', 'vm-uuid': '02dd8ac7-35d8-410a-9ec7-05f6acfed2a1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 03:17:15 np0005548731 nova_compute[232433]: 2025-12-06 08:17:15.006 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:17:15 np0005548731 NetworkManager[49182]: <info>  [1765009035.0077] manager: (tapf368996f-9d): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/506)
Dec  6 03:17:15 np0005548731 nova_compute[232433]: 2025-12-06 08:17:15.009 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  6 03:17:15 np0005548731 nova_compute[232433]: 2025-12-06 08:17:15.013 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:17:15 np0005548731 nova_compute[232433]: 2025-12-06 08:17:15.014 232437 INFO os_vif [None req-b65d101b-9d72-404d-b42c-6e76a0d0b547 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:f3:0f:6a,bridge_name='br-int',has_traffic_filtering=True,id=f368996f-9dbe-4c0f-8b79-360f8c3c0348,network=Network(b4ef1374-9c77-45a7-8776-50aa60c7d84a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf368996f-9d')#033[00m
Dec  6 03:17:15 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:17:15 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:17:15 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:17:15.035 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:17:15 np0005548731 nova_compute[232433]: 2025-12-06 08:17:15.070 232437 DEBUG nova.virt.libvirt.driver [None req-b65d101b-9d72-404d-b42c-6e76a0d0b547 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  6 03:17:15 np0005548731 nova_compute[232433]: 2025-12-06 08:17:15.070 232437 DEBUG nova.virt.libvirt.driver [None req-b65d101b-9d72-404d-b42c-6e76a0d0b547 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  6 03:17:15 np0005548731 nova_compute[232433]: 2025-12-06 08:17:15.070 232437 DEBUG nova.virt.libvirt.driver [None req-b65d101b-9d72-404d-b42c-6e76a0d0b547 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] No VIF found with MAC fa:16:3e:f3:0f:6a, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec  6 03:17:15 np0005548731 nova_compute[232433]: 2025-12-06 08:17:15.071 232437 INFO nova.virt.libvirt.driver [None req-b65d101b-9d72-404d-b42c-6e76a0d0b547 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] [instance: 02dd8ac7-35d8-410a-9ec7-05f6acfed2a1] Using config drive#033[00m
Dec  6 03:17:15 np0005548731 nova_compute[232433]: 2025-12-06 08:17:15.096 232437 DEBUG nova.storage.rbd_utils [None req-b65d101b-9d72-404d-b42c-6e76a0d0b547 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] rbd image 02dd8ac7-35d8-410a-9ec7-05f6acfed2a1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 03:17:15 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:17:15 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:17:15 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:17:15.227 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:17:15 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e419 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:17:15 np0005548731 nova_compute[232433]: 2025-12-06 08:17:15.634 232437 DEBUG nova.compute.manager [req-ff940955-11b4-44c5-a931-f81786af79a8 req-b94f0e1d-c41e-4f0f-afbd-11f19299e205 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 02dd8ac7-35d8-410a-9ec7-05f6acfed2a1] Received event network-changed-f368996f-9dbe-4c0f-8b79-360f8c3c0348 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 03:17:15 np0005548731 nova_compute[232433]: 2025-12-06 08:17:15.635 232437 DEBUG nova.compute.manager [req-ff940955-11b4-44c5-a931-f81786af79a8 req-b94f0e1d-c41e-4f0f-afbd-11f19299e205 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 02dd8ac7-35d8-410a-9ec7-05f6acfed2a1] Refreshing instance network info cache due to event network-changed-f368996f-9dbe-4c0f-8b79-360f8c3c0348. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  6 03:17:15 np0005548731 nova_compute[232433]: 2025-12-06 08:17:15.635 232437 DEBUG oslo_concurrency.lockutils [req-ff940955-11b4-44c5-a931-f81786af79a8 req-b94f0e1d-c41e-4f0f-afbd-11f19299e205 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-02dd8ac7-35d8-410a-9ec7-05f6acfed2a1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 03:17:15 np0005548731 nova_compute[232433]: 2025-12-06 08:17:15.636 232437 DEBUG oslo_concurrency.lockutils [req-ff940955-11b4-44c5-a931-f81786af79a8 req-b94f0e1d-c41e-4f0f-afbd-11f19299e205 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-02dd8ac7-35d8-410a-9ec7-05f6acfed2a1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 03:17:15 np0005548731 nova_compute[232433]: 2025-12-06 08:17:15.636 232437 DEBUG nova.network.neutron [req-ff940955-11b4-44c5-a931-f81786af79a8 req-b94f0e1d-c41e-4f0f-afbd-11f19299e205 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 02dd8ac7-35d8-410a-9ec7-05f6acfed2a1] Refreshing network info cache for port f368996f-9dbe-4c0f-8b79-360f8c3c0348 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  6 03:17:17 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:17:17 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.003000071s ======
Dec  6 03:17:17 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:17:17.037 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.003000071s
Dec  6 03:17:17 np0005548731 nova_compute[232433]: 2025-12-06 08:17:17.090 232437 INFO nova.virt.libvirt.driver [None req-b65d101b-9d72-404d-b42c-6e76a0d0b547 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] [instance: 02dd8ac7-35d8-410a-9ec7-05f6acfed2a1] Creating config drive at /var/lib/nova/instances/02dd8ac7-35d8-410a-9ec7-05f6acfed2a1/disk.config#033[00m
Dec  6 03:17:17 np0005548731 nova_compute[232433]: 2025-12-06 08:17:17.096 232437 DEBUG oslo_concurrency.processutils [None req-b65d101b-9d72-404d-b42c-6e76a0d0b547 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/02dd8ac7-35d8-410a-9ec7-05f6acfed2a1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpmyqyo3dx execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 03:17:17 np0005548731 nova_compute[232433]: 2025-12-06 08:17:17.217 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:17:17 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:17:17 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:17:17 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:17:17.230 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:17:17 np0005548731 nova_compute[232433]: 2025-12-06 08:17:17.250 232437 DEBUG oslo_concurrency.processutils [None req-b65d101b-9d72-404d-b42c-6e76a0d0b547 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/02dd8ac7-35d8-410a-9ec7-05f6acfed2a1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpmyqyo3dx" returned: 0 in 0.154s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 03:17:17 np0005548731 nova_compute[232433]: 2025-12-06 08:17:17.286 232437 DEBUG nova.storage.rbd_utils [None req-b65d101b-9d72-404d-b42c-6e76a0d0b547 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] rbd image 02dd8ac7-35d8-410a-9ec7-05f6acfed2a1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 03:17:17 np0005548731 nova_compute[232433]: 2025-12-06 08:17:17.290 232437 DEBUG oslo_concurrency.processutils [None req-b65d101b-9d72-404d-b42c-6e76a0d0b547 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/02dd8ac7-35d8-410a-9ec7-05f6acfed2a1/disk.config 02dd8ac7-35d8-410a-9ec7-05f6acfed2a1_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 03:17:17 np0005548731 nova_compute[232433]: 2025-12-06 08:17:17.479 232437 DEBUG oslo_concurrency.processutils [None req-b65d101b-9d72-404d-b42c-6e76a0d0b547 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/02dd8ac7-35d8-410a-9ec7-05f6acfed2a1/disk.config 02dd8ac7-35d8-410a-9ec7-05f6acfed2a1_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.189s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 03:17:17 np0005548731 nova_compute[232433]: 2025-12-06 08:17:17.481 232437 INFO nova.virt.libvirt.driver [None req-b65d101b-9d72-404d-b42c-6e76a0d0b547 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] [instance: 02dd8ac7-35d8-410a-9ec7-05f6acfed2a1] Deleting local config drive /var/lib/nova/instances/02dd8ac7-35d8-410a-9ec7-05f6acfed2a1/disk.config because it was imported into RBD.#033[00m
Dec  6 03:17:17 np0005548731 kernel: tapf368996f-9d: entered promiscuous mode
Dec  6 03:17:17 np0005548731 NetworkManager[49182]: <info>  [1765009037.5401] manager: (tapf368996f-9d): new Tun device (/org/freedesktop/NetworkManager/Devices/507)
Dec  6 03:17:17 np0005548731 nova_compute[232433]: 2025-12-06 08:17:17.541 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:17:17 np0005548731 ovn_controller[133927]: 2025-12-06T08:17:17Z|01071|binding|INFO|Claiming lport f368996f-9dbe-4c0f-8b79-360f8c3c0348 for this chassis.
Dec  6 03:17:17 np0005548731 ovn_controller[133927]: 2025-12-06T08:17:17Z|01072|binding|INFO|f368996f-9dbe-4c0f-8b79-360f8c3c0348: Claiming fa:16:3e:f3:0f:6a 10.100.0.10
Dec  6 03:17:17 np0005548731 nova_compute[232433]: 2025-12-06 08:17:17.550 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:17:17 np0005548731 systemd-udevd[338907]: Network interface NamePolicy= disabled on kernel command line.
Dec  6 03:17:17 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:17:17.571 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f3:0f:6a 10.100.0.10'], port_security=['fa:16:3e:f3:0f:6a 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '02dd8ac7-35d8-410a-9ec7-05f6acfed2a1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b4ef1374-9c77-45a7-8776-50aa60c7d84a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4b2dc4b8729f446a9c7ac69ca446f71d', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'aaf47155-388f-40f5-8da3-afcf909cfc50', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=60eec70d-8996-4225-9077-6d0f2705560a, chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], logical_port=f368996f-9dbe-4c0f-8b79-360f8c3c0348) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 03:17:17 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:17:17.574 143965 INFO neutron.agent.ovn.metadata.agent [-] Port f368996f-9dbe-4c0f-8b79-360f8c3c0348 in datapath b4ef1374-9c77-45a7-8776-50aa60c7d84a bound to our chassis#033[00m
Dec  6 03:17:17 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:17:17.576 143965 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network b4ef1374-9c77-45a7-8776-50aa60c7d84a#033[00m
Dec  6 03:17:17 np0005548731 systemd-machined[195355]: New machine qemu-109-instance-000000d0.
Dec  6 03:17:17 np0005548731 NetworkManager[49182]: <info>  [1765009037.5836] device (tapf368996f-9d): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec  6 03:17:17 np0005548731 NetworkManager[49182]: <info>  [1765009037.5846] device (tapf368996f-9d): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec  6 03:17:17 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:17:17.590 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[56ae8fa5-fed6-42f5-a3ca-9d116eae7aaf]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:17:17 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:17:17.591 143965 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapb4ef1374-91 in ovnmeta-b4ef1374-9c77-45a7-8776-50aa60c7d84a namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Dec  6 03:17:17 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:17:17.592 236668 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapb4ef1374-90 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Dec  6 03:17:17 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:17:17.592 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[dde1efea-aee2-4cc5-ad9e-d7d07991a9a4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:17:17 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:17:17.594 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[b47ce22e-3705-472d-8216-bbdc2c3d6376]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:17:17 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:17:17.608 144079 DEBUG oslo.privsep.daemon [-] privsep: reply[6d4204c8-6dc4-4ff3-86e4-1bb3d0641afe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:17:17 np0005548731 systemd[1]: Started Virtual Machine qemu-109-instance-000000d0.
Dec  6 03:17:17 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:17:17.636 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[287408b6-5f72-4459-9a67-5d0c054601d8]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:17:17 np0005548731 ovn_controller[133927]: 2025-12-06T08:17:17Z|01073|binding|INFO|Setting lport f368996f-9dbe-4c0f-8b79-360f8c3c0348 ovn-installed in OVS
Dec  6 03:17:17 np0005548731 ovn_controller[133927]: 2025-12-06T08:17:17Z|01074|binding|INFO|Setting lport f368996f-9dbe-4c0f-8b79-360f8c3c0348 up in Southbound
Dec  6 03:17:17 np0005548731 nova_compute[232433]: 2025-12-06 08:17:17.648 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:17:17 np0005548731 nova_compute[232433]: 2025-12-06 08:17:17.649 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:17:17 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:17:17.672 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[888d6939-ba7a-4afc-b4ce-4257b9f195ad]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:17:17 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:17:17.678 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[c6db9158-02af-4d71-8463-91fc34abf3df]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:17:17 np0005548731 systemd-udevd[338913]: Network interface NamePolicy= disabled on kernel command line.
Dec  6 03:17:17 np0005548731 NetworkManager[49182]: <info>  [1765009037.6789] manager: (tapb4ef1374-90): new Veth device (/org/freedesktop/NetworkManager/Devices/508)
Dec  6 03:17:17 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:17:17.717 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[32ce1699-13e8-4560-914b-bfbb088b969b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:17:17 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:17:17.719 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[14652174-4067-45d7-a2f5-1b83d74ca48f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:17:17 np0005548731 NetworkManager[49182]: <info>  [1765009037.7429] device (tapb4ef1374-90): carrier: link connected
Dec  6 03:17:17 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:17:17.746 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[86b67e19-e49b-43ce-a88b-bb5a6d595977]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:17:17 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:17:17.769 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[59a60e49-4032-4dc2-af16-aec2f3dbb0a5]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb4ef1374-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f9:d4:b8'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 329], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 929974, 'reachable_time': 18798, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 338943, 'error': None, 'target': 'ovnmeta-b4ef1374-9c77-45a7-8776-50aa60c7d84a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:17:17 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:17:17.785 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[530f4d29-b055-4c27-948e-e4352d5c3eae]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fef9:d4b8'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 929974, 'tstamp': 929974}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 338944, 'error': None, 'target': 'ovnmeta-b4ef1374-9c77-45a7-8776-50aa60c7d84a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:17:17 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:17:17.802 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[003b6c9c-c4d8-4303-be05-ee4bb2ee0fdb]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb4ef1374-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f9:d4:b8'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 329], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 929974, 'reachable_time': 18798, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 338945, 'error': None, 'target': 'ovnmeta-b4ef1374-9c77-45a7-8776-50aa60c7d84a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:17:17 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:17:17.839 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[ca411f41-32aa-414a-b8e6-b4a2537b9756]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:17:17 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:17:17.901 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[214fe730-bc22-4a38-9597-5cc7b343467a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:17:17 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:17:17.902 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb4ef1374-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 03:17:17 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:17:17.902 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  6 03:17:17 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:17:17.903 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb4ef1374-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 03:17:17 np0005548731 nova_compute[232433]: 2025-12-06 08:17:17.905 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:17:17 np0005548731 NetworkManager[49182]: <info>  [1765009037.9059] manager: (tapb4ef1374-90): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/509)
Dec  6 03:17:17 np0005548731 kernel: tapb4ef1374-90: entered promiscuous mode
Dec  6 03:17:17 np0005548731 nova_compute[232433]: 2025-12-06 08:17:17.907 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:17:17 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:17:17.908 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapb4ef1374-90, col_values=(('external_ids', {'iface-id': '32c82c25-6496-4edd-ba74-1791824b99ab'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 03:17:17 np0005548731 nova_compute[232433]: 2025-12-06 08:17:17.909 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:17:17 np0005548731 ovn_controller[133927]: 2025-12-06T08:17:17Z|01075|binding|INFO|Releasing lport 32c82c25-6496-4edd-ba74-1791824b99ab from this chassis (sb_readonly=0)
Dec  6 03:17:17 np0005548731 nova_compute[232433]: 2025-12-06 08:17:17.928 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:17:17 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:17:17.929 143965 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/b4ef1374-9c77-45a7-8776-50aa60c7d84a.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/b4ef1374-9c77-45a7-8776-50aa60c7d84a.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Dec  6 03:17:17 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:17:17.930 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[012e9d6f-70e5-4368-90f5-86be5508939b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:17:17 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:17:17.931 143965 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec  6 03:17:17 np0005548731 ovn_metadata_agent[143960]: global
Dec  6 03:17:17 np0005548731 ovn_metadata_agent[143960]:    log         /dev/log local0 debug
Dec  6 03:17:17 np0005548731 ovn_metadata_agent[143960]:    log-tag     haproxy-metadata-proxy-b4ef1374-9c77-45a7-8776-50aa60c7d84a
Dec  6 03:17:17 np0005548731 ovn_metadata_agent[143960]:    user        root
Dec  6 03:17:17 np0005548731 ovn_metadata_agent[143960]:    group       root
Dec  6 03:17:17 np0005548731 ovn_metadata_agent[143960]:    maxconn     1024
Dec  6 03:17:17 np0005548731 ovn_metadata_agent[143960]:    pidfile     /var/lib/neutron/external/pids/b4ef1374-9c77-45a7-8776-50aa60c7d84a.pid.haproxy
Dec  6 03:17:17 np0005548731 ovn_metadata_agent[143960]:    daemon
Dec  6 03:17:17 np0005548731 ovn_metadata_agent[143960]: 
Dec  6 03:17:17 np0005548731 ovn_metadata_agent[143960]: defaults
Dec  6 03:17:17 np0005548731 ovn_metadata_agent[143960]:    log global
Dec  6 03:17:17 np0005548731 ovn_metadata_agent[143960]:    mode http
Dec  6 03:17:17 np0005548731 ovn_metadata_agent[143960]:    option httplog
Dec  6 03:17:17 np0005548731 ovn_metadata_agent[143960]:    option dontlognull
Dec  6 03:17:17 np0005548731 ovn_metadata_agent[143960]:    option http-server-close
Dec  6 03:17:17 np0005548731 ovn_metadata_agent[143960]:    option forwardfor
Dec  6 03:17:17 np0005548731 ovn_metadata_agent[143960]:    retries                 3
Dec  6 03:17:17 np0005548731 ovn_metadata_agent[143960]:    timeout http-request    30s
Dec  6 03:17:17 np0005548731 ovn_metadata_agent[143960]:    timeout connect         30s
Dec  6 03:17:17 np0005548731 ovn_metadata_agent[143960]:    timeout client          32s
Dec  6 03:17:17 np0005548731 ovn_metadata_agent[143960]:    timeout server          32s
Dec  6 03:17:17 np0005548731 ovn_metadata_agent[143960]:    timeout http-keep-alive 30s
Dec  6 03:17:17 np0005548731 ovn_metadata_agent[143960]: 
Dec  6 03:17:17 np0005548731 ovn_metadata_agent[143960]: 
Dec  6 03:17:17 np0005548731 ovn_metadata_agent[143960]: listen listener
Dec  6 03:17:17 np0005548731 ovn_metadata_agent[143960]:    bind 169.254.169.254:80
Dec  6 03:17:17 np0005548731 ovn_metadata_agent[143960]:    server metadata /var/lib/neutron/metadata_proxy
Dec  6 03:17:17 np0005548731 ovn_metadata_agent[143960]:    http-request add-header X-OVN-Network-ID b4ef1374-9c77-45a7-8776-50aa60c7d84a
Dec  6 03:17:17 np0005548731 ovn_metadata_agent[143960]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Dec  6 03:17:17 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:17:17.931 143965 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-b4ef1374-9c77-45a7-8776-50aa60c7d84a', 'env', 'PROCESS_TAG=haproxy-b4ef1374-9c77-45a7-8776-50aa60c7d84a', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/b4ef1374-9c77-45a7-8776-50aa60c7d84a.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Dec  6 03:17:18 np0005548731 nova_compute[232433]: 2025-12-06 08:17:18.090 232437 DEBUG nova.virt.driver [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Emitting event <LifecycleEvent: 1765009038.0898156, 02dd8ac7-35d8-410a-9ec7-05f6acfed2a1 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 03:17:18 np0005548731 nova_compute[232433]: 2025-12-06 08:17:18.091 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 02dd8ac7-35d8-410a-9ec7-05f6acfed2a1] VM Started (Lifecycle Event)#033[00m
Dec  6 03:17:18 np0005548731 nova_compute[232433]: 2025-12-06 08:17:18.120 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 02dd8ac7-35d8-410a-9ec7-05f6acfed2a1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 03:17:18 np0005548731 nova_compute[232433]: 2025-12-06 08:17:18.123 232437 DEBUG nova.virt.driver [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Emitting event <LifecycleEvent: 1765009038.0948586, 02dd8ac7-35d8-410a-9ec7-05f6acfed2a1 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 03:17:18 np0005548731 nova_compute[232433]: 2025-12-06 08:17:18.124 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 02dd8ac7-35d8-410a-9ec7-05f6acfed2a1] VM Paused (Lifecycle Event)#033[00m
Dec  6 03:17:18 np0005548731 nova_compute[232433]: 2025-12-06 08:17:18.147 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 02dd8ac7-35d8-410a-9ec7-05f6acfed2a1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 03:17:18 np0005548731 nova_compute[232433]: 2025-12-06 08:17:18.150 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 02dd8ac7-35d8-410a-9ec7-05f6acfed2a1] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  6 03:17:18 np0005548731 nova_compute[232433]: 2025-12-06 08:17:18.170 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 02dd8ac7-35d8-410a-9ec7-05f6acfed2a1] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  6 03:17:18 np0005548731 podman[339019]: 2025-12-06 08:17:18.278963292 +0000 UTC m=+0.053320599 container create 74798c6cce4ee527d267c872ce6bf079ffdc815edfa5011a15dce59eedba9613 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b4ef1374-9c77-45a7-8776-50aa60c7d84a, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Dec  6 03:17:18 np0005548731 systemd[1]: Started libpod-conmon-74798c6cce4ee527d267c872ce6bf079ffdc815edfa5011a15dce59eedba9613.scope.
Dec  6 03:17:18 np0005548731 systemd[1]: Started libcrun container.
Dec  6 03:17:18 np0005548731 podman[339019]: 2025-12-06 08:17:18.252288992 +0000 UTC m=+0.026646329 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec  6 03:17:18 np0005548731 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d4c865e52bf71c826eebb5fb816e93ab6b6d08d1eb969ae10d249711ba81436/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec  6 03:17:18 np0005548731 podman[339019]: 2025-12-06 08:17:18.368587335 +0000 UTC m=+0.142944722 container init 74798c6cce4ee527d267c872ce6bf079ffdc815edfa5011a15dce59eedba9613 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b4ef1374-9c77-45a7-8776-50aa60c7d84a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251125)
Dec  6 03:17:18 np0005548731 podman[339019]: 2025-12-06 08:17:18.373757871 +0000 UTC m=+0.148115208 container start 74798c6cce4ee527d267c872ce6bf079ffdc815edfa5011a15dce59eedba9613 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b4ef1374-9c77-45a7-8776-50aa60c7d84a, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_managed=true)
Dec  6 03:17:18 np0005548731 neutron-haproxy-ovnmeta-b4ef1374-9c77-45a7-8776-50aa60c7d84a[339034]: [NOTICE]   (339038) : New worker (339040) forked
Dec  6 03:17:18 np0005548731 neutron-haproxy-ovnmeta-b4ef1374-9c77-45a7-8776-50aa60c7d84a[339034]: [NOTICE]   (339038) : Loading success.
Dec  6 03:17:19 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:17:19 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:17:19 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:17:19.042 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:17:19 np0005548731 nova_compute[232433]: 2025-12-06 08:17:19.060 232437 DEBUG nova.compute.manager [req-8deb6f4d-a38a-443a-ae59-cf87e718e17f req-b061ca7c-5fa8-44ec-9c1b-d73699c10c98 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 02dd8ac7-35d8-410a-9ec7-05f6acfed2a1] Received event network-vif-plugged-f368996f-9dbe-4c0f-8b79-360f8c3c0348 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 03:17:19 np0005548731 nova_compute[232433]: 2025-12-06 08:17:19.061 232437 DEBUG oslo_concurrency.lockutils [req-8deb6f4d-a38a-443a-ae59-cf87e718e17f req-b061ca7c-5fa8-44ec-9c1b-d73699c10c98 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "02dd8ac7-35d8-410a-9ec7-05f6acfed2a1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:17:19 np0005548731 nova_compute[232433]: 2025-12-06 08:17:19.062 232437 DEBUG oslo_concurrency.lockutils [req-8deb6f4d-a38a-443a-ae59-cf87e718e17f req-b061ca7c-5fa8-44ec-9c1b-d73699c10c98 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "02dd8ac7-35d8-410a-9ec7-05f6acfed2a1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:17:19 np0005548731 nova_compute[232433]: 2025-12-06 08:17:19.062 232437 DEBUG oslo_concurrency.lockutils [req-8deb6f4d-a38a-443a-ae59-cf87e718e17f req-b061ca7c-5fa8-44ec-9c1b-d73699c10c98 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "02dd8ac7-35d8-410a-9ec7-05f6acfed2a1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:17:19 np0005548731 nova_compute[232433]: 2025-12-06 08:17:19.063 232437 DEBUG nova.compute.manager [req-8deb6f4d-a38a-443a-ae59-cf87e718e17f req-b061ca7c-5fa8-44ec-9c1b-d73699c10c98 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 02dd8ac7-35d8-410a-9ec7-05f6acfed2a1] Processing event network-vif-plugged-f368996f-9dbe-4c0f-8b79-360f8c3c0348 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Dec  6 03:17:19 np0005548731 nova_compute[232433]: 2025-12-06 08:17:19.064 232437 DEBUG nova.compute.manager [None req-b65d101b-9d72-404d-b42c-6e76a0d0b547 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] [instance: 02dd8ac7-35d8-410a-9ec7-05f6acfed2a1] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec  6 03:17:19 np0005548731 nova_compute[232433]: 2025-12-06 08:17:19.068 232437 DEBUG nova.virt.driver [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Emitting event <LifecycleEvent: 1765009039.0678644, 02dd8ac7-35d8-410a-9ec7-05f6acfed2a1 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 03:17:19 np0005548731 nova_compute[232433]: 2025-12-06 08:17:19.068 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 02dd8ac7-35d8-410a-9ec7-05f6acfed2a1] VM Resumed (Lifecycle Event)#033[00m
Dec  6 03:17:19 np0005548731 nova_compute[232433]: 2025-12-06 08:17:19.070 232437 DEBUG nova.virt.libvirt.driver [None req-b65d101b-9d72-404d-b42c-6e76a0d0b547 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] [instance: 02dd8ac7-35d8-410a-9ec7-05f6acfed2a1] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Dec  6 03:17:19 np0005548731 nova_compute[232433]: 2025-12-06 08:17:19.074 232437 INFO nova.virt.libvirt.driver [-] [instance: 02dd8ac7-35d8-410a-9ec7-05f6acfed2a1] Instance spawned successfully.#033[00m
Dec  6 03:17:19 np0005548731 nova_compute[232433]: 2025-12-06 08:17:19.075 232437 DEBUG nova.virt.libvirt.driver [None req-b65d101b-9d72-404d-b42c-6e76a0d0b547 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] [instance: 02dd8ac7-35d8-410a-9ec7-05f6acfed2a1] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Dec  6 03:17:19 np0005548731 nova_compute[232433]: 2025-12-06 08:17:19.092 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 02dd8ac7-35d8-410a-9ec7-05f6acfed2a1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 03:17:19 np0005548731 nova_compute[232433]: 2025-12-06 08:17:19.100 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 02dd8ac7-35d8-410a-9ec7-05f6acfed2a1] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  6 03:17:19 np0005548731 nova_compute[232433]: 2025-12-06 08:17:19.104 232437 DEBUG nova.virt.libvirt.driver [None req-b65d101b-9d72-404d-b42c-6e76a0d0b547 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] [instance: 02dd8ac7-35d8-410a-9ec7-05f6acfed2a1] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 03:17:19 np0005548731 nova_compute[232433]: 2025-12-06 08:17:19.105 232437 DEBUG nova.virt.libvirt.driver [None req-b65d101b-9d72-404d-b42c-6e76a0d0b547 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] [instance: 02dd8ac7-35d8-410a-9ec7-05f6acfed2a1] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 03:17:19 np0005548731 nova_compute[232433]: 2025-12-06 08:17:19.106 232437 DEBUG nova.virt.libvirt.driver [None req-b65d101b-9d72-404d-b42c-6e76a0d0b547 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] [instance: 02dd8ac7-35d8-410a-9ec7-05f6acfed2a1] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 03:17:19 np0005548731 nova_compute[232433]: 2025-12-06 08:17:19.106 232437 DEBUG nova.virt.libvirt.driver [None req-b65d101b-9d72-404d-b42c-6e76a0d0b547 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] [instance: 02dd8ac7-35d8-410a-9ec7-05f6acfed2a1] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 03:17:19 np0005548731 nova_compute[232433]: 2025-12-06 08:17:19.107 232437 DEBUG nova.virt.libvirt.driver [None req-b65d101b-9d72-404d-b42c-6e76a0d0b547 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] [instance: 02dd8ac7-35d8-410a-9ec7-05f6acfed2a1] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 03:17:19 np0005548731 nova_compute[232433]: 2025-12-06 08:17:19.107 232437 DEBUG nova.virt.libvirt.driver [None req-b65d101b-9d72-404d-b42c-6e76a0d0b547 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] [instance: 02dd8ac7-35d8-410a-9ec7-05f6acfed2a1] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 03:17:19 np0005548731 nova_compute[232433]: 2025-12-06 08:17:19.130 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 02dd8ac7-35d8-410a-9ec7-05f6acfed2a1] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  6 03:17:19 np0005548731 nova_compute[232433]: 2025-12-06 08:17:19.166 232437 INFO nova.compute.manager [None req-b65d101b-9d72-404d-b42c-6e76a0d0b547 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] [instance: 02dd8ac7-35d8-410a-9ec7-05f6acfed2a1] Took 7.17 seconds to spawn the instance on the hypervisor.#033[00m
Dec  6 03:17:19 np0005548731 nova_compute[232433]: 2025-12-06 08:17:19.166 232437 DEBUG nova.compute.manager [None req-b65d101b-9d72-404d-b42c-6e76a0d0b547 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] [instance: 02dd8ac7-35d8-410a-9ec7-05f6acfed2a1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 03:17:19 np0005548731 nova_compute[232433]: 2025-12-06 08:17:19.226 232437 INFO nova.compute.manager [None req-b65d101b-9d72-404d-b42c-6e76a0d0b547 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] [instance: 02dd8ac7-35d8-410a-9ec7-05f6acfed2a1] Took 9.45 seconds to build instance.#033[00m
Dec  6 03:17:19 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:17:19 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:17:19 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:17:19.232 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:17:19 np0005548731 nova_compute[232433]: 2025-12-06 08:17:19.248 232437 DEBUG oslo_concurrency.lockutils [None req-b65d101b-9d72-404d-b42c-6e76a0d0b547 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Lock "02dd8ac7-35d8-410a-9ec7-05f6acfed2a1" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.539s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:17:19 np0005548731 nova_compute[232433]: 2025-12-06 08:17:19.597 232437 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765009024.5954592, 73127705-004d-4660-88f2-e3758c8f14c0 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 03:17:19 np0005548731 nova_compute[232433]: 2025-12-06 08:17:19.597 232437 INFO nova.compute.manager [-] [instance: 73127705-004d-4660-88f2-e3758c8f14c0] VM Stopped (Lifecycle Event)#033[00m
Dec  6 03:17:19 np0005548731 nova_compute[232433]: 2025-12-06 08:17:19.620 232437 DEBUG nova.compute.manager [None req-00551a39-a24b-438f-b03d-71cf86185984 - - - - - -] [instance: 73127705-004d-4660-88f2-e3758c8f14c0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 03:17:20 np0005548731 nova_compute[232433]: 2025-12-06 08:17:20.007 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:17:20 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e419 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:17:21 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:17:21 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:17:21 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:17:21.044 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:17:21 np0005548731 nova_compute[232433]: 2025-12-06 08:17:21.115 232437 DEBUG nova.network.neutron [req-ff940955-11b4-44c5-a931-f81786af79a8 req-b94f0e1d-c41e-4f0f-afbd-11f19299e205 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 02dd8ac7-35d8-410a-9ec7-05f6acfed2a1] Updated VIF entry in instance network info cache for port f368996f-9dbe-4c0f-8b79-360f8c3c0348. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  6 03:17:21 np0005548731 nova_compute[232433]: 2025-12-06 08:17:21.117 232437 DEBUG nova.network.neutron [req-ff940955-11b4-44c5-a931-f81786af79a8 req-b94f0e1d-c41e-4f0f-afbd-11f19299e205 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 02dd8ac7-35d8-410a-9ec7-05f6acfed2a1] Updating instance_info_cache with network_info: [{"id": "f368996f-9dbe-4c0f-8b79-360f8c3c0348", "address": "fa:16:3e:f3:0f:6a", "network": {"id": "b4ef1374-9c77-45a7-8776-50aa60c7d84a", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1664561964-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4b2dc4b8729f446a9c7ac69ca446f71d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf368996f-9d", "ovs_interfaceid": "f368996f-9dbe-4c0f-8b79-360f8c3c0348", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 03:17:21 np0005548731 nova_compute[232433]: 2025-12-06 08:17:21.141 232437 DEBUG oslo_concurrency.lockutils [req-ff940955-11b4-44c5-a931-f81786af79a8 req-b94f0e1d-c41e-4f0f-afbd-11f19299e205 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-02dd8ac7-35d8-410a-9ec7-05f6acfed2a1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 03:17:21 np0005548731 nova_compute[232433]: 2025-12-06 08:17:21.171 232437 DEBUG nova.compute.manager [req-63aa8b07-727d-4afb-ba40-cda2216e1ec7 req-309dc479-6b3d-4b10-a0b5-6ffca5a85d60 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 02dd8ac7-35d8-410a-9ec7-05f6acfed2a1] Received event network-vif-plugged-f368996f-9dbe-4c0f-8b79-360f8c3c0348 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 03:17:21 np0005548731 nova_compute[232433]: 2025-12-06 08:17:21.171 232437 DEBUG oslo_concurrency.lockutils [req-63aa8b07-727d-4afb-ba40-cda2216e1ec7 req-309dc479-6b3d-4b10-a0b5-6ffca5a85d60 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "02dd8ac7-35d8-410a-9ec7-05f6acfed2a1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:17:21 np0005548731 nova_compute[232433]: 2025-12-06 08:17:21.171 232437 DEBUG oslo_concurrency.lockutils [req-63aa8b07-727d-4afb-ba40-cda2216e1ec7 req-309dc479-6b3d-4b10-a0b5-6ffca5a85d60 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "02dd8ac7-35d8-410a-9ec7-05f6acfed2a1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:17:21 np0005548731 nova_compute[232433]: 2025-12-06 08:17:21.172 232437 DEBUG oslo_concurrency.lockutils [req-63aa8b07-727d-4afb-ba40-cda2216e1ec7 req-309dc479-6b3d-4b10-a0b5-6ffca5a85d60 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "02dd8ac7-35d8-410a-9ec7-05f6acfed2a1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:17:21 np0005548731 nova_compute[232433]: 2025-12-06 08:17:21.172 232437 DEBUG nova.compute.manager [req-63aa8b07-727d-4afb-ba40-cda2216e1ec7 req-309dc479-6b3d-4b10-a0b5-6ffca5a85d60 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 02dd8ac7-35d8-410a-9ec7-05f6acfed2a1] No waiting events found dispatching network-vif-plugged-f368996f-9dbe-4c0f-8b79-360f8c3c0348 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 03:17:21 np0005548731 nova_compute[232433]: 2025-12-06 08:17:21.173 232437 WARNING nova.compute.manager [req-63aa8b07-727d-4afb-ba40-cda2216e1ec7 req-309dc479-6b3d-4b10-a0b5-6ffca5a85d60 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 02dd8ac7-35d8-410a-9ec7-05f6acfed2a1] Received unexpected event network-vif-plugged-f368996f-9dbe-4c0f-8b79-360f8c3c0348 for instance with vm_state active and task_state None.#033[00m
Dec  6 03:17:21 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:17:21 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:17:21 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:17:21.236 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:17:22 np0005548731 nova_compute[232433]: 2025-12-06 08:17:22.257 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:17:23 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:17:23 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:17:23 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:17:23.046 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:17:23 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:17:23 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:17:23 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:17:23.238 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:17:24 np0005548731 NetworkManager[49182]: <info>  [1765009044.8803] manager: (patch-br-int-to-provnet-9e78c1a1-68f4-477a-abaa-13a98bde06e5): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/510)
Dec  6 03:17:24 np0005548731 NetworkManager[49182]: <info>  [1765009044.8814] manager: (patch-provnet-9e78c1a1-68f4-477a-abaa-13a98bde06e5-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/511)
Dec  6 03:17:24 np0005548731 nova_compute[232433]: 2025-12-06 08:17:24.880 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:17:24 np0005548731 nova_compute[232433]: 2025-12-06 08:17:24.997 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:17:25 np0005548731 ovn_controller[133927]: 2025-12-06T08:17:24Z|01076|binding|INFO|Releasing lport 32c82c25-6496-4edd-ba74-1791824b99ab from this chassis (sb_readonly=0)
Dec  6 03:17:25 np0005548731 nova_compute[232433]: 2025-12-06 08:17:25.006 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:17:25 np0005548731 nova_compute[232433]: 2025-12-06 08:17:25.008 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:17:25 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:17:25 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:17:25 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:17:25.049 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:17:25 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:17:25 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:17:25 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:17:25.243 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:17:25 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e419 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:17:25 np0005548731 nova_compute[232433]: 2025-12-06 08:17:25.592 232437 DEBUG nova.compute.manager [req-d0569292-6be2-420e-abf7-288c7f9ee683 req-14a2aa70-58d4-44f2-9751-2df5a6253de9 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 02dd8ac7-35d8-410a-9ec7-05f6acfed2a1] Received event network-changed-f368996f-9dbe-4c0f-8b79-360f8c3c0348 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 03:17:25 np0005548731 nova_compute[232433]: 2025-12-06 08:17:25.593 232437 DEBUG nova.compute.manager [req-d0569292-6be2-420e-abf7-288c7f9ee683 req-14a2aa70-58d4-44f2-9751-2df5a6253de9 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 02dd8ac7-35d8-410a-9ec7-05f6acfed2a1] Refreshing instance network info cache due to event network-changed-f368996f-9dbe-4c0f-8b79-360f8c3c0348. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  6 03:17:25 np0005548731 nova_compute[232433]: 2025-12-06 08:17:25.594 232437 DEBUG oslo_concurrency.lockutils [req-d0569292-6be2-420e-abf7-288c7f9ee683 req-14a2aa70-58d4-44f2-9751-2df5a6253de9 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-02dd8ac7-35d8-410a-9ec7-05f6acfed2a1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 03:17:25 np0005548731 nova_compute[232433]: 2025-12-06 08:17:25.594 232437 DEBUG oslo_concurrency.lockutils [req-d0569292-6be2-420e-abf7-288c7f9ee683 req-14a2aa70-58d4-44f2-9751-2df5a6253de9 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-02dd8ac7-35d8-410a-9ec7-05f6acfed2a1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 03:17:25 np0005548731 nova_compute[232433]: 2025-12-06 08:17:25.595 232437 DEBUG nova.network.neutron [req-d0569292-6be2-420e-abf7-288c7f9ee683 req-14a2aa70-58d4-44f2-9751-2df5a6253de9 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 02dd8ac7-35d8-410a-9ec7-05f6acfed2a1] Refreshing network info cache for port f368996f-9dbe-4c0f-8b79-360f8c3c0348 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  6 03:17:26 np0005548731 ovn_controller[133927]: 2025-12-06T08:17:26Z|01077|binding|INFO|Releasing lport 32c82c25-6496-4edd-ba74-1791824b99ab from this chassis (sb_readonly=0)
Dec  6 03:17:26 np0005548731 nova_compute[232433]: 2025-12-06 08:17:26.463 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:17:27 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:17:27 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:17:27 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:17:27.051 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:17:27 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:17:27 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:17:27 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:17:27.246 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:17:27 np0005548731 nova_compute[232433]: 2025-12-06 08:17:27.297 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:17:27 np0005548731 nova_compute[232433]: 2025-12-06 08:17:27.702 232437 DEBUG nova.network.neutron [req-d0569292-6be2-420e-abf7-288c7f9ee683 req-14a2aa70-58d4-44f2-9751-2df5a6253de9 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 02dd8ac7-35d8-410a-9ec7-05f6acfed2a1] Updated VIF entry in instance network info cache for port f368996f-9dbe-4c0f-8b79-360f8c3c0348. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  6 03:17:27 np0005548731 nova_compute[232433]: 2025-12-06 08:17:27.703 232437 DEBUG nova.network.neutron [req-d0569292-6be2-420e-abf7-288c7f9ee683 req-14a2aa70-58d4-44f2-9751-2df5a6253de9 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 02dd8ac7-35d8-410a-9ec7-05f6acfed2a1] Updating instance_info_cache with network_info: [{"id": "f368996f-9dbe-4c0f-8b79-360f8c3c0348", "address": "fa:16:3e:f3:0f:6a", "network": {"id": "b4ef1374-9c77-45a7-8776-50aa60c7d84a", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1664561964-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.244", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4b2dc4b8729f446a9c7ac69ca446f71d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf368996f-9d", "ovs_interfaceid": "f368996f-9dbe-4c0f-8b79-360f8c3c0348", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 03:17:27 np0005548731 nova_compute[232433]: 2025-12-06 08:17:27.723 232437 DEBUG oslo_concurrency.lockutils [req-d0569292-6be2-420e-abf7-288c7f9ee683 req-14a2aa70-58d4-44f2-9751-2df5a6253de9 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-02dd8ac7-35d8-410a-9ec7-05f6acfed2a1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 03:17:29 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:17:29 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 03:17:29 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:17:29.053 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 03:17:29 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:17:29 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 03:17:29 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:17:29.248 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 03:17:30 np0005548731 nova_compute[232433]: 2025-12-06 08:17:30.010 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:17:30 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e419 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:17:30 np0005548731 nova_compute[232433]: 2025-12-06 08:17:30.582 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:17:31 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:17:31 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:17:31 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:17:31.056 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:17:31 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:17:31 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:17:31 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:17:31.252 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:17:32 np0005548731 nova_compute[232433]: 2025-12-06 08:17:32.300 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:17:33 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:17:33 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 03:17:33 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:17:33.058 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 03:17:33 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:17:33 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:17:33 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:17:33.255 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:17:33 np0005548731 ovn_controller[133927]: 2025-12-06T08:17:33Z|00142|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:f3:0f:6a 10.100.0.10
Dec  6 03:17:33 np0005548731 ovn_controller[133927]: 2025-12-06T08:17:33Z|00143|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:f3:0f:6a 10.100.0.10
Dec  6 03:17:35 np0005548731 nova_compute[232433]: 2025-12-06 08:17:35.011 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:17:35 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:17:35 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:17:35 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:17:35.061 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:17:35 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:17:35 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:17:35 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:17:35.258 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:17:35 np0005548731 nova_compute[232433]: 2025-12-06 08:17:35.550 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:17:35 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e419 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:17:37 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:17:37 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:17:37 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:17:37.064 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:17:37 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:17:37 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:17:37 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:17:37.261 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:17:37 np0005548731 nova_compute[232433]: 2025-12-06 08:17:37.304 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:17:39 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:17:39 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:17:39 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:17:39.066 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:17:39 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:17:39 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:17:39 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:17:39.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:17:40 np0005548731 nova_compute[232433]: 2025-12-06 08:17:40.013 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:17:40 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e419 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:17:40 np0005548731 podman[339114]: 2025-12-06 08:17:40.904866288 +0000 UTC m=+0.056256064 container health_status 26c2ce9bdb83c6db5ccad7f5b2a7cb4e9354ab0235ad2f942ea42c8feda2142b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Dec  6 03:17:40 np0005548731 podman[339116]: 2025-12-06 08:17:40.917460626 +0000 UTC m=+0.068849782 container health_status ae4fd08cdec31d6ae2a6b91ffff7deb6ca5a8375cb52ee4b685cc6f25796824e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, container_name=multipathd)
Dec  6 03:17:40 np0005548731 podman[339115]: 2025-12-06 08:17:40.995519522 +0000 UTC m=+0.146773065 container health_status a118c3becf56cfbcae208065771ebf10a65162bb7b96389971293e534823fb78 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  6 03:17:41 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:17:41 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:17:41 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:17:41.068 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:17:41 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:17:41 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:17:41 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:17:41.267 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:17:42 np0005548731 nova_compute[232433]: 2025-12-06 08:17:42.306 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:17:43 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:17:43 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:17:43 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:17:43.071 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:17:43 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:17:43 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:17:43 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:17:43.270 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:17:45 np0005548731 nova_compute[232433]: 2025-12-06 08:17:45.015 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:17:45 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:17:45 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:17:45 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:17:45.073 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:17:45 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:17:45 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:17:45 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:17:45.272 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:17:45 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e419 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:17:46 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e420 e420: 3 total, 3 up, 3 in
Dec  6 03:17:47 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:17:47 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:17:47 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:17:47.074 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:17:47 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:17:47 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:17:47 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:17:47.276 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:17:47 np0005548731 nova_compute[232433]: 2025-12-06 08:17:47.344 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:17:47 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e421 e421: 3 total, 3 up, 3 in
Dec  6 03:17:49 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:17:49 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:17:49 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:17:49.076 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:17:49 np0005548731 nova_compute[232433]: 2025-12-06 08:17:49.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:17:49 np0005548731 nova_compute[232433]: 2025-12-06 08:17:49.105 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  6 03:17:49 np0005548731 nova_compute[232433]: 2025-12-06 08:17:49.105 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  6 03:17:49 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:17:49 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:17:49 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:17:49.278 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:17:49 np0005548731 nova_compute[232433]: 2025-12-06 08:17:49.335 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Acquiring lock "refresh_cache-02dd8ac7-35d8-410a-9ec7-05f6acfed2a1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 03:17:49 np0005548731 nova_compute[232433]: 2025-12-06 08:17:49.335 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Acquired lock "refresh_cache-02dd8ac7-35d8-410a-9ec7-05f6acfed2a1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 03:17:49 np0005548731 nova_compute[232433]: 2025-12-06 08:17:49.335 232437 DEBUG nova.network.neutron [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] [instance: 02dd8ac7-35d8-410a-9ec7-05f6acfed2a1] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Dec  6 03:17:49 np0005548731 nova_compute[232433]: 2025-12-06 08:17:49.335 232437 DEBUG nova.objects.instance [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lazy-loading 'info_cache' on Instance uuid 02dd8ac7-35d8-410a-9ec7-05f6acfed2a1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 03:17:50 np0005548731 nova_compute[232433]: 2025-12-06 08:17:50.017 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:17:50 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e421 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:17:51 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:17:51 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:17:51 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:17:51.078 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:17:51 np0005548731 nova_compute[232433]: 2025-12-06 08:17:51.096 232437 DEBUG nova.network.neutron [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] [instance: 02dd8ac7-35d8-410a-9ec7-05f6acfed2a1] Updating instance_info_cache with network_info: [{"id": "f368996f-9dbe-4c0f-8b79-360f8c3c0348", "address": "fa:16:3e:f3:0f:6a", "network": {"id": "b4ef1374-9c77-45a7-8776-50aa60c7d84a", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1664561964-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.244", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4b2dc4b8729f446a9c7ac69ca446f71d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf368996f-9d", "ovs_interfaceid": "f368996f-9dbe-4c0f-8b79-360f8c3c0348", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 03:17:51 np0005548731 nova_compute[232433]: 2025-12-06 08:17:51.110 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Releasing lock "refresh_cache-02dd8ac7-35d8-410a-9ec7-05f6acfed2a1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 03:17:51 np0005548731 nova_compute[232433]: 2025-12-06 08:17:51.111 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] [instance: 02dd8ac7-35d8-410a-9ec7-05f6acfed2a1] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Dec  6 03:17:51 np0005548731 nova_compute[232433]: 2025-12-06 08:17:51.111 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:17:51 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:17:51 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:17:51 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:17:51.281 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:17:52 np0005548731 nova_compute[232433]: 2025-12-06 08:17:52.348 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:17:53 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:17:53 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 03:17:53 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:17:53.080 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 03:17:53 np0005548731 nova_compute[232433]: 2025-12-06 08:17:53.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:17:53 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:17:53 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:17:53 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:17:53.284 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:17:54 np0005548731 nova_compute[232433]: 2025-12-06 08:17:54.099 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:17:55 np0005548731 nova_compute[232433]: 2025-12-06 08:17:55.018 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:17:55 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:17:55 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:17:55 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:17:55.083 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:17:55 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:17:55 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:17:55 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:17:55.286 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:17:55 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e421 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:17:57 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:17:57 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:17:57 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:17:57.085 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:17:57 np0005548731 nova_compute[232433]: 2025-12-06 08:17:57.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:17:57 np0005548731 nova_compute[232433]: 2025-12-06 08:17:57.104 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  6 03:17:57 np0005548731 nova_compute[232433]: 2025-12-06 08:17:57.105 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:17:57 np0005548731 nova_compute[232433]: 2025-12-06 08:17:57.127 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:17:57 np0005548731 nova_compute[232433]: 2025-12-06 08:17:57.128 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:17:57 np0005548731 nova_compute[232433]: 2025-12-06 08:17:57.128 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:17:57 np0005548731 nova_compute[232433]: 2025-12-06 08:17:57.128 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  6 03:17:57 np0005548731 nova_compute[232433]: 2025-12-06 08:17:57.129 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 03:17:57 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:17:57 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:17:57 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:17:57.289 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:17:57 np0005548731 nova_compute[232433]: 2025-12-06 08:17:57.349 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:17:57 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 03:17:57 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/895991980' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 03:17:57 np0005548731 nova_compute[232433]: 2025-12-06 08:17:57.563 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.434s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 03:17:57 np0005548731 nova_compute[232433]: 2025-12-06 08:17:57.652 232437 DEBUG nova.virt.libvirt.driver [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] skipping disk for instance-000000d0 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  6 03:17:57 np0005548731 nova_compute[232433]: 2025-12-06 08:17:57.653 232437 DEBUG nova.virt.libvirt.driver [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] skipping disk for instance-000000d0 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  6 03:17:57 np0005548731 nova_compute[232433]: 2025-12-06 08:17:57.822 232437 WARNING nova.virt.libvirt.driver [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  6 03:17:57 np0005548731 nova_compute[232433]: 2025-12-06 08:17:57.823 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=3961MB free_disk=20.967212677001953GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, 
"label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  6 03:17:57 np0005548731 nova_compute[232433]: 2025-12-06 08:17:57.824 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:17:57 np0005548731 nova_compute[232433]: 2025-12-06 08:17:57.824 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:17:57 np0005548731 nova_compute[232433]: 2025-12-06 08:17:57.901 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Instance 02dd8ac7-35d8-410a-9ec7-05f6acfed2a1 actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  6 03:17:57 np0005548731 nova_compute[232433]: 2025-12-06 08:17:57.902 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  6 03:17:57 np0005548731 nova_compute[232433]: 2025-12-06 08:17:57.902 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  6 03:17:57 np0005548731 nova_compute[232433]: 2025-12-06 08:17:57.949 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 03:17:58 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 03:17:58 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2688482981' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 03:17:58 np0005548731 nova_compute[232433]: 2025-12-06 08:17:58.371 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.421s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 03:17:58 np0005548731 nova_compute[232433]: 2025-12-06 08:17:58.376 232437 DEBUG nova.compute.provider_tree [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Inventory has not changed in ProviderTree for provider: 6d00757a-082f-486d-ae84-869a2ba2e6e7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  6 03:17:58 np0005548731 nova_compute[232433]: 2025-12-06 08:17:58.393 232437 DEBUG nova.scheduler.client.report [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Inventory has not changed for provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  6 03:17:58 np0005548731 nova_compute[232433]: 2025-12-06 08:17:58.416 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  6 03:17:58 np0005548731 nova_compute[232433]: 2025-12-06 08:17:58.417 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.593s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:17:59 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:17:59 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:17:59 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:17:59.087 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:17:59 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:17:59 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:17:59 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:17:59.292 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:17:59 np0005548731 nova_compute[232433]: 2025-12-06 08:17:59.417 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:18:00 np0005548731 nova_compute[232433]: 2025-12-06 08:18:00.020 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:18:00 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e421 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:18:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:18:00.921 143965 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:18:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:18:00.921 143965 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:18:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:18:00.922 143965 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:18:01 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:18:01 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 03:18:01 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:18:01.088 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 03:18:01 np0005548731 nova_compute[232433]: 2025-12-06 08:18:01.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:18:01 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:18:01 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:18:01 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:18:01.295 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:18:02 np0005548731 nova_compute[232433]: 2025-12-06 08:18:02.394 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:18:03 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:18:03 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:18:03 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:18:03.091 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:18:03 np0005548731 nova_compute[232433]: 2025-12-06 08:18:03.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:18:03 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:18:03 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 03:18:03 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:18:03.297 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 03:18:05 np0005548731 nova_compute[232433]: 2025-12-06 08:18:05.021 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:18:05 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:18:05 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:18:05 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:18:05.094 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:18:05 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:18:05 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:18:05 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:18:05.299 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:18:05 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e421 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:18:07 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:18:07 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:18:07 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:18:07.097 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:18:07 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Dec  6 03:18:07 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  6 03:18:07 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 03:18:07 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  6 03:18:07 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:18:07 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:18:07 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:18:07.304 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:18:07 np0005548731 nova_compute[232433]: 2025-12-06 08:18:07.397 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:18:09 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:18:09 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:18:09 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:18:09.099 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:18:09 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:18:09 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:18:09 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:18:09.306 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:18:10 np0005548731 nova_compute[232433]: 2025-12-06 08:18:10.023 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:18:10 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e421 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:18:11 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:18:11 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:18:11 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:18:11.102 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:18:11 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:18:11 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 03:18:11 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:18:11.309 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 03:18:11 np0005548731 podman[339476]: 2025-12-06 08:18:11.916674448 +0000 UTC m=+0.072985702 container health_status 26c2ce9bdb83c6db5ccad7f5b2a7cb4e9354ab0235ad2f942ea42c8feda2142b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, 
config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, managed_by=edpm_ansible)
Dec  6 03:18:11 np0005548731 podman[339478]: 2025-12-06 08:18:11.933438468 +0000 UTC m=+0.076957740 container health_status ae4fd08cdec31d6ae2a6b91ffff7deb6ca5a8375cb52ee4b685cc6f25796824e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, container_name=multipathd, io.buildah.version=1.41.3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Dec  6 03:18:11 np0005548731 podman[339477]: 2025-12-06 08:18:11.942374586 +0000 UTC m=+0.097722938 container health_status a118c3becf56cfbcae208065771ebf10a65162bb7b96389971293e534823fb78 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_id=ovn_controller, io.buildah.version=1.41.3)
Dec  6 03:18:12 np0005548731 nova_compute[232433]: 2025-12-06 08:18:12.398 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:18:13 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 03:18:13 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:18:13 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:18:13 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:18:13.104 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:18:13 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:18:13 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:18:13 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:18:13.313 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:18:14 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 03:18:14 np0005548731 ceph-mon[77458]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #166. Immutable memtables: 0.
Dec  6 03:18:14 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:18:14.679800) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec  6 03:18:14 np0005548731 ceph-mon[77458]: rocksdb: [db/flush_job.cc:856] [default] [JOB 105] Flushing memtable with next log file: 166
Dec  6 03:18:14 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765009094679858, "job": 105, "event": "flush_started", "num_memtables": 1, "num_entries": 1323, "num_deletes": 256, "total_data_size": 2746173, "memory_usage": 2787104, "flush_reason": "Manual Compaction"}
Dec  6 03:18:14 np0005548731 ceph-mon[77458]: rocksdb: [db/flush_job.cc:885] [default] [JOB 105] Level-0 flush table #167: started
Dec  6 03:18:14 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765009094697756, "cf_name": "default", "job": 105, "event": "table_file_creation", "file_number": 167, "file_size": 1799082, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 81829, "largest_seqno": 83147, "table_properties": {"data_size": 1793451, "index_size": 2961, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1605, "raw_key_size": 12596, "raw_average_key_size": 19, "raw_value_size": 1781830, "raw_average_value_size": 2810, "num_data_blocks": 131, "num_entries": 634, "num_filter_entries": 634, "num_deletions": 256, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765008995, "oldest_key_time": 1765008995, "file_creation_time": 1765009094, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bc01f9a1-fda8-4008-aee1-1fa5ff4371a1", "db_session_id": "CWBL8WMDM4D79BU86E9T", "orig_file_number": 167, "seqno_to_time_mapping": "N/A"}}
Dec  6 03:18:14 np0005548731 ceph-mon[77458]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 105] Flush lasted 18040 microseconds, and 5207 cpu microseconds.
Dec  6 03:18:14 np0005548731 ceph-mon[77458]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  6 03:18:14 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:18:14.697844) [db/flush_job.cc:967] [default] [JOB 105] Level-0 flush table #167: 1799082 bytes OK
Dec  6 03:18:14 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:18:14.697865) [db/memtable_list.cc:519] [default] Level-0 commit table #167 started
Dec  6 03:18:14 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:18:14.700857) [db/memtable_list.cc:722] [default] Level-0 commit table #167: memtable #1 done
Dec  6 03:18:14 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:18:14.700903) EVENT_LOG_v1 {"time_micros": 1765009094700892, "job": 105, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec  6 03:18:14 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:18:14.700927) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec  6 03:18:14 np0005548731 ceph-mon[77458]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 105] Try to delete WAL files size 2739896, prev total WAL file size 2739896, number of live WAL files 2.
Dec  6 03:18:14 np0005548731 ceph-mon[77458]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000163.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  6 03:18:14 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:18:14.701971) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0033303330' seq:72057594037927935, type:22 .. '6C6F676D0033323831' seq:0, type:0; will stop at (end)
Dec  6 03:18:14 np0005548731 ceph-mon[77458]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 106] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec  6 03:18:14 np0005548731 ceph-mon[77458]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 105 Base level 0, inputs: [167(1756KB)], [165(11MB)]
Dec  6 03:18:14 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765009094702035, "job": 106, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [167], "files_L6": [165], "score": -1, "input_data_size": 13652081, "oldest_snapshot_seqno": -1}
Dec  6 03:18:14 np0005548731 ceph-mon[77458]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 106] Generated table #168: 11053 keys, 13516607 bytes, temperature: kUnknown
Dec  6 03:18:14 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765009094799260, "cf_name": "default", "job": 106, "event": "table_file_creation", "file_number": 168, "file_size": 13516607, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13446916, "index_size": 40970, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 27653, "raw_key_size": 292428, "raw_average_key_size": 26, "raw_value_size": 13255175, "raw_average_value_size": 1199, "num_data_blocks": 1555, "num_entries": 11053, "num_filter_entries": 11053, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765002564, "oldest_key_time": 0, "file_creation_time": 1765009094, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bc01f9a1-fda8-4008-aee1-1fa5ff4371a1", "db_session_id": "CWBL8WMDM4D79BU86E9T", "orig_file_number": 168, "seqno_to_time_mapping": "N/A"}}
Dec  6 03:18:14 np0005548731 ceph-mon[77458]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  6 03:18:14 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:18:14.799531) [db/compaction/compaction_job.cc:1663] [default] [JOB 106] Compacted 1@0 + 1@6 files to L6 => 13516607 bytes
Dec  6 03:18:14 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:18:14.801474) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 140.3 rd, 138.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.7, 11.3 +0.0 blob) out(12.9 +0.0 blob), read-write-amplify(15.1) write-amplify(7.5) OK, records in: 11582, records dropped: 529 output_compression: NoCompression
Dec  6 03:18:14 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:18:14.801490) EVENT_LOG_v1 {"time_micros": 1765009094801482, "job": 106, "event": "compaction_finished", "compaction_time_micros": 97320, "compaction_time_cpu_micros": 32715, "output_level": 6, "num_output_files": 1, "total_output_size": 13516607, "num_input_records": 11582, "num_output_records": 11053, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec  6 03:18:14 np0005548731 ceph-mon[77458]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000167.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  6 03:18:14 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765009094801908, "job": 106, "event": "table_file_deletion", "file_number": 167}
Dec  6 03:18:14 np0005548731 ceph-mon[77458]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000165.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  6 03:18:14 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765009094803856, "job": 106, "event": "table_file_deletion", "file_number": 165}
Dec  6 03:18:14 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:18:14.701834) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 03:18:14 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:18:14.803937) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 03:18:14 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:18:14.803942) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 03:18:14 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:18:14.803944) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 03:18:14 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:18:14.803946) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 03:18:14 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:18:14.803947) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 03:18:15 np0005548731 nova_compute[232433]: 2025-12-06 08:18:15.024 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:18:15 np0005548731 nova_compute[232433]: 2025-12-06 08:18:15.099 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:18:15 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:18:15 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:18:15 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:18:15.106 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:18:15 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:18:15 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:18:15 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:18:15.315 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:18:15 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e421 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:18:15 np0005548731 nova_compute[232433]: 2025-12-06 08:18:15.791 232437 DEBUG oslo_concurrency.lockutils [None req-aa9dae40-02f0-4c3e-9f91-54281de994ca 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Acquiring lock "710b13bb-ef03-446e-8fa9-5bbc77d9232d" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:18:15 np0005548731 nova_compute[232433]: 2025-12-06 08:18:15.791 232437 DEBUG oslo_concurrency.lockutils [None req-aa9dae40-02f0-4c3e-9f91-54281de994ca 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Lock "710b13bb-ef03-446e-8fa9-5bbc77d9232d" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:18:15 np0005548731 nova_compute[232433]: 2025-12-06 08:18:15.828 232437 DEBUG nova.compute.manager [None req-aa9dae40-02f0-4c3e-9f91-54281de994ca 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] [instance: 710b13bb-ef03-446e-8fa9-5bbc77d9232d] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Dec  6 03:18:15 np0005548731 nova_compute[232433]: 2025-12-06 08:18:15.908 232437 DEBUG oslo_concurrency.lockutils [None req-aa9dae40-02f0-4c3e-9f91-54281de994ca 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:18:15 np0005548731 nova_compute[232433]: 2025-12-06 08:18:15.908 232437 DEBUG oslo_concurrency.lockutils [None req-aa9dae40-02f0-4c3e-9f91-54281de994ca 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:18:15 np0005548731 nova_compute[232433]: 2025-12-06 08:18:15.915 232437 DEBUG nova.virt.hardware [None req-aa9dae40-02f0-4c3e-9f91-54281de994ca 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Dec  6 03:18:15 np0005548731 nova_compute[232433]: 2025-12-06 08:18:15.915 232437 INFO nova.compute.claims [None req-aa9dae40-02f0-4c3e-9f91-54281de994ca 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] [instance: 710b13bb-ef03-446e-8fa9-5bbc77d9232d] Claim successful on node compute-2.ctlplane.example.com#033[00m
Dec  6 03:18:16 np0005548731 nova_compute[232433]: 2025-12-06 08:18:16.057 232437 DEBUG oslo_concurrency.processutils [None req-aa9dae40-02f0-4c3e-9f91-54281de994ca 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 03:18:16 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 03:18:16 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/4049142387' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 03:18:16 np0005548731 nova_compute[232433]: 2025-12-06 08:18:16.503 232437 DEBUG oslo_concurrency.processutils [None req-aa9dae40-02f0-4c3e-9f91-54281de994ca 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.446s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 03:18:16 np0005548731 nova_compute[232433]: 2025-12-06 08:18:16.509 232437 DEBUG nova.compute.provider_tree [None req-aa9dae40-02f0-4c3e-9f91-54281de994ca 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Inventory has not changed in ProviderTree for provider: 6d00757a-082f-486d-ae84-869a2ba2e6e7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  6 03:18:16 np0005548731 nova_compute[232433]: 2025-12-06 08:18:16.615 232437 DEBUG nova.scheduler.client.report [None req-aa9dae40-02f0-4c3e-9f91-54281de994ca 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Inventory has not changed for provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  6 03:18:16 np0005548731 nova_compute[232433]: 2025-12-06 08:18:16.745 232437 DEBUG oslo_concurrency.lockutils [None req-aa9dae40-02f0-4c3e-9f91-54281de994ca 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.836s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:18:16 np0005548731 nova_compute[232433]: 2025-12-06 08:18:16.745 232437 DEBUG nova.compute.manager [None req-aa9dae40-02f0-4c3e-9f91-54281de994ca 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] [instance: 710b13bb-ef03-446e-8fa9-5bbc77d9232d] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Dec  6 03:18:16 np0005548731 nova_compute[232433]: 2025-12-06 08:18:16.903 232437 DEBUG nova.compute.manager [None req-aa9dae40-02f0-4c3e-9f91-54281de994ca 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] [instance: 710b13bb-ef03-446e-8fa9-5bbc77d9232d] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Dec  6 03:18:16 np0005548731 nova_compute[232433]: 2025-12-06 08:18:16.904 232437 DEBUG nova.network.neutron [None req-aa9dae40-02f0-4c3e-9f91-54281de994ca 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] [instance: 710b13bb-ef03-446e-8fa9-5bbc77d9232d] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Dec  6 03:18:16 np0005548731 nova_compute[232433]: 2025-12-06 08:18:16.939 232437 INFO nova.virt.libvirt.driver [None req-aa9dae40-02f0-4c3e-9f91-54281de994ca 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] [instance: 710b13bb-ef03-446e-8fa9-5bbc77d9232d] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Dec  6 03:18:17 np0005548731 nova_compute[232433]: 2025-12-06 08:18:17.003 232437 DEBUG nova.compute.manager [None req-aa9dae40-02f0-4c3e-9f91-54281de994ca 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] [instance: 710b13bb-ef03-446e-8fa9-5bbc77d9232d] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Dec  6 03:18:17 np0005548731 nova_compute[232433]: 2025-12-06 08:18:17.097 232437 DEBUG nova.policy [None req-aa9dae40-02f0-4c3e-9f91-54281de994ca 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '0432cb6633e14c1b86fc320e7f3bb880', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '5d23d1d6ffc142eaa9bee0ef93fe60e4', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Dec  6 03:18:17 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:18:17 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:18:17 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:18:17.108 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:18:17 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:18:17 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:18:17 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:18:17.319 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:18:17 np0005548731 nova_compute[232433]: 2025-12-06 08:18:17.349 232437 DEBUG nova.compute.manager [None req-aa9dae40-02f0-4c3e-9f91-54281de994ca 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] [instance: 710b13bb-ef03-446e-8fa9-5bbc77d9232d] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Dec  6 03:18:17 np0005548731 nova_compute[232433]: 2025-12-06 08:18:17.350 232437 DEBUG nova.virt.libvirt.driver [None req-aa9dae40-02f0-4c3e-9f91-54281de994ca 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] [instance: 710b13bb-ef03-446e-8fa9-5bbc77d9232d] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Dec  6 03:18:17 np0005548731 nova_compute[232433]: 2025-12-06 08:18:17.351 232437 INFO nova.virt.libvirt.driver [None req-aa9dae40-02f0-4c3e-9f91-54281de994ca 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] [instance: 710b13bb-ef03-446e-8fa9-5bbc77d9232d] Creating image(s)#033[00m
Dec  6 03:18:17 np0005548731 nova_compute[232433]: 2025-12-06 08:18:17.377 232437 DEBUG nova.storage.rbd_utils [None req-aa9dae40-02f0-4c3e-9f91-54281de994ca 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] rbd image 710b13bb-ef03-446e-8fa9-5bbc77d9232d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 03:18:17 np0005548731 nova_compute[232433]: 2025-12-06 08:18:17.399 232437 DEBUG nova.storage.rbd_utils [None req-aa9dae40-02f0-4c3e-9f91-54281de994ca 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] rbd image 710b13bb-ef03-446e-8fa9-5bbc77d9232d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 03:18:17 np0005548731 nova_compute[232433]: 2025-12-06 08:18:17.424 232437 DEBUG nova.storage.rbd_utils [None req-aa9dae40-02f0-4c3e-9f91-54281de994ca 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] rbd image 710b13bb-ef03-446e-8fa9-5bbc77d9232d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 03:18:17 np0005548731 nova_compute[232433]: 2025-12-06 08:18:17.428 232437 DEBUG oslo_concurrency.processutils [None req-aa9dae40-02f0-4c3e-9f91-54281de994ca 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 03:18:17 np0005548731 nova_compute[232433]: 2025-12-06 08:18:17.453 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:18:17 np0005548731 nova_compute[232433]: 2025-12-06 08:18:17.493 232437 DEBUG oslo_concurrency.processutils [None req-aa9dae40-02f0-4c3e-9f91-54281de994ca 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 03:18:17 np0005548731 nova_compute[232433]: 2025-12-06 08:18:17.494 232437 DEBUG oslo_concurrency.lockutils [None req-aa9dae40-02f0-4c3e-9f91-54281de994ca 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Acquiring lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:18:17 np0005548731 nova_compute[232433]: 2025-12-06 08:18:17.495 232437 DEBUG oslo_concurrency.lockutils [None req-aa9dae40-02f0-4c3e-9f91-54281de994ca 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:18:17 np0005548731 nova_compute[232433]: 2025-12-06 08:18:17.495 232437 DEBUG oslo_concurrency.lockutils [None req-aa9dae40-02f0-4c3e-9f91-54281de994ca 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:18:17 np0005548731 nova_compute[232433]: 2025-12-06 08:18:17.523 232437 DEBUG nova.storage.rbd_utils [None req-aa9dae40-02f0-4c3e-9f91-54281de994ca 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] rbd image 710b13bb-ef03-446e-8fa9-5bbc77d9232d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 03:18:17 np0005548731 nova_compute[232433]: 2025-12-06 08:18:17.526 232437 DEBUG oslo_concurrency.processutils [None req-aa9dae40-02f0-4c3e-9f91-54281de994ca 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef 710b13bb-ef03-446e-8fa9-5bbc77d9232d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 03:18:19 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:18:19 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:18:19 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:18:19.110 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:18:19 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:18:19 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 03:18:19 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:18:19.321 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 03:18:19 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:18:19.439 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=98, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:ec:b3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '32:72:e7:89:e0:7d'}, ipsec=False) old=SB_Global(nb_cfg=97) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 03:18:19 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:18:19.440 143965 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec  6 03:18:19 np0005548731 nova_compute[232433]: 2025-12-06 08:18:19.441 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:18:20 np0005548731 nova_compute[232433]: 2025-12-06 08:18:20.026 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:18:20 np0005548731 nova_compute[232433]: 2025-12-06 08:18:20.336 232437 DEBUG nova.network.neutron [None req-aa9dae40-02f0-4c3e-9f91-54281de994ca 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] [instance: 710b13bb-ef03-446e-8fa9-5bbc77d9232d] Successfully created port: 4b2cd819-4f26-4469-9cbc-45dfed1645af _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Dec  6 03:18:20 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e421 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:18:20 np0005548731 nova_compute[232433]: 2025-12-06 08:18:20.782 232437 DEBUG oslo_concurrency.processutils [None req-aa9dae40-02f0-4c3e-9f91-54281de994ca 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef 710b13bb-ef03-446e-8fa9-5bbc77d9232d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 3.256s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 03:18:20 np0005548731 nova_compute[232433]: 2025-12-06 08:18:20.844 232437 DEBUG nova.storage.rbd_utils [None req-aa9dae40-02f0-4c3e-9f91-54281de994ca 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] resizing rbd image 710b13bb-ef03-446e-8fa9-5bbc77d9232d_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Dec  6 03:18:20 np0005548731 nova_compute[232433]: 2025-12-06 08:18:20.959 232437 DEBUG nova.objects.instance [None req-aa9dae40-02f0-4c3e-9f91-54281de994ca 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Lazy-loading 'migration_context' on Instance uuid 710b13bb-ef03-446e-8fa9-5bbc77d9232d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 03:18:20 np0005548731 nova_compute[232433]: 2025-12-06 08:18:20.996 232437 DEBUG nova.virt.libvirt.driver [None req-aa9dae40-02f0-4c3e-9f91-54281de994ca 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] [instance: 710b13bb-ef03-446e-8fa9-5bbc77d9232d] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Dec  6 03:18:20 np0005548731 nova_compute[232433]: 2025-12-06 08:18:20.996 232437 DEBUG nova.virt.libvirt.driver [None req-aa9dae40-02f0-4c3e-9f91-54281de994ca 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] [instance: 710b13bb-ef03-446e-8fa9-5bbc77d9232d] Ensure instance console log exists: /var/lib/nova/instances/710b13bb-ef03-446e-8fa9-5bbc77d9232d/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec  6 03:18:20 np0005548731 nova_compute[232433]: 2025-12-06 08:18:20.997 232437 DEBUG oslo_concurrency.lockutils [None req-aa9dae40-02f0-4c3e-9f91-54281de994ca 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:18:20 np0005548731 nova_compute[232433]: 2025-12-06 08:18:20.997 232437 DEBUG oslo_concurrency.lockutils [None req-aa9dae40-02f0-4c3e-9f91-54281de994ca 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:18:20 np0005548731 nova_compute[232433]: 2025-12-06 08:18:20.997 232437 DEBUG oslo_concurrency.lockutils [None req-aa9dae40-02f0-4c3e-9f91-54281de994ca 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:18:21 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:18:21 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:18:21 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:18:21.112 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:18:21 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:18:21 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:18:21 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:18:21.325 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:18:22 np0005548731 nova_compute[232433]: 2025-12-06 08:18:22.403 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:18:23 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:18:23 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:18:23 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:18:23.114 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:18:23 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:18:23 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:18:23 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:18:23.327 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:18:25 np0005548731 nova_compute[232433]: 2025-12-06 08:18:25.029 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:18:25 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:18:25 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:18:25 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:18:25.117 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:18:25 np0005548731 ceph-mon[77458]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #169. Immutable memtables: 0.
Dec  6 03:18:25 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:18:25.275248) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec  6 03:18:25 np0005548731 ceph-mon[77458]: rocksdb: [db/flush_job.cc:856] [default] [JOB 107] Flushing memtable with next log file: 169
Dec  6 03:18:25 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765009105275323, "job": 107, "event": "flush_started", "num_memtables": 1, "num_entries": 348, "num_deletes": 251, "total_data_size": 266556, "memory_usage": 273520, "flush_reason": "Manual Compaction"}
Dec  6 03:18:25 np0005548731 ceph-mon[77458]: rocksdb: [db/flush_job.cc:885] [default] [JOB 107] Level-0 flush table #170: started
Dec  6 03:18:25 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765009105278402, "cf_name": "default", "job": 107, "event": "table_file_creation", "file_number": 170, "file_size": 175405, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 83152, "largest_seqno": 83495, "table_properties": {"data_size": 173268, "index_size": 300, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 773, "raw_key_size": 5355, "raw_average_key_size": 18, "raw_value_size": 169094, "raw_average_value_size": 585, "num_data_blocks": 14, "num_entries": 289, "num_filter_entries": 289, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765009095, "oldest_key_time": 1765009095, "file_creation_time": 1765009105, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bc01f9a1-fda8-4008-aee1-1fa5ff4371a1", "db_session_id": "CWBL8WMDM4D79BU86E9T", "orig_file_number": 170, "seqno_to_time_mapping": "N/A"}}
Dec  6 03:18:25 np0005548731 ceph-mon[77458]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 107] Flush lasted 3171 microseconds, and 1139 cpu microseconds.
Dec  6 03:18:25 np0005548731 ceph-mon[77458]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  6 03:18:25 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:18:25.278431) [db/flush_job.cc:967] [default] [JOB 107] Level-0 flush table #170: 175405 bytes OK
Dec  6 03:18:25 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:18:25.278445) [db/memtable_list.cc:519] [default] Level-0 commit table #170 started
Dec  6 03:18:25 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:18:25.280453) [db/memtable_list.cc:722] [default] Level-0 commit table #170: memtable #1 done
Dec  6 03:18:25 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:18:25.280468) EVENT_LOG_v1 {"time_micros": 1765009105280463, "job": 107, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec  6 03:18:25 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:18:25.280484) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec  6 03:18:25 np0005548731 ceph-mon[77458]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 107] Try to delete WAL files size 264181, prev total WAL file size 264181, number of live WAL files 2.
Dec  6 03:18:25 np0005548731 ceph-mon[77458]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000166.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  6 03:18:25 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:18:25.280963) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730037303238' seq:72057594037927935, type:22 .. '7061786F730037323830' seq:0, type:0; will stop at (end)
Dec  6 03:18:25 np0005548731 ceph-mon[77458]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 108] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec  6 03:18:25 np0005548731 ceph-mon[77458]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 107 Base level 0, inputs: [170(171KB)], [168(12MB)]
Dec  6 03:18:25 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765009105281027, "job": 108, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [170], "files_L6": [168], "score": -1, "input_data_size": 13692012, "oldest_snapshot_seqno": -1}
Dec  6 03:18:25 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:18:25 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:18:25 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:18:25.331 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:18:25 np0005548731 ceph-mon[77458]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 108] Generated table #171: 10832 keys, 11755887 bytes, temperature: kUnknown
Dec  6 03:18:25 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765009105405905, "cf_name": "default", "job": 108, "event": "table_file_creation", "file_number": 171, "file_size": 11755887, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11689085, "index_size": 38593, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 27141, "raw_key_size": 288529, "raw_average_key_size": 26, "raw_value_size": 11502567, "raw_average_value_size": 1061, "num_data_blocks": 1447, "num_entries": 10832, "num_filter_entries": 10832, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765002564, "oldest_key_time": 0, "file_creation_time": 1765009105, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bc01f9a1-fda8-4008-aee1-1fa5ff4371a1", "db_session_id": "CWBL8WMDM4D79BU86E9T", "orig_file_number": 171, "seqno_to_time_mapping": "N/A"}}
Dec  6 03:18:25 np0005548731 ceph-mon[77458]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  6 03:18:25 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:18:25.406263) [db/compaction/compaction_job.cc:1663] [default] [JOB 108] Compacted 1@0 + 1@6 files to L6 => 11755887 bytes
Dec  6 03:18:25 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:18:25.408337) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 109.6 rd, 94.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.2, 12.9 +0.0 blob) out(11.2 +0.0 blob), read-write-amplify(145.1) write-amplify(67.0) OK, records in: 11342, records dropped: 510 output_compression: NoCompression
Dec  6 03:18:25 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:18:25.408370) EVENT_LOG_v1 {"time_micros": 1765009105408356, "job": 108, "event": "compaction_finished", "compaction_time_micros": 124966, "compaction_time_cpu_micros": 46537, "output_level": 6, "num_output_files": 1, "total_output_size": 11755887, "num_input_records": 11342, "num_output_records": 10832, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec  6 03:18:25 np0005548731 ceph-mon[77458]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000170.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  6 03:18:25 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765009105408638, "job": 108, "event": "table_file_deletion", "file_number": 170}
Dec  6 03:18:25 np0005548731 ceph-mon[77458]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000168.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  6 03:18:25 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765009105413768, "job": 108, "event": "table_file_deletion", "file_number": 168}
Dec  6 03:18:25 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:18:25.280774) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 03:18:25 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:18:25.413891) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 03:18:25 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:18:25.413900) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 03:18:25 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:18:25.413904) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 03:18:25 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:18:25.413908) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 03:18:25 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:18:25.413912) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 03:18:25 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:18:25.442 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=9f96b960-b4f2-40bd-ae99-08121f5e8b78, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '98'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 03:18:25 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e421 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:18:26 np0005548731 nova_compute[232433]: 2025-12-06 08:18:26.118 232437 DEBUG nova.network.neutron [None req-aa9dae40-02f0-4c3e-9f91-54281de994ca 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] [instance: 710b13bb-ef03-446e-8fa9-5bbc77d9232d] Successfully updated port: 4b2cd819-4f26-4469-9cbc-45dfed1645af _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Dec  6 03:18:26 np0005548731 nova_compute[232433]: 2025-12-06 08:18:26.592 232437 DEBUG oslo_concurrency.lockutils [None req-aa9dae40-02f0-4c3e-9f91-54281de994ca 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Acquiring lock "refresh_cache-710b13bb-ef03-446e-8fa9-5bbc77d9232d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 03:18:26 np0005548731 nova_compute[232433]: 2025-12-06 08:18:26.592 232437 DEBUG oslo_concurrency.lockutils [None req-aa9dae40-02f0-4c3e-9f91-54281de994ca 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Acquired lock "refresh_cache-710b13bb-ef03-446e-8fa9-5bbc77d9232d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 03:18:26 np0005548731 nova_compute[232433]: 2025-12-06 08:18:26.592 232437 DEBUG nova.network.neutron [None req-aa9dae40-02f0-4c3e-9f91-54281de994ca 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] [instance: 710b13bb-ef03-446e-8fa9-5bbc77d9232d] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec  6 03:18:26 np0005548731 nova_compute[232433]: 2025-12-06 08:18:26.635 232437 DEBUG nova.compute.manager [req-f0641931-db2a-406b-aa6e-bd640c399915 req-55336e56-31c0-47aa-b4f8-587722c1c9aa 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 710b13bb-ef03-446e-8fa9-5bbc77d9232d] Received event network-changed-4b2cd819-4f26-4469-9cbc-45dfed1645af external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 03:18:26 np0005548731 nova_compute[232433]: 2025-12-06 08:18:26.635 232437 DEBUG nova.compute.manager [req-f0641931-db2a-406b-aa6e-bd640c399915 req-55336e56-31c0-47aa-b4f8-587722c1c9aa 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 710b13bb-ef03-446e-8fa9-5bbc77d9232d] Refreshing instance network info cache due to event network-changed-4b2cd819-4f26-4469-9cbc-45dfed1645af. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  6 03:18:26 np0005548731 nova_compute[232433]: 2025-12-06 08:18:26.635 232437 DEBUG oslo_concurrency.lockutils [req-f0641931-db2a-406b-aa6e-bd640c399915 req-55336e56-31c0-47aa-b4f8-587722c1c9aa 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-710b13bb-ef03-446e-8fa9-5bbc77d9232d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 03:18:27 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:18:27 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:18:27 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:18:27.119 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:18:27 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:18:27 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:18:27 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:18:27.333 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:18:27 np0005548731 nova_compute[232433]: 2025-12-06 08:18:27.405 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:18:28 np0005548731 nova_compute[232433]: 2025-12-06 08:18:28.128 232437 DEBUG nova.network.neutron [None req-aa9dae40-02f0-4c3e-9f91-54281de994ca 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] [instance: 710b13bb-ef03-446e-8fa9-5bbc77d9232d] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec  6 03:18:29 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:18:29 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.002000047s ======
Dec  6 03:18:29 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:18:29.120 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000047s
Dec  6 03:18:29 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:18:29 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:18:29 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:18:29.335 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:18:30 np0005548731 nova_compute[232433]: 2025-12-06 08:18:30.093 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:18:30 np0005548731 nova_compute[232433]: 2025-12-06 08:18:30.237 232437 DEBUG nova.network.neutron [None req-aa9dae40-02f0-4c3e-9f91-54281de994ca 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] [instance: 710b13bb-ef03-446e-8fa9-5bbc77d9232d] Updating instance_info_cache with network_info: [{"id": "4b2cd819-4f26-4469-9cbc-45dfed1645af", "address": "fa:16:3e:f2:fd:af", "network": {"id": "14e18f04-0697-4301-a26d-02786b558075", "bridge": "br-int", "label": "tempest-network-smoke--446330177", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5d23d1d6ffc142eaa9bee0ef93fe60e4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4b2cd819-4f", "ovs_interfaceid": "4b2cd819-4f26-4469-9cbc-45dfed1645af", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 03:18:30 np0005548731 nova_compute[232433]: 2025-12-06 08:18:30.257 232437 DEBUG oslo_concurrency.lockutils [None req-aa9dae40-02f0-4c3e-9f91-54281de994ca 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Releasing lock "refresh_cache-710b13bb-ef03-446e-8fa9-5bbc77d9232d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 03:18:30 np0005548731 nova_compute[232433]: 2025-12-06 08:18:30.258 232437 DEBUG nova.compute.manager [None req-aa9dae40-02f0-4c3e-9f91-54281de994ca 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] [instance: 710b13bb-ef03-446e-8fa9-5bbc77d9232d] Instance network_info: |[{"id": "4b2cd819-4f26-4469-9cbc-45dfed1645af", "address": "fa:16:3e:f2:fd:af", "network": {"id": "14e18f04-0697-4301-a26d-02786b558075", "bridge": "br-int", "label": "tempest-network-smoke--446330177", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5d23d1d6ffc142eaa9bee0ef93fe60e4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4b2cd819-4f", "ovs_interfaceid": "4b2cd819-4f26-4469-9cbc-45dfed1645af", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Dec  6 03:18:30 np0005548731 nova_compute[232433]: 2025-12-06 08:18:30.258 232437 DEBUG oslo_concurrency.lockutils [req-f0641931-db2a-406b-aa6e-bd640c399915 req-55336e56-31c0-47aa-b4f8-587722c1c9aa 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-710b13bb-ef03-446e-8fa9-5bbc77d9232d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 03:18:30 np0005548731 nova_compute[232433]: 2025-12-06 08:18:30.258 232437 DEBUG nova.network.neutron [req-f0641931-db2a-406b-aa6e-bd640c399915 req-55336e56-31c0-47aa-b4f8-587722c1c9aa 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 710b13bb-ef03-446e-8fa9-5bbc77d9232d] Refreshing network info cache for port 4b2cd819-4f26-4469-9cbc-45dfed1645af _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  6 03:18:30 np0005548731 nova_compute[232433]: 2025-12-06 08:18:30.262 232437 DEBUG nova.virt.libvirt.driver [None req-aa9dae40-02f0-4c3e-9f91-54281de994ca 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] [instance: 710b13bb-ef03-446e-8fa9-5bbc77d9232d] Start _get_guest_xml network_info=[{"id": "4b2cd819-4f26-4469-9cbc-45dfed1645af", "address": "fa:16:3e:f2:fd:af", "network": {"id": "14e18f04-0697-4301-a26d-02786b558075", "bridge": "br-int", "label": "tempest-network-smoke--446330177", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5d23d1d6ffc142eaa9bee0ef93fe60e4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4b2cd819-4f", "ovs_interfaceid": "4b2cd819-4f26-4469-9cbc-45dfed1645af", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'device_type': 'disk', 'encryption_secret_uuid': None, 'disk_bus': 'virtio', 'boot_index': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'encryption_options': None, 'size': 0, 'encrypted': False, 'image_id': '6efab05d-c7cf-4770-a5c3-c806a2739063'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec  6 03:18:30 np0005548731 nova_compute[232433]: 2025-12-06 08:18:30.266 232437 WARNING nova.virt.libvirt.driver [None req-aa9dae40-02f0-4c3e-9f91-54281de994ca 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  6 03:18:30 np0005548731 nova_compute[232433]: 2025-12-06 08:18:30.278 232437 DEBUG nova.virt.libvirt.host [None req-aa9dae40-02f0-4c3e-9f91-54281de994ca 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec  6 03:18:30 np0005548731 nova_compute[232433]: 2025-12-06 08:18:30.279 232437 DEBUG nova.virt.libvirt.host [None req-aa9dae40-02f0-4c3e-9f91-54281de994ca 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec  6 03:18:30 np0005548731 nova_compute[232433]: 2025-12-06 08:18:30.282 232437 DEBUG nova.virt.libvirt.host [None req-aa9dae40-02f0-4c3e-9f91-54281de994ca 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec  6 03:18:30 np0005548731 nova_compute[232433]: 2025-12-06 08:18:30.283 232437 DEBUG nova.virt.libvirt.host [None req-aa9dae40-02f0-4c3e-9f91-54281de994ca 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec  6 03:18:30 np0005548731 nova_compute[232433]: 2025-12-06 08:18:30.284 232437 DEBUG nova.virt.libvirt.driver [None req-aa9dae40-02f0-4c3e-9f91-54281de994ca 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec  6 03:18:30 np0005548731 nova_compute[232433]: 2025-12-06 08:18:30.284 232437 DEBUG nova.virt.hardware [None req-aa9dae40-02f0-4c3e-9f91-54281de994ca 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-06T06:56:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='25848a18-11d9-4f11-80b5-5d005675c76d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec  6 03:18:30 np0005548731 nova_compute[232433]: 2025-12-06 08:18:30.285 232437 DEBUG nova.virt.hardware [None req-aa9dae40-02f0-4c3e-9f91-54281de994ca 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec  6 03:18:30 np0005548731 nova_compute[232433]: 2025-12-06 08:18:30.285 232437 DEBUG nova.virt.hardware [None req-aa9dae40-02f0-4c3e-9f91-54281de994ca 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec  6 03:18:30 np0005548731 nova_compute[232433]: 2025-12-06 08:18:30.285 232437 DEBUG nova.virt.hardware [None req-aa9dae40-02f0-4c3e-9f91-54281de994ca 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec  6 03:18:30 np0005548731 nova_compute[232433]: 2025-12-06 08:18:30.286 232437 DEBUG nova.virt.hardware [None req-aa9dae40-02f0-4c3e-9f91-54281de994ca 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec  6 03:18:30 np0005548731 nova_compute[232433]: 2025-12-06 08:18:30.286 232437 DEBUG nova.virt.hardware [None req-aa9dae40-02f0-4c3e-9f91-54281de994ca 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec  6 03:18:30 np0005548731 nova_compute[232433]: 2025-12-06 08:18:30.286 232437 DEBUG nova.virt.hardware [None req-aa9dae40-02f0-4c3e-9f91-54281de994ca 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec  6 03:18:30 np0005548731 nova_compute[232433]: 2025-12-06 08:18:30.286 232437 DEBUG nova.virt.hardware [None req-aa9dae40-02f0-4c3e-9f91-54281de994ca 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec  6 03:18:30 np0005548731 nova_compute[232433]: 2025-12-06 08:18:30.287 232437 DEBUG nova.virt.hardware [None req-aa9dae40-02f0-4c3e-9f91-54281de994ca 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec  6 03:18:30 np0005548731 nova_compute[232433]: 2025-12-06 08:18:30.287 232437 DEBUG nova.virt.hardware [None req-aa9dae40-02f0-4c3e-9f91-54281de994ca 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec  6 03:18:30 np0005548731 nova_compute[232433]: 2025-12-06 08:18:30.287 232437 DEBUG nova.virt.hardware [None req-aa9dae40-02f0-4c3e-9f91-54281de994ca 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec  6 03:18:30 np0005548731 nova_compute[232433]: 2025-12-06 08:18:30.290 232437 DEBUG oslo_concurrency.processutils [None req-aa9dae40-02f0-4c3e-9f91-54281de994ca 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 03:18:30 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e421 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:18:30 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  6 03:18:30 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2296411019' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  6 03:18:30 np0005548731 nova_compute[232433]: 2025-12-06 08:18:30.744 232437 DEBUG oslo_concurrency.processutils [None req-aa9dae40-02f0-4c3e-9f91-54281de994ca 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.454s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 03:18:30 np0005548731 nova_compute[232433]: 2025-12-06 08:18:30.780 232437 DEBUG nova.storage.rbd_utils [None req-aa9dae40-02f0-4c3e-9f91-54281de994ca 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] rbd image 710b13bb-ef03-446e-8fa9-5bbc77d9232d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 03:18:30 np0005548731 nova_compute[232433]: 2025-12-06 08:18:30.784 232437 DEBUG oslo_concurrency.processutils [None req-aa9dae40-02f0-4c3e-9f91-54281de994ca 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 03:18:31 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:18:31 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:18:31 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:18:31.123 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:18:31 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  6 03:18:31 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2222883412' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  6 03:18:31 np0005548731 nova_compute[232433]: 2025-12-06 08:18:31.228 232437 DEBUG oslo_concurrency.processutils [None req-aa9dae40-02f0-4c3e-9f91-54281de994ca 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.444s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 03:18:31 np0005548731 nova_compute[232433]: 2025-12-06 08:18:31.230 232437 DEBUG nova.virt.libvirt.vif [None req-aa9dae40-02f0-4c3e-9f91-54281de994ca 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T08:18:14Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-568463891-gen-0-1724040432',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-568463891-gen-0-1724040432',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-568463891-gen',id=211,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIzGPaWk14xbtW5sHv4GXDiMSG/C5MZExlVtwgnjxxcupj/ss21DUqD2dDVK61uPoLLLdziVAI6yXqU7ErexaGkX9er10rHIWMNub/drUqFoT5/Q97yMArULe40gKdjSAw==',key_name='tempest-TestSecurityGroupsBasicOps-853526666',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='5d23d1d6ffc142eaa9bee0ef93fe60e4',ramdisk_id='',reservation_id='r-j0oogsb8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-568463891',owner_user_name='tempest-TestSecurityGroupsBasicOps-568463891-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T08:18:17Z,user_data=None,user_id='0432cb6633e14c1b86fc320e7f3bb880',uuid=710b13bb-ef03-446e-8fa9-5bbc77d9232d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "4b2cd819-4f26-4469-9cbc-45dfed1645af", "address": "fa:16:3e:f2:fd:af", "network": {"id": "14e18f04-0697-4301-a26d-02786b558075", "bridge": "br-int", "label": "tempest-network-smoke--446330177", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], 
"meta": {"injected": false, "tenant_id": "5d23d1d6ffc142eaa9bee0ef93fe60e4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4b2cd819-4f", "ovs_interfaceid": "4b2cd819-4f26-4469-9cbc-45dfed1645af", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec  6 03:18:31 np0005548731 nova_compute[232433]: 2025-12-06 08:18:31.231 232437 DEBUG nova.network.os_vif_util [None req-aa9dae40-02f0-4c3e-9f91-54281de994ca 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Converting VIF {"id": "4b2cd819-4f26-4469-9cbc-45dfed1645af", "address": "fa:16:3e:f2:fd:af", "network": {"id": "14e18f04-0697-4301-a26d-02786b558075", "bridge": "br-int", "label": "tempest-network-smoke--446330177", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5d23d1d6ffc142eaa9bee0ef93fe60e4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4b2cd819-4f", "ovs_interfaceid": "4b2cd819-4f26-4469-9cbc-45dfed1645af", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 03:18:31 np0005548731 nova_compute[232433]: 2025-12-06 08:18:31.232 232437 DEBUG nova.network.os_vif_util [None req-aa9dae40-02f0-4c3e-9f91-54281de994ca 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f2:fd:af,bridge_name='br-int',has_traffic_filtering=True,id=4b2cd819-4f26-4469-9cbc-45dfed1645af,network=Network(14e18f04-0697-4301-a26d-02786b558075),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4b2cd819-4f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 03:18:31 np0005548731 nova_compute[232433]: 2025-12-06 08:18:31.233 232437 DEBUG nova.objects.instance [None req-aa9dae40-02f0-4c3e-9f91-54281de994ca 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Lazy-loading 'pci_devices' on Instance uuid 710b13bb-ef03-446e-8fa9-5bbc77d9232d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 03:18:31 np0005548731 nova_compute[232433]: 2025-12-06 08:18:31.254 232437 DEBUG nova.virt.libvirt.driver [None req-aa9dae40-02f0-4c3e-9f91-54281de994ca 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] [instance: 710b13bb-ef03-446e-8fa9-5bbc77d9232d] End _get_guest_xml xml=<domain type="kvm">
Dec  6 03:18:31 np0005548731 nova_compute[232433]:  <uuid>710b13bb-ef03-446e-8fa9-5bbc77d9232d</uuid>
Dec  6 03:18:31 np0005548731 nova_compute[232433]:  <name>instance-000000d3</name>
Dec  6 03:18:31 np0005548731 nova_compute[232433]:  <memory>131072</memory>
Dec  6 03:18:31 np0005548731 nova_compute[232433]:  <vcpu>1</vcpu>
Dec  6 03:18:31 np0005548731 nova_compute[232433]:  <metadata>
Dec  6 03:18:31 np0005548731 nova_compute[232433]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec  6 03:18:31 np0005548731 nova_compute[232433]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec  6 03:18:31 np0005548731 nova_compute[232433]:      <nova:name>tempest-server-tempest-TestSecurityGroupsBasicOps-568463891-gen-0-1724040432</nova:name>
Dec  6 03:18:31 np0005548731 nova_compute[232433]:      <nova:creationTime>2025-12-06 08:18:30</nova:creationTime>
Dec  6 03:18:31 np0005548731 nova_compute[232433]:      <nova:flavor name="m1.nano">
Dec  6 03:18:31 np0005548731 nova_compute[232433]:        <nova:memory>128</nova:memory>
Dec  6 03:18:31 np0005548731 nova_compute[232433]:        <nova:disk>1</nova:disk>
Dec  6 03:18:31 np0005548731 nova_compute[232433]:        <nova:swap>0</nova:swap>
Dec  6 03:18:31 np0005548731 nova_compute[232433]:        <nova:ephemeral>0</nova:ephemeral>
Dec  6 03:18:31 np0005548731 nova_compute[232433]:        <nova:vcpus>1</nova:vcpus>
Dec  6 03:18:31 np0005548731 nova_compute[232433]:      </nova:flavor>
Dec  6 03:18:31 np0005548731 nova_compute[232433]:      <nova:owner>
Dec  6 03:18:31 np0005548731 nova_compute[232433]:        <nova:user uuid="0432cb6633e14c1b86fc320e7f3bb880">tempest-TestSecurityGroupsBasicOps-568463891-project-member</nova:user>
Dec  6 03:18:31 np0005548731 nova_compute[232433]:        <nova:project uuid="5d23d1d6ffc142eaa9bee0ef93fe60e4">tempest-TestSecurityGroupsBasicOps-568463891</nova:project>
Dec  6 03:18:31 np0005548731 nova_compute[232433]:      </nova:owner>
Dec  6 03:18:31 np0005548731 nova_compute[232433]:      <nova:root type="image" uuid="6efab05d-c7cf-4770-a5c3-c806a2739063"/>
Dec  6 03:18:31 np0005548731 nova_compute[232433]:      <nova:ports>
Dec  6 03:18:31 np0005548731 nova_compute[232433]:        <nova:port uuid="4b2cd819-4f26-4469-9cbc-45dfed1645af">
Dec  6 03:18:31 np0005548731 nova_compute[232433]:          <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Dec  6 03:18:31 np0005548731 nova_compute[232433]:        </nova:port>
Dec  6 03:18:31 np0005548731 nova_compute[232433]:      </nova:ports>
Dec  6 03:18:31 np0005548731 nova_compute[232433]:    </nova:instance>
Dec  6 03:18:31 np0005548731 nova_compute[232433]:  </metadata>
Dec  6 03:18:31 np0005548731 nova_compute[232433]:  <sysinfo type="smbios">
Dec  6 03:18:31 np0005548731 nova_compute[232433]:    <system>
Dec  6 03:18:31 np0005548731 nova_compute[232433]:      <entry name="manufacturer">RDO</entry>
Dec  6 03:18:31 np0005548731 nova_compute[232433]:      <entry name="product">OpenStack Compute</entry>
Dec  6 03:18:31 np0005548731 nova_compute[232433]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec  6 03:18:31 np0005548731 nova_compute[232433]:      <entry name="serial">710b13bb-ef03-446e-8fa9-5bbc77d9232d</entry>
Dec  6 03:18:31 np0005548731 nova_compute[232433]:      <entry name="uuid">710b13bb-ef03-446e-8fa9-5bbc77d9232d</entry>
Dec  6 03:18:31 np0005548731 nova_compute[232433]:      <entry name="family">Virtual Machine</entry>
Dec  6 03:18:31 np0005548731 nova_compute[232433]:    </system>
Dec  6 03:18:31 np0005548731 nova_compute[232433]:  </sysinfo>
Dec  6 03:18:31 np0005548731 nova_compute[232433]:  <os>
Dec  6 03:18:31 np0005548731 nova_compute[232433]:    <type arch="x86_64" machine="q35">hvm</type>
Dec  6 03:18:31 np0005548731 nova_compute[232433]:    <boot dev="hd"/>
Dec  6 03:18:31 np0005548731 nova_compute[232433]:    <smbios mode="sysinfo"/>
Dec  6 03:18:31 np0005548731 nova_compute[232433]:  </os>
Dec  6 03:18:31 np0005548731 nova_compute[232433]:  <features>
Dec  6 03:18:31 np0005548731 nova_compute[232433]:    <acpi/>
Dec  6 03:18:31 np0005548731 nova_compute[232433]:    <apic/>
Dec  6 03:18:31 np0005548731 nova_compute[232433]:    <vmcoreinfo/>
Dec  6 03:18:31 np0005548731 nova_compute[232433]:  </features>
Dec  6 03:18:31 np0005548731 nova_compute[232433]:  <clock offset="utc">
Dec  6 03:18:31 np0005548731 nova_compute[232433]:    <timer name="pit" tickpolicy="delay"/>
Dec  6 03:18:31 np0005548731 nova_compute[232433]:    <timer name="rtc" tickpolicy="catchup"/>
Dec  6 03:18:31 np0005548731 nova_compute[232433]:    <timer name="hpet" present="no"/>
Dec  6 03:18:31 np0005548731 nova_compute[232433]:  </clock>
Dec  6 03:18:31 np0005548731 nova_compute[232433]:  <cpu mode="custom" match="exact">
Dec  6 03:18:31 np0005548731 nova_compute[232433]:    <model>Nehalem</model>
Dec  6 03:18:31 np0005548731 nova_compute[232433]:    <topology sockets="1" cores="1" threads="1"/>
Dec  6 03:18:31 np0005548731 nova_compute[232433]:  </cpu>
Dec  6 03:18:31 np0005548731 nova_compute[232433]:  <devices>
Dec  6 03:18:31 np0005548731 nova_compute[232433]:    <disk type="network" device="disk">
Dec  6 03:18:31 np0005548731 nova_compute[232433]:      <driver type="raw" cache="none"/>
Dec  6 03:18:31 np0005548731 nova_compute[232433]:      <source protocol="rbd" name="vms/710b13bb-ef03-446e-8fa9-5bbc77d9232d_disk">
Dec  6 03:18:31 np0005548731 nova_compute[232433]:        <host name="192.168.122.100" port="6789"/>
Dec  6 03:18:31 np0005548731 nova_compute[232433]:        <host name="192.168.122.102" port="6789"/>
Dec  6 03:18:31 np0005548731 nova_compute[232433]:        <host name="192.168.122.101" port="6789"/>
Dec  6 03:18:31 np0005548731 nova_compute[232433]:      </source>
Dec  6 03:18:31 np0005548731 nova_compute[232433]:      <auth username="openstack">
Dec  6 03:18:31 np0005548731 nova_compute[232433]:        <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec  6 03:18:31 np0005548731 nova_compute[232433]:      </auth>
Dec  6 03:18:31 np0005548731 nova_compute[232433]:      <target dev="vda" bus="virtio"/>
Dec  6 03:18:31 np0005548731 nova_compute[232433]:    </disk>
Dec  6 03:18:31 np0005548731 nova_compute[232433]:    <disk type="network" device="cdrom">
Dec  6 03:18:31 np0005548731 nova_compute[232433]:      <driver type="raw" cache="none"/>
Dec  6 03:18:31 np0005548731 nova_compute[232433]:      <source protocol="rbd" name="vms/710b13bb-ef03-446e-8fa9-5bbc77d9232d_disk.config">
Dec  6 03:18:31 np0005548731 nova_compute[232433]:        <host name="192.168.122.100" port="6789"/>
Dec  6 03:18:31 np0005548731 nova_compute[232433]:        <host name="192.168.122.102" port="6789"/>
Dec  6 03:18:31 np0005548731 nova_compute[232433]:        <host name="192.168.122.101" port="6789"/>
Dec  6 03:18:31 np0005548731 nova_compute[232433]:      </source>
Dec  6 03:18:31 np0005548731 nova_compute[232433]:      <auth username="openstack">
Dec  6 03:18:31 np0005548731 nova_compute[232433]:        <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec  6 03:18:31 np0005548731 nova_compute[232433]:      </auth>
Dec  6 03:18:31 np0005548731 nova_compute[232433]:      <target dev="sda" bus="sata"/>
Dec  6 03:18:31 np0005548731 nova_compute[232433]:    </disk>
Dec  6 03:18:31 np0005548731 nova_compute[232433]:    <interface type="ethernet">
Dec  6 03:18:31 np0005548731 nova_compute[232433]:      <mac address="fa:16:3e:f2:fd:af"/>
Dec  6 03:18:31 np0005548731 nova_compute[232433]:      <model type="virtio"/>
Dec  6 03:18:31 np0005548731 nova_compute[232433]:      <driver name="vhost" rx_queue_size="512"/>
Dec  6 03:18:31 np0005548731 nova_compute[232433]:      <mtu size="1442"/>
Dec  6 03:18:31 np0005548731 nova_compute[232433]:      <target dev="tap4b2cd819-4f"/>
Dec  6 03:18:31 np0005548731 nova_compute[232433]:    </interface>
Dec  6 03:18:31 np0005548731 nova_compute[232433]:    <serial type="pty">
Dec  6 03:18:31 np0005548731 nova_compute[232433]:      <log file="/var/lib/nova/instances/710b13bb-ef03-446e-8fa9-5bbc77d9232d/console.log" append="off"/>
Dec  6 03:18:31 np0005548731 nova_compute[232433]:    </serial>
Dec  6 03:18:31 np0005548731 nova_compute[232433]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec  6 03:18:31 np0005548731 nova_compute[232433]:    <video>
Dec  6 03:18:31 np0005548731 nova_compute[232433]:      <model type="virtio"/>
Dec  6 03:18:31 np0005548731 nova_compute[232433]:    </video>
Dec  6 03:18:31 np0005548731 nova_compute[232433]:    <input type="tablet" bus="usb"/>
Dec  6 03:18:31 np0005548731 nova_compute[232433]:    <rng model="virtio">
Dec  6 03:18:31 np0005548731 nova_compute[232433]:      <backend model="random">/dev/urandom</backend>
Dec  6 03:18:31 np0005548731 nova_compute[232433]:    </rng>
Dec  6 03:18:31 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root"/>
Dec  6 03:18:31 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:18:31 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:18:31 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:18:31 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:18:31 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:18:31 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:18:31 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:18:31 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:18:31 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:18:31 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:18:31 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:18:31 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:18:31 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:18:31 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:18:31 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:18:31 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:18:31 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:18:31 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:18:31 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:18:31 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:18:31 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:18:31 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:18:31 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:18:31 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:18:31 np0005548731 nova_compute[232433]:    <controller type="usb" index="0"/>
Dec  6 03:18:31 np0005548731 nova_compute[232433]:    <memballoon model="virtio">
Dec  6 03:18:31 np0005548731 nova_compute[232433]:      <stats period="10"/>
Dec  6 03:18:31 np0005548731 nova_compute[232433]:    </memballoon>
Dec  6 03:18:31 np0005548731 nova_compute[232433]:  </devices>
Dec  6 03:18:31 np0005548731 nova_compute[232433]: </domain>
Dec  6 03:18:31 np0005548731 nova_compute[232433]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec  6 03:18:31 np0005548731 nova_compute[232433]: 2025-12-06 08:18:31.256 232437 DEBUG nova.compute.manager [None req-aa9dae40-02f0-4c3e-9f91-54281de994ca 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] [instance: 710b13bb-ef03-446e-8fa9-5bbc77d9232d] Preparing to wait for external event network-vif-plugged-4b2cd819-4f26-4469-9cbc-45dfed1645af prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec  6 03:18:31 np0005548731 nova_compute[232433]: 2025-12-06 08:18:31.256 232437 DEBUG oslo_concurrency.lockutils [None req-aa9dae40-02f0-4c3e-9f91-54281de994ca 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Acquiring lock "710b13bb-ef03-446e-8fa9-5bbc77d9232d-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:18:31 np0005548731 nova_compute[232433]: 2025-12-06 08:18:31.256 232437 DEBUG oslo_concurrency.lockutils [None req-aa9dae40-02f0-4c3e-9f91-54281de994ca 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Lock "710b13bb-ef03-446e-8fa9-5bbc77d9232d-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:18:31 np0005548731 nova_compute[232433]: 2025-12-06 08:18:31.257 232437 DEBUG oslo_concurrency.lockutils [None req-aa9dae40-02f0-4c3e-9f91-54281de994ca 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Lock "710b13bb-ef03-446e-8fa9-5bbc77d9232d-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:18:31 np0005548731 nova_compute[232433]: 2025-12-06 08:18:31.257 232437 DEBUG nova.virt.libvirt.vif [None req-aa9dae40-02f0-4c3e-9f91-54281de994ca 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T08:18:14Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-568463891-gen-0-1724040432',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-568463891-gen-0-1724040432',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-568463891-gen',id=211,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIzGPaWk14xbtW5sHv4GXDiMSG/C5MZExlVtwgnjxxcupj/ss21DUqD2dDVK61uPoLLLdziVAI6yXqU7ErexaGkX9er10rHIWMNub/drUqFoT5/Q97yMArULe40gKdjSAw==',key_name='tempest-TestSecurityGroupsBasicOps-853526666',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='5d23d1d6ffc142eaa9bee0ef93fe60e4',ramdisk_id='',reservation_id='r-j0oogsb8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-568463891',owner_user_name='tempest-TestSecurityGroupsBasicOps-568463891-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T08:18:17Z,user_data=None,user_id='0432cb6633e14c1b86fc320e7f3bb880',uuid=710b13bb-ef03-446e-8fa9-5bbc77d9232d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "4b2cd819-4f26-4469-9cbc-45dfed1645af", "address": "fa:16:3e:f2:fd:af", "network": {"id": "14e18f04-0697-4301-a26d-02786b558075", "bridge": "br-int", "label": "tempest-network-smoke--446330177", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": 
true}}], "meta": {"injected": false, "tenant_id": "5d23d1d6ffc142eaa9bee0ef93fe60e4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4b2cd819-4f", "ovs_interfaceid": "4b2cd819-4f26-4469-9cbc-45dfed1645af", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec  6 03:18:31 np0005548731 nova_compute[232433]: 2025-12-06 08:18:31.258 232437 DEBUG nova.network.os_vif_util [None req-aa9dae40-02f0-4c3e-9f91-54281de994ca 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Converting VIF {"id": "4b2cd819-4f26-4469-9cbc-45dfed1645af", "address": "fa:16:3e:f2:fd:af", "network": {"id": "14e18f04-0697-4301-a26d-02786b558075", "bridge": "br-int", "label": "tempest-network-smoke--446330177", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5d23d1d6ffc142eaa9bee0ef93fe60e4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4b2cd819-4f", "ovs_interfaceid": "4b2cd819-4f26-4469-9cbc-45dfed1645af", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 03:18:31 np0005548731 nova_compute[232433]: 2025-12-06 08:18:31.258 232437 DEBUG nova.network.os_vif_util [None req-aa9dae40-02f0-4c3e-9f91-54281de994ca 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f2:fd:af,bridge_name='br-int',has_traffic_filtering=True,id=4b2cd819-4f26-4469-9cbc-45dfed1645af,network=Network(14e18f04-0697-4301-a26d-02786b558075),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4b2cd819-4f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 03:18:31 np0005548731 nova_compute[232433]: 2025-12-06 08:18:31.258 232437 DEBUG os_vif [None req-aa9dae40-02f0-4c3e-9f91-54281de994ca 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:f2:fd:af,bridge_name='br-int',has_traffic_filtering=True,id=4b2cd819-4f26-4469-9cbc-45dfed1645af,network=Network(14e18f04-0697-4301-a26d-02786b558075),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4b2cd819-4f') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec  6 03:18:31 np0005548731 nova_compute[232433]: 2025-12-06 08:18:31.259 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:18:31 np0005548731 nova_compute[232433]: 2025-12-06 08:18:31.260 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 03:18:31 np0005548731 nova_compute[232433]: 2025-12-06 08:18:31.260 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  6 03:18:31 np0005548731 nova_compute[232433]: 2025-12-06 08:18:31.264 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:18:31 np0005548731 nova_compute[232433]: 2025-12-06 08:18:31.265 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4b2cd819-4f, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 03:18:31 np0005548731 nova_compute[232433]: 2025-12-06 08:18:31.265 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap4b2cd819-4f, col_values=(('external_ids', {'iface-id': '4b2cd819-4f26-4469-9cbc-45dfed1645af', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:f2:fd:af', 'vm-uuid': '710b13bb-ef03-446e-8fa9-5bbc77d9232d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 03:18:31 np0005548731 nova_compute[232433]: 2025-12-06 08:18:31.312 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:18:31 np0005548731 NetworkManager[49182]: <info>  [1765009111.3131] manager: (tap4b2cd819-4f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/512)
Dec  6 03:18:31 np0005548731 nova_compute[232433]: 2025-12-06 08:18:31.315 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  6 03:18:31 np0005548731 nova_compute[232433]: 2025-12-06 08:18:31.319 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:18:31 np0005548731 nova_compute[232433]: 2025-12-06 08:18:31.319 232437 INFO os_vif [None req-aa9dae40-02f0-4c3e-9f91-54281de994ca 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:f2:fd:af,bridge_name='br-int',has_traffic_filtering=True,id=4b2cd819-4f26-4469-9cbc-45dfed1645af,network=Network(14e18f04-0697-4301-a26d-02786b558075),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4b2cd819-4f')#033[00m
Dec  6 03:18:31 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:18:31 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:18:31 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:18:31.338 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:18:31 np0005548731 nova_compute[232433]: 2025-12-06 08:18:31.365 232437 DEBUG nova.virt.libvirt.driver [None req-aa9dae40-02f0-4c3e-9f91-54281de994ca 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  6 03:18:31 np0005548731 nova_compute[232433]: 2025-12-06 08:18:31.365 232437 DEBUG nova.virt.libvirt.driver [None req-aa9dae40-02f0-4c3e-9f91-54281de994ca 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  6 03:18:31 np0005548731 nova_compute[232433]: 2025-12-06 08:18:31.366 232437 DEBUG nova.virt.libvirt.driver [None req-aa9dae40-02f0-4c3e-9f91-54281de994ca 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] No VIF found with MAC fa:16:3e:f2:fd:af, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec  6 03:18:31 np0005548731 nova_compute[232433]: 2025-12-06 08:18:31.366 232437 INFO nova.virt.libvirt.driver [None req-aa9dae40-02f0-4c3e-9f91-54281de994ca 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] [instance: 710b13bb-ef03-446e-8fa9-5bbc77d9232d] Using config drive#033[00m
Dec  6 03:18:31 np0005548731 nova_compute[232433]: 2025-12-06 08:18:31.389 232437 DEBUG nova.storage.rbd_utils [None req-aa9dae40-02f0-4c3e-9f91-54281de994ca 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] rbd image 710b13bb-ef03-446e-8fa9-5bbc77d9232d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 03:18:32 np0005548731 nova_compute[232433]: 2025-12-06 08:18:32.214 232437 INFO nova.virt.libvirt.driver [None req-aa9dae40-02f0-4c3e-9f91-54281de994ca 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] [instance: 710b13bb-ef03-446e-8fa9-5bbc77d9232d] Creating config drive at /var/lib/nova/instances/710b13bb-ef03-446e-8fa9-5bbc77d9232d/disk.config#033[00m
Dec  6 03:18:32 np0005548731 nova_compute[232433]: 2025-12-06 08:18:32.220 232437 DEBUG oslo_concurrency.processutils [None req-aa9dae40-02f0-4c3e-9f91-54281de994ca 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/710b13bb-ef03-446e-8fa9-5bbc77d9232d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp8c4h60tr execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 03:18:32 np0005548731 nova_compute[232433]: 2025-12-06 08:18:32.354 232437 DEBUG oslo_concurrency.processutils [None req-aa9dae40-02f0-4c3e-9f91-54281de994ca 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/710b13bb-ef03-446e-8fa9-5bbc77d9232d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp8c4h60tr" returned: 0 in 0.134s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 03:18:32 np0005548731 nova_compute[232433]: 2025-12-06 08:18:32.381 232437 DEBUG nova.storage.rbd_utils [None req-aa9dae40-02f0-4c3e-9f91-54281de994ca 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] rbd image 710b13bb-ef03-446e-8fa9-5bbc77d9232d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 03:18:32 np0005548731 nova_compute[232433]: 2025-12-06 08:18:32.384 232437 DEBUG oslo_concurrency.processutils [None req-aa9dae40-02f0-4c3e-9f91-54281de994ca 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/710b13bb-ef03-446e-8fa9-5bbc77d9232d/disk.config 710b13bb-ef03-446e-8fa9-5bbc77d9232d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 03:18:32 np0005548731 nova_compute[232433]: 2025-12-06 08:18:32.441 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:18:32 np0005548731 nova_compute[232433]: 2025-12-06 08:18:32.547 232437 DEBUG oslo_concurrency.processutils [None req-aa9dae40-02f0-4c3e-9f91-54281de994ca 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/710b13bb-ef03-446e-8fa9-5bbc77d9232d/disk.config 710b13bb-ef03-446e-8fa9-5bbc77d9232d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.162s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 03:18:32 np0005548731 nova_compute[232433]: 2025-12-06 08:18:32.547 232437 INFO nova.virt.libvirt.driver [None req-aa9dae40-02f0-4c3e-9f91-54281de994ca 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] [instance: 710b13bb-ef03-446e-8fa9-5bbc77d9232d] Deleting local config drive /var/lib/nova/instances/710b13bb-ef03-446e-8fa9-5bbc77d9232d/disk.config because it was imported into RBD.#033[00m
Dec  6 03:18:32 np0005548731 kernel: tap4b2cd819-4f: entered promiscuous mode
Dec  6 03:18:32 np0005548731 NetworkManager[49182]: <info>  [1765009112.5949] manager: (tap4b2cd819-4f): new Tun device (/org/freedesktop/NetworkManager/Devices/513)
Dec  6 03:18:32 np0005548731 ovn_controller[133927]: 2025-12-06T08:18:32Z|01078|binding|INFO|Claiming lport 4b2cd819-4f26-4469-9cbc-45dfed1645af for this chassis.
Dec  6 03:18:32 np0005548731 ovn_controller[133927]: 2025-12-06T08:18:32Z|01079|binding|INFO|4b2cd819-4f26-4469-9cbc-45dfed1645af: Claiming fa:16:3e:f2:fd:af 10.100.0.14
Dec  6 03:18:32 np0005548731 nova_compute[232433]: 2025-12-06 08:18:32.596 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:18:32 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:18:32.604 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f2:fd:af 10.100.0.14'], port_security=['fa:16:3e:f2:fd:af 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '710b13bb-ef03-446e-8fa9-5bbc77d9232d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-14e18f04-0697-4301-a26d-02786b558075', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5d23d1d6ffc142eaa9bee0ef93fe60e4', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'c4946238-f586-46b2-b353-baad07fea15f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=626f221f-4e25-4acd-9bf5-4d267283bf54, chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], logical_port=4b2cd819-4f26-4469-9cbc-45dfed1645af) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 03:18:32 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:18:32.605 143965 INFO neutron.agent.ovn.metadata.agent [-] Port 4b2cd819-4f26-4469-9cbc-45dfed1645af in datapath 14e18f04-0697-4301-a26d-02786b558075 bound to our chassis#033[00m
Dec  6 03:18:32 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:18:32.607 143965 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 14e18f04-0697-4301-a26d-02786b558075#033[00m
Dec  6 03:18:32 np0005548731 ovn_controller[133927]: 2025-12-06T08:18:32Z|01080|binding|INFO|Setting lport 4b2cd819-4f26-4469-9cbc-45dfed1645af ovn-installed in OVS
Dec  6 03:18:32 np0005548731 ovn_controller[133927]: 2025-12-06T08:18:32Z|01081|binding|INFO|Setting lport 4b2cd819-4f26-4469-9cbc-45dfed1645af up in Southbound
Dec  6 03:18:32 np0005548731 nova_compute[232433]: 2025-12-06 08:18:32.611 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:18:32 np0005548731 nova_compute[232433]: 2025-12-06 08:18:32.616 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:18:32 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:18:32.625 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[532800fc-684b-4ce3-bd93-9fe0214ed624]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:18:32 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:18:32.627 143965 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap14e18f04-01 in ovnmeta-14e18f04-0697-4301-a26d-02786b558075 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Dec  6 03:18:32 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:18:32.629 236668 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap14e18f04-00 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Dec  6 03:18:32 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:18:32.629 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[b3ecd1b8-08bc-48ab-adad-c623e060ae8c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:18:32 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:18:32.630 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[e19e3722-a73f-4508-b190-e0be5a25f344]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:18:32 np0005548731 systemd-udevd[339976]: Network interface NamePolicy= disabled on kernel command line.
Dec  6 03:18:32 np0005548731 systemd-machined[195355]: New machine qemu-110-instance-000000d3.
Dec  6 03:18:32 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:18:32.643 144079 DEBUG oslo.privsep.daemon [-] privsep: reply[7ad08b91-6f7d-4740-ac45-7737a45e3207]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:18:32 np0005548731 NetworkManager[49182]: <info>  [1765009112.6459] device (tap4b2cd819-4f): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec  6 03:18:32 np0005548731 NetworkManager[49182]: <info>  [1765009112.6468] device (tap4b2cd819-4f): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec  6 03:18:32 np0005548731 systemd[1]: Started Virtual Machine qemu-110-instance-000000d3.
Dec  6 03:18:32 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:18:32.667 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[191acac1-44f1-40e6-b8e3-dbd74345da56]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:18:32 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:18:32.698 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[561b3afa-c3b7-44ad-84d8-d19a3b28b379]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:18:32 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:18:32.702 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[f7110561-acdb-4eaa-b803-bca364ce75b6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:18:32 np0005548731 NetworkManager[49182]: <info>  [1765009112.7037] manager: (tap14e18f04-00): new Veth device (/org/freedesktop/NetworkManager/Devices/514)
Dec  6 03:18:32 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:18:32.737 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[129777a0-3579-429d-83fb-0ab2545e7551]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:18:32 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:18:32.741 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[fac2e9da-6b11-46a7-bca9-a8e0317e7d62]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:18:32 np0005548731 NetworkManager[49182]: <info>  [1765009112.7636] device (tap14e18f04-00): carrier: link connected
Dec  6 03:18:32 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:18:32.769 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[bb2cf62c-1a5b-407b-a421-32d2387e04c3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:18:32 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:18:32.785 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[668a562d-9dc1-4f19-85ce-7b59793fbf0e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap14e18f04-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b1:2e:af'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 331], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 937477, 'reachable_time': 40080, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 340008, 'error': None, 'target': 'ovnmeta-14e18f04-0697-4301-a26d-02786b558075', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:18:32 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:18:32.803 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[8e7275db-1731-4573-9946-7892e174ca0f]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feb1:2eaf'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 937477, 'tstamp': 937477}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 340009, 'error': None, 'target': 'ovnmeta-14e18f04-0697-4301-a26d-02786b558075', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:18:32 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:18:32.822 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[ebb8ce96-9b21-4708-a809-8b71e91547bc]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap14e18f04-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b1:2e:af'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 331], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 937477, 'reachable_time': 40080, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 340025, 'error': None, 'target': 'ovnmeta-14e18f04-0697-4301-a26d-02786b558075', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:18:32 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:18:32.849 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[e1e3bb0b-19bc-47cd-a62e-52e5cd0d701d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:18:32 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:18:32.899 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[6d727760-913a-4494-b6e2-0e3a29dd0c2d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:18:32 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:18:32.900 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap14e18f04-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 03:18:32 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:18:32.901 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  6 03:18:32 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:18:32.901 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap14e18f04-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 03:18:32 np0005548731 NetworkManager[49182]: <info>  [1765009112.9038] manager: (tap14e18f04-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/515)
Dec  6 03:18:32 np0005548731 kernel: tap14e18f04-00: entered promiscuous mode
Dec  6 03:18:32 np0005548731 nova_compute[232433]: 2025-12-06 08:18:32.903 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:18:32 np0005548731 nova_compute[232433]: 2025-12-06 08:18:32.905 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:18:32 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:18:32.907 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap14e18f04-00, col_values=(('external_ids', {'iface-id': '43aca8c4-55d8-4309-a0d5-cbba710a5d1f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 03:18:32 np0005548731 nova_compute[232433]: 2025-12-06 08:18:32.908 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:18:32 np0005548731 ovn_controller[133927]: 2025-12-06T08:18:32Z|01082|binding|INFO|Releasing lport 43aca8c4-55d8-4309-a0d5-cbba710a5d1f from this chassis (sb_readonly=0)
Dec  6 03:18:32 np0005548731 nova_compute[232433]: 2025-12-06 08:18:32.921 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:18:32 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:18:32.922 143965 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/14e18f04-0697-4301-a26d-02786b558075.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/14e18f04-0697-4301-a26d-02786b558075.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Dec  6 03:18:32 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:18:32.924 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[947d4030-8873-4433-8f60-34e764a12288]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:18:32 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:18:32.925 143965 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec  6 03:18:32 np0005548731 ovn_metadata_agent[143960]: global
Dec  6 03:18:32 np0005548731 ovn_metadata_agent[143960]:    log         /dev/log local0 debug
Dec  6 03:18:32 np0005548731 ovn_metadata_agent[143960]:    log-tag     haproxy-metadata-proxy-14e18f04-0697-4301-a26d-02786b558075
Dec  6 03:18:32 np0005548731 ovn_metadata_agent[143960]:    user        root
Dec  6 03:18:32 np0005548731 ovn_metadata_agent[143960]:    group       root
Dec  6 03:18:32 np0005548731 ovn_metadata_agent[143960]:    maxconn     1024
Dec  6 03:18:32 np0005548731 ovn_metadata_agent[143960]:    pidfile     /var/lib/neutron/external/pids/14e18f04-0697-4301-a26d-02786b558075.pid.haproxy
Dec  6 03:18:32 np0005548731 ovn_metadata_agent[143960]:    daemon
Dec  6 03:18:32 np0005548731 ovn_metadata_agent[143960]: 
Dec  6 03:18:32 np0005548731 ovn_metadata_agent[143960]: defaults
Dec  6 03:18:32 np0005548731 ovn_metadata_agent[143960]:    log global
Dec  6 03:18:32 np0005548731 ovn_metadata_agent[143960]:    mode http
Dec  6 03:18:32 np0005548731 ovn_metadata_agent[143960]:    option httplog
Dec  6 03:18:32 np0005548731 ovn_metadata_agent[143960]:    option dontlognull
Dec  6 03:18:32 np0005548731 ovn_metadata_agent[143960]:    option http-server-close
Dec  6 03:18:32 np0005548731 ovn_metadata_agent[143960]:    option forwardfor
Dec  6 03:18:32 np0005548731 ovn_metadata_agent[143960]:    retries                 3
Dec  6 03:18:32 np0005548731 ovn_metadata_agent[143960]:    timeout http-request    30s
Dec  6 03:18:32 np0005548731 ovn_metadata_agent[143960]:    timeout connect         30s
Dec  6 03:18:32 np0005548731 ovn_metadata_agent[143960]:    timeout client          32s
Dec  6 03:18:32 np0005548731 ovn_metadata_agent[143960]:    timeout server          32s
Dec  6 03:18:32 np0005548731 ovn_metadata_agent[143960]:    timeout http-keep-alive 30s
Dec  6 03:18:32 np0005548731 ovn_metadata_agent[143960]: 
Dec  6 03:18:32 np0005548731 ovn_metadata_agent[143960]: 
Dec  6 03:18:32 np0005548731 ovn_metadata_agent[143960]: listen listener
Dec  6 03:18:32 np0005548731 ovn_metadata_agent[143960]:    bind 169.254.169.254:80
Dec  6 03:18:32 np0005548731 ovn_metadata_agent[143960]:    server metadata /var/lib/neutron/metadata_proxy
Dec  6 03:18:32 np0005548731 ovn_metadata_agent[143960]:    http-request add-header X-OVN-Network-ID 14e18f04-0697-4301-a26d-02786b558075
Dec  6 03:18:32 np0005548731 ovn_metadata_agent[143960]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Dec  6 03:18:32 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:18:32.927 143965 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-14e18f04-0697-4301-a26d-02786b558075', 'env', 'PROCESS_TAG=haproxy-14e18f04-0697-4301-a26d-02786b558075', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/14e18f04-0697-4301-a26d-02786b558075.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Dec  6 03:18:33 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:18:33 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 03:18:33 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:18:33.124 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 03:18:33 np0005548731 nova_compute[232433]: 2025-12-06 08:18:33.135 232437 DEBUG nova.network.neutron [req-f0641931-db2a-406b-aa6e-bd640c399915 req-55336e56-31c0-47aa-b4f8-587722c1c9aa 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 710b13bb-ef03-446e-8fa9-5bbc77d9232d] Updated VIF entry in instance network info cache for port 4b2cd819-4f26-4469-9cbc-45dfed1645af. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  6 03:18:33 np0005548731 nova_compute[232433]: 2025-12-06 08:18:33.136 232437 DEBUG nova.network.neutron [req-f0641931-db2a-406b-aa6e-bd640c399915 req-55336e56-31c0-47aa-b4f8-587722c1c9aa 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 710b13bb-ef03-446e-8fa9-5bbc77d9232d] Updating instance_info_cache with network_info: [{"id": "4b2cd819-4f26-4469-9cbc-45dfed1645af", "address": "fa:16:3e:f2:fd:af", "network": {"id": "14e18f04-0697-4301-a26d-02786b558075", "bridge": "br-int", "label": "tempest-network-smoke--446330177", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5d23d1d6ffc142eaa9bee0ef93fe60e4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4b2cd819-4f", "ovs_interfaceid": "4b2cd819-4f26-4469-9cbc-45dfed1645af", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 03:18:33 np0005548731 nova_compute[232433]: 2025-12-06 08:18:33.158 232437 DEBUG oslo_concurrency.lockutils [req-f0641931-db2a-406b-aa6e-bd640c399915 req-55336e56-31c0-47aa-b4f8-587722c1c9aa 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-710b13bb-ef03-446e-8fa9-5bbc77d9232d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 03:18:33 np0005548731 nova_compute[232433]: 2025-12-06 08:18:33.221 232437 DEBUG nova.virt.driver [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Emitting event <LifecycleEvent: 1765009113.2213547, 710b13bb-ef03-446e-8fa9-5bbc77d9232d => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 03:18:33 np0005548731 nova_compute[232433]: 2025-12-06 08:18:33.222 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 710b13bb-ef03-446e-8fa9-5bbc77d9232d] VM Started (Lifecycle Event)#033[00m
Dec  6 03:18:33 np0005548731 nova_compute[232433]: 2025-12-06 08:18:33.244 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 710b13bb-ef03-446e-8fa9-5bbc77d9232d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 03:18:33 np0005548731 nova_compute[232433]: 2025-12-06 08:18:33.248 232437 DEBUG nova.virt.driver [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Emitting event <LifecycleEvent: 1765009113.2215772, 710b13bb-ef03-446e-8fa9-5bbc77d9232d => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 03:18:33 np0005548731 nova_compute[232433]: 2025-12-06 08:18:33.249 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 710b13bb-ef03-446e-8fa9-5bbc77d9232d] VM Paused (Lifecycle Event)#033[00m
Dec  6 03:18:33 np0005548731 nova_compute[232433]: 2025-12-06 08:18:33.275 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 710b13bb-ef03-446e-8fa9-5bbc77d9232d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 03:18:33 np0005548731 nova_compute[232433]: 2025-12-06 08:18:33.279 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 710b13bb-ef03-446e-8fa9-5bbc77d9232d] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  6 03:18:33 np0005548731 nova_compute[232433]: 2025-12-06 08:18:33.289 232437 DEBUG nova.compute.manager [req-c8521ac4-fcca-4312-936b-bbf7befdd9bd req-99145038-a534-404d-a366-9c6b91992615 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 710b13bb-ef03-446e-8fa9-5bbc77d9232d] Received event network-vif-plugged-4b2cd819-4f26-4469-9cbc-45dfed1645af external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 03:18:33 np0005548731 nova_compute[232433]: 2025-12-06 08:18:33.289 232437 DEBUG oslo_concurrency.lockutils [req-c8521ac4-fcca-4312-936b-bbf7befdd9bd req-99145038-a534-404d-a366-9c6b91992615 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "710b13bb-ef03-446e-8fa9-5bbc77d9232d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:18:33 np0005548731 nova_compute[232433]: 2025-12-06 08:18:33.290 232437 DEBUG oslo_concurrency.lockutils [req-c8521ac4-fcca-4312-936b-bbf7befdd9bd req-99145038-a534-404d-a366-9c6b91992615 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "710b13bb-ef03-446e-8fa9-5bbc77d9232d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:18:33 np0005548731 nova_compute[232433]: 2025-12-06 08:18:33.290 232437 DEBUG oslo_concurrency.lockutils [req-c8521ac4-fcca-4312-936b-bbf7befdd9bd req-99145038-a534-404d-a366-9c6b91992615 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "710b13bb-ef03-446e-8fa9-5bbc77d9232d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:18:33 np0005548731 nova_compute[232433]: 2025-12-06 08:18:33.290 232437 DEBUG nova.compute.manager [req-c8521ac4-fcca-4312-936b-bbf7befdd9bd req-99145038-a534-404d-a366-9c6b91992615 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 710b13bb-ef03-446e-8fa9-5bbc77d9232d] Processing event network-vif-plugged-4b2cd819-4f26-4469-9cbc-45dfed1645af _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Dec  6 03:18:33 np0005548731 nova_compute[232433]: 2025-12-06 08:18:33.291 232437 DEBUG nova.compute.manager [None req-aa9dae40-02f0-4c3e-9f91-54281de994ca 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] [instance: 710b13bb-ef03-446e-8fa9-5bbc77d9232d] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec  6 03:18:33 np0005548731 nova_compute[232433]: 2025-12-06 08:18:33.295 232437 DEBUG nova.virt.libvirt.driver [None req-aa9dae40-02f0-4c3e-9f91-54281de994ca 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] [instance: 710b13bb-ef03-446e-8fa9-5bbc77d9232d] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Dec  6 03:18:33 np0005548731 nova_compute[232433]: 2025-12-06 08:18:33.297 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 710b13bb-ef03-446e-8fa9-5bbc77d9232d] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  6 03:18:33 np0005548731 nova_compute[232433]: 2025-12-06 08:18:33.297 232437 DEBUG nova.virt.driver [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Emitting event <LifecycleEvent: 1765009113.2934394, 710b13bb-ef03-446e-8fa9-5bbc77d9232d => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 03:18:33 np0005548731 nova_compute[232433]: 2025-12-06 08:18:33.297 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 710b13bb-ef03-446e-8fa9-5bbc77d9232d] VM Resumed (Lifecycle Event)#033[00m
Dec  6 03:18:33 np0005548731 nova_compute[232433]: 2025-12-06 08:18:33.300 232437 INFO nova.virt.libvirt.driver [-] [instance: 710b13bb-ef03-446e-8fa9-5bbc77d9232d] Instance spawned successfully.#033[00m
Dec  6 03:18:33 np0005548731 nova_compute[232433]: 2025-12-06 08:18:33.300 232437 DEBUG nova.virt.libvirt.driver [None req-aa9dae40-02f0-4c3e-9f91-54281de994ca 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] [instance: 710b13bb-ef03-446e-8fa9-5bbc77d9232d] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Dec  6 03:18:33 np0005548731 nova_compute[232433]: 2025-12-06 08:18:33.321 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 710b13bb-ef03-446e-8fa9-5bbc77d9232d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 03:18:33 np0005548731 nova_compute[232433]: 2025-12-06 08:18:33.326 232437 DEBUG nova.virt.libvirt.driver [None req-aa9dae40-02f0-4c3e-9f91-54281de994ca 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] [instance: 710b13bb-ef03-446e-8fa9-5bbc77d9232d] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 03:18:33 np0005548731 nova_compute[232433]: 2025-12-06 08:18:33.327 232437 DEBUG nova.virt.libvirt.driver [None req-aa9dae40-02f0-4c3e-9f91-54281de994ca 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] [instance: 710b13bb-ef03-446e-8fa9-5bbc77d9232d] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 03:18:33 np0005548731 nova_compute[232433]: 2025-12-06 08:18:33.327 232437 DEBUG nova.virt.libvirt.driver [None req-aa9dae40-02f0-4c3e-9f91-54281de994ca 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] [instance: 710b13bb-ef03-446e-8fa9-5bbc77d9232d] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 03:18:33 np0005548731 nova_compute[232433]: 2025-12-06 08:18:33.328 232437 DEBUG nova.virt.libvirt.driver [None req-aa9dae40-02f0-4c3e-9f91-54281de994ca 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] [instance: 710b13bb-ef03-446e-8fa9-5bbc77d9232d] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 03:18:33 np0005548731 nova_compute[232433]: 2025-12-06 08:18:33.328 232437 DEBUG nova.virt.libvirt.driver [None req-aa9dae40-02f0-4c3e-9f91-54281de994ca 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] [instance: 710b13bb-ef03-446e-8fa9-5bbc77d9232d] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 03:18:33 np0005548731 nova_compute[232433]: 2025-12-06 08:18:33.329 232437 DEBUG nova.virt.libvirt.driver [None req-aa9dae40-02f0-4c3e-9f91-54281de994ca 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] [instance: 710b13bb-ef03-446e-8fa9-5bbc77d9232d] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 03:18:33 np0005548731 nova_compute[232433]: 2025-12-06 08:18:33.334 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 710b13bb-ef03-446e-8fa9-5bbc77d9232d] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  6 03:18:33 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:18:33 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 03:18:33 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:18:33.341 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 03:18:33 np0005548731 podman[340082]: 2025-12-06 08:18:33.350213241 +0000 UTC m=+0.103857077 container create 9d3fa97d5d3c5360f56ed74cc574214ea842332858004870976aef6e281ca931 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-14e18f04-0697-4301-a26d-02786b558075, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Dec  6 03:18:33 np0005548731 podman[340082]: 2025-12-06 08:18:33.269333626 +0000 UTC m=+0.022977482 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec  6 03:18:33 np0005548731 nova_compute[232433]: 2025-12-06 08:18:33.368 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 710b13bb-ef03-446e-8fa9-5bbc77d9232d] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  6 03:18:33 np0005548731 systemd[1]: Started libpod-conmon-9d3fa97d5d3c5360f56ed74cc574214ea842332858004870976aef6e281ca931.scope.
Dec  6 03:18:33 np0005548731 nova_compute[232433]: 2025-12-06 08:18:33.388 232437 INFO nova.compute.manager [None req-aa9dae40-02f0-4c3e-9f91-54281de994ca 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] [instance: 710b13bb-ef03-446e-8fa9-5bbc77d9232d] Took 16.04 seconds to spawn the instance on the hypervisor.#033[00m
Dec  6 03:18:33 np0005548731 nova_compute[232433]: 2025-12-06 08:18:33.389 232437 DEBUG nova.compute.manager [None req-aa9dae40-02f0-4c3e-9f91-54281de994ca 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] [instance: 710b13bb-ef03-446e-8fa9-5bbc77d9232d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 03:18:33 np0005548731 systemd[1]: Started libcrun container.
Dec  6 03:18:33 np0005548731 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c3ee2866591a5d27c7bca7af565c204fc774b54885637d918ba892d220f30f1b/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec  6 03:18:33 np0005548731 podman[340082]: 2025-12-06 08:18:33.444136514 +0000 UTC m=+0.197780370 container init 9d3fa97d5d3c5360f56ed74cc574214ea842332858004870976aef6e281ca931 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-14e18f04-0697-4301-a26d-02786b558075, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  6 03:18:33 np0005548731 podman[340082]: 2025-12-06 08:18:33.449484464 +0000 UTC m=+0.203128300 container start 9d3fa97d5d3c5360f56ed74cc574214ea842332858004870976aef6e281ca931 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-14e18f04-0697-4301-a26d-02786b558075, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Dec  6 03:18:33 np0005548731 nova_compute[232433]: 2025-12-06 08:18:33.460 232437 INFO nova.compute.manager [None req-aa9dae40-02f0-4c3e-9f91-54281de994ca 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] [instance: 710b13bb-ef03-446e-8fa9-5bbc77d9232d] Took 17.58 seconds to build instance.#033[00m
Dec  6 03:18:33 np0005548731 neutron-haproxy-ovnmeta-14e18f04-0697-4301-a26d-02786b558075[340097]: [NOTICE]   (340101) : New worker (340103) forked
Dec  6 03:18:33 np0005548731 neutron-haproxy-ovnmeta-14e18f04-0697-4301-a26d-02786b558075[340097]: [NOTICE]   (340101) : Loading success.
Dec  6 03:18:33 np0005548731 nova_compute[232433]: 2025-12-06 08:18:33.478 232437 DEBUG oslo_concurrency.lockutils [None req-aa9dae40-02f0-4c3e-9f91-54281de994ca 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Lock "710b13bb-ef03-446e-8fa9-5bbc77d9232d" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 17.686s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:18:35 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:18:35 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:18:35 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:18:35.127 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:18:35 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:18:35 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:18:35 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:18:35.343 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:18:35 np0005548731 nova_compute[232433]: 2025-12-06 08:18:35.387 232437 DEBUG nova.compute.manager [req-e45477b0-26ae-406e-836b-2002036faaf7 req-4399a312-34ae-406e-80dc-eb321c166f83 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 710b13bb-ef03-446e-8fa9-5bbc77d9232d] Received event network-vif-plugged-4b2cd819-4f26-4469-9cbc-45dfed1645af external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 03:18:35 np0005548731 nova_compute[232433]: 2025-12-06 08:18:35.388 232437 DEBUG oslo_concurrency.lockutils [req-e45477b0-26ae-406e-836b-2002036faaf7 req-4399a312-34ae-406e-80dc-eb321c166f83 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "710b13bb-ef03-446e-8fa9-5bbc77d9232d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:18:35 np0005548731 nova_compute[232433]: 2025-12-06 08:18:35.388 232437 DEBUG oslo_concurrency.lockutils [req-e45477b0-26ae-406e-836b-2002036faaf7 req-4399a312-34ae-406e-80dc-eb321c166f83 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "710b13bb-ef03-446e-8fa9-5bbc77d9232d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:18:35 np0005548731 nova_compute[232433]: 2025-12-06 08:18:35.389 232437 DEBUG oslo_concurrency.lockutils [req-e45477b0-26ae-406e-836b-2002036faaf7 req-4399a312-34ae-406e-80dc-eb321c166f83 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "710b13bb-ef03-446e-8fa9-5bbc77d9232d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:18:35 np0005548731 nova_compute[232433]: 2025-12-06 08:18:35.389 232437 DEBUG nova.compute.manager [req-e45477b0-26ae-406e-836b-2002036faaf7 req-4399a312-34ae-406e-80dc-eb321c166f83 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 710b13bb-ef03-446e-8fa9-5bbc77d9232d] No waiting events found dispatching network-vif-plugged-4b2cd819-4f26-4469-9cbc-45dfed1645af pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 03:18:35 np0005548731 nova_compute[232433]: 2025-12-06 08:18:35.389 232437 WARNING nova.compute.manager [req-e45477b0-26ae-406e-836b-2002036faaf7 req-4399a312-34ae-406e-80dc-eb321c166f83 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 710b13bb-ef03-446e-8fa9-5bbc77d9232d] Received unexpected event network-vif-plugged-4b2cd819-4f26-4469-9cbc-45dfed1645af for instance with vm_state active and task_state None.#033[00m
Dec  6 03:18:35 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e421 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:18:36 np0005548731 nova_compute[232433]: 2025-12-06 08:18:36.314 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:18:37 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:18:37 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:18:37 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:18:37.129 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:18:37 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:18:37 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 03:18:37 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:18:37.346 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 03:18:37 np0005548731 nova_compute[232433]: 2025-12-06 08:18:37.443 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:18:39 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:18:39 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:18:39 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:18:39.132 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:18:39 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:18:39 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:18:39 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:18:39.349 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:18:40 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e421 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:18:41 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:18:41 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:18:41 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:18:41.134 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:18:41 np0005548731 nova_compute[232433]: 2025-12-06 08:18:41.316 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:18:41 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:18:41 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:18:41 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:18:41.352 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:18:42 np0005548731 nova_compute[232433]: 2025-12-06 08:18:42.444 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:18:43 np0005548731 podman[340118]: 2025-12-06 08:18:43.103862327 +0000 UTC m=+0.264453438 container health_status 26c2ce9bdb83c6db5ccad7f5b2a7cb4e9354ab0235ad2f942ea42c8feda2142b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, 
io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent)
Dec  6 03:18:43 np0005548731 podman[340120]: 2025-12-06 08:18:43.103872087 +0000 UTC m=+0.262158341 container health_status ae4fd08cdec31d6ae2a6b91ffff7deb6ca5a8375cb52ee4b685cc6f25796824e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, io.buildah.version=1.41.3, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, config_id=multipathd)
Dec  6 03:18:43 np0005548731 podman[340119]: 2025-12-06 08:18:43.104033371 +0000 UTC m=+0.262341436 container health_status a118c3becf56cfbcae208065771ebf10a65162bb7b96389971293e534823fb78 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  6 03:18:43 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:18:43 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:18:43 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:18:43.136 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:18:43 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:18:43 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:18:43 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:18:43.355 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:18:45 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:18:45 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:18:45 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:18:45.138 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:18:45 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e422 e422: 3 total, 3 up, 3 in
Dec  6 03:18:45 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:18:45 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:18:45 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:18:45.357 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:18:45 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e422 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:18:46 np0005548731 nova_compute[232433]: 2025-12-06 08:18:46.240 232437 DEBUG oslo_concurrency.lockutils [None req-be3bc9b1-a1cc-46e0-904a-831879abe4be 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Acquiring lock "02dd8ac7-35d8-410a-9ec7-05f6acfed2a1" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:18:46 np0005548731 nova_compute[232433]: 2025-12-06 08:18:46.241 232437 DEBUG oslo_concurrency.lockutils [None req-be3bc9b1-a1cc-46e0-904a-831879abe4be 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Lock "02dd8ac7-35d8-410a-9ec7-05f6acfed2a1" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:18:46 np0005548731 nova_compute[232433]: 2025-12-06 08:18:46.241 232437 DEBUG oslo_concurrency.lockutils [None req-be3bc9b1-a1cc-46e0-904a-831879abe4be 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Acquiring lock "02dd8ac7-35d8-410a-9ec7-05f6acfed2a1-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:18:46 np0005548731 nova_compute[232433]: 2025-12-06 08:18:46.241 232437 DEBUG oslo_concurrency.lockutils [None req-be3bc9b1-a1cc-46e0-904a-831879abe4be 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Lock "02dd8ac7-35d8-410a-9ec7-05f6acfed2a1-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:18:46 np0005548731 nova_compute[232433]: 2025-12-06 08:18:46.242 232437 DEBUG oslo_concurrency.lockutils [None req-be3bc9b1-a1cc-46e0-904a-831879abe4be 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Lock "02dd8ac7-35d8-410a-9ec7-05f6acfed2a1-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:18:46 np0005548731 nova_compute[232433]: 2025-12-06 08:18:46.242 232437 INFO nova.compute.manager [None req-be3bc9b1-a1cc-46e0-904a-831879abe4be 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] [instance: 02dd8ac7-35d8-410a-9ec7-05f6acfed2a1] Terminating instance#033[00m
Dec  6 03:18:46 np0005548731 nova_compute[232433]: 2025-12-06 08:18:46.243 232437 DEBUG nova.compute.manager [None req-be3bc9b1-a1cc-46e0-904a-831879abe4be 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] [instance: 02dd8ac7-35d8-410a-9ec7-05f6acfed2a1] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec  6 03:18:46 np0005548731 kernel: tapf368996f-9d (unregistering): left promiscuous mode
Dec  6 03:18:46 np0005548731 NetworkManager[49182]: <info>  [1765009126.2999] device (tapf368996f-9d): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec  6 03:18:46 np0005548731 nova_compute[232433]: 2025-12-06 08:18:46.353 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:18:46 np0005548731 ovn_controller[133927]: 2025-12-06T08:18:46Z|01083|binding|INFO|Releasing lport f368996f-9dbe-4c0f-8b79-360f8c3c0348 from this chassis (sb_readonly=0)
Dec  6 03:18:46 np0005548731 ovn_controller[133927]: 2025-12-06T08:18:46Z|01084|binding|INFO|Setting lport f368996f-9dbe-4c0f-8b79-360f8c3c0348 down in Southbound
Dec  6 03:18:46 np0005548731 nova_compute[232433]: 2025-12-06 08:18:46.354 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:18:46 np0005548731 ovn_controller[133927]: 2025-12-06T08:18:46Z|01085|binding|INFO|Removing iface tapf368996f-9d ovn-installed in OVS
Dec  6 03:18:46 np0005548731 nova_compute[232433]: 2025-12-06 08:18:46.356 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:18:46 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:18:46.365 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f3:0f:6a 10.100.0.10'], port_security=['fa:16:3e:f3:0f:6a 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '02dd8ac7-35d8-410a-9ec7-05f6acfed2a1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b4ef1374-9c77-45a7-8776-50aa60c7d84a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4b2dc4b8729f446a9c7ac69ca446f71d', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'aaf47155-388f-40f5-8da3-afcf909cfc50', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com', 'neutron:port_fip': '192.168.122.244'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=60eec70d-8996-4225-9077-6d0f2705560a, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], logical_port=f368996f-9dbe-4c0f-8b79-360f8c3c0348) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 03:18:46 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:18:46.367 143965 INFO neutron.agent.ovn.metadata.agent [-] Port f368996f-9dbe-4c0f-8b79-360f8c3c0348 in datapath b4ef1374-9c77-45a7-8776-50aa60c7d84a unbound from our chassis#033[00m
Dec  6 03:18:46 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:18:46.368 143965 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network b4ef1374-9c77-45a7-8776-50aa60c7d84a, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec  6 03:18:46 np0005548731 nova_compute[232433]: 2025-12-06 08:18:46.375 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:18:46 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:18:46.369 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[1c97c576-0166-4dc7-973d-c9126c17ec57]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:18:46 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:18:46.380 143965 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-b4ef1374-9c77-45a7-8776-50aa60c7d84a namespace which is not needed anymore#033[00m
Dec  6 03:18:46 np0005548731 systemd[1]: machine-qemu\x2d109\x2dinstance\x2d000000d0.scope: Deactivated successfully.
Dec  6 03:18:46 np0005548731 systemd[1]: machine-qemu\x2d109\x2dinstance\x2d000000d0.scope: Consumed 16.814s CPU time.
Dec  6 03:18:46 np0005548731 systemd-machined[195355]: Machine qemu-109-instance-000000d0 terminated.
Dec  6 03:18:46 np0005548731 nova_compute[232433]: 2025-12-06 08:18:46.476 232437 INFO nova.virt.libvirt.driver [-] [instance: 02dd8ac7-35d8-410a-9ec7-05f6acfed2a1] Instance destroyed successfully.#033[00m
Dec  6 03:18:46 np0005548731 nova_compute[232433]: 2025-12-06 08:18:46.477 232437 DEBUG nova.objects.instance [None req-be3bc9b1-a1cc-46e0-904a-831879abe4be 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Lazy-loading 'resources' on Instance uuid 02dd8ac7-35d8-410a-9ec7-05f6acfed2a1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 03:18:46 np0005548731 nova_compute[232433]: 2025-12-06 08:18:46.489 232437 DEBUG nova.virt.libvirt.vif [None req-be3bc9b1-a1cc-46e0-904a-831879abe4be 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T08:17:09Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-volume-backed-server-432414758',display_name='tempest-TestVolumeBootPattern-volume-backed-server-432414758',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-testvolumebootpattern-volume-backed-server-432414758',id=208,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEaPA38/V7SDeUJEH1PEQmqBPRdyrr6vTmK0YY+l34Pyl4mgafdDtmsvHmDvkdA4yVsJSOQl578JF9Rc5P9/OWjQks65zuMrvGK8LhjfS2jd2cb0AGhJSBtEZLic7Ae5Rg==',key_name='tempest-keypair-606612846',keypairs=<?>,launch_index=0,launched_at=2025-12-06T08:17:19Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='4b2dc4b8729f446a9c7ac69ca446f71d',ramdisk_id='',reservation_id='r-s88z9jl4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-TestVolumeBootPattern-97496240',owner_user_name='tempest-TestVolumeBootPattern-97496240-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T08:17:19Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='8e8feb4540af4e2caa45a88a9202dbe2',uuid=02dd8ac7-35d8-410a-9ec7-05f6acfed2a1,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "f368996f-9dbe-4c0f-8b79-360f8c3c0348", "address": "fa:16:3e:f3:0f:6a", "network": {"id": "b4ef1374-9c77-45a7-8776-50aa60c7d84a", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1664561964-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, 
"meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.244", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4b2dc4b8729f446a9c7ac69ca446f71d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf368996f-9d", "ovs_interfaceid": "f368996f-9dbe-4c0f-8b79-360f8c3c0348", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec  6 03:18:46 np0005548731 nova_compute[232433]: 2025-12-06 08:18:46.489 232437 DEBUG nova.network.os_vif_util [None req-be3bc9b1-a1cc-46e0-904a-831879abe4be 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Converting VIF {"id": "f368996f-9dbe-4c0f-8b79-360f8c3c0348", "address": "fa:16:3e:f3:0f:6a", "network": {"id": "b4ef1374-9c77-45a7-8776-50aa60c7d84a", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1664561964-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.244", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4b2dc4b8729f446a9c7ac69ca446f71d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf368996f-9d", "ovs_interfaceid": "f368996f-9dbe-4c0f-8b79-360f8c3c0348", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 03:18:46 np0005548731 nova_compute[232433]: 2025-12-06 08:18:46.490 232437 DEBUG nova.network.os_vif_util [None req-be3bc9b1-a1cc-46e0-904a-831879abe4be 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:f3:0f:6a,bridge_name='br-int',has_traffic_filtering=True,id=f368996f-9dbe-4c0f-8b79-360f8c3c0348,network=Network(b4ef1374-9c77-45a7-8776-50aa60c7d84a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf368996f-9d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 03:18:46 np0005548731 nova_compute[232433]: 2025-12-06 08:18:46.490 232437 DEBUG os_vif [None req-be3bc9b1-a1cc-46e0-904a-831879abe4be 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:f3:0f:6a,bridge_name='br-int',has_traffic_filtering=True,id=f368996f-9dbe-4c0f-8b79-360f8c3c0348,network=Network(b4ef1374-9c77-45a7-8776-50aa60c7d84a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf368996f-9d') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec  6 03:18:46 np0005548731 nova_compute[232433]: 2025-12-06 08:18:46.491 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:18:46 np0005548731 nova_compute[232433]: 2025-12-06 08:18:46.492 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf368996f-9d, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 03:18:46 np0005548731 nova_compute[232433]: 2025-12-06 08:18:46.494 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  6 03:18:46 np0005548731 nova_compute[232433]: 2025-12-06 08:18:46.495 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:18:46 np0005548731 nova_compute[232433]: 2025-12-06 08:18:46.498 232437 INFO os_vif [None req-be3bc9b1-a1cc-46e0-904a-831879abe4be 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:f3:0f:6a,bridge_name='br-int',has_traffic_filtering=True,id=f368996f-9dbe-4c0f-8b79-360f8c3c0348,network=Network(b4ef1374-9c77-45a7-8776-50aa60c7d84a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf368996f-9d')#033[00m
Dec  6 03:18:46 np0005548731 neutron-haproxy-ovnmeta-b4ef1374-9c77-45a7-8776-50aa60c7d84a[339034]: [NOTICE]   (339038) : haproxy version is 2.8.14-c23fe91
Dec  6 03:18:46 np0005548731 neutron-haproxy-ovnmeta-b4ef1374-9c77-45a7-8776-50aa60c7d84a[339034]: [NOTICE]   (339038) : path to executable is /usr/sbin/haproxy
Dec  6 03:18:46 np0005548731 neutron-haproxy-ovnmeta-b4ef1374-9c77-45a7-8776-50aa60c7d84a[339034]: [WARNING]  (339038) : Exiting Master process...
Dec  6 03:18:46 np0005548731 neutron-haproxy-ovnmeta-b4ef1374-9c77-45a7-8776-50aa60c7d84a[339034]: [ALERT]    (339038) : Current worker (339040) exited with code 143 (Terminated)
Dec  6 03:18:46 np0005548731 neutron-haproxy-ovnmeta-b4ef1374-9c77-45a7-8776-50aa60c7d84a[339034]: [WARNING]  (339038) : All workers exited. Exiting... (0)
Dec  6 03:18:46 np0005548731 systemd[1]: libpod-74798c6cce4ee527d267c872ce6bf079ffdc815edfa5011a15dce59eedba9613.scope: Deactivated successfully.
Dec  6 03:18:46 np0005548731 podman[340257]: 2025-12-06 08:18:46.520034254 +0000 UTC m=+0.053608380 container died 74798c6cce4ee527d267c872ce6bf079ffdc815edfa5011a15dce59eedba9613 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b4ef1374-9c77-45a7-8776-50aa60c7d84a, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Dec  6 03:18:46 np0005548731 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-74798c6cce4ee527d267c872ce6bf079ffdc815edfa5011a15dce59eedba9613-userdata-shm.mount: Deactivated successfully.
Dec  6 03:18:46 np0005548731 systemd[1]: var-lib-containers-storage-overlay-5d4c865e52bf71c826eebb5fb816e93ab6b6d08d1eb969ae10d249711ba81436-merged.mount: Deactivated successfully.
Dec  6 03:18:46 np0005548731 podman[340257]: 2025-12-06 08:18:46.556070254 +0000 UTC m=+0.089644390 container cleanup 74798c6cce4ee527d267c872ce6bf079ffdc815edfa5011a15dce59eedba9613 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b4ef1374-9c77-45a7-8776-50aa60c7d84a, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec  6 03:18:46 np0005548731 systemd[1]: libpod-conmon-74798c6cce4ee527d267c872ce6bf079ffdc815edfa5011a15dce59eedba9613.scope: Deactivated successfully.
Dec  6 03:18:46 np0005548731 podman[340319]: 2025-12-06 08:18:46.62227387 +0000 UTC m=+0.042663102 container remove 74798c6cce4ee527d267c872ce6bf079ffdc815edfa5011a15dce59eedba9613 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b4ef1374-9c77-45a7-8776-50aa60c7d84a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true)
Dec  6 03:18:46 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:18:46.627 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[5a219265-32cc-4c36-a6a2-6ca9a81406f1]: (4, ('Sat Dec  6 08:18:46 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-b4ef1374-9c77-45a7-8776-50aa60c7d84a (74798c6cce4ee527d267c872ce6bf079ffdc815edfa5011a15dce59eedba9613)\n74798c6cce4ee527d267c872ce6bf079ffdc815edfa5011a15dce59eedba9613\nSat Dec  6 08:18:46 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-b4ef1374-9c77-45a7-8776-50aa60c7d84a (74798c6cce4ee527d267c872ce6bf079ffdc815edfa5011a15dce59eedba9613)\n74798c6cce4ee527d267c872ce6bf079ffdc815edfa5011a15dce59eedba9613\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:18:46 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:18:46.629 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[c5d5dd2b-4990-463e-89de-6f8f94bc904f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:18:46 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:18:46.630 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb4ef1374-90, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 03:18:46 np0005548731 nova_compute[232433]: 2025-12-06 08:18:46.631 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:18:46 np0005548731 kernel: tapb4ef1374-90: left promiscuous mode
Dec  6 03:18:46 np0005548731 nova_compute[232433]: 2025-12-06 08:18:46.645 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:18:46 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:18:46.647 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[68013865-8e01-40b7-b9f7-05be38265612]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:18:46 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:18:46.669 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[acbf9ec6-72cb-4314-9ee2-2be37b8ff092]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:18:46 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:18:46.670 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[d738ef7e-5014-4d64-8383-f217b66db9ed]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:18:46 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:18:46.688 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[424d1091-e22b-4c7f-91e5-a1bd1f61e08f]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 929967, 'reachable_time': 25427, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 340331, 'error': None, 'target': 'ovnmeta-b4ef1374-9c77-45a7-8776-50aa60c7d84a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:18:46 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:18:46.690 144079 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-b4ef1374-9c77-45a7-8776-50aa60c7d84a deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Dec  6 03:18:46 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:18:46.690 144079 DEBUG oslo.privsep.daemon [-] privsep: reply[ef4fc5bc-1d96-4652-ac71-5082591ab405]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:18:46 np0005548731 systemd[1]: run-netns-ovnmeta\x2db4ef1374\x2d9c77\x2d45a7\x2d8776\x2d50aa60c7d84a.mount: Deactivated successfully.
Dec  6 03:18:46 np0005548731 nova_compute[232433]: 2025-12-06 08:18:46.694 232437 INFO nova.virt.libvirt.driver [None req-be3bc9b1-a1cc-46e0-904a-831879abe4be 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] [instance: 02dd8ac7-35d8-410a-9ec7-05f6acfed2a1] Deleting instance files /var/lib/nova/instances/02dd8ac7-35d8-410a-9ec7-05f6acfed2a1_del#033[00m
Dec  6 03:18:46 np0005548731 nova_compute[232433]: 2025-12-06 08:18:46.695 232437 INFO nova.virt.libvirt.driver [None req-be3bc9b1-a1cc-46e0-904a-831879abe4be 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] [instance: 02dd8ac7-35d8-410a-9ec7-05f6acfed2a1] Deletion of /var/lib/nova/instances/02dd8ac7-35d8-410a-9ec7-05f6acfed2a1_del complete#033[00m
Dec  6 03:18:46 np0005548731 nova_compute[232433]: 2025-12-06 08:18:46.745 232437 INFO nova.compute.manager [None req-be3bc9b1-a1cc-46e0-904a-831879abe4be 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] [instance: 02dd8ac7-35d8-410a-9ec7-05f6acfed2a1] Took 0.50 seconds to destroy the instance on the hypervisor.#033[00m
Dec  6 03:18:46 np0005548731 nova_compute[232433]: 2025-12-06 08:18:46.746 232437 DEBUG oslo.service.loopingcall [None req-be3bc9b1-a1cc-46e0-904a-831879abe4be 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Dec  6 03:18:46 np0005548731 nova_compute[232433]: 2025-12-06 08:18:46.746 232437 DEBUG nova.compute.manager [-] [instance: 02dd8ac7-35d8-410a-9ec7-05f6acfed2a1] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Dec  6 03:18:46 np0005548731 nova_compute[232433]: 2025-12-06 08:18:46.746 232437 DEBUG nova.network.neutron [-] [instance: 02dd8ac7-35d8-410a-9ec7-05f6acfed2a1] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Dec  6 03:18:47 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:18:47 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:18:47 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:18:47.141 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:18:47 np0005548731 nova_compute[232433]: 2025-12-06 08:18:47.341 232437 DEBUG nova.compute.manager [req-009a30b2-7973-4b76-9c3c-7e76300226c2 req-b4fe1441-703c-4cca-a243-8d1ba72d73b5 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 02dd8ac7-35d8-410a-9ec7-05f6acfed2a1] Received event network-vif-unplugged-f368996f-9dbe-4c0f-8b79-360f8c3c0348 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 03:18:47 np0005548731 nova_compute[232433]: 2025-12-06 08:18:47.341 232437 DEBUG oslo_concurrency.lockutils [req-009a30b2-7973-4b76-9c3c-7e76300226c2 req-b4fe1441-703c-4cca-a243-8d1ba72d73b5 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "02dd8ac7-35d8-410a-9ec7-05f6acfed2a1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:18:47 np0005548731 nova_compute[232433]: 2025-12-06 08:18:47.341 232437 DEBUG oslo_concurrency.lockutils [req-009a30b2-7973-4b76-9c3c-7e76300226c2 req-b4fe1441-703c-4cca-a243-8d1ba72d73b5 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "02dd8ac7-35d8-410a-9ec7-05f6acfed2a1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:18:47 np0005548731 nova_compute[232433]: 2025-12-06 08:18:47.341 232437 DEBUG oslo_concurrency.lockutils [req-009a30b2-7973-4b76-9c3c-7e76300226c2 req-b4fe1441-703c-4cca-a243-8d1ba72d73b5 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "02dd8ac7-35d8-410a-9ec7-05f6acfed2a1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:18:47 np0005548731 nova_compute[232433]: 2025-12-06 08:18:47.341 232437 DEBUG nova.compute.manager [req-009a30b2-7973-4b76-9c3c-7e76300226c2 req-b4fe1441-703c-4cca-a243-8d1ba72d73b5 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 02dd8ac7-35d8-410a-9ec7-05f6acfed2a1] No waiting events found dispatching network-vif-unplugged-f368996f-9dbe-4c0f-8b79-360f8c3c0348 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 03:18:47 np0005548731 nova_compute[232433]: 2025-12-06 08:18:47.342 232437 DEBUG nova.compute.manager [req-009a30b2-7973-4b76-9c3c-7e76300226c2 req-b4fe1441-703c-4cca-a243-8d1ba72d73b5 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 02dd8ac7-35d8-410a-9ec7-05f6acfed2a1] Received event network-vif-unplugged-f368996f-9dbe-4c0f-8b79-360f8c3c0348 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Dec  6 03:18:47 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:18:47 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:18:47 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:18:47.360 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:18:47 np0005548731 nova_compute[232433]: 2025-12-06 08:18:47.480 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:18:47 np0005548731 ovn_controller[133927]: 2025-12-06T08:18:47Z|00144|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:f2:fd:af 10.100.0.14
Dec  6 03:18:47 np0005548731 ovn_controller[133927]: 2025-12-06T08:18:47Z|00145|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:f2:fd:af 10.100.0.14
Dec  6 03:18:48 np0005548731 nova_compute[232433]: 2025-12-06 08:18:48.589 232437 DEBUG nova.network.neutron [-] [instance: 02dd8ac7-35d8-410a-9ec7-05f6acfed2a1] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 03:18:48 np0005548731 nova_compute[232433]: 2025-12-06 08:18:48.605 232437 INFO nova.compute.manager [-] [instance: 02dd8ac7-35d8-410a-9ec7-05f6acfed2a1] Took 1.86 seconds to deallocate network for instance.#033[00m
Dec  6 03:18:48 np0005548731 nova_compute[232433]: 2025-12-06 08:18:48.826 232437 INFO nova.compute.manager [None req-be3bc9b1-a1cc-46e0-904a-831879abe4be 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] [instance: 02dd8ac7-35d8-410a-9ec7-05f6acfed2a1] Took 0.22 seconds to detach 1 volumes for instance.#033[00m
Dec  6 03:18:48 np0005548731 nova_compute[232433]: 2025-12-06 08:18:48.828 232437 DEBUG nova.compute.manager [None req-be3bc9b1-a1cc-46e0-904a-831879abe4be 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] [instance: 02dd8ac7-35d8-410a-9ec7-05f6acfed2a1] Deleting volume: aa997441-9cbb-4e9f-b2e7-f4b6f1e20ab1 _cleanup_volumes /usr/lib/python3.9/site-packages/nova/compute/manager.py:3217#033[00m
Dec  6 03:18:49 np0005548731 nova_compute[232433]: 2025-12-06 08:18:49.041 232437 DEBUG oslo_concurrency.lockutils [None req-be3bc9b1-a1cc-46e0-904a-831879abe4be 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:18:49 np0005548731 nova_compute[232433]: 2025-12-06 08:18:49.042 232437 DEBUG oslo_concurrency.lockutils [None req-be3bc9b1-a1cc-46e0-904a-831879abe4be 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:18:49 np0005548731 nova_compute[232433]: 2025-12-06 08:18:49.118 232437 DEBUG oslo_concurrency.processutils [None req-be3bc9b1-a1cc-46e0-904a-831879abe4be 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 03:18:49 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:18:49 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:18:49 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:18:49.143 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:18:49 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:18:49 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:18:49 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:18:49.363 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:18:49 np0005548731 nova_compute[232433]: 2025-12-06 08:18:49.435 232437 DEBUG nova.compute.manager [req-ff8ebf0f-32fa-41ad-aea9-f8d9be339ead req-df6d3ecc-e37c-4a12-a2bf-fc1c4f78ee54 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 02dd8ac7-35d8-410a-9ec7-05f6acfed2a1] Received event network-vif-plugged-f368996f-9dbe-4c0f-8b79-360f8c3c0348 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 03:18:49 np0005548731 nova_compute[232433]: 2025-12-06 08:18:49.436 232437 DEBUG oslo_concurrency.lockutils [req-ff8ebf0f-32fa-41ad-aea9-f8d9be339ead req-df6d3ecc-e37c-4a12-a2bf-fc1c4f78ee54 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "02dd8ac7-35d8-410a-9ec7-05f6acfed2a1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:18:49 np0005548731 nova_compute[232433]: 2025-12-06 08:18:49.436 232437 DEBUG oslo_concurrency.lockutils [req-ff8ebf0f-32fa-41ad-aea9-f8d9be339ead req-df6d3ecc-e37c-4a12-a2bf-fc1c4f78ee54 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "02dd8ac7-35d8-410a-9ec7-05f6acfed2a1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:18:49 np0005548731 nova_compute[232433]: 2025-12-06 08:18:49.437 232437 DEBUG oslo_concurrency.lockutils [req-ff8ebf0f-32fa-41ad-aea9-f8d9be339ead req-df6d3ecc-e37c-4a12-a2bf-fc1c4f78ee54 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "02dd8ac7-35d8-410a-9ec7-05f6acfed2a1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:18:49 np0005548731 nova_compute[232433]: 2025-12-06 08:18:49.437 232437 DEBUG nova.compute.manager [req-ff8ebf0f-32fa-41ad-aea9-f8d9be339ead req-df6d3ecc-e37c-4a12-a2bf-fc1c4f78ee54 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 02dd8ac7-35d8-410a-9ec7-05f6acfed2a1] No waiting events found dispatching network-vif-plugged-f368996f-9dbe-4c0f-8b79-360f8c3c0348 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 03:18:49 np0005548731 nova_compute[232433]: 2025-12-06 08:18:49.437 232437 WARNING nova.compute.manager [req-ff8ebf0f-32fa-41ad-aea9-f8d9be339ead req-df6d3ecc-e37c-4a12-a2bf-fc1c4f78ee54 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 02dd8ac7-35d8-410a-9ec7-05f6acfed2a1] Received unexpected event network-vif-plugged-f368996f-9dbe-4c0f-8b79-360f8c3c0348 for instance with vm_state deleted and task_state None.#033[00m
Dec  6 03:18:49 np0005548731 nova_compute[232433]: 2025-12-06 08:18:49.437 232437 DEBUG nova.compute.manager [req-ff8ebf0f-32fa-41ad-aea9-f8d9be339ead req-df6d3ecc-e37c-4a12-a2bf-fc1c4f78ee54 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 02dd8ac7-35d8-410a-9ec7-05f6acfed2a1] Received event network-vif-deleted-f368996f-9dbe-4c0f-8b79-360f8c3c0348 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 03:18:49 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 03:18:49 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/274917318' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 03:18:49 np0005548731 nova_compute[232433]: 2025-12-06 08:18:49.558 232437 DEBUG oslo_concurrency.processutils [None req-be3bc9b1-a1cc-46e0-904a-831879abe4be 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.440s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 03:18:49 np0005548731 nova_compute[232433]: 2025-12-06 08:18:49.566 232437 DEBUG nova.compute.provider_tree [None req-be3bc9b1-a1cc-46e0-904a-831879abe4be 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Inventory has not changed in ProviderTree for provider: 6d00757a-082f-486d-ae84-869a2ba2e6e7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  6 03:18:49 np0005548731 nova_compute[232433]: 2025-12-06 08:18:49.588 232437 DEBUG nova.scheduler.client.report [None req-be3bc9b1-a1cc-46e0-904a-831879abe4be 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Inventory has not changed for provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  6 03:18:49 np0005548731 nova_compute[232433]: 2025-12-06 08:18:49.613 232437 DEBUG oslo_concurrency.lockutils [None req-be3bc9b1-a1cc-46e0-904a-831879abe4be 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.571s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:18:49 np0005548731 nova_compute[232433]: 2025-12-06 08:18:49.638 232437 INFO nova.scheduler.client.report [None req-be3bc9b1-a1cc-46e0-904a-831879abe4be 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Deleted allocations for instance 02dd8ac7-35d8-410a-9ec7-05f6acfed2a1#033[00m
Dec  6 03:18:49 np0005548731 nova_compute[232433]: 2025-12-06 08:18:49.699 232437 DEBUG oslo_concurrency.lockutils [None req-be3bc9b1-a1cc-46e0-904a-831879abe4be 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Lock "02dd8ac7-35d8-410a-9ec7-05f6acfed2a1" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.458s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:18:50 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e422 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:18:51 np0005548731 nova_compute[232433]: 2025-12-06 08:18:51.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:18:51 np0005548731 nova_compute[232433]: 2025-12-06 08:18:51.104 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  6 03:18:51 np0005548731 nova_compute[232433]: 2025-12-06 08:18:51.104 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  6 03:18:51 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:18:51 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:18:51 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:18:51.146 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:18:51 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:18:51 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:18:51 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:18:51.367 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:18:51 np0005548731 nova_compute[232433]: 2025-12-06 08:18:51.495 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:18:51 np0005548731 nova_compute[232433]: 2025-12-06 08:18:51.927 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Acquiring lock "refresh_cache-710b13bb-ef03-446e-8fa9-5bbc77d9232d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 03:18:51 np0005548731 nova_compute[232433]: 2025-12-06 08:18:51.927 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Acquired lock "refresh_cache-710b13bb-ef03-446e-8fa9-5bbc77d9232d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 03:18:51 np0005548731 nova_compute[232433]: 2025-12-06 08:18:51.927 232437 DEBUG nova.network.neutron [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] [instance: 710b13bb-ef03-446e-8fa9-5bbc77d9232d] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Dec  6 03:18:51 np0005548731 nova_compute[232433]: 2025-12-06 08:18:51.928 232437 DEBUG nova.objects.instance [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lazy-loading 'info_cache' on Instance uuid 710b13bb-ef03-446e-8fa9-5bbc77d9232d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 03:18:52 np0005548731 nova_compute[232433]: 2025-12-06 08:18:52.482 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:18:53 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:18:53 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:18:53 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:18:53.148 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:18:53 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:18:53 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:18:53 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:18:53.369 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:18:54 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e423 e423: 3 total, 3 up, 3 in
Dec  6 03:18:54 np0005548731 nova_compute[232433]: 2025-12-06 08:18:54.990 232437 DEBUG oslo_concurrency.lockutils [None req-721bea3e-accf-4df8-905d-55b19368527f 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Acquiring lock "710b13bb-ef03-446e-8fa9-5bbc77d9232d" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:18:54 np0005548731 nova_compute[232433]: 2025-12-06 08:18:54.990 232437 DEBUG oslo_concurrency.lockutils [None req-721bea3e-accf-4df8-905d-55b19368527f 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Lock "710b13bb-ef03-446e-8fa9-5bbc77d9232d" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:18:54 np0005548731 nova_compute[232433]: 2025-12-06 08:18:54.991 232437 DEBUG oslo_concurrency.lockutils [None req-721bea3e-accf-4df8-905d-55b19368527f 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Acquiring lock "710b13bb-ef03-446e-8fa9-5bbc77d9232d-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:18:54 np0005548731 nova_compute[232433]: 2025-12-06 08:18:54.991 232437 DEBUG oslo_concurrency.lockutils [None req-721bea3e-accf-4df8-905d-55b19368527f 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Lock "710b13bb-ef03-446e-8fa9-5bbc77d9232d-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:18:54 np0005548731 nova_compute[232433]: 2025-12-06 08:18:54.991 232437 DEBUG oslo_concurrency.lockutils [None req-721bea3e-accf-4df8-905d-55b19368527f 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Lock "710b13bb-ef03-446e-8fa9-5bbc77d9232d-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:18:54 np0005548731 nova_compute[232433]: 2025-12-06 08:18:54.992 232437 INFO nova.compute.manager [None req-721bea3e-accf-4df8-905d-55b19368527f 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] [instance: 710b13bb-ef03-446e-8fa9-5bbc77d9232d] Terminating instance#033[00m
Dec  6 03:18:54 np0005548731 nova_compute[232433]: 2025-12-06 08:18:54.993 232437 DEBUG nova.compute.manager [None req-721bea3e-accf-4df8-905d-55b19368527f 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] [instance: 710b13bb-ef03-446e-8fa9-5bbc77d9232d] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec  6 03:18:55 np0005548731 kernel: tap4b2cd819-4f (unregistering): left promiscuous mode
Dec  6 03:18:55 np0005548731 NetworkManager[49182]: <info>  [1765009135.0510] device (tap4b2cd819-4f): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec  6 03:18:55 np0005548731 ovn_controller[133927]: 2025-12-06T08:18:55Z|01086|binding|INFO|Releasing lport 4b2cd819-4f26-4469-9cbc-45dfed1645af from this chassis (sb_readonly=0)
Dec  6 03:18:55 np0005548731 ovn_controller[133927]: 2025-12-06T08:18:55Z|01087|binding|INFO|Setting lport 4b2cd819-4f26-4469-9cbc-45dfed1645af down in Southbound
Dec  6 03:18:55 np0005548731 nova_compute[232433]: 2025-12-06 08:18:55.056 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:18:55 np0005548731 ovn_controller[133927]: 2025-12-06T08:18:55Z|01088|binding|INFO|Removing iface tap4b2cd819-4f ovn-installed in OVS
Dec  6 03:18:55 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:18:55.089 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f2:fd:af 10.100.0.14'], port_security=['fa:16:3e:f2:fd:af 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '710b13bb-ef03-446e-8fa9-5bbc77d9232d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-14e18f04-0697-4301-a26d-02786b558075', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5d23d1d6ffc142eaa9bee0ef93fe60e4', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'c4946238-f586-46b2-b353-baad07fea15f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=626f221f-4e25-4acd-9bf5-4d267283bf54, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], logical_port=4b2cd819-4f26-4469-9cbc-45dfed1645af) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 03:18:55 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:18:55.090 143965 INFO neutron.agent.ovn.metadata.agent [-] Port 4b2cd819-4f26-4469-9cbc-45dfed1645af in datapath 14e18f04-0697-4301-a26d-02786b558075 unbound from our chassis#033[00m
Dec  6 03:18:55 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:18:55.091 143965 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 14e18f04-0697-4301-a26d-02786b558075, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec  6 03:18:55 np0005548731 nova_compute[232433]: 2025-12-06 08:18:55.091 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:18:55 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:18:55.091 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[b4a52a69-2f42-4cac-b890-18f827b8a89d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:18:55 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:18:55.092 143965 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-14e18f04-0697-4301-a26d-02786b558075 namespace which is not needed anymore#033[00m
Dec  6 03:18:55 np0005548731 systemd[1]: machine-qemu\x2d110\x2dinstance\x2d000000d3.scope: Deactivated successfully.
Dec  6 03:18:55 np0005548731 systemd[1]: machine-qemu\x2d110\x2dinstance\x2d000000d3.scope: Consumed 13.908s CPU time.
Dec  6 03:18:55 np0005548731 systemd-machined[195355]: Machine qemu-110-instance-000000d3 terminated.
Dec  6 03:18:55 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:18:55 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:18:55 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:18:55.150 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:18:55 np0005548731 nova_compute[232433]: 2025-12-06 08:18:55.233 232437 INFO nova.virt.libvirt.driver [-] [instance: 710b13bb-ef03-446e-8fa9-5bbc77d9232d] Instance destroyed successfully.#033[00m
Dec  6 03:18:55 np0005548731 nova_compute[232433]: 2025-12-06 08:18:55.233 232437 DEBUG nova.objects.instance [None req-721bea3e-accf-4df8-905d-55b19368527f 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Lazy-loading 'resources' on Instance uuid 710b13bb-ef03-446e-8fa9-5bbc77d9232d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 03:18:55 np0005548731 nova_compute[232433]: 2025-12-06 08:18:55.257 232437 DEBUG nova.virt.libvirt.vif [None req-721bea3e-accf-4df8-905d-55b19368527f 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T08:18:14Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-568463891-gen-0-1724040432',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-568463891-gen-0-1724040432',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-568463891-gen',id=211,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIzGPaWk14xbtW5sHv4GXDiMSG/C5MZExlVtwgnjxxcupj/ss21DUqD2dDVK61uPoLLLdziVAI6yXqU7ErexaGkX9er10rHIWMNub/drUqFoT5/Q97yMArULe40gKdjSAw==',key_name='tempest-TestSecurityGroupsBasicOps-853526666',keypairs=<?>,launch_index=0,launched_at=2025-12-06T08:18:33Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='5d23d1d6ffc142eaa9bee0ef93fe60e4',ramdisk_id='',reservation_id='r-j0oogsb8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestSecurityGroupsBasicOps-568463891',owner_user_name='tempest-TestSecurityGroupsBasicOps-568463891-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T08:18:33Z,user_data=None,user_id='0432cb6633e14c1b86fc320e7f3bb880',uuid=710b13bb-ef03-446e-8fa9-5bbc77d9232d,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "4b2cd819-4f26-4469-9cbc-45dfed1645af", "address": "fa:16:3e:f2:fd:af", "network": {"id": "14e18f04-0697-4301-a26d-02786b558075", "bridge": "br-int", "label": "tempest-network-smoke--446330177", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5d23d1d6ffc142eaa9bee0ef93fe60e4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4b2cd819-4f", "ovs_interfaceid": "4b2cd819-4f26-4469-9cbc-45dfed1645af", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec  6 03:18:55 np0005548731 neutron-haproxy-ovnmeta-14e18f04-0697-4301-a26d-02786b558075[340097]: [NOTICE]   (340101) : haproxy version is 2.8.14-c23fe91
Dec  6 03:18:55 np0005548731 neutron-haproxy-ovnmeta-14e18f04-0697-4301-a26d-02786b558075[340097]: [NOTICE]   (340101) : path to executable is /usr/sbin/haproxy
Dec  6 03:18:55 np0005548731 neutron-haproxy-ovnmeta-14e18f04-0697-4301-a26d-02786b558075[340097]: [WARNING]  (340101) : Exiting Master process...
Dec  6 03:18:55 np0005548731 nova_compute[232433]: 2025-12-06 08:18:55.258 232437 DEBUG nova.network.os_vif_util [None req-721bea3e-accf-4df8-905d-55b19368527f 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Converting VIF {"id": "4b2cd819-4f26-4469-9cbc-45dfed1645af", "address": "fa:16:3e:f2:fd:af", "network": {"id": "14e18f04-0697-4301-a26d-02786b558075", "bridge": "br-int", "label": "tempest-network-smoke--446330177", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5d23d1d6ffc142eaa9bee0ef93fe60e4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4b2cd819-4f", "ovs_interfaceid": "4b2cd819-4f26-4469-9cbc-45dfed1645af", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 03:18:55 np0005548731 nova_compute[232433]: 2025-12-06 08:18:55.259 232437 DEBUG nova.network.os_vif_util [None req-721bea3e-accf-4df8-905d-55b19368527f 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f2:fd:af,bridge_name='br-int',has_traffic_filtering=True,id=4b2cd819-4f26-4469-9cbc-45dfed1645af,network=Network(14e18f04-0697-4301-a26d-02786b558075),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4b2cd819-4f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 03:18:55 np0005548731 nova_compute[232433]: 2025-12-06 08:18:55.259 232437 DEBUG os_vif [None req-721bea3e-accf-4df8-905d-55b19368527f 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:f2:fd:af,bridge_name='br-int',has_traffic_filtering=True,id=4b2cd819-4f26-4469-9cbc-45dfed1645af,network=Network(14e18f04-0697-4301-a26d-02786b558075),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4b2cd819-4f') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec  6 03:18:55 np0005548731 neutron-haproxy-ovnmeta-14e18f04-0697-4301-a26d-02786b558075[340097]: [ALERT]    (340101) : Current worker (340103) exited with code 143 (Terminated)
Dec  6 03:18:55 np0005548731 neutron-haproxy-ovnmeta-14e18f04-0697-4301-a26d-02786b558075[340097]: [WARNING]  (340101) : All workers exited. Exiting... (0)
Dec  6 03:18:55 np0005548731 nova_compute[232433]: 2025-12-06 08:18:55.261 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:18:55 np0005548731 nova_compute[232433]: 2025-12-06 08:18:55.262 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4b2cd819-4f, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 03:18:55 np0005548731 systemd[1]: libpod-9d3fa97d5d3c5360f56ed74cc574214ea842332858004870976aef6e281ca931.scope: Deactivated successfully.
Dec  6 03:18:55 np0005548731 nova_compute[232433]: 2025-12-06 08:18:55.264 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:18:55 np0005548731 conmon[340097]: conmon 9d3fa97d5d3c5360f56e <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-9d3fa97d5d3c5360f56ed74cc574214ea842332858004870976aef6e281ca931.scope/container/memory.events
Dec  6 03:18:55 np0005548731 nova_compute[232433]: 2025-12-06 08:18:55.266 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  6 03:18:55 np0005548731 nova_compute[232433]: 2025-12-06 08:18:55.267 232437 INFO os_vif [None req-721bea3e-accf-4df8-905d-55b19368527f 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:f2:fd:af,bridge_name='br-int',has_traffic_filtering=True,id=4b2cd819-4f26-4469-9cbc-45dfed1645af,network=Network(14e18f04-0697-4301-a26d-02786b558075),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4b2cd819-4f')#033[00m
Dec  6 03:18:55 np0005548731 podman[340381]: 2025-12-06 08:18:55.270953339 +0000 UTC m=+0.068108374 container died 9d3fa97d5d3c5360f56ed74cc574214ea842332858004870976aef6e281ca931 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-14e18f04-0697-4301-a26d-02786b558075, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Dec  6 03:18:55 np0005548731 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-9d3fa97d5d3c5360f56ed74cc574214ea842332858004870976aef6e281ca931-userdata-shm.mount: Deactivated successfully.
Dec  6 03:18:55 np0005548731 systemd[1]: var-lib-containers-storage-overlay-c3ee2866591a5d27c7bca7af565c204fc774b54885637d918ba892d220f30f1b-merged.mount: Deactivated successfully.
Dec  6 03:18:55 np0005548731 podman[340381]: 2025-12-06 08:18:55.305864212 +0000 UTC m=+0.103019227 container cleanup 9d3fa97d5d3c5360f56ed74cc574214ea842332858004870976aef6e281ca931 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-14e18f04-0697-4301-a26d-02786b558075, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  6 03:18:55 np0005548731 systemd[1]: libpod-conmon-9d3fa97d5d3c5360f56ed74cc574214ea842332858004870976aef6e281ca931.scope: Deactivated successfully.
Dec  6 03:18:55 np0005548731 podman[340435]: 2025-12-06 08:18:55.371892834 +0000 UTC m=+0.043174036 container remove 9d3fa97d5d3c5360f56ed74cc574214ea842332858004870976aef6e281ca931 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-14e18f04-0697-4301-a26d-02786b558075, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true)
Dec  6 03:18:55 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:18:55 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:18:55 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:18:55.373 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:18:55 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:18:55.378 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[8870ceea-5de4-4e9f-b184-c85afc5e651c]: (4, ('Sat Dec  6 08:18:55 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-14e18f04-0697-4301-a26d-02786b558075 (9d3fa97d5d3c5360f56ed74cc574214ea842332858004870976aef6e281ca931)\n9d3fa97d5d3c5360f56ed74cc574214ea842332858004870976aef6e281ca931\nSat Dec  6 08:18:55 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-14e18f04-0697-4301-a26d-02786b558075 (9d3fa97d5d3c5360f56ed74cc574214ea842332858004870976aef6e281ca931)\n9d3fa97d5d3c5360f56ed74cc574214ea842332858004870976aef6e281ca931\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:18:55 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:18:55.380 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[d441483f-4f1e-48b0-9dab-3d6104a9a817]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:18:55 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:18:55.381 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap14e18f04-00, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 03:18:55 np0005548731 kernel: tap14e18f04-00: left promiscuous mode
Dec  6 03:18:55 np0005548731 nova_compute[232433]: 2025-12-06 08:18:55.383 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:18:55 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:18:55.387 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[6aebff7f-24c5-4af0-ae57-79b693507fda]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:18:55 np0005548731 nova_compute[232433]: 2025-12-06 08:18:55.398 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:18:55 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:18:55.405 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[c730f08a-f7b9-49f2-9d41-4de6394457ef]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:18:55 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:18:55.405 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[9526dfae-a422-4cbb-a8d0-815330ee38a9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:18:55 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:18:55.422 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[be9a2faf-7e8a-46e3-aff1-e52527eb9dda]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 937470, 'reachable_time': 15180, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 340454, 'error': None, 'target': 'ovnmeta-14e18f04-0697-4301-a26d-02786b558075', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:18:55 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:18:55.423 144079 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-14e18f04-0697-4301-a26d-02786b558075 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Dec  6 03:18:55 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:18:55.423 144079 DEBUG oslo.privsep.daemon [-] privsep: reply[56fc5bf9-f0c4-48bd-8a05-785e568a4b86]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:18:55 np0005548731 systemd[1]: run-netns-ovnmeta\x2d14e18f04\x2d0697\x2d4301\x2da26d\x2d02786b558075.mount: Deactivated successfully.
Dec  6 03:18:55 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e423 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:18:55 np0005548731 nova_compute[232433]: 2025-12-06 08:18:55.669 232437 INFO nova.virt.libvirt.driver [None req-721bea3e-accf-4df8-905d-55b19368527f 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] [instance: 710b13bb-ef03-446e-8fa9-5bbc77d9232d] Deleting instance files /var/lib/nova/instances/710b13bb-ef03-446e-8fa9-5bbc77d9232d_del#033[00m
Dec  6 03:18:55 np0005548731 nova_compute[232433]: 2025-12-06 08:18:55.670 232437 INFO nova.virt.libvirt.driver [None req-721bea3e-accf-4df8-905d-55b19368527f 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] [instance: 710b13bb-ef03-446e-8fa9-5bbc77d9232d] Deletion of /var/lib/nova/instances/710b13bb-ef03-446e-8fa9-5bbc77d9232d_del complete#033[00m
Dec  6 03:18:56 np0005548731 nova_compute[232433]: 2025-12-06 08:18:56.065 232437 DEBUG nova.compute.manager [req-8d570d16-3b7a-465d-9962-e40989a7b994 req-267b33a8-bc93-46ae-b7c5-2df37885d0a1 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 710b13bb-ef03-446e-8fa9-5bbc77d9232d] Received event network-vif-unplugged-4b2cd819-4f26-4469-9cbc-45dfed1645af external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 03:18:56 np0005548731 nova_compute[232433]: 2025-12-06 08:18:56.066 232437 DEBUG oslo_concurrency.lockutils [req-8d570d16-3b7a-465d-9962-e40989a7b994 req-267b33a8-bc93-46ae-b7c5-2df37885d0a1 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "710b13bb-ef03-446e-8fa9-5bbc77d9232d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:18:56 np0005548731 nova_compute[232433]: 2025-12-06 08:18:56.066 232437 DEBUG oslo_concurrency.lockutils [req-8d570d16-3b7a-465d-9962-e40989a7b994 req-267b33a8-bc93-46ae-b7c5-2df37885d0a1 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "710b13bb-ef03-446e-8fa9-5bbc77d9232d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:18:56 np0005548731 nova_compute[232433]: 2025-12-06 08:18:56.066 232437 DEBUG oslo_concurrency.lockutils [req-8d570d16-3b7a-465d-9962-e40989a7b994 req-267b33a8-bc93-46ae-b7c5-2df37885d0a1 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "710b13bb-ef03-446e-8fa9-5bbc77d9232d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:18:56 np0005548731 nova_compute[232433]: 2025-12-06 08:18:56.067 232437 DEBUG nova.compute.manager [req-8d570d16-3b7a-465d-9962-e40989a7b994 req-267b33a8-bc93-46ae-b7c5-2df37885d0a1 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 710b13bb-ef03-446e-8fa9-5bbc77d9232d] No waiting events found dispatching network-vif-unplugged-4b2cd819-4f26-4469-9cbc-45dfed1645af pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 03:18:56 np0005548731 nova_compute[232433]: 2025-12-06 08:18:56.067 232437 DEBUG nova.compute.manager [req-8d570d16-3b7a-465d-9962-e40989a7b994 req-267b33a8-bc93-46ae-b7c5-2df37885d0a1 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 710b13bb-ef03-446e-8fa9-5bbc77d9232d] Received event network-vif-unplugged-4b2cd819-4f26-4469-9cbc-45dfed1645af for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Dec  6 03:18:56 np0005548731 nova_compute[232433]: 2025-12-06 08:18:56.578 232437 INFO nova.compute.manager [None req-721bea3e-accf-4df8-905d-55b19368527f 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] [instance: 710b13bb-ef03-446e-8fa9-5bbc77d9232d] Took 1.58 seconds to destroy the instance on the hypervisor.#033[00m
Dec  6 03:18:56 np0005548731 nova_compute[232433]: 2025-12-06 08:18:56.579 232437 DEBUG oslo.service.loopingcall [None req-721bea3e-accf-4df8-905d-55b19368527f 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Dec  6 03:18:56 np0005548731 nova_compute[232433]: 2025-12-06 08:18:56.580 232437 DEBUG nova.compute.manager [-] [instance: 710b13bb-ef03-446e-8fa9-5bbc77d9232d] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Dec  6 03:18:56 np0005548731 nova_compute[232433]: 2025-12-06 08:18:56.580 232437 DEBUG nova.network.neutron [-] [instance: 710b13bb-ef03-446e-8fa9-5bbc77d9232d] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Dec  6 03:18:56 np0005548731 nova_compute[232433]: 2025-12-06 08:18:56.904 232437 DEBUG nova.network.neutron [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] [instance: 710b13bb-ef03-446e-8fa9-5bbc77d9232d] Updating instance_info_cache with network_info: [{"id": "4b2cd819-4f26-4469-9cbc-45dfed1645af", "address": "fa:16:3e:f2:fd:af", "network": {"id": "14e18f04-0697-4301-a26d-02786b558075", "bridge": "br-int", "label": "tempest-network-smoke--446330177", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5d23d1d6ffc142eaa9bee0ef93fe60e4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4b2cd819-4f", "ovs_interfaceid": "4b2cd819-4f26-4469-9cbc-45dfed1645af", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 03:18:57 np0005548731 nova_compute[232433]: 2025-12-06 08:18:57.136 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Releasing lock "refresh_cache-710b13bb-ef03-446e-8fa9-5bbc77d9232d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 03:18:57 np0005548731 nova_compute[232433]: 2025-12-06 08:18:57.137 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] [instance: 710b13bb-ef03-446e-8fa9-5bbc77d9232d] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Dec  6 03:18:57 np0005548731 nova_compute[232433]: 2025-12-06 08:18:57.138 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:18:57 np0005548731 nova_compute[232433]: 2025-12-06 08:18:57.138 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:18:57 np0005548731 nova_compute[232433]: 2025-12-06 08:18:57.138 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:18:57 np0005548731 nova_compute[232433]: 2025-12-06 08:18:57.138 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  6 03:18:57 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:18:57 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:18:57 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:18:57.152 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:18:57 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:18:57 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:18:57 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:18:57.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:18:57 np0005548731 nova_compute[232433]: 2025-12-06 08:18:57.539 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:18:58 np0005548731 nova_compute[232433]: 2025-12-06 08:18:58.817 232437 DEBUG nova.compute.manager [req-b4f9419a-f844-45e9-9d35-a366af4ca240 req-7af5d520-9111-4516-b2b6-73c083b4c366 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 710b13bb-ef03-446e-8fa9-5bbc77d9232d] Received event network-vif-plugged-4b2cd819-4f26-4469-9cbc-45dfed1645af external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 03:18:58 np0005548731 nova_compute[232433]: 2025-12-06 08:18:58.818 232437 DEBUG oslo_concurrency.lockutils [req-b4f9419a-f844-45e9-9d35-a366af4ca240 req-7af5d520-9111-4516-b2b6-73c083b4c366 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "710b13bb-ef03-446e-8fa9-5bbc77d9232d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:18:58 np0005548731 nova_compute[232433]: 2025-12-06 08:18:58.818 232437 DEBUG oslo_concurrency.lockutils [req-b4f9419a-f844-45e9-9d35-a366af4ca240 req-7af5d520-9111-4516-b2b6-73c083b4c366 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "710b13bb-ef03-446e-8fa9-5bbc77d9232d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:18:58 np0005548731 nova_compute[232433]: 2025-12-06 08:18:58.818 232437 DEBUG oslo_concurrency.lockutils [req-b4f9419a-f844-45e9-9d35-a366af4ca240 req-7af5d520-9111-4516-b2b6-73c083b4c366 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "710b13bb-ef03-446e-8fa9-5bbc77d9232d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:18:58 np0005548731 nova_compute[232433]: 2025-12-06 08:18:58.818 232437 DEBUG nova.compute.manager [req-b4f9419a-f844-45e9-9d35-a366af4ca240 req-7af5d520-9111-4516-b2b6-73c083b4c366 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 710b13bb-ef03-446e-8fa9-5bbc77d9232d] No waiting events found dispatching network-vif-plugged-4b2cd819-4f26-4469-9cbc-45dfed1645af pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 03:18:58 np0005548731 nova_compute[232433]: 2025-12-06 08:18:58.818 232437 WARNING nova.compute.manager [req-b4f9419a-f844-45e9-9d35-a366af4ca240 req-7af5d520-9111-4516-b2b6-73c083b4c366 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 710b13bb-ef03-446e-8fa9-5bbc77d9232d] Received unexpected event network-vif-plugged-4b2cd819-4f26-4469-9cbc-45dfed1645af for instance with vm_state active and task_state deleting.#033[00m
Dec  6 03:18:59 np0005548731 nova_compute[232433]: 2025-12-06 08:18:59.054 232437 DEBUG nova.network.neutron [-] [instance: 710b13bb-ef03-446e-8fa9-5bbc77d9232d] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 03:18:59 np0005548731 nova_compute[232433]: 2025-12-06 08:18:59.063 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:18:59 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:18:59.062 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=99, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:ec:b3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '32:72:e7:89:e0:7d'}, ipsec=False) old=SB_Global(nb_cfg=98) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 03:18:59 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:18:59.064 143965 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec  6 03:18:59 np0005548731 nova_compute[232433]: 2025-12-06 08:18:59.094 232437 INFO nova.compute.manager [-] [instance: 710b13bb-ef03-446e-8fa9-5bbc77d9232d] Took 2.51 seconds to deallocate network for instance.#033[00m
Dec  6 03:18:59 np0005548731 nova_compute[232433]: 2025-12-06 08:18:59.103 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:18:59 np0005548731 nova_compute[232433]: 2025-12-06 08:18:59.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:18:59 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:18:59 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:18:59 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:18:59.154 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:18:59 np0005548731 nova_compute[232433]: 2025-12-06 08:18:59.172 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:18:59 np0005548731 nova_compute[232433]: 2025-12-06 08:18:59.173 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:18:59 np0005548731 nova_compute[232433]: 2025-12-06 08:18:59.173 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:18:59 np0005548731 nova_compute[232433]: 2025-12-06 08:18:59.173 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  6 03:18:59 np0005548731 nova_compute[232433]: 2025-12-06 08:18:59.173 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 03:18:59 np0005548731 nova_compute[232433]: 2025-12-06 08:18:59.207 232437 DEBUG oslo_concurrency.lockutils [None req-721bea3e-accf-4df8-905d-55b19368527f 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:18:59 np0005548731 nova_compute[232433]: 2025-12-06 08:18:59.208 232437 DEBUG oslo_concurrency.lockutils [None req-721bea3e-accf-4df8-905d-55b19368527f 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:18:59 np0005548731 nova_compute[232433]: 2025-12-06 08:18:59.271 232437 DEBUG oslo_concurrency.processutils [None req-721bea3e-accf-4df8-905d-55b19368527f 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 03:18:59 np0005548731 nova_compute[232433]: 2025-12-06 08:18:59.317 232437 DEBUG nova.compute.manager [req-3975d288-e466-4934-afd2-2bfb9fdd1d02 req-ed5d52ec-6a5b-42fe-a999-87b85579aae6 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 710b13bb-ef03-446e-8fa9-5bbc77d9232d] Received event network-vif-deleted-4b2cd819-4f26-4469-9cbc-45dfed1645af external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 03:18:59 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:18:59 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:18:59 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:18:59.378 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:18:59 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e424 e424: 3 total, 3 up, 3 in
Dec  6 03:18:59 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 03:18:59 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/315860512' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 03:18:59 np0005548731 nova_compute[232433]: 2025-12-06 08:18:59.589 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.415s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 03:18:59 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 03:18:59 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3436298347' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 03:18:59 np0005548731 nova_compute[232433]: 2025-12-06 08:18:59.721 232437 DEBUG oslo_concurrency.processutils [None req-721bea3e-accf-4df8-905d-55b19368527f 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.450s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 03:18:59 np0005548731 nova_compute[232433]: 2025-12-06 08:18:59.728 232437 DEBUG nova.compute.provider_tree [None req-721bea3e-accf-4df8-905d-55b19368527f 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Inventory has not changed in ProviderTree for provider: 6d00757a-082f-486d-ae84-869a2ba2e6e7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  6 03:18:59 np0005548731 nova_compute[232433]: 2025-12-06 08:18:59.777 232437 DEBUG nova.scheduler.client.report [None req-721bea3e-accf-4df8-905d-55b19368527f 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Inventory has not changed for provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  6 03:18:59 np0005548731 nova_compute[232433]: 2025-12-06 08:18:59.794 232437 WARNING nova.virt.libvirt.driver [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  6 03:18:59 np0005548731 nova_compute[232433]: 2025-12-06 08:18:59.795 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4148MB free_disk=20.92782974243164GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  6 03:18:59 np0005548731 nova_compute[232433]: 2025-12-06 08:18:59.796 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:18:59 np0005548731 nova_compute[232433]: 2025-12-06 08:18:59.812 232437 DEBUG oslo_concurrency.lockutils [None req-721bea3e-accf-4df8-905d-55b19368527f 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.604s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:18:59 np0005548731 nova_compute[232433]: 2025-12-06 08:18:59.815 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.020s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:18:59 np0005548731 nova_compute[232433]: 2025-12-06 08:18:59.934 232437 INFO nova.scheduler.client.report [None req-721bea3e-accf-4df8-905d-55b19368527f 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Deleted allocations for instance 710b13bb-ef03-446e-8fa9-5bbc77d9232d#033[00m
Dec  6 03:18:59 np0005548731 nova_compute[232433]: 2025-12-06 08:18:59.978 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  6 03:18:59 np0005548731 nova_compute[232433]: 2025-12-06 08:18:59.979 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  6 03:19:00 np0005548731 nova_compute[232433]: 2025-12-06 08:19:00.034 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 03:19:00 np0005548731 nova_compute[232433]: 2025-12-06 08:19:00.220 232437 DEBUG oslo_concurrency.lockutils [None req-721bea3e-accf-4df8-905d-55b19368527f 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Lock "710b13bb-ef03-446e-8fa9-5bbc77d9232d" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.229s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:19:00 np0005548731 nova_compute[232433]: 2025-12-06 08:19:00.265 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:19:00 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 03:19:00 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/870741633' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 03:19:00 np0005548731 nova_compute[232433]: 2025-12-06 08:19:00.465 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.431s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 03:19:00 np0005548731 nova_compute[232433]: 2025-12-06 08:19:00.471 232437 DEBUG nova.compute.provider_tree [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Inventory has not changed in ProviderTree for provider: 6d00757a-082f-486d-ae84-869a2ba2e6e7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  6 03:19:00 np0005548731 nova_compute[232433]: 2025-12-06 08:19:00.501 232437 DEBUG nova.scheduler.client.report [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Inventory has not changed for provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  6 03:19:00 np0005548731 nova_compute[232433]: 2025-12-06 08:19:00.544 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  6 03:19:00 np0005548731 nova_compute[232433]: 2025-12-06 08:19:00.544 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.729s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:19:00 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e424 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:19:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:19:00.922 143965 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:19:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:19:00.922 143965 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:19:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:19:00.922 143965 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:19:01 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:19:01 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:19:01 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:19:01.157 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:19:01 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:19:01 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 03:19:01 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:19:01.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 03:19:01 np0005548731 nova_compute[232433]: 2025-12-06 08:19:01.476 232437 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765009126.475401, 02dd8ac7-35d8-410a-9ec7-05f6acfed2a1 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 03:19:01 np0005548731 nova_compute[232433]: 2025-12-06 08:19:01.476 232437 INFO nova.compute.manager [-] [instance: 02dd8ac7-35d8-410a-9ec7-05f6acfed2a1] VM Stopped (Lifecycle Event)#033[00m
Dec  6 03:19:01 np0005548731 nova_compute[232433]: 2025-12-06 08:19:01.541 232437 DEBUG nova.compute.manager [None req-8c8a985c-d9a4-4caf-91e6-62fb71896a59 - - - - - -] [instance: 02dd8ac7-35d8-410a-9ec7-05f6acfed2a1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 03:19:01 np0005548731 nova_compute[232433]: 2025-12-06 08:19:01.545 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:19:01 np0005548731 nova_compute[232433]: 2025-12-06 08:19:01.545 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:19:02 np0005548731 nova_compute[232433]: 2025-12-06 08:19:02.584 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:19:03 np0005548731 nova_compute[232433]: 2025-12-06 08:19:03.106 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:19:03 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:19:03 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:19:03 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:19:03.159 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:19:03 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:19:03 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:19:03 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:19:03.384 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:19:04 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e425 e425: 3 total, 3 up, 3 in
Dec  6 03:19:05 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:19:05.067 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=9f96b960-b4f2-40bd-ae99-08121f5e8b78, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '99'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 03:19:05 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:19:05 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:19:05 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:19:05.178 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:19:05 np0005548731 nova_compute[232433]: 2025-12-06 08:19:05.267 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:19:05 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:19:05 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:19:05 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:19:05.388 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:19:05 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e425 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:19:07 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:19:07 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:19:07 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:19:07.180 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:19:07 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:19:07 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:19:07 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:19:07.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:19:07 np0005548731 nova_compute[232433]: 2025-12-06 08:19:07.626 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:19:09 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:19:09 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:19:09 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:19:09.182 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:19:09 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:19:09 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:19:09 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:19:09.394 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:19:10 np0005548731 nova_compute[232433]: 2025-12-06 08:19:10.235 232437 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765009135.2312436, 710b13bb-ef03-446e-8fa9-5bbc77d9232d => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 03:19:10 np0005548731 nova_compute[232433]: 2025-12-06 08:19:10.236 232437 INFO nova.compute.manager [-] [instance: 710b13bb-ef03-446e-8fa9-5bbc77d9232d] VM Stopped (Lifecycle Event)#033[00m
Dec  6 03:19:10 np0005548731 nova_compute[232433]: 2025-12-06 08:19:10.270 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:19:10 np0005548731 nova_compute[232433]: 2025-12-06 08:19:10.276 232437 DEBUG nova.compute.manager [None req-e9c82676-fdc3-4266-92ad-8a4c7fd9c12c - - - - - -] [instance: 710b13bb-ef03-446e-8fa9-5bbc77d9232d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 03:19:10 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e425 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:19:11 np0005548731 nova_compute[232433]: 2025-12-06 08:19:11.097 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:19:11 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:19:11 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:19:11 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:19:11.184 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:19:11 np0005548731 nova_compute[232433]: 2025-12-06 08:19:11.248 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:19:11 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:19:11 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:19:11 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:19:11.396 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:19:12 np0005548731 nova_compute[232433]: 2025-12-06 08:19:12.651 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:19:13 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:19:13 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:19:13 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:19:13.186 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:19:13 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:19:13 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:19:13 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:19:13.399 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:19:13 np0005548731 podman[340610]: 2025-12-06 08:19:13.604863367 +0000 UTC m=+0.065597642 container health_status ae4fd08cdec31d6ae2a6b91ffff7deb6ca5a8375cb52ee4b685cc6f25796824e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true, container_name=multipathd)
Dec  6 03:19:13 np0005548731 podman[340609]: 2025-12-06 08:19:13.620333615 +0000 UTC m=+0.081239155 container health_status a118c3becf56cfbcae208065771ebf10a65162bb7b96389971293e534823fb78 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec  6 03:19:13 np0005548731 podman[340608]: 2025-12-06 08:19:13.621568465 +0000 UTC m=+0.088131933 container health_status 26c2ce9bdb83c6db5ccad7f5b2a7cb4e9354ab0235ad2f942ea42c8feda2142b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true)
Dec  6 03:19:14 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  6 03:19:14 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 03:19:14 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  6 03:19:15 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:19:15 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:19:15 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:19:15.189 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:19:15 np0005548731 nova_compute[232433]: 2025-12-06 08:19:15.272 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:19:15 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:19:15 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:19:15 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:19:15.402 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:19:15 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e425 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:19:17 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:19:17 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:19:17 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:19:17.191 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:19:17 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:19:17 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:19:17 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:19:17.405 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:19:17 np0005548731 nova_compute[232433]: 2025-12-06 08:19:17.654 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:19:19 np0005548731 nova_compute[232433]: 2025-12-06 08:19:19.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:19:19 np0005548731 nova_compute[232433]: 2025-12-06 08:19:19.104 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Dec  6 03:19:19 np0005548731 nova_compute[232433]: 2025-12-06 08:19:19.122 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Dec  6 03:19:19 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:19:19 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:19:19 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:19:19.193 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:19:19 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:19:19 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:19:19 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:19:19.409 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:19:20 np0005548731 nova_compute[232433]: 2025-12-06 08:19:20.274 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:19:20 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e425 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:19:20 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 03:19:20 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 03:19:21 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:19:21 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:19:21 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:19:21.196 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:19:21 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:19:21 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:19:21 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:19:21.412 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:19:22 np0005548731 nova_compute[232433]: 2025-12-06 08:19:22.697 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:19:23 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:19:23 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:19:23 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:19:23.201 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:19:23 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:19:23 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:19:23 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:19:23.415 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:19:24 np0005548731 ceph-mon[77458]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec  6 03:19:24 np0005548731 ceph-mon[77458]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 6600.0 total, 600.0 interval#012Cumulative writes: 16K writes, 84K keys, 16K commit groups, 1.0 writes per commit group, ingest: 0.17 GB, 0.03 MB/s#012Cumulative WAL: 16K writes, 16K syncs, 1.00 writes per sync, written: 0.17 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1517 writes, 7487 keys, 1517 commit groups, 1.0 writes per commit group, ingest: 15.30 MB, 0.03 MB/s#012Interval WAL: 1517 writes, 1517 syncs, 1.00 writes per sync, written: 0.01 GB, 0.03 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     40.7      2.56              0.31        54    0.047       0      0       0.0       0.0#012  L6      1/0   11.21 MB   0.0      0.6     0.1      0.5       0.6      0.0       0.0   5.4    107.7     92.5      6.11              1.66        53    0.115    435K    28K       0.0       0.0#012 Sum      1/0   11.21 MB   0.0      0.6     0.1      0.5       0.7      0.1       0.0   6.4     75.9     77.2      8.67              1.96       107    0.081    435K    28K       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   8.6    102.3    103.3      0.80              0.28        12    0.067     67K   3094       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.6     0.1      0.5       0.6      0.0       0.0   0.0    107.7     92.5      6.11              1.66        53    0.115    435K    28K       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     40.8      2.55              0.31        53    0.048       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.6      0.00              0.00         1    0.003       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 6600.0 total, 600.0 interval#012Flush(GB): cumulative 0.102, interval 0.009#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.65 GB write, 0.10 MB/s write, 0.64 GB read, 0.10 MB/s read, 8.7 seconds#012Interval compaction: 0.08 GB write, 0.14 MB/s write, 0.08 GB read, 0.14 MB/s read, 0.8 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x5619171151f0#2 capacity: 304.00 MB usage: 70.96 MB table_size: 0 occupancy: 18446744073709551615 collections: 12 last_copies: 0 last_secs: 0.000367 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(4022,67.89 MB,22.332%) FilterBlock(107,1.17 MB,0.384095%) IndexBlock(107,1.90 MB,0.624361%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Dec  6 03:19:25 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:19:25 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:19:25 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:19:25.204 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:19:25 np0005548731 nova_compute[232433]: 2025-12-06 08:19:25.296 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:19:25 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:19:25 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:19:25 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:19:25.418 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:19:25 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e425 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:19:27 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:19:27 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:19:27 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:19:27.206 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:19:27 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:19:27 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:19:27 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:19:27.421 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:19:27 np0005548731 nova_compute[232433]: 2025-12-06 08:19:27.699 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:19:29 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:19:29 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:19:29 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:19:29.208 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:19:29 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:19:29 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:19:29 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:19:29.423 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:19:30 np0005548731 nova_compute[232433]: 2025-12-06 08:19:30.299 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:19:30 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e425 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:19:31 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:19:31 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:19:31 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:19:31.211 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:19:31 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:19:31 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:19:31 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:19:31.428 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:19:32 np0005548731 nova_compute[232433]: 2025-12-06 08:19:32.701 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:19:33 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:19:33 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:19:33 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:19:33.213 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:19:33 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:19:33 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:19:33 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:19:33.431 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:19:35 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:19:35 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:19:35 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:19:35.216 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:19:35 np0005548731 nova_compute[232433]: 2025-12-06 08:19:35.303 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:19:35 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:19:35 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:19:35 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:19:35.434 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:19:35 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e425 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:19:37 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:19:37 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:19:37 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:19:37.217 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:19:37 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:19:37 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:19:37 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:19:37.437 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:19:37 np0005548731 nova_compute[232433]: 2025-12-06 08:19:37.703 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:19:39 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:19:39 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:19:39 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:19:39.219 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:19:39 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:19:39 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:19:39 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:19:39.440 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:19:40 np0005548731 nova_compute[232433]: 2025-12-06 08:19:40.307 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:19:40 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e425 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:19:41 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:19:41 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:19:41 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:19:41.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:19:41 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:19:41 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:19:41 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:19:41.441 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:19:42 np0005548731 nova_compute[232433]: 2025-12-06 08:19:42.742 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:19:43 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:19:43 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:19:43 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:19:43.223 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:19:43 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:19:43 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:19:43 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:19:43.444 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:19:43 np0005548731 podman[340895]: 2025-12-06 08:19:43.90208581 +0000 UTC m=+0.060516298 container health_status 26c2ce9bdb83c6db5ccad7f5b2a7cb4e9354ab0235ad2f942ea42c8feda2142b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Dec  6 03:19:43 np0005548731 podman[340897]: 2025-12-06 08:19:43.967059516 +0000 UTC m=+0.110732894 container health_status ae4fd08cdec31d6ae2a6b91ffff7deb6ca5a8375cb52ee4b685cc6f25796824e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd)
Dec  6 03:19:43 np0005548731 podman[340896]: 2025-12-06 08:19:43.977750248 +0000 UTC m=+0.134077335 container health_status a118c3becf56cfbcae208065771ebf10a65162bb7b96389971293e534823fb78 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, managed_by=edpm_ansible)
Dec  6 03:19:44 np0005548731 nova_compute[232433]: 2025-12-06 08:19:44.061 232437 DEBUG oslo_concurrency.lockutils [None req-7a9af839-05b2-4aa5-a027-889229875de7 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Acquiring lock "9af91392-7795-4927-9070-f820c4efd904" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:19:44 np0005548731 nova_compute[232433]: 2025-12-06 08:19:44.061 232437 DEBUG oslo_concurrency.lockutils [None req-7a9af839-05b2-4aa5-a027-889229875de7 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Lock "9af91392-7795-4927-9070-f820c4efd904" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:19:44 np0005548731 nova_compute[232433]: 2025-12-06 08:19:44.218 232437 DEBUG nova.compute.manager [None req-7a9af839-05b2-4aa5-a027-889229875de7 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] [instance: 9af91392-7795-4927-9070-f820c4efd904] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Dec  6 03:19:44 np0005548731 nova_compute[232433]: 2025-12-06 08:19:44.316 232437 DEBUG oslo_concurrency.lockutils [None req-7a9af839-05b2-4aa5-a027-889229875de7 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:19:44 np0005548731 nova_compute[232433]: 2025-12-06 08:19:44.316 232437 DEBUG oslo_concurrency.lockutils [None req-7a9af839-05b2-4aa5-a027-889229875de7 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:19:44 np0005548731 nova_compute[232433]: 2025-12-06 08:19:44.329 232437 DEBUG nova.virt.hardware [None req-7a9af839-05b2-4aa5-a027-889229875de7 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Dec  6 03:19:44 np0005548731 nova_compute[232433]: 2025-12-06 08:19:44.330 232437 INFO nova.compute.claims [None req-7a9af839-05b2-4aa5-a027-889229875de7 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] [instance: 9af91392-7795-4927-9070-f820c4efd904] Claim successful on node compute-2.ctlplane.example.com#033[00m
Dec  6 03:19:44 np0005548731 nova_compute[232433]: 2025-12-06 08:19:44.499 232437 DEBUG oslo_concurrency.processutils [None req-7a9af839-05b2-4aa5-a027-889229875de7 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 03:19:44 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 03:19:44 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3533005233' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 03:19:44 np0005548731 nova_compute[232433]: 2025-12-06 08:19:44.928 232437 DEBUG oslo_concurrency.processutils [None req-7a9af839-05b2-4aa5-a027-889229875de7 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.430s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 03:19:44 np0005548731 nova_compute[232433]: 2025-12-06 08:19:44.936 232437 DEBUG nova.compute.provider_tree [None req-7a9af839-05b2-4aa5-a027-889229875de7 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Inventory has not changed in ProviderTree for provider: 6d00757a-082f-486d-ae84-869a2ba2e6e7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  6 03:19:44 np0005548731 nova_compute[232433]: 2025-12-06 08:19:44.972 232437 DEBUG nova.scheduler.client.report [None req-7a9af839-05b2-4aa5-a027-889229875de7 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Inventory has not changed for provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  6 03:19:45 np0005548731 nova_compute[232433]: 2025-12-06 08:19:45.013 232437 DEBUG oslo_concurrency.lockutils [None req-7a9af839-05b2-4aa5-a027-889229875de7 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.697s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:19:45 np0005548731 nova_compute[232433]: 2025-12-06 08:19:45.015 232437 DEBUG nova.compute.manager [None req-7a9af839-05b2-4aa5-a027-889229875de7 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] [instance: 9af91392-7795-4927-9070-f820c4efd904] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Dec  6 03:19:45 np0005548731 nova_compute[232433]: 2025-12-06 08:19:45.105 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:19:45 np0005548731 nova_compute[232433]: 2025-12-06 08:19:45.105 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Dec  6 03:19:45 np0005548731 nova_compute[232433]: 2025-12-06 08:19:45.171 232437 DEBUG nova.compute.manager [None req-7a9af839-05b2-4aa5-a027-889229875de7 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] [instance: 9af91392-7795-4927-9070-f820c4efd904] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Dec  6 03:19:45 np0005548731 nova_compute[232433]: 2025-12-06 08:19:45.171 232437 DEBUG nova.network.neutron [None req-7a9af839-05b2-4aa5-a027-889229875de7 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] [instance: 9af91392-7795-4927-9070-f820c4efd904] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Dec  6 03:19:45 np0005548731 nova_compute[232433]: 2025-12-06 08:19:45.197 232437 INFO nova.virt.libvirt.driver [None req-7a9af839-05b2-4aa5-a027-889229875de7 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] [instance: 9af91392-7795-4927-9070-f820c4efd904] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Dec  6 03:19:45 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:19:45 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:19:45 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:19:45.226 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:19:45 np0005548731 nova_compute[232433]: 2025-12-06 08:19:45.234 232437 DEBUG nova.compute.manager [None req-7a9af839-05b2-4aa5-a027-889229875de7 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] [instance: 9af91392-7795-4927-9070-f820c4efd904] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Dec  6 03:19:45 np0005548731 nova_compute[232433]: 2025-12-06 08:19:45.337 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:19:45 np0005548731 nova_compute[232433]: 2025-12-06 08:19:45.351 232437 DEBUG nova.compute.manager [None req-7a9af839-05b2-4aa5-a027-889229875de7 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] [instance: 9af91392-7795-4927-9070-f820c4efd904] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Dec  6 03:19:45 np0005548731 nova_compute[232433]: 2025-12-06 08:19:45.353 232437 DEBUG nova.virt.libvirt.driver [None req-7a9af839-05b2-4aa5-a027-889229875de7 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] [instance: 9af91392-7795-4927-9070-f820c4efd904] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Dec  6 03:19:45 np0005548731 nova_compute[232433]: 2025-12-06 08:19:45.353 232437 INFO nova.virt.libvirt.driver [None req-7a9af839-05b2-4aa5-a027-889229875de7 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] [instance: 9af91392-7795-4927-9070-f820c4efd904] Creating image(s)#033[00m
Dec  6 03:19:45 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:19:45 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:19:45 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:19:45.448 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:19:45 np0005548731 nova_compute[232433]: 2025-12-06 08:19:45.583 232437 DEBUG nova.storage.rbd_utils [None req-7a9af839-05b2-4aa5-a027-889229875de7 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] rbd image 9af91392-7795-4927-9070-f820c4efd904_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 03:19:45 np0005548731 nova_compute[232433]: 2025-12-06 08:19:45.611 232437 DEBUG nova.storage.rbd_utils [None req-7a9af839-05b2-4aa5-a027-889229875de7 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] rbd image 9af91392-7795-4927-9070-f820c4efd904_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 03:19:45 np0005548731 nova_compute[232433]: 2025-12-06 08:19:45.646 232437 DEBUG nova.storage.rbd_utils [None req-7a9af839-05b2-4aa5-a027-889229875de7 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] rbd image 9af91392-7795-4927-9070-f820c4efd904_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 03:19:45 np0005548731 nova_compute[232433]: 2025-12-06 08:19:45.651 232437 DEBUG oslo_concurrency.processutils [None req-7a9af839-05b2-4aa5-a027-889229875de7 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 03:19:45 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e425 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:19:45 np0005548731 nova_compute[232433]: 2025-12-06 08:19:45.744 232437 DEBUG oslo_concurrency.processutils [None req-7a9af839-05b2-4aa5-a027-889229875de7 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json" returned: 0 in 0.093s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 03:19:45 np0005548731 nova_compute[232433]: 2025-12-06 08:19:45.745 232437 DEBUG oslo_concurrency.lockutils [None req-7a9af839-05b2-4aa5-a027-889229875de7 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Acquiring lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:19:45 np0005548731 nova_compute[232433]: 2025-12-06 08:19:45.745 232437 DEBUG oslo_concurrency.lockutils [None req-7a9af839-05b2-4aa5-a027-889229875de7 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:19:45 np0005548731 nova_compute[232433]: 2025-12-06 08:19:45.745 232437 DEBUG oslo_concurrency.lockutils [None req-7a9af839-05b2-4aa5-a027-889229875de7 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:19:45 np0005548731 nova_compute[232433]: 2025-12-06 08:19:45.774 232437 DEBUG nova.storage.rbd_utils [None req-7a9af839-05b2-4aa5-a027-889229875de7 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] rbd image 9af91392-7795-4927-9070-f820c4efd904_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 03:19:45 np0005548731 nova_compute[232433]: 2025-12-06 08:19:45.778 232437 DEBUG oslo_concurrency.processutils [None req-7a9af839-05b2-4aa5-a027-889229875de7 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef 9af91392-7795-4927-9070-f820c4efd904_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 03:19:46 np0005548731 nova_compute[232433]: 2025-12-06 08:19:46.189 232437 DEBUG nova.policy [None req-7a9af839-05b2-4aa5-a027-889229875de7 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '0432cb6633e14c1b86fc320e7f3bb880', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '5d23d1d6ffc142eaa9bee0ef93fe60e4', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Dec  6 03:19:46 np0005548731 nova_compute[232433]: 2025-12-06 08:19:46.478 232437 DEBUG oslo_concurrency.processutils [None req-7a9af839-05b2-4aa5-a027-889229875de7 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef 9af91392-7795-4927-9070-f820c4efd904_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.699s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 03:19:46 np0005548731 nova_compute[232433]: 2025-12-06 08:19:46.561 232437 DEBUG nova.storage.rbd_utils [None req-7a9af839-05b2-4aa5-a027-889229875de7 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] resizing rbd image 9af91392-7795-4927-9070-f820c4efd904_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Dec  6 03:19:46 np0005548731 nova_compute[232433]: 2025-12-06 08:19:46.818 232437 DEBUG nova.objects.instance [None req-7a9af839-05b2-4aa5-a027-889229875de7 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Lazy-loading 'migration_context' on Instance uuid 9af91392-7795-4927-9070-f820c4efd904 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 03:19:46 np0005548731 nova_compute[232433]: 2025-12-06 08:19:46.859 232437 DEBUG nova.virt.libvirt.driver [None req-7a9af839-05b2-4aa5-a027-889229875de7 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] [instance: 9af91392-7795-4927-9070-f820c4efd904] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Dec  6 03:19:46 np0005548731 nova_compute[232433]: 2025-12-06 08:19:46.860 232437 DEBUG nova.virt.libvirt.driver [None req-7a9af839-05b2-4aa5-a027-889229875de7 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] [instance: 9af91392-7795-4927-9070-f820c4efd904] Ensure instance console log exists: /var/lib/nova/instances/9af91392-7795-4927-9070-f820c4efd904/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec  6 03:19:46 np0005548731 nova_compute[232433]: 2025-12-06 08:19:46.860 232437 DEBUG oslo_concurrency.lockutils [None req-7a9af839-05b2-4aa5-a027-889229875de7 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:19:46 np0005548731 nova_compute[232433]: 2025-12-06 08:19:46.861 232437 DEBUG oslo_concurrency.lockutils [None req-7a9af839-05b2-4aa5-a027-889229875de7 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:19:46 np0005548731 nova_compute[232433]: 2025-12-06 08:19:46.861 232437 DEBUG oslo_concurrency.lockutils [None req-7a9af839-05b2-4aa5-a027-889229875de7 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:19:47 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:19:47 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:19:47 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:19:47.228 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:19:47 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:19:47 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:19:47 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:19:47.452 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:19:47 np0005548731 nova_compute[232433]: 2025-12-06 08:19:47.744 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:19:49 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:19:49 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:19:49 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:19:49.230 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:19:49 np0005548731 nova_compute[232433]: 2025-12-06 08:19:49.369 232437 DEBUG nova.network.neutron [None req-7a9af839-05b2-4aa5-a027-889229875de7 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] [instance: 9af91392-7795-4927-9070-f820c4efd904] Successfully created port: 0031a548-e628-44e3-813b-e2b19eff3073 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Dec  6 03:19:49 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:19:49 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:19:49 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:19:49.455 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:19:50 np0005548731 nova_compute[232433]: 2025-12-06 08:19:50.340 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:19:50 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e425 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:19:51 np0005548731 nova_compute[232433]: 2025-12-06 08:19:51.035 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:19:51 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:19:51.034 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=100, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:ec:b3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '32:72:e7:89:e0:7d'}, ipsec=False) old=SB_Global(nb_cfg=99) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 03:19:51 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:19:51.036 143965 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec  6 03:19:51 np0005548731 nova_compute[232433]: 2025-12-06 08:19:51.134 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:19:51 np0005548731 nova_compute[232433]: 2025-12-06 08:19:51.134 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  6 03:19:51 np0005548731 nova_compute[232433]: 2025-12-06 08:19:51.135 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  6 03:19:51 np0005548731 nova_compute[232433]: 2025-12-06 08:19:51.154 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] [instance: 9af91392-7795-4927-9070-f820c4efd904] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Dec  6 03:19:51 np0005548731 nova_compute[232433]: 2025-12-06 08:19:51.154 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Dec  6 03:19:51 np0005548731 nova_compute[232433]: 2025-12-06 08:19:51.155 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:19:51 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:19:51 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:19:51 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:19:51.232 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:19:51 np0005548731 nova_compute[232433]: 2025-12-06 08:19:51.406 232437 DEBUG nova.network.neutron [None req-7a9af839-05b2-4aa5-a027-889229875de7 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] [instance: 9af91392-7795-4927-9070-f820c4efd904] Successfully updated port: 0031a548-e628-44e3-813b-e2b19eff3073 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Dec  6 03:19:51 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:19:51 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:19:51 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:19:51.457 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:19:51 np0005548731 nova_compute[232433]: 2025-12-06 08:19:51.631 232437 DEBUG oslo_concurrency.lockutils [None req-7a9af839-05b2-4aa5-a027-889229875de7 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Acquiring lock "refresh_cache-9af91392-7795-4927-9070-f820c4efd904" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 03:19:51 np0005548731 nova_compute[232433]: 2025-12-06 08:19:51.632 232437 DEBUG oslo_concurrency.lockutils [None req-7a9af839-05b2-4aa5-a027-889229875de7 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Acquired lock "refresh_cache-9af91392-7795-4927-9070-f820c4efd904" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 03:19:51 np0005548731 nova_compute[232433]: 2025-12-06 08:19:51.632 232437 DEBUG nova.network.neutron [None req-7a9af839-05b2-4aa5-a027-889229875de7 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] [instance: 9af91392-7795-4927-9070-f820c4efd904] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec  6 03:19:51 np0005548731 nova_compute[232433]: 2025-12-06 08:19:51.890 232437 DEBUG nova.compute.manager [req-19cc0694-10d3-4f81-9360-680626bd8c87 req-4dbfba1b-3146-4a4e-8120-86d1c446c41d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 9af91392-7795-4927-9070-f820c4efd904] Received event network-changed-0031a548-e628-44e3-813b-e2b19eff3073 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 03:19:51 np0005548731 nova_compute[232433]: 2025-12-06 08:19:51.890 232437 DEBUG nova.compute.manager [req-19cc0694-10d3-4f81-9360-680626bd8c87 req-4dbfba1b-3146-4a4e-8120-86d1c446c41d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 9af91392-7795-4927-9070-f820c4efd904] Refreshing instance network info cache due to event network-changed-0031a548-e628-44e3-813b-e2b19eff3073. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  6 03:19:51 np0005548731 nova_compute[232433]: 2025-12-06 08:19:51.890 232437 DEBUG oslo_concurrency.lockutils [req-19cc0694-10d3-4f81-9360-680626bd8c87 req-4dbfba1b-3146-4a4e-8120-86d1c446c41d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-9af91392-7795-4927-9070-f820c4efd904" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 03:19:52 np0005548731 nova_compute[232433]: 2025-12-06 08:19:52.197 232437 DEBUG nova.network.neutron [None req-7a9af839-05b2-4aa5-a027-889229875de7 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] [instance: 9af91392-7795-4927-9070-f820c4efd904] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec  6 03:19:52 np0005548731 nova_compute[232433]: 2025-12-06 08:19:52.769 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:19:53 np0005548731 nova_compute[232433]: 2025-12-06 08:19:53.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:19:53 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:19:53 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:19:53 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:19:53.233 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:19:53 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:19:53 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:19:53 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:19:53.460 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:19:53 np0005548731 nova_compute[232433]: 2025-12-06 08:19:53.784 232437 DEBUG nova.network.neutron [None req-7a9af839-05b2-4aa5-a027-889229875de7 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] [instance: 9af91392-7795-4927-9070-f820c4efd904] Updating instance_info_cache with network_info: [{"id": "0031a548-e628-44e3-813b-e2b19eff3073", "address": "fa:16:3e:d7:1d:fe", "network": {"id": "068c4f21-6907-4167-a212-0b2a87054228", "bridge": "br-int", "label": "tempest-network-smoke--1734606332", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5d23d1d6ffc142eaa9bee0ef93fe60e4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0031a548-e6", "ovs_interfaceid": "0031a548-e628-44e3-813b-e2b19eff3073", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 03:19:53 np0005548731 nova_compute[232433]: 2025-12-06 08:19:53.856 232437 DEBUG oslo_concurrency.lockutils [None req-7a9af839-05b2-4aa5-a027-889229875de7 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Releasing lock "refresh_cache-9af91392-7795-4927-9070-f820c4efd904" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 03:19:53 np0005548731 nova_compute[232433]: 2025-12-06 08:19:53.856 232437 DEBUG nova.compute.manager [None req-7a9af839-05b2-4aa5-a027-889229875de7 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] [instance: 9af91392-7795-4927-9070-f820c4efd904] Instance network_info: |[{"id": "0031a548-e628-44e3-813b-e2b19eff3073", "address": "fa:16:3e:d7:1d:fe", "network": {"id": "068c4f21-6907-4167-a212-0b2a87054228", "bridge": "br-int", "label": "tempest-network-smoke--1734606332", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5d23d1d6ffc142eaa9bee0ef93fe60e4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0031a548-e6", "ovs_interfaceid": "0031a548-e628-44e3-813b-e2b19eff3073", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Dec  6 03:19:53 np0005548731 nova_compute[232433]: 2025-12-06 08:19:53.857 232437 DEBUG oslo_concurrency.lockutils [req-19cc0694-10d3-4f81-9360-680626bd8c87 req-4dbfba1b-3146-4a4e-8120-86d1c446c41d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-9af91392-7795-4927-9070-f820c4efd904" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 03:19:53 np0005548731 nova_compute[232433]: 2025-12-06 08:19:53.857 232437 DEBUG nova.network.neutron [req-19cc0694-10d3-4f81-9360-680626bd8c87 req-4dbfba1b-3146-4a4e-8120-86d1c446c41d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 9af91392-7795-4927-9070-f820c4efd904] Refreshing network info cache for port 0031a548-e628-44e3-813b-e2b19eff3073 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  6 03:19:53 np0005548731 nova_compute[232433]: 2025-12-06 08:19:53.860 232437 DEBUG nova.virt.libvirt.driver [None req-7a9af839-05b2-4aa5-a027-889229875de7 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] [instance: 9af91392-7795-4927-9070-f820c4efd904] Start _get_guest_xml network_info=[{"id": "0031a548-e628-44e3-813b-e2b19eff3073", "address": "fa:16:3e:d7:1d:fe", "network": {"id": "068c4f21-6907-4167-a212-0b2a87054228", "bridge": "br-int", "label": "tempest-network-smoke--1734606332", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5d23d1d6ffc142eaa9bee0ef93fe60e4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0031a548-e6", "ovs_interfaceid": "0031a548-e628-44e3-813b-e2b19eff3073", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'device_type': 'disk', 'encryption_secret_uuid': None, 'disk_bus': 'virtio', 'boot_index': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'encryption_options': None, 'size': 0, 'encrypted': False, 'image_id': '6efab05d-c7cf-4770-a5c3-c806a2739063'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec  6 03:19:53 np0005548731 nova_compute[232433]: 2025-12-06 08:19:53.864 232437 WARNING nova.virt.libvirt.driver [None req-7a9af839-05b2-4aa5-a027-889229875de7 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  6 03:19:53 np0005548731 nova_compute[232433]: 2025-12-06 08:19:53.871 232437 DEBUG nova.virt.libvirt.host [None req-7a9af839-05b2-4aa5-a027-889229875de7 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec  6 03:19:53 np0005548731 nova_compute[232433]: 2025-12-06 08:19:53.871 232437 DEBUG nova.virt.libvirt.host [None req-7a9af839-05b2-4aa5-a027-889229875de7 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec  6 03:19:53 np0005548731 nova_compute[232433]: 2025-12-06 08:19:53.875 232437 DEBUG nova.virt.libvirt.host [None req-7a9af839-05b2-4aa5-a027-889229875de7 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec  6 03:19:53 np0005548731 nova_compute[232433]: 2025-12-06 08:19:53.875 232437 DEBUG nova.virt.libvirt.host [None req-7a9af839-05b2-4aa5-a027-889229875de7 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec  6 03:19:53 np0005548731 nova_compute[232433]: 2025-12-06 08:19:53.876 232437 DEBUG nova.virt.libvirt.driver [None req-7a9af839-05b2-4aa5-a027-889229875de7 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec  6 03:19:53 np0005548731 nova_compute[232433]: 2025-12-06 08:19:53.876 232437 DEBUG nova.virt.hardware [None req-7a9af839-05b2-4aa5-a027-889229875de7 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-06T06:56:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='25848a18-11d9-4f11-80b5-5d005675c76d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec  6 03:19:53 np0005548731 nova_compute[232433]: 2025-12-06 08:19:53.877 232437 DEBUG nova.virt.hardware [None req-7a9af839-05b2-4aa5-a027-889229875de7 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec  6 03:19:53 np0005548731 nova_compute[232433]: 2025-12-06 08:19:53.877 232437 DEBUG nova.virt.hardware [None req-7a9af839-05b2-4aa5-a027-889229875de7 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec  6 03:19:53 np0005548731 nova_compute[232433]: 2025-12-06 08:19:53.877 232437 DEBUG nova.virt.hardware [None req-7a9af839-05b2-4aa5-a027-889229875de7 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec  6 03:19:53 np0005548731 nova_compute[232433]: 2025-12-06 08:19:53.878 232437 DEBUG nova.virt.hardware [None req-7a9af839-05b2-4aa5-a027-889229875de7 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec  6 03:19:53 np0005548731 nova_compute[232433]: 2025-12-06 08:19:53.878 232437 DEBUG nova.virt.hardware [None req-7a9af839-05b2-4aa5-a027-889229875de7 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec  6 03:19:53 np0005548731 nova_compute[232433]: 2025-12-06 08:19:53.878 232437 DEBUG nova.virt.hardware [None req-7a9af839-05b2-4aa5-a027-889229875de7 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec  6 03:19:53 np0005548731 nova_compute[232433]: 2025-12-06 08:19:53.878 232437 DEBUG nova.virt.hardware [None req-7a9af839-05b2-4aa5-a027-889229875de7 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec  6 03:19:53 np0005548731 nova_compute[232433]: 2025-12-06 08:19:53.878 232437 DEBUG nova.virt.hardware [None req-7a9af839-05b2-4aa5-a027-889229875de7 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec  6 03:19:53 np0005548731 nova_compute[232433]: 2025-12-06 08:19:53.879 232437 DEBUG nova.virt.hardware [None req-7a9af839-05b2-4aa5-a027-889229875de7 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec  6 03:19:53 np0005548731 nova_compute[232433]: 2025-12-06 08:19:53.879 232437 DEBUG nova.virt.hardware [None req-7a9af839-05b2-4aa5-a027-889229875de7 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec  6 03:19:53 np0005548731 nova_compute[232433]: 2025-12-06 08:19:53.882 232437 DEBUG oslo_concurrency.processutils [None req-7a9af839-05b2-4aa5-a027-889229875de7 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 03:19:54 np0005548731 nova_compute[232433]: 2025-12-06 08:19:54.138 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:19:54 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  6 03:19:54 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1638807678' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  6 03:19:54 np0005548731 nova_compute[232433]: 2025-12-06 08:19:54.331 232437 DEBUG oslo_concurrency.processutils [None req-7a9af839-05b2-4aa5-a027-889229875de7 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.450s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 03:19:54 np0005548731 nova_compute[232433]: 2025-12-06 08:19:54.369 232437 DEBUG nova.storage.rbd_utils [None req-7a9af839-05b2-4aa5-a027-889229875de7 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] rbd image 9af91392-7795-4927-9070-f820c4efd904_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 03:19:54 np0005548731 nova_compute[232433]: 2025-12-06 08:19:54.373 232437 DEBUG oslo_concurrency.processutils [None req-7a9af839-05b2-4aa5-a027-889229875de7 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 03:19:54 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  6 03:19:54 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3758252918' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  6 03:19:54 np0005548731 nova_compute[232433]: 2025-12-06 08:19:54.798 232437 DEBUG oslo_concurrency.processutils [None req-7a9af839-05b2-4aa5-a027-889229875de7 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.425s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 03:19:54 np0005548731 nova_compute[232433]: 2025-12-06 08:19:54.801 232437 DEBUG nova.virt.libvirt.vif [None req-7a9af839-05b2-4aa5-a027-889229875de7 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T08:19:40Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-568463891-access_point-21027346',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-568463891-access_point-21027346',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-568463891-acc',id=213,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEpPL8QMcLUHBG1lDEV3xyTalweJOiC97sj7nhdx0+SksDQPIssAgzPziD/ow0DmP284gUqVt6ILdtwsRJeimlQ2jXKvDiPTpC5W+N5M8MOtov2iZiqOgGKwgUrOxQcfJw==',key_name='tempest-TestSecurityGroupsBasicOps-874960529',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='5d23d1d6ffc142eaa9bee0ef93fe60e4',ramdisk_id='',reservation_id='r-g8itipsr',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-568463891',owner_user_name='tempest-TestSecurityGroupsBasicOps-568463891-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T08:19:45Z,user_data=None,user_id='0432cb6633e14c1b86fc320e7f3bb880',uuid=9af91392-7795-4927-9070-f820c4efd904,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0031a548-e628-44e3-813b-e2b19eff3073", "address": "fa:16:3e:d7:1d:fe", "network": {"id": "068c4f21-6907-4167-a212-0b2a87054228", "bridge": "br-int", "label": "tempest-network-smoke--1734606332", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], 
"meta": {"injected": false, "tenant_id": "5d23d1d6ffc142eaa9bee0ef93fe60e4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0031a548-e6", "ovs_interfaceid": "0031a548-e628-44e3-813b-e2b19eff3073", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec  6 03:19:54 np0005548731 nova_compute[232433]: 2025-12-06 08:19:54.802 232437 DEBUG nova.network.os_vif_util [None req-7a9af839-05b2-4aa5-a027-889229875de7 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Converting VIF {"id": "0031a548-e628-44e3-813b-e2b19eff3073", "address": "fa:16:3e:d7:1d:fe", "network": {"id": "068c4f21-6907-4167-a212-0b2a87054228", "bridge": "br-int", "label": "tempest-network-smoke--1734606332", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5d23d1d6ffc142eaa9bee0ef93fe60e4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0031a548-e6", "ovs_interfaceid": "0031a548-e628-44e3-813b-e2b19eff3073", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 03:19:54 np0005548731 nova_compute[232433]: 2025-12-06 08:19:54.803 232437 DEBUG nova.network.os_vif_util [None req-7a9af839-05b2-4aa5-a027-889229875de7 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d7:1d:fe,bridge_name='br-int',has_traffic_filtering=True,id=0031a548-e628-44e3-813b-e2b19eff3073,network=Network(068c4f21-6907-4167-a212-0b2a87054228),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0031a548-e6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 03:19:54 np0005548731 nova_compute[232433]: 2025-12-06 08:19:54.805 232437 DEBUG nova.objects.instance [None req-7a9af839-05b2-4aa5-a027-889229875de7 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Lazy-loading 'pci_devices' on Instance uuid 9af91392-7795-4927-9070-f820c4efd904 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 03:19:54 np0005548731 nova_compute[232433]: 2025-12-06 08:19:54.823 232437 DEBUG nova.virt.libvirt.driver [None req-7a9af839-05b2-4aa5-a027-889229875de7 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] [instance: 9af91392-7795-4927-9070-f820c4efd904] End _get_guest_xml xml=<domain type="kvm">
Dec  6 03:19:54 np0005548731 nova_compute[232433]:  <uuid>9af91392-7795-4927-9070-f820c4efd904</uuid>
Dec  6 03:19:54 np0005548731 nova_compute[232433]:  <name>instance-000000d5</name>
Dec  6 03:19:54 np0005548731 nova_compute[232433]:  <memory>131072</memory>
Dec  6 03:19:54 np0005548731 nova_compute[232433]:  <vcpu>1</vcpu>
Dec  6 03:19:54 np0005548731 nova_compute[232433]:  <metadata>
Dec  6 03:19:54 np0005548731 nova_compute[232433]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec  6 03:19:54 np0005548731 nova_compute[232433]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec  6 03:19:54 np0005548731 nova_compute[232433]:      <nova:name>tempest-server-tempest-TestSecurityGroupsBasicOps-568463891-access_point-21027346</nova:name>
Dec  6 03:19:54 np0005548731 nova_compute[232433]:      <nova:creationTime>2025-12-06 08:19:53</nova:creationTime>
Dec  6 03:19:54 np0005548731 nova_compute[232433]:      <nova:flavor name="m1.nano">
Dec  6 03:19:54 np0005548731 nova_compute[232433]:        <nova:memory>128</nova:memory>
Dec  6 03:19:54 np0005548731 nova_compute[232433]:        <nova:disk>1</nova:disk>
Dec  6 03:19:54 np0005548731 nova_compute[232433]:        <nova:swap>0</nova:swap>
Dec  6 03:19:54 np0005548731 nova_compute[232433]:        <nova:ephemeral>0</nova:ephemeral>
Dec  6 03:19:54 np0005548731 nova_compute[232433]:        <nova:vcpus>1</nova:vcpus>
Dec  6 03:19:54 np0005548731 nova_compute[232433]:      </nova:flavor>
Dec  6 03:19:54 np0005548731 nova_compute[232433]:      <nova:owner>
Dec  6 03:19:54 np0005548731 nova_compute[232433]:        <nova:user uuid="0432cb6633e14c1b86fc320e7f3bb880">tempest-TestSecurityGroupsBasicOps-568463891-project-member</nova:user>
Dec  6 03:19:54 np0005548731 nova_compute[232433]:        <nova:project uuid="5d23d1d6ffc142eaa9bee0ef93fe60e4">tempest-TestSecurityGroupsBasicOps-568463891</nova:project>
Dec  6 03:19:54 np0005548731 nova_compute[232433]:      </nova:owner>
Dec  6 03:19:54 np0005548731 nova_compute[232433]:      <nova:root type="image" uuid="6efab05d-c7cf-4770-a5c3-c806a2739063"/>
Dec  6 03:19:54 np0005548731 nova_compute[232433]:      <nova:ports>
Dec  6 03:19:54 np0005548731 nova_compute[232433]:        <nova:port uuid="0031a548-e628-44e3-813b-e2b19eff3073">
Dec  6 03:19:54 np0005548731 nova_compute[232433]:          <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Dec  6 03:19:54 np0005548731 nova_compute[232433]:        </nova:port>
Dec  6 03:19:54 np0005548731 nova_compute[232433]:      </nova:ports>
Dec  6 03:19:54 np0005548731 nova_compute[232433]:    </nova:instance>
Dec  6 03:19:54 np0005548731 nova_compute[232433]:  </metadata>
Dec  6 03:19:54 np0005548731 nova_compute[232433]:  <sysinfo type="smbios">
Dec  6 03:19:54 np0005548731 nova_compute[232433]:    <system>
Dec  6 03:19:54 np0005548731 nova_compute[232433]:      <entry name="manufacturer">RDO</entry>
Dec  6 03:19:54 np0005548731 nova_compute[232433]:      <entry name="product">OpenStack Compute</entry>
Dec  6 03:19:54 np0005548731 nova_compute[232433]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec  6 03:19:54 np0005548731 nova_compute[232433]:      <entry name="serial">9af91392-7795-4927-9070-f820c4efd904</entry>
Dec  6 03:19:54 np0005548731 nova_compute[232433]:      <entry name="uuid">9af91392-7795-4927-9070-f820c4efd904</entry>
Dec  6 03:19:54 np0005548731 nova_compute[232433]:      <entry name="family">Virtual Machine</entry>
Dec  6 03:19:54 np0005548731 nova_compute[232433]:    </system>
Dec  6 03:19:54 np0005548731 nova_compute[232433]:  </sysinfo>
Dec  6 03:19:54 np0005548731 nova_compute[232433]:  <os>
Dec  6 03:19:54 np0005548731 nova_compute[232433]:    <type arch="x86_64" machine="q35">hvm</type>
Dec  6 03:19:54 np0005548731 nova_compute[232433]:    <boot dev="hd"/>
Dec  6 03:19:54 np0005548731 nova_compute[232433]:    <smbios mode="sysinfo"/>
Dec  6 03:19:54 np0005548731 nova_compute[232433]:  </os>
Dec  6 03:19:54 np0005548731 nova_compute[232433]:  <features>
Dec  6 03:19:54 np0005548731 nova_compute[232433]:    <acpi/>
Dec  6 03:19:54 np0005548731 nova_compute[232433]:    <apic/>
Dec  6 03:19:54 np0005548731 nova_compute[232433]:    <vmcoreinfo/>
Dec  6 03:19:54 np0005548731 nova_compute[232433]:  </features>
Dec  6 03:19:54 np0005548731 nova_compute[232433]:  <clock offset="utc">
Dec  6 03:19:54 np0005548731 nova_compute[232433]:    <timer name="pit" tickpolicy="delay"/>
Dec  6 03:19:54 np0005548731 nova_compute[232433]:    <timer name="rtc" tickpolicy="catchup"/>
Dec  6 03:19:54 np0005548731 nova_compute[232433]:    <timer name="hpet" present="no"/>
Dec  6 03:19:54 np0005548731 nova_compute[232433]:  </clock>
Dec  6 03:19:54 np0005548731 nova_compute[232433]:  <cpu mode="custom" match="exact">
Dec  6 03:19:54 np0005548731 nova_compute[232433]:    <model>Nehalem</model>
Dec  6 03:19:54 np0005548731 nova_compute[232433]:    <topology sockets="1" cores="1" threads="1"/>
Dec  6 03:19:54 np0005548731 nova_compute[232433]:  </cpu>
Dec  6 03:19:54 np0005548731 nova_compute[232433]:  <devices>
Dec  6 03:19:54 np0005548731 nova_compute[232433]:    <disk type="network" device="disk">
Dec  6 03:19:54 np0005548731 nova_compute[232433]:      <driver type="raw" cache="none"/>
Dec  6 03:19:54 np0005548731 nova_compute[232433]:      <source protocol="rbd" name="vms/9af91392-7795-4927-9070-f820c4efd904_disk">
Dec  6 03:19:54 np0005548731 nova_compute[232433]:        <host name="192.168.122.100" port="6789"/>
Dec  6 03:19:54 np0005548731 nova_compute[232433]:        <host name="192.168.122.102" port="6789"/>
Dec  6 03:19:54 np0005548731 nova_compute[232433]:        <host name="192.168.122.101" port="6789"/>
Dec  6 03:19:54 np0005548731 nova_compute[232433]:      </source>
Dec  6 03:19:54 np0005548731 nova_compute[232433]:      <auth username="openstack">
Dec  6 03:19:54 np0005548731 nova_compute[232433]:        <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec  6 03:19:54 np0005548731 nova_compute[232433]:      </auth>
Dec  6 03:19:54 np0005548731 nova_compute[232433]:      <target dev="vda" bus="virtio"/>
Dec  6 03:19:54 np0005548731 nova_compute[232433]:    </disk>
Dec  6 03:19:54 np0005548731 nova_compute[232433]:    <disk type="network" device="cdrom">
Dec  6 03:19:54 np0005548731 nova_compute[232433]:      <driver type="raw" cache="none"/>
Dec  6 03:19:54 np0005548731 nova_compute[232433]:      <source protocol="rbd" name="vms/9af91392-7795-4927-9070-f820c4efd904_disk.config">
Dec  6 03:19:54 np0005548731 nova_compute[232433]:        <host name="192.168.122.100" port="6789"/>
Dec  6 03:19:54 np0005548731 nova_compute[232433]:        <host name="192.168.122.102" port="6789"/>
Dec  6 03:19:54 np0005548731 nova_compute[232433]:        <host name="192.168.122.101" port="6789"/>
Dec  6 03:19:54 np0005548731 nova_compute[232433]:      </source>
Dec  6 03:19:54 np0005548731 nova_compute[232433]:      <auth username="openstack">
Dec  6 03:19:54 np0005548731 nova_compute[232433]:        <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec  6 03:19:54 np0005548731 nova_compute[232433]:      </auth>
Dec  6 03:19:54 np0005548731 nova_compute[232433]:      <target dev="sda" bus="sata"/>
Dec  6 03:19:54 np0005548731 nova_compute[232433]:    </disk>
Dec  6 03:19:54 np0005548731 nova_compute[232433]:    <interface type="ethernet">
Dec  6 03:19:54 np0005548731 nova_compute[232433]:      <mac address="fa:16:3e:d7:1d:fe"/>
Dec  6 03:19:54 np0005548731 nova_compute[232433]:      <model type="virtio"/>
Dec  6 03:19:54 np0005548731 nova_compute[232433]:      <driver name="vhost" rx_queue_size="512"/>
Dec  6 03:19:54 np0005548731 nova_compute[232433]:      <mtu size="1442"/>
Dec  6 03:19:54 np0005548731 nova_compute[232433]:      <target dev="tap0031a548-e6"/>
Dec  6 03:19:54 np0005548731 nova_compute[232433]:    </interface>
Dec  6 03:19:54 np0005548731 nova_compute[232433]:    <serial type="pty">
Dec  6 03:19:54 np0005548731 nova_compute[232433]:      <log file="/var/lib/nova/instances/9af91392-7795-4927-9070-f820c4efd904/console.log" append="off"/>
Dec  6 03:19:54 np0005548731 nova_compute[232433]:    </serial>
Dec  6 03:19:54 np0005548731 nova_compute[232433]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec  6 03:19:54 np0005548731 nova_compute[232433]:    <video>
Dec  6 03:19:54 np0005548731 nova_compute[232433]:      <model type="virtio"/>
Dec  6 03:19:54 np0005548731 nova_compute[232433]:    </video>
Dec  6 03:19:54 np0005548731 nova_compute[232433]:    <input type="tablet" bus="usb"/>
Dec  6 03:19:54 np0005548731 nova_compute[232433]:    <rng model="virtio">
Dec  6 03:19:54 np0005548731 nova_compute[232433]:      <backend model="random">/dev/urandom</backend>
Dec  6 03:19:54 np0005548731 nova_compute[232433]:    </rng>
Dec  6 03:19:54 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root"/>
Dec  6 03:19:54 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:19:54 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:19:54 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:19:54 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:19:54 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:19:54 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:19:54 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:19:54 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:19:54 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:19:54 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:19:54 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:19:54 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:19:54 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:19:54 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:19:54 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:19:54 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:19:54 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:19:54 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:19:54 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:19:54 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:19:54 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:19:54 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:19:54 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:19:54 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:19:54 np0005548731 nova_compute[232433]:    <controller type="usb" index="0"/>
Dec  6 03:19:54 np0005548731 nova_compute[232433]:    <memballoon model="virtio">
Dec  6 03:19:54 np0005548731 nova_compute[232433]:      <stats period="10"/>
Dec  6 03:19:54 np0005548731 nova_compute[232433]:    </memballoon>
Dec  6 03:19:54 np0005548731 nova_compute[232433]:  </devices>
Dec  6 03:19:54 np0005548731 nova_compute[232433]: </domain>
Dec  6 03:19:54 np0005548731 nova_compute[232433]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec  6 03:19:54 np0005548731 nova_compute[232433]: 2025-12-06 08:19:54.824 232437 DEBUG nova.compute.manager [None req-7a9af839-05b2-4aa5-a027-889229875de7 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] [instance: 9af91392-7795-4927-9070-f820c4efd904] Preparing to wait for external event network-vif-plugged-0031a548-e628-44e3-813b-e2b19eff3073 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec  6 03:19:54 np0005548731 nova_compute[232433]: 2025-12-06 08:19:54.824 232437 DEBUG oslo_concurrency.lockutils [None req-7a9af839-05b2-4aa5-a027-889229875de7 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Acquiring lock "9af91392-7795-4927-9070-f820c4efd904-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:19:54 np0005548731 nova_compute[232433]: 2025-12-06 08:19:54.825 232437 DEBUG oslo_concurrency.lockutils [None req-7a9af839-05b2-4aa5-a027-889229875de7 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Lock "9af91392-7795-4927-9070-f820c4efd904-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:19:54 np0005548731 nova_compute[232433]: 2025-12-06 08:19:54.825 232437 DEBUG oslo_concurrency.lockutils [None req-7a9af839-05b2-4aa5-a027-889229875de7 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Lock "9af91392-7795-4927-9070-f820c4efd904-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:19:54 np0005548731 nova_compute[232433]: 2025-12-06 08:19:54.826 232437 DEBUG nova.virt.libvirt.vif [None req-7a9af839-05b2-4aa5-a027-889229875de7 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T08:19:40Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-568463891-access_point-21027346',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-568463891-access_point-21027346',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-568463891-acc',id=213,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEpPL8QMcLUHBG1lDEV3xyTalweJOiC97sj7nhdx0+SksDQPIssAgzPziD/ow0DmP284gUqVt6ILdtwsRJeimlQ2jXKvDiPTpC5W+N5M8MOtov2iZiqOgGKwgUrOxQcfJw==',key_name='tempest-TestSecurityGroupsBasicOps-874960529',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='5d23d1d6ffc142eaa9bee0ef93fe60e4',ramdisk_id='',reservation_id='r-g8itipsr',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-568463891',owner_user_name='tempest-TestSecurityGroupsBasicOps-568463891-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T08:19:45Z,user_data=None,user_id='0432cb6633e14c1b86fc320e7f3bb880',uuid=9af91392-7795-4927-9070-f820c4efd904,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0031a548-e628-44e3-813b-e2b19eff3073", "address": "fa:16:3e:d7:1d:fe", "network": {"id": "068c4f21-6907-4167-a212-0b2a87054228", "bridge": "br-int", "label": "tempest-network-smoke--1734606332", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": 
true}}], "meta": {"injected": false, "tenant_id": "5d23d1d6ffc142eaa9bee0ef93fe60e4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0031a548-e6", "ovs_interfaceid": "0031a548-e628-44e3-813b-e2b19eff3073", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec  6 03:19:54 np0005548731 nova_compute[232433]: 2025-12-06 08:19:54.826 232437 DEBUG nova.network.os_vif_util [None req-7a9af839-05b2-4aa5-a027-889229875de7 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Converting VIF {"id": "0031a548-e628-44e3-813b-e2b19eff3073", "address": "fa:16:3e:d7:1d:fe", "network": {"id": "068c4f21-6907-4167-a212-0b2a87054228", "bridge": "br-int", "label": "tempest-network-smoke--1734606332", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5d23d1d6ffc142eaa9bee0ef93fe60e4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0031a548-e6", "ovs_interfaceid": "0031a548-e628-44e3-813b-e2b19eff3073", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 03:19:54 np0005548731 nova_compute[232433]: 2025-12-06 08:19:54.827 232437 DEBUG nova.network.os_vif_util [None req-7a9af839-05b2-4aa5-a027-889229875de7 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d7:1d:fe,bridge_name='br-int',has_traffic_filtering=True,id=0031a548-e628-44e3-813b-e2b19eff3073,network=Network(068c4f21-6907-4167-a212-0b2a87054228),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0031a548-e6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 03:19:54 np0005548731 nova_compute[232433]: 2025-12-06 08:19:54.827 232437 DEBUG os_vif [None req-7a9af839-05b2-4aa5-a027-889229875de7 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d7:1d:fe,bridge_name='br-int',has_traffic_filtering=True,id=0031a548-e628-44e3-813b-e2b19eff3073,network=Network(068c4f21-6907-4167-a212-0b2a87054228),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0031a548-e6') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec  6 03:19:54 np0005548731 nova_compute[232433]: 2025-12-06 08:19:54.828 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:19:54 np0005548731 nova_compute[232433]: 2025-12-06 08:19:54.828 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 03:19:54 np0005548731 nova_compute[232433]: 2025-12-06 08:19:54.829 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  6 03:19:54 np0005548731 nova_compute[232433]: 2025-12-06 08:19:54.833 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:19:54 np0005548731 nova_compute[232433]: 2025-12-06 08:19:54.834 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0031a548-e6, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 03:19:54 np0005548731 nova_compute[232433]: 2025-12-06 08:19:54.834 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap0031a548-e6, col_values=(('external_ids', {'iface-id': '0031a548-e628-44e3-813b-e2b19eff3073', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:d7:1d:fe', 'vm-uuid': '9af91392-7795-4927-9070-f820c4efd904'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 03:19:54 np0005548731 nova_compute[232433]: 2025-12-06 08:19:54.836 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:19:54 np0005548731 nova_compute[232433]: 2025-12-06 08:19:54.837 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  6 03:19:54 np0005548731 NetworkManager[49182]: <info>  [1765009194.8381] manager: (tap0031a548-e6): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/516)
Dec  6 03:19:54 np0005548731 nova_compute[232433]: 2025-12-06 08:19:54.843 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:19:54 np0005548731 nova_compute[232433]: 2025-12-06 08:19:54.844 232437 INFO os_vif [None req-7a9af839-05b2-4aa5-a027-889229875de7 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d7:1d:fe,bridge_name='br-int',has_traffic_filtering=True,id=0031a548-e628-44e3-813b-e2b19eff3073,network=Network(068c4f21-6907-4167-a212-0b2a87054228),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0031a548-e6')#033[00m
Dec  6 03:19:54 np0005548731 nova_compute[232433]: 2025-12-06 08:19:54.901 232437 DEBUG nova.virt.libvirt.driver [None req-7a9af839-05b2-4aa5-a027-889229875de7 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  6 03:19:54 np0005548731 nova_compute[232433]: 2025-12-06 08:19:54.901 232437 DEBUG nova.virt.libvirt.driver [None req-7a9af839-05b2-4aa5-a027-889229875de7 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  6 03:19:54 np0005548731 nova_compute[232433]: 2025-12-06 08:19:54.902 232437 DEBUG nova.virt.libvirt.driver [None req-7a9af839-05b2-4aa5-a027-889229875de7 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] No VIF found with MAC fa:16:3e:d7:1d:fe, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec  6 03:19:54 np0005548731 nova_compute[232433]: 2025-12-06 08:19:54.902 232437 INFO nova.virt.libvirt.driver [None req-7a9af839-05b2-4aa5-a027-889229875de7 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] [instance: 9af91392-7795-4927-9070-f820c4efd904] Using config drive#033[00m
Dec  6 03:19:54 np0005548731 nova_compute[232433]: 2025-12-06 08:19:54.937 232437 DEBUG nova.storage.rbd_utils [None req-7a9af839-05b2-4aa5-a027-889229875de7 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] rbd image 9af91392-7795-4927-9070-f820c4efd904_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 03:19:55 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:19:55 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.007000166s ======
Dec  6 03:19:55 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:19:55.235 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.007000166s
Dec  6 03:19:55 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:19:55 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:19:55 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:19:55.463 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:19:55 np0005548731 nova_compute[232433]: 2025-12-06 08:19:55.479 232437 INFO nova.virt.libvirt.driver [None req-7a9af839-05b2-4aa5-a027-889229875de7 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] [instance: 9af91392-7795-4927-9070-f820c4efd904] Creating config drive at /var/lib/nova/instances/9af91392-7795-4927-9070-f820c4efd904/disk.config#033[00m
Dec  6 03:19:55 np0005548731 nova_compute[232433]: 2025-12-06 08:19:55.488 232437 DEBUG oslo_concurrency.processutils [None req-7a9af839-05b2-4aa5-a027-889229875de7 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/9af91392-7795-4927-9070-f820c4efd904/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpdqd7rhsx execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 03:19:55 np0005548731 nova_compute[232433]: 2025-12-06 08:19:55.628 232437 DEBUG oslo_concurrency.processutils [None req-7a9af839-05b2-4aa5-a027-889229875de7 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/9af91392-7795-4927-9070-f820c4efd904/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpdqd7rhsx" returned: 0 in 0.140s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 03:19:55 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e425 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:19:55 np0005548731 nova_compute[232433]: 2025-12-06 08:19:55.675 232437 DEBUG nova.storage.rbd_utils [None req-7a9af839-05b2-4aa5-a027-889229875de7 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] rbd image 9af91392-7795-4927-9070-f820c4efd904_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 03:19:55 np0005548731 nova_compute[232433]: 2025-12-06 08:19:55.681 232437 DEBUG oslo_concurrency.processutils [None req-7a9af839-05b2-4aa5-a027-889229875de7 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/9af91392-7795-4927-9070-f820c4efd904/disk.config 9af91392-7795-4927-9070-f820c4efd904_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 03:19:55 np0005548731 nova_compute[232433]: 2025-12-06 08:19:55.769 232437 DEBUG nova.network.neutron [req-19cc0694-10d3-4f81-9360-680626bd8c87 req-4dbfba1b-3146-4a4e-8120-86d1c446c41d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 9af91392-7795-4927-9070-f820c4efd904] Updated VIF entry in instance network info cache for port 0031a548-e628-44e3-813b-e2b19eff3073. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  6 03:19:55 np0005548731 nova_compute[232433]: 2025-12-06 08:19:55.771 232437 DEBUG nova.network.neutron [req-19cc0694-10d3-4f81-9360-680626bd8c87 req-4dbfba1b-3146-4a4e-8120-86d1c446c41d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 9af91392-7795-4927-9070-f820c4efd904] Updating instance_info_cache with network_info: [{"id": "0031a548-e628-44e3-813b-e2b19eff3073", "address": "fa:16:3e:d7:1d:fe", "network": {"id": "068c4f21-6907-4167-a212-0b2a87054228", "bridge": "br-int", "label": "tempest-network-smoke--1734606332", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5d23d1d6ffc142eaa9bee0ef93fe60e4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0031a548-e6", "ovs_interfaceid": "0031a548-e628-44e3-813b-e2b19eff3073", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 03:19:55 np0005548731 nova_compute[232433]: 2025-12-06 08:19:55.864 232437 DEBUG oslo_concurrency.processutils [None req-7a9af839-05b2-4aa5-a027-889229875de7 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/9af91392-7795-4927-9070-f820c4efd904/disk.config 9af91392-7795-4927-9070-f820c4efd904_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.183s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 03:19:55 np0005548731 nova_compute[232433]: 2025-12-06 08:19:55.865 232437 INFO nova.virt.libvirt.driver [None req-7a9af839-05b2-4aa5-a027-889229875de7 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] [instance: 9af91392-7795-4927-9070-f820c4efd904] Deleting local config drive /var/lib/nova/instances/9af91392-7795-4927-9070-f820c4efd904/disk.config because it was imported into RBD.#033[00m
Dec  6 03:19:55 np0005548731 kernel: tap0031a548-e6: entered promiscuous mode
Dec  6 03:19:55 np0005548731 NetworkManager[49182]: <info>  [1765009195.9399] manager: (tap0031a548-e6): new Tun device (/org/freedesktop/NetworkManager/Devices/517)
Dec  6 03:19:55 np0005548731 nova_compute[232433]: 2025-12-06 08:19:55.941 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:19:55 np0005548731 ovn_controller[133927]: 2025-12-06T08:19:55Z|01089|binding|INFO|Claiming lport 0031a548-e628-44e3-813b-e2b19eff3073 for this chassis.
Dec  6 03:19:55 np0005548731 ovn_controller[133927]: 2025-12-06T08:19:55Z|01090|binding|INFO|0031a548-e628-44e3-813b-e2b19eff3073: Claiming fa:16:3e:d7:1d:fe 10.100.0.6
Dec  6 03:19:55 np0005548731 systemd-machined[195355]: New machine qemu-111-instance-000000d5.
Dec  6 03:19:55 np0005548731 systemd-udevd[341335]: Network interface NamePolicy= disabled on kernel command line.
Dec  6 03:19:55 np0005548731 NetworkManager[49182]: <info>  [1765009195.9901] device (tap0031a548-e6): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec  6 03:19:55 np0005548731 NetworkManager[49182]: <info>  [1765009195.9917] device (tap0031a548-e6): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec  6 03:19:56 np0005548731 systemd[1]: Started Virtual Machine qemu-111-instance-000000d5.
Dec  6 03:19:56 np0005548731 nova_compute[232433]: 2025-12-06 08:19:56.009 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:19:56 np0005548731 ovn_controller[133927]: 2025-12-06T08:19:56Z|01091|binding|INFO|Setting lport 0031a548-e628-44e3-813b-e2b19eff3073 ovn-installed in OVS
Dec  6 03:19:56 np0005548731 nova_compute[232433]: 2025-12-06 08:19:56.013 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:19:56 np0005548731 ovn_controller[133927]: 2025-12-06T08:19:56Z|01092|binding|INFO|Setting lport 0031a548-e628-44e3-813b-e2b19eff3073 up in Southbound
Dec  6 03:19:56 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:19:56.046 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d7:1d:fe 10.100.0.6'], port_security=['fa:16:3e:d7:1d:fe 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '9af91392-7795-4927-9070-f820c4efd904', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-068c4f21-6907-4167-a212-0b2a87054228', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5d23d1d6ffc142eaa9bee0ef93fe60e4', 'neutron:revision_number': '2', 'neutron:security_group_ids': '078b563a-ec71-489c-8874-193a2eb887d2 6a1a0974-8ebe-46ab-a5b5-e7b4190e409b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=32661e10-b35f-49aa-b57a-a3633ad860d5, chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], logical_port=0031a548-e628-44e3-813b-e2b19eff3073) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 03:19:56 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:19:56.047 143965 INFO neutron.agent.ovn.metadata.agent [-] Port 0031a548-e628-44e3-813b-e2b19eff3073 in datapath 068c4f21-6907-4167-a212-0b2a87054228 bound to our chassis#033[00m
Dec  6 03:19:56 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:19:56.048 143965 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 068c4f21-6907-4167-a212-0b2a87054228#033[00m
Dec  6 03:19:56 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:19:56.062 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[d3157ef6-69c1-4534-8e12-56aa7c11ecc6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:19:56 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:19:56.063 143965 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap068c4f21-61 in ovnmeta-068c4f21-6907-4167-a212-0b2a87054228 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Dec  6 03:19:56 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:19:56.065 236668 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap068c4f21-60 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Dec  6 03:19:56 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:19:56.065 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[a4404a13-afb6-430a-8e8c-cc3876d9919b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:19:56 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:19:56.066 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[a2dabed3-1e85-4eb4-9cc5-302cd85022b3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:19:56 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:19:56.076 144079 DEBUG oslo.privsep.daemon [-] privsep: reply[95a7de78-6255-446f-b0a1-ae0714768b29]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:19:56 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:19:56.089 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[42c171bb-c154-4da1-a997-ebd4f951d3da]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:19:56 np0005548731 nova_compute[232433]: 2025-12-06 08:19:56.100 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:19:56 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:19:56.125 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[3b9526c9-865a-4141-b419-5303a04ca812]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:19:56 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:19:56.133 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[e6763e5c-2185-4c59-a4f1-56282bbd83dd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:19:56 np0005548731 NetworkManager[49182]: <info>  [1765009196.1344] manager: (tap068c4f21-60): new Veth device (/org/freedesktop/NetworkManager/Devices/518)
Dec  6 03:19:56 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:19:56.167 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[d7daa44b-2923-4a63-9d34-dfceb73432f8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:19:56 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:19:56.171 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[77a0f685-5c1c-4099-ba1d-2d8bcfd27706]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:19:56 np0005548731 NetworkManager[49182]: <info>  [1765009196.1989] device (tap068c4f21-60): carrier: link connected
Dec  6 03:19:56 np0005548731 nova_compute[232433]: 2025-12-06 08:19:56.200 232437 DEBUG oslo_concurrency.lockutils [req-19cc0694-10d3-4f81-9360-680626bd8c87 req-4dbfba1b-3146-4a4e-8120-86d1c446c41d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-9af91392-7795-4927-9070-f820c4efd904" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 03:19:56 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:19:56.212 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[321fbcfd-7e55-4943-b127-3b91f77b674e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:19:56 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:19:56.229 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[fab4f452-e9e8-4457-9449-2ffc459e0460]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap068c4f21-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e1:12:15'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 335], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 945820, 'reachable_time': 34375, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 341368, 'error': None, 'target': 'ovnmeta-068c4f21-6907-4167-a212-0b2a87054228', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:19:56 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:19:56.246 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[229de657-b13b-4060-b558-b5adec8b058c]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fee1:1215'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 945820, 'tstamp': 945820}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 341369, 'error': None, 'target': 'ovnmeta-068c4f21-6907-4167-a212-0b2a87054228', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:19:56 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:19:56.262 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[81abafb2-b048-4f96-b773-e9a19df46ed4]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap068c4f21-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e1:12:15'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 335], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 945820, 'reachable_time': 34375, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 341370, 'error': None, 'target': 'ovnmeta-068c4f21-6907-4167-a212-0b2a87054228', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:19:56 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:19:56.291 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[1ff343d0-7171-4c03-a3ea-388df01fccca]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:19:56 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:19:56.350 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[db0e37cb-a490-487d-b1fa-1861d3109601]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:19:56 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:19:56.352 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap068c4f21-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 03:19:56 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:19:56.352 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  6 03:19:56 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:19:56.352 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap068c4f21-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 03:19:56 np0005548731 NetworkManager[49182]: <info>  [1765009196.3548] manager: (tap068c4f21-60): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/519)
Dec  6 03:19:56 np0005548731 nova_compute[232433]: 2025-12-06 08:19:56.354 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:19:56 np0005548731 kernel: tap068c4f21-60: entered promiscuous mode
Dec  6 03:19:56 np0005548731 nova_compute[232433]: 2025-12-06 08:19:56.358 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:19:56 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:19:56.359 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap068c4f21-60, col_values=(('external_ids', {'iface-id': 'c776d704-0c5f-4753-a9f6-c0a544c5354c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 03:19:56 np0005548731 nova_compute[232433]: 2025-12-06 08:19:56.360 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:19:56 np0005548731 ovn_controller[133927]: 2025-12-06T08:19:56Z|01093|binding|INFO|Releasing lport c776d704-0c5f-4753-a9f6-c0a544c5354c from this chassis (sb_readonly=0)
Dec  6 03:19:56 np0005548731 nova_compute[232433]: 2025-12-06 08:19:56.392 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:19:56 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:19:56.394 143965 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/068c4f21-6907-4167-a212-0b2a87054228.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/068c4f21-6907-4167-a212-0b2a87054228.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Dec  6 03:19:56 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:19:56.395 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[a44dc8b5-e8ee-43e2-be8a-0b4dacbcf337]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:19:56 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:19:56.395 143965 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec  6 03:19:56 np0005548731 ovn_metadata_agent[143960]: global
Dec  6 03:19:56 np0005548731 ovn_metadata_agent[143960]:    log         /dev/log local0 debug
Dec  6 03:19:56 np0005548731 ovn_metadata_agent[143960]:    log-tag     haproxy-metadata-proxy-068c4f21-6907-4167-a212-0b2a87054228
Dec  6 03:19:56 np0005548731 ovn_metadata_agent[143960]:    user        root
Dec  6 03:19:56 np0005548731 ovn_metadata_agent[143960]:    group       root
Dec  6 03:19:56 np0005548731 ovn_metadata_agent[143960]:    maxconn     1024
Dec  6 03:19:56 np0005548731 ovn_metadata_agent[143960]:    pidfile     /var/lib/neutron/external/pids/068c4f21-6907-4167-a212-0b2a87054228.pid.haproxy
Dec  6 03:19:56 np0005548731 ovn_metadata_agent[143960]:    daemon
Dec  6 03:19:56 np0005548731 ovn_metadata_agent[143960]: 
Dec  6 03:19:56 np0005548731 ovn_metadata_agent[143960]: defaults
Dec  6 03:19:56 np0005548731 ovn_metadata_agent[143960]:    log global
Dec  6 03:19:56 np0005548731 ovn_metadata_agent[143960]:    mode http
Dec  6 03:19:56 np0005548731 ovn_metadata_agent[143960]:    option httplog
Dec  6 03:19:56 np0005548731 ovn_metadata_agent[143960]:    option dontlognull
Dec  6 03:19:56 np0005548731 ovn_metadata_agent[143960]:    option http-server-close
Dec  6 03:19:56 np0005548731 ovn_metadata_agent[143960]:    option forwardfor
Dec  6 03:19:56 np0005548731 ovn_metadata_agent[143960]:    retries                 3
Dec  6 03:19:56 np0005548731 ovn_metadata_agent[143960]:    timeout http-request    30s
Dec  6 03:19:56 np0005548731 ovn_metadata_agent[143960]:    timeout connect         30s
Dec  6 03:19:56 np0005548731 ovn_metadata_agent[143960]:    timeout client          32s
Dec  6 03:19:56 np0005548731 ovn_metadata_agent[143960]:    timeout server          32s
Dec  6 03:19:56 np0005548731 ovn_metadata_agent[143960]:    timeout http-keep-alive 30s
Dec  6 03:19:56 np0005548731 ovn_metadata_agent[143960]: 
Dec  6 03:19:56 np0005548731 ovn_metadata_agent[143960]: 
Dec  6 03:19:56 np0005548731 ovn_metadata_agent[143960]: listen listener
Dec  6 03:19:56 np0005548731 ovn_metadata_agent[143960]:    bind 169.254.169.254:80
Dec  6 03:19:56 np0005548731 ovn_metadata_agent[143960]:    server metadata /var/lib/neutron/metadata_proxy
Dec  6 03:19:56 np0005548731 ovn_metadata_agent[143960]:    http-request add-header X-OVN-Network-ID 068c4f21-6907-4167-a212-0b2a87054228
Dec  6 03:19:56 np0005548731 ovn_metadata_agent[143960]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Dec  6 03:19:56 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:19:56.396 143965 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-068c4f21-6907-4167-a212-0b2a87054228', 'env', 'PROCESS_TAG=haproxy-068c4f21-6907-4167-a212-0b2a87054228', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/068c4f21-6907-4167-a212-0b2a87054228.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Dec  6 03:19:56 np0005548731 podman[341435]: 2025-12-06 08:19:56.769346277 +0000 UTC m=+0.054702067 container create bdcebae31c4c5d058e140b317a2d7f907cd616c575e13fc467386d96f4b936f0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-068c4f21-6907-4167-a212-0b2a87054228, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true)
Dec  6 03:19:56 np0005548731 systemd[1]: Started libpod-conmon-bdcebae31c4c5d058e140b317a2d7f907cd616c575e13fc467386d96f4b936f0.scope.
Dec  6 03:19:56 np0005548731 nova_compute[232433]: 2025-12-06 08:19:56.818 232437 DEBUG nova.virt.driver [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Emitting event <LifecycleEvent: 1765009196.817969, 9af91392-7795-4927-9070-f820c4efd904 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 03:19:56 np0005548731 nova_compute[232433]: 2025-12-06 08:19:56.819 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 9af91392-7795-4927-9070-f820c4efd904] VM Started (Lifecycle Event)#033[00m
Dec  6 03:19:56 np0005548731 systemd[1]: Started libcrun container.
Dec  6 03:19:56 np0005548731 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3587ff309ae668eb9c75ea36c71d5b86e3881f4b2b1b01b39785c4a389a03fc4/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec  6 03:19:56 np0005548731 podman[341435]: 2025-12-06 08:19:56.745224228 +0000 UTC m=+0.030580038 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec  6 03:19:56 np0005548731 podman[341435]: 2025-12-06 08:19:56.840986696 +0000 UTC m=+0.126342506 container init bdcebae31c4c5d058e140b317a2d7f907cd616c575e13fc467386d96f4b936f0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-068c4f21-6907-4167-a212-0b2a87054228, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  6 03:19:56 np0005548731 podman[341435]: 2025-12-06 08:19:56.846154002 +0000 UTC m=+0.131509802 container start bdcebae31c4c5d058e140b317a2d7f907cd616c575e13fc467386d96f4b936f0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-068c4f21-6907-4167-a212-0b2a87054228, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Dec  6 03:19:56 np0005548731 neutron-haproxy-ovnmeta-068c4f21-6907-4167-a212-0b2a87054228[341459]: [NOTICE]   (341463) : New worker (341465) forked
Dec  6 03:19:56 np0005548731 neutron-haproxy-ovnmeta-068c4f21-6907-4167-a212-0b2a87054228[341459]: [NOTICE]   (341463) : Loading success.
Dec  6 03:19:56 np0005548731 nova_compute[232433]: 2025-12-06 08:19:56.932 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 9af91392-7795-4927-9070-f820c4efd904] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 03:19:56 np0005548731 nova_compute[232433]: 2025-12-06 08:19:56.936 232437 DEBUG nova.virt.driver [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Emitting event <LifecycleEvent: 1765009196.8190255, 9af91392-7795-4927-9070-f820c4efd904 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 03:19:56 np0005548731 nova_compute[232433]: 2025-12-06 08:19:56.936 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 9af91392-7795-4927-9070-f820c4efd904] VM Paused (Lifecycle Event)#033[00m
Dec  6 03:19:56 np0005548731 nova_compute[232433]: 2025-12-06 08:19:56.944 232437 DEBUG nova.compute.manager [req-9768a110-1a04-46da-9ec9-a76f8e3643af req-bb2db10c-590d-46e9-af11-76372b9632c5 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 9af91392-7795-4927-9070-f820c4efd904] Received event network-vif-plugged-0031a548-e628-44e3-813b-e2b19eff3073 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 03:19:56 np0005548731 nova_compute[232433]: 2025-12-06 08:19:56.944 232437 DEBUG oslo_concurrency.lockutils [req-9768a110-1a04-46da-9ec9-a76f8e3643af req-bb2db10c-590d-46e9-af11-76372b9632c5 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "9af91392-7795-4927-9070-f820c4efd904-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:19:56 np0005548731 nova_compute[232433]: 2025-12-06 08:19:56.945 232437 DEBUG oslo_concurrency.lockutils [req-9768a110-1a04-46da-9ec9-a76f8e3643af req-bb2db10c-590d-46e9-af11-76372b9632c5 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "9af91392-7795-4927-9070-f820c4efd904-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:19:56 np0005548731 nova_compute[232433]: 2025-12-06 08:19:56.945 232437 DEBUG oslo_concurrency.lockutils [req-9768a110-1a04-46da-9ec9-a76f8e3643af req-bb2db10c-590d-46e9-af11-76372b9632c5 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "9af91392-7795-4927-9070-f820c4efd904-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:19:56 np0005548731 nova_compute[232433]: 2025-12-06 08:19:56.945 232437 DEBUG nova.compute.manager [req-9768a110-1a04-46da-9ec9-a76f8e3643af req-bb2db10c-590d-46e9-af11-76372b9632c5 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 9af91392-7795-4927-9070-f820c4efd904] Processing event network-vif-plugged-0031a548-e628-44e3-813b-e2b19eff3073 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Dec  6 03:19:56 np0005548731 nova_compute[232433]: 2025-12-06 08:19:56.946 232437 DEBUG nova.compute.manager [None req-7a9af839-05b2-4aa5-a027-889229875de7 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] [instance: 9af91392-7795-4927-9070-f820c4efd904] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec  6 03:19:56 np0005548731 nova_compute[232433]: 2025-12-06 08:19:56.949 232437 DEBUG nova.virt.libvirt.driver [None req-7a9af839-05b2-4aa5-a027-889229875de7 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] [instance: 9af91392-7795-4927-9070-f820c4efd904] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Dec  6 03:19:56 np0005548731 nova_compute[232433]: 2025-12-06 08:19:56.952 232437 INFO nova.virt.libvirt.driver [-] [instance: 9af91392-7795-4927-9070-f820c4efd904] Instance spawned successfully.#033[00m
Dec  6 03:19:56 np0005548731 nova_compute[232433]: 2025-12-06 08:19:56.952 232437 DEBUG nova.virt.libvirt.driver [None req-7a9af839-05b2-4aa5-a027-889229875de7 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] [instance: 9af91392-7795-4927-9070-f820c4efd904] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Dec  6 03:19:56 np0005548731 nova_compute[232433]: 2025-12-06 08:19:56.987 232437 DEBUG nova.virt.libvirt.driver [None req-7a9af839-05b2-4aa5-a027-889229875de7 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] [instance: 9af91392-7795-4927-9070-f820c4efd904] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 03:19:56 np0005548731 nova_compute[232433]: 2025-12-06 08:19:56.988 232437 DEBUG nova.virt.libvirt.driver [None req-7a9af839-05b2-4aa5-a027-889229875de7 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] [instance: 9af91392-7795-4927-9070-f820c4efd904] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 03:19:56 np0005548731 nova_compute[232433]: 2025-12-06 08:19:56.988 232437 DEBUG nova.virt.libvirt.driver [None req-7a9af839-05b2-4aa5-a027-889229875de7 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] [instance: 9af91392-7795-4927-9070-f820c4efd904] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 03:19:56 np0005548731 nova_compute[232433]: 2025-12-06 08:19:56.989 232437 DEBUG nova.virt.libvirt.driver [None req-7a9af839-05b2-4aa5-a027-889229875de7 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] [instance: 9af91392-7795-4927-9070-f820c4efd904] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 03:19:56 np0005548731 nova_compute[232433]: 2025-12-06 08:19:56.989 232437 DEBUG nova.virt.libvirt.driver [None req-7a9af839-05b2-4aa5-a027-889229875de7 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] [instance: 9af91392-7795-4927-9070-f820c4efd904] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 03:19:56 np0005548731 nova_compute[232433]: 2025-12-06 08:19:56.990 232437 DEBUG nova.virt.libvirt.driver [None req-7a9af839-05b2-4aa5-a027-889229875de7 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] [instance: 9af91392-7795-4927-9070-f820c4efd904] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 03:19:56 np0005548731 nova_compute[232433]: 2025-12-06 08:19:56.993 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 9af91392-7795-4927-9070-f820c4efd904] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 03:19:56 np0005548731 nova_compute[232433]: 2025-12-06 08:19:56.996 232437 DEBUG nova.virt.driver [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Emitting event <LifecycleEvent: 1765009196.9491558, 9af91392-7795-4927-9070-f820c4efd904 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 03:19:56 np0005548731 nova_compute[232433]: 2025-12-06 08:19:56.997 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 9af91392-7795-4927-9070-f820c4efd904] VM Resumed (Lifecycle Event)#033[00m
Dec  6 03:19:57 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:19:57.039 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=9f96b960-b4f2-40bd-ae99-08121f5e8b78, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '100'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 03:19:57 np0005548731 nova_compute[232433]: 2025-12-06 08:19:57.106 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 9af91392-7795-4927-9070-f820c4efd904] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 03:19:57 np0005548731 nova_compute[232433]: 2025-12-06 08:19:57.109 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 9af91392-7795-4927-9070-f820c4efd904] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  6 03:19:57 np0005548731 nova_compute[232433]: 2025-12-06 08:19:57.199 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 9af91392-7795-4927-9070-f820c4efd904] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  6 03:19:57 np0005548731 nova_compute[232433]: 2025-12-06 08:19:57.240 232437 INFO nova.compute.manager [None req-7a9af839-05b2-4aa5-a027-889229875de7 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] [instance: 9af91392-7795-4927-9070-f820c4efd904] Took 11.89 seconds to spawn the instance on the hypervisor.#033[00m
Dec  6 03:19:57 np0005548731 nova_compute[232433]: 2025-12-06 08:19:57.241 232437 DEBUG nova.compute.manager [None req-7a9af839-05b2-4aa5-a027-889229875de7 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] [instance: 9af91392-7795-4927-9070-f820c4efd904] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 03:19:57 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:19:57 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:19:57 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:19:57.243 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:19:57 np0005548731 nova_compute[232433]: 2025-12-06 08:19:57.330 232437 INFO nova.compute.manager [None req-7a9af839-05b2-4aa5-a027-889229875de7 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] [instance: 9af91392-7795-4927-9070-f820c4efd904] Took 13.05 seconds to build instance.#033[00m
Dec  6 03:19:57 np0005548731 nova_compute[232433]: 2025-12-06 08:19:57.363 232437 DEBUG oslo_concurrency.lockutils [None req-7a9af839-05b2-4aa5-a027-889229875de7 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Lock "9af91392-7795-4927-9070-f820c4efd904" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.301s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:19:57 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:19:57 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 03:19:57 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:19:57.465 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 03:19:57 np0005548731 nova_compute[232433]: 2025-12-06 08:19:57.771 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:19:59 np0005548731 nova_compute[232433]: 2025-12-06 08:19:59.103 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:19:59 np0005548731 nova_compute[232433]: 2025-12-06 08:19:59.104 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  6 03:19:59 np0005548731 nova_compute[232433]: 2025-12-06 08:19:59.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:19:59 np0005548731 nova_compute[232433]: 2025-12-06 08:19:59.178 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:19:59 np0005548731 nova_compute[232433]: 2025-12-06 08:19:59.179 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:19:59 np0005548731 nova_compute[232433]: 2025-12-06 08:19:59.179 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:19:59 np0005548731 nova_compute[232433]: 2025-12-06 08:19:59.180 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  6 03:19:59 np0005548731 nova_compute[232433]: 2025-12-06 08:19:59.181 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 03:19:59 np0005548731 nova_compute[232433]: 2025-12-06 08:19:59.223 232437 DEBUG nova.compute.manager [req-96bc8d23-c389-49ae-a49c-dfbc2379f4c1 req-ce4cc854-49bf-4ad2-b4e4-cc28c5562aa2 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 9af91392-7795-4927-9070-f820c4efd904] Received event network-vif-plugged-0031a548-e628-44e3-813b-e2b19eff3073 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 03:19:59 np0005548731 nova_compute[232433]: 2025-12-06 08:19:59.224 232437 DEBUG oslo_concurrency.lockutils [req-96bc8d23-c389-49ae-a49c-dfbc2379f4c1 req-ce4cc854-49bf-4ad2-b4e4-cc28c5562aa2 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "9af91392-7795-4927-9070-f820c4efd904-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:19:59 np0005548731 nova_compute[232433]: 2025-12-06 08:19:59.225 232437 DEBUG oslo_concurrency.lockutils [req-96bc8d23-c389-49ae-a49c-dfbc2379f4c1 req-ce4cc854-49bf-4ad2-b4e4-cc28c5562aa2 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "9af91392-7795-4927-9070-f820c4efd904-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:19:59 np0005548731 nova_compute[232433]: 2025-12-06 08:19:59.225 232437 DEBUG oslo_concurrency.lockutils [req-96bc8d23-c389-49ae-a49c-dfbc2379f4c1 req-ce4cc854-49bf-4ad2-b4e4-cc28c5562aa2 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "9af91392-7795-4927-9070-f820c4efd904-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:19:59 np0005548731 nova_compute[232433]: 2025-12-06 08:19:59.226 232437 DEBUG nova.compute.manager [req-96bc8d23-c389-49ae-a49c-dfbc2379f4c1 req-ce4cc854-49bf-4ad2-b4e4-cc28c5562aa2 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 9af91392-7795-4927-9070-f820c4efd904] No waiting events found dispatching network-vif-plugged-0031a548-e628-44e3-813b-e2b19eff3073 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 03:19:59 np0005548731 nova_compute[232433]: 2025-12-06 08:19:59.226 232437 WARNING nova.compute.manager [req-96bc8d23-c389-49ae-a49c-dfbc2379f4c1 req-ce4cc854-49bf-4ad2-b4e4-cc28c5562aa2 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 9af91392-7795-4927-9070-f820c4efd904] Received unexpected event network-vif-plugged-0031a548-e628-44e3-813b-e2b19eff3073 for instance with vm_state active and task_state None.#033[00m
Dec  6 03:19:59 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:19:59 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:19:59 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:19:59.245 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:19:59 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:19:59 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:19:59 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:19:59.468 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:19:59 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 03:19:59 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/656197683' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 03:19:59 np0005548731 nova_compute[232433]: 2025-12-06 08:19:59.761 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.581s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 03:19:59 np0005548731 nova_compute[232433]: 2025-12-06 08:19:59.838 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:19:59 np0005548731 nova_compute[232433]: 2025-12-06 08:19:59.905 232437 DEBUG nova.virt.libvirt.driver [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] skipping disk for instance-000000d5 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  6 03:19:59 np0005548731 nova_compute[232433]: 2025-12-06 08:19:59.906 232437 DEBUG nova.virt.libvirt.driver [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] skipping disk for instance-000000d5 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  6 03:20:00 np0005548731 nova_compute[232433]: 2025-12-06 08:20:00.104 232437 WARNING nova.virt.libvirt.driver [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  6 03:20:00 np0005548731 nova_compute[232433]: 2025-12-06 08:20:00.106 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=3938MB free_disk=20.967376708984375GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  6 03:20:00 np0005548731 nova_compute[232433]: 2025-12-06 08:20:00.106 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:20:00 np0005548731 nova_compute[232433]: 2025-12-06 08:20:00.107 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:20:00 np0005548731 nova_compute[232433]: 2025-12-06 08:20:00.208 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Instance 9af91392-7795-4927-9070-f820c4efd904 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  6 03:20:00 np0005548731 nova_compute[232433]: 2025-12-06 08:20:00.209 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  6 03:20:00 np0005548731 nova_compute[232433]: 2025-12-06 08:20:00.209 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  6 03:20:00 np0005548731 nova_compute[232433]: 2025-12-06 08:20:00.276 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 03:20:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:20:00.923 143965 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:20:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:20:00.924 143965 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:20:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:20:00.925 143965 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:20:00 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 03:20:00 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2368192170' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 03:20:00 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e425 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:20:00 np0005548731 ceph-mon[77458]: overall HEALTH_OK
Dec  6 03:20:01 np0005548731 nova_compute[232433]: 2025-12-06 08:20:01.005 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.729s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 03:20:01 np0005548731 nova_compute[232433]: 2025-12-06 08:20:01.015 232437 DEBUG nova.compute.provider_tree [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Inventory has not changed in ProviderTree for provider: 6d00757a-082f-486d-ae84-869a2ba2e6e7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  6 03:20:01 np0005548731 nova_compute[232433]: 2025-12-06 08:20:01.158 232437 DEBUG nova.scheduler.client.report [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Inventory has not changed for provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  6 03:20:01 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:20:01 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:20:01 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:20:01.246 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:20:01 np0005548731 nova_compute[232433]: 2025-12-06 08:20:01.291 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  6 03:20:01 np0005548731 nova_compute[232433]: 2025-12-06 08:20:01.292 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.185s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:20:01 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:20:01 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:20:01 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:20:01.471 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:20:02 np0005548731 nova_compute[232433]: 2025-12-06 08:20:02.293 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:20:02 np0005548731 nova_compute[232433]: 2025-12-06 08:20:02.294 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:20:02 np0005548731 nova_compute[232433]: 2025-12-06 08:20:02.774 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:20:03 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:20:03 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:20:03 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:20:03.248 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:20:03 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:20:03 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:20:03 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:20:03.473 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:20:04 np0005548731 nova_compute[232433]: 2025-12-06 08:20:04.105 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:20:04 np0005548731 nova_compute[232433]: 2025-12-06 08:20:04.843 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:20:05 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:20:05 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:20:05 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:20:05.251 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:20:05 np0005548731 NetworkManager[49182]: <info>  [1765009205.3377] manager: (patch-br-int-to-provnet-9e78c1a1-68f4-477a-abaa-13a98bde06e5): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/520)
Dec  6 03:20:05 np0005548731 NetworkManager[49182]: <info>  [1765009205.3385] manager: (patch-provnet-9e78c1a1-68f4-477a-abaa-13a98bde06e5-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/521)
Dec  6 03:20:05 np0005548731 nova_compute[232433]: 2025-12-06 08:20:05.336 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:20:05 np0005548731 nova_compute[232433]: 2025-12-06 08:20:05.439 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:20:05 np0005548731 ovn_controller[133927]: 2025-12-06T08:20:05Z|01094|binding|INFO|Releasing lport c776d704-0c5f-4753-a9f6-c0a544c5354c from this chassis (sb_readonly=0)
Dec  6 03:20:05 np0005548731 nova_compute[232433]: 2025-12-06 08:20:05.453 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:20:05 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:20:05 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:20:05 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:20:05.477 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:20:05 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e425 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:20:06 np0005548731 nova_compute[232433]: 2025-12-06 08:20:06.125 232437 DEBUG nova.compute.manager [req-71184fcd-e9e1-4876-a240-5b9b34631162 req-aed71dd9-f090-42b7-988a-930fd8d8921d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 9af91392-7795-4927-9070-f820c4efd904] Received event network-changed-0031a548-e628-44e3-813b-e2b19eff3073 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 03:20:06 np0005548731 nova_compute[232433]: 2025-12-06 08:20:06.126 232437 DEBUG nova.compute.manager [req-71184fcd-e9e1-4876-a240-5b9b34631162 req-aed71dd9-f090-42b7-988a-930fd8d8921d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 9af91392-7795-4927-9070-f820c4efd904] Refreshing instance network info cache due to event network-changed-0031a548-e628-44e3-813b-e2b19eff3073. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  6 03:20:06 np0005548731 nova_compute[232433]: 2025-12-06 08:20:06.126 232437 DEBUG oslo_concurrency.lockutils [req-71184fcd-e9e1-4876-a240-5b9b34631162 req-aed71dd9-f090-42b7-988a-930fd8d8921d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-9af91392-7795-4927-9070-f820c4efd904" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 03:20:06 np0005548731 nova_compute[232433]: 2025-12-06 08:20:06.127 232437 DEBUG oslo_concurrency.lockutils [req-71184fcd-e9e1-4876-a240-5b9b34631162 req-aed71dd9-f090-42b7-988a-930fd8d8921d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-9af91392-7795-4927-9070-f820c4efd904" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 03:20:06 np0005548731 nova_compute[232433]: 2025-12-06 08:20:06.127 232437 DEBUG nova.network.neutron [req-71184fcd-e9e1-4876-a240-5b9b34631162 req-aed71dd9-f090-42b7-988a-930fd8d8921d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 9af91392-7795-4927-9070-f820c4efd904] Refreshing network info cache for port 0031a548-e628-44e3-813b-e2b19eff3073 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  6 03:20:07 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:20:07 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:20:07 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:20:07.253 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:20:07 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:20:07 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:20:07 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:20:07.480 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:20:07 np0005548731 nova_compute[232433]: 2025-12-06 08:20:07.827 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:20:08 np0005548731 nova_compute[232433]: 2025-12-06 08:20:08.237 232437 DEBUG nova.network.neutron [req-71184fcd-e9e1-4876-a240-5b9b34631162 req-aed71dd9-f090-42b7-988a-930fd8d8921d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 9af91392-7795-4927-9070-f820c4efd904] Updated VIF entry in instance network info cache for port 0031a548-e628-44e3-813b-e2b19eff3073. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  6 03:20:08 np0005548731 nova_compute[232433]: 2025-12-06 08:20:08.238 232437 DEBUG nova.network.neutron [req-71184fcd-e9e1-4876-a240-5b9b34631162 req-aed71dd9-f090-42b7-988a-930fd8d8921d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 9af91392-7795-4927-9070-f820c4efd904] Updating instance_info_cache with network_info: [{"id": "0031a548-e628-44e3-813b-e2b19eff3073", "address": "fa:16:3e:d7:1d:fe", "network": {"id": "068c4f21-6907-4167-a212-0b2a87054228", "bridge": "br-int", "label": "tempest-network-smoke--1734606332", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.188", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5d23d1d6ffc142eaa9bee0ef93fe60e4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0031a548-e6", "ovs_interfaceid": "0031a548-e628-44e3-813b-e2b19eff3073", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 03:20:08 np0005548731 nova_compute[232433]: 2025-12-06 08:20:08.280 232437 DEBUG oslo_concurrency.lockutils [req-71184fcd-e9e1-4876-a240-5b9b34631162 req-aed71dd9-f090-42b7-988a-930fd8d8921d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-9af91392-7795-4927-9070-f820c4efd904" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 03:20:09 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:20:09 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:20:09 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:20:09.254 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:20:09 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:20:09 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 03:20:09 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:20:09.482 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 03:20:09 np0005548731 nova_compute[232433]: 2025-12-06 08:20:09.846 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:20:10 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e425 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:20:11 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:20:11 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:20:11 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:20:11.257 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:20:11 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:20:11 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:20:11 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:20:11.485 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:20:11 np0005548731 ovn_controller[133927]: 2025-12-06T08:20:11Z|00146|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:d7:1d:fe 10.100.0.6
Dec  6 03:20:11 np0005548731 ovn_controller[133927]: 2025-12-06T08:20:11Z|00147|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:d7:1d:fe 10.100.0.6
Dec  6 03:20:12 np0005548731 nova_compute[232433]: 2025-12-06 08:20:12.830 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:20:13 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:20:13 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:20:13 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:20:13.259 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:20:13 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:20:13 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:20:13 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:20:13.494 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:20:14 np0005548731 nova_compute[232433]: 2025-12-06 08:20:14.850 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:20:14 np0005548731 podman[341582]: 2025-12-06 08:20:14.953526635 +0000 UTC m=+0.085778725 container health_status ae4fd08cdec31d6ae2a6b91ffff7deb6ca5a8375cb52ee4b685cc6f25796824e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec  6 03:20:14 np0005548731 podman[341580]: 2025-12-06 08:20:14.964096773 +0000 UTC m=+0.101928989 container health_status 26c2ce9bdb83c6db5ccad7f5b2a7cb4e9354ab0235ad2f942ea42c8feda2142b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible)
Dec  6 03:20:14 np0005548731 podman[341581]: 2025-12-06 08:20:14.980281199 +0000 UTC m=+0.121218741 container health_status a118c3becf56cfbcae208065771ebf10a65162bb7b96389971293e534823fb78 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller, io.buildah.version=1.41.3, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0)
Dec  6 03:20:15 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:20:15 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:20:15 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:20:15.262 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:20:15 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:20:15 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:20:15 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:20:15.497 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:20:15 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e425 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:20:17 np0005548731 nova_compute[232433]: 2025-12-06 08:20:17.101 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:20:17 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:20:17 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:20:17 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:20:17.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:20:17 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:20:17 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:20:17 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:20:17.501 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:20:17 np0005548731 nova_compute[232433]: 2025-12-06 08:20:17.833 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:20:19 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:20:19 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:20:19 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:20:19.266 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:20:19 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:20:19 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 03:20:19 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:20:19.503 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 03:20:19 np0005548731 nova_compute[232433]: 2025-12-06 08:20:19.899 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:20:20 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e425 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:20:21 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:20:21 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:20:21 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:20:21.267 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:20:21 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:20:21 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:20:21 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:20:21.507 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:20:22 np0005548731 nova_compute[232433]: 2025-12-06 08:20:22.837 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:20:22 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 03:20:22 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 03:20:22 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  6 03:20:22 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 03:20:22 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  6 03:20:23 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:20:23 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:20:23 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:20:23.270 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:20:23 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:20:23 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:20:23 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:20:23.509 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:20:24 np0005548731 nova_compute[232433]: 2025-12-06 08:20:24.902 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:20:25 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:20:25 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:20:25 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:20:25.271 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:20:25 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:20:25 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:20:25 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:20:25.513 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:20:25 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e425 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:20:27 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:20:27 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:20:27 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:20:27.272 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:20:27 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:20:27 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:20:27 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:20:27.515 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:20:27 np0005548731 nova_compute[232433]: 2025-12-06 08:20:27.840 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:20:29 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 03:20:29 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 03:20:29 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:20:29 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:20:29 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:20:29.275 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:20:29 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:20:29 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:20:29 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:20:29.517 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:20:29 np0005548731 nova_compute[232433]: 2025-12-06 08:20:29.941 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:20:30 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  6 03:20:30 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2333218000' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  6 03:20:30 np0005548731 nova_compute[232433]: 2025-12-06 08:20:30.935 232437 DEBUG nova.compute.manager [req-3d0bdce4-2fda-4a2e-a192-e7e2162be36e req-27436a69-2005-442f-a575-0a78e2d2cec9 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 9af91392-7795-4927-9070-f820c4efd904] Received event network-changed-0031a548-e628-44e3-813b-e2b19eff3073 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 03:20:30 np0005548731 nova_compute[232433]: 2025-12-06 08:20:30.935 232437 DEBUG nova.compute.manager [req-3d0bdce4-2fda-4a2e-a192-e7e2162be36e req-27436a69-2005-442f-a575-0a78e2d2cec9 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 9af91392-7795-4927-9070-f820c4efd904] Refreshing instance network info cache due to event network-changed-0031a548-e628-44e3-813b-e2b19eff3073. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  6 03:20:30 np0005548731 nova_compute[232433]: 2025-12-06 08:20:30.936 232437 DEBUG oslo_concurrency.lockutils [req-3d0bdce4-2fda-4a2e-a192-e7e2162be36e req-27436a69-2005-442f-a575-0a78e2d2cec9 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-9af91392-7795-4927-9070-f820c4efd904" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 03:20:30 np0005548731 nova_compute[232433]: 2025-12-06 08:20:30.936 232437 DEBUG oslo_concurrency.lockutils [req-3d0bdce4-2fda-4a2e-a192-e7e2162be36e req-27436a69-2005-442f-a575-0a78e2d2cec9 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-9af91392-7795-4927-9070-f820c4efd904" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 03:20:30 np0005548731 nova_compute[232433]: 2025-12-06 08:20:30.936 232437 DEBUG nova.network.neutron [req-3d0bdce4-2fda-4a2e-a192-e7e2162be36e req-27436a69-2005-442f-a575-0a78e2d2cec9 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 9af91392-7795-4927-9070-f820c4efd904] Refreshing network info cache for port 0031a548-e628-44e3-813b-e2b19eff3073 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  6 03:20:30 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e425 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:20:31 np0005548731 nova_compute[232433]: 2025-12-06 08:20:31.035 232437 DEBUG oslo_concurrency.lockutils [None req-2164762c-7c0a-4efd-96af-bab91c91562f 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Acquiring lock "9af91392-7795-4927-9070-f820c4efd904" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:20:31 np0005548731 nova_compute[232433]: 2025-12-06 08:20:31.035 232437 DEBUG oslo_concurrency.lockutils [None req-2164762c-7c0a-4efd-96af-bab91c91562f 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Lock "9af91392-7795-4927-9070-f820c4efd904" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:20:31 np0005548731 nova_compute[232433]: 2025-12-06 08:20:31.036 232437 DEBUG oslo_concurrency.lockutils [None req-2164762c-7c0a-4efd-96af-bab91c91562f 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Acquiring lock "9af91392-7795-4927-9070-f820c4efd904-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:20:31 np0005548731 nova_compute[232433]: 2025-12-06 08:20:31.037 232437 DEBUG oslo_concurrency.lockutils [None req-2164762c-7c0a-4efd-96af-bab91c91562f 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Lock "9af91392-7795-4927-9070-f820c4efd904-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:20:31 np0005548731 ceph-osd[80147]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec  6 03:20:31 np0005548731 ceph-osd[80147]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 6602.4 total, 600.0 interval#012Cumulative writes: 77K writes, 312K keys, 77K commit groups, 1.0 writes per commit group, ingest: 0.32 GB, 0.05 MB/s#012Cumulative WAL: 77K writes, 29K syncs, 2.69 writes per sync, written: 0.32 GB, 0.05 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 4721 writes, 18K keys, 4721 commit groups, 1.0 writes per commit group, ingest: 21.16 MB, 0.04 MB/s#012Interval WAL: 4721 writes, 1802 syncs, 2.62 writes per sync, written: 0.02 GB, 0.04 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec  6 03:20:31 np0005548731 nova_compute[232433]: 2025-12-06 08:20:31.037 232437 DEBUG oslo_concurrency.lockutils [None req-2164762c-7c0a-4efd-96af-bab91c91562f 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Lock "9af91392-7795-4927-9070-f820c4efd904-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:20:31 np0005548731 nova_compute[232433]: 2025-12-06 08:20:31.039 232437 INFO nova.compute.manager [None req-2164762c-7c0a-4efd-96af-bab91c91562f 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] [instance: 9af91392-7795-4927-9070-f820c4efd904] Terminating instance#033[00m
Dec  6 03:20:31 np0005548731 nova_compute[232433]: 2025-12-06 08:20:31.041 232437 DEBUG nova.compute.manager [None req-2164762c-7c0a-4efd-96af-bab91c91562f 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] [instance: 9af91392-7795-4927-9070-f820c4efd904] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec  6 03:20:31 np0005548731 kernel: tap0031a548-e6 (unregistering): left promiscuous mode
Dec  6 03:20:31 np0005548731 NetworkManager[49182]: <info>  [1765009231.1018] device (tap0031a548-e6): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec  6 03:20:31 np0005548731 nova_compute[232433]: 2025-12-06 08:20:31.159 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:20:31 np0005548731 ovn_controller[133927]: 2025-12-06T08:20:31Z|01095|binding|INFO|Releasing lport 0031a548-e628-44e3-813b-e2b19eff3073 from this chassis (sb_readonly=0)
Dec  6 03:20:31 np0005548731 ovn_controller[133927]: 2025-12-06T08:20:31Z|01096|binding|INFO|Setting lport 0031a548-e628-44e3-813b-e2b19eff3073 down in Southbound
Dec  6 03:20:31 np0005548731 ovn_controller[133927]: 2025-12-06T08:20:31Z|01097|binding|INFO|Removing iface tap0031a548-e6 ovn-installed in OVS
Dec  6 03:20:31 np0005548731 nova_compute[232433]: 2025-12-06 08:20:31.162 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:20:31 np0005548731 nova_compute[232433]: 2025-12-06 08:20:31.174 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:20:31 np0005548731 systemd[1]: machine-qemu\x2d111\x2dinstance\x2d000000d5.scope: Deactivated successfully.
Dec  6 03:20:31 np0005548731 systemd[1]: machine-qemu\x2d111\x2dinstance\x2d000000d5.scope: Consumed 15.529s CPU time.
Dec  6 03:20:31 np0005548731 systemd-machined[195355]: Machine qemu-111-instance-000000d5 terminated.
Dec  6 03:20:31 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:20:31.270 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d7:1d:fe 10.100.0.6'], port_security=['fa:16:3e:d7:1d:fe 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '9af91392-7795-4927-9070-f820c4efd904', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-068c4f21-6907-4167-a212-0b2a87054228', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5d23d1d6ffc142eaa9bee0ef93fe60e4', 'neutron:revision_number': '4', 'neutron:security_group_ids': '078b563a-ec71-489c-8874-193a2eb887d2 6a1a0974-8ebe-46ab-a5b5-e7b4190e409b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=32661e10-b35f-49aa-b57a-a3633ad860d5, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], logical_port=0031a548-e628-44e3-813b-e2b19eff3073) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 03:20:31 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:20:31.272 143965 INFO neutron.agent.ovn.metadata.agent [-] Port 0031a548-e628-44e3-813b-e2b19eff3073 in datapath 068c4f21-6907-4167-a212-0b2a87054228 unbound from our chassis#033[00m
Dec  6 03:20:31 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:20:31.273 143965 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 068c4f21-6907-4167-a212-0b2a87054228, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec  6 03:20:31 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:20:31.275 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[ab8fd546-e776-4ad0-8c52-8bc976f65776]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:20:31 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:20:31.276 143965 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-068c4f21-6907-4167-a212-0b2a87054228 namespace which is not needed anymore#033[00m
Dec  6 03:20:31 np0005548731 nova_compute[232433]: 2025-12-06 08:20:31.276 232437 INFO nova.virt.libvirt.driver [-] [instance: 9af91392-7795-4927-9070-f820c4efd904] Instance destroyed successfully.#033[00m
Dec  6 03:20:31 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:20:31 np0005548731 nova_compute[232433]: 2025-12-06 08:20:31.277 232437 DEBUG nova.objects.instance [None req-2164762c-7c0a-4efd-96af-bab91c91562f 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Lazy-loading 'resources' on Instance uuid 9af91392-7795-4927-9070-f820c4efd904 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 03:20:31 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 03:20:31 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:20:31.276 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 03:20:31 np0005548731 neutron-haproxy-ovnmeta-068c4f21-6907-4167-a212-0b2a87054228[341459]: [NOTICE]   (341463) : haproxy version is 2.8.14-c23fe91
Dec  6 03:20:31 np0005548731 neutron-haproxy-ovnmeta-068c4f21-6907-4167-a212-0b2a87054228[341459]: [NOTICE]   (341463) : path to executable is /usr/sbin/haproxy
Dec  6 03:20:31 np0005548731 neutron-haproxy-ovnmeta-068c4f21-6907-4167-a212-0b2a87054228[341459]: [WARNING]  (341463) : Exiting Master process...
Dec  6 03:20:31 np0005548731 neutron-haproxy-ovnmeta-068c4f21-6907-4167-a212-0b2a87054228[341459]: [ALERT]    (341463) : Current worker (341465) exited with code 143 (Terminated)
Dec  6 03:20:31 np0005548731 neutron-haproxy-ovnmeta-068c4f21-6907-4167-a212-0b2a87054228[341459]: [WARNING]  (341463) : All workers exited. Exiting... (0)
Dec  6 03:20:31 np0005548731 systemd[1]: libpod-bdcebae31c4c5d058e140b317a2d7f907cd616c575e13fc467386d96f4b936f0.scope: Deactivated successfully.
Dec  6 03:20:31 np0005548731 podman[341920]: 2025-12-06 08:20:31.426580557 +0000 UTC m=+0.046778374 container died bdcebae31c4c5d058e140b317a2d7f907cd616c575e13fc467386d96f4b936f0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-068c4f21-6907-4167-a212-0b2a87054228, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Dec  6 03:20:31 np0005548731 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-bdcebae31c4c5d058e140b317a2d7f907cd616c575e13fc467386d96f4b936f0-userdata-shm.mount: Deactivated successfully.
Dec  6 03:20:31 np0005548731 nova_compute[232433]: 2025-12-06 08:20:31.452 232437 DEBUG nova.virt.libvirt.vif [None req-2164762c-7c0a-4efd-96af-bab91c91562f 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T08:19:40Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-568463891-access_point-21027346',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-568463891-access_point-21027346',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-568463891-acc',id=213,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEpPL8QMcLUHBG1lDEV3xyTalweJOiC97sj7nhdx0+SksDQPIssAgzPziD/ow0DmP284gUqVt6ILdtwsRJeimlQ2jXKvDiPTpC5W+N5M8MOtov2iZiqOgGKwgUrOxQcfJw==',key_name='tempest-TestSecurityGroupsBasicOps-874960529',keypairs=<?>,launch_index=0,launched_at=2025-12-06T08:19:57Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='5d23d1d6ffc142eaa9bee0ef93fe60e4',ramdisk_id='',reservation_id='r-g8itipsr',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestSecurityGroupsBasicOps-568463891',owner_user_name='tempest-TestSecurityGroupsBasicOps-568463891-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T08:19:57Z,user_data=None,user_id='0432cb6633e14c1b86fc320e7f3bb880',uuid=9af91392-7795-4927-9070-f820c4efd904,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "0031a548-e628-44e3-813b-e2b19eff3073", "address": "fa:16:3e:d7:1d:fe", "network": {"id": "068c4f21-6907-4167-a212-0b2a87054228", "bridge": "br-int", "label": "tempest-network-smoke--1734606332", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.188", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5d23d1d6ffc142eaa9bee0ef93fe60e4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0031a548-e6", "ovs_interfaceid": "0031a548-e628-44e3-813b-e2b19eff3073", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec  6 03:20:31 np0005548731 systemd[1]: var-lib-containers-storage-overlay-3587ff309ae668eb9c75ea36c71d5b86e3881f4b2b1b01b39785c4a389a03fc4-merged.mount: Deactivated successfully.
Dec  6 03:20:31 np0005548731 nova_compute[232433]: 2025-12-06 08:20:31.453 232437 DEBUG nova.network.os_vif_util [None req-2164762c-7c0a-4efd-96af-bab91c91562f 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Converting VIF {"id": "0031a548-e628-44e3-813b-e2b19eff3073", "address": "fa:16:3e:d7:1d:fe", "network": {"id": "068c4f21-6907-4167-a212-0b2a87054228", "bridge": "br-int", "label": "tempest-network-smoke--1734606332", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.188", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5d23d1d6ffc142eaa9bee0ef93fe60e4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0031a548-e6", "ovs_interfaceid": "0031a548-e628-44e3-813b-e2b19eff3073", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 03:20:31 np0005548731 nova_compute[232433]: 2025-12-06 08:20:31.454 232437 DEBUG nova.network.os_vif_util [None req-2164762c-7c0a-4efd-96af-bab91c91562f 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:d7:1d:fe,bridge_name='br-int',has_traffic_filtering=True,id=0031a548-e628-44e3-813b-e2b19eff3073,network=Network(068c4f21-6907-4167-a212-0b2a87054228),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0031a548-e6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 03:20:31 np0005548731 nova_compute[232433]: 2025-12-06 08:20:31.454 232437 DEBUG os_vif [None req-2164762c-7c0a-4efd-96af-bab91c91562f 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:d7:1d:fe,bridge_name='br-int',has_traffic_filtering=True,id=0031a548-e628-44e3-813b-e2b19eff3073,network=Network(068c4f21-6907-4167-a212-0b2a87054228),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0031a548-e6') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec  6 03:20:31 np0005548731 nova_compute[232433]: 2025-12-06 08:20:31.456 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:20:31 np0005548731 nova_compute[232433]: 2025-12-06 08:20:31.456 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0031a548-e6, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 03:20:31 np0005548731 nova_compute[232433]: 2025-12-06 08:20:31.458 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:20:31 np0005548731 nova_compute[232433]: 2025-12-06 08:20:31.461 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  6 03:20:31 np0005548731 nova_compute[232433]: 2025-12-06 08:20:31.463 232437 INFO os_vif [None req-2164762c-7c0a-4efd-96af-bab91c91562f 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:d7:1d:fe,bridge_name='br-int',has_traffic_filtering=True,id=0031a548-e628-44e3-813b-e2b19eff3073,network=Network(068c4f21-6907-4167-a212-0b2a87054228),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0031a548-e6')#033[00m
Dec  6 03:20:31 np0005548731 podman[341920]: 2025-12-06 08:20:31.46811956 +0000 UTC m=+0.088317357 container cleanup bdcebae31c4c5d058e140b317a2d7f907cd616c575e13fc467386d96f4b936f0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-068c4f21-6907-4167-a212-0b2a87054228, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec  6 03:20:31 np0005548731 systemd[1]: libpod-conmon-bdcebae31c4c5d058e140b317a2d7f907cd616c575e13fc467386d96f4b936f0.scope: Deactivated successfully.
Dec  6 03:20:31 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:20:31 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:20:31 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:20:31.520 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:20:31 np0005548731 podman[341959]: 2025-12-06 08:20:31.52832589 +0000 UTC m=+0.036040850 container remove bdcebae31c4c5d058e140b317a2d7f907cd616c575e13fc467386d96f4b936f0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-068c4f21-6907-4167-a212-0b2a87054228, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec  6 03:20:31 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:20:31.533 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[039a1f5c-29e6-4a28-8b42-8fd36a82fe31]: (4, ('Sat Dec  6 08:20:31 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-068c4f21-6907-4167-a212-0b2a87054228 (bdcebae31c4c5d058e140b317a2d7f907cd616c575e13fc467386d96f4b936f0)\nbdcebae31c4c5d058e140b317a2d7f907cd616c575e13fc467386d96f4b936f0\nSat Dec  6 08:20:31 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-068c4f21-6907-4167-a212-0b2a87054228 (bdcebae31c4c5d058e140b317a2d7f907cd616c575e13fc467386d96f4b936f0)\nbdcebae31c4c5d058e140b317a2d7f907cd616c575e13fc467386d96f4b936f0\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:20:31 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:20:31.535 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[7efd2153-7fa2-452a-8890-fee894bfcb6b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:20:31 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:20:31.536 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap068c4f21-60, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 03:20:31 np0005548731 nova_compute[232433]: 2025-12-06 08:20:31.537 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:20:31 np0005548731 kernel: tap068c4f21-60: left promiscuous mode
Dec  6 03:20:31 np0005548731 nova_compute[232433]: 2025-12-06 08:20:31.539 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:20:31 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:20:31.541 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[04738c3e-d297-44fc-803f-5365cd2a161a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:20:31 np0005548731 nova_compute[232433]: 2025-12-06 08:20:31.555 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:20:31 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:20:31.567 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[8f3752b2-2f04-417c-9843-80e619c80cce]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:20:31 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:20:31.569 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[ecaa4859-f2e7-484f-94f5-b7142d1e797f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:20:31 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:20:31.586 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[820dc29b-cb61-49d1-8253-dbdaa760ba0c]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 945812, 'reachable_time': 43382, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 341979, 'error': None, 'target': 'ovnmeta-068c4f21-6907-4167-a212-0b2a87054228', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:20:31 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:20:31.589 144079 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-068c4f21-6907-4167-a212-0b2a87054228 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Dec  6 03:20:31 np0005548731 systemd[1]: run-netns-ovnmeta\x2d068c4f21\x2d6907\x2d4167\x2da212\x2d0b2a87054228.mount: Deactivated successfully.
Dec  6 03:20:31 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:20:31.590 144079 DEBUG oslo.privsep.daemon [-] privsep: reply[79a33d8f-2fd1-4b74-b99b-2f28aac2ee64]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:20:31 np0005548731 nova_compute[232433]: 2025-12-06 08:20:31.852 232437 INFO nova.virt.libvirt.driver [None req-2164762c-7c0a-4efd-96af-bab91c91562f 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] [instance: 9af91392-7795-4927-9070-f820c4efd904] Deleting instance files /var/lib/nova/instances/9af91392-7795-4927-9070-f820c4efd904_del#033[00m
Dec  6 03:20:31 np0005548731 nova_compute[232433]: 2025-12-06 08:20:31.853 232437 INFO nova.virt.libvirt.driver [None req-2164762c-7c0a-4efd-96af-bab91c91562f 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] [instance: 9af91392-7795-4927-9070-f820c4efd904] Deletion of /var/lib/nova/instances/9af91392-7795-4927-9070-f820c4efd904_del complete#033[00m
Dec  6 03:20:31 np0005548731 nova_compute[232433]: 2025-12-06 08:20:31.974 232437 INFO nova.compute.manager [None req-2164762c-7c0a-4efd-96af-bab91c91562f 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] [instance: 9af91392-7795-4927-9070-f820c4efd904] Took 0.93 seconds to destroy the instance on the hypervisor.#033[00m
Dec  6 03:20:31 np0005548731 nova_compute[232433]: 2025-12-06 08:20:31.975 232437 DEBUG oslo.service.loopingcall [None req-2164762c-7c0a-4efd-96af-bab91c91562f 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Dec  6 03:20:31 np0005548731 nova_compute[232433]: 2025-12-06 08:20:31.976 232437 DEBUG nova.compute.manager [-] [instance: 9af91392-7795-4927-9070-f820c4efd904] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Dec  6 03:20:31 np0005548731 nova_compute[232433]: 2025-12-06 08:20:31.976 232437 DEBUG nova.network.neutron [-] [instance: 9af91392-7795-4927-9070-f820c4efd904] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Dec  6 03:20:32 np0005548731 nova_compute[232433]: 2025-12-06 08:20:32.841 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:20:32 np0005548731 nova_compute[232433]: 2025-12-06 08:20:32.882 232437 DEBUG nova.compute.manager [req-8805ee87-695f-47be-828c-30a82daaa8d4 req-2cb411e8-ed5b-4788-8ff5-7e04546e1029 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 9af91392-7795-4927-9070-f820c4efd904] Received event network-vif-unplugged-0031a548-e628-44e3-813b-e2b19eff3073 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 03:20:32 np0005548731 nova_compute[232433]: 2025-12-06 08:20:32.882 232437 DEBUG oslo_concurrency.lockutils [req-8805ee87-695f-47be-828c-30a82daaa8d4 req-2cb411e8-ed5b-4788-8ff5-7e04546e1029 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "9af91392-7795-4927-9070-f820c4efd904-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:20:32 np0005548731 nova_compute[232433]: 2025-12-06 08:20:32.883 232437 DEBUG oslo_concurrency.lockutils [req-8805ee87-695f-47be-828c-30a82daaa8d4 req-2cb411e8-ed5b-4788-8ff5-7e04546e1029 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "9af91392-7795-4927-9070-f820c4efd904-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:20:32 np0005548731 nova_compute[232433]: 2025-12-06 08:20:32.883 232437 DEBUG oslo_concurrency.lockutils [req-8805ee87-695f-47be-828c-30a82daaa8d4 req-2cb411e8-ed5b-4788-8ff5-7e04546e1029 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "9af91392-7795-4927-9070-f820c4efd904-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:20:32 np0005548731 nova_compute[232433]: 2025-12-06 08:20:32.883 232437 DEBUG nova.compute.manager [req-8805ee87-695f-47be-828c-30a82daaa8d4 req-2cb411e8-ed5b-4788-8ff5-7e04546e1029 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 9af91392-7795-4927-9070-f820c4efd904] No waiting events found dispatching network-vif-unplugged-0031a548-e628-44e3-813b-e2b19eff3073 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 03:20:32 np0005548731 nova_compute[232433]: 2025-12-06 08:20:32.884 232437 DEBUG nova.compute.manager [req-8805ee87-695f-47be-828c-30a82daaa8d4 req-2cb411e8-ed5b-4788-8ff5-7e04546e1029 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 9af91392-7795-4927-9070-f820c4efd904] Received event network-vif-unplugged-0031a548-e628-44e3-813b-e2b19eff3073 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Dec  6 03:20:33 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:20:33 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:20:33 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:20:33.279 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:20:33 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:20:33.316 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=101, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:ec:b3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '32:72:e7:89:e0:7d'}, ipsec=False) old=SB_Global(nb_cfg=100) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 03:20:33 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:20:33.317 143965 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec  6 03:20:33 np0005548731 nova_compute[232433]: 2025-12-06 08:20:33.355 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:20:33 np0005548731 nova_compute[232433]: 2025-12-06 08:20:33.430 232437 DEBUG nova.network.neutron [-] [instance: 9af91392-7795-4927-9070-f820c4efd904] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 03:20:33 np0005548731 nova_compute[232433]: 2025-12-06 08:20:33.508 232437 INFO nova.compute.manager [-] [instance: 9af91392-7795-4927-9070-f820c4efd904] Took 1.53 seconds to deallocate network for instance.#033[00m
Dec  6 03:20:33 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:20:33 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:20:33 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:20:33.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:20:33 np0005548731 nova_compute[232433]: 2025-12-06 08:20:33.594 232437 DEBUG oslo_concurrency.lockutils [None req-2164762c-7c0a-4efd-96af-bab91c91562f 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:20:33 np0005548731 nova_compute[232433]: 2025-12-06 08:20:33.595 232437 DEBUG oslo_concurrency.lockutils [None req-2164762c-7c0a-4efd-96af-bab91c91562f 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:20:33 np0005548731 nova_compute[232433]: 2025-12-06 08:20:33.609 232437 DEBUG nova.compute.manager [req-71207eec-3798-485d-9bb8-c60085ec4068 req-003e3808-3fc5-4d91-a6d6-a6bd24ffd3fb 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 9af91392-7795-4927-9070-f820c4efd904] Received event network-vif-deleted-0031a548-e628-44e3-813b-e2b19eff3073 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 03:20:33 np0005548731 nova_compute[232433]: 2025-12-06 08:20:33.677 232437 DEBUG oslo_concurrency.processutils [None req-2164762c-7c0a-4efd-96af-bab91c91562f 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 03:20:34 np0005548731 nova_compute[232433]: 2025-12-06 08:20:34.098 232437 DEBUG nova.network.neutron [req-3d0bdce4-2fda-4a2e-a192-e7e2162be36e req-27436a69-2005-442f-a575-0a78e2d2cec9 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 9af91392-7795-4927-9070-f820c4efd904] Updated VIF entry in instance network info cache for port 0031a548-e628-44e3-813b-e2b19eff3073. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  6 03:20:34 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 03:20:34 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3058139754' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 03:20:34 np0005548731 nova_compute[232433]: 2025-12-06 08:20:34.099 232437 DEBUG nova.network.neutron [req-3d0bdce4-2fda-4a2e-a192-e7e2162be36e req-27436a69-2005-442f-a575-0a78e2d2cec9 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 9af91392-7795-4927-9070-f820c4efd904] Updating instance_info_cache with network_info: [{"id": "0031a548-e628-44e3-813b-e2b19eff3073", "address": "fa:16:3e:d7:1d:fe", "network": {"id": "068c4f21-6907-4167-a212-0b2a87054228", "bridge": "br-int", "label": "tempest-network-smoke--1734606332", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5d23d1d6ffc142eaa9bee0ef93fe60e4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0031a548-e6", "ovs_interfaceid": "0031a548-e628-44e3-813b-e2b19eff3073", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 03:20:34 np0005548731 nova_compute[232433]: 2025-12-06 08:20:34.118 232437 DEBUG oslo_concurrency.processutils [None req-2164762c-7c0a-4efd-96af-bab91c91562f 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.441s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 03:20:34 np0005548731 nova_compute[232433]: 2025-12-06 08:20:34.125 232437 DEBUG nova.compute.provider_tree [None req-2164762c-7c0a-4efd-96af-bab91c91562f 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Inventory has not changed in ProviderTree for provider: 6d00757a-082f-486d-ae84-869a2ba2e6e7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  6 03:20:34 np0005548731 nova_compute[232433]: 2025-12-06 08:20:34.397 232437 DEBUG oslo_concurrency.lockutils [req-3d0bdce4-2fda-4a2e-a192-e7e2162be36e req-27436a69-2005-442f-a575-0a78e2d2cec9 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-9af91392-7795-4927-9070-f820c4efd904" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 03:20:34 np0005548731 nova_compute[232433]: 2025-12-06 08:20:34.432 232437 DEBUG nova.scheduler.client.report [None req-2164762c-7c0a-4efd-96af-bab91c91562f 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Inventory has not changed for provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  6 03:20:34 np0005548731 nova_compute[232433]: 2025-12-06 08:20:34.897 232437 DEBUG oslo_concurrency.lockutils [None req-2164762c-7c0a-4efd-96af-bab91c91562f 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.302s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:20:34 np0005548731 nova_compute[232433]: 2025-12-06 08:20:34.983 232437 INFO nova.scheduler.client.report [None req-2164762c-7c0a-4efd-96af-bab91c91562f 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Deleted allocations for instance 9af91392-7795-4927-9070-f820c4efd904#033[00m
Dec  6 03:20:35 np0005548731 nova_compute[232433]: 2025-12-06 08:20:35.276 232437 DEBUG nova.compute.manager [req-89406d45-8ce9-46e9-b8ee-b12a8b5e2e40 req-f9ab48b1-043c-437a-9084-9dcf010ab0ab 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 9af91392-7795-4927-9070-f820c4efd904] Received event network-vif-plugged-0031a548-e628-44e3-813b-e2b19eff3073 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 03:20:35 np0005548731 nova_compute[232433]: 2025-12-06 08:20:35.277 232437 DEBUG oslo_concurrency.lockutils [req-89406d45-8ce9-46e9-b8ee-b12a8b5e2e40 req-f9ab48b1-043c-437a-9084-9dcf010ab0ab 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "9af91392-7795-4927-9070-f820c4efd904-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:20:35 np0005548731 nova_compute[232433]: 2025-12-06 08:20:35.278 232437 DEBUG oslo_concurrency.lockutils [req-89406d45-8ce9-46e9-b8ee-b12a8b5e2e40 req-f9ab48b1-043c-437a-9084-9dcf010ab0ab 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "9af91392-7795-4927-9070-f820c4efd904-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:20:35 np0005548731 nova_compute[232433]: 2025-12-06 08:20:35.278 232437 DEBUG oslo_concurrency.lockutils [req-89406d45-8ce9-46e9-b8ee-b12a8b5e2e40 req-f9ab48b1-043c-437a-9084-9dcf010ab0ab 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "9af91392-7795-4927-9070-f820c4efd904-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:20:35 np0005548731 nova_compute[232433]: 2025-12-06 08:20:35.279 232437 DEBUG nova.compute.manager [req-89406d45-8ce9-46e9-b8ee-b12a8b5e2e40 req-f9ab48b1-043c-437a-9084-9dcf010ab0ab 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 9af91392-7795-4927-9070-f820c4efd904] No waiting events found dispatching network-vif-plugged-0031a548-e628-44e3-813b-e2b19eff3073 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 03:20:35 np0005548731 nova_compute[232433]: 2025-12-06 08:20:35.279 232437 WARNING nova.compute.manager [req-89406d45-8ce9-46e9-b8ee-b12a8b5e2e40 req-f9ab48b1-043c-437a-9084-9dcf010ab0ab 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 9af91392-7795-4927-9070-f820c4efd904] Received unexpected event network-vif-plugged-0031a548-e628-44e3-813b-e2b19eff3073 for instance with vm_state deleted and task_state None.#033[00m
Dec  6 03:20:35 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:20:35 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:20:35 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:20:35.281 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:20:35 np0005548731 nova_compute[232433]: 2025-12-06 08:20:35.332 232437 DEBUG oslo_concurrency.lockutils [None req-2164762c-7c0a-4efd-96af-bab91c91562f 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Lock "9af91392-7795-4927-9070-f820c4efd904" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.297s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:20:35 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:20:35 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:20:35 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:20:35.526 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:20:35 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e425 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:20:36 np0005548731 nova_compute[232433]: 2025-12-06 08:20:36.459 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:20:37 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:20:37 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:20:37 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:20:37.282 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:20:37 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:20:37 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:20:37 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:20:37.527 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:20:37 np0005548731 nova_compute[232433]: 2025-12-06 08:20:37.843 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:20:38 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:20:38.320 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=9f96b960-b4f2-40bd-ae99-08121f5e8b78, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '101'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 03:20:39 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:20:39 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:20:39 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:20:39.284 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:20:39 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:20:39 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:20:39 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:20:39.529 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:20:40 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e425 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:20:41 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:20:41 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:20:41 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:20:41.286 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:20:41 np0005548731 nova_compute[232433]: 2025-12-06 08:20:41.461 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:20:41 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:20:41 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:20:41 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:20:41.533 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:20:42 np0005548731 nova_compute[232433]: 2025-12-06 08:20:42.454 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:20:42 np0005548731 nova_compute[232433]: 2025-12-06 08:20:42.574 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:20:42 np0005548731 nova_compute[232433]: 2025-12-06 08:20:42.846 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:20:43 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:20:43 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:20:43 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:20:43.288 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:20:43 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:20:43 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:20:43 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:20:43.536 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:20:45 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:20:45 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 03:20:45 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:20:45.290 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 03:20:45 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:20:45 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:20:45 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:20:45.538 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:20:45 np0005548731 podman[342061]: 2025-12-06 08:20:45.916534365 +0000 UTC m=+0.069289663 container health_status 26c2ce9bdb83c6db5ccad7f5b2a7cb4e9354ab0235ad2f942ea42c8feda2142b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec  6 03:20:45 np0005548731 podman[342063]: 2025-12-06 08:20:45.945375609 +0000 UTC m=+0.083323445 container health_status ae4fd08cdec31d6ae2a6b91ffff7deb6ca5a8375cb52ee4b685cc6f25796824e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec  6 03:20:45 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e425 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:20:45 np0005548731 podman[342062]: 2025-12-06 08:20:45.996337184 +0000 UTC m=+0.139992309 container health_status a118c3becf56cfbcae208065771ebf10a65162bb7b96389971293e534823fb78 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  6 03:20:46 np0005548731 nova_compute[232433]: 2025-12-06 08:20:46.275 232437 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765009231.274052, 9af91392-7795-4927-9070-f820c4efd904 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 03:20:46 np0005548731 nova_compute[232433]: 2025-12-06 08:20:46.276 232437 INFO nova.compute.manager [-] [instance: 9af91392-7795-4927-9070-f820c4efd904] VM Stopped (Lifecycle Event)#033[00m
Dec  6 03:20:46 np0005548731 nova_compute[232433]: 2025-12-06 08:20:46.306 232437 DEBUG nova.compute.manager [None req-707127a2-2278-418f-bb47-9b5703366cc6 - - - - - -] [instance: 9af91392-7795-4927-9070-f820c4efd904] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 03:20:46 np0005548731 nova_compute[232433]: 2025-12-06 08:20:46.464 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:20:47 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:20:47 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:20:47 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:20:47.293 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:20:47 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:20:47 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:20:47 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:20:47.541 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:20:47 np0005548731 nova_compute[232433]: 2025-12-06 08:20:47.847 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:20:49 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:20:49 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:20:49 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:20:49.295 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:20:49 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:20:49 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:20:49 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:20:49.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:20:50 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e425 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:20:51 np0005548731 nova_compute[232433]: 2025-12-06 08:20:51.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:20:51 np0005548731 nova_compute[232433]: 2025-12-06 08:20:51.105 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  6 03:20:51 np0005548731 nova_compute[232433]: 2025-12-06 08:20:51.105 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  6 03:20:51 np0005548731 nova_compute[232433]: 2025-12-06 08:20:51.122 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Dec  6 03:20:51 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:20:51 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 03:20:51 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:20:51.297 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 03:20:51 np0005548731 nova_compute[232433]: 2025-12-06 08:20:51.468 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:20:51 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:20:51 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:20:51 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:20:51.546 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:20:52 np0005548731 nova_compute[232433]: 2025-12-06 08:20:52.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:20:52 np0005548731 nova_compute[232433]: 2025-12-06 08:20:52.850 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:20:53 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:20:53 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:20:53 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:20:53.300 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:20:53 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:20:53 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 03:20:53 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:20:53.550 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 03:20:55 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:20:55 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:20:55 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:20:55.303 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:20:55 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:20:55 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:20:55 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:20:55.553 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:20:55 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e425 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:20:56 np0005548731 nova_compute[232433]: 2025-12-06 08:20:56.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:20:56 np0005548731 nova_compute[232433]: 2025-12-06 08:20:56.471 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:20:57 np0005548731 nova_compute[232433]: 2025-12-06 08:20:57.099 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:20:57 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:20:57 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:20:57 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:20:57.305 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:20:57 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:20:57 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:20:57 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:20:57.557 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:20:57 np0005548731 nova_compute[232433]: 2025-12-06 08:20:57.852 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:20:59 np0005548731 nova_compute[232433]: 2025-12-06 08:20:59.105 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:20:59 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:20:59 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:20:59 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:20:59.308 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:20:59 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:20:59 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.002000047s ======
Dec  6 03:20:59 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:20:59.559 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000047s
Dec  6 03:21:00 np0005548731 nova_compute[232433]: 2025-12-06 08:21:00.047 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:21:00 np0005548731 nova_compute[232433]: 2025-12-06 08:21:00.047 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:21:00 np0005548731 nova_compute[232433]: 2025-12-06 08:21:00.047 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:21:00 np0005548731 nova_compute[232433]: 2025-12-06 08:21:00.048 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  6 03:21:00 np0005548731 nova_compute[232433]: 2025-12-06 08:21:00.048 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 03:21:00 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 03:21:00 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2460462847' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 03:21:00 np0005548731 nova_compute[232433]: 2025-12-06 08:21:00.500 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.452s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 03:21:00 np0005548731 nova_compute[232433]: 2025-12-06 08:21:00.765 232437 WARNING nova.virt.libvirt.driver [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  6 03:21:00 np0005548731 nova_compute[232433]: 2025-12-06 08:21:00.768 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4161MB free_disk=20.98813247680664GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, 
"label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  6 03:21:00 np0005548731 nova_compute[232433]: 2025-12-06 08:21:00.768 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:21:00 np0005548731 nova_compute[232433]: 2025-12-06 08:21:00.768 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:21:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:21:00.924 143965 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:21:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:21:00.925 143965 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:21:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:21:00.925 143965 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:21:00 np0005548731 nova_compute[232433]: 2025-12-06 08:21:00.968 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  6 03:21:00 np0005548731 nova_compute[232433]: 2025-12-06 08:21:00.969 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  6 03:21:00 np0005548731 nova_compute[232433]: 2025-12-06 08:21:00.986 232437 DEBUG nova.scheduler.client.report [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Refreshing inventories for resource provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Dec  6 03:21:00 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e425 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:21:01 np0005548731 nova_compute[232433]: 2025-12-06 08:21:01.016 232437 DEBUG nova.scheduler.client.report [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Updating ProviderTree inventory for provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Dec  6 03:21:01 np0005548731 nova_compute[232433]: 2025-12-06 08:21:01.016 232437 DEBUG nova.compute.provider_tree [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Updating inventory in ProviderTree for provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Dec  6 03:21:01 np0005548731 nova_compute[232433]: 2025-12-06 08:21:01.031 232437 DEBUG nova.scheduler.client.report [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Refreshing aggregate associations for resource provider 6d00757a-082f-486d-ae84-869a2ba2e6e7, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Dec  6 03:21:01 np0005548731 nova_compute[232433]: 2025-12-06 08:21:01.071 232437 DEBUG nova.scheduler.client.report [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Refreshing trait associations for resource provider 6d00757a-082f-486d-ae84-869a2ba2e6e7, traits: COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_STORAGE_BUS_USB,COMPUTE_ACCELERATORS,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_TRUSTED_CERTS,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_SSE,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_SSE2,HW_CPU_X86_SSE41,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_STORAGE_BUS_SATA,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_SECURITY_TPM_1_2,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_VOLUME_EXTEND,COMPUTE_NODE,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_SECURITY_TPM_2_0,COMPUTE_VIOMMU_MODEL_AUTO,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_SSE42,COMPUTE_STORAGE_BUS_FDC,COMPUTE_RESCUE_BFV,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_IMAGE_TYPE_ARI _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Dec  6 03:21:01 np0005548731 nova_compute[232433]: 2025-12-06 08:21:01.092 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 03:21:01 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:21:01 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:21:01 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:21:01.310 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:21:01 np0005548731 nova_compute[232433]: 2025-12-06 08:21:01.473 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:21:01 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 03:21:01 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1718808240' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 03:21:01 np0005548731 nova_compute[232433]: 2025-12-06 08:21:01.534 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.443s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 03:21:01 np0005548731 nova_compute[232433]: 2025-12-06 08:21:01.541 232437 DEBUG nova.compute.provider_tree [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Inventory has not changed in ProviderTree for provider: 6d00757a-082f-486d-ae84-869a2ba2e6e7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  6 03:21:01 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:21:01 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:21:01 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:21:01.563 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:21:01 np0005548731 nova_compute[232433]: 2025-12-06 08:21:01.766 232437 DEBUG nova.scheduler.client.report [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Inventory has not changed for provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  6 03:21:01 np0005548731 nova_compute[232433]: 2025-12-06 08:21:01.801 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  6 03:21:01 np0005548731 nova_compute[232433]: 2025-12-06 08:21:01.802 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.034s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:21:02 np0005548731 nova_compute[232433]: 2025-12-06 08:21:02.802 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:21:02 np0005548731 nova_compute[232433]: 2025-12-06 08:21:02.803 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:21:02 np0005548731 nova_compute[232433]: 2025-12-06 08:21:02.803 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:21:02 np0005548731 nova_compute[232433]: 2025-12-06 08:21:02.804 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  6 03:21:02 np0005548731 nova_compute[232433]: 2025-12-06 08:21:02.855 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:21:03 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:21:03 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 03:21:03 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:21:03.312 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 03:21:03 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:21:03 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:21:03 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:21:03.565 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:21:05 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:21:05 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:21:05 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:21:05.315 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:21:05 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:21:05 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:21:05 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:21:05.569 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:21:05 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e425 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:21:06 np0005548731 nova_compute[232433]: 2025-12-06 08:21:06.105 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:21:06 np0005548731 nova_compute[232433]: 2025-12-06 08:21:06.477 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:21:07 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:21:07 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:21:07 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:21:07.316 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:21:07 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:21:07 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:21:07 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:21:07.572 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:21:07 np0005548731 nova_compute[232433]: 2025-12-06 08:21:07.858 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:21:08 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e426 e426: 3 total, 3 up, 3 in
Dec  6 03:21:09 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:21:09 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:21:09 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:21:09.319 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:21:09 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:21:09 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:21:09 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:21:09.574 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:21:10 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e426 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:21:11 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:21:11 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:21:11 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:21:11.321 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:21:11 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:21:11.381 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=102, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:ec:b3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '32:72:e7:89:e0:7d'}, ipsec=False) old=SB_Global(nb_cfg=101) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 03:21:11 np0005548731 nova_compute[232433]: 2025-12-06 08:21:11.382 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:21:11 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:21:11.383 143965 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec  6 03:21:11 np0005548731 nova_compute[232433]: 2025-12-06 08:21:11.478 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:21:11 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:21:11 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:21:11 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:21:11.576 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:21:12 np0005548731 nova_compute[232433]: 2025-12-06 08:21:12.861 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:21:13 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:21:13 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:21:13 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:21:13.324 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:21:13 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:21:13 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 03:21:13 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:21:13.579 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 03:21:15 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:21:15 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:21:15 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:21:15.326 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:21:15 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:21:15.386 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=9f96b960-b4f2-40bd-ae99-08121f5e8b78, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '102'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 03:21:15 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:21:15 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:21:15 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:21:15.581 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:21:15 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e426 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:21:16 np0005548731 nova_compute[232433]: 2025-12-06 08:21:16.480 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:21:16 np0005548731 podman[342236]: 2025-12-06 08:21:16.912219301 +0000 UTC m=+0.059142575 container health_status ae4fd08cdec31d6ae2a6b91ffff7deb6ca5a8375cb52ee4b685cc6f25796824e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true)
Dec  6 03:21:16 np0005548731 podman[342234]: 2025-12-06 08:21:16.929860072 +0000 UTC m=+0.071979719 container health_status 26c2ce9bdb83c6db5ccad7f5b2a7cb4e9354ab0235ad2f942ea42c8feda2142b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Dec  6 03:21:16 np0005548731 podman[342235]: 2025-12-06 08:21:16.990719807 +0000 UTC m=+0.129786199 container health_status a118c3becf56cfbcae208065771ebf10a65162bb7b96389971293e534823fb78 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, org.label-schema.build-date=20251125, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller)
Dec  6 03:21:17 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:21:17 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:21:17 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:21:17.328 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:21:17 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:21:17 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:21:17 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:21:17.584 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:21:17 np0005548731 nova_compute[232433]: 2025-12-06 08:21:17.863 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:21:19 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:21:19 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:21:19 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:21:19.329 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:21:19 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:21:19 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:21:19 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:21:19.586 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:21:21 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e426 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:21:21 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:21:21 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:21:21 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:21:21.331 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:21:21 np0005548731 nova_compute[232433]: 2025-12-06 08:21:21.482 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:21:21 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:21:21 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:21:21 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:21:21.588 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:21:22 np0005548731 nova_compute[232433]: 2025-12-06 08:21:22.865 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:21:23 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:21:23 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:21:23 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:21:23.332 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:21:23 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:21:23 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:21:23 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:21:23.592 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:21:25 np0005548731 nova_compute[232433]: 2025-12-06 08:21:25.252 232437 DEBUG oslo_concurrency.lockutils [None req-c0a7484b-a42e-4758-b0e6-67f2f2cbe7b2 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Acquiring lock "a7305638-446a-410d-8da1-2a2e5ddfa894" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:21:25 np0005548731 nova_compute[232433]: 2025-12-06 08:21:25.253 232437 DEBUG oslo_concurrency.lockutils [None req-c0a7484b-a42e-4758-b0e6-67f2f2cbe7b2 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Lock "a7305638-446a-410d-8da1-2a2e5ddfa894" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:21:25 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:21:25 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:21:25 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:21:25.334 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:21:25 np0005548731 nova_compute[232433]: 2025-12-06 08:21:25.454 232437 DEBUG nova.compute.manager [None req-c0a7484b-a42e-4758-b0e6-67f2f2cbe7b2 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] [instance: a7305638-446a-410d-8da1-2a2e5ddfa894] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Dec  6 03:21:25 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:21:25 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:21:25 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:21:25.595 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:21:25 np0005548731 nova_compute[232433]: 2025-12-06 08:21:25.689 232437 DEBUG oslo_concurrency.lockutils [None req-c0a7484b-a42e-4758-b0e6-67f2f2cbe7b2 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:21:25 np0005548731 nova_compute[232433]: 2025-12-06 08:21:25.690 232437 DEBUG oslo_concurrency.lockutils [None req-c0a7484b-a42e-4758-b0e6-67f2f2cbe7b2 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:21:25 np0005548731 nova_compute[232433]: 2025-12-06 08:21:25.700 232437 DEBUG nova.virt.hardware [None req-c0a7484b-a42e-4758-b0e6-67f2f2cbe7b2 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Dec  6 03:21:25 np0005548731 nova_compute[232433]: 2025-12-06 08:21:25.700 232437 INFO nova.compute.claims [None req-c0a7484b-a42e-4758-b0e6-67f2f2cbe7b2 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] [instance: a7305638-446a-410d-8da1-2a2e5ddfa894] Claim successful on node compute-2.ctlplane.example.com#033[00m
Dec  6 03:21:25 np0005548731 nova_compute[232433]: 2025-12-06 08:21:25.904 232437 DEBUG oslo_concurrency.processutils [None req-c0a7484b-a42e-4758-b0e6-67f2f2cbe7b2 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 03:21:26 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e426 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:21:26 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 03:21:26 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/18612945' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 03:21:26 np0005548731 nova_compute[232433]: 2025-12-06 08:21:26.376 232437 DEBUG oslo_concurrency.processutils [None req-c0a7484b-a42e-4758-b0e6-67f2f2cbe7b2 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.472s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 03:21:26 np0005548731 nova_compute[232433]: 2025-12-06 08:21:26.386 232437 DEBUG nova.compute.provider_tree [None req-c0a7484b-a42e-4758-b0e6-67f2f2cbe7b2 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Inventory has not changed in ProviderTree for provider: 6d00757a-082f-486d-ae84-869a2ba2e6e7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  6 03:21:26 np0005548731 nova_compute[232433]: 2025-12-06 08:21:26.476 232437 DEBUG nova.scheduler.client.report [None req-c0a7484b-a42e-4758-b0e6-67f2f2cbe7b2 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Inventory has not changed for provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  6 03:21:26 np0005548731 nova_compute[232433]: 2025-12-06 08:21:26.484 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:21:26 np0005548731 nova_compute[232433]: 2025-12-06 08:21:26.808 232437 DEBUG oslo_concurrency.lockutils [None req-c0a7484b-a42e-4758-b0e6-67f2f2cbe7b2 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.119s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:21:26 np0005548731 nova_compute[232433]: 2025-12-06 08:21:26.810 232437 DEBUG nova.compute.manager [None req-c0a7484b-a42e-4758-b0e6-67f2f2cbe7b2 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] [instance: a7305638-446a-410d-8da1-2a2e5ddfa894] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Dec  6 03:21:27 np0005548731 nova_compute[232433]: 2025-12-06 08:21:27.056 232437 DEBUG nova.compute.manager [None req-c0a7484b-a42e-4758-b0e6-67f2f2cbe7b2 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] [instance: a7305638-446a-410d-8da1-2a2e5ddfa894] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Dec  6 03:21:27 np0005548731 nova_compute[232433]: 2025-12-06 08:21:27.056 232437 DEBUG nova.network.neutron [None req-c0a7484b-a42e-4758-b0e6-67f2f2cbe7b2 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] [instance: a7305638-446a-410d-8da1-2a2e5ddfa894] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Dec  6 03:21:27 np0005548731 nova_compute[232433]: 2025-12-06 08:21:27.121 232437 INFO nova.virt.libvirt.driver [None req-c0a7484b-a42e-4758-b0e6-67f2f2cbe7b2 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] [instance: a7305638-446a-410d-8da1-2a2e5ddfa894] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Dec  6 03:21:27 np0005548731 nova_compute[232433]: 2025-12-06 08:21:27.166 232437 DEBUG nova.compute.manager [None req-c0a7484b-a42e-4758-b0e6-67f2f2cbe7b2 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] [instance: a7305638-446a-410d-8da1-2a2e5ddfa894] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Dec  6 03:21:27 np0005548731 nova_compute[232433]: 2025-12-06 08:21:27.273 232437 INFO nova.virt.block_device [None req-c0a7484b-a42e-4758-b0e6-67f2f2cbe7b2 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] [instance: a7305638-446a-410d-8da1-2a2e5ddfa894] Booting with volume 09c93a0a-7903-4f14-b7b3-cc5eeb590680 at /dev/vda#033[00m
Dec  6 03:21:27 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:21:27 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:21:27 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:21:27.335 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:21:27 np0005548731 nova_compute[232433]: 2025-12-06 08:21:27.426 232437 DEBUG os_brick.utils [None req-c0a7484b-a42e-4758-b0e6-67f2f2cbe7b2 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.102', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-2.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Dec  6 03:21:27 np0005548731 nova_compute[232433]: 2025-12-06 08:21:27.428 237736 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 03:21:27 np0005548731 nova_compute[232433]: 2025-12-06 08:21:27.448 237736 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.021s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 03:21:27 np0005548731 nova_compute[232433]: 2025-12-06 08:21:27.449 237736 DEBUG oslo.privsep.daemon [-] privsep: reply[c6d0f33c-e64e-47cb-a5d6-8b8760e8c7cd]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:21:27 np0005548731 nova_compute[232433]: 2025-12-06 08:21:27.450 237736 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 03:21:27 np0005548731 nova_compute[232433]: 2025-12-06 08:21:27.468 237736 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.017s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 03:21:27 np0005548731 nova_compute[232433]: 2025-12-06 08:21:27.468 237736 DEBUG oslo.privsep.daemon [-] privsep: reply[6c1618d5-2064-4593-9728-946fa4a34107]: (4, ('InitiatorName=iqn.1994-05.com.redhat:63778d5959f0', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:21:27 np0005548731 nova_compute[232433]: 2025-12-06 08:21:27.470 237736 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 03:21:27 np0005548731 nova_compute[232433]: 2025-12-06 08:21:27.483 237736 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.013s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 03:21:27 np0005548731 nova_compute[232433]: 2025-12-06 08:21:27.484 237736 DEBUG oslo.privsep.daemon [-] privsep: reply[d8c4fe50-2797-4b04-ab43-566e76af8748]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:21:27 np0005548731 nova_compute[232433]: 2025-12-06 08:21:27.485 237736 DEBUG oslo.privsep.daemon [-] privsep: reply[696388a3-3d18-4356-912a-1eabde68bd3f]: (4, 'ec2492e2-e0d1-43d3-b74f-744a8272a0e6') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:21:27 np0005548731 nova_compute[232433]: 2025-12-06 08:21:27.486 232437 DEBUG oslo_concurrency.processutils [None req-c0a7484b-a42e-4758-b0e6-67f2f2cbe7b2 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 03:21:27 np0005548731 nova_compute[232433]: 2025-12-06 08:21:27.532 232437 DEBUG nova.policy [None req-c0a7484b-a42e-4758-b0e6-67f2f2cbe7b2 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '8e8feb4540af4e2caa45a88a9202dbe2', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '4b2dc4b8729f446a9c7ac69ca446f71d', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Dec  6 03:21:27 np0005548731 nova_compute[232433]: 2025-12-06 08:21:27.535 232437 DEBUG oslo_concurrency.processutils [None req-c0a7484b-a42e-4758-b0e6-67f2f2cbe7b2 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] CMD "nvme version" returned: 0 in 0.050s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 03:21:27 np0005548731 nova_compute[232433]: 2025-12-06 08:21:27.537 232437 DEBUG os_brick.initiator.connectors.lightos [None req-c0a7484b-a42e-4758-b0e6-67f2f2cbe7b2 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Dec  6 03:21:27 np0005548731 nova_compute[232433]: 2025-12-06 08:21:27.537 232437 DEBUG os_brick.initiator.connectors.lightos [None req-c0a7484b-a42e-4758-b0e6-67f2f2cbe7b2 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Dec  6 03:21:27 np0005548731 nova_compute[232433]: 2025-12-06 08:21:27.537 232437 DEBUG os_brick.initiator.connectors.lightos [None req-c0a7484b-a42e-4758-b0e6-67f2f2cbe7b2 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:bf3e0a14-a5f8-4123-aa26-e7cad37b879a dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Dec  6 03:21:27 np0005548731 nova_compute[232433]: 2025-12-06 08:21:27.538 232437 DEBUG os_brick.utils [None req-c0a7484b-a42e-4758-b0e6-67f2f2cbe7b2 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] <== get_connector_properties: return (110ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.102', 'host': 'compute-2.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:63778d5959f0', 'do_local_attach': False, 'nvme_hostid': 'bf3e0a14-a5f8-4123-aa26-e7cad37b879a', 'system uuid': 'ec2492e2-e0d1-43d3-b74f-744a8272a0e6', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:bf3e0a14-a5f8-4123-aa26-e7cad37b879a', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Dec  6 03:21:27 np0005548731 nova_compute[232433]: 2025-12-06 08:21:27.538 232437 DEBUG nova.virt.block_device [None req-c0a7484b-a42e-4758-b0e6-67f2f2cbe7b2 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] [instance: a7305638-446a-410d-8da1-2a2e5ddfa894] Updating existing volume attachment record: 10a5a500-79ad-4389-83b0-797a05fcdb28 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Dec  6 03:21:27 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:21:27 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 03:21:27 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:21:27.597 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 03:21:27 np0005548731 nova_compute[232433]: 2025-12-06 08:21:27.868 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:21:29 np0005548731 nova_compute[232433]: 2025-12-06 08:21:29.073 232437 DEBUG nova.compute.manager [None req-c0a7484b-a42e-4758-b0e6-67f2f2cbe7b2 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] [instance: a7305638-446a-410d-8da1-2a2e5ddfa894] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Dec  6 03:21:29 np0005548731 nova_compute[232433]: 2025-12-06 08:21:29.075 232437 DEBUG nova.virt.libvirt.driver [None req-c0a7484b-a42e-4758-b0e6-67f2f2cbe7b2 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] [instance: a7305638-446a-410d-8da1-2a2e5ddfa894] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Dec  6 03:21:29 np0005548731 nova_compute[232433]: 2025-12-06 08:21:29.076 232437 INFO nova.virt.libvirt.driver [None req-c0a7484b-a42e-4758-b0e6-67f2f2cbe7b2 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] [instance: a7305638-446a-410d-8da1-2a2e5ddfa894] Creating image(s)#033[00m
Dec  6 03:21:29 np0005548731 nova_compute[232433]: 2025-12-06 08:21:29.076 232437 DEBUG nova.virt.libvirt.driver [None req-c0a7484b-a42e-4758-b0e6-67f2f2cbe7b2 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] [instance: a7305638-446a-410d-8da1-2a2e5ddfa894] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859#033[00m
Dec  6 03:21:29 np0005548731 nova_compute[232433]: 2025-12-06 08:21:29.077 232437 DEBUG nova.virt.libvirt.driver [None req-c0a7484b-a42e-4758-b0e6-67f2f2cbe7b2 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] [instance: a7305638-446a-410d-8da1-2a2e5ddfa894] Ensure instance console log exists: /var/lib/nova/instances/a7305638-446a-410d-8da1-2a2e5ddfa894/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec  6 03:21:29 np0005548731 nova_compute[232433]: 2025-12-06 08:21:29.077 232437 DEBUG oslo_concurrency.lockutils [None req-c0a7484b-a42e-4758-b0e6-67f2f2cbe7b2 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:21:29 np0005548731 nova_compute[232433]: 2025-12-06 08:21:29.078 232437 DEBUG oslo_concurrency.lockutils [None req-c0a7484b-a42e-4758-b0e6-67f2f2cbe7b2 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:21:29 np0005548731 nova_compute[232433]: 2025-12-06 08:21:29.078 232437 DEBUG oslo_concurrency.lockutils [None req-c0a7484b-a42e-4758-b0e6-67f2f2cbe7b2 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:21:29 np0005548731 nova_compute[232433]: 2025-12-06 08:21:29.215 232437 DEBUG nova.network.neutron [None req-c0a7484b-a42e-4758-b0e6-67f2f2cbe7b2 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] [instance: a7305638-446a-410d-8da1-2a2e5ddfa894] Successfully created port: 07c2ee2d-4ad8-444c-9e9c-e2bd99a0bb9f _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Dec  6 03:21:29 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:21:29 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:21:29 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:21:29.338 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:21:29 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:21:29 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:21:29 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:21:29.601 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:21:30 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 03:21:30 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 03:21:30 np0005548731 nova_compute[232433]: 2025-12-06 08:21:30.915 232437 DEBUG nova.network.neutron [None req-c0a7484b-a42e-4758-b0e6-67f2f2cbe7b2 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] [instance: a7305638-446a-410d-8da1-2a2e5ddfa894] Successfully updated port: 07c2ee2d-4ad8-444c-9e9c-e2bd99a0bb9f _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Dec  6 03:21:30 np0005548731 nova_compute[232433]: 2025-12-06 08:21:30.938 232437 DEBUG oslo_concurrency.lockutils [None req-c0a7484b-a42e-4758-b0e6-67f2f2cbe7b2 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Acquiring lock "refresh_cache-a7305638-446a-410d-8da1-2a2e5ddfa894" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 03:21:30 np0005548731 nova_compute[232433]: 2025-12-06 08:21:30.938 232437 DEBUG oslo_concurrency.lockutils [None req-c0a7484b-a42e-4758-b0e6-67f2f2cbe7b2 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Acquired lock "refresh_cache-a7305638-446a-410d-8da1-2a2e5ddfa894" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 03:21:30 np0005548731 nova_compute[232433]: 2025-12-06 08:21:30.939 232437 DEBUG nova.network.neutron [None req-c0a7484b-a42e-4758-b0e6-67f2f2cbe7b2 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] [instance: a7305638-446a-410d-8da1-2a2e5ddfa894] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec  6 03:21:31 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e426 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:21:31 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:21:31 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 03:21:31 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:21:31.340 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 03:21:31 np0005548731 nova_compute[232433]: 2025-12-06 08:21:31.401 232437 DEBUG nova.network.neutron [None req-c0a7484b-a42e-4758-b0e6-67f2f2cbe7b2 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] [instance: a7305638-446a-410d-8da1-2a2e5ddfa894] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec  6 03:21:31 np0005548731 nova_compute[232433]: 2025-12-06 08:21:31.486 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:21:31 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:21:31 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:21:31 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:21:31.603 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:21:31 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 03:21:31 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 03:21:31 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  6 03:21:31 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 03:21:32 np0005548731 nova_compute[232433]: 2025-12-06 08:21:32.072 232437 DEBUG nova.compute.manager [req-882fb4f1-05ae-462d-8f98-70ff0faa31c5 req-acee23e7-c5f6-4b03-900b-14b178556141 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: a7305638-446a-410d-8da1-2a2e5ddfa894] Received event network-changed-07c2ee2d-4ad8-444c-9e9c-e2bd99a0bb9f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 03:21:32 np0005548731 nova_compute[232433]: 2025-12-06 08:21:32.073 232437 DEBUG nova.compute.manager [req-882fb4f1-05ae-462d-8f98-70ff0faa31c5 req-acee23e7-c5f6-4b03-900b-14b178556141 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: a7305638-446a-410d-8da1-2a2e5ddfa894] Refreshing instance network info cache due to event network-changed-07c2ee2d-4ad8-444c-9e9c-e2bd99a0bb9f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  6 03:21:32 np0005548731 nova_compute[232433]: 2025-12-06 08:21:32.074 232437 DEBUG oslo_concurrency.lockutils [req-882fb4f1-05ae-462d-8f98-70ff0faa31c5 req-acee23e7-c5f6-4b03-900b-14b178556141 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-a7305638-446a-410d-8da1-2a2e5ddfa894" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 03:21:32 np0005548731 nova_compute[232433]: 2025-12-06 08:21:32.493 232437 DEBUG nova.network.neutron [None req-c0a7484b-a42e-4758-b0e6-67f2f2cbe7b2 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] [instance: a7305638-446a-410d-8da1-2a2e5ddfa894] Updating instance_info_cache with network_info: [{"id": "07c2ee2d-4ad8-444c-9e9c-e2bd99a0bb9f", "address": "fa:16:3e:4e:ab:c2", "network": {"id": "b4ef1374-9c77-45a7-8776-50aa60c7d84a", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1664561964-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4b2dc4b8729f446a9c7ac69ca446f71d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap07c2ee2d-4a", "ovs_interfaceid": "07c2ee2d-4ad8-444c-9e9c-e2bd99a0bb9f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 03:21:32 np0005548731 nova_compute[232433]: 2025-12-06 08:21:32.520 232437 DEBUG oslo_concurrency.lockutils [None req-c0a7484b-a42e-4758-b0e6-67f2f2cbe7b2 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Releasing lock "refresh_cache-a7305638-446a-410d-8da1-2a2e5ddfa894" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 03:21:32 np0005548731 nova_compute[232433]: 2025-12-06 08:21:32.521 232437 DEBUG nova.compute.manager [None req-c0a7484b-a42e-4758-b0e6-67f2f2cbe7b2 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] [instance: a7305638-446a-410d-8da1-2a2e5ddfa894] Instance network_info: |[{"id": "07c2ee2d-4ad8-444c-9e9c-e2bd99a0bb9f", "address": "fa:16:3e:4e:ab:c2", "network": {"id": "b4ef1374-9c77-45a7-8776-50aa60c7d84a", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1664561964-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4b2dc4b8729f446a9c7ac69ca446f71d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap07c2ee2d-4a", "ovs_interfaceid": "07c2ee2d-4ad8-444c-9e9c-e2bd99a0bb9f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Dec  6 03:21:32 np0005548731 nova_compute[232433]: 2025-12-06 08:21:32.522 232437 DEBUG oslo_concurrency.lockutils [req-882fb4f1-05ae-462d-8f98-70ff0faa31c5 req-acee23e7-c5f6-4b03-900b-14b178556141 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-a7305638-446a-410d-8da1-2a2e5ddfa894" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 03:21:32 np0005548731 nova_compute[232433]: 2025-12-06 08:21:32.523 232437 DEBUG nova.network.neutron [req-882fb4f1-05ae-462d-8f98-70ff0faa31c5 req-acee23e7-c5f6-4b03-900b-14b178556141 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: a7305638-446a-410d-8da1-2a2e5ddfa894] Refreshing network info cache for port 07c2ee2d-4ad8-444c-9e9c-e2bd99a0bb9f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  6 03:21:32 np0005548731 nova_compute[232433]: 2025-12-06 08:21:32.528 232437 DEBUG nova.virt.libvirt.driver [None req-c0a7484b-a42e-4758-b0e6-67f2f2cbe7b2 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] [instance: a7305638-446a-410d-8da1-2a2e5ddfa894] Start _get_guest_xml network_info=[{"id": "07c2ee2d-4ad8-444c-9e9c-e2bd99a0bb9f", "address": "fa:16:3e:4e:ab:c2", "network": {"id": "b4ef1374-9c77-45a7-8776-50aa60c7d84a", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1664561964-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4b2dc4b8729f446a9c7ac69ca446f71d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap07c2ee2d-4a", "ovs_interfaceid": "07c2ee2d-4ad8-444c-9e9c-e2bd99a0bb9f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': 
'/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'guest_format': None, 'device_type': 'disk', 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-09c93a0a-7903-4f14-b7b3-cc5eeb590680', 'hosts': ['192.168.122.100', '192.168.122.102', '192.168.122.101'], 'ports': ['6789', '6789', '6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '09c93a0a-7903-4f14-b7b3-cc5eeb590680', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': 'a7305638-446a-410d-8da1-2a2e5ddfa894', 'attached_at': '', 'detached_at': '', 'volume_id': '09c93a0a-7903-4f14-b7b3-cc5eeb590680', 'serial': '09c93a0a-7903-4f14-b7b3-cc5eeb590680'}, 'disk_bus': 'virtio', 'boot_index': 0, 'delete_on_termination': False, 'mount_device': '/dev/vda', 'attachment_id': '10a5a500-79ad-4389-83b0-797a05fcdb28', 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec  6 03:21:32 np0005548731 nova_compute[232433]: 2025-12-06 08:21:32.535 232437 WARNING nova.virt.libvirt.driver [None req-c0a7484b-a42e-4758-b0e6-67f2f2cbe7b2 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  6 03:21:32 np0005548731 nova_compute[232433]: 2025-12-06 08:21:32.545 232437 DEBUG nova.virt.libvirt.host [None req-c0a7484b-a42e-4758-b0e6-67f2f2cbe7b2 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec  6 03:21:32 np0005548731 nova_compute[232433]: 2025-12-06 08:21:32.547 232437 DEBUG nova.virt.libvirt.host [None req-c0a7484b-a42e-4758-b0e6-67f2f2cbe7b2 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec  6 03:21:32 np0005548731 nova_compute[232433]: 2025-12-06 08:21:32.557 232437 DEBUG nova.virt.libvirt.host [None req-c0a7484b-a42e-4758-b0e6-67f2f2cbe7b2 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec  6 03:21:32 np0005548731 nova_compute[232433]: 2025-12-06 08:21:32.558 232437 DEBUG nova.virt.libvirt.host [None req-c0a7484b-a42e-4758-b0e6-67f2f2cbe7b2 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec  6 03:21:32 np0005548731 nova_compute[232433]: 2025-12-06 08:21:32.560 232437 DEBUG nova.virt.libvirt.driver [None req-c0a7484b-a42e-4758-b0e6-67f2f2cbe7b2 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec  6 03:21:32 np0005548731 nova_compute[232433]: 2025-12-06 08:21:32.561 232437 DEBUG nova.virt.hardware [None req-c0a7484b-a42e-4758-b0e6-67f2f2cbe7b2 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-06T06:56:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='25848a18-11d9-4f11-80b5-5d005675c76d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec  6 03:21:32 np0005548731 nova_compute[232433]: 2025-12-06 08:21:32.561 232437 DEBUG nova.virt.hardware [None req-c0a7484b-a42e-4758-b0e6-67f2f2cbe7b2 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec  6 03:21:32 np0005548731 nova_compute[232433]: 2025-12-06 08:21:32.561 232437 DEBUG nova.virt.hardware [None req-c0a7484b-a42e-4758-b0e6-67f2f2cbe7b2 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec  6 03:21:32 np0005548731 nova_compute[232433]: 2025-12-06 08:21:32.562 232437 DEBUG nova.virt.hardware [None req-c0a7484b-a42e-4758-b0e6-67f2f2cbe7b2 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec  6 03:21:32 np0005548731 nova_compute[232433]: 2025-12-06 08:21:32.562 232437 DEBUG nova.virt.hardware [None req-c0a7484b-a42e-4758-b0e6-67f2f2cbe7b2 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec  6 03:21:32 np0005548731 nova_compute[232433]: 2025-12-06 08:21:32.562 232437 DEBUG nova.virt.hardware [None req-c0a7484b-a42e-4758-b0e6-67f2f2cbe7b2 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec  6 03:21:32 np0005548731 nova_compute[232433]: 2025-12-06 08:21:32.562 232437 DEBUG nova.virt.hardware [None req-c0a7484b-a42e-4758-b0e6-67f2f2cbe7b2 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec  6 03:21:32 np0005548731 nova_compute[232433]: 2025-12-06 08:21:32.562 232437 DEBUG nova.virt.hardware [None req-c0a7484b-a42e-4758-b0e6-67f2f2cbe7b2 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec  6 03:21:32 np0005548731 nova_compute[232433]: 2025-12-06 08:21:32.563 232437 DEBUG nova.virt.hardware [None req-c0a7484b-a42e-4758-b0e6-67f2f2cbe7b2 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec  6 03:21:32 np0005548731 nova_compute[232433]: 2025-12-06 08:21:32.563 232437 DEBUG nova.virt.hardware [None req-c0a7484b-a42e-4758-b0e6-67f2f2cbe7b2 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec  6 03:21:32 np0005548731 nova_compute[232433]: 2025-12-06 08:21:32.563 232437 DEBUG nova.virt.hardware [None req-c0a7484b-a42e-4758-b0e6-67f2f2cbe7b2 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec  6 03:21:32 np0005548731 nova_compute[232433]: 2025-12-06 08:21:32.603 232437 DEBUG nova.storage.rbd_utils [None req-c0a7484b-a42e-4758-b0e6-67f2f2cbe7b2 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] rbd image a7305638-446a-410d-8da1-2a2e5ddfa894_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 03:21:32 np0005548731 nova_compute[232433]: 2025-12-06 08:21:32.608 232437 DEBUG oslo_concurrency.processutils [None req-c0a7484b-a42e-4758-b0e6-67f2f2cbe7b2 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 03:21:32 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  6 03:21:32 np0005548731 nova_compute[232433]: 2025-12-06 08:21:32.869 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:21:33 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  6 03:21:33 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3905464386' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  6 03:21:33 np0005548731 nova_compute[232433]: 2025-12-06 08:21:33.033 232437 DEBUG oslo_concurrency.processutils [None req-c0a7484b-a42e-4758-b0e6-67f2f2cbe7b2 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.425s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 03:21:33 np0005548731 nova_compute[232433]: 2025-12-06 08:21:33.060 232437 DEBUG nova.virt.libvirt.vif [None req-c0a7484b-a42e-4758-b0e6-67f2f2cbe7b2 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T08:21:22Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-server-1240260262',display_name='tempest-TestVolumeBootPattern-server-1240260262',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-testvolumebootpattern-server-1240260262',id=216,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOvUl3Ab4ESWezLZ9mehuTavMygXDhT0chVOH5OGNfzBJ6GphwodjSkpQcbaa1ADoOOfJ6+3BcKIVxorR3UxI6tyiW7Q3SFHkhHBjCjD54foFQ6i6sfCU/p7OcBbQ12cuw==',key_name='tempest-TestVolumeBootPattern-2075529576',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4b2dc4b8729f446a9c7ac69ca446f71d',ramdisk_id='',reservation_id='r-s18kw14p',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestVolumeBootPattern-97496240',owner_user_name='tempest-TestVolumeBootPattern-97496240-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T08:21:27Z,user_data=None,user_id='8e8feb4540af4e2caa45a88a9202dbe2',uuid=a7305638-446a-410d-8da1-2a2e5ddfa894,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "07c2ee2d-4ad8-444c-9e9c-e2bd99a0bb9f", "address": "fa:16:3e:4e:ab:c2", "network": {"id": "b4ef1374-9c77-45a7-8776-50aa60c7d84a", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1664561964-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": 
"4b2dc4b8729f446a9c7ac69ca446f71d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap07c2ee2d-4a", "ovs_interfaceid": "07c2ee2d-4ad8-444c-9e9c-e2bd99a0bb9f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec  6 03:21:33 np0005548731 nova_compute[232433]: 2025-12-06 08:21:33.060 232437 DEBUG nova.network.os_vif_util [None req-c0a7484b-a42e-4758-b0e6-67f2f2cbe7b2 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Converting VIF {"id": "07c2ee2d-4ad8-444c-9e9c-e2bd99a0bb9f", "address": "fa:16:3e:4e:ab:c2", "network": {"id": "b4ef1374-9c77-45a7-8776-50aa60c7d84a", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1664561964-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4b2dc4b8729f446a9c7ac69ca446f71d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap07c2ee2d-4a", "ovs_interfaceid": "07c2ee2d-4ad8-444c-9e9c-e2bd99a0bb9f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 03:21:33 np0005548731 nova_compute[232433]: 2025-12-06 08:21:33.061 232437 DEBUG nova.network.os_vif_util [None req-c0a7484b-a42e-4758-b0e6-67f2f2cbe7b2 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:4e:ab:c2,bridge_name='br-int',has_traffic_filtering=True,id=07c2ee2d-4ad8-444c-9e9c-e2bd99a0bb9f,network=Network(b4ef1374-9c77-45a7-8776-50aa60c7d84a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap07c2ee2d-4a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 03:21:33 np0005548731 nova_compute[232433]: 2025-12-06 08:21:33.062 232437 DEBUG nova.objects.instance [None req-c0a7484b-a42e-4758-b0e6-67f2f2cbe7b2 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Lazy-loading 'pci_devices' on Instance uuid a7305638-446a-410d-8da1-2a2e5ddfa894 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 03:21:33 np0005548731 nova_compute[232433]: 2025-12-06 08:21:33.084 232437 DEBUG nova.virt.libvirt.driver [None req-c0a7484b-a42e-4758-b0e6-67f2f2cbe7b2 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] [instance: a7305638-446a-410d-8da1-2a2e5ddfa894] End _get_guest_xml xml=<domain type="kvm">
Dec  6 03:21:33 np0005548731 nova_compute[232433]:  <uuid>a7305638-446a-410d-8da1-2a2e5ddfa894</uuid>
Dec  6 03:21:33 np0005548731 nova_compute[232433]:  <name>instance-000000d8</name>
Dec  6 03:21:33 np0005548731 nova_compute[232433]:  <memory>131072</memory>
Dec  6 03:21:33 np0005548731 nova_compute[232433]:  <vcpu>1</vcpu>
Dec  6 03:21:33 np0005548731 nova_compute[232433]:  <metadata>
Dec  6 03:21:33 np0005548731 nova_compute[232433]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec  6 03:21:33 np0005548731 nova_compute[232433]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec  6 03:21:33 np0005548731 nova_compute[232433]:      <nova:name>tempest-TestVolumeBootPattern-server-1240260262</nova:name>
Dec  6 03:21:33 np0005548731 nova_compute[232433]:      <nova:creationTime>2025-12-06 08:21:32</nova:creationTime>
Dec  6 03:21:33 np0005548731 nova_compute[232433]:      <nova:flavor name="m1.nano">
Dec  6 03:21:33 np0005548731 nova_compute[232433]:        <nova:memory>128</nova:memory>
Dec  6 03:21:33 np0005548731 nova_compute[232433]:        <nova:disk>1</nova:disk>
Dec  6 03:21:33 np0005548731 nova_compute[232433]:        <nova:swap>0</nova:swap>
Dec  6 03:21:33 np0005548731 nova_compute[232433]:        <nova:ephemeral>0</nova:ephemeral>
Dec  6 03:21:33 np0005548731 nova_compute[232433]:        <nova:vcpus>1</nova:vcpus>
Dec  6 03:21:33 np0005548731 nova_compute[232433]:      </nova:flavor>
Dec  6 03:21:33 np0005548731 nova_compute[232433]:      <nova:owner>
Dec  6 03:21:33 np0005548731 nova_compute[232433]:        <nova:user uuid="8e8feb4540af4e2caa45a88a9202dbe2">tempest-TestVolumeBootPattern-97496240-project-member</nova:user>
Dec  6 03:21:33 np0005548731 nova_compute[232433]:        <nova:project uuid="4b2dc4b8729f446a9c7ac69ca446f71d">tempest-TestVolumeBootPattern-97496240</nova:project>
Dec  6 03:21:33 np0005548731 nova_compute[232433]:      </nova:owner>
Dec  6 03:21:33 np0005548731 nova_compute[232433]:      <nova:ports>
Dec  6 03:21:33 np0005548731 nova_compute[232433]:        <nova:port uuid="07c2ee2d-4ad8-444c-9e9c-e2bd99a0bb9f">
Dec  6 03:21:33 np0005548731 nova_compute[232433]:          <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Dec  6 03:21:33 np0005548731 nova_compute[232433]:        </nova:port>
Dec  6 03:21:33 np0005548731 nova_compute[232433]:      </nova:ports>
Dec  6 03:21:33 np0005548731 nova_compute[232433]:    </nova:instance>
Dec  6 03:21:33 np0005548731 nova_compute[232433]:  </metadata>
Dec  6 03:21:33 np0005548731 nova_compute[232433]:  <sysinfo type="smbios">
Dec  6 03:21:33 np0005548731 nova_compute[232433]:    <system>
Dec  6 03:21:33 np0005548731 nova_compute[232433]:      <entry name="manufacturer">RDO</entry>
Dec  6 03:21:33 np0005548731 nova_compute[232433]:      <entry name="product">OpenStack Compute</entry>
Dec  6 03:21:33 np0005548731 nova_compute[232433]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec  6 03:21:33 np0005548731 nova_compute[232433]:      <entry name="serial">a7305638-446a-410d-8da1-2a2e5ddfa894</entry>
Dec  6 03:21:33 np0005548731 nova_compute[232433]:      <entry name="uuid">a7305638-446a-410d-8da1-2a2e5ddfa894</entry>
Dec  6 03:21:33 np0005548731 nova_compute[232433]:      <entry name="family">Virtual Machine</entry>
Dec  6 03:21:33 np0005548731 nova_compute[232433]:    </system>
Dec  6 03:21:33 np0005548731 nova_compute[232433]:  </sysinfo>
Dec  6 03:21:33 np0005548731 nova_compute[232433]:  <os>
Dec  6 03:21:33 np0005548731 nova_compute[232433]:    <type arch="x86_64" machine="q35">hvm</type>
Dec  6 03:21:33 np0005548731 nova_compute[232433]:    <boot dev="hd"/>
Dec  6 03:21:33 np0005548731 nova_compute[232433]:    <smbios mode="sysinfo"/>
Dec  6 03:21:33 np0005548731 nova_compute[232433]:  </os>
Dec  6 03:21:33 np0005548731 nova_compute[232433]:  <features>
Dec  6 03:21:33 np0005548731 nova_compute[232433]:    <acpi/>
Dec  6 03:21:33 np0005548731 nova_compute[232433]:    <apic/>
Dec  6 03:21:33 np0005548731 nova_compute[232433]:    <vmcoreinfo/>
Dec  6 03:21:33 np0005548731 nova_compute[232433]:  </features>
Dec  6 03:21:33 np0005548731 nova_compute[232433]:  <clock offset="utc">
Dec  6 03:21:33 np0005548731 nova_compute[232433]:    <timer name="pit" tickpolicy="delay"/>
Dec  6 03:21:33 np0005548731 nova_compute[232433]:    <timer name="rtc" tickpolicy="catchup"/>
Dec  6 03:21:33 np0005548731 nova_compute[232433]:    <timer name="hpet" present="no"/>
Dec  6 03:21:33 np0005548731 nova_compute[232433]:  </clock>
Dec  6 03:21:33 np0005548731 nova_compute[232433]:  <cpu mode="custom" match="exact">
Dec  6 03:21:33 np0005548731 nova_compute[232433]:    <model>Nehalem</model>
Dec  6 03:21:33 np0005548731 nova_compute[232433]:    <topology sockets="1" cores="1" threads="1"/>
Dec  6 03:21:33 np0005548731 nova_compute[232433]:  </cpu>
Dec  6 03:21:33 np0005548731 nova_compute[232433]:  <devices>
Dec  6 03:21:33 np0005548731 nova_compute[232433]:    <disk type="network" device="cdrom">
Dec  6 03:21:33 np0005548731 nova_compute[232433]:      <driver type="raw" cache="none"/>
Dec  6 03:21:33 np0005548731 nova_compute[232433]:      <source protocol="rbd" name="vms/a7305638-446a-410d-8da1-2a2e5ddfa894_disk.config">
Dec  6 03:21:33 np0005548731 nova_compute[232433]:        <host name="192.168.122.100" port="6789"/>
Dec  6 03:21:33 np0005548731 nova_compute[232433]:        <host name="192.168.122.102" port="6789"/>
Dec  6 03:21:33 np0005548731 nova_compute[232433]:        <host name="192.168.122.101" port="6789"/>
Dec  6 03:21:33 np0005548731 nova_compute[232433]:      </source>
Dec  6 03:21:33 np0005548731 nova_compute[232433]:      <auth username="openstack">
Dec  6 03:21:33 np0005548731 nova_compute[232433]:        <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec  6 03:21:33 np0005548731 nova_compute[232433]:      </auth>
Dec  6 03:21:33 np0005548731 nova_compute[232433]:      <target dev="sda" bus="sata"/>
Dec  6 03:21:33 np0005548731 nova_compute[232433]:    </disk>
Dec  6 03:21:33 np0005548731 nova_compute[232433]:    <disk type="network" device="disk">
Dec  6 03:21:33 np0005548731 nova_compute[232433]:      <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Dec  6 03:21:33 np0005548731 nova_compute[232433]:      <source protocol="rbd" name="volumes/volume-09c93a0a-7903-4f14-b7b3-cc5eeb590680">
Dec  6 03:21:33 np0005548731 nova_compute[232433]:        <host name="192.168.122.100" port="6789"/>
Dec  6 03:21:33 np0005548731 nova_compute[232433]:        <host name="192.168.122.102" port="6789"/>
Dec  6 03:21:33 np0005548731 nova_compute[232433]:        <host name="192.168.122.101" port="6789"/>
Dec  6 03:21:33 np0005548731 nova_compute[232433]:      </source>
Dec  6 03:21:33 np0005548731 nova_compute[232433]:      <auth username="openstack">
Dec  6 03:21:33 np0005548731 nova_compute[232433]:        <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec  6 03:21:33 np0005548731 nova_compute[232433]:      </auth>
Dec  6 03:21:33 np0005548731 nova_compute[232433]:      <target dev="vda" bus="virtio"/>
Dec  6 03:21:33 np0005548731 nova_compute[232433]:      <serial>09c93a0a-7903-4f14-b7b3-cc5eeb590680</serial>
Dec  6 03:21:33 np0005548731 nova_compute[232433]:    </disk>
Dec  6 03:21:33 np0005548731 nova_compute[232433]:    <interface type="ethernet">
Dec  6 03:21:33 np0005548731 nova_compute[232433]:      <mac address="fa:16:3e:4e:ab:c2"/>
Dec  6 03:21:33 np0005548731 nova_compute[232433]:      <model type="virtio"/>
Dec  6 03:21:33 np0005548731 nova_compute[232433]:      <driver name="vhost" rx_queue_size="512"/>
Dec  6 03:21:33 np0005548731 nova_compute[232433]:      <mtu size="1442"/>
Dec  6 03:21:33 np0005548731 nova_compute[232433]:      <target dev="tap07c2ee2d-4a"/>
Dec  6 03:21:33 np0005548731 nova_compute[232433]:    </interface>
Dec  6 03:21:33 np0005548731 nova_compute[232433]:    <serial type="pty">
Dec  6 03:21:33 np0005548731 nova_compute[232433]:      <log file="/var/lib/nova/instances/a7305638-446a-410d-8da1-2a2e5ddfa894/console.log" append="off"/>
Dec  6 03:21:33 np0005548731 nova_compute[232433]:    </serial>
Dec  6 03:21:33 np0005548731 nova_compute[232433]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec  6 03:21:33 np0005548731 nova_compute[232433]:    <video>
Dec  6 03:21:33 np0005548731 nova_compute[232433]:      <model type="virtio"/>
Dec  6 03:21:33 np0005548731 nova_compute[232433]:    </video>
Dec  6 03:21:33 np0005548731 nova_compute[232433]:    <input type="tablet" bus="usb"/>
Dec  6 03:21:33 np0005548731 nova_compute[232433]:    <rng model="virtio">
Dec  6 03:21:33 np0005548731 nova_compute[232433]:      <backend model="random">/dev/urandom</backend>
Dec  6 03:21:33 np0005548731 nova_compute[232433]:    </rng>
Dec  6 03:21:33 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root"/>
Dec  6 03:21:33 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:21:33 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:21:33 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:21:33 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:21:33 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:21:33 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:21:33 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:21:33 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:21:33 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:21:33 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:21:33 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:21:33 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:21:33 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:21:33 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:21:33 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:21:33 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:21:33 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:21:33 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:21:33 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:21:33 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:21:33 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:21:33 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:21:33 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:21:33 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:21:33 np0005548731 nova_compute[232433]:    <controller type="usb" index="0"/>
Dec  6 03:21:33 np0005548731 nova_compute[232433]:    <memballoon model="virtio">
Dec  6 03:21:33 np0005548731 nova_compute[232433]:      <stats period="10"/>
Dec  6 03:21:33 np0005548731 nova_compute[232433]:    </memballoon>
Dec  6 03:21:33 np0005548731 nova_compute[232433]:  </devices>
Dec  6 03:21:33 np0005548731 nova_compute[232433]: </domain>
Dec  6 03:21:33 np0005548731 nova_compute[232433]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec  6 03:21:33 np0005548731 nova_compute[232433]: 2025-12-06 08:21:33.086 232437 DEBUG nova.compute.manager [None req-c0a7484b-a42e-4758-b0e6-67f2f2cbe7b2 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] [instance: a7305638-446a-410d-8da1-2a2e5ddfa894] Preparing to wait for external event network-vif-plugged-07c2ee2d-4ad8-444c-9e9c-e2bd99a0bb9f prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec  6 03:21:33 np0005548731 nova_compute[232433]: 2025-12-06 08:21:33.086 232437 DEBUG oslo_concurrency.lockutils [None req-c0a7484b-a42e-4758-b0e6-67f2f2cbe7b2 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Acquiring lock "a7305638-446a-410d-8da1-2a2e5ddfa894-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:21:33 np0005548731 nova_compute[232433]: 2025-12-06 08:21:33.087 232437 DEBUG oslo_concurrency.lockutils [None req-c0a7484b-a42e-4758-b0e6-67f2f2cbe7b2 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Lock "a7305638-446a-410d-8da1-2a2e5ddfa894-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:21:33 np0005548731 nova_compute[232433]: 2025-12-06 08:21:33.087 232437 DEBUG oslo_concurrency.lockutils [None req-c0a7484b-a42e-4758-b0e6-67f2f2cbe7b2 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Lock "a7305638-446a-410d-8da1-2a2e5ddfa894-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:21:33 np0005548731 nova_compute[232433]: 2025-12-06 08:21:33.088 232437 DEBUG nova.virt.libvirt.vif [None req-c0a7484b-a42e-4758-b0e6-67f2f2cbe7b2 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T08:21:22Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-server-1240260262',display_name='tempest-TestVolumeBootPattern-server-1240260262',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-testvolumebootpattern-server-1240260262',id=216,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOvUl3Ab4ESWezLZ9mehuTavMygXDhT0chVOH5OGNfzBJ6GphwodjSkpQcbaa1ADoOOfJ6+3BcKIVxorR3UxI6tyiW7Q3SFHkhHBjCjD54foFQ6i6sfCU/p7OcBbQ12cuw==',key_name='tempest-TestVolumeBootPattern-2075529576',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4b2dc4b8729f446a9c7ac69ca446f71d',ramdisk_id='',reservation_id='r-s18kw14p',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestVolumeBootPattern-97496240',owner_user_name='tempest-TestVolumeBootPattern-97496240-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T08:21:27Z,user_data=None,user_id='8e8feb4540af4e2caa45a88a9202dbe2',uuid=a7305638-446a-410d-8da1-2a2e5ddfa894,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "07c2ee2d-4ad8-444c-9e9c-e2bd99a0bb9f", "address": "fa:16:3e:4e:ab:c2", "network": {"id": "b4ef1374-9c77-45a7-8776-50aa60c7d84a", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1664561964-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": 
"4b2dc4b8729f446a9c7ac69ca446f71d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap07c2ee2d-4a", "ovs_interfaceid": "07c2ee2d-4ad8-444c-9e9c-e2bd99a0bb9f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec  6 03:21:33 np0005548731 nova_compute[232433]: 2025-12-06 08:21:33.089 232437 DEBUG nova.network.os_vif_util [None req-c0a7484b-a42e-4758-b0e6-67f2f2cbe7b2 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Converting VIF {"id": "07c2ee2d-4ad8-444c-9e9c-e2bd99a0bb9f", "address": "fa:16:3e:4e:ab:c2", "network": {"id": "b4ef1374-9c77-45a7-8776-50aa60c7d84a", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1664561964-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4b2dc4b8729f446a9c7ac69ca446f71d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap07c2ee2d-4a", "ovs_interfaceid": "07c2ee2d-4ad8-444c-9e9c-e2bd99a0bb9f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 03:21:33 np0005548731 nova_compute[232433]: 2025-12-06 08:21:33.089 232437 DEBUG nova.network.os_vif_util [None req-c0a7484b-a42e-4758-b0e6-67f2f2cbe7b2 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:4e:ab:c2,bridge_name='br-int',has_traffic_filtering=True,id=07c2ee2d-4ad8-444c-9e9c-e2bd99a0bb9f,network=Network(b4ef1374-9c77-45a7-8776-50aa60c7d84a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap07c2ee2d-4a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 03:21:33 np0005548731 nova_compute[232433]: 2025-12-06 08:21:33.090 232437 DEBUG os_vif [None req-c0a7484b-a42e-4758-b0e6-67f2f2cbe7b2 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:4e:ab:c2,bridge_name='br-int',has_traffic_filtering=True,id=07c2ee2d-4ad8-444c-9e9c-e2bd99a0bb9f,network=Network(b4ef1374-9c77-45a7-8776-50aa60c7d84a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap07c2ee2d-4a') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec  6 03:21:33 np0005548731 nova_compute[232433]: 2025-12-06 08:21:33.091 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:21:33 np0005548731 nova_compute[232433]: 2025-12-06 08:21:33.092 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 03:21:33 np0005548731 nova_compute[232433]: 2025-12-06 08:21:33.092 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  6 03:21:33 np0005548731 nova_compute[232433]: 2025-12-06 08:21:33.096 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:21:33 np0005548731 nova_compute[232433]: 2025-12-06 08:21:33.097 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap07c2ee2d-4a, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 03:21:33 np0005548731 nova_compute[232433]: 2025-12-06 08:21:33.097 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap07c2ee2d-4a, col_values=(('external_ids', {'iface-id': '07c2ee2d-4ad8-444c-9e9c-e2bd99a0bb9f', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:4e:ab:c2', 'vm-uuid': 'a7305638-446a-410d-8da1-2a2e5ddfa894'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 03:21:33 np0005548731 NetworkManager[49182]: <info>  [1765009293.0998] manager: (tap07c2ee2d-4a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/522)
Dec  6 03:21:33 np0005548731 nova_compute[232433]: 2025-12-06 08:21:33.099 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:21:33 np0005548731 nova_compute[232433]: 2025-12-06 08:21:33.102 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  6 03:21:33 np0005548731 nova_compute[232433]: 2025-12-06 08:21:33.108 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:21:33 np0005548731 nova_compute[232433]: 2025-12-06 08:21:33.109 232437 INFO os_vif [None req-c0a7484b-a42e-4758-b0e6-67f2f2cbe7b2 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:4e:ab:c2,bridge_name='br-int',has_traffic_filtering=True,id=07c2ee2d-4ad8-444c-9e9c-e2bd99a0bb9f,network=Network(b4ef1374-9c77-45a7-8776-50aa60c7d84a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap07c2ee2d-4a')#033[00m
Dec  6 03:21:33 np0005548731 nova_compute[232433]: 2025-12-06 08:21:33.184 232437 DEBUG nova.virt.libvirt.driver [None req-c0a7484b-a42e-4758-b0e6-67f2f2cbe7b2 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  6 03:21:33 np0005548731 nova_compute[232433]: 2025-12-06 08:21:33.185 232437 DEBUG nova.virt.libvirt.driver [None req-c0a7484b-a42e-4758-b0e6-67f2f2cbe7b2 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  6 03:21:33 np0005548731 nova_compute[232433]: 2025-12-06 08:21:33.185 232437 DEBUG nova.virt.libvirt.driver [None req-c0a7484b-a42e-4758-b0e6-67f2f2cbe7b2 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] No VIF found with MAC fa:16:3e:4e:ab:c2, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec  6 03:21:33 np0005548731 nova_compute[232433]: 2025-12-06 08:21:33.185 232437 INFO nova.virt.libvirt.driver [None req-c0a7484b-a42e-4758-b0e6-67f2f2cbe7b2 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] [instance: a7305638-446a-410d-8da1-2a2e5ddfa894] Using config drive#033[00m
Dec  6 03:21:33 np0005548731 nova_compute[232433]: 2025-12-06 08:21:33.212 232437 DEBUG nova.storage.rbd_utils [None req-c0a7484b-a42e-4758-b0e6-67f2f2cbe7b2 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] rbd image a7305638-446a-410d-8da1-2a2e5ddfa894_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 03:21:33 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:21:33 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:21:33 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:21:33.342 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:21:33 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:21:33 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:21:33 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:21:33.607 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:21:33 np0005548731 nova_compute[232433]: 2025-12-06 08:21:33.801 232437 INFO nova.virt.libvirt.driver [None req-c0a7484b-a42e-4758-b0e6-67f2f2cbe7b2 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] [instance: a7305638-446a-410d-8da1-2a2e5ddfa894] Creating config drive at /var/lib/nova/instances/a7305638-446a-410d-8da1-2a2e5ddfa894/disk.config#033[00m
Dec  6 03:21:33 np0005548731 nova_compute[232433]: 2025-12-06 08:21:33.805 232437 DEBUG oslo_concurrency.processutils [None req-c0a7484b-a42e-4758-b0e6-67f2f2cbe7b2 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/a7305638-446a-410d-8da1-2a2e5ddfa894/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpeb0v0ns3 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 03:21:33 np0005548731 nova_compute[232433]: 2025-12-06 08:21:33.953 232437 DEBUG oslo_concurrency.processutils [None req-c0a7484b-a42e-4758-b0e6-67f2f2cbe7b2 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/a7305638-446a-410d-8da1-2a2e5ddfa894/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpeb0v0ns3" returned: 0 in 0.148s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 03:21:33 np0005548731 nova_compute[232433]: 2025-12-06 08:21:33.998 232437 DEBUG nova.storage.rbd_utils [None req-c0a7484b-a42e-4758-b0e6-67f2f2cbe7b2 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] rbd image a7305638-446a-410d-8da1-2a2e5ddfa894_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 03:21:34 np0005548731 nova_compute[232433]: 2025-12-06 08:21:34.003 232437 DEBUG oslo_concurrency.processutils [None req-c0a7484b-a42e-4758-b0e6-67f2f2cbe7b2 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/a7305638-446a-410d-8da1-2a2e5ddfa894/disk.config a7305638-446a-410d-8da1-2a2e5ddfa894_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 03:21:34 np0005548731 nova_compute[232433]: 2025-12-06 08:21:34.190 232437 DEBUG oslo_concurrency.processutils [None req-c0a7484b-a42e-4758-b0e6-67f2f2cbe7b2 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/a7305638-446a-410d-8da1-2a2e5ddfa894/disk.config a7305638-446a-410d-8da1-2a2e5ddfa894_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.187s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 03:21:34 np0005548731 nova_compute[232433]: 2025-12-06 08:21:34.191 232437 INFO nova.virt.libvirt.driver [None req-c0a7484b-a42e-4758-b0e6-67f2f2cbe7b2 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] [instance: a7305638-446a-410d-8da1-2a2e5ddfa894] Deleting local config drive /var/lib/nova/instances/a7305638-446a-410d-8da1-2a2e5ddfa894/disk.config because it was imported into RBD.#033[00m
Dec  6 03:21:34 np0005548731 kernel: tap07c2ee2d-4a: entered promiscuous mode
Dec  6 03:21:34 np0005548731 ovn_controller[133927]: 2025-12-06T08:21:34Z|01098|binding|INFO|Claiming lport 07c2ee2d-4ad8-444c-9e9c-e2bd99a0bb9f for this chassis.
Dec  6 03:21:34 np0005548731 ovn_controller[133927]: 2025-12-06T08:21:34Z|01099|binding|INFO|07c2ee2d-4ad8-444c-9e9c-e2bd99a0bb9f: Claiming fa:16:3e:4e:ab:c2 10.100.0.6
Dec  6 03:21:34 np0005548731 NetworkManager[49182]: <info>  [1765009294.2537] manager: (tap07c2ee2d-4a): new Tun device (/org/freedesktop/NetworkManager/Devices/523)
Dec  6 03:21:34 np0005548731 nova_compute[232433]: 2025-12-06 08:21:34.254 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:21:34 np0005548731 nova_compute[232433]: 2025-12-06 08:21:34.259 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:21:34 np0005548731 NetworkManager[49182]: <info>  [1765009294.2666] manager: (patch-provnet-9e78c1a1-68f4-477a-abaa-13a98bde06e5-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/524)
Dec  6 03:21:34 np0005548731 NetworkManager[49182]: <info>  [1765009294.2675] manager: (patch-br-int-to-provnet-9e78c1a1-68f4-477a-abaa-13a98bde06e5): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/525)
Dec  6 03:21:34 np0005548731 nova_compute[232433]: 2025-12-06 08:21:34.265 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:21:34 np0005548731 systemd-udevd[342622]: Network interface NamePolicy= disabled on kernel command line.
Dec  6 03:21:34 np0005548731 NetworkManager[49182]: <info>  [1765009294.2955] device (tap07c2ee2d-4a): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec  6 03:21:34 np0005548731 NetworkManager[49182]: <info>  [1765009294.2961] device (tap07c2ee2d-4a): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec  6 03:21:34 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:21:34.296 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:4e:ab:c2 10.100.0.6'], port_security=['fa:16:3e:4e:ab:c2 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'a7305638-446a-410d-8da1-2a2e5ddfa894', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b4ef1374-9c77-45a7-8776-50aa60c7d84a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4b2dc4b8729f446a9c7ac69ca446f71d', 'neutron:revision_number': '2', 'neutron:security_group_ids': '8cd07b30-a335-4570-957e-3674d9a06120', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=60eec70d-8996-4225-9077-6d0f2705560a, chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], logical_port=07c2ee2d-4ad8-444c-9e9c-e2bd99a0bb9f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 03:21:34 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:21:34.297 143965 INFO neutron.agent.ovn.metadata.agent [-] Port 07c2ee2d-4ad8-444c-9e9c-e2bd99a0bb9f in datapath b4ef1374-9c77-45a7-8776-50aa60c7d84a bound to our chassis#033[00m
Dec  6 03:21:34 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:21:34.299 143965 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network b4ef1374-9c77-45a7-8776-50aa60c7d84a#033[00m
Dec  6 03:21:34 np0005548731 systemd-machined[195355]: New machine qemu-112-instance-000000d8.
Dec  6 03:21:34 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:21:34.309 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[bddf89fd-26eb-471c-bddc-09eb59dfae71]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:21:34 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:21:34.310 143965 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapb4ef1374-91 in ovnmeta-b4ef1374-9c77-45a7-8776-50aa60c7d84a namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Dec  6 03:21:34 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:21:34.311 236668 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapb4ef1374-90 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Dec  6 03:21:34 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:21:34.312 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[b69ca168-9f36-47b6-be69-65caed069897]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:21:34 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:21:34.313 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[d48e91d5-7421-4c06-85cb-6635300183b0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:21:34 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:21:34.324 144079 DEBUG oslo.privsep.daemon [-] privsep: reply[947ba883-ac1f-4663-9dd8-fa6fa90ba41e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:21:34 np0005548731 systemd[1]: Started Virtual Machine qemu-112-instance-000000d8.
Dec  6 03:21:34 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:21:34.347 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[afb0a0d9-ebd0-4ab8-8bf6-0bce3763ab7d]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:21:34 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:21:34.379 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[b1412a70-057a-49ca-a017-7cfc17e89634]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:21:34 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:21:34.388 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[02452907-154a-4169-a7cc-4dac292614aa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:21:34 np0005548731 NetworkManager[49182]: <info>  [1765009294.3896] manager: (tapb4ef1374-90): new Veth device (/org/freedesktop/NetworkManager/Devices/526)
Dec  6 03:21:34 np0005548731 systemd-udevd[342627]: Network interface NamePolicy= disabled on kernel command line.
Dec  6 03:21:34 np0005548731 nova_compute[232433]: 2025-12-06 08:21:34.420 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:21:34 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:21:34.421 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[5a261ac3-45cc-4453-b4da-255851b60a1f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:21:34 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:21:34.425 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[c1cb80de-f151-427b-80de-020d1c176606]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:21:34 np0005548731 nova_compute[232433]: 2025-12-06 08:21:34.441 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:21:34 np0005548731 ovn_controller[133927]: 2025-12-06T08:21:34Z|01100|binding|INFO|Setting lport 07c2ee2d-4ad8-444c-9e9c-e2bd99a0bb9f ovn-installed in OVS
Dec  6 03:21:34 np0005548731 ovn_controller[133927]: 2025-12-06T08:21:34Z|01101|binding|INFO|Setting lport 07c2ee2d-4ad8-444c-9e9c-e2bd99a0bb9f up in Southbound
Dec  6 03:21:34 np0005548731 NetworkManager[49182]: <info>  [1765009294.4505] device (tapb4ef1374-90): carrier: link connected
Dec  6 03:21:34 np0005548731 nova_compute[232433]: 2025-12-06 08:21:34.453 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:21:34 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:21:34.461 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[6b2e1310-38c4-44d6-b383-2f03f00cb699]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:21:34 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:21:34.479 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[75e69b97-e5ef-4c77-a57a-d4c5f7ed87f2]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb4ef1374-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f9:d4:b8'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 338], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 955645, 'reachable_time': 32299, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 342658, 'error': None, 'target': 'ovnmeta-b4ef1374-9c77-45a7-8776-50aa60c7d84a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:21:34 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:21:34.494 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[2ac7a7c1-03dd-41a6-a8b6-8f6991c95604]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fef9:d4b8'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 955645, 'tstamp': 955645}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 342660, 'error': None, 'target': 'ovnmeta-b4ef1374-9c77-45a7-8776-50aa60c7d84a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:21:34 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:21:34.509 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[acf761be-df29-4e09-ae9a-3eaca2cfaad8]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb4ef1374-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f9:d4:b8'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 338], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 955645, 'reachable_time': 32299, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 342661, 'error': None, 'target': 'ovnmeta-b4ef1374-9c77-45a7-8776-50aa60c7d84a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:21:34 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:21:34.536 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[5472610f-3766-4dc4-beb6-40888d5389c6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:21:34 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:21:34.588 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[7a88b89d-cb70-4744-960d-3a4f8d660098]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:21:34 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:21:34.589 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb4ef1374-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 03:21:34 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:21:34.590 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  6 03:21:34 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:21:34.590 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb4ef1374-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 03:21:34 np0005548731 nova_compute[232433]: 2025-12-06 08:21:34.592 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:21:34 np0005548731 NetworkManager[49182]: <info>  [1765009294.5925] manager: (tapb4ef1374-90): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/527)
Dec  6 03:21:34 np0005548731 kernel: tapb4ef1374-90: entered promiscuous mode
Dec  6 03:21:34 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:21:34.595 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapb4ef1374-90, col_values=(('external_ids', {'iface-id': '32c82c25-6496-4edd-ba74-1791824b99ab'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 03:21:34 np0005548731 nova_compute[232433]: 2025-12-06 08:21:34.596 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:21:34 np0005548731 ovn_controller[133927]: 2025-12-06T08:21:34Z|01102|binding|INFO|Releasing lport 32c82c25-6496-4edd-ba74-1791824b99ab from this chassis (sb_readonly=0)
Dec  6 03:21:34 np0005548731 nova_compute[232433]: 2025-12-06 08:21:34.611 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:21:34 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:21:34.612 143965 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/b4ef1374-9c77-45a7-8776-50aa60c7d84a.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/b4ef1374-9c77-45a7-8776-50aa60c7d84a.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Dec  6 03:21:34 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:21:34.613 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[a6c99d1b-f432-4082-8ba5-edf4207ac684]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:21:34 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:21:34.614 143965 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec  6 03:21:34 np0005548731 ovn_metadata_agent[143960]: global
Dec  6 03:21:34 np0005548731 ovn_metadata_agent[143960]:    log         /dev/log local0 debug
Dec  6 03:21:34 np0005548731 ovn_metadata_agent[143960]:    log-tag     haproxy-metadata-proxy-b4ef1374-9c77-45a7-8776-50aa60c7d84a
Dec  6 03:21:34 np0005548731 ovn_metadata_agent[143960]:    user        root
Dec  6 03:21:34 np0005548731 ovn_metadata_agent[143960]:    group       root
Dec  6 03:21:34 np0005548731 ovn_metadata_agent[143960]:    maxconn     1024
Dec  6 03:21:34 np0005548731 ovn_metadata_agent[143960]:    pidfile     /var/lib/neutron/external/pids/b4ef1374-9c77-45a7-8776-50aa60c7d84a.pid.haproxy
Dec  6 03:21:34 np0005548731 ovn_metadata_agent[143960]:    daemon
Dec  6 03:21:34 np0005548731 ovn_metadata_agent[143960]: 
Dec  6 03:21:34 np0005548731 ovn_metadata_agent[143960]: defaults
Dec  6 03:21:34 np0005548731 ovn_metadata_agent[143960]:    log global
Dec  6 03:21:34 np0005548731 ovn_metadata_agent[143960]:    mode http
Dec  6 03:21:34 np0005548731 ovn_metadata_agent[143960]:    option httplog
Dec  6 03:21:34 np0005548731 ovn_metadata_agent[143960]:    option dontlognull
Dec  6 03:21:34 np0005548731 ovn_metadata_agent[143960]:    option http-server-close
Dec  6 03:21:34 np0005548731 ovn_metadata_agent[143960]:    option forwardfor
Dec  6 03:21:34 np0005548731 ovn_metadata_agent[143960]:    retries                 3
Dec  6 03:21:34 np0005548731 ovn_metadata_agent[143960]:    timeout http-request    30s
Dec  6 03:21:34 np0005548731 ovn_metadata_agent[143960]:    timeout connect         30s
Dec  6 03:21:34 np0005548731 ovn_metadata_agent[143960]:    timeout client          32s
Dec  6 03:21:34 np0005548731 ovn_metadata_agent[143960]:    timeout server          32s
Dec  6 03:21:34 np0005548731 ovn_metadata_agent[143960]:    timeout http-keep-alive 30s
Dec  6 03:21:34 np0005548731 ovn_metadata_agent[143960]: 
Dec  6 03:21:34 np0005548731 ovn_metadata_agent[143960]: 
Dec  6 03:21:34 np0005548731 ovn_metadata_agent[143960]: listen listener
Dec  6 03:21:34 np0005548731 ovn_metadata_agent[143960]:    bind 169.254.169.254:80
Dec  6 03:21:34 np0005548731 ovn_metadata_agent[143960]:    server metadata /var/lib/neutron/metadata_proxy
Dec  6 03:21:34 np0005548731 ovn_metadata_agent[143960]:    http-request add-header X-OVN-Network-ID b4ef1374-9c77-45a7-8776-50aa60c7d84a
Dec  6 03:21:34 np0005548731 ovn_metadata_agent[143960]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Dec  6 03:21:34 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:21:34.616 143965 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-b4ef1374-9c77-45a7-8776-50aa60c7d84a', 'env', 'PROCESS_TAG=haproxy-b4ef1374-9c77-45a7-8776-50aa60c7d84a', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/b4ef1374-9c77-45a7-8776-50aa60c7d84a.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Dec  6 03:21:34 np0005548731 nova_compute[232433]: 2025-12-06 08:21:34.818 232437 DEBUG nova.network.neutron [req-882fb4f1-05ae-462d-8f98-70ff0faa31c5 req-acee23e7-c5f6-4b03-900b-14b178556141 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: a7305638-446a-410d-8da1-2a2e5ddfa894] Updated VIF entry in instance network info cache for port 07c2ee2d-4ad8-444c-9e9c-e2bd99a0bb9f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  6 03:21:34 np0005548731 nova_compute[232433]: 2025-12-06 08:21:34.819 232437 DEBUG nova.network.neutron [req-882fb4f1-05ae-462d-8f98-70ff0faa31c5 req-acee23e7-c5f6-4b03-900b-14b178556141 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: a7305638-446a-410d-8da1-2a2e5ddfa894] Updating instance_info_cache with network_info: [{"id": "07c2ee2d-4ad8-444c-9e9c-e2bd99a0bb9f", "address": "fa:16:3e:4e:ab:c2", "network": {"id": "b4ef1374-9c77-45a7-8776-50aa60c7d84a", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1664561964-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4b2dc4b8729f446a9c7ac69ca446f71d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap07c2ee2d-4a", "ovs_interfaceid": "07c2ee2d-4ad8-444c-9e9c-e2bd99a0bb9f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 03:21:34 np0005548731 nova_compute[232433]: 2025-12-06 08:21:34.842 232437 DEBUG oslo_concurrency.lockutils [req-882fb4f1-05ae-462d-8f98-70ff0faa31c5 req-acee23e7-c5f6-4b03-900b-14b178556141 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-a7305638-446a-410d-8da1-2a2e5ddfa894" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 03:21:34 np0005548731 nova_compute[232433]: 2025-12-06 08:21:34.980 232437 DEBUG nova.virt.driver [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Emitting event <LifecycleEvent: 1765009294.9802504, a7305638-446a-410d-8da1-2a2e5ddfa894 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 03:21:34 np0005548731 nova_compute[232433]: 2025-12-06 08:21:34.981 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: a7305638-446a-410d-8da1-2a2e5ddfa894] VM Started (Lifecycle Event)#033[00m
Dec  6 03:21:35 np0005548731 nova_compute[232433]: 2025-12-06 08:21:35.040 232437 DEBUG nova.compute.manager [req-50211967-6947-4ae1-845e-124c6c61efc0 req-ad1e3453-4400-44d5-9147-a4cd357e0d6a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: a7305638-446a-410d-8da1-2a2e5ddfa894] Received event network-vif-plugged-07c2ee2d-4ad8-444c-9e9c-e2bd99a0bb9f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 03:21:35 np0005548731 nova_compute[232433]: 2025-12-06 08:21:35.040 232437 DEBUG oslo_concurrency.lockutils [req-50211967-6947-4ae1-845e-124c6c61efc0 req-ad1e3453-4400-44d5-9147-a4cd357e0d6a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "a7305638-446a-410d-8da1-2a2e5ddfa894-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:21:35 np0005548731 nova_compute[232433]: 2025-12-06 08:21:35.041 232437 DEBUG oslo_concurrency.lockutils [req-50211967-6947-4ae1-845e-124c6c61efc0 req-ad1e3453-4400-44d5-9147-a4cd357e0d6a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "a7305638-446a-410d-8da1-2a2e5ddfa894-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:21:35 np0005548731 nova_compute[232433]: 2025-12-06 08:21:35.041 232437 DEBUG oslo_concurrency.lockutils [req-50211967-6947-4ae1-845e-124c6c61efc0 req-ad1e3453-4400-44d5-9147-a4cd357e0d6a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "a7305638-446a-410d-8da1-2a2e5ddfa894-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:21:35 np0005548731 nova_compute[232433]: 2025-12-06 08:21:35.041 232437 DEBUG nova.compute.manager [req-50211967-6947-4ae1-845e-124c6c61efc0 req-ad1e3453-4400-44d5-9147-a4cd357e0d6a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: a7305638-446a-410d-8da1-2a2e5ddfa894] Processing event network-vif-plugged-07c2ee2d-4ad8-444c-9e9c-e2bd99a0bb9f _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Dec  6 03:21:35 np0005548731 nova_compute[232433]: 2025-12-06 08:21:35.042 232437 DEBUG nova.compute.manager [None req-c0a7484b-a42e-4758-b0e6-67f2f2cbe7b2 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] [instance: a7305638-446a-410d-8da1-2a2e5ddfa894] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec  6 03:21:35 np0005548731 nova_compute[232433]: 2025-12-06 08:21:35.047 232437 DEBUG nova.virt.libvirt.driver [None req-c0a7484b-a42e-4758-b0e6-67f2f2cbe7b2 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] [instance: a7305638-446a-410d-8da1-2a2e5ddfa894] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Dec  6 03:21:35 np0005548731 nova_compute[232433]: 2025-12-06 08:21:35.052 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: a7305638-446a-410d-8da1-2a2e5ddfa894] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 03:21:35 np0005548731 nova_compute[232433]: 2025-12-06 08:21:35.054 232437 INFO nova.virt.libvirt.driver [-] [instance: a7305638-446a-410d-8da1-2a2e5ddfa894] Instance spawned successfully.#033[00m
Dec  6 03:21:35 np0005548731 nova_compute[232433]: 2025-12-06 08:21:35.054 232437 DEBUG nova.virt.libvirt.driver [None req-c0a7484b-a42e-4758-b0e6-67f2f2cbe7b2 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] [instance: a7305638-446a-410d-8da1-2a2e5ddfa894] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Dec  6 03:21:35 np0005548731 nova_compute[232433]: 2025-12-06 08:21:35.058 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: a7305638-446a-410d-8da1-2a2e5ddfa894] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  6 03:21:35 np0005548731 nova_compute[232433]: 2025-12-06 08:21:35.083 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: a7305638-446a-410d-8da1-2a2e5ddfa894] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  6 03:21:35 np0005548731 nova_compute[232433]: 2025-12-06 08:21:35.084 232437 DEBUG nova.virt.driver [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Emitting event <LifecycleEvent: 1765009294.9813023, a7305638-446a-410d-8da1-2a2e5ddfa894 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 03:21:35 np0005548731 nova_compute[232433]: 2025-12-06 08:21:35.085 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: a7305638-446a-410d-8da1-2a2e5ddfa894] VM Paused (Lifecycle Event)#033[00m
Dec  6 03:21:35 np0005548731 podman[342733]: 2025-12-06 08:21:35.088511658 +0000 UTC m=+0.075709989 container create f2e4de8a3a96f5a9a3cabab62f207dedf4368fca1746d13afb2fe5508f140980 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b4ef1374-9c77-45a7-8776-50aa60c7d84a, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  6 03:21:35 np0005548731 nova_compute[232433]: 2025-12-06 08:21:35.092 232437 DEBUG nova.virt.libvirt.driver [None req-c0a7484b-a42e-4758-b0e6-67f2f2cbe7b2 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] [instance: a7305638-446a-410d-8da1-2a2e5ddfa894] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 03:21:35 np0005548731 nova_compute[232433]: 2025-12-06 08:21:35.093 232437 DEBUG nova.virt.libvirt.driver [None req-c0a7484b-a42e-4758-b0e6-67f2f2cbe7b2 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] [instance: a7305638-446a-410d-8da1-2a2e5ddfa894] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 03:21:35 np0005548731 nova_compute[232433]: 2025-12-06 08:21:35.094 232437 DEBUG nova.virt.libvirt.driver [None req-c0a7484b-a42e-4758-b0e6-67f2f2cbe7b2 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] [instance: a7305638-446a-410d-8da1-2a2e5ddfa894] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 03:21:35 np0005548731 nova_compute[232433]: 2025-12-06 08:21:35.095 232437 DEBUG nova.virt.libvirt.driver [None req-c0a7484b-a42e-4758-b0e6-67f2f2cbe7b2 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] [instance: a7305638-446a-410d-8da1-2a2e5ddfa894] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 03:21:35 np0005548731 nova_compute[232433]: 2025-12-06 08:21:35.096 232437 DEBUG nova.virt.libvirt.driver [None req-c0a7484b-a42e-4758-b0e6-67f2f2cbe7b2 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] [instance: a7305638-446a-410d-8da1-2a2e5ddfa894] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 03:21:35 np0005548731 nova_compute[232433]: 2025-12-06 08:21:35.097 232437 DEBUG nova.virt.libvirt.driver [None req-c0a7484b-a42e-4758-b0e6-67f2f2cbe7b2 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] [instance: a7305638-446a-410d-8da1-2a2e5ddfa894] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 03:21:35 np0005548731 podman[342733]: 2025-12-06 08:21:35.045747694 +0000 UTC m=+0.032946085 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec  6 03:21:35 np0005548731 nova_compute[232433]: 2025-12-06 08:21:35.140 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: a7305638-446a-410d-8da1-2a2e5ddfa894] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 03:21:35 np0005548731 systemd[1]: Started libpod-conmon-f2e4de8a3a96f5a9a3cabab62f207dedf4368fca1746d13afb2fe5508f140980.scope.
Dec  6 03:21:35 np0005548731 nova_compute[232433]: 2025-12-06 08:21:35.145 232437 DEBUG nova.virt.driver [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Emitting event <LifecycleEvent: 1765009295.0468328, a7305638-446a-410d-8da1-2a2e5ddfa894 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 03:21:35 np0005548731 nova_compute[232433]: 2025-12-06 08:21:35.145 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: a7305638-446a-410d-8da1-2a2e5ddfa894] VM Resumed (Lifecycle Event)#033[00m
Dec  6 03:21:35 np0005548731 systemd[1]: Started libcrun container.
Dec  6 03:21:35 np0005548731 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/282b369c698f3105ce4e15a9e30c439516a073b6ea60d3d92f48bf872ea33fa3/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec  6 03:21:35 np0005548731 nova_compute[232433]: 2025-12-06 08:21:35.185 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: a7305638-446a-410d-8da1-2a2e5ddfa894] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 03:21:35 np0005548731 nova_compute[232433]: 2025-12-06 08:21:35.190 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: a7305638-446a-410d-8da1-2a2e5ddfa894] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  6 03:21:35 np0005548731 podman[342733]: 2025-12-06 08:21:35.198838692 +0000 UTC m=+0.186036983 container init f2e4de8a3a96f5a9a3cabab62f207dedf4368fca1746d13afb2fe5508f140980 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b4ef1374-9c77-45a7-8776-50aa60c7d84a, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  6 03:21:35 np0005548731 podman[342733]: 2025-12-06 08:21:35.203582718 +0000 UTC m=+0.190781009 container start f2e4de8a3a96f5a9a3cabab62f207dedf4368fca1746d13afb2fe5508f140980 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b4ef1374-9c77-45a7-8776-50aa60c7d84a, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  6 03:21:35 np0005548731 neutron-haproxy-ovnmeta-b4ef1374-9c77-45a7-8776-50aa60c7d84a[342748]: [NOTICE]   (342752) : New worker (342754) forked
Dec  6 03:21:35 np0005548731 neutron-haproxy-ovnmeta-b4ef1374-9c77-45a7-8776-50aa60c7d84a[342748]: [NOTICE]   (342752) : Loading success.
Dec  6 03:21:35 np0005548731 nova_compute[232433]: 2025-12-06 08:21:35.250 232437 INFO nova.compute.manager [None req-c0a7484b-a42e-4758-b0e6-67f2f2cbe7b2 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] [instance: a7305638-446a-410d-8da1-2a2e5ddfa894] Took 6.18 seconds to spawn the instance on the hypervisor.#033[00m
Dec  6 03:21:35 np0005548731 nova_compute[232433]: 2025-12-06 08:21:35.251 232437 DEBUG nova.compute.manager [None req-c0a7484b-a42e-4758-b0e6-67f2f2cbe7b2 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] [instance: a7305638-446a-410d-8da1-2a2e5ddfa894] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 03:21:35 np0005548731 nova_compute[232433]: 2025-12-06 08:21:35.277 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: a7305638-446a-410d-8da1-2a2e5ddfa894] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  6 03:21:35 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:21:35 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:21:35 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:21:35.344 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:21:35 np0005548731 nova_compute[232433]: 2025-12-06 08:21:35.453 232437 INFO nova.compute.manager [None req-c0a7484b-a42e-4758-b0e6-67f2f2cbe7b2 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] [instance: a7305638-446a-410d-8da1-2a2e5ddfa894] Took 9.82 seconds to build instance.
Dec  6 03:21:35 np0005548731 nova_compute[232433]: 2025-12-06 08:21:35.481 232437 DEBUG oslo_concurrency.lockutils [None req-c0a7484b-a42e-4758-b0e6-67f2f2cbe7b2 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Lock "a7305638-446a-410d-8da1-2a2e5ddfa894" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.228s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  6 03:21:35 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:21:35 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:21:35 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:21:35.609 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:21:36 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e426 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:21:37 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:21:37 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:21:37 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:21:37.346 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:21:37 np0005548731 auditd[702]: Audit daemon rotating log files
Dec  6 03:21:37 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:21:37 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:21:37 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:21:37.612 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:21:37 np0005548731 nova_compute[232433]: 2025-12-06 08:21:37.873 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 03:21:38 np0005548731 nova_compute[232433]: 2025-12-06 08:21:38.099 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 03:21:38 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 03:21:38 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 03:21:38 np0005548731 nova_compute[232433]: 2025-12-06 08:21:38.693 232437 DEBUG nova.compute.manager [req-b0a86b59-c754-4267-b8e5-a1fa7192d1f2 req-a97868e7-2e43-4522-9fee-c31d942d97a8 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: a7305638-446a-410d-8da1-2a2e5ddfa894] Received event network-vif-plugged-07c2ee2d-4ad8-444c-9e9c-e2bd99a0bb9f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec  6 03:21:38 np0005548731 nova_compute[232433]: 2025-12-06 08:21:38.693 232437 DEBUG oslo_concurrency.lockutils [req-b0a86b59-c754-4267-b8e5-a1fa7192d1f2 req-a97868e7-2e43-4522-9fee-c31d942d97a8 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "a7305638-446a-410d-8da1-2a2e5ddfa894-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  6 03:21:38 np0005548731 nova_compute[232433]: 2025-12-06 08:21:38.694 232437 DEBUG oslo_concurrency.lockutils [req-b0a86b59-c754-4267-b8e5-a1fa7192d1f2 req-a97868e7-2e43-4522-9fee-c31d942d97a8 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "a7305638-446a-410d-8da1-2a2e5ddfa894-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  6 03:21:38 np0005548731 nova_compute[232433]: 2025-12-06 08:21:38.695 232437 DEBUG oslo_concurrency.lockutils [req-b0a86b59-c754-4267-b8e5-a1fa7192d1f2 req-a97868e7-2e43-4522-9fee-c31d942d97a8 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "a7305638-446a-410d-8da1-2a2e5ddfa894-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  6 03:21:38 np0005548731 nova_compute[232433]: 2025-12-06 08:21:38.696 232437 DEBUG nova.compute.manager [req-b0a86b59-c754-4267-b8e5-a1fa7192d1f2 req-a97868e7-2e43-4522-9fee-c31d942d97a8 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: a7305638-446a-410d-8da1-2a2e5ddfa894] No waiting events found dispatching network-vif-plugged-07c2ee2d-4ad8-444c-9e9c-e2bd99a0bb9f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec  6 03:21:38 np0005548731 nova_compute[232433]: 2025-12-06 08:21:38.696 232437 WARNING nova.compute.manager [req-b0a86b59-c754-4267-b8e5-a1fa7192d1f2 req-a97868e7-2e43-4522-9fee-c31d942d97a8 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: a7305638-446a-410d-8da1-2a2e5ddfa894] Received unexpected event network-vif-plugged-07c2ee2d-4ad8-444c-9e9c-e2bd99a0bb9f for instance with vm_state active and task_state None.
Dec  6 03:21:39 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:21:39 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 03:21:39 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:21:39.348 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 03:21:39 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:21:39 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:21:39 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:21:39.614 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:21:41 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e426 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:21:41 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:21:41 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:21:41 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:21:41.350 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:21:41 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:21:41 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:21:41 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:21:41.616 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:21:42 np0005548731 nova_compute[232433]: 2025-12-06 08:21:42.874 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 03:21:43 np0005548731 nova_compute[232433]: 2025-12-06 08:21:43.101 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 03:21:43 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:21:43 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:21:43 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:21:43.352 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:21:43 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:21:43 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:21:43 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:21:43.619 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:21:45 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:21:45 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 03:21:45 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:21:45.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 03:21:45 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:21:45 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:21:45 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:21:45.622 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:21:45 np0005548731 nova_compute[232433]: 2025-12-06 08:21:45.650 232437 DEBUG nova.compute.manager [req-14993eae-250f-47b1-9155-cb20e6789821 req-f6666a69-aa98-4791-b9b7-264cb278d626 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: a7305638-446a-410d-8da1-2a2e5ddfa894] Received event network-changed-07c2ee2d-4ad8-444c-9e9c-e2bd99a0bb9f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec  6 03:21:45 np0005548731 nova_compute[232433]: 2025-12-06 08:21:45.650 232437 DEBUG nova.compute.manager [req-14993eae-250f-47b1-9155-cb20e6789821 req-f6666a69-aa98-4791-b9b7-264cb278d626 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: a7305638-446a-410d-8da1-2a2e5ddfa894] Refreshing instance network info cache due to event network-changed-07c2ee2d-4ad8-444c-9e9c-e2bd99a0bb9f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec  6 03:21:45 np0005548731 nova_compute[232433]: 2025-12-06 08:21:45.651 232437 DEBUG oslo_concurrency.lockutils [req-14993eae-250f-47b1-9155-cb20e6789821 req-f6666a69-aa98-4791-b9b7-264cb278d626 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-a7305638-446a-410d-8da1-2a2e5ddfa894" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec  6 03:21:45 np0005548731 nova_compute[232433]: 2025-12-06 08:21:45.651 232437 DEBUG oslo_concurrency.lockutils [req-14993eae-250f-47b1-9155-cb20e6789821 req-f6666a69-aa98-4791-b9b7-264cb278d626 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-a7305638-446a-410d-8da1-2a2e5ddfa894" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec  6 03:21:45 np0005548731 nova_compute[232433]: 2025-12-06 08:21:45.651 232437 DEBUG nova.network.neutron [req-14993eae-250f-47b1-9155-cb20e6789821 req-f6666a69-aa98-4791-b9b7-264cb278d626 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: a7305638-446a-410d-8da1-2a2e5ddfa894] Refreshing network info cache for port 07c2ee2d-4ad8-444c-9e9c-e2bd99a0bb9f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec  6 03:21:46 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e426 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:21:47 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:21:47 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:21:47 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:21:47.357 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:21:47 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:21:47 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 03:21:47 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:21:47.624 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 03:21:47 np0005548731 nova_compute[232433]: 2025-12-06 08:21:47.876 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 03:21:47 np0005548731 podman[342871]: 2025-12-06 08:21:47.891454915 +0000 UTC m=+0.056421519 container health_status 26c2ce9bdb83c6db5ccad7f5b2a7cb4e9354ab0235ad2f942ea42c8feda2142b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible)
Dec  6 03:21:47 np0005548731 podman[342873]: 2025-12-06 08:21:47.908240385 +0000 UTC m=+0.064898186 container health_status ae4fd08cdec31d6ae2a6b91ffff7deb6ca5a8375cb52ee4b685cc6f25796824e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, container_name=multipathd)
Dec  6 03:21:47 np0005548731 podman[342872]: 2025-12-06 08:21:47.9231975 +0000 UTC m=+0.085742255 container health_status a118c3becf56cfbcae208065771ebf10a65162bb7b96389971293e534823fb78 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Dec  6 03:21:48 np0005548731 nova_compute[232433]: 2025-12-06 08:21:48.103 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 03:21:48 np0005548731 nova_compute[232433]: 2025-12-06 08:21:48.464 232437 DEBUG nova.network.neutron [req-14993eae-250f-47b1-9155-cb20e6789821 req-f6666a69-aa98-4791-b9b7-264cb278d626 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: a7305638-446a-410d-8da1-2a2e5ddfa894] Updated VIF entry in instance network info cache for port 07c2ee2d-4ad8-444c-9e9c-e2bd99a0bb9f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec  6 03:21:48 np0005548731 nova_compute[232433]: 2025-12-06 08:21:48.465 232437 DEBUG nova.network.neutron [req-14993eae-250f-47b1-9155-cb20e6789821 req-f6666a69-aa98-4791-b9b7-264cb278d626 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: a7305638-446a-410d-8da1-2a2e5ddfa894] Updating instance_info_cache with network_info: [{"id": "07c2ee2d-4ad8-444c-9e9c-e2bd99a0bb9f", "address": "fa:16:3e:4e:ab:c2", "network": {"id": "b4ef1374-9c77-45a7-8776-50aa60c7d84a", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1664561964-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.204", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4b2dc4b8729f446a9c7ac69ca446f71d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap07c2ee2d-4a", "ovs_interfaceid": "07c2ee2d-4ad8-444c-9e9c-e2bd99a0bb9f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec  6 03:21:48 np0005548731 nova_compute[232433]: 2025-12-06 08:21:48.497 232437 DEBUG oslo_concurrency.lockutils [req-14993eae-250f-47b1-9155-cb20e6789821 req-f6666a69-aa98-4791-b9b7-264cb278d626 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-a7305638-446a-410d-8da1-2a2e5ddfa894" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec  6 03:21:49 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:21:49 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:21:49 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:21:49.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:21:49 np0005548731 ovn_controller[133927]: 2025-12-06T08:21:49Z|00148|pinctrl(ovn_pinctrl0)|WARN|DHCPREQUEST requested IP 10.100.0.3 does not match offer 10.100.0.6
Dec  6 03:21:49 np0005548731 ovn_controller[133927]: 2025-12-06T08:21:49Z|00149|pinctrl(ovn_pinctrl0)|INFO|DHCPNAK fa:16:3e:4e:ab:c2 10.100.0.6
Dec  6 03:21:49 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:21:49 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:21:49 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:21:49.627 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:21:50 np0005548731 ceph-mon[77458]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #172. Immutable memtables: 0.
Dec  6 03:21:50 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:21:50.637038) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec  6 03:21:50 np0005548731 ceph-mon[77458]: rocksdb: [db/flush_job.cc:856] [default] [JOB 109] Flushing memtable with next log file: 172
Dec  6 03:21:50 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765009310637082, "job": 109, "event": "flush_started", "num_memtables": 1, "num_entries": 2389, "num_deletes": 253, "total_data_size": 5674155, "memory_usage": 5750592, "flush_reason": "Manual Compaction"}
Dec  6 03:21:50 np0005548731 ceph-mon[77458]: rocksdb: [db/flush_job.cc:885] [default] [JOB 109] Level-0 flush table #173: started
Dec  6 03:21:50 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765009310662677, "cf_name": "default", "job": 109, "event": "table_file_creation", "file_number": 173, "file_size": 3696882, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 83500, "largest_seqno": 85884, "table_properties": {"data_size": 3687334, "index_size": 5977, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2501, "raw_key_size": 19993, "raw_average_key_size": 20, "raw_value_size": 3668179, "raw_average_value_size": 3762, "num_data_blocks": 261, "num_entries": 975, "num_filter_entries": 975, "num_deletions": 253, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765009106, "oldest_key_time": 1765009106, "file_creation_time": 1765009310, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bc01f9a1-fda8-4008-aee1-1fa5ff4371a1", "db_session_id": "CWBL8WMDM4D79BU86E9T", "orig_file_number": 173, "seqno_to_time_mapping": "N/A"}}
Dec  6 03:21:50 np0005548731 ceph-mon[77458]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 109] Flush lasted 25733 microseconds, and 7932 cpu microseconds.
Dec  6 03:21:50 np0005548731 ceph-mon[77458]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  6 03:21:50 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:21:50.662764) [db/flush_job.cc:967] [default] [JOB 109] Level-0 flush table #173: 3696882 bytes OK
Dec  6 03:21:50 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:21:50.662789) [db/memtable_list.cc:519] [default] Level-0 commit table #173 started
Dec  6 03:21:50 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:21:50.664593) [db/memtable_list.cc:722] [default] Level-0 commit table #173: memtable #1 done
Dec  6 03:21:50 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:21:50.664608) EVENT_LOG_v1 {"time_micros": 1765009310664603, "job": 109, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec  6 03:21:50 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:21:50.664624) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec  6 03:21:50 np0005548731 ceph-mon[77458]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 109] Try to delete WAL files size 5663755, prev total WAL file size 5663755, number of live WAL files 2.
Dec  6 03:21:50 np0005548731 ceph-mon[77458]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000169.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  6 03:21:50 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:21:50.666300) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730037323739' seq:72057594037927935, type:22 .. '7061786F730037353331' seq:0, type:0; will stop at (end)
Dec  6 03:21:50 np0005548731 ceph-mon[77458]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 110] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec  6 03:21:50 np0005548731 ceph-mon[77458]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 109 Base level 0, inputs: [173(3610KB)], [171(11MB)]
Dec  6 03:21:50 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765009310666369, "job": 110, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [173], "files_L6": [171], "score": -1, "input_data_size": 15452769, "oldest_snapshot_seqno": -1}
Dec  6 03:21:50 np0005548731 ceph-mon[77458]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 110] Generated table #174: 11282 keys, 13444660 bytes, temperature: kUnknown
Dec  6 03:21:50 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765009310771193, "cf_name": "default", "job": 110, "event": "table_file_creation", "file_number": 174, "file_size": 13444660, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13373490, "index_size": 41873, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 28229, "raw_key_size": 298656, "raw_average_key_size": 26, "raw_value_size": 13177889, "raw_average_value_size": 1168, "num_data_blocks": 1582, "num_entries": 11282, "num_filter_entries": 11282, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765002564, "oldest_key_time": 0, "file_creation_time": 1765009310, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bc01f9a1-fda8-4008-aee1-1fa5ff4371a1", "db_session_id": "CWBL8WMDM4D79BU86E9T", "orig_file_number": 174, "seqno_to_time_mapping": "N/A"}}
Dec  6 03:21:50 np0005548731 ceph-mon[77458]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  6 03:21:50 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:21:50.771448) [db/compaction/compaction_job.cc:1663] [default] [JOB 110] Compacted 1@0 + 1@6 files to L6 => 13444660 bytes
Dec  6 03:21:50 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:21:50.773463) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 147.3 rd, 128.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.5, 11.2 +0.0 blob) out(12.8 +0.0 blob), read-write-amplify(7.8) write-amplify(3.6) OK, records in: 11807, records dropped: 525 output_compression: NoCompression
Dec  6 03:21:50 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:21:50.773478) EVENT_LOG_v1 {"time_micros": 1765009310773471, "job": 110, "event": "compaction_finished", "compaction_time_micros": 104889, "compaction_time_cpu_micros": 30407, "output_level": 6, "num_output_files": 1, "total_output_size": 13444660, "num_input_records": 11807, "num_output_records": 11282, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec  6 03:21:50 np0005548731 ceph-mon[77458]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000173.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  6 03:21:50 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765009310774253, "job": 110, "event": "table_file_deletion", "file_number": 173}
Dec  6 03:21:50 np0005548731 ceph-mon[77458]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000171.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  6 03:21:50 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765009310776279, "job": 110, "event": "table_file_deletion", "file_number": 171}
Dec  6 03:21:50 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:21:50.666192) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 03:21:50 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:21:50.776379) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 03:21:50 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:21:50.776386) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 03:21:50 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:21:50.776387) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 03:21:50 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:21:50.776389) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 03:21:50 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:21:50.776391) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 03:21:51 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e426 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:21:51 np0005548731 nova_compute[232433]: 2025-12-06 08:21:51.105 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:21:51 np0005548731 nova_compute[232433]: 2025-12-06 08:21:51.105 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  6 03:21:51 np0005548731 nova_compute[232433]: 2025-12-06 08:21:51.105 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  6 03:21:51 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:21:51 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:21:51 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:21:51.361 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:21:51 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:21:51 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:21:51 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:21:51.630 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:21:52 np0005548731 nova_compute[232433]: 2025-12-06 08:21:52.879 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:21:53 np0005548731 nova_compute[232433]: 2025-12-06 08:21:53.105 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:21:53 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:21:53 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:21:53 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:21:53.363 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:21:53 np0005548731 nova_compute[232433]: 2025-12-06 08:21:53.441 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Acquiring lock "refresh_cache-a7305638-446a-410d-8da1-2a2e5ddfa894" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 03:21:53 np0005548731 nova_compute[232433]: 2025-12-06 08:21:53.441 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Acquired lock "refresh_cache-a7305638-446a-410d-8da1-2a2e5ddfa894" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 03:21:53 np0005548731 nova_compute[232433]: 2025-12-06 08:21:53.441 232437 DEBUG nova.network.neutron [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] [instance: a7305638-446a-410d-8da1-2a2e5ddfa894] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Dec  6 03:21:53 np0005548731 nova_compute[232433]: 2025-12-06 08:21:53.441 232437 DEBUG nova.objects.instance [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lazy-loading 'info_cache' on Instance uuid a7305638-446a-410d-8da1-2a2e5ddfa894 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 03:21:53 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:21:53 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:21:53 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:21:53.633 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:21:54 np0005548731 ovn_controller[133927]: 2025-12-06T08:21:54Z|00150|pinctrl(ovn_pinctrl0)|WARN|DHCPREQUEST requested IP 10.100.0.3 does not match offer 10.100.0.6
Dec  6 03:21:54 np0005548731 ovn_controller[133927]: 2025-12-06T08:21:54Z|00151|pinctrl(ovn_pinctrl0)|INFO|DHCPNAK fa:16:3e:4e:ab:c2 10.100.0.6
Dec  6 03:21:54 np0005548731 ovn_controller[133927]: 2025-12-06T08:21:54Z|00152|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:4e:ab:c2 10.100.0.6
Dec  6 03:21:54 np0005548731 ovn_controller[133927]: 2025-12-06T08:21:54Z|00153|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:4e:ab:c2 10.100.0.6
Dec  6 03:21:55 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:21:55 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:21:55 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:21:55.366 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:21:55 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:21:55 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:21:55 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:21:55.636 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:21:56 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e426 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:21:57 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:21:57 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 03:21:57 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:21:57.367 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 03:21:57 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:21:57 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:21:57 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:21:57.642 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:21:57 np0005548731 nova_compute[232433]: 2025-12-06 08:21:57.882 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:21:58 np0005548731 nova_compute[232433]: 2025-12-06 08:21:58.107 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:21:58 np0005548731 nova_compute[232433]: 2025-12-06 08:21:58.466 232437 DEBUG nova.network.neutron [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] [instance: a7305638-446a-410d-8da1-2a2e5ddfa894] Updating instance_info_cache with network_info: [{"id": "07c2ee2d-4ad8-444c-9e9c-e2bd99a0bb9f", "address": "fa:16:3e:4e:ab:c2", "network": {"id": "b4ef1374-9c77-45a7-8776-50aa60c7d84a", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1664561964-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.204", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4b2dc4b8729f446a9c7ac69ca446f71d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap07c2ee2d-4a", "ovs_interfaceid": "07c2ee2d-4ad8-444c-9e9c-e2bd99a0bb9f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 03:21:58 np0005548731 nova_compute[232433]: 2025-12-06 08:21:58.850 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Releasing lock "refresh_cache-a7305638-446a-410d-8da1-2a2e5ddfa894" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 03:21:58 np0005548731 nova_compute[232433]: 2025-12-06 08:21:58.850 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] [instance: a7305638-446a-410d-8da1-2a2e5ddfa894] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Dec  6 03:21:58 np0005548731 nova_compute[232433]: 2025-12-06 08:21:58.850 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:21:58 np0005548731 nova_compute[232433]: 2025-12-06 08:21:58.851 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:21:59 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:21:59 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:21:59 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:21:59.370 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:21:59 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:21:59 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:21:59 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:21:59.646 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:22:00 np0005548731 nova_compute[232433]: 2025-12-06 08:22:00.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:22:00 np0005548731 nova_compute[232433]: 2025-12-06 08:22:00.105 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:22:00 np0005548731 nova_compute[232433]: 2025-12-06 08:22:00.105 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:22:00 np0005548731 nova_compute[232433]: 2025-12-06 08:22:00.105 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  6 03:22:00 np0005548731 nova_compute[232433]: 2025-12-06 08:22:00.105 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:22:00 np0005548731 nova_compute[232433]: 2025-12-06 08:22:00.745 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:22:00 np0005548731 nova_compute[232433]: 2025-12-06 08:22:00.745 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:22:00 np0005548731 nova_compute[232433]: 2025-12-06 08:22:00.746 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:22:00 np0005548731 nova_compute[232433]: 2025-12-06 08:22:00.746 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  6 03:22:00 np0005548731 nova_compute[232433]: 2025-12-06 08:22:00.746 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 03:22:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:22:00.926 143965 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:22:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:22:00.927 143965 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:22:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:22:00.928 143965 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:22:01 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e426 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:22:01 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 03:22:01 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2736176045' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 03:22:01 np0005548731 nova_compute[232433]: 2025-12-06 08:22:01.227 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.481s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 03:22:01 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:22:01 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:22:01 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:22:01.371 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:22:01 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:22:01 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:22:01 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:22:01.648 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:22:02 np0005548731 nova_compute[232433]: 2025-12-06 08:22:02.884 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:22:03 np0005548731 nova_compute[232433]: 2025-12-06 08:22:03.109 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:22:03 np0005548731 nova_compute[232433]: 2025-12-06 08:22:03.221 232437 DEBUG nova.virt.libvirt.driver [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] skipping disk for instance-000000d8 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  6 03:22:03 np0005548731 nova_compute[232433]: 2025-12-06 08:22:03.222 232437 DEBUG nova.virt.libvirt.driver [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] skipping disk for instance-000000d8 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  6 03:22:03 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:22:03 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:22:03 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:22:03.372 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:22:03 np0005548731 nova_compute[232433]: 2025-12-06 08:22:03.380 232437 WARNING nova.virt.libvirt.driver [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  6 03:22:03 np0005548731 nova_compute[232433]: 2025-12-06 08:22:03.381 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=3964MB free_disk=20.942432403564453GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  6 03:22:03 np0005548731 nova_compute[232433]: 2025-12-06 08:22:03.382 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:22:03 np0005548731 nova_compute[232433]: 2025-12-06 08:22:03.382 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:22:03 np0005548731 nova_compute[232433]: 2025-12-06 08:22:03.538 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Instance a7305638-446a-410d-8da1-2a2e5ddfa894 actively managed on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  6 03:22:03 np0005548731 nova_compute[232433]: 2025-12-06 08:22:03.539 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  6 03:22:03 np0005548731 nova_compute[232433]: 2025-12-06 08:22:03.539 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  6 03:22:03 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:22:03 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:22:03 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:22:03.651 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:22:03 np0005548731 nova_compute[232433]: 2025-12-06 08:22:03.732 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 03:22:04 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:22:04.155 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=103, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:ec:b3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '32:72:e7:89:e0:7d'}, ipsec=False) old=SB_Global(nb_cfg=102) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 03:22:04 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:22:04.157 143965 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec  6 03:22:04 np0005548731 nova_compute[232433]: 2025-12-06 08:22:04.156 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:22:04 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 03:22:04 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3353675565' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 03:22:04 np0005548731 nova_compute[232433]: 2025-12-06 08:22:04.210 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.478s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 03:22:04 np0005548731 nova_compute[232433]: 2025-12-06 08:22:04.216 232437 DEBUG nova.compute.provider_tree [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Inventory has not changed in ProviderTree for provider: 6d00757a-082f-486d-ae84-869a2ba2e6e7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  6 03:22:04 np0005548731 nova_compute[232433]: 2025-12-06 08:22:04.242 232437 DEBUG nova.scheduler.client.report [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Inventory has not changed for provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  6 03:22:04 np0005548731 nova_compute[232433]: 2025-12-06 08:22:04.300 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  6 03:22:04 np0005548731 nova_compute[232433]: 2025-12-06 08:22:04.301 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.919s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:22:04 np0005548731 nova_compute[232433]: 2025-12-06 08:22:04.542 232437 DEBUG nova.compute.manager [req-4c6e12eb-3f4c-4c9a-8829-2e7734855ddc req-71c2d597-8fe2-4856-b3c2-f54aa2f40f8f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: a7305638-446a-410d-8da1-2a2e5ddfa894] Received event network-changed-07c2ee2d-4ad8-444c-9e9c-e2bd99a0bb9f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 03:22:04 np0005548731 nova_compute[232433]: 2025-12-06 08:22:04.543 232437 DEBUG nova.compute.manager [req-4c6e12eb-3f4c-4c9a-8829-2e7734855ddc req-71c2d597-8fe2-4856-b3c2-f54aa2f40f8f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: a7305638-446a-410d-8da1-2a2e5ddfa894] Refreshing instance network info cache due to event network-changed-07c2ee2d-4ad8-444c-9e9c-e2bd99a0bb9f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  6 03:22:04 np0005548731 nova_compute[232433]: 2025-12-06 08:22:04.543 232437 DEBUG oslo_concurrency.lockutils [req-4c6e12eb-3f4c-4c9a-8829-2e7734855ddc req-71c2d597-8fe2-4856-b3c2-f54aa2f40f8f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-a7305638-446a-410d-8da1-2a2e5ddfa894" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 03:22:04 np0005548731 nova_compute[232433]: 2025-12-06 08:22:04.543 232437 DEBUG oslo_concurrency.lockutils [req-4c6e12eb-3f4c-4c9a-8829-2e7734855ddc req-71c2d597-8fe2-4856-b3c2-f54aa2f40f8f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-a7305638-446a-410d-8da1-2a2e5ddfa894" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 03:22:04 np0005548731 nova_compute[232433]: 2025-12-06 08:22:04.543 232437 DEBUG nova.network.neutron [req-4c6e12eb-3f4c-4c9a-8829-2e7734855ddc req-71c2d597-8fe2-4856-b3c2-f54aa2f40f8f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: a7305638-446a-410d-8da1-2a2e5ddfa894] Refreshing network info cache for port 07c2ee2d-4ad8-444c-9e9c-e2bd99a0bb9f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  6 03:22:04 np0005548731 nova_compute[232433]: 2025-12-06 08:22:04.718 232437 DEBUG oslo_concurrency.lockutils [None req-f168d78c-c9de-4300-8b77-bf12e1521ff8 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Acquiring lock "a7305638-446a-410d-8da1-2a2e5ddfa894" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:22:04 np0005548731 nova_compute[232433]: 2025-12-06 08:22:04.719 232437 DEBUG oslo_concurrency.lockutils [None req-f168d78c-c9de-4300-8b77-bf12e1521ff8 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Lock "a7305638-446a-410d-8da1-2a2e5ddfa894" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:22:04 np0005548731 nova_compute[232433]: 2025-12-06 08:22:04.719 232437 DEBUG oslo_concurrency.lockutils [None req-f168d78c-c9de-4300-8b77-bf12e1521ff8 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Acquiring lock "a7305638-446a-410d-8da1-2a2e5ddfa894-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:22:04 np0005548731 nova_compute[232433]: 2025-12-06 08:22:04.720 232437 DEBUG oslo_concurrency.lockutils [None req-f168d78c-c9de-4300-8b77-bf12e1521ff8 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Lock "a7305638-446a-410d-8da1-2a2e5ddfa894-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:22:04 np0005548731 nova_compute[232433]: 2025-12-06 08:22:04.720 232437 DEBUG oslo_concurrency.lockutils [None req-f168d78c-c9de-4300-8b77-bf12e1521ff8 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Lock "a7305638-446a-410d-8da1-2a2e5ddfa894-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:22:04 np0005548731 nova_compute[232433]: 2025-12-06 08:22:04.721 232437 INFO nova.compute.manager [None req-f168d78c-c9de-4300-8b77-bf12e1521ff8 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] [instance: a7305638-446a-410d-8da1-2a2e5ddfa894] Terminating instance#033[00m
Dec  6 03:22:04 np0005548731 nova_compute[232433]: 2025-12-06 08:22:04.722 232437 DEBUG nova.compute.manager [None req-f168d78c-c9de-4300-8b77-bf12e1521ff8 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] [instance: a7305638-446a-410d-8da1-2a2e5ddfa894] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec  6 03:22:04 np0005548731 kernel: tap07c2ee2d-4a (unregistering): left promiscuous mode
Dec  6 03:22:04 np0005548731 NetworkManager[49182]: <info>  [1765009324.8114] device (tap07c2ee2d-4a): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec  6 03:22:04 np0005548731 ovn_controller[133927]: 2025-12-06T08:22:04Z|01103|binding|INFO|Releasing lport 07c2ee2d-4ad8-444c-9e9c-e2bd99a0bb9f from this chassis (sb_readonly=0)
Dec  6 03:22:04 np0005548731 ovn_controller[133927]: 2025-12-06T08:22:04Z|01104|binding|INFO|Setting lport 07c2ee2d-4ad8-444c-9e9c-e2bd99a0bb9f down in Southbound
Dec  6 03:22:04 np0005548731 nova_compute[232433]: 2025-12-06 08:22:04.820 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:22:04 np0005548731 ovn_controller[133927]: 2025-12-06T08:22:04Z|01105|binding|INFO|Removing iface tap07c2ee2d-4a ovn-installed in OVS
Dec  6 03:22:04 np0005548731 nova_compute[232433]: 2025-12-06 08:22:04.824 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:22:04 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:22:04.840 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:4e:ab:c2 10.100.0.6'], port_security=['fa:16:3e:4e:ab:c2 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'a7305638-446a-410d-8da1-2a2e5ddfa894', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b4ef1374-9c77-45a7-8776-50aa60c7d84a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4b2dc4b8729f446a9c7ac69ca446f71d', 'neutron:revision_number': '4', 'neutron:security_group_ids': '8cd07b30-a335-4570-957e-3674d9a06120', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=60eec70d-8996-4225-9077-6d0f2705560a, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], logical_port=07c2ee2d-4ad8-444c-9e9c-e2bd99a0bb9f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 03:22:04 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:22:04.841 143965 INFO neutron.agent.ovn.metadata.agent [-] Port 07c2ee2d-4ad8-444c-9e9c-e2bd99a0bb9f in datapath b4ef1374-9c77-45a7-8776-50aa60c7d84a unbound from our chassis#033[00m
Dec  6 03:22:04 np0005548731 nova_compute[232433]: 2025-12-06 08:22:04.843 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:22:04 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:22:04.842 143965 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network b4ef1374-9c77-45a7-8776-50aa60c7d84a, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec  6 03:22:04 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:22:04.844 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[c957e7c9-6572-43a1-bcbb-4af64352017d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:22:04 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:22:04.845 143965 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-b4ef1374-9c77-45a7-8776-50aa60c7d84a namespace which is not needed anymore#033[00m
Dec  6 03:22:04 np0005548731 systemd[1]: machine-qemu\x2d112\x2dinstance\x2d000000d8.scope: Deactivated successfully.
Dec  6 03:22:04 np0005548731 systemd[1]: machine-qemu\x2d112\x2dinstance\x2d000000d8.scope: Consumed 14.731s CPU time.
Dec  6 03:22:04 np0005548731 systemd-machined[195355]: Machine qemu-112-instance-000000d8 terminated.
Dec  6 03:22:04 np0005548731 nova_compute[232433]: 2025-12-06 08:22:04.941 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:22:04 np0005548731 nova_compute[232433]: 2025-12-06 08:22:04.946 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:22:04 np0005548731 nova_compute[232433]: 2025-12-06 08:22:04.954 232437 INFO nova.virt.libvirt.driver [-] [instance: a7305638-446a-410d-8da1-2a2e5ddfa894] Instance destroyed successfully.#033[00m
Dec  6 03:22:04 np0005548731 nova_compute[232433]: 2025-12-06 08:22:04.955 232437 DEBUG nova.objects.instance [None req-f168d78c-c9de-4300-8b77-bf12e1521ff8 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Lazy-loading 'resources' on Instance uuid a7305638-446a-410d-8da1-2a2e5ddfa894 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 03:22:04 np0005548731 nova_compute[232433]: 2025-12-06 08:22:04.986 232437 DEBUG nova.virt.libvirt.vif [None req-f168d78c-c9de-4300-8b77-bf12e1521ff8 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T08:21:22Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-server-1240260262',display_name='tempest-TestVolumeBootPattern-server-1240260262',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-testvolumebootpattern-server-1240260262',id=216,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOvUl3Ab4ESWezLZ9mehuTavMygXDhT0chVOH5OGNfzBJ6GphwodjSkpQcbaa1ADoOOfJ6+3BcKIVxorR3UxI6tyiW7Q3SFHkhHBjCjD54foFQ6i6sfCU/p7OcBbQ12cuw==',key_name='tempest-TestVolumeBootPattern-2075529576',keypairs=<?>,launch_index=0,launched_at=2025-12-06T08:21:35Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='4b2dc4b8729f446a9c7ac69ca446f71d',ramdisk_id='',reservation_id='r-s18kw14p',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-TestVolumeBootPattern-97496240',owner_user_name='tempest-TestVolumeBootPattern-97496240-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T08:21:35Z,user_data=None,user_id='8e8feb4540af4e2caa45a88a9202dbe2',uuid=a7305638-446a-410d-8da1-2a2e5ddfa894,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "07c2ee2d-4ad8-444c-9e9c-e2bd99a0bb9f", "address": "fa:16:3e:4e:ab:c2", "network": {"id": "b4ef1374-9c77-45a7-8776-50aa60c7d84a", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1664561964-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": 
[{"address": "192.168.122.204", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4b2dc4b8729f446a9c7ac69ca446f71d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap07c2ee2d-4a", "ovs_interfaceid": "07c2ee2d-4ad8-444c-9e9c-e2bd99a0bb9f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec  6 03:22:04 np0005548731 nova_compute[232433]: 2025-12-06 08:22:04.987 232437 DEBUG nova.network.os_vif_util [None req-f168d78c-c9de-4300-8b77-bf12e1521ff8 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Converting VIF {"id": "07c2ee2d-4ad8-444c-9e9c-e2bd99a0bb9f", "address": "fa:16:3e:4e:ab:c2", "network": {"id": "b4ef1374-9c77-45a7-8776-50aa60c7d84a", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1664561964-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.204", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4b2dc4b8729f446a9c7ac69ca446f71d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap07c2ee2d-4a", "ovs_interfaceid": "07c2ee2d-4ad8-444c-9e9c-e2bd99a0bb9f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 03:22:04 np0005548731 nova_compute[232433]: 2025-12-06 08:22:04.988 232437 DEBUG nova.network.os_vif_util [None req-f168d78c-c9de-4300-8b77-bf12e1521ff8 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:4e:ab:c2,bridge_name='br-int',has_traffic_filtering=True,id=07c2ee2d-4ad8-444c-9e9c-e2bd99a0bb9f,network=Network(b4ef1374-9c77-45a7-8776-50aa60c7d84a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap07c2ee2d-4a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 03:22:04 np0005548731 nova_compute[232433]: 2025-12-06 08:22:04.988 232437 DEBUG os_vif [None req-f168d78c-c9de-4300-8b77-bf12e1521ff8 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:4e:ab:c2,bridge_name='br-int',has_traffic_filtering=True,id=07c2ee2d-4ad8-444c-9e9c-e2bd99a0bb9f,network=Network(b4ef1374-9c77-45a7-8776-50aa60c7d84a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap07c2ee2d-4a') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec  6 03:22:04 np0005548731 nova_compute[232433]: 2025-12-06 08:22:04.990 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:22:04 np0005548731 nova_compute[232433]: 2025-12-06 08:22:04.990 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap07c2ee2d-4a, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 03:22:04 np0005548731 nova_compute[232433]: 2025-12-06 08:22:04.992 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:22:04 np0005548731 nova_compute[232433]: 2025-12-06 08:22:04.994 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:22:04 np0005548731 neutron-haproxy-ovnmeta-b4ef1374-9c77-45a7-8776-50aa60c7d84a[342748]: [NOTICE]   (342752) : haproxy version is 2.8.14-c23fe91
Dec  6 03:22:04 np0005548731 neutron-haproxy-ovnmeta-b4ef1374-9c77-45a7-8776-50aa60c7d84a[342748]: [NOTICE]   (342752) : path to executable is /usr/sbin/haproxy
Dec  6 03:22:04 np0005548731 neutron-haproxy-ovnmeta-b4ef1374-9c77-45a7-8776-50aa60c7d84a[342748]: [WARNING]  (342752) : Exiting Master process...
Dec  6 03:22:04 np0005548731 neutron-haproxy-ovnmeta-b4ef1374-9c77-45a7-8776-50aa60c7d84a[342748]: [WARNING]  (342752) : Exiting Master process...
Dec  6 03:22:04 np0005548731 nova_compute[232433]: 2025-12-06 08:22:04.997 232437 INFO os_vif [None req-f168d78c-c9de-4300-8b77-bf12e1521ff8 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:4e:ab:c2,bridge_name='br-int',has_traffic_filtering=True,id=07c2ee2d-4ad8-444c-9e9c-e2bd99a0bb9f,network=Network(b4ef1374-9c77-45a7-8776-50aa60c7d84a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap07c2ee2d-4a')#033[00m
Dec  6 03:22:04 np0005548731 neutron-haproxy-ovnmeta-b4ef1374-9c77-45a7-8776-50aa60c7d84a[342748]: [ALERT]    (342752) : Current worker (342754) exited with code 143 (Terminated)
Dec  6 03:22:04 np0005548731 neutron-haproxy-ovnmeta-b4ef1374-9c77-45a7-8776-50aa60c7d84a[342748]: [WARNING]  (342752) : All workers exited. Exiting... (0)
Dec  6 03:22:05 np0005548731 systemd[1]: libpod-f2e4de8a3a96f5a9a3cabab62f207dedf4368fca1746d13afb2fe5508f140980.scope: Deactivated successfully.
Dec  6 03:22:05 np0005548731 conmon[342748]: conmon f2e4de8a3a96f5a9a3ca <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-f2e4de8a3a96f5a9a3cabab62f207dedf4368fca1746d13afb2fe5508f140980.scope/container/memory.events
Dec  6 03:22:05 np0005548731 podman[343009]: 2025-12-06 08:22:05.009047126 +0000 UTC m=+0.080908257 container died f2e4de8a3a96f5a9a3cabab62f207dedf4368fca1746d13afb2fe5508f140980 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b4ef1374-9c77-45a7-8776-50aa60c7d84a, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec  6 03:22:05 np0005548731 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-f2e4de8a3a96f5a9a3cabab62f207dedf4368fca1746d13afb2fe5508f140980-userdata-shm.mount: Deactivated successfully.
Dec  6 03:22:05 np0005548731 systemd[1]: var-lib-containers-storage-overlay-282b369c698f3105ce4e15a9e30c439516a073b6ea60d3d92f48bf872ea33fa3-merged.mount: Deactivated successfully.
Dec  6 03:22:05 np0005548731 podman[343009]: 2025-12-06 08:22:05.051235856 +0000 UTC m=+0.123096987 container cleanup f2e4de8a3a96f5a9a3cabab62f207dedf4368fca1746d13afb2fe5508f140980 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b4ef1374-9c77-45a7-8776-50aa60c7d84a, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Dec  6 03:22:05 np0005548731 systemd[1]: libpod-conmon-f2e4de8a3a96f5a9a3cabab62f207dedf4368fca1746d13afb2fe5508f140980.scope: Deactivated successfully.
Dec  6 03:22:05 np0005548731 podman[343064]: 2025-12-06 08:22:05.286444198 +0000 UTC m=+0.210916981 container remove f2e4de8a3a96f5a9a3cabab62f207dedf4368fca1746d13afb2fe5508f140980 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b4ef1374-9c77-45a7-8776-50aa60c7d84a, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  6 03:22:05 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:22:05.295 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[fec51673-f437-4d8c-ab73-6c7f8b5cd1ce]: (4, ('Sat Dec  6 08:22:04 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-b4ef1374-9c77-45a7-8776-50aa60c7d84a (f2e4de8a3a96f5a9a3cabab62f207dedf4368fca1746d13afb2fe5508f140980)\nf2e4de8a3a96f5a9a3cabab62f207dedf4368fca1746d13afb2fe5508f140980\nSat Dec  6 08:22:05 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-b4ef1374-9c77-45a7-8776-50aa60c7d84a (f2e4de8a3a96f5a9a3cabab62f207dedf4368fca1746d13afb2fe5508f140980)\nf2e4de8a3a96f5a9a3cabab62f207dedf4368fca1746d13afb2fe5508f140980\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:22:05 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:22:05.297 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[9df4b2dc-daec-48bc-a760-df23b1c7a773]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:22:05 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:22:05.298 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb4ef1374-90, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 03:22:05 np0005548731 nova_compute[232433]: 2025-12-06 08:22:05.300 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:22:05 np0005548731 kernel: tapb4ef1374-90: left promiscuous mode
Dec  6 03:22:05 np0005548731 nova_compute[232433]: 2025-12-06 08:22:05.303 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:22:05 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:22:05.306 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[c68587de-cfd6-441f-ac4e-886aed035fa9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:22:05 np0005548731 nova_compute[232433]: 2025-12-06 08:22:05.315 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:22:05 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:22:05.329 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[c0806b50-f4b8-4b5c-b79f-4263bb48f907]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:22:05 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:22:05.330 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[a4cc7c78-c76a-49ea-ab9b-dbbd9adcc3cc]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:22:05 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:22:05.352 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[c13ac2f4-0e19-41d1-b35f-3a65056c5f0d]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 955638, 'reachable_time': 17293, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 343129, 'error': None, 'target': 'ovnmeta-b4ef1374-9c77-45a7-8776-50aa60c7d84a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:22:05 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:22:05.354 144079 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-b4ef1374-9c77-45a7-8776-50aa60c7d84a deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Dec  6 03:22:05 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:22:05.355 144079 DEBUG oslo.privsep.daemon [-] privsep: reply[a2f66d12-053a-42a0-b899-a8276c992fd2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:22:05 np0005548731 systemd[1]: run-netns-ovnmeta\x2db4ef1374\x2d9c77\x2d45a7\x2d8776\x2d50aa60c7d84a.mount: Deactivated successfully.
Dec  6 03:22:05 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:22:05 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:22:05 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:22:05.373 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:22:05 np0005548731 nova_compute[232433]: 2025-12-06 08:22:05.627 232437 INFO nova.virt.libvirt.driver [None req-f168d78c-c9de-4300-8b77-bf12e1521ff8 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] [instance: a7305638-446a-410d-8da1-2a2e5ddfa894] Deleting instance files /var/lib/nova/instances/a7305638-446a-410d-8da1-2a2e5ddfa894_del#033[00m
Dec  6 03:22:05 np0005548731 nova_compute[232433]: 2025-12-06 08:22:05.628 232437 INFO nova.virt.libvirt.driver [None req-f168d78c-c9de-4300-8b77-bf12e1521ff8 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] [instance: a7305638-446a-410d-8da1-2a2e5ddfa894] Deletion of /var/lib/nova/instances/a7305638-446a-410d-8da1-2a2e5ddfa894_del complete#033[00m
Dec  6 03:22:05 np0005548731 nova_compute[232433]: 2025-12-06 08:22:05.636 232437 DEBUG nova.compute.manager [req-7adcde9c-63ac-4d84-a93c-085c34a6f3ce req-9e090b52-a851-4a51-a276-ec7a37709981 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: a7305638-446a-410d-8da1-2a2e5ddfa894] Received event network-vif-unplugged-07c2ee2d-4ad8-444c-9e9c-e2bd99a0bb9f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 03:22:05 np0005548731 nova_compute[232433]: 2025-12-06 08:22:05.637 232437 DEBUG oslo_concurrency.lockutils [req-7adcde9c-63ac-4d84-a93c-085c34a6f3ce req-9e090b52-a851-4a51-a276-ec7a37709981 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "a7305638-446a-410d-8da1-2a2e5ddfa894-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:22:05 np0005548731 nova_compute[232433]: 2025-12-06 08:22:05.637 232437 DEBUG oslo_concurrency.lockutils [req-7adcde9c-63ac-4d84-a93c-085c34a6f3ce req-9e090b52-a851-4a51-a276-ec7a37709981 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "a7305638-446a-410d-8da1-2a2e5ddfa894-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:22:05 np0005548731 nova_compute[232433]: 2025-12-06 08:22:05.637 232437 DEBUG oslo_concurrency.lockutils [req-7adcde9c-63ac-4d84-a93c-085c34a6f3ce req-9e090b52-a851-4a51-a276-ec7a37709981 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "a7305638-446a-410d-8da1-2a2e5ddfa894-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:22:05 np0005548731 nova_compute[232433]: 2025-12-06 08:22:05.638 232437 DEBUG nova.compute.manager [req-7adcde9c-63ac-4d84-a93c-085c34a6f3ce req-9e090b52-a851-4a51-a276-ec7a37709981 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: a7305638-446a-410d-8da1-2a2e5ddfa894] No waiting events found dispatching network-vif-unplugged-07c2ee2d-4ad8-444c-9e9c-e2bd99a0bb9f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 03:22:05 np0005548731 nova_compute[232433]: 2025-12-06 08:22:05.638 232437 DEBUG nova.compute.manager [req-7adcde9c-63ac-4d84-a93c-085c34a6f3ce req-9e090b52-a851-4a51-a276-ec7a37709981 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: a7305638-446a-410d-8da1-2a2e5ddfa894] Received event network-vif-unplugged-07c2ee2d-4ad8-444c-9e9c-e2bd99a0bb9f for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Dec  6 03:22:05 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:22:05 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:22:05 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:22:05.653 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:22:05 np0005548731 nova_compute[232433]: 2025-12-06 08:22:05.781 232437 INFO nova.compute.manager [None req-f168d78c-c9de-4300-8b77-bf12e1521ff8 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] [instance: a7305638-446a-410d-8da1-2a2e5ddfa894] Took 1.06 seconds to destroy the instance on the hypervisor.#033[00m
Dec  6 03:22:05 np0005548731 nova_compute[232433]: 2025-12-06 08:22:05.782 232437 DEBUG oslo.service.loopingcall [None req-f168d78c-c9de-4300-8b77-bf12e1521ff8 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Dec  6 03:22:05 np0005548731 nova_compute[232433]: 2025-12-06 08:22:05.782 232437 DEBUG nova.compute.manager [-] [instance: a7305638-446a-410d-8da1-2a2e5ddfa894] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Dec  6 03:22:05 np0005548731 nova_compute[232433]: 2025-12-06 08:22:05.782 232437 DEBUG nova.network.neutron [-] [instance: a7305638-446a-410d-8da1-2a2e5ddfa894] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Dec  6 03:22:06 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e426 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:22:07 np0005548731 nova_compute[232433]: 2025-12-06 08:22:07.301 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:22:07 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:22:07 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:22:07 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:22:07.376 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:22:07 np0005548731 nova_compute[232433]: 2025-12-06 08:22:07.528 232437 DEBUG nova.network.neutron [-] [instance: a7305638-446a-410d-8da1-2a2e5ddfa894] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 03:22:07 np0005548731 nova_compute[232433]: 2025-12-06 08:22:07.569 232437 INFO nova.compute.manager [-] [instance: a7305638-446a-410d-8da1-2a2e5ddfa894] Took 1.79 seconds to deallocate network for instance.#033[00m
Dec  6 03:22:07 np0005548731 nova_compute[232433]: 2025-12-06 08:22:07.634 232437 DEBUG nova.network.neutron [req-4c6e12eb-3f4c-4c9a-8829-2e7734855ddc req-71c2d597-8fe2-4856-b3c2-f54aa2f40f8f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: a7305638-446a-410d-8da1-2a2e5ddfa894] Updated VIF entry in instance network info cache for port 07c2ee2d-4ad8-444c-9e9c-e2bd99a0bb9f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  6 03:22:07 np0005548731 nova_compute[232433]: 2025-12-06 08:22:07.634 232437 DEBUG nova.network.neutron [req-4c6e12eb-3f4c-4c9a-8829-2e7734855ddc req-71c2d597-8fe2-4856-b3c2-f54aa2f40f8f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: a7305638-446a-410d-8da1-2a2e5ddfa894] Updating instance_info_cache with network_info: [{"id": "07c2ee2d-4ad8-444c-9e9c-e2bd99a0bb9f", "address": "fa:16:3e:4e:ab:c2", "network": {"id": "b4ef1374-9c77-45a7-8776-50aa60c7d84a", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1664561964-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4b2dc4b8729f446a9c7ac69ca446f71d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap07c2ee2d-4a", "ovs_interfaceid": "07c2ee2d-4ad8-444c-9e9c-e2bd99a0bb9f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 03:22:07 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:22:07 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:22:07 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:22:07.656 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:22:07 np0005548731 nova_compute[232433]: 2025-12-06 08:22:07.664 232437 DEBUG oslo_concurrency.lockutils [req-4c6e12eb-3f4c-4c9a-8829-2e7734855ddc req-71c2d597-8fe2-4856-b3c2-f54aa2f40f8f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-a7305638-446a-410d-8da1-2a2e5ddfa894" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 03:22:07 np0005548731 nova_compute[232433]: 2025-12-06 08:22:07.693 232437 DEBUG nova.compute.manager [req-262eec73-cf2d-4398-970c-df45f64bd433 req-e7a4c175-2313-4cdd-b79d-c1c92d4ec326 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: a7305638-446a-410d-8da1-2a2e5ddfa894] Received event network-vif-deleted-07c2ee2d-4ad8-444c-9e9c-e2bd99a0bb9f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 03:22:07 np0005548731 nova_compute[232433]: 2025-12-06 08:22:07.694 232437 INFO nova.compute.manager [req-262eec73-cf2d-4398-970c-df45f64bd433 req-e7a4c175-2313-4cdd-b79d-c1c92d4ec326 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: a7305638-446a-410d-8da1-2a2e5ddfa894] Neutron deleted interface 07c2ee2d-4ad8-444c-9e9c-e2bd99a0bb9f; detaching it from the instance and deleting it from the info cache#033[00m
Dec  6 03:22:07 np0005548731 nova_compute[232433]: 2025-12-06 08:22:07.694 232437 DEBUG nova.network.neutron [req-262eec73-cf2d-4398-970c-df45f64bd433 req-e7a4c175-2313-4cdd-b79d-c1c92d4ec326 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: a7305638-446a-410d-8da1-2a2e5ddfa894] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 03:22:07 np0005548731 nova_compute[232433]: 2025-12-06 08:22:07.722 232437 DEBUG nova.compute.manager [req-262eec73-cf2d-4398-970c-df45f64bd433 req-e7a4c175-2313-4cdd-b79d-c1c92d4ec326 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: a7305638-446a-410d-8da1-2a2e5ddfa894] Detach interface failed, port_id=07c2ee2d-4ad8-444c-9e9c-e2bd99a0bb9f, reason: Instance a7305638-446a-410d-8da1-2a2e5ddfa894 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882#033[00m
Dec  6 03:22:07 np0005548731 nova_compute[232433]: 2025-12-06 08:22:07.845 232437 DEBUG nova.compute.manager [req-405c2918-6ca2-491e-be5f-acfa2f9a6f07 req-97ed5b12-7e14-491f-8f6b-bc14d91e6bc8 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: a7305638-446a-410d-8da1-2a2e5ddfa894] Received event network-vif-plugged-07c2ee2d-4ad8-444c-9e9c-e2bd99a0bb9f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 03:22:07 np0005548731 nova_compute[232433]: 2025-12-06 08:22:07.846 232437 DEBUG oslo_concurrency.lockutils [req-405c2918-6ca2-491e-be5f-acfa2f9a6f07 req-97ed5b12-7e14-491f-8f6b-bc14d91e6bc8 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "a7305638-446a-410d-8da1-2a2e5ddfa894-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:22:07 np0005548731 nova_compute[232433]: 2025-12-06 08:22:07.846 232437 DEBUG oslo_concurrency.lockutils [req-405c2918-6ca2-491e-be5f-acfa2f9a6f07 req-97ed5b12-7e14-491f-8f6b-bc14d91e6bc8 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "a7305638-446a-410d-8da1-2a2e5ddfa894-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:22:07 np0005548731 nova_compute[232433]: 2025-12-06 08:22:07.846 232437 DEBUG oslo_concurrency.lockutils [req-405c2918-6ca2-491e-be5f-acfa2f9a6f07 req-97ed5b12-7e14-491f-8f6b-bc14d91e6bc8 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "a7305638-446a-410d-8da1-2a2e5ddfa894-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:22:07 np0005548731 nova_compute[232433]: 2025-12-06 08:22:07.847 232437 DEBUG nova.compute.manager [req-405c2918-6ca2-491e-be5f-acfa2f9a6f07 req-97ed5b12-7e14-491f-8f6b-bc14d91e6bc8 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: a7305638-446a-410d-8da1-2a2e5ddfa894] No waiting events found dispatching network-vif-plugged-07c2ee2d-4ad8-444c-9e9c-e2bd99a0bb9f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 03:22:07 np0005548731 nova_compute[232433]: 2025-12-06 08:22:07.847 232437 WARNING nova.compute.manager [req-405c2918-6ca2-491e-be5f-acfa2f9a6f07 req-97ed5b12-7e14-491f-8f6b-bc14d91e6bc8 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: a7305638-446a-410d-8da1-2a2e5ddfa894] Received unexpected event network-vif-plugged-07c2ee2d-4ad8-444c-9e9c-e2bd99a0bb9f for instance with vm_state active and task_state deleting.#033[00m
Dec  6 03:22:07 np0005548731 nova_compute[232433]: 2025-12-06 08:22:07.886 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:22:08 np0005548731 nova_compute[232433]: 2025-12-06 08:22:08.017 232437 INFO nova.compute.manager [None req-f168d78c-c9de-4300-8b77-bf12e1521ff8 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] [instance: a7305638-446a-410d-8da1-2a2e5ddfa894] Took 0.45 seconds to detach 1 volumes for instance.#033[00m
Dec  6 03:22:08 np0005548731 nova_compute[232433]: 2025-12-06 08:22:08.087 232437 DEBUG oslo_concurrency.lockutils [None req-f168d78c-c9de-4300-8b77-bf12e1521ff8 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:22:08 np0005548731 nova_compute[232433]: 2025-12-06 08:22:08.088 232437 DEBUG oslo_concurrency.lockutils [None req-f168d78c-c9de-4300-8b77-bf12e1521ff8 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:22:08 np0005548731 nova_compute[232433]: 2025-12-06 08:22:08.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:22:08 np0005548731 nova_compute[232433]: 2025-12-06 08:22:08.202 232437 DEBUG oslo_concurrency.processutils [None req-f168d78c-c9de-4300-8b77-bf12e1521ff8 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 03:22:08 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 03:22:08 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/192332593' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 03:22:08 np0005548731 nova_compute[232433]: 2025-12-06 08:22:08.649 232437 DEBUG oslo_concurrency.processutils [None req-f168d78c-c9de-4300-8b77-bf12e1521ff8 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.446s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 03:22:08 np0005548731 nova_compute[232433]: 2025-12-06 08:22:08.660 232437 DEBUG nova.compute.provider_tree [None req-f168d78c-c9de-4300-8b77-bf12e1521ff8 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Inventory has not changed in ProviderTree for provider: 6d00757a-082f-486d-ae84-869a2ba2e6e7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  6 03:22:08 np0005548731 nova_compute[232433]: 2025-12-06 08:22:08.685 232437 DEBUG nova.scheduler.client.report [None req-f168d78c-c9de-4300-8b77-bf12e1521ff8 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Inventory has not changed for provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  6 03:22:08 np0005548731 nova_compute[232433]: 2025-12-06 08:22:08.730 232437 DEBUG oslo_concurrency.lockutils [None req-f168d78c-c9de-4300-8b77-bf12e1521ff8 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.642s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:22:08 np0005548731 nova_compute[232433]: 2025-12-06 08:22:08.886 232437 INFO nova.scheduler.client.report [None req-f168d78c-c9de-4300-8b77-bf12e1521ff8 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Deleted allocations for instance a7305638-446a-410d-8da1-2a2e5ddfa894#033[00m
Dec  6 03:22:09 np0005548731 nova_compute[232433]: 2025-12-06 08:22:09.008 232437 DEBUG oslo_concurrency.lockutils [None req-f168d78c-c9de-4300-8b77-bf12e1521ff8 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Lock "a7305638-446a-410d-8da1-2a2e5ddfa894" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.289s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:22:09 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:22:09.160 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=9f96b960-b4f2-40bd-ae99-08121f5e8b78, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '103'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 03:22:09 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:22:09 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:22:09 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:22:09.377 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:22:09 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:22:09 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:22:09 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:22:09.658 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:22:10 np0005548731 nova_compute[232433]: 2025-12-06 08:22:10.029 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:22:11 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e426 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:22:11 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:22:11 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:22:11 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:22:11.378 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:22:11 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:22:11 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:22:11 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:22:11.660 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:22:12 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  6 03:22:12 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/186175830' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  6 03:22:12 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  6 03:22:12 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/186175830' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  6 03:22:12 np0005548731 nova_compute[232433]: 2025-12-06 08:22:12.887 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:22:13 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:22:13 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:22:13 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:22:13.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:22:13 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:22:13 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:22:13 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:22:13.663 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:22:14 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e427 e427: 3 total, 3 up, 3 in
Dec  6 03:22:15 np0005548731 nova_compute[232433]: 2025-12-06 08:22:15.032 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:22:15 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:22:15 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:22:15 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:22:15.382 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:22:15 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:22:15 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:22:15 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:22:15.666 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:22:16 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e427 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:22:17 np0005548731 nova_compute[232433]: 2025-12-06 08:22:17.099 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:22:17 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:22:17 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:22:17 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:22:17.383 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:22:17 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:22:17 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:22:17 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:22:17.668 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:22:17 np0005548731 nova_compute[232433]: 2025-12-06 08:22:17.889 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:22:18 np0005548731 podman[343160]: 2025-12-06 08:22:18.910224782 +0000 UTC m=+0.069112708 container health_status 26c2ce9bdb83c6db5ccad7f5b2a7cb4e9354ab0235ad2f942ea42c8feda2142b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125)
Dec  6 03:22:18 np0005548731 podman[343162]: 2025-12-06 08:22:18.927311229 +0000 UTC m=+0.076371345 container health_status ae4fd08cdec31d6ae2a6b91ffff7deb6ca5a8375cb52ee4b685cc6f25796824e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  6 03:22:18 np0005548731 podman[343161]: 2025-12-06 08:22:18.971692943 +0000 UTC m=+0.114362513 container health_status a118c3becf56cfbcae208065771ebf10a65162bb7b96389971293e534823fb78 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_controller, managed_by=edpm_ansible)
Dec  6 03:22:19 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:22:19 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:22:19 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:22:19.387 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:22:19 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:22:19 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:22:19 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:22:19.672 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:22:19 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e428 e428: 3 total, 3 up, 3 in
Dec  6 03:22:19 np0005548731 nova_compute[232433]: 2025-12-06 08:22:19.950 232437 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765009324.9498143, a7305638-446a-410d-8da1-2a2e5ddfa894 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 03:22:19 np0005548731 nova_compute[232433]: 2025-12-06 08:22:19.951 232437 INFO nova.compute.manager [-] [instance: a7305638-446a-410d-8da1-2a2e5ddfa894] VM Stopped (Lifecycle Event)#033[00m
Dec  6 03:22:19 np0005548731 nova_compute[232433]: 2025-12-06 08:22:19.971 232437 DEBUG nova.compute.manager [None req-8bccbe8f-99bf-4ae7-a61f-5f10c10d0dd8 - - - - - -] [instance: a7305638-446a-410d-8da1-2a2e5ddfa894] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 03:22:20 np0005548731 nova_compute[232433]: 2025-12-06 08:22:20.034 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:22:21 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:22:21 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:22:21 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:22:21 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:22:21.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:22:21 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:22:21 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:22:21 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:22:21.675 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:22:22 np0005548731 nova_compute[232433]: 2025-12-06 08:22:22.890 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:22:23 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:22:23 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 03:22:23 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:22:23.390 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 03:22:23 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:22:23 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:22:23 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:22:23.677 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:22:25 np0005548731 nova_compute[232433]: 2025-12-06 08:22:25.038 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:22:25 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:22:25 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:22:25 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:22:25.392 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:22:25 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:22:25 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 03:22:25 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:22:25.680 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 03:22:26 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:22:27 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:22:27 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:22:27 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:22:27.395 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:22:27 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:22:27 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:22:27 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:22:27.683 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:22:27 np0005548731 nova_compute[232433]: 2025-12-06 08:22:27.892 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:22:29 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:22:29 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:22:29 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:22:29.397 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:22:29 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:22:29 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:22:29 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:22:29.687 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:22:30 np0005548731 nova_compute[232433]: 2025-12-06 08:22:30.042 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:22:31 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:22:31 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:22:31 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:22:31 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:22:31.398 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:22:31 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:22:31 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:22:31 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:22:31.690 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:22:32 np0005548731 nova_compute[232433]: 2025-12-06 08:22:32.893 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:22:33 np0005548731 nova_compute[232433]: 2025-12-06 08:22:33.331 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:22:33 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:22:33 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:22:33 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:22:33.400 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:22:33 np0005548731 nova_compute[232433]: 2025-12-06 08:22:33.484 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:22:33 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:22:33 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:22:33 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:22:33.692 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:22:35 np0005548731 nova_compute[232433]: 2025-12-06 08:22:35.045 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:22:35 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:22:35 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:22:35 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:22:35.402 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:22:35 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:22:35 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.002000048s ======
Dec  6 03:22:35 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:22:35.696 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000048s
Dec  6 03:22:37 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:22:37 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:22:37 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:22:37 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:22:37.405 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:22:37 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:22:37 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:22:37 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:22:37.701 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:22:37 np0005548731 nova_compute[232433]: 2025-12-06 08:22:37.894 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:22:38 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  6 03:22:39 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:22:39 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:22:39 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:22:39.406 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:22:39 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:22:39 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:22:39 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:22:39.704 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:22:39 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 03:22:39 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  6 03:22:40 np0005548731 nova_compute[232433]: 2025-12-06 08:22:40.048 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:22:40 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:22:40.718 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=104, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:ec:b3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '32:72:e7:89:e0:7d'}, ipsec=False) old=SB_Global(nb_cfg=103) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 03:22:40 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:22:40.719 143965 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec  6 03:22:40 np0005548731 nova_compute[232433]: 2025-12-06 08:22:40.719 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:22:41 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:22:41 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:22:41 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:22:41.408 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:22:41 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:22:41 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:22:41 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:22:41.707 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:22:42 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:22:43 np0005548731 nova_compute[232433]: 2025-12-06 08:22:43.009 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:22:43 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:22:43 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:22:43 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:22:43.410 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:22:43 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:22:43 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:22:43 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:22:43.708 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:22:45 np0005548731 nova_compute[232433]: 2025-12-06 08:22:45.053 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:22:45 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:22:45 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:22:45 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:22:45.412 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:22:45 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 03:22:45 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 03:22:45 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:22:45 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:22:45 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:22:45.710 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:22:47 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:22:47 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:22:47 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:22:47 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:22:47.413 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:22:47 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:22:47 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:22:47 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:22:47.712 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:22:47 np0005548731 nova_compute[232433]: 2025-12-06 08:22:47.898 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:22:49 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:22:49 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:22:49 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:22:49.416 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:22:49 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:22:49 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:22:49 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:22:49.715 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:22:49 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:22:49.721 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=9f96b960-b4f2-40bd-ae99-08121f5e8b78, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '104'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 03:22:49 np0005548731 podman[343520]: 2025-12-06 08:22:49.931262177 +0000 UTC m=+0.094203922 container health_status 26c2ce9bdb83c6db5ccad7f5b2a7cb4e9354ab0235ad2f942ea42c8feda2142b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Dec  6 03:22:49 np0005548731 podman[343522]: 2025-12-06 08:22:49.955351005 +0000 UTC m=+0.114924127 container health_status ae4fd08cdec31d6ae2a6b91ffff7deb6ca5a8375cb52ee4b685cc6f25796824e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, config_id=multipathd, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec  6 03:22:49 np0005548731 podman[343521]: 2025-12-06 08:22:49.973442497 +0000 UTC m=+0.135835508 container health_status a118c3becf56cfbcae208065771ebf10a65162bb7b96389971293e534823fb78 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.build-date=20251125)
Dec  6 03:22:50 np0005548731 nova_compute[232433]: 2025-12-06 08:22:50.055 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:22:51 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:22:51 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:22:51 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:22:51.418 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:22:51 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:22:51 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:22:51 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:22:51.717 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:22:52 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:22:52 np0005548731 nova_compute[232433]: 2025-12-06 08:22:52.900 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:22:53 np0005548731 nova_compute[232433]: 2025-12-06 08:22:53.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:22:53 np0005548731 nova_compute[232433]: 2025-12-06 08:22:53.104 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  6 03:22:53 np0005548731 nova_compute[232433]: 2025-12-06 08:22:53.105 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  6 03:22:53 np0005548731 nova_compute[232433]: 2025-12-06 08:22:53.122 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Dec  6 03:22:53 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:22:53 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:22:53 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:22:53.420 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:22:53 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:22:53 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:22:53 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:22:53.720 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:22:54 np0005548731 nova_compute[232433]: 2025-12-06 08:22:54.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:22:55 np0005548731 nova_compute[232433]: 2025-12-06 08:22:55.056 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:22:55 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:22:55 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:22:55 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:22:55.422 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:22:55 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:22:55 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:22:55 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:22:55.723 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:22:57 np0005548731 nova_compute[232433]: 2025-12-06 08:22:57.099 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:22:57 np0005548731 nova_compute[232433]: 2025-12-06 08:22:57.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:22:57 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:22:57 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:22:57 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:22:57 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:22:57.424 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:22:57 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:22:57 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:22:57 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:22:57.726 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:22:57 np0005548731 nova_compute[232433]: 2025-12-06 08:22:57.902 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:22:59 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:22:59 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:22:59 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:22:59.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:22:59 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:22:59 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:22:59 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:22:59.729 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:23:00 np0005548731 nova_compute[232433]: 2025-12-06 08:23:00.059 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:23:00 np0005548731 nova_compute[232433]: 2025-12-06 08:23:00.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:23:00 np0005548731 nova_compute[232433]: 2025-12-06 08:23:00.104 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  6 03:23:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:23:00.927 143965 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:23:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:23:00.927 143965 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:23:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:23:00.927 143965 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:23:01 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:23:01 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:23:01 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:23:01.429 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:23:01 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:23:01 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 03:23:01 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:23:01.733 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 03:23:02 np0005548731 nova_compute[232433]: 2025-12-06 08:23:02.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:23:02 np0005548731 nova_compute[232433]: 2025-12-06 08:23:02.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:23:02 np0005548731 nova_compute[232433]: 2025-12-06 08:23:02.137 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:23:02 np0005548731 nova_compute[232433]: 2025-12-06 08:23:02.138 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:23:02 np0005548731 nova_compute[232433]: 2025-12-06 08:23:02.138 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:23:02 np0005548731 nova_compute[232433]: 2025-12-06 08:23:02.139 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  6 03:23:02 np0005548731 nova_compute[232433]: 2025-12-06 08:23:02.139 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 03:23:02 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:23:02 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 03:23:02 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1354063911' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 03:23:02 np0005548731 nova_compute[232433]: 2025-12-06 08:23:02.659 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.519s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 03:23:02 np0005548731 nova_compute[232433]: 2025-12-06 08:23:02.814 232437 WARNING nova.virt.libvirt.driver [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  6 03:23:02 np0005548731 nova_compute[232433]: 2025-12-06 08:23:02.816 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4146MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, 
"label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  6 03:23:02 np0005548731 nova_compute[232433]: 2025-12-06 08:23:02.816 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:23:02 np0005548731 nova_compute[232433]: 2025-12-06 08:23:02.816 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:23:02 np0005548731 nova_compute[232433]: 2025-12-06 08:23:02.903 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:23:02 np0005548731 nova_compute[232433]: 2025-12-06 08:23:02.936 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  6 03:23:02 np0005548731 nova_compute[232433]: 2025-12-06 08:23:02.936 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  6 03:23:02 np0005548731 nova_compute[232433]: 2025-12-06 08:23:02.953 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 03:23:03 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 03:23:03 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/4214306056' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 03:23:03 np0005548731 nova_compute[232433]: 2025-12-06 08:23:03.368 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.416s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 03:23:03 np0005548731 nova_compute[232433]: 2025-12-06 08:23:03.374 232437 DEBUG nova.compute.provider_tree [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Inventory has not changed in ProviderTree for provider: 6d00757a-082f-486d-ae84-869a2ba2e6e7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  6 03:23:03 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:23:03 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:23:03 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:23:03.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:23:03 np0005548731 nova_compute[232433]: 2025-12-06 08:23:03.578 232437 DEBUG nova.scheduler.client.report [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Inventory has not changed for provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  6 03:23:03 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:23:03 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:23:03 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:23:03.737 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:23:03 np0005548731 nova_compute[232433]: 2025-12-06 08:23:03.990 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  6 03:23:03 np0005548731 nova_compute[232433]: 2025-12-06 08:23:03.990 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.174s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:23:05 np0005548731 nova_compute[232433]: 2025-12-06 08:23:05.084 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:23:05 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:23:05 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:23:05 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:23:05.432 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:23:05 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:23:05 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:23:05 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:23:05.739 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:23:05 np0005548731 nova_compute[232433]: 2025-12-06 08:23:05.990 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:23:07 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:23:07 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:23:07 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:23:07 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:23:07.435 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:23:07 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:23:07 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 03:23:07 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:23:07.742 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 03:23:07 np0005548731 nova_compute[232433]: 2025-12-06 08:23:07.905 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:23:09 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:23:09 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:23:09 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:23:09.436 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:23:09 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:23:09 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:23:09 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:23:09.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:23:10 np0005548731 nova_compute[232433]: 2025-12-06 08:23:10.105 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:23:10 np0005548731 nova_compute[232433]: 2025-12-06 08:23:10.134 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:23:11 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:23:11 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:23:11 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:23:11.439 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:23:11 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:23:11 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:23:11 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:23:11.749 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:23:12 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:23:12 np0005548731 nova_compute[232433]: 2025-12-06 08:23:12.907 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:23:13 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:23:13 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:23:13 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:23:13.441 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:23:13 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:23:13 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 03:23:13 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:23:13.751 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 03:23:15 np0005548731 nova_compute[232433]: 2025-12-06 08:23:15.208 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:23:15 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:23:15 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:23:15 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:23:15.443 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:23:15 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:23:15 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:23:15 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:23:15.754 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:23:17 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:23:17 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:23:17 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:23:17.445 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:23:17 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:23:17 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:23:17 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:23:17 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:23:17.757 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:23:17 np0005548731 nova_compute[232433]: 2025-12-06 08:23:17.909 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:23:19 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:23:19 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:23:19 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:23:19.446 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:23:19 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:23:19 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:23:19 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:23:19.759 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:23:20 np0005548731 nova_compute[232433]: 2025-12-06 08:23:20.211 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:23:20 np0005548731 podman[343695]: 2025-12-06 08:23:20.905463876 +0000 UTC m=+0.059460984 container health_status 26c2ce9bdb83c6db5ccad7f5b2a7cb4e9354ab0235ad2f942ea42c8feda2142b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, managed_by=edpm_ansible)
Dec  6 03:23:20 np0005548731 podman[343697]: 2025-12-06 08:23:20.931385728 +0000 UTC m=+0.068525024 container health_status ae4fd08cdec31d6ae2a6b91ffff7deb6ca5a8375cb52ee4b685cc6f25796824e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Dec  6 03:23:21 np0005548731 podman[343696]: 2025-12-06 08:23:21.004379191 +0000 UTC m=+0.155484308 container health_status a118c3becf56cfbcae208065771ebf10a65162bb7b96389971293e534823fb78 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, managed_by=edpm_ansible, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec  6 03:23:21 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:23:21 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:23:21 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:23:21.447 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:23:21 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:23:21 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:23:21 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:23:21.762 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:23:22 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:23:22 np0005548731 nova_compute[232433]: 2025-12-06 08:23:22.911 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:23:23 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:23:23 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:23:23 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:23:23.450 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:23:23 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:23:23 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:23:23 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:23:23.765 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:23:25 np0005548731 nova_compute[232433]: 2025-12-06 08:23:25.214 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:23:25 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:23:25 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:23:25 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:23:25.451 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:23:25 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:23:25 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:23:25 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:23:25.768 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:23:27 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:23:27 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:23:27 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:23:27.453 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:23:27 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:23:27 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:23:27 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:23:27 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:23:27.771 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:23:27 np0005548731 nova_compute[232433]: 2025-12-06 08:23:27.913 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:23:29 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:23:29 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 03:23:29 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:23:29.454 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 03:23:29 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:23:29.675 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=105, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:ec:b3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '32:72:e7:89:e0:7d'}, ipsec=False) old=SB_Global(nb_cfg=104) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 03:23:29 np0005548731 nova_compute[232433]: 2025-12-06 08:23:29.676 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:23:29 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:23:29.676 143965 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec  6 03:23:29 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:23:29 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:23:29 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:23:29.773 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:23:30 np0005548731 nova_compute[232433]: 2025-12-06 08:23:30.217 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:23:31 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:23:31 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:23:31 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:23:31.457 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:23:31 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:23:31 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:23:31 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:23:31.776 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:23:32 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:23:32 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:23:32.679 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=9f96b960-b4f2-40bd-ae99-08121f5e8b78, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '105'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 03:23:32 np0005548731 ovn_controller[133927]: 2025-12-06T08:23:32Z|01106|memory_trim|INFO|Detected inactivity (last active 30001 ms ago): trimming memory
Dec  6 03:23:32 np0005548731 nova_compute[232433]: 2025-12-06 08:23:32.914 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:23:33 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:23:33 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:23:33 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:23:33.459 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:23:33 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:23:33 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:23:33 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:23:33.780 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:23:35 np0005548731 nova_compute[232433]: 2025-12-06 08:23:35.221 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:23:35 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:23:35 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:23:35 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:23:35.461 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:23:35 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:23:35 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:23:35 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:23:35.782 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:23:37 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:23:37 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:23:37 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:23:37.463 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:23:37 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:23:37 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:23:37 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:23:37 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:23:37.786 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:23:37 np0005548731 nova_compute[232433]: 2025-12-06 08:23:37.915 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:23:39 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:23:39 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:23:39 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:23:39.465 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:23:39 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:23:39 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:23:39 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:23:39.789 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:23:40 np0005548731 nova_compute[232433]: 2025-12-06 08:23:40.226 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:23:41 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:23:41 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:23:41 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:23:41.466 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:23:41 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:23:41 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:23:41 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:23:41.791 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:23:42 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:23:42 np0005548731 nova_compute[232433]: 2025-12-06 08:23:42.917 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:23:43 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:23:43 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:23:43 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:23:43.469 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:23:43 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:23:43 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 03:23:43 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:23:43.796 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 03:23:45 np0005548731 nova_compute[232433]: 2025-12-06 08:23:45.229 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:23:45 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:23:45 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:23:45 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:23:45.473 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:23:45 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 03:23:45 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 03:23:45 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:23:45 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 03:23:45 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:23:45.800 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 03:23:46 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  6 03:23:46 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 03:23:46 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  6 03:23:47 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:23:47 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:23:47 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:23:47.476 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:23:47 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:23:47 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:23:47 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:23:47 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:23:47.802 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:23:47 np0005548731 nova_compute[232433]: 2025-12-06 08:23:47.962 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:23:49 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:23:49 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 03:23:49 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:23:49.477 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 03:23:49 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:23:49 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:23:49 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:23:49.806 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:23:50 np0005548731 nova_compute[232433]: 2025-12-06 08:23:50.233 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:23:51 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:23:51 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:23:51 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:23:51.480 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:23:51 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:23:51 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:23:51 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:23:51.809 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:23:51 np0005548731 podman[344005]: 2025-12-06 08:23:51.905365177 +0000 UTC m=+0.068460542 container health_status 26c2ce9bdb83c6db5ccad7f5b2a7cb4e9354ab0235ad2f942ea42c8feda2142b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Dec  6 03:23:51 np0005548731 podman[344006]: 2025-12-06 08:23:51.929323853 +0000 UTC m=+0.092446688 container health_status a118c3becf56cfbcae208065771ebf10a65162bb7b96389971293e534823fb78 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec  6 03:23:51 np0005548731 podman[344007]: 2025-12-06 08:23:51.929377654 +0000 UTC m=+0.088444021 container health_status ae4fd08cdec31d6ae2a6b91ffff7deb6ca5a8375cb52ee4b685cc6f25796824e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=multipathd, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd)
Dec  6 03:23:52 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:23:52 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 03:23:52 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 03:23:52 np0005548731 nova_compute[232433]: 2025-12-06 08:23:52.964 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:23:53 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:23:53 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:23:53 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:23:53.482 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:23:53 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:23:53 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:23:53 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:23:53.811 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:23:55 np0005548731 nova_compute[232433]: 2025-12-06 08:23:55.105 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:23:55 np0005548731 nova_compute[232433]: 2025-12-06 08:23:55.105 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  6 03:23:55 np0005548731 nova_compute[232433]: 2025-12-06 08:23:55.105 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  6 03:23:55 np0005548731 nova_compute[232433]: 2025-12-06 08:23:55.120 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Dec  6 03:23:55 np0005548731 nova_compute[232433]: 2025-12-06 08:23:55.120 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:23:55 np0005548731 nova_compute[232433]: 2025-12-06 08:23:55.273 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:23:55 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:23:55 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:23:55 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:23:55.484 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:23:55 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:23:55 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:23:55 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:23:55.814 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:23:57 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:23:57 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 03:23:57 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:23:57.485 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 03:23:57 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:23:57 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:23:57 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:23:57 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:23:57.816 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:23:57 np0005548731 nova_compute[232433]: 2025-12-06 08:23:57.966 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:23:58 np0005548731 nova_compute[232433]: 2025-12-06 08:23:58.103 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:23:59 np0005548731 nova_compute[232433]: 2025-12-06 08:23:59.099 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:23:59 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:23:59 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:23:59 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:23:59.487 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:23:59 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:23:59 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:23:59 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:23:59.818 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:24:00 np0005548731 nova_compute[232433]: 2025-12-06 08:24:00.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:24:00 np0005548731 nova_compute[232433]: 2025-12-06 08:24:00.105 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  6 03:24:00 np0005548731 nova_compute[232433]: 2025-12-06 08:24:00.277 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:24:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:24:00.928 143965 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:24:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:24:00.928 143965 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:24:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:24:00.928 143965 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:24:01 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:24:01 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:24:01 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:24:01.488 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:24:01 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:24:01 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:24:01 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:24:01.820 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:24:02 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:24:02 np0005548731 nova_compute[232433]: 2025-12-06 08:24:02.969 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:24:03 np0005548731 nova_compute[232433]: 2025-12-06 08:24:03.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:24:03 np0005548731 nova_compute[232433]: 2025-12-06 08:24:03.144 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:24:03 np0005548731 nova_compute[232433]: 2025-12-06 08:24:03.144 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:24:03 np0005548731 nova_compute[232433]: 2025-12-06 08:24:03.145 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:24:03 np0005548731 nova_compute[232433]: 2025-12-06 08:24:03.145 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  6 03:24:03 np0005548731 nova_compute[232433]: 2025-12-06 08:24:03.145 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 03:24:03 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:24:03 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:24:03 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:24:03.490 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:24:03 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 03:24:03 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/158196637' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 03:24:03 np0005548731 nova_compute[232433]: 2025-12-06 08:24:03.578 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.433s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 03:24:03 np0005548731 nova_compute[232433]: 2025-12-06 08:24:03.739 232437 WARNING nova.virt.libvirt.driver [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  6 03:24:03 np0005548731 nova_compute[232433]: 2025-12-06 08:24:03.741 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4136MB free_disk=20.921977996826172GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, 
"label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  6 03:24:03 np0005548731 nova_compute[232433]: 2025-12-06 08:24:03.742 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:24:03 np0005548731 nova_compute[232433]: 2025-12-06 08:24:03.742 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:24:03 np0005548731 nova_compute[232433]: 2025-12-06 08:24:03.817 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  6 03:24:03 np0005548731 nova_compute[232433]: 2025-12-06 08:24:03.817 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  6 03:24:03 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:24:03 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:24:03 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:24:03.823 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:24:04 np0005548731 nova_compute[232433]: 2025-12-06 08:24:04.428 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 03:24:04 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 03:24:04 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1585537670' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 03:24:04 np0005548731 nova_compute[232433]: 2025-12-06 08:24:04.875 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.446s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 03:24:04 np0005548731 nova_compute[232433]: 2025-12-06 08:24:04.881 232437 DEBUG nova.compute.provider_tree [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Inventory has not changed in ProviderTree for provider: 6d00757a-082f-486d-ae84-869a2ba2e6e7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  6 03:24:04 np0005548731 nova_compute[232433]: 2025-12-06 08:24:04.908 232437 DEBUG nova.scheduler.client.report [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Inventory has not changed for provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  6 03:24:04 np0005548731 nova_compute[232433]: 2025-12-06 08:24:04.910 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  6 03:24:04 np0005548731 nova_compute[232433]: 2025-12-06 08:24:04.910 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.168s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:24:05 np0005548731 nova_compute[232433]: 2025-12-06 08:24:05.281 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:24:05 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:24:05 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:24:05 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:24:05.492 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:24:05 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:24:05 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:24:05 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:24:05.826 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:24:05 np0005548731 nova_compute[232433]: 2025-12-06 08:24:05.910 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:24:05 np0005548731 nova_compute[232433]: 2025-12-06 08:24:05.911 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:24:06 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:24:06.686 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=106, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:ec:b3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '32:72:e7:89:e0:7d'}, ipsec=False) old=SB_Global(nb_cfg=105) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 03:24:06 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:24:06.687 143965 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec  6 03:24:06 np0005548731 nova_compute[232433]: 2025-12-06 08:24:06.688 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:24:07 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:24:07 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:24:07 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:24:07.493 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:24:07 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:24:07 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:24:07 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 03:24:07 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:24:07.828 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 03:24:07 np0005548731 nova_compute[232433]: 2025-12-06 08:24:07.969 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:24:09 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  6 03:24:09 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1369561069' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  6 03:24:09 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  6 03:24:09 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1369561069' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  6 03:24:09 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:24:09 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:24:09 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:24:09.496 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:24:09 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:24:09 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:24:09 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:24:09.832 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:24:10 np0005548731 nova_compute[232433]: 2025-12-06 08:24:10.105 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:24:10 np0005548731 nova_compute[232433]: 2025-12-06 08:24:10.285 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:24:11 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:24:11 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:24:11 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:24:11.498 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:24:11 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:24:11 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:24:11 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:24:11.835 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:24:12 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:24:12 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:24:12.689 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=9f96b960-b4f2-40bd-ae99-08121f5e8b78, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '106'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 03:24:12 np0005548731 nova_compute[232433]: 2025-12-06 08:24:12.970 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:24:13 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:24:13 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:24:13 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:24:13.500 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:24:13 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:24:13 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 03:24:13 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:24:13.838 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 03:24:15 np0005548731 nova_compute[232433]: 2025-12-06 08:24:15.288 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:24:15 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:24:15 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:24:15 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:24:15.502 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:24:15 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:24:15 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:24:15 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:24:15.842 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:24:17 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:24:17 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:24:17 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:24:17.503 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:24:17 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:24:17 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:24:17 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:24:17 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:24:17.844 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:24:18 np0005548731 nova_compute[232433]: 2025-12-06 08:24:18.015 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:24:19 np0005548731 nova_compute[232433]: 2025-12-06 08:24:19.100 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:24:19 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:24:19 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:24:19 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:24:19.505 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:24:19 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:24:19 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:24:19 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:24:19.847 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:24:20 np0005548731 nova_compute[232433]: 2025-12-06 08:24:20.292 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:24:21 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:24:21 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:24:21 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:24:21.507 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:24:21 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:24:21 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:24:21 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:24:21.850 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:24:22 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:24:22 np0005548731 podman[344228]: 2025-12-06 08:24:22.932675775 +0000 UTC m=+0.069817736 container health_status 26c2ce9bdb83c6db5ccad7f5b2a7cb4e9354ab0235ad2f942ea42c8feda2142b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, managed_by=edpm_ansible, 
org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec  6 03:24:22 np0005548731 podman[344230]: 2025-12-06 08:24:22.942969236 +0000 UTC m=+0.068198617 container health_status ae4fd08cdec31d6ae2a6b91ffff7deb6ca5a8375cb52ee4b685cc6f25796824e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Dec  6 03:24:22 np0005548731 podman[344229]: 2025-12-06 08:24:22.964743887 +0000 UTC m=+0.102889973 container health_status a118c3becf56cfbcae208065771ebf10a65162bb7b96389971293e534823fb78 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Dec  6 03:24:23 np0005548731 nova_compute[232433]: 2025-12-06 08:24:23.016 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:24:23 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:24:23 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 03:24:23 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:24:23.508 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 03:24:23 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:24:23 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:24:23 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:24:23.853 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:24:25 np0005548731 nova_compute[232433]: 2025-12-06 08:24:25.297 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:24:25 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:24:25 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:24:25 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:24:25.510 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:24:25 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:24:25 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:24:25 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:24:25.858 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:24:27 np0005548731 nova_compute[232433]: 2025-12-06 08:24:27.105 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:24:27 np0005548731 nova_compute[232433]: 2025-12-06 08:24:27.105 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Dec  6 03:24:27 np0005548731 nova_compute[232433]: 2025-12-06 08:24:27.119 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Dec  6 03:24:27 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:24:27 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:24:27 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:24:27.513 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:24:27 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:24:27 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:24:27 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 03:24:27 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:24:27.860 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 03:24:28 np0005548731 nova_compute[232433]: 2025-12-06 08:24:28.020 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:24:29 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:24:29 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:24:29 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:24:29.515 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:24:29 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:24:29 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:24:29 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:24:29.864 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:24:30 np0005548731 nova_compute[232433]: 2025-12-06 08:24:30.301 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:24:31 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:24:31 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:24:31 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:24:31.517 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:24:31 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:24:31 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:24:31 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:24:31.866 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:24:32 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:24:33 np0005548731 nova_compute[232433]: 2025-12-06 08:24:33.021 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:24:33 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:24:33 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:24:33 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:24:33.520 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:24:33 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:24:33 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:24:33 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:24:33.870 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:24:35 np0005548731 nova_compute[232433]: 2025-12-06 08:24:35.342 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:24:35 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:24:35 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:24:35 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:24:35.521 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:24:35 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:24:35 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:24:35 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:24:35.874 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:24:37 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:24:37 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:24:37 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:24:37.524 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:24:37 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:24:37 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:24:37 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:24:37 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:24:37.877 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:24:38 np0005548731 nova_compute[232433]: 2025-12-06 08:24:38.023 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:24:39 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:24:39 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:24:39 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:24:39.525 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:24:39 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:24:39 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:24:39 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:24:39.880 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:24:40 np0005548731 nova_compute[232433]: 2025-12-06 08:24:40.346 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:24:41 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:24:41 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 03:24:41 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:24:41.527 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 03:24:41 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:24:41 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:24:41 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:24:41.883 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:24:42 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:24:43 np0005548731 nova_compute[232433]: 2025-12-06 08:24:43.025 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:24:43 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:24:43 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 03:24:43 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:24:43.530 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 03:24:43 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:24:43 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:24:43 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:24:43.885 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:24:45 np0005548731 nova_compute[232433]: 2025-12-06 08:24:45.350 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:24:45 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:24:45 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:24:45 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:24:45.531 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:24:45 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:24:45 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 03:24:45 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:24:45.888 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 03:24:46 np0005548731 systemd[1]: Starting dnf makecache...
Dec  6 03:24:46 np0005548731 dnf[344371]: Metadata cache refreshed recently.
Dec  6 03:24:46 np0005548731 systemd[1]: dnf-makecache.service: Deactivated successfully.
Dec  6 03:24:46 np0005548731 systemd[1]: Finished dnf makecache.
Dec  6 03:24:47 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:24:47 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 03:24:47 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:24:47.533 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 03:24:47 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:24:47 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:24:47 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:24:47 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:24:47.892 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:24:48 np0005548731 nova_compute[232433]: 2025-12-06 08:24:48.027 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:24:49 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:24:49 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:24:49 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:24:49.534 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:24:49 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:24:49 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:24:49 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:24:49.894 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:24:50 np0005548731 nova_compute[232433]: 2025-12-06 08:24:50.353 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:24:51 np0005548731 nova_compute[232433]: 2025-12-06 08:24:51.105 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:24:51 np0005548731 nova_compute[232433]: 2025-12-06 08:24:51.105 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Dec  6 03:24:51 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:24:51 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:24:51 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:24:51.536 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:24:51 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:24:51 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:24:51 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:24:51.896 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:24:52 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:24:53 np0005548731 nova_compute[232433]: 2025-12-06 08:24:53.028 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:24:53 np0005548731 podman[344694]: 2025-12-06 08:24:53.076816388 +0000 UTC m=+0.057585517 container exec 541e7276f6038dd900483907d1613bdfcbf99ea3714cf53a920a7189986fab54 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-mon-compute-2, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  6 03:24:53 np0005548731 podman[344694]: 2025-12-06 08:24:53.169174103 +0000 UTC m=+0.149943222 container exec_died 541e7276f6038dd900483907d1613bdfcbf99ea3714cf53a920a7189986fab54 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-mon-compute-2, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  6 03:24:53 np0005548731 podman[344732]: 2025-12-06 08:24:53.288397734 +0000 UTC m=+0.058162421 container health_status ae4fd08cdec31d6ae2a6b91ffff7deb6ca5a8375cb52ee4b685cc6f25796824e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251125, config_id=multipathd, org.label-schema.schema-version=1.0, tcib_managed=true)
Dec  6 03:24:53 np0005548731 podman[344729]: 2025-12-06 08:24:53.303077723 +0000 UTC m=+0.077978355 container health_status 26c2ce9bdb83c6db5ccad7f5b2a7cb4e9354ab0235ad2f942ea42c8feda2142b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent)
Dec  6 03:24:53 np0005548731 podman[344731]: 2025-12-06 08:24:53.313789904 +0000 UTC m=+0.088690386 container health_status a118c3becf56cfbcae208065771ebf10a65162bb7b96389971293e534823fb78 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Dec  6 03:24:53 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 03:24:53 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 03:24:53 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Dec  6 03:24:53 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Dec  6 03:24:53 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:24:53 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:24:53 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:24:53.537 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:24:53 np0005548731 podman[344906]: 2025-12-06 08:24:53.693417012 +0000 UTC m=+0.049963690 container exec 42755d5ddab81b86d84b40c9464a8af637e2d196c1a098a61593efb6a9875167 (image=quay.io/ceph/haproxy:2.3, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-haproxy-rgw-default-compute-2-nyemkw)
Dec  6 03:24:53 np0005548731 podman[344906]: 2025-12-06 08:24:53.7068368 +0000 UTC m=+0.063383468 container exec_died 42755d5ddab81b86d84b40c9464a8af637e2d196c1a098a61593efb6a9875167 (image=quay.io/ceph/haproxy:2.3, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-haproxy-rgw-default-compute-2-nyemkw)
Dec  6 03:24:53 np0005548731 podman[344975]: 2025-12-06 08:24:53.895286822 +0000 UTC m=+0.053342564 container exec 60f905acca99db805341691264f7f91f9f4f43c91aa728d21396595dfca7cf39 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-keepalived-rgw-default-compute-2-cossgt, distribution-scope=public, name=keepalived, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.openshift.expose-services=, summary=Provides keepalived on RHEL 9 for Ceph., com.redhat.component=keepalived-container, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., vcs-type=git, build-date=2023-02-22T09:23:20, release=1793, description=keepalived for Ceph, io.openshift.tags=Ceph keepalived, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, architecture=x86_64, version=2.2.4, io.buildah.version=1.28.2, io.k8s.display-name=Keepalived on RHEL 9)
Dec  6 03:24:53 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:24:53 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:24:53 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:24:53.898 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:24:53 np0005548731 podman[344975]: 2025-12-06 08:24:53.906389312 +0000 UTC m=+0.064445024 container exec_died 60f905acca99db805341691264f7f91f9f4f43c91aa728d21396595dfca7cf39 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-keepalived-rgw-default-compute-2-cossgt, name=keepalived, summary=Provides keepalived on RHEL 9 for Ceph., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, io.k8s.display-name=Keepalived on RHEL 9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1793, com.redhat.component=keepalived-container, io.buildah.version=1.28.2, architecture=x86_64, distribution-scope=public, build-date=2023-02-22T09:23:20, description=keepalived for Ceph, io.openshift.tags=Ceph keepalived, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vcs-type=git, version=2.2.4, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, vendor=Red Hat, Inc.)
Dec  6 03:24:54 np0005548731 nova_compute[232433]: 2025-12-06 08:24:54.308 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:24:54 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 03:24:54 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 03:24:54 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  6 03:24:54 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 03:24:54 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  6 03:24:55 np0005548731 nova_compute[232433]: 2025-12-06 08:24:55.107 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:24:55 np0005548731 nova_compute[232433]: 2025-12-06 08:24:55.108 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  6 03:24:55 np0005548731 nova_compute[232433]: 2025-12-06 08:24:55.108 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  6 03:24:55 np0005548731 nova_compute[232433]: 2025-12-06 08:24:55.123 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Dec  6 03:24:55 np0005548731 nova_compute[232433]: 2025-12-06 08:24:55.356 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:24:55 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:24:55 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:24:55 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:24:55.538 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:24:55 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:24:55 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:24:55 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:24:55.900 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:24:57 np0005548731 nova_compute[232433]: 2025-12-06 08:24:57.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:24:57 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:24:57 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:24:57 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:24:57.540 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:24:57 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:24:57 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:24:57 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:24:57 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:24:57.902 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:24:58 np0005548731 nova_compute[232433]: 2025-12-06 08:24:58.071 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:24:58 np0005548731 nova_compute[232433]: 2025-12-06 08:24:58.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:24:59 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:24:59 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:24:59 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:24:59.542 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:24:59 np0005548731 ceph-mon[77458]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #175. Immutable memtables: 0.
Dec  6 03:24:59 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:24:59.796482) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec  6 03:24:59 np0005548731 ceph-mon[77458]: rocksdb: [db/flush_job.cc:856] [default] [JOB 111] Flushing memtable with next log file: 175
Dec  6 03:24:59 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765009499796586, "job": 111, "event": "flush_started", "num_memtables": 1, "num_entries": 2133, "num_deletes": 252, "total_data_size": 5081900, "memory_usage": 5151592, "flush_reason": "Manual Compaction"}
Dec  6 03:24:59 np0005548731 ceph-mon[77458]: rocksdb: [db/flush_job.cc:885] [default] [JOB 111] Level-0 flush table #176: started
Dec  6 03:24:59 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765009499808056, "cf_name": "default", "job": 111, "event": "table_file_creation", "file_number": 176, "file_size": 1984809, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 85889, "largest_seqno": 88017, "table_properties": {"data_size": 1978402, "index_size": 3288, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2053, "raw_key_size": 17032, "raw_average_key_size": 21, "raw_value_size": 1964130, "raw_average_value_size": 2439, "num_data_blocks": 146, "num_entries": 805, "num_filter_entries": 805, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765009311, "oldest_key_time": 1765009311, "file_creation_time": 1765009499, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bc01f9a1-fda8-4008-aee1-1fa5ff4371a1", "db_session_id": "CWBL8WMDM4D79BU86E9T", "orig_file_number": 176, "seqno_to_time_mapping": "N/A"}}
Dec  6 03:24:59 np0005548731 ceph-mon[77458]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 111] Flush lasted 11617 microseconds, and 4886 cpu microseconds.
Dec  6 03:24:59 np0005548731 ceph-mon[77458]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  6 03:24:59 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:24:59.808112) [db/flush_job.cc:967] [default] [JOB 111] Level-0 flush table #176: 1984809 bytes OK
Dec  6 03:24:59 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:24:59.808128) [db/memtable_list.cc:519] [default] Level-0 commit table #176 started
Dec  6 03:24:59 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:24:59.809633) [db/memtable_list.cc:722] [default] Level-0 commit table #176: memtable #1 done
Dec  6 03:24:59 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:24:59.809645) EVENT_LOG_v1 {"time_micros": 1765009499809641, "job": 111, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec  6 03:24:59 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:24:59.809660) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec  6 03:24:59 np0005548731 ceph-mon[77458]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 111] Try to delete WAL files size 5072489, prev total WAL file size 5072489, number of live WAL files 2.
Dec  6 03:24:59 np0005548731 ceph-mon[77458]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000172.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  6 03:24:59 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:24:59.810919) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740033303238' seq:72057594037927935, type:22 .. '6D6772737461740033323830' seq:0, type:0; will stop at (end)
Dec  6 03:24:59 np0005548731 ceph-mon[77458]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 112] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec  6 03:24:59 np0005548731 ceph-mon[77458]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 111 Base level 0, inputs: [176(1938KB)], [174(12MB)]
Dec  6 03:24:59 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765009499810982, "job": 112, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [176], "files_L6": [174], "score": -1, "input_data_size": 15429469, "oldest_snapshot_seqno": -1}
Dec  6 03:24:59 np0005548731 ceph-mon[77458]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 112] Generated table #177: 11650 keys, 12896292 bytes, temperature: kUnknown
Dec  6 03:24:59 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765009499892641, "cf_name": "default", "job": 112, "event": "table_file_creation", "file_number": 177, "file_size": 12896292, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12824792, "index_size": 41243, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 29189, "raw_key_size": 306562, "raw_average_key_size": 26, "raw_value_size": 12624897, "raw_average_value_size": 1083, "num_data_blocks": 1561, "num_entries": 11650, "num_filter_entries": 11650, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765002564, "oldest_key_time": 0, "file_creation_time": 1765009499, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bc01f9a1-fda8-4008-aee1-1fa5ff4371a1", "db_session_id": "CWBL8WMDM4D79BU86E9T", "orig_file_number": 177, "seqno_to_time_mapping": "N/A"}}
Dec  6 03:24:59 np0005548731 ceph-mon[77458]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  6 03:24:59 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:24:59.892933) [db/compaction/compaction_job.cc:1663] [default] [JOB 112] Compacted 1@0 + 1@6 files to L6 => 12896292 bytes
Dec  6 03:24:59 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:24:59.895065) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 188.8 rd, 157.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.9, 12.8 +0.0 blob) out(12.3 +0.0 blob), read-write-amplify(14.3) write-amplify(6.5) OK, records in: 12087, records dropped: 437 output_compression: NoCompression
Dec  6 03:24:59 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:24:59.895080) EVENT_LOG_v1 {"time_micros": 1765009499895073, "job": 112, "event": "compaction_finished", "compaction_time_micros": 81735, "compaction_time_cpu_micros": 33203, "output_level": 6, "num_output_files": 1, "total_output_size": 12896292, "num_input_records": 12087, "num_output_records": 11650, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec  6 03:24:59 np0005548731 ceph-mon[77458]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000176.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  6 03:24:59 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765009499895612, "job": 112, "event": "table_file_deletion", "file_number": 176}
Dec  6 03:24:59 np0005548731 ceph-mon[77458]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000174.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  6 03:24:59 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765009499898024, "job": 112, "event": "table_file_deletion", "file_number": 174}
Dec  6 03:24:59 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:24:59.810821) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 03:24:59 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:24:59.898071) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 03:24:59 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:24:59.898076) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 03:24:59 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:24:59.898078) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 03:24:59 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:24:59.898079) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 03:24:59 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:24:59.898081) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 03:24:59 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:24:59 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:24:59 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:24:59.906 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:25:00 np0005548731 nova_compute[232433]: 2025-12-06 08:25:00.381 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:25:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:25:00.929 143965 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:25:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:25:00.929 143965 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:25:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:25:00.929 143965 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:25:01 np0005548731 nova_compute[232433]: 2025-12-06 08:25:01.100 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:25:01 np0005548731 nova_compute[232433]: 2025-12-06 08:25:01.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:25:01 np0005548731 nova_compute[232433]: 2025-12-06 08:25:01.104 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  6 03:25:01 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:25:01 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:25:01 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:25:01.544 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:25:01 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 03:25:01 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 03:25:01 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:25:01 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:25:01 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:25:01.913 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:25:02 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:25:03 np0005548731 nova_compute[232433]: 2025-12-06 08:25:03.073 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:25:03 np0005548731 nova_compute[232433]: 2025-12-06 08:25:03.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:25:03 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:25:03 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:25:03 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:25:03.546 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:25:03 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:25:03 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 03:25:03 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:25:03.915 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 03:25:04 np0005548731 nova_compute[232433]: 2025-12-06 08:25:04.125 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:25:05 np0005548731 nova_compute[232433]: 2025-12-06 08:25:05.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:25:05 np0005548731 nova_compute[232433]: 2025-12-06 08:25:05.105 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:25:05 np0005548731 nova_compute[232433]: 2025-12-06 08:25:05.128 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:25:05 np0005548731 nova_compute[232433]: 2025-12-06 08:25:05.128 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:25:05 np0005548731 nova_compute[232433]: 2025-12-06 08:25:05.128 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:25:05 np0005548731 nova_compute[232433]: 2025-12-06 08:25:05.129 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  6 03:25:05 np0005548731 nova_compute[232433]: 2025-12-06 08:25:05.129 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 03:25:05 np0005548731 nova_compute[232433]: 2025-12-06 08:25:05.385 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:25:05 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:25:05 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:25:05 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:25:05.548 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:25:05 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 03:25:05 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3901546734' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 03:25:05 np0005548731 nova_compute[232433]: 2025-12-06 08:25:05.581 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.452s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 03:25:05 np0005548731 nova_compute[232433]: 2025-12-06 08:25:05.761 232437 WARNING nova.virt.libvirt.driver [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  6 03:25:05 np0005548731 nova_compute[232433]: 2025-12-06 08:25:05.763 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4146MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  6 03:25:05 np0005548731 nova_compute[232433]: 2025-12-06 08:25:05.763 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:25:05 np0005548731 nova_compute[232433]: 2025-12-06 08:25:05.763 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:25:05 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:25:05 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:25:05 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:25:05.917 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:25:06 np0005548731 nova_compute[232433]: 2025-12-06 08:25:06.031 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  6 03:25:06 np0005548731 nova_compute[232433]: 2025-12-06 08:25:06.031 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  6 03:25:06 np0005548731 nova_compute[232433]: 2025-12-06 08:25:06.051 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 03:25:06 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 03:25:06 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3298858900' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 03:25:06 np0005548731 nova_compute[232433]: 2025-12-06 08:25:06.498 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.448s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 03:25:06 np0005548731 nova_compute[232433]: 2025-12-06 08:25:06.504 232437 DEBUG nova.compute.provider_tree [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Inventory has not changed in ProviderTree for provider: 6d00757a-082f-486d-ae84-869a2ba2e6e7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  6 03:25:06 np0005548731 nova_compute[232433]: 2025-12-06 08:25:06.603 232437 DEBUG nova.scheduler.client.report [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Inventory has not changed for provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  6 03:25:06 np0005548731 nova_compute[232433]: 2025-12-06 08:25:06.605 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  6 03:25:06 np0005548731 nova_compute[232433]: 2025-12-06 08:25:06.605 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.842s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:25:07 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:25:07 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:25:07 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 03:25:07 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:25:07.550 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 03:25:07 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:25:07 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:25:07 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:25:07.921 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:25:08 np0005548731 nova_compute[232433]: 2025-12-06 08:25:08.113 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:25:09 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:25:09 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 03:25:09 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:25:09.554 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 03:25:09 np0005548731 ceph-mon[77458]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #178. Immutable memtables: 0.
Dec  6 03:25:09 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:25:09.802082) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec  6 03:25:09 np0005548731 ceph-mon[77458]: rocksdb: [db/flush_job.cc:856] [default] [JOB 113] Flushing memtable with next log file: 178
Dec  6 03:25:09 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765009509802123, "job": 113, "event": "flush_started", "num_memtables": 1, "num_entries": 389, "num_deletes": 259, "total_data_size": 352256, "memory_usage": 361144, "flush_reason": "Manual Compaction"}
Dec  6 03:25:09 np0005548731 ceph-mon[77458]: rocksdb: [db/flush_job.cc:885] [default] [JOB 113] Level-0 flush table #179: started
Dec  6 03:25:09 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765009509805855, "cf_name": "default", "job": 113, "event": "table_file_creation", "file_number": 179, "file_size": 232373, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 88022, "largest_seqno": 88406, "table_properties": {"data_size": 230045, "index_size": 427, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 837, "raw_key_size": 5756, "raw_average_key_size": 18, "raw_value_size": 225311, "raw_average_value_size": 713, "num_data_blocks": 18, "num_entries": 316, "num_filter_entries": 316, "num_deletions": 259, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765009500, "oldest_key_time": 1765009500, "file_creation_time": 1765009509, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bc01f9a1-fda8-4008-aee1-1fa5ff4371a1", "db_session_id": "CWBL8WMDM4D79BU86E9T", "orig_file_number": 179, "seqno_to_time_mapping": "N/A"}}
Dec  6 03:25:09 np0005548731 ceph-mon[77458]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 113] Flush lasted 3811 microseconds, and 1489 cpu microseconds.
Dec  6 03:25:09 np0005548731 ceph-mon[77458]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  6 03:25:09 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:25:09.805894) [db/flush_job.cc:967] [default] [JOB 113] Level-0 flush table #179: 232373 bytes OK
Dec  6 03:25:09 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:25:09.805909) [db/memtable_list.cc:519] [default] Level-0 commit table #179 started
Dec  6 03:25:09 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:25:09.807484) [db/memtable_list.cc:722] [default] Level-0 commit table #179: memtable #1 done
Dec  6 03:25:09 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:25:09.807500) EVENT_LOG_v1 {"time_micros": 1765009509807495, "job": 113, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec  6 03:25:09 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:25:09.807514) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec  6 03:25:09 np0005548731 ceph-mon[77458]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 113] Try to delete WAL files size 349658, prev total WAL file size 349658, number of live WAL files 2.
Dec  6 03:25:09 np0005548731 ceph-mon[77458]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000175.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  6 03:25:09 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:25:09.808007) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0033323830' seq:72057594037927935, type:22 .. '6C6F676D0033353335' seq:0, type:0; will stop at (end)
Dec  6 03:25:09 np0005548731 ceph-mon[77458]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 114] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec  6 03:25:09 np0005548731 ceph-mon[77458]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 113 Base level 0, inputs: [179(226KB)], [177(12MB)]
Dec  6 03:25:09 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765009509808077, "job": 114, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [179], "files_L6": [177], "score": -1, "input_data_size": 13128665, "oldest_snapshot_seqno": -1}
Dec  6 03:25:09 np0005548731 ceph-mon[77458]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 114] Generated table #180: 11436 keys, 13004192 bytes, temperature: kUnknown
Dec  6 03:25:09 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765009509888294, "cf_name": "default", "job": 114, "event": "table_file_creation", "file_number": 180, "file_size": 13004192, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12933472, "index_size": 41039, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 28613, "raw_key_size": 303088, "raw_average_key_size": 26, "raw_value_size": 12736573, "raw_average_value_size": 1113, "num_data_blocks": 1550, "num_entries": 11436, "num_filter_entries": 11436, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765002564, "oldest_key_time": 0, "file_creation_time": 1765009509, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bc01f9a1-fda8-4008-aee1-1fa5ff4371a1", "db_session_id": "CWBL8WMDM4D79BU86E9T", "orig_file_number": 180, "seqno_to_time_mapping": "N/A"}}
Dec  6 03:25:09 np0005548731 ceph-mon[77458]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  6 03:25:09 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:25:09.888592) [db/compaction/compaction_job.cc:1663] [default] [JOB 114] Compacted 1@0 + 1@6 files to L6 => 13004192 bytes
Dec  6 03:25:09 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:25:09.889931) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 163.5 rd, 161.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.2, 12.3 +0.0 blob) out(12.4 +0.0 blob), read-write-amplify(112.5) write-amplify(56.0) OK, records in: 11966, records dropped: 530 output_compression: NoCompression
Dec  6 03:25:09 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:25:09.889947) EVENT_LOG_v1 {"time_micros": 1765009509889939, "job": 114, "event": "compaction_finished", "compaction_time_micros": 80317, "compaction_time_cpu_micros": 41952, "output_level": 6, "num_output_files": 1, "total_output_size": 13004192, "num_input_records": 11966, "num_output_records": 11436, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec  6 03:25:09 np0005548731 ceph-mon[77458]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000179.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  6 03:25:09 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765009509890191, "job": 114, "event": "table_file_deletion", "file_number": 179}
Dec  6 03:25:09 np0005548731 ceph-mon[77458]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000177.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  6 03:25:09 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765009509892504, "job": 114, "event": "table_file_deletion", "file_number": 177}
Dec  6 03:25:09 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:25:09.807883) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 03:25:09 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:25:09.892678) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 03:25:09 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:25:09.892684) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 03:25:09 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:25:09.892687) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 03:25:09 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:25:09.892689) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 03:25:09 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:25:09.892691) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 03:25:09 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:25:09 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:25:09 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:25:09.925 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:25:10 np0005548731 nova_compute[232433]: 2025-12-06 08:25:10.439 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:25:11 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:25:11 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:25:11 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:25:11.558 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:25:11 np0005548731 nova_compute[232433]: 2025-12-06 08:25:11.606 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:25:11 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:25:11 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:25:11 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:25:11.927 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:25:12 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:25:13 np0005548731 nova_compute[232433]: 2025-12-06 08:25:13.117 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:25:13 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:25:13 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:25:13 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:25:13.560 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:25:13 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:25:13 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:25:13 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:25:13.931 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:25:14 np0005548731 nova_compute[232433]: 2025-12-06 08:25:14.248 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:25:15 np0005548731 nova_compute[232433]: 2025-12-06 08:25:15.443 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:25:15 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:25:15 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:25:15 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:25:15.562 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:25:15 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:25:15 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:25:15 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:25:15.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:25:17 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:25:17 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:25:17 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:25:17 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:25:17.564 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:25:17 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:25:17 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:25:17 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:25:17.937 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:25:18 np0005548731 nova_compute[232433]: 2025-12-06 08:25:18.118 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:25:19 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:25:19 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:25:19 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:25:19.567 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:25:19 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:25:19 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:25:19 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:25:19.940 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:25:20 np0005548731 nova_compute[232433]: 2025-12-06 08:25:20.447 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:25:20 np0005548731 ceph-mon[77458]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #181. Immutable memtables: 0.
Dec  6 03:25:20 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:25:20.836120) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec  6 03:25:20 np0005548731 ceph-mon[77458]: rocksdb: [db/flush_job.cc:856] [default] [JOB 115] Flushing memtable with next log file: 181
Dec  6 03:25:20 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765009520836147, "job": 115, "event": "flush_started", "num_memtables": 1, "num_entries": 373, "num_deletes": 251, "total_data_size": 316512, "memory_usage": 324024, "flush_reason": "Manual Compaction"}
Dec  6 03:25:20 np0005548731 ceph-mon[77458]: rocksdb: [db/flush_job.cc:885] [default] [JOB 115] Level-0 flush table #182: started
Dec  6 03:25:20 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765009520840932, "cf_name": "default", "job": 115, "event": "table_file_creation", "file_number": 182, "file_size": 208199, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 88411, "largest_seqno": 88779, "table_properties": {"data_size": 205998, "index_size": 364, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 773, "raw_key_size": 5534, "raw_average_key_size": 18, "raw_value_size": 201608, "raw_average_value_size": 676, "num_data_blocks": 16, "num_entries": 298, "num_filter_entries": 298, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765009510, "oldest_key_time": 1765009510, "file_creation_time": 1765009520, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bc01f9a1-fda8-4008-aee1-1fa5ff4371a1", "db_session_id": "CWBL8WMDM4D79BU86E9T", "orig_file_number": 182, "seqno_to_time_mapping": "N/A"}}
Dec  6 03:25:20 np0005548731 ceph-mon[77458]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 115] Flush lasted 4845 microseconds, and 959 cpu microseconds.
Dec  6 03:25:20 np0005548731 ceph-mon[77458]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  6 03:25:20 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:25:20.840964) [db/flush_job.cc:967] [default] [JOB 115] Level-0 flush table #182: 208199 bytes OK
Dec  6 03:25:20 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:25:20.840977) [db/memtable_list.cc:519] [default] Level-0 commit table #182 started
Dec  6 03:25:20 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:25:20.842618) [db/memtable_list.cc:722] [default] Level-0 commit table #182: memtable #1 done
Dec  6 03:25:20 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:25:20.842642) EVENT_LOG_v1 {"time_micros": 1765009520842638, "job": 115, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec  6 03:25:20 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:25:20.842654) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec  6 03:25:20 np0005548731 ceph-mon[77458]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 115] Try to delete WAL files size 314046, prev total WAL file size 314046, number of live WAL files 2.
Dec  6 03:25:20 np0005548731 ceph-mon[77458]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000178.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  6 03:25:20 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:25:20.842986) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730037353330' seq:72057594037927935, type:22 .. '7061786F730037373832' seq:0, type:0; will stop at (end)
Dec  6 03:25:20 np0005548731 ceph-mon[77458]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 116] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec  6 03:25:20 np0005548731 ceph-mon[77458]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 115 Base level 0, inputs: [182(203KB)], [180(12MB)]
Dec  6 03:25:20 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765009520843011, "job": 116, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [182], "files_L6": [180], "score": -1, "input_data_size": 13212391, "oldest_snapshot_seqno": -1}
Dec  6 03:25:20 np0005548731 ceph-mon[77458]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 116] Generated table #183: 11221 keys, 11300530 bytes, temperature: kUnknown
Dec  6 03:25:20 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765009520910233, "cf_name": "default", "job": 116, "event": "table_file_creation", "file_number": 183, "file_size": 11300530, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11232710, "index_size": 38651, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 28101, "raw_key_size": 299295, "raw_average_key_size": 26, "raw_value_size": 11040973, "raw_average_value_size": 983, "num_data_blocks": 1443, "num_entries": 11221, "num_filter_entries": 11221, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765002564, "oldest_key_time": 0, "file_creation_time": 1765009520, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bc01f9a1-fda8-4008-aee1-1fa5ff4371a1", "db_session_id": "CWBL8WMDM4D79BU86E9T", "orig_file_number": 183, "seqno_to_time_mapping": "N/A"}}
Dec  6 03:25:20 np0005548731 ceph-mon[77458]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  6 03:25:20 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:25:20.910733) [db/compaction/compaction_job.cc:1663] [default] [JOB 116] Compacted 1@0 + 1@6 files to L6 => 11300530 bytes
Dec  6 03:25:20 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:25:20.912794) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 196.2 rd, 167.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.2, 12.4 +0.0 blob) out(10.8 +0.0 blob), read-write-amplify(117.7) write-amplify(54.3) OK, records in: 11734, records dropped: 513 output_compression: NoCompression
Dec  6 03:25:20 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:25:20.912827) EVENT_LOG_v1 {"time_micros": 1765009520912813, "job": 116, "event": "compaction_finished", "compaction_time_micros": 67351, "compaction_time_cpu_micros": 28567, "output_level": 6, "num_output_files": 1, "total_output_size": 11300530, "num_input_records": 11734, "num_output_records": 11221, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec  6 03:25:20 np0005548731 ceph-mon[77458]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000182.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  6 03:25:20 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765009520913102, "job": 116, "event": "table_file_deletion", "file_number": 182}
Dec  6 03:25:20 np0005548731 ceph-mon[77458]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000180.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  6 03:25:20 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765009520917629, "job": 116, "event": "table_file_deletion", "file_number": 180}
Dec  6 03:25:20 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:25:20.842944) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 03:25:20 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:25:20.917711) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 03:25:20 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:25:20.917715) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 03:25:20 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:25:20.917716) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 03:25:20 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:25:20.917718) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 03:25:20 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:25:20.917719) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 03:25:21 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:25:21 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 03:25:21 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:25:21.569 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 03:25:21 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:25:21 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:25:21 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:25:21.942 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:25:22 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:25:23 np0005548731 nova_compute[232433]: 2025-12-06 08:25:23.121 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:25:23 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:25:23 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:25:23 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:25:23.573 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:25:23 np0005548731 podman[345298]: 2025-12-06 08:25:23.917456927 +0000 UTC m=+0.075664988 container health_status 26c2ce9bdb83c6db5ccad7f5b2a7cb4e9354ab0235ad2f942ea42c8feda2142b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec  6 03:25:23 np0005548731 podman[345300]: 2025-12-06 08:25:23.926914638 +0000 UTC m=+0.076988000 container health_status ae4fd08cdec31d6ae2a6b91ffff7deb6ca5a8375cb52ee4b685cc6f25796824e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, container_name=multipathd)
Dec  6 03:25:23 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:25:23 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:25:23 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:25:23.946 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:25:24 np0005548731 podman[345299]: 2025-12-06 08:25:24.005414485 +0000 UTC m=+0.150022755 container health_status a118c3becf56cfbcae208065771ebf10a65162bb7b96389971293e534823fb78 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Dec  6 03:25:25 np0005548731 nova_compute[232433]: 2025-12-06 08:25:25.490 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:25:25 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:25:25 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:25:25 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:25:25.575 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:25:25 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:25:25 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:25:25 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:25:25.950 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:25:27 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:25:27 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:25:27 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:25:27 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:25:27.578 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:25:27 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:25:27 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:25:27 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:25:27.952 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:25:28 np0005548731 nova_compute[232433]: 2025-12-06 08:25:28.172 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:25:29 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:25:29 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:25:29 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:25:29.581 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:25:29 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:25:29 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:25:29 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:25:29.956 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:25:30 np0005548731 nova_compute[232433]: 2025-12-06 08:25:30.493 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:25:31 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:25:31 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:25:31 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:25:31.583 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:25:31 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:25:31 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:25:31 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:25:31.958 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:25:32 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:25:33 np0005548731 nova_compute[232433]: 2025-12-06 08:25:33.175 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:25:33 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:25:33 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:25:33 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:25:33.586 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:25:33 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:25:33.630 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=107, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:ec:b3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '32:72:e7:89:e0:7d'}, ipsec=False) old=SB_Global(nb_cfg=106) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 03:25:33 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:25:33.631 143965 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec  6 03:25:33 np0005548731 nova_compute[232433]: 2025-12-06 08:25:33.631 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:25:33 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:25:33.632 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=9f96b960-b4f2-40bd-ae99-08121f5e8b78, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '107'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 03:25:33 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:25:33 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:25:33 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:25:33.961 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:25:34 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 03:25:34 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3315669685' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 03:25:35 np0005548731 nova_compute[232433]: 2025-12-06 08:25:35.542 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:25:35 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:25:35 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:25:35 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:25:35.588 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:25:35 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:25:35 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:25:35 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:25:35.964 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:25:37 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:25:37 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:25:37 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:25:37 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:25:37.592 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:25:37 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:25:37 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 03:25:37 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:25:37.967 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 03:25:38 np0005548731 nova_compute[232433]: 2025-12-06 08:25:38.178 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:25:39 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:25:39 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:25:39 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:25:39.595 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:25:39 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:25:39 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:25:39 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:25:39.974 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:25:40 np0005548731 nova_compute[232433]: 2025-12-06 08:25:40.545 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:25:41 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:25:41 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:25:41 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:25:41.597 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:25:41 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:25:41 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:25:41 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:25:41.977 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:25:42 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:25:43 np0005548731 nova_compute[232433]: 2025-12-06 08:25:43.180 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:25:43 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:25:43 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:25:43 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:25:43.600 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:25:43 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:25:43 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:25:43 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:25:43.979 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:25:45 np0005548731 nova_compute[232433]: 2025-12-06 08:25:45.600 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:25:45 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:25:45 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:25:45 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:25:45.602 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:25:45 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:25:45 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:25:45 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:25:45.983 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:25:47 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:25:47 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:25:47 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:25:47 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:25:47.605 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:25:47 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:25:47 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:25:47 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:25:47.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:25:48 np0005548731 nova_compute[232433]: 2025-12-06 08:25:48.183 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:25:49 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:25:49 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:25:49 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:25:49.609 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:25:49 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:25:49 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:25:49 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:25:49.989 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:25:50 np0005548731 nova_compute[232433]: 2025-12-06 08:25:50.604 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:25:51 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:25:51 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:25:51 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:25:51.612 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:25:51 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:25:51 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:25:51 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:25:51.993 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:25:52 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:25:53 np0005548731 nova_compute[232433]: 2025-12-06 08:25:53.184 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:25:53 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:25:53 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:25:53 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:25:53.615 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:25:53 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:25:53 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 03:25:53 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:25:53.995 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 03:25:54 np0005548731 podman[345484]: 2025-12-06 08:25:54.897421629 +0000 UTC m=+0.056699065 container health_status 26c2ce9bdb83c6db5ccad7f5b2a7cb4e9354ab0235ad2f942ea42c8feda2142b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, 
org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  6 03:25:54 np0005548731 podman[345486]: 2025-12-06 08:25:54.901352055 +0000 UTC m=+0.055845515 container health_status ae4fd08cdec31d6ae2a6b91ffff7deb6ca5a8375cb52ee4b685cc6f25796824e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, io.buildah.version=1.41.3)
Dec  6 03:25:54 np0005548731 podman[345485]: 2025-12-06 08:25:54.921759783 +0000 UTC m=+0.080497596 container health_status a118c3becf56cfbcae208065771ebf10a65162bb7b96389971293e534823fb78 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3)
Dec  6 03:25:55 np0005548731 nova_compute[232433]: 2025-12-06 08:25:55.607 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:25:55 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:25:55 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:25:55 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:25:55.617 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:25:55 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:25:56 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:25:56 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:25:55.998 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:25:56 np0005548731 nova_compute[232433]: 2025-12-06 08:25:56.126 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:25:56 np0005548731 nova_compute[232433]: 2025-12-06 08:25:56.126 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  6 03:25:56 np0005548731 nova_compute[232433]: 2025-12-06 08:25:56.126 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  6 03:25:56 np0005548731 nova_compute[232433]: 2025-12-06 08:25:56.147 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Dec  6 03:25:57 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:25:57 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:25:57 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:25:57 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:25:57.620 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:25:58 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:25:58 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:25:58 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:25:58.002 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:25:58 np0005548731 nova_compute[232433]: 2025-12-06 08:25:58.268 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:25:59 np0005548731 nova_compute[232433]: 2025-12-06 08:25:59.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:25:59 np0005548731 nova_compute[232433]: 2025-12-06 08:25:59.105 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:25:59 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:25:59 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:25:59 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:25:59.621 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:26:00 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:26:00 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:26:00 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:26:00.005 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:26:00 np0005548731 nova_compute[232433]: 2025-12-06 08:26:00.611 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:26:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:26:00.929 143965 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:26:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:26:00.930 143965 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:26:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:26:00.930 143965 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:26:01 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:26:01 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:26:01 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:26:01.624 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:26:01 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  6 03:26:01 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 03:26:01 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  6 03:26:02 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:26:02 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:26:02 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:26:02.008 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:26:02 np0005548731 nova_compute[232433]: 2025-12-06 08:26:02.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:26:02 np0005548731 nova_compute[232433]: 2025-12-06 08:26:02.105 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  6 03:26:02 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:26:03 np0005548731 nova_compute[232433]: 2025-12-06 08:26:03.099 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:26:03 np0005548731 nova_compute[232433]: 2025-12-06 08:26:03.293 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:26:03 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:26:03 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:26:03 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:26:03.627 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:26:04 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:26:04 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:26:04 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:26:04.010 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:26:04 np0005548731 nova_compute[232433]: 2025-12-06 08:26:04.069 232437 DEBUG oslo_concurrency.lockutils [None req-08669c18-324b-41cf-b519-d357bf470c4e 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] Acquiring lock "772293bc-ff3c-401f-b61b-905279cb0976" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:26:04 np0005548731 nova_compute[232433]: 2025-12-06 08:26:04.069 232437 DEBUG oslo_concurrency.lockutils [None req-08669c18-324b-41cf-b519-d357bf470c4e 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] Lock "772293bc-ff3c-401f-b61b-905279cb0976" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:26:04 np0005548731 nova_compute[232433]: 2025-12-06 08:26:04.085 232437 DEBUG nova.compute.manager [None req-08669c18-324b-41cf-b519-d357bf470c4e 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] [instance: 772293bc-ff3c-401f-b61b-905279cb0976] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Dec  6 03:26:04 np0005548731 nova_compute[232433]: 2025-12-06 08:26:04.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:26:04 np0005548731 nova_compute[232433]: 2025-12-06 08:26:04.182 232437 DEBUG oslo_concurrency.lockutils [None req-08669c18-324b-41cf-b519-d357bf470c4e 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:26:04 np0005548731 nova_compute[232433]: 2025-12-06 08:26:04.183 232437 DEBUG oslo_concurrency.lockutils [None req-08669c18-324b-41cf-b519-d357bf470c4e 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:26:04 np0005548731 nova_compute[232433]: 2025-12-06 08:26:04.189 232437 DEBUG nova.virt.hardware [None req-08669c18-324b-41cf-b519-d357bf470c4e 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Dec  6 03:26:04 np0005548731 nova_compute[232433]: 2025-12-06 08:26:04.189 232437 INFO nova.compute.claims [None req-08669c18-324b-41cf-b519-d357bf470c4e 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] [instance: 772293bc-ff3c-401f-b61b-905279cb0976] Claim successful on node compute-2.ctlplane.example.com#033[00m
Dec  6 03:26:04 np0005548731 nova_compute[232433]: 2025-12-06 08:26:04.348 232437 DEBUG nova.scheduler.client.report [None req-08669c18-324b-41cf-b519-d357bf470c4e 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] Refreshing inventories for resource provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Dec  6 03:26:04 np0005548731 nova_compute[232433]: 2025-12-06 08:26:04.382 232437 DEBUG nova.scheduler.client.report [None req-08669c18-324b-41cf-b519-d357bf470c4e 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] Updating ProviderTree inventory for provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Dec  6 03:26:04 np0005548731 nova_compute[232433]: 2025-12-06 08:26:04.383 232437 DEBUG nova.compute.provider_tree [None req-08669c18-324b-41cf-b519-d357bf470c4e 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] Updating inventory in ProviderTree for provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Dec  6 03:26:04 np0005548731 nova_compute[232433]: 2025-12-06 08:26:04.397 232437 DEBUG nova.scheduler.client.report [None req-08669c18-324b-41cf-b519-d357bf470c4e 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] Refreshing aggregate associations for resource provider 6d00757a-082f-486d-ae84-869a2ba2e6e7, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Dec  6 03:26:04 np0005548731 nova_compute[232433]: 2025-12-06 08:26:04.428 232437 DEBUG nova.scheduler.client.report [None req-08669c18-324b-41cf-b519-d357bf470c4e 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] Refreshing trait associations for resource provider 6d00757a-082f-486d-ae84-869a2ba2e6e7, traits: COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_STORAGE_BUS_USB,COMPUTE_ACCELERATORS,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_TRUSTED_CERTS,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_SSE,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_SSE2,HW_CPU_X86_SSE41,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_STORAGE_BUS_SATA,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_SECURITY_TPM_1_2,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_VOLUME_EXTEND,COMPUTE_NODE,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_SECURITY_TPM_2_0,COMPUTE_VIOMMU_MODEL_AUTO,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_SSE42,COMPUTE_STORAGE_BUS_FDC,COMPUTE_RESCUE_BFV,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_IMAGE_TYPE_ARI _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Dec  6 03:26:04 np0005548731 nova_compute[232433]: 2025-12-06 08:26:04.470 232437 DEBUG oslo_concurrency.processutils [None req-08669c18-324b-41cf-b519-d357bf470c4e 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 03:26:04 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 03:26:04 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3266479852' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 03:26:04 np0005548731 nova_compute[232433]: 2025-12-06 08:26:04.889 232437 DEBUG oslo_concurrency.processutils [None req-08669c18-324b-41cf-b519-d357bf470c4e 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.419s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 03:26:04 np0005548731 nova_compute[232433]: 2025-12-06 08:26:04.896 232437 DEBUG nova.compute.provider_tree [None req-08669c18-324b-41cf-b519-d357bf470c4e 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] Inventory has not changed in ProviderTree for provider: 6d00757a-082f-486d-ae84-869a2ba2e6e7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  6 03:26:04 np0005548731 nova_compute[232433]: 2025-12-06 08:26:04.911 232437 DEBUG nova.scheduler.client.report [None req-08669c18-324b-41cf-b519-d357bf470c4e 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] Inventory has not changed for provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  6 03:26:04 np0005548731 nova_compute[232433]: 2025-12-06 08:26:04.933 232437 DEBUG oslo_concurrency.lockutils [None req-08669c18-324b-41cf-b519-d357bf470c4e 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.750s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:26:04 np0005548731 nova_compute[232433]: 2025-12-06 08:26:04.934 232437 DEBUG nova.compute.manager [None req-08669c18-324b-41cf-b519-d357bf470c4e 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] [instance: 772293bc-ff3c-401f-b61b-905279cb0976] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Dec  6 03:26:04 np0005548731 nova_compute[232433]: 2025-12-06 08:26:04.985 232437 DEBUG nova.compute.manager [None req-08669c18-324b-41cf-b519-d357bf470c4e 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] [instance: 772293bc-ff3c-401f-b61b-905279cb0976] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Dec  6 03:26:04 np0005548731 nova_compute[232433]: 2025-12-06 08:26:04.985 232437 DEBUG nova.network.neutron [None req-08669c18-324b-41cf-b519-d357bf470c4e 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] [instance: 772293bc-ff3c-401f-b61b-905279cb0976] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Dec  6 03:26:05 np0005548731 nova_compute[232433]: 2025-12-06 08:26:05.015 232437 INFO nova.virt.libvirt.driver [None req-08669c18-324b-41cf-b519-d357bf470c4e 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] [instance: 772293bc-ff3c-401f-b61b-905279cb0976] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Dec  6 03:26:05 np0005548731 nova_compute[232433]: 2025-12-06 08:26:05.033 232437 DEBUG nova.compute.manager [None req-08669c18-324b-41cf-b519-d357bf470c4e 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] [instance: 772293bc-ff3c-401f-b61b-905279cb0976] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Dec  6 03:26:05 np0005548731 nova_compute[232433]: 2025-12-06 08:26:05.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:26:05 np0005548731 nova_compute[232433]: 2025-12-06 08:26:05.123 232437 DEBUG nova.compute.manager [None req-08669c18-324b-41cf-b519-d357bf470c4e 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] [instance: 772293bc-ff3c-401f-b61b-905279cb0976] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Dec  6 03:26:05 np0005548731 nova_compute[232433]: 2025-12-06 08:26:05.124 232437 DEBUG nova.virt.libvirt.driver [None req-08669c18-324b-41cf-b519-d357bf470c4e 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] [instance: 772293bc-ff3c-401f-b61b-905279cb0976] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Dec  6 03:26:05 np0005548731 nova_compute[232433]: 2025-12-06 08:26:05.125 232437 INFO nova.virt.libvirt.driver [None req-08669c18-324b-41cf-b519-d357bf470c4e 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] [instance: 772293bc-ff3c-401f-b61b-905279cb0976] Creating image(s)#033[00m
Dec  6 03:26:05 np0005548731 nova_compute[232433]: 2025-12-06 08:26:05.158 232437 DEBUG nova.storage.rbd_utils [None req-08669c18-324b-41cf-b519-d357bf470c4e 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] rbd image 772293bc-ff3c-401f-b61b-905279cb0976_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 03:26:05 np0005548731 nova_compute[232433]: 2025-12-06 08:26:05.185 232437 DEBUG nova.storage.rbd_utils [None req-08669c18-324b-41cf-b519-d357bf470c4e 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] rbd image 772293bc-ff3c-401f-b61b-905279cb0976_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 03:26:05 np0005548731 nova_compute[232433]: 2025-12-06 08:26:05.214 232437 DEBUG nova.storage.rbd_utils [None req-08669c18-324b-41cf-b519-d357bf470c4e 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] rbd image 772293bc-ff3c-401f-b61b-905279cb0976_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 03:26:05 np0005548731 nova_compute[232433]: 2025-12-06 08:26:05.218 232437 DEBUG oslo_concurrency.processutils [None req-08669c18-324b-41cf-b519-d357bf470c4e 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 03:26:05 np0005548731 nova_compute[232433]: 2025-12-06 08:26:05.259 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:26:05 np0005548731 nova_compute[232433]: 2025-12-06 08:26:05.260 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:26:05 np0005548731 nova_compute[232433]: 2025-12-06 08:26:05.261 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:26:05 np0005548731 nova_compute[232433]: 2025-12-06 08:26:05.262 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  6 03:26:05 np0005548731 nova_compute[232433]: 2025-12-06 08:26:05.262 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 03:26:05 np0005548731 nova_compute[232433]: 2025-12-06 08:26:05.301 232437 DEBUG oslo_concurrency.processutils [None req-08669c18-324b-41cf-b519-d357bf470c4e 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json" returned: 0 in 0.083s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 03:26:05 np0005548731 nova_compute[232433]: 2025-12-06 08:26:05.303 232437 DEBUG oslo_concurrency.lockutils [None req-08669c18-324b-41cf-b519-d357bf470c4e 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] Acquiring lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:26:05 np0005548731 nova_compute[232433]: 2025-12-06 08:26:05.304 232437 DEBUG oslo_concurrency.lockutils [None req-08669c18-324b-41cf-b519-d357bf470c4e 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:26:05 np0005548731 nova_compute[232433]: 2025-12-06 08:26:05.304 232437 DEBUG oslo_concurrency.lockutils [None req-08669c18-324b-41cf-b519-d357bf470c4e 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:26:05 np0005548731 nova_compute[232433]: 2025-12-06 08:26:05.335 232437 DEBUG nova.storage.rbd_utils [None req-08669c18-324b-41cf-b519-d357bf470c4e 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] rbd image 772293bc-ff3c-401f-b61b-905279cb0976_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 03:26:05 np0005548731 nova_compute[232433]: 2025-12-06 08:26:05.339 232437 DEBUG oslo_concurrency.processutils [None req-08669c18-324b-41cf-b519-d357bf470c4e 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef 772293bc-ff3c-401f-b61b-905279cb0976_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 03:26:05 np0005548731 nova_compute[232433]: 2025-12-06 08:26:05.615 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:26:05 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:26:05 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:26:05 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:26:05.629 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:26:05 np0005548731 nova_compute[232433]: 2025-12-06 08:26:05.707 232437 DEBUG oslo_concurrency.processutils [None req-08669c18-324b-41cf-b519-d357bf470c4e 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef 772293bc-ff3c-401f-b61b-905279cb0976_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.368s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 03:26:05 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 03:26:05 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/382677093' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 03:26:05 np0005548731 nova_compute[232433]: 2025-12-06 08:26:05.748 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.486s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 03:26:05 np0005548731 nova_compute[232433]: 2025-12-06 08:26:05.782 232437 DEBUG nova.storage.rbd_utils [None req-08669c18-324b-41cf-b519-d357bf470c4e 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] resizing rbd image 772293bc-ff3c-401f-b61b-905279cb0976_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Dec  6 03:26:05 np0005548731 nova_compute[232433]: 2025-12-06 08:26:05.878 232437 DEBUG nova.objects.instance [None req-08669c18-324b-41cf-b519-d357bf470c4e 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] Lazy-loading 'migration_context' on Instance uuid 772293bc-ff3c-401f-b61b-905279cb0976 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 03:26:05 np0005548731 nova_compute[232433]: 2025-12-06 08:26:05.898 232437 DEBUG nova.virt.libvirt.driver [None req-08669c18-324b-41cf-b519-d357bf470c4e 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] [instance: 772293bc-ff3c-401f-b61b-905279cb0976] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Dec  6 03:26:05 np0005548731 nova_compute[232433]: 2025-12-06 08:26:05.899 232437 DEBUG nova.virt.libvirt.driver [None req-08669c18-324b-41cf-b519-d357bf470c4e 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] [instance: 772293bc-ff3c-401f-b61b-905279cb0976] Ensure instance console log exists: /var/lib/nova/instances/772293bc-ff3c-401f-b61b-905279cb0976/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec  6 03:26:05 np0005548731 nova_compute[232433]: 2025-12-06 08:26:05.899 232437 DEBUG oslo_concurrency.lockutils [None req-08669c18-324b-41cf-b519-d357bf470c4e 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:26:05 np0005548731 nova_compute[232433]: 2025-12-06 08:26:05.900 232437 DEBUG oslo_concurrency.lockutils [None req-08669c18-324b-41cf-b519-d357bf470c4e 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:26:05 np0005548731 nova_compute[232433]: 2025-12-06 08:26:05.900 232437 DEBUG oslo_concurrency.lockutils [None req-08669c18-324b-41cf-b519-d357bf470c4e 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:26:05 np0005548731 nova_compute[232433]: 2025-12-06 08:26:05.992 232437 WARNING nova.virt.libvirt.driver [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  6 03:26:05 np0005548731 nova_compute[232433]: 2025-12-06 08:26:05.993 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4131MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  6 03:26:05 np0005548731 nova_compute[232433]: 2025-12-06 08:26:05.993 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:26:05 np0005548731 nova_compute[232433]: 2025-12-06 08:26:05.993 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:26:06 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:26:06 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:26:06 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:26:06.013 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:26:06 np0005548731 nova_compute[232433]: 2025-12-06 08:26:06.111 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Instance 772293bc-ff3c-401f-b61b-905279cb0976 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  6 03:26:06 np0005548731 nova_compute[232433]: 2025-12-06 08:26:06.111 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  6 03:26:06 np0005548731 nova_compute[232433]: 2025-12-06 08:26:06.111 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  6 03:26:06 np0005548731 nova_compute[232433]: 2025-12-06 08:26:06.114 232437 DEBUG nova.network.neutron [None req-08669c18-324b-41cf-b519-d357bf470c4e 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] [instance: 772293bc-ff3c-401f-b61b-905279cb0976] Successfully created port: da11c88a-b2ea-4144-a5d1-ee734cad8848 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Dec  6 03:26:06 np0005548731 nova_compute[232433]: 2025-12-06 08:26:06.164 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 03:26:06 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 03:26:06 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3789081481' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 03:26:06 np0005548731 nova_compute[232433]: 2025-12-06 08:26:06.589 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.425s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 03:26:06 np0005548731 nova_compute[232433]: 2025-12-06 08:26:06.594 232437 DEBUG nova.compute.provider_tree [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Inventory has not changed in ProviderTree for provider: 6d00757a-082f-486d-ae84-869a2ba2e6e7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  6 03:26:06 np0005548731 nova_compute[232433]: 2025-12-06 08:26:06.609 232437 DEBUG nova.scheduler.client.report [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Inventory has not changed for provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  6 03:26:06 np0005548731 nova_compute[232433]: 2025-12-06 08:26:06.629 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  6 03:26:06 np0005548731 nova_compute[232433]: 2025-12-06 08:26:06.629 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.636s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:26:06 np0005548731 nova_compute[232433]: 2025-12-06 08:26:06.874 232437 DEBUG nova.network.neutron [None req-08669c18-324b-41cf-b519-d357bf470c4e 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] [instance: 772293bc-ff3c-401f-b61b-905279cb0976] Successfully updated port: da11c88a-b2ea-4144-a5d1-ee734cad8848 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Dec  6 03:26:06 np0005548731 nova_compute[232433]: 2025-12-06 08:26:06.897 232437 DEBUG oslo_concurrency.lockutils [None req-08669c18-324b-41cf-b519-d357bf470c4e 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] Acquiring lock "refresh_cache-772293bc-ff3c-401f-b61b-905279cb0976" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 03:26:06 np0005548731 nova_compute[232433]: 2025-12-06 08:26:06.897 232437 DEBUG oslo_concurrency.lockutils [None req-08669c18-324b-41cf-b519-d357bf470c4e 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] Acquired lock "refresh_cache-772293bc-ff3c-401f-b61b-905279cb0976" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 03:26:06 np0005548731 nova_compute[232433]: 2025-12-06 08:26:06.898 232437 DEBUG nova.network.neutron [None req-08669c18-324b-41cf-b519-d357bf470c4e 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] [instance: 772293bc-ff3c-401f-b61b-905279cb0976] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec  6 03:26:07 np0005548731 nova_compute[232433]: 2025-12-06 08:26:07.004 232437 DEBUG nova.compute.manager [req-9932bffc-682d-4fda-88f6-8e8b97d0b2e1 req-af6bb826-c55f-4535-a84d-a88550207010 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 772293bc-ff3c-401f-b61b-905279cb0976] Received event network-changed-da11c88a-b2ea-4144-a5d1-ee734cad8848 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 03:26:07 np0005548731 nova_compute[232433]: 2025-12-06 08:26:07.004 232437 DEBUG nova.compute.manager [req-9932bffc-682d-4fda-88f6-8e8b97d0b2e1 req-af6bb826-c55f-4535-a84d-a88550207010 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 772293bc-ff3c-401f-b61b-905279cb0976] Refreshing instance network info cache due to event network-changed-da11c88a-b2ea-4144-a5d1-ee734cad8848. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  6 03:26:07 np0005548731 nova_compute[232433]: 2025-12-06 08:26:07.005 232437 DEBUG oslo_concurrency.lockutils [req-9932bffc-682d-4fda-88f6-8e8b97d0b2e1 req-af6bb826-c55f-4535-a84d-a88550207010 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-772293bc-ff3c-401f-b61b-905279cb0976" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  6 03:26:07 np0005548731 nova_compute[232433]: 2025-12-06 08:26:07.056 232437 DEBUG nova.network.neutron [None req-08669c18-324b-41cf-b519-d357bf470c4e 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] [instance: 772293bc-ff3c-401f-b61b-905279cb0976] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec  6 03:26:07 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:26:07 np0005548731 nova_compute[232433]: 2025-12-06 08:26:07.629 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:26:07 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:26:07 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:26:07 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:26:07.631 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:26:07 np0005548731 nova_compute[232433]: 2025-12-06 08:26:07.919 232437 DEBUG nova.network.neutron [None req-08669c18-324b-41cf-b519-d357bf470c4e 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] [instance: 772293bc-ff3c-401f-b61b-905279cb0976] Updating instance_info_cache with network_info: [{"id": "da11c88a-b2ea-4144-a5d1-ee734cad8848", "address": "fa:16:3e:99:11:05", "network": {"id": "0cfa750a-9a18-4c24-bbf9-75517f0157ee", "bridge": "br-int", "label": "tempest-TestServerMultinode-1982834598-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "41c7ac10745449b3a23d724093a203c1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapda11c88a-b2", "ovs_interfaceid": "da11c88a-b2ea-4144-a5d1-ee734cad8848", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 03:26:07 np0005548731 nova_compute[232433]: 2025-12-06 08:26:07.941 232437 DEBUG oslo_concurrency.lockutils [None req-08669c18-324b-41cf-b519-d357bf470c4e 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] Releasing lock "refresh_cache-772293bc-ff3c-401f-b61b-905279cb0976" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 03:26:07 np0005548731 nova_compute[232433]: 2025-12-06 08:26:07.942 232437 DEBUG nova.compute.manager [None req-08669c18-324b-41cf-b519-d357bf470c4e 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] [instance: 772293bc-ff3c-401f-b61b-905279cb0976] Instance network_info: |[{"id": "da11c88a-b2ea-4144-a5d1-ee734cad8848", "address": "fa:16:3e:99:11:05", "network": {"id": "0cfa750a-9a18-4c24-bbf9-75517f0157ee", "bridge": "br-int", "label": "tempest-TestServerMultinode-1982834598-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "41c7ac10745449b3a23d724093a203c1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapda11c88a-b2", "ovs_interfaceid": "da11c88a-b2ea-4144-a5d1-ee734cad8848", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Dec  6 03:26:07 np0005548731 nova_compute[232433]: 2025-12-06 08:26:07.942 232437 DEBUG oslo_concurrency.lockutils [req-9932bffc-682d-4fda-88f6-8e8b97d0b2e1 req-af6bb826-c55f-4535-a84d-a88550207010 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-772293bc-ff3c-401f-b61b-905279cb0976" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  6 03:26:07 np0005548731 nova_compute[232433]: 2025-12-06 08:26:07.943 232437 DEBUG nova.network.neutron [req-9932bffc-682d-4fda-88f6-8e8b97d0b2e1 req-af6bb826-c55f-4535-a84d-a88550207010 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 772293bc-ff3c-401f-b61b-905279cb0976] Refreshing network info cache for port da11c88a-b2ea-4144-a5d1-ee734cad8848 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  6 03:26:07 np0005548731 nova_compute[232433]: 2025-12-06 08:26:07.949 232437 DEBUG nova.virt.libvirt.driver [None req-08669c18-324b-41cf-b519-d357bf470c4e 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] [instance: 772293bc-ff3c-401f-b61b-905279cb0976] Start _get_guest_xml network_info=[{"id": "da11c88a-b2ea-4144-a5d1-ee734cad8848", "address": "fa:16:3e:99:11:05", "network": {"id": "0cfa750a-9a18-4c24-bbf9-75517f0157ee", "bridge": "br-int", "label": "tempest-TestServerMultinode-1982834598-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "41c7ac10745449b3a23d724093a203c1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapda11c88a-b2", "ovs_interfaceid": "da11c88a-b2ea-4144-a5d1-ee734cad8848", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'device_type': 'disk', 'encryption_secret_uuid': None, 'disk_bus': 'virtio', 'boot_index': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'encryption_options': None, 'size': 0, 'encrypted': False, 'image_id': '6efab05d-c7cf-4770-a5c3-c806a2739063'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec  6 03:26:07 np0005548731 nova_compute[232433]: 2025-12-06 08:26:07.956 232437 WARNING nova.virt.libvirt.driver [None req-08669c18-324b-41cf-b519-d357bf470c4e 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  6 03:26:07 np0005548731 nova_compute[232433]: 2025-12-06 08:26:07.963 232437 DEBUG nova.virt.libvirt.host [None req-08669c18-324b-41cf-b519-d357bf470c4e 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec  6 03:26:07 np0005548731 nova_compute[232433]: 2025-12-06 08:26:07.965 232437 DEBUG nova.virt.libvirt.host [None req-08669c18-324b-41cf-b519-d357bf470c4e 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec  6 03:26:07 np0005548731 nova_compute[232433]: 2025-12-06 08:26:07.979 232437 DEBUG nova.virt.libvirt.host [None req-08669c18-324b-41cf-b519-d357bf470c4e 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec  6 03:26:07 np0005548731 nova_compute[232433]: 2025-12-06 08:26:07.981 232437 DEBUG nova.virt.libvirt.host [None req-08669c18-324b-41cf-b519-d357bf470c4e 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec  6 03:26:07 np0005548731 nova_compute[232433]: 2025-12-06 08:26:07.983 232437 DEBUG nova.virt.libvirt.driver [None req-08669c18-324b-41cf-b519-d357bf470c4e 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec  6 03:26:07 np0005548731 nova_compute[232433]: 2025-12-06 08:26:07.985 232437 DEBUG nova.virt.hardware [None req-08669c18-324b-41cf-b519-d357bf470c4e 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-06T06:56:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='25848a18-11d9-4f11-80b5-5d005675c76d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec  6 03:26:07 np0005548731 nova_compute[232433]: 2025-12-06 08:26:07.985 232437 DEBUG nova.virt.hardware [None req-08669c18-324b-41cf-b519-d357bf470c4e 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec  6 03:26:07 np0005548731 nova_compute[232433]: 2025-12-06 08:26:07.986 232437 DEBUG nova.virt.hardware [None req-08669c18-324b-41cf-b519-d357bf470c4e 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec  6 03:26:07 np0005548731 nova_compute[232433]: 2025-12-06 08:26:07.986 232437 DEBUG nova.virt.hardware [None req-08669c18-324b-41cf-b519-d357bf470c4e 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec  6 03:26:07 np0005548731 nova_compute[232433]: 2025-12-06 08:26:07.987 232437 DEBUG nova.virt.hardware [None req-08669c18-324b-41cf-b519-d357bf470c4e 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec  6 03:26:07 np0005548731 nova_compute[232433]: 2025-12-06 08:26:07.987 232437 DEBUG nova.virt.hardware [None req-08669c18-324b-41cf-b519-d357bf470c4e 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec  6 03:26:07 np0005548731 nova_compute[232433]: 2025-12-06 08:26:07.987 232437 DEBUG nova.virt.hardware [None req-08669c18-324b-41cf-b519-d357bf470c4e 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec  6 03:26:07 np0005548731 nova_compute[232433]: 2025-12-06 08:26:07.988 232437 DEBUG nova.virt.hardware [None req-08669c18-324b-41cf-b519-d357bf470c4e 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec  6 03:26:07 np0005548731 nova_compute[232433]: 2025-12-06 08:26:07.988 232437 DEBUG nova.virt.hardware [None req-08669c18-324b-41cf-b519-d357bf470c4e 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec  6 03:26:07 np0005548731 nova_compute[232433]: 2025-12-06 08:26:07.989 232437 DEBUG nova.virt.hardware [None req-08669c18-324b-41cf-b519-d357bf470c4e 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec  6 03:26:07 np0005548731 nova_compute[232433]: 2025-12-06 08:26:07.989 232437 DEBUG nova.virt.hardware [None req-08669c18-324b-41cf-b519-d357bf470c4e 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec  6 03:26:07 np0005548731 nova_compute[232433]: 2025-12-06 08:26:07.995 232437 DEBUG oslo_concurrency.processutils [None req-08669c18-324b-41cf-b519-d357bf470c4e 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 03:26:08 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:26:08 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:26:08 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:26:08.016 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:26:08 np0005548731 nova_compute[232433]: 2025-12-06 08:26:08.295 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:26:08 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  6 03:26:08 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/4278581340' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  6 03:26:08 np0005548731 nova_compute[232433]: 2025-12-06 08:26:08.443 232437 DEBUG oslo_concurrency.processutils [None req-08669c18-324b-41cf-b519-d357bf470c4e 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.448s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 03:26:08 np0005548731 nova_compute[232433]: 2025-12-06 08:26:08.474 232437 DEBUG nova.storage.rbd_utils [None req-08669c18-324b-41cf-b519-d357bf470c4e 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] rbd image 772293bc-ff3c-401f-b61b-905279cb0976_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 03:26:08 np0005548731 nova_compute[232433]: 2025-12-06 08:26:08.481 232437 DEBUG oslo_concurrency.processutils [None req-08669c18-324b-41cf-b519-d357bf470c4e 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 03:26:08 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 03:26:08 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 03:26:08 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  6 03:26:08 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2751347995' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  6 03:26:08 np0005548731 nova_compute[232433]: 2025-12-06 08:26:08.952 232437 DEBUG oslo_concurrency.processutils [None req-08669c18-324b-41cf-b519-d357bf470c4e 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.471s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 03:26:08 np0005548731 nova_compute[232433]: 2025-12-06 08:26:08.954 232437 DEBUG nova.virt.libvirt.vif [None req-08669c18-324b-41cf-b519-d357bf470c4e 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T08:26:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestServerMultinode-server-618286686',display_name='tempest-TestServerMultinode-server-618286686',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-testservermultinode-server-618286686',id=221,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='7102dcd5b58d4dec801a71dacc60eaaf',ramdisk_id='',reservation_id='r-71wgmgy0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,admin,reader',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestServerMultinode-1864301627',owner_user_name='tempest-TestServerMultinode-186
4301627-project-admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T08:26:05Z,user_data=None,user_id='4989b6252b64443aaec21b075dbc29d9',uuid=772293bc-ff3c-401f-b61b-905279cb0976,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "da11c88a-b2ea-4144-a5d1-ee734cad8848", "address": "fa:16:3e:99:11:05", "network": {"id": "0cfa750a-9a18-4c24-bbf9-75517f0157ee", "bridge": "br-int", "label": "tempest-TestServerMultinode-1982834598-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "41c7ac10745449b3a23d724093a203c1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapda11c88a-b2", "ovs_interfaceid": "da11c88a-b2ea-4144-a5d1-ee734cad8848", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec  6 03:26:08 np0005548731 nova_compute[232433]: 2025-12-06 08:26:08.955 232437 DEBUG nova.network.os_vif_util [None req-08669c18-324b-41cf-b519-d357bf470c4e 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] Converting VIF {"id": "da11c88a-b2ea-4144-a5d1-ee734cad8848", "address": "fa:16:3e:99:11:05", "network": {"id": "0cfa750a-9a18-4c24-bbf9-75517f0157ee", "bridge": "br-int", "label": "tempest-TestServerMultinode-1982834598-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "41c7ac10745449b3a23d724093a203c1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapda11c88a-b2", "ovs_interfaceid": "da11c88a-b2ea-4144-a5d1-ee734cad8848", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 03:26:08 np0005548731 nova_compute[232433]: 2025-12-06 08:26:08.956 232437 DEBUG nova.network.os_vif_util [None req-08669c18-324b-41cf-b519-d357bf470c4e 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:99:11:05,bridge_name='br-int',has_traffic_filtering=True,id=da11c88a-b2ea-4144-a5d1-ee734cad8848,network=Network(0cfa750a-9a18-4c24-bbf9-75517f0157ee),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapda11c88a-b2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 03:26:08 np0005548731 nova_compute[232433]: 2025-12-06 08:26:08.957 232437 DEBUG nova.objects.instance [None req-08669c18-324b-41cf-b519-d357bf470c4e 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] Lazy-loading 'pci_devices' on Instance uuid 772293bc-ff3c-401f-b61b-905279cb0976 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 03:26:08 np0005548731 nova_compute[232433]: 2025-12-06 08:26:08.985 232437 DEBUG nova.virt.libvirt.driver [None req-08669c18-324b-41cf-b519-d357bf470c4e 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] [instance: 772293bc-ff3c-401f-b61b-905279cb0976] End _get_guest_xml xml=<domain type="kvm">
Dec  6 03:26:08 np0005548731 nova_compute[232433]:  <uuid>772293bc-ff3c-401f-b61b-905279cb0976</uuid>
Dec  6 03:26:08 np0005548731 nova_compute[232433]:  <name>instance-000000dd</name>
Dec  6 03:26:08 np0005548731 nova_compute[232433]:  <memory>131072</memory>
Dec  6 03:26:08 np0005548731 nova_compute[232433]:  <vcpu>1</vcpu>
Dec  6 03:26:08 np0005548731 nova_compute[232433]:  <metadata>
Dec  6 03:26:08 np0005548731 nova_compute[232433]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec  6 03:26:08 np0005548731 nova_compute[232433]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec  6 03:26:08 np0005548731 nova_compute[232433]:      <nova:name>tempest-TestServerMultinode-server-618286686</nova:name>
Dec  6 03:26:08 np0005548731 nova_compute[232433]:      <nova:creationTime>2025-12-06 08:26:07</nova:creationTime>
Dec  6 03:26:08 np0005548731 nova_compute[232433]:      <nova:flavor name="m1.nano">
Dec  6 03:26:08 np0005548731 nova_compute[232433]:        <nova:memory>128</nova:memory>
Dec  6 03:26:08 np0005548731 nova_compute[232433]:        <nova:disk>1</nova:disk>
Dec  6 03:26:08 np0005548731 nova_compute[232433]:        <nova:swap>0</nova:swap>
Dec  6 03:26:08 np0005548731 nova_compute[232433]:        <nova:ephemeral>0</nova:ephemeral>
Dec  6 03:26:08 np0005548731 nova_compute[232433]:        <nova:vcpus>1</nova:vcpus>
Dec  6 03:26:08 np0005548731 nova_compute[232433]:      </nova:flavor>
Dec  6 03:26:08 np0005548731 nova_compute[232433]:      <nova:owner>
Dec  6 03:26:08 np0005548731 nova_compute[232433]:        <nova:user uuid="4989b6252b64443aaec21b075dbc29d9">tempest-TestServerMultinode-1864301627-project-admin</nova:user>
Dec  6 03:26:08 np0005548731 nova_compute[232433]:        <nova:project uuid="7102dcd5b58d4dec801a71dacc60eaaf">tempest-TestServerMultinode-1864301627</nova:project>
Dec  6 03:26:08 np0005548731 nova_compute[232433]:      </nova:owner>
Dec  6 03:26:08 np0005548731 nova_compute[232433]:      <nova:root type="image" uuid="6efab05d-c7cf-4770-a5c3-c806a2739063"/>
Dec  6 03:26:08 np0005548731 nova_compute[232433]:      <nova:ports>
Dec  6 03:26:08 np0005548731 nova_compute[232433]:        <nova:port uuid="da11c88a-b2ea-4144-a5d1-ee734cad8848">
Dec  6 03:26:08 np0005548731 nova_compute[232433]:          <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Dec  6 03:26:08 np0005548731 nova_compute[232433]:        </nova:port>
Dec  6 03:26:08 np0005548731 nova_compute[232433]:      </nova:ports>
Dec  6 03:26:08 np0005548731 nova_compute[232433]:    </nova:instance>
Dec  6 03:26:08 np0005548731 nova_compute[232433]:  </metadata>
Dec  6 03:26:08 np0005548731 nova_compute[232433]:  <sysinfo type="smbios">
Dec  6 03:26:08 np0005548731 nova_compute[232433]:    <system>
Dec  6 03:26:08 np0005548731 nova_compute[232433]:      <entry name="manufacturer">RDO</entry>
Dec  6 03:26:08 np0005548731 nova_compute[232433]:      <entry name="product">OpenStack Compute</entry>
Dec  6 03:26:08 np0005548731 nova_compute[232433]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec  6 03:26:08 np0005548731 nova_compute[232433]:      <entry name="serial">772293bc-ff3c-401f-b61b-905279cb0976</entry>
Dec  6 03:26:08 np0005548731 nova_compute[232433]:      <entry name="uuid">772293bc-ff3c-401f-b61b-905279cb0976</entry>
Dec  6 03:26:08 np0005548731 nova_compute[232433]:      <entry name="family">Virtual Machine</entry>
Dec  6 03:26:08 np0005548731 nova_compute[232433]:    </system>
Dec  6 03:26:08 np0005548731 nova_compute[232433]:  </sysinfo>
Dec  6 03:26:08 np0005548731 nova_compute[232433]:  <os>
Dec  6 03:26:08 np0005548731 nova_compute[232433]:    <type arch="x86_64" machine="q35">hvm</type>
Dec  6 03:26:08 np0005548731 nova_compute[232433]:    <boot dev="hd"/>
Dec  6 03:26:08 np0005548731 nova_compute[232433]:    <smbios mode="sysinfo"/>
Dec  6 03:26:08 np0005548731 nova_compute[232433]:  </os>
Dec  6 03:26:08 np0005548731 nova_compute[232433]:  <features>
Dec  6 03:26:08 np0005548731 nova_compute[232433]:    <acpi/>
Dec  6 03:26:08 np0005548731 nova_compute[232433]:    <apic/>
Dec  6 03:26:08 np0005548731 nova_compute[232433]:    <vmcoreinfo/>
Dec  6 03:26:08 np0005548731 nova_compute[232433]:  </features>
Dec  6 03:26:08 np0005548731 nova_compute[232433]:  <clock offset="utc">
Dec  6 03:26:08 np0005548731 nova_compute[232433]:    <timer name="pit" tickpolicy="delay"/>
Dec  6 03:26:08 np0005548731 nova_compute[232433]:    <timer name="rtc" tickpolicy="catchup"/>
Dec  6 03:26:08 np0005548731 nova_compute[232433]:    <timer name="hpet" present="no"/>
Dec  6 03:26:08 np0005548731 nova_compute[232433]:  </clock>
Dec  6 03:26:08 np0005548731 nova_compute[232433]:  <cpu mode="custom" match="exact">
Dec  6 03:26:08 np0005548731 nova_compute[232433]:    <model>Nehalem</model>
Dec  6 03:26:08 np0005548731 nova_compute[232433]:    <topology sockets="1" cores="1" threads="1"/>
Dec  6 03:26:08 np0005548731 nova_compute[232433]:  </cpu>
Dec  6 03:26:08 np0005548731 nova_compute[232433]:  <devices>
Dec  6 03:26:08 np0005548731 nova_compute[232433]:    <disk type="network" device="disk">
Dec  6 03:26:08 np0005548731 nova_compute[232433]:      <driver type="raw" cache="none"/>
Dec  6 03:26:08 np0005548731 nova_compute[232433]:      <source protocol="rbd" name="vms/772293bc-ff3c-401f-b61b-905279cb0976_disk">
Dec  6 03:26:08 np0005548731 nova_compute[232433]:        <host name="192.168.122.100" port="6789"/>
Dec  6 03:26:08 np0005548731 nova_compute[232433]:        <host name="192.168.122.102" port="6789"/>
Dec  6 03:26:08 np0005548731 nova_compute[232433]:        <host name="192.168.122.101" port="6789"/>
Dec  6 03:26:08 np0005548731 nova_compute[232433]:      </source>
Dec  6 03:26:08 np0005548731 nova_compute[232433]:      <auth username="openstack">
Dec  6 03:26:08 np0005548731 nova_compute[232433]:        <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec  6 03:26:08 np0005548731 nova_compute[232433]:      </auth>
Dec  6 03:26:08 np0005548731 nova_compute[232433]:      <target dev="vda" bus="virtio"/>
Dec  6 03:26:08 np0005548731 nova_compute[232433]:    </disk>
Dec  6 03:26:08 np0005548731 nova_compute[232433]:    <disk type="network" device="cdrom">
Dec  6 03:26:08 np0005548731 nova_compute[232433]:      <driver type="raw" cache="none"/>
Dec  6 03:26:08 np0005548731 nova_compute[232433]:      <source protocol="rbd" name="vms/772293bc-ff3c-401f-b61b-905279cb0976_disk.config">
Dec  6 03:26:08 np0005548731 nova_compute[232433]:        <host name="192.168.122.100" port="6789"/>
Dec  6 03:26:08 np0005548731 nova_compute[232433]:        <host name="192.168.122.102" port="6789"/>
Dec  6 03:26:08 np0005548731 nova_compute[232433]:        <host name="192.168.122.101" port="6789"/>
Dec  6 03:26:08 np0005548731 nova_compute[232433]:      </source>
Dec  6 03:26:08 np0005548731 nova_compute[232433]:      <auth username="openstack">
Dec  6 03:26:08 np0005548731 nova_compute[232433]:        <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec  6 03:26:08 np0005548731 nova_compute[232433]:      </auth>
Dec  6 03:26:08 np0005548731 nova_compute[232433]:      <target dev="sda" bus="sata"/>
Dec  6 03:26:08 np0005548731 nova_compute[232433]:    </disk>
Dec  6 03:26:08 np0005548731 nova_compute[232433]:    <interface type="ethernet">
Dec  6 03:26:08 np0005548731 nova_compute[232433]:      <mac address="fa:16:3e:99:11:05"/>
Dec  6 03:26:08 np0005548731 nova_compute[232433]:      <model type="virtio"/>
Dec  6 03:26:08 np0005548731 nova_compute[232433]:      <driver name="vhost" rx_queue_size="512"/>
Dec  6 03:26:08 np0005548731 nova_compute[232433]:      <mtu size="1442"/>
Dec  6 03:26:08 np0005548731 nova_compute[232433]:      <target dev="tapda11c88a-b2"/>
Dec  6 03:26:08 np0005548731 nova_compute[232433]:    </interface>
Dec  6 03:26:08 np0005548731 nova_compute[232433]:    <serial type="pty">
Dec  6 03:26:08 np0005548731 nova_compute[232433]:      <log file="/var/lib/nova/instances/772293bc-ff3c-401f-b61b-905279cb0976/console.log" append="off"/>
Dec  6 03:26:08 np0005548731 nova_compute[232433]:    </serial>
Dec  6 03:26:08 np0005548731 nova_compute[232433]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec  6 03:26:08 np0005548731 nova_compute[232433]:    <video>
Dec  6 03:26:08 np0005548731 nova_compute[232433]:      <model type="virtio"/>
Dec  6 03:26:08 np0005548731 nova_compute[232433]:    </video>
Dec  6 03:26:08 np0005548731 nova_compute[232433]:    <input type="tablet" bus="usb"/>
Dec  6 03:26:08 np0005548731 nova_compute[232433]:    <rng model="virtio">
Dec  6 03:26:08 np0005548731 nova_compute[232433]:      <backend model="random">/dev/urandom</backend>
Dec  6 03:26:08 np0005548731 nova_compute[232433]:    </rng>
Dec  6 03:26:08 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root"/>
Dec  6 03:26:08 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:26:08 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:26:08 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:26:08 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:26:08 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:26:08 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:26:08 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:26:08 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:26:08 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:26:08 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:26:08 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:26:08 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:26:08 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:26:08 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:26:08 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:26:08 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:26:08 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:26:08 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:26:08 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:26:08 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:26:08 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:26:08 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:26:08 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:26:08 np0005548731 nova_compute[232433]:    <controller type="pci" model="pcie-root-port"/>
Dec  6 03:26:08 np0005548731 nova_compute[232433]:    <controller type="usb" index="0"/>
Dec  6 03:26:08 np0005548731 nova_compute[232433]:    <memballoon model="virtio">
Dec  6 03:26:08 np0005548731 nova_compute[232433]:      <stats period="10"/>
Dec  6 03:26:08 np0005548731 nova_compute[232433]:    </memballoon>
Dec  6 03:26:08 np0005548731 nova_compute[232433]:  </devices>
Dec  6 03:26:08 np0005548731 nova_compute[232433]: </domain>
Dec  6 03:26:08 np0005548731 nova_compute[232433]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec  6 03:26:08 np0005548731 nova_compute[232433]: 2025-12-06 08:26:08.987 232437 DEBUG nova.compute.manager [None req-08669c18-324b-41cf-b519-d357bf470c4e 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] [instance: 772293bc-ff3c-401f-b61b-905279cb0976] Preparing to wait for external event network-vif-plugged-da11c88a-b2ea-4144-a5d1-ee734cad8848 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec  6 03:26:08 np0005548731 nova_compute[232433]: 2025-12-06 08:26:08.988 232437 DEBUG oslo_concurrency.lockutils [None req-08669c18-324b-41cf-b519-d357bf470c4e 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] Acquiring lock "772293bc-ff3c-401f-b61b-905279cb0976-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:26:08 np0005548731 nova_compute[232433]: 2025-12-06 08:26:08.988 232437 DEBUG oslo_concurrency.lockutils [None req-08669c18-324b-41cf-b519-d357bf470c4e 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] Lock "772293bc-ff3c-401f-b61b-905279cb0976-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:26:08 np0005548731 nova_compute[232433]: 2025-12-06 08:26:08.989 232437 DEBUG oslo_concurrency.lockutils [None req-08669c18-324b-41cf-b519-d357bf470c4e 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] Lock "772293bc-ff3c-401f-b61b-905279cb0976-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:26:08 np0005548731 nova_compute[232433]: 2025-12-06 08:26:08.991 232437 DEBUG nova.virt.libvirt.vif [None req-08669c18-324b-41cf-b519-d357bf470c4e 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T08:26:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestServerMultinode-server-618286686',display_name='tempest-TestServerMultinode-server-618286686',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-testservermultinode-server-618286686',id=221,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='7102dcd5b58d4dec801a71dacc60eaaf',ramdisk_id='',reservation_id='r-71wgmgy0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,admin,reader',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestServerMultinode-1864301627',owner_user_name='tempest-TestServerMultinode-1864301627-project-admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T08:26:05Z,user_data=None,user_id='4989b6252b64443aaec21b075dbc29d9',uuid=772293bc-ff3c-401f-b61b-905279cb0976,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "da11c88a-b2ea-4144-a5d1-ee734cad8848", "address": "fa:16:3e:99:11:05", "network": {"id": "0cfa750a-9a18-4c24-bbf9-75517f0157ee", "bridge": "br-int", "label": "tempest-TestServerMultinode-1982834598-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "41c7ac10745449b3a23d724093a203c1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapda11c88a-b2", "ovs_interfaceid": "da11c88a-b2ea-4144-a5d1-ee734cad8848", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec  6 03:26:08 np0005548731 nova_compute[232433]: 2025-12-06 08:26:08.991 232437 DEBUG nova.network.os_vif_util [None req-08669c18-324b-41cf-b519-d357bf470c4e 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] Converting VIF {"id": "da11c88a-b2ea-4144-a5d1-ee734cad8848", "address": "fa:16:3e:99:11:05", "network": {"id": "0cfa750a-9a18-4c24-bbf9-75517f0157ee", "bridge": "br-int", "label": "tempest-TestServerMultinode-1982834598-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "41c7ac10745449b3a23d724093a203c1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapda11c88a-b2", "ovs_interfaceid": "da11c88a-b2ea-4144-a5d1-ee734cad8848", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 03:26:08 np0005548731 nova_compute[232433]: 2025-12-06 08:26:08.993 232437 DEBUG nova.network.os_vif_util [None req-08669c18-324b-41cf-b519-d357bf470c4e 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:99:11:05,bridge_name='br-int',has_traffic_filtering=True,id=da11c88a-b2ea-4144-a5d1-ee734cad8848,network=Network(0cfa750a-9a18-4c24-bbf9-75517f0157ee),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapda11c88a-b2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 03:26:08 np0005548731 nova_compute[232433]: 2025-12-06 08:26:08.994 232437 DEBUG os_vif [None req-08669c18-324b-41cf-b519-d357bf470c4e 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:99:11:05,bridge_name='br-int',has_traffic_filtering=True,id=da11c88a-b2ea-4144-a5d1-ee734cad8848,network=Network(0cfa750a-9a18-4c24-bbf9-75517f0157ee),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapda11c88a-b2') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec  6 03:26:08 np0005548731 nova_compute[232433]: 2025-12-06 08:26:08.995 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:26:08 np0005548731 nova_compute[232433]: 2025-12-06 08:26:08.996 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 03:26:08 np0005548731 nova_compute[232433]: 2025-12-06 08:26:08.998 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  6 03:26:09 np0005548731 nova_compute[232433]: 2025-12-06 08:26:09.004 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:26:09 np0005548731 nova_compute[232433]: 2025-12-06 08:26:09.004 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapda11c88a-b2, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 03:26:09 np0005548731 nova_compute[232433]: 2025-12-06 08:26:09.005 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapda11c88a-b2, col_values=(('external_ids', {'iface-id': 'da11c88a-b2ea-4144-a5d1-ee734cad8848', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:99:11:05', 'vm-uuid': '772293bc-ff3c-401f-b61b-905279cb0976'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 03:26:09 np0005548731 nova_compute[232433]: 2025-12-06 08:26:09.046 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:26:09 np0005548731 nova_compute[232433]: 2025-12-06 08:26:09.048 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  6 03:26:09 np0005548731 NetworkManager[49182]: <info>  [1765009569.0494] manager: (tapda11c88a-b2): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/528)
Dec  6 03:26:09 np0005548731 nova_compute[232433]: 2025-12-06 08:26:09.057 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:26:09 np0005548731 nova_compute[232433]: 2025-12-06 08:26:09.059 232437 INFO os_vif [None req-08669c18-324b-41cf-b519-d357bf470c4e 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:99:11:05,bridge_name='br-int',has_traffic_filtering=True,id=da11c88a-b2ea-4144-a5d1-ee734cad8848,network=Network(0cfa750a-9a18-4c24-bbf9-75517f0157ee),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapda11c88a-b2')#033[00m
Dec  6 03:26:09 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  6 03:26:09 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/110990899' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  6 03:26:09 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  6 03:26:09 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/110990899' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  6 03:26:09 np0005548731 nova_compute[232433]: 2025-12-06 08:26:09.119 232437 DEBUG nova.virt.libvirt.driver [None req-08669c18-324b-41cf-b519-d357bf470c4e 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  6 03:26:09 np0005548731 nova_compute[232433]: 2025-12-06 08:26:09.120 232437 DEBUG nova.virt.libvirt.driver [None req-08669c18-324b-41cf-b519-d357bf470c4e 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  6 03:26:09 np0005548731 nova_compute[232433]: 2025-12-06 08:26:09.120 232437 DEBUG nova.virt.libvirt.driver [None req-08669c18-324b-41cf-b519-d357bf470c4e 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] No VIF found with MAC fa:16:3e:99:11:05, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec  6 03:26:09 np0005548731 nova_compute[232433]: 2025-12-06 08:26:09.120 232437 INFO nova.virt.libvirt.driver [None req-08669c18-324b-41cf-b519-d357bf470c4e 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] [instance: 772293bc-ff3c-401f-b61b-905279cb0976] Using config drive#033[00m
Dec  6 03:26:09 np0005548731 nova_compute[232433]: 2025-12-06 08:26:09.212 232437 DEBUG nova.storage.rbd_utils [None req-08669c18-324b-41cf-b519-d357bf470c4e 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] rbd image 772293bc-ff3c-401f-b61b-905279cb0976_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 03:26:09 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:26:09 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 03:26:09 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:26:09.634 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 03:26:09 np0005548731 nova_compute[232433]: 2025-12-06 08:26:09.638 232437 INFO nova.virt.libvirt.driver [None req-08669c18-324b-41cf-b519-d357bf470c4e 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] [instance: 772293bc-ff3c-401f-b61b-905279cb0976] Creating config drive at /var/lib/nova/instances/772293bc-ff3c-401f-b61b-905279cb0976/disk.config#033[00m
Dec  6 03:26:09 np0005548731 nova_compute[232433]: 2025-12-06 08:26:09.648 232437 DEBUG oslo_concurrency.processutils [None req-08669c18-324b-41cf-b519-d357bf470c4e 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/772293bc-ff3c-401f-b61b-905279cb0976/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp2lbjx0cx execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 03:26:09 np0005548731 nova_compute[232433]: 2025-12-06 08:26:09.679 232437 DEBUG nova.network.neutron [req-9932bffc-682d-4fda-88f6-8e8b97d0b2e1 req-af6bb826-c55f-4535-a84d-a88550207010 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 772293bc-ff3c-401f-b61b-905279cb0976] Updated VIF entry in instance network info cache for port da11c88a-b2ea-4144-a5d1-ee734cad8848. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  6 03:26:09 np0005548731 nova_compute[232433]: 2025-12-06 08:26:09.679 232437 DEBUG nova.network.neutron [req-9932bffc-682d-4fda-88f6-8e8b97d0b2e1 req-af6bb826-c55f-4535-a84d-a88550207010 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 772293bc-ff3c-401f-b61b-905279cb0976] Updating instance_info_cache with network_info: [{"id": "da11c88a-b2ea-4144-a5d1-ee734cad8848", "address": "fa:16:3e:99:11:05", "network": {"id": "0cfa750a-9a18-4c24-bbf9-75517f0157ee", "bridge": "br-int", "label": "tempest-TestServerMultinode-1982834598-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "41c7ac10745449b3a23d724093a203c1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapda11c88a-b2", "ovs_interfaceid": "da11c88a-b2ea-4144-a5d1-ee734cad8848", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  6 03:26:09 np0005548731 nova_compute[232433]: 2025-12-06 08:26:09.701 232437 DEBUG oslo_concurrency.lockutils [req-9932bffc-682d-4fda-88f6-8e8b97d0b2e1 req-af6bb826-c55f-4535-a84d-a88550207010 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-772293bc-ff3c-401f-b61b-905279cb0976" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  6 03:26:09 np0005548731 nova_compute[232433]: 2025-12-06 08:26:09.786 232437 DEBUG oslo_concurrency.processutils [None req-08669c18-324b-41cf-b519-d357bf470c4e 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/772293bc-ff3c-401f-b61b-905279cb0976/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp2lbjx0cx" returned: 0 in 0.137s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 03:26:09 np0005548731 nova_compute[232433]: 2025-12-06 08:26:09.824 232437 DEBUG nova.storage.rbd_utils [None req-08669c18-324b-41cf-b519-d357bf470c4e 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] rbd image 772293bc-ff3c-401f-b61b-905279cb0976_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  6 03:26:09 np0005548731 nova_compute[232433]: 2025-12-06 08:26:09.829 232437 DEBUG oslo_concurrency.processutils [None req-08669c18-324b-41cf-b519-d357bf470c4e 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/772293bc-ff3c-401f-b61b-905279cb0976/disk.config 772293bc-ff3c-401f-b61b-905279cb0976_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 03:26:10 np0005548731 nova_compute[232433]: 2025-12-06 08:26:10.011 232437 DEBUG oslo_concurrency.processutils [None req-08669c18-324b-41cf-b519-d357bf470c4e 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/772293bc-ff3c-401f-b61b-905279cb0976/disk.config 772293bc-ff3c-401f-b61b-905279cb0976_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.182s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 03:26:10 np0005548731 nova_compute[232433]: 2025-12-06 08:26:10.012 232437 INFO nova.virt.libvirt.driver [None req-08669c18-324b-41cf-b519-d357bf470c4e 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] [instance: 772293bc-ff3c-401f-b61b-905279cb0976] Deleting local config drive /var/lib/nova/instances/772293bc-ff3c-401f-b61b-905279cb0976/disk.config because it was imported into RBD.#033[00m
Dec  6 03:26:10 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:26:10 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:26:10 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:26:10.020 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:26:10 np0005548731 kernel: tapda11c88a-b2: entered promiscuous mode
Dec  6 03:26:10 np0005548731 NetworkManager[49182]: <info>  [1765009570.0757] manager: (tapda11c88a-b2): new Tun device (/org/freedesktop/NetworkManager/Devices/529)
Dec  6 03:26:10 np0005548731 ovn_controller[133927]: 2025-12-06T08:26:10Z|01107|binding|INFO|Claiming lport da11c88a-b2ea-4144-a5d1-ee734cad8848 for this chassis.
Dec  6 03:26:10 np0005548731 ovn_controller[133927]: 2025-12-06T08:26:10Z|01108|binding|INFO|da11c88a-b2ea-4144-a5d1-ee734cad8848: Claiming fa:16:3e:99:11:05 10.100.0.7
Dec  6 03:26:10 np0005548731 nova_compute[232433]: 2025-12-06 08:26:10.076 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:26:10 np0005548731 nova_compute[232433]: 2025-12-06 08:26:10.105 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:26:10 np0005548731 systemd-udevd[346151]: Network interface NamePolicy= disabled on kernel command line.
Dec  6 03:26:10 np0005548731 systemd-machined[195355]: New machine qemu-113-instance-000000dd.
Dec  6 03:26:10 np0005548731 NetworkManager[49182]: <info>  [1765009570.1303] device (tapda11c88a-b2): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec  6 03:26:10 np0005548731 NetworkManager[49182]: <info>  [1765009570.1316] device (tapda11c88a-b2): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec  6 03:26:10 np0005548731 nova_compute[232433]: 2025-12-06 08:26:10.170 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:26:10 np0005548731 ovn_controller[133927]: 2025-12-06T08:26:10Z|01109|binding|INFO|Setting lport da11c88a-b2ea-4144-a5d1-ee734cad8848 ovn-installed in OVS
Dec  6 03:26:10 np0005548731 nova_compute[232433]: 2025-12-06 08:26:10.178 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:26:10 np0005548731 systemd[1]: Started Virtual Machine qemu-113-instance-000000dd.
Dec  6 03:26:10 np0005548731 ovn_controller[133927]: 2025-12-06T08:26:10Z|01110|binding|INFO|Setting lport da11c88a-b2ea-4144-a5d1-ee734cad8848 up in Southbound
Dec  6 03:26:10 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:26:10.263 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:99:11:05 10.100.0.7'], port_security=['fa:16:3e:99:11:05 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '772293bc-ff3c-401f-b61b-905279cb0976', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0cfa750a-9a18-4c24-bbf9-75517f0157ee', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '7102dcd5b58d4dec801a71dacc60eaaf', 'neutron:revision_number': '2', 'neutron:security_group_ids': '542d1f4a-a306-4d0a-9719-694ed1b1f413', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d50a2125-562d-4707-b67c-0f0d40fd3bbc, chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], logical_port=da11c88a-b2ea-4144-a5d1-ee734cad8848) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 03:26:10 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:26:10.265 143965 INFO neutron.agent.ovn.metadata.agent [-] Port da11c88a-b2ea-4144-a5d1-ee734cad8848 in datapath 0cfa750a-9a18-4c24-bbf9-75517f0157ee bound to our chassis#033[00m
Dec  6 03:26:10 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:26:10.267 143965 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 0cfa750a-9a18-4c24-bbf9-75517f0157ee#033[00m
Dec  6 03:26:10 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:26:10.288 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[1348982a-dbaa-455c-9c19-2b63c03fda53]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:26:10 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:26:10.289 143965 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap0cfa750a-91 in ovnmeta-0cfa750a-9a18-4c24-bbf9-75517f0157ee namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Dec  6 03:26:10 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:26:10.291 236668 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap0cfa750a-90 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Dec  6 03:26:10 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:26:10.292 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[dd2e9c54-d663-41ab-bea3-fb8397dc8ffe]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:26:10 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:26:10.292 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[cae5305f-08be-4bb7-9520-1efe6aad669e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:26:10 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:26:10.306 144079 DEBUG oslo.privsep.daemon [-] privsep: reply[80dd59c8-7759-4b49-88c8-2863acbbc083]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:26:10 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:26:10.321 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[3b304a00-3f92-4555-8921-b5d1d50d4602]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:26:10 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:26:10.363 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[3e450a82-ed2d-4219-9aa9-2ae3f7073c8f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:26:10 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:26:10.372 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[0da4d5c5-70de-4d81-b357-cb1948e2cb23]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:26:10 np0005548731 NetworkManager[49182]: <info>  [1765009570.3733] manager: (tap0cfa750a-90): new Veth device (/org/freedesktop/NetworkManager/Devices/530)
Dec  6 03:26:10 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:26:10.412 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[5c0a0cf0-3981-4b39-bacd-daac42dbd2e2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:26:10 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:26:10.415 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[4a533c35-0e69-4609-98b3-16f7b5b9f149]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:26:10 np0005548731 NetworkManager[49182]: <info>  [1765009570.4424] device (tap0cfa750a-90): carrier: link connected
Dec  6 03:26:10 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:26:10.450 236707 DEBUG oslo.privsep.daemon [-] privsep: reply[08a43043-b4c4-46a7-a529-bf63685ba1c2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:26:10 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:26:10.468 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[4b4e4ecc-d436-4acc-acb0-4949e4d39cc8]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0cfa750a-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:95:93:e5'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 341], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 983244, 'reachable_time': 16095, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 346188, 'error': None, 'target': 'ovnmeta-0cfa750a-9a18-4c24-bbf9-75517f0157ee', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:26:10 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:26:10.485 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[b6116646-12c2-49d3-8362-df408196d2da]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe95:93e5'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 983244, 'tstamp': 983244}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 346189, 'error': None, 'target': 'ovnmeta-0cfa750a-9a18-4c24-bbf9-75517f0157ee', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:26:10 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:26:10.503 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[c7a01f69-3f61-45cb-860e-2ebd3c488da6]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0cfa750a-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:95:93:e5'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 341], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 983244, 'reachable_time': 16095, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 148, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 148, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 346191, 'error': None, 'target': 'ovnmeta-0cfa750a-9a18-4c24-bbf9-75517f0157ee', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:26:10 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:26:10.543 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[8a2e81a8-9cbc-4889-8840-8e73ccc0b078]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:26:10 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:26:10.624 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[17a8b284-56b7-4e9a-993e-94ae407b0957]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:26:10 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:26:10.625 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0cfa750a-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 03:26:10 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:26:10.625 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  6 03:26:10 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:26:10.626 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0cfa750a-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 03:26:10 np0005548731 nova_compute[232433]: 2025-12-06 08:26:10.627 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:26:10 np0005548731 NetworkManager[49182]: <info>  [1765009570.6284] manager: (tap0cfa750a-90): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/531)
Dec  6 03:26:10 np0005548731 kernel: tap0cfa750a-90: entered promiscuous mode
Dec  6 03:26:10 np0005548731 nova_compute[232433]: 2025-12-06 08:26:10.629 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:26:10 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:26:10.630 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap0cfa750a-90, col_values=(('external_ids', {'iface-id': '9c309117-cb51-4d66-b962-5f9b07ea29e2'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 03:26:10 np0005548731 nova_compute[232433]: 2025-12-06 08:26:10.631 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:26:10 np0005548731 ovn_controller[133927]: 2025-12-06T08:26:10Z|01111|binding|INFO|Releasing lport 9c309117-cb51-4d66-b962-5f9b07ea29e2 from this chassis (sb_readonly=0)
Dec  6 03:26:10 np0005548731 nova_compute[232433]: 2025-12-06 08:26:10.658 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:26:10 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:26:10.659 143965 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/0cfa750a-9a18-4c24-bbf9-75517f0157ee.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/0cfa750a-9a18-4c24-bbf9-75517f0157ee.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Dec  6 03:26:10 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:26:10.660 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[78ef68a0-2f04-458b-9851-331ea20a6079]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:26:10 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:26:10.660 143965 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec  6 03:26:10 np0005548731 ovn_metadata_agent[143960]: global
Dec  6 03:26:10 np0005548731 ovn_metadata_agent[143960]:    log         /dev/log local0 debug
Dec  6 03:26:10 np0005548731 ovn_metadata_agent[143960]:    log-tag     haproxy-metadata-proxy-0cfa750a-9a18-4c24-bbf9-75517f0157ee
Dec  6 03:26:10 np0005548731 ovn_metadata_agent[143960]:    user        root
Dec  6 03:26:10 np0005548731 ovn_metadata_agent[143960]:    group       root
Dec  6 03:26:10 np0005548731 ovn_metadata_agent[143960]:    maxconn     1024
Dec  6 03:26:10 np0005548731 ovn_metadata_agent[143960]:    pidfile     /var/lib/neutron/external/pids/0cfa750a-9a18-4c24-bbf9-75517f0157ee.pid.haproxy
Dec  6 03:26:10 np0005548731 ovn_metadata_agent[143960]:    daemon
Dec  6 03:26:10 np0005548731 ovn_metadata_agent[143960]: 
Dec  6 03:26:10 np0005548731 ovn_metadata_agent[143960]: defaults
Dec  6 03:26:10 np0005548731 ovn_metadata_agent[143960]:    log global
Dec  6 03:26:10 np0005548731 ovn_metadata_agent[143960]:    mode http
Dec  6 03:26:10 np0005548731 ovn_metadata_agent[143960]:    option httplog
Dec  6 03:26:10 np0005548731 ovn_metadata_agent[143960]:    option dontlognull
Dec  6 03:26:10 np0005548731 ovn_metadata_agent[143960]:    option http-server-close
Dec  6 03:26:10 np0005548731 ovn_metadata_agent[143960]:    option forwardfor
Dec  6 03:26:10 np0005548731 ovn_metadata_agent[143960]:    retries                 3
Dec  6 03:26:10 np0005548731 ovn_metadata_agent[143960]:    timeout http-request    30s
Dec  6 03:26:10 np0005548731 ovn_metadata_agent[143960]:    timeout connect         30s
Dec  6 03:26:10 np0005548731 ovn_metadata_agent[143960]:    timeout client          32s
Dec  6 03:26:10 np0005548731 ovn_metadata_agent[143960]:    timeout server          32s
Dec  6 03:26:10 np0005548731 ovn_metadata_agent[143960]:    timeout http-keep-alive 30s
Dec  6 03:26:10 np0005548731 ovn_metadata_agent[143960]: 
Dec  6 03:26:10 np0005548731 ovn_metadata_agent[143960]: 
Dec  6 03:26:10 np0005548731 ovn_metadata_agent[143960]: listen listener
Dec  6 03:26:10 np0005548731 ovn_metadata_agent[143960]:    bind 169.254.169.254:80
Dec  6 03:26:10 np0005548731 ovn_metadata_agent[143960]:    server metadata /var/lib/neutron/metadata_proxy
Dec  6 03:26:10 np0005548731 ovn_metadata_agent[143960]:    http-request add-header X-OVN-Network-ID 0cfa750a-9a18-4c24-bbf9-75517f0157ee
Dec  6 03:26:10 np0005548731 ovn_metadata_agent[143960]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Dec  6 03:26:10 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:26:10.661 143965 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-0cfa750a-9a18-4c24-bbf9-75517f0157ee', 'env', 'PROCESS_TAG=haproxy-0cfa750a-9a18-4c24-bbf9-75517f0157ee', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/0cfa750a-9a18-4c24-bbf9-75517f0157ee.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Dec  6 03:26:10 np0005548731 nova_compute[232433]: 2025-12-06 08:26:10.742 232437 DEBUG nova.virt.driver [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Emitting event <LifecycleEvent: 1765009570.741413, 772293bc-ff3c-401f-b61b-905279cb0976 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 03:26:10 np0005548731 nova_compute[232433]: 2025-12-06 08:26:10.742 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 772293bc-ff3c-401f-b61b-905279cb0976] VM Started (Lifecycle Event)#033[00m
Dec  6 03:26:10 np0005548731 nova_compute[232433]: 2025-12-06 08:26:10.775 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 772293bc-ff3c-401f-b61b-905279cb0976] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 03:26:10 np0005548731 nova_compute[232433]: 2025-12-06 08:26:10.783 232437 DEBUG nova.virt.driver [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Emitting event <LifecycleEvent: 1765009570.741659, 772293bc-ff3c-401f-b61b-905279cb0976 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 03:26:10 np0005548731 nova_compute[232433]: 2025-12-06 08:26:10.783 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 772293bc-ff3c-401f-b61b-905279cb0976] VM Paused (Lifecycle Event)#033[00m
Dec  6 03:26:10 np0005548731 nova_compute[232433]: 2025-12-06 08:26:10.793 232437 DEBUG nova.compute.manager [req-bde1dcb2-1c56-4241-849f-550dcd9c0d5e req-8c5759f3-074a-42f1-9d2f-00df97dae5d5 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 772293bc-ff3c-401f-b61b-905279cb0976] Received event network-vif-plugged-da11c88a-b2ea-4144-a5d1-ee734cad8848 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 03:26:10 np0005548731 nova_compute[232433]: 2025-12-06 08:26:10.793 232437 DEBUG oslo_concurrency.lockutils [req-bde1dcb2-1c56-4241-849f-550dcd9c0d5e req-8c5759f3-074a-42f1-9d2f-00df97dae5d5 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "772293bc-ff3c-401f-b61b-905279cb0976-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:26:10 np0005548731 nova_compute[232433]: 2025-12-06 08:26:10.793 232437 DEBUG oslo_concurrency.lockutils [req-bde1dcb2-1c56-4241-849f-550dcd9c0d5e req-8c5759f3-074a-42f1-9d2f-00df97dae5d5 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "772293bc-ff3c-401f-b61b-905279cb0976-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:26:10 np0005548731 nova_compute[232433]: 2025-12-06 08:26:10.794 232437 DEBUG oslo_concurrency.lockutils [req-bde1dcb2-1c56-4241-849f-550dcd9c0d5e req-8c5759f3-074a-42f1-9d2f-00df97dae5d5 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "772293bc-ff3c-401f-b61b-905279cb0976-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:26:10 np0005548731 nova_compute[232433]: 2025-12-06 08:26:10.794 232437 DEBUG nova.compute.manager [req-bde1dcb2-1c56-4241-849f-550dcd9c0d5e req-8c5759f3-074a-42f1-9d2f-00df97dae5d5 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 772293bc-ff3c-401f-b61b-905279cb0976] Processing event network-vif-plugged-da11c88a-b2ea-4144-a5d1-ee734cad8848 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Dec  6 03:26:10 np0005548731 nova_compute[232433]: 2025-12-06 08:26:10.794 232437 DEBUG nova.compute.manager [None req-08669c18-324b-41cf-b519-d357bf470c4e 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] [instance: 772293bc-ff3c-401f-b61b-905279cb0976] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec  6 03:26:10 np0005548731 nova_compute[232433]: 2025-12-06 08:26:10.804 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 772293bc-ff3c-401f-b61b-905279cb0976] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 03:26:10 np0005548731 nova_compute[232433]: 2025-12-06 08:26:10.805 232437 DEBUG nova.virt.libvirt.driver [None req-08669c18-324b-41cf-b519-d357bf470c4e 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] [instance: 772293bc-ff3c-401f-b61b-905279cb0976] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Dec  6 03:26:10 np0005548731 nova_compute[232433]: 2025-12-06 08:26:10.808 232437 DEBUG nova.virt.driver [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] Emitting event <LifecycleEvent: 1765009570.8031912, 772293bc-ff3c-401f-b61b-905279cb0976 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  6 03:26:10 np0005548731 nova_compute[232433]: 2025-12-06 08:26:10.808 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 772293bc-ff3c-401f-b61b-905279cb0976] VM Resumed (Lifecycle Event)#033[00m
Dec  6 03:26:10 np0005548731 nova_compute[232433]: 2025-12-06 08:26:10.810 232437 INFO nova.virt.libvirt.driver [-] [instance: 772293bc-ff3c-401f-b61b-905279cb0976] Instance spawned successfully.#033[00m
Dec  6 03:26:10 np0005548731 nova_compute[232433]: 2025-12-06 08:26:10.810 232437 DEBUG nova.virt.libvirt.driver [None req-08669c18-324b-41cf-b519-d357bf470c4e 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] [instance: 772293bc-ff3c-401f-b61b-905279cb0976] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Dec  6 03:26:10 np0005548731 nova_compute[232433]: 2025-12-06 08:26:10.832 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 772293bc-ff3c-401f-b61b-905279cb0976] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 03:26:10 np0005548731 nova_compute[232433]: 2025-12-06 08:26:10.838 232437 DEBUG nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 772293bc-ff3c-401f-b61b-905279cb0976] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  6 03:26:10 np0005548731 nova_compute[232433]: 2025-12-06 08:26:10.841 232437 DEBUG nova.virt.libvirt.driver [None req-08669c18-324b-41cf-b519-d357bf470c4e 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] [instance: 772293bc-ff3c-401f-b61b-905279cb0976] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 03:26:10 np0005548731 nova_compute[232433]: 2025-12-06 08:26:10.842 232437 DEBUG nova.virt.libvirt.driver [None req-08669c18-324b-41cf-b519-d357bf470c4e 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] [instance: 772293bc-ff3c-401f-b61b-905279cb0976] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 03:26:10 np0005548731 nova_compute[232433]: 2025-12-06 08:26:10.842 232437 DEBUG nova.virt.libvirt.driver [None req-08669c18-324b-41cf-b519-d357bf470c4e 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] [instance: 772293bc-ff3c-401f-b61b-905279cb0976] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 03:26:10 np0005548731 nova_compute[232433]: 2025-12-06 08:26:10.843 232437 DEBUG nova.virt.libvirt.driver [None req-08669c18-324b-41cf-b519-d357bf470c4e 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] [instance: 772293bc-ff3c-401f-b61b-905279cb0976] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 03:26:10 np0005548731 nova_compute[232433]: 2025-12-06 08:26:10.843 232437 DEBUG nova.virt.libvirt.driver [None req-08669c18-324b-41cf-b519-d357bf470c4e 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] [instance: 772293bc-ff3c-401f-b61b-905279cb0976] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 03:26:10 np0005548731 nova_compute[232433]: 2025-12-06 08:26:10.843 232437 DEBUG nova.virt.libvirt.driver [None req-08669c18-324b-41cf-b519-d357bf470c4e 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] [instance: 772293bc-ff3c-401f-b61b-905279cb0976] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  6 03:26:10 np0005548731 nova_compute[232433]: 2025-12-06 08:26:10.874 232437 INFO nova.compute.manager [None req-f6638fc6-dd76-41ac-8d40-ca415e24b4d4 - - - - - -] [instance: 772293bc-ff3c-401f-b61b-905279cb0976] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  6 03:26:10 np0005548731 nova_compute[232433]: 2025-12-06 08:26:10.906 232437 INFO nova.compute.manager [None req-08669c18-324b-41cf-b519-d357bf470c4e 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] [instance: 772293bc-ff3c-401f-b61b-905279cb0976] Took 5.78 seconds to spawn the instance on the hypervisor.#033[00m
Dec  6 03:26:10 np0005548731 nova_compute[232433]: 2025-12-06 08:26:10.907 232437 DEBUG nova.compute.manager [None req-08669c18-324b-41cf-b519-d357bf470c4e 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] [instance: 772293bc-ff3c-401f-b61b-905279cb0976] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  6 03:26:10 np0005548731 nova_compute[232433]: 2025-12-06 08:26:10.964 232437 INFO nova.compute.manager [None req-08669c18-324b-41cf-b519-d357bf470c4e 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] [instance: 772293bc-ff3c-401f-b61b-905279cb0976] Took 6.81 seconds to build instance.#033[00m
Dec  6 03:26:10 np0005548731 nova_compute[232433]: 2025-12-06 08:26:10.988 232437 DEBUG oslo_concurrency.lockutils [None req-08669c18-324b-41cf-b519-d357bf470c4e 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] Lock "772293bc-ff3c-401f-b61b-905279cb0976" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 6.919s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:26:11 np0005548731 podman[346265]: 2025-12-06 08:26:11.037343349 +0000 UTC m=+0.045111172 container create 90f5140292676389ed1bffd6fe3bd9a5cfc6d1d7646994997c01510f08172fe2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0cfa750a-9a18-4c24-bbf9-75517f0157ee, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec  6 03:26:11 np0005548731 systemd[1]: Started libpod-conmon-90f5140292676389ed1bffd6fe3bd9a5cfc6d1d7646994997c01510f08172fe2.scope.
Dec  6 03:26:11 np0005548731 podman[346265]: 2025-12-06 08:26:11.013490107 +0000 UTC m=+0.021257960 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec  6 03:26:11 np0005548731 systemd[1]: Started libcrun container.
Dec  6 03:26:11 np0005548731 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e608b48d6cb6fa8f353b05c716502be187d00c4170773eeee24618fbf16f2374/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec  6 03:26:11 np0005548731 podman[346265]: 2025-12-06 08:26:11.135522717 +0000 UTC m=+0.143290570 container init 90f5140292676389ed1bffd6fe3bd9a5cfc6d1d7646994997c01510f08172fe2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0cfa750a-9a18-4c24-bbf9-75517f0157ee, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Dec  6 03:26:11 np0005548731 podman[346265]: 2025-12-06 08:26:11.140727284 +0000 UTC m=+0.148495107 container start 90f5140292676389ed1bffd6fe3bd9a5cfc6d1d7646994997c01510f08172fe2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0cfa750a-9a18-4c24-bbf9-75517f0157ee, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Dec  6 03:26:11 np0005548731 neutron-haproxy-ovnmeta-0cfa750a-9a18-4c24-bbf9-75517f0157ee[346280]: [NOTICE]   (346284) : New worker (346286) forked
Dec  6 03:26:11 np0005548731 neutron-haproxy-ovnmeta-0cfa750a-9a18-4c24-bbf9-75517f0157ee[346280]: [NOTICE]   (346284) : Loading success.
Dec  6 03:26:11 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:26:11 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:26:11 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:26:11.637 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:26:12 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:26:12 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:26:12 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:26:12.022 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:26:12 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:26:13 np0005548731 nova_compute[232433]: 2025-12-06 08:26:13.297 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:26:13 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:26:13 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:26:13 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:26:13.638 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:26:13 np0005548731 nova_compute[232433]: 2025-12-06 08:26:13.758 232437 DEBUG nova.compute.manager [req-3adef0bb-3142-4c12-ba4d-a0609a3e26ab req-c558260f-0572-44e0-acaf-d48977f53b10 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 772293bc-ff3c-401f-b61b-905279cb0976] Received event network-vif-plugged-da11c88a-b2ea-4144-a5d1-ee734cad8848 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  6 03:26:13 np0005548731 nova_compute[232433]: 2025-12-06 08:26:13.759 232437 DEBUG oslo_concurrency.lockutils [req-3adef0bb-3142-4c12-ba4d-a0609a3e26ab req-c558260f-0572-44e0-acaf-d48977f53b10 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "772293bc-ff3c-401f-b61b-905279cb0976-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:26:13 np0005548731 nova_compute[232433]: 2025-12-06 08:26:13.759 232437 DEBUG oslo_concurrency.lockutils [req-3adef0bb-3142-4c12-ba4d-a0609a3e26ab req-c558260f-0572-44e0-acaf-d48977f53b10 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "772293bc-ff3c-401f-b61b-905279cb0976-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:26:13 np0005548731 nova_compute[232433]: 2025-12-06 08:26:13.759 232437 DEBUG oslo_concurrency.lockutils [req-3adef0bb-3142-4c12-ba4d-a0609a3e26ab req-c558260f-0572-44e0-acaf-d48977f53b10 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "772293bc-ff3c-401f-b61b-905279cb0976-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:26:13 np0005548731 nova_compute[232433]: 2025-12-06 08:26:13.759 232437 DEBUG nova.compute.manager [req-3adef0bb-3142-4c12-ba4d-a0609a3e26ab req-c558260f-0572-44e0-acaf-d48977f53b10 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 772293bc-ff3c-401f-b61b-905279cb0976] No waiting events found dispatching network-vif-plugged-da11c88a-b2ea-4144-a5d1-ee734cad8848 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  6 03:26:13 np0005548731 nova_compute[232433]: 2025-12-06 08:26:13.759 232437 WARNING nova.compute.manager [req-3adef0bb-3142-4c12-ba4d-a0609a3e26ab req-c558260f-0572-44e0-acaf-d48977f53b10 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 772293bc-ff3c-401f-b61b-905279cb0976] Received unexpected event network-vif-plugged-da11c88a-b2ea-4144-a5d1-ee734cad8848 for instance with vm_state active and task_state None.#033[00m
Dec  6 03:26:14 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:26:14 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:26:14 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:26:14.023 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:26:14 np0005548731 nova_compute[232433]: 2025-12-06 08:26:14.047 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:26:15 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:26:15 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:26:15 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:26:15.641 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:26:16 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:26:16 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:26:16 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:26:16.025 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:26:17 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:26:17 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:26:17 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:26:17 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:26:17.644 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:26:18 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:26:18 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:26:18 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:26:18.029 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:26:18 np0005548731 nova_compute[232433]: 2025-12-06 08:26:18.338 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:26:19 np0005548731 nova_compute[232433]: 2025-12-06 08:26:19.048 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:26:19 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:26:19 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:26:19 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:26:19.647 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:26:20 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:26:20 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:26:20 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:26:20.031 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:26:21 np0005548731 nova_compute[232433]: 2025-12-06 08:26:21.100 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:26:21 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:26:21 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:26:21 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:26:21.650 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:26:22 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:26:22 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 03:26:22 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:26:22.034 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 03:26:22 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:26:23 np0005548731 nova_compute[232433]: 2025-12-06 08:26:23.391 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:26:23 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:26:23 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:26:23 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:26:23.655 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:26:23 np0005548731 ovn_controller[133927]: 2025-12-06T08:26:23Z|00154|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:99:11:05 10.100.0.7
Dec  6 03:26:23 np0005548731 ovn_controller[133927]: 2025-12-06T08:26:23Z|00155|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:99:11:05 10.100.0.7
Dec  6 03:26:24 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:26:24 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:26:24 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:26:24.036 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:26:24 np0005548731 nova_compute[232433]: 2025-12-06 08:26:24.051 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:26:25 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:26:25 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:26:25 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:26:25.657 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:26:25 np0005548731 podman[346305]: 2025-12-06 08:26:25.915363779 +0000 UTC m=+0.062015256 container health_status ae4fd08cdec31d6ae2a6b91ffff7deb6ca5a8375cb52ee4b685cc6f25796824e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.vendor=CentOS, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3)
Dec  6 03:26:25 np0005548731 podman[346303]: 2025-12-06 08:26:25.920709729 +0000 UTC m=+0.062959458 container health_status 26c2ce9bdb83c6db5ccad7f5b2a7cb4e9354ab0235ad2f942ea42c8feda2142b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Dec  6 03:26:25 np0005548731 podman[346304]: 2025-12-06 08:26:25.942359978 +0000 UTC m=+0.086174945 container health_status a118c3becf56cfbcae208065771ebf10a65162bb7b96389971293e534823fb78 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Dec  6 03:26:26 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:26:26 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:26:26 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:26:26.039 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:26:27 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:26:27 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:26:27 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:26:27 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:26:27.659 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:26:28 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:26:28 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:26:28 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:26:28.042 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:26:28 np0005548731 nova_compute[232433]: 2025-12-06 08:26:28.429 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:26:29 np0005548731 nova_compute[232433]: 2025-12-06 08:26:29.053 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:26:29 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:26:29 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:26:29 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:26:29.662 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:26:29 np0005548731 nova_compute[232433]: 2025-12-06 08:26:29.704 232437 DEBUG oslo_concurrency.lockutils [None req-ede68c57-09f5-4990-bb97-2bd64fb64142 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] Acquiring lock "772293bc-ff3c-401f-b61b-905279cb0976" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:26:29 np0005548731 nova_compute[232433]: 2025-12-06 08:26:29.704 232437 DEBUG oslo_concurrency.lockutils [None req-ede68c57-09f5-4990-bb97-2bd64fb64142 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] Lock "772293bc-ff3c-401f-b61b-905279cb0976" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:26:29 np0005548731 nova_compute[232433]: 2025-12-06 08:26:29.704 232437 DEBUG oslo_concurrency.lockutils [None req-ede68c57-09f5-4990-bb97-2bd64fb64142 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] Acquiring lock "772293bc-ff3c-401f-b61b-905279cb0976-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:26:29 np0005548731 nova_compute[232433]: 2025-12-06 08:26:29.705 232437 DEBUG oslo_concurrency.lockutils [None req-ede68c57-09f5-4990-bb97-2bd64fb64142 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] Lock "772293bc-ff3c-401f-b61b-905279cb0976-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:26:29 np0005548731 nova_compute[232433]: 2025-12-06 08:26:29.705 232437 DEBUG oslo_concurrency.lockutils [None req-ede68c57-09f5-4990-bb97-2bd64fb64142 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] Lock "772293bc-ff3c-401f-b61b-905279cb0976-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:26:29 np0005548731 nova_compute[232433]: 2025-12-06 08:26:29.706 232437 INFO nova.compute.manager [None req-ede68c57-09f5-4990-bb97-2bd64fb64142 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] [instance: 772293bc-ff3c-401f-b61b-905279cb0976] Terminating instance#033[00m
Dec  6 03:26:29 np0005548731 nova_compute[232433]: 2025-12-06 08:26:29.707 232437 DEBUG nova.compute.manager [None req-ede68c57-09f5-4990-bb97-2bd64fb64142 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] [instance: 772293bc-ff3c-401f-b61b-905279cb0976] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec  6 03:26:29 np0005548731 kernel: tapda11c88a-b2 (unregistering): left promiscuous mode
Dec  6 03:26:29 np0005548731 NetworkManager[49182]: <info>  [1765009589.7575] device (tapda11c88a-b2): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec  6 03:26:29 np0005548731 nova_compute[232433]: 2025-12-06 08:26:29.765 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:26:29 np0005548731 ovn_controller[133927]: 2025-12-06T08:26:29Z|01112|binding|INFO|Releasing lport da11c88a-b2ea-4144-a5d1-ee734cad8848 from this chassis (sb_readonly=0)
Dec  6 03:26:29 np0005548731 ovn_controller[133927]: 2025-12-06T08:26:29Z|01113|binding|INFO|Setting lport da11c88a-b2ea-4144-a5d1-ee734cad8848 down in Southbound
Dec  6 03:26:29 np0005548731 ovn_controller[133927]: 2025-12-06T08:26:29Z|01114|binding|INFO|Removing iface tapda11c88a-b2 ovn-installed in OVS
Dec  6 03:26:29 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:26:29.774 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:99:11:05 10.100.0.7'], port_security=['fa:16:3e:99:11:05 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '772293bc-ff3c-401f-b61b-905279cb0976', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0cfa750a-9a18-4c24-bbf9-75517f0157ee', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '7102dcd5b58d4dec801a71dacc60eaaf', 'neutron:revision_number': '4', 'neutron:security_group_ids': '542d1f4a-a306-4d0a-9719-694ed1b1f413', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d50a2125-562d-4707-b67c-0f0d40fd3bbc, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>], logical_port=da11c88a-b2ea-4144-a5d1-ee734cad8848) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6d8df85ac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 03:26:29 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:26:29.775 143965 INFO neutron.agent.ovn.metadata.agent [-] Port da11c88a-b2ea-4144-a5d1-ee734cad8848 in datapath 0cfa750a-9a18-4c24-bbf9-75517f0157ee unbound from our chassis#033[00m
Dec  6 03:26:29 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:26:29.776 143965 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 0cfa750a-9a18-4c24-bbf9-75517f0157ee, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec  6 03:26:29 np0005548731 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec  6 03:26:29 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:26:29.777 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[ab82f6a3-a5f7-4ced-9888-f8f4de3c8337]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:26:29 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:26:29.777 143965 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-0cfa750a-9a18-4c24-bbf9-75517f0157ee namespace which is not needed anymore#033[00m
Dec  6 03:26:29 np0005548731 nova_compute[232433]: 2025-12-06 08:26:29.801 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:26:29 np0005548731 systemd[1]: machine-qemu\x2d113\x2dinstance\x2d000000dd.scope: Deactivated successfully.
Dec  6 03:26:29 np0005548731 systemd[1]: machine-qemu\x2d113\x2dinstance\x2d000000dd.scope: Consumed 13.622s CPU time.
Dec  6 03:26:29 np0005548731 systemd-machined[195355]: Machine qemu-113-instance-000000dd terminated.
Dec  6 03:26:29 np0005548731 neutron-haproxy-ovnmeta-0cfa750a-9a18-4c24-bbf9-75517f0157ee[346280]: [NOTICE]   (346284) : haproxy version is 2.8.14-c23fe91
Dec  6 03:26:29 np0005548731 neutron-haproxy-ovnmeta-0cfa750a-9a18-4c24-bbf9-75517f0157ee[346280]: [NOTICE]   (346284) : path to executable is /usr/sbin/haproxy
Dec  6 03:26:29 np0005548731 neutron-haproxy-ovnmeta-0cfa750a-9a18-4c24-bbf9-75517f0157ee[346280]: [WARNING]  (346284) : Exiting Master process...
Dec  6 03:26:29 np0005548731 neutron-haproxy-ovnmeta-0cfa750a-9a18-4c24-bbf9-75517f0157ee[346280]: [ALERT]    (346284) : Current worker (346286) exited with code 143 (Terminated)
Dec  6 03:26:29 np0005548731 neutron-haproxy-ovnmeta-0cfa750a-9a18-4c24-bbf9-75517f0157ee[346280]: [WARNING]  (346284) : All workers exited. Exiting... (0)
Dec  6 03:26:29 np0005548731 systemd[1]: libpod-90f5140292676389ed1bffd6fe3bd9a5cfc6d1d7646994997c01510f08172fe2.scope: Deactivated successfully.
Dec  6 03:26:29 np0005548731 podman[346441]: 2025-12-06 08:26:29.927943945 +0000 UTC m=+0.051025196 container died 90f5140292676389ed1bffd6fe3bd9a5cfc6d1d7646994997c01510f08172fe2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0cfa750a-9a18-4c24-bbf9-75517f0157ee, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  6 03:26:29 np0005548731 nova_compute[232433]: 2025-12-06 08:26:29.950 232437 INFO nova.virt.libvirt.driver [-] [instance: 772293bc-ff3c-401f-b61b-905279cb0976] Instance destroyed successfully.#033[00m
Dec  6 03:26:29 np0005548731 nova_compute[232433]: 2025-12-06 08:26:29.951 232437 DEBUG nova.objects.instance [None req-ede68c57-09f5-4990-bb97-2bd64fb64142 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] Lazy-loading 'resources' on Instance uuid 772293bc-ff3c-401f-b61b-905279cb0976 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  6 03:26:29 np0005548731 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-90f5140292676389ed1bffd6fe3bd9a5cfc6d1d7646994997c01510f08172fe2-userdata-shm.mount: Deactivated successfully.
Dec  6 03:26:29 np0005548731 systemd[1]: var-lib-containers-storage-overlay-e608b48d6cb6fa8f353b05c716502be187d00c4170773eeee24618fbf16f2374-merged.mount: Deactivated successfully.
Dec  6 03:26:29 np0005548731 podman[346441]: 2025-12-06 08:26:29.968755913 +0000 UTC m=+0.091837164 container cleanup 90f5140292676389ed1bffd6fe3bd9a5cfc6d1d7646994997c01510f08172fe2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0cfa750a-9a18-4c24-bbf9-75517f0157ee, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3)
Dec  6 03:26:29 np0005548731 nova_compute[232433]: 2025-12-06 08:26:29.969 232437 DEBUG nova.virt.libvirt.vif [None req-ede68c57-09f5-4990-bb97-2bd64fb64142 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T08:26:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestServerMultinode-server-618286686',display_name='tempest-TestServerMultinode-server-618286686',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-testservermultinode-server-618286686',id=221,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-06T08:26:10Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='7102dcd5b58d4dec801a71dacc60eaaf',ramdisk_id='',reservation_id='r-71wgmgy0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,admin,reader',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_mi
n_disk='1',image_min_ram='0',owner_project_name='tempest-TestServerMultinode-1864301627',owner_user_name='tempest-TestServerMultinode-1864301627-project-admin'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T08:26:10Z,user_data=None,user_id='4989b6252b64443aaec21b075dbc29d9',uuid=772293bc-ff3c-401f-b61b-905279cb0976,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "da11c88a-b2ea-4144-a5d1-ee734cad8848", "address": "fa:16:3e:99:11:05", "network": {"id": "0cfa750a-9a18-4c24-bbf9-75517f0157ee", "bridge": "br-int", "label": "tempest-TestServerMultinode-1982834598-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "41c7ac10745449b3a23d724093a203c1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapda11c88a-b2", "ovs_interfaceid": "da11c88a-b2ea-4144-a5d1-ee734cad8848", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec  6 03:26:29 np0005548731 nova_compute[232433]: 2025-12-06 08:26:29.970 232437 DEBUG nova.network.os_vif_util [None req-ede68c57-09f5-4990-bb97-2bd64fb64142 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] Converting VIF {"id": "da11c88a-b2ea-4144-a5d1-ee734cad8848", "address": "fa:16:3e:99:11:05", "network": {"id": "0cfa750a-9a18-4c24-bbf9-75517f0157ee", "bridge": "br-int", "label": "tempest-TestServerMultinode-1982834598-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "41c7ac10745449b3a23d724093a203c1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapda11c88a-b2", "ovs_interfaceid": "da11c88a-b2ea-4144-a5d1-ee734cad8848", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  6 03:26:29 np0005548731 nova_compute[232433]: 2025-12-06 08:26:29.971 232437 DEBUG nova.network.os_vif_util [None req-ede68c57-09f5-4990-bb97-2bd64fb64142 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:99:11:05,bridge_name='br-int',has_traffic_filtering=True,id=da11c88a-b2ea-4144-a5d1-ee734cad8848,network=Network(0cfa750a-9a18-4c24-bbf9-75517f0157ee),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapda11c88a-b2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  6 03:26:29 np0005548731 nova_compute[232433]: 2025-12-06 08:26:29.972 232437 DEBUG os_vif [None req-ede68c57-09f5-4990-bb97-2bd64fb64142 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:99:11:05,bridge_name='br-int',has_traffic_filtering=True,id=da11c88a-b2ea-4144-a5d1-ee734cad8848,network=Network(0cfa750a-9a18-4c24-bbf9-75517f0157ee),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapda11c88a-b2') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec  6 03:26:29 np0005548731 nova_compute[232433]: 2025-12-06 08:26:29.974 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:26:29 np0005548731 nova_compute[232433]: 2025-12-06 08:26:29.975 232437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapda11c88a-b2, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 03:26:29 np0005548731 nova_compute[232433]: 2025-12-06 08:26:29.976 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:26:29 np0005548731 nova_compute[232433]: 2025-12-06 08:26:29.978 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:26:29 np0005548731 nova_compute[232433]: 2025-12-06 08:26:29.981 232437 INFO os_vif [None req-ede68c57-09f5-4990-bb97-2bd64fb64142 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:99:11:05,bridge_name='br-int',has_traffic_filtering=True,id=da11c88a-b2ea-4144-a5d1-ee734cad8848,network=Network(0cfa750a-9a18-4c24-bbf9-75517f0157ee),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapda11c88a-b2')#033[00m
Dec  6 03:26:29 np0005548731 systemd[1]: libpod-conmon-90f5140292676389ed1bffd6fe3bd9a5cfc6d1d7646994997c01510f08172fe2.scope: Deactivated successfully.
Dec  6 03:26:30 np0005548731 podman[346483]: 2025-12-06 08:26:30.04076621 +0000 UTC m=+0.047338616 container remove 90f5140292676389ed1bffd6fe3bd9a5cfc6d1d7646994997c01510f08172fe2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0cfa750a-9a18-4c24-bbf9-75517f0157ee, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec  6 03:26:30 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:26:30 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:26:30 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:26:30.046 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:26:30 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:26:30.050 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[a9e942fa-0694-429e-86df-53e2add4b42e]: (4, ('Sat Dec  6 08:26:29 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-0cfa750a-9a18-4c24-bbf9-75517f0157ee (90f5140292676389ed1bffd6fe3bd9a5cfc6d1d7646994997c01510f08172fe2)\n90f5140292676389ed1bffd6fe3bd9a5cfc6d1d7646994997c01510f08172fe2\nSat Dec  6 08:26:29 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-0cfa750a-9a18-4c24-bbf9-75517f0157ee (90f5140292676389ed1bffd6fe3bd9a5cfc6d1d7646994997c01510f08172fe2)\n90f5140292676389ed1bffd6fe3bd9a5cfc6d1d7646994997c01510f08172fe2\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:26:30 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:26:30.052 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[20220d1e-4390-4b29-a204-62d35d7c3974]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:26:30 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:26:30.053 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0cfa750a-90, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 03:26:30 np0005548731 nova_compute[232433]: 2025-12-06 08:26:30.055 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:26:30 np0005548731 kernel: tap0cfa750a-90: left promiscuous mode
Dec  6 03:26:30 np0005548731 nova_compute[232433]: 2025-12-06 08:26:30.072 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:26:30 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:26:30.074 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[c50dc734-93c3-452d-95c7-f5c29d93b3a6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:26:30 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:26:30.084 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[9dfd7bfd-0b9b-43f6-927c-49800fe5fdcd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:26:30 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:26:30.086 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[7354c5d8-2d52-420c-84a3-2ddf2022db2c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:26:30 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:26:30.103 236668 DEBUG oslo.privsep.daemon [-] privsep: reply[75cbc9cb-50ed-49ec-b686-64794609b1e7]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 983236, 'reachable_time': 26935, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 346513, 'error': None, 'target': 'ovnmeta-0cfa750a-9a18-4c24-bbf9-75517f0157ee', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:26:30 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:26:30.106 144079 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-0cfa750a-9a18-4c24-bbf9-75517f0157ee deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Dec  6 03:26:30 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:26:30.106 144079 DEBUG oslo.privsep.daemon [-] privsep: reply[3932a0c0-374a-47d5-8d9a-f87d4aa9c82c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  6 03:26:30 np0005548731 systemd[1]: run-netns-ovnmeta\x2d0cfa750a\x2d9a18\x2d4c24\x2dbbf9\x2d75517f0157ee.mount: Deactivated successfully.
Dec  6 03:26:30 np0005548731 nova_compute[232433]: 2025-12-06 08:26:30.376 232437 INFO nova.virt.libvirt.driver [None req-ede68c57-09f5-4990-bb97-2bd64fb64142 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] [instance: 772293bc-ff3c-401f-b61b-905279cb0976] Deleting instance files /var/lib/nova/instances/772293bc-ff3c-401f-b61b-905279cb0976_del#033[00m
Dec  6 03:26:30 np0005548731 nova_compute[232433]: 2025-12-06 08:26:30.377 232437 INFO nova.virt.libvirt.driver [None req-ede68c57-09f5-4990-bb97-2bd64fb64142 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] [instance: 772293bc-ff3c-401f-b61b-905279cb0976] Deletion of /var/lib/nova/instances/772293bc-ff3c-401f-b61b-905279cb0976_del complete#033[00m
Dec  6 03:26:30 np0005548731 nova_compute[232433]: 2025-12-06 08:26:30.430 232437 INFO nova.compute.manager [None req-ede68c57-09f5-4990-bb97-2bd64fb64142 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] [instance: 772293bc-ff3c-401f-b61b-905279cb0976] Took 0.72 seconds to destroy the instance on the hypervisor.#033[00m
Dec  6 03:26:30 np0005548731 nova_compute[232433]: 2025-12-06 08:26:30.431 232437 DEBUG oslo.service.loopingcall [None req-ede68c57-09f5-4990-bb97-2bd64fb64142 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Dec  6 03:26:30 np0005548731 nova_compute[232433]: 2025-12-06 08:26:30.431 232437 DEBUG nova.compute.manager [-] [instance: 772293bc-ff3c-401f-b61b-905279cb0976] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec  6 03:26:30 np0005548731 nova_compute[232433]: 2025-12-06 08:26:30.431 232437 DEBUG nova.network.neutron [-] [instance: 772293bc-ff3c-401f-b61b-905279cb0976] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec  6 03:26:31 np0005548731 nova_compute[232433]: 2025-12-06 08:26:31.256 232437 DEBUG nova.compute.manager [req-d8d9ac0e-c7bf-444f-bcc4-7d630c4a56f4 req-c82c9c1d-7c6d-44ee-83d4-1068b035625a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 772293bc-ff3c-401f-b61b-905279cb0976] Received event network-vif-unplugged-da11c88a-b2ea-4144-a5d1-ee734cad8848 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec  6 03:26:31 np0005548731 nova_compute[232433]: 2025-12-06 08:26:31.257 232437 DEBUG oslo_concurrency.lockutils [req-d8d9ac0e-c7bf-444f-bcc4-7d630c4a56f4 req-c82c9c1d-7c6d-44ee-83d4-1068b035625a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "772293bc-ff3c-401f-b61b-905279cb0976-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  6 03:26:31 np0005548731 nova_compute[232433]: 2025-12-06 08:26:31.257 232437 DEBUG oslo_concurrency.lockutils [req-d8d9ac0e-c7bf-444f-bcc4-7d630c4a56f4 req-c82c9c1d-7c6d-44ee-83d4-1068b035625a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "772293bc-ff3c-401f-b61b-905279cb0976-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  6 03:26:31 np0005548731 nova_compute[232433]: 2025-12-06 08:26:31.258 232437 DEBUG oslo_concurrency.lockutils [req-d8d9ac0e-c7bf-444f-bcc4-7d630c4a56f4 req-c82c9c1d-7c6d-44ee-83d4-1068b035625a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "772293bc-ff3c-401f-b61b-905279cb0976-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  6 03:26:31 np0005548731 nova_compute[232433]: 2025-12-06 08:26:31.258 232437 DEBUG nova.compute.manager [req-d8d9ac0e-c7bf-444f-bcc4-7d630c4a56f4 req-c82c9c1d-7c6d-44ee-83d4-1068b035625a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 772293bc-ff3c-401f-b61b-905279cb0976] No waiting events found dispatching network-vif-unplugged-da11c88a-b2ea-4144-a5d1-ee734cad8848 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec  6 03:26:31 np0005548731 nova_compute[232433]: 2025-12-06 08:26:31.258 232437 DEBUG nova.compute.manager [req-d8d9ac0e-c7bf-444f-bcc4-7d630c4a56f4 req-c82c9c1d-7c6d-44ee-83d4-1068b035625a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 772293bc-ff3c-401f-b61b-905279cb0976] Received event network-vif-unplugged-da11c88a-b2ea-4144-a5d1-ee734cad8848 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Dec  6 03:26:31 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:26:31 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:26:31 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:26:31.664 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:26:32 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:26:32 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:26:32 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:26:32.049 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:26:32 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:26:32 np0005548731 nova_compute[232433]: 2025-12-06 08:26:32.873 232437 DEBUG nova.network.neutron [-] [instance: 772293bc-ff3c-401f-b61b-905279cb0976] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec  6 03:26:32 np0005548731 nova_compute[232433]: 2025-12-06 08:26:32.894 232437 INFO nova.compute.manager [-] [instance: 772293bc-ff3c-401f-b61b-905279cb0976] Took 2.46 seconds to deallocate network for instance.
Dec  6 03:26:32 np0005548731 nova_compute[232433]: 2025-12-06 08:26:32.948 232437 DEBUG oslo_concurrency.lockutils [None req-ede68c57-09f5-4990-bb97-2bd64fb64142 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  6 03:26:32 np0005548731 nova_compute[232433]: 2025-12-06 08:26:32.949 232437 DEBUG oslo_concurrency.lockutils [None req-ede68c57-09f5-4990-bb97-2bd64fb64142 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  6 03:26:32 np0005548731 nova_compute[232433]: 2025-12-06 08:26:32.994 232437 DEBUG nova.compute.manager [req-7815b76b-f324-4552-9768-7fa79fb20ce9 req-6a030ae0-3b20-476e-b13e-176499d74d25 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 772293bc-ff3c-401f-b61b-905279cb0976] Received event network-vif-deleted-da11c88a-b2ea-4144-a5d1-ee734cad8848 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec  6 03:26:33 np0005548731 nova_compute[232433]: 2025-12-06 08:26:33.018 232437 DEBUG oslo_concurrency.processutils [None req-ede68c57-09f5-4990-bb97-2bd64fb64142 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  6 03:26:33 np0005548731 nova_compute[232433]: 2025-12-06 08:26:33.334 232437 DEBUG nova.compute.manager [req-82374091-2a24-43fa-a3bd-dad9f95209f0 req-9680c74b-b71d-4597-8f30-164f5e0eda66 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 772293bc-ff3c-401f-b61b-905279cb0976] Received event network-vif-plugged-da11c88a-b2ea-4144-a5d1-ee734cad8848 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec  6 03:26:33 np0005548731 nova_compute[232433]: 2025-12-06 08:26:33.334 232437 DEBUG oslo_concurrency.lockutils [req-82374091-2a24-43fa-a3bd-dad9f95209f0 req-9680c74b-b71d-4597-8f30-164f5e0eda66 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "772293bc-ff3c-401f-b61b-905279cb0976-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  6 03:26:33 np0005548731 nova_compute[232433]: 2025-12-06 08:26:33.334 232437 DEBUG oslo_concurrency.lockutils [req-82374091-2a24-43fa-a3bd-dad9f95209f0 req-9680c74b-b71d-4597-8f30-164f5e0eda66 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "772293bc-ff3c-401f-b61b-905279cb0976-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  6 03:26:33 np0005548731 nova_compute[232433]: 2025-12-06 08:26:33.334 232437 DEBUG oslo_concurrency.lockutils [req-82374091-2a24-43fa-a3bd-dad9f95209f0 req-9680c74b-b71d-4597-8f30-164f5e0eda66 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "772293bc-ff3c-401f-b61b-905279cb0976-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  6 03:26:33 np0005548731 nova_compute[232433]: 2025-12-06 08:26:33.335 232437 DEBUG nova.compute.manager [req-82374091-2a24-43fa-a3bd-dad9f95209f0 req-9680c74b-b71d-4597-8f30-164f5e0eda66 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 772293bc-ff3c-401f-b61b-905279cb0976] No waiting events found dispatching network-vif-plugged-da11c88a-b2ea-4144-a5d1-ee734cad8848 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec  6 03:26:33 np0005548731 nova_compute[232433]: 2025-12-06 08:26:33.335 232437 WARNING nova.compute.manager [req-82374091-2a24-43fa-a3bd-dad9f95209f0 req-9680c74b-b71d-4597-8f30-164f5e0eda66 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 772293bc-ff3c-401f-b61b-905279cb0976] Received unexpected event network-vif-plugged-da11c88a-b2ea-4144-a5d1-ee734cad8848 for instance with vm_state deleted and task_state None.
Dec  6 03:26:33 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 03:26:33 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/992866140' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 03:26:33 np0005548731 nova_compute[232433]: 2025-12-06 08:26:33.481 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 03:26:33 np0005548731 nova_compute[232433]: 2025-12-06 08:26:33.499 232437 DEBUG oslo_concurrency.processutils [None req-ede68c57-09f5-4990-bb97-2bd64fb64142 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.481s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  6 03:26:33 np0005548731 nova_compute[232433]: 2025-12-06 08:26:33.507 232437 DEBUG nova.compute.provider_tree [None req-ede68c57-09f5-4990-bb97-2bd64fb64142 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] Inventory has not changed in ProviderTree for provider: 6d00757a-082f-486d-ae84-869a2ba2e6e7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec  6 03:26:33 np0005548731 nova_compute[232433]: 2025-12-06 08:26:33.524 232437 DEBUG nova.scheduler.client.report [None req-ede68c57-09f5-4990-bb97-2bd64fb64142 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] Inventory has not changed for provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec  6 03:26:33 np0005548731 nova_compute[232433]: 2025-12-06 08:26:33.548 232437 DEBUG oslo_concurrency.lockutils [None req-ede68c57-09f5-4990-bb97-2bd64fb64142 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.599s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  6 03:26:33 np0005548731 nova_compute[232433]: 2025-12-06 08:26:33.580 232437 INFO nova.scheduler.client.report [None req-ede68c57-09f5-4990-bb97-2bd64fb64142 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] Deleted allocations for instance 772293bc-ff3c-401f-b61b-905279cb0976
Dec  6 03:26:33 np0005548731 nova_compute[232433]: 2025-12-06 08:26:33.659 232437 DEBUG oslo_concurrency.lockutils [None req-ede68c57-09f5-4990-bb97-2bd64fb64142 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] Lock "772293bc-ff3c-401f-b61b-905279cb0976" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.955s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  6 03:26:33 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:26:33 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:26:33 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:26:33.667 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:26:34 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:26:34 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:26:34 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:26:34.053 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:26:34 np0005548731 nova_compute[232433]: 2025-12-06 08:26:34.978 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 03:26:35 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:26:35.603 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=108, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:ec:b3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '32:72:e7:89:e0:7d'}, ipsec=False) old=SB_Global(nb_cfg=107) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec  6 03:26:35 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:26:35.604 143965 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec  6 03:26:35 np0005548731 nova_compute[232433]: 2025-12-06 08:26:35.646 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 03:26:35 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:26:35 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:26:35 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:26:35.669 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:26:36 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:26:36 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:26:36 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:26:36.056 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:26:37 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:26:37 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:26:37 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:26:37 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:26:37.671 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:26:38 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:26:38 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:26:38 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:26:38.059 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:26:38 np0005548731 nova_compute[232433]: 2025-12-06 08:26:38.483 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 03:26:38 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:26:38.606 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=9f96b960-b4f2-40bd-ae99-08121f5e8b78, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '108'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec  6 03:26:38 np0005548731 nova_compute[232433]: 2025-12-06 08:26:38.975 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 03:26:39 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:26:39 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:26:39 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:26:39.673 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:26:39 np0005548731 nova_compute[232433]: 2025-12-06 08:26:39.980 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 03:26:40 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:26:40 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:26:40 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:26:40.062 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:26:41 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:26:41 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:26:41 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:26:41.675 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:26:42 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:26:42 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:26:42 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:26:42.066 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:26:42 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:26:43 np0005548731 nova_compute[232433]: 2025-12-06 08:26:43.485 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 03:26:43 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:26:43 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:26:43 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:26:43.678 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:26:44 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:26:44 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:26:44 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:26:44.068 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:26:44 np0005548731 nova_compute[232433]: 2025-12-06 08:26:44.948 232437 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765009589.946617, 772293bc-ff3c-401f-b61b-905279cb0976 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec  6 03:26:44 np0005548731 nova_compute[232433]: 2025-12-06 08:26:44.949 232437 INFO nova.compute.manager [-] [instance: 772293bc-ff3c-401f-b61b-905279cb0976] VM Stopped (Lifecycle Event)
Dec  6 03:26:44 np0005548731 nova_compute[232433]: 2025-12-06 08:26:44.984 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 03:26:45 np0005548731 nova_compute[232433]: 2025-12-06 08:26:45.001 232437 DEBUG nova.compute.manager [None req-494ae5af-d542-4726-ba53-187dbdee3140 - - - - - -] [instance: 772293bc-ff3c-401f-b61b-905279cb0976] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec  6 03:26:45 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:26:45 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:26:45 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:26:45.682 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:26:46 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:26:46 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:26:46 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:26:46.072 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:26:47 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:26:47 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:26:47 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 03:26:47 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:26:47.684 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 03:26:48 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:26:48 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:26:48 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:26:48.076 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:26:48 np0005548731 nova_compute[232433]: 2025-12-06 08:26:48.487 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 03:26:49 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:26:49 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:26:49 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:26:49.687 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:26:49 np0005548731 nova_compute[232433]: 2025-12-06 08:26:49.987 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 03:26:50 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:26:50 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:26:50 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:26:50.079 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:26:51 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:26:51 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:26:51 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:26:51.691 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:26:52 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:26:52 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:26:52 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:26:52.081 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:26:52 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:26:53 np0005548731 nova_compute[232433]: 2025-12-06 08:26:53.489 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 03:26:53 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:26:53 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:26:53 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:26:53.693 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:26:54 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:26:54 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:26:54 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:26:54.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:26:54 np0005548731 nova_compute[232433]: 2025-12-06 08:26:54.990 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 03:26:55 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:26:55 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:26:55 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:26:55.694 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:26:56 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:26:56 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:26:56 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:26:56.087 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:26:56 np0005548731 podman[346606]: 2025-12-06 08:26:56.895649359 +0000 UTC m=+0.057573707 container health_status 26c2ce9bdb83c6db5ccad7f5b2a7cb4e9354ab0235ad2f942ea42c8feda2142b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec  6 03:26:56 np0005548731 podman[346608]: 2025-12-06 08:26:56.896012267 +0000 UTC m=+0.052369279 container health_status ae4fd08cdec31d6ae2a6b91ffff7deb6ca5a8375cb52ee4b685cc6f25796824e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Dec  6 03:26:56 np0005548731 podman[346607]: 2025-12-06 08:26:56.925294692 +0000 UTC m=+0.085677562 container health_status a118c3becf56cfbcae208065771ebf10a65162bb7b96389971293e534823fb78 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3)
Dec  6 03:26:57 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:26:57 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:26:57 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:26:57 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:26:57.697 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:26:58 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:26:58 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:26:58 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:26:58.090 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:26:58 np0005548731 nova_compute[232433]: 2025-12-06 08:26:58.105 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:26:58 np0005548731 nova_compute[232433]: 2025-12-06 08:26:58.105 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  6 03:26:58 np0005548731 nova_compute[232433]: 2025-12-06 08:26:58.105 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  6 03:26:58 np0005548731 nova_compute[232433]: 2025-12-06 08:26:58.128 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Dec  6 03:26:58 np0005548731 nova_compute[232433]: 2025-12-06 08:26:58.522 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:26:59 np0005548731 nova_compute[232433]: 2025-12-06 08:26:59.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:26:59 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:26:59 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:26:59 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:26:59.700 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:26:59 np0005548731 nova_compute[232433]: 2025-12-06 08:26:59.993 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:27:00 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:27:00 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:27:00 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:27:00.091 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:27:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:27:00.930 143965 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:27:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:27:00.931 143965 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:27:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:27:00.931 143965 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:27:01 np0005548731 nova_compute[232433]: 2025-12-06 08:27:01.105 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:27:01 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:27:01 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 03:27:01 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:27:01.702 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 03:27:02 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:27:02 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:27:02 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:27:02.095 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:27:02 np0005548731 nova_compute[232433]: 2025-12-06 08:27:02.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:27:02 np0005548731 nova_compute[232433]: 2025-12-06 08:27:02.104 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  6 03:27:02 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:27:03 np0005548731 nova_compute[232433]: 2025-12-06 08:27:03.523 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:27:03 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:27:03 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:27:03 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:27:03.706 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:27:04 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:27:04 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:27:04 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:27:04.097 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:27:04 np0005548731 nova_compute[232433]: 2025-12-06 08:27:04.099 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:27:04 np0005548731 nova_compute[232433]: 2025-12-06 08:27:04.995 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:27:05 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:27:05 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.002000048s ======
Dec  6 03:27:05 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:27:05.708 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000048s
Dec  6 03:27:06 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:27:06 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:27:06 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:27:06.100 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:27:06 np0005548731 nova_compute[232433]: 2025-12-06 08:27:06.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:27:07 np0005548731 nova_compute[232433]: 2025-12-06 08:27:07.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:27:07 np0005548731 nova_compute[232433]: 2025-12-06 08:27:07.131 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:27:07 np0005548731 nova_compute[232433]: 2025-12-06 08:27:07.132 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:27:07 np0005548731 nova_compute[232433]: 2025-12-06 08:27:07.132 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:27:07 np0005548731 nova_compute[232433]: 2025-12-06 08:27:07.132 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  6 03:27:07 np0005548731 nova_compute[232433]: 2025-12-06 08:27:07.133 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 03:27:07 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:27:07 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 03:27:07 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1869144123' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 03:27:07 np0005548731 nova_compute[232433]: 2025-12-06 08:27:07.607 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.474s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 03:27:07 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:27:07 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 03:27:07 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:27:07.712 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 03:27:07 np0005548731 nova_compute[232433]: 2025-12-06 08:27:07.783 232437 WARNING nova.virt.libvirt.driver [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  6 03:27:07 np0005548731 nova_compute[232433]: 2025-12-06 08:27:07.784 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4140MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  6 03:27:07 np0005548731 nova_compute[232433]: 2025-12-06 08:27:07.784 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:27:07 np0005548731 nova_compute[232433]: 2025-12-06 08:27:07.785 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:27:08 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:27:08 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:27:08 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:27:08.103 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:27:08 np0005548731 nova_compute[232433]: 2025-12-06 08:27:08.105 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  6 03:27:08 np0005548731 nova_compute[232433]: 2025-12-06 08:27:08.105 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  6 03:27:08 np0005548731 nova_compute[232433]: 2025-12-06 08:27:08.310 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 03:27:08 np0005548731 nova_compute[232433]: 2025-12-06 08:27:08.525 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:27:08 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 03:27:08 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3898552235' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 03:27:08 np0005548731 nova_compute[232433]: 2025-12-06 08:27:08.743 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.433s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 03:27:08 np0005548731 nova_compute[232433]: 2025-12-06 08:27:08.749 232437 DEBUG nova.compute.provider_tree [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Inventory has not changed in ProviderTree for provider: 6d00757a-082f-486d-ae84-869a2ba2e6e7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  6 03:27:08 np0005548731 nova_compute[232433]: 2025-12-06 08:27:08.766 232437 DEBUG nova.scheduler.client.report [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Inventory has not changed for provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  6 03:27:08 np0005548731 nova_compute[232433]: 2025-12-06 08:27:08.798 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  6 03:27:08 np0005548731 nova_compute[232433]: 2025-12-06 08:27:08.798 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.014s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:27:09 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  6 03:27:09 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1091282979' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  6 03:27:09 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  6 03:27:09 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1091282979' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  6 03:27:09 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:27:09 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:27:09 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:27:09.715 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:27:09 np0005548731 nova_compute[232433]: 2025-12-06 08:27:09.799 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:27:09 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 03:27:09 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 03:27:09 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  6 03:27:09 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 03:27:09 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  6 03:27:09 np0005548731 nova_compute[232433]: 2025-12-06 08:27:09.998 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:27:10 np0005548731 nova_compute[232433]: 2025-12-06 08:27:10.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:27:10 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:27:10 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:27:10 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:27:10.106 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:27:11 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:27:11 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:27:11 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:27:11.718 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:27:12 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:27:12 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:27:12 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:27:12.109 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:27:12 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:27:13 np0005548731 nova_compute[232433]: 2025-12-06 08:27:13.526 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:27:13 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:27:13 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 03:27:13 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:27:13.720 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 03:27:14 np0005548731 ovn_controller[133927]: 2025-12-06T08:27:14Z|01115|memory_trim|INFO|Detected inactivity (last active 30001 ms ago): trimming memory
Dec  6 03:27:14 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:27:14 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:27:14 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:27:14.111 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:27:15 np0005548731 nova_compute[232433]: 2025-12-06 08:27:15.001 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:27:15 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:27:15 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:27:15 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:27:15.724 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:27:16 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:27:16 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:27:16 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:27:16.115 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:27:16 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 03:27:16 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 03:27:17 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:27:17 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:27:17 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:27:17 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:27:17.726 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:27:18 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:27:18 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:27:18 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:27:18.118 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:27:18 np0005548731 nova_compute[232433]: 2025-12-06 08:27:18.527 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:27:19 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:27:19 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:27:19 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:27:19.729 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:27:20 np0005548731 nova_compute[232433]: 2025-12-06 08:27:20.003 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:27:20 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:27:20 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:27:20 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:27:20.120 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:27:21 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:27:21 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:27:21 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:27:21.731 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:27:22 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:27:22 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:27:22 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:27:22.123 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:27:22 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:27:23 np0005548731 nova_compute[232433]: 2025-12-06 08:27:23.531 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:27:23 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:27:23 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 03:27:23 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:27:23.733 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 03:27:24 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:27:24 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:27:24 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:27:24.126 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:27:25 np0005548731 nova_compute[232433]: 2025-12-06 08:27:25.006 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:27:25 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:27:25 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:27:25 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:27:25.736 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:27:26 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:27:26 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:27:26 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:27:26.129 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:27:27 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:27:27 np0005548731 podman[346979]: 2025-12-06 08:27:27.66102591 +0000 UTC m=+0.050551206 container health_status 26c2ce9bdb83c6db5ccad7f5b2a7cb4e9354ab0235ad2f942ea42c8feda2142b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Dec  6 03:27:27 np0005548731 podman[346981]: 2025-12-06 08:27:27.675627287 +0000 UTC m=+0.060453108 container health_status ae4fd08cdec31d6ae2a6b91ffff7deb6ca5a8375cb52ee4b685cc6f25796824e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=multipathd, org.label-schema.schema-version=1.0)
Dec  6 03:27:27 np0005548731 podman[346980]: 2025-12-06 08:27:27.690087359 +0000 UTC m=+0.076683393 container health_status a118c3becf56cfbcae208065771ebf10a65162bb7b96389971293e534823fb78 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  6 03:27:27 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:27:27 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:27:27 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:27:27.739 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:27:28 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:27:28 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:27:28 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:27:28.131 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:27:28 np0005548731 nova_compute[232433]: 2025-12-06 08:27:28.532 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:27:29 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:27:29 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:27:29 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:27:29.741 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:27:30 np0005548731 nova_compute[232433]: 2025-12-06 08:27:30.008 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:27:30 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:27:30 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:27:30 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:27:30.134 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:27:31 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:27:31 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:27:31 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:27:31.743 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:27:32 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:27:32 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:27:32 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:27:32.137 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:27:32 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:27:33 np0005548731 nova_compute[232433]: 2025-12-06 08:27:33.534 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:27:33 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:27:33 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 03:27:33 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:27:33.746 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 03:27:34 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:27:34 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:27:34 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:27:34.140 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:27:35 np0005548731 nova_compute[232433]: 2025-12-06 08:27:35.011 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:27:35 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:27:35 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 03:27:35 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:27:35.749 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 03:27:36 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:27:36 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:27:36 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:27:36.142 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:27:37 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:27:37 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:27:37 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:27:37 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:27:37.752 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:27:38 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:27:38 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:27:38 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:27:38.145 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:27:38 np0005548731 nova_compute[232433]: 2025-12-06 08:27:38.535 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:27:39 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e429 e429: 3 total, 3 up, 3 in
Dec  6 03:27:39 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:27:39 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:27:39 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:27:39.754 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:27:40 np0005548731 nova_compute[232433]: 2025-12-06 08:27:40.013 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:27:40 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:27:40 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:27:40 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:27:40.147 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:27:40 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e430 e430: 3 total, 3 up, 3 in
Dec  6 03:27:41 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e431 e431: 3 total, 3 up, 3 in
Dec  6 03:27:41 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:27:41 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:27:41 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:27:41.755 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:27:42 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:27:42 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:27:42 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:27:42.148 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:27:42 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e432 e432: 3 total, 3 up, 3 in
Dec  6 03:27:42 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e432 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:27:43 np0005548731 nova_compute[232433]: 2025-12-06 08:27:43.536 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:27:43 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:27:43 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:27:43 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:27:43.758 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:27:44 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:27:44 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:27:44 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:27:44.150 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:27:45 np0005548731 nova_compute[232433]: 2025-12-06 08:27:45.034 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:27:45 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  6 03:27:45 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1713963602' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  6 03:27:45 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:27:45 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:27:45 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:27:45.760 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:27:46 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:27:46 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 03:27:46 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:27:46.154 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 03:27:46 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e433 e433: 3 total, 3 up, 3 in
Dec  6 03:27:47 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e433 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:27:47 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:27:47 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:27:47 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:27:47.762 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:27:48 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:27:48 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:27:48 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:27:48.157 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:27:48 np0005548731 nova_compute[232433]: 2025-12-06 08:27:48.540 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:27:48 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:27:48.612 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=109, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:ec:b3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '32:72:e7:89:e0:7d'}, ipsec=False) old=SB_Global(nb_cfg=108) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 03:27:48 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:27:48.612 143965 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec  6 03:27:48 np0005548731 nova_compute[232433]: 2025-12-06 08:27:48.613 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:27:49 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:27:49 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:27:49 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:27:49.765 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:27:50 np0005548731 nova_compute[232433]: 2025-12-06 08:27:50.037 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:27:50 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:27:50 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:27:50 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:27:50.159 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:27:50 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e434 e434: 3 total, 3 up, 3 in
Dec  6 03:27:51 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:27:51 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:27:51 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:27:51.767 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:27:52 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:27:52 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:27:52 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:27:52.163 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:27:52 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e434 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:27:53 np0005548731 nova_compute[232433]: 2025-12-06 08:27:53.542 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:27:53 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:27:53 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:27:53 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:27:53.769 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:27:54 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:27:54 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:27:54 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:27:54.165 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:27:55 np0005548731 nova_compute[232433]: 2025-12-06 08:27:55.041 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:27:55 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:27:55 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:27:55 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:27:55.772 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:27:56 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:27:56 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:27:56 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:27:56.168 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:27:57 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e434 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:27:57 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:27:57.614 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=9f96b960-b4f2-40bd-ae99-08121f5e8b78, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '109'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 03:27:57 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:27:57 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:27:57 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:27:57.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:27:57 np0005548731 podman[347137]: 2025-12-06 08:27:57.897315648 +0000 UTC m=+0.056087231 container health_status ae4fd08cdec31d6ae2a6b91ffff7deb6ca5a8375cb52ee4b685cc6f25796824e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec  6 03:27:57 np0005548731 podman[347136]: 2025-12-06 08:27:57.912508249 +0000 UTC m=+0.074724605 container health_status a118c3becf56cfbcae208065771ebf10a65162bb7b96389971293e534823fb78 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller)
Dec  6 03:27:57 np0005548731 podman[347135]: 2025-12-06 08:27:57.915261216 +0000 UTC m=+0.077614606 container health_status 26c2ce9bdb83c6db5ccad7f5b2a7cb4e9354ab0235ad2f942ea42c8feda2142b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Dec  6 03:27:58 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:27:58 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:27:58 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:27:58.171 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:27:58 np0005548731 nova_compute[232433]: 2025-12-06 08:27:58.544 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:27:59 np0005548731 nova_compute[232433]: 2025-12-06 08:27:59.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:27:59 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:27:59 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:27:59 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:27:59.777 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:28:00 np0005548731 nova_compute[232433]: 2025-12-06 08:28:00.044 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:28:00 np0005548731 nova_compute[232433]: 2025-12-06 08:28:00.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:28:00 np0005548731 nova_compute[232433]: 2025-12-06 08:28:00.104 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  6 03:28:00 np0005548731 nova_compute[232433]: 2025-12-06 08:28:00.104 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  6 03:28:00 np0005548731 nova_compute[232433]: 2025-12-06 08:28:00.127 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Dec  6 03:28:00 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:28:00 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:28:00 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:28:00.175 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:28:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:28:00.931 143965 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:28:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:28:00.931 143965 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:28:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:28:00.931 143965 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:28:01 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:28:01 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:28:01 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:28:01.780 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:28:02 np0005548731 nova_compute[232433]: 2025-12-06 08:28:02.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:28:02 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:28:02 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:28:02 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:28:02.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:28:02 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e434 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:28:03 np0005548731 nova_compute[232433]: 2025-12-06 08:28:03.545 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:28:03 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:28:03 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 03:28:03 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:28:03.782 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 03:28:04 np0005548731 nova_compute[232433]: 2025-12-06 08:28:04.100 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:28:04 np0005548731 nova_compute[232433]: 2025-12-06 08:28:04.103 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:28:04 np0005548731 nova_compute[232433]: 2025-12-06 08:28:04.104 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  6 03:28:04 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:28:04 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:28:04 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:28:04.181 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:28:05 np0005548731 nova_compute[232433]: 2025-12-06 08:28:05.047 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:28:05 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:28:05 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:28:05 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:28:05.785 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:28:06 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:28:06 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 03:28:06 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:28:06.183 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 03:28:07 np0005548731 nova_compute[232433]: 2025-12-06 08:28:07.105 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:28:07 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e434 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:28:07 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:28:07 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:28:07 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:28:07.788 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:28:08 np0005548731 nova_compute[232433]: 2025-12-06 08:28:08.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:28:08 np0005548731 nova_compute[232433]: 2025-12-06 08:28:08.152 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:28:08 np0005548731 nova_compute[232433]: 2025-12-06 08:28:08.152 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:28:08 np0005548731 nova_compute[232433]: 2025-12-06 08:28:08.153 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:28:08 np0005548731 nova_compute[232433]: 2025-12-06 08:28:08.153 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  6 03:28:08 np0005548731 nova_compute[232433]: 2025-12-06 08:28:08.153 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 03:28:08 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:28:08 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:28:08 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:28:08.187 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:28:08 np0005548731 nova_compute[232433]: 2025-12-06 08:28:08.547 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:28:08 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 03:28:08 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2913063228' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 03:28:08 np0005548731 nova_compute[232433]: 2025-12-06 08:28:08.578 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.424s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 03:28:08 np0005548731 nova_compute[232433]: 2025-12-06 08:28:08.726 232437 WARNING nova.virt.libvirt.driver [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  6 03:28:08 np0005548731 nova_compute[232433]: 2025-12-06 08:28:08.727 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4145MB free_disk=20.98813247680664GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, 
"label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  6 03:28:08 np0005548731 nova_compute[232433]: 2025-12-06 08:28:08.727 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:28:08 np0005548731 nova_compute[232433]: 2025-12-06 08:28:08.727 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:28:08 np0005548731 nova_compute[232433]: 2025-12-06 08:28:08.799 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  6 03:28:08 np0005548731 nova_compute[232433]: 2025-12-06 08:28:08.799 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  6 03:28:08 np0005548731 nova_compute[232433]: 2025-12-06 08:28:08.894 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 03:28:09 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 03:28:09 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1864675316' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 03:28:09 np0005548731 nova_compute[232433]: 2025-12-06 08:28:09.314 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.420s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 03:28:09 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  6 03:28:09 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1310669518' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  6 03:28:09 np0005548731 nova_compute[232433]: 2025-12-06 08:28:09.321 232437 DEBUG nova.compute.provider_tree [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Inventory has not changed in ProviderTree for provider: 6d00757a-082f-486d-ae84-869a2ba2e6e7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  6 03:28:09 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  6 03:28:09 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1310669518' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  6 03:28:09 np0005548731 nova_compute[232433]: 2025-12-06 08:28:09.336 232437 DEBUG nova.scheduler.client.report [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Inventory has not changed for provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  6 03:28:09 np0005548731 nova_compute[232433]: 2025-12-06 08:28:09.337 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  6 03:28:09 np0005548731 nova_compute[232433]: 2025-12-06 08:28:09.337 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.610s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:28:09 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:28:09 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:28:09 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:28:09.790 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:28:10 np0005548731 nova_compute[232433]: 2025-12-06 08:28:10.050 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:28:10 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:28:10 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:28:10 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:28:10.190 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:28:11 np0005548731 nova_compute[232433]: 2025-12-06 08:28:11.339 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:28:11 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:28:11 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:28:11 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:28:11.792 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:28:12 np0005548731 nova_compute[232433]: 2025-12-06 08:28:12.105 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:28:12 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:28:12 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:28:12 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:28:12.193 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:28:12 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e434 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:28:13 np0005548731 nova_compute[232433]: 2025-12-06 08:28:13.549 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:28:13 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:28:13 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:28:13 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:28:13.794 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:28:14 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:28:14 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:28:14 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:28:14.195 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:28:15 np0005548731 nova_compute[232433]: 2025-12-06 08:28:15.055 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:28:15 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:28:15 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:28:15 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:28:15.796 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:28:16 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:28:16 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:28:16 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:28:16.198 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:28:17 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Dec  6 03:28:17 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  6 03:28:17 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 03:28:17 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  6 03:28:17 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e434 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:28:17 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:28:17 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:28:17 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:28:17.798 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:28:18 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:28:18 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:28:18 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:28:18.200 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:28:18 np0005548731 nova_compute[232433]: 2025-12-06 08:28:18.551 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:28:19 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:28:19 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:28:19 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:28:19.801 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:28:20 np0005548731 nova_compute[232433]: 2025-12-06 08:28:20.057 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:28:20 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:28:20 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:28:20 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:28:20.203 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:28:21 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:28:21 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:28:21 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:28:21.804 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:28:22 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:28:22 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:28:22 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:28:22.206 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:28:22 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e434 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:28:23 np0005548731 nova_compute[232433]: 2025-12-06 08:28:23.551 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:28:23 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 03:28:23 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 03:28:23 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:28:23 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:28:23 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:28:23.805 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:28:24 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:28:24 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:28:24 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:28:24.208 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:28:25 np0005548731 nova_compute[232433]: 2025-12-06 08:28:25.060 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:28:25 np0005548731 nova_compute[232433]: 2025-12-06 08:28:25.099 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:28:25 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:28:25 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:28:25 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:28:25.808 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:28:26 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:28:26 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:28:26 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:28:26.211 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:28:26 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e435 e435: 3 total, 3 up, 3 in
Dec  6 03:28:27 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e436 e436: 3 total, 3 up, 3 in
Dec  6 03:28:27 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e436 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:28:27 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:28:27 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:28:27 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:28:27.810 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:28:28 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  6 03:28:28 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1100284927' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  6 03:28:28 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  6 03:28:28 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1100284927' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  6 03:28:28 np0005548731 podman[347512]: 2025-12-06 08:28:28.166107349 +0000 UTC m=+0.069332524 container health_status 26c2ce9bdb83c6db5ccad7f5b2a7cb4e9354ab0235ad2f942ea42c8feda2142b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team)
Dec  6 03:28:28 np0005548731 podman[347514]: 2025-12-06 08:28:28.188454095 +0000 UTC m=+0.085800487 container health_status ae4fd08cdec31d6ae2a6b91ffff7deb6ca5a8375cb52ee4b685cc6f25796824e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=multipathd, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3)
Dec  6 03:28:28 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:28:28 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:28:28 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:28:28.214 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:28:28 np0005548731 podman[347513]: 2025-12-06 08:28:28.253286418 +0000 UTC m=+0.151376487 container health_status a118c3becf56cfbcae208065771ebf10a65162bb7b96389971293e534823fb78 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125)
Dec  6 03:28:28 np0005548731 nova_compute[232433]: 2025-12-06 08:28:28.554 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 03:28:29 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:28:29 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:28:29 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:28:29.812 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:28:30 np0005548731 nova_compute[232433]: 2025-12-06 08:28:30.062 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 03:28:30 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:28:30 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:28:30 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:28:30.217 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:28:31 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:28:31 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:28:31 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:28:31.815 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:28:32 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:28:32 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:28:32 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:28:32.219 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:28:32 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e436 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:28:33 np0005548731 nova_compute[232433]: 2025-12-06 08:28:33.556 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 03:28:33 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:28:33 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:28:33 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:28:33.817 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:28:34 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:28:34 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:28:34 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:28:34.221 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:28:35 np0005548731 nova_compute[232433]: 2025-12-06 08:28:35.065 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 03:28:35 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e437 e437: 3 total, 3 up, 3 in
Dec  6 03:28:35 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:28:35 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:28:35 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:28:35.819 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:28:36 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:28:36 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:28:36 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:28:36.225 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:28:37 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:28:37 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:28:37 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:28:37 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:28:37.821 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:28:38 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:28:38 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:28:38 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:28:38.228 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:28:38 np0005548731 nova_compute[232433]: 2025-12-06 08:28:38.558 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 03:28:39 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:28:39 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:28:39 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:28:39.825 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:28:40 np0005548731 nova_compute[232433]: 2025-12-06 08:28:40.067 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 03:28:40 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:28:40 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 03:28:40 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:28:40.231 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 03:28:41 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:28:41 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:28:41 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:28:41.828 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:28:42 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:28:42 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:28:42 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:28:42.234 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:28:42 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:28:43 np0005548731 nova_compute[232433]: 2025-12-06 08:28:43.561 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 03:28:43 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:28:43 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:28:43 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:28:43.831 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:28:44 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:28:44 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:28:44 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:28:44.237 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:28:45 np0005548731 nova_compute[232433]: 2025-12-06 08:28:45.070 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 03:28:45 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:28:45 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:28:45 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:28:45.835 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:28:46 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:28:46 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:28:46 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:28:46.240 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:28:47 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:28:47 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:28:47 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:28:47 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:28:47.837 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:28:48 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:28:48 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:28:48 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:28:48.242 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:28:48 np0005548731 nova_compute[232433]: 2025-12-06 08:28:48.563 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 03:28:49 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:28:49 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:28:49 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:28:49.839 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:28:50 np0005548731 nova_compute[232433]: 2025-12-06 08:28:50.074 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 03:28:50 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:28:50 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 03:28:50 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:28:50.244 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 03:28:51 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:28:51 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:28:51 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:28:51.841 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:28:52 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:28:52 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:28:52 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:28:52.248 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:28:52 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:28:53 np0005548731 ceph-mon[77458]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #184. Immutable memtables: 0.
Dec  6 03:28:53 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:28:53.498364) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec  6 03:28:53 np0005548731 ceph-mon[77458]: rocksdb: [db/flush_job.cc:856] [default] [JOB 117] Flushing memtable with next log file: 184
Dec  6 03:28:53 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765009733498433, "job": 117, "event": "flush_started", "num_memtables": 1, "num_entries": 2413, "num_deletes": 254, "total_data_size": 5770770, "memory_usage": 5866824, "flush_reason": "Manual Compaction"}
Dec  6 03:28:53 np0005548731 ceph-mon[77458]: rocksdb: [db/flush_job.cc:885] [default] [JOB 117] Level-0 flush table #185: started
Dec  6 03:28:53 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765009733521117, "cf_name": "default", "job": 117, "event": "table_file_creation", "file_number": 185, "file_size": 3771841, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 88784, "largest_seqno": 91192, "table_properties": {"data_size": 3762129, "index_size": 6141, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2501, "raw_key_size": 20097, "raw_average_key_size": 20, "raw_value_size": 3742679, "raw_average_value_size": 3819, "num_data_blocks": 269, "num_entries": 980, "num_filter_entries": 980, "num_deletions": 254, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765009522, "oldest_key_time": 1765009522, "file_creation_time": 1765009733, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bc01f9a1-fda8-4008-aee1-1fa5ff4371a1", "db_session_id": "CWBL8WMDM4D79BU86E9T", "orig_file_number": 185, "seqno_to_time_mapping": "N/A"}}
Dec  6 03:28:53 np0005548731 ceph-mon[77458]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 117] Flush lasted 22785 microseconds, and 7080 cpu microseconds.
Dec  6 03:28:53 np0005548731 ceph-mon[77458]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  6 03:28:53 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:28:53.521158) [db/flush_job.cc:967] [default] [JOB 117] Level-0 flush table #185: 3771841 bytes OK
Dec  6 03:28:53 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:28:53.521175) [db/memtable_list.cc:519] [default] Level-0 commit table #185 started
Dec  6 03:28:53 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:28:53.522893) [db/memtable_list.cc:722] [default] Level-0 commit table #185: memtable #1 done
Dec  6 03:28:53 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:28:53.522903) EVENT_LOG_v1 {"time_micros": 1765009733522900, "job": 117, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec  6 03:28:53 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:28:53.522918) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec  6 03:28:53 np0005548731 ceph-mon[77458]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 117] Try to delete WAL files size 5760261, prev total WAL file size 5760261, number of live WAL files 2.
Dec  6 03:28:53 np0005548731 ceph-mon[77458]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000181.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  6 03:28:53 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:28:53.524099) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730037373831' seq:72057594037927935, type:22 .. '7061786F730038303333' seq:0, type:0; will stop at (end)
Dec  6 03:28:53 np0005548731 ceph-mon[77458]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 118] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec  6 03:28:53 np0005548731 ceph-mon[77458]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 117 Base level 0, inputs: [185(3683KB)], [183(10MB)]
Dec  6 03:28:53 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765009733524160, "job": 118, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [185], "files_L6": [183], "score": -1, "input_data_size": 15072371, "oldest_snapshot_seqno": -1}
Dec  6 03:28:53 np0005548731 nova_compute[232433]: 2025-12-06 08:28:53.564 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 03:28:53 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:28:53 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:28:53 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:28:53.843 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:28:53 np0005548731 ceph-mon[77458]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 118] Generated table #186: 11678 keys, 13164437 bytes, temperature: kUnknown
Dec  6 03:28:53 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765009733944731, "cf_name": "default", "job": 118, "event": "table_file_creation", "file_number": 186, "file_size": 13164437, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13091830, "index_size": 42286, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 29253, "raw_key_size": 309634, "raw_average_key_size": 26, "raw_value_size": 12890377, "raw_average_value_size": 1103, "num_data_blocks": 1592, "num_entries": 11678, "num_filter_entries": 11678, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765002564, "oldest_key_time": 0, "file_creation_time": 1765009733, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bc01f9a1-fda8-4008-aee1-1fa5ff4371a1", "db_session_id": "CWBL8WMDM4D79BU86E9T", "orig_file_number": 186, "seqno_to_time_mapping": "N/A"}}
Dec  6 03:28:53 np0005548731 ceph-mon[77458]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  6 03:28:54 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:28:53.944985) [db/compaction/compaction_job.cc:1663] [default] [JOB 118] Compacted 1@0 + 1@6 files to L6 => 13164437 bytes
Dec  6 03:28:54 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:28:54.111379) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 35.8 rd, 31.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.6, 10.8 +0.0 blob) out(12.6 +0.0 blob), read-write-amplify(7.5) write-amplify(3.5) OK, records in: 12201, records dropped: 523 output_compression: NoCompression
Dec  6 03:28:54 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:28:54.111421) EVENT_LOG_v1 {"time_micros": 1765009734111406, "job": 118, "event": "compaction_finished", "compaction_time_micros": 420653, "compaction_time_cpu_micros": 31353, "output_level": 6, "num_output_files": 1, "total_output_size": 13164437, "num_input_records": 12201, "num_output_records": 11678, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec  6 03:28:54 np0005548731 ceph-mon[77458]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000185.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  6 03:28:54 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765009734112355, "job": 118, "event": "table_file_deletion", "file_number": 185}
Dec  6 03:28:54 np0005548731 ceph-mon[77458]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000183.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  6 03:28:54 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765009734114850, "job": 118, "event": "table_file_deletion", "file_number": 183}
Dec  6 03:28:54 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:28:53.524009) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 03:28:54 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:28:54.114911) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 03:28:54 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:28:54.114915) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 03:28:54 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:28:54.114916) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 03:28:54 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:28:54.114918) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 03:28:54 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:28:54.114919) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 03:28:54 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:28:54 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:28:54 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:28:54.250 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:28:55 np0005548731 nova_compute[232433]: 2025-12-06 08:28:55.077 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:28:55 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:28:55 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:28:55 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:28:55.845 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:28:56 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:28:56 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:28:56 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:28:56.252 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:28:57 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:28:57 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:28:57 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:28:57 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:28:57.847 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:28:58 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:28:58 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:28:58 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:28:58.255 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:28:58 np0005548731 nova_compute[232433]: 2025-12-06 08:28:58.602 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:28:58 np0005548731 podman[347665]: 2025-12-06 08:28:58.884504495 +0000 UTC m=+0.051045917 container health_status 26c2ce9bdb83c6db5ccad7f5b2a7cb4e9354ab0235ad2f942ea42c8feda2142b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent)
Dec  6 03:28:58 np0005548731 podman[347666]: 2025-12-06 08:28:58.912676312 +0000 UTC m=+0.076053377 container health_status a118c3becf56cfbcae208065771ebf10a65162bb7b96389971293e534823fb78 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec  6 03:28:58 np0005548731 podman[347667]: 2025-12-06 08:28:58.915298186 +0000 UTC m=+0.074717185 container health_status ae4fd08cdec31d6ae2a6b91ffff7deb6ca5a8375cb52ee4b685cc6f25796824e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Dec  6 03:28:59 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:28:59 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:28:59 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:28:59.849 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:29:00 np0005548731 nova_compute[232433]: 2025-12-06 08:29:00.079 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:29:00 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:29:00 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:29:00 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:29:00.257 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:29:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:29:00.931 143965 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:29:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:29:00.932 143965 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:29:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:29:00.932 143965 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:29:01 np0005548731 nova_compute[232433]: 2025-12-06 08:29:01.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:29:01 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:29:01 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 03:29:01 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:29:01.851 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 03:29:02 np0005548731 nova_compute[232433]: 2025-12-06 08:29:02.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:29:02 np0005548731 nova_compute[232433]: 2025-12-06 08:29:02.105 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  6 03:29:02 np0005548731 nova_compute[232433]: 2025-12-06 08:29:02.105 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  6 03:29:02 np0005548731 nova_compute[232433]: 2025-12-06 08:29:02.130 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Dec  6 03:29:02 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:29:02 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:29:02 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:29:02.259 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:29:02 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:29:03 np0005548731 nova_compute[232433]: 2025-12-06 08:29:03.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:29:03 np0005548731 nova_compute[232433]: 2025-12-06 08:29:03.602 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:29:03 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:29:03 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:29:03 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:29:03.853 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:29:04 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:29:04 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:29:04 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:29:04.262 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:29:05 np0005548731 nova_compute[232433]: 2025-12-06 08:29:05.081 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:29:05 np0005548731 nova_compute[232433]: 2025-12-06 08:29:05.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:29:05 np0005548731 nova_compute[232433]: 2025-12-06 08:29:05.104 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  6 03:29:05 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:29:05 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:29:05 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:29:05.856 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:29:06 np0005548731 nova_compute[232433]: 2025-12-06 08:29:06.099 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:29:06 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:29:06 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 03:29:06 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:29:06.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 03:29:07 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:29:07 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:29:07 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:29:07 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:29:07.858 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:29:08 np0005548731 nova_compute[232433]: 2025-12-06 08:29:08.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:29:08 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:29:08 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:29:08 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:29:08.267 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:29:08 np0005548731 nova_compute[232433]: 2025-12-06 08:29:08.687 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:29:09 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  6 03:29:09 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3761457245' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  6 03:29:09 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  6 03:29:09 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3761457245' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  6 03:29:09 np0005548731 nova_compute[232433]: 2025-12-06 08:29:09.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:29:09 np0005548731 nova_compute[232433]: 2025-12-06 08:29:09.213 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:29:09 np0005548731 nova_compute[232433]: 2025-12-06 08:29:09.214 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:29:09 np0005548731 nova_compute[232433]: 2025-12-06 08:29:09.214 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:29:09 np0005548731 nova_compute[232433]: 2025-12-06 08:29:09.214 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  6 03:29:09 np0005548731 nova_compute[232433]: 2025-12-06 08:29:09.214 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 03:29:09 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 03:29:09 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3044378136' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 03:29:09 np0005548731 nova_compute[232433]: 2025-12-06 08:29:09.790 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.576s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 03:29:09 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:29:09 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 03:29:09 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:29:09.860 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 03:29:09 np0005548731 nova_compute[232433]: 2025-12-06 08:29:09.993 232437 WARNING nova.virt.libvirt.driver [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  6 03:29:09 np0005548731 nova_compute[232433]: 2025-12-06 08:29:09.994 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4158MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  6 03:29:09 np0005548731 nova_compute[232433]: 2025-12-06 08:29:09.994 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:29:09 np0005548731 nova_compute[232433]: 2025-12-06 08:29:09.995 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:29:10 np0005548731 nova_compute[232433]: 2025-12-06 08:29:10.084 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:29:10 np0005548731 nova_compute[232433]: 2025-12-06 08:29:10.211 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  6 03:29:10 np0005548731 nova_compute[232433]: 2025-12-06 08:29:10.211 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  6 03:29:10 np0005548731 nova_compute[232433]: 2025-12-06 08:29:10.229 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 03:29:10 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:29:10 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:29:10 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:29:10.270 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:29:10 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 03:29:10 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3507224446' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 03:29:10 np0005548731 nova_compute[232433]: 2025-12-06 08:29:10.754 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.525s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 03:29:10 np0005548731 nova_compute[232433]: 2025-12-06 08:29:10.761 232437 DEBUG nova.compute.provider_tree [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Inventory has not changed in ProviderTree for provider: 6d00757a-082f-486d-ae84-869a2ba2e6e7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  6 03:29:10 np0005548731 nova_compute[232433]: 2025-12-06 08:29:10.943 232437 DEBUG nova.scheduler.client.report [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Inventory has not changed for provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  6 03:29:10 np0005548731 nova_compute[232433]: 2025-12-06 08:29:10.945 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  6 03:29:10 np0005548731 nova_compute[232433]: 2025-12-06 08:29:10.946 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.951s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:29:11 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:29:11 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:29:11 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:29:11.863 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:29:11 np0005548731 nova_compute[232433]: 2025-12-06 08:29:11.948 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:29:12 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:29:12 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:29:12 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:29:12.273 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:29:12 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:29:13 np0005548731 nova_compute[232433]: 2025-12-06 08:29:13.688 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:29:13 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:29:13 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:29:13 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:29:13.866 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:29:14 np0005548731 nova_compute[232433]: 2025-12-06 08:29:14.105 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:29:14 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:29:14 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:29:14 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:29:14.277 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:29:15 np0005548731 nova_compute[232433]: 2025-12-06 08:29:15.087 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:29:15 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:29:15 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 03:29:15 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:29:15.869 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 03:29:16 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:29:16 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:29:16 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:29:16.280 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:29:17 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:29:17 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:29:17 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:29:17 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:29:17.872 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:29:18 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:29:18 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:29:18 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:29:18.283 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:29:18 np0005548731 nova_compute[232433]: 2025-12-06 08:29:18.733 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:29:19 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:29:19 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:29:19 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:29:19.874 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:29:20 np0005548731 nova_compute[232433]: 2025-12-06 08:29:20.090 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:29:20 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:29:20 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:29:20 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:29:20.285 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:29:21 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:29:21 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 03:29:21 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:29:21.878 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 03:29:22 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:29:22 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:29:22 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:29:22.289 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:29:22 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:29:23 np0005548731 nova_compute[232433]: 2025-12-06 08:29:23.735 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:29:23 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:29:23 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:29:23 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:29:23.881 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:29:24 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:29:24 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:29:24 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:29:24.292 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:29:24 np0005548731 ceph-mon[77458]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec  6 03:29:24 np0005548731 ceph-mon[77458]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 7200.0 total, 600.0 interval#012Cumulative writes: 18K writes, 91K keys, 18K commit groups, 1.0 writes per commit group, ingest: 0.18 GB, 0.03 MB/s#012Cumulative WAL: 18K writes, 18K syncs, 1.00 writes per sync, written: 0.18 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1550 writes, 7307 keys, 1550 commit groups, 1.0 writes per commit group, ingest: 15.53 MB, 0.03 MB/s#012Interval WAL: 1550 writes, 1550 syncs, 1.00 writes per sync, written: 0.02 GB, 0.03 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     43.3      2.62              0.33        59    0.044       0      0       0.0       0.0#012  L6      1/0   12.55 MB   0.0      0.7     0.1      0.6       0.6      0.0       0.0   5.5    105.9     91.2      6.87              1.82        58    0.118    494K    31K       0.0       0.0#012 Sum      1/0   12.55 MB   0.0      0.7     0.1      0.6       0.7      0.1       0.0   6.5     76.6     77.9      9.49              2.15       117    0.081    494K    31K       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   7.4     83.7     85.3      0.82              0.19        10    0.082     59K   2528       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.7     0.1      0.6       0.6      0.0       0.0   0.0    105.9     91.2      6.87              1.82        58    0.118    494K    31K       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     43.3      2.62              0.33        58    0.045       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.6      0.00              0.00         1    0.003       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 7200.0 total, 600.0 interval#012Flush(GB): cumulative 0.111, interval 0.009#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.72 GB write, 0.10 MB/s write, 0.71 GB read, 0.10 MB/s read, 9.5 seconds#012Interval compaction: 0.07 GB write, 0.12 MB/s write, 0.07 GB read, 0.11 MB/s read, 0.8 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x5619171151f0#2 capacity: 304.00 MB usage: 78.89 MB table_size: 0 occupancy: 18446744073709551615 collections: 13 last_copies: 0 last_secs: 0.000479 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(4468,75.41 MB,24.8066%) FilterBlock(117,1.33 MB,0.438484%) IndexBlock(117,2.14 MB,0.704299%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Dec  6 03:29:25 np0005548731 nova_compute[232433]: 2025-12-06 08:29:25.093 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:29:25 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 03:29:25 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 03:29:25 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  6 03:29:25 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 03:29:25 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  6 03:29:25 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:29:25 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 03:29:25 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:29:25.883 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 03:29:26 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:29:26 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:29:26 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:29:26.295 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:29:27 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:29:27 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:29:27 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:29:27 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:29:27.886 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:29:28 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:29:28 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:29:28 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:29:28.298 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:29:28 np0005548731 nova_compute[232433]: 2025-12-06 08:29:28.792 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:29:29 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:29:29 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:29:29 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:29:29.890 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:29:29 np0005548731 podman[348016]: 2025-12-06 08:29:29.896799548 +0000 UTC m=+0.058576351 container health_status 26c2ce9bdb83c6db5ccad7f5b2a7cb4e9354ab0235ad2f942ea42c8feda2142b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  6 03:29:29 np0005548731 podman[348018]: 2025-12-06 08:29:29.928184373 +0000 UTC m=+0.080681671 container health_status ae4fd08cdec31d6ae2a6b91ffff7deb6ca5a8375cb52ee4b685cc6f25796824e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec  6 03:29:29 np0005548731 podman[348017]: 2025-12-06 08:29:29.94934487 +0000 UTC m=+0.108110860 container health_status a118c3becf56cfbcae208065771ebf10a65162bb7b96389971293e534823fb78 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller)
Dec  6 03:29:30 np0005548731 nova_compute[232433]: 2025-12-06 08:29:30.094 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:29:30 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:29:30 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:29:30 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:29:30.301 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:29:31 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:29:31 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 03:29:31 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:29:31.892 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 03:29:32 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 03:29:32 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 03:29:32 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:29:32 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:29:32 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:29:32.304 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:29:32 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:29:33 np0005548731 nova_compute[232433]: 2025-12-06 08:29:33.795 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:29:33 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:29:33 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:29:33 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:29:33.896 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:29:34 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:29:34 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 03:29:34 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:29:34.306 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 03:29:35 np0005548731 nova_compute[232433]: 2025-12-06 08:29:35.096 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:29:35 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:29:35 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:29:35 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:29:35.898 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:29:36 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:29:36 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:29:36 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:29:36.309 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:29:37 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:29:37 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:29:37 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:29:37 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:29:37.900 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:29:38 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:29:38 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:29:38 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:29:38.312 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:29:38 np0005548731 nova_compute[232433]: 2025-12-06 08:29:38.798 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:29:39 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:29:39 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:29:39 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:29:39.902 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:29:40 np0005548731 nova_compute[232433]: 2025-12-06 08:29:40.098 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:29:40 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:29:40 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:29:40 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:29:40.315 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:29:41 np0005548731 nova_compute[232433]: 2025-12-06 08:29:41.105 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:29:41 np0005548731 nova_compute[232433]: 2025-12-06 08:29:41.105 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Dec  6 03:29:41 np0005548731 nova_compute[232433]: 2025-12-06 08:29:41.145 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Dec  6 03:29:41 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:29:41 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:29:41 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:29:41.905 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:29:42 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:29:42 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:29:42 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:29:42.318 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:29:42 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:29:43 np0005548731 nova_compute[232433]: 2025-12-06 08:29:43.801 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:29:43 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:29:43 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:29:43 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:29:43.908 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:29:44 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:29:44 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:29:44 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:29:44.321 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:29:44 np0005548731 systemd-logind[794]: New session 73 of user zuul.
Dec  6 03:29:44 np0005548731 systemd[1]: Started Session 73 of User zuul.
Dec  6 03:29:45 np0005548731 nova_compute[232433]: 2025-12-06 08:29:45.100 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:29:45 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:29:45 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:29:45 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:29:45.909 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:29:46 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:29:46 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:29:46 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:29:46.323 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:29:47 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:29:47 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:29:47 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:29:47 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:29:47.911 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:29:48 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:29:48 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:29:48 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:29:48.326 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:29:48 np0005548731 nova_compute[232433]: 2025-12-06 08:29:48.802 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:29:48 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "status"} v 0) v1
Dec  6 03:29:48 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3120292559' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Dec  6 03:29:49 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:29:49 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:29:49 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:29:49.913 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:29:50 np0005548731 nova_compute[232433]: 2025-12-06 08:29:50.104 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:29:50 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:29:50 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:29:50 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:29:50.328 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:29:51 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:29:51 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:29:51 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:29:51.917 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:29:51 np0005548731 ovs-vsctl[348477]: ovs|00001|db_ctl_base|ERR|no key "dpdk-init" in Open_vSwitch record "." column other_config
Dec  6 03:29:52 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:29:52 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:29:52 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:29:52.332 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:29:52 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:29:53 np0005548731 virtqemud[232080]: Failed to connect socket to '/var/run/libvirt/virtnetworkd-sock-ro': No such file or directory
Dec  6 03:29:53 np0005548731 virtqemud[232080]: Failed to connect socket to '/var/run/libvirt/virtnwfilterd-sock-ro': No such file or directory
Dec  6 03:29:53 np0005548731 virtqemud[232080]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Dec  6 03:29:53 np0005548731 ceph-mds[82074]: mds.cephfs.compute-2.tjfgow asok_command: cache status {prefix=cache status} (starting...)
Dec  6 03:29:53 np0005548731 ceph-mds[82074]: mds.cephfs.compute-2.tjfgow Can't run that command on an inactive MDS!
Dec  6 03:29:53 np0005548731 ceph-mds[82074]: mds.cephfs.compute-2.tjfgow asok_command: client ls {prefix=client ls} (starting...)
Dec  6 03:29:53 np0005548731 ceph-mds[82074]: mds.cephfs.compute-2.tjfgow Can't run that command on an inactive MDS!
Dec  6 03:29:53 np0005548731 nova_compute[232433]: 2025-12-06 08:29:53.804 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:29:53 np0005548731 lvm[348805]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec  6 03:29:53 np0005548731 lvm[348805]: VG ceph_vg0 finished
Dec  6 03:29:53 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:29:53 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:29:53 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:29:53.919 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:29:54 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:29:54 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:29:54 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:29:54.335 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:29:54 np0005548731 ceph-mds[82074]: mds.cephfs.compute-2.tjfgow asok_command: damage ls {prefix=damage ls} (starting...)
Dec  6 03:29:54 np0005548731 ceph-mds[82074]: mds.cephfs.compute-2.tjfgow Can't run that command on an inactive MDS!
Dec  6 03:29:54 np0005548731 ceph-mds[82074]: mds.cephfs.compute-2.tjfgow asok_command: dump loads {prefix=dump loads} (starting...)
Dec  6 03:29:54 np0005548731 ceph-mds[82074]: mds.cephfs.compute-2.tjfgow Can't run that command on an inactive MDS!
Dec  6 03:29:54 np0005548731 ceph-mds[82074]: mds.cephfs.compute-2.tjfgow asok_command: dump tree {prefix=dump tree,root=/} (starting...)
Dec  6 03:29:54 np0005548731 ceph-mds[82074]: mds.cephfs.compute-2.tjfgow Can't run that command on an inactive MDS!
Dec  6 03:29:54 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "report"} v 0) v1
Dec  6 03:29:54 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/201956814' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Dec  6 03:29:54 np0005548731 ceph-mds[82074]: mds.cephfs.compute-2.tjfgow asok_command: dump_blocked_ops {prefix=dump_blocked_ops} (starting...)
Dec  6 03:29:54 np0005548731 ceph-mds[82074]: mds.cephfs.compute-2.tjfgow Can't run that command on an inactive MDS!
Dec  6 03:29:54 np0005548731 ceph-mds[82074]: mds.cephfs.compute-2.tjfgow asok_command: dump_historic_ops {prefix=dump_historic_ops} (starting...)
Dec  6 03:29:54 np0005548731 ceph-mds[82074]: mds.cephfs.compute-2.tjfgow Can't run that command on an inactive MDS!
Dec  6 03:29:55 np0005548731 ceph-mds[82074]: mds.cephfs.compute-2.tjfgow asok_command: dump_historic_ops_by_duration {prefix=dump_historic_ops_by_duration} (starting...)
Dec  6 03:29:55 np0005548731 ceph-mds[82074]: mds.cephfs.compute-2.tjfgow Can't run that command on an inactive MDS!
Dec  6 03:29:55 np0005548731 nova_compute[232433]: 2025-12-06 08:29:55.106 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:29:55 np0005548731 ceph-mds[82074]: mds.cephfs.compute-2.tjfgow asok_command: dump_ops_in_flight {prefix=dump_ops_in_flight} (starting...)
Dec  6 03:29:55 np0005548731 ceph-mds[82074]: mds.cephfs.compute-2.tjfgow Can't run that command on an inactive MDS!
Dec  6 03:29:55 np0005548731 ceph-mds[82074]: mds.cephfs.compute-2.tjfgow asok_command: get subtrees {prefix=get subtrees} (starting...)
Dec  6 03:29:55 np0005548731 ceph-mds[82074]: mds.cephfs.compute-2.tjfgow Can't run that command on an inactive MDS!
Dec  6 03:29:55 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "config log"} v 0) v1
Dec  6 03:29:55 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1262805181' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Dec  6 03:29:55 np0005548731 ceph-mds[82074]: mds.cephfs.compute-2.tjfgow asok_command: ops {prefix=ops} (starting...)
Dec  6 03:29:55 np0005548731 ceph-mds[82074]: mds.cephfs.compute-2.tjfgow Can't run that command on an inactive MDS!
Dec  6 03:29:55 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "log last", "channel": "cephadm"} v 0) v1
Dec  6 03:29:55 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2945805798' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Dec  6 03:29:55 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "config-key dump"} v 0) v1
Dec  6 03:29:55 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1809216193' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Dec  6 03:29:55 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:29:55 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:29:55 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:29:55.921 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:29:56 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mgr dump"} v 0) v1
Dec  6 03:29:56 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2980343332' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Dec  6 03:29:56 np0005548731 ceph-mds[82074]: mds.cephfs.compute-2.tjfgow asok_command: session ls {prefix=session ls} (starting...)
Dec  6 03:29:56 np0005548731 ceph-mds[82074]: mds.cephfs.compute-2.tjfgow Can't run that command on an inactive MDS!
Dec  6 03:29:56 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:29:56 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 03:29:56 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:29:56.338 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 03:29:56 np0005548731 ceph-mds[82074]: mds.cephfs.compute-2.tjfgow asok_command: status {prefix=status} (starting...)
Dec  6 03:29:56 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "features"} v 0) v1
Dec  6 03:29:56 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/589179532' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Dec  6 03:29:57 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mgr module ls"} v 0) v1
Dec  6 03:29:57 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1722038647' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Dec  6 03:29:57 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "health", "detail": "detail"} v 0) v1
Dec  6 03:29:57 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3902118403' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Dec  6 03:29:57 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Dec  6 03:29:57 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3005104597' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Dec  6 03:29:57 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:29:57 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mgr stat"} v 0) v1
Dec  6 03:29:57 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3607517829' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Dec  6 03:29:57 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:29:57 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:29:57 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:29:57.923 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:29:58 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} v 0) v1
Dec  6 03:29:58 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3947966807' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Dec  6 03:29:58 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mgr versions"} v 0) v1
Dec  6 03:29:58 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3263639216' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Dec  6 03:29:58 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:29:58 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:29:58 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:29:58.340 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:29:58 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} v 0) v1
Dec  6 03:29:58 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1373369456' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Dec  6 03:29:58 np0005548731 nova_compute[232433]: 2025-12-06 08:29:58.810 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:29:59 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mgr dump"} v 0) v1
Dec  6 03:29:59 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/4289010433' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Dec  6 03:29:59 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mgr metadata"} v 0) v1
Dec  6 03:29:59 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3078248732' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: mgrc ms_handle_reset ms_handle_reset con 0x5612d4428800
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/798720280
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/798720280,v1:192.168.122.100:6801/798720280]
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: mgrc handle_mgr_configure stats_period=5
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 509165568 unmapped: 90439680 heap: 599605248 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 410 heartbeat osd_stat(store_statfs(0x19a4f2000/0x0/0x1bfc00000, data 0x4913228/0x4b2b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x20bdf9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 509165568 unmapped: 90439680 heap: 599605248 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 509173760 unmapped: 90431488 heap: 599605248 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.243182182s of 10.731412888s, submitted: 255
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 410 ms_handle_reset con 0x5612d3f4f400 session 0x5612d2bb5c20
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5330693 data_alloc: 234881024 data_used: 28033024
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 410 ms_handle_reset con 0x5612d2bcb400 session 0x5612d3462960
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 410 ms_handle_reset con 0x5612d0bc7400 session 0x5612dbe0f0e0
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 410 ms_handle_reset con 0x5612d2bb3000 session 0x5612d3446b40
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 410 ms_handle_reset con 0x5612dea3f800 session 0x5612d4e832c0
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 499539968 unmapped: 100065280 heap: 599605248 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 499539968 unmapped: 100065280 heap: 599605248 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 410 heartbeat osd_stat(store_statfs(0x19c3cf000/0x0/0x1bfc00000, data 0x25c71b6/0x27dd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x20bdf9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 499539968 unmapped: 100065280 heap: 599605248 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 410 ms_handle_reset con 0x5612d1289800 session 0x5612d0b63a40
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 495517696 unmapped: 104087552 heap: 599605248 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 410 ms_handle_reset con 0x5612d9185800 session 0x5612d0a9b0e0
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 495517696 unmapped: 104087552 heap: 599605248 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4866544 data_alloc: 218103808 data_used: 8429568
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 495517696 unmapped: 104087552 heap: 599605248 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 495517696 unmapped: 104087552 heap: 599605248 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 495525888 unmapped: 104079360 heap: 599605248 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 410 heartbeat osd_stat(store_statfs(0x19d09f000/0x0/0x1bfc00000, data 0x1d691b6/0x1f7f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x20bdf9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 495525888 unmapped: 104079360 heap: 599605248 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 495525888 unmapped: 104079360 heap: 599605248 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4866544 data_alloc: 218103808 data_used: 8429568
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 495525888 unmapped: 104079360 heap: 599605248 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 410 heartbeat osd_stat(store_statfs(0x19d09f000/0x0/0x1bfc00000, data 0x1d691b6/0x1f7f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x20bdf9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 495525888 unmapped: 104079360 heap: 599605248 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 410 heartbeat osd_stat(store_statfs(0x19d09f000/0x0/0x1bfc00000, data 0x1d691b6/0x1f7f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x20bdf9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 495525888 unmapped: 104079360 heap: 599605248 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 495525888 unmapped: 104079360 heap: 599605248 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 495525888 unmapped: 104079360 heap: 599605248 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4866544 data_alloc: 218103808 data_used: 8429568
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 495525888 unmapped: 104079360 heap: 599605248 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 495525888 unmapped: 104079360 heap: 599605248 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 410 heartbeat osd_stat(store_statfs(0x19d09f000/0x0/0x1bfc00000, data 0x1d691b6/0x1f7f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x20bdf9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 495525888 unmapped: 104079360 heap: 599605248 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 410 heartbeat osd_stat(store_statfs(0x19d09f000/0x0/0x1bfc00000, data 0x1d691b6/0x1f7f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x20bdf9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 18.026536942s of 18.391403198s, submitted: 39
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 410 ms_handle_reset con 0x5612d9182000 session 0x5612d2e034a0
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 410 heartbeat osd_stat(store_statfs(0x19d09f000/0x0/0x1bfc00000, data 0x1d691b6/0x1f7f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x20bdf9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 495534080 unmapped: 104071168 heap: 599605248 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 410 ms_handle_reset con 0x5612d7413000 session 0x5612d06330e0
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 495534080 unmapped: 104071168 heap: 599605248 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4790868 data_alloc: 218103808 data_used: 7065600
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 495534080 unmapped: 104071168 heap: 599605248 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 410 heartbeat osd_stat(store_statfs(0x19d84e000/0x0/0x1bfc00000, data 0x15ba1b6/0x17d0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x20bdf9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 495534080 unmapped: 104071168 heap: 599605248 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 495517696 unmapped: 104087552 heap: 599605248 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 495452160 unmapped: 104153088 heap: 599605248 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 495452160 unmapped: 104153088 heap: 599605248 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4790868 data_alloc: 218103808 data_used: 7065600
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 495452160 unmapped: 104153088 heap: 599605248 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 495386624 unmapped: 104218624 heap: 599605248 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 410 heartbeat osd_stat(store_statfs(0x19d84e000/0x0/0x1bfc00000, data 0x15ba1b6/0x17d0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x20bdf9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 495394816 unmapped: 104210432 heap: 599605248 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.635267258s of 10.401024818s, submitted: 144
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 495386624 unmapped: 104218624 heap: 599605248 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 495394816 unmapped: 104210432 heap: 599605248 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4790868 data_alloc: 218103808 data_used: 7065600
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 495419392 unmapped: 104185856 heap: 599605248 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 495378432 unmapped: 104226816 heap: 599605248 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 495378432 unmapped: 104226816 heap: 599605248 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 410 heartbeat osd_stat(store_statfs(0x19d84e000/0x0/0x1bfc00000, data 0x15ba1b6/0x17d0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x20bdf9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 495378432 unmapped: 104226816 heap: 599605248 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 495378432 unmapped: 104226816 heap: 599605248 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4790868 data_alloc: 218103808 data_used: 7065600
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 410 heartbeat osd_stat(store_statfs(0x19d84e000/0x0/0x1bfc00000, data 0x15ba1b6/0x17d0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x20bdf9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 410 heartbeat osd_stat(store_statfs(0x19d84e000/0x0/0x1bfc00000, data 0x15ba1b6/0x17d0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x20bdf9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 495378432 unmapped: 104226816 heap: 599605248 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 495378432 unmapped: 104226816 heap: 599605248 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 495386624 unmapped: 104218624 heap: 599605248 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 495386624 unmapped: 104218624 heap: 599605248 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 410 heartbeat osd_stat(store_statfs(0x19d84e000/0x0/0x1bfc00000, data 0x15ba1b6/0x17d0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x20bdf9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 495386624 unmapped: 104218624 heap: 599605248 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4790868 data_alloc: 218103808 data_used: 7065600
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 495386624 unmapped: 104218624 heap: 599605248 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 495386624 unmapped: 104218624 heap: 599605248 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 495386624 unmapped: 104218624 heap: 599605248 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 410 heartbeat osd_stat(store_statfs(0x19d84e000/0x0/0x1bfc00000, data 0x15ba1b6/0x17d0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x20bdf9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 495386624 unmapped: 104218624 heap: 599605248 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 495386624 unmapped: 104218624 heap: 599605248 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4790868 data_alloc: 218103808 data_used: 7065600
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 410 heartbeat osd_stat(store_statfs(0x19d84e000/0x0/0x1bfc00000, data 0x15ba1b6/0x17d0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x20bdf9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 495394816 unmapped: 104210432 heap: 599605248 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 495394816 unmapped: 104210432 heap: 599605248 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 410 heartbeat osd_stat(store_statfs(0x19d84e000/0x0/0x1bfc00000, data 0x15ba1b6/0x17d0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x20bdf9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 495394816 unmapped: 104210432 heap: 599605248 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 495394816 unmapped: 104210432 heap: 599605248 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 495394816 unmapped: 104210432 heap: 599605248 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4790868 data_alloc: 218103808 data_used: 7065600
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 410 heartbeat osd_stat(store_statfs(0x19d84e000/0x0/0x1bfc00000, data 0x15ba1b6/0x17d0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x20bdf9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 410 heartbeat osd_stat(store_statfs(0x19d84e000/0x0/0x1bfc00000, data 0x15ba1b6/0x17d0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x20bdf9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 495394816 unmapped: 104210432 heap: 599605248 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 495394816 unmapped: 104210432 heap: 599605248 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 410 heartbeat osd_stat(store_statfs(0x19d84e000/0x0/0x1bfc00000, data 0x15ba1b6/0x17d0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x20bdf9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 495394816 unmapped: 104210432 heap: 599605248 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 495403008 unmapped: 104202240 heap: 599605248 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 495403008 unmapped: 104202240 heap: 599605248 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4790868 data_alloc: 218103808 data_used: 7065600
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 410 heartbeat osd_stat(store_statfs(0x19d84e000/0x0/0x1bfc00000, data 0x15ba1b6/0x17d0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x20bdf9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 495403008 unmapped: 104202240 heap: 599605248 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 495403008 unmapped: 104202240 heap: 599605248 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 495403008 unmapped: 104202240 heap: 599605248 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 410 heartbeat osd_stat(store_statfs(0x19d84e000/0x0/0x1bfc00000, data 0x15ba1b6/0x17d0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x20bdf9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 495403008 unmapped: 104202240 heap: 599605248 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 495403008 unmapped: 104202240 heap: 599605248 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4790868 data_alloc: 218103808 data_used: 7065600
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 495403008 unmapped: 104202240 heap: 599605248 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 495419392 unmapped: 104185856 heap: 599605248 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 495419392 unmapped: 104185856 heap: 599605248 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 410 heartbeat osd_stat(store_statfs(0x19d84e000/0x0/0x1bfc00000, data 0x15ba1b6/0x17d0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x20bdf9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 495419392 unmapped: 104185856 heap: 599605248 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 495419392 unmapped: 104185856 heap: 599605248 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4790868 data_alloc: 218103808 data_used: 7065600
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 495419392 unmapped: 104185856 heap: 599605248 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 410 heartbeat osd_stat(store_statfs(0x19d84e000/0x0/0x1bfc00000, data 0x15ba1b6/0x17d0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x20bdf9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 495419392 unmapped: 104185856 heap: 599605248 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 495419392 unmapped: 104185856 heap: 599605248 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 410 ms_handle_reset con 0x5612d3b8c800 session 0x5612d5b0e960
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 410 ms_handle_reset con 0x5612d2bb2c00 session 0x5612d0b701e0
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 410 ms_handle_reset con 0x5612d3694400 session 0x5612d0aea1e0
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 410 ms_handle_reset con 0x5612d157c000 session 0x5612d195f0e0
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 39.349212646s of 40.064338684s, submitted: 128
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 495427584 unmapped: 104177664 heap: 599605248 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 410 heartbeat osd_stat(store_statfs(0x19d84e000/0x0/0x1bfc00000, data 0x15ba1b6/0x17d0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x20bdf9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 495435776 unmapped: 104169472 heap: 599605248 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4790868 data_alloc: 218103808 data_used: 7065600
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 495779840 unmapped: 103825408 heap: 599605248 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 495779840 unmapped: 103825408 heap: 599605248 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 410 ms_handle_reset con 0x5612d27b3400 session 0x5612e1712f00
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 410 ms_handle_reset con 0x5612d27b3400 session 0x5612e17123c0
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 410 ms_handle_reset con 0x5612d157c000 session 0x5612d0b630e0
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 410 ms_handle_reset con 0x5612d2bb2c00 session 0x5612d0b70d20
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 410 ms_handle_reset con 0x5612d3694400 session 0x5612d0a55a40
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 495779840 unmapped: 103825408 heap: 599605248 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 495779840 unmapped: 103825408 heap: 599605248 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 410 heartbeat osd_stat(store_statfs(0x19ce88000/0x0/0x1bfc00000, data 0x1f7f218/0x2196000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x20bdf9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 495779840 unmapped: 103825408 heap: 599605248 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 410 heartbeat osd_stat(store_statfs(0x19ce88000/0x0/0x1bfc00000, data 0x1f7f218/0x2196000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x20bdf9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4872736 data_alloc: 218103808 data_used: 7065600
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 495779840 unmapped: 103825408 heap: 599605248 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 495779840 unmapped: 103825408 heap: 599605248 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 495788032 unmapped: 103817216 heap: 599605248 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 495788032 unmapped: 103817216 heap: 599605248 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 410 heartbeat osd_stat(store_statfs(0x19ce88000/0x0/0x1bfc00000, data 0x1f7f218/0x2196000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x20bdf9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 495788032 unmapped: 103817216 heap: 599605248 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4872736 data_alloc: 218103808 data_used: 7065600
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.646504402s of 11.787144661s, submitted: 46
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 410 ms_handle_reset con 0x5612d9182800 session 0x5612d2bb54a0
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 495730688 unmapped: 103874560 heap: 599605248 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 410 ms_handle_reset con 0x5612dea3f800 session 0x5612d0b71e00
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 410 heartbeat osd_stat(store_statfs(0x19ce88000/0x0/0x1bfc00000, data 0x1f7f218/0x2196000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x20bdf9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 495738880 unmapped: 103866368 heap: 599605248 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 495747072 unmapped: 103858176 heap: 599605248 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 495755264 unmapped: 103849984 heap: 599605248 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 495755264 unmapped: 103849984 heap: 599605248 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4944049 data_alloc: 234881024 data_used: 16932864
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 495755264 unmapped: 103849984 heap: 599605248 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 410 heartbeat osd_stat(store_statfs(0x19ce88000/0x0/0x1bfc00000, data 0x1f7f218/0x2196000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x20bdf9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 495755264 unmapped: 103849984 heap: 599605248 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 495755264 unmapped: 103849984 heap: 599605248 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 495755264 unmapped: 103849984 heap: 599605248 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 410 ms_handle_reset con 0x5612d1598400 session 0x5612d29261e0
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 410 ms_handle_reset con 0x5612d2bcb000 session 0x5612d31b1680
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 410 ms_handle_reset con 0x5612d1b50400 session 0x5612daac3680
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 495755264 unmapped: 103849984 heap: 599605248 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 410 ms_handle_reset con 0x5612d27b2400 session 0x5612d18e8b40
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4944049 data_alloc: 234881024 data_used: 16932864
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 410 ms_handle_reset con 0x5612d1598400 session 0x5612cff53e00
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 496369664 unmapped: 103235584 heap: 599605248 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 410 heartbeat osd_stat(store_statfs(0x19c62f000/0x0/0x1bfc00000, data 0x27d8218/0x29ef000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x20bdf9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 496369664 unmapped: 103235584 heap: 599605248 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 496369664 unmapped: 103235584 heap: 599605248 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 410 heartbeat osd_stat(store_statfs(0x19c62f000/0x0/0x1bfc00000, data 0x27d8218/0x29ef000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x20bdf9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.330034256s of 13.415828705s, submitted: 28
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 500817920 unmapped: 98787328 heap: 599605248 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 499785728 unmapped: 99819520 heap: 599605248 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5118151 data_alloc: 234881024 data_used: 18571264
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 501276672 unmapped: 98328576 heap: 599605248 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 410 ms_handle_reset con 0x5612d2bca400 session 0x5612d0aead20
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 501276672 unmapped: 98328576 heap: 599605248 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 410 ms_handle_reset con 0x5612d7ff7000 session 0x5612d31b0b40
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 501276672 unmapped: 98328576 heap: 599605248 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 410 ms_handle_reset con 0x5612d1b51c00 session 0x5612d2e02000
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 410 ms_handle_reset con 0x5612d2bb3800 session 0x5612d077c000
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 410 heartbeat osd_stat(store_statfs(0x19b9a0000/0x0/0x1bfc00000, data 0x346623b/0x367e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x20bdf9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 500645888 unmapped: 98959360 heap: 599605248 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 500645888 unmapped: 98959360 heap: 599605248 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5130774 data_alloc: 234881024 data_used: 18944000
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 500670464 unmapped: 98934784 heap: 599605248 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 410 heartbeat osd_stat(store_statfs(0x19b9a0000/0x0/0x1bfc00000, data 0x346623b/0x367e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x20bdf9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 500670464 unmapped: 98934784 heap: 599605248 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 500670464 unmapped: 98934784 heap: 599605248 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 500670464 unmapped: 98934784 heap: 599605248 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 500670464 unmapped: 98934784 heap: 599605248 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5183290 data_alloc: 234881024 data_used: 24977408
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 500670464 unmapped: 98934784 heap: 599605248 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.652697563s of 12.918767929s, submitted: 134
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 500670464 unmapped: 98934784 heap: 599605248 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 410 heartbeat osd_stat(store_statfs(0x19b97f000/0x0/0x1bfc00000, data 0x348723b/0x369f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x20bdf9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 500670464 unmapped: 98934784 heap: 599605248 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 500670464 unmapped: 98934784 heap: 599605248 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 410 heartbeat osd_stat(store_statfs(0x19b979000/0x0/0x1bfc00000, data 0x348d23b/0x36a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x20bdf9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 500670464 unmapped: 98934784 heap: 599605248 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5183346 data_alloc: 234881024 data_used: 24977408
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 500670464 unmapped: 98934784 heap: 599605248 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 410 heartbeat osd_stat(store_statfs(0x19b979000/0x0/0x1bfc00000, data 0x348d23b/0x36a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x20bdf9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 501407744 unmapped: 98197504 heap: 599605248 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 410 heartbeat osd_stat(store_statfs(0x19a919000/0x0/0x1bfc00000, data 0x44e523b/0x46fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x20bdf9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 500613120 unmapped: 98992128 heap: 599605248 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 500613120 unmapped: 98992128 heap: 599605248 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 501678080 unmapped: 97927168 heap: 599605248 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5329452 data_alloc: 234881024 data_used: 25567232
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 501678080 unmapped: 97927168 heap: 599605248 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 501710848 unmapped: 97894400 heap: 599605248 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 410 heartbeat osd_stat(store_statfs(0x19a86c000/0x0/0x1bfc00000, data 0x459a23b/0x47b2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x20bdf9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 501710848 unmapped: 97894400 heap: 599605248 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.820291519s of 12.151731491s, submitted: 151
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 410 ms_handle_reset con 0x5612d27b3400 session 0x5612d0773e00
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 410 ms_handle_reset con 0x5612d2bb2c00 session 0x5612d31b0d20
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 501710848 unmapped: 97894400 heap: 599605248 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 410 heartbeat osd_stat(store_statfs(0x19a867000/0x0/0x1bfc00000, data 0x459f23b/0x47b7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x20bdf9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 501710848 unmapped: 97894400 heap: 599605248 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5325380 data_alloc: 234881024 data_used: 25583616
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 501710848 unmapped: 97894400 heap: 599605248 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 501710848 unmapped: 97894400 heap: 599605248 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 501710848 unmapped: 97894400 heap: 599605248 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 501710848 unmapped: 97894400 heap: 599605248 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 410 heartbeat osd_stat(store_statfs(0x19a842000/0x0/0x1bfc00000, data 0x45c423b/0x47dc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x20bdf9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 501719040 unmapped: 97886208 heap: 599605248 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5327324 data_alloc: 234881024 data_used: 25583616
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 501719040 unmapped: 97886208 heap: 599605248 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 501719040 unmapped: 97886208 heap: 599605248 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 410 heartbeat osd_stat(store_statfs(0x19a842000/0x0/0x1bfc00000, data 0x45c423b/0x47dc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x20bdf9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 501719040 unmapped: 97886208 heap: 599605248 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 410 ms_handle_reset con 0x5612d0a61400 session 0x5612d0a55a40
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 501727232 unmapped: 97878016 heap: 599605248 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 410 handle_osd_map epochs [410,411], i have 410, src has [1,411]
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.250429153s of 10.288573265s, submitted: 12
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 411 handle_osd_map epochs [411,411], i have 411, src has [1,411]
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 411 ms_handle_reset con 0x5612d4653000 session 0x5612d2e03e00
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 411 ms_handle_reset con 0x5612d2e43c00 session 0x5612d30de5a0
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 411 ms_handle_reset con 0x5612d0a61400 session 0x5612d135b680
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 501751808 unmapped: 97853440 heap: 599605248 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5332626 data_alloc: 234881024 data_used: 25587712
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 411 heartbeat osd_stat(store_statfs(0x19a83d000/0x0/0x1bfc00000, data 0x45c5f69/0x47e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x20bdf9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 411 ms_handle_reset con 0x5612d27b3400 session 0x5612e1713e00
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 501760000 unmapped: 97845248 heap: 599605248 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 411 ms_handle_reset con 0x5612d2bb2c00 session 0x5612d2bb50e0
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 501760000 unmapped: 97845248 heap: 599605248 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 501776384 unmapped: 97828864 heap: 599605248 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 411 heartbeat osd_stat(store_statfs(0x19a83b000/0x0/0x1bfc00000, data 0x45c8f69/0x47e3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x20bdf9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 501776384 unmapped: 97828864 heap: 599605248 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 501776384 unmapped: 97828864 heap: 599605248 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5333334 data_alloc: 234881024 data_used: 25665536
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 501776384 unmapped: 97828864 heap: 599605248 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 411 heartbeat osd_stat(store_statfs(0x19a83b000/0x0/0x1bfc00000, data 0x45c8f69/0x47e3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x20bdf9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 501776384 unmapped: 97828864 heap: 599605248 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 501776384 unmapped: 97828864 heap: 599605248 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 501776384 unmapped: 97828864 heap: 599605248 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 501776384 unmapped: 97828864 heap: 599605248 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5333494 data_alloc: 234881024 data_used: 25669632
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.692729950s of 11.723511696s, submitted: 14
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 411 ms_handle_reset con 0x5612d4653000 session 0x5612d27a1c20
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 411 ms_handle_reset con 0x5612d297b400 session 0x5612d29272c0
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 501776384 unmapped: 97828864 heap: 599605248 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 411 heartbeat osd_stat(store_statfs(0x19a83b000/0x0/0x1bfc00000, data 0x45c8f69/0x47e3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x20bdf9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 501776384 unmapped: 97828864 heap: 599605248 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 411 heartbeat osd_stat(store_statfs(0x19a83b000/0x0/0x1bfc00000, data 0x45c8f69/0x47e3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x20bdf9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 501776384 unmapped: 97828864 heap: 599605248 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 411 heartbeat osd_stat(store_statfs(0x19a83b000/0x0/0x1bfc00000, data 0x45c8f69/0x47e3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x20bdf9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 501776384 unmapped: 97828864 heap: 599605248 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 501776384 unmapped: 97828864 heap: 599605248 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5333858 data_alloc: 234881024 data_used: 25677824
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 501776384 unmapped: 97828864 heap: 599605248 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 501776384 unmapped: 97828864 heap: 599605248 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 411 heartbeat osd_stat(store_statfs(0x19a829000/0x0/0x1bfc00000, data 0x45daf69/0x47f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x20bdf9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 411 ms_handle_reset con 0x5612d2bcb800 session 0x5612d480af00
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 411 ms_handle_reset con 0x5612d8e80400 session 0x5612d30cb680
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 501841920 unmapped: 97763328 heap: 599605248 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 411 handle_osd_map epochs [411,412], i have 411, src has [1,412]
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 412 ms_handle_reset con 0x5612d3694800 session 0x5612d2927860
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 501735424 unmapped: 97869824 heap: 599605248 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 412 ms_handle_reset con 0x5612d7ffa400 session 0x5612d0a9be00
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 501743616 unmapped: 97861632 heap: 599605248 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5404482 data_alloc: 234881024 data_used: 25686016
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 501743616 unmapped: 97861632 heap: 599605248 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 412 ms_handle_reset con 0x5612d1598400 session 0x5612d30b7e00
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 412 ms_handle_reset con 0x5612d1b51c00 session 0x5612d3a1de00
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 412 heartbeat osd_stat(store_statfs(0x19a826000/0x0/0x1bfc00000, data 0x45dcc27/0x47f7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x20bdf9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.802115440s of 10.973855972s, submitted: 67
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 412 ms_handle_reset con 0x5612d2bcb800 session 0x5612d31b0960
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 502824960 unmapped: 96780288 heap: 599605248 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 502824960 unmapped: 96780288 heap: 599605248 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 502824960 unmapped: 96780288 heap: 599605248 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 502824960 unmapped: 96780288 heap: 599605248 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5146886 data_alloc: 234881024 data_used: 18182144
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 502824960 unmapped: 96780288 heap: 599605248 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 412 handle_osd_map epochs [412,413], i have 412, src has [1,413]
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 502833152 unmapped: 96772096 heap: 599605248 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 413 heartbeat osd_stat(store_statfs(0x19c1dd000/0x0/0x1bfc00000, data 0x2c257b6/0x2e40000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x20bdf9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 502833152 unmapped: 96772096 heap: 599605248 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 502833152 unmapped: 96772096 heap: 599605248 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 502833152 unmapped: 96772096 heap: 599605248 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5151044 data_alloc: 234881024 data_used: 18194432
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 413 heartbeat osd_stat(store_statfs(0x19c1dd000/0x0/0x1bfc00000, data 0x2c257b6/0x2e40000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x20bdf9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 502833152 unmapped: 96772096 heap: 599605248 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 502833152 unmapped: 96772096 heap: 599605248 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.657542229s of 10.729125023s, submitted: 43
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 502833152 unmapped: 96772096 heap: 599605248 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 502833152 unmapped: 96772096 heap: 599605248 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 502841344 unmapped: 96763904 heap: 599605248 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5168338 data_alloc: 234881024 data_used: 20332544
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 502841344 unmapped: 96763904 heap: 599605248 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 413 heartbeat osd_stat(store_statfs(0x19c1d8000/0x0/0x1bfc00000, data 0x2c257b6/0x2e40000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x20bdf9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 502841344 unmapped: 96763904 heap: 599605248 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 502841344 unmapped: 96763904 heap: 599605248 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 502841344 unmapped: 96763904 heap: 599605248 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 502841344 unmapped: 96763904 heap: 599605248 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5167522 data_alloc: 234881024 data_used: 20348928
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 413 heartbeat osd_stat(store_statfs(0x19c1c4000/0x0/0x1bfc00000, data 0x2c257b6/0x2e40000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x20bdf9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 502841344 unmapped: 96763904 heap: 599605248 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 413 ms_handle_reset con 0x5612d803f800 session 0x5612d314e5a0
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 413 ms_handle_reset con 0x5612d803d800 session 0x5612d34472c0
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 502849536 unmapped: 96755712 heap: 599605248 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.351189613s of 10.450187683s, submitted: 45
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 494706688 unmapped: 104898560 heap: 599605248 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 494723072 unmapped: 104882176 heap: 599605248 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 413 ms_handle_reset con 0x5612d3b8c000 session 0x5612d30b65a0
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 494723072 unmapped: 104882176 heap: 599605248 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4844360 data_alloc: 218103808 data_used: 7102464
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 494723072 unmapped: 104882176 heap: 599605248 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 413 heartbeat osd_stat(store_statfs(0x19d844000/0x0/0x1bfc00000, data 0x15bf754/0x17d9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x20bdf9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 494723072 unmapped: 104882176 heap: 599605248 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 494723072 unmapped: 104882176 heap: 599605248 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 494723072 unmapped: 104882176 heap: 599605248 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 413 heartbeat osd_stat(store_statfs(0x19d844000/0x0/0x1bfc00000, data 0x15bf754/0x17d9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x20bdf9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 413 heartbeat osd_stat(store_statfs(0x19d844000/0x0/0x1bfc00000, data 0x15bf754/0x17d9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x20bdf9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 494706688 unmapped: 104898560 heap: 599605248 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4844360 data_alloc: 218103808 data_used: 7102464
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 494706688 unmapped: 104898560 heap: 599605248 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 494706688 unmapped: 104898560 heap: 599605248 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 413 heartbeat osd_stat(store_statfs(0x19d844000/0x0/0x1bfc00000, data 0x15bf754/0x17d9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x20bdf9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 413 ms_handle_reset con 0x5612d3f51000 session 0x5612d0a6be00
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 413 ms_handle_reset con 0x5612d9183400 session 0x5612d2647a40
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 413 ms_handle_reset con 0x5612d2e42c00 session 0x5612d30caf00
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 413 ms_handle_reset con 0x5612dea3c400 session 0x5612d31b01e0
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.112295151s of 10.172833443s, submitted: 22
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 494690304 unmapped: 104914944 heap: 599605248 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 413 ms_handle_reset con 0x5612ddc40800 session 0x5612d30ca5a0
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 413 ms_handle_reset con 0x5612d2e42c00 session 0x5612d3447c20
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 413 ms_handle_reset con 0x5612d3f51000 session 0x5612d34474a0
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 413 ms_handle_reset con 0x5612d9183400 session 0x5612d3446780
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 413 ms_handle_reset con 0x5612dea3c400 session 0x5612d3447680
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 494690304 unmapped: 104914944 heap: 599605248 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 494690304 unmapped: 104914944 heap: 599605248 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4932944 data_alloc: 218103808 data_used: 7102464
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 413 heartbeat osd_stat(store_statfs(0x19cd1e000/0x0/0x1bfc00000, data 0x20e5764/0x2300000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x20bdf9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 494690304 unmapped: 104914944 heap: 599605248 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 494690304 unmapped: 104914944 heap: 599605248 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 494690304 unmapped: 104914944 heap: 599605248 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 413 heartbeat osd_stat(store_statfs(0x19cd1e000/0x0/0x1bfc00000, data 0x20e5764/0x2300000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x20bdf9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 494690304 unmapped: 104914944 heap: 599605248 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 494690304 unmapped: 104914944 heap: 599605248 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4932944 data_alloc: 218103808 data_used: 7102464
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 413 ms_handle_reset con 0x5612d9185000 session 0x5612d34465a0
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 413 ms_handle_reset con 0x5612d9185000 session 0x5612d3447860
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 494690304 unmapped: 104914944 heap: 599605248 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 413 heartbeat osd_stat(store_statfs(0x19cd1e000/0x0/0x1bfc00000, data 0x20e5764/0x2300000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x20bdf9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 413 ms_handle_reset con 0x5612d2e42c00 session 0x5612d3462000
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 413 ms_handle_reset con 0x5612d3f51000 session 0x5612d3462d20
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 494690304 unmapped: 104914944 heap: 599605248 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 494690304 unmapped: 104914944 heap: 599605248 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 496107520 unmapped: 103497728 heap: 599605248 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 413 heartbeat osd_stat(store_statfs(0x19cd1e000/0x0/0x1bfc00000, data 0x20e5764/0x2300000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x20bdf9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 497156096 unmapped: 102449152 heap: 599605248 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5008144 data_alloc: 234881024 data_used: 17657856
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 497156096 unmapped: 102449152 heap: 599605248 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 413 heartbeat osd_stat(store_statfs(0x19cd1e000/0x0/0x1bfc00000, data 0x20e5764/0x2300000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x20bdf9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 497156096 unmapped: 102449152 heap: 599605248 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 497156096 unmapped: 102449152 heap: 599605248 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 497156096 unmapped: 102449152 heap: 599605248 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 497156096 unmapped: 102449152 heap: 599605248 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5008144 data_alloc: 234881024 data_used: 17657856
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 413 heartbeat osd_stat(store_statfs(0x19cd1e000/0x0/0x1bfc00000, data 0x20e5764/0x2300000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x20bdf9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 413 heartbeat osd_stat(store_statfs(0x19cd1e000/0x0/0x1bfc00000, data 0x20e5764/0x2300000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x20bdf9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 497156096 unmapped: 102449152 heap: 599605248 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 413 heartbeat osd_stat(store_statfs(0x19cd1e000/0x0/0x1bfc00000, data 0x20e5764/0x2300000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x20bdf9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 497156096 unmapped: 102449152 heap: 599605248 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 497156096 unmapped: 102449152 heap: 599605248 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 497156096 unmapped: 102449152 heap: 599605248 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 497156096 unmapped: 102449152 heap: 599605248 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 22.228643417s of 22.315589905s, submitted: 16
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5021404 data_alloc: 234881024 data_used: 17682432
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 499654656 unmapped: 99950592 heap: 599605248 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 413 heartbeat osd_stat(store_statfs(0x19bf9e000/0x0/0x1bfc00000, data 0x2e65764/0x3080000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x20bdf9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 413 ms_handle_reset con 0x5612d156c800 session 0x5612d30b7c20
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 413 ms_handle_reset con 0x5612d2bb3400 session 0x5612d0a6b4a0
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 413 ms_handle_reset con 0x5612d156c800 session 0x5612dbe0e5a0
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 413 ms_handle_reset con 0x5612d2bb3400 session 0x5612d5b0fe00
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 499859456 unmapped: 99745792 heap: 599605248 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 413 ms_handle_reset con 0x5612d5b1a000 session 0x5612d27a01e0
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 413 ms_handle_reset con 0x5612d2e42c00 session 0x5612d30cb2c0
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 413 ms_handle_reset con 0x5612d3f51000 session 0x5612d3462780
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 413 ms_handle_reset con 0x5612d156c800 session 0x5612d0b63680
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 413 ms_handle_reset con 0x5612d2bb3400 session 0x5612d31b05a0
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 499884032 unmapped: 99721216 heap: 599605248 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 499884032 unmapped: 99721216 heap: 599605248 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 499884032 unmapped: 99721216 heap: 599605248 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5220212 data_alloc: 234881024 data_used: 18178048
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 413 heartbeat osd_stat(store_statfs(0x19b2e5000/0x0/0x1bfc00000, data 0x3b1c7c6/0x3d38000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x20bdf9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 499884032 unmapped: 99721216 heap: 599605248 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 499884032 unmapped: 99721216 heap: 599605248 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 499892224 unmapped: 99713024 heap: 599605248 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 499892224 unmapped: 99713024 heap: 599605248 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 413 heartbeat osd_stat(store_statfs(0x19b2e5000/0x0/0x1bfc00000, data 0x3b1c7c6/0x3d38000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x20bdf9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 413 ms_handle_reset con 0x5612d2945000 session 0x5612d3a1cf00
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 499900416 unmapped: 99704832 heap: 599605248 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5220694 data_alloc: 234881024 data_used: 18178048
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.162417412s of 10.626817703s, submitted: 134
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 413 heartbeat osd_stat(store_statfs(0x19b2c1000/0x0/0x1bfc00000, data 0x3b407e9/0x3d5d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x20bdf9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 499908608 unmapped: 99696640 heap: 599605248 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 413 ms_handle_reset con 0x5612d2942800 session 0x5612d3463680
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 502931456 unmapped: 96673792 heap: 599605248 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 413 ms_handle_reset con 0x5612d90a8000 session 0x5612d26461e0
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 506650624 unmapped: 92954624 heap: 599605248 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 506650624 unmapped: 92954624 heap: 599605248 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 506650624 unmapped: 92954624 heap: 599605248 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5057252 data_alloc: 234881024 data_used: 20299776
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 506650624 unmapped: 92954624 heap: 599605248 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 413 heartbeat osd_stat(store_statfs(0x19cb76000/0x0/0x1bfc00000, data 0x228c7d9/0x24a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x20bdf9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 506650624 unmapped: 92954624 heap: 599605248 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 506650624 unmapped: 92954624 heap: 599605248 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 506650624 unmapped: 92954624 heap: 599605248 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 413 heartbeat osd_stat(store_statfs(0x19cb76000/0x0/0x1bfc00000, data 0x228c7d9/0x24a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x20bdf9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 506650624 unmapped: 92954624 heap: 599605248 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5057252 data_alloc: 234881024 data_used: 20299776
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 413 heartbeat osd_stat(store_statfs(0x19cb76000/0x0/0x1bfc00000, data 0x228c7d9/0x24a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x20bdf9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 506650624 unmapped: 92954624 heap: 599605248 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 506650624 unmapped: 92954624 heap: 599605248 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 413 heartbeat osd_stat(store_statfs(0x19cb76000/0x0/0x1bfc00000, data 0x228c7d9/0x24a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x20bdf9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.157920837s of 12.188649178s, submitted: 17
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 508346368 unmapped: 91258880 heap: 599605248 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 509673472 unmapped: 89931776 heap: 599605248 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 509689856 unmapped: 89915392 heap: 599605248 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5137014 data_alloc: 234881024 data_used: 22138880
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 509706240 unmapped: 89899008 heap: 599605248 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 509706240 unmapped: 89899008 heap: 599605248 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 509706240 unmapped: 89899008 heap: 599605248 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 413 heartbeat osd_stat(store_statfs(0x19c401000/0x0/0x1bfc00000, data 0x2a017d9/0x2c1d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x20bdf9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 509706240 unmapped: 89899008 heap: 599605248 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 509706240 unmapped: 89899008 heap: 599605248 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 413 heartbeat osd_stat(store_statfs(0x19c401000/0x0/0x1bfc00000, data 0x2a017d9/0x2c1d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x20bdf9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5137494 data_alloc: 234881024 data_used: 22151168
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 509706240 unmapped: 89899008 heap: 599605248 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 413 heartbeat osd_stat(store_statfs(0x19c401000/0x0/0x1bfc00000, data 0x2a017d9/0x2c1d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x20bdf9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 509706240 unmapped: 89899008 heap: 599605248 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 509706240 unmapped: 89899008 heap: 599605248 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 509706240 unmapped: 89899008 heap: 599605248 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 413 heartbeat osd_stat(store_statfs(0x19c401000/0x0/0x1bfc00000, data 0x2a017d9/0x2c1d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x20bdf9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 509706240 unmapped: 89899008 heap: 599605248 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5137814 data_alloc: 234881024 data_used: 22159360
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 413 heartbeat osd_stat(store_statfs(0x19c401000/0x0/0x1bfc00000, data 0x2a017d9/0x2c1d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x20bdf9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 509706240 unmapped: 89899008 heap: 599605248 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 509706240 unmapped: 89899008 heap: 599605248 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 413 ms_handle_reset con 0x5612d7ff9400 session 0x5612d314fe00
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 413 ms_handle_reset con 0x5612d2942400 session 0x5612d34461e0
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 413 ms_handle_reset con 0x5612dea3fc00 session 0x5612d2bb4000
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 413 ms_handle_reset con 0x5612d90a9800 session 0x5612d3a1cd20
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 14.826993942s of 15.048300743s, submitted: 88
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 413 ms_handle_reset con 0x5612d9183400 session 0x5612d077d2c0
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 413 ms_handle_reset con 0x5612d2942400 session 0x5612d160be00
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 509730816 unmapped: 93544448 heap: 603275264 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 413 ms_handle_reset con 0x5612d7ff9400 session 0x5612d3a1c780
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 413 heartbeat osd_stat(store_statfs(0x19b848000/0x0/0x1bfc00000, data 0x35b97e9/0x37d6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x20bdf9c6), peers [0,1] op hist [1,1])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 413 ms_handle_reset con 0x5612d90a9800 session 0x5612d30de1e0
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 413 ms_handle_reset con 0x5612dea3fc00 session 0x5612d30ca5a0
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 509730816 unmapped: 93544448 heap: 603275264 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 509730816 unmapped: 93544448 heap: 603275264 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5238891 data_alloc: 234881024 data_used: 22163456
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 509730816 unmapped: 93544448 heap: 603275264 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 509730816 unmapped: 93544448 heap: 603275264 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 413 heartbeat osd_stat(store_statfs(0x19b3b5000/0x0/0x1bfc00000, data 0x363c7e9/0x3859000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x20fef9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 509739008 unmapped: 93536256 heap: 603275264 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 509739008 unmapped: 93536256 heap: 603275264 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 509755392 unmapped: 93519872 heap: 603275264 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 413 ms_handle_reset con 0x5612d0dd6000 session 0x5612d4e82780
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5237099 data_alloc: 234881024 data_used: 22163456
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 413 ms_handle_reset con 0x5612d2944400 session 0x5612d5b0e960
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 413 ms_handle_reset con 0x5612d2942400 session 0x5612d30b7a40
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 413 heartbeat osd_stat(store_statfs(0x19b3b5000/0x0/0x1bfc00000, data 0x363c7e9/0x3859000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x20fef9c6), peers [0,1] op hist [1])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 509755392 unmapped: 93519872 heap: 603275264 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 509755392 unmapped: 93519872 heap: 603275264 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 510754816 unmapped: 92520448 heap: 603275264 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 510779392 unmapped: 92495872 heap: 603275264 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 510779392 unmapped: 92495872 heap: 603275264 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5306797 data_alloc: 234881024 data_used: 28663808
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 510779392 unmapped: 92495872 heap: 603275264 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 413 heartbeat osd_stat(store_statfs(0x19b3b4000/0x0/0x1bfc00000, data 0x363c80c/0x385a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x20fef9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 510779392 unmapped: 92495872 heap: 603275264 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 413 heartbeat osd_stat(store_statfs(0x19b3b4000/0x0/0x1bfc00000, data 0x363c80c/0x385a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x20fef9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 510779392 unmapped: 92495872 heap: 603275264 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 510779392 unmapped: 92495872 heap: 603275264 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 413 heartbeat osd_stat(store_statfs(0x19b3b4000/0x0/0x1bfc00000, data 0x363c80c/0x385a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x20fef9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 510779392 unmapped: 92495872 heap: 603275264 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5306797 data_alloc: 234881024 data_used: 28663808
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 17.669816971s of 17.824001312s, submitted: 47
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 510779392 unmapped: 92495872 heap: 603275264 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 510779392 unmapped: 92495872 heap: 603275264 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 413 heartbeat osd_stat(store_statfs(0x19b3b4000/0x0/0x1bfc00000, data 0x363c80c/0x385a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x20fef9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 413 ms_handle_reset con 0x5612d1598800 session 0x5612d2647a40
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 413 ms_handle_reset con 0x5612d803ec00 session 0x5612d34472c0
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 510779392 unmapped: 92495872 heap: 603275264 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 413 ms_handle_reset con 0x5612d9185400 session 0x5612d480af00
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 501686272 unmapped: 101588992 heap: 603275264 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 504078336 unmapped: 99196928 heap: 603275264 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5158841 data_alloc: 234881024 data_used: 16756736
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 503619584 unmapped: 99655680 heap: 603275264 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 503644160 unmapped: 99631104 heap: 603275264 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 503644160 unmapped: 99631104 heap: 603275264 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 413 heartbeat osd_stat(store_statfs(0x19bab2000/0x0/0x1bfc00000, data 0x2f32787/0x314e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x20fef9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 503644160 unmapped: 99631104 heap: 603275264 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 503644160 unmapped: 99631104 heap: 603275264 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5178417 data_alloc: 234881024 data_used: 17027072
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 413 heartbeat osd_stat(store_statfs(0x19bab2000/0x0/0x1bfc00000, data 0x2f32787/0x314e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x20fef9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 503644160 unmapped: 99631104 heap: 603275264 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 503644160 unmapped: 99631104 heap: 603275264 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.338756561s of 11.732018471s, submitted: 191
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 503578624 unmapped: 99696640 heap: 603275264 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 503578624 unmapped: 99696640 heap: 603275264 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 503578624 unmapped: 99696640 heap: 603275264 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5172661 data_alloc: 234881024 data_used: 17027072
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 503578624 unmapped: 99696640 heap: 603275264 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 413 heartbeat osd_stat(store_statfs(0x19ba9c000/0x0/0x1bfc00000, data 0x2f56787/0x3172000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x20fef9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 503578624 unmapped: 99696640 heap: 603275264 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 503578624 unmapped: 99696640 heap: 603275264 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 413 heartbeat osd_stat(store_statfs(0x19ba9c000/0x0/0x1bfc00000, data 0x2f56787/0x3172000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x20fef9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 503586816 unmapped: 99688448 heap: 603275264 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 503586816 unmapped: 99688448 heap: 603275264 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5172605 data_alloc: 234881024 data_used: 17027072
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 503586816 unmapped: 99688448 heap: 603275264 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 503586816 unmapped: 99688448 heap: 603275264 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 413 heartbeat osd_stat(store_statfs(0x19ba92000/0x0/0x1bfc00000, data 0x2f60787/0x317c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x20fef9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 503586816 unmapped: 99688448 heap: 603275264 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 503586816 unmapped: 99688448 heap: 603275264 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 413 ms_handle_reset con 0x5612dea3f800 session 0x5612d0a632c0
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 413 ms_handle_reset con 0x5612d7ff7400 session 0x5612d29261e0
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 413 ms_handle_reset con 0x5612d3f50800 session 0x5612d34630e0
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 413 ms_handle_reset con 0x5612d3b8d400 session 0x5612d30b72c0
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.293736458s of 12.343719482s, submitted: 8
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 413 ms_handle_reset con 0x5612d9185000 session 0x5612d314e1e0
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 413 ms_handle_reset con 0x5612d3b8d400 session 0x5612d480be00
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 503914496 unmapped: 99360768 heap: 603275264 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5241058 data_alloc: 234881024 data_used: 17027072
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 503914496 unmapped: 99360768 heap: 603275264 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 413 heartbeat osd_stat(store_statfs(0x19b296000/0x0/0x1bfc00000, data 0x375b7e9/0x3978000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x20fef9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 503914496 unmapped: 99360768 heap: 603275264 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 503914496 unmapped: 99360768 heap: 603275264 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 503914496 unmapped: 99360768 heap: 603275264 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 413 ms_handle_reset con 0x5612ddc3ec00 session 0x5612d3446b40
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 503914496 unmapped: 99360768 heap: 603275264 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 413 ms_handle_reset con 0x5612d1b54c00 session 0x5612d0632780
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5241486 data_alloc: 234881024 data_used: 17027072
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 413 heartbeat osd_stat(store_statfs(0x19b296000/0x0/0x1bfc00000, data 0x375b7e9/0x3978000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x20fef9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 413 ms_handle_reset con 0x5612ddc40400 session 0x5612d31b0960
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 413 ms_handle_reset con 0x5612d9184800 session 0x5612d0a632c0
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 413 ms_handle_reset con 0x5612d1b54c00 session 0x5612d480af00
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 413 ms_handle_reset con 0x5612d3b8d400 session 0x5612d34472c0
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 413 ms_handle_reset con 0x5612ddc40400 session 0x5612d3a1c780
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 505094144 unmapped: 106053632 heap: 611147776 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 413 ms_handle_reset con 0x5612d27b3c00 session 0x5612d160be00
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 413 heartbeat osd_stat(store_statfs(0x19b163000/0x0/0x1bfc00000, data 0x388d7f9/0x3aab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x20fef9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 413 ms_handle_reset con 0x5612ddc3ec00 session 0x5612d30b7a40
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 413 ms_handle_reset con 0x5612d1b54c00 session 0x5612d2bb4000
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 413 ms_handle_reset con 0x5612d27b3c00 session 0x5612d314fe00
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 413 ms_handle_reset con 0x5612d3b8d400 session 0x5612d26461e0
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 413 ms_handle_reset con 0x5612ddc40400 session 0x5612d3463680
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 505102336 unmapped: 106045440 heap: 611147776 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 413 heartbeat osd_stat(store_statfs(0x19a187000/0x0/0x1bfc00000, data 0x486782c/0x4a87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x20fef9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 505110528 unmapped: 106037248 heap: 611147776 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 505159680 unmapped: 105988096 heap: 611147776 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 505176064 unmapped: 105971712 heap: 611147776 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5406737 data_alloc: 234881024 data_used: 22155264
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 505176064 unmapped: 105971712 heap: 611147776 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 413 heartbeat osd_stat(store_statfs(0x19a187000/0x0/0x1bfc00000, data 0x486782c/0x4a87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x20fef9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 505176064 unmapped: 105971712 heap: 611147776 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.301767349s of 12.744765282s, submitted: 61
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 413 ms_handle_reset con 0x5612d803e000 session 0x5612d480ad20
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 505176064 unmapped: 105971712 heap: 611147776 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 413 ms_handle_reset con 0x5612d1b54c00 session 0x5612d3a1dc20
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 505176064 unmapped: 105971712 heap: 611147776 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 413 ms_handle_reset con 0x5612d27b3c00 session 0x5612d30df2c0
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 505184256 unmapped: 105963520 heap: 611147776 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5412184 data_alloc: 234881024 data_used: 22159360
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 413 ms_handle_reset con 0x5612d3b8d400 session 0x5612d2e02780
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 505184256 unmapped: 105963520 heap: 611147776 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 505184256 unmapped: 105963520 heap: 611147776 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 413 heartbeat osd_stat(store_statfs(0x19a161000/0x0/0x1bfc00000, data 0x488b85f/0x4aad000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x20fef9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 508084224 unmapped: 103063552 heap: 611147776 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 510746624 unmapped: 100401152 heap: 611147776 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 510746624 unmapped: 100401152 heap: 611147776 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5531224 data_alloc: 251658240 data_used: 38838272
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 413 heartbeat osd_stat(store_statfs(0x19a161000/0x0/0x1bfc00000, data 0x488b85f/0x4aad000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x20fef9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 513933312 unmapped: 97214464 heap: 611147776 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 510328832 unmapped: 100818944 heap: 611147776 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 8.761179924s of 10.053714752s, submitted: 103
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 511393792 unmapped: 99753984 heap: 611147776 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 413 heartbeat osd_stat(store_statfs(0x19929c000/0x0/0x1bfc00000, data 0x574a85f/0x596c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x20fef9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 511533056 unmapped: 99614720 heap: 611147776 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 512753664 unmapped: 98394112 heap: 611147776 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5666682 data_alloc: 251658240 data_used: 40603648
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 512827392 unmapped: 98320384 heap: 611147776 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 413 heartbeat osd_stat(store_statfs(0x199260000/0x0/0x1bfc00000, data 0x578685f/0x59a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x20fef9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 512827392 unmapped: 98320384 heap: 611147776 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 512827392 unmapped: 98320384 heap: 611147776 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 519610368 unmapped: 91537408 heap: 611147776 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 514695168 unmapped: 96452608 heap: 611147776 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5761006 data_alloc: 251658240 data_used: 40890368
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 514785280 unmapped: 96362496 heap: 611147776 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 413 heartbeat osd_stat(store_statfs(0x1986b7000/0x0/0x1bfc00000, data 0x632d85f/0x654f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x20fef9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 514121728 unmapped: 97026048 heap: 611147776 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 7.748610973s of 10.065562248s, submitted: 138
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 514121728 unmapped: 97026048 heap: 611147776 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 514121728 unmapped: 97026048 heap: 611147776 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 413 ms_handle_reset con 0x5612d3b8c000 session 0x5612d0b63680
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 413 ms_handle_reset con 0x5612d7ff8c00 session 0x5612d31b05a0
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 514129920 unmapped: 97017856 heap: 611147776 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5517134 data_alloc: 234881024 data_used: 30916608
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 514129920 unmapped: 97017856 heap: 611147776 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 413 heartbeat osd_stat(store_statfs(0x199cc8000/0x0/0x1bfc00000, data 0x4c5d85f/0x4e7f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x20fef9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 514138112 unmapped: 97009664 heap: 611147776 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 413 ms_handle_reset con 0x5612d7ff8c00 session 0x5612d31b01e0
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 514138112 unmapped: 97009664 heap: 611147776 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 514138112 unmapped: 97009664 heap: 611147776 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 514138112 unmapped: 97009664 heap: 611147776 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5514545 data_alloc: 234881024 data_used: 30912512
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 514138112 unmapped: 97009664 heap: 611147776 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 413 ms_handle_reset con 0x5612d7ff9400 session 0x5612d0a6a780
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 413 ms_handle_reset con 0x5612d90a9800 session 0x5612d29270e0
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 413 heartbeat osd_stat(store_statfs(0x199d91000/0x0/0x1bfc00000, data 0x4c5d7ca/0x4e7c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x20fef9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 413 ms_handle_reset con 0x5612d9185800 session 0x5612dbe0e3c0
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 509755392 unmapped: 101392384 heap: 611147776 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 509755392 unmapped: 101392384 heap: 611147776 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.669161797s of 11.337374687s, submitted: 100
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 509763584 unmapped: 101384192 heap: 611147776 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 509763584 unmapped: 101384192 heap: 611147776 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5255831 data_alloc: 234881024 data_used: 24129536
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 413 heartbeat osd_stat(store_statfs(0x19b735000/0x0/0x1bfc00000, data 0x32bc797/0x34d9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x20fef9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 509763584 unmapped: 101384192 heap: 611147776 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 413 ms_handle_reset con 0x5612ddc40400 session 0x5612d13ce960
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 413 ms_handle_reset con 0x5612d297a800 session 0x5612d3a1cb40
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 413 ms_handle_reset con 0x5612d7ff8c00 session 0x5612d18e9680
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 501710848 unmapped: 109436928 heap: 611147776 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 413 heartbeat osd_stat(store_statfs(0x19d434000/0x0/0x1bfc00000, data 0x15bf754/0x17d9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x20fef9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 501710848 unmapped: 109436928 heap: 611147776 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 501710848 unmapped: 109436928 heap: 611147776 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 501710848 unmapped: 109436928 heap: 611147776 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4920842 data_alloc: 218103808 data_used: 7102464
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 413 heartbeat osd_stat(store_statfs(0x19d434000/0x0/0x1bfc00000, data 0x15bf754/0x17d9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x20fef9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 501710848 unmapped: 109436928 heap: 611147776 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 501710848 unmapped: 109436928 heap: 611147776 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 413 heartbeat osd_stat(store_statfs(0x19d434000/0x0/0x1bfc00000, data 0x15bf754/0x17d9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x20fef9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 501710848 unmapped: 109436928 heap: 611147776 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 501710848 unmapped: 109436928 heap: 611147776 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 501710848 unmapped: 109436928 heap: 611147776 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4920842 data_alloc: 218103808 data_used: 7102464
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 501710848 unmapped: 109436928 heap: 611147776 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 501710848 unmapped: 109436928 heap: 611147776 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 413 heartbeat osd_stat(store_statfs(0x19d434000/0x0/0x1bfc00000, data 0x15bf754/0x17d9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x20fef9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 501710848 unmapped: 109436928 heap: 611147776 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 413 heartbeat osd_stat(store_statfs(0x19d434000/0x0/0x1bfc00000, data 0x15bf754/0x17d9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x20fef9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 501710848 unmapped: 109436928 heap: 611147776 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 501710848 unmapped: 109436928 heap: 611147776 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4920842 data_alloc: 218103808 data_used: 7102464
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 413 heartbeat osd_stat(store_statfs(0x19d434000/0x0/0x1bfc00000, data 0x15bf754/0x17d9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x20fef9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 501710848 unmapped: 109436928 heap: 611147776 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 501710848 unmapped: 109436928 heap: 611147776 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 501719040 unmapped: 109428736 heap: 611147776 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 501719040 unmapped: 109428736 heap: 611147776 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 413 heartbeat osd_stat(store_statfs(0x19d434000/0x0/0x1bfc00000, data 0x15bf754/0x17d9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x20fef9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 501719040 unmapped: 109428736 heap: 611147776 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4920842 data_alloc: 218103808 data_used: 7102464
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 501719040 unmapped: 109428736 heap: 611147776 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 501719040 unmapped: 109428736 heap: 611147776 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 413 heartbeat osd_stat(store_statfs(0x19d434000/0x0/0x1bfc00000, data 0x15bf754/0x17d9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x20fef9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 501719040 unmapped: 109428736 heap: 611147776 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 501719040 unmapped: 109428736 heap: 611147776 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 501719040 unmapped: 109428736 heap: 611147776 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4920842 data_alloc: 218103808 data_used: 7102464
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 413 ms_handle_reset con 0x5612d3f4ec00 session 0x5612dc404000
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 413 ms_handle_reset con 0x5612d0bd8000 session 0x5612d30dfa40
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 413 ms_handle_reset con 0x5612d2bca400 session 0x5612d0a554a0
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 501727232 unmapped: 109420544 heap: 611147776 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 413 ms_handle_reset con 0x5612d0bd8000 session 0x5612d30def00
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 27.787673950s of 27.848728180s, submitted: 29
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 413 heartbeat osd_stat(store_statfs(0x19d434000/0x0/0x1bfc00000, data 0x15bf754/0x17d9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x20fef9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 413 ms_handle_reset con 0x5612d297a800 session 0x5612d3462d20
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 413 ms_handle_reset con 0x5612d3f4ec00 session 0x5612d3462780
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 413 ms_handle_reset con 0x5612d7ff8c00 session 0x5612d30b6b40
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 413 ms_handle_reset con 0x5612d2e43800 session 0x5612d0b30d20
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 413 ms_handle_reset con 0x5612d0bd8000 session 0x5612d0b30b40
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 501694464 unmapped: 109453312 heap: 611147776 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 501694464 unmapped: 109453312 heap: 611147776 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 501694464 unmapped: 109453312 heap: 611147776 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 413 heartbeat osd_stat(store_statfs(0x19cb9e000/0x0/0x1bfc00000, data 0x1e55764/0x2070000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x20fef9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 413 ms_handle_reset con 0x5612d297a800 session 0x5612d0b623c0
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 502923264 unmapped: 108224512 heap: 611147776 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5077157 data_alloc: 218103808 data_used: 7102464
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 502923264 unmapped: 108224512 heap: 611147776 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 413 ms_handle_reset con 0x5612d3f4ec00 session 0x5612d0a634a0
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 413 ms_handle_reset con 0x5612d1b53400 session 0x5612daac34a0
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 502923264 unmapped: 108224512 heap: 611147776 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 413 ms_handle_reset con 0x5612d1577800 session 0x5612d31b1680
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 413 ms_handle_reset con 0x5612d0bd8000 session 0x5612d0b70d20
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 502857728 unmapped: 108290048 heap: 611147776 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 502865920 unmapped: 108281856 heap: 611147776 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 413 heartbeat osd_stat(store_statfs(0x19c1b4000/0x0/0x1bfc00000, data 0x283e774/0x2a5a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x20fef9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 502865920 unmapped: 108281856 heap: 611147776 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5080007 data_alloc: 218103808 data_used: 7258112
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 502874112 unmapped: 108273664 heap: 611147776 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 502874112 unmapped: 108273664 heap: 611147776 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.423761368s of 11.628615379s, submitted: 49
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 413 ms_handle_reset con 0x5612d1576400 session 0x5612d0b705a0
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 413 heartbeat osd_stat(store_statfs(0x19c1b4000/0x0/0x1bfc00000, data 0x283e774/0x2a5a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x20fef9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 503021568 unmapped: 108126208 heap: 611147776 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 503021568 unmapped: 108126208 heap: 611147776 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 503037952 unmapped: 108109824 heap: 611147776 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5212445 data_alloc: 234881024 data_used: 24801280
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 503037952 unmapped: 108109824 heap: 611147776 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 413 heartbeat osd_stat(store_statfs(0x19c18f000/0x0/0x1bfc00000, data 0x2862797/0x2a7f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x20fef9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 503037952 unmapped: 108109824 heap: 611147776 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 503037952 unmapped: 108109824 heap: 611147776 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 413 heartbeat osd_stat(store_statfs(0x19c18f000/0x0/0x1bfc00000, data 0x2862797/0x2a7f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x20fef9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 503037952 unmapped: 108109824 heap: 611147776 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 413 heartbeat osd_stat(store_statfs(0x19c18f000/0x0/0x1bfc00000, data 0x2862797/0x2a7f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x20fef9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 503037952 unmapped: 108109824 heap: 611147776 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5212445 data_alloc: 234881024 data_used: 24801280
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 503037952 unmapped: 108109824 heap: 611147776 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 505995264 unmapped: 105152512 heap: 611147776 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 506429440 unmapped: 104718336 heap: 611147776 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 506429440 unmapped: 104718336 heap: 611147776 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 506429440 unmapped: 104718336 heap: 611147776 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5283117 data_alloc: 234881024 data_used: 25767936
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 413 heartbeat osd_stat(store_statfs(0x19b9eb000/0x0/0x1bfc00000, data 0x2ffd797/0x321a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x20fef9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 506429440 unmapped: 104718336 heap: 611147776 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.188008308s of 13.434521675s, submitted: 100
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 514654208 unmapped: 96493568 heap: 611147776 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 515104768 unmapped: 96043008 heap: 611147776 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 413 heartbeat osd_stat(store_statfs(0x199aaf000/0x0/0x1bfc00000, data 0x3da2797/0x3fbf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2218f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 515104768 unmapped: 96043008 heap: 611147776 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 515104768 unmapped: 96043008 heap: 611147776 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5390397 data_alloc: 234881024 data_used: 27246592
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 515104768 unmapped: 96043008 heap: 611147776 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 515104768 unmapped: 96043008 heap: 611147776 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 515104768 unmapped: 96043008 heap: 611147776 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 515104768 unmapped: 96043008 heap: 611147776 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 413 heartbeat osd_stat(store_statfs(0x199a7e000/0x0/0x1bfc00000, data 0x3dd3797/0x3ff0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2218f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 515104768 unmapped: 96043008 heap: 611147776 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5388549 data_alloc: 234881024 data_used: 27246592
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 413 heartbeat osd_stat(store_statfs(0x199a7e000/0x0/0x1bfc00000, data 0x3dd3797/0x3ff0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2218f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 515104768 unmapped: 96043008 heap: 611147776 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 515104768 unmapped: 96043008 heap: 611147776 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.729269981s of 11.014642715s, submitted: 135
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 515104768 unmapped: 96043008 heap: 611147776 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 515104768 unmapped: 96043008 heap: 611147776 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 413 ms_handle_reset con 0x5612d8e80c00 session 0x5612d30df2c0
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 413 ms_handle_reset con 0x5612d3d83c00 session 0x5612d0a14000
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 515104768 unmapped: 96043008 heap: 611147776 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5390321 data_alloc: 234881024 data_used: 27254784
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 515104768 unmapped: 96043008 heap: 611147776 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 413 heartbeat osd_stat(store_statfs(0x199a6e000/0x0/0x1bfc00000, data 0x3de3797/0x4000000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2218f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 515112960 unmapped: 96034816 heap: 611147776 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 515121152 unmapped: 96026624 heap: 611147776 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 413 heartbeat osd_stat(store_statfs(0x199a6b000/0x0/0x1bfc00000, data 0x3de6797/0x4003000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2218f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 515121152 unmapped: 96026624 heap: 611147776 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 515121152 unmapped: 96026624 heap: 611147776 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5390849 data_alloc: 234881024 data_used: 27344896
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 515121152 unmapped: 96026624 heap: 611147776 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 515121152 unmapped: 96026624 heap: 611147776 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 413 heartbeat osd_stat(store_statfs(0x199a6b000/0x0/0x1bfc00000, data 0x3de6797/0x4003000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2218f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.855538368s of 10.907034874s, submitted: 15
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 515121152 unmapped: 96026624 heap: 611147776 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 515121152 unmapped: 96026624 heap: 611147776 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 515121152 unmapped: 96026624 heap: 611147776 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5390761 data_alloc: 234881024 data_used: 27344896
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 413 heartbeat osd_stat(store_statfs(0x199a5d000/0x0/0x1bfc00000, data 0x3df4797/0x4011000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2218f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 515121152 unmapped: 96026624 heap: 611147776 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 515121152 unmapped: 96026624 heap: 611147776 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 515121152 unmapped: 96026624 heap: 611147776 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 515129344 unmapped: 96018432 heap: 611147776 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 413 heartbeat osd_stat(store_statfs(0x199a5d000/0x0/0x1bfc00000, data 0x3df4797/0x4011000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2218f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 515129344 unmapped: 96018432 heap: 611147776 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5408105 data_alloc: 234881024 data_used: 28839936
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 515129344 unmapped: 96018432 heap: 611147776 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 515129344 unmapped: 96018432 heap: 611147776 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 413 heartbeat osd_stat(store_statfs(0x199a5d000/0x0/0x1bfc00000, data 0x3df4797/0x4011000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2218f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 515129344 unmapped: 96018432 heap: 611147776 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 413 heartbeat osd_stat(store_statfs(0x199a5d000/0x0/0x1bfc00000, data 0x3df4797/0x4011000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2218f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 413 ms_handle_reset con 0x5612d4652400 session 0x5612d4e83e00
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 413 ms_handle_reset con 0x5612d0bd8000 session 0x5612d160a1e0
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 413 ms_handle_reset con 0x5612d1576400 session 0x5612d3462f00
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 413 ms_handle_reset con 0x5612d3d83c00 session 0x5612cf123e00
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.788809776s of 10.839215279s, submitted: 8
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 515129344 unmapped: 96018432 heap: 611147776 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 413 ms_handle_reset con 0x5612d8e80c00 session 0x5612d30b61e0
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 516186112 unmapped: 99164160 heap: 615350272 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5483637 data_alloc: 234881024 data_used: 28856320
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 413 heartbeat osd_stat(store_statfs(0x1990bf000/0x0/0x1bfc00000, data 0x4792797/0x49af000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2218f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 516186112 unmapped: 99164160 heap: 615350272 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 516186112 unmapped: 99164160 heap: 615350272 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 516186112 unmapped: 99164160 heap: 615350272 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 413 ms_handle_reset con 0x5612d3694400 session 0x5612d30b7860
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 413 ms_handle_reset con 0x5612d0be9800 session 0x5612d4e82960
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 413 heartbeat osd_stat(store_statfs(0x1990bc000/0x0/0x1bfc00000, data 0x4795797/0x49b2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2218f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 516186112 unmapped: 99164160 heap: 615350272 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 413 ms_handle_reset con 0x5612d0bd8000 session 0x5612dc4054a0
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 510132224 unmapped: 105218048 heap: 615350272 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5203123 data_alloc: 218103808 data_used: 15663104
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 510132224 unmapped: 105218048 heap: 615350272 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 510132224 unmapped: 105218048 heap: 615350272 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 413 ms_handle_reset con 0x5612d7ff8000 session 0x5612e1712000
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 413 heartbeat osd_stat(store_statfs(0x19a888000/0x0/0x1bfc00000, data 0x2fc9774/0x31e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2218f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 510287872 unmapped: 105062400 heap: 615350272 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 510435328 unmapped: 104914944 heap: 615350272 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 510828544 unmapped: 104521728 heap: 615350272 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5276903 data_alloc: 234881024 data_used: 25718784
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.269813538s of 11.449792862s, submitted: 56
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 510828544 unmapped: 104521728 heap: 615350272 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 510828544 unmapped: 104521728 heap: 615350272 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 510828544 unmapped: 104521728 heap: 615350272 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 413 heartbeat osd_stat(store_statfs(0x19a865000/0x0/0x1bfc00000, data 0x2fed774/0x3209000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2218f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 510828544 unmapped: 104521728 heap: 615350272 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 413 heartbeat osd_stat(store_statfs(0x19a865000/0x0/0x1bfc00000, data 0x2fed774/0x3209000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2218f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 510828544 unmapped: 104521728 heap: 615350272 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5277543 data_alloc: 234881024 data_used: 25780224
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 510828544 unmapped: 104521728 heap: 615350272 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 413 heartbeat osd_stat(store_statfs(0x19a85f000/0x0/0x1bfc00000, data 0x2ff3774/0x320f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2218f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 510828544 unmapped: 104521728 heap: 615350272 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 413 heartbeat osd_stat(store_statfs(0x19a85f000/0x0/0x1bfc00000, data 0x2ff3774/0x320f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2218f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 510828544 unmapped: 104521728 heap: 615350272 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 510828544 unmapped: 104521728 heap: 615350272 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 413 heartbeat osd_stat(store_statfs(0x19a85f000/0x0/0x1bfc00000, data 0x2ff3774/0x320f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2218f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 510828544 unmapped: 104521728 heap: 615350272 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5279407 data_alloc: 234881024 data_used: 25808896
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.912614822s of 10.001329422s, submitted: 26
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 413 heartbeat osd_stat(store_statfs(0x19a85f000/0x0/0x1bfc00000, data 0x2ff3774/0x320f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2218f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 514957312 unmapped: 100392960 heap: 615350272 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 413 heartbeat osd_stat(store_statfs(0x199cd7000/0x0/0x1bfc00000, data 0x3b75774/0x3d91000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2218f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 515178496 unmapped: 100171776 heap: 615350272 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 515178496 unmapped: 100171776 heap: 615350272 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 515178496 unmapped: 100171776 heap: 615350272 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 515178496 unmapped: 100171776 heap: 615350272 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5384395 data_alloc: 234881024 data_used: 26083328
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 515178496 unmapped: 100171776 heap: 615350272 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 413 heartbeat osd_stat(store_statfs(0x199c9b000/0x0/0x1bfc00000, data 0x3baf774/0x3dcb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2218f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 413 heartbeat osd_stat(store_statfs(0x199c9b000/0x0/0x1bfc00000, data 0x3baf774/0x3dcb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2218f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 515178496 unmapped: 100171776 heap: 615350272 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 515178496 unmapped: 100171776 heap: 615350272 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 515186688 unmapped: 100163584 heap: 615350272 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 515186688 unmapped: 100163584 heap: 615350272 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5378855 data_alloc: 234881024 data_used: 26083328
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 515186688 unmapped: 100163584 heap: 615350272 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 515186688 unmapped: 100163584 heap: 615350272 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.045725822s of 12.233748436s, submitted: 74
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 413 heartbeat osd_stat(store_statfs(0x199c98000/0x0/0x1bfc00000, data 0x3bba774/0x3dd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2218f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 515186688 unmapped: 100163584 heap: 615350272 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 515186688 unmapped: 100163584 heap: 615350272 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 515186688 unmapped: 100163584 heap: 615350272 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5379483 data_alloc: 234881024 data_used: 26083328
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 515186688 unmapped: 100163584 heap: 615350272 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 515203072 unmapped: 100147200 heap: 615350272 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 413 heartbeat osd_stat(store_statfs(0x199c98000/0x0/0x1bfc00000, data 0x3bba774/0x3dd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2218f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 515203072 unmapped: 100147200 heap: 615350272 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 515203072 unmapped: 100147200 heap: 615350272 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 515203072 unmapped: 100147200 heap: 615350272 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5379359 data_alloc: 234881024 data_used: 26083328
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 515203072 unmapped: 100147200 heap: 615350272 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 515203072 unmapped: 100147200 heap: 615350272 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 413 heartbeat osd_stat(store_statfs(0x199c93000/0x0/0x1bfc00000, data 0x3bbf774/0x3ddb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2218f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 515203072 unmapped: 100147200 heap: 615350272 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 413 heartbeat osd_stat(store_statfs(0x199c93000/0x0/0x1bfc00000, data 0x3bbf774/0x3ddb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2218f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 515203072 unmapped: 100147200 heap: 615350272 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.292383194s of 12.308183670s, submitted: 4
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 413 ms_handle_reset con 0x5612dea3fc00 session 0x5612d06330e0
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 413 ms_handle_reset con 0x5612d297a400 session 0x5612e17130e0
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 413 ms_handle_reset con 0x5612d0bd8000 session 0x5612d4e832c0
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 515211264 unmapped: 100139008 heap: 615350272 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5379435 data_alloc: 234881024 data_used: 26083328
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 413 ms_handle_reset con 0x5612d0be9800 session 0x5612dc4052c0
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 413 ms_handle_reset con 0x5612d7ff8000 session 0x5612d34470e0
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 413 ms_handle_reset con 0x5612dea3fc00 session 0x5612d314e5a0
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 413 ms_handle_reset con 0x5612d803d000 session 0x5612daac2960
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 413 ms_handle_reset con 0x5612d0bd8000 session 0x5612d2e03e00
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 413 ms_handle_reset con 0x5612d0be9800 session 0x5612d0b63a40
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 515883008 unmapped: 99467264 heap: 615350272 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 515883008 unmapped: 99467264 heap: 615350272 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 413 heartbeat osd_stat(store_statfs(0x19915b000/0x0/0x1bfc00000, data 0x46f7774/0x4913000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2218f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 515883008 unmapped: 99467264 heap: 615350272 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 515883008 unmapped: 99467264 heap: 615350272 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 515883008 unmapped: 99467264 heap: 615350272 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5473392 data_alloc: 234881024 data_used: 26083328
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 413 heartbeat osd_stat(store_statfs(0x19915b000/0x0/0x1bfc00000, data 0x46f7774/0x4913000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2218f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 515883008 unmapped: 99467264 heap: 615350272 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 413 ms_handle_reset con 0x5612d4652c00 session 0x5612d0633c20
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 413 ms_handle_reset con 0x5612d2942400 session 0x5612daac21e0
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 515883008 unmapped: 99467264 heap: 615350272 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 413 ms_handle_reset con 0x5612d8e80c00 session 0x5612d0632d20
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 413 ms_handle_reset con 0x5612d0bd8000 session 0x5612d3447860
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 515891200 unmapped: 99459072 heap: 615350272 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 515891200 unmapped: 99459072 heap: 615350272 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 413 heartbeat osd_stat(store_statfs(0x199157000/0x0/0x1bfc00000, data 0x46fa797/0x4917000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2218f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 516243456 unmapped: 99106816 heap: 615350272 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5516617 data_alloc: 234881024 data_used: 31727616
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.617841721s of 10.796216965s, submitted: 39
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 517169152 unmapped: 98181120 heap: 615350272 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 413 ms_handle_reset con 0x5612d4653000 session 0x5612d0773e00
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 413 ms_handle_reset con 0x5612d803d800 session 0x5612d2647a40
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 517169152 unmapped: 98181120 heap: 615350272 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 413 ms_handle_reset con 0x5612d7ffa400 session 0x5612d0b71680
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 413 heartbeat osd_stat(store_statfs(0x19a6cb000/0x0/0x1bfc00000, data 0x3186797/0x33a3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2218f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 517177344 unmapped: 98172928 heap: 615350272 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 413 heartbeat osd_stat(store_statfs(0x19a6cb000/0x0/0x1bfc00000, data 0x3186797/0x33a3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2218f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 517177344 unmapped: 98172928 heap: 615350272 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 517177344 unmapped: 98172928 heap: 615350272 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5322949 data_alloc: 234881024 data_used: 27193344
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 517177344 unmapped: 98172928 heap: 615350272 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 517177344 unmapped: 98172928 heap: 615350272 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 517177344 unmapped: 98172928 heap: 615350272 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 517177344 unmapped: 98172928 heap: 615350272 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 413 heartbeat osd_stat(store_statfs(0x19a6c8000/0x0/0x1bfc00000, data 0x3189797/0x33a6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2218f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 517177344 unmapped: 98172928 heap: 615350272 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5323345 data_alloc: 234881024 data_used: 27193344
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.421003342s of 10.455653191s, submitted: 19
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #58. Immutable memtables: 14.
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 522264576 unmapped: 93085696 heap: 615350272 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 523157504 unmapped: 92192768 heap: 615350272 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 413 ms_handle_reset con 0x5612d1b53400 session 0x5612d4e825a0
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 413 ms_handle_reset con 0x5612d297a800 session 0x5612d13ce000
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 413 ms_handle_reset con 0x5612d0bd8000 session 0x5612d19932c0
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 523608064 unmapped: 91742208 heap: 615350272 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 523608064 unmapped: 91742208 heap: 615350272 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 413 heartbeat osd_stat(store_statfs(0x19964c000/0x0/0x1bfc00000, data 0x3067777/0x3282000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 523608064 unmapped: 91742208 heap: 615350272 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5254066 data_alloc: 234881024 data_used: 20156416
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 523608064 unmapped: 91742208 heap: 615350272 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 523608064 unmapped: 91742208 heap: 615350272 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 523608064 unmapped: 91742208 heap: 615350272 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 523608064 unmapped: 91742208 heap: 615350272 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 413 heartbeat osd_stat(store_statfs(0x19962b000/0x0/0x1bfc00000, data 0x3088777/0x32a3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 523608064 unmapped: 91742208 heap: 615350272 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5255818 data_alloc: 234881024 data_used: 20156416
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 413 heartbeat osd_stat(store_statfs(0x19962b000/0x0/0x1bfc00000, data 0x3088777/0x32a3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 413 heartbeat osd_stat(store_statfs(0x19962b000/0x0/0x1bfc00000, data 0x3088777/0x32a3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 523608064 unmapped: 91742208 heap: 615350272 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.303668022s of 10.690677643s, submitted: 188
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 523624448 unmapped: 91725824 heap: 615350272 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 523624448 unmapped: 91725824 heap: 615350272 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 413 heartbeat osd_stat(store_statfs(0x199615000/0x0/0x1bfc00000, data 0x309e777/0x32b9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 413 ms_handle_reset con 0x5612d0be9800 session 0x5612d2e03680
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 523624448 unmapped: 91725824 heap: 615350272 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 518291456 unmapped: 97058816 heap: 615350272 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4971922 data_alloc: 218103808 data_used: 7106560
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 413 ms_handle_reset con 0x5612dea3fc00 session 0x5612d06154a0
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 413 ms_handle_reset con 0x5612d0bd9800 session 0x5612d480af00
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 413 ms_handle_reset con 0x5612d0bd8000 session 0x5612d3463680
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 413 ms_handle_reset con 0x5612d0be9800 session 0x5612d480b4a0
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 413 ms_handle_reset con 0x5612d297a800 session 0x5612dc4045a0
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 413 ms_handle_reset con 0x5612dea3fc00 session 0x5612d4e823c0
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 413 ms_handle_reset con 0x5612d0bd6000 session 0x5612daac2960
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 519364608 unmapped: 95985664 heap: 615350272 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 413 ms_handle_reset con 0x5612ddc3e000 session 0x5612d34470e0
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 519364608 unmapped: 95985664 heap: 615350272 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 413 ms_handle_reset con 0x5612d3f4f000 session 0x5612dc4052c0
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 519364608 unmapped: 95985664 heap: 615350272 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 413 ms_handle_reset con 0x5612d2942400 session 0x5612e17130e0
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 413 ms_handle_reset con 0x5612d9185800 session 0x5612d06330e0
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 413 heartbeat osd_stat(store_statfs(0x19aa76000/0x0/0x1bfc00000, data 0x1c3c7b6/0x1e57000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 519364608 unmapped: 95985664 heap: 615350272 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 519364608 unmapped: 95985664 heap: 615350272 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5031869 data_alloc: 218103808 data_used: 7102464
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 413 heartbeat osd_stat(store_statfs(0x19aa76000/0x0/0x1bfc00000, data 0x1c3c7b6/0x1e57000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [1])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 519380992 unmapped: 95969280 heap: 615350272 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 519380992 unmapped: 95969280 heap: 615350272 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 519380992 unmapped: 95969280 heap: 615350272 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 519380992 unmapped: 95969280 heap: 615350272 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 519380992 unmapped: 95969280 heap: 615350272 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5069149 data_alloc: 218103808 data_used: 12378112
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 413 heartbeat osd_stat(store_statfs(0x19aa76000/0x0/0x1bfc00000, data 0x1c3c7b6/0x1e57000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 519380992 unmapped: 95969280 heap: 615350272 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 519380992 unmapped: 95969280 heap: 615350272 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 519380992 unmapped: 95969280 heap: 615350272 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 519380992 unmapped: 95969280 heap: 615350272 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 519380992 unmapped: 95969280 heap: 615350272 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5069149 data_alloc: 218103808 data_used: 12378112
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 413 heartbeat osd_stat(store_statfs(0x19aa76000/0x0/0x1bfc00000, data 0x1c3c7b6/0x1e57000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 519380992 unmapped: 95969280 heap: 615350272 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 19.703044891s of 19.919292450s, submitted: 107
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 516956160 unmapped: 98394112 heap: 615350272 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 413 ms_handle_reset con 0x5612d1b54400 session 0x5612d18e90e0
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 413 ms_handle_reset con 0x5612d2942400 session 0x5612d30cb2c0
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 413 ms_handle_reset con 0x5612d3f4f000 session 0x5612d2e03860
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 413 ms_handle_reset con 0x5612d9185800 session 0x5612d314f680
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 413 ms_handle_reset con 0x5612ddc3e000 session 0x5612d4e83680
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 517226496 unmapped: 101801984 heap: 619028480 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 413 ms_handle_reset con 0x5612dea3f000 session 0x5612dbe0e780
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 517226496 unmapped: 101801984 heap: 619028480 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 517226496 unmapped: 101801984 heap: 619028480 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5252764 data_alloc: 218103808 data_used: 12525568
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 413 heartbeat osd_stat(store_statfs(0x19942a000/0x0/0x1bfc00000, data 0x3288818/0x34a4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 517234688 unmapped: 101793792 heap: 619028480 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 413 ms_handle_reset con 0x5612d803d800 session 0x5612d0773a40
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 413 heartbeat osd_stat(store_statfs(0x19942a000/0x0/0x1bfc00000, data 0x3288818/0x34a4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 517234688 unmapped: 101793792 heap: 619028480 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 413 ms_handle_reset con 0x5612d90a8000 session 0x5612d3a1cd20
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 413 heartbeat osd_stat(store_statfs(0x19942a000/0x0/0x1bfc00000, data 0x3288818/0x34a4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 517357568 unmapped: 101670912 heap: 619028480 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 517316608 unmapped: 101711872 heap: 619028480 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 517316608 unmapped: 101711872 heap: 619028480 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5250628 data_alloc: 218103808 data_used: 12529664
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 413 ms_handle_reset con 0x5612d2bb2c00 session 0x5612dc404000
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 413 ms_handle_reset con 0x5612d31b4800 session 0x5612d3a1c5a0
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 517275648 unmapped: 101752832 heap: 619028480 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 413 heartbeat osd_stat(store_statfs(0x199408000/0x0/0x1bfc00000, data 0x32a9828/0x34c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 517251072 unmapped: 101777408 heap: 619028480 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.125140190s of 11.744334221s, submitted: 170
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 413 ms_handle_reset con 0x5612d4652c00 session 0x5612e1712000
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 517578752 unmapped: 101449728 heap: 619028480 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 517595136 unmapped: 101433344 heap: 619028480 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 413 ms_handle_reset con 0x5612d2bb2c00 session 0x5612d077c000
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 413 heartbeat osd_stat(store_statfs(0x19a5d0000/0x0/0x1bfc00000, data 0x1e297c6/0x2045000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 517595136 unmapped: 101433344 heap: 619028480 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5117630 data_alloc: 218103808 data_used: 15380480
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 517595136 unmapped: 101433344 heap: 619028480 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 517595136 unmapped: 101433344 heap: 619028480 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 517595136 unmapped: 101433344 heap: 619028480 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 413 heartbeat osd_stat(store_statfs(0x19a5d0000/0x0/0x1bfc00000, data 0x1e297c6/0x2045000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 517595136 unmapped: 101433344 heap: 619028480 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 413 ms_handle_reset con 0x5612d7ff6400 session 0x5612d2e02960
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 413 ms_handle_reset con 0x5612d1b52800 session 0x5612d18e9680
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 413 ms_handle_reset con 0x5612d2e43400 session 0x5612d0b701e0
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 514711552 unmapped: 104316928 heap: 619028480 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4991581 data_alloc: 218103808 data_used: 7102464
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 413 heartbeat osd_stat(store_statfs(0x19a5d0000/0x0/0x1bfc00000, data 0x1e297c6/0x2045000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 514711552 unmapped: 104316928 heap: 619028480 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 514719744 unmapped: 104308736 heap: 619028480 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 514719744 unmapped: 104308736 heap: 619028480 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 514719744 unmapped: 104308736 heap: 619028480 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 514719744 unmapped: 104308736 heap: 619028480 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4991581 data_alloc: 218103808 data_used: 7102464
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 413 heartbeat osd_stat(store_statfs(0x19b0f4000/0x0/0x1bfc00000, data 0x15bf754/0x17d9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 514719744 unmapped: 104308736 heap: 619028480 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 514719744 unmapped: 104308736 heap: 619028480 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 514719744 unmapped: 104308736 heap: 619028480 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 514719744 unmapped: 104308736 heap: 619028480 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 514736128 unmapped: 104292352 heap: 619028480 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4991581 data_alloc: 218103808 data_used: 7102464
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 514736128 unmapped: 104292352 heap: 619028480 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 413 heartbeat osd_stat(store_statfs(0x19b0f4000/0x0/0x1bfc00000, data 0x15bf754/0x17d9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 514736128 unmapped: 104292352 heap: 619028480 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 514736128 unmapped: 104292352 heap: 619028480 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 514736128 unmapped: 104292352 heap: 619028480 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 413 heartbeat osd_stat(store_statfs(0x19b0f4000/0x0/0x1bfc00000, data 0x15bf754/0x17d9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 514736128 unmapped: 104292352 heap: 619028480 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4991581 data_alloc: 218103808 data_used: 7102464
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 413 heartbeat osd_stat(store_statfs(0x19b0f4000/0x0/0x1bfc00000, data 0x15bf754/0x17d9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 514736128 unmapped: 104292352 heap: 619028480 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 514736128 unmapped: 104292352 heap: 619028480 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 514744320 unmapped: 104284160 heap: 619028480 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 413 ms_handle_reset con 0x5612dea3dc00 session 0x5612d30ca780
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 413 ms_handle_reset con 0x5612dea3dc00 session 0x5612d3446b40
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 413 ms_handle_reset con 0x5612d1b52800 session 0x5612e1712f00
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 413 ms_handle_reset con 0x5612d2bb2c00 session 0x5612d34461e0
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 24.930173874s of 25.422096252s, submitted: 42
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 413 ms_handle_reset con 0x5612d4428c00 session 0x5612d30def00
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 413 ms_handle_reset con 0x5612d2e43400 session 0x5612d30ca5a0
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 413 ms_handle_reset con 0x5612d2e43400 session 0x5612d480ab40
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 413 ms_handle_reset con 0x5612d1b52800 session 0x5612d160b0e0
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 413 ms_handle_reset con 0x5612d2bb2c00 session 0x5612d480be00
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 514940928 unmapped: 108290048 heap: 623230976 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 413 heartbeat osd_stat(store_statfs(0x19a720000/0x0/0x1bfc00000, data 0x1f94754/0x21ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 514940928 unmapped: 108290048 heap: 623230976 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5077422 data_alloc: 218103808 data_used: 7102464
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 514940928 unmapped: 108290048 heap: 623230976 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 514940928 unmapped: 108290048 heap: 623230976 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 514940928 unmapped: 108290048 heap: 623230976 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 413 heartbeat osd_stat(store_statfs(0x19a720000/0x0/0x1bfc00000, data 0x1f94754/0x21ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 514940928 unmapped: 108290048 heap: 623230976 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 413 heartbeat osd_stat(store_statfs(0x19a720000/0x0/0x1bfc00000, data 0x1f94754/0x21ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2332f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 413 ms_handle_reset con 0x5612d1289800 session 0x5612d480a960
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 515096576 unmapped: 108134400 heap: 623230976 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5082256 data_alloc: 218103808 data_used: 7102464
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 515096576 unmapped: 108134400 heap: 623230976 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 413 ms_handle_reset con 0x5612d6c91800 session 0x5612d2646f00
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 413 ms_handle_reset con 0x5612d1289800 session 0x5612d0a6b2c0
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 413 ms_handle_reset con 0x5612d1b52800 session 0x5612d3a1cf00
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 413 ms_handle_reset con 0x5612d2bb2c00 session 0x5612d30cb860
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 515719168 unmapped: 107511808 heap: 623230976 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 413 ms_handle_reset con 0x5612d2e43400 session 0x5612d30b63c0
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 413 ms_handle_reset con 0x5612d297bc00 session 0x5612daac2b40
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 413 ms_handle_reset con 0x5612dea3e400 session 0x5612d31b1c20
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 515719168 unmapped: 107511808 heap: 623230976 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.813674927s of 10.039435387s, submitted: 63
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 413 ms_handle_reset con 0x5612d297bc00 session 0x5612d3a1c3c0
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 514883584 unmapped: 108347392 heap: 623230976 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 413 heartbeat osd_stat(store_statfs(0x19a42b000/0x0/0x1bfc00000, data 0x1e78754/0x2092000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5072317 data_alloc: 218103808 data_used: 7102464
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 514883584 unmapped: 108347392 heap: 623230976 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 413 ms_handle_reset con 0x5612d0a61400 session 0x5612d0b62d20
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 514883584 unmapped: 108347392 heap: 623230976 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 413 ms_handle_reset con 0x5612d2942000 session 0x5612d30cba40
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 514883584 unmapped: 108347392 heap: 623230976 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 413 heartbeat osd_stat(store_statfs(0x19a42b000/0x0/0x1bfc00000, data 0x1e78754/0x2092000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 514883584 unmapped: 108347392 heap: 623230976 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 413 ms_handle_reset con 0x5612d1289000 session 0x5612d0a63a40
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 413 ms_handle_reset con 0x5612d1289000 session 0x5612d30df2c0
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 514883584 unmapped: 108347392 heap: 623230976 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 413 heartbeat osd_stat(store_statfs(0x19a42b000/0x0/0x1bfc00000, data 0x1e78764/0x2093000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5074139 data_alloc: 218103808 data_used: 7106560
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 514883584 unmapped: 108347392 heap: 623230976 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 413 heartbeat osd_stat(store_statfs(0x19a42b000/0x0/0x1bfc00000, data 0x1e78764/0x2093000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 514719744 unmapped: 108511232 heap: 623230976 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 515604480 unmapped: 107626496 heap: 623230976 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 515604480 unmapped: 107626496 heap: 623230976 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 515604480 unmapped: 107626496 heap: 623230976 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 413 heartbeat osd_stat(store_statfs(0x19a42b000/0x0/0x1bfc00000, data 0x1e78764/0x2093000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5130939 data_alloc: 218103808 data_used: 15036416
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 515604480 unmapped: 107626496 heap: 623230976 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 515604480 unmapped: 107626496 heap: 623230976 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 413 heartbeat osd_stat(store_statfs(0x19a42b000/0x0/0x1bfc00000, data 0x1e78764/0x2093000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 515604480 unmapped: 107626496 heap: 623230976 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 515604480 unmapped: 107626496 heap: 623230976 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 413 heartbeat osd_stat(store_statfs(0x19a42b000/0x0/0x1bfc00000, data 0x1e78764/0x2093000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 515604480 unmapped: 107626496 heap: 623230976 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5130939 data_alloc: 218103808 data_used: 15036416
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 515604480 unmapped: 107626496 heap: 623230976 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 515604480 unmapped: 107626496 heap: 623230976 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 18.679121017s of 18.803747177s, submitted: 50
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 515973120 unmapped: 107257856 heap: 623230976 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 517537792 unmapped: 105693184 heap: 623230976 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 413 heartbeat osd_stat(store_statfs(0x199ec0000/0x0/0x1bfc00000, data 0x23dc764/0x25f7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 517709824 unmapped: 105521152 heap: 623230976 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 413 heartbeat osd_stat(store_statfs(0x199e9a000/0x0/0x1bfc00000, data 0x23fa764/0x2615000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5188177 data_alloc: 218103808 data_used: 15368192
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 517709824 unmapped: 105521152 heap: 623230976 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 517709824 unmapped: 105521152 heap: 623230976 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 6002.4 total, 600.0 interval
Cumulative writes: 73K writes, 293K keys, 73K commit groups, 1.0 writes per commit group, ingest: 0.30 GB, 0.05 MB/s
Cumulative WAL: 73K writes, 27K syncs, 2.69 writes per sync, written: 0.30 GB, 0.05 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 5372 writes, 23K keys, 5372 commit groups, 1.0 writes per commit group, ingest: 25.15 MB, 0.04 MB/s
Interval WAL: 5372 writes, 2074 syncs, 2.59 writes per sync, written: 0.02 GB, 0.04 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.42              0.00         1    0.419       0      0       0.0       0.0
 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.42              0.00         1    0.419       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.42              0.00         1    0.419       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 6002.4 total, 4800.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.4 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x5612cf175350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4.5e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **

** Compaction Stats [m-0] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-0] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 6002.4 total, 4800.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x5612cf175350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4.5e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [m-0] **

** Compaction Stats [m-1] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-1] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 6002.4 total, 4800.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 
0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 me
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 517709824 unmapped: 105521152 heap: 623230976 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 517709824 unmapped: 105521152 heap: 623230976 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 413 heartbeat osd_stat(store_statfs(0x199e9a000/0x0/0x1bfc00000, data 0x23fa764/0x2615000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 517709824 unmapped: 105521152 heap: 623230976 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5188177 data_alloc: 218103808 data_used: 15368192
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 413 heartbeat osd_stat(store_statfs(0x199e9a000/0x0/0x1bfc00000, data 0x23fa764/0x2615000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 517709824 unmapped: 105521152 heap: 623230976 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 517709824 unmapped: 105521152 heap: 623230976 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 517709824 unmapped: 105521152 heap: 623230976 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 517709824 unmapped: 105521152 heap: 623230976 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 413 heartbeat osd_stat(store_statfs(0x199e9a000/0x0/0x1bfc00000, data 0x23fa764/0x2615000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 517709824 unmapped: 105521152 heap: 623230976 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5188177 data_alloc: 218103808 data_used: 15368192
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 517709824 unmapped: 105521152 heap: 623230976 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 517709824 unmapped: 105521152 heap: 623230976 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 517709824 unmapped: 105521152 heap: 623230976 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 517709824 unmapped: 105521152 heap: 623230976 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 413 heartbeat osd_stat(store_statfs(0x199e9a000/0x0/0x1bfc00000, data 0x23fa764/0x2615000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 517709824 unmapped: 105521152 heap: 623230976 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 413 heartbeat osd_stat(store_statfs(0x199e9a000/0x0/0x1bfc00000, data 0x23fa764/0x2615000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5188177 data_alloc: 218103808 data_used: 15368192
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 517726208 unmapped: 105504768 heap: 623230976 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 517726208 unmapped: 105504768 heap: 623230976 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 517726208 unmapped: 105504768 heap: 623230976 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 517726208 unmapped: 105504768 heap: 623230976 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 413 heartbeat osd_stat(store_statfs(0x199e9a000/0x0/0x1bfc00000, data 0x23fa764/0x2615000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 517726208 unmapped: 105504768 heap: 623230976 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5188177 data_alloc: 218103808 data_used: 15368192
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 517726208 unmapped: 105504768 heap: 623230976 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 413 heartbeat osd_stat(store_statfs(0x199e9a000/0x0/0x1bfc00000, data 0x23fa764/0x2615000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 517726208 unmapped: 105504768 heap: 623230976 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 24.312072754s of 24.491689682s, submitted: 71
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 517742592 unmapped: 105488384 heap: 623230976 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 413 heartbeat osd_stat(store_statfs(0x199ea9000/0x0/0x1bfc00000, data 0x23fa764/0x2615000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 517750784 unmapped: 105480192 heap: 623230976 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 413 ms_handle_reset con 0x5612d0a61400 session 0x5612d4e82780
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 413 ms_handle_reset con 0x5612d2942000 session 0x5612d2926d20
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 517750784 unmapped: 105480192 heap: 623230976 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5182241 data_alloc: 218103808 data_used: 15355904
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 517750784 unmapped: 105480192 heap: 623230976 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 413 heartbeat osd_stat(store_statfs(0x199ea9000/0x0/0x1bfc00000, data 0x23fa764/0x2615000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 517750784 unmapped: 105480192 heap: 623230976 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 413 heartbeat osd_stat(store_statfs(0x199ea9000/0x0/0x1bfc00000, data 0x23fa764/0x2615000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 517750784 unmapped: 105480192 heap: 623230976 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 413 handle_osd_map epochs [413,414], i have 413, src has [1,414]
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 517750784 unmapped: 105480192 heap: 623230976 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 414 ms_handle_reset con 0x5612d1b52c00 session 0x5612d0614000
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 414 ms_handle_reset con 0x5612d7ffb000 session 0x5612d4e83c20
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 414 ms_handle_reset con 0x5612d0a61400 session 0x5612dbe0ef00
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 414 ms_handle_reset con 0x5612d1289000 session 0x5612e17130e0
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 414 ms_handle_reset con 0x5612d1b52c00 session 0x5612d34470e0
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 414 ms_handle_reset con 0x5612d2942000 session 0x5612d30df2c0
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 414 ms_handle_reset con 0x5612d1b52400 session 0x5612d0b62d20
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 414 ms_handle_reset con 0x5612d1b52400 session 0x5612d31b1c20
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 414 ms_handle_reset con 0x5612d0a61400 session 0x5612e1713e00
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 414 ms_handle_reset con 0x5612d1289000 session 0x5612d30b63c0
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 517791744 unmapped: 105439232 heap: 623230976 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 414 ms_handle_reset con 0x5612d1b52c00 session 0x5612d0a6b2c0
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5297501 data_alloc: 218103808 data_used: 15368192
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 517791744 unmapped: 105439232 heap: 623230976 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 414 heartbeat osd_stat(store_statfs(0x1990d4000/0x0/0x1bfc00000, data 0x31cb4a2/0x33e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 517791744 unmapped: 105439232 heap: 623230976 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 517808128 unmapped: 105422848 heap: 623230976 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 517808128 unmapped: 105422848 heap: 623230976 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 517808128 unmapped: 105422848 heap: 623230976 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5298941 data_alloc: 218103808 data_used: 15511552
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 517808128 unmapped: 105422848 heap: 623230976 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 414 heartbeat osd_stat(store_statfs(0x1990d4000/0x0/0x1bfc00000, data 0x31cb4a2/0x33e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 517808128 unmapped: 105422848 heap: 623230976 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 15.443180084s of 15.677851677s, submitted: 75
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 517808128 unmapped: 105422848 heap: 623230976 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 414 ms_handle_reset con 0x5612d9185c00 session 0x5612d480ba40
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 414 handle_osd_map epochs [414,415], i have 414, src has [1,415]
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 517816320 unmapped: 105414656 heap: 623230976 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 518545408 unmapped: 104685568 heap: 623230976 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5399203 data_alloc: 234881024 data_used: 25563136
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 518807552 unmapped: 104423424 heap: 623230976 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 518823936 unmapped: 104407040 heap: 623230976 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 415 heartbeat osd_stat(store_statfs(0x1990ae000/0x0/0x1bfc00000, data 0x31f11c2/0x3410000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 518840320 unmapped: 104390656 heap: 623230976 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 518905856 unmapped: 104325120 heap: 623230976 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 518905856 unmapped: 104325120 heap: 623230976 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5409763 data_alloc: 234881024 data_used: 26648576
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 415 handle_osd_map epochs [415,416], i have 415, src has [1,416]
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 518914048 unmapped: 104316928 heap: 623230976 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 518930432 unmapped: 104300544 heap: 623230976 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 416 heartbeat osd_stat(store_statfs(0x1990aa000/0x0/0x1bfc00000, data 0x31f2d74/0x3413000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 518955008 unmapped: 104275968 heap: 623230976 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 518955008 unmapped: 104275968 heap: 623230976 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.532127380s of 11.108204842s, submitted: 215
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 518955008 unmapped: 104275968 heap: 623230976 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5413537 data_alloc: 234881024 data_used: 26669056
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 518987776 unmapped: 104243200 heap: 623230976 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 528752640 unmapped: 94478336 heap: 623230976 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 416 heartbeat osd_stat(store_statfs(0x197fa2000/0x0/0x1bfc00000, data 0x42edd74/0x450e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 529547264 unmapped: 93683712 heap: 623230976 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 416 ms_handle_reset con 0x5612d2942000 session 0x5612d2646f00
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 416 ms_handle_reset con 0x5612d5b1a000 session 0x5612d0a14000
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 529547264 unmapped: 93683712 heap: 623230976 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 416 ms_handle_reset con 0x5612d9182400 session 0x5612d31b05a0
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 527327232 unmapped: 95903744 heap: 623230976 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5400644 data_alloc: 234881024 data_used: 23232512
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 527335424 unmapped: 95895552 heap: 623230976 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 416 heartbeat osd_stat(store_statfs(0x1986a2000/0x0/0x1bfc00000, data 0x34d7d64/0x36f7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 527335424 unmapped: 95895552 heap: 623230976 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 527335424 unmapped: 95895552 heap: 623230976 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 527335424 unmapped: 95895552 heap: 623230976 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 416 heartbeat osd_stat(store_statfs(0x1986a2000/0x0/0x1bfc00000, data 0x34d7d64/0x36f7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 527335424 unmapped: 95895552 heap: 623230976 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5401780 data_alloc: 234881024 data_used: 23261184
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 527335424 unmapped: 95895552 heap: 623230976 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 527335424 unmapped: 95895552 heap: 623230976 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 527335424 unmapped: 95895552 heap: 623230976 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 527335424 unmapped: 95895552 heap: 623230976 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 527335424 unmapped: 95895552 heap: 623230976 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 416 heartbeat osd_stat(store_statfs(0x1986a2000/0x0/0x1bfc00000, data 0x34d7d64/0x36f7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5401780 data_alloc: 234881024 data_used: 23261184
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 527335424 unmapped: 95895552 heap: 623230976 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 527335424 unmapped: 95895552 heap: 623230976 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 416 heartbeat osd_stat(store_statfs(0x1986a2000/0x0/0x1bfc00000, data 0x34d7d64/0x36f7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 17.754859924s of 18.342123032s, submitted: 262
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 416 ms_handle_reset con 0x5612d0a61400 session 0x5612d077cb40
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 416 ms_handle_reset con 0x5612d1289000 session 0x5612dc404000
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 527269888 unmapped: 95961088 heap: 623230976 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 416 heartbeat osd_stat(store_statfs(0x198dc7000/0x0/0x1bfc00000, data 0x34d7d64/0x36f7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 527269888 unmapped: 95961088 heap: 623230976 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 527294464 unmapped: 95936512 heap: 623230976 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 416 ms_handle_reset con 0x5612d0a61400 session 0x5612d30b7e00
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5044358 data_alloc: 218103808 data_used: 7135232
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 527294464 unmapped: 95936512 heap: 623230976 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 527302656 unmapped: 95928320 heap: 623230976 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 527302656 unmapped: 95928320 heap: 623230976 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 416 heartbeat osd_stat(store_statfs(0x19acdb000/0x0/0x1bfc00000, data 0x15c4cf2/0x17e2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 527302656 unmapped: 95928320 heap: 623230976 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 416 heartbeat osd_stat(store_statfs(0x19acdb000/0x0/0x1bfc00000, data 0x15c4cf2/0x17e2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 527302656 unmapped: 95928320 heap: 623230976 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5044358 data_alloc: 218103808 data_used: 7135232
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 527302656 unmapped: 95928320 heap: 623230976 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 527302656 unmapped: 95928320 heap: 623230976 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 527302656 unmapped: 95928320 heap: 623230976 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 527302656 unmapped: 95928320 heap: 623230976 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 416 heartbeat osd_stat(store_statfs(0x19acdb000/0x0/0x1bfc00000, data 0x15c4cf2/0x17e2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 527302656 unmapped: 95928320 heap: 623230976 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 416 heartbeat osd_stat(store_statfs(0x19acdb000/0x0/0x1bfc00000, data 0x15c4cf2/0x17e2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5044358 data_alloc: 218103808 data_used: 7135232
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 527302656 unmapped: 95928320 heap: 623230976 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 527302656 unmapped: 95928320 heap: 623230976 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 527302656 unmapped: 95928320 heap: 623230976 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 416 heartbeat osd_stat(store_statfs(0x19acdb000/0x0/0x1bfc00000, data 0x15c4cf2/0x17e2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 527310848 unmapped: 95920128 heap: 623230976 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 527310848 unmapped: 95920128 heap: 623230976 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5044358 data_alloc: 218103808 data_used: 7135232
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 416 heartbeat osd_stat(store_statfs(0x19acdb000/0x0/0x1bfc00000, data 0x15c4cf2/0x17e2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 527310848 unmapped: 95920128 heap: 623230976 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 527310848 unmapped: 95920128 heap: 623230976 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 527319040 unmapped: 95911936 heap: 623230976 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 527319040 unmapped: 95911936 heap: 623230976 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 527319040 unmapped: 95911936 heap: 623230976 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5044358 data_alloc: 218103808 data_used: 7135232
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 527319040 unmapped: 95911936 heap: 623230976 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 416 heartbeat osd_stat(store_statfs(0x19acdb000/0x0/0x1bfc00000, data 0x15c4cf2/0x17e2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 527319040 unmapped: 95911936 heap: 623230976 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 527319040 unmapped: 95911936 heap: 623230976 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 527319040 unmapped: 95911936 heap: 623230976 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 416 heartbeat osd_stat(store_statfs(0x19acdb000/0x0/0x1bfc00000, data 0x15c4cf2/0x17e2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 527319040 unmapped: 95911936 heap: 623230976 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5044358 data_alloc: 218103808 data_used: 7135232
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 416 heartbeat osd_stat(store_statfs(0x19acdb000/0x0/0x1bfc00000, data 0x15c4cf2/0x17e2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 527327232 unmapped: 95903744 heap: 623230976 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 416 heartbeat osd_stat(store_statfs(0x19acdb000/0x0/0x1bfc00000, data 0x15c4cf2/0x17e2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 527327232 unmapped: 95903744 heap: 623230976 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 527327232 unmapped: 95903744 heap: 623230976 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 416 ms_handle_reset con 0x5612d156d400 session 0x5612d30de1e0
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 416 ms_handle_reset con 0x5612d9182c00 session 0x5612d30df680
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 416 ms_handle_reset con 0x5612d3b8e000 session 0x5612d4e832c0
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 416 ms_handle_reset con 0x5612d3b8cc00 session 0x5612d3a1d4a0
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 527335424 unmapped: 95895552 heap: 623230976 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 31.921144485s of 32.024456024s, submitted: 47
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 416 ms_handle_reset con 0x5612d3b8cc00 session 0x5612d30b65a0
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 416 ms_handle_reset con 0x5612d0a61400 session 0x5612d4e825a0
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 416 ms_handle_reset con 0x5612d156d400 session 0x5612e17134a0
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 416 ms_handle_reset con 0x5612d3b8e000 session 0x5612d2bb4000
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 416 ms_handle_reset con 0x5612d9182c00 session 0x5612d30b7860
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 527564800 unmapped: 95666176 heap: 623230976 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5088765 data_alloc: 218103808 data_used: 7135232
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 527564800 unmapped: 95666176 heap: 623230976 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 527564800 unmapped: 95666176 heap: 623230976 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 416 heartbeat osd_stat(store_statfs(0x19a908000/0x0/0x1bfc00000, data 0x1997d02/0x1bb6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 527564800 unmapped: 95666176 heap: 623230976 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 527564800 unmapped: 95666176 heap: 623230976 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 527564800 unmapped: 95666176 heap: 623230976 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 416 heartbeat osd_stat(store_statfs(0x19a908000/0x0/0x1bfc00000, data 0x1997d02/0x1bb6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5088765 data_alloc: 218103808 data_used: 7135232
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 527564800 unmapped: 95666176 heap: 623230976 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 527564800 unmapped: 95666176 heap: 623230976 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 527564800 unmapped: 95666176 heap: 623230976 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 527564800 unmapped: 95666176 heap: 623230976 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 416 heartbeat osd_stat(store_statfs(0x19a908000/0x0/0x1bfc00000, data 0x1997d02/0x1bb6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 416 heartbeat osd_stat(store_statfs(0x19a908000/0x0/0x1bfc00000, data 0x1997d02/0x1bb6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 527564800 unmapped: 95666176 heap: 623230976 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 416 ms_handle_reset con 0x5612d2bcb800 session 0x5612d314f0e0
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 416 heartbeat osd_stat(store_statfs(0x19a908000/0x0/0x1bfc00000, data 0x1997d02/0x1bb6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5088765 data_alloc: 218103808 data_used: 7135232
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 527564800 unmapped: 95666176 heap: 623230976 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 416 ms_handle_reset con 0x5612d1576800 session 0x5612d18e90e0
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 527572992 unmapped: 95657984 heap: 623230976 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 527572992 unmapped: 95657984 heap: 623230976 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 416 ms_handle_reset con 0x5612d27b3400 session 0x5612d160b680
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 14.238080978s of 14.302499771s, submitted: 27
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 416 ms_handle_reset con 0x5612d8e81c00 session 0x5612d2e02d20
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 416 heartbeat osd_stat(store_statfs(0x19a906000/0x0/0x1bfc00000, data 0x1997d35/0x1bb8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 527572992 unmapped: 95657984 heap: 623230976 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 416 heartbeat osd_stat(store_statfs(0x19a906000/0x0/0x1bfc00000, data 0x1997d35/0x1bb8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 527581184 unmapped: 95649792 heap: 623230976 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5118692 data_alloc: 218103808 data_used: 10772480
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 527581184 unmapped: 95649792 heap: 623230976 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 527581184 unmapped: 95649792 heap: 623230976 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 416 heartbeat osd_stat(store_statfs(0x19a906000/0x0/0x1bfc00000, data 0x1997d35/0x1bb8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 527581184 unmapped: 95649792 heap: 623230976 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 527581184 unmapped: 95649792 heap: 623230976 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 527581184 unmapped: 95649792 heap: 623230976 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5118692 data_alloc: 218103808 data_used: 10772480
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 527581184 unmapped: 95649792 heap: 623230976 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 416 heartbeat osd_stat(store_statfs(0x19a906000/0x0/0x1bfc00000, data 0x1997d35/0x1bb8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 527581184 unmapped: 95649792 heap: 623230976 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 527581184 unmapped: 95649792 heap: 623230976 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 527581184 unmapped: 95649792 heap: 623230976 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 527581184 unmapped: 95649792 heap: 623230976 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5118692 data_alloc: 218103808 data_used: 10772480
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 527581184 unmapped: 95649792 heap: 623230976 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.935240746s of 12.955540657s, submitted: 7
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 416 heartbeat osd_stat(store_statfs(0x19a906000/0x0/0x1bfc00000, data 0x1997d35/0x1bb8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 529088512 unmapped: 94142464 heap: 623230976 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 530677760 unmapped: 92553216 heap: 623230976 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 529981440 unmapped: 93249536 heap: 623230976 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 416 heartbeat osd_stat(store_statfs(0x19a131000/0x0/0x1bfc00000, data 0x2164d35/0x2385000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 529981440 unmapped: 93249536 heap: 623230976 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5195132 data_alloc: 218103808 data_used: 12197888
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 529981440 unmapped: 93249536 heap: 623230976 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 529981440 unmapped: 93249536 heap: 623230976 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 416 heartbeat osd_stat(store_statfs(0x19a131000/0x0/0x1bfc00000, data 0x2164d35/0x2385000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 529981440 unmapped: 93249536 heap: 623230976 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 416 heartbeat osd_stat(store_statfs(0x19a131000/0x0/0x1bfc00000, data 0x2164d35/0x2385000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 529915904 unmapped: 93315072 heap: 623230976 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 529940480 unmapped: 93290496 heap: 623230976 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5194488 data_alloc: 218103808 data_used: 12193792
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 529940480 unmapped: 93290496 heap: 623230976 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 529940480 unmapped: 93290496 heap: 623230976 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 416 heartbeat osd_stat(store_statfs(0x19a10a000/0x0/0x1bfc00000, data 0x2193d35/0x23b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 529940480 unmapped: 93290496 heap: 623230976 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.447732925s of 12.260262489s, submitted: 160
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 416 ms_handle_reset con 0x5612d297b800 session 0x5612d314e000
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 529809408 unmapped: 93421568 heap: 623230976 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 416 heartbeat osd_stat(store_statfs(0x19a10a000/0x0/0x1bfc00000, data 0x2193d35/0x23b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 416 ms_handle_reset con 0x5612d3f4fc00 session 0x5612d2e03a40
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 416 ms_handle_reset con 0x5612d0bd8000 session 0x5612d18e9e00
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 416 ms_handle_reset con 0x5612d297b800 session 0x5612d3447e00
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 530874368 unmapped: 96559104 heap: 627433472 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5261969 data_alloc: 218103808 data_used: 12193792
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 530874368 unmapped: 96559104 heap: 627433472 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 530874368 unmapped: 96559104 heap: 627433472 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 530874368 unmapped: 96559104 heap: 627433472 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 416 heartbeat osd_stat(store_statfs(0x19976b000/0x0/0x1bfc00000, data 0x2b31d97/0x2d53000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 416 ms_handle_reset con 0x5612d3b8cc00 session 0x5612d2e03a40
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 530874368 unmapped: 96559104 heap: 627433472 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 416 heartbeat osd_stat(store_statfs(0x19976b000/0x0/0x1bfc00000, data 0x2b31d97/0x2d53000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 530874368 unmapped: 96559104 heap: 627433472 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 416 ms_handle_reset con 0x5612d0bd9000 session 0x5612d314e000
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 416 heartbeat osd_stat(store_statfs(0x19976b000/0x0/0x1bfc00000, data 0x2b31d97/0x2d53000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5261969 data_alloc: 218103808 data_used: 12193792
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 416 heartbeat osd_stat(store_statfs(0x19976b000/0x0/0x1bfc00000, data 0x2b31d97/0x2d53000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 416 ms_handle_reset con 0x5612d6c91400 session 0x5612d160b680
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 530874368 unmapped: 96559104 heap: 627433472 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 416 ms_handle_reset con 0x5612d0bd6c00 session 0x5612d0772d20
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 416 ms_handle_reset con 0x5612d8e81c00 session 0x5612d18e90e0
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 530874368 unmapped: 96559104 heap: 627433472 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 530874368 unmapped: 96559104 heap: 627433472 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 530792448 unmapped: 96641024 heap: 627433472 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 416 heartbeat osd_stat(store_statfs(0x19976a000/0x0/0x1bfc00000, data 0x2b31da6/0x2d54000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 530800640 unmapped: 96632832 heap: 627433472 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5328371 data_alloc: 234881024 data_used: 21336064
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 530800640 unmapped: 96632832 heap: 627433472 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 416 heartbeat osd_stat(store_statfs(0x19976a000/0x0/0x1bfc00000, data 0x2b31da6/0x2d54000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 530800640 unmapped: 96632832 heap: 627433472 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 416 heartbeat osd_stat(store_statfs(0x19976a000/0x0/0x1bfc00000, data 0x2b31da6/0x2d54000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 530800640 unmapped: 96632832 heap: 627433472 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 530800640 unmapped: 96632832 heap: 627433472 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 416 heartbeat osd_stat(store_statfs(0x19976a000/0x0/0x1bfc00000, data 0x2b31da6/0x2d54000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 530800640 unmapped: 96632832 heap: 627433472 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5328371 data_alloc: 234881024 data_used: 21336064
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 530800640 unmapped: 96632832 heap: 627433472 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 530800640 unmapped: 96632832 heap: 627433472 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 416 heartbeat osd_stat(store_statfs(0x19976a000/0x0/0x1bfc00000, data 0x2b31da6/0x2d54000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 530800640 unmapped: 96632832 heap: 627433472 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 20.026931763s of 20.635303497s, submitted: 34
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 530800640 unmapped: 96632832 heap: 627433472 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 530800640 unmapped: 96632832 heap: 627433472 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 416 heartbeat osd_stat(store_statfs(0x19976a000/0x0/0x1bfc00000, data 0x2b31da6/0x2d54000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,1,0,1])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5337815 data_alloc: 234881024 data_used: 21753856
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 530800640 unmapped: 96632832 heap: 627433472 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 532250624 unmapped: 95182848 heap: 627433472 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 532389888 unmapped: 95043584 heap: 627433472 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 531169280 unmapped: 96264192 heap: 627433472 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 416 heartbeat osd_stat(store_statfs(0x199508000/0x0/0x1bfc00000, data 0x2d93da6/0x2fb6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 531177472 unmapped: 96256000 heap: 627433472 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5392733 data_alloc: 234881024 data_used: 21852160
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 531177472 unmapped: 96256000 heap: 627433472 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 545202176 unmapped: 82231296 heap: 627433472 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 531218432 unmapped: 99893248 heap: 631111680 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 4.276785374s of 10.128667831s, submitted: 87
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 531218432 unmapped: 99893248 heap: 631111680 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 416 ms_handle_reset con 0x5612d0bd9000 session 0x5612e17121e0
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 532267008 unmapped: 98844672 heap: 631111680 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 416 heartbeat osd_stat(store_statfs(0x19838c000/0x0/0x1bfc00000, data 0x3f0ddcf/0x4131000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 416 ms_handle_reset con 0x5612d2942800 session 0x5612d0a6af00
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5504650 data_alloc: 234881024 data_used: 22749184
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 416 ms_handle_reset con 0x5612d27b2c00 session 0x5612dbe0fc20
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 532267008 unmapped: 98844672 heap: 631111680 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 416 ms_handle_reset con 0x5612d2bcbc00 session 0x5612d30b7860
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 532283392 unmapped: 98828288 heap: 631111680 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 416 ms_handle_reset con 0x5612d5b1a400 session 0x5612d2926780
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 416 ms_handle_reset con 0x5612d803dc00 session 0x5612d26465a0
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 532307968 unmapped: 98803712 heap: 631111680 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 532307968 unmapped: 98803712 heap: 631111680 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 416 heartbeat osd_stat(store_statfs(0x198385000/0x0/0x1bfc00000, data 0x3f15e08/0x4139000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 416 ms_handle_reset con 0x5612d7ff7800 session 0x5612d0632780
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 532144128 unmapped: 98967552 heap: 631111680 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 416 ms_handle_reset con 0x5612d27b2c00 session 0x5612d314fc20
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5369289 data_alloc: 234881024 data_used: 17313792
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 532160512 unmapped: 98951168 heap: 631111680 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 534290432 unmapped: 96821248 heap: 631111680 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 539500544 unmapped: 91611136 heap: 631111680 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 539500544 unmapped: 91611136 heap: 631111680 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 539500544 unmapped: 91611136 heap: 631111680 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 416 heartbeat osd_stat(store_statfs(0x198f55000/0x0/0x1bfc00000, data 0x3346df8/0x3569000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5468216 data_alloc: 234881024 data_used: 31375360
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 539500544 unmapped: 91611136 heap: 631111680 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 539500544 unmapped: 91611136 heap: 631111680 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 416 heartbeat osd_stat(store_statfs(0x198f55000/0x0/0x1bfc00000, data 0x3346df8/0x3569000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 539500544 unmapped: 91611136 heap: 631111680 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 539500544 unmapped: 91611136 heap: 631111680 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 539500544 unmapped: 91611136 heap: 631111680 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5468216 data_alloc: 234881024 data_used: 31375360
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 539500544 unmapped: 91611136 heap: 631111680 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 539500544 unmapped: 91611136 heap: 631111680 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 416 heartbeat osd_stat(store_statfs(0x198f55000/0x0/0x1bfc00000, data 0x3346df8/0x3569000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 17.852506638s of 18.491353989s, submitted: 64
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 540172288 unmapped: 90939392 heap: 631111680 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 540172288 unmapped: 90939392 heap: 631111680 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 540573696 unmapped: 90537984 heap: 631111680 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 416 ms_handle_reset con 0x5612d1b50000 session 0x5612dbe0e000
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5539770 data_alloc: 234881024 data_used: 31346688
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 540811264 unmapped: 90300416 heap: 631111680 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 416 heartbeat osd_stat(store_statfs(0x198601000/0x0/0x1bfc00000, data 0x3c94dd5/0x3eb6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 540827648 unmapped: 90284032 heap: 631111680 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 416 heartbeat osd_stat(store_statfs(0x198601000/0x0/0x1bfc00000, data 0x3c94dd5/0x3eb6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 540909568 unmapped: 90202112 heap: 631111680 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 540909568 unmapped: 90202112 heap: 631111680 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 542105600 unmapped: 89006080 heap: 631111680 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5552512 data_alloc: 234881024 data_used: 31854592
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 542105600 unmapped: 89006080 heap: 631111680 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 416 heartbeat osd_stat(store_statfs(0x1985c5000/0x0/0x1bfc00000, data 0x3ccfdd5/0x3ef1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 542179328 unmapped: 88932352 heap: 631111680 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 542179328 unmapped: 88932352 heap: 631111680 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 542179328 unmapped: 88932352 heap: 631111680 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 416 heartbeat osd_stat(store_statfs(0x1985c5000/0x0/0x1bfc00000, data 0x3ccfdd5/0x3ef1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 542179328 unmapped: 88932352 heap: 631111680 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5567708 data_alloc: 234881024 data_used: 32088064
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 542179328 unmapped: 88932352 heap: 631111680 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.832390785s of 14.083070755s, submitted: 116
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 542195712 unmapped: 88915968 heap: 631111680 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 542195712 unmapped: 88915968 heap: 631111680 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 542195712 unmapped: 88915968 heap: 631111680 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 542195712 unmapped: 88915968 heap: 631111680 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 416 heartbeat osd_stat(store_statfs(0x1985c8000/0x0/0x1bfc00000, data 0x3cd3dd5/0x3ef5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5563200 data_alloc: 234881024 data_used: 32100352
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 542195712 unmapped: 88915968 heap: 631111680 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 416 heartbeat osd_stat(store_statfs(0x1985c8000/0x0/0x1bfc00000, data 0x3cd3dd5/0x3ef5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 542195712 unmapped: 88915968 heap: 631111680 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 416 ms_handle_reset con 0x5612d3f50800 session 0x5612d3a1d680
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 416 ms_handle_reset con 0x5612d2bcb800 session 0x5612d30b74a0
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 536068096 unmapped: 95043584 heap: 631111680 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 416 ms_handle_reset con 0x5612d8e81c00 session 0x5612e1713e00
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 536068096 unmapped: 95043584 heap: 631111680 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 416 heartbeat osd_stat(store_statfs(0x199928000/0x0/0x1bfc00000, data 0x25ddd63/0x27fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 536068096 unmapped: 95043584 heap: 631111680 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5281257 data_alloc: 218103808 data_used: 16752640
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 536068096 unmapped: 95043584 heap: 631111680 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 536068096 unmapped: 95043584 heap: 631111680 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 536068096 unmapped: 95043584 heap: 631111680 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.507818222s of 11.623309135s, submitted: 45
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 416 ms_handle_reset con 0x5612d2943000 session 0x5612d4e82780
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 416 ms_handle_reset con 0x5612d803f400 session 0x5612d4e825a0
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 416 heartbeat osd_stat(store_statfs(0x199928000/0x0/0x1bfc00000, data 0x25ddd63/0x27fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,1])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 536068096 unmapped: 95043584 heap: 631111680 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 536068096 unmapped: 95043584 heap: 631111680 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5085108 data_alloc: 218103808 data_used: 7135232
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 416 ms_handle_reset con 0x5612d0bd8000 session 0x5612d314eb40
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 530358272 unmapped: 100753408 heap: 631111680 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 530358272 unmapped: 100753408 heap: 631111680 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 530358272 unmapped: 100753408 heap: 631111680 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 530358272 unmapped: 100753408 heap: 631111680 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 416 heartbeat osd_stat(store_statfs(0x19a92e000/0x0/0x1bfc00000, data 0x15c4cf2/0x17e2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 530358272 unmapped: 100753408 heap: 631111680 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5085108 data_alloc: 218103808 data_used: 7135232
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 530358272 unmapped: 100753408 heap: 631111680 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 530358272 unmapped: 100753408 heap: 631111680 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 416 ms_handle_reset con 0x5612d3694800 session 0x5612dbe0f2c0
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 416 ms_handle_reset con 0x5612d8e81c00 session 0x5612d26472c0
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 416 ms_handle_reset con 0x5612d8e81c00 session 0x5612d27a0b40
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 8.018912315s of 10.002582550s, submitted: 64
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 530743296 unmapped: 100368384 heap: 631111680 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 416 ms_handle_reset con 0x5612d0bd8000 session 0x5612d30b65a0
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 416 ms_handle_reset con 0x5612d2943000 session 0x5612d480ab40
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 530743296 unmapped: 100368384 heap: 631111680 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 416 heartbeat osd_stat(store_statfs(0x199d3c000/0x0/0x1bfc00000, data 0x2563d54/0x2782000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 530743296 unmapped: 100368384 heap: 631111680 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5209208 data_alloc: 218103808 data_used: 7135232
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 530751488 unmapped: 100360192 heap: 631111680 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 416 heartbeat osd_stat(store_statfs(0x199d3c000/0x0/0x1bfc00000, data 0x2563d54/0x2782000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 530751488 unmapped: 100360192 heap: 631111680 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 416 heartbeat osd_stat(store_statfs(0x199d3c000/0x0/0x1bfc00000, data 0x2563d54/0x2782000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 530751488 unmapped: 100360192 heap: 631111680 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 416 ms_handle_reset con 0x5612d4653400 session 0x5612d3463e00
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 416 ms_handle_reset con 0x5612d31b4400 session 0x5612d0a6b4a0
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 530751488 unmapped: 100360192 heap: 631111680 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 416 heartbeat osd_stat(store_statfs(0x199d3c000/0x0/0x1bfc00000, data 0x2563d54/0x2782000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 416 ms_handle_reset con 0x5612d0bd8000 session 0x5612d2647a40
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 530751488 unmapped: 100360192 heap: 631111680 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 416 ms_handle_reset con 0x5612d2943000 session 0x5612d2926d20
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5213939 data_alloc: 218103808 data_used: 7135232
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 530759680 unmapped: 100352000 heap: 631111680 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 530759680 unmapped: 100352000 heap: 631111680 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 416 heartbeat osd_stat(store_statfs(0x199d17000/0x0/0x1bfc00000, data 0x2587d64/0x27a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 530767872 unmapped: 100343808 heap: 631111680 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 530767872 unmapped: 100343808 heap: 631111680 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 530767872 unmapped: 100343808 heap: 631111680 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5328923 data_alloc: 234881024 data_used: 23146496
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 530767872 unmapped: 100343808 heap: 631111680 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 530767872 unmapped: 100343808 heap: 631111680 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 530767872 unmapped: 100343808 heap: 631111680 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 416 heartbeat osd_stat(store_statfs(0x199d17000/0x0/0x1bfc00000, data 0x2587d64/0x27a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 530767872 unmapped: 100343808 heap: 631111680 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 416 heartbeat osd_stat(store_statfs(0x199d17000/0x0/0x1bfc00000, data 0x2587d64/0x27a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 530767872 unmapped: 100343808 heap: 631111680 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5328923 data_alloc: 234881024 data_used: 23146496
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 530767872 unmapped: 100343808 heap: 631111680 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 530776064 unmapped: 100335616 heap: 631111680 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 530776064 unmapped: 100335616 heap: 631111680 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 20.776540756s of 20.840459824s, submitted: 17
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 534249472 unmapped: 96862208 heap: 631111680 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 535412736 unmapped: 95698944 heap: 631111680 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 416 heartbeat osd_stat(store_statfs(0x199218000/0x0/0x1bfc00000, data 0x307ed64/0x329e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5441249 data_alloc: 234881024 data_used: 25280512
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 416 ms_handle_reset con 0x5612d1987400 session 0x5612d2bb5e00
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 416 ms_handle_reset con 0x5612d7413400 session 0x5612d1992960
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 416 ms_handle_reset con 0x5612d803c000 session 0x5612d480a1e0
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 416 ms_handle_reset con 0x5612d0bd8000 session 0x5612d480a5a0
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 535846912 unmapped: 95264768 heap: 631111680 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 416 ms_handle_reset con 0x5612d1987400 session 0x5612d31b0b40
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 416 ms_handle_reset con 0x5612d2943000 session 0x5612d0a54780
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 416 ms_handle_reset con 0x5612d7413400 session 0x5612d314f680
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 416 ms_handle_reset con 0x5612d2945400 session 0x5612d30cad20
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 416 ms_handle_reset con 0x5612d0bd8000 session 0x5612dbe0eb40
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 535896064 unmapped: 95215616 heap: 631111680 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 416 heartbeat osd_stat(store_statfs(0x1987af000/0x0/0x1bfc00000, data 0x3aeed74/0x3d0f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 535904256 unmapped: 95207424 heap: 631111680 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 535904256 unmapped: 95207424 heap: 631111680 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 535904256 unmapped: 95207424 heap: 631111680 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5530025 data_alloc: 234881024 data_used: 25284608
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 535904256 unmapped: 95207424 heap: 631111680 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 416 heartbeat osd_stat(store_statfs(0x19878e000/0x0/0x1bfc00000, data 0x3b0fd74/0x3d30000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 416 ms_handle_reset con 0x5612d2bb3400 session 0x5612d160a1e0
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 535904256 unmapped: 95207424 heap: 631111680 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 416 heartbeat osd_stat(store_statfs(0x19878e000/0x0/0x1bfc00000, data 0x3b0fd74/0x3d30000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 535904256 unmapped: 95207424 heap: 631111680 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.503906250s of 10.000720024s, submitted: 157
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 416 ms_handle_reset con 0x5612d3694c00 session 0x5612cff53e00
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 416 ms_handle_reset con 0x5612d803fc00 session 0x5612d2e03680
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 535928832 unmapped: 95182848 heap: 631111680 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 536813568 unmapped: 94298112 heap: 631111680 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 416 heartbeat osd_stat(store_statfs(0x19878e000/0x0/0x1bfc00000, data 0x3b0fd74/0x3d30000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5587889 data_alloc: 234881024 data_used: 33366016
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 536813568 unmapped: 94298112 heap: 631111680 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 536813568 unmapped: 94298112 heap: 631111680 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 416 heartbeat osd_stat(store_statfs(0x19878e000/0x0/0x1bfc00000, data 0x3b0fd74/0x3d30000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 536813568 unmapped: 94298112 heap: 631111680 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 536813568 unmapped: 94298112 heap: 631111680 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 536813568 unmapped: 94298112 heap: 631111680 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5587801 data_alloc: 234881024 data_used: 33366016
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 536813568 unmapped: 94298112 heap: 631111680 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 416 heartbeat osd_stat(store_statfs(0x19878e000/0x0/0x1bfc00000, data 0x3b0fd74/0x3d30000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 536813568 unmapped: 94298112 heap: 631111680 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 536821760 unmapped: 94289920 heap: 631111680 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 536821760 unmapped: 94289920 heap: 631111680 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 416 heartbeat osd_stat(store_statfs(0x19878e000/0x0/0x1bfc00000, data 0x3b0fd74/0x3d30000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 416 heartbeat osd_stat(store_statfs(0x19878e000/0x0/0x1bfc00000, data 0x3b0fd74/0x3d30000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.954433441s of 10.969482422s, submitted: 5
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 416 ms_handle_reset con 0x5612d9182400 session 0x5612d0b70f00
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 416 ms_handle_reset con 0x5612dea3cc00 session 0x5612d3a1dc20
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 536846336 unmapped: 94265344 heap: 631111680 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 416 ms_handle_reset con 0x5612d9182400 session 0x5612d30ca3c0
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5363130 data_alloc: 218103808 data_used: 16453632
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 537747456 unmapped: 93364224 heap: 631111680 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 416 heartbeat osd_stat(store_statfs(0x19911c000/0x0/0x1bfc00000, data 0x2dcad02/0x2fe9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2373f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 537067520 unmapped: 94044160 heap: 631111680 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 537067520 unmapped: 94044160 heap: 631111680 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 537067520 unmapped: 94044160 heap: 631111680 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 537067520 unmapped: 94044160 heap: 631111680 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5379658 data_alloc: 234881024 data_used: 17219584
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 537067520 unmapped: 94044160 heap: 631111680 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 537067520 unmapped: 94044160 heap: 631111680 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 416 heartbeat osd_stat(store_statfs(0x19829b000/0x0/0x1bfc00000, data 0x2e5bd02/0x307a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x248df9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 537001984 unmapped: 94109696 heap: 631111680 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 537001984 unmapped: 94109696 heap: 631111680 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 537001984 unmapped: 94109696 heap: 631111680 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5374314 data_alloc: 234881024 data_used: 17219584
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 537001984 unmapped: 94109696 heap: 631111680 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 537001984 unmapped: 94109696 heap: 631111680 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 416 ms_handle_reset con 0x5612d7ff7400 session 0x5612d480b2c0
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 537001984 unmapped: 94109696 heap: 631111680 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 416 heartbeat osd_stat(store_statfs(0x198284000/0x0/0x1bfc00000, data 0x2e7bd02/0x309a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x248df9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 416 heartbeat osd_stat(store_statfs(0x198284000/0x0/0x1bfc00000, data 0x2e7bd02/0x309a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x248df9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 537010176 unmapped: 94101504 heap: 631111680 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 14.276833534s of 14.813066483s, submitted: 187
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 538083328 unmapped: 93028352 heap: 631111680 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 416 ms_handle_reset con 0x5612d5b1b000 session 0x5612d195f0e0
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 416 heartbeat osd_stat(store_statfs(0x198284000/0x0/0x1bfc00000, data 0x2e7bd02/0x309a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x248df9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5116294 data_alloc: 218103808 data_used: 7135232
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 538083328 unmapped: 93028352 heap: 631111680 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 538083328 unmapped: 93028352 heap: 631111680 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 538083328 unmapped: 93028352 heap: 631111680 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 538083328 unmapped: 93028352 heap: 631111680 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 416 heartbeat osd_stat(store_statfs(0x199b3c000/0x0/0x1bfc00000, data 0x15c4cf2/0x17e2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x248df9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 538083328 unmapped: 93028352 heap: 631111680 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5116294 data_alloc: 218103808 data_used: 7135232
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 538083328 unmapped: 93028352 heap: 631111680 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 416 heartbeat osd_stat(store_statfs(0x199b3c000/0x0/0x1bfc00000, data 0x15c4cf2/0x17e2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x248df9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 538091520 unmapped: 93020160 heap: 631111680 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 538091520 unmapped: 93020160 heap: 631111680 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 538091520 unmapped: 93020160 heap: 631111680 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 538091520 unmapped: 93020160 heap: 631111680 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 416 heartbeat osd_stat(store_statfs(0x199b3c000/0x0/0x1bfc00000, data 0x15c4cf2/0x17e2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x248df9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5116294 data_alloc: 218103808 data_used: 7135232
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 538091520 unmapped: 93020160 heap: 631111680 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 538091520 unmapped: 93020160 heap: 631111680 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 538091520 unmapped: 93020160 heap: 631111680 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 538091520 unmapped: 93020160 heap: 631111680 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 538099712 unmapped: 93011968 heap: 631111680 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 416 heartbeat osd_stat(store_statfs(0x199b3c000/0x0/0x1bfc00000, data 0x15c4cf2/0x17e2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x248df9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5116294 data_alloc: 218103808 data_used: 7135232
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 538099712 unmapped: 93011968 heap: 631111680 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 538099712 unmapped: 93011968 heap: 631111680 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 416 heartbeat osd_stat(store_statfs(0x199b3c000/0x0/0x1bfc00000, data 0x15c4cf2/0x17e2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x248df9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 538099712 unmapped: 93011968 heap: 631111680 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 538099712 unmapped: 93011968 heap: 631111680 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 538099712 unmapped: 93011968 heap: 631111680 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5116294 data_alloc: 218103808 data_used: 7135232
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 538099712 unmapped: 93011968 heap: 631111680 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 416 heartbeat osd_stat(store_statfs(0x199b3c000/0x0/0x1bfc00000, data 0x15c4cf2/0x17e2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x248df9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 538099712 unmapped: 93011968 heap: 631111680 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 538107904 unmapped: 93003776 heap: 631111680 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 538107904 unmapped: 93003776 heap: 631111680 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 538107904 unmapped: 93003776 heap: 631111680 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 416 heartbeat osd_stat(store_statfs(0x199b3c000/0x0/0x1bfc00000, data 0x15c4cf2/0x17e2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x248df9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5116294 data_alloc: 218103808 data_used: 7135232
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 538107904 unmapped: 93003776 heap: 631111680 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 538107904 unmapped: 93003776 heap: 631111680 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 538107904 unmapped: 93003776 heap: 631111680 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 538107904 unmapped: 93003776 heap: 631111680 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 538107904 unmapped: 93003776 heap: 631111680 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5116294 data_alloc: 218103808 data_used: 7135232
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 538124288 unmapped: 92987392 heap: 631111680 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 416 heartbeat osd_stat(store_statfs(0x199b3c000/0x0/0x1bfc00000, data 0x15c4cf2/0x17e2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x248df9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 538124288 unmapped: 92987392 heap: 631111680 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 538124288 unmapped: 92987392 heap: 631111680 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 416 ms_handle_reset con 0x5612d3b8f400 session 0x5612d2647a40
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 538124288 unmapped: 92987392 heap: 631111680 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 538124288 unmapped: 92987392 heap: 631111680 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5116294 data_alloc: 218103808 data_used: 7135232
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 538124288 unmapped: 92987392 heap: 631111680 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 416 heartbeat osd_stat(store_statfs(0x199b3c000/0x0/0x1bfc00000, data 0x15c4cf2/0x17e2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x248df9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 538124288 unmapped: 92987392 heap: 631111680 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 538124288 unmapped: 92987392 heap: 631111680 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 38.836669922s of 38.861549377s, submitted: 17
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 539516928 unmapped: 91594752 heap: 631111680 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 416 ms_handle_reset con 0x5612d8e80400 session 0x5612d2e025a0
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 416 ms_handle_reset con 0x5612d156c800 session 0x5612dbe0fa40
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 416 ms_handle_reset con 0x5612d7413800 session 0x5612d4e83e00
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 416 ms_handle_reset con 0x5612d3f4f400 session 0x5612d314e000
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 416 ms_handle_reset con 0x5612d3f4e000 session 0x5612d077c000
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 539533312 unmapped: 91578368 heap: 631111680 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5221392 data_alloc: 218103808 data_used: 7135232
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 539533312 unmapped: 91578368 heap: 631111680 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 416 heartbeat osd_stat(store_statfs(0x198b65000/0x0/0x1bfc00000, data 0x218ad54/0x23a9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24cef9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 539533312 unmapped: 91578368 heap: 631111680 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 416 ms_handle_reset con 0x5612d7ff6c00 session 0x5612d3447680
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 416 ms_handle_reset con 0x5612d1b52c00 session 0x5612d2926f00
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 539533312 unmapped: 91578368 heap: 631111680 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 539533312 unmapped: 91578368 heap: 631111680 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 416 ms_handle_reset con 0x5612d1b54800 session 0x5612d314e3c0
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 416 ms_handle_reset con 0x5612d4653c00 session 0x5612d34630e0
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 416 ms_handle_reset con 0x5612d2943000 session 0x5612d0633a40
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 539467776 unmapped: 91643904 heap: 631111680 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 416 ms_handle_reset con 0x5612d2823400 session 0x5612d30df860
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5223127 data_alloc: 218103808 data_used: 7139328
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 539394048 unmapped: 91717632 heap: 631111680 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 416 ms_handle_reset con 0x5612d3b8fc00 session 0x5612d31b1c20
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 416 ms_handle_reset con 0x5612d3695c00 session 0x5612dbe0e1e0
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 539410432 unmapped: 91701248 heap: 631111680 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 416 heartbeat osd_stat(store_statfs(0x198b63000/0x0/0x1bfc00000, data 0x218ad87/0x23ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24cef9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 539410432 unmapped: 91701248 heap: 631111680 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 539410432 unmapped: 91701248 heap: 631111680 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 539410432 unmapped: 91701248 heap: 631111680 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5302612 data_alloc: 234881024 data_used: 18059264
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 539410432 unmapped: 91701248 heap: 631111680 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 416 heartbeat osd_stat(store_statfs(0x198b63000/0x0/0x1bfc00000, data 0x218ad87/0x23ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24cef9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 539410432 unmapped: 91701248 heap: 631111680 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 539410432 unmapped: 91701248 heap: 631111680 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 416 heartbeat osd_stat(store_statfs(0x198b63000/0x0/0x1bfc00000, data 0x218ad87/0x23ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24cef9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 14.993692398s of 15.341640472s, submitted: 70
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 416 ms_handle_reset con 0x5612d803dc00 session 0x5612daac25a0
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 416 ms_handle_reset con 0x5612dea3e800 session 0x5612d4e83c20
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 539410432 unmapped: 91701248 heap: 631111680 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 539410432 unmapped: 91701248 heap: 631111680 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 416 heartbeat osd_stat(store_statfs(0x198b63000/0x0/0x1bfc00000, data 0x218ad87/0x23ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24cef9c6), peers [0,1] op hist [0,0,1])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 416 ms_handle_reset con 0x5612d2823400 session 0x5612d314f0e0
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5301888 data_alloc: 234881024 data_used: 18059264
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 539410432 unmapped: 91701248 heap: 631111680 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 539410432 unmapped: 91701248 heap: 631111680 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #59. Immutable memtables: 15.
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 416 heartbeat osd_stat(store_statfs(0x198b63000/0x0/0x1bfc00000, data 0x218ad87/0x23ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24cef9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 544342016 unmapped: 86769664 heap: 631111680 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 544350208 unmapped: 86761472 heap: 631111680 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 416 ms_handle_reset con 0x5612cff93c00 session 0x5612d077c3c0
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 416 ms_handle_reset con 0x5612d4652800 session 0x5612e1712000
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 544350208 unmapped: 86761472 heap: 631111680 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5396130 data_alloc: 234881024 data_used: 19505152
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 544776192 unmapped: 86335488 heap: 631111680 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 544776192 unmapped: 86335488 heap: 631111680 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 544776192 unmapped: 86335488 heap: 631111680 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 416 heartbeat osd_stat(store_statfs(0x196eb3000/0x0/0x1bfc00000, data 0x2c99d87/0x2eba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25e8f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 544776192 unmapped: 86335488 heap: 631111680 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 416 ms_handle_reset con 0x5612dea3d800 session 0x5612d3463a40
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 416 ms_handle_reset con 0x5612d297a000 session 0x5612d0a634a0
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 544776192 unmapped: 86335488 heap: 631111680 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5396450 data_alloc: 234881024 data_used: 19513344
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.783555031s of 12.358042717s, submitted: 142
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 544784384 unmapped: 86327296 heap: 631111680 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 416 ms_handle_reset con 0x5612d75cc000 session 0x5612d0773a40
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 544784384 unmapped: 86327296 heap: 631111680 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 544784384 unmapped: 86327296 heap: 631111680 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 416 ms_handle_reset con 0x5612d8e81400 session 0x5612dc4045a0
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 544784384 unmapped: 86327296 heap: 631111680 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 416 heartbeat osd_stat(store_statfs(0x196e92000/0x0/0x1bfc00000, data 0x2cbade9/0x2edc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25e8f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 416 ms_handle_reset con 0x5612d3b8fc00 session 0x5612d3a1c5a0
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 416 ms_handle_reset con 0x5612ddc40c00 session 0x5612d2647a40
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 416 ms_handle_reset con 0x5612d7412800 session 0x5612d0a63a40
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 544784384 unmapped: 86327296 heap: 631111680 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5411537 data_alloc: 234881024 data_used: 19517440
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 544784384 unmapped: 86327296 heap: 631111680 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 416 heartbeat osd_stat(store_statfs(0x196cf6000/0x0/0x1bfc00000, data 0x2e57d87/0x3078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25e8f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 544784384 unmapped: 86327296 heap: 631111680 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 416 handle_osd_map epochs [416,417], i have 416, src has [1,417]
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 417 ms_handle_reset con 0x5612d3f50c00 session 0x5612d0b30f00
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 545841152 unmapped: 85270528 heap: 631111680 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 545841152 unmapped: 85270528 heap: 631111680 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 545841152 unmapped: 85270528 heap: 631111680 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 417 heartbeat osd_stat(store_statfs(0x196cf2000/0x0/0x1bfc00000, data 0x2e59a53/0x307b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25e8f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5415239 data_alloc: 234881024 data_used: 19525632
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 545841152 unmapped: 85270528 heap: 631111680 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 545841152 unmapped: 85270528 heap: 631111680 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 545841152 unmapped: 85270528 heap: 631111680 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.294777870s of 12.366117477s, submitted: 26
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 545841152 unmapped: 85270528 heap: 631111680 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 545841152 unmapped: 85270528 heap: 631111680 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 417 heartbeat osd_stat(store_statfs(0x196cf0000/0x0/0x1bfc00000, data 0x2e5ba53/0x307d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25e8f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5414435 data_alloc: 234881024 data_used: 19525632
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 417 heartbeat osd_stat(store_statfs(0x196cf0000/0x0/0x1bfc00000, data 0x2e5ca53/0x307e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25e8f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 545841152 unmapped: 85270528 heap: 631111680 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 417 ms_handle_reset con 0x5612d1b55800 session 0x5612d160b680
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 417 ms_handle_reset con 0x5612d8e81400 session 0x5612e17130e0
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 417 ms_handle_reset con 0x5612d9184c00 session 0x5612d2e02780
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 417 ms_handle_reset con 0x5612d1598000 session 0x5612dc4052c0
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 417 ms_handle_reset con 0x5612d1b55800 session 0x5612d2e03e00
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 417 ms_handle_reset con 0x5612d8e81400 session 0x5612d3446f00
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 545497088 unmapped: 97165312 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 417 ms_handle_reset con 0x5612d3f50c00 session 0x5612d2bb5680
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 417 ms_handle_reset con 0x5612d9184c00 session 0x5612d06325a0
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 417 ms_handle_reset con 0x5612d1599400 session 0x5612d3446000
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 417 ms_handle_reset con 0x5612d1b55800 session 0x5612d0b630e0
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 417 ms_handle_reset con 0x5612d3f50c00 session 0x5612d0614960
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 417 heartbeat osd_stat(store_statfs(0x195cc5000/0x0/0x1bfc00000, data 0x3e84ad5/0x40a9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25e8f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 545497088 unmapped: 97165312 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 545497088 unmapped: 97165312 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 545497088 unmapped: 97165312 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5540221 data_alloc: 234881024 data_used: 19525632
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 417 ms_handle_reset con 0x5612d0bc7400 session 0x5612d0b62d20
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 545497088 unmapped: 97165312 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 417 ms_handle_reset con 0x5612d3f4f400 session 0x5612d29272c0
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 417 heartbeat osd_stat(store_statfs(0x195cc5000/0x0/0x1bfc00000, data 0x3e84ad5/0x40a9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25e8f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 417 heartbeat osd_stat(store_statfs(0x195cc5000/0x0/0x1bfc00000, data 0x3e84ad5/0x40a9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25e8f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 417 ms_handle_reset con 0x5612d2945c00 session 0x5612dbe0ed20
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 545497088 unmapped: 97165312 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 417 ms_handle_reset con 0x5612d0bc7400 session 0x5612d314e780
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 545497088 unmapped: 97165312 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 7.166153431s of 10.152449608s, submitted: 24
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 417 ms_handle_reset con 0x5612d3f4f400 session 0x5612daac34a0
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 417 ms_handle_reset con 0x5612d1b55800 session 0x5612d26465a0
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 417 ms_handle_reset con 0x5612d3f50c00 session 0x5612d480ab40
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 417 ms_handle_reset con 0x5612dea3cc00 session 0x5612d18e9e00
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 545513472 unmapped: 97148928 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 545693696 unmapped: 96968704 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5671157 data_alloc: 251658240 data_used: 36806656
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 553189376 unmapped: 89473024 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 417 heartbeat osd_stat(store_statfs(0x195c9f000/0x0/0x1bfc00000, data 0x3ea8b1b/0x40cf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25e8f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,2])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 553197568 unmapped: 89464832 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 417 heartbeat osd_stat(store_statfs(0x195c9f000/0x0/0x1bfc00000, data 0x3ea8b1b/0x40cf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25e8f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 553197568 unmapped: 89464832 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 417 ms_handle_reset con 0x5612d0bc7400 session 0x5612cf123e00
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 417 ms_handle_reset con 0x5612d1b55800 session 0x5612dbe0e1e0
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 553197568 unmapped: 89464832 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 553205760 unmapped: 89456640 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 417 heartbeat osd_stat(store_statfs(0x196cdf000/0x0/0x1bfc00000, data 0x3ea8b1b/0x40cf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5670929 data_alloc: 251658240 data_used: 36814848
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 553205760 unmapped: 89456640 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 553205760 unmapped: 89456640 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 417 ms_handle_reset con 0x5612d1b53c00 session 0x5612d3a1d680
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 553213952 unmapped: 89448448 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 553213952 unmapped: 89448448 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 417 heartbeat osd_stat(store_statfs(0x196cdf000/0x0/0x1bfc00000, data 0x3ea8af8/0x40ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 417 ms_handle_reset con 0x5612d297b000 session 0x5612d3447e00
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.533593178s of 11.857751846s, submitted: 49
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 553213952 unmapped: 89448448 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5669640 data_alloc: 251658240 data_used: 36814848
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 553222144 unmapped: 89440256 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 417 ms_handle_reset con 0x5612d1987000 session 0x5612d0b31860
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 417 ms_handle_reset con 0x5612d0bc7400 session 0x5612d18e9e00
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 554221568 unmapped: 88440832 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 554229760 unmapped: 88432640 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 417 handle_osd_map epochs [417,418], i have 417, src has [1,418]
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 417 heartbeat osd_stat(store_statfs(0x196722000/0x0/0x1bfc00000, data 0x4462a86/0x4686000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 418 ms_handle_reset con 0x5612d1b53c00 session 0x5612d26465a0
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 554237952 unmapped: 88424448 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 418 ms_handle_reset con 0x5612d1b55800 session 0x5612daac34a0
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 418 heartbeat osd_stat(store_statfs(0x1966bb000/0x0/0x1bfc00000, data 0x44cb7a6/0x46f0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 418 ms_handle_reset con 0x5612d297b000 session 0x5612d314e780
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 554639360 unmapped: 88023040 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 418 heartbeat osd_stat(store_statfs(0x19680b000/0x0/0x1bfc00000, data 0x43757a6/0x459a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5716002 data_alloc: 251658240 data_used: 36581376
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 554639360 unmapped: 88023040 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 554639360 unmapped: 88023040 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 418 ms_handle_reset con 0x5612d4653c00 session 0x5612d0b62d20
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 418 ms_handle_reset con 0x5612d0bc7400 session 0x5612d0614960
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 554639360 unmapped: 88023040 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 554639360 unmapped: 88023040 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 418 heartbeat osd_stat(store_statfs(0x19680b000/0x0/0x1bfc00000, data 0x43757a6/0x459a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 554647552 unmapped: 88014848 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 8.833458900s of 10.137002945s, submitted: 110
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 418 ms_handle_reset con 0x5612d1b53c00 session 0x5612d3446000
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 418 handle_osd_map epochs [418,419], i have 418, src has [1,419]
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5718322 data_alloc: 251658240 data_used: 36593664
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 554663936 unmapped: 87998464 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 419 heartbeat osd_stat(store_statfs(0x1967e9000/0x0/0x1bfc00000, data 0x439c3ca/0x45c4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 554663936 unmapped: 87998464 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 419 ms_handle_reset con 0x5612d3f51800 session 0x5612d06325a0
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 419 ms_handle_reset con 0x5612d2823400 session 0x5612d3446f00
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 554672128 unmapped: 87990272 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 419 ms_handle_reset con 0x5612d0bd8000 session 0x5612d2e02780
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 419 ms_handle_reset con 0x5612d297b000 session 0x5612d0a63a40
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 554688512 unmapped: 87973888 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 419 ms_handle_reset con 0x5612d90a9800 session 0x5612d26461e0
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 419 ms_handle_reset con 0x5612d7413800 session 0x5612d4e82d20
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 552427520 unmapped: 90234880 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 419 ms_handle_reset con 0x5612d0bc7400 session 0x5612d3a1c5a0
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5469723 data_alloc: 234881024 data_used: 19546112
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 552427520 unmapped: 90234880 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 419 heartbeat osd_stat(store_statfs(0x1979c8000/0x0/0x1bfc00000, data 0x31c0335/0x33e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 552427520 unmapped: 90234880 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 552427520 unmapped: 90234880 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 419 ms_handle_reset con 0x5612d157c000 session 0x5612dbe0f4a0
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 552419328 unmapped: 90243072 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 419 heartbeat osd_stat(store_statfs(0x1979c8000/0x0/0x1bfc00000, data 0x31c0335/0x33e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 552419328 unmapped: 90243072 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5469723 data_alloc: 234881024 data_used: 19546112
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 552419328 unmapped: 90243072 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.731872559s of 11.440790176s, submitted: 70
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 419 ms_handle_reset con 0x5612d1b52c00 session 0x5612d0b63680
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 419 ms_handle_reset con 0x5612d1b54800 session 0x5612d30b6780
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 419 heartbeat osd_stat(store_statfs(0x1979c8000/0x0/0x1bfc00000, data 0x31c0335/0x33e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 542916608 unmapped: 99745792 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 419 ms_handle_reset con 0x5612ddc41800 session 0x5612d0b71680
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 419 heartbeat osd_stat(store_statfs(0x1990cf000/0x0/0x1bfc00000, data 0x1abb2a0/0x1cdd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 542916608 unmapped: 99745792 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 542916608 unmapped: 99745792 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 419 ms_handle_reset con 0x5612d2e43000 session 0x5612daac2f00
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 419 ms_handle_reset con 0x5612d1b51800 session 0x5612d0a54780
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 542916608 unmapped: 99745792 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5208940 data_alloc: 218103808 data_used: 7163904
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 542916608 unmapped: 99745792 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 419 heartbeat osd_stat(store_statfs(0x1990cf000/0x0/0x1bfc00000, data 0x1abb2a0/0x1cdd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 542916608 unmapped: 99745792 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 419 ms_handle_reset con 0x5612d1b51800 session 0x5612d30dfe00
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 419 ms_handle_reset con 0x5612d1b52c00 session 0x5612d31b0000
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 542932992 unmapped: 99729408 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 542932992 unmapped: 99729408 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 542932992 unmapped: 99729408 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5249166 data_alloc: 218103808 data_used: 12263424
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 419 heartbeat osd_stat(store_statfs(0x1990d0000/0x0/0x1bfc00000, data 0x1abb2c3/0x1cde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 542932992 unmapped: 99729408 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 542932992 unmapped: 99729408 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 542932992 unmapped: 99729408 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 419 heartbeat osd_stat(store_statfs(0x1990d0000/0x0/0x1bfc00000, data 0x1abb2c3/0x1cde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 542932992 unmapped: 99729408 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 419 heartbeat osd_stat(store_statfs(0x1990d0000/0x0/0x1bfc00000, data 0x1abb2c3/0x1cde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 542932992 unmapped: 99729408 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5249166 data_alloc: 218103808 data_used: 12263424
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 542932992 unmapped: 99729408 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 542932992 unmapped: 99729408 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 542932992 unmapped: 99729408 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 542932992 unmapped: 99729408 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 419 heartbeat osd_stat(store_statfs(0x1990d0000/0x0/0x1bfc00000, data 0x1abb2c3/0x1cde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 542932992 unmapped: 99729408 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5249166 data_alloc: 218103808 data_used: 12263424
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 542932992 unmapped: 99729408 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 19.600269318s of 19.742877960s, submitted: 62
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 547971072 unmapped: 94691328 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 549396480 unmapped: 93265920 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 549453824 unmapped: 93208576 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 419 heartbeat osd_stat(store_statfs(0x198633000/0x0/0x1bfc00000, data 0x25582c3/0x277b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 549453824 unmapped: 93208576 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5331156 data_alloc: 218103808 data_used: 12283904
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 549453824 unmapped: 93208576 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 549453824 unmapped: 93208576 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 549453824 unmapped: 93208576 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 419 heartbeat osd_stat(store_statfs(0x198633000/0x0/0x1bfc00000, data 0x25582c3/0x277b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 419 heartbeat osd_stat(store_statfs(0x198633000/0x0/0x1bfc00000, data 0x25582c3/0x277b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 549453824 unmapped: 93208576 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 549453824 unmapped: 93208576 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5331156 data_alloc: 218103808 data_used: 12283904
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 549462016 unmapped: 93200384 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 419 ms_handle_reset con 0x5612d3695c00 session 0x5612d480b0e0
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 419 ms_handle_reset con 0x5612d3b8fc00 session 0x5612d0aead20
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 419 ms_handle_reset con 0x5612d0bc6800 session 0x5612d30df4a0
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 419 ms_handle_reset con 0x5612d0bc6800 session 0x5612d34634a0
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.445962906s of 10.428418159s, submitted: 99
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 419 ms_handle_reset con 0x5612d1b51800 session 0x5612d06154a0
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 419 ms_handle_reset con 0x5612d1b52c00 session 0x5612d4e83680
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 419 ms_handle_reset con 0x5612d3695c00 session 0x5612d5b0fa40
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 419 ms_handle_reset con 0x5612d3b8fc00 session 0x5612dc404000
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 419 ms_handle_reset con 0x5612d0bc6800 session 0x5612d26463c0
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 550453248 unmapped: 92209152 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 419 heartbeat osd_stat(store_statfs(0x19800e000/0x0/0x1bfc00000, data 0x2b7c2d3/0x2da0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 550453248 unmapped: 92209152 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 550453248 unmapped: 92209152 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 550453248 unmapped: 92209152 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5376529 data_alloc: 218103808 data_used: 12283904
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 419 heartbeat osd_stat(store_statfs(0x19800e000/0x0/0x1bfc00000, data 0x2b7c2d3/0x2da0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 550461440 unmapped: 92200960 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 419 handle_osd_map epochs [419,420], i have 419, src has [1,420]
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 420 ms_handle_reset con 0x5612d2942000 session 0x5612dc4054a0
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 550469632 unmapped: 92192768 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 420 handle_osd_map epochs [420,421], i have 420, src has [1,421]
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 421 ms_handle_reset con 0x5612d3b8f000 session 0x5612d30cb0e0
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 421 heartbeat osd_stat(store_statfs(0x198008000/0x0/0x1bfc00000, data 0x2b7e011/0x2da5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 550477824 unmapped: 92184576 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 421 ms_handle_reset con 0x5612d3b8d800 session 0x5612dc404780
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 421 ms_handle_reset con 0x5612d0bd6c00 session 0x5612d0b623c0
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 550486016 unmapped: 92176384 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 421 ms_handle_reset con 0x5612d0bc6800 session 0x5612d34621e0
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 421 ms_handle_reset con 0x5612d2942000 session 0x5612d0a15c20
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 550789120 unmapped: 91873280 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5393433 data_alloc: 218103808 data_used: 12304384
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 550789120 unmapped: 91873280 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 550805504 unmapped: 91856896 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 421 heartbeat osd_stat(store_statfs(0x197fdf000/0x0/0x1bfc00000, data 0x2ba3d0b/0x2dce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 550805504 unmapped: 91856896 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 550805504 unmapped: 91856896 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 421 heartbeat osd_stat(store_statfs(0x197fdf000/0x0/0x1bfc00000, data 0x2ba3d0b/0x2dce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 550805504 unmapped: 91856896 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5431033 data_alloc: 234881024 data_used: 17612800
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 550805504 unmapped: 91856896 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 550805504 unmapped: 91856896 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 15.506317139s of 15.622153282s, submitted: 31
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 421 ms_handle_reset con 0x5612d9182400 session 0x5612d26461e0
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 421 ms_handle_reset con 0x5612d3695c00 session 0x5612d30ca780
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 421 heartbeat osd_stat(store_statfs(0x197fdf000/0x0/0x1bfc00000, data 0x2ba3d6d/0x2dcf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 550805504 unmapped: 91856896 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 421 heartbeat osd_stat(store_statfs(0x197fdf000/0x0/0x1bfc00000, data 0x2ba3d6d/0x2dcf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 550805504 unmapped: 91856896 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 550805504 unmapped: 91856896 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5431743 data_alloc: 234881024 data_used: 17616896
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 550805504 unmapped: 91856896 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 421 ms_handle_reset con 0x5612d14dc800 session 0x5612d34470e0
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 421 heartbeat osd_stat(store_statfs(0x197fdf000/0x0/0x1bfc00000, data 0x2ba3d6d/0x2dcf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 550805504 unmapped: 91856896 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 421 ms_handle_reset con 0x5612d0bc6800 session 0x5612d2bb54a0
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 549969920 unmapped: 92692480 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 421 ms_handle_reset con 0x5612d2942000 session 0x5612d3463e00
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 421 ms_handle_reset con 0x5612d3695c00 session 0x5612d0a9b0e0
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 550174720 unmapped: 92487680 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 550199296 unmapped: 92463104 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 421 heartbeat osd_stat(store_statfs(0x197770000/0x0/0x1bfc00000, data 0x3718da0/0x363d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5543889 data_alloc: 234881024 data_used: 17956864
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 550215680 unmapped: 92446720 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 550215680 unmapped: 92446720 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 550215680 unmapped: 92446720 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 421 heartbeat osd_stat(store_statfs(0x197770000/0x0/0x1bfc00000, data 0x3718da0/0x363d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 550215680 unmapped: 92446720 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.979682922s of 12.179447174s, submitted: 77
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 550117376 unmapped: 92545024 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5540337 data_alloc: 234881024 data_used: 17960960
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 550117376 unmapped: 92545024 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 550117376 unmapped: 92545024 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 421 heartbeat osd_stat(store_statfs(0x19776e000/0x0/0x1bfc00000, data 0x371bda0/0x3640000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 550117376 unmapped: 92545024 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 550117376 unmapped: 92545024 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 550117376 unmapped: 92545024 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 421 heartbeat osd_stat(store_statfs(0x19776e000/0x0/0x1bfc00000, data 0x371bda0/0x3640000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5540337 data_alloc: 234881024 data_used: 17960960
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 550117376 unmapped: 92545024 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 552435712 unmapped: 90226688 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 421 ms_handle_reset con 0x5612d2bb2400 session 0x5612d0614000
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 421 ms_handle_reset con 0x5612d3d83000 session 0x5612d077da40
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 421 ms_handle_reset con 0x5612d3d83000 session 0x5612d0a9b680
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 421 ms_handle_reset con 0x5612d0bc6800 session 0x5612d30b65a0
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 549986304 unmapped: 92676096 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 549986304 unmapped: 92676096 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 8.111669540s of 10.333090782s, submitted: 17
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 421 heartbeat osd_stat(store_statfs(0x1975b3000/0x0/0x1bfc00000, data 0x38d4dd9/0x37fb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,14])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 549986304 unmapped: 92676096 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 421 ms_handle_reset con 0x5612d2942000 session 0x5612d5b0fc20
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 421 ms_handle_reset con 0x5612d1b50c00 session 0x5612d3462000
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 421 ms_handle_reset con 0x5612d3f4f400 session 0x5612d160be00
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 421 ms_handle_reset con 0x5612d3f4f400 session 0x5612d30def00
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 421 ms_handle_reset con 0x5612d0bc6800 session 0x5612d160b0e0
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5656810 data_alloc: 234881024 data_used: 22204416
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 551059456 unmapped: 91602944 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 551059456 unmapped: 91602944 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 551485440 unmapped: 91176960 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 551485440 unmapped: 91176960 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 421 heartbeat osd_stat(store_statfs(0x196ce5000/0x0/0x1bfc00000, data 0x41a2e12/0x40c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 551485440 unmapped: 91176960 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5666562 data_alloc: 234881024 data_used: 22200320
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 551485440 unmapped: 91176960 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 551485440 unmapped: 91176960 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 551485440 unmapped: 91176960 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 551485440 unmapped: 91176960 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 421 heartbeat osd_stat(store_statfs(0x196ce4000/0x0/0x1bfc00000, data 0x41a3e12/0x40ca000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 551485440 unmapped: 91176960 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 421 ms_handle_reset con 0x5612d5b1b000 session 0x5612d0b70d20
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 421 heartbeat osd_stat(store_statfs(0x196ce4000/0x0/0x1bfc00000, data 0x41a3e12/0x40ca000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5666782 data_alloc: 234881024 data_used: 22200320
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 421 ms_handle_reset con 0x5612d90a9000 session 0x5612d31b1c20
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 551485440 unmapped: 91176960 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 421 ms_handle_reset con 0x5612ddc3f000 session 0x5612d2e02960
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 551485440 unmapped: 91176960 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.538869858s of 12.626841545s, submitted: 46
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 421 ms_handle_reset con 0x5612ddc3f000 session 0x5612d5b0ed20
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 551067648 unmapped: 91594752 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 421 heartbeat osd_stat(store_statfs(0x196ce4000/0x0/0x1bfc00000, data 0x41a3e12/0x40ca000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 551075840 unmapped: 91586560 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 552230912 unmapped: 90431488 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5717006 data_alloc: 234881024 data_used: 29851648
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 552230912 unmapped: 90431488 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 552230912 unmapped: 90431488 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 552239104 unmapped: 90423296 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 421 heartbeat osd_stat(store_statfs(0x196cd9000/0x0/0x1bfc00000, data 0x41aae12/0x40d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 552247296 unmapped: 90415104 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 552247296 unmapped: 90415104 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5718564 data_alloc: 234881024 data_used: 29855744
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 552247296 unmapped: 90415104 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 421 ms_handle_reset con 0x5612d9182400 session 0x5612dbe0e780
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 421 ms_handle_reset con 0x5612d0bc7400 session 0x5612d0614f00
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 421 ms_handle_reset con 0x5612d2944400 session 0x5612d0b63680
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 552247296 unmapped: 90415104 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 421 heartbeat osd_stat(store_statfs(0x196d07000/0x0/0x1bfc00000, data 0x4181ddf/0x40a6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 552247296 unmapped: 90415104 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 421 ms_handle_reset con 0x5612d6c91400 session 0x5612d2e034a0
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.445158958s of 11.233171463s, submitted: 38
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 421 ms_handle_reset con 0x5612d0bc7400 session 0x5612d13ce000
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 552247296 unmapped: 90415104 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 421 ms_handle_reset con 0x5612d2944400 session 0x5612d2e034a0
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 421 handle_osd_map epochs [421,422], i have 421, src has [1,422]
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 422 ms_handle_reset con 0x5612d6c91400 session 0x5612d0614f00
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 552263680 unmapped: 90398720 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5731585 data_alloc: 234881024 data_used: 29708288
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 422 ms_handle_reset con 0x5612d1b54800 session 0x5612d2e03680
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 422 ms_handle_reset con 0x5612d2e43000 session 0x5612d0b701e0
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 553254912 unmapped: 89407488 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 422 heartbeat osd_stat(store_statfs(0x196896000/0x0/0x1bfc00000, data 0x42eaa3b/0x4517000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 422 ms_handle_reset con 0x5612d0bc7400 session 0x5612d5b0ed20
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 553320448 unmapped: 89341952 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 422 heartbeat osd_stat(store_statfs(0x19686a000/0x0/0x1bfc00000, data 0x4315a18/0x4541000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 553320448 unmapped: 89341952 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 422 ms_handle_reset con 0x5612d1b54800 session 0x5612d3463e00
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 422 ms_handle_reset con 0x5612d2944400 session 0x5612d2bb54a0
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 552312832 unmapped: 90349568 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 552312832 unmapped: 90349568 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5551451 data_alloc: 234881024 data_used: 20500480
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 552312832 unmapped: 90349568 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 422 heartbeat osd_stat(store_statfs(0x1977fc000/0x0/0x1bfc00000, data 0x3387a08/0x35b2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 552312832 unmapped: 90349568 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 552312832 unmapped: 90349568 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 552312832 unmapped: 90349568 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 422 handle_osd_map epochs [422,423], i have 422, src has [1,423]
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.225218773s of 10.995776176s, submitted: 130
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 423 ms_handle_reset con 0x5612d0bc6800 session 0x5612d4e825a0
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 423 heartbeat osd_stat(store_statfs(0x1977f5000/0x0/0x1bfc00000, data 0x338c5ba/0x35b8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [0,0,1])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 542482432 unmapped: 100179968 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 423 ms_handle_reset con 0x5612d8e81c00 session 0x5612d0a15c20
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5382813 data_alloc: 218103808 data_used: 12779520
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 542482432 unmapped: 100179968 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 542482432 unmapped: 100179968 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 542482432 unmapped: 100179968 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 423 heartbeat osd_stat(store_statfs(0x198721000/0x0/0x1bfc00000, data 0x245d548/0x2687000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 423 handle_osd_map epochs [423,424], i have 423, src has [1,424]
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 542482432 unmapped: 100179968 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 424 ms_handle_reset con 0x5612d0bc6800 session 0x5612d0b63680
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 542498816 unmapped: 100163584 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5383304 data_alloc: 218103808 data_used: 12779520
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 542498816 unmapped: 100163584 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 424 heartbeat osd_stat(store_statfs(0x198726000/0x0/0x1bfc00000, data 0x245f23a/0x2688000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 542498816 unmapped: 100163584 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 424 ms_handle_reset con 0x5612d156c000 session 0x5612e1712000
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 424 ms_handle_reset con 0x5612d4652800 session 0x5612d314e3c0
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 542523392 unmapped: 100139008 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 424 ms_handle_reset con 0x5612d2bb2400 session 0x5612d0aeb680
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 542523392 unmapped: 100139008 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 424 handle_osd_map epochs [424,425], i have 424, src has [1,425]
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.770137787s of 10.003428459s, submitted: 95
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 542523392 unmapped: 100139008 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5232637 data_alloc: 218103808 data_used: 7196672
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 542523392 unmapped: 100139008 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 542523392 unmapped: 100139008 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 425 heartbeat osd_stat(store_statfs(0x1995b0000/0x0/0x1bfc00000, data 0x15d4dcc/0x17fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 542531584 unmapped: 100130816 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 542531584 unmapped: 100130816 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 425 heartbeat osd_stat(store_statfs(0x1995b0000/0x0/0x1bfc00000, data 0x15d4dcc/0x17fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 542531584 unmapped: 100130816 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5232637 data_alloc: 218103808 data_used: 7196672
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 425 heartbeat osd_stat(store_statfs(0x1995b0000/0x0/0x1bfc00000, data 0x15d4dcc/0x17fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 542531584 unmapped: 100130816 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 542531584 unmapped: 100130816 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 425 ms_handle_reset con 0x5612d2942000 session 0x5612d30b72c0
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 425 ms_handle_reset con 0x5612d2942000 session 0x5612d0b630e0
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 542531584 unmapped: 100130816 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 542531584 unmapped: 100130816 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.521475792s of 10.530011177s, submitted: 11
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 425 ms_handle_reset con 0x5612d3694800 session 0x5612e1713a40
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 542539776 unmapped: 100122624 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5234431 data_alloc: 218103808 data_used: 7204864
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 425 heartbeat osd_stat(store_statfs(0x1995b0000/0x0/0x1bfc00000, data 0x15d4dcc/0x17fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 542547968 unmapped: 100114432 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 542547968 unmapped: 100114432 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 425 ms_handle_reset con 0x5612ddc40c00 session 0x5612d2646f00
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 425 ms_handle_reset con 0x5612d75cc000 session 0x5612d30ca1e0
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 542564352 unmapped: 100098048 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 425 ms_handle_reset con 0x5612d0bc7400 session 0x5612d30cb680
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 425 heartbeat osd_stat(store_statfs(0x1995b1000/0x0/0x1bfc00000, data 0x15d4dcc/0x17fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [0,1])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 425 ms_handle_reset con 0x5612d2942000 session 0x5612d0b30d20
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 542572544 unmapped: 100089856 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 542572544 unmapped: 100089856 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5290733 data_alloc: 218103808 data_used: 7204864
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 542572544 unmapped: 100089856 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 542572544 unmapped: 100089856 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 425 heartbeat osd_stat(store_statfs(0x198f1c000/0x0/0x1bfc00000, data 0x1c69dcc/0x1e92000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 542572544 unmapped: 100089856 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 542572544 unmapped: 100089856 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 542572544 unmapped: 100089856 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5290733 data_alloc: 218103808 data_used: 7204864
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 542572544 unmapped: 100089856 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 425 heartbeat osd_stat(store_statfs(0x198f1c000/0x0/0x1bfc00000, data 0x1c69dcc/0x1e92000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 542572544 unmapped: 100089856 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 542572544 unmapped: 100089856 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 425 heartbeat osd_stat(store_statfs(0x198f1c000/0x0/0x1bfc00000, data 0x1c69dcc/0x1e92000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 542572544 unmapped: 100089856 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 425 heartbeat osd_stat(store_statfs(0x198f1c000/0x0/0x1bfc00000, data 0x1c69dcc/0x1e92000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 542572544 unmapped: 100089856 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5290733 data_alloc: 218103808 data_used: 7204864
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 542572544 unmapped: 100089856 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 542580736 unmapped: 100081664 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 542580736 unmapped: 100081664 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 542580736 unmapped: 100081664 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 542580736 unmapped: 100081664 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 425 heartbeat osd_stat(store_statfs(0x198f1c000/0x0/0x1bfc00000, data 0x1c69dcc/0x1e92000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5290733 data_alloc: 218103808 data_used: 7204864
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 542580736 unmapped: 100081664 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 542580736 unmapped: 100081664 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 425 heartbeat osd_stat(store_statfs(0x198f1c000/0x0/0x1bfc00000, data 0x1c69dcc/0x1e92000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 542580736 unmapped: 100081664 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 24.075338364s of 24.247449875s, submitted: 37
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 425 ms_handle_reset con 0x5612d2e42800 session 0x5612dbe0f4a0
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 542580736 unmapped: 100081664 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 542588928 unmapped: 100073472 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5321642 data_alloc: 218103808 data_used: 10895360
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 542736384 unmapped: 99926016 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 542736384 unmapped: 99926016 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 425 heartbeat osd_stat(store_statfs(0x198ef7000/0x0/0x1bfc00000, data 0x1c8ddef/0x1eb7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 542736384 unmapped: 99926016 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 542736384 unmapped: 99926016 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 425 ms_handle_reset con 0x5612d75cc800 session 0x5612d30cb4a0
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 425 ms_handle_reset con 0x5612d6c91400 session 0x5612e1713860
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 425 ms_handle_reset con 0x5612d156c800 session 0x5612daac2000
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 425 ms_handle_reset con 0x5612d2942000 session 0x5612d0615e00
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 542736384 unmapped: 99926016 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 425 ms_handle_reset con 0x5612d2e42800 session 0x5612d314fc20
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5390066 data_alloc: 218103808 data_used: 14102528
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 543072256 unmapped: 99590144 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 425 heartbeat osd_stat(store_statfs(0x1989fa000/0x0/0x1bfc00000, data 0x218adef/0x23b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 543072256 unmapped: 99590144 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 543072256 unmapped: 99590144 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 425 heartbeat osd_stat(store_statfs(0x1989fa000/0x0/0x1bfc00000, data 0x218adef/0x23b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 543072256 unmapped: 99590144 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 425 heartbeat osd_stat(store_statfs(0x1989fa000/0x0/0x1bfc00000, data 0x218adef/0x23b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 543072256 unmapped: 99590144 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5390066 data_alloc: 218103808 data_used: 14102528
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 543072256 unmapped: 99590144 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.867590904s of 12.951579094s, submitted: 24
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 550084608 unmapped: 92577792 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 546947072 unmapped: 95715328 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 425 ms_handle_reset con 0x5612d3f4e800 session 0x5612d480a3c0
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 546955264 unmapped: 95707136 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 425 ms_handle_reset con 0x5612d0a61000 session 0x5612d34632c0
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 425 ms_handle_reset con 0x5612d3b8f000 session 0x5612d19932c0
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 425 ms_handle_reset con 0x5612d0a61000 session 0x5612dbe0f680
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 546955264 unmapped: 95707136 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 425 heartbeat osd_stat(store_statfs(0x1980d3000/0x0/0x1bfc00000, data 0x2ab0dff/0x2cdb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5470536 data_alloc: 218103808 data_used: 14548992
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 546955264 unmapped: 95707136 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 547053568 unmapped: 95608832 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 425 heartbeat osd_stat(store_statfs(0x1980d3000/0x0/0x1bfc00000, data 0x2ab0dff/0x2cdb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 425 heartbeat osd_stat(store_statfs(0x1980d3000/0x0/0x1bfc00000, data 0x2ab0dff/0x2cdb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 547119104 unmapped: 95543296 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 425 heartbeat osd_stat(store_statfs(0x1980d3000/0x0/0x1bfc00000, data 0x2ab0dff/0x2cdb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 547119104 unmapped: 95543296 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 547119104 unmapped: 95543296 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5497064 data_alloc: 234881024 data_used: 18354176
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 547119104 unmapped: 95543296 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 425 heartbeat osd_stat(store_statfs(0x1980d0000/0x0/0x1bfc00000, data 0x2ab3dff/0x2cde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 547119104 unmapped: 95543296 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 425 heartbeat osd_stat(store_statfs(0x1980d0000/0x0/0x1bfc00000, data 0x2ab3dff/0x2cde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 547119104 unmapped: 95543296 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 547119104 unmapped: 95543296 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 425 heartbeat osd_stat(store_statfs(0x1980d0000/0x0/0x1bfc00000, data 0x2ab3dff/0x2cde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 547119104 unmapped: 95543296 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5497064 data_alloc: 234881024 data_used: 18354176
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.660078049s of 14.272805214s, submitted: 75
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 547119104 unmapped: 95543296 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 425 ms_handle_reset con 0x5612d7ff6800 session 0x5612d2bb5860
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 547119104 unmapped: 95543296 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 425 ms_handle_reset con 0x5612d3695c00 session 0x5612d0a14000
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 547127296 unmapped: 95535104 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 425 ms_handle_reset con 0x5612d6c90800 session 0x5612dbe0e000
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 548225024 unmapped: 94437376 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 425 heartbeat osd_stat(store_statfs(0x1980bd000/0x0/0x1bfc00000, data 0x2ac6dff/0x2cf1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 425 heartbeat osd_stat(store_statfs(0x197a4d000/0x0/0x1bfc00000, data 0x312eddc/0x3358000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 548347904 unmapped: 94314496 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5547338 data_alloc: 234881024 data_used: 18329600
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 548364288 unmapped: 94298112 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 547708928 unmapped: 94953472 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 425 heartbeat osd_stat(store_statfs(0x1979b7000/0x0/0x1bfc00000, data 0x31ccddc/0x33f6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 547717120 unmapped: 94945280 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 547717120 unmapped: 94945280 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 547717120 unmapped: 94945280 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5563800 data_alloc: 234881024 data_used: 18759680
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 547717120 unmapped: 94945280 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 547717120 unmapped: 94945280 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.376773834s of 12.013556480s, submitted: 93
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 425 heartbeat osd_stat(store_statfs(0x1979ad000/0x0/0x1bfc00000, data 0x31d6ddc/0x3400000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 547717120 unmapped: 94945280 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 547717120 unmapped: 94945280 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 425 heartbeat osd_stat(store_statfs(0x19798a000/0x0/0x1bfc00000, data 0x31faddc/0x3424000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 547717120 unmapped: 94945280 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5561612 data_alloc: 234881024 data_used: 18759680
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 547717120 unmapped: 94945280 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 547717120 unmapped: 94945280 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 547725312 unmapped: 94937088 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 425 heartbeat osd_stat(store_statfs(0x19798a000/0x0/0x1bfc00000, data 0x31faddc/0x3424000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 547725312 unmapped: 94937088 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 547725312 unmapped: 94937088 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5561612 data_alloc: 234881024 data_used: 18759680
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 547725312 unmapped: 94937088 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 425 heartbeat osd_stat(store_statfs(0x19798a000/0x0/0x1bfc00000, data 0x31faddc/0x3424000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 547725312 unmapped: 94937088 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 425 heartbeat osd_stat(store_statfs(0x19798a000/0x0/0x1bfc00000, data 0x31faddc/0x3424000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 547725312 unmapped: 94937088 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.989348412s of 11.007923126s, submitted: 5
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 547725312 unmapped: 94937088 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 425 heartbeat osd_stat(store_statfs(0x197984000/0x0/0x1bfc00000, data 0x3200ddc/0x342a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 6602.4 total, 600.0 interval#012Cumulative writes: 77K writes, 312K keys, 77K commit groups, 1.0 writes per commit group, ingest: 0.32 GB, 0.05 MB/s#012Cumulative WAL: 77K writes, 29K syncs, 2.69 writes per sync, written: 0.32 GB, 0.05 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 4721 writes, 18K keys, 4721 commit groups, 1.0 writes per commit group, ingest: 21.16 MB, 0.04 MB/s#012Interval WAL: 4721 writes, 1802 syncs, 2.62 writes per sync, written: 0.02 GB, 0.04 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 547733504 unmapped: 94928896 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 425 ms_handle_reset con 0x5612d2942000 session 0x5612d3a1d2c0
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 425 ms_handle_reset con 0x5612d2e42800 session 0x5612d480a960
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5432472 data_alloc: 218103808 data_used: 14544896
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 425 ms_handle_reset con 0x5612d0a61000 session 0x5612d2e023c0
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 547741696 unmapped: 94920704 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 547741696 unmapped: 94920704 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 547741696 unmapped: 94920704 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 547741696 unmapped: 94920704 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 425 heartbeat osd_stat(store_statfs(0x198616000/0x0/0x1bfc00000, data 0x256fdcc/0x2798000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 547741696 unmapped: 94920704 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5425431 data_alloc: 218103808 data_used: 14434304
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 547741696 unmapped: 94920704 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 547741696 unmapped: 94920704 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 425 ms_handle_reset con 0x5612d3f4ec00 session 0x5612d480ab40
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 547741696 unmapped: 94920704 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 425 ms_handle_reset con 0x5612d1b54800 session 0x5612dbe0ef00
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 547741696 unmapped: 94920704 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 425 ms_handle_reset con 0x5612d1598800 session 0x5612d30ca960
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.920788765s of 10.953658104s, submitted: 17
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 425 ms_handle_reset con 0x5612d0a61000 session 0x5612d135a000
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 547741696 unmapped: 94920704 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 425 heartbeat osd_stat(store_statfs(0x1985f2000/0x0/0x1bfc00000, data 0x2593dcc/0x27bc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5428479 data_alloc: 218103808 data_used: 14434304
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 547741696 unmapped: 94920704 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 425 heartbeat osd_stat(store_statfs(0x1985f2000/0x0/0x1bfc00000, data 0x2593dcc/0x27bc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 547749888 unmapped: 94912512 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 547758080 unmapped: 94904320 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 425 heartbeat osd_stat(store_statfs(0x1985f2000/0x0/0x1bfc00000, data 0x2593dcc/0x27bc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 547758080 unmapped: 94904320 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 425 heartbeat osd_stat(store_statfs(0x1985f2000/0x0/0x1bfc00000, data 0x2593dcc/0x27bc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 547758080 unmapped: 94904320 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5429119 data_alloc: 218103808 data_used: 14512128
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 547758080 unmapped: 94904320 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 547758080 unmapped: 94904320 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 547758080 unmapped: 94904320 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 425 heartbeat osd_stat(store_statfs(0x1985f2000/0x0/0x1bfc00000, data 0x2593dcc/0x27bc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 547758080 unmapped: 94904320 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 425 heartbeat osd_stat(store_statfs(0x1985f2000/0x0/0x1bfc00000, data 0x2593dcc/0x27bc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 547766272 unmapped: 94896128 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5429119 data_alloc: 218103808 data_used: 14512128
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 547766272 unmapped: 94896128 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 425 heartbeat osd_stat(store_statfs(0x1985f2000/0x0/0x1bfc00000, data 0x2593dcc/0x27bc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 547766272 unmapped: 94896128 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 547766272 unmapped: 94896128 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.663141251s of 13.677927017s, submitted: 4
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 547774464 unmapped: 94887936 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 547840000 unmapped: 94822400 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5442651 data_alloc: 218103808 data_used: 15880192
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 547840000 unmapped: 94822400 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 547840000 unmapped: 94822400 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 425 heartbeat osd_stat(store_statfs(0x1985f2000/0x0/0x1bfc00000, data 0x2593dcc/0x27bc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 425 heartbeat osd_stat(store_statfs(0x1985f2000/0x0/0x1bfc00000, data 0x2593dcc/0x27bc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 547840000 unmapped: 94822400 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 425 heartbeat osd_stat(store_statfs(0x1985f2000/0x0/0x1bfc00000, data 0x2593dcc/0x27bc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 547840000 unmapped: 94822400 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 425 heartbeat osd_stat(store_statfs(0x1985e5000/0x0/0x1bfc00000, data 0x25a0dcc/0x27c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 547840000 unmapped: 94822400 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5445381 data_alloc: 218103808 data_used: 15880192
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 547840000 unmapped: 94822400 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 425 heartbeat osd_stat(store_statfs(0x1985e5000/0x0/0x1bfc00000, data 0x25a0dcc/0x27c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 547840000 unmapped: 94822400 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 547840000 unmapped: 94822400 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 547840000 unmapped: 94822400 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 548102144 unmapped: 94560256 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5449701 data_alloc: 218103808 data_used: 16822272
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 548102144 unmapped: 94560256 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.267621994s of 13.282384872s, submitted: 7
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 425 handle_osd_map epochs [425,426], i have 425, src has [1,426]
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 425 handle_osd_map epochs [426,426], i have 426, src has [1,426]
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 426 heartbeat osd_stat(store_statfs(0x1985e5000/0x0/0x1bfc00000, data 0x25a0dcc/0x27c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [0,0,0,1])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 426 ms_handle_reset con 0x5612d3b8d800 session 0x5612d06150e0
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 548102144 unmapped: 94560256 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 548110336 unmapped: 94552064 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 548118528 unmapped: 94543872 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 426 ms_handle_reset con 0x5612d2bcb400 session 0x5612daac3e00
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 426 ms_handle_reset con 0x5612d156c000 session 0x5612d0a621e0
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 426 ms_handle_reset con 0x5612d6c91400 session 0x5612d160b2c0
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 426 ms_handle_reset con 0x5612d0a61000 session 0x5612d0b705a0
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 426 ms_handle_reset con 0x5612d156c000 session 0x5612daac30e0
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 426 ms_handle_reset con 0x5612d2bcb400 session 0x5612d0b62d20
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 426 ms_handle_reset con 0x5612d3b8d800 session 0x5612d30df0e0
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 549199872 unmapped: 101335040 heap: 650534912 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 426 heartbeat osd_stat(store_statfs(0x197687000/0x0/0x1bfc00000, data 0x30ecafa/0x3317000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5539252 data_alloc: 218103808 data_used: 16834560
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 549208064 unmapped: 101326848 heap: 650534912 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 549208064 unmapped: 101326848 heap: 650534912 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 549224448 unmapped: 101310464 heap: 650534912 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 549240832 unmapped: 101294080 heap: 650534912 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 549281792 unmapped: 101253120 heap: 650534912 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 426 heartbeat osd_stat(store_statfs(0x197687000/0x0/0x1bfc00000, data 0x30ecafa/0x3317000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5539076 data_alloc: 218103808 data_used: 16834560
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 549289984 unmapped: 101244928 heap: 650534912 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 426 heartbeat osd_stat(store_statfs(0x197687000/0x0/0x1bfc00000, data 0x30ecafa/0x3317000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.730325699s of 10.394227028s, submitted: 196
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 549289984 unmapped: 101244928 heap: 650534912 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 549289984 unmapped: 101244928 heap: 650534912 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 426 heartbeat osd_stat(store_statfs(0x197687000/0x0/0x1bfc00000, data 0x30ecafa/0x3317000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 549339136 unmapped: 101195776 heap: 650534912 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 549339136 unmapped: 101195776 heap: 650534912 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5539076 data_alloc: 218103808 data_used: 16834560
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 549339136 unmapped: 101195776 heap: 650534912 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 426 heartbeat osd_stat(store_statfs(0x197687000/0x0/0x1bfc00000, data 0x30ecafa/0x3317000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 549339136 unmapped: 101195776 heap: 650534912 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 549339136 unmapped: 101195776 heap: 650534912 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 549339136 unmapped: 101195776 heap: 650534912 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 549339136 unmapped: 101195776 heap: 650534912 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5565596 data_alloc: 218103808 data_used: 16834560
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 426 heartbeat osd_stat(store_statfs(0x197685000/0x0/0x1bfc00000, data 0x3387afa/0x3319000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 549339136 unmapped: 101195776 heap: 650534912 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 549339136 unmapped: 101195776 heap: 650534912 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.988399506s of 11.228255272s, submitted: 84
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 549339136 unmapped: 101195776 heap: 650534912 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 426 ms_handle_reset con 0x5612d3f50c00 session 0x5612d480a5a0
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 549339136 unmapped: 101195776 heap: 650534912 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 549339136 unmapped: 101195776 heap: 650534912 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5649536 data_alloc: 234881024 data_used: 28246016
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 550232064 unmapped: 100302848 heap: 650534912 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 426 heartbeat osd_stat(store_statfs(0x197685000/0x0/0x1bfc00000, data 0x3387afa/0x3319000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 426 ms_handle_reset con 0x5612d7ff8000 session 0x5612d19932c0
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 426 ms_handle_reset con 0x5612d0a61400 session 0x5612d34632c0
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 550240256 unmapped: 100294656 heap: 650534912 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 426 ms_handle_reset con 0x5612d31b4c00 session 0x5612d480a3c0
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 426 ms_handle_reset con 0x5612dea3e400 session 0x5612d314fc20
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 426 heartbeat osd_stat(store_statfs(0x197685000/0x0/0x1bfc00000, data 0x3387afa/0x3319000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 550240256 unmapped: 100294656 heap: 650534912 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 550264832 unmapped: 100270080 heap: 650534912 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 426 heartbeat osd_stat(store_statfs(0x197661000/0x0/0x1bfc00000, data 0x33abafa/0x333d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 550264832 unmapped: 100270080 heap: 650534912 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5654369 data_alloc: 234881024 data_used: 28250112
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 426 heartbeat osd_stat(store_statfs(0x197661000/0x0/0x1bfc00000, data 0x33abafa/0x333d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 550264832 unmapped: 100270080 heap: 650534912 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 550264832 unmapped: 100270080 heap: 650534912 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 550264832 unmapped: 100270080 heap: 650534912 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 426 heartbeat osd_stat(store_statfs(0x197661000/0x0/0x1bfc00000, data 0x33abafa/0x333d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 550273024 unmapped: 100261888 heap: 650534912 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 550281216 unmapped: 100253696 heap: 650534912 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5654369 data_alloc: 234881024 data_used: 28250112
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.801042557s of 12.839351654s, submitted: 11
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 550731776 unmapped: 99803136 heap: 650534912 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 552673280 unmapped: 97861632 heap: 650534912 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 553721856 unmapped: 96813056 heap: 650534912 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 426 heartbeat osd_stat(store_statfs(0x196d98000/0x0/0x1bfc00000, data 0x3c74afa/0x3c06000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 553721856 unmapped: 96813056 heap: 650534912 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 426 heartbeat osd_stat(store_statfs(0x196d98000/0x0/0x1bfc00000, data 0x3c74afa/0x3c06000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 553721856 unmapped: 96813056 heap: 650534912 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5726633 data_alloc: 234881024 data_used: 28250112
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 553869312 unmapped: 96665600 heap: 650534912 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 553877504 unmapped: 96657408 heap: 650534912 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 553877504 unmapped: 96657408 heap: 650534912 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 553877504 unmapped: 96657408 heap: 650534912 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 426 heartbeat osd_stat(store_statfs(0x196d52000/0x0/0x1bfc00000, data 0x3cbaafa/0x3c4c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 553877504 unmapped: 96657408 heap: 650534912 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5734960 data_alloc: 234881024 data_used: 28508160
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 553877504 unmapped: 96657408 heap: 650534912 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 553877504 unmapped: 96657408 heap: 650534912 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 426 heartbeat osd_stat(store_statfs(0x196d52000/0x0/0x1bfc00000, data 0x3cbaafa/0x3c4c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 553885696 unmapped: 96649216 heap: 650534912 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 553885696 unmapped: 96649216 heap: 650534912 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 553885696 unmapped: 96649216 heap: 650534912 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5735440 data_alloc: 234881024 data_used: 28651520
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 553902080 unmapped: 96632832 heap: 650534912 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 426 heartbeat osd_stat(store_statfs(0x196d52000/0x0/0x1bfc00000, data 0x3cbaafa/0x3c4c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 426 heartbeat osd_stat(store_statfs(0x196d52000/0x0/0x1bfc00000, data 0x3cbaafa/0x3c4c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 553902080 unmapped: 96632832 heap: 650534912 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 553902080 unmapped: 96632832 heap: 650534912 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 553902080 unmapped: 96632832 heap: 650534912 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 553902080 unmapped: 96632832 heap: 650534912 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 426 heartbeat osd_stat(store_statfs(0x196d52000/0x0/0x1bfc00000, data 0x3cbaafa/0x3c4c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5735440 data_alloc: 234881024 data_used: 28651520
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 553902080 unmapped: 96632832 heap: 650534912 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 553902080 unmapped: 96632832 heap: 650534912 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 553902080 unmapped: 96632832 heap: 650534912 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 22.383529663s of 22.635568619s, submitted: 46
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 426 ms_handle_reset con 0x5612d90a9800 session 0x5612d30cb4a0
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 426 ms_handle_reset con 0x5612d1987800 session 0x5612d0a634a0
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 553910272 unmapped: 96624640 heap: 650534912 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 426 ms_handle_reset con 0x5612d0a61400 session 0x5612d5b0ed20
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 426 ms_handle_reset con 0x5612d1288c00 session 0x5612d480a1e0
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 426 ms_handle_reset con 0x5612d3f4e800 session 0x5612d2e03c20
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 426 ms_handle_reset con 0x5612d3f50000 session 0x5612d3446000
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 426 ms_handle_reset con 0x5612d0a61400 session 0x5612d30b7860
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 553910272 unmapped: 96624640 heap: 650534912 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 426 ms_handle_reset con 0x5612d1288c00 session 0x5612dbe0f680
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 426 ms_handle_reset con 0x5612d1987800 session 0x5612dbe0e1e0
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5807497 data_alloc: 234881024 data_used: 28540928
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 426 heartbeat osd_stat(store_statfs(0x196409000/0x0/0x1bfc00000, data 0x4602b23/0x4595000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 554057728 unmapped: 96477184 heap: 650534912 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 554057728 unmapped: 96477184 heap: 650534912 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 554057728 unmapped: 96477184 heap: 650534912 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 426 heartbeat osd_stat(store_statfs(0x196409000/0x0/0x1bfc00000, data 0x4602b5c/0x4595000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 554123264 unmapped: 96411648 heap: 650534912 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 426 ms_handle_reset con 0x5612dea3e000 session 0x5612d30df860
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 554139648 unmapped: 96395264 heap: 650534912 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 426 ms_handle_reset con 0x5612d0bc7000 session 0x5612d0a14b40
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 426 heartbeat osd_stat(store_statfs(0x19644f000/0x0/0x1bfc00000, data 0x45bcb5c/0x454f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5800573 data_alloc: 234881024 data_used: 28418048
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 554139648 unmapped: 96395264 heap: 650534912 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 426 ms_handle_reset con 0x5612d0a61400 session 0x5612d0a15c20
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 554147840 unmapped: 96387072 heap: 650534912 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 426 heartbeat osd_stat(store_statfs(0x19644f000/0x0/0x1bfc00000, data 0x45bcb5c/0x454f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 426 ms_handle_reset con 0x5612d1288c00 session 0x5612d160b680
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 426 ms_handle_reset con 0x5612dea3e000 session 0x5612d3a1cb40
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 426 handle_osd_map epochs [426,427], i have 426, src has [1,427]
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 427 handle_osd_map epochs [427,427], i have 427, src has [1,427]
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 427 ms_handle_reset con 0x5612d1987800 session 0x5612d160a960
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 554188800 unmapped: 96346112 heap: 650534912 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 427 ms_handle_reset con 0x5612d1598c00 session 0x5612d31b1a40
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 427 ms_handle_reset con 0x5612d0a61400 session 0x5612d3446f00
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 554188800 unmapped: 96346112 heap: 650534912 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 427 ms_handle_reset con 0x5612d6c91000 session 0x5612d30cb2c0
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.409293175s of 11.194550514s, submitted: 100
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 427 ms_handle_reset con 0x5612d1b54800 session 0x5612d314e780
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 554188800 unmapped: 96346112 heap: 650534912 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5808549 data_alloc: 234881024 data_used: 31944704
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 427 ms_handle_reset con 0x5612d7ff9400 session 0x5612d0b62f00
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 556359680 unmapped: 94175232 heap: 650534912 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 427 heartbeat osd_stat(store_statfs(0x19644e000/0x0/0x1bfc00000, data 0x432382a/0x4550000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 557195264 unmapped: 93339648 heap: 650534912 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 557195264 unmapped: 93339648 heap: 650534912 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 427 handle_osd_map epochs [427,428], i have 427, src has [1,428]
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 428 handle_osd_map epochs [428,428], i have 428, src has [1,428]
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 557203456 unmapped: 93331456 heap: 650534912 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 557219840 unmapped: 93315072 heap: 650534912 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5848563 data_alloc: 251658240 data_used: 36986880
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 557219840 unmapped: 93315072 heap: 650534912 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 428 heartbeat osd_stat(store_statfs(0x19646e000/0x0/0x1bfc00000, data 0x43013dc/0x452f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 557219840 unmapped: 93315072 heap: 650534912 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 557228032 unmapped: 93306880 heap: 650534912 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 557228032 unmapped: 93306880 heap: 650534912 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 557228032 unmapped: 93306880 heap: 650534912 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 428 heartbeat osd_stat(store_statfs(0x19646e000/0x0/0x1bfc00000, data 0x43013dc/0x452f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.913703918s of 10.942051888s, submitted: 16
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 428 ms_handle_reset con 0x5612d27b2800 session 0x5612d30ca1e0
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5847683 data_alloc: 251658240 data_used: 36986880
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 557228032 unmapped: 93306880 heap: 650534912 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 428 ms_handle_reset con 0x5612d0a61400 session 0x5612d0aeb680
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 428 heartbeat osd_stat(store_statfs(0x197417000/0x0/0x1bfc00000, data 0x33593dc/0x3587000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 560308224 unmapped: 90226688 heap: 650534912 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 560308224 unmapped: 90226688 heap: 650534912 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 560635904 unmapped: 89899008 heap: 650534912 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 560701440 unmapped: 89833472 heap: 650534912 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5742471 data_alloc: 234881024 data_used: 28274688
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 560947200 unmapped: 89587712 heap: 650534912 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 560947200 unmapped: 89587712 heap: 650534912 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 428 heartbeat osd_stat(store_statfs(0x196b20000/0x0/0x1bfc00000, data 0x3c473dc/0x3e75000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 560947200 unmapped: 89587712 heap: 650534912 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 560947200 unmapped: 89587712 heap: 650534912 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 428 heartbeat osd_stat(store_statfs(0x196b20000/0x0/0x1bfc00000, data 0x3c473dc/0x3e75000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 560947200 unmapped: 89587712 heap: 650534912 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5742951 data_alloc: 234881024 data_used: 28286976
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 560955392 unmapped: 89579520 heap: 650534912 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.063028336s of 10.723139763s, submitted: 142
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 560955392 unmapped: 89579520 heap: 650534912 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 428 heartbeat osd_stat(store_statfs(0x196b08000/0x0/0x1bfc00000, data 0x3c683dc/0x3e96000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 560955392 unmapped: 89579520 heap: 650534912 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 428 ms_handle_reset con 0x5612d7ff7000 session 0x5612d34634a0
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 428 ms_handle_reset con 0x5612d156d400 session 0x5612d06152c0
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 560955392 unmapped: 89579520 heap: 650534912 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 428 heartbeat osd_stat(store_statfs(0x196b08000/0x0/0x1bfc00000, data 0x3c683dc/0x3e96000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,1])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 560963584 unmapped: 89571328 heap: 650534912 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5526195 data_alloc: 234881024 data_used: 18755584
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 428 heartbeat osd_stat(store_statfs(0x196b08000/0x0/0x1bfc00000, data 0x3c683dc/0x3e96000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 558989312 unmapped: 91545600 heap: 650534912 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 428 ms_handle_reset con 0x5612d8423c00 session 0x5612d3463e00
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 558989312 unmapped: 91545600 heap: 650534912 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 558989312 unmapped: 91545600 heap: 650534912 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 558997504 unmapped: 91537408 heap: 650534912 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 428 heartbeat osd_stat(store_statfs(0x197d85000/0x0/0x1bfc00000, data 0x29ed36a/0x2c19000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 558997504 unmapped: 91537408 heap: 650534912 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5525134 data_alloc: 234881024 data_used: 18751488
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 558997504 unmapped: 91537408 heap: 650534912 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 558997504 unmapped: 91537408 heap: 650534912 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 558997504 unmapped: 91537408 heap: 650534912 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 559005696 unmapped: 91529216 heap: 650534912 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.855386734s of 13.235659599s, submitted: 28
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 428 ms_handle_reset con 0x5612d0a61000 session 0x5612d4e83680
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 428 ms_handle_reset con 0x5612d0a61400 session 0x5612d2e03c20
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 558841856 unmapped: 91693056 heap: 650534912 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 428 heartbeat osd_stat(store_statfs(0x199198000/0x0/0x1bfc00000, data 0x15da36a/0x1806000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5298148 data_alloc: 218103808 data_used: 7229440
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 558841856 unmapped: 91693056 heap: 650534912 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 428 heartbeat osd_stat(store_statfs(0x199198000/0x0/0x1bfc00000, data 0x15da36a/0x1806000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 558841856 unmapped: 91693056 heap: 650534912 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 558841856 unmapped: 91693056 heap: 650534912 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 558841856 unmapped: 91693056 heap: 650534912 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 558841856 unmapped: 91693056 heap: 650534912 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5298148 data_alloc: 218103808 data_used: 7229440
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 428 heartbeat osd_stat(store_statfs(0x199198000/0x0/0x1bfc00000, data 0x15da36a/0x1806000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 558841856 unmapped: 91693056 heap: 650534912 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 558841856 unmapped: 91693056 heap: 650534912 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 558841856 unmapped: 91693056 heap: 650534912 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 558841856 unmapped: 91693056 heap: 650534912 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 428 heartbeat osd_stat(store_statfs(0x199198000/0x0/0x1bfc00000, data 0x15da36a/0x1806000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 558850048 unmapped: 91684864 heap: 650534912 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5298148 data_alloc: 218103808 data_used: 7229440
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 558850048 unmapped: 91684864 heap: 650534912 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 558850048 unmapped: 91684864 heap: 650534912 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 558850048 unmapped: 91684864 heap: 650534912 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 558850048 unmapped: 91684864 heap: 650534912 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 558850048 unmapped: 91684864 heap: 650534912 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 428 heartbeat osd_stat(store_statfs(0x199198000/0x0/0x1bfc00000, data 0x15da36a/0x1806000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5298148 data_alloc: 218103808 data_used: 7229440
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 558850048 unmapped: 91684864 heap: 650534912 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 558858240 unmapped: 91676672 heap: 650534912 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 558858240 unmapped: 91676672 heap: 650534912 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 558858240 unmapped: 91676672 heap: 650534912 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 428 heartbeat osd_stat(store_statfs(0x199198000/0x0/0x1bfc00000, data 0x15da36a/0x1806000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 558858240 unmapped: 91676672 heap: 650534912 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5298148 data_alloc: 218103808 data_used: 7229440
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 558858240 unmapped: 91676672 heap: 650534912 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 558858240 unmapped: 91676672 heap: 650534912 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 558858240 unmapped: 91676672 heap: 650534912 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 558858240 unmapped: 91676672 heap: 650534912 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 428 heartbeat osd_stat(store_statfs(0x199198000/0x0/0x1bfc00000, data 0x15da36a/0x1806000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 558866432 unmapped: 91668480 heap: 650534912 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5298148 data_alloc: 218103808 data_used: 7229440
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 558866432 unmapped: 91668480 heap: 650534912 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 558866432 unmapped: 91668480 heap: 650534912 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 558866432 unmapped: 91668480 heap: 650534912 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 428 heartbeat osd_stat(store_statfs(0x199198000/0x0/0x1bfc00000, data 0x15da36a/0x1806000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 558866432 unmapped: 91668480 heap: 650534912 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 558866432 unmapped: 91668480 heap: 650534912 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 31.258146286s of 31.315561295s, submitted: 28
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5301738 data_alloc: 218103808 data_used: 7229440
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 428 ms_handle_reset con 0x5612d7ff7400 session 0x5612d26461e0
Dec  6 03:29:59 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mgr module ls"} v 0) v1
Dec  6 03:29:59 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/447824310' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 428 ms_handle_reset con 0x5612d1987800 session 0x5612dbe0ef00
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 428 ms_handle_reset con 0x5612d5b1b000 session 0x5612d0a14000
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 428 ms_handle_reset con 0x5612d5b1ac00 session 0x5612d2e023c0
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 428 ms_handle_reset con 0x5612d5b1ac00 session 0x5612d480ad20
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 559947776 unmapped: 90587136 heap: 650534912 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 559947776 unmapped: 90587136 heap: 650534912 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 428 heartbeat osd_stat(store_statfs(0x198ace000/0x0/0x1bfc00000, data 0x1ca33cc/0x1ed0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 559947776 unmapped: 90587136 heap: 650534912 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 559947776 unmapped: 90587136 heap: 650534912 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 428 ms_handle_reset con 0x5612db77f400 session 0x5612d077d2c0
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 428 ms_handle_reset con 0x5612d75cd000 session 0x5612dbe0e5a0
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 559947776 unmapped: 90587136 heap: 650534912 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 428 heartbeat osd_stat(store_statfs(0x198ace000/0x0/0x1bfc00000, data 0x1ca33cc/0x1ed0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5361216 data_alloc: 218103808 data_used: 7229440
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 559947776 unmapped: 90587136 heap: 650534912 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 428 heartbeat osd_stat(store_statfs(0x198ace000/0x0/0x1bfc00000, data 0x1ca33cc/0x1ed0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 559947776 unmapped: 90587136 heap: 650534912 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 428 ms_handle_reset con 0x5612d3695c00 session 0x5612d27a0000
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 428 ms_handle_reset con 0x5612d8e80400 session 0x5612d30ca3c0
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 559947776 unmapped: 90587136 heap: 650534912 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 559947776 unmapped: 90587136 heap: 650534912 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 428 heartbeat osd_stat(store_statfs(0x198ace000/0x0/0x1bfc00000, data 0x1ca33cc/0x1ed0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 559947776 unmapped: 90587136 heap: 650534912 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5409569 data_alloc: 218103808 data_used: 14041088
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 559947776 unmapped: 90587136 heap: 650534912 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 428 heartbeat osd_stat(store_statfs(0x198ace000/0x0/0x1bfc00000, data 0x1ca33cc/0x1ed0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 559947776 unmapped: 90587136 heap: 650534912 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 428 heartbeat osd_stat(store_statfs(0x198ace000/0x0/0x1bfc00000, data 0x1ca33cc/0x1ed0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 559947776 unmapped: 90587136 heap: 650534912 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 559947776 unmapped: 90587136 heap: 650534912 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 428 heartbeat osd_stat(store_statfs(0x198ace000/0x0/0x1bfc00000, data 0x1ca33cc/0x1ed0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 559947776 unmapped: 90587136 heap: 650534912 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5409569 data_alloc: 218103808 data_used: 14041088
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 559947776 unmapped: 90587136 heap: 650534912 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 559947776 unmapped: 90587136 heap: 650534912 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 559947776 unmapped: 90587136 heap: 650534912 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 559947776 unmapped: 90587136 heap: 650534912 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 428 heartbeat osd_stat(store_statfs(0x198ace000/0x0/0x1bfc00000, data 0x1ca33cc/0x1ed0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 19.014648438s of 19.252319336s, submitted: 53
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 560758784 unmapped: 89776128 heap: 650534912 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5446069 data_alloc: 218103808 data_used: 14417920
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561029120 unmapped: 89505792 heap: 650534912 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 428 heartbeat osd_stat(store_statfs(0x198716000/0x0/0x1bfc00000, data 0x204d3cc/0x227a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,1])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 560365568 unmapped: 90169344 heap: 650534912 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 560365568 unmapped: 90169344 heap: 650534912 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 560365568 unmapped: 90169344 heap: 650534912 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 560365568 unmapped: 90169344 heap: 650534912 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 428 heartbeat osd_stat(store_statfs(0x19868c000/0x0/0x1bfc00000, data 0x20df3cc/0x230c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5457625 data_alloc: 218103808 data_used: 14229504
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 560365568 unmapped: 90169344 heap: 650534912 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 560365568 unmapped: 90169344 heap: 650534912 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561414144 unmapped: 89120768 heap: 650534912 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 428 heartbeat osd_stat(store_statfs(0x198671000/0x0/0x1bfc00000, data 0x21003cc/0x232d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561414144 unmapped: 89120768 heap: 650534912 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 428 heartbeat osd_stat(store_statfs(0x198671000/0x0/0x1bfc00000, data 0x21003cc/0x232d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561414144 unmapped: 89120768 heap: 650534912 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 428 heartbeat osd_stat(store_statfs(0x198671000/0x0/0x1bfc00000, data 0x21003cc/0x232d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5454861 data_alloc: 218103808 data_used: 14233600
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561414144 unmapped: 89120768 heap: 650534912 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561414144 unmapped: 89120768 heap: 650534912 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 428 heartbeat osd_stat(store_statfs(0x198671000/0x0/0x1bfc00000, data 0x21003cc/0x232d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561414144 unmapped: 89120768 heap: 650534912 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561414144 unmapped: 89120768 heap: 650534912 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 428 heartbeat osd_stat(store_statfs(0x198671000/0x0/0x1bfc00000, data 0x21003cc/0x232d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561422336 unmapped: 89112576 heap: 650534912 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 14.889123917s of 16.025707245s, submitted: 91
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5454917 data_alloc: 218103808 data_used: 14233600
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 560373760 unmapped: 90161152 heap: 650534912 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 428 ms_handle_reset con 0x5612d3695000 session 0x5612d4e82d20
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 428 ms_handle_reset con 0x5612d1987c00 session 0x5612d18e90e0
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561577984 unmapped: 92635136 heap: 654213120 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 428 heartbeat osd_stat(store_statfs(0x1979b0000/0x0/0x1bfc00000, data 0x2dc03f5/0x2fee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561577984 unmapped: 92635136 heap: 654213120 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561577984 unmapped: 92635136 heap: 654213120 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561577984 unmapped: 92635136 heap: 654213120 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 428 heartbeat osd_stat(store_statfs(0x1979b0000/0x0/0x1bfc00000, data 0x2dc042e/0x2fee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5552876 data_alloc: 218103808 data_used: 14233600
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561577984 unmapped: 92635136 heap: 654213120 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561577984 unmapped: 92635136 heap: 654213120 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 428 ms_handle_reset con 0x5612d293bc00 session 0x5612d3a1dc20
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 428 ms_handle_reset con 0x5612d297bc00 session 0x5612d0a63680
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561586176 unmapped: 92626944 heap: 654213120 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 428 ms_handle_reset con 0x5612d2823000 session 0x5612d2e03a40
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 428 ms_handle_reset con 0x5612d1987c00 session 0x5612d5b0ef00
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561602560 unmapped: 92610560 heap: 654213120 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 428 heartbeat osd_stat(store_statfs(0x1979ae000/0x0/0x1bfc00000, data 0x2dc0461/0x2ff0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561602560 unmapped: 92610560 heap: 654213120 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5557948 data_alloc: 218103808 data_used: 14352384
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561635328 unmapped: 92577792 heap: 654213120 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 428 heartbeat osd_stat(store_statfs(0x1979ae000/0x0/0x1bfc00000, data 0x2dc0461/0x2ff0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562094080 unmapped: 92119040 heap: 654213120 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562094080 unmapped: 92119040 heap: 654213120 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562094080 unmapped: 92119040 heap: 654213120 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 428 heartbeat osd_stat(store_statfs(0x1979ae000/0x0/0x1bfc00000, data 0x2dc0461/0x2ff0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562094080 unmapped: 92119040 heap: 654213120 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5649788 data_alloc: 234881024 data_used: 27287552
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562094080 unmapped: 92119040 heap: 654213120 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562094080 unmapped: 92119040 heap: 654213120 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 16.710140228s of 16.895963669s, submitted: 48
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 428 heartbeat osd_stat(store_statfs(0x1979ae000/0x0/0x1bfc00000, data 0x2dc0461/0x2ff0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562094080 unmapped: 92119040 heap: 654213120 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562094080 unmapped: 92119040 heap: 654213120 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 428 heartbeat osd_stat(store_statfs(0x1979aa000/0x0/0x1bfc00000, data 0x2dc4461/0x2ff4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562094080 unmapped: 92119040 heap: 654213120 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5650348 data_alloc: 234881024 data_used: 27291648
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562094080 unmapped: 92119040 heap: 654213120 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 563027968 unmapped: 91185152 heap: 654213120 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 428 heartbeat osd_stat(store_statfs(0x1979aa000/0x0/0x1bfc00000, data 0x2dc4461/0x2ff4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562372608 unmapped: 91840512 heap: 654213120 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 428 heartbeat osd_stat(store_statfs(0x197507000/0x0/0x1bfc00000, data 0x3267461/0x3497000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562397184 unmapped: 91815936 heap: 654213120 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562397184 unmapped: 91815936 heap: 654213120 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5709724 data_alloc: 234881024 data_used: 28033024
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562397184 unmapped: 91815936 heap: 654213120 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562397184 unmapped: 91815936 heap: 654213120 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.023353577s of 10.240151405s, submitted: 84
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562397184 unmapped: 91815936 heap: 654213120 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 428 heartbeat osd_stat(store_statfs(0x197502000/0x0/0x1bfc00000, data 0x326c461/0x349c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562397184 unmapped: 91815936 heap: 654213120 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562397184 unmapped: 91815936 heap: 654213120 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 428 ms_handle_reset con 0x5612d293bc00 session 0x5612d2bb5860
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 428 ms_handle_reset con 0x5612d2823000 session 0x5612d0a9b680
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5473326 data_alloc: 218103808 data_used: 14225408
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 428 ms_handle_reset con 0x5612d1b53800 session 0x5612dbe0e960
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 560562176 unmapped: 93650944 heap: 654213120 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 560562176 unmapped: 93650944 heap: 654213120 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 560562176 unmapped: 93650944 heap: 654213120 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 428 heartbeat osd_stat(store_statfs(0x197ce7000/0x0/0x1bfc00000, data 0x21123cc/0x233f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 560562176 unmapped: 93650944 heap: 654213120 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 560562176 unmapped: 93650944 heap: 654213120 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5471313 data_alloc: 218103808 data_used: 14221312
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 560562176 unmapped: 93650944 heap: 654213120 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 560562176 unmapped: 93650944 heap: 654213120 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 428 ms_handle_reset con 0x5612d3695c00 session 0x5612d2bb4f00
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 428 ms_handle_reset con 0x5612d5b1ac00 session 0x5612d3a1cd20
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 428 heartbeat osd_stat(store_statfs(0x19865f000/0x0/0x1bfc00000, data 0x21123cc/0x233f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 428 ms_handle_reset con 0x5612d3695c00 session 0x5612d3446f00
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 555589632 unmapped: 98623488 heap: 654213120 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 555589632 unmapped: 98623488 heap: 654213120 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 555589632 unmapped: 98623488 heap: 654213120 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5327823 data_alloc: 218103808 data_used: 7229440
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 555589632 unmapped: 98623488 heap: 654213120 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 555589632 unmapped: 98623488 heap: 654213120 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 428 heartbeat osd_stat(store_statfs(0x199197000/0x0/0x1bfc00000, data 0x15da36a/0x1806000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 555589632 unmapped: 98623488 heap: 654213120 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 555589632 unmapped: 98623488 heap: 654213120 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 428 heartbeat osd_stat(store_statfs(0x199197000/0x0/0x1bfc00000, data 0x15da36a/0x1806000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 555589632 unmapped: 98623488 heap: 654213120 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5327823 data_alloc: 218103808 data_used: 7229440
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 555589632 unmapped: 98623488 heap: 654213120 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 555589632 unmapped: 98623488 heap: 654213120 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 428 heartbeat osd_stat(store_statfs(0x199197000/0x0/0x1bfc00000, data 0x15da36a/0x1806000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 555589632 unmapped: 98623488 heap: 654213120 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 555589632 unmapped: 98623488 heap: 654213120 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 555589632 unmapped: 98623488 heap: 654213120 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5327823 data_alloc: 218103808 data_used: 7229440
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 555589632 unmapped: 98623488 heap: 654213120 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 555589632 unmapped: 98623488 heap: 654213120 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 428 heartbeat osd_stat(store_statfs(0x199197000/0x0/0x1bfc00000, data 0x15da36a/0x1806000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 555589632 unmapped: 98623488 heap: 654213120 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 555589632 unmapped: 98623488 heap: 654213120 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 428 heartbeat osd_stat(store_statfs(0x199197000/0x0/0x1bfc00000, data 0x15da36a/0x1806000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 555589632 unmapped: 98623488 heap: 654213120 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 428 heartbeat osd_stat(store_statfs(0x199197000/0x0/0x1bfc00000, data 0x15da36a/0x1806000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5327823 data_alloc: 218103808 data_used: 7229440
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 555597824 unmapped: 98615296 heap: 654213120 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 555597824 unmapped: 98615296 heap: 654213120 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 428 heartbeat osd_stat(store_statfs(0x199197000/0x0/0x1bfc00000, data 0x15da36a/0x1806000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 555597824 unmapped: 98615296 heap: 654213120 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 428 heartbeat osd_stat(store_statfs(0x199197000/0x0/0x1bfc00000, data 0x15da36a/0x1806000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 555597824 unmapped: 98615296 heap: 654213120 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 428 heartbeat osd_stat(store_statfs(0x199197000/0x0/0x1bfc00000, data 0x15da36a/0x1806000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 555597824 unmapped: 98615296 heap: 654213120 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 428 heartbeat osd_stat(store_statfs(0x199197000/0x0/0x1bfc00000, data 0x15da36a/0x1806000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5327823 data_alloc: 218103808 data_used: 7229440
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 555597824 unmapped: 98615296 heap: 654213120 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 555597824 unmapped: 98615296 heap: 654213120 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 428 heartbeat osd_stat(store_statfs(0x199197000/0x0/0x1bfc00000, data 0x15da36a/0x1806000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 555597824 unmapped: 98615296 heap: 654213120 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 428 heartbeat osd_stat(store_statfs(0x199197000/0x0/0x1bfc00000, data 0x15da36a/0x1806000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 555606016 unmapped: 98607104 heap: 654213120 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 428 heartbeat osd_stat(store_statfs(0x199197000/0x0/0x1bfc00000, data 0x15da36a/0x1806000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 555606016 unmapped: 98607104 heap: 654213120 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5327823 data_alloc: 218103808 data_used: 7229440
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 555606016 unmapped: 98607104 heap: 654213120 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 555606016 unmapped: 98607104 heap: 654213120 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 555606016 unmapped: 98607104 heap: 654213120 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 40.273403168s of 40.580463409s, submitted: 124
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 428 ms_handle_reset con 0x5612d90a9c00 session 0x5612d3446000
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 428 heartbeat osd_stat(store_statfs(0x1982a2000/0x0/0x1bfc00000, data 0x24d036a/0x26fc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 555614208 unmapped: 102801408 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 555614208 unmapped: 102801408 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 428 heartbeat osd_stat(store_statfs(0x1982a2000/0x0/0x1bfc00000, data 0x24d036a/0x26fc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5439897 data_alloc: 218103808 data_used: 7229440
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 555614208 unmapped: 102801408 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 428 ms_handle_reset con 0x5612d297ac00 session 0x5612d160b680
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 555622400 unmapped: 102793216 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 555622400 unmapped: 102793216 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 428 ms_handle_reset con 0x5612d31b5000 session 0x5612d34634a0
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 428 heartbeat osd_stat(store_statfs(0x1982a2000/0x0/0x1bfc00000, data 0x24d036a/0x26fc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 555622400 unmapped: 102793216 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 555622400 unmapped: 102793216 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5439897 data_alloc: 218103808 data_used: 7229440
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 428 ms_handle_reset con 0x5612d31b5000 session 0x5612d0a14000
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 428 ms_handle_reset con 0x5612d297ac00 session 0x5612d2e023c0
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 555433984 unmapped: 102981632 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 555433984 unmapped: 102981632 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 555622400 unmapped: 102793216 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 555655168 unmapped: 102760448 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 428 heartbeat osd_stat(store_statfs(0x19827e000/0x0/0x1bfc00000, data 0x24f436a/0x2720000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 555655168 unmapped: 102760448 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5543837 data_alloc: 234881024 data_used: 21110784
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 428 ms_handle_reset con 0x5612d3695c00 session 0x5612d2926000
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 428 ms_handle_reset con 0x5612d5b1ac00 session 0x5612d314f0e0
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 555655168 unmapped: 102760448 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 555655168 unmapped: 102760448 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 428 heartbeat osd_stat(store_statfs(0x19827e000/0x0/0x1bfc00000, data 0x24f436a/0x2720000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 428 heartbeat osd_stat(store_statfs(0x19827e000/0x0/0x1bfc00000, data 0x24f436a/0x2720000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 555655168 unmapped: 102760448 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 555655168 unmapped: 102760448 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 555655168 unmapped: 102760448 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5543837 data_alloc: 234881024 data_used: 21110784
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 555655168 unmapped: 102760448 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 555655168 unmapped: 102760448 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 428 heartbeat osd_stat(store_statfs(0x19827e000/0x0/0x1bfc00000, data 0x24f436a/0x2720000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 555655168 unmapped: 102760448 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 555655168 unmapped: 102760448 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 428 ms_handle_reset con 0x5612d1987800 session 0x5612d0773c20
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 428 ms_handle_reset con 0x5612d5b1a400 session 0x5612d5b0f860
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 428 heartbeat osd_stat(store_statfs(0x19827e000/0x0/0x1bfc00000, data 0x24f436a/0x2720000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 555655168 unmapped: 102760448 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5543837 data_alloc: 234881024 data_used: 21110784
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 555655168 unmapped: 102760448 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 555655168 unmapped: 102760448 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 555655168 unmapped: 102760448 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 428 heartbeat osd_stat(store_statfs(0x19827e000/0x0/0x1bfc00000, data 0x24f436a/0x2720000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 555655168 unmapped: 102760448 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 428 ms_handle_reset con 0x5612d1987800 session 0x5612e1713c20
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 428 ms_handle_reset con 0x5612d297ac00 session 0x5612d0a554a0
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 555655168 unmapped: 102760448 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 27.093421936s of 27.183341980s, submitted: 19
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 428 ms_handle_reset con 0x5612d0bc6800 session 0x5612d2e03680
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5335615 data_alloc: 218103808 data_used: 7229440
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 555655168 unmapped: 102760448 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 555663360 unmapped: 102752256 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 555663360 unmapped: 102752256 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 555663360 unmapped: 102752256 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 428 heartbeat osd_stat(store_statfs(0x199198000/0x0/0x1bfc00000, data 0x15da36a/0x1806000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 555663360 unmapped: 102752256 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5335615 data_alloc: 218103808 data_used: 7229440
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 555663360 unmapped: 102752256 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 555663360 unmapped: 102752256 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 555679744 unmapped: 102735872 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 428 heartbeat osd_stat(store_statfs(0x199198000/0x0/0x1bfc00000, data 0x15da36a/0x1806000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 555679744 unmapped: 102735872 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 555679744 unmapped: 102735872 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 428 heartbeat osd_stat(store_statfs(0x199198000/0x0/0x1bfc00000, data 0x15da36a/0x1806000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5335615 data_alloc: 218103808 data_used: 7229440
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 555679744 unmapped: 102735872 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 555679744 unmapped: 102735872 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 555679744 unmapped: 102735872 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 428 heartbeat osd_stat(store_statfs(0x199198000/0x0/0x1bfc00000, data 0x15da36a/0x1806000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 555679744 unmapped: 102735872 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 555679744 unmapped: 102735872 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5335615 data_alloc: 218103808 data_used: 7229440
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 555679744 unmapped: 102735872 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 555696128 unmapped: 102719488 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:29:59 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:29:59.925 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 555696128 unmapped: 102719488 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 428 heartbeat osd_stat(store_statfs(0x199198000/0x0/0x1bfc00000, data 0x15da36a/0x1806000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 555696128 unmapped: 102719488 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 428 heartbeat osd_stat(store_statfs(0x199198000/0x0/0x1bfc00000, data 0x15da36a/0x1806000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 555696128 unmapped: 102719488 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5335615 data_alloc: 218103808 data_used: 7229440
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 428 heartbeat osd_stat(store_statfs(0x199198000/0x0/0x1bfc00000, data 0x15da36a/0x1806000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 555696128 unmapped: 102719488 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 555696128 unmapped: 102719488 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 555696128 unmapped: 102719488 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 555696128 unmapped: 102719488 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 428 heartbeat osd_stat(store_statfs(0x199198000/0x0/0x1bfc00000, data 0x15da36a/0x1806000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 555704320 unmapped: 102711296 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5335615 data_alloc: 218103808 data_used: 7229440
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 555704320 unmapped: 102711296 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 428 heartbeat osd_stat(store_statfs(0x199198000/0x0/0x1bfc00000, data 0x15da36a/0x1806000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 555704320 unmapped: 102711296 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 428 heartbeat osd_stat(store_statfs(0x199198000/0x0/0x1bfc00000, data 0x15da36a/0x1806000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 555704320 unmapped: 102711296 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 428 heartbeat osd_stat(store_statfs(0x199198000/0x0/0x1bfc00000, data 0x15da36a/0x1806000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 555704320 unmapped: 102711296 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 428 heartbeat osd_stat(store_statfs(0x199198000/0x0/0x1bfc00000, data 0x15da36a/0x1806000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 555704320 unmapped: 102711296 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5335615 data_alloc: 218103808 data_used: 7229440
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 555704320 unmapped: 102711296 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 555704320 unmapped: 102711296 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 428 heartbeat osd_stat(store_statfs(0x199198000/0x0/0x1bfc00000, data 0x15da36a/0x1806000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 428 ms_handle_reset con 0x5612d87cdc00 session 0x5612d3446b40
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 428 ms_handle_reset con 0x5612d87cdc00 session 0x5612d0632780
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 428 ms_handle_reset con 0x5612d87cec00 session 0x5612d314e780
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 555712512 unmapped: 102703104 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 428 ms_handle_reset con 0x5612d157d000 session 0x5612d0a62780
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 33.082977295s of 33.105510712s, submitted: 10
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 428 ms_handle_reset con 0x5612d1b51800 session 0x5612d3463a40
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 428 ms_handle_reset con 0x5612d803f800 session 0x5612d34463c0
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 428 ms_handle_reset con 0x5612d157d000 session 0x5612d2926b40
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 428 ms_handle_reset con 0x5612d1b51800 session 0x5612d480a1e0
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 428 ms_handle_reset con 0x5612d87cdc00 session 0x5612d34472c0
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 555728896 unmapped: 102686720 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 428 ms_handle_reset con 0x5612d87cec00 session 0x5612cff53860
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 428 ms_handle_reset con 0x5612d0bd7000 session 0x5612d3462960
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 428 ms_handle_reset con 0x5612d157d000 session 0x5612d0b63680
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 555728896 unmapped: 102686720 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 428 ms_handle_reset con 0x5612d1b51800 session 0x5612dc405e00
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 428 ms_handle_reset con 0x5612d87cdc00 session 0x5612dbe0e1e0
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 428 ms_handle_reset con 0x5612d87cec00 session 0x5612d31b0d20
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 428 ms_handle_reset con 0x5612d75cd800 session 0x5612d0a15a40
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 428 ms_handle_reset con 0x5612d157d000 session 0x5612d27a0000
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 428 ms_handle_reset con 0x5612d1b51800 session 0x5612d077de00
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5464349 data_alloc: 218103808 data_used: 7229440
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 556040192 unmapped: 102375424 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 428 ms_handle_reset con 0x5612d87cdc00 session 0x5612daac2f00
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 428 ms_handle_reset con 0x5612d87cec00 session 0x5612d3a1d860
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 428 ms_handle_reset con 0x5612d1288c00 session 0x5612daac3860
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 428 ms_handle_reset con 0x5612d4652000 session 0x5612d30b6b40
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 428 ms_handle_reset con 0x5612d157d000 session 0x5612d2e02000
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 556040192 unmapped: 102375424 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 428 ms_handle_reset con 0x5612d1b51800 session 0x5612d2bb5c20
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 428 ms_handle_reset con 0x5612d87cdc00 session 0x5612d30b65a0
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 428 ms_handle_reset con 0x5612d87cec00 session 0x5612d34472c0
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 428 ms_handle_reset con 0x5612d157d000 session 0x5612d2926b40
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 428 ms_handle_reset con 0x5612d1b51800 session 0x5612d3463a40
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 428 ms_handle_reset con 0x5612d87cdc00 session 0x5612d135b860
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 428 ms_handle_reset con 0x5612d4652000 session 0x5612d2e03680
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 428 ms_handle_reset con 0x5612d4428800 session 0x5612e1713c20
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 556228608 unmapped: 102187008 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 428 heartbeat osd_stat(store_statfs(0x198123000/0x0/0x1bfc00000, data 0x264c3ec/0x287b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 428 ms_handle_reset con 0x5612d5b1a800 session 0x5612d2e02d20
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 556236800 unmapped: 102178816 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 428 ms_handle_reset con 0x5612d31b5400 session 0x5612d4e83c20
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 428 heartbeat osd_stat(store_statfs(0x197749000/0x0/0x1bfc00000, data 0x30263ec/0x3255000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 556204032 unmapped: 102211584 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5582949 data_alloc: 218103808 data_used: 10072064
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 556204032 unmapped: 102211584 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 428 ms_handle_reset con 0x5612d3b8c800 session 0x5612d2e034a0
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 428 ms_handle_reset con 0x5612d7ff7000 session 0x5612d0614f00
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 556212224 unmapped: 102203392 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 556220416 unmapped: 102195200 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 428 heartbeat osd_stat(store_statfs(0x197748000/0x0/0x1bfc00000, data 0x302640f/0x3256000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 556220416 unmapped: 102195200 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 556965888 unmapped: 101449728 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5668643 data_alloc: 234881024 data_used: 21479424
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 558243840 unmapped: 100171776 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.766633987s of 13.151865959s, submitted: 106
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 428 ms_handle_reset con 0x5612d2823000 session 0x5612dbe0e3c0
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 558243840 unmapped: 100171776 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 428 heartbeat osd_stat(store_statfs(0x197747000/0x0/0x1bfc00000, data 0x3026432/0x3257000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 559112192 unmapped: 99303424 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561168384 unmapped: 97247232 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561168384 unmapped: 97247232 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5851872 data_alloc: 234881024 data_used: 31911936
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 428 ms_handle_reset con 0x5612d31b5400 session 0x5612d2bb5e00
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 428 heartbeat osd_stat(store_statfs(0x197747000/0x0/0x1bfc00000, data 0x3026432/0x3257000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 564133888 unmapped: 94281728 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 428 ms_handle_reset con 0x5612d3b8f000 session 0x5612e17123c0
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561635328 unmapped: 96780288 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561635328 unmapped: 96780288 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 428 ms_handle_reset con 0x5612d7ff9c00 session 0x5612d2e03a40
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 428 ms_handle_reset con 0x5612d2bb2800 session 0x5612d07734a0
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561635328 unmapped: 96780288 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 428 ms_handle_reset con 0x5612db77ec00 session 0x5612d480ad20
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 556949504 unmapped: 101466112 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 428 heartbeat osd_stat(store_statfs(0x197fc4000/0x0/0x1bfc00000, data 0x265637a/0x2883000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5525188 data_alloc: 218103808 data_used: 10665984
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 556949504 unmapped: 101466112 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.054722786s of 10.566476822s, submitted: 208
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 556949504 unmapped: 101466112 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 428 ms_handle_reset con 0x5612dea3d400 session 0x5612d2e023c0
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 428 ms_handle_reset con 0x5612d0be8000 session 0x5612e17134a0
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 428 ms_handle_reset con 0x5612d2bb2800 session 0x5612d0b71e00
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 556982272 unmapped: 101433344 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 556982272 unmapped: 101433344 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 428 heartbeat osd_stat(store_statfs(0x198993000/0x0/0x1bfc00000, data 0x15da36a/0x1806000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 556982272 unmapped: 101433344 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5369644 data_alloc: 218103808 data_used: 7229440
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 556982272 unmapped: 101433344 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 556982272 unmapped: 101433344 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 556982272 unmapped: 101433344 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 428 heartbeat osd_stat(store_statfs(0x198993000/0x0/0x1bfc00000, data 0x15da36a/0x1806000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 556982272 unmapped: 101433344 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 556982272 unmapped: 101433344 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5369644 data_alloc: 218103808 data_used: 7229440
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 428 heartbeat osd_stat(store_statfs(0x198993000/0x0/0x1bfc00000, data 0x15da36a/0x1806000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 556982272 unmapped: 101433344 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 556982272 unmapped: 101433344 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 556982272 unmapped: 101433344 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 556982272 unmapped: 101433344 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 556982272 unmapped: 101433344 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5369644 data_alloc: 218103808 data_used: 7229440
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 556982272 unmapped: 101433344 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 428 heartbeat osd_stat(store_statfs(0x198993000/0x0/0x1bfc00000, data 0x15da36a/0x1806000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 556982272 unmapped: 101433344 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 428 heartbeat osd_stat(store_statfs(0x198993000/0x0/0x1bfc00000, data 0x15da36a/0x1806000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 556982272 unmapped: 101433344 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 556982272 unmapped: 101433344 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 556982272 unmapped: 101433344 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5369644 data_alloc: 218103808 data_used: 7229440
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 556982272 unmapped: 101433344 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 556982272 unmapped: 101433344 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 428 heartbeat osd_stat(store_statfs(0x198993000/0x0/0x1bfc00000, data 0x15da36a/0x1806000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 556982272 unmapped: 101433344 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 556982272 unmapped: 101433344 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 556982272 unmapped: 101433344 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5369644 data_alloc: 218103808 data_used: 7229440
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 556982272 unmapped: 101433344 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 428 heartbeat osd_stat(store_statfs(0x198993000/0x0/0x1bfc00000, data 0x15da36a/0x1806000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 556982272 unmapped: 101433344 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 556982272 unmapped: 101433344 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 556982272 unmapped: 101433344 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 428 heartbeat osd_stat(store_statfs(0x198993000/0x0/0x1bfc00000, data 0x15da36a/0x1806000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 556982272 unmapped: 101433344 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5369644 data_alloc: 218103808 data_used: 7229440
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 428 heartbeat osd_stat(store_statfs(0x198993000/0x0/0x1bfc00000, data 0x15da36a/0x1806000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 556982272 unmapped: 101433344 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 556982272 unmapped: 101433344 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 428 heartbeat osd_stat(store_statfs(0x198993000/0x0/0x1bfc00000, data 0x15da36a/0x1806000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 428 heartbeat osd_stat(store_statfs(0x198993000/0x0/0x1bfc00000, data 0x15da36a/0x1806000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 556982272 unmapped: 101433344 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 556982272 unmapped: 101433344 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 556982272 unmapped: 101433344 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5369644 data_alloc: 218103808 data_used: 7229440
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 556982272 unmapped: 101433344 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 556982272 unmapped: 101433344 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 428 heartbeat osd_stat(store_statfs(0x198993000/0x0/0x1bfc00000, data 0x15da36a/0x1806000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 556982272 unmapped: 101433344 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 556982272 unmapped: 101433344 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 556982272 unmapped: 101433344 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5369644 data_alloc: 218103808 data_used: 7229440
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 556982272 unmapped: 101433344 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 556990464 unmapped: 101425152 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 428 heartbeat osd_stat(store_statfs(0x198993000/0x0/0x1bfc00000, data 0x15da36a/0x1806000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 556990464 unmapped: 101425152 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 556990464 unmapped: 101425152 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 556990464 unmapped: 101425152 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5369644 data_alloc: 218103808 data_used: 7229440
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 556990464 unmapped: 101425152 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 556990464 unmapped: 101425152 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 556990464 unmapped: 101425152 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 428 heartbeat osd_stat(store_statfs(0x198993000/0x0/0x1bfc00000, data 0x15da36a/0x1806000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 556998656 unmapped: 101416960 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 556998656 unmapped: 101416960 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 428 heartbeat osd_stat(store_statfs(0x198993000/0x0/0x1bfc00000, data 0x15da36a/0x1806000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5369644 data_alloc: 218103808 data_used: 7229440
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 556998656 unmapped: 101416960 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 556998656 unmapped: 101416960 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 556998656 unmapped: 101416960 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 428 heartbeat osd_stat(store_statfs(0x198993000/0x0/0x1bfc00000, data 0x15da36a/0x1806000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 556998656 unmapped: 101416960 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 556998656 unmapped: 101416960 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 428 heartbeat osd_stat(store_statfs(0x198993000/0x0/0x1bfc00000, data 0x15da36a/0x1806000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5369644 data_alloc: 218103808 data_used: 7229440
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 556998656 unmapped: 101416960 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 428 heartbeat osd_stat(store_statfs(0x198993000/0x0/0x1bfc00000, data 0x15da36a/0x1806000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 557015040 unmapped: 101400576 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 557015040 unmapped: 101400576 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 557015040 unmapped: 101400576 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 557015040 unmapped: 101400576 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5369644 data_alloc: 218103808 data_used: 7229440
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 557015040 unmapped: 101400576 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 428 ms_handle_reset con 0x5612d87cf000 session 0x5612e17132c0
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 428 ms_handle_reset con 0x5612d87ce800 session 0x5612d29272c0
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 557015040 unmapped: 101400576 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 428 heartbeat osd_stat(store_statfs(0x198993000/0x0/0x1bfc00000, data 0x15da36a/0x1806000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 557015040 unmapped: 101400576 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 428 ms_handle_reset con 0x5612d87ce400 session 0x5612d26463c0
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 557015040 unmapped: 101400576 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 428 heartbeat osd_stat(store_statfs(0x198993000/0x0/0x1bfc00000, data 0x15da36a/0x1806000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 557023232 unmapped: 101392384 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5369644 data_alloc: 218103808 data_used: 7229440
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 428 ms_handle_reset con 0x5612d5b1b800 session 0x5612d3446f00
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 557023232 unmapped: 101392384 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 428 ms_handle_reset con 0x5612d9182c00 session 0x5612d3a1d0e0
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 65.179008484s of 65.303047180s, submitted: 51
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 557195264 unmapped: 101220352 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 428 ms_handle_reset con 0x5612d75cc400 session 0x5612daac30e0
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 428 ms_handle_reset con 0x5612d4428800 session 0x5612d30ca3c0
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 557195264 unmapped: 101220352 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 557195264 unmapped: 101220352 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 428 heartbeat osd_stat(store_statfs(0x199040000/0x0/0x1bfc00000, data 0x173236a/0x195e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 557195264 unmapped: 101220352 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 428 heartbeat osd_stat(store_statfs(0x199040000/0x0/0x1bfc00000, data 0x173236a/0x195e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 428 heartbeat osd_stat(store_statfs(0x199040000/0x0/0x1bfc00000, data 0x173236a/0x195e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5387059 data_alloc: 218103808 data_used: 7229440
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 557211648 unmapped: 101203968 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 428 handle_osd_map epochs [428,429], i have 428, src has [1,429]
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 557211648 unmapped: 101203968 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 429 handle_osd_map epochs [429,430], i have 429, src has [1,430]
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 430 ms_handle_reset con 0x5612ddc3fc00 session 0x5612d480a3c0
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 430 heartbeat osd_stat(store_statfs(0x198c28000/0x0/0x1bfc00000, data 0x1735db8/0x1965000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2566f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 557236224 unmapped: 101179392 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 430 handle_osd_map epochs [430,431], i have 430, src has [1,431]
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 431 ms_handle_reset con 0x5612d7ff9c00 session 0x5612d3a1da40
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 557760512 unmapped: 100655104 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 431 handle_osd_map epochs [431,432], i have 431, src has [1,432]
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 432 ms_handle_reset con 0x5612d2bb2800 session 0x5612d3462f00
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 432 ms_handle_reset con 0x5612ddc40c00 session 0x5612d19930e0
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 432 ms_handle_reset con 0x5612d3b8e000 session 0x5612d34463c0
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 557793280 unmapped: 100622336 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5451274 data_alloc: 218103808 data_used: 7241728
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 557793280 unmapped: 100622336 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 432 ms_handle_reset con 0x5612d2bb2800 session 0x5612d30b7c20
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 557793280 unmapped: 100622336 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 432 ms_handle_reset con 0x5612d2822000 session 0x5612d0a55a40
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.786208153s of 11.065521240s, submitted: 77
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 557727744 unmapped: 100687872 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 432 ms_handle_reset con 0x5612d2bc6000 session 0x5612d2e03860
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 432 handle_osd_map epochs [432,433], i have 432, src has [1,433]
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 433 ms_handle_reset con 0x5612d1986400 session 0x5612d26472c0
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 433 heartbeat osd_stat(store_statfs(0x198229000/0x0/0x1bfc00000, data 0x2132dcd/0x2365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2566f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 433 ms_handle_reset con 0x5612d1b55400 session 0x5612d0614000
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 433 ms_handle_reset con 0x5612db77e400 session 0x5612d34623c0
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 557735936 unmapped: 100679680 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 433 heartbeat osd_stat(store_statfs(0x198225000/0x0/0x1bfc00000, data 0x2134ab5/0x2368000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2566f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 557735936 unmapped: 100679680 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 433 heartbeat osd_stat(store_statfs(0x198225000/0x0/0x1bfc00000, data 0x2134ab5/0x2368000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2566f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5499318 data_alloc: 218103808 data_used: 7254016
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 557752320 unmapped: 100663296 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 433 heartbeat osd_stat(store_statfs(0x198225000/0x0/0x1bfc00000, data 0x2134ab5/0x2368000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2566f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 557760512 unmapped: 100655104 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 433 handle_osd_map epochs [433,434], i have 433, src has [1,434]
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 557760512 unmapped: 100655104 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 557760512 unmapped: 100655104 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 557760512 unmapped: 100655104 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5502292 data_alloc: 218103808 data_used: 7254016
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 557760512 unmapped: 100655104 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 557768704 unmapped: 100646912 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 434 heartbeat osd_stat(store_statfs(0x198222000/0x0/0x1bfc00000, data 0x2136667/0x236b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2566f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 557768704 unmapped: 100646912 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 557768704 unmapped: 100646912 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 557768704 unmapped: 100646912 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5502292 data_alloc: 218103808 data_used: 7254016
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 557768704 unmapped: 100646912 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 434 heartbeat osd_stat(store_statfs(0x198222000/0x0/0x1bfc00000, data 0x2136667/0x236b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2566f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 557768704 unmapped: 100646912 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 434 ms_handle_reset con 0x5612d3695400 session 0x5612d0a63a40
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 557768704 unmapped: 100646912 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 434 ms_handle_reset con 0x5612d2942c00 session 0x5612d18e90e0
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 434 heartbeat osd_stat(store_statfs(0x198222000/0x0/0x1bfc00000, data 0x2136667/0x236b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2566f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 557768704 unmapped: 100646912 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 434 ms_handle_reset con 0x5612ddc3f000 session 0x5612d2e02b40
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 16.438747406s of 16.507854462s, submitted: 32
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 434 ms_handle_reset con 0x5612d8423800 session 0x5612d2647a40
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 557776896 unmapped: 100638720 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5507991 data_alloc: 218103808 data_used: 7254016
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 557416448 unmapped: 100999168 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 557416448 unmapped: 100999168 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 434 heartbeat osd_stat(store_statfs(0x1981fd000/0x0/0x1bfc00000, data 0x215a699/0x2391000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2566f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 557416448 unmapped: 100999168 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 557416448 unmapped: 100999168 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 557416448 unmapped: 100999168 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5545431 data_alloc: 218103808 data_used: 12341248
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 557416448 unmapped: 100999168 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 434 heartbeat osd_stat(store_statfs(0x1981fd000/0x0/0x1bfc00000, data 0x215a699/0x2391000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2566f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 557416448 unmapped: 100999168 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 557416448 unmapped: 100999168 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 557416448 unmapped: 100999168 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 557424640 unmapped: 100990976 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5545431 data_alloc: 218103808 data_used: 12341248
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 557424640 unmapped: 100990976 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 557424640 unmapped: 100990976 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 434 heartbeat osd_stat(store_statfs(0x1981fd000/0x0/0x1bfc00000, data 0x215a699/0x2391000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2566f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.332267761s of 13.356040955s, submitted: 7
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562978816 unmapped: 95436800 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 563298304 unmapped: 95117312 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 563298304 unmapped: 95117312 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 434 heartbeat osd_stat(store_statfs(0x197598000/0x0/0x1bfc00000, data 0x32dd699/0x2ff6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2566f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5700011 data_alloc: 218103808 data_used: 13701120
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 563298304 unmapped: 95117312 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 563298304 unmapped: 95117312 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 563298304 unmapped: 95117312 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 563298304 unmapped: 95117312 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 434 ms_handle_reset con 0x5612d8423800 session 0x5612d0a55e00
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 434 ms_handle_reset con 0x5612d2942c00 session 0x5612d30b6780
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 563298304 unmapped: 95117312 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 434 ms_handle_reset con 0x5612d3695400 session 0x5612d30ca960
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 434 heartbeat osd_stat(store_statfs(0x197576000/0x0/0x1bfc00000, data 0x32ff699/0x3018000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2566f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5689516 data_alloc: 218103808 data_used: 13586432
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 563298304 unmapped: 95117312 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 563298304 unmapped: 95117312 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 563298304 unmapped: 95117312 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.761465073s of 11.035092354s, submitted: 121
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 434 handle_osd_map epochs [434,435], i have 434, src has [1,435]
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 435 ms_handle_reset con 0x5612d0a61400 session 0x5612d3a1c3c0
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 435 heartbeat osd_stat(store_statfs(0x19759b000/0x0/0x1bfc00000, data 0x32db667/0x2ff2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2566f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561831936 unmapped: 96583680 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 435 ms_handle_reset con 0x5612d3694c00 session 0x5612d480bc20
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 435 handle_osd_map epochs [435,436], i have 435, src has [1,436]
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 436 ms_handle_reset con 0x5612d5b1b800 session 0x5612d30df4a0
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 436 heartbeat osd_stat(store_statfs(0x19873e000/0x0/0x1bfc00000, data 0x1c19387/0x1e4f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2566f9c6), peers [0,1] op hist [0,0,1])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 560824320 unmapped: 97591296 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 436 ms_handle_reset con 0x5612d87cc000 session 0x5612d4e83c20
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 436 ms_handle_reset con 0x5612d2942c00 session 0x5612d30b6b40
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 436 ms_handle_reset con 0x5612d0a61400 session 0x5612d4e82d20
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5424508 data_alloc: 218103808 data_used: 7258112
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 560848896 unmapped: 97566720 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 560848896 unmapped: 97566720 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 436 heartbeat osd_stat(store_statfs(0x198d70000/0x0/0x1bfc00000, data 0x15e8ba6/0x181e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2566f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 560848896 unmapped: 97566720 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 436 heartbeat osd_stat(store_statfs(0x198d70000/0x0/0x1bfc00000, data 0x15e8ba6/0x181e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2566f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 560848896 unmapped: 97566720 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 560848896 unmapped: 97566720 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5424668 data_alloc: 218103808 data_used: 7262208
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 560848896 unmapped: 97566720 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 560848896 unmapped: 97566720 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 436 handle_osd_map epochs [436,437], i have 436, src has [1,437]
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 560848896 unmapped: 97566720 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x198d6c000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2566f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 560857088 unmapped: 97558528 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x198d6c000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2566f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 560857088 unmapped: 97558528 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5428842 data_alloc: 218103808 data_used: 7270400
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 560857088 unmapped: 97558528 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 560857088 unmapped: 97558528 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 560857088 unmapped: 97558528 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 560857088 unmapped: 97558528 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x198d6c000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2566f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 560865280 unmapped: 97550336 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5428842 data_alloc: 218103808 data_used: 7270400
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 560865280 unmapped: 97550336 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x198d6c000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2566f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 560873472 unmapped: 97542144 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x198d6c000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2566f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 560873472 unmapped: 97542144 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x198d6c000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2566f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 560873472 unmapped: 97542144 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x198d6c000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2566f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 560873472 unmapped: 97542144 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5428842 data_alloc: 218103808 data_used: 7270400
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 560873472 unmapped: 97542144 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 560873472 unmapped: 97542144 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 560873472 unmapped: 97542144 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x198d6c000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2566f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 560873472 unmapped: 97542144 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 560881664 unmapped: 97533952 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5428842 data_alloc: 218103808 data_used: 7270400
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 560881664 unmapped: 97533952 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x198d6c000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2566f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 560881664 unmapped: 97533952 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 560881664 unmapped: 97533952 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 560881664 unmapped: 97533952 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 560881664 unmapped: 97533952 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x198d6c000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2566f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5428842 data_alloc: 218103808 data_used: 7270400
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 560881664 unmapped: 97533952 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 560881664 unmapped: 97533952 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 560889856 unmapped: 97525760 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 560889856 unmapped: 97525760 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 560889856 unmapped: 97525760 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5428842 data_alloc: 218103808 data_used: 7270400
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 560889856 unmapped: 97525760 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x198d6c000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2566f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 560889856 unmapped: 97525760 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x198d6c000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2566f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 560889856 unmapped: 97525760 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 560889856 unmapped: 97525760 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 560889856 unmapped: 97525760 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5428842 data_alloc: 218103808 data_used: 7270400
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 560898048 unmapped: 97517568 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 560898048 unmapped: 97517568 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x198d6c000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2566f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 560898048 unmapped: 97517568 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 560898048 unmapped: 97517568 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 560898048 unmapped: 97517568 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5428842 data_alloc: 218103808 data_used: 7270400
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 560898048 unmapped: 97517568 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x198d6c000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2566f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x198d6c000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2566f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 560898048 unmapped: 97517568 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 560898048 unmapped: 97517568 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 560906240 unmapped: 97509376 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x198d6c000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2566f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 560906240 unmapped: 97509376 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5428842 data_alloc: 218103808 data_used: 7270400
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 560906240 unmapped: 97509376 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x198d6c000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2566f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 560906240 unmapped: 97509376 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 560906240 unmapped: 97509376 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x198d6c000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2566f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 560906240 unmapped: 97509376 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x198d6c000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2566f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 560906240 unmapped: 97509376 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5428842 data_alloc: 218103808 data_used: 7270400
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 560906240 unmapped: 97509376 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 560922624 unmapped: 97492992 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 560922624 unmapped: 97492992 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x198d6c000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2566f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 560922624 unmapped: 97492992 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 560922624 unmapped: 97492992 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5428842 data_alloc: 218103808 data_used: 7270400
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 560922624 unmapped: 97492992 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 560922624 unmapped: 97492992 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 560922624 unmapped: 97492992 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 560922624 unmapped: 97492992 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x198d6c000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2566f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 560922624 unmapped: 97492992 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x198d6c000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2566f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5428842 data_alloc: 218103808 data_used: 7270400
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 560922624 unmapped: 97492992 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 560922624 unmapped: 97492992 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 560922624 unmapped: 97492992 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x198d6c000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2566f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 560922624 unmapped: 97492992 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x198d6c000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2566f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 560930816 unmapped: 97484800 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5428842 data_alloc: 218103808 data_used: 7270400
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 560930816 unmapped: 97484800 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 560930816 unmapped: 97484800 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 560930816 unmapped: 97484800 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 560930816 unmapped: 97484800 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x198d6c000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2566f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 560930816 unmapped: 97484800 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5428842 data_alloc: 218103808 data_used: 7270400
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 560930816 unmapped: 97484800 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 560939008 unmapped: 97476608 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x198d6c000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2566f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 560939008 unmapped: 97476608 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 560939008 unmapped: 97476608 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 560939008 unmapped: 97476608 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x198d6c000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2566f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x198d6c000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2566f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5428842 data_alloc: 218103808 data_used: 7270400
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 560955392 unmapped: 97460224 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 560955392 unmapped: 97460224 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 560955392 unmapped: 97460224 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 560955392 unmapped: 97460224 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x198d6c000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2566f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 560955392 unmapped: 97460224 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5428842 data_alloc: 218103808 data_used: 7270400
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 560955392 unmapped: 97460224 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 560955392 unmapped: 97460224 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 560955392 unmapped: 97460224 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x198d6c000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2566f9c6), peers [0,1] op hist [])
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 560971776 unmapped: 97443840 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: do_command 'config diff' '{prefix=config diff}'
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: do_command 'config show' '{prefix=config show}'
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: do_command 'counter dump' '{prefix=counter dump}'
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: do_command 'counter schema' '{prefix=counter schema}'
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 560496640 unmapped: 97918976 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5428842 data_alloc: 218103808 data_used: 7270400
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 560553984 unmapped: 97861632 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:29:59 np0005548731 ceph-osd[80147]: do_command 'log dump' '{prefix=log dump}'
Dec  6 03:30:00 np0005548731 nova_compute[232433]: 2025-12-06 08:30:00.108 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:30:00 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Dec  6 03:30:00 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2329781491' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Dec  6 03:30:00 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:30:00 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:30:00 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:30:00.342 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:30:00 np0005548731 ceph-mon[77458]: overall HEALTH_OK
Dec  6 03:30:00 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mgr versions"} v 0) v1
Dec  6 03:30:00 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2986239348' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Dec  6 03:30:00 np0005548731 podman[349998]: 2025-12-06 08:30:00.913820825 +0000 UTC m=+0.064312591 container health_status ae4fd08cdec31d6ae2a6b91ffff7deb6ca5a8375cb52ee4b685cc6f25796824e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Dec  6 03:30:00 np0005548731 podman[349996]: 2025-12-06 08:30:00.928116055 +0000 UTC m=+0.080821634 container health_status 26c2ce9bdb83c6db5ccad7f5b2a7cb4e9354ab0235ad2f942ea42c8feda2142b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true)
Dec  6 03:30:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:30:00.932 143965 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:30:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:30:00.933 143965 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:30:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:30:00.933 143965 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:30:00 np0005548731 podman[349997]: 2025-12-06 08:30:00.937193896 +0000 UTC m=+0.089789273 container health_status a118c3becf56cfbcae208065771ebf10a65162bb7b96389971293e534823fb78 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, config_id=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  6 03:30:01 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Dec  6 03:30:01 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3073358271' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Dec  6 03:30:01 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon stat"} v 0) v1
Dec  6 03:30:01 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/770934710' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Dec  6 03:30:01 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:30:01 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:30:01 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:30:01.927 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:30:02 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:30:02 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:30:02 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:30:02.345 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:30:02 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "node ls"} v 0) v1
Dec  6 03:30:02 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3704065491' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Dec  6 03:30:02 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:30:02 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "osd crush class ls"} v 0) v1
Dec  6 03:30:02 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2894406726' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Dec  6 03:30:03 np0005548731 nova_compute[232433]: 2025-12-06 08:30:03.144 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:30:03 np0005548731 nova_compute[232433]: 2025-12-06 08:30:03.811 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:30:03 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "osd crush dump"} v 0) v1
Dec  6 03:30:03 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1646325394' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Dec  6 03:30:03 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "log last", "channel": "cephadm", "format": "json-pretty"} v 0) v1
Dec  6 03:30:03 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3163222241' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Dec  6 03:30:03 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:30:03 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:30:03 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:30:03.930 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:30:04 np0005548731 nova_compute[232433]: 2025-12-06 08:30:04.105 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:30:04 np0005548731 nova_compute[232433]: 2025-12-06 08:30:04.105 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  6 03:30:04 np0005548731 nova_compute[232433]: 2025-12-06 08:30:04.106 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  6 03:30:04 np0005548731 nova_compute[232433]: 2025-12-06 08:30:04.126 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Dec  6 03:30:04 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "osd crush rule ls"} v 0) v1
Dec  6 03:30:04 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3035065875' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Dec  6 03:30:04 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mgr dump", "format": "json-pretty"} v 0) v1
Dec  6 03:30:04 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1241005051' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Dec  6 03:30:04 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:30:04 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:30:04 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:30:04.348 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:30:04 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "osd crush show-tunables"} v 0) v1
Dec  6 03:30:04 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1800079149' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Dec  6 03:30:04 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mgr metadata", "format": "json-pretty"} v 0) v1
Dec  6 03:30:04 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/841291817' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Dec  6 03:30:04 np0005548731 systemd[1]: Starting Hostname Service...
Dec  6 03:30:04 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "osd crush tree", "show_shadow": true} v 0) v1
Dec  6 03:30:04 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1062355183' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Dec  6 03:30:04 np0005548731 systemd[1]: Started Hostname Service.
Dec  6 03:30:05 np0005548731 nova_compute[232433]: 2025-12-06 08:30:05.105 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:30:05 np0005548731 nova_compute[232433]: 2025-12-06 08:30:05.105 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:30:05 np0005548731 nova_compute[232433]: 2025-12-06 08:30:05.105 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  6 03:30:05 np0005548731 nova_compute[232433]: 2025-12-06 08:30:05.105 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:30:05 np0005548731 nova_compute[232433]: 2025-12-06 08:30:05.105 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Dec  6 03:30:05 np0005548731 nova_compute[232433]: 2025-12-06 08:30:05.110 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:30:05 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mgr module ls", "format": "json-pretty"} v 0) v1
Dec  6 03:30:05 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2820554929' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Dec  6 03:30:05 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "osd erasure-code-profile ls"} v 0) v1
Dec  6 03:30:05 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/552085105' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Dec  6 03:30:05 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mgr services", "format": "json-pretty"} v 0) v1
Dec  6 03:30:05 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3112676870' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Dec  6 03:30:05 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "osd metadata"} v 0) v1
Dec  6 03:30:05 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3515512720' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Dec  6 03:30:05 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:30:05 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:30:05 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:30:05.932 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:30:05 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mgr stat", "format": "json-pretty"} v 0) v1
Dec  6 03:30:05 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/871280326' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Dec  6 03:30:06 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "osd utilization"} v 0) v1
Dec  6 03:30:06 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3375895849' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Dec  6 03:30:06 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:30:06 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:30:06 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:30:06.351 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:30:06 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mgr versions", "format": "json-pretty"} v 0) v1
Dec  6 03:30:06 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3724135901' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Dec  6 03:30:07 np0005548731 nova_compute[232433]: 2025-12-06 08:30:07.152 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:30:07 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:30:07 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "quorum_status"} v 0) v1
Dec  6 03:30:07 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3933450453' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Dec  6 03:30:07 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:30:07 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:30:07 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:30:07.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:30:08 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "versions"} v 0) v1
Dec  6 03:30:08 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1735822335' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Dec  6 03:30:08 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:30:08 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:30:08 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:30:08.353 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:30:08 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "health", "detail": "detail", "format": "json-pretty"} v 0) v1
Dec  6 03:30:08 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/707535975' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Dec  6 03:30:08 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Dec  6 03:30:08 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Dec  6 03:30:08 np0005548731 nova_compute[232433]: 2025-12-06 08:30:08.812 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:30:08 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Dec  6 03:30:08 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Dec  6 03:30:09 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "osd tree", "format": "json-pretty"} v 0) v1
Dec  6 03:30:09 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3596632528' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Dec  6 03:30:09 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Dec  6 03:30:09 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Dec  6 03:30:09 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Dec  6 03:30:09 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Dec  6 03:30:09 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Dec  6 03:30:09 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Dec  6 03:30:09 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Dec  6 03:30:09 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Dec  6 03:30:09 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:30:09 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:30:09 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:30:09.937 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:30:10 np0005548731 nova_compute[232433]: 2025-12-06 08:30:10.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:30:10 np0005548731 nova_compute[232433]: 2025-12-06 08:30:10.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:30:10 np0005548731 nova_compute[232433]: 2025-12-06 08:30:10.105 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:30:10 np0005548731 nova_compute[232433]: 2025-12-06 08:30:10.113 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:30:10 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "config dump"} v 0) v1
Dec  6 03:30:10 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2092924722' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Dec  6 03:30:10 np0005548731 nova_compute[232433]: 2025-12-06 08:30:10.327 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:30:10 np0005548731 nova_compute[232433]: 2025-12-06 08:30:10.328 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:30:10 np0005548731 nova_compute[232433]: 2025-12-06 08:30:10.328 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:30:10 np0005548731 nova_compute[232433]: 2025-12-06 08:30:10.328 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  6 03:30:10 np0005548731 nova_compute[232433]: 2025-12-06 08:30:10.329 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 03:30:10 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:30:10 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:30:10 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:30:10.356 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:30:10 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 03:30:10 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/4294752078' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 03:30:10 np0005548731 nova_compute[232433]: 2025-12-06 08:30:10.789 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.460s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 03:30:10 np0005548731 nova_compute[232433]: 2025-12-06 08:30:10.932 232437 WARNING nova.virt.libvirt.driver [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  6 03:30:10 np0005548731 nova_compute[232433]: 2025-12-06 08:30:10.934 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=3908MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  6 03:30:10 np0005548731 nova_compute[232433]: 2025-12-06 08:30:10.934 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:30:10 np0005548731 nova_compute[232433]: 2025-12-06 08:30:10.935 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:30:11 np0005548731 nova_compute[232433]: 2025-12-06 08:30:11.018 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  6 03:30:11 np0005548731 nova_compute[232433]: 2025-12-06 08:30:11.018 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  6 03:30:11 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "detail": "detail"} v 0) v1
Dec  6 03:30:11 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1527899473' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Dec  6 03:30:11 np0005548731 nova_compute[232433]: 2025-12-06 08:30:11.039 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 03:30:11 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df"} v 0) v1
Dec  6 03:30:11 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1056357027' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Dec  6 03:30:11 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 03:30:11 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1456843590' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 03:30:11 np0005548731 nova_compute[232433]: 2025-12-06 08:30:11.533 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.494s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 03:30:11 np0005548731 nova_compute[232433]: 2025-12-06 08:30:11.539 232437 DEBUG nova.compute.provider_tree [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Inventory has not changed in ProviderTree for provider: 6d00757a-082f-486d-ae84-869a2ba2e6e7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  6 03:30:11 np0005548731 nova_compute[232433]: 2025-12-06 08:30:11.563 232437 DEBUG nova.scheduler.client.report [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Inventory has not changed for provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  6 03:30:11 np0005548731 nova_compute[232433]: 2025-12-06 08:30:11.565 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  6 03:30:11 np0005548731 nova_compute[232433]: 2025-12-06 08:30:11.565 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.630s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:30:11 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "fs dump"} v 0) v1
Dec  6 03:30:11 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2186325669' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
Dec  6 03:30:11 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:30:11 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:30:11 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:30:11.939 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:30:12 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "fs ls"} v 0) v1
Dec  6 03:30:12 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2466723271' entity='client.admin' cmd=[{"prefix": "fs ls"}]: dispatch
Dec  6 03:30:12 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:30:12 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:30:12 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:30:12.358 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:30:12 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:30:13 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mds stat"} v 0) v1
Dec  6 03:30:13 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3681133396' entity='client.admin' cmd=[{"prefix": "mds stat"}]: dispatch
Dec  6 03:30:13 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump"} v 0) v1
Dec  6 03:30:13 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2735476104' entity='client.admin' cmd=[{"prefix": "mon dump"}]: dispatch
Dec  6 03:30:13 np0005548731 nova_compute[232433]: 2025-12-06 08:30:13.855 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:30:13 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:30:13 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:30:13 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:30:13.941 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:30:14 np0005548731 nova_compute[232433]: 2025-12-06 08:30:14.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:30:14 np0005548731 nova_compute[232433]: 2025-12-06 08:30:14.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:30:14 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls"} v 0) v1
Dec  6 03:30:14 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1500227487' entity='client.admin' cmd=[{"prefix": "osd blocklist ls"}]: dispatch
Dec  6 03:30:14 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:30:14 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:30:14 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:30:14.361 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:30:15 np0005548731 nova_compute[232433]: 2025-12-06 08:30:15.114 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:30:15 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "osd dump"} v 0) v1
Dec  6 03:30:15 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2243827790' entity='client.admin' cmd=[{"prefix": "osd dump"}]: dispatch
Dec  6 03:30:15 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "osd numa-status"} v 0) v1
Dec  6 03:30:15 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3844968379' entity='client.admin' cmd=[{"prefix": "osd numa-status"}]: dispatch
Dec  6 03:30:15 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:30:15 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:30:15 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:30:15.943 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:30:16 np0005548731 ovs-appctl[352826]: ovs|00001|daemon_unix|WARN|/var/run/openvswitch/ovs-monitor-ipsec.pid: open: No such file or directory
Dec  6 03:30:16 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:30:16 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:30:16 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:30:16.363 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:30:16 np0005548731 ovs-appctl[352831]: ovs|00001|daemon_unix|WARN|/var/run/openvswitch/ovs-monitor-ipsec.pid: open: No such file or directory
Dec  6 03:30:16 np0005548731 ovs-appctl[352835]: ovs|00001|daemon_unix|WARN|/var/run/openvswitch/ovs-monitor-ipsec.pid: open: No such file or directory
Dec  6 03:30:16 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "osd pool ls", "detail": "detail"} v 0) v1
Dec  6 03:30:16 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2410749592' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail"}]: dispatch
Dec  6 03:30:17 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "osd stat"} v 0) v1
Dec  6 03:30:17 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1891305318' entity='client.admin' cmd=[{"prefix": "osd stat"}]: dispatch
Dec  6 03:30:17 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:30:17 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:30:17 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:30:17 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:30:17.945 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:30:18 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:30:18 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:30:18 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:30:18.366 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:30:18 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "status"} v 0) v1
Dec  6 03:30:18 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2634980621' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Dec  6 03:30:18 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "time-sync-status"} v 0) v1
Dec  6 03:30:18 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/554704831' entity='client.admin' cmd=[{"prefix": "time-sync-status"}]: dispatch
Dec  6 03:30:18 np0005548731 nova_compute[232433]: 2025-12-06 08:30:18.857 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:30:19 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "config dump", "format": "json-pretty"} v 0) v1
Dec  6 03:30:19 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3572935360' entity='client.admin' cmd=[{"prefix": "config dump", "format": "json-pretty"}]: dispatch
Dec  6 03:30:19 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:30:19 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:30:19 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:30:19.947 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:30:20 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "detail": "detail", "format": "json-pretty"} v 0) v1
Dec  6 03:30:20 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1565949019' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail", "format": "json-pretty"}]: dispatch
Dec  6 03:30:20 np0005548731 nova_compute[232433]: 2025-12-06 08:30:20.116 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:30:20 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:30:20 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:30:20 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:30:20.370 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:30:20 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json-pretty"} v 0) v1
Dec  6 03:30:20 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/4171845927' entity='client.admin' cmd=[{"prefix": "df", "format": "json-pretty"}]: dispatch
Dec  6 03:30:21 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:30:21 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:30:21 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:30:21.949 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:30:22 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:30:22 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:30:22 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:30:22.373 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:30:22 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "fs dump", "format": "json-pretty"} v 0) v1
Dec  6 03:30:22 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/370689176' entity='client.admin' cmd=[{"prefix": "fs dump", "format": "json-pretty"}]: dispatch
Dec  6 03:30:22 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:30:22 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "fs ls", "format": "json-pretty"} v 0) v1
Dec  6 03:30:22 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3762760187' entity='client.admin' cmd=[{"prefix": "fs ls", "format": "json-pretty"}]: dispatch
Dec  6 03:30:23 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mds stat", "format": "json-pretty"} v 0) v1
Dec  6 03:30:23 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/958197419' entity='client.admin' cmd=[{"prefix": "mds stat", "format": "json-pretty"}]: dispatch
Dec  6 03:30:23 np0005548731 nova_compute[232433]: 2025-12-06 08:30:23.859 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:30:23 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:30:23 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 03:30:23 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:30:23.952 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 03:30:24 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:30:24 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:30:24 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:30:24.376 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:30:25 np0005548731 nova_compute[232433]: 2025-12-06 08:30:25.118 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:30:25 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json-pretty"} v 0) v1
Dec  6 03:30:25 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3655934544' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json-pretty"}]: dispatch
Dec  6 03:30:25 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:30:25 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:30:25 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:30:25.954 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:30:26 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json-pretty"} v 0) v1
Dec  6 03:30:26 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/328269644' entity='client.admin' cmd=[{"prefix": "osd blocklist ls", "format": "json-pretty"}]: dispatch
Dec  6 03:30:26 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:30:26 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:30:26 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:30:26.377 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:30:27 np0005548731 nova_compute[232433]: 2025-12-06 08:30:27.128 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:30:27 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "osd dump", "format": "json-pretty"} v 0) v1
Dec  6 03:30:27 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2530206311' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json-pretty"}]: dispatch
Dec  6 03:30:27 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:30:27 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "osd numa-status", "format": "json-pretty"} v 0) v1
Dec  6 03:30:27 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3573223227' entity='client.admin' cmd=[{"prefix": "osd numa-status", "format": "json-pretty"}]: dispatch
Dec  6 03:30:27 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:30:27 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:30:27 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:30:27.956 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:30:28 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:30:28 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:30:28 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:30:28.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:30:28 np0005548731 nova_compute[232433]: 2025-12-06 08:30:28.861 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:30:28 np0005548731 virtqemud[232080]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Dec  6 03:30:28 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "osd pool ls", "detail": "detail", "format": "json-pretty"} v 0) v1
Dec  6 03:30:28 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3317472033' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail", "format": "json-pretty"}]: dispatch
Dec  6 03:30:29 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "osd stat", "format": "json-pretty"} v 0) v1
Dec  6 03:30:29 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3249964146' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json-pretty"}]: dispatch
Dec  6 03:30:29 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "osd stat", "format": "json-pretty"} v 0) v1
Dec  6 03:30:29 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/939267410' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json-pretty"}]: dispatch
Dec  6 03:30:29 np0005548731 systemd[1]: Starting Time & Date Service...
Dec  6 03:30:29 np0005548731 systemd[1]: Started Time & Date Service.
Dec  6 03:30:29 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:30:29 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:30:29 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:30:29.958 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:30:30 np0005548731 nova_compute[232433]: 2025-12-06 08:30:30.121 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:30:30 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:30:30 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:30:30 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:30:30.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:30:30 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Dec  6 03:30:30 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3466208961' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Dec  6 03:30:30 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Dec  6 03:30:30 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1655787077' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Dec  6 03:30:30 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "time-sync-status", "format": "json-pretty"} v 0) v1
Dec  6 03:30:30 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/222777010' entity='client.admin' cmd=[{"prefix": "time-sync-status", "format": "json-pretty"}]: dispatch
Dec  6 03:30:31 np0005548731 ceph-osd[80147]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec  6 03:30:31 np0005548731 ceph-osd[80147]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 7202.4 total, 600.0 interval#012Cumulative writes: 81K writes, 323K keys, 81K commit groups, 1.0 writes per commit group, ingest: 0.33 GB, 0.05 MB/s#012Cumulative WAL: 81K writes, 30K syncs, 2.67 writes per sync, written: 0.33 GB, 0.05 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 3194 writes, 11K keys, 3194 commit groups, 1.0 writes per commit group, ingest: 9.73 MB, 0.02 MB/s#012Interval WAL: 3194 writes, 1368 syncs, 2.33 writes per sync, written: 0.01 GB, 0.02 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec  6 03:30:31 np0005548731 podman[355227]: 2025-12-06 08:30:31.658410462 +0000 UTC m=+0.066961016 container health_status ae4fd08cdec31d6ae2a6b91ffff7deb6ca5a8375cb52ee4b685cc6f25796824e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125)
Dec  6 03:30:31 np0005548731 podman[355225]: 2025-12-06 08:30:31.675432507 +0000 UTC m=+0.084440203 container health_status 26c2ce9bdb83c6db5ccad7f5b2a7cb4e9354ab0235ad2f942ea42c8feda2142b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec  6 03:30:31 np0005548731 podman[355226]: 2025-12-06 08:30:31.70301238 +0000 UTC m=+0.110637352 container health_status a118c3becf56cfbcae208065771ebf10a65162bb7b96389971293e534823fb78 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  6 03:30:31 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:30:31 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:30:31 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:30:31.960 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:30:32 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:30:32 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:30:32 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:30:32.384 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:30:32 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:30:33 np0005548731 nova_compute[232433]: 2025-12-06 08:30:33.938 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:30:33 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:30:33 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:30:33 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:30:33.963 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:30:34 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:30:34 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:30:34 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:30:34.386 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:30:34 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 03:30:34 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 03:30:34 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  6 03:30:34 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 03:30:34 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  6 03:30:35 np0005548731 nova_compute[232433]: 2025-12-06 08:30:35.124 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:30:35 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:30:35 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:30:35 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:30:35.964 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:30:36 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:30:36 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:30:36 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:30:36.390 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:30:37 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:30:37 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:30:37 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:30:37 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:30:37.967 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:30:38 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:30:38 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:30:38 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:30:38.393 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:30:38 np0005548731 nova_compute[232433]: 2025-12-06 08:30:38.942 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:30:39 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:30:39 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:30:39 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:30:39.969 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:30:40 np0005548731 nova_compute[232433]: 2025-12-06 08:30:40.127 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:30:40 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:30:40 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:30:40 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:30:40.396 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:30:41 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:30:41 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:30:41 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:30:41.971 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:30:42 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 03:30:42 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 03:30:42 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:30:42 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:30:42 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:30:42.399 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:30:42 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:30:43 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:30:43 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 03:30:43 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:30:43.974 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 03:30:43 np0005548731 nova_compute[232433]: 2025-12-06 08:30:43.976 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:30:44 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:30:44 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:30:44 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:30:44.402 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:30:45 np0005548731 nova_compute[232433]: 2025-12-06 08:30:45.173 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:30:45 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:30:45 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:30:45 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:30:45.979 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:30:46 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:30:46 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:30:46 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:30:46.406 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:30:47 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:30:47 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:30:47 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:30:47 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:30:47.981 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:30:48 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:30:48 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:30:48 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:30:48.409 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:30:49 np0005548731 nova_compute[232433]: 2025-12-06 08:30:49.016 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:30:49 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:30:49 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:30:49 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:30:49.983 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:30:50 np0005548731 nova_compute[232433]: 2025-12-06 08:30:50.209 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:30:50 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:30:50 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:30:50 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:30:50.412 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:30:51 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:30:51 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:30:51 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:30:51.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:30:52 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:30:52 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:30:52 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:30:52.415 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:30:52 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:30:53 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:30:53 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:30:53 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:30:53.988 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:30:54 np0005548731 nova_compute[232433]: 2025-12-06 08:30:54.054 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:30:54 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:30:54 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:30:54 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:30:54.418 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:30:55 np0005548731 nova_compute[232433]: 2025-12-06 08:30:55.269 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:30:55 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:30:55 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:30:55 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:30:55.990 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:30:56 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:30:56 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:30:56 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:30:56.421 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:30:57 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:30:57 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:30:57 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:30:57 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:30:57.992 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:30:58 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:30:58 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:30:58 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:30:58.424 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:30:59 np0005548731 nova_compute[232433]: 2025-12-06 08:30:59.056 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:30:59 np0005548731 systemd[1]: systemd-timedated.service: Deactivated successfully.
Dec  6 03:30:59 np0005548731 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Dec  6 03:30:59 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:30:59 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:30:59 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:30:59.995 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:31:00 np0005548731 nova_compute[232433]: 2025-12-06 08:31:00.271 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:31:00 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:31:00 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:31:00 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:31:00.427 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:31:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:31:00.933 143965 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:31:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:31:00.934 143965 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:31:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:31:00.934 143965 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:31:01 np0005548731 podman[355520]: 2025-12-06 08:31:01.912332478 +0000 UTC m=+0.065644083 container health_status 26c2ce9bdb83c6db5ccad7f5b2a7cb4e9354ab0235ad2f942ea42c8feda2142b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  6 03:31:01 np0005548731 podman[355522]: 2025-12-06 08:31:01.920284212 +0000 UTC m=+0.074060679 container health_status ae4fd08cdec31d6ae2a6b91ffff7deb6ca5a8375cb52ee4b685cc6f25796824e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Dec  6 03:31:01 np0005548731 podman[355521]: 2025-12-06 08:31:01.940387023 +0000 UTC m=+0.093710789 container health_status a118c3becf56cfbcae208065771ebf10a65162bb7b96389971293e534823fb78 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Dec  6 03:31:01 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:31:01 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:31:01 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:31:01.997 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:31:02 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:31:02 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:31:02 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:31:02.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:31:02 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:31:04 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:31:04 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:31:04 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:31:03.999 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:31:04 np0005548731 nova_compute[232433]: 2025-12-06 08:31:04.058 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:31:04 np0005548731 nova_compute[232433]: 2025-12-06 08:31:04.103 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:31:04 np0005548731 nova_compute[232433]: 2025-12-06 08:31:04.104 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  6 03:31:04 np0005548731 nova_compute[232433]: 2025-12-06 08:31:04.104 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  6 03:31:04 np0005548731 nova_compute[232433]: 2025-12-06 08:31:04.119 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Dec  6 03:31:04 np0005548731 nova_compute[232433]: 2025-12-06 08:31:04.119 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:31:04 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:31:04 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:31:04 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:31:04.433 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:31:05 np0005548731 nova_compute[232433]: 2025-12-06 08:31:05.272 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:31:06 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:31:06 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:31:06 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:31:06.001 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:31:06 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:31:06 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:31:06 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:31:06.436 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:31:07 np0005548731 nova_compute[232433]: 2025-12-06 08:31:07.105 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:31:07 np0005548731 nova_compute[232433]: 2025-12-06 08:31:07.105 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:31:07 np0005548731 nova_compute[232433]: 2025-12-06 08:31:07.105 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  6 03:31:07 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:31:08 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:31:08 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:31:08 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:31:08.003 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:31:08 np0005548731 nova_compute[232433]: 2025-12-06 08:31:08.100 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:31:08 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:31:08 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:31:08 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:31:08.438 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:31:09 np0005548731 nova_compute[232433]: 2025-12-06 08:31:09.059 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:31:09 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  6 03:31:09 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2136885558' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  6 03:31:09 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  6 03:31:09 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2136885558' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  6 03:31:10 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:31:10 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 03:31:10 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:31:10.004 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 03:31:10 np0005548731 nova_compute[232433]: 2025-12-06 08:31:10.103 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:31:10 np0005548731 nova_compute[232433]: 2025-12-06 08:31:10.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:31:10 np0005548731 nova_compute[232433]: 2025-12-06 08:31:10.155 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:31:10 np0005548731 nova_compute[232433]: 2025-12-06 08:31:10.155 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:31:10 np0005548731 nova_compute[232433]: 2025-12-06 08:31:10.156 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:31:10 np0005548731 nova_compute[232433]: 2025-12-06 08:31:10.156 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  6 03:31:10 np0005548731 nova_compute[232433]: 2025-12-06 08:31:10.156 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 03:31:10 np0005548731 nova_compute[232433]: 2025-12-06 08:31:10.274 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:31:10 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:31:10 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:31:10 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:31:10.441 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:31:10 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 03:31:10 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3684053451' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 03:31:10 np0005548731 nova_compute[232433]: 2025-12-06 08:31:10.569 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.413s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 03:31:10 np0005548731 nova_compute[232433]: 2025-12-06 08:31:10.725 232437 WARNING nova.virt.libvirt.driver [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  6 03:31:10 np0005548731 nova_compute[232433]: 2025-12-06 08:31:10.727 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=3978MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  6 03:31:10 np0005548731 nova_compute[232433]: 2025-12-06 08:31:10.727 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:31:10 np0005548731 nova_compute[232433]: 2025-12-06 08:31:10.727 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:31:10 np0005548731 nova_compute[232433]: 2025-12-06 08:31:10.801 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  6 03:31:10 np0005548731 nova_compute[232433]: 2025-12-06 08:31:10.802 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  6 03:31:10 np0005548731 nova_compute[232433]: 2025-12-06 08:31:10.814 232437 DEBUG nova.scheduler.client.report [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Refreshing inventories for resource provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Dec  6 03:31:10 np0005548731 nova_compute[232433]: 2025-12-06 08:31:10.830 232437 DEBUG nova.scheduler.client.report [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Updating ProviderTree inventory for provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Dec  6 03:31:10 np0005548731 nova_compute[232433]: 2025-12-06 08:31:10.830 232437 DEBUG nova.compute.provider_tree [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Updating inventory in ProviderTree for provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Dec  6 03:31:10 np0005548731 nova_compute[232433]: 2025-12-06 08:31:10.850 232437 DEBUG nova.scheduler.client.report [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Refreshing aggregate associations for resource provider 6d00757a-082f-486d-ae84-869a2ba2e6e7, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Dec  6 03:31:10 np0005548731 nova_compute[232433]: 2025-12-06 08:31:10.866 232437 DEBUG nova.scheduler.client.report [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Refreshing trait associations for resource provider 6d00757a-082f-486d-ae84-869a2ba2e6e7, traits: COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_STORAGE_BUS_USB,COMPUTE_ACCELERATORS,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_TRUSTED_CERTS,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_SSE,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_SSE2,HW_CPU_X86_SSE41,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_STORAGE_BUS_SATA,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_SECURITY_TPM_1_2,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_VOLUME_EXTEND,COMPUTE_NODE,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_SECURITY_TPM_2_0,COMPUTE_VIOMMU_MODEL_AUTO,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_SSE42,COMPUTE_STORAGE_BUS_FDC,COMPUTE_RESCUE_BFV,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_IMAGE_TYPE_ARI _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Dec  6 03:31:10 np0005548731 nova_compute[232433]: 2025-12-06 08:31:10.884 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 03:31:11 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 03:31:11 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2578691889' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 03:31:11 np0005548731 nova_compute[232433]: 2025-12-06 08:31:11.313 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.428s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 03:31:11 np0005548731 nova_compute[232433]: 2025-12-06 08:31:11.324 232437 DEBUG nova.compute.provider_tree [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Inventory has not changed in ProviderTree for provider: 6d00757a-082f-486d-ae84-869a2ba2e6e7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  6 03:31:11 np0005548731 nova_compute[232433]: 2025-12-06 08:31:11.359 232437 DEBUG nova.scheduler.client.report [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Inventory has not changed for provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  6 03:31:11 np0005548731 nova_compute[232433]: 2025-12-06 08:31:11.361 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  6 03:31:11 np0005548731 nova_compute[232433]: 2025-12-06 08:31:11.362 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.635s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:31:11 np0005548731 systemd[1]: session-73.scope: Deactivated successfully.
Dec  6 03:31:11 np0005548731 systemd[1]: session-73.scope: Consumed 2min 41.570s CPU time, 1.0G memory peak, read 493.3M from disk, written 342.6M to disk.
Dec  6 03:31:11 np0005548731 systemd-logind[794]: Session 73 logged out. Waiting for processes to exit.
Dec  6 03:31:11 np0005548731 systemd-logind[794]: Removed session 73.
Dec  6 03:31:11 np0005548731 systemd-logind[794]: New session 74 of user zuul.
Dec  6 03:31:11 np0005548731 systemd[1]: Started Session 74 of User zuul.
Dec  6 03:31:11 np0005548731 systemd[1]: session-74.scope: Deactivated successfully.
Dec  6 03:31:11 np0005548731 systemd-logind[794]: Session 74 logged out. Waiting for processes to exit.
Dec  6 03:31:11 np0005548731 systemd-logind[794]: Removed session 74.
Dec  6 03:31:12 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:31:12 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:31:12 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:31:12.007 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:31:12 np0005548731 systemd-logind[794]: New session 75 of user zuul.
Dec  6 03:31:12 np0005548731 systemd[1]: Started Session 75 of User zuul.
Dec  6 03:31:12 np0005548731 systemd[1]: session-75.scope: Deactivated successfully.
Dec  6 03:31:12 np0005548731 systemd-logind[794]: Session 75 logged out. Waiting for processes to exit.
Dec  6 03:31:12 np0005548731 systemd-logind[794]: Removed session 75.
Dec  6 03:31:12 np0005548731 nova_compute[232433]: 2025-12-06 08:31:12.362 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:31:12 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:31:12 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:31:12 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:31:12.444 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:31:12 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:31:14 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:31:14 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:31:14 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:31:14.009 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:31:14 np0005548731 nova_compute[232433]: 2025-12-06 08:31:14.062 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:31:14 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:31:14 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:31:14 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:31:14.446 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:31:15 np0005548731 nova_compute[232433]: 2025-12-06 08:31:15.276 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:31:16 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:31:16 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:31:16 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:31:16.014 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:31:16 np0005548731 nova_compute[232433]: 2025-12-06 08:31:16.105 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:31:16 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:31:16 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:31:16 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:31:16.450 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:31:17 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:31:18 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:31:18 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:31:18 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:31:18.016 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:31:18 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:31:18 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:31:18 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:31:18.452 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:31:19 np0005548731 nova_compute[232433]: 2025-12-06 08:31:19.065 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:31:20 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:31:20 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:31:20 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:31:20.018 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:31:20 np0005548731 nova_compute[232433]: 2025-12-06 08:31:20.278 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:31:20 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:31:20 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:31:20 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:31:20.455 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:31:22 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:31:22 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:31:22 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:31:22.020 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:31:22 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:31:22 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:31:22 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:31:22.457 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:31:22 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:31:24 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:31:24 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:31:24 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:31:24.022 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:31:24 np0005548731 nova_compute[232433]: 2025-12-06 08:31:24.116 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:31:24 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:31:24 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:31:24 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:31:24.459 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:31:25 np0005548731 nova_compute[232433]: 2025-12-06 08:31:25.350 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:31:26 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:31:26 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:31:26 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:31:26.024 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:31:26 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:31:26 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:31:26 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:31:26.461 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:31:26 np0005548731 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec  6 03:31:26 np0005548731 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec  6 03:31:27 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:31:28 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:31:28 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:31:28 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:31:28.027 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:31:28 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:31:28 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:31:28 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:31:28.465 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:31:29 np0005548731 nova_compute[232433]: 2025-12-06 08:31:29.118 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:31:30 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:31:30 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:31:30 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:31:30.029 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:31:30 np0005548731 nova_compute[232433]: 2025-12-06 08:31:30.353 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:31:30 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:31:30 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:31:30 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:31:30.468 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:31:32 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:31:32 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:31:32 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:31:32.032 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:31:32 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:31:32 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:31:32 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:31:32.470 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:31:32 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:31:32 np0005548731 podman[355805]: 2025-12-06 08:31:32.911494857 +0000 UTC m=+0.066526465 container health_status ae4fd08cdec31d6ae2a6b91ffff7deb6ca5a8375cb52ee4b685cc6f25796824e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=multipathd, org.label-schema.vendor=CentOS, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Dec  6 03:31:32 np0005548731 podman[355803]: 2025-12-06 08:31:32.934457558 +0000 UTC m=+0.094207471 container health_status 26c2ce9bdb83c6db5ccad7f5b2a7cb4e9354ab0235ad2f942ea42c8feda2142b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Dec  6 03:31:32 np0005548731 podman[355804]: 2025-12-06 08:31:32.941904449 +0000 UTC m=+0.096479426 container health_status a118c3becf56cfbcae208065771ebf10a65162bb7b96389971293e534823fb78 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, container_name=ovn_controller)
Dec  6 03:31:34 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:31:34 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:31:34 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:31:34.035 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:31:34 np0005548731 nova_compute[232433]: 2025-12-06 08:31:34.120 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:31:34 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:31:34 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:31:34 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:31:34.473 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:31:35 np0005548731 nova_compute[232433]: 2025-12-06 08:31:35.357 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:31:36 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:31:36 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:31:36 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:31:36.037 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:31:36 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:31:36 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:31:36 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:31:36.476 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:31:37 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:31:38 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:31:38 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:31:38 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:31:38.039 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:31:38 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:31:38 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:31:38 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:31:38.479 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:31:39 np0005548731 nova_compute[232433]: 2025-12-06 08:31:39.121 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:31:40 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:31:40 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 03:31:40 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:31:40.042 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 03:31:40 np0005548731 nova_compute[232433]: 2025-12-06 08:31:40.360 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:31:40 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:31:40 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:31:40 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:31:40.481 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:31:42 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:31:42 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:31:42 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:31:42.045 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:31:42 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:31:42 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:31:42 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:31:42.485 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:31:42 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:31:44 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:31:44 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:31:44 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:31:44.047 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:31:44 np0005548731 nova_compute[232433]: 2025-12-06 08:31:44.122 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:31:44 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 03:31:44 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 03:31:44 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:31:44 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:31:44 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:31:44.488 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:31:45 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 03:31:45 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 03:31:45 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 03:31:45 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 03:31:45 np0005548731 nova_compute[232433]: 2025-12-06 08:31:45.403 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:31:46 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:31:46 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:31:46 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:31:46.049 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:31:46 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 03:31:46 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 03:31:46 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:31:46 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:31:46 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:31:46.490 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:31:47 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:31:48 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:31:48 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:31:48 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:31:48.051 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:31:48 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:31:48 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:31:48 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:31:48.493 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:31:48 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 03:31:48 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 03:31:48 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  6 03:31:48 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 03:31:48 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  6 03:31:49 np0005548731 nova_compute[232433]: 2025-12-06 08:31:49.123 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:31:50 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:31:50 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:31:50 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:31:50.053 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:31:50 np0005548731 nova_compute[232433]: 2025-12-06 08:31:50.404 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:31:50 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:31:50 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:31:50 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:31:50.496 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:31:52 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:31:52 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:31:52 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:31:52.055 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:31:52 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:31:52 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:31:52 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:31:52.498 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:31:52 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:31:54 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:31:54 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:31:54 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:31:54.057 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:31:54 np0005548731 nova_compute[232433]: 2025-12-06 08:31:54.125 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:31:54 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 03:31:54 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 03:31:54 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:31:54 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:31:54 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:31:54.501 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:31:55 np0005548731 nova_compute[232433]: 2025-12-06 08:31:55.407 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:31:56 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:31:56 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:31:56 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:31:56.060 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:31:56 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:31:56 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:31:56 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:31:56.503 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:31:57 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:31:58 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:31:58 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:31:58 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:31:58.061 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:31:58 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:31:58 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:31:58 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:31:58.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:31:59 np0005548731 nova_compute[232433]: 2025-12-06 08:31:59.127 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:32:00 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:32:00 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:32:00 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:32:00.064 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:32:00 np0005548731 nova_compute[232433]: 2025-12-06 08:32:00.410 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:32:00 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:32:00 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:32:00 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:32:00.509 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:32:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:32:00.934 143965 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:32:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:32:00.935 143965 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:32:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:32:00.935 143965 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:32:02 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:32:02 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 03:32:02 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:32:02.066 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 03:32:02 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:32:02 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:32:02 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:32:02.512 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:32:02 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:32:03 np0005548731 podman[356234]: 2025-12-06 08:32:03.897499925 +0000 UTC m=+0.060080248 container health_status ae4fd08cdec31d6ae2a6b91ffff7deb6ca5a8375cb52ee4b685cc6f25796824e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, managed_by=edpm_ansible, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec  6 03:32:03 np0005548731 podman[356232]: 2025-12-06 08:32:03.897499795 +0000 UTC m=+0.059970025 container health_status 26c2ce9bdb83c6db5ccad7f5b2a7cb4e9354ab0235ad2f942ea42c8feda2142b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Dec  6 03:32:03 np0005548731 podman[356233]: 2025-12-06 08:32:03.980749758 +0000 UTC m=+0.141895826 container health_status a118c3becf56cfbcae208065771ebf10a65162bb7b96389971293e534823fb78 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec  6 03:32:04 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:32:04 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:32:04 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:32:04.069 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:32:04 np0005548731 nova_compute[232433]: 2025-12-06 08:32:04.105 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:32:04 np0005548731 nova_compute[232433]: 2025-12-06 08:32:04.129 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:32:04 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:32:04 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:32:04 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:32:04.514 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:32:05 np0005548731 nova_compute[232433]: 2025-12-06 08:32:05.105 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:32:05 np0005548731 nova_compute[232433]: 2025-12-06 08:32:05.106 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  6 03:32:05 np0005548731 nova_compute[232433]: 2025-12-06 08:32:05.106 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  6 03:32:05 np0005548731 nova_compute[232433]: 2025-12-06 08:32:05.148 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Dec  6 03:32:05 np0005548731 nova_compute[232433]: 2025-12-06 08:32:05.463 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:32:06 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:32:06 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:32:06 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:32:06.071 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:32:06 np0005548731 ceph-mon[77458]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #187. Immutable memtables: 0.
Dec  6 03:32:06 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:32:06.191021) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec  6 03:32:06 np0005548731 ceph-mon[77458]: rocksdb: [db/flush_job.cc:856] [default] [JOB 119] Flushing memtable with next log file: 187
Dec  6 03:32:06 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765009926191085, "job": 119, "event": "flush_started", "num_memtables": 1, "num_entries": 2613, "num_deletes": 257, "total_data_size": 5997628, "memory_usage": 6087920, "flush_reason": "Manual Compaction"}
Dec  6 03:32:06 np0005548731 ceph-mon[77458]: rocksdb: [db/flush_job.cc:885] [default] [JOB 119] Level-0 flush table #188: started
Dec  6 03:32:06 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765009926221463, "cf_name": "default", "job": 119, "event": "table_file_creation", "file_number": 188, "file_size": 3884538, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 91197, "largest_seqno": 93805, "table_properties": {"data_size": 3873046, "index_size": 7153, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 3269, "raw_key_size": 28458, "raw_average_key_size": 22, "raw_value_size": 3848520, "raw_average_value_size": 2976, "num_data_blocks": 308, "num_entries": 1293, "num_filter_entries": 1293, "num_deletions": 257, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765009734, "oldest_key_time": 1765009734, "file_creation_time": 1765009926, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bc01f9a1-fda8-4008-aee1-1fa5ff4371a1", "db_session_id": "CWBL8WMDM4D79BU86E9T", "orig_file_number": 188, "seqno_to_time_mapping": "N/A"}}
Dec  6 03:32:06 np0005548731 ceph-mon[77458]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 119] Flush lasted 30534 microseconds, and 7822 cpu microseconds.
Dec  6 03:32:06 np0005548731 ceph-mon[77458]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  6 03:32:06 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:32:06.221527) [db/flush_job.cc:967] [default] [JOB 119] Level-0 flush table #188: 3884538 bytes OK
Dec  6 03:32:06 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:32:06.221584) [db/memtable_list.cc:519] [default] Level-0 commit table #188 started
Dec  6 03:32:06 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:32:06.223805) [db/memtable_list.cc:722] [default] Level-0 commit table #188: memtable #1 done
Dec  6 03:32:06 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:32:06.223821) EVENT_LOG_v1 {"time_micros": 1765009926223815, "job": 119, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec  6 03:32:06 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:32:06.223837) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec  6 03:32:06 np0005548731 ceph-mon[77458]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 119] Try to delete WAL files size 5985176, prev total WAL file size 5985176, number of live WAL files 2.
Dec  6 03:32:06 np0005548731 ceph-mon[77458]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000184.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  6 03:32:06 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:32:06.225364) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0033353334' seq:72057594037927935, type:22 .. '6C6F676D0033373837' seq:0, type:0; will stop at (end)
Dec  6 03:32:06 np0005548731 ceph-mon[77458]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 120] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec  6 03:32:06 np0005548731 ceph-mon[77458]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 119 Base level 0, inputs: [188(3793KB)], [186(12MB)]
Dec  6 03:32:06 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765009926225738, "job": 120, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [188], "files_L6": [186], "score": -1, "input_data_size": 17048975, "oldest_snapshot_seqno": -1}
Dec  6 03:32:06 np0005548731 ceph-mon[77458]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 120] Generated table #189: 12440 keys, 16895867 bytes, temperature: kUnknown
Dec  6 03:32:06 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765009926346666, "cf_name": "default", "job": 120, "event": "table_file_creation", "file_number": 189, "file_size": 16895867, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 16814792, "index_size": 48898, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 31109, "raw_key_size": 328663, "raw_average_key_size": 26, "raw_value_size": 16596672, "raw_average_value_size": 1334, "num_data_blocks": 1868, "num_entries": 12440, "num_filter_entries": 12440, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765002564, "oldest_key_time": 0, "file_creation_time": 1765009926, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bc01f9a1-fda8-4008-aee1-1fa5ff4371a1", "db_session_id": "CWBL8WMDM4D79BU86E9T", "orig_file_number": 189, "seqno_to_time_mapping": "N/A"}}
Dec  6 03:32:06 np0005548731 ceph-mon[77458]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  6 03:32:06 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:32:06.347381) [db/compaction/compaction_job.cc:1663] [default] [JOB 120] Compacted 1@0 + 1@6 files to L6 => 16895867 bytes
Dec  6 03:32:06 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:32:06.349855) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 140.7 rd, 139.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.7, 12.6 +0.0 blob) out(16.1 +0.0 blob), read-write-amplify(8.7) write-amplify(4.3) OK, records in: 12971, records dropped: 531 output_compression: NoCompression
Dec  6 03:32:06 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:32:06.349888) EVENT_LOG_v1 {"time_micros": 1765009926349873, "job": 120, "event": "compaction_finished", "compaction_time_micros": 121156, "compaction_time_cpu_micros": 36233, "output_level": 6, "num_output_files": 1, "total_output_size": 16895867, "num_input_records": 12971, "num_output_records": 12440, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec  6 03:32:06 np0005548731 ceph-mon[77458]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000188.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  6 03:32:06 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765009926351676, "job": 120, "event": "table_file_deletion", "file_number": 188}
Dec  6 03:32:06 np0005548731 ceph-mon[77458]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000186.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  6 03:32:06 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765009926356758, "job": 120, "event": "table_file_deletion", "file_number": 186}
Dec  6 03:32:06 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:32:06.225230) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 03:32:06 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:32:06.356881) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 03:32:06 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:32:06.356885) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 03:32:06 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:32:06.356886) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 03:32:06 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:32:06.356887) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 03:32:06 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:32:06.356889) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 03:32:06 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:32:06 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:32:06 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:32:06.517 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:32:07 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:32:08 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:32:08 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:32:08 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:32:08.073 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:32:08 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:32:08 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:32:08 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:32:08.520 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:32:09 np0005548731 nova_compute[232433]: 2025-12-06 08:32:09.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:32:09 np0005548731 nova_compute[232433]: 2025-12-06 08:32:09.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:32:09 np0005548731 nova_compute[232433]: 2025-12-06 08:32:09.105 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  6 03:32:09 np0005548731 nova_compute[232433]: 2025-12-06 08:32:09.132 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:32:10 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:32:10 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:32:10 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:32:10.076 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:32:10 np0005548731 nova_compute[232433]: 2025-12-06 08:32:10.100 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:32:10 np0005548731 nova_compute[232433]: 2025-12-06 08:32:10.465 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:32:10 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:32:10 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:32:10 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:32:10.524 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:32:12 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:32:12 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:32:12 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:32:12.078 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:32:12 np0005548731 nova_compute[232433]: 2025-12-06 08:32:12.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:32:12 np0005548731 nova_compute[232433]: 2025-12-06 08:32:12.105 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:32:12 np0005548731 nova_compute[232433]: 2025-12-06 08:32:12.105 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:32:12 np0005548731 nova_compute[232433]: 2025-12-06 08:32:12.163 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:32:12 np0005548731 nova_compute[232433]: 2025-12-06 08:32:12.164 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:32:12 np0005548731 nova_compute[232433]: 2025-12-06 08:32:12.164 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:32:12 np0005548731 nova_compute[232433]: 2025-12-06 08:32:12.164 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  6 03:32:12 np0005548731 nova_compute[232433]: 2025-12-06 08:32:12.165 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 03:32:12 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:32:12 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:32:12 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:32:12.526 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:32:12 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 03:32:12 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2621224397' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 03:32:12 np0005548731 nova_compute[232433]: 2025-12-06 08:32:12.636 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 03:32:12 np0005548731 nova_compute[232433]: 2025-12-06 08:32:12.790 232437 WARNING nova.virt.libvirt.driver [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  6 03:32:12 np0005548731 nova_compute[232433]: 2025-12-06 08:32:12.791 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4096MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, 
"label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  6 03:32:12 np0005548731 nova_compute[232433]: 2025-12-06 08:32:12.791 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:32:12 np0005548731 nova_compute[232433]: 2025-12-06 08:32:12.792 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:32:12 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:32:13 np0005548731 nova_compute[232433]: 2025-12-06 08:32:13.089 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  6 03:32:13 np0005548731 nova_compute[232433]: 2025-12-06 08:32:13.090 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  6 03:32:13 np0005548731 nova_compute[232433]: 2025-12-06 08:32:13.147 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 03:32:13 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 03:32:13 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3688201196' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 03:32:13 np0005548731 nova_compute[232433]: 2025-12-06 08:32:13.572 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.426s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 03:32:13 np0005548731 nova_compute[232433]: 2025-12-06 08:32:13.578 232437 DEBUG nova.compute.provider_tree [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Inventory has not changed in ProviderTree for provider: 6d00757a-082f-486d-ae84-869a2ba2e6e7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  6 03:32:13 np0005548731 nova_compute[232433]: 2025-12-06 08:32:13.643 232437 DEBUG nova.scheduler.client.report [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Inventory has not changed for provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  6 03:32:13 np0005548731 nova_compute[232433]: 2025-12-06 08:32:13.645 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  6 03:32:13 np0005548731 nova_compute[232433]: 2025-12-06 08:32:13.645 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.854s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:32:14 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:32:14 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:32:14 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:32:14.080 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:32:14 np0005548731 nova_compute[232433]: 2025-12-06 08:32:14.133 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:32:14 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:32:14 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:32:14 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:32:14.529 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:32:15 np0005548731 nova_compute[232433]: 2025-12-06 08:32:15.515 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:32:16 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:32:16 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:32:16 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:32:16.083 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:32:16 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:32:16 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:32:16 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:32:16.531 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:32:17 np0005548731 nova_compute[232433]: 2025-12-06 08:32:17.646 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:32:17 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:32:18 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:32:18 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.003000071s ======
Dec  6 03:32:18 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:32:18.086 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.003000071s
Dec  6 03:32:18 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:32:18 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:32:18 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:32:18.534 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:32:18 np0005548731 ceph-mon[77458]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #190. Immutable memtables: 0.
Dec  6 03:32:18 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:32:18.603228) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec  6 03:32:18 np0005548731 ceph-mon[77458]: rocksdb: [db/flush_job.cc:856] [default] [JOB 121] Flushing memtable with next log file: 190
Dec  6 03:32:18 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765009938603301, "job": 121, "event": "flush_started", "num_memtables": 1, "num_entries": 384, "num_deletes": 251, "total_data_size": 367228, "memory_usage": 375896, "flush_reason": "Manual Compaction"}
Dec  6 03:32:18 np0005548731 ceph-mon[77458]: rocksdb: [db/flush_job.cc:885] [default] [JOB 121] Level-0 flush table #191: started
Dec  6 03:32:18 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765009938609402, "cf_name": "default", "job": 121, "event": "table_file_creation", "file_number": 191, "file_size": 241741, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 93810, "largest_seqno": 94189, "table_properties": {"data_size": 239516, "index_size": 388, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 773, "raw_key_size": 5636, "raw_average_key_size": 18, "raw_value_size": 235083, "raw_average_value_size": 778, "num_data_blocks": 17, "num_entries": 302, "num_filter_entries": 302, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765009927, "oldest_key_time": 1765009927, "file_creation_time": 1765009938, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bc01f9a1-fda8-4008-aee1-1fa5ff4371a1", "db_session_id": "CWBL8WMDM4D79BU86E9T", "orig_file_number": 191, "seqno_to_time_mapping": "N/A"}}
Dec  6 03:32:18 np0005548731 ceph-mon[77458]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 121] Flush lasted 6200 microseconds, and 1614 cpu microseconds.
Dec  6 03:32:18 np0005548731 ceph-mon[77458]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  6 03:32:18 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:32:18.609443) [db/flush_job.cc:967] [default] [JOB 121] Level-0 flush table #191: 241741 bytes OK
Dec  6 03:32:18 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:32:18.609461) [db/memtable_list.cc:519] [default] Level-0 commit table #191 started
Dec  6 03:32:18 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:32:18.611156) [db/memtable_list.cc:722] [default] Level-0 commit table #191: memtable #1 done
Dec  6 03:32:18 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:32:18.611169) EVENT_LOG_v1 {"time_micros": 1765009938611164, "job": 121, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec  6 03:32:18 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:32:18.611184) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec  6 03:32:18 np0005548731 ceph-mon[77458]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 121] Try to delete WAL files size 364709, prev total WAL file size 364709, number of live WAL files 2.
Dec  6 03:32:18 np0005548731 ceph-mon[77458]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000187.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  6 03:32:18 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:32:18.611621) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730038303332' seq:72057594037927935, type:22 .. '7061786F730038323834' seq:0, type:0; will stop at (end)
Dec  6 03:32:18 np0005548731 ceph-mon[77458]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 122] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec  6 03:32:18 np0005548731 ceph-mon[77458]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 121 Base level 0, inputs: [191(236KB)], [189(16MB)]
Dec  6 03:32:18 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765009938611683, "job": 122, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [191], "files_L6": [189], "score": -1, "input_data_size": 17137608, "oldest_snapshot_seqno": -1}
Dec  6 03:32:18 np0005548731 ceph-mon[77458]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 122] Generated table #192: 12232 keys, 15213624 bytes, temperature: kUnknown
Dec  6 03:32:18 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765009938749662, "cf_name": "default", "job": 122, "event": "table_file_creation", "file_number": 192, "file_size": 15213624, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 15135375, "index_size": 46584, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 30597, "raw_key_size": 325045, "raw_average_key_size": 26, "raw_value_size": 14922287, "raw_average_value_size": 1219, "num_data_blocks": 1764, "num_entries": 12232, "num_filter_entries": 12232, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765002564, "oldest_key_time": 0, "file_creation_time": 1765009938, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bc01f9a1-fda8-4008-aee1-1fa5ff4371a1", "db_session_id": "CWBL8WMDM4D79BU86E9T", "orig_file_number": 192, "seqno_to_time_mapping": "N/A"}}
Dec  6 03:32:18 np0005548731 ceph-mon[77458]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  6 03:32:18 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:32:18.749922) [db/compaction/compaction_job.cc:1663] [default] [JOB 122] Compacted 1@0 + 1@6 files to L6 => 15213624 bytes
Dec  6 03:32:18 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:32:18.961664) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 124.2 rd, 110.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.2, 16.1 +0.0 blob) out(14.5 +0.0 blob), read-write-amplify(133.8) write-amplify(62.9) OK, records in: 12742, records dropped: 510 output_compression: NoCompression
Dec  6 03:32:18 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:32:18.961704) EVENT_LOG_v1 {"time_micros": 1765009938961688, "job": 122, "event": "compaction_finished", "compaction_time_micros": 138039, "compaction_time_cpu_micros": 41533, "output_level": 6, "num_output_files": 1, "total_output_size": 15213624, "num_input_records": 12742, "num_output_records": 12232, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec  6 03:32:18 np0005548731 ceph-mon[77458]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000191.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  6 03:32:18 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765009938961930, "job": 122, "event": "table_file_deletion", "file_number": 191}
Dec  6 03:32:18 np0005548731 ceph-mon[77458]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000189.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  6 03:32:18 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765009938965267, "job": 122, "event": "table_file_deletion", "file_number": 189}
Dec  6 03:32:18 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:32:18.611521) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 03:32:18 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:32:18.965423) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 03:32:18 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:32:18.965428) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 03:32:18 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:32:18.965429) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 03:32:18 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:32:18.965431) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 03:32:18 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:32:18.965433) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 03:32:19 np0005548731 nova_compute[232433]: 2025-12-06 08:32:19.135 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:32:20 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:32:20 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:32:20 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:32:20.091 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:32:20 np0005548731 nova_compute[232433]: 2025-12-06 08:32:20.517 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:32:20 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:32:20 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:32:20 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:32:20.537 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:32:22 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:32:22 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:32:22 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:32:22.094 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:32:22 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:32:22 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:32:22 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:32:22.539 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:32:22 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:32:24 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:32:24 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:32:24 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:32:24.097 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:32:24 np0005548731 nova_compute[232433]: 2025-12-06 08:32:24.137 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:32:24 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:32:24 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:32:24 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:32:24.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:32:25 np0005548731 nova_compute[232433]: 2025-12-06 08:32:25.520 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:32:26 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:32:26 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:32:26 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:32:26.100 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:32:26 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:32:26 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:32:26 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:32:26.546 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:32:27 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:32:28 np0005548731 nova_compute[232433]: 2025-12-06 08:32:28.099 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:32:28 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:32:28 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:32:28 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:32:28.103 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:32:28 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:32:28 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:32:28 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:32:28.549 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:32:29 np0005548731 nova_compute[232433]: 2025-12-06 08:32:29.138 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:32:30 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:32:30 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:32:30 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:32:30.105 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:32:30 np0005548731 nova_compute[232433]: 2025-12-06 08:32:30.522 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:32:30 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:32:30 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:32:30 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:32:30.552 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:32:32 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:32:32 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:32:32 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:32:32.107 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:32:32 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:32:32 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:32:32 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:32:32.555 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:32:32 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:32:34 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:32:34 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:32:34 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:32:34.109 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:32:34 np0005548731 nova_compute[232433]: 2025-12-06 08:32:34.139 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:32:34 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:32:34 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:32:34 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:32:34.557 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:32:34 np0005548731 podman[356455]: 2025-12-06 08:32:34.8893526 +0000 UTC m=+0.049780997 container health_status 26c2ce9bdb83c6db5ccad7f5b2a7cb4e9354ab0235ad2f942ea42c8feda2142b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec  6 03:32:34 np0005548731 podman[356460]: 2025-12-06 08:32:34.905386041 +0000 UTC m=+0.055545307 container health_status ae4fd08cdec31d6ae2a6b91ffff7deb6ca5a8375cb52ee4b685cc6f25796824e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.license=GPLv2)
Dec  6 03:32:34 np0005548731 podman[356456]: 2025-12-06 08:32:34.933370224 +0000 UTC m=+0.088979943 container health_status a118c3becf56cfbcae208065771ebf10a65162bb7b96389971293e534823fb78 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Dec  6 03:32:35 np0005548731 nova_compute[232433]: 2025-12-06 08:32:35.524 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:32:36 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:32:36 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:32:36 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:32:36.111 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:32:36 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:32:36 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:32:36 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:32:36.560 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:32:37 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:32:38 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:32:38 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:32:38 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:32:38.114 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:32:38 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:32:38 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:32:38 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:32:38.563 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:32:39 np0005548731 nova_compute[232433]: 2025-12-06 08:32:39.140 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:32:40 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:32:40 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:32:40 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:32:40.116 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:32:40 np0005548731 nova_compute[232433]: 2025-12-06 08:32:40.526 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:32:40 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:32:40 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:32:40 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:32:40.566 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:32:42 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:32:42 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 03:32:42 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:32:42.118 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 03:32:42 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:32:42 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:32:42 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:32:42.569 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:32:42 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:32:44 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:32:44 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:32:44 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:32:44.121 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:32:44 np0005548731 nova_compute[232433]: 2025-12-06 08:32:44.188 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:32:44 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:32:44 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:32:44 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:32:44.572 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:32:45 np0005548731 nova_compute[232433]: 2025-12-06 08:32:45.583 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:32:46 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:32:46 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:32:46 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:32:46.124 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:32:46 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:32:46 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:32:46 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:32:46.575 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:32:47 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:32:48 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:32:48 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:32:48 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:32:48.127 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:32:48 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:32:48 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:32:48 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:32:48.578 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:32:49 np0005548731 nova_compute[232433]: 2025-12-06 08:32:49.189 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:32:50 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:32:50 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:32:50 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:32:50.130 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:32:50 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:32:50 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:32:50 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:32:50.581 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:32:50 np0005548731 nova_compute[232433]: 2025-12-06 08:32:50.585 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:32:52 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:32:52 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:32:52 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:32:52.133 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:32:52 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:32:52 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:32:52 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:32:52.585 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:32:52 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:32:54 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:32:54 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:32:54 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:32:54.136 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:32:54 np0005548731 nova_compute[232433]: 2025-12-06 08:32:54.192 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 03:32:54 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:32:54 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 03:32:54 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:32:54.588 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 03:32:55 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  6 03:32:55 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 03:32:55 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  6 03:32:55 np0005548731 nova_compute[232433]: 2025-12-06 08:32:55.587 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 03:32:56 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:32:56 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:32:56 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:32:56.138 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:32:56 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:32:56 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:32:56 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:32:56.592 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:32:58 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:32:58 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:32:58 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:32:58 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:32:58.141 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:32:58 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:32:58 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 03:32:58 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:32:58.595 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 03:32:59 np0005548731 nova_compute[232433]: 2025-12-06 08:32:59.194 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 03:33:00 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:33:00 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:33:00 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:33:00.143 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:33:00 np0005548731 nova_compute[232433]: 2025-12-06 08:33:00.591 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 03:33:00 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:33:00 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:33:00 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:33:00.598 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:33:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:33:00.935 143965 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  6 03:33:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:33:00.935 143965 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  6 03:33:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:33:00.935 143965 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  6 03:33:02 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:33:02 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:33:02 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:33:02.146 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:33:02 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:33:02 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:33:02 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:33:02.601 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:33:03 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:33:04 np0005548731 nova_compute[232433]: 2025-12-06 08:33:04.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  6 03:33:04 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:33:04 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:33:04 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:33:04.149 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:33:04 np0005548731 nova_compute[232433]: 2025-12-06 08:33:04.196 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 03:33:04 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:33:04 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:33:04 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:33:04.605 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:33:05 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 03:33:05 np0005548731 podman[356737]: 2025-12-06 08:33:05.250863402 +0000 UTC m=+0.067411827 container health_status 26c2ce9bdb83c6db5ccad7f5b2a7cb4e9354ab0235ad2f942ea42c8feda2142b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, 
tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Dec  6 03:33:05 np0005548731 podman[356739]: 2025-12-06 08:33:05.255498776 +0000 UTC m=+0.067116830 container health_status ae4fd08cdec31d6ae2a6b91ffff7deb6ca5a8375cb52ee4b685cc6f25796824e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd)
Dec  6 03:33:05 np0005548731 podman[356738]: 2025-12-06 08:33:05.317605762 +0000 UTC m=+0.133492700 container health_status a118c3becf56cfbcae208065771ebf10a65162bb7b96389971293e534823fb78 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_controller, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  6 03:33:05 np0005548731 nova_compute[232433]: 2025-12-06 08:33:05.592 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 03:33:06 np0005548731 nova_compute[232433]: 2025-12-06 08:33:06.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  6 03:33:06 np0005548731 nova_compute[232433]: 2025-12-06 08:33:06.105 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec  6 03:33:06 np0005548731 nova_compute[232433]: 2025-12-06 08:33:06.105 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec  6 03:33:06 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:33:06 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:33:06 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:33:06.152 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:33:06 np0005548731 nova_compute[232433]: 2025-12-06 08:33:06.425 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec  6 03:33:06 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 03:33:06 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:33:06 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:33:06 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:33:06.607 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:33:08 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:33:08 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:33:08 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:33:08.154 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:33:08 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:33:08 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:33:08 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:33:08 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:33:08.611 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:33:09 np0005548731 nova_compute[232433]: 2025-12-06 08:33:09.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  6 03:33:09 np0005548731 nova_compute[232433]: 2025-12-06 08:33:09.104 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec  6 03:33:09 np0005548731 nova_compute[232433]: 2025-12-06 08:33:09.228 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 03:33:10 np0005548731 nova_compute[232433]: 2025-12-06 08:33:10.105 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  6 03:33:10 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:33:10 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:33:10 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:33:10.157 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:33:10 np0005548731 nova_compute[232433]: 2025-12-06 08:33:10.594 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 03:33:10 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:33:10 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:33:10 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:33:10.614 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:33:12 np0005548731 nova_compute[232433]: 2025-12-06 08:33:12.100 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  6 03:33:12 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:33:12 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:33:12 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:33:12.160 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:33:12 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:33:12 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:33:12 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:33:12.617 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:33:13 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:33:14 np0005548731 nova_compute[232433]: 2025-12-06 08:33:14.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  6 03:33:14 np0005548731 nova_compute[232433]: 2025-12-06 08:33:14.105 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  6 03:33:14 np0005548731 nova_compute[232433]: 2025-12-06 08:33:14.105 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  6 03:33:14 np0005548731 nova_compute[232433]: 2025-12-06 08:33:14.137 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  6 03:33:14 np0005548731 nova_compute[232433]: 2025-12-06 08:33:14.137 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  6 03:33:14 np0005548731 nova_compute[232433]: 2025-12-06 08:33:14.138 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  6 03:33:14 np0005548731 nova_compute[232433]: 2025-12-06 08:33:14.138 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec  6 03:33:14 np0005548731 nova_compute[232433]: 2025-12-06 08:33:14.139 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  6 03:33:14 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:33:14 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:33:14 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:33:14.163 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:33:14 np0005548731 nova_compute[232433]: 2025-12-06 08:33:14.229 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 03:33:14 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 03:33:14 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1660156342' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 03:33:14 np0005548731 nova_compute[232433]: 2025-12-06 08:33:14.561 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.422s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  6 03:33:14 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:33:14 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:33:14 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:33:14.620 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:33:14 np0005548731 nova_compute[232433]: 2025-12-06 08:33:14.757 232437 WARNING nova.virt.libvirt.driver [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec  6 03:33:14 np0005548731 nova_compute[232433]: 2025-12-06 08:33:14.759 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4093MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec  6 03:33:14 np0005548731 nova_compute[232433]: 2025-12-06 08:33:14.759 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  6 03:33:14 np0005548731 nova_compute[232433]: 2025-12-06 08:33:14.759 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  6 03:33:15 np0005548731 nova_compute[232433]: 2025-12-06 08:33:15.597 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 03:33:16 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:33:16 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:33:16 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:33:16.165 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:33:16 np0005548731 nova_compute[232433]: 2025-12-06 08:33:16.563 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec  6 03:33:16 np0005548731 nova_compute[232433]: 2025-12-06 08:33:16.564 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec  6 03:33:16 np0005548731 nova_compute[232433]: 2025-12-06 08:33:16.602 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  6 03:33:16 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:33:16 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 03:33:16 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:33:16.622 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 03:33:17 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 03:33:17 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2460955246' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 03:33:17 np0005548731 nova_compute[232433]: 2025-12-06 08:33:17.049 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.448s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 03:33:17 np0005548731 nova_compute[232433]: 2025-12-06 08:33:17.055 232437 DEBUG nova.compute.provider_tree [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Inventory has not changed in ProviderTree for provider: 6d00757a-082f-486d-ae84-869a2ba2e6e7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  6 03:33:17 np0005548731 nova_compute[232433]: 2025-12-06 08:33:17.085 232437 DEBUG nova.scheduler.client.report [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Inventory has not changed for provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  6 03:33:17 np0005548731 nova_compute[232433]: 2025-12-06 08:33:17.087 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  6 03:33:17 np0005548731 nova_compute[232433]: 2025-12-06 08:33:17.087 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.328s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:33:18 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:33:18 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:33:18 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:33:18.167 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:33:18 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:33:18 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:33:18 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:33:18 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:33:18.626 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:33:19 np0005548731 nova_compute[232433]: 2025-12-06 08:33:19.087 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:33:19 np0005548731 nova_compute[232433]: 2025-12-06 08:33:19.230 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:33:20 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:33:20 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:33:20 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:33:20.169 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:33:20 np0005548731 nova_compute[232433]: 2025-12-06 08:33:20.599 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:33:20 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:33:20 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:33:20 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:33:20.629 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:33:22 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:33:22 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:33:22 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:33:22.171 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:33:22 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:33:22 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:33:22 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:33:22.631 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:33:23 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:33:24 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:33:24 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:33:24 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:33:24.173 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:33:24 np0005548731 nova_compute[232433]: 2025-12-06 08:33:24.231 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:33:24 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:33:24 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:33:24 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:33:24.633 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:33:25 np0005548731 nova_compute[232433]: 2025-12-06 08:33:25.601 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:33:26 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:33:26 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:33:26 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:33:26.175 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:33:26 np0005548731 ceph-mon[77458]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #193. Immutable memtables: 0.
Dec  6 03:33:26 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:33:26.576532) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec  6 03:33:26 np0005548731 ceph-mon[77458]: rocksdb: [db/flush_job.cc:856] [default] [JOB 123] Flushing memtable with next log file: 193
Dec  6 03:33:26 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765010006576634, "job": 123, "event": "flush_started", "num_memtables": 1, "num_entries": 819, "num_deletes": 250, "total_data_size": 1612476, "memory_usage": 1639720, "flush_reason": "Manual Compaction"}
Dec  6 03:33:26 np0005548731 ceph-mon[77458]: rocksdb: [db/flush_job.cc:885] [default] [JOB 123] Level-0 flush table #194: started
Dec  6 03:33:26 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:33:26 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:33:26 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:33:26.638 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:33:27 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765010007400896, "cf_name": "default", "job": 123, "event": "table_file_creation", "file_number": 194, "file_size": 681489, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 94194, "largest_seqno": 95008, "table_properties": {"data_size": 678239, "index_size": 1093, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1093, "raw_key_size": 8768, "raw_average_key_size": 20, "raw_value_size": 671349, "raw_average_value_size": 1590, "num_data_blocks": 49, "num_entries": 422, "num_filter_entries": 422, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765009939, "oldest_key_time": 1765009939, "file_creation_time": 1765010006, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bc01f9a1-fda8-4008-aee1-1fa5ff4371a1", "db_session_id": "CWBL8WMDM4D79BU86E9T", "orig_file_number": 194, "seqno_to_time_mapping": "N/A"}}
Dec  6 03:33:27 np0005548731 ceph-mon[77458]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 123] Flush lasted 824381 microseconds, and 3829 cpu microseconds.
Dec  6 03:33:27 np0005548731 ceph-mon[77458]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  6 03:33:28 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:33:27.400952) [db/flush_job.cc:967] [default] [JOB 123] Level-0 flush table #194: 681489 bytes OK
Dec  6 03:33:28 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:33:27.400978) [db/memtable_list.cc:519] [default] Level-0 commit table #194 started
Dec  6 03:33:28 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:33:28.169313) [db/memtable_list.cc:722] [default] Level-0 commit table #194: memtable #1 done
Dec  6 03:33:28 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:33:28.169410) EVENT_LOG_v1 {"time_micros": 1765010008169390, "job": 123, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec  6 03:33:28 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:33:28.169454) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec  6 03:33:28 np0005548731 ceph-mon[77458]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 123] Try to delete WAL files size 1608257, prev total WAL file size 1654403, number of live WAL files 2.
Dec  6 03:33:28 np0005548731 ceph-mon[77458]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000190.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  6 03:33:28 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:33:28.171149) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740033323739' seq:72057594037927935, type:22 .. '6D6772737461740033353330' seq:0, type:0; will stop at (end)
Dec  6 03:33:28 np0005548731 ceph-mon[77458]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 124] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec  6 03:33:28 np0005548731 ceph-mon[77458]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 123 Base level 0, inputs: [194(665KB)], [192(14MB)]
Dec  6 03:33:28 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765010008171289, "job": 124, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [194], "files_L6": [192], "score": -1, "input_data_size": 15895113, "oldest_snapshot_seqno": -1}
Dec  6 03:33:28 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:33:28 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 03:33:28 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:33:28.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 03:33:28 np0005548731 ceph-mon[77458]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 124] Generated table #195: 12167 keys, 12430865 bytes, temperature: kUnknown
Dec  6 03:33:28 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765010008311923, "cf_name": "default", "job": 124, "event": "table_file_creation", "file_number": 195, "file_size": 12430865, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12357039, "index_size": 42289, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 30469, "raw_key_size": 323909, "raw_average_key_size": 26, "raw_value_size": 12149059, "raw_average_value_size": 998, "num_data_blocks": 1586, "num_entries": 12167, "num_filter_entries": 12167, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765002564, "oldest_key_time": 0, "file_creation_time": 1765010008, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bc01f9a1-fda8-4008-aee1-1fa5ff4371a1", "db_session_id": "CWBL8WMDM4D79BU86E9T", "orig_file_number": 195, "seqno_to_time_mapping": "N/A"}}
Dec  6 03:33:28 np0005548731 ceph-mon[77458]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  6 03:33:28 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:33:28.312230) [db/compaction/compaction_job.cc:1663] [default] [JOB 124] Compacted 1@0 + 1@6 files to L6 => 12430865 bytes
Dec  6 03:33:28 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:33:28.313759) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 113.0 rd, 88.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.6, 14.5 +0.0 blob) out(11.9 +0.0 blob), read-write-amplify(41.6) write-amplify(18.2) OK, records in: 12654, records dropped: 487 output_compression: NoCompression
Dec  6 03:33:28 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:33:28.313795) EVENT_LOG_v1 {"time_micros": 1765010008313780, "job": 124, "event": "compaction_finished", "compaction_time_micros": 140698, "compaction_time_cpu_micros": 65519, "output_level": 6, "num_output_files": 1, "total_output_size": 12430865, "num_input_records": 12654, "num_output_records": 12167, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec  6 03:33:28 np0005548731 ceph-mon[77458]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000194.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  6 03:33:28 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765010008314151, "job": 124, "event": "table_file_deletion", "file_number": 194}
Dec  6 03:33:28 np0005548731 ceph-mon[77458]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000192.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  6 03:33:28 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765010008317816, "job": 124, "event": "table_file_deletion", "file_number": 192}
Dec  6 03:33:28 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:33:28.170992) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 03:33:28 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:33:28.318040) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 03:33:28 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:33:28.318047) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 03:33:28 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:33:28.318049) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 03:33:28 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:33:28.318053) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 03:33:28 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:33:28.318056) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 03:33:28 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:33:28 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:33:28 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:33:28 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:33:28.641 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:33:29 np0005548731 nova_compute[232433]: 2025-12-06 08:33:29.233 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:33:30 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:33:30 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:33:30 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:33:30.180 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:33:30 np0005548731 nova_compute[232433]: 2025-12-06 08:33:30.604 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:33:30 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:33:30 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:33:30 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:33:30.644 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:33:32 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:33:32 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 03:33:32 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:33:32.183 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 03:33:32 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:33:32 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:33:32 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:33:32.647 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:33:33 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:33:34 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:33:34 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 03:33:34 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:33:34.186 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 03:33:34 np0005548731 nova_compute[232433]: 2025-12-06 08:33:34.234 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:33:34 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:33:34 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:33:34 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:33:34.651 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:33:35 np0005548731 nova_compute[232433]: 2025-12-06 08:33:35.607 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:33:35 np0005548731 podman[356985]: 2025-12-06 08:33:35.888957181 +0000 UTC m=+0.053510819 container health_status 26c2ce9bdb83c6db5ccad7f5b2a7cb4e9354ab0235ad2f942ea42c8feda2142b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Dec  6 03:33:35 np0005548731 podman[356987]: 2025-12-06 08:33:35.904153171 +0000 UTC m=+0.062046256 container health_status ae4fd08cdec31d6ae2a6b91ffff7deb6ca5a8375cb52ee4b685cc6f25796824e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=multipathd, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible)
Dec  6 03:33:35 np0005548731 podman[356986]: 2025-12-06 08:33:35.934225506 +0000 UTC m=+0.091540726 container health_status a118c3becf56cfbcae208065771ebf10a65162bb7b96389971293e534823fb78 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec  6 03:33:36 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:33:36 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:33:36 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:33:36.190 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:33:36 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:33:36 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 03:33:36 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:33:36.654 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 03:33:38 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:33:38 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:33:38 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:33:38.194 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:33:38 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:33:38 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:33:38 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:33:38 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:33:38.657 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:33:39 np0005548731 nova_compute[232433]: 2025-12-06 08:33:39.236 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:33:40 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:33:40 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:33:40 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:33:40.197 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:33:40 np0005548731 nova_compute[232433]: 2025-12-06 08:33:40.609 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:33:40 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:33:40 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:33:40 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:33:40.660 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:33:42 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:33:42 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:33:42 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:33:42.199 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:33:42 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:33:42 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:33:42 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:33:42.663 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:33:43 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:33:44 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:33:44 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:33:44 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:33:44.202 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:33:44 np0005548731 nova_compute[232433]: 2025-12-06 08:33:44.238 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:33:44 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:33:44 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:33:44 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:33:44.666 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:33:45 np0005548731 nova_compute[232433]: 2025-12-06 08:33:45.612 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:33:46 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:33:46 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:33:46 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:33:46.204 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:33:46 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:33:46 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:33:46 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:33:46.669 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:33:48 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:33:48 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:33:48 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:33:48.207 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:33:48 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:33:48 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:33:48 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:33:48 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:33:48.672 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:33:49 np0005548731 nova_compute[232433]: 2025-12-06 08:33:49.240 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:33:50 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:33:50 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:33:50 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:33:50.209 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:33:50 np0005548731 nova_compute[232433]: 2025-12-06 08:33:50.613 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:33:50 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:33:50 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:33:50 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:33:50.674 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:33:52 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:33:52 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:33:52 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:33:52.211 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:33:52 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:33:52 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:33:52 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:33:52.678 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:33:54 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:33:54 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:33:54 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:33:54 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:33:54.213 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:33:54 np0005548731 nova_compute[232433]: 2025-12-06 08:33:54.243 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:33:54 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:33:54 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:33:54 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:33:54.681 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:33:55 np0005548731 nova_compute[232433]: 2025-12-06 08:33:55.660 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:33:56 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:33:56 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:33:56 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:33:56.216 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:33:56 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:33:56 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:33:56 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:33:56.683 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:33:58 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:33:58 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:33:58 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:33:58.219 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:33:58 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:33:58 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:33:58 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:33:58.685 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:33:59 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:33:59 np0005548731 nova_compute[232433]: 2025-12-06 08:33:59.244 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:34:00 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:34:00 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:34:00 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:34:00.221 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:34:00 np0005548731 nova_compute[232433]: 2025-12-06 08:34:00.662 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:34:00 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:34:00 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:34:00 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:34:00.688 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:34:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:34:00.936 143965 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:34:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:34:00.936 143965 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:34:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:34:00.937 143965 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:34:02 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:34:02 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:34:02 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:34:02.223 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:34:02 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:34:02 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:34:02 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:34:02.692 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:34:04 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:34:04 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:34:04 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:34:04 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:34:04.225 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:34:04 np0005548731 nova_compute[232433]: 2025-12-06 08:34:04.246 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:34:04 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:34:04 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:34:04 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:34:04.695 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:34:05 np0005548731 nova_compute[232433]: 2025-12-06 08:34:05.105 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:34:05 np0005548731 nova_compute[232433]: 2025-12-06 08:34:05.664 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:34:06 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:34:06 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:34:06 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:34:06.227 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:34:06 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:34:06 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:34:06 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:34:06.698 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:34:06 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 03:34:06 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 03:34:06 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  6 03:34:06 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 03:34:06 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  6 03:34:06 np0005548731 podman[357252]: 2025-12-06 08:34:06.906324967 +0000 UTC m=+0.060666803 container health_status ae4fd08cdec31d6ae2a6b91ffff7deb6ca5a8375cb52ee4b685cc6f25796824e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251125, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec  6 03:34:06 np0005548731 podman[357250]: 2025-12-06 08:34:06.917393487 +0000 UTC m=+0.080606099 container health_status 26c2ce9bdb83c6db5ccad7f5b2a7cb4e9354ab0235ad2f942ea42c8feda2142b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  6 03:34:06 np0005548731 podman[357251]: 2025-12-06 08:34:06.922560242 +0000 UTC m=+0.081559042 container health_status a118c3becf56cfbcae208065771ebf10a65162bb7b96389971293e534823fb78 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Dec  6 03:34:07 np0005548731 nova_compute[232433]: 2025-12-06 08:34:07.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:34:07 np0005548731 nova_compute[232433]: 2025-12-06 08:34:07.105 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  6 03:34:07 np0005548731 nova_compute[232433]: 2025-12-06 08:34:07.105 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  6 03:34:07 np0005548731 nova_compute[232433]: 2025-12-06 08:34:07.133 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Dec  6 03:34:08 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:34:08 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:34:08 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:34:08.228 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:34:08 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:34:08 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:34:08 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:34:08.701 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:34:09 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:34:09 np0005548731 nova_compute[232433]: 2025-12-06 08:34:09.281 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:34:10 np0005548731 nova_compute[232433]: 2025-12-06 08:34:10.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:34:10 np0005548731 nova_compute[232433]: 2025-12-06 08:34:10.105 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:34:10 np0005548731 nova_compute[232433]: 2025-12-06 08:34:10.106 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  6 03:34:10 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:34:10 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:34:10 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:34:10.231 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:34:10 np0005548731 nova_compute[232433]: 2025-12-06 08:34:10.667 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:34:10 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:34:10 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:34:10 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:34:10.704 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:34:12 np0005548731 nova_compute[232433]: 2025-12-06 08:34:12.101 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:34:12 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:34:12 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:34:12 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:34:12.233 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:34:12 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:34:12 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:34:12 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:34:12.706 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:34:14 np0005548731 nova_compute[232433]: 2025-12-06 08:34:14.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:34:14 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 03:34:14 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 03:34:14 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:34:14 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:34:14 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:34:14 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:34:14.236 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:34:14 np0005548731 nova_compute[232433]: 2025-12-06 08:34:14.282 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:34:14 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:34:14 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:34:14 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:34:14.709 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:34:15 np0005548731 nova_compute[232433]: 2025-12-06 08:34:15.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:34:15 np0005548731 nova_compute[232433]: 2025-12-06 08:34:15.669 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:34:16 np0005548731 nova_compute[232433]: 2025-12-06 08:34:16.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:34:16 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:34:16 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:34:16 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:34:16.238 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:34:16 np0005548731 nova_compute[232433]: 2025-12-06 08:34:16.261 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:34:16 np0005548731 nova_compute[232433]: 2025-12-06 08:34:16.261 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:34:16 np0005548731 nova_compute[232433]: 2025-12-06 08:34:16.261 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:34:16 np0005548731 nova_compute[232433]: 2025-12-06 08:34:16.262 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  6 03:34:16 np0005548731 nova_compute[232433]: 2025-12-06 08:34:16.262 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 03:34:16 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:34:16 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:34:16 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:34:16.713 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:34:16 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 03:34:16 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3168349565' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 03:34:16 np0005548731 nova_compute[232433]: 2025-12-06 08:34:16.761 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.499s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 03:34:16 np0005548731 nova_compute[232433]: 2025-12-06 08:34:16.895 232437 WARNING nova.virt.libvirt.driver [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  6 03:34:16 np0005548731 nova_compute[232433]: 2025-12-06 08:34:16.896 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4100MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  6 03:34:16 np0005548731 nova_compute[232433]: 2025-12-06 08:34:16.896 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:34:16 np0005548731 nova_compute[232433]: 2025-12-06 08:34:16.896 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:34:16 np0005548731 nova_compute[232433]: 2025-12-06 08:34:16.960 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  6 03:34:16 np0005548731 nova_compute[232433]: 2025-12-06 08:34:16.961 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  6 03:34:16 np0005548731 nova_compute[232433]: 2025-12-06 08:34:16.981 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 03:34:17 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 03:34:17 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/578781008' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 03:34:17 np0005548731 nova_compute[232433]: 2025-12-06 08:34:17.405 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.423s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 03:34:17 np0005548731 nova_compute[232433]: 2025-12-06 08:34:17.410 232437 DEBUG nova.compute.provider_tree [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Inventory has not changed in ProviderTree for provider: 6d00757a-082f-486d-ae84-869a2ba2e6e7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  6 03:34:17 np0005548731 nova_compute[232433]: 2025-12-06 08:34:17.429 232437 DEBUG nova.scheduler.client.report [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Inventory has not changed for provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  6 03:34:17 np0005548731 nova_compute[232433]: 2025-12-06 08:34:17.430 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  6 03:34:17 np0005548731 nova_compute[232433]: 2025-12-06 08:34:17.431 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.534s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:34:18 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:34:18 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:34:18 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:34:18.241 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:34:18 np0005548731 nova_compute[232433]: 2025-12-06 08:34:18.431 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:34:18 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:34:18 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:34:18 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:34:18.716 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:34:19 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:34:19 np0005548731 nova_compute[232433]: 2025-12-06 08:34:19.283 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:34:20 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:34:20 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:34:20 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:34:20.243 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:34:20 np0005548731 nova_compute[232433]: 2025-12-06 08:34:20.670 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:34:20 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:34:20 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:34:20 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:34:20.719 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:34:22 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:34:22 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:34:22 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:34:22.245 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:34:22 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:34:22 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 03:34:22 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:34:22.722 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 03:34:24 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:34:24 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 03:34:24 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:34:24.246 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 03:34:24 np0005548731 nova_compute[232433]: 2025-12-06 08:34:24.285 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:34:24 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:34:24 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:34:24 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:34:24.725 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:34:24 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:34:25 np0005548731 nova_compute[232433]: 2025-12-06 08:34:25.671 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:34:26 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:34:26 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:34:26 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:34:26.249 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:34:26 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:34:26 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:34:26 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:34:26.729 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:34:28 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:34:28 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:34:28 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:34:28.251 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:34:28 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:34:28 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:34:28 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:34:28.732 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:34:29 np0005548731 nova_compute[232433]: 2025-12-06 08:34:29.287 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:34:29 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:34:30 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:34:30 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:34:30 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:34:30.253 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:34:30 np0005548731 nova_compute[232433]: 2025-12-06 08:34:30.674 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:34:30 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:34:30 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:34:30 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:34:30.735 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:34:32 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:34:32 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:34:32 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:34:32.255 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:34:32 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:34:32 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:34:32 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:34:32.738 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:34:33 np0005548731 nova_compute[232433]: 2025-12-06 08:34:33.099 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:34:34 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:34:34 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:34:34 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:34:34.257 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:34:34 np0005548731 nova_compute[232433]: 2025-12-06 08:34:34.288 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:34:34 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:34:34 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:34:34 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:34:34.741 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:34:34 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:34:35 np0005548731 nova_compute[232433]: 2025-12-06 08:34:35.677 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:34:36 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:34:36 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:34:36 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:34:36.259 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:34:36 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:34:36 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:34:36 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:34:36.744 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:34:37 np0005548731 podman[357526]: 2025-12-06 08:34:37.886254575 +0000 UTC m=+0.053105318 container health_status 26c2ce9bdb83c6db5ccad7f5b2a7cb4e9354ab0235ad2f942ea42c8feda2142b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  6 03:34:37 np0005548731 podman[357528]: 2025-12-06 08:34:37.898188116 +0000 UTC m=+0.059435732 container health_status ae4fd08cdec31d6ae2a6b91ffff7deb6ca5a8375cb52ee4b685cc6f25796824e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS)
Dec  6 03:34:37 np0005548731 podman[357527]: 2025-12-06 08:34:37.91226855 +0000 UTC m=+0.076660103 container health_status a118c3becf56cfbcae208065771ebf10a65162bb7b96389971293e534823fb78 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Dec  6 03:34:38 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:34:38 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:34:38 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:34:38.261 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:34:38 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:34:38 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:34:38 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:34:38.746 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:34:39 np0005548731 nova_compute[232433]: 2025-12-06 08:34:39.290 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:34:39 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:34:40 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:34:40 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:34:40 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:34:40.262 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:34:40 np0005548731 nova_compute[232433]: 2025-12-06 08:34:40.681 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:34:40 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:34:40 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:34:40 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:34:40.749 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:34:42 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:34:42 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:34:42 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:34:42.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:34:42 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:34:42 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:34:42 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:34:42.752 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:34:44 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:34:44 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:34:44 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:34:44.267 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:34:44 np0005548731 nova_compute[232433]: 2025-12-06 08:34:44.291 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:34:44 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:34:44 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:34:44 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:34:44.755 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:34:44 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:34:45 np0005548731 nova_compute[232433]: 2025-12-06 08:34:45.685 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:34:46 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:34:46 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:34:46 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:34:46.269 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:34:46 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:34:46 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:34:46 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:34:46.758 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:34:47 np0005548731 nova_compute[232433]: 2025-12-06 08:34:47.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:34:47 np0005548731 nova_compute[232433]: 2025-12-06 08:34:47.105 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Dec  6 03:34:47 np0005548731 nova_compute[232433]: 2025-12-06 08:34:47.141 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Dec  6 03:34:48 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:34:48 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:34:48 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:34:48.271 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:34:48 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:34:48 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 03:34:48 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:34:48.760 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 03:34:49 np0005548731 nova_compute[232433]: 2025-12-06 08:34:49.293 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:34:50 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:34:50 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:34:50 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:34:50 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:34:50.273 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:34:50 np0005548731 nova_compute[232433]: 2025-12-06 08:34:50.689 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:34:50 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:34:50 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:34:50 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:34:50.764 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:34:52 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:34:52 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:34:52 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:34:52.276 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:34:52 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:34:52 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:34:52 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:34:52.767 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:34:54 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:34:54 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:34:54 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:34:54.278 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:34:54 np0005548731 nova_compute[232433]: 2025-12-06 08:34:54.294 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:34:54 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:34:54 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:34:54 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:34:54.770 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:34:55 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:34:55 np0005548731 nova_compute[232433]: 2025-12-06 08:34:55.691 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:34:56 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:34:56 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:34:56 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:34:56.280 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:34:56 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:34:56 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:34:56 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:34:56.774 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:34:58 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:34:58 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:34:58 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:34:58.283 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:34:58 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:34:58 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:34:58 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:34:58.778 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:34:59 np0005548731 nova_compute[232433]: 2025-12-06 08:34:59.296 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:35:00 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:35:00 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:35:00 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:35:00 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:35:00.286 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:35:00 np0005548731 nova_compute[232433]: 2025-12-06 08:35:00.695 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:35:00 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:35:00 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:35:00 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:35:00.781 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:35:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:35:00.937 143965 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:35:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:35:00.938 143965 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:35:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:35:00.938 143965 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:35:02 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:35:02 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:35:02 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:35:02.288 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:35:02 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:35:02 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:35:02 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:35:02.785 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:35:04 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:35:04 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:35:04 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:35:04.290 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:35:04 np0005548731 nova_compute[232433]: 2025-12-06 08:35:04.298 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:35:04 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:35:04 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:35:04 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:35:04.788 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:35:05 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:35:05 np0005548731 nova_compute[232433]: 2025-12-06 08:35:05.698 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:35:06 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:35:06 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:35:06 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:35:06.292 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:35:06 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:35:06 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:35:06 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:35:06.791 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:35:07 np0005548731 nova_compute[232433]: 2025-12-06 08:35:07.140 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:35:08 np0005548731 nova_compute[232433]: 2025-12-06 08:35:08.105 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:35:08 np0005548731 nova_compute[232433]: 2025-12-06 08:35:08.105 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  6 03:35:08 np0005548731 nova_compute[232433]: 2025-12-06 08:35:08.105 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  6 03:35:08 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:35:08 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:35:08 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:35:08.294 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:35:08 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:35:08 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 03:35:08 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:35:08.794 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 03:35:08 np0005548731 podman[357656]: 2025-12-06 08:35:08.915491929 +0000 UTC m=+0.070889526 container health_status ae4fd08cdec31d6ae2a6b91ffff7deb6ca5a8375cb52ee4b685cc6f25796824e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_id=multipathd)
Dec  6 03:35:08 np0005548731 nova_compute[232433]: 2025-12-06 08:35:08.918 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Dec  6 03:35:08 np0005548731 podman[357654]: 2025-12-06 08:35:08.934122813 +0000 UTC m=+0.091165070 container health_status 26c2ce9bdb83c6db5ccad7f5b2a7cb4e9354ab0235ad2f942ea42c8feda2142b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Dec  6 03:35:08 np0005548731 podman[357655]: 2025-12-06 08:35:08.946517925 +0000 UTC m=+0.103815289 container health_status a118c3becf56cfbcae208065771ebf10a65162bb7b96389971293e534823fb78 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_id=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  6 03:35:09 np0005548731 nova_compute[232433]: 2025-12-06 08:35:09.299 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:35:09 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  6 03:35:09 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2342233228' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  6 03:35:09 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  6 03:35:09 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2342233228' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  6 03:35:10 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:35:10 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:35:10 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:35:10 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:35:10.297 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:35:10 np0005548731 nova_compute[232433]: 2025-12-06 08:35:10.700 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:35:10 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:35:10 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:35:10 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:35:10.797 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:35:11 np0005548731 nova_compute[232433]: 2025-12-06 08:35:11.103 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:35:11 np0005548731 nova_compute[232433]: 2025-12-06 08:35:11.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:35:11 np0005548731 nova_compute[232433]: 2025-12-06 08:35:11.104 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  6 03:35:12 np0005548731 nova_compute[232433]: 2025-12-06 08:35:12.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:35:12 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:35:12 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 03:35:12 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:35:12.299 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 03:35:12 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:35:12 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:35:12 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:35:12.800 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:35:14 np0005548731 nova_compute[232433]: 2025-12-06 08:35:14.105 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:35:14 np0005548731 nova_compute[232433]: 2025-12-06 08:35:14.105 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Dec  6 03:35:14 np0005548731 nova_compute[232433]: 2025-12-06 08:35:14.301 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:35:14 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:35:14 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:35:14 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:35:14.302 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:35:14 np0005548731 nova_compute[232433]: 2025-12-06 08:35:14.597 232437 DEBUG oslo_concurrency.processutils [None req-973963d8-dc59-4fec-ad5a-82e5018c1182 05522e78304c4c4eb4be044936c2fa3e 5ed95c9b17ee4dcb83395850789304e6 - - default default] Running cmd (subprocess): env LANG=C uptime execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 03:35:14 np0005548731 nova_compute[232433]: 2025-12-06 08:35:14.638 232437 DEBUG oslo_concurrency.processutils [None req-973963d8-dc59-4fec-ad5a-82e5018c1182 05522e78304c4c4eb4be044936c2fa3e 5ed95c9b17ee4dcb83395850789304e6 - - default default] CMD "env LANG=C uptime" returned: 0 in 0.041s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 03:35:14 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:35:14 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:35:14 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:35:14.803 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:35:15 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:35:15 np0005548731 nova_compute[232433]: 2025-12-06 08:35:15.701 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:35:16 np0005548731 nova_compute[232433]: 2025-12-06 08:35:16.207 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:35:16 np0005548731 nova_compute[232433]: 2025-12-06 08:35:16.207 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:35:16 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:35:16 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:35:16 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:35:16.304 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:35:16 np0005548731 podman[358062]: 2025-12-06 08:35:16.471230571 +0000 UTC m=+0.062200525 container exec 541e7276f6038dd900483907d1613bdfcbf99ea3714cf53a920a7189986fab54 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-mon-compute-2, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  6 03:35:16 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Dec  6 03:35:16 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Dec  6 03:35:16 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 03:35:16 np0005548731 podman[358062]: 2025-12-06 08:35:16.569946183 +0000 UTC m=+0.160916107 container exec_died 541e7276f6038dd900483907d1613bdfcbf99ea3714cf53a920a7189986fab54 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-mon-compute-2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  6 03:35:16 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:35:16 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:35:16 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:35:16.806 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:35:17 np0005548731 podman[358215]: 2025-12-06 08:35:17.288857509 +0000 UTC m=+0.077157708 container exec 42755d5ddab81b86d84b40c9464a8af637e2d196c1a098a61593efb6a9875167 (image=quay.io/ceph/haproxy:2.3, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-haproxy-rgw-default-compute-2-nyemkw)
Dec  6 03:35:17 np0005548731 podman[358215]: 2025-12-06 08:35:17.302993993 +0000 UTC m=+0.091294112 container exec_died 42755d5ddab81b86d84b40c9464a8af637e2d196c1a098a61593efb6a9875167 (image=quay.io/ceph/haproxy:2.3, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-haproxy-rgw-default-compute-2-nyemkw)
Dec  6 03:35:17 np0005548731 podman[358278]: 2025-12-06 08:35:17.556053532 +0000 UTC m=+0.077950228 container exec 60f905acca99db805341691264f7f91f9f4f43c91aa728d21396595dfca7cf39 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-keepalived-rgw-default-compute-2-cossgt, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, release=1793, architecture=x86_64, vendor=Red Hat, Inc., summary=Provides keepalived on RHEL 9 for Ceph., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, name=keepalived, vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, com.redhat.component=keepalived-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=keepalived for Ceph, io.openshift.tags=Ceph keepalived, build-date=2023-02-22T09:23:20, io.buildah.version=1.28.2, version=2.2.4, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.k8s.display-name=Keepalived on RHEL 9, distribution-scope=public)
Dec  6 03:35:17 np0005548731 podman[358278]: 2025-12-06 08:35:17.574062731 +0000 UTC m=+0.095959407 container exec_died 60f905acca99db805341691264f7f91f9f4f43c91aa728d21396595dfca7cf39 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-keepalived-rgw-default-compute-2-cossgt, name=keepalived, architecture=x86_64, io.openshift.expose-services=, io.openshift.tags=Ceph keepalived, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.k8s.display-name=Keepalived on RHEL 9, summary=Provides keepalived on RHEL 9 for Ceph., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, version=2.2.4, vendor=Red Hat, Inc., com.redhat.component=keepalived-container, description=keepalived for Ceph, io.buildah.version=1.28.2, release=1793, build-date=2023-02-22T09:23:20, distribution-scope=public)
Dec  6 03:35:17 np0005548731 nova_compute[232433]: 2025-12-06 08:35:17.742 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:35:17 np0005548731 nova_compute[232433]: 2025-12-06 08:35:17.743 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:35:17 np0005548731 nova_compute[232433]: 2025-12-06 08:35:17.743 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:35:17 np0005548731 nova_compute[232433]: 2025-12-06 08:35:17.743 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  6 03:35:17 np0005548731 nova_compute[232433]: 2025-12-06 08:35:17.744 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 03:35:18 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 03:35:18 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3215890770' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 03:35:18 np0005548731 nova_compute[232433]: 2025-12-06 08:35:18.172 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.428s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 03:35:18 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:35:18 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:35:18 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:35:18.306 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:35:18 np0005548731 nova_compute[232433]: 2025-12-06 08:35:18.370 232437 WARNING nova.virt.libvirt.driver [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  6 03:35:18 np0005548731 nova_compute[232433]: 2025-12-06 08:35:18.371 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4071MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, 
"label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  6 03:35:18 np0005548731 nova_compute[232433]: 2025-12-06 08:35:18.372 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:35:18 np0005548731 nova_compute[232433]: 2025-12-06 08:35:18.372 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:35:18 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 03:35:18 np0005548731 nova_compute[232433]: 2025-12-06 08:35:18.733 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  6 03:35:18 np0005548731 nova_compute[232433]: 2025-12-06 08:35:18.733 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  6 03:35:18 np0005548731 nova_compute[232433]: 2025-12-06 08:35:18.805 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 03:35:18 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:35:18 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 03:35:18 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:35:18.808 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 03:35:19 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 03:35:19 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/766304778' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 03:35:19 np0005548731 nova_compute[232433]: 2025-12-06 08:35:19.304 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.499s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 03:35:19 np0005548731 nova_compute[232433]: 2025-12-06 08:35:19.305 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:35:19 np0005548731 nova_compute[232433]: 2025-12-06 08:35:19.312 232437 DEBUG nova.compute.provider_tree [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Inventory has not changed in ProviderTree for provider: 6d00757a-082f-486d-ae84-869a2ba2e6e7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  6 03:35:19 np0005548731 nova_compute[232433]: 2025-12-06 08:35:19.329 232437 DEBUG nova.scheduler.client.report [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Inventory has not changed for provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  6 03:35:19 np0005548731 nova_compute[232433]: 2025-12-06 08:35:19.330 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  6 03:35:19 np0005548731 nova_compute[232433]: 2025-12-06 08:35:19.330 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.958s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:35:19 np0005548731 nova_compute[232433]: 2025-12-06 08:35:19.331 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:35:19 np0005548731 podman[358627]: 2025-12-06 08:35:19.737147172 +0000 UTC m=+0.044965895 container create 91466a7c41d95d3b854d84dd06b6ed0fce3acc040761ef247bc47d7bfb56b9c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_liskov, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  6 03:35:19 np0005548731 systemd[1]: Started libpod-conmon-91466a7c41d95d3b854d84dd06b6ed0fce3acc040761ef247bc47d7bfb56b9c5.scope.
Dec  6 03:35:19 np0005548731 systemd[1]: Started libcrun container.
Dec  6 03:35:19 np0005548731 podman[358627]: 2025-12-06 08:35:19.807763392 +0000 UTC m=+0.115582125 container init 91466a7c41d95d3b854d84dd06b6ed0fce3acc040761ef247bc47d7bfb56b9c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_liskov, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  6 03:35:19 np0005548731 podman[358627]: 2025-12-06 08:35:19.718395837 +0000 UTC m=+0.026214570 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  6 03:35:19 np0005548731 podman[358627]: 2025-12-06 08:35:19.81424359 +0000 UTC m=+0.122062313 container start 91466a7c41d95d3b854d84dd06b6ed0fce3acc040761ef247bc47d7bfb56b9c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_liskov, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  6 03:35:19 np0005548731 podman[358627]: 2025-12-06 08:35:19.817645962 +0000 UTC m=+0.125464685 container attach 91466a7c41d95d3b854d84dd06b6ed0fce3acc040761ef247bc47d7bfb56b9c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_liskov, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Dec  6 03:35:19 np0005548731 suspicious_liskov[358643]: 167 167
Dec  6 03:35:19 np0005548731 systemd[1]: libpod-91466a7c41d95d3b854d84dd06b6ed0fce3acc040761ef247bc47d7bfb56b9c5.scope: Deactivated successfully.
Dec  6 03:35:19 np0005548731 podman[358627]: 2025-12-06 08:35:19.818921523 +0000 UTC m=+0.126740246 container died 91466a7c41d95d3b854d84dd06b6ed0fce3acc040761ef247bc47d7bfb56b9c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_liskov, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  6 03:35:19 np0005548731 systemd[1]: var-lib-containers-storage-overlay-4b5e69a8009c5ada4279bb1177d9c56e6ee0ddb9c482bcd0f821df139e6c38e6-merged.mount: Deactivated successfully.
Dec  6 03:35:19 np0005548731 podman[358627]: 2025-12-06 08:35:19.851963127 +0000 UTC m=+0.159781850 container remove 91466a7c41d95d3b854d84dd06b6ed0fce3acc040761ef247bc47d7bfb56b9c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_liskov, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  6 03:35:19 np0005548731 systemd[1]: libpod-conmon-91466a7c41d95d3b854d84dd06b6ed0fce3acc040761ef247bc47d7bfb56b9c5.scope: Deactivated successfully.
Dec  6 03:35:20 np0005548731 podman[358668]: 2025-12-06 08:35:20.005223977 +0000 UTC m=+0.036122330 container create ae4c2c1f790c81687373aa8d1bf5392dc7d4f6e610699c3e0619682adc165f01 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_matsumoto, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  6 03:35:20 np0005548731 systemd[1]: Started libpod-conmon-ae4c2c1f790c81687373aa8d1bf5392dc7d4f6e610699c3e0619682adc165f01.scope.
Dec  6 03:35:20 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:35:20 np0005548731 systemd[1]: Started libcrun container.
Dec  6 03:35:20 np0005548731 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2eda254f4085feaf1fb8aad30ffc9ad45fe1ea7dd8c472fe4fcf3b390063dba5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  6 03:35:20 np0005548731 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2eda254f4085feaf1fb8aad30ffc9ad45fe1ea7dd8c472fe4fcf3b390063dba5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  6 03:35:20 np0005548731 podman[358668]: 2025-12-06 08:35:19.990390556 +0000 UTC m=+0.021288929 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  6 03:35:20 np0005548731 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2eda254f4085feaf1fb8aad30ffc9ad45fe1ea7dd8c472fe4fcf3b390063dba5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  6 03:35:20 np0005548731 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2eda254f4085feaf1fb8aad30ffc9ad45fe1ea7dd8c472fe4fcf3b390063dba5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  6 03:35:20 np0005548731 podman[358668]: 2025-12-06 08:35:20.103661683 +0000 UTC m=+0.134560066 container init ae4c2c1f790c81687373aa8d1bf5392dc7d4f6e610699c3e0619682adc165f01 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_matsumoto, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  6 03:35:20 np0005548731 podman[358668]: 2025-12-06 08:35:20.117636943 +0000 UTC m=+0.148535296 container start ae4c2c1f790c81687373aa8d1bf5392dc7d4f6e610699c3e0619682adc165f01 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_matsumoto, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Dec  6 03:35:20 np0005548731 podman[358668]: 2025-12-06 08:35:20.121506717 +0000 UTC m=+0.152405090 container attach ae4c2c1f790c81687373aa8d1bf5392dc7d4f6e610699c3e0619682adc165f01 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_matsumoto, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Dec  6 03:35:20 np0005548731 nova_compute[232433]: 2025-12-06 08:35:20.252 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:35:20 np0005548731 nova_compute[232433]: 2025-12-06 08:35:20.253 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:35:20 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:35:20 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 03:35:20 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:35:20.308 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 03:35:20 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 03:35:20 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 03:35:20 np0005548731 nova_compute[232433]: 2025-12-06 08:35:20.703 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:35:20 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:35:20 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 03:35:20 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:35:20.812 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 03:35:21 np0005548731 kind_matsumoto[358684]: [
Dec  6 03:35:21 np0005548731 kind_matsumoto[358684]:    {
Dec  6 03:35:21 np0005548731 kind_matsumoto[358684]:        "available": false,
Dec  6 03:35:21 np0005548731 kind_matsumoto[358684]:        "ceph_device": false,
Dec  6 03:35:21 np0005548731 kind_matsumoto[358684]:        "device_id": "QEMU_DVD-ROM_QM00001",
Dec  6 03:35:21 np0005548731 kind_matsumoto[358684]:        "lsm_data": {},
Dec  6 03:35:21 np0005548731 kind_matsumoto[358684]:        "lvs": [],
Dec  6 03:35:21 np0005548731 kind_matsumoto[358684]:        "path": "/dev/sr0",
Dec  6 03:35:21 np0005548731 kind_matsumoto[358684]:        "rejected_reasons": [
Dec  6 03:35:21 np0005548731 kind_matsumoto[358684]:            "Has a FileSystem",
Dec  6 03:35:21 np0005548731 kind_matsumoto[358684]:            "Insufficient space (<5GB)"
Dec  6 03:35:21 np0005548731 kind_matsumoto[358684]:        ],
Dec  6 03:35:21 np0005548731 kind_matsumoto[358684]:        "sys_api": {
Dec  6 03:35:21 np0005548731 kind_matsumoto[358684]:            "actuators": null,
Dec  6 03:35:21 np0005548731 kind_matsumoto[358684]:            "device_nodes": "sr0",
Dec  6 03:35:21 np0005548731 kind_matsumoto[358684]:            "devname": "sr0",
Dec  6 03:35:21 np0005548731 kind_matsumoto[358684]:            "human_readable_size": "482.00 KB",
Dec  6 03:35:21 np0005548731 kind_matsumoto[358684]:            "id_bus": "ata",
Dec  6 03:35:21 np0005548731 kind_matsumoto[358684]:            "model": "QEMU DVD-ROM",
Dec  6 03:35:21 np0005548731 kind_matsumoto[358684]:            "nr_requests": "2",
Dec  6 03:35:21 np0005548731 kind_matsumoto[358684]:            "parent": "/dev/sr0",
Dec  6 03:35:21 np0005548731 kind_matsumoto[358684]:            "partitions": {},
Dec  6 03:35:21 np0005548731 kind_matsumoto[358684]:            "path": "/dev/sr0",
Dec  6 03:35:21 np0005548731 kind_matsumoto[358684]:            "removable": "1",
Dec  6 03:35:21 np0005548731 kind_matsumoto[358684]:            "rev": "2.5+",
Dec  6 03:35:21 np0005548731 kind_matsumoto[358684]:            "ro": "0",
Dec  6 03:35:21 np0005548731 kind_matsumoto[358684]:            "rotational": "1",
Dec  6 03:35:21 np0005548731 kind_matsumoto[358684]:            "sas_address": "",
Dec  6 03:35:21 np0005548731 kind_matsumoto[358684]:            "sas_device_handle": "",
Dec  6 03:35:21 np0005548731 kind_matsumoto[358684]:            "scheduler_mode": "mq-deadline",
Dec  6 03:35:21 np0005548731 kind_matsumoto[358684]:            "sectors": 0,
Dec  6 03:35:21 np0005548731 kind_matsumoto[358684]:            "sectorsize": "2048",
Dec  6 03:35:21 np0005548731 kind_matsumoto[358684]:            "size": 493568.0,
Dec  6 03:35:21 np0005548731 kind_matsumoto[358684]:            "support_discard": "2048",
Dec  6 03:35:21 np0005548731 kind_matsumoto[358684]:            "type": "disk",
Dec  6 03:35:21 np0005548731 kind_matsumoto[358684]:            "vendor": "QEMU"
Dec  6 03:35:21 np0005548731 kind_matsumoto[358684]:        }
Dec  6 03:35:21 np0005548731 kind_matsumoto[358684]:    }
Dec  6 03:35:21 np0005548731 kind_matsumoto[358684]: ]
Dec  6 03:35:21 np0005548731 systemd[1]: libpod-ae4c2c1f790c81687373aa8d1bf5392dc7d4f6e610699c3e0619682adc165f01.scope: Deactivated successfully.
Dec  6 03:35:21 np0005548731 systemd[1]: libpod-ae4c2c1f790c81687373aa8d1bf5392dc7d4f6e610699c3e0619682adc165f01.scope: Consumed 1.292s CPU time.
Dec  6 03:35:21 np0005548731 conmon[358684]: conmon ae4c2c1f790c81687373 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ae4c2c1f790c81687373aa8d1bf5392dc7d4f6e610699c3e0619682adc165f01.scope/container/memory.events
Dec  6 03:35:21 np0005548731 podman[358668]: 2025-12-06 08:35:21.392581941 +0000 UTC m=+1.423480314 container died ae4c2c1f790c81687373aa8d1bf5392dc7d4f6e610699c3e0619682adc165f01 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_matsumoto, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True)
Dec  6 03:35:21 np0005548731 systemd[1]: var-lib-containers-storage-overlay-2eda254f4085feaf1fb8aad30ffc9ad45fe1ea7dd8c472fe4fcf3b390063dba5-merged.mount: Deactivated successfully.
Dec  6 03:35:21 np0005548731 podman[358668]: 2025-12-06 08:35:21.451978906 +0000 UTC m=+1.482877269 container remove ae4c2c1f790c81687373aa8d1bf5392dc7d4f6e610699c3e0619682adc165f01 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_matsumoto, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Dec  6 03:35:21 np0005548731 systemd[1]: libpod-conmon-ae4c2c1f790c81687373aa8d1bf5392dc7d4f6e610699c3e0619682adc165f01.scope: Deactivated successfully.
Dec  6 03:35:21 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:35:21.581 143965 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=110, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:ec:b3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '32:72:e7:89:e0:7d'}, ipsec=False) old=SB_Global(nb_cfg=109) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  6 03:35:21 np0005548731 nova_compute[232433]: 2025-12-06 08:35:21.582 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:35:21 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:35:21.583 143965 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec  6 03:35:22 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:35:22 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:35:22 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:35:22.310 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:35:22 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:35:22 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:35:22 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:35:22.815 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:35:24 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 03:35:24 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 03:35:24 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  6 03:35:24 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 03:35:24 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  6 03:35:24 np0005548731 nova_compute[232433]: 2025-12-06 08:35:24.305 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:35:24 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:35:24 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:35:24 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:35:24.312 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:35:24 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:35:24 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:35:24 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:35:24.819 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:35:25 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:35:25 np0005548731 nova_compute[232433]: 2025-12-06 08:35:25.743 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:35:26 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:35:26 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:35:26 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:35:26.314 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:35:26 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:35:26 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:35:26 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:35:26.821 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:35:28 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:35:28 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:35:28 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:35:28.316 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:35:28 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:35:28.586 143965 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=9f96b960-b4f2-40bd-ae99-08121f5e8b78, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '110'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  6 03:35:28 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:35:28 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:35:28 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:35:28.824 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:35:29 np0005548731 nova_compute[232433]: 2025-12-06 08:35:29.306 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:35:30 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:35:30 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:35:30 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:35:30 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:35:30.318 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:35:30 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 03:35:30 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 03:35:30 np0005548731 nova_compute[232433]: 2025-12-06 08:35:30.746 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:35:30 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:35:30 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:35:30 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:35:30.827 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:35:32 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:35:32 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:35:32 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:35:32.320 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:35:32 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:35:32 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:35:32 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:35:32.830 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:35:34 np0005548731 nova_compute[232433]: 2025-12-06 08:35:34.307 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:35:34 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:35:34 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:35:34 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:35:34.322 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:35:34 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:35:34 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:35:34 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:35:34.833 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:35:35 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:35:35 np0005548731 nova_compute[232433]: 2025-12-06 08:35:35.821 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:35:36 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:35:36 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:35:36 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:35:36.324 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:35:36 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:35:36 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:35:36 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:35:36.836 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:35:38 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:35:38 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:35:38 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:35:38.326 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:35:38 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:35:38 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:35:38 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:35:38.839 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:35:39 np0005548731 nova_compute[232433]: 2025-12-06 08:35:39.308 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:35:39 np0005548731 podman[359937]: 2025-12-06 08:35:39.892401139 +0000 UTC m=+0.056580799 container health_status 26c2ce9bdb83c6db5ccad7f5b2a7cb4e9354ab0235ad2f942ea42c8feda2142b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec  6 03:35:39 np0005548731 podman[359939]: 2025-12-06 08:35:39.898477066 +0000 UTC m=+0.061162410 container health_status ae4fd08cdec31d6ae2a6b91ffff7deb6ca5a8375cb52ee4b685cc6f25796824e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible)
Dec  6 03:35:39 np0005548731 podman[359938]: 2025-12-06 08:35:39.918268208 +0000 UTC m=+0.082098340 container health_status a118c3becf56cfbcae208065771ebf10a65162bb7b96389971293e534823fb78 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Dec  6 03:35:40 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:35:40 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:35:40 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:35:40 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:35:40.328 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:35:40 np0005548731 nova_compute[232433]: 2025-12-06 08:35:40.823 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:35:40 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:35:40 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:35:40 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:35:40.842 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:35:42 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:35:42 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:35:42 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:35:42.331 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:35:42 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:35:42 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:35:42 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:35:42.846 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:35:43 np0005548731 nova_compute[232433]: 2025-12-06 08:35:43.249 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:35:44 np0005548731 nova_compute[232433]: 2025-12-06 08:35:44.310 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:35:44 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:35:44 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:35:44 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:35:44.333 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:35:44 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:35:44 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:35:44 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:35:44.849 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:35:45 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:35:45 np0005548731 nova_compute[232433]: 2025-12-06 08:35:45.826 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:35:46 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:35:46 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:35:46 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:35:46.335 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:35:46 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:35:46 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:35:46 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:35:46.852 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:35:48 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:35:48 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:35:48 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:35:48.337 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:35:48 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:35:48 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:35:48 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:35:48.855 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:35:49 np0005548731 nova_compute[232433]: 2025-12-06 08:35:49.312 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:35:50 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:35:50 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:35:50 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:35:50 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:35:50.339 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:35:50 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:35:50 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:35:50 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:35:50.858 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:35:50 np0005548731 nova_compute[232433]: 2025-12-06 08:35:50.886 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:35:52 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:35:52 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 03:35:52 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:35:52.341 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 03:35:52 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:35:52 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:35:52 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:35:52.861 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:35:54 np0005548731 nova_compute[232433]: 2025-12-06 08:35:54.314 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:35:54 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:35:54 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:35:54 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:35:54.344 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:35:54 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:35:54 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 03:35:54 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:35:54.864 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 03:35:55 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:35:55 np0005548731 nova_compute[232433]: 2025-12-06 08:35:55.889 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:35:56 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:35:56 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:35:56 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:35:56.347 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:35:56 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:35:56 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:35:56 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:35:56.866 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:35:58 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:35:58 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:35:58 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:35:58.349 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:35:58 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:35:58 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:35:58 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:35:58.869 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:35:59 np0005548731 nova_compute[232433]: 2025-12-06 08:35:59.317 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:36:00 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:36:00 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:36:00 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:36:00 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:36:00.352 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:36:00 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:36:00 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:36:00 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:36:00.872 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:36:00 np0005548731 nova_compute[232433]: 2025-12-06 08:36:00.892 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:36:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:36:00.938 143965 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:36:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:36:00.938 143965 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:36:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:36:00.938 143965 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:36:02 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:36:02 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:36:02 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:36:02.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:36:02 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:36:02 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:36:02 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:36:02.874 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:36:04 np0005548731 nova_compute[232433]: 2025-12-06 08:36:04.318 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:36:04 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:36:04 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:36:04 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:36:04.356 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:36:04 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:36:04 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:36:04 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:36:04.877 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:36:05 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:36:05 np0005548731 nova_compute[232433]: 2025-12-06 08:36:05.895 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:36:06 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:36:06 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:36:06 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:36:06.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:36:06 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:36:06 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:36:06 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:36:06.880 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:36:07 np0005548731 nova_compute[232433]: 2025-12-06 08:36:07.126 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:36:08 np0005548731 nova_compute[232433]: 2025-12-06 08:36:08.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:36:08 np0005548731 nova_compute[232433]: 2025-12-06 08:36:08.105 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  6 03:36:08 np0005548731 nova_compute[232433]: 2025-12-06 08:36:08.105 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  6 03:36:08 np0005548731 nova_compute[232433]: 2025-12-06 08:36:08.190 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Dec  6 03:36:08 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:36:08 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:36:08 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:36:08.361 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:36:08 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:36:08 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:36:08 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:36:08.883 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:36:09 np0005548731 nova_compute[232433]: 2025-12-06 08:36:09.320 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:36:10 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:36:10 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:36:10 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:36:10 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:36:10.363 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:36:10 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:36:10 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:36:10 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:36:10.885 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:36:10 np0005548731 podman[360064]: 2025-12-06 08:36:10.8933835 +0000 UTC m=+0.053532464 container health_status 26c2ce9bdb83c6db5ccad7f5b2a7cb4e9354ab0235ad2f942ea42c8feda2142b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Dec  6 03:36:10 np0005548731 nova_compute[232433]: 2025-12-06 08:36:10.897 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:36:10 np0005548731 podman[360066]: 2025-12-06 08:36:10.903453855 +0000 UTC m=+0.058611828 container health_status ae4fd08cdec31d6ae2a6b91ffff7deb6ca5a8375cb52ee4b685cc6f25796824e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, container_name=multipathd, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Dec  6 03:36:10 np0005548731 podman[360065]: 2025-12-06 08:36:10.918999963 +0000 UTC m=+0.077148769 container health_status a118c3becf56cfbcae208065771ebf10a65162bb7b96389971293e534823fb78 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec  6 03:36:12 np0005548731 nova_compute[232433]: 2025-12-06 08:36:12.103 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:36:12 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:36:12 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:36:12 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:36:12.365 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:36:12 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:36:12 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:36:12 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:36:12.888 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:36:13 np0005548731 nova_compute[232433]: 2025-12-06 08:36:13.105 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:36:13 np0005548731 nova_compute[232433]: 2025-12-06 08:36:13.105 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  6 03:36:13 np0005548731 ceph-mon[77458]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #196. Immutable memtables: 0.
Dec  6 03:36:13 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:36:13.272852) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec  6 03:36:13 np0005548731 ceph-mon[77458]: rocksdb: [db/flush_job.cc:856] [default] [JOB 125] Flushing memtable with next log file: 196
Dec  6 03:36:13 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765010173272891, "job": 125, "event": "flush_started", "num_memtables": 1, "num_entries": 1806, "num_deletes": 251, "total_data_size": 4446365, "memory_usage": 4499760, "flush_reason": "Manual Compaction"}
Dec  6 03:36:13 np0005548731 ceph-mon[77458]: rocksdb: [db/flush_job.cc:885] [default] [JOB 125] Level-0 flush table #197: started
Dec  6 03:36:13 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765010173293098, "cf_name": "default", "job": 125, "event": "table_file_creation", "file_number": 197, "file_size": 2901434, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 95013, "largest_seqno": 96814, "table_properties": {"data_size": 2893790, "index_size": 4586, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1989, "raw_key_size": 15560, "raw_average_key_size": 20, "raw_value_size": 2878691, "raw_average_value_size": 3738, "num_data_blocks": 201, "num_entries": 770, "num_filter_entries": 770, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765010007, "oldest_key_time": 1765010007, "file_creation_time": 1765010173, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bc01f9a1-fda8-4008-aee1-1fa5ff4371a1", "db_session_id": "CWBL8WMDM4D79BU86E9T", "orig_file_number": 197, "seqno_to_time_mapping": "N/A"}}
Dec  6 03:36:13 np0005548731 ceph-mon[77458]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 125] Flush lasted 20343 microseconds, and 6183 cpu microseconds.
Dec  6 03:36:13 np0005548731 ceph-mon[77458]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  6 03:36:13 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:36:13.293173) [db/flush_job.cc:967] [default] [JOB 125] Level-0 flush table #197: 2901434 bytes OK
Dec  6 03:36:13 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:36:13.293214) [db/memtable_list.cc:519] [default] Level-0 commit table #197 started
Dec  6 03:36:13 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:36:13.310416) [db/memtable_list.cc:722] [default] Level-0 commit table #197: memtable #1 done
Dec  6 03:36:13 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:36:13.310440) EVENT_LOG_v1 {"time_micros": 1765010173310433, "job": 125, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec  6 03:36:13 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:36:13.310458) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec  6 03:36:13 np0005548731 ceph-mon[77458]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 125] Try to delete WAL files size 4438272, prev total WAL file size 4438272, number of live WAL files 2.
Dec  6 03:36:13 np0005548731 ceph-mon[77458]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000193.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  6 03:36:13 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:36:13.311511) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730038323833' seq:72057594037927935, type:22 .. '7061786F730038353335' seq:0, type:0; will stop at (end)
Dec  6 03:36:13 np0005548731 ceph-mon[77458]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 126] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec  6 03:36:13 np0005548731 ceph-mon[77458]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 125 Base level 0, inputs: [197(2833KB)], [195(11MB)]
Dec  6 03:36:13 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765010173311599, "job": 126, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [197], "files_L6": [195], "score": -1, "input_data_size": 15332299, "oldest_snapshot_seqno": -1}
Dec  6 03:36:13 np0005548731 ceph-mon[77458]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 126] Generated table #198: 12418 keys, 13372126 bytes, temperature: kUnknown
Dec  6 03:36:13 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765010173632239, "cf_name": "default", "job": 126, "event": "table_file_creation", "file_number": 198, "file_size": 13372126, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13295816, "index_size": 44133, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 31109, "raw_key_size": 329791, "raw_average_key_size": 26, "raw_value_size": 13082690, "raw_average_value_size": 1053, "num_data_blocks": 1660, "num_entries": 12418, "num_filter_entries": 12418, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765002564, "oldest_key_time": 0, "file_creation_time": 1765010173, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bc01f9a1-fda8-4008-aee1-1fa5ff4371a1", "db_session_id": "CWBL8WMDM4D79BU86E9T", "orig_file_number": 198, "seqno_to_time_mapping": "N/A"}}
Dec  6 03:36:13 np0005548731 ceph-mon[77458]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  6 03:36:13 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:36:13.632627) [db/compaction/compaction_job.cc:1663] [default] [JOB 126] Compacted 1@0 + 1@6 files to L6 => 13372126 bytes
Dec  6 03:36:13 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:36:13.636868) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 47.8 rd, 41.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.8, 11.9 +0.0 blob) out(12.8 +0.0 blob), read-write-amplify(9.9) write-amplify(4.6) OK, records in: 12937, records dropped: 519 output_compression: NoCompression
Dec  6 03:36:13 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:36:13.636899) EVENT_LOG_v1 {"time_micros": 1765010173636885, "job": 126, "event": "compaction_finished", "compaction_time_micros": 320765, "compaction_time_cpu_micros": 34013, "output_level": 6, "num_output_files": 1, "total_output_size": 13372126, "num_input_records": 12937, "num_output_records": 12418, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec  6 03:36:13 np0005548731 ceph-mon[77458]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000197.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  6 03:36:13 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765010173638466, "job": 126, "event": "table_file_deletion", "file_number": 197}
Dec  6 03:36:13 np0005548731 ceph-mon[77458]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000195.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  6 03:36:13 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765010173643303, "job": 126, "event": "table_file_deletion", "file_number": 195}
Dec  6 03:36:13 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:36:13.311419) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 03:36:13 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:36:13.643413) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 03:36:13 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:36:13.643421) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 03:36:13 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:36:13.643426) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 03:36:13 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:36:13.643430) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 03:36:13 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:36:13.643434) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 03:36:14 np0005548731 nova_compute[232433]: 2025-12-06 08:36:14.100 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:36:14 np0005548731 nova_compute[232433]: 2025-12-06 08:36:14.322 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:36:14 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:36:14 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:36:14 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:36:14.367 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:36:14 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:36:14 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:36:14 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:36:14.891 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:36:15 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:36:15 np0005548731 nova_compute[232433]: 2025-12-06 08:36:15.899 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:36:16 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:36:16 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:36:16 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:36:16.370 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:36:16 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:36:16 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:36:16 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:36:16.894 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:36:17 np0005548731 nova_compute[232433]: 2025-12-06 08:36:17.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:36:17 np0005548731 nova_compute[232433]: 2025-12-06 08:36:17.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:36:18 np0005548731 nova_compute[232433]: 2025-12-06 08:36:18.103 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:36:18 np0005548731 nova_compute[232433]: 2025-12-06 08:36:18.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:36:18 np0005548731 nova_compute[232433]: 2025-12-06 08:36:18.138 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:36:18 np0005548731 nova_compute[232433]: 2025-12-06 08:36:18.138 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:36:18 np0005548731 nova_compute[232433]: 2025-12-06 08:36:18.138 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:36:18 np0005548731 nova_compute[232433]: 2025-12-06 08:36:18.138 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  6 03:36:18 np0005548731 nova_compute[232433]: 2025-12-06 08:36:18.139 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 03:36:18 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:36:18 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:36:18 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:36:18.372 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:36:18 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 03:36:18 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/4153081065' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 03:36:18 np0005548731 nova_compute[232433]: 2025-12-06 08:36:18.587 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.449s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 03:36:18 np0005548731 nova_compute[232433]: 2025-12-06 08:36:18.736 232437 WARNING nova.virt.libvirt.driver [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  6 03:36:18 np0005548731 nova_compute[232433]: 2025-12-06 08:36:18.737 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4099MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, 
"label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  6 03:36:18 np0005548731 nova_compute[232433]: 2025-12-06 08:36:18.738 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:36:18 np0005548731 nova_compute[232433]: 2025-12-06 08:36:18.738 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:36:18 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:36:18 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:36:18 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:36:18.896 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:36:18 np0005548731 nova_compute[232433]: 2025-12-06 08:36:18.905 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  6 03:36:18 np0005548731 nova_compute[232433]: 2025-12-06 08:36:18.905 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  6 03:36:18 np0005548731 nova_compute[232433]: 2025-12-06 08:36:18.919 232437 DEBUG nova.scheduler.client.report [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Refreshing inventories for resource provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Dec  6 03:36:18 np0005548731 nova_compute[232433]: 2025-12-06 08:36:18.941 232437 DEBUG nova.scheduler.client.report [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Updating ProviderTree inventory for provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Dec  6 03:36:18 np0005548731 nova_compute[232433]: 2025-12-06 08:36:18.942 232437 DEBUG nova.compute.provider_tree [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Updating inventory in ProviderTree for provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Dec  6 03:36:18 np0005548731 nova_compute[232433]: 2025-12-06 08:36:18.957 232437 DEBUG nova.scheduler.client.report [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Refreshing aggregate associations for resource provider 6d00757a-082f-486d-ae84-869a2ba2e6e7, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Dec  6 03:36:18 np0005548731 nova_compute[232433]: 2025-12-06 08:36:18.974 232437 DEBUG nova.scheduler.client.report [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Refreshing trait associations for resource provider 6d00757a-082f-486d-ae84-869a2ba2e6e7, traits: COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_STORAGE_BUS_USB,COMPUTE_ACCELERATORS,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_TRUSTED_CERTS,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_SSE,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_SSE2,HW_CPU_X86_SSE41,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_STORAGE_BUS_SATA,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_SECURITY_TPM_1_2,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_VOLUME_EXTEND,COMPUTE_NODE,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_SECURITY_TPM_2_0,COMPUTE_VIOMMU_MODEL_AUTO,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_SSE42,COMPUTE_STORAGE_BUS_FDC,COMPUTE_RESCUE_BFV,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_IMAGE_TYPE_ARI _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Dec  6 03:36:18 np0005548731 nova_compute[232433]: 2025-12-06 08:36:18.994 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 03:36:19 np0005548731 nova_compute[232433]: 2025-12-06 08:36:19.324 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:36:20 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:36:20 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 03:36:20 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3293839874' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 03:36:20 np0005548731 nova_compute[232433]: 2025-12-06 08:36:20.329 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.335s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 03:36:20 np0005548731 nova_compute[232433]: 2025-12-06 08:36:20.337 232437 DEBUG nova.compute.provider_tree [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Inventory has not changed in ProviderTree for provider: 6d00757a-082f-486d-ae84-869a2ba2e6e7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  6 03:36:20 np0005548731 nova_compute[232433]: 2025-12-06 08:36:20.354 232437 DEBUG nova.scheduler.client.report [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Inventory has not changed for provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  6 03:36:20 np0005548731 nova_compute[232433]: 2025-12-06 08:36:20.356 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  6 03:36:20 np0005548731 nova_compute[232433]: 2025-12-06 08:36:20.356 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.618s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:36:20 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:36:20 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:36:20 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:36:20.374 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:36:20 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:36:20 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:36:20 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:36:20.898 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:36:20 np0005548731 nova_compute[232433]: 2025-12-06 08:36:20.902 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:36:22 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:36:22 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:36:22 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:36:22.376 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:36:22 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:36:22 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:36:22 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:36:22.902 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:36:24 np0005548731 nova_compute[232433]: 2025-12-06 08:36:24.326 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:36:24 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:36:24 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:36:24 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:36:24.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:36:24 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:36:24 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 03:36:24 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:36:24.905 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 03:36:25 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:36:25 np0005548731 nova_compute[232433]: 2025-12-06 08:36:25.905 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:36:26 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:36:26 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:36:26 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:36:26.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:36:26 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:36:26 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:36:26 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:36:26.908 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:36:28 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:36:28 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:36:28 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:36:28.383 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:36:28 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:36:28 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:36:28 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:36:28.911 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:36:29 np0005548731 nova_compute[232433]: 2025-12-06 08:36:29.329 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:36:30 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:36:30 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:36:30 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:36:30 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:36:30.385 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:36:30 np0005548731 nova_compute[232433]: 2025-12-06 08:36:30.908 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:36:30 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:36:30 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 03:36:30 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:36:30.914 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 03:36:31 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  6 03:36:31 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 03:36:31 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  6 03:36:32 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:36:32 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:36:32 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:36:32.387 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:36:32 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:36:32 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:36:32 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:36:32.917 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:36:34 np0005548731 nova_compute[232433]: 2025-12-06 08:36:34.331 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:36:34 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:36:34 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:36:34 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:36:34.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:36:34 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:36:34 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:36:34 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:36:34.920 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:36:35 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:36:35 np0005548731 nova_compute[232433]: 2025-12-06 08:36:35.911 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:36:36 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:36:36 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:36:36 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:36:36.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:36:36 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:36:36 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:36:36 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:36:36.924 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:36:38 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:36:38 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 03:36:38 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:36:38.394 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 03:36:38 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:36:38 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:36:38 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:36:38.927 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:36:39 np0005548731 nova_compute[232433]: 2025-12-06 08:36:39.332 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:36:39 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 03:36:39 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 03:36:40 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:36:40 np0005548731 nova_compute[232433]: 2025-12-06 08:36:40.352 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:36:40 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:36:40 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:36:40 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:36:40.397 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:36:40 np0005548731 nova_compute[232433]: 2025-12-06 08:36:40.914 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:36:40 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:36:40 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:36:40 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:36:40.929 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:36:41 np0005548731 podman[360466]: 2025-12-06 08:36:41.955681487 +0000 UTC m=+0.119373296 container health_status 26c2ce9bdb83c6db5ccad7f5b2a7cb4e9354ab0235ad2f942ea42c8feda2142b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, 
config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  6 03:36:41 np0005548731 podman[360468]: 2025-12-06 08:36:41.956025905 +0000 UTC m=+0.117605813 container health_status ae4fd08cdec31d6ae2a6b91ffff7deb6ca5a8375cb52ee4b685cc6f25796824e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Dec  6 03:36:41 np0005548731 podman[360467]: 2025-12-06 08:36:41.984380896 +0000 UTC m=+0.146696201 container health_status a118c3becf56cfbcae208065771ebf10a65162bb7b96389971293e534823fb78 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_controller, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  6 03:36:42 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:36:42 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:36:42 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:36:42.399 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:36:42 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:36:42 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:36:42 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:36:42.932 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:36:44 np0005548731 nova_compute[232433]: 2025-12-06 08:36:44.335 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:36:44 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:36:44 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:36:44 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:36:44.401 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:36:44 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:36:44 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:36:44 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:36:44.936 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:36:45 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:36:45 np0005548731 nova_compute[232433]: 2025-12-06 08:36:45.916 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:36:46 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:36:46 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.002000047s ======
Dec  6 03:36:46 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:36:46.403 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000047s
Dec  6 03:36:46 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:36:46 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:36:46 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:36:46.938 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:36:48 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:36:48 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 03:36:48 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:36:48.406 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 03:36:48 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:36:48 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:36:48 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:36:48.941 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:36:49 np0005548731 nova_compute[232433]: 2025-12-06 08:36:49.336 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:36:50 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:36:50 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:36:50 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:36:50.409 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:36:50 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:36:50 np0005548731 nova_compute[232433]: 2025-12-06 08:36:50.919 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:36:50 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:36:50 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:36:50 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:36:50.946 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:36:52 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:36:52 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:36:52 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:36:52.411 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:36:52 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:36:52 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:36:52 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:36:52.948 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:36:54 np0005548731 nova_compute[232433]: 2025-12-06 08:36:54.338 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:36:54 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:36:54 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:36:54 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:36:54.413 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:36:54 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:36:54 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:36:54 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:36:54.950 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:36:55 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:36:55 np0005548731 nova_compute[232433]: 2025-12-06 08:36:55.921 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:36:56 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:36:56 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 03:36:56 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:36:56.415 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 03:36:56 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:36:56 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:36:56 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:36:56.953 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:36:58 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:36:58 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:36:58 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:36:58.418 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:36:58 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:36:58 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:36:58 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:36:58.956 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:36:59 np0005548731 nova_compute[232433]: 2025-12-06 08:36:59.339 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:37:00 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:37:00 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:37:00 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:37:00.420 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:37:00 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:37:00 np0005548731 nova_compute[232433]: 2025-12-06 08:37:00.923 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:37:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:37:00.938 143965 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:37:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:37:00.939 143965 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:37:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:37:00.939 143965 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:37:00 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:37:00 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:37:00 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:37:00.959 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:37:02 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:37:02 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:37:02 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:37:02.421 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:37:02 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:37:02 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 03:37:02 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:37:02.961 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 03:37:04 np0005548731 nova_compute[232433]: 2025-12-06 08:37:04.341 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:37:04 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:37:04 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 03:37:04 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:37:04.424 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 03:37:04 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:37:04 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:37:04 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:37:04.964 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:37:05 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:37:05 np0005548731 nova_compute[232433]: 2025-12-06 08:37:05.926 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:37:06 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:37:06 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:37:06 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:37:06.428 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:37:06 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:37:06 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:37:06 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:37:06.968 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:37:08 np0005548731 nova_compute[232433]: 2025-12-06 08:37:08.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:37:08 np0005548731 nova_compute[232433]: 2025-12-06 08:37:08.104 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  6 03:37:08 np0005548731 nova_compute[232433]: 2025-12-06 08:37:08.105 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  6 03:37:08 np0005548731 nova_compute[232433]: 2025-12-06 08:37:08.268 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Dec  6 03:37:08 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:37:08 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:37:08 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:37:08.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:37:08 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:37:08 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:37:08 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:37:08.971 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:37:09 np0005548731 nova_compute[232433]: 2025-12-06 08:37:09.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:37:09 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  6 03:37:09 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2873814189' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  6 03:37:09 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  6 03:37:09 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2873814189' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  6 03:37:09 np0005548731 nova_compute[232433]: 2025-12-06 08:37:09.341 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:37:10 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:37:10 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:37:10 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:37:10.432 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:37:10 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:37:10 np0005548731 nova_compute[232433]: 2025-12-06 08:37:10.928 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:37:10 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:37:10 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 03:37:10 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:37:10.973 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 03:37:12 np0005548731 podman[360622]: 2025-12-06 08:37:12.161939206 +0000 UTC m=+0.057263825 container health_status 26c2ce9bdb83c6db5ccad7f5b2a7cb4e9354ab0235ad2f942ea42c8feda2142b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible)
Dec  6 03:37:12 np0005548731 podman[360624]: 2025-12-06 08:37:12.229593614 +0000 UTC m=+0.109803695 container health_status ae4fd08cdec31d6ae2a6b91ffff7deb6ca5a8375cb52ee4b685cc6f25796824e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Dec  6 03:37:12 np0005548731 podman[360623]: 2025-12-06 08:37:12.236318118 +0000 UTC m=+0.131620116 container health_status a118c3becf56cfbcae208065771ebf10a65162bb7b96389971293e534823fb78 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3)
Dec  6 03:37:12 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:37:12 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:37:12 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:37:12.434 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:37:12 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:37:12 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:37:12 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:37:12.976 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:37:13 np0005548731 nova_compute[232433]: 2025-12-06 08:37:13.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:37:13 np0005548731 nova_compute[232433]: 2025-12-06 08:37:13.104 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  6 03:37:14 np0005548731 nova_compute[232433]: 2025-12-06 08:37:14.105 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:37:14 np0005548731 nova_compute[232433]: 2025-12-06 08:37:14.377 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:37:14 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:37:14 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:37:14 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:37:14.436 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:37:14 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:37:14 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:37:14 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:37:14.979 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:37:15 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:37:15 np0005548731 nova_compute[232433]: 2025-12-06 08:37:15.930 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:37:16 np0005548731 nova_compute[232433]: 2025-12-06 08:37:16.099 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:37:16 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:37:16 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:37:16 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:37:16.438 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:37:16 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:37:16 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:37:16 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:37:16.982 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:37:18 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:37:18 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:37:18 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:37:18.440 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:37:18 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:37:18 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:37:18 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:37:18.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:37:19 np0005548731 nova_compute[232433]: 2025-12-06 08:37:19.103 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:37:19 np0005548731 nova_compute[232433]: 2025-12-06 08:37:19.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:37:19 np0005548731 nova_compute[232433]: 2025-12-06 08:37:19.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:37:19 np0005548731 nova_compute[232433]: 2025-12-06 08:37:19.380 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:37:20 np0005548731 nova_compute[232433]: 2025-12-06 08:37:20.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:37:20 np0005548731 nova_compute[232433]: 2025-12-06 08:37:20.150 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:37:20 np0005548731 nova_compute[232433]: 2025-12-06 08:37:20.151 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:37:20 np0005548731 nova_compute[232433]: 2025-12-06 08:37:20.151 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:37:20 np0005548731 nova_compute[232433]: 2025-12-06 08:37:20.151 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  6 03:37:20 np0005548731 nova_compute[232433]: 2025-12-06 08:37:20.151 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 03:37:20 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:37:20 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:37:20 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:37:20.442 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:37:20 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 03:37:20 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/943641337' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 03:37:20 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:37:20 np0005548731 nova_compute[232433]: 2025-12-06 08:37:20.572 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.421s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 03:37:20 np0005548731 nova_compute[232433]: 2025-12-06 08:37:20.756 232437 WARNING nova.virt.libvirt.driver [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  6 03:37:20 np0005548731 nova_compute[232433]: 2025-12-06 08:37:20.759 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4130MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  6 03:37:20 np0005548731 nova_compute[232433]: 2025-12-06 08:37:20.759 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:37:20 np0005548731 nova_compute[232433]: 2025-12-06 08:37:20.760 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:37:20 np0005548731 nova_compute[232433]: 2025-12-06 08:37:20.934 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:37:20 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:37:20 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 03:37:20 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:37:20.987 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 03:37:21 np0005548731 nova_compute[232433]: 2025-12-06 08:37:21.155 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  6 03:37:21 np0005548731 nova_compute[232433]: 2025-12-06 08:37:21.155 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  6 03:37:21 np0005548731 nova_compute[232433]: 2025-12-06 08:37:21.169 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 03:37:21 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 03:37:21 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1486169835' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 03:37:21 np0005548731 nova_compute[232433]: 2025-12-06 08:37:21.597 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.427s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 03:37:21 np0005548731 nova_compute[232433]: 2025-12-06 08:37:21.602 232437 DEBUG nova.compute.provider_tree [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Inventory has not changed in ProviderTree for provider: 6d00757a-082f-486d-ae84-869a2ba2e6e7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  6 03:37:21 np0005548731 nova_compute[232433]: 2025-12-06 08:37:21.656 232437 DEBUG nova.scheduler.client.report [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Inventory has not changed for provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  6 03:37:21 np0005548731 nova_compute[232433]: 2025-12-06 08:37:21.657 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  6 03:37:21 np0005548731 nova_compute[232433]: 2025-12-06 08:37:21.658 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.898s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:37:22 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:37:22 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:37:22 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:37:22.444 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:37:22 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:37:22 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:37:22 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:37:22.990 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:37:24 np0005548731 nova_compute[232433]: 2025-12-06 08:37:24.382 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:37:24 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:37:24 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:37:24 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:37:24.447 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:37:24 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:37:24 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:37:24 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:37:24.993 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:37:25 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:37:25 np0005548731 nova_compute[232433]: 2025-12-06 08:37:25.936 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:37:26 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:37:26 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:37:26 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:37:26.449 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:37:26 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:37:26 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:37:26 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:37:26.996 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:37:28 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:37:28 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:37:28 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:37:28.451 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:37:28 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:37:29 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:37:29 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:37:28.998 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:37:29 np0005548731 nova_compute[232433]: 2025-12-06 08:37:29.386 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:37:30 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:37:30 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:37:30 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:37:30.452 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:37:30 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:37:30 np0005548731 nova_compute[232433]: 2025-12-06 08:37:30.938 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:37:31 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:37:31 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:37:31 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:37:31.001 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:37:32 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:37:32 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:37:32 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:37:32.454 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:37:33 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:37:33 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 03:37:33 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:37:33.004 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 03:37:34 np0005548731 nova_compute[232433]: 2025-12-06 08:37:34.386 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:37:34 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:37:34 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:37:34 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:37:34.456 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:37:35 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:37:35 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:37:35 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:37:35.007 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:37:35 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:37:35 np0005548731 nova_compute[232433]: 2025-12-06 08:37:35.941 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:37:36 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:37:36 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:37:36 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:37:36.458 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:37:37 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:37:37 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:37:37 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:37:37.010 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:37:38 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:37:38 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:37:38 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:37:38.460 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:37:39 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:37:39 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:37:39 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:37:39.013 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:37:39 np0005548731 nova_compute[232433]: 2025-12-06 08:37:39.389 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:37:39 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  6 03:37:39 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 03:37:39 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  6 03:37:40 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:37:40 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:37:40 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:37:40.463 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:37:40 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:37:40 np0005548731 nova_compute[232433]: 2025-12-06 08:37:40.942 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:37:41 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:37:41 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:37:41 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:37:41.017 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:37:42 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:37:42 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:37:42 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:37:42.465 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:37:42 np0005548731 podman[360953]: 2025-12-06 08:37:42.923229908 +0000 UTC m=+0.069779749 container health_status 26c2ce9bdb83c6db5ccad7f5b2a7cb4e9354ab0235ad2f942ea42c8feda2142b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125)
Dec  6 03:37:42 np0005548731 podman[360955]: 2025-12-06 08:37:42.941195775 +0000 UTC m=+0.086815623 container health_status ae4fd08cdec31d6ae2a6b91ffff7deb6ca5a8375cb52ee4b685cc6f25796824e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd)
Dec  6 03:37:42 np0005548731 podman[360954]: 2025-12-06 08:37:42.942380654 +0000 UTC m=+0.092534962 container health_status a118c3becf56cfbcae208065771ebf10a65162bb7b96389971293e534823fb78 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Dec  6 03:37:43 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:37:43 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:37:43 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:37:43.020 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:37:44 np0005548731 nova_compute[232433]: 2025-12-06 08:37:44.391 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:37:44 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:37:44 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:37:44 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:37:44.467 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:37:45 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:37:45 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:37:45 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:37:45.023 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:37:45 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:37:45 np0005548731 nova_compute[232433]: 2025-12-06 08:37:45.945 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:37:46 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:37:46 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:37:46 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:37:46.469 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:37:47 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:37:47 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:37:47 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:37:47.026 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:37:47 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 03:37:47 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 03:37:48 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:37:48 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:37:48 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:37:48.472 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:37:49 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:37:49 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:37:49 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:37:49.030 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:37:49 np0005548731 nova_compute[232433]: 2025-12-06 08:37:49.392 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:37:50 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:37:50 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:37:50 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:37:50.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:37:50 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:37:50 np0005548731 nova_compute[232433]: 2025-12-06 08:37:50.948 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:37:51 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:37:51 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:37:51 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:37:51.033 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:37:52 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:37:52 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:37:52 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:37:52.476 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:37:53 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:37:53 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:37:53 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:37:53.036 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:37:54 np0005548731 nova_compute[232433]: 2025-12-06 08:37:54.396 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:37:54 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:37:54 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:37:54 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:37:54.478 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:37:55 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:37:55 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:37:55 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:37:55.038 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:37:55 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:37:55 np0005548731 nova_compute[232433]: 2025-12-06 08:37:55.952 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:37:56 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:37:56 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 03:37:56 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:37:56.480 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 03:37:57 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:37:57 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 03:37:57 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:37:57.096 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 03:37:58 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:37:58 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:37:58 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:37:58.483 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:37:59 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:37:59 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:37:59 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:37:59.099 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:37:59 np0005548731 nova_compute[232433]: 2025-12-06 08:37:59.448 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:38:00 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:38:00 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:38:00 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:38:00.486 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:38:00 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:38:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:38:00.939 143965 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:38:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:38:00.939 143965 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:38:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:38:00.939 143965 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:38:00 np0005548731 nova_compute[232433]: 2025-12-06 08:38:00.954 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:38:01 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:38:01 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:38:01 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:38:01.100 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:38:02 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:38:02 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:38:02 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:38:02.488 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:38:03 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:38:03 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:38:03 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:38:03.103 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:38:04 np0005548731 nova_compute[232433]: 2025-12-06 08:38:04.486 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:38:04 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:38:04 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:38:04 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:38:04.490 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:38:05 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:38:05 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 03:38:05 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:38:05.105 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 03:38:05 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:38:05 np0005548731 nova_compute[232433]: 2025-12-06 08:38:05.955 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:38:06 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:38:06 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:38:06 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:38:06.492 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:38:07 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:38:07 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:38:07 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:38:07.108 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:38:08 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:38:08 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:38:08 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:38:08.494 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:38:09 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:38:09 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:38:09 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:38:09.111 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:38:09 np0005548731 nova_compute[232433]: 2025-12-06 08:38:09.488 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:38:10 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:38:10 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:38:10 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:38:10.496 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:38:10 np0005548731 nova_compute[232433]: 2025-12-06 08:38:10.658 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:38:10 np0005548731 nova_compute[232433]: 2025-12-06 08:38:10.658 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  6 03:38:10 np0005548731 nova_compute[232433]: 2025-12-06 08:38:10.658 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  6 03:38:10 np0005548731 nova_compute[232433]: 2025-12-06 08:38:10.707 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Dec  6 03:38:10 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:38:10 np0005548731 nova_compute[232433]: 2025-12-06 08:38:10.957 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:38:11 np0005548731 nova_compute[232433]: 2025-12-06 08:38:11.103 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:38:11 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:38:11 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:38:11 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:38:11.113 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:38:12 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:38:12 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:38:12 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:38:12.498 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:38:13 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:38:13 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:38:13 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:38:13.115 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:38:13 np0005548731 podman[361182]: 2025-12-06 08:38:13.88987332 +0000 UTC m=+0.048360999 container health_status 26c2ce9bdb83c6db5ccad7f5b2a7cb4e9354ab0235ad2f942ea42c8feda2142b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  6 03:38:13 np0005548731 podman[361184]: 2025-12-06 08:38:13.897458964 +0000 UTC m=+0.053432041 container health_status ae4fd08cdec31d6ae2a6b91ffff7deb6ca5a8375cb52ee4b685cc6f25796824e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=multipathd, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Dec  6 03:38:13 np0005548731 podman[361183]: 2025-12-06 08:38:13.944931769 +0000 UTC m=+0.100453586 container health_status a118c3becf56cfbcae208065771ebf10a65162bb7b96389971293e534823fb78 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec  6 03:38:14 np0005548731 nova_compute[232433]: 2025-12-06 08:38:14.491 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:38:14 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:38:14 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:38:14 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:38:14.500 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:38:15 np0005548731 nova_compute[232433]: 2025-12-06 08:38:15.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:38:15 np0005548731 nova_compute[232433]: 2025-12-06 08:38:15.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:38:15 np0005548731 nova_compute[232433]: 2025-12-06 08:38:15.104 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  6 03:38:15 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:38:15 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:38:15 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:38:15.118 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:38:15 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:38:15 np0005548731 nova_compute[232433]: 2025-12-06 08:38:15.960 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:38:16 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:38:16 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:38:16 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:38:16.502 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:38:17 np0005548731 nova_compute[232433]: 2025-12-06 08:38:17.100 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:38:17 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:38:17 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:38:17 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:38:17.120 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:38:18 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:38:18 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:38:18 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:38:18.504 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:38:19 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:38:19 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:38:19 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:38:19.124 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:38:19 np0005548731 nova_compute[232433]: 2025-12-06 08:38:19.493 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:38:20 np0005548731 nova_compute[232433]: 2025-12-06 08:38:20.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:38:20 np0005548731 nova_compute[232433]: 2025-12-06 08:38:20.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:38:20 np0005548731 nova_compute[232433]: 2025-12-06 08:38:20.157 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:38:20 np0005548731 nova_compute[232433]: 2025-12-06 08:38:20.157 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:38:20 np0005548731 nova_compute[232433]: 2025-12-06 08:38:20.157 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:38:20 np0005548731 nova_compute[232433]: 2025-12-06 08:38:20.158 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  6 03:38:20 np0005548731 nova_compute[232433]: 2025-12-06 08:38:20.158 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 03:38:20 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:38:20 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:38:20 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:38:20.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:38:20 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 03:38:20 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1308302036' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 03:38:20 np0005548731 nova_compute[232433]: 2025-12-06 08:38:20.583 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.425s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 03:38:20 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:38:20 np0005548731 nova_compute[232433]: 2025-12-06 08:38:20.740 232437 WARNING nova.virt.libvirt.driver [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  6 03:38:20 np0005548731 nova_compute[232433]: 2025-12-06 08:38:20.741 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4123MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  6 03:38:20 np0005548731 nova_compute[232433]: 2025-12-06 08:38:20.741 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:38:20 np0005548731 nova_compute[232433]: 2025-12-06 08:38:20.742 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:38:21 np0005548731 nova_compute[232433]: 2025-12-06 08:38:21.000 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:38:21 np0005548731 nova_compute[232433]: 2025-12-06 08:38:21.056 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  6 03:38:21 np0005548731 nova_compute[232433]: 2025-12-06 08:38:21.056 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  6 03:38:21 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:38:21 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:38:21 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:38:21.126 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:38:21 np0005548731 nova_compute[232433]: 2025-12-06 08:38:21.132 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 03:38:21 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 03:38:21 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2898152300' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 03:38:21 np0005548731 nova_compute[232433]: 2025-12-06 08:38:21.549 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.417s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 03:38:21 np0005548731 nova_compute[232433]: 2025-12-06 08:38:21.555 232437 DEBUG nova.compute.provider_tree [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Inventory has not changed in ProviderTree for provider: 6d00757a-082f-486d-ae84-869a2ba2e6e7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  6 03:38:21 np0005548731 nova_compute[232433]: 2025-12-06 08:38:21.816 232437 DEBUG nova.scheduler.client.report [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Inventory has not changed for provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  6 03:38:21 np0005548731 nova_compute[232433]: 2025-12-06 08:38:21.819 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  6 03:38:21 np0005548731 nova_compute[232433]: 2025-12-06 08:38:21.819 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.078s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:38:22 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:38:22 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:38:22 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:38:22.508 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:38:22 np0005548731 nova_compute[232433]: 2025-12-06 08:38:22.821 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  6 03:38:22 np0005548731 nova_compute[232433]: 2025-12-06 08:38:22.821 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  6 03:38:23 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:38:23 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:38:23 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:38:23.129 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:38:24 np0005548731 nova_compute[232433]: 2025-12-06 08:38:24.494 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 03:38:24 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:38:24 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:38:24 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:38:24.510 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:38:25 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:38:25 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:38:25 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:38:25.132 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:38:25 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:38:26 np0005548731 nova_compute[232433]: 2025-12-06 08:38:26.003 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 03:38:26 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:38:26 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:38:26 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:38:26.512 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:38:27 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:38:27 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:38:27 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:38:27.135 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:38:28 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:38:28 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:38:28 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:38:28.514 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:38:29 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:38:29 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:38:29 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:38:29.138 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:38:29 np0005548731 nova_compute[232433]: 2025-12-06 08:38:29.542 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 03:38:30 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:38:30 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:38:30 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:38:30.517 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:38:30 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:38:31 np0005548731 nova_compute[232433]: 2025-12-06 08:38:31.005 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 03:38:31 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:38:31 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:38:31 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:38:31.141 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:38:32 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:38:32 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:38:32 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:38:32.519 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:38:33 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:38:33 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:38:33 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:38:33.143 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:38:34 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:38:34 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:38:34 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:38:34.521 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:38:34 np0005548731 nova_compute[232433]: 2025-12-06 08:38:34.542 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 03:38:35 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:38:35 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:38:35 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:38:35.147 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:38:35 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:38:36 np0005548731 nova_compute[232433]: 2025-12-06 08:38:36.046 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 03:38:36 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:38:36 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:38:36 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:38:36.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:38:37 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:38:37 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:38:37 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:38:37.149 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:38:38 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:38:38 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:38:38 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:38:38.525 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:38:39 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:38:39 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:38:39 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:38:39.151 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:38:39 np0005548731 nova_compute[232433]: 2025-12-06 08:38:39.597 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 03:38:40 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:38:40 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:38:40 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:38:40.527 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:38:40 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:38:41 np0005548731 nova_compute[232433]: 2025-12-06 08:38:41.049 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 03:38:41 np0005548731 nova_compute[232433]: 2025-12-06 08:38:41.099 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  6 03:38:41 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:38:41 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:38:41 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:38:41.153 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:38:42 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:38:42 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:38:42 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:38:42.529 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:38:43 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:38:43 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:38:43 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:38:43.156 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:38:44 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:38:44 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:38:44 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:38:44.531 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:38:44 np0005548731 nova_compute[232433]: 2025-12-06 08:38:44.645 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 03:38:44 np0005548731 podman[361358]: 2025-12-06 08:38:44.888520231 +0000 UTC m=+0.051957165 container health_status 26c2ce9bdb83c6db5ccad7f5b2a7cb4e9354ab0235ad2f942ea42c8feda2142b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Dec  6 03:38:44 np0005548731 podman[361360]: 2025-12-06 08:38:44.902313496 +0000 UTC m=+0.059345384 container health_status ae4fd08cdec31d6ae2a6b91ffff7deb6ca5a8375cb52ee4b685cc6f25796824e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team)
Dec  6 03:38:44 np0005548731 podman[361359]: 2025-12-06 08:38:44.917419014 +0000 UTC m=+0.077072806 container health_status a118c3becf56cfbcae208065771ebf10a65162bb7b96389971293e534823fb78 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Dec  6 03:38:45 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:38:45 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:38:45 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:38:45.158 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:38:45 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:38:46 np0005548731 nova_compute[232433]: 2025-12-06 08:38:46.051 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 03:38:46 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:38:46 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:38:46 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:38:46.535 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:38:47 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:38:47 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 03:38:47 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:38:47.162 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 03:38:48 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Dec  6 03:38:48 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  6 03:38:48 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 03:38:48 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  6 03:38:48 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:38:48 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:38:48 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:38:48.537 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:38:49 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:38:49 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:38:49 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:38:49.165 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:38:49 np0005548731 nova_compute[232433]: 2025-12-06 08:38:49.646 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 03:38:50 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:38:50 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 03:38:50 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:38:50.538 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 03:38:50 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:38:51 np0005548731 nova_compute[232433]: 2025-12-06 08:38:51.054 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 03:38:51 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:38:51 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:38:51 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:38:51.168 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:38:52 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:38:52 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:38:52 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:38:52.541 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:38:53 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:38:53 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:38:53 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:38:53.170 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:38:54 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:38:54 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:38:54 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:38:54.544 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:38:54 np0005548731 nova_compute[232433]: 2025-12-06 08:38:54.646 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 03:38:55 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:38:55 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:38:55 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:38:55.172 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:38:55 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 03:38:55 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 03:38:55 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:38:56 np0005548731 nova_compute[232433]: 2025-12-06 08:38:56.096 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 03:38:56 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:38:56 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:38:56 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:38:56.544 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:38:57 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:38:57 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:38:57 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:38:57.175 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:38:58 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:38:58 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:38:58 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:38:58.546 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:38:59 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:38:59 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:38:59 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:38:59.176 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:38:59 np0005548731 nova_compute[232433]: 2025-12-06 08:38:59.686 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 03:39:00 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:39:00 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 03:39:00 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:39:00.547 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 03:39:00 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:39:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:39:00.940 143965 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  6 03:39:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:39:00.941 143965 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  6 03:39:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:39:00.941 143965 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  6 03:39:01 np0005548731 nova_compute[232433]: 2025-12-06 08:39:01.100 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 03:39:01 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:39:01 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:39:01 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:39:01.179 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:39:02 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:39:02 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:39:02 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:39:02.549 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:39:03 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:39:03 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:39:03 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:39:03.182 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:39:04 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:39:04 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:39:04 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:39:04.551 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:39:04 np0005548731 nova_compute[232433]: 2025-12-06 08:39:04.725 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  6 03:39:05 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:39:05 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:39:05 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:39:05.185 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:39:05 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:39:06 np0005548731 nova_compute[232433]: 2025-12-06 08:39:06.102 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:39:06 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:39:06 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:39:06 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:39:06.552 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:39:07 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:39:07 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:39:07 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:39:07.188 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:39:08 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:39:08 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:39:08 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:39:08.555 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:39:09 np0005548731 nova_compute[232433]: 2025-12-06 08:39:09.103 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:39:09 np0005548731 nova_compute[232433]: 2025-12-06 08:39:09.104 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  6 03:39:09 np0005548731 nova_compute[232433]: 2025-12-06 08:39:09.104 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  6 03:39:09 np0005548731 nova_compute[232433]: 2025-12-06 08:39:09.150 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Dec  6 03:39:09 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:39:09 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:39:09 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:39:09.191 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:39:09 np0005548731 nova_compute[232433]: 2025-12-06 08:39:09.726 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:39:10 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:39:10 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 03:39:10 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:39:10.556 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 03:39:10 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:39:11 np0005548731 nova_compute[232433]: 2025-12-06 08:39:11.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:39:11 np0005548731 nova_compute[232433]: 2025-12-06 08:39:11.104 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:39:11 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:39:11 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:39:11 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:39:11.193 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:39:12 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:39:12 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:39:12 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:39:12.559 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:39:13 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:39:13 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:39:13 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:39:13.197 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:39:14 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:39:14 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:39:14 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:39:14.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:39:14 np0005548731 nova_compute[232433]: 2025-12-06 08:39:14.729 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:39:15 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:39:15 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:39:15 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:39:15.199 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:39:15 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:39:15 np0005548731 podman[361721]: 2025-12-06 08:39:15.92037701 +0000 UTC m=+0.077919768 container health_status a118c3becf56cfbcae208065771ebf10a65162bb7b96389971293e534823fb78 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec  6 03:39:15 np0005548731 podman[361720]: 2025-12-06 08:39:15.922258435 +0000 UTC m=+0.079873175 container health_status 26c2ce9bdb83c6db5ccad7f5b2a7cb4e9354ab0235ad2f942ea42c8feda2142b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent)
Dec  6 03:39:15 np0005548731 podman[361722]: 2025-12-06 08:39:15.930305471 +0000 UTC m=+0.087221813 container health_status ae4fd08cdec31d6ae2a6b91ffff7deb6ca5a8375cb52ee4b685cc6f25796824e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Dec  6 03:39:16 np0005548731 nova_compute[232433]: 2025-12-06 08:39:16.106 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:39:16 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:39:16 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:39:16 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:39:16.563 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:39:17 np0005548731 nova_compute[232433]: 2025-12-06 08:39:17.099 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:39:17 np0005548731 nova_compute[232433]: 2025-12-06 08:39:17.103 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:39:17 np0005548731 nova_compute[232433]: 2025-12-06 08:39:17.103 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:39:17 np0005548731 nova_compute[232433]: 2025-12-06 08:39:17.103 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  6 03:39:17 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:39:17 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:39:17 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:39:17.202 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:39:18 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:39:18 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 03:39:18 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:39:18.565 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 03:39:19 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:39:19 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:39:19 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:39:19.205 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:39:19 np0005548731 nova_compute[232433]: 2025-12-06 08:39:19.730 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:39:20 np0005548731 nova_compute[232433]: 2025-12-06 08:39:20.103 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:39:20 np0005548731 nova_compute[232433]: 2025-12-06 08:39:20.129 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:39:20 np0005548731 nova_compute[232433]: 2025-12-06 08:39:20.129 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:39:20 np0005548731 nova_compute[232433]: 2025-12-06 08:39:20.130 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:39:20 np0005548731 nova_compute[232433]: 2025-12-06 08:39:20.130 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  6 03:39:20 np0005548731 nova_compute[232433]: 2025-12-06 08:39:20.130 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 03:39:20 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 03:39:20 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3790213319' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 03:39:20 np0005548731 nova_compute[232433]: 2025-12-06 08:39:20.554 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.424s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 03:39:20 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:39:20 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 03:39:20 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:39:20.567 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 03:39:20 np0005548731 nova_compute[232433]: 2025-12-06 08:39:20.736 232437 WARNING nova.virt.libvirt.driver [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  6 03:39:20 np0005548731 nova_compute[232433]: 2025-12-06 08:39:20.738 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4107MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, 
"label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  6 03:39:20 np0005548731 nova_compute[232433]: 2025-12-06 08:39:20.738 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:39:20 np0005548731 nova_compute[232433]: 2025-12-06 08:39:20.738 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:39:20 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:39:20 np0005548731 nova_compute[232433]: 2025-12-06 08:39:20.809 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  6 03:39:20 np0005548731 nova_compute[232433]: 2025-12-06 08:39:20.809 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  6 03:39:20 np0005548731 nova_compute[232433]: 2025-12-06 08:39:20.829 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 03:39:21 np0005548731 nova_compute[232433]: 2025-12-06 08:39:21.108 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:39:21 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:39:21 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:39:21 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:39:21.208 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:39:21 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 03:39:21 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/509942672' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 03:39:21 np0005548731 nova_compute[232433]: 2025-12-06 08:39:21.251 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.422s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 03:39:21 np0005548731 nova_compute[232433]: 2025-12-06 08:39:21.256 232437 DEBUG nova.compute.provider_tree [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Inventory has not changed in ProviderTree for provider: 6d00757a-082f-486d-ae84-869a2ba2e6e7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  6 03:39:21 np0005548731 nova_compute[232433]: 2025-12-06 08:39:21.273 232437 DEBUG nova.scheduler.client.report [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Inventory has not changed for provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  6 03:39:21 np0005548731 nova_compute[232433]: 2025-12-06 08:39:21.274 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  6 03:39:21 np0005548731 nova_compute[232433]: 2025-12-06 08:39:21.275 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.536s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:39:22 np0005548731 nova_compute[232433]: 2025-12-06 08:39:22.276 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:39:22 np0005548731 nova_compute[232433]: 2025-12-06 08:39:22.277 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:39:22 np0005548731 nova_compute[232433]: 2025-12-06 08:39:22.277 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:39:22 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:39:22 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:39:22 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:39:22.570 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:39:23 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:39:23 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:39:23 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:39:23.211 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:39:24 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:39:24 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:39:24 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:39:24.572 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:39:24 np0005548731 nova_compute[232433]: 2025-12-06 08:39:24.734 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:39:24 np0005548731 ceph-mon[77458]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec  6 03:39:24 np0005548731 ceph-mon[77458]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 7800.0 total, 600.0 interval
Cumulative writes: 19K writes, 98K keys, 19K commit groups, 1.0 writes per commit group, ingest: 0.20 GB, 0.03 MB/s
Cumulative WAL: 19K writes, 19K syncs, 1.00 writes per sync, written: 0.20 GB, 0.03 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 1473 writes, 7100 keys, 1473 commit groups, 1.0 writes per commit group, ingest: 15.71 MB, 0.03 MB/s
Interval WAL: 1473 writes, 1473 syncs, 1.00 writes per sync, written: 0.02 GB, 0.03 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     34.5      3.51              0.35        63    0.056       0      0       0.0       0.0
  L6      1/0   12.75 MB   0.0      0.8     0.1      0.7       0.7      0.0       0.0   5.6    104.1     89.8      7.59              2.00        62    0.122    546K    33K       0.0       0.0
 Sum      1/0   12.75 MB   0.0      0.8     0.1      0.7       0.8      0.1       0.0   6.6     71.2     72.3     11.09              2.35       125    0.089    546K    33K       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   8.5     38.9     39.1      1.60              0.20         8    0.200     51K   2047       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Low      0/0    0.00 KB   0.0      0.8     0.1      0.7       0.7      0.0       0.0   0.0    104.1     89.8      7.59              2.00        62    0.122    546K    33K       0.0       0.0
High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     34.5      3.50              0.35        62    0.057       0      0       0.0       0.0
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.6      0.00              0.00         1    0.003       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 7800.0 total, 600.0 interval
Flush(GB): cumulative 0.118, interval 0.007
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.78 GB write, 0.10 MB/s write, 0.77 GB read, 0.10 MB/s read, 11.1 seconds
Interval compaction: 0.06 GB write, 0.10 MB/s write, 0.06 GB read, 0.10 MB/s read, 1.6 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x5619171151f0#2 capacity: 304.00 MB usage: 86.70 MB table_size: 0 occupancy: 18446744073709551615 collections: 14 last_copies: 0 last_secs: 0.000553 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(5346,82.89 MB,27.2668%) FilterBlock(125,1.47 MB,0.482012%) IndexBlock(125,2.34 MB,0.770684%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **
Dec  6 03:39:25 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:39:25 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:39:25 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:39:25.214 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:39:25 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:39:26 np0005548731 nova_compute[232433]: 2025-12-06 08:39:26.110 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:39:26 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:39:26 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:39:26 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:39:26.574 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:39:27 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:39:27 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:39:27 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:39:27.217 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:39:28 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:39:28 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:39:28 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:39:28.575 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:39:29 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:39:29 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:39:29 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:39:29.220 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:39:29 np0005548731 nova_compute[232433]: 2025-12-06 08:39:29.738 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:39:30 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:39:30 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:39:30 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:39:30.577 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:39:30 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:39:31 np0005548731 nova_compute[232433]: 2025-12-06 08:39:31.112 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:39:31 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:39:31 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:39:31 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:39:31.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:39:32 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:39:32 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:39:32 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:39:32.579 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:39:33 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:39:33 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:39:33 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:39:33.224 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:39:34 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:39:34 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:39:34 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:39:34.582 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:39:34 np0005548731 nova_compute[232433]: 2025-12-06 08:39:34.740 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:39:35 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:39:35 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 03:39:35 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:39:35.226 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 03:39:35 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:39:36 np0005548731 nova_compute[232433]: 2025-12-06 08:39:36.115 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:39:36 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:39:36 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:39:36 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:39:36.583 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:39:37 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:39:37 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:39:37 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:39:37.229 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:39:38 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:39:38 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:39:38 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:39:38.585 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:39:39 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:39:39 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:39:39 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:39:39.232 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:39:39 np0005548731 nova_compute[232433]: 2025-12-06 08:39:39.742 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:39:40 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:39:40 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:39:40 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:39:40.587 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:39:40 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:39:41 np0005548731 nova_compute[232433]: 2025-12-06 08:39:41.169 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:39:41 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:39:41 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:39:41 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:39:41.235 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:39:42 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:39:42 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:39:42 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:39:42.589 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:39:43 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:39:43 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:39:43 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:39:43.238 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:39:44 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:39:44 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:39:44 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:39:44.591 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:39:44 np0005548731 nova_compute[232433]: 2025-12-06 08:39:44.743 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:39:45 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:39:45 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:39:45 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:39:45.241 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:39:45 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:39:46 np0005548731 nova_compute[232433]: 2025-12-06 08:39:46.171 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:39:46 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:39:46 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:39:46 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:39:46.593 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:39:46 np0005548731 podman[361895]: 2025-12-06 08:39:46.894333036 +0000 UTC m=+0.054929397 container health_status ae4fd08cdec31d6ae2a6b91ffff7deb6ca5a8375cb52ee4b685cc6f25796824e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec  6 03:39:46 np0005548731 podman[361893]: 2025-12-06 08:39:46.946360732 +0000 UTC m=+0.111318240 container health_status 26c2ce9bdb83c6db5ccad7f5b2a7cb4e9354ab0235ad2f942ea42c8feda2142b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Dec  6 03:39:46 np0005548731 podman[361894]: 2025-12-06 08:39:46.964273259 +0000 UTC m=+0.125887835 container health_status a118c3becf56cfbcae208065771ebf10a65162bb7b96389971293e534823fb78 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Dec  6 03:39:47 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:39:47 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:39:47 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:39:47.243 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:39:48 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:39:48 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:39:48 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:39:48.595 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:39:49 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:39:49 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:39:49 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:39:49.246 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:39:49 np0005548731 nova_compute[232433]: 2025-12-06 08:39:49.744 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:39:50 np0005548731 nova_compute[232433]: 2025-12-06 08:39:50.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:39:50 np0005548731 nova_compute[232433]: 2025-12-06 08:39:50.105 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Dec  6 03:39:50 np0005548731 nova_compute[232433]: 2025-12-06 08:39:50.164 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Dec  6 03:39:50 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:39:50 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:39:50 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:39:50.596 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:39:50 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:39:51 np0005548731 nova_compute[232433]: 2025-12-06 08:39:51.174 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:39:51 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:39:51 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:39:51 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:39:51.249 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:39:52 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:39:52 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:39:52 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:39:52.599 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:39:53 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:39:53 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:39:53 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:39:53.253 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:39:54 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:39:54 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:39:54 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:39:54.600 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:39:54 np0005548731 nova_compute[232433]: 2025-12-06 08:39:54.748 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:39:55 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:39:55 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:39:55 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:39:55.256 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:39:55 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:39:56 np0005548731 nova_compute[232433]: 2025-12-06 08:39:56.176 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:39:56 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:39:56 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:39:56 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:39:56.603 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:39:57 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  6 03:39:57 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 03:39:57 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  6 03:39:57 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:39:57 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:39:57 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:39:57.258 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:39:57 np0005548731 ceph-mon[77458]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #199. Immutable memtables: 0.
Dec  6 03:39:57 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:39:57.772830) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec  6 03:39:57 np0005548731 ceph-mon[77458]: rocksdb: [db/flush_job.cc:856] [default] [JOB 127] Flushing memtable with next log file: 199
Dec  6 03:39:57 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765010397772894, "job": 127, "event": "flush_started", "num_memtables": 1, "num_entries": 2309, "num_deletes": 258, "total_data_size": 5724403, "memory_usage": 5802960, "flush_reason": "Manual Compaction"}
Dec  6 03:39:57 np0005548731 ceph-mon[77458]: rocksdb: [db/flush_job.cc:885] [default] [JOB 127] Level-0 flush table #200: started
Dec  6 03:39:57 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765010397802072, "cf_name": "default", "job": 127, "event": "table_file_creation", "file_number": 200, "file_size": 3754820, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 96820, "largest_seqno": 99123, "table_properties": {"data_size": 3745477, "index_size": 5900, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2373, "raw_key_size": 18572, "raw_average_key_size": 19, "raw_value_size": 3726930, "raw_average_value_size": 3998, "num_data_blocks": 260, "num_entries": 932, "num_filter_entries": 932, "num_deletions": 258, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765010174, "oldest_key_time": 1765010174, "file_creation_time": 1765010397, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bc01f9a1-fda8-4008-aee1-1fa5ff4371a1", "db_session_id": "CWBL8WMDM4D79BU86E9T", "orig_file_number": 200, "seqno_to_time_mapping": "N/A"}}
Dec  6 03:39:57 np0005548731 ceph-mon[77458]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 127] Flush lasted 29310 microseconds, and 9038 cpu microseconds.
Dec  6 03:39:57 np0005548731 ceph-mon[77458]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  6 03:39:57 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:39:57.802143) [db/flush_job.cc:967] [default] [JOB 127] Level-0 flush table #200: 3754820 bytes OK
Dec  6 03:39:57 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:39:57.802165) [db/memtable_list.cc:519] [default] Level-0 commit table #200 started
Dec  6 03:39:57 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:39:57.805055) [db/memtable_list.cc:722] [default] Level-0 commit table #200: memtable #1 done
Dec  6 03:39:57 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:39:57.805069) EVENT_LOG_v1 {"time_micros": 1765010397805064, "job": 127, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec  6 03:39:57 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:39:57.805086) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec  6 03:39:57 np0005548731 ceph-mon[77458]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 127] Try to delete WAL files size 5714378, prev total WAL file size 5714378, number of live WAL files 2.
Dec  6 03:39:57 np0005548731 ceph-mon[77458]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000196.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  6 03:39:57 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:39:57.806473) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0033373836' seq:72057594037927935, type:22 .. '6C6F676D0034303430' seq:0, type:0; will stop at (end)
Dec  6 03:39:57 np0005548731 ceph-mon[77458]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 128] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec  6 03:39:57 np0005548731 ceph-mon[77458]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 127 Base level 0, inputs: [200(3666KB)], [198(12MB)]
Dec  6 03:39:57 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765010397806506, "job": 128, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [200], "files_L6": [198], "score": -1, "input_data_size": 17126946, "oldest_snapshot_seqno": -1}
Dec  6 03:39:58 np0005548731 ceph-mon[77458]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 128] Generated table #201: 12821 keys, 17003364 bytes, temperature: kUnknown
Dec  6 03:39:58 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765010398026940, "cf_name": "default", "job": 128, "event": "table_file_creation", "file_number": 201, "file_size": 17003364, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 16920706, "index_size": 49521, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 32069, "raw_key_size": 339116, "raw_average_key_size": 26, "raw_value_size": 16696617, "raw_average_value_size": 1302, "num_data_blocks": 1891, "num_entries": 12821, "num_filter_entries": 12821, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765002564, "oldest_key_time": 0, "file_creation_time": 1765010397, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bc01f9a1-fda8-4008-aee1-1fa5ff4371a1", "db_session_id": "CWBL8WMDM4D79BU86E9T", "orig_file_number": 201, "seqno_to_time_mapping": "N/A"}}
Dec  6 03:39:58 np0005548731 ceph-mon[77458]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  6 03:39:58 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:39:58.027248) [db/compaction/compaction_job.cc:1663] [default] [JOB 128] Compacted 1@0 + 1@6 files to L6 => 17003364 bytes
Dec  6 03:39:58 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:39:58.031944) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 77.7 rd, 77.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.6, 12.8 +0.0 blob) out(16.2 +0.0 blob), read-write-amplify(9.1) write-amplify(4.5) OK, records in: 13350, records dropped: 529 output_compression: NoCompression
Dec  6 03:39:58 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:39:58.031980) EVENT_LOG_v1 {"time_micros": 1765010398031966, "job": 128, "event": "compaction_finished", "compaction_time_micros": 220538, "compaction_time_cpu_micros": 39357, "output_level": 6, "num_output_files": 1, "total_output_size": 17003364, "num_input_records": 13350, "num_output_records": 12821, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec  6 03:39:58 np0005548731 ceph-mon[77458]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000200.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  6 03:39:58 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765010398032971, "job": 128, "event": "table_file_deletion", "file_number": 200}
Dec  6 03:39:58 np0005548731 ceph-mon[77458]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000198.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  6 03:39:58 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765010398035734, "job": 128, "event": "table_file_deletion", "file_number": 198}
Dec  6 03:39:58 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:39:57.806412) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 03:39:58 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:39:58.035804) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 03:39:58 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:39:58.035810) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 03:39:58 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:39:58.035811) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 03:39:58 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:39:58.035812) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 03:39:58 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:39:58.035814) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 03:39:58 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:39:58 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:39:58 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:39:58.606 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:39:59 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:39:59 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:39:59 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:39:59.261 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:39:59 np0005548731 nova_compute[232433]: 2025-12-06 08:39:59.749 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:40:00 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:40:00 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:40:00 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:40:00.608 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:40:00 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:40:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:40:00.942 143965 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:40:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:40:00.942 143965 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:40:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:40:00.942 143965 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:40:01 np0005548731 nova_compute[232433]: 2025-12-06 08:40:01.178 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:40:01 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:40:01 np0005548731 ceph-mon[77458]: overall HEALTH_OK
Dec  6 03:40:01 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:40:01 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:40:01.265 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:40:02 np0005548731 ceph-mon[77458]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #202. Immutable memtables: 0.
Dec  6 03:40:02 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:40:02.289805) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec  6 03:40:02 np0005548731 ceph-mon[77458]: rocksdb: [db/flush_job.cc:856] [default] [JOB 129] Flushing memtable with next log file: 202
Dec  6 03:40:02 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765010402289831, "job": 129, "event": "flush_started", "num_memtables": 1, "num_entries": 307, "num_deletes": 251, "total_data_size": 122934, "memory_usage": 128888, "flush_reason": "Manual Compaction"}
Dec  6 03:40:02 np0005548731 ceph-mon[77458]: rocksdb: [db/flush_job.cc:885] [default] [JOB 129] Level-0 flush table #203: started
Dec  6 03:40:02 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765010402291813, "cf_name": "default", "job": 129, "event": "table_file_creation", "file_number": 203, "file_size": 80391, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 99128, "largest_seqno": 99430, "table_properties": {"data_size": 78459, "index_size": 159, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 709, "raw_key_size": 5111, "raw_average_key_size": 18, "raw_value_size": 74592, "raw_average_value_size": 269, "num_data_blocks": 7, "num_entries": 277, "num_filter_entries": 277, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765010398, "oldest_key_time": 1765010398, "file_creation_time": 1765010402, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bc01f9a1-fda8-4008-aee1-1fa5ff4371a1", "db_session_id": "CWBL8WMDM4D79BU86E9T", "orig_file_number": 203, "seqno_to_time_mapping": "N/A"}}
Dec  6 03:40:02 np0005548731 ceph-mon[77458]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 129] Flush lasted 2075 microseconds, and 623 cpu microseconds.
Dec  6 03:40:02 np0005548731 ceph-mon[77458]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  6 03:40:02 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:40:02.291879) [db/flush_job.cc:967] [default] [JOB 129] Level-0 flush table #203: 80391 bytes OK
Dec  6 03:40:02 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:40:02.291894) [db/memtable_list.cc:519] [default] Level-0 commit table #203 started
Dec  6 03:40:02 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:40:02.292939) [db/memtable_list.cc:722] [default] Level-0 commit table #203: memtable #1 done
Dec  6 03:40:02 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:40:02.292951) EVENT_LOG_v1 {"time_micros": 1765010402292948, "job": 129, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec  6 03:40:02 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:40:02.292961) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec  6 03:40:02 np0005548731 ceph-mon[77458]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 129] Try to delete WAL files size 120714, prev total WAL file size 120714, number of live WAL files 2.
Dec  6 03:40:02 np0005548731 ceph-mon[77458]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000199.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  6 03:40:02 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:40:02.293257) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730038353334' seq:72057594037927935, type:22 .. '7061786F730038373836' seq:0, type:0; will stop at (end)
Dec  6 03:40:02 np0005548731 ceph-mon[77458]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 130] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec  6 03:40:02 np0005548731 ceph-mon[77458]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 129 Base level 0, inputs: [203(78KB)], [201(16MB)]
Dec  6 03:40:02 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765010402293305, "job": 130, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [203], "files_L6": [201], "score": -1, "input_data_size": 17083755, "oldest_snapshot_seqno": -1}
Dec  6 03:40:02 np0005548731 ceph-mon[77458]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 130] Generated table #204: 12588 keys, 14953401 bytes, temperature: kUnknown
Dec  6 03:40:02 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765010402403018, "cf_name": "default", "job": 130, "event": "table_file_creation", "file_number": 204, "file_size": 14953401, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 14874394, "index_size": 46446, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 31493, "raw_key_size": 334973, "raw_average_key_size": 26, "raw_value_size": 14656628, "raw_average_value_size": 1164, "num_data_blocks": 1752, "num_entries": 12588, "num_filter_entries": 12588, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765002564, "oldest_key_time": 0, "file_creation_time": 1765010402, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bc01f9a1-fda8-4008-aee1-1fa5ff4371a1", "db_session_id": "CWBL8WMDM4D79BU86E9T", "orig_file_number": 204, "seqno_to_time_mapping": "N/A"}}
Dec  6 03:40:02 np0005548731 ceph-mon[77458]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  6 03:40:02 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:40:02.403285) [db/compaction/compaction_job.cc:1663] [default] [JOB 130] Compacted 1@0 + 1@6 files to L6 => 14953401 bytes
Dec  6 03:40:02 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:40:02.408837) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 155.6 rd, 136.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.1, 16.2 +0.0 blob) out(14.3 +0.0 blob), read-write-amplify(398.5) write-amplify(186.0) OK, records in: 13098, records dropped: 510 output_compression: NoCompression
Dec  6 03:40:02 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:40:02.408856) EVENT_LOG_v1 {"time_micros": 1765010402408848, "job": 130, "event": "compaction_finished", "compaction_time_micros": 109788, "compaction_time_cpu_micros": 33093, "output_level": 6, "num_output_files": 1, "total_output_size": 14953401, "num_input_records": 13098, "num_output_records": 12588, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec  6 03:40:02 np0005548731 ceph-mon[77458]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000203.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  6 03:40:02 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765010402408976, "job": 130, "event": "table_file_deletion", "file_number": 203}
Dec  6 03:40:02 np0005548731 ceph-mon[77458]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000201.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  6 03:40:02 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765010402411749, "job": 130, "event": "table_file_deletion", "file_number": 201}
Dec  6 03:40:02 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:40:02.293163) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 03:40:02 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:40:02.411847) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 03:40:02 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:40:02.411854) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 03:40:02 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:40:02.411856) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 03:40:02 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:40:02.411858) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 03:40:02 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:40:02.411860) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 03:40:02 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:40:02 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:40:02 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:40:02.610 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:40:03 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:40:03 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:40:03 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:40:03.267 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:40:03 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 03:40:03 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 03:40:04 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:40:04 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:40:04 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:40:04.612 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:40:04 np0005548731 nova_compute[232433]: 2025-12-06 08:40:04.751 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:40:05 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:40:05 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:40:05 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:40:05.270 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:40:05 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:40:06 np0005548731 nova_compute[232433]: 2025-12-06 08:40:06.181 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:40:06 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:40:06 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:40:06 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:40:06.614 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:40:07 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:40:07 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:40:07 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:40:07.272 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:40:08 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:40:08 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:40:08 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:40:08.615 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:40:09 np0005548731 nova_compute[232433]: 2025-12-06 08:40:09.164 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:40:09 np0005548731 nova_compute[232433]: 2025-12-06 08:40:09.164 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  6 03:40:09 np0005548731 nova_compute[232433]: 2025-12-06 08:40:09.164 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  6 03:40:09 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  6 03:40:09 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3340009868' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  6 03:40:09 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  6 03:40:09 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3340009868' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  6 03:40:09 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:40:09 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:40:09 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:40:09.275 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:40:09 np0005548731 nova_compute[232433]: 2025-12-06 08:40:09.291 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Dec  6 03:40:09 np0005548731 nova_compute[232433]: 2025-12-06 08:40:09.755 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:40:10 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:40:10 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:40:10 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:40:10.618 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:40:10 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:40:11 np0005548731 nova_compute[232433]: 2025-12-06 08:40:11.105 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:40:11 np0005548731 nova_compute[232433]: 2025-12-06 08:40:11.184 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:40:11 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:40:11 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:40:11 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:40:11.277 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:40:12 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:40:12 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:40:12 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:40:12.619 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:40:13 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:40:13 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:40:13 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:40:13.281 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:40:14 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:40:14 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:40:14 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:40:14.622 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:40:14 np0005548731 nova_compute[232433]: 2025-12-06 08:40:14.757 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:40:15 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:40:15 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:40:15 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:40:15.283 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:40:15 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:40:16 np0005548731 nova_compute[232433]: 2025-12-06 08:40:16.229 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:40:16 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:40:16 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:40:16 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:40:16.623 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:40:17 np0005548731 nova_compute[232433]: 2025-12-06 08:40:17.105 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:40:17 np0005548731 nova_compute[232433]: 2025-12-06 08:40:17.105 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  6 03:40:17 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:40:17 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 03:40:17 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:40:17.286 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 03:40:17 np0005548731 podman[362252]: 2025-12-06 08:40:17.891318163 +0000 UTC m=+0.053356919 container health_status 26c2ce9bdb83c6db5ccad7f5b2a7cb4e9354ab0235ad2f942ea42c8feda2142b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS)
Dec  6 03:40:17 np0005548731 podman[362253]: 2025-12-06 08:40:17.916860585 +0000 UTC m=+0.075251222 container health_status a118c3becf56cfbcae208065771ebf10a65162bb7b96389971293e534823fb78 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Dec  6 03:40:17 np0005548731 podman[362254]: 2025-12-06 08:40:17.922525853 +0000 UTC m=+0.077163609 container health_status ae4fd08cdec31d6ae2a6b91ffff7deb6ca5a8375cb52ee4b685cc6f25796824e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Dec  6 03:40:18 np0005548731 nova_compute[232433]: 2025-12-06 08:40:18.105 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:40:18 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:40:18 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:40:18 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:40:18.625 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:40:19 np0005548731 nova_compute[232433]: 2025-12-06 08:40:19.099 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:40:19 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:40:19 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:40:19 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:40:19.289 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:40:19 np0005548731 nova_compute[232433]: 2025-12-06 08:40:19.759 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:40:20 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:40:20 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:40:20 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:40:20.627 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:40:20 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:40:21 np0005548731 nova_compute[232433]: 2025-12-06 08:40:21.105 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:40:21 np0005548731 nova_compute[232433]: 2025-12-06 08:40:21.136 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:40:21 np0005548731 nova_compute[232433]: 2025-12-06 08:40:21.136 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:40:21 np0005548731 nova_compute[232433]: 2025-12-06 08:40:21.137 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:40:21 np0005548731 nova_compute[232433]: 2025-12-06 08:40:21.137 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  6 03:40:21 np0005548731 nova_compute[232433]: 2025-12-06 08:40:21.137 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 03:40:21 np0005548731 nova_compute[232433]: 2025-12-06 08:40:21.264 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:40:21 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:40:21 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:40:21 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:40:21.291 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:40:21 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 03:40:21 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3187170638' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 03:40:21 np0005548731 nova_compute[232433]: 2025-12-06 08:40:21.617 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.479s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 03:40:21 np0005548731 nova_compute[232433]: 2025-12-06 08:40:21.752 232437 WARNING nova.virt.libvirt.driver [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  6 03:40:21 np0005548731 nova_compute[232433]: 2025-12-06 08:40:21.753 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4121MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, 
"label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  6 03:40:21 np0005548731 nova_compute[232433]: 2025-12-06 08:40:21.753 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:40:21 np0005548731 nova_compute[232433]: 2025-12-06 08:40:21.754 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:40:21 np0005548731 nova_compute[232433]: 2025-12-06 08:40:21.909 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  6 03:40:21 np0005548731 nova_compute[232433]: 2025-12-06 08:40:21.910 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  6 03:40:21 np0005548731 nova_compute[232433]: 2025-12-06 08:40:21.929 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 03:40:22 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 03:40:22 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2298997144' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 03:40:22 np0005548731 nova_compute[232433]: 2025-12-06 08:40:22.373 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.444s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 03:40:22 np0005548731 nova_compute[232433]: 2025-12-06 08:40:22.380 232437 DEBUG nova.compute.provider_tree [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Inventory has not changed in ProviderTree for provider: 6d00757a-082f-486d-ae84-869a2ba2e6e7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  6 03:40:22 np0005548731 nova_compute[232433]: 2025-12-06 08:40:22.407 232437 DEBUG nova.scheduler.client.report [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Inventory has not changed for provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  6 03:40:22 np0005548731 nova_compute[232433]: 2025-12-06 08:40:22.408 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  6 03:40:22 np0005548731 nova_compute[232433]: 2025-12-06 08:40:22.408 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.655s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:40:22 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:40:22 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:40:22 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:40:22.629 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:40:23 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:40:23 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:40:23 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:40:23.293 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:40:23 np0005548731 nova_compute[232433]: 2025-12-06 08:40:23.409 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:40:23 np0005548731 nova_compute[232433]: 2025-12-06 08:40:23.409 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:40:24 np0005548731 nova_compute[232433]: 2025-12-06 08:40:24.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:40:24 np0005548731 nova_compute[232433]: 2025-12-06 08:40:24.105 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:40:24 np0005548731 nova_compute[232433]: 2025-12-06 08:40:24.105 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Dec  6 03:40:24 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:40:24 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:40:24 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:40:24.631 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:40:24 np0005548731 nova_compute[232433]: 2025-12-06 08:40:24.762 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:40:25 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:40:25 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 03:40:25 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:40:25.296 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 03:40:25 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:40:26 np0005548731 nova_compute[232433]: 2025-12-06 08:40:26.105 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:40:26 np0005548731 nova_compute[232433]: 2025-12-06 08:40:26.267 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:40:26 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:40:26 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:40:26 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:40:26.633 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:40:27 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:40:27 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:40:27 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:40:27.299 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:40:28 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:40:28 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:40:28 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:40:28.635 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:40:29 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:40:29 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:40:29 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:40:29.301 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:40:29 np0005548731 nova_compute[232433]: 2025-12-06 08:40:29.764 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:40:30 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:40:30 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:40:30 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:40:30.637 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:40:30 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:40:31 np0005548731 ceph-osd[80147]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec  6 03:40:31 np0005548731 ceph-osd[80147]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 7802.4 total, 600.0 interval#012Cumulative writes: 81K writes, 325K keys, 81K commit groups, 1.0 writes per commit group, ingest: 0.33 GB, 0.04 MB/s#012Cumulative WAL: 81K writes, 30K syncs, 2.66 writes per sync, written: 0.33 GB, 0.04 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 672 writes, 1020 keys, 672 commit groups, 1.0 writes per commit group, ingest: 0.33 MB, 0.00 MB/s#012Interval WAL: 672 writes, 336 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec  6 03:40:31 np0005548731 nova_compute[232433]: 2025-12-06 08:40:31.270 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:40:31 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:40:31 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:40:31 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:40:31.303 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:40:32 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:40:32 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:40:32 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:40:32.639 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:40:33 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:40:33 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:40:33 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:40:33.307 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:40:34 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:40:34 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:40:34 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:40:34.640 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:40:34 np0005548731 nova_compute[232433]: 2025-12-06 08:40:34.770 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:40:35 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:40:35 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:40:35 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:40:35.309 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:40:35 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:40:36 np0005548731 nova_compute[232433]: 2025-12-06 08:40:36.272 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:40:36 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:40:36 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:40:36 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:40:36.643 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:40:37 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:40:37 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:40:37 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:40:37.312 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:40:38 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:40:38 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 03:40:38 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:40:38.645 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 03:40:39 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:40:39 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:40:39 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:40:39.316 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:40:39 np0005548731 nova_compute[232433]: 2025-12-06 08:40:39.772 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:40:40 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:40:40 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:40:40 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:40:40.648 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:40:40 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:40:41 np0005548731 nova_compute[232433]: 2025-12-06 08:40:41.275 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:40:41 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:40:41 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:40:41 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:40:41.318 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:40:42 np0005548731 nova_compute[232433]: 2025-12-06 08:40:42.434 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:40:42 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:40:42 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:40:42 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:40:42.649 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:40:43 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:40:43 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:40:43 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:40:43.321 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:40:44 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:40:44 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:40:44 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:40:44.650 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:40:44 np0005548731 nova_compute[232433]: 2025-12-06 08:40:44.774 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:40:45 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:40:45 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:40:45 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:40:45.324 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:40:45 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:40:46 np0005548731 nova_compute[232433]: 2025-12-06 08:40:46.278 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:40:46 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:40:46 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:40:46 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:40:46.653 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:40:47 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:40:47 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:40:47 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:40:47.327 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:40:48 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:40:48 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:40:48 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:40:48.654 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:40:48 np0005548731 podman[362426]: 2025-12-06 08:40:48.909261846 +0000 UTC m=+0.058125556 container health_status ae4fd08cdec31d6ae2a6b91ffff7deb6ca5a8375cb52ee4b685cc6f25796824e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, managed_by=edpm_ansible)
Dec  6 03:40:48 np0005548731 podman[362424]: 2025-12-06 08:40:48.909644445 +0000 UTC m=+0.062447451 container health_status 26c2ce9bdb83c6db5ccad7f5b2a7cb4e9354ab0235ad2f942ea42c8feda2142b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Dec  6 03:40:48 np0005548731 podman[362425]: 2025-12-06 08:40:48.937760379 +0000 UTC m=+0.079647859 container health_status a118c3becf56cfbcae208065771ebf10a65162bb7b96389971293e534823fb78 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3)
Dec  6 03:40:49 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:40:49 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:40:49 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:40:49.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:40:49 np0005548731 nova_compute[232433]: 2025-12-06 08:40:49.774 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:40:50 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:40:50 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:40:50 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:40:50.656 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:40:50 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:40:51 np0005548731 nova_compute[232433]: 2025-12-06 08:40:51.281 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:40:51 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:40:51 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:40:51 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:40:51.332 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:40:52 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:40:52 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:40:52 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:40:52.659 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:40:53 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:40:53 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:40:53 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:40:53.335 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:40:54 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:40:54 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:40:54 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:40:54.661 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:40:54 np0005548731 nova_compute[232433]: 2025-12-06 08:40:54.777 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:40:55 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:40:55 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:40:55 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:40:55.338 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:40:55 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:40:56 np0005548731 nova_compute[232433]: 2025-12-06 08:40:56.283 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:40:56 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:40:56 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:40:56 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:40:56.663 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:40:57 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:40:57 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:40:57 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:40:57.340 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:40:58 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:40:58 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:40:58 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:40:58.664 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:40:59 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:40:59 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 03:40:59 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:40:59.342 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 03:40:59 np0005548731 nova_compute[232433]: 2025-12-06 08:40:59.779 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:41:00 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:41:00 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:41:00 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:41:00.665 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:41:00 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:41:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:41:00.942 143965 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:41:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:41:00.943 143965 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:41:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:41:00.943 143965 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:41:01 np0005548731 nova_compute[232433]: 2025-12-06 08:41:01.286 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:41:01 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:41:01 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 03:41:01 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:41:01.343 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 03:41:02 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:41:02 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:41:02 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:41:02.667 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:41:03 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:41:03 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:41:03 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:41:03.346 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:41:04 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:41:04 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:41:04 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:41:04.669 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:41:04 np0005548731 nova_compute[232433]: 2025-12-06 08:41:04.781 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:41:05 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:41:05 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:41:05 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:41:05.350 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:41:05 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:41:06 np0005548731 nova_compute[232433]: 2025-12-06 08:41:06.287 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:41:06 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:41:06 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:41:06 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:41:06.671 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:41:07 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 03:41:07 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 03:41:07 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  6 03:41:07 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 03:41:07 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  6 03:41:07 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:41:07 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:41:07 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:41:07.353 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:41:08 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:41:08 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:41:08 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:41:08.673 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:41:09 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:41:09 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:41:09 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:41:09.355 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:41:09 np0005548731 nova_compute[232433]: 2025-12-06 08:41:09.782 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:41:10 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:41:10 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:41:10 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:41:10.675 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:41:10 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:41:11 np0005548731 nova_compute[232433]: 2025-12-06 08:41:11.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:41:11 np0005548731 nova_compute[232433]: 2025-12-06 08:41:11.105 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  6 03:41:11 np0005548731 nova_compute[232433]: 2025-12-06 08:41:11.105 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  6 03:41:11 np0005548731 nova_compute[232433]: 2025-12-06 08:41:11.291 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:41:11 np0005548731 nova_compute[232433]: 2025-12-06 08:41:11.346 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Dec  6 03:41:11 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:41:11 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:41:11 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:41:11.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:41:12 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:41:12 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:41:12 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:41:12.677 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:41:13 np0005548731 nova_compute[232433]: 2025-12-06 08:41:13.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:41:13 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:41:13 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:41:13 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:41:13.362 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:41:14 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:41:14 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 03:41:14 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:41:14.678 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 03:41:14 np0005548731 nova_compute[232433]: 2025-12-06 08:41:14.783 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:41:15 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:41:15 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:41:15 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:41:15.364 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:41:15 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:41:15 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 03:41:15 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 03:41:16 np0005548731 nova_compute[232433]: 2025-12-06 08:41:16.293 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:41:16 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:41:16 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:41:16 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:41:16.680 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:41:17 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:41:17 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 03:41:17 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:41:17.366 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 03:41:18 np0005548731 nova_compute[232433]: 2025-12-06 08:41:18.105 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:41:18 np0005548731 nova_compute[232433]: 2025-12-06 08:41:18.106 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  6 03:41:18 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:41:18 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:41:18 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:41:18.683 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:41:19 np0005548731 nova_compute[232433]: 2025-12-06 08:41:19.099 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:41:19 np0005548731 nova_compute[232433]: 2025-12-06 08:41:19.103 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:41:19 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:41:19 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 03:41:19 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:41:19.369 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 03:41:19 np0005548731 nova_compute[232433]: 2025-12-06 08:41:19.787 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:41:19 np0005548731 podman[362784]: 2025-12-06 08:41:19.90137174 +0000 UTC m=+0.053403021 container health_status ae4fd08cdec31d6ae2a6b91ffff7deb6ca5a8375cb52ee4b685cc6f25796824e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec  6 03:41:19 np0005548731 podman[362782]: 2025-12-06 08:41:19.919311136 +0000 UTC m=+0.077035085 container health_status 26c2ce9bdb83c6db5ccad7f5b2a7cb4e9354ab0235ad2f942ea42c8feda2142b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, tcib_managed=true)
Dec  6 03:41:19 np0005548731 podman[362783]: 2025-12-06 08:41:19.929612017 +0000 UTC m=+0.083085443 container health_status a118c3becf56cfbcae208065771ebf10a65162bb7b96389971293e534823fb78 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251125, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true)
Dec  6 03:41:20 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:41:20 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:41:20 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:41:20.685 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:41:20 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:41:21 np0005548731 nova_compute[232433]: 2025-12-06 08:41:21.106 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:41:21 np0005548731 nova_compute[232433]: 2025-12-06 08:41:21.325 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:41:21 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:41:21 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:41:21 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:41:21.372 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:41:22 np0005548731 nova_compute[232433]: 2025-12-06 08:41:22.012 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:41:22 np0005548731 nova_compute[232433]: 2025-12-06 08:41:22.012 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:41:22 np0005548731 nova_compute[232433]: 2025-12-06 08:41:22.013 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:41:22 np0005548731 nova_compute[232433]: 2025-12-06 08:41:22.013 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  6 03:41:22 np0005548731 nova_compute[232433]: 2025-12-06 08:41:22.013 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 03:41:22 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 03:41:22 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/4159804911' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 03:41:22 np0005548731 nova_compute[232433]: 2025-12-06 08:41:22.438 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.425s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 03:41:22 np0005548731 nova_compute[232433]: 2025-12-06 08:41:22.603 232437 WARNING nova.virt.libvirt.driver [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  6 03:41:22 np0005548731 nova_compute[232433]: 2025-12-06 08:41:22.604 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4093MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  6 03:41:22 np0005548731 nova_compute[232433]: 2025-12-06 08:41:22.605 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:41:22 np0005548731 nova_compute[232433]: 2025-12-06 08:41:22.605 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:41:22 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:41:22 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:41:22 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:41:22.687 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:41:23 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:41:23 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:41:23 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:41:23.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:41:23 np0005548731 nova_compute[232433]: 2025-12-06 08:41:23.468 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  6 03:41:23 np0005548731 nova_compute[232433]: 2025-12-06 08:41:23.468 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  6 03:41:23 np0005548731 nova_compute[232433]: 2025-12-06 08:41:23.481 232437 DEBUG nova.scheduler.client.report [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Refreshing inventories for resource provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Dec  6 03:41:23 np0005548731 nova_compute[232433]: 2025-12-06 08:41:23.496 232437 DEBUG nova.scheduler.client.report [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Updating ProviderTree inventory for provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Dec  6 03:41:23 np0005548731 nova_compute[232433]: 2025-12-06 08:41:23.497 232437 DEBUG nova.compute.provider_tree [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Updating inventory in ProviderTree for provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Dec  6 03:41:23 np0005548731 nova_compute[232433]: 2025-12-06 08:41:23.520 232437 DEBUG nova.scheduler.client.report [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Refreshing aggregate associations for resource provider 6d00757a-082f-486d-ae84-869a2ba2e6e7, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Dec  6 03:41:23 np0005548731 nova_compute[232433]: 2025-12-06 08:41:23.549 232437 DEBUG nova.scheduler.client.report [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Refreshing trait associations for resource provider 6d00757a-082f-486d-ae84-869a2ba2e6e7, traits: COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_STORAGE_BUS_USB,COMPUTE_ACCELERATORS,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_TRUSTED_CERTS,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_SSE,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_SSE2,HW_CPU_X86_SSE41,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_STORAGE_BUS_SATA,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_SECURITY_TPM_1_2,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_VOLUME_EXTEND,COMPUTE_NODE,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_SECURITY_TPM_2_0,COMPUTE_VIOMMU_MODEL_AUTO,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_SSE42,COMPUTE_STORAGE_BUS_FDC,COMPUTE_RESCUE_BFV,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_IMAGE_TYPE_ARI _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Dec  6 03:41:23 np0005548731 nova_compute[232433]: 2025-12-06 08:41:23.584 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 03:41:23 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 03:41:23 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3839346878' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 03:41:23 np0005548731 nova_compute[232433]: 2025-12-06 08:41:23.997 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.412s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 03:41:24 np0005548731 nova_compute[232433]: 2025-12-06 08:41:24.002 232437 DEBUG nova.compute.provider_tree [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Inventory has not changed in ProviderTree for provider: 6d00757a-082f-486d-ae84-869a2ba2e6e7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  6 03:41:24 np0005548731 nova_compute[232433]: 2025-12-06 08:41:24.018 232437 DEBUG nova.scheduler.client.report [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Inventory has not changed for provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  6 03:41:24 np0005548731 nova_compute[232433]: 2025-12-06 08:41:24.020 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  6 03:41:24 np0005548731 nova_compute[232433]: 2025-12-06 08:41:24.020 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.415s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:41:24 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:41:24 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:41:24 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:41:24.689 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:41:24 np0005548731 nova_compute[232433]: 2025-12-06 08:41:24.788 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:41:25 np0005548731 nova_compute[232433]: 2025-12-06 08:41:25.020 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:41:25 np0005548731 nova_compute[232433]: 2025-12-06 08:41:25.021 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:41:25 np0005548731 nova_compute[232433]: 2025-12-06 08:41:25.106 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:41:25 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:41:25 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 03:41:25 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:41:25.378 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 03:41:25 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:41:26 np0005548731 nova_compute[232433]: 2025-12-06 08:41:26.327 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:41:26 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:41:26 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:41:26 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:41:26.690 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:41:27 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:41:27 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:41:27 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:41:27.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:41:28 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:41:28 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:41:28 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:41:28.692 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:41:29 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:41:29 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:41:29 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:41:29.384 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:41:29 np0005548731 nova_compute[232433]: 2025-12-06 08:41:29.790 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:41:30 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:41:30 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:41:30 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:41:30.694 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:41:30 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:41:31 np0005548731 nova_compute[232433]: 2025-12-06 08:41:31.330 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:41:31 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:41:31 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:41:31 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:41:31.386 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:41:32 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:41:32 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:41:32 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:41:32.696 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:41:33 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:41:33 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:41:33 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:41:33.390 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:41:34 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:41:34 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:41:34 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:41:34.698 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:41:34 np0005548731 nova_compute[232433]: 2025-12-06 08:41:34.791 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:41:35 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:41:35 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:41:35 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:41:35.392 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:41:35 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:41:36 np0005548731 nova_compute[232433]: 2025-12-06 08:41:36.333 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:41:36 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:41:36 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 03:41:36 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:41:36.699 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 03:41:37 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:41:37 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:41:37 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:41:37.395 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:41:38 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:41:38 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:41:38 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:41:38.702 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:41:39 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:41:39 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:41:39 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:41:39.399 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:41:39 np0005548731 nova_compute[232433]: 2025-12-06 08:41:39.794 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:41:40 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:41:40 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:41:40 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:41:40.703 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:41:40 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:41:41 np0005548731 nova_compute[232433]: 2025-12-06 08:41:41.336 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:41:41 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:41:41 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:41:41 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:41:41.402 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:41:42 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:41:42 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:41:42 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:41:42.704 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:41:43 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:41:43 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:41:43 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:41:43.403 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:41:44 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:41:44 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:41:44 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:41:44.706 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:41:44 np0005548731 nova_compute[232433]: 2025-12-06 08:41:44.796 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:41:45 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:41:45 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:41:45 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:41:45.406 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:41:45 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:41:46 np0005548731 nova_compute[232433]: 2025-12-06 08:41:46.337 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:41:46 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:41:46 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:41:46 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:41:46.708 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:41:47 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:41:47 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:41:47 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:41:47.409 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:41:48 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:41:48 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:41:48 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:41:48.710 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:41:49 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:41:49 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:41:49 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:41:49.412 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:41:49 np0005548731 nova_compute[232433]: 2025-12-06 08:41:49.799 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:41:50 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:41:50 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 03:41:50 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:41:50.711 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 03:41:50 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:41:50 np0005548731 podman[362957]: 2025-12-06 08:41:50.893177832 +0000 UTC m=+0.045408967 container health_status 26c2ce9bdb83c6db5ccad7f5b2a7cb4e9354ab0235ad2f942ea42c8feda2142b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3)
Dec  6 03:41:50 np0005548731 podman[362959]: 2025-12-06 08:41:50.903259227 +0000 UTC m=+0.048430370 container health_status ae4fd08cdec31d6ae2a6b91ffff7deb6ca5a8375cb52ee4b685cc6f25796824e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, io.buildah.version=1.41.3)
Dec  6 03:41:50 np0005548731 podman[362958]: 2025-12-06 08:41:50.926435171 +0000 UTC m=+0.075036857 container health_status a118c3becf56cfbcae208065771ebf10a65162bb7b96389971293e534823fb78 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller)
Dec  6 03:41:51 np0005548731 nova_compute[232433]: 2025-12-06 08:41:51.339 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:41:51 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:41:51 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:41:51 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:41:51.416 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:41:52 np0005548731 ceph-mon[77458]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #205. Immutable memtables: 0.
Dec  6 03:41:52 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:41:52.491305) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec  6 03:41:52 np0005548731 ceph-mon[77458]: rocksdb: [db/flush_job.cc:856] [default] [JOB 131] Flushing memtable with next log file: 205
Dec  6 03:41:52 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765010512491335, "job": 131, "event": "flush_started", "num_memtables": 1, "num_entries": 1271, "num_deletes": 251, "total_data_size": 2920639, "memory_usage": 2944712, "flush_reason": "Manual Compaction"}
Dec  6 03:41:52 np0005548731 ceph-mon[77458]: rocksdb: [db/flush_job.cc:885] [default] [JOB 131] Level-0 flush table #206: started
Dec  6 03:41:52 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765010512500698, "cf_name": "default", "job": 131, "event": "table_file_creation", "file_number": 206, "file_size": 1156403, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 99435, "largest_seqno": 100701, "table_properties": {"data_size": 1152133, "index_size": 1793, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1413, "raw_key_size": 11184, "raw_average_key_size": 20, "raw_value_size": 1142950, "raw_average_value_size": 2116, "num_data_blocks": 81, "num_entries": 540, "num_filter_entries": 540, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765010402, "oldest_key_time": 1765010402, "file_creation_time": 1765010512, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bc01f9a1-fda8-4008-aee1-1fa5ff4371a1", "db_session_id": "CWBL8WMDM4D79BU86E9T", "orig_file_number": 206, "seqno_to_time_mapping": "N/A"}}
Dec  6 03:41:52 np0005548731 ceph-mon[77458]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 131] Flush lasted 9440 microseconds, and 5214 cpu microseconds.
Dec  6 03:41:52 np0005548731 ceph-mon[77458]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  6 03:41:52 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:41:52.500743) [db/flush_job.cc:967] [default] [JOB 131] Level-0 flush table #206: 1156403 bytes OK
Dec  6 03:41:52 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:41:52.500762) [db/memtable_list.cc:519] [default] Level-0 commit table #206 started
Dec  6 03:41:52 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:41:52.503054) [db/memtable_list.cc:722] [default] Level-0 commit table #206: memtable #1 done
Dec  6 03:41:52 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:41:52.503070) EVENT_LOG_v1 {"time_micros": 1765010512503064, "job": 131, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec  6 03:41:52 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:41:52.503089) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec  6 03:41:52 np0005548731 ceph-mon[77458]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 131] Try to delete WAL files size 2914683, prev total WAL file size 2914683, number of live WAL files 2.
Dec  6 03:41:52 np0005548731 ceph-mon[77458]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000202.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  6 03:41:52 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:41:52.504006) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740033353239' seq:72057594037927935, type:22 .. '6D6772737461740033373831' seq:0, type:0; will stop at (end)
Dec  6 03:41:52 np0005548731 ceph-mon[77458]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 132] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec  6 03:41:52 np0005548731 ceph-mon[77458]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 131 Base level 0, inputs: [206(1129KB)], [204(14MB)]
Dec  6 03:41:52 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765010512504041, "job": 132, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [206], "files_L6": [204], "score": -1, "input_data_size": 16109804, "oldest_snapshot_seqno": -1}
Dec  6 03:41:52 np0005548731 ceph-mon[77458]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 132] Generated table #207: 12662 keys, 13008832 bytes, temperature: kUnknown
Dec  6 03:41:52 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765010512586124, "cf_name": "default", "job": 132, "event": "table_file_creation", "file_number": 207, "file_size": 13008832, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12932662, "index_size": 43417, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 31685, "raw_key_size": 336669, "raw_average_key_size": 26, "raw_value_size": 12716680, "raw_average_value_size": 1004, "num_data_blocks": 1629, "num_entries": 12662, "num_filter_entries": 12662, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765002564, "oldest_key_time": 0, "file_creation_time": 1765010512, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bc01f9a1-fda8-4008-aee1-1fa5ff4371a1", "db_session_id": "CWBL8WMDM4D79BU86E9T", "orig_file_number": 207, "seqno_to_time_mapping": "N/A"}}
Dec  6 03:41:52 np0005548731 ceph-mon[77458]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  6 03:41:52 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:41:52.586352) [db/compaction/compaction_job.cc:1663] [default] [JOB 132] Compacted 1@0 + 1@6 files to L6 => 13008832 bytes
Dec  6 03:41:52 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:41:52.587963) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 196.1 rd, 158.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.1, 14.3 +0.0 blob) out(12.4 +0.0 blob), read-write-amplify(25.2) write-amplify(11.2) OK, records in: 13128, records dropped: 466 output_compression: NoCompression
Dec  6 03:41:52 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:41:52.587989) EVENT_LOG_v1 {"time_micros": 1765010512587977, "job": 132, "event": "compaction_finished", "compaction_time_micros": 82152, "compaction_time_cpu_micros": 29727, "output_level": 6, "num_output_files": 1, "total_output_size": 13008832, "num_input_records": 13128, "num_output_records": 12662, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec  6 03:41:52 np0005548731 ceph-mon[77458]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000206.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  6 03:41:52 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765010512588430, "job": 132, "event": "table_file_deletion", "file_number": 206}
Dec  6 03:41:52 np0005548731 ceph-mon[77458]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000204.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  6 03:41:52 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765010512592077, "job": 132, "event": "table_file_deletion", "file_number": 204}
Dec  6 03:41:52 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:41:52.503941) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 03:41:52 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:41:52.592107) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 03:41:52 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:41:52.592111) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 03:41:52 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:41:52.592113) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 03:41:52 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:41:52.592114) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 03:41:52 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:41:52.592116) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 03:41:52 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:41:52 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:41:52 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:41:52.714 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:41:53 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:41:53 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:41:53 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:41:53.418 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:41:54 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:41:54 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:41:54 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:41:54.716 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:41:54 np0005548731 nova_compute[232433]: 2025-12-06 08:41:54.800 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:41:55 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:41:55 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:41:55 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:41:55.421 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:41:55 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:41:56 np0005548731 nova_compute[232433]: 2025-12-06 08:41:56.342 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:41:56 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:41:56 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:41:56 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:41:56.717 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:41:57 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:41:57 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:41:57 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:41:57.424 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:41:58 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:41:58 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:41:58 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:41:58.720 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:41:59 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:41:59 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:41:59 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:41:59.427 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:41:59 np0005548731 nova_compute[232433]: 2025-12-06 08:41:59.801 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:42:00 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:42:00 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:42:00 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:42:00.722 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:42:00 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:42:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:42:00.944 143965 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:42:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:42:00.944 143965 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:42:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:42:00.944 143965 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:42:01 np0005548731 nova_compute[232433]: 2025-12-06 08:42:01.389 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:42:01 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:42:01 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:42:01 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:42:01.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:42:02 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:42:02 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:42:02 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:42:02.725 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:42:03 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:42:03 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 03:42:03 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:42:03.433 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 03:42:04 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:42:04 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 03:42:04 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:42:04.726 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 03:42:04 np0005548731 nova_compute[232433]: 2025-12-06 08:42:04.803 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:42:05 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:42:05 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:42:05 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:42:05.436 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:42:05 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:42:06 np0005548731 nova_compute[232433]: 2025-12-06 08:42:06.392 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:42:06 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:42:06 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 03:42:06 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:42:06.728 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 03:42:07 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:42:07 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:42:07 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:42:07.439 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:42:08 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:42:08 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:42:08 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:42:08.730 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:42:09 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:42:09 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:42:09 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:42:09.441 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:42:09 np0005548731 nova_compute[232433]: 2025-12-06 08:42:09.805 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:42:10 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:42:10 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 03:42:10 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:42:10.733 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 03:42:10 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:42:11 np0005548731 nova_compute[232433]: 2025-12-06 08:42:11.396 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:42:11 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:42:11 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:42:11 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:42:11.445 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:42:12 np0005548731 nova_compute[232433]: 2025-12-06 08:42:12.105 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:42:12 np0005548731 nova_compute[232433]: 2025-12-06 08:42:12.105 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  6 03:42:12 np0005548731 nova_compute[232433]: 2025-12-06 08:42:12.105 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  6 03:42:12 np0005548731 nova_compute[232433]: 2025-12-06 08:42:12.120 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Dec  6 03:42:12 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:42:12 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:42:12 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:42:12.736 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:42:13 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:42:13 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:42:13 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:42:13.447 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:42:14 np0005548731 nova_compute[232433]: 2025-12-06 08:42:14.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:42:14 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:42:14 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:42:14 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:42:14.738 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:42:14 np0005548731 nova_compute[232433]: 2025-12-06 08:42:14.807 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:42:15 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:42:15 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:42:15 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:42:15.451 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:42:15 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:42:16 np0005548731 nova_compute[232433]: 2025-12-06 08:42:16.398 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:42:16 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:42:16 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:42:16 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:42:16.739 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:42:17 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 03:42:17 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 03:42:17 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 03:42:17 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 03:42:17 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:42:17 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:42:17 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:42:17.454 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:42:18 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  6 03:42:18 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 03:42:18 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  6 03:42:18 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:42:18 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:42:18 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:42:18.742 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:42:19 np0005548731 nova_compute[232433]: 2025-12-06 08:42:19.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:42:19 np0005548731 nova_compute[232433]: 2025-12-06 08:42:19.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:42:19 np0005548731 nova_compute[232433]: 2025-12-06 08:42:19.104 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  6 03:42:19 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:42:19 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:42:19 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:42:19.457 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:42:19 np0005548731 nova_compute[232433]: 2025-12-06 08:42:19.808 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:42:20 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:42:20 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:42:20 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:42:20.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:42:20 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:42:21 np0005548731 nova_compute[232433]: 2025-12-06 08:42:21.099 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:42:21 np0005548731 nova_compute[232433]: 2025-12-06 08:42:21.103 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:42:21 np0005548731 nova_compute[232433]: 2025-12-06 08:42:21.399 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:42:21 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:42:21 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:42:21 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:42:21.460 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:42:21 np0005548731 podman[363266]: 2025-12-06 08:42:21.896479066 +0000 UTC m=+0.058838364 container health_status 26c2ce9bdb83c6db5ccad7f5b2a7cb4e9354ab0235ad2f942ea42c8feda2142b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  6 03:42:21 np0005548731 podman[363267]: 2025-12-06 08:42:21.925225325 +0000 UTC m=+0.085571324 container health_status a118c3becf56cfbcae208065771ebf10a65162bb7b96389971293e534823fb78 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Dec  6 03:42:21 np0005548731 podman[363268]: 2025-12-06 08:42:21.925613804 +0000 UTC m=+0.081658388 container health_status ae4fd08cdec31d6ae2a6b91ffff7deb6ca5a8375cb52ee4b685cc6f25796824e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Dec  6 03:42:22 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:42:22 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:42:22 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:42:22.746 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:42:23 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:42:23 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:42:23 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:42:23.463 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:42:23 np0005548731 nova_compute[232433]: 2025-12-06 08:42:23.687 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:42:23 np0005548731 nova_compute[232433]: 2025-12-06 08:42:23.687 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:42:23 np0005548731 nova_compute[232433]: 2025-12-06 08:42:23.687 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:42:23 np0005548731 nova_compute[232433]: 2025-12-06 08:42:23.688 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  6 03:42:23 np0005548731 nova_compute[232433]: 2025-12-06 08:42:23.688 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 03:42:24 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 03:42:24 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1943936127' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 03:42:24 np0005548731 nova_compute[232433]: 2025-12-06 08:42:24.128 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.440s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 03:42:24 np0005548731 nova_compute[232433]: 2025-12-06 08:42:24.274 232437 WARNING nova.virt.libvirt.driver [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  6 03:42:24 np0005548731 nova_compute[232433]: 2025-12-06 08:42:24.276 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4100MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, 
"label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  6 03:42:24 np0005548731 nova_compute[232433]: 2025-12-06 08:42:24.276 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:42:24 np0005548731 nova_compute[232433]: 2025-12-06 08:42:24.276 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:42:24 np0005548731 nova_compute[232433]: 2025-12-06 08:42:24.745 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  6 03:42:24 np0005548731 nova_compute[232433]: 2025-12-06 08:42:24.745 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  6 03:42:24 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:42:24 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:42:24 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:42:24.749 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:42:24 np0005548731 nova_compute[232433]: 2025-12-06 08:42:24.811 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:42:24 np0005548731 nova_compute[232433]: 2025-12-06 08:42:24.892 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 03:42:25 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 03:42:25 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3695553771' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 03:42:25 np0005548731 nova_compute[232433]: 2025-12-06 08:42:25.319 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.427s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 03:42:25 np0005548731 nova_compute[232433]: 2025-12-06 08:42:25.327 232437 DEBUG nova.compute.provider_tree [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Inventory has not changed in ProviderTree for provider: 6d00757a-082f-486d-ae84-869a2ba2e6e7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  6 03:42:25 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:42:25 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:42:25 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:42:25.466 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:42:25 np0005548731 nova_compute[232433]: 2025-12-06 08:42:25.729 232437 DEBUG nova.scheduler.client.report [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Inventory has not changed for provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  6 03:42:25 np0005548731 nova_compute[232433]: 2025-12-06 08:42:25.731 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  6 03:42:25 np0005548731 nova_compute[232433]: 2025-12-06 08:42:25.731 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.455s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:42:25 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:42:26 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 03:42:26 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 03:42:26 np0005548731 nova_compute[232433]: 2025-12-06 08:42:26.402 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:42:26 np0005548731 nova_compute[232433]: 2025-12-06 08:42:26.733 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:42:26 np0005548731 nova_compute[232433]: 2025-12-06 08:42:26.733 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:42:26 np0005548731 nova_compute[232433]: 2025-12-06 08:42:26.733 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:42:26 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:42:26 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:42:26 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:42:26.750 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:42:27 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:42:27 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:42:27 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:42:27.468 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:42:28 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:42:28 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:42:28 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:42:28.752 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:42:29 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:42:29 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:42:29 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:42:29.471 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:42:29 np0005548731 nova_compute[232433]: 2025-12-06 08:42:29.812 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:42:30 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:42:30 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:42:30 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:42:30.754 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:42:30 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:42:31 np0005548731 nova_compute[232433]: 2025-12-06 08:42:31.405 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:42:31 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:42:31 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:42:31 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:42:31.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:42:32 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:42:32 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:42:32 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:42:32.756 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:42:33 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:42:33 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:42:33 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:42:33.476 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:42:34 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:42:34 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:42:34 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:42:34.758 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:42:34 np0005548731 nova_compute[232433]: 2025-12-06 08:42:34.813 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:42:35 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:42:35 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:42:35 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:42:35.479 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:42:35 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:42:36 np0005548731 nova_compute[232433]: 2025-12-06 08:42:36.408 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:42:36 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:42:36 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:42:36 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:42:36.760 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:42:37 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:42:37 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:42:37 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:42:37.481 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:42:38 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:42:38 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 03:42:38 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:42:38.762 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 03:42:39 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:42:39 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 03:42:39 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:42:39.484 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 03:42:39 np0005548731 nova_compute[232433]: 2025-12-06 08:42:39.816 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:42:40 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:42:40 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:42:40 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:42:40.765 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:42:40 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:42:41 np0005548731 nova_compute[232433]: 2025-12-06 08:42:41.412 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:42:41 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:42:41 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:42:41 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:42:41.487 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:42:42 np0005548731 nova_compute[232433]: 2025-12-06 08:42:42.100 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:42:42 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:42:42 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:42:42 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:42:42.766 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:42:43 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:42:43 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:42:43 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:42:43.490 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:42:44 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:42:44 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:42:44 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:42:44.770 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:42:44 np0005548731 nova_compute[232433]: 2025-12-06 08:42:44.817 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:42:45 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:42:45 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:42:45 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:42:45.494 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:42:45 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:42:46 np0005548731 nova_compute[232433]: 2025-12-06 08:42:46.415 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:42:46 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:42:46 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 03:42:46 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:42:46.771 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 03:42:47 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:42:47 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:42:47 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:42:47.497 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:42:48 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:42:48 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:42:48 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:42:48.774 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:42:49 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:42:49 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:42:49 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:42:49.500 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:42:49 np0005548731 nova_compute[232433]: 2025-12-06 08:42:49.818 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:42:50 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:42:50 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:42:50 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:42:50.776 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:42:50 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:42:51 np0005548731 nova_compute[232433]: 2025-12-06 08:42:51.428 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:42:51 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:42:51 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:42:51 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:42:51.503 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:42:52 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:42:52 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:42:52 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:42:52.777 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:42:52 np0005548731 podman[363488]: 2025-12-06 08:42:52.88830746 +0000 UTC m=+0.053134024 container health_status 26c2ce9bdb83c6db5ccad7f5b2a7cb4e9354ab0235ad2f942ea42c8feda2142b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec  6 03:42:52 np0005548731 podman[363489]: 2025-12-06 08:42:52.913848773 +0000 UTC m=+0.076492733 container health_status a118c3becf56cfbcae208065771ebf10a65162bb7b96389971293e534823fb78 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  6 03:42:52 np0005548731 podman[363490]: 2025-12-06 08:42:52.914323254 +0000 UTC m=+0.063402694 container health_status ae4fd08cdec31d6ae2a6b91ffff7deb6ca5a8375cb52ee4b685cc6f25796824e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  6 03:42:53 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:42:53 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:42:53 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:42:53.505 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:42:54 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:42:54 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 03:42:54 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:42:54.779 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 03:42:54 np0005548731 nova_compute[232433]: 2025-12-06 08:42:54.858 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:42:55 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:42:55 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:42:55 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:42:55.508 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:42:55 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:42:56 np0005548731 nova_compute[232433]: 2025-12-06 08:42:56.431 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:42:56 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:42:56 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:42:56 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:42:56.781 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:42:57 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:42:57 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:42:57 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:42:57.510 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:42:58 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:42:58 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:42:58 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:42:58.784 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:42:59 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:42:59 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:42:59 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:42:59.514 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:42:59 np0005548731 nova_compute[232433]: 2025-12-06 08:42:59.860 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:43:00 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:43:00 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:43:00 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:43:00.787 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:43:00 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:43:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:43:00.945 143965 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:43:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:43:00.945 143965 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:43:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:43:00.945 143965 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:43:01 np0005548731 nova_compute[232433]: 2025-12-06 08:43:01.434 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:43:01 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:43:01 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:43:01 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:43:01.517 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:43:02 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:43:02 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:43:02 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:43:02.789 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:43:03 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:43:03 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:43:03 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:43:03.520 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:43:04 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:43:04 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:43:04 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:43:04.790 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:43:04 np0005548731 nova_compute[232433]: 2025-12-06 08:43:04.862 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:43:05 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:43:05 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:43:05 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:43:05.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:43:05 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:43:06 np0005548731 nova_compute[232433]: 2025-12-06 08:43:06.437 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:43:06 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:43:06 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:43:06 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:43:06.793 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:43:07 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:43:07 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:43:07 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:43:07.525 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:43:08 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:43:08 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:43:08 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:43:08.794 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:43:09 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  6 03:43:09 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3926080530' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  6 03:43:09 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  6 03:43:09 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3926080530' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  6 03:43:09 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:43:09 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:43:09 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:43:09.527 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:43:09 np0005548731 nova_compute[232433]: 2025-12-06 08:43:09.896 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:43:10 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:43:10 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:43:10 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:43:10.796 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:43:10 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:43:11 np0005548731 nova_compute[232433]: 2025-12-06 08:43:11.439 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:43:11 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:43:11 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:43:11 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:43:11.530 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:43:12 np0005548731 nova_compute[232433]: 2025-12-06 08:43:12.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:43:12 np0005548731 nova_compute[232433]: 2025-12-06 08:43:12.105 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  6 03:43:12 np0005548731 nova_compute[232433]: 2025-12-06 08:43:12.105 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  6 03:43:12 np0005548731 nova_compute[232433]: 2025-12-06 08:43:12.138 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Dec  6 03:43:12 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:43:12 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:43:12 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:43:12.798 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:43:13 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:43:13 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:43:13 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:43:13.533 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:43:14 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:43:14 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:43:14 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:43:14.800 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:43:14 np0005548731 nova_compute[232433]: 2025-12-06 08:43:14.900 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:43:15 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:43:15 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:43:15 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:43:15.536 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:43:15 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:43:16 np0005548731 nova_compute[232433]: 2025-12-06 08:43:16.105 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:43:16 np0005548731 nova_compute[232433]: 2025-12-06 08:43:16.442 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:43:16 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:43:16 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:43:16 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:43:16.802 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:43:17 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:43:17 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:43:17 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:43:17.539 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:43:18 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:43:18 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:43:18 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:43:18.804 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:43:19 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:43:19 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:43:19 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:43:19.541 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:43:19 np0005548731 nova_compute[232433]: 2025-12-06 08:43:19.902 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:43:20 np0005548731 nova_compute[232433]: 2025-12-06 08:43:20.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:43:20 np0005548731 nova_compute[232433]: 2025-12-06 08:43:20.105 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  6 03:43:20 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:43:20 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:43:20 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:43:20.806 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:43:20 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:43:21 np0005548731 nova_compute[232433]: 2025-12-06 08:43:21.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:43:21 np0005548731 nova_compute[232433]: 2025-12-06 08:43:21.105 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:43:21 np0005548731 nova_compute[232433]: 2025-12-06 08:43:21.133 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:43:21 np0005548731 nova_compute[232433]: 2025-12-06 08:43:21.133 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:43:21 np0005548731 nova_compute[232433]: 2025-12-06 08:43:21.133 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:43:21 np0005548731 nova_compute[232433]: 2025-12-06 08:43:21.133 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  6 03:43:21 np0005548731 nova_compute[232433]: 2025-12-06 08:43:21.133 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 03:43:21 np0005548731 nova_compute[232433]: 2025-12-06 08:43:21.445 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:43:21 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 03:43:21 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2520334797' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 03:43:21 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:43:21 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:43:21 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:43:21.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:43:21 np0005548731 nova_compute[232433]: 2025-12-06 08:43:21.556 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.422s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 03:43:21 np0005548731 nova_compute[232433]: 2025-12-06 08:43:21.701 232437 WARNING nova.virt.libvirt.driver [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  6 03:43:21 np0005548731 nova_compute[232433]: 2025-12-06 08:43:21.702 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4089MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  6 03:43:21 np0005548731 nova_compute[232433]: 2025-12-06 08:43:21.702 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:43:21 np0005548731 nova_compute[232433]: 2025-12-06 08:43:21.702 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:43:21 np0005548731 nova_compute[232433]: 2025-12-06 08:43:21.785 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  6 03:43:21 np0005548731 nova_compute[232433]: 2025-12-06 08:43:21.786 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  6 03:43:21 np0005548731 nova_compute[232433]: 2025-12-06 08:43:21.809 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 03:43:22 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 03:43:22 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2003389108' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 03:43:22 np0005548731 nova_compute[232433]: 2025-12-06 08:43:22.238 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.429s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 03:43:22 np0005548731 nova_compute[232433]: 2025-12-06 08:43:22.243 232437 DEBUG nova.compute.provider_tree [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Inventory has not changed in ProviderTree for provider: 6d00757a-082f-486d-ae84-869a2ba2e6e7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  6 03:43:22 np0005548731 nova_compute[232433]: 2025-12-06 08:43:22.317 232437 DEBUG nova.scheduler.client.report [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Inventory has not changed for provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  6 03:43:22 np0005548731 nova_compute[232433]: 2025-12-06 08:43:22.319 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  6 03:43:22 np0005548731 nova_compute[232433]: 2025-12-06 08:43:22.319 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.617s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:43:22 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:43:22 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:43:22 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:43:22.808 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:43:23 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:43:23 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:43:23 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:43:23.545 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:43:23 np0005548731 podman[363711]: 2025-12-06 08:43:23.88692018 +0000 UTC m=+0.047667441 container health_status 26c2ce9bdb83c6db5ccad7f5b2a7cb4e9354ab0235ad2f942ea42c8feda2142b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Dec  6 03:43:23 np0005548731 podman[363713]: 2025-12-06 08:43:23.926458473 +0000 UTC m=+0.084633781 container health_status ae4fd08cdec31d6ae2a6b91ffff7deb6ca5a8375cb52ee4b685cc6f25796824e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Dec  6 03:43:23 np0005548731 podman[363712]: 2025-12-06 08:43:23.959769703 +0000 UTC m=+0.119409167 container health_status a118c3becf56cfbcae208065771ebf10a65162bb7b96389971293e534823fb78 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Dec  6 03:43:24 np0005548731 nova_compute[232433]: 2025-12-06 08:43:24.314 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:43:24 np0005548731 nova_compute[232433]: 2025-12-06 08:43:24.314 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:43:24 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:43:24 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:43:24 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:43:24.809 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:43:24 np0005548731 nova_compute[232433]: 2025-12-06 08:43:24.903 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:43:25 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:43:25 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:43:25 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:43:25.549 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:43:25 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:43:26 np0005548731 nova_compute[232433]: 2025-12-06 08:43:26.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:43:26 np0005548731 nova_compute[232433]: 2025-12-06 08:43:26.448 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:43:26 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  6 03:43:26 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 03:43:26 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  6 03:43:26 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:43:26 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:43:26 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:43:26.813 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:43:27 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:43:27 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:43:27 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:43:27.553 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:43:28 np0005548731 nova_compute[232433]: 2025-12-06 08:43:28.105 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:43:28 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:43:28 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:43:28 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:43:28.815 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:43:29 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:43:29 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:43:29 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:43:29.556 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:43:29 np0005548731 nova_compute[232433]: 2025-12-06 08:43:29.965 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:43:30 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:43:30 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:43:30 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:43:30.817 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:43:30 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:43:31 np0005548731 nova_compute[232433]: 2025-12-06 08:43:31.450 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:43:31 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:43:31 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:43:31 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:43:31.559 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:43:32 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:43:32 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:43:32 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:43:32.818 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:43:32 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 03:43:32 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 03:43:33 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:43:33 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:43:33 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:43:33.562 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:43:34 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:43:34 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:43:34 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:43:34.821 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:43:34 np0005548731 nova_compute[232433]: 2025-12-06 08:43:34.967 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:43:35 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:43:35 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:43:35 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:43:35.565 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:43:35 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:43:36 np0005548731 nova_compute[232433]: 2025-12-06 08:43:36.463 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:43:36 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:43:36 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:43:36 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:43:36.824 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:43:37 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:43:37 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:43:37 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:43:37.568 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:43:38 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:43:38 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:43:38 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:43:38.826 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:43:39 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:43:39 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:43:39 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:43:39.571 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:43:39 np0005548731 nova_compute[232433]: 2025-12-06 08:43:39.992 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:43:40 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:43:40 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:43:40 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:43:40.828 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:43:40 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:43:41 np0005548731 nova_compute[232433]: 2025-12-06 08:43:41.466 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:43:41 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:43:41 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:43:41 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:43:41.574 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:43:42 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:43:42 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:43:42 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:43:42.830 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:43:43 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:43:43 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:43:43 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:43:43.576 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:43:43 np0005548731 ceph-mon[77458]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #208. Immutable memtables: 0.
Dec  6 03:43:43 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:43:43.725098) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec  6 03:43:43 np0005548731 ceph-mon[77458]: rocksdb: [db/flush_job.cc:856] [default] [JOB 133] Flushing memtable with next log file: 208
Dec  6 03:43:43 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765010623725125, "job": 133, "event": "flush_started", "num_memtables": 1, "num_entries": 1337, "num_deletes": 251, "total_data_size": 3036734, "memory_usage": 3073136, "flush_reason": "Manual Compaction"}
Dec  6 03:43:43 np0005548731 ceph-mon[77458]: rocksdb: [db/flush_job.cc:885] [default] [JOB 133] Level-0 flush table #209: started
Dec  6 03:43:43 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765010623737304, "cf_name": "default", "job": 133, "event": "table_file_creation", "file_number": 209, "file_size": 1981695, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 100706, "largest_seqno": 102038, "table_properties": {"data_size": 1975905, "index_size": 3120, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1605, "raw_key_size": 12323, "raw_average_key_size": 19, "raw_value_size": 1964333, "raw_average_value_size": 3183, "num_data_blocks": 138, "num_entries": 617, "num_filter_entries": 617, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765010513, "oldest_key_time": 1765010513, "file_creation_time": 1765010623, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bc01f9a1-fda8-4008-aee1-1fa5ff4371a1", "db_session_id": "CWBL8WMDM4D79BU86E9T", "orig_file_number": 209, "seqno_to_time_mapping": "N/A"}}
Dec  6 03:43:43 np0005548731 ceph-mon[77458]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 133] Flush lasted 12265 microseconds, and 4483 cpu microseconds.
Dec  6 03:43:43 np0005548731 ceph-mon[77458]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  6 03:43:43 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:43:43.737357) [db/flush_job.cc:967] [default] [JOB 133] Level-0 flush table #209: 1981695 bytes OK
Dec  6 03:43:43 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:43:43.737375) [db/memtable_list.cc:519] [default] Level-0 commit table #209 started
Dec  6 03:43:43 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:43:43.740545) [db/memtable_list.cc:722] [default] Level-0 commit table #209: memtable #1 done
Dec  6 03:43:43 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:43:43.740623) EVENT_LOG_v1 {"time_micros": 1765010623740608, "job": 133, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec  6 03:43:43 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:43:43.740664) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec  6 03:43:43 np0005548731 ceph-mon[77458]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 133] Try to delete WAL files size 3030484, prev total WAL file size 3030484, number of live WAL files 2.
Dec  6 03:43:43 np0005548731 ceph-mon[77458]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000205.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  6 03:43:43 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:43:43.741946) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730038373835' seq:72057594037927935, type:22 .. '7061786F730039303337' seq:0, type:0; will stop at (end)
Dec  6 03:43:43 np0005548731 ceph-mon[77458]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 134] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec  6 03:43:43 np0005548731 ceph-mon[77458]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 133 Base level 0, inputs: [209(1935KB)], [207(12MB)]
Dec  6 03:43:43 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765010623742021, "job": 134, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [209], "files_L6": [207], "score": -1, "input_data_size": 14990527, "oldest_snapshot_seqno": -1}
Dec  6 03:43:43 np0005548731 ceph-mon[77458]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 134] Generated table #210: 12762 keys, 12944645 bytes, temperature: kUnknown
Dec  6 03:43:43 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765010623837087, "cf_name": "default", "job": 134, "event": "table_file_creation", "file_number": 210, "file_size": 12944645, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12867901, "index_size": 43735, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 31941, "raw_key_size": 339422, "raw_average_key_size": 26, "raw_value_size": 12650294, "raw_average_value_size": 991, "num_data_blocks": 1637, "num_entries": 12762, "num_filter_entries": 12762, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765002564, "oldest_key_time": 0, "file_creation_time": 1765010623, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bc01f9a1-fda8-4008-aee1-1fa5ff4371a1", "db_session_id": "CWBL8WMDM4D79BU86E9T", "orig_file_number": 210, "seqno_to_time_mapping": "N/A"}}
Dec  6 03:43:43 np0005548731 ceph-mon[77458]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  6 03:43:43 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:43:43.837368) [db/compaction/compaction_job.cc:1663] [default] [JOB 134] Compacted 1@0 + 1@6 files to L6 => 12944645 bytes
Dec  6 03:43:43 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:43:43.838987) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 157.6 rd, 136.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.9, 12.4 +0.0 blob) out(12.3 +0.0 blob), read-write-amplify(14.1) write-amplify(6.5) OK, records in: 13279, records dropped: 517 output_compression: NoCompression
Dec  6 03:43:43 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:43:43.839009) EVENT_LOG_v1 {"time_micros": 1765010623838999, "job": 134, "event": "compaction_finished", "compaction_time_micros": 95138, "compaction_time_cpu_micros": 59994, "output_level": 6, "num_output_files": 1, "total_output_size": 12944645, "num_input_records": 13279, "num_output_records": 12762, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec  6 03:43:43 np0005548731 ceph-mon[77458]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000209.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  6 03:43:43 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765010623839703, "job": 134, "event": "table_file_deletion", "file_number": 209}
Dec  6 03:43:43 np0005548731 ceph-mon[77458]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000207.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  6 03:43:43 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765010623842721, "job": 134, "event": "table_file_deletion", "file_number": 207}
Dec  6 03:43:43 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:43:43.741786) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 03:43:43 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:43:43.842798) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 03:43:43 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:43:43.842805) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 03:43:43 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:43:43.842808) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 03:43:43 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:43:43.842810) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 03:43:43 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:43:43.842813) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 03:43:44 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:43:44 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:43:44 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:43:44.832 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:43:44 np0005548731 nova_compute[232433]: 2025-12-06 08:43:44.995 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:43:45 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:43:45 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:43:45 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:43:45.580 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:43:45 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:43:46 np0005548731 nova_compute[232433]: 2025-12-06 08:43:46.468 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:43:46 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:43:46 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 03:43:46 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:43:46.833 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 03:43:47 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:43:47 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:43:47 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:43:47.584 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:43:48 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:43:48 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:43:48 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:43:48.835 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:43:49 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:43:49 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:43:49 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:43:49.587 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:43:49 np0005548731 nova_compute[232433]: 2025-12-06 08:43:49.997 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:43:50 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:43:50 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:43:50 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:43:50.837 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:43:50 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:43:51 np0005548731 nova_compute[232433]: 2025-12-06 08:43:51.470 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:43:51 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:43:51 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:43:51 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:43:51.589 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:43:52 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:43:52 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:43:52 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:43:52.839 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:43:53 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:43:53 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:43:53 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:43:53.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:43:54 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:43:54 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:43:54 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:43:54.840 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:43:54 np0005548731 podman[364021]: 2025-12-06 08:43:54.892336382 +0000 UTC m=+0.053607005 container health_status ae4fd08cdec31d6ae2a6b91ffff7deb6ca5a8375cb52ee4b685cc6f25796824e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec  6 03:43:54 np0005548731 podman[364020]: 2025-12-06 08:43:54.928002441 +0000 UTC m=+0.085242136 container health_status a118c3becf56cfbcae208065771ebf10a65162bb7b96389971293e534823fb78 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Dec  6 03:43:54 np0005548731 podman[364019]: 2025-12-06 08:43:54.928444141 +0000 UTC m=+0.089609091 container health_status 26c2ce9bdb83c6db5ccad7f5b2a7cb4e9354ab0235ad2f942ea42c8feda2142b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent)
Dec  6 03:43:54 np0005548731 nova_compute[232433]: 2025-12-06 08:43:54.997 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:43:55 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:43:55 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:43:55 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:43:55.596 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:43:55 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:43:56 np0005548731 nova_compute[232433]: 2025-12-06 08:43:56.474 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:43:56 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:43:56 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:43:56 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:43:56.843 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:43:57 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:43:57 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:43:57 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:43:57.599 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:43:58 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:43:58 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:43:58 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:43:58.845 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:43:59 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:43:59 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:43:59 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:43:59.602 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:43:59 np0005548731 nova_compute[232433]: 2025-12-06 08:43:59.998 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:44:00 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:44:00 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 03:44:00 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:44:00.846 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 03:44:00 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:44:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:44:00.945 143965 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:44:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:44:00.946 143965 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:44:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:44:00.946 143965 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:44:01 np0005548731 nova_compute[232433]: 2025-12-06 08:44:01.476 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:44:01 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:44:01 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:44:01 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:44:01.606 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:44:02 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:44:02 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:44:02 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:44:02.848 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:44:03 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:44:03 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:44:03 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:44:03.610 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:44:04 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:44:04 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:44:04 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:44:04.851 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:44:05 np0005548731 nova_compute[232433]: 2025-12-06 08:44:05.000 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:44:05 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:44:05 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:44:05 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:44:05.614 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:44:05 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:44:06 np0005548731 nova_compute[232433]: 2025-12-06 08:44:06.478 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:44:06 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:44:06 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:44:06 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:44:06.853 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:44:07 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:44:07 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:44:07 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:44:07.618 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:44:08 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:44:08 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:44:08 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:44:08.855 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:44:09 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  6 03:44:09 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2208837905' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  6 03:44:09 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  6 03:44:09 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2208837905' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  6 03:44:09 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:44:09 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:44:09 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:44:09.622 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:44:10 np0005548731 nova_compute[232433]: 2025-12-06 08:44:10.002 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:44:10 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:44:10 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:44:10 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:44:10.857 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:44:10 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:44:11 np0005548731 nova_compute[232433]: 2025-12-06 08:44:11.480 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:44:11 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:44:11 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:44:11 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:44:11.625 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:44:12 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:44:12 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 03:44:12 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:44:12.859 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 03:44:13 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:44:13 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:44:13 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:44:13.629 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:44:14 np0005548731 nova_compute[232433]: 2025-12-06 08:44:14.105 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:44:14 np0005548731 nova_compute[232433]: 2025-12-06 08:44:14.106 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  6 03:44:14 np0005548731 nova_compute[232433]: 2025-12-06 08:44:14.106 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  6 03:44:14 np0005548731 nova_compute[232433]: 2025-12-06 08:44:14.129 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Dec  6 03:44:14 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:44:14 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:44:14 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:44:14.861 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:44:15 np0005548731 nova_compute[232433]: 2025-12-06 08:44:15.005 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:44:15 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:44:15 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:44:15 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:44:15.632 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:44:15 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:44:16 np0005548731 nova_compute[232433]: 2025-12-06 08:44:16.105 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:44:16 np0005548731 nova_compute[232433]: 2025-12-06 08:44:16.482 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:44:16 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:44:16 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:44:16 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:44:16.863 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:44:17 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:44:17 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:44:17 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:44:17.636 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:44:18 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:44:18 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:44:18 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:44:18.865 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:44:19 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:44:19 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:44:19 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:44:19.639 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:44:20 np0005548731 nova_compute[232433]: 2025-12-06 08:44:20.006 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:44:20 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:44:20 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:44:20 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:44:20.867 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:44:20 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:44:21 np0005548731 nova_compute[232433]: 2025-12-06 08:44:21.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:44:21 np0005548731 nova_compute[232433]: 2025-12-06 08:44:21.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:44:21 np0005548731 nova_compute[232433]: 2025-12-06 08:44:21.128 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:44:21 np0005548731 nova_compute[232433]: 2025-12-06 08:44:21.129 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:44:21 np0005548731 nova_compute[232433]: 2025-12-06 08:44:21.129 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:44:21 np0005548731 nova_compute[232433]: 2025-12-06 08:44:21.129 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  6 03:44:21 np0005548731 nova_compute[232433]: 2025-12-06 08:44:21.129 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 03:44:21 np0005548731 nova_compute[232433]: 2025-12-06 08:44:21.484 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:44:21 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 03:44:21 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3146289939' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 03:44:21 np0005548731 nova_compute[232433]: 2025-12-06 08:44:21.560 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.431s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 03:44:21 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:44:21 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:44:21 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:44:21.642 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:44:21 np0005548731 nova_compute[232433]: 2025-12-06 08:44:21.690 232437 WARNING nova.virt.libvirt.driver [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  6 03:44:21 np0005548731 nova_compute[232433]: 2025-12-06 08:44:21.691 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4103MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  6 03:44:21 np0005548731 nova_compute[232433]: 2025-12-06 08:44:21.691 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:44:21 np0005548731 nova_compute[232433]: 2025-12-06 08:44:21.692 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:44:21 np0005548731 nova_compute[232433]: 2025-12-06 08:44:21.897 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  6 03:44:21 np0005548731 nova_compute[232433]: 2025-12-06 08:44:21.897 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  6 03:44:21 np0005548731 nova_compute[232433]: 2025-12-06 08:44:21.942 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 03:44:22 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 03:44:22 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/591882036' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 03:44:22 np0005548731 nova_compute[232433]: 2025-12-06 08:44:22.374 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.432s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 03:44:22 np0005548731 nova_compute[232433]: 2025-12-06 08:44:22.379 232437 DEBUG nova.compute.provider_tree [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Inventory has not changed in ProviderTree for provider: 6d00757a-082f-486d-ae84-869a2ba2e6e7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  6 03:44:22 np0005548731 nova_compute[232433]: 2025-12-06 08:44:22.399 232437 DEBUG nova.scheduler.client.report [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Inventory has not changed for provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  6 03:44:22 np0005548731 nova_compute[232433]: 2025-12-06 08:44:22.401 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  6 03:44:22 np0005548731 nova_compute[232433]: 2025-12-06 08:44:22.401 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.709s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:44:22 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:44:22 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:44:22 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:44:22.869 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:44:23 np0005548731 nova_compute[232433]: 2025-12-06 08:44:23.401 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:44:23 np0005548731 nova_compute[232433]: 2025-12-06 08:44:23.402 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:44:23 np0005548731 nova_compute[232433]: 2025-12-06 08:44:23.402 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  6 03:44:23 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:44:23 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:44:23 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:44:23.645 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:44:24 np0005548731 nova_compute[232433]: 2025-12-06 08:44:24.105 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:44:24 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:44:24 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 03:44:24 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:44:24.871 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 03:44:25 np0005548731 nova_compute[232433]: 2025-12-06 08:44:25.007 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:44:25 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:44:25 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:44:25 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:44:25.653 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:44:25 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:44:25 np0005548731 podman[364241]: 2025-12-06 08:44:25.901288054 +0000 UTC m=+0.063600909 container health_status 26c2ce9bdb83c6db5ccad7f5b2a7cb4e9354ab0235ad2f942ea42c8feda2142b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec  6 03:44:25 np0005548731 podman[364242]: 2025-12-06 08:44:25.977398867 +0000 UTC m=+0.137502098 container health_status a118c3becf56cfbcae208065771ebf10a65162bb7b96389971293e534823fb78 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  6 03:44:25 np0005548731 podman[364243]: 2025-12-06 08:44:25.977470018 +0000 UTC m=+0.123704261 container health_status ae4fd08cdec31d6ae2a6b91ffff7deb6ca5a8375cb52ee4b685cc6f25796824e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  6 03:44:26 np0005548731 nova_compute[232433]: 2025-12-06 08:44:26.486 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:44:26 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:44:26 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 03:44:26 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:44:26.873 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 03:44:27 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:44:27 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:44:27 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:44:27.658 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:44:28 np0005548731 nova_compute[232433]: 2025-12-06 08:44:28.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:44:28 np0005548731 nova_compute[232433]: 2025-12-06 08:44:28.105 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:44:28 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:44:28 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:44:28 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:44:28.875 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:44:29 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:44:29 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 03:44:29 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:44:29.661 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 03:44:30 np0005548731 nova_compute[232433]: 2025-12-06 08:44:30.008 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:44:30 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:44:30 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:44:30 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:44:30.878 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:44:30 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:44:31 np0005548731 nova_compute[232433]: 2025-12-06 08:44:31.488 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:44:31 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:44:31 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:44:31 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:44:31.664 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:44:32 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:44:32 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 03:44:32 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:44:32.878 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 03:44:33 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:44:33 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:44:33 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:44:33.667 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:44:33 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 03:44:33 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 03:44:33 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  6 03:44:33 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 03:44:33 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  6 03:44:34 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:44:34 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:44:34 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:44:34.881 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:44:35 np0005548731 nova_compute[232433]: 2025-12-06 08:44:35.010 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:44:35 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:44:35 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:44:35 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:44:35.669 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:44:35 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:44:36 np0005548731 nova_compute[232433]: 2025-12-06 08:44:36.491 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:44:36 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:44:36 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:44:36 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:44:36.883 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:44:37 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:44:37 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:44:37 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:44:37.672 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:44:38 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:44:38 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:44:38 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:44:38.885 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:44:39 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:44:39 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:44:39 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:44:39.675 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:44:39 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 03:44:39 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 03:44:40 np0005548731 nova_compute[232433]: 2025-12-06 08:44:40.052 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:44:40 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:44:40 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:44:40 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:44:40.887 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:44:40 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:44:41 np0005548731 nova_compute[232433]: 2025-12-06 08:44:41.494 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:44:41 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:44:41 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:44:41 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:44:41.679 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:44:42 np0005548731 nova_compute[232433]: 2025-12-06 08:44:42.100 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:44:42 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:44:42 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:44:42 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:44:42.889 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:44:43 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:44:43 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:44:43 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:44:43.683 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:44:44 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:44:44 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:44:44 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:44:44.891 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:44:45 np0005548731 nova_compute[232433]: 2025-12-06 08:44:45.054 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:44:45 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:44:45 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:44:45 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:44:45.685 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:44:45 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:44:46 np0005548731 nova_compute[232433]: 2025-12-06 08:44:46.496 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:44:46 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:44:46 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:44:46 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:44:46.892 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:44:47 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:44:47 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:44:47 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:44:47.689 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:44:48 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:44:48 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:44:48 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:44:48.894 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:44:49 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:44:49 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:44:49 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:44:49.692 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:44:50 np0005548731 nova_compute[232433]: 2025-12-06 08:44:50.056 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:44:50 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:44:50 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:44:50 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:44:50 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:44:50.896 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:44:51 np0005548731 nova_compute[232433]: 2025-12-06 08:44:51.499 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:44:51 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:44:51 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 03:44:51 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:44:51.696 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 03:44:52 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:44:52 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:44:52 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:44:52.900 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:44:53 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:44:53 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:44:53 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:44:53.699 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:44:54 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:44:54 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:44:54 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:44:54.902 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:44:55 np0005548731 nova_compute[232433]: 2025-12-06 08:44:55.058 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:44:55 np0005548731 nova_compute[232433]: 2025-12-06 08:44:55.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:44:55 np0005548731 nova_compute[232433]: 2025-12-06 08:44:55.105 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Dec  6 03:44:55 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:44:55 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:44:55 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:44:55.702 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:44:55 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:44:56 np0005548731 nova_compute[232433]: 2025-12-06 08:44:56.501 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:44:56 np0005548731 podman[364601]: 2025-12-06 08:44:56.898779163 +0000 UTC m=+0.058700860 container health_status ae4fd08cdec31d6ae2a6b91ffff7deb6ca5a8375cb52ee4b685cc6f25796824e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  6 03:44:56 np0005548731 podman[364599]: 2025-12-06 08:44:56.903453077 +0000 UTC m=+0.066744505 container health_status 26c2ce9bdb83c6db5ccad7f5b2a7cb4e9354ab0235ad2f942ea42c8feda2142b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec  6 03:44:56 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:44:56 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:44:56 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:44:56.903 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:44:56 np0005548731 podman[364600]: 2025-12-06 08:44:56.963520769 +0000 UTC m=+0.121235732 container health_status a118c3becf56cfbcae208065771ebf10a65162bb7b96389971293e534823fb78 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, container_name=ovn_controller)
Dec  6 03:44:57 np0005548731 nova_compute[232433]: 2025-12-06 08:44:57.668 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Dec  6 03:44:57 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:44:57 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:44:57 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:44:57.705 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:44:58 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:44:58 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:44:58 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:44:58.905 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:44:59 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:44:59 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:44:59 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:44:59.708 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:45:00 np0005548731 nova_compute[232433]: 2025-12-06 08:45:00.061 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:45:00 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:45:00 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:45:00 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:45:00 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:45:00.907 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:45:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:45:00.946 143965 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:45:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:45:00.946 143965 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:45:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:45:00.946 143965 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:45:01 np0005548731 nova_compute[232433]: 2025-12-06 08:45:01.504 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:45:01 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:45:01 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:45:01 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:45:01.711 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:45:02 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:45:02 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:45:02 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:45:02.909 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:45:02 np0005548731 ceph-mgr[77818]: client.0 ms_handle_reset on v2:192.168.122.100:6800/798720280
Dec  6 03:45:03 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:45:03 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:45:03 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:45:03.714 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:45:04 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:45:04 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:45:04 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:45:04.910 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:45:05 np0005548731 nova_compute[232433]: 2025-12-06 08:45:05.064 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:45:05 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:45:05 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:45:05 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:45:05.717 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:45:05 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:45:06 np0005548731 nova_compute[232433]: 2025-12-06 08:45:06.505 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:45:06 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:45:06 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:45:06 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:45:06.912 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:45:07 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:45:07 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:45:07 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:45:07.720 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:45:08 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:45:08 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:45:08 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:45:08.914 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:45:09 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  6 03:45:09 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3111797188' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  6 03:45:09 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  6 03:45:09 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3111797188' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  6 03:45:09 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:45:09 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:45:09 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:45:09.724 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:45:10 np0005548731 nova_compute[232433]: 2025-12-06 08:45:10.065 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:45:10 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:45:10 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:45:10 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:45:10 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:45:10.916 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:45:11 np0005548731 nova_compute[232433]: 2025-12-06 08:45:11.539 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:45:11 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:45:11 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 03:45:11 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:45:11.727 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 03:45:12 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:45:12 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:45:12 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:45:12.919 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:45:13 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:45:13 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:45:13 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:45:13.731 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:45:14 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:45:14 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:45:14 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:45:14.920 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:45:15 np0005548731 nova_compute[232433]: 2025-12-06 08:45:15.104 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:45:15 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:45:15 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:45:15 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:45:15.734 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:45:15 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:45:16 np0005548731 nova_compute[232433]: 2025-12-06 08:45:16.541 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:45:16 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:45:16 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:45:16 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:45:16.922 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:45:17 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:45:17 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:45:17 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:45:17.737 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:45:18 np0005548731 nova_compute[232433]: 2025-12-06 08:45:18.668 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:45:18 np0005548731 nova_compute[232433]: 2025-12-06 08:45:18.668 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  6 03:45:18 np0005548731 nova_compute[232433]: 2025-12-06 08:45:18.669 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  6 03:45:18 np0005548731 nova_compute[232433]: 2025-12-06 08:45:18.690 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Dec  6 03:45:18 np0005548731 nova_compute[232433]: 2025-12-06 08:45:18.690 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:45:18 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:45:18 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:45:18 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:45:18.924 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:45:19 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:45:19 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:45:19 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:45:19.739 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:45:20 np0005548731 nova_compute[232433]: 2025-12-06 08:45:20.105 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:45:20 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:45:20 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:45:20 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:45:20 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:45:20.926 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:45:21 np0005548731 nova_compute[232433]: 2025-12-06 08:45:21.544 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:45:21 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:45:21 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:45:21 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:45:21.742 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:45:22 np0005548731 nova_compute[232433]: 2025-12-06 08:45:22.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:45:22 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:45:22 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:45:22 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:45:22.927 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:45:23 np0005548731 nova_compute[232433]: 2025-12-06 08:45:23.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:45:23 np0005548731 nova_compute[232433]: 2025-12-06 08:45:23.145 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:45:23 np0005548731 nova_compute[232433]: 2025-12-06 08:45:23.145 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:45:23 np0005548731 nova_compute[232433]: 2025-12-06 08:45:23.145 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:45:23 np0005548731 nova_compute[232433]: 2025-12-06 08:45:23.145 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  6 03:45:23 np0005548731 nova_compute[232433]: 2025-12-06 08:45:23.146 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 03:45:23 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 03:45:23 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3671994025' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 03:45:23 np0005548731 nova_compute[232433]: 2025-12-06 08:45:23.565 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.419s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 03:45:23 np0005548731 nova_compute[232433]: 2025-12-06 08:45:23.718 232437 WARNING nova.virt.libvirt.driver [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  6 03:45:23 np0005548731 nova_compute[232433]: 2025-12-06 08:45:23.719 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4104MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  6 03:45:23 np0005548731 nova_compute[232433]: 2025-12-06 08:45:23.719 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:45:23 np0005548731 nova_compute[232433]: 2025-12-06 08:45:23.720 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:45:23 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:45:23 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:45:23 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:45:23.746 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:45:24 np0005548731 nova_compute[232433]: 2025-12-06 08:45:24.003 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  6 03:45:24 np0005548731 nova_compute[232433]: 2025-12-06 08:45:24.004 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  6 03:45:24 np0005548731 nova_compute[232433]: 2025-12-06 08:45:24.020 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 03:45:24 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 03:45:24 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1430437182' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 03:45:24 np0005548731 nova_compute[232433]: 2025-12-06 08:45:24.465 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.445s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 03:45:24 np0005548731 nova_compute[232433]: 2025-12-06 08:45:24.470 232437 DEBUG nova.compute.provider_tree [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Inventory has not changed in ProviderTree for provider: 6d00757a-082f-486d-ae84-869a2ba2e6e7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  6 03:45:24 np0005548731 nova_compute[232433]: 2025-12-06 08:45:24.490 232437 DEBUG nova.scheduler.client.report [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Inventory has not changed for provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  6 03:45:24 np0005548731 nova_compute[232433]: 2025-12-06 08:45:24.491 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  6 03:45:24 np0005548731 nova_compute[232433]: 2025-12-06 08:45:24.492 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.772s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:45:24 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:45:24 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:45:24 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:45:24.929 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:45:25 np0005548731 nova_compute[232433]: 2025-12-06 08:45:25.107 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:45:25 np0005548731 nova_compute[232433]: 2025-12-06 08:45:25.492 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:45:25 np0005548731 nova_compute[232433]: 2025-12-06 08:45:25.493 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:45:25 np0005548731 nova_compute[232433]: 2025-12-06 08:45:25.493 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:45:25 np0005548731 nova_compute[232433]: 2025-12-06 08:45:25.493 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  6 03:45:25 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:45:25 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 03:45:25 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:45:25.750 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 03:45:25 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:45:26 np0005548731 nova_compute[232433]: 2025-12-06 08:45:26.547 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:45:26 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:45:26 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:45:26 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:45:26.931 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:45:27 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:45:27 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:45:27 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:45:27.754 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:45:27 np0005548731 podman[364771]: 2025-12-06 08:45:27.903409247 +0000 UTC m=+0.054216149 container health_status ae4fd08cdec31d6ae2a6b91ffff7deb6ca5a8375cb52ee4b685cc6f25796824e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Dec  6 03:45:27 np0005548731 podman[364769]: 2025-12-06 08:45:27.921002696 +0000 UTC m=+0.077921758 container health_status 26c2ce9bdb83c6db5ccad7f5b2a7cb4e9354ab0235ad2f942ea42c8feda2142b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Dec  6 03:45:27 np0005548731 podman[364770]: 2025-12-06 08:45:27.949026398 +0000 UTC m=+0.104006402 container health_status a118c3becf56cfbcae208065771ebf10a65162bb7b96389971293e534823fb78 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Dec  6 03:45:28 np0005548731 nova_compute[232433]: 2025-12-06 08:45:28.105 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:45:28 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:45:28 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:45:28 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:45:28.933 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:45:29 np0005548731 nova_compute[232433]: 2025-12-06 08:45:29.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:45:29 np0005548731 nova_compute[232433]: 2025-12-06 08:45:29.104 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Dec  6 03:45:29 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:45:29 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:45:29 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:45:29.757 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:45:30 np0005548731 nova_compute[232433]: 2025-12-06 08:45:30.109 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:45:30 np0005548731 nova_compute[232433]: 2025-12-06 08:45:30.122 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:45:30 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:45:30 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:45:30 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:45:30 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:45:30.935 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:45:31 np0005548731 nova_compute[232433]: 2025-12-06 08:45:31.549 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:45:31 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:45:31 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:45:31 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:45:31.760 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:45:32 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:45:32 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:45:32 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:45:32.937 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:45:33 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:45:33 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:45:33 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:45:33.764 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:45:34 np0005548731 nova_compute[232433]: 2025-12-06 08:45:34.105 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:45:34 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:45:34 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:45:34 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:45:34.940 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:45:35 np0005548731 nova_compute[232433]: 2025-12-06 08:45:35.110 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:45:35 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:45:35 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:45:35 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:45:35.767 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:45:35 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:45:36 np0005548731 nova_compute[232433]: 2025-12-06 08:45:36.551 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:45:36 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:45:36 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:45:36 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:45:36.942 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:45:37 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:45:37 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:45:37 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:45:37.770 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:45:38 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:45:38 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:45:38 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:45:38.943 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:45:39 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:45:39 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:45:39 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:45:39.774 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:45:40 np0005548731 nova_compute[232433]: 2025-12-06 08:45:40.111 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:45:40 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:45:40 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:45:40 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:45:40 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:45:40.945 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:45:41 np0005548731 podman[365176]: 2025-12-06 08:45:41.124720603 +0000 UTC m=+0.054589440 container exec 541e7276f6038dd900483907d1613bdfcbf99ea3714cf53a920a7189986fab54 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-mon-compute-2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  6 03:45:41 np0005548731 podman[365176]: 2025-12-06 08:45:41.215151074 +0000 UTC m=+0.145019891 container exec_died 541e7276f6038dd900483907d1613bdfcbf99ea3714cf53a920a7189986fab54 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-mon-compute-2, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507)
Dec  6 03:45:41 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 03:45:41 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 03:45:41 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Dec  6 03:45:41 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Dec  6 03:45:41 np0005548731 nova_compute[232433]: 2025-12-06 08:45:41.555 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:45:41 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:45:41 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:45:41 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:45:41.779 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:45:41 np0005548731 podman[365332]: 2025-12-06 08:45:41.780651867 +0000 UTC m=+0.049207960 container exec 42755d5ddab81b86d84b40c9464a8af637e2d196c1a098a61593efb6a9875167 (image=quay.io/ceph/haproxy:2.3, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-haproxy-rgw-default-compute-2-nyemkw)
Dec  6 03:45:41 np0005548731 podman[365332]: 2025-12-06 08:45:41.790794873 +0000 UTC m=+0.059350946 container exec_died 42755d5ddab81b86d84b40c9464a8af637e2d196c1a098a61593efb6a9875167 (image=quay.io/ceph/haproxy:2.3, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-haproxy-rgw-default-compute-2-nyemkw)
Dec  6 03:45:41 np0005548731 podman[365398]: 2025-12-06 08:45:41.990465822 +0000 UTC m=+0.047269320 container exec 60f905acca99db805341691264f7f91f9f4f43c91aa728d21396595dfca7cf39 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-keepalived-rgw-default-compute-2-cossgt, build-date=2023-02-22T09:23:20, io.openshift.expose-services=, release=1793, architecture=x86_64, com.redhat.component=keepalived-container, io.buildah.version=1.28.2, version=2.2.4, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, summary=Provides keepalived on RHEL 9 for Ceph., vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, vendor=Red Hat, Inc., io.openshift.tags=Ceph keepalived, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=keepalived, vcs-type=git, description=keepalived for Ceph, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.k8s.display-name=Keepalived on RHEL 9)
Dec  6 03:45:42 np0005548731 podman[365398]: 2025-12-06 08:45:42.002804673 +0000 UTC m=+0.059608151 container exec_died 60f905acca99db805341691264f7f91f9f4f43c91aa728d21396595dfca7cf39 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-keepalived-rgw-default-compute-2-cossgt, name=keepalived, vcs-type=git, summary=Provides keepalived on RHEL 9 for Ceph., io.openshift.tags=Ceph keepalived, version=2.2.4, build-date=2023-02-22T09:23:20, io.k8s.display-name=Keepalived on RHEL 9, vendor=Red Hat, Inc., distribution-scope=public, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, com.redhat.component=keepalived-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=keepalived for Ceph, release=1793, architecture=x86_64, io.buildah.version=1.28.2, io.openshift.expose-services=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793)
Dec  6 03:45:42 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:45:42 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:45:42 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:45:42.947 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:45:43 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 03:45:43 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 03:45:43 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  6 03:45:43 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 03:45:43 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  6 03:45:43 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:45:43 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:45:43 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:45:43.782 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:45:44 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:45:44 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 03:45:44 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:45:44.948 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 03:45:45 np0005548731 nova_compute[232433]: 2025-12-06 08:45:45.114 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:45:45 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:45:45 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:45:45 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:45:45.785 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:45:45 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:45:46 np0005548731 nova_compute[232433]: 2025-12-06 08:45:46.558 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:45:46 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:45:46 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:45:46 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:45:46.951 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:45:47 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:45:47 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:45:47 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:45:47.788 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:45:48 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:45:48 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:45:48 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:45:48.952 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:45:49 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 03:45:49 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 03:45:49 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:45:49 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:45:49 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:45:49.792 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:45:50 np0005548731 nova_compute[232433]: 2025-12-06 08:45:50.121 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:45:50 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:45:50 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:45:50 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:45:50 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:45:50.954 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:45:51 np0005548731 nova_compute[232433]: 2025-12-06 08:45:51.560 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:45:51 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:45:51 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:45:51 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:45:51.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:45:52 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:45:52 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:45:52 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:45:52.957 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:45:53 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:45:53 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:45:53 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:45:53.799 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:45:54 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:45:54 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:45:54 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:45:54.959 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:45:55 np0005548731 nova_compute[232433]: 2025-12-06 08:45:55.122 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:45:55 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:45:55 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:45:55 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:45:55.802 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:45:55 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:45:56 np0005548731 nova_compute[232433]: 2025-12-06 08:45:56.561 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:45:56 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:45:56 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:45:56 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:45:56.960 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:45:57 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:45:57 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:45:57 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:45:57.805 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:45:58 np0005548731 podman[365672]: 2025-12-06 08:45:58.898298796 +0000 UTC m=+0.055718187 container health_status 26c2ce9bdb83c6db5ccad7f5b2a7cb4e9354ab0235ad2f942ea42c8feda2142b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent)
Dec  6 03:45:58 np0005548731 podman[365674]: 2025-12-06 08:45:58.933492643 +0000 UTC m=+0.085490462 container health_status ae4fd08cdec31d6ae2a6b91ffff7deb6ca5a8375cb52ee4b685cc6f25796824e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3)
Dec  6 03:45:58 np0005548731 podman[365673]: 2025-12-06 08:45:58.953604863 +0000 UTC m=+0.110577522 container health_status a118c3becf56cfbcae208065771ebf10a65162bb7b96389971293e534823fb78 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec  6 03:45:58 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:45:58 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:45:58 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:45:58.962 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:45:59 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:45:59 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:45:59 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:45:59.809 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:46:00 np0005548731 nova_compute[232433]: 2025-12-06 08:46:00.124 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:46:00 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:46:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:46:00.947 143965 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:46:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:46:00.947 143965 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:46:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:46:00.948 143965 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:46:00 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:46:00 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:46:00 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:46:00.964 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:46:01 np0005548731 nova_compute[232433]: 2025-12-06 08:46:01.563 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:46:01 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:46:01 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:46:01 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:46:01.814 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:46:02 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:46:02 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:46:02 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:46:02.966 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:46:03 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:46:03 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:46:03 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:46:03.816 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:46:04 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:46:04 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:46:04 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:46:04.968 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:46:05 np0005548731 nova_compute[232433]: 2025-12-06 08:46:05.126 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:46:05 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:46:05 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:46:05 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:46:05.819 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:46:05 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:46:06 np0005548731 nova_compute[232433]: 2025-12-06 08:46:06.565 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:46:06 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:46:06 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:46:06 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:46:06.970 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:46:07 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:46:07 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:46:07 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:46:07.822 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:46:08 np0005548731 nova_compute[232433]: 2025-12-06 08:46:08.390 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:46:08 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:46:08 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:46:08 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:46:08.972 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:46:09 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:46:09 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:46:09 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:46:09.825 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:46:10 np0005548731 nova_compute[232433]: 2025-12-06 08:46:10.127 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:46:10 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:46:10 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:46:10 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 03:46:10 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:46:10.973 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 03:46:11 np0005548731 nova_compute[232433]: 2025-12-06 08:46:11.567 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:46:11 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:46:11 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:46:11 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:46:11.828 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:46:12 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:46:12 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:46:12 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:46:12.975 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:46:13 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:46:13 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:46:13 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:46:13.831 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:46:14 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:46:14 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:46:14 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:46:14.979 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:46:15 np0005548731 nova_compute[232433]: 2025-12-06 08:46:15.129 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:46:15 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:46:15 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:46:15 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:46:15.834 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:46:15 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:46:16 np0005548731 nova_compute[232433]: 2025-12-06 08:46:16.570 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:46:16 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:46:16 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:46:16 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:46:16.982 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:46:17 np0005548731 nova_compute[232433]: 2025-12-06 08:46:17.120 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:46:17 np0005548731 nova_compute[232433]: 2025-12-06 08:46:17.121 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  6 03:46:17 np0005548731 nova_compute[232433]: 2025-12-06 08:46:17.121 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  6 03:46:17 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:46:17 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:46:17 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:46:17.839 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:46:18 np0005548731 nova_compute[232433]: 2025-12-06 08:46:18.504 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Dec  6 03:46:18 np0005548731 nova_compute[232433]: 2025-12-06 08:46:18.504 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:46:18 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:46:18 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:46:18 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:46:18.983 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:46:19 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:46:19 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:46:19 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:46:19.844 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:46:20 np0005548731 nova_compute[232433]: 2025-12-06 08:46:20.133 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:46:20 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:46:20 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:46:20 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:46:20 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:46:20.986 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:46:21 np0005548731 nova_compute[232433]: 2025-12-06 08:46:21.572 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:46:21 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:46:21 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:46:21 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:46:21.847 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:46:22 np0005548731 nova_compute[232433]: 2025-12-06 08:46:22.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:46:22 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:46:22 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:46:22 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:46:22.989 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:46:23 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:46:23 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:46:23 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:46:23.850 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:46:24 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:46:24 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:46:24 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:46:24.990 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:46:25 np0005548731 nova_compute[232433]: 2025-12-06 08:46:25.104 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:46:25 np0005548731 nova_compute[232433]: 2025-12-06 08:46:25.105 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:46:25 np0005548731 nova_compute[232433]: 2025-12-06 08:46:25.130 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:46:25 np0005548731 nova_compute[232433]: 2025-12-06 08:46:25.130 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:46:25 np0005548731 nova_compute[232433]: 2025-12-06 08:46:25.131 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:46:25 np0005548731 nova_compute[232433]: 2025-12-06 08:46:25.131 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  6 03:46:25 np0005548731 nova_compute[232433]: 2025-12-06 08:46:25.131 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 03:46:25 np0005548731 nova_compute[232433]: 2025-12-06 08:46:25.155 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:46:25 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 03:46:25 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/75782707' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 03:46:25 np0005548731 nova_compute[232433]: 2025-12-06 08:46:25.524 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.393s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 03:46:25 np0005548731 nova_compute[232433]: 2025-12-06 08:46:25.666 232437 WARNING nova.virt.libvirt.driver [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  6 03:46:25 np0005548731 nova_compute[232433]: 2025-12-06 08:46:25.668 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4089MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, 
"label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  6 03:46:25 np0005548731 nova_compute[232433]: 2025-12-06 08:46:25.669 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:46:25 np0005548731 nova_compute[232433]: 2025-12-06 08:46:25.669 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:46:25 np0005548731 nova_compute[232433]: 2025-12-06 08:46:25.729 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  6 03:46:25 np0005548731 nova_compute[232433]: 2025-12-06 08:46:25.730 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  6 03:46:25 np0005548731 nova_compute[232433]: 2025-12-06 08:46:25.745 232437 DEBUG nova.scheduler.client.report [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Refreshing inventories for resource provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Dec  6 03:46:25 np0005548731 nova_compute[232433]: 2025-12-06 08:46:25.763 232437 DEBUG nova.scheduler.client.report [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Updating ProviderTree inventory for provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Dec  6 03:46:25 np0005548731 nova_compute[232433]: 2025-12-06 08:46:25.764 232437 DEBUG nova.compute.provider_tree [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Updating inventory in ProviderTree for provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Dec  6 03:46:25 np0005548731 nova_compute[232433]: 2025-12-06 08:46:25.776 232437 DEBUG nova.scheduler.client.report [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Refreshing aggregate associations for resource provider 6d00757a-082f-486d-ae84-869a2ba2e6e7, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Dec  6 03:46:25 np0005548731 nova_compute[232433]: 2025-12-06 08:46:25.799 232437 DEBUG nova.scheduler.client.report [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Refreshing trait associations for resource provider 6d00757a-082f-486d-ae84-869a2ba2e6e7, traits: COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_STORAGE_BUS_USB,COMPUTE_ACCELERATORS,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_TRUSTED_CERTS,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_SSE,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_SSE2,HW_CPU_X86_SSE41,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_STORAGE_BUS_SATA,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_SECURITY_TPM_1_2,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_VOLUME_EXTEND,COMPUTE_NODE,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_SECURITY_TPM_2_0,COMPUTE_VIOMMU_MODEL_AUTO,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_SSE42,COMPUTE_STORAGE_BUS_FDC,COMPUTE_RESCUE_BFV,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_IMAGE_TYPE_ARI _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Dec  6 03:46:25 np0005548731 nova_compute[232433]: 2025-12-06 08:46:25.817 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  6 03:46:25 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:46:25 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:46:25 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:46:25.853 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:46:25 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:46:26 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  6 03:46:26 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3420502171' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  6 03:46:26 np0005548731 nova_compute[232433]: 2025-12-06 08:46:26.240 232437 DEBUG oslo_concurrency.processutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.423s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  6 03:46:26 np0005548731 nova_compute[232433]: 2025-12-06 08:46:26.245 232437 DEBUG nova.compute.provider_tree [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Inventory has not changed in ProviderTree for provider: 6d00757a-082f-486d-ae84-869a2ba2e6e7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  6 03:46:26 np0005548731 nova_compute[232433]: 2025-12-06 08:46:26.261 232437 DEBUG nova.scheduler.client.report [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Inventory has not changed for provider 6d00757a-082f-486d-ae84-869a2ba2e6e7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  6 03:46:26 np0005548731 nova_compute[232433]: 2025-12-06 08:46:26.263 232437 DEBUG nova.compute.resource_tracker [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  6 03:46:26 np0005548731 nova_compute[232433]: 2025-12-06 08:46:26.263 232437 DEBUG oslo_concurrency.lockutils [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.594s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:46:26 np0005548731 nova_compute[232433]: 2025-12-06 08:46:26.573 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:46:26 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:46:26 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:46:26 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:46:26.991 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:46:27 np0005548731 nova_compute[232433]: 2025-12-06 08:46:27.258 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:46:27 np0005548731 nova_compute[232433]: 2025-12-06 08:46:27.259 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:46:27 np0005548731 nova_compute[232433]: 2025-12-06 08:46:27.259 232437 DEBUG nova.compute.manager [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  6 03:46:27 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:46:27 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:46:27 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:46:27.856 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:46:28 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:46:28 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:46:28 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:46:28.994 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:46:29 np0005548731 nova_compute[232433]: 2025-12-06 08:46:29.105 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:46:29 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:46:29 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:46:29 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:46:29.859 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:46:29 np0005548731 podman[365841]: 2025-12-06 08:46:29.905455826 +0000 UTC m=+0.062474342 container health_status 26c2ce9bdb83c6db5ccad7f5b2a7cb4e9354ab0235ad2f942ea42c8feda2142b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec  6 03:46:29 np0005548731 podman[365843]: 2025-12-06 08:46:29.916772101 +0000 UTC m=+0.066332565 container health_status ae4fd08cdec31d6ae2a6b91ffff7deb6ca5a8375cb52ee4b685cc6f25796824e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  6 03:46:29 np0005548731 podman[365842]: 2025-12-06 08:46:29.927511433 +0000 UTC m=+0.081159857 container health_status a118c3becf56cfbcae208065771ebf10a65162bb7b96389971293e534823fb78 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Dec  6 03:46:30 np0005548731 nova_compute[232433]: 2025-12-06 08:46:30.136 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:46:30 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:46:30 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:46:30 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:46:30 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:46:30.995 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:46:31 np0005548731 nova_compute[232433]: 2025-12-06 08:46:31.576 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:46:31 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:46:31 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:46:31 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:46:31.862 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:46:32 np0005548731 nova_compute[232433]: 2025-12-06 08:46:32.105 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:46:32 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:46:32 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:46:32 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:46:32.998 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:46:33 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:46:33 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:46:33 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:46:33.866 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:46:35 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:46:35 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:46:35 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:46:35.000 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:46:35 np0005548731 nova_compute[232433]: 2025-12-06 08:46:35.137 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:46:35 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:46:35 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:46:35 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:46:35.869 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:46:35 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:46:36 np0005548731 nova_compute[232433]: 2025-12-06 08:46:36.578 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:46:37 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:46:37 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:46:37 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:46:37.002 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:46:37 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:46:37 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:46:37 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:46:37.872 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:46:39 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:46:39 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:46:39 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:46:39.003 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:46:39 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:46:39 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:46:39 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:46:39.875 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:46:40 np0005548731 nova_compute[232433]: 2025-12-06 08:46:40.138 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:46:40 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:46:41 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:46:41 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:46:41 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:46:41.005 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:46:41 np0005548731 nova_compute[232433]: 2025-12-06 08:46:41.582 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:46:41 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:46:41 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:46:41 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:46:41.878 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:46:43 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:46:43 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:46:43 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:46:43.007 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:46:43 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:46:43 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:46:43 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:46:43.882 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:46:45 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:46:45 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:46:45 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:46:45.009 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:46:45 np0005548731 nova_compute[232433]: 2025-12-06 08:46:45.140 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:46:45 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:46:45 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:46:45 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:46:45.885 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:46:45 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:46:46 np0005548731 nova_compute[232433]: 2025-12-06 08:46:46.585 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:46:46 np0005548731 systemd-logind[794]: New session 76 of user zuul.
Dec  6 03:46:46 np0005548731 systemd[1]: Started Session 76 of User zuul.
Dec  6 03:46:47 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:46:47 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:46:47 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:46:47.011 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:46:47 np0005548731 nova_compute[232433]: 2025-12-06 08:46:47.100 232437 DEBUG oslo_service.periodic_task [None req-babe6d6b-cb2b-4b4e-8d69-29338748025a - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  6 03:46:47 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:46:47 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:46:47 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:46:47.888 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:46:49 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:46:49 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:46:49 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:46:49.012 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:46:49 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  6 03:46:49 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 03:46:49 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  6 03:46:49 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:46:49 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:46:49 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:46:49.891 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:46:50 np0005548731 nova_compute[232433]: 2025-12-06 08:46:50.169 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:46:50 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "status"} v 0) v1
Dec  6 03:46:50 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/584181407' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Dec  6 03:46:50 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:46:51 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:46:51 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 03:46:51 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:46:51.014 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 03:46:51 np0005548731 nova_compute[232433]: 2025-12-06 08:46:51.588 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:46:51 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:46:51 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:46:51 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:46:51.894 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:46:53 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:46:53 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:46:53 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:46:53.017 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:46:53 np0005548731 ovs-vsctl[366386]: ovs|00001|db_ctl_base|ERR|no key "dpdk-init" in Open_vSwitch record "." column other_config
Dec  6 03:46:53 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:46:53 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:46:53 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:46:53.897 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:46:53 np0005548731 virtqemud[232080]: Failed to connect socket to '/var/run/libvirt/virtnetworkd-sock-ro': No such file or directory
Dec  6 03:46:53 np0005548731 virtqemud[232080]: Failed to connect socket to '/var/run/libvirt/virtnwfilterd-sock-ro': No such file or directory
Dec  6 03:46:53 np0005548731 virtqemud[232080]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Dec  6 03:46:54 np0005548731 ceph-mds[82074]: mds.cephfs.compute-2.tjfgow asok_command: cache status {prefix=cache status} (starting...)
Dec  6 03:46:54 np0005548731 ceph-mds[82074]: mds.cephfs.compute-2.tjfgow Can't run that command on an inactive MDS!
Dec  6 03:46:54 np0005548731 lvm[366733]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec  6 03:46:54 np0005548731 lvm[366733]: VG ceph_vg0 finished
Dec  6 03:46:54 np0005548731 ceph-mds[82074]: mds.cephfs.compute-2.tjfgow asok_command: client ls {prefix=client ls} (starting...)
Dec  6 03:46:54 np0005548731 ceph-mds[82074]: mds.cephfs.compute-2.tjfgow Can't run that command on an inactive MDS!
Dec  6 03:46:55 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:46:55 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:46:55 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:46:55.019 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:46:55 np0005548731 nova_compute[232433]: 2025-12-06 08:46:55.172 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:46:55 np0005548731 ceph-mds[82074]: mds.cephfs.compute-2.tjfgow asok_command: damage ls {prefix=damage ls} (starting...)
Dec  6 03:46:55 np0005548731 ceph-mds[82074]: mds.cephfs.compute-2.tjfgow Can't run that command on an inactive MDS!
Dec  6 03:46:55 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "report"} v 0) v1
Dec  6 03:46:55 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2466793297' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Dec  6 03:46:55 np0005548731 ceph-mds[82074]: mds.cephfs.compute-2.tjfgow asok_command: dump loads {prefix=dump loads} (starting...)
Dec  6 03:46:55 np0005548731 ceph-mds[82074]: mds.cephfs.compute-2.tjfgow Can't run that command on an inactive MDS!
Dec  6 03:46:55 np0005548731 ceph-mds[82074]: mds.cephfs.compute-2.tjfgow asok_command: dump tree {prefix=dump tree,root=/} (starting...)
Dec  6 03:46:55 np0005548731 ceph-mds[82074]: mds.cephfs.compute-2.tjfgow Can't run that command on an inactive MDS!
Dec  6 03:46:55 np0005548731 ceph-mds[82074]: mds.cephfs.compute-2.tjfgow asok_command: dump_blocked_ops {prefix=dump_blocked_ops} (starting...)
Dec  6 03:46:55 np0005548731 ceph-mds[82074]: mds.cephfs.compute-2.tjfgow Can't run that command on an inactive MDS!
Dec  6 03:46:55 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  6 03:46:55 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1438137494' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  6 03:46:55 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:46:55 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:46:55 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:46:55.900 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:46:55 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:46:55 np0005548731 ceph-mds[82074]: mds.cephfs.compute-2.tjfgow asok_command: dump_historic_ops {prefix=dump_historic_ops} (starting...)
Dec  6 03:46:55 np0005548731 ceph-mds[82074]: mds.cephfs.compute-2.tjfgow Can't run that command on an inactive MDS!
Dec  6 03:46:55 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 03:46:55 np0005548731 ceph-mon[77458]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec  6 03:46:56 np0005548731 ceph-mds[82074]: mds.cephfs.compute-2.tjfgow asok_command: dump_historic_ops_by_duration {prefix=dump_historic_ops_by_duration} (starting...)
Dec  6 03:46:56 np0005548731 ceph-mds[82074]: mds.cephfs.compute-2.tjfgow Can't run that command on an inactive MDS!
Dec  6 03:46:56 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "config log"} v 0) v1
Dec  6 03:46:56 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1607452064' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Dec  6 03:46:56 np0005548731 ceph-mds[82074]: mds.cephfs.compute-2.tjfgow asok_command: dump_ops_in_flight {prefix=dump_ops_in_flight} (starting...)
Dec  6 03:46:56 np0005548731 ceph-mds[82074]: mds.cephfs.compute-2.tjfgow Can't run that command on an inactive MDS!
Dec  6 03:46:56 np0005548731 ceph-mds[82074]: mds.cephfs.compute-2.tjfgow asok_command: get subtrees {prefix=get subtrees} (starting...)
Dec  6 03:46:56 np0005548731 ceph-mds[82074]: mds.cephfs.compute-2.tjfgow Can't run that command on an inactive MDS!
Dec  6 03:46:56 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "features"} v 0) v1
Dec  6 03:46:56 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3755701381' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Dec  6 03:46:56 np0005548731 nova_compute[232433]: 2025-12-06 08:46:56.590 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:46:56 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "config-key dump"} v 0) v1
Dec  6 03:46:56 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/161953549' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Dec  6 03:46:56 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "log last", "channel": "cephadm"} v 0) v1
Dec  6 03:46:56 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3388070215' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Dec  6 03:46:56 np0005548731 ceph-mds[82074]: mds.cephfs.compute-2.tjfgow asok_command: ops {prefix=ops} (starting...)
Dec  6 03:46:56 np0005548731 ceph-mds[82074]: mds.cephfs.compute-2.tjfgow Can't run that command on an inactive MDS!
Dec  6 03:46:57 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:46:57 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 03:46:57 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:46:57.020 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 03:46:57 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mgr dump"} v 0) v1
Dec  6 03:46:57 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1827320869' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Dec  6 03:46:57 np0005548731 ceph-mds[82074]: mds.cephfs.compute-2.tjfgow asok_command: session ls {prefix=session ls} (starting...)
Dec  6 03:46:57 np0005548731 ceph-mds[82074]: mds.cephfs.compute-2.tjfgow Can't run that command on an inactive MDS!
Dec  6 03:46:57 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mgr metadata"} v 0) v1
Dec  6 03:46:57 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/360114230' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Dec  6 03:46:57 np0005548731 ceph-mds[82074]: mds.cephfs.compute-2.tjfgow asok_command: status {prefix=status} (starting...)
Dec  6 03:46:57 np0005548731 ceph-mon[77458]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #211. Immutable memtables: 0.
Dec  6 03:46:57 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:46:57.601616) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec  6 03:46:57 np0005548731 ceph-mon[77458]: rocksdb: [db/flush_job.cc:856] [default] [JOB 135] Flushing memtable with next log file: 211
Dec  6 03:46:57 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765010817601878, "job": 135, "event": "flush_started", "num_memtables": 1, "num_entries": 2135, "num_deletes": 251, "total_data_size": 5270790, "memory_usage": 5339872, "flush_reason": "Manual Compaction"}
Dec  6 03:46:57 np0005548731 ceph-mon[77458]: rocksdb: [db/flush_job.cc:885] [default] [JOB 135] Level-0 flush table #212: started
Dec  6 03:46:57 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765010817618012, "cf_name": "default", "job": 135, "event": "table_file_creation", "file_number": 212, "file_size": 3445968, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 102043, "largest_seqno": 104173, "table_properties": {"data_size": 3437078, "index_size": 5511, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2309, "raw_key_size": 17756, "raw_average_key_size": 19, "raw_value_size": 3419286, "raw_average_value_size": 3761, "num_data_blocks": 241, "num_entries": 909, "num_filter_entries": 909, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765010624, "oldest_key_time": 1765010624, "file_creation_time": 1765010817, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bc01f9a1-fda8-4008-aee1-1fa5ff4371a1", "db_session_id": "CWBL8WMDM4D79BU86E9T", "orig_file_number": 212, "seqno_to_time_mapping": "N/A"}}
Dec  6 03:46:57 np0005548731 ceph-mon[77458]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 135] Flush lasted 16691 microseconds, and 6130 cpu microseconds.
Dec  6 03:46:57 np0005548731 ceph-mon[77458]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  6 03:46:57 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:46:57.618315) [db/flush_job.cc:967] [default] [JOB 135] Level-0 flush table #212: 3445968 bytes OK
Dec  6 03:46:57 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:46:57.618414) [db/memtable_list.cc:519] [default] Level-0 commit table #212 started
Dec  6 03:46:57 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:46:57.620368) [db/memtable_list.cc:722] [default] Level-0 commit table #212: memtable #1 done
Dec  6 03:46:57 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:46:57.620381) EVENT_LOG_v1 {"time_micros": 1765010817620377, "job": 135, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec  6 03:46:57 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:46:57.620396) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec  6 03:46:57 np0005548731 ceph-mon[77458]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 135] Try to delete WAL files size 5261294, prev total WAL file size 5261294, number of live WAL files 2.
Dec  6 03:46:57 np0005548731 ceph-mon[77458]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000208.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  6 03:46:57 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:46:57.621989) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6B7600353030' seq:72057594037927935, type:22 .. '6B7600373532' seq:0, type:0; will stop at (end)
Dec  6 03:46:57 np0005548731 ceph-mon[77458]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 136] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec  6 03:46:57 np0005548731 ceph-mon[77458]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 135 Base level 0, inputs: [212(3365KB)], [210(12MB)]
Dec  6 03:46:57 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765010817622054, "job": 136, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [212], "files_L6": [210], "score": -1, "input_data_size": 16390613, "oldest_snapshot_seqno": -1}
Dec  6 03:46:57 np0005548731 ceph-mon[77458]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 136] Generated table #213: 13152 keys, 15263209 bytes, temperature: kUnknown
Dec  6 03:46:57 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765010817699365, "cf_name": "default", "job": 136, "event": "table_file_creation", "file_number": 213, "file_size": 15263209, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 15181949, "index_size": 47291, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 32901, "raw_key_size": 349511, "raw_average_key_size": 26, "raw_value_size": 14955292, "raw_average_value_size": 1137, "num_data_blocks": 1775, "num_entries": 13152, "num_filter_entries": 13152, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765002564, "oldest_key_time": 0, "file_creation_time": 1765010817, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bc01f9a1-fda8-4008-aee1-1fa5ff4371a1", "db_session_id": "CWBL8WMDM4D79BU86E9T", "orig_file_number": 213, "seqno_to_time_mapping": "N/A"}}
Dec  6 03:46:57 np0005548731 ceph-mon[77458]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  6 03:46:57 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:46:57.699608) [db/compaction/compaction_job.cc:1663] [default] [JOB 136] Compacted 1@0 + 1@6 files to L6 => 15263209 bytes
Dec  6 03:46:57 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:46:57.700943) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 211.8 rd, 197.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.3, 12.3 +0.0 blob) out(14.6 +0.0 blob), read-write-amplify(9.2) write-amplify(4.4) OK, records in: 13671, records dropped: 519 output_compression: NoCompression
Dec  6 03:46:57 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:46:57.700959) EVENT_LOG_v1 {"time_micros": 1765010817700951, "job": 136, "event": "compaction_finished", "compaction_time_micros": 77384, "compaction_time_cpu_micros": 40086, "output_level": 6, "num_output_files": 1, "total_output_size": 15263209, "num_input_records": 13671, "num_output_records": 13152, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec  6 03:46:57 np0005548731 ceph-mon[77458]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000212.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  6 03:46:57 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765010817701479, "job": 136, "event": "table_file_deletion", "file_number": 212}
Dec  6 03:46:57 np0005548731 ceph-mon[77458]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000210.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  6 03:46:57 np0005548731 ceph-mon[77458]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765010817703291, "job": 136, "event": "table_file_deletion", "file_number": 210}
Dec  6 03:46:57 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:46:57.621894) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 03:46:57 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:46:57.703358) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 03:46:57 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:46:57.703363) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 03:46:57 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:46:57.703365) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 03:46:57 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:46:57.703367) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 03:46:57 np0005548731 ceph-mon[77458]: rocksdb: (Original Log Time 2025/12/06-08:46:57.703368) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  6 03:46:57 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "features"} v 0) v1
Dec  6 03:46:57 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2119058871' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Dec  6 03:46:57 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:46:57 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:46:57 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:46:57.903 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:46:57 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mgr module ls"} v 0) v1
Dec  6 03:46:57 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3589491132' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Dec  6 03:46:58 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "health", "detail": "detail"} v 0) v1
Dec  6 03:46:58 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3233308557' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Dec  6 03:46:58 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Dec  6 03:46:58 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3987429521' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Dec  6 03:46:58 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mgr stat"} v 0) v1
Dec  6 03:46:58 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1567968785' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Dec  6 03:46:59 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:46:59 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:46:59 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:46:59.023 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:46:59 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} v 0) v1
Dec  6 03:46:59 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/951096703' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Dec  6 03:46:59 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mgr versions"} v 0) v1
Dec  6 03:46:59 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3234307305' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Dec  6 03:46:59 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} v 0) v1
Dec  6 03:46:59 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1574297695' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Dec  6 03:46:59 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:46:59 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:46:59 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:46:59.907 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:46:59 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mgr dump"} v 0) v1
Dec  6 03:46:59 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3285086935' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Dec  6 03:47:00 np0005548731 nova_compute[232433]: 2025-12-06 08:47:00.173 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:47:00 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mgr metadata"} v 0) v1
Dec  6 03:47:00 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3703404667' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Dec  6 03:47:00 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mgr module ls"} v 0) v1
Dec  6 03:47:00 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1156329899' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 538107904 unmapped: 93003776 heap: 631111680 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5116294 data_alloc: 218103808 data_used: 7135232
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 538124288 unmapped: 92987392 heap: 631111680 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 416 heartbeat osd_stat(store_statfs(0x199b3c000/0x0/0x1bfc00000, data 0x15c4cf2/0x17e2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x248df9c6), peers [0,1] op hist [])
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 538124288 unmapped: 92987392 heap: 631111680 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 538124288 unmapped: 92987392 heap: 631111680 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 416 ms_handle_reset con 0x5612d3b8f400 session 0x5612d2647a40
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 538124288 unmapped: 92987392 heap: 631111680 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 538124288 unmapped: 92987392 heap: 631111680 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5116294 data_alloc: 218103808 data_used: 7135232
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 538124288 unmapped: 92987392 heap: 631111680 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 416 heartbeat osd_stat(store_statfs(0x199b3c000/0x0/0x1bfc00000, data 0x15c4cf2/0x17e2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x248df9c6), peers [0,1] op hist [])
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 538124288 unmapped: 92987392 heap: 631111680 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 538124288 unmapped: 92987392 heap: 631111680 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 38.836669922s of 38.861549377s, submitted: 17
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 539516928 unmapped: 91594752 heap: 631111680 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 416 ms_handle_reset con 0x5612d8e80400 session 0x5612d2e025a0
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 416 ms_handle_reset con 0x5612d156c800 session 0x5612dbe0fa40
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 416 ms_handle_reset con 0x5612d7413800 session 0x5612d4e83e00
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 416 ms_handle_reset con 0x5612d3f4f400 session 0x5612d314e000
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 416 ms_handle_reset con 0x5612d3f4e000 session 0x5612d077c000
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 539533312 unmapped: 91578368 heap: 631111680 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5221392 data_alloc: 218103808 data_used: 7135232
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 539533312 unmapped: 91578368 heap: 631111680 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 416 heartbeat osd_stat(store_statfs(0x198b65000/0x0/0x1bfc00000, data 0x218ad54/0x23a9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24cef9c6), peers [0,1] op hist [])
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 539533312 unmapped: 91578368 heap: 631111680 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 416 ms_handle_reset con 0x5612d7ff6c00 session 0x5612d3447680
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 416 ms_handle_reset con 0x5612d1b52c00 session 0x5612d2926f00
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 539533312 unmapped: 91578368 heap: 631111680 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 539533312 unmapped: 91578368 heap: 631111680 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 416 ms_handle_reset con 0x5612d1b54800 session 0x5612d314e3c0
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 416 ms_handle_reset con 0x5612d4653c00 session 0x5612d34630e0
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 416 ms_handle_reset con 0x5612d2943000 session 0x5612d0633a40
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 539467776 unmapped: 91643904 heap: 631111680 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 416 ms_handle_reset con 0x5612d2823400 session 0x5612d30df860
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5223127 data_alloc: 218103808 data_used: 7139328
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 539394048 unmapped: 91717632 heap: 631111680 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 416 ms_handle_reset con 0x5612d3b8fc00 session 0x5612d31b1c20
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 416 ms_handle_reset con 0x5612d3695c00 session 0x5612dbe0e1e0
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 539410432 unmapped: 91701248 heap: 631111680 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 416 heartbeat osd_stat(store_statfs(0x198b63000/0x0/0x1bfc00000, data 0x218ad87/0x23ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24cef9c6), peers [0,1] op hist [])
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 539410432 unmapped: 91701248 heap: 631111680 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 539410432 unmapped: 91701248 heap: 631111680 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 539410432 unmapped: 91701248 heap: 631111680 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5302612 data_alloc: 234881024 data_used: 18059264
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 539410432 unmapped: 91701248 heap: 631111680 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 416 heartbeat osd_stat(store_statfs(0x198b63000/0x0/0x1bfc00000, data 0x218ad87/0x23ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24cef9c6), peers [0,1] op hist [])
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 539410432 unmapped: 91701248 heap: 631111680 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 539410432 unmapped: 91701248 heap: 631111680 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 416 heartbeat osd_stat(store_statfs(0x198b63000/0x0/0x1bfc00000, data 0x218ad87/0x23ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24cef9c6), peers [0,1] op hist [])
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 14.993692398s of 15.341640472s, submitted: 70
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 416 ms_handle_reset con 0x5612d803dc00 session 0x5612daac25a0
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 416 ms_handle_reset con 0x5612dea3e800 session 0x5612d4e83c20
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 539410432 unmapped: 91701248 heap: 631111680 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 539410432 unmapped: 91701248 heap: 631111680 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 416 heartbeat osd_stat(store_statfs(0x198b63000/0x0/0x1bfc00000, data 0x218ad87/0x23ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24cef9c6), peers [0,1] op hist [0,0,1])
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 416 ms_handle_reset con 0x5612d2823400 session 0x5612d314f0e0
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5301888 data_alloc: 234881024 data_used: 18059264
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 539410432 unmapped: 91701248 heap: 631111680 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 539410432 unmapped: 91701248 heap: 631111680 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #59. Immutable memtables: 15.
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 416 heartbeat osd_stat(store_statfs(0x198b63000/0x0/0x1bfc00000, data 0x218ad87/0x23ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24cef9c6), peers [0,1] op hist [])
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 544342016 unmapped: 86769664 heap: 631111680 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 544350208 unmapped: 86761472 heap: 631111680 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 416 ms_handle_reset con 0x5612cff93c00 session 0x5612d077c3c0
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 416 ms_handle_reset con 0x5612d4652800 session 0x5612e1712000
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 544350208 unmapped: 86761472 heap: 631111680 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5396130 data_alloc: 234881024 data_used: 19505152
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 544776192 unmapped: 86335488 heap: 631111680 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 544776192 unmapped: 86335488 heap: 631111680 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 544776192 unmapped: 86335488 heap: 631111680 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 416 heartbeat osd_stat(store_statfs(0x196eb3000/0x0/0x1bfc00000, data 0x2c99d87/0x2eba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25e8f9c6), peers [0,1] op hist [])
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 544776192 unmapped: 86335488 heap: 631111680 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 416 ms_handle_reset con 0x5612dea3d800 session 0x5612d3463a40
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 416 ms_handle_reset con 0x5612d297a000 session 0x5612d0a634a0
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 544776192 unmapped: 86335488 heap: 631111680 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5396450 data_alloc: 234881024 data_used: 19513344
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.783555031s of 12.358042717s, submitted: 142
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 544784384 unmapped: 86327296 heap: 631111680 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 416 ms_handle_reset con 0x5612d75cc000 session 0x5612d0773a40
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 544784384 unmapped: 86327296 heap: 631111680 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 544784384 unmapped: 86327296 heap: 631111680 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 416 ms_handle_reset con 0x5612d8e81400 session 0x5612dc4045a0
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 544784384 unmapped: 86327296 heap: 631111680 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 416 heartbeat osd_stat(store_statfs(0x196e92000/0x0/0x1bfc00000, data 0x2cbade9/0x2edc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25e8f9c6), peers [0,1] op hist [])
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 416 ms_handle_reset con 0x5612d3b8fc00 session 0x5612d3a1c5a0
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 416 ms_handle_reset con 0x5612ddc40c00 session 0x5612d2647a40
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 416 ms_handle_reset con 0x5612d7412800 session 0x5612d0a63a40
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 544784384 unmapped: 86327296 heap: 631111680 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5411537 data_alloc: 234881024 data_used: 19517440
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 544784384 unmapped: 86327296 heap: 631111680 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 416 heartbeat osd_stat(store_statfs(0x196cf6000/0x0/0x1bfc00000, data 0x2e57d87/0x3078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25e8f9c6), peers [0,1] op hist [])
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 544784384 unmapped: 86327296 heap: 631111680 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 416 handle_osd_map epochs [416,417], i have 416, src has [1,417]
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 417 ms_handle_reset con 0x5612d3f50c00 session 0x5612d0b30f00
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 545841152 unmapped: 85270528 heap: 631111680 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 545841152 unmapped: 85270528 heap: 631111680 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 545841152 unmapped: 85270528 heap: 631111680 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 417 heartbeat osd_stat(store_statfs(0x196cf2000/0x0/0x1bfc00000, data 0x2e59a53/0x307b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25e8f9c6), peers [0,1] op hist [])
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5415239 data_alloc: 234881024 data_used: 19525632
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 545841152 unmapped: 85270528 heap: 631111680 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 545841152 unmapped: 85270528 heap: 631111680 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 545841152 unmapped: 85270528 heap: 631111680 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.294777870s of 12.366117477s, submitted: 26
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 545841152 unmapped: 85270528 heap: 631111680 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 545841152 unmapped: 85270528 heap: 631111680 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 417 heartbeat osd_stat(store_statfs(0x196cf0000/0x0/0x1bfc00000, data 0x2e5ba53/0x307d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25e8f9c6), peers [0,1] op hist [])
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5414435 data_alloc: 234881024 data_used: 19525632
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 417 heartbeat osd_stat(store_statfs(0x196cf0000/0x0/0x1bfc00000, data 0x2e5ca53/0x307e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25e8f9c6), peers [0,1] op hist [])
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 545841152 unmapped: 85270528 heap: 631111680 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 417 ms_handle_reset con 0x5612d1b55800 session 0x5612d160b680
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 417 ms_handle_reset con 0x5612d8e81400 session 0x5612e17130e0
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 417 ms_handle_reset con 0x5612d9184c00 session 0x5612d2e02780
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 417 ms_handle_reset con 0x5612d1598000 session 0x5612dc4052c0
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 417 ms_handle_reset con 0x5612d1b55800 session 0x5612d2e03e00
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 417 ms_handle_reset con 0x5612d8e81400 session 0x5612d3446f00
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 545497088 unmapped: 97165312 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 417 ms_handle_reset con 0x5612d3f50c00 session 0x5612d2bb5680
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 417 ms_handle_reset con 0x5612d9184c00 session 0x5612d06325a0
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 417 ms_handle_reset con 0x5612d1599400 session 0x5612d3446000
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 417 ms_handle_reset con 0x5612d1b55800 session 0x5612d0b630e0
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 417 ms_handle_reset con 0x5612d3f50c00 session 0x5612d0614960
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 417 heartbeat osd_stat(store_statfs(0x195cc5000/0x0/0x1bfc00000, data 0x3e84ad5/0x40a9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25e8f9c6), peers [0,1] op hist [])
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 545497088 unmapped: 97165312 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 545497088 unmapped: 97165312 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 545497088 unmapped: 97165312 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5540221 data_alloc: 234881024 data_used: 19525632
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 417 ms_handle_reset con 0x5612d0bc7400 session 0x5612d0b62d20
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 545497088 unmapped: 97165312 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 417 ms_handle_reset con 0x5612d3f4f400 session 0x5612d29272c0
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 417 heartbeat osd_stat(store_statfs(0x195cc5000/0x0/0x1bfc00000, data 0x3e84ad5/0x40a9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25e8f9c6), peers [0,1] op hist [])
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 417 heartbeat osd_stat(store_statfs(0x195cc5000/0x0/0x1bfc00000, data 0x3e84ad5/0x40a9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25e8f9c6), peers [0,1] op hist [])
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 417 ms_handle_reset con 0x5612d2945c00 session 0x5612dbe0ed20
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 545497088 unmapped: 97165312 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 417 ms_handle_reset con 0x5612d0bc7400 session 0x5612d314e780
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 545497088 unmapped: 97165312 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 7.166153431s of 10.152449608s, submitted: 24
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 417 ms_handle_reset con 0x5612d3f4f400 session 0x5612daac34a0
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 417 ms_handle_reset con 0x5612d1b55800 session 0x5612d26465a0
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 417 ms_handle_reset con 0x5612d3f50c00 session 0x5612d480ab40
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 417 ms_handle_reset con 0x5612dea3cc00 session 0x5612d18e9e00
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 545513472 unmapped: 97148928 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 545693696 unmapped: 96968704 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5671157 data_alloc: 251658240 data_used: 36806656
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 553189376 unmapped: 89473024 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 417 heartbeat osd_stat(store_statfs(0x195c9f000/0x0/0x1bfc00000, data 0x3ea8b1b/0x40cf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25e8f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,2])
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 553197568 unmapped: 89464832 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 417 heartbeat osd_stat(store_statfs(0x195c9f000/0x0/0x1bfc00000, data 0x3ea8b1b/0x40cf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x25e8f9c6), peers [0,1] op hist [])
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 553197568 unmapped: 89464832 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 417 ms_handle_reset con 0x5612d0bc7400 session 0x5612cf123e00
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 417 ms_handle_reset con 0x5612d1b55800 session 0x5612dbe0e1e0
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 553197568 unmapped: 89464832 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 553205760 unmapped: 89456640 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 417 heartbeat osd_stat(store_statfs(0x196cdf000/0x0/0x1bfc00000, data 0x3ea8b1b/0x40cf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5670929 data_alloc: 251658240 data_used: 36814848
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 553205760 unmapped: 89456640 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 553205760 unmapped: 89456640 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 417 ms_handle_reset con 0x5612d1b53c00 session 0x5612d3a1d680
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 553213952 unmapped: 89448448 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 553213952 unmapped: 89448448 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 417 heartbeat osd_stat(store_statfs(0x196cdf000/0x0/0x1bfc00000, data 0x3ea8af8/0x40ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 417 ms_handle_reset con 0x5612d297b000 session 0x5612d3447e00
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.533593178s of 11.857751846s, submitted: 49
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 553213952 unmapped: 89448448 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5669640 data_alloc: 251658240 data_used: 36814848
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 553222144 unmapped: 89440256 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 417 ms_handle_reset con 0x5612d1987000 session 0x5612d0b31860
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 417 ms_handle_reset con 0x5612d0bc7400 session 0x5612d18e9e00
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 554221568 unmapped: 88440832 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 554229760 unmapped: 88432640 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 417 handle_osd_map epochs [417,418], i have 417, src has [1,418]
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 417 heartbeat osd_stat(store_statfs(0x196722000/0x0/0x1bfc00000, data 0x4462a86/0x4686000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 418 ms_handle_reset con 0x5612d1b53c00 session 0x5612d26465a0
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 554237952 unmapped: 88424448 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 418 ms_handle_reset con 0x5612d1b55800 session 0x5612daac34a0
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 418 heartbeat osd_stat(store_statfs(0x1966bb000/0x0/0x1bfc00000, data 0x44cb7a6/0x46f0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 418 ms_handle_reset con 0x5612d297b000 session 0x5612d314e780
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 554639360 unmapped: 88023040 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 418 heartbeat osd_stat(store_statfs(0x19680b000/0x0/0x1bfc00000, data 0x43757a6/0x459a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5716002 data_alloc: 251658240 data_used: 36581376
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 554639360 unmapped: 88023040 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 554639360 unmapped: 88023040 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 418 ms_handle_reset con 0x5612d4653c00 session 0x5612d0b62d20
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 418 ms_handle_reset con 0x5612d0bc7400 session 0x5612d0614960
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 554639360 unmapped: 88023040 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 554639360 unmapped: 88023040 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 418 heartbeat osd_stat(store_statfs(0x19680b000/0x0/0x1bfc00000, data 0x43757a6/0x459a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 554647552 unmapped: 88014848 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 8.833458900s of 10.137002945s, submitted: 110
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 418 ms_handle_reset con 0x5612d1b53c00 session 0x5612d3446000
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 418 handle_osd_map epochs [418,419], i have 418, src has [1,419]
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5718322 data_alloc: 251658240 data_used: 36593664
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 554663936 unmapped: 87998464 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 419 heartbeat osd_stat(store_statfs(0x1967e9000/0x0/0x1bfc00000, data 0x439c3ca/0x45c4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 554663936 unmapped: 87998464 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 419 ms_handle_reset con 0x5612d3f51800 session 0x5612d06325a0
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 419 ms_handle_reset con 0x5612d2823400 session 0x5612d3446f00
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 554672128 unmapped: 87990272 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 419 ms_handle_reset con 0x5612d0bd8000 session 0x5612d2e02780
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 419 ms_handle_reset con 0x5612d297b000 session 0x5612d0a63a40
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 554688512 unmapped: 87973888 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 419 ms_handle_reset con 0x5612d90a9800 session 0x5612d26461e0
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 419 ms_handle_reset con 0x5612d7413800 session 0x5612d4e82d20
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 552427520 unmapped: 90234880 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 419 ms_handle_reset con 0x5612d0bc7400 session 0x5612d3a1c5a0
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5469723 data_alloc: 234881024 data_used: 19546112
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 552427520 unmapped: 90234880 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 419 heartbeat osd_stat(store_statfs(0x1979c8000/0x0/0x1bfc00000, data 0x31c0335/0x33e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 552427520 unmapped: 90234880 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 552427520 unmapped: 90234880 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 419 ms_handle_reset con 0x5612d157c000 session 0x5612dbe0f4a0
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 552419328 unmapped: 90243072 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 419 heartbeat osd_stat(store_statfs(0x1979c8000/0x0/0x1bfc00000, data 0x31c0335/0x33e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 552419328 unmapped: 90243072 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5469723 data_alloc: 234881024 data_used: 19546112
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 552419328 unmapped: 90243072 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.731872559s of 11.440790176s, submitted: 70
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 419 ms_handle_reset con 0x5612d1b52c00 session 0x5612d0b63680
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 419 ms_handle_reset con 0x5612d1b54800 session 0x5612d30b6780
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 419 heartbeat osd_stat(store_statfs(0x1979c8000/0x0/0x1bfc00000, data 0x31c0335/0x33e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 542916608 unmapped: 99745792 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 419 ms_handle_reset con 0x5612ddc41800 session 0x5612d0b71680
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 419 heartbeat osd_stat(store_statfs(0x1990cf000/0x0/0x1bfc00000, data 0x1abb2a0/0x1cdd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 542916608 unmapped: 99745792 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 542916608 unmapped: 99745792 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 419 ms_handle_reset con 0x5612d2e43000 session 0x5612daac2f00
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 419 ms_handle_reset con 0x5612d1b51800 session 0x5612d0a54780
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 542916608 unmapped: 99745792 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5208940 data_alloc: 218103808 data_used: 7163904
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 542916608 unmapped: 99745792 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 419 heartbeat osd_stat(store_statfs(0x1990cf000/0x0/0x1bfc00000, data 0x1abb2a0/0x1cdd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 542916608 unmapped: 99745792 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 419 ms_handle_reset con 0x5612d1b51800 session 0x5612d30dfe00
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 419 ms_handle_reset con 0x5612d1b52c00 session 0x5612d31b0000
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 542932992 unmapped: 99729408 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 542932992 unmapped: 99729408 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 542932992 unmapped: 99729408 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5249166 data_alloc: 218103808 data_used: 12263424
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 419 heartbeat osd_stat(store_statfs(0x1990d0000/0x0/0x1bfc00000, data 0x1abb2c3/0x1cde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 542932992 unmapped: 99729408 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 542932992 unmapped: 99729408 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 542932992 unmapped: 99729408 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 419 heartbeat osd_stat(store_statfs(0x1990d0000/0x0/0x1bfc00000, data 0x1abb2c3/0x1cde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 542932992 unmapped: 99729408 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 419 heartbeat osd_stat(store_statfs(0x1990d0000/0x0/0x1bfc00000, data 0x1abb2c3/0x1cde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 542932992 unmapped: 99729408 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5249166 data_alloc: 218103808 data_used: 12263424
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 542932992 unmapped: 99729408 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 542932992 unmapped: 99729408 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 542932992 unmapped: 99729408 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 542932992 unmapped: 99729408 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 419 heartbeat osd_stat(store_statfs(0x1990d0000/0x0/0x1bfc00000, data 0x1abb2c3/0x1cde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 542932992 unmapped: 99729408 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5249166 data_alloc: 218103808 data_used: 12263424
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 542932992 unmapped: 99729408 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 19.600269318s of 19.742877960s, submitted: 62
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 547971072 unmapped: 94691328 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 549396480 unmapped: 93265920 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 549453824 unmapped: 93208576 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 419 heartbeat osd_stat(store_statfs(0x198633000/0x0/0x1bfc00000, data 0x25582c3/0x277b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 549453824 unmapped: 93208576 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5331156 data_alloc: 218103808 data_used: 12283904
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 549453824 unmapped: 93208576 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 549453824 unmapped: 93208576 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 549453824 unmapped: 93208576 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 419 heartbeat osd_stat(store_statfs(0x198633000/0x0/0x1bfc00000, data 0x25582c3/0x277b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 419 heartbeat osd_stat(store_statfs(0x198633000/0x0/0x1bfc00000, data 0x25582c3/0x277b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 549453824 unmapped: 93208576 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 549453824 unmapped: 93208576 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5331156 data_alloc: 218103808 data_used: 12283904
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 549462016 unmapped: 93200384 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 419 ms_handle_reset con 0x5612d3695c00 session 0x5612d480b0e0
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 419 ms_handle_reset con 0x5612d3b8fc00 session 0x5612d0aead20
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 419 ms_handle_reset con 0x5612d0bc6800 session 0x5612d30df4a0
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 419 ms_handle_reset con 0x5612d0bc6800 session 0x5612d34634a0
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.445962906s of 10.428418159s, submitted: 99
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 419 ms_handle_reset con 0x5612d1b51800 session 0x5612d06154a0
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 419 ms_handle_reset con 0x5612d1b52c00 session 0x5612d4e83680
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 419 ms_handle_reset con 0x5612d3695c00 session 0x5612d5b0fa40
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 419 ms_handle_reset con 0x5612d3b8fc00 session 0x5612dc404000
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 419 ms_handle_reset con 0x5612d0bc6800 session 0x5612d26463c0
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 550453248 unmapped: 92209152 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 419 heartbeat osd_stat(store_statfs(0x19800e000/0x0/0x1bfc00000, data 0x2b7c2d3/0x2da0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 550453248 unmapped: 92209152 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 550453248 unmapped: 92209152 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 550453248 unmapped: 92209152 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5376529 data_alloc: 218103808 data_used: 12283904
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 419 heartbeat osd_stat(store_statfs(0x19800e000/0x0/0x1bfc00000, data 0x2b7c2d3/0x2da0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 550461440 unmapped: 92200960 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 419 handle_osd_map epochs [419,420], i have 419, src has [1,420]
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 420 ms_handle_reset con 0x5612d2942000 session 0x5612dc4054a0
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 550469632 unmapped: 92192768 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 420 handle_osd_map epochs [420,421], i have 420, src has [1,421]
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 421 ms_handle_reset con 0x5612d3b8f000 session 0x5612d30cb0e0
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 421 heartbeat osd_stat(store_statfs(0x198008000/0x0/0x1bfc00000, data 0x2b7e011/0x2da5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 550477824 unmapped: 92184576 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 421 ms_handle_reset con 0x5612d3b8d800 session 0x5612dc404780
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 421 ms_handle_reset con 0x5612d0bd6c00 session 0x5612d0b623c0
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 550486016 unmapped: 92176384 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 421 ms_handle_reset con 0x5612d0bc6800 session 0x5612d34621e0
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 421 ms_handle_reset con 0x5612d2942000 session 0x5612d0a15c20
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 550789120 unmapped: 91873280 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5393433 data_alloc: 218103808 data_used: 12304384
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 550789120 unmapped: 91873280 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 550805504 unmapped: 91856896 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 421 heartbeat osd_stat(store_statfs(0x197fdf000/0x0/0x1bfc00000, data 0x2ba3d0b/0x2dce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 550805504 unmapped: 91856896 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 550805504 unmapped: 91856896 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 421 heartbeat osd_stat(store_statfs(0x197fdf000/0x0/0x1bfc00000, data 0x2ba3d0b/0x2dce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 550805504 unmapped: 91856896 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5431033 data_alloc: 234881024 data_used: 17612800
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 550805504 unmapped: 91856896 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 550805504 unmapped: 91856896 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 15.506317139s of 15.622153282s, submitted: 31
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 421 ms_handle_reset con 0x5612d9182400 session 0x5612d26461e0
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 421 ms_handle_reset con 0x5612d3695c00 session 0x5612d30ca780
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 421 heartbeat osd_stat(store_statfs(0x197fdf000/0x0/0x1bfc00000, data 0x2ba3d6d/0x2dcf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 550805504 unmapped: 91856896 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 421 heartbeat osd_stat(store_statfs(0x197fdf000/0x0/0x1bfc00000, data 0x2ba3d6d/0x2dcf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 550805504 unmapped: 91856896 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 550805504 unmapped: 91856896 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5431743 data_alloc: 234881024 data_used: 17616896
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 550805504 unmapped: 91856896 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 421 ms_handle_reset con 0x5612d14dc800 session 0x5612d34470e0
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 421 heartbeat osd_stat(store_statfs(0x197fdf000/0x0/0x1bfc00000, data 0x2ba3d6d/0x2dcf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 550805504 unmapped: 91856896 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 421 ms_handle_reset con 0x5612d0bc6800 session 0x5612d2bb54a0
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 549969920 unmapped: 92692480 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 421 ms_handle_reset con 0x5612d2942000 session 0x5612d3463e00
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 421 ms_handle_reset con 0x5612d3695c00 session 0x5612d0a9b0e0
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 550174720 unmapped: 92487680 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 550199296 unmapped: 92463104 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 421 heartbeat osd_stat(store_statfs(0x197770000/0x0/0x1bfc00000, data 0x3718da0/0x363d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5543889 data_alloc: 234881024 data_used: 17956864
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 550215680 unmapped: 92446720 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 550215680 unmapped: 92446720 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 550215680 unmapped: 92446720 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 421 heartbeat osd_stat(store_statfs(0x197770000/0x0/0x1bfc00000, data 0x3718da0/0x363d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 550215680 unmapped: 92446720 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.979682922s of 12.179447174s, submitted: 77
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 550117376 unmapped: 92545024 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5540337 data_alloc: 234881024 data_used: 17960960
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 550117376 unmapped: 92545024 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 550117376 unmapped: 92545024 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 421 heartbeat osd_stat(store_statfs(0x19776e000/0x0/0x1bfc00000, data 0x371bda0/0x3640000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 550117376 unmapped: 92545024 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 550117376 unmapped: 92545024 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 550117376 unmapped: 92545024 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 421 heartbeat osd_stat(store_statfs(0x19776e000/0x0/0x1bfc00000, data 0x371bda0/0x3640000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5540337 data_alloc: 234881024 data_used: 17960960
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 550117376 unmapped: 92545024 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 552435712 unmapped: 90226688 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 421 ms_handle_reset con 0x5612d2bb2400 session 0x5612d0614000
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 421 ms_handle_reset con 0x5612d3d83000 session 0x5612d077da40
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 421 ms_handle_reset con 0x5612d3d83000 session 0x5612d0a9b680
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 421 ms_handle_reset con 0x5612d0bc6800 session 0x5612d30b65a0
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 549986304 unmapped: 92676096 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 549986304 unmapped: 92676096 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 8.111669540s of 10.333090782s, submitted: 17
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 421 heartbeat osd_stat(store_statfs(0x1975b3000/0x0/0x1bfc00000, data 0x38d4dd9/0x37fb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,14])
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 549986304 unmapped: 92676096 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 421 ms_handle_reset con 0x5612d2942000 session 0x5612d5b0fc20
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 421 ms_handle_reset con 0x5612d1b50c00 session 0x5612d3462000
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 421 ms_handle_reset con 0x5612d3f4f400 session 0x5612d160be00
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 421 ms_handle_reset con 0x5612d3f4f400 session 0x5612d30def00
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 421 ms_handle_reset con 0x5612d0bc6800 session 0x5612d160b0e0
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5656810 data_alloc: 234881024 data_used: 22204416
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 551059456 unmapped: 91602944 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 551059456 unmapped: 91602944 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 551485440 unmapped: 91176960 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 551485440 unmapped: 91176960 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 421 heartbeat osd_stat(store_statfs(0x196ce5000/0x0/0x1bfc00000, data 0x41a2e12/0x40c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 551485440 unmapped: 91176960 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5666562 data_alloc: 234881024 data_used: 22200320
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 551485440 unmapped: 91176960 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 551485440 unmapped: 91176960 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 551485440 unmapped: 91176960 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 551485440 unmapped: 91176960 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 421 heartbeat osd_stat(store_statfs(0x196ce4000/0x0/0x1bfc00000, data 0x41a3e12/0x40ca000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 551485440 unmapped: 91176960 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 421 ms_handle_reset con 0x5612d5b1b000 session 0x5612d0b70d20
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 421 heartbeat osd_stat(store_statfs(0x196ce4000/0x0/0x1bfc00000, data 0x41a3e12/0x40ca000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5666782 data_alloc: 234881024 data_used: 22200320
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 421 ms_handle_reset con 0x5612d90a9000 session 0x5612d31b1c20
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 551485440 unmapped: 91176960 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 421 ms_handle_reset con 0x5612ddc3f000 session 0x5612d2e02960
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 551485440 unmapped: 91176960 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.538869858s of 12.626841545s, submitted: 46
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 421 ms_handle_reset con 0x5612ddc3f000 session 0x5612d5b0ed20
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 551067648 unmapped: 91594752 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 421 heartbeat osd_stat(store_statfs(0x196ce4000/0x0/0x1bfc00000, data 0x41a3e12/0x40ca000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 551075840 unmapped: 91586560 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 552230912 unmapped: 90431488 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5717006 data_alloc: 234881024 data_used: 29851648
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 552230912 unmapped: 90431488 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 552230912 unmapped: 90431488 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 552239104 unmapped: 90423296 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 421 heartbeat osd_stat(store_statfs(0x196cd9000/0x0/0x1bfc00000, data 0x41aae12/0x40d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 552247296 unmapped: 90415104 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 552247296 unmapped: 90415104 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5718564 data_alloc: 234881024 data_used: 29855744
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 552247296 unmapped: 90415104 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 421 ms_handle_reset con 0x5612d9182400 session 0x5612dbe0e780
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 421 ms_handle_reset con 0x5612d0bc7400 session 0x5612d0614f00
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 421 ms_handle_reset con 0x5612d2944400 session 0x5612d0b63680
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 552247296 unmapped: 90415104 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 421 heartbeat osd_stat(store_statfs(0x196d07000/0x0/0x1bfc00000, data 0x4181ddf/0x40a6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 552247296 unmapped: 90415104 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 421 ms_handle_reset con 0x5612d6c91400 session 0x5612d2e034a0
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.445158958s of 11.233171463s, submitted: 38
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 421 ms_handle_reset con 0x5612d0bc7400 session 0x5612d13ce000
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 552247296 unmapped: 90415104 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 421 ms_handle_reset con 0x5612d2944400 session 0x5612d2e034a0
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 421 handle_osd_map epochs [421,422], i have 421, src has [1,422]
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 422 ms_handle_reset con 0x5612d6c91400 session 0x5612d0614f00
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 552263680 unmapped: 90398720 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5731585 data_alloc: 234881024 data_used: 29708288
Dec  6 03:47:00 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 422 ms_handle_reset con 0x5612d1b54800 session 0x5612d2e03680
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 422 ms_handle_reset con 0x5612d2e43000 session 0x5612d0b701e0
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 553254912 unmapped: 89407488 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 422 heartbeat osd_stat(store_statfs(0x196896000/0x0/0x1bfc00000, data 0x42eaa3b/0x4517000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 422 ms_handle_reset con 0x5612d0bc7400 session 0x5612d5b0ed20
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 553320448 unmapped: 89341952 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 422 heartbeat osd_stat(store_statfs(0x19686a000/0x0/0x1bfc00000, data 0x4315a18/0x4541000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 553320448 unmapped: 89341952 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 422 ms_handle_reset con 0x5612d1b54800 session 0x5612d3463e00
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 422 ms_handle_reset con 0x5612d2944400 session 0x5612d2bb54a0
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 552312832 unmapped: 90349568 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 552312832 unmapped: 90349568 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5551451 data_alloc: 234881024 data_used: 20500480
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 552312832 unmapped: 90349568 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 422 heartbeat osd_stat(store_statfs(0x1977fc000/0x0/0x1bfc00000, data 0x3387a08/0x35b2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 552312832 unmapped: 90349568 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 552312832 unmapped: 90349568 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 552312832 unmapped: 90349568 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 422 handle_osd_map epochs [422,423], i have 422, src has [1,423]
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.225218773s of 10.995776176s, submitted: 130
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 423 ms_handle_reset con 0x5612d0bc6800 session 0x5612d4e825a0
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 423 heartbeat osd_stat(store_statfs(0x1977f5000/0x0/0x1bfc00000, data 0x338c5ba/0x35b8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [0,0,1])
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 542482432 unmapped: 100179968 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 423 ms_handle_reset con 0x5612d8e81c00 session 0x5612d0a15c20
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5382813 data_alloc: 218103808 data_used: 12779520
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 542482432 unmapped: 100179968 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 542482432 unmapped: 100179968 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 542482432 unmapped: 100179968 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 423 heartbeat osd_stat(store_statfs(0x198721000/0x0/0x1bfc00000, data 0x245d548/0x2687000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 423 handle_osd_map epochs [423,424], i have 423, src has [1,424]
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 542482432 unmapped: 100179968 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 424 ms_handle_reset con 0x5612d0bc6800 session 0x5612d0b63680
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 542498816 unmapped: 100163584 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5383304 data_alloc: 218103808 data_used: 12779520
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 542498816 unmapped: 100163584 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 424 heartbeat osd_stat(store_statfs(0x198726000/0x0/0x1bfc00000, data 0x245f23a/0x2688000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 542498816 unmapped: 100163584 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 424 ms_handle_reset con 0x5612d156c000 session 0x5612e1712000
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 424 ms_handle_reset con 0x5612d4652800 session 0x5612d314e3c0
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 542523392 unmapped: 100139008 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 424 ms_handle_reset con 0x5612d2bb2400 session 0x5612d0aeb680
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 542523392 unmapped: 100139008 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 424 handle_osd_map epochs [424,425], i have 424, src has [1,425]
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.770137787s of 10.003428459s, submitted: 95
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 542523392 unmapped: 100139008 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5232637 data_alloc: 218103808 data_used: 7196672
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 542523392 unmapped: 100139008 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 542523392 unmapped: 100139008 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 425 heartbeat osd_stat(store_statfs(0x1995b0000/0x0/0x1bfc00000, data 0x15d4dcc/0x17fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 542531584 unmapped: 100130816 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 542531584 unmapped: 100130816 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 425 heartbeat osd_stat(store_statfs(0x1995b0000/0x0/0x1bfc00000, data 0x15d4dcc/0x17fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 542531584 unmapped: 100130816 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5232637 data_alloc: 218103808 data_used: 7196672
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 425 heartbeat osd_stat(store_statfs(0x1995b0000/0x0/0x1bfc00000, data 0x15d4dcc/0x17fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 542531584 unmapped: 100130816 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 542531584 unmapped: 100130816 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 425 ms_handle_reset con 0x5612d2942000 session 0x5612d30b72c0
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 425 ms_handle_reset con 0x5612d2942000 session 0x5612d0b630e0
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 542531584 unmapped: 100130816 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 542531584 unmapped: 100130816 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.521475792s of 10.530011177s, submitted: 11
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 425 ms_handle_reset con 0x5612d3694800 session 0x5612e1713a40
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 542539776 unmapped: 100122624 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5234431 data_alloc: 218103808 data_used: 7204864
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 425 heartbeat osd_stat(store_statfs(0x1995b0000/0x0/0x1bfc00000, data 0x15d4dcc/0x17fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 542547968 unmapped: 100114432 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 542547968 unmapped: 100114432 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 425 ms_handle_reset con 0x5612ddc40c00 session 0x5612d2646f00
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 425 ms_handle_reset con 0x5612d75cc000 session 0x5612d30ca1e0
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 542564352 unmapped: 100098048 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 425 ms_handle_reset con 0x5612d0bc7400 session 0x5612d30cb680
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 425 heartbeat osd_stat(store_statfs(0x1995b1000/0x0/0x1bfc00000, data 0x15d4dcc/0x17fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [0,1])
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 425 ms_handle_reset con 0x5612d2942000 session 0x5612d0b30d20
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 542572544 unmapped: 100089856 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 542572544 unmapped: 100089856 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5290733 data_alloc: 218103808 data_used: 7204864
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 542572544 unmapped: 100089856 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 542572544 unmapped: 100089856 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 425 heartbeat osd_stat(store_statfs(0x198f1c000/0x0/0x1bfc00000, data 0x1c69dcc/0x1e92000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 542572544 unmapped: 100089856 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 542572544 unmapped: 100089856 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 542572544 unmapped: 100089856 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5290733 data_alloc: 218103808 data_used: 7204864
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 542572544 unmapped: 100089856 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 425 heartbeat osd_stat(store_statfs(0x198f1c000/0x0/0x1bfc00000, data 0x1c69dcc/0x1e92000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 542572544 unmapped: 100089856 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 542572544 unmapped: 100089856 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 425 heartbeat osd_stat(store_statfs(0x198f1c000/0x0/0x1bfc00000, data 0x1c69dcc/0x1e92000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 542572544 unmapped: 100089856 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 425 heartbeat osd_stat(store_statfs(0x198f1c000/0x0/0x1bfc00000, data 0x1c69dcc/0x1e92000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 542572544 unmapped: 100089856 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5290733 data_alloc: 218103808 data_used: 7204864
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 542572544 unmapped: 100089856 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 542580736 unmapped: 100081664 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 542580736 unmapped: 100081664 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 542580736 unmapped: 100081664 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 542580736 unmapped: 100081664 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 425 heartbeat osd_stat(store_statfs(0x198f1c000/0x0/0x1bfc00000, data 0x1c69dcc/0x1e92000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5290733 data_alloc: 218103808 data_used: 7204864
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 542580736 unmapped: 100081664 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 542580736 unmapped: 100081664 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 425 heartbeat osd_stat(store_statfs(0x198f1c000/0x0/0x1bfc00000, data 0x1c69dcc/0x1e92000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 542580736 unmapped: 100081664 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 24.075338364s of 24.247449875s, submitted: 37
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 425 ms_handle_reset con 0x5612d2e42800 session 0x5612dbe0f4a0
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 542580736 unmapped: 100081664 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 542588928 unmapped: 100073472 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5321642 data_alloc: 218103808 data_used: 10895360
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 542736384 unmapped: 99926016 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 542736384 unmapped: 99926016 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 425 heartbeat osd_stat(store_statfs(0x198ef7000/0x0/0x1bfc00000, data 0x1c8ddef/0x1eb7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 542736384 unmapped: 99926016 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 542736384 unmapped: 99926016 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 425 ms_handle_reset con 0x5612d75cc800 session 0x5612d30cb4a0
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 425 ms_handle_reset con 0x5612d6c91400 session 0x5612e1713860
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 425 ms_handle_reset con 0x5612d156c800 session 0x5612daac2000
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 425 ms_handle_reset con 0x5612d2942000 session 0x5612d0615e00
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 542736384 unmapped: 99926016 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 425 ms_handle_reset con 0x5612d2e42800 session 0x5612d314fc20
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5390066 data_alloc: 218103808 data_used: 14102528
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 543072256 unmapped: 99590144 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 425 heartbeat osd_stat(store_statfs(0x1989fa000/0x0/0x1bfc00000, data 0x218adef/0x23b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 543072256 unmapped: 99590144 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 543072256 unmapped: 99590144 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 425 heartbeat osd_stat(store_statfs(0x1989fa000/0x0/0x1bfc00000, data 0x218adef/0x23b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 543072256 unmapped: 99590144 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 425 heartbeat osd_stat(store_statfs(0x1989fa000/0x0/0x1bfc00000, data 0x218adef/0x23b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 543072256 unmapped: 99590144 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5390066 data_alloc: 218103808 data_used: 14102528
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 543072256 unmapped: 99590144 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.867590904s of 12.951579094s, submitted: 24
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 550084608 unmapped: 92577792 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 546947072 unmapped: 95715328 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 425 ms_handle_reset con 0x5612d3f4e800 session 0x5612d480a3c0
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 546955264 unmapped: 95707136 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 425 ms_handle_reset con 0x5612d0a61000 session 0x5612d34632c0
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 425 ms_handle_reset con 0x5612d3b8f000 session 0x5612d19932c0
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 425 ms_handle_reset con 0x5612d0a61000 session 0x5612dbe0f680
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 546955264 unmapped: 95707136 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 425 heartbeat osd_stat(store_statfs(0x1980d3000/0x0/0x1bfc00000, data 0x2ab0dff/0x2cdb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5470536 data_alloc: 218103808 data_used: 14548992
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 546955264 unmapped: 95707136 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 547053568 unmapped: 95608832 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 425 heartbeat osd_stat(store_statfs(0x1980d3000/0x0/0x1bfc00000, data 0x2ab0dff/0x2cdb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 425 heartbeat osd_stat(store_statfs(0x1980d3000/0x0/0x1bfc00000, data 0x2ab0dff/0x2cdb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 547119104 unmapped: 95543296 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 425 heartbeat osd_stat(store_statfs(0x1980d3000/0x0/0x1bfc00000, data 0x2ab0dff/0x2cdb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 547119104 unmapped: 95543296 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 547119104 unmapped: 95543296 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5497064 data_alloc: 234881024 data_used: 18354176
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 547119104 unmapped: 95543296 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 425 heartbeat osd_stat(store_statfs(0x1980d0000/0x0/0x1bfc00000, data 0x2ab3dff/0x2cde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 547119104 unmapped: 95543296 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 425 heartbeat osd_stat(store_statfs(0x1980d0000/0x0/0x1bfc00000, data 0x2ab3dff/0x2cde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 547119104 unmapped: 95543296 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 547119104 unmapped: 95543296 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 425 heartbeat osd_stat(store_statfs(0x1980d0000/0x0/0x1bfc00000, data 0x2ab3dff/0x2cde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 547119104 unmapped: 95543296 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5497064 data_alloc: 234881024 data_used: 18354176
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.660078049s of 14.272805214s, submitted: 75
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 547119104 unmapped: 95543296 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 425 ms_handle_reset con 0x5612d7ff6800 session 0x5612d2bb5860
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 547119104 unmapped: 95543296 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 425 ms_handle_reset con 0x5612d3695c00 session 0x5612d0a14000
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 547127296 unmapped: 95535104 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 425 ms_handle_reset con 0x5612d6c90800 session 0x5612dbe0e000
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 548225024 unmapped: 94437376 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 425 heartbeat osd_stat(store_statfs(0x1980bd000/0x0/0x1bfc00000, data 0x2ac6dff/0x2cf1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 425 heartbeat osd_stat(store_statfs(0x197a4d000/0x0/0x1bfc00000, data 0x312eddc/0x3358000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 548347904 unmapped: 94314496 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5547338 data_alloc: 234881024 data_used: 18329600
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 548364288 unmapped: 94298112 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 547708928 unmapped: 94953472 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 425 heartbeat osd_stat(store_statfs(0x1979b7000/0x0/0x1bfc00000, data 0x31ccddc/0x33f6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 547717120 unmapped: 94945280 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 547717120 unmapped: 94945280 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 547717120 unmapped: 94945280 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5563800 data_alloc: 234881024 data_used: 18759680
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 547717120 unmapped: 94945280 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 547717120 unmapped: 94945280 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.376773834s of 12.013556480s, submitted: 93
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 425 heartbeat osd_stat(store_statfs(0x1979ad000/0x0/0x1bfc00000, data 0x31d6ddc/0x3400000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 547717120 unmapped: 94945280 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 547717120 unmapped: 94945280 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 425 heartbeat osd_stat(store_statfs(0x19798a000/0x0/0x1bfc00000, data 0x31faddc/0x3424000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 547717120 unmapped: 94945280 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5561612 data_alloc: 234881024 data_used: 18759680
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 547717120 unmapped: 94945280 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 547717120 unmapped: 94945280 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 547725312 unmapped: 94937088 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 425 heartbeat osd_stat(store_statfs(0x19798a000/0x0/0x1bfc00000, data 0x31faddc/0x3424000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 547725312 unmapped: 94937088 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 547725312 unmapped: 94937088 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5561612 data_alloc: 234881024 data_used: 18759680
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 547725312 unmapped: 94937088 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 425 heartbeat osd_stat(store_statfs(0x19798a000/0x0/0x1bfc00000, data 0x31faddc/0x3424000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 547725312 unmapped: 94937088 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 425 heartbeat osd_stat(store_statfs(0x19798a000/0x0/0x1bfc00000, data 0x31faddc/0x3424000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 547725312 unmapped: 94937088 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.989348412s of 11.007923126s, submitted: 5
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 547725312 unmapped: 94937088 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 425 heartbeat osd_stat(store_statfs(0x197984000/0x0/0x1bfc00000, data 0x3200ddc/0x342a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 6602.4 total, 600.0 interval#012Cumulative writes: 77K writes, 312K keys, 77K commit groups, 1.0 writes per commit group, ingest: 0.32 GB, 0.05 MB/s#012Cumulative WAL: 77K writes, 29K syncs, 2.69 writes per sync, written: 0.32 GB, 0.05 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 4721 writes, 18K keys, 4721 commit groups, 1.0 writes per commit group, ingest: 21.16 MB, 0.04 MB/s#012Interval WAL: 4721 writes, 1802 syncs, 2.62 writes per sync, written: 0.02 GB, 0.04 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 547733504 unmapped: 94928896 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 425 ms_handle_reset con 0x5612d2942000 session 0x5612d3a1d2c0
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 425 ms_handle_reset con 0x5612d2e42800 session 0x5612d480a960
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5432472 data_alloc: 218103808 data_used: 14544896
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 425 ms_handle_reset con 0x5612d0a61000 session 0x5612d2e023c0
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 547741696 unmapped: 94920704 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 547741696 unmapped: 94920704 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 547741696 unmapped: 94920704 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 547741696 unmapped: 94920704 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 425 heartbeat osd_stat(store_statfs(0x198616000/0x0/0x1bfc00000, data 0x256fdcc/0x2798000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 547741696 unmapped: 94920704 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5425431 data_alloc: 218103808 data_used: 14434304
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 547741696 unmapped: 94920704 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 547741696 unmapped: 94920704 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 425 ms_handle_reset con 0x5612d3f4ec00 session 0x5612d480ab40
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 547741696 unmapped: 94920704 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 425 ms_handle_reset con 0x5612d1b54800 session 0x5612dbe0ef00
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 547741696 unmapped: 94920704 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 425 ms_handle_reset con 0x5612d1598800 session 0x5612d30ca960
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.920788765s of 10.953658104s, submitted: 17
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 425 ms_handle_reset con 0x5612d0a61000 session 0x5612d135a000
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 547741696 unmapped: 94920704 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 425 heartbeat osd_stat(store_statfs(0x1985f2000/0x0/0x1bfc00000, data 0x2593dcc/0x27bc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5428479 data_alloc: 218103808 data_used: 14434304
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 547741696 unmapped: 94920704 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 425 heartbeat osd_stat(store_statfs(0x1985f2000/0x0/0x1bfc00000, data 0x2593dcc/0x27bc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 547749888 unmapped: 94912512 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 547758080 unmapped: 94904320 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 425 heartbeat osd_stat(store_statfs(0x1985f2000/0x0/0x1bfc00000, data 0x2593dcc/0x27bc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 547758080 unmapped: 94904320 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 425 heartbeat osd_stat(store_statfs(0x1985f2000/0x0/0x1bfc00000, data 0x2593dcc/0x27bc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 547758080 unmapped: 94904320 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5429119 data_alloc: 218103808 data_used: 14512128
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 547758080 unmapped: 94904320 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 547758080 unmapped: 94904320 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 547758080 unmapped: 94904320 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 425 heartbeat osd_stat(store_statfs(0x1985f2000/0x0/0x1bfc00000, data 0x2593dcc/0x27bc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 547758080 unmapped: 94904320 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 425 heartbeat osd_stat(store_statfs(0x1985f2000/0x0/0x1bfc00000, data 0x2593dcc/0x27bc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 547766272 unmapped: 94896128 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5429119 data_alloc: 218103808 data_used: 14512128
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 547766272 unmapped: 94896128 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 425 heartbeat osd_stat(store_statfs(0x1985f2000/0x0/0x1bfc00000, data 0x2593dcc/0x27bc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 547766272 unmapped: 94896128 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 547766272 unmapped: 94896128 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.663141251s of 13.677927017s, submitted: 4
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 547774464 unmapped: 94887936 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 547840000 unmapped: 94822400 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5442651 data_alloc: 218103808 data_used: 15880192
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 547840000 unmapped: 94822400 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 547840000 unmapped: 94822400 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 425 heartbeat osd_stat(store_statfs(0x1985f2000/0x0/0x1bfc00000, data 0x2593dcc/0x27bc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 425 heartbeat osd_stat(store_statfs(0x1985f2000/0x0/0x1bfc00000, data 0x2593dcc/0x27bc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 547840000 unmapped: 94822400 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 425 heartbeat osd_stat(store_statfs(0x1985f2000/0x0/0x1bfc00000, data 0x2593dcc/0x27bc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 547840000 unmapped: 94822400 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 425 heartbeat osd_stat(store_statfs(0x1985e5000/0x0/0x1bfc00000, data 0x25a0dcc/0x27c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 547840000 unmapped: 94822400 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5445381 data_alloc: 218103808 data_used: 15880192
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 547840000 unmapped: 94822400 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 425 heartbeat osd_stat(store_statfs(0x1985e5000/0x0/0x1bfc00000, data 0x25a0dcc/0x27c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [])
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 547840000 unmapped: 94822400 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 547840000 unmapped: 94822400 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 547840000 unmapped: 94822400 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 548102144 unmapped: 94560256 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5449701 data_alloc: 218103808 data_used: 16822272
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 548102144 unmapped: 94560256 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.267621994s of 13.282384872s, submitted: 7
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 425 handle_osd_map epochs [425,426], i have 425, src has [1,426]
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 425 handle_osd_map epochs [426,426], i have 426, src has [1,426]
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 426 heartbeat osd_stat(store_statfs(0x1985e5000/0x0/0x1bfc00000, data 0x25a0dcc/0x27c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x24e4f9c6), peers [0,1] op hist [0,0,0,1])
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 426 ms_handle_reset con 0x5612d3b8d800 session 0x5612d06150e0
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 548102144 unmapped: 94560256 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 548110336 unmapped: 94552064 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 548118528 unmapped: 94543872 heap: 642662400 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 426 ms_handle_reset con 0x5612d2bcb400 session 0x5612daac3e00
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 426 ms_handle_reset con 0x5612d156c000 session 0x5612d0a621e0
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 426 ms_handle_reset con 0x5612d6c91400 session 0x5612d160b2c0
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 426 ms_handle_reset con 0x5612d0a61000 session 0x5612d0b705a0
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 426 ms_handle_reset con 0x5612d156c000 session 0x5612daac30e0
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 426 ms_handle_reset con 0x5612d2bcb400 session 0x5612d0b62d20
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 426 ms_handle_reset con 0x5612d3b8d800 session 0x5612d30df0e0
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 549199872 unmapped: 101335040 heap: 650534912 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 426 heartbeat osd_stat(store_statfs(0x197687000/0x0/0x1bfc00000, data 0x30ecafa/0x3317000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5539252 data_alloc: 218103808 data_used: 16834560
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 549208064 unmapped: 101326848 heap: 650534912 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 549208064 unmapped: 101326848 heap: 650534912 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 549224448 unmapped: 101310464 heap: 650534912 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 549240832 unmapped: 101294080 heap: 650534912 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 549281792 unmapped: 101253120 heap: 650534912 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 426 heartbeat osd_stat(store_statfs(0x197687000/0x0/0x1bfc00000, data 0x30ecafa/0x3317000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5539076 data_alloc: 218103808 data_used: 16834560
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 549289984 unmapped: 101244928 heap: 650534912 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 426 heartbeat osd_stat(store_statfs(0x197687000/0x0/0x1bfc00000, data 0x30ecafa/0x3317000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.730325699s of 10.394227028s, submitted: 196
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 549289984 unmapped: 101244928 heap: 650534912 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 549289984 unmapped: 101244928 heap: 650534912 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 426 heartbeat osd_stat(store_statfs(0x197687000/0x0/0x1bfc00000, data 0x30ecafa/0x3317000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 549339136 unmapped: 101195776 heap: 650534912 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 549339136 unmapped: 101195776 heap: 650534912 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5539076 data_alloc: 218103808 data_used: 16834560
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 549339136 unmapped: 101195776 heap: 650534912 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 426 heartbeat osd_stat(store_statfs(0x197687000/0x0/0x1bfc00000, data 0x30ecafa/0x3317000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 549339136 unmapped: 101195776 heap: 650534912 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 549339136 unmapped: 101195776 heap: 650534912 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 549339136 unmapped: 101195776 heap: 650534912 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 549339136 unmapped: 101195776 heap: 650534912 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5565596 data_alloc: 218103808 data_used: 16834560
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 426 heartbeat osd_stat(store_statfs(0x197685000/0x0/0x1bfc00000, data 0x3387afa/0x3319000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 549339136 unmapped: 101195776 heap: 650534912 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 549339136 unmapped: 101195776 heap: 650534912 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.988399506s of 11.228255272s, submitted: 84
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 549339136 unmapped: 101195776 heap: 650534912 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 426 ms_handle_reset con 0x5612d3f50c00 session 0x5612d480a5a0
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 549339136 unmapped: 101195776 heap: 650534912 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 549339136 unmapped: 101195776 heap: 650534912 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5649536 data_alloc: 234881024 data_used: 28246016
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 550232064 unmapped: 100302848 heap: 650534912 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 426 heartbeat osd_stat(store_statfs(0x197685000/0x0/0x1bfc00000, data 0x3387afa/0x3319000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 426 ms_handle_reset con 0x5612d7ff8000 session 0x5612d19932c0
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 426 ms_handle_reset con 0x5612d0a61400 session 0x5612d34632c0
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 550240256 unmapped: 100294656 heap: 650534912 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 426 ms_handle_reset con 0x5612d31b4c00 session 0x5612d480a3c0
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 426 ms_handle_reset con 0x5612dea3e400 session 0x5612d314fc20
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 426 heartbeat osd_stat(store_statfs(0x197685000/0x0/0x1bfc00000, data 0x3387afa/0x3319000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 550240256 unmapped: 100294656 heap: 650534912 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 550264832 unmapped: 100270080 heap: 650534912 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 426 heartbeat osd_stat(store_statfs(0x197661000/0x0/0x1bfc00000, data 0x33abafa/0x333d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 550264832 unmapped: 100270080 heap: 650534912 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5654369 data_alloc: 234881024 data_used: 28250112
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 426 heartbeat osd_stat(store_statfs(0x197661000/0x0/0x1bfc00000, data 0x33abafa/0x333d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 550264832 unmapped: 100270080 heap: 650534912 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 550264832 unmapped: 100270080 heap: 650534912 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 550264832 unmapped: 100270080 heap: 650534912 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 426 heartbeat osd_stat(store_statfs(0x197661000/0x0/0x1bfc00000, data 0x33abafa/0x333d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 550273024 unmapped: 100261888 heap: 650534912 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 550281216 unmapped: 100253696 heap: 650534912 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5654369 data_alloc: 234881024 data_used: 28250112
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.801042557s of 12.839351654s, submitted: 11
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 550731776 unmapped: 99803136 heap: 650534912 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 552673280 unmapped: 97861632 heap: 650534912 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 553721856 unmapped: 96813056 heap: 650534912 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 426 heartbeat osd_stat(store_statfs(0x196d98000/0x0/0x1bfc00000, data 0x3c74afa/0x3c06000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 553721856 unmapped: 96813056 heap: 650534912 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 426 heartbeat osd_stat(store_statfs(0x196d98000/0x0/0x1bfc00000, data 0x3c74afa/0x3c06000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 553721856 unmapped: 96813056 heap: 650534912 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5726633 data_alloc: 234881024 data_used: 28250112
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 553869312 unmapped: 96665600 heap: 650534912 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 553877504 unmapped: 96657408 heap: 650534912 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 553877504 unmapped: 96657408 heap: 650534912 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 553877504 unmapped: 96657408 heap: 650534912 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 426 heartbeat osd_stat(store_statfs(0x196d52000/0x0/0x1bfc00000, data 0x3cbaafa/0x3c4c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 553877504 unmapped: 96657408 heap: 650534912 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5734960 data_alloc: 234881024 data_used: 28508160
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 553877504 unmapped: 96657408 heap: 650534912 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 553877504 unmapped: 96657408 heap: 650534912 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 426 heartbeat osd_stat(store_statfs(0x196d52000/0x0/0x1bfc00000, data 0x3cbaafa/0x3c4c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 553885696 unmapped: 96649216 heap: 650534912 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 553885696 unmapped: 96649216 heap: 650534912 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 553885696 unmapped: 96649216 heap: 650534912 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5735440 data_alloc: 234881024 data_used: 28651520
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 553902080 unmapped: 96632832 heap: 650534912 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 426 heartbeat osd_stat(store_statfs(0x196d52000/0x0/0x1bfc00000, data 0x3cbaafa/0x3c4c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 426 heartbeat osd_stat(store_statfs(0x196d52000/0x0/0x1bfc00000, data 0x3cbaafa/0x3c4c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 553902080 unmapped: 96632832 heap: 650534912 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 553902080 unmapped: 96632832 heap: 650534912 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 553902080 unmapped: 96632832 heap: 650534912 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 553902080 unmapped: 96632832 heap: 650534912 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 426 heartbeat osd_stat(store_statfs(0x196d52000/0x0/0x1bfc00000, data 0x3cbaafa/0x3c4c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5735440 data_alloc: 234881024 data_used: 28651520
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 553902080 unmapped: 96632832 heap: 650534912 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 553902080 unmapped: 96632832 heap: 650534912 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 553902080 unmapped: 96632832 heap: 650534912 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 22.383529663s of 22.635568619s, submitted: 46
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 426 ms_handle_reset con 0x5612d90a9800 session 0x5612d30cb4a0
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 426 ms_handle_reset con 0x5612d1987800 session 0x5612d0a634a0
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 553910272 unmapped: 96624640 heap: 650534912 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 426 ms_handle_reset con 0x5612d0a61400 session 0x5612d5b0ed20
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 426 ms_handle_reset con 0x5612d1288c00 session 0x5612d480a1e0
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 426 ms_handle_reset con 0x5612d3f4e800 session 0x5612d2e03c20
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 426 ms_handle_reset con 0x5612d3f50000 session 0x5612d3446000
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 426 ms_handle_reset con 0x5612d0a61400 session 0x5612d30b7860
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 553910272 unmapped: 96624640 heap: 650534912 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 426 ms_handle_reset con 0x5612d1288c00 session 0x5612dbe0f680
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 426 ms_handle_reset con 0x5612d1987800 session 0x5612dbe0e1e0
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5807497 data_alloc: 234881024 data_used: 28540928
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 426 heartbeat osd_stat(store_statfs(0x196409000/0x0/0x1bfc00000, data 0x4602b23/0x4595000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 554057728 unmapped: 96477184 heap: 650534912 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 554057728 unmapped: 96477184 heap: 650534912 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 554057728 unmapped: 96477184 heap: 650534912 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 426 heartbeat osd_stat(store_statfs(0x196409000/0x0/0x1bfc00000, data 0x4602b5c/0x4595000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 554123264 unmapped: 96411648 heap: 650534912 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 426 ms_handle_reset con 0x5612dea3e000 session 0x5612d30df860
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 554139648 unmapped: 96395264 heap: 650534912 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 426 ms_handle_reset con 0x5612d0bc7000 session 0x5612d0a14b40
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 426 heartbeat osd_stat(store_statfs(0x19644f000/0x0/0x1bfc00000, data 0x45bcb5c/0x454f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5800573 data_alloc: 234881024 data_used: 28418048
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 554139648 unmapped: 96395264 heap: 650534912 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 426 ms_handle_reset con 0x5612d0a61400 session 0x5612d0a15c20
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 554147840 unmapped: 96387072 heap: 650534912 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 426 heartbeat osd_stat(store_statfs(0x19644f000/0x0/0x1bfc00000, data 0x45bcb5c/0x454f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 426 ms_handle_reset con 0x5612d1288c00 session 0x5612d160b680
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 426 ms_handle_reset con 0x5612dea3e000 session 0x5612d3a1cb40
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 426 handle_osd_map epochs [426,427], i have 426, src has [1,427]
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 427 handle_osd_map epochs [427,427], i have 427, src has [1,427]
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 427 ms_handle_reset con 0x5612d1987800 session 0x5612d160a960
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 554188800 unmapped: 96346112 heap: 650534912 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 427 ms_handle_reset con 0x5612d1598c00 session 0x5612d31b1a40
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 427 ms_handle_reset con 0x5612d0a61400 session 0x5612d3446f00
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 554188800 unmapped: 96346112 heap: 650534912 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 427 ms_handle_reset con 0x5612d6c91000 session 0x5612d30cb2c0
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.409293175s of 11.194550514s, submitted: 100
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 427 ms_handle_reset con 0x5612d1b54800 session 0x5612d314e780
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 554188800 unmapped: 96346112 heap: 650534912 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5808549 data_alloc: 234881024 data_used: 31944704
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 427 ms_handle_reset con 0x5612d7ff9400 session 0x5612d0b62f00
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 556359680 unmapped: 94175232 heap: 650534912 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 427 heartbeat osd_stat(store_statfs(0x19644e000/0x0/0x1bfc00000, data 0x432382a/0x4550000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 557195264 unmapped: 93339648 heap: 650534912 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 557195264 unmapped: 93339648 heap: 650534912 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 427 handle_osd_map epochs [427,428], i have 427, src has [1,428]
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 428 handle_osd_map epochs [428,428], i have 428, src has [1,428]
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 557203456 unmapped: 93331456 heap: 650534912 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 557219840 unmapped: 93315072 heap: 650534912 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5848563 data_alloc: 251658240 data_used: 36986880
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 557219840 unmapped: 93315072 heap: 650534912 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 428 heartbeat osd_stat(store_statfs(0x19646e000/0x0/0x1bfc00000, data 0x43013dc/0x452f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 557219840 unmapped: 93315072 heap: 650534912 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 557228032 unmapped: 93306880 heap: 650534912 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 557228032 unmapped: 93306880 heap: 650534912 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 557228032 unmapped: 93306880 heap: 650534912 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 428 heartbeat osd_stat(store_statfs(0x19646e000/0x0/0x1bfc00000, data 0x43013dc/0x452f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.913703918s of 10.942051888s, submitted: 16
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 428 ms_handle_reset con 0x5612d27b2800 session 0x5612d30ca1e0
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5847683 data_alloc: 251658240 data_used: 36986880
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 557228032 unmapped: 93306880 heap: 650534912 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 428 ms_handle_reset con 0x5612d0a61400 session 0x5612d0aeb680
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 428 heartbeat osd_stat(store_statfs(0x197417000/0x0/0x1bfc00000, data 0x33593dc/0x3587000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 560308224 unmapped: 90226688 heap: 650534912 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 560308224 unmapped: 90226688 heap: 650534912 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 560635904 unmapped: 89899008 heap: 650534912 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 560701440 unmapped: 89833472 heap: 650534912 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5742471 data_alloc: 234881024 data_used: 28274688
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 560947200 unmapped: 89587712 heap: 650534912 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 560947200 unmapped: 89587712 heap: 650534912 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 428 heartbeat osd_stat(store_statfs(0x196b20000/0x0/0x1bfc00000, data 0x3c473dc/0x3e75000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 560947200 unmapped: 89587712 heap: 650534912 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 560947200 unmapped: 89587712 heap: 650534912 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 428 heartbeat osd_stat(store_statfs(0x196b20000/0x0/0x1bfc00000, data 0x3c473dc/0x3e75000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 560947200 unmapped: 89587712 heap: 650534912 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5742951 data_alloc: 234881024 data_used: 28286976
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 560955392 unmapped: 89579520 heap: 650534912 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.063028336s of 10.723139763s, submitted: 142
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 560955392 unmapped: 89579520 heap: 650534912 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 428 heartbeat osd_stat(store_statfs(0x196b08000/0x0/0x1bfc00000, data 0x3c683dc/0x3e96000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 560955392 unmapped: 89579520 heap: 650534912 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 428 ms_handle_reset con 0x5612d7ff7000 session 0x5612d34634a0
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 428 ms_handle_reset con 0x5612d156d400 session 0x5612d06152c0
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 560955392 unmapped: 89579520 heap: 650534912 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 428 heartbeat osd_stat(store_statfs(0x196b08000/0x0/0x1bfc00000, data 0x3c683dc/0x3e96000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,1])
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 560963584 unmapped: 89571328 heap: 650534912 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5526195 data_alloc: 234881024 data_used: 18755584
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 428 heartbeat osd_stat(store_statfs(0x196b08000/0x0/0x1bfc00000, data 0x3c683dc/0x3e96000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 558989312 unmapped: 91545600 heap: 650534912 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 428 ms_handle_reset con 0x5612d8423c00 session 0x5612d3463e00
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 558989312 unmapped: 91545600 heap: 650534912 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 558989312 unmapped: 91545600 heap: 650534912 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 558997504 unmapped: 91537408 heap: 650534912 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 428 heartbeat osd_stat(store_statfs(0x197d85000/0x0/0x1bfc00000, data 0x29ed36a/0x2c19000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 558997504 unmapped: 91537408 heap: 650534912 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5525134 data_alloc: 234881024 data_used: 18751488
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 558997504 unmapped: 91537408 heap: 650534912 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 558997504 unmapped: 91537408 heap: 650534912 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 558997504 unmapped: 91537408 heap: 650534912 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 559005696 unmapped: 91529216 heap: 650534912 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.855386734s of 13.235659599s, submitted: 28
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 428 ms_handle_reset con 0x5612d0a61000 session 0x5612d4e83680
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 428 ms_handle_reset con 0x5612d0a61400 session 0x5612d2e03c20
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 558841856 unmapped: 91693056 heap: 650534912 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 428 heartbeat osd_stat(store_statfs(0x199198000/0x0/0x1bfc00000, data 0x15da36a/0x1806000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5298148 data_alloc: 218103808 data_used: 7229440
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 558841856 unmapped: 91693056 heap: 650534912 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 428 heartbeat osd_stat(store_statfs(0x199198000/0x0/0x1bfc00000, data 0x15da36a/0x1806000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 558841856 unmapped: 91693056 heap: 650534912 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 558841856 unmapped: 91693056 heap: 650534912 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 558841856 unmapped: 91693056 heap: 650534912 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 558841856 unmapped: 91693056 heap: 650534912 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5298148 data_alloc: 218103808 data_used: 7229440
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 428 heartbeat osd_stat(store_statfs(0x199198000/0x0/0x1bfc00000, data 0x15da36a/0x1806000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 558841856 unmapped: 91693056 heap: 650534912 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 558841856 unmapped: 91693056 heap: 650534912 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 558841856 unmapped: 91693056 heap: 650534912 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 558841856 unmapped: 91693056 heap: 650534912 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 428 heartbeat osd_stat(store_statfs(0x199198000/0x0/0x1bfc00000, data 0x15da36a/0x1806000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 558850048 unmapped: 91684864 heap: 650534912 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5298148 data_alloc: 218103808 data_used: 7229440
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 558850048 unmapped: 91684864 heap: 650534912 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 558850048 unmapped: 91684864 heap: 650534912 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 558850048 unmapped: 91684864 heap: 650534912 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 558850048 unmapped: 91684864 heap: 650534912 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 558850048 unmapped: 91684864 heap: 650534912 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 428 heartbeat osd_stat(store_statfs(0x199198000/0x0/0x1bfc00000, data 0x15da36a/0x1806000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5298148 data_alloc: 218103808 data_used: 7229440
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 558850048 unmapped: 91684864 heap: 650534912 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 558858240 unmapped: 91676672 heap: 650534912 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 558858240 unmapped: 91676672 heap: 650534912 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 558858240 unmapped: 91676672 heap: 650534912 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 428 heartbeat osd_stat(store_statfs(0x199198000/0x0/0x1bfc00000, data 0x15da36a/0x1806000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 558858240 unmapped: 91676672 heap: 650534912 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5298148 data_alloc: 218103808 data_used: 7229440
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 558858240 unmapped: 91676672 heap: 650534912 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 558858240 unmapped: 91676672 heap: 650534912 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 558858240 unmapped: 91676672 heap: 650534912 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 558858240 unmapped: 91676672 heap: 650534912 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 428 heartbeat osd_stat(store_statfs(0x199198000/0x0/0x1bfc00000, data 0x15da36a/0x1806000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 558866432 unmapped: 91668480 heap: 650534912 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5298148 data_alloc: 218103808 data_used: 7229440
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 558866432 unmapped: 91668480 heap: 650534912 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 558866432 unmapped: 91668480 heap: 650534912 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 558866432 unmapped: 91668480 heap: 650534912 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 428 heartbeat osd_stat(store_statfs(0x199198000/0x0/0x1bfc00000, data 0x15da36a/0x1806000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 558866432 unmapped: 91668480 heap: 650534912 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 558866432 unmapped: 91668480 heap: 650534912 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 31.258146286s of 31.315561295s, submitted: 28
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5301738 data_alloc: 218103808 data_used: 7229440
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 428 ms_handle_reset con 0x5612d7ff7400 session 0x5612d26461e0
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 428 ms_handle_reset con 0x5612d1987800 session 0x5612dbe0ef00
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 428 ms_handle_reset con 0x5612d5b1b000 session 0x5612d0a14000
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 428 ms_handle_reset con 0x5612d5b1ac00 session 0x5612d2e023c0
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 428 ms_handle_reset con 0x5612d5b1ac00 session 0x5612d480ad20
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 559947776 unmapped: 90587136 heap: 650534912 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 559947776 unmapped: 90587136 heap: 650534912 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 428 heartbeat osd_stat(store_statfs(0x198ace000/0x0/0x1bfc00000, data 0x1ca33cc/0x1ed0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 559947776 unmapped: 90587136 heap: 650534912 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 559947776 unmapped: 90587136 heap: 650534912 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 428 ms_handle_reset con 0x5612db77f400 session 0x5612d077d2c0
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 428 ms_handle_reset con 0x5612d75cd000 session 0x5612dbe0e5a0
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 559947776 unmapped: 90587136 heap: 650534912 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 428 heartbeat osd_stat(store_statfs(0x198ace000/0x0/0x1bfc00000, data 0x1ca33cc/0x1ed0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5361216 data_alloc: 218103808 data_used: 7229440
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 559947776 unmapped: 90587136 heap: 650534912 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 428 heartbeat osd_stat(store_statfs(0x198ace000/0x0/0x1bfc00000, data 0x1ca33cc/0x1ed0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 559947776 unmapped: 90587136 heap: 650534912 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 428 ms_handle_reset con 0x5612d3695c00 session 0x5612d27a0000
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 428 ms_handle_reset con 0x5612d8e80400 session 0x5612d30ca3c0
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 559947776 unmapped: 90587136 heap: 650534912 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 559947776 unmapped: 90587136 heap: 650534912 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 428 heartbeat osd_stat(store_statfs(0x198ace000/0x0/0x1bfc00000, data 0x1ca33cc/0x1ed0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 559947776 unmapped: 90587136 heap: 650534912 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:47:00.948 143965 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:47:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:47:00.948 143965 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  6 03:47:00 np0005548731 ovn_metadata_agent[143960]: 2025-12-06 08:47:00.948 143965 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5409569 data_alloc: 218103808 data_used: 14041088
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 559947776 unmapped: 90587136 heap: 650534912 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 428 heartbeat osd_stat(store_statfs(0x198ace000/0x0/0x1bfc00000, data 0x1ca33cc/0x1ed0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 559947776 unmapped: 90587136 heap: 650534912 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 428 heartbeat osd_stat(store_statfs(0x198ace000/0x0/0x1bfc00000, data 0x1ca33cc/0x1ed0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 559947776 unmapped: 90587136 heap: 650534912 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 559947776 unmapped: 90587136 heap: 650534912 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 428 heartbeat osd_stat(store_statfs(0x198ace000/0x0/0x1bfc00000, data 0x1ca33cc/0x1ed0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 559947776 unmapped: 90587136 heap: 650534912 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5409569 data_alloc: 218103808 data_used: 14041088
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 559947776 unmapped: 90587136 heap: 650534912 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 559947776 unmapped: 90587136 heap: 650534912 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 559947776 unmapped: 90587136 heap: 650534912 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 559947776 unmapped: 90587136 heap: 650534912 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 428 heartbeat osd_stat(store_statfs(0x198ace000/0x0/0x1bfc00000, data 0x1ca33cc/0x1ed0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 19.014648438s of 19.252319336s, submitted: 53
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 560758784 unmapped: 89776128 heap: 650534912 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5446069 data_alloc: 218103808 data_used: 14417920
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561029120 unmapped: 89505792 heap: 650534912 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 428 heartbeat osd_stat(store_statfs(0x198716000/0x0/0x1bfc00000, data 0x204d3cc/0x227a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,1])
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 560365568 unmapped: 90169344 heap: 650534912 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 560365568 unmapped: 90169344 heap: 650534912 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 560365568 unmapped: 90169344 heap: 650534912 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 560365568 unmapped: 90169344 heap: 650534912 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 428 heartbeat osd_stat(store_statfs(0x19868c000/0x0/0x1bfc00000, data 0x20df3cc/0x230c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5457625 data_alloc: 218103808 data_used: 14229504
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 560365568 unmapped: 90169344 heap: 650534912 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 560365568 unmapped: 90169344 heap: 650534912 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561414144 unmapped: 89120768 heap: 650534912 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 428 heartbeat osd_stat(store_statfs(0x198671000/0x0/0x1bfc00000, data 0x21003cc/0x232d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561414144 unmapped: 89120768 heap: 650534912 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 428 heartbeat osd_stat(store_statfs(0x198671000/0x0/0x1bfc00000, data 0x21003cc/0x232d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561414144 unmapped: 89120768 heap: 650534912 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 428 heartbeat osd_stat(store_statfs(0x198671000/0x0/0x1bfc00000, data 0x21003cc/0x232d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5454861 data_alloc: 218103808 data_used: 14233600
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561414144 unmapped: 89120768 heap: 650534912 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561414144 unmapped: 89120768 heap: 650534912 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 428 heartbeat osd_stat(store_statfs(0x198671000/0x0/0x1bfc00000, data 0x21003cc/0x232d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561414144 unmapped: 89120768 heap: 650534912 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561414144 unmapped: 89120768 heap: 650534912 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 428 heartbeat osd_stat(store_statfs(0x198671000/0x0/0x1bfc00000, data 0x21003cc/0x232d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561422336 unmapped: 89112576 heap: 650534912 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 14.889123917s of 16.025707245s, submitted: 91
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5454917 data_alloc: 218103808 data_used: 14233600
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 560373760 unmapped: 90161152 heap: 650534912 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 428 ms_handle_reset con 0x5612d3695000 session 0x5612d4e82d20
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 428 ms_handle_reset con 0x5612d1987c00 session 0x5612d18e90e0
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561577984 unmapped: 92635136 heap: 654213120 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 428 heartbeat osd_stat(store_statfs(0x1979b0000/0x0/0x1bfc00000, data 0x2dc03f5/0x2fee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561577984 unmapped: 92635136 heap: 654213120 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561577984 unmapped: 92635136 heap: 654213120 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561577984 unmapped: 92635136 heap: 654213120 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 428 heartbeat osd_stat(store_statfs(0x1979b0000/0x0/0x1bfc00000, data 0x2dc042e/0x2fee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5552876 data_alloc: 218103808 data_used: 14233600
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561577984 unmapped: 92635136 heap: 654213120 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561577984 unmapped: 92635136 heap: 654213120 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 428 ms_handle_reset con 0x5612d293bc00 session 0x5612d3a1dc20
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 428 ms_handle_reset con 0x5612d297bc00 session 0x5612d0a63680
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561586176 unmapped: 92626944 heap: 654213120 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 428 ms_handle_reset con 0x5612d2823000 session 0x5612d2e03a40
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 428 ms_handle_reset con 0x5612d1987c00 session 0x5612d5b0ef00
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561602560 unmapped: 92610560 heap: 654213120 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 428 heartbeat osd_stat(store_statfs(0x1979ae000/0x0/0x1bfc00000, data 0x2dc0461/0x2ff0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561602560 unmapped: 92610560 heap: 654213120 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5557948 data_alloc: 218103808 data_used: 14352384
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561635328 unmapped: 92577792 heap: 654213120 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 428 heartbeat osd_stat(store_statfs(0x1979ae000/0x0/0x1bfc00000, data 0x2dc0461/0x2ff0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562094080 unmapped: 92119040 heap: 654213120 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562094080 unmapped: 92119040 heap: 654213120 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562094080 unmapped: 92119040 heap: 654213120 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 428 heartbeat osd_stat(store_statfs(0x1979ae000/0x0/0x1bfc00000, data 0x2dc0461/0x2ff0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562094080 unmapped: 92119040 heap: 654213120 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5649788 data_alloc: 234881024 data_used: 27287552
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562094080 unmapped: 92119040 heap: 654213120 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562094080 unmapped: 92119040 heap: 654213120 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 16.710140228s of 16.895963669s, submitted: 48
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 428 heartbeat osd_stat(store_statfs(0x1979ae000/0x0/0x1bfc00000, data 0x2dc0461/0x2ff0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562094080 unmapped: 92119040 heap: 654213120 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562094080 unmapped: 92119040 heap: 654213120 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 428 heartbeat osd_stat(store_statfs(0x1979aa000/0x0/0x1bfc00000, data 0x2dc4461/0x2ff4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562094080 unmapped: 92119040 heap: 654213120 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5650348 data_alloc: 234881024 data_used: 27291648
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562094080 unmapped: 92119040 heap: 654213120 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 563027968 unmapped: 91185152 heap: 654213120 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 428 heartbeat osd_stat(store_statfs(0x1979aa000/0x0/0x1bfc00000, data 0x2dc4461/0x2ff4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562372608 unmapped: 91840512 heap: 654213120 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 428 heartbeat osd_stat(store_statfs(0x197507000/0x0/0x1bfc00000, data 0x3267461/0x3497000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562397184 unmapped: 91815936 heap: 654213120 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562397184 unmapped: 91815936 heap: 654213120 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5709724 data_alloc: 234881024 data_used: 28033024
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562397184 unmapped: 91815936 heap: 654213120 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562397184 unmapped: 91815936 heap: 654213120 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.023353577s of 10.240151405s, submitted: 84
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562397184 unmapped: 91815936 heap: 654213120 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 428 heartbeat osd_stat(store_statfs(0x197502000/0x0/0x1bfc00000, data 0x326c461/0x349c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562397184 unmapped: 91815936 heap: 654213120 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562397184 unmapped: 91815936 heap: 654213120 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 428 ms_handle_reset con 0x5612d293bc00 session 0x5612d2bb5860
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 428 ms_handle_reset con 0x5612d2823000 session 0x5612d0a9b680
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5473326 data_alloc: 218103808 data_used: 14225408
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 428 ms_handle_reset con 0x5612d1b53800 session 0x5612dbe0e960
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 560562176 unmapped: 93650944 heap: 654213120 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 560562176 unmapped: 93650944 heap: 654213120 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 560562176 unmapped: 93650944 heap: 654213120 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 428 heartbeat osd_stat(store_statfs(0x197ce7000/0x0/0x1bfc00000, data 0x21123cc/0x233f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 560562176 unmapped: 93650944 heap: 654213120 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 560562176 unmapped: 93650944 heap: 654213120 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5471313 data_alloc: 218103808 data_used: 14221312
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 560562176 unmapped: 93650944 heap: 654213120 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 560562176 unmapped: 93650944 heap: 654213120 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 428 ms_handle_reset con 0x5612d3695c00 session 0x5612d2bb4f00
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 428 ms_handle_reset con 0x5612d5b1ac00 session 0x5612d3a1cd20
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 428 heartbeat osd_stat(store_statfs(0x19865f000/0x0/0x1bfc00000, data 0x21123cc/0x233f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 428 ms_handle_reset con 0x5612d3695c00 session 0x5612d3446f00
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 555589632 unmapped: 98623488 heap: 654213120 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 555589632 unmapped: 98623488 heap: 654213120 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 555589632 unmapped: 98623488 heap: 654213120 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5327823 data_alloc: 218103808 data_used: 7229440
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 555589632 unmapped: 98623488 heap: 654213120 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 555589632 unmapped: 98623488 heap: 654213120 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 428 heartbeat osd_stat(store_statfs(0x199197000/0x0/0x1bfc00000, data 0x15da36a/0x1806000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 555589632 unmapped: 98623488 heap: 654213120 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 555589632 unmapped: 98623488 heap: 654213120 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 428 heartbeat osd_stat(store_statfs(0x199197000/0x0/0x1bfc00000, data 0x15da36a/0x1806000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 555589632 unmapped: 98623488 heap: 654213120 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5327823 data_alloc: 218103808 data_used: 7229440
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 555589632 unmapped: 98623488 heap: 654213120 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 555589632 unmapped: 98623488 heap: 654213120 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 428 heartbeat osd_stat(store_statfs(0x199197000/0x0/0x1bfc00000, data 0x15da36a/0x1806000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 555589632 unmapped: 98623488 heap: 654213120 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 555589632 unmapped: 98623488 heap: 654213120 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 555589632 unmapped: 98623488 heap: 654213120 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5327823 data_alloc: 218103808 data_used: 7229440
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 555589632 unmapped: 98623488 heap: 654213120 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 555589632 unmapped: 98623488 heap: 654213120 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 428 heartbeat osd_stat(store_statfs(0x199197000/0x0/0x1bfc00000, data 0x15da36a/0x1806000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 555589632 unmapped: 98623488 heap: 654213120 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 555589632 unmapped: 98623488 heap: 654213120 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 428 heartbeat osd_stat(store_statfs(0x199197000/0x0/0x1bfc00000, data 0x15da36a/0x1806000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 555589632 unmapped: 98623488 heap: 654213120 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 428 heartbeat osd_stat(store_statfs(0x199197000/0x0/0x1bfc00000, data 0x15da36a/0x1806000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5327823 data_alloc: 218103808 data_used: 7229440
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 555597824 unmapped: 98615296 heap: 654213120 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 555597824 unmapped: 98615296 heap: 654213120 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 428 heartbeat osd_stat(store_statfs(0x199197000/0x0/0x1bfc00000, data 0x15da36a/0x1806000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 555597824 unmapped: 98615296 heap: 654213120 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 428 heartbeat osd_stat(store_statfs(0x199197000/0x0/0x1bfc00000, data 0x15da36a/0x1806000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 555597824 unmapped: 98615296 heap: 654213120 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 428 heartbeat osd_stat(store_statfs(0x199197000/0x0/0x1bfc00000, data 0x15da36a/0x1806000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 555597824 unmapped: 98615296 heap: 654213120 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 428 heartbeat osd_stat(store_statfs(0x199197000/0x0/0x1bfc00000, data 0x15da36a/0x1806000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5327823 data_alloc: 218103808 data_used: 7229440
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 555597824 unmapped: 98615296 heap: 654213120 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 555597824 unmapped: 98615296 heap: 654213120 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 428 heartbeat osd_stat(store_statfs(0x199197000/0x0/0x1bfc00000, data 0x15da36a/0x1806000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 555597824 unmapped: 98615296 heap: 654213120 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 428 heartbeat osd_stat(store_statfs(0x199197000/0x0/0x1bfc00000, data 0x15da36a/0x1806000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 555606016 unmapped: 98607104 heap: 654213120 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 428 heartbeat osd_stat(store_statfs(0x199197000/0x0/0x1bfc00000, data 0x15da36a/0x1806000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 555606016 unmapped: 98607104 heap: 654213120 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5327823 data_alloc: 218103808 data_used: 7229440
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 555606016 unmapped: 98607104 heap: 654213120 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 555606016 unmapped: 98607104 heap: 654213120 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 555606016 unmapped: 98607104 heap: 654213120 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 40.273403168s of 40.580463409s, submitted: 124
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 428 ms_handle_reset con 0x5612d90a9c00 session 0x5612d3446000
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 428 heartbeat osd_stat(store_statfs(0x1982a2000/0x0/0x1bfc00000, data 0x24d036a/0x26fc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 555614208 unmapped: 102801408 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 555614208 unmapped: 102801408 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 428 heartbeat osd_stat(store_statfs(0x1982a2000/0x0/0x1bfc00000, data 0x24d036a/0x26fc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5439897 data_alloc: 218103808 data_used: 7229440
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 555614208 unmapped: 102801408 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 428 ms_handle_reset con 0x5612d297ac00 session 0x5612d160b680
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 555622400 unmapped: 102793216 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 555622400 unmapped: 102793216 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 428 ms_handle_reset con 0x5612d31b5000 session 0x5612d34634a0
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 428 heartbeat osd_stat(store_statfs(0x1982a2000/0x0/0x1bfc00000, data 0x24d036a/0x26fc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 555622400 unmapped: 102793216 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 555622400 unmapped: 102793216 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5439897 data_alloc: 218103808 data_used: 7229440
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 428 ms_handle_reset con 0x5612d31b5000 session 0x5612d0a14000
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 428 ms_handle_reset con 0x5612d297ac00 session 0x5612d2e023c0
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 555433984 unmapped: 102981632 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 555433984 unmapped: 102981632 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 555622400 unmapped: 102793216 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 555655168 unmapped: 102760448 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 428 heartbeat osd_stat(store_statfs(0x19827e000/0x0/0x1bfc00000, data 0x24f436a/0x2720000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 555655168 unmapped: 102760448 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5543837 data_alloc: 234881024 data_used: 21110784
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 428 ms_handle_reset con 0x5612d3695c00 session 0x5612d2926000
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 428 ms_handle_reset con 0x5612d5b1ac00 session 0x5612d314f0e0
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 555655168 unmapped: 102760448 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 555655168 unmapped: 102760448 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 428 heartbeat osd_stat(store_statfs(0x19827e000/0x0/0x1bfc00000, data 0x24f436a/0x2720000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 428 heartbeat osd_stat(store_statfs(0x19827e000/0x0/0x1bfc00000, data 0x24f436a/0x2720000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 555655168 unmapped: 102760448 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 555655168 unmapped: 102760448 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 555655168 unmapped: 102760448 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5543837 data_alloc: 234881024 data_used: 21110784
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 555655168 unmapped: 102760448 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 555655168 unmapped: 102760448 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 428 heartbeat osd_stat(store_statfs(0x19827e000/0x0/0x1bfc00000, data 0x24f436a/0x2720000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 555655168 unmapped: 102760448 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 555655168 unmapped: 102760448 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 428 ms_handle_reset con 0x5612d1987800 session 0x5612d0773c20
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 428 ms_handle_reset con 0x5612d5b1a400 session 0x5612d5b0f860
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 428 heartbeat osd_stat(store_statfs(0x19827e000/0x0/0x1bfc00000, data 0x24f436a/0x2720000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 555655168 unmapped: 102760448 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5543837 data_alloc: 234881024 data_used: 21110784
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 555655168 unmapped: 102760448 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 555655168 unmapped: 102760448 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 555655168 unmapped: 102760448 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 428 heartbeat osd_stat(store_statfs(0x19827e000/0x0/0x1bfc00000, data 0x24f436a/0x2720000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 555655168 unmapped: 102760448 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 428 ms_handle_reset con 0x5612d1987800 session 0x5612e1713c20
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 428 ms_handle_reset con 0x5612d297ac00 session 0x5612d0a554a0
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 555655168 unmapped: 102760448 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 27.093421936s of 27.183341980s, submitted: 19
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 428 ms_handle_reset con 0x5612d0bc6800 session 0x5612d2e03680
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5335615 data_alloc: 218103808 data_used: 7229440
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 555655168 unmapped: 102760448 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 555663360 unmapped: 102752256 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 555663360 unmapped: 102752256 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 podman[367873]: 2025-12-06 08:47:00.966476364 +0000 UTC m=+0.114134398 container health_status 26c2ce9bdb83c6db5ccad7f5b2a7cb4e9354ab0235ad2f942ea42c8feda2142b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent)
Dec  6 03:47:00 np0005548731 podman[367889]: 2025-12-06 08:47:00.972286485 +0000 UTC m=+0.122071041 container health_status ae4fd08cdec31d6ae2a6b91ffff7deb6ca5a8375cb52ee4b685cc6f25796824e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251125, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd)
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 555663360 unmapped: 102752256 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 428 heartbeat osd_stat(store_statfs(0x199198000/0x0/0x1bfc00000, data 0x15da36a/0x1806000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 555663360 unmapped: 102752256 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5335615 data_alloc: 218103808 data_used: 7229440
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 555663360 unmapped: 102752256 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 555663360 unmapped: 102752256 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 555679744 unmapped: 102735872 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 428 heartbeat osd_stat(store_statfs(0x199198000/0x0/0x1bfc00000, data 0x15da36a/0x1806000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 555679744 unmapped: 102735872 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 555679744 unmapped: 102735872 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 428 heartbeat osd_stat(store_statfs(0x199198000/0x0/0x1bfc00000, data 0x15da36a/0x1806000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5335615 data_alloc: 218103808 data_used: 7229440
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 555679744 unmapped: 102735872 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 555679744 unmapped: 102735872 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 555679744 unmapped: 102735872 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 428 heartbeat osd_stat(store_statfs(0x199198000/0x0/0x1bfc00000, data 0x15da36a/0x1806000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 555679744 unmapped: 102735872 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 555679744 unmapped: 102735872 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5335615 data_alloc: 218103808 data_used: 7229440
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 555679744 unmapped: 102735872 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 555696128 unmapped: 102719488 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 555696128 unmapped: 102719488 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 428 heartbeat osd_stat(store_statfs(0x199198000/0x0/0x1bfc00000, data 0x15da36a/0x1806000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 555696128 unmapped: 102719488 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 428 heartbeat osd_stat(store_statfs(0x199198000/0x0/0x1bfc00000, data 0x15da36a/0x1806000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 555696128 unmapped: 102719488 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5335615 data_alloc: 218103808 data_used: 7229440
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 428 heartbeat osd_stat(store_statfs(0x199198000/0x0/0x1bfc00000, data 0x15da36a/0x1806000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 555696128 unmapped: 102719488 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 555696128 unmapped: 102719488 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 555696128 unmapped: 102719488 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 555696128 unmapped: 102719488 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 428 heartbeat osd_stat(store_statfs(0x199198000/0x0/0x1bfc00000, data 0x15da36a/0x1806000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 555704320 unmapped: 102711296 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5335615 data_alloc: 218103808 data_used: 7229440
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 555704320 unmapped: 102711296 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 428 heartbeat osd_stat(store_statfs(0x199198000/0x0/0x1bfc00000, data 0x15da36a/0x1806000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 555704320 unmapped: 102711296 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 428 heartbeat osd_stat(store_statfs(0x199198000/0x0/0x1bfc00000, data 0x15da36a/0x1806000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 555704320 unmapped: 102711296 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 428 heartbeat osd_stat(store_statfs(0x199198000/0x0/0x1bfc00000, data 0x15da36a/0x1806000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 555704320 unmapped: 102711296 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 428 heartbeat osd_stat(store_statfs(0x199198000/0x0/0x1bfc00000, data 0x15da36a/0x1806000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 555704320 unmapped: 102711296 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5335615 data_alloc: 218103808 data_used: 7229440
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 555704320 unmapped: 102711296 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 555704320 unmapped: 102711296 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 428 heartbeat osd_stat(store_statfs(0x199198000/0x0/0x1bfc00000, data 0x15da36a/0x1806000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 428 ms_handle_reset con 0x5612d87cdc00 session 0x5612d3446b40
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 428 ms_handle_reset con 0x5612d87cdc00 session 0x5612d0632780
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 428 ms_handle_reset con 0x5612d87cec00 session 0x5612d314e780
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 555712512 unmapped: 102703104 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 428 ms_handle_reset con 0x5612d157d000 session 0x5612d0a62780
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 33.082977295s of 33.105510712s, submitted: 10
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 428 ms_handle_reset con 0x5612d1b51800 session 0x5612d3463a40
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 428 ms_handle_reset con 0x5612d803f800 session 0x5612d34463c0
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 428 ms_handle_reset con 0x5612d157d000 session 0x5612d2926b40
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 428 ms_handle_reset con 0x5612d1b51800 session 0x5612d480a1e0
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 428 ms_handle_reset con 0x5612d87cdc00 session 0x5612d34472c0
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 555728896 unmapped: 102686720 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 428 ms_handle_reset con 0x5612d87cec00 session 0x5612cff53860
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 428 ms_handle_reset con 0x5612d0bd7000 session 0x5612d3462960
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 428 ms_handle_reset con 0x5612d157d000 session 0x5612d0b63680
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 555728896 unmapped: 102686720 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 428 ms_handle_reset con 0x5612d1b51800 session 0x5612dc405e00
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 428 ms_handle_reset con 0x5612d87cdc00 session 0x5612dbe0e1e0
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 428 ms_handle_reset con 0x5612d87cec00 session 0x5612d31b0d20
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 428 ms_handle_reset con 0x5612d75cd800 session 0x5612d0a15a40
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 428 ms_handle_reset con 0x5612d157d000 session 0x5612d27a0000
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 428 ms_handle_reset con 0x5612d1b51800 session 0x5612d077de00
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5464349 data_alloc: 218103808 data_used: 7229440
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 556040192 unmapped: 102375424 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 428 ms_handle_reset con 0x5612d87cdc00 session 0x5612daac2f00
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 428 ms_handle_reset con 0x5612d87cec00 session 0x5612d3a1d860
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 428 ms_handle_reset con 0x5612d1288c00 session 0x5612daac3860
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 428 ms_handle_reset con 0x5612d4652000 session 0x5612d30b6b40
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 428 ms_handle_reset con 0x5612d157d000 session 0x5612d2e02000
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 556040192 unmapped: 102375424 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 428 ms_handle_reset con 0x5612d1b51800 session 0x5612d2bb5c20
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 428 ms_handle_reset con 0x5612d87cdc00 session 0x5612d30b65a0
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 428 ms_handle_reset con 0x5612d87cec00 session 0x5612d34472c0
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 428 ms_handle_reset con 0x5612d157d000 session 0x5612d2926b40
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 428 ms_handle_reset con 0x5612d1b51800 session 0x5612d3463a40
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 428 ms_handle_reset con 0x5612d87cdc00 session 0x5612d135b860
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 428 ms_handle_reset con 0x5612d4652000 session 0x5612d2e03680
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 428 ms_handle_reset con 0x5612d4428800 session 0x5612e1713c20
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 556228608 unmapped: 102187008 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 428 heartbeat osd_stat(store_statfs(0x198123000/0x0/0x1bfc00000, data 0x264c3ec/0x287b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 428 ms_handle_reset con 0x5612d5b1a800 session 0x5612d2e02d20
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 556236800 unmapped: 102178816 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 428 ms_handle_reset con 0x5612d31b5400 session 0x5612d4e83c20
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 428 heartbeat osd_stat(store_statfs(0x197749000/0x0/0x1bfc00000, data 0x30263ec/0x3255000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 556204032 unmapped: 102211584 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5582949 data_alloc: 218103808 data_used: 10072064
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 556204032 unmapped: 102211584 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 428 ms_handle_reset con 0x5612d3b8c800 session 0x5612d2e034a0
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 428 ms_handle_reset con 0x5612d7ff7000 session 0x5612d0614f00
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 556212224 unmapped: 102203392 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 556220416 unmapped: 102195200 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 428 heartbeat osd_stat(store_statfs(0x197748000/0x0/0x1bfc00000, data 0x302640f/0x3256000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 556220416 unmapped: 102195200 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 556965888 unmapped: 101449728 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5668643 data_alloc: 234881024 data_used: 21479424
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 558243840 unmapped: 100171776 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.766633987s of 13.151865959s, submitted: 106
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 428 ms_handle_reset con 0x5612d2823000 session 0x5612dbe0e3c0
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 558243840 unmapped: 100171776 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 428 heartbeat osd_stat(store_statfs(0x197747000/0x0/0x1bfc00000, data 0x3026432/0x3257000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 559112192 unmapped: 99303424 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561168384 unmapped: 97247232 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561168384 unmapped: 97247232 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5851872 data_alloc: 234881024 data_used: 31911936
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 428 ms_handle_reset con 0x5612d31b5400 session 0x5612d2bb5e00
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 428 heartbeat osd_stat(store_statfs(0x197747000/0x0/0x1bfc00000, data 0x3026432/0x3257000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 564133888 unmapped: 94281728 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 428 ms_handle_reset con 0x5612d3b8f000 session 0x5612e17123c0
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561635328 unmapped: 96780288 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561635328 unmapped: 96780288 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 428 ms_handle_reset con 0x5612d7ff9c00 session 0x5612d2e03a40
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 428 ms_handle_reset con 0x5612d2bb2800 session 0x5612d07734a0
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561635328 unmapped: 96780288 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 428 ms_handle_reset con 0x5612db77ec00 session 0x5612d480ad20
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 556949504 unmapped: 101466112 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 428 heartbeat osd_stat(store_statfs(0x197fc4000/0x0/0x1bfc00000, data 0x265637a/0x2883000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5525188 data_alloc: 218103808 data_used: 10665984
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 556949504 unmapped: 101466112 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.054722786s of 10.566476822s, submitted: 208
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 556949504 unmapped: 101466112 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 428 ms_handle_reset con 0x5612dea3d400 session 0x5612d2e023c0
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 428 ms_handle_reset con 0x5612d0be8000 session 0x5612e17134a0
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 428 ms_handle_reset con 0x5612d2bb2800 session 0x5612d0b71e00
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 556982272 unmapped: 101433344 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 556982272 unmapped: 101433344 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 428 heartbeat osd_stat(store_statfs(0x198993000/0x0/0x1bfc00000, data 0x15da36a/0x1806000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 556982272 unmapped: 101433344 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5369644 data_alloc: 218103808 data_used: 7229440
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 556982272 unmapped: 101433344 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 556982272 unmapped: 101433344 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 556982272 unmapped: 101433344 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 428 heartbeat osd_stat(store_statfs(0x198993000/0x0/0x1bfc00000, data 0x15da36a/0x1806000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 556982272 unmapped: 101433344 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 556982272 unmapped: 101433344 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5369644 data_alloc: 218103808 data_used: 7229440
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 428 heartbeat osd_stat(store_statfs(0x198993000/0x0/0x1bfc00000, data 0x15da36a/0x1806000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 556982272 unmapped: 101433344 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 556982272 unmapped: 101433344 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 556982272 unmapped: 101433344 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 556982272 unmapped: 101433344 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 556982272 unmapped: 101433344 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5369644 data_alloc: 218103808 data_used: 7229440
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 556982272 unmapped: 101433344 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 428 heartbeat osd_stat(store_statfs(0x198993000/0x0/0x1bfc00000, data 0x15da36a/0x1806000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 556982272 unmapped: 101433344 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 428 heartbeat osd_stat(store_statfs(0x198993000/0x0/0x1bfc00000, data 0x15da36a/0x1806000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 556982272 unmapped: 101433344 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 556982272 unmapped: 101433344 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 556982272 unmapped: 101433344 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5369644 data_alloc: 218103808 data_used: 7229440
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 556982272 unmapped: 101433344 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 556982272 unmapped: 101433344 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 428 heartbeat osd_stat(store_statfs(0x198993000/0x0/0x1bfc00000, data 0x15da36a/0x1806000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 556982272 unmapped: 101433344 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 556982272 unmapped: 101433344 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 556982272 unmapped: 101433344 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5369644 data_alloc: 218103808 data_used: 7229440
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 556982272 unmapped: 101433344 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 428 heartbeat osd_stat(store_statfs(0x198993000/0x0/0x1bfc00000, data 0x15da36a/0x1806000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 556982272 unmapped: 101433344 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 556982272 unmapped: 101433344 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 556982272 unmapped: 101433344 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 428 heartbeat osd_stat(store_statfs(0x198993000/0x0/0x1bfc00000, data 0x15da36a/0x1806000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 556982272 unmapped: 101433344 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5369644 data_alloc: 218103808 data_used: 7229440
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 428 heartbeat osd_stat(store_statfs(0x198993000/0x0/0x1bfc00000, data 0x15da36a/0x1806000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 556982272 unmapped: 101433344 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 556982272 unmapped: 101433344 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 428 heartbeat osd_stat(store_statfs(0x198993000/0x0/0x1bfc00000, data 0x15da36a/0x1806000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 428 heartbeat osd_stat(store_statfs(0x198993000/0x0/0x1bfc00000, data 0x15da36a/0x1806000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 556982272 unmapped: 101433344 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 556982272 unmapped: 101433344 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 556982272 unmapped: 101433344 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5369644 data_alloc: 218103808 data_used: 7229440
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 556982272 unmapped: 101433344 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 556982272 unmapped: 101433344 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 428 heartbeat osd_stat(store_statfs(0x198993000/0x0/0x1bfc00000, data 0x15da36a/0x1806000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 556982272 unmapped: 101433344 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 556982272 unmapped: 101433344 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 556982272 unmapped: 101433344 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5369644 data_alloc: 218103808 data_used: 7229440
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 556982272 unmapped: 101433344 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 556990464 unmapped: 101425152 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 428 heartbeat osd_stat(store_statfs(0x198993000/0x0/0x1bfc00000, data 0x15da36a/0x1806000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 556990464 unmapped: 101425152 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 556990464 unmapped: 101425152 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 556990464 unmapped: 101425152 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5369644 data_alloc: 218103808 data_used: 7229440
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 556990464 unmapped: 101425152 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 556990464 unmapped: 101425152 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 556990464 unmapped: 101425152 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 428 heartbeat osd_stat(store_statfs(0x198993000/0x0/0x1bfc00000, data 0x15da36a/0x1806000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 556998656 unmapped: 101416960 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 556998656 unmapped: 101416960 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 428 heartbeat osd_stat(store_statfs(0x198993000/0x0/0x1bfc00000, data 0x15da36a/0x1806000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5369644 data_alloc: 218103808 data_used: 7229440
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 556998656 unmapped: 101416960 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 556998656 unmapped: 101416960 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 556998656 unmapped: 101416960 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 428 heartbeat osd_stat(store_statfs(0x198993000/0x0/0x1bfc00000, data 0x15da36a/0x1806000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 556998656 unmapped: 101416960 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 556998656 unmapped: 101416960 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 428 heartbeat osd_stat(store_statfs(0x198993000/0x0/0x1bfc00000, data 0x15da36a/0x1806000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5369644 data_alloc: 218103808 data_used: 7229440
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 556998656 unmapped: 101416960 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 428 heartbeat osd_stat(store_statfs(0x198993000/0x0/0x1bfc00000, data 0x15da36a/0x1806000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 557015040 unmapped: 101400576 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 557015040 unmapped: 101400576 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 557015040 unmapped: 101400576 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 557015040 unmapped: 101400576 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5369644 data_alloc: 218103808 data_used: 7229440
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 557015040 unmapped: 101400576 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 428 ms_handle_reset con 0x5612d87cf000 session 0x5612e17132c0
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 428 ms_handle_reset con 0x5612d87ce800 session 0x5612d29272c0
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 557015040 unmapped: 101400576 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 428 heartbeat osd_stat(store_statfs(0x198993000/0x0/0x1bfc00000, data 0x15da36a/0x1806000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 557015040 unmapped: 101400576 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 428 ms_handle_reset con 0x5612d87ce400 session 0x5612d26463c0
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 557015040 unmapped: 101400576 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 428 heartbeat osd_stat(store_statfs(0x198993000/0x0/0x1bfc00000, data 0x15da36a/0x1806000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 557023232 unmapped: 101392384 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5369644 data_alloc: 218103808 data_used: 7229440
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 428 ms_handle_reset con 0x5612d5b1b800 session 0x5612d3446f00
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 557023232 unmapped: 101392384 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 428 ms_handle_reset con 0x5612d9182c00 session 0x5612d3a1d0e0
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 65.179008484s of 65.303047180s, submitted: 51
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 557195264 unmapped: 101220352 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 428 ms_handle_reset con 0x5612d75cc400 session 0x5612daac30e0
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 428 ms_handle_reset con 0x5612d4428800 session 0x5612d30ca3c0
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 557195264 unmapped: 101220352 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 557195264 unmapped: 101220352 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 428 heartbeat osd_stat(store_statfs(0x199040000/0x0/0x1bfc00000, data 0x173236a/0x195e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 557195264 unmapped: 101220352 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 428 heartbeat osd_stat(store_statfs(0x199040000/0x0/0x1bfc00000, data 0x173236a/0x195e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 428 heartbeat osd_stat(store_statfs(0x199040000/0x0/0x1bfc00000, data 0x173236a/0x195e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2525f9c6), peers [0,1] op hist [])
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5387059 data_alloc: 218103808 data_used: 7229440
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 557211648 unmapped: 101203968 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 428 handle_osd_map epochs [428,429], i have 428, src has [1,429]
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 557211648 unmapped: 101203968 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 429 handle_osd_map epochs [429,430], i have 429, src has [1,430]
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 430 ms_handle_reset con 0x5612ddc3fc00 session 0x5612d480a3c0
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 430 heartbeat osd_stat(store_statfs(0x198c28000/0x0/0x1bfc00000, data 0x1735db8/0x1965000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2566f9c6), peers [0,1] op hist [])
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 557236224 unmapped: 101179392 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 430 handle_osd_map epochs [430,431], i have 430, src has [1,431]
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 431 ms_handle_reset con 0x5612d7ff9c00 session 0x5612d3a1da40
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 557760512 unmapped: 100655104 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 431 handle_osd_map epochs [431,432], i have 431, src has [1,432]
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 432 ms_handle_reset con 0x5612d2bb2800 session 0x5612d3462f00
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 432 ms_handle_reset con 0x5612ddc40c00 session 0x5612d19930e0
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 432 ms_handle_reset con 0x5612d3b8e000 session 0x5612d34463c0
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 557793280 unmapped: 100622336 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5451274 data_alloc: 218103808 data_used: 7241728
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 557793280 unmapped: 100622336 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 432 ms_handle_reset con 0x5612d2bb2800 session 0x5612d30b7c20
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 557793280 unmapped: 100622336 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 432 ms_handle_reset con 0x5612d2822000 session 0x5612d0a55a40
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.786208153s of 11.065521240s, submitted: 77
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 557727744 unmapped: 100687872 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 432 ms_handle_reset con 0x5612d2bc6000 session 0x5612d2e03860
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 432 handle_osd_map epochs [432,433], i have 432, src has [1,433]
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 433 ms_handle_reset con 0x5612d1986400 session 0x5612d26472c0
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 433 heartbeat osd_stat(store_statfs(0x198229000/0x0/0x1bfc00000, data 0x2132dcd/0x2365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2566f9c6), peers [0,1] op hist [])
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 433 ms_handle_reset con 0x5612d1b55400 session 0x5612d0614000
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 433 ms_handle_reset con 0x5612db77e400 session 0x5612d34623c0
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 557735936 unmapped: 100679680 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 433 heartbeat osd_stat(store_statfs(0x198225000/0x0/0x1bfc00000, data 0x2134ab5/0x2368000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2566f9c6), peers [0,1] op hist [])
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 557735936 unmapped: 100679680 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 433 heartbeat osd_stat(store_statfs(0x198225000/0x0/0x1bfc00000, data 0x2134ab5/0x2368000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2566f9c6), peers [0,1] op hist [])
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5499318 data_alloc: 218103808 data_used: 7254016
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 557752320 unmapped: 100663296 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 433 heartbeat osd_stat(store_statfs(0x198225000/0x0/0x1bfc00000, data 0x2134ab5/0x2368000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2566f9c6), peers [0,1] op hist [])
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 557760512 unmapped: 100655104 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 433 handle_osd_map epochs [433,434], i have 433, src has [1,434]
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 557760512 unmapped: 100655104 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 557760512 unmapped: 100655104 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 557760512 unmapped: 100655104 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5502292 data_alloc: 218103808 data_used: 7254016
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 557760512 unmapped: 100655104 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 557768704 unmapped: 100646912 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 434 heartbeat osd_stat(store_statfs(0x198222000/0x0/0x1bfc00000, data 0x2136667/0x236b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2566f9c6), peers [0,1] op hist [])
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 557768704 unmapped: 100646912 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 557768704 unmapped: 100646912 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 557768704 unmapped: 100646912 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5502292 data_alloc: 218103808 data_used: 7254016
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 557768704 unmapped: 100646912 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 434 heartbeat osd_stat(store_statfs(0x198222000/0x0/0x1bfc00000, data 0x2136667/0x236b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2566f9c6), peers [0,1] op hist [])
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 557768704 unmapped: 100646912 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 434 ms_handle_reset con 0x5612d3695400 session 0x5612d0a63a40
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 557768704 unmapped: 100646912 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 434 ms_handle_reset con 0x5612d2942c00 session 0x5612d18e90e0
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 434 heartbeat osd_stat(store_statfs(0x198222000/0x0/0x1bfc00000, data 0x2136667/0x236b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2566f9c6), peers [0,1] op hist [])
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 557768704 unmapped: 100646912 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 434 ms_handle_reset con 0x5612ddc3f000 session 0x5612d2e02b40
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 16.438747406s of 16.507854462s, submitted: 32
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 434 ms_handle_reset con 0x5612d8423800 session 0x5612d2647a40
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 557776896 unmapped: 100638720 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5507991 data_alloc: 218103808 data_used: 7254016
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 557416448 unmapped: 100999168 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 557416448 unmapped: 100999168 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 434 heartbeat osd_stat(store_statfs(0x1981fd000/0x0/0x1bfc00000, data 0x215a699/0x2391000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2566f9c6), peers [0,1] op hist [])
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 557416448 unmapped: 100999168 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 557416448 unmapped: 100999168 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 557416448 unmapped: 100999168 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5545431 data_alloc: 218103808 data_used: 12341248
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 557416448 unmapped: 100999168 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: osd.2 434 heartbeat osd_stat(store_statfs(0x1981fd000/0x0/0x1bfc00000, data 0x215a699/0x2391000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2566f9c6), peers [0,1] op hist [])
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 557416448 unmapped: 100999168 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 557416448 unmapped: 100999168 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 557416448 unmapped: 100999168 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 557424640 unmapped: 100990976 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:47:00 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5545431 data_alloc: 218103808 data_used: 12341248
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 557424640 unmapped: 100990976 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 557424640 unmapped: 100990976 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 434 heartbeat osd_stat(store_statfs(0x1981fd000/0x0/0x1bfc00000, data 0x215a699/0x2391000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2566f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.332267761s of 13.356040955s, submitted: 7
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562978816 unmapped: 95436800 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 563298304 unmapped: 95117312 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 563298304 unmapped: 95117312 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 434 heartbeat osd_stat(store_statfs(0x197598000/0x0/0x1bfc00000, data 0x32dd699/0x2ff6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2566f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5700011 data_alloc: 218103808 data_used: 13701120
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 563298304 unmapped: 95117312 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 563298304 unmapped: 95117312 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 563298304 unmapped: 95117312 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 563298304 unmapped: 95117312 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 434 ms_handle_reset con 0x5612d8423800 session 0x5612d0a55e00
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 434 ms_handle_reset con 0x5612d2942c00 session 0x5612d30b6780
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 563298304 unmapped: 95117312 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 434 ms_handle_reset con 0x5612d3695400 session 0x5612d30ca960
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 434 heartbeat osd_stat(store_statfs(0x197576000/0x0/0x1bfc00000, data 0x32ff699/0x3018000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2566f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5689516 data_alloc: 218103808 data_used: 13586432
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 563298304 unmapped: 95117312 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 563298304 unmapped: 95117312 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 563298304 unmapped: 95117312 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.761465073s of 11.035092354s, submitted: 121
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 434 handle_osd_map epochs [434,435], i have 434, src has [1,435]
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 435 ms_handle_reset con 0x5612d0a61400 session 0x5612d3a1c3c0
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 435 heartbeat osd_stat(store_statfs(0x19759b000/0x0/0x1bfc00000, data 0x32db667/0x2ff2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2566f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561831936 unmapped: 96583680 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 435 ms_handle_reset con 0x5612d3694c00 session 0x5612d480bc20
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 435 handle_osd_map epochs [435,436], i have 435, src has [1,436]
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 436 ms_handle_reset con 0x5612d5b1b800 session 0x5612d30df4a0
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 436 heartbeat osd_stat(store_statfs(0x19873e000/0x0/0x1bfc00000, data 0x1c19387/0x1e4f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2566f9c6), peers [0,1] op hist [0,0,1])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 560824320 unmapped: 97591296 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 436 ms_handle_reset con 0x5612d87cc000 session 0x5612d4e83c20
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 436 ms_handle_reset con 0x5612d2942c00 session 0x5612d30b6b40
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 436 ms_handle_reset con 0x5612d0a61400 session 0x5612d4e82d20
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5424508 data_alloc: 218103808 data_used: 7258112
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 560848896 unmapped: 97566720 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 560848896 unmapped: 97566720 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 436 heartbeat osd_stat(store_statfs(0x198d70000/0x0/0x1bfc00000, data 0x15e8ba6/0x181e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2566f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 560848896 unmapped: 97566720 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 436 heartbeat osd_stat(store_statfs(0x198d70000/0x0/0x1bfc00000, data 0x15e8ba6/0x181e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2566f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 560848896 unmapped: 97566720 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 560848896 unmapped: 97566720 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5424668 data_alloc: 218103808 data_used: 7262208
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 560848896 unmapped: 97566720 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 560848896 unmapped: 97566720 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 436 handle_osd_map epochs [436,437], i have 436, src has [1,437]
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 560848896 unmapped: 97566720 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x198d6c000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2566f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 560857088 unmapped: 97558528 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x198d6c000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2566f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 560857088 unmapped: 97558528 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5428842 data_alloc: 218103808 data_used: 7270400
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 560857088 unmapped: 97558528 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 560857088 unmapped: 97558528 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 560857088 unmapped: 97558528 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 560857088 unmapped: 97558528 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x198d6c000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2566f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 560865280 unmapped: 97550336 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5428842 data_alloc: 218103808 data_used: 7270400
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 560865280 unmapped: 97550336 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x198d6c000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2566f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 560873472 unmapped: 97542144 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x198d6c000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2566f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 560873472 unmapped: 97542144 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x198d6c000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2566f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 560873472 unmapped: 97542144 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x198d6c000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2566f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 560873472 unmapped: 97542144 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5428842 data_alloc: 218103808 data_used: 7270400
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 560873472 unmapped: 97542144 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 560873472 unmapped: 97542144 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 560873472 unmapped: 97542144 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x198d6c000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2566f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 560873472 unmapped: 97542144 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 560881664 unmapped: 97533952 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5428842 data_alloc: 218103808 data_used: 7270400
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 560881664 unmapped: 97533952 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x198d6c000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2566f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 560881664 unmapped: 97533952 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 560881664 unmapped: 97533952 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 560881664 unmapped: 97533952 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 560881664 unmapped: 97533952 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x198d6c000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2566f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5428842 data_alloc: 218103808 data_used: 7270400
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 560881664 unmapped: 97533952 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 560881664 unmapped: 97533952 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 560889856 unmapped: 97525760 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 560889856 unmapped: 97525760 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 560889856 unmapped: 97525760 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5428842 data_alloc: 218103808 data_used: 7270400
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 560889856 unmapped: 97525760 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x198d6c000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2566f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 560889856 unmapped: 97525760 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x198d6c000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2566f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 560889856 unmapped: 97525760 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 560889856 unmapped: 97525760 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 560889856 unmapped: 97525760 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5428842 data_alloc: 218103808 data_used: 7270400
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 560898048 unmapped: 97517568 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 560898048 unmapped: 97517568 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x198d6c000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2566f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 560898048 unmapped: 97517568 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 560898048 unmapped: 97517568 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 560898048 unmapped: 97517568 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5428842 data_alloc: 218103808 data_used: 7270400
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 560898048 unmapped: 97517568 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x198d6c000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2566f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x198d6c000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2566f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 560898048 unmapped: 97517568 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 560898048 unmapped: 97517568 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 560906240 unmapped: 97509376 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x198d6c000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2566f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 560906240 unmapped: 97509376 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 podman[367875]: 2025-12-06 08:47:01.004268484 +0000 UTC m=+0.142339605 container health_status a118c3becf56cfbcae208065771ebf10a65162bb7b96389971293e534823fb78 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5428842 data_alloc: 218103808 data_used: 7270400
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 560906240 unmapped: 97509376 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x198d6c000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2566f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 560906240 unmapped: 97509376 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 560906240 unmapped: 97509376 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x198d6c000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2566f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 560906240 unmapped: 97509376 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x198d6c000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2566f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 560906240 unmapped: 97509376 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5428842 data_alloc: 218103808 data_used: 7270400
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 560906240 unmapped: 97509376 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 560922624 unmapped: 97492992 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 560922624 unmapped: 97492992 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x198d6c000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2566f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 560922624 unmapped: 97492992 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 560922624 unmapped: 97492992 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5428842 data_alloc: 218103808 data_used: 7270400
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 560922624 unmapped: 97492992 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 560922624 unmapped: 97492992 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 560922624 unmapped: 97492992 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 560922624 unmapped: 97492992 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x198d6c000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2566f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 560922624 unmapped: 97492992 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x198d6c000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2566f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5428842 data_alloc: 218103808 data_used: 7270400
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 560922624 unmapped: 97492992 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 560922624 unmapped: 97492992 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 560922624 unmapped: 97492992 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x198d6c000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2566f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 560922624 unmapped: 97492992 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x198d6c000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2566f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 560930816 unmapped: 97484800 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5428842 data_alloc: 218103808 data_used: 7270400
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 560930816 unmapped: 97484800 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 560930816 unmapped: 97484800 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 560930816 unmapped: 97484800 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 560930816 unmapped: 97484800 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x198d6c000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2566f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 560930816 unmapped: 97484800 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5428842 data_alloc: 218103808 data_used: 7270400
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 560930816 unmapped: 97484800 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 560939008 unmapped: 97476608 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x198d6c000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2566f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 560939008 unmapped: 97476608 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 560939008 unmapped: 97476608 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 560939008 unmapped: 97476608 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x198d6c000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2566f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x198d6c000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2566f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5428842 data_alloc: 218103808 data_used: 7270400
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 560955392 unmapped: 97460224 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 560955392 unmapped: 97460224 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 560955392 unmapped: 97460224 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 560955392 unmapped: 97460224 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x198d6c000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2566f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 560955392 unmapped: 97460224 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5428842 data_alloc: 218103808 data_used: 7270400
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 560955392 unmapped: 97460224 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 560955392 unmapped: 97460224 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 560955392 unmapped: 97460224 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x198d6c000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2566f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 560971776 unmapped: 97443840 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: do_command 'config diff' '{prefix=config diff}'
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: do_command 'config show' '{prefix=config show}'
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: do_command 'counter dump' '{prefix=counter dump}'
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: do_command 'counter schema' '{prefix=counter schema}'
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 560496640 unmapped: 97918976 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5428842 data_alloc: 218103808 data_used: 7270400
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 560553984 unmapped: 97861632 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: do_command 'log dump' '{prefix=log dump}'
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: do_command 'log dump' '{prefix=log dump}' result is 0 bytes
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 560553984 unmapped: 97861632 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: do_command 'perf dump' '{prefix=perf dump}'
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: do_command 'perf dump' '{prefix=perf dump}' result is 0 bytes
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: do_command 'perf histogram dump' '{prefix=perf histogram dump}'
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: do_command 'perf histogram dump' '{prefix=perf histogram dump}' result is 0 bytes
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: do_command 'perf schema' '{prefix=perf schema}'
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: do_command 'perf schema' '{prefix=perf schema}' result is 0 bytes
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 560414720 unmapped: 98000896 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 560414720 unmapped: 98000896 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x198d6c000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2566f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 560414720 unmapped: 98000896 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5428842 data_alloc: 218103808 data_used: 7270400
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 560414720 unmapped: 98000896 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 560422912 unmapped: 97992704 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x198d6c000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2566f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 560422912 unmapped: 97992704 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 560422912 unmapped: 97992704 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x198d6c000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2566f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 560422912 unmapped: 97992704 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5428842 data_alloc: 218103808 data_used: 7270400
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 560422912 unmapped: 97992704 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 560422912 unmapped: 97992704 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 560422912 unmapped: 97992704 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 560422912 unmapped: 97992704 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 560447488 unmapped: 97968128 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x198d6c000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2566f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5428842 data_alloc: 218103808 data_used: 7270400
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 560447488 unmapped: 97968128 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 560447488 unmapped: 97968128 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 560447488 unmapped: 97968128 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x198d6c000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2566f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 560447488 unmapped: 97968128 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x198d6c000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2566f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 560447488 unmapped: 97968128 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5428842 data_alloc: 218103808 data_used: 7270400
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x198d6c000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2566f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 560447488 unmapped: 97968128 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x198d6c000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2566f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 560447488 unmapped: 97968128 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 560455680 unmapped: 97959936 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 560455680 unmapped: 97959936 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 560455680 unmapped: 97959936 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5428842 data_alloc: 218103808 data_used: 7270400
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 560455680 unmapped: 97959936 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x198d6c000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2566f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 560455680 unmapped: 97959936 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 560455680 unmapped: 97959936 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x198d6c000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2566f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 560455680 unmapped: 97959936 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x198d6c000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2566f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 560455680 unmapped: 97959936 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5428842 data_alloc: 218103808 data_used: 7270400
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 560463872 unmapped: 97951744 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 560463872 unmapped: 97951744 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 7202.4 total, 600.0 interval#012Cumulative writes: 81K writes, 323K keys, 81K commit groups, 1.0 writes per commit group, ingest: 0.33 GB, 0.05 MB/s#012Cumulative WAL: 81K writes, 30K syncs, 2.67 writes per sync, written: 0.33 GB, 0.05 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 3194 writes, 11K keys, 3194 commit groups, 1.0 writes per commit group, ingest: 9.73 MB, 0.02 MB/s#012Interval WAL: 3194 writes, 1368 syncs, 2.33 writes per sync, written: 0.01 GB, 0.02 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 560463872 unmapped: 97951744 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 560463872 unmapped: 97951744 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 560463872 unmapped: 97951744 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x198d6c000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2566f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5428842 data_alloc: 218103808 data_used: 7270400
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 560463872 unmapped: 97951744 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 560463872 unmapped: 97951744 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 560463872 unmapped: 97951744 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 560488448 unmapped: 97927168 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 560488448 unmapped: 97927168 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5428842 data_alloc: 218103808 data_used: 7270400
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 560488448 unmapped: 97927168 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x198d6c000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2566f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x198d6c000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2566f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 560488448 unmapped: 97927168 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 560488448 unmapped: 97927168 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 560488448 unmapped: 97927168 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x198d6c000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2566f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 560488448 unmapped: 97927168 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5428842 data_alloc: 218103808 data_used: 7270400
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 560488448 unmapped: 97927168 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: mgrc ms_handle_reset ms_handle_reset con 0x5612d2943c00
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/798720280
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/798720280,v1:192.168.122.100:6801/798720280]
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: mgrc handle_mgr_configure stats_period=5
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 560496640 unmapped: 97918976 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 560496640 unmapped: 97918976 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x198d6c000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2566f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 560496640 unmapped: 97918976 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 560496640 unmapped: 97918976 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5428842 data_alloc: 218103808 data_used: 7270400
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x198d6c000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2566f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 560480256 unmapped: 97935360 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 560480256 unmapped: 97935360 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 560480256 unmapped: 97935360 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x198d6c000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2566f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 560480256 unmapped: 97935360 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x198d6c000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2566f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 560496640 unmapped: 97918976 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5428842 data_alloc: 218103808 data_used: 7270400
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 560496640 unmapped: 97918976 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 560496640 unmapped: 97918976 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 560496640 unmapped: 97918976 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 560496640 unmapped: 97918976 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 560496640 unmapped: 97918976 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5428842 data_alloc: 218103808 data_used: 7270400
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x198d6c000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2566f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 560496640 unmapped: 97918976 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 560496640 unmapped: 97918976 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 560496640 unmapped: 97918976 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 560496640 unmapped: 97918976 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x198d6c000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2566f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 560504832 unmapped: 97910784 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5428842 data_alloc: 218103808 data_used: 7270400
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 560504832 unmapped: 97910784 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 560504832 unmapped: 97910784 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 159.218887329s of 159.453582764s, submitted: 118
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 560513024 unmapped: 97902592 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 560521216 unmapped: 97894400 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 560521216 unmapped: 97894400 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5428922 data_alloc: 218103808 data_used: 7303168
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x198d6d000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2566f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 560521216 unmapped: 97894400 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 560529408 unmapped: 97886208 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 560537600 unmapped: 97878016 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 560553984 unmapped: 97861632 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 560562176 unmapped: 97853440 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x198d6d000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2566f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5428922 data_alloc: 218103808 data_used: 7303168
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 560562176 unmapped: 97853440 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 560586752 unmapped: 97828864 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.165604591s of 10.034895897s, submitted: 174
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 560603136 unmapped: 97812480 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 560603136 unmapped: 97812480 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x198d6d000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2566f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 560619520 unmapped: 97796096 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5428922 data_alloc: 218103808 data_used: 7303168
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 560627712 unmapped: 97787904 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 560627712 unmapped: 97787904 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561684480 unmapped: 96731136 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x198d6d000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2566f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561692672 unmapped: 96722944 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561709056 unmapped: 96706560 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5428922 data_alloc: 218103808 data_used: 7303168
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561717248 unmapped: 96698368 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561725440 unmapped: 96690176 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.014736176s of 10.144598007s, submitted: 138
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x197bcd000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,1])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561725440 unmapped: 96690176 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x197bcd000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561725440 unmapped: 96690176 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561741824 unmapped: 96673792 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5428922 data_alloc: 218103808 data_used: 7303168
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561750016 unmapped: 96665600 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561750016 unmapped: 96665600 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561750016 unmapped: 96665600 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561750016 unmapped: 96665600 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x197bcd000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561758208 unmapped: 96657408 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5428922 data_alloc: 218103808 data_used: 7303168
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x197bcd000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561758208 unmapped: 96657408 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x197bcd000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561758208 unmapped: 96657408 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561758208 unmapped: 96657408 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561758208 unmapped: 96657408 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561758208 unmapped: 96657408 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5428922 data_alloc: 218103808 data_used: 7303168
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x197bcd000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561758208 unmapped: 96657408 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561758208 unmapped: 96657408 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561766400 unmapped: 96649216 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561766400 unmapped: 96649216 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561766400 unmapped: 96649216 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x197bcd000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5428922 data_alloc: 218103808 data_used: 7303168
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561766400 unmapped: 96649216 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561766400 unmapped: 96649216 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561766400 unmapped: 96649216 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561766400 unmapped: 96649216 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x197bcd000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561766400 unmapped: 96649216 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5428922 data_alloc: 218103808 data_used: 7303168
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561774592 unmapped: 96641024 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561774592 unmapped: 96641024 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561774592 unmapped: 96641024 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x197bcd000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561774592 unmapped: 96641024 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561774592 unmapped: 96641024 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x197bcd000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5428922 data_alloc: 218103808 data_used: 7303168
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561774592 unmapped: 96641024 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561774592 unmapped: 96641024 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561774592 unmapped: 96641024 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561782784 unmapped: 96632832 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561782784 unmapped: 96632832 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x197bcd000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5428922 data_alloc: 218103808 data_used: 7303168
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561782784 unmapped: 96632832 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561782784 unmapped: 96632832 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x197bcd000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561790976 unmapped: 96624640 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561790976 unmapped: 96624640 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561790976 unmapped: 96624640 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5428922 data_alloc: 218103808 data_used: 7303168
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561790976 unmapped: 96624640 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x197bcd000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561799168 unmapped: 96616448 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561799168 unmapped: 96616448 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561799168 unmapped: 96616448 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x197bcd000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 ms_handle_reset con 0x5612d0bd8800 session 0x5612d5b0e000
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561799168 unmapped: 96616448 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5428922 data_alloc: 218103808 data_used: 7303168
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561799168 unmapped: 96616448 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561799168 unmapped: 96616448 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561799168 unmapped: 96616448 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561807360 unmapped: 96608256 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x197bcd000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:47:01 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:47:01 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:47:01.025 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561807360 unmapped: 96608256 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5428922 data_alloc: 218103808 data_used: 7303168
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561807360 unmapped: 96608256 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561807360 unmapped: 96608256 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x197bcd000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561807360 unmapped: 96608256 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561807360 unmapped: 96608256 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561807360 unmapped: 96608256 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5428922 data_alloc: 218103808 data_used: 7303168
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561807360 unmapped: 96608256 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561815552 unmapped: 96600064 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x197bcd000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561815552 unmapped: 96600064 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561815552 unmapped: 96600064 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561815552 unmapped: 96600064 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x197bcd000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5428922 data_alloc: 218103808 data_used: 7303168
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561815552 unmapped: 96600064 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561815552 unmapped: 96600064 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x197bcd000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561815552 unmapped: 96600064 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561815552 unmapped: 96600064 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x197bcd000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561823744 unmapped: 96591872 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5428922 data_alloc: 218103808 data_used: 7303168
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561823744 unmapped: 96591872 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561823744 unmapped: 96591872 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561823744 unmapped: 96591872 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561823744 unmapped: 96591872 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561823744 unmapped: 96591872 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x197bcd000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5428922 data_alloc: 218103808 data_used: 7303168
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561823744 unmapped: 96591872 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561823744 unmapped: 96591872 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561831936 unmapped: 96583680 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x197bcd000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561831936 unmapped: 96583680 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x197bcd000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561831936 unmapped: 96583680 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5428922 data_alloc: 218103808 data_used: 7303168
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561831936 unmapped: 96583680 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561831936 unmapped: 96583680 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561831936 unmapped: 96583680 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x197bcd000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561831936 unmapped: 96583680 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561831936 unmapped: 96583680 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5428922 data_alloc: 218103808 data_used: 7303168
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561840128 unmapped: 96575488 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561840128 unmapped: 96575488 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x197bcd000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561848320 unmapped: 96567296 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561848320 unmapped: 96567296 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561848320 unmapped: 96567296 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5428922 data_alloc: 218103808 data_used: 7303168
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561848320 unmapped: 96567296 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561848320 unmapped: 96567296 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561848320 unmapped: 96567296 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x197bcd000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x197bcd000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561856512 unmapped: 96559104 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561856512 unmapped: 96559104 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5428922 data_alloc: 218103808 data_used: 7303168
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x197bcd000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561856512 unmapped: 96559104 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561856512 unmapped: 96559104 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561856512 unmapped: 96559104 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x197bcd000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561856512 unmapped: 96559104 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561856512 unmapped: 96559104 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5428922 data_alloc: 218103808 data_used: 7303168
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561856512 unmapped: 96559104 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561864704 unmapped: 96550912 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561864704 unmapped: 96550912 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561864704 unmapped: 96550912 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x197bcd000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561864704 unmapped: 96550912 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5428922 data_alloc: 218103808 data_used: 7303168
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561864704 unmapped: 96550912 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561864704 unmapped: 96550912 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561864704 unmapped: 96550912 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x197bcd000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561864704 unmapped: 96550912 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561872896 unmapped: 96542720 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5428922 data_alloc: 218103808 data_used: 7303168
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561872896 unmapped: 96542720 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561872896 unmapped: 96542720 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x197bcd000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561872896 unmapped: 96542720 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x197bcd000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561881088 unmapped: 96534528 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561881088 unmapped: 96534528 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x197bcd000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5428922 data_alloc: 218103808 data_used: 7303168
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561881088 unmapped: 96534528 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561881088 unmapped: 96534528 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561889280 unmapped: 96526336 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x197bcd000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561889280 unmapped: 96526336 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561889280 unmapped: 96526336 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5428922 data_alloc: 218103808 data_used: 7303168
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561889280 unmapped: 96526336 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561889280 unmapped: 96526336 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x197bcd000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561889280 unmapped: 96526336 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x197bcd000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561889280 unmapped: 96526336 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561889280 unmapped: 96526336 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5428922 data_alloc: 218103808 data_used: 7303168
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561897472 unmapped: 96518144 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561897472 unmapped: 96518144 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x197bcd000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x197bcd000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561897472 unmapped: 96518144 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x197bcd000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561897472 unmapped: 96518144 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561897472 unmapped: 96518144 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5428922 data_alloc: 218103808 data_used: 7303168
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561897472 unmapped: 96518144 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x197bcd000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561897472 unmapped: 96518144 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561897472 unmapped: 96518144 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561905664 unmapped: 96509952 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561905664 unmapped: 96509952 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5428922 data_alloc: 218103808 data_used: 7303168
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561905664 unmapped: 96509952 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x197bcd000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561905664 unmapped: 96509952 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x197bcd000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561905664 unmapped: 96509952 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561905664 unmapped: 96509952 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561905664 unmapped: 96509952 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5428922 data_alloc: 218103808 data_used: 7303168
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561905664 unmapped: 96509952 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x197bcd000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561913856 unmapped: 96501760 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561913856 unmapped: 96501760 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561913856 unmapped: 96501760 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561913856 unmapped: 96501760 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x197bcd000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5428922 data_alloc: 218103808 data_used: 7303168
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561913856 unmapped: 96501760 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561913856 unmapped: 96501760 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561913856 unmapped: 96501760 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561913856 unmapped: 96501760 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x197bcd000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561930240 unmapped: 96485376 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5428922 data_alloc: 218103808 data_used: 7303168
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561930240 unmapped: 96485376 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x197bcd000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561930240 unmapped: 96485376 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561930240 unmapped: 96485376 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x197bcd000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x197bcd000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561930240 unmapped: 96485376 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561938432 unmapped: 96477184 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5428922 data_alloc: 218103808 data_used: 7303168
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561938432 unmapped: 96477184 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561938432 unmapped: 96477184 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561946624 unmapped: 96468992 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x197bcd000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561946624 unmapped: 96468992 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x197bcd000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561946624 unmapped: 96468992 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5428922 data_alloc: 218103808 data_used: 7303168
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561946624 unmapped: 96468992 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561946624 unmapped: 96468992 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561946624 unmapped: 96468992 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561946624 unmapped: 96468992 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x197bcd000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561946624 unmapped: 96468992 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5428922 data_alloc: 218103808 data_used: 7303168
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561963008 unmapped: 96452608 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561963008 unmapped: 96452608 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x197bcd000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561963008 unmapped: 96452608 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561963008 unmapped: 96452608 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561963008 unmapped: 96452608 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5428922 data_alloc: 218103808 data_used: 7303168
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561963008 unmapped: 96452608 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561963008 unmapped: 96452608 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561963008 unmapped: 96452608 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x197bcd000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561979392 unmapped: 96436224 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561979392 unmapped: 96436224 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5428922 data_alloc: 218103808 data_used: 7303168
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x197bcd000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561979392 unmapped: 96436224 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x197bcd000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561979392 unmapped: 96436224 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x197bcd000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561979392 unmapped: 96436224 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561979392 unmapped: 96436224 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x197bcd000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561979392 unmapped: 96436224 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5428922 data_alloc: 218103808 data_used: 7303168
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561979392 unmapped: 96436224 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x197bcd000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561995776 unmapped: 96419840 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561995776 unmapped: 96419840 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x197bcd000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561995776 unmapped: 96419840 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561995776 unmapped: 96419840 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5428922 data_alloc: 218103808 data_used: 7303168
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561995776 unmapped: 96419840 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561995776 unmapped: 96419840 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x197bcd000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561995776 unmapped: 96419840 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561995776 unmapped: 96419840 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562003968 unmapped: 96411648 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x197bcd000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5428922 data_alloc: 218103808 data_used: 7303168
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562003968 unmapped: 96411648 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x197bcd000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562003968 unmapped: 96411648 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562003968 unmapped: 96411648 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x197bcd000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562003968 unmapped: 96411648 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562003968 unmapped: 96411648 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x197bcd000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x197bcd000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5428922 data_alloc: 218103808 data_used: 7303168
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562003968 unmapped: 96411648 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562003968 unmapped: 96411648 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562028544 unmapped: 96387072 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562028544 unmapped: 96387072 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562028544 unmapped: 96387072 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5428922 data_alloc: 218103808 data_used: 7303168
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x197bcd000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562036736 unmapped: 96378880 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562036736 unmapped: 96378880 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562036736 unmapped: 96378880 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x197bcd000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562036736 unmapped: 96378880 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562036736 unmapped: 96378880 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5428922 data_alloc: 218103808 data_used: 7303168
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562044928 unmapped: 96370688 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562044928 unmapped: 96370688 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x197bcd000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562044928 unmapped: 96370688 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562044928 unmapped: 96370688 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562044928 unmapped: 96370688 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5428922 data_alloc: 218103808 data_used: 7303168
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562044928 unmapped: 96370688 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562044928 unmapped: 96370688 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x197bcd000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562044928 unmapped: 96370688 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562053120 unmapped: 96362496 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562053120 unmapped: 96362496 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5428922 data_alloc: 218103808 data_used: 7303168
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562053120 unmapped: 96362496 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562053120 unmapped: 96362496 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562053120 unmapped: 96362496 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x197bcd000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562053120 unmapped: 96362496 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562053120 unmapped: 96362496 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5428922 data_alloc: 218103808 data_used: 7303168
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562053120 unmapped: 96362496 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562061312 unmapped: 96354304 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562061312 unmapped: 96354304 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x197bcd000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562061312 unmapped: 96354304 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x197bcd000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562061312 unmapped: 96354304 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5428922 data_alloc: 218103808 data_used: 7303168
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562061312 unmapped: 96354304 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562061312 unmapped: 96354304 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562061312 unmapped: 96354304 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x197bcd000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562061312 unmapped: 96354304 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562085888 unmapped: 96329728 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5428922 data_alloc: 218103808 data_used: 7303168
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562085888 unmapped: 96329728 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562085888 unmapped: 96329728 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562085888 unmapped: 96329728 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x197bcd000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562094080 unmapped: 96321536 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562094080 unmapped: 96321536 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5428922 data_alloc: 218103808 data_used: 7303168
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562094080 unmapped: 96321536 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562094080 unmapped: 96321536 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562110464 unmapped: 96305152 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x197bcd000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562110464 unmapped: 96305152 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562110464 unmapped: 96305152 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5428922 data_alloc: 218103808 data_used: 7303168
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562110464 unmapped: 96305152 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x197bcd000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562110464 unmapped: 96305152 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562110464 unmapped: 96305152 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x197bcd000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562110464 unmapped: 96305152 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562110464 unmapped: 96305152 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5428922 data_alloc: 218103808 data_used: 7303168
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562126848 unmapped: 96288768 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562126848 unmapped: 96288768 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562126848 unmapped: 96288768 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562126848 unmapped: 96288768 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x197bcd000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562126848 unmapped: 96288768 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5428922 data_alloc: 218103808 data_used: 7303168
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562126848 unmapped: 96288768 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562126848 unmapped: 96288768 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x197bcd000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562135040 unmapped: 96280576 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x197bcd000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562143232 unmapped: 96272384 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562143232 unmapped: 96272384 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5428922 data_alloc: 218103808 data_used: 7303168
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562143232 unmapped: 96272384 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562143232 unmapped: 96272384 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562143232 unmapped: 96272384 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x197bcd000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562143232 unmapped: 96272384 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562143232 unmapped: 96272384 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5428922 data_alloc: 218103808 data_used: 7303168
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562143232 unmapped: 96272384 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x197bcd000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562167808 unmapped: 96247808 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562167808 unmapped: 96247808 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x197bcd000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562167808 unmapped: 96247808 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562167808 unmapped: 96247808 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5428922 data_alloc: 218103808 data_used: 7303168
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562167808 unmapped: 96247808 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x197bcd000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562167808 unmapped: 96247808 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x197bcd000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562167808 unmapped: 96247808 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562167808 unmapped: 96247808 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562184192 unmapped: 96231424 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5428922 data_alloc: 218103808 data_used: 7303168
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562184192 unmapped: 96231424 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x197bcd000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562184192 unmapped: 96231424 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562184192 unmapped: 96231424 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x197bcd000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562184192 unmapped: 96231424 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562184192 unmapped: 96231424 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x197bcd000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5428922 data_alloc: 218103808 data_used: 7303168
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562184192 unmapped: 96231424 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562184192 unmapped: 96231424 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562192384 unmapped: 96223232 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562192384 unmapped: 96223232 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562192384 unmapped: 96223232 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5428922 data_alloc: 218103808 data_used: 7303168
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562192384 unmapped: 96223232 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x197bcd000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x197bcd000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562192384 unmapped: 96223232 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562192384 unmapped: 96223232 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562192384 unmapped: 96223232 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x197bcd000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562192384 unmapped: 96223232 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5428922 data_alloc: 218103808 data_used: 7303168
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562208768 unmapped: 96206848 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562208768 unmapped: 96206848 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562208768 unmapped: 96206848 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562208768 unmapped: 96206848 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x197bcd000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562216960 unmapped: 96198656 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5428922 data_alloc: 218103808 data_used: 7303168
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x197bcd000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562216960 unmapped: 96198656 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562216960 unmapped: 96198656 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562216960 unmapped: 96198656 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x197bcd000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562241536 unmapped: 96174080 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562241536 unmapped: 96174080 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5428922 data_alloc: 218103808 data_used: 7303168
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562241536 unmapped: 96174080 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562241536 unmapped: 96174080 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x197bcd000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562241536 unmapped: 96174080 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562241536 unmapped: 96174080 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562241536 unmapped: 96174080 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5428922 data_alloc: 218103808 data_used: 7303168
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562241536 unmapped: 96174080 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x197bcd000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562257920 unmapped: 96157696 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562257920 unmapped: 96157696 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562257920 unmapped: 96157696 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562257920 unmapped: 96157696 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5428922 data_alloc: 218103808 data_used: 7303168
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x197bcd000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562257920 unmapped: 96157696 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562257920 unmapped: 96157696 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562257920 unmapped: 96157696 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x197bcd000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562257920 unmapped: 96157696 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562266112 unmapped: 96149504 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5428922 data_alloc: 218103808 data_used: 7303168
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562266112 unmapped: 96149504 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562266112 unmapped: 96149504 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x197bcd000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x197bcd000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562266112 unmapped: 96149504 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562266112 unmapped: 96149504 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562266112 unmapped: 96149504 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5428922 data_alloc: 218103808 data_used: 7303168
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562266112 unmapped: 96149504 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562282496 unmapped: 96133120 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562282496 unmapped: 96133120 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x197bcd000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562282496 unmapped: 96133120 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562282496 unmapped: 96133120 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5428922 data_alloc: 218103808 data_used: 7303168
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562282496 unmapped: 96133120 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x197bcd000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562282496 unmapped: 96133120 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562282496 unmapped: 96133120 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x197bcd000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562282496 unmapped: 96133120 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x197bcd000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562298880 unmapped: 96116736 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5428922 data_alloc: 218103808 data_used: 7303168
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562298880 unmapped: 96116736 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562298880 unmapped: 96116736 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x197bcd000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562298880 unmapped: 96116736 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562298880 unmapped: 96116736 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562298880 unmapped: 96116736 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x197bcd000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5428922 data_alloc: 218103808 data_used: 7303168
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562307072 unmapped: 96108544 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562307072 unmapped: 96108544 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x197bcd000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562323456 unmapped: 96092160 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x197bcd000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562323456 unmapped: 96092160 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562323456 unmapped: 96092160 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5428922 data_alloc: 218103808 data_used: 7303168
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562323456 unmapped: 96092160 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562323456 unmapped: 96092160 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562323456 unmapped: 96092160 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x197bcd000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562323456 unmapped: 96092160 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562323456 unmapped: 96092160 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x197bcd000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5428922 data_alloc: 218103808 data_used: 7303168
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562331648 unmapped: 96083968 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562331648 unmapped: 96083968 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562331648 unmapped: 96083968 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562331648 unmapped: 96083968 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562331648 unmapped: 96083968 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5428922 data_alloc: 218103808 data_used: 7303168
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562331648 unmapped: 96083968 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x197bcd000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562339840 unmapped: 96075776 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562339840 unmapped: 96075776 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x197bcd000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562348032 unmapped: 96067584 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562348032 unmapped: 96067584 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5428922 data_alloc: 218103808 data_used: 7303168
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562348032 unmapped: 96067584 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562348032 unmapped: 96067584 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x197bcd000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562348032 unmapped: 96067584 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562348032 unmapped: 96067584 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562348032 unmapped: 96067584 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5428922 data_alloc: 218103808 data_used: 7303168
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562348032 unmapped: 96067584 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562372608 unmapped: 96043008 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x197bcd000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562372608 unmapped: 96043008 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562372608 unmapped: 96043008 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562372608 unmapped: 96043008 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5428922 data_alloc: 218103808 data_used: 7303168
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562372608 unmapped: 96043008 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562372608 unmapped: 96043008 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x197bcd000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562372608 unmapped: 96043008 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x197bcd000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x197bcd000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562372608 unmapped: 96043008 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x197bcd000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562397184 unmapped: 96018432 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5428922 data_alloc: 218103808 data_used: 7303168
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562397184 unmapped: 96018432 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562397184 unmapped: 96018432 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x197bcd000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562397184 unmapped: 96018432 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562397184 unmapped: 96018432 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562397184 unmapped: 96018432 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x197bcd000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5428922 data_alloc: 218103808 data_used: 7303168
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562397184 unmapped: 96018432 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562397184 unmapped: 96018432 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562413568 unmapped: 96002048 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562413568 unmapped: 96002048 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562413568 unmapped: 96002048 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x197bcd000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5428922 data_alloc: 218103808 data_used: 7303168
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562413568 unmapped: 96002048 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x197bcd000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562413568 unmapped: 96002048 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562413568 unmapped: 96002048 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562413568 unmapped: 96002048 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562413568 unmapped: 96002048 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x197bcd000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5428922 data_alloc: 218103808 data_used: 7303168
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562429952 unmapped: 95985664 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562429952 unmapped: 95985664 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x197bcd000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562438144 unmapped: 95977472 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562438144 unmapped: 95977472 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562438144 unmapped: 95977472 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5428922 data_alloc: 218103808 data_used: 7303168
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562438144 unmapped: 95977472 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562438144 unmapped: 95977472 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562438144 unmapped: 95977472 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x197bcd000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562446336 unmapped: 95969280 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x197bcd000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562446336 unmapped: 95969280 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5428922 data_alloc: 218103808 data_used: 7303168
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562446336 unmapped: 95969280 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x197bcd000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x197bcd000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562454528 unmapped: 95961088 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562454528 unmapped: 95961088 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562454528 unmapped: 95961088 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562454528 unmapped: 95961088 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x197bcd000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5428922 data_alloc: 218103808 data_used: 7303168
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562454528 unmapped: 95961088 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562462720 unmapped: 95952896 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x197bcd000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562462720 unmapped: 95952896 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x197bcd000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562462720 unmapped: 95952896 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x197bcd000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562462720 unmapped: 95952896 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5428922 data_alloc: 218103808 data_used: 7303168
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562462720 unmapped: 95952896 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562462720 unmapped: 95952896 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562462720 unmapped: 95952896 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562462720 unmapped: 95952896 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562479104 unmapped: 95936512 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x197bcd000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5428922 data_alloc: 218103808 data_used: 7303168
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562479104 unmapped: 95936512 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562479104 unmapped: 95936512 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562479104 unmapped: 95936512 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x197bcd000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562487296 unmapped: 95928320 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x197bcd000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562487296 unmapped: 95928320 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5428922 data_alloc: 218103808 data_used: 7303168
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x197bcd000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562487296 unmapped: 95928320 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562487296 unmapped: 95928320 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x197bcd000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562511872 unmapped: 95903744 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x197bcd000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562511872 unmapped: 95903744 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562511872 unmapped: 95903744 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5428922 data_alloc: 218103808 data_used: 7303168
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562511872 unmapped: 95903744 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562511872 unmapped: 95903744 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x197bcd000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562511872 unmapped: 95903744 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562511872 unmapped: 95903744 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562511872 unmapped: 95903744 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x197bcd000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5428922 data_alloc: 218103808 data_used: 7303168
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562520064 unmapped: 95895552 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562520064 unmapped: 95895552 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562520064 unmapped: 95895552 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x197bcd000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562520064 unmapped: 95895552 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562520064 unmapped: 95895552 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5428922 data_alloc: 218103808 data_used: 7303168
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x197bcd000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562520064 unmapped: 95895552 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x197bcd000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562528256 unmapped: 95887360 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562528256 unmapped: 95887360 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x197bcd000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562536448 unmapped: 95879168 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562536448 unmapped: 95879168 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5428922 data_alloc: 218103808 data_used: 7303168
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x197bcd000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562536448 unmapped: 95879168 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562536448 unmapped: 95879168 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562536448 unmapped: 95879168 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562536448 unmapped: 95879168 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562536448 unmapped: 95879168 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5428922 data_alloc: 218103808 data_used: 7303168
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562536448 unmapped: 95879168 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x197bcd000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562561024 unmapped: 95854592 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562561024 unmapped: 95854592 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562561024 unmapped: 95854592 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x197bcd000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562561024 unmapped: 95854592 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5428922 data_alloc: 218103808 data_used: 7303168
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562561024 unmapped: 95854592 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x197bcd000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562561024 unmapped: 95854592 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562561024 unmapped: 95854592 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562561024 unmapped: 95854592 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x197bcd000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562569216 unmapped: 95846400 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5428922 data_alloc: 218103808 data_used: 7303168
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562569216 unmapped: 95846400 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562569216 unmapped: 95846400 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x197bcd000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562569216 unmapped: 95846400 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562569216 unmapped: 95846400 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562569216 unmapped: 95846400 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x197bcd000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5428922 data_alloc: 218103808 data_used: 7303168
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562569216 unmapped: 95846400 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562569216 unmapped: 95846400 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562585600 unmapped: 95830016 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562585600 unmapped: 95830016 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x197bcd000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562585600 unmapped: 95830016 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5428922 data_alloc: 218103808 data_used: 7303168
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x197bcd000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562585600 unmapped: 95830016 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562585600 unmapped: 95830016 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562585600 unmapped: 95830016 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562585600 unmapped: 95830016 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562585600 unmapped: 95830016 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5428922 data_alloc: 218103808 data_used: 7303168
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562593792 unmapped: 95821824 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x197bcd000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562593792 unmapped: 95821824 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562593792 unmapped: 95821824 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562593792 unmapped: 95821824 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562593792 unmapped: 95821824 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5428922 data_alloc: 218103808 data_used: 7303168
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x197bcd000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562593792 unmapped: 95821824 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562593792 unmapped: 95821824 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562593792 unmapped: 95821824 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562618368 unmapped: 95797248 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562618368 unmapped: 95797248 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x197bcd000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5428922 data_alloc: 218103808 data_used: 7303168
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562626560 unmapped: 95789056 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562626560 unmapped: 95789056 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562626560 unmapped: 95789056 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562634752 unmapped: 95780864 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x197bcd000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562634752 unmapped: 95780864 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5428922 data_alloc: 218103808 data_used: 7303168
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562634752 unmapped: 95780864 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562642944 unmapped: 95772672 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562642944 unmapped: 95772672 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x197bcd000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562642944 unmapped: 95772672 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562642944 unmapped: 95772672 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5428922 data_alloc: 218103808 data_used: 7303168
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562642944 unmapped: 95772672 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562642944 unmapped: 95772672 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562642944 unmapped: 95772672 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x197bcd000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562642944 unmapped: 95772672 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562659328 unmapped: 95756288 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5428922 data_alloc: 218103808 data_used: 7303168
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562659328 unmapped: 95756288 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x197bcd000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562659328 unmapped: 95756288 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562659328 unmapped: 95756288 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x197bcd000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562659328 unmapped: 95756288 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562659328 unmapped: 95756288 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5428922 data_alloc: 218103808 data_used: 7303168
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562659328 unmapped: 95756288 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562659328 unmapped: 95756288 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562667520 unmapped: 95748096 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x197bcd000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562667520 unmapped: 95748096 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x197bcd000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562667520 unmapped: 95748096 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5428922 data_alloc: 218103808 data_used: 7303168
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562667520 unmapped: 95748096 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562667520 unmapped: 95748096 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x197bcd000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562667520 unmapped: 95748096 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562667520 unmapped: 95748096 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562667520 unmapped: 95748096 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x197bcd000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5428922 data_alloc: 218103808 data_used: 7303168
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562675712 unmapped: 95739904 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562675712 unmapped: 95739904 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562675712 unmapped: 95739904 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562675712 unmapped: 95739904 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x197bcd000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 562675712 unmapped: 95739904 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x197bcd000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5428922 data_alloc: 218103808 data_used: 7303168
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x197bcd000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561111040 unmapped: 97304576 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561111040 unmapped: 97304576 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561119232 unmapped: 97296384 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561135616 unmapped: 97280000 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x197bcd000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561135616 unmapped: 97280000 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5428922 data_alloc: 218103808 data_used: 7303168
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561135616 unmapped: 97280000 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x197bcd000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561135616 unmapped: 97280000 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561135616 unmapped: 97280000 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x197bcd000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561135616 unmapped: 97280000 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561135616 unmapped: 97280000 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5428922 data_alloc: 218103808 data_used: 7303168
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561135616 unmapped: 97280000 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x197bcd000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561143808 unmapped: 97271808 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561143808 unmapped: 97271808 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561143808 unmapped: 97271808 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561143808 unmapped: 97271808 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x197bcd000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5428922 data_alloc: 218103808 data_used: 7303168
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561143808 unmapped: 97271808 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561143808 unmapped: 97271808 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x197bcd000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561143808 unmapped: 97271808 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x197bcd000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561143808 unmapped: 97271808 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561160192 unmapped: 97255424 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5428922 data_alloc: 218103808 data_used: 7303168
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561160192 unmapped: 97255424 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561160192 unmapped: 97255424 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x197bcd000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561160192 unmapped: 97255424 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x197bcd000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561160192 unmapped: 97255424 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561160192 unmapped: 97255424 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5428922 data_alloc: 218103808 data_used: 7303168
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561160192 unmapped: 97255424 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x197bcd000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561160192 unmapped: 97255424 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x197bcd000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561168384 unmapped: 97247232 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561168384 unmapped: 97247232 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x197bcd000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561168384 unmapped: 97247232 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5428922 data_alloc: 218103808 data_used: 7303168
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561168384 unmapped: 97247232 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561168384 unmapped: 97247232 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561168384 unmapped: 97247232 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561168384 unmapped: 97247232 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x197bcd000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561168384 unmapped: 97247232 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5428922 data_alloc: 218103808 data_used: 7303168
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561192960 unmapped: 97222656 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561192960 unmapped: 97222656 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561192960 unmapped: 97222656 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x197bcd000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561201152 unmapped: 97214464 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561201152 unmapped: 97214464 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5428922 data_alloc: 218103808 data_used: 7303168
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561201152 unmapped: 97214464 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561201152 unmapped: 97214464 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x197bcd000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561201152 unmapped: 97214464 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561209344 unmapped: 97206272 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561209344 unmapped: 97206272 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 7802.4 total, 600.0 interval
Cumulative writes: 81K writes, 325K keys, 81K commit groups, 1.0 writes per commit group, ingest: 0.33 GB, 0.04 MB/s
Cumulative WAL: 81K writes, 30K syncs, 2.66 writes per sync, written: 0.33 GB, 0.04 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 672 writes, 1020 keys, 672 commit groups, 1.0 writes per commit group, ingest: 0.33 MB, 0.00 MB/s
Interval WAL: 672 writes, 336 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5428922 data_alloc: 218103808 data_used: 7303168
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561209344 unmapped: 97206272 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561209344 unmapped: 97206272 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x197bcd000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561209344 unmapped: 97206272 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561209344 unmapped: 97206272 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561209344 unmapped: 97206272 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5428922 data_alloc: 218103808 data_used: 7303168
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561209344 unmapped: 97206272 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561225728 unmapped: 97189888 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x197bcd000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561225728 unmapped: 97189888 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x197bcd000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561225728 unmapped: 97189888 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561225728 unmapped: 97189888 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5428922 data_alloc: 218103808 data_used: 7303168
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561225728 unmapped: 97189888 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561225728 unmapped: 97189888 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561225728 unmapped: 97189888 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561225728 unmapped: 97189888 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x197bcd000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561242112 unmapped: 97173504 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5428922 data_alloc: 218103808 data_used: 7303168
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561242112 unmapped: 97173504 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x197bcd000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561242112 unmapped: 97173504 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561242112 unmapped: 97173504 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561242112 unmapped: 97173504 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561242112 unmapped: 97173504 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x197bcd000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5428922 data_alloc: 218103808 data_used: 7303168
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561242112 unmapped: 97173504 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561242112 unmapped: 97173504 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561250304 unmapped: 97165312 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561250304 unmapped: 97165312 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561250304 unmapped: 97165312 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x197bcd000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5428922 data_alloc: 218103808 data_used: 7303168
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561250304 unmapped: 97165312 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561250304 unmapped: 97165312 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561250304 unmapped: 97165312 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561250304 unmapped: 97165312 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x197bcd000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561250304 unmapped: 97165312 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5428922 data_alloc: 218103808 data_used: 7303168
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561266688 unmapped: 97148928 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561266688 unmapped: 97148928 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561266688 unmapped: 97148928 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x197bcd000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x197bcd000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561274880 unmapped: 97140736 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561274880 unmapped: 97140736 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5428922 data_alloc: 218103808 data_used: 7303168
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561274880 unmapped: 97140736 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561274880 unmapped: 97140736 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561274880 unmapped: 97140736 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561274880 unmapped: 97140736 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 582.677917480s of 583.901062012s, submitted: 24
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x197bcd000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561283072 unmapped: 97132544 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5428922 data_alloc: 218103808 data_used: 7303168
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x197bcd000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561291264 unmapped: 97124352 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561291264 unmapped: 97124352 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561291264 unmapped: 97124352 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561299456 unmapped: 97116160 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561324032 unmapped: 97091584 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5428922 data_alloc: 218103808 data_used: 7303168
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561340416 unmapped: 97075200 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x197bcd000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561348608 unmapped: 97067008 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561348608 unmapped: 97067008 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561389568 unmapped: 97026048 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.359865189s of 10.003479958s, submitted: 214
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561405952 unmapped: 97009664 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5428922 data_alloc: 218103808 data_used: 7303168
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x197bcd000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561405952 unmapped: 97009664 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561405952 unmapped: 97009664 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561405952 unmapped: 97009664 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x197bcd000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x197bcd000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561405952 unmapped: 97009664 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561405952 unmapped: 97009664 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x197bcd000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5428922 data_alloc: 218103808 data_used: 7303168
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561405952 unmapped: 97009664 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561405952 unmapped: 97009664 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x197bcd000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561405952 unmapped: 97009664 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561405952 unmapped: 97009664 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561405952 unmapped: 97009664 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x197bcd000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5428922 data_alloc: 218103808 data_used: 7303168
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561405952 unmapped: 97009664 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561405952 unmapped: 97009664 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x197bcd000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561414144 unmapped: 97001472 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561414144 unmapped: 97001472 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561414144 unmapped: 97001472 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5428922 data_alloc: 218103808 data_used: 7303168
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x197bcd000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561414144 unmapped: 97001472 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x197bcd000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561414144 unmapped: 97001472 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561414144 unmapped: 97001472 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561414144 unmapped: 97001472 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561414144 unmapped: 97001472 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5428922 data_alloc: 218103808 data_used: 7303168
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x197bcd000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561422336 unmapped: 96993280 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561422336 unmapped: 96993280 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561422336 unmapped: 96993280 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561422336 unmapped: 96993280 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561422336 unmapped: 96993280 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5428922 data_alloc: 218103808 data_used: 7303168
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x197bcd000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561422336 unmapped: 96993280 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561422336 unmapped: 96993280 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x197bcd000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561430528 unmapped: 96985088 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561438720 unmapped: 96976896 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561438720 unmapped: 96976896 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5428922 data_alloc: 218103808 data_used: 7303168
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561438720 unmapped: 96976896 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561438720 unmapped: 96976896 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x197bcd000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561438720 unmapped: 96976896 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561438720 unmapped: 96976896 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x197bcd000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561438720 unmapped: 96976896 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x197bcd000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5428922 data_alloc: 218103808 data_used: 7303168
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561438720 unmapped: 96976896 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561438720 unmapped: 96976896 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x197bcd000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x197bcd000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561438720 unmapped: 96976896 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561438720 unmapped: 96976896 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x197bcd000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561446912 unmapped: 96968704 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5428922 data_alloc: 218103808 data_used: 7303168
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561446912 unmapped: 96968704 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561446912 unmapped: 96968704 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561446912 unmapped: 96968704 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561455104 unmapped: 96960512 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x197bcd000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x197bcd000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561455104 unmapped: 96960512 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5428922 data_alloc: 218103808 data_used: 7303168
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561455104 unmapped: 96960512 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561455104 unmapped: 96960512 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561455104 unmapped: 96960512 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x197bcd000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561455104 unmapped: 96960512 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561455104 unmapped: 96960512 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5428922 data_alloc: 218103808 data_used: 7303168
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x197bcd000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561455104 unmapped: 96960512 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x197bcd000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561463296 unmapped: 96952320 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561463296 unmapped: 96952320 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561463296 unmapped: 96952320 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561463296 unmapped: 96952320 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5428922 data_alloc: 218103808 data_used: 7303168
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561463296 unmapped: 96952320 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561463296 unmapped: 96952320 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x197bcd000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561463296 unmapped: 96952320 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561463296 unmapped: 96952320 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x197bcd000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561479680 unmapped: 96935936 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5428922 data_alloc: 218103808 data_used: 7303168
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561479680 unmapped: 96935936 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x197bcd000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561479680 unmapped: 96935936 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561479680 unmapped: 96935936 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561479680 unmapped: 96935936 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x197bcd000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561479680 unmapped: 96935936 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5428922 data_alloc: 218103808 data_used: 7303168
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561479680 unmapped: 96935936 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561479680 unmapped: 96935936 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x197bcd000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561487872 unmapped: 96927744 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561487872 unmapped: 96927744 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561487872 unmapped: 96927744 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5428922 data_alloc: 218103808 data_used: 7303168
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561487872 unmapped: 96927744 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561487872 unmapped: 96927744 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x197bcd000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561487872 unmapped: 96927744 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561487872 unmapped: 96927744 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561487872 unmapped: 96927744 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5428922 data_alloc: 218103808 data_used: 7303168
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x197bcd000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561504256 unmapped: 96911360 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561504256 unmapped: 96911360 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561504256 unmapped: 96911360 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x197bcd000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561504256 unmapped: 96911360 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561504256 unmapped: 96911360 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5428922 data_alloc: 218103808 data_used: 7303168
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561504256 unmapped: 96911360 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561504256 unmapped: 96911360 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x197bcd000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561504256 unmapped: 96911360 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561512448 unmapped: 96903168 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561512448 unmapped: 96903168 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5428922 data_alloc: 218103808 data_used: 7303168
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561512448 unmapped: 96903168 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x197bcd000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561512448 unmapped: 96903168 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561512448 unmapped: 96903168 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561512448 unmapped: 96903168 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561512448 unmapped: 96903168 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5428922 data_alloc: 218103808 data_used: 7303168
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561512448 unmapped: 96903168 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x197bcd000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561520640 unmapped: 96894976 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561520640 unmapped: 96894976 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x197bcd000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561520640 unmapped: 96894976 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561520640 unmapped: 96894976 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5428922 data_alloc: 218103808 data_used: 7303168
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561520640 unmapped: 96894976 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561520640 unmapped: 96894976 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x197bcd000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561520640 unmapped: 96894976 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561520640 unmapped: 96894976 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561528832 unmapped: 96886784 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5428922 data_alloc: 218103808 data_used: 7303168
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561528832 unmapped: 96886784 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561528832 unmapped: 96886784 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x197bcd000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561528832 unmapped: 96886784 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561528832 unmapped: 96886784 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561528832 unmapped: 96886784 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5428922 data_alloc: 218103808 data_used: 7303168
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561528832 unmapped: 96886784 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x197bcd000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561528832 unmapped: 96886784 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561537024 unmapped: 96878592 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561537024 unmapped: 96878592 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561537024 unmapped: 96878592 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x197bcd000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5428922 data_alloc: 218103808 data_used: 7303168
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561545216 unmapped: 96870400 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561545216 unmapped: 96870400 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561545216 unmapped: 96870400 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561545216 unmapped: 96870400 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561545216 unmapped: 96870400 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5428922 data_alloc: 218103808 data_used: 7303168
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561553408 unmapped: 96862208 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x197bcd000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561553408 unmapped: 96862208 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561553408 unmapped: 96862208 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561553408 unmapped: 96862208 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561553408 unmapped: 96862208 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5428922 data_alloc: 218103808 data_used: 7303168
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561553408 unmapped: 96862208 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x197bcd000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561553408 unmapped: 96862208 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x197bcd000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561553408 unmapped: 96862208 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x197bcd000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561561600 unmapped: 96854016 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561561600 unmapped: 96854016 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5428922 data_alloc: 218103808 data_used: 7303168
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561561600 unmapped: 96854016 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x197bcd000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561569792 unmapped: 96845824 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561569792 unmapped: 96845824 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561569792 unmapped: 96845824 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561569792 unmapped: 96845824 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5428922 data_alloc: 218103808 data_used: 7303168
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561569792 unmapped: 96845824 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x197bcd000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561586176 unmapped: 96829440 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x197bcd000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561586176 unmapped: 96829440 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561586176 unmapped: 96829440 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561586176 unmapped: 96829440 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5428922 data_alloc: 218103808 data_used: 7303168
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561586176 unmapped: 96829440 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x197bcd000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561586176 unmapped: 96829440 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x197bcd000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561586176 unmapped: 96829440 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561586176 unmapped: 96829440 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561602560 unmapped: 96813056 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5428922 data_alloc: 218103808 data_used: 7303168
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561602560 unmapped: 96813056 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561602560 unmapped: 96813056 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561602560 unmapped: 96813056 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x197bcd000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561602560 unmapped: 96813056 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561602560 unmapped: 96813056 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5428922 data_alloc: 218103808 data_used: 7303168
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561602560 unmapped: 96813056 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561602560 unmapped: 96813056 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x197bcd000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561610752 unmapped: 96804864 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561610752 unmapped: 96804864 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561610752 unmapped: 96804864 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5428922 data_alloc: 218103808 data_used: 7303168
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561610752 unmapped: 96804864 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x197bcd000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x197bcd000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561610752 unmapped: 96804864 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x197bcd000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561610752 unmapped: 96804864 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561610752 unmapped: 96804864 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561610752 unmapped: 96804864 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5428922 data_alloc: 218103808 data_used: 7303168
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561627136 unmapped: 96788480 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561627136 unmapped: 96788480 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561627136 unmapped: 96788480 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x197bcd000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561635328 unmapped: 96780288 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561635328 unmapped: 96780288 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5428922 data_alloc: 218103808 data_used: 7303168
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561635328 unmapped: 96780288 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561635328 unmapped: 96780288 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x197bcd000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561635328 unmapped: 96780288 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561643520 unmapped: 96772096 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561643520 unmapped: 96772096 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5428922 data_alloc: 218103808 data_used: 7303168
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561643520 unmapped: 96772096 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561643520 unmapped: 96772096 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561643520 unmapped: 96772096 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x197bcd000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561643520 unmapped: 96772096 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561643520 unmapped: 96772096 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5428922 data_alloc: 218103808 data_used: 7303168
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561643520 unmapped: 96772096 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561651712 unmapped: 96763904 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561651712 unmapped: 96763904 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561651712 unmapped: 96763904 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x197bcd000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561651712 unmapped: 96763904 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5428922 data_alloc: 218103808 data_used: 7303168
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x197bcd000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561651712 unmapped: 96763904 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561651712 unmapped: 96763904 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561651712 unmapped: 96763904 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561651712 unmapped: 96763904 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561668096 unmapped: 96747520 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5428922 data_alloc: 218103808 data_used: 7303168
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561668096 unmapped: 96747520 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x197bcd000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561668096 unmapped: 96747520 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561668096 unmapped: 96747520 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561676288 unmapped: 96739328 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x197bcd000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561676288 unmapped: 96739328 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5428922 data_alloc: 218103808 data_used: 7303168
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561676288 unmapped: 96739328 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561676288 unmapped: 96739328 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561684480 unmapped: 96731136 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x197bcd000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561684480 unmapped: 96731136 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561684480 unmapped: 96731136 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5428922 data_alloc: 218103808 data_used: 7303168
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561684480 unmapped: 96731136 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561684480 unmapped: 96731136 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561684480 unmapped: 96731136 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x197bcd000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x197bcd000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561684480 unmapped: 96731136 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561684480 unmapped: 96731136 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5428922 data_alloc: 218103808 data_used: 7303168
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561700864 unmapped: 96714752 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561700864 unmapped: 96714752 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561700864 unmapped: 96714752 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x197bcd000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561700864 unmapped: 96714752 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561700864 unmapped: 96714752 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5428922 data_alloc: 218103808 data_used: 7303168
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561700864 unmapped: 96714752 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561700864 unmapped: 96714752 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x197bcd000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561700864 unmapped: 96714752 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561709056 unmapped: 96706560 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561709056 unmapped: 96706560 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5428922 data_alloc: 218103808 data_used: 7303168
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561709056 unmapped: 96706560 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561709056 unmapped: 96706560 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x197bcd000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561709056 unmapped: 96706560 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561709056 unmapped: 96706560 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561709056 unmapped: 96706560 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x197bcd000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5428922 data_alloc: 218103808 data_used: 7303168
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561709056 unmapped: 96706560 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561725440 unmapped: 96690176 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561725440 unmapped: 96690176 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x197bcd000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561733632 unmapped: 96681984 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561733632 unmapped: 96681984 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5428922 data_alloc: 218103808 data_used: 7303168
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561733632 unmapped: 96681984 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561733632 unmapped: 96681984 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561733632 unmapped: 96681984 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561733632 unmapped: 96681984 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x197bcd000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561741824 unmapped: 96673792 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5428922 data_alloc: 218103808 data_used: 7303168
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561741824 unmapped: 96673792 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561741824 unmapped: 96673792 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561741824 unmapped: 96673792 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x197bcd000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561741824 unmapped: 96673792 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x197bcd000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561741824 unmapped: 96673792 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5428922 data_alloc: 218103808 data_used: 7303168
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561741824 unmapped: 96673792 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561741824 unmapped: 96673792 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x197bcd000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561758208 unmapped: 96657408 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561758208 unmapped: 96657408 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561758208 unmapped: 96657408 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5428922 data_alloc: 218103808 data_used: 7303168
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561758208 unmapped: 96657408 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561758208 unmapped: 96657408 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x197bcd000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561766400 unmapped: 96649216 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561766400 unmapped: 96649216 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561766400 unmapped: 96649216 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5428922 data_alloc: 218103808 data_used: 7303168
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561774592 unmapped: 96641024 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561774592 unmapped: 96641024 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561774592 unmapped: 96641024 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x197bcd000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x197bcd000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561774592 unmapped: 96641024 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561774592 unmapped: 96641024 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x197bcd000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5428922 data_alloc: 218103808 data_used: 7303168
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561774592 unmapped: 96641024 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x197bcd000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561774592 unmapped: 96641024 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561774592 unmapped: 96641024 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561782784 unmapped: 96632832 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561782784 unmapped: 96632832 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5428922 data_alloc: 218103808 data_used: 7303168
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561782784 unmapped: 96632832 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561782784 unmapped: 96632832 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x197bcd000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x197bcd000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561782784 unmapped: 96632832 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561782784 unmapped: 96632832 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561782784 unmapped: 96632832 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5428922 data_alloc: 218103808 data_used: 7303168
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561782784 unmapped: 96632832 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561799168 unmapped: 96616448 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561807360 unmapped: 96608256 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x197bcd000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561807360 unmapped: 96608256 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561807360 unmapped: 96608256 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x197bcd000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5428922 data_alloc: 218103808 data_used: 7303168
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561807360 unmapped: 96608256 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561807360 unmapped: 96608256 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x197bcd000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561807360 unmapped: 96608256 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x197bcd000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561807360 unmapped: 96608256 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561815552 unmapped: 96600064 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5428922 data_alloc: 218103808 data_used: 7303168
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561815552 unmapped: 96600064 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561815552 unmapped: 96600064 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561815552 unmapped: 96600064 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x197bcd000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561823744 unmapped: 96591872 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x197bcd000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561823744 unmapped: 96591872 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5428922 data_alloc: 218103808 data_used: 7303168
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x197bcd000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561823744 unmapped: 96591872 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561823744 unmapped: 96591872 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561831936 unmapped: 96583680 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561831936 unmapped: 96583680 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x197bcd000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561840128 unmapped: 96575488 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5428922 data_alloc: 218103808 data_used: 7303168
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561840128 unmapped: 96575488 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561840128 unmapped: 96575488 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561840128 unmapped: 96575488 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x197bcd000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561840128 unmapped: 96575488 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561840128 unmapped: 96575488 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5428922 data_alloc: 218103808 data_used: 7303168
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561848320 unmapped: 96567296 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561848320 unmapped: 96567296 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561848320 unmapped: 96567296 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561848320 unmapped: 96567296 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x197bcd000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561848320 unmapped: 96567296 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5428922 data_alloc: 218103808 data_used: 7303168
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561848320 unmapped: 96567296 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x197bcd000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561848320 unmapped: 96567296 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561848320 unmapped: 96567296 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561856512 unmapped: 96559104 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561856512 unmapped: 96559104 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5428922 data_alloc: 218103808 data_used: 7303168
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561856512 unmapped: 96559104 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x197bcd000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561864704 unmapped: 96550912 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561864704 unmapped: 96550912 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561864704 unmapped: 96550912 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x197bcd000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561864704 unmapped: 96550912 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5428922 data_alloc: 218103808 data_used: 7303168
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561864704 unmapped: 96550912 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x197bcd000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561889280 unmapped: 96526336 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561889280 unmapped: 96526336 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x197bcd000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561889280 unmapped: 96526336 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x197bcd000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561889280 unmapped: 96526336 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5428922 data_alloc: 218103808 data_used: 7303168
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561889280 unmapped: 96526336 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561889280 unmapped: 96526336 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x197bcd000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561889280 unmapped: 96526336 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561889280 unmapped: 96526336 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561897472 unmapped: 96518144 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5428922 data_alloc: 218103808 data_used: 7303168
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561897472 unmapped: 96518144 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561897472 unmapped: 96518144 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x197bcd000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561897472 unmapped: 96518144 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561897472 unmapped: 96518144 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561897472 unmapped: 96518144 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5428922 data_alloc: 218103808 data_used: 7303168
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561897472 unmapped: 96518144 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561897472 unmapped: 96518144 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x197bcd000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561905664 unmapped: 96509952 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561905664 unmapped: 96509952 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561905664 unmapped: 96509952 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x197bcd000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5428922 data_alloc: 218103808 data_used: 7303168
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561905664 unmapped: 96509952 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561905664 unmapped: 96509952 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561905664 unmapped: 96509952 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561905664 unmapped: 96509952 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561905664 unmapped: 96509952 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5428922 data_alloc: 218103808 data_used: 7303168
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561930240 unmapped: 96485376 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x197bcd000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561930240 unmapped: 96485376 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561930240 unmapped: 96485376 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561930240 unmapped: 96485376 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x197bcd000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561930240 unmapped: 96485376 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5428922 data_alloc: 218103808 data_used: 7303168
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561930240 unmapped: 96485376 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561930240 unmapped: 96485376 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561930240 unmapped: 96485376 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x197bcd000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x197bcd000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561938432 unmapped: 96477184 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x197bcd000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561946624 unmapped: 96468992 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5428922 data_alloc: 218103808 data_used: 7303168
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561946624 unmapped: 96468992 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561946624 unmapped: 96468992 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x197bcd000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561946624 unmapped: 96468992 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561946624 unmapped: 96468992 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561946624 unmapped: 96468992 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5428922 data_alloc: 218103808 data_used: 7303168
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x197bcd000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561954816 unmapped: 96460800 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x197bcd000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561954816 unmapped: 96460800 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x197bcd000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x197bcd000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561954816 unmapped: 96460800 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561954816 unmapped: 96460800 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561954816 unmapped: 96460800 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: bluestore.MempoolThread(0x5612cf253b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5428922 data_alloc: 218103808 data_used: 7303168
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561954816 unmapped: 96460800 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: do_command 'config diff' '{prefix=config diff}'
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: do_command 'config show' '{prefix=config show}'
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x197bcd000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561700864 unmapped: 96714752 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: do_command 'counter dump' '{prefix=counter dump}'
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: do_command 'counter schema' '{prefix=counter schema}'
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561840128 unmapped: 96575488 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: osd.2 437 heartbeat osd_stat(store_statfs(0x197bcd000/0x0/0x1bfc00000, data 0x15ea790/0x1821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2680f9c6), peers [0,1] op hist [])
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: prioritycache tune_memory target: 4294967296 mapped: 561889280 unmapped: 96526336 heap: 658415616 old mem: 2845415832 new mem: 2845415832
Dec  6 03:47:01 np0005548731 ceph-osd[80147]: do_command 'log dump' '{prefix=log dump}'
Dec  6 03:47:01 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Dec  6 03:47:01 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2117309582' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Dec  6 03:47:01 np0005548731 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec  6 03:47:01 np0005548731 nova_compute[232433]: 2025-12-06 08:47:01.591 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:47:01 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mgr versions"} v 0) v1
Dec  6 03:47:01 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2316704263' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Dec  6 03:47:01 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:47:01 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:47:01 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:47:01.910 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:47:02 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon stat"} v 0) v1
Dec  6 03:47:02 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2097908437' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Dec  6 03:47:02 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "node ls"} v 0) v1
Dec  6 03:47:02 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2303681952' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Dec  6 03:47:03 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:47:03 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 03:47:03 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:47:03.027 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec  6 03:47:03 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "osd crush class ls"} v 0) v1
Dec  6 03:47:03 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3063384302' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Dec  6 03:47:03 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "osd crush dump"} v 0) v1
Dec  6 03:47:03 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1311921730' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Dec  6 03:47:03 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "log last", "channel": "cephadm", "format": "json-pretty"} v 0) v1
Dec  6 03:47:03 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/4246357108' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Dec  6 03:47:03 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:47:03 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:47:03 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:47:03.913 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:47:03 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "osd crush rule ls"} v 0) v1
Dec  6 03:47:03 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2991099391' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Dec  6 03:47:04 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mgr dump", "format": "json-pretty"} v 0) v1
Dec  6 03:47:04 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1493012721' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Dec  6 03:47:04 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "osd crush show-tunables"} v 0) v1
Dec  6 03:47:04 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/4072605533' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Dec  6 03:47:04 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mgr metadata", "format": "json-pretty"} v 0) v1
Dec  6 03:47:04 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2684433230' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Dec  6 03:47:04 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "osd crush tree", "show_shadow": true} v 0) v1
Dec  6 03:47:04 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2338196638' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Dec  6 03:47:05 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mgr module ls", "format": "json-pretty"} v 0) v1
Dec  6 03:47:05 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/4249903457' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Dec  6 03:47:05 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:47:05 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:47:05 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:47:05.029 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:47:05 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "osd erasure-code-profile ls"} v 0) v1
Dec  6 03:47:05 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2475362331' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Dec  6 03:47:05 np0005548731 nova_compute[232433]: 2025-12-06 08:47:05.174 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:47:05 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mgr services", "format": "json-pretty"} v 0) v1
Dec  6 03:47:05 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3925471560' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Dec  6 03:47:05 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "osd metadata"} v 0) v1
Dec  6 03:47:05 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3960332756' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Dec  6 03:47:05 np0005548731 systemd[1]: Starting Hostname Service...
Dec  6 03:47:05 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "osd utilization"} v 0) v1
Dec  6 03:47:05 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3820400097' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Dec  6 03:47:05 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mgr stat", "format": "json-pretty"} v 0) v1
Dec  6 03:47:05 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1667974750' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Dec  6 03:47:05 np0005548731 systemd[1]: Started Hostname Service.
Dec  6 03:47:05 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:47:05 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:47:05 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:47:05 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:47:05.916 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:47:06 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mgr versions", "format": "json-pretty"} v 0) v1
Dec  6 03:47:06 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/238822674' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Dec  6 03:47:06 np0005548731 nova_compute[232433]: 2025-12-06 08:47:06.593 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:47:07 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:47:07 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:47:07 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:47:07.031 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:47:07 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "quorum_status"} v 0) v1
Dec  6 03:47:07 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/4031061109' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Dec  6 03:47:07 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Dec  6 03:47:07 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Dec  6 03:47:07 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "versions"} v 0) v1
Dec  6 03:47:07 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/396089789' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Dec  6 03:47:07 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Dec  6 03:47:07 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Dec  6 03:47:07 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:47:07 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:47:07 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:47:07.919 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:47:08 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "health", "detail": "detail", "format": "json-pretty"} v 0) v1
Dec  6 03:47:08 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1229308044' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Dec  6 03:47:08 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Dec  6 03:47:08 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Dec  6 03:47:08 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Dec  6 03:47:08 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Dec  6 03:47:08 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "osd tree", "format": "json-pretty"} v 0) v1
Dec  6 03:47:08 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1850885369' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Dec  6 03:47:09 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:47:09 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000024s ======
Dec  6 03:47:09 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:47:09.033 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec  6 03:47:09 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Dec  6 03:47:09 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Dec  6 03:47:09 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Dec  6 03:47:09 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Dec  6 03:47:09 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  6 03:47:09 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4094367856' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  6 03:47:09 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  6 03:47:09 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4094367856' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  6 03:47:09 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "config dump"} v 0) v1
Dec  6 03:47:09 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3519464518' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Dec  6 03:47:09 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:47:09 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.000000000s ======
Dec  6 03:47:09 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.100 - anonymous [06/Dec/2025:08:47:09.922 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  6 03:47:10 np0005548731 nova_compute[232433]: 2025-12-06 08:47:10.176 232437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  6 03:47:10 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "detail": "detail"} v 0) v1
Dec  6 03:47:10 np0005548731 ceph-mon[77458]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2162878835' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Dec  6 03:47:10 np0005548731 ceph-mon[77458]: mon.compute-2@1(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  6 03:47:11 np0005548731 radosgw[81136]: ====== starting new request req=0x7fc5ee4c96f0 =====
Dec  6 03:47:11 np0005548731 radosgw[81136]: ====== req done req=0x7fc5ee4c96f0 op status=0 http_status=200 latency=0.001000023s ======
Dec  6 03:47:11 np0005548731 radosgw[81136]: beast: 0x7fc5ee4c96f0: 192.168.122.102 - anonymous [06/Dec/2025:08:47:11.035 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
